hypothesis of no signal. Consequently, we proceed by considering the next, larger regression tree $T_{m'}$, $m' > m$, in the sequence of nested regression trees. If, when considering the regression tree $T_{m'}$, we find that
\[
\sum_{k=1}^{m'-1} p_{\mathrm{obs},k} > \delta, \tag{8}
\]
then the procedure stops and the previous regression tree $T_m$ is selected as the optimal regression tree. Since we consider an upper bound for the probability (under the null hypothesis) of the event $\cup_{k=1}^{m-1}\{U_{\max,k} > u_{\mathrm{obs},k}\}$, and since we consider upper bounds $p_{\mathrm{obs},k}$ for the probabilities of the events $U_{\max,k} > u_{\mathrm{obs},k}$, we are more likely to stop, i.e. to observe that (8) holds, than in a hypothetical situation where the probability of the event $\cup_{k=1}^{m-1}\{U_{\max,k} > u_{\mathrm{obs},k}\}$ could be computed exactly and compared with the tolerance level $\delta$. Hence, our stopping criterion is conservative, and we must therefore be concerned with the possibility of a too conservative stopping criterion. However, Proposition 1 below shows that under an alternative hypothesis of a sufficiently strong signal, the computable upper bound $p_{\mathrm{obs}}$ for the true $p$-value is very small. Hence, our stopping criterion is not too conservative.

2.2 Change-point detection for a single binary split

The question of whether a candidate binary split should be rejected or not can be phrased as a change-point-detection problem. This observation was already made in [Shih and Tsai, 2004], where the aim was inference-based variable selection. The idea here is to make inference on the squared-error-loss reduction, where a significant loss reduction translates into not rejecting a split, hence continuing the tree-growing process. This approach builds on the change-point analysis of [Yao and Davis, 1986], which uses a scaled version of (6):
\[
U^{(n)}_j := \max_{1 \le r \le n-1} \frac{S - (S_{\le r} + S_{>r})}{S/n}, \qquad U^{(n)}_{\max} := \max_{1 \le j \le d} U^{(n)}_j,
\]
where the dependence of $U^{(n)}_j$ on $j$ is implicit in the ordering of $Y^{(1)}, \ldots, Y^{(n)}$, which determines the sums of squares $S_{\le r}$ and $S_{>r}$, as before.
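As a minimal illustration (our own sketch, not the authors' code; all names are illustrative), the statistics $U^{(n)}_j$ and $U^{(n)}_{\max}$ can be computed with one ordered scan per covariate, maintaining running sums of squares:

```python
# Sketch: compute U_j^{(n)} = max_r [S - (S_{<=r} + S_{>r})]/(S/n) for one
# covariate, and U_max over all covariates. Assumes a non-degenerate
# response (S > 0); names are illustrative, not from the paper.

def sse_prefixes(ys):
    """Welford-style running sums of squares: entry r-1 is the
    sum of squared deviations of ys[:r] around its own mean."""
    out, mean, sse = [], 0.0, 0.0
    for i, y in enumerate(ys, start=1):
        delta = y - mean
        mean += delta / i
        sse += delta * (y - mean)
        out.append(sse)
    return out

def split_statistic(x_col, ys):
    """U_j for a single covariate: scan all n-1 candidate change points."""
    order = sorted(range(len(ys)), key=lambda i: x_col[i])
    y_ord = [ys[i] for i in order]
    n = len(y_ord)
    left = sse_prefixes(y_ord)                 # S_{<=r}
    right = sse_prefixes(y_ord[::-1])[::-1]    # right[r] = SSE of y_ord[r:]
    S = left[-1]                               # total sum of squares
    best = min(left[r - 1] + right[r] for r in range(1, n))
    return (S - best) / (S / n)

def u_max(X_cols, ys):
    """U_max over the covariate dimensions (X_cols is a list of columns)."""
    return max(split_statistic(col, ys) for col in X_cols)
```

For a perfectly separated response, e.g. $x = (1,2,3,4)$ and $y = (0,0,10,10)$, the optimal split removes all residual variation and the statistic attains its maximal value $n$.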
That is, the optimal candidate change point w.r.t. covariate dimension $j$ is expressed in terms of the statistic $U^{(n)}_j$ and is hence identical to a candidate split point.

The test for rejecting a candidate split is based on the null hypothesis saying that observing $X$ gives no information about $Y$. The null hypothesis corresponds to a simple model $N$ for $(Y, X)$.

Definition 1 (Null hypothesis, $H_0$). For model $N$, $Y$ and $X$ are independent and $Y$ is normally distributed: there exist $\mu \in \mathbb{R}$ and $\sigma^2 \in (0, \infty)$ such that
\[
P_N(Y \in \cdot \mid X) = P_N(Y \in \cdot) = N(\mu, \sigma^2).
\]

When considering a nested sequence of binary regression trees, $U^{(n)}_{\max}$ is the random variable whose outcome is the observed test statistic for a single candidate binary split. Under the null hypothesis, the common distribution of the statistics $U^{(n)}_1, \ldots, U^{(n)}_d$ does not depend on $\mu$ and $\sigma$. Hence, under the null hypothesis, the distribution of $U^{(n)}_{\max}$ does not depend on $\mu$ and $\sigma$. Clearly,
\[
P_N\big(U^{(n)}_{\max} > u\big) = P_N\big(\cup_{j=1}^{d}\{U^{(n)}_j > u\}\big) \le d\, P_N\big(U^{(n)}_j > u\big), \tag{9}
\]
which does not depend on $j$ since the probability is evaluated under the null hypothesis. We approximate the tail probability $P_N(U^{(n)}_j > u)$ by $p_n(u)$, where
\[
p_n(u) := 1 - \Phi\left(u^{1/2} - \frac{\ln_3(n) + \ln(2)}{(2\ln_2(n))^{1/2}}\right)^{2\ln(n/2)}, \tag{10}
\]
where $\ln_k(n)$ denotes the $k$ times iterated logarithm, e.g. $\ln_2(n) = \ln(\ln(n))$.

https://arxiv.org/abs/2505.18769v1

The approximation $p_n(u)$ from (10) corresponds to Eq. (2.5) on p. 345 in [Yao and Davis, 1986]. The true $p$-value is the function $u \mapsto P_N(U^{(n)}_{\max} > u)$ evaluated at the observed value of $U^{(n)}_{\max}$. The true $p$-value is approximated from above by
\[
P^{(n)}_{\max} := d\, p_n\big(U^{(n)}_{\max}\big). \tag{11}
\]
We emphasise that, given an observation $u_{\mathrm{obs},k}$ of $U^{(n)}_{\max}$, $p_{\mathrm{obs},k}$ is the observed outcome of $P^{(n)}_{\max}$.

If the true signal is not too weak, meaning that the conditional expectation of $Y$ given $X$ fluctuates sufficiently in size, then for any significance level we want to reject the null hypothesis in a setting with sufficiently large sample size $n$. In order to make the meaning of this statement precise, and in order to verify it, we must consider the alternative hypothesis as a sequence of hypotheses indexed by the sample size $n$. The alternative hypothesis corresponds to a sequence of models $A = (A^{(n)})$.

Definition 2 (Alternative hypothesis, $H_A$). For the sequence of models $(A^{(n)})$ there exist $j \in \{1, \ldots, d\}$, $\xi \in \mathbb{R}$, $t_0 \in (0,1)$, $\sigma^2 \in (0,\infty)$ and $\mu_l, \mu_r \in \mathbb{R}$, $\mu_l \ne \mu_r$, such that for all $n$
\[
P_{A^{(n)}}(X_j \le \xi) = t_0, \qquad P_{A^{(n)}}(Y \in \cdot \mid X_j = x) = N\big(\mu_l 1\{x < \xi\} + \mu_r 1\{x \ge \xi\}, \sigma^2\big),
\]
where $|\mu_r - \mu_l| = \sigma\theta_n > 0$. The sequence $\theta_n$ satisfies
\[
\theta_n = \frac{(2\ln_2(n))^{1/2} + \eta_n}{n^{1/2}\,(t_0(1-t_0))^{1/2}} \tag{12}
\]
for some increasing sequence $\eta_n$ with $\lim_{n\to\infty}\eta_n = \infty$ and $\limsup_{n\to\infty}\theta_n < \infty$.

The requirement under the alternative hypothesis of a shift in mean of size $\sigma\theta_n$ says that the amplitude of the signal is allowed to decrease towards zero with $n$, but not too fast. We could consider $\theta_n = n^{-r}$ for some $r < 1/2$. We may also consider a constant signal amplitude $\theta$. However, that situation is not very interesting, since such a signal should eventually be easily detectable as the sample size $n$ becomes very large. The expression for $\theta_n$ in (12) comes from [Yao and Davis, 1986] (Eq. (3.2) on p. 347) and corresponds to an at least slightly stronger signal compared to what was considered in [Yao and Davis, 1986] ($\eta_n \to \infty$ instead of $\eta_n = \eta + o(1)$).
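For concreteness, the approximation (10) and the Bonferroni bound (11) can be evaluated in a few lines; this is our own hedged sketch of the formulas as reconstructed above, not the authors' implementation:

```python
# Sketch of Eq. (10) and the Bonferroni bound (11). Requires n > e^e so
# that ln_3(n) is defined; function names are ours.
from math import log, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal cdf

def p_n(u, n):
    """Approximate tail probability P(U_j^{(n)} > u) under H0, Eq. (10)."""
    ln2 = log(log(n))                      # ln_2(n)
    ln3 = log(ln2)                         # ln_3(n)
    a_n = (ln3 + log(2.0)) / sqrt(2.0 * ln2)
    return 1.0 - Phi(sqrt(u) - a_n) ** (2.0 * log(n / 2.0))

def p_max_bound(u_max_obs, n, d):
    """Upper bound P_max^{(n)} = d * p_n(U_max) of Eq. (11)."""
    return d * p_n(u_max_obs, n)
```

As a sanity check against Table 1 below, for $n = 50$ and $d = 10$ the approximate 0.95-quantile 14.23 indeed gives $d\,p_n(14.23) \approx 0.05$.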
We want to show that, under the alternative hypothesis, we reject the null hypothesis with probability tending to one. The null hypothesis is not rejected at significance level $\varepsilon > 0$ if $P^{(n)}_{\max} > \varepsilon$. We want to show that under the alternative hypothesis the probability of falsely not rejecting the null hypothesis is very small. More precisely, we show the following:

Proposition 1. $\lim_{n\to\infty} P_{A^{(n)}}\big(P^{(n)}_{\max} > \varepsilon\big) = 0$ for every $\varepsilon > 0$.

The proof of Proposition 1 is given in the Appendix. To conclude, using the $p$-value approximation (11) results in

(i) a conservative stopping rule, given that the null hypothesis $H_0$ of no signal is true: the tree-growing process will not be stopped too early, because we use a Bonferroni upper bound;

(ii) a not too weak signal being detected with high probability, given a sufficient sample size: given that the alternative hypothesis $H_A$ is true, the signal will be detected as the sample size tends to infinity.

3 Relation to classical regularisation techniques

The focus of this section is on a single binary split. Let $T_2(X)$ denote an optimal binary-split CART tree with a single split, and let $T_1(X)$ denote the root tree of $T_2(X)$.
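Summarising the procedure of Section 2 before turning to regularisation, the sequential stopping rule (8) reduces to accumulating the observed $p$-value bounds along the nested sequence. The following sketch is our own, with illustrative names:

```python
# Sketch of the stopping criterion (8): grow along T_1, T_2, ..., adding
# the p-value bound p_obs[k] of the k-th accepted split, and stop as soon
# as the cumulative sum exceeds delta.
def select_tree(p_obs, delta=0.05):
    """p_obs[k] is the bound for the split that creates tree T_{k+2};
    returns the (1-based) index m of the selected tree T_m."""
    total, m = 0.0, 1          # start at the root tree T_1
    for p in p_obs:
        total += p
        if total > delta:      # criterion (8) holds: keep the previous tree
            break
        m += 1                 # split accepted: move to the larger tree
    return m
```

With the cumulative values reported in Section 4.2 (0, 0.0001, 0.0003, then a jump past 1) and $\delta = 0.05$, the rule stops right after the third accepted split.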
Based on the notation in Section 2, the split is accepted at significance level $\varepsilon$ if
\[
U^{(n)}_{\max} = \frac{S - (S_{\le r^*} + S_{> r^*})}{S/n} > u_\varepsilon, \tag{13}
\]
where "$*$" indicates that we consider the optimal split, and where $u_\varepsilon$ is the solution to
\[
d\, p_n(u_\varepsilon) = \varepsilon, \tag{14}
\]
where $p_n(u)$ is from (10). An equivalent rephrasing of (13) is
\[
\mathrm{MSE}_1 - \mathrm{MSE}_2 - u_\varepsilon \widehat{\sigma}^2 > 0, \tag{15}
\]
where $\mathrm{MSE}_1 := S$ and $\mathrm{MSE}_2 := S_{\le r^*} + S_{> r^*}$, together with $\widehat{\sigma}^2 := S/n$. A natural question, which is partially answered below, is how $u_\varepsilon$ depends on $n$ for a fixed significance level $\varepsilon$.

Proposition 2. $u_\varepsilon$ solving (14) satisfies $u_\varepsilon = o(\ln_2(n))$ as $n \to \infty$.

The proof of Proposition 2 is given in the Appendix. From (15) it is seen that $u_\varepsilon$ can be thought of as a regularisation term (or penalty), and from Proposition 2 it is seen that this term behaves almost like a constant. We continue with a short comparison with other techniques that can be used to decide on accepting a split or not.

3.1 Cost-complexity pruning

Cost-complexity pruning was introduced in [Breiman et al., 1984] and is described in terms of the so-called "cost" w.r.t. a split tolerance $\vartheta$, denoted $R_\vartheta(T)$ and defined as
\[
R_\vartheta(T) := R(T) + \vartheta\, |T|, \tag{16}
\]
where, in our setting, $R(T) = \sum_{i=1}^{n}\big(Y^{(i)} - T(X^{(i)})\big)^2$ (other loss functions may be considered). The parameter $\vartheta$ is also referred to as the "cost-complexity" parameter. Note that the critical $\vartheta$ value needed in order to accept $T_2(X)$ in favour of $T_1(X)$ is the threshold value $\vartheta$ for which the so-called "gain" $R_\vartheta(T_1) - R_\vartheta(T_2)$ is $0$, which gives
\[
R_\vartheta(T_1) - R_\vartheta(T_2) = R(T_1) - R(T_2) + \vartheta\big(|T_1| - |T_2|\big) = \mathrm{MSE}_1 - \mathrm{MSE}_2 - \vartheta = 0,
\]
or, equivalently, the split is accepted if $\mathrm{MSE}_1 - \mathrm{MSE}_2 - \vartheta > 0$. The choice of $\vartheta$ used in applications is typically based on out-of-sample performance using, e.g., cross-validation; recall also the discussion in relation to (5) above. Using the specific choice $\vartheta := u_\varepsilon \widehat{\sigma}^2$ is equivalent to using the $p$-value based penalty from (15).
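Under the reconstruction of (10) above, the critical value $u_\varepsilon$ in (14) can be written in closed form by inverting $p_n$ (cf. (21) in the Appendix), which also yields the acceptance test (15). A hedged sketch with our own naming:

```python
# Sketch: invert d * p_n(u_eps) = eps in closed form and apply the
# acceptance criterion (15); an assumption-laden illustration, not the
# authors' code.
from math import log, sqrt
from statistics import NormalDist

def u_eps(eps, n, d):
    """Critical value solving (14) under the reconstruction of (10)."""
    ln2 = log(log(n))
    a_n = (log(ln2) + log(2.0)) / sqrt(2.0 * ln2)
    q = (1.0 - eps / d) ** (1.0 / (2.0 * log(n / 2.0)))
    return (a_n + NormalDist().inv_cdf(q)) ** 2

def accept_split(mse1, mse2, sigma2_hat, eps, n, d):
    """Criterion (15): accept iff MSE_1 - MSE_2 - u_eps * sigma_hat^2 > 0."""
    return mse1 - mse2 - u_eps(eps, n, d) * sigma2_hat > 0.0
```

The values $u_{0.05} \approx 14.23$ for $(n,d) = (50,10)$ and $\approx 16.31$ for $(n,d) = (1000,10)$ match the right-hand table of Table 1.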
Note that this equivalence only applies to the question of whether one should accept a single split or not, whereas, as mentioned above, cost-complexity pruning is a procedure that evaluates entire subtrees.

3.2 Covariance penalty and information criteria

Another alternative is to assess a candidate split based on its predictive performance using the mean squared error of prediction (MSEP), conditioning on the observed covariate values. For linear Gaussian models this corresponds to using Mallows' $C_p$, where $p$ is the number of regression parameters, see e.g. [Mallows, 1973]; this is an example of an estimate of the prediction error using a covariance-based penalty, see e.g. [Efron, 2004]. The $C_p$ statistic can be expressed as
\[
C_p := \frac{1}{n}\big(\mathrm{MSE}_p + 2p\widehat{\sigma}^2\big),
\]
which is the formulation used in [Hastie et al., 2009, Ch. 7.5, Eq. (7.26)]. Consequently, since a binary single-split $L_2$ regression tree with predetermined split point can be interpreted as fitting a Gaussian model with a single binary covariate, $C_p$ can in this situation be used to evaluate predictive performance. Considering the $C_p$ improvement when going from no split, i.e. $p = 1$, to one split, $p = 2$, corresponds to $C_1 - C_2 > 0$, which is equivalent to
\[
\mathrm{MSE}_1 - \mathrm{MSE}_2 - 2\widehat{\sigma}^2 > 0.
\]
Thus, using Mallows' $C_p$, targeting the predictive performance of the estimator, will be asymptotically too liberal compared to the $p$-value based stopping rule. This, however, should not be too surprising, since the above application of the $C_p$ statistic does not take into account that the candidate split point has been chosen by minimising an $L_2$ loss.

For a $p$-parameter Gaussian model the $C_p$ statistic coincides with the Akaike information criterion (AIC), see e.g. [Hastie et al., 2009, Ch. 7.5, Eq. (7.29)]. For a $p$-parameter Gaussian model, the Bayesian information criterion (BIC) considers the quantity
\[
\mathrm{BIC}_p := \frac{n}{\sigma^2}\big(\mathrm{MSE}_p + \ln(n)\, p\, \sigma^2\big),
\]
see e.g. [Hastie et al., 2009, Ch. 7.7, Eq. (7.36)], as the basis for model selection. In practice $\sigma^2$ is replaced by a suitable estimator $\widehat{\sigma}^2$; see, e.g., the discussion in the paragraph following [Hastie et al., 2009, Ch. 7.7, Eq. (7.36)]. Hence, accepting a split based on BIC improvement in a single split corresponds to $\mathrm{BIC}_1 - \mathrm{BIC}_2 > 0$, which is equivalent to
\[
\mathrm{MSE}_1 - \mathrm{MSE}_2 - \ln(n)\widehat{\sigma}^2 > 0.
\]
Thus, using BIC as a stopping criterion is asymptotically more conservative than the $p$-value based stopping criterion, despite not taking into account that the split point is the result of an optimisation procedure.

4 Numerical illustrations

4.1 The $p$-value approximation for a single split

In this section we investigate the error from applying the two approximations in (9) and (10), which together provide the $p$-value approximation used to test for signal. Since we do not have access to the true distribution of $U_{\max}$ under $H_0$, we compute its empirical distribution from 10,000 realisations in order to compare with the approximations.

Figure 1 shows the approximated and true cdfs for varying sample size and covariate dependence. Here, the covariate dimension is set to $d = 10$. Table 1 compares the approximated and true critical quantile values at the 0.95 level for varying sample size, covariate dimension and covariate dependence. Note that for $d = 1$ varying dependence is not an issue, so the entries of the first two tables are identical.
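The three single-split rules discussed above differ only in the penalty multiplying $\widehat{\sigma}^2$ in $\mathrm{MSE}_1 - \mathrm{MSE}_2 - \text{penalty}\cdot\widehat{\sigma}^2 > 0$: Mallows' $C_p$ uses 2, BIC uses $\ln(n)$, and the $p$-value rule uses $u_\varepsilon$. A small comparison sketch (our own; $u_\varepsilon$ is recomputed under the same reconstruction of (10)):

```python
# Compare the penalty terms of C_p, BIC and the p-value rule for a
# single split; all quantities are in units of sigma_hat^2.
from math import log, sqrt
from statistics import NormalDist

def u_eps(eps, n, d):
    ln2 = log(log(n))
    a_n = (log(ln2) + log(2.0)) / sqrt(2.0 * ln2)
    q = (1.0 - eps / d) ** (1.0 / (2.0 * log(n / 2.0)))
    return (a_n + NormalDist().inv_cdf(q)) ** 2

def penalties(n, d, eps=0.05):
    return {"Cp": 2.0, "BIC": log(n), "p-value": u_eps(eps, n, d)}
```

For $n = 1000$, $d = 10$ this gives roughly $2 < 6.9 < 16.3$, i.e. at this sample size the $p$-value rule is the most conservative of the three; Proposition 2 concerns the growth of $u_\varepsilon$ as $n \to \infty$.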
As noted in [Yao and Davis, 1986, Remark 2.3], the approximation (10) yields satisfactory results even for small sample sizes $20 \le n \le 50$. This is confirmed by the first row of Table 1. The second row of Figure 1, as well as the middle part of Table 1, shows that a strong positive pairwise correlation of $\rho = 0.8$ between covariates does not substantially affect the upper tail of the distribution of $U_{\max}$ under $H_0$, and that the quantile approximations provide good upper bounds.

Figure 1: Blue curves: empirical cdf of $U_{\max}$ given $H_0$ computed from 10,000 realisations. Orange curves: approximation $1 - d\,p_n(u)$. Left column: $n = 50$; right column: $n = 1000$. Top row: independent standard normal covariates; bottom row: dependent normal covariates with common pairwise correlation $\rho = 0.8$ and unit variance. The points of intersection with the dashed blue line illustrate the empirical and approximate 0.95-quantiles of $U_{\max}$.

           empirical (indep.)     empirical (rho = 0.8)   approximation (10)
           n = 50    n = 1000     n = 50    n = 1000      n = 50    n = 1000
 d = 1      8.55      10.78        8.55      10.78          9.12      11.09
 d = 2      9.79      12.10        9.62      12.00         10.67      12.68
 d = 10    12.46      15.51       11.94      14.84         14.23      16.31

Table 1: Left table: 0.95-level quantiles based on the empirical cdf of $U_{\max}$ given $H_0$, computed from 10,000 realisations for independent standard normal covariates. Middle table: the analogous quantiles for dependent normal covariates with a common pairwise correlation of $\rho = 0.8$ and standard variances. Right table: quantile approximation corresponding to (10).

We now turn to assuming that the alternative hypothesis $H_A$ according to Definition 2 holds. In order to illustrate Proposition 1, we pick $\varepsilon = 0.05$, $\sigma^2 = 1$, $j = 1$, $\xi = 0$, $t_0 = 1/2$, $\mu_l = 0$ and $\mu_r = n^{-1/5}$. Note that the step size is chosen to decrease slowly enough towards zero in order to fulfil the assumptions of $H_A$ in Definition 2. In Figure 2 we plot the fraction of correct signal detections from 1000 realisations of the event $\{U^{(n)}_{\max} > u_\varepsilon\}$, where $u_\varepsilon$ is given in (14). We run the simulations for an increasing number of data points $n$. Figure 2 confirms the finding of Proposition 1 that the probability of detecting a slowly decreasing signal converges to one as $n$ tends to infinity. It can also be noted that the upper tail of $U^{(n)}_{\max}$ is not much affected by introducing dependence between the covariates, as the orange and blue curves in the right plot of Figure 2 differ little.

4.2 Simulated examples from Neufeld et al.

In this section we fix a simple tree and then generate residuals around its level values in order to illustrate the detection performance of our method. We consider the following example as proposed by [Neufeld et al., 2022, Section 5]. Consider independent standard normal covariates and a regression function given by
\[
\mu(x) = b\,1\{x_1 \le 0\}\big(1 + a\,1\{x_2 > 0\} + 1\{x_2 x_3 > 0\}\big), \tag{17}
\]
for $x \in \mathbb{R}^{10}$ and parameters $a, b \in \mathbb{R}$ determining the step size between the level values (signal strength). The step size between siblings at level two is

Figure 2: Blue curves: fraction of correct signal detections according to $\{U^{(n)}_{\max} > u_\varepsilon\}$ for an increasing number of data points $n$ and independent standard normal covariates.
Orange curve: the analogous fraction based on dependent multivariate normal covariates with common pairwise correlation $\rho = 0.8$ and unit variance. Green curve: the signal strength $|\mu_r - \mu_l| = n^{-1/5}$. The blue dashed line shows the 0.95 level. The left and right plots correspond to $d = 1$ and $d = 10$ covariates, respectively.

$ab$, while the step size between siblings at level three is $b$. An illustration of the tree corresponding to (17) is given in Figure 3. We generate 500 iid covariate vectors $X_1, \ldots, X_{500}$ from $N(0, I_{10})$ and corresponding response variables $Y_1, \ldots, Y_{500}$, where, given $X_i$, $Y_i$ is drawn from $N(\mu(X_i), 1)$.

Figure 3: Regression tree corresponding to (17) with $a = 1$, adopted from [Neufeld et al., 2022, Section 5]. Each left child answers the inequality with "true".

Using the Python package sklearn.tree.DecisionTreeRegressor, we grow a full CART tree of maximal depth 4 with a minimal number of data points per leaf set to 20. For each tree in the nested sequence of cost-complexity-pruned subtrees (from the root to the fully grown CART tree), we compute the in-sample error (MSE) and out-of-sample error (MSEP), where the latter is done using independently generated test data of the same size $n = 500$, which was neither used to fit the CART tree nor to compute $p$-values, but serves only as a data set for pure out-of-sample testing.

Figure 4: Left plot: MSEP (blue) and MSE (orange) for each tree in the nested sequence of cost-complexity-pruned subtrees. Right plot: cumulative $p$-value for each tree in the nested sequence of cost-complexity-pruned subtrees. The x-axis depicts the number of leaves of the subtree considered. The dashed blue line marks our method's output tree, i.e. the largest subtree whose cumulative $p$-value lies below $\delta = 0.05$. The signal strength parameters are $a = b = 1$.

In the example of Figure 4, the proposed method detects the correct complexity of $\mu$, which is given by 5 leaves and which minimises MSEP. The cumulative $p$-values of all smaller subtrees are very close to zero ($0$, $0.0001$, $0.0003$), while jumping sharply to $1.08$ after the first "unnecessary" split (cf. Figure 6). The results in this example are hence not sensitive to the choice of the tolerance parameter $\delta$. Note that individual $p$-values may exceed one due to the approximation (11). Comparing Figure 3 with the upper tree of Figure 6, we note that the split points and mean values are also accurate.

We repeat the simulation for a decreased signal parameter $b = 0.5$, while keeping $a = 1$, $\sigma^2 = 1$ and $n = 500$. As can be observed in Figure 5 and the bottom tree of Figure 6, the method already stops after one split, not capable of detecting the weak signal in the lower part of the tree. However, it regularises well in the sense that the MSEPs are close to minimal. Even though the sample size $n = 500$ is chosen rather small, the results of Figures 4 and 5 do not vary much between runs with different random seeds for the training and validation data generation.

Figure 5: Analogue of Figure 4 with $b = 0.5$ instead of $b = 1$.
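The simulation design around (17) can be reproduced with a few lines; the sketch below (our own, using only the standard library) encodes the regression function of Figure 3 and the data-generating mechanism:

```python
# Sketch of the simulation from [Neufeld et al., 2022] as described in
# the text: mu() encodes (17), simulate() draws (X_i, Y_i) pairs.
import random

def mu(x, a=1.0, b=1.0):
    """Regression function (17); x needs at least three coordinates."""
    if x[0] > 0:                       # right child of the root: level 0
        return 0.0
    return b * (1.0 + a * (x[1] > 0) + (x[1] * x[2] > 0))

def simulate(n=500, d=10, a=1.0, b=1.0, sigma=1.0, seed=0):
    rng = random.Random(seed)
    X = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
    Y = [mu(x, a, b) + rng.gauss(0.0, sigma) for x in X]
    return X, Y
```

With $a = b = 1$ the leaf values are $0$, $b$, $2b$, $2b$ and $3b$, matching the tree of Figure 3.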
Moreover, from Figure 2 we can observe that a larger number of data points, around $n = 2500$, would ensure (with 95 percent probability) the detection of an even lower signal, $0.21 < b = 0.5$, in each split of the tree. We conclude that $n = 500$ is insufficient in this example with $b = 0.5$.

4.2.1 Illustrating the randomness of tree construction using cross-validation

Above we mention the drawback of training trees using cross-validation, namely that the resulting tree depends on the randomness inherent in the cross-validation procedure. In this section we illustrate this fact for CART trees. We generate data according to the model from [Neufeld et al., 2022], as presented in Section 4.2, with parameters $a = 1$, $b = 1$ and $\sigma^2 = 1$. Here we consider sample size $n = 1000$ (rather than the $n = 500$ considered in Section 4.2). We split the data into an 80% training set and a 20% test set. The CART tree is trained using 5-fold cross-validation on the training set, which entails optimally choosing a cost-complexity parameter $\vartheta$. An optimal CART tree is then trained on the complete training set using the cost-complexity parameter $\vartheta$. Finally, the trained model is evaluated on the test set. This procedure is repeated 500 times, allowing us to estimate RMSE values empirically. It turns out that throughout the 500 iterations of the procedure, only two distinct trees are selected by the cross-validation procedure: either a tree with two leaves or a tree made up of only
the root node. Since cross-validation results in a non-deterministic $\vartheta$, we realise two distinct $\vartheta$ values corresponding to two distinct trees in the sequence of cost-complexity-pruned trees.

Figure 6: Regularised output trees for $b = 1$ (top) and $b = 0.5$ (bottom). First row of each node: split point selected by CART. Second row: mean value. Third row: node $p$-value. Fourth row: cumulative $p$-value of the smallest subtree in which the node appears as a non-leaf. Nodes shaded red violate the condition that the cumulative $p$-value lies below 0.05.

We evaluate our model on the same dataset with identical CART-tree parameters and significance levels $\delta = 0.1, 0.05, 0.01$, and find that our method attains an even lower RMSE for all three choices of $\delta$. The results can be seen in Table 2. Further, the shapes of the estimated trees can be seen in Figure 7.

 cost-complexity parameter       RMSE    number of leaves
 0.000                           1.045   7
 0.008                           1.012   5
 0.072                           1.046   4
 0.075                           1.062   3
 0.113                           1.144   2
 1.016                           1.537   1

Table 2: Evaluation of trees in the sequence of cost-complexity-pruned trees.

4.3 An application to $L_2$-boosting

In this section we illustrate how our proposed method performs when it is used as a weak learner in a standard $L_2$-boosting setting applied to the datasets California Housing and beMTPL16 from [Dutang and Charpentier, 2024]. Throughout these illustrations we compare the $L_2$-boosting version of our method to the Gradient Boosting Machine (GBM) with identical configurations. For both methods, we split the data into an 80% training set and a 20% test set. We train the models on the same training set and evaluate them on the same test set. We fix the max depth of the weak learners to 3, i.e.
a tree with at most 8 leaves can be added in a single iteration; the minimum number of samples per leaf is set to 20, and the learning rate for the boosting procedures is set to 0.1.

In each boosting iteration, we use the residuals from the previous iteration as the working response. In each boosting iteration we determine a nested sequence of trees (as described above), and the weak learner is selected as the maximally split tree that satisfies the criterion $\sum p_j < \delta$ for the chosen significance level $\delta$. We stop the boosting procedure when the candidate weak learner is the root node, i.e., when no statistically significant split can be made. Note that the complexity of the weak learner for our method is dynamic, determined by the criterion $\sum p_j < \delta$.

The California Housing dataset consists of $n = 20640$ data points, and the number of covariates is $d = 8$. The beMTPL16 dataset consists of $n = 70791$ data points, and the number of covariates is $d = 6$. In Figure 8 we see how our method compares to the GBM when applied to the two datasets for varying

(a) CV method: RMSE = 1.537. (b) CV method: RMSE = 1.144. (c) $p$-value method: RMSE = 1.012.

Figure 7: Regression trees corresponding to different cost-complexity parameters related to the test of cross-validation randomness; Panels (a) and (b) show the trees obtained using
CV, Panel (c) shows the tree obtained using the $p$-value method. Leaf values correspond to mean values. All RMSE values can be found in Table 2.

levels of $\delta$. The value $\delta = \infty$ gives a boosted-trees procedure similar to the ABT machine from [Huyghe et al., 2024] in the case of $L_2$-boosting. It should be noted that the GBM stopping criterion implies, for the California Housing dataset, that it is trained for approximately 2500 iterations before stopping. One could consider tuning the shrinkage parameter in order to adjust the number of boosting steps, but this has not been investigated further in the present paper. It can be seen from Figure 8 that the number of iterations for the $p$-value based method is not necessarily monotone in $\delta$. However, this is not contradictory, since different values of $\delta$ will result in the trees added in each iteration having rather different complexity. We find that the $p$-value based stopping criterion for the weak learner in $L_2$-boosting generates promising results and should be investigated further, including comparisons with, e.g., the ABT machine from [Huyghe et al., 2024].

Figure 8: RMSE on test data (y-axis) as a function of the number of boosting iterations (x-axis). The left plot corresponds to the California Housing dataset, the right plot to the beMTPL16 dataset. The blue curve corresponds to our method using $\delta = \infty$, the orange curve to $\delta = 0.10$, the green curve to $\delta = 0.05$, the red curve to $\delta = 0.01$, and the purple curve corresponds to the GBM. The vertical dashed lines correspond to where the iterations stop and the horizontal dashed lines correspond to the lowest RMSE achieved for the respective methods.

References

[Breiman et al., 1984] Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984). Classification and Regression Trees. Wadsworth, Belmont, Calif.

[Dutang and Charpentier, 2024] Dutang, C. and Charpentier, A. (2024).
CASdatasets: Insurance datasets. R package version 1.2-0, DOI 10.57745/P0KHAG.

[Efron, 2004] Efron, B. (2004). The estimation of prediction error: covariance penalties and cross-validation. Journal of the American Statistical Association, 99(467):619–632.

[Hastie et al., 2009] Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York, NY, second edition.

[Hothorn et al., 2006] Hothorn, T., Hornik, K., and Zeileis, A. (2006). Unbiased recursive partitioning: A conditional inference framework. Journal of Computational and Graphical Statistics, 15(3):651–674.

[Huyghe et al., 2024] Huyghe, J., Trufin, J., and Denuit, M. (2024). Boosting cost-complexity pruned trees on Tweedie responses: the ABT machine for insurance ratemaking. Scandinavian Actuarial Journal, 2024(5):417–439.

[Mallows, 1973] Mallows, C. (1973). Some comments on $C_p$. Technometrics, 15(4):661–675.

[Neufeld et al., 2022] Neufeld, A. C., Gao, L. L., and Witten, D. M. (2022). Tree-values: selective inference for regression trees. The Journal of Machine Learning Research, 23(1):13759–13801.

[Shih and Tsai, 2004] Shih, Y.-S. and Tsai, H.-W. (2004). Variable selection bias in regression trees with constant fits. Computational Statistics & Data Analysis, 45(3):595–607.

[Yao and Davis, 1986] Yao, Y.-C. and Davis, R. A. (1986). The asymptotic behavior of the likelihood ratio statistic for testing a shift in mean in a sequence of independent normal
variates. Sankhyā: The Indian Journal of Statistics, Series A, pages 339–353.

A Proofs

A.1 Proof of Proposition 1

Before starting the proof of Proposition 1 we note the following:

Remark 3. The distribution of the observed test statistic $U^{(n)}_{\max}$ does not depend on $\sigma$ under the alternative hypothesis. Under the alternative hypothesis, for any $r \in \{1, \ldots, n\}$ and $b \in \{1, \ldots, r\}$ such that $X^{(i)}_j < \xi$ for $i \le b$ and $X^{(i)}_j \ge \xi$ for $i > b$, we may write
\[
Y^{(i)} = Z^{(i)} + \begin{cases} \mu_l, & i = 1, \ldots, b, \\ \mu_r, & i = b+1, \ldots, n, \end{cases}
\]
where $Z^{(1)}, \ldots, Z^{(n)}$ are independent and $N(0, \sigma^2)$ distributed. Then $\overline{Y}_{\le r} = \overline{Z}_{\le r} + (b\mu_l + (r-b)\mu_r)/r$ and
\[
Y^{(i)} - \overline{Y}_{\le r} = Z^{(i)} - \overline{Z}_{\le r} + \begin{cases} (\mu_l - \mu_r)(r-b)/r, & i = 1, \ldots, b, \\ (\mu_r - \mu_l)\,b/r, & i = b+1, \ldots, r. \end{cases}
\]
Hence, $Y^{(i)} - \overline{Y}_{\le r}$ equals $\sigma$ times a random variable whose distribution does not depend on $\sigma$. This also holds for $Y^{(i)} - \overline{Y}_{> r}$. We conclude that the distribution of $U^{(n)}_j$ does not depend on $\sigma$ under the alternative hypothesis.

Remark 4. By construction,
\[
U^{(n)}_j \frac{S/n}{\sigma^2} = \max_{1 \le r \le n-1} \frac{1}{\sigma^2}\big(S - S_{\le r} - S_{>r}\big). \tag{18}
\]
Under the alternative hypothesis, by [Yao and Davis, 1986], p. 347,
\[
\max_{1 \le r \le n-1} \frac{1}{\sigma^2}\big(S - S_{\le r} - S_{>r}\big) \stackrel{d}{=} \max_{1 \le nt \le n-1} \frac{(W^0(t) - f_n(t))^2}{t(1-t)},
\]
where $W^0$ is a standard Brownian bridge and
\[
f_n(t) = \begin{cases} n^{1/2}\theta_n\, t\,(1 - [nt_0]/n), & \text{if } nt \le [nt_0], \\ n^{1/2}\theta_n\,(1-t)\,[nt_0]/n, & \text{if } nt > [nt_0]. \end{cases}
\]

Proof of Proposition 1. Since $p_n$ is a decreasing function, we know that $P^{(n)}_{\max} \le d\, p_n(U^{(n)}_j)$ for every $j$, in particular for the $j$ for which there is signal with amplitude $\sigma\theta_n$ according to the model $A^{(n)}$. Hence,
\[
P_{A^{(n)}}\big(P^{(n)}_{\max} > \varepsilon\big) \le P_{A^{(n)}}\big(p_n(U^{(n)}_j) > \varepsilon/d\big) = 1 - P_{A^{(n)}}\big(U^{(n)}_j > p_n^{-1}(\varepsilon/d)\big).
\]
Let $T^2_n$ denote the quantity $U^{(n)}_j (S/n)/\sigma^2$ in (18). Then
\[
P_{A^{(n)}}\big(U^{(n)}_j > p_n^{-1}(\varepsilon/d)\big) = P_{A^{(n)}}\left(T^2_n\, \frac{c^2_n}{p_n^{-1}(\varepsilon/d)}\, \frac{\sigma^2}{S/n} > c^2_n\right)
\]
for any positive sequence $(c^2_n)$. We consider the choice of sequence
\[
c^2_n = \left(\frac{2^{-1}\ln_3(n) - \ln\big(2^{-1}\pi^{1/2}\ln((1-\alpha)^{-1})\big)}{(2\ln_2(n))^{1/2}} + (2\ln_2(n))^{1/2}\right)^2 \tag{19}
\]
in order to relate the tail probability $P_{A^{(n)}}(U^{(n)}_j > p_n^{-1}(\varepsilon/d))$ to the tail probability $P_{A^{(n)}}(T^2_n > c^2_n)$ studied by [Yao and Davis, 1986]. By Lemma 5,
\[
\liminf_{n\to\infty} P_{A^{(n)}}\big(U^{(n)}_j > p_n^{-1}(\varepsilon/d)\big) \ge \liminf_{n\to\infty} P_{A^{(n)}}\big(T^2_n > c^2_n\big).
\]
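Remark 3 can be checked numerically: the statistic in (18) is invariant under rescaling $Y \mapsto cY$, since numerator and denominator both scale by $c^2$. A brute-force sketch (our own, illustrative code, not the paper's):

```python
# Brute-force version of the split statistic and a scale-invariance
# check in the spirit of Remark 3.
def u_stat(ys):
    n = len(ys)
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((y - m) ** 2 for y in seg)
    S = sse(ys)
    best = min(sse(ys[:r]) + sse(ys[r:]) for r in range(1, n))
    return (S - best) / (S / n)

ys = [0.3, -1.2, 0.7, 2.5, 3.1, 2.2]
# rescaling the responses leaves the statistic unchanged
assert abs(u_stat(ys) - u_stat([5.0 * y for y in ys])) < 1e-9
```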
For any $\eta \in \mathbb{R}$, by Lemma 6,
\[
\liminf_{n\to\infty} P_{A^{(n)}}\big(T^2_n > c^2_n\big) \ge \alpha + \Phi(\eta)(1-\alpha).
\]
Hence, for any $\eta \in \mathbb{R}$,
\[
\limsup_{n\to\infty} P_{A^{(n)}}\big(P^{(n)}_{\max} > \varepsilon\big) \le \limsup_{n\to\infty}\big(1 - P_{A^{(n)}}(T^2_n > c^2_n)\big) \le 1 - \alpha - \Phi(\eta)(1-\alpha).
\]
Since we may choose $\eta$ arbitrarily large, the proof is complete.

Lemma 5. $\liminf_{n\to\infty} P_{A^{(n)}}\big(U^{(n)}_j > p_n^{-1}(\varepsilon/d)\big) \ge \liminf_{n\to\infty} P_{A^{(n)}}\big(T^2_n > c^2_n\big)$.

Proof. Let
\[
F_n := \frac{c^2_n}{p_n^{-1}(\varepsilon/d)}\, \frac{\sigma^2}{S/n}
\]
and note that
\[
P_{A^{(n)}}\big(U^{(n)}_j > p_n^{-1}(\varepsilon/d)\big) = P_{A^{(n)}}\big(T^2_n F_n > c^2_n\big) \ge P_{A^{(n)}}\big(T^2_n > c^2_n,\, F_n \ge 1\big) \ge P_{A^{(n)}}\big(T^2_n > c^2_n\big) - P_{A^{(n)}}(F_n < 1).
\]
We will show that $\lim_{n\to\infty} P_{A^{(n)}}(F_n < 1) = 0$, from which the conclusion follows. By Lemma 7,
\[
\lim_{n\to\infty} p_n^{-1}(\varepsilon/d)/c^2_n = 0. \tag{20}
\]
Under $A^{(n)}$ there exist independent $Z^{(i)} \sim N(0, \sigma^2)$ and $r \in \{1, \ldots, n\}$ such that $Y^{(i)} = Z^{(i)}$ for $i \le r$, and $Y^{(i)} = Z^{(i)} + \sigma\theta_n$ for $i > r$. Therefore,
\[
S = \sum_{i=1}^{r}\big(Y^{(i)} - \overline{Y}_{\le n}\big)^2 + \sum_{i=r+1}^{n}\big(Y^{(i)} - \overline{Y}_{\le n}\big)^2
= \sum_{i=1}^{n}\big(Z^{(i)} - \overline{Z}_{\le n}\big)^2 + \sigma^2\theta^2_n\left(r\Big(\frac{n-r}{n}\Big)^2 + (n-r)\Big(\frac{r}{n}\Big)^2\right) - 2\sum_{i=1}^{r}\big(Z^{(i)} - \overline{Z}_{\le n}\big)\,\sigma\theta_n\frac{n-r}{n} + 2\sum_{i=r+1}^{n}\big(Z^{(i)} - \overline{Z}_{\le n}\big)\,\sigma\theta_n\frac{r}{n}.
\]
Therefore, by Hölder's inequality applied to the sum of the last two terms above,
\[
\frac{S}{n} \le \frac{1}{n}\sum_{i=1}^{n}\big(Z^{(i)} - \overline{Z}_{\le n}\big)^2 + \sigma^2\theta^2_n + 2\left(\frac{1}{n}\sum_{i=1}^{n}\big(Z^{(i)} - \overline{Z}_{\le n}\big)^2\right)^{1/2}\sigma\theta_n
= \left(\Big(\frac{1}{n}\sum_{i=1}^{n}\big(Z^{(i)} - \overline{Z}_{\le n}\big)^2\Big)^{1/2} + \sigma\theta_n\right)^2.
\]
Since the first term inside the square converges in probability to $\sigma$ and since the second term is bounded, we conclude
that $\lim_{n\to\infty} P_{A^{(n)}}(F_n < 1) = 0$. The proof is complete.

Lemma 6. For every $\eta \in \mathbb{R}$, $\liminf_{n\to\infty} P_{A^{(n)}}\big(T^2_n > c^2_n\big) \ge \alpha + \Phi(\eta)(1-\alpha)$.

Proof. Fix $\eta \in \mathbb{R}$. From the expression for the tail probability on page 350 in [Yao and Davis, 1986] we see that for each $n$,
\[
P_{A^{(n)}}\big(T^2_n > c^2_n\big) \ge P\big((B_{n,1} \cap B_{n,2}) \cup A_n(\theta_n)\big).
\]
The events $B_{n,1}, B_{n,2}$ are independent of $\theta_n$ and given by
\[
B_{n,1} = \left\{\max_{t \in D_{n,1}} \frac{|W(t)|}{t^{1/2}} > c_n\right\}, \qquad B_{n,2} = \left\{\max_{t \in D_{n,2}} \frac{|W(t) - W(1)|}{(1-t)^{1/2}} > c_n\right\},
\]
where $W$ is standard Brownian motion and $D_{n,1}, D_{n,2}$ are index sets. The event $A_n(\theta_n)$ is increasing in $\theta_n$ and given by the expression on p. 350 in [Yao and Davis, 1986] (there with $\theta$ instead of $\theta_n$). Writing $\theta_n = \theta(n, \eta_n)$ for $\theta_n$ in (12), note that $\theta(n, \eta_n) \ge \theta(n, \eta)$ for $n$ sufficiently large, since $\eta_n \to \infty$ as $n \to \infty$. Hence, for $n$ sufficiently large,
\[
P_{A^{(n)}}\big(T^2_n > c^2_n\big) \ge P\big((B_{n,1} \cap B_{n,2}) \cup A_n(\theta(n, \eta))\big),
\]
and the right-hand side converges to $\alpha + \Phi(\eta)(1-\alpha)$ as concluded on p. 350 in [Yao and Davis, 1986]. The proof is complete.

Lemma 7. $\lim_{n\to\infty} p_n^{-1}(\varepsilon/d)/c^2_n = 0$.

Proof. We have, from the definition of $p_n$,
\[
p_n^{-1}(\varepsilon/d) = \left(\frac{\ln_3(n) + \ln(2)}{(2\ln_2(n))^{1/2}} + \Phi^{-1}\big((1-\varepsilon/d)^{1/(2\ln(n/2))}\big)\right)^2, \tag{21}
\]
where the first term vanishes asymptotically and the second term tends to $\infty$ as $n \to \infty$. Similarly, in (19) the first term vanishes asymptotically and the second term tends to $\infty$ as $n \to \infty$. Hence, it is sufficient to compare the two terms that are not vanishing asymptotically and show that
\[
\lim_{n\to\infty} \frac{\Phi^{-1}(x_n)}{\Phi^{-1}(y_n)} = 0, \qquad x_n := (1-\varepsilon/d)^{1/(2\ln(n/2))}, \quad y_n := \Phi\big((2\ln_2(n))^{1/2}\big).
\]
By l'Hospital's rule, the convergence follows if we verify that
\[
\lim_{n\to\infty} \frac{\varphi(\Phi^{-1}(y_n))}{\varphi(\Phi^{-1}(x_n))} = 0.
\]
Note that $\varphi(\Phi^{-1}(y_n)) = (\sqrt{2\pi}\ln(n))^{-1} \to 0$ as $n \to \infty$. The Mills ratio bound $(1-\Phi(z))/\varphi(z) < 1/z$ for $z > 0$ yields, with $z = \Phi^{-1}(x_n)$,
\[
\varphi(\Phi^{-1}(x_n)) > \Phi^{-1}(x_n)(1 - x_n), \qquad x_n > 1/2.
\]
Hence,
\[
\frac{\varphi(\Phi^{-1}(y_n))}{\varphi(\Phi^{-1}(x_n))} \le \frac{1}{\sqrt{2\pi}\,\ln(n)\,\Phi^{-1}(x_n)(1 - x_n)}.
\]
We claim that $\ln(n)(1 - x_n)$ converges to a positive limit as $n \to \infty$. Since $\Phi^{-1}(x_n) \to \infty$ as $n \to \infty$, verifying this claim will prove the statement of the lemma.
Note that
\[
(1-\varepsilon/d)^{1/(2\ln(n/2))} = \exp\left(\frac{\ln(1-\varepsilon/d)}{2\ln(n/2)}\right)
\]
and hence
\[
1 + \frac{\ln(1-\varepsilon/d)}{2\ln(n/2)} < (1-\varepsilon/d)^{1/(2\ln(n/2))} < 1 + \frac{\ln(1-\varepsilon/d)}{2\ln(n/2)} + \frac{1}{2}\left(\frac{\ln(1-\varepsilon/d)}{2\ln(n/2)}\right)^2.
\]
Hence, with $\gamma := -\ln(1-\varepsilon/d)$,
\[
\frac{\gamma}{2}\,\frac{\ln(n)}{\ln(n/2)} > \ln(n)(1 - x_n) > \frac{\gamma}{2}\,\frac{\ln(n)}{\ln(n/2)} - \frac{\gamma^2}{8}\,\frac{\ln(n)}{\ln(n/2)^2},
\]
which shows that $\lim_{n\to\infty} \ln(n)(1 - x_n) = \gamma/2$. The proof is complete.

A.2 Proof of Proposition 2

Proof. Note that (14) is equivalent to $u_\varepsilon = p_n^{-1}(\varepsilon/d)$. Lemma 7 says that $\lim_{n\to\infty} p_n^{-1}(\varepsilon/d)/c^2_n = 0$. Note that $c^2_n$ given by (19) takes the form $c^2_n = (d_n + (2\ln_2(n))^{1/2})^2$, where $\lim_{n\to\infty} d_n = 0$. The inequality $(a+b)^2 \le 2(a^2+b^2)$ gives
\[
c^2_n = \big(d_n + (2\ln_2(n))^{1/2}\big)^2 \le 2\big(d^2_n + 2\ln_2(n)\big).
\]
Hence, $\lim_{n\to\infty} u_\varepsilon/\ln_2(n) = 0$, which completes the proof.
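The final limit in the proof of Lemma 7 is easy to probe numerically; the sketch below (our own) evaluates $\ln(n)(1 - x_n)$ and shows it approaching $\gamma/2$:

```python
# Numeric illustration of the last step of Lemma 7:
# ln(n)(1 - x_n) -> gamma/2, with x_n = (1 - eps/d)^{1/(2 ln(n/2))}
# and gamma = -ln(1 - eps/d).
from math import log

def limit_quantity(n, eps=0.05, d=10):
    x_n = (1.0 - eps / d) ** (1.0 / (2.0 * log(n / 2.0)))
    return log(n) * (1.0 - x_n)

gamma = -log(1.0 - 0.005)   # here eps/d = 0.005, so gamma/2 is about 0.0025
```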
https://arxiv.org/abs/2505.18769v1
arXiv:2505.19104v1 [math.ST] 25 May 2025

Distributional Limit Theory for Optimal Transport

Eustasio del Barrio∗, Alberto González-Sanz†, Jean-Michel Loubes‡, David Rodríguez-Vítores§

Abstract. Optimal Transport (OT) is a resource allocation problem with applications in biology, data science, economics and statistics, among others. In some of these applications, practitioners have access to samples which approximate the continuous measure. Hence the quantities of interest derived from OT (plans, maps and costs) are only available in their empirical versions. Statistical inference on OT aims at finding confidence intervals for the population plans, maps and costs. In recent years this topic has gained increasing interest in the statistical community. In this paper we provide a comprehensive review of the most influential results in this research field, underlining some of the applications. Finally, we provide a list of open problems.

Keywords: Optimal Transport; Central Limit Theorem; Sample Complexity; Wasserstein Distance
AMS 2020 Subject Classification: 62G05; 62R10; 62G30

1 Introduction

Optimal transport (OT) is, by now, a regular member of the standard toolkit in many fields of Data Science. The list includes computer vision [9], flow cytometry gating [32, 44], domain adaptation [16], fair learning [63, 21] or generative modeling in AI [99], to name only a few examples. While OT has long been a fundamental problem in mathematics, its recent surge in popularity within data science is largely driven by computational advances, particularly the introduction of the Sinkhorn algorithm [20, 98], which has significantly improved the scalability of OT-based methods. The use of OT for inferential purposes has a long history, but until recently it was 'hindered by the lack of distributional limits' [110].
∗IMUVa, Universidad de Valladolid, Spain, eustasio.delbarrio@uva.es
†Department of Statistics, Columbia University, US, ag4855@columbia.edu
‡INRIA, Toulouse, France, loubes@math.univ-toulouse.fr
§IMUVa, Universidad de Valladolid, Spain, david.rodriguez.vitores@uva.es

While there had been some early attempts to apply OT in goodness-of-fit problems [93, 25, 27, 42], these efforts were largely restricted to the univariate setting, limiting their broader applicability. To be more precise, let us fix some notation. We limit our presentation to the Euclidean setup. We recall that the OT problem is the minimization problem
$$T_c(P,Q):=\min_{\pi\in\Pi(P,Q)}\int_{\mathbb{R}^d\times\mathbb{R}^d}c(x,y)\,d\pi(x,y),\tag{1}$$
where $P$ and $Q$ are Borel probabilities on $\mathbb{R}^d$, $c:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$ is a cost function and $\Pi(P,Q)$ stands for the set of joint probabilities on $\mathbb{R}^d\times\mathbb{R}^d$ with marginals $P$ and $Q$. We will write $c_p(x,y)=|x-y|^p$ and $T_p$ instead of $T_c$ when $c=c_p$, $p\ge1$. An OT plan is a minimizer in (1). Under mild assumptions (see [114, Theorem 5.10]), the optimization problem (1) admits a dual formulation given by
$$T_c(P,Q)=\max_{f,g}\int f(x)\,dP(x)+\int g(y)\,dQ(y),\tag{2}$$
where the supremum is taken over the class $\Phi_c(P,Q)$ consisting of pairs $(f,g)\in L_1(P)\times L_1(Q)$ such that $f(x)+g(y)\le c(x,y)$, $x,y\in\mathbb{R}^d$. A pair of OT potentials $(f_0,g_0)$ is any maximizer in (2). Under some assumptions on the cost function and the probability measures it is well known (see, e.g., Theorem 2.44 in [113]) that the optimal plan in (1) is of the form $\pi=(\mathrm{Id},T)_{\#}P$, where $T$ is such that
$$T_c(P,Q)=\int_{\mathbb{R}^d}c(x,T(x))\,dP(x)=\min_{S:\,S_{\#}P=Q}\int_{\mathbb{R}^d}c(x,S(x))\,dP(x).\tag{3}$$
Such a minimizer is called an OT map. Here, and throughout this paper, $T_{\#}P$ denotes the push-forward of $P$ by $T$, namely, the probability measure induced from $P$ by the map $T$. The basic objects of interest in the goodness-of-fit problems cited
https://arxiv.org/abs/2505.19104v1
above were the empirical versions of $T_c(P,Q)$: given the observation of an i.i.d. sample $X_1,\dots,X_n\sim P$, we look at $T_c(P_n,Q)$, where $P_n$ denotes the empirical measure on the sample (for the sake of brevity we focus here on one-sample problems, but all the results that we present can be adapted to the two-sample case). This paper will present an essentially complete description of the distributional limit theory for $T_c(P_n,Q)$. To avoid excessive technicalities we limit ourselves to the case of the costs $c_p$, $p\ge1$, which has received a great deal of attention in the literature. The related Wasserstein distance, $W_p(P,Q)=(T_p(P,Q))^{1/p}$, is a metric on the set of probability measures on $\mathbb{R}^d$ with finite moment of order $p$ that metrizes weak convergence of probabilities plus convergence of $p$-th moments. We refer again to [113, 114] for these and other background results on OT. Distributional limit theorems for $T_p(P_n,Q)$ and related functionals are the basis of the already cited applications of OT in goodness-of-fit problems [93, 25, 27, 42]. We will also look at the empirical OT potentials, plans and maps. These are objects of interest by themselves. The OT map to or from a reference measure is the center-outward distribution or quantile function of [17, 67]. These maps can be used to define multivariate ranks, which in turn can be used to build efficient inference procedures in a semiparametric or nonparametric setup; see, e.g., [22, 24, 68]. This gives additional motivation to look for distributional limit theorems for these OT maps, and we try to give a short account of the available results on this topic. An already well-known fact about OT is that estimation suffers from the curse of dimensionality, whether the goal is to estimate the OT map (see [76]) or the cost ([96, 89]). This can be alleviated through different paths.
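For two empirical measures of equal size on the real line, $T_p(P_n,Q_n)$ has a closed form: the optimal coupling matches order statistics. A minimal sketch (the distributions and sample size are arbitrary illustrative choices), cross-checked against scipy's one-dimensional $W_1$:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)   # sample from P
y = rng.normal(1.0, 2.0, size=500)   # sample from Q

def T_p(x, y, p):
    # For uniform empirical measures of equal size, the optimal coupling
    # pairs the i-th order statistics: T_p = mean |X_(i) - Y_(i)|^p.
    return np.mean(np.abs(np.sort(x) - np.sort(y)) ** p)

# For p = 1 this agrees with scipy's Wasserstein distance, since W_1 = T_1.
print(T_p(x, y, 1), wasserstein_distance(x, y))
```

The same sorted-sample formula gives $T_p$ for any $p\ge1$, while $W_p=T_p^{1/p}$.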
The minimax rate of estimation of the OT map improves under smoothness ([76]), and one can look for estimators that adapt to smoothness (and remain computationally tractable); see [87]. Some recent work provides distributional limit theorems for these improved estimators [86]. Smoothness is not the only possible workaround to the curse of dimensionality in OT estimation. The discrete setup is rich enough for an interesting set of applications and has been extensively analyzed (see [104, 110, 111]). Additionally, OT satisfies a lower complexity adaptation principle (see [75]). Roughly speaking, this means that the statistical complexity of OT estimation is determined by the statistical complexity of estimating the 'easy' measure, $P$ or $Q$. This leaves enough room for some distributional limit theorems in low dimension or in the semidiscrete setup (see [75, 29]). Beyond the smoothness or low-complexity environment, alternative formulations of the OT problem have received a great deal of attention in the literature. Entropic optimal transport (see [20, 98]) is a very attractive choice, both in computational terms, thanks to the celebrated Sinkhorn algorithm, and in statistical complexity (see [90, 35]). The estimation of the entropic OT cost (EOT cost in the sequel) is not affected by the curse of dimensionality, and this opens the
possibility, for instance, of building asymptotically valid confidence intervals for the EOT cost in any dimension. Here we present a review of the best distributional limit theorems available and a discussion of the different proof techniques leading to them. An alternative way to benefit from OT data analysis tools while keeping away from high-dimensional inconvenience is to look at OT between projections of the target measures. Of all the possible approaches, the sliced Wasserstein metric, based on one-dimensional projections, has received the most attention in the literature (see [115, 88, 70, 116, 53]). In this line we comment briefly on the related CLTs, including some possible improvements over the available literature. While we do not aim to give a complete account of the technical details underlying the proofs of all the results that we summarize here, we find it convenient to include a few key guidelines to the main approaches. The OT problem, either in its primal (1) or dual form (2), is a linear minimization problem over a convex set. However, from the point of view of distributional limit theory, other formulations of the problem may be more convenient. The Monge formulation (3) may look more direct, but the problem becomes highly nonlinear and some linearization technique could provide a useful approach to the problem. Different possibilities have been considered. In the case of discrete probability measures the transportation cost functional is directionally Hadamard differentiable. This fact, together with a delta method for nonlinear derivatives, is enough to yield CLTs for the transportation cost (see [110, 111]). Hadamard differentiability of supremum-type functionals (see [12]) can also be applied to obtain CLTs for the empirical transportation cost in the semidiscrete setup (see [73, 29]). Additionally, this approach can yield distributional limit theorems for OT potentials.
On the other hand, if we put the main focus on the transportation cost, other linearization tools can give sharper results. The Efron-Stein inequality for variances of functions of independent random elements (see, e.g., [11]) turns out to be a very convenient tool in this field. As an example of this we can refer to Lemma 8.1 in the supplementary material to this paper: if $P$ and $Q$ are Borel probability measures on $\mathbb{R}^d$, $Q$ has finite mean and $P$ has finite variance $\sigma^2(P)$, then a trivial application of the Efron-Stein inequality shows that
$$\mathrm{Var}(T_1(P_n,Q))\le\frac{\sigma^2(P)}{n}.\tag{4}$$
A slightly more elaborate application of the Efron-Stein inequality yields a similar bound for the case $p>1$ (under the assumption that $P$ and $Q$ have finite moments of order $2p$ and $Q$ has a density; see Theorem 3.1 in [33] and Corollary 4.3 in [28]). As a consequence, we see that $\sqrt{n}\,(T_p(P_n,Q)-\mathbb{E}T_p(P_n,Q))$ is stochastically bounded under very mild assumptions. This suggests looking for CLTs for the fluctuation of the empirical transportation cost with respect to its expected value and, indeed, we can also use the Efron-Stein inequality as a linearization tool to obtain CLTs for this fluctuation in great generality, as we will show in later sections. The remaining sections of this paper are organized as follows. Section 2 presents CLTs for one-dimensional OT objects. While this may look very narrow in scope, some of the key
features of the theory show up in this simpler setup: CLTs for the fluctuation of the OT cost with respect to its expected value hold in great generality; Gaussianity of the limiting distribution of the empirical OT cost is related to the regularity of the cost; limit theorems for the OT plans and maps require more restrictive assumptions. These aspects are revisited in Section 3 for general dimension. Here we pay special attention to those cases in which OT is not affected by the curse of dimensionality (low-dimensional, discrete or semidiscrete setups), linking the results to the lower complexity adaptation principle of OT observed in [75]. Section 4 presents distributional limit theory for regularized OT. We consider entropic and smooth OT. From the point of view of practitioners, EOT is most attractive due to the availability of efficient computational tools. We present limit theorems for the cost, potentials and plans, and also for the related Sinkhorn divergence. In the case of smooth OT we deal with the cost and also with some estimators of the OT map that adapt to smoothness. This is a very interesting field of research since, theoretically, one can construct estimators of the OT map that adapt to smoothness and avoid the curse of dimensionality. However, the available theory covers a somewhat limited setup and we discuss possible ways to extend it to cover more natural cases. Regrettably, this part of the discussion is very technical, but we believe that the topic is interesting enough to include this material in the paper. We continue with Section 5, devoted to sliced Wasserstein distances. These combine some convenient features of one-dimensional OT with the ability to capture differences in multivariate distributions. We review the available results, paying special attention to some recent improvements over the existing literature.
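The Efron-Stein bound (4) stated above can be probed by a small Monte Carlo experiment. The sketch below is illustrative only: the distributions, sample sizes, replication count and seed are arbitrary choices, and $Q$ is replaced by a frozen reference sample so that $T_1(P_n,Q)$ can be computed exactly with scipy.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
n = 100
# Q: a fixed discrete reference measure (a frozen sample standing in for Q).
q_support = rng.normal(0.0, 1.0, size=2000)

# P = N(0, 1), so sigma^2(P) = 1 and bound (4) reads Var(T_1(P_n, Q)) <= 1/n.
reps = [wasserstein_distance(rng.normal(0.0, 1.0, size=n), q_support)
        for _ in range(300)]
var_hat = np.var(reps, ddof=1)
print(var_hat, 1.0 / n)   # the estimated variance should sit below 1/n
```

Since $P$ and $Q$ here have the same shape, the fluctuation is in fact much smaller than the bound; the bound becomes nearly sharp when $P$ and $Q$ are stochastically ordered.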
While we focus mainly on results and outline proof techniques, we include a small section (Section 7) discussing the potential applications of some of the distributional results presented in this paper. Finally, we include a section with a small sample of open problems which we believe deserve further investigation.

2 One-dimensional central limit theorems

We consider in this section empirical objects in OT problems on the real line. A crucial simplification here is that, if $P$ and $Q$ are probabilities on the real line with distribution functions $F$ and $G$, respectively, and $p\ge1$, then
$$T_p(P,Q)=\int_0^1|F^{-1}(t)-G^{-1}(t)|^p\,dt,\tag{5}$$
where $F^{-1},G^{-1}$ denote the corresponding quantile functions. A further simplification holds in the case $p=1$, since
$$T_1(P,Q)=\int_{\mathbb{R}}|F(x)-G(x)|\,dx,\tag{6}$$
that is, $T_1(P,Q)$ is simply the $L_1(\mathbb{R})$-norm of the difference between the distribution functions. The link between OT and distribution and quantile functions goes further. If $P$ has no atoms then $F$ is the OT map between $P$ and the uniform distribution on $(0,1)$, $U(0,1)$. In all cases $F^{-1}$ is the OT map between $U(0,1)$ and $P$. While the empirical d.f., say $F_n$, is not the OT map between the empirical measure, $P_n$, and $U(0,1)$ (such an object does not exist since $P_n$ is discrete), it is a consistent estimator of the OT map $F$ in different senses. $F_n^{-1}$ is the OT map between $U(0,1)$ and $P_n$. All these
facts turn the study of distributional limit laws for empirical estimators of OT maps on the real line into exercises about weak convergence of empirical or quantile processes in some $L_p$ space. However, weak convergence of these processes typically requires stronger assumptions than weak convergence of the transportation cost which, as we will see next, holds under very mild assumptions if we look at the fluctuation between the empirical cost and its expected value. Later in this section we discuss the role of the centering constants in the CLTs for the transportation cost. We complete the section with a study of CLTs for OT maps and plans in this one-dimensional setup.

2.1 CLTs for the fluctuation

In the one-dimensional case ($d=1$) the fluctuation of the empirical transportation cost is fully understood through the following result.

Theorem 2.1. Assume $d=1$. If $p>1$, $P$ and $Q$ have finite moments of order $2p$ and $Q$ has a continuous quantile function, then
$$\sqrt{n}\,(T_p(P_n,Q)-\mathbb{E}[T_p(P_n,Q)])\to_w N(0,\sigma_p^2(P,Q)),\tag{7}$$
where $\sigma_p^2(P,Q)$, defined in (39), is strictly positive unless $P=Q$ or $P$ is Dirac's measure on a point. If $p=1$,
$$\sqrt{n}\,(T_1(P_n,Q)-\mathbb{E}[T_1(P_n,Q)])\to_w\gamma(P,Q),\tag{8}$$
where
$$\gamma(P,Q):=\int_{\mathbb{R}}(v_{F,G}(x)-\mathbb{E}[v_{F,G}(x)])\,dx,\tag{9}$$
$$v_{F,G}(x)=\mathrm{sgn}(F(x)-G(x))\,B\circ F(x)\,I(F(x)\ne G(x))+|B\circ F(x)|\,I(F(x)=G(x)),\tag{10}$$
and $B$ is a standard Brownian bridge on $[0,1]$. The distribution of $\gamma(P,Q)$ is Gaussian if $\ell(F=G)=0$.

The case $p>1$ in Theorem 2.1 is Theorem 2.1 (ii) in [31]. To our knowledge, the case $p=1$ in this generality is new. A proof is given in the supplementary material, including the fact that (9) indeed defines a probability distribution. This is clear under the additional assumption
$$\int_{\mathbb{R}}\sqrt{F(x)(1-F(x))}\,dx<\infty,\tag{11}$$
related to the interpolation space $L_{2,1}$ (see [26] for further details): it holds if $P$ has a finite moment of order $2+\delta$ for some $\delta>0$; it implies a finite moment of order 2. Under (11) the trajectories of the process $B\circ F$ belong a.s. to $L_1(\mathbb{R})$.
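The two representations (5) and (6) behind these results can be checked directly on data: for empirical measures (of possibly different sizes), the $L_1$ distance between the step CDFs must coincide with the quantile-based $T_1$. A minimal sketch, with arbitrary illustrative samples:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
x = rng.exponential(1.0, size=300)   # sample from P
y = rng.normal(2.0, 1.0, size=400)   # sample from Q (sizes need not match)

# T_1 via (6): integrate |F_n - G_m| over the merged support grid;
# both CDFs are constant between consecutive grid points.
z = np.sort(np.concatenate([x, y]))
F = np.searchsorted(np.sort(x), z, side='right') / len(x)  # F_n on the grid
G = np.searchsorted(np.sort(y), z, side='right') / len(y)  # G_m on the grid
t1_cdf = np.sum(np.abs(F[:-1] - G[:-1]) * np.diff(z))

# T_1 via the quantile representation (5) with p = 1, as computed by scipy.
print(t1_cdf, wasserstein_distance(x, y))
```

Both numbers agree up to floating-point error, illustrating that (5) and (6) define the same functional.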
In this case, and assuming $P=Q$, we have that
$$\sqrt{n}\,T_1(P_n,Q)\to_w\int_{\mathbb{R}}|B\circ F|.\tag{12}$$
This is part of Theorem 1.1 in [26]. Theorem 5.1 in the same reference covers the case $P=Q$ under the weaker assumption that $P$ has a finite variance. Here we extend the result to general $Q$. Theorem 2.1, while limited in scope, brings into focus a few significant issues. First, the fluctuation of the empirical OT cost satisfies a CLT under very mild moment and smoothness assumptions: no need for bounded support or light tails (for the sake of brevity we refrain from pursuing the case of heavier tails, but the literature covers it in some cases, see [26, 27]). We observe that in the case $p>1$, if $Q$ is Dirac's measure on a point then the CLT for $T_p(P_n,Q)$ is simply the CLT for sums of i.i.d. random variables, and a finite moment of order $2p$ is then a necessary condition for the conclusion. The assumption about continuity of the quantile function rules out the possibility of a disconnected support: the support of $Q$ is connected if and only if $G^{-1}$ is continuous, see Proposition A.7 in [7]. The different regime of the empirical transportation cost in this case has been observed in the literature for a long time (see [104, 7]). A further point of interest in Theorem
2.1 is that non-strictly convex cost functions (the case $p=1$) result in some differences with respect to the CLT. We see that non-Gaussian limiting distributions can show up. In fact, this will always be the case if $P=Q$. The limiting distribution will then be the law of the centered version of the right-hand side in (12). In particular, we see here a non-degenerate limiting distribution. In contrast, the conclusion of Theorem 2.1 for $p>1$ and $P=Q$ is simply
$$\sqrt{n}\,(T_p(P_n,P)-\mathbb{E}[T_p(P_n,P)])\to_{\Pr}0.\tag{13}$$
As a summary of this subsection we can say that CLTs for the empirical transportation cost on the real line, with a Gaussian limiting distribution, hold in great generality, that is, under minimal moment and smoothness conditions. However, these minimal assumptions come at the price of somewhat unnatural centering constants: the centering constant in (7) is not $T_p(P,Q)$, as one would desire in some statistical applications. We deal with this issue next.

2.2 The role of the centering constants

Given the interest in Wasserstein metrics, one could look for, say, a confidence interval for $W_p(P,Q)$ or for $T_p(P,Q)$. If we could ensure that
$$\sqrt{n}\,(T_p(P_n,Q)-T_p(P,Q))\to_w N(0,\sigma_p^2(P,Q)),\tag{14}$$
then, provided that $\hat\sigma_p^2(P,Q)$ is a consistent estimator of $\sigma_p^2(P,Q)$, we would have that
$$\Big[T_p(P_n,Q)\pm\frac{\hat\sigma_p(P,Q)}{\sqrt{n}}\,\Phi^{-1}\big(1-\tfrac{\alpha}{2}\big)\Big]$$
is an approximate confidence interval for $T_p(P,Q)$ with asymptotic level $1-\alpha$. The key to moving from (7) to the desired version (14) would be to prove that
$$\sqrt{n}\,\big(\mathbb{E}[T_p(P_n,Q)]-T_p(P,Q)\big)\to0.\tag{15}$$
Theorem 2.3 in [31] provides sufficient conditions under which this centering constant can be replaced with $T_p(P,Q)$. This result was recently improved by Theorem 4.1 in [34]. We present here an adaptation of this result for the one-dimensional setup. The key assumption can be formulated in terms of the following quantities:
$$J_p(P):=\int_0^1\frac{(t(1-t))^{p/2}}{f^p(F^{-1}(t))}\,dt,\qquad p\ge1.$$
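For orientation, $J_p$ has a closed form in the simplest case. A quick numeric sanity check (a sketch; $P=U(0,1)$ is an illustrative choice): since $f\equiv1$ on $(0,1)$, the density term drops out and $J_1=\int_0^1\sqrt{t(1-t)}\,dt=\pi/8$, while $J_2=\int_0^1 t(1-t)\,dt=1/6$.

```python
import math
from scipy.integrate import quad

# J_p(P) for P = U(0,1): f = 1 on (0,1), so only the (t(1-t))^(p/2) factor remains.
def J_p_uniform(p):
    val, _ = quad(lambda t: (t * (1 - t)) ** (p / 2), 0, 1)
    return val

print(J_p_uniform(1), math.pi / 8)   # integral of sqrt(t(1-t)) over (0,1)
print(J_p_uniform(2), 1 / 6)         # integral of t(1-t) over (0,1)
```

For heavier-tailed $P$ the factor $1/f^p(F^{-1}(t))$ blows up near 0 and 1, which is exactly how finiteness of $J_p$ encodes a tail condition.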
Here $F$ is the distribution function of $P$, which is assumed to have density $f$.

Theorem 2.2. Assume $p>1$ and let $P,Q\in\mathcal{P}_p(\mathbb{R})$ be such that $P$ has distribution function $F$ and density $f$, with $f(F^{-1}(t))$ positive and continuous in $(0,1)$ and monotone for $t$ sufficiently close to 0 and 1. If there exist $\alpha,\beta>1$ with $\frac{1}{\alpha}+\frac{1}{\beta}=1$ and $\delta>0$ such that $J_{\alpha+\delta}(P)<\infty$ and the following additional moment assumptions are satisfied:
(i) if $1<p<2$, $P,Q$ have finite moment of order $\beta+\delta$;
(ii) if $p=2$, $Q$ has finite moment of order $\beta+\delta$;
(iii) if $p>2$, $P,Q$ have finite moment of order $(p-1)\beta+\delta$;
then (15) holds. If $p=1$ and $J_1(P)<\infty$ then
$$\sqrt{n}\,\big(\mathbb{E}[T_1(P_n,Q)]-T_1(P,Q)\big)\to c,$$
with $c\ge0$ a finite constant. If $\ell(F=G)=0$, then $c=0$ and (15) holds.

Details of the proof are given in the supplementary material. As a consequence of Theorem 2.2, we see that we can indeed replace $\mathbb{E}[T_p(P_n,Q)]$ with $T_p(P,Q)$ in (7) or (8) under some additional assumptions. We should recall at this point that finiteness of $J_p(P)$ is necessary and sufficient for boundedness of $n^{p/2}\,\mathbb{E}[T_p(P_n,P)]$, and also for boundedness of $\sqrt{n}\,\mathbb{E}[(T_p(P_n,P))^{1/p}]$ (this is Corollary 5.10 in [7]). Hence, for $p>1$ we see that if $J_p(P)<\infty$ then
$$\sqrt{n}\,\big(\mathbb{E}[T_p(P_n,P)]-T_p(P,P)\big)\to0.$$
Theorem 2.2 improves upon this known result by dealing with the case $P\ne Q$.

2.3 The null case

As noted above, in the case $P=Q$ the limiting variance in (7) vanishes. Under the additional assumption $J_p(P)<\infty$ we obtain (for $p>1$) that $\sqrt{n}\,T_p(P_n,P)\to_{\Pr}0$. One may wonder if it is possible to get a non-degenerate limiting distribution with a different rate. Combining different approaches (see Theorem 6.4.1 in [19], Theorem 3.2 in [104]) the following result follows.
Theorem 2.3. Assume $P$ has a continuous density $f$ such that $f\circ F^{-1}$ is continuous in $(0,1)$ and monotonic in a neighbourhood of 0 and 1. If, for some $p\ge1$,
$$J_p(P):=\int_0^1\frac{(t(1-t))^{p/2}}{f^p(F^{-1}(t))}\,dt<\infty,\tag{16}$$
then
$$n^{p/2}\,T_p(P_n,P)\to_w\int_0^1\frac{|B(t)|^p}{f^p(F^{-1}(t))}\,dt,$$
where $B$ is a standard Brownian bridge on $[0,1]$.

Again, we defer details of the proof to the supplementary material. Finiteness of $J_p(P)$ is a necessary and sufficient condition for a.s. integrability of the process in the limiting integral. It is also related to integrability of some functionals of the hazard function. This result does not bring any improvement over (12) in the case $p=1$. For $p>1$ it can be shown that (16) implies finiteness of the moment generating function in a neighborhood of the origin (see Section 3 in [104]). Hence, this is a much stronger assumption than the finiteness of moments of order $2p$ in Theorem 2.1. Under some relaxations of (16) it is still possible to get a version of Theorem 2.3, although with the addition of some centering terms. We refer to [27] for details.

2.4 Limits for optimal transportation plans and maps

When $p>1$ and one of the probabilities in the OT problem has a density (this can be relaxed, see Theorem 2.44 in [113]), the OT plan is given by a map, that is, it is of the form $\pi=(\mathrm{Id}\times T)_{\#}P$. This general result has a particularly simple formulation in the one-dimensional case, since the joint probability $(F^{-1},G^{-1})_{\#}\ell|_{(0,1)}$ is an optimal plan for all the $c_p$ cost functions, $p\ge1$. We recall that $F^{-1}$ is also the unique optimal transportation map from $\ell|_{(0,1)}$ to $P$. Hence, distributional limit theory for optimal plans and maps can, in this case, be formulated in terms of the natural estimator of the quantile function, namely, the empirical quantile function, $F_n^{-1}$. This is a left-continuous, piecewise constant map, with $F_n^{-1}(t)=X_{(i)}$ for $\frac{i-1}{n}<t\le\frac{i}{n}$, $i=1,\dots,n$, where $X_{(i)}$ denotes the $i$-th order statistic associated to the sample $X_1,\dots,X_n$ of i.i.d.
observations from $P$. Now, the natural object to look at is the quantile process
$$v_n(t):=\sqrt{n}\,(F_n^{-1}(t)-F^{-1}(t)),\qquad 0<t<1.$$
It is well known that $v_n(t)$ is asymptotically Gaussian if $P$ has a density such that $f(F^{-1}(t))>0$. The natural setup in which to look for a functional CLT for $v_n$ is, in turn, $L_p(0,1)$: if we prove that $v_n\to_w V$ for some $L_p(0,1)$-valued random element, then we would get, by the continuous mapping theorem, that $n^{p/2}\,T_p(P_n,P)=\|v_n\|_{L_p}^p\to_w\|V\|_{L_p}^p$, recovering the conclusion of Theorem 2.3. This can be made precise. For completeness we include next a formal statement of this fact.

Theorem 2.4. Assume $P$ has a continuous density $f$ such that $f\circ F^{-1}$ is continuous in $(0,1)$ and monotonic in a neighbourhood of 0 and 1. Assume also that $J_p(P)<\infty$ for some $p\ge1$. Set
$$V(t)=\frac{B(t)}{f(F^{-1}(t))},\qquad 0<t<1,$$
with $B$ a Brownian bridge on $(0,1)$. Then, with probability one, the trajectories of $V$ belong to $L_p(0,1)$. Furthermore, $v_n\to_w V$ as random elements in $L_p(0,1)$.

We include a schematic proof of this result in the supplementary material. We must mention that the sufficient conditions here are not far from being necessary. In fact, in the recent paper [5] it is shown that, in the case $p=1$, weak convergence of $v_n$ in $L_1(0,1)$ holds if and only if $F^{-1}$ is absolutely continuous and (11) holds. Absolute continuity of $F^{-1}$ holds,
in turn, if and only if $P$ is supported on an interval and the absolutely continuous component of $P$ has an a.e. positive density on that interval (this is Proposition A.17 in [7]). In this case $p=1$ it is of interest to observe that (11) is the necessary and sufficient condition for weak convergence of $\sqrt{n}\,(F_n-F)$ as a random element in $L_1(\mathbb{R})$. Since $\sqrt{n}\,T_1(P_n,P)=\|\sqrt{n}(F_n-F)\|_{L_1(\mathbb{R})}=\|v_n\|_{L_1(0,1)}$, we see that weak convergence of $\sqrt{n}\,T_1(P_n,P)$ holds in some cases in which $v_n$ fails to converge.

3 Limit theorems for general dimensions

Distributional limit theorems in the case $d>1$ are much more recent. We present now a succinct account of the main results. Remarkably, the CLTs for the fluctuations remain valid with very small changes. However, the natural centering constants cannot be used apart from some particular cases. Given the interest of using the natural centering for inferential goals, we pay special attention to those cases. For a cleaner exposition we start by dealing with the fluctuations, even though some of the CLTs that we present later were published earlier.

3.1 CLTs for the fluctuation

The following version of Theorem 2.1 remains valid in general dimension.

Theorem 3.1. Assume $P,Q$ are probabilities on $\mathbb{R}^d$ with finite moments of order $2p$. Assume further that $P$ has a density and connected support with a negligible boundary. Then
$$\sqrt{n}\,(T_p(P_n,Q)-\mathbb{E}[T_p(P_n,Q)])\to_w N(0,\sigma_p^2(P,Q)),\tag{17}$$
where
$$\sigma_p^2(P,Q)=\int f_0^2\,dP-\Big(\int f_0\,dP\Big)^2<\infty,$$
and $f_0$ is an OT potential from $P$ to $Q$.

This is Corollary 4.7 in [28]. Some comments are in order at this point. The assumptions in Theorem 3.1 guarantee that the OT potential $f_0$ is uniquely determined, up to the addition of a constant. Since $\sigma_p^2(P,Q)$ is the variance of the potential, this quantity does not depend on the particular choice of $f_0$ and $\sigma_p^2(P,Q)$ is well defined. The proof of the result includes the fact that $f_0\in L_2(P)$ if $P$ and $Q$ have finite moments of order $2p$.
The apparently simpler description of the limiting variance in this result is explained by the stronger assumptions made here. If $P$ is assumed to have a density in Theorem 2.1, then $G^{-1}\circ F$ is the optimal map from $P$ to $Q$, the OT potential $f_0$ is any primitive of the map $x\mapsto h_p'(x-G^{-1}(F(x)))$, with $h_p(x)=|x|^p$, and we see that the limiting variances in both results are equal. Finally, we would like to briefly sketch the path to proving (17) through the Efron-Stein linearization. Under sufficient integrability, the (one-dimensional) CLT ensures that
$$\sqrt{n}\,\Big(\int f_0\,dP_n-\int f_0\,dP\Big)\to_w N(0,\sigma_p^2(P,Q)).$$
We can use the Efron-Stein inequality to see that
$$n\,\mathrm{Var}\Big(T_p(P_n,Q)-\int f_0\,dP_n\Big)\le c\,\mathbb{E}\big[(f_n(X_1)-f_0(X_1))^2\big],$$
for some constant $c>0$, where $f_n$ is an OT potential for the empirical problem (OT between $P_n$ and $Q$). With a careful choice of centering constants the empirical OT potentials $f_n$ converge to $f_0$ a.s. and in $L_2(P)$ (Theorem 2.10 in [33], Corollary 3.5 in [28]), and we can conclude that the last upper bound vanishes as $n\to\infty$. But then (17) follows. This scheme works provided $P$ has a finite moment of order $2p+\delta$ for some $\delta>0$. In this case we also have convergence of variances:
$$n\,\mathrm{Var}\big(T_p(P_n,Q)\big)\to\sigma_p^2(P,Q).$$
The moment assumption can be relaxed to a finite moment of order $2p$, at the price of needing a more involved linearization technique and losing, possibly, the
convergence of variances. We refer to [28] for details. Theorem 3.1 also holds for the quadratic cost and compactly supported measures in infinite-dimensional Hilbert spaces [57].

3.2 Centering constants in dimension d > 1

As in the one-dimensional case, it would be of interest to obtain conditions ensuring that the bias converges at parametric rate towards zero, as in (15). But now, in contrast to the one-dimensional case, integrability and smoothness assumptions do not suffice in all cases. In fact, even in the favorable situation in which $P$ and $Q$ are compactly supported on disjoint convex sets, the best possible general rate is
$$\big|\mathbb{E}[T_p(P_n,Q)]-T_p(P,Q)\big|\le c\,n^{-2/d},$$
see Corollary 3 in [89]. This means that we cannot generally expect (15) to hold if $d\ge4$. We refer to [96] for further details on the minimax estimation rates for $T_p(P,Q)$. The main consequence is that the estimation of $T_p(P,Q)$ is affected by the curse of dimensionality, and it is of interest to investigate particular setups in which parametric estimation is possible and to develop useful distributional limit theory in those setups. This is what we present next.

3.3 Discrete spaces

Discrete probabilities are a rich enough setup for many useful applications and an interesting benchmark to test alternative approaches to distributional limit theorems in OT. Let us assume that $P$ and $Q$ are supported on the finite sets $\mathcal{X}=\{x_1,\dots,x_N\}\subset\mathbb{R}^d$, $\mathcal{Y}=\{y_1,\dots,y_M\}\subset\mathbb{R}^d$. Then $P$ is characterized by the probability $p_i=P(\{x_i\})$ given to each of the atoms of $\mathcal{X}$, i.e., $P=\sum_{i=1}^N p_i\delta_{x_i}$. Therefore, as the points are fixed, the only variable is the element $p=(p_1,\dots,p_N)$ of the $N$-dimensional simplex.
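The finite-dimensional linear program that results can be solved with any off-the-shelf LP solver. The sketch below (atoms, weights and the cost $c_2$ are arbitrary illustrative choices) builds the transport polytope constraints explicitly and solves the problem with scipy:

```python
import numpy as np
from scipy.optimize import linprog

# A tiny discrete OT instance solved as a linear program.
x = np.array([0.0, 1.0])            # atoms of P
y = np.array([0.0, 2.0])            # atoms of Q
p = np.array([0.5, 0.5])
q = np.array([0.5, 0.5])
N, M = len(x), len(y)
C = np.abs(x[:, None] - y[None, :]) ** 2   # cost matrix for c_2

# Row-sum and column-sum constraints defining the transport polytope.
A_eq = np.zeros((N + M, N * M))
for i in range(N):
    A_eq[i, i * M:(i + 1) * M] = 1.0       # sum_j pi_ij = p_i
for j in range(M):
    A_eq[N + j, j::M] = 1.0                # sum_i pi_ij = q_j

res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
              bounds=(0, None))
print(res.fun)   # optimal cost T_2(P, Q); here 0.5 (keep 0->0, send 1->2)
```

For this instance the optimal plan keeps the atom at 0 in place (cost 0) and moves the mass at 1 to 2 (cost $0.5\cdot1^2$), so the optimal value is 0.5. The dual solutions returned by the solver are exactly the points of $\Phi^*(p,q)$ entering the limit in the CLT below.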
As a consequence, OT in discrete spaces is the finite-dimensional linear program
$$\min_{\pi\in\Gamma(p,q)}\langle C,\pi\rangle_F,\tag{18}$$
where $C=(c(x_i,y_j))\in\mathbb{R}^{N\times M}$ denotes the cost matrix and
$$\Gamma(p,q)=\Big\{\pi=(\pi_{ij})\in\mathbb{R}^{N\times M}:\ \sum_{j=1}^M\pi_{ij}=p_i,\ \sum_{i=1}^N\pi_{ij}=q_j,\ \pi_{ij}\ge0\Big\}$$
the transport polytope. The empirical measures are defined by random elements $\hat p=(\hat p_1,\dots,\hat p_N)$ on the simplex. Of course, the ambient space $\mathbb{R}^d$ plays no particular role in the problem, but we try to keep a homogeneous notation. The multivariate central limit theorem shows that $\sqrt{n}\,(\hat p-p)$ is asymptotically Gaussian with covariance matrix $\Sigma(p)$ defined as
$$\Sigma(p)=\begin{pmatrix}p_1(1-p_1)&-p_1p_2&\cdots&-p_1p_N\\-p_2p_1&p_2(1-p_2)&\cdots&-p_2p_N\\\vdots&\vdots&\ddots&\vdots\\-p_Np_1&-p_Np_2&\cdots&p_N(1-p_N)\end{pmatrix}.\tag{19}$$
Hence, in the discrete case, one can set $T(p):=T_p(P,Q)$ and investigate differentiability results for $T$. The key result is that $T$ is directionally Hadamard differentiable, with derivative
$$h\mapsto\max_{u\in\Phi_p^*(p,q)}\langle u,h\rangle,$$
where $\Phi_p^*(p,q)$ is the set of optimal solutions to the dual of the linear program (18) when $c(x,y)=\|x-y\|^p$, i.e., points $u\in\mathbb{R}^N$ for which there exists $v\in\mathbb{R}^M$ such that
$$\langle p,u\rangle+\langle q,v\rangle=T_p(P,Q)$$
and
$$u_i+v_j\le\|x_i-y_j\|^p,\qquad 1\le i\le N,\ 1\le j\le M.$$
Directional, as opposed to standard, differentiability means that the derivative map need not be linear. Nevertheless, a delta method adapted to such nonlinear derivatives can be used to conclude the following.

Theorem 3.2. If $P$ and $Q$ are finitely supported probabilities on $\mathbb{R}^d$, then
$$\sqrt{n}\,\big(T_p(P_n,Q)-T_p(P,Q)\big)\to_w\max_{u\in\Phi_p^*(p,q)}\langle G,u\rangle,\tag{20}$$
where $G$ is a centered Gaussian random vector with covariance matrix $\Sigma(p)$ as in (19).

Theorem 3.2 is a simplified version of Theorem 3 in [110]. The case $P=Q$ admits a slightly cleaner expression for the limit, generalizing earlier related work [104]. We see that normality of the limit holds if the maximizer in the dual problem is unique but, obviously, this is not directly related to the
smoothness of the cost. A similar result holds when the supports of $P$ and $Q$ are countable, under the additional assumption
$$\sum_{i=1}^\infty\|x_i\|^p\sqrt{p_i}<\infty;$$
see [111] for details. A central limit theorem for the OT plans in discrete spaces is derived in [78], where the corresponding limit distribution is usually non-Gaussian and is determined by the way the empirical plan is chosen. The specifics of the statement are highly technical and extend beyond the scope of this discussion. For a comprehensive treatment and the formal statement, we direct the reader to [78, Theorem 6.1].

3.4 Lower-complexity adaptation principle

In between the dual and primal formulations of OT we have the semidual formulation
$$T_c(P,Q)=\max_f\int f(x)\,dP(x)+\int f^c(y)\,dQ(y),\tag{21}$$
where
$$f^c(y)=\inf_{x\in\mathbb{R}^d}\{c(x,y)-f(x)\}$$
denotes the $c$-conjugate of $f$. In the seminal work [75], the authors realized that the $c$-conjugate operation preserves the complexity of the class, i.e., the covering numbers of a class of functions $\mathcal{F}$ are the same as those of $\mathcal{F}^c=\{f^c:f\in\mathcal{F}\}$, for any bounded cost. This clever observation implies that the statistical complexity of the OT cost adapts to the smaller of the complexities of the probability measures, as we explain below. If the cost function is Lipschitz, with constant $L$, and both measures are supported on compact sets, then the population and empirical OT potentials $f^*$ and $f_n$ belong to the class $\mathrm{Lip}_L(\mathrm{supp}(P))$ of $L$-Lipschitz functions over the support $\mathrm{supp}(P)$ of $P$. If $\mathrm{supp}(P)$ has low intrinsic dimension, then $\mathrm{Lip}_L(\mathrm{supp}(P))$ and, a fortiori, $(\mathrm{Lip}_L(\mathrm{supp}(P)))^c$ have small covering numbers, which implies the following result (for a more general statement we refer to [75, Theorems 3.3 and 3.8]). Here the complexity of the support $\mathrm{supp}(\mu)$ of a measure $\mu$ is quantified by its covering numbers $N(\epsilon,\mathrm{supp}(\mu),\|\cdot\|)$.

Theorem 3.3. Let $P$ and $Q$ be compactly supported probability measures on $\mathbb{R}^d$.
Assume that there exist C, ε_0, κ > 0 such that N(ε, supp(µ), ∥·∥) ≤ C ε^{−κ} for ε ≤ ε_0 and µ = P or µ = Q. Then

E[ |T_p(P_n,Q) − T_p(P,Q)| ] ≲ a_{n,κ,p},

where

a_{n,κ,p} = n^{−1/2} if κ < 2, or κ ≤ 3 and p > 1,
a_{n,κ,p} = log(n) n^{−1/2} if (κ,p) = (2,1), or κ = 4 and p > 1,
a_{n,κ,p} = n^{−1/κ} otherwise.

Theorem 3.3 implies that if one of the measures has intrinsic dimension κ ≤ 3, then the OT cost is not cursed by dimensionality for p > 1. Moreover, the optimization class in the dual formulation (2) is intrinsically Donsker. Then, by means of the functional delta method, we obtain the following result (again, this is a simplified version of the results in [73], which should be consulted for a much more complete account of the implications of this lower-complexity adaptation principle).

Theorem 3.4. Assume P, Q are compactly supported probabilities on R^d with finite moments of order 2p, where p ≥ 2 and d ≤ 3. Assume further that P has a density and connected support with a negligible boundary. Then

√n ( T_p(P_n,Q) − T_p(P,Q) ) →_w N(0, σ²_p(P,Q)),    (22)

where

σ²_p(P,Q) = ∫ f_0² dP − ( ∫ f_0 dP )² < ∞,

and f_0 is an OT potential from P to Q for the cost c_p.

The assumptions in Theorem 3.4 guarantee uniqueness (up to an additive constant) of the OT potential. This is the key to the Gaussianity of the limiting distribution, but other non-Gaussian limits can be obtained without this assumption. Now that we have established that the statistical complexity of the transport cost adapts to the lower intrinsic dimension of the marginal probability measures, a natural question arises: does the same hold for the
transport potentials or plans? This remains an open problem, except in the semi-discrete case, which we examine next.

3.5 Semi-discrete OT

An important application of Theorem 3.3 arises in the setting of semi-discrete optimal transport, where one of the probability measures, P or Q, is discrete while the other remains continuous. In this case, the entropy numbers of the discrete measure's support are inherently bounded, ensuring that the CLT for the empirical OT cost holds irrespective of the dimension. The next result was established by [29], independently of the related findings in [75, 73].

Theorem 3.5. Assume P, Q are probabilities on R^d with P finitely supported and Q having a density and connected support with a negligible boundary. If Q has a finite moment of order p, then

√n ( T_p(P_n,Q) − T_p(P,Q) ) →_w N(0, σ²_p(P,Q)),    (23)

where

σ²_p(P,Q) = ∫ f_0² dP − ( ∫ f_0 dP )² < ∞,

and f_0 is an OT potential from P to Q for the cost c_p. The same result holds if Q is finitely supported and P has a density with connected support, negligible boundary and a finite moment of order 2p.

In this semi-discrete setup it is possible to give CLTs for the optimal potentials, as we discuss next. In our presentation we focus only on the squared-Euclidean cost c_2. We assume that P is the discrete probability measure with atoms x_1, ..., x_N ∈ R^d and weights p_1, ..., p_N in the N-dimensional simplex, i.e., P = ∑_{i=1}^N p_i δ_{x_i}. The semidual formulation of this problem maximizes the function M(z) defined as

M(z) = ∑_{i=1}^N z_i p_i + ∫ min_{i=1,...,N} { ∥x_i − y∥² − z_i } dQ(y),    (24)

where the solution z* of (24) encodes the OT potentials and defines the Laguerre cells

Lag_k(z) := { y : ∥x_k − y∥² − z_k < ∥x_i − y∥² − z_i, ∀ i ≠ k }.

Semi-discrete optimal transport plays an important role in applications in economic modeling, quantization [65], partial differential equations [46], seismic imaging [91] or astronomy [83].
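The minimum inside (24) is attained on the Laguerre cells, and differentiating M coordinate-wise gives p_i minus the Q-mass of the i-th cell, which can be estimated from a sample of Q. A minimal Monte-Carlo sketch of this assignment rule (function names and the two-atom toy example are ours, not from the cited works):

```python
import numpy as np

def laguerre_assign(Y, X, z):
    """Index of the Laguerre cell Lag_i(z) containing each row of Y."""
    scores = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1) - z[None, :]
    return scores.argmin(axis=1)

def semidual_gradient(Y, X, z, p):
    """Monte-Carlo estimate of the gradient of M: p_i - Q(Lag_i(z))."""
    cells = laguerre_assign(Y, X, z)
    freq = np.bincount(cells, minlength=len(p)) / len(Y)
    return p - freq

rng = np.random.default_rng(0)
X = np.array([[0.0, 0.0], [1.0, 0.0]])   # two atoms of P
p = np.array([0.5, 0.5])                  # their weights
Y = rng.uniform(size=(4000, 2))           # sample from Q = Unif([0,1]^2)
g = semidual_gradient(Y, X, np.zeros(2), p)
# With z = 0 the two cells split [0,1]^2 at x = 1/2, so g is close to (0, 0).
```

Gradient ascent on this estimate is one standard way to solve the semidual problem numerically; at the optimum z* the gradient vanishes, i.e., Q(Lag_i(z*)) = p_i.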
For instance, in quantization the Laguerre cells provide the optimal assignment of data points drawn from Q to the centroids x_1, ..., x_N. In economic applications (see [45]), the discrete set {x_1, ..., x_N} represents fountains selling a common product, the weights p_1, ..., p_N represent the capacity of each fountain, Q is the distribution of consumers, and the cost function c represents minus the utility function U. The fountains aim at raising the prices, represented by the vector (v_1, ..., v_N), and a consumer y chooses a store according to the following criterion: maximize the utility while minimizing the price, i.e.,

arg min_{i=1,...,N} { ∥x_i − y∥² + v_i }.

The equilibrium of the system is reached when the prices are the OT potentials for the cost function equal to minus the squared-Euclidean distance, see [45]. Hence, the Laguerre cells represent the demand sets of the market, and the results obtained by [29, 103], which we describe below, provide confidence bands for the optimal prices and demand sets of random markets.

3.5.1 Potentials

In [77] it was shown that if Q is supported on a compact convex set Ω′ and has a continuous density bounded away from zero, the functional (24) is concave and twice differentiable with gradient

∇M(z) = ( −Q(Lag_1(z)) + p_1, ..., −Q(Lag_N(z)) + p_N ),    (25)

and Hessian matrix ∇²M(z) = ( ∂²M(z)/∂z_i∂z_j )_{i,j=1,...,N} with partial derivatives

∂²M(z)/∂z_i∂z_j = ∫_{∂Lag_i(z) ∩ ∂Lag_j(z)} q(y)/∥x_i − x_j∥ dH^{d−1}(y), if i ≠ j,

and

∂²M(z)/∂z_i² = − ∑_{j≠i} ∂²M(z)/∂z_i∂z_j.

Moreover, under these conditions the Hessian ∇²M(z*) is a negative semidefinite symmetric matrix having zero as a simple eigenvalue, corresponding to the constant vector (1, ..., 1). Hence, the optimal potential z* is then found where the
equilibrium Q(Lag_i(z*)) = p_i is attained, for all i = 1, ..., N. The nondegeneracy of the Hessian yields a Polyak-Lojasiewicz inequality for the dual functional. Then, due to the intrinsic parametric complexity of the problem, [29] proved that the rates of convergence of the empirical potential z^{(n)} are parametric and derived the following central limit theorem. Some of the assumptions have been relaxed by [52].

Theorem 3.6. Let Q be supported on a compact convex set on which it has a continuous density bounded away from zero. Then it holds that

√n ( z^{(n)} − z* ) →_w ( ∇²M(z*) )^{−1} (U_1, ..., U_N),

where (U_1, ..., U_N) is a centered multivariate Gaussian with covariance (19).

The other potential can be derived from z* via the relation f*(y) = min_{i=1,...,N} { ∥x_i − y∥² − z*_i }. The empirical potential f_n(y) is obtained in a similar fashion. The observation

∥f_n − f*∥_∞ = sup_{y∈Ω′} |f_n(y) − f*(y)| = max_i |z^{(n)}_i − z*_i| = ∥z^{(n)} − z*∥_∞,

and Theorem 3.6 yield the limit

√n ∥f_n − f*∥_∞ →_w ∥ ( ∇²M(z*) )^{−1} (U_1, ..., U_N) ∥_∞.

Laguerre cells are another interesting object related to semi-discrete optimal transport. They are set-valued mappings of the potentials z*. Since each cell is convex, we can measure the distances between the cells via their support functions. Since the cells, as defined above, are not bounded, we truncate them in a sufficiently large ball containing the support of Q and define Lag^R_i(z) = Lag_i(z) ∩ R B_d. Recall that the support function of a bounded convex set S is the convex conjugate of the convex indicator function of S, i.e., h_S(x) = sup_{y∈S} ⟨x, y⟩. The values on the unit sphere S^{d−1} of the support function characterize a bounded convex set.
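Since a bounded convex set is characterized by its support function on the sphere, distances between convex sets reduce to function norms of support-function gaps. A small numerical sketch with Euclidean balls, for which h_{B(c,r)}(v) = ⟨c, v⟩ + r (our construction, not code from [29]):

```python
import numpy as np

def support_gap_sup(h1, h2, d, n_dirs=5000, seed=0):
    """Approximate sup_{v in S^{d-1}} |h_A(v) - h_B(v)| over sampled directions."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n_dirs, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)   # uniform on S^{d-1}
    return float(np.max(np.abs(h1(U) - h2(U))))

# Support function of a Euclidean ball B(c, r): h(v) = <c, v> + r.
def ball(c, r):
    return lambda U: U @ c + r

# For balls, the Hausdorff distance is ||c1 - c2|| + |r1 - r2| = 1.5 here.
val = support_gap_sup(ball(np.array([0.0, 0.0]), 1.0),
                      ball(np.array([1.0, 0.0]), 0.5), d=2)
```

The sampled supremum approximates the Hausdorff distance from below; replacing the maximum by an average of |h_A − h_B|^p gives the L^p metrics defined next.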
Hence, for p ∈ [1,∞) we define the L^p metric

d_p(A,B) := ( ∫_{S^{d−1}} |h_A − h_B|^p dH^{d−1} )^{1/p},

where H^{d−1} is the Hausdorff measure on S^{d−1}, and the uniform norm d_∞(A,B) = sup_{v∈S^{d−1}} |h_A(v) − h_B(v)|, which is the Hausdorff distance between compact convex sets. Using the characterization

h_{Lag^R_k(z)}(v) = min_{t_j>0} { ∑_{j≠k} t_j ( ∥x_k∥² − ∥x_j∥² − z_k + z_j ) + R ∥ v − ∑_{j≠k} t_j (x_k − x_j) ∥ }

of the support functions of the Laguerre cells, [29] proved the following limit theorem.

Theorem 3.7. Let Q be supported on a compact convex set, with a continuous density bounded away from zero. Then the sequence

√n ( h_{Lag^R_k(z_n)} − h_{Lag^R_k(z*)} )

has a Gaussian limit in distribution in L^p for all p ∈ [1,∞).

While no CLT in Hausdorff distance is available for the Laguerre cells, [29] provides confidence intervals for the Hausdorff distance between the cells (see Remark 4.7, ibid).

3.5.2 Plans and maps

Limit theorems for the L^p-distances between the support functions of the cells are useful to provide confidence bands on the demand sets in economic applications. However, these distances do not take into account the element of the discrete set {x_1, ..., x_N} associated with the cell, which is useful in some applications such as quantization. This can be measured by means of the transport map, i.e., the a.s.-defined gradient of the convex function φ_n(y) = ∥y∥²/2 − g_n(y)/2. Here the most popular distances are the L^γ(Q) norms for γ ≥ 1. However, as [52] proved, the OT maps do not satisfy a CLT in L^γ(Q); they do, however, satisfy a central limit theorem in dual topologies. We denote by C^β(Ω′;R^d) the Banach space of continuous β-Hölder functions and by (C^β(Ω′;R^d))′ its dual space.

Theorem 3.8.
Let Q be supported on a compact convex set and have a continuous density bounded away
from zero. Then:

1. There exists a nonzero random variable V_γ such that

n^{1/(2γ)} ∥∇φ_n − ∇φ*∥_{L^γ(Q)} →_w V_γ.

2. There exists a tight random element G in the dual Banach space (C^β(Ω′;R^d))′ such that the element

C^β(Ω′;R^d) ∋ f ↦ √n ⟨∇φ_n − ∇φ*, f⟩_{L²(Q)}

of (C^β(Ω′;R^d))′ converges in distribution to G in (C^β(Ω′;R^d))′.

Apart from these two points, [52] proved a central limit theorem for √n ⟨∇φ_n − ∇φ*, f⟩_{L²(Q)} for a fixed test function f whose discontinuities are negligible w.r.t. the (d−1)-dimensional Hausdorff measure. Proposition 4 in [103] uses this result to show that r_n(∇φ_n − ∇φ*) does not possess nonzero distributional limits in L²(Q). Their elegant and short proof uses the Hilbertian structure of L²(Q). Inspired by that, we derive the same conclusion in any L^γ(Q) space for γ ∈ (1,∞). Assume that r_n(∇φ_n − ∇φ*) has a nonzero limit U in L^γ(Q); then r_n = n^{1/(2γ)} by the first point of Theorem 3.8. The continuous mapping theorem yields

sup_{∥f∥_{C^β(Ω′;R^d)} ≤ 1} n^{1/(2γ)} ⟨∇φ_n − ∇φ*, f⟩_{L²(Q)} →_w sup_{∥f∥_{C^β(Ω′;R^d)} ≤ 1} ⟨U, f⟩_{L²(Q)},

so that, by the second point of Theorem 3.8, we get

P( ⟨U, f⟩_{L²(Q)} = 0, ∀ f ∈ D ) = 1,

where D is a countable dense subset of C^β(Ω′;R^d). Since C^β(Ω′;R^d) is dense in L^{γ*}(Q), where γ* is the conjugate exponent of γ, we get U = 0 in L^γ(Q).

Let us delve into the details of the proof of [50]. First, the mismatch of rates among the different L^γ norms is due to the relation

∥∇φ_n − ∇φ*∥_{L^γ(Q)} = ( ∑_{i,j=1}^K ∥x_i − x_j∥^γ Q( Lag_i(z*) ∩ Lag_j(z^{(n)}) ) )^{1/γ},

which implies that ∥∇φ_n − ∇φ*∥_{L^γ(Q)} has the same convergence rate as

( ∑_{i≠j} Q( Lag_i(z*) ∩ Lag_j(z^{(n)}) ) )^{1/γ}.

Hence, it depends on γ. Moreover, the limit theorems follow from that of Q( Lag_i(z*) ∩ Lag_j(z^{(n)}) ).
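The overlap masses Q(Lag_i(z*) ∩ Lag_j(z^{(n)})) driving these rates are easy to estimate by sampling: assign each draw from Q to its Laguerre cell under both potential vectors and tabulate the co-occurrences. A Monte-Carlo sketch (names and the two-atom example are ours):

```python
import numpy as np

def cell_overlaps(Y, X, z_star, z_n):
    """M[i, j] estimates Q(Lag_i(z*) ∩ Lag_j(z_n)) from a sample Y of Q."""
    def assign(z):
        return (((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1) - z).argmin(1)
    i, j = assign(z_star), assign(z_n)
    M = np.zeros((len(z_star), len(z_n)))
    np.add.at(M, (i, j), 1.0 / len(Y))    # accumulate empirical mass
    return M

rng = np.random.default_rng(0)
X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = rng.uniform(size=(20000, 2))          # Q = Unif([0,1]^2)
M = cell_overlaps(Y, X, np.zeros(2), np.array([0.0, 0.1]))
# Perturbing z moves the cell boundary from x = 0.5 to x = 0.45, so the
# off-diagonal entry M[0, 1] is about 0.05 while M[1, 0] is 0.
```

As z^{(n)} → z*, only the off-diagonal entries shrink, which is the source of the γ-dependent rates above.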
Moreover, the relation

⟨∇φ_n − ∇φ*, f⟩_{L²(Q)} = ∑_{i,j=1}^K ∫_{Lag_i(z*) ∩ Lag_j(z^{(n)})} ⟨x_i − x_j, f(y)⟩ dQ(y)

indicates that the limit of the family of indicator functions I[ Lag_i(z*) ∩ Lag_j(z^{(n)}) ], integrated against test functions, determines all the limits of Theorem 3.8. The strategy of [52] is to show the differentiability of the functional

z ↦ ∫_{Lag_i(z*) ∩ Lag_j(z)} f(y) dQ(y)

at z* in order to apply the delta method and Theorem 3.6.

4 Regularized problems

Beyond the cases described in Section 3, estimation of OT (costs, potentials or plans) is affected by the curse of dimensionality. As noted in the Introduction, this, together with computational considerations, has motivated the consideration of regularized versions of OT. We devote this section to distributional limit theory in this setup.

4.1 Entropy regularized optimal transport

Entropy regularized optimal transport (EOT), popularized by [20], corresponds to the minimization of the OT cost together with a penalty:

E_ε(P,Q) = min_{π∈Π(P,Q)} ∫_{R^d×R^d} c(x,y) dπ(x,y) + ε·H(π | P⊗Q),    (26)

where H(π | P⊗Q) denotes the Kullback-Leibler divergence of π w.r.t. the product probability measure P⊗Q and ε > 0 balances the weight of the penalty. The minimizer of (26) is called the entropic optimal transport (EOT) plan, while the maximizers (f_ε, g_ε) of the dual functional

∫ { f(x) + g(y) − ε·exp( (f(x) + g(y) − c(x,y))/ε ) + ε } d(P⊗Q)(x,y)

are called EOT potentials.
They can be equivalently defined as the unique solutions of the Schrödinger system

∫ exp( (f_ε(x) + g_ε(y) − c(x,y))/ε ) dQ(y) = 1, x ∈ R^d,
∫ exp( (f_ε(x) + g_ε(y) − c(x,y))/ε ) dP(x) = 1, y ∈ R^d.    (27)

EOT is undoubtedly the most popular among all the variations of OT, mainly due to three factors: (i) the Sinkhorn algorithm [109] is an iterative fixed-point algorithm with linear convergence rate [40, 13], which allows for the efficient computation of EOT plans, potentials and costs [20]; (ii) its connections with the Schrödinger bridge [106] and the theory of large deviations [85]; and (iii) it avoids
the curse of dimensionality for a fixed regularization parameter ε > 0 [48, 90]. Recently, other types of penalization for the OT problem have been proposed in the literature [94, 6, 97]. These approaches may be particularly applicable in cases where a sparse approximation of the OT plan is desired [60, 61, 117]. We do not cover the details of these variations in this survey. However, it is worth mentioning that their statistical complexity was first studied in [4] through the control of dual potential smoothness. Following this approach, [4] obtained convergence rates that suffer from the curse of dimensionality. In contrast, [55] demonstrated, using a different proof technique, that regularized transport problems generally do not suffer from the curse of dimensionality. It is worth mentioning, although we will not go into detail, the recent preprint [92], which studies central limit theorems for EOT in the regime where the regularization parameter ε tends to zero.

4.1.1 Costs

Proving a CLT requires control of the regularity of the EOT potentials (f_ε, g_ε). From Equation (27), we first observe that they inherit the regularity of the cost function. Hence, for the squared-Euclidean cost, the EOT potentials are C^∞. Moreover, for P and Q supported on compact sets Ω and Ω′, the derivatives of (f_ε, g_ε) are bounded by a constant depending on the diameters of Ω, Ω′ and the regularization parameter ε (subgaussian tails yield the same result, cf. [48, 90]). That is, both the empirical (f_{ε,n}, g_{ε,n}) and population (f_ε, g_ε) EOT potentials belong to a ball in C^s(Ω)×C^s(Ω′) of fixed radius, irrespective of the sample size, for all s ∈ N. Hence the random sequence (f_{ε,n}, g_{ε,n}) belongs to a Donsker class. In [90, 48] these estimates have been used to show that

E[ |E_ε(P_n,Q) − E_ε(P,Q)| ] ≤ C/√n

for smooth costs.
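For discrete marginals, the quantities above are computable with the Sinkhorn algorithm mentioned in (i): alternating scalings of the Gibbs kernel K = exp(−C/ε) that enforce the two marginal constraints in turn. A minimal sketch (our variable names, not code from the cited works):

```python
import numpy as np

def sinkhorn_plan(p, q, C, eps, n_iter=1000):
    """Sinkhorn iterations: alternating scalings of the Gibbs kernel."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)                 # enforce column marginals
        u = p / (K @ v)                   # enforce row marginals
    return u[:, None] * K * v[None, :]    # plan pi = diag(u) K diag(v)

p = np.array([0.5, 0.5]); q = np.array([0.2, 0.8])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
pi = sinkhorn_plan(p, q, C, eps=0.5)
cost = np.sum(C * pi)                     # transport part of E_eps(p, q)
# pi has (numerically) the prescribed marginals p and q.
```

Each iteration is two matrix-vector products, which is what makes the plug-in estimators E_ε(P_n,Q) discussed here cheap to evaluate in practice.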
In the same work, it is shown that in the bias-variance decomposition of E_ε(P_n,Q) − E_ε(P,Q), the variance term E_ε(P_n,Q) − E[E_ε(P_n,Q)] can be handled with the Efron-Stein linearization of [33], leading to

√n ( E_ε(P_n,Q) − E[E_ε(P_n,Q)] ) →_w N(0, Var_P(f_ε)).

Additionally, [35] showed that the bias term |E[E_ε(P_n,Q)] − E_ε(P,Q)| converges to zero faster than the variance, again with arguments based on the uniform smoothness of the potentials. As a consequence, the following CLT holds.

Theorem 4.1. Assume that P and Q are subgaussian probabilities on R^d and c(x,y) = ∥x−y∥². Then

√n ( E_ε(P_n,Q) − E_ε(P,Q) ) →_w N(0, Var_P(f_ε)),

where f_ε is an EOT potential.

A different approach to obtain the same result was proposed by [53], using that (f_{ε,n}, g_{ε,n}) belongs to a Donsker class and employing the functional delta method, yet under the restrictive assumption that the distributions are compactly supported. In the discrete setting the same technique also provides the central limit theorems, cf. [79, 72]. This last technique, even if effective for a wider range of problems, does not in general provide the sharpest conditions, as illustrated later by [100]. Avoiding the use of empirical process theory and exploiting the strong convexity of the dual formulation of EOT, [100] derived the same bound on the bias term as [35]. This direction was used by [58] to derive the CLT for EOT with general cost functions under more general assumptions, as follows.

Theorem 4.2. Assume that the cost function c : R^d×R^d → R is bounded and measurable on the support of P⊗Q. Then it holds that

√n ( E_ε(P_n,Q) − E_ε(P,Q) ) →_w N(0, Var_P(f_ε)).

4.1.2 Potentials and Plans

As stated previously, the empirical EOT potentials are the unique solutions (up
to additive constants) of the empirical version of the Schrödinger system (27), that is, the version in which we replace P with P_n. It is convenient at this point to introduce the notation

Γ(f,g) := ( ∫ exp( (f(x) + g(y) − c(x,y))/ε ) dQ(y), ∫ exp( (f(x) + g(y) − c(x,y))/ε ) dP(x) ),

and, similarly, we write Γ_n for the empirical version. The empirical EOT potentials are then defined through the equation Γ_n(f_{ε,n}, g_{ε,n}) = (1,1). This definition enables us to reframe the estimation of EOT potentials as a Z-estimation problem in the nomenclature of empirical processes. In this framework, CLTs are obtained using the implicit function theorem in Banach spaces. This approach was employed by [59, 54] to prove a CLT for the EOT potentials for C^∞ costs. The derivative of the Schrödinger operator Γ is of the form L = I + A, where A is a compact operator in C(Ω)×C(Ω′). The first nontrivial eigenvalue of A is −1, which has multiplicity one and is generated by the constant function (1,−1). Any other eigenvalue of A is bounded away from one by Jensen's inequality. Hence, the Fredholm alternative yields the invertibility of the linearized operator L (see [14]) in C(Ω)×C(Ω′)/∼, where (f,g) ∼ (0,0) if (f,g) = (c,−c) for c ∈ R.

Actually, their approach can be extended to Lipschitz costs by means of the following observation. Up to a term of order o_P(∥f_ε⊕g_ε − f_{ε,n}⊕g_{ε,n}∥_∞), the expansion

Γ(f_ε, g_ε) − Γ(f_{ε,n}, g_{ε,n}) = L(f_ε − f_{ε,n}, g_ε − g_{ε,n})

holds. It now suffices to prove a central limit theorem for the left-hand side of the previous equation in order to derive the limiting distribution of the EOT potentials. This follows from the fact that the random function e^{−c(X_i,·)/ε} satisfies a CLT in C(Ω) for any Lipschitz cost (see [82] for a discussion of the conditions under which a uniform central limit theorem holds).
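In the discrete case the system Γ(f,g) = (1,1) can be checked directly: log-domain Sinkhorn updates alternately enforce each equation of (27), and at convergence both hold simultaneously. A numerical sketch (our names; not code from the cited works):

```python
import numpy as np

def schrodinger_potentials(p, q, C, eps, n_iter=2000):
    """Log-domain Sinkhorn: each update enforces one equation of (27)."""
    f = np.zeros_like(p); g = np.zeros_like(q)
    for _ in range(n_iter):
        g = -eps * np.log(np.exp((f[:, None] - C) / eps).T @ p)
        f = -eps * np.log(np.exp((g[None, :] - C) / eps) @ q)
    return f, g

eps = 0.2
p = np.array([0.5, 0.5]); q = np.array([0.2, 0.8])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
f, g = schrodinger_potentials(p, q, C, eps)
E = np.exp((f[:, None] + g[None, :] - C) / eps)
lhs1 = E @ q        # first Schrödinger equation, one entry per atom x_i
lhs2 = E.T @ p      # second equation, one entry per atom y_j
# Both vectors are (numerically) vectors of ones.
```

Running the same iterations with P replaced by the empirical measure P_n yields the empirical potentials (f_{ε,n}, g_{ε,n}) discussed in the text.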
We refer to [59] for the description of the covariance of the Gaussian process in the following theorem.

Theorem 4.3. Let P and Q be supported on the compact sets Ω and Ω′. Assume that the cost function is Lipschitz. Then there exists a centered Gaussian process G in Ω×Ω′ with continuous sample paths such that

√n ( f_ε − f_{ε,n}, g_ε − g_{ε,n} ) →_w G in C(Ω×Ω′).

If the cost is C^s, then the limit holds in C^{s−1}(Ω×Ω′).

From this result a CLT for EOT plans can be established through the asymptotic analysis of the dual potentials and the relation

dπ_ε(x,y) = exp( (f_ε(x) + g_ε(y) − c(x,y))/ε ) dP(x) dQ(y)

and its empirical counterpart. The first work showing central limit theorems for EOT plans was [71], for a different estimator, where the authors conjectured the validity of the result for the plug-in estimator π_{n,ε} and any bounded cost function (cf. Remark 1, ibid). In [59, 54], the conjecture was proven for smooth cost functions. Finally, [58] relaxed the regularity assumption on the cost, proving the whole conjecture of [71]. However, this generalization introduces significant technical challenges, as the linearization of the empirical Schrödinger system must now be carried out within the sequence of Hilbert spaces L²_0(P_n), consisting of P_n-centered elements of L²(P_n). A key idea in their approach is to express the inverse of the derivative of Γ_n, which takes the form I + A_n, through the series expansion ∑_{k=0}^∞ (−A_n)^k. Since the operator norm of A_n is uniformly bounded by a constant strictly smaller than one, this expansion provides a powerful tool for the analysis. In particular, it
enables one to interpret the difference (f_ε − f_{ε,n}, g_ε − g_{ε,n}) as an infinite-order V-statistic, facilitating the derivation of the central limit theorem in this more general setting.

Theorem 4.4. Fix η : R^d×R^d → R. Assume that the cost function c : R^d×R^d → R and η are bounded and measurable on the support of P⊗Q. Then it holds that

√n ∫ η d(π_{ε,n} − π_ε) →_w N(0, σ²_λ(η)).

The variance of the limit described in the previous result is a technical expression and can be found in [58, 59, 71]; see also [79, 72] for the discrete setting. Applications of the previous central limit theorems for the potentials include the derivation of smoothness and convergence rates of Gaussian processes indexed by distributions, using a kernel based on the entropic transportation cost, as defined in [2, 1]. They also help provide confidence bands for the Sinkhorn cost, as initially proposed by [20], limits for the entropic optimal transport map [107] and optimal-transport-based colocalization curves [79]. In the following section we underline one of the most important applications for statistical inference, the Sinkhorn divergence [49].

4.1.3 Sinkhorn Divergence

As noted in the Introduction, the OT cost (for some choices of cost functions) induces a distance over the space of distributions, making it a powerful tool for data analysis and statistics. For instance, goodness-of-fit and independence tests have been proposed using this property [56, 69]. However, as we have seen earlier, the curse of dimensionality hinders its use in general dimensions, thus reducing its applicability. The EOT cost with squared-Euclidean cost seems a natural substitute, but it does not define a distance, nor even a divergence, between probability measures: it satisfies neither the triangle inequality nor the property that it equals zero if and only if the measures are identical.
A natural remedy is proposed by [49], where the Sinkhorn divergence is defined as

D_ε(P,Q) = E_ε(P,Q) − (1/2) ( E_ε(P,P) + E_ε(Q,Q) ),

with the cost function being the squared Euclidean distance. Although the Sinkhorn divergence does not satisfy the triangle inequality, it does define a divergence in the common sense of the term (recall that a function D : P(R^d)×P(R^d) → R defines a divergence if and only if it is symmetric and D(P,Q) ≥ 0, with equality if and only if P = Q), cf. [37]. This makes it a natural substitute for the Wasserstein distance in high-dimensional data analysis problems. The results obtained by [59, 54] enable the construction of confidence bands for the Sinkhorn divergence under the two hypotheses H_0 : P = Q and H_1 : P ≠ Q. The test under the alternative hypothesis can be easily derived from the results for the cost. Under the null hypothesis a second-order analysis is required, where the central limit theorem for the potentials plays a key role. Here, we present the result in the form described in [59]. For a description of the limiting variances we refer to [58, 59] for the continuous case and [79, 72] for the discrete case.

Theorem 4.5. Let the cost function be the squared Euclidean distance. Then it holds that:

1. for P ≠ Q,

√n ( D_ε(P_n,Q_n) − D_ε(P,Q) ) →_w N(0, σ²(P,Q)),

2. and for P = Q,

n D_ε(P_n,P) →_w (ε/2) ∑_{i=1}^∞ λ_i² N_i²,

where {N_i}_{i∈N} is a sequence of i.i.d. random variables with N_i ∼ N(0,1) and {λ_i}_{i∈N} ⊂ [0,∞) is such that ∑_{i=1}^∞ λ_i² < ∞.

The limiting distribution depends on P,
hence, the test statistic D_ε(P_n,P) is not distribution-free. Finding consistent estimators of the sequence {λ_i}_{i∈N} ⊂ [0,∞) is an open problem. We believe that the series expansion of the linearized operators provided in [58] might give a consistent estimator of these parameters.

4.2 Smooth optimal transport

The curse of dimensionality is not a problem specific to classical OT, and it shows up in several other statistical problems. One of the most classical examples is the density estimation problem, where the kernel density estimator

p̂^{(k)}_h(x) = (1/(h^d n)) ∑_{i=1}^n K( (x − X_i)/h )    (28)

or the wavelet density estimator p̂^{(w)}_h have rates of convergence depending on two factors: the smoothness of the density p of P and the dimension. This means that some plug-in kernel/wavelet density estimators adapt to the smoothness of the unknown density, giving almost parametric rates if the dimension is small compared to the degree of smoothness. As we have seen, the plug-in estimator of the OT cost adapts to the underlying complexity of the measures [75] but not to the smoothness of the densities [39]. Hence, different types of estimators are required to leverage a priori smoothness information about the density. The first contribution in this direction was due to [76], where the minimax rates of convergence for the estimation of smooth optimal transport maps are provided. The upper bound is attained via an estimator based on the minimization of the empirical semi-dual OT problem over truncated wavelet expansions, which turns out to be computationally infeasible. More tractable estimators have been proposed in [87, 23, 66], based on solving the OT problem between the continuous smooth density estimators described above. Obviously, to approximate the OT-related quantities, the density estimators should converge to their population counterparts. Hence, bandwidths (for kernel estimators) or truncations (for wavelets) should be adapted to the sample size.
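For concreteness, a one-dimensional sketch of the kernel density estimator (28) with a Gaussian kernel and a fixed bandwidth (variable names are ours):

```python
import numpy as np

def kde(x_eval, X, h):
    """Estimator (28) for d = 1 with a standard Gaussian kernel K."""
    K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return K((x_eval[:, None] - X[None, :]) / h).mean(axis=1) / h

rng = np.random.default_rng(0)
X = rng.standard_normal(5000)             # sample from P = N(0, 1)
val = float(kde(np.array([0.0]), X, h=0.3)[0])
# E[p_hat(0)] equals the N(0, 1 + h^2) density at 0, roughly 0.38:
# convolving N(0,1) with the kernel at bandwidth h inflates the variance.
```

The bias visible here (the estimator targets the smoothed density p ∗ K_h, not p itself) is precisely why bandwidths must shrink with n when such estimators are plugged into OT problems.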
4.2.1 Smooth Wasserstein distances

In this section we focus on kernel-type estimators where the bandwidth does not change with the sample size. In particular, we focus on the smooth p-Wasserstein distance, proposed by [51, 50],

W^σ_p(P,Q) = W_p( N(0,σ²I)∗P, N(0,σ²I)∗Q ),

where N(0,σ²I)∗P denotes the convolution of P with an N(0,σ²I) measure. The smooth OT plans are denoted by π_σ, which by [47] are concentrated on the graph of a map (for p ≠ 1). A little thought shows that W^σ_p defines a distance on the space of probability measures with finite p-th moment. The smooth OT plans and cost converge to their unregularized counterparts as the regularization parameter decreases. Hence, its empirical version W^σ_p(P_n,Q), if computable, can be used as a one-sample test statistic. As with EOT, for a fixed regularization parameter, smooth OT avoids the curse of dimensionality, and the following limit theorem holds [52]. We state the result for compactly supported measures; however, it holds in greater generality, cf. Equation (4) in [52].

Theorem 4.6. Fix σ > 0 and p > 1. Assume that P and Q are compactly supported. Then:

(i) If P = Q, √n W^σ_p(P_n,P) converges in distribution to the dual norm of a centered Gaussian process in a negative Sobolev space;

(ii) If P ≠ Q, √n ( W^σ_p(P_n,Q) − W^σ_p(P,Q) ) converges in distribution to a non-degenerate centered Gaussian random variable.

Remark 4.7. A similar result holds for p = 1, where under the
null hypothesis P = Q, the limit √n W^σ_p(P_n,P) also has a dual norm as limiting distribution [102, 95]. However, for p = 1 and under the alternative, the limiting distribution is in general not Gaussian, due to the non-uniqueness of the potentials.

The proof technique follows the functional delta method and the fact that the first variations of the Wasserstein distances are dual Sobolev norms, cf. [113, Exercise 22.20] and [52, Remark 3.2]. The authors also show the consistency of the bootstrap (cf. Proposition 3, ibidem). The same proof adapts to variations of smooth Wasserstein distances based on convolutions with different smooth measures, such as the Neumann heat semigroup of a bounded strictly convex set Ω with smooth boundary. Assuming that P and Q are supported in Ω, the linearization of the optimal transport map for the squared-Euclidean cost is possible following the approach described in the next section. Hence, in this setting, central limit theorems for the regularized maps hold for sufficiently smooth domains. Finally, we point out that the contraction of the heat semigroup w.r.t. the Wasserstein distance, i.e., the inequality

W^σ_p(P_n,Q) ≤ e^{−cσ} W_p(P_n,Q),

implies that the one-sample goodness-of-fit test statistic W^σ_p(P_n,Q) provides uniform conservative control of W_p(P_n,Q) under the alternative H_1 : Q ≠ P. However, the same bound implies that the test becomes less powerful as σ increases. This is expected, as N(0,σ²I)∗P becomes exponentially close to the fundamental solution of the heat equation at time t = σ as the time parameter increases. Hence both N(0,σ²I)∗P_n and N(0,σ²I)∗Q become almost indistinguishable for large values of σ.

4.2.2 Smooth estimation of the optimal transport map

We saw that EOT and smooth OT avoid the curse of dimensionality, with plans and costs satisfying a CLT. However, the centering variable is not the unregularized counterpart.
In this section we focus on a class of estimators which approximate the (sufficiently smooth) OT map and satisfy a CLT. Here, to the best of our knowledge, there is only one estimator for which a central limit theorem has been obtained. Such a result was established in the pioneering work of [86] for methods based on kernel density estimators and measures supported on the flat torus (i.e., periodic measures and squared periodic distance cost). Let us review the main ideas of [86] and the difficulties in extending them to the Euclidean case. Let P̂^{(k)}_h be the probability measure with density (28). Let Ω̂_P, Ω_P and Ω_Q be the supports of P̂^{(k)}_h, P and Q, respectively. Assume that there exist differentiable gradients of convex functions T̂_h = ∇φ̂_h and T = ∇φ pushing forward Q to P̂^{(k)}_h and to P, respectively. Then they solve the Monge-Ampère equations

det(∇²φ̂_h) = q / ( p̂^{(k)}_h ∘ ∇φ̂_h ) s.t. ∇φ̂_h(Ω_Q) ⊂ Ω̂_P

and

det(∇²φ) = q / ( p ∘ ∇φ ) s.t. ∇φ(Ω_Q) ⊂ Ω_P.

On the flat torus, the boundary conditions are periodic. The main idea of [86] is to treat these equations as a Z-estimation problem. Hence, one needs to show the Fréchet differentiability of the Monge-Ampère equation at the pair (P,φ), with the partial derivative w.r.t. φ admitting a bounded inverse on appropriate Banach spaces of differentiable functions. The interior part of the equation is easy to linearize if the probability densities are bounded away from zero and C^{1,α} (cf. [38, Chapter 3]). The resulting linearized operator is elliptic. Under the additional assumption

λI ≤ ∇²φ ≤ ΛI, 0 < λ ≤ Λ < ∞,    (29)

the linearized equation is non-degenerate elliptic. On the flat torus,
this linearized equation admits a unique classical solution (up to additive constants) for smooth data (hence, the linearized operator is invertible), see [86]. In the Euclidean case, the boundary conditions need to be linearized too. This is the first major difficulty for Euclidean data. However, it can be solved if the smoothed density P̂^{(k)}_h has a support Ω̂_P converging to Ω_P in a suitably smooth way, for instance, if Ω̂_P = Ω_P holds for all n ∈ N. In this case, one obtains a linearized equation that is elliptic with oblique boundary conditions, and its solvability holds as in the periodic case (cf. [84, 62]). One thus has the relation

∇φ̂_h − ∇φ = L^{−1}( p̂^{(k)}_h − p ) + o( ∥p̂^{(k)}_h − p∥_{C^{1,α}} ),

where L^{−1} is the inverse of the linearized Monge-Ampère equation at φ. The error term in the previous display is not sharp, and it can be improved to O( ∥p̂^{(k)}_h − p∥_{C^{0,α}} ∥p̂^{(k)}_h − p∥_{C^{1,α}} ). The suboptimal error derived in the first argument is due to the fact that the linearization of the right-hand side of the Monge-Ampère equation requires control of a further derivative of the difference p̂^{(k)}_h − p. This can be improved by linearizing instead the right-hand side of

( p ∘ ∇φ̂_h ) · det(∇²φ̂_h) = ( p ∘ ∇φ̂_h ) · q / ( p̂^{(k)}_h ∘ ∇φ̂_h ),    (30)

with the same boundary conditions (oblique or periodic, depending on the setting). The linearization of the left-hand side of (30) is again a non-degenerate elliptic operator, while the linearization of the right-hand side of (30) behaves in C^{0,α} as

∥ ( p ∘ ∇φ̂_h ) · q / ( p̂^{(k)}_h ∘ ∇φ̂_h ) − q ∥_{C^{0,α}} ≤ C ∥p̂^{(k)}_h − p∥_{C^{0,α}}.

From here we get the estimate

∥∇φ̂_h − ∇φ∥_{C^{1,α}} ≤ C ∥p̂^{(k)}_h − p∥_{C^{0,α}},    (31)

which allows us to get the claimed improvement (cf. [86, Theorem 2]).
Hence, if there were a central limit theorem for $\hat p^{(k)}_h$ as an element of $C^{0,\alpha}$, one would get a central limit theorem for $\nabla\hat\varphi_h$ in $C^{1,\alpha}$. This, however, cannot hold (cf. [86, Theorem 7]). Limits can only be obtained in negative Sobolev norms, or pointwise for dimensions greater than 3. To get the pointwise limits, [86] developed a bias-variance decomposition of $\nabla\hat\varphi_h-\nabla\varphi$, where the bias term is represented by $\nabla\varphi_h-\nabla\varphi$ and the variance by $\nabla\hat\varphi_h-\nabla\varphi_h$, with $\nabla\varphi_h$ the transport map from $Q$ to the probability measure $P$ convolved with the kernel with bandwidth $h$, whose density we call $p_h$. Then, using the estimates
\[
\|p_h-p\|_{C^{0,\alpha}} \le h^{s-\alpha}, \qquad \|p_h-\hat p_h\|_{C^{0,\alpha}} \le \sqrt{\frac{\log(h^{-1})}{n\,h^{2\alpha+d}}},
\]
where $s$ is the number of continuous derivatives of the densities, together with the previous estimates, [86] gets the bound
\[
\big\|\nabla\hat\varphi_h-\nabla\varphi - L^{-1}\big(\hat p^{(k)}_h-p\big)\big\|_{C^{2,\alpha}} \le h^{2s-1-3\alpha} + \sqrt{\frac{1}{n\,h^{d+2+4\alpha}}},
\]
for appropriate choices of the bandwidth $h$. The next step is to show that $L^{-1}(\hat p^{(k)}_h-p)$ has a non-degenerate pointwise limit with rate slower than the right-hand side of the previous display. This is done by first showing that the bias term tends to zero faster than the variance term; then, by means of Fourier analysis to guarantee Lyapunov's condition, the central limit theorem for the variance term holds. This last step and the control of the boundary bias of the kernel density estimators are the main hurdles for Euclidean generalizations of the following nice result derived by [86] for periodic data.¹

¹One way to fix the boundary bias issue is to work with the boundary-corrected density estimators of [87, p. 23], which leads to minimax optimal rates
[3].

Theorem 4.8. Fix $d\ge 3$ and $s\ge d$. Let $p,q$ be densities on $\mathbb R^d/\mathbb Z^d$ such that $\log(p)$ and $\log(q)$ are $C^s$. Assume that $K\in C_c^\infty((0,1)^d)$ is an even kernel such that its Fourier transform $\mathcal F(K)$ satisfies
\[
\sup_{x\neq 0}\frac{\big|[\mathcal F(K)](x)-1\big|}{\|x\|^{s+1}}<\infty.
\]
Then for any $x\in\mathbb R^d/\mathbb Z^d$ and $h=cn^{-\beta}$ for some $c>0$ and $\frac{1}{d+2s}<\beta<\frac{1}{d+4}$, the sequence
\[
\sqrt{n h^{d-2}}\,\big(\nabla\hat\varphi_h(x)-\nabla\varphi(x)\big)
\]
admits a non-degenerate Gaussian limit in distribution.

5 Sliced optimal transport

An alternative way to mitigate the computational and statistical challenges that arise in high-dimensional OT problems is to leverage the properties of the one-dimensional setting by computing OT distances between projected distributions. Among the various notions explored in the existing literature, we focus on the most prominent ones: the sliced and max-sliced Wasserstein distances, defined respectively as
\[
S_p(P_n,Q) = \int_{S^{d-1}} T_p\big(\mathrm{Pr}_{u\,\sharp}P_n, \mathrm{Pr}_{u\,\sharp}Q\big)\, d\sigma(u), \tag{32}
\]
and
\[
M_p(P_n,Q) = \sup_{u\in S^{d-1}} T_p\big(\mathrm{Pr}_{u\,\sharp}P_n, \mathrm{Pr}_{u\,\sharp}Q\big), \tag{33}
\]
where $\sigma$ denotes the uniform measure on the unit sphere $S^{d-1}$ and $\mathrm{Pr}_u$ the projection onto the linear subspace generated by $u\in S^{d-1}$. For the max-sliced version (and a subspace generalization), [96] obtained expectation bounds with rates independent of the dimension. Later, [88] adapted these bounds to the sliced setting, extending their validity to a trimmed version of (32), and proved the first distributional limit for the sliced Wasserstein distance,
\[
\sqrt n\,\big(S_p(P_n,Q)-S_p(P,Q)\big) \longrightarrow_w N\big(0, v_p^2(P,Q)\big), \tag{34}
\]
together with its extension to the trimmed version. Denoting by $F_{u,n}$ and $F_u$ the CDFs of the projected distributions along the direction $u$, their asymptotic result is based on the weak convergence of the empirical process
\[
G_n(u,x) = \sqrt n\,\big(F_{u,n}(x)-F_u(x)\big),
\]
along with Hadamard differentiability and the functional delta method.
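For intuition, the integral in (32) can be approximated by Monte Carlo over random directions, using the fact that one-dimensional transport between two empirical measures of equal size reduces to matching sorted samples. A minimal sketch (the function name and parameters are ours):

```python
import numpy as np

def sliced_wasserstein(X, Y, p=2, n_proj=200, rng=None):
    """Monte Carlo estimate of S_p, eq. (32), between the empirical measures
    of two (n x d) samples X and Y of equal size: average over random
    directions u of the 1-D transport cost T_p between the projected
    samples, which for equal sizes is a sorted matching."""
    rng = np.random.default_rng(rng)
    U = rng.normal(size=(n_proj, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)   # uniform on S^{d-1}
    total = 0.0
    for u in U:
        total += np.mean(np.abs(np.sort(X @ u) - np.sort(Y @ u)) ** p)
    return total / n_proj
```

For instance, if $Y$ is $X$ shifted by a vector $v$, each projected cost with $p=2$ equals $(u\cdot v)^2$, so the estimate approaches $\int_{S^{d-1}}(u\cdot v)^2\,d\sigma(u)=\|v\|^2/d$.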
Since that work primarily focuses on the trimmed version, the assumptions are rather strong for the untrimmed setting, involving finiteness of the functional
\[
SJ_\infty(P) = \operatorname*{ess\,sup}_{0<t<1} \frac{1}{f_u\big(F_u^{-1}(t)\big)} < \infty, \tag{35}
\]
where $f_u$ denotes the density associated with $F_u$. An important step in the derivation of distributional limits for projection-based distances was the unifying approach presented in [115]. Using ideas similar to those in [88], and inspired by the duality-based approach in [73], [115] investigates the weak convergence of the sliced process
\[
G_n(u) = \sqrt n\,\big(T_p(P_n^u,Q^u)-T_p(P^u,Q^u)\big),
\]
for $p>1$, under the assumption of compactly supported probability distributions such that the support of each projected probability is an interval. This ensures the uniqueness (up to constants) of the optimal transport (OT) potentials for each projection. Weak convergence for various projection-based distance notions follows directly from Hadamard differentiability and the functional delta method. Specifically, both sliced and max-sliced distances are considered. The Hadamard differential for the sliced version is linear, which implies the centered Gaussian limit (34), but this is not necessarily the case for the max-sliced version. Furthermore, a distributional limit is also provided for an extension of the sliced distance, referred to as the Distributional Sliced Wasserstein distance. Independently, [53] derived distributional limits for the sliced and max-sliced Wasserstein distances via techniques more closely related to optimal transport. In particular, they leverage the dual expression of the one-dimensional Wasserstein distance with $p=1$ as a supremum over Lipschitz functions to show weak convergence of the empirical process indexed by the functions $\varphi\circ\mathrm{Pr}_u$, where $\varphi$ is 1-Lipschitz and $u\in S^{d-1}$. Then, they conclude weak convergence from
the extended functional delta method under the assumption of compactly supported probabilities with convex support for $p>1$, and under mild moment assumptions for $p=1$. The assumptions required for $p=1$ were further refined by [116]. Later, [74] provided a refined version of the results in [115] and [53], again for compactly supported probabilities but with slightly weaker assumptions on the supports, using arguments similar to those in [115]. In a different vein, employing slightly different proof techniques, this work also explores, in great generality, distributional limits for empirical transport problems where the cost function is itself estimated from the data. In contrast to these unifying approaches, two recent papers have again adopted problem-specific strategies to improve existing results. [70] proved completely dimension-free expectation bounds for $M_p$, which can be extended to infinite-dimensional Hilbert spaces. The recent paper [34] gives a new CLT for $S_p$ based on the Efron-Stein linearization approach presented in Section 3 of this paper. We quote here a version of the CLT for the fluctuation.

Theorem 5.1. Assume $p>1$, $\delta>0$. Let $P$ and $Q$ be probabilities on $\mathbb R^d$ with finite moments of order $2p+\delta$. Assume further that $P$ is absolutely continuous with negligible boundary, and that $\operatorname{int}(\operatorname{supp}P)$ is connected. Then
\[
\sqrt n\,\big(S_p(P_n,Q)-\mathbb E[S_p(P_n,Q)]\big) \longrightarrow_w N\big(0, v_p^2(P,Q)\big), \tag{36}
\]
for some $v_p^2(P,Q)\ge 0$.

We refer to [34] for a precise description of the limiting variance and a proof. Unlike standard OT, $S_p$ retains the favorable properties of the one-dimensional setting, and it is possible to replace the centering constants in (36) by their population counterparts. Theorem 4.1 in [10] provides sufficient conditions for this goal.
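The $\sqrt n$ scaling of the fluctuations in (36) can be checked in a small simulation. The setup below is our own toy example (not from [34]): $P=N(0,I_2)$ and $Q=N(\mu,I_2)$, so along each direction $u$ the projection of $Q$ is $N(u\cdot\mu,1)$ and the one-dimensional cost can be computed against exact Gaussian quantiles; the standard deviation of $S_2(P_n,Q)$ across replicates should roughly halve when $n$ is multiplied by four.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
mu = np.array([1.0, 0.0])                      # Q = N(mu, I_2), P = N(0, I_2)
U = rng.normal(size=(50, 2))
U /= np.linalg.norm(U, axis=1, keepdims=True)  # fixed directions on the circle
z = NormalDist()

def gauss_quantiles(n):
    """Standard normal quantiles at the mid-grid points (k - 1/2)/n."""
    return np.array([z.inv_cdf((k - 0.5) / n) for k in range(1, n + 1)])

def S2(X, q):
    """Sliced quadratic cost of the empirical measure of X against Q, averaged
    over the fixed directions U; along u the projected Q has quantiles
    u . mu + q."""
    return float(np.mean([np.mean((np.sort(X @ u) - (u @ mu + q)) ** 2)
                          for u in U]))

def sd_of_S2(n, reps=200):
    """Standard deviation of S_2(P_n, Q) over independent samples of size n."""
    q = gauss_quantiles(n)
    return float(np.std([S2(rng.normal(size=(n, 2)), q) for _ in range(reps)]))

ratio = sd_of_S2(100) / sd_of_S2(400)          # expect a value close to 2
```

Quadrupling $n$ roughly halves the spread, consistent with the $n^{-1/2}$ rate of the centred statistic in (36).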
The assumptions on the density in Theorem 2.2 must be imposed for all the projected densities $f_u$, and $J_\alpha$ must be replaced with its integrated version,
\[
SJ_\alpha(P) = \int_{S^{d-1}} J_\alpha\big(\mathrm{Pr}_{u\,\sharp}P\big)\, d\sigma(u),
\]
to ensure
\[
\sqrt n\,\big(\mathbb E[S_p(P_n,Q)]-S_p(P,Q)\big) \to 0. \tag{37}
\]
Combining (36) and (37), [34] provides the first CLT for $S_p$ that is also valid for measures without compact support.

6 Applications of central limit theorems for optimal transport

Understanding the asymptotic behavior of OT costs makes it possible to build tests assessing the similarity between distributions. In contrast to the usual goodness-of-fit tests based on the Kullback-Leibler divergence, tests based on the Monge-Kantorovich, a.k.a. Wasserstein, distance better capture the structural variability of the distribution of the observations. The first such tests were developed for medical or biological applications, to capture the inter-individual variability of data whose response is highly influenced by individual characteristics. When the response was a function, tests were based on registration methods, and the natural extension to distributional responses was obtained by considering similarity tests based on OT distances, as in [36], [43] and [41]. In this context, optimal transport provides a sound geometrical interpretation of distributions, and testing in the Wasserstein space of distributions yields a better interpretation of the notion of similarity. Tests in the Wasserstein space are natural for deformation models [30] or for data analysis on distributions, as in [15] or [8]. The problem of assessing bias in algorithmic decisions is, up to some extent, similar to the previous applications. Actually, bias in
AI arises when a variable which characterizes a group of individuals systematically affects the behaviour of an algorithm. This implies that the decisions or the performance of the algorithm may differ across subgroups, leading to a possible infringement of their fundamental rights and thus to discrimination. Detecting such disparate behaviour of the algorithm fits naturally into an optimal-transport-based goodness-of-fit framework. For instance, when the population is split into a minority and a majority according to the value of a so-called sensitive variable ($A\in\{0,1\}$), testing for bias amounts to testing whether the algorithm or its loss exhibits different behaviours for the two groups, by looking at the distance between the corresponding conditional distributions given $A=0$ or $A=1$, namely $\mu_{A=0}(f)$ and $\mu_{A=1}(f)$. Note that choosing the Wasserstein distance to evaluate fairness is a relevant choice, since the OT cost is deeply related to bias discovery and bias mitigation, as pointed out in [64] and [18]. Regulations, as provided for instance by the European AI Act, determine a threshold $\Delta$ above which the dissimilarity is not acceptable, leading to the non-conformity of the AI system. Hence the corresponding statistical test for fairness, based on the asymptotic behaviour of OT, can be formulated by choosing the null hypothesis as
\[
H_0: W\big(\mu_{A=1}(f),\mu_{A=0}(f)\big) \ge \Delta.
\]
As pointed out in [31], rejection of the null hypothesis would yield statistical evidence that the algorithm behaves similarly on both groups and is thus compliant with the considered regulation. Fairness tests with the Wasserstein distance are also provided in [112] and [108].

7 Open problems

While the distributional limit theory for OT is well developed by now, and we have presented what we believe to be a reasonably complete description of it, some aspects of the theory are not completely understood.
We would like to end this paper with a small sample of open problems that we believe deserve further investigation. The fluctuation CLT for OT (Theorem 3.1) is valid in general dimension for every $c_p$ cost with $p>1$. Given the minimal assumptions needed for $p=1$ in the one-dimensional case, one may wonder whether a similar result holds in higher dimension. As we have seen, $\sqrt n\,(T_1(P_n,Q)-\mathbb E[T_1(P_n,Q)])$ is stochastically bounded in any dimension. We think it would be of interest to get an answer to the following:

Problem 1. Can we find conditions under which $\sqrt n\,(T_1(P_n,Q)-\mathbb E[T_1(P_n,Q)])$ converges weakly? Is it possible to get a Gaussian limit?

Even in the one-dimensional setup some questions remain unsolved. As an example, we can mention the weak convergence of the quantile process discussed in Theorem 2.4. The assumption $J_p(P)<\infty$ is a necessary condition in the result, since it is necessary for the integrability of the weighted Brownian bridge in the limit. However, the monotonicity of $f\circ F^{-1}$ might not be necessary; it is not in the case $p=1$, as shown in the recent paper [5]. With this motivation, we think it is worth considering the next question.

Problem 2. Can we find necessary and sufficient conditions for the weak convergence of the quantile process $v_n(t)=\sqrt n\,(F_n^{-1}(t)-F^{-1}(t))$, $0<t<1$, as a random
element in $L^p(0,1)$?

In the null case $P=Q$ and for a strictly convex cost ($p>1$), the fluctuation CLT yields a null limiting distribution. With a faster rate there is still some potential room for a different CLT. Building upon earlier work by Ambrosio and coauthors (see [80]), Michel Ledoux conjectured in [81] that when $P$ is the uniform distribution on the unit square $[0,1]^2$, then
\[
n\,\big(T_2(P_n,P)-\mathbb E[T_2(P_n,P)]\big) \longrightarrow_w \xi, \tag{38}
\]
$\xi$ being some centered random variable. A result like (38) holds under some conditions in dimension one. More generally, we find the following question very interesting (and challenging).

Problem 3. Does (38) hold? In general dimension? Can we get a good description of the limit?

Finally, we turn to an apparently simpler, yet open, problem. It is well known that the optimal transport problem with Euclidean cost might admit several solutions. For instance, if $\mu=\mathrm{Uniform}([0,1]^2)$ and $\nu=\mathrm{Uniform}([2,3]\times[0,1])$, any coupling $\pi$ such that the vertical coordinates remain unchanged is optimal, i.e., $\int |x_2-y_2|\, d\pi(x,y)=0$, where $x=(x_1,x_2)$ and $y=(y_1,y_2)$. Note that in this case, since $\pi(y_1>x_1)=1$, we get
\[
\int \|x-y\|\, d\pi(x,y) = \int |x_1-y_1|\, d\pi(x,y) = \int (y_1-x_1)\, d\pi(x,y) = 2.
\]
However, it is easy to check that the empirical optimal transport plan is a mapping with probability one. The question is the following.

Problem 4 (Conjectured in [105]). Let $X_1,\dots,X_n\overset{iid}{\sim} U([0,1]^2)$ and $Y_1,\dots,Y_n\overset{iid}{\sim} U([2,3]\times[0,1])$ be independent. Let $\pi_n$ be the empirical optimal transport plan. Find the limit of $\pi_n$ as $n\to\infty$.

Acknowledgments

We gratefully acknowledge Tudor Manole for his valuable comments and corrections, particularly regarding the central limit theorems for smooth optimal transport.

References

[1] F. Bachoc, L. Béthune, A. Gonzalez-Sanz, and J.-M. Loubes. Gaussian processes on distributions based on regularized optimal transport.
In Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Machine Learning Research, pages 4986–5010. PMLR, 25–27 Apr 2023.
[2] F. Bachoc, L. Béthune, A. González-Sanz, and J.-M. Loubes. Improved learning theory for kernel distribution regression with two-stage sampling. arXiv:2308.14335, 2023.
[3] S. Balakrishnan and T. Manole. Stability bounds for smooth optimal transport maps and their statistical implications. arXiv:2502.12326, 2025.
[4] E. Bayraktar, S. Eckstein, and X. Zhang. Stability and sample complexity of divergence regularized optimal transport. Bernoulli, 31(1):213–239, 2025.
[5] B. K. Beare and T. Kaji. Necessary and sufficient conditions for convergence in distribution of quantile and P-P processes in L1(0,1), 2025.
[6] M. Blondel, V. Seguy, and A. Rolet. Smooth and sparse optimal transport. Volume 84 of Proceedings of Machine Learning Research, pages 880–889, 2018.
[7] S. G. Bobkov and M. Ledoux. One-dimensional empirical measures, order statistics, and Kantorovich transport distances. Memoirs of the American Mathematical Society, 2019.
[8] E. Boissard, T. Le Gouic, and J.-M. Loubes. Distribution's template estimate with Wasserstein metrics. Bernoulli, 21(2), 2015.
[9] N. Bonneel and J. Digne. A survey of optimal transport for computer graphics and computer vision. Computer Graphics Forum, 42:439–460, 2023.
[10] N. Bonneel, J. Rabin, G. Peyré, and H. Pfister. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 51:22–45, 2014.
[11] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities. Oxford University Press, Oxford, 2013. A nonasymptotic theory of independence.
[12] J. Cárcamo, A. Cuevas, and L.-A. Rodríguez. Directional differentiability for supremum-type functionals: Statistical applications. Bernoulli, 26:2143–2175, 2020.
[13] G. Carlier. On the linear convergence of the multimarginal Sinkhorn algorithm. SIAM Journal on Optimization, 32(2):786–794, 2022.
[14] G. Carlier and M. Laborde. A differential approach to the multi-marginal Schrödinger system. SIAM Journal on Mathematical Analysis, 52(1):709–717, 2020.
[15] E. Cazelles, V. Seguy, J. Bigot, M. Cuturi, and N. Papadakis. Geodesic PCA versus log-PCA of histograms in the Wasserstein space. SIAM Journal on Scientific Computing, 40(2):B429–B456, 2018.
[16] W. Chang, Y. Shi, H. Tuan, and J. Wang. Unified optimal transport framework for universal domain adaptation. Advances in Neural Information Processing Systems, 35:29512–29524, 2022.
[17] V. Chernozhukov, A. Galichon, M. Hallin, and M. Henry. Monge–Kantorovich depth, quantiles, ranks and signs. The Annals of Statistics, 45(1):223–256, 2017.
[18] E. Chzhen, C. Denis, M. Hebiri, L. Oneto, and M. Pontil. Fair regression with Wasserstein barycenters. Advances in Neural Information Processing Systems, 33:7321–7331, 2020.
[19] M. Csörgő and L. Horvàth. Weighted Approximations in Probability and Statistics. Chichester: Wiley, 1993.
[20] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In NeurIPS, volume 26, 2013.
[21] L. De Lara, A. González-Sanz, N. Asher, L. Risser, and J.-M. Loubes. Transport-based counterfactual models. Journal of Machine Learning Research, 25:1–59, 2024.
[22] N. Deb, B. B. Bhattacharya, and B. Sen. Pitman efficiency lower bounds for multivariate distribution-free tests based on optimal transport, 2023.
[23] N. Deb, P. Ghosal, and B. Sen. Rates of estimation of optimal transport maps using plug-in estimators via barycentric projections. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W.
Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 29736–29753. Curran Associates, Inc., 2021.
[24] N. Deb and B. Sen. Multivariate rank-based distribution-free nonparametric testing using measure transportation. J. Amer. Statist. Assoc., 118(541):192–207, 2023.
[25] E. del Barrio, J. Cuesta-Albertos, C. Matrán, and J. Rodríguez-Rodríguez. Tests of goodness of fit based on the L2-Wasserstein distance. Ann. Statist., 27:1230–1239, 1999.
[26] E. del Barrio, E. Giné, and C. Matrán. Central limit theorems for the Wasserstein distance between the empirical and the true distributions. Ann. Probab., 27(2):1009–1071, 1999.
[27] E. del Barrio, E. Giné, and F. Utzet. Asymptotics for L2 functionals of the empirical quantile process, with applications to tests of fit based on weighted Wasserstein distances. Bernoulli, 11:131–189, 2005.
[28] E. del Barrio, A. González-Sanz, and J.-M. Loubes. Central limit theorems for general transportation costs. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 60(2):847–873, 2024.
[29] E. del Barrio, A. González Sanz, and J.-M. Loubes. Central limit theorems for semi-discrete Wasserstein distances. Bernoulli, 30(1):554–580, 2024.
[30] E. Del Barrio, P. Gordaliza, H. Lescornel, and J.-M. Loubes. Central limit theorem and bootstrap procedure for Wasserstein's variations with an application to structural relationships between distributions. Journal of Multivariate Analysis, 169:341–362, 2019.
[31] E. del Barrio, P. Gordaliza, and J.-M. Loubes. A central limit theorem for Lp transportation cost on the real line with application to fairness assessment in machine learning. Information and Inference: A Journal of the IMA, 8(4):817–849, 2019.
[32] E. Del Barrio, H. Inouzhe, J.-M. Loubes, C. Matrán, and A. Mayo-Íscar. optimalFlow: optimal transport approach to flow cytometry gating and population matching. BMC Bioinformatics, 21:1–25, 2020.
[33] E. del Barrio and J. Loubes. Central limit theorems for empirical transportation cost in general dimension. Ann. Probab., 2019.
[34] E. del Barrio, J.-M. Loubes, and D. Rodríguez-Vítores. Improved central limit theorems for the empirical sliced Wasserstein distance. arXiv preprint, 2025.
[35] E. del Barrio, A. G. Sanz, J.-M. Loubes, and J. Niles-Weed. An improved central limit theorem and fast convergence rates for entropic transportation costs. SIAM Journal on Mathematics of Data Science, 5(3):639–669, 2023.
[36] J.-F. Dupuy, J.-M. Loubes, and E. Maza. Non parametric estimation of the structural expectation of a stochastic increasing function. Statistics and Computing, 21:121–136, 2011.
[37] J. Feydy, T. Séjourné, F.-X. Vialard, S.-i. Amari, A. Trouve, and G. Peyré. Interpolating between optimal transport and MMD using Sinkhorn divergences. In K. Chaudhuri and M. Sugiyama, editors, Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, pages 2681–2690. PMLR, 16–18 Apr 2019.
[38] A. Figalli. The Monge-Ampère Equation and Its Applications. Zurich Lectures in Advanced Mathematics, European Mathematical Society (EMS), Zurich, 2017.
[39] N. Fournier and A. Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3–4):707–738, Oct. 2014.
[40] J. Franklin and J. Lorenz. On the scaling of multidimensional matrices. Linear Algebra and its Applications, 114-115:717–735, 1989. Special Issue Dedicated to Alan J. Hoffman.
[41] G. Freitag, C. Czado, and A. Munk. A nonparametric test for similarity of marginals—with applications to the assessment of population bioequivalence. Journal of Statistical Planning and Inference, 137(3):697–711, 2007.
[42] G.
Freitag and A. Munk. On Hadamard differentiability in k-sample semiparametric models—with applications to the assessment of structural relationships. J. Multivariate Anal., 94:123–158, 2005.
[43] G. Freitag and A. Munk. On Hadamard differentiability in k-sample semiparametric models—with applications to the assessment of structural relationships. Journal of Multivariate Analysis, 94(1):123–158, 2005.
[44] P. Freulon, J. Bigot, and B. P. Hejblum. CytOpT: Optimal transport with domain adaptation for interpreting flow cytometry data. Ann. Appl. Statist., 17:1086–1104, 2023.
[45] A. Galichon. Optimal Transport Methods in Economics. Princeton University Press, 2016.
[46] T. Gallouët and Q. Mérigot. A Lagrangian scheme à la Brenier for the incompressible Euler equations. Found. Comput. Math., 18:835–865, 2018.
[47] W. Gangbo and R. J. McCann. The geometry of optimal transportation. Acta Mathematica, 177(2):113–161, 1996.
[48] A. Genevay, L. Chizat, F. Bach, M. Cuturi, and G. Peyré. Sample complexity of Sinkhorn divergences. In K. Chaudhuri and M. Sugiyama, editors, Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pages 1574–1583. PMLR, 16–18 Apr 2019.
[49] A. Genevay, G. Peyré, and M. Cuturi. Learning generative models with Sinkhorn divergences. In A. Storkey and F. Perez-Cruz, editors, Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pages 1608–1617, 09–11 Apr 2018.
[50] Z. Goldfeld and K. Greenewald. Gaussian-smoothed optimal transport: Metric structure and statistical efficiency. In International
Conference on Artificial Intelligence and Statistics, pages 3327–3337. PMLR, 2020.
[51] Z. Goldfeld, K. Greenewald, J. Niles-Weed, and Y. Polyanskiy. Convergence of smoothed empirical measures with applications to entropy estimation. IEEE Transactions on Information Theory, 66(7):4368–4391, 2020.
[52] Z. Goldfeld, K. Kato, S. Nietert, and G. Rioux. Limit distribution theory for smooth p-Wasserstein distances. The Annals of Applied Probability, 34(2), Apr. 2024.
[53] Z. Goldfeld, K. Kato, G. Rioux, and R. Sadhu. Statistical inference with regularized optimal transport. arXiv:2205.04283, 2022.
[54] Z. Goldfeld, K. Kato, G. Rioux, and R. Sadhu. Limit theorems for entropic optimal transport maps and Sinkhorn divergence. Electronic Journal of Statistics, 18(1), Jan. 2024.
[55] A. González-Sanz, S. Eckstein, and M. Nutz. Sparse regularized optimal transport without curse of dimensionality. Preprint, 2025.
[56] J. González-Delgado, A. González-Sanz, J. Cortés, and P. Neuvial. Two-sample goodness-of-fit tests on the flat torus based on Wasserstein distance and their relevance to structural biology. Electronic Journal of Statistics, 17(1), Jan. 2023.
[57] A. González-Sanz, M. Hallin, and B. Sen. Monotone measure-preserving maps in Hilbert spaces: existence, uniqueness, and stability. arXiv:2305.11751, 2023.
[58] A. González-Sanz and S. Hundrieser. Weak limits for empirical entropic optimal transport: Beyond smooth costs. arXiv:2305.09745, 2023.
[59] A. González-Sanz, J.-M. Loubes, and J. Niles-Weed. Weak limits of entropy regularized optimal transport; potentials, plans and divergences. arXiv:2207.07427, 2024.
[60] A. González-Sanz and M. Nutz. Sparsity of quadratically regularized optimal transport: Scalar case. arXiv:2410.03353, 2024.
[61] A. González-Sanz, M. Nutz, and A. R. Valdevenito. Monotonicity in quadratically regularized linear programs. arXiv:2408.07871, 2024.
[62] A. González-Sanz and S. Sheng.
Linearization of Monge-Ampère equations and data science applications. arXiv:2408.06534, 2024.
[63] P. Gordaliza, E. Del Barrio, G. Fabrice, and J.-M. Loubes. Obtaining fairness using optimal transport theory. In International Conference on Machine Learning, pages 2357–2365. PMLR, 2019.
[64] T. L. Gouic, J.-M. Loubes, and P. Rigollet. Projection to fairness in statistical learning. arXiv preprint arXiv:2005.11720, 2020.
[65] S. Graf and H. Luschgy. Foundations of Quantization for Probability Distributions. Springer-Verlag, Berlin, Heidelberg, 2000.
[66] F. F. Gunsilius. On the convergence rate of potentials of Brenier maps. Econometric Theory, 38(2):381–417, Feb. 2021.
[67] M. Hallin, E. del Barrio, J. Cuesta-Albertos, and C. Matrán. Distribution and quantile functions, ranks and signs in dimension d: A measure transportation approach. The Annals of Statistics, 49(2):1139–1165, 2021.
[68] M. Hallin, D. Hlubinka, and Š. Hudecová. Efficient fully distribution-free center-outward rank tests for multiple-output regression and MANOVA. J. Amer. Statist. Assoc., 118(543):1923–1939, 2023.
[69] M. Hallin, G. Mordant, and J. Segers. Multivariate goodness-of-fit tests based on Wasserstein distance. Electronic Journal of Statistics, 15(1), Jan. 2021.
[70] R. Han, C. Rush, and J. Wiesel. Max-sliced Wasserstein concentration and uniform ratio bounds of empirical measures on RKHS. arXiv preprint arXiv:2405.13153, 2024.
[71] Z. Harchaoui, L. Liu, and S. Pal. Asymptotics of entropy-regularized optimal transport via chaos decomposition. Preprint arXiv:2011.08963, 2020.
[72] S. Hundrieser, M. Klatt, and A. Munk. Limit distributions and sensitivity analysis for empirical entropic optimal transport on countable spaces. Ann. Appl. Probab., 34(1B):1403–1468, 2024.
[73] S. Hundrieser, M. Klatt, A. Munk, and T. Staudt. A unifying approach to distributional limits for empirical optimal transport.
Bernoulli, 30(4):2846–2877, 2024.
[74] S. Hundrieser, G. Mordant, C. A. Weitkamp, and A. Munk. Empirical optimal transport under estimated costs: Distributional limits and statistical applications. Stochastic Processes and their Applications, 178:104462, 2024.
[75] S. Hundrieser, T. Staudt, and A. Munk. Empirical optimal transport between different measures adapts to lower complexity. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 60(2), May 2024.
[76] J.-C. Hütter and P. Rigollet. Minimax estimation of smooth optimal transport maps. The Annals of Statistics, 49(2):1166–1194, 2021.
[77] J. Kitagawa, Q. Mérigot, and B. Thibert. Convergence of a Newton algorithm for semi-discrete optimal transport. J. Eur. Math. Soc., 21:2603–2651, 2019.
[78] M. Klatt, A. Munk, and Y. Zemel. Limit laws for empirical optimal solutions in random linear programs. Annals of Operations Research, 315(1):251–278, Apr. 2022.
[79] M. Klatt, C. Tameling, and A. Munk. Empirical regularized optimal transport: Statistical theory and applications. SIAM Journal on Mathematics of Data Science, 2(2):419–443, 2020.
[80] L. Ambrosio, F. Stra, and D. Trevisan. A PDE approach to a 2-dimensional matching problem. Probab. Theory Related Fields, 173:433–478, 2019.
[81] M. Ledoux. Optimal matching of random samples and rates of convergence of empirical measures. Mathematics going forward—collected mathematical brushstrokes, 2023.
[82] M. Ledoux and M. Talagrand. Probability in Banach Spaces. Springer, 1991.
[83] B. Levy, R. Mohayaee, and S. von Hausegger. A fast semidiscrete optimal transport algorithm for a unique reconstruction of the early universe. Monthly Notices of the Royal Astronomical Society, 506(1):1165–1185, June 2021.
[84] G. Loeper. On the regularity of the polar factorization for time dependent maps. Calc. Var. Partial Differential Equations, 22(3):343–374, 2005.
[85] C. Léonard.
A survey of the Schrödinger problem and some of its connections with optimal transport. Discrete and Continuous Dynamical Systems – A, 34(4):1533–1574, 2014.
[86] T. Manole, S. Balakrishnan, J. Niles-Weed, and L. Wasserman. Central limit theorems for smooth optimal transport maps. arXiv:2312.12407, 2024.
[87] T. Manole, S. Balakrishnan, J. Niles-Weed, and L. Wasserman. Plugin estimation of smooth optimal transport maps. The Annals of Statistics, 52(3):966–998, 2024.
[88] T. Manole, S. Balakrishnan, and L. Wasserman. Minimax confidence intervals for the Sliced Wasserstein distance. Electronic Journal of Statistics, 16(1):2252–2345, 2022.
[89] T. Manole and J. Niles-Weed. Sharp convergence rates for empirical optimal transport with smooth costs. The Annals of Applied Probability, 34(1B):1108–1135, 2024.
[90] G. Mena and J. Niles-Weed. Statistical bounds for entropic optimal transport: Sample complexity and the central limit theorem. Advances in Neural Information Processing Systems, 32, 2019.
[91] J. Meyron. Initialization procedures for discrete and semi-discrete optimal transport. Computer-Aided Design, 115:13–22, 2019.
[92] G. Mordant. The entropic optimal (self-)transport problem: Limit distributions for decreasing regularization with application to score function estimation, 2024.
[93] A. Munk and C. Czado. Nonparametric validation of similar distributions and assessment of goodness of fit. J. R. Stat. Soc. Ser. B Stat. Methodol., 60:223–241, 1998.
[94] B. Muzellec, R. Nock, G. Patrini, and F. Nielsen. Tsallis regularized optimal transport and ecological inference. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1),
Feb. 2017.
[95] S. Nietert, Z. Goldfeld, and K. Kato. Smooth p-Wasserstein distance: Structure, empirical approximation, and statistical applications. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8172–8183. PMLR, 18–24 Jul 2021.
[96] J. Niles-Weed and P. Rigollet. Estimation of Wasserstein distances in the spiked transport model. Bernoulli, 28(4):2663–2688, 2022.
[97] M. Nutz. Quadratically regularized optimal transport: Existence and multiplicity of potentials. SIAM J. Math. Anal., to appear, 2024.
[98] G. Peyré and M. Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11:355–607, 2019.
[99] A.-A. Pooladian and J. Niles-Weed. Plug-in estimation of Schrödinger bridges, 2024.
[100] P. Rigollet and A. J. Stromme. On the sample complexity of entropic optimal transport. Preprint arXiv:2206.13472, 2022.
[101] D. Rodríguez-Vítores, E. del Barrio, and J.-M. Loubes. An improved central limit theorem for the empirical sliced Wasserstein distance. arXiv preprint arXiv:2503.18831, 2025.
[102] R. Sadhu, Z. Goldfeld, and K. Kato. Limit distribution theory for the smooth 1-Wasserstein distance with applications. arXiv:2107.13494, 2022.
[103] R. Sadhu, Z. Goldfeld, and K. Kato. Stability and statistical inference for semidiscrete optimal transport maps. The Annals of Applied Probability, 34(6), Dec. 2024.
[104] R. Samworth and O. Johnson. Convergence of the empirical process in Mallows distance, with an application to bootstrap performance. Preprint, Centre for Mathematical Sciences, Cambridge, 2004.
[105] F. Santambrogio. Optimal Transport for Applied Mathematicians. Birkhäuser/Springer, 2015.
[106] E. Schrödinger. Sur la théorie relativiste de l'électron et l'interprétation de la mécanique quantique. Annales de l'institut Henri Poincaré, 2(4):269–310, 1932.
[107] V. Seguy, B. B.
Damodaran, R. Flamary, N. Courty, A. Rolet, and M. Blondel. Large-scale optimal transport and mapping estimation. In ICLR 2018-International Conference on Learning Representations , pages 1–15, 2018. [108]N. Si, K. Murthy, J. Blanchet, and V. A. Nguyen. Testing group fairness via optimal transport projections. In International Conference on Machine Learning , pages 9649–9659. PMLR, 2021. [109]R. Sinkhorn. Diagonal equivalence to matrices with prescribed row and column sums.The American Mathematical Monthly , 74(4):402, Apr. 1967. 39 Distributional Limit Theory for Optimal Transport [110]M. Sommerfeld and A. Munk. Inference for empirical wasserstein distances on finite spaces. Journal of the Royal Statistical Society Series B: Statistical Methodology , 80(1):219–238, 2018. [111]C. Tameling, M. Sommerfeld, and A. Munk. Empirical optimal transport on conuntable metric spaces: distributional limits and statistical applcations. The Annals of Applied Probability , 29:2744–2781, 2019. [112]B. Taskesen, J. Blanchet, D. Kuhn, and V. A. Nguyen. A statistical test for probabilistic fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency , FAccT ’21, page 648–665, New York, NY, USA, 2021. Association for Computing Machinery. [113]C. Villani. Topics in Optimal Transportation . Graduate studies in mathematics. American Mathematical Society, 2003. [114] C. Villani. Optimal transport. Old and new. Springer, 2009. [115]J. Xi and J. Niles-Weed. Distributional convergence of the sliced wasserstein process. In Neural Information Processing Systems , 2022. [116]X. Xu and Z. Huang. Central limit theorem for the sliced 1-wasserstein distance and the max-sliced 1-wasserstein distance. arXiv preprint arXiv:2205.14624 ,
2022.
[117] S. Zhang, G. Mordant, T. Matsumoto, and G. Schiebinger. Manifold learning with sparse regularised optimal transport. arXiv:2307.09816, 2023.

8 Supplement

Proof of Theorem 2.1. The convergence statement (7) is Theorem 2.1 in [31]. The limiting variance is defined as follows. We set $h_p(x) = |x|^p$. Then $h_p'(x) = \mathrm{sgn}(x)\,p|x|^{p-1}$ and we set
$$ c_p(t;F,G) = \int_{F^{-1}(1/2)}^{F^{-1}(t)} h_p'\big(s - G^{-1}(F(s))\big)\,ds, \qquad 0 < t < 1, $$
and $\bar c_p(t;F,G) = c_p(t;F,G) - \int_0^1 c_p(s;F,G)\,ds$. It can be shown that $c_p(\cdot\,;F,G) \in L^2(0,1)$ (Lemma A.1 in [31]). The limiting variance in (7) is
$$ \sigma^2(P,Q) = \int_0^1 \bar c_p^2(t;F,G)\,dt. \quad (39) $$
For a proof of the fact that $\sigma^2(P,Q) > 0$ if and only if $P \neq Q$ we refer also to [31].

It remains only to prove (8). By Lemma 8.1 below the random sequence $\sqrt n\,(T_1(P_n,Q) - \mathbb{E}\,T_1(P_n,Q))$ is stochastically bounded, and from the comments after the proof we see that
$$ \mathrm{Var}(Z_{n,M,1}) \le \frac{C(P,M)}{n}, $$
where $Z_{n,M,1} = \int_{[-M,M]^c} |F_n(x) - G(x)|\,dx$ and $C(P,M) \to 0$ as $M \to \infty$. We set also $Z_{n,M,2} = \int_{-M}^{M} |F_n(x) - G(x)|\,dx$. Now,
$$ Z_{n,M,2} = \int_{-M}^{M} \Big| F(x) - G(x) + \frac{\alpha_n(x)}{\sqrt n} \Big|\,dx, $$
where $\alpha_n(x) = \sqrt n\,(F_n(x) - F(x))$ is the empirical process indexed by the intervals $(-\infty,x]$, $-M \le x \le M$. The proof of Theorem 2.1 in [26] shows that the necessary and sufficient condition for the weak convergence of $\alpha_n$ to $B \circ F$ in $L^1(\mathbb{R})$, with $B$ a Brownian bridge on $[0,1]$, is finiteness of the integral
$$ \int_{-M}^{M} \sqrt{F(x)(1-F(x))}\,dx, $$
which obviously holds. Now, without loss of generality, we can choose versions of $\alpha_n$ and $B \circ F$ (for which we will keep the same notation) such that $\int_{-M}^{M} |\alpha_n(x) - B \circ F(x)|\,dx \to 0$ a.s. as $n \to \infty$.
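The CDF-difference integrals manipulated here are one-dimensional Wasserstein functionals: on the real line, $T_1(P_n,Q) = \int_{\mathbb{R}} |F_n(x) - G(x)|\,dx$, a representation used again in the proof of Lemma 8.1 below. The following sketch (our own illustration, not code from the paper) checks this identity numerically, comparing the CDF-difference integral between two equal-size empirical measures with the sorted-matching formula for $W_1$.

```python
# Check numerically that, for two empirical measures of equal size n, the
# 1-Wasserstein distance (1/n) * sum_i |x_(i) - y_(i)| obtained by sorting
# coincides with the integral of |F_n - G_n| over the real line.
import bisect
import random

def w1_sorted(xs, ys):
    """W1 between two equal-size empirical measures via sorted matching."""
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def w1_cdf(xs, ys):
    """W1 as the integral of |F_n - G_n|; both CDFs are piecewise constant."""
    pts = sorted(set(xs) | set(ys))
    xs_s, ys_s = sorted(xs), sorted(ys)
    total = 0.0
    for left, right in zip(pts[:-1], pts[1:]):
        F = bisect.bisect_right(xs_s, left) / len(xs)   # F_n on [left, right)
        G = bisect.bisect_right(ys_s, left) / len(ys)   # G_n on [left, right)
        total += abs(F - G) * (right - left)
    return total

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200)]
ys = [random.gauss(0.5, 2.0) for _ in range(200)]
assert abs(w1_sorted(xs, ys) - w1_cdf(xs, ys)) < 1e-9
```

Both computations are exact piecewise evaluations, so they agree up to floating-point rounding.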
But then, if
$$ \tilde Z_{n,M,2} = \int_{-M}^{M} \Big| F(x) - G(x) + \frac{B \circ F(x)}{\sqrt n} \Big|\,dx, $$
we have that
$$ \sqrt n\,\big| Z_{n,M,2} - \tilde Z_{n,M,2} \big| \le \int_{-M}^{M} |\alpha_n(x) - B \circ F(x)|\,dx \longrightarrow 0 \quad \text{a.s.} \quad (40) $$
We observe next that
$$ \sqrt n \Big( \tilde Z_{n,M,2} - \int_{-M}^{M} \big| F(x) - G(x) \big|\,dx \Big) \longrightarrow \int_{-M}^{M} v_{F,G}(x)\,dx \quad \text{a.s.}, \quad (41) $$
with $v_{F,G}$ as in (10). In fact,
$$ \sqrt n \Big( \tilde Z_{n,M,2} - \int_{-M}^{M} \big| F(x) - G(x) \big|\,dx \Big) = \int_{-M}^{M} \sqrt n \Big( \Big| F(x) - G(x) + \frac{B \circ F(x)}{\sqrt n} \Big| - \big| F(x) - G(x) \big| \Big)\,dx. $$
The integrand in the last expression converges pointwise to $\mathrm{sgn}(F(x) - G(x))\,B(F(x))$ if $F(x) \neq G(x)$ and to $|B(F(x))|$ otherwise. Furthermore, it is upper bounded by $|B(F(x))|$, and the claim follows by dominated convergence. In (40) and (41) we can easily check that we also have convergence of moments of order $r < 2$. Hence, combining (40), (41) and moment convergence (of all orders $r < 2$) we see that
$$ \sqrt n\,\big( Z_{n,M,2} - \mathbb{E}[Z_{n,M,2}] \big) \longrightarrow_w \int_{-M}^{M} \big( v_{F,G}(x) - \mathbb{E}\,v_{F,G}(x) \big)\,dx. \quad (42) $$
Moment convergence ensures that the Wasserstein distance of order $r < 2$ between the law of $\sqrt n\,(Z_{n,M,2} - \mathbb{E}[Z_{n,M,2}])$ and that of the random variable on the right tends to 0 as $n \to \infty$.
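The variance bounds invoked in this argument are proved in Lemma 8.1 below, which asserts $\mathrm{Var}(T_1(P_n,Q)) \le \sigma^2(P)/n$. A Monte Carlo sketch of that bound, using our own toy choice of distributions (not from the paper): $P = \mathrm{Uniform}(0,1)$, so $\sigma^2(P) = 1/12$, and $Q = \mathrm{Uniform}(1/4, 3/4)$, with $T_1$ computed in the one-dimensional representation $T_1(P_n,Q) = \int |F_n(x) - G(x)|\,dx$.

```python
# Monte Carlo sanity check of Var(T1(Pn, Q)) <= sigma^2(P)/n (Lemma 8.1).
import bisect
import random

def G(x):
    """CDF of Q = Uniform(1/4, 3/4)."""
    return min(max(2.0 * (x - 0.25), 0.0), 1.0)

def t1(sample, grid=1000):
    """Approximate the integral of |F_n(x) - G(x)| over [0,1] on a fine grid."""
    s = sorted(sample)
    n = len(s)
    total = 0.0
    for k in range(grid):
        x = (k + 0.5) / grid
        Fn = bisect.bisect_right(s, x) / n
        total += abs(Fn - G(x)) / grid
    return total

random.seed(1)
n, reps = 25, 2000
vals = [t1([random.random() for _ in range(n)]) for _ in range(reps)]
mean = sum(vals) / reps
var = sum((v - mean) ** 2 for v in vals) / (reps - 1)
assert var <= (1.0 / 12.0) / n  # the Efron-Stein bound sigma^2(P)/n
```

Because the signs of $F - G$ alternate here, the true variance sits well below the bound, so the assertion holds comfortably despite Monte Carlo noise.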
Next, we set $v^{(1)}_{F,G}(x) = |B \circ F(x)|\,I(F(x) = G(x))$ and $v^{(2)}_{F,G}(x) = \mathrm{sgn}(F(x) - G(x))\,B \circ F(x)\,I(F(x) \neq G(x))$, and observe that for $M_1 < M_2$,
$$ V(M_1,M_2) := \mathbb{E}\Big[ \Big( \int_{M_1}^{M_2} \big( v_{F,G}(x) - \mathbb{E}\,v_{F,G}(x) \big)\,dx \Big)^2 \Big] \le 2\,\mathbb{E}\Big[ \Big( \int_{M_1}^{M_2} \big( v^{(1)}_{F,G}(x) - \mathbb{E}\,v^{(1)}_{F,G}(x) \big)\,dx \Big)^2 \Big] + 2\,\mathbb{E}\Big[ \Big( \int_{M_1}^{M_2} v^{(2)}_{F,G}(x)\,dx \Big)^2 \Big] =: 2V_1 + 2V_2. $$
The integral defining $V_2$ is a centered Gaussian random variable and we can easily see that
$$ V_2 \le \int_{M_1}^{M_2} \int_{M_1}^{M_2} \big( F(x \wedge y) - F(x)F(y) \big)\,dx\,dy. $$
The integrand in the last expression is integrable over $\mathbb{R}^2$ (and the integral equals the variance $\sigma^2(P)$). This entails that $V_2 \to 0$ as $M_1, M_2 \to \infty$. Similarly, using the fact that $\mathrm{Cov}(|B(s)|, |B(t)|) \le s \wedge t - st$ for $0 \le s, t \le 1$, we can check that
$$ V_1 \le \int_{M_1}^{M_2} \int_{M_1}^{M_2} \big( F(x \wedge y) - F(x)F(y) \big)\,dx\,dy $$
and conclude that $V(M_1,M_2) \to 0$ as $M_1, M_2 \to \infty$. Now we can argue as in the proof of Theorem 5.1 in [26] to see that $\int_{-M}^{M} (v_{F,G}(x) - \mathbb{E}[v_{F,G}(x)])\,dx$ converges in $L^2$ as $M \to \infty$ to the random variable $\gamma(P,Q)$ defined in (9). Combining all the above estimates and using a classical $3\varepsilon$ argument we conclude that
$$ \sqrt n\,(Z_n - \mathbb{E}[Z_n]) \longrightarrow_w \gamma(P,Q). $$

Lemma 8.1. If $P$ and $Q$ are Borel probability measures on $\mathbb{R}^d$, $Q$ has finite mean and $P$ has finite variance $\sigma^2(P)$, and $P_n$ denotes the empirical measure on an i.i.d. sample $X_1, \ldots, X_n$ of observations from $P$, then
$$ \mathrm{Var}(T_1(P_n,Q)) \le \frac{\sigma^2(P)}{n}. \quad (43) $$

Proof. By duality,
$$ Z := T_1(P_n,Q) = \sup_{f\,:\,\|f\|_{\mathrm{Lip}} \le 1} \Big| \int f\,d(P_n - Q) \Big|. $$
By the Efron-Stein inequality, $\mathrm{Var}(T_1(P_n,Q)) \le \frac{n}{2}\,\mathbb{E}[(Z - Z')^2]$ if $Z' := T_1(P_n',Q)$
and $P_n'$ denotes the empirical measure on $X_1', X_2, \ldots, X_n$, with $X_1'$ a further independent observation from $P$. We observe that
$$ |Z - Z'| \le \sup_{f\,:\,\|f\|_{\mathrm{Lip}} \le 1} \Big| \int f\,d(P_n - P_n') \Big| \le \frac{1}{n}\,\|X_1 - X_1'\|, $$
which implies that
$$ \mathrm{Var}(T_1(P_n,Q)) \le \frac{\mathbb{E}[\|X_1 - X_1'\|^2]}{2n} = \frac{\sigma^2(P)}{n}. $$
In the case of probabilities on the real line we can reach the same conclusion writing $F_n$, $G$ for the distribution functions associated to $P_n$ and $Q$, respectively, and recalling that $T_1(P_n,Q) = \int_{\mathbb{R}} |F_n(x) - G(x)|\,dx =: Z$. With this representation we see that
$$ |Z - Z'| \le \int_{\mathbb{R}} |F_n(x) - \tilde F_n(x)|\,dx = \frac{1}{n}\,|X_1 - X_1'|, $$
and the Efron-Stein inequality yields again (43).

We can argue similarly to get a tighter control of the variance of
$$ \tilde T^{(M)}_1(P_n,Q) := \int_M^{\infty} |F_n(x) - G(x)|\,dx. $$
In fact, denoting now $Z = \tilde T^{(M)}_1(P_n,Q)$ and $Z'$ as above, we see that
$$ |Z - Z'| \le \frac{1}{n} \int_M^{\infty} |I(X_1 \le x) - I(X_1' \le x)|\,dx \le \frac{1}{n}\,|X_1 - X_1'|\,I\big( (X_1 > M) \cup (X_1' > M) \big). $$
Hence, we conclude that
$$ n\,\mathrm{Var}\big( \tilde T^{(M)}_1(P_n,Q) \big) \le \mathbb{E}\Big( |X_1 - X_1'|^2\,I\big( (X_1 > M) \cup (X_1' > M) \big) \Big). $$
We observe that the last upper bound vanishes as $M \to \infty$ if $P$ has a finite second moment.

Proof of Theorem 2.2. The case $p > 1$ is considered in [101]. For the case $p = 1$ we argue as in (41), but now the assumption $J_1(P) < \infty$ allows us to use weak convergence of the empirical process $\sqrt n\,(F_n - F)$ in $L^1(\mathbb{R})$ to conclude
$$ \sqrt n\,\big( T_1(P_n,Q) - T_1(P,Q) \big) \longrightarrow_w \int_{\mathbb{R}} v_{F,G}(x)\,dx, $$
with convergence of moments of order $r$ for $r < 2$. As a consequence we see that
$$ \sqrt n\,\big( \mathbb{E}\,T_1(P_n,Q) - T_1(P,Q) \big) \longrightarrow \int_{F(x) = G(x)} \mathbb{E}[|B(F(x))|]\,dx. $$

Proofs of Theorems 2.3 and 2.4.
To prove weak convergence of the process $v_n$ to $V$ as random elements in $L^p(0,1)$ we can argue as in the proof of Theorem 2.5 in [104], based, in turn, on Theorem 2.1 in [19], to conclude that there are versions of $v_n$ and Brownian bridges $B_n$ such that
$$ \int_{1/n}^{1-1/n} \Big| v_n(t) - \frac{B_n(t)}{f(F^{-1}(t))} \Big|^p dt \longrightarrow_{\Pr} 0. \quad (44) $$
The assumption $J_p(P) < \infty$ is sufficient (and necessary) to ensure that $\frac{B_n}{f \circ F^{-1}}$ is an $L^p$-valued random element, and it is also clear from it that
$$ \int_{[1/n,\,1-1/n]^c} \Big| \frac{B_n(t)}{f(F^{-1}(t))} \Big|^p dt \longrightarrow_{\Pr} 0. $$
Hence, to conclude, it only remains to show that
$$ \int_{[1/n,\,1-1/n]^c} |v_n(t)|^p\,dt \longrightarrow_{\Pr} 0. $$
But this is equivalent to showing that
$$ n^{p/2} \int_{1-1/n}^{1} |X_{(n)} - F^{-1}(t)|^p\,dt \longrightarrow_{\Pr} 0, \quad (45) $$
where $X_{(n)}$ denotes the maximum of the sample, with a similar condition for the lower tail. To check (45) we observe that
$$ \int_{1-1/n}^{1} |X_{(n)} - F^{-1}(t)|^p\,dt \lesssim \int_{1-1/n}^{1} |X_{(n)} - m_n|^p\,dt + \int_{1-1/n}^{1} |F^{-1}(t) - m_n|^p\,dt, $$
with $m_n$ denoting the median of $X_{(n)}$. This shows that (45) will follow if we prove that
$$ n^{p/2-1}\,|X_{(n)} - m_n|^p \longrightarrow_{\Pr} 0 \quad (46) $$
and
$$ n^{p/2-1} \int_{1-1/n}^{1} |F^{-1}(t) - m_n|^p\,dt \longrightarrow 0. \quad (47) $$
To prove (46) we observe that $X_{(n)} \overset{d}{=} F^{-1}(U_{(n)})$, with $U_{(n)}$ following a beta distribution with parameters $n$ and $1$, and use the variance bound in Proposition B.8 in [7] (the version centered at medians), which, in our setup, means that
$$ n^{p/2-1}\,\mathbb{E}\big[ |X_{(n)} - m_n|^p \big] \lesssim \int_0^1 \frac{(t(1-t))^{p/2}}{f^p(F^{-1}(t))}\,t^n\,dt. $$
By dominated convergence the last integral vanishes, proving (46). Finally, to prove (47) we observe first that
$$ P\big( X_{(n)} \ge F^{-1}(1 - 1/n) \big) = 1 - (1 - 1/n)^n \to 1 - \tfrac{1}{e} > 0. $$
We will use in the sequel the following inequality:
$$ \int_t^{\infty} (x - t)^p f(x)\,dx \le p^p \int_t^{\infty} \Big( \frac{1 - F(x)}{f(x)} \Big)^p f(x)\,dx, \quad (48) $$
which follows once we use integration by parts to see that
$$ \int_t^{\infty} (x - t)^p f(x)\,dx = p \int_t^{\infty} (x - t)^{p-1}\,\frac{1 - F(x)}{f(x)}\,f(x)\,dx $$
and then use Hölder's inequality (see [104] for the case $p = 2$). Using (48) (with the density of $X_{(n)}$, namely $nF(x)^{n-1} f(x)$), we obtain that
$$ n^{\frac p2 - 1}\,\mathbb{E}\Big[ \big| X_{(n)} - F^{-1}(1 - \tfrac1n) \big|^p\,I\big( X_{(n)} \ge F^{-1}(1 - \tfrac1n) \big) \Big] \lesssim \int_{1-\frac1n}^{1} \frac{(t(1-t))^{p/2}}{f^p(F^{-1}(t))}\,dt. $$
We have used the fact that $1 - t^n \le n(1-t)$ for $0 \le t \le 1$, and also that, for $t \ge 1 - \frac1n$, $t^{n-1} \ge (1 - \frac1n)^{n-1} \to \frac1e > 0$. Finiteness of $J_p(P)$ implies that the last integral vanishes, showing that $n^{\frac p2 - 1}\,\mathbb{E}\big[$
arXiv:2505.19351v1 [math.AC] 25 May 2025

Squared Linear Models

Hannah Friedman, Bernd Sturmfels and Maximilian Wiesmann

Abstract

We study statistical models that are parametrized by squares of linear forms. All critical points of the likelihood function are real and positive. There is one for each region of the projective hyperplane arrangement. We study the ideal and singular locus of the model, and we give a determinantal presentation for its likelihood correspondence. We characterize tropical degenerations of the MLE, and we describe log-normal polytopes.

1 Introduction

We consider a discrete statistical model on $n$ states which is given by linear forms $\ell_1, \ldots, \ell_n$ in real variables $x_1, \ldots, x_d$, where $n > d > 1$. The probability of observing the $i$-th state is
$$ p_i(x) = \frac{\ell_i^2(x)}{\sum_{j=1}^n \ell_j^2(x)} \quad \text{for } i = 1, 2, \ldots, n. \quad (1) $$
Here $x = (x_1, \ldots, x_d)$ is a vector of model parameters. The denominator is the partition function. We write $X$ for the projective variety in $\mathbb{P}^{n-1}$ parametrized by (1) and $I_X$ for its prime ideal in $\mathbb{R}[p_1, p_2, \ldots, p_n]$. Our aim is to explain the likelihood geometry [10] of $X$.

Example 1.1 ($d = 3$, $n = 4$). Consider the model $X$ defined by four lines in $\mathbb{P}^2$, for instance $\ell_1 = x_1$, $\ell_2 = x_2$, $\ell_3 = x_3$ and $\ell_4 = x_1 + x_2 + x_3$. Then $X$ is a Steiner surface in $\mathbb{P}^3$, by [8, Example 3.5], with ideal $I_X$ generated by the quartic
$$ p_1^4 + p_2^4 + p_3^4 + p_4^4 + 6\,(p_1^2 p_2^2 + p_1^2 p_3^2 + p_1^2 p_4^2 + p_2^2 p_3^2 + p_2^2 p_4^2 + p_3^2 p_4^2) - 4\,(p_1^3 p_2 + p_1^3 p_3 + \cdots + p_3 p_4^3) + 4\,(p_1^2 p_2 p_3 + p_1^2 p_2 p_4 + \cdots + p_2 p_3 p_4^2) - 40\,p_1 p_2 p_3 p_4. \quad (2) $$
The surface $X_{\mathbb{R}}$ is singular along three lines. Its smooth part has $16 = 4 + 3 \cdot 4$ connected components in the tetrahedron $\Delta_3 = \mathbb{P}^3_{>0}$. These components are formed by the $7 = 4 + 3$ regions of the line arrangement $\mathcal{A} = \{\ell_1, \ell_2, \ell_3, \ell_4\}$ in the plane $\mathbb{P}^2_{\mathbb{R}}$. The likelihood function $p_1(x)^{s_1} p_2(x)^{s_2} p_3(x)^{s_3} p_4(x)^{s_4}$, for given data $s_1, s_2, s_3, s_4 \in \mathbb{N}$, is positive on $\mathbb{P}^2_{\mathbb{R}} \setminus \mathcal{A}$. It has seven complex critical points, all real, one in each region.
In general, the likelihood function of a squared linear model (1) can have arbitrarily many local maxima inside the probability simplex. We next present an example to show this.

Example 1.2 (The braid arrangement). Fix $c = d + 1$, $n = \binom{c}{2}$, and let $X$ be the model
$$ p_{ij}(x) = \frac{(x_i - x_j)^2}{c\,\big(\sum_{k=1}^c x_k^2\big) - \big(\sum_{k=1}^c x_k\big)^2} \quad \text{for } 1 \le i < j \le c. \quad (3) $$
The ideal $I_X$ is generated by the $2 \times 2$ minors of the symmetric $d \times d$ matrix with diagonal entries $2p_{ic}$ and off-diagonal entries $p_{ic} + p_{jc} - p_{ij}$ for $1 \le i, j \le d$. Thus $X$ is a Veronese variety. It has dimension $c - 2$ and degree $2^{c-2}$ in $\mathbb{P}^{\binom{c}{2} - 1}$. The likelihood function has $c!/2$ complex critical points; all are real and positive. There is one such point in each region of the braid arrangement in $\mathbb{P}^{c-1}_{\mathbb{R}}$. These regions are indexed by the $c!$ permutations of $\{1, 2, \ldots, c\}$, modulo the reversal involution. We can view $X$ as a subvariety of the squared Grassmannian $\mathrm{sGr}(2, c+1)$, which was studied in [7, 9]. Both models have the same ML degree.

We now discuss the structure of this paper, and we
summarize our main results. Given the model (1), we fix data $s_1, \ldots, s_n \in \mathbb{R}_{>0}$. We are interested in the log-likelihood function
$$ x \mapsto s_1 \log \ell_1^2(x) + \cdots + s_n \log \ell_n^2(x) - (s_1 + \cdots + s_n) \log\big( \ell_1^2(x) + \cdots + \ell_n^2(x) \big). \quad (4) $$
In Section 2 we prove that all complex critical points of (4) are real, and there is precisely one critical point in each connected component of $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}$. Hence the ML degree equals the number of such regions, which is $\sum_{i=0}^{d-1} \binom{n-1}{i}$ when the hyperplane arrangement $\mathcal{A}$ is generic; see, e.g., (11). This result is Theorem 2.1. It rests on recent work of Reinke and Wang [15].

In Section 3 we study the variety $X \subset \mathbb{P}^{n-1}$, focussing on generic arrangements $\mathcal{A}$. For large $n$, the ideal $I_X$ is generated by linear forms and the $2 \times 2$ minors of a symmetric $d \times d$ matrix (Proposition 3.1). For small $n$, the variety $X$ is a projection of the Veronese variety $\nu_2(\mathbb{P}^{d-1})$. The ideal $I_X$ is complicated, and tough computations are needed. Proposition 3.4 summarizes what we currently know. The singular locus of $X$ is determined in Theorem 3.5.

Section 4 is devoted to the likelihood correspondence. This is the variety in $\mathbb{P}^{n-1} \times \mathbb{P}^{d-1}$ which parametrizes pairs $(s, x)$ where $x$ is a critical point of the log-likelihood function (4) for the data $s$. We study its bihomogeneous prime ideal using the methods in [12, Section 2]. Theorem 4.2 expresses the likelihood ideal for generic arrangements as a determinantal ideal. The ideal is minimally generated by $\binom{n}{d-2}$ polynomials of bidegree $(1, n-d+2)$ in $\mathbb{R}[s, x]$.

In Section 5 we study likelihood degenerations [1] when $\mathcal{A}$ is generic. Theorem 5.1 shows that the critical points are distinct even when $s$ is a unit vector $e_i$. We then turn to tropicalization of MLE for squared linear models. Tropical MLE for linear models appeared in [1, Section 7], and it was developed for toric models in [5]. Corollary 5.3 is an analogue of [1, Theorem 7.1], but now for all regions of the arrangement $\mathcal{A}$, not just bounded regions.
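The region count $\sum_{i=0}^{d-1} \binom{n-1}{i}$ for generic arrangements is elementary to tabulate; a two-line check (our own, not from the paper) against the seven regions of Example 1.1:

```python
# ML degree of the squared linear model for a generic arrangement of n
# hyperplanes in P^{d-1}: the number of regions, sum_{i=0}^{d-1} C(n-1, i).
from math import comb

def ml_degree_generic(d, n):
    return sum(comb(n - 1, i) for i in range(d))

# d = 3, n = 4: the seven regions of the four-line arrangement in Example 1.1.
assert ml_degree_generic(3, 4) == 7
# d = 2: n generic points cut the projective line into n regions.
assert ml_degree_generic(2, 6) == 6
```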
In Section 6 we investigate log-normal polytopes and log-Voronoi cells, as defined in [3]. For each distribution $p$ in the model $X$, the former parametrizes data $s$ such that $p$ is critical for (4), while the latter parametrizes data $s$ such that $p$ is the MLE. These convex bodies agree for linear models [2]. They disagree for squared linear models, even when both are polytopes (Example 6.5). The log-normal polytopes are characterized in Theorem 6.1.

Software and data for computational results in this paper are presented in the MathRepo collection at MPI-MiS, namely at https://mathrepo.mis.mpg.de/SquaredLinearModels.

2 Real and Positive

We will show that all complex critical points $x$ of (4) are real. This implies that $\ell_1^2(x), \ldots, \ell_n^2(x)$ are positive, and every critical point of the log-likelihood function yields a probability distribution in $\Delta_{n-1}$. A similar argument was used in [9] to show that the log-likelihood function on the squared Grassmannian has only positive critical points. By [10, Theorem 1.7], the number of critical points of (4) is the signed Euler characteristic of the very affine variety underlying our statistical model. This variety is the complement in $\mathbb{P}^{d-1}$ of the $n$ hyperplanes $V(\ell_1), \ldots, V(\ell_n)$ and the quadric $V(q)$ defined by $q = \ell_1^2 + \cdots + \ell_n^2$. Its Euler characteristic is an upper bound on the number of real critical points. One argues that there is at least one critical point per region of $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \cup_{i=1}^n V(\ell_i)$. This gives a lower bound on the number of real critical points. The punchline is that the two bounds agree.

Theorem 2.1. For generic data $s \in \mathbb{R}^n_{>0}$, all complex critical points of the log-likelihood function (4) are real, and there is one critical point in each region of $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}$. Hence every critical point on the implicit model $X \subset \mathbb{P}^{n-1}$ is positive and is a local maximum.

To prove this, we apply [15, Theorem 1.2], which states that, if the partition function $q = \ell_1^2 + \cdots + \ell_n^2$ is replaced with a generic quadric $g$, then all critical points of (4) are real. By generic we mean that the hypersurface $V(g)$ is in general position relative to the hyperplane arrangement $\mathcal{A} = \{V(\ell_1), \ldots, V(\ell_n)\}$, i.e., no nonempty intersection $\bigcap_{i \in I} V(\ell_i)$ for $I \subseteq [n]$ is tangent to $V(g)$. It therefore suffices to prove that the partition function $q$ satisfies this.

Lemma 2.2. The quadratic hypersurface defined by $q = \ell_1^2 + \cdots + \ell_n^2$ in $\mathbb{P}^{d-1}$ is in general position with respect to the hyperplane arrangement $\mathcal{A}$ that underlies the squared linear model.

Proof. We prove that $\bigcap_{i \in I} V(\ell_i)$ is not tangent to $V(q)$. Let $A$ be the $n \times d$ matrix satisfying $Ax = (\ell_i(x))_{i \in [n]}$. We assume that $A$ has rank $d$, so the arrangement $\mathcal{A}$ is essential. Otherwise, $d - \mathrm{rank}(A)$ of the variables can be eliminated, resulting in an essential arrangement in $\mathbb{P}^{\mathrm{rank}(A)-1}$. If $A_i$ denotes the $i$th row of $A$, then the gradient of the partition function is
$$ \nabla(q(x)) = 2\,\ell_1(x) A_1 + \cdots + 2\,\ell_n(x) A_n = 2\,x^T A^T A. $$
We proceed by induction on the cardinality $|I|$. We first claim that $V(\ell_i)$ is not tangent to $V(q)$ for $i = 1, \ldots, n$. This is equivalent to proving that the $2 \times d$ matrix
$$ C = \begin{pmatrix} x^T A^T A \\ A_i \end{pmatrix} $$
has rank 2 at every point $x \in V(q) \cap V(\ell_i)$. We now show this by contradiction.
Suppose there exists $\alpha \in \mathbb{C}$ such that $\alpha\,x^T A^T A = A_i$. Since $A_i$ is real and nonzero and $A^T A$ is a real invertible matrix, $\alpha x$ is also real and nonzero. But then $q(\alpha x) = \alpha^2 q(x) \neq 0$, since $q$ is a sum of squares and $\alpha \neq 0$. Thus the matrix $C$ has rank 2 for any $x \in V(q) \cap V(\ell_i)$.

We now consider an index set $I$ of cardinality $r$. After relabeling, $I = \{1, 2, \ldots, r\}$, and we assume that $\bigcap_{i=1}^r V(\ell_i)$ is non-empty. We must show that this intersection is not tangent to $V(q)$. If it were tangent, there would exist $\lambda_1, \ldots, \lambda_r$ such that $\lambda_1 A_1 + \cdots + \lambda_r A_r = x^T A^T A$. Letting $\mathrm{proj}_{\ell_i}(v)$ denote the orthogonal projection of $v$ onto the hyperplane $V(\ell_i)$, this implies
$$ 0 + \lambda_2\,\mathrm{proj}_{\ell_1} A_2 + \cdots + \lambda_r\,\mathrm{proj}_{\ell_1} A_r = \mathrm{proj}_{\ell_1}(x^T A^T A). \quad (5) $$
We now view the intersections $V(\ell_2) \cap V(\ell_1), \ldots, V(\ell_n) \cap V(\ell_1), V(q) \cap V(\ell_1)$ as subvarieties of $V(\ell_1) \cong \mathbb{P}^{d-2}$. In that ambient space, the normal vector of $V(q) \cap V(\ell_1)$ at $x$ is $\mathrm{proj}_{\ell_1}(x^T A^T A)$. Similarly, the normal vector of $V(\ell_i) \cap V(\ell_1)$ is $\mathrm{proj}_{\ell_1}(A_i)$. The condition (5) means that the intersection $\bigcap_{i=2}^r (V(\ell_i) \cap V(\ell_1))$ is tangent to $V(q) \cap V(\ell_1) = V(\ell_2^2 + \cdots + \ell_n^2) \cap V(\ell_1)$ inside $\mathbb{P}^{d-2}$. This is a contradiction to our induction hypothesis. We conclude that the $\ell_i$ are in general position relative to the quadric $q$ in the complex projective space $\mathbb{P}^{d-1}$.

Remark 2.3. The hypothesis that $\ell_1, \ldots, \ell_n$ have real coefficients is necessary in Lemma 2.2. For example, if $d = 2$, $n = 3$ and $\ell_1 = \sqrt{-1} \cdot x$, $\ell_2 = y$, $\ell_3 = x + y$, then the quadric factors as $q = \ell_1^2 + \ell_2^2 + \ell_3^2 = 2y(x + y)$. The intersection $V(q) \cap V(\ell_2) = V(\ell_2)$ is not transverse.

We will now prove the main result in this section.

Proof of Theorem 2.1. By Lemma 2.2, the quadric $V(\ell_1^2 + \cdots + \ell_n^2)$ is in general position with respect to the arrangement $\mathcal{A}$. As in the previous proof, we assume that $\mathcal{A}$ is essential, since we can always reduce the number of variables to obtain an essential arrangement. We work in an affine chart where $\ell_1(x) = 1$; the quadric is still general with respect to $V(\ell_2), \ldots, V(\ell_n)$, and the arrangement $\mathcal{A}' = \{V(\ell_2), \ldots, V(\ell_n)\}$ is essential. Indeed, since $n > d$, we may assume that the normal of $V(\ell_1)$ is in the span of the normals of $V(\ell_2), \ldots, V(\ell_n)$. On our affine chart $\{\ell_1(x) = 1\} \cong \mathbb{C}^{d-1}$, we consider the affine log-likelihood function
$$ 2 s_2 \log \ell_2(x) + \cdots + 2 s_n \log \ell_n(x) - (s_1 + \cdots + s_n) \log\big( 1 + \ell_2(x)^2 + \cdots + \ell_n(x)^2 \big). \quad (6) $$
We do not lose any critical points when moving to the affine chart, since all critical points of (4) have $\ell_1(x) \neq 0$. By [15, Theorem 1.2], the affine log-likelihood function (6) has only real critical points, and there is exactly one critical point per region of the affine hyperplane arrangement $\mathbb{R}^{d-1} \setminus \mathcal{A}'$. Hence the log-likelihood function (4) has only real critical points. It has exactly one critical point per region of the projective hyperplane arrangement $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}$.

The implicit model $X$ is a subvariety of $\mathbb{P}^{n-1}$. For a critical point $p^*$ in $X$, there is a critical point $x^* \in \mathbb{P}^{d-1}$ of (4) such that $p^*_i = \ell_i^2(x^*)$ for all $i$. Since $x^*$ is real, all complex critical points $p^*$ on $X$ are real with positive coordinates. The likelihood function $\prod_{i=1}^n \ell_i(x)^{2s_i}$ is positive on $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}$ and zero on $\mathcal{A}$, so every critical point is a local maximum.
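The braid model of Example 1.2 offers a concrete check of these counts. The following sketch (ours, not the authors' code) verifies that the probabilities (3) sum to one, that the braid arrangement has $c!/2$ regions modulo reversal, and that the characteristic polynomial $(t-1)(t-2)\cdots(t-(c-1))$ evaluates to $\pm c!$ at $t = -1$:

```python
# Checks for the braid model (3): normalization, region count, and |chi(-1)|.
from fractions import Fraction
from itertools import combinations, permutations
from math import factorial
import random

c = 5
random.seed(3)
x = [Fraction(random.randint(-20, 20), random.randint(1, 7)) for _ in range(c)]
num = {(i, j): (x[i] - x[j]) ** 2 for i, j in combinations(range(c), 2)}
den = c * sum(t ** 2 for t in x) - sum(x) ** 2
assert sum(num.values()) == den   # the p_ij of (3) sum to 1

# Regions of the braid arrangement: sign vectors of (x_i - x_j) over all
# orderings of the coordinates, identified under the reversal involution.
signs = set()
for perm in permutations(range(c)):
    rank = {v: r for r, v in enumerate(perm)}
    s = tuple(1 if rank[i] < rank[j] else -1 for i, j in combinations(range(c), 2))
    signs.add(max(s, tuple(-t for t in s)))   # canonical form modulo reversal
assert len(signs) == factorial(c) // 2        # c!/2 regions, as in Example 1.2

def chi(t):
    """Characteristic polynomial (t-1)(t-2)...(t-(c-1)) of the braid arrangement."""
    prod = 1
    for k in range(1, c):
        prod *= t - k
    return prod

assert abs(chi(-1)) == 2 * len(signs)   # Zaslavsky: |chi(-1)| central regions
```

The first assertion is the algebraic identity $\sum_{i<j} (x_i - x_j)^2 = c \sum_k x_k^2 - (\sum_k x_k)^2$, which holds for every $x$, so the partition function in (3) is exactly the normalizer.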
The number of regions of a hyperplane arrangement $\mathcal{A}$ in real projective space $\mathbb{P}^{d-1}_{\mathbb{R}}$ is computed by means of the characteristic polynomial $\chi_{\mathcal{A}}(t)$ of the matroid represented by $\mathcal{A}$. See [15, Section 2] for definitions and [6] for the state of the art on computations. Zaslavsky's Theorem states that the number of regions of the central arrangement in $\mathbb{R}^d$ equals $|\chi_{\mathcal{A}}(-1)|$.

Corollary 2.4. The ML degree of the squared linear model (1) is equal to $|\chi_{\mathcal{A}}(-1)|/2$.

Example 2.5. The braid arrangement in Example 1.2 has the characteristic polynomial $\chi_{\mathcal{A}}(t) = (t-1)(t-2) \cdots (t-(c-1))$. Hence the ML degree of the squared linear model (3) is equal to $|\chi_{\mathcal{A}}(-1)|/2 = c!/2$.

Remark 2.6. The variety $X$ has degree $2^{d-1}$ in $\mathbb{P}^{n-1}$ provided $\mathcal{A}$ contains $d+1$ hyperplanes in general position. In this case, the map $\mathbb{P}^{d-1} \to X$ is birational, and the ML degree of the parametric model agrees with that of the implicit model. To prove this, consider a generic fiber $\{(\pm x_1 : \pm x_2 : \cdots : \pm x_d)\}$ of the coordinatewise squaring map $\mathbb{P}^{d-1} \to \mathbb{P}^{d-1}$. These $2^{d-1}$ points have distinct images in $\mathbb{P}^d$ if we tag on the extra coordinate $(\pm x_1 \pm x_2 \pm \cdots \pm x_d)^2$. A
birational inverse from $X$ to $\mathbb{P}^{d-1}$ can be found using Gröbner bases, but its coordinates are generally complicated. For instance, here is such an inversion formula for Example 1.1:
$$ \frac{x_1}{x_3} = \frac{p_1^2 + p_2^2 + p_3^2 + p_4^2 - 2p_1p_2 + 6p_1p_3 - 2p_1p_4 - 2p_2p_3 - 2p_2p_4 - 2p_3p_4}{4p_3(-p_1 + p_2 - p_3 + p_4)}. $$
If the matroid of $\mathcal{A}$ is not connected, then the map $\mathbb{P}^{d-1} \to X$ is not birational, and the degree of $X$ in $\mathbb{P}^{n-1}$ is strictly smaller than $2^{d-1}$. For instance, if we replace Example 1.1 by $\ell_1 = x_1$, $\ell_2 = x_2$, $\ell_3 = x_3$ and $\ell_4 = x_1 + x_2$, then $X$ is a quadratic cone in $\mathbb{P}^3$, and its parametrization $\mathbb{P}^2 \to X$ is two-to-one.

Let $X_{>0}$ denote the intersection of the variety $X$ with the open probability simplex $\Delta_{n-1} = \mathbb{P}^{n-1}_{>0}$. In statistics, $X_{>0}$ is the natural domain of the log-likelihood function.

Proposition 2.7. If the variety $X$ is smooth and its parametrization $\mathbb{P}^{d-1} \to X$ is birational, then its positive part $X_{>0}$ is a manifold with precisely $|\chi_{\mathcal{A}}(-1)|/2$ connected components. This holds for generic arrangements $\mathcal{A}$ when $n \ge 2d - 1$, but it does not hold for $n = d + 1 \ge 4$.

Proof. Here we make a forward reference to the results on singularities in Theorem 3.5. Since $X$ is smooth, the parametrization is one-to-one outside the $n$ hyperplanes $V(\ell_i)$ of the arrangement $\mathcal{A}$. Hence the parametrization restricts to a homeomorphism between $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}$ and $X_{>0}$. The inverse is given by identifying the correctly signed square root for each $p_i(x) = \ell_i(x)^2$. The result hence follows from Zaslavsky's Theorem. The failure for $n = d+1$ was seen for the Steiner surface in Example 1.1. Here the singular locus has codimension one, and the map attaches pairs of different regions of $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}$ along the singular locus.

3 Implicit Model Description

In this section, we study the implicit representation of a squared linear model in $\mathbb{P}^{n-1}$. Our aim is to describe its homogeneous prime ideal. We first focus on the generic case. Let $X_{d,n}$ denote the model for $n$ generic linear forms in $d$ variables, and write $I_{d,n}$ for its prime ideal. Thus $I_{d,n}$ is the ideal of algebraic relations among the squares of $n$ generic linear forms in $x_1, \ldots, x_d$. That ideal is easy to describe in the regime when the number $n$ is large enough.

Fix $N = \binom{d+1}{2}$. Let $L$ be the $n \times N$ matrix whose $i$th row consists of the coefficients of $\ell_i^2(x)$. The columns of $L$ are indexed by quadratic monomials in $x_1, \ldots, x_d$. Let $[L\ p]$ denote the $n \times (N+1)$ matrix obtained from $L$ by appending a column of variables $p = (p_1 \cdots p_n)^T$. To construct the quadratic generators of $I_{d,n}$, we define $L'$ to be the $N \times N$ matrix formed by the first $N$ rows of $L$. The matrix $L'$ is invertible because the linear forms $\ell_1, \ldots, \ell_N$ are generic, so their squares span the space of all quadratic forms. We define the column vector
$$ r = L'^{-1} \cdot (p_1\ p_2\ \cdots\ p_N)^T. \quad (7) $$
The entries of $r$ are indexed by the quadratic monomials in $x_1, \ldots, x_d$. These are naturally identified with the entries of a symmetric $d \times d$ matrix $R = (r_{ij})$. Hence $R$ is a $d \times d$ matrix whose entries are linear forms in the first $N = \binom{d+1}{2}$ coordinates $p_1, \ldots, p_N$ of $\mathbb{P}^{n-1}$. Given any matrix $M$ and $t \in \mathbb{N}$, we write $I_t(M)$ for the ideal generated by the $t \times t$ minors of $M$.

Proposition 3.1. Let $n \ge N$. The prime ideal of $X_{d,n}$ equals $I_{d,n} = I_{N+1}([L\ p]) + I_2(R)$. This ideal is minimally generated by $n - \binom{d+1}{2}$ linear forms and $\frac{1}{12}(d+1)d^2(d-1)$ quadrics. The variety $X_{d,n}$ is isomorphic to the quadratic Veronese embedding of $\mathbb{P}^{d-1}$ into $\mathbb{P}^{N-1}$.

Proof. The $(N+1) \times (N+1)$ minors of $[L\ p]$ give linear forms in $p$ that vanish on the variety $X_{d,n}$. Since $L$ has rank $N$, precisely $n - N$ of these linear forms are linearly independent. By construction, the matrix $R$ has rank one on the variety $X_{d,n}$. This means that the $2 \times 2$ minors of $R$ lie in the ideal $I_{d,n}$. Together with the $n - N$ independent linear forms from $[L\ p]$ they generate a prime ideal of the correct codimension. Hence this ideal equals $I_{d,n}$.

A symmetric $d \times d$ matrix has $\binom{\binom{d}{2}+1}{2}$ minors of size two. These are linearly dependent for $d \ge 4$. Namely, they obey $\binom{d}{4}$ linear relations. Hence the space spanned by the $2 \times 2$ minors of a generic symmetric $d \times d$ matrix has dimension $\frac{1}{12}(d+1)d^2(d-1)$. This is also the dimension of the Schur module $S_{2,2}(\mathbb{C}^d)$, so our count follows from [13, Proposition 6.10.4.1].

We claim that $X_{d,n}$ is isomorphic to $\nu_2(\mathbb{P}^{d-1})$. Indeed, the coordinate ring of $X_{d,n}$ is
$$ \mathbb{R}[p_1, \ldots, p_n]/I_{d,n} = \mathbb{R}[p_1, \ldots, p_n]/(I_2(R) + I_{N+1}([L\ p])) \cong \mathbb{R}[p_1, \ldots, p_N]/I_2(R). $$
The isomorphism follows from the fact that the maximal minors of $[L\ p]$ express $p_{N+1}, \ldots, p_n$ as linear combinations of $p_1, \ldots, p_N$. Now, apply Proj to these isomorphic graded rings.

The braid arrangements in Example 1.2 yield an infinite family of models with $n = N$. The first instance in this family of quadratic Veronese varieties is the plane curve $\nu_2(\mathbb{P}^1) \subset \mathbb{P}^2$.

Example 3.2. Let $d = 2$, $n = N = 3$ and consider the linear forms $\{x, y, x+y\}$. We have
$$ L = L' = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 1 & 2 & 1 \end{pmatrix} \quad \text{and hence} \quad L'^{-1} p = \frac{1}{2} \begin{pmatrix} 2 & 0 & 0 \\ -1 & -1 & 1 \\ 0 & 2 & 0 \end{pmatrix} p = \frac{1}{2} \begin{pmatrix} 2p_1 \\ p_3 - p_1 - p_2 \\ 2p_2 \end{pmatrix}. $$
The model $X_{2,3} \cong \nu_2(\mathbb{P}^1)$ is the conic in the plane $\mathbb{P}^2$ defined by $4p_1p_2 = (p_3 - p_1 - p_2)^2$.

Example 3.3. Consider the seven lines in $\mathbb{P}^2$ defined by $\{x,\ y,\ z,\ x+y+z,\ x+2y+3z,\ x+5y+7z,\ x+11y+13z\}$. Here, $d = 3$, $n = 7$ and $N = \binom{3+1}{2} = 6$. We consider the $7 \times 7$ matrix
$$ [L\ p] = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & p_1 \\ 0 & 0 & 1 & 0 & 0 & 0 & p_2 \\ 0 & 0 & 0 & 0 & 0 & 1 & p_3 \\ 1 & 2 & 1 & 2 & 2 & 1 & p_4 \\ 1 & 4 & 4 & 6 & 12 & 9 & p_5 \\ 1 & 10 & 25 & 14 & 70 & 49 & p_6 \\ 1 & 22 & 121 & 26 & 286 & 169 & p_7 \end{pmatrix}. $$
The first six columns are labeled by $(x^2, xy, y^2, xz, yz, z^2)$. The upper left $6 \times 6$ block of $[L\ p]$ is the matrix $L'$. Its inverse yields the vector $r = L'^{-1} p$, whose coordinates are the entries in
$$ R = \begin{pmatrix} r_1 & r_2 & r_4 \\ r_2 & r_3 & r_5 \\ r_4 & r_5 & r_6 \end{pmatrix}. $$
The prime ideal $I_{d,n}$ of the model $X_{3,7} \subset \mathbb{P}^6$ is generated by one linear form and six quadrics. The linear form is the determinant of $[L\ p]$, and the quadrics are the $2 \times 2$ minors of $R$.

We now turn to the regime where $n$ is smaller than $N = \binom{d+1}{2}$. Here the ideal generators are much more complicated than in Proposition
3.1. Not much is known about the ideals of such projections in general. The following result collects what we know in small dimensions.

Proposition 3.4. Table 1 gives the degrees of the generators of $I_{d,n}$ for $d \le 6$ and $n < N$.

(d, n) Degrees
(3,4) 41
(3,5) 37
(4,5) 81
(4,6) 4151062
(4,7) 445
(4,8) 2232041
(4,9) 210
(5,6) 161
(5,7) 9711035111
(5,8) 51698
(5,9) 5286
(5,10) 3104120
(5,11) 376
(5,12) 28358
(5,13) 22132
(5,14) 235
(6,7) 321
(6,8) 151161781737081811
(6,9) 101338111668
(6,10) 769481158
(6,11) 61820
(6,12) 4785429
(6,13) 4533
(6,14) 398
(6,15) 3218
(6,16) 2103194
(6,17) 227348
(6,18) 245
(6,19) 264
(6,20) 284

Table 1: Degrees of the minimal generators (multiplicities in the exponents) for the prime ideals $I_{d,n}$. The gray entries refer not to minimal generators but to Gröbner basis elements.

Proof and Discussion. Proposition 3.4 was obtained by computer algebra, using the Gröbner basis implementation in Oscar.jl [14]. Computations were performed over the rational numbers or finite fields. We computed Gröbner bases and extracted minimal generators. The cases (6,8), (6,9), (6,10) are more challenging and will be completed in the future.

The ideal $I_{d,n}$ comprises relations among squares $\ell_i^2$ of generic linear forms $\ell_i$. If we replace these squares with generic quadrics then most entries of Table 1 remain unchanged. The smallest exception is $(d,n) = (5,7)$. Here, generic quadrics yield different numbers of minimal generators. The prime ideal $I_{5,7}$ has 107 minimal generators. There are 71 generators in degree 9, and 35 in degree 10, and one in degree 11. By contrast, the prime ideal of relations among seven generic quadrics in five variables requires 112 minimal generators, namely 70 in degree 9 and 42 in degree 10. The (5,7) entry would be $9^{70}10^{42}$ for generic quadrics.

What follows is a brief discussion of the prime ideal $I$ for the squared linear model $X$ given by an arbitrary hyperplane arrangement $\mathcal{A}$. The linear forms $\ell_1, \ell_2, \ldots, \ell_n$ are allowed to be special. The ideal $I$ was studied for small $d$ by Dey, Görlach and Kaihnsa in [8, Section 4.3]. Their analysis rests on the point configuration dual to the arrangement $\mathcal{A}$. Let $A_i = \nabla_x \ell_i(x) \in \mathbb{R}^d$ be the vector of coefficients of $\ell_i$. We view $A_1, \ldots, A_n$ as points in $\mathbb{P}^{d-1}$.

The ideal $I$ depends on the space of quadrics in $\mathbb{P}^{d-1}$ that pass through the points $A_1, \ldots, A_n$. If there are no such quadrics then we are in the generic situation of Proposition 3.1. If the $A_i$ span a unique quadric that is irreducible then $I$ is generated by $n - N - 1$ linear forms together with the nonlinear equations of $X_{d,N-1}$. For instance, if $d = 3$ and $A_1, A_2, \ldots, A_n$ lie on a conic in $\mathbb{P}^2$ then $I$ is generated by $n - 5$ linear forms and 7 cubics. This is case (a) in [8, Theorem 4.9 (ii)]. If the $A_i$ lie on several quadratic hypersurfaces then we are led to a case distinction as in [8, Theorem 4.9 (iii)]. See also [8, Section 4.2] for an interesting connection to
https://arxiv.org/abs/2505.19351v1
the variety of symmetric matrices with degenerate eigenvalues.

We now turn to the question whether the generic model $X_{d,n}$ is smooth. This happens when $n$ is large relative to $d$. For instance, if $d = 3$ then the model is singular for $n = 4$, by Example 1.1, but it is smooth for $n \ge 5$. This threshold is explained by the following theorem.

Theorem 3.5. If $n > 2d-2$ then the generic squared linear model $X_{d,n}$ is smooth in $\mathbb{P}^{n-1}$. If $n \le 2d-2$ then $X_{d,n}$ is singular, and the singular locus has dimension $2d-n-1$. It is the image of all linear subspaces in $\mathbb{P}^{d-1}$ on which the map $x \mapsto (\ell_1^2(x) : \cdots : \ell_n^2(x))$ is not injective. In particular, for $n = 2d-2$ the singular locus of $X_{d,n}$ consists of $\frac{1}{2}\binom{2d-2}{d-1}$ lines.

Proof. For $n \ge N = \binom{d+1}{2}$, the variety $X_{d,n}$ is smooth by Proposition 3.1. Next suppose $2d-2 < n < N$. Then $X_{d,n}$ is the image of $\nu_2(\mathbb{P}^{d-1}) \subset \mathbb{P}^{N-1}$ under a generic linear projection $\pi : \mathbb{P}^{N-1} \dashrightarrow \mathbb{P}^{n-1}$. The center $E$ of $\pi$ has dimension $N-n-1$. The secant variety $\sigma_2(\nu_2(\mathbb{P}^{d-1}))$ has dimension $2d-2$, by [13, Exercise 5.1.2.4]. Therefore, since $2d-2 < n$, a general linear space $E$ does not meet the secant variety of $\nu_2(\mathbb{P}^{d-1})$. Thus, by [16, Corollary 2.7], $\pi$ defines an isomorphic embedding of $\mathbb{P}^{d-1}$ into $\mathbb{P}^{n-1}$, and smoothness is preserved.

Now, let $n \le 2d-2$. Since $\mathcal{A}$ is generic, the rank of the Jacobian of our map is constant on $\mathbb{P}^{d-1} \setminus \mathcal{A}$. All singularities of $X_{d,n}$ arise from pairs $\{x, x'\}$ of distinct points in $\mathbb{P}^{d-1}$ which have the same image in $X_{d,n} \subset \mathbb{P}^{n-1}$. Since the linear map $A : \mathbb{P}^{d-1} \to \mathbb{P}^{n-1}$, $x \mapsto (\ell_i(x))_{i\in[n]}$, is injective, these points satisfy $Ax \ne Ax'$. Hence $Ax$ and $Ax'$ differ only by sign flips. Consider any partition $I \sqcup J$ of $[n] = \{1, \dots, n\}$. Let $A_{I,J}$ be the matrix whose $i$th row is $A_i$ if $i \in I$ and $-A_i$ if $i \in J$. Then $(Ax)^2 = (Ax')^2$ exactly when $Ax' = A_{I,J}\,x$ for some partition $I \sqcup J = [n]$. Note that if $|I| \ge d$ or $|J| \ge d$, then $x'$ is forced to equal $x$. Therefore the model parametrization is non-injective precisely on the subspaces $\ker(BA_{I,J}) \subset \mathbb{P}^{d-1}$, where the partition $I \sqcup J = [n]$ satisfies $|I|, |J| \le d-1$. Here $B$ denotes an $(n-d) \times n$ matrix with $\ker(B) = \operatorname{image}(A)$.
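The count in the boundary case $n = 2d-2$ of Theorem 3.5 is a simple binomial expression, and it is easy to tabulate. A quick numerical sanity check (a sketch in Python; the function name is ours):

```python
from math import comb

def singular_line_count(d):
    # Theorem 3.5, boundary case n = 2d-2: the singular locus of X_{d,2d-2}
    # consists of (1/2) * C(2d-2, d-1) lines, one for each unordered
    # partition I ⊔ J = [2d-2] with |I| = |J| = d-1 (swapping I and J
    # gives the same line, hence the factor 1/2).
    return comb(2 * d - 2, d - 1) // 2

print([singular_line_count(d) for d in range(3, 7)])  # → [3, 10, 35, 126]
```

For $d = 4$ this gives ten lines, one for each partition of a six-element set into two triples.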
The subspaces $\ker(BA_{I,J})$ have dimension $2d-n-1$, as desired. In the boundary case $n = 2d-2$, we have $|I| = |J| = d-1$. There are precisely $\frac{1}{2}\binom{2d-2}{d-1}$ such partitions $I \sqcup J = [n]$. Each of them contributes a line to the singular locus of $X_{d,n}$.

Example 3.6 ($d=4$). The variety $X_{4,n}$ is smooth for $n \ge 7$. For $n = 6$, our model is a threefold of degree 8 in $\mathbb{P}^5$, defined by one quartic, ten quintics and two sextics. The singular locus of $X_{4,6}$ consists of ten lines, one for each partition of $\{1,2,\dots,6\}$ into two triples.

For $n = 5$, we take $\ell_i = x_i$ for $i = 1,2,3,4$ and $\ell_5 = x_1+x_2+x_3+x_4$. Then $I_{4,5}$ is generated by one polynomial of degree 8 with 495 terms. The singular locus of the threefold $X_{4,5}$ decomposes into ten irreducible surfaces in $\mathbb{P}^4$. Each of these is a quadratic cone, like
$$V\bigl(p_1-p_2,\; p_3^2+p_4^2+p_5^2-2p_3p_4-2p_3p_5-2p_4p_5\bigr) \;=\; \text{image of }\; V(\ell_1,\ell_2) + V(\ell_3,\ell_4,\ell_5).$$
It would be interesting to derive a combinatorial rule for the singular locus of $X$ in general.

4 Likelihood Correspondence

In this section, we study the likelihood correspondence of the squared linear model given by $\ell_1, \dots, \ell_n$. The likelihood correspondence is a central object
in algebraic statistics [10, 12]. It captures the relationship between data and critical points of the log-likelihood function
$$\Lambda_{\mathcal{A}}(s,x) \;=\; \sum_{i=1}^n s_i \log(p_i(x)) \;=\; \sum_{i=1}^n s_i \log(\ell_i^2(x)) \;-\; \log(q(x)) \sum_{j=1}^n s_j. \qquad (8)$$
As before, $q = \ell_1^2 + \cdots + \ell_n^2$ is the partition function, and the parameters $s_1, \dots, s_n$ represent the data. The partial derivatives of $\Lambda_{\mathcal{A}}$ with respect to $x_1, \dots, x_d$ are homogeneous rational functions. These depend linearly on $s_1, \dots, s_n$. Our object of interest is the variety in the product space of all pairs $(s,x)$ which is defined by setting these rational functions to zero:

Definition 4.1. The likelihood correspondence $\mathcal{L}_{\mathcal{A}}$ is the Zariski closure in $\mathbb{P}^{n-1} \times \mathbb{P}^{d-1}$ of
$$\Bigl\{ (s,x) \in \mathbb{P}^{n-1} \times \mathbb{P}^{d-1} \;:\; \ell_1(x) \cdots \ell_n(x)\, q(x) \ne 0,\; p(x) \in X_{\mathrm{reg}}\ \text{and}\ \tfrac{\partial \Lambda_{\mathcal{A}}}{\partial x_i}(s,x) = 0 \;\; \forall i \in [d] \Bigr\}.$$
The likelihood ideal $I_{\mathcal{A}} \subset \mathbb{R}[s_1,\dots,s_n,x_1,\dots,x_d]$ is the bihomogeneous prime ideal of $\mathcal{L}_{\mathcal{A}}$.

The projection $\mathbb{P}^{n-1} \times \mathbb{P}^{d-1} \to \mathbb{P}^{n-1}$, $(s,x) \mapsto s$, induces a finite-to-one map $\mathcal{L}_{\mathcal{A}} \to \mathbb{P}^{n-1}$. The degree of this map is the ML degree. By Corollary 2.4, the ML degree is equal to the number of regions of the hyperplane arrangement. Thus, all fibers are fully real. Each connected component of $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}$ is represented by one point in each fiber of $\mathcal{L}_{\mathcal{A}} \to \mathbb{P}^{n-1}$.

Let $A$ denote the $n \times d$ Jacobian matrix of our linear forms. Thus, the rows of $A$ are the gradient vectors $A_i = \nabla_x \ell_i(x)$ for $i = 1, \dots, n$. In the following theorem, we assume that $A$ is generic. In particular, the rank $d$ matroid of $A$ is uniform, and $q$ is transverse to the generic arrangement $\mathcal{A}$ in $\mathbb{P}^{d-1}$. Fix any $(n-d) \times n$ matrix $B$ whose kernel equals the image of $A$. The $n-d$ rows of $B$ span the linear relations among the linear forms $\ell_1(x), \ell_2(x), \dots, \ell_n(x)$. Our main result in this section is the following explicit description of the prime ideal $I_{\mathcal{A}}$.

Theorem 4.2. The likelihood ideal for $X_{d,n}$ is minimally generated by the maximal minors of
$$M \;=\; \begin{pmatrix} s_1 & s_2 & \cdots & s_n \\ \ell_1^2 & \ell_2^2 & \cdots & \ell_n^2 \\ & B \cdot \operatorname{diag}(\ell_1,\dots,\ell_n) & \end{pmatrix}. \qquad (9)$$
This matrix has $n-d+2$ rows and $n$ columns, and its $\binom{n}{d-2}$ minors all have degree $(1,\, n-d+2)$.

Proof.
We use the description of the likelihood ideal for parametric models that appears in [11, Definition 2.3] and in [12, Section 1]. In those sources, the authors consider homogeneous polynomials $f_1, f_2, \dots, f_m$. When these define an SNC arrangement, the construction in [12, equation (7)] gives a matrix $Q = Q_{s\setminus 1}$ whose maximal minors generate the likelihood ideal. In that construction, the first polynomial $f_1$ is special. Namely, the subscript "$\setminus 1$" indicates that the column for $f_1$ was deleted from another matrix $Q_s$. In what follows, the partition function $q$ plays the role of $f_1$, and the linear forms $\ell_1, \dots, \ell_n$ play the role of $f_2, \dots, f_m$.

The log-likelihood function of the squared linear model $X_{d,n}$ is shown in (8). This function coincides with the log-likelihood function of the (unsquared) arrangement $\mathcal{A}' = \{q, \ell_1, \ell_2, \dots, \ell_n\}$ after substituting $s_0 \mapsto -(s_1+\cdots+s_n)$ and $s_i \mapsto 2s_i$ for $i = 1, \dots, n$. We claim that
the arrangement $\mathcal{A}'$ is strict normal crossing (SNC) in the sense of [12]. This means that all intersections of hypersurfaces in $\mathcal{A}'$ are smooth of the expected dimension. Indeed, by Lemma 2.2, no intersection of the hyperplanes is tangent to $V(q)$. It remains to show that the hyperplanes meet $V(q)$ in the expected dimension. Without loss of generality, we may work in an affine chart and transform coordinates so that $\ell_i(x) = x_i$ for $i = 1, \dots, d$. Assume there exists $x \in V(q) \cap \bigcap_{i=1}^{d-1} V(\ell_i)$. Then $x = (0,\dots,0,x_d)$ and $q(x) = \alpha x_d^2$ for some nonzero constant $\alpha$. Hence $x \in V(\ell_d)$, so $x$ is in the intersection of $d$ hyperplanes. This is a contradiction to the $\ell_i$ being in general position. Therefore, $\mathcal{A}'$ is strict normal crossing.

It now follows from [12, Corollary 2.4] that the likelihood ideal $I_{\mathcal{A}}$ is generated by the maximal minors of the following matrix, which has $n+2$ rows and $n+d$ columns:
$$Q \;=\; \begin{pmatrix}
0 & 0 & \cdots & 0 & \partial_{x_1} q(x) & \cdots & \partial_{x_d} q(x) \\
\ell_1(x) & 0 & \cdots & 0 & \partial_{x_1} \ell_1(x) & \cdots & \partial_{x_d} \ell_1(x) \\
0 & \ell_2(x) & \cdots & 0 & \partial_{x_1} \ell_2(x) & \cdots & \partial_{x_d} \ell_2(x) \\
\vdots & & \ddots & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \ell_n(x) & \partial_{x_1} \ell_n(x) & \cdots & \partial_{x_d} \ell_n(x) \\
s_1 & s_2 & \cdots & s_n & 0 & \cdots & 0
\end{pmatrix}. \qquad (10)$$
On the right, we see the $n \times d$ Jacobian matrix $A = (\partial_{x_i} \ell_j(x))$ of the $n$ linear forms. And the row above this is the gradient $\nabla q(x) = 2x^T A^T A$ of the quadratic form $q(x)$. The image of $A$ spans the kernel of the $(n-d) \times n$ matrix $B$, so the $n \times n$ matrix $\binom{B}{A^T}$ is invertible. The following $(n+2) \times (n+2)$ matrix has an inverse with polynomial entries:
$$T \;=\; \begin{pmatrix}
0 & 0 \,\cdots\, 0 & 1 \\
-1 & \ell_1(x) \,\cdots\, \ell_n(x) & 0 \\
0 & B & 0 \\
0 & A^T & 0
\end{pmatrix}.$$
Hence the ideal of maximal minors of $Q$ agrees with that of the $(n+2) \times (n+d)$ matrix
$$T \cdot Q \;=\; \begin{pmatrix}
s_1 \; s_2 \,\cdots\, s_n & 0 \;\cdots\; 0 \\
\ell_1^2 \; \ell_2^2 \,\cdots\, \ell_n^2 & -x^T A^T A \\
B \cdot \operatorname{diag}(\ell_1,\dots,\ell_n) & 0 \;\cdots\; 0 \\
A^T \cdot \operatorname{diag}(\ell_1,\dots,\ell_n) & A^T A
\end{pmatrix}.$$
Since the rows of $A^T$ are linearly independent, the symmetric $d \times d$ matrix $A^T A$ is invertible. Hence we can replace $T \cdot Q$ by its submatrix given by the first $n$ columns and the first $2+n-d$ rows. That submatrix is precisely the $(n-d+2) \times n$ matrix in (9).
We have thus shown that $I_{\mathcal{A}}$ is generated by the maximal minors of (9). All generators have the same degree $(1,\, n-d+2)$. And, unlike $Q$, the matrix (9) has no constant entries. This ensures that the $\binom{n}{d-2}$ maximal minors of (9) are minimal generators for the prime ideal $I_{\mathcal{A}}$.

Our argument confirms that the ML degree of $X_{d,n}$ equals the ML degree of the arrangement $\mathcal{A}'$. According to [12], the latter is the coefficient of $z^{d-1}$ in the generating function
$$\frac{1}{(1-z)^{n-d}(1-2z)}. \qquad (11)$$
This coefficient is found to be $\mu = \sum_{i=0}^{d-1} \binom{n-1}{i}$, which is the number of regions in $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}$.

Example 4.3 ($d=3$, $n=4$). Let $\mathcal{A}$ be the arrangement of four lines in Example 1.1. Then
$$A \;=\; \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{pmatrix}^{\!T} \quad\text{and}\quad B \;=\; \begin{pmatrix} 1 & 1 & 1 & -1 \end{pmatrix}.$$
The likelihood
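The two descriptions of the ML degree, the binomial sum $\mu$ and the coefficient of $z^{d-1}$ in (11), can be cross-checked numerically. A small sketch (our own function names; it assumes $n > d$, so the convolution below is well defined):

```python
from math import comb

def ml_degree_sum(d, n):
    # mu = sum_{i=0}^{d-1} C(n-1, i): the number of regions of a generic
    # arrangement of n hyperplanes in real projective (d-1)-space
    return sum(comb(n - 1, i) for i in range(d))

def ml_degree_coeff(d, n):
    # coefficient of z^{d-1} in 1/((1-z)^{n-d} (1-2z)): convolve the
    # coefficients C(a + n-d-1, a) of 1/(1-z)^{n-d} with the
    # coefficients 2^b of 1/(1-2z); requires n > d
    m = n - d
    return sum(comb(a + m - 1, a) * 2 ** (d - 1 - a) for a in range(d))

assert ml_degree_sum(3, 4) == ml_degree_coeff(3, 4) == 7   # Steiner surface case
assert all(ml_degree_sum(d, n) == ml_degree_coeff(d, n)
           for d in range(2, 7) for n in range(d + 1, d + 8))
```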
ideal $I_{\mathcal{A}}$ in $\mathbb{R}[s_1,s_2,s_3,s_4,x_1,x_2,x_3]$ is generated by the $3 \times 3$ minors of
$$M \;=\; \begin{pmatrix} s_1 & s_2 & s_3 & s_4 \\ \ell_1^2 & \ell_2^2 & \ell_3^2 & \ell_4^2 \\ \ell_1 & \ell_2 & \ell_3 & -\ell_4 \end{pmatrix}.$$
The likelihood correspondence $\mathcal{L}_{\mathcal{A}}$ is a threefold in $\mathbb{P}^3 \times \mathbb{P}^2$. The map onto $\mathbb{P}^3$ is 7-to-1.

We conclude with a remark on the non-generic case. If $\mathcal{A}'$ is not SNC, then the maximal minors of the matrices (9) and (10) generate the same ideal $I$ in $\mathbb{R}[s,x]$. However, that ideal is strictly contained in the likelihood ideal $I_{\mathcal{A}}$. We can compute $I_{\mathcal{A}}$ from $I$ by saturation with respect to $\ell_1 \ell_2 \cdots \ell_n$. This follows from Proposition 2.9 and Remark 2.10 of [11].

Example 4.4 (The braid arrangement). Let $d=3$, $n=6$, $c=4$ and consider the model in Example 1.2. Setting $x_1 = x$, $x_2 = y$, $x_3 = z$, $x_4 = 0$ and relabeling, the matrix (9) equals
$$M \;=\; \begin{pmatrix}
s_{12} & s_{13} & s_{14} & s_{23} & s_{24} & s_{34} \\
x^2 & y^2 & z^2 & (x-y)^2 & (x-z)^2 & (y-z)^2 \\
-x & y & 0 & x-y & 0 & 0 \\
-x & 0 & z & 0 & x-z & 0 \\
0 & -y & z & 0 & 0 & y-z
\end{pmatrix}.$$
The six maximal minors of $M$ generate a radical ideal $I$. It has the prime decomposition
$$I \;=\; I_{\mathcal{A}} \,\cap\, \langle x, y \rangle \,\cap\, \langle x, z \rangle \,\cap\, \langle y, z \rangle \,\cap\, \langle x-y,\; x-z \rangle.$$
The likelihood ideal $I_{\mathcal{A}}$ has three minimal generators, of bidegrees $(1,3)$, $(1,4)$ and $(1,5)$.

5 Likelihood Degenerations

In this section, we examine the likelihood correspondence of squared linear models over degenerate data points $s$. In particular, we determine tropical limits of the critical points of (4). This extends work on tropical MLE for linear models in [1, 4] and toric models in [5].

The following formulation of our MLE problem will be used in this section. Our model is specified by two matrices $A \in \mathbb{R}^{n \times d}$ and $B \in \mathbb{R}^{(n-d) \times n}$ such that the image of $A$ equals the kernel of $B$. We shall assume that the pair $(A,B)$ is generic, in a sense to be made precise. For our computations, we use the $n$ unknowns $y_i = \sqrt{p_i} = \ell_i(x)$, $i = 1,2,\dots,n$. Thus we work in projective space $\mathbb{P}^{n-1}$ with coordinates $y = (y_1 : y_2 : \cdots : y_n)$. The hyperplane arrangement $\mathcal{A}$ is the restriction to $\operatorname{image}(A) \simeq \mathbb{P}^{d-1}$ of the coordinate hyperplanes in $\mathbb{P}^{n-1}$.
According to Theorem 4.2, our task is to solve the system of polynomial equations
$$B \cdot y = 0 \quad\text{and}\quad \operatorname{rank}\begin{pmatrix} s_1 & s_2 & \cdots & s_n \\ y_1^2 & y_2^2 & \cdots & y_n^2 \\ & B \cdot \operatorname{diag}(y_1,\dots,y_n) & \end{pmatrix} \;\le\; n-d+1. \qquad (12)$$
For generic data $s = (s_1,\dots,s_n) \in \mathbb{P}^{n-1}_{\mathbb{R}}$, the number of solutions to (12) is $\mu = \sum_{i=0}^{d-1} \binom{n-1}{i}$.

We set $[n] = \{1,2,\dots,n\}$ and we write $e_i$ for the $i$th standard basis vector in $\mathbb{R}^n$. In what follows next, we specialize $s = e_i$. This means that $s_i = 1$ and $s_j = 0$ for $j \in [n] \setminus \{i\}$.

Theorem 5.1. Suppose that $(A,B)$ is generic and set $s = e_i$. Then (12) defines a radical ideal in $\mathbb{R}[y_1,\dots,y_n]$, which has $\mu$ zeros $y$ in $\mathbb{P}^{n-1}_{\mathbb{R}}$. The supports of the $\mu$ zeros are distinct. Namely, $J = \{j \in [n] : y_j = 0\}$ ranges over all subsets of cardinality at most $d-1$ in $[n] \setminus \{i\}$.

Proof. Let $\mathcal{A}^{(i)}$ be the affine arrangement of $n-1$ hyperplanes obtained by setting $y_i = 1$. Let $b_j$ denote the $j$th column of the matrix $B$. We consider the $(n-d+1) \times (n-1)$ matrix
$$B^{(i)} \;=\; \begin{pmatrix} y_1 & \cdots & y_{i-1} & y_{i+1} & \cdots & y_n \\ b_1 & \cdots & b_{i-1} & b_{i+1} & \cdots & b_n \end{pmatrix}.$$
We also introduce the diagonal matrix $Y^{(i)} = \operatorname{diag}(y_1,\dots,y_{i-1},y_{i+1},\dots,y_n)$. Since $s = e_i$, the maximal minors of the matrix in (12) are precisely the maximal minors of $B^{(i)}Y^{(i)}$. After relabeling we may assume $i = n$. Each maximal minor of $B^{(n)}Y^{(n)}$ factors as $\det(B^{(n)}_C)\, y_{c_1} y_{c_2} \cdots y_{c_{n-d+1}}$, where $B^{(n)}_C$ is the submatrix of $B^{(n)}$ with column indices $C = \{c_1,\dots,c_{n-d+1}\} \subseteq [n-1]$. The ideal of maximal minors is Cohen–Macaulay of codimension $d-1$ and degree $\mu$. We claim that it is radical and that its prime decomposition equals
$$I_{n-d+1}(B^{(n)}Y^{(n)}) \;=\; \bigcap_J \Bigl( \langle y_j : j \in J \rangle \,+\, I_{n-d+1}(B^{(n)}_{[n-1]\setminus J}) \Bigr). \qquad (13)$$
Here, the intersection is taken over all flats $J \subseteq [n-1]$ of the affine arrangement $\mathcal{A}^{(n)}$. Since $B$ is generic, there are $\mu$ flats, given by the subsets $J$ of cardinality $\le d-1$. The factorization of minors above shows that the radical of the ideal on the left in (13) equals the intersection on the right. Since the left ideal is Cohen–Macaulay of degree $\mu$, the ideals must be equal.

We now impose the $n-d$ linear equations $B \cdot y = 0$ on each of the $\mu$ linear spaces of codimension $d-1$ on the right in (13). Since $B$ is generic, this has a unique solution $y \in \mathbb{P}^{n-1}$, and this solution satisfies $y_j \ne 0$ for $j \notin J$. More precisely, the coordinates are
$$y_n = 1, \qquad y_j = 0 \;\text{ for } j \in J, \qquad y_{[n-1]\setminus J} \;=\; -B_{[n-1]\setminus J}^T \bigl( B_{[n-1]\setminus J}\, B_{[n-1]\setminus J}^T \bigr)^{-1} b_n.$$
For generic $B$, $y_{[n-1]\setminus J}$ has non-zero coordinates, and $B \cdot y = B_{[n-1]\setminus J} \cdot y_{[n-1]\setminus J} + b_n = 0$. We have shown that the linear space $\{y \in \mathbb{P}^{n-1} : B \cdot y = 0\}$ intersects the $\mu$ irreducible components in distinct reduced points. Therefore, the entries of $B \cdot y$ form a regular sequence modulo $I_{n-d+1}(B^{(n)}Y^{(n)})$. Adding this regular sequence yields a Cohen–Macaulay ideal of Krull dimension one in $\mathbb{R}[y_1,\dots,y_n]$. This ideal is radical, and it equals the ideal in (12).

Example 5.2 ($d=2$, $n=4$). The rank 2 matroid on $[4]$ is uniform for the model given by
$$A^T \;=\; \begin{pmatrix} 1 & 1 & 1 & 0 \\ 0 & 1 & 2 & 1 \end{pmatrix} \quad\text{and}\quad B \;=\; \begin{pmatrix} 1 & -2 & 1 & 0 \\ 1 & -1 & 0 & 1 \end{pmatrix}.$$
Theorem 5.1 concerns the special data $s = (1,0,0,0)$. The relevant minor of $M$ in (9) is
$$\det(B^{(1)}Y^{(1)}) \;=\; \det \begin{pmatrix} y_2^2 & y_3^2 & y_4^2 \\ -2y_2 & y_3 & 0 \\ -y_2 & 0 & y_4 \end{pmatrix}.$$
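The factorization $\det(B^{(1)}Y^{(1)}) = y_2 y_3 y_4 (y_2 + 2y_3 + y_4)$ used in Example 5.2 can be sanity-checked by evaluating both sides at integer points (a sketch; the helper `det3` is ours):

```python
def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

for y2, y3, y4 in [(1, 2, 3), (2, 5, 7), (-3, 1, 4)]:
    M = [[y2 * y2, y3 * y3, y4 * y4],
         [-2 * y2, y3,      0      ],
         [-y2,     0,       y4     ]]
    # det(B^(1) Y^(1)) should equal y2*y3*y4*(y2 + 2*y3 + y4)
    assert det3(M) == y2 * y3 * y4 * (y2 + 2 * y3 + y4)
```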
The radical ideal $I_3(B^{(1)}Y^{(1)}) = \langle y_2 \rangle \cap \langle y_3 \rangle \cap \langle y_4 \rangle \cap \langle y_2 + 2y_3 + y_4 \rangle$ gives $\mu = 4$ planes in $\mathbb{P}^3$, but two of these planes meet the line $\{y \in \mathbb{P}^3 : B \cdot y = 0\}$ in the same point. So the model is not generic in the sense of Theorem 5.1. The likelihood ideal at $s = e_1$ is not radical:
$$I_3(B^{(1)}Y^{(1)}) + \langle B \cdot y \rangle \;=\; \langle y_2,\, y_1+y_4,\, y_3-y_4 \rangle \,\cap\, \langle y_3^2,\, y_1-y_3+2y_4,\, y_2-y_3+y_4 \rangle \,\cap\, \langle y_4,\, y_1-y_3,\, y_2-y_3 \rangle.$$
One checks that this ideal becomes radical for any small perturbation of the model $(A,B)$.

We now discuss tropical MLE. By Theorem 2.1, all solutions to (12) are real, and there is one solution in each region of $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}$. This statement remains valid for any real closed field, such as the field $\mathcal{R} = \mathbb{R}\{\{\epsilon\}\}$ of real Puiseux series. If our data vector $s$ has coordinates in $\mathcal{R}$, the system (12) has $\mu$ solutions with coordinates in $\mathcal{R}$. The coordinatewise valuation $w = \operatorname{val}(s) \in \mathbb{Q}^n$ of $s$ is the tropical data vector. For any solution $y$ of (12) in $\mathbb{P}^{n-1}_{\mathcal{R}}$, we also record its valuation $z = \operatorname{val}(y) \in \mathbb{Q}^n / \mathbb{Q}\mathbb{1}$. We present a formula that writes $z$ in terms of $w$.
Corollary 5.3. Fix a generic model $(A,B)$ and $i \in [n]$. Given data $s \in \mathcal{R}^n$, where $w = \operatorname{val}(s)$ satisfies $w_i < w_j$ for $j \in [n] \setminus \{i\}$, the $\mu$ tropical critical points $z$ are distinct. They are
$$z \;=\; \sum_{j \in J} (w_j - w_i)\, e_j, \qquad (14)$$
where $J$ runs over the $\mu$ subsets of cardinality at most $d-1$ in $[n] \setminus \{i\}$.

Before we come to the proof, we go over a detailed example that explains the assertion.

Example 5.4 ($d=3$, $n=4$). Let $B = \begin{pmatrix} 1 & 1 & 1 & -1 \end{pmatrix}$. This is the Steiner surface model in Examples 1.1 and 4.3. Its ML degree is $\mu = 7$. We examine Theorem 5.1 for $i = 1$, so the data vector is $s = (1,0,0,0)$. The ideal in (12) is radical. Its seven solutions in $\mathbb{P}^3_{\mathbb{R}}$ are
$$y(0) \;=\; (1:0:0:1),\; (1:0:-1:0),\; (1:-1:0:0),\; (2:0:-1:1),\; (2:-1:0:1),\; (2:-1:-1:0),\; (3:-1:-1:1).$$
We now introduce a small parameter $\epsilon > 0$, and we fix the data vector $s = (1,\, \epsilon^3,\, \epsilon^4,\, \epsilon^5)$. Each solution $y(\epsilon) \in \mathbb{P}^3_{\mathcal{R}}$ converges to one of the $y(0)$ above. The seven solutions $y(\epsilon)$ are
$$(1 : 2\epsilon^3+6\epsilon^6-2\epsilon^7+2\epsilon^8 : 2\epsilon^4-2\epsilon^7-6\epsilon^8+2\epsilon^9 : 1+2\epsilon^3+2\epsilon^4-6\epsilon^6-4\epsilon^7),$$
$$(1 : 2\epsilon^3-6\epsilon^6+2\epsilon^7-2\epsilon^8 : -1-2\epsilon^3-2\epsilon^5+6\epsilon^6-2\epsilon^7 : -2\epsilon^5+2\epsilon^8-2\epsilon^9+6\epsilon^{10}),$$
$$(1 : -1-2\epsilon^4-2\epsilon^5-2\epsilon^7+4\epsilon^8 : 2\epsilon^4+2\epsilon^7-6\epsilon^8-2\epsilon^9 : -2\epsilon^5-2\epsilon^8+2\epsilon^9+6\epsilon^{10}),$$
$$(2 : 6\epsilon^3-48\epsilon^6+12\epsilon^7+12\epsilon^8+816\epsilon^9 : -1-3\epsilon^3-3\epsilon^4+3\epsilon^5+24\epsilon^6 : 1+3\epsilon^3-3\epsilon^4+3\epsilon^5-24\epsilon^6),$$
$$(2 : -1-3\epsilon^3-3\epsilon^4+3\epsilon^5+12\epsilon^6 : 6\epsilon^4+12\epsilon^7-48\epsilon^8+12\epsilon^9-12\epsilon^{10} : 1-3\epsilon^3+3\epsilon^4+3\epsilon^5+12\epsilon^6),$$
$$(2 : -1-3\epsilon^3+3\epsilon^4-3\epsilon^5+12\epsilon^6 : -1+3\epsilon^3-3\epsilon^4-3\epsilon^5-12\epsilon^6 : -6\epsilon^5-12\epsilon^8-12\epsilon^9+48\epsilon^{10}),$$
$$(3 : -1-8\epsilon^3+4\epsilon^4+4\epsilon^5+72\epsilon^6 : -1+4\epsilon^3-8\epsilon^4+4\epsilon^5-36\epsilon^6 : 1-4\epsilon^3-4\epsilon^4+8\epsilon^5+36\epsilon^6).$$
Each coordinate is a convergent power series in $\mathbb{Z}[[\epsilon]]$. We computed these with the command puiseux in Maple. Passing to valuations, the tropical data vector is $w = \operatorname{val}(s) = (0,3,4,5)$. The tropical MLE is given by the valuations of the seven solutions above. We see that
$$z = \operatorname{val}(y(\epsilon)) \;=\; (0:3:4:0),\; (0:3:0:5),\; (0:0:4:5),\; (0:3:0:0),\; (0:0:4:0),\; (0:0:0:5),\; (0:0:0:0).$$
These are the $\mu = 7$ tropical critical points, written in terms of $w$ as in (14).

Proof of Corollary 5.3. We may assume $s_i = 1$ and $w_i = 0$. Then $s_1, \dots, s_{i-1}, s_{i+1}, \dots, s_n$ have positive valuations.
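Formula (14) can be enumerated directly. For the data of Example 5.4 ($i = 1$, $w = (0,3,4,5)$, $d = 3$), the following sketch (0-based indices; the function name is ours) reproduces the seven tropical critical points:

```python
from itertools import combinations

def tropical_critical_points(w, i, d):
    # z = sum_{j in J} (w_j - w_i) e_j, where J ranges over all subsets
    # of cardinality at most d-1 of [n] \ {i}, as in formula (14)
    n = len(w)
    others = [j for j in range(n) if j != i]
    points = []
    for size in range(d):
        for J in combinations(others, size):
            z = [0] * n
            for j in J:
                z[j] = w[j] - w[i]
            points.append(tuple(z))
    return points

pts = tropical_critical_points([0, 3, 4, 5], i=0, d=3)
# the 7 points: (0,0,0,0), (0,3,0,0), (0,0,4,0), (0,0,0,5),
#               (0,3,4,0), (0,3,0,5), (0,0,4,5)
```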
We write $w = \sum_{j \in [n]} w_j e_j = \sum_{j \in [n]} \operatorname{val}(s_j)\, e_j$ for the tropical data. The system (12) has $\mu$ distinct solutions $y(\epsilon)$ in $\mathbb{P}^{n-1}_{\mathcal{R}}$. Their special fibers $y(0) \in \mathbb{P}^{n-1}_{\mathbb{R}}$ are the zeros of the prime components in (13). Hence there are $\mu$ distinct tropical critical points.

Suppose that $y(\epsilon)$ is a tropical critical point with valuation $z = \operatorname{val}(y(\epsilon))$ and let $J = \{j \in [n] \,|\, z_j > 0\}$. Because $J = \{j \in [n] \,|\, (y(0))_j = 0\}$, Theorem 5.1 implies that $|J| \le d-1$. By the definition of $J$, $z_j = 0$ if $j \notin J$. Our claim states that $z_j = w_j$ if $j \in J$.

Fix $j \in J$. To prove $z_j = w_j$, we tropicalize a determinantal equation in (12). Let $C$ be the union of $\{i,j\}$ with an $(n-d)$-subset of $[n] \setminus (J \cup \{i\})$. The minor of $M$ indexed by $C$ is
$$\prod_{c \in C} y_c \;\cdot\; \sum_{k \ne \ell \in C} \pm\, (s_k / y_k)\, y_\ell \, \det(B_{C \setminus \{k,\ell\}}).$$
The coordinates $y_c$ of $y$ are nonzero for $c \in C$,
so the sum vanishes. After tropicalization, the vanishing of the sum becomes the condition that the minimum $\min_{k \ne \ell \in C}(w_k - z_k + z_\ell)$ is attained twice. Since $z_k = 0$ for $k \in C \setminus \{j\}$, we have
$$w_k - z_k + z_\ell \;=\; \begin{cases} w_k & \text{if } k, \ell \ne j, \\ w_j - z_j & \text{if } k = j, \\ w_k + z_j & \text{if } \ell = j. \end{cases}$$
Since $z_j > 0$, the minimum is never attained in the last case. By assumption, $w_k > w_i$. This means that $w_k$ cannot attain the minimum unless $k = i$. Hence the tropical equation simplifies to $\min(w_i,\; w_j - z_j)$. This yields $w_j - z_j = w_i = 0$, and we conclude that $z_j = w_j$.

The field $\mathcal{R}$ of real Puiseux series is both valued and ordered, and it is interesting to combine both structures when studying the likelihood geometry of squared linear models. In the setting of Corollary 5.3, there is one critical point $y$ in each region of $\mathbb{P}^{n-1}_{\mathcal{R}} \setminus \mathcal{A}$. The region is a polytope in $\mathbb{P}^{n-1}_{\mathcal{R}}$, and $y = y(\epsilon)$ is an arc whose limit $y(0)$ lies on one of the faces of that polytope. All arcs are repelled by the hyperplane $\{y_i = 0\}$ at infinity, so they converge to a face of the affine arrangement $\mathcal{A}^{(i)}$. Note that $\mathcal{A}^{(i)}$ has $\mu$ flats, while $\mathcal{A}$ has $\mu$ regions. The $\mu$ critical arcs $y = y(\epsilon)$ therefore specify a bijection between the flats and the regions.

Figure 1: Tropical MLE for $d=3$, $n=4$ gives a bijection between the seven regions of $\mathbb{P}^2_{\mathbb{R}} \setminus \mathcal{A}$ and the seven faces of the triangle. Each arc $y(\epsilon)$ travels from its region to the triangle.

Example 5.5 ($n = d+1$). Here $\mathcal{A}$ has $\mu = 2^d - 1$ regions. Only one of them is bounded in $\mathcal{A}^{(i)}$. The bounded region is a $(d-1)$-simplex, so it has $\mu$ faces. Each region meets the $(d-1)$-simplex in a distinct face, made manifest by an arc $y(\epsilon)$ from the region to that face. We visualize the case $d = 3$ with $i = 1$ in Figure 1. The arrangement $\mathcal{A}$ has four lines and seven regions in $\mathbb{P}^2_{\mathbb{R}}$. The seven arcs $y(\epsilon)$ are given algebraically in Example 5.4. Each limit point $y(0)$ lies in the closed triangle: three at the vertices, three on the edges, and one in the interior. The seven tropical solutions $z$ reveal how each arc approaches its limit point.
6 Log-Normal Polytopes

The likelihood correspondence can be viewed as the logarithmic normal bundle of the $(d-1)$-dimensional variety $X \subset \mathbb{P}^{n-1}$. Its fiber over a non-singular point $p$ in $X_{>0}$ is a linear space of dimension $n-d$ inside the space $\mathbb{P}^{n-1}$ of data $s$. The log-normal polytope is the intersection of this fiber with the simplex $\Delta_{n-1} = \mathbb{P}^{n-1}_{\ge 0}$. Each log-normal polytope has dimension $n-d$.

The model is given by a pair $(A,B)$ as in Section 5. We fix $p = (y_1^2, \dots, y_n^2)$ in $X_{>0}$. Then $y = Ax$ for some $x \in \mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}$, and we have $By = 0$. The rank constraints in (12) give $d-1$ independent linear equations in the unknowns $s = (s_1, \dots, s_n)$. We seek the solutions to these equations in the simplex $\Delta_{n-1} = \mathbb{P}^{n-1}_{\ge 0}$. In symbols, the log-normal polytope at $y$ is
$$\Pi(y) \;:=\; \bigl\{\, s \in \mathbb{R}^n_{\ge 0} \;:\; s \text{ satisfies (12) and } s_1 + s_2 + \cdots + s_n = 1 \,\bigr\}. \qquad (15)$$
Thus, $\Pi(y)$ consists of all probability distributions in the row span of the
$(n-d+1) \times n$ matrix
$$\begin{pmatrix} y_1^2 & y_2^2 & \cdots & y_n^2 \\ & B \cdot Y & \end{pmatrix} \;=\; \begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ & B & \end{pmatrix} \cdot Y \;=\; \begin{pmatrix} 1 & 1 & \cdots & 1 \\ & B \cdot Y^{-1} & \end{pmatrix} \cdot Y^2. \qquad (16)$$
Here we abbreviate $Y = \operatorname{diag}(y_1, \dots, y_n)$. The $\binom{n}{d-1}$ maximal minors of (16) factor as follows:
$$y_{j_0} y_{j_1} \cdots y_{j_{n-d}} \cdot \det \begin{pmatrix} y_{j_0} & y_{j_1} & \cdots & y_{j_{n-d}} \\ b_{j_0} & b_{j_1} & \cdots & b_{j_{n-d}} \end{pmatrix} \qquad \text{for } 1 \le j_0 < j_1 < \cdots < j_{n-d} \le n. \qquad (17)$$
These determinants define $\binom{n}{d-1}$ hyperplanes in $\mathbb{P}^{d-1} = \mathbb{P}(\ker(B))$, in addition to the $n$ given hyperplanes $y_i = \ell_i(x)$. The chamber arrangement $\mathcal{A}_{\mathrm{ch}}$ consists of all $n + \binom{n}{d-1}$ hyperplanes. Using this enlarged arrangement, we obtain the following characterization of our polytope.

Theorem 6.1. Each log-normal polytope $\Pi(y)$ is simple for $y \in \mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}_{\mathrm{ch}}$. Its combinatorial type is fixed for $y$ in a region of the chamber arrangement $\mathcal{A}_{\mathrm{ch}}$. The polytope $\Pi(y)$ is combinatorially dual to the convex hull $Q$ of the columns of the $(n-d) \times n$ matrix $BY^{-1}$.

Proof. The facial structure of $\Pi(y)$ is determined by the rank $n-d+1$ oriented matroid of the matrix (16). When $y$ is in $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}_{\mathrm{ch}}$, the minors (17) are nonzero and hence the oriented matroid is uniform. Therefore $\Pi(y)$ is simple. For $y$ in a fixed region of $\mathbb{P}^{d-1}_{\mathbb{R}} \setminus \mathcal{A}_{\mathrm{ch}}$, the oriented matroid is fixed, since changing the matroid requires crossing one of the hyperplanes.

Let $Q^\Delta$ denote the polar dual of $Q$. We now prove that $Q^\Delta$ is combinatorially equivalent to $\Pi(y)$. Since $y_i^2 > 0$ for all $i$, removing the factor $Y^2$ from (16) does not change the combinatorial type of the polytope spanned by its columns. Setting $\tilde{B} = \begin{pmatrix} \mathbb{1} \\ BY^{-1} \end{pmatrix}$, we have
$$Q^\Delta \;=\; \{z \in \mathbb{R}^{n-d} : z^T BY^{-1} \le \mathbb{1}\} \;\simeq\; \{z \in \mathbb{R}^{n-d+1} : z_1 = 1,\; z^T \tilde{B} \ge 0\}, \quad\text{and}$$
$$\Pi(y) \;=\; \Bigl\{z^T \tilde{B} \in \mathbb{R}^n : z^T \tilde{B} \ge 0,\; \sum_{i=1}^n (z^T \tilde{B})_i = 1\Bigr\} \;\simeq\; \Bigl\{z \in \mathbb{R}^{n-d+1} : z^T \tilde{B} \ge 0,\; \sum_{i=1}^n (z^T \tilde{B})_i = 1\Bigr\}.$$
This shows that the cones over $Q^\Delta$ and $\Pi(y)$ are equal. We then apply the argument in the proof of [2, Theorem 4] to conclude that the polytopes are combinatorially equivalent.

Example 6.2 ($d=2$, $n=6$). We consider the 1-dimensional model $X$ in $\mathbb{P}^5$ defined by
$$A \;=\; \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 & 5 & 6 \end{pmatrix}^{\!T} \quad\text{and}\quad B \;=\; \begin{pmatrix} 1 & -2 & 1 & 0 & 0 & 0 \\ 0 & 1 & -2 & 1 & 0 & 0 \\ 0 & 0 & 1 & -2 & 1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 1 \end{pmatrix}.$$
The chamber arrangement $\mathcal{A}_{\mathrm{ch}}$ consists of 12 points on the circle $\mathbb{P}^1_{\mathbb{R}}$. In addition to the six points given by $y_1, \dots$
$, y_6$, there are six new points from the determinants in (17). For instance, for $\{j_0, \dots, j_4\} = \{1,2,3,5,6\}$, we obtain $3y_1 + 2y_2 + y_3 - y_5 - 2y_6 = 3x_1 - 7x_2$, which reveals the transition point $(70 : 30)$ in $\mathcal{A}_{\mathrm{ch}} \subset \mathbb{P}^1_{\mathbb{R}}$. For $x \in \mathbb{P}^1_{\mathbb{R}} \setminus \mathcal{A}_{\mathrm{ch}}$, the log-normal polytope is a product of simplices, namely $\Delta_2 \times \Delta_2$ or $\Delta_1 \times \Delta_3$ or $\Delta_4$, depending on the oriented matroid of (16). For an explicit transition, try the points $x = (69:30),\, (70:30),\, (71:30)$.

We now turn to the log-Voronoi cell of a point $p = (y_1^2, \dots, y_n^2)$ in the squared linear model $X_{>0}$. This is the subset of all data points $s$ in $\Pi(y)$ such that $p$ is the MLE for $s$. Log-Voronoi cells for discrete models were introduced by Alexandr and Heaton in [3]. Theorems 8, 9 and 10 in [3] identify various models for which each log-Voronoi cell coincides with its ambient log-normal polytope. This holds for
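The transition point quoted in Example 6.2 can be checked directly: the second factor in (17) for $\{j_0,\dots,j_4\} = \{1,2,3,5,6\}$ is a $5 \times 5$ determinant that is linear in $y$, and substituting $y_j = x_1 + j x_2$ must give $3x_1 - 7x_2$. A sketch (evaluation at integer points stands in for a symbolic computation; the recursive `det` helper is ours):

```python
def det(m):
    # Laplace expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# second-difference matrix B from Example 6.2
B = [[1, -2, 1, 0, 0, 0],
     [0, 1, -2, 1, 0, 0],
     [0, 0, 1, -2, 1, 0],
     [0, 0, 0, 1, -2, 1]]
cols = [0, 1, 2, 4, 5]  # column set {1, 2, 3, 5, 6}, 0-based

for x1, x2 in [(1, 0), (0, 1), (2, 3), (5, -1)]:
    y = [x1 + (j + 1) * x2 for j in range(6)]  # y_j = x1 + j*x2
    m = [[y[c] for c in cols]] + [[B[r][c] for c in cols] for r in range(4)]
    # the determinant in (17) equals 3*y1 + 2*y2 + y3 - y5 - 2*y6 = 3*x1 - 7*x2
    assert det(m) == 3 * x1 - 7 * x2
```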
all linear models. Their log-Voronoi cells were studied in detail by Alexandr in [2]. However, in general, the log-Voronoi cells are non-linear convex bodies that are strictly contained in their log-normal polytopes. For an illustration see [3, Figure 2]. In what follows, we initiate the study of log-Voronoi cells for squared linear models. We shall see that their geometry is more complicated than that for linear models. We begin by discussing the simplest model, which is a circle inscribed in a triangle.

Figure 2: The squared linear model (blue) shown inside the triangle $\Delta_2$ of data (black), together with its logarithmic normal bundle (gray dashed lines). The triangle is divided into six Weyl chambers (red lines). The log-Voronoi cell of the point $p$ is the intersection of the fiber of the logarithmic normal bundle with the corresponding Weyl chamber (green line).

Example 6.3 ($n=3$, $d=2$). Let $\mathcal{A}$ be the arrangement of three points in $\mathbb{P}^1$ defined by $\ell_1 = x_1$, $\ell_2 = x_2$, $\ell_3 = x_1 + x_2$. The model is the conic $X = V(p_1^2 + p_2^2 + p_3^2 - 2p_1p_2 - 2p_1p_3 - 2p_2p_3)$ in $\mathbb{P}^2$. In the probability triangle $\Delta_2$, this is an inscribed circle, so it has three connected components in $\Delta_2$. We see this in Figure 2, where the model is drawn in blue.

Fix $p = \bigl(x_1^2 : x_2^2 : (x_1+x_2)^2\bigr) \in X_{>0}$. The log-normal space at $p$ is the line given by
$$u s_1 + v s_2 + w s_3 \;:=\; \det \begin{pmatrix} s_1 & s_2 & s_3 \\ x_1^2 & x_2^2 & (x_1+x_2)^2 \\ -x_1 & -x_2 & x_1+x_2 \end{pmatrix}.$$
This is a linear form in $(s_1, s_2, s_3)$, whose coefficients depend cubically on the model point:
$$u = x_2(x_1+2x_2)(x_1+x_2), \qquad v = -x_1(x_1+x_2)(2x_1+x_2), \qquad w = -x_1x_2(x_1-x_2).$$
The triangle $\Delta_2$ is divided into six Weyl chambers, depending on the ordering of $s_1, s_2, s_3$. These are the six red triangles in Figure 2. The log-Voronoi cell at $p$ is the green line segment through $p$. It is the intersection of the log-normal line with the Weyl chamber. The log-Voronoi segments interpolate between a red boundary and a half-edge of the triangle. This example also shows that each log-Voronoi cell is strictly contained in its log-normal polytope.
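The cubic coefficients $u, v, w$ in Example 6.3 arise from cofactor expansion of the $3 \times 3$ determinant along its first row. A quick numeric check of the stated factorizations (a sketch; the function name `cofactors` is ours):

```python
def cofactors(x1, x2):
    # cofactors of s1, s2, s3 in det [[s1, s2, s3],
    #                                 [x1^2, x2^2, (x1+x2)^2],
    #                                 [-x1, -x2, x1+x2]]
    r2 = [x1 ** 2, x2 ** 2, (x1 + x2) ** 2]
    r3 = [-x1, -x2, x1 + x2]
    u = r2[1] * r3[2] - r2[2] * r3[1]
    v = -(r2[0] * r3[2] - r2[2] * r3[0])
    w = r2[0] * r3[1] - r2[1] * r3[0]
    return u, v, w

for x1, x2 in [(1, 2), (3, 1), (2, -5)]:
    u, v, w = cofactors(x1, x2)
    assert u == x2 * (x1 + 2 * x2) * (x1 + x2)
    assert v == -x1 * (x1 + x2) * (2 * x1 + x2)
    assert w == -x1 * x2 * (x1 - x2)
```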
That log-normal polytope is the intersection of the triangle with the line spanned by the dashed segment. Thus, squared linear models are more complicated than linear models and toric models, for which these $(n-d)$-dimensional convex bodies coincide [3, Theorems 9 and 10].

The topological boundary $\partial S$ of a log-Voronoi cell $S$ consists of data points with at least one additional MLE with another sign pattern. The boundary $\partial S$ is defined by a piecewise analytic function. We next identify a setting in which the boundary has linear pieces. Given $i, j \in \{1, \dots, n\}$ and $\sigma \in \{-1,+1\}^n$, let $\tau_{ij\sigma} : \mathbb{P}^{n-1} \to \mathbb{P}^{n-1}$ denote the map exchanging coordinates $i$ and $j$ followed by coordinatewise multiplication by $\sigma$.

Proposition 6.4. Fix a squared linear model $X$ and $y^2 \in X$. Let $S$ denote the log-Voronoi cell of $y$. If $y$ and $\tau_{ij\sigma}(y)$ have different sign vectors, $\tau_{ij\sigma}(y)^2 \in X$, and $V(s_i - s_j) \cap S \ne \emptyset$, then $V(s_i - s_j) \cap S$ is a connected, linear piece of the boundary $\partial S$ of the log-Voronoi cell.

Proof. Let $S' \subset \Pi(y)$ denote the set of data points $s$ in the log-normal polytope $\Pi(y)$ such that $s$ has an MLE with sign vector equal to the sign vector of $\tau_{ij\sigma}(y)$. If $s \in S \cap V(s_i - s_j)$, then the likelihood function for $s$ has the same value at $y$ and
at $\tau_{ij\sigma}(y)$. Therefore $s \in S \cap S'$. Since $S$ is convex, $S \cap V(s_i - s_j) \subseteq S \cap S' \subseteq \partial S$ is a connected, linear piece of the boundary. If the proposition holds for sufficiently many triples $i, j, \sigma$, then $\partial S$ is piecewise linear.

Example 6.5 ($d=2$, $n=4$). The model in Example 5.2 has $\mathcal{A} = \{x_1,\, x_1+x_2,\, x_1+2x_2,\, x_2\}$. Fix $y = (3,2,1,-1)$. The log-normal polygon $\Pi(y)$ is the intersection of the tetrahedron $\Delta_3$ with the plane $V(s_1 - s_2 - 3s_3 - 2s_4)$. This is a quadrilateral, divided into four cells based on the sign vector of the MLE; see Figure 3. Their boundaries are $s_1 = s_3$ and $s_2 = s_4$, because the vectors $\tau_{13,+++-}(y) = (1,2,3,1)$ and $\tau_{24,+---}(y) = (3,1,-1,-2)$ are in the kernel of $B$.

Consider the slightly modified arrangement $\{x_1,\, x_1+x_2,\, x_1-2x_2,\, x_2\}$. The log-normal polytope of $y = (3,2,5,-1)$ is again a quadrilateral with four nonempty cells. The log-Voronoi cell meets all other cells in a codimension 1 boundary. This boundary is nonlinear.

We found that the log-Voronoi cells of squared linear models exhibit a wide range of behaviors. Experiments suggest that, for a generic model and point, the boundary of the log-Voronoi cell is nonlinear. As in Section 5, being generic is stronger than having a uniform matroid. Example 6.5 underscores this. As in [3, page 9], we think that these boundaries are generally not algebraic. Experiments indicate that it is also possible to have linear and nonlinear boundary pieces at the same time. Finally, it is possible that all boundaries are linear, so the log-Voronoi cell is a polytope (see Figure 3). In conclusion, the detailed study of log-Voronoi cells for squared linear models is a promising direction for future research.

Figure 3: The log-normal polygon in Example 6.5, with its four cells labeled by the sign vectors $++++$, $+++-$, $++--$, $+---$ and by the corresponding inequalities among $s_1, s_3$ and $s_2, s_4$. The log-Voronoi cell is marked in green.

Acknowledgement. We thank Benjamin Hollering and Yue Ren for helping us with computations for Proposition 3.4 and Section 5 respectively.

References

[1] D. Agostini, T. Brysiewicz, C. Fevola, L.
Kühne, B. Sturmfels and S. Telen: Likelihood degenerations, Advances in Mathematics 414 (2023) 108863.
[2] Y. Alexandr: Logarithmic Voronoi polytopes for discrete linear models, Algebraic Statistics 15 (2024) 1–13.
[3] Y. Alexandr and A. Heaton: Logarithmic Voronoi cells, Algebraic Statistics 12 (2021) 75–95.
[4] F. Ardila-Mantilla, C. Eur and R. Penaguiao: The tropical critical points of an affine matroid, SIAM Journal on Discrete Mathematics 38 (2024) 1930–1942.
[5] E. Boniface, K. Devriendt and S. Hoşten: Tropical toric maximum likelihood estimation, arXiv:2404.10567.
[6] T. Brysiewicz, H. Eble and L. Kühne: Computing characteristic polynomials of hyperplane arrangements with symmetries, Discrete and Computational Geometry 70 (2023) 1356–1377.
[7] K. Devriendt, H. Friedman, B. Reinke and B. Sturmfels: The two lives of the Grassmannian, Acta Universitatis Sapientiae, Mathematica (2025).
[8] P. Dey, P. Görlach and N. Kaihnsa: Coordinate-wise powers of algebraic varieties, Beiträge zur Algebra und Geometrie 61 (2020) 473–505.
[9] H. Friedman: Likelihood geometry of the squared Grassmannian, Proceedings of the American Mathematical Society, to appear.
[10] J. Huh and B. Sturmfels: Likelihood geometry, Combinatorial Algebraic Geometry (eds. Aldo Conca
et al.), Lecture Notes in Mathematics, volume 2108, Springer, pages 63–117, 2014.
[11] T. Kahle, L. Kühne, L. Mühlherr, B. Sturmfels and M. Wiesmann: Arrangements and likelihood, arXiv:2411.09508.
[12] T. Kahle, H. Schenck, B. Sturmfels and M. Wiesmann: The likelihood correspondence, arXiv:2503.02536.
[13] J.M. Landsberg: Tensors: Geometry and Applications, volume 128, American Mathematical Society, 2011.
[14] OSCAR – Open Source Computer Algebra Research system, Version 1.3.1, The OSCAR Team, 2025. https://www.oscar-system.org.
[15] B. Reinke and K. Wang: Hypersurface arrangements with generic hypersurfaces added, arXiv:2412.20869.
[16] I. Shafarevich: Basic Algebraic Geometry, volume 1, Springer-Verlag, 1994.

Authors' addresses:
Hannah Friedman, UC Berkeley hannahfriedman@berkeley.edu
Bernd Sturmfels, MPI-MiS Leipzig bernd@mis.mpg.de
Maximilian Wiesmann, MPI-MiS Leipzig wiesmann@mis.mpg.de
arXiv:2505.19359v1 [math.ST] 25 May 2025

NONPARAMETRIC ESTIMATION OF SLICED INVERSE REGRESSION BY THE k-NEAREST NEIGHBORS KERNEL METHOD

LURAN BENGONO MINTOGO, EMMANUEL DE DIEU NKOU, AND GUY MARTIAL NKIET

Abstract. We investigate nonparametric estimation of sliced inverse regression (SIR) via the k-nearest neighbors approach with a kernel. An estimator of the covariance matrix of the conditional expectation of the explanatory random vector given the response is then introduced, thereby allowing us to estimate the effective dimension reduction (EDR) space. Consistency of the proposed estimators is proved through derivation of asymptotic normality. A simulation study, made in order to assess the finite-sample behaviour of the proposed method and to compare it to the kernel estimate, is presented.

Résumé. We address nonparametric estimation of sliced inverse regression (SIR) by the k-nearest neighbors kernel method. An estimator of the covariance matrix of the conditional expectation of the explanatory random vector given the response variable is then introduced, thereby allowing estimation of the effective dimension reduction (EDR) space. Convergence of the proposed estimators is proved through the derivation of their asymptotic normality. A simulation study, which assesses the finite-sample performance of the proposed method and compares it to kernel estimation, is presented.

1. Introduction

Dimension reduction has become a major concern in statistical multivariate analysis since the introduction of sliced inverse regression (SIR) by [10]. It is an approach that allows one to handle a regression with a reasonable number of explanatory variables by projecting an initial multidimensional predictor onto a subspace of lower dimension than that of this predictor.
For doing that, [10] introduced the model
$$Y = F(\beta_1^\top X, \ldots, \beta_N^\top X, \varepsilon), \qquad (1.1)$$
where $Y$ is a scalar response, $X$ is a $d$-dimensional random vector containing the initial explanatory variables as coordinates, $N$ is an integer satisfying $1 \leqslant N < d$, $\beta_1, \ldots, \beta_N$ are unknown vectors in $\mathbb{R}^d$, $\varepsilon$ is a scalar random variable independent of $X$, $u^\top$ denotes the transpose of the vector $u$, and $F$ is an arbitrary unknown function. This model permits the aforementioned reduction of the number of predictors since $N < d$, but it requires an appropriate choice of the vectors $\beta_1, \ldots, \beta_N$ so that the projection onto the subspace they span, called the effective dimension reduction (EDR) space, retains a maximum of information on $Y$. Estimating the EDR space is therefore the most crucial issue related to this model and has been tackled in several works. Li [10] proposed an approach based on estimating the covariance matrix of the conditional expectation $E(X \mid Y)$ by slicing the range of $Y$. Although several alternative methods exist, it remains the most recognized method in sufficient dimension reduction, and it has seen numerous developments, such as asymptotic studies ([9, 22]), extensions to the case of multiple scalar responses ([6, 11]), estimation of the dimensionality $N$ ([8, 13, 18, 19]), and extension to the high-dimensional framework ([21]). Among the alternative 2020 Mathematics Subject
https://arxiv.org/abs/2505.19359v1
Classification. Primary 62G05; Secondary 62J02.
Key words and phrases. Dimension reduction, k-nearest neighbors estimation, nonparametric regression, sliced inverse regression, asymptotic normality.

methods, there is the sliced average variance estimation (SAVE) method of [5], the parametric inverse regression (PIR) method of [2], and the central mean subspace (CMS) estimation method introduced in [4]. Based on the fact that the aforementioned covariance matrix can be expressed as a function of the density of $Y$ and of regression functions, [20] proposed a nonparametric approach for estimating the EDR space by using Nadaraya-Watson type estimators. This highlighted for the first time the possibility and the interest of using nonparametric estimation methods for estimating SIR. Later, [15] established the strong consistency of this approach, [16] proposed another nonparametric estimation approach based on wavelets, and [14] used recursive kernel estimators. However, the practical choice of the bandwidth on which the estimators used in [20] rely is not straightforward and remains a challenging issue. So there is an interest in using alternative estimators which do not require such a choice. Among them, the k-nearest neighbors (k-NN) kernel estimators have attracted particular attention. They have the same form as the kernel estimators, but with the bandwidth replaced by the Euclidean distance between the point at which the estimator is calculated and the k-th nearest neighbor of this point among the observations. Earlier works on these estimators go back to [12] for density estimation and to [3] for the case of the regression function. It is true that the practical implementation of these estimators requires choosing the number of neighbors, but this choice is easier to make since it is made over a finite set of integers.
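The variable-bandwidth idea just described can be illustrated numerically. The following minimal sketch (not from the paper; the function name is ours) takes the bandwidth at a point to be the distance to its k-th nearest neighbor among the observations, so the bandwidth adapts to the local density of the sample:

```python
import numpy as np

def knn_bandwidth(y, Y, k):
    """Distance from the point y to its k-th nearest observation in the sample Y;
    it plays the role of the bandwidth and varies with y."""
    return np.sort(np.abs(np.asarray(Y) - y))[k - 1]

Y = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 5.0])
h_dense = knn_bandwidth(0.1, Y, k=3)   # small: 0.1 lies in a dense region
h_sparse = knn_bandwidth(5.0, Y, k=3)  # large: 5.0 lies in a sparse region
```

In a dense region the resulting bandwidth is small, and in a sparse region it is large, which is precisely the advantage over a fixed bandwidth depending only on $n$.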
To the best of our knowledge, these estimators have never been used for performing dimension reduction methods. In this paper, we propose an estimation method for the EDR space related to the model (1.1), based on k-NN kernel estimates of the density and regression functions involved in the aforementioned covariance matrix. The rest of the paper is organized as follows. In Section 2, we define the proposed estimators based on the k-NN kernel method. Their asymptotic properties are then given in Section 3. Section 4 is devoted to a simulation study made in order to evaluate the performance of the proposal in comparison to the method of [20]. The proofs of the main results are postponed to Section 5.

2. Estimation based on the k-NN method

2.1. The matrix to be estimated. The most crucial issue in handling the model (1.1) is to estimate the EDR space, which is known to be spanned, under some conditions, by eigenvectors of a matrix that will now be specified. We assume that the explanatory random vector $X = (X_1, \ldots, X_d)^\top$ satisfies $E(\|X\|^2) < \infty$, where $\|\cdot\|$ denotes the usual Euclidean norm of $\mathbb{R}^d$, and, without loss of generality, $E(X) = 0$. Then it is known from [10] that, under some conditions, the EDR space is spanned by eigenvectors associated with the largest eigenvalues of
the covariance matrix $\Lambda$ of the conditional expectation of $X$ given $Y$, denoted, as usual, by $E(X \mid Y)$; that is,
$$\Lambda = E\big(R(Y)R(Y)^\top\big), \quad \text{where } R(Y) = E(X \mid Y).$$
Since $R(Y) = (R_1(Y), \ldots, R_d(Y))^\top$, where $R_\ell(Y) = E(X_\ell \mid Y)$ for $\ell \in \{1, \ldots, d\}$, it is clear that $\Lambda = (\lambda_{\ell j})_{1\leqslant \ell,j\leqslant d}$ with $\lambda_{\ell j} = E\big(R_\ell(Y)R_j(Y)\big)$. Estimation of the EDR space is, therefore, reduced to that of $\Lambda$, which is obtained by estimating the $\lambda_{\ell j}$'s. For doing that, [10] introduced a method, called sliced inverse regression (SIR), based on slicing the range of $Y$ and then estimating $\Lambda$ by a scatter matrix computed from the dispersion of the observations within the different slices. As the $R_\ell$'s are in fact regression functions, they can be estimated by nonparametric approaches. More specifically, assuming that each pair $(X_\ell, Y)$ admits a density that we denote by $f_{(X_\ell,Y)}$, and that the density $f$ of $Y$ is such that $f(y) > 0$ for all $y \in \mathbb{R}$, we have
$$R_\ell(y) = E(X_\ell \mid Y = y) = \frac{g_\ell(y)}{f(y)}, \quad \text{where } g_\ell(y) = \int_{\mathbb{R}} x\, f_{(X_\ell,Y)}(x,y)\, dx.$$
Based on this, [20] proposed a nonparametric estimation of $\Lambda$ using a Nadaraya-Watson type estimator of $R_\ell$, whereas [16] uses estimation based on wavelets. In this paper, we rather use estimation based on the $k$ nearest neighbors method with a kernel.

2.2. The k-NN kernel estimator. Considering an i.i.d. sample $\{(X^{(i)}, Y_i)\}_{1\leqslant i\leqslant n}$ of $(X, Y)$, and putting $X^{(i)} = (X_{i1}, \ldots, X_{id})^\top$, we make use of the k-NN kernel estimators of the density $f$ and of the functions $g_\ell$, as defined in [1, 3, 12]. More specifically, given a kernel $K:\mathbb{R}\to\mathbb{R}$, we consider the estimators $\widehat{f}_n$ and $\widehat{g}_{\ell,n}$ of $f$ and $g_\ell$, respectively, given by
$$\widehat{f}_n(y) = \frac{1}{nH_n(y)} \sum_{i=1}^n K\!\left(\frac{Y_i - y}{H_n(y)}\right) \quad \text{and} \quad \widehat{g}_{\ell,n}(y) = \frac{1}{nH_n(y)} \sum_{i=1}^n X_{i\ell}\, K\!\left(\frac{Y_i - y}{H_n(y)}\right),$$
where
$$H_n(y) = \min\Big\{h \in \mathbb{R}_+^* : \sum_{i=1}^n \mathbf{1}_{]y-h,\,y+h[}(Y_i) = k_n\Big\}, \qquad \ell \in \{1, \ldots, d\},$$
and $(k_n)_{n\in\mathbb{N}^*}$ is a sequence of integers such that $k_n \to \infty$ as $n \to \infty$.
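The estimators $\widehat{f}_n$ and $\widehat{g}_{\ell,n}$ can be sketched numerically as follows. This is a minimal illustration (not the authors' code; function names are ours), in which the minimum defining $H_n(y)$ is implemented as the distance from $y$ to its $k_n$-th nearest observation:

```python
import numpy as np

def gaussian_kernel(t):
    # standard normal density used as the kernel K
    return np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)

def knn_kernel_estimates(y, X, Y, k, kernel=gaussian_kernel):
    """k-NN kernel estimates f_hat(y) and g_hat(y) = (g_hat_1(y), ..., g_hat_d(y));
    the bandwidth H_n(y) is the distance from y to its k-th nearest observation."""
    n = len(Y)
    h = np.sort(np.abs(Y - y))[k - 1]               # k-NN bandwidth H_n(y)
    w = kernel((Y - y) / h)                         # weights K((Y_i - y) / H_n(y))
    f_hat = w.sum() / (n * h)                       # density estimate of f at y
    g_hat = (X * w[:, None]).sum(axis=0) / (n * h)  # one g_hat_ell per column of X
    return f_hat, g_hat

# toy data: d = 5 predictors, response driven by the first two coordinates
rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.standard_normal((n, d))
Y = X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(n)
f_hat, g_hat = knn_kernel_estimates(0.0, X, Y, k=int(n**0.85))
```

The same weights $K((Y_i-y)/H_n(y))$ serve for both estimators; only the factor $X_{i\ell}$ distinguishes $\widehat{g}_{\ell,n}$ from $\widehat{f}_n$.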
As was already done in [1, 14, 15, 16, 20], in order to avoid small values in the denominator, we consider
$$f_{b_n}(y) = \max\big(f(y), b_n\big) \quad \text{and} \quad \widehat{f}_{b_n}(y) = \max\big(\widehat{f}_n(y), b_n\big),$$
where $(b_n)_{n\in\mathbb{N}^*}$ is a sequence of positive real numbers such that $\lim_{n\to\infty} b_n = 0$, and we estimate the ratio
$$R_{b_n,\ell}(y) = \frac{g_\ell(y)}{f_{b_n}(y)} \qquad (2.1)$$
by
$$\widehat{R}_{b_n,\ell}(y) = \frac{\widehat{g}_{\ell,n}(y)}{\widehat{f}_{b_n}(y)}.$$
Then, putting $\widehat{R}_{b_n}(y) = \big(\widehat{R}_{b_n,1}(y), \ldots, \widehat{R}_{b_n,d}(y)\big)^\top$, we take as estimator of $\Lambda$ the random matrix
$$\widehat{\Lambda}_n = \frac{1}{n}\sum_{i=1}^n \widehat{R}_{b_n}(Y_i)\, \widehat{R}_{b_n}(Y_i)^\top.$$
An estimate of the EDR space is obtained from the spectral analysis of this matrix. Indeed, if $\widehat{\beta}_\ell$ is an eigenvector of $\widehat{\Lambda}_n$ associated with the $\ell$-th largest eigenvalue, we estimate the EDR space by the subspace of $\mathbb{R}^d$ spanned by $\widehat{\beta}_1, \ldots, \widehat{\beta}_N$.

Remark 2.1. The main difference between our approach and that of [20] concerns the bandwidth of the estimators used. A fixed real bandwidth, which depends only on $n$, is used in [20], whereas our approach consists in considering a random bandwidth that depends on the observations and on the point of estimation. Such estimators, which belong to the broader class of variable kernel estimators, have some advantages over fixed
bandwidth estimators. In particular, they do not require a prior choice of the bandwidth, as is the case for fixed bandwidth estimators. It is true that the number of neighbors must be chosen, but this choice is made over a finite set of integers, which makes the search easier.

3. Asymptotic properties

In this section, we first introduce the assumptions needed to obtain the main results of the paper, and then we state theorems that give asymptotic normality for the considered estimators.

3.1. Assumptions. We make the following assumptions:

Assumption 3.1. $E\big(\|X\|^4\big) < \infty$.

Assumption 3.2. There exists a sequence $M_n$ of strictly positive numbers such that $M_n \sim \sqrt{\log(n)}$ and, for $\ell \in \{1,\ldots,d\}$, $\max_{1\leqslant i\leqslant n} |X_{i\ell}| \leqslant M_n$.

Assumption 3.3. The density $f$ of $Y$ is bounded from below: there exists $c_0 > 0$ such that $\inf_{y\in\mathbb{R}} f(y) \geqslant c_0$.

Assumption 3.4. The density $f$ is bounded and belongs to the class $\mathcal{C}(c,3)$ of functions $\varphi:\mathbb{R}\to\mathbb{R}$ that are 3 times differentiable with third derivative satisfying the Lipschitz condition $\big|\varphi^{(3)}(y+u) - \varphi^{(3)}(y)\big| \leqslant c|u|$, where $c > 0$.

Assumption 3.5. For $\ell \in \{1,\ldots,d\}$, the functions $g_{1,\ell}$ and $g_{2,\ell}$ defined by
$$g_{1,\ell}(y) = E\big(X_\ell \mathbf{1}_{\{X_\ell \geqslant 0\}} \mid Y = y\big) f(y), \qquad g_{2,\ell}(y) = E\big({-X_\ell}\, \mathbf{1}_{\{X_\ell < 0\}} \mid Y = y\big) f(y), \qquad (3.1)$$
are bounded and belong to the class $\mathcal{C}(c,3)$ previously defined.

Assumption 3.6. For all $\ell \in \{1,\ldots,d\}$ and all $m \in \{1,2\}$, the function $R_{m,\ell}$ defined by
$$R_{m,\ell}(y) = \frac{g_{m,\ell}(y)}{f(y)} \qquad (3.2)$$
satisfies: (i) $E\big[R_{m,\ell}^4(Y)\big] < \infty$; (ii) $|R_{m,\ell}(y+u) - R_{m,\ell}(y)| \leqslant c|u|$.

Assumption 3.7. $\sqrt{n}\, E\big(|R_{m,\ell}(Y)R_{s,j}(Y)|\, \mathbf{1}_{\{f(Y)\leqslant a_n\}}\big) = o(1)$ for any $(\ell,j) \in \{1,\ldots,d\}^2$, $(m,s) \in \{1,2\}^2$ and any sequence $(a_n)_{n\in\mathbb{N}^*}$ such that $a_n \sim b_n$ as $n \to \infty$.

Assumption 3.8. The kernel $K:\mathbb{R}\to\mathbb{R}$ satisfies the following properties:
(i) $K$ is bounded, that is, $G = \sup_{t\in\mathbb{R}} |K(t)| < \infty$.
(ii) $K$ is symmetric with respect to 0, that is, $K(t) = K(-t)$ for all $t\in\mathbb{R}$.
(iii) $\int_{\mathbb{R}} K(t)\,dt = 1$.
(iv) $K$ is of order 3, that is, $\int_{\mathbb{R}} t^m K(t)\,dt = 0$ for $m \in \{1,2,3\}$.
(v) $\int_{\mathbb{R}} |K(t)|\,dt < \infty$, $\int_{\mathbb{R}} |t|\,|K(t)|\,dt < \infty$ and $\int_{\mathbb{R}} t^4 |K(t)|\,dt < \infty$.
(vi) For all $t \in \mathbb{R}$ and all $a \in [0,1]$, $K(at) \geqslant K(t)$.

Assumption 3.9.
The number $k_n$ of neighbors is such that $k_n \sim n^{c_1}$, where $1/2 < c_1 < 9/10$.

Assumption 3.10. The sequence $(b_n)_{n\in\mathbb{N}^*}$ satisfies $b_n \sim n^{-c_2}$ with $0 < c_2 < 1/10$, $c_1 > c_2 + 3/4$ and $c_1 + c_2/4 < 7/8$.

Assumption 3.11. The eigenvalues $\nu_1, \ldots, \nu_d$ of $\Lambda$ satisfy $\nu_1 > \nu_2 > \cdots > \nu_d > 0$.

Remark 3.1. Assumption 3.1 is classical in the literature on asymptotic studies in multivariate analysis. In the context of dimension reduction techniques, it was made, for instance, in [20]. Assumption 3.2 is weaker than Assumption 3.1 of [15] and Assumption 3.1 of [16], where it is supposed that $X$ is bounded. For instance, it has been considered in [1, 14]. Assumption 3.3 has been introduced in some works on nonparametric estimation, such as [1, 7, 23]. Assumption 3.4 is just the first part of Condition 1 of [20], whereas Assumption 3.5 implies its second part. Indeed, since $g_\ell = g_{1,\ell} - g_{2,\ell}$, it follows that
$$|g_\ell(y) - g_\ell(z)| \leqslant |g_{1,\ell}(y) - g_{1,\ell}(z)| + |g_{2,\ell}(y) - g_{2,\ell}(z)|.$$
So, if $g_{1,\ell}$ and $g_{2,\ell}$ belong to $\mathcal{C}(c,3)$, then $g_\ell$ belongs to $\mathcal{C}(2c,3)$. The same holds for Assumptions 3.6 and 3.7, which imply Assumptions 3 and 7 of [16] and Condition 6 of [20], since we have $R_\ell = R_{1,\ell} - R_{2,\ell}$, which yields
$$|R_\ell(y) - R_\ell(z)| \leqslant |R_{1,\ell}(y) - R_{1,\ell}(z)| + |R_{2,\ell}(y) - R_{2,\ell}(z)|, \qquad |R_\ell(y)| \leqslant |R_{1,\ell}(y)| + |R_{2,\ell}(y)|,$$
and
$$R_\ell^4(y) \leqslant 8\big(R_{1,\ell}^4(y) + R_{2,\ell}^4(y)\big).$$
So, if $R_{1,\ell}$ and $R_{2,\ell}$ belong to $\mathcal{C}(c,3)$, then $R_\ell$ belongs to $\mathcal{C}(2c,3)$. Moreover, if $E\big[R_{m,\ell}^4(Y)\big] < \infty$ for $m \in \{1,2\}$, then $E\big[R_\ell^4(Y)\big] < \infty$, and if $\sqrt{n}\,E\big(|R_{m,\ell}(Y)R_{s,j}(Y)|\mathbf{1}_{\{f(Y)\leqslant a_n\}}\big) = o(1)$ for $(m,s)\in\{1,2\}^2$ and $a_n \sim b_n$, then $\sqrt{n}\,E\big(R_\ell(Y)R_j(Y)\mathbf{1}_{\{f(Y)\leqslant b_n\}}\big) = o(1)$, because
$$\sqrt{n}\,\big|E\big(R_\ell(Y)R_j(Y)\mathbf{1}_{\{f(Y)\leqslant b_n\}}\big)\big| \leqslant \sqrt{n}\,E\big(|R_\ell(Y)R_j(Y)|\mathbf{1}_{\{f(Y)\leqslant b_n\}}\big) \leqslant \sum_{m=1}^{2}\sum_{s=1}^{2} \sqrt{n}\,E\big(|R_{m,\ell}(Y)R_{s,j}(Y)|\mathbf{1}_{\{f(Y)\leqslant b_n\}}\big).$$
The conditions (i) to (v) of Assumption 3.8 are classical and were considered, for instance, in [1, 7, 15, 20, 23]. The condition (vi) often arises in the literature on k-NN kernel estimators. Assumptions 3.9 and 3.10 are of types that have already been considered in the literature on nonparametric estimation in dimension reduction methods (e.g., [1, 7, 15, 16, 20, 23]).

3.2. Results. Here, we give the main results of the paper, which establish asymptotic normality for the estimator $\widehat{\Lambda}_n$ introduced previously and also, as a consequence, that of its eigenvectors under some specified conditions.

Theorem 3.1. Under the assumptions 3.1 to 3.10, we have $\sqrt{n}\,\big(\widehat{\Lambda}_n - \Lambda\big) \xrightarrow{\mathcal{D}} H$ as $n\to\infty$, where $\xrightarrow{\mathcal{D}}$ denotes convergence in distribution and $H$ is a random variable having a normal distribution in the space $\mathcal{M}_d(\mathbb{R})$ of $d\times d$ matrices, such that, for any $A = (a_{\ell j}) \in \mathcal{M}_d(\mathbb{R})\setminus\{0\}$, one has $\mathrm{Tr}\big(A^\top H\big) \rightsquigarrow \mathcal{N}(0, \sigma_A^2)$ with
$$\sigma_A^2 = \mathrm{Var}\left(\sum_{\ell=1}^d \sum_{j=1}^d \frac{a_{\ell j}}{2}\big(X_\ell R_j(Y) + X_j R_\ell(Y)\big)\right). \qquad (3.3)$$

From Theorem 3.1, we can derive the asymptotic normality of the eigenvectors. For $(\ell,j)\in\{1,\ldots,d\}^2$, we put $\beta_\ell = (\beta_{\ell 1},\ldots,\beta_{\ell d})^\top$ and we consider the random variable
$$W_{\ell j} = \sum_{\substack{r=1 \\ r\neq j}}^d \frac{\beta_{rj}}{\nu_\ell - \nu_r} \sum_{p=1}^d \sum_{q=1}^d \frac{\beta_{\ell p}\beta_{\ell q}}{2}\big(X_q R_p(Y) + X_p R_q(Y)\big).$$
Then, we have:

Theorem 3.2. Under the assumptions 3.1 to 3.11, we have, for any $\ell \in \{1,\ldots,d\}$, $\sqrt{n}\,\big(\widehat{\beta}_\ell - \beta_\ell\big) \xrightarrow{\mathcal{D}} \mathcal{N}(0, \Sigma_\ell)$ as $n\to\infty$, where $\Sigma_\ell$ is the $d\times d$ covariance matrix of the random vector $W_\ell = (W_{\ell 1},\ldots,W_{\ell d})^\top$.
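Before turning to the simulations, the whole estimation pipeline of Section 2 — the truncated ratios $\widehat{R}_{b_n}$, the matrix $\widehat{\Lambda}_n$, and its leading eigenvectors — can be sketched numerically. This is a hypothetical illustration with our own function names, not the authors' implementation; it uses Model 1 of the simulation study below and the projector distance $D$ defined there:

```python
import numpy as np

def gaussian_kernel(t):
    return np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)

def edr_estimate(X, Y, k, b_n, N, kernel=gaussian_kernel):
    """Build Lambda_hat = (1/n) sum R_hat(Y_i) R_hat(Y_i)^T from the k-NN kernel
    estimates, then return its N leading eigenvectors (estimated EDR basis)."""
    n, d = X.shape
    R_hat = np.empty((n, d))
    for i in range(n):
        h = np.sort(np.abs(Y - Y[i]))[k - 1]        # k-NN bandwidth H_n(Y_i)
        w = kernel((Y - Y[i]) / h)
        f_hat = max(w.sum() / (n * h), b_n)         # truncated density estimate
        g_hat = (X * w[:, None]).sum(axis=0) / (n * h)
        R_hat[i] = g_hat / f_hat                    # R_hat_{b_n}(Y_i)
    Lam = R_hat.T @ R_hat / n                       # Lambda_hat_n
    eigval, eigvec = np.linalg.eigh(Lam)            # eigh returns ascending order
    return eigvec[:, ::-1][:, :N]                   # N leading eigenvectors

# Model 1: Y = X1 + X2 + X3 + X4 + eps, single EDR direction beta1
rng = np.random.default_rng(1)
n, d = 400, 5
X = rng.standard_normal((n, d))
Y = X[:, :4].sum(axis=1) + rng.standard_normal(n)
B = edr_estimate(X, Y, k=int(n**0.85), b_n=n**-0.09, N=1)

beta1 = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
beta1 /= np.linalg.norm(beta1)
P, P_hat = np.outer(beta1, beta1), B @ B.T          # projectors onto the two spaces
D = np.sqrt(np.trace((P - P_hat) @ (P - P_hat)))    # distance D between EDR spaces
```

With this sample size and a strong linear signal, $D$ should be small, in line with the orders of magnitude reported in the tables of Section 4.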
4. Simulation results

In order to assess the performance of the introduced method for estimating SIR, and to compare it with the method of [20] based on kernel estimates, we made simulations within frameworks corresponding to the following models with dimension $d = 5$:

Model 1: $Y = X_1 + X_2 + X_3 + X_4 + \varepsilon$;
Model 2: $Y = X_1(X_1 + X_2 + 1) + \varepsilon$;
Model 3: $Y = \dfrac{X_1}{0.5 + (X_2 + 1.5)^2} - 1 + \varepsilon$.

In these models, which come from [16], the predictor $X = (X_1,\ldots,X_5)^\top$ is generated from a multivariate normal distribution $\mathcal{N}(0, I_5)$, where $I_5$ is the $5\times 5$ identity matrix, $\varepsilon$ is generated independently of $X$ from a standard normal distribution, and $Y$ is computed according to these models. In Model 1, we have $N = 1$ and $\beta_1 = (1,1,1,1,0)^\top$, whereas Model 2 and Model 3 correspond to the case $N = 2$ with $\beta_1 = (1,0,0,0,0)^\top$ and, respectively, $\beta_2 = (1,1,0,0,0)^\top$ and $\beta_2 = (0,1,0,0,0)^\top$. We generated 500 independent samples of size $n = 50, 100, 200, 400$ from the above models. For each of these samples we computed estimates $\widehat{\beta}_1$ and $\widehat{\beta}_2$ of $\beta_1$ and $\beta_2$, and then the distance $D$ between the true EDR space and its estimate, given by
$$D^2 = \mathrm{Tr}\Big(\big(P - \widehat{P}\big)^2\Big),$$
where $P$ (resp. $\widehat{P}$) denotes the projector onto the space spanned by $\beta_1$ (resp. $\widehat{\beta}_1$) in the case of Model 1, and by $\{\beta_1, \beta_2\}$ (resp. $\{\widehat{\beta}_1, \widehat{\beta}_2\}$) in the case of Model 2 or Model 3. Then, the average and standard deviation of $D$ over the 500 replicates are computed in order to assess the performance of the methods. The two methods
were used with $b_n = n^{-0.09}$ and different kernels: the Gaussian kernel, the Epanechnikov kernel, the biweight kernel, the triweight kernel and the triangular kernel. For our method, we took $k_n$ equal to the integer part of $n^{0.85}$. The obtained results are reported in Table 1 and Table 2. It can be seen that both methods give equivalent results. Indeed, when one outperforms the other, the difference is generally very small. This shows that our method can be seen as a good alternative to the classical method based on kernel estimates.

5. Proofs

We first give some preliminary lemmas. These lemmas are then used to prove Theorem 3.1. The proof of Theorem 3.2 is identical to that of Theorem 2 in [16]; it is, therefore, omitted.

5.1. Preliminary lemmas. For $(\ell,j) \in \{1,\ldots,d\}^2$, we consider:
$$U^{(1)}_{n,\ell,j} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \Big\{g_\ell(Y_i)\big(\widehat{g}_{j,n}(Y_i)-g_j(Y_i)\big) + g_j(Y_i)\big(\widehat{g}_{\ell,n}(Y_i)-g_\ell(Y_i)\big)\Big\}\, \frac{\widehat{f}_{b_n}^2(Y_i) - f_{b_n}^2(Y_i)}{\widehat{f}_{b_n}^2(Y_i)\, f_{b_n}^2(Y_i)}, \qquad (5.1)$$
$$U^{(2)}_{n,\ell,j} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \frac{\big(\widehat{g}_{\ell,n}(Y_i)-g_\ell(Y_i)\big)\big(\widehat{g}_{j,n}(Y_i)-g_j(Y_i)\big)}{\widehat{f}_{b_n}^2(Y_i)}, \qquad (5.2)$$
$$U^{(3)}_{n,\ell,j} = \frac{1}{\sqrt{n}}\sum_{i=1}^n R_{b_n,\ell}(Y_i) R_{b_n,j}(Y_i)\, \frac{\big(\widehat{f}_{b_n}^2(Y_i) - f_{b_n}^2(Y_i)\big)^2}{\widehat{f}_{b_n}^2(Y_i)\, f_{b_n}^2(Y_i)}, \qquad (5.3)$$
$$U^{(4)}_{n,\ell,j} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \big(\widehat{f}_{b_n}(Y_i) - f_{b_n}(Y_i)\big)^2\, \frac{R_{b_n,\ell}(Y_i) R_{b_n,j}(Y_i)}{f_{b_n}^2(Y_i)}, \qquad (5.4)$$
where $R_{b_n,\ell}$ is the function defined in (2.1). Then, we have:

Lemma 5.1. Under the assumptions 3.2 to 3.5, 3.8, 3.9 and 3.10, we have, for $(\ell,j)\in\{1,\ldots,d\}^2$ and $m\in\{1,\ldots,4\}$, $U^{(m)}_{n,\ell,j} = o_p(1)$.

Proof. Arguing as in the proof of Lemma 5.2 in [16] and using Theorem 1 and Theorem 2 of [1], we get, almost surely,
$$\big|U^{(1)}_{n,\ell,j}\big| \leqslant C_1 C_2\, \rho_n\tau_n\sqrt{n}\, b_n^{-2}\big(C_2 b_n^{-1}\tau_n + 2\big)\, \frac{1}{n}\sum_{i=1}^n \big(|R_\ell(Y_i)| + |R_j(Y_i)|\big),$$
where $C_1$ and $C_2$ are positive constants,
$$\rho_n = \frac{k_n^4}{n^4} + \frac{M_n\sqrt{n\log(n)}}{k_n} \quad \text{and} \quad \tau_n = \frac{k_n^4}{n^4} + \frac{\sqrt{n\log(n)}}{k_n}.$$
Since, from Assumptions 3.2, 3.9 and 3.10,
$$\rho_n \sim \frac{M_n\, n^{1/2}\log^{1/2}(n)}{k_n} \sim n^{1/2-c_1}\log(n), \qquad \tau_n \sim \frac{n^{1/2}\log^{1/2}(n)}{k_n} \sim n^{1/2-c_1}\log^{1/2}(n), \qquad (5.5)$$
it follows that
$$b_n^{-1}\tau_n \sim n^{c_2+1/2-c_1}\log^{1/2}(n), \qquad \rho_n\tau_n\sqrt{n}\, b_n^{-2} \sim n^{2(3/4+c_2-c_1)}\log^{3/2}(n)$$

Table 1. Average and standard deviation of $D$ over 500 replicates, with sample sizes $n = 50, 100$.
Sample size   Kernel         Model 1             Model 2             Model 3
                             k-NN      Kernel    k-NN      Kernel    k-NN      Kernel
n = 50        Gaussian       0.557     0.462     1.232     1.166     1.240     1.248
                             (0.202)   (0.166)   (0.208)   (0.212)   (0.194)   (0.178)
              Epanechnikov   0.504     0.534     1.232     1.188     1.238     1.236
                             (0.180)   (0.200)   (0.188)   (0.212)   (0.202)   (0.194)
              Biweight       0.496     0.542     1.234     1.194     1.228     1.228
                             (0.178)   (0.202)   (0.190)   (0.206)   (0.202)   (0.204)
              Triweight      0.522     0.578     1.210     1.192     1.230     1.218
                             (0.182)   (0.198)   (0.216)   (0.216)   (0.200)   (0.200)
              Triangular     0.524     0.562     1.240     1.182     1.236     1.236
                             (0.178)   (0.192)   (0.206)   (0.222)   (0.210)   (0.204)
n = 100       Gaussian       0.396     0.328     1.266     1.182     1.224     1.216
                             (0.142)   (0.112)   (0.178)   (0.174)   (0.196)   (0.178)
              Epanechnikov   0.346     0.344     1.240     1.178     1.222     1.214
                             (0.126)   (0.126)   (0.174)   (0.176)   (0.178)   (0.184)
              Biweight       0.332     0.354     1.220     1.170     1.224     1.218
                             (0.116)   (0.124)   (0.182)   (0.190)   (0.184)   (0.186)
              Triweight      0.346     0.370     1.222     1.180     1.236     1.226
                             (0.124)   (0.134)   (0.186)   (0.182)   (0.186)   (0.176)
              Triangular     0.340     0.358     1.238     1.178     1.230     1.230
                             (0.118)   (0.122)   (0.172)   (0.180)   (0.186)   (0.176)

and, consequently, that
$$\lim_{n\to\infty} \rho_n\tau_n\sqrt{n}\, b_n^{-2}\big(b_n^{-1}\tau_n + 2\big) = 0$$
because $c_1 - c_2 > 3/4 > 1/2$. Then, using the law of large numbers, we deduce that $U^{(1)}_{n,\ell,j} = o_p(1)$. Similarly, as in the proof of Lemma 5.2 in [16], we have, almost surely, the inequalities
$$\big|U^{(2)}_{n,\ell,j}\big| \leqslant C_1^2\, n^{1/2} b_n^{-2} \rho_n^2,$$
Table 2. Average and standard deviation of $D$ over 500 replicates, with sample sizes $n = 200, 400$.

Sample size   Kernel         Model 1             Model 2             Model 3
                             k-NN      Kernel    k-NN      Kernel    k-NN      Kernel
n = 200       Gaussian       0.266     0.220     1.146     1.174     1.228     1.224
                             (0.092)   (0.074)   (0.186)   (0.138)   (0.160)   (0.132)
              Epanechnikov   0.230     0.226     1.228     1.170     1.218     1.226
                             (0.083)   (0.080)   (0.146)   (0.146)   (0.158)   (0.142)
              Biweight       0.238     0.240     1.222     1.172     1.208     1.218
                             (0.082)   (0.084)   (0.160)   (0.146)   (0.158)   (0.150)
              Triweight      0.236     0.244     1.214     1.160     1.224     1.226
                             (0.080)   (0.086)   (0.148)   (0.142)   (0.156)   (0.154)
              Triangular     0.244     0.244     1.228     1.162     1.236     1.234
                             (0.085)   (0.082)   (0.160)   (0.150)   (0.142)   (0.134)
n = 400       Gaussian       0.176     0.154     1.266     1.184     1.220     1.220
                             (0.066)   (0.054)   (0.122)   (0.096)   (0.120)   (0.096)
              Epanechnikov   0.164     0.162     1.228     1.178     1.218     1.222
                             (0.060)   (0.060)   (0.116)   (0.106)   (0.112)   (0.108)
              Biweight       0.160     0.160     1.224     1.174     1.220     1.222
                             (0.061)   (0.062)   (0.116)   (0.106)   (0.112)   (0.112)
              Triweight      0.162     0.166     1.216     1.168     1.224     1.228
                             (0.060)   (0.060)   (0.114)   (0.104)   (0.102)   (0.098)
              Triangular     0.160     0.160     1.226     1.174     1.222     1.222
                             (0.058)   (0.058)   (0.114)   (0.104)   (0.102)   (0.098)

$$\big|U^{(3)}_{n,\ell,j}\big| \leqslant C_2^2\, \tau_n^2\sqrt{n}\, b_n^{-2}\big(C_2^2 b_n^{-2}\tau_n^2 + 4C_2 b_n^{-1}\tau_n + 4\big)\,\frac{1}{n}\sum_{i=1}^n |R_\ell(Y_i)R_j(Y_i)|,$$
$$\big|U^{(4)}_{n,\ell,j}\big| \leqslant C_2^2\, n^{1/2} b_n^{-2}\tau_n^2\,\frac{1}{n}\sum_{i=1}^n |R_\ell(Y_i)R_j(Y_i)|,$$
from which we deduce that $U^{(2)}_{n,\ell,j} = o_p(1)$, $U^{(3)}_{n,\ell,j} = o_p(1)$ and $U^{(4)}_{n,\ell,j} = o_p(1)$ since, as above,
$$\lim_{n\to\infty} n^{1/2} b_n^{-2}\rho_n^2 = 0, \quad \lim_{n\to\infty} \tau_n^2\sqrt{n}\, b_n^{-2}\big(C_2^2 b_n^{-2}\tau_n^2 + 4C_2 b_n^{-1}\tau_n + 4\big) = 0 \quad \text{and} \quad \lim_{n\to\infty} n^{1/2} b_n^{-2}\tau_n^2 = 0. \qquad \Box$$

Considering a sequence $(\beta_n)_{n\in\mathbb{N}^*}$ in $]0,1[$ such that $1-\beta_n \sim n^{-4}$, we put
$$D_n^-(y) = \frac{k_n\sqrt{\beta_n}}{n f(y)}, \qquad D_n^+(y) = \frac{k_n}{n\sqrt{\beta_n}\, f(y)}, \qquad (5.6)$$
and we consider, for $\ell\in\{1,\ldots,d\}$, the functions defined by
$$\widehat{g}^{\,-}_{1,\ell,n}(y) = \frac{1}{n D_n^+(y)}\sum_{i=1}^n X_{i\ell}\mathbf{1}_{\{X_{i\ell}\geqslant 0\}} K\!\left(\frac{Y_i-y}{D_n^-(y)}\right) \qquad (5.7)$$
and
$$\widehat{g}^{\,+}_{1,\ell,n}(y) = \frac{1}{n D_n^-(y)}\sum_{i=1}^n X_{i\ell}\mathbf{1}_{\{X_{i\ell}\geqslant 0\}} K\!\left(\frac{Y_i-y}{D_n^+(y)}\right). \qquad (5.8)$$
Then, we have:

Lemma 5.2. Under the assumptions 3.5 to 3.10, we have for $(\ell,j)\in\{1,\ldots,d\}^2$:
$$\sqrt{n}\, E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\big(\widehat{g}^{\,-}_{1,j,n}(Y) - \widehat{g}^{\,+}_{1,j,n}(Y)\big)\right] = o(1),$$
where $g_{1,\ell}$ is the function defined in (3.1).

Proof. We have:
$$\begin{aligned}
\sqrt{n}\,E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right]
&= \frac{n^{3/2}\sqrt{\beta_n}}{k_n} E\!\left[\frac{1}{n}\sum_{i=1}^n \frac{g_{1,\ell}(Y)f(Y)}{f_{b_n}^2(Y)}\, X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}}\, K\!\left(\frac{nf(Y)(Y_i-Y)}{k_n\sqrt{\beta_n}}\right)\right] \\
&= \frac{n^{3/2}\sqrt{\beta_n}}{k_n} E\!\left[\frac{g_{1,\ell}(Y)f(Y)}{f_{b_n}^2(Y)}\, X_{1j}\mathbf{1}_{\{X_{1j}\geqslant 0\}}\, K\!\left(\frac{nf(Y)(Y_1-Y)}{k_n\sqrt{\beta_n}}\right)\right] \\
&= \frac{n^{3/2}\sqrt{\beta_n}}{k_n} \iiint \frac{g_{1,\ell}(y)f(y)}{f_{b_n}^2(y)}\, x\mathbf{1}_{\mathbb{R}_+}(x)\, K\!\left(\frac{nf(y)(z-y)}{k_n\sqrt{\beta_n}}\right) f_{(X_{1j},Y_1,Y)}(x,z,y)\,dx\,dz\,dy \\
&= \frac{n^{3/2}\sqrt{\beta_n}}{k_n} \iiint \frac{g_{1,\ell}(y)f(y)}{f_{b_n}^2(y)}\, x\mathbf{1}_{\mathbb{R}_+}(x)\, K\!\left(\frac{nf(y)(z-y)}{k_n\sqrt{\beta_n}}\right) f_{(X_j,Y)}(x,z)\,f(y)\,dx\,dz\,dy \\
&= \frac{n^{3/2}\sqrt{\beta_n}}{k_n} \iint \frac{g_{1,\ell}(y)f(y)}{f_{b_n}^2(y)}\, g_{1,j}(z)\, K\!\left(\frac{nf(y)(z-y)}{k_n\sqrt{\beta_n}}\right) f(y)\,dz\,dy.
\end{aligned}$$
Since
$$\int g_{1,j}(z)\, K\!\left(\frac{nf(y)(z-y)}{k_n\sqrt{\beta_n}}\right) dz = \frac{k_n\sqrt{\beta_n}}{nf(y)} \int g_{1,j}\!\left(y + \frac{k_n\sqrt{\beta_n}}{nf(y)}t\right) K(t)\,dt,$$
it follows that
$$\sqrt{n}\,E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right] = \sqrt{n}\,\beta_n \iint \frac{g_{1,\ell}(y)}{f_{b_n}^2(y)}\, g_{1,j}\!\left(y + \frac{k_n\sqrt{\beta_n}}{nf(y)}t\right) K(t)\, f(y)\,dt\,dy = A_n + B_n,$$
where
$$A_n = \sqrt{n}\,\beta_n \iint \frac{g_{1,\ell}(y)}{f_{b_n}^2(y)} \left(g_{1,j}\!\left(y + \frac{k_n\sqrt{\beta_n}}{nf(y)}t\right) - g_{1,j}(y)\right) K(t)\, f(y)\,dt\,dy$$
and
$$B_n = \sqrt{n}\,\beta_n \iint \frac{g_{1,\ell}(y)g_{1,j}(y)}{f_{b_n}^2(y)}\, K(t)\, f(y)\,dt\,dy = \sqrt{n}\,\beta_n \int \frac{g_{1,\ell}(y)g_{1,j}(y)}{f_{b_n}^2(y)}\, f(y)\,dy.$$
By Taylor expansion, we have
$$g_{1,j}\!\left(y + \frac{k_n\sqrt{\beta_n}}{nf(y)}t\right) = g_{1,j}(y) + \sum_{m=1}^{2} \frac{g^{(m)}_{1,j}(y)}{m!}\left(\frac{k_n\sqrt{\beta_n}}{nf(y)}\right)^m t^m + \frac{1}{6}\left(\frac{k_n\sqrt{\beta_n}}{nf(y)}\right)^3 t^3\, g^{(3)}_{1,j}\!\left(y + \theta\frac{k_n\sqrt{\beta_n}}{nf(y)}t\right),$$
where $0<\theta<1$. Thus
$$\int \left(g_{1,j}\!\left(y + \frac{k_n\sqrt{\beta_n}}{nf(y)}t\right) - g_{1,j}(y)\right) K(t)\,dt = \sum_{m=1}^{2} \frac{g^{(m)}_{1,j}(y)}{m!}\left(\frac{k_n\sqrt{\beta_n}}{nf(y)}\right)^m \int t^m K(t)\,dt + \frac{1}{6}\left(\frac{k_n\sqrt{\beta_n}}{nf(y)}\right)^3 \int t^3\, g^{(3)}_{1,j}\!\left(y + \theta\frac{k_n\sqrt{\beta_n}}{nf(y)}t\right) K(t)\,dt,$$
and since $K$ is of order 3 (see Assumption 3.8-(iv)), it follows that
$$\int \left(g_{1,j}\!\left(y + \frac{k_n\sqrt{\beta_n}}{nf(y)}t\right) - g_{1,j}(y)\right) K(t)\,dt = \frac{1}{6}\left(\frac{k_n\sqrt{\beta_n}}{nf(y)}\right)^3 \int t^3 \left(g^{(3)}_{1,j}\!\left(y + \theta\frac{k_n\sqrt{\beta_n}}{nf(y)}t\right) - g^{(3)}_{1,j}(y)\right) K(t)\,dt.$$
Hence
$$A_n = \frac{k_n^3\beta_n^{5/2}}{6 n^{5/2}} \iint \frac{g_{1,\ell}(y)\,t^3}{f_{b_n}^2(y)\, f^2(y)} \left(g^{(3)}_{1,j}\!\left(y + \theta\frac{k_n\sqrt{\beta_n}}{nf(y)}t\right) - g^{(3)}_{1,j}(y)\right) K(t)\,dt\,dy$$
and, using Assumption 3.5,
$$|A_n| \leqslant \frac{c\,k_n^4\beta_n^3\,\theta}{6 n^{7/2}} \int t^4 |K(t)|\,dt \times \int \frac{|g_{1,\ell}(y)|}{f_{b_n}^2(y)\, f^3(y)}\,dy \leqslant \frac{c\,k_n^4\beta_n^3}{6 n^{7/2}} \int t^4 |K(t)|\,dt \times \int \frac{|R_{1,\ell}(y)|}{f_{b_n}(y)\, f^3(y)}\,dy \leqslant \frac{c\,k_n^4 b_n^{-1}}{6 c_0^4\, n^{7/2}} \int t^4 |K(t)|\,dt \times E\big(|R_{1,\ell}(Y)|\big).$$
From Assumptions 3.9 and 3.10, $n^{-7/2} b_n^{-1} k_n^4 \sim n^{4c_1+c_2-7/2} \to 0$ as $n\to\infty$ (because $c_1 + c_2/4 < 7/8$). Then we deduce from the above inequality that $A_n = o(1)$. Moreover, since
$$B_n = \sqrt{n}\,\beta_n \int \frac{g_{1,\ell}(y)g_{1,j}(y)}{f_{b_n}^2(y)}\, f(y)\,dy = \sqrt{n}\,\beta_n \int R_{1,\ell}(y)R_{1,j}(y)\,\varepsilon_{b_n}^2(y)\, f(y)\,dy = \sqrt{n}\,\beta_n\, E\big(R_{1,\ell}(Y)R_{1,j}(Y)\,\varepsilon_{b_n}^2(Y)\big),$$
where
$$\varepsilon_{b_n}(y) = \frac{f(y)}{f_{b_n}(y)}, \qquad (5.9)$$
it follows that
$$\Big|B_n - \sqrt{n}\,\beta_n\, E\big(R_{1,\ell}(Y)R_{1,j}(Y)\big)\Big| \leqslant \sqrt{n}\,\beta_n\, E\big(|R_{1,\ell}(Y)R_{1,j}(Y)|\,\big|1-\varepsilon_{b_n}^2(Y)\big|\big).$$
However, since $0\leqslant\varepsilon_{b_n}(y)\leqslant 1$, we have
$$0 \leqslant 1-\varepsilon_{b_n}^2(Y) = \left(1 - \frac{f^2(Y)}{f_{b_n}^2(Y)}\right)\mathbf{1}_{\{f(Y)<b_n\}} \leqslant \mathbf{1}_{\{f(Y)<b_n\}}$$
and, therefore,
$$\Big|B_n - \sqrt{n}\,\beta_n\, E\big(R_{1,\ell}(Y)R_{1,j}(Y)\big)\Big| \leqslant \sqrt{n}\, E\big(|R_{1,\ell}(Y)R_{1,j}(Y)|\,\mathbf{1}_{\{f(Y)<b_n\}}\big) = o(1),$$
which implies that $B_n = \sqrt{n}\,\beta_n E(R_{1,\ell}(Y)R_{1,j}(Y)) + o(1)$ and, consequently, that
$$\sqrt{n}\,E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right] = \sqrt{n}\,\beta_n\, E\big(R_{1,\ell}(Y)R_{1,j}(Y)\big) + o(1).$$
From this latter equality, we get
$$\sqrt{n}\,E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right] - \sqrt{n}\,E\big(R_{1,\ell}(Y)R_{1,j}(Y)\big) = -\sqrt{n}(1-\beta_n)\,E\big(R_{1,\ell}(Y)R_{1,j}(Y)\big) + o(1),$$
and since $\sqrt{n}(1-\beta_n) \sim n^{-7/2} \to 0$ as $n\to\infty$, it follows that
$$\sqrt{n}\,E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right] = \sqrt{n}\,E\big(R_{1,\ell}(Y)R_{1,j}(Y)\big) + o(1). \qquad (5.10)$$
Furthermore, since $\widehat{g}^{\,+}_{1,j,n}$ is obtained from $\widehat{g}^{\,-}_{1,j,n}$ by replacing $\beta_n$ by $1/\beta_n$, we obtain analogously
$$\sqrt{n}\,E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,+}_{1,j,n}(Y)\right] = A'_n + B'_n, \qquad (5.11)$$
where
$$|A'_n| \leqslant \frac{c\,k_n^4\,\theta}{6 n^{7/2}\beta_n^3} \int t^4|K(t)|\,dt \times \int \frac{|g_{1,\ell}(y)|}{f_{b_n}^2(y)\, f^3(y)}\,dy \leqslant \frac{c\,k_n^4 b_n^{-1}}{6 c_0^4\,\beta_n^3\, n^{7/2}} \int t^4|K(t)|\,dt \times E\big(|R_{1,\ell}(Y)|\big) \qquad (5.12)$$
and
$$\Big|B'_n - \frac{\sqrt{n}}{\beta_n}\, E\big(R_{1,\ell}(Y)R_{1,j}(Y)\big)\Big| \leqslant \frac{\sqrt{n}}{\beta_n}\, E\big(|R_{1,\ell}(Y)R_{1,j}(Y)|\,\mathbf{1}_{\{f(Y)<b_n\}}\big). \qquad (5.13)$$
Since $\beta_n \to 1$ as $n\to\infty$, it follows from (5.12), (5.13) and Assumption 3.7 that $A'_n = o(1)$, that $\big|B'_n - \frac{\sqrt{n}}{\beta_n}E(R_{1,\ell}(Y)R_{1,j}(Y))\big| = o(1)$ and, consequently, that $B'_n = \frac{\sqrt{n}}{\beta_n}E(R_{1,\ell}(Y)R_{1,j}(Y)) + o(1)$. This yields
$$B'_n - \sqrt{n}\,E\big(R_{1,\ell}(Y)R_{1,j}(Y)\big) = \frac{\sqrt{n}(1-\beta_n)}{\beta_n}\, E\big(R_{1,\ell}(Y)R_{1,j}(Y)\big) + o(1),$$
and since $\frac{\sqrt{n}(1-\beta_n)}{\beta_n} \sim \sqrt{n}\, n^{-4} = n^{-7/2}$, it follows that $B'_n - \sqrt{n}\,E(R_{1,\ell}(Y)R_{1,j}(Y)) = o(1)$ and, from (5.11), that
$$\sqrt{n}\,E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,+}_{1,j,n}(Y)\right] = \sqrt{n}\,E\big(R_{1,\ell}(Y)R_{1,j}(Y)\big) + o(1). \qquad (5.14)$$
Then, the statement of the lemma is obtained from (5.10) and (5.14). $\Box$

Now, we put
$$V^-_{i,\ell,j,n} = \frac{n\sqrt{\beta_n}\, f(Y_i)}{2k_n} \iint \left(\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\, x\mathbf{1}_{\mathbb{R}_+}(x) + \frac{g_{1,\ell}(z)}{f_{b_n}^2(z)}\, X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}}\right) K\!\left(\frac{nf(Y_i)(z-Y_i)}{k_n\sqrt{\beta_n}}\right) f_{(X_j,Y)}(x,z)\,dx\,dz,$$
$$V^+_{i,\ell,j,n} = \frac{n\, f(Y_i)}{2k_n\sqrt{\beta_n}} \iint \left(\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\, x\mathbf{1}_{\mathbb{R}_+}(x) + \frac{g_{1,\ell}(z)}{f_{b_n}^2(z)}\, X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}}\right) K\!\left(\frac{n\sqrt{\beta_n}\, f(Y_i)(z-Y_i)}{k_n}\right) f_{(X_j,Y)}(x,z)\,dx\,dz.$$
The following lemma is obtained by applying similar arguments as in several works, in order to approximate a sum by a U-statistic (e.g., [17]). Because of the length of its proof, we omit it.

Lemma 5.3. Under the assumptions 3.2, 3.4, 3.6, 3.9 and 3.10, we have for $(\ell,j)\in\{1,\ldots,d\}^2$:
(i) $\displaystyle \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,-}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right]\right\} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \big(V^-_{i,\ell,j,n} - E(V^-_{i,\ell,j,n})\big) + o_p(1)$;
(ii) $\displaystyle \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,+}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,+}_{1,j,n}(Y)\right]\right\} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \big(V^+_{i,\ell,j,n} - E(V^+_{i,\ell,j,n})\big) + o_p(1)$.

Then, using this lemma, we obtain the following result.

Lemma 5.4. Under the assumptions 3.1, 3.6 and 3.8, we have for $(\ell,j)\in\{1,\ldots,d\}^2$:
(i)
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,-}_{1,j,n}(Y_i) + \frac{g_{1,j}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,-}_{1,\ell,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y) + \frac{g_{1,j}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,\ell,n}(Y)\right]\right\}$$
$$= \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)g_{1,j}(Y_i)}{f_{b_n}^2(Y_i)} + \frac{g_{1,\ell}(Y_i)f(Y_i)}{2f_{b_n}^2(Y_i)}X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} + \frac{g_{1,j}(Y_i)f(Y_i)}{2f_{b_n}^2(Y_i)}X_{i\ell}\mathbf{1}_{\{X_{i\ell}\geqslant 0\}} - E\!\left[\frac{g_{1,\ell}(Y)g_{1,j}(Y)}{f_{b_n}^2(Y)} + \frac{g_{1,\ell}(Y)f(Y)}{2f_{b_n}^2(Y)}X_j\mathbf{1}_{\{X_j\geqslant 0\}} + \frac{g_{1,j}(Y)f(Y)}{2f_{b_n}^2(Y)}X_\ell\mathbf{1}_{\{X_\ell\geqslant 0\}}\right]\right\} + o_p(1);$$
(ii) the same equality with $\widehat{g}^{\,+}_{1,j,n}$ and $\widehat{g}^{\,+}_{1,\ell,n}$ in place of $\widehat{g}^{\,-}_{1,j,n}$ and $\widehat{g}^{\,-}_{1,\ell,n}$ on the left-hand side.

Proof. Clearly, using $\int x\mathbf{1}_{\mathbb{R}_+}(x) f_{(X_j,Y)}(x,z)\,dx = g_{1,j}(z)$, $\int f_{(X_j,Y)}(x,z)\,dx = f(z)$ and $g_{1,j}(z) = R_{1,j}(z)f(z)$,
$$V^-_{i,\ell,j,n} = \frac{n\sqrt{\beta_n}\, g_{1,\ell}(Y_i) f(Y_i)}{2k_n f_{b_n}^2(Y_i)} \int R_{1,j}(z)f(z)\, K\!\left(\frac{nf(Y_i)(z-Y_i)}{k_n\sqrt{\beta_n}}\right) dz + \frac{n\sqrt{\beta_n}\, f(Y_i)}{2k_n}\, X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} \int \frac{g_{1,\ell}(z)}{f_{b_n}^2(z)}\, f(z)\, K\!\left(\frac{nf(Y_i)(z-Y_i)}{k_n\sqrt{\beta_n}}\right) dz.$$
Since
$$\frac{n\sqrt{\beta_n}\, f(Y_i)}{k_n} \int K\!\left(\frac{nf(Y_i)(z-Y_i)}{k_n\sqrt{\beta_n}}\right) dz = \beta_n \int K(t)\,dt = \beta_n,$$
it follows that
$$V^-_{i,\ell,j,n} = C_{\ell,j,n}(Y_i) + D_{\ell,n}(X_{ij}, Y_i) + E_{\ell,j,n}(Y_i), \qquad (5.15)$$
where
$$C_{\ell,j,n}(Y_i) = \frac{n\sqrt{\beta_n}\, g_{1,\ell}(Y_i) f(Y_i)}{2k_n f_{b_n}^2(Y_i)} \int \big(R_{1,j}(z)f(z) - R_{1,j}(Y_i)f(Y_i)\big)\, K\!\left(\frac{nf(Y_i)(z-Y_i)}{k_n\sqrt{\beta_n}}\right) dz,$$
$$D_{\ell,n}(X_{ij}, Y_i) = \frac{n\sqrt{\beta_n}\, f(Y_i)}{2k_n}\, X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} \int \left(\frac{g_{1,\ell}(z)}{f_{b_n}^2(z)}f(z) - \frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}f(Y_i)\right) K\!\left(\frac{nf(Y_i)(z-Y_i)}{k_n\sqrt{\beta_n}}\right) dz$$
and
$$E_{\ell,j,n}(Y_i) = \frac{g_{1,\ell}(Y_i) R_{1,j}(Y_i) f(Y_i)\,\beta_n}{2 f_{b_n}^2(Y_i)} + \frac{g_{1,\ell}(Y_i) f(Y_i)\,\beta_n}{2 f_{b_n}^2(Y_i)}\, X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} = \frac{1}{2} R_{1,\ell}(Y_i) R_{1,j}(Y_i)\,\varepsilon_{b_n}^2(Y_i)\,\beta_n + \frac{1}{2} R_{1,\ell}(Y_i)\,\varepsilon_{b_n}^2(Y_i)\,\beta_n\, X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}},$$
$\varepsilon_{b_n}$ being defined in (5.9).
We have
$$E\big(C^2_{\ell,j,n}(Y_i)\big) = E\left[\frac{n^2\beta_n\, g^2_{1,\ell}(Y_i) f^2(Y_i)}{4k_n^2 f^4_{b_n}(Y_i)} \left(\int \big(R_{1,j}(z)f(z)-R_{1,j}(Y_i)f(Y_i)\big)\, K\!\left(\frac{nf(Y_i)(z-Y_i)}{k_n\sqrt{\beta_n}}\right) dz\right)^2\right]$$
$$= \int \frac{n^2\beta_n\, g^2_{1,\ell}(y) f^2(y)}{4k_n^2 f^4_{b_n}(y)} \left(\int \big(g_{1,j}(z)-g_{1,j}(y)\big)\, K\!\left(\frac{nf(y)(z-y)}{k_n\sqrt{\beta_n}}\right) dz\right)^2 f(y)\,dy$$
$$= \int \frac{\beta_n^2\, g^2_{1,\ell}(y)}{4 f^4_{b_n}(y)} \left(\int \left(g_{1,j}\!\Big(y+\frac{k_n\sqrt{\beta_n}}{nf(y)}t\Big)-g_{1,j}(y)\right) K(t)\,dt\right)^2 f(y)\,dy$$
$$\leqslant \int \frac{\beta_n^2\, g^2_{1,\ell}(y)}{4 f^4_{b_n}(y)} \left(\int \Big|g_{1,j}\!\Big(y+\frac{k_n\sqrt{\beta_n}}{nf(y)}t\Big)-g_{1,j}(y)\Big|\, |K(t)|\,dt\right)^2 f(y)\,dy$$
$$\leqslant \int \frac{c^2 k_n^2\beta_n^3\, R^2_{1,\ell}(y)}{4 n^2 f^4_{b_n}(y)}\, f(y)\,dy \left(\int |t|\,|K(t)|\,dt\right)^2 \leqslant \frac{c^2 k_n^2\, n^{-2} b_n^{-2}}{4 c_0^2}\, E\big(R^2_{1,\ell}(Y)\,\varepsilon^2_{b_n}(Y)\big) \left(\int |t|\,|K(t)|\,dt\right)^2 \leqslant \frac{c^2 k_n^2\, n^{-2} b_n^{-2}}{4 c_0^2}\, E\big(R^2_{1,\ell}(Y)\big) \left(\int |t|\,|K(t)|\,dt\right)^2.$$
Since $k_n^2 n^{-2} b_n^{-2} \sim n^{2c_1+2c_2-2} = n^{-2(1-c_1-c_2)}$ and $c_1 + c_2 < 1$, we deduce that $k_n^2 n^{-2} b_n^{-2} \to 0$ as $n\to\infty$, and from the above inequality that $E\big(C^2_{\ell,j,n}(Y_i)\big) = o(1)$. Since $\mathrm{Var}\big(n^{-1/2}\sum_{i=1}^n C_{\ell,j,n}(Y_i)\big) = \mathrm{Var}\big(C_{\ell,j,n}(Y)\big) \leqslant E\big(C^2_{\ell,j,n}(Y)\big)$, it follows from the Bienaymé-Chebyshev inequality that
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \big(C_{\ell,j,n}(Y_i) - E(C_{\ell,j,n}(Y))\big) = o_p(1). \qquad (5.16)$$
Furthermore, conditioning on $Y$,
$$E\big(D^2_{\ell,n}(X_j,Y)\big) = \frac{n^2\beta_n}{4k_n^2}\, E\left[X_j^2\mathbf{1}_{\{X_j\geqslant 0\}} \left(\int \big(R_{1,\ell}(z)\varepsilon^2_{b_n}(z) - R_{1,\ell}(Y)\varepsilon^2_{b_n}(Y)\big)\, f(Y)\, K\!\left(\frac{nf(Y)(z-Y)}{k_n\sqrt{\beta_n}}\right) dz\right)^2\right] = \frac{1}{4}\, E\Big[E\big(X_j^2\mathbf{1}_{\{X_j\geqslant 0\}}\mid Y\big)\, \varphi^2_{\ell,n}(Y)\Big],$$
where
$$\varphi_{\ell,n}(Y) = \frac{n\sqrt{\beta_n}}{k_n}\int \big(R_{1,\ell}(z)\varepsilon^2_{b_n}(z) - R_{1,\ell}(Y)\varepsilon^2_{b_n}(Y)\big)\, f(Y)\, K\!\left(\frac{nf(Y)(z-Y)}{k_n\sqrt{\beta_n}}\right) dz = \beta_n \int \left(R_{1,\ell}\Big(Y+\frac{k_n\sqrt{\beta_n}}{nf(Y)}t\Big)\,\varepsilon^2_{b_n}\Big(Y+\frac{k_n\sqrt{\beta_n}}{nf(Y)}t\Big) - R_{1,\ell}(Y)\,\varepsilon^2_{b_n}(Y)\right) K(t)\,dt.$$
From the continuity of $f$ and $g_{1,\ell}$, which implies that of $R_{1,\ell}$ and $\varepsilon_{b_n}$, and since $n^{-1}k_n \sim n^{c_1-1} \to 0$ and $\beta_n \to 1$ as $n\to\infty$, we obtain
$$\lim_{n\to\infty} \left(R_{1,\ell}\Big(Y+\frac{k_n\sqrt{\beta_n}}{nf(Y)}t\Big)\,\varepsilon^2_{b_n}\Big(Y+\frac{k_n\sqrt{\beta_n}}{nf(Y)}t\Big) - R_{1,\ell}(Y)\,\varepsilon^2_{b_n}(Y)\right) K(t) = 0.$$
Further, since $0\leqslant\varepsilon_{b_n}\leqslant 1$, the triangle inequality and the Lipschitz condition on $R_{1,\ell}$ give
$$\left|\Big(R_{1,\ell}\big(Y+\tfrac{k_n\sqrt{\beta_n}}{nf(Y)}t\big)\,\varepsilon^2_{b_n}\big(Y+\tfrac{k_n\sqrt{\beta_n}}{nf(Y)}t\big) - R_{1,\ell}(Y)\,\varepsilon^2_{b_n}(Y)\Big) K(t)\right| \leqslant \Big(\big|R_{1,\ell}\big(Y+\tfrac{k_n\sqrt{\beta_n}}{nf(Y)}t\big) - R_{1,\ell}(Y)\big| + 2|R_{1,\ell}(Y)|\Big)|K(t)| \leqslant \frac{c\,k_n\sqrt{\beta_n}}{nf(Y)}\,|t|\,|K(t)| + 2|R_{1,\ell}(Y)|\,|K(t)|.$$
Since $n^{-1}k_n \to 0$ as $n\to\infty$, we have, for $n$ large enough,
$$\left|\Big(R_{1,\ell}\big(Y+\tfrac{k_n\sqrt{\beta_n}}{nf(Y)}t\big)\,\varepsilon^2_{b_n}\big(Y+\tfrac{k_n\sqrt{\beta_n}}{nf(Y)}t\big) - R_{1,\ell}(Y)\,\varepsilon^2_{b_n}(Y)\Big) K(t)\right| \leqslant \frac{c}{f(Y)}|t|\,|K(t)| + 2|R_{1,\ell}(Y)|\,|K(t)| \leqslant \frac{c}{c_0}|t|\,|K(t)| + 2|R_{1,\ell}(Y)|\,|K(t)|, \qquad (5.17)$$
with
$$\int \Big(\frac{c}{c_0}|t|\,|K(t)| + 2|R_{1,\ell}(Y)|\,|K(t)|\Big)\,dt = \frac{c}{c_0}\int |t|\,|K(t)|\,dt + 2|R_{1,\ell}(Y)|\int |K(t)|\,dt < \infty.$$
Then, using the dominated convergence theorem, we get $\lim_{n\to\infty}\varphi_{\ell,n}(Y) = 0$. Moreover, using (5.17), we have
$$E\big(X_j^2\mathbf{1}_{\{X_j\geqslant 0\}}\mid Y\big)\,\varphi^2_{\ell,n}(Y) \leqslant \frac{2c^2}{c_0^2}\Big(\int |t|\,|K(t)|\,dt\Big)^2 E\big(X_j^2\mid Y\big) + 4\Big(\int |K(t)|\,dt\Big)^2 E\big(X_j^2\mid Y\big)\,R^2_{1,\ell}(Y).$$
Since
$$E\left[\frac{2c^2}{c_0^2}\Big(\int |t|\,|K(t)|\,dt\Big)^2 E\big(X_j^2\mid Y\big) + 4\Big(\int |K(t)|\,dt\Big)^2 E\big(X_j^2\mid Y\big)R^2_{1,\ell}(Y)\right] \leqslant \frac{2c^2}{c_0^2}\Big(\int |t|\,|K(t)|\,dt\Big)^2 E\big(X_j^2\big) + 4\Big(\int |K(t)|\,dt\Big)^2 E^{1/2}\big(X_j^4\big)\,E^{1/2}\big(R^4_{1,\ell}(Y)\big) < \infty,$$
we can use the dominated convergence theorem again, which yields
$$\lim_{n\to\infty} E\big(D^2_{\ell,n}(X_j,Y)\big) = \frac{1}{4}\,E\Big[E\big(X_j^2\mathbf{1}_{\{X_j\geqslant 0\}}\mid Y\big)\lim_{n\to\infty}\varphi^2_{\ell,n}(Y)\Big] = 0.$$
Since $\mathrm{Var}\big(n^{-1/2}\sum_{i=1}^n D_{\ell,n}(X_{ij},Y_i)\big) = \mathrm{Var}\big(D_{\ell,n}(X_j,Y)\big) \leqslant E\big(D^2_{\ell,n}(X_j,Y)\big)$, we conclude, by using the Bienaymé-Chebyshev inequality, that
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \big(D_{\ell,n}(X_{ij},Y_i) - E(D_{\ell,n}(X_j,Y))\big) = o_p(1). \qquad (5.18)$$
Then, from (5.15), (5.16) and (5.18) we obtain
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \big(V^-_{i,\ell,j,n} - E(V^-_{i,\ell,j,n})\big) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\tfrac{1}{2}R_{1,\ell}(Y_i)R_{1,j}(Y_i)\varepsilon^2_{b_n}(Y_i)\beta_n + \tfrac{1}{2}R_{1,\ell}(Y_i)\varepsilon^2_{b_n}(Y_i)\beta_n X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} - E\Big[\tfrac{1}{2}R_{1,\ell}(Y)R_{1,j}(Y)\varepsilon^2_{b_n}(Y)\beta_n + \tfrac{1}{2}R_{1,\ell}(Y)\varepsilon^2_{b_n}(Y)\beta_n X_j\mathbf{1}_{\{X_j\geqslant 0\}}\Big]\right\} + o_p(1),$$
and from Lemma 5.3 it follows that
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f^2_{b_n}(Y_i)}\widehat{g}^{\,-}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f^2_{b_n}(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right]\right\} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\tfrac{1}{2}R_{1,\ell}(Y_i)R_{1,j}(Y_i)\varepsilon^2_{b_n}(Y_i) + \tfrac{1}{2}R_{1,\ell}(Y_i)\varepsilon^2_{b_n}(Y_i)X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} - E\Big[\tfrac{1}{2}R_{1,\ell}(Y)R_{1,j}(Y)\varepsilon^2_{b_n}(Y) + \tfrac{1}{2}R_{1,\ell}(Y)\varepsilon^2_{b_n}(Y)X_j\mathbf{1}_{\{X_j\geqslant 0\}}\Big]\right\} + \delta_n + o_p(1),$$
where
$$\delta_n = \frac{\beta_n-1}{\sqrt{n}}\sum_{i=1}^n \left\{\tfrac{1}{2}R_{1,\ell}(Y_i)R_{1,j}(Y_i)\varepsilon^2_{b_n}(Y_i) + \tfrac{1}{2}R_{1,\ell}(Y_i)\varepsilon^2_{b_n}(Y_i)X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} - E\Big[\tfrac{1}{2}R_{1,\ell}(Y)R_{1,j}(Y)\varepsilon^2_{b_n}(Y) + \tfrac{1}{2}R_{1,\ell}(Y)\varepsilon^2_{b_n}(Y)X_j\mathbf{1}_{\{X_j\geqslant 0\}}\Big]\right\},$$
so that
$$|\delta_n| \leqslant \frac{(1-\beta_n)\sqrt{n}}{2}\left(\frac{1}{n}\sum_{i=1}^n |R_{1,\ell}(Y_i)R_{1,j}(Y_i)| + \frac{1}{n}\sum_{i=1}^n |R_{1,\ell}(Y_i)X_{ij}| + E\big(|R_{1,\ell}(Y)R_{1,j}(Y)| + |R_{1,\ell}(Y)X_j|\big)\right).$$
Since $(1-\beta_n)\sqrt{n} \sim n^{-7/2}$, we deduce from the preceding inequality and the strong law of large numbers that $\delta_n = o_p(1)$ and, consequently, that
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f^2_{b_n}(Y_i)}\widehat{g}^{\,-}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f^2_{b_n}(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right]\right\} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)g_{1,j}(Y_i)}{2f^2_{b_n}(Y_i)} + \frac{g_{1,\ell}(Y_i)f(Y_i)}{2f^2_{b_n}(Y_i)}X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} - E\!\left[\frac{g_{1,\ell}(Y)g_{1,j}(Y)}{2f^2_{b_n}(Y)} + \frac{g_{1,\ell}(Y)f(Y)}{2f^2_{b_n}(Y)}X_j\mathbf{1}_{\{X_j\geqslant 0\}}\right]\right\} + o_p(1). \qquad (5.19)$$
Then, the proof of (i) is complete by adding to (5.19) the same equation obtained by exchanging $\ell$ and $j$. Since $\widehat{g}^{\,+}_{1,j,n}$ is obtained from $\widehat{g}^{\,-}_{1,j,n}$ by replacing $\beta_n$ by $1/\beta_n$, we prove (ii) by a similar reasoning. It leads to
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f^2_{b_n}(Y_i)}\widehat{g}^{\,+}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f^2_{b_n}(Y)}\widehat{g}^{\,+}_{1,j,n}(Y)\right]\right\} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\tfrac{1}{2}R_{1,\ell}(Y_i)R_{1,j}(Y_i)\varepsilon^2_{b_n}(Y_i) + \tfrac{1}{2}R_{1,\ell}(Y_i)\varepsilon^2_{b_n}(Y_i)X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} - E\Big[\tfrac{1}{2}R_{1,\ell}(Y)R_{1,j}(Y)\varepsilon^2_{b_n}(Y) + \tfrac{1}{2}R_{1,\ell}(Y)\varepsilon^2_{b_n}(Y)X_j\mathbf{1}_{\{X_j\geqslant 0\}}\Big]\right\} + \delta'_n + o_p(1),$$
where
$$\delta'_n = \frac{1-\beta_n}{\beta_n\sqrt{n}}\sum_{i=1}^n \left\{\tfrac{1}{2}R_{1,\ell}(Y_i)R_{1,j}(Y_i)\varepsilon^2_{b_n}(Y_i) + \tfrac{1}{2}R_{1,\ell}(Y_i)\varepsilon^2_{b_n}(Y_i)X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} - E\Big[\tfrac{1}{2}R_{1,\ell}(Y)R_{1,j}(Y)\varepsilon^2_{b_n}(Y) + \tfrac{1}{2}R_{1,\ell}(Y)\varepsilon^2_{b_n}(Y)X_j\mathbf{1}_{\{X_j\geqslant 0\}}\Big]\right\}$$
and
$$|\delta'_n| \leqslant \frac{(1-\beta_n)\sqrt{n}}{2\beta_n}\left(\frac{1}{n}\sum_{i=1}^n |R_{1,\ell}(Y_i)R_{1,j}(Y_i)| + \frac{1}{n}\sum_{i=1}^n |R_{1,\ell}(Y_i)X_{ij}| + E\big(|R_{1,\ell}(Y)R_{1,j}(Y)| + |R_{1,\ell}(Y)X_j|\big)\right).$$
Since $(1-\beta_n)\sqrt{n} \sim n^{-7/2}$ and $\beta_n \to 1$ as $n\to\infty$, we also deduce that $\delta'_n = o_p(1)$, which implies the result of (ii) as above. $\Box$

Now, considering, for any $\ell\in\{1,\ldots,d\}$,
$$\widehat{g}_{1,\ell,n}(y) = \frac{1}{nH_n(y)}\sum_{i=1}^n X_{i\ell}\mathbf{1}_{\{X_{i\ell}\geqslant 0\}}\, K\!\left(\frac{Y_i-y}{H_n(y)}\right) \qquad (5.20)$$
and
$$\widehat{g}_{2,\ell,n}(y) = \frac{1}{nH_n(y)}\sum_{i=1}^n \big({-X_{i\ell}}\big)\mathbf{1}_{\{X_{i\ell}<0\}}\, K\!\left(\frac{Y_i-y}{H_n(y)}\right), \qquad (5.21)$$
we have:

Lemma 5.5. Under the assumptions 3.1, 3.5, 3.7 to 3.10, we have for $(\ell,j)\in\{1,\ldots,d\}^2$ and $(m,r)\in\{1,2\}^2$:
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{m,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}_{r,j,n}(Y_i) + \frac{g_{m,j}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}_{r,\ell,n}(Y_i) - E\!\left[\frac{g_{m,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}_{r,j,n}(Y) + \frac{g_{m,j}(Y)}{f_{b_n}^2(Y)}\widehat{g}_{r,\ell,n}(Y)\right]\right\}$$
$$= \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{m,\ell}(Y_i)g_{r,j}(Y_i)}{f_{b_n}^2(Y_i)} + \frac{g_{m,\ell}(Y_i)f(Y_i)}{2f_{b_n}^2(Y_i)}X_{ij}\mathbf{1}_{\{X_{ij}\geqslant 0\}} + \frac{g_{m,j}(Y_i)f(Y_i)}{2f_{b_n}^2(Y_i)}X_{i\ell}\mathbf{1}_{\{X_{i\ell}\geqslant 0\}} - E\!\left[\frac{g_{m,\ell}(Y)g_{r,j}(Y)}{f_{b_n}^2(Y)} + \frac{g_{m,\ell}(Y)f(Y)}{2f_{b_n}^2(Y)}X_j\mathbf{1}_{\{X_j\geqslant 0\}} + \frac{g_{m,j}(Y)f(Y)}{2f_{b_n}^2(Y)}X_\ell\mathbf{1}_{\{X_\ell\geqslant 0\}}\right]\right\} + o_p(1).$$

Proof. We only give the proof for $m=r=1$, since those for the other values are obtained by similar reasoning. By using Assumption 3.8-(vi) we have, as in the proof of Theorem 2 in [1] (see p. 14),
$$\widehat{g}^{\,-}_{1,\ell,n}(y) \leqslant \widehat{g}_{1,\ell,n}(y) \leqslant \widehat{g}^{\,+}_{1,\ell,n}(y),$$
where $\widehat{g}^{\,-}_{1,\ell,n}$ and $\widehat{g}^{\,+}_{1,\ell,n}$ are defined in (5.7) and (5.8). Hence
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,-}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,+}_{1,j,n}(Y)\right]\right\} \leqslant \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}_{1,j,n}(Y)\right]\right\} \leqslant \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,+}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right]\right\}.$$
However, by using Lemma 5.2, we obtain
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,-}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,+}_{1,j,n}(Y)\right]\right\} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,-}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right]\right\} + \sqrt{n}\,E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\big(\widehat{g}^{\,-}_{1,j,n}(Y) - \widehat{g}^{\,+}_{1,j,n}(Y)\big)\right] = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,-}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right]\right\} + o(1)$$
and, similarly,
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,+}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right]\right\} = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,+}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,+}_{1,j,n}(Y)\right]\right\} + o(1).$$
Hence
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,-}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,-}_{1,j,n}(Y)\right]\right\} + o(1) \leqslant \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}_{1,j,n}(Y)\right]\right\} \leqslant \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{\frac{g_{1,\ell}(Y_i)}{f_{b_n}^2(Y_i)}\widehat{g}^{\,+}_{1,j,n}(Y_i) - E\!\left[\frac{g_{1,\ell}(Y)}{f_{b_n}^2(Y)}\widehat{g}^{\,+}_{1,j,n}(Y)\right]\right\} + o(1).$$
NKIET By adding to this last inequality the same one obtained by exchanging ℓandj, we obtain F− ℓ,j,n+ o(1)⩽Fℓ,j,n⩽F+ ℓ,j,n+ o(1), where Fℓ,j,n=1√nnX i=1( g1,ℓ(Yi) f2 bn(Yi)bg1,j,n(Yi) +g1,j(Yi) f2 bn(Yi)bg1,ℓ,n(Yi)−E" g1,ℓ(Y) f2 bn(Y)bg1,j,n(Y) +g1,j(Y) f2 bn(Y)bg1,ℓ,n(Y)#) , F− ℓ,j,n=1√nnX i=1( g1,ℓ(Yi) f2 bn(Yi)bg− 1,j,n(Yi) +g1,j(Yi) f2 bn(Yi)bg− 1,ℓ,n(Yi)−E" g1,ℓ(Y) f2 bn(Y)bg− 1,j,n(Y) +g1,j(Y) f2 bn(Y)bg− 1,ℓ,n(Y)#) , F+ ℓ,j,n=1√nnX i=1( g1,ℓ(Yi) f2 bn(Yi)bg+ 1,j,n(Yi) +g1,j(Yi) f2 bn(Yi)bg+ 1,ℓ,n(Yi)−E" g1,ℓ(Y) f2 bn(Y)bg+ 1,j,n(Y) +g1,j(Y) f2 bn(Y)bg+ 1,ℓ,n(Y)#) . Then, by using Lemma 5.4, we deduce from the preceding inequality that Fℓ,j,n−1√nnX i=1g1,ℓ(Yi)g1,j(Yi) f2 bn(Yi)+g1,ℓ(Yi)f(Yi) 2f2 bn(Yi)Xij1{Xij⩾0}+g1,j(Yi)f(Yi) 2f2 bn(Yi)Xiℓ1{Xiℓ⩾0} +E" g1,ℓ(Y)g1,j(Y) f2 bn(Y)+g1,ℓ(Y)f(Y) 2f2 bn(Yi)Xj1{Xj⩾0}+g1,j(Y)f(Y) 2f2 bn(Y)Xℓ1{Xℓ⩾0}# = op(1), and the proof is complete. □ We define the functions I(2) ℓj(y) =gℓ(y)bgj,n(y) +gj(y)bgℓ,n(y) f2 bn(y)=Rbn,ℓ(y)bgj,n(y) fbn(y)+Rbn,j(y)bgℓ,n(y) fbn(y)(5.22) and I(3) ℓj(y) = 2 Rbn,ℓ(y)Rbn,j(y)bfbn(y) fbn(y), (5.23) where Rbn,ℓis defined in (2.1). Then, we have: Lemma 5.6. Under the assumptions 3.1 and 3.3 to 3.10, we have for (ℓ, j)∈ {1, . . . , d }2: (i)1√nnX i=1n I(2) ℓj(Yi)−Eh I(2) ℓj(Y)io =1√nnX i=1 Rbn,ℓ(Yi)Rbn,j(Yi) +1 2XijRbn,ℓ(Yi)f(Yi) fbn(Yi)+1 2XiℓRbn,j(Yi)f(Yi) fbn(Yi) −E Rbn,ℓ(Y)Rbn,j(Y) +1 2XjRbn,ℓ(Y)f(Y) fbn(Y)+1 2XℓRbn,j(Y)f(Y) fbn(Y) + op(1); (ii)1√nnX i=1n I(3) ℓj(Yi)−Eh I(3) ℓj(Y)io =2√nnX i=1 Rbn,ℓ(Yi)Rbn,j(Yi)f(Yi) fbn(Yi)−E Rbn,ℓ(Y)Rbn,j(Y)f(Y) fbn(Y) + op(1). Proof. NONPARAMETRIC ESTIMATION OF SIR BY THE k-NN KERNEL METHOD 21 (i). We have: 1√nnX i=1n I(2) ℓj(Yi)−Eh I(2) ℓj(Y)io =1√nnX i=1gℓ(Yi)bgj,n(Yi) f2 bn(Yi)+gj(Yi)bgℓ,n(Yi) f2 bn(Yi)−E" gℓ(Y)bgj,n(Y) f2 bn(Y)+gj(Y)bgℓ,n(Y) f2 bn(Y)# . 
Since gℓ=g1,ℓ−g2,ℓandbgℓ,n=bg1,ℓ,n−bg2,ℓ,n, where bg1,ℓ,nandbg2,ℓ,nare defined in (5.20) and (5.21), it follows 1√nnX i=1n I(2) ℓj(Yi)−Eh I(2) ℓj(Y)io =1√nnX i=1( g1,ℓ(Yi) f2 bn(Yi)bg1,j,n(Yi) +g1,j(Yi) f2 bn(Yi)bg1,ℓ,n(Yi)−E" g1,ℓ(Y) f2 bn(Y)bg1,j,n(Y) +g1,j(Y) f2 bn(Y)bg1,ℓ,n(Y)#) −1√nnX i=1( g1,ℓ(Yi) f2 bn(Yi)bg2,j,n(Yi) +g1,j(Yi) f2 bn(Yi)bg2,ℓ,n(Yi)−E" g1,ℓ(Y) f2 bn(Y)bg2,j,n(Y) +g1,j(Y) f2 bn(Y)bg2,ℓ,n(Y)#) −1√nnX i=1( g2,ℓ(Yi) f2 bn(Yi)bg1,j,n(Yi) +g2,j(Yi) f2 bn(Yi)bg1,ℓ,n(Yi)−E" g2,ℓ(Y) f2 bn(Y)bg1,j,n(Y) +g2,j(Y) f2 bn(Y)bg1,ℓ,n(Y)#) +1√nnX i=1( g2,ℓ(Yi) f2 bn(Yi)bg2,j,n(Yi) +g2,j(Yi)
f2 bn(Yi)bg2,ℓ,n(Yi)−E" g2,ℓ(Y) f2 bn(Y)bg2,j,n(Y) +g2,j(Y) f2 bn(Y)bg2,ℓ,n(Y)#) . Then, Lemma 5.5 yields the result. (ii). Putting Zℓ,j,n=2√nnX i=1 Rbn,ℓ(Yi)Rbn,j(Yi)f(Yi) fbn(Yi)−E Rbn,ℓ(Y)Rbn,j(Y)f(Y) fbn(Y) , we have: E1√nnX i=1n I(3) ℓj(Yi)−Eh I(3) ℓj(Y)io − Zℓ,j,n2 =E2√nnX i=1 Rbn,ℓ(Yi)Rbn,j(Yi)bfn(Yi)−f(Yi) fbn(Yi)−E" Rbn,ℓ(Y)Rbn,j(Y)bfn(Y)−f(Y) fbn(Y)#2 ⩽4E  Rbn,ℓ(Y)Rbn,j(Y)bfn(Y)−f(Y) fbn(Y)!2 . Using Theorem 1 of [1] together with the inequalities fbn⩾bnandR2 bn,ℓ=R2 ℓε2 bn⩽R2 ℓ, we obtain almost surely E1√nnX i=1n I(3) ℓj(Yi)−Eh I(3) ℓj(Y)io − Zℓ,j,n2 ⩽4b−2 nτ2 nE R2 ℓ(Y)R2 j(Y) , where τn=k4 n n4+√ nlog(n) kn. From (5.5), we have τn∼n1/2−c1log1/2(n) and, therefore, b−2 nτ2 n∼ n2(1/2−c1+c2)log(n). Since c1−c2>3/4>1/2, it follows that b−2 nτ2 n→0 as n→ ∞ and, 22 L. BENGONO MINTOGO, E.D.D. NKOU, AND G.M. NKIET consequently, that E1√nnX i=1n I(3) ℓj(Yi)−Eh I(3) ℓj(Y)io − Zℓ,j,n2 = o(1) . Then, using Bienaym´ e-Chebyshev inequality we deduce that 1√nnX i=1n I(3) ℓj(Yi)−Eh I(3) ℓj(Y)io − Zℓ,j,n= op(1), and the proof of ( ii) is complete. □ Lemma 5.7. Under the assumptions 3.5 and 3.7 to 3.10, we have for (ℓ, j)∈ {1, . . . , d }2: √nERbn,ℓ(Y)bgj,n(Y) fbn(Y) =√nE[Rℓ(Y)Rj(Y)] + o(1) . Proof. Clearly Rbn,ℓ(Y)bgj,n(Y) fbn(Y) =gℓ(Y)bgj,n(Y) f2 bn(Y)=(g1,ℓ(Y)−g2,ℓ(Y)) (bg1,j,n(Y)−bg2,j,n(Y)) f2 bn(Y) =g1,ℓ(Y)bg1,j,n(Y) f2 bn(Y)−g1,ℓ(Y)bg2,j,n(Y) f2 bn(Y)−g2,ℓ(Y)bg1,j,n(Y) f2 bn(Y)+g2,ℓ(Y)bg2,j,n(Y) f2 bn(Y). (5.24) Usingbg− 1,ℓ,n(y)⩽bg1,ℓ,n(y)⩽bg+ 1,ℓ,n(y) (see [1], p. 14), we obtain √nE" g1,ℓ(Y)bg− 1,ℓ,n(Y) f2 bn(Y)# ⩽√nE" g1,ℓ(Y)bg1,ℓ,n(Y) f2 bn(Y)# ⩽√nE" g1,ℓ(Y)bg+ 1,ℓ,n(Y) f2 bn(Y)# ; then, (5.10) and (5.14) lead to o(1)⩽√nE" g1,ℓ(Y)bg1,ℓ,n(Y) f2 bn(Y)# −√nE[R1,ℓ(Y)R1,j(Y)]⩽o(1), what allows to conclude that √nE" g1,ℓ(Y)bg1,ℓ,n(Y) f2 bn(Y)# =√nE[R1,ℓ(Y)R1,j(Y)] + o(1) . 
From a similar reasoning, we also obtain √nE" g1,ℓ(Y)bg2,j,n(Y) f2 bn(Y)# =√nE[R1,ℓ(Y)R2,j(Y)] + o(1) , √nE" g2,ℓ(Y)bg1,j,n(Y) f2 bn(Y)# =√nE[R2,ℓ(Y)R1,j(Y)] + o(1) , and √nE" g2,ℓ(Y)bg2,j,n(Y) f2 bn(Y)# =√nE[R2,ℓ(Y)R2,j(Y)] + o(1) . Then, (5.24) and the decomposition Rℓ(Y)Rj(Y) =R1,ℓ(Y)R1,j(Y)−R1,ℓ(Y)R2,j(Y)−R2,ℓ(Y)R1,j(Y)+R2,ℓ(Y)R2,j(Y) (5.25) NONPARAMETRIC ESTIMATION OF SIR BY THE k-NN KERNEL METHOD 23 yield the result. □ Let us put bf1,n(y) =1 nD+n(y)nX i=1KYi−y D−n(y) (5.26) and bf2,n(y) =1 nD−n(y)nX i=1KYi−y D+n(y) , (5.27) where D− nandD+ nare defined in (5.6). Then, we have: Lemma 5.8. Under the assumptions 3.4, 3.6 and 3.7, we have for (ℓ, j)∈ {1, . . . , d }2,(m, r, s )∈ {1,2}3and any C >0: √nE" Rm,ℓ(Y)Rs,j(Y)bfr,n(Y) f(Y)1{f(Y)⩾bn+Cτn}# =√nE[Rm,ℓ(Y)Rs,j(Y)] + o(1) . Proof. We have √nE" Rm,ℓ(Y)Rs,j(Y)bf1,n(Y) f(Y)1{f(Y)⩾bn+Cτn}# =√nβn knnX i=1E Rm,ℓ(Y)Rs,j(Y)Knf(Y)(Yi−Y) kn√βn 1{f(Y)⩾bn+Cτn} =n3/2√βn knE Rm,ℓ(Y)Rs,j(Y)Knf(Y)(Y1−Y) kn√βn 1{f(Y)⩾bn+Cτn} =n3/2√βn knZ Z {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)Knf(y)(z−y) kn√βn f(y)f(z)dy dz. Since Z Knf(y)(z−y) kn√βn f(z)dz=kn√βn nf(y)Z f y+kn√βn nf(y)t K(t)dt, it follows √nE" Rm,ℓ(Y)Rs,j(Y)bf1,n(Y) f(Y)1{f(Y)⩾bn+Cτn}# =βn√nZ Z {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)f y+kn√βn nf(y)t K(t)dy dt =βn√nZ Z {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y) f y+kn√βn nf(y)t −f(y) K(t)dy dt +βn√nZ {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)f(y)dy. (5.28) However, βn√nZ {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)f(y)dy =βn√nE Rm,ℓ(Y)Rs,j(Y)1{f(Y)⩾bn+Cτn} =βn√nE[Rm,ℓ(Y)Rs,j(Y)]−βn√nE Rm,ℓ(Y)Rs,j(Y)1{f(Y)<bn+Cτn} , 24 L. BENGONO MINTOGO, E.D.D. NKOU, AND G.M. NKIET so that βn√nZ {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)f(y)dy−√nE[Rm,ℓ(Y)Rs,j(Y)] =−(1−βn)√nE[Rm,ℓ(Y)Rs,j(Y)]−βn√nE Rm,ℓ(Y)Rs,j(Y)1{f(Y)<bn+Cτn} . Since (1 −βn)√n∼n−7/2,βn→1 asn→ ∞ andbn+Cτn∼bnbecause b−1 nτn∼n1/2−c1+c2→0 asn→ ∞ , it follows from the above equality and Assumption 3.7 that βn√nZ {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)f(y)dy=√nE[Rm,ℓ(Y)Rs,j(Y)] + o(1) . 
(5.29) Moreover, an use of Taylor’s expansion up to order 3 leads to βn√nZ Z {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y) f y+kn√βn nf(y)t −f(y) K(t)dy dt =βn√n2X q=1Z {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)f(q)(y) q!kq n(√βn)q nqfq(y)Z tqK(t)dt dy +βn√nZ {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)k3 n(√βn)3 6n3f3(y)Z t3f(3) y+θkn√βn nf(y)t
K(t)dt dy, where 0 < θ < 1. Since Kis of order 3, it follows βn√nZ Z {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y) f y+kn√βn nf(y)t −f(y) K(t)dy dt =βn√nZ {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)k3 n(√βn)3 6n3f3(y)Z t3 f(3) y+θkn√βn nf(y)t −f(3)(y) K(t)dt dy. Thus βn√nZ Z {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y) f y+kn√βn nf(y)t −f(y) K(t)dy dt ⩽βn√nZ {f(y)⩾bn+Cτn}|Rm,ℓ(y)Rs,j(y)|k3 n(√βn)3 6n3f3(y)Z |t|3 f(3) y+θkn√βn nf(y)t −f(3)(y) |K(t)|dt dy ⩽β3 nn−7/2k4 n 6Z {f(y)⩾bn+Cτn}|Rm,ℓ(y)Rs,j(y)| f4(y)dy×Z t4|K(t)|dt ⩽β3 nn−7/2k4 nb−1 n 6Z {f(y)⩾bn+Cτn}|Rm,ℓ(y)Rs,j(y)| f3(y)dy×Z t4|K(t)|dt ⩽β3 nn−7/2k4 nb−1 n 6c4 0Z |Rm,ℓ(y)Rs,j(y)|f(y)dy×Z t4|K(t)|dt ⩽n−7/2k4 nb−1 n 6c4 0E |Rm,ℓ(Y)Rs,j(Y)|Z t4|K(t)|dt. Since n−7/2k4 nb−1 n∼n−7/2+4c1+c2=n−4(7/8−c1−c2/4)→0 asn→ ∞ , it follows from the preceding inequality that βn√nZ Z {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y) f y+kn√βn nf(y)t −f(y) K(t)dy dt = o(1) .(5.30) NONPARAMETRIC ESTIMATION OF SIR BY THE k-NN KERNEL METHOD 25 Then, from (5.28), (5.29) and (5.30), we deduce that √nE" Rm,ℓ(Y)Rs,j(Y)bf1,n(Y) f(Y)1{f(Y)⩾bn+Cτn}# =√nE[Rm,ℓ(Y)Rs,j(Y)] + o(1) . Sincebf2,nis obtained from bf1,nby replacing βnby 1/βn, we obtain the case of r= 2 from a similar reasoning. It leads to √nE" Rm,ℓ(Y)Rs,j(Y)bf2,n(Y) f(Y)1{f(Y)⩾bn+Cτn}# =√n βnZ Z {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y) f y+kn n√βnf(y)t −f(y) K(t)dy dt +√n βnZ {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)f(y)dy, with √n βnZ Z {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y) f y+kn n√βnf(y)t −f(y) K(t)dy dt ⩽n−7/2k4 nb−1 n 6β3nc4 0Z |Rm,ℓ(y)Rs,j(y)|f(y)dy×Z t4|K(t)|dt ⩽3n−7/2k4 nb−1 n 12c4 0E |Rm,ℓ(Y)Rs,j(Y)|Z t4|K(t)|dt, fornlarge enough, and √n βnZ {f(y)⩾bn+Cτn}Rm,ℓ(y)Rs,j(y)f(y)dy−√nE[Rm,ℓ(Y)Rs,j(Y)] =(1−βn)√n βnE[Rm,ℓ(Y)Rs,j(Y)]−1 βn√nE Rm,ℓ(Y)Rs,j(Y)1{f(Y)<bn+Cτn} = o(1) . Hence √nE" Rm,ℓ(Y)Rs,j(Y)bf2,n(Y) f(Y)1{f(Y)⩾bn+Cτn}# =√nE[Rm,ℓ(Y)Rs,j(Y)] + o(1) . □ Lemma 5.9. Under the assumptions 3.4, 3.7, 3.9 and 3.10, we have for (ℓ, j)∈ {1, . . . , d }2: √nE" Rbn,ℓ(Y)Rbn,j(Y)bfbn(Y) fbn(Y)# =√nE[Rℓ(Y)Rj(Y)] + o(1) . Proof. Arguing as in [20] (see Eq. (4.21), p. 
1066), we have bfbn(Y) =J1,n(Y) +J2,n(Y) +J3,n(Y) +J4,n(Y) +J5,n(Y), 26 L. BENGONO MINTOGO, E.D.D. NKOU, AND G.M. NKIET where J1,n(Y) =bfn(Y)1{f(Y)⩾bn+Cτn}, J2,n(Y) =f(Y)n 1{bfn(Y)⩾bn}−1{f(Y)⩾bn+Cτn}o , J3,n(Y) = bfn(Y)−f(Y)n 1{bfn(Y)⩾bn}−1{f(Y)⩾bn+Cτn}o , J4,n(Y) =bn1{f(Y)<bn−Cτn}, J5,n(Y) =bnn 1{bfn(Y)<bn}−1{f(Y)<bn+Cτn}o , Cis a given positive constant and τn=k4 n n4+√ nlog(n) kn. Then, it is enough to show that √nERbn,ℓ(Y)Rbn,j(Y)J1,n(Y) fbn(Y) =√nE[Rℓ(Y)Rj(Y)] + o(1) (5.31) and √nERbn,ℓ(Y)Rbn,j(Y)Jm,n(Y) fbn(Y) = o(1) , m= 2, . . . , 5. (5.32) Proof of (5.31): Clearly, √nERbn,ℓ(Y)Rbn,j(Y)J1,n(Y) fbn(Y) =√nE" Rℓ(Y)Rj(Y)bfn(Y) f(Y)1{f(Y)⩾bn+Cτn}# =√nE" (R1,ℓ(Y)−R2,ℓ(Y))(R1,j(Y)−R2,j(Y))bfn(Y) f(Y)1{f(Y)⩾bn+Cτn}# =ζ1,1,n−ζ1,2,n−ζ2,1,n+ζ2,2,n, (5.33) where ζm,s,n =√nE" Rm,ℓ(Y)Rs,j(Y)bfn(Y) f(Y)1{f(Y)⩾bn+Cτn}# . Since, from [1] (see p. 12), bf1,n⩽bfn⩽bf2,n, where bf1,nandbf2,nare defined in (5.26) and (5.27), we deduce that √nE" Rm,ℓ(Y)Rs,j(Y)bf1,n(Y) f(Y)1{f(Y)⩾bn+Cτn}# ⩽ζm,s,n⩽√nE" Rm,ℓ(Y)Rs,j(Y)bf2,n(Y) f(Y)1{f(Y)⩾bn+Cτn}# , and from Lemma 5.8 that o(1)⩽ζm,s,n−√nE[Rm,ℓ(Y)Rs,j(Y)]⩽o(1), what implies that ζm,s,n =√nE[Rm,ℓ(Y)Rs,j(Y)] + o(1) . (5.34) Then, from (5.33), (5.34) and (5.25), we obtain (5.31). Proof of (5.32): Arguing as in [20] (see pp. 1066–1067), and using the fact that bn+Cτn∼bn since b−1 nτn→0 asn→ ∞ , we get the inequalities √nERbn,ℓ(Y)Rbn,j(Y)J2,n(Y) fbn(Y) ⩽√nE |Rℓ(Y)Rj(Y)|1{f(Y)<bn+Cτn} = o(1) , NONPARAMETRIC ESTIMATION OF SIR BY THE k-NN KERNEL METHOD 27 √nERbn,ℓ(Y)Rbn,j(Y)J3,n(Y) fbn(Y) ⩽Cτnb−1 n√nE |Rℓ(Y)Rj(Y)|1{f(Y)<bn+Cτn} = o(1) , √nERbn,ℓ(Y)Rbn,j(Y)J4,n(Y) fbn(Y) ⩽ √nE Rℓ(Y)Rj(Y)1{f(Y)<bn−Cτn} = o(1) , √nERbn,ℓ(Y)Rbn,j(Y)J5,n(Y) fbn(Y) ⩽ √nE Rℓ(Y)Rj(Y)1{f(Y)<bn+Cτn} = o(1) , which complete the proof of (5.32). □ 5.2.Proof of Theorem 3.1. Let us denote by bλ(n) ℓjthe (ℓ, j)-th entry of the d×dmatrix bΛn. 
It is easily seen that √nbλ(n) ℓj=1√nnX i=1bgℓ,n(Yi)bgj,n(Yi) bf2 bn(Yi)=1√nnX i=1n I(1) ℓj(Yi) +I(2) ℓj(Yi)−I(3) ℓj(Yi)o −U(1) n,ℓ,j+U(2) n,ℓ,j+U(3)
n,ℓ,j−U(4) n,ℓ,j, where I(1) ℓj(y) =gℓ(y)gj(y) f2 bn(y)=Rbn,ℓ(y)Rbn,j(y), I(2) ℓjandI(3) ℓjare defined in (5.22) and (5.23), and the U(m) n,ℓ,j’s are given in (5.1), (5.2), (5.3) and (5.4). Then, from Lemma 5.1 we get √nbλ(n) ℓj=1√nnX i=1n I(1) ℓj(Yi) +I(2) ℓj(Yi)−I(3) ℓj(Yi)o + op(1), (5.35) and, putting νℓj=E I(1) ℓj(Y) +I(2) ℓj(Y)−I(3) ℓj(Y) , we have √n bλ(n) ℓj−νℓj =1√nnX i=1n I(1) ℓj(Yi)−E I(1) ℓj(Y)o +1√nnX i=1n I(2) ℓj(Yi)−E I(2) ℓj(Y)o −1√nnX i=1n I(3) ℓj(Yi)−E I(3) ℓj(Y)o + op(1). Then, from Lemma 5.6 and the three following equality obtained in Eq. (4.6), Eq. (4.14) and Eq. (4.15) of [20] : 1√nnX i=1n I(1) ℓj(Yi)−E I(1) ℓj(Y)o =1√nnX i=1n Rℓ(Yi)Rj(Yi)−E[R(Yℓ)Rj(Y)]o + op(1), 1√nnX i=1 Rbn,ℓ(Yi)Rbn,j(Yi) +1 2XiℓRbn,j(Yi)f(Yi) fbn(Yi)+1 2XijRbn,ℓ(Yi)f(Yi) fbn(Yi) −E Rbn,ℓ(Y)Rbn,j(Y) +1 2XℓRbn,j(Y)f(Y) fbn(Y)+1 2XjRbn,ℓ(Y)f(Y) fbn(Y) =1√nnX i=1n Rℓ(Yi)Rj(Yi) +1 2XiℓRj(Yi) +1 2XijRℓ(Yi)−2E[R(Yℓ)Rj(Y)]o + op(1), 28 L. BENGONO MINTOGO, E.D.D. NKOU, AND G.M. NKIET and 1√nnX i=1 Rbn,ℓ(Yi)Rbn,j(Yi)f(Yi) fbn(Yi)−E Rbn,ℓ(Y)Rbn,j(Y)f(Y) fbn(Y) =1√nnX i=1n Rℓ(Yi)Rj(Yi)−E[Rℓ(Y)Rj(Y)]o + op(1), it follows √n bλ(n) ℓj−νℓj =1√nnX i=11 2(XiℓRj(Yi) +XijRℓ(Yi))−E(Rℓ(Y)Rj(Y)) + op(1). Moreover, √n νk,ℓ=√nE Rbn,ℓ(Y)Rbn,j(Y)+Rbn,ℓ(Y)bgj,n(Y) fbn(Y)+Rbn,j(Y)bgℓ,n(Y) fbn(Y)−2Rbn,ℓ(Y)Rbn,j(Y)bfbn(Y) fbn(Y) ; then, using Lemma 5.8, Lemma 5.9 together with the equality √nE Rbn,ℓ(Y)Rbn,j(Y) =√nE[R(Yℓ)Rj(Y)] + o(1) given in Eq. (4.17) of [20], we obtain√n νℓj=√n λℓj+o(1), where λℓj=E(Rℓ(Y)Rj(Y)). Hence √n bλ(n) ℓj−λℓj =1√nnX i=11 2(XiℓRj(Yi) +XijRℓ(Yi))−E(Rℓ(Y)Rj(Y)) + op(1). Clearly, E1 2(XℓRj(Y) +XjRℓ(Y)) =E XℓRj(Y) =E Rℓ(Y)Rj(Y) , and, putting Hn=√n bΛn−Λ andH(n) ℓj=√n bλ(n) ℓj−λℓj , we have Tr A⊤Hn =dX ℓ=1dX j=1aℓjH(n) ℓj=1√nnX i=1(Ui−E(Ui)) +op(1), where Ui=dX ℓ=1dX j=1aℓj 2(XiℓRj(Yi) +XijRℓ(Yi)). From the central limit theorem and Slutsky’s theorem we deduce that Tr A⊤HnD→ N 0, σ2 A , asn→ ∞ , where σ2 Ais given in (3.3). 
Then, using Lévy's theorem, we conclude that H_n converges in distribution to H as n → ∞, where H has a normal distribution in M_d(R) with Tr(A⊤H) ∼ N(0, σ²_A).

References
[1] L. Bengono Mintogo, E.D.D. Nkou, and G.M. Nkiet, Rates of strong uniform consistency for the k-nearest neighbors kernel estimators of density and regression function, arXiv:2408.12741 (2024).
[2] E. Bura and R.D. Cook, Estimating the structural dimension of regressions via parametric inverse regression, J. R. Stat. Soc. Ser. B Stat. Methodol. 63 (2001), 393–410.
[3] G. Collomb, Estimation de la régression par la méthode des k points les plus proches avec noyau: Quelques propriétés de convergence ponctuelle, C. R. Math. Acad. Sci. Paris 289 (1979), 245–247.
[4] R.D. Cook and B. Li, Dimension reduction for conditional mean in regression, Ann. Statist. 30 (2002), 455–474.
[5] R.D. Cook and S. Weisberg, Comment on 'Sliced inverse regression for dimension reduction', J. Amer. Statist. Assoc. 86 (1991), 328–332.
[6] R. Coudret, S. Girard, and J. Saracco, A new sliced inverse regression method for multivariate response, Comput. Statist. Data Anal. 77 (2014), 285–299.
[7] J.R. Ebende Penda, E.D.D. Nkou, S. Bouka, and G.M. Nkiet, On variable selection in partially linear regression model, C. R. Math. Acad. Sci. Soc. R. Can. 46 (2024), 119–144.
[8] L. Ferré, Determining the dimension in sliced inverse regression and related methods, J. Amer. Statist. Assoc. 93 (1998), 132–140.
[9] T. Hsing
and R.J. Carroll, An asymptotic theory for sliced inverse regression, Ann. Statist. 20 (1992), 1040–1061.
[10] K.C. Li, Sliced inverse regression for dimension reduction, J. Amer. Statist. Assoc. 86 (1991), 316–342.
[11] K.C. Li, Y. Aragon, K. Shedden, and C. Thomas-Agnan, Dimension reduction for multivariate response data, J. Amer. Statist. Assoc. 98 (2003), 99–109.
[12] D.S. Moore and J.W. Yackel, Consistency properties of nearest neighbor density function estimators, Ann. Statist. 5 (1977), 143–154.
[13] G.M. Nkiet, Consistent estimation of the dimensionality in sliced inverse regression, Ann. Inst. Statist. Math. 60 (2008), 257–271.
[14] E.D.D. Nkou, Recursive kernel estimator in semiparametric regression model, J. Nonparametr. Stat. 35 (2023), 145–171.
[15] E.D.D. Nkou and G.M. Nkiet, Strong consistency of kernel estimator in a semiparametric regression model, Statistics 53 (2019), 1289–1305.
[16] E.D.D. Nkou and G.M. Nkiet, Wavelet-based estimation in a semiparametric regression model, Int. J. Wavelets Multiresolut. Inf. Process. 20 (2022), 2150056.
[17] J.L. Powell, J.H. Stock, and T.M. Stoker, Semiparametric estimation of index coefficients, Econometrica 57 (1989), 1403–1430.
[18] J.R. Schott, Determining the dimensionality in sliced inverse regression, J. Amer. Statist. Assoc. 89 (1994), 141–148.
[19] S. Velilla, Assessing the number of linear components in a general regression problem, J. Amer. Statist. Assoc. 93 (1998), 1088–1098.
[20] L.-X. Zhu and K.-T. Fang, Asymptotics for kernel estimate of sliced inverse regression, Ann. Statist. 24 (1996), 1053–1068.
[21] L. Zhu, B. Miao, and H. Peng, On sliced inverse regression with high dimensional covariates, J. Amer. Statist. Assoc. 101 (2006), 630–643.
[22] L.-X. Zhu and K.W. Ng, Asymptotics of sliced inverse regression, Statist. Sinica 5 (1995), 727–736.
[23] L.-P. Zhu and L.-X. Zhu, On kernel method for sliced average variance estimation, J. Multivariate Anal. 98 (2007), 970–991.
Laboratoire de Probabilités, Statistique et Informatique
Unité de Recherche en Mathématiques et Informatique
Université des Sciences et Techniques de Masuku
BP 943 Franceville, GABON
Email address: luranbengono@gmail.com, emmanueldedieunkou@gmail.com, guymartial.nkiet@univ-masuku.org
Foundations of Top-k Decoding For Language Models
Georgy Noarov* Soham Mallick* Tao Wang* Sunay Joshi Yan Sun Yangxinyu Xie Mengxin Yu Edgar Dobriban
University of Pennsylvania
May 27, 2025

Abstract
Top-k decoding is a widely used method for sampling from LLMs: at each token, only the largest k next-token probabilities are kept, and the next token is sampled after re-normalizing them to sum to unity. Top-k and other sampling methods are motivated by the intuition that true next-token distributions are sparse, and the noisy LLM probabilities need to be truncated. However, to our knowledge, a precise theoretical motivation for the use of top-k decoding is missing. In this work, we develop a theoretical framework that both explains and generalizes top-k decoding. We view decoding at a fixed token as the recovery of a sparse probability distribution. We consider Bregman decoders obtained by minimizing a separable Bregman divergence (for both the primal and dual cases) with a sparsity-inducing ℓ0 regularization. Despite the combinatorial nature of the objective, we show how to optimize it efficiently for a large class of divergences. We show that the optimal decoding strategies are greedy, and further that the loss function is discretely convex in k, so that binary search provably and efficiently finds the optimal k. We show that top-k decoding arises as a special case for the KL divergence, and identify new decoding strategies that have distinct behaviors (e.g., non-linearly up-weighting larger probabilities after re-normalization).

Contents
1 Introduction
1.1 A roadmap of our contributions
2 Regularized sparse Bregman decoding
2.1 Top-k decoding preliminaries
2.2 Regularized sparse Bregman decoding
3 The algorithmic structure of primal and dual Bregman decoding
3.1 Renormalization for a fixed sparsity pattern
3.2 Greedy property: Justifying top-k selection
3.3 Discrete convexity of cost function: Speeding up the search for optimal adaptive k
4 Example: Bregman α-decoding
5 Experiments
5.1 Experimental Setup
5.2 Results
6 Related work
7 Discussion
Appendices
A Existence and uniqueness of dual Bregman decoding
B Proof of the primal greedy property in Theorem 3.2
B.1 Decomposing the Bregman cost function on subsets
C Proof of the dual greedy property in Theorem 3.3
C.1 Proof under Assumption (A1)
C.1.1 Decomposition of the loss difference
C.1.2 Analysis of terms based on the dual solution
C.2 Proof under Assumption (A2)
C.2.1 Extra notation
C.2.2 Properties of the auxiliary functions
C.2.3 Proving the dual greedy property
D Proof of discrete convexity for primal Bregman projection
E Proof of discrete convexity for dual Bregman projection
F Algorithmic details
F.1 Computing the dual renormalization map
F.2 Pseudocode for algorithms
G Example: α-Bregman decoding
G.1 Proof of Lemma 4.3
G.2 Proof of Proposition 4.2
G.3 Illustrating primal and dual renormalization
G.4 Illustrating general nonconvexity of dual renormalization
G.5 Illustrating discrete convexity
G.6 The simultaneous effects of Bregman decoding and temperature scaling
H Supplementary experimental details
H.1 Compute resources
H.2 Supplementary experimental results

*Equal contribution. Correspondence to gnoarov@seas.upenn.edu, kcillam@wharton.upenn.edu, tawan@upenn.edu, dobriban@wharton.upenn.edu
arXiv:2505.19371v1 [cs.AI] 25 May 2025

1 Introduction
Large language models (LLMs) are powerful generative AI tools for producing text. When pre-trained on large text corpora and aligned according to human preferences, they can be used for a wide range of tasks. On a technical level, they are probability distributions over text: given any user text prompt x, an LLM samples an answer Y ∼ π(·|x) from a probability distribution π(·|x) over text. However, even after obtaining a pre-trained, fine-tuned, and human-preference-aligned model π, it is rare to directly sample from the model. Instead, several sampling/decoding methods are commonly used, including top-k [21] or top-p sampling [32]. These are widely used either by default or as an option in many popular LLMs, including the GPT series, Gemini, and Claude.
In addition to other decoding methods such as beam search, temperature scaling, best-of-N, etc., top-k, top-p, and related methods are known to improve performance in a broad range of settings compared to direct sampling; see, e.g., [12, 21, 32]. In this paper, we focus on decoding methods
that modify each next-token-probability distribution to induce sparsity, i.e., to keep only a small number of tokens with a nonzero probability. This includes the widely used top-k [21] and top-p [32] sampling methods, among others. These methods are motivated by the intuition that the noisy LLM probabilities need to be truncated to denoise the "unreliable tail" [32]. In particular, we focus on the popular top-k decoding method, which keeps only the largest k next-token probabilities at each decoding step. These are re-normalized, via dividing by their sum, to a probability distribution from which the next token is sampled. Despite the wide use and rich intuition behind top-k decoding, to our knowledge, a precise theoretical understanding of top-k decoding is not available; see Section 6 for a discussion of related work. In this work, we develop a theoretical framework that enables a flexible range of generalizations of top-k decoding. For a fixed token, we view decoding as recovering a sparse probability distribution. We consider denoisers obtained by minimizing a Bregman divergence (such as a KL divergence or Brier score) with a sparsity-inducing ℓ0 regularization. This approach is motivated by a rich literature on both Bregman divergences and sparsity; see Section 6 for details. Our approach leads to new decoding methods. As an example, we consider Bregman divergences generated by the α-entropies x ↦ x^α/[α(α−1)] [29, 51]. Top-k decoding arises as an instance of this class for α → 1, corresponding to the KL divergence. We also identify new decoding strategies with distinct behavior. The figure on the left shows an example of a distribution over 100 tokens, the result of top-10 decoding, and results for our Bregman-α decoding with k = 10: for α = 1/4, Bregman decoding places relatively more mass on larger probabilities, while for α = 2.5, the situation is reversed. In various applications, either behavior may be desired.
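The basic top-k rule (keep the k largest next-token probabilities, zero the rest, divide by the kept mass) can be sketched in a few lines of NumPy. This is an illustrative sketch under our own naming, not the authors' implementation; the vocabulary size and k are arbitrary choices for the demo.

```python
import numpy as np

def top_k_decode(p: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest entries of p, zero the rest, renormalize by the sum."""
    out = np.zeros_like(p)
    top = np.argsort(p)[-k:]      # indices of the k largest probabilities
    out[top] = p[top]
    return out / out.sum()        # divide by the kept probability mass

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(100))   # a random distribution over 100 "tokens"
q = top_k_decode(p, k=10)
assert np.isclose(q.sum(), 1.0)   # a valid distribution again
assert np.count_nonzero(q) == 10  # exactly k tokens survive
```

One would then sample the next token from `q` instead of `p`, e.g. via `rng.choice(len(q), p=q)`.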
1.1 A roadmap of our contributions
We start by laying the foundation for our theoretical framework, including presenting a view of decoding strategies that decomposes them into two steps: selecting a number of tokens, and re-normalizing their entries to a probability distribution (Section 2.1). We present decoding strategies obtained by sparsity-regularized Bregman divergence minimization (Section 2.2). We consider both primal and dual decoding methods, minimizing the Bregman divergence with respect to its first and second arguments, respectively, as both are widely studied in optimization and statistical learning [see e.g., 1, 10, 24, 56, etc.].
In general, ℓ0-regularization leads to combinatorial optimization problems, for which there are no known polynomial-time algorithms [11, 42]. Our main contribution is to show that, despite this, sparse Bregman decoding can be optimized efficiently for a large class of divergences. Specifically, we show two properties: (1) greedy selection (choosing some number k of the largest probabilities) is optimal (Theorems 3.2 and 3.3 in Section 3.2); and (2) the loss function is discretely convex in k, so that an efficient binary search can be used to find the optimal k* (Theorem 3.4 in Section 3.3). Showing these properties is non-trivial, and requires us to develop and combine a range of novel structural insights into the sparse Bregman objective that could be
of independent interest. As an example, we discuss α-Bregman decoding strategies, generated by Tsallis α-entropies x ↦ x^α/[α(α−1)], for which we show that primal renormalization can be solved exactly in several cases of interest and converges to water-filling as α → ∞ (Section 4). Finally, we illustrate some of the decoding schemes described in the paper on open-ended text generation and mathematical problem solving tasks with LLMs, where they perform competitively with top-k decoding (Section 5).

2 Regularized sparse Bregman decoding

2.1 Top-k decoding preliminaries
Top-k decoding. Given a probability distribution p = (p_1, ..., p_V) (where V stands for "vocabulary size"), and some 1 ≤ k ≤ V, top-k decoding first selects the indices S_k = (i_1, ..., i_k) of the k largest probabilities, breaking ties arbitrarily. Setting all other coordinates to zero in p, one obtains the vector p[1:k] of the k largest entries. Then, it re-normalizes this vector by dividing it by its sum. Letting (p_(1), p_(2), ..., p_(k)) = (p_{i_1}, ..., p_{i_k}) be the k largest entries of p,

top-k(p) = p[1:k] / Σ_{j=1}^k p_(j).   (1)

One then draws a sample from the distribution top-k(p).
Decoding strategies. Next, we aim to generalize top-k decoding. We will refer to any operator Dec on probability distributions as a decoding strategy; formally Dec: Δ_V → Δ_V, where Δ_V = {x ∈ [0,1]^V : Σ_{i=1}^V x_i = 1} is the simplex of V-dimensional probability distributions. Observe that top-k decoding consists of two steps: selecting the largest coordinates and re-normalizing them. The second step can be viewed as "re-distributing" the probability mass that has been thresholded away by selection among the remaining indices. This step can be performed in many other meaningful ways besides division by the sum. For instance, we may put a larger weight on the larger remaining probabilities, if we consider them more reliable.
Renormalization.
Motivated by this, we define the notion of a renormalization mapping, which takes as input a thresholded probability vector with k nonzero entries remaining. We consider renormalization maps that are permutation-equivariant, i.e., when their input is permuted, their output is permuted accordingly, which clearly holds for the sum-division used in top-k. Since the sum of probabilities after selection can be less than unity, we can therefore define them as maps from the sub-probability simplex Δ_sub,k = {x ∈ [0,1]^k : Σ_{i=1}^k x_i ≤ 1} to the simplex Δ_k.
Definition 2.1 (Renormalization). For a positive integer k, we call a permutation-equivariant map T: Δ_sub,k → Δ_k a renormalization map.
A renormalization map can be extended to the full simplex Δ_V, by applying it only on the nonzero coordinates.¹ We can now define generalized top-k decoding as re-normalizing the top-k entries via a general renormalization map.
Definition 2.2 (Generalized top-k decoding). For a fixed k, a generalized top-k decoding strategy Dec_{k,T}: Δ_V → Δ_V, parameterized by the choice of k and renormalization map T, takes as input any V-class probability vector p, thresholds it to the sub-vector p[1:k] consisting of its top-k elements, and renormalizes it to T(p[1:k]) ∈ Δ_V.
Adaptivity. A natural extension is to choose k adaptively based on p. For this, we consider a k-selector map k̂: Δ_V → [V] := {1, ..., V},
and a collection of renormalization maps T_k: Δ_sub,k → Δ_k, k = 1, ..., V. We define an adaptive generalized top-k decoding strategy Dec_T: Δ_V → Δ_V via p ↦ T_{k̂(p)}(p[1:k̂(p)]). Below, we will design specific renormalizers T and ways to choose k.
¹Formally, for a vector p ∈ R^V and S ⊂ [V], let p_S be the restriction of p to the coordinates in S. Given a vector p ∈ Δ_V with p_j = 0 for all j outside a set S, a renormalization map T(p) can be extended to Δ_V by embedding it into the original coordinates: [T(p)]_j = [T(p_S)]_j for j ∈ S, and [T(p)]_j = 0 otherwise.

2.2 Regularized sparse Bregman decoding
Decoding via sparse divergence minimization. Consider a divergence Div(·,·): Δ_V × Δ_V → R between two distributions. Classical examples include the squared error Div(p, q) = ‖p − q‖²_2 and the KL divergence Div(p, q) = Σ_{j=1}^V p_j ln(p_j/q_j). We define the decoding strategy Dec_Div, via sparsity-regularized divergence minimization² under divergence Div, for any probability vector p as:

Dec_Div(p) ∈ argmin_{p̂ ∈ Δ_V} { Div(p̂, p) + λ‖p̂‖_0 }   (sparsity-regularized decoding).   (2)

Here, the ℓ0-pseudonorm ‖p̂‖_0 is the number of nonzero entries of p̂, and λ ≥ 0 is a sparsity cost hyperparameter. As λ increases, the optimal solution p̂ = p* gets increasingly more sparse.
Separable Bregman divergences. In this work, we shall instantiate Div in Problem (2) with separable Bregman divergences [1, 10]. We will see that this class is expressive enough to induce top-k decoding and many fruitful generalizations of it. For a convex domain Dom ⊆ R and a convex differentiable function φ: Dom → R, the one-dimensional Bregman φ-divergence d_φ is defined as:

d_φ(x, y) = φ(x) − φ(y) − φ′(y)(x − y),   for x, y ∈ Dom.

The separable V-dimensional Bregman φ-divergence D_φ: Dom^V → R is then defined as:

D_φ(x, y) = Σ_{i ∈ [V]} d_φ(x_i, y_i),   for x = (x_1, ..., x_V), y = (y_1, ..., y_V) ∈ Dom^V.

A well-known property of Bregman divergences is that D_φ(x, y) ≥ 0 for all x, y, with equality if x = y; when φ is strictly convex, x = y in fact becomes the unique minimum.
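Separable Bregman divergences are easy to evaluate numerically. The sketch below is our own illustration (function names and test vectors are assumptions, not from the paper): it instantiates the generator φ(t) = t ln t, which recovers the KL divergence between distributions, and φ(t) = t², which recovers the squared error.

```python
import numpy as np

def bregman(phi, dphi, x, y):
    """Separable Bregman divergence D_phi(x, y) = sum_i d_phi(x_i, y_i)."""
    return np.sum(phi(x) - phi(y) - dphi(y) * (x - y))

# phi(t) = t ln t generates the KL divergence (for vectors summing to one);
# phi(t) = t^2 generates the squared error.
phi_kl, dphi_kl = lambda t: t * np.log(t), lambda t: np.log(t) + 1.0
phi_sq, dphi_sq = lambda t: t ** 2,        lambda t: 2.0 * t

x = np.array([0.7, 0.2, 0.1])
y = np.array([0.5, 0.3, 0.2])

d_kl = bregman(phi_kl, dphi_kl, x, y)
d_sq = bregman(phi_sq, dphi_sq, x, y)
assert np.isclose(d_kl, np.sum(x * np.log(x / y)))  # matches KL on the simplex
assert np.isclose(d_sq, np.sum((x - y) ** 2))       # matches squared error
assert bregman(phi_kl, dphi_kl, x, x) == 0.0        # D_phi(x, x) = 0
```

The nonnegativity property stated above holds here as well: both `d_kl` and `d_sq` are strictly positive since x ≠ y.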
Figure 1: Illustration of the landscape of the sparse Bregman objective for the primal (left) and dual (right) cases. We choose a V= 3dimensional example where the target vector is p= (0.1,0.01,0.001)/0.111. We show an α-Bregman divergence (see Section 4) with α= 10 andλ= 0.01. Primal and dual Bregman decoding. Since Bregman divergences are generally non-symmetric in their arguments, we may instantiate the sparse Bregman decoding Problem 2 in two substantially distinct ways: by placing the estimand ˆpin the first ( primal ) or second ( dual) argument: Div(ˆp, p) := D ϕ(ˆp, p)(primal decoding) , Div(ˆp, p) := D ϕ(p,ˆp)(dual decoding) . (3) Both formulations possess a sound theoretical motivation. Bregman projections are commonly defined as minimization in the first argument, while Bregman-based proper scoring rules for mean elicitation correspond to minimization in the second argument [see e.g., 24, 39, etc]. The landscapes of primal and dual decoding are illustrated in Figure 1. The dual objective can be non-convex even in the interior of the simplex. However, crucially, the objectives are discontinuous at the edges of the simplex due to theℓ0penalty. While in general these decoding objectives could be combinatorial problems that may be hard to solve, we will show in Section 3 that for separable Bregman divergences, both the primal and dual problems can be solved efficiently. 2In our examples of interest, we will
show that this optimization problem is well-defined. When there are multiple minimizers, we assume that one is selected in an arbitrary measurable way.

In both the primal and the dual Bregman case, when λ = 0, the corresponding sparse decoding Problem (2) is solved at p̂ = p (and uniquely so if ϕ is strictly convex), with the intuition that, absent sparsity requirements, the best guess is to preserve the original distribution p. Henceforth, we will focus on the sparse regime λ > 0, which forces some entries of p̂ to be zeroed out at optimality. Our main results in Section 3 establish, for both primal and dual decoding, that under mild technical requirements on D_ϕ, the optimal sparsity pattern in fact zeroes out all but the top k* coordinates of p, for an optimal k = k*(p), thus leading to a principled and broad generalization of top-k decoding.

3 The algorithmic structure of primal and dual Bregman decoding

We now proceed to investigate the properties of primal and dual Bregman decoding. Our goal is to show that under mild technical assumptions on the divergence D_ϕ, both decoding strategies result in adaptive generalized top-k decoding in the sense of Definition 2.2. Explicitly, in Section 3.2 we will demonstrate, for any p ∈ ∆_V, that out of the (a priori) 2^V possible sparsity patterns S ⊆ [V], the optimal one must consist of the top-k entries of p for some k ∈ [V]. Next, in Section 3.3 we will establish that finding the optimal k* = k*(p) is in fact a (discretely) convex optimization problem in k ∈ [V], which critically enables both strategies to achieve O(V log V) computational complexity under oracle invocations of arbitrary monotone scalar root finding. Without this convex structure, the oracle complexity could rise to Ω(V²), which would be prohibitive in language-model settings, where vocabulary sizes upwards of V ∼ 10⁵ are common.

3.1 Renormalization for a fixed sparsity pattern

We first investigate the renormalization component of a Bregman decoding strategy.
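Before turning to Bregman renormalizers, it may help to fix ideas with the canonical case of Definition 2.2: thresholding to the top-k entries followed by sum-division renormalization. A minimal sketch in Python (the helper name dec_topk is ours, purely for illustration):

```python
def dec_topk(p, k):
    """Generalized top-k decoding (Definition 2.2) with the canonical
    sum-division renormalizer T(x) = x / sum(x)."""
    V = len(p)
    # Indices of the top-k probabilities (ties broken by index order).
    top = sorted(range(V), key=lambda i: p[i], reverse=True)[:k]
    total = sum(p[i] for i in top)  # may be < 1: the sub-vector lies in the sub-simplex
    out = [0.0] * V
    for i in top:
        out[i] = p[i] / total  # renormalize back onto the simplex
    return out
```

The output always sums to one regardless of how much mass the thresholding removed, and for k = V the map leaves p unchanged.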
Once the optimal sparsity pattern S ⊆ [V] (of some size |S| = k) has been identified, the vector x (the sub-vector of p restricted to the indices in S) needs to be projected onto the simplex ∆_k. Since the ℓ₀ regularization term is then fixed at λk, Problem (2) becomes equivalent to argmin_{p̂ ∈ ∆_k} Div(p̂, x). This is a k-dimensional Bregman projection problem onto the simplex (without sparsity regularization).

Primal renormalization. We impose the following mild condition on the Bregman generator ϕ.

Assumption 3.1 (Primal validity). The map ϕ is convex and continuously differentiable on [0, 1] as well as strictly convex on (0, 1).

Existing results [33, 34] then imply that for a primal valid potential ϕ, denoting f = ϕ′ (and extending its inverse f⁻¹ so that f⁻¹(x) = 0 for x < f(0) and f⁻¹(x) = 1 for x > f(1), making it continuous and non-decreasing on all of R), the primal renormalization map T_ϕ is given for x ∈ ∆_sub,k by:

[T_ϕ(x)]_i = f⁻¹(f(x_i) + ν) for all i ∈ [k],   where ν ∈ R is chosen so that Σ_{i=1}^k [T_ϕ(x)]_i = 1.   (4)

Since ν ↦ f⁻¹(f(x_i) + ν) is non-decreasing³ in ν, the solution can be found efficiently using off-the-shelf root-finding algorithms such as Brent's method.

Dual renormalization. While primal projections are well studied in prior work [33, 34], we are not aware of a direct derivation of dual Bregman projections. Indeed, Bregman divergences are convex in the first [3] but generally not
the second argument, which can interfere with the uniqueness of dual projections. To pave the road towards dual Bregman projections, we will therefore rely on additional structure in ϕ and d_ϕ, expressed as the following dual validity condition.

³ It is strictly increasing for ν ∈ [−f(x_i), 1 − f(x_i)], but the required ν may lie outside this range.

Figure 2: Comparison of primal (left) and dual (right) Bregman α-renormalization maps (see Section 4) on the input vector x = (0.67 / Σ_{i=1}^k (i/k)) · (1, (k−1)/k, . . . , 1/k) ∈ ∆_sub,k with k = 100. We plot the renormalized values against the original coordinate values of x.

Assumption 3.2 (Dual validity). The map ϕ is thrice differentiable on (0, 1] with lim_{x→0⁺} x ϕ″(x) = 0. For x ∈ (0, 1], y ↦ d_ϕ(x, y) is strictly convex for y ∈ [x, 1], and y ↦ d_ϕ(0, y) is strictly convex for y ∈ (0, 1].

We establish in Theorem A.1 (see Appendix A) that subject to dual validity, the dual renormalization map T*_ϕ is uniquely defined for any x ∈ ∆_sub,k with x ≠ 0_k by the following implicit equations:

[T*_ϕ(x)]_i = x_i + ν*/f′([T*_ϕ(x)]_i) for i ∈ [k],   with ν* ∈ R chosen so that Σ_{i=1}^k [T*_ϕ(x)]_i = 1.   (5)

Assumption 3.2, short of requiring global convexity of d_ϕ(x, ·) on [0, 1], only enforces it for y ∈ [x, 1]. To enable this relaxation, the proof of Theorem A.1 carefully excludes optimal solutions belonging to the region y ≤ x or to the simplex boundary. Rather than a mere curiosity, this refinement substantially expands the scope of dual decoding. In particular, in our later specialization, it is essential for ensuring that dual α-decoding is uniquely defined for all α > 1, not just α ∈ (1, 2]: as the plots in Appendix G.4 demonstrate, α-Bregman divergences are nonconvex for y ≤ x when α > 2. See Section F for algorithmic details on computing the dual map, as well as pseudocode for our algorithms. Figure 2 illustrates the primal and dual renormalization maps for α-Bregman divergences (introduced in Section 4). In this concrete example, T_ϕ and T*_ϕ appear similar; however, for different, e.g.
more "peaked", inputs x ∈ ∆_sub,k, they are more distinct, as we illustrate in Appendix G.3.

3.2 Greedy property: Justifying top-k selection

The viewpoint that lower-probability tokens can be considered noisy [32] suggests that it would be natural, and indeed desirable, for a decoding strategy to be "greedy", dictating that it is optimal to renormalize over the top-k-probability tokens for some k ∈ [V]. We formalize this as follows.

Definition 3.1 (Greedy decoding). A decoding strategy Dec: ∆_V → ∆_V is called greedy if for every p ∈ ∆_V, the set of nonzero entries of Dec(p) is a set of top-k̂ entries of p, for some k̂ = k̂(p).

While many popular decoding methods are greedy [12, 21, 32, 38], some are not [22, 36]; justifications for non-greediness, i.e., the ability to occasionally throw out some of the top-k tokens, include that this can, e.g., help generate more "typical" text. As such, our assertion that the primal and dual Bregman decoding strategies are greedy is nontrivial and requires proof. First, we state our result for primal Bregman decoding.

Theorem 3.2 (Primal Bregman decoding is greedy). The primal Bregman decoding strategy from (2) is greedy for any primal valid potential ϕ.

The proof is provided in Appendix B. It proceeds by decomposing the Bregman objective into several terms, see Lemma B.2, and bounding them with the help of the primal renormalization equations (4).
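The greedy property can be sanity-checked by brute force on a tiny example. For the Shannon generator ϕ₁(x) = x log x (the α = 1 case of Section 4), the primal Bregman projection onto the simplex for a fixed support is plain sum-division, so the objective (2) can be evaluated exactly over every support. A sketch (our own illustration, not code from the paper):

```python
import math
from itertools import combinations

def d_kl(x, y):
    # One-dimensional Bregman divergence for phi(t) = t*log(t):
    # d(x, y) = x*log(x/y) - x + y, with the convention 0*log(0) = 0.
    return (x * math.log(x / y) if x > 0 else 0.0) - x + y

def primal_cost(p, support, lam):
    s = sum(p[i] for i in support)
    # The KL projection of p onto the simplex restricted to `support`
    # is plain sum-division renormalization.
    p_hat = [p[i] / s if i in support else 0.0 for i in range(len(p))]
    return sum(d_kl(a, b) for a, b in zip(p_hat, p)) + lam * len(support)

p, lam = [0.5, 0.3, 0.15, 0.05], 0.1
best = min(
    (frozenset(S) for r in range(1, len(p) + 1)
     for S in combinations(range(len(p)), r)),
    key=lambda S: primal_cost(p, S, lam),
)
print(sorted(best))  # -> [0, 1, 2]
```

For this p and λ, the minimizing support over all 2^V − 1 candidates is the top-3 set {0, 1, 2}, in line with Theorem 3.2.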
The dual case, owing inter alia to the implicit form of the dual renormalization formulas (5), is correspondingly more complex to handle. Unlike in Theorem 3.2, our next result requires further conditions, which we state as a menu of two options. The relationship between the extra assumptions is intricate; Assumption (A2) is implied by, but is strictly weaker than, log-convexity of ϕ′.

Theorem 3.3 (Dual Bregman decoding is greedy). The dual Bregman decoding strategy from (2) is greedy for any dual-valid ϕ with ϕ′(0) = 0 that further satisfies either of the following conditions:

(A1) ϕ′ is convex;

(A2) The maps⁴ u, defined as u(x) := x ϕ″(x)/ϕ′(x) for x ∈ (0, 1], and ϕ are nondecreasing.

The proof is provided in Appendix C. In it, we use two different proof techniques for the two conditions: For Condition (A1), our proof in Appendix C.1 leverages the decomposition from the primal case along with the change of variables d_ϕ(x, y) = d_{ϕ*}(ϕ′(y), ϕ′(x)), where ϕ* is the convex conjugate of ϕ. For Condition (A2), we develop a saddle-point proof approach in Appendix C.2. For that, we perform a sensitivity analysis of both the renormalized values [T*_ϕ(p)]_i and of the per-coordinate Bregman loss terms, relative to hypothetical changes in the dual Lagrange multiplier ν* and in the entries p_i of p; we carry this out via implicit differentiation of the defining equations (5).

3.3 Discrete convexity of the cost function: Speeding up the search for the optimal adaptive k

Next, we show that when restricted to the greedy (top-k) selection, the primal and dual decoding objectives both enjoy discrete convexity with respect to the sparsity parameter k. First, for a general divergence Div, denote the ℓ₀-regularized cost of each greedy (top-k) choice by cost(k):

cost(k) := min_{p̂ ∈ ∆_k} { Div((p̂, 0_{V−k}), p) + λk }.   (6)

Recall that a function h: [V] → R is discretely convex if for all k ∈ [V−1] \ {1}, its discrete second derivative

∆²h(k) := ∆h(k+1) − ∆h(k) = {h(k+1) − h(k)} − {h(k) − h(k−1)}

is nonnegative.
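Discrete convexity is what makes a logarithmic-time search over k possible: the increments cost(k+1) − cost(k) are then nondecreasing, so the first k with a nonnegative increment minimizes cost. A minimal sketch (our own illustration; it uses the closed-form primal KL cost for p sorted in decreasing order, cost(k) = −ln(Σ_{i⩽k} p_i) + λk, which one can check is the α = 1 case discussed in Section 4):

```python
import math

def cost(p_sorted, k, lam):
    # ell_0-regularized top-k cost (6) in the primal KL case, where the
    # inner minimization has the closed form -ln(sum of top-k probs).
    return -math.log(sum(p_sorted[:k])) + lam * k

def optimal_k(p_sorted, lam):
    # Increments cost(k+1) - cost(k) are nondecreasing in k, so
    # binary-search for the first k whose increment is nonnegative.
    lo, hi = 1, len(p_sorted)
    while lo < hi:
        mid = (lo + hi) // 2
        if cost(p_sorted, mid + 1, lam) - cost(p_sorted, mid, lam) >= 0:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(optimal_k([0.5, 0.3, 0.15, 0.05], 0.1))  # -> 3
```

A brute-force scan over all k returns the same minimizer; the binary search simply replaces the O(V) scan with O(log V) increment evaluations.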
Theorem 3.4 (Discrete primal and dual cost convexity). cost(·) is discretely convex in k ∈ [V] for:

1. Div(p̂, p) = D_ϕ(p̂, p), if ϕ is primal valid;
2. Div(p̂, p) = D_ϕ(p, p̂), if ϕ is dual valid.

In Figure 6 (see Appendix G.5), we illustrate the result of Theorem 3.4 by plotting the cost(·) functions for primal and dual Bregman α-decoding (defined in Section 4 below) for assorted α.

Provable binary search over k: As a direct consequence of Theorem 3.4, the cost increments ∆cost(k) = cost(k+1) − cost(k) increase with k, so binary search over k will efficiently identify an optimal sparsity parameter k*, namely one for which ∆cost(k*−1) ⩽ 0 and ∆cost(k*) ⩾ 0.

The proof of Theorem 3.4 requires very distinct techniques in the primal and dual cases.

Primal k-convexity. The proof is developed in Appendix D. As its cornerstone, we use the Legendre dual mapping ϕ* of the generator ϕ to establish and leverage the following cost structure: for any k, cost(k) can, up to additional terms, be represented as max_{ν⩾0} [ν − Σ_{i=1}^k ϕ*(ϕ′(p_i) + ν)], where the objective is concave in ν and has ν_k, the optimal Lagrange multiplier for renormalizing the top-k probabilities of p from (4), as its unique optimizer. From here, we are able to establish ∆²cost(k) ⩾ 0.

⁴ In the economics literature, u(x) = x ϕ″(x)/ϕ′(x) is referred to
as the elasticity of the function ϕ′.

Dual k-convexity. The proof is in Appendix E. The above dualization strategy does not directly apply. Instead, we lower bound ∆²cost*(k) by regrouping the loss contributions of the indices i ∈ [k+1], and, via intricate term rearrangement and bounding, reduce to proving the local concavity of a special transformation (Equation 20) that turns out to hold by our dual-validity assumption.

4 Example: Bregman α-decoding

We now consider, as an illustration, a single-parameter family of Bregman decoding strategies, which arises via the generators of the Havrda-Charvát-Tsallis α-entropies [8, 29, 45, 51, 52]:

ϕ_α(x) = x^α/[α(α−1)],   x ∈ [0, 1],   for α ∈ J := (−∞, 0) ∪ (0, 1) ∪ (1, ∞).

When α < 0 and x = 0, we set x^α := +∞ so that ϕ_α(0) = ∞. For α = 1, one defines ϕ₁(x) = x log(x), which corresponds to the Shannon entropy, arising in the limit⁵ as α → 1. Observe that ϕ_α is primal valid for all α ≠ 0, as ϕ″_α(x) = x^{α−2}. This yields the following primal family of renormalizations, which we index by α rather than ϕ:

Definition 4.1 (Primal Bregman α-decoding). Fix α ∈ J, k ∈ [V]. The renormalization map T_α is given for p ∈ ∆_sub,k as: [T_α(p)]_i = (p_i^{α−1} + ν)^{1/(α−1)} for i ∈ [k], with ν ∈ R chosen so that Σ_{i∈[k]} [T_α(p)]_i = 1.

Note that for α = 1, we have ϕ′₁(x) = log x + 1. Hence, (4) implies e^ν Σ_{i=1}^k p_i = 1, and we obtain the "standard" renormalization: [T₁(p)]_i = p_i / (Σ_{j=1}^k p_j), for i ∈ [k]. Therefore, primal Bregman 1-decoding is top-k decoding, showing how one recovers top-k in our framework. It turns out that some further values of α also lead to renormalization maps of special interest. For any fixed p, we let T_{−∞}(p) = lim inf_{α→−∞} T_α(p) and T_∞(p) = lim inf_{α→∞} T_α(p), where the limits are entrywise.

Proposition 4.2 (Special primal α-renormalization maps). We have the following special instances⁶ of the primal Bregman α-renormalization map, defined for all i ∈ [k] as follows:

[T_{−∞}(p)]_i = p_i + 1[i = i*] · (1 − Σ_{j=1}^k p_j), assuming that argmax_i p_i = {i*}.

[T_{1.5}(p)]_i = (√p_i + [√(r² + k(1−s)) − r]/k)², where r = Σ_{j=1}^k √p_j and s = Σ_{j=1}^k p_j.
[T₂(p)]_i = p_i + (1 − Σ_{j=1}^k p_j)/k.

[T_∞(p)]_i = max{p_i, ν}, where ν ∈ R is the "water level" for which Σ_{i=1}^k [T_∞(p)]_i = 1.

Along with the primal family, the dual α-decoding family can also be defined based on ϕ_α. Unlike primal α-decoding, the dual Bregman sparse decoding Problem (2) can be non-convex, as displayed in Figure 1 above. Figure 5 in Appendix G.4 further demonstrates the nonconvexity of D_{ϕ_α} on the unit square for some α. Yet, we can still show that any dual α-decoding with α > 1 is valid, greedy, and k-convex:

Lemma 4.3. All generator functions ϕ_α, α > 1, are dual-valid and satisfy Assumption (A2).

We give an illustration contrasting primal and dual α-decoding for various α > 1 in Appendix G.3.

⁵ One conventionally defines the entropies via (x^α − x)/[α(α−1)], in which case the Shannon entropy is obtained in the limit as α → 1. In our case, we use the definition ϕ_α(x) = x^α/[α(α−1)] so that some technical conditions (such as ϕ′_α(0) = 0) hold in the proofs. Both definitions lead to the same decoding strategies in (4).

⁶ In particular, T_{−∞}(p), T_{1.5}(p), and T₂(p) do not require solving for ν in Definition 4.1, enabling a fast implementation just like in the case of the canonical top-k renormalization.

5 Experiments

We now illustrate some of the decoding schemes described in our paper in the context of LLMs. Since our goal is to develop
the theoretical foundations of top-k decoding, our aim in this section is simply to illustrate that the performance of our novel decoding schemes can be competitive with standard top-k decoding. In particular, we do not aim to compare or compete with other popular and established decoding methods, which is beyond the scope of our theory-focused paper.

5.1 Experimental Setup

Method. In addition to standard top-k decoding, which coincides with the α = 1 case of our primal α-decoding family described in Section 4, we illustrate primal α-decoding strategies for α = 1.5 and α = 2. These have closed-form renormalization maps that are as fast as standard renormalization.

Full and partial evaluation. Further, we perform two types of experiments: (1) For the evaluation of our full decoding strategy, we decode by adaptively selecting the optimal sparsity parameter k* by optimizing our sparse Bregman objective. In this approach, we aim to observe the behavior when adaptively choosing k*. Since practical choices of k* are always upper bounded, we set a maximum k* ⩽ k_max := 50. (2) In the partial evaluation approach, we instead directly evaluate, for each fixed choice of k in the grid k ∈ {5, 10, . . . , 50}, our proposed renormalization strategies along with standard top-k renormalization.

Models and benchmarks. We conduct experiments using the GPT-2 Large [43] and Llama 3.1 8B [25] models. We evaluate on two benchmarks: (1) open-ended text generation using the WebText test set from the GPT-2 output dataset [40], and (2) grade-school math reasoning using the GSM8K Chain-of-Thought benchmark [13].

Evaluation metrics. For open-ended text generation, following Chen et al. [12], we use the first 35 tokens of each WebText test sample as a prompt and generate up to 256 tokens.
We evaluate the following standard metrics [see e.g., 12, 32, 38]:

(1) Perplexity difference, which measures the perplexity (according to the base model p_base) of human text compared to that obtained from a decoding strategy p_decoding derived from the base model; lower is better. This equals

E_{X∼D}[ E_{Y∼D(·|X)}( p_base(Y|X)^{−1/|Y|} ) − E_{Y∼p_decoding(·|X)}( p_base(Y|X)^{−1/|Y|} ) ],

where X ∼ D is a prompt drawn from the dataset, Y ∼ D(·|X) denotes a human-written continuation drawn from the dataset, and Y ∼ p_decoding(·|X) denotes a model-generated continuation using a specific decoding strategy. Here, |Y| is the length of the continuation.

(2) Repetition difference:

E_{X∼D}[ P_{Y∼p_decoding(·|X)}(rep(Y)) − P_{Y∼D(·|X)}(rep(Y)) ],

where rep(Y) is the event that Y contains two contiguous and identical token spans of length ⩾ 2; lower is better.

5.2 Results

Open-ended text generation. Using the partial evaluation setup with temperature fixed at 1.0, Figure 3 reports the differences in perplexity and repetition frequency between model-generated and human-written text across a range of k values. Primal decoding strategies are competitive with top-k in terms of both metrics. In particular, α = 2.0 has the smallest gaps in perplexity.

GSM8K dataset. Using the full decoding strategy, we evaluate the LLaMA 3.1 8B model using 8-shot CoT prompting. We test various temperatures, regularization strengths λ ∈ {0.01, 0.0001}, and primal decoding parameters α ∈ {1.5, 2.0}. Results for other settings are in Appendix H. To ensure a matched comparison, we run top-k with k = k* from the Bregman decoding run with the same temperature, λ, and α, rounded to the nearest integer; see Table 2 in Appendix H. As
seen in Table 1, across all temperature settings, primal decoding with adaptive k* achieves accuracy comparable to top-k. At higher temperatures (such as 1.5), the performance of top-k decoding degrades more rapidly than that of primal decoding.

Figure 3: Perplexity and repetition frequency differences between generated and human-written text for GPT2-large (left two panels) and LLaMA 3.1 8B (right two panels), for various k values. We show top-k decoding and primal decoding with α ∈ {1.5, 2.0}. Standard deviations are estimated using 1000 bootstrap resamples.

Table 1: Accuracy on GSM8K for LLaMA 3.1 8B using Bregman primal decoding (λ ∈ {0.01, 0.0001}, α ∈ {1.5, 2.0}) and top-k decoding, for various temperatures. For top-k, k equals the averaged k* from primal decoding with matching temperature, λ, and α. Standard deviations are over 1000 bootstrap resamples.

Temp | λ=0.01, α=1.5 | λ=0.01, α=2.0 | Top-k (λ=0.01, α=1.5) | Top-k (λ=0.01, α=2.0) | λ=0.0001, α=1.5 | λ=0.0001, α=2.0 | Top-k (λ=0.0001, α=1.5) | Top-k (λ=0.0001, α=2.0)
0.3 | 85.14±0.80 | 84.38±1.00 | 83.62±1.02 | 84.69±0.99 | 84.69±0.99 | 84.46±1.00 | 85.14±0.98 | 83.62±1.02
0.7 | 83.24±1.02 | 81.73±1.06 | 83.78±1.02 | 84.69±0.99 | 82.03±1.06 | 82.03±1.06 | 82.11±1.06 | 83.78±1.02
1.0 | 81.20±1.08 | 80.97±1.08 | 81.20±1.08 | 81.20±1.08 | 77.41±1.15 | 77.26±1.15 | 79.23±1.12 | 78.54±1.13
1.5 | 79.00±1.12 | 80.06±1.10 | 75.97±1.18 | 75.97±1.18 | 57.24±1.36 | 64.97±1.31 | 43.21±1.36 | 58.53±1.36

6 Related work

Bregman projection. Michelot [37] considered the Brier score projection problem and derived an efficient algorithm. Later, Shalev-Shwartz et al. [48] revisited the properties of optimal Brier projection, and Duchi et al. [17] gave and analyzed the explicit algorithm that we discuss in what follows. Wang and Carreira-Perpinán [53] simplified and distilled the proof.
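For concreteness, the Euclidean (Brier-score) projection onto the simplex referenced above admits the classic O(V log V) sort-and-threshold form; a sketch (our own rendering of the well-known algorithm, not code taken from any of the cited papers):

```python
def project_simplex(v):
    # Euclidean projection of v onto {x >= 0 : sum(x) = 1}:
    # sort descending, find the largest rho with u[rho-1] > (cumsum - 1)/rho,
    # then shift every coordinate by that threshold and clip at zero.
    u = sorted(v, reverse=True)
    cumsum, theta = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        cumsum += uj
        t = (cumsum - 1.0) / j
        if uj - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]
```

On a vector already inside the simplex the projection is the identity; otherwise mass is shifted uniformly across the surviving coordinates and the rest are zeroed.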
[35] further studied the projection as a method for generating sparse probability predictions in multiclass prediction problems. [33, 34] developed methods for efficient Bregman projections onto the simplex; for a fixed support, these results characterize our primal decoding. [44, 46] developed differentiable variants of top-k decoding. In contrast to these works, we: (1) consider Bregman projections under ℓ₀ regularization, and (2) offer, to the best of our knowledge, novel analyses of dual Bregman projections.

ℓ₀ regularization. Regularization via the ℓ₀-pseudonorm has been studied widely, with various approximate algorithms (based on surrogates, integer programming, branch-and-bound methods, etc.) developed for problems ranging from linear regression to more general learning tasks [see e.g., 2, 6, 9, 15, 18–20, 30, 41, 49, 50, 58, 61]. In contrast, the algorithms we propose are exact within numerical precision for the specific class of problems we consider.

Bregman divergences. The properties of Bregman divergences [10] have been widely studied; see, e.g., [1, 3, 5, 8, 27, 39, 47, 55, 57]. In particular, there are a number of relations between Bregman divergences and their versions with reversed arguments, motivated by the fact that convexity in the first argument allows for minimization, making it useful to switch the order of the variables; see, e.g., [1, 26]. We both leverage some of these results in our work and contribute some, to the best of our knowledge, novel proof techniques and insights into the (primal and dual) Bregman geometry.

LLM decoding. There is a vast range of work on LLM sampling (or decoding); see, e.g., [54] and references therein. Classical methods include greedy sampling and beam search. Sparse sampling methods such as top-k sampling [21] are
motivated by the intuition that the "unreliable tail" of low-probability tokens is mis-estimated [32]. In particular, [32] propose top-p sampling, and [38] propose min-p sampling. Other sampling methods were proposed in [4, 22, 31, 36]. [12] propose the decoding game, a two-player game between a generator/LLM and an adversary that distorts the true distribution. They show that certain sparse truncated sampling methods are approximately minimax optimal. There have also been various approaches to explicitly make language model output probabilities sparse; see, e.g., [14, 59, 60]. In contrast, our goal is to develop a deeper theoretical understanding of the popular top-k decoding method, placing it into a broader framework.

General motivation. The motivation for our general approach is two-fold: (1) Without sparsity considerations, Bregman divergences are known to have a close correspondence to proper scoring rules, and are minimized at the true probability distribution; see, e.g., [10, 24]. This property is highly desirable in probabilistic forecasting and prediction, ensuring that the forecaster is incentivized to predict the true distribution in order to minimize their loss. (2) The ℓ₀-"norm", i.e., the number of nonzero entries of a sparse vector, has been widely argued both to be a reasonable measure of sparsity and to have good properties as a regularizer in certain sparse estimation problems such as sparse regression [see e.g., 7, 16, 23, 28]. Combining these two lines of thought provides the motivation for studying ℓ₀-regularized Bregman divergence minimization.

7 Discussion

In this paper, we have developed a theoretical foundation for top-k decoding. We hope and anticipate that our "greedy + k-convex sparse decoding" framework will serve both as motivation and as a useful template for developing a variety of novel, theoretically motivated adaptive sparse decoding methods in future work.
While our theoretical results are already strong, they could be improved even further, for instance by elucidating the tightest conditions under which the greedy property holds for dual Bregman decoding. Further, a broader experimental evaluation of Bregman decoding methods, which is beyond the scope of our paper, is a promising and important future research direction. As our work aims to enhance the understanding of decoding methods for language models, it may have positive societal implications, similar to other research in the field. Acknowledgments This work was partially supported by the NSF, ARO, AFOSR, ONR, and the Sloan Foundation. References [1]Shun-ichi Amari and Hiroshi Nagaoka. Methods of information geometry , volume 191. American Mathematical Soc., 2000. [2]Sohail Bahmani, Bhiksha Raj, and Petros T Boufounos. Greedy sparsity-constrained optimization. The Journal of Machine Learning Research , 14(1):807–841, 2013. [3]Arindam Banerjee, Srujana Merugu, Inderjit S Dhillon, and Joydeep Ghosh. Clustering with Bregman divergences. Journal of machine learning research , 6(Oct):1705–1749, 2005. [4]Sourya Basu, Govardana Sachitanandam Ramachandran, Nitish Shirish Keskar, and Lav R. Varshney. Mirostat: A neural text decoding algorithm that directly controls perplexity. In International Conference on Learning Representations , 2021. URL https://openreview.net/forum?id=W1G1JZEIy5_ . [5]Heinz H Bauschke and Patrick L Combettes. Iterating Bregman retractions. SIAM Journal on Optimization , 13(4):1159–1173, 2003. [6]Dimitris Bertsimas, Angela King, and Rahul Mazumder. Best subset selection via a modern optimization lens. The Annals of Statistics , 44(2):813,
2016. [7] Lucien Birgé and Pascal Massart. Gaussian model selection. Journal of the European Mathematical Society, 3(3):203–268, 2001. [8] Mathieu Blondel, André F.T. Martins, and Vlad Niculae. Learning with Fenchel-Young losses. Journal of Machine Learning Research, 21(35):1–69, 2020. URL http://jmlr.org/papers/v21/19-021.html. [9] Thomas Blumensath and Mike E Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265–274, 2009. [10] Lev M Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200–217, 1967. [11] Emmanuel J Candes and Terence Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005. [12] Sijin Chen, Omar Hagrass, and Jason Matthew Klusowski. Decoding game: On minimax optimality of heuristic text generation strategies. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=Wfw4ypsgRZ. [13] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. [14] Gonçalo M Correia, Vlad Niculae, and André FT Martins. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2174–2184, 2019. [15] Antoine Dedieu, Hussein Hazimeh, and Rahul Mazumder. Learning sparse classifiers: Continuous and mixed integer optimization perspectives. Journal of Machine Learning Research, 22(135):1–47, 2021. URL http://jmlr.org/papers/v22/19-1049.html. [16] David L Donoho and Iain M Johnstone.
Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425–455, 1994. [17] John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 272–279, 2008. [18] M’hamed Essafri, Luca Calatroni, and Emmanuel Soubies. Exact continuous relaxations of ℓ0-regularized criteria with non-quadratic data terms, 2024. URL https://arxiv.org/abs/2402.06483. [19] M’hamed Essafri, Luca Calatroni, and Emmanuel Soubies. On ℓ0 Bregman-relaxations for Kullback-Leibler sparse regression. In 2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6. IEEE, 2024. [20] M’hamed Essafri, Luca Calatroni, and Emmanuel Soubies. Box-constrained ℓ0 Bregman-relaxations, 2025. URL https://arxiv.org/abs/2503.15083. [21] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2018. [22] Matthew Finlayson, John Hewitt, Alexander Koller, Swabha Swayamdipta, and Ashish Sabharwal. Closing the curious case of neural text degeneration. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=dONpC9GL1o. [23] Dean P Foster and Edward I George. The risk inflation criterion for multiple regression. The Annals of Statistics, 22(4):1947–1975, 1994. [24] Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007. [25] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et
al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783 , 2024. [26] Sebastian Gruber and Florian Buettner. Uncertainty estimates of predictions via a general bias-variance decomposition. In International Conference on Artificial Intelligence and Statistics , pages 11331–11354. PMLR, 2023. [27] Peter D Grünwald and A Philip Dawid. Game theory, maximum entropy, minimum discrepancy and robust bayesian decision theory. Ann. Statist. , 32(1):1367–1433, 2004. [28] Trevor Hastie, Robert Tibshirani, and Martin Wainwright. Statistical Learning with Sparsity: The Lasso and Generalizations . CRC Press, 2015. [29] Jan Havrda and František Charvát. Quantification method of classification processes. concept of structural a-entropy. Kybernetika , 3(1):30–35, 1967. [30] Hussein Hazimeh, Rahul Mazumder, and Tim Nonet. L0learn: A scalable package for sparse learning using ℓ0 regularization. Journal of Machine Learning Research , 24(205):1–8, 2023. [31] John Hewitt, Christopher D Manning, and Percy Liang. Truncation sampling as language model desmoothing. InFindings of the Association for Computational Linguistics: EMNLP 2022 , pages 3414–3427, 2022. [32] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. InInternational Conference on Learning Representations , 2020. URL https://openreview.net/ forum?id=rygGQyrFvH . [33] Walid Krichene, Syrine Krichene, and Alexandre Bayen. Efficient Bregman projections onto the simplex. In 2015 54th IEEE Conference on Decision and Control (CDC) , pages 3291–3298. IEEE, 2015. [34] Cong Han Lim and Stephen J. Wright. Efficient Bregman projections onto the permutahedron and related polytopes. In Arthur Gretton and Christian C. Robert, editors, Proceedings of the 19th International Conference on Artificial Intelligence and Statistics , volume 51 of Proceedings of Machine Learning Research , pages 1205–1213, Cadiz, Spain, 09–11 May 2016. PMLR. [35] Andre Martins and Ramon Astudillo. 
From softmax to sparsemax: A sparse model of attention and multi-label classification. In International Conference on Machine Learning, pages 1614–1623. PMLR, 2016. [36] Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell. Locally typical sampling. Transactions of the Association for Computational Linguistics, 11:102–121, 2023. [37] Christian Michelot. A finite algorithm for finding the projection of a point onto the canonical simplex of R^n. Journal of Optimization Theory and Applications, 50:195–200, 1986. [38] Nguyen Nhat Minh, Andrew Baker, Clement Neo, Allen G Roush, Andreas Kirsch, and Ravid Shwartz-Ziv. Turning up the heat: Min-p sampling for creative and coherent LLM outputs. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=FBkpCyujtS. [39] Frank Nielsen. An elementary introduction to information geometry. Entropy, 22(10):1100, 2020. [40] OpenAI. GPT-2 output dataset, 2019. URL https://github.com/openai/gpt-2-output-dataset. [41] Jianting Pan and Ming Yan. Efficient sparse probability measures recovery via Bregman gradient. Journal of Scientific Computing, 102(3):66, 2025. [42] Christos H Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Courier Corporation, 1998. [43] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019. [44] Luca Ragazzi, Paolo Italiani, Gianluca Moro, and Mattia Panni. What are you token about? Differentiable perturbed top-k token selection for scientific document summarization. In Findings of the Association for Computational
https://arxiv.org/abs/2505.19371v1
Appendix

Contents

Existence and uniqueness of dual Bregman decoding
Proof of the primal greedy property in Theorem 3.2
Proof of the dual greedy property in Theorem 3.3
Proof of discrete convexity for primal Bregman projection
Proof of discrete convexity for dual Bregman projection
Algorithmic details
Example: α-Bregman decoding
Supplementary experimental details

A  Existence and uniqueness of dual Bregman decoding

Theorem A.1 (Uniqueness and formula for dual Bregman renormalization). Fix a dual valid potential $\phi$. Then, for any $x \in \Delta_{\mathrm{sub},k}$ with $\sum_i x_i > 0$, the renormalization map $T^*_\phi$ is uniquely defined by
$$[T^*_\phi(x)]_i = x_i + \nu^*/f'([T^*_\phi(x)]_i) \quad \text{for all } i \in [k],$$
where $\nu^* \in \mathbb{R}$ is chosen so that $\sum_{i=1}^k [T^*_\phi(x)]_i = 1$.

Proof. First, assume without loss of generality that $0 < \sum_{i \in [k]} x_i < 1$: if $\sum_{i \in [k]} x_i = 1$ then $x \in \Delta_k$, so the unique unconstrained optimum, which is at $x$ by the standard property of Bregman divergences, is also the unique optimum of our constrained projection problem. Note that Slater's condition is satisfied for this projection problem, as we are optimizing over the simplex (whose relative interior is nonempty). Therefore, in this differentiable problem, the optimal solutions can be characterized via the KKT conditions.

Introduce a Lagrange multiplier $\nu \in \mathbb{R}$ for the simplex constraint and Lagrange multipliers $(\lambda_i)_{i \in [k]}$ for the nonnegativity constraints. The Lagrangian is
$$\mathcal{L}(\hat p, \nu) = \sum_{i=1}^k \big[\phi(x_i) - \phi(\hat p_i) - \phi'(\hat p_i)(x_i - \hat p_i)\big] - \nu\Big(\sum_{i=1}^k \hat p_i - 1\Big) - \sum_{i=1}^k \lambda_i \hat p_i.$$
Here $\lambda_i \ge 0$ for all $i$, and by complementary slackness, at optimality $\lambda_i = 0$ whenever $\hat p_i > 0$. For each $i \in [k]$, the stationarity condition reads (except possibly when $\hat p_i = 0$, where the second derivative could be infinite):
$$0 = \frac{\partial \mathcal{L}}{\partial \hat p_i} = -\phi''(\hat p_i)(x_i - \hat p_i) - \nu - \lambda_i \iff \phi''(\hat p_i)(\hat p_i - x_i) = \nu + \lambda_i.$$
In particular, for each coordinate $i$ for which the optimal $\hat p_i \in (0,1)$, the stationarity condition is
$$\phi''(\hat p_i)(\hat p_i - x_i) = \nu \implies \hat p_i = x_i + \frac{\nu}{\phi''(\hat p_i)} = x_i + \frac{\nu}{f'(\hat p_i)}. \quad (7)$$
Now we show that $\nu > 0$. Indeed, observe that there must be at least one index $i$ for which $\hat p_i > x_i$.
If that were not the case, we would get $\sum_{i \in [k]} \hat p_i \le \sum_{i \in [k]} x_i < 1$ by our assumption, contradicting $\hat p \in \Delta_k$. In particular, then, $\hat p_i > x_i \ge 0$, and therefore $\phi''(\hat p_i)(\hat p_i - x_i) = \nu$. Since $\phi''(\hat p_i) > 0$ and $\hat p_i - x_i > 0$, we conclude that $\nu > 0$.

Having shown that $\nu > 0$, we now show that all $\hat p_i > 0$ at optimality. Note that $\frac{\partial}{\partial y} d_\phi(x, y) = \phi''(y)(y - x)$ for $y > 0$. We consider two cases: (1) $\phi''(0)$ is finite; (2) $\lim_{y \to 0} \phi''(y) = +\infty$.

If $\phi''(0)$ is finite, then $\hat p_i > 0$ for all $i$. Indeed, suppose that were not the case, with $\hat p_i = 0$ for some $i$. Then we would have $\phi''(0)(0 - x_i) = \nu + \lambda_i$, or equivalently, $\phi''(0) \cdot x_i + \nu + \lambda_i = 0$. Each of the three terms is nonnegative and $\nu > 0$, so we arrive at a contradiction.

Next, consider the case $\lim_{y \to 0} \phi''(y) = +\infty$. Then $\lim_{y \to 0} \frac{\partial}{\partial y} d_\phi(x, y) = -\infty$ for all $x \in (0, 1]$, so for any $i$ such that $x_i > 0$, setting $\hat p_i = 0$ would lead to $\nu = -\infty$; hence necessarily $\hat p_i > 0$. On the other hand, for any $i$ for which $x_i = 0$, since $\lim_{y \to 0} y\,\phi''(y) = 0$, setting $\hat p_i = 0$ would lead to $\nu = 0$, which is a contradiction.

In all cases, the optimal $\hat p$ lies in the strict interior of the simplex, so it suffices to solve (7) over this range. To show that the solution exists and is unique, we collect the following information about the function $\Psi$ from (13), defined by $\Psi(x, y, \nu) := \phi''(y)(y - x) - \nu$ for all $x, y, \nu$. For a fixed $\nu$, (7) is equivalent to solving $\Psi(x_i, \hat p_i, \nu) = 0$.
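For a fixed $\nu$, this per-coordinate equation can be solved numerically by bisection, since $\Psi(x, \cdot, \nu)$ is strictly increasing on $[x, 1]$, as the monotonicity analysis in this proof shows. A minimal Python sketch of that inner root solve; the potential $\phi(y) = y^3/6$ (so $\phi''(y) = y$) and the function name `solve_psi_root` are our illustrative choices, not from the paper:

```python
# Bisection solve of Psi(x, y, nu) = phi''(y)*(y - x) - nu = 0 over y in [x, 1],
# for the illustrative potential phi(y) = y**3 / 6, whose second derivative is y.
# (Choice of potential and all names here are ours, for illustration only.)

def phi_pp(y):
    return y  # second derivative of phi(y) = y**3 / 6

def solve_psi_root(x, nu, tol=1e-12):
    """Unique y in [x, 1] with phi''(y)*(y - x) = nu, assuming 0 < nu <= phi''(1)*(1 - x)."""
    a, b = x, 1.0
    while b - a > tol:
        m = 0.5 * (a + b)
        if phi_pp(m) * (m - x) - nu < 0:
            a = m  # Psi still negative at m: the root lies to the right
        else:
            b = m
    return 0.5 * (a + b)

y = solve_psi_root(0.3, 0.05)
# y satisfies y*(y - 0.3) = 0.05 with y in [0.3, 1]
```

The same bracketing argument as in the proof (sign change between $y = x$ and $y = 1$) guarantees the bisection converges to the unique root.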
First, consider $x > 0$. Then we have the following:
1. Since the map $y \mapsto d_\phi(x, y)$ is strictly convex for $y \in [x, 1]$, it follows that $\frac{\partial}{\partial y} d_\phi(x, y) = \Psi(x, y, 0)$ is strictly increasing for $y \in [x, 1]$, and so is $\Psi(x, y, \nu)$.
2. We have $\Psi(x, x, \nu) = -\nu \le 0$. Further, $\Psi(x, 1, \nu) = \phi''(1)(1 - x) - \nu \ge 0$ whenever $\nu \le \phi''(1)(1 - x)$.
Hence the map $y \mapsto \Psi(x, y, \nu)$ has a unique zero on the interval $[x, 1]$, as long as $0 < \nu \le \phi''(1)(1 - x)$.

Next, consider $x = 0$, in which case we need to solve the equation $\phi''(y)\,y = \nu$. Then we have the following:
1. Since the map $y \mapsto d_\phi(0, y)$ is strictly convex for $y \in (0, 1]$, it follows that $\frac{\partial}{\partial y} d_\phi(0, y) = \Psi(0, y, 0) = \phi''(y)\,y$ is strictly increasing for $y \in (0, 1]$, and so is $\Psi(0, y, \nu)$.
2. By assumption, $\lim_{y \to 0^+} y\,\phi''(y) = 0$, hence $\lim_{y \to 0^+} \Psi(0, y, \nu) = -\nu \le 0$. Further, $\Psi(0, 1, \nu) = \phi''(1) - \nu \ge 0$ whenever $\nu \le \phi''(1)$.
Hence the map $y \mapsto \Psi(0, y, \nu)$ has a unique zero on the interval $(0, 1]$, as long as $0 < \nu \le \phi''(1)$.

Now define $M := \min_i \phi''(1)(1 - x_i) = \phi''(1)(1 - \max_i x_i)$. Since $\sum_i x_i < 1$ by assumption, it follows that $M > 0$. From the above analysis, as long as $\nu \in (0, M]$, for each $i$ the equation $\phi''(y_i)(y_i - x_i) = \nu$ has a unique solution $y_i(\nu) \in (x_i, 1]$. Furthermore, as we establish in Lemma C.2, the map $\nu \mapsto y_i(\nu)$ is strictly increasing for $\nu > 0$, owing to the assumed second-argument convexity of $d_\phi$. In particular, define $G(\nu) = \sum_{i=1}^k y_i(\nu)$ for $\nu > 0$; then $G$ is continuous and strictly increasing, and satisfies $\lim_{\nu \to 0} G(\nu) = \sum_i x_i < 1$ and $G(M) \ge y_{i^*}(M) = 1$, where $i^*$ is any index achieving the maximum among the coordinates of $x$. Hence there is a unique $\nu^* \in (0, M]$ with $G(\nu^*) = 1$. Setting $\hat p_i = y_i(\nu^*)$ yields a vector in $\Delta_k$ satisfying the KKT stationarity conditions. Finally, note that the solution $\hat p$ just identified is unique.
Indeed, we earlier excluded boundary solutions from consideration, and further excluded any solutions in which $\hat p_i < x_i$ for some $i \in [k]$; it thus suffices to recall that the Bregman objective is strictly convex in the interior of the region of the simplex given by $\{\hat p \in \Delta_k : \hat p_i \ge x_i \text{ for all } i \in [k]\}$, concluding the proof.

B  Proof of the primal greedy property in Theorem 3.2

We first fix some notation. Henceforth we assume that the vector $p$ has been sorted, i.e., $p_1 \ge p_2 \ge \dots \ge p_V$. For any subset $Q = \{i_1, \dots, i_k\} \subseteq [V]$ of size $k$, let $Q^c = [V] \setminus Q$, and let $p_Q$ denote the sub-probability vector with the entries of $p$ whose indices are in $Q$. We define the loss $L(Q)$ as
$$L(Q) = \min_{\hat p \in \Delta_k} D_\phi\big((\hat p, 0_{V-k}), (p_Q, p_{Q^c})\big) = \min_{\hat p \in \Delta_k} \sum_{j=1}^k d_\phi(\hat p_j, p_{i_j}) + S_{Q^c}, \quad (8)$$
where $S_{Q^c} = \sum_{j \notin Q} d_\phi(0, p_j)$.

To prove Theorem 3.2, we will show that $L(S') \ge L(S)$ for any $S' \subseteq [V]$ of size $k$, where $S = [k]$ consists of the top-$k$ indices; we further show that strict inequality always holds if $p_{S'} \neq p_S$. We proceed in three steps: (1) we simplify the form of the loss function $L(Q)$ in Lemma B.1; (2) for any two subsets $S, S'$, we decompose the loss difference $L(S') - L(S)$ into three terms in Lemma B.2; (3) we analyze each term in this decomposition individually and prove that it is non-negative.
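Before the proof, the top-$k$ claim can be sanity-checked numerically. A small Python sketch (our own illustration, not part of the paper's argument) using the generalized-KL generator $\phi(t) = t\log t$ with $\phi(0)=0$, for which the primal projection reduces to proportional renormalization $[T_Q(p)]_j = p_{i_j}/\sum_{i \in Q} p_i$, brute-forcing all size-$k$ subsets:

```python
from itertools import combinations
from math import log

# Brute-force check of the primal greedy property for the generalized-KL
# generator phi(t) = t*log(t), phi(0) = 0. For this phi the primal Bregman
# projection onto Delta_k is proportional renormalization over the kept set,
# and d(0, p) = p. (This numerical check is ours; it illustrates, not proves.)

def d_phi(x, y):
    # generalized KL divergence d(x, y) = x*log(x/y) - x + y, with d(0, y) = y
    return y if x == 0.0 else x * log(x / y) - x + y

def primal_loss(p, Q):
    s = sum(p[i] for i in Q)
    kept = sum(d_phi(p[i] / s, p[i]) for i in Q)              # T_i = p_i / s
    dropped = sum(p[i] for i in range(len(p)) if i not in Q)  # d(0, p_i) = p_i
    return kept + dropped

p = [0.4, 0.25, 0.15, 0.12, 0.08]   # sorted, sums to 1
k = 3
losses = {Q: primal_loss(p, set(Q)) for Q in combinations(range(len(p)), k)}
best = min(losses, key=losses.get)
# best == (0, 1, 2): the top-k indices minimize the primal Bregman loss
```

For this particular generator the loss collapses to $L(Q) = -\log \sum_{i \in Q} p_i$, so the top-$k$ optimality is visible directly; the theorem extends it to general valid $\phi$.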
B.1  Decomposing the Bregman cost function on subsets

Lemma B.1. For any $Q = \{i_1, i_2, \dots, i_k\} \subseteq [V]$ of size $k$, the loss function defined in (8) simplifies to
$$L(Q) = \sum_{j=1}^k \big[\phi([T_Q(p)]_j) - \phi'(p_{i_j})[T_Q(p)]_j\big] + S_{[V]} - |Q|\,\phi(0). \quad (9)$$

Proof. Observe that
$$L(Q) = D_\phi\big((\hat p_Q, 0_{V-k}), (p_Q, p_{Q^c})\big) = \sum_{j=1}^k d_\phi([T_Q(p)]_j, p_{i_j}) + S_{Q^c}$$
$$= \sum_{j=1}^k \big[\phi([T_Q(p)]_j) - \phi(p_{i_j}) - \phi'(p_{i_j})([T_Q(p)]_j - p_{i_j})\big] + S_{Q^c}$$
$$= \sum_{j=1}^k \big[\phi([T_Q(p)]_j) - \phi'(p_{i_j})[T_Q(p)]_j\big] + \sum_{j=1}^k \big[-\phi(p_{i_j}) + f(p_{i_j})\,p_{i_j}\big] + S_{Q^c}.$$
This further equals
$$\sum_{j=1}^k \big[\phi([T_Q(p)]_j) - \phi'(p_{i_j})[T_Q(p)]_j\big] + \sum_{j \in Q} d_\phi(0, p_j) + S_{Q^c} - |Q|\,\phi(0)$$
$$= \sum_{j=1}^k \big[\phi([T_Q(p)]_j) - \phi'(p_{i_j})[T_Q(p)]_j\big] + S_Q + S_{Q^c} - |Q|\,\phi(0) = \sum_{j=1}^k \big[\phi([T_Q(p)]_j) - \phi'(p_{i_j})[T_Q(p)]_j\big] + S_{[V]} - |Q|\,\phi(0).$$
This finishes the proof.

Let $T_Q(p)$ denote a minimizer of the above loss $L(Q)$, i.e.,
$$T_Q(p) \in \arg\min_{\hat p \in \Delta_k} D_\phi\big((\hat p, 0_{V-k}), (p_Q, p_{Q^c})\big) \overset{(a)}{=} \arg\min_{\hat p \in \Delta_k} \sum_{j=1}^k d_\phi(\hat p_j, p_{i_j}).$$
Note that $(a)$ holds because the term $S_{Q^c}$ plays no role in the location of the minimizer, although it does contribute to the final loss $L(Q)$. Also, as the divergence is separable, once we have selected a subset $Q$, the ordering of its elements does not matter for the calculation of the loss and the minimizer; thus, without loss of generality, we may assume $i_1 < i_2 < \dots < i_k$ for $k \in [V]$. By forming the Lagrangian and differentiating it, we obtain the primal thresholding from (4):
$$\phi'([T_Q(p)]_j) = \phi'(p_{i_j}) + \nu_Q \quad \forall j \in [k], \quad (10)$$
where $\nu_Q$ is chosen such that $\sum_{j=1}^k [T_Q(p)]_j = 1$.

Lemma B.2. Let $S = \{i_1, \dots, i_k\},\ S' = \{i'_1, \dots, i'_k\} \subseteq [V]$, and let $T_S(p)$ and $T_{S'}(p)$ be the corresponding minimizers. Then the following decomposition holds:
$$L(S') - L(S) = D_\phi(T_{S'}(p), T_S(p)) + \sum_{j=1}^k \big([T_{S'}(p)]_j - [T_S(p)]_j\big)\big(\phi'([T_S(p)]_j) - \phi'(p_{i_j})\big) + \sum_{j=1}^k [T_{S'}(p)]_j\big(\phi'(p_{i_j}) - \phi'(p_{i'_j})\big). \quad (11)$$

Proof. We have from Lemma B.1 that
$$L(S') - L(S) = \sum_{j=1}^k \big[\phi([T_{S'}(p)]_j) - \phi'(p_{i'_j})[T_{S'}(p)]_j\big] - \sum_{j=1}^k \big[\phi([T_S(p)]_j) - \phi'(p_{i_j})[T_S(p)]_j\big]$$
$$= \sum_{j=1}^k \big[\phi([T_{S'}(p)]_j) - \phi([T_S(p)]_j)\big] + \phi'(p_{i_j})[T_S(p)]_j - \phi'(p_{i'_j})[T_{S'}(p)]_j.$$
This further equals
$$\sum_{j=1}^k \big[\phi([T_{S'}(p)]_j) - \phi([T_S(p)]_j) - \phi'([T_S(p)]_j)([T_{S'}(p)]_j - [T_S(p)]_j)\big] + \sum_{j=1}^k \Big([T_{S'}(p)]_j\big[\phi'([T_S(p)]_j) - \phi'(p_{i'_j})\big] - [T_S(p)]_j\big(\phi'([T_S(p)]_j) - \phi'(p_{i_j})\big)\Big)$$
$$= D_\phi(T_{S'}(p), T_S(p)) + \sum_{j=1}^k \big([T_{S'}(p)]_j - [T_S(p)]_j\big)\big(\phi'([T_S(p)]_j) - \phi'(p_{i_j})\big) + \sum_{j=1}^k [T_{S'}(p)]_j\big(\phi'(p_{i_j}) - \phi'(p_{i'_j})\big).$$

Now, returning to our proof, suppose $S = [k]$ and $S' = \{i'_1, \dots, i'_k\}$. We know from Lemma B.2 that
$$L(S') - L(S) = \underbrace{D_\phi(T_{S'}(p), T_S(p))}_{\mathrm{I}} + \underbrace{\sum_{j=1}^k \big([T_{S'}(p)]_j - [T_S(p)]_j\big)\big(\phi'([T_S(p)]_j) - \phi'(p_{i_j})\big)}_{\mathrm{II}} + \underbrace{\sum_{j=1}^k [T_{S'}(p)]_j\big(\phi'(p_{i_j}) - \phi'(p_{i'_j})\big)}_{\mathrm{III}}.$$
Now consider the term II. Using (10), we can simplify it further as
$$\mathrm{II} = \sum_{j=1}^k \big([T_{S'}(p)]_j - [T_S(p)]_j\big)\nu_S = \nu_S\Big(\sum_{j=1}^k [T_{S'}(p)]_j - \sum_{j=1}^k [T_S(p)]_j\Big) \overset{(a)}{=} 0,$$
where $(a)$ follows since $\sum_{j=1}^k [T_{S'}(p)]_j = \sum_{j=1}^k [T_S(p)]_j = 1$. Also, $\mathrm{I} \ge 0$ as $D_\phi$ is a divergence measure. Finally, to conclude the proof, we show that $\mathrm{III} \ge 0$. Since the entries of $p$ are sorted in non-increasing order and the indices in $S = [k]$ and $S'$ are sorted in ascending order, we have $i_j = j \le i'_j$ for all $j \in [k]$, hence $p_{i_j} \ge p_{i'_j}$ for all $j \in [k]$, and therefore
$$\mathrm{III} = \sum_{j=1}^k [T_{S'}(p)]_j\big(\phi'(p_{i_j}) - \phi'(p_{i'_j})\big) \ge 0.$$
Strict inequality holds as long as some $p_{i'_j}$ is not among the top-$k$ entries of $p$.

C  Proof of the dual greedy property in Theorem 3.3

To prove the greedy property under the two alternate conditions of Theorem 3.3, we provide two distinct proof techniques for the two cases (A1) and (A2): the first uses duality, and the second uses a saddle point argument. We now recall the definition of the Legendre dual of a convex function—in this case, of the generator function $\phi$—and its defining property that will help us. Below, $f([0,1])$ denotes the image of $[0,1]$ under $f$.

Lemma C.1 (Classical). For a valid $\phi$, let $\phi^*(x) = \sup_{p \ge 0}\{px - \phi(p)\}$ be the Legendre dual of $\phi$,
defined for all $x \in f([0,1])$. Then we have, for every $x \in f([0,1])$, the identity
$$\phi(f^{-1}(x)) = x\,f^{-1}(x) - \phi^*(x).$$
Moreover, $(\phi^*)' = f^{-1}$, and $\phi^*$ is strictly increasing.

Proof. Since the map $p \mapsto R(p) := px - \phi(p)$ is continuous, it achieves a maximum on $[0,1]$. From the first-order condition of the defining equation for $\phi^*$, if the maximum is achieved in $(0,1)$, we have
$$\frac{\partial R}{\partial p} = x - \phi'(p) = x - f(p) = 0,$$
so for the maximizer $p_{\max}$ we have $f(p_{\max}) = x$, i.e., $p_{\max} = f^{-1}(x)$. Now, since $f$ is increasing and $x \in f([0,1])$, we have $R'(0) = x - f(0) \ge 0$, with equality if $x = f(0)$; similarly, $R'(1) = x - f(1) \le 0$, with equality if $x = f(1)$. Hence the above characterization of the maximizer $p_{\max}$ also applies on the boundary of $[0,1]$. To conclude the proof of the identity, it suffices to observe that
$$\phi^*(x) = p_{\max}\,x - \phi(p_{\max}) = x\,f^{-1}(x) - \phi(f^{-1}(x)).$$
The expression for $(\phi^*)'$ follows by direct calculation.

C.1  Proof under Assumption (A1)

With the dual convex conjugate $\phi^*$ as per Lemma C.1, the divergence measure satisfies
$$d_\phi(p, q) = d_{\phi^*}(\phi'(q), \phi'(p)). \quad (12)$$
Let $L^*$ denote the loss for the dual problem (the divergence measure with the arguments swapped), and let $T^*_Q$ be the dual renormalization map from Lemma A.1 applied to $p_Q$, i.e.,
$$L^*(Q) = \min_{\hat p \in \Delta_k} D_\phi\big((p_Q, p_{Q^c}), (\hat p, 0_{V-k})\big) = \min_{\hat p \in \Delta_k} \sum_{j=1}^k d_\phi(p_{i_j}, \hat p_j) + S^*_{Q^c} = \sum_{j=1}^k d_\phi(p_{i_j}, [T^*_Q(p)]_j) + S^*_{Q^c},$$
where $S^*_{Q^c} = \sum_{j \notin Q} d_\phi(p_j, 0)$.

C.1.1  Decomposition of the loss difference

Using the form of the loss difference in Lemma B.2 and (12), we can compute the loss difference for the dual problem as follows:
$$L^*(S') - L^*(S) = \sum_{j=1}^{V} d_\phi\big(p_{i'_j}, [T^*_{S'}(p)]_j\big) - \sum_{j=1}^{V} d_\phi\big(p_{i_j}, [T^*_S(p)]_j\big)$$
$$\overset{\text{(by (12))}}{=} \sum_{j=1}^{V} d_{\phi^*}\big(\phi'([T^*_{S'}(p)]_j), \phi'(p_{i'_j})\big) - \sum_{j=1}^{V} d_{\phi^*}\big(\phi'([T^*_S(p)]_j), \phi'(p_{i_j})\big),$$
where the index sequences enumerate $[V]$ with the selected subsets first and the projected vectors are zero-padded beyond their first $k$ coordinates. Indeed, it suffices to change the potential $\phi$ to $\phi^*$ in Lemma B.2, and to change all the arguments $p_{i_j}, p_{i'_j}, T^*_S, T^*_{S'}$ to $\phi'(p_{i_j}), \phi'(p_{i'_j}), \phi'(T^*_S), \phi'(T^*_{S'})$, respectively.
Thus, under the same setup of the two subsets $S = [k]$ and $S'$, and denoting $\phi' = f$, we obtain
$$L^*(S') - L^*(S) = D_{\phi^*}\big(f(T^*_{S'}(p)), f(T^*_S(p))\big) + \sum_{j=1}^k \big(f([T^*_{S'}(p)]_j) - f([T^*_S(p)]_j)\big)\Big((\phi^*)'\big(f([T^*_S(p)]_j)\big) - (\phi^*)'\big(f(p_{i_j})\big)\Big) + \sum_{j=1}^k f([T^*_{S'}(p)]_j)\Big((\phi^*)'\big(f(p_{i_j})\big) - (\phi^*)'\big(f(p_{i'_j})\big)\Big).$$
Since $(\phi^*)' = f^{-1}$, this further equals
$$\underbrace{D_{\phi^*}\big(f(T^*_{S'}(p)), f(T^*_S(p))\big)}_{\mathrm{I}'} + \underbrace{\sum_{j=1}^k \big(f([T^*_{S'}(p)]_j) - f([T^*_S(p)]_j)\big)\big([T^*_S(p)]_j - p_{i_j}\big)}_{\mathrm{II}'} + \underbrace{\sum_{j=1}^k f([T^*_{S'}(p)]_j)\big(p_{i_j} - p_{i'_j}\big)}_{\mathrm{III}'}.$$

C.1.2  Analysis of terms based on the dual solution

As in the primal case, $\mathrm{I}' \ge 0$ since $D_{\phi^*}$ is a divergence, and $\mathrm{III}' \ge 0$ since $\phi' = f \ge 0$ (as $f(0) = 0$ and $f$ is increasing) and $p_{i_j} \ge p_{i'_j}$. Moreover, as $f$ is strictly increasing, strict inequality holds if any of the $p_{i'_j}$ is not among the top-$k$ entries. To analyze $\mathrm{II}'$, we have
$$\mathrm{II}' = \sum_{j=1}^k \big(f([T^*_{S'}(p)]_j) - f([T^*_S(p)]_j)\big)\big([T^*_S(p)]_j - p_{i_j}\big) \overset{\text{(Lemma A.1)}}{=} \sum_{j=1}^k \big(f([T^*_{S'}(p)]_j) - f([T^*_S(p)]_j)\big)\frac{\nu^*_S}{f'([T^*_S(p)]_j)}.$$
Since $f$ is convex,
$$f([T^*_{S'}(p)]_j) - f([T^*_S(p)]_j) \ge f'([T^*_S(p)]_j)\big([T^*_{S'}(p)]_j - [T^*_S(p)]_j\big)$$
$$\overset{(a)}{\implies} \frac{1}{f'([T^*_S(p)]_j)}\big(f([T^*_{S'}(p)]_j) - f([T^*_S(p)]_j)\big) \ge [T^*_{S'}(p)]_j - [T^*_S(p)]_j$$
$$\overset{(b)}{\implies} \sum_{j=1}^k \frac{1}{f'([T^*_S(p)]_j)}\big(f([T^*_{S'}(p)]_j) - f([T^*_S(p)]_j)\big) \ge \sum_{j=1}^k \big([T^*_{S'}(p)]_j - [T^*_S(p)]_j\big) = 0.$$
Here $(a)$ follows as $f' > 0$ since $f$ is strictly increasing, and $(b)$ follows as $\sum_{j=1}^k [T^*_{S'}(p)]_j = \sum_{j=1}^k [T^*_S(p)]_j = 1$. Since $\nu^*_S > 0$ (Lemma A.1), this implies $\mathrm{II}' \ge 0$, finishing the proof.

C.2  Proof under Assumption (A2)

C.2.1  Extra notation

Since $\frac{\partial}{\partial y} d_\phi(x, y) = \phi''(y)(y - x)$ for $y > 0$, we define, for $(x, y, \nu) \in D := [0,1] \times (0,1] \times (0,\infty)$,
$$\Psi(x, y, \nu) := \phi''(y)(y - x) - \nu. \quad (13)$$
Define the mapping derived from solving $\Psi(x, y, \nu) = 0$ over $y$ by $\xi(x, \nu)$
: $[0,1] \times (0,\infty) \to (0,1]$, such that $[T^*(p)]_i = \xi(p_i, \nu)$ for all $i$, at the optimal $\nu$. It follows from the proof of Lemma A.1 that the solution $\xi$ is well defined. Define two auxiliary functions $\psi, h$ that will be used in the computation of the Bregman costs below, such that for all $(x, y, \nu) \in D$:
$$\psi(x, y) := \phi(y) - \phi'(y)(y - x), \quad \text{and} \quad h(x, \nu) := \psi(x, \xi(x, \nu)).$$

C.2.2  Properties of the auxiliary functions

Lemma C.2 (Derivatives $\frac{\partial \xi}{\partial x}, \frac{\partial \xi}{\partial \nu}$). Define $v: [0,1] \times (0,1] \to [0,\infty)$ by $v(x, y) = \phi''(y) + \phi'''(y)(y - x)$. We have, for all $(x, \nu) \in [0,1] \times (0,\infty)$:
$$\frac{\partial \xi}{\partial \nu}(x, \nu) = \frac{1}{v(x, \xi(x, \nu))}, \quad \text{and} \quad \frac{\partial \xi}{\partial x}(x, \nu) = \frac{\phi''(\xi(x, \nu))}{v(x, \xi(x, \nu))}. \quad (14)$$

Proof. Both identities follow by applying implicit differentiation to the function $\Psi$. Fix $x \in [0,1]$ and consider $F(y, \nu) = \Psi(x, y, \nu) = \phi''(y)(y - x) - \nu$ for $(y, \nu) \in (0,1] \times (0,\infty)$. Because $\phi$ is $C^3$ on $(0,1]$, $F$ is continuously differentiable, and
$$\frac{\partial F}{\partial y}(y, \nu) = \phi'''(y)(y - x) + \phi''(y) = v(x, y) > 0$$
by Assumption 3.2. Hence, by the implicit function theorem, the map $\nu \mapsto \xi(x, \nu)$ is $C^1$ with
$$\frac{\partial \xi}{\partial \nu}(x, \nu) = -\frac{\partial F / \partial \nu}{\partial F / \partial y} = \frac{1}{v(x, \xi(x, \nu))}.$$
For the second identity, fix $\nu > 0$ and define
$$G(x, y) := \Psi(x, y, \nu) = \phi''(y)(y - x) - \nu, \quad (x, y) \in [0,1] \times (0,1].$$
For each $x_0 \in (0,1]$, let $y_0 := \xi(x_0, \nu) \in (0,1]$ satisfy $G(x_0, y_0) = 0$. We have $\frac{\partial G}{\partial y}(x, y) = v(x, y)$, and Assumption 3.2 gives $v(x, y) > 0$ for all $0 < y \le 1$ and $0 \le x \le y$; hence $\partial G / \partial y\,(x_0, y_0) \neq 0$. Since $G$ is continuously differentiable and $\partial G / \partial y \neq 0$ at $(x_0, y_0)$, the implicit function theorem guarantees a $C^1$ map $x \mapsto \xi(x, \nu)$ in a neighborhood of $x_0$ with $G(x, \xi(x, \nu)) = 0$. Differentiating $G(x, \xi(x, \nu)) \equiv 0$ with respect to $x$ and using $\partial G / \partial x = -\phi''(y)$ gives
$$0 = \frac{\partial G}{\partial x} + \frac{\partial G}{\partial y}\frac{\partial \xi}{\partial x} = -\phi''\big(\xi(x, \nu)\big) + v\big(x, \xi(x, \nu)\big)\frac{\partial \xi}{\partial x},$$
so
$$\frac{\partial \xi}{\partial x}(x, \nu) = \frac{\phi''(\xi(x, \nu))}{v(x, \xi(x, \nu))}.$$
When $x = 0$, the same argument applies, because $\frac{\partial G}{\partial y}(0, y) = v(0, y) > 0$ and $\partial G / \partial x|_{(0,y)} = -\phi''(y)$ is finite (the solution $y = \xi(0, \nu)$ is strictly positive, so $\phi''(y)$ is finite even if $\phi''(y) \to \infty$ as $y \downarrow 0$). Thus $\partial \xi / \partial x|_{(0,\nu)}$ exists and the same formula holds. This completes the proof.
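The implicit-derivative formulas (14) can be checked by finite differences. A short Python sketch (ours, not from the paper), using the illustrative potential $\phi(y) = y^3/6$, for which $\phi''(y) = y$, $\phi'''(y) = 1$, so $v(x, y) = 2y - x$ and $\xi(x, \nu)$ solves $y(y - x) = \nu$ in closed form:

```python
from math import sqrt

# Finite-difference check of (14): d(xi)/d(nu) = 1 / v(x, xi) and
# d(xi)/d(x) = phi''(xi) / v(x, xi), for the illustrative potential
# phi(y) = y**3/6 (phi'' = y, phi''' = 1, so v(x, y) = 2*y - x).
# For this phi, xi(x, nu) is the positive root of y*(y - x) = nu.

def xi(x, nu):
    return 0.5 * (x + sqrt(x * x + 4.0 * nu))

def v(x, y):
    return 2.0 * y - x

x, nu, h = 0.2, 0.05, 1e-6
y = xi(x, nu)

dxi_dnu_fd = (xi(x, nu + h) - xi(x, nu - h)) / (2 * h)  # central difference in nu
dxi_dx_fd = (xi(x + h, nu) - xi(x - h, nu)) / (2 * h)   # central difference in x

assert abs(dxi_dnu_fd - 1.0 / v(x, y)) < 1e-6
assert abs(dxi_dx_fd - y / v(x, y)) < 1e-6              # phi''(xi) = xi here
```

The closed form makes the check exact up to floating point: $\partial\xi/\partial\nu = 1/\sqrt{x^2 + 4\nu} = 1/(2\xi - x)$, matching Lemma C.2.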
Lemma C.3 (Derivative $\frac{\partial h}{\partial \nu}$). Under the condition from Assumption (A2) that $x \mapsto u(x) := x\,\phi''(x)/\phi'(x)$ is non-decreasing, we have $\frac{\partial h}{\partial \nu}(x, \nu) \le 0$ for all $x \in [0,1]$ and $\nu > 0$.

Proof. For the derivative with respect to $\nu$, observe first that
$$\frac{\partial \psi}{\partial y}(x, y) = \phi'(y) - \big(\phi''(y)\,y + \phi'(y)\big) + x\,\phi''(y) = \phi''(y)(x - y).$$
Hence, by the chain rule,
$$\frac{\partial}{\partial \nu}\psi\big(x, \xi(x, \nu)\big) = \frac{\partial \psi}{\partial y}\big(x, \xi(x, \nu)\big)\frac{\partial \xi}{\partial \nu}(x, \nu) = \phi''\big(\xi(x, \nu)\big)\big[x - \xi(x, \nu)\big]\frac{\partial \xi}{\partial \nu}(x, \nu).$$
By the defining equation $\phi''(\xi)(\xi - x) = \nu$, this simplifies to
$$\frac{\partial h}{\partial \nu}(x, \nu) = \frac{\partial}{\partial \nu}\psi\big(x, \xi(x, \nu)\big) = -\nu\,\frac{\partial \xi}{\partial \nu}(x, \nu) = -\frac{\nu}{v\big(x, \xi(x, \nu)\big)} \le 0,$$
where the last equality uses $\frac{\partial \xi}{\partial \nu}(x, \nu) = \frac{1}{v(x, \xi(x, \nu))}$ and $\nu > 0$.

Lemma C.4 (Derivative $\frac{\partial h}{\partial x}$). Assumption (A2) implies $\frac{\partial h}{\partial x}(x, \nu) \ge 0$ for all $x \in [0,1]$ and $\nu > 0$.

Proof. First recall that
$$\psi(x, y) = \phi(y) - \phi'(y)(y - x) \implies \frac{\partial \psi}{\partial x}(x, y) = \phi'(y), \quad \frac{\partial \psi}{\partial y}(x, y) = \phi''(y)(x - y).$$
Hence, with $y = \xi(x, \nu)$,
$$\frac{\partial h}{\partial x}(x, \nu) = \frac{\partial \psi}{\partial x}\big(x, \xi\big) + \frac{\partial \psi}{\partial y}\big(x, \xi\big)\frac{\partial \xi}{\partial x}(x, \nu) = \phi'(\xi) + \phi''(\xi)\,[x - \xi]\,\frac{\partial \xi}{\partial x}(x, \nu).$$
Because $\xi = \xi(x, \nu)$ satisfies $\phi''(\xi)(\xi - x) = \nu$, we have
$$\frac{\partial h}{\partial x}(x, \nu) = \phi'(\xi) - \nu\,\frac{\partial \xi}{\partial x}(x, \nu) = \phi'(\xi) - \frac{\nu\,\phi''(\xi)}{v(x, \xi)}.$$
Putting this over the common denominator $v(x, \xi) > 0$ and substituting $\nu = \phi''(\xi)(\xi - x)$, the numerator is
$$N(x, \nu) = \phi'(\xi)\,\phi''(\xi) + (\xi - x)\big(\phi'(\xi)\,\phi'''(\xi) - \phi''(\xi)^2\big) = \phi'(\xi)\,\phi''(\xi) + (\xi - x)\,A(\xi),$$
where $A(t) := \phi'(t)\,\phi'''(t) - \phi''(t)^2$, so that $\partial h / \partial x = N(x, \nu)/v(x, \xi)$.

Case 1: $A(\xi) \ge 0$. Because $\xi \ge x$ by Lemma A.1, the second term is non-negative; with $\phi', \phi'' \ge 0$, the first term is also non-negative,
so $N \ge 0$.

Case 2: $A(\xi) < 0$. Since $\xi \ge x$, we have
$$N(x, \nu) \ge \phi'(\xi)\,\phi''(\xi) + \xi\,A(\xi) = \phi'(\xi)^2\,u'(\xi), \quad \text{where } u(t) := t\,\phi''(t)/\phi'(t).$$
Indeed,
$$u'(t)\,\phi'(t)^2 = \phi'(t)\big(\phi''(t) + t\,\phi'''(t)\big) - t\,\phi''(t)^2 = \phi'(t)\,\phi''(t) + t\big(\phi'(t)\,\phi'''(t) - \phi''(t)^2\big).$$
By Assumption (A2), $u$ is non-decreasing, so $u'(\xi) \ge 0$; hence $N(x, \nu) \ge 0$ in this case as well. Because $v(x, \xi) > 0$ and $N(x, \nu) \ge 0$ in both cases, we conclude $\partial h(x, \nu)/\partial x \ge 0$ for all $x \in [0,1]$ and $\nu > 0$, proving the lemma.

C.2.3  Proving the dual greedy property

Denote an arbitrary subset of the indices by $S \subseteq [J]$, and let $\nu_S$ be the corresponding Lagrange multiplier. Below, for a vector $x \in \mathbb{R}^V$ and a set $S \subseteq [V]$, we denote by $x[S]$ the sub-vector of $x$ restricted to the coordinates in $S$. Since $\phi'(0) = 0$ by the assumptions of Theorem 3.3, denoting $\Gamma = \sum_{m=1}^J d_\phi(p_m, 0) + \phi(0)\,|S|$, we can write for every $S$:
$$D_\phi(p, \hat p[S]) = \sum_{m \in S}\big[\phi(p_m) - \phi([T(p)]_m) - \phi'([T(p)]_m)\cdot(p_m - [T(p)]_m)\big] + \sum_{m \in [J]\setminus S} d_\phi(p_m, 0)$$
$$= \sum_{m \in S} -\big(\phi([T(p)]_m) - \phi'([T(p)]_m)\cdot([T(p)]_m - p_m)\big) + \Gamma = \sum_{m \in S} -\psi(p_m, [T(p)]_m) + \Gamma = -\sum_{m \in S} h(p_m, \nu_S) + \Gamma.$$

Now let us prove that the greedy property holds. Suppose $S$ is optimal among all subsets of indices of size $k$ but does not consist of the top-$k$ probability tokens. Then there exist $i \neq j$ such that $i \in S$, $j \notin S$, and $p_j > p_i$. Denote $S' = S \setminus \{i\} \cup \{j\}$, and let $\nu_S, \nu_{S'}$ denote the choices of $\nu$ that make the respective projected probabilities sum to unity. Since $S'$ differs from $S$ only in that it includes the larger $p_j > p_i$, we can conclude that $\nu_S > \nu_{S'}$. Then, using the above formula for the value of the objective on an arbitrary subset, we have
$$D_\phi(p, \hat p[S]) - D_\phi(p, \hat p[S']) = h(p_j, \nu_{S'}) - h(p_i, \nu_S) + \sum_{m \in S \setminus \{i\}}\big(h(p_m, \nu_{S'}) - h(p_m, \nu_S)\big).$$
Since $h$ decreases in $\nu$ by Lemma C.3 and $\nu_{S'} < \nu_S$, the sum is nonnegative. As for the remaining term, we have
$$h(p_j, \nu_{S'}) \ge h(p_j, \nu_S) \ge h(p_i, \nu_S),$$
where the first inequality follows from $\nu_{S'} < \nu_S$ and Lemma C.3, and the second from $p_j > p_i$ and Lemma C.4. This concludes the proof of the dual greedy property under Assumption (A2).
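As with the primal case, the dual greedy property can be illustrated numerically by brute force. A Python sketch (our own illustration) with the Euclidean generator $\phi(t) = t^2/2$, chosen only because it makes the dual renormalization closed-form: $f'(t) = 1$, so the fixed point is the additive shift $[T^*(x)]_i = x_i + \nu$ with $\nu = (1 - \sum_i x_i)/k$:

```python
from itertools import combinations

# Numerical illustration of the dual greedy property, using the Euclidean
# generator phi(t) = t**2/2 (f'(t) = 1, so the dual renormalization is the
# additive shift T_i = x_i + nu with nu = (1 - sum(x))/k). This generator is
# our choice for illustration only; it makes the dual projection explicit.

def dual_loss(p, Q):
    k = len(Q)
    s = sum(p[i] for i in Q)
    nu = (1.0 - s) / k                                       # shift so kept coords sum to 1
    kept = sum((p[i] - (p[i] + nu)) ** 2 / 2 for i in Q)     # = k * nu**2 / 2
    dropped = sum(p[i] ** 2 / 2 for i in range(len(p)) if i not in Q)
    return kept + dropped

p = [0.4, 0.25, 0.15, 0.12, 0.08]
k = 3
losses = {Q: dual_loss(p, set(Q)) for Q in combinations(range(len(p)), k)}
best = min(losses, key=losses.get)
# best == (0, 1, 2): swapping any kept index for a smaller-probability one
# increases the dual Bregman loss
```

Here the loss is $(1 - s_Q)^2/(2k) + \sum_{i \notin Q} p_i^2/2$ with $s_Q = \sum_{i \in Q} p_i$, and the top-$k$ set simultaneously maximizes $s_Q$ and the kept squared mass, so it minimizes both terms.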
D  Proof of discrete convexity for primal Bregman projection

We follow the notation introduced at the beginning of the proof in Section B. To show that the cost function is discretely convex in $k$ for the primal, it suffices to show that
$$L([k]) := \min_{\hat p \in \Delta_k} D_\phi\big((\hat p, 0_{V-k}), p\big) = D_\phi\big((T_{[k]}(p), 0_{V-k}), p\big)$$
is discretely convex in $k$; indeed, the difference $\mathrm{cost}(k) - L([k]) = \lambda k$ is linear in $k$. To simplify notation, denote $L([k])$ by $L(k)$ and $T_{[k]}$ by $T_k$. From Lemma B.1 we know that, with $\tilde S_V := S_{[V]} - k\,\phi(0)$,
$$L(k) = \sum_{j=1}^k \big\{\phi([T_k(p)]_j) - \phi'(p_j)[T_k(p)]_j\big\} + \tilde S_V.$$
Using (10), we know that $f([T_k(p)]_j) = f(p_j) + \nu_{[k]}$ for all $j \in [k]$; again, we simply denote $\nu_{[k]}$ by $\nu_k$. For $j \in [k]$, setting $x = f(p_j) + \nu_k$ in Lemma C.1, we have:
$$\phi([T_k(p)]_j) - \phi'(p_j)[T_k(p)]_j = \phi\big(f^{-1}(f(p_j) + \nu_k)\big) - f(p_j)\,f^{-1}(f(p_j) + \nu_k)$$
$$= \phi(f^{-1}(x)) - f(p_j)\,f^{-1}(x) = x\,f^{-1}(x) - \phi^*(x) - f(p_j)\,f^{-1}(x)$$
$$= \big(x - f(p_j)\big)\,f^{-1}(x) - \phi^*(x) = \nu_k\,[T_k(p)]_j - \phi^*(f(p_j) + \nu_k).$$
But now, using that the nonzero entries of $T_k(p)$ must sum to unity, we find the following simplification:
$$L(k) = \sum_{j=1}^k \big\{\nu_k\,[T_k(p)]_j - \phi^*(f(p_j) + \nu_k)\big\} + \tilde S_V = \nu_k \sum_{j=1}^k [T_k(p)]_j - \sum_{j=1}^k \phi^*(f(p_j) + \nu_k) + \tilde S_V = \nu_k - \sum_{j=1}^k \phi^*(f(p_j) + \nu_k) + \tilde S_V. \quad (15)$$
Now define the auxiliary function $W$, for all $k, \nu$ for which the expression below is well defined, by
$$W(k, \nu) := \nu - \sum_{j=1}^k \phi^*(f(p_j) + \nu), \quad (16)$$
where $p$ is implicitly kept fixed. From the above calculation, after canceling out terms (the second difference of $-k\,\phi(0)$ vanishes), we obtain
$$L(k+1) - 2L(k) + L(k-1) = W(k+1, \nu_{k+1}) - 2W(k, \nu_k) + W(k-1, \nu_{k-1}).$$
To prove that this is nonnegative, we leverage that $W(k, \cdot)$ is
strictly concave in $\nu$ for each $k$, which follows since the Legendre dual $\phi^*$ is strictly convex, as $\phi$ is. Observe that
$$\frac{\partial}{\partial \nu} W(k, \nu) = 1 - \sum_{j=1}^k (\phi^*)'(f(p_j) + \nu) = 1 - \sum_{j=1}^k f^{-1}(f(p_j) + \nu). \quad (17)$$
Thus,
$$\frac{\partial}{\partial \nu} W(k, \nu)\Big|_{\nu = \nu_k} = 1 - \sum_{j=1}^k f^{-1}(f(p_j) + \nu_k) = 1 - \sum_{j=1}^k [T_k(p)]_j = 0.$$
As $W(k, \cdot)$ is strictly concave in $\nu$, it is maximized at $\nu_k$. Thus we have: (1) $W(k+1, \nu_{k+1}) \ge W(k+1, \nu_k)$, and (2) $W(k-1, \nu_{k-1}) \ge W(k-1, \nu_k)$. With these in hand, we have
$$L(k+1) - 2L(k) + L(k-1) = W(k+1, \nu_{k+1}) - 2W(k, \nu_k) + W(k-1, \nu_{k-1}) \ge \big[W(k+1, \nu_k) - W(k, \nu_k)\big] - \big[W(k, \nu_k) - W(k-1, \nu_k)\big]. \quad (18)$$
Now, by the definition of $W$, the last display equals
$$-\phi^*(f(p_{k+1}) + \nu_k) + \phi^*(f(p_k) + \nu_k) \ge 0, \quad (19)$$
the inequality holding since $p_k \ge p_{k+1}$ and the mapping $p \mapsto \phi^*(f(p) + \nu_k)$ is increasing in $p$ (as $\phi^*$ and $f$ are both increasing). This concludes the proof.

E  Proof of discrete convexity for dual Bregman projection

We denote $\theta_x(y) = \phi''(y)(y - x)$. As observed before, we have, for all admissible $x, y$, that $\frac{\partial}{\partial y} d_\phi(x, y) = \theta_x(y)$, and the convexity condition on the second argument of $d_\phi$ from Assumption 3.2 reads
$$\frac{\partial}{\partial y}\theta_x(y) \ge 0 \iff \phi''(y) + \phi'''(y)(y - x) \ge 0 \quad \text{for all } y \ge x \ge 0.$$
The dual projection, for any $1 \le i \le j \le V$, is given (for the optimal Lagrange multiplier $\nu_j$) by
$$\theta_{p_i}\big([T^*_j(p)]_i\big) = \nu_j \iff \phi''\big([T^*_j(p)]_i\big)\big([T^*_j(p)]_i - p_i\big) = \nu_j.$$
Denote the dual Bregman objective, as a function of the selected sparsity $k$, by
$$\mathrm{cost}^*(k) = D_\phi\big(p, (T^*_k(p), 0_{V-k})\big) + \lambda k.$$
We now demonstrate that $\mathrm{cost}^*(k)$ is discretely convex in $k$, by showing directly that the second-order differences of this function are nonnegative at every $k \in \{2, \dots, V-1\}$ (the $\lambda k$ term has zero second difference). Specifically, we can write:
$$\Delta^{*,2}(k) := \mathrm{cost}^*(k+1) - 2\,\mathrm{cost}^*(k) + \mathrm{cost}^*(k-1) = D_\phi\big(p, (T^*_{k+1}(p), 0_{V-k-1})\big) - 2\,D_\phi\big(p, (T^*_k(p), 0_{V-k})\big) + D_\phi\big(p, (T^*_{k-1}(p), 0_{V-k+1})\big).$$
We now decompose this quantity into three terms corresponding to three ranges of the index $i \in [V]$, namely $i \in [k-1]$, $i \in \{k, k+1\}$, and $i \in \{k+2, \dots, V\}$.
We obtain:
$$\Delta^{*,2}(k) = \sum_{i=1}^{k-1}\Big\{ d_\phi\big(p_i, [T^*_{k+1}(p)]_i\big) - d_\phi\big(p_i, [T^*_k(p)]_i\big) + d_\phi\big(p_i, [T^*_{k-1}(p)]_i\big) - d_\phi\big(p_i, [T^*_k(p)]_i\big) \Big\}$$
$$+ \Big\{\big(\phi(p_k) - \phi(0) - \phi'(0)\,p_k\big) - 2\big(\phi(p_k) - \phi([T^*_k(p)]_k) - \phi'([T^*_k(p)]_k)(p_k - [T^*_k(p)]_k)\big) + \big(\phi(p_k) - \phi([T^*_{k+1}(p)]_k) - \phi'([T^*_{k+1}(p)]_k)(p_k - [T^*_{k+1}(p)]_k)\big)$$
$$+ \big(\phi(p_{k+1}) - \phi(0) - \phi'(0)\,p_{k+1}\big) - 2\big(\phi(p_{k+1}) - \phi(0) - \phi'(0)\,p_{k+1}\big) + \big(\phi(p_{k+1}) - \phi([T^*_{k+1}(p)]_{k+1}) - \phi'([T^*_{k+1}(p)]_{k+1})(p_{k+1} - [T^*_{k+1}(p)]_{k+1})\big)\Big\}$$
$$+ \sum_{i=k+2}^{V}\big\{ d_\phi(p_i, 0) - 2\,d_\phi(p_i, 0) + d_\phi(p_i, 0) \big\}.$$
The last sum is identically zero, so we engage with the other two ranges of indices.

Range 1: $i \in [k-1]$. Recall that for any convex function $\psi$ and any two points $x, y$ in its domain, $\psi(x) - \psi(y) \ge \psi'(y)(x - y)$. Now, notice that for each $i$ in Range 1, each of the two bracketed differences can be bounded via the convexity of $d_\phi(x, \cdot)$ in its second argument as
$$d_\phi\big(p_i, [T^*_{k+1}(p)]_i\big) - d_\phi\big(p_i, [T^*_k(p)]_i\big) \ge \frac{\partial}{\partial y} d_\phi(p_i, y)\Big|_{y = [T^*_k(p)]_i}\cdot\big([T^*_{k+1}(p)]_i - [T^*_k(p)]_i\big) = \theta_{p_i}\big([T^*_k(p)]_i\big)\cdot\big([T^*_{k+1}(p)]_i - [T^*_k(p)]_i\big) = \nu_k\cdot\big([T^*_{k+1}(p)]_i - [T^*_k(p)]_i\big)$$
and
$$d_\phi\big(p_i, [T^*_{k-1}(p)]_i\big) - d_\phi\big(p_i, [T^*_k(p)]_i\big) \ge \nu_k\cdot\big([T^*_{k-1}(p)]_i - [T^*_k(p)]_i\big).$$
As a result, using that by definition the first $j$ coordinates of the projection $T^*_j$ sum to unity for each $j \in \{k-1, k, k+1\}$, we may simplify the Range 1 sum as follows:
$$\text{Range 1 Sum} \ge \sum_{i=1}^{k-1}\nu_k\Big(\big([T^*_{k+1}(p)]_i - [T^*_k(p)]_i\big) + \big([T^*_{k-1}(p)]_i - [T^*_k(p)]_i\big)\Big)$$
$$= \nu_k\Big(\sum_{i=1}^{k-1}[T^*_{k+1}(p)]_i - 2\sum_{i=1}^{k-1}[T^*_k(p)]_i + \sum_{i=1}^{k-1}[T^*_{k-1}(p)]_i\Big) = \nu_k\Big(\big(1 - [T^*_{k+1}(p)]_k - [T^*_{k+1}(p)]_{k+1}\big) - 2\big(1 - [T^*_k(p)]_k\big) + 1\Big)$$
$$= \nu_k\Big(2[T^*_k(p)]_k - [T^*_{k+1}(p)]_k - [T^*_{k+1}(p)]_{k+1}\Big).$$

Range 2: $i \in \{k, k+1\}$. We first note that the three types of terms $\phi(0)$, $\phi(p_k)$, and $\phi(p_{k+1})$ cancel out, and terms involving $\phi'(0)$ vanish by assumption. The remaining part of the Range 2 sum can then be written as
$$\text{Range 2 Sum} = \Big\{-2\big(-\phi([T^*_k(p)]_k) - \phi'([T^*_k(p)]_k)(p_k - [T^*_k(p)]_k)\big) + \big(-\phi([T^*_{k+1}(p)]_k) - \phi'([T^*_{k+1}(p)]_k)(p_k - [T^*_{k+1}(p)]_k)\big)\Big\} + \Big\{-\phi([T^*_{k+1}(p)]_{k+1}) - \phi'([T^*_{k+1}(p)]_{k+1})(p_{k+1} - [T^*_{k+1}(p)]_{k+1})\Big\}.$$
Now we can bound $-\phi'([T^*_{k+1}(p)]_{k+1})\cdot p_{k+1} \ge -\phi'([T^*_{k+1}(p)]_{k+1})\cdot p_k$, using that $p_k \ge p_{k+1}$ and that $\phi' \ge 0$ (as $\phi'(0) = 0$ and $\phi'$ is increasing). We find the lower bound
$$\text{Range 2 Sum} \ge -2\Big\{-\phi([T^*_k(p)]_k) - \phi'([T^*_k(p)]_k)(p_k - [T^*_k(p)]_k)\Big\} + \Big\{-\phi([T^*_{k+1}(p)]_k) - \phi'([T^*_{k+1}(p)]_k)(p_k - [T^*_{k+1}(p)]_k)\Big\} + \Big\{-\phi([T^*_{k+1}(p)]_{k+1}) - \phi'([T^*_{k+1}(p)]_{k+1})(p_k - [T^*_{k+1}(p)]_{k+1})\Big\}.$$
By adding and subtracting the term $\phi(p_k)$, we have the following equivalent bound:
$$\text{Range 2 Sum} \ge -2\Big\{\phi(p_k) - \phi([T^*_k(p)]_k) - \phi'([T^*_k(p)]_k)(p_k - [T^*_k(p)]_k)\Big\} + \Big\{\phi(p_k) - \phi([T^*_{k+1}(p)]_k) - \phi'([T^*_{k+1}(p)]_k)(p_k - [T^*_{k+1}(p)]_k)\Big\} + \Big\{\phi(p_k) - \phi([T^*_{k+1}(p)]_{k+1}) - \phi'([T^*_{k+1}(p)]_{k+1})(p_k - [T^*_{k+1}(p)]_{k+1})\Big\}$$
$$= -2\,d_\phi\big(p_k, [T^*_k(p)]_k\big) + d_\phi\big(p_k, [T^*_{k+1}(p)]_k\big) + d_\phi\big(p_k, [T^*_{k+1}(p)]_{k+1}\big).$$

Returning to the main bound. We can now merge the cases, resulting in the following lower bound on the second difference of the cost function:
$$\Delta^{*,2}(k) \ge \nu_k\Big(2[T^*_k(p)]_k - [T^*_{k+1}(p)]_k - [T^*_{k+1}(p)]_{k+1}\Big) - 2\,d_\phi\big(p_k, [T^*_k(p)]_k\big) + d_\phi\big(p_k, [T^*_{k+1}(p)]_k\big) + d_\phi\big(p_k, [T^*_{k+1}(p)]_{k+1}\big).$$
Now, define the key auxiliary function $\psi_k: [0,1] \to \mathbb{R}$ such that for all $x \in [0,1]$:
$$\psi_k(x) = \nu_k\cdot x - d_\phi(p_k, x).$$
This lets us rewrite our lower bound equivalently as
$$\Delta^{*,2}(k) \ge 2\,\psi_k\big([T^*_k(p)]_k\big) - \psi_k\big([T^*_{k+1}(p)]_k\big) - \psi_k\big([T^*_{k+1}(p)]_{k+1}\big). \quad (20)$$
We now establish a monotonicity property of $\psi_k$.

Lemma E.1. For every $k \in [V]$, the function $\psi_k(x)$ is increasing on $x \in [0, [T^*_k(p)]_k]$.

Proof. We consider the derivative of the function $\psi_k$:
$$\frac{\partial}{\partial x}\psi_k(x) = \nu_k - \frac{\partial}{\partial x} d_\phi(p_k, x) = \nu_k - \theta_{p_k}(x) = \theta_{p_k}\big([T^*_k(p)]_k\big) - \theta_{p_k}(x),$$
where we have used the connection between $\theta_x(y)$ and $\nu_k$ (see Lemma A.1).
Now, recalling that by assumption $\frac{\partial}{\partial y}\theta_x(y) \ge 0$ for all $y \ge x \ge 0$, and using that $[T^*_k(p)]_k \ge p_k$ by the properties of the dual projection method (see Lemma A.1), we have
$$\frac{\partial}{\partial x}\psi_k(x) = \theta_{p_k}\big([T^*_k(p)]_k\big) - \theta_{p_k}(x) \ge 0, \quad \text{so long as } 0 \le x \le [T^*_k(p)]_k.$$

Continuing, by the properties of the dual projection, we have
$$[T^*_k(p)]_k \ge [T^*_{k+1}(p)]_k \ge [T^*_{k+1}(p)]_{k+1}.$$
In view of Lemma E.1, (20) implies that
$$\Delta^{*,2}(k) \ge \Big(\psi_k\big([T^*_k(p)]_k\big) - \psi_k\big([T^*_{k+1}(p)]_k\big)\Big) + \Big(\psi_k\big([T^*_k(p)]_k\big) - \psi_k\big([T^*_{k+1}(p)]_{k+1}\big)\Big) \ge 0 + 0 = 0.$$
This concludes the proof of dual discrete convexity of the Bregman cost function.

F  Algorithmic details

F.1  Computing the dual renormalization map

Recall that when $\phi$ is dual valid, the renormalization map $T^*_\phi$ is uniquely defined for $x \in \Delta_{\mathrm{sub},k}$ with $\sum_i x_i > 0$ by the fixed-point equation (see Lemma A.1)
$$[T^*_\phi(x)]_i = x_i + \nu^*/f'([T^*_\phi(x)]_i) \quad \text{for all } i \in [k], \quad \text{where } \nu^* \in \mathbb{R} \text{ is chosen so that } \sum_{i=1}^k [T^*_\phi(x)]_i = 1.$$
To compute $T^*_\phi$, recall from Section C.2.1 the function $\Psi$ from (13), with $\Psi(x, y, \nu) := \phi''(y)(y - x) - \nu$ for all $x, y, \nu$. Then, for a fixed $\nu$, $[T(x)]_i$ satisfying the equation $[T(x)]_i = x_i + \nu/f'([T(x)]_i)$ is equivalent to solving $\Psi(x_i, y_i, \nu) = 0$ for $y_i = [T(x)]_i$. The monotonicity properties from Lemma A.1 then suggest the following algorithm, consisting of a binary search over $\nu \in (0, M]$, with an inner binary search solving $\phi''([T(x)]_i)([T(x)]_i - x_i) = \nu$ for each coordinate of $T$.

Algorithm 1: Dual Renormalization Map $T^*_\phi(x)$ via Nested Binary Search

Require: convex generator $\phi$ with derivatives $f = \phi'$, $f' = \phi''$; input vector $x \in \Delta_{\mathrm{sub},k}$ with $\sum_i x_i < 1$; tolerance $\varepsilon > 0$
Ensure: renormalized vector $\hat p = T^*_\phi(x) \in \Delta_k$
1: function DualRenormalize(x, ϕ, ε)
2:   k ← length of x
3:   let ϕ″ denote the second derivative of ϕ
4:   M ← ϕ″(1) · (1 − max_i x_i)
       ▷ upper bound on feasible ν
5:   initialize ν_low ← 0, ν_high ← M
6:   while ν_high − ν_low > ε do
7:     ν ← (ν_low + ν_high)/2
8:     for i = 1 to k do
9:       x_i ← x[i]
10:      y[i] ← SolveRoot(x_i, ν, ϕ″, ε)
11:    end for
12:    G ← Σ_{i=1}^k y[i]
13:    if G < 1 then
14:      ν_low ← ν
15:    else
16:      ν_high ← ν
17:    end if
18:  end while
19:  return y
20: end function

21: function SolveRoot(x_i, ν, ϕ″, ε)
22:   a ← x_i, b ← 1
23:   while b − a > ε do
24:     m ← (a + b)/2
25:     Ψ ← ϕ″(m) · (m − x_i) − ν
26:     if Ψ < 0 then
27:       a ← m
28:     else
29:       b ← m
30:     end if
31:   end while
32:   return (a + b)/2
33: end function

F.2  Pseudocode for algorithms

See Algorithm 3 and Algorithm 4 for pseudocode for sparse primal (resp. dual) Bregman decoding.

Algorithm 2: Discrete Binary Search for Unimodal Cost Minimization

Require: callable function ComputeCost, maximum support size V
Ensure: optimal support size k* minimizing ComputeCost(k)
1: function BinarySearch(ComputeCost, V)
2:   c_1 ← ComputeCost(1)
3:   c_2 ← ComputeCost(2)
4:   if c_2 − c_1 ≥ 0 then
5:     return 1
6:   end if
7:   c_{V−1} ← ComputeCost(V−1)
8:   c_V ← ComputeCost(V)
9:   if c_V − c_{V−1} ≤ 0 then
10:    return V
11:  end if
12:  initialize L ← 1, R ← V
13:  while R − L > 1 do
14:    m ← ⌊(L+R)/2⌋
15:    c_m ← ComputeCost(m)
16:    c_{m+1} ← ComputeCost(m+1)
17:    if c_{m+1} − c_m ≥ 0 then
18:      R ← m
19:    else
20:      L ← m
21:    end if
22:  end while
23:  return R
24: end function

Algorithm 3: Regularized Sparse Primal Bregman Decoding

Require: probability vector p ∈ Δ_V, valid convex generator ϕ, sparsity penalty λ ≥ 0
Ensure: sparse decoded distribution p̂ ∈ Δ_V
1: function SparsePrimalBregmanDecode(p, ϕ, λ)
2:   sort p in descending order: p_(1) ≥ p_(2) ≥ ··· ≥ p_(V)
3:   define f = ϕ′
4:   function ComputeRenormalization(x ∈ R^k)
5:     solve for ν ∈ R such that Σ_{i=1}^k f^{−1}(f(x_i) + ν) = 1
6:     return p̂^(k) with [p̂^(k)]_i = f^{−1}(f(x_i) + ν) for i ∈ [k]
7:   end function
8:   function ComputeCost(k)
9:     let x = p[1:k]
10:    p̂^(k) ← ComputeRenormalization(x)
11:    pad with zeros: p̂^(k) ← (p̂^(k)_1, …, p̂^(k)_k, 0, …
, 0) 12: Compute Dϕ(ˆp(k), p) =PV i=1h ϕ(ˆp(k) i)−ϕ(pi)−f(pi)(ˆp(k) i−pi)i 13: return cost(k) = D ϕ(ˆp(k), p) +λk 14: end function 15: k∗←BINARY SEARCH (ComputeCost, V) 16: Recompute ˆp(k∗)using C OMPUTE RENORMALIZATION (p[1:k∗]) 17: Pad with zeros to full length V 18: return ˆp(k∗) 19:end function 31 Algorithm 4 Regularized Sparse Dual Bregman Decoding Require: Probability vector p∈∆V, valid convex generator ϕ, sparsity penalty λ⩾0 Ensure: Sparse decoded distribution ˆp∈∆V 1:function SPARSE DUALBREGMAN DECODE (p,ϕ,λ) 2: Sortpin descending order: p(1)⩾p(2)⩾···⩾p(V) 3: Define f=ϕ′,f′=ϕ′′ 4: function COMPUTE DUALRENORMALIZATION (x∈Rk) 5: Solve for ν∈Rsuch that:Pk i=1[T∗ ϕ(x)]i= 1,where [T∗ ϕ(x)]isatisfies the fixed-point equation: [T∗ ϕ(x)]i=xi+ν/f′([T∗ ϕ(x)]i). 6: return ˆp(k)=T∗ ϕ(x) 7: end function 8: function COMPUTE DUALCOST(k) 9: Letx=p[1:k] 10: ˆp(k)←COMPUTE DUALRENORMALIZATION (x) 11: Pad with zeros: ˆp(k)←(ˆp(k) 1, . . . , ˆp(k) k,0, . . . , 0) 12: Compute Dϕ(p,ˆp(k)) =PV i=1h ϕ(pi)−ϕ(ˆp(k) i)−f(ˆp(k) i)(pi−ˆp(k) i)i 13: return cost(k) = D ϕ(p,ˆp(k)) +λk 14: end function 15: k∗←BINARY SEARCH (ComputeDualCost, V) 16: Recompute ˆp(k∗)using C OMPUTE DUALRENORMALIZATION (p[1:k∗]) 17: Pad with zeros to full length V 18: return ˆp(k∗) 19:end function G Example: α-Bregman decoding G.1 Proof of Lemma 4.3 We first restate the lemma. Lemma G.1. All generator functions ϕα,α >1, are dual-valid and satisfy Assumption (A2). Proof. For Assumption 3.2, we can explicitly write: dϕ(x, y) =xα α(α−1)−yα α(α−1)−yα−1 α−1(x−y) =yα α−x α−1yα−1+xα α(α−1). Therefore, the
second derivative in $y$ of this expression is
$$(\alpha-1)\,y^{\alpha-2} - (\alpha-2)\,x\,y^{\alpha-3} = y^{\alpha-3}\big( y(\alpha-1) - x(\alpha-2) \big) = y^{\alpha-3}\big( y(\alpha-1) + x(2-\alpha) \big).$$
Now, if $y \geqslant x$, then using $\alpha - 1 \geqslant 0$ we have that the above expression is
$$\geqslant y^{\alpha-3}\big( x(\alpha-1) + x(2-\alpha) \big) = y^{\alpha-3}\,x \geqslant 0,$$
confirming the convexity in $y$. Now for the condition that $x \mapsto u(x) := x\,\varphi''(x)/\varphi'(x)$ is non-decreasing from Assumption (A2), we can observe that
$$\varphi'(x)\varphi'''(x) - \varphi''(x)^2 = \frac{x^{\alpha-1}}{\alpha-1} \cdot (\alpha-2)\,x^{\alpha-3} - \big(x^{\alpha-2}\big)^2 = -\frac{x^{2\alpha-4}}{\alpha-1}.$$
Therefore, we identically have
$$\varphi'(x)\varphi''(x) + x\big( \varphi'(x)\varphi'''(x) - \varphi''(x)^2 \big) = \frac{x^{2\alpha-3}}{\alpha-1} - x\,\frac{x^{2\alpha-4}}{\alpha-1} = 0,$$
thus concluding the proof.

G.2 Proof of Proposition 4.2

Recall the $\alpha$-renormalization map
$$[T_\alpha(p)]_i = \big( p_i^{\alpha-1} + \nu \big)^{\frac{1}{\alpha-1}}, \quad i \in [k],$$
where the shift parameter $\nu = \nu(\alpha, p)$ is chosen so that $\sum_{i=1}^k [T_\alpha(p)]_i = 1$. We treat each value (or limit) of $\alpha$ in turn.

The limit $\alpha \to -\infty$. Define
$$F_\beta(\nu) := \sum_{i=1}^k \big( p_i^\beta + \nu \big)^{1/\beta}, \qquad \beta := \alpha - 1 < 0.$$
Because $x \mapsto x^{1/\beta}$ is strictly decreasing and convex on $(0, \infty)$ for $\beta < 0$, $F_\beta$ is strictly decreasing and continuous on the interval $\big( -\min_i p_i^\beta,\, \infty \big)$. Moreover, $\lim_{\nu \downarrow -\min_i p_i^\beta} F_\beta(\nu) = \infty$ and $\lim_{\nu \uparrow \infty} F_\beta(\nu) = 0$, so a unique root $\nu_\beta$ with $F_\beta(\nu_\beta) = 1$ exists. Because $F_\beta(0) = S := \sum_{i=1}^k p_i \leqslant 1$ and $F_\beta$ is decreasing, we have $\nu_\beta \leqslant 0$.

Let $q^{(\alpha)}_i = [T_\alpha(p)]_i = \big( p_i^\beta + \nu_\beta \big)^{1/\beta}$, and let $i^\star$ be the index where $p_i$ is largest. Using the constraint $\sum_i q^{(\alpha)}_i = 1$,
$$q^{(\alpha)}_{i^\star} = 1 - \sum_{i \neq i^\star} q^{(\alpha)}_i = \delta + p_{i^\star} + \sum_{i \neq i^\star} \big( p_i - q^{(\alpha)}_i \big) \geqslant p_{i^\star} + \delta.$$
Raising $q^{(\alpha)}_{i^\star} = \big( p_{i^\star}^\beta + \nu_\beta \big)^{1/\beta}$ to the power $\beta < 0$ yields
$$\nu_\beta = \big( p_{i^\star} + \delta + R_\beta \big)^\beta - p_{i^\star}^\beta, \qquad R_\beta := \sum_{i \neq i^\star} \big( p_i - q^{(\alpha)}_i \big) \in [0, \delta]. \tag{21}$$
For $i \neq i^\star$, we have $\nu_\beta / p_i^\beta \to 0$. Indeed, (21) implies $|\nu_\beta| \leqslant p_{i^\star}^\beta\,(1 - c^\beta)$ with $c := (p_{i^\star} + \delta)/p_{i^\star} > 1$. Because $\beta \to -\infty$, $c^\beta \to 0$, we have $|\nu_\beta| = O\big( p_{i^\star}^\beta \big) = o\big( p_i^\beta \big)$. Then,
$$q^{(\alpha)}_i = p_i \Big( 1 + \frac{\nu_\beta}{p_i^\beta} \Big)^{1/\beta} \to p_i, \qquad i \neq i^\star. \tag{22}$$
Summing (22) over $i \neq i^\star$ and using $\sum_i q^{(\alpha)}_i = 1$ gives
$$q^{(\alpha)}_{i^\star} = 1 - \sum_{i \neq i^\star} q^{(\alpha)}_i \to 1 - \sum_{i \neq i^\star} p_i = p_{i^\star} + \delta. \tag{23}$$
Equations (22) and (23) establish $q^{(\alpha)} \to T_{-\infty}(p)$ component-wise, completing the proof.

The case $\alpha = \frac{3}{2}$. Now $\alpha - 1 = \frac{1}{2}$, hence $[T_{1.5}(p)]_i = (\sqrt{p_i} + \nu)^2$, $i \in [k]$. Set $s := \sum_{j=1}^k \sqrt{p_j}$ and $A := \sum_{j=1}^k p_j$. The normalization condition becomes
$$1 = \sum_{i=1}^k (\sqrt{p_i} + \nu)^2 = A + 2s\nu + k\nu^2.$$
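As a quick numerical sanity check of this normalization condition, the quadratic in $\nu$ can be solved directly and the resulting vector verified to lie on the simplex. The sketch below assumes NumPy; the sub-stochastic top-$k$ vector `p` is an arbitrary illustrative choice, not from the paper's experiments.

```python
import numpy as np

# Sketch: the alpha = 3/2 renormalization of a sub-stochastic top-k vector.
# With alpha - 1 = 1/2 we have [T_{1.5}(p)]_i = (sqrt(p_i) + nu)^2, and the
# normalization sum_i (sqrt(p_i) + nu)^2 = 1 is the quadratic
#   k*nu^2 + 2*s*nu + (A - 1) = 0,  s = sum_i sqrt(p_i),  A = sum_i p_i.
p = np.array([0.4, 0.2, 0.1])                    # hypothetical top-k vector
k, s, A = len(p), np.sqrt(p).sum(), p.sum()
nu = (-s + np.sqrt(s**2 + k * (1.0 - A))) / k    # root giving nonnegative entries
q = (np.sqrt(p) + nu) ** 2

assert abs(q.sum() - 1.0) < 1e-12                # q lies on the simplex
assert (q >= 0).all() and (np.diff(q) <= 0).all()  # nonnegative, order preserved
```

Note that the additive shift acts on $\sqrt{p_i}$, so, unlike the multiplicative rescaling of standard top-$k$, the smaller kept probabilities gain relatively more mass.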
Solving $k\nu^2 + 2s\nu + (A - 1) = 0$ for the root that yields non-negative probabilities gives
$$\nu = \frac{-s + \sqrt{s^2 + k(1 - A)}}{k}.$$
Hence
$$[T_{1.5}(p)]_i = \left( \sqrt{p_i} + \frac{\sqrt{s^2 + k(1 - A)} - s}{k} \right)^2, \quad i \in [k].$$

The case $\alpha = 2$. Here $\alpha - 1 = 1$, so Definition 4.1 yields $[T_2(p)]_i = p_i + \nu$, $i \in [k]$. The normalization condition gives $1 = \sum_{i=1}^k (p_i + \nu) = \sum_{i=1}^k p_i + k\nu$, hence $\nu = \frac{1 - \sum_{j=1}^k p_j}{k}$. Substituting yields
$$[T_2(p)]_i = p_i + \frac{1 - \sum_{j=1}^k p_j}{k}, \quad i \in [k].$$

The limit $\alpha \to +\infty$. Write $\beta := \alpha - 1 \to +\infty$. Let $\nu = c^\beta$ with $c \in [0, 1]$. Then
$$[T_\alpha(p)]_i = \big( p_i^\beta + c^\beta \big)^{1/\beta} = \exp\Big\{ \tfrac{1}{\beta} \log\big( p_i^\beta + c^\beta \big) \Big\}.$$
Using $\tfrac{1}{\beta} \log(a^\beta + b^\beta) \to \log(\max\{a, b\})$ as $\beta \to \infty$ gives $\lim_{\alpha \to \infty} [T_\alpha(p)]_i = \max\{p_i, c\}$. Choose the water level $c$ so that $\sum_{i=1}^k \max\{p_i, c\} = 1$. This furnishes the claimed water-filling rule.

The four cases above prove Proposition 4.2.

G.3 Illustrating primal and dual renormalization

We consider the peaked vector $v = [0.1, 0.001, 0.001, 0.001, 0.001]$, and plot how both of its distinct constituent values get transformed by the primal and dual Bregman $\alpha$-renormalization (by symmetry, all copies of $0.001$ are guaranteed to get mapped to the same value by any of our renormalizations). The resulting plots are in Figure 4. As predicted by our theory, both renormalization families coincide at three values of the parameter, namely at $\alpha \in \{1, 2, \infty\}$. Furthermore, the primal family evolves more gradually than the dual family between the endpoints of the parameter interval $\alpha \in (1, 2]$, while the reverse behavior occurs for $\alpha \in (2, \infty)$ (where both renormalizations gradually converge to the water-filling limit, which in this case is the uniform distribution).

Figure 4: Comparison of primal and dual renormalization maps: The transformation of the larger
value ($0.1$, left) and of the smaller value ($0.001$, right).

G.4 Illustrating general nonconvexity of dual renormalization

Figure 5 illustrates that the dual Bregman objective can in general be non-convex for large $\alpha$.

Figure 5: Nonconvexity of the Bregman dual landscape on the square $(x, y) \in [0, 1]^2$.

G.5 Illustrating discrete convexity

Figure 6 illustrates that the loss function $\mathrm{cost}(\cdot)$ defined in (6) is discretely convex for both the primal and dual decoding strategies. Here, we have chosen $V = 80$ and the regularization parameter $\lambda$ as $1/80$. When $k$ is close to $V$, the renormalization maps are all close to the true vector $p$, regardless of the value of $\alpha$, and hence the loss primarily depends on the regularization term $\lambda k$, which here equals $\lambda k = 1$ for $k = 80$. Thus, all curves (corresponding to different values of $\alpha$) for both the primal and dual plots asymptote to linearity and converge to this value at $k = 80$.

G.6 The simultaneous effects of Bregman decoding and temperature scaling

Here, we provide a plot to help compare the simultaneous effects of Bregman decoding and temperature scaling. We use the same simulation setting and plotting style as in our figure from the introduction (Section 1), except that we only plot the nonzero probabilities (i.e., the top $k = 10$ probabilities), and we plot the relative sizes of the probabilities compared to standard top-$k$ decoding. Further, we use the same $\alpha$ and temperature hyperparameters as in our experiments in Table 1. The results are shown in Figure 7. Standard top-$k$ decoding corresponds to $\alpha = 1$ and $T = 1$. From the figure, it appears that the effect of $\alpha > 1$ is to moderate/regularize the amount by which the small probabilities are pushed to zero, which could potentially be one reason why $\alpha$-Bregman decoding with $\alpha > 1$ can perform better at high temperatures.

Figure 6: Discrete convexity of the function $k \mapsto \mathrm{cost}(k)$ for primal and dual Bregman $\alpha$-decoding.

Figure 7: Comparison with changing the temperature.
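To give a concrete feel for this interplay, the following sketch contrasts standard top-$k$ renormalization with the closed-form $\alpha = 2$ renormalization (the additive shift from Proposition 4.2) applied after temperature scaling. It assumes NumPy; the logits, $k = 3$, and $T = 1.5$ are arbitrary illustrative choices, not the paper's experimental settings.

```python
import numpy as np

def temperature_softmax(logits, T):
    """Temperature-scaled softmax over a logit vector."""
    z = logits / T
    z = z - z.max()                    # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def renorm_alpha2(p_topk):
    """alpha = 2 Bregman renormalization: spread the missing mass additively."""
    return p_topk + (1.0 - p_topk.sum()) / len(p_topk)

logits = np.array([3.0, 2.0, 1.0, 0.5, -1.0])   # hypothetical logits
p = temperature_softmax(logits, T=1.5)
topk = np.sort(p)[::-1][:3]            # keep the k = 3 largest probabilities

q_std = topk / topk.sum()              # standard top-k: multiplicative rescale
q_a2 = renorm_alpha2(topk)             # alpha = 2: additive shift

assert abs(q_a2.sum() - 1.0) < 1e-12
# alpha = 2 pushes the small kept probabilities to zero less aggressively
# than standard top-k renormalization does:
assert q_a2[-1] > q_std[-1]
```

The final assertion makes the moderating effect visible: the additive shift gives every kept coordinate the same absolute boost, so the smallest kept probability ends up larger than under the multiplicative rescaling of standard top-$k$.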
H Supplementary experimental details

H.1 Compute resources

The experiments were conducted on a system running Rocky Linux 8.10, with 64 CPU cores of Intel(R) Xeon(R) Gold 6448Y processors at 2.10 GHz, 1 TB of RAM, and 8 NVIDIA L40S GPUs with 46 GB of memory each. All experiments can be run with only one GPU; multiple GPUs were used only to parallelize experiments. The software environment used Python 3.11.11, PyTorch 2.5.1, and CUDA 12.4.

H.2 Supplementary experimental results

In this section, we provide additional experimental results to supplement those from Section 5. In Table 2, we show the average $k^*$ values (and their values rounded to the nearest integer) selected by primal Bregman decoding on GSM8K with LLaMA 3.1 8B for various temperatures, $\alpha$, and $\lambda$.

Table 2: Mean (and rounded) average $k^*$ values on GSM8K with LLaMA 3.1 8B for various temperatures, $\alpha$, and $\lambda$.

Temp |   λ = 0.1              |   λ = 0.01             |   λ = 0.001            |   λ = 0.0001
     | α = 1.5    | α = 2.0   | α = 1.5    | α = 2.0   | α = 1.5    | α = 2.0   | α = 1.5    | α = 2.0
0.3  | 1.2231 (1) | 1.1537 (1)| 1.6201 (2) | 1.4453 (1)| 2.1274 (2) | 1.7964 (2)| 2.8578 (3) | 2.2112 (2)
0.7  | 1.2295 (1) | 1.1554 (1)| 1.6689 (2) | 1.4794 (1)| 2.3193 (2) | 1.9048 (2)| 3.2554