Scalable and adaptive prediction bands with kernel sum-of-squares

Louis Allain (1,2), Sébastien Da Veiga (2), and Brian Staber (1)

1 Safran Tech, Digital Sciences & Technologies, 78114 Magny-Les-Hameaux, France
2 Univ Rennes, Ensai, CNRS, CREST - UMR 9194, F-35000 Rennes, France

May 28, 2025

Abstract

Conformal Prediction (CP) is a popular framework for constructing prediction bands with valid coverage in finite samples, while being free of any distributional assumption. A well-known limitation of conformal prediction is its lack of adaptivity, although several works have introduced practically efficient alternative procedures. In this work, we build upon recent ideas that recast the CP problem as a statistical learning problem directly targeting coverage and adaptivity. This statistical learning problem is based on reproducing kernel Hilbert spaces (RKHS) and kernel sum-of-squares (SoS) methods. First, we extend previous results with a general representer theorem and exhibit the dual formulation of the learning problem. Crucially, this dual formulation can be solved efficiently by accelerated gradient methods with several hundreds or thousands of samples, unlike previous strategies based on off-the-shelf semidefinite programming algorithms. Second, we introduce a new hyperparameter tuning strategy tailored specifically to target adaptivity through bounds on test-conditional coverage. This strategy, based on the Hilbert-Schmidt Independence Criterion (HSIC), is introduced here to tune kernel lengthscales in our framework, but has broader applicability since it could be used in any CP algorithm where the score function is learned. Finally, extensive experiments are conducted to show how our method compares to related work. All figures can be reproduced with the accompanying code.

1 Introduction

In many applications, machine learning regression models require trustworthy uncertainty quantification in their predictions.
This is especially true for high-stakes applications such as design optimization, non-destructive testing, medical diagnostics, autonomous vehicles, or financial forecasting, where decisions based on model predictions can have significant impacts. Having reliable uncertainty quantification is thus fundamental. Several machine learning models come with uncertainty quantification in their predictions, such as Gaussian Processes [Rasmussen and Williams, 2005], Random Forests [Breiman, 2001] or Bayesian Neural Networks [Wang and Yeung, 2020], among many others, but these models generally provide inaccurate prediction bands: coverage guarantees typically hold asymptotically or under strong distributional assumptions. In practice, however, especially for high-stakes decisions, we should at least provide marginal coverage guarantees that hold in finite samples and without making any distributional assumptions on the data. In addition, a desirable feature is adaptivity: we would like prediction bands to be wide when either the model lacks confidence or the variability in the data is high, and narrow when both the model is confident and the variability is low. Having adaptive prediction bands means having an uncertainty quantification that is informative about either the performance of the model or the variability of the data, which is key in critical applications.

Conformal Prediction (CP) (see, e.g., [Gammerman et al., 1998, Papadopoulos et al., 2002, Shafer and Vovk, 2008] or [Angelopoulos and Bates, 2023] for a modern introduction) has been designed from the ground up as an uncertainty quantification statistical framework that provides marginal coverage guarantees in finite samples while being
arXiv:2505.21039v1 [cs.LG] 27 May 2025
https://arxiv.org/abs/2505.21039v1
distribution-free. In particular, the split conformal procedure proposed by Papadopoulos et al. [2002] is especially easy to implement. CP is becoming widely used in many different applications (see [Balasubramanian et al., 2014, Vazquez and Facelli, 2022] and references therein), but unfortunately, by construction, standard CP does not provide adaptive prediction bands. A lot of research has been done in this direction, which we review later.

In parallel, for specific statistical learning problems, Marteau-Ferey et al. [2020] introduced a new kernel framework known as kernel sum-of-squares (SoS), tailored specifically to estimate non-negative functions. Their key idea is to characterize such functions of interest by a linear positive semidefinite Hermitian operator, which admits a finite-dimensional representation through a representer theorem. Since then, it has been leveraged for non-convex optimization [Rudi et al., 2025], estimation of optimal transport distances [Vacher et al., 2021], modeling of probability densities [Rudi and Ciliberto, 2021] and PSD-constrained functions [Muzellec et al., 2022]. Very recently, kernel SoS was also identified as a powerful framework for constructing more adaptive prediction bands by Liang [2022] and Fan et al. [2024]. They were the first to propose building prediction bands as solutions of such a learning problem, where adaptive coverage is targeted with additional constraints. This point of view is the one we adopt and generalize in this paper.

Outline and contributions. We start by presenting CP in Section 2, as well as recent variants developed to improve adaptivity. We also introduce the kernel SoS framework, since our proposed method extensively relies on it.
In Section 3 we introduce our approach, which learns a CP score function by solving a statistical optimization problem with several ingredients: an objective function controlling both the width and the regularity of the prediction bands, and constraints for coverage. We also discuss a new criterion dedicated to the tuning of kernel lengthscales, with local coverage as an objective. In Section 4, we finally conduct extensive experiments to compare our method to other conformal prediction methods that provide adaptive bands. Our contributions are as follows:

• We generalize the previously introduced kernel SoS point of view for prediction bands and precisely analyze the contribution and practical effect of each term in the objective function,
• We provide a representer theorem that makes the problem numerically tractable,
• We derive a dual formulation of this problem and propose an accelerated gradient algorithm to enable faster computation on large datasets, unlike previous work that was limited to small datasets,
• We introduce a new criterion to tune kernel hyperparameters based on the Hilbert-Schmidt Independence Criterion (HSIC), which is also applicable to any other CP method. In particular, we provide both theoretical and empirical evidence of the effectiveness of this metric to achieve better adaptivity.

2 Conformal prediction and kernel sum-of-squares

2.1 Conformal prediction

Split conformal prediction. CP was introduced by Gammerman et al. [1998] with the so-called full variant, but we focus here on the split variant, introduced by Papadopoulos et al. [2002]. Suppose we have a training dataset $D_N = \{(X_i, Y_i)\}_{i=1}^N$ from a pair $(X, Y) \sim P_{XY}$ where $X \in \mathcal{X}$
$\subset \mathbb{R}^d$ and $Y \in \mathcal{Y} \subset \mathbb{R}$. This dataset is split in two parts: a pre-training dataset $D_n = \{(X_i, Y_i)\}_{i=1}^n$ and a calibration one $D_m = \{(X_i, Y_i)\}_{i=1}^m$ with $N = n + m$. The pre-training dataset $D_n$ is used to first fit a predictive model $\widehat{m}_n(\cdot)$, which can be any machine learning algorithm. The second step consists in computing performance scores associated to $\widehat{m}_n$ on the hold-out calibration dataset $D_m$. The usual score in the literature is defined as the absolute error $S(X_i, Y_i) := S_i = |Y_i - \widehat{m}_n(X_i)|$ for $i \in D_m$, which is used to compute the quantile $\widehat{q}_\alpha$ of the set $\{S_i\}_{i \in D_m}$ with an adjusted level $\lceil (1-\alpha)(m+1) \rceil / m$, where $\alpha$ is the desired error rate. Finally, for a new observation $X_{N+1}$, the split CP prediction bands are $\widehat{C}_N(X_{N+1}) = [\widehat{m}_n(X_{N+1}) \pm \widehat{q}_\alpha]$, which satisfy the marginal coverage
$$\mathbb{P}\big(Y_{N+1} \in \widehat{C}_N(X_{N+1})\big) \ge 1 - \alpha \qquad (1)$$
for any $N$ as long as $(X_1, Y_1), \ldots, (X_N, Y_N), (X_{N+1}, Y_{N+1})$ are exchangeable. Unfortunately, as underlined by Romano et al. [2019], these prediction bands cannot be adaptive since they have constant width $2\widehat{q}_\alpha$. The research direction in recent years has been to adjust CP procedures to target more adaptive bands.

The quest for adaptivity. Historically, the first idea was to change the score function by rescaling the scores with an estimate of the variability $\widehat{\sigma}_n(\cdot) \ge 0$. The new scores are thus defined as $S_i = |Y_i - \widehat{m}_n(X_i)| / \widehat{\sigma}_n(X_i)$, $i \in D_m$, with prediction bands $\widehat{C}_N(X_{N+1}) = [\widehat{m}_n(X_{N+1}) \pm \widehat{q}_\alpha \widehat{\sigma}_n(X_{N+1})]$. Lei and Wasserman [2014] first proposed to use an estimate of the conditional mean absolute deviation for $\widehat{\sigma}_n(\cdot)$. Another sensible choice is to scale the scores by an estimate of the standard deviation, as was suggested for several machine learning models like Gaussian Processes, Random Forests and Bayesian Neural Networks [Johansson et al., 2014, Papadopoulos, 2024, Jaber et al., 2024]. However, such scaling functions are rarely estimated in a goal-oriented way, without quantitative and explicit consideration for adaptive coverage. This often leads in practice to poorly adaptive prediction bands.
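The split conformal recipe above (absolute-error scores, adjusted quantile level, constant-width band) can be sketched in a few lines of NumPy; the function and variable names (`split_conformal_band`, `m_hat`, ...) are illustrative, not from the paper's code:

```python
import numpy as np

def split_conformal_band(m_hat, X_cal, y_cal, X_test, alpha=0.1):
    """Split CP with absolute-error scores S_i = |Y_i - m(X_i)|.

    m_hat: any fitted point predictor (callable X -> predictions).
    Returns the constant-width band m_hat(x) +/- q_alpha.
    """
    scores = np.abs(y_cal - m_hat(X_cal))
    m = len(scores)
    # adjusted level ceil((1 - alpha)(m + 1)) / m, clipped at 1
    level = min(np.ceil((1 - alpha) * (m + 1)) / m, 1.0)
    q_alpha = np.quantile(scores, level, method="higher")
    pred = m_hat(X_test)
    return pred - q_alpha, pred + q_alpha
```

With exchangeable calibration and test data, the returned band inherits the marginal coverage (1); its width, however, is the same at every test point.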
In a parallel line of work, the popular Conformalized Quantile Regression (CQR) [Romano et al., 2019] proposes to change the score function by leveraging quantile regression. Instead of using an interval built around an estimate $\widehat{m}_n(\cdot)$ of the regression function, it relies on estimates $\widehat{q}^{\,\alpha_{\mathrm{lower}}}_n(\cdot)$ and $\widehat{q}^{\,\alpha_{\mathrm{upper}}}_n(\cdot)$ of the conditional quantiles, and builds an interval $\widehat{C}_N(X_{N+1}) = \big[\widehat{q}^{\,\alpha_{\mathrm{lower}}}_n(X_{N+1}) - \widehat{q}_\alpha,\ \widehat{q}^{\,\alpha_{\mathrm{upper}}}_n(X_{N+1}) + \widehat{q}_\alpha\big]$, where now $\widehat{q}_\alpha$ is the adjusted quantile of the set $\{\max\big(\widehat{q}^{\,\alpha_{\mathrm{lower}}}_n(X_i) - Y_i,\ Y_i - \widehat{q}^{\,\alpha_{\mathrm{upper}}}_n(X_i)\big),\ i \in D_m\}$. In other words, the score function is chosen as $S(X, Y) = \max\big(\widehat{q}^{\,\alpha_{\mathrm{lower}}}_n(X) - Y,\ Y - \widehat{q}^{\,\alpha_{\mathrm{upper}}}_n(X)\big)$. By design, the CQR model is adaptive and generally provides sensible prediction bands. However, it suffers from two practical limitations: (a) decision makers usually prefer a point estimate with an interval around this estimate, and (b) quantile regression in a small data regime can be quite challenging. Also note that a regularized version of CQR tailored to target test-conditional coverage was recently proposed by Feldman et al. [2021].

Another line of work consists in modifying the calibration step. For example, Guan [2023] and Hore and Barber [2024] propose to weight the scores when computing the adjusted level quantile, where the weights depend on the test point $X_{N+1}$: this directly implies that the quantile changes with
$X_{N+1}$, and as a result the prediction bands are adaptive. More precisely, given a kernel $H(\cdot, \cdot)$ that defines a density $H(x, \cdot)$ for all $x \in \mathcal{X}$, sample $\widetilde{X}_{N+1}$ from $H(X_{N+1}, \cdot)$. The quantile $\widehat{q}_\alpha(X_{N+1}, \widetilde{X}_{N+1})$ is computed on the empirical distribution $\sum_{i=1}^m \widetilde{w}_i \delta_{S_i} + \widetilde{w}_{N+1} \delta_{+\infty}$, where the weights are computed as $\widetilde{w}_i = H(X_i, \widetilde{X}_{N+1}) / \big(\sum_{j=1}^m H(X_j, \widetilde{X}_{N+1}) + H(X_{N+1}, \widetilde{X}_{N+1})\big)$. Although appealing, such modifications to the score function have some practical shortcomings. First, computing different weights for all test points can be computationally demanding during inference. Second, the method suffers in practice from the randomization induced by the sampling of $\widetilde{X}_{N+1}$. To overcome this issue, Hore and Barber [2024] propose the m-RLCP method, which averages the prediction bands over $m$ samplings of $\widetilde{X}_{N+1}$. However, this leads to a marginal coverage equal to $1 - 2\alpha$ and the computational cost increases significantly. Finally, and perhaps more importantly, the kernel $H(\cdot, \cdot)$ involved in the definition of the weights depends on a bandwidth hyperparameter that must be tuned. This choice has a strong impact on the shape of the prediction bands, as shown in Hore and Barber [2024]. We will come back to this point later since our framework shares the same characteristic. Finally, another reweighting method, based instead on Jackknife+, was proposed by Deutschmann et al. [2024].

2.2 Kernel sum-of-squares

Since our proposed method is based on kernel SoS for positive functions, we give a brief overview following Marteau-Ferey et al. [2020]. Let $\mathcal{H}$ be a RKHS with associated kernel $k$ and $\phi : \mathcal{X} \to \mathcal{H}$ one of its feature maps, such that $k(x, x') = \phi(x)^\top \phi(x')$. Let $\mathcal{S}(\mathcal{H})$ be the set of bounded Hermitian linear operators from $\mathcal{H}$ to $\mathcal{H}$. For $A \in \mathcal{S}(\mathcal{H})$, we write $A \succeq 0$ when $A$ is a positive semi-definite (PSD) operator, and $\mathcal{S}_+(\mathcal{H})$ the set of such PSD operators. For all $x \in \mathcal{X}$, we define $f_A(x) = \phi(x)^\top A \phi(x)$, $A \in \mathcal{S}_+(\mathcal{H})$, which is non-negative by construction.
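The localized reweighting described above can be sketched as follows, with a Gaussian kernel for $H$ and a plain weighted quantile; this is a simplified, hypothetical implementation for illustration, not the authors' procedure:

```python
import numpy as np

def rlcp_quantile(scores, X_cal, x_test, alpha=0.1, h=0.5, seed=None):
    """Localized quantile: weight calibration scores by kernel proximity
    to a randomized test point, with a point mass at +infinity."""
    rng = np.random.default_rng(seed)
    H = lambda x, z: np.exp(-(x - z) ** 2 / (2 * h ** 2))  # kernel H(x, .)
    x_tilde = rng.normal(x_test, h)          # sample X~ from H(x_test, .)
    w = np.append(H(X_cal, x_tilde), H(x_test, x_tilde))
    weights = w / w.sum()                    # normalized weights w~_i
    vals = np.append(scores, np.inf)         # point mass at +infinity
    order = np.argsort(vals)
    cumw = np.cumsum(weights[order])
    idx = np.searchsorted(cumw, 1 - alpha)   # weighted (1 - alpha)-quantile
    return vals[order][idx]
```

When the bandwidth `h` is very large the weights become uniform and the standard split CP quantile is recovered; when it is small the quantile adapts to the neighborhood of the test point.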
Kernel SoS refers to a statistical learning problem where the unknown nonparametric function is constrained to be non-negative and obtained as the solution of
$$\inf_{A \succeq 0} \; L(f_A(X_1), \ldots, f_A(X_n)) + \lambda_1 \|A\|_\star + \lambda_2 \|A\|_F^2. \qquad (2)$$
Interestingly, Marteau-Ferey et al. [2020] show a representer theorem for Equation (2), which makes kernel SoS computationally tractable in finite dimension; it is recalled below.

Theorem 1 (Marteau-Ferey et al. [2020]) Assume $L : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ to be a lower semi-continuous and bounded below loss function. Equation (2) admits a solution $A^\star$ which can be written $A^\star = \sum_{i,j=1}^n B^\star_{ij} \phi(X_i) \phi(X_j)^\top$ for some matrix $B^\star \in \mathbb{R}^{n \times n}$, $B^\star \succeq 0$. Furthermore, $A^\star$ is unique if $L$ is convex and $\lambda_2 > 0$. The corresponding non-negative function is given by $f_{A^\star}(x) = \sum_{i,j=1}^n B^\star_{ij} k(X_i, x) k(X_j, x)$.

Theorem 1 provides a finite-dimensional equivalent problem which involves an unknown PSD matrix $B$. But Marteau-Ferey et al. [2020] also propose an equivalent formulation: considering $K$ the kernel matrix with elements $K_{ij} = k(X_i, X_j)$ and $V$ its upper Cholesky decomposition, we can define $\Phi(x) = V^{-\top} k_x$ with $k_x = (k(x, X_i))_{i=1,\ldots,n}$ and $\tilde{f}_A(x) = \Phi(x)^\top A \Phi(x)$. With these notations, the following proposition shows that we obtain the same solution if we optimize the PSD matrix $A$ instead of $B$.

Proposition 1 (Marteau-Ferey et al. [2020]) Under the assumptions of Theorem 1, the following problem has at least one solution, which is unique if $\lambda_2 > 0$ and $L$ is convex:
$$\inf_{A \succeq 0} \; L(\tilde{f}_A(X_1), \ldots, \tilde{f}_A(X_n)) + \lambda_1 \|A\|_\star + \lambda_2 \|A\|_F^2. \qquad (3)$$
For any given solution $A^\star \in \mathbb{R}^{n \times n}$ of Equation (3), the function $\tilde{f}_{A^\star}$ is also
solution of Equation (2). Although such a result may appear of minor impact, the $A$ formulation actually yields significant computational savings in practice, as we illustrate in our numerical experiments (see Appendix B.1).

Remark 1 Operator $A$ is PSD, hence it admits an eigendecomposition $A = \sum_{l \ge 0} \lambda_l u_l \otimes u_l$ with $\lambda_l \ge 0$ and $u_l \in \mathcal{H}$. By the reproducing property, we have $f_A(x) = \sum_{l \ge 0} \lambda_l u_l(x)^2$. Hence, $f_A$ is an infinite sum of squared functions in $\mathcal{H}$. Equivalently, in finite dimension, for a PSD matrix $B = U D U^\top$ we have $f_B(x) = k_x^\top U D U^\top k_x = \sum_{l=1}^n \lambda_l \big(\sum_{i=1}^n u_{il} k(x, X_i)\big)^2$. Thus $f_B$ is a function defined as a linear combination of squared functions from $\mathcal{H}$.

3 Regularized kernel SoS for adaptive prediction bands

We now come back to our initial problem of building adaptive prediction bands. To do so, we focus on the split CP setting with two i.i.d. datasets $D_n = \{(X_i, Y_i)\}_{i=1}^n$ and $D_m = \{(X_i, Y_i)\}_{i=1}^m$, and consider estimating a score function in a specific supervised learning problem to achieve better adaptivity.

3.1 Learning the scores through an optimization problem

A general framework for learning score functions was recently introduced by Xie et al. [2024a]: given a task-specific loss (e.g. conditional coverage or minimum interval width), an optimized score function is obtained via a boosting algorithm. In this work, the score function $S(X, Y) = \max(\mu_1(X) - Y, Y - \mu_2(X)) / \sigma(X)$ is inspired by CQR and is parameterized by three unknown functions $(\mu_1, \mu_2, \sigma)$ such that $\mu_1(\cdot) \le \mu_2(\cdot)$ and $\sigma(\cdot) \ge 0$, which are iteratively optimized during boosting rounds. From a practical viewpoint, however, the constraints on these functions are not inherently accounted for in the boosting algorithm.
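The finite-dimensional identity in Remark 1 can be checked numerically; the snippet below (illustrative names, a Gaussian kernel assumed) verifies that $k_x^\top B k_x$ matches the eigendecomposition-based sum of squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 15
X_train = rng.normal(size=n)
k = lambda x, xi: np.exp(-(x - xi) ** 2 / 2.0)  # Gaussian kernel, theta = 1

G = rng.normal(size=(n, n))
B = G @ G.T                                      # PSD matrix by construction
lam, U = np.linalg.eigh(B)                       # B = U diag(lam) U^T

x = 0.37                                         # arbitrary evaluation point
k_x = k(x, X_train)                              # vector (k(x, X_i))_i
f_direct = k_x @ B @ k_x                         # f_B(x) = k_x^T B k_x
# sum-of-squares form: sum_l lam_l (u_l^T k_x)^2
f_sos = sum(lam[l] * (U[:, l] @ k_x) ** 2 for l in range(n))
```

Both evaluations agree up to floating-point error, and the value is non-negative since all the eigenvalues of the PSD matrix `B` are.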
We advocated before the use of prediction intervals built around a point estimate, which corresponds to the particular case $m = \mu_1 = \mu_2$ and yields a score function $S(X, Y) = |Y - m(X)| / \sigma(X)$, or equivalently $S(X, Y) = (Y - m(X))^2 / f(X)$ with $f(\cdot) := \sigma(\cdot)^2$, since only the score quantiles are involved in the final prediction interval: we thus recover the rescaled conformal score setting where we learn the rescaling function $f(\cdot)$, with an additional non-negativity constraint. The kernel SoS framework is thus a natural candidate for this learning task.

Kernel SoS formulation for prediction intervals. We introduce two RKHSs $\mathcal{H}_m$ and $\mathcal{H}_f$ associated to kernels $k_m$ with lengthscales $\theta_m$ and $k_f$ with lengthscales $\theta_f$, respectively. The regression function $m$ will be estimated in the RKHS $\mathcal{H}_m$, while the non-negative scaling function $f$ will be estimated using the kernel SoS framework in the RKHS $\mathcal{H}_f$. The first step is to derive our proposed infinite-dimensional learning problem from the key properties that we impose on prediction bands, which writes
$$\inf_{m \in \mathcal{H}_m, \, A \in \mathcal{S}_+(\mathcal{H}_f)} \; \frac{a}{n} \sum_{i=1}^n (Y_i - m(X_i))^2 + \frac{b}{n} \sum_{i=1}^n f_A(X_i) + \lambda_1 \|A\|_\star + \lambda_2 \|A\|_F^2 \qquad (4)$$
$$\text{s.t.} \quad f_A(X_i) \ge (Y_i - m(X_i))^2, \quad i \in [n], \qquad (5)$$
$$\|m\|_{\mathcal{H}_m}^2 \le s \qquad (6)$$
where $\|A\|_\star$ and $\|A\|_F$ denote the nuclear norm and the Frobenius norm of operator $A$, respectively. In this problem, we propose to include several key components:

1. Accurate mean estimation (first term in (4)) with regularity penalty (6), as in standard kernel ridge regression [Schölkopf and Smola, 2002],
2. Prediction intervals with minimum mean width (second term in (4)) and an additional regularity penalty (third and fourth terms in (4)),
3. 100% coverage on pre-training data (5), later calibrated on the calibration dataset.

Minimizing the nuclear norm with coverage constraints was originally proposed by Liang [2022], and minimizing
the mean width was later proposed by Fan et al. [2024]. Our proposition essentially differs from their work through a broader and more efficient practical applicability: (a) we place ourselves in the split CP setting, whereas Liang [2022] and Fan et al. [2024] propose different calibration procedures that are harder to implement in practice, with theoretical coverage guarantees that depend on hyperparameters, (b) we rely on a dual formulation which can handle several hundreds of samples, (c) we propose a goal-oriented tuning strategy for $\theta_f$, and (d) our proposal is more general and we give a better understanding of why this targets adaptivity¹. In particular, adaptivity can actually be controlled through the complexity of the scaling function $f$, in three different ways:

1. The sparsity of the linear combination. This can be controlled by the nuclear norm $\|A\|_\star$, which acts as a lasso-type penalty (see Recht et al. [2010]),
2. The $\ell_2$ norm of the coefficients in the linear combination, which is equal to the Frobenius norm $\|A\|_F$ and is similar to a ridge penalty,
3. The RKHS $\mathcal{H}_f$, which impacts the functions $u_l(\cdot)$, and notably the lengthscales $\theta_f$.

To further illustrate point 3, if we take $k_f(x, x') = \langle x, x' \rangle$ (RKHS of linear functions), the rescaling function will be a second-order polynomial (thus not complex), while with a Gaussian kernel, if $\theta_f \to +\infty$ the prediction intervals will be constant (not adaptive enough) and if $\theta_f$ is small they will be very wiggly (too adaptive). We thus see that in between, we can target better adaptivity: we propose in Section 3.2 a new goal-oriented criterion related to local coverage to tune the lengthscales $\theta_f$. Note also that $1 - \alpha$ instead of 100% coverage can be considered in the constraints, but this unfortunately leads to a non-convex optimization problem [Braun et al., 2025]. Before discussing in detail the choice of our problem hyperparameters, we first derive a representer theorem which makes the optimization numerically tractable.
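The lengthscale effect described in point 3 is easy to visualize on a toy kernel SoS function; this sketch assumes a Gaussian kernel and a random PSD matrix (all names illustrative):

```python
import numpy as np

def gaussian_kernel(x, z, theta):
    return np.exp(-(x[:, None] - z[None, :]) ** 2 / (2 * theta ** 2))

def f_B(x_eval, x_train, B, theta):
    """Kernel SoS rescaling function f_B(x) = k_x^T B k_x."""
    K = gaussian_kernel(x_eval, x_train, theta)  # rows are k_x^T
    return np.einsum("ij,jk,ik->i", K, B, K)

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 30)
G = rng.normal(size=(30, 30))
B = G @ G.T / 30                                 # PSD matrix
x_grid = np.linspace(0, 1, 400)

f_large = f_B(x_grid, x_train, B, theta=100.0)   # theta -> infinity: ~constant
f_small = f_B(x_grid, x_train, B, theta=0.02)    # small theta: very wiggly
rel_var = lambda f: f.std() / f.mean()           # relative variation of f
```

With a huge lengthscale the kernel vector is nearly constant, so `f_large` is almost flat (a non-adaptive band); with a tiny lengthscale `f_small` oscillates strongly between training points (an overly adaptive band).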
Theorem 2 (Representer theorem) Let $(a, b, s, \lambda_1) \in \mathbb{R}^4_+$ and $\lambda_2 > 0$. Then Equation (4) admits a unique solution $(m^\star, f_{A^\star})$ of the form $m^\star(x) = \sum_{i=1}^n \gamma^\star_i k_m(X_i, x) = \gamma^{\star\top} k^m_x$ and $f_{A^\star}(x) = \Phi(x)^\top A^\star \Phi(x)$ for some vector $\gamma^\star \in \mathbb{R}^n$ and matrix $A^\star \in \mathbb{R}^{n \times n}$, $A^\star \succeq 0$.

The detailed proof, based on Marteau-Ferey et al. [2020] and Muzellec et al. [2022], can be found in Appendix A.1. This representer theorem leads to the following tractable semi-definite programming (SDP) problem:
$$\inf_{\gamma \in \mathbb{R}^n, \, A \in \mathcal{S}^n_+} \; \frac{a}{n} \sum_{i=1}^n \big(Y_i - \gamma^\top k^m_{X_i}\big)^2 + \frac{b}{n} \sum_{i=1}^n \tilde{f}_A(X_i) + \lambda_1 \|A\|_\star + \lambda_2 \|A\|_F^2$$
$$\text{s.t.} \quad \tilde{f}_A(X_i) \ge \big(Y_i - \gamma^\top k^m_{X_i}\big)^2, \quad i \in [n], \qquad (7)$$
$$\gamma^\top K_m \gamma \le s.$$
In practice, such an SDP problem can be solved using off-the-shelf solvers like SCS [O'Donoghue et al., 2016], as was advocated by Liang [2022] and Fan et al. [2024]. However, our numerical experiments show that this strategy does not scale past a few hundred pre-training samples: this is thus a severe practical limitation which, in our opinion, heavily weakens the kernel SoS point of view. To circumvent this major issue, we rely instead on a dual formulation of Equation (7).

¹ This lack of understanding is, for example, pointed out by Liang et al. [2024].

Dual formulation. First note that this formulation is possible only when $\lambda_2 > 0$ (thus excluding the Liang [2022] framework) and that it consists
of an optimization problem over $(n + 1)$ variables rather than $(n + n \times n)$ variables.

Proposition 2 (Dual formulation) Let $(a, b, s, \lambda_1) \in \mathbb{R}^4_+$, $\lambda_2 > 0$ and $\Delta := \mathbb{R}^{n+1}_+$. Equation (7) admits a dual formulation of the form
$$\sup_{(\Gamma, \theta) \in \Delta} \; r(\Gamma, \theta)^\top \mathrm{Diag}(\Gamma_a) \, r(\Gamma, \theta) + \theta \big(\gamma(\Gamma, \theta)^\top K_m \gamma(\Gamma, \theta) - s\big) - \Omega^\star\big(V \mathrm{Diag}(\Gamma_{-b}) V^\top\big) \qquad (8)$$
where $r(\Gamma, \theta) = Y - K_m \gamma(\Gamma, \theta)$, $\gamma(\Gamma, \theta) = C(\Gamma, \theta)^{-1} \mathrm{Diag}(\Gamma_a) Y$, $C(\Gamma, \theta) = \mathrm{Diag}(\Gamma_a) K_m + \theta I_n$, $\Omega^\star(B) = \frac{1}{4\lambda_2} \|[B - \lambda_1 I_n]_+\|_F^2$ and, for all $x \in \mathbb{R}$, $\mathrm{Diag}(\Gamma_x) := \mathrm{Diag}(\Gamma) + \frac{x}{n} I_n$. Moreover, if $(\widehat{\Gamma}, \widehat{\theta})$ is a solution of Equation (8), a solution of Equation (4) can be retrieved as
$$\widehat{\gamma} = \big(\mathrm{Diag}(\widehat{\Gamma}_a) K_m + \widehat{\theta} I_n\big)^{-1} \mathrm{Diag}(\widehat{\Gamma}_a) Y \quad \text{and} \quad \widehat{A} = \frac{1}{2\lambda_2} \big[V \mathrm{Diag}(\widehat{\Gamma}_{-b}) V^\top - \lambda_1 I_n\big]_+$$
where $[A]_+$ denotes the positive part of $A$².

A detailed proof is given in Appendix A.2, as well as the analytical gradient of the dual. Interestingly, this dual formulation can be efficiently optimized using accelerated gradient-ascent algorithms [Ruder, 2017, Xie et al., 2024b]. For an even faster implementation, we also take advantage of the sparsity induced by the Lagrange multipliers $\Gamma$ and leverage GPU computation; see Figure 8 for an illustration of the numerical speed-up.

Final prediction bands. Solving the dual formulation yields estimates $\widehat{m}(x) = \widehat{\gamma}^\top k^m_x$ and $\widehat{f}_A(x) = \Phi(x)^\top \widehat{A} \Phi(x)$, from which we derive the estimated score function $S(X, Y) = |Y - \widehat{m}(X)| / \sqrt{\widehat{f}_A(X)}$. Following standard split CP, our prediction interval is given by
$$\widehat{C}_N(X_{N+1}) = \Big[\widehat{m}(X_{N+1}) - \widehat{q}_\alpha \sqrt{\widehat{f}_A(X_{N+1})}, \; \widehat{m}(X_{N+1}) + \widehat{q}_\alpha \sqrt{\widehat{f}_A(X_{N+1})}\Big] \qquad (9)$$
where $\widehat{q}_\alpha$ is the adjusted quantile of the estimated score function on the calibration set. By design, these intervals satisfy the same marginal coverage guarantee as split CP, as formalized below.

Proposition 3 If the samples $(X_1, Y_1), \ldots, (X_N, Y_N), (X_{N+1}, Y_{N+1})$ are exchangeable, then the prediction interval (9) satisfies the marginal coverage (1).

Extensions. Similarly to Liang [2022], if a point estimate $\widehat{m}(\cdot)$ is available beforehand from any machine learning model trained on a separate dataset, our dual formulation can be easily modified.
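Once estimates of $\widehat{m}$ and $\widehat{f}_A$ are available (stubbed below with hypothetical closed-form functions rather than the dual solver), the calibration step and the band (9) take only a few lines:

```python
import numpy as np

def calibrate_and_predict(m_hat, f_hat, X_cal, y_cal, X_test, alpha=0.1):
    """Rescaled-score split CP: S = |y - m(x)| / sqrt(f(x))."""
    scores = np.abs(y_cal - m_hat(X_cal)) / np.sqrt(f_hat(X_cal))
    m = len(scores)
    level = min(np.ceil((1 - alpha) * (m + 1)) / m, 1.0)
    q = np.quantile(scores, level, method="higher")
    half = q * np.sqrt(f_hat(X_test))        # adaptive half-width
    center = m_hat(X_test)
    return center - half, center + half

# hypothetical plug-in estimates mimicking a heteroscedastic setting
rng = np.random.default_rng(0)
X_cal = rng.normal(size=500)
y_cal = X_cal / 2 + np.abs(np.sin(X_cal)) * rng.normal(size=500)
m_hat = lambda x: x / 2
f_hat = lambda x: np.sin(x) ** 2 + 1e-3      # illustrative scaling function
lo, hi = calibrate_and_predict(m_hat, f_hat, X_cal, y_cal,
                               np.linspace(-3, 3, 7))
```

Unlike the constant-width split CP band, the interval width here varies with the test point through $\sqrt{\widehat{f}_A(\cdot)}$, while the exchangeability argument behind Proposition 3 is unchanged.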
More importantly, it is also straightforward to consider non-symmetric intervals by estimating two functions $\widehat{f}_{A_{\mathrm{low}}}(\cdot)$ and $\widehat{f}_{A_{\mathrm{up}}}(\cdot)$, at the cost of increasing the dual problem dimension to $(2n + 1)$; see Appendix B.4. We also strongly believe that incorporating constraints of the form $\mu_1(\cdot) \le \mu_2(\cdot)$, as proposed by Xie et al. [2024a], is possible with kernel SoS, by taking inspiration from Aubin-Frankowski and Rudi [2024]. Kernel SoS is also complementary to Hore and Barber [2024], as our learned score can be post-processed with their approach. Finally, since we aim for 100% coverage, our framework may lack robustness with respect to outliers: however, we can take advantage of our preliminary estimate $m_{GP}(\cdot)$ (see next section) to filter out samples with large residuals.

3.2 Hyperparameter tuning

First, the hyperparameters $\theta_m$ and $s$ related to the regression function $m(\cdot)$ are fixed with a preliminary estimate $m_{GP}(\cdot)$ obtained from Gaussian Process (GP) regression with kernel $k_m$: $\theta_m$ is optimized by maximum likelihood and $s = \|m_{GP}\|_{\mathcal{H}_m}^2$. Second, our numerical experiments below show that $\lambda_2$ has a very small impact on the shape of the intervals, as long as $\lambda_2 > 0$. Not only does it make the initial problem strongly convex and the dual formulation possible, but it also facilitates numerical optimization. In the following, we thus fix it at $\lambda_2 = 1$. The hyperparameters $a$, $b$, $\lambda_1$ and $\theta_f$ necessitate more attention. Hence, we perform an extensive numerical study where we evaluate the marginal impact of each of them on several test cases with different training sizes and repeated random seeds (see Appendix B.1 for details). Boxplots of the combined indicators are given in Figure 1. We first observe that $a$ has very little influence on mean squared errors when the noise is symmetric (top left). This phenomenon was commented on in Fan et al. [2024], where they argue that the coverage constraints have the additional effect of reducing mean squared errors. Our experiments confirm this intuition, and we simply set $a = 0$ for the symmetric case. Note however that our conclusions related to $a$ only apply to test cases with symmetric noise (which is the case in this benchmark). In the presence of asymmetry, it may be required to select $a > 0$ to ensure a satisfying estimation of the mean function: we give an illustration in Appendix B.4. Furthermore, Figure 1 also shows that $b$ and $\lambda_1$ have highly different influences (bottom row). This may seem surprising, given how they interact in the dual formulation through the last term in (8). A potential explanation is that $b$ controls a data-driven term, unlike $\lambda_1$, and the latter does not have a strong impact on the optimal solution provided it is positive. As a result, we choose to set $\lambda_1 = 1$. On the contrary, $b$ has the expected behavior, in the sense that higher values yield narrower intervals, at the cost of increasing the nuclear norm, that is, the function complexity. But the last hyperparameter $\theta_f$ also controls such complexity: we thus expect a compensation between these two hyperparameters through an interaction. Numerical experiments in Section 4 actually confirm this phenomenon and show that, as long as $b$ is sufficiently large (e.g. 10), it is possible to tune only $\theta_f$ to reach the same level of adaptivity. $\theta_f$ thus requires specific consideration, since it has the most impact on the shape of the prediction bands when $b$ is fixed. This is the one we focus on below.

² For a PSD matrix $A$ with eigendecomposition $A = U D U^\top$, its positive part is defined as $[A]_+ = U \max(0, D) U^\top$.
Figure 1: Marginal impact of hyperparameters $a$, $b$, $\lambda_1$ and $\lambda_2$ over several values of $\theta_f$, test cases and random seeds on RMSE, mean interval width and regularization norms. (a) Impact of $a$ on RMSE; (b) impact of $\lambda_2$ on mean interval width and $\|A\|_\star$; (c) impact of $\lambda_1$ and $b$ on mean interval width; (d) impact of $\lambda_1$ and $b$ on $\|A\|_\star$.

Goal-oriented criterion. Adaptivity is tightly related to local coverage $\mathbb{P}(Y_{N+1} \in \widehat{C}(X_{N+1}) \mid X_{N+1} = x) \ge 1 - \alpha$, which is impossible to achieve exactly in a distribution-free setting [Vovk, 2012, Barber et al., 2021]. We focus on a weaker version, where we condition on $X$ being in a small neighborhood $\omega_X \in \mathcal{F}_X$ from the event space $\mathcal{F}_X$ such that for all $x \in \mathcal{X}$, $\mathbb{P}(x \in \omega_X) \ge \delta$: $\mathbb{P}(Y_{N+1} \in \widehat{C}(X_{N+1}) \mid X_{N+1} \in \omega_X) \ge 1 - \alpha$. Recently, Deutschmann et al. [2024] showed that such coverage with split CP can be controlled with the mutual information (MI) between the inputs and the score function, namely
$$\mathbb{P}(Y \in \widehat{C}_{D_N}(X) \mid D_N, X \in \omega_X) \ge 1 - \alpha - \frac{1}{\delta} \sqrt{1 - \exp\big(-\mathrm{MI}(X, S_{D_n}(X, Y))\big)}. \qquad (10)$$
Note that, contrary to Deutschmann et al. [2024], we explicitly write the coverage conditionally on the training set $D_N$. Inherently, this bound is uninformative for low-probability sets, but interestingly we can counterbalance this effect by choosing a score function which is as independent as possible of the inputs, in order to get $\mathrm{MI}(X, S(X, Y))$ close to 0,
see Deutschmann et al. [2024]. However, this implies computing MI for a random vector in dimension $d$: from a practical perspective, MI suffers from the curse of dimensionality and rapidly becomes numerically unstable. Instead, for a rescaled score function, we show that a similar bound holds with MI between one-dimensional variables, which is more robust. Furthermore, using recent inequalities between the total variation distance and the maximum mean discrepancy [Wang and Tay, 2023], we also extend our result to replace MI with HSIC [Gretton et al., 2005], for which we observe in practice much improved numerical stability.

Proposition 4 Let $\widehat{C}_{D_N}$ be the prediction intervals built from a score function $S(X, Y) = |Y - m(X)| / \sqrt{f(X)}$ through split CP with $D_N = D_n \cup D_m$. Then for any $\omega_X$ in $\mathcal{F}_X$ such that $\mathbb{P}(X \in \omega_X) \ge \delta$, denoting $p_{D_N} = \mathbb{P}(Y_{N+1} \in \widehat{C}_{D_N}(X_{N+1}) \mid D_N, X_{N+1} \in \omega_X)$, we have:
$$p_{D_N} \ge 1 - \alpha - \frac{1}{\delta} \sqrt{1 - \alpha_1 \exp\big(\mathrm{MI}(r_{D_n}(X_{N+1}, Y_{N+1}), \widehat{f}_{D_n}(X_{N+1}))\big)} \qquad (11)$$
where $\alpha_1$ does not depend on $f(\cdot)$ and $r_{D_n}(X, Y) = |Y - \widehat{m}_{D_n}(X)|$. In addition, we have
$$p_{D_N} \ge 1 - \alpha - \frac{1}{\delta} \sqrt{1 - \frac{\alpha_1}{1 - \alpha_2 \, \mathrm{HSIC}\big(r_{D_n}(X_{N+1}, Y_{N+1}), \widehat{f}_{D_n}(X_{N+1})\big)}} \qquad (12)$$
where $\alpha_2$ only depends on the kernel used for HSIC, which must be characteristic.

To target local coverage, we can then maximize $\mathrm{HSIC}(r(X, Y), f(X))$: in our kernel SoS procedure, this means that $\theta_f$ can be tuned efficiently according to this criterion. For HSIC estimation, we need samples $X_{N+1}$ independent of $D_n$: we thus rely on a cross-validation procedure; see Appendix A.3 for the proof and B.2 for implementation details. To illustrate why we advocate using HSIC over MI, namely numerical stability, we reproduce here our experiment on test case 2 from Section 4. For several values of $b$, we compute both criteria for a grid of $\theta_f$ candidate values, with $n = 100$ and 10-fold cross-validation. Figure 2 shows that HSIC (left) allows to clearly discriminate the lengthscales, while MI (right) suffers from estimation instability and is thus unusable in practice to identify a satisfying lengthscale.
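For reference, a biased V-statistic estimator of HSIC with Gaussian kernels [Gretton et al., 2005] is short to implement; this is a generic sketch, not the paper's cross-validated procedure, and the bandwidths are illustrative:

```python
import numpy as np

def hsic(x, y, sx=1.0, sy=1.0):
    """Biased V-statistic HSIC estimator with Gaussian kernels:
    HSIC = trace(K H L H) / n^2, with H the centering matrix."""
    n = len(x)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sx ** 2))
    L = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * sy ** 2))
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=300)
dependent = hsic(x, np.abs(x))               # y is a function of x
independent = hsic(x, rng.normal(size=300))  # y independent of x
```

The estimator returns a clearly larger value when the two samples are dependent, which is what makes it usable as a lengthscale-selection criterion on a grid of $\theta_f$ candidates.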
Figure 2: Test case 2 with $n = 100$. HSIC (left) and MI (right) criteria between $r(X, Y)$ and $f(X)$ as a function of $b$ and $\theta_f$ (confidence intervals obtained by bootstrap and optimal values of $\theta_f$ in dashed lines).

Remark 2 We believe that Proposition 4 is also of interest beyond kernel SoS, in the sense that it provides a principled way to tune hyperparameters in all score functions, such as in Hore and Barber [2024] or Braun et al. [2025], for example. However, its true potential would lie in generalizing Equation (12) to other score functions, which is left as future work. It could also be directly used as a loss function in Xie et al. [2024a], instead of estimating local coverage.

4 Experiments

In our experiments, we first focus on test cases with a relatively small number $n$ of training samples, and later increase $n$ to several thousands to highlight the benefits of our dual formulation.

4.1 Small data experiments

Our proposed kernel SoS algorithm is first compared in the small data regime to standard competitors for split CP: CQR (with random forests, following experiments from Romano et al. [2019], Hore and Barber [2024]) and rescaled scores. For the latter, we only consider two variants of GP for a fair comparison, since we also place ourselves in the RKHS setting. We focus on
a homoscedastic and a heteroscedastic [Binois et al., 2018] GP model, and consider here $Y = m(X) + \sigma(X)\epsilon$ with $Z = 10X + 1$:

Case 1: $m(X) = \sin(2\pi Z/5 + 0.2 \cdot 4\pi Z/5)\,\mathbf{1}_{X \le 9.6} + (Z/10 - 1)\,\mathbf{1}_{X > 9.6}$, $\sigma(X) = \sqrt{0.1 + 2X^2}$, $X \sim U[-1, 1]$, $\epsilon \sim N(0, 1)$
Case 2: $m(X) = X/2$, $\sigma(X) = |\sin(X)|$, $X \sim N(0, 1)$, $\epsilon \sim N(0, 1)$
Case 3: $m(X) = X/2$, $\sigma(X) = \frac{4}{3}\phi(2X/3)$, $X \sim N(0, 1)$, $\epsilon \sim N(0, 1)$
Case 4: $m(X) = 2\sin(\pi X) + \pi X$, $\sigma(X) = \sqrt{1 + X^2}$, $X \sim U[0, 1]$, $\epsilon \sim N(0, 1)$

We begin by illustrating the interaction between $b$ and $\theta_f$ in Figure 3 (left). For all values of $b$, we observe a consistent HSIC behaviour: it first increases with $\theta_f$, thus improving adaptivity, until it reaches a peak, and then decreases. Interestingly, we also note that the higher $b$, the higher the optimal $\theta_f$: this clearly shows that both hyperparameters have opposite effects on adaptivity. Furthermore, we see that for $b \ge 10$ we reach a plateau for the optimal HSIC. In practice it is thus sufficient to only optimize $\theta_f$ as soon as $b$ is fixed at a large enough value. Figure 3 also shows that a small value of $\theta_f$ leads to overly adaptive bands, while the HSIC-optimized $\theta_f$ produces smooth and adaptive ones.

Figure 3: Test case 2 with $n = 100$. Left: HSIC criterion between $r(X, Y)$ and $f(X)$ as a function of $b$ and $\theta_f$ (confidence intervals obtained by bootstrap and optimal values of $\theta_f$ in dashed lines). Middle / right: optimal prediction bands with too small and optimized lengthscale, respectively.

In the following, we now fix $b = 10$ and compare kernel SoS with HSIC-optimized $\theta_f$ to CQR and rescaled GPs on test case 1 with 20 replications. In Figure 4, we investigate several metrics related to adaptivity: mean width, MI, $R^2_{SQI}$ and local coverage (see Appendix B.4 for details).

Figure 4: Test case 1 with $n = 100$. Adaptivity metrics and density of local coverage.

We note that CQR and the standard GP yield larger intervals and local coverage with great variability around the target. The heteroscedastic GP produces intervals similar to ours, but with higher mean width. In contrast, kernel SoS gives prediction intervals with both small width and satisfying local coverage properties.
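For reproducibility, the benchmark data are simple to simulate; a sketch for test case 2, where the function and seed choices are illustrative:

```python
import numpy as np

def sample_case2(n, seed=0):
    """Test case 2: m(X) = X/2, sigma(X) = |sin(X)|, X, eps ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=n)
    eps = rng.normal(size=n)
    Y = X / 2 + np.abs(np.sin(X)) * eps
    return X, Y

X, Y = sample_case2(2000)
```

The noise level varies with the input through $|\sin(X)|$, which is precisely the heteroscedasticity that an adaptive band should track.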
This behavior is confirmed on test cases 2 and 3, as illustrated in Figures 5 and 6. For test case 2, we observe that all methods tend to produce prediction bands that are too large (hence a local coverage density leaning towards 1), with the notable exception of kernel SoS. Although MI and R²_SQI are similar for the heteroscedastic GP and kernel SoS, the latter has intervals with much smaller mean width.

Figure 5: Test case 2 with n = 100. Adaptivity metrics and density of local coverage.

Focusing on test case 3, the homoscedastic GP has poor local coverage, but small prediction bands. Kernel SoS and the heteroscedastic GP have highly similar characteristics, with excellent local coverage as well as low MI and intervals with small mean width.

Figure 6: Test case 3 with n = 100. Adaptivity metrics and density of local coverage.

Test case 4 exhibits a different behavior because the oracle prediction bands are actually close to constant: since R²_SQI is not a relevant indicator in such a situation (see Appendix B.4 for details), we discard it from our analysis. We display the remaining adaptivity metrics in Figure 7. As expected, the homoscedastic GP, which produces almost constant intervals, performs best in this setting. Except for CQR, all methods yield similar MI, as well as similarly satisfying local coverage. This time, however, kSoS tends to select larger intervals than the GPs: the good performance of the heteroscedastic GP actually comes from the fact that it often prefers a homoscedastic model during fitting. An additional experiment in higher dimension can be found in Appendix B.4.

Figure 7: Test case 4 with n = 100. Adaptivity metrics and density of local coverage.

4.2 Large scale experiments

Finally, we demonstrate that our dual formulation for solving the kernel SoS problem can scale to several hundreds or thousands of training points, unlike previous work that relies on SDP solvers. Figure 8 (left) shows that, as expected, an SDP solver can only handle up to n = 200 training samples, while our dual solver scales linearly with n and is robust to user tolerances; see Appendix B.3 for details. When we optimize θf with HSIC, we retrieve the optimal solution for n = 2000 displayed in Figure 8 (right). The only computational burden comes from the HSIC optimization with cross-validation, but this is an embarrassingly parallel step.

Figure 8: Test case 1. Left: time for the SDP and dual formulations as a function of n and tolerance (a = b = 0, θf = 0.3, max iter = 10⁴). Right: optimal solution of the dual formulation for n = 2000.

5 Conclusion

We introduce a generalized kernel sum-of-squares framework for building scalable and adaptive prediction bands for conformal prediction. Scalability is achieved through a new representer theorem together with a dual formulation which can be solved efficiently with accelerated gradient algorithms. Unlike previous work, this makes the kernel SoS paradigm for CP able to scale up to thousands of training points, as we illustrate in our experiments.
On the other hand, adaptivity and local coverage are targeted by optimizing hyperparameters with a new HSIC-based criterion, which is numerically robust and has excellent practical performance. Since this criterion appears promising, we plan to investigate its extension to more general score functions as a perspective.

A Proofs

A.1 Proof of Theorem 2 - representer theorem

To begin, we first introduce some notation. When considering objects related to the functions m and f, we use the superscripts m and f to differentiate them. To each function we associate an RKHS H^(·), a kernel k^(·), a kernel matrix K^(·), a feature map ϕ^(·), and a column vector such that, for X ∈ 𝒳,

k^(·)(X) = (k^(·)(X_1, X), …, k^(·)(X_n, X))^T.

For the kernel matrix [K^f]_ij = k^f(X_i, X_j) only, we consider its Cholesky decomposition and empirical feature map

K^f = V^T V  and  Φ(X) = V^{−T} k^f(X).

Next, for a Hilbert space H we write S(H) for the set of bounded Hermitian linear operators from H to H, and S_+(H) for those that are positive-definite. We also write S^n_+ = S_+(R^{n×n}) for the set of real, symmetric, positive-definite square matrices of size n.

To prove our representer theorem, we first rewrite (4) as

inf_{m ∈ H^m, A ∈ S_+(H^f)}  a · (1/n) Σ_{i=1}^n (Y_i − m(X_i))² + b · (1/n) Σ_{i=1}^n f_A(X_i) + λ₁‖A‖_⋆ + λ₂‖A‖²_F   (13)
s.t.  (Y_i − m(X_i))² − f_A(X_i) ≤ 0, i ∈ [n],
      ‖m‖²_{H^m} − s ≤ 0.

In order to show that there exists a finite-dimensional representation for both m⋆ and A⋆, we will first
show that for any fixed m ∈ H^m, the optimal A has a finite-dimensional representation. Indeed, if m ∈ H^m is fixed, the problem writes

inf_{A ∈ S_+(H^f)}  (b/n) Σ_{i=1}^n f_A(X_i) + λ₁‖A‖_⋆ + λ₂‖A‖²_F
s.t.  (Y_i − m(X_i))² − f_A(X_i) ≤ 0, i ∈ [n],

or equivalently

inf_{A ∈ S_+(H^f)}  L_m(f_A(X_1), …, f_A(X_n)) + Ω(A),

where

L_m(f_A(X_1), …, f_A(X_n)) = (b/n) Σ_{i=1}^n f_A(X_i)  if (Y_i − m(X_i))² − f_A(X_i) ≤ 0 for all i ∈ [n], and +∞ otherwise,

and Ω(A) = λ₁‖A‖_⋆ + λ₂‖A‖²_F. Since L_m : R^n → R ∪ {+∞} is lower semi-continuous and bounded below (notice that it is linear and bounded below by 0), we can apply Theorem 1 and Proposition 3 from Marteau-Ferey et al. [2020] to deduce that the solution is entirely characterized by a PSD matrix A ∈ R^{n×n} given by

inf_{A ∈ S^n_+}  (b/n) Σ_{i=1}^n f̃_A(X_i) + λ₁‖A‖_⋆ + λ₂‖A‖²_F
s.t.  (Y_i − m(X_i))² − f̃_A(X_i) ≤ 0, i ∈ [n],

with f̃_A(X_i) = ⟨Φ(X_i), A Φ(X_i)⟩ and Φ(X_i) = V^{−T} k^f(X_i). Since this representation is valid for any m ∈ H^m, Problem (13) is thus equivalent to

inf_{m ∈ H^m, A ∈ S^n_+}  (a/n) Σ_{i=1}^n (Y_i − m(X_i))² + (b/n) Σ_{i=1}^n f̃_A(X_i) + λ₁‖A‖_⋆ + λ₂‖A‖²_F   (14)
s.t.  (Y_i − m(X_i))² − f̃_A(X_i) ≤ 0, i ∈ [n],
      ‖m‖²_{H^m} − s ≤ 0.

Now, to show that m⋆ ∈ H^m has a finite-dimensional representation, we rely upon the dual problem. To do so, we first need to show that strong duality holds. Since our problem is convex, it is enough to check Slater's constraint qualification, i.e. we simply need to exhibit a strictly feasible point (m₀, A₀). We first take m₀ = 0 ∈ H^m, which satisfies ‖m₀‖²_{H^m} − s = −s < 0. If we assume that K^f has full rank (this assumption is always satisfied if k^f is universal and all training points X_i are distinct), it is invertible and we can define α = (K^f)^{−1} Y such that Y_i = Σ_{j=1}^n α_j k^f(X_i, X_j). Denote by B₀ ≻ 0 the positive-definite matrix with elements [B₀]_ii = α_i² + ϵ 1_{α_i = 0} and [B₀]_ij = 0 if i ≠ j, where ϵ > 0 (in general all α_i will be non-zero, but if this is not the case we introduce a small ϵ). Then by Cauchy-Schwarz we have, for all i = 1, …, n,

Y_i² = (Σ_{j=1}^n α_j k^f(X_i, X_j))² ≤ n Σ_{j=1}^n α_j² (k^f(X_i, X_j))² ≤ n Σ_{j=1}^n (α_j² + ϵ 1_{α_j = 0}) (k^f(X_i, X_j))² = n Σ_{j=1}^n [B₀]_jj (k^f(X_i, X_j))² = n f̃_{A₀}(X_i),

with A₀ = V B₀ V^T ≻ 0, which implies (Y_i − m₀(X_i))² = Y_i² < f̃_{A₀}(X_i) as soon as n ≥ 1. Consequently (m₀, A₀) is a strictly feasible point and strong duality holds. We thus introduce Γ ∈ R^n_+, the Lagrange multipliers associated with the first n constraints, and θ ∈ R_+, the Lagrange multiplier associated with the constraint on the norm of m. The Lagrangian function writes

L(m, A, Γ, θ) = (a/n) Σ_{i=1}^n (Y_i − m(X_i))² + (b/n) Σ_{i=1}^n f̃_A(X_i) + λ₁‖A‖_⋆ + λ₂‖A‖²_F + Σ_{i=1}^n Γ_i [(Y_i − m(X_i))² − f̃_A(X_i)] + θ (‖m‖²_{H^m} − s)
= Σ_{i=1}^n (a/n + Γ_i)(Y_i − m(X_i))² + θ (‖m‖²_{H^m} − s) + λ₁‖A‖_⋆ + λ₂‖A‖²_F − Σ_{i=1}^n (Γ_i − b/n) f̃_A(X_i).

Following Muzellec et al. [2022], Appendix C.5, the optimality condition of the Lagrangian w.r.t. m is

∇_m L(m, A, Γ, θ) = Σ_{i=1}^n 2(Γ_i + a/n) k^m(X_i, ·) (⟨m(·), k^m(X_i, ·)⟩_{H^m} − Y_i) + 2θ m(·)
= 2 Σ_{i=1}^n (Γ_i + a/n) [k^m(X_i, ·) ⊗ k^m(X_i, ·)](m) + 2θ m(·) − 2 Σ_{i=1}^n (Γ_i + a/n) Y_i k^m(X_i, ·),

and setting this gradient to 0 leads to

[Σ_{i=1}^n (Γ_i + a/n) (k^m(X_i, ·) ⊗ k^m(X_i, ·)) + θ I_n](m) = Σ_{i=1}^n (Γ_i + a/n) Y_i k^m(X_i, ·).

To conclude, we denote

C(Γ, θ) := Diag(Γ_a) K^m + θ I_n,  γ(Γ, θ) := C(Γ, θ)^{−1} Diag(Γ_a) Y,  Diag(Γ_a) := Diag(Γ) + (a/n) I_n,

such that m⋆ has the finite-dimensional representation

m⋆(X) = Σ_{i=1}^n γ_i k^m(X_i, X) = γ^T k^m(X).

In the end, plugging this expression into Problem (14) yields the semi-definite problem from Equation (7):

inf_{γ ∈ R^n, A ∈ S^n_+}  (a/n) Σ_{i=1}^n (Y_i − m(X_i))² + (b/n) Σ_{i=1}^n f̃_A(X_i) + λ₁‖A‖_⋆ + λ₂‖A‖²_F
s.t.  (Y_i − γ^T k^m(X_i))² − f̃_A(X_i) ≤ 0, i ∈ [n],
      γ^T K^m γ − s ≤ 0.

A.2 Proof of Proposition 2 - dual formulation

Proof. The dual problem of Equation (7) is defined as

d = sup_{Γ ∈ R^n_+, θ ∈ R_+} inf_{m ∈ H^m, A ∈ S^n_+} L(m, A, Γ, θ) = sup_{Γ ∈ R^n_+, θ ∈ R_+} D(Γ, θ),

where the dual function is D(Γ, θ) := inf_{m ∈ H^m, A ∈ S^n_+} L(m, A, Γ, θ). Remark first that the previous section already introduced the Lagrangian function

L(m, A, Γ, θ) = Σ_{i=1}^n (a/n + Γ_i)(Y_i − m(X_i))² + θ (‖m‖²_{H^m} − s) + λ₁‖A‖_⋆ + λ₂‖A‖²_F − Σ_{i=1}^n (Γ_i − b/n) f̃_A(X_i),

with optimality condition for m given by m⋆(X) = γ^T k^m(X). Now, we need to derive the optimality conditions for A:

inf_{A ∈ S^n_+}  λ₁‖A‖_⋆ + λ₂‖A‖²_F − Σ_{i=1}^n (Γ_i − b/n) f̃_A(X_i)
= inf_{A ∈ S^n_+}  λ₁‖A‖_⋆ + λ₂‖A‖²_F − ⟨A, V Diag(Γ_{−b}) V^T⟩   (15)
= − sup_{A ∈ S^n_+}  ⟨A, V Diag(Γ_{−b}) V^T⟩ − λ₁‖A‖_⋆ − λ₂‖A‖²_F
= − Ω⋆(V Diag(Γ_{−b}) V^T),   (16)

where Ω⋆ is the Fenchel conjugate of Ω. Equality (15) comes from the fact that Σ_{i=1}^n Γ_i f̃_A(X_i) = ⟨A, V Diag(Γ) V^T⟩. Indeed, recall that f̃_A(X_i) = ⟨Φ(X_i), A Φ(X_i)⟩ and Φ(X_i) = V^{−T} k^f(X_i), which yields

Σ_{i=1}^n Γ_i f̃_A(X_i) = Σ_{i=1}^n Γ_i ⟨Φ(X_i), A Φ(X_i)⟩ = Σ_{i=1}^n Γ_i Tr(Φ(X_i)^T A Φ(X_i)) = Σ_{i=1}^n Γ_i Tr(A Φ(X_i) Φ(X_i)^T) = ⟨A, Σ_{i=1}^n Γ_i Φ(X_i) Φ(X_i)^T⟩,

where the second term inside the brackets can be expressed as

Σ_{i=1}^n Γ_i Φ(X_i) Φ(X_i)^T = Σ_{i=1}^n Γ_i V^{−T} k^f(X_i) k^f(X_i)^T V^{−1} = V^{−T} [Σ_{i=1}^n Γ_i k^f(X_i) k^f(X_i)^T] V^{−1} = V^{−T} K^f Diag(Γ) (K^f)^T V^{−1} = V Diag(Γ) V^T.

Equation (16) is simply the definition of the Fenchel conjugate of Ω(A) = λ₁‖A‖_⋆ + λ₂‖A‖²_F. Finally, replacing m by its optimal value and the regularization terms involving A by the above expression, the dual function writes

D(Γ, θ) = r(Γ, θ)^T Diag(Γ_a) r(Γ, θ)  [first term]  + θ (γ(Γ, θ)^T K^m γ(Γ, θ) − s)  [second term]  − Ω⋆(V Diag(Γ_{−b}) V^T)  [third term],

where r(Γ, θ) := Y − K^m γ(Γ, θ), which corresponds to Proposition 2.

Gradient computation. To solve the dual problem d = sup_{Γ ∈ R^n_+, θ ∈ R_+} D(Γ, θ), we propose to use an accelerated gradient algorithm, which requires the gradient of the dual function with respect to the Lagrange multipliers; we provide explicit expressions below.
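Analytic gradients of this kind are error-prone to implement. A generic central finite-difference check can be used to validate an implementation; the objective D below is a simple smooth stand-in for illustration, not the actual dual function:

```python
import numpy as np

def finite_diff_grad(f, x, eps=1e-6):
    """Central finite-difference gradient, used to validate analytic gradients."""
    g = np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = eps
        g[j] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# Hypothetical stand-in objective: D(z) = -0.5 ||z||^2 + log(1 + (sum z)^2),
# whose analytic gradient is -z + 2 * sum(z) / (1 + sum(z)^2).
def D(z):
    return -0.5 * z @ z + np.log1p(z.sum() ** 2)

z0 = np.array([0.3, 1.2, 0.7])
analytic = -z0 + 2 * z0.sum() / (1 + z0.sum() ** 2) * np.ones_like(z0)
assert np.allclose(analytic, finite_diff_grad(D, z0), atol=1e-5)
```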
In what follows, J_jj denotes the matrix filled with zeros except for a 1 at row j and column j.

Gradient of the first term r^T Diag(Γ_a) r. It is straightforward to show that

∂C/∂Γ_j = (∂Diag(Γ_a)/∂Γ_j) K^m = J_jj K^m,
∂C^{−1}/∂Γ_j = −C^{−1} (∂C/∂Γ_j) C^{−1} = −C^{−1} J_jj K^m C^{−1},
∂γ/∂Γ_j = (∂C^{−1}/∂Γ_j) Diag(Γ_a) Y + C^{−1} (∂Diag(Γ)/∂Γ_j) Y = C^{−1} J_jj (−K^m C^{−1} Diag(Γ_a) + I_n) Y = C^{−1} J_jj u,
∂r/∂Γ_j = −K^m ∂γ/∂Γ_j = −K^m C^{−1} J_jj u,

where u := (−K^m C^{−1} Diag(Γ_a) + I_n) Y. We then get

∂(r^T Diag(Γ_a) r)/∂Γ_j = r_j² + 2 r^T Diag(Γ_a) ∂r/∂Γ_j = r_j² − 2 r^T Diag(Γ_a) K^m C^{−1} J_jj u = r_j² − 2 s^T J_jj u,
∂(r^T Diag(Γ_a) r)/∂Γ = r^{⊙2} − 2 s ⊙ u,

where s := r^T Diag(Γ_a) K^m C^{−1}. Similarly,

∂C/∂θ = I_n,
∂C^{−1}/∂θ = −C^{−1} (∂C/∂θ) C^{−1} = −C^{−2},
∂γ/∂θ = (∂C^{−1}/∂θ) Diag(Γ_a) Y = −C^{−2} Diag(Γ_a) Y,
∂r/∂θ = −K^m ∂γ/∂θ = K^m C^{−2} Diag(Γ_a) Y =: t,

such that

∂(r^T Diag(Γ_a) r)/∂θ = 2 r^T Diag(Γ_a) ∂r/∂θ = 2 r^T Diag(Γ_a) t.

Gradient of the second term θ(γ^T K^m γ − s).

∂[θ(γ^T K^m γ − s)]/∂Γ_j = 2θ γ^T K^m ∂γ/∂Γ_j = 2θ γ^T K^m C^{−1} J_jj u = 2θ p^T J_jj u,
∂[θ(γ^T K^m γ − s)]/∂Γ = 2θ p ⊙ u,

where p := γ^T K^m C^{−1}. We also have

∂[θ(γ^T K^m γ − s)]/∂θ = (γ^T K^m γ − s) + 2θ γ^T K^m ∂γ/∂θ = (γ^T K^m γ − s) − 2θ γ^T K^m C^{−2} Diag(Γ_a) Y = (γ^T K^m γ − s) − 2θ γ^T t.

Gradient of the third term Ω⋆(V Diag(Γ_{−b}) V^T). From Marteau-Ferey et al. [2020], Lemma 5, we have

∇Ω⋆(A⋆) = (1/(2λ₂)) [A⋆ − λ₁ I_n]_+.

We then use the chain rule following Equation (137) from Petersen and Pedersen [2012]:

∂Ω⋆(V Diag(Γ_{−b}) V^T)/∂Γ_j = Tr[∇Ω⋆(V Diag(Γ_{−b}) V^T)^T V J_jj V^T] = [V^T ∇Ω⋆(V Diag(Γ_{−b}) V^T)^T V]_jj,
∂Ω⋆(V Diag(Γ_{−b}) V^T)/∂Γ = Diag(V^T ∇Ω⋆(V Diag(Γ_{−b}) V^T)^T V).

Recovering the solution from optimal Lagrange multipliers. Once the dual problem is solved (in practice, convergence of our accelerated gradient algorithm is checked with small relative tolerances on the constraints and the duality gap, e.g. 10⁻⁴; see Appendix B.3), we need to recover the optimal solutions of the primal problem. Denoting by Γ̂ ∈ R^n_+ and θ̂ ∈ R_+ the optimal Lagrange multipliers, the approximated mean function is recovered via

γ̂ = (Diag(Γ̂_a) K^m + θ̂ I_n)^{−1} Diag(Γ̂_a) Y.

On the other hand, to reconstruct the matrix A, we follow Theorem 8 from Marteau-Ferey et al. [2020]:

Â = ∇Ω⋆(V Diag(Γ̂_{−b}) V^T) = (1/(2λ₂)) [V Diag(Γ̂_{−b}) V^T − λ₁ I_n]_+.

A.3 Proof of Proposition 4 - local coverage

Before giving a detailed proof of our bounds, we first recall the definitions of the maximum mean discrepancy and the Hilbert-Schmidt independence criterion.

MMD and HSIC.

Definition 1 (Maximum Mean Discrepancy [Smola et al., 2007]). Let X and Y be random vectors defined on a topological space Z, with respective Borel probability measures P_X and P_Y. Let k : Z × Z → R be a kernel function and let H(k) be the associated reproducing kernel Hilbert space. The maximum mean discrepancy between P_X and P_Y is defined as

MMD_k(P_X, P_Y) = sup_{‖f‖_{H(k)} ≤ 1} |E_{X∼P_X}[f(X)] − E_{Y∼P_Y}[f(Y)]|.

The squared MMD admits the following closed-form expression:

MMD_k(P_X, P_Y)² = E_{X,X'∼P_X}[k(X, X')] + E_{Y,Y'∼P_Y}[k(Y, Y')] − 2 E_{X∼P_X, Y∼P_Y}[k(X, Y)],

which can be estimated thanks to U- or V-statistics. Now, given a pair of random vectors (U, V) ∈ 𝒳 × 𝒴 with probability distribution P_UV, we define the product RKHS H = F × G with kernel k_H((u, v), (u', v')) = k_X(u, u') k_Y(v, v'). A measure of the dependence between U and V can then be defined as the distance between the mean embedding of P_UV and that of P_U ⊗ P_V, the joint distribution with independent marginals P_U and P_V:

MMD²(P_UV, P_U ⊗ P_V) = ‖μ_{P_UV} − μ_{P_U} ⊗ μ_{P_V}‖²_H.

This measure is the so-called Hilbert-Schmidt independence criterion (HSIC, see Gretton et al. [2005]) and can be expanded as

HSIC(U, V) = MMD²(P_UV, P_U ⊗ P_V)
= E_{U,U',V,V'}[k_X(U, U') k_Y(V, V')] + E_{U,U'}[k_X(U, U')] E_{V,V'}[k_Y(V, V')] − 2 E_{U,V}[E_{U'} k_X(U, U') E_{V'} k_Y(V, V')],

where (U', V') is an independent copy of (U, V).
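This expansion underlies the standard plug-in (V-statistic) estimator ĤSIC = tr(K H L H)/n², where K and L are the Gram matrices of the two samples and H = I − 11ᵀ/n is the centering matrix. A minimal sketch with Gaussian kernels (for illustration only; our experiments use the energy-distance kernel of Appendix B.2):

```python
import numpy as np

def rbf_gram(x, lengthscale=1.0):
    """Gaussian-kernel Gram matrix of a 1-D sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * lengthscale ** 2))

def hsic_v(u, v, lengthscale=1.0):
    """Biased (V-statistic) HSIC estimate: tr(K H L H) / n**2."""
    n = len(u)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_gram(u, lengthscale), rbf_gram(v, lengthscale)
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=500)
dep = hsic_v(x, x + 0.1 * rng.normal(size=500))  # strongly dependent pair
ind = hsic_v(x, rng.normal(size=500))            # independent pair
assert dep > ind
```

As expected, the estimate is markedly larger for the dependent pair than for the independent one, for which it vanishes at rate O(1/n).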
Once again, the reproducing property implies that HSIC can be expressed through expectations of kernels, which facilitates its estimation compared to other dependence measures such as mutual information.

Bounds on local coverage. To lighten notation, we write X = X_{N+1}, Y = Y_{N+1}, R = |Y − m̂_{D_N}(X)|, V = f̂_{D_N}(X) and S = R/√V. In the frame of Proposition 4, we work conditionally on D_N, which means that in what follows m̂_{D_N}(·) and f̂_{D_N}(·) are deterministic functions. The chain rule for mutual information gives

MI((X, V), R) = MI(X, R) + MI(V, R | X) = MI(V, R) + MI(X, R | V).

Conditionally on X, V is constant, and then R and V are independent. This implies MI(V, R | X) = 0 and

MI(X, R) − MI(V, R) = MI(X, R | V).

We now write

MI(X, S) = MI(X, R/√V)
≤ MI(X, (R, V))       (∀g, MI(g(X), Y) ≤ MI(X, Y))
≤ MI((X, V), (R, V))  (MI((X₁, X₂), Y) ≥ MI(X₁, Y))
= MI(X, R | V) + H(V)  (MI(X, Y | Z) = MI((X, Z), (Y, Z)) − H(Z))
≤ MI(X, R | V) + H(X)  (∀g, H(g(X)) ≤ H(X))
= MI(X, R) − MI(V, R) + H(X),

and we observe that only MI(V, R) depends on V. We thus deduce that

1 − exp(−MI(X, S)) ≤ 1 − α₁ exp(MI(V, R)),

where α₁ = exp(−MI(X, R) − H(X)) is independent of V, which proves Equation (11).

For the second part of the proposition, from Equation (15) in Wang and Tay [2023], we have the bound

TV(P, Q) ≥ (1/(2√M_k)) MMD_k(P, Q),

where TV(P, Q) = sup_{A ∈ F} |P(A) − Q(A)| and MMD_k(P, Q) are the total variation distance and the maximum mean discrepancy between probability distributions P and Q defined on a measurable space (Ω, F), respectively. Here the MMD depends on the choice of a kernel k, which is bounded by M_k = sup_{x ∈ 𝒳} k(x, x) and must be characteristic for the inequality to hold. We then apply this inequality to P = P_VR, the joint distribution of (V, R), and Q = P_V ⊗ P_R, the joint distribution with independent marginals P_V and P_R, to get

1 − exp(−MI(V, R)) ≥ TV²(P_VR, P_V ⊗ P_R) ≥ α₂ HSIC(V, R),

where the inequality on the left is the Bretagnolle-Huber inequality, the inequality on the right comes from the HSIC definition HSIC(X, Y) = MMD²(P_XY, P_X ⊗ P_Y), and we denote α₂ = 1/(4 M_k) with k the kernel used in HSIC. We finally have

1 − α₁ exp(MI(V, R)) ≤ 1 − α₁ / (1 − α₂ HSIC(V, R)),

and Equation (12) follows.

Remark 3. In our initial mutual information bound, we replaced H(V) by H(X) in order to obtain a bound which can be expressed with HSIC only. This may seem crude, and of course we could easily incorporate H(V) in our criterion to get a sharper bound; however, our numerical experiments show that even without this term the criterion yields satisfying adaptivity.

B Additional experiments and details

B.1 Hyperparameter influence

Figure 1 quantitatively evaluates the influence of each hyperparameter in our kernel SoS formulation, obtained through the following extensive study:

1. We select test cases 1, 2 and 3 (see Appendix B.4 for details).
2. For each test case:
   (a) We generate a training dataset of size n = 50, 100 for 3 different random seeds and 10 different values of θf.
   (b) For each hyperparameter of interest a, b, λ₁ and λ₂:
       i. We fix all hyperparameters to 1 except the one of interest, which varies between 10⁻³ and 10⁷.
       ii. We solve the kernel SoS problem.
       iii. We compute the root mean-squared error, the mean width, the nuclear norm and the Frobenius norm of the optimal solution.
   (c) We normalize these indicators by their median over all combinations, so that the different test cases can be compared on a common ground.

B.2 Cross-validation for kernel hyperparameter estimation

Let K be the number of folds. For k ∈ [K], we write D_k for the k-th fold and D_{−k} = D_n \ D_k. We denote by m̂_{−k}, f̂_{−k} the mean and scaling functions trained on D_{−k} according to Equation (8). Define the two sets

R_K = ∪_{k=1}^K {(Y_i − m̂_{−k}(X_i))²}_{i ∈ D_k}  and  F_K = ∪_{k=1}^K {f̂_{−k}(X_i)}_{i ∈ D_k}.
We seek

max_{θf ∈ R^d} ĤSIC(R, F),   (17)

where ĤSIC(R, F) is estimated with the samples R_K and F_K. In all our experiments, we use the energy distance kernel k(x, x') = |x| + |x'| − |x − x'|, which has been shown to be characteristic by Sejdinovic et al. [2013]. In practice, since HSIC is estimated from samples, the objective function of this optimization problem is noisy; we consequently resort to the BOBYQA optimization algorithm [Powell et al., 2009]. Once the lengthscales θf are selected, we train the scores on the whole pre-training dataset to learn m̂_{D_n} and f̂_{D_n}. From there, we use the calibration set following the split conformal procedure. Our complete algorithm is summarized in Algorithm 1.

Algorithm 1: Split CP with adaptive kernel SoS
Data: D_N = D_n ∪ D_m, the pre-training and calibration datasets.
Input: (a, b, λ₁) ∈ R³_+, λ₂ > 0, the error rate α ∈ ]0, 1[, and kernels k^m and k^f.
1. Fit a GP on D_n with estimated nugget effect. Retrieve the estimated lengthscales θm and set s = ‖m_GP‖².
2. Solve Equation (17) to estimate θf.
3. Solve Equation (8) to get m̂_{D_n} and f̂_{D_n}.
4. Compute the scores {s_i}_{i ∈ D_m} = {(Y_i − m̂_{D_n}(X_i))² / f̂_{D_n}(X_i)}_{i ∈ D_m}.
5. Compute the adjusted level quantile q̂_α({s_i}_{i ∈ D_m}) of these scores.
Output: Ĉ_{D_N}(·) = m̂_{D_n}(·) ± sqrt(q̂_α f̂_{D_n}(·)).

B.3 Implementation details

This section provides additional details about the SDP and dual solvers used throughout the experiments.

SDP solver. The SDP problem is solved with the SCS algorithm [O'Donoghue, 2021, O'Donoghue et al., 2023], available in the convex optimization software CVXPY [Diamond and Boyd, 2016, Agrawal et al., 2018]. An example script implementing the SDP problem defined by Equation (7) is shown below. Here, vector_variable and matrix_variable denote the unknowns γ ∈ R^n and A ∈ S^n_+. The remaining variables, such as the Gram matrix Km_train (K^m) and the feature matrix phi_variance_gram_matrix (Φ(X)), have to be computed by the user from the training set and the chosen kernel functions.

import numpy as np
import cvxpy as cp

# Variables
matrix_variable = cp.Variable((n, n), symmetric=True)
vector_variable = cp.Variable((n, d))

# Expressions
f_A = cp.reshape(
    cp.diag(
        phi_variance_gram_matrix.T @ matrix_variable @ phi_variance_gram_matrix
    ),
    (-1, 1),
    order="C",
)
mean_estimator = cp.matmul(Km_train, vector_variable)

# Objective
objective = cp.Minimize(
    (a / n) * cp.sum_squares(y_train - mean_estimator)
    + (b / n) * cp.sum(f_A)
    + lambda1 * cp.trace(matrix_variable)
    + lambda2 * cp.square(cp.norm(matrix_variable, "fro"))
)

# Constraints
constraints = [
    matrix_variable >> 0,
    f_A >= (y_train - mean_estimator) ** 2,
    cp.quad_form(vector_variable, Km_train, assume_PSD=True) <= s,
]

# Problem definition and solve
problem = cp.Problem(objective, constraints)
problem.solve(solver=cp.SCS, verbose=False)

Listing 1: Solving the SDP problem with CVXPY and SCS.

Dual formulation algorithm. We optimize the dual objective of Equation (8) using a projected gradient method with Nesterov acceleration, enforcing feasibility by clipping the Lagrange multipliers and applying convergence checks on constraint satisfaction and the duality gap. The gradient is computed analytically and exploits problem structure, including sparsity and Cholesky factorization, for efficiency. Our implementation leverages JAX [Bradbury et al., 2018] to accelerate the projected gradient method and enable GPU support.
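Stripped of the JAX-specific machinery, the iteration is standard projected gradient descent with Nesterov momentum, where feasibility of the multipliers is enforced by clipping to the nonnegative orthant after each step. A dependency-free sketch on a toy objective (the quadratic and its known solution are illustrative stand-ins, not the actual dual):

```python
import numpy as np

def projected_nesterov(grad, z0, lr=0.1, iters=1000):
    """Nesterov-accelerated gradient descent with projection onto the
    nonnegative orthant (the feasible set of the Lagrange multipliers)."""
    z, z_prev = z0.copy(), z0.copy()
    for t in range(1, iters + 1):
        mom = (t - 1) / (t + 2)
        look = z + mom * (z - z_prev)                # look-ahead point
        z_prev = z
        z = np.maximum(look - lr * grad(look), 0.0)  # gradient step + clipping
    return z

# Toy strongly convex problem: min 0.5 * ||z - c||^2 over z >= 0,
# whose solution is max(c, 0); c is an arbitrary illustrative target.
c = np.array([1.5, -2.0, 0.3])
z_star = projected_nesterov(lambda z: z - c, np.zeros(3))
assert np.allclose(z_star, np.maximum(c, 0.0), atol=1e-6)
```

In the actual solver the same loop consumes the analytic dual gradients of Appendix A.2 and adds duality-gap and constraint checks as stopping criteria.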
We use just-in-time compilation to accelerate key components such as the dual objective, the gradient computation, and the update steps, enabling GPU/TPU acceleration and reduced runtime through ahead-of-time compilation with XLA. This results in a solver that is both scalable and portable across CPU and GPU environments, making it practical for medium- to large-scale datasets. We use Optax's [DeepMind et al., 2020] implementation of gradient descent with Nesterov acceleration to optimize the dual function.

In Table 1 below, we complement the time comparison on test case 1 from Figure 8 (left) with the standard deviations of the computation time. All experiments are carried out with a = b = 0, θf = 0.3 and a learning rate of 10⁻².

Table 1: Computation time (mean ± std in seconds) for each n and stopping tolerance tol.

              Device: CPU                                 Device: GPU
n     tol=0.01       tol=0.001      tol=0.0001     tol=0.01       tol=0.001      tol=0.0001
100    1.32 ± 0.89    1.37 ± 0.16    2.30 ± 0.81    2.41 ± 0.72    3.28 ± 0.15    5.00 ± 0.14
200    2.23 ± 0.56    4.30 ± 1.36    4.88 ± 0.11    2.69 ± 0.35    4.61 ± 0.18    6.01 ± 0.14
300    3.76 ± 0.19   10.58 ± 1.57   12.67 ± 1.37    3.55 ± 0.33    8.27 ± 0.15   10.24 ± 0.16
400   11.35 ± 0.63   31.07 ± 1.49   34.20 ± 12.64   6.12 ± 0.33   14.17 ± 0.20   14.05 ± 0.17
500   11.05 ± 0.92   30.29 ± 2.47   52.22 ± 1.80    5.78 ± 0.32   15.22 ± 0.88   25.63 ± 0.16
600   22.71 ± 7.10   32.06 ± 2.19   42.85 ± 2.58   10.20 ± 0.38   14.84 ± 0.22   19.26 ± 0.22
800   25.69 ± 1.90   41.91 ± 1.93   66.07 ± 2.06   12.34 ± 1.21   19.10 ± 0.25   30.06 ± 0.33
1000  41.59 ± 1.32   83.08 ± 1.39  136.86 ± 1.48   17.86 ± 0.60   34.23 ± 0.34   56.37 ± 0.33

Illustration. As a sanity check, we show in Figure 9 that the solution of our dual formulation algorithm coincides with that of the SDP solver on test case 1, and in Figure 10 the corresponding HSIC estimation with 5-fold cross-validation.

Figure 9: Optimal solution obtained with our dual formulation and the SDP solver SCS on test case 1 with n = 100, trained with hyperparameters a = b = 0, λ₁ = λ₂ = 1, θf = 0.4. The dual solver was run with a convergence tolerance of 10⁻⁴ and a learning rate of 0.01. Note that here the prediction bands are not calibrated.

Figure 10: Test case 1 with n = 1000 and a = b = 0, λ₁ = λ₂ = 1. HSIC criterion between r(X, Y) and f(X) as a function of θf, with the optimal value of θf in dashed line.

B.4 Additional numerical experiments

Formulation B versus formulation A. As mentioned in Section 2.2, Marteau-Ferey et al. [2020] provide two equivalent formulations of the kernel SoS problem in terms of a PSD matrix B or A. Though theoretically equivalent, we propose to investigate their respective numerical efficiency. For different test cases, lengthscales and random seeds, we record the computational time of the SCS solver to obtain the optimal solution with the B formulation, the A formulation, and with or without Frobenius regularization for both; see Figure 11. The latter serves as a numerical illustration of the interest of this regularization from a computational perspective, beyond the theoretical one related to strong convexity.

Figure 11: Time comparison between formulations A and B, with or without Frobenius regularization. Each model has been run on test cases 1, 2 and 3 with three random seeds, two sample sizes n = 50 and n = 100, and ten lengthscales between 0.1 and 1.0. The y-axis is plotted on a logarithmic scale.
On average over the different hyperparameter combinations, we observe a significant reduction between the two formulations, with formulation A yielding 90% computational savings. Setting λ₂ > 0 also improves resolution times, in particular when b > 0.

Evaluation metrics. For our experiments, we compute the following metrics:

• The mean width of the prediction intervals on the test set.
• The mutual information between X and S(X, Y) on the test set.
• The R²_SQI criterion proposed in Deutschmann et al. [2024]. We first discretize the interval widths of the test samples with quantiles, leading to a partition of the test set. For each cell of the partition, we compute the (1 − α)-quantile of the absolute residuals and the median width: R²_SQI is the R² determination coefficient of the linear regression without intercept between these two.
• The local coverage, obtained by approximating P(Y_{N+1} ∈ Ĉ(X_{N+1}) | X_{N+1} = x) by its empirical counterpart with samples from Y_{N+1} (of size n_Y) at different random locations X_{N+1} (of size n_X).

We discuss the R²_SQI criterion proposed in Deutschmann et al. [2024] in further detail, using Figure 12 as a visual aid. This illustration excludes the CQR model due to the lack of a predictor for the mean function.

For the homoscedastic GP model (left), we observe a constant width of the prediction bands. Consequently, the median width quantiles form a constant line with respect to the absolute error quantiles. The linear model without intercept thus fits these data very poorly and results in a negative determination coefficient.

Figure 12: Test case 1 with n = 100, n_X = 100 and n_Y = 1000. We showcase prediction bands (top row) and the absolute residual quantiles versus the median width quantiles (bottom row) for three models: homoscedastic GP (left column), heteroscedastic GP (middle column) and kernel SoS (right column). We use 50 quantiles to compute R²_SQI.

This behavior is observed throughout cases 1, 2 and 3, where the homoscedastic GP always outputs constant prediction bands. For the heteroscedastic GP model, we see that the prediction bands are more adaptive than the homoscedastic ones, resulting in a positive determination coefficient. However, the linear regression is not perfect: we can spot some bands that overcover (top row, middle column), for X ∈ (0.75, 1.0] and around X = 0.5 and X = −0.5. On the corresponding quantile plot, these points correspond to the median width quantiles that lie above the regression line in red. We see that the model mostly overcovers for points with high residuals. In addition, the model also undercovers in some regions, for instance at X = −0.2, which corresponds to the black points in the lower left of the quantile regression plot, where the absolute errors are very low, or at X = −0.7 and X = −1, where the absolute errors are larger. The latter correspond to the black points in the middle of the figure, under the red regression line.

Finally, for the kernel SoS model, we observe a better fit of the linear regression. In particular, the prediction bands (top row, right column) are indeed narrower in the middle, where the mean predictor makes smaller errors and the data are less noisy, and wider at both ends, where the data are noisier, resulting in a mean predictor with larger errors.

Additional test cases and results. The complete list of test cases we consider is given below.
Note that we only provide R²_SQI for kernel SoS and the heteroscedastic GP: the homoscedastic GP typically yields constant bands with a largely negative determination coefficient, and CQR does not provide a mean function predictor. For all experiments related to adaptivity metrics, we perform 20 replications with different random seeds, and local coverage is estimated with n_X = 100 independent random locations X_{N+1}, for each of which we generate n_Y = 1000 independent samples from Y_{N+1}. R²_SQI, MI and mean width are estimated with a test set of size n_test = 1000.

Case 1. Inspired from Gramacy and Lee [2009]:

d = 1,  X ∼ U[−1, 1],  Y = m(X) + σ(X)ϵ,  ϵ ∼ N(0, 1),
m(X) = sin(π(2X + 1/5)) + 0.2 cos(4π(2X + 1/5)) if 10X + 1 ≤ 9.6, and X − 9/10 otherwise,
σ(X) = sqrt(0.1 + 2X²).

Figures 4 and 8 from the main paper report the results on this test case.

Case 2. Corresponds to setting 1 in Hore and Barber [2024]:

X ∼ N_d(0, I_d),  Y = m(X) + σ(X)ϵ,  ϵ ∼ N(0, 1),
m(X) = 0.5 Σ_{i=1}^d X^(i),
σ(X) = Σ_{i=1}^d |sin(X^(i))|.

Results for d = 1 and n = 100 are to be found in Figure 5. Figure 13 shows the optimal solution of our dual formulation for n = 2000.

Figure 13: Test case 2 with d = 1 and n = 2000. Optimal solution of the dual formulation for a = 0, b = 1000, λ₁ = λ₂ = 1 and θf = 1.2.

Remark 4. We do not investigate this test case in larger dimension since, by concentration due to the additive structure, the intervals tend to be constant as d increases. The same comment applies to test case 3. For brevity, we thus postpone the investigation of such a setting to test case 4.

Case 3. Corresponds to setting 2 in Hore and Barber [2024]:

X ∼ N_d(0, I_d),  Y = m(X) + σ(X)ϵ,  ϵ ∼ N(0, 1),
m(X) = 0.5 Σ_{i=1}^d X^(i),
σ(X) = Σ_{i=1}^d (4/3) ϕ(2X^(i)/3), with ϕ the pdf of the standard Gaussian.

For d = 1 and n = 100, adaptivity metrics are given in Figure 6. With n = 2000, we obtain in Figure 14 the optimal solution of the dual formulation.

Figure 14: Test case 3 with d = 1 and n = 2000. Optimal solution of the dual formulation for a = b = 0, λ₁ = λ₂ = 1 and θf = 0.9.

Case 4. Inspired from Kivaranovic et al. [2020]:

X ∼ U[0, 1]^d,  Y = m(X) + σ(X)ϵ,  ϵ ∼ N(0, 1),
m(X) = 2 sin(π β^T X) + π β^T X,
σ(X) = sqrt(1 + (β^T X)²).

In dimension d = 1 we set β = 1 and obtain the adaptivity metrics given in Figure 7. We give in Figure 15 the optimal solution of the dual formulation obtained with n = 2000.

Figure 15: Test case 4 with d = 1 and n = 2000. Optimal solution of the dual formulation for a = b = 0, λ₁ = λ₂ = 1 and θf = 1.6.

We now turn to the same test case in dimension d = 5 with n = 150 and β = (1, 0.1, 0.1, 0.1, 0.1). Figure 16 shows a comparison with all competitors in terms of adaptivity metrics; note that we also do not compute MI, since its estimation is not robust in this dimension. We can first remark that all methods tend to overcover more than in dimension d = 1. But, similarly to d = 1, the homoscedastic GP is here also the best model in terms of local coverage and mean width. Interestingly, the performance of the heteroscedastic GP and CQR degrades, while kSoS still achieves local coverage as good as the homoscedastic GP, although with slightly larger intervals on average.

Figure 16: Test case 4 with d = 5 and n = 150. Adaptivity metrics and density of local coverage.

Non-symmetric intervals.
We can readily extend our kernel SoS framework to non-symmetric prediction intervals of the form

Ĉ_N(X) = [m̂(X) − f̂_{A_low}(X) − q̂_α, m̂(X) + f̂_{A_up}(X) + q̂_α]

with score function S(X, Y) = max(m(X) − f_{A_low}(X) − Y, Y − m(X) − f_{A_up}(X)), inspired by CQR. In this setting, the kernel SoS problem writes

inf_{m ∈ H^m, A_low, A_up ∈ S_+(H^f)}  (a/n) Σ_{i=1}^n (Y_i − m(X_i))² + (b/n) Σ_{i=1}^n (f_{A_low}(X_i) + f_{A_up}(X_i)) + λ₁‖A_low‖_⋆ + λ₂‖A_low‖²_F + λ₁‖A_up‖_⋆ + λ₂‖A_up‖²_F
s.t.  f_{A_low}(X_i) ≥ m(X_i) − Y_i, i ∈ [n],
      f_{A_up}(X_i) ≥ Y_i − m(X_i), i ∈ [n],
      ‖m‖²_{H^m} ≤ s.

Contrary to the symmetric noise case, here the constraints no longer ensure a good fit of the regression function: this means that we need to set a > 0. Also remark that different RKHSs can be chosen for A_low and A_up in order to adapt to different regularities of the left and right tails of the conditional distribution. As an illustration, we focus on a test case from Braun et al. [2025], which involves an exponentially distributed noise:

d = 1,  X ∼ U[−1, 1],  Y = m(X) + σ(X)ϵ,  ϵ ∼ E(1),
m(X) = sin(2X),
σ(X) = 0.5 + 2X.

The optimal solution of the kSoS problem is given in Figure 17 for symmetric and non-symmetric intervals with a = 0 and a = 1000, for n = 100. First observe that when a = 0 the mean function is biased, unlike for a = 1000. In addition, breaking the symmetry clearly improves adaptivity.

Figure 17: Test case 5. Optimal mean function and non-symmetric prediction intervals (top) and symmetric intervals (bottom) for a = 0 (left) and a = 1000 (right), with parameters b = 1, λ₁ = 1, λ₂ = 1, θf = 0.7. (a) Non-symmetric intervals and a = 0. (b) Non-symmetric intervals and a = 1000. (c) Symmetric intervals and a = 0. (d) Symmetric intervals and a = 1000.

References

Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2005. ISBN 9780262256834. doi: 10.7551/mitpress/3206.001.0001. URL https://doi.org/10.7551/mitpress/3206.001.0001.

Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001. doi: 10.1023/A:1010933404324.

Hao Wang and Dit-Yan Yeung. A survey on Bayesian deep learning. ACM Computing Surveys, 53(5), 2020. ISSN 0360-0300. doi: 10.1145/3409383. URL https://doi.org/10.1145/3409383.

A. Gammerman, V. Vovk, and V. Vapnik. Learning by transduction. In Conference on Uncertainty in Artificial Intelligence, 1998.

Harris Papadopoulos, Kostas Proedrou, Vladimir Vovk, and Alexander Gammerman. Inductive confidence machines for regression. In European Conference on Machine Learning, 2002. URL https://api.semanticscholar.org/CorpusID:42084298.

Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, 9:371–421, 2008. URL http://jmlr.org/papers/volume9/shafer08a/shafer08a.pdf.

Anastasios N. Angelopoulos and Stephen Bates. Conformal prediction: A gentle introduction. Foundations and Trends in Machine Learning, 16(4):494–591, 2023. ISSN 1935-8237. doi: 10.1561/2200000101. URL https://doi.org/10.1561/2200000101.

Vineeth Balasubramanian, Shen-Shyang Ho, and Vladimir Vovk. Conformal Prediction for Reliable Machine Learning: Theory, Adaptations and Applications. Newnes, 2014.

Janette Vazquez and Julio C. Facelli. Conformal prediction in clinical medical sciences.
Journal of Healthcare Informatics Research, 6(3):241–252, 2022.

Ulysse Marteau-Ferey, Francis Bach, and Alessandro Rudi. Non-parametric models for non-negative functions. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 12816–12826. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/968b15768f3d19770471e9436d97913c-Paper.pdf.

Alessandro Rudi, Ulysse Marteau-Ferey, and Francis Bach. Finding global minima via kernel approximations. Mathematical Programming, 209(1):703–784, January 2025. ISSN 1436-4646. doi: 10.1007/s10107-024-02081-4. URL https://doi.org/10.1007/s10107-024-02081-4.

Adrien Vacher, Boris Muzellec, Alessandro Rudi, Francis Bach, and Francois-Xavier Vialard. A dimension-free computational upper-bound for smooth optimal transport estimation. In Mikhail Belkin and Samory Kpotufe, editors, Proceedings of Thirty Fourth Conference on Learning Theory, volume 134 of Proceedings of Machine Learning Research, pages 4143–4173. PMLR, 15–19 Aug 2021. URL https://proceedings.mlr.press/v134/vacher21a.html.

Alessandro Rudi and Carlo Ciliberto. PSD representations for effective probability models. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 19411–19422. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/a1b63b36ba67b15d2f47da55cdb8018d-Paper.pdf.

Boris Muzellec, Francis Bach, and Alessandro Rudi. Learning PSD-valued functions using kernel sums-of-squares, 2022. URL https://arxiv.org/abs/2111.11306.

Tengyuan Liang. Universal prediction band via semi-definite programming. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(4):1558–1580, August 2022. ISSN 1467-9868. doi: 10.1111/rssb.12542. URL http://dx.doi.org/10.1111/rssb.12542.
Jianqing Fan, Jiawei Ge, and Debarghya Mukherjee. UTOPIA: Universally trainable optimal prediction intervals aggregation, 2024. URL https://arxiv.org/abs/2306.16549.

Yaniv Romano, Evan Patterson, and Emmanuel Candès. Conformalized quantile regression. In
H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/5103c3584b063c431bd1268e9b5e76fb-Paper.pdf.

Jing Lei and Larry Wasserman. Distribution-free prediction bands for non-parametric regression. Journal of the Royal Statistical Society Series B: Statistical Methodology, 76(1):71–96, January 2014. doi: 10.1111/rssb.12021. URL https://doi.org/10.1111/rssb.12021.

U. Johansson, Henrik Boström, Tuwe Löfström, and Henrik Linusson. Regression conformal prediction with random forests. Machine Learning, 97:155–176, 2014. URL https://api.semanticscholar.org/CorpusID:14015369.

Harris Papadopoulos. Guaranteed coverage prediction intervals with Gaussian process regression. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(12):9072–9083, December 2024. ISSN 1939-3539. doi: 10.1109/tpami.2024.3418214. URL http://dx.doi.org/10.1109/TPAMI.2024.3418214.

Edgar Jaber, Vincent Blot, Nicolas Brunel, Vincent Chabridon, Emmanuel Remy, Bertrand Iooss, Didier Lucor, Mathilde Mougeot, and Alessandro Leite. Conformal approach to Gaussian process surrogate evaluation with coverage guarantees, 2024. URL https://arxiv.org/abs/2401.07733.

Shai Feldman, Stephen Bates, and Yaniv Romano. Improving conditional coverage via orthogonal quantile regression. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 2060–2071. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/1006ff12c465532f8c574aeaa4461b16-Paper.pdf.

Leying Guan. Localized conformal prediction: A generalized inference framework for conformal prediction. Biometrika, 110(1):33–50, March 2023. doi: 10.1093/biomet/asac040. URL https://doi.org/10.1093/biomet/asac040.
Rohan Hore and Rina Foygel Barber. Conformal prediction with local weights: randomization enables robust guarantees. Journal of the Royal Statistical Society Series B: Statistical Methodology, 2024. doi: 10.1093/jrsssb/qkae103. URL https://doi.org/10.1093/jrsssb/qkae103.

Nicolas Deutschmann, Mattia Rigotti, and Maria Rodriguez Martinez. Adaptive conformal regression with split-jackknife+ scores. Transactions on Machine Learning Research, 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=1fbTGC3BUD.

Ran Xie, Rina Foygel Barber, and Emmanuel J. Candès. Boosted conformal prediction intervals. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 71868–71899. Curran Associates, Inc., 2024a. URL https://proceedings.neurips.cc/paper_files/paper/2024/file/842714f78c95096e20ac7d2591c5a24b-Paper-Conference.pdf.

Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.

Ruiting Liang, Wanrong Zhu, and Rina Foygel Barber. Conformal prediction after efficiency-oriented model selection, 2024. URL https://arxiv.org/abs/2408.07066.

Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.

Sacha Braun, Liviu Aolaritei, Michael I. Jordan, and Francis Bach. Minimum volume conformal sets for multivariate regression. arXiv preprint arXiv:2503.19068, 2025.

Brendan O'Donoghue, Eric Chu, Neal Parikh, and Stephen Boyd. Conic optimization via operator splitting and homogeneous self-dual embedding. Journal of Optimization Theory and Applications, 169(3):1042–1068, June 2016. URL http://stanford.edu/~boyd/papers/scs.html.

Sebastian Ruder. An overview of gradient descent optimization algorithms, 2017. URL https://arxiv.org/abs/1609.04747.
Xingyu Xie, Pan Zhou, Huan Li, Zhouchen Lin, and Shuicheng Yan. Adan: Adaptive Nesterov momentum algorithm for faster optimizing deep models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(12):9508–9520, 2024b. doi: 10.1109/TPAMI.2024.3423382.

Pierre-Cyril Aubin-Frankowski and Alessandro Rudi. Approximation of optimization problems with constraints through kernel sum-of-squares. Optimization, pages 1–26, 2024.

Vladimir Vovk. Conditional validity of inductive
conformal predictors. In Steven C. H. Hoi and Wray Buntine, editors, Proceedings of the Asian Conference on Machine Learning, volume 25 of Proceedings of Machine Learning Research, pages 475–490, Singapore Management University, Singapore, 04–06 Nov 2012. PMLR. URL https://proceedings.mlr.press/v25/vovk12.html.

Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. The limits of distribution-free conditional predictive inference. Information and Inference: A Journal of the IMA, 10(4):455–482, August 2021. doi: 10.1093/imaiai/iaaa017. URL https://doi.org/10.1093/imaiai/iaaa017.

Chong Xiao Wang and Wee Peng Tay. Semi-nonparametric estimation of distribution divergence in non-Euclidean spaces, 2023. URL https://arxiv.org/abs/2204.02031.

Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In International Conference on Algorithmic Learning Theory, pages 63–77. Springer, 2005.

Mickael Binois, Robert B. Gramacy, and Mike Ludkovski. Practical heteroscedastic Gaussian process modeling for large simulation experiments. Journal of Computational and Graphical Statistics, 27(4):808–821, 2018.

Kaare B. Petersen and Michael S. Pedersen. The Matrix Cookbook, 2012. URL https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf.

Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory, pages 13–31. Springer, 2007.

D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. The Annals of Statistics, pages 2263–2291, 2013.

Michael J. D. Powell et al. The BOBYQA algorithm for bound constrained optimization without derivatives. Cambridge NA Report NA2009/06, University of Cambridge, Cambridge, 26:26–46, 2009.

Brendan O'Donoghue.
Operator splitting for a homogeneous embedding of the linear complementarity problem. SIAM Journal on Optimization, 31:1999–2023, August 2021.

Brendan O'Donoghue, Eric Chu, Neal Parikh, and Stephen Boyd. SCS: Splitting conic solver, version 3.2.7. https://github.com/cvxgrp/scs, November 2023.

Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83):1–5, 2016.

Akshay Agrawal, Robin Verschueren, Steven Diamond, and Stephen Boyd. A rewriting system for convex optimization problems. Journal of Control and Decision, 5(1):42–60, 2018.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/jax-ml/jax.

DeepMind, Igor Babuschkin, Kate Baumli, Alison Bell, Surya Bhupatiraju, Jake Bruce, Peter Buchlovsky, David Budden, Trevor Cai, Aidan Clark, Ivo Danihelka, Antoine Dedieu, Claudio Fantacci, Jonathan Godwin, Chris Jones, Ross Hemsley, Tom Hennigan, Matteo Hessel, Shaobo Hou, Steven Kapturowski, Thomas Keck, Iurii Kemaev, Michael King, Markus Kunesch, Lena Martens, Hamza Merzic, Vladimir Mikulik, Tamara Norman, George Papamakarios, John Quan, Roman Ring, Francisco Ruiz, Alvaro Sanchez, Laurent Sartran, Rosalia Schneider, Eren Sezener, Stephen Spencer, Srivatsan Srinivasan, Miloš Stanojević, Wojciech Stokowiec, Luyu Wang, Guangyao Zhou, and Fabio Viola. The DeepMind JAX Ecosystem, 2020. URL http://github.com/google-deepmind.

Robert B. Gramacy and Herbert K. H. Lee. Adaptive design and analysis of supercomputer experiments. Technometrics, 51(2):130–145, May 2009. ISSN 1537-2723. doi: 10.1198/tech.2009.0015. URL http://dx.doi.org/10.1198/TECH.2009.0015.

Danijel Kivaranovic, Kory D. Johnson, and Hannes Leeb. Adaptive, distribution-free prediction intervals for deep networks.
In International Conference on Artificial Intelligence and Statistics, pages 4346–4356. PMLR, 2020.
arXiv:2505.21102v1 [math.ST] 27 May 2025

Linearity-Inducing Priors for Poisson Parameter Estimation Under L1 Loss

Leighton P. Barnes∗, Alex Dytso†, and H. Vincent Poor‡
∗Center for Communications Research, Princeton, NJ 08540, USA, l.barnes@idaccr.org
†Qualcomm Flarion Technology, Inc., Bridgewater, NJ 08807, USA, odytso2@gmail.com
‡Princeton University, Princeton, NJ 08544, USA, poor@princeton.edu

Abstract—We study prior distributions for Poisson parameter estimation under $L_1$ loss. Specifically, we construct a new family of prior distributions whose optimal Bayesian estimators (the conditional medians) can be any prescribed increasing function that satisfies certain regularity conditions. In the case of affine estimators, this family is distinct from the usual conjugate priors, which are gamma distributions. Our prior distributions are constructed through a limiting process that matches certain moment conditions. These results provide the first explicit description of a family of distributions, beyond the conjugate priors, that satisfy the affine conditional median property; and more broadly, for the Poisson noise model they can give any arbitrarily prescribed conditional median.

I. INTRODUCTION

In Bayesian estimation theory, the optimal estimator under $L_1$ loss is given by the conditional median. In particular, suppose that a random variable $X$ is drawn from a prior distribution $P_X$, and we are given a noisy measurement $Y$ of $X$ that is drawn from some noise model $P_{Y|X}$. We use $\hat{X}(Y)$ to denote our estimate of $X$ from $Y$. The conditional median is optimal in the following sense:¹

$$\mathrm{med}(X \mid Y = \cdot) = \operatorname*{argmin}_{\hat{X}} \mathbb{E}\big[\,|\hat{X}(Y) - X|\,\big], \qquad (1)$$

where the argmin is over all possible measurable functions and the expectation is jointly over $X, Y$.

Our focus is on the Poisson noise model, where the observation $Y$ is a non-negative integer with

$$P_{Y|X}(y \mid x) = \frac{1}{y!}\, x^y e^{-x}, \qquad x \ge 0,\; y \in \mathbb{N}_0, \qquad (2)$$

where $\mathbb{N}_0$ denotes the set of non-negative integers, and we use the convention that $0^0 = 1$.
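The $L_1$ optimality in (1) can be checked numerically on a small empirical distribution: a brute-force grid search for the constant minimizing the mean absolute error recovers the sample median. This is our own illustrative sketch (the sample and grid are arbitrary choices), not code from the paper.

```python
import statistics

# Brute-force check that the median minimizes mean absolute error,
# illustrating the L1 optimality in (1) for a discrete empirical sample.
sample = [0.2, 1.0, 1.5, 3.0, 7.0]

def mae(c):
    return sum(abs(x - c) for x in sample) / len(sample)

grid = [i / 100 for i in range(0, 801)]           # candidate estimates in [0, 8]
best = min(grid, key=mae)
assert abs(best - statistics.median(sample)) < 1e-9  # best constant = the median
```

For an odd sample size the minimizer is unique: the derivative of the objective is (number of points to the left) minus (number of points to the right), which changes sign exactly at the middle order statistic.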
The Poisson parameter $X$ is supported on the non-negative reals. The Poisson noise model is fundamental to applications involving counting discrete events, such as the number of photons in optical communications [1]–[5] or particles in molecular communications [6], and the number of neural spikes in neuroscience [7], [8]. The latter application especially establishes Poisson noise, and understanding its differences from other neural noise models such as Gaussian noise is a topic of great interest in understanding noisy large-scale neural systems and learning models.

[¹ In case there are multiple optimizers, which can be the case for discrete distributions, the median is always taken to be $\mathrm{med}(X \mid Y = y) = F^{-1}_{X|Y=y}(\tfrac{1}{2})$, where $F^{-1}_{X|Y=y}(p)$, $p \in (0,1)$, is the left-inverse of the conditional cumulative distribution function (cdf), also known as the quantile function.]

[This work was supported in part by the U.S. National Science Foundation under Grant ECCS-2335876.]

For $L_2$ error, where the optimal estimator is the conditional mean instead of the conditional median, it is known that

$$\mathbb{E}[X \mid Y = y] = ay + b, \qquad (3)$$

i.e., the optimal estimator is an affine function, if and only if $0 < a < 1$, $b > 0$, and the prior distribution $P_X$ is a gamma distribution [9] with probability density function

$$f_X(x) = \frac{\alpha^\theta}{\Gamma(\theta)}\, x^{\theta - 1} e^{-\alpha x}, \qquad x \ge 0, \qquad (4)$$

where

$$\alpha = \frac{1 - a}{a}, \qquad \theta = \frac{b}{a}. \qquad (5)$$

The gamma distribution is the conjugate prior for this exponential family; and for a natural
https://arxiv.org/abs/2505.21102v1
exponential family, the optimal $L_2$ estimator for the so-called mean parameter (which is just $X$ in the Poisson case) is affine if and only if the prior distribution on $X$ is the conjugate prior [10], [11].

In this work, we consider the corresponding property for the conditional median, and ask if there are prior distributions $P_X$ such that

$$\mathrm{med}(X \mid Y = y) = ay + b. \qquad (6)$$

This follows a corresponding line of research that the authors have undertaken for the additive Gaussian noise model and some other exponential families [12]–[14]; see also [15] for similar results. In the Gaussian case, there is a unique prior that guarantees a linear conditional median, and it corresponds to the conjugate prior (which is another Gaussian with a particular variance).

In contrast to both the $L_2$ error case and the Gaussian noise model case, the gamma distribution does not induce an affine conditional median under Poisson noise, since

$$\mathrm{med}(X \mid Y = y) = \frac{1}{\alpha + 1}\, \gamma^{-1}\!\left(\frac{1}{2},\, \theta + y\right), \qquad (7)$$

where $\gamma^{-1}$ is the inverse of the lower incomplete gamma function and needs to be computed numerically (the posterior under the prior (4) is a gamma distribution with shape $\theta + y$ and rate $\alpha + 1$). However, it is a near miss, as shown in Fig. 1; see [16] for a precise error analysis where first- and second-order error terms are derived.

[Fig. 1: Plot of the conditional median in expression (7) minus the conditional mean in (3) for $a = b = \frac{1}{2}$. The difference decays like $O(1/y)$ as shown in [16].]

A. Paper Outline and Contribution

Our first contribution is to show that for any increasing function $f$ (that satisfies a particular regularity condition) there exists a prior such that

$$\mathrm{med}(X \mid Y = y) = f(y), \qquad y \in \mathbb{N}_0. \qquad (8)$$

Specializing this result to affine functions $f$, we demonstrate a new family of prior distributions that do induce an affine conditional median. They are distinct from the usual conjugate prior (the gamma distribution), but may have a qualitatively similar tail, as seen in Fig. 2.
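The near miss of the gamma prior (eq. (7) and Fig. 1) is easy to reproduce. The sketch below is our own stdlib-only illustration, not code from the paper: under the prior (4) with shape $\theta$ and rate $\alpha$, the posterior given $Y = y$ is gamma with shape $\theta + y$ and rate $\alpha + 1$, and for integer shape $k$ the gamma cdf follows from the Poisson identity $P(\mathrm{Gamma}(k, \lambda) \le x) = 1 - \sum_{j<k} e^{-\lambda x} (\lambda x)^j / j!$, so no special-function library is needed.

```python
import math

def gamma_cdf(x, k, rate):
    # CDF of Gamma(shape=k, rate) for integer k, via the Poisson identity.
    s = sum((rate * x) ** j / math.factorial(j) for j in range(k))
    return 1.0 - math.exp(-rate * x) * s

def gamma_median(k, rate):
    lo, hi = 0.0, 10.0 * k / rate
    for _ in range(80):                       # bisection on the cdf
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gamma_cdf(mid, k, rate) < 0.5 else (lo, mid)
    return 0.5 * (lo + hi)

a = b = 0.5
alpha, theta = (1 - a) / a, b / a             # matching (5): alpha = 1, theta = 1
for y in range(10):
    k, rate = int(theta) + y, alpha + 1       # posterior Gamma(theta + y, alpha + 1)
    mean = k / rate                           # the affine conditional mean in (3)
    assert abs(mean - (a * y + b)) < 1e-12
    assert gamma_median(k, rate) < mean       # median strictly below the mean
```

With $a = b = 1/2$, the gap median minus mean at $y = 0$ is $\ln 2 / 2 - 1/2 \approx -0.153$, on the scale shown in Fig. 1.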
They are constructed by a limiting procedure of matching particular moment conditions, and this limiting procedure is guaranteed to have at least a subsequence of distributions that converge in total variation to a measure with the desired properties. To the best of our knowledge, this is the first explicit description of a distribution that is distinct from the usual conjugate prior but satisfies the affine conditional median property (6). In this sense, it could be viewed as a family of $L_1$-conjugate priors, but without necessarily preserving the property that conditioning is closed with respect to the family of distributions.

We proceed as follows: In Section II below we state our main existence results, and Section III is devoted to proving these results by constructing our distributions via a limiting process. In Section IV we conclude by showing examples and discussing the implications of this construction.

II. MAIN RESULTS

In this section, we present our main results. We begin by presenting the existence result, followed by a description of a procedure that can be used to construct such a distribution.

Theorem 1. Suppose that $f: \mathbb{N}_0 \to \mathbb{R}_+$ satisfies the following properties:

• $f$ is an increasing function up to a point $c_0$ and constant afterwards (i.e., $f(i-1) < f(i)$ for $i \le c_0$ and $f(i+1) = f(c_0)$ for $i \ge c_0$). The point $c_0$ can be infinity;
and

• there exists $\kappa \ge 1$ such that

$$\sum_{i=1}^{c_0} \left( \frac{f(\lfloor i/\kappa \rfloor)}{f(i)} \right)^{i/\kappa} e^{f(i)} < \infty. \qquad (9)$$

Then, there exists a distribution $P_X$ supported on

$$\mathcal{W} = \{ w_i : w_i = f(i-1),\; i \in \mathbb{N} \} \qquad (10)$$

such that for all $y \in \mathbb{N}_0$

$$\mathrm{med}(X \mid Y = y) = f(y). \qquad (11)$$

The first assumption in Theorem 1 is not very restrictive in view of the following facts:

• The assumption that $f$ is increasing is justified by the fact that for Poisson noise optimal Bayesian estimators are non-decreasing [17].
• If $X \le A$ has bounded support, then $\mathrm{med}(X \mid Y = y) \le A$ for $y \in \mathbb{N}$, i.e., the conditional median can increase only up to a point.

Specializing the above result to linear estimators, we have the following.

Corollary 1. Given $0 < a < \frac{1}{e} \approx 0.3678$ and $b > 0$, let

$$\mathcal{W} = \{ w_i : w_i = a(i-1) + b,\; i \in \mathbb{N} \}. \qquad (12)$$

Then, there exists $P_X$ supported on $\mathcal{W}$ such that for all $y \in \mathbb{N}_0$

$$\mathrm{med}(X \mid Y = y) = ay + b. \qquad (13)$$

Proof: Setting $f(y) = ay + b$, the sum in (9) reduces to

$$\inf_{\kappa \ge 1} \sum_{i=1}^{\infty} \left( \frac{a \lfloor i/\kappa \rfloor + b}{ai + b} \right)^{i/\kappa} e^{ai + b}, \qquad (14)$$

which converges provided that $a < \frac{1}{e}$.

Note that, unlike in the conditional mean case, the distribution inducing a linear conditional median is not necessarily unique. It is also worth pointing out that the range $a \in (0, \frac{1}{e})$ in Corollary 1 is most likely not exhaustive. The full set of admissible values of $a$ is left as an open problem.

III. PROOF

A. Equivalent Moment Condition

We first produce an equivalent moment condition, which is easier to work with.

Lemma 1. Given $f: \mathbb{N}_0 \to \mathbb{R}_+$, suppose there exists a distribution $P_W$ such that

$$\mathbb{E}[W^y \mathbf{1}_{W \le f(y)}] = \mathbb{E}[W^y \mathbf{1}_{W > f(y)}], \qquad y \in \mathbb{N}_0. \qquad (15)$$

Then,

$$\mathrm{d}P_X(w) = \mathrm{d}P_W(w)\, e^w \qquad (16)$$

induces

$$\mathrm{med}(X \mid Y = y) = f(y), \qquad y \in \mathbb{N}_0, \qquad (17)$$

provided that $P_X$ is a proper distribution.

Proof: An equivalent condition for the prescribed conditional median is given by the conditional cdf: for all $y \in \mathbb{N}_0$,

$$\mathbb{E}\left[ \mathbf{1}_{X \le f(Y)} \mid Y = y \right] = \frac{1}{2}, \qquad (18)$$

which is equivalent to: for all $y \in \mathbb{N}_0$,

$$\mathbb{E}[\mathbf{1}_{X \le f(Y)} \mid Y = y] = \mathbb{E}[\mathbf{1}_{X > f(Y)} \mid Y = y]. \qquad (19)$$

Now, by defining

$$\mathrm{d}P_W(w) \propto e^{-w}\, \mathrm{d}P_X(w), \qquad (20)$$

and using the structure of $P_{Y|X}$ in (2), the expression in (19) can be rewritten as

$$\int w^y \mathbf{1}_{w \le f(y)}\, \mathrm{d}P_W(w) = \int w^y \mathbf{1}_{w > f(y)}\, \mathrm{d}P_W(w). \qquad (21)$$

This concludes the proof.
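The range $a < 1/e$ in Corollary 1 comes from optimizing over $\kappa$: the $i$-th term of the series (14) behaves like $\exp\big((a - \ln\kappa/\kappa)\, i\big)$, and $\ln\kappa/\kappa$ is maximized at $\kappa = e$. The following numeric sanity check is our own illustration (not from the paper); it evaluates the summands of (14) at $\kappa = e$ and confirms that they decay when $a < 1/e$ and grow when $a > 1/e$.

```python
import math

def term(i, a, b, kappa):
    # i-th summand of the series in (14)
    ratio = (a * math.floor(i / kappa) + b) / (a * i + b)
    return ratio ** (i / kappa) * math.exp(a * i + b)

a, b = 0.3, 0.3                      # a < 1/e ~ 0.3678
kappa = math.e                       # ln(kappa)/kappa is maximized at kappa = e
ts = [term(i, a, b, kappa) for i in (100, 200, 300)]
assert ts[0] > ts[1] > ts[2] > 0     # terms decay, consistent with convergence
```

Swapping in $a = 0.45 > 1/e$ makes the same terms increase with $i$, illustrating why the condition is needed (though, as the paper notes, the range may not be exhaustive).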
We now proceed to show the following theorem.

Theorem 2. Suppose that $f: \mathbb{N}_0 \to \mathbb{R}_+$ is increasing up to a point $c_0$, such that for every $y \in \mathbb{N}_0$

$$\inf_{\kappa \ge 1} \lim_{N \to \infty} \sum_{i=N+1}^{c_0} \left( \frac{f(\lfloor i/\kappa \rfloor)}{f(i)} \right)^{i/\kappa} (f(i))^y = 0. \qquad (22)$$

Then, there exists a discrete probability distribution $P_W$ supported on the set $\mathcal{W}$ defined in (10) such that

$$\mathbb{E}[W^y \mathbf{1}_{W \le f(y)}] = \mathbb{E}[W^y \mathbf{1}_{W > f(y)}], \qquad y \in \mathbb{N}_0. \qquad (23)$$

Our approach is to first show a truncated version, where the first $M$ moment conditions are satisfied, and then show that we can pass to a subsequence whose limit exists as the truncation point $M$ is moved to infinity.

B. Proof of the Truncated Version of Theorem 2

We first consider a truncated version of our problem. That is, given $M \ge 1$, we seek to find $P_W$ such that

$$\mathbb{E}[W^y \mathbf{1}_{W \le f(y)}] = \mathbb{E}[W^y \mathbf{1}_{W > f(y)}], \qquad y \in [0 : M-1]. \qquad (24)$$

Naturally, we assume that $M \le c_0$. We start with the following helper lemma.

Lemma 2. For every $M \ge 1$ and $0 < w_1 < \ldots < w_{M+1} < \infty$, there exists a probability vector $(p_1, \ldots, p_{M+1})$ such that for all $k \in [0 : M-1]$

$$\sum_{i=1}^{k+1} p_i w_i^k = \sum_{i=k+2}^{M+1} p_i w_i^k. \qquad (25)$$

Proof: We show this by induction. For the base case of $M =$
$1$, we have $p_1 = p_2$, which results in a valid probability vector if $p_1 = p_2 = 1/2$.

We now make the induction hypothesis that the statement is true for $M$. Let $w_{M+2} > w_{M+1}$. By the induction hypothesis for $(w_1, \ldots, w_{M+1})$, there exists a probability vector $(p_1, \ldots, p_{M+1})$ such that for all $k \in [0 : M-1]$

$$\sum_{i=1}^{k+1} p_i w_i^k = \sum_{i=k+2}^{M+1} p_i w_i^k. \qquad (26)$$

Moreover, by the induction hypothesis for $(w_1, \ldots, w_M, w_{M+2})$, there exists a probability vector $(q_1, \ldots, q_{M+1})$ such that for all $k \in [0 : M-1]$

$$\sum_{i=1}^{k+1} q_i w_i^k = \sum_{i=k+2}^{M} q_i w_i^k + q_{M+1} w_{M+2}^k. \qquad (27)$$

We now construct a solution for the case $M + 1$ and the sequence $(w_1, \ldots, w_M, w_{M+1}, w_{M+2})$. Fix some $\alpha \in [0, 1]$ (to be determined later) and let

$$(m_1, \ldots, m_{M+2}) = \alpha\, (p_1, \ldots, p_{M+1}, 0) + (1 - \alpha)\, (q_1, \ldots, q_M, 0, q_{M+1}), \qquad (28)$$

which obviously is a probability vector. Moreover, by construction, for all $k \in [0 : M-1]$, we have

$$\sum_{i=1}^{k+1} m_i w_i^k = \sum_{i=k+2}^{M+2} m_i w_i^k. \qquad (29)$$

To conclude the proof, we need to examine the case $k = M$ and show that

$$\sum_{i=1}^{M+1} m_i w_i^M = m_{M+2} w_{M+2}^M, \qquad (30)$$

which can be rewritten as

$$\alpha \sum_{i=1}^{M+1} p_i w_i^M + (1 - \alpha) \sum_{i=1}^{M} q_i w_i^M = (1 - \alpha)\, q_{M+1} w_{M+2}^M. \qquad (31)$$

To show that there exists a choice of $\alpha \in [0, 1]$ such that (31) holds, note that for $\alpha = 1$ the left-hand side of (31) is larger than the right-hand side, while for $\alpha = 0$ the left-hand side is smaller than the right-hand side, since

$$\sum_{i=1}^{M} q_i w_i^M \le w_{M+2} \sum_{i=1}^{M} q_i w_i^{M-1} = w_{M+2}\, q_{M+1} w_{M+2}^{M-1}, \qquad (32)$$

where the last equality follows from (27). By continuity, the desired $\alpha$ exists, which concludes the proof.

We now show that for every increasing function there exists a $P_W$ that satisfies (24). Our construction works by fixing $P_W$ to be a discrete distribution with support given by

$$\mathcal{W}_M = \{ w_i : w_i = f(i-1),\; i \in [1 : M+1] \}. \qquad (33)$$

Theorem 3. Suppose that $f: \mathbb{N}_0 \to \mathbb{R}_+$ is increasing up to $c_0$ and $1 \le M \le c_0$. Then, there exists a probability distribution $P_W$ supported on $\mathcal{W}_M$ that satisfies (24).
Proof: With the choice of $w_i$'s in (33), the system in (24) can be written as: for all $y \in [0 : M-1]$

$$\sum_{i=1}^{M+1} p_i w_i^y \mathbf{1}\{w_i \le f(y)\} = \sum_{i=1}^{M+1} p_i w_i^y \mathbf{1}\{w_i > f(y)\}, \qquad (34)$$

which can be further simplified to

$$\sum_{i=1}^{y+1} p_i w_i^y = \sum_{i=y+2}^{M+1} p_i w_i^y, \qquad y \in [0 : M-1]. \qquad (35)$$

Now, by invoking Lemma 2, we see that there exists a probability vector $(p_1, \ldots, p_{M+1})$ such that the system of equalities in (35) is satisfied. This concludes the proof.

We now produce a concentration bound that will be useful.

Lemma 3. Let $W$ be the random variable constructed in Theorem 3. Then, for $\kappa \ge 1$ and $\frac{i}{\kappa} \le M + 1$,

$$\mathbb{P}[W \ge w_{i+1}] \le \left( \frac{f(\lfloor i/\kappa \rfloor)}{f(i)} \right)^{i/\kappa}. \qquad (36)$$

Proof:

$$\mathbb{P}[W \ge w_{i+1}] = \mathbb{E}[\mathbf{1}\{W \ge f(i)\}] \qquad (37)$$
$$\le \mathbb{E}\left[ \left( \frac{W}{f(i)} \right)^{\lfloor i/\kappa \rfloor} \mathbf{1}\{W \ge f(i)\} \right] \qquad (38)$$
$$\le \mathbb{E}\left[ \left( \frac{W}{f(i)} \right)^{\lfloor i/\kappa \rfloor} \mathbf{1}\{W \ge f(\lfloor i/\kappa \rfloor)\} \right] \qquad (39)$$
$$= \mathbb{E}\left[ \left( \frac{W}{f(i)} \right)^{\lfloor i/\kappa \rfloor} \mathbf{1}\{W \le f(\lfloor i/\kappa \rfloor)\} \right] \qquad (40)$$
$$\le \left( \frac{f(\lfloor i/\kappa \rfloor)}{f(i)} \right)^{\lfloor i/\kappa \rfloor}, \qquad (41)$$

where (39) follows from the assumption that $f$ is non-decreasing, and the equality in (40) follows from the fact that $P_W$ satisfies (24).

C. Extension to the General Case

If $c_0 < \infty$, then the proof is done; so here, we consider the case $c_0 = \infty$. For $M \ge 1$, let $P_W^M$ denote a distribution
constructed in Theorem 3 for some $M > 1$. In this section, we show that there is at least a subsequence $M_k$ such that $P_W = \lim_{k \to \infty} P_W^{M_k}$ converges (in total variation) and $P_W$ is a valid distribution that satisfies the desired system.

Let $p^M = (p_1^M, \ldots, p_{M+1}^M)$ denote the probability vector corresponding to $P_W^M$. Using Lemma 3 with $\kappa > 1$, given $\varepsilon > 0$, let $M_{\varepsilon,1}$ be such that for all $M \ge M_{\varepsilon,1}$

$$\mathbb{P}[W^M \ge f(i)] \le \frac{\varepsilon}{3}. \qquad (42)$$

Let $\Pi_i(p^M)$ denote the projection onto the first $i$ components of $p^M$ (or, equivalently, setting all components $p_k^M$ with $k > i$ to zero). Using sequential compactness, let $M_k$ be a subsequence such that $\Pi_i(p^{M_k})$ converges in $\ell_1$ norm as $k \to \infty$. Since this subsequence is also Cauchy, let $M_{\varepsilon,2}$ be such that

$$\left\| \Pi_i(p^{M_k}) - \Pi_i(p^{M_j}) \right\|_1 \le \frac{\varepsilon}{3} \qquad (43)$$

for all $M_j, M_k > M_{\varepsilon,2}$. Combining (42) and (43), we have

$$\left\| p^{M_k} - p^{M_j} \right\|_1 \le \varepsilon \qquad (44)$$

for all $M_k, M_j \ge \max(M_{\varepsilon,1}, M_{\varepsilon,2})$.

So far we have only constructed a subsequence that works for one particular $\varepsilon$, not for any $\varepsilon > 0$. This can be remedied by setting $\varepsilon_\ell = \frac{1}{2^\ell}$ and considering the following procedure. First, we construct a subsequence $M_k$ that works for $\varepsilon_1$. We leave this subsequence alone for $M_k \le \max(M_{\varepsilon_1,1}, M_{\varepsilon_1,2})$, but for $M_k > \max(M_{\varepsilon_1,1}, M_{\varepsilon_1,2})$ we refine it by taking a further subsequence that works for $\varepsilon_2$. Repeating this process results in a final Cauchy subsequence $M_k$, which by completeness also converges in $\ell_1$.

We now show that $P_W$ is a valid distribution. Note that

$$\sum_{i=1}^{\infty} P_W(w_i) = \lim_{N \to \infty} \sum_{i=1}^{N} P_W(w_i) = \lim_{N \to \infty} \lim_{k \to \infty} \sum_{i=1}^{N} P_W^{M_k}(w_i) \le 1. \qquad (45)\text{–}(46)$$

We now show that $\sum_{i=1}^{\infty} P_W(w_i) \ge 1$. First, note that for large enough $M$, from Lemma 3 we have that for all $i$

$$\mathbb{P}^M[W \ge w_{i+1}] \le \left( \frac{f(\lfloor i/\kappa \rfloor)}{f(i)} \right)^{i/\kappa}. \qquad (47)$$

Now note that for all $j$

$$\sum_{i=1}^{j+1} P_W(w_i) = \lim_{k \to \infty} \sum_{i=1}^{j+1} P_W^{M_k}(w_i) = \lim_{k \to \infty} \left( 1 - \mathbb{P}[W^{M_k} \ge w_{j+1}] \right) \ge 1 - \left( \frac{f(\lfloor j/\kappa \rfloor)}{f(j)} \right)^{j/\kappa}, \qquad (48)\text{–}(50)$$

where from (22) we have $\lim_{j \to \infty} \left( \frac{f(\lfloor j/\kappa \rfloor)}{f(j)} \right)^{j/\kappa} = 0$. Combining (46) and (50), and using that $f$ is increasing, we conclude that $P_W$ is a valid probability distribution.
It remains to show that $P_W$ satisfies the following: for $y \ge 0$,

$$\sum_{i=1}^{y+1} P_W(w_i)\, w_i^y = \sum_{i=y+2}^{\infty} P_W(w_i)\, w_i^y. \qquad (51)$$

Note that

$$\sum_{i=y+2}^{\infty} P_W(w_i)\, w_i^y = \lim_{N \to \infty} \sum_{i=y+2}^{N} P_W(w_i)\, w_i^y \qquad (52)$$
$$= \lim_{N \to \infty} \lim_{k \to \infty} \sum_{i=y+2}^{N} P_W^{M_k}(w_i)\, w_i^y \qquad (53)$$
$$= \lim_{k \to \infty} \sum_{i=y+2}^{\infty} P_W^{M_k}(w_i)\, w_i^y - \lim_{N \to \infty} \lim_{k \to \infty} \sum_{i=N+1}^{\infty} P_W^{M_k}(w_i)\, w_i^y \qquad (54)$$
$$= \lim_{k \to \infty} \sum_{i=y+2}^{\infty} P_W^{M_k}(w_i)\, w_i^y - \Delta \qquad (55)$$
$$= \lim_{k \to \infty} \sum_{i=1}^{y+1} P_W^{M_k}(w_i)\, w_i^y - \Delta \qquad (56)$$
$$= \sum_{i=1}^{y+1} P_W(w_i)\, w_i^y - \Delta. \qquad (57)$$

Finally, note that

$$\Delta = \lim_{N \to \infty} \lim_{k \to \infty} \sum_{i=N+1}^{\infty} P_W^{M_k}(w_i)\, w_i^y \le \lim_{N \to \infty} \sum_{i=N+1}^{\infty} \left( \frac{f(\lfloor i/\kappa \rfloor)}{f(i)} \right)^{i/\kappa} (f(i))^y = 0, \qquad (58)\text{–}(60)$$

where the last equality follows from the bound in (47) and the fact that the series is convergent by the assumption in (22). Therefore, we have that for all $y \ge 0$

$$\sum_{i=y+2}^{\infty} P_W(w_i)\, w_i^y = \sum_{i=1}^{y+1} P_W(w_i)\, w_i^y. \qquad (61)$$

This concludes the proof.

D. Proof of Theorem 1

Combining Lemma 1 and Theorem 2, it remains to show that $P_X$ is a valid probability distribution:

$$P_X(w_{i+1}) \propto P_W(w_{i+1})\, e^{w_{i+1}} \le \left( \frac{f(\lfloor i/\kappa \rfloor)}{f(i)} \right)^{i/\kappa} e^{f(i)}, \qquad (62)\text{–}(63)$$

where the inequality follows from the tail bound in Lemma 3, which results in a valid distribution provided that the
sum converges. The proof is concluded by noting that in Theorem 1 we assume the series in (63) is summable. ■

IV. EXAMPLES

The distribution in Theorem 1 can be approximated to any degree of accuracy by the following procedure:

1) Choose some $M \ge 1$ and define

$$A_M = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ -1 & 1 & 1 & \cdots & 1 \\ -w_1 & -w_2 & w_3 & \cdots & w_{M+1} \\ -w_1^2 & -w_2^2 & -w_3^2 & \cdots & w_{M+1}^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -w_1^{M-1} & -w_2^{M-1} & -w_3^{M-1} & \cdots & w_{M+1}^{M-1} \end{pmatrix}, \qquad (64)$$

where $w_i = f(i-1)$, $i \in [1 : M+1]$, and where $f$ is the desired estimator.

2) Let

$$p^M = A_M^{-1}\, (1, 0, \ldots, 0)^T, \qquad (65)$$

where $A_M$ is guaranteed to have an inverse.

3) A truncated approximation of $P_X$ is given by: for $i \in [1 : M+1]$,

$$P_X^{(M)}(w_i) \propto p^M(i)\, e^{w_i}. \qquad (66)$$

It is interesting to simulate the truncated approximation in (66) for the affine case:

• Fig. 2 shows the cdf of the distribution in (66) for $f(y) = ay + b$ with $a = b = 0.3$ and several values of $M$, and compares it to the cdf of the gamma distribution. [Fig. 2: Comparison of the gamma cdf (red), which induces a linear conditional mean, to the distribution in (66), for $M = 2$ and $M = 8$.]
• Fig. 3 shows conditional medians of the distribution in (66) for $M = 2, 4$, and $8$ with $a = b = 0.3$. [Fig. 3: Conditional medians induced by the distribution in (66) with $a = b = 0.3$.]

V. CONCLUSION

This work has addressed the problem of $L_1$ estimation under Poisson noise, focusing on the Bayesian case. By constructing a novel family of priors, we have demonstrated that any increasing function can serve as an admissible Bayesian estimator, provided certain integrability conditions are met. In particular, we highlighted a class of priors distinct from the classical gamma distribution, which is known to induce affine conditional means under $L_2$ loss. These new priors represent a significant departure from the conjugate priors typically used for exponential families and are constructed using a limiting process that guarantees the desired linearity properties.
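The three-step approximation procedure of Section IV translates directly into a few lines of linear algebra. The sketch below is our own illustration (the paper provides no code; `truncated_prior` and the use of NumPy are our choices): it builds $A_M$ from (64), solves (65), and tilts by $e^{w_i}$ as in (66), also returning the untilted vector $p^M$ so the moment conditions (35) can be inspected.

```python
import numpy as np

def truncated_prior(f, M):
    # Support points w_i = f(i-1) for i = 1, ..., M+1, as in (33).
    w = np.array([f(i) for i in range(M + 1)], dtype=float)
    # Build A_M from (64): the first row enforces normalization; row y+1
    # encodes moment condition (35) with signs -1 for i <= y+1, +1 after.
    A = np.ones((M + 1, M + 1))
    for y in range(M):
        row = w ** y
        row[: y + 1] *= -1.0
        A[y + 1] = row
    rhs = np.zeros(M + 1)
    rhs[0] = 1.0
    p = np.linalg.solve(A, rhs)      # p^M = A_M^{-1} (1, 0, ..., 0)^T, eq. (65)
    px = p * np.exp(w)               # tilt by e^{w_i}, eq. (66)
    return w, p, px / px.sum()

w, p, px = truncated_prior(lambda i: 0.3 * i + 0.3, 8)   # affine case a = b = 0.3
assert abs(p.sum() - 1) < 1e-9                           # p^M is normalized
```

For small $M$ the solution can be checked by hand: with $M = 2$ and $f(y) = 0.3y + 0.3$ one gets $p = (0.5, 0.2, 0.3)$, a valid probability vector as Theorem 3 guarantees.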
Future work could explore extensions of this framework to other noise models such as Binomial or Negative Binomial distributions, or to different loss functions.

REFERENCES

[1] R. McEliece, E. Rodemich, and A. Rubin, "The practical limits of photon communication," Jet Propulsion Laboratory Deep Space Network Progress Reports, vol. 42, pp. 63–67, 1979.
[2] S. Shamai, "Capacity of a pulse amplitude modulated direct detection photon channel," IEE Proceedings I (Communications, Speech and Vision), vol. 137, no. 6, pp. 424–430, 1990.
[3] S. Verdú, "Poisson communication theory," International Technion Communication Day in Honor of Israel Bar-David, vol. 66, 1999.
[4] A. Lapidoth and S. M. Moser, "On the capacity of the discrete-time Poisson channel," IEEE Transactions on Information Theory, vol. 55, no. 1, pp. 303–322, 2008.
[5] A. Dytso, L. Barletta, and S. Shamai (Shitz), "Properties of the support of the capacity-achieving distribution of the amplitude-constrained Poisson noise channel," IEEE Transactions on Information Theory, vol. 67, no. 11, pp. 7050–7066, 2021.
[6] N. Farsad, W. Chuang, A. Goldsmith, C. Komninakis, M. Médard, C.
Rose, L. Vandenberghe, E. E. Wesel, and R. D. Wesel, "Capacities and optimal input distributions for particle-intensity channels," IEEE Transactions on Molecular, Biological and Multi-Scale Communications, vol. 6, no. 3, pp. 220–232, 2020.
[7] R. B. Stein, "A theoretical analysis of neuronal variability," Biophysical Journal, vol. 5, no. 2, pp. 173–194, 1965.
[8] M. N. Shadlen and W. T. Newsome, "The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding," Journal of Neuroscience, vol. 18, no. 10, pp. 3870–3896, 1998.
[9] A. Dytso and H. V. Poor, "Estimation in Poisson noise: Properties of the conditional mean estimator," IEEE Transactions on Information Theory, vol. 66, no. 7, pp. 4304–4323, 2020.
[10] P. Diaconis and D. Ylvisaker, "Conjugate priors for exponential families," The Annals of Statistics, vol. 7, no. 2, pp. 269–281, 1979.
[11] J.-P. Chou, "Characterization of conjugate priors for discrete exponential families," Statistica Sinica, vol. 11, pp. 409–418, 2001.
[12] L. P. Barnes, A. Dytso, and H. V. Poor, "$L_1$ estimation in Gaussian noise: On the optimality of linear estimators," in Proceedings of the 2023 IEEE International Symposium on Information Theory (ISIT), June 2023, pp. 1872–1877.
[13] L. P. Barnes, A. Dytso, J. Liu, and H. V. Poor, "$L_1$ estimation: On the optimality of linear estimators," IEEE Transactions on Information Theory, vol. 70, no. 11, pp. 8026–8039, 2024.
[14] ——, "Multivariate priors and the linearity of optimal Bayesian estimators under Gaussian noise," in Proceedings of the 2024 IEEE International Symposium on Information Theory (ISIT), July 2024, to appear.
[15] E. Akyol, K. Viswanatha, and K. Rose, "On conditions for linearity of optimal estimation," IEEE Transactions on Information Theory, vol. 58, no. 6, pp. 3497–3508, 2012.
[16] J. Chen and H.
Rubin, "Bounds for the difference between median and mean of gamma and Poisson distributions," Statistics & Probability Letters, vol. 4, no. 6, pp. 281–283, 1986.
[17] P. Nowak, "Monotonicity of Bayes estimators," Applicationes Mathematicae, vol. 4, no. 40, pp. 393–404, 2013.
arXiv:2505.21274v1 [math.OC] 27 May 2025

Sample complexity of optimal transport barycenters with discrete support

Léo Portales (IRIT, TSE, INP Toulouse, leo.portales@irit.fr), Edouard Pauwels (TSE, Université Toulouse Capitole, edouard2.pauwels@ut-capitole.fr), Elsa Cazelles (CNRS, IRIT, Université de Toulouse, elsa.cazelles@irit.fr)

May 28, 2025

Abstract

Computational implementation of optimal transport barycenters for a set of target probability measures requires a form of approximation, a widespread solution being empirical approximation of measures. We provide $O(\sqrt{N/n})$ statistical generalization bounds for the empirical sparse optimal transport barycenter problem, where $N$ is the maximum cardinality of the barycenter support (sparse support) and $n$ is the sample size of the target measures' empirical approximation. Our analysis covers various optimal transport divergences, including Wasserstein, Sinkhorn, and Sliced-Wasserstein. We discuss the application of our result to specific settings including K-means, constrained K-means, and free and fixed support Wasserstein barycenters.

1 Introduction

Optimal transport barycenters provide a natural way of averaging a set of probability distributions while preserving their underlying geometric structure. They are formally defined as probability measures that minimize the cost function

$$\nu \longmapsto \frac{1}{L} \sum_{\ell=1}^{L} D\left( \mu^\ell, \nu \right), \qquad (1.1)$$

where $D$ denotes an optimal transport divergence, and $\{\mu^\ell\}_{\ell=1}^{L}$ is a collection of target probability measures in $\mathbb{R}^d$. This problem was first introduced and studied with the squared 2-Wasserstein metric $D = W_2^2$ [1], as a way to generalize McCann's interpolation [39] to more than two measures. Independently and in parallel, the authors of [49] proposed a method based on Wasserstein barycenters for texture synthesis and mixing. Then, following fast implementations of $W_2$-barycenters (e.g.,
[17]), optimal transport barycenters became a central tool in machine learning and statistics. They have been used for various applications such as sample blending [55], texture mixing [49], multisource domain adaptation [42] and music genre recognition [41]. Although initially formulated with $D = W_2^2$, other transport divergences have since been considered in (1.1), either to reduce the computational complexity, to induce desirable smoothing, or to handle a more complex set of probability measures. This has led to the introduction of entropic [14], Bures (for centered Gaussian distributions) [13], sliced [8] and Gromov-Wasserstein barycenters [11].

Sample averaged approximation and barycenters. There are many situations where the target probability measures in (1.1) may be intractable, either because they are modeled by continuous variables or because they are discretely supported on a very large number of points. To handle these cases from a computational point of view, one requires a discrete approximation. Typical schemes include histogram approximation (e.g. images), sample average approximation (SAA) [42, 15] and online stochastic approximation (SA) [21, 6]. This paper focuses on the sample average approximation of the target measures, which primarily arises in two main use cases: the statistical setting where the target distributions are only accessible through empirical samples [42], and numerical Monte-Carlo approximation of known densities [48]. In practice, most algorithms for computing optimal transport barycenters involve a support cardinality constraint, similar to optimal quantization of measures [45], for which one aims to approximate
one probability measure with a small number of atoms. We include this support constraint in our treatment of the barycenter problem. In other words, we are interested in probability measures $\nu$ minimizing the function (1.1), subject to the constraint $|\mathrm{Supp}(\nu)| \le N$ for some integer $N \ge 1$, where $\mathrm{Supp}(\nu)$ denotes the support of $\nu$. This problem is referred to as optimal transport barycenters with sparse support in the literature [60, 9]. There exist a few exceptions which do not include the support cardinality constraint; for example, [56] represents barycenters based on estimates of continuous functions by neural networks.

In this article, we provide finite sample statistical guarantees for the sample averaged approximation of optimal transport barycenters under support cardinality constraints. The transport-based divergences $D$ considered are the Wasserstein distances, the Sinkhorn divergences (entropic regularization), and the Sliced and max-Sliced Wasserstein distances. In other words, we consider a collection of target probability measures $\{\mu^\ell\}_{\ell=1}^{L}$ in $\mathbb{R}^d$ such that, for each $\ell \in [[1,L]]$, we have access to $n$ random variables $X_1^\ell,\dots,X_n^\ell \overset{\text{i.i.d.}}{\sim} \mu^\ell$, and define their associated empirical measures by $\mu_n^\ell := \frac{1}{n}\sum_{i=1}^{n} \delta_{X_i^\ell}$. The empirical sparse optimal transport barycenters are given by

$$\operatorname*{argmin}_{(Y,\pi)\in(\mathbb{R}^d)^N\times\Delta_N} \ \frac{1}{L}\sum_{\ell=1}^{L} D\Big(\mu_n^\ell, \sum_{i=1}^{N}\pi_i\delta_{y_i}\Big), \qquad (1.2)$$

where $D$ denotes any of the transport divergences stated above, $Y := (y_1,\dots,y_N)$ is a point cloud in $\mathbb{R}^d$ and $\Delta_N$ is the $(N-1)$-probability simplex.

Statistical results for optimal transport barycenters. In [12], the authors study the statistical stability of empirical Wasserstein barycenters for general population measures (that is, measures supported on a set of probabilities), which includes the setting of (1.1).
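As a concrete illustration of the objective in (1.2), one can evaluate it at a fixed candidate $(Y,\pi)$ by solving one exact discrete optimal transport problem per target measure. The following is a minimal sketch for $D=W_p^p$, solving the Kantorovich linear program with SciPy; the function names and this formulation are our illustration, not code from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_pp(x, a, y, b, p=2):
    """W_p^p between discrete measures sum_i a_i d_{x_i} and sum_j b_j d_{y_j},
    computed exactly as the Kantorovich linear program over transport plans."""
    C = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) ** p  # cost matrix
    n, m = len(a), len(b)
    A_eq = []
    for i in range(n):              # row marginals: plan rows sum to a
        r = np.zeros((n, m)); r[i, :] = 1.0; A_eq.append(r.ravel())
    for j in range(m):              # column marginals: plan columns sum to b
        c = np.zeros((n, m)); c[:, j] = 1.0; A_eq.append(c.ravel())
    res = linprog(C.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([a, b]), bounds=(0, None), method="highs")
    return res.fun

def barycenter_objective(samples, Y, pi, p=2):
    """Empirical objective of (1.2) for D = W_p^p at a candidate (Y, pi):
    average exact OT cost over the L empirical target measures."""
    return float(np.mean([
        wasserstein_pp(X, np.full(len(X), 1.0 / len(X)), Y, pi, p)
        for X in samples]))
```

For instance, with a single target equal to the candidate measure itself, the objective is zero; minimizing `barycenter_objective` over $(Y,\pi)$ would recover an empirical sparse barycenter, although practical solvers use the dedicated methods discussed in Section 3 rather than a generic LP per evaluation.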
The rate of convergence they obtain suffers from the curse of dimensionality; this is inevitable in all generality, as the barycenters inherit the sample complexity of optimal transport [23]. In [29], the authors consider the more specific framework of empirical barycenters of a collection of discrete target measures. In contrast to [12], where the rate of convergence of the empirical barycenter is $O(n^{-1/3d})$, the rate of convergence in this case is $O(C_d\,n^{-1/2})$, where the dimension of the target measures only appears in the constant $C_d$, and not in the exponent of the sample size. As mentioned before, the fundamental difference in convergence rate behavior between the two frameworks reflects a similar phenomenon observed in optimal transport statistics, namely, that the statistical complexity of optimal transport adapts to the least complex measure [31]. More precisely, in general, empirical optimal transport has a convergence rate of the form $O(n^{-1/d})$ [23], while optimal transport between two probability measures, at least one of which is discrete, has a rate of convergence of $O(n^{-1/2})$ [18].

In the case of population barycenters, that is, barycenters of a probability measure $Q$ supported on the space of probability measures, numerous statistical results have been established [7, 34]. Although related, these results fall outside the scope of our study, as in their framework the target measures $\mu^\ell$ are independently sampled from $Q$. Let us note that, in the context of entropy regularized Wasserstein barycenters, the sample average approximation was reported to be more efficient than the stochastic approximation approach in terms
of computational complexity [20].

Contribution. We provide convergence bounds in expectation for the empirical sparse optimal transport barycenter functional (1.2) of the form $O(\sqrt{N/n})$, where $N$ denotes the number of support points of the barycenter and $n$ is the number of samples per empirical target measure. These bounds are obtained for classical, entropy regularized, Sliced and max-Sliced Wasserstein barycenters, bridging the gap between various results in the literature. From this main result, we derive identical rates for the sample complexity of optimal transport barycenters with sparse support.

Notations. Throughout this article we denote by $\mathcal{M}_1(\Omega)$ and $\mathcal{M}_1^N(\Omega)$, for some $\Omega\subset\mathbb{R}^d$, the set of probability measures on $\Omega$ and the set of probability measures supported on at most $N$ points in $\Omega$, respectively. We denote by $B_R$ the closed ball of $\mathbb{R}^d$ centered at $0$ and of radius $R>0$. Additionally, $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^d$, $\Delta_N$ the $(N-1)$-probability simplex and $S^{d-1}$ the $(d-1)$-sphere in $\mathbb{R}^d$.

Organization of the paper. We first introduce Wasserstein distances and divergences based on optimal transport, followed by optimal transport barycenters and their sparse approximation, in Section 2. We then discuss their computation in Section 3. Our main results on the sample complexity of barycenters are presented in Section 4, and the paper ends with a discussion in Section 5. Dual formulations of the divergences considered, together with technical details and proofs, are postponed to the appendices.

2 Introduction to optimal transport and barycenters

2.1 Optimal transport divergences

Optimal transport has become a powerful tool in machine learning due to its ability to compute distances between arbitrary probability measures (we refer the reader to [46] for a computational point of view, and to [57] for a more theoretical point of view).
In mathematical terms, the optimal transport between two measures $\mu,\nu\in\mathcal{M}_1(\mathbb{R}^d)$ consists in finding a coupling which minimizes the average cost of transporting $\mu$ onto $\nu$:

$$\inf_{\gamma\in\Pi(\mu,\nu)} \int_{\mathbb{R}^d\times\mathbb{R}^d} c(x,y)\,d\gamma(x,y), \qquad (2.1)$$

where $c:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$ is a lower semi-continuous cost function and $\Pi(\mu,\nu)$ denotes the set of all probability couplings with marginals $\mu$ and $\nu$. When $c := D^p$ is the $p$-th power of a distance $D$, for $p\ge 1$, (2.1) defines the $p$-th power of a distance $W_p$ on the space of probability measures, called the Wasserstein distance [57, Section 6], which is particularly well suited as it metrizes the space of probability measures. In our case, we consider the Euclidean distance on $\mathbb{R}^d$, and the Wasserstein distance is given by

$$W_p^p(\mu,\nu) = \inf_{\gamma\in\Pi(\mu,\nu)} \int_{\mathbb{R}^d\times\mathbb{R}^d} \|x-y\|^p\,d\gamma(x,y). \qquad (2.2)$$

In the semi-discrete setting, where one measure is continuous and the other one is discrete, as considered in our sparse barycenter problem, this infinite dimensional optimization problem can be reformulated as a finite-dimensional maximization problem using the Kantorovich dual formulation of optimal transport (see Appendix A). Still, Wasserstein distances lack practicability: in the discrete setting, for instance, solving (2.2) involves addressing a potentially high-dimensional linear programming problem. This has motivated the introduction of new optimal transport-based divergences between probability measures.

The entropic regularization of Wasserstein distances, or Sinkhorn divergences [16, 46], consists in adding an entropy regularization term to the optimal transport problem (2.1), and writes, between
two probability measures $\mu,\nu\in\mathcal{M}_1(\mathbb{R}^d)$, as follows:

$$W_{\epsilon,p}^p(\mu,\nu) = \inf_{\gamma\in\Pi(\mu,\nu)} \int_{\mathbb{R}^d\times\mathbb{R}^d} \|x-y\|^p\,d\gamma(x,y) + \epsilon\,\mathrm{KL}(\gamma\,|\,\mu\otimes\nu), \qquad (2.3)$$

where $\mathrm{KL}(\gamma\,|\,\xi) := \int_{\mathbb{R}^d\times\mathbb{R}^d} \big(\log\big(\tfrac{d\gamma}{d\xi}(x,y)\big)-1\big)\,d\gamma(x,y)$ denotes the Kullback-Leibler divergence and $\epsilon>0$ is a regularization parameter. The optimization problem in (2.3) can then be efficiently solved using the Sinkhorn algorithm [52, 16]. The dual formulation of this entropy regularized optimal transport problem, studied in [25] with an emphasis on the discrete and semi-discrete settings, will be of particular use in our analysis and is presented in Appendix A. Note that it may be preferable to consider the debiased version of the Sinkhorn divergence [26], as entropy regularization can produce an undesirable smoothing effect when computing sparse barycenters [32]. It is defined by

$$\overline{W}_{\epsilon,p}^p(\mu,\nu) = W_{\epsilon,p}^p(\mu,\nu) - \tfrac{1}{2}\big(W_{\epsilon,p}^p(\mu,\mu) + W_{\epsilon,p}^p(\nu,\nu)\big). \qquad (2.4)$$

In any case, both classical and regularized optimal transport require solving an optimization problem as soon as the probability distributions live in a Euclidean space of dimension greater than 1. For univariate distributions, however, the Wasserstein distance (2.2) has an explicit expression which depends on the quantile functions of the measures [58, Chapter 2.2]. By leveraging this analytic form through projections of the $d$-dimensional measures onto lines, [49] introduced sliced Wasserstein distances, which were then theoretically studied in [33]. More precisely, for any $\theta\in S^{d-1}$, we set $P_\theta : x\in\mathbb{R}^d \mapsto \langle\theta,x\rangle\in\mathbb{R}$ the projection on the line directed by $\theta$. For $\mu,\nu$ probability measures in $\mathbb{R}^d$, one can then consider two collections of one-dimensional measures $(P_{\theta\sharp}\mu)_{\theta\in S^{d-1}}$ and $(P_{\theta\sharp}\nu)_{\theta\in S^{d-1}}$, where $T_\sharp\mu$ denotes the push-forward (or image measure) of the measure $\mu$ by the measurable map $T$. For a fixed $\theta$, a sliced distance compares the pair $(P_{\theta\sharp}\mu, P_{\theta\sharp}\nu)$ of one-dimensional measures.
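The one-dimensional closed form makes a Monte-Carlo estimate of sliced distances between empirical measures cheap: project onto random directions, sort, and match order statistics. A minimal sketch (the estimator below, its defaults and its names are our illustration, not the paper's code; it assumes equally weighted samples of the same size):

```python
import numpy as np

def sliced_wasserstein_pp(x, y, p=2, n_dirs=200, seed=None):
    """Monte-Carlo estimate of SW_p^p between the empirical measures of
    x and y (both of shape (n, d), uniform weights). For each random
    direction, the 1-D W_p^p is the mean p-th power gap between sorted
    projections (quantile matching)."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    theta = rng.standard_normal((n_dirs, d))          # directions ~ N(0, I)
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # project to S^{d-1}
    px = np.sort(x @ theta.T, axis=0)                 # sorted projections of x
    py = np.sort(y @ theta.T, axis=0)                 # sorted projections of y
    return float(np.mean(np.abs(px - py) ** p))
```

For example, shifting a point cloud by a vector $v$ shifts every projection by $\langle\theta,v\rangle$, so the estimate concentrates around $\mathbb{E}_\theta\langle\theta,v\rangle^2 = \|v\|^2/d$ for $p=2$, which gives a quick sanity check of the implementation.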
If one averages these distances over all directions $\theta\in S^{d-1}$, the resulting distance is called the Sliced Wasserstein distance. If one instead chooses the direction that yields the largest distance between the projected measures, it is called the max-Sliced Wasserstein distance. The Sliced and max-Sliced Wasserstein distances are therefore respectively defined as

$$SW_p^p(\mu,\nu) = \int_{S^{d-1}} W_p^p(P_{\theta\sharp}\mu, P_{\theta\sharp}\nu)\,d\sigma(\theta), \qquad \text{max-}SW_p^p(\mu,\nu) = \max_{\theta\in S^{d-1}} W_p^p(P_{\theta\sharp}\mu, P_{\theta\sharp}\nu), \qquad (2.5)$$

where $\sigma$ denotes the uniform probability measure over $S^{d-1}$. Using the semi-dual formulation of $W_p$, one can also express these divergences in the semi-discrete case with a finite dimensional optimization program. These expressions are discussed in Appendix A.

2.2 Optimal transport barycenters with sparse support

We recall that an unconstrained barycenter of a collection of target probability measures $\{\mu^\ell\}_{\ell=1}^{L}$ according to an optimal transport divergence $D$ is given as a solution of the following problem:

$$\mu^* \in \operatorname*{argmin}_{\mu\in\mathcal{M}_1(\mathbb{R}^d)} \ \frac{1}{L}\sum_{\ell=1}^{L} D(\mu^\ell,\mu). \qquad (2.6)$$

However, for computational reasons discussed in Section 3, a barycenter $\mu^*$ in (2.6) may be intractable in practice. For instance, for $D=W_p^p$ with $p>1$, if at least one of the target measures is absolutely continuous, then the unique barycenter is also absolutely continuous [1, 10], and therefore inherently hard to recover. To address these situations, a common approach is to approximate $\mu^*$ by a discrete probability measure, by restricting the optimization set in (2.6) to $\mathcal{M}_1^N(\mathbb{R}^d)$. In the literature, this is referred to as the optimal transport barycenter problem with sparse support, or equivalently, discretely supported barycenters with a
cardinality constraint. Formally, the aim is to find a probability measure $\nu^* := \sum_{i=1}^N \pi_i\delta_{y_i}$, where $Y := (y_1,\dots,y_N)\in(\mathbb{R}^d)^N$ and $\pi\in\Delta_N$, that solves the following optimization problem:

$$\min_{(Y,\pi)\in A} F_D\Big(\mu^1,\dots,\mu^L, \sum_{i=1}^N \pi_i\delta_{y_i}\Big) := \frac{1}{L}\sum_{\ell=1}^L D\Big(\mu^\ell, \sum_{i=1}^N \pi_i\delta_{y_i}\Big), \qquad (2.7)$$

where $A$ is a closed nonempty subset of $(\mathbb{R}^d)^N\times\Delta_N$. When specifying the set of optimization constraints $A$ and the number of measures $L$, one recovers various existing cases in the literature [17, 9], which we summarize in the following table.

$A$ | $L = 1$ | $L > 1$
$(\mathbb{R}^d)^N\times\Delta_N$ | Optimal quantization | Barycenter with sparse support
$(\mathbb{R}^d)^N\times\{\bar\pi\}$ | Constrained quantization | Free support barycenter
$\{\bar Y\}\times\Delta_N$ | | Fixed support barycenter

In this article, we provide statistical complexity bounds for (2.7) with $D = W_p^p$, $W_{\epsilon,p}^p$, $\overline{W}_{\epsilon,p}^p$, $SW_p^p$ and max-$SW_p^p$.

3 Algorithmic aspects of optimal transport barycenters and their sparse approximation

Computing optimal transport barycenters presents two main challenges: the need to discretize the target measures and the inherent computational complexity of optimal transport.

Discretization of target measures. As mentioned before, although the target probability distributions may be continuous in the optimization problem (2.6), an algorithm to compute the barycenter operates on discrete data, as it can only handle finite objects. Generally, continuous target probability measures are therefore approximated by empirical measures, and the barycenter (2.6) is computed either by (deterministic) gradient or fixed-point methods [17], or by stochastic gradient methods [36]. Similarly, for discrete target distributions supported on a large number of atoms, one typically resorts to empirical approximations of the whole distributions and stochastic gradient methods [29].
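Once the targets are discretized, the entropic problem (2.3) between two discrete measures reduces to a matrix-scaling fixed point, which is what makes Sinkhorn-type barycenter solvers fast in practice. A minimal sketch of the Sinkhorn iterations (names and the iteration budget are ours; the returned value is the transport part $\langle\gamma,C\rangle$ of the entropic cost, a common proxy for (2.3)):

```python
import numpy as np

def sinkhorn_cost(x, a, y, b, eps=0.1, p=2, n_iter=1000):
    """Entropic OT between discrete measures (x, a) and (y, b) via plain
    Sinkhorn matrix scaling; returns <gamma, C> for the converged plan."""
    C = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) ** p
    K = np.exp(-C / eps)                   # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):                # alternate marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    gamma = u[:, None] * K * v[None, :]    # entropic transport plan
    return float(np.sum(gamma * C))
```

As $\epsilon\to 0$ the plan approaches an optimal unregularized coupling, while large $\epsilon$ trades accuracy for speed and stability, which is the trade-off discussed below; a numerically stabilized (log-domain) variant is preferable for very small $\epsilon$.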
Additionally, most practical algorithms implicitly constrain the cardinality of the barycenter, whose number of atoms must be specified as input. Therefore, these algorithms actually solve the constrained problem (2.7) instead of (2.6). This has led to the notion of optimal transport barycenters with sparse support [60, 9]. Note that these barycenters are strongly related to optimal quantization and K-means problems (consider $L=1$ in (2.7)), as discussed in Section 5.3.

For (centered) Gaussian target distributions, however, a fixed-point algorithm on covariance matrices [4] allows one to compute the Gaussian barycenter [1] without discretization. Finally, an alternative to directly handling continuous measures could be to triangulate the supports of the target distributions, as is done in the optimal quantization problem and semi-discrete optimal transport [35]. However, this approach seems difficult to extend to the barycenter problem, since target measures may be distributed over very different supports.

Computational complexity of optimal transport. Even for discrete probability distributions supported on $N$ points in $\mathbb{R}^d$, the optimal transport problem has a prohibitive $O(N^3\log(N))$ computational cost [46]. The Wasserstein barycenter problem naturally inherits this complexity for a set of $L$ measures [3, 9]. Consequently, the barycenter problem is only tractable for measures supported on a reasonable number of atoms [2]. In practice, Sinkhorn divergences $W_{\epsilon,p}^p$ (2.3) are often favored for computing Wasserstein barycenters (1.2), resulting in Sinkhorn barycenters. These divergences offer a significant computational advantage, of order $O(N^2/\epsilon)$ [22], which allows a trade-off between optimal transport accuracy and algorithmic efficiency. Regularization is crucial and can be introduced
in two ways: either by using the Sinkhorn divergence $D=W_{\epsilon,p}^p$ in the barycenter problem, or by adding an entropic regularization term on the barycenter itself in the optimization problem (1.2). In [14], the author studies both regularization approaches, as well as a combination of the two.

Sliced-Wasserstein distances are also relevant for computing barycenters in the space of probability measures, as introduced in [49] for an application to texture mixing. Sliced-Wasserstein barycenters were then studied in [8]. As mentioned in Section 2, sliced distances rely on the closed-form solution of the one-dimensional Wasserstein distance, and sliced barycenters consequently inherit this computational efficiency.

4 Sample complexity for sparse optimal transport barycenters

In this section, we present our main results: the generalization error bound, followed by the sample complexity of classical, entropic and sliced optimal transport barycenters. These were inspired by [28, Section 4.4], which provides the sample complexity of the K-means functional (see Section 5.3).

Theorem 4.1 (Generalization error bound). Let $\mu^1,\dots,\mu^L\in\mathcal{M}_1(B_R)$, for $R>0$. For some integer $n$, let $\mu_n^1,\dots,\mu_n^L$ be empirical measures supported over $n$ i.i.d. random variables of respective law $\mu^\ell$, for all $\ell\in[[1,L]]$. Let $D=W_p^p$, $W_{\epsilon,p}^p$ (for some $\epsilon>0$), $SW_p^p$ or max-$SW_p^p$. Then for any integer $N\ge1$, and $F_D$ the function defined in (2.7), we have

$$\mathbb{E}\Bigg[\sup_{(Y,\pi)\in B_R^N\times\Delta_N} \bigg| F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big) - F_D\Big(\mu_n^1,\dots,\mu_n^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big) \bigg|\Bigg] \le C_{p,R}\sqrt{\frac{C_{d,N}}{n}},$$

where the finite constants $C_{p,R}$ and $C_{d,N}$ depend on the divergence $D$ and are summarized in the following table.
$D$ | $C_{p,R}$ | $C_{d,N}$
$W_p^p$ | $8\sqrt{2}\int_0^{3(2R)^p} \sqrt{\log\big(2 + \frac{32p(2R)^p}{\tau}\big)}\,d\tau$ | $N(d+1)$
$SW_p^p$, max-$SW_p^p$ | $8\sqrt{2}\int_0^{3(2R)^p} \sqrt{\log\big(2 + \frac{48p(2R)^p}{\tau}\big)}\,d\tau$ | $N(d+1)+d$
$W_{\epsilon,p}^p$ | $8\sqrt{2}\int_0^{(4p+1)(2R)^p} \sqrt{\log\big(2 + \frac{64p(2R)^p}{\tau}\big)}\,d\tau$ | $N(d+1)$

Theorem 4.1 provides uniform bounds for the empirical barycenter cost function, which is the cost function minimized by most algorithms in practice when computing an optimal transport barycenter (see Section 3).

Remark 4.2 (On the hypotheses of Theorem 4.1). Note that we consider here empirical measures supported on sets of samples of equal size $n$, but our results also hold for different sample sizes for each target measure, by replacing $n$ with the minimum of those individual sample sizes in the upper bound of the generalization error. Finally, although for convenience we consider the barycenter problem (2.7) with uniform weights $\frac{1}{L}$ for each measure $\mu^\ell$, Theorem 4.1 holds for any weights $\lambda\in\Delta_L$.

The proof of our main theorem relies on log-entropy to measure the complexity of a particular functional class derived from the dual formulation of the divergence $D$ under consideration. A sketch of proof is presented below, with the complete details provided in Appendix C.

Sketch of proof. The proof is based on empirical process theory. For each divergence $D$, we rewrite the empirical sparse optimal transport barycenter problem as an empirical risk minimization problem over a parameterized class of functionals given by the dual formulation (see Appendix A) of $D$. These classes of functions are of the form

$$\mathcal{F} := \bigg\{ f^c_{(Y,\pi)} \ :\ f \text{ dual variable of } D\Big(\mu, \sum_{i=1}^N \pi_i\delta_{y_i}\Big),\ \text{for } (Y,\pi)\in(\mathbb{R}^d)^N\times\Delta_N \bigg\},$$
where $\mu$ is a target measure and $f^c$ denotes the $c$-transform of $f$. Upper bounding the generalization error uniformly over the parameter space by the log-entropy of the functional class allows us to conclude.

Remark 4.3. Alternative statistical learning techniques could have been applied to derive the upper bounds in Theorem 4.1. For instance, Rademacher complexity was used in [24] to obtain statistical bounds for Sinkhorn divergences, while Vapnik-Chervonenkis theory has been applied to quantization problems (see [28, Section 4.4]). Applying the latter to our setting leads to slightly looser upper bounds of the form $\sqrt{\frac{N(d+1)\log(N(d+1))}{n}}$.

Theorem 4.1 directly yields the sample complexity of sparse optimal transport barycenters, as stated in the following corollary.

Corollary 4.4 (Sparse optimal transport barycenters). Let $\mu^1,\dots,\mu^L\in\mathcal{M}_1(B_R)$, for $R>0$. For some integer $n$, let $\mu_n^1,\dots,\mu_n^L$ be empirical measures supported over $n$ i.i.d. random variables of respective law $\mu^\ell$, for all $\ell\in[[1,L]]$. Let also $N\ge1$ be some integer, $A$ a closed nonempty subset of $B_R^N\times\Delta_N$, and let $D=W_p^p$, $W_{\epsilon,p}^p$ (for some $\epsilon>0$), $SW_p^p$ or max-$SW_p^p$. We denote by $(Y^n,\pi^n)$ an argmin in $A$ of

$$(Y,\pi) \mapsto F_D\Big(\mu_n^1,\dots,\mu_n^L, \sum_{i=1}^N \pi_i\delta_{y_i}\Big) = \frac{1}{L}\sum_{\ell=1}^L D\Big(\mu_n^\ell, \sum_{i=1}^N \pi_i\delta_{y_i}\Big).$$

Then we have:

$$\mathbb{E}\Bigg[ F_D\Big(\mu^1,\dots,\mu^L, \sum_{i=1}^N \pi_i^n\delta_{y_i^n}\Big) - \min_{(Y,\pi)\in A} F_D\Big(\mu^1,\dots,\mu^L, \sum_{i=1}^N \pi_i\delta_{y_i}\Big) \Bigg] \le 2\,C_{p,R}\sqrt{\frac{C_{d,N}}{n}},$$

where the constants $C_{p,R}$ and $C_{d,N}$ are summarized in the table of Theorem 4.1.

This corollary is obtained by leveraging a classical statistical learning theory argument, which is given in the proof below.

Proof. Suppose $A$ is a closed subset of the compact set $B_R^N\times\Delta_N$.
By continuity of the functionals $(Y,\pi)\mapsto F_D(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i})$ and $(Y,\pi)\mapsto F_D(\mu_n^1,\dots,\mu_n^L,\sum_{i=1}^N\pi_i\delta_{y_i})$, they both admit global minimizers in $A$. We denote by $(Y^*,\pi^*)$ and $(Y^n,\pi^n)$ two respective minimizers, and their associated discrete measures $\nu^* := \sum_{i=1}^N \pi_i^*\delta_{y_i^*}$ and $\nu_n^* := \sum_{i=1}^N \pi_i^n\delta_{y_i^n}$. Then we have

$$F_D(\mu^1,\dots,\mu^L,\nu_n^*) - F_D(\mu^1,\dots,\mu^L,\nu^*)$$
$$\le F_D(\mu^1,\dots,\mu^L,\nu_n^*) - F_D(\mu_n^1,\dots,\mu_n^L,\nu_n^*) + F_D(\mu_n^1,\dots,\mu_n^L,\nu^*) - F_D(\mu^1,\dots,\mu^L,\nu^*)$$
$$\le 2\sup_{(Z,\tau)\in A} \bigg| F_D\Big(\mu_n^1,\dots,\mu_n^L,\sum_{i=1}^N\tau_i\delta_{z_i}\Big) - F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\tau_i\delta_{z_i}\Big) \bigg|.$$

Taking the expectation on both sides and applying Theorem 4.1 allows us to conclude.

Corollary 4.4 and the sample complexity results for Sinkhorn divergences in [24] also yield an upper bound for debiased Sinkhorn barycenters, that is, for $D=\overline{W}_{\epsilon,p}^p$ defined in (2.4). The proof of the next corollary is also postponed to Appendix C.

Corollary 4.5 (Debiased Sinkhorn barycenters). Let $\mu^1,\dots,\mu^L\in\mathcal{M}_1(B_R)$, for $R>0$. For some integer $n$, let $\mu_n^1,\dots,\mu_n^L$ be empirical measures supported over $n$ i.i.d. random variables of respective law $\mu^\ell$, for all $\ell\in[[1,L]]$. Let also $N\ge1$ be an integer, $A$ a closed nonempty subset of $B_R^N\times\Delta_N$, and let $D=\overline{W}_{\epsilon,p}^p$ (for some $\epsilon>0$). We denote by $(Y^n,\pi^n)$ an argmin in $A$ of

$$(Y,\pi)\mapsto F_D\Big(\mu_n^1,\dots,\mu_n^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big).$$

Then we have

$$\mathbb{E}\Bigg[F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i^n\delta_{y_i^n}\Big) - \min_{(Y,\pi)\in A} F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)\Bigg] \le C_{p,R}\sqrt{\frac{N(d+1)}{n}} + 2K\,\frac{e^{\kappa/\epsilon}}{\sqrt{n}}\Big(1+\frac{1}{\epsilon^{\lfloor d/2\rfloor}}\Big),$$

with $\kappa = 2C_p R + (2R)^p$, where $C_p = \sqrt{2}\,p(2R)^{p-1}$ is the Lipschitz constant of the Euclidean cost at the power $p$.
Additionally, $K$ is an unknown constant depending only on $R$ and $p$, and $C_{p,R}$ is the constant associated to $W_{\epsilon,p}^p$ in Theorem 4.1.

5 Discussion of the results

5.1 Sample complexity of optimal transport divergences

Although in general empirical optimal transport suffers from the curse of dimensionality [23], it has now been well established that the rate of convergence of the
empirical optimal transport divergences between a measure $\mu$ and an empirical measure $\nu_n$ distributed over $n$ i.i.d. random variables of common law $\nu$ depends on the intrinsic complexity of the measures $\nu$ and $\mu$. If at least one of them is discretely supported, the convergence rate is of order $O(1/\sqrt{n})$ [54, 24, 31, 43, 37], which does not depend exponentially on the dimension of the underlying space $\mathbb{R}^d$. In other words, the sample complexity of optimal transport adapts to the measure with lowest complexity [31]. Our main theorem allows us to recover this $\frac{1}{\sqrt{n}}$ rate of convergence for empirical semi-discrete transport. By choosing $L=1$, we get that for any $\nu\in\mathcal{M}_1^N(B_R)$,

$$\mathbb{E}\big|D(\mu_n,\nu) - D(\mu,\nu)\big| \le \mathbb{E}\Bigg[\sup_{(Y,\pi)\in B_R^N\times\Delta_N}\bigg|D\Big(\mu_n,\sum_{i=1}^N\pi_i\delta_{y_i}\Big) - D\Big(\mu,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)\bigg|\Bigg],$$

so the upper bounds of Theorem 4.1 also hold for the empirical divergences themselves. We note however that they do not improve upon the literature.

5.2 Barycenters with sparse support

For a sparse barycenter with at most $N$ support points, and empirical target measures supported on $n$ random variables each, the rates we obtain in our main Theorem 4.1 are of order $\sqrt{N/n}$, which is classical in semi-discrete optimal transport (see [18, Theorem 2.10]). Interestingly, this rate is uniform in the number of measures $L$ and in the regularization parameter $\epsilon$ for the Sinkhorn divergence, and it exhibits no curse of dimensionality (i.e. no exponential dependency on the dimension $d$ of the target measures).

The authors of [29] tackle the unconstrained Wasserstein barycenter problem, that is, (2.6) for $D=W_2^2$, for a set of discrete target probability measures.
In this case, the barycenter is also discrete: assuming that each measure $\mu^\ell$ is supported on $M_\ell$ points, and choosing $N \ge \sum_{\ell=1}^L M_\ell - L + 1$, the problems (2.6) and (1.2) coincide, since there always exists a Wasserstein barycenter supported on a set of cardinality at most $\sum_{\ell=1}^L M_\ell - L + 1$, as shown in [5, Theorem 2]. In this setting, we recover the convergence rate of [29].

An important remaining question is whether our convergence rates are optimal. The question is open for most optimal transport divergences; our convergence rate is close to optimal for the 2-Wasserstein distance, thanks to [28, Theorem 4.7], which applies to a single measure, $L=1$. More precisely, let $\mu\in\mathcal{M}_1(B_R)$ and $\mu_n$ its empirical counterpart. We denote by $Y^n=(y_1^n,\dots,y_N^n)\in(\mathbb{R}^d)^N$ a minimizer of $Y\mapsto W_2^2(\mu_n,\sum_{i=1}^N\bar\pi_i\delta_{y_i})$, where $\bar\pi$ is a minimizer of $\pi\in\Delta_N\mapsto \min_{Y^*\in(\mathbb{R}^d)^N} W_2^2(\mu,\sum_{i=1}^N\pi_i\delta_{y_i^*})$. By translating the result of [28, Theorem 4.7] into optimal transport terminology, we obtain

$$W_2^2\Big(\mu,\sum_{i=1}^N\bar\pi_i\delta_{y_i^n}\Big) - \min_{Y^*\in(\mathbb{R}^d)^N} W_2^2\Big(\mu,\sum_{i=1}^N\bar\pi_i\delta_{y_i^*}\Big) \ \ge\ \min_{\pi\in\Delta_N} W_2^2\Big(\mu,\sum_{i=1}^N\pi_i\delta_{y_i^n}\Big) - \min_{Y^*\in(\mathbb{R}^d)^N} W_2^2\Big(\mu,\sum_{i=1}^N\bar\pi_i\delta_{y_i^*}\Big) \ \gtrsim\ \sqrt{\frac{N^{1-4/d}}{n}}.$$

The convergence rate of Corollary 4.4 for $D=W_2^2$ and $L=1$ is therefore min-max optimal in $n$, with a suboptimal multiplicative factor $\sqrt{d+1}\,N^{4/d}$. We also expect this result to hold for $D=SW_p^p$ and max-$SW_p^p$, given that the $\frac{1}{\sqrt{n}}$ rate is also optimal for these divergences themselves [59]. The question remains open, however, for the barycenter with respect to the Sinkhorn divergence $W_{\epsilon,p}^p$. We conjecture that the sharper $\frac{1}{n}$ rate could be obtained, as it is the rate recently found for entropic empirical optimal transport
of subgaussian probability measures [19].

5.3 K-means and constrained K-means

Similarly to the previous subsection, choosing $D=W_2^2$ and $L=1$ in Corollary 4.4 provides error bounds for the celebrated K-means and constrained K-means problems. Let $\mu\in\mathcal{M}_1(B_R)$ and $X_1,\dots,X_n$ be $n$ i.i.d. samples of law $\mu$. It is well known that the minimization of the K-means functional

$$\min_{Y\in(B_R)^N} \sum_{i=1}^n \min_{j=1,\dots,N} \|X_i - y_j\|^2 \qquad (5.1)$$

can be recast as an optimal quantization problem for the empirical measure $\mu_n$ distributed on the sample points $X_1,\dots,X_n$ (see Appendix D.3). In other words, we have

$$\min_{Y\in(B_R)^N} \frac{1}{n}\sum_{i=1}^n \min_{j=1,\dots,N} \|X_i - y_j\|^2 = \min_{\nu\in\mathcal{M}_1^N(B_R)} W_2^2(\mu_n,\nu). \qquad (5.2)$$

Corollary 4.4 then yields a convergence rate of order $O(\sqrt{N/n})$ for the K-means problem, which improves upon the convergence rate $O\Big(\sqrt{\frac{N\log(N)}{n}}\Big)$ given in [28, Section 4.4]. Additionally, the Wasserstein functional formulation of the K-means problem provides a complementary approach, closer to the approximation-of-measures point of view. It also allows one to consider arbitrary cluster size constraints, a variant initially formulated in [44]. Formally, the constrained K-means problem consists in fixing a vector of weights $\bar\pi\in\Delta_N$ and then solving

$$\min_{Y\in(\mathbb{R}^d)^N} W_2^2\Big(\mu_n, \sum_{i=1}^N \bar\pi_i\delta_{y_i}\Big). \qquad (5.3)$$

For a known target measure $\mu$ and a uniform vector of weights $\bar\pi_i = 1/N$ for all $i\in[[1,N]]$, we recover the optimal uniform quantization problem, which is solved with a variation of the Lloyd algorithm [40]. In the empirical case $\mu_n$, Corollary 4.4 applies to this problem as well.

6 Conclusion

In this paper, we prove that the rate of convergence in expectation of empirical optimal transport barycenters with sparse support is of order $O\big(\sqrt{N/n}\big)$.
This result is established for several types of optimal transport divergences, namely the Wasserstein and sliced distances, as well as the Sinkhorn divergence. We show that they all behave similarly as the number of observations $n$ per target measure grows to infinity. A natural question is whether these rates are min-max optimal. While there exist elements of an answer for the Wasserstein distances $D=W_p^p$, it remains an open question for Sinkhorn divergences $D=W_{\epsilon,p}^p$, whose sample complexity is of order $\frac{1}{n}$, sharper than the typical rate $\frac{1}{\sqrt{n}}$ observed in optimal transport. Finally, although our results are limited to the compactly supported case, we believe this is not a significant restriction, as most applications fall within this setting: texture mixing [49], constrained clustering [30], shape interpolation [53].

Acknowledgments

This work benefited from financial support from the French government managed by the National Agency for Research under the France 2030 program, with the reference "ANR-23-PEIA-0004". Edouard Pauwels acknowledges the support of Institut Universitaire de France (IUF), the AI Interdisciplinary Institute ANITI funding through the French "Investments for the Future - PIA3" program under the grant agreement ANR-19-PI3A0004, Air Force Office of Scientific Research, Air Force Material Command, USAF, under grant numbers FA8655-22-1-7012, ANR Chess (ANR-17-EURE-0010), ANR Regulia and ANR Bonsai.

References

[1] Martial Agueh and Guillaume Carlier. Barycenters in the Wasserstein space. SIAM Journal on Mathematical Analysis, 43(2):904–924, 2011.

[2] Jason M Altschuler and Enric Boix-Adsera. Wasserstein barycenters can
be computed in polynomial time in fixed dimension. Journal of Machine Learning Research, 22(44):1–19, 2021.

[3] Jason M Altschuler and Enric Boix-Adsera. Wasserstein barycenters are NP-hard to compute. SIAM Journal on Mathematics of Data Science, 4(1):179–203, 2022.

[4] Pedro C Álvarez-Esteban, Eustasio Del Barrio, Juan Cuesta-Albertos, and Carlos Matrán. A fixed-point approach to barycenters in Wasserstein space. Journal of Mathematical Analysis and Applications, 441(2):744–762, 2016.

[5] Ethan Anderes, Steffen Borgwardt, and Jacob Miller. Discrete Wasserstein barycenters: Optimal transport for discrete data. Mathematical Methods of Operations Research, 84:389–409, 2016.

[6] Julio Backhoff, Joaquin Fontbona, Gonzalo Rios, and Felipe Tobar. Stochastic gradient descent for barycenters in Wasserstein space. Journal of Applied Probability, pages 1–29, 2024.

[7] Jérémie Bigot. Statistical data analysis in the Wasserstein space. ESAIM: Proceedings and Surveys, 68:1–19, 2020.

[8] Nicolas Bonneel, Julien Rabin, Gabriel Peyré, and Hanspeter Pfister. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 51:22–45, 2015.

[9] Steffen Borgwardt and Stephan Patterson. On the computational complexity of finding a sparse Wasserstein barycenter. Journal of Combinatorial Optimization, 41(3):736–761, 2021.

[10] Camilla Brizzi, Gero Friesecke, and Tobias Ried. p-Wasserstein barycenters. Nonlinear Analysis, 251:113687, 2025.

[11] Luc Brogat-Motte, Rémi Flamary, Céline Brouard, Juho Rousu, and Florence d'Alché-Buc. Learning to predict graphs with fused Gromov-Wasserstein barycenters. In International Conference on Machine Learning, pages 2321–2335. PMLR, 2022.

[12] Guillaume Carlier, Alex Delalande, and Quentin Merigot. Quantitative stability of barycenters in the Wasserstein space. Probability Theory and Related Fields, 188(3):1257–1286, 2024.
[13] Sinho Chewi, Tyler Maunu, Philippe Rigollet, and Austin J Stromme. Gradient descent algorithms for Bures-Wasserstein barycenters. In Conference on Learning Theory, pages 1276–1304. PMLR, 2020.
[14] Lénaïc Chizat. Doubly regularized entropic Wasserstein barycenters. arXiv preprint arXiv:2303.11844, 2023.
[15] Sebastian Claici, Edward Chien, and Justin Solomon. Stochastic Wasserstein barycenters. In International Conference on Machine Learning, pages 999–1008. PMLR, 2018.
[16] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26, 2013.
[17] Marco Cuturi and Arnaud Doucet. Fast computation of Wasserstein barycenters. In International Conference on Machine Learning, pages 685–693. PMLR, 2014.
[18] Eustasio Del Barrio, Alberto González Sanz, and Jean-Michel Loubes. Central limit theorems for semi-discrete Wasserstein distances. Bernoulli, 30(1):554–580, 2024.
[19] Eustasio del Barrio, Alberto González Sanz, Jean-Michel Loubes, and Jonathan Niles-Weed. An improved central limit theorem and fast convergence rates for entropic transportation costs. SIAM Journal on Mathematics of Data Science, 5(3):639–669, 2023.
[20] Darina Dvinskikh. Stochastic approximation versus sample average approximation for population Wasserstein barycenters. arXiv preprint arXiv:2001.07697, 2020.
[21] Pavel Dvurechenskii, Darina Dvinskikh, Alexander Gasnikov, Cesar Uribe, and Angelia Nedich. Decentralize and randomize: Faster algorithm for Wasserstein barycenters. Advances in Neural Information Processing Systems, 31, 2018.
[22] Pavel Dvurechensky, Alexander Gasnikov, and Alexey Kroshnin. Computational optimal transport: Complexity
by accelerated gradient descent is better than by Sinkhorn's algorithm. In International Conference on Machine Learning, pages 1367–1376. PMLR, 2018.
[23] Nicolas Fournier and Arnaud Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3):707–738, 2015.
[24] Aude Genevay, Lénaïc Chizat, Francis Bach, Marco Cuturi, and Gabriel Peyré. Sample complexity of Sinkhorn divergences. In Kamalika Chaudhuri and Masashi Sugiyama, editors, Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pages 1574–1583. PMLR, 2019.
[25] Aude Genevay, Marco Cuturi, Gabriel Peyré, and Francis Bach. Stochastic optimization for large-scale optimal transport. Advances in Neural Information Processing Systems, 29, 2016.
[26] Aude Genevay, Gabriel Peyré, and Marco Cuturi. Learning generative models with Sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pages 1608–1617. PMLR, 2018.
[27] Evarist Giné and Richard Nickl. Mathematical Foundations of Infinite-Dimensional Statistical Models. Cambridge University Press, 2021.
[28] László Györfi. Principles of Nonparametric Learning. Springer, 2002.
[29] Florian Heinemann, Axel Munk, and Yoav Zemel. Randomized Wasserstein barycenter computation: Resampling with statistical guarantees. SIAM Journal on Mathematics of Data Science, 4(1):229–259, 2022.
[30] Nhat Ho, XuanLong Nguyen, Mikhail Yurochkin, Hung Hai Bui, Viet Huynh, and Dinh Phung. Multilevel clustering via Wasserstein means. In International Conference on Machine Learning, pages 1501–1509. PMLR, 2017.
[31] Shayan Hundrieser, Thomas Staudt, and Axel Munk. Empirical optimal transport between different measures adapts to lower complexity. In Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, volume 60, pages 824–846.
Institut Henri Poincaré, 2024.
[32] Hicham Janati, Marco Cuturi, and Alexandre Gramfort. Debiased Sinkhorn barycenters. In International Conference on Machine Learning, pages 4692–4701. PMLR, 2020.
[33] S. Kolouri, K. Nadjahi, U. Simsekli, R. Badeau, and G. Rohde. Generalized Sliced Wasserstein distances. Advances in Neural Information Processing Systems, 32, 2019.
[34] Thibaut Le Gouic, Quentin Paris, Philippe Rigollet, and Austin J Stromme. Fast convergence of empirical barycenters in Alexandrov spaces and the Wasserstein space. Journal of the European Mathematical Society, 25(6):2229–2250, 2022.
[35] Bruno Lévy. A numerical algorithm for L2 semi-discrete optimal transport in 3D. ESAIM: Mathematical Modelling and Numerical Analysis, 49(6):1693–1715, 2015.
[36] Lingxiao Li, Aude Genevay, Mikhail Yurochkin, and Justin M Solomon. Continuous regularized Wasserstein barycenters. Advances in Neural Information Processing Systems, 33:17755–17765, 2020.
[37] Tianyi Lin, Zeyu Zheng, Elynn Chen, Marco Cuturi, and Michael I Jordan. On projection robust optimal transport: Sample complexity and model misspecification. In International Conference on Artificial Intelligence and Statistics, pages 262–270. PMLR, 2021.
[38] Pascal Massart. Concentration Inequalities and Model Selection: École d'Été de Probabilités de Saint-Flour XXXIII-2003. Springer, 2007.
[39] Robert J McCann. A convexity principle for interacting gases. Advances in Mathematics, 128(1):153–179, 1997.
[40] Quentin Mérigot, Filippo Santambrogio, and Clément Sarrazin. Non-asymptotic convergence bounds
for Wasserstein approximation using point clouds. In Advances in Neural Information Processing Systems, volume 34, pages 12810–12821. Curran Associates, Inc., 2021.
[41] Eduardo F Montesuma and Fred-Maurice Ngolè Mboula. Wasserstein barycenter transport for acoustic adaptation. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3405–3409. IEEE, 2021.
[42] Eduardo Fernandes Montesuma and Fred Maurice Ngole Mboula. Wasserstein barycenter for multi-source domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16785–16793, 2021.
[43] Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, and Umut Simsekli. Statistical and topological properties of Sliced probability divergences. Advances in Neural Information Processing Systems, 33:20802–20812, 2020.
[44] Michael Kwok-Po Ng. A note on constrained k-means algorithms. Pattern Recognition, 33(3):515–519, 2000.
[45] Gilles Pagès, Huyên Pham, and Jacques Printems. Optimal quantization methods and applications to numerical problems in finance. Handbook of Computational and Numerical Methods in Finance, pages 253–297, 2004.
[46] Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning, 11(5-6):355–607, 2019.
[47] Léo Portales, Elsa Cazelles, and Edouard Pauwels. On the sequential convergence of Lloyd's algorithms. arXiv preprint arXiv:2405.20744, 2024.
[48] Giovanni Puccetti, Ludger Rüschendorf, and Steven Vanduffel. On the computation of Wasserstein barycenters. Journal of Multivariate Analysis, 176:104581, 2020.
[49] Julien Rabin, Gabriel Peyré, Julie Delon, and Marc Bernot. Wasserstein barycenter and its application to texture mixing. In International Conference on Scale Space and Variational Methods in Computer Vision, pages 435–446. Springer, 2011.
[50] Filippo Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, NY, 55(58-63):94, 2015.
[51] Bodhisattva Sen. A gentle introduction to empirical process theory and applications. Lecture Notes, Columbia University, 11:28–29, 2018.
[52] Richard Sinkhorn and Paul Knopp. Concerning nonnegative matrices and doubly stochastic matrices. Pacific Journal of Mathematics, 21(2):343–348, 1967.
[53] Justin Solomon, Fernando De Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics (ToG), 34(4):1–11, 2015.
[54] Max Sommerfeld and Axel Munk. Inference for empirical Wasserstein distances on finite spaces. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80(1):219–238, 2018.
[55] Sanvesh Srivastava, Volkan Cevher, Quoc Dinh, and David Dunson. WASP: Scalable Bayes via barycenters of subset posteriors. In Artificial Intelligence and Statistics, pages 912–920. PMLR, 2015.
[56] Matthew Staib, Sebastian Claici, Justin M Solomon, and Stefanie Jegelka. Parallel streaming Wasserstein barycenters. Advances in Neural Information Processing Systems, 30, 2017.
[57] Cédric Villani. Optimal Transport: Old and New, volume 338 of Grundlehren der mathematischen Wissenschaften. Springer Berlin Heidelberg, 2008.
[58] Cédric Villani. Topics in Optimal Transportation, volume 58. American Mathematical Soc., 2021.
[59] Xianliang Xu and Zhongyi Huang. Central limit theorem for the sliced 1-Wasserstein distance and the max-sliced 1-Wasserstein distance. arXiv preprint arXiv:2205.14624, 2022.
[60] Qingyuan Yang and Hu Ding. Approximate algorithms for k-sparse Wasserstein barycenter with outliers. arXiv preprint arXiv:2404.13401, 2024.

A Semi-dual formulation of optimal transport

The Kantorovich dual formulation of the Wasserstein distances $W_p$ can be recast as a semi-dual problem using the c-transform. Similarly, the Sinkhorn divergences $W^p_{\epsilon,p}$ admit a semi-dual formulation based on the smoothed c-transform. In the semi-discrete setting (i.e. transport between a continuous and a discrete measure), these formulations turn an infinite-dimensional program (corresponding to the primal formulations (2.1) and (2.3)) into a finite-dimensional one, which is a key point in our proofs. In this section, we present these semi-dual formulations in the semi-discrete setting, with a particular focus on the case where the discrete measure has zero weights or repeated atoms. Throughout this section, we consider a measure $\mu \in \mathcal{M}_1(\mathbb{R}^d)$.

A.1 Semi-dual formulation outside the generalized diagonal

The generalized diagonal plays a central role throughout our proofs; it denotes the set of point clouds with at least two identical components:
$$\mathcal{D}_N = \big\{\, Y := (y_1,\dots,y_N) \in (\mathbb{R}^d)^N \;\big|\; \exists\, i \neq j \text{ such that } y_i = y_j \,\big\}. \tag{A.1}$$
Let us then define $\nu := \sum_{i=1}^N \pi_i \delta_{y_i}$, a discrete measure supported on $Y := (y_1,\dots,y_N) \in (\mathbb{R}^d)^N \setminus \mathcal{D}_N$ with strictly positive weights $\pi \in \operatorname{int}(\Delta_N)$. For a bounded continuous function $\varphi \in \mathcal{C}_b(\mathbb{R}^d)$, let us also denote by $\varphi^c \in \mathcal{C}_b(\mathbb{R}^d)$ its c-transform, defined in our case for all $x \in \mathbb{R}^d$ as $\varphi^c(x) = \min_{i=1,\dots,N}\big(\|x-y_i\|^p - \varphi(y_i)\big)$ [50, Section 1.3].
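Purely as a numerical illustration (our own sketch, not code from the paper), the semi-discrete c-transform above is easy to evaluate pointwise; below, `phi` holds the values $\varphi(y_i)$ and the rows of `Y` are the atoms $y_i$:

```python
import numpy as np

def c_transform(phi, Y, x, p=2):
    # Semi-discrete c-transform: phi^c(x) = min_i ( ||x - y_i||^p - phi_i ),
    # where Y is an (N, d) array of atoms and phi an (N,) array of potentials.
    costs = np.linalg.norm(x[None, :] - Y, axis=1) ** p
    return float(np.min(costs - phi))
```

For instance, with $\varphi = 0$ and $x$ equal to one of the atoms, the c-transform vanishes since the minimal cost term is zero.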
Then, by Kantorovich duality ([57, Theorem 5.10]) and c-convexity of the optimal potentials, we have the following formulation for the p-Wasserstein distance between $\mu$ and $\nu$:
$$W_p^p(\mu,\nu) := \inf_{\gamma \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d\times\mathbb{R}^d} \|x-y\|^p \, d\gamma(x,y) = \max_{\varphi\in\mathbb{R}^N} \int_{\mathbb{R}^d} \min_{i=1,\dots,N} \{\|x-y_i\|^p - \varphi_i\}\, d\mu(x) + \sum_{i=1}^N \pi_i \varphi_i. \tag{A.2}$$
In (A.2), with a slight abuse of notation, the dual potential $\varphi\in\mathcal{C}_b(\mathbb{R}^d)$ is identified with its values on the support of $\nu$, which consists of $N$ distinct points: $\varphi_i = \varphi(y_i)$ for $i=1,\dots,N$. For this dual formulation to hold, the measure $\nu$ must be supported outside of the generalized diagonal $\mathcal{D}_N$, with strictly positive weights; see the discussion in Section A.2 below.

Entropic optimal transport (2.3) between $\mu$ and $\nu$ also admits both a dual and a semi-dual form (see [25, Proposition 2.1]), which are respectively given by the following optimization problems:
$$W^p_{\epsilon,p}(\mu,\nu) = \max_{(u,v)\in\mathcal{C}(\mathbb{R}^d)\times\mathbb{R}^N} \int_{\mathbb{R}^d} u(x)\,d\mu(x) + \sum_{i=1}^N \pi_i v_i - \epsilon \sum_{i=1}^N \pi_i \int_{\mathbb{R}^d} \exp\Big(\frac{u(x)+v_i-\|x-y_i\|^p}{\epsilon}\Big)\, d\mu(x) \tag{A.3}$$
and
$$W^p_{\epsilon,p}(\mu,\nu) = \max_{w\in\mathbb{R}^N} -\epsilon \int_{\mathbb{R}^d} \log\Bigg(\sum_{i=1}^N \pi_i \exp\Big(\frac{w_i - \|x-y_i\|^p}{\epsilon}\Big)\Bigg) d\mu(x) + \sum_{i=1}^N \pi_i w_i - \epsilon. \tag{A.4}$$
These constructions also require that $Y := (y_1,\dots,y_N) \in (\mathbb{R}^d)^N \setminus \mathcal{D}_N$ and $\pi \in \operatorname{int}(\Delta_N)$.

A.2 The case of support points on the generalized diagonal $\mathcal{D}_N$ and/or with null weights

For a functional of the form $(Y,\pi) \mapsto D\big(\mu, \sum_{i=1}^N \pi_i \delta_{y_i}\big)$, where $D$ is an optimal transport divergence, it is necessary to assume that the point cloud $Y$ lies outside of the generalized diagonal $\mathcal{D}_N$, and that the weights are strictly positive, in order to apply the duality formulations described above.
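To make the semi-dual (A.4) concrete, the following sketch (our own illustration with hypothetical names, not code from the paper) estimates its objective by replacing $\mu$ with an empirical sample and using a stabilized log-sum-exp:

```python
import numpy as np

def sinkhorn_semidual(w, X, Y, pi, eps, p=2):
    # Monte Carlo estimate of the semi-dual objective in (A.4):
    #   -eps * E_mu[ log( sum_i pi_i * exp((w_i - ||x - y_i||^p) / eps) ) ]
    #     + sum_i pi_i * w_i - eps,
    # with the expectation over mu replaced by an average over the rows of X.
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** p  # (n, N)
    Z = (w[None, :] - C) / eps + np.log(pi)[None, :]
    m = Z.max(axis=1, keepdims=True)              # stabilized log-sum-exp
    lse = m[:, 0] + np.log(np.exp(Z - m).sum(axis=1))
    return float(-eps * lse.mean() + pi @ w - eps)
```

One can check that shifting every coordinate of $w$ by a constant $c$ leaves the value unchanged: the shift contributes $+c$ through $\sum_i \pi_i w_i$ and $-c$ through the log term.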
Indeed, suppose that $y_i = y_j$ for some $i \neq j$; then the dual formulation of the problem should include the constraint $\varphi_i = \varphi_j$, as both $\varphi_i$ and $\varphi_j$ represent the evaluation of a continuous potential at the same point. In other words, we would have $\varphi_i = \varphi(y_i) = \varphi(y_j) = \varphi_j$. While this may seem like a minor technicality that can often
be overlooked, it cannot be omitted in our analysis. This is particularly relevant in the case of the sliced Wasserstein distance, where two distinct points in $\mathbb{R}^d$ may project onto the same point on a line. Similarly, it is important to ensure that the weights of the discrete measure remain in the interior of the probability simplex $\operatorname{int}(\Delta_N)$. This technicality has been handled in [47, Section 5.1] with the following proposition.

Proposition A.1. Let $(Y,\pi) \in (\mathbb{R}^d)^N \times \Delta_N$. Then there exist $M \in [[1,N]]$ and $(\tilde Y, \tilde\pi) \in ((\mathbb{R}^d)^M \setminus \mathcal{D}_M) \times \operatorname{int}(\Delta_M)$ such that for all $\mu \in \mathcal{M}_1(\mathbb{R}^d)$ and any $p \geq 1$:
$$W_p\Big(\mu, \sum_{i=1}^N \pi_i \delta_{y_i}\Big) = W_p\Big(\mu, \sum_{i=1}^M \tilde\pi_i \delta_{\tilde y_i}\Big) \quad\text{and}\quad W_{\epsilon,p}\Big(\mu, \sum_{i=1}^N \pi_i \delta_{y_i}\Big) = W_{\epsilon,p}\Big(\mu, \sum_{i=1}^M \tilde\pi_i \delta_{\tilde y_i}\Big).$$

A.3 Semi-dual formulation for semi-discrete sliced divergences

Starting from the definition of the sliced distances in (2.5), the dual formulation (A.2) can be applied to each one-dimensional distance $W_p(P_{\theta\sharp}\mu, P_{\theta\sharp}\nu)$, $\theta \in S^{d-1}$, addressing the ties in the support points of the measures $P_{\theta\sharp}\nu$. Let $(Y,\pi) \in ((\mathbb{R}^d)^N \setminus \mathcal{D}_N) \times \operatorname{int}(\Delta_N)$. Then, for each $\theta \in S^{d-1}$, there exists an integer $M := M(\theta) \in [[1,N]]$ such that $\langle y_i - y_j, \theta\rangle \neq 0$ and $\pi_i > 0$ for all $i,j \in [[1,M(\theta)]]$, $i \neq j$. Up to a reordering of components, we can write $\pi_i$ as $\pi_i^\theta := \pi_i + \pi_j$ whenever $\langle y_i - y_j, \theta\rangle = 0$ for all $i,j \in [[1,N]]$, $i \neq j$. Following this construction, we obtain a probability measure $\nu_\theta := \sum_{i=1}^{M(\theta)} \pi_i^\theta \delta_{\langle y_i,\theta\rangle}$ with no ties in its support points. Then, the Kantorovich duality formulation (A.2) leads to the following expressions of the sliced distances.
$$SW_p^p(\mu,\nu) = \int_{S^{d-1}} \Bigg( \max_{w\in\mathbb{R}^{M(\theta)}} \int_{\mathbb{R}^d} \min_{i=1,\dots,M(\theta)} \{|\langle x-y_i,\theta\rangle|^p - w_i\}\, d\mu(x) + \sum_{i=1}^{M(\theta)} \pi_i w_i \Bigg) d\sigma(\theta) \tag{A.5}$$
and
$$\text{max-}SW_p^p(\mu,\nu) = \max_{\theta\in S^{d-1}} \max_{w\in\mathbb{R}^{M(\theta)}} \int_{\mathbb{R}^d} \min_{i=1,\dots,M(\theta)} \{|\langle x-y_i,\theta\rangle|^p - w_i\}\, d\mu(x) + \sum_{i=1}^{M(\theta)} \pi_i w_i, \tag{A.6}$$
where $\sigma$ denotes the uniform measure over the $d$-sphere $S^{d-1}$. Note that one can in fact replace $M(\theta)$ by $N$ in the maximization over the variable $w$ in (A.5), without taking into account any dependency on $\theta$. Indeed, we can write $M(\theta) := \operatorname{card}(\{i\in[[1,N]] : \langle y_i-y_j,\theta\rangle \neq 0,\ \forall j\neq i\})$. We thus have $M(\theta) < N$ if and only if there exist indices $i\neq j$ such that $\langle y_i-y_j,\theta\rangle = 0$, meaning that $\theta$ lies in
$$\bigcup_{\substack{i,j\in[[1,N]]\\ i\neq j}} \big\{\theta\in S^{d-1} : \langle y_i-y_j,\theta\rangle = 0\big\}.$$
Since $Y\in(\mathbb{R}^d)^N\setminus\mathcal{D}_N$, this is a finite union of hyperplanes intersected with the sphere, which is $\sigma$-negligible. It follows that $M(\theta) = N$ for almost every $\theta\in S^{d-1}$.

B Boundedness of the dual variables

In this section, we consider a measure $\mu\in\mathcal{M}_1(B_R)$, $R>0$, and a discrete measure $\nu\in\mathcal{M}^N_1(B_R)$. We prove that the dual variables associated with the optimal transport problem $D(\mu,\nu)$ for the Wasserstein distances $D=W_p^p$ and the Sinkhorn divergences $D=W^p_{\epsilon,p}$ are bounded.

Proposition B.1. Let $\mu\in\mathcal{M}_1(B_R)$ for some $R>0$ and $D=W_p^p$ or $W^p_{\epsilon,p}$. Then for any $(Y,\pi)\in(B_R^N\setminus\mathcal{D}_N)\times\operatorname{int}(\Delta_N)$, we have
$$W_p^p\Big(\mu,\sum_{i=1}^N\pi_i\delta_{y_i}\Big) = \max_{\substack{w\in\mathbb{R}^N\\ |w_i|\leq 2(2R)^p}} \int_{\mathbb{R}^d} \min_{i=1,\dots,N}\{\|x-y_i\|^p-w_i\}\,d\mu(x) + \sum_{i=1}^N\pi_i w_i,$$
and
$$W^p_{\epsilon,p}\Big(\mu,\sum_{i=1}^N\pi_i\delta_{y_i}\Big) = \max_{\substack{w\in\mathbb{R}^N\\ |w_i|\leq 4p(2R)^p}} -\epsilon\int_{\mathbb{R}^d}\log\Bigg(\sum_{i=1}^N\pi_i\exp\Big(\frac{w_i-\|x-y_i\|^p}{\epsilon}\Big)\Bigg)d\mu(x) + \sum_{i=1}^N\pi_i w_i - \epsilon.$$

Remark B.2.
In all generality, the dual variables $w\in\mathbb{R}^N$ are not bounded, since adding a constant $c\in\mathbb{R}$ to $w$ yields another valid solution. Therefore, to fix a dual solution, we set one coordinate of $w$ to $0$; under this constraint the dual variable $w$ remains bounded uniformly in $Y$ and $\pi$.

Proof. In the following, we prove both cases independently.

Case $D=W_p^p$. Let $(Y,\pi)\in(B_R^N\setminus\mathcal{D}_N)\times\operatorname{int}(\Delta_N)$; then by Kantorovich duality (A.2) between $\mu$ and the discrete measure $\sum_{i=1}^N\pi_i\delta_{y_i}$, we have
$$W_p^p\Big(\mu,\sum_{i=1}^N\pi_i\delta_{y_i}\Big) = \max_{w\in\mathbb{R}^N} \int_{\mathbb{R}^d}\min_{i=1,\dots,N}\{\|x-y_i\|^p-w_i\}\,d\mu(x) + \sum_{i=1}^N\pi_i w_i. \tag{B.1}$$
As explained in Remark B.2, we may select dual variables $w\in\mathbb{R}^N$ optimal in (B.1) and verifying $w_N = 0$.
From [50, Proposition 5.11] and the subsequent [50, Remark 1.13], we may also select a c-concave function $w^c$ which verifies, for all $i\in[[1,N]]$, $w_i = (w^c(y_i))^c$. We then have
$$|w_i - w_N| = |(w^c(y_i))^c - (w^c(y_N))^c| = \Big| \min_{x\in\operatorname{Supp}(\mu)}\{\|x-y_i\|^p - w^c(x)\} - \min_{x\in\operatorname{Supp}(\mu)}\{\|x-y_N\|^p - w^c(x)\} \Big| \leq \max_{x\in B_R} \big| \|x-y_i\|^p - \|x-y_N\|^p \big| \leq 2(2R)^p,$$
where the first inequality uses Lemma D.2. Using that $w_N = 0$, we get the result, namely that the dual variables for the p-Wasserstein distances are bounded, with $|w_i| \leq 2(2R)^p$.

Case $D=W^p_{\epsilon,p}$. Let $(Y,\pi)\in(B_R^N\setminus\mathcal{D}_N)\times\operatorname{int}(\Delta_N)$. Let also $u\in\mathcal{C}_b(B_R)$ be an optimal solution of the dual formulation of the entropic p-Wasserstein distance between $\mu$ and $\sum_{i=1}^N\pi_i\delta_{y_i}$ given in (A.3). By [24, Proposition 1], $u$ is $\kappa$-Lipschitz, where $\kappa$ is the Lipschitz constant of $(a,b)\in B_R\times B_R \mapsto \|a-b\|^p$. Then, since $\operatorname{Supp}(\mu)\subset B_R$, we have by Lemma D.1 that $\kappa$ is upper bounded by $p\sqrt{2}(2R)^{p-1}$. Let $w$ be a solution of (A.4), for which we can suppose $w_N = 0$ (Remark B.2). As in [25], one can show with a first-order condition on (A.3) that for all $y$ a coordinate of $Y$:
$$w(y) = -\epsilon \log\left( \int_{\mathbb{R}^d} \exp\left( \frac{u(x)-\|x-y\|^p}{\epsilon} \right) d\mu(x) \right).$$
We thus have, for any $y_1, y_2$ two distinct points in the point cloud $Y$,
$$|w(y_1)-w(y_2)| = \Bigg|\epsilon\log\frac{\int_{\mathbb{R}^d}\exp\big(\frac{u(x)-\|x-y_1\|^p}{\epsilon}\big)\,d\mu(x)}{\int_{\mathbb{R}^d}\exp\big(\frac{u(x)-\|x-y_2\|^p}{\epsilon}\big)\,d\mu(x)}\Bigg| \leq \Bigg|\epsilon\log\frac{\int_{\mathbb{R}^d}\exp\big(\frac{u(x_1)-\|x_1-y_1\|^p}{\epsilon}\big)\,d\mu(x)}{\int_{\mathbb{R}^d}\exp\big(\frac{u(x_2)-\|x_2-y_2\|^p}{\epsilon}\big)\,d\mu(x)}\Bigg|$$
$$= \big|u(x_1)-\|x_1-y_2\|^p - \big(u(x_2)-\|x_2-y_1\|^p\big)\big| \leq |u(x_1)-u(x_2)| + \big|\|x_1-y_2\|^p - \|x_2-y_1\|^p\big|$$
$$\leq p\sqrt{2}(2R)^{p-1}\|x_1-x_2\| + \|x_1-y_2\|^p + \|x_2-y_1\|^p \tag{B.2}$$
$$\leq p\sqrt{2}(2R)^p + 2(2R)^p \leq 4p(2R)^p,$$
where $x_1$ and $x_2$ denote respectively a maximizer of $x\mapsto\exp\big(\frac{u(x)-\|x-y_1\|^p}{\epsilon}\big)$ and a minimizer of $x\mapsto\exp\big(\frac{u(x)-\|x-y_2\|^p}{\epsilon}\big)$ over $B_R$. Using that $w(y_N)=0$, we get that for all $i\in[[1,N]]$, $|w(y_i)| \leq 4p(2R)^p$.

C Proofs of the main theorems

In this section, we prove the main result in Theorem 4.1, which is the sample complexity of optimal transport barycenters with sparse support. Throughout the section, $P^\ell_n$ for $\ell\in[[1,L]]$ denotes the empirical process
$$P^\ell_n : f \mapsto P^\ell_n f := \frac{1}{n}\sum_{i=1}^n f(X^\ell_i)$$
for $n$ i.i.d. random variables $X^\ell_1,\dots,X^\ell_n$ of law $\mu^\ell$. For any subset $\mathcal{F}$ of a normed space $(\Theta, \|\cdot\|)$ and $\tau>0$, we denote by $N(\tau,\mathcal{F},\|\cdot\|)$ the $\tau$-covering number of $\mathcal{F}$ with respect to $\|\cdot\|$. This quantity represents the minimum number of balls of radius $\tau$, with respect to the norm $\|\cdot\|$, needed to fully cover $\mathcal{F}$. We refer the reader to [51, Definition 2.2] for more details on the covering number.
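As a toy illustration of covering numbers (our own sketch, not from the paper), a greedy net on a finite point set yields an upper bound on $N(\tau,\cdot,\|\cdot\|)$: repeatedly pick an uncovered point as a new center until every point lies within $\tau$ of some center.

```python
import numpy as np

def greedy_covering_size(points, tau):
    # Greedy tau-net of a finite set: the number of centers returned
    # upper-bounds the covering number N(tau, points, ||.||), since the
    # selected centers are pairwise more than tau apart.
    centers = []
    remaining = list(range(len(points)))
    while remaining:
        c = remaining[0]
        centers.append(c)
        remaining = [i for i in remaining
                     if np.linalg.norm(points[i] - points[c]) > tau]
    return len(centers)
```

On a fine grid of $[0,1]$, the greedy net uses on the order of $1/\tau$ centers, matching the $(1+\mathrm{const}/\tau)$ scaling of the ball covering bounds used in Lemma C.1 below.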
Finally, we also denote by $\|\cdot\|_{L^2(\mu)}$ the $L^2$ norm with respect to the measure $\mu$, and by $B_d(0,R)$ the $d$-dimensional closed ball of radius $R$. The proofs for Wasserstein barycenters (Section C.1), sliced Wasserstein barycenters (Section C.2) and entropy regularized barycenters (Section C.3) all follow the same approach: we first derive the covering number for a class of functions associated with the dual formulation of the divergences $D$, and then apply tools from empirical process theory to obtain a generalization error bound for the barycenters.

C.1 Wasserstein barycenter

Lemma C.1. Let
$$\mathcal{F}_p = \Big\{ \big\{ f_{Y,w} : x\in B_R \mapsto \min_{i=1,\dots,N} \|x-y_i\|^p - w_i \big\},\; Y\in B_R^N,\; w\in B_1(0,2(2R)^p)^N \Big\}, \tag{C.1}$$
then for all $\tau>0$ and $\mu\in\mathcal{M}_1(B_R)$:
$$N(\tau,\mathcal{F}_p,\|\cdot\|_{L^2(\mu)}) \leq \Big(1+\frac{4pR^p}{\tau}\Big)^{dN}\Big(1+\frac{8(2R)^p}{\tau}\Big)^{N}.$$
Moreover $\mathcal{F}_{p,\mathbb{Q}}$, the subset of functions of $\mathcal{F}_p$ whose parameters $(Y,w)$ are rational, is dense in $\mathcal{F}_p$.

Proof. Let $(Y,w),(Z,v)\in B_R^N\times[-2(2R)^p,2(2R)^p]^N$. Then for all $x\in B_R$,
$$|f_{Y,w}(x)-f_{Z,v}(x)| = \Big| \min_{i=1,\dots,N} \|x-y_i\|^p - w_i - \min_{i=1,\dots,N} \|x-z_i\|^p - v_i \Big| \leq \max_{i=1,\dots,N} \big| \|x-y_i\|^p - w_i - (\|x-z_i\|^p - v_i) \big| \tag{C.2}$$
$$\leq \max_{i=1,\dots,N} \big\{ |\|x-y_i\|^p - \|x-z_i\|^p| + |w_i-v_i| \big\} \leq pR^{p-1}\max_{i=1,\dots,N}\|y_i-z_i\| + \max_{i=1,\dots,N}|w_i-v_i|, \tag{C.3}$$
where inequality (C.2) is due to Lemma D.2 and inequality (C.3) is given by Lemma D.1. Let us denote $T = pR^{p-1}$, and let $\tau>0$. We then consider $z_1,\dots,z_{n(\tau)}$ a $\frac{\tau}{2T}$-covering of $B_R$ and $v_1,\dots,v_{m(\tau)}$ a $\frac{\tau}{2}$-covering of $B_1(0,2(2R)^p)$. For any vectors of indices $i=(i_1,\dots,i_N)\in[[1,n(\tau)]]^N$ and $k=(k_1,\dots,k_N)\in[[1,m(\tau)]]^N$, we define for all $x\in B_R$ the function
$$f_{i,k}(x) = \min_{j=1,\dots,N} \|x-z_{i_j}\|^p - v_{k_j}.$$
Let $f := f_{Y,w}\in\mathcal{F}_p$; then by (C.3) there exist $i,k$ such that
$$|f(x)-f_{i,k}(x)| \leq pR^{p-1}\max_{j=1,\dots,N}\|y_j-z_{i_j}\| + \max_{j=1,\dots,N}|w_j-v_{k_j}| \leq T\frac{\tau}{2T} + \frac{\tau}{2} = \tau,$$
so that $\|f-f_{i,k}\|_{L^2(\mu)}\leq\tau$. Since there are $n(\tau)^N\times m(\tau)^N$ possibilities for such functions $f_{i,k}$, it follows from the covering construction that:
$$N(\tau,\mathcal{F}_p,\|\cdot\|_{L^2(\mu)}) \leq N\Big(\frac{\tau}{2T},B_d(0,R),\|\cdot\|\Big)^N N\Big(\frac{\tau}{2},B_1(0,2(2R)^p),|\cdot|\Big)^N \leq \Big(1+\frac{4RT}{\tau}\Big)^{dN}\Big(1+\frac{8(2R)^p}{\tau}\Big)^N = \Big(1+\frac{4pR^p}{\tau}\Big)^{dN}\Big(1+\frac{8(2R)^p}{\tau}\Big)^N,$$
where the second inequality is due to [38, Lemma 4.14]. Finally, $\mathcal{F}_{p,\mathbb{Q}}$ is dense in $\mathcal{F}_p$ by a continuity argument and inequality (C.3).

Theorem C.2. Let $\mu^1,\dots,\mu^L\in\mathcal{M}_1(B_R)$ for some $R>0$. For some integer $n$, let $\mu^1_n,\dots,\mu^L_n$ be empirical measures supported over $n$ i.i.d. random variables of respective law $\mu^\ell$ for all $\ell\in[[1,L]]$. Then for $D=W_p^p$ and some integer $N\geq1$, we have
$$\mathbb{E}\left[\sup_{(Y,\pi)\in B_R^N\times\Delta_N}\left| F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big) - F_D\Big(\mu^1_n,\dots,\mu^L_n,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)\right|\right] \leq C_{p,R}\sqrt{\frac{N(d+1)}{n}},$$
where $C_{p,R} = 8\sqrt{2}\int_0^{3(2R)^p}\sqrt{\log\big(2+\frac{32p(2R)^p}{\tau}\big)}\,d\tau$.

Proof. Let $(Y,\pi)\in B_R^N\times\Delta_N$, and let $M\in[[1,N]]$ and $(\tilde Y,\tilde\pi)\in(B_R^M\setminus\mathcal{D}_M)\times\operatorname{int}(\Delta_M)$ be as in Proposition A.1. For all $\ell\in[[1,L]]$, let $X^\ell_1,\dots,X^\ell_n$ be $n$ i.i.d. random variables of law $\mu^\ell$.
Using Proposition B.1 and denoting $K := 2(2R)^p$, we therefore have
$$\Big|F_D\big(\mu^1_n,\dots,\mu^L_n,\textstyle\sum_{i=1}^N\pi_i\delta_{y_i}\big) - F_D\big(\mu^1,\dots,\mu^L,\textstyle\sum_{i=1}^N\pi_i\delta_{y_i}\big)\Big|$$
$$= \Bigg|\frac{1}{L}\sum_{\ell=1}^L\Bigg(\max_{\substack{\tilde w\in\mathbb{R}^M\\|\tilde w_i|\leq K}}\frac{1}{n}\sum_{k=1}^n\min_{i=1,\dots,M}\{\|X^\ell_k-\tilde y_i\|^p-\tilde w_i\}+\sum_{i=1}^M\tilde\pi_i\tilde w_i - \max_{\substack{\tilde w\in\mathbb{R}^M\\|\tilde w_i|\leq K}}\int_{\mathbb{R}^d}\min_{i=1,\dots,M}\{\|x-\tilde y_i\|^p-\tilde w_i\}\,d\mu^\ell(x)+\sum_{i=1}^M\tilde\pi_i\tilde w_i\Bigg)\Bigg|$$
$$\leq \frac{1}{L}\sum_{\ell=1}^L\max_{\substack{\tilde w\in\mathbb{R}^M\\|\tilde w_i|\leq K}}\Bigg|\frac{1}{n}\sum_{k=1}^n\min_{i=1,\dots,M}\{\|X^\ell_k-\tilde y_i\|^p-\tilde w_i\}-\int_{\mathbb{R}^d}\min_{i=1,\dots,M}\{\|x-\tilde y_i\|^p-\tilde w_i\}\,d\mu^\ell(x)\Bigg|$$
$$= \frac{1}{L}\sum_{\ell=1}^L\max_{\substack{w\in\mathbb{R}^N,\,|w_i|\leq K\\ w_i=w_j\text{ if }y_i=y_j}}\Bigg|\frac{1}{n}\sum_{k=1}^n\min_{i=1,\dots,N}\{\|X^\ell_k-y_i\|^p-w_i\}-\int_{\mathbb{R}^d}\min_{i=1,\dots,N}\{\|x-y_i\|^p-w_i\}\,d\mu^\ell(x)\Bigg|,$$
where the inequality holds thanks to Lemma D.2. The last equality is due to the fact that for all $Y\in B_R^N$ and $x\in\mathbb{R}^d$, we have
$$\min_{k=1,\dots,M}\|x-y_k\|^p-w_k = \min_{k=1,\dots,N}\|x-y_k\|^p-w_k$$
if $w_i=w_j$ whenever $y_i=y_j$. Notice that the dependency on the vector of weights $\pi$ has disappeared. For any $(x,Y,w)\in B_R\times B_R^N\times[-K,K]^N$, let $f_{Y,w}(x)=\min_{i=1,\dots,N}\|x-y_i\|^p-w_i$.
We then have
$$\sup_{(Y,\pi)\in B_R^N\times\Delta_N}\Big|F_D\big(\mu^1_n,\dots,\mu^L_n,\textstyle\sum_{i=1}^N\pi_i\delta_{y_i}\big)-F_D\big(\mu^1,\dots,\mu^L,\textstyle\sum_{i=1}^N\pi_i\delta_{y_i}\big)\Big| \leq \frac{1}{L}\sum_{\ell=1}^L\sup_{(Y,w)\in B_R^N\times[-K,K]^N}\big|P^\ell_n(f_{Y,w})-\mathbb{E}_{X\sim\mu^\ell}[f_{Y,w}(X)]\big| = \frac{1}{L}\sum_{\ell=1}^L\sup_{f\in\mathcal{F}_p}|P^\ell_n(f)-\mathbb{E}_{X\sim\mu^\ell}[f(X)]|,$$
where $\mathcal{F}_p$ is defined in (C.1). Notice further that for any $\ell\in[[1,L]]$,
$$\sup_{f\in\mathcal{F}_p}P^\ell_n f^2 = \sup_{(Y,w)\in B_R^N\times[-K,K]^N}\frac{1}{n}\sum_{k=1}^n\Big(\min_{i=1,\dots,N}\|X^\ell_k-y_i\|^p-w_i\Big)^2 \leq \frac{1}{n}\sum_{k=1}^n\Big(\sup_{x,y\in B_R}\|x-y\|^p+\sup_{w\in[-K,K]}|w|\Big)^2 \leq ((2R)^p+2(2R)^p)^2 = (3(2R)^p)^2,$$
which gives $\sqrt{\sup_{f\in\mathcal{F}_p}P^\ell_n f^2}\leq 3(2R)^p$. By Lemma C.1, $\mathcal{F}_{p,\mathbb{Q}}$ is dense in $\mathcal{F}_p$, and we can then apply [27, Theorem 3.5.1] on empirical processes and covering numbers. Therefore, denoting $\omega := \sqrt{\sup_{f\in\mathcal{F}_p}P^\ell_n f^2}$, we get for any $\ell\in[[1,L]]$:
$$\mathbb{E}\Big[\sqrt{n}\sup_{f\in\mathcal{F}_p}|P^\ell_n(f)-\mathbb{E}_{X\sim\mu^\ell}[f(X)]|\Big] \leq 8\sqrt{2}\,\mathbb{E}\Big[\int_0^{\omega}\sqrt{\log\big(2N(\tau/2,\mathcal{F}_p,\|\cdot\|_{L^2(\mu^\ell_n)})\big)}\,d\tau\Big] \leq 8\sqrt{2}\,\mathbb{E}\Big[\int_0^{3(2R)^p}\sqrt{\log\big(2N(\tau/2,\mathcal{F}_p,\|\cdot\|_{L^2(\mu^\ell_n)})\big)}\,d\tau\Big].$$
From Lemma C.1 applied to the measure $\mu^\ell_n$, we get
$$\int_0^{3(2R)^p}\sqrt{\log\big(2N(\tau/2,\mathcal{F}_p,\|\cdot\|_{L^2(\mu^\ell_n)})\big)}\,d\tau \leq \int_0^{3(2R)^p}\sqrt{\log\Big(2\Big(1+\frac{8pR^p}{\tau}\Big)^{dN}\Big(1+\frac{16(2R)^p}{\tau}\Big)^{N}\Big)}\,d\tau \leq \int_0^{3(2R)^p}\sqrt{\log\Big(2\Big(1+\frac{16p(2R)^p}{\tau}\Big)^{N(d+1)}\Big)}\,d\tau$$
$$\leq \int_0^{3(2R)^p}\sqrt{\log\Big(\Big(2+\frac{32p(2R)^p}{\tau}\Big)^{N(d+1)}\Big)}\,d\tau = \sqrt{N(d+1)}\int_0^{3(2R)^p}\sqrt{\log\Big(2+\frac{32p(2R)^p}{\tau}\Big)}\,d\tau.$$
Finally, as this last term does not depend on $\ell$, we get
$$\mathbb{E}\Big[\frac{1}{L}\sum_{\ell=1}^L\sup_{f\in\mathcal{F}_p}|P^\ell_n(f)-\mathbb{E}_{X\sim\mu^\ell}[f(X)]|\Big] \leq C_{p,R}\sqrt{\frac{N(d+1)}{n}},$$
where $C_{p,R} = 8\sqrt{2}\int_0^{3(2R)^p}\sqrt{\log\big(2+\frac{32p(2R)^p}{\tau}\big)}\,d\tau$.

C.2 Sliced Wasserstein barycenter

Lemma C.3. Let
$$\mathcal{F}^s_p = \Big\{ \big\{ f_{\theta,Y,w} : x \mapsto \min_{i=1,\dots,N} |\langle x-y_i,\theta\rangle|^p - w_i \big\},\; Y\in B_R^N,\; w\in B_1(0,2(2R)^p)^N,\; \theta\in S^{d-1} \Big\},$$
then for all $\tau>0$ and $\mu\in\mathcal{M}_1(B_R)$:
$$N(\tau,\mathcal{F}^s_p,\|\cdot\|_{L^2(\mu)}) \leq \Big(1+\frac{12pR^p}{\tau}\Big)^{d}\Big(1+\frac{6pR^p}{\tau}\Big)^{dN}\Big(\frac{12(2R)^p}{\tau}\Big)^{N}.$$
Moreover $\mathcal{F}^s_{p,\mathbb{Q}}$, the subset of functions of $\mathcal{F}^s_p$ whose parameters $(Y,w,\theta)$ are rational, is dense in $\mathcal{F}^s_p$.

Proof. Let $K = 2(2R)^p$, $Y,Z\in B_R^N$, $w,v\in[-K,K]^N$ and $\theta,\varphi\in S^{d-1}$. Then
by Lemma D.1, we have for $x\in B_R$:
$$\big| |\langle x-y_i,\theta\rangle|^p - |\langle x-z_i,\varphi\rangle|^p \big| \leq pR^{p-1}\,|\langle x-y_i,\theta\rangle - \langle x-z_i,\varphi\rangle| = pR^{p-1}\,|\langle x,\theta-\varphi\rangle + \langle z_i,\varphi-\theta\rangle + \langle z_i,\theta\rangle - \langle y_i,\theta\rangle|$$
$$\leq pR^{p-1}\big(|\langle x,\theta-\varphi\rangle| + |\langle z_i,\varphi-\theta\rangle| + |\langle z_i-y_i,\theta\rangle|\big) \leq pR^{p-1}\big(2R\|\theta-\varphi\| + \|z_i-y_i\|\big).$$
Therefore, thanks to Lemma D.2, we get
$$|f_{\theta,Y,w}(x)-f_{\varphi,Z,v}(x)| \leq \max_{i=1,\dots,N}\big| |\langle x-y_i,\theta\rangle|^p - |\langle x-z_i,\varphi\rangle|^p \big| + |w_i-v_i| \leq 2pR^p\|\theta-\varphi\| + pR^{p-1}\max_{i=1,\dots,N}\|z_i-y_i\| + \max_{i=1,\dots,N}|w_i-v_i|. \tag{C.4}$$
Then, using the same argument as in the proof of Lemma C.1, we can upper bound the covering number $N(\tau,\mathcal{F}^s_p,\|\cdot\|_{L^2(\mu)})$ by the following product of covering numbers:
$$N(\tau,\mathcal{F}^s_p,\|\cdot\|_{L^2(\mu)}) \leq N\Big(\frac{\tau}{6pR^p},S^{d-1},\|\cdot\|\Big)\,N\Big(\frac{\tau}{3pR^{p-1}},B_d(0,R),\|\cdot\|\Big)^N\,N\Big(\frac{\tau}{3},B_1(0,2(2R)^p),|\cdot|\Big)^N \leq \Big(1+\frac{12pR^p}{\tau}\Big)^{d}\Big(1+\frac{6pR^p}{\tau}\Big)^{dN}\Big(\frac{12(2R)^p}{\tau}\Big)^{N},$$
which concludes the proof of the upper bound on the covering number. Finally, $\mathcal{F}^s_{p,\mathbb{Q}}$ is dense in $\mathcal{F}^s_p$ due to inequality (C.4).

Theorem C.4. Let $\mu^1,\dots,\mu^L\in\mathcal{M}_1(B_R)$ for some $R>0$. For some integer $n$, let $\mu^1_n,\dots,\mu^L_n$ be empirical measures supported over $n$ i.i.d. random variables of respective law $\mu^\ell$, for all $\ell\in[[1,L]]$.
Then for $D=SW_p^p$ or max-$SW_p^p$ and for some integer $N\geq1$, we have
$$\mathbb{E}\left[\sup_{(Y,\pi)\in B_R^N\times\Delta_N}\left| F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big) - F_D\Big(\mu^1_n,\dots,\mu^L_n,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)\right|\right] \leq C_{p,R}\sqrt{\frac{N(d+1)+d}{n}},$$
where $C_{p,R} = 8\sqrt{2}\int_0^{3(2R)^p}\sqrt{\log\big(2+\frac{48p(2R)^p}{\tau}\big)}\,d\tau$.

Proof. For the sake of brevity, we only prove the result for $D=SW_p^p$, as the max-$SW_p^p$ case is very similar. Let $M\in[[1,N]]$ and $(\tilde Y,\tilde\pi)\in(B_R^M\setminus\mathcal{D}_M)\times\operatorname{int}(\Delta_M)$ be as in Proposition A.1. For all $\ell\in[[1,L]]$, let $X^\ell_1,\dots,X^\ell_n$ be i.i.d. random variables of law $\mu^\ell$. With a slight abuse of notation and following the construction in Section A.3, we define the function $M : \theta \mapsto \operatorname{Card}(\{i\in[[1,M]] : \langle y_i-y_j,\theta\rangle\neq0,\ \forall j\neq i\})$. Then, using Proposition B.1 and defining $K := 2(2R)^p$, we have:
$$\Big|F_D\big(\mu^1_n,\dots,\mu^L_n,\textstyle\sum_{i=1}^N\tilde\pi_i\delta_{\tilde y_i}\big) - F_D\big(\mu^1,\dots,\mu^L,\textstyle\sum_{i=1}^N\tilde\pi_i\delta_{\tilde y_i}\big)\Big|$$
$$= \Bigg|\frac{1}{L}\sum_{\ell=1}^L\Bigg(\int_{S^{d-1}}\max_{\substack{w\in\mathbb{R}^{M(\theta)}\\|w_i|\leq K}}\frac{1}{n}\sum_{k=1}^n\min_{i=1,\dots,M(\theta)}\{|\langle X^\ell_k-\tilde y_i,\theta\rangle|^p-w_i\}+\sum_{i=1}^{M(\theta)}\tilde\pi_i w_i\,d\sigma(\theta) - \int_{S^{d-1}}\max_{\substack{w\in\mathbb{R}^{M(\theta)}\\|w_i|\leq K}}\int_{\mathbb{R}^d}\min_{i=1,\dots,M(\theta)}\{|\langle x-\tilde y_i,\theta\rangle|^p-w_i\}\,d\mu^\ell(x)+\sum_{i=1}^{M(\theta)}\tilde\pi_i w_i\,d\sigma(\theta)\Bigg)\Bigg|$$
$$\leq \frac{1}{L}\sum_{\ell=1}^L\max_{\theta\in S^{d-1}}\max_{\substack{w\in\mathbb{R}^{M(\theta)}\\|w_i|\leq K}}\Bigg|\frac{1}{n}\sum_{k=1}^n\min_{i=1,\dots,M(\theta)}\{|\langle X^\ell_k-y_i,\theta\rangle|^p-w_i\}-\int_{\mathbb{R}^d}\min_{i=1,\dots,M(\theta)}\{|\langle x-y_i,\theta\rangle|^p-w_i\}\,d\mu^\ell(x)\Bigg|$$
$$= \frac{1}{L}\sum_{\ell=1}^L\max_{\theta\in S^{d-1}}\max_{\substack{w\in\mathbb{R}^N,\,|w_i|\leq K\\ w_i=w_j\text{ if }\langle y_i-y_j,\theta\rangle=0}}\Bigg|\frac{1}{n}\sum_{k=1}^n\min_{i=1,\dots,N}\{|\langle X^\ell_k-y_i,\theta\rangle|^p-w_i\}-\int_{\mathbb{R}^d}\min_{i=1,\dots,N}\{|\langle x-y_i,\theta\rangle|^p-w_i\}\,d\mu^\ell(x)\Bigg|,$$
where the first inequality uses Lemma D.2 for the maximum with respect to $w$, and then upper bounds the integral with respect to $\theta$ by its maximum value. The last equality is due to the fact that for all $\theta\in S^{d-1}$, $Y\in(\mathbb{R}^d)^N$ and $x\in\mathbb{R}^d$, we have
$$\min_{k=1,\dots,M(\theta)}|\langle x-y_k,\theta\rangle|^p-w_k = \min_{k=1,\dots,N}|\langle x-y_k,\theta\rangle|^p-w_k$$
whenever $w_i=w_j$ if $\langle y_i-y_j,\theta\rangle=0$. For all $(x,\theta,Y,w)\in B_R\times S^{d-1}\times B_R^N\times[-K,K]^N$, let $f_{\theta,Y,w}(x)=\min_{i=1,\dots,N}|\langle x-y_i,\theta\rangle|^p-w_i$. We then have
$$\sup_{(Y,\pi)\in B_R^N\times\Delta_N}\Big|F_p\big(\mu^1_n,\dots,\mu^L_n,\textstyle\sum_{i=1}^N\pi_i\delta_{y_i}\big)-F_p\big(\mu^1,\dots,\mu^L,\textstyle\sum_{i=1}^N\pi_i\delta_{y_i}\big)\Big| \leq \frac{1}{L}\sum_{\ell=1}^L\sup_{(\theta,Y,w)\in S^{d-1}\times B_R^N\times[-K,K]^N}\big|P^\ell_n(f_{\theta,Y,w})-\mathbb{E}_{X\sim\mu^\ell}[f_{\theta,Y,w}(X)]\big| = \frac{1}{L}\sum_{\ell=1}^L\sup_{f\in\mathcal{F}^s_p}|P^\ell_n(f)-\mathbb{E}_{X\sim\mu^\ell}[f(X)]|,$$
where $\mathcal{F}^s_p$ is defined in Lemma C.3.
Moreover, we have for any $\ell\in[[1,L]]$,
\[
\begin{aligned}
\sup_{f\in\mathcal{F}^s_p} P^\ell_n f^2
&= \sup_{(\theta,Y,w)\in S^{d-1}\times B_R^N\times[-K,K]^N} \frac1n\sum_{k=1}^n \Big(\min_{i=1,\dots,N}|\langle X^\ell_k-y_i,\theta\rangle|^p - w_i\Big)^2 \\
&\le \frac1n\sum_{k=1}^n \Big(\sup_{x,y\in B_R,\,\theta\in S^{d-1}} \|x-y\|^p\|\theta\|^p + \sup_{w\in[-K,K]} w\Big)^2
\le \big((2R)^p + 2(2R)^p\big)^2 = \big(3(2R)^p\big)^2.
\end{aligned}
\]
With the same argument as in the proof of Theorem C.2, this quantity can be upper bounded in expectation using [27, Theorem 3.5.1]:
\[
\mathbb{E}\Big[\sqrt n \sup_{f\in\mathcal{F}^s_p} |P^\ell_n(f) - \mathbb{E}_{X\sim\mu^\ell}[f(X)]|\Big]
\le 8\sqrt2\, \mathbb{E}\bigg[\int_0^{3(2R)^p} \sqrt{\log\big(2N(\tau/2,\mathcal{F}^s_p,\|\cdot\|_{L^2(\mu^\ell_n)})\big)}\,d\tau\bigg].
\]
From Lemma C.3 we thus have
\[
\begin{aligned}
\int_0^{3(2R)^p} \sqrt{\log\big(2N(\tau/2,\mathcal{F}^s_p,\|\cdot\|_{L^2(\mu^\ell_n)})\big)}\,d\tau
&\le \int_0^{3(2R)^p} \sqrt{\log\bigg(2\Big(1+\tfrac{24pR^p}{\tau}\Big)^d\Big(1+\tfrac{12pR^p}{\tau}\Big)^{dN}\Big(\tfrac{24(2R)^p}{\tau}\Big)^N\bigg)}\,d\tau \\
&\le \int_0^{3(2R)^p} \sqrt{\log\bigg(2\Big(1+\tfrac{24p(2R)^p}{\tau}\Big)^{N+dN+d}\bigg)}\,d\tau \\
&\le \sqrt{N(d+1)+d}\int_0^{3(2R)^p} \sqrt{\log\Big(2+\tfrac{48p(2R)^p}{\tau}\Big)}\,d\tau.
\end{aligned}
\]
Finally, as this last term does not depend on $\ell$, we get
\[
\mathbb{E}\bigg[\frac1L\sum_{\ell=1}^L \sup_{f\in\mathcal{F}^s_p} |P^\ell_n(f) - \mathbb{E}_{X\sim\mu^\ell}[f(X)]|\bigg] \le C_{p,R}\sqrt{\frac{N(d+1)+d}{n}},
\]
where $C_{p,R} = 8\sqrt2\int_0^{3(2R)^p} \sqrt{\log\big(2+\tfrac{48p(2R)^p}{\tau}\big)}\,d\tau$.

C.3 Entropy regularized barycenter

Lemma C.5. Let
\[
\mathcal{F}^\epsilon_p = \bigg\{ g^\epsilon_{Y,\pi,w} : x\in B_R \mapsto -\epsilon\log\bigg(\sum_{i=1}^N \pi_i \exp\Big(\frac{w_i-\|x-y_i\|^p}{\epsilon}\Big)\bigg) \,\bigg|\, Y\in B_R^N,\ \pi\in\Delta_N,\ w\in B_1(0,4p(2R)^p)^N \bigg\};
\]
then for all $\tau>0$ and $\mu\in\mathcal{M}_1(B_R)$, we have
\[
N(\tau,\mathcal{F}^\epsilon_p,\|\cdot\|_{L^2(\mu)}) \le \Big(1+\tfrac{4pR^p}{\tau}\Big)^{dN}\Big(\tfrac{16p(2R)^p}{\tau}\Big)^N.
\]
Moreover $\mathcal{F}^\epsilon_{p,\mathbb{Q}}$, the subset of functions of $\mathcal{F}^\epsilon_p$ whose parameters $(Y,\pi,w)$ are rationals, is dense in $\mathcal{F}^\epsilon_p$.

Proof. Let $(Y,\pi,w),(Z,\gamma,v)\in B_R^N\times\Delta_N\times B_1(0,4p(2R)^p)^N$. Then for all $x\in B_R$, we have
\[
\begin{aligned}
|g^\epsilon_{Y,\pi,w}(x)-g^\epsilon_{Z,\gamma,v}(x)|
&= \epsilon\left|\log\frac{\sum_{i=1}^N \pi_i\exp\big(\frac{w_i-\|x-y_i\|^p}{\epsilon}\big)}{\sum_{i=1}^N \gamma_i\exp\big(\frac{v_i-\|x-z_i\|^p}{\epsilon}\big)}\right|
\le \epsilon\left|\log\frac{\max_{i=1,\dots,N}\exp\big(\frac{w_i-\|x-y_i\|^p}{\epsilon}\big)\sum_{i}\pi_i}{\min_{i=1,\dots,N}\exp\big(\frac{v_i-\|x-z_i\|^p}{\epsilon}\big)\sum_{i}\gamma_i}\right| \\
&= \Big|\max_{i=1,\dots,N}\big(w_i-\|x-y_i\|^p\big) - \min_{i=1,\dots,N}\big(v_i-\|x-z_i\|^p\big)\Big|
\le |w_{\bar\imath}-v_{\bar\imath}| + \big|\|x-z_{\bar\imath}\|^p-\|x-y_{\bar\imath}\|^p\big|,
\end{aligned}
\]
where $\bar\imath$ (resp. $\underline\imath$) is an index at which the maximum of $i\mapsto w_i-\|x-y_i\|^p$ (resp. the minimum of $i\mapsto v_i-\|x-z_i\|^p$) is attained. By Lemma D.1, we thus have
\[
|g^\epsilon_{Y,\pi,w}(x)-g^\epsilon_{Z,\gamma,v}(x)| \le |w_{\bar\imath}-v_{\bar\imath}| + pR^{p-1}\|y_{\bar\imath}-z_{\bar\imath}\|. \tag{C.5}
\]
Notice that the right-hand side of the above inequality depends on neither $\pi$ nor $\epsilon$.
Let $T=pR^{p-1}$; with the same proof argument as in Theorem C.2, and by boundedness of the entropic dual variables (Proposition B.1), we thus get
\[
N(\tau,\mathcal{F}^\epsilon_p,\|\cdot\|_{L^2(\mu)})
\le N\Big(\tfrac{\tau}{2T}, B_d(0,R), \|\cdot\|\Big)^N N\Big(\tfrac{\tau}{2}, B_1(0,4p(2R)^p), |\cdot|\Big)^N
\le \Big(1+\tfrac{4pR^p}{\tau}\Big)^{dN}\Big(\tfrac{16p(2R)^p}{\tau}\Big)^N,
\]
which concludes the proof of the upper bound of the covering number. Finally, $\mathcal{F}^\epsilon_{p,\mathbb{Q}}$ is dense in $\mathcal{F}^\epsilon_p$ due to inequality (C.5).

Theorem C.6. Let $\mu^1,\dots,\mu^L\in\mathcal{M}_1(B_R)$ for some $R>0$. For some integer $n$, let $\mu^1_n,\dots,\mu^L_n$ be empirical measures supported over $n$ i.i.d. random variables of respective law $\mu^\ell$, for all $\ell\in[[1,L]]$. Let $D=W^p_{\epsilon,p}$ with $\epsilon>0$; then for some integer $N\ge1$:
\[
\mathbb{E}\Bigg[\sup_{(Y,\pi)\in B_R^N\times\Delta_N}\bigg|F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)-F_D\Big(\mu^1_n,\dots,\mu^L_n,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)\bigg|\Bigg] \le C_{p,R}\sqrt{\frac{N(d+1)}{n}},
\]
where $C_{p,R}=8\sqrt2\int_0^{(4p+1)(2R)^p}\sqrt{\log\big(2+\tfrac{64p(2R)^p}{\tau}\big)}\,d\tau$.

Proof. We fix $\epsilon>0$. Let $(Y,\pi)\in B_R^N\times\Delta_N$ and let $M\in[[1,N]]$ and $(\tilde Y,\tilde\pi)\in(B_R^M\setminus D_M)\times\operatorname{int}(\Delta_M)$ be as in Proposition A.1. For any triplet $(Y,\pi,w)\in B_R^N\times\Delta_N\times\mathbb{R}^N$ and $x\in B_R$, we define the function
\[
g^\epsilon_{Y,\pi,w}(x) := -\epsilon\log\bigg(\sum_{j=1}^N\pi_j\exp\Big(\frac{w_j-\|x-y_j\|^p}{\epsilon}\Big)\bigg).
\]
For all $\ell\in[[1,L]]$, let $X^\ell_1,\dots,X^\ell_n$ be i.i.d. random variables of law $\mu^\ell$. Using Proposition B.1 and the same argument as in the proof of Theorem C.2, we get for $K:=4p(2R)^p$
\[
\begin{aligned}
\sup_{(Y,\pi)\in B_R^N\times\Delta_N}\bigg|F_D\Big(\mu^1_n,\dots,\mu^L_n,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)-F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)\bigg|
&\le \frac1L\sum_{\ell=1}^L\sup_{(Y,\pi,w)\in B_R^N\times\Delta_N\times[-K,K]^N}\big|P^\ell_n g^\epsilon_{Y,\pi,w}-\mathbb{E}_{X\sim\mu^\ell}\big[g^\epsilon_{Y,\pi,w}(X)\big]\big| \\
&= \frac1L\sum_{\ell=1}^L\sup_{g\in\mathcal{F}^\epsilon_p}\big|P^\ell_n(g)-\mathbb{E}_{X\sim\mu^\ell}[g(X)]\big|,
\end{aligned}
\]
where $\mathcal{F}^\epsilon_p$ is defined in Lemma C.5. Moreover, for all $w\in B_1(0,4p(2R)^p)^N$, we have
\[
\begin{aligned}
-\frac{4p(2R)^p+(2R)^p}{\epsilon} \le \frac{w_j-\|x-y_j\|^p}{\epsilon} \le \frac{4p(2R)^p}{\epsilon}
&\iff -\frac{(4p+1)(2R)^p}{\epsilon} \le \log\sum_{j=1}^N\pi_j\exp\Big(\frac{w_j-\|x-y_j\|^p}{\epsilon}\Big) \le \frac{4p(2R)^p}{\epsilon} \\
&\iff -4p(2R)^p \le g^\epsilon_{Y,\pi,w}(x) \le (4p+1)(2R)^p.
\end{aligned}
\tag{C.6}
\]
Then, by Lemma C.5, the bound obtained in (C.6), and the same argument as in Theorem C.2, we obtain by [27, Theorem 3.5.1] the following bound:
\[
\begin{aligned}
\mathbb{E}\Big[\sqrt n\sup_{g\in\mathcal{F}^\epsilon_p}|P^\ell_n(g)-\mathbb{E}_{X\sim\mu^\ell}[g(X)]|\Big]
&\le 8\sqrt2\,\mathbb{E}\bigg[\int_0^{(4p+1)(2R)^p}\sqrt{\log\big(2N(\tau/2,\mathcal{F}^\epsilon_p,\|\cdot\|_{L^2(\mu^\ell_n)})\big)}\,d\tau\bigg] \\
&\le 8\sqrt2\int_0^{(4p+1)(2R)^p}\sqrt{\log\bigg(2\Big(1+\tfrac{8pR^p}{\tau}\Big)^{dN}\Big(\tfrac{32p(2R)^p}{\tau}\Big)^N\bigg)}\,d\tau \\
&\le 8\sqrt2\int_0^{(4p+1)(2R)^p}\sqrt{\log\bigg(2\Big(1+\tfrac{32p(2R)^p}{\tau}\Big)^{N(d+1)}\bigg)}\,d\tau \\
&\le 8\sqrt2\sqrt{N(d+1)}\int_0^{(4p+1)(2R)^p}\sqrt{\log\Big(2+\tfrac{64p(2R)^p}{\tau}\Big)}\,d\tau,
\end{aligned}
\]
so that finally
\[
\mathbb{E}\bigg[\frac1L\sum_{\ell=1}^L\sup_{g\in\mathcal{F}^\epsilon_p}|P^\ell_n(g)-\mathbb{E}_{X\sim\mu^\ell}[g(X)]|\bigg]\le C_{p,R}\sqrt{\frac{N(d+1)}{n}},
\]
where $C_{p,R}=8\sqrt2\int_0^{(4p+1)(2R)^p}\sqrt{\log\big(2+\tfrac{64p(2R)^p}{\tau}\big)}\,d\tau$.
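As a quick numerical sanity check on the entropic soft-min and the bound (C.6), the following Python sketch (ours, not part of the paper's code; all variable names are illustrative) evaluates $g^\epsilon_{Y,\pi,w}$ with a stabilised log-sum-exp, verifies that it stays in $[-4p(2R)^p,\,(4p+1)(2R)^p]$ for admissible dual variables, and checks that it recovers the hard minimum $\min_i(\|x-y_i\|^p-w_i)$ as $\epsilon\to0$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative check: the entropic soft-min
#   g(x) = -eps * log( sum_i pi_i * exp((w_i - ||x - y_i||^p) / eps) )
# stays within the bounds of (C.6) whenever |w_i| <= 4p(2R)^p,
# and tends to min_i (||x - y_i||^p - w_i) as eps -> 0.
d, N, p, R = 3, 5, 2, 1.0

def g_eps(x, Y, pi, w, eps):
    a = (w - np.linalg.norm(x - Y, axis=1) ** p) / eps
    m = a.max()  # stabilised log-sum-exp
    return -eps * (m + np.log(np.sum(pi * np.exp(a - m))))

K = 4 * p * (2 * R) ** p
for _ in range(200):
    Y = rng.uniform(-R / np.sqrt(d), R / np.sqrt(d), (N, d))  # points in B_R
    x = rng.uniform(-R / np.sqrt(d), R / np.sqrt(d), d)
    pi = rng.dirichlet(np.ones(N))
    w = rng.uniform(-K, K, N)
    val = g_eps(x, Y, pi, w, eps=0.5)
    assert -K - 1e-9 <= val <= (4 * p + 1) * (2 * R) ** p + 1e-9
    # eps -> 0 recovers the hard minimum
    hard = np.min(np.linalg.norm(x - Y, axis=1) ** p - w)
    assert abs(g_eps(x, Y, pi, w, eps=1e-4) - hard) < 1e-2
```

The gap between the soft and hard minima is at most $-\epsilon\log\pi_{i^*}$, where $i^*$ is the minimising index, which is consistent with the fact that the Lipschitz bound (C.5) does not depend on $\epsilon$.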
In the following, we prove the result for debiased Sinkhorn barycenters.

Corollary C.7. Let $\mu^1,\dots,\mu^L\in\mathcal{M}_1(B_R)$, for $R>0$. For some integer $n$, let $\mu^1_n,\dots,\mu^L_n$ be empirical measures supported over $n$ i.i.d. random variables of respective law $\mu^\ell$, for all $\ell\in[[1,L]]$. Let also $N\ge1$ be an integer, $A$ a closed nonempty subset of $B_R^N\times\Delta_N$, and let $D=W^p_{\epsilon,p}$ (for some $\epsilon>0$). We denote by $(Y_n,\pi_n)$ an argmin in $A$ of
\[
(Y,\pi)\mapsto F_D\Big(\mu^1_n,\dots,\mu^L_n,\sum_{i=1}^N\pi_i\delta_{y_i}\Big).
\]
Then we have
\[
\mathbb{E}\bigg[F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi^n_i\delta_{y^n_i}\Big)-\min_{(Y,\pi)\in A}F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)\bigg]
\le 2C_{p,R}\sqrt{\frac{N(d+1)}{n}}+2K_{p,R}\bigg(\frac{e^{\kappa/\epsilon}}{\sqrt n}\Big(1+\frac{1}{\epsilon^{\lfloor d/2\rfloor}}\Big)\bigg),
\]
with $\kappa=2C_pR+(2R)^p$, where $C_p=\sqrt2\,p(2R)^{p-1}$ is the Lipschitz constant of the Euclidean cost at the power $p$. Additionally, $K_{p,R}$ is an unknown constant depending only on $R$ and $p$, and $C_{p,R}$ is the constant associated to $W^p_{\epsilon,p}$ in Theorem 4.1.

Proof. Let $(Y^*,\pi^*)$ and $(Y_n,\pi_n)$ be respective minimizers of $(Y,\pi)\mapsto F_D\big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i}\big)$ and $(Y,\pi)\mapsto F_D\big(\mu^1_n,\dots,\mu^L_n,\sum_{i=1}^N\pi_i\delta_{y_i}\big)$. We then denote $\nu^*=\sum_{i=1}^N\pi^*_i\delta_{y^*_i}$ and $\nu_n=\sum_{i=1}^N\pi^n_i\delta_{y^n_i}$. Note that the minimizers exist by continuity of the functional on the compact set $A$. Notice now that
\[
\begin{aligned}
F_D(\mu^1,\dots,\mu^L,\nu_n)-F_D(\mu^1,\dots,\mu^L,\nu^*)
&\le F_D(\mu^1,\dots,\mu^L,\nu_n)-F_D(\mu^1_n,\dots,\mu^L_n,\nu_n)+F_D(\mu^1_n,\dots,\mu^L_n,\nu^*)-F_D(\mu^1,\dots,\mu^L,\nu^*) \\
&\le 2\sup_{(Y,\pi)\in B_R^N\times\Delta_N}\bigg|F_D\Big(\mu^1_n,\dots,\mu^L_n,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)-F_D\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)\bigg| \\
&\le 2\sup_{(Y,\pi)\in B_R^N\times\Delta_N}\bigg|F_{W^p_{\epsilon,p}}\Big(\mu^1_n,\dots,\mu^L_n,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)-F_{W^p_{\epsilon,p}}\Big(\mu^1,\dots,\mu^L,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)\bigg| \\
&\qquad+\frac1L\sum_{\ell=1}^L\big|W^p_{\epsilon,p}(\mu^\ell_n,\mu^\ell_n)-W^p_{\epsilon,p}(\mu^\ell,\mu^\ell)\big|.
\end{aligned}
\]
In the right-hand side of the inequality, we bound the first term by Theorem 4.1, while the second term can be bounded thanks to the proof of [24, Theorem 3]. In fact, their proof can be adapted almost directly to show that
\[
\mathbb{E}\big|W^p_{\epsilon,p}(\mu_n,\mu_n)-W^p_{\epsilon,p}(\mu,\mu)\big|=O\bigg(\frac{e^{\kappa/\epsilon}}{\sqrt n}\Big(1+\frac{1}{\epsilon^{\lfloor d/2\rfloor}}\Big)\bigg).
\]
This result is given for arbitrary measures $\alpha,\beta\in\mathcal{M}_1(\mathbb{R}^d)$, their empirical counterparts $\hat\alpha_n,\hat\beta_n$ and the quantity $W^p_{\epsilon,p}(\hat\alpha_n,\hat\beta_n)$, but at no point in the proof is the independence of the samples upon which $\hat\alpha_n$ and $\hat\beta_n$ are constructed mandatory. As a result, one can obtain the same bound with $\beta=\alpha=\mu$.

D Additional Lemmas

We provide here two well-known technical lemmas that are used throughout the main proofs in Appendix C. We then recall a classical result in optimal quantization.

D.1 Technical lemmas

Lemma D.1. For $R>0$ and $p\ge1$, we have: (i) the function $f:x\in B_R\mapsto\|x\|^p$ is $pR^{p-1}$-Lipschitz; (ii) the function $c:(x,y)\in B_R\times B_R\mapsto\|x-y\|^p$ is $\sqrt2\,p(2R)^{p-1}$-Lipschitz.

Proof. (i) The case $p=1$ is just the result of the triangle inequality. For $p>1$, by the mean value theorem and the chain rule, one can show that $\nabla f(x)=p\|x\|^{p-2}x$, whose norm is upper bounded by $pR^{p-1}$.
(ii) Let $x,x',y,y'\in B_R$. We first consider the case $p=1$:
\[
\begin{aligned}
\big|\,\|x-y\|-\|x'-y'\|\,\big|^2
&= \big|\,\|x-y\|-\|x-y'\|+\|x-y'\|-\|x'-y'\|\,\big|^2 \\
&\le 2\big|\,\|x-y\|-\|x-y'\|\,\big|^2+2\big|\,\|x-y'\|-\|x'-y'\|\,\big|^2
\le 2\big(\|y-y'\|^2+\|x-x'\|^2\big)
\end{aligned}
\]
by the triangle inequality together with $(a+b)^2\le2a^2+2b^2$, which gives the $\sqrt2$ constant. For the case $p>1$, we differentiate the functional $c:(x,y)\mapsto\|x-y\|^p$, that is,
\[
\partial_x c(x,y)=\frac p2\big(\|x-y\|^2\big)^{\frac p2-1}\partial_x\|x-y\|^2=p\|x-y\|^{p-2}(x-y).
\]
It results that $\|\partial_x c(x,y)\|=p\|x-y\|^{p-1}$, and therefore
\[
\|\nabla c(x,y)\|^2=\|\partial_x c(x,y)\|^2+\|\partial_y c(x,y)\|^2=2p^2\|x-y\|^{2p-2}.
\]
We then conclude by the mean value theorem and the fact that $x,y\in B_R$, so that $\|x-y\|\le2R$.

Lemma D.2.
For any functions $f,g:\mathbb{R}^d\to\mathbb{R}$ and subset $U$ of $\mathbb{R}^d$,
\[
\Big|\sup_{x\in U}f(x)-\sup_{x\in U}g(x)\Big|\le\sup_{x\in U}|f(x)-g(x)|
\quad\text{and}\quad
\Big|\inf_{x\in U}f(x)-\inf_{x\in U}g(x)\Big|\le\sup_{x\in U}|f(x)-g(x)|.
\]
Proof. Let $x\in U$; then
\[
f(x)-\sup_{t\in U}g(t)\le f(x)-g(x)\le\sup_{x\in U}|f(x)-g(x)|.
\]
This being true for all $x\in U$ means that $\sup_{x\in U}f(x)-\sup_{t\in U}g(t)\le\sup_{x\in U}|f(x)-g(x)|$. By inverting the roles of $f$ and $g$ we can then show the first result. As for the second result, for all $x\in U$ we have
\[
\sup_{x\in U}|f(x)-g(x)|\ge f(x)-g(x)\ge\inf_{x\in U}f(x)-g(x).
\]
This being true for all $x\in U$ means that $\sup_{x\in U}|f(x)-g(x)|\ge\inf_{x\in U}f(x)-\inf_{x\in U}g(x)$. By inverting the roles of $f$ and $g$ we can show the second result.

D.2 Optimal quantization

For completeness, we recall and prove this well-known fact about optimal quantization.

Lemma D.3. Let $Y\in(\mathbb{R}^d)^N\setminus D_N$; then for all $p\ge1$,
\[
\min_{\pi\in\Delta_N}W_p^p\Big(\mu,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)=\int_{\mathbb{R}^d}\min_{i=1,\dots,N}\|x-y_i\|^p\,d\mu(x).
\]
Proof. Consider the functional
\[
(w,\pi)\in\mathbb{R}^N\times\Delta_N\mapsto\int_{\mathbb{R}^d}\min_{i=1,\dots,N}\{\|x-y_i\|^p-w_i\}\,d\mu(x)+\sum_{i=1}^N\pi_iw_i.
\]
It is concave in $w$ and convex in $\pi$, and we can therefore swap the min and the max in the following problem:
\[
\min_{\pi\in\Delta_N}\max_{w\in\mathbb{R}^N}\int_{\mathbb{R}^d}\min_{i=1,\dots,N}\{\|x-y_i\|^p-w_i\}\,d\mu(x)+\sum_{i=1}^N\pi_iw_i.\tag{D.1}
\]
Notice that a first-order condition with respect to $\pi$ implies $w^*=0_{\mathbb{R}^N}$ at optimality. Let $\pi^*\in\Delta_N$ be a minimizer of (D.1). In order to use the duality expression (A.2), we consider $\tilde\pi\in\operatorname{int}(\Delta_M)$ the vector of weights $\pi^*$ whose null components have been removed. Following this construction, and with a slight abuse of notation, we reorder the point cloud $Y:=(y_1,\dots,y_N)$ so that the components $y_i$ corresponding to null weights have indexes $i\in[[M+1,N]]$. Then
\[
\begin{aligned}
\min_{\pi\in\Delta_N}W_p^p\Big(\mu,\sum_{i=1}^N\pi_i\delta_{y_i}\Big)
&=W_p^p\Big(\mu,\sum_{i=1}^M\tilde\pi_i\delta_{y_i}\Big)
=\min_{\pi\in\Delta_M}\max_{w\in\mathbb{R}^M}\int_{\mathbb{R}^d}\min_{i=1,\dots,M}\{\|x-y_i\|^p-w_i\}\,d\mu(x)+\sum_{i=1}^M\pi_iw_i \\
&=\max_{w\in\mathbb{R}^M}\min_{\pi\in\Delta_M}\int_{\mathbb{R}^d}\min_{i=1,\dots,M}\{\|x-y_i\|^p-w_i\}\,d\mu(x)+\sum_{i=1}^M\pi_iw_i
=\int_{\mathbb{R}^d}\min_{i=1,\dots,M}\|x-y_i\|^p\,d\mu(x).
\end{aligned}
\]
We can then consider two cases. First, suppose that $\mu$ does not give mass to the Voronoï set $\{x\in\mathbb{R}^d\mid\|x-y_i\|^p\le\|x-y_j\|^p,\ j\in[[1,N]]\}$ for all $i\in[[M+1,N]]$; this may be the case for example if $\mu$ is discrete. Then we have
\[
\int_{\mathbb{R}^d}\min_{i=1,\dots,M}\|x-y_i\|^p\,d\mu(x)=\int_{\mathbb{R}^d}\min_{i=1,\dots,N}\|x-y_i\|^p\,d\mu(x).
\]
Second, suppose that there exists $i\in[[M+1,N]]$ such that $\mu$ gives mass to a set $\{x\in\mathbb{R}^d\mid\|x-y_i\|^p\le\|x-y_j\|^p,\ j\in[[1,N]]\}$. By contradiction, suppose that $M<N$. Because $Y\notin D_N$, we then have
\[
\begin{aligned}
\int_{\mathbb{R}^d}\min_{i=1,\dots,M}\|x-y_i\|^p\,d\mu(x)
&>\int_{\mathbb{R}^d}\min_{i=1,\dots,N}\|x-y_i\|^p\,d\mu(x) \\
&=\max_{w\in\mathbb{R}^N}\min_{\pi\in\Delta_N}\int_{\mathbb{R}^d}\min_{i=1,\dots,N}\{\|x-y_i\|^p-w_i\}\,d\mu(x)+\sum_{i=1}^N\pi_iw_i
=\min_{\pi\in\Delta_N}W_p^p\Big(\mu,\sum_{i=1}^N\pi_i\delta_{y_i}\Big),
\end{aligned}
\]
which contradicts the optimality of the minimum in $\pi$. We thus get the result.
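Lemma D.3 can be checked numerically when $\mu$ is itself discrete. The sketch below (ours, not the authors' code) computes $W_p^p$ exactly as a transport linear program and verifies that the nearest-atom weights attain the value $\int\min_i\|x-y_i\|^p\,d\mu(x)$, while any other weight vector can only do worse:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Illustrative check of Lemma D.3: for a discrete mu and fixed atoms y_1..y_N,
# minimising W_p^p(mu, sum_i pi_i delta_{y_i}) over the weights pi equals the
# mean nearest-atom cost int min_i ||x - y_i||^p dmu(x).
p, d, N, m = 2, 2, 4, 30
Y = rng.normal(size=(N, d))          # fixed support of the quantizer
X = rng.normal(size=(m, d))          # atoms of mu (uniform weights)
a = np.full(m, 1.0 / m)
C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2) ** p  # cost matrix

def wpp(b):
    """Exact W_p^p(mu, sum_j b_j delta_{y_j}) via the transport LP."""
    # variables P_ij >= 0; row sums = a, column sums = b
    A_eq = np.zeros((m + N, m * N))
    for i in range(m):
        A_eq[i, i * N:(i + 1) * N] = 1.0
    for j in range(N):
        A_eq[m + j, j::N] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]))
    return res.fun

lower = np.mean(C.min(axis=1))       # int min_i ||x - y_i||^p dmu(x)
# the weights that send each atom of mu to its nearest y_i attain the bound
nearest = C.argmin(axis=1)
b_star = np.bincount(nearest, weights=a, minlength=N)
assert abs(wpp(b_star) - lower) < 1e-6
# any other admissible weight vector can only do worse
for _ in range(20):
    b = rng.dirichlet(np.ones(N))
    assert wpp(b) >= lower - 1e-6
```

The lower bound holds for any coupling because $\sum_{ij}P_{ij}C_{ij}\ge\sum_i a_i\min_j C_{ij}$, and the nearest-atom plan is feasible precisely for the column marginals $b^*$, mirroring the Voronoï argument in the proof.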
https://arxiv.org/abs/2505.21274v1
arXiv:2505.21778v1 [math.PR] 27 May 2025

Reconstruction of the Probability Measure and the Coupling Parameters in a Curie-Weiss Model

Miguel Ballesteros∗, Ramsés H. Mena∗, Arno Siri-Jégousse∗, and Gabor Toth∗

Abstract

The Curie-Weiss model is used to study phase transitions in statistical mechanics and has been the object of rigorous analysis in mathematical physics. We analyse the problem of reconstructing the probability measure of a multi-group Curie-Weiss model from a sample of data by employing the maximum likelihood estimator for the coupling parameters of the model, under the assumption that there is interaction within each group but not across group boundaries. The estimator has a number of positive properties, such as consistency, asymptotic normality, and exponentially decaying probabilities of large deviations of the estimator with respect to the true parameter value. A shortcoming in practice is the necessity to calculate the partition function of the Curie-Weiss model, which scales exponentially with respect to the population size. There are a number of applications of the estimator in political science, sociology, and automated voting, centred on the idea of identifying the degree of social cohesion in a population. In these applications, the coupling parameter is a natural way to quantify social cohesion. We treat the estimation of the optimal weights in a two-tier voting system, which requires the estimation of the coupling parameter.

MSC 2020: 62F10, 82B20, 60F05, 91B12

Keywords: Curie-Weiss model, Mathematical physics, Statistical mechanics, Gibbs measures, Mathematical analysis, Maximum likelihood estimator, Consistency, Asymptotic normality, Large deviations, Two-tier voting systems, Optimal weights

1 Introduction

Models of ferromagnetism have long served as foundational tools in statistical mechanics, enabling the exploration of collective behaviour in systems with many interacting components.
As with many other physics models, they have also attracted the attention of mathematical physicists who have contributed rigorous results which frequently confirm the physicists' intuition. Among the most celebrated is the Ising model, introduced in the early 20th century by Wilhelm Lenz and his student Ernst Ising [15]. In its classical form, the Ising model consists of a lattice of binary spins or magnets, where each spin interacts with its nearest neighbours and aligns in response to an external field and thermal fluctuations. Although the one-dimensional version exhibits no phase transition at finite temperature, higher-dimensional cases, such as the two-dimensional model solved by Onsager [29], exhibit rich phase behaviour and critical phenomena, making the Ising model a central object of study in both physics and applied mathematics. The Curie-Weiss model, introduced as a mean-field approximation of the behaviour of ferromagnets [33], simplifies the spatial complexity by assuming that each spin interacts equally with all others. This global interaction structure permits a rigorous mathematical analysis of phase transitions and critical behaviour (see [10] for an in-depth treatment of the model).

∗IIMAS-UNAM, Mexico City, Mexico

Beyond its physical origins, the Curie-Weiss model has been widely used in other domains, including economics [4], political science [17], and sociology [7]. An extension of the Curie-Weiss model is the specification that instead of having a homogeneous population of interacting agents, there are several
identifiable groups which have distinct cultures or attitudes, manifesting in different voting decisions. The first version of a multi-group Curie-Weiss model was introduced in [7]; subsequently, similar models were studied in [3, 26, 27, 19, 20, 23], including the statistical problem of community detection [3, 26, 1]. One particularly compelling application of the Curie-Weiss model lies in the domain of collective decision-making, such as voting. In this context, each spin is interpreted as an individual voter casting a binary 'yes' or 'no' vote. The mutual influence between voters, akin to the ferromagnetic alignment in physical systems, can model peer pressure, shared information, or ideological affinity. This analogy becomes especially fruitful in the analysis of two-tier voting systems, where decisions are made in two stages: individuals vote to determine the outcome of a local group (e.g., a state or district), and these local outcomes are then aggregated at a higher level (e.g., in a council or federal assembly). The central question in this setting concerns the assignment of optimal voting weights to the representatives of each group, ensuring fair representation in the face of population imbalances or correlated voting behaviour (see [11] for a treatment of voting power). This challenge is classically illustrated in institutions like the Council of the European Union or the United Nations Security Council. However, its significance extends far beyond geopolitics. In modern automated systems, such as recommendation engines, online platforms aggregating user preferences, and AI-based decision frameworks, users' preferences often exhibit correlated structures. Here too, the need arises to determine how individual or subgroup preferences should be weighted when computing global outcomes.
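The two-stage aggregation just described can be sketched in a few lines of code (an illustrative toy, not from the paper; the group sizes, votes, and weights below are invented for the example):

```python
# A minimal sketch of two-tier weighted voting: each group decides by simple
# majority, then group outcomes are aggregated in a council using per-group
# weights.
def council_decision(votes_by_group, weights):
    """votes_by_group: list of lists of +1/-1 votes; weights: per-group weights."""
    council_total = 0.0
    for votes, w in zip(votes_by_group, weights):
        margin = sum(votes)                      # S_lambda: yes minus no votes
        group_outcome = 1 if margin > 0 else -1  # majority within the group
        council_total += w * group_outcome       # weighted vote of the delegate
    return 1 if council_total > 0 else -1

# three groups of unequal size: 'one state, one vote' vs population weights
votes = [[1, 1, 1], [-1] * 7, [1, 1, -1, 1]]
print(council_decision(votes, [1, 1, 1]))  # -> 1  (equal weights)
print(council_decision(votes, [3, 7, 4]))  # -> -1 (population-proportional)
```

The example shows why the choice of weights matters: the same votes produce opposite council decisions under the two weighting rules, which is exactly the tension the optimal-weight question addresses.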
As such, tools from statistical mechanics, and in particular statistical estimation methods applied to the Curie-Weiss model, provide both a conceptual and practical bridge between physical models and real-world decision-making systems. The present article contributes to this line of inquiry by addressing the statistical estimation of the coupling parameters $\beta_\lambda$ in a multi-group Curie-Weiss model, a quantity which modulates the strength of interaction between agents or voters within each of the $M\in\mathbb{N}$ groups indexed by $\lambda\in\mathbb{N}_M$. Estimating $\beta_\lambda$ from observed voting data yields critical insights into the degree of collective behaviour or ideological alignment, which can, in turn, inform the derivation of optimal weights in two-tier voting systems. Section 7 of this article explores this application in detail under the assumption that voters interact within each group but not across group boundaries, connecting theoretical results to policy-relevant and business-related settings. Imagine a population which is divided into $M$ groups along cultural or national lines. An example is the European Union with its 27 member states. Each member state sends a representative to the Council of the European Union, where votes take place according to a set of fairly complicated rules. We consider the simpler case of a weighted voting system: each member state has a certain voting weight, and its representative's vote in the council is multiplied by the voting weight. Examples of weighted voting systems are 'one person, one vote' in popular votes or voting weights proportional to the population of each country in a council. By
imposing a 'fairness criterion,' we can determine what the weights in the council ought to be. The theoretical optimal weights may depend on the underlying voting model (such as the Curie-Weiss model treated in this article) which describes the population's voting behaviour in probabilistic terms. If we reconstruct the underlying voting model, we also obtain estimators for the optimal voting weights, which can then be calculated from a sample of observations. Prior work has been done on the estimation problem of parameters of spin models such as those we mentioned. A classical reference is the book [16]. The estimation of the interaction parameter from data belonging to the realm of the social sciences was studied in [13]. Work has also been done on the estimation of parameters in more complicated models referred to as spin glass models with random interaction structure (see [5, 6]). Our methodology is grounded in maximum likelihood estimation, which is a natural and widely used approach in parametric inference, provided we have a good reason to assume we know the underlying statistical model which generated the data. The maximum likelihood estimator for the coupling parameter is particularly attractive because it is well-defined for all values of $\beta_\lambda$, including for parameter values close to the critical regime (and typical samples from this distribution). Furthermore, the maximum likelihood estimator enjoys desirable statistical properties: it is consistent, meaning it converges in probability to the true parameter value as the number of observations increases. It is asymptotically normal, allowing for the construction of confidence intervals and hypothesis tests. The estimator satisfies large deviation principles, providing robust upper bounds on the probability of estimation errors in finite samples.

¹We will write $\mathbb{N}_m:=\{1,\dots,m\}$ for any $m\in\mathbb{N}$.
The main computational drawback of this approach is the necessity to compute the normalisation constant (called partition function in statistical physics) of the Curie-Weiss model, which scales exponentially with the number of voters. This challenge is well known in the literature and has inspired a range of approximation techniques. The path of further research will lead us to consider maximum likelihood estimators based on an approximation to the maximum likelihood optimality condition (8), which sidesteps the costly calculation of the partition function at the cost of sacrificing the ability to calculate estimators for any possible sample. Another avenue leads to alternate estimators based on observing a sample of the votes from a subset of the population. A third direction we are planning to explore is the generalisation of the Curie-Weiss model to interacting groups of voters. This research programme will hopefully provide a clear picture of how to estimate interaction parameters in voting models, with applicability beyond the Curie-Weiss model. In summary, this article situates the Curie-Weiss model at the intersection of statistical physics, statistics, and political decision-making. By developing and exhaustively analysing the maximum likelihood estimator for the coupling parameters, we aim to provide a rigorous and interpretable method for quantifying interaction strength in voting populations, with applications ranging from international councils to algorithmic aggregation in digital
platforms. Given this applicability to different academic disciplines, we have tried to provide complete and comprehensible proofs that appeal to a wide audience composed of mathematicians, statisticians, physicists, economists, and political scientists. The main results of this article are Propositions 8 and 9, and Theorem 10 (to be found in Section 3). Since the estimator we study (see Definition 7) is defined by an implicit condition (8), we require Proposition 8 to be certain the estimator is uniquely determined for any sample of observations. As we will show, there are realisations of the sample which lead to estimates which do not correspond to our assumption that the true parameters are non-negative real numbers. Proposition 9 assures us that the probability of these realisations decays exponentially to 0 as the sample size goes to infinity. Finally, Theorem 10 contains the aforementioned statistical properties of the estimator: consistency, asymptotic normality, and a large deviation principle. We present the Curie-Weiss model in the next section. The maximum likelihood estimator is defined in Section 3, where we also state the main results of this article. Sections 4 and 5 contain the proofs of the main results. Section 6 is about the standard error of the statistics in this article. The topic of optimal weights in two-tier voting systems is treated in Section 7. Finally, an Appendix to the article contains some auxiliary results we employ.

2 The Curie-Weiss Model

Let the sets $\mathbb{N}_{N_\lambda}$, $\lambda\in\mathbb{N}_M$, represent $M$ groups of voters. We will denote the space of voting configurations for this population by $\Omega_{N_1+\cdots+N_M}:=\{-1,1\}^{N_1+\cdots+N_M}$. Each individual vote cast $x_{\lambda i}\in\Omega_1$ will be indexed by $\lambda\in\mathbb{N}_M$, denoting the group, and $i\in\mathbb{N}_{N_\lambda}$, the identity of the voter in question. We will refer to each $(x_{11},\dots,x_{1N_1},\dots,x_{M1},\dots,x_{MN_M})\in\Omega_{N_1+\cdots+N_M}$ as a voting configuration, which consists of a complete record of the votes cast by the entire population on a certain issue.
We model their behaviour in binary voting situations with the following voting model:

Definition 1. Let $N_\lambda\in\mathbb{N}$ and $\beta_\lambda\in\mathbb{R}$, $\lambda\in\mathbb{N}_M$. We set $N:=(N_1,\dots,N_M)$ and $\beta:=(\beta_1,\dots,\beta_M)$. The Curie-Weiss model (CWM) is defined for all voting configurations $(x_{11},\dots,x_{1N_1},\dots,x_{M1},\dots,x_{MN_M})\in\Omega_{N_1+\cdots+N_M}$ by
\[
P_{\beta,N}(X_{11}=x_{11},\dots,X_{MN_M}=x_{MN_M}):=Z^{-1}_{\beta,N}\exp\Bigg(\frac12\sum_{\lambda=1}^M\frac{\beta_\lambda}{N_\lambda}\Bigg(\sum_{i=1}^{N_\lambda}x_{\lambda i}\Bigg)^2\Bigg),\tag{1}
\]
where $Z_{\beta,N}$ is a normalisation constant called the partition function, which depends on $\beta$ and $N$. The constants $\beta_\lambda$ are called coupling parameters.

In the physical context of the CWM as a model of ferromagnetism, where $M=1$, there is a single coupling parameter $\beta$ which is the inverse temperature. As such, the range of values for $\beta$ is usually $[0,\infty)$. For technical reasons to do with the range of the statistic employed to calculate the maximum likelihood estimator, we will consider the more general definition given above. However, as a model of voting, the CWM has non-negative coupling parameters to reflect social cohesion. In the voting context, the coupling parameters $\beta_\lambda$ measure
the degree of influence the voters in group $\lambda$ exert over each other, with the influence becoming stronger the larger $\beta_\lambda$ is. As we see, the most probable voting configurations are those with unanimous votes in favour of or against the proposal. However, there are only two of these extreme configurations, whereas there is a multitude of low probability configurations with roughly equal numbers of votes for and against. This is the 'conflict between energy and entropy.' Which one of these pseudo forces dominates depends on the magnitude of the coupling parameters. The partition function $Z_{\beta,N}$ is defined by
\[
Z_{\beta,N}=\sum_{x\in\Omega_{N_1+\cdots+N_M}}\exp\Bigg(\frac12\sum_{\lambda=1}^M\frac{\beta_\lambda}{N_\lambda}\Bigg(\sum_{i=1}^{N_\lambda}x_{\lambda i}\Bigg)^2\Bigg).\tag{2}
\]
We set
\[
S_\lambda:=\sum_{i=1}^{N_\lambda}X_{\lambda i},\qquad\lambda\in\mathbb{N}_M.\tag{3}
\]
The key to understanding the behaviour of the CWM is the random vector $(S_1,\dots,S_M)$, which represents the voting margins, i.e. the difference between the numbers of yes and no votes, in each group.

Notation 2. Throughout this article, we will use the symbol $\mathbb{E}X$ for the expectation and $\mathbb{V}X$ for the variance of some random variable $X$. Capital letters such as $X$ will denote random variables, while lower case letters such as $x$ will denote realisations of the corresponding random variable.

3 Maximum Likelihood Estimation of β

3.1 The Maximum Likelihood Estimator $\hat\beta_N$

We will denote by $n\in\mathbb{N}$ the size of a sample of observations. Then each sample takes values in the space
\[
\Omega^n_{N_1+\cdots+N_M}:=\prod_{i=1}^n\Omega_{N_1+\cdots+N_M}.
\]
We assume that we have access to a sample of voting configurations $(x^{(1)},\dots,x^{(n)})\in\Omega^n_{N_1+\cdots+N_M}$ composed of $n\in\mathbb{N}$ i.i.d. realisations of $(X_{11},\dots,X_{MN_M})$ from the CWM. The density function for such a sample is given by the $n$-fold product of (1):
\[
f\big(x^{(1)},\dots,x^{(n)};\beta\big):=Z^{-n}_{\beta,N}\prod_{t=1}^n\exp\Bigg(\frac12\sum_{\lambda=1}^M\frac{\beta_\lambda}{N_\lambda}\Bigg(\sum_{i=1}^{N_\lambda}x^{(t)}_{\lambda i}\Bigg)^2\Bigg).\tag{4}
\]
For fixed $(x^{(1)},\dots,x^{(n)})$, the function $\beta\in\mathbb{R}^M\mapsto f(x^{(1)},\dots,x^{(n)};\beta)\in\mathbb{R}$ is called the likelihood function, and
\[
\ln f\big(x^{(1)},\dots,x^{(n)};\beta\big)=-n\ln Z_{\beta,N}+\frac12\sum_{t=1}^n\sum_{\lambda=1}^M\frac{\beta_\lambda}{N_\lambda}\Bigg(\sum_{i=1}^{N_\lambda}x^{(t)}_{\lambda i}\Bigg)^2\tag{5}
\]
is the log-likelihood function.
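The measure (1) and the partition function (2) can be made concrete by brute-force enumeration (our sketch, not the authors' code; group sizes and couplings are illustrative). This also makes the exponential cost mentioned in the introduction tangible, since the sum runs over all $2^{N_1+\cdots+N_M}$ configurations:

```python
import itertools
import math

# Brute-force sketch of the multi-group Curie-Weiss measure (1): enumerate all
# 2^(N_1+...+N_M) configurations to compute the partition function Z and check
# that the probabilities sum to 1.
def boltzmann_weight(x, sizes, beta):
    """exp( (1/2) sum_l (beta_l / N_l) * S_l^2 ) for one configuration x."""
    val, start = 0.0, 0
    for n, b in zip(sizes, beta):
        s = sum(x[start:start + n])   # group voting margin S_lambda
        val += 0.5 * (b / n) * s * s
        start += n
    return math.exp(val)

def partition_function(sizes, beta):
    return sum(boltzmann_weight(x, sizes, beta)
               for x in itertools.product((-1, 1), repeat=sum(sizes)))

sizes, beta = (3, 4), (0.8, 1.2)
Z = partition_function(sizes, beta)
probs = [boltzmann_weight(x, sizes, beta) / Z
         for x in itertools.product((-1, 1), repeat=sum(sizes))]
assert abs(sum(probs) - 1.0) < 1e-12
# unanimity within every group is (jointly) most probable, as noted in the text
assert max(probs) == probs[0]  # x = (-1,...,-1) comes first in product order
```

With independent groups (no cross-group terms in the exponent), the weight factorises over groups, which is what justifies estimating each $\beta_\lambda$ from its own voting margin.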
The maximum likelihood estimator $\hat\beta_{\mathrm{ML}}$ of $\beta$ given the sample $\left(x^{(1)}, \ldots, x^{(n)}\right)$ is the value which maximises the likelihood function, i.e.
\[
\hat\beta_{\mathrm{ML}} := \arg\max_{\beta'} f\left(x^{(1)}, \ldots, x^{(n)}; \beta'\right).
\]
Since $x \mapsto \ln x$ is a strictly increasing function, we can instead identify $\hat\beta_{\mathrm{ML}}$ as the value which maximises the log-likelihood function:
\[
\hat\beta_{\mathrm{ML}} = \arg\max_{\beta'} \ln f\left(x^{(1)}, \ldots, x^{(n)}; \beta'\right).
\]
To find the maximum of the log-likelihood function, we differentiate with respect to each $\beta_\lambda$ and equate to 0:
\[
\frac{\mathrm{d}\ln f\left(x^{(1)}, \ldots, x^{(n)}; \beta\right)}{\mathrm{d}\beta_\lambda} = -\frac{n}{Z_{\beta,N}}\frac{\mathrm{d}Z_{\beta,N}}{\mathrm{d}\beta_\lambda} + \frac{1}{2N_\lambda}\sum_{t=1}^{n}\left(\sum_{i=1}^{N_\lambda} x^{(t)}_{\lambda i}\right)^{2} \overset{!}{=} 0. \tag{6}
\]
We continue with our calculation of the maximum likelihood estimator. The squared sums $S^2_\lambda$ defined in (3) have expectation
\[
\mathbb{E}_{\beta,N} S^2_\lambda = \frac{\mathrm{d}Z_{\beta,N}}{\mathrm{d}\beta_\lambda} \cdot 2N_\lambda Z_{\beta,N}^{-1} \tag{7}
\]
by Lemma 51. We substitute (7) into (6):
\[
\frac{\mathrm{d}\ln f\left(x^{(1)}, \ldots, x^{(n)}; \beta\right)}{\mathrm{d}\beta_\lambda} = -\frac{n}{Z_{\beta,N}}\frac{1}{2N_\lambda} Z_{\beta,N}\,\mathbb{E}_{\beta,N} S^2_\lambda + \frac{1}{2N_\lambda}\sum_{t=1}^{n}\left(\sum_{i=1}^{N_\lambda} x^{(t)}_{\lambda i}\right)^{2}.
\]
Then the optimality condition $\left.\frac{\mathrm{d}\ln f\left(x^{(1)}, \ldots, x^{(n)}; \beta\right)}{\mathrm{d}\beta}\right|_{\beta=\hat\beta_{\mathrm{ML}}} = 0$ is equivalent to
\[
\mathbb{E}_{\hat\beta_{\mathrm{ML}},N} S^2_\lambda = \frac{1}{n}\sum_{t=1}^{n}\left(\sum_{i=1}^{N_\lambda} x^{(t)}_{\lambda i}\right)^{2}. \tag{8}
\]
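The optimality condition (8) can be solved numerically: the map $\beta \mapsto \mathbb{E}_{\beta,N} S^2$ is strictly increasing (shown later in Proposition 20), so monotone bisection applies once the moment is computable. A single-group sketch for small $N$ (function names and parameter values are ours):

```python
import itertools
import math

def moment_S2(beta, N):
    # E_{beta,N}[S^2] for a single group by exhaustive enumeration.
    Z = m2 = 0.0
    for x in itertools.product((-1, 1), repeat=N):
        s2 = sum(x) ** 2
        w = math.exp(beta / (2 * N) * s2)
        Z += w
        m2 += s2 * w
    return m2 / Z

def mle_beta(t, N, lo=-30.0, hi=30.0, iters=100):
    # Bisection on the strictly increasing map beta -> E_{beta,N}[S^2];
    # valid for realisations t strictly between the limits kappa and N^2.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if moment_S2(mid, N) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Inverting the moment map recovers the coupling parameter:
beta_hat = mle_beta(moment_S2(1.0, 6), 6)   # close to 1.0
```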
Definition 3. We define the statistic $T: \Omega^n_{N_1+\cdots+N_M} \to \mathbb{R}^M$ for any realisation of the sample $x = \left(x^{(1)}, \ldots, x^{(n)}\right) \in \Omega^n_{N_1+\cdots+N_M}$ by
\[
T(x) := \frac{1}{n}\sum_{t=1}^{n}\left(\left(\sum_{i=1}^{N_1} x^{(t)}_{1i}\right)^{2}, \ldots, \left(\sum_{i=1}^{N_M} x^{(t)}_{Mi}\right)^{2}\right).
\]
Remark 4. $T$ is a random vector on the probability space $\Omega^n_{N_1+\cdots+N_M}$, with the power set of $\Omega^n_{N_1+\cdots+N_M}$ as the $\sigma$-algebra and the probability measure defined by the density function (4). We will write $T$ for this random variable and $T(x)$ for its realisation given a sample $x \in \Omega^n_{N_1+\cdots+N_M}$.

Proposition 5. $T$ is a sufficient statistic for $\beta$.

We will prove this proposition in Section 4.

Notation 6. We will write $[-\infty,\infty]$ for the compactification $\mathbb{R} \cup \{-\infty,\infty\}$ and $[0,\infty]$ for $[0,\infty) \cup \{\infty\}$.

Definition 7 (Maximum Likelihood Estimator). Let $N \in \mathbb{N}^M$ and $\beta \in \mathbb{R}^M$. The maximum likelihood estimator of the parameter $\beta$ is given by $\hat\beta_N: \Omega^n_{N_1+\cdots+N_M} \to [-\infty,\infty]^M$ such that the optimality condition (8) holds:
\[
\mathbb{E}_{\hat\beta_N(x),N}\left(S^2_1, \ldots, S^2_M\right) = T(x), \quad x \in \Omega^n_{N_1+\cdots+N_M}.
\]

3.2 Main Results of the Article

In the remainder of this section, we will state the main results concerning the estimator $\hat\beta_N$. Symbols such as '$\overset{p}{\longrightarrow}$ as $n\to\infty$' and '$\mathcal{N}\left(0, \sigma^2\right)$' are used in a standard way (cf. Notation 29).

Proposition 8. Let $N \in \mathbb{N}^M$ and $n \in \mathbb{N}$. For each sample $x \in \Omega^n_{N_1+\cdots+N_M}$, there is a unique value $y \in [-\infty,\infty]^M$ such that $\mathbb{E}_{y,N}\left(S^2_1, \ldots, S^2_M\right) = T(x)$ holds. Therefore, the estimator $\hat\beta_N$ is uniquely determined for any realisation $x \in \Omega^n_{N_1+\cdots+N_M}$ of the sample.

Proposition 9. Let $N \in \mathbb{N}^M$. For each value of the coupling constants $\beta > 0$,² there is a constant $\bar\delta > 0$ such that
\[
\mathbb{P}\left\{\hat\beta_N \notin [0,\infty)^M\right\} \le 2M \exp\left(-\bar\delta n\right)
\]
holds for all $n \in \mathbb{N}$.

In (16), we will state the value of the constant $\bar\delta$ in the proposition above. Proposition 9 says that although the estimator $\hat\beta_N$ can assume negative and even infinite values, these realisations are exponentially unlikely as the sample size increases, given our assumption on the true value of $\beta$ meant to reflect social cohesion.

Finally, we state a theorem about the statistical properties of the estimator $\hat\beta_N$.

Theorem 10. Fix $N \in \mathbb{N}^M$ and $\beta > 0$.
The estimator $\hat\beta_N$ has the following properties:

1. $\hat\beta_N$ is consistent: $\hat\beta_N \overset{p}{\longrightarrow} \beta$ as $n\to\infty$.

2. $\hat\beta_N$ is asymptotically normal: $\sqrt{n}\left(\hat\beta_N - \beta\right) \overset{d}{\longrightarrow} \mathcal{N}(0, \Sigma)$ as $n\to\infty$, where the covariance matrix $\Sigma$ is diagonal with entries
\[
\Sigma_{\lambda\lambda} = \frac{4N^2_\lambda}{\mathbb{V}_{\beta_\lambda,N_\lambda} S^2_\lambda}, \quad \lambda \in \mathbb{N}_M.
\]

3. $\hat\beta_N$ satisfies a large deviations principle with rate $n$ and rate function $J$ defined in (19). $J$ has a unique minimum at $\beta$, and we have for each closed set $K \subset [-\infty,\infty]^M$ that does not contain $\beta$ that $\inf_{y\in K} J(y) > 0$ and
\[
\mathbb{P}\left\{\hat\beta_N \in K\right\} \le 2M \exp\left(-n \inf_{y\in K} J(y)\right)
\]
for all $n \in \mathbb{N}$.

²For all $x \in \mathbb{R}^M$, $x > 0$ is to be interpreted coordinate-wise, i.e. $x_i > 0$, $i \in \mathbb{N}_M$.

One could jump to the conclusion that, given these results, the problem of estimating the parameter $\beta$ by the maximum likelihood estimator from Definition 7 is solved in satisfactory fashion. However, there are computational problems with the calculation of the expectation $\mathbb{E}_{\hat\beta_N(x),N}\left(S^2_1, \ldots, S^2_M\right)$ in (8) for all but small $N_1, \ldots, N_M$. The main difficulty is the calculation of the normalisation constant $Z_{\beta,N}$ in (2), which is of order $2^{N_1+\cdots+N_M}$ as $N_1+\cdots+N_M \to \infty$. Two possible solutions to this problem are:

1. Find and use an asymptotic approximation of $\mathbb{E}_{\hat\beta_N(x),N}\left(S^2_1, \ldots, S^2_M\right)$, valid for large $N_1+\cdots+N_M$, which is less costly to calculate than the exact moment.

2. Employ alternate estimators based on small subsets of voters, so that instead of the moment $\mathbb{E}_{\hat\beta_N(x),N}\left(S^2_1, \ldots, S^2_M\right)$ some other expression can be employed which is less costly to calculate. This approach has the added benefit that we need less data to estimate $\beta$: instead of requiring access to a sample of voting configurations from the entire population, a sample containing observations of a subset of votes suffices.

We will explore both of these avenues in future work.

4 Proof of Propositions 8 and 9

We will analyse the properties of the estimator $\hat\beta_N$, with Propositions 20 and 27 being the key insights for the proof of Propositions 8 and 9. Proposition 8 will follow from Proposition 20, and Proposition 9 from Propositions 20 and 27.

First, we will prove Proposition 5 about the sufficiency of the statistic $T$:

Proof of Proposition 5. We observe that the density function of the sample distribution given in (4) only depends on the observations through the realisation of the statistic $T$:
\[
f\left(x^{(1)}, \ldots, x^{(n)}; \beta\right) = Z_{\beta,N}^{-n} \prod_{t=1}^{n} \exp\left(\frac{1}{2}\sum_{\lambda=1}^{M} \frac{\beta_\lambda}{N_\lambda}\left(\sum_{i=1}^{N_\lambda} x^{(t)}_{\lambda i}\right)^{2}\right) = Z_{\beta,N}^{-n} \exp\left(\frac{n}{2}\sum_{\lambda=1}^{M}\frac{\beta_\lambda}{N_\lambda}\,T_\lambda\left(x^{(1)}, \ldots, x^{(n)}\right)\right) =: g\left(T\left(x^{(1)}, \ldots, x^{(n)}\right); \beta\right).
\]

Due to the product structure of the CWM measure from Definition 1, which features non-interacting groups, we can estimate the coupling constants $\beta_\lambda$ independently of each other. We will therefore work with the marginal distributions of each group $\lambda$, given by
\[
\mathbb{P}_{\beta_\lambda,N_\lambda}(X_{\lambda 1}=x_{\lambda 1}, \ldots, X_{\lambda N_\lambda}=x_{\lambda N_\lambda}) := Z_{\beta_\lambda,N_\lambda}^{-1} \exp\left(\frac{\beta_\lambda}{2N_\lambda}\left(\sum_{i=1}^{N_\lambda} x_{\lambda i}\right)^{2}\right)
\]
for all $(x_{\lambda 1}, \ldots, x_{\lambda N_\lambda}) \in \Omega_{N_\lambda}$, where the partition function is
\[
Z_{\beta_\lambda,N_\lambda} = \sum_{x_\lambda \in \Omega_{N_\lambda}} \exp\left(\frac{\beta_\lambda}{2N_\lambda}\left(\sum_{i=1}^{N_\lambda} x_{\lambda i}\right)^{2}\right).
\]
We will return to the multi-group setting in Section 7.
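Because the groups are independent, simulation also factorises over groups. For a small single group, one can sample exactly from the marginal model and watch the statistic $T$ concentrate near $\mathbb{E}_{\beta,N} S^2$, which is the mechanism behind consistency in Theorem 10. A sketch, where $\beta = 1$, $N = 6$, and the sample size are illustrative choices of ours:

```python
import itertools
import math
import random

def exact_moment_and_sampler(beta, N):
    # Enumerate all 2^N states of a single group and their CWM weights.
    states = list(itertools.product((-1, 1), repeat=N))
    weights = [math.exp(beta / (2 * N) * sum(x) ** 2) for x in states]
    Z = sum(weights)
    ES2 = sum(sum(x) ** 2 * w for x, w in zip(states, weights)) / Z
    return states, weights, ES2

def statistic_T(sample):
    # T = average squared voting margin over the n observed configurations.
    return sum(sum(x) ** 2 for x in sample) / len(sample)

states, weights, ES2 = exact_moment_and_sampler(1.0, 6)
rng = random.Random(0)
sample = rng.choices(states, weights=weights, k=20000)
# By the law of large numbers, statistic_T(sample) is close to ES2.
```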
In the meantime, we will be working with a single group at a time, and therefore there will be no confusion if we omit the subindex $\lambda$ from all our expressions to improve readability. As such, we will write $\mathbb{P}_{\beta,N}$ instead of $\mathbb{P}_{\beta_\lambda,N_\lambda}$, $T$ instead of $(T)_\lambda$, etc.

Definition 11. The Rademacher distribution with parameter $t \in [-1,1]$ is the probability measure $P_t$ on $\{-1,1\}$ given by $P_t\{1\} := \frac{1+t}{2}$.

We set $u := (1, \ldots, 1) \in \Omega_N$ and, for all $x \in \Omega_N$,
\[
p_\beta(x) := \exp\left(\frac{\beta}{2N}\left(\sum_{i=1}^{N} x_i\right)^{2}\right). \tag{9}
\]
We will use the following auxiliary results in the proof of Proposition 20:

Lemma 12. The limits
\[
\lim_{\beta\to\infty} \frac{p_\beta(x)}{p_\beta(u)} = \begin{cases} 1 & \text{if } \left|\sum_{i=1}^{N} x_i\right| = N, \\ 0 & \text{otherwise}, \end{cases}
\]
hold for all $x \in \Omega_N$.

Proof. First let $\left|\sum_{i=1}^{N} x_i\right| = N$. Then we have
\[
\lim_{\beta\to\infty} \frac{p_\beta(x)}{p_\beta(u)} = \lim_{\beta\to\infty} \exp\left(\frac{\beta}{2N}\left(\left(\sum_{i=1}^{N} x_i\right)^{2} - \left(\sum_{i=1}^{N} u_i\right)^{2}\right)\right) = \lim_{\beta\to\infty} \exp\left(\frac{\beta}{2N}\left(N^2 - N^2\right)\right) = 1.
\]
Now let $\left|\sum_{i=1}^{N} x_i\right| < N$. Then
\[
\lim_{\beta\to\infty} \frac{p_\beta(x)}{p_\beta(u)} = \lim_{\beta\to\infty} \exp\left(\frac{\beta}{2N}\left(\left(\sum_{i=1}^{N} x_i\right)^{2} - \left(\sum_{i=1}^{N} u_i\right)^{2}\right)\right) = 0
\]
because $\left(\sum_{i=1}^{N} x_i\right)^{2} < N^2 = \left(\sum_{i=1}^{N} u_i\right)^{2}$.

The next statement follows directly from the lemma by noting that $\left|\sum_{i=1}^{N} x_i\right| = N$ is equivalent to $x \in \{-u, u\}$:

Corollary 13. The limits
\[
\lim_{\beta\to\infty} \frac{p_\beta(x)}{\sum_{y\in\Omega_N} p_\beta(y)} = \begin{cases} \frac{1}{2} & \text{if } \left|\sum_{i=1}^{N} x_i\right| = N, \\ 0 & \text{otherwise}, \end{cases}
\]
hold for all $x \in \Omega_N$.

Definition 14. We define the minimum of the range of $|S|$ to be $\kappa := \min_{x\in\Omega_N}\left|\sum_{i=1}^{N} x_i\right|$ and the set
\[
\Upsilon := \left\{x \in \Omega_N \,\middle|\, \left|\sum_{i=1}^{N} x_i\right| = \kappa\right\}.
\]
Remark 15. Note that
\[
\kappa = \begin{cases} 0 & \text{if } N \text{ is even}, \\ 1 & \text{otherwise}, \end{cases}
\]
so that $\kappa^2 = \kappa$ is also the minimum of the range of $S^2$, and the cardinality of $\Upsilon$ is
\[
|\Upsilon| = \begin{cases} \binom{N}{N/2} & \text{if } N \text{ is even}, \\ 2\binom{N}{(N+1)/2} & \text{otherwise}. \end{cases}
\]
Lemma 16. Let $y \in \Upsilon$. Then the limits
\[
\lim_{\beta\to-\infty} \frac{p_\beta(x)}{p_\beta(y)} = \begin{cases} 1 & \text{if } \left|\sum_{i=1}^{N} x_i\right| = \kappa, \\ 0 & \text{otherwise}, \end{cases}
\]
hold for all $x \in \Omega_N$.

Proof. Let $x, y \in \Upsilon$. Then we have
\[
\lim_{\beta\to-\infty} \frac{p_\beta(x)}{p_\beta(y)} = \lim_{\beta\to-\infty} \exp\left(\frac{\beta}{2N}\left(\left(\sum_{i=1}^{N} x_i\right)^{2} - \left(\sum_{i=1}^{N} y_i\right)^{2}\right)\right) = \lim_{\beta\to-\infty} \exp\left(\frac{\beta}{2N}(\kappa-\kappa)\right) = 1.
\]
Now let $x \notin \Upsilon$. Then
\[
\lim_{\beta\to-\infty} \frac{p_\beta(x)}{p_\beta(y)} = \lim_{\beta\to-\infty} \exp\left(\frac{\beta}{2N}\left(\left(\sum_{i=1}^{N} x_i\right)^{2} - \left(\sum_{i=1}^{N} y_i\right)^{2}\right)\right) = 0
\]
because $\left(\sum_{i=1}^{N} x_i\right)^{2} - \left(\sum_{i=1}^{N} y_i\right)^{2} > 0$.

The next corollary follows immediately from the last lemma:

Corollary 17. The limits
\[
\lim_{\beta\to-\infty} \frac{p_\beta(x)}{\sum_{y\in\Omega_N} p_\beta(y)} = \begin{cases} \frac{1}{|\Upsilon|} & \text{if } \left|\sum_{i=1}^{N} x_i\right| = \kappa, \\ 0 & \text{otherwise}, \end{cases}
\]
hold for all $x \in \Omega_N$.

Lemma 18. The following statements hold:

1. $\lim_{\beta\to-\infty} \mathbb{E}_{\beta,N} S^2 = \kappa$.
2. $\lim_{\beta\to\infty} \mathbb{E}_{\beta,N} S^2 = N^2$.

We will write $s(x) := \sum_{i=1}^{N} x_i$ for all $x \in \Omega_N$.

Proof. We show each statement in turn.

1. We start by proving the statement $\lim_{\beta\to-\infty} \mathbb{E}_{\beta,N} S^2 = \kappa$. The moment $\mathbb{E}_{\beta,N} S^2$ is calculated as the sum of $2^N$ summands of the type
\[
Z_{\beta,N}^{-1}\, s(x)^2 \exp\left(\frac{\beta}{2N}\, s(x)^2\right), \quad x \in \Omega_N.
\]
Since the number of summands is fixed over all values of $\beta \in \mathbb{R}$, we have
\[
\lim_{\beta\to-\infty} \mathbb{E}_{\beta,N} S^2 = \sum_{x\in\Omega_N} s(x)^2 \lim_{\beta\to-\infty} \frac{\exp\left(\frac{\beta}{2N}\, s(x)^2\right)}{Z_{\beta,N}} = \sum_{x\in\Omega_N} s(x)^2 \lim_{\beta\to-\infty} \frac{p_\beta(x)}{\sum_{y\in\Omega_N} p_\beta(y)},
\]
where $p_\beta(x)$ is defined as in (9). By Definition 14 of $\kappa$ and $\Upsilon$, we have $s(x)^2 = \kappa^2 = \kappa$ for all $x \in \Upsilon$. According to Corollary 17, we obtain
\[
\sum_{x\in\Omega_N} s(x)^2 \lim_{\beta\to-\infty} \frac{p_\beta(x)}{\sum_{y\in\Omega_N} p_\beta(y)} = \sum_{x\in\Upsilon} s(x)^2 \frac{1}{|\Upsilon|} = \kappa.
\]

2. We next show the second statement, employing Corollary 13, $s(u)^2 = s(-u)^2 = N^2$, and the fact that there are finitely many configurations $x \in \Omega_N$:
\[
\lim_{\beta\to\infty} \mathbb{E}_{\beta,N} S^2 = \lim_{\beta\to\infty} \frac{\sum_{x\in\Omega_N} p_\beta(x)\, s(x)^2}{\sum_{y\in\Omega_N} p_\beta(y)} = \lim_{\beta\to\infty} \frac{p_\beta(u)}{p_\beta(u) + p_\beta(-u)}\, s(u)^2 + \lim_{\beta\to\infty} \frac{p_\beta(-u)}{p_\beta(u) + p_\beta(-u)}\, s(-u)^2 = \frac{1}{2} s(u)^2 + \frac{1}{2} s(-u)^2 = N^2.
\]
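The limits in Lemma 18, together with the monotonicity of $\beta \mapsto \mathbb{E}_{\beta,N} S^2$ established next in Proposition 20, can be observed numerically for a small single group. A sketch, where $N = 6$ and the $\beta$-grid are our choices:

```python
import itertools
import math

def moment_S2(beta, N):
    # E_{beta,N}[S^2] for a single group by enumerating all 2^N states.
    Z = m2 = 0.0
    for x in itertools.product((-1, 1), repeat=N):
        s2 = sum(x) ** 2
        w = math.exp(beta / (2 * N) * s2)
        Z += w
        m2 += s2 * w
    return m2 / Z

N = 6
values = [moment_S2(b / 2, N) for b in range(-40, 41)]  # beta from -20 to 20
# The moment increases from near kappa = 0 (N even) towards N^2 = 36.
```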
As we will use the function $\beta \in \mathbb{R} \mapsto \mathbb{E}_{\beta,N} S^2 \in \mathbb{R}$ in our proofs later on, it is convenient to assign a letter to this function.

Definition 19. For each $N \in \mathbb{N}$, let the function $\vartheta_N: [-\infty,\infty] \to \mathbb{R}$ be defined by $\vartheta_N(\beta) := \mathbb{E}_{\beta,N} S^2$, $\beta \in \mathbb{R}$, $\vartheta_N(-\infty) := \kappa$, and $\vartheta_N(\infty) := N^2$.

Proposition 20. The function $\vartheta_N$ has the following properties:

1. $\vartheta_N$ is strictly increasing and continuous; it is continuously differentiable on $\mathbb{R}$. Its derivative is
\[
\vartheta'_N(\beta) = \frac{1}{2N}\,\mathbb{V}_{\beta,N} S^2, \quad \beta \in \mathbb{R}.
\]
2. We have for all $\beta \in (-\infty, 0)$ and all $N \in \mathbb{N}$: $\kappa < \mathbb{E}_{\beta,N} S^2 < N$.
3. We have for all $\beta \in [0,\infty)$ and all $N \in \mathbb{N}$: $N \le \mathbb{E}_{\beta,N} S^2 < N^2$.

Proof. 1. The continuity on $\mathbb{R}$ is clear. The function $\vartheta_N$ is a sum of differentiable functions of $\beta \in \mathbb{R}$, and we calculate
\[
\frac{\mathrm{d}\,\mathbb{E}_{\beta,N} S^2}{\mathrm{d}\beta} = \frac{\mathrm{d}}{\mathrm{d}\beta} \sum_{x\in\Omega_N} \left(\sum_{i=1}^{N} x_i\right)^{2} Z_{\beta,N}^{-1} \exp\left(\frac{\beta}{2N}\left(\sum_{i=1}^{N} x_i\right)^{2}\right)
\]
\[
= \sum_{x\in\Omega_N} \frac{\frac{1}{2N}\left(\sum_{i=1}^{N} x_i\right)^{4} \exp\left(\frac{\beta}{2N}\left(\sum_{i=1}^{N} x_i\right)^{2}\right) Z_{\beta,N} - \left(\sum_{i=1}^{N} x_i\right)^{2} \exp\left(\frac{\beta}{2N}\left(\sum_{i=1}^{N} x_i\right)^{2}\right) \frac{\mathrm{d}Z_{\beta,N}}{\mathrm{d}\beta}}{Z^2_{\beta,N}}
\]
\[
= \frac{1}{2N}\,\mathbb{E}_{\beta,N} S^4 - \sum_{x\in\Omega_N} \left(\sum_{i=1}^{N} x_i\right)^{2} Z_{\beta,N}^{-1} \exp\left(\frac{\beta}{2N}\left(\sum_{i=1}^{N} x_i\right)^{2}\right) \cdot \sum_{y\in\Omega_N} \frac{1}{2N}\left(\sum_{i=1}^{N} y_i\right)^{2} Z_{\beta,N}^{-1} \exp\left(\frac{\beta}{2N}\left(\sum_{i=1}^{N} y_i\right)^{2}\right)
\]
\[
= \frac{1}{2N}\,\mathbb{E}_{\beta,N} S^4 - \frac{1}{2N}\left(\mathbb{E}_{\beta,N} S^2\right)^{2} = \frac{1}{2N}\,\mathbb{V}_{\beta,N} S^2 > 0,
\]
where we used the derivative of $Z_{\beta,N}$ provided in Lemma 51, and the strict inequality stems from the fact that the random variable $S^2$ is not almost surely constant for any $\beta \in \mathbb{R}$. Thus, $\vartheta_N|_\mathbb{R}$ is continuously differentiable and strictly increasing. The continuity of $\vartheta_N$ follows from the limit statements in Lemma 18. Lemma 18 and the inequalities in points 2 and 3, which we will show now, yield the strict monotonicity of $\vartheta_N$ on its entire domain.

2. For $\beta = 0$, the $X_i$, $i \in \mathbb{N}_N$, are independent Rademacher random variables (see Definition 11) with parameter $t = 0$. We calculate the second moment of $S$:
\[
\mathbb{E}_{0,N} S^2 = \mathbb{E}_{0,N}\left(\sum_{i=1}^{N} X_i\right)^{2} = \mathbb{E}_{0,N}\left(\sum_{i=1}^{N} X_i^2 + \sum_{i\ne j} X_i X_j\right) = \sum_{i=1}^{N} \mathbb{E}_{0,N} X_i^2 + \sum_{i\ne j} \mathbb{E}_{0,N} X_i\, \mathbb{E}_{0,N} X_j = N,
\]
where we used $\mathbb{E}_{0,N} X_i = 0$, $X_i^2 = 1$, and the independence of all $X_i, X_j$ with $i \ne j$ under $\mathbb{P}_{0,N}$. Next, we use $S^2 \ge \kappa$, $\mathbb{E}_{0,N} S^2 = N$, and the strict monotonicity of the function $\vartheta_N|_\mathbb{R}$ to establish that for all $\beta \in (-\infty, 0)$,
\[
\kappa \le \lim_{\beta'\to-\infty} \mathbb{E}_{\beta',N} S^2 < \mathbb{E}_{\beta,N} S^2 < \mathbb{E}_{0,N} S^2 = N.
\]
This proves the second statement.

3. Using the first property and $\lim_{\beta\to\infty} \mathbb{E}_{\beta,N} S^2 = N^2$ from Lemma 18, we have for all $N \in \mathbb{N}$ and all $\beta \ge 0$,
\[
N = \mathbb{E}_{0,N} S^2 \le \mathbb{E}_{\beta,N} S^2 < \lim_{\beta'\to\infty} \mathbb{E}_{\beta',N} S^2 = N^2.
\]

Recall that we convened to write $T = (T)_\lambda$. Proposition 20 assures us that any realisation of the statistic $T$ in the interval $\left[N, N^2\right)$ allows us to identify a unique value $\hat\beta_N \in [0,\infty)$ such that the optimality condition (8) holds. It should be noted that realisations $T(x)$ in $[\kappa, N) \cup \left\{N^2\right\}$ are possible, since the range of $S^2$ is $\left\{\kappa, (\kappa+2)^2, \ldots, N^2\right\}$. As $T$ is defined to be the average of the realisations of $S^2$ over all observations $x^{(t)}$ in the sample $x \in \Omega^n_N$, this implies that the range of $T$ includes $N^2$ and a subset of $[\kappa, N)$. Taking into account that $\mathbb{E}T = \mathbb{E}_{\beta,N} S^2$, we obtain the following realisations of $\hat\beta_N$ depending on the value of $T$:
1. If $T(x) \in (\kappa, N)$, then a negative value of $\hat\beta_N$ satisfies (8), and for $T(x) = \kappa$, (8) holds if $\hat\beta_N = -\infty$.

2. If $T(x) \in \left[N, N^2\right)$, then (8) is satisfied for a unique value $\hat\beta_N \in [0,\infty)$.

3. If $T(x) = N^2$, then (8) holds if $\hat\beta_N = \infty$, but not for any finite value of $\hat\beta_N$.

These observations are the reason we defined $\hat\beta_N$ as a function which takes values in the extended real numbers, including $-\infty$ and $\infty$. This concludes the proof of Proposition 8.

We are interested in the case that the coupling constant $\beta$ lies in $[0,\infty)$, as this models social cohesion. If $\beta > 0$, the edge case $T(x) \in [\kappa, N) \cup \left\{N^2\right\}$ is of little practical importance, due to Proposition 27 below. We will use some concepts and results in the proof of said proposition, which we state before the proposition itself. We will work with rate functions which are defined as Legendre transforms.

Definition 21. Let $f: \mathbb{R} \to \mathbb{R}$ be a convex function. Then the Legendre transform $f^*: \mathbb{R} \to \mathbb{R} \cup \{\infty\}$ of $f$ is defined by
\[
f^*(t) := \sup_{x\in\mathbb{R}} \{xt - f(x)\}, \quad t \in \mathbb{R}.
\]
We will employ two lemmas concerning the convexity of the Legendre transform of convex functions to prove Proposition 27 below.

Lemma 22. Let $f: \mathbb{R} \to \mathbb{R}$ be a convex function. Then the Legendre transform $f^*$ is convex.

Proof. Let $\theta \in [0,1]$ and $t_1, t_2 \in \mathbb{R}$. Then
\[
\theta f^*(t_1) + (1-\theta) f^*(t_2) = \theta \sup_{x\in\mathbb{R}} \{t_1 x - f(x)\} + (1-\theta) \sup_{x\in\mathbb{R}} \{t_2 x - f(x)\} = \sup_{x\in\mathbb{R}} \{\theta t_1 x - \theta f(x)\} + \sup_{x\in\mathbb{R}} \{(1-\theta) t_2 x - (1-\theta) f(x)\}
\]
\[
\ge \sup_{x\in\mathbb{R}} \{(\theta t_1 + (1-\theta) t_2) x - (\theta + (1-\theta)) f(x)\} = f^*(\theta t_1 + (1-\theta) t_2),
\]
where we used first the definition of $f^*$, then $\theta \in [0,1]$, and finally $\sup_{x\in\mathbb{R}} g(x) + \sup_{x\in\mathbb{R}} h(x) \ge \sup_{x\in\mathbb{R}} \{g(x) + h(x)\}$ for any functions $g, h: \mathbb{R} \to \mathbb{R}$.

Lemma 23. Let $f: \mathbb{R} \to \mathbb{R}$ be a convex and differentiable function. Then the Legendre transform $f^*$ is strictly convex on the set $\{t \in \mathbb{R} \mid f^*(t) < \infty\}$.

Proof. We already know from Lemma 22 that $f^*$ is convex. Assume it is not strictly convex on $F := \{t \in \mathbb{R} \mid f^*(t) < \infty\}$. Since $f$ is assumed to be differentiable, and hence continuous, this implies the existence of $t_1, t_2 \in F$, $t_1 \ne t_2$, such that
\[
f^*\left(\frac{t_1+t_2}{2}\right) = \frac{1}{2} f^*(t_1) + \frac{1}{2} f^*(t_2). \tag{10}
\]
See [9, p. 12] for a reference that, for continuous real-valued functions defined on open intervals, midpoint convexity is equivalent to convexity. Let $z$ be a maximiser of the set $\left\{x\frac{t_1+t_2}{2} - f(x) \,\middle|\, x\in\mathbb{R}\right\}$. Then the inequality
\[
f^*\left(\frac{t_1+t_2}{2}\right) = z\frac{t_1+t_2}{2} - f(z) \ge x\frac{t_1+t_2}{2} - f(x), \quad x \in \mathbb{R},
\]
holds. By (10),
\[
\frac{1}{2} f^*(t_1) + \frac{1}{2} f^*(t_2) = f^*\left(\frac{t_1+t_2}{2}\right) = z\frac{t_1+t_2}{2} - f(z) = \frac{1}{2}(z t_1 - f(z)) + \frac{1}{2}(z t_2 - f(z)) \le \frac{1}{2} f^*(t_1) + \frac{1}{2} f^*(t_2),
\]
and hence $z$ is also a maximiser of the sets $\{x t_i - f(x) \mid x \in \mathbb{R}\}$, $i = 1, 2$, i.e.
\[
z t_i - f(z) \ge x t_i - f(x), \quad x \in \mathbb{R},\ i = 1, 2.
\]
By rearranging terms, we obtain
\[
f(x) \ge x t_i - z t_i + f(z), \quad x \in \mathbb{R},\ i = 1, 2.
\]
Thus we have two affine functions $x \mapsto g_i(x) := x t_i - z t_i + f(z)$, $i = 1, 2$, with the properties
\[
g_1 \ne g_2, \qquad g_i \le f, \qquad g_i(z) = f(z), \quad i = 1, 2.
\]
Hence each $g_i$ is tangent to $f$ at $z$. However, due to the assumed differentiability of $f$ and $g_1 \ne g_2$, this is impossible, yielding a contradiction. Therefore, (10) cannot hold.

Definition 24. Let $P$ be a probability measure on $\mathbb{R}$. Then let $\Lambda_P(t) := \ln \int_\mathbb{R} \exp(tx)\, P(\mathrm{d}x)$ for all $t \in \mathbb{R}$ such that the expression is finite. We call $\Lambda_P$ the cumulant generating function of $P$ and the Legendre transform $\Lambda^*_P$ its entropy function. Let $Y$ be a real random variable with distribution $P$.
We will then say that $\Lambda_Y := \Lambda_P$ and $\Lambda^*_Y := \Lambda^*_P$ are the cumulant generating and entropy function of $Y$, respectively.

As a last ingredient we will need in the proof of Proposition 27, we prove some properties of $\Lambda_{S^2}$ and $\Lambda^*_{S^2}$.

Definition 25. Let $Y$ be a random variable. The value
\[
\operatorname{ess\,inf} Y := \sup\{a \in \mathbb{R} \mid \mathbb{P}\{Y < a\} = 0\}
\]
is called the essential infimum of $Y$. We convene that $\operatorname{ess\,inf} Y := -\infty$ if the set of essential lower bounds for $Y$ on the right-hand side of the display above is empty. The value
\[
\operatorname{ess\,sup} Y := \inf\{a \in \mathbb{R} \mid \mathbb{P}\{Y > a\} = 0\}
\]
is called the essential supremum of $Y$. We convene that $\operatorname{ess\,sup} Y := \infty$ if the set on the right-hand side above is empty.

Lemma 26. Let $Y$ be a bounded random variable which is not almost surely constant. The cumulant generating function $\Lambda_Y$ of $Y$ and the entropy function $\Lambda^*_Y$ of $Y$ have the following properties:

1. $\Lambda_Y$ is convex and differentiable.
2. $\Lambda^*_Y$ is finite on the interval $(\operatorname{ess\,inf} Y, \operatorname{ess\,sup} Y)$ and infinite on $[\operatorname{ess\,inf} Y, \operatorname{ess\,sup} Y]^c$.
3. $\Lambda^*_Y$ is strictly convex on $(\operatorname{ess\,inf} Y, \operatorname{ess\,sup} Y)$.
4. $\Lambda^*_Y$ is strictly decreasing on the interval $(\operatorname{ess\,inf} Y, \mathbb{E}Y)$ and strictly increasing on $(\mathbb{E}Y, \operatorname{ess\,sup} Y)$.
5. $\Lambda^*_Y$ has a unique global minimum at $\mathbb{E}Y$ with $\Lambda^*_Y(\mathbb{E}Y) = 0$.

Proof. Since $Y$ is bounded, $\Lambda_Y$ is well-defined and finite for all $t \in \mathbb{R}$. We now show that $\Lambda_Y$ is convex and differentiable. Let $\theta \in [0,1]$ and $t_1, t_2 \in \mathbb{R}$. Then
\[
\Lambda_Y(\theta t_1 + (1-\theta) t_2) = \ln \mathbb{E}\exp((\theta t_1 + (1-\theta) t_2) Y) = \ln \mathbb{E}\left[\exp(t_1 Y)^{\theta} \exp(t_2 Y)^{1-\theta}\right] \le \ln\left[\left(\mathbb{E}\exp(t_1 Y)\right)^{\theta}\left(\mathbb{E}\exp(t_2 Y)\right)^{1-\theta}\right] = \theta \Lambda_Y(t_1) + (1-\theta) \Lambda_Y(t_2),
\]
where we used Hölder's inequality. Note that the function $(t, Y) \mapsto g(t, Y) := \exp(tY)$ has the following properties:

1. $g(t, \cdot)$ is an integrable function with respect to the push-forward measure $\mathbb{P} \circ Y^{-1}$ for all $t \in \mathbb{R}$.
2. The partial derivative $\frac{\partial g(t,Y)}{\partial t}$ exists $\mathbb{P} \circ Y^{-1}$-almost surely for all $t \in \mathbb{R}$.
3. $\left|\frac{\partial g(t,Y)}{\partial t}\right| = |Y \exp(tY)|$ is $\mathbb{P} \circ Y^{-1}$-integrable for all $t \in \mathbb{R}$.

By Leibniz's integral rule, $\Lambda_Y = \ln \int g(\cdot, Y)\, \mathrm{d}\mathbb{P} \circ Y^{-1}$ is differentiable with
\[
\Lambda'_Y(t) = \frac{\mathbb{E}(Y \exp(tY))}{\mathbb{E}\exp(tY)}.
\]
Since $\Lambda_Y$ is convex and differentiable, $\Lambda^*_Y$ is strictly convex by Lemma 23. This shows statements 1 and 3.

Let $y \in (\operatorname{ess\,inf} Y, \operatorname{ess\,sup} Y)$. We show that $\Lambda^*_Y(y) < \infty$ holds. We have $\mathbb{P}(Y < y) > 0$ and $\mathbb{P}(Y > y) > 0$. Let for all $t \in \mathbb{R}$
\[
f(t) := yt - \Lambda_Y(t) \qquad \text{and} \qquad g(t) := \exp f(t) = \frac{\exp(yt)}{\mathbb{E}\exp(tY)}.
\]
We write
\[
\mathbb{E}\exp(tY) = \mathbb{E}\exp(tY)\,\mathbb{1}\{Y \le y\} + \mathbb{E}\exp(tY)\,\mathbb{1}\{y < Y \le \operatorname{ess\,sup} Y\} + \mathbb{E}\exp(tY)\,\mathbb{1}\{\operatorname{ess\,sup} Y < Y\}.
\]
By Definition 25, we have $\mathbb{P}\{\operatorname{ess\,sup} Y < Y\} = 0$. Therefore, by dividing numerator and denominator by $\exp(yt)$, we obtain
\[
g(t) = \frac{1}{\mathbb{E}\exp(t(Y-y))\,\mathbb{1}\{Y \le y\} + \mathbb{E}\exp(t(Y-y))\,\mathbb{1}\{y < Y \le \operatorname{ess\,sup} Y\}}.
\]
We see that $\lim_{t\to\infty} \mathbb{E}\exp(t(Y-y))\,\mathbb{1}\{Y \le y\} = 0$ and $\lim_{t\to\infty} \mathbb{E}\exp(t(Y-y))\,\mathbb{1}\{y < Y \le \operatorname{ess\,sup} Y\} = \infty$ due to $\mathbb{P}(Y > y) > 0$. It follows that $\lim_{t\to\infty} g(t) = 0$ and $\lim_{t\to\infty} f(t) = -\infty$. Next we note that $\lim_{t\to-\infty} \mathbb{E}\exp(t(Y-y))\,\mathbb{1}\{Y \le y\} = \infty$ due to $\mathbb{P}(Y < y) > 0$, and $\lim_{t\to-\infty} \mathbb{E}\exp(t(Y-y))\,\mathbb{1}\{y < Y \le \operatorname{ess\,sup} Y\} = 0$. Therefore, $\lim_{t\to-\infty} g(t) = 0$ and $\lim_{t\to-\infty} f(t) = -\infty$. Hence, the continuous function $f$ reaches its maximum at some point $t_0 \in \mathbb{R}$, and this implies
\[
\Lambda^*_Y(y) = \sup_{t\in\mathbb{R}} \{yt - \Lambda_Y(t)\} = \max_{t\in\mathbb{R}} f(t) = f(t_0) < \infty.
\]
This shows that $\Lambda^*_Y$ is finite on the interval $(\operatorname{ess\,inf} Y, \operatorname{ess\,sup} Y)$.

Now let $y \in [\operatorname{ess\,inf} Y, \operatorname{ess\,sup} Y]^c$. Assume first $y > \operatorname{ess\,sup} Y$.
As $\mathbb{P}\{\operatorname{ess\,sup} Y < Y\} = 0$, we can write $\mathbb{E}\exp(tY) = \mathbb{E}\exp(tY)\,\mathbb{1}\{Y \le \operatorname{ess\,sup} Y\}$ and
\[
g(t) = \frac{1}{\mathbb{E}\exp(t(Y-y))\,\mathbb{1}\{Y \le \operatorname{ess\,sup} Y\}}.
\]
Then $\lim_{t\to\infty} \mathbb{E}\exp(t(Y-y))\,\mathbb{1}\{Y \le \operatorname{ess\,sup} Y\} = 0$ and $\lim_{t\to-\infty} \mathbb{E}\exp(t(Y-y))\,\mathbb{1}\{Y \le \operatorname{ess\,sup} Y\} = \infty$. Therefore, $\lim_{t\to\infty} g(t) = \infty$ and $\lim_{t\to-\infty} g(t) = 0$. Hence,
\[
\Lambda^*_Y(y) = \sup_{t\in\mathbb{R}} \{yt - \Lambda_Y(t)\} = \lim_{t\to\infty} f(t) = \infty.
\]
Analogously, one can show $\Lambda^*_Y(y) = \sup_{t\in\mathbb{R}} \{yt - \Lambda_Y(t)\} = \lim_{t\to-\infty} f(t) = \infty$ for all $y < \operatorname{ess\,inf} Y$. This concludes the proof of statement 2.

Next we show that $\Lambda^*_Y(\mathbb{E}Y) = 0$ and $\Lambda^*_Y(x) > 0$ for all $x \ne \mathbb{E}Y$. Jensen's inequality yields
\[
\Lambda_Y(t) = \ln \mathbb{E}\exp(tY) \ge \mathbb{E}(\ln \exp(tY)) = t\,\mathbb{E}Y. \tag{11}
\]
It follows directly from $\Lambda_Y(0) = 0$ and Definition 21 that $\Lambda^*_Y(\mathbb{E}Y) = 0$, and the second part of statement 5 is proved.

We rearrange terms in (11) to obtain for all $t < 0$ and $x \ge \mathbb{E}Y$
\[
xt - \Lambda_Y(t) \le t\,\mathbb{E}Y - \Lambda_Y(t) \le 0.
\]
Since $\Lambda_Y(0) = 0$, $\Lambda^*_Y(x) \ge 0$ for all $x \in \mathbb{R}$ is a consequence of Definition 21. This and the last display yield
\[
\Lambda^*_Y(x) = \sup_{t\ge 0} \{xt - \Lambda_Y(t)\}
\]
for all $x \ge \mathbb{E}Y$. As the function $x \mapsto xt - \Lambda_Y(t)$ for given $t \ge 0$ is increasing, we have shown that $\Lambda^*_Y$ is increasing on the interval $(\mathbb{E}Y, \operatorname{ess\,sup} Y)$. Together with the strict convexity of $\Lambda^*_Y$, this implies that $\Lambda^*_Y$ is strictly increasing on the interval $(\mathbb{E}Y, \operatorname{ess\,sup} Y)$. Analogously, one can show
\[
\Lambda^*_Y(x) = \sup_{t\le 0} \{xt - \Lambda_Y(t)\} \tag{12}
\]
for all $x \le \mathbb{E}Y$ and that $\Lambda^*_Y$ is strictly decreasing on the interval $(\operatorname{ess\,inf} Y, \mathbb{E}Y)$, which completes the proof of statement 4 and the first part of statement 5.

As an application of Lemma 26, we obtain the exponential convergence to 0 of the probability of the set of atypical realisations $T \notin \left[N, N^2\right)$.

Proposition 27. For any value of the coupling constant $\beta > 0$, the constant $\delta := \inf\left\{\Lambda^*_{S^2}(t) \,\middle|\, t \notin \left[N, N^2\right)\right\}$ is strictly positive, and
\[
\mathbb{P}\left\{T \notin \left[N, N^2\right)\right\} \le 2\exp(-\delta n)
\]
holds for all $n \in \mathbb{N}$.

Proof. The random variable $S^2$ is bounded and not almost surely constant, so Lemma 26 applies to $\Lambda^*_{S^2}$. Set
\[
\delta := \inf\left\{\Lambda^*_{S^2}(t) \,\middle|\, t \notin \left[N, N^2\right)\right\}. \tag{13}
\]
Due to $N < \mathbb{E}_{\beta,N} S^2 < N^2$ by Proposition 20, and the strict monotonicity of $\Lambda^*_{S^2}$ on each of the intervals $\left(\kappa, \mathbb{E}_{\beta,N} S^2\right)$ and $\left(\mathbb{E}_{\beta,N} S^2, N^2\right)$ by Lemma 26, we conclude that $\Lambda^*_{S^2}(N) > 0$ and $\Lambda^*_{S^2}\left(N^2\right) > 0$, and hence
\[
\delta = \min\left\{\Lambda^*_{S^2}(N), \Lambda^*_{S^2}\left(N^2\right)\right\} > 0
\]
holds. We write
\[
\mathbb{P}\left\{T \notin \left[N, N^2\right)\right\} = \mathbb{P}\{T \in (-\infty, N)\} + \mathbb{P}\left\{T \in \left[N^2, \infty\right)\right\}.
\]
An application of Markov's inequality yields for all $x \le 0$
\[
\mathbb{P}\{T \in (-\infty, N)\} = \mathbb{P}\{T - N < 0\} \le \mathbb{P}\{\exp(nx(T-N)) \ge 1\} \le \mathbb{E}\exp(nx(T-N))
\]
\[
= \exp(-nxN) \prod_{s=1}^{n} \mathbb{E}\exp\left(x\left(\sum_{i=1}^{N} X^{(s)}_i\right)^{2}\right) = \exp(-nxN)\left(\mathbb{E}\exp\left(xS^2\right)\right)^{n} = \exp(-n(xN - \Lambda_{S^2}(x))).
\]
As this holds for all $x \le 0$, we use $N < \mathbb{E}_{\beta,N} S^2$ and (12) to arrive at
\[
\mathbb{P}\{T \in (-\infty, N)\} \le \exp\left(-n\Lambda^*_{S^2}(N)\right). \tag{14}
\]
Similarly, we calculate the upper bound
\[
\mathbb{P}\left\{T \in \left[N^2, \infty\right)\right\} \le \exp\left(-n\Lambda^*_{S^2}\left(N^2\right)\right). \tag{15}
\]
Combining (14) and (15) yields the claim, considering the definition of $\delta$ in (13).

Remark 28. We define the constant $\bar\delta$ from Proposition 9 by setting
\[
\bar\delta := \min_{\lambda\in\mathbb{N}_M} \delta_\lambda, \tag{16}
\]
with each of the $\delta_\lambda$ being of the form (13), with the entropy function in the definition being $\Lambda^*_{S^2_\lambda}$.

Proof of Proposition 9. The equivalences
\[
T \in (-\infty, N) \iff \hat\beta_N \in [-\infty, 0), \qquad T = N^2 \iff \hat\beta_N = \infty
\]
hold by Proposition 20.
Proposition 9 now follows from Proposition 27.

5 Proof of Theorem 10

Notation 29. Let $(Y_n)_{n\in\mathbb{N}}$ be a sequence of random variables and $Y$ a random variable. We will write $Y_n \overset{p}{\longrightarrow} Y$ as $n\to\infty$ for the statement '$Y_n$ converges in probability to $Y$,' i.e.
\[
\mathbb{P}\{|Y_n - Y| > \varepsilon\} \longrightarrow 0 \quad \text{as } n\to\infty
\]
holds for all $\varepsilon > 0$. Let $P_n$ be the distribution of $Y_n$, $n \in \mathbb{N}$, and $P$ the distribution of $Y$. We will write $Y_n \overset{d}{\longrightarrow} Y$ or $Y_n \overset{d}{\longrightarrow} P$ as $n\to\infty$ for the statement '$Y_n$ converges in distribution to $Y$,' i.e.
\[
\int f\, \mathrm{d}P_n \longrightarrow \int f\, \mathrm{d}P \quad \text{as } n\to\infty
\]
for all continuous and bounded functions $f: \mathbb{R} \to \mathbb{R}$. We will refer to a normal distribution with mean $\eta \in \mathbb{R}$ and variance $\sigma^2 > 0$ as $\mathcal{N}\left(\eta, \sigma^2\right)$. Let for any random variable $X$ and any probability measure $P$ the expression $X \sim P$ stand for '$X$ is distributed according to $P$.'

In the following, we will state some auxiliary results we will use in the proof of Theorem 10. Recall that we use the symbol $\kappa \in \{0,1\}$ for the minimum value the random variable $S^2$ can assume. Recall Definition 19 of the function $\vartheta_N$.

Lemma 30. The function $\vartheta_N$ from Definition 19 has an inverse function $\vartheta^{-1}_N: \left[\kappa, N^2\right] \to [-\infty,\infty]$ which is strictly increasing and continuously differentiable on $\left(\kappa, N^2\right)$.

Proof. The statements follow from Proposition 20 and the inverse function theorem. In particular, the continuous differentiability of $\vartheta^{-1}_N$ follows from the continuous differentiability of $\vartheta_N$ together with $\vartheta'_N(x) > 0$, $x \in \mathbb{R}$: $\vartheta^{-1}_N$ is differentiable and
\[
\left(\vartheta^{-1}_N\right)'(y) = \frac{1}{\vartheta'_N\left(\vartheta^{-1}_N(y)\right)}, \quad y \in \left(\kappa, N^2\right).
\]

Remark 31. With the previous lemma, we can express the estimator $\hat\beta_N: \Omega^n_N \to [-\infty,\infty]$ as $\hat\beta_N(x) = \left(\vartheta^{-1}_N \circ T\right)(x)$, $x \in \Omega^n_N$.

Notation 32. Let $(F, \mathcal{F})$ be a measurable space. We will write, for any set $A \in \mathcal{F}$, the indicator function of $A$ as
\[
\mathbb{1}_A: F \to \mathbb{R}, \qquad \mathbb{1}_A(x) := \begin{cases} 1 & \text{if } x \in A, \\ 0 & \text{if } x \in F \setminus A. \end{cases}
\]
Next we present an auxiliary lemma about the convergence in distribution of sequences of random variables.

Lemma 33. Let $(Y_n)_{n\in\mathbb{N}}$ be a sequence of random variables and $(M_n)_{n\in\mathbb{N}}$ a sequence of positive numbers such that $|Y_n| \le M_n$, $n \in \mathbb{N}$, is satisfied. Let $\nu$ be a probability measure on $\mathbb{R}$, and assume the convergence $Y_n \overset{d}{\longrightarrow} \nu$ as $n\to\infty$. Finally, let $(B_n)_{n\in\mathbb{N}}$ be a sequence of measurable sets which satisfies $\mathbb{P}\{Y_n \in B_n\} = o\left(\frac{1}{M_n}\right)$. Then we have for all $z \in \mathbb{R}$,
\[
\mathbb{1}\{Y_n \in B_n^c\}\, Y_n + z\,\mathbb{1}\{Y_n \in B_n\} \overset{d}{\longrightarrow} \nu \quad \text{as } n\to\infty.
\]
Proof. Set $A_n := \operatorname{Range}(Y_n) \setminus B_n$ and
\[
W_n := \mathbb{1}\{Y_n \in A_n\}\, Y_n + z\,\mathbb{1}\{Y_n \in B_n\}, \qquad U_n := \mathbb{1}\{Y_n \in B_n\}\, Y_n - z\,\mathbb{1}\{Y_n \in B_n\}
\]
for all $n \in \mathbb{N}$. We have
\[
Y_n = W_n + U_n, \tag{17}
\]
and hence, if we show $U_n \overset{p}{\longrightarrow} 0$, then $W_n \overset{d}{\longrightarrow} \nu$ follows from $Y_n \overset{d}{\longrightarrow} \nu$, (17), and Theorem 52. Let $\varepsilon > 0$. We calculate
\[
\mathbb{P}\left\{\left|\mathbb{1}\{Y_n \in B_n\}\, Y_n\right| \ge \varepsilon\right\} \le \frac{1}{\varepsilon}\int_{\{Y_n \in B_n\}} |Y_n|\, \mathrm{d}\mathbb{P} \le \frac{1}{\varepsilon} M_n\, \mathbb{P}\{Y_n \in B_n\} \longrightarrow 0 \quad \text{as } n\to\infty,
\]
where the convergence to 0 follows from the assumption $\mathbb{P}\{Y_n \in B_n\} = o\left(\frac{1}{M_n}\right)$. Finally,
\[
\mathbb{P}\left\{|z|\,\mathbb{1}\{Y_n \in B_n\} \ge \varepsilon\right\} \le \frac{|z|}{\varepsilon}\,\mathbb{P}\{Y_n \in B_n\} \longrightarrow 0 \quad \text{as } n\to\infty,
\]
and thus $U_n \overset{p}{\longrightarrow} 0$ holds.
The next auxiliary results relate to large deviations principles and contraction principles (see Definition 55 and Theorem 57).

Lemma 34. Let $\mathcal{X}$ be a metric space and $I: \mathcal{X} \to [0,\infty]$ a good rate function. Then, for any non-empty closed set $K \subset \mathcal{X}$, there is a point $x_K \in K$ such that $I(x_K) = \inf_{x\in K} I(x)$.

Proof. If $\inf_{x\in K} I(x) = \infty$, then set $x_K$ equal to an arbitrary point in $K$. So assume $\inf_{x\in K} I(x) < \infty$. Let $(x_n)_{n\in\mathbb{N}}$ be a sequence in $K$ with $\lim_{n\to\infty} I(x_n) = \inf_{x\in K} I(x)$. Then there is a constant $n_0 \in \mathbb{N}$ such that for all $n \ge n_0$,
\[
I(x_n) \le \inf_{x\in K} I(x) + 1 =: \alpha < \infty.
\]
We have $x_n \in \{x \in \mathcal{X} \mid I(x) \le \alpha\}$, $n \ge n_0$. As $I$ is a good rate function, the level set $\{x \in \mathcal{X} \mid I(x) \le \alpha\}$ is compact, and therefore $(x_n)_{n\ge n_0}$ has a subsequence $(x_{n_k})_{k\in\mathbb{N}}$ that converges to some $x_0 \in \{x \in \mathcal{X} \mid I(x) \le \alpha\}$. Since $x_{n_k} \in K$ for all $k \in \mathbb{N}$, and $K$ is closed, $x_0$ must belong to $K$. Next we note that
\[
\inf_{x\in K} I(x) = \lim_{n\to\infty} I(x_n) = \lim_{k\to\infty} I(x_{n_k}) = \liminf_{k\to\infty} I(x_{n_k}) \ge I(x_0),
\]
where the inequality is due to $x_{n_k} \to x_0$ as $k\to\infty$ and the lower semi-continuity of $I$. As $x_0 \in K$, we also have $I(x_0) \ge \inf_{x\in K} I(x)$. Thus $I(x_0) = \inf_{x\in K} I(x)$ holds.

Lemma 35. Let $\mathcal{X}$ and $\mathcal{Y}$ be metric spaces, $I: \mathcal{X} \to [0,\infty]$ a good rate function, and $f: \mathcal{X} \to \mathcal{Y}$ a continuous function. We define $J: \mathcal{Y} \to [0,\infty]$ by
\[
J(y) := \inf\{I(x) \mid x \in \mathcal{X}, f(x) = y\}, \quad y \in \mathcal{Y}.
\]
Let $M_I \subset \mathcal{X}$ be the set of minima of $I$ and $M_J \subset \mathcal{Y}$ the set of minima of $J$. Then $f(M_I) = M_J$ holds. In particular, if $f$ is injective and $I$ has a unique minimum at $x_0$, then $J$ has a unique minimum at $f(x_0)$.

Proof. Let $y_0 \in f(M_I)$ and $x_0 \in M_I$ be such that $y_0 = f(x_0)$. By definition of $J$, we have
\[
J(y_0) = \inf\{I(x) \mid x \in \mathcal{X}, f(x) = y_0\} \le I(x_0)
\]
because of $f(x_0) = y_0$. On the other hand, we have $I(x) \ge I(x_0)$ for all $x \in \mathcal{X}$, as $x_0 \in M_I$, and therefore
\[
J(y_0) = \inf\{I(x) \mid x \in \mathcal{X}, f(x) = y_0\} \ge I(x_0).
\]
So we have established $J(y_0) = I(x_0)$. Let $y \in \mathcal{Y}$. We have
\[
J(y) = \inf\{I(x) \mid x \in \mathcal{X}, f(x) = y\} \ge I(x_0) = J(y_0),
\]
and thus $y_0 \in M_J$.

Now let $y_0 \in M_J$. If $y_0 \notin f(\mathcal{X})$, then $J(y_0) = \inf\{I(x) \mid x \in \mathcal{X}, f(x) = y_0\} = \inf \emptyset = \infty$. Since $I$ is a rate function, there is some $x_1 \in \mathcal{X}$ such that $I(x_1) < \infty$, and $J(f(x_1)) \le I(x_1) < \infty = J(y_0)$, contradicting the assumption $y_0 \in M_J$. Hence, $y_0 \in f(\mathcal{X})$ must hold, and $f^{-1}(\{y_0\})$ is a non-empty closed subset of $\mathcal{X}$ because $f$ is continuous. We apply the previous lemma to obtain a point $x_0 \in f^{-1}(\{y_0\})$ with the property
\[
I(x_0) = \inf\left\{I(x) \,\middle|\, x \in f^{-1}(\{y_0\})\right\} = J(y_0) \le J(y), \quad y \in \mathcal{Y}.
\]
Therefore, $I(x_0) \le I(x)$ for $x \in f^{-1}(\{y\})$, $y \in \mathcal{Y}$. Using the identity $\mathcal{X} = \bigcup_{y\in f(\mathcal{X})} f^{-1}(\{y\})$, we obtain $I(x_0) \le I(x)$, $x \in \mathcal{X}$, and thus $x_0 \in M_I$ and $y_0 = f(x_0) \in f(M_I)$ follow.

Now we are ready to begin the proof proper of Theorem 10. Recall that $N$ is the number of voters, $n$ is the number of observations in the sample, and $\Lambda^*_{S^2}$ is the entropy function of $S^2$; see Definition 55 for large deviations principles. Also recall Lemma 30, which states that the function $\vartheta^{-1}_N$ exists and is continuous and strictly increasing. In preparation for statement 3 of Theorem 10, we define the good rate function $J: [-\infty,\infty] \to [0,\infty]$ by
\[
J(y) := \inf\left\{\Lambda^*_{S^2}(x) \,\middle|\, x \in \mathbb{R}, \vartheta^{-1}_N(x) = y\right\}, \quad y \in [-\infty,\infty]. \tag{18}
\]
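Both the rate function $J$ in (18) and the constant $\delta$ of Proposition 27 are built from the entropy function $\Lambda^*_{S^2}$, which for a small single group can be approximated by enumerating the cumulant generating function and running a grid search for the supremum in the Legendre transform. A sketch, where $\beta = 1$, $N = 6$, and the grid are illustrative choices of ours:

```python
import itertools
import math

def cgf_S2(t, beta, N):
    # Lambda_{S^2}(t) = ln E_{beta,N} exp(t S^2), by enumeration (small N).
    Z = num = 0.0
    for x in itertools.product((-1, 1), repeat=N):
        s2 = sum(x) ** 2
        w = math.exp(beta / (2 * N) * s2)
        Z += w
        num += math.exp(t * s2) * w
    return math.log(num / Z)

def entropy_S2(y, beta, N, grid):
    # Lambda*_{S^2}(y) = sup_t { t*y - Lambda_{S^2}(t) }, approximated on a grid.
    return max(t * y - cgf_S2(t, beta, N) for t in grid)

beta, N = 1.0, 6
grid = [i / 100 for i in range(-300, 301)]
delta = min(entropy_S2(N, beta, N, grid), entropy_S2(N * N, beta, N, grid))
# Proposition 27: delta = min(Lambda*(N), Lambda*(N^2)) > 0 whenever beta > 0.
```

The grid search only ever underestimates the supremum, so a strictly positive result is a conservative certificate that $\delta > 0$ for this parameter choice.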
Remark 36. We define the multivariate good rate function $J: [-\infty,\infty]^M \to [0,\infty]$ from the statement of Theorem 10 by setting
\[
J(x) := \sum_{\lambda=1}^{M} J_\lambda(x_\lambda), \quad x \in [-\infty,\infty]^M, \tag{19}
\]
with each of the $J_\lambda$ being of the form (18).

Proof of Theorem 10. We prove each statement in turn.

1. Recall that $\left(x^{(1)}, \ldots, x^{(n)}\right) \in \Omega^n_N$ refers to the sample of voting configurations we observe: $x^{(t)}_i \in \{-1,1\}$, $t \in \mathbb{N}_n$, $i \in \mathbb{N}_N$, is the vote of individual $i$ in the $t$-th vote. The weak law of large numbers
\[
\frac{1}{n}\sum_{t=1}^{n}\left(\sum_{i=1}^{N} X^{(t)}_i\right)^{2} \overset{p}{\longrightarrow} \mathbb{E}_{\beta,N} S^2 \quad \text{as } n\to\infty \tag{20}
\]
holds because $\left(\sum_{i=1}^{N} X^{(t)}_i\right)^{2}$ is a bounded random variable, and thus the second moment exists. By Definition 7, we have
\[
\mathbb{E}_{\hat\beta_N,N} S^2 = \frac{1}{n}\sum_{t=1}^{n}\left(\sum_{i=1}^{N} x^{(t)}_i\right)^{2}.
\]
This and (20) yield $\mathbb{E}_{\hat\beta_N,N} S^2 \overset{p}{\longrightarrow} \mathbb{E}_{\beta,N} S^2$. As noted above, the inverse function $\vartheta^{-1}_N$, which maps an expectation $\mathbb{E}_{\beta,N} S^2$ to the value $\beta$, is continuous. $\hat\beta_N \overset{p}{\longrightarrow} \beta$ follows by Theorem 53.

2. By Definition 3, the statistic $T$ has the form specified in Proposition 56 with the function $f: \Omega_N \to \mathbb{R}$ given by
\[
f(x_1, \ldots, x_N) := \left(\sum_{i=1}^{N} x_i\right)^{2}
\]
for all $(x_1, \ldots, x_N) \in \Omega_N$. Hence, we have $\mu = \mathbb{E}_{\beta,N} S^2 = \mathbb{E}T$ and $\sigma^2 = \mathbb{V}_{\beta,N} S^2$. By Proposition 56,
\[
\sqrt{n}\left(T - \mathbb{E}_{\beta,N} S^2\right) \overset{d}{\longrightarrow} \mathcal{N}\left(0, \mathbb{V}_{\beta,N} S^2\right) \quad \text{as } n\to\infty
\]
holds. We want to apply Theorem 54 to the transformation $f := \vartheta^{-1}_N$, but face the difficulty that $\vartheta^{-1}_N$ is not differentiable on the entire range of $T$. By assumption, $0 < \beta < \infty$. Therefore, by Proposition 20, we have $\kappa < \mathbb{E}_{\beta,N} S^2 < N^2$. Let $a < b$ be real numbers such that $\kappa < a < \mathbb{E}_{\beta,N} S^2 < b < N^2$. Set $K := (a, b)^c$ and
\[
B_n := \left\{\sqrt{n}\left(T - \mathbb{E}_{\beta,N} S^2\right) \,\middle|\, T \in K\right\}
\]
for all $n \in \mathbb{N}$. We define
\[
W_n := \sqrt{n}\left(T - \mathbb{E}_{\beta,N} S^2\right)\mathbb{1}\{T \in K^c\}, \quad n \in \mathbb{N},
\]
and apply Lemma 33 to the sequence $Y_n := \sqrt{n}\left(T - \mathbb{E}_{\beta,N} S^2\right)$. Note that $B_n$ is finite for all $n \in \mathbb{N}$, so $B := \bigcup_{n=1}^{\infty} B_n$ is countable and hence closed. Set $\nu := \mathcal{N}\left(0, \mathbb{V}_{\beta,N} S^2\right)$ and $M_n := \sqrt{n}$ for each $n \in \mathbb{N}$. The set $K$ is closed, and $T \in K$ if and only if $Y_n \in B_n$. By statement 4 of Proposition 56, we have
\[
\mathbb{P}\{Y_n \in B_n\} \le 2\exp\left(-n\inf_{x\in K}\Lambda^*_{S^2}(x)\right), \quad n \in \mathbb{N}.
\]
Therefore, $\mathbb{P}\{Y_n \in B_n\} = o\left(\frac{1}{M_n}\right)$ holds, and we can apply Lemma 33 to conclude $W_n \overset{d}{\longrightarrow} \mathcal{N}\left(0, \mathbb{V}_{\beta,N} S^2\right)$.

We apply Theorem 54. Set $D := [a, b]$ and $f: D \to \mathbb{R}$, $f(y) := \vartheta^{-1}_N(y)$, $y \in D$. $f$ is continuously differentiable, and the derivative $f'$ is strictly positive on the compact set $D$. Therefore,
\[
\sqrt{n}\left(\hat\beta_N - \beta\right)\mathbb{1}\{T \in K^c\} = \sqrt{n}\left(\hat\beta_N - \beta\right)\mathbb{1}\{T \in K^c\} + \sqrt{n}(\beta - \beta)\,\mathbb{1}\{T \in K\} = \sqrt{n}\left(\hat\beta_N\,\mathbb{1}\{T \in K^c\} + \beta\,\mathbb{1}\{T \in K\} - \beta\right)
\]
\[
= \sqrt{n}\left(f(T)\,\mathbb{1}\{T \in K^c\} + f\left(\mathbb{E}_{\beta,N} S^2\right)\mathbb{1}\{T \in K\} - f\left(\mathbb{E}_{\beta,N} S^2\right)\right) \overset{d}{\longrightarrow} \mathcal{N}\left(0, (f'(\mu))^2\sigma^2\right)
\]
holds. We apply Lemma 33 once more to conclude that
\[
\sqrt{n}\left(\hat\beta_N - \beta\right) \overset{d}{\longrightarrow} \mathcal{N}\left(0, (f'(\mu))^2\sigma^2\right)
\]
is satisfied. We use Proposition 20, which states that $\vartheta'_N(\beta) = \frac{1}{2N}\mathbb{V}_{\beta,N} S^2$, and Lemma 30 provides the derivative
\[
\left(\vartheta^{-1}_N\right)'\left(\mathbb{E}_{\beta,N} S^2\right) = \frac{1}{\vartheta'_N\left(\vartheta^{-1}_N\left(\mathbb{E}_{\beta,N} S^2\right)\right)} = \frac{1}{\vartheta'_N(\beta)} = \frac{2N}{\mathbb{V}_{\beta,N} S^2}.
\]
Substituting the value of the derivative in the previous display yields the claim concerning the asymptotic normality of the estimator $\hat\beta_N$.

3. By Definition 3, the statistic $T$ has the form specified in Proposition 56 with the function $f: \Omega_N \to \mathbb{R}$ given by
\[
f(x_1, \ldots, x_N) := \left(\sum_{i=1}^{N} x_i\right)^{2}
\]
for all $(x_1, \ldots, x_N) \in \Omega_N$. Hence, we have $\mu = \mathbb{E}_{\beta,N} S^2$, $\sigma^2 = \mathbb{V}_{\beta,N} S^2$, and $\Lambda^*_{S^2}$ for the objects in Proposition 56, and, therefore, $T$ satisfies a large deviations principle with rate $n$ and rate function $\Lambda^*_{S^2}$.
As noted before, the function $\vartheta^{-1}_N$ is continuous, we have $T\left(x^{(1)}, \ldots, x^{(n)}\right) \in \left[\kappa, N^2\right]$ for all $n \in \mathbb{N}$ and all $\left(x^{(1)}, \ldots, x^{(n)}\right) \in \Omega^n_N$, and $\left[\kappa, N^2\right]$ is the domain of $\vartheta^{-1}_N$. By Theorem 57, $\hat\beta_N = \vartheta^{-1}_N \circ T$ satisfies a large deviations principle with rate $n$ and rate function $J$. By Lemma 26, $\Lambda^*_{S^2}$ has a unique minimum at $\mathbb{E}_{\beta,N} S^2$. Since Lemma 30 also says that $\vartheta^{-1}_N$ is strictly increasing, by Lemma 35, $J$ has a unique minimum at $\beta = \vartheta^{-1}_N\left(\mathbb{E}_{\beta,N} S^2\right)$.

Let $K \subset \mathbb{R}$ be a closed set that does not contain $\beta$. We have
\[
\mathbb{P}\left\{\hat\beta_N \in K\right\} = \mathbb{P}\left\{\vartheta_N\left(\hat\beta_N\right) \in \vartheta_N(K)\right\} = \mathbb{P}\left\{\mathbb{E}_{\hat\beta_N,N} S^2 \in \vartheta_N(K)\right\} = \mathbb{P}\{T \in \vartheta_N(K)\},
\]
where the last step uses Definition 7. Now note that $K$ is a closed subset of the compact space $[-\infty,\infty]$ and therefore compact. The continuous function $\vartheta_N$ maps it to the compact set $\vartheta_N(K) \subset \mathbb{R}$. Hence, $\vartheta_N(K)$ is a closed set. Since $K$ does not contain $\beta$ and $\vartheta_N$ is strictly increasing, $\vartheta_N(K)$ does not contain $\mathbb{E}_{\beta,N} S^2$. By Proposition 56, we have $\mathbb{P}\{T \in \vartheta_N(K)\} \le 2\exp(-\delta' n)$ for all $n \in \mathbb{N}$, with $\delta' := \inf_{x\in\vartheta_N(K)} \Lambda^*_{S^2}(x) > 0$. The equality
\[
\delta = \inf_{y\in K} J(y) = \inf_{y\in K}\inf\left\{\Lambda^*_{S^2}(x) \,\middle|\, x \in \mathbb{R}, \vartheta^{-1}_N(x) = y\right\} = \inf\left\{\Lambda^*_{S^2}(x) \,\middle|\, x \in \mathbb{R}, \vartheta^{-1}_N(x) \in K\right\} = \inf\left\{\Lambda^*_{S^2}(x) \,\middle|\, x \in \vartheta_N(K)\right\} = \delta'
\]
yields the final claim.

6 Standard Error of the Statistic $T$ and the Estimator $\hat\beta_N$

In order to calculate moments in the proof of Lemma 40 below, we introduce the concept of profile vectors.

Definition 37. Let $k, n \in \mathbb{N}$ with $k \le n$. We will call all $i \in \mathbb{N}^k_n$ index vectors and set
\[
\Pi := \left\{(r_1, \ldots, r_k) \in \{0, 1, \ldots, k\}^k \,\middle|\, \sum_{\ell=1}^{k} \ell\, r_\ell = k\right\},
\]
and we will refer to $\Pi$ as the set of profile vectors and to the elements $r \in \Pi$ as profile vectors. For any index vector $i \in \mathbb{N}^k_n$, the expression $r := (r_1, \ldots, r_k) := \rho(i)$ is defined as follows: for each $\ell \in \mathbb{N}_k$, $r_\ell$ is the number of indices in $i$ that appear exactly $\ell$ times. We will call $r$ the profile vector of $i$.

Remark 38. It is an elementary fact that for each index vector $i \in \mathbb{N}^k_n$, $\rho(i) \in \Pi$ holds.

Lemma 39. Let $k \in \mathbb{N}$ and $r \in \Pi$. The number of index vectors $i \in \mathbb{N}^k_n$ such that $r = (r_1, \ldots, r_k) = \rho(i)$ is given by
\[
\frac{n!}{r_1! \cdots r_k!\left(n - \sum_{\ell=1}^{k} r_\ell\right)!} \cdot \frac{k!}{(1!)^{r_1} \cdots (k!)^{r_k}}.
\]
Proof. We construct an index vector $i$ with entries in $\mathbb{N}_n$ and profile $r = \rho(i)$ in two steps:

1. We partition $\mathbb{N}_n$ into $k+1$ sets $B_j$, $j \in \mathbb{N}_{k+1}$. For $j \le k$, each set $B_j$ contains the indices which occur exactly $j$ times in $i$, and $B_{k+1} := \mathbb{N}_n \setminus \bigcup_{j=1}^{k} B_j$. Hence, $|B_j| = r_j$ holds for all $j \le k$. There are
\[
\frac{n!}{r_1!\, r_2! \cdots r_k!\left(n - \sum_{\ell=1}^{k} r_\ell\right)!} \tag{21}
\]
ways to realise this partition of $\mathbb{N}_n$.

2. The index vector $i$ is of length $k$. We can think of the elements of $\bigcup_{j=1}^{k} B_j$ as a finite alphabet. Our task is to assemble an ordered $k$-tuple $i$ of elements of $\bigcup_{j=1}^{k} B_j$ in such a fashion that, for each $j \le k$, each of the $r_j$ elements of $B_j$ occurs exactly $j$ times. So there are $r_1$ elements of $\bigcup_{j=1}^{k} B_j$ selected once, $r_2$ elements selected twice, \ldots, $r_k$ elements selected $k$ times. There are
\[
\frac{k!}{\underbrace{1! \cdots 1!}_{r_1}\,\underbrace{2! \cdots 2!}_{r_2}\cdots\underbrace{k! \cdots k!}_{r_k}} = \frac{k!}{(1!)^{r_1} \cdots (k!)^{r_k}}
\]
ways to accomplish this task.
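Lemma 39 lends itself to a brute-force sanity check for small $n$ and $k$: enumerate all $n^k$ index vectors, tally the profiles, and compare against the closed-form count. A sketch (the helper names are ours):

```python
import itertools
import math
from collections import Counter

def profile(i, k):
    # r_l = number of distinct indices that appear exactly l times in i.
    occurrences = Counter(Counter(i).values())
    return tuple(occurrences.get(l, 0) for l in range(1, k + 1))

def count_formula(r, n, k):
    # The count from Lemma 39: first choose which indices receive each
    # multiplicity, then arrange the resulting multiset as an ordered k-tuple.
    m = sum(r)
    ways_partition = math.factorial(n) // (
        math.prod(math.factorial(x) for x in r) * math.factorial(n - m))
    ways_arrange = math.factorial(k) // math.prod(
        math.factorial(l) ** r[l - 1] for l in range(1, k + 1))
    return ways_partition * ways_arrange

n, k = 4, 3
observed = Counter(profile(i, k)
                   for i in itertools.product(range(n), repeat=k))
# e.g. 24 of the 64 index vectors have all three indices distinct,
# i.e. profile (3, 0, 0), matching the formula.
```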
The statistic $T$ from Definition 3 is an unbiased estimator of the expectation $\mathbb{E}_{\beta,N}S^2$. Since each summand composing $T$ takes values in $[\kappa,N^2]$, $T$ is a bounded random variable.

Lemma 40. Let $(Y_n)_{n\in\mathbb{N}}$ be a sequence of i.i.d. random variables with mean $\mathbb{E}Y_n=\mu$, variance $\mathbb{V}Y_n=\sigma^2$, and existing moments $\mathbb{E}|Y_n|^k<\infty$ for all $k,n\in\mathbb{N}$. Let $Z$ be a random variable with distribution $\mathcal{N}(0,\sigma^2)$. Then
$$\mathbb{E}\Big(\frac{1}{\sqrt n}\sum_{i=1}^n (Y_i-\mu)\Big)^k \xrightarrow[n\to\infty]{} \mathbb{E}Z^k$$
holds for all $k\in\mathbb{N}$.

Proof. We will use the concepts from Definition 37 and Lemma 39 to calculate the expectation $\mathbb{E}\big(\frac{1}{\sqrt n}\sum_{i=1}^n(Y_i-\mu)\big)^k$. To simplify the notation, we set $U_i:=Y_i-\mu$ for all $i\in\mathbb{N}$. As the random variables $(U_n)_{n\in\mathbb{N}}$ are i.i.d., we have $\mathbb{E}U_{i_1}\cdots U_{i_k}=\mathbb{E}U_{j_1}\cdots U_{j_k}$ for all index vectors $i,j$ with $\rho(i)=\rho(j)$. Therefore, we can write for any index vector $i$ with profile vector $r=\rho(i)$
$$\mathbb{E}U(r):=\mathbb{E}U_{i_1}\cdots U_{i_k}.$$
Using Lemma 39, we express the expectation $\mathbb{E}\big(\frac{1}{\sqrt n}\sum_{i=1}^n U_i\big)^k$ as
$$\mathbb{E}\Big(\frac{1}{\sqrt n}\sum_{i=1}^n U_i\Big)^k = \frac{1}{n^{k/2}}\sum_{i_1,\dots,i_k=1}^n \mathbb{E}U_{i_1}\cdots U_{i_k} = \frac{1}{n^{k/2}}\sum_{r\in\Pi}\frac{n!}{r_1!\cdots r_k!\,\big(n-\sum_{\ell=1}^k r_\ell\big)!}\;\frac{k!}{1!^{r_1}\cdots k!^{r_k}}\;\mathbb{E}U(r). \qquad (22)$$
As a first step, we note that since $\mathbb{E}U_i=0$ and $\mathbb{E}U_i^\ell U_j^m=\mathbb{E}U_i^\ell\,\mathbb{E}U_j^m$ for all $i\ne j$ and all $\ell,m\in\mathbb{N}$, we have for all $r\in\Pi$ with $r_1\ge 1$,
$$\mathbb{E}U(r)=0. \qquad (23)$$
Now let $r_1=0$ and $r_{\ell^*}>0$ for some $\ell^*>2$. Set $\lfloor x\rfloor:=\max\{m\in\mathbb{Z}\mid m\le x\}$ for all $x\in\mathbb{R}$. Suppose first that $\ell^*=2j^*+1$ for some $j^*\ge 1$. Then
$$\sum_{\ell=1}^k r_\ell = \frac12\Big(\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell+1}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell}\Big) = \frac12\Big(2r_{2j^*+1}+\sum_{1\le\ell\le\lfloor k/2\rfloor,\,\ell\ne j^*} 2r_{2\ell+1}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell}\Big)$$
$$\le \frac12\Big((2j^*+1)r_{2j^*+1}+\sum_{1\le\ell\le\lfloor k/2\rfloor,\,\ell\ne j^*} 2r_{2\ell+1}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell}\Big)-\frac12 r_{2j^*+1}$$
$$\le \frac12\Big((2j^*+1)r_{2j^*+1}+\sum_{1\le\ell\le\lfloor k/2\rfloor,\,\ell\ne j^*} 2r_{2\ell+1}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell}\Big)-\frac12 \le \frac12\Big(\sum_{\ell=1}^{\lfloor k/2\rfloor}(2\ell+1)r_{2\ell+1}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2\ell\,r_{2\ell}\Big)-\frac12 = \frac k2-\frac12.$$
Now suppose that $\ell^*=2j^*$ for some $j^*\ge 2$. Then
$$\sum_{\ell=1}^k r_\ell = \frac12\Big(\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell+1}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell}\Big) = \frac12\Big(2r_{2j^*}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell+1}+\sum_{1\le\ell\le\lfloor k/2\rfloor,\,\ell\ne j^*} 2r_{2\ell}\Big)$$
$$\le \frac12\Big(2j^* r_{2j^*}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell+1}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell}\Big)-r_{2j^*} \le \frac12\Big(2j^* r_{2j^*}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell+1}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2r_{2\ell}\Big)-1$$
$$= \frac12\Big(\sum_{\ell=1}^{\lfloor k/2\rfloor}(2\ell+1)r_{2\ell+1}+\sum_{\ell=1}^{\lfloor k/2\rfloor} 2\ell\,r_{2\ell}\Big)-1 = \frac k2-1.$$
So we have the upper bound $\frac k2-\frac12$ for $\sum_{\ell=1}^k r_\ell$ independently of the parity of $\ell^*$. Hence,
$$\frac{1}{n^{k/2}}\frac{n!}{r_1!\cdots r_k!\,\big(n-\sum_{\ell=1}^k r_\ell\big)!}\frac{k!}{1!^{r_1}\cdots k!^{r_k}}\,\mathbb{E}U(r) \le C\,\frac{1}{n^{k/2}}\frac{n!}{\big(n-\sum_{\ell=1}^k r_\ell\big)!} \le C\,\frac{1}{n^{k/2}}\,n^{\sum_{\ell=1}^k r_\ell} \le \frac{C}{\sqrt n}. \qquad (24)$$
Finally, we set $\Pi_1:=\{r\in\Pi\mid r_\ell=0,\ \ell\ne 2\}$. Note that $\Pi_1=\emptyset$ if $k$ is odd and $\Pi_1=\{(0,\frac k2,0,\dots,0)\}$ if $k$ is even. This means by (23) and (24) that
$$\mathbb{E}\Big(\frac{1}{\sqrt n}\sum_{i=1}^n U_i\Big)^k = \frac{1}{n^{k/2}}\sum_{r\in\Pi}\frac{n!}{r_1!\cdots r_k!\,\big(n-\sum_{\ell=1}^k r_\ell\big)!}\;\frac{k!}{1!^{r_1}\cdots k!^{r_k}}\;\mathbb{E}U(r) \le \frac{C}{\sqrt n}$$
if $k$ is odd. Now let $k$ be even, and hence $\Pi_1=\{(0,\frac k2,0,\dots,0)\}$. Then
$$\mathbb{E}\Big(\frac{1}{\sqrt n}\sum_{i=1}^n U_i\Big)^k \approx \frac{1}{n^{k/2}}\,\frac{n^{\sum_{\ell=1}^k r_\ell}}{\frac k2!}\,\frac{k!}{(2!)^{k/2}}\,\mathbb{E}U_1^2\cdots U_{k/2}^2 = \frac{1}{n^{k/2}}\,\frac{n^{k/2}}{\frac k2!}\,\frac{k!}{2^{k/2}}\,\sigma^k = (k-1)!!\,\sigma^k,$$
where we used the identity $\frac{k!}{(k/2)!\,2^{k/2}}=(k-1)!!$ for even $k$. The last expression above is the moment of order $k$, for $k$ even, of the distribution $\mathcal{N}(0,\sigma^2)$. The odd moments of $\mathcal{N}(0,\sigma^2)$ are 0 and by (23) and (24) for odd $k$ all summands of (22) converge to 0. This concludes the proof.

The previous lemma serves to determine the limiting standard error of the statistic $T$.

Proposition 41. The standard error of the statistic $T$ satisfies
$$\lim_{n\to\infty}\sqrt n\,\sqrt{\mathbb{V}T}=\sqrt{\mathbb{V}_{\beta,N}S^2}.$$

Proof.
This follows directly from Theorem 10 and Lemma 40.

While the statistic $T$ has a finite standard deviation for all population sizes $N$ and all sample sizes $n$, the same is not true for the estimator $\hat\beta_N$, as this estimator assumes the values $\pm\infty$ with positive probability for all $N$ and all $n$. Thus a similar statement holds neither for finite $n$ nor in the limit $n\to\infty$. However, instead of using Lemma 40 to determine the limiting standard error, we can employ statement 2 of Theorem 10 to obtain the limiting probabilities for a certain class of events. Below, $\mathcal{N}(0,\sigma^2)(A)$ stands for the probability of any measurable set $A$ under the $\mathcal{N}(0,\sigma^2)$ distribution.

Proposition 42. Let $A\subset\mathbb{R}$ be a Lebesgue measurable set. Then we have
$$\mathbb{P}\Big\{\hat\beta_N\in\beta+\tfrac{1}{\sqrt n}A\Big\} \xrightarrow[n\to\infty]{} \mathcal{N}\Big(0,\tfrac{4N^2}{\mathbb{V}_{\beta,N}S^2}\Big)(A).$$
In particular, we have for any $\varepsilon>0$,
$$\mathbb{P}\Big\{\big|\hat\beta_N-\beta\big|\ge\tfrac{\varepsilon}{\sqrt n}\Big\} \xrightarrow[n\to\infty]{} \mathcal{N}\Big(0,\tfrac{4N^2}{\mathbb{V}_{\beta,N}S^2}\Big)\big((-\varepsilon,\varepsilon)^c\big).$$

Proof. Theorem 10 states that
$$\sqrt n\,\big(\hat\beta_N-\beta\big) \xrightarrow[n\to\infty]{d} \mathcal{N}\Big(0,\tfrac{4N^2}{\mathbb{V}_{\beta,N}S^2}\Big).$$
By the definition of convergence in distribution and the absolute continuity of the normal distribution, we have for any measurable set $A$,
$$\mathbb{P}\Big\{\hat\beta_N\in\beta+\tfrac{1}{\sqrt n}A\Big\} = \mathbb{P}\Big\{\hat\beta_N-\beta\in\tfrac{1}{\sqrt n}A\Big\} = \mathbb{P}\big\{\sqrt n\,(\hat\beta_N-\beta)\in A\big\} \xrightarrow[n\to\infty]{} \mathcal{N}\Big(0,\tfrac{4N^2}{\mathbb{V}_{\beta,N}S^2}\Big)(A).$$

Even in the absence of a finite standard deviation for the estimator $\hat\beta_N$, the previous proposition gives us an explicit limit for the probability of a deviation of order $\frac{1}{\sqrt n}$ of the estimator $\hat\beta_N$ from the true parameter value $\beta$.

7 Optimal Weights in a Two-Tier Voting System

A two-tier voting system describes a scenario where the population of a state or union of states is divided into $M$ groups (e.g., member states). Each group sends a representative to a common council that makes decisions for the union. These representatives cast their votes ('yes' or 'no') based on the majority opinion within their respective group. Each group $\lambda\in\mathbb{N}_M$ is assumed to be of size $N_\lambda\in\mathbb{N}$. The votes of individual voters are represented by the random variables $X_{\lambda i}$, where $\lambda\in\mathbb{N}_M$ indicates the group and $i\in\mathbb{N}_{N_\lambda}$ the individual within the group. Recall the group voting margins $S_\lambda$ defined in (3), and set
$$\bar S := \sum_{\lambda=1}^M S_\lambda$$
to be the overall voting margin. Given the group voting margin $S_\lambda$, we define the council vote of each group.

Definition 43. The council vote of each group $\lambda\in\mathbb{N}_M$ is
$$\chi_\lambda := \begin{cases} 1 & \text{if } S_\lambda>0,\\ -1 & \text{otherwise.}\end{cases}$$

Since the groups may vary in size, it is natural to assign different voting weights $w_\lambda$ to each representative, reflecting the relative sizes of their groups. In some situations, the groups can be formed in such a manner that they have similar sizes. For example, when electing a parliament such as the U.S. House of Representatives, the country is typically divided into districts with roughly equal populations, each receiving one seat. This approach is practical within a single country but becomes less feasible in other contexts.
It would be impractical to reassemble countries or member states (like those in the United Nations or European Union) into districts of equal size or equally populated groups due to sovereignty concerns. Thus, the issue of how to assign voting weights to groups of different sizes cannot be avoided. The problem of determining these optimal weights involves minimising the democracy deficit (see Definition 44 below), i.e. the deviation between a council vote and an idealised referendum across the entire population. This concept was first explored for binary voting by Felsenthal and Machover [12], and later analysed in various contexts by other authors (e.g., [17, 34, 18, 28, 31, 22, 21]). Informally, imposing the democracy deficit as a criterion, we endeavour to assign the voting weights in the council in such a fashion that, on average, the council vote is as close as possible to a hypothetical referendum. Other approaches to optimal weights are based on welfare considerations or the goal of equalising the influence of all voters within the overall population.
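The democracy deficit criterion just described can be explored numerically in a toy setting (our sketch, not part of the paper): take two small groups of independent, unbiased voters, corresponding to coupling constants $\beta_\lambda=0$, enumerate all voting configurations exactly, and minimise the deficit $\mathbb{E}\big[\bar S-\sum_\lambda w_\lambda\chi_\lambda\big]^2$, which is quadratic in the weights, by solving the normal equations.

```python
from itertools import product
import numpy as np

# Two groups of independent fair voters (beta = 0), sizes N1 = N2 = 3;
# in this special case all 2^(N1+N2) configurations are equally likely.
N1, N2 = 3, 3
configs = list(product([-1, 1], repeat=N1 + N2))

S = np.array([[sum(x[:N1]), sum(x[N1:])] for x in configs], dtype=float)
chi = np.where(S > 0, 1.0, -1.0)     # council votes (Definition 43)
Sbar = S.sum(axis=1)                 # overall voting margin

# The deficit E[(Sbar - w.chi)^2] is quadratic in w, so the minimiser
# solves the normal equations E[chi chi^T] w = E[Sbar chi].
A = chi.T @ chi / len(configs)
b = chi.T @ Sbar / len(configs)
w_opt = np.linalg.solve(A, b)

# The minimiser coincides with E|S_lambda|, in line with Proposition 45 below.
print(w_opt, np.abs(S).mean(axis=0))   # both equal [1.5, 1.5] here
```

With interacting voters ($\beta_\lambda>0$) the same enumeration applies after reweighting each configuration by its Curie-Weiss probability.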
The seminal work by Penrose [30], which introduced the square root law as a rule for assigning voting weights that equalises each voter's probability of being decisive in a two-tier system under the assumption of independent voting, exemplifies this approach. Further contributions to understanding optimal voting weights from welfare and influence perspectives can be found in [2, 24, 25].

In order to define the democracy deficit, we need a voting model that describes voting behaviour across the overall population. We will assume that the overall population behaves according to a Curie-Weiss model $\mathbb{P}_{\beta,N}$ (see Definition 1) for some fixed $\beta=(\beta_1,\dots,\beta_M)\ge 0$ and $N=(N_1,\dots,N_M)\in\mathbb{N}^M$. With a voting model in place, we can proceed to define the democracy deficit, which will serve as a criterion for the determination of the optimal voting weights each group receives in the council.

Definition 44. The democracy deficit for a set of voting weights $w_1,\dots,w_M\in\mathbb{R}$ is given by
$$\mathbb{E}_{\beta,N}\Big[\bar S-\sum_{\lambda=1}^M w_\lambda\chi_\lambda\Big]^2.$$
We will call any vector $(w_1,\dots,w_M)\in\mathbb{R}^M$ of weights which minimises the democracy deficit 'optimal'.

Proposition 45. For all $\beta=(\beta_1,\dots,\beta_M)\ge 0$, the optimal weights are given by
$$w_\lambda=\mathbb{E}_{\beta_\lambda,N_\lambda}|S_\lambda|,\qquad \lambda\in\mathbb{N}_M.$$

Proof. This result was first proved in [17]. For the reader's convenience, we present a short proof here. We find the minimum of the expression defining the democracy deficit by differentiating $\mathbb{E}_{\beta,N}\big[\bar S-\sum_{\lambda=1}^M w_\lambda\chi_\lambda\big]^2$ with respect to each $w_\lambda$, $\lambda\in\mathbb{N}_M$, and equating each derivative to 0:
$$\mathbb{E}_{\beta,N}\Big[\Big(\bar S-\sum_{\nu=1}^M w_\nu\chi_\nu\Big)\chi_\lambda\Big]=0,$$
which is equivalent to
$$\mathbb{E}_{\beta,N}\bar S\chi_\lambda=\sum_{\nu=1}^M w_\nu\,\mathbb{E}_{\beta,N}\chi_\nu\chi_\lambda.$$
Due to Definitions 1 and 43, we have
$$\mathbb{E}_{\beta,N}\bar S\chi_\lambda=\sum_{\nu=1}^M \mathbb{E}_{\beta,N}S_\nu\chi_\lambda=\mathbb{E}_{\beta_\lambda,N_\lambda}S_\lambda\chi_\lambda=\mathbb{E}_{\beta_\lambda,N_\lambda}|S_\lambda|$$
and
$$\sum_{\nu=1}^M w_\nu\,\mathbb{E}_{\beta,N}\chi_\nu\chi_\lambda=w_\lambda\,\mathbb{E}_{\beta_\lambda,N_\lambda}\chi_\lambda^2=w_\lambda.$$
The optimality condition is therefore $w_\lambda=\mathbb{E}_{\beta_\lambda,N_\lambda}|S_\lambda|$, $\lambda\in\mathbb{N}_M$.

We will now show that the expectation $\mathbb{E}_{\beta_\lambda,N_\lambda}|S|$ of the voting margin of a single group is strictly increasing in the respective coupling constant $\beta_\lambda$ (cf. Proposition 20). The following is an auxiliary lemma for this purpose.

Lemma 46.
Let $X$ and $Y$ be random variables that take values in a countable set $E\subset\mathbb{R}$. Let $c\in\mathbb{R}$ and define the sets $A:=E\cap(-\infty,c)$, $B:=E\cap[c,\infty)$. Assume that $A,B\ne\emptyset$ and
$$\mathbb{P}\{X=x\}>\mathbb{P}\{Y=x\},\ x\in A,\qquad \mathbb{P}\{X=x\}<\mathbb{P}\{Y=x\},\ x\in B.$$
Then $\mathbb{E}Y>\mathbb{E}X$ holds.

Proof. We define the constant
$$t:=\mathbb{P}\{X\in A\}-\mathbb{P}\{Y\in A\}. \qquad (25)$$
The constant $t$ can also be expressed as
$$t=1-\mathbb{P}\{X\in B\}-\big(1-\mathbb{P}\{Y\in B\}\big)=\mathbb{P}\{Y\in B\}-\mathbb{P}\{X\in B\}. \qquad (26)$$
We write
$$\mathbb{E}Y-\mathbb{E}X=\sum_{x\in E}x\big(\mathbb{P}\{Y=x\}-\mathbb{P}\{X=x\}\big)=\sum_{x\in A}x\big(\mathbb{P}\{Y=x\}-\mathbb{P}\{X=x\}\big)+\sum_{x\in B}x\big(\mathbb{P}\{Y=x\}-\mathbb{P}\{X=x\}\big). \qquad (27)$$
For all $x\in A$, $x<c$ and $\mathbb{P}\{Y=x\}-\mathbb{P}\{X=x\}<0$ hold. Thus, for the first summand in (27), we have the lower bound
$$\sum_{x\in A}x\big(\mathbb{P}\{Y=x\}-\mathbb{P}\{X=x\}\big)>c\sum_{x\in A}\big(\mathbb{P}\{Y=x\}-\mathbb{P}\{X=x\}\big)=c\big(\mathbb{P}\{Y\in A\}-\mathbb{P}\{X\in A\}\big)=-ct, \qquad (28)$$
where in the last step we used (25). For all $x\in B$, $x\ge c$ and $\mathbb{P}\{Y=x\}-\mathbb{P}\{X=x\}>0$ hold. Thus, for the second summand in (27), we have the bound
$$\sum_{x\in B}x\big(\mathbb{P}\{Y=x\}-\mathbb{P}\{X=x\}\big)\ge c\sum_{x\in B}\big(\mathbb{P}\{Y=x\}-\mathbb{P}\{X=x\}\big)=c\big(\mathbb{P}\{Y\in B\}-\mathbb{P}\{X\in B\}\big)=ct, \qquad (29)$$
where in the last step we used (26). Combining the lower bounds in (28) and (29) yields the claim due to (27).

Proposition 47. For fixed $N\in\mathbb{N}$, the function $\beta\in\mathbb{R}\mapsto\mathbb{E}_{\beta,N}|S|\in\mathbb{R}$ is strictly increasing and continuous.

Proof. The continuity is immediate. We show that $\beta\mapsto\mathbb{E}_{\beta,N}|S|$ is strictly increasing. For this purpose, we calculate the derivative of $p_\beta(x)/Z_{\beta,N}$ for any $x\in\Omega_N$ employing Lemma 51. Recall that $p_\beta(x)$ equals $\exp\big(\frac{\beta}{2N}\big(\sum_{i=1}^N x_i\big)^2\big)$ and $s(x)$ stands for $\sum_{i=1}^N x_i$ for all $x\in\Omega_N$.
$$\frac{d}{d\beta}\Big(\frac{p_\beta(x)}{Z_{\beta,N}}\Big)=\frac{\frac{dp_\beta(x)}{d\beta}Z_{\beta,N}-p_\beta(x)\frac{dZ_{\beta,N}}{d\beta}}{Z_{\beta,N}^2}=\frac{\frac{s(x)^2}{2N}p_\beta(x)Z_{\beta,N}-p_\beta(x)\frac{Z_{\beta,N}}{2N}\mathbb{E}_{\beta,N}S^2}{Z_{\beta,N}^2}=\frac{p_\beta(x)}{2NZ_{\beta,N}}\big(s(x)^2-\mathbb{E}_{\beta,N}S^2\big). \qquad (30)$$
We note that the derivative is positive if and only if
$$s(x)^2>\mathbb{E}_{\beta,N}S^2. \qquad (31)$$
We define the set
$$G:=\big\{\ell^2\mid \ell\in\mathbb{N},\ \ell\equiv N\ (\mathrm{mod}\ 2),\ m<\ell\le N\big\},$$
and let $G=\{g_1,\dots,g_{|G|}\}$ be an enumeration of $G$ in ascending order. By Proposition 20, $\beta\in\mathbb{R}\mapsto\mathbb{E}_{\beta,N}S^2\in(\kappa,N^2)$ is continuous and strictly increasing. Hence, the function is injective. By Lemma 18, it is surjective. We define the constants $b_i\in\mathbb{R}\cup\{\infty\}$, $i\in\mathbb{N}_{|G|}$, by the condition $\mathbb{E}_{b_i,N}S^2=g_i$. Note that $(b_i)_{i\in\mathbb{N}_{|G|}}$ is a strictly increasing (since $(g_i)_{i\in\mathbb{N}_{|G|}}$ is strictly increasing) finite sequence, and $b_{|G|}=\infty$ due to $g_{|G|}=N^2$ and $\lim_{\beta\to\infty}\mathbb{E}_{\beta,N}S^2=N^2$ by Lemma 18. Using these constants, we define the sets
$$B_1:=(-\infty,b_1),\qquad B_i:=[b_{i-1},b_i),\quad i\in\mathbb{N}_{|G|}\setminus\{1\}.$$
Due to the bijectivity of $\beta\in\mathbb{R}\mapsto\mathbb{E}_{\beta,N}S^2\in(\kappa,N^2)$, the sets $B_1,\dots,B_{|G|}$ form a partition of $\mathbb{R}$. With these preparations, we first show that for all $\beta_1<\beta_2$ with $\beta_1,\beta_2\in B_i$ for some $i\in\mathbb{N}_{|G|}$, $\mathbb{E}_{\beta_1,N}|S|<\mathbb{E}_{\beta_2,N}|S|$ holds. We define the following subsets of $\Omega_N$:
$$A_r:=\big\{x\in\Omega_N\ \big|\ s(x)^2\le\mathbb{E}_{\beta_r,N}S^2\big\},\quad r=1,2,$$
and write $A^c$ for the complement of any subset $A$ of $\Omega_N$. By the definition of the set $G$ and the sets $B_j$, $j\in\mathbb{N}_{|G|}$, the equality $A_1=A_2$ is satisfied. We set $A:=A_1$. We use the derivatives of $p_\beta(x)/Z_{\beta,N}$ in (30) and the positivity condition (31) for said derivatives. Since $\beta\mapsto\mathbb{E}_{\beta,N}S^2$ is strictly increasing, the sign of the derivative of $p_\beta(\cdot)/Z_{\beta,N}$, for any $\beta\in(\beta_1,\beta_2)$ and any $x\in\Omega_N$, is given by
$$\frac{p_\beta(x)}{2NZ_{\beta,N}}\big(s(x)^2-\mathbb{E}_{\beta,N}S^2\big)<0,\ x\in A,\qquad \frac{p_\beta(x)}{2NZ_{\beta,N}}\big(s(x)^2-\mathbb{E}_{\beta,N}S^2\big)>0,\ x\in A^c.$$
These signs and the fundamental theorem of calculus (note that the derivatives of $p_\beta(x)/Z_{\beta,N}$ are continuous) yield for all $x\in A$
$$\frac{p_{\beta_2}(x)}{Z_{\beta_2,N}}=\frac{p_{\beta_1}(x)}{Z_{\beta_1,N}}+\int_{\beta_1}^{\beta_2}\frac{d}{d\beta}\Big(\frac{p_\beta(x)}{Z_{\beta,N}}\Big)\,d\beta<\frac{p_{\beta_1}(x)}{Z_{\beta_1,N}}$$
and for all $x\in A^c$
$$\frac{p_{\beta_2}(x)}{Z_{\beta_2,N}}=\frac{p_{\beta_1}(x)}{Z_{\beta_1,N}}+\int_{\beta_1}^{\beta_2}\frac{d}{d\beta}\Big(\frac{p_\beta(x)}{Z_{\beta,N}}\Big)\,d\beta>\frac{p_{\beta_1}(x)}{Z_{\beta_1,N}}.$$
We apply Lemma 46 with $X$ being $|S|$ under the distribution $\mathbb{P}_{\beta_1,N}\circ|S|^{-1}$ and $Y$ being $|S|$ under the distribution $\mathbb{P}_{\beta_2,N}\circ|S|^{-1}$. Then the statement $\mathbb{E}_{\beta_1,N}|S|<\mathbb{E}_{\beta_2,N}|S|$ follows by Lemma 46.

Now assume that $\beta_1,\beta_2\in\mathbb{R}$, $\beta_1<\beta_2$, and $\beta_1\in B_{j_1}$, $\beta_2\in B_{j_2}$ for some $j_1<j_2\in\mathbb{N}_{|G|}$. Since $\beta\mapsto\mathbb{E}_{\beta,N}|S|$ is continuous and strictly increasing on each $B_i$, $i\in\mathbb{N}_{|G|}$, we obtain for all $\beta\in B_i$
$$\mathbb{E}_{b_i,N}|S|=\lim_{\beta'\nearrow b_i}\mathbb{E}_{\beta',N}|S|>\mathbb{E}_{\beta,N}|S| \qquad\text{and}\qquad \mathbb{E}_{b_{i+1},N}|S|=g_{i+1}>g_i=\mathbb{E}_{b_i,N}|S|.$$
Taking into account $\beta_1\in B_{j_1}=[b_{j_1-1},b_{j_1})$, $\beta_2\in[b_{j_2-1},b_{j_2})$, and $j_2-1\ge j_1$, we therefore have
$$\mathbb{E}_{\beta_2,N}|S|\ge\mathbb{E}_{b_{j_2-1},N}|S|\ge\mathbb{E}_{b_{j_1},N}|S|>\mathbb{E}_{\beta_1,N}|S|,$$
and we have proved that the function $\beta\mapsto\mathbb{E}_{\beta,N}|S|$ is strictly increasing.

By Propositions 45 and 47, we have

Corollary 48. Let $\lambda,\mu\in\mathbb{N}_M$, $N_\lambda=N_\mu$, and $0\le\beta_\lambda<\beta_\mu$. Then the optimal weights $w_\lambda$ and $w_\mu$ satisfy the inequality $w_\lambda<w_\mu$.

This implies that among groups of equal sizes, those groups with stronger interactions between voters, that is, groups with a higher parameter $\beta$, will receive a larger weight in the council than groups with voters who interact more loosely with each other, i.e. groups that have a lower parameter $\beta$. Put differently, under the CWM as a voting model, there are two avenues for groups to obtain a higher voting weight under the democracy deficit criterion: have a larger population or be more cohesive in the group votes.

Proposition 45 yields an estimator for the optimal weights of each group by substituting any estimator for the parameter $\beta_\lambda$. For example, an estimator for the optimal weights based on the estimator $\hat\beta_N$ from Definition 7 can be defined as follows. Let for all $\lambda\in\mathbb{N}_M$, $\hat\beta_{N_\lambda}(\lambda):=\hat\beta_{N_\lambda}$. Then $\hat\beta_{N_\lambda}(\lambda):\Omega_{N_\lambda}^n\to[-\infty,\infty]$ is an estimator for $\beta_\lambda$ calculated for a sample of observations of voting in group $\lambda$.

Definition 49. Let $\lambda\in\mathbb{N}_M$, $\beta_\lambda\ge 0$. The estimator $\hat w_\lambda:\Omega_{N_\lambda}^n\to[0,\infty)$ for the optimal weight of group $\lambda$ based on $\hat\beta_{N_\lambda}(\lambda)$ is defined by
$$\hat w_\lambda=\mathbb{E}_{\hat\beta_{N_\lambda}(\lambda),N_\lambda}|S_\lambda|.$$
The estimator $\hat w_\lambda$ inherits many of the properties of $\hat\beta_N$. This is the subject of the next theorem. Recall the good rate function $J$ defined in (19), and the single-group rate function $J_\lambda:[-\infty,\infty]\to[0,\infty]$ in (18) for group $\lambda$. We define the good rate function $H_\lambda:\mathbb{R}\to[0,\infty]$ by
$$H_\lambda(y):=\inf\big\{J_\lambda(\beta)\mid\beta\in[-\infty,\infty],\ \mathbb{E}_{\beta,N_\lambda}|S_\lambda|=y\big\},\quad y\in\mathbb{R}.$$

Theorem 50. Let $\lambda\in\mathbb{N}_M$ and $N_\lambda\in\mathbb{N}$. Let $w_\lambda$ be the optimal weight from Proposition 45. Then the following statements hold:

1. $\hat w_\lambda\xrightarrow[n\to\infty]{p}w_\lambda$.

2. $\sqrt n\,(\hat w_\lambda-w_\lambda)\xrightarrow[n\to\infty]{d}\mathcal{N}(0,\upsilon^2)$ with
$$\upsilon^2=\frac{\big(\mathbb{E}_{\beta,N_\lambda}|S_\lambda|^3-\mathbb{E}_{\beta,N_\lambda}|S_\lambda|\,\mathbb{E}_{\beta,N_\lambda}S_\lambda^2\big)^2}{\mathbb{V}_{\beta,N}S^2}.$$

3. $\hat w_\lambda$ satisfies a large deviations principle with rate $n$ and rate function $H_\lambda$. $H_\lambda$ has a unique minimum at $w_\lambda$, and we have for each closed set $K\subset\mathbb{R}$ that does not contain $w_\lambda$, $\inf_{y\in K}H_\lambda(y)>0$ and
$$\mathbb{P}\{\hat w_N\in K\}\le 2\exp\Big(-n\inf_{y\in K}H_\lambda(y)\Big).$$

Proof. This theorem is proved in close analogy to Theorem 10.

Appendix

We present a number of concepts and auxiliary results we use. Recall the expression $Z_{\beta,N}$ from (2).

Lemma 51. The first derivative of $Z_{\beta,N}$ with respect to $\beta_\lambda$, $\lambda\in\mathbb{N}_M$, is
$$\frac{dZ_{\beta,N}}{d\beta_\lambda}=\frac{Z_{\beta,N}}{2N_\lambda}\,\mathbb{E}_{\beta,N}S_\lambda^2.$$

Proof. The derivative of the partition function $Z_{\beta,N}$ with respect to $\beta_\lambda$ is
$$\frac{dZ_{\beta,N}}{d\beta_\lambda}=\sum_{x\in\Omega_{N_1+\cdots+N_M}}\frac{d}{d\beta_\lambda}\exp\Bigg(\frac12\sum_{\mu=1}^M\frac{\beta_\mu}{N_\mu}\Big(\sum_{i=1}^{N_\mu}x_{\mu i}\Big)^2\Bigg)=\sum_{x\in\Omega_{N_1+\cdots+N_M}}\frac{1}{2N_\lambda}\Big(\sum_{i=1}^{N_\lambda}x_{\lambda i}\Big)^2\exp\Bigg(\frac12\sum_{\mu=1}^M\frac{\beta_\mu}{N_\mu}\Big(\sum_{i=1}^{N_\mu}x_{\mu i}\Big)^2\Bigg)=\frac{Z_{\beta,N}}{2N_\lambda}\,\mathbb{E}_{\beta,N}S_\lambda^2.$$

Theorem 52 (Slutsky). Let $(Y_n)_{n\in\mathbb{N}}$ and $(Z_n)_{n\in\mathbb{N}}$ be sequences of random variables, $Y$ a random variable, and $a\in\mathbb{R}$ a constant such that $Y_n\xrightarrow[n\to\infty]{d}Y$ and $Z_n\xrightarrow[n\to\infty]{p}a$.
Then
$$Y_n+Z_n\xrightarrow[n\to\infty]{d}Y+a \qquad\text{and}\qquad Y_nZ_n\xrightarrow[n\to\infty]{d}aY.$$

Proof. These statements are Theorems 11.3 and 11.4 in [14].

Theorem 53 (Continuous Mapping). Let $(Y_n)_{n\in\mathbb{N}}$ be a sequence of random variables and $Y$ a random variable, each of them taking values in some subset $A\subset\mathbb{R}$, such that $Y_n\xrightarrow[n\to\infty]{p}Y$, and let $g:A\to\mathbb{R}$ be a continuous function. Then $g(Y_n)\xrightarrow[n\to\infty]{p}g(Y)$.

Proof. See Theorem 2.3 in [32].

Theorem 54 (Delta Method). Let $(Y_n)_{n\in\mathbb{N}}$ be a sequence of random variables such that $\mathbb{E}Y_n=\mu\in\mathbb{R}$ for all $n\in\mathbb{N}$ and $\sqrt n\,(Y_n-\mu)\xrightarrow[n\to\infty]{d}\mathcal{N}(0,\sigma^2)$ for a constant $\sigma>0$. Let $f:D\to\mathbb{R}$ be a continuously differentiable function with domain $D\subset\mathbb{R}$ such that $Y_n\in D$ for all $n\in\mathbb{N}$. Assume $f'(\mu)\ne 0$. Then
$$\sqrt n\,\big(f(Y_n)-f(\mu)\big)\xrightarrow[n\to\infty]{d}\mathcal{N}\big(0,(f'(\mu))^2\sigma^2\big)$$
is satisfied.

Proof. Since this result is not found as frequently in textbooks, we present a proof here for the convenience of the readers. We Taylor expand the function $f$ around the point $\mu$:
$$f(Y_n)=f(\mu)+(Y_n-\mu)f'(\xi_n)$$
for some $\xi_n$ which lies between $\mu$ and $Y_n$. We can rewrite the above as
$$\sqrt n\,\big(f(Y_n)-f(\mu)\big)=\sqrt n\,(Y_n-\mu)f'(\xi_n). \qquad (32)$$
Due to the assumption $\sqrt n\,(Y_n-\mu)\xrightarrow[n\to\infty]{d}\mathcal{N}(0,\sigma^2)$, we have
$$|\xi_n-Y_n|\le|\mu-Y_n|\xrightarrow[n\to\infty]{p}0,$$
and as $f'$ is continuous, Theorem 53 implies $f'(\xi_n)\xrightarrow[n\to\infty]{p}f'(\mu)$. The last display, $\sqrt n\,(Y_n-\mu)\xrightarrow[n\to\infty]{d}\mathcal{N}(0,\sigma^2)$, (32), and Theorem 52 together yield
$$\sqrt n\,\big(f(Y_n)-f(\mu)\big)\xrightarrow[n\to\infty]{d}\mathcal{N}\big(0,(f'(\mu))^2\sigma^2\big).$$

Recall Notation 6 for the expressions $[-\infty,\infty]$ and $[0,\infty]$.

Definition 55. Let $(P_n)_{n\in\mathbb{N}}$ be a sequence of probability measures on a metric space $\mathcal{X}$, let $(a_n)_{n\in\mathbb{N}}$ be a sequence of positive numbers with $a_n\xrightarrow[n\to\infty]{}\infty$, and let $I:\mathcal{X}\to[0,\infty]$ be a function. If $I$ is lower semi-continuous, i.e. its level sets $\{x\in\mathcal{X}\mid I(x)\le\alpha\}$ are closed for each $\alpha\in[0,\infty)$, we call $I$ a rate function. If the level sets are compact in $\mathcal{X}$ for each $\alpha\in[0,\infty)$, we call $I$ a good rate function. If $I$ is a good rate function, and the two conditions

1. $\limsup_{n\to\infty}\frac{1}{a_n}\ln P_n(K)\le-\inf_{x\in K}I(x)$ for each closed set $K\subset\mathcal{X}$,
2. $\liminf_{n\to\infty}\frac{1}{a_n}\ln P_n(G)\ge-\inf_{x\in G}I(x)$ for each open set $G\subset\mathcal{X}$

hold, then we say that the sequence $(P_n)_{n\in\mathbb{N}}$ satisfies a large deviations principle with rate $a_n$ and rate function $I$. If $(Y_n)_{n\in\mathbb{N}}$ is a sequence of random variables taking values in $\mathcal{X}$ such that, for each $n\in\mathbb{N}$, $Y_n$ follows the distribution $P_n$, we will also say that $(Y_n)_{n\in\mathbb{N}}$ satisfies a large deviations principle with rate $a_n$ and rate function $I$.

In our applications of large deviations principles, the metric space $\mathcal{X}$ will be $\mathbb{R}$ or $[-\infty,\infty]$.

We present a standard convergence result concerning a statistic employed in the estimation of the parameter $\beta$. This can be used to then demonstrate convergence to $\beta$ for the estimators presented in this article. Recall Definition 24 of the entropy function of a distribution.

Proposition 56. Let $n,N\in\mathbb{N}$ and let $R:\Omega_N^n\to\mathbb{R}$ be a statistic of the form
$$R\big(x^{(1)},\dots,x^{(n)}\big):=\frac1n\sum_{t=1}^n f\big(x^{(t)}\big),\qquad \big(x^{(1)},\dots,x^{(n)}\big)\in\Omega_N^n,$$
for some function $f:\Omega_N\to\mathbb{R}$. Let $X$ be a random vector on $\Omega_N$ with Curie-Weiss distribution according to Definition 1. Let $\mu:=\mathbb{E}_{\beta,N}f(X)$, $\sigma^2:=\mathbb{V}_{\beta,N}f(X)$, and $\Lambda^*_{f(X)}$ the entropy function of $f(X)$. Then the following statements hold:

1. A law of large numbers holds for the sequence $R\big(x^{(1)},\dots,x^{(n)}\big)$:
$$R\big(x^{(1)},\dots,x^{(n)}\big)\xrightarrow[n\to\infty]{p}\mu.$$
2. A central limit theorem holds for the sequence $\sqrt n\,\big(R\big(x^{(1)},\dots,x^{(n)}\big)-\mu\big)$:
$$\sqrt n\,\big(R\big(x^{(1)},\dots,x^{(n)}\big)-\mu\big)\xrightarrow[n\to\infty]{d}\mathcal{N}(0,\sigma^2).$$
3. A large deviations principle holds for the sequence $R\big(x^{(1)},\dots,x^{(n)}\big)$ with rate $n$ and rate function $I:\mathbb{R}\to[0,\infty)\cup\{\infty\}$, $I(x):=\Lambda^*_{f(X)}(x)$, $x\in\mathbb{R}$.
4. If $I$ has a unique global minimum at $x_0\in\mathbb{R}$, then for any closed set $K\subset\mathbb{R}$ that does not contain $x_0$ we have $\delta:=\inf_{x\in K}I(x)>0$, and $\mathbb{P}\{R\in K\}\le 2\exp(-\delta n)$ holds for all $n\in\mathbb{N}$.

Proof. Since $R$ is defined as a sum of i.i.d. random variables with existing variance, the first three results can be found in many books about probability theory, and we therefore omit their proof. The last statement is somewhat less commonly
found, hence we prove it here. The random variable $f(X)$ is bounded and not almost surely constant, so Lemma 26 applies to $\Lambda^*_{f(X)}$. Set
$$\delta:=\inf\big\{\Lambda^*_{f(X)}(x)\mid x\in K\big\}. \qquad (33)$$
As $\mathbb{E}_{\beta,N}f(X)=\mu\in K^c$ and $K^c$ is open, there is some $\eta>0$ such that the open ball $B_\eta(\mu)$ with radius $\eta$ and centre $\mu$ is a subset of $K^c$. We choose $\varepsilon_r:=\sup\{\eta>0\mid(\mu,\mu+\eta)\subset K^c\}$ and $\varepsilon_l:=\sup\{\eta>0\mid(\mu-\eta,\mu)\subset K^c\}$. Then $G:=(\mu-\varepsilon_l,\mu+\varepsilon_r)\subset K^c$. By statements 4 and 5 of Lemma 26,
$$\delta=\inf\big\{\Lambda^*_{f(X)}(x)\mid x\in G^c\big\}=\min\big\{\Lambda^*_{f(X)}(\mu-\varepsilon_l),\Lambda^*_{f(X)}(\mu+\varepsilon_r)\big\}>0.$$
We write
$$\mathbb{P}\{R\in K\}\le\mathbb{P}\{R\in G^c\}=\mathbb{P}\{R\le\mu-\varepsilon_l\}+\mathbb{P}\{R\ge\mu+\varepsilon_r\}.$$
We use Markov's inequality to obtain for all $t\le 0$
$$\mathbb{P}\{R\le\mu-\varepsilon_l\}\le\mathbb{P}\big\{\exp\big(nt(R-(\mu-\varepsilon_l))\big)\ge 1\big\}\le\mathbb{E}\exp\big(nt(R-(\mu-\varepsilon_l))\big)=\exp\big(-nt(\mu-\varepsilon_l)\big)\prod_{r=1}^n\mathbb{E}\exp\big(tf\big(X^{(r)}\big)\big)$$
$$=\exp\big(-nt(\mu-\varepsilon_l)\big)\,\big[\mathbb{E}\exp(tf(X))\big]^n=\exp\big(-nt(\mu-\varepsilon_l)\big)\exp\big(n\Lambda_{f(X)}(t)\big)=\exp\Big(-n\big((\mu-\varepsilon_l)t-\Lambda_{f(X)}(t)\big)\Big).$$
As this holds for all $t\le 0$ and we have $\mu-\varepsilon_l<\mu=\mathbb{E}_{\beta,N}f(X)$, we arrive at
$$\mathbb{P}\{R\le\mu-\varepsilon_l\}\le\exp\big(-n\Lambda^*_{f(X)}(\mu-\varepsilon_l)\big). \qquad (34)$$
Similarly, we calculate the upper bound
$$\mathbb{P}\{R\ge\mu+\varepsilon_r\}\le\exp\big(-n\Lambda^*_{f(X)}(\mu+\varepsilon_r)\big). \qquad (35)$$
Combining (34) and (35) yields
$$\mathbb{P}\{R\in K\}\le\exp\big(-n\Lambda^*_{f(X)}(\mu-\varepsilon_l)\big)+\exp\big(-n\Lambda^*_{f(X)}(\mu+\varepsilon_r)\big)\le 2\exp(-\delta n).$$

Theorem 57 (Contraction Principle). Let $\mathcal{X}$ and $\mathcal{Y}$ be metric spaces. Let $(P_n)_{n\in\mathbb{N}}$ be a sequence of probability measures on $\mathcal{X}$, and let $f:D\to\mathcal{Y}$ be a continuous function with its domain $D\subset\mathcal{X}$ containing the support of $P_n$ for each $n\in\mathbb{N}$. Let $(a_n)_{n\in\mathbb{N}}$ be a sequence of positive numbers with $a_n\xrightarrow[n\to\infty]{}\infty$, and $I:\mathcal{X}\to[0,\infty]$ a good rate function. We define $J:\mathcal{Y}\to[0,\infty]$ by
$$J(y):=\inf\{I(x)\mid x\in D,\ f(x)=y\},\quad y\in\mathcal{Y}.$$
Then $J$ is a good rate function. If $(P_n)_{n\in\mathbb{N}}$ satisfies a large deviations principle with rate $a_n$ and rate function $I$, then the sequence of push-forward measures $(P_n\circ f^{-1})_{n\in\mathbb{N}}$ on $\mathcal{Y}$ satisfies a large deviations principle with rate $a_n$ and rate function $J$.

Proof. This result can be found, e.g., in [8, Theorem 4.2.1].

Acknowledgements

M. B. is a Fellow and G. T. is a Candidate of the Sistema Nacional de Investigadoras e Investigadores. G. T. was supported by a Secihti (formerly Conahcyt) postdoctoral fellowship.

References

[1] Miguel Ballesteros, Ramsés H. Mena, José Luis Pérez, and Gabor Toth. Detection of an arbitrary number of communities in a block spin Ising model. 2023. arXiv:2311.18112.
[2] Claus Beisbart and Luc Bovens. Welfarist evaluations of decision rules for boards of representatives. Soc. Choice Welf., 29(4):581–608, 2007.
[3] Quentin Berthet, Philippe Rigollet, and Piyush Srivastava. Exact recovery in the Ising blockmodel. Ann. Statist., 47(4):1805–1834, 2019.
[4] William A. Brock and Steven N. Durlauf. Discrete choice with social interactions. Rev. Econ. Stud., 68(2):235–260, 2001.
[5] Sourav Chatterjee. Estimation in spin glasses: A first step. Ann. Stat., 35(5):1931–1946, 2007.
[6] Wei-Kuo Chen, Arnab Sen, and Qiang Wu. Joint parameter estimations for spin glasses. 2024. arXiv:2406.10760.
[7] Pierluigi Contucci and Stefano Ghirlanda. Modelling society with statistical mechanics: An application to cultural contact and immigration. Qual. Quant., 41:569–578, 2007.
[8] Amir Dembo and Ofer Zeitouni. Large Deviations Techniques and Applications. Springer New York, 2nd edition, 1998.
[9] William F. Donoghue. Distributions and Fourier Transforms. Academic Press, 1969.
[10] Richard Ellis. Entropy, Large Deviations, and Statistical Mechanics. Wiley, 1985.
[11] Dan
S. Felsenthal and Moshé Machover. The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes. Edward Elgar Publishing, 1998.
[12] Dan S. Felsenthal and Moshé Machover. Minimizing the mean majority deficit: The second square-root rule. Math. Soc. Sci., 37(1):25–37, 1999.
[13] Ignacio Gallo, Adriano Barra, and Pierluigi Contucci. Parameter evaluation of a simple mean-field model of social interaction. Math. Models Methods Appl. Sci., 19(supp01):1427–1439, 2009.
[14] Allan Gut. Probability: A Graduate Course. Springer New York, 2nd edition, 2013.
[15] Ernst Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik, 31:253–258, 1925.
[16] Pieter Willem Kasteleijn. Statistical Problems in Ferromagnetism, Antiferromagnetism and Adsorption. 1956.
[17] Werner Kirsch. On Penrose's square-root law and beyond. Homo Oecon., 24(3/4):357–380, 2007.
[18] Werner Kirsch and Jessica Langner. The fate of the square root law for correlated voting. In Voting Power and Procedures, pages 147–158. Springer Cham, 2014.
[19] Werner Kirsch and Gabor Toth. Two groups in a Curie-Weiss model. Math. Phys. Anal. Geom., 23(17), 2020.
[20] Werner Kirsch and Gabor Toth. Two groups in a Curie-Weiss model with heterogeneous coupling. J. Theor. Probab., 33:2001–2026, 2020.
[21] Werner Kirsch and Gabor Toth. Optimal weights in a two-tier voting system with mean-field voters. arXiv:2111.08636, November 2021.
[22] Werner Kirsch and Gabor Toth. Collective bias models in two-tier voting systems and the democracy deficit. Math. Soc. Sci., 119:118–137, 2022.
[23] Werner Kirsch and Gabor Toth. Limit theorems for multi-group Curie-Weiss models via the method of moments. Math. Phys. Anal. Geom., 25(24), 2022.
[24] Yukio Koriyama, Antonin Macé, Rafael Treibich, and Jean-François Laslier. Optimal apportionment. J. Political Econ., 121(3):584–608, 2013.
[25] Sascha Kurz, Nicola Maaser, and Stefan Napel. On the democratic weights of nations. J. Political Econ., 125(5):1599–1634, 2017.
[26] Matthias Löwe and Kristina Schubert. Exact recovery in block spin Ising models at the critical line. Electron. J. Stat., 14:1796–1815, 2020.
[27] Matthias Löwe, Kristina Schubert, and Franck Vermet. Multi-group binary choice with social interaction and a random communication structure – a random graph approach. Physica A, 556, 2020.
[28] Nicola Maaser and Stefan Napel. A note on the direct democracy deficit in two-tier voting. Math. Soc. Sci., 63(2):174–180, 2012.
[29] Lars Onsager. Crystal statistics I. A two-dimensional model with an order-disorder transition. Phys. Rev., 65(3-4):117–149, 1944.
[30] Lionel Penrose. The elementary statistics of majority voting. Journal of the Royal Statistical Society, 109(1):53–57, 1946.
[31] Gabor Toth. Correlated Voting in Multipopulation Models, Two-Tier Voting Systems, and the Democracy Deficit. PhD thesis, FernUniversität in Hagen, April 2020. doi:10.18445/20200505-103735-0.
[32] Adrianus Willem Van der Vaart. Asymptotic Statistics. Cambridge University Press, 1st edition, 1998.
[33] Pierre Weiss. L'hypothèse du champ moléculaire et la propriété ferromagnétique. Journal de Physique Théorique et Appliquée, 6(1):661–690, 1907.
[34] Karol Zyczkowski and Wojciech Slomczynski. Square root voting system, optimal threshold and π. In Voting Power and Procedures, pages 127–146. Springer Cham, 2014.
Random irregular histograms

Oskar Høgberg Simensen1* Dennis Christensen2 Nils Lid Hjort1
1Department of Mathematics, University of Oslo
2Norwegian Defence Research Establishment (FFI)

Abstract

We propose a new method of histogram construction, providing the first fully Bayesian approach to irregular histograms. Our procedure applies Bayesian model selection to a piecewise constant model of the underlying distribution, resulting in a method that selects both the number of bins as well as their location based on the data in a fully automatic fashion. We show that the histogram estimate is consistent with respect to the Hellinger metric under mild regularity conditions, and that it attains a convergence rate equal to the minimax rate (up to a logarithmic factor) for Hölder continuous densities. Simulation studies indicate that the new method performs comparably to other histogram procedures, both for minimizing the estimation error and for identifying modes. A software implementation is included as supplementary material.

Keywords: Bayesian model selection, Bayesian nonparametrics, histograms, nonparametric density estimation.

1 Introduction

The much celebrated histogram is the earliest example of a nonparametric density estimator, and remains widespread in use even to this day. Although more efficient density estimators have been devised since, histograms have retained their popularity due to their simple nature and interpretability. The key difficulty encountered when drawing a histogram is that the appearance of the density estimate is sensitive to the choice of partition used to draw the histogram. If the partition is not chosen well, the quality of the resulting histogram will be rather poor, failing to resemble the underlying distribution at all. As a result, the question of designing automatic histogram procedures, where the partition is chosen based on the given sample, has attracted considerable interest in the statistics community; see Scott (2010) for a review.
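The sensitivity to the partition can be made concrete with a short sketch (ours, not from the paper): on the same sample, an equal-width grid and a data-based partition with cut points at empirical quantiles both give valid histogram density estimates with bin heights $N_j/(n|I_j|)$, but only the latter adapts its bin widths to where the observations concentrate.

```python
import numpy as np

# A fixed right-skewed sample on [0, 1] (hard-coded for reproducibility).
x = np.array([0.02, 0.03, 0.05, 0.06, 0.08, 0.10, 0.12, 0.15,
              0.20, 0.28, 0.40, 0.55, 0.75, 0.90])

def hist_density(x, edges):
    """Histogram density estimate: height N_j / (n * |I_j|) on bin I_j."""
    counts, _ = np.histogram(x, bins=edges)
    widths = np.diff(edges)
    return counts / (len(x) * widths), widths

# Regular partition: 4 equal-width bins.
f_reg, w_reg = hist_density(x, np.linspace(0.0, 1.0, 5))

# Irregular partition: cut points at the empirical quartiles, so bins are
# narrow where the data are dense and wide in the sparse right tail.
edges_irr = np.concatenate(([0.0], np.quantile(x, [0.25, 0.5, 0.75]), [1.0]))
f_irr, w_irr = hist_density(x, edges_irr)

# Both estimates integrate to one; only the irregular widths vary.
print((f_reg * w_reg).sum(), (f_irr * w_irr).sum())
```

Choosing the cut points well, rather than by a fixed rule such as quartiles, is precisely the problem the procedures discussed below address.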
However, this problem has turned out to be a difficult one, with no solution universally accepted as the best one.

Many of the histogram procedures proposed in the statistical literature simplify the problem of selecting the partition somewhat by exclusively considering regular partitions (Birgé and Rozenholc, 2006), where the bins are of equal width, so that one only needs to choose the number k of bins. However, even this simplified problem has not led to a canonical procedure to choose the number of intervals in the partition, as evidenced by the sheer number of different bin selection rules for regular histograms (Li et al., 2020).

Automatic irregular histogram methods provide a more flexible alternative to regular histograms by determining both the number of bins and their location based on the sample. The benefits of using optimally chosen cut points between the intervals to draw a histogram, as opposed to a regular grid with the same number of bins, are rather immediate: in the irregular case the bin widths are able to adapt to the local behavior of the underlying density, offering more smoothing near modes and in the tails of the density, resulting in smaller estimation risks (Scott, 1992). Although the adaptive nature of irregular histograms is an attractive feature from a theoretical perspective, Birgé and Rozenholc (2006) remark that the search for an appropriate set of cut points can lead to increases in the statistical risk of the procedure, which may result in worse density estimates in practice according to classical loss functions.

*Corresponding author. Email: oskarhsimensen@gmail.com
arXiv:2505.22034v1 [stat.ME] 28 May 2025

Although automatic irregular histogram procedures do not always yield smaller estimation risks than regular ones, they often have an advantage when it comes to detecting important features in a density such as modes. Automatic mode identification is a feat that cannot be achieved by regular histogram procedures designed to produce small risks with respect to classical loss functions; as shown by Scott (1992), the asymptotically optimal bin width in terms of L2 risk results in an undersmoothed histogram if the goal is to automatically infer the modes of a density. In contrast, our proposed irregular histogram procedure shows that there need not be a trade-off between low estimation error and automatic mode identification.

Despite the apparent advantages of irregular histogram procedures, they have not been as widely adopted as their regular counterparts (Davies et al., 2009). This is no doubt in part due to the computational challenges encountered by most irregular histogram methods, which typically involve solving a more difficult optimization problem than that encountered in the regular case. Moreover, many previous proposals for irregular histograms depend on the selection of key tuning parameters, with no universal recommendation given for their default values, which has hindered their use by practitioners (Rozenholc et al., 2010). However, in recent years, there has been a renewed interest in irregular histogram procedures, with many different proposals appearing in the statistical literature that offer automatic histogram construction at speeds that make them an appealing alternative
to regular histogram methods; see e.g. Davies and Kovac (2004); Rozenholc et al. (2010); Li et al. (2020); Mendizábal et al. (2023).

The irregular histogram model proposed in this paper provides, to our knowledge, the first fully Bayesian approach to histogram estimation. In particular, our approach is based on finding the partition $\hat I$ which maximizes the posterior probability under a piecewise constant model of the data-generating density, resulting in an automatic data-based rule for choosing the histogram partition. Using a combination of search heuristics and dynamic programming, our approach results in a bin selection rule that is very quick to compute, even for large datasets. Unlike regular Bayesian histogram models, the random irregular histogram method is shown to excel at automatic mode detection while achieving similar performance in terms of classical loss functions, making it an attractive choice for exploratory data analysis. A software implementation of the irregular random histogram estimator proposed in this article can be found in the AutoHist.jl Julia package, available as part of the general package registry (Simensen, 2025). The source code used to create all the figures and tables presented in this article is available at the following GitHub repository: https://github.com/oskarhs/Random-Histograms---Paper .

The remainder of the paper is structured as follows. Section 2 gives an introduction to histogram construction, focusing on irregular methods. Section 3 introduces our irregular random histogram model and describes an algorithm to compute the approximate a posteriori most probable partition given an observed sample. Consistency and convergence rate results are given in Section 4. In Section 5, we describe a simulation study in which we compare the performance of our method with other state-of-the-art histogram procedures from the statistical literature. In Section 6, we illustrate the proposed density estimator by applying it to some real-world datasets.
We conclude the article by discussing some extensions of the methodology presented here to hazard rate estimation and semiparametric regression.

2 Histogram construction

The histogram approach to nonparametric density estimation is based on a piecewise constant model of the data-generating density $f$. For ease of exposition, we present all models for densities on the unit interval, but note that the methodology presented here can be extended to other compact intervals via a suitable affine transformation. For a given interval partition $\mathcal{I} = (I_1, I_2, \ldots, I_k)$ of $[0,1]$, the piecewise constant approximation to the density $f$ is given by

$$f(x \mid \mathcal{I}, \theta) = \sum_{j=1}^{k} \frac{\theta_j}{|I_j|} \mathbf{1}_{I_j}(x), \quad x \in [0,1], \qquad (1)$$

where $\theta$ belongs to the $k$-simplex $\mathcal{S}_k = \{\theta \in [0,1]^k : \sum_{j=1}^{k} \theta_j = 1\}$ and $\mathbf{1}_{\mathcal{A}}$ denotes the indicator function for a set $\mathcal{A}$, so $\mathbf{1}_{\mathcal{A}}(x) = 1$ if $x \in \mathcal{A}$ and $0$ otherwise. Although the dimension of $\theta$ depends on the given partition $\mathcal{I}$, we generally omit this in the notation when the number of bins in the partition in question is clear from the context. For a given partition, estimation of the parameter $\theta$ is usually done by maximizing the likelihood function or by minimizing an estimated $L_2$ loss, yielding the familiar bin proportions $\hat{\theta}_j = N_j/n$, where $N_j = \sum_{i=1}^{n} \mathbf{1}_{I_j}(x_i)$ is the number of observations falling into bin $I_j$. Although constructing a histogram for a given partition is a simple task, selecting a good partition based on the observed sample constitutes a much more difficult problem. The chosen partition controls the degree of smoothing, with larger intervals inducing a smoother density estimate at the cost of increased bias. Attempting to find a partition that strikes a good balance between smoothing and small bias is the basis of most automatic histogram procedures.

Most automatic histogram procedures are based on a decision-theoretic framework where the sample $x = (x_1, \ldots, x_n)$ has been generated by a true density $f_0$, and seek to achieve good performance in terms of the frequentist risk,

$$R_n(f_0, \hat{f}) = \mathbb{E}_{x \sim f_0}\{\ell(f_0, \hat{f})\}, \qquad (2)$$

where $\ell(f_0, \hat{f})$ is a loss function measuring the quality of using the estimate $\hat{f}$ as an approximation of $f_0$. Typical loss functions include those based
on powers of the $L_r$ or Hellinger metrics,

$$\|f_0 - \hat{f}\|_r = \left( \int_0^1 |f_0(x) - \hat{f}(x)|^r \, dx \right)^{1/r}, \qquad d_H(f_0, \hat{f}) = \left( \int_0^1 \left\{ \sqrt{f_0(x)} - \sqrt{\hat{f}(x)} \right\}^2 dx \right)^{1/2}.$$

On the other hand, approaches based on maximum likelihood attempt to minimize the Kullback–Leibler divergence,

$$K(f_0, \hat{f}) = \int_0^1 f_0(x) \log \frac{f_0(x)}{\hat{f}(x)} \, dx.$$

In general, the risk of an estimator will depend on the true density, and there is no procedure that universally yields the smallest possible risk for all densities. As such, most histogram procedures aim for small risks over a large class of densities $f_0$. The vast majority of regular histogram methods take this approach to histogram estimation by minimizing an asymptotic expression for the risk, a penalized likelihood, or a direct estimate of the risk (Davies et al., 2009). Irregular histogram methods based on risk minimization include proposals based on $L_2$ leave-one-out cross-validation (Rudemo, 1982; Celisse and Robin, 2008) and the penalized maximum likelihood approach of Rozenholc et al. (2010), which seeks to minimize an upper bound on the Hellinger risk. Not all histogram procedures follow the decision-theoretic paradigm; the approaches of Davies and Kovac (2004); Li et al. (2020) construct an irregular histogram based on approximating the empirical distribution function under a parsimony constraint.

The determination of the cut points of an irregular histogram constitutes a difficult problem from a computational perspective, and most irregular histogram procedures rely on various discretization schemes that specify a finite set of possible cut points between the intervals. The reduction in computational complexity brought about by discretization is in itself not enough to guarantee that the resulting optimization problem can be solved in a reasonable amount of time, even for datasets of modest size.
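As a concrete illustration of equation (1) and the loss metrics above, the following Python sketch builds the maximum-likelihood histogram on a fixed irregular partition and approximates the squared Hellinger distance to the true density by a midpoint rule. The partition, sample size, and quadrature resolution are arbitrary choices made for this example; they are not values used elsewhere in the paper.

```python
import bisect
import math
import random

def histogram_estimate(sample, edges):
    """ML histogram on the partition given by `edges`: heights N_j / (n |I_j|), cf. eq. (1)."""
    n, k = len(sample), len(edges) - 1
    counts = [0] * k
    for x in sample:
        # index of the bin containing x; the last bin is treated as right-closed
        counts[min(bisect.bisect_right(edges, x) - 1, k - 1)] += 1
    heights = [counts[j] / (n * (edges[j + 1] - edges[j])) for j in range(k)]
    return counts, heights

def eval_piecewise(x, edges, heights):
    """Evaluate the piecewise-constant estimate at a point x in [0, 1]."""
    return heights[min(bisect.bisect_right(edges, x) - 1, len(heights) - 1)]

def hellinger_sq(f, g, m=4000):
    """Squared Hellinger distance on [0, 1], midpoint quadrature with m nodes."""
    h = 1.0 / m
    return h * sum((math.sqrt(f((i + 0.5) * h)) - math.sqrt(g((i + 0.5) * h))) ** 2
                   for i in range(m))

random.seed(1)
sample = [random.random() ** 2 for _ in range(2000)]  # draws from f0(x) = 1 / (2 sqrt(x))
edges = [0.0, 0.05, 0.15, 0.35, 0.65, 1.0]            # an arbitrary irregular partition
counts, heights = histogram_estimate(sample, edges)
d2 = hellinger_sq(lambda x: eval_piecewise(x, edges, heights),
                  lambda x: 0.5 / math.sqrt(x))
```

By construction the estimate integrates to one, and the squared Hellinger distance always lies in $[0, 2]$.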
As a result, many irregular histogram procedures are based on a selection criterion with a particular additive structure that enables the use of a dynamic programming algorithm (Kanazawa, 1988), efficient search heuristics (Mendizábal et al., 2023) or both (Rozenholc et al., 2010).

3 A Bayesian approach to irregular histograms

We now describe our proposed approach for constructing an irregular histogram. Assuming that we have observed an independent and identically distributed (i.i.d.) sample $x_1, \ldots, x_n$ on the unit interval, our Bayesian modeling approach proceeds by specifying a parameterized density $f$ as in (1) and prior distributions on the model parameters, $\mathcal{I}, \theta$. This choice of $f$ leads to the following expression for the joint density of the sample $x$,

$$f(x \mid \mathcal{I}, \theta) = \prod_{i=1}^{n} \prod_{j=1}^{k} \left( \frac{\theta_j}{|I_j|} \right)^{\mathbf{1}_{I_j}(x_i)} = \prod_{j=1}^{k} \left( \frac{\theta_j}{|I_j|} \right)^{N_j}, \qquad (3)$$

which depends on the sample only through the interval counts $N = (N_1, \ldots, N_k)$. In our model, we restrict our attention to partitions with interval endpoints belonging to a given finite set $\mathcal{T}_n = \{\tau_{n,j} : 0 \le j \le k_n\}$, where $\tau_{n,0} = 0$, $\tau_{n,k_n} = 1$ and $\tau_{n,j-1} < \tau_{n,j}$ for all $j$, and $k_n$ is a sequence of positive integers, growing with the sample size $n$. Let $\mathcal{P}_{\mathcal{T}_n,k}$ denote the set of interval partitions of $[0,1]$ consisting of $k$ bins with endpoints in $\mathcal{T}_n$, and put $\mathcal{P}_{\mathcal{T}_n} = \cup_{k=1}^{k_n} \mathcal{P}_{\mathcal{T}_n,k}$. Since $\mathcal{P}_{\mathcal{T}_n}$ depends on $n$, the prior for $\mathcal{I}$ must also necessarily do so, and we write $p_n(\mathcal{I})$ to make this dependence explicit. The prior distribution on the partitions is most easily described by introducing the number $k$ of bins as a further random variable, with prior distribution $k \sim p_n(k)$, supported on $\{1, 2, \ldots, k_n\}$. Conditional on $k$, the prior distribution on the partitions $p_n(\mathcal{I} \mid k)$ is the uniform distribution on $\mathcal{P}_{\mathcal{T}_n,k}$. Since $|\mathcal{P}_{\mathcal{T}_n,k}| = \binom{k_n-1}{k-1}$, it follows that the prior probability mass function of $\mathcal{I}$ is $p_n(\mathcal{I} \mid k) = \binom{k_n-1}{k-1}^{-1}$. We note that if $\mathcal{I} \in \mathcal{P}_{\mathcal{T}_n,k}$, then $p_n(\mathcal{I}, k')$ is nonzero only for $k' = k$ and as such, we have the equality $p_n(\mathcal{I}) = \sum_{k'=1}^{k_n} p_n(k')\, p_n(\mathcal{I} \mid k') = p_n(k)\, p_n(\mathcal{I} \mid k)$. Finally, as prior distribution for $\theta \mid \mathcal{I}$ we take a $k$-dimensional $\mathrm{Dir}(a)$ distribution, which has density

$$p(\theta \mid \mathcal{I}) = \frac{\Gamma(a)}{\prod_{j=1}^{k} \Gamma(a_j)} \prod_{j=1}^{k} \theta_j^{a_j - 1}, \quad \theta \in \mathcal{S}_k,$$

where $a = (a_1, \ldots, a_k) \in (0,\infty)^k$ and $a = \sum_{j=1}^{k} a_j$. In general, the parameters $a_j$ may depend on the partition $\mathcal{I}$, although we omit this in the notation. We defer the discussion on how to choose $p_n(k)$ and $a$ to Section 3.2. The prior-model specification can thus be summarized as

$$k \sim p_n(k), \quad \mathcal{I} \mid k \sim \mathrm{Unif}\{\mathcal{P}_{\mathcal{T}_n,k}\}, \quad \theta \mid \mathcal{I} \sim \mathrm{Dir}(\theta \mid a), \quad f(x \mid \mathcal{I}, \theta) = \sum_{j=1}^{k} \frac{\theta_j}{|I_j|} \mathbf{1}_{I_j}(x), \quad x_i \mid f \sim f. \qquad (4)$$

The model (4) is similar to some Bayesian approaches to regular histogram estimation, which have been investigated from a theoretical perspective (Hall and Hannan, 1988; Scricciolo, 2007) and from a computationally oriented one (Knuth, 2019). The main difference between our proposal and previous Bayesian approaches is that in our model, there is more than one partition for each value of $k$, resulting in a more flexible procedure due to the variable bin widths.

3.1 Posterior distribution

We now derive an expression for the posterior distribution of $\mathcal{I}$. As a first step, we show that the Dirichlet distribution is a conjugate prior for $f$, conditional on the chosen partition. Indeed, as the likelihood function takes the form in (3), it follows from Bayes' rule that for $\mathcal{I} \in \mathcal{P}_{\mathcal{T}_n,k}$,

$$p(\theta \mid x, \mathcal{I}) \propto p(\theta \mid \mathcal{I})\, f(x \mid \mathcal{I}, \theta) \propto \prod_{j=1}^{k} \theta_j^{a_j - 1} \prod_{j=1}^{k} \theta_j^{N_j} \propto \mathrm{Dir}(\theta \mid a + N).$$

To find the posterior probability of $\mathcal{I}$ up to proportionality, we note that by Bayes' rule,

$$p_n(\mathcal{I} \mid x) \propto p_n(\mathcal{I})\, p(x \mid \mathcal{I}) = p_n(\mathcal{I})\, \frac{p(\theta \mid \mathcal{I})\, f(x \mid \mathcal{I}, \theta)}{p(\theta \mid x, \mathcal{I})},$$

where the last equality follows from the relation $p(\theta \mid \mathcal{I})\, f(x \mid \mathcal{I}, \theta) = p(x \mid \mathcal{I})\, p(\theta \mid x, \mathcal{I})$. Plugging in the expressions for our chosen prior distributions along with the posterior for $\theta$, we arrive at the following expression for the posterior probability of $\mathcal{I}$,

$$p_n(\mathcal{I} \mid x) \propto p_n(k) \prod_{j=1}^{k} \frac{\Gamma(a_j + N_j)}{\Gamma(a_j)\, |I_j|^{N_j}} \cdot \frac{\Gamma(a)}{\Gamma(a+n)} \binom{k_n-1}{k-1}^{-1}, \quad \mathcal{I} \in \mathcal{P}_{\mathcal{T}_n,k}. \qquad (5)$$

Our proposed Bayesian histogram estimator is based on the partition which minimizes the Bayes risk,

$$\hat{\mathcal{I}} = \operatorname*{arg\,min}_{\mathcal{J} \in \mathcal{P}_{\mathcal{T}_n}} \mathbb{E}_{\mathcal{I},\theta}\big[ \mathbb{E}_{x \sim f}\{\ell(\mathcal{I}, \mathcal{J}) \mid \mathcal{I}, \theta\} \big],$$

where $\ell$ is a nonnegative loss function.
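To make the selection rule concrete, the log of the unnormalised posterior probability (5) can be evaluated with log-Gamma functions. The sketch below is illustrative only (it is not the paper's AutoHist.jl implementation); it assumes the non-informative choice $a_j = a|I_j|$ of Section 3.3 and a uniform prior $p_n(k) = 1/k_n$ on the number of bins.

```python
import math

def log_posterior_score(counts, edges, kn, a=5.0):
    """Log of the right-hand side of (5) for one partition, assuming
    a_j = a * |I_j| (uniform reference density g0) and a uniform prior
    p_n(k) = 1/kn on the number of bins."""
    n, k = sum(counts), len(counts)
    widths = [edges[j + 1] - edges[j] for j in range(k)]
    # Phi term: sum_j { log Gamma(a_j + N_j) - log Gamma(a_j) - N_j log |I_j| }
    phi = sum(math.lgamma(a * w + N) - math.lgamma(a * w) - N * math.log(w)
              for w, N in zip(widths, counts))
    # Psi_n term: log p_n(k) + log Gamma(a) - log Gamma(a + n) - log C(kn-1, k-1)
    log_binom = math.lgamma(kn) - math.lgamma(k) - math.lgamma(kn - k + 1)
    psi = -math.log(kn) + math.lgamma(a) - math.lgamma(a + n) - log_binom
    return phi + psi

# A sample concentrated on [0, 0.5) should prefer the partition that isolates it.
one_bin = log_posterior_score([200], [0.0, 1.0], kn=20)
two_bin = log_posterior_score([200, 0], [0.0, 0.5, 1.0], kn=20)
```

Here `two_bin > one_bin`: splitting at $0.5$ captures the concentration of the data, so the two-bin partition receives a much higher posterior score.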
To derive the optimal model selection procedure in terms of the Bayes risk, we work with the 0-1 loss function, $\ell(\mathcal{I}, \mathcal{J}) = 1$ if $\mathcal{J} \neq \mathcal{I}$ and $0$ otherwise. To find the minimizer of the Bayes risk, it suffices to minimize the posterior expected loss, $\mathbb{E}_{\mathcal{I}}\{\ell(\mathcal{I}, \mathcal{J}) \mid x\}$ [cf. Lehmann and Casella (1998, p. 228)], which leads to the following rule for selecting the optimal partition

$$\hat{\mathcal{I}} = \operatorname*{arg\,max}_{\mathcal{I} \in \mathcal{P}_{\mathcal{T}_n}} p_n(\mathcal{I} \mid x). \qquad (6)$$

We refer to $\hat{\mathcal{I}}$ as the maximum a posteriori (MAP) partition.

Since the histogram is first and foremost a graphical tool, providing point estimates of the density is of great interest. Having computed the maximizer of (6) we can construct a density estimate $\hat{f}$ based on the conditional posterior $p(\theta \mid x, \hat{\mathcal{I}})$. For a given partition $\mathcal{I} \in \mathcal{P}_{\mathcal{T}_n,k}$, the density $f$ can be parameterized in terms of the $k$-dimensional vector $\theta$, and we proceed by estimating $\theta$ by $\hat{\theta}$, the Bayes estimator under squared $L_2$ loss conditional on $\mathcal{I}$. Writing $f_{\mathcal{I},\delta}$ for the density given by $f(x \mid \mathcal{I}, \delta)$ as in (1) for $\delta \in \mathcal{S}_k$, we can expand the loss as

$$\|f_{\mathcal{I},\theta} - f_{\mathcal{I},\delta}\|_2^2 = \sum_{j=1}^{k} \int_{I_j} \{f_{\mathcal{I},\theta}(x) - f_{\mathcal{I},\delta}(x)\}^2\, dx = \sum_{j=1}^{k} \int_{I_j} |I_j|^{-2} (\theta_j - \delta_j)^2\, dx = \sum_{j=1}^{k} |I_j|^{-1} (\theta_j - \delta_j)^2.$$

Since the Bayes estimator $\hat{\theta}$ of $\theta$ minimizes the posterior expected loss, we need to solve the following optimization problem:

$$\min_{\delta \in \mathcal{S}_k} \mathbb{E}_\theta\left\{ \|f_{\mathcal{I},\theta} - f_{\mathcal{I},\delta}\|_2^2 \mid x \right\} = \min_{\delta \in \mathcal{S}_k} \sum_{j=1}^{k} |I_j|^{-1}\, \mathbb{E}_\theta\{(\theta_j - \delta_j)^2 \mid x\}.$$

This problem is most easily solved by first removing the restriction $\delta \in \mathcal{S}_k$, and showing that the unconstrained and constrained optima coincide. Since the summands of the above criterion are nonnegative, the problem then reduces to minimizing $\mathbb{E}_\theta\{(\theta_j - \delta_j)^2 \mid x\}$ with respect to $\delta_j$ for each $j$, the solution of which is $\hat{\theta}_j = \mathbb{E}_\theta\{\theta_j \mid x\}$ (Lehmann and Casella, 1998, p. 228). Using the linearity and monotonicity properties of expectations, one quickly verifies that $\hat{\theta} \in \mathcal{S}_k$, so it must also be the solution to the constrained problem. As the posterior distribution of $\theta$ given $\mathcal{I} \in \mathcal{P}_{\mathcal{T}_n,k}$ is a Dirichlet distribution, we can read off the posterior mean directly, yielding Bayes estimates

$$\hat{\theta}_j = \frac{a_j + N_j}{a + n} = \frac{a}{a+n} \cdot \frac{a_j}{a} + \frac{n}{a+n} \cdot \frac{N_j}{n}, \quad j = 1, 2, \ldots, k. \qquad (7)$$

Since the prior mean of $\theta_j$ is $a_j/a$, we see that the posterior mean is a convex combination of the prior mean and the maximum likelihood estimate $N_j/n$. If $a$ is small relative to $n$, then the data-based part dominates the estimate, and the estimated bin probabilities end up being close to the corresponding maximum likelihood estimates. Our Bayesian approach thus differs from the penalized likelihood approach mainly in the selection of the optimal partition and not in the estimation of bin probabilities conditional on the selected model.

After computing the optimal partition and the corresponding Bayes estimates of the bin probabilities, a point estimate of the density $f$ can be obtained via

$$\hat{f}_{\hat{\mathcal{I}}}(x) = \sum_{j=1}^{k} \frac{\hat{\theta}_j}{|\hat{I}_j|} \mathbf{1}_{\hat{I}_j}(x). \qquad (8)$$

We refer to the estimator (8) as the Bayes histogram estimator. The $\hat{\theta}_j$ serve as estimates of the true bin probabilities $\theta_j = \int_{I_j} f(x)\, dx$. Although the form of (8) is rather simple, the bin selection procedure itself is quite complex, as it requires finding the best partition according to (5) among a field of $2^{k_n - 1}$ candidate partitions. As argued by Wand (1997), the use of a complex bin selection procedure seems almost inevitable for a histogram method designed to perform well for a large class of densities. In fact, simulation studies show that most well working automatic regular and irregular histogram methods either solve an optimization problem over a set of possible partitions or estimate derivatives of the underlying density (Birgé and Rozenholc, 2006; Davies et al., 2009).

Remark 1. Although the primary focus of our article is the study of the Bayes histogram estimator, we note that full posterior inference for $f$ is available through the posterior distribution for $\mathcal{I}$ given by (5).
In particular, Markov chain Monte Carlo (MCMC) methods can be used to generate approximate posterior samples, which can be used to estimate posterior quantities of interest, such as the posterior probability distribution of the number of modes in the density.

Remark 2. The estimator in (8) is not the overall Bayes estimator of the density $f$ under the standard $L_2$ loss as it does not minimize the Bayes risk. The minimizer of the posterior expected loss is rather the posterior mean of $f$,

$$\mathbb{E}_f\{f(x) \mid x\} = \sum_{\mathcal{I} \in \mathcal{P}_{\mathcal{T}_n}} p_n(\mathcal{I} \mid x)\, f(x \mid \mathcal{I}, \hat{\theta}), \qquad (9)$$

which is a model averaging estimator, weighting each conditional posterior mean $\mathbb{E}_f\{f(x) \mid x, \mathcal{I}\}$ according to the posterior probability of the corresponding partition $\mathcal{I}$. Since the partition $\mathcal{J}$ which includes every grid point in $\mathcal{T}_n$ is a refinement of every $\mathcal{I} \in \mathcal{P}_{\mathcal{T}_n}$, the estimator (9) is also a piecewise constant density with cut points in $\mathcal{T}_n$; one could therefore also consider using it as an estimate of the density. We do not pursue this approach here for two reasons. The first reason for favoring (8) over the posterior mean is that it is quick to compute; the posterior mean is a sum of $2^{k_n - 1}$ terms, making the evaluation of (9) infeasible even for moderate values of $k_n$. Although simulation from the posterior $\mathcal{I} \mid x$ is possible through MCMC, such a procedure may suffer from slowly mixing chains and requires performing convergence diagnostics, which makes the model fitting procedure rather involved compared to alternative automatic histogram methods. On the other hand, an approximation to the Bayes histogram estimator can be computed quickly using the methods of the next subsection without requiring any tuning from the user. Another reason for preferring the Bayes histogram estimator is that of model parsimony, as it depends only on the chosen partition and the corresponding interval probability estimates (7). Our simulations show that (8) often yields quite good visual summaries of the underlying density with a relatively small number of bins.
In contrast, the computation of $\mathbb{E}_f\{f(x) \mid x\}$ involves computing a weighted average of the estimated bin probabilities from many different models, which leads to a complicated expression for the resulting bin heights.

3.2 Computing the Bayes histogram estimator

The key observation that allows us to find the maximizer of $p_n(\mathcal{I} \mid x)$ efficiently is that we can write the logarithm of the right hand side in (5) as

$$\Phi(\mathcal{I}) + \Psi_n(k), \quad \mathcal{I} \in \mathcal{P}_{\mathcal{T}_n,k}, \qquad (10)$$

where $\Phi$ and $\Psi_n$ are given by

$$\Phi(\mathcal{I}) = \sum_{j=1}^{k} \left\{ \log \Gamma(a_j + N_j) - \log \Gamma(a_j) - N_j \log |I_j| \right\}, \qquad \Psi_n(k) = \log p_n(k) + \log \Gamma(a) - \log \Gamma(a+n) - \log \binom{k_n-1}{k-1}.$$

$\Phi$ is additive with respect to the intervals of the partition in the sense that we can write $\Phi = \sum_{j=1}^{k} \Phi_0(I_j)$, where $\Phi_0(I_j)$ depends only on $I_j$. This allows for the use of the dynamic programming algorithm due to Kanazawa (1988), which determines the maximizer of (10) using $O(k_n^3)$ floating-point operations.

Although the dynamic programming algorithm succeeds in identifying the MAP partition within a reasonable time budget for smaller datasets, the $O(k_n^3)$ runtime is prohibitive for large values of $k_n$. It is generally desirable to have $k_n$ increase at a rate close to $n$, as otherwise $\mathcal{T}_n$ can end up being too coarse, and the resulting histogram may miss key details in the data that would have been shown if a high-resolution grid was used. In our implementation, we have resolved this issue by letting $k_n$ grow at a rate close to $n$, but we adopt the search heuristic of Rozenholc et al. (2010) to construct a reduced grid $\mathcal{Q}_n \subseteq \mathcal{T}_n$ whose cardinality $q_n$ grows more slowly with $n$. Having computed $\mathcal{Q}_n$, we run the dynamic programming algorithm to maximize $p_n(\mathcal{I} \mid x)$ over $\mathcal{P}_{\mathcal{Q}_n}$, and take $\hat{\mathcal{I}}$ as the resulting maximizer. We settled on the choice $q_n = \{n \log n\}^{1/3}$, as this yields an overall algorithmic complexity of order $n \log(n)$ for the dynamic programming part. Although this approach is not guaranteed to find the exact maximizer of $p_n(\mathcal{I} \mid x)$ over $\mathcal{T}_n$, we have found that in practice the chosen partition typically yields a histogram close to the one based on the true MAP in terms of Hellinger loss in cases where exact evaluation is feasible.

3.3 Prior elicitation and practical implementation

The model specified by (4) leaves the statistician with some flexibility through the choice of the prior for $k$ and the hyperparameter $a$. To have a fully automatic procedure, however, default choices for $p_n(k)$ and $a$ are needed, which we provide in this section. As prior for $k$, we have found that in practice using the uniform prior on $\{1, 2, \ldots$
, k_n\}$ with $k_n = \lceil 4n/\log^2(n) \rceil$ appears to work well compared to other alternatives in simulations, as documented in Appendix A.1. As such, we have used this uniform prior for $k$ as part of our default procedure.

The hyperparameters $a$ of the Dirichlet prior for $\theta \mid \mathcal{I}$ can be chosen to center the prior distribution of $f$ on a particular reference density. If $g_0$ is an initial guess for the density $f$, we let

$$a_j = a \int_{I_j} g_0(x)\, dx, \quad \mathcal{I} \in \mathcal{P}_{\mathcal{T}_n,k}, \quad j = 1, 2, \ldots, k, \qquad (11)$$

where $a > 0$ is a constant. Conditional on the partition $\mathcal{I}$, we see that the prior mean of $f$ is

$$\mathbb{E}_\theta\{f(x) \mid \mathcal{I}\} = \sum_{j=1}^{k} \frac{\mathbb{E}_\theta\{\theta_j\}}{|I_j|} \mathbf{1}_{I_j}(x) = \sum_{j=1}^{k} \frac{a_j}{a\, |I_j|} \mathbf{1}_{I_j}(x) = \sum_{j=1}^{k} |I_j|^{-1} \int_{I_j} g_0(y)\, dy\, \mathbf{1}_{I_j}(x),$$

which is the best piecewise constant approximation of the density $g_0$ based on the partition $\mathcal{I}$, in the sense that it minimizes the $L_2$ distance to $g_0$ among all piecewise constant densities on $\mathcal{I}$. To guarantee that this choice of $a_j$ always yields a proper posterior for $\theta \mid x, \mathcal{I}$, we must ensure that $a_j > 0$, which is true if the density $g_0$ is chosen to be positive almost everywhere in $[0,1]$. If prior information is not available, we can take $g_0$ equal to the uniform distribution on the unit interval, leading to the non-informative choice $a_j = a|I_j|$ for all $j$. We have made the choice $g_0 = \mathbf{1}_{[0,1]}$ the default in the available software implementation of the random irregular histogram, and we have adopted this choice in all subsequent simulations and applications of the histogram method in this paper.

The choice of the parameter $a$ is more delicate, as it is difficult to directly assess its impact on the chosen partition $\hat{\mathcal{I}}$. Through (7), smaller values of $a$ yield Bayes estimates $\hat{\theta}_j$ which are closer to the maximum likelihood estimate of $\theta_j$, depending more on the data and less on the prior mean. However, if $a$ is small relative to $n$, its effect on the chosen partition is typically of much greater importance than its influence on the estimated bin probabilities, since the data-based part of the estimate tends to dominate for moderate values of $n$, which is confirmed by the simulations in Appendix A.2. Based
on the results from these preliminary simulations, we suggest $a = 5$ as a reasonable default choice for $a$.

As presented, the random irregular histogram method applies to densities supported on a known closed interval. To apply the methodology to data with unknown support, the support of the data has to be estimated. Our preferred method in this regard is to rescale the data to the unit interval via the affine transformation $z_i = (x_i - x_{(1)})/(x_{(n)} - x_{(1)})$, apply the Bayesian histogram method to the $z_i$ and rescale the density estimates to the original scale, which is equivalent to the approach taken by most histogram procedures. For a discussion of other approaches to estimating the support of a probability distribution for the purposes of density estimation, see Section 1.4 of Zhong (2016) and the references therein.

Another issue to consider is the construction of the grid $\mathcal{T}_n$. One possibility is to use a fine, regular mesh. Data-based grids are also possible; one such approach is to construct $\mathcal{T}_n$ from the sample quantiles, so that $\tau_{n,j} = \hat{Q}_n(j/k_n)$ for $j = 1, 2, \ldots, k_n - 1$, where $\hat{Q}_n(q)$ is the $q$-quantile of the sample $x$. A further data-based option is to use the order statistics of the data as possible cut points. All of these approaches have their advantages and disadvantages, and the best choice of grid is ultimately dependent on the density we are trying to estimate; see e.g. Rozenholc et al. (2010). As part of our default procedure, we propose using a regular mesh for $\mathcal{T}_n$, but note that in practical applications, one may also want to test data-based grids to check the sensitivity of the procedure to the choice of mesh.

4 Asymptotic results

To study the large-sample behavior of the Bayes histogram estimator (8), we use the theory developed by Ghosal et al. (1999, 2000) to study posterior consistency and posterior convergence rates, but with some modifications to account for the fact that our estimator is based on the MAP partition $\hat{\mathcal{I}}$.
To start, we note that the prior-model specification in (4) defines a sequence of probability measures $\{P_n\}_{n\in\mathbb{N}}$ on the space $\mathcal{F}$ of all probability densities on $[0,1]$ through the specifications of the prior distributions $p_n(k)$, $p_n(\mathcal{I} \mid k)$ and $p(\theta \mid \mathcal{I})$ for $k$, $\mathcal{I} \mid k$ and $\theta \mid \mathcal{I}$, respectively. For a sample of size $n$, the prior distribution of $f$ is then given by $P_n$, upon which the corresponding posterior distribution $P_n(\cdot \mid x)$ is based, and our limit results are based on working with this sequence of posteriors. In the following, we assume that the data $x = (x_1, \ldots, x_n)$ is an i.i.d. sample from a true density $f_0$, and denote by $P_0^n$ the $n$-fold product measure induced by $f_0$. The associated stochastic order symbols are denoted by $O_{P_0^n}$ for stochastic boundedness and $o_{P_0^n}$ for convergence in probability.

Results regarding the consistency and convergence rates of Bayesian nonparametric point estimators are often derived by studying the asymptotic behavior of the posterior distribution. We say that the sequence of posterior distributions $P_n(\cdot \mid x)$ is consistent with respect to a metric $d$ if

$$P_n\{f \in \mathcal{F} : d(f_0, f) \ge \epsilon \mid x\} = o_{P_0^n}(1)$$

for any $\epsilon > 0$. The related, stronger notion of posterior convergence rates is also of great interest when studying the large-sample behavior of Bayesian nonparametric procedures. We say that the sequence of posteriors $P_n(\cdot \mid x)$ converges at rate $\epsilon_n = o(1)$ if for every sequence $M_n$ diverging to positive infinity,

$$P_n(f \in \mathcal{F} : d(f_0, f) \ge M_n \epsilon_n \mid x) = o_{P_0^n}(1).$$

Although verifying posterior consistency and identifying a posterior convergence rate of a Bayesian nonparametric procedure can be of independent interest, the conditions used to establish the two are on their own not sufficient to conclude that the Bayes histogram estimator of (8) is consistent and that it obtains the same convergence rate as the posterior, introducing the need for new mathematical results concerning estimators based on model selection.

4.1 Consistency

Our first result shows that the Bayes histogram estimator is Hellinger consistent under very general conditions
on the prior and the true density $f_0$. The proof of Theorem 1 is given in Appendix B.2.

Theorem 1. Suppose that $f_0$ is a probability density on the unit interval satisfying $\int_0^1 f_0(x)^r\, dx < \infty$ for some $r \in (1,2]$, and suppose that the random histogram prior sequence $\{P_n\}_{n\in\mathbb{N}}$ satisfies the following conditions:

(i) The sequence of prior probability mass functions for $k$, $p_n(k)$, is fully supported on $\{1, 2, \ldots, k_n\}$, where $k_n \to \infty$ and $k_n = O(n)$. Moreover, we have uniformly for all $k = 1, 2, \ldots, k_n$ and $n$, $\log p_n(k) = O(n)$.

(ii) Conditional on the partition $\mathcal{I} \in \mathcal{P}_{\mathcal{T}_n,k}$, the prior for $\theta \mid \mathcal{I}$ is $\mathrm{Dir}(a)$ where $0 < a_j \le \Sigma$ for a constant $\Sigma > 0$.

(iii) The mesh sequence $\mathcal{T}_n$ satisfies $\max_{1 \le j \le k_n}\{\tau_{n,j} - \tau_{n,j-1}\} = o(1)$.

Then the Bayes histogram estimator $\hat{f}_{\hat{\mathcal{I}}}$ satisfies $\mathbb{E}_{x\sim f_0}\{d_H^2(f_0, \hat{f}_{\hat{\mathcal{I}}})\} = o(1)$.

The first and third conditions ensure that the prior eventually assigns positive mass to an arbitrarily fine grid, which is needed to approximate $f_0$ in the Hellinger sense. The second condition is fulfilled if, for instance, $a_j = a \int_{I_j} g_0(x)\, dx$ for a constant $a > 0$ and a strictly positive density $g_0$, or if the $a_j$ are all equal to some fixed, positive number. Theorem 1 mirrors the result by Barron et al. (1999, Section 3.1), who studied posterior consistency for a regular random histogram prior, showing that the posterior distribution is consistent under the slightly stronger condition that $f_0$ is bounded.

4.2 Convergence rate

To establish a convergence rate result, we need to make some additional smoothness assumptions on the true density $f_0$. A function $f_0 : [0,1] \to \mathbb{R}$ is said to be $\alpha$-Hölder continuous for $\alpha \in (0,1]$ if there exists a constant $L_0 > 0$ such that $|f_0(x) - f_0(y)| \le L_0 |x - y|^\alpha$ for all $x, y \in [0,1]$. The parameter $\alpha$ determines the smoothness of the function $f_0$, with larger values of $\alpha$ corresponding to smoother functions. The Hölder exponent relates directly to the bias in the approximation of $f_0$ by a piecewise constant function on the partition $\mathcal{I}$ (cf.
Lemma B.4 in the appendix), with larger exponents yielding better approximations for a given number of bins, leading to a faster convergence rate as a result.

Theorem 2. Suppose that $f_0 : [0,1] \to \mathbb{R}$ is a strictly positive $\alpha$-Hölder continuous density and that the random irregular histogram prior sequence $\{P_n\}_{n\in\mathbb{N}}$ satisfies the following.

(i) The sequence of prior probability mass functions for $k$, $p_n(k)$, is fully supported on $\{1, 2, \ldots, k_n\}$, where $n^\gamma = O(k_n)$ for any $\gamma \in (0,1)$ and $k_n = O(n)$. Moreover, there are positive constants $D_1, D_2, d_1, d_2$ such that for all $k \le k_n$,

$$D_1 \exp\{-d_1 k \log(k)\} \le p_n(k) \le D_2 \exp\{-d_2 k \log(k)\}.$$

(ii) Conditional on the partition $\mathcal{I} \in \mathcal{P}_{\mathcal{T}_n,k}$, the prior for $\theta \mid \mathcal{I}$ is $\mathrm{Dir}(a)$ where $\sigma k^{-1} \le a_j \le \Sigma$ for all $j$ and two constants $\sigma, \Sigma > 0$.

(iii) The mesh sequence $\mathcal{T}_n$ satisfies

$$\max_{1 \le j \le k_n}\{\tau_{n,j} - \tau_{n,j-1}\} = O(k_n^{-1}), \qquad \min_{1 \le j \le k_n}\{\tau_{n,j} - \tau_{n,j-1}\} \ge B k_n^{-1}, \quad \text{for some } B > 0.$$

Then for $\epsilon_n = \{n/\log(n)\}^{-\alpha/(2\alpha+1)}$, the Bayes histogram estimator satisfies $\mathbb{E}_{x\sim f_0}\{d_H^2(f_0, \hat{f}_{\hat{\mathcal{I}}})\} = O(\epsilon_n^2)$.

The proof of Theorem 2 can be found in Appendix B.1. The first condition of the theorem is satisfied if, for instance, $p_n(k)$ is taken to be a Poisson distribution truncated to the interval $\{1, 2, \ldots, k_n\}$. The third condition ensures that the mesh $\mathcal{T}_n$ does not deviate too much from a regular one, which guarantees that the deterministic component of the risk is at most of order $k^{-2\alpha}$ with respect to the squared Hellinger metric.

The rate obtained by the Bayes histogram estimator is equivalent to the minimax rate for $\alpha$-Hölder continuous densities with respect to the squared Hellinger metric of $n^{-2\alpha/(2\alpha+1)}$ up to a logarithmic factor (Massart, 2007, Proposition 7.16). Moreover, the prior is rate-adaptive; it attains a near-optimal convergence rate without knowing the smoothness of the true density $f_0$. The convergence rate
of the Bayes histogram estimator mirrors the posterior convergence rate obtained by Scricciolo (2007) for the corresponding regular histogram model, with the minor difference that our rate is slightly better in terms of the logarithmic factor. The additional logarithmic factors appearing in the convergence rate are rather typical of Bayesian density estimators based on mixtures with Dirichlet distributed weights; see, for example, the polygonally smoothed prior of Scricciolo (2007) and the extended Bernstein prior of Kruijer and van der Vaart (2008). Hall and Hannan (1988) consider a regular random histogram model with a uniform prior on the interval probabilities. Their main result implies that the histogram estimator based on maximization of $p_n(k)$ converges at rate $\{n/\log(n)\}^{-1/3}$ if $f_0$ is bounded away from $0$ and has a uniformly continuous derivative $f_0'$, which is equal to the convergence rate of Theorem 2 under the weaker assumption that $\alpha = 1$. In light of this result, we conjecture that the rate obtained in Theorem 2 is sharp for any $\alpha \in (0,1]$.

Remark 3. The consistency and rate results presented here can also be extended to similar models for regular histograms, with minor modifications to the proofs. In particular, this means that the results of Theorems 1 and 2 are also valid for the histogram estimator proposed by Knuth (2019).

Remark 4. We note that the conditions used to show Theorems 1 and 2 are stronger than those needed to show posterior consistency, so the posterior distribution of an irregular histogram model satisfying the conditions in Theorem 1 is consistent. Similarly, the posterior distribution of $f$ attains the same convergence rate as the Bayes histogram estimator in Theorem 2, provided it satisfies the hypotheses of the theorem.

5 Simulation study

To showcase the performance of our proposed irregular histogram method, we have conducted a simulation study in which we compare it to multiple state-of-the-art automatic histogram procedures on the set of test densities $\mathcal{D}$ given in Figure 1.
These densities were chosen to reflect a variety of features with regard to skewness, tail behavior, number of modes and the sharpness of individual peaks. Densities 10, 12 and 14-16 are taken from Rozenholc et al. (2010), densities 7-9 are from Marron and Wand (1992), while density 13 was suggested by Li et al. (2020).

5.1 Loss functions

Although classical loss functions such as the $L_2$- or Hellinger metrics provide a natural way of measuring the discrepancy between a density estimate and the true density in a simulation study, they do not adequately assess the quality of a histogram procedure in terms of identifying key features of a density, such as modes (Denby and Mallows, 2009). To also account for this important aspect of histogram estimation, we have included the peak identification loss of Davies et al. (2009) as a performance metric.

Definition 1 (Davies et al., 2009). We say that a density $f$ has a finite mode or peak at $t$ if there is an interval¹ $\mathcal{C}$ such that $t$ is the midpoint of $\mathcal{C}$ and $f(s) = c$ for all $s \in \mathcal{C}$, and there exists a $\gamma > 0$ such that $f(s) < c$ for all $s \in \mathcal{C}_\gamma \setminus \mathcal{C}$, where $\mathcal{C}_\gamma = \cup_{s\in\mathcal{C}}\{x : |s - x| \le \gamma\}$. The density $f$ is said to have an infinite peak or mode at $t$ if either one-sided limit at $t$ equals $\infty$.

Suppose that $f_0$ has $l_0$ modes $t_1, t_2, \ldots, t_{l_0}$ and that the histogram estimate $\hat{f}$ has modes at $s_1, \ldots, s_l$. Given a tolerance vector $\delta \in (0,\infty)^{l_0}$, we say that a peak of the histogram $\hat{f}$ matches the peak at $t_j$ of $f_0$ at tolerance $\delta_j$ if $\min_{1\le i\le l} |t_j - s_i| < \delta_j$. The peak $s_i$ is said to be spurious if it does not match any peak of $f_0$. Letting $C = \sum_{j=1}^{l_0} \mathbf{1}_{[0,\delta_j)}(\min_i |s_i - t_j|)$ denote the number of correctly identified modes, we define the peak identification (PID) loss as the sum of the number of spurious and unidentified peaks,

$$\ell(f_0, \hat{f}) = (l_0 - C) + (l - C).$$

The behavior of the PID loss depends on the choice of the tolerance parameter $\delta$.
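The PID loss is straightforward to compute once the mode locations and tolerances are in hand; a minimal sketch (with made-up mode locations, not tied to any of the test densities):

```python
def pid_loss(true_modes, est_modes, tol):
    """Peak identification loss: (unidentified true peaks) + (spurious estimated peaks).
    `tol[j]` is the matching tolerance delta_j for the j-th true mode."""
    correct = sum(
        1 for t, d in zip(true_modes, tol)
        if est_modes and min(abs(t - s) for s in est_modes) < d
    )
    return (len(true_modes) - correct) + (len(est_modes) - correct)

# Two true modes; the estimate finds both but adds one spurious peak.
loss = pid_loss([0.25, 0.75], [0.26, 0.50, 0.74], [0.05, 0.05])  # -> 1
```

Both peaks are matched within tolerance ($C = 2$), so the loss reduces to the single spurious peak at $0.50$.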
To determine the tolerance $\delta_j$ of a finite peak at $t_j$ of a density $f_0$, we have used the following approach: find the smallest $\gamma > 0$ such that

$$\frac{\int_{C_{j,\gamma}} |f_0(x) - \bar{f}_{0,j,\gamma}|\, dx}{\int_{C_{j,\gamma}} f_0(x)\, dx} > 0.2, \qquad C_{j,\gamma} = [t_j - \gamma, t_j + \gamma],$$

where we have defined $\bar{f}_{0,j,\gamma} = |C_{j,\gamma}|^{-1} \int_{C_{j,\gamma}} f_0(x)\, dx$. We then take $\delta_j = \gamma/2$ as our tolerance for the peak at $t_j$. This criterion essentially measures how well $f_0$ is approximated by a constant function around any given mode, reflecting the fact that we should expect higher precision when trying to determine the location of a sharp peak. We visually confirmed that this criterion yielded sensible and non-overlapping tolerance regions for each of the test densities in Figure 1. For the infinitely peaked Chisq(1) we set the tolerance region of the peak at $0$ to $[0, q_{0.1}]$, where $q_\alpha$ denotes the $\alpha$-quantile of this distribution, and for the Beta(0.5, 0.5), we used $[0, q_{0.1}]$ and $[q_{0.9}, 1]$ for the peaks at $0$ and $1$, respectively.

¹We include the case where $\mathcal{C}$ is a singleton set, which corresponds to the usual definition of a local maximum.

Figure 1: Test densities included in the simulation study.

One objection to the use of the PID loss as a measure of mode detection is that it only captures a histogram procedure's ability to detect modes automatically and that no statistician would interpret any mode of a histogram as one corresponding to a mode in the true density. Scott (1979) remarks that many data analysts prefer the use of relatively small bin widths when drawing a regular histogram, leaving the final smoothing to be done by eye. Although we agree that this is a valid objection to the use of the peak identification loss in the regular case, the subjective manner in which the number and the location of modes have to be located is somewhat unsatisfactory. In contrast, a procedure that does well with respect to the PID loss can identify modes automatically, which is a major reason for the introduction of irregular histograms to begin with. In addition to the PID loss, we have also included results for the Hellinger and $L_2$ metrics in our simulations. Since closed-form expressions for these two loss functions are generally not available, Gauss quadrature was used to approximate the loss, performing the numeric integration piecewise over each interval where both the histogram estimate and the true density are continuous.

5.2 Setup

In our study we have chosen to focus on histogram methods that have performed well in past simulation studies, see Birgé and Rozenholc (2006); Davies et al. (2009). In addition, the method of Knuth (2019) and the stochastic complexity approach of Hall and Hannan (1988) were also included, as these effectively correspond to a regular equivalent of the random irregular histogram. The Essential Histogram of Li et al. (2020) was excluded due to the high computational cost of its available software implementation, which made it impractical for the largest sample sizes considered in this study. We have also included an irregular histogram method based on Kullback–Leibler cross-validation, an approach that, to our knowledge, has so far not been explored for irregular histograms. This approach chooses the partition which maximizes $CV_n(\mathcal{I}) = \sum_{j=1}^{k} N_j \log(N_j - 1) - \sum_{j=1}^{k} N_j \log |I_j|$ over all partitions with $N_j \ge 2$ for all $j$, and is an irregular counterpart to the cross-validation criterion proposed by Hall (1990). A detailed list of the included methods and their corresponding abbreviations can be found in Table 1.

Table 1: Overview of the methods included in the simulation study

Type      | Method                            | Abbreviation | Description
----------|-----------------------------------|--------------|---------------------------
Regular   | Wand's 2-step rule                | Wand         | Wand (1997)
Regular   | AIC                               | AIC          | Taylor (1987)
Regular   | BIC                               | BIC          | Davies et al. (2009)
Regular   | Birgé and Rozenholc's rule        | BR           | Birgé and Rozenholc (2006)
Regular   | Knuth's rule                      | Knuth        | Knuth (2019)
Regular   | Stochastic Complexity             | SC           | Hall and Hannan (1988)
Irregular | Random irregular histogram        | RIH          | Section 3
Irregular | Penalty B in Rozenholc et al.     | RMG-B        | Rozenholc et al. (2010)
Irregular | Penalty R in Rozenholc et al.     | RMG-R        | Rozenholc et al. (2010)
Irregular | Taut string                       | TS           | Davies and Kovac (2004)
Irregular | $L_2$ cross-validation            | L2CV         | Rudemo (1982)
Irregular | Kullback–Leibler cross-validation | KLCV         | Section 5.2

Appendix C.1 provides further details on the implementations used for each procedure.
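A sketch of the KLCV criterion of Section 5.2 in Python (the $-\infty$ guard is our own device for excluding partitions that violate the $N_j \ge 2$ constraint; it is not part of the paper's definition):

```python
import math

def klcv(counts, edges):
    """Kullback-Leibler cross-validation score CV_n(I); requires N_j >= 2 in every bin."""
    if any(N < 2 for N in counts):
        return -math.inf  # invalid partition: can never be selected
    return sum(N * (math.log(N - 1) - math.log(edges[j + 1] - edges[j]))
               for j, N in enumerate(counts))

# Choose among a few candidate partitions of the same 100 data points
# (bin counts invented for illustration).
candidates = [
    ([50, 50], [0.0, 0.5, 1.0]),
    ([80, 20], [0.0, 0.8, 1.0]),
    ([99, 1],  [0.0, 0.9, 1.0]),   # violates N_j >= 2 in the second bin
]
best = max(candidates, key=lambda c: klcv(*c))
```

The selected partition is the feasible candidate with the largest score; the third candidate is automatically discarded by the guard.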
To judge the quality of the different histogram procedures, we used a Monte Carlo procedure to estimate the risks of each method for a set of test densities f_0 \in \mathcal D with respect to the Hellinger, L2 and PID losses, using the Monte Carlo average
\[
\widehat{R}_n\big(f_0, \hat f_m\big) = \frac{1}{B} \sum_{b=1}^{B} \ell\big(f_0, \hat f_m^{(b)}\big), \qquad (n, m, f_0) \in \mathcal N \times \mathcal M \times \mathcal D, \tag{12}
\]
where \mathcal N = \{50, 200, 1000, 5000, 25000\} denotes the set of sample sizes used in the study, \mathcal M is the set of model indices and \hat f_m^{(b)} is the estimate from method m on the b-th replicate sample. The Monte Carlo average in (12) was computed using B = 500 replications.

Figure 2: Median ranks from the simulation study in terms of
https://arxiv.org/abs/2505.22034v1
Hellinger risk (left) and PID risk (right).

5.3 Results

Tables showing the estimated risks \widehat{R}_n(f_0, \hat f_m) for each of the three loss functions can be found in Appendix C.2, along with boxplots comparing the performances of the different methods included in the study. To provide a more compact summary of the results, we ranked each method according to its risk for each combination of loss function, density and sample size, and computed the median rank by method over \mathcal D and \mathcal N. The resulting medians for the Hellinger and PID losses are shown in Figure 2. Although the relative performance of the different procedures depends on the true density at hand, our results indicate some broader trends among the regular and irregular histogram procedures.

• Whether or not the Hellinger risk favors regular histogram methods varies based on the true density. For spatially homogeneous densities such as the standard normal or skewed bimodal densities, there is little benefit to using an irregular grid to begin with, resulting in generally smaller risks for regular histograms. For the heavy-tailed Chisq(1) and standard lognormal densities and the infinitely peaked Beta(1/2, 1/2), the irregular methods tend to perform better.

• With the exception of the L2CV and KLCV criteria, all the irregular histogram procedures included in the study outperformed their regular counterparts in terms of automatic mode identification for almost every combination of f_0 and n. This is apparent from the right panel of Figure 2; four of the irregular procedures produce much smaller median ranks than any of the other methods. In addition, we note that these four irregular methods exhibit lower PID risks as the sample size increases, a pattern that is often reversed for regular histograms.

Regarding the relative performance of the individual methods, our main findings were as follows:

• The two cross-validation based procedures perform much worse in terms of mode identification than the other methods, and yielded the highest L2 and Hellinger losses for many densities.
Plots of the histograms resulting from these procedures revealed that they still have a tendency to choose partitions with rather narrow bins even for smooth densities, despite the fact that a minimum bin width was enforced.

• Among the regular histogram procedures, the Bayesian criteria Knuth and SC are the overall best performers for the smallest sample sizes, while the criterion of Birgé and Rozenholc (2006) does better for larger samples. The BIC-based histogram is the clear winner in terms of PID risk, but does relatively poorly with respect to the two classical loss functions.

Figure 3: Illustration of automatic mode detection and the PID loss for an n = 5000 sample from the Harp density.

• The Taut String method of Davies and Kovac (2004) is overall the best performer in terms of Hellinger and L2 risk out of the irregular histogram methods. The two criteria of Rozenholc et al. (2010) also perform quite well in this regard. The random irregular histogram method performs a bit worse than the other non-cross-validation irregular procedures with respect to classical loss functions for large samples, but often does well at mode detection, and frequently produces the lowest PID risks out of all methods for the largest sample size.

To illustrate the advantage of irregular histograms over their regular counterparts in automatic mode detection, we generated a sample of size n = 5000 from the Harp density in Figure 1. The Harp density has multiple modes at different scales, making it difficult for regular histogram procedures to correctly identify the number of modes and the location of all individual peaks simultaneously, even for larger samples. Figure 3 shows the resulting histograms from fitting the random irregular histogram and the regular BIC histogram to the simulated sample, which was chosen to yield a PID loss close to the corresponding estimated risks in Table 7.
The tolerance regions for each individual peak are shown in gray, with the red dots indicating the positions of the histogram modes that match the peaks of the true density, whereas the spurious modes are drawn in black. The disadvantages of using a fixed global bin width in this case are immediately apparent from Figure 3; the bin width chosen by the BIC histogram results in too much smoothing in the region surrounding the leftmost mode, while the density estimate is far too coarse near the rightmost mode, resulting in a high number of spurious peaks near this mode and in the right tail of the density. In contrast,
the irregular histogram is able to adapt the bin width locally, providing more smoothing in flatter regions of the density, which results in high-precision estimates of each of the five true modes.

Figure 4: Irregular (left) and regular (right) histograms fitted to the Old Faithful data. The dashed blue line shows the kernel estimate.

On the other hand, the Hellinger losses of the two procedures are 0.177 for the BIC histogram and 0.226 for the random irregular histogram, favoring the regular histogram. Ultimately, whether one prefers the regular or irregular estimate in this case comes down to the analyst's preference for low estimation risk or automatic mode detection.

6 Applications

In this section we apply the irregular histogram method proposed in this paper to two real-world datasets. For comparison, we also include results for the histogram method of Knuth (2019), as this method is a regular equivalent of the random irregular histogram.

6.1 Old Faithful geyser data

In our first application we consider the Old Faithful dataset from Härdle (1991), which consists of measured waiting times in minutes between consecutive eruptions of the Old Faithful geyser in Yellowstone National Park. Previous analyses of this dataset have typically shown a clear bimodal structure; see e.g. Venables and Ripley (1999) and Kwasniok (2021) for analyses using several different density estimation procedures.

We fitted both the irregular random histogram and the regular histogram of Knuth (2019) to the dataset, estimating the support of the histogram for both procedures. For comparison, we also computed a kernel density estimate using the density() function from base R with the bandwidth selector of Sheather and Jones (1991). The resulting density estimates are shown in Figure 4. The irregular histogram method yields a smooth density estimate, using a relatively small number of bins, and displays a clear bimodal structure. In contrast, the regular histogram procedure chooses a fairly large number of bins in this case, resulting in a much rougher appearance.
The bimodal structure is also less apparent in this case, as there are two peaks near each of the two modes in the kernel estimate. Overall, the irregular histogram displays a greater degree of agreement with the kernel estimate than Knuth's method.

6.2 Multiple hypothesis testing

In a multiple hypothesis testing scenario, statistical procedures often aim at controlling the false discovery rate (FDR), the expected proportion of true null hypotheses among the rejected ones (Benjamini and Hochberg, 1995). It is therefore of great interest to estimate the FDR based on the observed test statistics. This task is often carried out by using the p-values of the individual tests to construct an estimate of the proportion of true null hypotheses, from which the FDR can be estimated, for instance via the approach of Storey and Tibshirani (2003). A model-based approach to estimating the true null proportion assumes that the p-values are distributed according to a mixture density of the form
\[
f(p) = \pi_0 \mathbf 1_{[0,1]}(p) + (1 - \pi_0) h(p), \tag{13}
\]
where \pi_0 is the theoretical proportion of true null hypotheses and h is the density of the false null hypotheses, assumed to have most of its mass near 0 and to satisfy h(1) = 0. A nonparametric approach to estimating \pi_0 is to use the observed p-values to compute a nonparametric density estimate \hat f and take \hat\pi_0 = \hat f(1) as the estimate of \pi_0, which is reasonable in light of our assumptions, which imply \pi_0 = f(1). For some previously proposed nonparametric approaches to estimating the density f specifically for this purpose, see e.g. Langaas et al. (2005); Guan et al. (2008) and the references therein.

Table 2: Average RMSEs for β = 4

π0    n     Irregular  Regular
0.5   200   0.107      0.078
0.5   1000  0.054      0.043
0.5   5000  0.029      0.025
0.8   200   0.121      0.109
0.8   1000  0.06       0.038
0.8   5000  0.029      0.022
0.95  200   0.055      0.057
0.95  1000  0.05       0.05
0.95  5000  0.032      0.027

Table 3: Average RMSEs for β = 10

π0    n     Irregular  Regular
0.5   200   0.064      0.117
0.5   1000  0.03       0.063
0.5   5000  0.014      0.034
0.8   200   0.057      0.099
0.8   1000  0.032      0.062
0.8   5000  0.015      0.034
0.95  200   0.055      0.057
0.95  1000  0.039      0.051
0.95  5000  0.014      0.023
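To make the plug-in idea \hat\pi_0 = \hat f(1) concrete, the sketch below draws p-values from the mixture (13) with h equal to a Beta(1, β) density and reads off the histogram density value in the bin containing p = 1. For simplicity it uses a regular grid, whereas the procedure studied in this paper selects an irregular, quantile-based partition; all function names here are ours, introduced for illustration only.

```python
import numpy as np

def simulate_pvalues(n, pi0, beta, rng):
    """Draw n p-values from the mixture (13): true nulls are Uniform[0, 1],
    false nulls follow the Beta(1, beta) alternative density h."""
    null = rng.random(n) < pi0
    return np.where(null, rng.random(n), rng.beta(1.0, beta, size=n))

def pi0_histogram(p, k=20):
    """Plug-in estimate pi0_hat = f_hat(1): the histogram density value in
    the bin containing p = 1. A regular k-bin grid is used purely for
    illustration; the paper's procedure selects an irregular partition."""
    dens, _ = np.histogram(p, bins=k, range=(0.0, 1.0), density=True)
    return dens[-1]

rng = np.random.default_rng(1)
p = simulate_pvalues(5000, pi0=0.8, beta=10.0, rng=rng)
est = pi0_histogram(p)  # close to pi0 when h puts negligible mass near 1
```

Because Beta(1, 10) assigns essentially no mass near p = 1, the density of the last bin is driven almost entirely by the uniform null component, which is what makes the plug-in estimate sensible.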
Our irregular histogram procedure seems like an attractive method for estimating f in this case, as the support of the distribution of the p-values is known. This
avoids having to use boundary corrections, which are necessary for some classical density estimators such as kernel-based ones, since we are primarily concerned with estimating a value at the boundary of the support of the density.

To investigate the performance of our irregular histogram procedure in this setting, we conducted a small simulation study based on the model in (13), before applying it to a real dataset. In our simulations, we took the probability density h of the alternative hypotheses equal to the Beta(1, β) density for values of β in the range [2, 10]. We varied the parameter \pi_0 \in \{0.5, 0.8, 0.95\}, and considered samples of size n \in \{200, 1000, 5000\}. To measure the quality of the procedure, the root mean squared error (RMSE) of \hat\pi_0 was estimated from B = 500 Monte Carlo samples. For comparison, we included estimates of \pi_0 based on the automatic histogram procedure of Knuth (2019). In a multiple testing scenario there can be a considerable proportion of observed p-values in the vicinity of 0. To better show the details in the density near 0, we decided to use a quantile-based grid for the irregular histogram in order to introduce a greater number of possible cut points in this region.

The average RMSEs from the Monte Carlo simulations are shown in Tables 2 and 3. The relative performance of the two procedures depends heavily on the value of β in the simulations, with Knuth's method performing better for β = 4 while the irregular procedure wins out for β = 10. This is related to the amount of smoothing introduced by the two procedures in the tail of the density. The tendency of the irregular procedure to select a large bin in regions where the density is very flat worsens the performance for the Beta(1, 4), as this density still assigns a considerable amount of mass to p > 0.5,

Figure 5: Irregular (left) and regular (right) histograms fitted to the Hedenfalk et al. (2001) data.

resulting in greater bias.
In contrast, for β = 10 the right tail of h decays very quickly, and the bias of the irregular histogram procedure becomes negligible. In this case, the variance reduction brought about by the greater amount of smoothing leads to better performance in terms of RMSE.

To end the section, we consider an example from breast cancer research analyzed in Storey and Tibshirani (2003). The study of Hedenfalk et al. (2001) attempted to identify differences in gene expression in individuals with BRCA1- and BRCA2-mutation-positive tumors based on microarray measurements. Storey and Tibshirani (2003) use permutation tests to compute p-values for the measured differences in gene expression, leaving us with a sample of n = 3170 p-values. The resulting density estimates from applying the irregular histogram procedure and the regular histogram of Knuth are shown in Figure 5. The irregular histogram displays a very pronounced peak near 0, owing to the fact that a non-negligible proportion of the p-values in this dataset concentrate in a narrow interval around the origin. Despite having a very narrow width, the leftmost interval contains a total of 65 observations, accounting for more than 2% of the total sample. Although the regular histogram also shows a distinct mode near 0, it is much less pronounced than that of the irregular histogram. The regular histogram also appears to undersmooth significantly in the right tail, as there are many sporadic bumps here that are unlikely to reflect any real structure in the data. The resulting estimates of \pi_0 are \hat\pi_{0,\mathrm{irr}} = 0.713 and \hat\pi_{0,\mathrm{reg}} = 0.675 for the irregular and regular histogram models, respectively. For comparison, we also estimated \pi_0 using the convex, decreasing density estimator of Langaas et al. (2005), and the local false discovery rate method of Phipson (2013). This yielded estimates of \hat\pi_{0,\mathrm{cd}} = 0.671 for the former method and \hat\pi_{0,\mathrm{lfdr}} = 0.744 for the latter, indicating that the results from both of
the histogram methods are quite reasonable in this case.

7 Discussion

We conclude the article with a brief discussion of possible extensions of the random irregular histogram, and how the ideas presented here can be used to construct computationally tractable Bayesian models for regression and hazard rate estimation.

The results of our simulation study show that the random irregular histogram model excels at mode detection for larger samples, a feat that cannot be achieved by regular histogram procedures designed for optimal performance with respect to classical estimation error criteria (Scott, 1992). However, the improved performance of regular histograms in terms of classical loss functions for spatially homogeneous densities has led some authors to argue for their continued use, cf. Birgé and Rozenholc (2006). Rozenholc et al. (2010) propose constructing both a regular and an irregular histogram based on a penalized likelihood; the resulting histogram is then chosen to be the one with the lower penalized likelihood of the two. Although this approach often succeeds in reducing the estimation error compared to an irregular procedure, the resulting density estimates still suffer from poor automatic mode detection whenever the regular histogram is chosen. Another possible approach in this regard would be to use a prior distribution (or penalty in the penalized likelihood case) such that, for a given value of k, the partitions which have more regular bin widths are assigned more prior probability or a smaller penalty. This could lead to an improved estimation error for spatially homogeneous densities, while keeping the advantage of an irregular procedure in terms of mode identification.

In this article, we have concerned ourselves with the task of finding the best piecewise constant density estimate, but many of the ideas presented here can be extended to other areas of statistics such as semiparametric regression and hazard rate estimation.
In the semiparametric regression setup, we consider the model y_i = m(x_i) + \epsilon_i, where m : [0, 1] \to \mathbb R is an unknown regression function and the \epsilon_i are independent and Normal(0, \tau^{-2}). A possible choice of nonparametric prior for m is a piecewise constant model, where m takes the fixed value m_j on I_j. In this case, letting m_j \mid \mathcal I \sim \mathrm{Normal}(\mu, \beta^{-2}) leads to a conjugate model for given \mathcal I, which results in an analytical expression for the marginal likelihood. The computational techniques outlined in this paper can then be used to find the MAP partition \hat{\mathcal I} in this model, and the corresponding estimated regression function is a Bayesian variant of the regressogram, the regression equivalent of the histogram (Klemelä, 2012). In the hazard rate case, Gamma priors can be used to yield a similarly computationally tractable expression for the posterior of \mathcal I.

Appendix A: Sensitivity to choices of prior parameters

To investigate the sensitivity of the irregular random histogram procedure to the choice of prior distribution, we conducted some small numerical experiments. In each experiment, we estimated the Hellinger risk for a given configuration of the prior parameters by generating random samples x^{(b)} of size n \in \mathcal N = \{100, 1000, 10000\} from three test densities for b = 1, 2, \ldots, B. The risks were estimated by a simple Monte Carlo average,
\[
\widehat{R}_n(f_0, j) = \frac{1}{B} \sum_{b=1}^{B} d_H\big(f_0, \hat f_j^{(b)}\big), \qquad (n, j, f_0) \in \mathcal N \times \mathcal M \times \mathcal D', \tag{A.1}
\]
where \mathcal D' denotes a set of test densities, \mathcal M is the model index set for the different prior configurations and \hat f_j^{(b)} is the Bayes histogram estimate based on x^{(b)} when prior j is used. In our simulations, we took B = 500. The chosen test densities were the Gamma(3, 3), the Beta(3, 3) and the t-distribution with 3 degrees of freedom. For the Beta(3, 3) we took [0, 1] to be the support of the density estimate, while for the Gamma(3, 3) we computed the histogram estimate on [0, x_{(n)}]. For the t-distribution the left- and rightmost endpoints of the histogram were set equal to the minimum and maximum of the sample, respectively.
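The Hellinger distances entering Monte Carlo averages like (A.1) can be computed numerically by integrating piecewise over the bins, mirroring the piecewise Gauss quadrature described in Section 5. Since the histogram estimate is constant within each bin, the integrand is smooth there and a fixed-order Gauss–Legendre rule per bin suffices. The sketch below uses our own names, not the paper's code:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def hellinger_hist(true_pdf, breaks, heights, nodes=32):
    """Hellinger distance d_H between a density `true_pdf` and a piecewise
    constant estimate with bin edges `breaks` and bin densities `heights`,
    integrating (sqrt(f) - sqrt(f_hat))^2 with Gauss-Legendre quadrature
    separately on each bin, where the estimate is continuous."""
    z, w = leggauss(nodes)                     # nodes/weights on [-1, 1]
    d2 = 0.0
    for a, b, h in zip(breaks[:-1], breaks[1:], heights):
        x = 0.5 * (b - a) * z + 0.5 * (a + b)  # map nodes to [a, b]
        integrand = (np.sqrt(true_pdf(x)) - np.sqrt(h)) ** 2
        d2 += 0.5 * (b - a) * np.dot(w, integrand)
    return np.sqrt(d2)

# Sanity check: a histogram that equals the density exactly has distance 0.
uniform = lambda x: np.ones_like(x)
assert abs(hellinger_hist(uniform, np.array([0.0, 0.5, 1.0]), [1.0, 1.0])) < 1e-12
```

Averaging such distances over B replicated fits gives a Monte Carlo estimate of the Hellinger risk in (A.1).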
A.1 Prior for the number of bins

To start, we investigated the effect of different prior distributions for the number k of bins. In the experiment, the value of the Dirichlet concentration parameter a was kept fixed at 1. We tested the prior distributions p_n(k) \propto 1/k^m for m \in \{0, 1, 2\} and the Poiss(1) distribution with p_n(k) \propto 1/k!, as this prior fulfills the conditions of Theorem 2. In order
to present the results, we computed the logarithm of the risk in (A.1) relative to that of the uniform prior. The resulting log-relative risks are shown in Figure 6. The uniform prior appears to yield the best performance for most combinations of the sample size and true density, with the exception of the Gamma(3, 3) for n = 10^4. Both p_n(k) \propto 1/k and p_n(k) \propto 1/k^2 generally perform similarly and not much worse than the uniform. The Poisson prior does the worst in terms of Hellinger risk, indicating that it puts too high a penalty on larger values of k compared to what is optimal in this case.

Figure 6: Logarithm of risk relative to the uniform prior p_n(k) \propto 1.

Figure 7: Left: Mean value of k aggregated over all test densities by sample size. Right: Logarithm of risk relative to a = 1.

A.2 Choice of the Dirichlet concentration parameter

To study the effect of the concentration parameter a on the quality of the resulting density estimates, a similar numerical experiment to that in the previous subsection was conducted. We tested the values a \in \{0.01, 0.1, 1, 10, 100\}. Across all values of a, we kept the prior for k fixed to a uniform one. To investigate the effect of the concentration parameter on the number of bins in the MAP partition, we computed the mean number of bins chosen in the simulations for each choice of a. Furthermore, the relative Hellinger risk of the procedure was estimated for different values of a, using a = 1 as a reference. The results are shown in Figure 7. The mean number of bins chosen by the procedure tends to increase along with the concentration parameter a. This illustrates the point made in Section 3.3; although smaller values of a yield estimates of the bin probabilities that assign more weight to the data through (7), its effect on the posterior of \mathcal I cannot be neglected, and it can thus be beneficial to set the concentration parameter to a larger value in practice. We also plotted a against the mean number of bins chosen separately for each density, but this showed a very similar picture to the aggregated results and is therefore omitted.
In terms of risk, the procedure with a = 10 performs the best in all cases. Its lower risk compared to a = 1 can likely be attributed to its tendency to choose a greater number of bins. The smallest value of the concentration parameter, a = 0.01, performs quite poorly, and the choice a = 0.1 is only slightly better. The choice a = 100 does very poorly for the smallest sample sizes. This is rather unsurprising in light of (7), as for n = 100 the prior mean ends up contributing significantly to the interval probability estimates. As the sample size increases, the influence of the prior mean weakens, and the relative risk of this procedure improves considerably. Overall, it seems that a choice of a \in [1, 10] should perform reasonably well for the sample sizes considered here.

Appendix B: Proofs

Our proofs are mainly based on the theory developed in Ghosal et al. (1999, 2000). These are based on the use of covering numbers, which provide a measure of the complexity of a given model. For a metric space (X, d), the covering number N(\epsilon, \mathcal Y, d) of a set \mathcal Y \subseteq X is defined as the minimal number of open balls in X needed to cover \mathcal Y. Covering numbers can be controlled using the slightly stronger concept of bracketing numbers. For a metric space (\mathcal F, d) consisting of functions from A \to B, we define the bracket [f, g] for two functions f, g \in \mathcal F to be the set of functions h : A \to B satisfying f(x) \le h(x) \le g(x) for all x \in A. For two vectors a, b \in \mathbb R^k in a Euclidean metric space, we define the bracket [a, b] as the set of those c satisfying a_j \le c_j \le b_j for all j. We define the bracketing number of a set \mathcal H \subseteq \mathcal F to be the minimal number of brackets in \mathcal F needed to cover \mathcal H.

Before proceeding with the proofs, we review some notation. Given an observed i.i.d. sample x = (x_1, \ldots
, x_n), a likelihood function f and a prior P_n on \mathcal F, the space of probability densities on [0, 1], the posterior probability of a measurable set A is
\[
P_n(f \in A \mid x) = \frac{\int_A \prod_{i=1}^n f(x_i)\, dP_n(f)}{\int_{\mathcal F} \prod_{i=1}^n f(x_i)\, dP_n(f)}.
\]
From the form of the random histogram prior given in (4), this probability can be decomposed as
\[
P_n(f \in A \mid x) = \frac{\sum_{\mathcal I \in \mathcal P_{T_n}} p_n(\mathcal I) \int_{A_{\mathcal I}} p(\theta \mid \mathcal I) \prod_{i=1}^n f_{\mathcal I,\theta}(x_i)\, d\theta}{\sum_{\mathcal I \in \mathcal P_{T_n}} p_n(\mathcal I) \int_{\mathcal S_k} p(\theta \mid \mathcal I) \prod_{i=1}^n f_{\mathcal I,\theta}(x_i)\, d\theta},
\]
where A_{\mathcal I} = \{\theta \in \mathcal S_k : f_{\mathcal I,\theta} \in A\} and f_{\mathcal I,\theta} is the density given in (1). Since our primary goal is to study the posterior mean of f conditional on \hat{\mathcal I}, the posterior distribution f \mid x, \mathcal I is of interest. Conditional on the chosen partition \mathcal I \in \mathcal P_{T_n,k}, we have
\[
P_n(f_{\mathcal I,\theta} \in A \mid x, \mathcal I) = \frac{\int_{A_{\mathcal I}} p(\theta \mid \mathcal I) \prod_{i=1}^n f_{\mathcal I,\theta}(x_i)\, d\theta}{\int_{\mathcal S_k} p(\theta \mid \mathcal I) \prod_{i=1}^n f_{\mathcal I,\theta}(x_i)\, d\theta}.
\]

B.1 Convergence rate

For M > 0 and a sequence \epsilon_n = O(1), we define A_n = \{ f : d_H(f_0, f) \ge M \epsilon_n \}. In line with the notation introduced in the beginning of the section, for \mathcal I \in \mathcal P_{T_n,k} we denote by A_{n,\mathcal I} the set \{ \theta \in \mathcal S_k : d_H(f_0, f_{\mathcal I,\theta}) \ge M \epsilon_n \}.

Our first result concerns the consistency of the conditional posterior mean based on a maximum a posteriori model selection procedure.

Lemma B.1. Let \epsilon_n, \bar\epsilon_n be sequences satisfying \bar\epsilon_n \le \epsilon_n and n \bar\epsilon_n^2 \to \infty. Suppose that for some b > 0 and a constant c > 0, we can find a sequence of interval partitions \{\mathcal I^{(n)}\}_{n \in \mathbb N} of [0, 1], where \mathcal I^{(n)} \in \mathcal P_{T_n}, such that the event
\[
e^{(2+c) n \bar\epsilon_n^2}\, p\big(x \mid \mathcal I^{(n)}\big)\, p_n\big(\mathcal I^{(n)}\big) \Big/ \prod_{i=1}^n f_0(x_i) > b, \tag{B.2}
\]
denoted B_n, satisfies P_0^n(x \in B_n^c) = O(\epsilon_n^2). Suppose further that we can find sets \mathcal F_n in \mathcal F such that
\[
N(\epsilon_n/2, \mathcal F_n, d_H) < \exp\big(n \epsilon_n^2\big), \tag{B.3}
\]
\[
P_n(\mathcal F_n^c) < \exp\big(-\{c + 4\} n \bar\epsilon_n^2\big). \tag{B.4}
\]
Then for all sufficiently large M > 0 we have
\[
\mathbb E_{x \sim f_0}\, P_n\big(f_{\hat{\mathcal I},\theta} \in A_n \mid x, \hat{\mathcal I}\big) = O\big(\epsilon_n^2\big).
\]

Proof. Take \{\mathcal I^{(n)}\} to satisfy (B.2), and write D_n(\mathcal I) = p(x \mid \mathcal I)\, p_n(\mathcal I) / \prod_{i=1}^n f_0(x_i). We work with the following posterior probability:
\[
P_n\big(f_{\hat{\mathcal I},\theta} \in A_n \mid x, \hat{\mathcal I}\big)
= \frac{\int_{A_{n,\hat{\mathcal I}}} p(\theta \mid \hat{\mathcal I}) \prod_{i=1}^n f_{\hat{\mathcal I},\theta}(x_i)\, d\theta}{\int_{\mathcal S_k} p(\theta \mid \hat{\mathcal I}) \prod_{i=1}^n f_{\hat{\mathcal I},\theta}(x_i)\, d\theta}
\le \frac{\int_{A_{n,\hat{\mathcal I}}} p(\theta \mid \hat{\mathcal I}) \prod_{i=1}^n f_{\hat{\mathcal I},\theta}(x_i)\, d\theta}{\int_{\mathcal S_k} p(\theta \mid \hat{\mathcal I}) \prod_{i=1}^n f_{\hat{\mathcal I},\theta}(x_i)\, d\theta}\, \mathbf 1_{B_n}(x) + \mathbf 1_{B_n^c}(x).
\]
To get a useful bound on the above probability, we multiply both the numerator and the denominator in the first term of the above expression by p_n(\hat{\mathcal I}) / \prod_{i=1}^n f_0(x_i), resulting in the following inequalities:
\[
P_n\big(f_{\hat{\mathcal I},\theta} \in A_n \mid x, \hat{\mathcal I}\big)
\le D_n\big(\hat{\mathcal I}\big)^{-1} p_n\big(\hat{\mathcal I}\big) \int_{A_{n,\hat{\mathcal I}}} p\big(\theta \mid \hat{\mathcal I}\big) \prod_{i=1}^n \frac{f_{\hat{\mathcal I},\theta}(x_i)}{f_0(x_i)}\, d\theta\, \mathbf 1_{B_n}(x) + \mathbf 1_{B_n^c}(x)
\]
\[
\le D_n\big(\mathcal I^{(n)}\big)^{-1} p_n\big(\hat{\mathcal I}\big) \int_{A_{n,\hat{\mathcal I}}} p\big(\theta \mid \hat{\mathcal I}\big) \prod_{i=1}^n \frac{f_{\hat{\mathcal I},\theta}(x_i)}{f_0(x_i)}\, d\theta\, \mathbf 1_{B_n}(x) + \mathbf 1_{B_n^c}(x),
\]
where we used the argmax property of \hat{\mathcal I}, so that p_n(\hat{\mathcal I})\, p(x \mid \hat{\mathcal I}) \ge p_n(\mathcal I^{(n)})\, p(x \mid \mathcal I^{(n)}) and hence D_n(\hat{\mathcal I}) \ge D_n(\mathcal I^{(n)}), for the second inequality. The first term in the above expression can be further bounded by
\[
b^{-1} e^{(2+c) n \bar\epsilon_n^2}\, p_n\big(\hat{\mathcal I}\big) \int_{A_{n,\hat{\mathcal I}}} p\big(\theta \mid \hat{\mathcal I}\big) \prod_{i=1}^n \frac{f_{\hat{\mathcal I},\theta}(x_i)}{f_0(x_i)}\, d\theta
\le b^{-1} e^{(2+c) n \bar\epsilon_n^2} \sum_{\mathcal I \in \mathcal P_{T_n}} p_n(\mathcal I) \int_{A_{n,\mathcal I}} p(\theta \mid \mathcal I) \prod_{i=1}^n \frac{f_{\mathcal I,\theta}(x_i)}{f_0(x_i)}\, d\theta
= b^{-1} e^{(2+c) n \bar\epsilon_n^2} \int_{A_n} \prod_{i=1}^n \frac{f(x_i)}{f_0(x_i)}\, dP_n(f).
\]
Arguing as in the proof of Theorem 8.9 in Ghosal and van der Vaart (2017), under the conditions (B.3) and (B.4) the expectation of the right hand side of the above under P_0^n is O(e^{-\rho n \epsilon_n^2}) when it is multiplied by e^{(2+c) n \bar\epsilon_n^2}, for sufficiently large M > 0 and some \rho > 0 not depending on n. From this we deduce that
\[
\mathbb E_{x \sim f_0}\big\{ P_n\big(f_{\hat{\mathcal I},\theta} \in A_n \mid x, \hat{\mathcal I}\big) \big\} \le O\big(e^{-\rho n \epsilon_n^2}\big) + P_0^n\big(x \in B_n^c\big). \tag{B.5}
\]
To conclude the proof, we note that by arguing as on page 507 of Ghosal et al. (2000), we have
\[
d_H^2\big(f_0, \hat f_{\hat{\mathcal I}}\big) \le M^2 \epsilon_n^2 + 2 P_n\big(f_{\hat{\mathcal I},\theta} \in A_n \mid
\]
x, \hat{\mathcal I}\big), which follows from the convexity of the map g \mapsto d_H^2(f_0, g), the conditional version of Jensen's inequality and the fact that the Hellinger metric is bounded by \sqrt 2. Since both terms on the right hand side of (B.5) are O(\epsilon_n^2), we obtain \mathbb E_{x \sim f_0}\, d_H^2\big(f_0, \hat f_{\hat{\mathcal I}}\big) = O(\epsilon_n^2), which proves the claim.

We now turn our attention to finding a useful bound on the covering number. Define
\[
\mathcal H_{T_n,k} = \Big\{ f : [0,1] \to \mathbb R : f(x) = \sum_{j=1}^k |I_j|^{-1} \theta_j \mathbf 1_{I_j}(x) \text{ for some } \mathcal I \in \mathcal P_{T_n,k} \text{ and } \theta \in \mathcal S_k \Big\}. \tag{B.6}
\]
To show that the conditions of Lemma B.1 hold, we need a bound on the bracketing number of \mathcal H_{T_n,k}. This is achieved by relating it to the bracketing number of the simplex \mathcal S_k. Variants of the following lemmas appear in the proofs of Ghosal (2001); Scricciolo (2007). We include the argument in its full length here for completeness.

Lemma B.2. Let \mathcal I be a partition of [0, 1]. Let l = \sum_{j=1}^k a_j \omega_j and u = \sum_{j=1}^k b_j \omega_j with \omega_j(x) = |I_j|^{-1} \mathbf 1_{I_j}(x). Suppose that h(x) = \sum_{j=1}^k \theta_j \omega_j(x), where \theta \in \mathcal S_k. If [a, b] is an \epsilon-bracket containing \theta, then [l, u] is an \epsilon-bracket containing h.

Proof. The claim that h \in [l, u] follows from the fact that |I_j|^{-1} a_j \le |I_j|^{-1} \theta_j \le |I_j|^{-1} b_j for all j by assumption. It remains to show that d_H(l, u) < \epsilon. We start off by noting that
\[
d_H^2\big(a_j \omega_j, b_j \omega_j\big) = \int_{I_j} \Big( \sqrt{|I_j|^{-1} a_j} - \sqrt{|I_j|^{-1} b_j} \Big)^2 dx = \big( \sqrt{a_j} - \sqrt{b_j} \big)^2.
\]
From Lemma 4 in Genovese and Wasserman (2000), we have that d_H^2(l, u) \le \sum_{j=1}^k d_H^2(a_j \omega_j, b_j \omega_j) and hence,
\[
d_H^2(l, u) \le \sum_{j=1}^k \big( \sqrt{a_j} - \sqrt{b_j} \big)^2 = d_H^2(a, b) < \epsilon^2,
\]
which was to be shown.

Define \mathcal H_{T_n,\mathcal I} to be the set of piecewise constant densities based on the partition \mathcal I. We now give a bound on the covering number of \mathcal H_{T_n,\mathcal I}.

Lemma B.3. Let \mathcal I \in \mathcal P_{T_n,k}. Then the \epsilon-bracketing number of \mathcal H_{T_n,\mathcal I} for \epsilon \le 1 satisfies
\[
N_{[\,]}\big(\epsilon, \mathcal H_{T_n,\mathcal I}, d_H\big) \le \frac{k (2\pi e)^{k/2}}{\epsilon^{k-1}}. \tag{B.7}
\]
Moreover, the \epsilon-covering number of \mathcal F_n = \cup_{k=1}^{s_n} \mathcal H_{T_n,k} satisfies
\[
N\big(\epsilon, \mathcal F_n, d_H\big) \le \Big( \frac{L k_n}{\epsilon s_n} \Big)^{s_n} s_n \tag{B.8}
\]
for a universal constant L > 0.

Proof. To show that the first claim holds, let [a_1, b_1], \ldots, [a_m, b_m] be an \epsilon-bracketing of \mathcal S_k, and assume that m is chosen so that the number of brackets is minimal. Define the associated functions l_1, u_1, \ldots
, l_m, u_m as in Lemma B.2,
\[
l_i(x) = \sum_{j=1}^k a_{i,j} |I_j|^{-1} \mathbf 1_{I_j}(x), \qquad u_i(x) = \sum_{j=1}^k b_{i,j} |I_j|^{-1} \mathbf 1_{I_j}(x).
\]
Let h \in \mathcal H_{T_n,\mathcal I} be given by h(x) = \sum_{j=1}^k |I_j|^{-1} \theta_j \mathbf 1_{I_j}(x), for some \theta \in \mathcal S_k. Since \mathcal S_k \subseteq \cup_{l=1}^m [a_l, b_l], we have \theta \in [a_i, b_i] for some i. By Lemma B.2, [l_i, u_i] is an \epsilon-bracket containing h, and since h was arbitrary, we conclude that \mathcal H_{T_n,\mathcal I} \subseteq \cup_{i=1}^m [l_i, u_i]. By the minimality of m, this shows that the Hellinger bracketing number of \mathcal H_{T_n,\mathcal I} can be no greater than that of \mathcal S_k. By Lemma 2 in Genovese and Wasserman (2000), the bracketing number of the k-simplex \mathcal S_k satisfies
\[
N_{[\,]}\big(\epsilon, \mathcal S_k, d_H\big) \le \frac{k (2\pi e)^{k/2}}{\epsilon^{k-1}}.
\]
Using this fact we find that
\[
N_{[\,]}\big(\epsilon, \mathcal H_{T_n,\mathcal I}, d_H\big) \le N_{[\,]}\big(\epsilon, \mathcal S_k, d_H\big) \le \frac{k (2\pi e)^{k/2}}{\epsilon^{k-1}},
\]
which was to be shown.

To prove the second part of the lemma, we note that for any k \le s_n, a piecewise constant density based on the partition \mathcal I \in \mathcal P_{T_n,k} can be written as
\[
f_{\theta,\mathcal I}(x) = \sum_{j=1}^k |I_j|^{-1} \theta_j \mathbf 1_{I_j}(x), \qquad x \in [0,1],
\]
for some \theta \in \mathcal S_k. Now by choosing \mathcal J \in \mathcal P_{T_n,s_n} such that \mathcal J is a refinement of \mathcal I and \tilde\theta \in \mathcal S_{s_n} appropriately, we have that f_{\mathcal J,\tilde\theta}(x) = f_{\mathcal I,\theta}(x) for each x \in [0,1]. We conclude that \mathcal H_{T_n,k} \subseteq \mathcal H_{T_n,s_n}, and as such, \mathcal F_n = \cup_{k=1}^{s_n} \mathcal H_{T_n,k} \subseteq \mathcal H_{T_n,s_n}. Thus, it suffices to bound the covering number of \mathcal H_{T_n,s_n}. Let \mathcal H_{T_n,\mathcal I} be the set of piecewise constant densities based on the partition \mathcal I. By construction, we have that
\[
\mathcal H_{T_n,s_n} = \bigcup_{\mathcal I \in \mathcal P_{T_n,s_n}} \mathcal H_{T_n,\mathcal I}.
\]
By (B.7), the bracketing number of \mathcal H_{T_n,\mathcal I} for \mathcal I \in \mathcal P_{T_n,s_n} is bounded by the bracketing number of \mathcal S_{s_n}. We note that there are \binom{k_n - 1}{s_n - 1} possible partitions of [0, 1] of size s_n. By a union bound, we have for any \epsilon \le 1 that
\[
N_{[\,]}\big(\epsilon, \mathcal H_{T_n,s_n}, d_H\big) \le \sum_{\mathcal I \in \mathcal P_{T_n,s_n}} N_{[\,]}\big(\epsilon, \mathcal H_{T_n,\mathcal I}, d_H\big) \le \sum_{\mathcal I \in \mathcal P_{T_n,s_n}} N_{[\,]}\big(\epsilon, \mathcal S_{s_n},
d_H\big) = \binom{k_n - 1}{s_n - 1} N_{[\,]}\big(\epsilon, \mathcal S_{s_n}, d_H\big).
\]
Using the fact that the bracketing number bounds the covering number for the same \epsilon together with previous estimates, we deduce the following chain of inequalities:
\[
N\big(\epsilon, \mathcal H_{T_n,s_n}, d_H\big) \le \sum_{\mathcal I \in \mathcal P_{T_n,s_n}} N_{[\,]}\big(\epsilon, \mathcal H_{T_n,\mathcal I}, d_H\big) \le \binom{k_n - 1}{s_n - 1} N_{[\,]}\big(\epsilon, \mathcal S_{s_n}, d_H\big) \le \frac{(e k_n / s_n)^{s_n} s_n (2\pi e)^{s_n/2}}{\epsilon^{s_n - 1}},
\]
where we used that \binom{k_n - 1}{s_n - 1} \le \binom{k_n}{s_n} \le (e k_n / s_n)^{s_n}. To conclude the proof, we note that, as s_n \le 2^{s_n} for s_n \ge 1,
\[
N\big(\epsilon/2, \mathcal H_{T_n,s_n}, d_H\big) \le \Big( \frac{e k_n}{s_n} \Big)^{s_n} \frac{s_n (2\pi e)^{s_n/2}}{(\epsilon/2)^{s_n - 1}} \le \Big( \frac{L k_n}{\epsilon s_n} \Big)^{s_n} s_n,
\]
for some constant L > 0, which was to be shown.

In the sequel, for a given function h \in L^1([0,1]) we write h_{\mathcal I} for the piecewise constant version of h based on the partition \mathcal I \in \mathcal P_{T_n,k}, defined by
\[
h_{\mathcal I}(x) = \sum_{j=1}^k h_j \mathbf 1_{I_j}(x), \qquad x \in [0,1],
\]
where h_j = |I_j|^{-1} \int_{I_j} h(x)\, dx. The following lemma establishes an approximation rate result for Hölder continuous functions.

Lemma B.4. Let h : [0,1] \to \mathbb R be \alpha-Hölder continuous. Let \{\mathcal I^{(k)}\}_{k=1}^\infty be a sequence of partitions of [0, 1] such that \max_j |I^{(k)}_j| = O(k^{-1}). Then the sequence of piecewise constant approximations \{h_{\mathcal I^{(k)}}\}_{k=1}^\infty of h satisfies
\[
\big\| h - h_{\mathcal I^{(k)}} \big\|_\infty = O\big(k^{-\alpha}\big).
\]
Moreover, if h is strictly positive we have
\[
d_H^2\big(h, h_{\mathcal I^{(k)}}\big) = O\big(k^{-2\alpha}\big).
\]

Proof. Fix k \in \mathbb N and let x \in [0,1]. Let j be such that x \in I^{(k)}_j. Then,
\[
\big| h(x) - h_{\mathcal I^{(k)}}(x) \big| = \Big| h(x) - |I^{(k)}_j|^{-1} \int_{I^{(k)}_j} h(s)\, ds \Big| \le |I^{(k)}_j|^{-1} \int_{I^{(k)}_j} |h(x) - h(s)|\, ds \le |I^{(k)}_j|^{-1} \int_{I^{(k)}_j} L_0 |x - s|^\alpha\, ds \le |I^{(k)}_j|^{-1} L_0 |I^{(k)}_j|^{1+\alpha} = L_0 |I^{(k)}_j|^\alpha,
\]
where we used that |x - s| \le |I^{(k)}_j| for x, s \in I^{(k)}_j. Since \max_j |I^{(k)}_j| = O(k^{-1}), we find
\[
\big\| h - h_{\mathcal I^{(k)}} \big\|_\infty = \sup_{x \in [0,1]} \big| h(x) - h_{\mathcal I^{(k)}}(x) \big| \le L_0 \max_j |I^{(k)}_j|^\alpha = O(k^{-\alpha}).
\]
For the second part, we note that
\[
d_H^2\big(h, h_{\mathcal I^{(k)}}\big) \le \int_0^1 \frac{\big( h(x) - h_{\mathcal I^{(k)}}(x) \big)^2}{h(x)}\, dx,
\]
cf. Lemmas 2.5 and 2.7 in Tsybakov (2009). Since h is continuous and strictly positive, the last expression is upper bounded by a constant multiple of \| h - h_{\mathcal I^{(k)}} \|_\infty^2 as a consequence of the extreme value theorem.

To show that the condition (B.2) holds, it suffices to show that the prior distribution assigns sufficient probability to the event that f \in \mathcal N'_{\epsilon_n}(f_0), where
\[
\mathcal N'_\epsilon(f_0) = \big\{ f : d_H^2(f_0, f)\, \| f_0/f \|_\infty \le \epsilon^2 \big\}, \qquad \epsilon > 0.
\]
The following lemma gives a lower bound on the probability of \mathcal N'_\epsilon(f_0).

Lemma B.5.
Suppose the hypotheses of Theorem 2 are met. Let C_1, C_2 be positive constants and let k = k(\epsilon) \in \mathbb N satisfy
\[
C_1 \epsilon^{-1/\alpha} \le k \le C_2 \epsilon^{-1/\alpha}. \tag{B.9}
\]
Then we can find a partition \mathcal I^{(n)} \in \mathcal P_{T_n,k} and two constants c_1, c_2 not depending on \epsilon, k and \mathcal I^{(n)} such that for all sufficiently small \epsilon > 0,
\[
P_n\big(\mathcal N'_\epsilon(f_0)\big) \ge p_n\big(\mathcal I^{(n)}\big)\, c_1 \exp\big(-c_2 k \log(1/\epsilon)\big), \tag{B.10}
\]
provided n is sufficiently large.

Proof. We argue as in the beginning of the proof of Theorem 2 in Scricciolo (2007). Let \epsilon > 0 to be determined later. If n is such that k(\epsilon) > k_n, then the inequality (B.10) is trivially true, as the right hand side is zero. Our strategy will be to relate the neighborhood \mathcal N'_\epsilon(f_0) to neighborhoods in terms of the squared Hellinger distance, as the latter is more convenient to work with. To accomplish this, we first need an estimate of the squared Hellinger distance between the piecewise constant approximation and the true density f_0. We start by constructing the partition \mathcal I^{(n)} by including every \lceil k_n/k \rceil-th index in the partition. The maximal size of a bin in this partition is then bounded by
\[
\max_j \big| I^{(n)}_j \big| \le \lceil k_n/k \rceil \max_j \{ \tau_{n,j} - \tau_{n,j-1} \} \le A \lceil k_n/k \rceil k_n^{-1} \le 2 A k^{-1},
\]
provided n is sufficiently large. This shows that \max_j |I^{(n)}_j| = O(k^{-1}). By a similar argument, we arrive at \min_j |I^{(n)}_j| \ge B k^{-1}/2. Let \gamma = c_0 \epsilon for a positive constant c_0 to be determined later. To bound the squared Hellinger distance between f_0 and f_{\mathcal I^{(n)},\theta} we apply the triangle inequality together with the inequality (a+b)^2 \le 2a^2 + 2b^2, yielding
\[
d_H^2\big(f_0, f_{\mathcal I^{(n)},\theta}\big) \le 2 d_H^2\big(f_0, f_{0,\mathcal I^{(n)}}\big) +
2 d_H^2\big(f_{0,\mathcal I^{(n)}}, f_{\mathcal I^{(n)},\theta}\big). \tag{B.11}
\]
To bound the first term of (B.11), we leverage Lemma B.4 to find that
\[
d_H^2\big(f_0, f_{0,\mathcal I^{(n)}}\big) \le \tilde A k^{-2\alpha} \le \tilde A C_1^{-2\alpha} \epsilon^2 = \tilde A C_1^{-2\alpha} c_0^2 \gamma^2,
\]
for some constant \tilde A > 0. We now bound the last term on the right hand side of (B.11). Denoting \pi^{(n)}_j = \int_{I^{(n)}_j} f_0(x)\, dx and using the fact that d_H^2\big(f_{\mathcal I^{(n)},\theta}, f_{0,\mathcal I^{(n)}}\big) \le \| f_{\mathcal I^{(n)},\theta} - f_{0,\mathcal I^{(n)}} \|_1, we obtain
\[
d_H^2\big(f_{\mathcal I^{(n)},\theta}, f_{0,\mathcal I^{(n)}}\big) \le \big\| f_{\mathcal I^{(n)},\theta} - f_{0,\mathcal I^{(n)}} \big\|_1 = \sum_{j=1}^k \int_{I^{(n)}_j} \big| I^{(n)}_j \big|^{-1} \big| \theta_j - \pi^{(n)}_j \big|\, dx = \sum_{j=1}^k \big| \theta_j - \pi^{(n)}_j \big| = \big\| \theta - \pi^{(n)} \big\|_1.
\]
Now for any simplex vector \theta \in \mathcal S_k satisfying \| \theta - \pi^{(n)} \|_1 \le C_1^{-2\alpha} c_0^2 \gamma^{1+1/\alpha}, it follows from (B.11) that
\[
d_H^2\big(f_0, f_{\mathcal I^{(n)},\theta}\big) \le 2 \tilde A k^{-2\alpha} + 2 \big\| \theta - \pi^{(n)} \big\|_1 \le 2 \tilde A k^{-2\alpha} + 2 C_1^{-2\alpha} c_0^2 \gamma^{1+1/\alpha} \le 2 \big( \tilde A C_1^{-2\alpha} c_0^2 + C_1^{-2\alpha} c_0^2 \big) \gamma^2 = c_3 \gamma^2,
\]
where we used that \gamma^{1+1/\alpha} \le \gamma^2 for \gamma \in (0,1) and c_3 = 2 c_0^2 C_1^{-2\alpha} (\tilde A + 1) is a constant. Note that c_3 \downarrow 0 as c_0 \downarrow 0, so c_3 can be made arbitrarily small by choosing c_0 to be suitably small. Denote
\[
A_\gamma = \big\{ \theta \in \mathcal S_k : \| \theta - \pi^{(n)} \|_1 \le C_1^{-2\alpha} c_0^2 \gamma^{1+1/\alpha} \big\}.
\]
To bound the likelihood ratio \| f_0 / f_{\mathcal I^{(n)},\theta} \|_\infty, we note that for m = \inf_{x \in [0,1]} f_0(x), we have \pi^{(n)}_j \ge m |I^{(n)}_j| for j = 1, 2, \ldots, k. For sufficiently small \gamma, we thus have on A_\gamma that
\[
\theta_j \ge \pi^{(n)}_j - \big\| \theta - \pi^{(n)} \big\|_1 \ge m \big| I^{(n)}_j \big| - C_1^{-2\alpha} c_0^2 \gamma^{1+1/\alpha}.
\]
To ensure that \theta_j \ge m |I^{(n)}_j|/2, we note that \gamma^{1+1/\alpha} \le O(k^{-1-\alpha}) = o(k^{-1}) and hence, any constant multiple of \gamma^{1+1/\alpha} is upper bounded by m B k^{-1}/4 for large values of k. Consequently C_1^{-2\alpha} c_0^2 \gamma^{1+1/\alpha} \le m \min_j |I^{(n)}_j|/2, which further implies \theta_j \ge m |I^{(n)}_j|/2 by the above display. From the above arguments we arrive at f_{\mathcal I^{(n)},\theta}(x) \ge m/2 and hence
\[
\big\| f_0 / f_{\mathcal I^{(n)},\theta} \big\|_\infty \le \frac{2 \| f_0 \|_\infty}{m}.
\]
It follows that
\[
d_H^2\big(f_0, f_{\mathcal I^{(n)},\theta}\big)\, \big\| f_0 / f_{\mathcal I^{(n)},\theta} \big\|_\infty \le c_4 \gamma^2,
\]
for a constant c_4 = c_4(c_0) \downarrow 0 as c_0 \downarrow 0. Applying the above estimates, we thus have that A_\gamma \subseteq \mathcal N'_\epsilon(f_0). As \gamma^{1+1/\alpha} = o(k^{-1}), any constant multiple of \gamma^{1+1/\alpha} is less than 1/(\Sigma k) for all sufficiently large k.
In addition, the lower bound $\sigma k^{-1} \le a_{k,j}$ implies that the Dirichlet parameters are lower bounded by a constant multiple of $\gamma^{b(1+1/\alpha)}$ for some $b > 0$, so the conditions of Lemma G.13 in Ghosal and van der Vaart (2017) are met, and we obtain the lower bound
$$P_n\{A_\gamma \mid \mathcal{I}^{(n)}\} \ge c_1\exp\{-c_5 k \log(2C_1^{2\alpha}c_0^{-2}/\gamma^{1+1/\alpha})\},$$
for some $c_5 > 0$. It remains to show that we can write the probability on the right-hand side in terms of $\epsilon$ only. Observe that for $\gamma = c_0\epsilon$ the expression $\log(2C_1^{2\alpha}c_0^{-2}/\gamma^{1+1/\alpha})/\log(1/\epsilon)$ converges to a positive, finite limit as $\epsilon \downarrow 0$. From this we conclude that for a suitable constant $c_2 > 0$,
$$P_n\{N'_\epsilon(f_0)\} \ge p_n\{\mathcal{I}^{(n)}\}\, c_1 \exp\{-c_2 k\log(1/\epsilon)\},$$
which was to be shown.

We are finally in a position to prove Theorem 2.

Proof of Theorem 2. We verify the conditions of Lemma B.1, beginning with the first condition, with $\bar\epsilon_n = \epsilon_n = \{n/\log(n)\}^{-\alpha/(2\alpha+1)}$. Letting $\{s_n\}$ be a sequence of integers satisfying $C_1\epsilon_n^{-1/\alpha} \le s_n \le C_2\epsilon_n^{-1/\alpha}$ for positive constants $C_1, C_2$, the result of Lemma B.5 yields
$$P_n\big(\{f : d_H^2(f_0,f)\,\|f_0/f\|_\infty \le \epsilon_n^2\}\big) \ge p_n\{\mathcal{I}^{(n)}\}\, c_1\exp\{-c_2 s_n \log(1/\epsilon_n)\}, \quad (B.12)$$
for some $\mathcal{I}^{(n)} \in \mathcal{H}_{T_n,s_n}$. To continue our argument, we need to lower bound the prior probability of the partition $\mathcal{I}^{(n)}$. To this end, note that
$$p_n\{\mathcal{I}^{(n)} \mid s_n\} = \binom{k_n-1}{s_n-1}^{-1} \ge \binom{k_n}{s_n}^{-1} \ge \Big(\frac{ek_n}{s_n}\Big)^{-s_n} \ge \exp\{-s_n\log(k_n)\}, \quad (B.13)$$
where the last inequality holds for all sufficiently large $n$. Since $k_n = O(n)$ by assumption, we have, for all sufficiently large $n$,
$$p_n\{\mathcal{I}^{(n)}\mid s_n\} \ge \exp\{-s_n\log(A_1 n)\} \ge \exp\{-A_2 s_n\log(n)\},$$
for two constants $A_1, A_2 > 0$. Moreover,
$$p_n(s_n) \ge D_1\exp\{-d_1 s_n\log(s_n)\} \ge D_1\exp\{-d_1 s_n\log(n)\}.$$
It is easily verified that $\log(1/\epsilon_n)/\log(n)$ converges to a positive, finite limit as $n\to\infty$. Moreover, a quick computation shows that $n\epsilon_n^2 = n^{1/(2\alpha+1)}\{\log(n)\}^{2\alpha/(2\alpha+1)}$ and hence,
$$s_n\log(n) \le C_2 n^{1/(2\alpha+1)}\{\log(n)\}^{(2\alpha+1-1)/(2\alpha+1)} = C_2 n\epsilon_n^2.$$
To proceed, note that since $\mathcal{I}^{(n)} \in \mathcal{P}_{T_n,s_n}$ we have $p_n\{\mathcal{I}^{(n)}\} = p_n(s_n)\, p_n\{\mathcal{I}^{(n)}\mid s_n\}$. We deduce that the right-hand side of (B.12) is lower bounded by
$$D_1\exp\{-d_1 s_n\log(n)\}\exp\{-A_2 s_n\log(n)\}\, c_1\exp\{-c_2 C_2 n\epsilon_n^2\},$$
where we used the assumption on $p_n(k)$. Since $\log(s_n)/\log(n)$ converges to a positive, finite limit as $n\to\infty$, we conclude that we can find constants $c_3, c_4 > 0$ such that
$$P_n\big(\{f : d_H^2(f_0,f)\,\|f_0/f\|_\infty \le \epsilon_n^2\}\big) \ge c_3\exp\{-c_4 n\epsilon_n^2\}.$$
To verify that $P_0(x\in B_n) = O(\epsilon_n^2)$, for $B_n$ the event that $e^{(2+c)n\bar\epsilon_n^2}\, p(x\mid \mathcal{I}^{(n)})\, p_n(\mathcal{I}^{(n)})/\prod_{i=1}^n f_0(x_i) > b$, we appeal to Lemma 8.10 in Ghosal and van der Vaart (2017), which together with equation (8.8) in the same work implies that $B_n$ has probability at most $d_6 D_2(\sqrt n/\epsilon_n)^{-6} = O(\epsilon_n^2)$ under $f_0$. The rate condition of Lemma B.1 is thus seen to hold under the stated assumptions.

To verify the prior condition (B.4), we let $\{t_n\}$ be a sequence of integers taken to satisfy
$$E_1 n^{1/(2\alpha+1)}\{\log(n)\}^{-1/(2\alpha+1)} \le t_n \le E_2 n^{1/(2\alpha+1)}\{\log(n)\}^{-1/(2\alpha+1)},$$
for two positive constants $E_1, E_2$ to be chosen later. From the definition of $t_n$ and previous calculations it is clear that $t_n\log(n) \ge E_1 n\epsilon_n^2$. For $\mathcal{F}_n = \cup_{k=1}^{t_n}\mathcal{H}_{T_n,k}$, where $\mathcal{H}_{T_n,k}$ is as in (B.6), we then have that $P_n(\mathcal{F}_n^c) = \sum_{k=t_n+1}^\infty p_n(k)$. Using the assumption on $p_n(k)$, this is bounded above by $c_5\exp\{-d_2 t_n\log(t_n)\}$ for $c_5 = D_2\sum_{k=0}^\infty \exp\{-d_2 k\log(k)\} < \infty$. Furthermore, as $\log(t_n)/\log(n)$ converges to a positive limit as $n\to\infty$, we can find a new positive constant $d_2'$ such that
$$\sum_{k=t_n+1}^\infty p_n(k) < c_5\exp\{-d_2' t_n\log(n)\}.$$
Taking $E_1 = (c_4+4)/d_2'$, we have $d_2' t_n \log(n) \ge (c_4+4)n\epsilon_n^2$, which combined with previous estimates implies that $P_n(\mathcal{F}_n^c) \le \exp\{-(c_4+4)n\epsilon_n^2\}$, verifying (B.4). Finally, we show that the entropy condition (B.6) holds.
By Lemma B.3, $N(\epsilon_n/2, \mathcal{F}_n, d_H)$ is upper bounded by $\exp[t_n\log(Lk_n/\{\epsilon_n s_n\})]$. Since $\log(k_n) + \log(1/\epsilon_n) = O(\log(n))$, the covering number can be further bounded by $\exp\{c_5 t_n\log(n)\}$ for a suitable constant $c_5$. Since $t_n\log(n) = O(n\epsilon_n^2)$, this implies that $N(\epsilon_n/2, \mathcal{F}_n, d_H) \le \exp\{c_6 n\epsilon_n^2\}$ for some $c_6 > 0$, as desired.

B.2 Consistency

The proof of Theorem 1 is in parts very similar to that of Theorem 2, so we only fill in the details where needed. Before proving Theorem 1, we need an approximation result.

Lemma B.6. Let $f_0 : [0,1]\to\mathbb{R}$ be a probability density. Then for any $\epsilon > 0$ we can find a $\delta > 0$ such that if $\mathcal{I}$ is any interval partition of $[0,1]$ with $\max_j|\mathcal{I}_j| < \delta$, then there is a density $h$, possibly depending on $\epsilon$, that is piecewise constant on the partition $\mathcal{I}$ satisfying $\|f_0 - h\|_1 < \epsilon$.

Proof. We will first show that for any $\epsilon > 0$ and an arbitrary sequence of partitions $\mathcal{J}^{(n)}$ with $\max_j|\mathcal{J}^{(n)}_j| \to 0$, we can find an $N\in\mathbb{N}$ and a density $h_N$ that is piecewise constant on $\mathcal{J}^{(N)}$ and which satisfies $\|f_0 - h_N\|_1 < \epsilon$. To this end, we make use of the fact that step functions are dense in $L^r([0,1])$ for $r \ge 1$ (Schilling, 2005, Corollary 12.11). Fix $\epsilon\in(0,1/2)$ and let $g_\epsilon$ be a strictly positive step function such that $\|f_0 - g_\epsilon\|_1 < \epsilon/5$. From the inverse triangle inequality, we have $1-\epsilon/5 < \|g_\epsilon\|_1 < 1+\epsilon/5$. Letting $m_\epsilon = g_\epsilon/\|g_\epsilon\|_1$, it follows by the triangle inequality that
$$\|f_0 - m_\epsilon\|_1 \le \|f_0 - g_\epsilon\|_1 + \|g_\epsilon - m_\epsilon\|_1 < \frac{\epsilon}{5} + \|g_\epsilon\|_1\Big|1 - \frac{1}{\|g_\epsilon\|_1}\Big| < \frac{\epsilon}{5} + \frac{2\epsilon}{5}(1+\epsilon/5) < \frac{4\epsilon}{5},$$
where we used that $|1-1/x| < 2|x-1|$ for $x > 1/2$. Since $m_\epsilon$ is a step function, it is almost everywhere continuous on $[0,1]$. Let $\{\mathcal{J}^{(n)}\}$ be a sequence of interval partitions of $[0,1]$ such that
$\max_j|\mathcal{J}^{(n)}_j| \to 0$. Define $h_n = m_{\epsilon,\mathcal{J}^{(n)}}$ to be the piecewise constant version of $m_\epsilon$ based on the partition $\mathcal{J}^{(n)}$. To show convergence in $L^1$, we first show that $h_n$ converges to $m_\epsilon$ almost everywhere. To this end, fix $\mu > 0$ and let $x\in[0,1]$ be a continuity point of $m_\epsilon$, so that $|m_\epsilon(x) - m_\epsilon(y)| < \mu$ whenever $|x-y| < \nu$ for all sufficiently small $\nu$. For all large $n$ we have $\max_j|\mathcal{J}^{(n)}_j| \le \nu$, so for $x, y \in \mathcal{J}^{(n)}_j$, integrating the inequalities $-\mu < m_\epsilon(x) - m_\epsilon(y) < \mu$ over this interval and multiplying by $1/|\mathcal{J}^{(n)}_j|$ shows that $-\mu < m_\epsilon(x) - m_{\epsilon,\mathcal{J}^{(n)}}(x) < \mu$. Since $\mu$ was arbitrary, we conclude that $h_n(x)\to m_\epsilon(x)$. As $h_n\to m_\epsilon$ almost everywhere and $h_n, m_\epsilon$ are densities for all $n$, Scheffé's Lemma guarantees that $\|m_\epsilon - h_n\|_1 \to 0$. In particular, this result implies that $\|m_\epsilon - h_N\|_1 < \epsilon/5$ for some $N\in\mathbb{N}$, so that
$$\|f_0 - h_N\|_1 \le \|f_0 - m_\epsilon\|_1 + \|m_\epsilon - h_N\|_1 < \epsilon.$$
Suppose now, for the sake of contradiction, that the statement of the lemma is false. Then there exists an $\epsilon > 0$ such that for every $n\in\mathbb{N}$, we can find a partition $\mathcal{I}^{(n)}$ with $\max_j|\mathcal{I}^{(n)}_j| < 1/n$ such that $\|f_0 - h_n\|_1 \ge \epsilon$ for all densities $h_n$ which are piecewise constant on $\mathcal{I}^{(n)}$. Since $\max_j|\mathcal{I}^{(n)}_j| \to 0$, this contradicts our previous result.

We are now ready to prove Theorem 1.

Proof of Theorem 1. By an argument similar to that of Lemma B.1, it can be established that the Bayes histogram estimator is consistent if, for a constant $c > 0$ and a sequence of sets $\mathcal{F}_n \subseteq \mathcal{F}$,
$$N(\epsilon, \mathcal{F}_n, d_H) < n\epsilon^2, \qquad P_n(\mathcal{F}_n^c) < \exp(-cn),$$
in addition to the condition that for all $\delta > 0$, we can find a sequence of partitions $\mathcal{I}^{(n)} \in \mathcal{P}_{T_n}$ such that the event that $e^{n\delta}\, p\{\mathcal{I}^{(n)}\}\, p\{x\mid \mathcal{I}^{(n)}\}/\prod_{i=1}^n f_0(x_i) > b$ for some $b > 0$, denoted $B_n$, has probability tending to 1. The first two conditions can be handled as in the proof of Theorem 2 by taking $\mathcal{F}_n = \cup_{k=1}^{k_n}\mathcal{H}_{T_n,k}$, as $N(\epsilon, \mathcal{F}_n, d_H) < n\epsilon^2$ for all sufficiently large $n$ and $P_n(\mathcal{F}_n^c) = 0$ in this case. We first show that $f_0$ can be approximated to arbitrary precision by a piecewise constant density in the squared Hellinger metric. Since $d_H^2(f,g) \le \|f-g\|_1$ for all densities $f, g$, we work with the $L^1$ metric instead, as it is more convenient in this case. Let now $\mu\in(0,1)$ to be decided later.
We will now show that we can always construct a partition $\mathcal{I}^{(n)} \in \mathcal{P}_{T_n,k}$ and find a density $h_n$ piecewise constant on $\mathcal{I}^{(n)}$, with $k\in\mathbb{N}$ fixed, such that $\|f_0 - h_n\|_1 < \mu$. By Lemma B.6, we can find such an $h_n$ whenever $\max_j|\mathcal{I}^{(n)}_j| < \nu$ for some $\nu > 0$. Let $k > 2/\nu$, and let $N\in\mathbb{N}$ be such that $\max_{1\le j\le k}\{\tau_{n,j}-\tau_{n,j-1}\} < \nu/3$ for all $n\ge N$. For each $l\in\{1,2,\dots,k\}$, we can then find at least one $\tau_{n,j}\in\big((l-1)/k,\, l/k\big)$. We now construct the partition $\mathcal{I}^{(n)}$ by including exactly one $\tau_{n,j}$ for each regular interval $\big((l-1)/k,\, l/k\big)$. Since the length of every such regular interval is less than $\nu/2$, it follows that the adjacent endpoints in the interval partition $\mathcal{I}^{(n)}$ constructed this way are at most $\nu$ apart, and consequently we must have $\|f_0 - h_n\|_1 < \mu$.

Denote $V(f_0, f) = \int_0^1 f_0(x)\log^2\{f_0(x)/f(x)\}\,dx$ and put $N''_\epsilon(f_0) = \{f : K(f_0,f) \le \epsilon^2,\ V(f_0,f)\le\epsilon^2\}$. To relate Hellinger neighbourhoods to $N''_\epsilon(f_0)$, we appeal to Theorem 5 in Wong and Shen (1995), which implies that $\max\{K(f_0, f_{\mathcal{I}^{(n)},\theta}),\, V(f_0, f_{\mathcal{I}^{(n)},\theta})\} < \epsilon^2$ whenever $d_H^2(f_0, f_{\mathcal{I}^{(n)},\theta}) < 3\mu$ for all sufficiently small $\mu > 0$, provided that for some $\gamma\in(0,1]$ and the set $D_\gamma = \{f_0/f_{\mathcal{I}^{(n)},\theta} \ge e^{1/\gamma}\}$,
$$L^2_\gamma = \int_{D_\gamma} f_0(x)\Big\{\frac{f_0(x)}{f_{\mathcal{I}^{(n)},\theta}(x)}\Big\}^\gamma dx < \infty.$$
This holds true
for $\gamma = r-1$ if $\min_j\theta_j > \mu^2/2$, due to the fact that $f_0 \in L^r([0,1])$ for some $r\in(1,2]$ by assumption. To show that the prior assigns sufficient mass to Hellinger neighbourhoods of $f_0$, note that by the triangle inequality,
$$d_H^2(f_0, f_{\mathcal{I}^{(n)},\theta}) \le \|f_0 - h_n\|_1 + \|h_n - f_{\mathcal{I}^{(n)},\theta}\|_1 = \|f_0 - h_n\|_1 + \|\eta^{(n)} - \theta\|_1,$$
where $\eta^{(n)}_j = \int_{\mathcal{I}^{(n)}_j} h_n(x)\,dx$. The first term in the above expression is less than $\mu$ for all sufficiently large $n$ by the previous calculation, and we conclude that $\{f_{\mathcal{I}^{(n)},\theta} : d_H^2(f_0, f_{\mathcal{I}^{(n)},\theta}) < 3\mu\} \subseteq N''_\epsilon(f_0)$ for $\theta$ belonging to the set
$$A_\mu = \big\{\theta\in\mathcal{S}_{s_n} : \|\theta - \eta^{(n)}\|_1 \le 2\mu,\ \min_j \theta_j > \mu^2/2\big\}.$$
Since the prior for $\theta\mid\mathcal{I}^{(n)}$ is a $k$-dimensional $\mathrm{Dir}(a)$ distribution and $a\in(0,\Sigma)^k$ by assumption, we can find $z > 0$ such that $a_j$ is lower bounded by a constant multiple of $\mu^z$ for all $j$. Appealing to Lemma G.13 in Ghosal and van der Vaart (2017), we find that $P\{A_\mu\mid\mathcal{I}^{(n)}\} > c_1\exp\{-c_2 k\log(1/\mu)\}$ for two positive constants $c_1, c_2$. From (B.13) we have $p_n\{\mathcal{I}^{(n)}\mid k\} \ge \exp\{-k\log(k_n)\}$ and hence,
$$P_n\{f\in N''_\epsilon(f_0)\} \ge p_n(k)\, p_n\{\mathcal{I}^{(n)}\mid k\}\, P\{A_\mu\mid\mathcal{I}^{(n)}\} \ge c_3\exp\{\log p_n(k) - c_4 k\log(n)\},$$
for constants $c_3, c_4 > 0$. By Lemma 8.10 in Ghosal and van der Vaart (2017), we obtain the lower bound
$$e^{n\delta}\, p\{\mathcal{I}^{(n)}\}\, p\{x\mid\mathcal{I}^{(n)}\}\Big/\prod_{i=1}^n f_0(x_i) \ge c_3\exp\{n\delta + \log p_n(k) - c_4 k\log(n) - (1+D)n\epsilon^2\},$$
where $D > 0$, except for on a set of probability less than $(\sqrt n\,\epsilon^2)^{-2}$. By our assumptions, the terms in the exponent on the right-hand side converge to $0$ when divided by $n$, except for $n\delta - (1+D)n\epsilon^2$. Hence, the right-hand side of the expression diverges to $\infty$ provided $\epsilon$ is taken to be sufficiently small, so that $B_n^c$ is contained in a set with probability tending to $0$, which concludes the proof.

Appendix C: Simulation study

The following section includes some further details on the setup of the simulation study in Section 5 and presents the results in more detail.

C.1 Methods used and implementation

An overview of all the methods included in the simulation study is given in Table 1. The software implementations used for each method are as follows: For the Wand method we used the dpih() function in the KernSmooth library (Wand, 2021).
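Several of the regular-histogram procedures in the study choose the number of bins $k$ by maximizing a penalized log-likelihood over a grid of candidate values, capped at $\lceil 4n/\log^2(n)\rceil$ as described in this section. The following is a minimal, hypothetical sketch of that selection loop; the generic AIC-style penalty $\mathrm{pen}(k) = k-1$ stands in for the method-specific penalties (Birgé-Rozenholc, BIC, etc.), and the data are assumed to lie in $[0, 1)$:

```python
import math

def histogram_loglik(data, k):
    """Log-likelihood of a regular k-bin histogram density on [0, 1)."""
    n = len(data)
    counts = [0] * k
    for x in data:
        counts[min(int(x * k), k - 1)] += 1
    # The density on bin j is counts[j] * k / n; empty bins contribute nothing.
    return sum(c * math.log(c * k / n) for c in counts if c > 0)

def select_k(data, penalty=lambda k: k - 1):
    """Maximize the penalized log-likelihood over k = 1, ..., ceil(4n / log^2 n)."""
    n = len(data)
    k_max = math.ceil(4 * n / math.log(n) ** 2)
    return max(range(1, k_max + 1),
               key=lambda k: histogram_loglik(data, k) - penalty(k))

# Roughly uniform synthetic data on [0, 1): a near-flat density, so a
# penalized criterion should favor a very coarse histogram.
data = [(i % 97) / 97 for i in range(500)]
print(select_k(data))
```

Swapping in a different `penalty` function reproduces the general shape of the other penalized-likelihood methods; only the penalty term differs between them.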
The ftnonpar package (Davies and Kovac, 2012) was used to compute the Taut String histogram. Finally, for the two approaches of Rozenholc et al. (2010) we used the histogram R library (Mildenberger et al., 2019). For all the other methods we have used our own software implementation.

For all the densities in Figure 1, we estimated the support in the way described in the beginning of Section 3.3. This was done even for those densities that have known compact supports, to ensure that all density estimates were comparable, as some of the software implementations used in the simulation study did not allow us to manually specify the support of the histogram estimate. With the exception of the Wand method, we computed the optimal number $k$ of bins for each regular histogram procedure among all regular histograms consisting of fewer than $\lceil 4n/\log^2(n)\rceil$ bins. Although Birgé and Rozenholc (2006) specifically recommend maximizing the penalized likelihood up to $\lceil n/\log(n)\rceil$ for their method, we have found that for the test densities under consideration here, the maximum always occurs at a much lower value of $k$ for the largest sample sizes, so we can save a considerable amount of computation by restricting our search to a smaller set. For
the RIH, L2CV and KLCV methods we used the greedy search heuristic of Rozenholc et al. (2010) to reduce the computational burden of computing the optimal partition according to these criteria. Based on some preliminary simulations, we decided to use a data-driven grid for both cross-validation methods and restricted our search to partitions with a minimum bin width greater than $\log^{1.5}(n)/n$, similar to the approach used by Rozenholc et al. (2010) for the L2CV method. This restriction was put in place because, in the unrestricted case, these criteria would frequently yield density estimates with sharp and narrow spikes even for smooth densities. For the random irregular histogram method, we used a fine regular grid consisting of $\lceil 4n/\log^2(n)\rceil$ bins.

C.2 Results from the simulation study

This subsection contains tables showing the complete output from the simulation study for the Hellinger, PID and $L^2$ losses, respectively. Complete tables of the results with respect to each loss function are shown in Tables 4 to 9. To provide a more compact visual summary of the results with respect to the Hellinger and $L^2$ losses, we computed the logarithm of the risk relative to the best-performing method for each density $f_0\in\mathcal{D}$ and sample size $n\in\mathcal{N}$,
$$\mathrm{LRR}_n(f_0, m) = \log \hat R_n(f_0, \hat f_m) - \log\min_{m'\in\mathcal{M}}\hat R_n(f_0, \hat f_{m'}).$$
Boxplots of $\mathrm{LRR}_n(f_0, m)$ for sample sizes of $\{50, 200, 5000, 25000\}$ are shown in Figures 8 and 11 for the Hellinger and $L^2$ losses, respectively. Figures 9 and 10 show the PID risks for the four irregular methods RIH, RMG-B, RMG-R and TS. The irregular L2CV and KLCV histograms were excluded from this comparison, as they tended to perform much worse than the other irregular procedures in this regard.

We note that the random irregular histogram does quite poorly in terms of Hellinger and $L^2$ risk compared to the other irregular histogram methods for the Chisq(1), Beta(1/2, 1/2) and trimodal uniform densities.
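The summary statistic $\mathrm{LRR}_n(f_0, m)$ defined above is a simple transformation of the risk tables. A minimal sketch of its computation, using as input a subset of the density 1, $n = 25000$ row of estimated Hellinger risks from Table 4:

```python
import math

def log_relative_risk(risks):
    """LRR_n(f0, m) = log(estimated risk of method m) - log(risk of the
    best-performing method), for one density f0 and one sample size n."""
    best = min(risks.values())
    return {m: math.log(r) - math.log(best) for m, r in risks.items()}

# Estimated Hellinger risks, density 1, n = 25000 (subset of Table 4 columns).
risks = {"Wand": 0.034, "AIC": 0.035, "BIC": 0.044, "BR": 0.035,
         "Knuth": 0.038, "SC": 0.039, "RIH": 0.051, "TS": 0.066}
lrr = log_relative_risk(risks)
print(round(lrr["Wand"], 3))  # best-performing method has LRR = 0.0
```

The boxplots in Figures 8 and 11 aggregate exactly these per-(density, sample size) values across the test suite.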
This is primarily due to the fact that we have used a regular mesh in our simulation study, while the other methods use a data-based grid. When employing a data-based grid, the risks of the RIH method drop considerably, and the advantage of the other methods in terms of estimation error becomes much smaller.

Figure 8: Boxplots of LRR_n(f0, m) for d_H.

Figure 9: Boxplots of PID risks for four of the irregular histogram methods for sample sizes n = 50 and n = 200.

Figure 10: Boxplots of PID risks for four of the irregular histogram methods for samples of size n = 5000 and n = 25000.

Figure 11: Boxplots of LRR_n(f0, m) for the L2 metric.

Table 4: Estimated Hellinger risks for densities 1-8.

Density n Wand AIC BIC BR Knuth SC RIH RMG-B RMG-R TS L2CV KLCV
1 50 0.27 0.298 0.3 0.297 0.294 0.292 0.307 0.333 0.321 0.325 0.322 0.331
200 0.167 0.185 0.194 0.187 0.185 0.183 0.204 0.222 0.218 0.189 0.21 0.228
1000 0.096 0.104 0.117 0.106 0.11 0.11 0.129 0.139 0.136 0.127 0.117 0.129
5000 0.057 0.06 0.071 0.061 0.064 0.064 0.081 0.085 0.084 0.092 0.076 0.081
25000 0.034 0.035 0.044 0.035 0.038 0.039 0.051 0.052 0.051 0.066 0.047 0.049
2 50 0.226 0.239 0.194 0.197 0.197 0.207 0.189 0.194 0.189 0.204 0.216 0.218
200 0.13 0.117 0.095 0.099 0.096 0.098 0.095 0.096 0.095 0.102 0.143 0.167
1000 0.07 0.052 0.043 0.045 0.043 0.043 0.043 0.043 0.043 0.047 0.109 0.128
5000 0.04 0.022 0.019 0.019 0.019 0.019
0.018 0.019 0.018 0.02 0.062 0.071
25000 0.023 0.01 0.008 0.009 0.009 0.009 0.008 0.008 0.008 0.009 0.033 0.036
3 50 0.433 0.446 0.443 0.444 0.404 0.407 0.385 0.391 0.393 0.316 0.515 0.526
200 0.336 0.339 0.348 0.341 0.315 0.323 0.297 0.261 0.286 0.207 0.558 0.545
1000 0.253 0.254 0.274 0.255 0.243 0.252 0.224 0.161 0.195 0.136 0.64 0.637
5000 0.192 0.194 0.218 0.194 0.191 0.201 0.167 0.1 0.133 0.094 0.346 0.479
25000 0.146 0.148 0.173 0.148 0.151 0.159 0.123 0.062 0.09 0.065 0.155 0.169
4 50 0.36 0.352 0.347 0.347 0.323 0.335 0.335 0.342 0.333 0.281 0.458 0.489
200 0.265 0.23 0.24 0.232 0.219 0.233 0.222 0.228 0.229 0.182 0.521 0.529
1000 0.188 0.167 0.178 0.167 0.165 0.176 0.159 0.144 0.149 0.131 0.612 0.637
5000 0.129 0.137 0.156 0.143 0.146 0.153 0.104 0.09 0.095 0.089 0.309 0.485
25000 0.097 0.092 0.151 0.093 0.101 0.117 0.065 0.058 0.063 0.077 0.081 0.072
5 50 0.318 0.335 0.326 0.327 0.319 0.319 0.339 0.361 0.353 0.315 0.452 0.459
200 0.207 0.216 0.223 0.215 0.209 0.213 0.231 0.25 0.246 0.207 0.268 0.279
1000 0.129 0.129 0.143 0.129 0.129 0.133 0.146 0.156 0.154 0.139 0.122 0.13
5000 0.08 0.077 0.091 0.078 0.077 0.078 0.092 0.096 0.095 0.098 0.077 0.081
25000 0.049 0.046 0.058 0.046 0.047 0.048 0.058 0.058 0.059 0.069 0.049 0.049
6 50 0.324 0.387 0.347 0.349 0.347 0.332 0.316 0.334 0.327 0.32 0.322 0.334
200 0.248 0.271 0.266 0.262 0.257 0.251 0.226 0.227 0.23 0.212 0.266 0.276
1000 0.194 0.196 0.203 0.193 0.19 0.187 0.161 0.145 0.153 0.14 0.225 0.239
5000 0.154 0.147 0.159 0.144 0.145 0.142 0.115 0.089 0.099 0.092 0.131 0.137
25000 0.122 0.11 0.124 0.109 0.111 0.109 0.082 0.056 0.065 0.062 0.09 0.092
7 50 0.334 0.357 0.352 0.349 0.346 0.342 0.337 0.357 0.344 0.368 0.424 0.421
200 0.251 0.255 0.254 0.25 0.243 0.243 0.242 0.249 0.246 0.246 0.296 0.291
1000 0.168 0.157 0.171 0.151 0.147 0.15 0.173 0.177 0.176 0.131 0.151 0.161
5000 0.091 0.094 0.113 0.094 0.098 0.099 0.106 0.109 0.108 0.073 0.09 0.093
25000 0.054 0.055 0.071 0.055 0.057 0.055 0.067 0.067 0.066 0.043 0.055 0.058
8 50 0.283 0.309 0.307 0.304 0.304 0.301 0.296 0.314 0.305 0.315 0.319 0.329
200 0.176 0.195 0.2 0.194 0.192 0.19 0.197 0.21 0.206 0.196 0.215 0.238
1000 0.102 0.11 0.126 0.112 0.115 0.117 0.127 0.135 0.133 0.126 0.118 0.129
5000 0.061 0.064 0.077 0.065 0.069 0.066 0.08 0.083 0.082 0.069 0.076 0.08
25000 0.036 0.038 0.048 0.038 0.041 0.042 0.05 0.051 0.051 0.049 0.046 0.048

Table 5: Estimated Hellinger risks for densities 9-16.

Density n Wand AIC BIC BR Knuth SC RIH RMG-B RMG-R TS L2CV KLCV
9 50 0.309 0.342 0.334 0.332 0.333 0.327 0.332 0.343 0.333 0.348 0.357 0.361
200 0.228 0.241 0.239 0.233 0.231 0.228 0.235 0.25 0.245 0.237 0.263 0.248
1000 0.16 0.153 0.17 0.152 0.153 0.154 0.155 0.159 0.157 0.147 0.159 0.161
5000 0.103 0.09 0.11 0.091 0.096 0.098 0.097 0.1 0.099 0.073 0.091 0.094
25000 0.061 0.059 0.069 0.059 0.061 0.06 0.062 0.062 0.061 0.043 0.054 0.055
10 50 0.404 0.421 0.39 0.391 0.394 0.4
0.389 0.395 0.389 0.425 0.397 0.405
200 0.349 0.306 0.345 0.334 0.314 0.288 0.345 0.346 0.346 0.334 0.346 0.353
1000 0.321 0.183 0.216 0.185 0.185 0.179 0.22 0.231 0.227 0.191 0.214 0.213
5000 0.189 0.107 0.134 0.108 0.112 0.11 0.147 0.15 0.147 0.112 0.125 0.133
25000 0.079 0.063 0.084 0.063 0.069 0.068 0.096 0.095 0.093 0.07 0.082 0.083
11 50 0.26 0.292 0.288 0.286 0.289 0.284 0.291 0.305 0.301 0.304 0.273 0.29
200 0.153 0.169 0.178 0.172 0.171 0.168 0.186 0.196 0.193 0.178 0.193 0.207
1000 0.088 0.095 0.108 0.098 0.099 0.099 0.119 0.127 0.124 0.11 0.118 0.134
5000 0.051 0.054 0.066 0.055 0.058 0.058 0.074 0.078 0.076 0.074 0.076 0.084
25000 0.031 0.031 0.041 0.032 0.035 0.035 0.046 0.047 0.046 0.054 0.047 0.05
12 50 0.584 0.491 0.546 0.539 0.552 0.604 0.623 0.418 0.408 0.381 0.854 0.843
200 0.293 0.279 0.296 0.284 0.331 0.374 0.36 0.279 0.28 0.245 0.817 0.831
1000 0.184 0.164 0.239 0.175 0.214 0.254 0.18 0.172 0.173 0.152 0.809 0.786
5000 0.09 0.093 0.135 0.097 0.126 0.149 0.107 0.105 0.106 0.103 0.122 0.127
25000 0.048 0.054 0.082 0.056 0.075 0.087 0.066 0.064 0.065 0.072 0.051 0.063
13 50 0.585 0.564 0.587 0.586 0.524 0.507 0.569 0.598 0.54 0.551 0.576 0.573
200 0.485 0.395 0.474 0.42 0.379 0.371 0.501 0.745 0.49 0.352 0.561 0.558
1000 0.277 0.229 0.273 0.234 0.23 0.226 0.377 0.542 0.391 0.201 0.375 0.367
5000 0.155 0.131 0.17 0.131 0.134 0.134 0.241 0.344 0.244 0.122 0.19 0.22
25000 0.074 0.075 0.101 0.075 0.083 0.083 0.154 0.17 0.152 0.074 0.122 0.129
14 50 1.084 1.017 1.017 1.017 1.04 1.058 1.239 0.414 0.612 0.434 1.247 1.246
200 0.914 0.918 0.902 0.907 0.936 0.948 1.007 0.206 0.203 0.224 1.26 1.229
1000 0.717 0.748 0.748 0.748 0.769 0.79 0.767 0.092 0.091 0.107 1.22 1.18
5000 0.535 0.426 0.426 0.426 0.474 0.509 0.537 0.041 0.041 0.047 1.054 1.158
25000 0.217 0.019 0.019 0.019 0.127 0.166 0.269 0.018 0.018 0.023 0.435 0.61
15 50 0.398 0.392 0.399 0.4 0.364 0.352 0.393 0.401 0.389 0.392 0.487 0.462
200 0.273 0.251 0.253 0.244 0.237 0.235 0.26 0.24 0.237 0.223 0.41 0.394
1000 0.182 0.141 0.14 0.135 0.135 0.135 0.14 0.105 0.103 0.129 0.176 0.149
5000 0.128 0.068 0.066 0.066 0.067 0.068 0.074 0.05 0.05 0.063 0.095 0.075
25000 0.086 0.03 0.03 0.03 0.032 0.032 0.038 0.023 0.023 0.027 0.048 0.042
16 50 0.413 0.411 0.408 0.408 0.366 0.357 0.378 0.367 0.353 0.353 0.504 0.52
200 0.299 0.291 0.283 0.282 0.271 0.266 0.276 0.233 0.23 0.232 0.459 0.426
1000 0.199 0.171 0.223 0.172 0.171 0.166 0.165 0.111 0.11 0.115 0.309 0.257
5000 0.141 0.089 0.119 0.09 0.102 0.103 0.089 0.054 0.053 0.063 0.099 0.088
25000 0.084 0.04 0.04 0.04 0.04 0.046 0.045 0.022 0.022 0.03 0.05 0.044

Table 6: Estimated PID risks for densities 1-8.

Density n Wand AIC BIC BR Knuth SC RIH RMG-B RMG-R TS L2CV KLCV
1 50 0.492 0.696 0.25 0.262 0.41 0.638 0.038 0.108 0.052 0.338 0.388 0.116
200 0.514 1.09 0.14 0.19 0.262 0.286 0.004 0.052 0.002 0.012 1.346 0.596
1000 1.124 1.842 0.066 0.356 0.184 0.192 0.002
0.044 0.006 0.0 4.652 3.278
5000 3.166 3.582 0.216 1.538 0.808 1.568 0.0 0.034 0.012 0.002 7.694 5.07
25000 7.394 6.938 0.8 4.326 2.112 4.018 0.0 0.034 0.028 0.004 10.144 8.322
2 50 1.368 0.654 0.052 0.072 0.126 0.336 0.02 0.168 0.02 0.368 1.04 0.74
200 1.814 0.616 0.012 0.108 0.022 0.082 0.012 0.062 0.004 0.252 2.58 2.01
1000 3.648 0.626 0.0 0.06 0.006 0.014 0.008 0.036 0.012 0.27 7.414 6.076
5000 7.708 0.558 0.0 0.038 0.006 0.022 0.0 0.024 0.0 0.47 9.516 8.252
25000 15.572 0.662 0.0 0.026 0.006 0.02 0.0 0.0 0.0 0.34 11.102 9.422
3 50 4.932 4.516 3.452 3.474 4.306 4.222 2.006 0.738 2.0 0.84 2.38 2.282
200 10.596 7.696 4.824 6.352 7.02 6.652 2.006 0.142 1.988 0.308 2.474 2.194
1000 27.918 19.048 8.706 16.974 16.134 15.622 2.01 0.044 1.132 0.0 2.736 2.428
5000 75.49 54.402 19.5 50.702 40.242 39.896 2.002 0.068 0.004 0.0 4.688 3.556
25000 196.452 154.122 48.028 147.712 103.83 104.088 0.006 0.064 0.008 0.0 4.406 3.818
4 50 3.156 3.314 2.954 2.946 3.218 3.116 1.75 1.594 1.804 1.038 2.094 2.098
200 8.21 4.156 3.642 3.872 3.924 3.758 1.41 1.424 1.686 0.528 1.728 1.772
1000 20.676 6.124 5.102 5.358 5.282 5.446 0.91 0.682 1.14 0.074 1.932 1.678
5000 52.546 19.442 7.36 14.296 10.394 11.716 0.144 0.08 0.238 0.028 2.384 1.934
25000 174.866 58.06 12.752 55.478 51.276 50.86 0.002 0.034 0.008 0.004 2.806 2.346
5 50 1.656 1.44 0.926 0.93 1.182 1.196 0.344 0.382 0.368 0.508 1.554 1.182
200 2.946 2.336 0.802 1.018 1.238 1.16 0.076 0.092 0.04 0.098 0.724 0.424
1000 7.738 4.634 1.224 2.58 2.112 2.144 0.002 0.064 0.004 0.0 1.854 1.296
5000 17.608 9.732 2.71 6.702 5.084 5.548 0.0 0.052 0.006 0.0 4.476 3.168
25000 38.458 18.954 5.878 15.722 15.182 23.248 0.002 0.064 0.018 0.002 5.97 5.378
6 50 2.084 3.256 1.684 1.772 2.7 3.28 1.406 0.704 1.106 0.79 2.35 1.862
200 3.248 5.918 2.95 3.924 4.844 5.674 0.978 0.228 0.696 0.318 3.228 2.538
1000 5.714 16.922 5.18 10.404 8.868 11.448 0.002 0.038 0.0 0.004 4.056 3.076
5000 11.912 52.998 10.374 41.648 23.516 29.5 0.0 0.064 0.004 0.002 6.93 4.758
25000 31.196 157.06 30.096 137.548 67.152 79.768 0.0 0.06 0.028 0.002 7.784 4.974
7 50 5.786 5.798 5.438 5.448 5.65 5.664 5.142 5.126 5.138 5.07 5.862 5.538
200 6.028 5.12 5.442 5.404 5.214 5.136 4.402 4.308 4.318 3.526 4.9 5.046
1000 5.392 7.722 4.23 4.938 4.084 4.234 2.542 1.752 1.946 0.052 1.638 1.064
5000 12.242 15.65 3.074 12.662 7.938 8.742 0.004 0.092 0.008 0.002 2.48 1.728
25000 38.84 30.938 9.798 26.964 18.298 25.868 0.0 0.088 0.026 0.006 3.59 2.748
8 50 1.392 1.478 1.426 1.49 1.36 1.442 1.166 1.23 1.21 1.372 1.428 1.608
200 0.814 1.568 0.892 0.844 0.878 0.868 1.148 1.158 1.138 1.532 1.376 0.944
1000 1.708 2.668 0.446 0.808 0.654 0.61 1.164 1.146 1.122 1.02 3.728 3.05
5000 4.982 5.19 0.528 2.936 1.468 2.542 0.872 0.886 0.862 0.118 6.806 4.674
25000 12.212 11.006 1.408 7.326 3.54 6.43 0.13 0.11 0.112 0.002 9.148 7.526

Table 7: Estimated PID risks for densities 9-16.

Density n Wand AIC BIC BR Knuth SC RIH RMG-B
RMG-R TS L2CV KLCV
9 50 5.314 5.032 5.144 5.116 5.116 5.016 4.942 4.986 4.908 4.978 5.37 5.46
200 4.666 4.978 4.928 4.764 4.726 4.694 4.504 4.376 4.368 4.396 4.482 3.954
1000 4.566 8.168 3.912 4.728 4.176 4.052 2.91 2.46 2.44 1.722 4.262 3.342
5000 7.638 14.098 3.476 10.752 6.004 6.276 0.774 0.782 0.718 0.096 4.678 3.538
25000 20.638 37.132 5.944 25.082 12.118 23.274 0.004 0.052 0.016 0.008 5.198 4.074
10 50 10.078 10.842 10.918 10.874 10.916 10.884 10.904 10.762 10.9 9.77 11.12 10.764
200 12.376 5.92 10.876 8.508 7.522 6.002 10.858 10.794 10.784 3.968 8.86 10.466
1000 6.884 3.4 3.61 2.262 1.93 2.224 0.226 0.532 0.164 0.486 2.046 1.028
5000 0.9 7.788 0.118 3.26 0.81 1.118 0.0 0.236 0.054 0.012 1.246 0.796
25000 0.012 17.972 0.006 12.424 1.186 1.266 0.0 0.128 0.078 0.004 0.528 0.48
11 50 0.46 0.654 0.126 0.184 0.266 0.546 0.01 0.072 0.012 0.302 0.834 0.468
200 0.304 0.818 0.098 0.166 0.262 0.388 0.006 0.026 0.0 0.014 2.138 1.418
1000 0.462 1.664 0.038 0.266 0.268 0.292 0.004 0.024 0.002 0.0 6.406 5.148
5000 0.804 2.748 0.044 0.868 0.332 0.404 0.002 0.042 0.012 0.002 10.058 8.238
25000 1.546 5.896 0.048 2.454 0.546 0.566 0.0 0.016 0.018 0.004 13.214 11.61
12 50 2.868 2.58 2.61 2.638 2.612 2.588 1.82 0.31 0.232 0.838 1.964 1.784
200 0.494 0.368 0.374 0.352 0.392 0.382 0.768 0.096 0.032 0.178 1.824 1.864
1000 0.534 1.656 0.152 1.116 1.274 1.206 0.0 0.104 0.004 0.0 2.37 2.312
5000 3.576 3.882 0.826 3.026 2.822 3.072 0.0 0.086 0.002 0.0 1.194 0.77
25000 16.526 8.904 1.702 7.834 4.616 3.97 0.0 0.058 0.008 0.0 2.296 1.278
13 50 4.694 5.382 5.496 5.492 5.404 5.388 5.618 4.584 5.396 3.422 5.47 5.702
200 4.912 5.756 4.7 5.144 5.498 5.516 2.418 1.296 0.858 0.706 5.688 5.184
1000 4.586 15.442 4.202 13.246 11.084 11.808 0.062 0.12 0.01 0.03 3.908 2.292
5000 16.932 42.952 12.794 36.618 26.484 26.862 0.008 0.128 0.012 0.0 2.61 1.668
25000 63.816 77.966 26.426 71.032 42.354 44.698 0.0 0.088 0.04 0.004 2.602 1.724
14 50 4.808 4.0 4.0 4.0 4.0 4.0 4.574 0.394 0.736 0.896 3.528 3.34
200 5.054 4.528 4.004 4.128 4.616 4.412 3.988 0.196 0.0 0.96 3.476 3.95
1000 4.734 5.258 5.24 5.258 5.066 4.93 4.0 0.04 0.0 1.414 2.064 2.068
5000 7.522 3.398 3.398 3.398 4.64 5.982 0.004 0.004 0.0 1.526 2.91 2.288
25000 11.864 5.992 5.992 5.992 8.682 8.632 0.0 0.0 0.0 2.17 1.742 1.74
15 50 2.212 2.002 2.194 2.204 2.052 2.032 2.12 1.82 1.854 1.712 5.442 3.58
200 2.326 3.822 1.404 1.966 2.424 2.75 1.792 1.874 1.82 1.428 2.93 2.7
1000 7.02 7.392 0.79 3.102 2.078 2.52 1.95 2.04 1.974 1.31 5.854 5.206
5000 17.224 6.198 0.088 1.822 1.026 2.362 1.762 1.796 1.772 0.534 10.446 6.966
25000 38.722 1.414 0.0 0.232 3.594 5.142 0.196 0.192 0.17 0.17 13.68 10.364
16 50 4.222 3.294 3.474 3.464 3.304 3.294 3.082 2.834 2.906 2.2 3.99 3.432
200 2.596 4.608 2.188 2.604 3.398 3.588 2.766 2.88 2.864 1.752 1.866 2.042
1000 7.566 14.438 3.92 12.384 10.92 11.676 1.512 1.526 1.504 1.182 4.044 4.0
5000 18.82 28.812 12.458 26.078 19.512