Columns: text (string), source (string)
Scalable and adaptive prediction bands with kernel sum-of-squares. Louis Allain (1,2), Sébastien Da Veiga (2), and Brian Staber (1). (1) Safran Tech, Digital Sciences & Technologies, 78114 Magny-Les-Hameaux, France. (2) Univ Rennes, Ensai, CNRS, CREST - UMR 9194, F-35000 Rennes, France. May 28, 2025. Abstract: Conformal Prediction (CP) is ...
https://arxiv.org/abs/2505.21039v1
distribution-free. In particular, the split conformal procedure proposed by Papadopoulos et al. [2002] is especially easy to implement. CP is becoming widely used in many different applications (see [Balasubramanian et al., 2014, Vazquez and Facelli, 2022] and references therein)...
https://arxiv.org/abs/2505.21039v1
⊂ R^d and Y ∈ Y ⊂ R. This dataset is split in two parts: a pre-training dataset D_n = {(X_i, Y_i)}_{i=1}^n and a calibration one D_m = {(X_i, Y_i)}_{i=1}^m with N = n + m. The pre-training dataset D_n is used to first fit a predictive model m̂_n(·), which can be any machine learning algorithm. The second step consists in computing performance scores...
https://arxiv.org/abs/2505.21039v1
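The split procedure described in this excerpt (fit on D_n, score on D_m, take an empirical quantile of the scores) can be sketched as follows. This is a generic illustration with absolute-residual scores, not the authors' code; `fit_predict`, the trivial "predict the mean" model, and the synthetic data are placeholder assumptions.

```python
import numpy as np

def split_conformal_interval(X, Y, X_test, fit_predict, alpha=0.1):
    """Split CP with absolute-residual scores; fit_predict is any regressor."""
    N = len(Y)
    n = N // 2                                 # pre-training / calibration split
    predict = lambda X_new: fit_predict(X[:n], Y[:n], X_new)
    scores = np.abs(Y[n:] - predict(X[n:]))    # conformity scores on D_m
    m = N - n
    # conformal quantile: the ceil((m + 1)(1 - alpha))-th smallest score
    k = min(int(np.ceil((m + 1) * (1 - alpha))), m)
    q = np.sort(scores)[k - 1]
    mu = predict(X_test)
    return mu - q, mu + q

# usage with a deliberately trivial "predict the mean" model
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
Y = X[:, 0] + 0.1 * rng.normal(size=200)
fit_predict = lambda Xtr, Ytr, Xnew: np.full(len(Xnew), Ytr.mean())
lo, hi = split_conformal_interval(X, Y, X[:5], fit_predict)
```

Any regressor with the same `fit_predict` signature can be dropped in without changing the calibration step.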
X_{N+1}, and as a result the prediction bands are adaptive. More precisely, given a kernel H(·,·) that defines a density H(x,·) for all x ∈ X, sample X̃_{N+1} from H(X_{N+1},·). The quantile q̂_α(X_{N+1}, X̃_{N+1}) is computed on the empirical distribution Σ_{i=1}^m w̃_i δ_{S_i} + w̃_{N+1} δ_{+∞}, where the weights are computed as w̃_i = H(X_i, X̃_{N+1}) / (Σ_{j=1}^m H(X_j, X̃...
https://arxiv.org/abs/2505.21039v1
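The kernel-reweighted quantile just described can be sketched numerically. Only the weight formula w̃_i = H(X_i, X̃_{N+1}) / (Σ_j H(X_j, X̃_{N+1}) + H(X_{N+1}, X̃_{N+1})) and the residual mass on δ_{+∞} follow the text; the Gaussian kernel H, its bandwidth h, and the synthetic calibration data are illustrative assumptions.

```python
import numpy as np

def weighted_conformal_quantile(scores, X_cal, x_new, x_tilde, alpha=0.1, h=0.5):
    """(1 - alpha)-quantile of sum_i w_i delta_{S_i} + w_{N+1} delta_{+inf}."""
    H = lambda x, y: np.exp(-np.sum((x - y) ** 2, axis=-1) / (2.0 * h ** 2))
    raw = H(X_cal, x_tilde)                         # H(X_i, x_tilde), i = 1..m
    denom = raw.sum() + H(x_new[None], x_tilde)[0]  # + H(X_{N+1}, x_tilde)
    w = raw / denom                                 # weights on the scores
    # the remaining mass 1 - sum(w) sits on +infinity
    order = np.argsort(scores)
    cum = np.cumsum(w[order])
    idx = np.searchsorted(cum, 1.0 - alpha)         # first score reaching level 1 - alpha
    return np.inf if idx >= len(scores) else float(scores[order][idx])

rng = np.random.default_rng(0)
X_cal = rng.normal(size=(99, 1))                    # calibration inputs X_i
scores = np.abs(rng.normal(size=99))                # calibration scores S_i
x_new = np.zeros(1)                                 # test point X_{N+1}
x_tilde = x_new + 0.1 * rng.normal(size=1)          # draw from H(X_{N+1}, .)
q = weighted_conformal_quantile(scores, X_cal, x_new, x_tilde, alpha=0.2)
```

Because the weights sum to less than one, very small α can push the quantile to +∞, exactly as the δ_{+∞} term dictates.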
solution of Equation (2). Although such a result may appear of minor impact, the A-formulation actually yields significant computational savings in practice, as we illustrate in our numerical experiments (see Appendix B.1). Remark 1: Operator A is PSD, hence it admits an eigendecomposition A = Σ_{l≥0} λ_l u_l ⊗ u_l with λ_l ≥ 0 and u_l ∈ H....
https://arxiv.org/abs/2505.21039v1
the mean width was later proposed by Fan et al. [2024]. Our proposal differs from their work mainly through a broader and more efficient practical applicability: (a) we place ourselves in the split CP setting, whereas Liang [2022] and Fan et al. [2024] propose different calibration procedures that are harder to imp...
https://arxiv.org/abs/2505.21039v1
of an optimization problem over (n+1) variables rather than (n + n×n) variables. Proposition 2 (Dual formulation). Let (a, b, s, λ_1) ∈ R^4_+, λ_2 > 0 and Δ := R^{n+1}_+. Equation (7) admits a dual formulation of the form sup_{(Γ,θ)∈Δ} r(Γ,θ)^⊤ Diag(Γa) r(Γ,θ) + θ(γ(Γ,θ)^⊤ K_m γ(Γ,θ) − s) − Ω⋆(V Diag(Γ − b) V^⊤) (8) where r(Γ,θ) = Y − K_m γ(Γ,θ), γ(Γ,θ) = ...
https://arxiv.org/abs/2505.21039v1
PSD matrix A with eigendecomposition A = U D U^⊤, its positive part is defined as [A]_+ = U max(0, D) U^⊤. size and repeated random seeds (see Appendix B.1 for details). Boxplots of the combined indicators are given in Figure 1. We first observe that a has very little influence on mean squared errors when the noise is symmetric (...
https://arxiv.org/abs/2505.21039v1
see Deutschmann et al. [2024]. However, this implies computing MI for a random vector in dimension d: from a practical perspective, MI suffers from the curse of dimensionality and rapidly becomes numerically unstable. Instead, for a re-scaled score function, we show that a similar bound holds with MI between one-dimens...
https://arxiv.org/abs/2505.21039v1
a homoscedastic and heteroscedastic [Binois et al., 2018] GP model, and consider here Y = m(X) + σ(X)ε with (Z = 10X + 1):
Case 1: m(X) = sin(2πZ/5 + 0.2·4πZ/5)·1_{X≤9.6} + (Z/10 − 1)·1_{X>9.6}, σ(X) = √(0.1 + 2X²), X ∼ U[−1,1], ε ∼ N(0,1)
Case 2: m(X) = X/2, σ(X) = |sin(X)|, X ∼ N(0,1), ε ∼ N(0,1)
Case 3: m(X) = X/2, σ(X) = (4/3)·φ(2X/3), X ∼ N(0,1), ε ∼ N(0,1)
Case 4: m(X) = 2 sin(πX) + πX, σ(X) = √(1 + X²), X ∼ U[0,1], ε ∼ N(0,1)
We begi...
https://arxiv.org/abs/2505.21039v1
As expected, the homoscedastic GP, which produces almost constant intervals, performs the best in this setting. Except for CQR, all methods yield similar MI, as well as similarly satisfying local coverage. But this time, kSoS tends to select larger intervals than the GPs: the good performance of the heteroscedastic GP actually comes f...
https://arxiv.org/abs/2505.21039v1
show that for any fixed m ∈ H_m, the optimal A has a finite-dimensional representation. Indeed if m ∈ H_m is fixed, the problem writes inf_{A∈S_+(H_f)} b Σ_{i=1}^n f_A(X_i) + λ_1 ‖A‖_⋆ + λ_2 ‖A‖²_F s.t. (Y_i − m(X_i))² − f_A(X_i) ≤ 0, i ∈ [n], or equivalently inf_{A∈S_+(H_f)} L_m(f_A(X_1), ..., f_A(X_n)) + Ω(A) where L_m(f_A(X_1), ..., f_A(X_n)) = (b Σ_{i=...
https://arxiv.org/abs/2505.21039v1
F s.t. (Y_i − γ^⊤ k_m(X_i))² − f̃_A(X_i) ≤ 0, i ∈ [n], γ^⊤ K_m γ − s ≤ 0. A.2 Proof of Proposition 2 - dual formulation. Proof. The dual problem of Equation (7) is defined as d = sup_{Γ∈R^n_+, θ∈R_+} inf_{m∈H_m, A∈S^n_+} L(m, A, Γ, θ) = sup_{Γ∈R^n_+, θ∈R_+} D(Γ, θ), where the dual function is D(Γ, θ) := inf_{m∈H_m, A∈S^n_+} L(m, A, Γ, θ). Remark first that in the previous ...
https://arxiv.org/abs/2505.21039v1
optimal solutions of the primal problem. Denoting Γ̂ ∈ R^n_+ and θ̂ ∈ R_+ the optimal Lagrange multipliers, the approximated mean function is recovered by γ̂ = (Diag(Γ̂a) K_m + θ̂ I_n)^{−1} Diag(Γ̂a) Y. On the other hand, to reconstruct the matrix A, we follow Theorem 8 from Marteau-Ferey et al. [2020]: Â = ∇Ω⋆(V Diag(Γ̂ − b) V^⊤) = (1/(2λ_2)) [V Diag(...
https://arxiv.org/abs/2505.21039v1
distribution of (V, R) and Q = P_V ⊗ P_R the joint distribution with independent marginals P_V and P_R, to get 1 − exp(−MI(V, R)) ≥ TV²(P_{VR}, P_V ⊗ P_R) ≥ α² HSIC(V, R), where the inequality on the left is the Bretagnolle-Huber inequality, the inequality on the right comes from the HSIC definition HSIC(X, Y) = MMD²(P_{XY}, P_X ⊗ P_Y) and we denote α...
https://arxiv.org/abs/2505.21039v1
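The HSIC quantity appearing in the bound above, HSIC(V, R) = MMD²(P_{VR}, P_V ⊗ P_R), has a standard biased plug-in estimator tr(KHLH)/n² computed from centred Gram matrices. The sketch below uses it with Gaussian kernels; the bandwidths and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hsic(v, r, hv=1.0, hr=1.0):
    """Biased V-statistic estimate trace(K H L H) / n^2 with Gaussian kernels."""
    n = len(v)
    K = np.exp(-0.5 * (v[:, None] - v[None, :]) ** 2 / hv ** 2)
    L = np.exp(-0.5 * (r[:, None] - r[None, :]) ** 2 / hr ** 2)
    Hc = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    return float(np.trace(K @ Hc @ L @ Hc)) / n ** 2

rng = np.random.default_rng(0)
v = rng.normal(size=300)
indep = hsic(v, rng.normal(size=300))             # independent pair: near zero
dep = hsic(v, v + 0.1 * rng.normal(size=300))     # dependent pair: clearly positive
```

Working with one-dimensional V and R, as the text advocates, keeps this estimator cheap and numerically stable.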
[2021], O’Donoghue et al. [2023], available in the convex optimization software CVXPY (Diamond and Boyd [2016], Agrawal et al. [2018]). An example of a script that implements the SDP problem defined by Equation (7) is shown below. Here, vector_variable and matrix_variable denote the unknowns γ ∈ R^n and A ∈ S^n_+. The remaining vari...
https://arxiv.org/abs/2505.21039v1
[Flattened table rows: runtimes (mean ± standard deviation) for sample sizes 800 and 1000 across the compared formulations; see the paper for the full, labelled table.] Illustration. As a sanity check, we show in Figure 9 that the solution of our dual formulation algorithm co...
https://arxiv.org/abs/2505.21039v1
in a negative coefficient of determination. Figure 12: Test case 1 with n = 100, n_X = 100 and n_Y = 1000. We showcase prediction bands (top row) and the absolute residual quantiles versus the median width quantiles (bottom row) for three models: homoscedastic GP (left column), heteroscedastic GP (middle column) and kernel So...
https://arxiv.org/abs/2505.21039v1
additive structure, the intervals tend to be constant when d increases. The same comment applies to test case 3. For brevity, we thus postpone the investigation of such a setting to test case 4. Case 3. Corresponds to setting 2 in Hore and Barber [2024]: X ∼ N_d(0, I_d), Y = m(X) + σ(X)ε, ε ∼ N(0,1), m(X) = 0.5 Σ_{i=1}^d X^(i), σ(X...
https://arxiv.org/abs/2505.21039v1
symmetry clearly improves adaptivity. References: Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2005. ISBN 9780262256834. doi: 10.7551/mitpress/3206.001.0001. URL https://doi.org/10.7551/mitpress/3206.001.0001. Leo Breiman. Random forests. Machine Lea...
https://arxiv.org/abs/2505.21039v1
H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/5103c3584b063c431bd1268e9b5e76fb-Paper.pdf. Jing Lei and Larry Wasse...
https://arxiv.org/abs/2505.21039v1
conformal predictors. In Steven C. H. Hoi and Wray Buntine, editors, Proceedings of the Asian Conference on Machine Learning , volume 25 of Proceedings of Machine Learning Research , pages 475–490, Singapore Management University, Singapore, 04–06 Nov 2012. PMLR. URL https://proceedings.mlr.press/v25/vovk12.html . Rina...
https://arxiv.org/abs/2505.21039v1
arXiv:2505.21102v1 [math.ST] 27 May 2025. Linearity-Inducing Priors for Poisson Parameter Estimation Under L1 Loss. Leighton P. Barnes (Center for Communications Research, Princeton, NJ 08540, USA, l.barnes@idaccr.org), Alex Dytso (Qualcomm Flarion Technology, Inc., Bridgewater, NJ 08807, USA, odytso...
https://arxiv.org/abs/2505.21102v1
exponential family, the optimal L2 estimator for the so-called mean parameter (which is just X in the Poisson case) is affine if and only if the prior distribution on X is the conjugate prior [10], [11]. In this work, we consider the corresponding property for the conditional median, and ask if there are prior distributio...
https://arxiv.org/abs/2505.21102v1
and • there exists κ ≥ 1 such that Σ_{i=1}^{c_0} (f(⌊i/κ⌋)/f(i))^{i/κ} e^{f(i)} < ∞. (9) Then, there exists a distribution P_X supported on W = {w_i : w_i = f(i−1), i ∈ N} (10) such that for all y ∈ N_0, med(X | Y = y) = f(y). (11) The first assumption in Theorem 1 is not very restrictive in view of the following facts: • The assumption that f is increasing is...
https://arxiv.org/abs/2505.21102v1
1, we have p_1 = p_2, which results in a valid probability vector if p_1 = p_2 = 1/2. We now make an induction hypothesis that the statement is true for M. Let w_{M+2} > w_{M+1}. Now by the induction hypothesis for (w_1, ..., w_{M+1}) there exists a probability vector (p_1, ..., p_{M+1}) such that for all k ∈ [0 : M−1], Σ_{i=1}^{k+1} p_i w_i^k = Σ^{M+1}...
https://arxiv.org/abs/2505.21102v1
constructed in Theorem 3 for some M > 1. In this section, we show that there is at least a subsequence M_k such that P_W = lim_{k→∞} P^{M_k}_W converges (in total variation) and P_W is a valid distribution that satisfies the desired system. Let p^M = (p^M_1, ..., p^M_{M+1}) denote the probability vector corresponding to P^M_W. Using Lemma...
https://arxiv.org/abs/2505.21102v1
sum converges. The proof is concluded by noting that in Theorem 1 we assume the series in (63) is summable. ■ IV. Examples. The distribution in Theorem 1 can be approximated to any degree of accuracy by the following procedure: 1) Choose some M ≥ 1 and define A_M = [ 1 1 1 ··· 1 ; −1 1 1 ··· 1 ; −w_1 −w_2 w_3 ··· w_{M+1} ; −w_1² ...
https://arxiv.org/abs/2505.21102v1
Rose, L. Vandenberghe, E. E. Wesel, and R. D. Wesel, “Capacities and optimal input distributions for particle-intensity channels,” IEEE Transactions on Molecular, Biological and Multi-Scale Communications , vol. 6, no. 3, pp. 220–232, 2020. [7] R. B. Stein, “A theoretical analysis of neuronal variability,” Biophysical ...
https://arxiv.org/abs/2505.21102v1
arXiv:2505.21274v1 [math.OC] 27 May 2025. Sample complexity of optimal transport barycenters with discrete support. Léo Portales (IRIT, TSE, INP Toulouse, leo.portales@irit.fr), Edouard Pauwels (TSE, Université Toulouse Capitole, edouard2.pauwels@ut-capitole.fr), Elsa Cazelles (CNRS, IRIT, Université de Toulouse, elsa.cazelle...
https://arxiv.org/abs/2505.21274v1
one probability measure with a small number of atoms. We include this support constraint in our treatment of the barycenter problem. In other words, we are interested in probability measures ν minimizing the function (1.1), subject to the constraint |Supp(ν)| ≤ N for some integer N ≥ 1, where Supp(ν) denotes the support of ν...
https://arxiv.org/abs/2505.21274v1
of computational complexity [20]. Contribution. We provide convergence bounds in expectation for the empirical sparse optimal transport barycenter functional (1.2) of the form O(√(N/n)), where N denotes the number of support points of the barycenter, and n is the number of samples per empirical target measure. The...
https://arxiv.org/abs/2505.21274v1
two probability measures µ, ν ∈ M_1(R^d) as follows: W^p_{ε,p}(µ,ν) = inf_{γ∈Π(µ,ν)} ∫_{R^d×R^d} ‖x−y‖^p dγ(x,y) + ε KL(γ|µ⊗ν) (2.3) where KL(γ|ξ) := ∫_{R^d×R^d} (log(dγ/dξ(x,y)) − 1) dγ(x,y) denotes the Kullback-Leibler divergence and ε > 0 is a regularization parameter. The optimization problem in (2.3) c...
https://arxiv.org/abs/2505.21274v1
cardinality constraint. Formally, the aim is to find a probability measure ν* := Σ_{i=1}^N π_i δ_{y_i}, where Y := (y_1,...,y_N) ∈ (R^d)^N and π ∈ Δ_N, that solves the following optimization problem: min_{(Y,π)∈A} F_D(µ_1,...,µ_L, Σ_{i=1}^N π_i δ_{y_i}) := (1/L) Σ_{ℓ=1}^L D(µ_ℓ, Σ...
https://arxiv.org/abs/2505.21274v1
in two ways: either by using the Sinkhorn divergence D = W^p_{ε,p} in the barycenter problem, or by adding an entropic regularization term on the barycenter itself in the optimization problem (1.2). In [14], the author studies both regularization approaches, as well as a combination of the two. Sliced-Wasserstein distances ar...
https://arxiv.org/abs/2505.21274v1
where µ is a target measure and f^c denotes the c-transform of f. Upper bounding the generalization error uniformly in the parameter space by the log-entropy of the functional class allows us to conclude. Remark 4.3. Alternative statistical learning techniques could have been applied to derive upper bounds in Theorem 4.1...
https://arxiv.org/abs/2505.21274v1
empirical optimal transport divergences between a measure µ and an empirical measure ν_n distributed over n i.i.d. random variables of common law ν, depends on the intrinsic complexity of the measures ν and µ. If at least one of them is discretely supported, the convergence rate is of order O(1/√n) [54, 24, 31, 43, 37], which ...
https://arxiv.org/abs/2505.21274v1
of subgaussian probability measures [19]. 5.3 K-means and constrained K-means. Similarly to the previous subsection, choosing D = W²_2 and L = 1 in Corollary 4.4 provides error bounds for the celebrated K-means and constrained K-means problems. Let µ ∈ M_1(B_R) and X_1,...,X_n be n i.i.d. samples of law µ. It is well known that the mi...
https://arxiv.org/abs/2505.21274v1
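The K-means connection can be made concrete: with D = W²_2 and L = 1, Lloyd's algorithm minimizes the squared 2-Wasserstein distance between the empirical measure of the data and an N-atom measure whose atoms are the centroids and whose weights are the cluster proportions. A minimal numpy sketch under illustrative data follows; it is not the paper's algorithm, only the classical procedure the bound applies to.

```python
import numpy as np

def lloyd(X, N, iters=50, seed=0):
    """Lloyd's algorithm; returns atoms, weights, and W_2^2(mu_n, nu)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), N, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)                  # send each sample to its nearest atom
        for j in range(N):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    # final assignment: nearest-atom transport is optimal when weights are free
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(1)
    weights = np.bincount(labels, minlength=N) / len(X)   # cluster proportions
    w2_sq = d2[np.arange(len(X)), labels].mean()          # K-means objective
    return centers, weights, w2_sq

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-2, 0.1, (50, 2)), rng.normal(2, 0.1, (50, 2))])
centers, weights, w2_sq = lloyd(X, N=2)
```

Fixing `weights` in advance instead of reading them off the assignment gives the constrained K-means variant mentioned in the text.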
be computed in polynomial time in fixed dimension. Journal of Machine Learning Research, 22(44):1–19, 2021. [3] Jason M. Altschuler and Enric Boix-Adsera. Wasserstein barycenters are NP-hard to compute. SIAM Journal on Mathematics of Data Science, 4(1):179–203, 2022. [4] Pedro C. Álvarez-Esteban, Eustasio Del Barrio, ...
https://arxiv.org/abs/2505.21274v1
by accelerated gradient descent is better than by Sinkhorn's algorithm. In International Conference on Machine Learning, pages 1367–1376. PMLR, 2018. [23] Nicolas Fournier and Arnaud Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3):70...
https://arxiv.org/abs/2505.21274v1
for Wasserstein approximation using point clouds. In Advances in Neural Information Processing Systems, volume 34, pages 12810–12821. Curran Associates, Inc., 2021. [41] Eduardo F. Montesuma and Fred-Maurice Ngolè Mboula. Wasserstein barycenter transport for acoustic adaptation. In ICASSP 2021-2021 IEEE Internat...
https://arxiv.org/abs/2505.21274v1
or the sliced 1-Wasserstein distance and the max-sliced 1-Wasserstein distance. arXiv preprint arXiv:2205.14624, 2022. [60] Qingyuan Yang and Hu Ding. Approximate algorithms for k-sparse Wasserstein barycenter with outliers. arXiv preprint arXiv:2404.13401, 2024. A. Semi-dual formulation of optimal transport. The Ka...
https://arxiv.org/abs/2505.21274v1
be overlooked, it cannot be omitted in our analysis. This is particularly relevant in the case of the sliced Wasserstein distance, where two distinct points in R^d may project onto the same point on a line. Similarly, it is important to ensure that the weights of the discrete measure remain in the interior of the proba...
https://arxiv.org/abs/2505.21274v1
From [50, Proposition 5.11] and [50, Remark 1.13] that follows, we may also select a c-concave function w^c which verifies for all i ∈ [[1,N]]: w_i = (w^c(y_i))^c. We then have |w_i − w_N| = |(w^c(y_i))^c − (w^c(y_N))^c| = |min_{x∈Supp(µ)} {‖x−y_i‖^p − w^c(x)} − min_{x∈Supp(µ)} {‖x−y_N...
https://arxiv.org/abs/2505.21274v1
2-covering of B_1(0, 2(2R)^p). For any vector of indices i = (i_1,...,i_N) ∈ [[1,n(τ)]]^N and k = (k_1,...,k_N) ∈ [[1,m(τ)]]^N, we define for all x ∈ B_R the function f_{i,k}(x) = min_{j=1,...,N} ‖x − z_{i_j}‖^p − v_{k_j}. Let f := f_{Y,w} ∈ F_p; then by (C.3) there exists i, k such that |f(x) − f_{i,k}(x)| ≤ p R^{p−1} max_{j=1,...,N} ‖y_j − z_{i_j}‖ + max_{j=1,...,N}...
https://arxiv.org/abs/2505.21274v1
by Lemma (D.1), we have for x ∈ B_R: | |⟨x−y_i, θ⟩|^p − |⟨x−z_i, φ⟩|^p | ≤ p R^{p−1} |⟨x−y_i, θ⟩ − ⟨x−z_i, φ⟩| = p R^{p−1} |⟨x, θ−φ⟩ + ⟨z_i, φ−θ⟩ + ⟨z_i, θ...
https://arxiv.org/abs/2505.21274v1
f_{Y,π,w}: x ∈ B_R ↦ −ε log( Σ_{i=1}^N π_i exp((w_i − ‖x − y_i‖^p)/ε) ), with Y ∈ B_R^N, π ∈ Δ_N, w ∈ B_1(0, 4p(2R)^p)^N; then for all τ > 0 and µ ∈ M_1(B_R), we have N(τ, F^ε_p, ‖·‖_{L²(µ)}) ≤ (...
https://arxiv.org/abs/2505.21274v1
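The entropic potentials f_{Y,π,w}(x) = −ε log Σ_i π_i exp((w_i − ‖x−y_i‖^p)/ε) above are smoothed (soft-min) versions of the c-transform-type potentials min_i ‖x−y_i‖^p − w_i, and recover them as ε → 0. A minimal numerical sketch with illustrative data:

```python
import numpy as np

def f_entropic(x, Y, pi, w, p=2.0, eps=0.1):
    """-eps * log( sum_i pi_i exp((w_i - ||x - y_i||^p) / eps) )."""
    cost = np.linalg.norm(x - Y, axis=1) ** p
    a = (w - cost) / eps
    m = a.max()                                # stabilised log-sum-exp
    return float(-eps * (m + np.log(np.sum(pi * np.exp(a - m)))))

def f_hard(x, Y, w, p=2.0):
    """The unregularised counterpart min_i ||x - y_i||^p - w_i."""
    return float(np.min(np.linalg.norm(x - Y, axis=1) ** p - w))

rng = np.random.default_rng(0)
Y = rng.normal(size=(5, 3))                    # atoms y_i
pi = np.full(5, 0.2)                           # weights pi_i
w = rng.normal(size=5)                         # dual weights w_i
x = np.zeros(3)
# the soft-min approaches the hard min as eps -> 0
gap = abs(f_entropic(x, Y, pi, w, eps=1e-4) - f_hard(x, Y, w))
```

The log-sum-exp stabilisation matters in practice: for small ε the raw exponentials under- or overflow immediately.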
, with κ = 2C_p R + (2R)^p, where C_p = √2 p (2R)^{p−1} is the Lipschitz constant of the Euclidean cost at the power p. Additionally, K_{p,R} is an unknown constant depending only on R and p, and C_{p,R} is the constant associated to W^p_{ε,p} in Theorem 4.1. Proof. Let (Y*, π*) and (Y_n, π_n) be respective minimizers of (Y,π) ↦ F_D(µ_1,...,µ_L...
https://arxiv.org/abs/2505.21274v1
Σ_{i=1}^N π_i w_i. (D.1) Notice that a first order condition with respect to π implies w* = 0_{R^N} at optimality. Let π* ∈ Δ_N be a minimizer of (D.1). In order to use the duality expression (A.2), we consider π̃ ∈ int(Δ_M) the vector of weights π* whose null components have been removed. Following this construction and with a slight abuse of no...
https://arxiv.org/abs/2505.21274v1
arXiv:2505.21778v1 [math.PR] 27 May 2025. Reconstruction of the Probability Measure and the Coupling Parameters in a Curie-Weiss Model. Miguel Ballesteros, Ramsés H. Mena, Arno Siri-Jégousse, and Gabor Toth. Abstract. The Curie-Weiss model is used to study phase transitions in statistical mechanics and has been the ...
https://arxiv.org/abs/2505.21778v1
identifiable groups which have distinct cultures or attitudes, manifesting in different voting decisions. The first version of a multi-group Curie-Weiss model was introduced in [7]; subsequently, similar models were studied in [3, 26, 27, 19, 20, 23], including the statistical problem of community detection [3, 26, 1]....
https://arxiv.org/abs/2505.21778v1
imposing a ‘fairness criterion,’ we can determine what the weights in the council ought to be. The theoretical optimal weights may depend on the underlying voting model (such as the Curie-Weiss model treated in this article) which describes the population’s voting behaviour in probabilistic terms. If we reconstruct the...
https://arxiv.org/abs/2505.21778v1
platforms. Given this applicability to different academic disciplines, we have tried to provide complete and comprehensible proofs that appeal to a wide audience composed of mathematicians, statisticians, physicists, economists, and political scientists. The main results of this article are Propositions 8 and 9, and Th...
https://arxiv.org/abs/2505.21778v1
the degree of influence the voters in group λ exert over each other, with the influence becoming stronger the larger βλis. As we see, the most probable voting configurations are those with unanimous votes in favour of or against the proposal. However, there are only two of these extreme configurations, whereas there is...
https://arxiv.org/abs/2505.21778v1
Definition 3. We define the statistic T: Ω^n_{N_1+···+N_M} → R^M for any realisation of the sample x = (x^(1), ..., x^(n)) ∈ Ω^n_{N_1+···+N_M} by T(x) := (1/n) Σ_{t=1}^n ( (Σ_{i=1}^{N_1} x^(t)_{1i})², ..., (Σ_{i=1}^{N_M} x^(t)_{Mi})² ). Remark 4. T is a random vector on the probability space Ω^n_{N_1+···+N_M} with the power set of Ω^n_{N_1+···+N_M} as the σ-algebra and t...
https://arxiv.org/abs/2505.21778v1
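The statistic T of Definition 3 averages, over the n observed voting configurations, the squared vote sum of each group. A direct sketch with illustrative group sizes and i.i.d. fair votes (for which E[S_λ²] = N_λ, so T should land near the group sizes):

```python
import numpy as np

def T_statistic(sample, group_sizes):
    """sample: (n, N_1 + ... + N_M) array of votes in {-1, +1}."""
    out, start = [], 0
    for N_lam in group_sizes:
        S_lam = sample[:, start:start + N_lam].sum(axis=1)  # group sum per observation
        out.append(np.mean(S_lam ** 2))                     # average squared group sum
        start += N_lam
    return np.array(out)

rng = np.random.default_rng(0)
sample = rng.choice([-1, 1], size=(1000, 7))   # n = 1000, groups of sizes 3 and 4
T = T_statistic(sample, [3, 4])
```

Under the Curie-Weiss coupling, positive β_λ inflates these squared sums above N_λ, which is what makes T informative about the coupling parameters.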
to this problem are: 1. Find and use an asymptotic approximation of E_{β̂_N(x),N}[S²_1, ..., S²_M] valid for large N_1+···+N_M which is less costly to calculate than the exact moment E_{β̂_N(x),N}[S²_1, ..., S²_M]. 2. Employ alternate estimators based on small subsets of voters so that instead of the moment E_{β̂_N(x),N}[S...
https://arxiv.org/abs/2505.21778v1
Σ_{i=1}^N u_i)²] = 0 because (Σ_{i=1}^N x_i)² < N² = (Σ_{i=1}^N u_i)². The next statement follows directly from the lemma by noting that |Σ_{i=1}^N x_i| = N is equivalent to x ∈ {−u, u}: Corollary 13. The limits lim_{β→∞} p_β(x)/Σ_{y∈Ω_N} p_β(y) = 1/2 if |Σ_{i=1}^N x_i| = N, and 0 otherwise, hold for all x ∈ Ω_N. Definition 14. We define the minimum of the range of S...
https://arxiv.org/abs/2505.21778v1
exp β 2N NX i=1yi!2  =1 2NEβ,NS4−1 2NX x∈ΩN NX i=1xi!2 Z−1 β,Nexp β 2N NX i=1xi!2  ·X y∈ΩN NX i=1yi!2 Z−1 β,Nexp β 2N NX i=1yi!2  =1 2NEβ,NS4−1 2N Eβ,NS22=1 2NVβ,NS2>0, where we used the derivative of Zβ,Nprovided in Lemma 51, and the strict inequality stems from the fact that the random variable S2is not...
https://arxiv.org/abs/2505.21778v1
Proposition 27 below. Lemma 22. Let f: R → R be a convex function. Then the Legendre transform f* is convex. Proof. Let θ ∈ [0,1] and t_1, t_2 ∈ R. Then θf*(t_1) + (1−θ)f*(t_2) = θ sup_{x∈R} {t_1 x − f(x)} + (1−θ) sup_{x∈R} {t_2 x − f(x)} = sup_{x∈R} {θt_1 x − θf(x)} + sup_{x∈R} {(1−θ)t_2 x − (1−θ)f(x)} ≥ sup_{x∈R} {(θt_1 + (1−θ)t_2)x − (θ + (1−θ))f(x)} = f*(θt_1 + (1−θ)t_2), w...
https://arxiv.org/abs/2505.21778v1
0. Proof. Since Y is bounded, Λ_Y is well-defined and finite for all t ∈ R. We now show that Λ_Y is convex and differentiable. Let θ ∈ [0,1] and t_1, t_2 ∈ R. Then Λ_Y(θt_1 + (1−θ)t_2) = ln E exp((θt_1 + (1−θ)t_2)Y) = ln E[exp(t_1 Y)^θ exp(t_2 Y)^{1−θ}] ≤ ln[(E exp(t_1 Y))^θ (E exp(t_2 Y))^{1−θ}] = θ Λ_Y(t_1) + (1−θ) Λ_Y(t_2), where we used Hölder's ine...
https://arxiv.org/abs/2505.21778v1
sup_{t≥0} {xt − Λ_Y(t)} for all x ≥ EY. As the function x ↦ xt − Λ_Y(t), given t ≥ 0, is increasing, we have shown that Λ*_Y is increasing on the interval (EY, ess sup Y). Together with the strict convexity of Λ*_Y, this implies that Λ*_Y is strictly increasing on the interval (EY, ess sup Y). Analogously, one can show Λ*_Y(x) = sup_{t≤0} {xt −...
https://arxiv.org/abs/2505.21778v1
distributed according to P.’ In the following, we will state some auxiliary results we will use in the proof of Theorem 10. Recall that we use the symbol κ∈ {0,1}for the minimum value the random variable S2can assume. Recall Definition 19 of the function ϑN. Lemma 30. The function ϑNfrom Definition 19 has an inverse fu...
https://arxiv.org/abs/2505.21778v1
to x_{n_k} → x_0 as k → ∞ and the lower semi-continuity of I. As x_0 ∈ K, we also have I(x_0) ≥ inf_{x∈K} I(x). Thus I(x_0) = inf_{x∈K} I(x) holds. Lemma 35. Let X and Y be metric spaces, I: X → [0,∞] a good rate function, and f: X → Y a continuous function. We define J: Y → [0,∞] by J(y) := inf{I(x) | x ∈ X, f(x) = y}, y ∈ Y. Let M_I ⊂ X be the set of mi...
https://arxiv.org/abs/2505.21778v1
..., x_N) := (Σ_{i=1}^N x_i)² for all (x_1, ..., x_N) ∈ Ω_N. Hence, we have µ = E_{β,N}S² = ET and σ² = V_{β,N}S². By Proposition 56, √n (T − E_{β,N}S²) →_d N(0, V_{β,N}S²) as n → ∞. We want to apply Theorem 54 to the transformation f := ϑ_N^{−1}, but face the difficulty that ϑ_N^{−1} is not differentiable on the entire range of T. By assumption, 0 <...
https://arxiv.org/abs/2505.21778v1
set ϑ_N(K) ⊂ R. Hence, ϑ_N(K) is a closed set. Since K does not contain β and ϑ_N is strictly increasing, ϑ_N(K) does not contain E_{β,N}S². By Proposition 56, we have P{T ∈ ϑ_N(K)} ≤ 2 exp(−δ′n) for all n ∈ N with δ′ := inf_{x∈ϑ_N(K)} Λ*_{S²}(x) > 0. The equality δ = inf_{y∈K} J(y) = inf_{y∈K} inf{Λ*_{S²}(x) | x ∈ R, ϑ_N^{−1}(x) = y} = inf{Λ*_{S²}(x) | x ∈ R, ϑ_N^{−1}(...
https://arxiv.org/abs/2505.21778v1
(1/n^{k/2}) Σ_{i_1,...,i_k=1}^n E[U_{i_1} ··· U_{i_k}] = (1/n^{k/2}) Σ_{r∈Π} (n! / (r_1! ··· r_k! (n − Σ_{ℓ=1}^k r_ℓ)!)) (k! / (1!^{r_1} ··· k!^{r_k})) E[U^{(r)}]. (22) As a first step, we note that since E U_i = 0 and E[U_i^ℓ U_j^m] = E[U_i^ℓ] E[U_j^m] for all i ≠ j and all ℓ, m ∈ N, we have for all r ∈ Π with r_1 ≥ 1, E[U^{(r)}] = 0. (23) Now let r_1 = 0 and r_{ℓ*} > 0 for some ℓ* > 2. Set ⌊x⌋ := max{m ∈ Z | m ≤ x} for all x ∈ R. Suppose ...
https://arxiv.org/abs/2505.21778v1
V_{β,N}S²)(A). In particular, we have for any ε > 0, P{|β̂_N − β| ≥ ε/√n} → N(0, 4N²/V_{β,N}S²)((−ε, ε)^c) as n → ∞. Proof. Theorem 10 states that √n (β̂_N − β) →_d N(0, 4N²/V_{β,N}S²). By the definition of convergence in distribution and the absolute continuity of the normal distribution, we have for any measurable set A, P{β̂_N...
https://arxiv.org/abs/2505.21778v1
The seminal work by Penrose [30], which introduced the square root law as a rule for assigning voting weights that equalises each voter’s probability of being decisive in a two-tier system under the assumption of independent voting, exemplifies this approach. Further contributions to understanding optimal voting weight...
https://arxiv.org/abs/2505.21778v1
of p_β(x)/Z_{β,N} for any x ∈ Ω_N employing Lemma 51. Recall that p_β(x) equals exp((β/(2N))(Σ_{i=1}^N x_i)²) and s(x) stands for Σ_{i=1}^N x_i for all x ∈ Ω_N. d/dβ [p_β(x)/Z_{β,N}] = [(dp_β(x)/dβ) Z_{β,N} − p_β(x) (dZ_{β,N}/dβ)] / Z²_{β,N} = (s(x)²/(2N)) p_β(x)/Z_{β,N} − p_β(x) Z_{β,N} E_{β,N}S² / (2N Z²_{β,N}) = (p_β(x)/(2N Z_{β,N})) (s(x)² − E_{β,N}S²). (30) We note that the derivative is positive if and only i...
https://arxiv.org/abs/2505.21778v1
cohesive in the group votes. Proposition 45 yields an estimator for the optimal weights of each group by substituting any estimator for the parameter β_λ. For example, an estimator for the optimal weights based on the estimator β̂_N from Definition 7 can be defined as follows. Let for all λ ∈ N_M, β̂_{N,λ}(λ) := β̂_{N_λ}. Then, β̂_N...
https://arxiv.org/abs/2505.21778v1
|µ − Y_n| →_p 0 as n → ∞, and as f′ is continuous, Theorem 53 implies f′(ξ_n) →_p f′(µ). The last display, √n (Y_n − µ) →_d N(0, σ²) (32), and Theorem 52 together yield √n (f(Y_n) − f(µ)) →_d N(0, (f′(µ))² σ²). Recall Notation 6 for the expressions [−∞,∞] and [0,∞]. Definition 55. Let (P_n)_{n∈N} be a seq...
https://arxiv.org/abs/2505.21778v1
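The delta-method step just used, √n(Y_n − µ) →_d N(0, σ²) implying √n(f(Y_n) − f(µ)) →_d N(0, (f′(µ))² σ²), is easy to check by simulation. A toy Monte Carlo illustration with f = exp and illustrative µ, σ (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 2000, 4000
mu, sigma = 1.0, 2.0
# sample means Y_n: sqrt(n)(Y_n - mu) is approximately N(0, sigma^2)
Yn = mu + sigma * rng.normal(size=(reps, n)).mean(axis=1)
# transformed statistic sqrt(n)(f(Y_n) - f(mu)) with f = exp
lhs = np.sqrt(n) * (np.exp(Yn) - np.exp(mu))
empirical_sd = lhs.std()
predicted_sd = np.exp(mu) * sigma              # |f'(mu)| * sigma
```

The empirical standard deviation of the transformed statistic matches the delta-method prediction up to Monte Carlo and finite-n error.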
found, hence we prove it here. The random variable f(X) is bounded and not almost surely constant, so Lemma 26 applies to Λ*_{f(X)}. Set δ := inf{Λ*_{f(X)}(x) | x ∈ K}. (33) As E_{β,N}f(X) = µ ∈ K^c and K^c is open, there is some η > 0 such that the open ball B_η(µ) with radius η and centre µ is a subset of K^c. We choose ε_r := sup{η > 0 | (µ, µ...
https://arxiv.org/abs/2505.21778v1
S. Felsenthal and Moshé Machover. The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes. Edward Elgar Publishing, 1998. [12] Dan S. Felsenthal and Moshé Machover. Minimizing the mean majority deficit: The second square-root rule. Math. Soc. Sci., 37(1):25–37, 1999. [13] Ignacio Gallo, Adria...
https://arxiv.org/abs/2505.21778v1
Random irregular histograms. Oskar Høgberg Simensen (1*), Dennis Christensen (2), Nils Lid Hjort (1). (1) Department of Mathematics, University of Oslo. (2) Norwegian Defence Research Establishment (FFI). Abstract. We propose a new method of histogram construction, providing the first fully Bayesian approach to irregular histograms. Our proc...
https://arxiv.org/abs/2505.22034v1
to regular histogram methods; see e.g. Davies and Kovac (2004); Rozenholc et al. (2010); Li et al. (2020); Mendizábal et al. (2023). The irregular histogram model proposed in this paper provides, to our knowledge, the first fully Bayesian approach to histogram estimation. In particular, our approach is based on finding the partition ...
https://arxiv.org/abs/2505.22034v1
on powers of the L_r or Hellinger metrics, ‖f_0 − f̂‖_r = (∫_0^1 |f_0(x) − f̂(x)|^r dx)^{1/r}, d_H(f_0, f̂) = (∫_0^1 {√f_0(x) − √f̂(x)}² dx)^{1/2}. On the other hand, approaches based on maximum likelihood attempt to minimize the Kullback-Leibler divergence, KL(f_0, f̂) = ∫_0^1 f_0(x) log(f_0(x)/f̂(x)) dx. In general, the risk of an estimator will depe...
https://arxiv.org/abs/2505.22034v1
| k′) = p_n(k) p_n(I | k). Finally, as prior distribution for θ | I we take a k-dimensional Dir(a) distribution, which has density p(θ | I) = (Γ(a) / Π_{j=1}^k Γ(a_j)) Π_{j=1}^k θ_j^{a_j−1}, θ ∈ S_k, where a = (a_1, ..., a_k) ∈ (0,∞)^k and a = Σ_{j=1}^k a_j. In general, the parameters a_j may depend on the partition I, although we omit this in the notation. We defer the dis...
https://arxiv.org/abs/2505.22034v1
T_{n,k} is a Dirichlet distribution, we can read off the posterior mean directly, yielding Bayes estimates θ̂_j = (a_j + N_j)/(a + n) = (a/(a+n))·(a_j/a) + (n/(a+n))·(N_j/n), j = 1, 2, ..., k. (7) Since the prior mean of θ_j is a_j/a, we see that the posterior mean is a convex combination of the prior mean and the maximum likelihood estimate N_j/n. If a is small relative to n, then the data-based p...
https://arxiv.org/abs/2505.22034v1
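The posterior mean (7) can be sketched for a fixed partition in a few lines. The partition, data, and Dirichlet parameters below are illustrative assumptions; only the formula θ̂_j = (a_j + N_j)/(a + n) follows the text.

```python
import numpy as np

def posterior_mean_histogram(x, breakpoints, a):
    """Bayes estimates (a_j + N_j)/(a + n) for a fixed partition of [0, 1]."""
    N = np.histogram(x, bins=breakpoints)[0]     # bin counts N_j
    n, a_tot = len(x), a.sum()
    theta = (a + N) / (a_tot + n)                # posterior mean, Equation (7)
    density = theta / np.diff(breakpoints)       # histogram density estimate
    return theta, density

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=500)
breaks = np.array([0.0, 0.2, 0.5, 1.0])          # illustrative fixed partition
a = np.full(3, 5.0)                              # illustrative Dirichlet parameters
theta, density = posterior_mean_histogram(x, breaks, a)
```

As the text notes, for a small relative to n the estimate is dominated by the maximum likelihood frequencies N_j/n.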
log p_n(k) + log Γ(a) − log Γ(a + n) − log C(k_n − 1, k − 1). Φ is additive with respect to the intervals of the partition in the sense that we can write Φ = Σ_{j=1}^k Φ_0(I_j), where Φ_0(I_j) depends only on I_j. This allows for the use of the dynamic programming algorithm due to Kanazawa (1988) which determines the maximizer of (10) using O(...
https://arxiv.org/abs/2505.22034v1
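The additivity Φ = Σ_j Φ_0(I_j) is what makes a Kanazawa-style dynamic programme possible: the best score for splitting the first t grid cells into j intervals depends only on the best (j−1)-interval splits of shorter prefixes. A generic sketch follows; the toy per-interval score `phi0` is an illustrative stand-in for the log-marginal-likelihood terms of the actual method.

```python
import numpy as np

def best_partition(phi0, k_n, k):
    """Maximise sum_j phi0 over partitions of k_n grid cells into k intervals."""
    best = np.full((k + 1, k_n + 1), -np.inf)
    arg = np.zeros((k + 1, k_n + 1), dtype=int)
    best[0, 0] = 0.0
    for j in range(1, k + 1):
        for t in range(j, k_n + 1):
            # the last interval covers cells [l, t); try every admissible l
            scores = [best[j - 1, l] + phi0(l, t) for l in range(j - 1, t)]
            i = int(np.argmax(scores))
            best[j, t] = scores[i]
            arg[j, t] = i + (j - 1)
    cuts, t = [k_n], k_n                    # backtrack the optimal breakpoints
    for j in range(k, 0, -1):
        t = int(arg[j, t])
        cuts.append(t)
    return float(best[k, k_n]), cuts[::-1]

# toy per-interval score rewarding intervals of exactly 4 cells
phi0 = lambda l, r: -abs((r - l) - 4)
score, cuts = best_partition(phi0, k_n=12, k=3)
```

The double loop evaluates O(k · k_n²) interval scores, matching the polynomial cost alluded to in the text.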
on the results from these preliminary simulations, we suggest a = 5 as a reasonable default choice for a. As presented, the random irregular histogram method applies to densities supported on a known closed interval. To apply the methodology to data with unknown support, the support of the data has to be estimated. Our preferred metho...
https://arxiv.org/abs/2505.22034v1
on the prior and the true density f_0. The proof of Theorem 1 is given in Appendix B.2. Theorem 1. Suppose that f_0 is a probability density on the unit interval satisfying ∫_0^1 f_0(x)^r dx < ∞ for some r ∈ (1,2], and suppose that the random histogram prior sequence {P_n}_{n∈N} satisfies the following conditions: (i) The sequence of pri...
https://arxiv.org/abs/2505.22034v1
of the Bayes histogram estimator mirrors the posterior convergence rate obtained by Scricciolo (2007) for the corresponding regular histogram model, with the minor difference that our rate is slightly better in terms of the logarithmic factor. The additional logarithmic factors appearing in the convergence rate are rather typical of Bayesian densit...
https://arxiv.org/abs/2505.22034v1
included in the simulation study γ > 0 such that ∫_{C_{j,γ}} |f_0(x) − f̄_{0,j,γ}| dx / ∫_{C_{j,γ}} f_0(x) dx > 0.2, C_{j,γ} = [t_j − γ, t_j + γ], where we have defined f̄_{0,j,γ} = |C_{j,γ}|^{−1} ∫_{C_{j,γ}} f_0(x) dx. We then take δ_j = γ/2 as our tolerance for the peak at t_j. This criterion essentially measures how well f_0 is approximated by a constant function around any given mo...
https://arxiv.org/abs/2505.22034v1
Hellinger risk (left) and PID risk (right). 5.3 Results. Tables showing the estimated risks R̂_n(f_0, f̂_m) for each of the three loss functions can be found in Appendix C.2, along with boxplots comparing the performances of the different methods included in the study. To provide a more compact summary of the results, we ranked each method accordin...
https://arxiv.org/abs/2505.22034v1
the irregular histogram is able to adapt the bin width locally, providing more smoothing in flatter regions of the density, which results in high-precision estimates of each of the five true modes. On the other hand, Figure 4: Irregular (left) and regular (right) histograms fitted to the Old Faithful data. The dashe...
https://arxiv.org/abs/2505.22034v1
avoids having to use boundary corrections, which are necessary for some classical density estimators such as kernel-based ones, since we are primarily concerned with estimating a value at the boundary of the support of the density. To investigate the performance of our irregular histogram procedure in this setting, we conducted a small simulation st...
https://arxiv.org/abs/2505.22034v1
the histogram methods are quite reasonable in this case. 7 Discussion We conclude the article with a brief discussion of possible extensions of the random irregular histogram, and how the ideas presented here can be used to construct computationally tractable Bayesian models for regression and hazard rate estimation. T...
https://arxiv.org/abs/2505.22034v1
to present the results, we computed the logarithm of the risk in (A.1) relative to that of the uniform prior. The resulting log-relative risks are shown in Figure 6. The uniform prior appears to yield the best performance for most combinations of the sample size and true density, with the Figure 6: Logarithm of risk relative to the...
https://arxiv.org/abs/2505.22034v1
..., x_n), a likelihood function f and a prior P_n on F, the space of probability densities on [0,1], the posterior probability of a measurable set A is P_n(f ∈ A | x) = ∫_A Π_{i=1}^n f(x_i) dP_n(f) / ∫_F Π_{i=1}^n f(x_i) dP_n(f). From the form of the random histogram prior given in (4), this probability can be decomposed as P_n(f ∈ A | x) = Σ_{I∈P_{T_n}} p_n(I...
https://arxiv.org/abs/2505.22034v1
x, Î), which follows from the convexity of the map g ↦ d²_H(f_0, g), the conditional version of Jensen's inequality and the fact that the Hellinger metric is bounded by √2. Since both terms on the right hand side of (B.5) are O(ε²_n), we obtain E_{x∼f_0}[d²_H(f_0, f̂_Î)] = O(ε²_n), which proves the claim. We now turn our attention to finding ...
https://arxiv.org/abs/2505.22034v1
d_H) = C(k_n − 1, s_n − 1) N_[](ε, S_{s_n}, d_H). Using the fact that the bracketing number bounds the covering number for the same ε together with previous estimates, we deduce the following chain of inequalities: N(ε, H_{T_n,s_n}, d_H) ≤ Σ_{I∈P_{T_n,s_n}} N_[](ε, H_{T_n,I}, d_H) ≤ C(k_n − 1, s_n − 1) N_[](ε, S_{s_n}, d_H) ≤ (e k_n/s_n)^{s_n} √s_n (2πe)^{s_n/2} / ε^{s_n−1}, where we used that C(k_n − 1, s_n − 1) ...
https://arxiv.org/abs/2505.22034v1
2 d²_H(f_{0,I^(n)}, f_{I^(n),θ}). (B.11) To bound the first term of (B.11), we leverage Lemma B.4 to find that d²_H(f_0, f_{0,I^(n)}) ≤ Ã k^{−2α} ≤ Ã C_1^{−2α} ε² = Ã C_1^{−2α} c_0² γ², for some constant Ã > 0. We now bound the last term on the right hand side of (B.11). Denoting π^(n)_j = ∫_{I^(n)_j} f_0(x) dx and using the fact that d²_H(f_{I^(n),θ}, f_{0,I...
https://arxiv.org/abs/2505.22034v1
≥ exp(−s_n log(A_1 n)) ≥ exp(−A_2 s_n log(n)), for two constants A_1, A_2 > 0. p_n(s_n) ≥ D_1 exp(−d_1 s_n log(s_n)) ≥ D_1 exp(−d_1 s_n log(n)). It is easily verified that log(1/ε_n)/log(n) converges to a positive, finite limit as n → ∞. Moreover, a quick computation shows that n ε²_n = n^{1/(2α+1)} {log(n)}^{2α/(2α+1)} and hence, s_n log(n) ≤ C_2 n^{1/(2α+1)} lo...
https://arxiv.org/abs/2505.22034v1
max_j |J^(n)_j| → 0. Define h_n = m_{ε,J^(n)} to be the piecewise constant version of m_ε based on the partition J^(n). To show convergence in L1, we first show that h_n converges to m_ε almost everywhere. To this end, fix µ > 0 and let x ∈ [0,1] be a continuity point of m_ε so that |m_ε(x) − m_ε(y)| < µ whenever |x − y| < ν for all sufficiently small ν. F...
https://arxiv.org/abs/2505.22034v1
for γ = r − 1 if min_j θ_j > µ²/2, due to the fact that f_0 ∈ L_r([0,1]) for some r ∈ (0,1] by assumption. To show that the prior assigns sufficient mass to Hellinger neighbourhoods of f_0, note that by the triangle inequality, d²_H(f_0, f_{I^(n),θ}) ≤ ‖f_0 − h_n‖_1 + ‖h_n − f_{I^(n),θ}‖_1 = ‖f_0 − h_n‖_1 + ‖η^(n) − θ‖_1, where η^(n)_j = ∫_{I^(n)_j} h_n(x) dx. The first term in the above ...
https://arxiv.org/abs/2505.22034v1
the RIH, L2CV and KLCV methods we used the greedy search heuristic of Rozenholc et al. (2010) to reduce the computational burden of computing the optimal partition according to these criteria. Based on some preliminary simulations, we decided to use a data-driven grid for both cross-validation methods and restricted ou...
https://arxiv.org/abs/2505.22034v1
[Flattened table rows: estimated risks by true density and sample size (50 to 25000) across the twelve compared methods; the full, labelled tables are in Appendix C.2.]
https://arxiv.org/abs/2505.22034v1
[Flattened table rows: further estimated risks by sample size (50 to 25000) and method; column headers not recoverable from this excerpt.]
https://arxiv.org/abs/2505.22034v1
[Flattened table rows: additional simulation results by true density, sample size, and method; column headers not recoverable from this excerpt.]
https://arxiv.org/abs/2505.22034v1
[Flattened table rows for methods including RMG-R, TS, L2CV and KLCV: values for density 9 at sample sizes 50 to 5000; remaining column headers lost.]
https://arxiv.org/abs/2505.22034v1