ICS for Complex Data with Application to Outlier Detection for Density Data

A Preprint

Camille Mondon (Mathematics and Statistics, Toulouse School of Economics, Toulouse, 31000, camille.mondon@tse-fr.eu), Huong Thi Trinh (Faculty of Mathematical Economics, Thuongmai University, Hanoi, trinhthihuong@tmu.edu.vn), Anne Ruiz-Gazen (Mathematics and Statistics, Toulouse School of Economics, Toulouse, 31000, anne.ruiz-gazen@tse-fr.eu), Christine Thomas-Agnan (Mathematics and Statistics, Toulouse School of Economics, Toulouse, 31000, christine.thomas@tse-fr.eu)

May 27, 2025 — arXiv:2505.19403v1 [stat.ME]

ABSTRACT

Invariant coordinate selection (ICS) is a dimension reduction method used as a preliminary step for clustering and outlier detection. It has primarily been applied to multivariate data. This work introduces a coordinate-free definition of ICS in an abstract Euclidean space and extends the method to complex data. Functional and distributional data are preprocessed into a finite-dimensional subspace. For example, in the framework of Bayes Hilbert spaces, distributional data are smoothed into compositional spline functions through the maximum penalised likelihood method. We describe an outlier detection procedure for complex data and study the impact of some preprocessing parameters on the results. We compare our approach with other outlier detection methods through simulations, producing promising results in scenarios with a low proportion of outliers. ICS allows the detection of abnormal climate events in a sample of daily maximum temperature distributions recorded across the provinces of Northern Vietnam between 1987 and 2016.

Keywords: Bayes spaces • Distributional data • Extreme weather • Functional data • Invariant coordinate selection • Outlier detection • Temperature distribution • 62H25 • 62R10 • 62G07 • 65D07

1 Introduction

The invariant coordinate selection (ICS) method was introduced in a multivariate data analysis framework by Tyler et al. (2009).
ICS is one of the dimension reduction methods that extend beyond Principal Component Analysis (PCA) and second moments. ICS seeks projection directions associated with the largest and/or smallest eigenvalues of the simultaneous diagonalisation of two scatter matrices (see Loperfido 2021; Nordhausen and Ruiz-Gazen 2022, for recent references). This approach enables ICS to uncover underlying structures, such as outliers and clusters, that might be hidden in high-dimensional spaces. ICS is termed "invariant" because it produces components, linear combinations of the original features of the data, that remain invariant (up to their sign and some permutation) under affine transformations of the data, including translations, rotations and scaling. Moreover, Theorem 4 in Tyler et al. (2009) demonstrates that, for a mixture of elliptical distributions, the projection directions of ICS associated with the largest or smallest eigenvalues usually generate the Fisher discriminant subspace, regardless of the chosen pair of scatter matrices and without prior knowledge of group assignments. Once the pair of scatter matrices is chosen, invariant components can be readily computed, and dimension reduction is achieved by selecting the components that reveal the underlying structure. Recent articles have examined in detail the implementation of ICS in a multivariate framework, focusing on objectives such as anomaly detection (Archimbaud, Nordhausen, and Ruiz-Gazen 2018) or clustering (Alfons et al. 2024). These studies particularly address the choice of pairs of scatter matrices and the selection of relevant invariant components. Note that this idea of joint diagonalisation of scatter matrices is
also used in the context of blind source separation, and more precisely for Independent Component Analysis (ICA), which is a model-based approach as opposed to ICS (see Nordhausen and Ruiz-Gazen 2022, for more details). ICS has since been adapted to more complex data, namely compositional data (Ruiz-Gazen et al. 2023), functional data (Rendón Aguirre 2017; B. Li et al. 2021, for ICA) and multivariate functional data (Archimbaud, Boulfani, et al. 2022; Virta et al. 2020, for ICA). A significant contribution of the present work is the formulation of a coordinate-free variant of ICS, considering data objects in an abstract Euclidean space without having to choose a specific basis. This formulation allows ICS to be consistently defined in a very general framework, unifying its original definition for multivariate data and its past adaptations to specific types of complex data. In the case of compositional data, the coordinate-free approach yields an alternative implementation of ICS that is more computationally efficient. We are also able to propose a new version of invariant coordinate selection adapted to distributional data. Note that a coordinate-free version of ICS was already mentioned in Tyler et al. (2009), in the discussion by Mervyn Stone, who proposed to follow the approach of Stone (1987). In their response, Tyler and co-authors agree that this could offer a theoretically elegant and concise view of the topic. A coordinate-free approach to ICA is proposed by B. Li et al. (2021) but, to our knowledge, no coordinate-free approach to ICS exists for a general Euclidean space. As mentioned above, a possible application of ICS is outlier detection. In the context of a small proportion of outliers, a complete detection procedure integrating a dimension reduction step based on the selection of invariant coordinates is described by Archimbaud, Nordhausen, and Ruiz-Gazen (2018).
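To make the joint diagonalisation behind ICS concrete, the following minimal sketch (ours, not from the paper) solves the multivariate ICS problem for the standard scatter pair (Cov, Cov4) via a generalised symmetric eigenproblem; the paper's R implementation instead uses a QR-based algorithm, and the 1/(p+2) scaling of Cov4 follows the usual convention:

```python
import numpy as np
from scipy.linalg import eigh

def cov4(X):
    """Empirical fourth-order moment scatter matrix Cov4 (usual 1/(p+2) scaling)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    # Squared Mahalanobis distances ||Cov^{-1/2}(X_i - mean)||^2.
    d2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc)
    return (Xc * d2[:, None]).T @ Xc / (n * (p + 2))

def ics(X):
    """Jointly diagonalise (Cov, Cov4): solve S2 h = lambda S1 h with h' S1 h = 1."""
    n = len(X)
    Xc = X - X.mean(axis=0)
    S1 = Xc.T @ Xc / n
    lam, H = eigh(cov4(X), S1)      # generalised symmetric eigenproblem
    return lam[::-1], H[:, ::-1]    # non-increasing generalised kurtosis

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
lam, H = ics(X)
S1 = (X - X.mean(0)).T @ (X - X.mean(0)) / len(X)
# The eigenbasis whitens Cov and diagonalises Cov4 simultaneously.
assert np.allclose(H.T @ S1 @ H, np.eye(3), atol=1e-8)
assert np.allclose(H.T @ cov4(X) @ H, np.diag(lam), atol=1e-8)
```

The two assertions are exactly the defining equations of the ICS problem, written with sample scatter matrices.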
This method, called ICSOutlier, flags outlying observations and has been implemented for multivariate data by Nordhausen, Archimbaud, and Ruiz-Gazen (2023). It has been adapted to compositional data by Ruiz-Gazen et al. (2023) and to multivariate functional data by Archimbaud, Boulfani, et al. (2022). We propose to extend this detection procedure to complex data and illustrate it on distributional data. Detecting outliers is already challenging in a classical multivariate context because outliers may differ from the other observations in their correlation pattern (see Aggarwal 2017, for an overview of outlier detection and analysis). Archimbaud, Nordhausen, and Ruiz-Gazen (2018) demonstrate how the ICS procedure outperforms those based on the Mahalanobis distance and PCA (robust or not). For compositional data, the constraints of positivity and constant sum of coordinates must be taken into account, as detailed in Ruiz-Gazen et al. (2023) and further examined in this paper. For univariate functional data, outliers are categorised as either magnitude or shape outliers, with shape outliers being more challenging to detect because they are hidden among the other curves. Many existing detection methods for functional data rely on depth measures, including the Mahalanobis distance (see, e.g., the recent paper by Dai et al. 2020, and the references therein). Density data are constrained functional data, and thus combine the challenges associated with
both compositional and functional data. The literature on outlier detection for density data is very sparse and recent, limited, as far as we know, to the papers by Menafoglio (2021), Lei, Chen, and H. Li (2023) and Murph, Strait, et al. (2024). Two types of outliers have been identified for density data: horizontal-shift outliers and shape outliers, with shape outliers again being more challenging to detect (see Lei, Chen, and H. Li 2023, for details). The procedure proposed by Menafoglio (2021) is based on a version of functional PCA adapted to density objects in a control chart context. In order to derive a robust distribution-to-distribution regression method, Lei, Chen, and H. Li (2023) propose a transformation tree approach that incorporates many different outlier detection methods adapted to densities. Their methods involve transforming density data into unconstrained data and applying standard functional outlier detection methods. Murph, Strait, et al. (2024) continue this line of work by comparing more methods through simulations, and give an application to gas transport data. ICS is not mentioned in these references. Our coordinate-free definition of ICS enables a direct adaptation of the ICSOutlier method to complex data. Through a case study on temperature distributions in Vietnam, we assess the impact of preprocessing parameters and provide practical recommendations for their selection. In addition, the results of a simulation study demonstrate that our method performs favourably compared with other approaches. An original application to Vietnamese data provides a detailed description of the various stages involved in detecting low-proportion outliers using ICS, as well as interpreting them from the dual eigendensities. Section 2 presents ICS in a coordinate-free framework, states a useful result linking ICS in different spaces, and treats the specific cases of compositional, functional and distributional data.
For the latter, we develop a Bayes space approach and discuss the maximum penalised likelihood method to preprocess the original samples of real-valued data into a sample of compositional splines. Section 3 describes the ICS-based outlier detection procedure adapted to complex data and discusses the impact of the preprocessing parameters on outlier detection through a toy example. Simulating data from multiple generating schemes, we compare ICS with other outlier detection methods for density data. Section 4 provides an application of the outlier detection methodology to maximum temperature data in Vietnam over 30 years. Section 5 concludes the paper and offers some perspectives. Supplementary material on ICS, a reminder on Bayes spaces, as well as proofs of the propositions and corollaries, are given in the Appendix.

2 ICS for complex data

A naive approach to ICS for complex data would be to apply multivariate ICS to coordinate vectors in a basis. This not only ignores the metric on the space when the basis is not orthonormal, but also gives a potentially different ICS method for each choice of basis (as in Archimbaud, Boulfani, et al. 2022). Defining a unique coordinate-free ICS problem avoids defining multiple ICS methods and having to discuss the potential links between them, thus making our approach more
intrinsic. In particular, it leads to more interpretable invariant components that are of the same nature as the considered complex random objects. In the case of functional or distributional data, the usual framework assumes that the data objects reside in an infinite-dimensional Hilbert space, which leads to non-orthonormal bases and incomplete inner product spaces. We choose to restrict our attention to finite-dimensional approximations of the data in the framework of Euclidean spaces, which are particularly suitable here because ICS is known to fail when the dimension is larger than the sample size (Tyler 2010). This suggests that an ICS method for infinite-dimensional Hilbert spaces would require modifying the core of the method, which is beyond the scope of this work.

2.1 A coordinate-free ICS problem

In order to generalise invariant coordinate selection (Tyler et al. 2009, def. 1) to a coordinate-free framework in a Euclidean space E, we need to eliminate any reference to a coordinate system, which means replacing coordinate vectors by abstract vectors, and matrices by linear mappings, bases or quadratic forms, depending on the context. This coordinate emancipation procedure will ensure that our definition of ICS for an E-valued random object X does not depend on any particular choice of basis of E to represent X. Following this methodology, we are able to immediately generalise the definition of (affine equivariant) scatter operators from random vectors in E = R^p (as defined in Tyler et al. 2009, eq. 3) to random objects in a Hilbert space E. This is a perfect example of how the coordinate-free framework can be used to extend existing work to infinite-dimensional spaces. For further details, see Definition 3 in the Appendix. A notable difference from Tyler et al. (2009) is that we work directly with random objects instead of their underlying distributions.
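The affine equivariance property that defines a scatter operator can be checked numerically in the multivariate case E = R^p; the short sketch below (ours, not the paper's) verifies S[AX + b] = A S[X] A⊤ for the empirical covariance and an arbitrary invertible matrix A:

```python
import numpy as np

# Affine equivariance of a scatter operator, illustrated on the empirical
# covariance in R^p: S[A X + b] = A S[X] A' for any invertible A and shift b.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 3))
A = rng.standard_normal((3, 3))      # invertible with probability one
b = rng.standard_normal(3)

S_X = np.cov(X, rowvar=False)
S_AXb = np.cov(X @ A.T + b, rowvar=False)
assert np.allclose(S_AXb, A @ S_X @ A.T)
```

The identity holds exactly (up to floating point) for the sample covariance, since centring removes b and the quadratic form transports A through.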
In particular, we introduce an affine invariant space E of random objects on which the scatter operators are defined and to which we assume that X belongs. For example, E = L^p(Ω, E) corresponds to assuming the existence of the p first moments of ‖X‖. Again, emancipating from coordinates allows us to naturally generalise ICS to complex random objects in a Euclidean space.

Definition 1 (Coordinate-free ICS). Let (E, ⟨·,·⟩) be a Euclidean space of dimension p, E ⊆ L¹(Ω, E) an affine invariant set of integrable E-valued random objects, S1 and S2 two scatter operators on E, and X ∈ E. The invariant coordinate selection problem ICS(X, S1, S2) is to find a basis H = (h1, ..., hp) of E and a finite non-increasing real sequence Λ = (λ1 ≥ ... ≥ λp) such that

    ICS(X, S1, S2):  ⟨S1[X] hj, hj′⟩ = δjj′  and  ⟨S2[X] hj, hj′⟩ = δjj′ λj,  for all 1 ≤ j, j′ ≤ p,   (1)

where δjj′ equals 1 if j = j′ and 0 otherwise. Such a basis H is called an ICS(X, S1, S2) eigenbasis, whose elements are ICS(X, S1, S2) eigenobjects. Such a Λ is called an ICS(X, S1, S2) spectrum, whose elements are called ICS(X, S1, S2) eigenvalues or generalised kurtosis. Given an ICS(X, S1, S2) eigenbasis H and 1 ≤ j ≤ p, the real number

    zj = ⟨X − EX, hj⟩   (2)

is called the j-th invariant coordinate (in the eigenbasis H).

In Definition 1, our coordinate emancipation procedure does not yield a generalisation to infinite-dimensional Hilbert spaces, where a basis H would not be properly defined as it is not necessarily orthonormal.

Remark (Multivariate case). If E = R^p, we identify S1 and S2 with their associated (p×p)-matrices in the canonical basis, and we identify an ICS eigenbasis H with the (p×p)-matrix of its vectors stacked column-wise, so that we retrieve
the classical formulation of invariant coordinate selection by Tyler et al. (2009).

In the ICS problem (Equation 1), the scatter operators S1 and S2 do not play symmetrical roles. This is because the usual method of solving ICS(X, S1, S2) is to use the inner product associated with S1[X], which requires S1[X] to be injective. In that case, Proposition 2 in the Appendix proves the existence of solutions to the ICS problem. Another way to understand the coordinate-free nature of this ICS problem is to work with data isometrically represented in two spaces and to understand how we can relate a given ICS problem in the first space to a corresponding ICS problem in the second. This is the object of the following proposition, which will be used in Section 2.3.

Proposition 1. Let φ : (E, ⟨·,·⟩E) → (F, ⟨·,·⟩F) be an isometry between two Euclidean spaces of dimension p, E ⊆ L¹(Ω, E) an affine invariant set of integrable E-valued random objects, and S1^E and S2^E two affine equivariant scatter operators on E. Then:

(a) F = φ(E) = {φ(X^E), X^E ∈ E} is an affine invariant set of integrable F-valued random objects, and we denote X^F = φ(X^E) ∈ F whenever X^E ∈ E;
(b) Sℓ^F : X^F ∈ F ↦ φ ∘ Sℓ^E[X^E] ∘ φ⁻¹, ℓ ∈ {1, 2}, are two affine equivariant scatter operators on F;
(c) H^F = φ(H^E) = (φ(h1^E), ..., φ(hp^E)) is a basis of F whenever H^E = (h1^E, ..., hp^E) is a basis of E.

For any E-valued random object X^E ∈ E, any basis H^E = (h1^E, ..., hp^E) of E, and any finite non-increasing real sequence Λ = (λ1 ≥ ... ≥ λp), the following assertions are equivalent:

(i) (H^E, Λ) solves ICS(X^E, S1^E, S2^E) in the space E;
(ii) (H^F, Λ) solves ICS(X^F, S1^F, S2^F) in the space F.

2.2 The case of weighted covariance operators

A difficulty in ICS is to find interesting scatter operators that capture the non-ellipticity of the random object. Usually, for multivariate data, we use the pair of scatter matrices (Cov, Cov4). In this section, we define an important family of scatter operators, namely the weighted covariance operators, which contains both Cov and Cov4.
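A sample version of these weighted covariance operators can be sketched as follows (our illustration for E = R^p with divisor-n conventions; the source only gives the population formulas). With w = 1 it recovers the empirical covariance, and with w(x) = (p+2)^{-1/2} x it recovers the empirical Cov4:

```python
import numpy as np

def cov_w(X, w):
    """Empirical w-weighted covariance operator on R^p (cf. Definition 2):
    Cov_w[X] = E[ w^2(||Cov[X]^{-1/2}(X - EX)||) (X - EX)(X - EX)' ]."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                                   # empirical Cov
    d = np.sqrt(np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc))
    return (Xc * w(d)[:, None] ** 2).T @ Xc / n

rng = np.random.default_rng(2)
X = rng.standard_normal((1000, 4))
p = X.shape[1]
Xc = X - X.mean(axis=0)

# Example 1: w = 1 recovers the usual covariance operator.
assert np.allclose(cov_w(X, np.ones_like), Xc.T @ Xc / len(X))
# Example 2: w(x) = (p + 2)^{-1/2} x recovers the fourth-order operator Cov4.
cov4 = cov_w(X, lambda x: x / np.sqrt(p + 2))
assert cov4.shape == (4, 4) and np.allclose(cov4, cov4.T)
```

Passing the weight function w as an argument mirrors the fact that the whole family is parametrised by a single measurable function.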
They are explicitly defined by coordinate-free formulas which allow us to relate ICS problems using weighted covariance operators between any two Euclidean spaces. We denote by GL(E) the group of linear automorphisms of E and by A^{1/2} the unique non-negative square root of a linear mapping A.

Definition 2 (Weighted covariance operators). For any measurable function w : R+ → R, let

    Ew = { X ∈ L²(Ω, E) | Cov[X] ∈ GL(E) and w(‖Cov[X]^{-1/2}(X − EX)‖) ‖X − EX‖ ∈ L²(Ω, R) }.

Note that Ew is an affine invariant set of integrable E-valued random objects. For X ∈ Ew, we define the w-weighted covariance operator Covw[X] by

    ⟨Covw[X] x, y⟩ = E[ w²(‖Cov[X]^{-1/2}(X − EX)‖) ⟨X − EX, x⟩ ⟨X − EX, y⟩ ]  for all (x, y) ∈ E².   (3)

When necessary, we will also write Covw^E for the w-weighted covariance operator on E to avoid any ambiguity. It is easy to check that weighted covariance operators are affine equivariant scatter operators in the sense of Definition 3.

Example 1. If w = 1, we retrieve Cov, the usual covariance operator on L²(Ω, E).

Example 2. If, for x ∈ R+, w(x) = (p + 2)^{-1/2} x, we obtain the fourth-order moment operator Cov4 (as in Nordhausen and Ruiz-Gazen 2022, for the case E = R^p) on Ew = { X ∈ L⁴(Ω, E) | Cov[X] ∈ GL(E) }.

The following corollary applies Proposition 1 to the pair of wℓ-weighted covariance operators Sℓ^E = Covwℓ, ℓ ∈ {1, 2}, for which the corresponding Sℓ^F are exactly the wℓ-weighted covariance operators on F.

Corollary 1. Let φ : (E, ⟨·,·⟩E) → (F, ⟨·,·⟩F) be an isometry between two Euclidean spaces of dimension p and w1, w2 : R+ → R two measurable functions. For any integrable E-valued
random object X ∈ Ew1 ∩ Ew2 (with the notations from Definition 2), the equality

    Covwℓ^F[φ(X)] = φ ∘ Covwℓ^E[X] ∘ φ⁻¹   (4)

holds for ℓ ∈ {1, 2}, as well as the equivalence between the following assertions, for any basis H = (h1, ..., hp) of E and any finite non-increasing real sequence Λ = (λ1 ≥ ... ≥ λp):

(i) (H, Λ) solves ICS(X, Covw1^E, Covw2^E) in the space E;
(ii) (φ(H), Λ) solves ICS(φ(X), Covw1^F, Covw2^F) in the space F.

2.3 Implementation

In order to implement coordinate-free ICS in any Euclidean space E, we restrict our attention to the pair (Covw1, Covw2) of weighted covariance operators defined in Section 2.2. Note that we could also transport other known scatter matrices, such as the Minimum Covariance Determinant (defined in Rousseeuw 1985), back to the space E using Proposition 1, but this approach would no longer be coordinate-free. We now choose a basis B = (b1, ..., bp) of E in order to represent each element x of E by its coordinate vector [x]B = ([x]b1, ..., [x]bp)⊤ ∈ R^p. Then, the following corollary of Proposition 1 allows one to relate the coordinate-free approach in E to three different multivariate approaches applied to the coordinate vectors in any basis B of E, where the Gram matrix GB = (⟨bj, bj′⟩)1≤j,j′≤p appears, accounting for the non-orthonormality of B. Notice that, since the ICS problem has been defined in Section 2.1 without any reference to a particular basis, it is obvious that the basis B has no influence on ICS.

Corollary 2. Let (E, ⟨·,·⟩) be a Euclidean space of dimension p and w1, w2 : R+ → R two measurable functions. Let B be any basis of E, GB = (⟨bj, bj′⟩)1≤j,j′≤p its Gram matrix and [·]B the linear map giving the coordinates in B.
For any X ∈ Ew1 ∩ Ew2 (with the notations from Definition 2), any basis H = (h1, ..., hp) of E, and any finite non-increasing real sequence Λ = (λ1 ≥ ... ≥ λp), the following assertions are equivalent:

(1) (H, Λ) solves ICS(X, Covw1^E, Covw2^E) in the space E;
(2) (GB^{1/2}[H]B, Λ) solves ICS(GB^{1/2}[X]B, Covw1, Covw2) in the space R^p;
(3) ([H]B, Λ) solves ICS(GB[X]B, Covw1, Covw2) in the space R^p;
(4) (GB[H]B, Λ) solves ICS([X]B, Covw1, Covw2) in the space R^p;

where [H]B denotes the non-singular p×p matrix representing the basis ([h1]B, ..., [hp]B) of R^p. In practice, we prefer Assertion (3) (transforming the data by the Gram matrix of the basis) because it is the only one that does not require inverting the Gram matrix in order to recover the eigenobjects. Then, the problem is reduced to multivariate ICS, already implemented in the R package ICS using the QR decomposition (Archimbaud, Drmač, et al. 2023). This QR approach enhances stability compared to methods based on a joint diagonalisation of two scatter matrices, which can be numerically unstable in some ill-conditioned situations. After we obtain the ICS eigenelements, we can use them to reconstruct the original random object, in order to interpret the contribution of each invariant component. Proposition 3 in the Appendix generalises the multivariate reconstruction formula to complex data. In order to implement this reconstruction, we need the coordinates of the elements of the dual ICS eigenbasis. Identifying the basis [H]B with the matrix whose columns are its vectors, the dual basis [H*]B is the matrix [H*]B = ([H]B⊤ GB)⁻¹.

Remark (Empirical ICS and estimation). In order to work with samples of complex random objects, we can study the particular case of a finite E-valued random object X where we have a fixed sample Dn = (x1, ..., xn) and we assume that X follows the empirical probability distribution PDn of (x1, ..., xn). In that case, the
expressions (in Definition 2, for instance) of the form E f(X) for any function f are discrete and equal to (1/n) Σᵢ₌₁ⁿ f(xᵢ). Now, let us assume that we observe an i.i.d. sample Dn = (X1, ..., Xn) following the distribution of an unknown E-valued random object X0. We can estimate solutions of the problem ICS(X0, S1, S2) from Definition 1 by working conditionally on the data (X1, ..., Xn) and taking the particular case where X follows the empirical probability distribution PDn. This defines estimates of the ICS(X0, S1, S2) eigenobjects as solutions of an ICS problem involving empirical scatter operators. Since the population version of ICS for a complex random object X ∈ E is more concise than its sample counterpart for Dn = (X1, ..., Xn), we shall use the notations of the former in the next sections.

2.4 ICS for compositional data

The specific case of coordinate-free ICS for compositional data is equivalent to the approach of Ruiz-Gazen et al. (2023). To see this, let us consider the simplex E = (S^{p+1}, ⊕, ⊙, ⟨·,·⟩S^{p+1}) of dimension p in R^{p+1} with the Aitchison structure (Pawlowsky-Glahn, Juan José Egozcue, and Tolosana-Delgado 2015). The results from Sections 5.1 (resp. 5.2) in Ruiz-Gazen et al. (2023) can be recovered by applying Corollary 1 to any isometric log-ratio transformation (see Pawlowsky-Glahn, Juan José Egozcue, and Tolosana-Delgado 2015, for a definition) (resp. the centred log-ratio transformation). Corollary 2 gives a new characterisation of the problem ICS(X, Covw1, Covw2) using additive log-ratio transformations. For a given index 1 ≤ j ≤ p, let Bj = (b1, ..., bp) denote the basis of S^{p+1} corresponding to the alrj transformation, i.e. obtained by taking the canonical basis of R^{p+1}, removing the j-th vector and applying the exponential. In that case, it is easy to compute the p×p Gram matrix of Bj:

    GBj = Ip − (1/(p+1)) 1p 1p⊤,

the matrix with diagonal entries 1 − 1/(p+1) and off-diagonal entries −1/(p+1).
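This Gram matrix can be verified numerically. The sketch below (ours) builds the alr basis of the simplex by exponentiating canonical basis vectors, evaluates the Aitchison inner products through the clr transform, and compares the result with Ip − 1p1p⊤/(p+1):

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform of a composition x (strictly positive parts).
    Note clr is invariant to rescaling, so no closure step is needed here."""
    lx = np.log(x)
    return lx - lx.mean()

p = 4                                   # the simplex S^{p+1} has dimension p
# alr_j basis of S^{p+1} with j = p+1: exponentiate the canonical basis of
# R^{p+1} with the last vector removed.
B = [np.exp(np.eye(p + 1)[i]) for i in range(p)]
# Aitchison inner product <x, y> = clr(x) . clr(y); assemble the Gram matrix.
G = np.array([[clr(bi) @ clr(bk) for bk in B] for bi in B])
assert np.allclose(G, np.eye(p) - np.ones((p, p)) / (p + 1))
```

Each entry works out to δ_{ik} − 1/(p+1), matching the closed form above.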
Then, we get the equivalence between the following two ICS problems:

1. (H, Λ) solves ICS(X, Covw1, Covw2) in the space S^{p+1};
2. (alrj(H), Λ) solves ICS(clr(X)^(j), Covw1, Covw2) in the space R^p;

where clr(x)^(j) = GBj alrj(x) is the centred log-ratio transform of x ∈ S^{p+1} from which the j-th coordinate has been removed. This suggests a new and faster implementation of invariant coordinate selection for compositional data, in an unconstrained space and only requiring the choice of an index j instead of a full contrast matrix.

2.5 ICS for functional data

The difficulty of functional data (in the broader sense, encompassing density data) is twofold: first, functions are usually analysed within the infinite-dimensional Hilbert space L²(a, b); second, a random function is almost never observed for every argument, but rather on a discrete grid. This grid can be regular or irregular, deterministic or random, dense (the grid spacing goes to zero) or sparse. We describe a general framework for adapting coordinate-free ICS to functional data, solving both difficulties at the same time by smoothing the observed values into a random function u that belongs to a Euclidean subspace E of L²(a, b).

2.5.1 Choosing an approximating Euclidean subspace

We usually choose polynomial spaces, spline spaces with given knots and order, or spaces spanned by a truncated Hilbert basis of L²(a, b). In practice, this choice also depends on the preprocessing method that we have in mind to smooth discrete observations into functions.

2.5.2 Preprocessing the observations into the approximating space

Considering a dense, deterministic
grid (t1, ..., tN), we need to reconstruct an E-valued random function u from its noisy observed values (u(t1) + ε1, ..., u(tN) + εN). There are many well-documented approximation techniques to carry out this preprocessing step, such as interpolation, spline smoothing, or Fourier methods (for a detailed presentation, see Eubank 2014).

2.5.3 Solving ICS in the approximating space

Once we have obtained an E-valued random function u, we can apply the method described in Section 2.3 to reduce ICS(u, Covw1, Covw2) to a multivariate problem on the coordinates in a basis of E. In particular, for an orthonormal basis B of E (such as a Fourier basis or a Hermite polynomial basis), Corollary 2 gives the equivalence between the following two assertions:

1. (H, Λ) solves ICS(u, Covw1, Covw2) in the space E;
2. ([H]B, Λ) solves ICS([u]B, Covw1, Covw2) in the space R^p.

If E is a finite-dimensional spline space, we usually work with the coordinates of u in a B-spline basis of E, but then we should take into account its Gram matrix, as in Corollary 2. ICS has previously been defined for multivariate functional data by Archimbaud, Boulfani, et al. (2022), who define a pointwise method and a global method. Unlike the pointwise approach, which is specific to multivariate functional data, the global method can also be applied to univariate functional data in L²(a, b), as it corresponds to applying multivariate ICS to truncated coordinate vectors in a Hilbert basis of L²(a, b). The above framework retrieves the global method of Archimbaud, Boulfani, et al. (2022) as a particular case when taking a Hilbert basis B of L²(a, b) and solving coordinate-free ICS in the space E spanned by the p first elements of B.

2.6 ICS for distributional data

A first option to adapt ICS to density data would be to consider it as constrained functional data and directly follow the approach of Section 2.5.
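The functional preprocessing step can be sketched as follows (our illustration, assuming a truncated Fourier subspace E of L²(0, 1) and a dense regular grid): noisy curve values are smoothed by least squares onto the orthonormal basis, after which the Gram matrix is the identity and multivariate ICS applies directly to the coefficient vectors.

```python
import numpy as np

# Smooth discretely observed curves into a truncated Fourier subspace E of
# L^2(0, 1). With an orthonormal basis, the Gram matrix is the identity and
# coordinate-free ICS reduces to multivariate ICS on the coefficient vectors.
def fourier_basis(t, p):
    cols = [np.ones_like(t)]
    k = 1
    while len(cols) < p:
        cols.append(np.sqrt(2) * np.cos(2 * np.pi * k * t))
        cols.append(np.sqrt(2) * np.sin(2 * np.pi * k * t))
        k += 1
    return np.column_stack(cols[:p])

N, p = 200, 7
t = (np.arange(N) + 0.5) / N                 # dense midpoint grid on (0, 1)
Phi = fourier_basis(t, p)                    # N x p design matrix

rng = np.random.default_rng(3)
true_coef = rng.standard_normal((50, p))     # 50 random functions in E
curves = true_coef @ Phi.T + 0.05 * rng.standard_normal((50, N))   # noisy values

coef = np.linalg.lstsq(Phi, curves.T, rcond=None)[0].T   # least-squares smoothing
gram = Phi.T @ Phi / N                       # numerical L^2(0,1) Gram matrix

assert np.allclose(gram, np.eye(p), atol=1e-10)          # basis is orthonormal
assert np.allclose(coef, true_coef, atol=0.05)           # coefficients recovered
```

The midpoint grid makes the discrete inner product exact for low-frequency Fourier products, which is why the numerical Gram matrix is the identity to machine precision.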
However, distributional data does not reduce to density data (such as the absorbance spectra studied in Ferraty and Vieu 2002), as it can also be histogram data or sample data (such as the dataset of temperature samples analysed in Section 4). Moreover, the framework of Bayes Hilbert spaces, described by Van Den Boogaart, Juan José Egozcue, and Pawlowsky-Glahn (2014) and recalled in the Appendix, is specifically adapted to the study of distributional data. Taking into account the infinite-dimensional nature of distributional data, we follow a framework similar to the one of Section 2.5, restricting our attention to finite-dimensional subspaces E of the Bayes space B²(a, b) with the Lebesgue measure as reference.

2.6.1 Choosing an approximating Euclidean space

Following smoothing spline methods, adapted to Bayes spaces by Machalová, Hron, and Monti (2016) and recalled in the Appendix, we choose to work in the space E = C_d^Δγ(a, b) of compositional splines on (a, b) of order d + 1 with knots Δγ = (γ1, ..., γk). Note that the centred log-ratio transform clr is an isometry between E and the space F = Z_d^Δγ(a, b) of zero-integral splines on (a, b) of order d + 1 (degree less than or equal to d) and with knots Δγ = (γ1, ..., γk). They both have dimension p = k + d.

2.6.2 Preprocessing the observations into the approximating space

We consider the special cases of histogram data and of sample data. In the former, we follow Machalová, Talská, et al. (2021)
to smooth each histogram into a compositional spline in E. In the latter, we assume that a random density is observed through a finite random sample (X1, ..., XN) drawn from it. The preprocessing step consists in estimating the density from the observed sample. To perform the estimation, we need a nonparametric estimation procedure that yields a compositional spline belonging to E. That is why we opt for maximum penalised likelihood (MPL) density estimation, introduced by Silverman (1982). The principle of MPL is to maximise a penalised version of the log-likelihood over an infinite-dimensional space of densities without parametric assumptions. The penalty is the product of a smoothing parameter λ and the integral, over the interval of interest, of the square of the m-th derivative of the log-density; the objective functional is therefore a functional of the log-density. Due to the infinite dimension of the ambient space, the likelihood term alone is unbounded above, hence the penalty term is necessary. In our case of densities on an interval (a, b), we select the value m = 3 so that (according to Silverman 1982, Theorem 2.1), when the smoothing parameter tends to infinity, the estimated density converges to the parametric maximum likelihood estimate in the exponential family of densities whose logarithm is a polynomial of degree less than or equal to 2. This family comprises the uniform density, and exponential and Gaussian densities truncated to (a, b). In order to use MPL in B²(a, b), we need to add extra smoothness conditions, and therefore we restrict attention to the densities of B²(a, b) whose log belongs to the Sobolev space of order m on (a, b), thus ensuring the existence of the penalty term. Note that compositional splines verify these conditions. By Theorem 4.1 in Silverman (1982), the optimisation problem has at least one solution.
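The clr isometry between B²(a, b) and zero-integral functions, used throughout this section, can be sketched on a grid (our discretised illustration, not the paper's spline implementation): clr(f) = log f − (1/(b−a)) ∫ log f integrates to zero, and exponentiating followed by renormalisation recovers the density.

```python
import numpy as np

# Centred log-ratio of a density f on (a, b) in the Bayes space B^2(a, b):
# clr(f)(t) = log f(t) - (1/(b-a)) * int_a^b log f(s) ds.
a, b = 0.0, 1.0
n_grid = 400
t = (np.arange(n_grid) + 0.5) / n_grid * (b - a) + a   # midpoint grid
dt = (b - a) / n_grid

f = np.exp(-0.5 * ((t - 0.3) / 0.1) ** 2)              # unnormalised Gaussian shape
f /= f.sum() * dt                                      # normalise to a density

clr_f = np.log(f) - np.log(f).sum() * dt / (b - a)
assert abs(clr_f.sum() * dt) < 1e-10                   # zero-integral constraint

f_back = np.exp(clr_f)
f_back /= f_back.sum() * dt                            # clr^{-1}: exponentiate, close
assert np.allclose(f_back, f, atol=1e-10)
```

This grid-level round trip is the discrete analogue of the isometry between C_d^Δγ(a, b) and Z_d^Δγ(a, b) used below.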
Since the estimate f of the density of (X1,...,XN) needs to belong to the chosen finite-dimensional subspace E = C_d^∆γ(a,b), we restrict MPL to E, using the R function fda::density.fd designed by Ramsay, Hooker, and Graves (2024). This function returns the coordinates of log(f) in the B-spline basis with knots ∆γ and order d+1, which we project onto Z_d^∆γ(a,b) and to which we apply clr^{-1}, so that we obtain an element of C_d^∆γ(a,b).

2.6.3 Solving ICS in the approximating space

We have now obtained an E-valued random compositional spline f. In order to work with two weighted covariance operators Cov_w1 and Cov_w2, where w1, w2 : R+ → R are two measurable functions, we assume that f ∈ E_w1 ∩ E_w2, using the notations of Definition 2. Now, we refer to Section 2.3 to reduce the problem ICS(f, Cov_w1, Cov_w2) to a multivariate ICS problem on the coordinates of f in the CB-spline basis of C_d^∆γ(a,b) (defined in Machalová, Talská, et al. 2021), transformed by the Gram matrix of said CB-spline basis. Note that Corollary 1, applied to the centred log-ratio isometry between C_d^∆γ(a,b) and Z_d^∆γ(a,b), gives the equivalence between:

1. (H,Λ) solves ICS(f, Cov_w1, Cov_w2) in the space E = C_d^∆γ(a,b);
2. (clr(H),Λ) solves ICS(clr(f), Cov_w1, Cov_w2) in the space F = Z_d^∆γ(a,b).

It is then completely equivalent, and useful for implementation, to work with the coordinates of clr(f) in the ZB-spline basis of Z_d^∆γ(a,b).

A PREPRINT - MAY 27, 2025

3 Outlier detection for complex data using ICS

3.1 Implementation of ICS on complex
data for outlier detection

We propose using ICS to detect outliers in complex data, specifically in scenarios with a small proportion of outliers (typically 1 to 2%). For this, we follow the three-step procedure defined by Archimbaud, Nordhausen, and Ruiz-Gazen (2018), modifying the first step based on the implementation of coordinate-free ICS in Section 2.3.

3.1.1 Computing the invariant coordinates

For the scatter operators, we follow the recommendation of Archimbaud, Nordhausen, and Ruiz-Gazen (2018), who compare several pairs of more or less robust scatter estimators in the context of a small proportion of outliers and conclude that (Cov, Cov4) is the best choice. Thus, we use the empirical scatter pair (Cov, Cov4) (see Example 1 and Example 2), and compute the eigenvalues λ1 ≥ ... ≥ λp and the invariant coordinates z_ji, 1 ≤ j ≤ p, for each observation X_i, 1 ≤ i ≤ n. As outlined in Section 2.3, for a given sample of random complex objects D_n = {X1,...,Xn} in a Euclidean space E, solving the empirical version of ICS is equivalent to solving an ICS problem in a multivariate framework (see Tyler et al. 2009) with the coordinates of the objects in a basis B of E. In order to choose a basis, we follow the specific recommendations for each type of data from Section 2.4 and Section 2.6.

3.1.2 Selecting the invariant components

The second step of the outlier detection procedure based on ICS is the selection of the κ < p relevant invariant components and the computation of the ICS distances. For each of the n observations, the ICS distance is equal to the Euclidean norm of the reconstructed data using the κ selected invariant components. In the case of a small proportion of outliers and for the scatter pair (Cov, Cov4), the invariant components of interest are associated with the largest eigenvalues, and the squared ICS distances are equal to $\sum_{j=1}^{\kappa} z_{ji}^2$.
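As an illustration of this first step, here is a minimal numpy/scipy sketch of multivariate ICS with the empirical pair (Cov, Cov4), using the standard definition of Cov4 (squared-Mahalanobis-distance weights with a 1/(p+2) normalisation). This is a simplified stand-in for illustration, not the ICSOutlier implementation used in the paper:

```python
import numpy as np
from scipy.linalg import eigh

def cov4(X):
    """Fourth-moment scatter matrix Cov4: squared-Mahalanobis-distance
    weights, normalised by p + 2 so that Cov4 equals Cov in the Gaussian case."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    d2 = np.einsum("ij,jk,ik->i", Xc, S_inv, Xc)   # squared Mahalanobis distances
    return (Xc * d2[:, None]).T @ Xc / (n * (p + 2))

def ics_cov_cov4(X):
    """Solve the generalised eigenproblem Cov4 h = lambda Cov h; return the
    eigenvalues in decreasing order and the invariant coordinates z_ji."""
    S1 = np.cov(X, rowvar=False, bias=True)
    lam, H = eigh(cov4(X), S1)            # symmetric-definite pair (Cov4, Cov)
    order = np.argsort(lam)[::-1]         # largest eigenvalues first
    lam, H = lam[order], H[:, order]
    Z = (X - X.mean(axis=0)) @ H          # invariant coordinates, one row per obs.
    return lam, Z

def squared_ics_distances(Z, kappa):
    """Squared ICS distances: sum of the first kappa squared coordinates."""
    return (Z[:, :kappa] ** 2).sum(axis=1)
```

For a small proportion of outliers, the components associated with the largest eigenvalues are the ones of interest, and `squared_ics_distances(Z, kappa)` reproduces the sum displayed above.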
As noted by Archimbaud, Nordhausen, and Ruiz-Gazen (2018), several methods exist for selecting the number of invariant components. One approach is to examine the scree plot, as in PCA. This method, recommended by Archimbaud, Nordhausen, and Ruiz-Gazen (2018), is not automatic. Alternative automatic selection methods apply univariate normality tests to each component, starting with the first one, with some Bonferroni correction (for further details, see page 13 of Archimbaud, Nordhausen, and Ruiz-Gazen 2018). In the present paper, we use the scree plot approach when there is no need for an automatic method, and the D'Agostino normality test for automatic selection. The level for the first test (before Bonferroni correction) is 5%. Dimension reduction involves retaining only the first κ components of ICS instead of the original p variables. Note that when all the invariant components are retained, the ICS distance is equal to the Mahalanobis distance.

3.1.3 Choosing a cut-off

The computation of ICS distances allows us to rank the observations in decreasing order, those with the largest distances being potential outliers. However, in order to identify the outlying densities, we need to define a cut-off, and this constitutes the last step of the procedure. Following Archimbaud, Nordhausen, and Ruiz-Gazen (2018), we derive cut-offs based on Monte Carlo simulations from the standard
Gaussian distribution. For a given sample size and number of variables, we generate 10,000 standard Gaussian samples and compute the empirical quantile of order 97.5% of the ICS distances obtained via the three steps described above. An observation with an ICS distance larger than this quantile is flagged as an outlier.

The procedure described above has been illustrated in several examples (see Archimbaud, Nordhausen, and Ruiz-Gazen 2018) and is implemented in the R package ICSOutlier (see Nordhausen, Archimbaud, and Ruiz-Gazen 2023). However, in the context of densities, the impact of the preprocessing parameters (see Section 2.6) on the ICSOutlier procedure emerges as a crucial question that needs to be examined.

3.2 Influence of the preprocessing parameters for the density data application

As a toy example, consider the densities of the maximum daily temperatures for the 26 provinces of the two regions Red River Delta and Northern Midlands and Mountains in Northern Vietnam between 2013 and 2016. We augment this data set of 104 densities by adding the provinces AN GIANG and BAC LIEU from Southern Vietnam over the same time period. The total number of observations is thus 112. Details on the original data and their source are provided in Section 4.1.

Figure 1: Map of Vietnam showing the 63 provinces, with the three regions under study colour-coded. The 28 provinces included in the toy example are labelled.

Figure 2: Plots of the 112 densities (left panel) and clr densities (right panel), colour-coded by region for the toy example.

Figure 1 displays a map of Vietnam with the contours of all provinces, coloured according to their administrative region, allowing the reader to locate the 26 provinces in the North and the two in the South. As shown in the left panel of Figure 2, the eight densities of the two southern provinces for the four years exhibit a very different shape (in red) compared to the northern provinces (in blue and green), with much more concentrated maximum temperatures. These two provinces should be detected as outliers when applying the ICSOutlier methodology. However, the results may vary depending on the choice of preprocessing parameters (see Section 2.6.2). Our goal is to analyse how the detected outliers vary depending on the preprocessing when using the maximum penalised likelihood method with splines of degree less than or equal to d = 4. Specifically, we study the influence on the results of ICSOutlier of the smoothing parameter λ, the number of inside knots k, and the location of the knots defining the spline basis. The number κ of selected invariant components is fixed at four in all experiments to facilitate interpretation. This value has been chosen after viewing the
scree plots of the ICS eigenvalues, following the recommendations in Section 3.1. For each of the experimental scenarios detailed below, we compute the squared ICS distances of the 112 observations as defined in Section 3.1, using κ = 4. Observations are classified as outliers when their squared ICS distance exceeds the threshold defined in Section 3.1, using a level of 2.5%. For each experiment, we plot on Figure 3 the indices of the observations from 1 to 112 on the y-axis, marking outlying observations with dark squares. The eight densities from Southern Vietnam are in red and correspond to indices 1 to 8. We consider the following scenarios:

• the knots are either located at the quantiles of the temperature values (top panel of Figure 3) or equally spaced (bottom panel of Figure 3);
• from left to right in Figure 3, the number of knots varies from 0 to 14 in increments of 2, and then takes the values 25 and 35 (overall, 10 different values); note that when increasing the number of knots beyond 35, the code returns more and more errors due to multicollinearity issues, so those results are not reported;
• the base-10 logarithm of the parameter λ varies from -8 to 8 with an increment of 1 on the x-axis of each plot.

Altogether we have 2 × 10 × 17 = 340 scenarios. Figure 4 is a bar plot showing the observation indices on the x-axis and the frequency of outlier detection across scenarios on the y-axis, colour-coded by region. The eight densities from the two southern provinces (AN GIANG and BAC LIEU) across the four years are most frequently detected as outliers, along with the province of LAI CHAU (indices 33 to 36), which is located in a mountainous region in the northwest of Vietnam. On the original data, we can see that the LAI CHAU province corresponds to densities with low values for high maximum temperatures (above 35 °C) coupled with relatively high density values for maximum temperatures below 35 °C.
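As a sanity check on the count, the scenario grid above enumerates as follows (a trivial sketch; the labels are only mnemonic):

```python
from itertools import product

knot_placements = ["quantiles", "equally spaced"]   # 2 options
knot_numbers = [0, 2, 4, 6, 8, 10, 12, 14, 25, 35]  # 10 values
log10_lambdas = list(range(-8, 9))                  # 17 values: -8, ..., 8

# Cartesian product of the three design factors
scenarios = list(product(knot_placements, knot_numbers, log10_lambdas))
print(len(scenarios))  # 2 x 10 x 17 = 340
```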
A few other observations are detected several times as outliers, but less frequently: indices 53 (TUYEN QUANG in 2013), 96 (QUANG NINH in 2016), and 107 (THANH HOA in 2015). Looking at Figure 3, we examine the impact of the preprocessing parameters on the detection of outlying observations. First, note that the ICS algorithm returns an error when the λ parameter is large (shown as white bands in some plots). This is due to a multicollinearity problem: even though the QR version of the ICS algorithm is quite stable, it may still encounter problems when multicollinearity is severe. Indeed, when λ is large, the estimated densities converge to densities whose logarithm is a polynomial of degree less than or equal to 2 (see details in Section 2.6.2), which belong to a 3-dimensional affine subspace of the Bayes space, potentially of dimension smaller than that of the approximating spline space. If we compare the top and the bottom plots, we do not observe large differences in the outlying pattern, except for a few observations rarely detected as outliers. Thus, the knot location has
a rather small impact on the ICS results for this data set. Regarding the impact of the λ parameter, the outlier pattern remains relatively stable when the number of knots is small (less than or equal to 6), especially when looking at the densities from the south of Vietnam in red. For a large number of knots, the observations detected as outliers vary with λ. The number of knots has more impact than their location or the λ parameter. When the number of knots is smaller than or equal to 6 (corresponding to p = 10 variables), the plots are very similar. However, as p increases, some observations from Southern Vietnam are no longer detected for all λ values, while another density (QUANG NINH in 2016) is detected for large λ values with equally spaced knots, and to a lesser extent with knots at temperature quantiles. In Archimbaud, Boulfani, et al. (2022), ICS is applied to multivariate functional data with B-spline preprocessing. Based on their empirical experience, the authors recommend using a dimension p (in their case, the number of functional components times the number of B-spline coefficients) no larger than the number of observations divided by 10. Typically in multivariate analysis, the rule of thumb is that the dimension should not exceed the number of observations divided by 5; for functional or distributional data, it appears that even more observations per variable are needed. The reason for this is not entirely clear, but in the case of ICS, we suspect that the presence of multicollinearity, even approximate, degrades the results. By increasing the number of knots, we precisely increase the multicollinearity problem, especially for large values of λ.
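The multicollinearity effect can be illustrated with a toy computation on a hypothetical coordinate matrix (synthetic data, not the Vietnamese sample): when extra coordinates are nearly linear combinations of existing ones, as happens with an over-rich spline basis, the condition number of the empirical covariance explodes, which destabilises the inversion step of ICS.

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=(100, 3))            # 3 genuinely informative coordinates
# 7 extra columns that are near-exact linear combinations of the first 3,
# mimicking approximate multicollinearity in an over-rich basis
extra = base @ rng.normal(size=(3, 7)) + 1e-6 * rng.normal(size=(100, 7))
X = np.hstack([base, extra])

cond = np.linalg.cond(np.cov(X, rowvar=False))
print(f"condition number of Cov: {cond:.2e}")   # astronomically large
```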
Figure 3: Outlier detection by ICS across smoothing parameters for the Vietnam toy example. Top: knots at quantiles; bottom: equally spaced knots. y-axis: observation indices; x-axis: λ parameter. Columns correspond to knot numbers (0-35). Outliers are dark and colour-coded by region.

Figure 4: Frequency of outlier detection by ICS across 340 scenarios with varying smoothing parameters, for each observation in the Vietnam toy example.

From this experimentation, we recommend using knots located at the quantiles of the measured variable, and a number of knots such that the number of observations is around 10 times the dimension p (here: the dimension of the B-spline basis). The base-10 logarithm of the parameter λ can be chosen between -2 and 2 to avoid extreme cases and multicollinearity problems. Moreover, the idea of running ICS multiple times with different preprocessing parameter values, confirming an observation's atypical nature by its repeated detection, is a strategy we retain for real applications, as detailed in Section 4.3.

3.3 Comparison with other methods

We now compare ICS for functional data (presented in Section 2.5) to eight outlier detection methods
already existing in the literature, such as median-based approaches (Murph, Strait, et al. 2024), the modified band depth method (Sun and Genton 2011) and MUOD indices (Ojo, Fernández Anta, et al. 2022). Our simulation uses three density-generating processes with 2% of outliers. The scheme named GP_clr, based on model 4 of the fdaoutlier package (Ojo, Fernández Anta, et al. 2022, Section 4.1), first simulates a discretised random function in L2(0,1) from a mixture of two Gaussian processes with different means, and applies the inverse clr transformation to obtain a random density in the Bayes space B2(0,1). The scheme named GP_margin first simulates a discretised random function in L2(0,1) using model 5 of the fdaoutlier package, which consists of a mixture of two Gaussian processes with different covariance operators. Then, the random density is obtained as a kind of marginal distribution of the discrete values of the random function, where the x-axis is discarded: these values are considered as a random sample and smoothed using MPL (see Section 2.6), with parameter λ = 1, 10 basis functions, and knots (as well as interval bounds) at quantiles of the full sample. This scheme is similar to the data-generating process of the Vietnamese climate dataset. Finally, the Gumbel scheme first draws parameters from a mixture of two Gaussian distributions in R2 and computes the Gumbel density functions corresponding to these parameters (it generates shift outliers as described in Murph, Strait, et al. 2024). Note that the output of all the schemes is a set of discretised densities on a regular grid of size p = 100 that covers an interval (a,b) (which is (0,1) for GP_clr and Gumbel, and the range of the full sample for GP_margin). In each sample, there are n = 200 densities.

For the outlier detection methods, we use the notation <Approach>_<Metric>, so that, for instance, ICS_B2 refers to ICS for density data in the Bayes space B2(a,b).
The steps of the ICS_B2 method are as follows. After applying the discrete clr transformation to each discretised density function, we approximate the underlying clr-transformed smooth density by a smoothing spline in L2_0(a,b) using the preprocessing described in Machalová, Hron, and Monti (2016). During this process, densities should not take values too close to 0, to avoid a diverging clr; we therefore replace by 10^{-8} all density values below this threshold. The parameters of the compositional spline spaces are chosen by the function fda.usc::fdata2fd. Then, we solve ICS in the chosen compositional spline space, automatically selecting the components with tests as before. The ICS_L2 method first smooths each discretised density using splines in L2(a,b), treating the densities as ordinary functional data. In the second step, we apply ICS in the chosen spline space, selecting the components automatically through D'Agostino normality tests. The MBD (López-Pintado and Romo 2009) and MUOD (Azcorra et al. 2018) approaches are implemented using the package fdaoutlier (Ojo, Lillo, and Anta 2023), either directly (<Approach>_L2) or after transforming the densities into log quantile densities (<Approach>_LQD) or into quantile functions (<Approach>_QF). The median-based methods such as Median_LQD and Median_Wasserstein are described
in Murph, Strait, et al. (2024) and implemented in the DeBoinR package (Murph and Strait 2023), using the recommended default parameters. For each combination of a generating scheme and a method, we average the TPR (True Positive Rate, or sensitivity) and the FPR (False Positive Rate, one minus specificity) over N = 200 repetitions, for each value of PP (the number of predicted positives), which ranges from 0 to n. We also compute point-wise confidence bounds using the standard deviation of the TPR over the N repetitions and the standard Gaussian quantile of order 97.5%. The ROC curves together with their confidence bands are represented in Figure 5, separately for the three density-generating processes. Table 1 summarises the performance of the methods across the schemes by means of the average area under the curve (AUC). We can see that both ICS methods give quite similar results, except for the GP_clr generating process, where ICS_B2 outperforms ICS_L2. Together with MUOD_L2 and MUOD_QF, these methods are the best in terms of average AUC, although the ICS-based methods perform more consistently across the different generating schemes. The Median_LQD and MBD_LQD methods are worse than the others for all generating schemes. Overall, we can recommend ICS over the other outlier detection methods in this situation where the proportion of outliers is small.

Figure 5: ROC curves of 10 different outlier detection methods for density data with 3 generating schemes.

Table 1: AUC for the 10 outlier detection methods, averaged across the 3 generating schemes.
Approach   Metric        Average AUC
MUOD       L2            0.92
ICS        B2            0.92
MUOD       QF            0.91
ICS        L2            0.91
MBD        QF            0.90
Median     Wasserstein   0.90
MUOD       LQD           0.88
MBD        L2            0.86
Median     LQD           0.78
MBD        LQD           0.74

4 An application to Vietnamese climate data

4.1 Data description and preprocessing

Figure 6: The three climate regions of Northern Vietnam.

In this application, we study daily maximum temperatures for each of the I = 63 Vietnamese provinces over a T = 30-year period (1987-2016). Originally from the Climate Prediction Center (CPC) database, developed and maintained by the National Oceanic and Atmospheric Administration (NOAA), the data underwent a preliminary treatment presented in Trinh, Thomas-Agnan, and Simioni (2023). From the daily 365 or 366 values for each year, we derive the yearly maximum temperature distribution for each of the 1,890 province-year units. We assume that the temperature samples are independent across years and spatially across provinces, which is a simplifying assumption. Figure 1 depicts the six administrative regions of Vietnam and the corresponding provinces.
However, these regions cover areas with varied climates. To achieve more climatically homogeneous groupings, we use clusters of provinces based on the climatic regions defined by Stojanovic et al. (2020). Figure 6 displays the three climatic regions covering Northern Vietnam. We focus on region S3, composed of 13 provinces, by similarity with the North Plain (Red River Delta) region (S3) in Stojanovic et al. (2020).

Figure 7 shows the maximum temperature densities for the 13 provinces of S3, plotted by year, using the preprocessing detailed in Section 2.6.2 with degree less than or equal to d = 4, smoothing parameter λ = 10 and k = 10 knots located at quantiles of the pooled sample (across space and time). We observe more variability across time than across space, which confirms that the spatial homogeneity objective is achieved.

Figure 7: Maximum temperature densities for the 13 provinces in the S3 climate region of Northern Vietnam, 1987-2016, colour-coded by province.

4.2 Outlier detection using ICS for the S3 climate region of Vietnam

We follow the different steps described in Section 3.1 and examine the results of ICS outlier detection using the scatter pair (Cov, Cov4) on the 390 (13 provinces × 30 years) densities from region S3, obtained after the preprocessing detailed above. The scree plot on the left panel of Figure 8 clearly indicates that we should retain the first two invariant components.
The right panel of Figure 8 shows the squared ICS distances based on these first two components, with the observation indices on the x-axis and a threshold (horizontal line) corresponding to a significance level of 2.5%. This plot reveals that several observations are distinctly above this threshold, especially for the years 1987 and 2010. The left panel of Figure 9 displays the scatter plot of the first two components, labelled by year. The densities are coloured by province for the outliers and in grey for the other provinces. This plot reveals that the outliers are either densities from 2010 (and one density from 1998) that are outlying on the first component, or densities from 1987 and 2007 that are outlying on the second component. To interpret the outlyingness, we can use the dual eigendensities plotted in the right panel of Figure 9 together with Figure 10, which represents the densities and their centred log-ratio transformations, colour-coded by year for the outliers and in grey for the other observations. This is justified by the reconstruction formula of Proposition 3 in the Appendix. The horizontal line on the eigendensities plot (right panel of Figure 9) corresponds to the uniform density on the interval [5; 40]. Four provinces in 2010 are outlying, with large positive values on the first invariant component (see the left panel of Figure 9). The first eigendensity IC.1 is
characterised by a smaller mass of the temperature values on the interval [5; 20] compared to the uniform distribution, a mass similar to the uniform on [20; 35], and a much larger mass than the uniform on the interval [35; 40].

Figure 8: Scree plot of the ICS eigenvalues (left panel) and the ICS distances based on the first two components (right panel) for the maximum temperature densities for the 13 provinces in the S3 climate region of Northern Vietnam, 1987-2016.

Figure 9: Scatter plot of the first two invariant components (left panel), labelled by year and coloured by province, and the first two ICS dual eigendensities (right panel) of the maximum temperature densities for the 13 provinces in the S3 climate region of Northern Vietnam, 1987-2016.

These four observations correspond to the four blue curves on the left and right panels of Figure 10. Compared to the other densities, these four densities exhibit relatively lighter tails on the lower end of the temperature spectrum and heavier tails on the higher end. For temperature values in the medium range, these four observations fall in the middle of the cloud of densities and of clr-transformed densities. On the second invariant component, six observations take large values and are detected as outliers. They correspond to four provinces in 1987 and three in 2007 (see the left panel of Figure 9). The second eigendensity IC.2 differs greatly from the uniform distribution on the whole interval of temperature values: the left tail is much lighter while the right tail is much heavier. Besides the six observations flagged as outliers, other provinces in 1987 and 2007 take large values on
IC.2, and correspond to densities with very few days with maximum temperature below 15 degrees Celsius compared to other densities.

Figure 10: Maximum temperature densities (left panel) and their centred log-ratio transforms (right panel) for the 13 provinces in the S3 climate region of Northern Vietnam, 1987-2016; outlying densities are colour-coded by year.

4.3 Influence of the preprocessing parameters

Figure 11: Outlier detection by ICS across smoothing parameters for the Vietnam climate data. Top: 2 invariant components selected; bottom: automatic selection through D'Agostino tests. y-axis: year; x-axis: λ parameter. Columns correspond to knot numbers (5-25). Outliers are marked as light grey to black squares depending on their detection frequency.

Figure 12: Frequency of outlier detection by ICS across all 25 scenarios with varying smoothing parameters and all 13 provinces, for each year in the Vietnamese climate dataset.

As mentioned in Section 3.2, we can validate the atypical nature of observations by running the ICSOutlier procedure multiple times with varying smoothing parameter values. Following the rule of thumb of one dimension per 10 observations, with 390 observations we should consider fewer than 35 interior knots. In what follows, we take 5, 10, 15, 20 and 25 interior knots, and we consider base-10 logarithm values for λ equal to -2, -1, 0, 1 and 2.
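The D'Agostino-based automatic selection of invariant components (described in Section 3.1) can be sketched as follows. scipy's `normaltest` implements the D'Agostino-Pearson test; the alpha/(j+1) adjustment below is a generic Bonferroni-style stand-in (an assumption for illustration), not necessarily the exact correction used by ICSOutlier:

```python
import numpy as np
from scipy.stats import normaltest   # D'Agostino-Pearson K^2 test

def select_components(Z, alpha=0.05):
    """Sequential selection sketch: keep the leading invariant components
    whose normality test rejects Gaussianity; stop at the first component
    that looks Gaussian.  The alpha/(j+1) adjustment is an assumption."""
    kappa = 0
    for j in range(Z.shape[1]):
        _, pval = normaltest(Z[:, j])
        if pval < alpha / (j + 1):   # reject normality -> keep component
            kappa = j + 1
        else:
            break
    return kappa
```

Applied to invariant coordinates with a heavy-tailed first component, this returns a small κ, consistent with retaining only the leading components.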
The number of selected ICS components is either fixed equal to 2, or automatically determined using the D'Agostino normality test described in Section 3.1. We compute the squared ICS distances of the 390 observations, and observations are classified as outliers when their squared distance exceeds the threshold based on a 2.5% level, as detailed in Section 3.1. In Figure 11, we plot the years on the y-axis for the 25 smoothing parameter setups, indicating outlying years with light gray to black squares depending on their detection frequency. Figure 12 displays a bar plot of the frequency of outlier detection (across the 25 setups and the 13 provinces) for each year. Note that the choice of the number of selected invariant components has minimal impact. Both Figure 11 and Figure 12 confirm the results of the previous section. Most provinces are outlying in 1987, and several are also outlying in 2007 and 2010. For large values of λ, many provinces are also detected as outliers in 2016. Some provinces are detected quite often over the years: THANH HOA, HAI PHONG and HOA BINH. Note that in (Stojanovic et al. 2020), the province of THANH HOA extends across two climatic regions (S3 and S4), which could explain why it is so often detected as an outlier. An overall comment regarding the outlier detection procedure used in the present application is that, from our experience on other data sets,
an outlying density is often characterised by behaviour that differs from the other densities in the tails of the distribution. This is not surprising, because the Bayes inner product defined by Equation 9 involves ratios of densities, which can be large where a density is small (in the tails of the distribution).

5 Conclusion and perspectives

We propose a coordinate-free presentation of ICS that allows ICS to be applied to more complex objects than the coordinate vectors of multivariate analysis. We focus on the case of distributional data and describe an outlier detection procedure based on ICS. However, one limitation of the coordinate-free approach is that it is mainly adapted to pairs of weighted covariance operators, because these have a coordinate-free definition. These pairs of operators include the well-known (Cov, Cov4) pair; its scatter counterpart in the multivariate context is the one recommended by Archimbaud (2018) for a small proportion of outliers. But it is unclear how we could generalise other well-known scatter matrices (such as M-estimators, pairwise-based weighted estimators, or Minimum Covariance Determinant estimators), which are useful when using ICS as a preprocessing step for clustering (see Alfons et al. 2024). Concerning a further adaptation of ICSOutlier to density objects, one perspective for our work is to take into account different settings of the preprocessing parameters and aggregate the results into a single outlyingness index. Another perspective is to consider multivariate densities (e.g., not only maximum temperature densities but also minimum temperature densities, precipitation, ...) and generalise the ICSOutlier procedure as in (Archimbaud, Boulfani, et al. 2022) for multivariate functional data. This coordinate-free framework for ICS lays the groundwork for a generalisation to infinite-dimensional Hilbert spaces.
Many difficulties arise, such as the compactness of the covariance operator, which makes it non-surjective, so that one cannot easily define a Mahalanobis distance, on which our definition of weighted covariance operators relies. Moreover, the existence of solutions and other properties of ICS proved in this paper come from the fact that one of the scatter operators is an automorphism, so it cannot be compact (in particular, it cannot be the covariance). Finally, Tyler (2010) proved that, whenever the dimension p is larger than the number of observations n, all affine equivariant scatter operators are proportional, which is a bad omen for a straight generalisation to infinite-dimensional Hilbert spaces. One can partially circumvent these difficulties by assuming that the data lies almost surely in a deterministic finite-dimensional subspace E of H (which is the case for density data after our preprocessing) and applying coordinate-free ICS. Another option could be to relax the affine equivariance assumption.

Acknowledgments

The major part of this work was completed while the authors were visiting the Vietnam Institute for Advanced Study in Mathematics (VIASM) in Hanoi, and the authors express their gratitude to VIASM. This paper has also been funded by the Agence Nationale de la Recherche under grant ANR-17-EURE-0010 (Investissements d'Avenir program). We thank Thibault Laurent for drawing our attention to the climate regions partition of Vietnam. We also thank
the two reviewers whose constructive comments allowed us to improve our article.

Appendix

Scatter operators for random objects in a Hilbert space

Let us first discuss some definitions relative to scatter operators in the framework of a Hilbert space (E, ⟨·,·⟩). We consider an E-valued random object X: Ω → E, where Ω is a probability space and E is a Hilbert space equipped with the Borel σ-algebra. In order to define ICS, we need at least two scatter operators, which generalise the covariance operator defined on E by

∀(x, y) ∈ E², ⟨Cov[X]x, y⟩ = E[⟨X − EX, x⟩⟨X − EX, y⟩], (5)

while keeping its affine equivariance property:

∀A ∈ GL(E), ∀b ∈ E, Cov[AX + b] = A Cov[X] A*,

where the Hilbert norm of X is assumed to be square-integrable, and A* is the adjoint of the linear operator A in the Hilbert space E, represented by the transpose of the matrix that represents A.

Definition 3 (Scatter operators). Let (E, ⟨·,·⟩) be a Hilbert space of dimension p and ℰ an affine invariant set of E-valued random objects, i.e. a set that verifies:

∀X ∈ ℰ, ∀A ∈ GL(E), ∀b ∈ E, AX + b ∈ ℰ. (6)

An operator S: ℰ → S⁺(E) (where S⁺(E) is the space of non-negative symmetric operators on E) is called an (affine equivariant) scatter operator (defined on ℰ) if it satisfies the following two properties:

1. Invariance by equality in distribution: ∀(X, Y) ∈ ℰ², X ∼ Y ⇒ S[X] = S[Y].
2. Affine equivariance: ∀X ∈ ℰ, ∀A ∈ GL(E), ∀b ∈ E, S[AX + b] = A S[X] A*.

We do not know whether there exist scatter operators other than the covariance when the Hilbert space has infinite dimension.

Details on coordinate-free ICS

The problem ICS(X, S1, S2) defined by Equation 1 is equivalent to assuming that S1[X] is injective and finding an orthonormal basis H that diagonalises the non-negative symmetric operator S1[X]⁻¹S2[X] in the Euclidean space (E, ⟨S1[X]·, ·⟩). The ICS(X, S1, S2) spectrum Λ is unique and is simply the spectrum of S1[X]⁻¹S2[X].
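In finite dimension, this diagonalisation reduces to a generalised symmetric eigenproblem. A small self-contained NumPy sketch (illustrative, not the authors' R package ICSFun) that solves it via Cholesky whitening of S1 and also checks that the spectrum Λ is unchanged under a joint affine transformation of both scatters:

```python
import numpy as np

def ics(S1, S2):
    """Find H with H^T S1 H = I and H^T S2 H = diag(Λ), Λ non-increasing:
    the spectrum of S1^{-1} S2, computed via Cholesky whitening of S1."""
    L = np.linalg.cholesky(S1)
    Linv = np.linalg.inv(L)
    M = Linv @ S2 @ Linv.T               # symmetric, similar to S1^{-1} S2
    lam, U = np.linalg.eigh(M)           # ascending spectrum
    order = np.argsort(lam)[::-1]        # reorder to non-increasing Λ
    H = Linv.T @ U[:, order]
    return H, lam[order]

rng = np.random.default_rng(1)
p = 4
A = rng.standard_normal((p, p)); S1 = A @ A.T + p * np.eye(p)
B = rng.standard_normal((p, p)); S2 = B @ B.T + np.eye(p)
H, lam = ics(S1, S2)

# affine equivariance: transforming both scatters by C leaves Λ unchanged
C = rng.standard_normal((p, p))
_, lam_C = ics(C @ S1 @ C.T, C @ S2 @ C.T)
print(np.allclose(H.T @ S1 @ H, np.eye(p)),
      np.allclose(H.T @ S2 @ H, np.diag(lam)),
      np.allclose(lam, lam_C))           # → True True True
```

The whitening trick works because L⁻¹ S2 L⁻ᵀ is symmetric and similar to S1⁻¹S2, so a plain symmetric eigensolver can be used.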
Proposition 2 (Existence of solutions). Let (E, ⟨·,·⟩) be a Euclidean space of dimension p, ℰ ⊆ L¹(Ω, E) an affine invariant set of integrable E-valued random objects, and S1 and S2 two scatter operators on ℰ. For any X ∈ ℰ such that S1[X] is an automorphism, there exists at least one solution (H, Λ) to the problem ICS(X, S1, S2), and Λ is a uniquely determined non-increasing sequence of positive real numbers.

Proof. Since S1[X] is non-singular, S1[X]⁻¹S2[X] exists and is symmetric in the Euclidean space (E, ⟨S1[X]·, ·⟩), because

∀(x, y) ∈ E², ⟨S1[X] S1[X]⁻¹S2[X] x, y⟩ = ⟨S2[X] x, y⟩ = ⟨S2[X] y, x⟩ = ⟨S1[X] S1[X]⁻¹S2[X] y, x⟩.

Thus, the spectral theorem guarantees that there exists an orthonormal basis H of (E, ⟨S1[X]·, ·⟩) in which S1[X]⁻¹S2[X] is diagonal. This methodology does not generalise to the infinite-dimensional case, because the inner product space (H, ⟨·, S1[X]·⟩) is not necessarily complete, so the spectral theorem does not apply.

Remark (Courant–Fischer variational principle). The ICS problem Equation 1 can be stated as a maximisation problem. For 1 ≤ j ≤ p, the following equalities hold:

h_j ∈ argmax { ⟨S2[X]h, h⟩ / ⟨S1[X]h, h⟩ : h ∈ E, ⟨S1[X]h, h_j′⟩ = 0 if 0 < j′ < j } and λ_j = max { ⟨S2[X]h, h⟩ / ⟨S1[X]h, h⟩ : h ∈ E, ⟨S1[X]h, h_j′⟩ = 0 if 0 < j′ < j }. (7)

The following reconstruction formula, extended from the multivariate setting to more complex data, is useful to interpret the ICS dual eigenbasis H* = (h*_j)_{1≤j≤p}, which is defined as the only basis of the space E that satisfies ⟨h_j, h*_{j′}⟩ = δ_{jj′} for all 1 ≤ j, j′ ≤ p.

Proposition 3 (Reconstruction formula). Let (E, ⟨·,·⟩) be a Euclidean space of dimension p, ℰ ⊆ L¹(Ω, E) an affine invariant set of integrable E-valued random objects, and S1 and S2 two scatter operators on ℰ. For any X ∈ ℰ such that S1[X] is an automorphism and any ICS(X, S1, S2) eigenbasis H = (h1, ..., hp) of E, we have

X = EX + Σ_{j=1}^p z_j h*_j,

where the z_j, 1 ≤ j ≤ p, are defined as in Equation 2 and H* = (h*_j)_{1≤j≤p} = (S1[X]h_j)_{1≤j≤p} is the dual basis of H.

Reminder on Bayes spaces

The most recent and complete description of the
Bayes spaces approach can be found in (Van Den Boogaart, Juan José Egozcue, and Pawlowsky-Glahn 2014). For the present work, we identify the elements of a Bayes space, as defined by Van Den Boogaart, Juan José Egozcue, and Pawlowsky-Glahn (2014), with their Radon–Nikodym derivatives with respect to a reference measure λ. This leads to the following framework: let (a, b) be a given interval of the real line equipped with the Borel σ-algebra, and let λ be a finite reference measure on (a, b). Let B2(a, b) be the space of square-log-integrable probability densities dµ/dλ, where µ is a measure that is equivalent to λ, meaning that µ and λ are absolutely continuous with respect to each other. Note that the simplex S^p used in compositional data analysis can be seen as a Bayes space when considering, instead of an interval (a, b) equipped with the Lebesgue measure, the finite set {1, ..., p + 1} equipped with the counting measure (see Example 2 in Van Den Boogaart, Juan José Egozcue, and Pawlowsky-Glahn 2014).

Let us first briefly recall the construction of the Hilbert space structure of B2(a, b). For a density f in B2(a, b), the clr transformation is defined by

clr f(·) = log f(·) − (1/λ(a, b)) ∫_a^b log f(t) dλ(t).

The clr transformation maps an element of B2(a, b) into an element of the space L2_0(a, b) of functions that are square-integrable with respect to λ on (a, b) and whose integral is equal to zero. The clr inverse of a function u of L2_0(a, b) is B2-equivalent to exp(u). More precisely, if u ∈ L2_0(a, b),

clr⁻¹(u)(·) = exp u(·) / ∫_a^b exp u(t) dλ(t).

A vector space structure on B2(a, b) is readily obtained by transporting the vector space structure of L2_0(a, b) to B2(a, b) using the clr transformation and its inverse; see for example Van Den Boogaart, Juan José Egozcue, and Pawlowsky-Glahn (2014). Its operations, denoted by ⊕ and ⊙, are called perturbation (the "addition") and powering (the "scalar multiplication").
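These constructions are easy to check numerically on a grid. A small self-contained Python sketch (a discretised illustration, not the paper's code, with λ taken to be the Lebesgue measure) verifies the zero-integral property of clr, the clr/clr⁻¹ round trip, and the agreement between the two expressions for the inner product given in Equations 8 and 9 of the next paragraphs:

```python
import numpy as np

# Midpoint grid on (a, b); integrals are approximated by Riemann sums.
a, b, m = 0.0, 1.0, 400
dt = (b - a) / m
t = a + (np.arange(m) + 0.5) * dt
integrate = lambda g: g.sum() * dt

def clr(f):
    logf = np.log(f)
    return logf - integrate(logf) / (b - a)      # subtract the mean of log f

def clr_inv(u):
    e = np.exp(u)
    return e / integrate(e)                      # renormalise to a density

f = np.exp(-(t - 0.3) ** 2 / 0.02); f /= integrate(f)
g = np.exp(-(t - 0.6) ** 2 / 0.05); g /= integrate(g)
u, v = clr(f), clr(g)

# inner product of the clr transforms in L2_0 (Equation 8)
ip_clr = integrate(u * v)
# double-integral Bayes inner product (Equation 9)
F, G = np.log(f), np.log(g)
dF = F[:, None] - F[None, :]                     # log f(t) - log f(s)
dG = G[:, None] - G[None, :]
ip_bayes = (dF * dG).sum() * dt * dt / (2 * (b - a))

print(abs(integrate(u)) < 1e-10,                 # clr f integrates to zero
      np.allclose(clr_inv(u), f),                # round trip recovers f
      np.isclose(ip_clr, ip_bayes))              # → True True True
```

The last check reflects the algebraic identity behind Equation 9: expanding the double integral recovers ∫FG − (∫F)(∫G)/λ(a, b), which is exactly the L2_0 inner product of the centred logs.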
For the definition of the inner product, we adopt a normalisation different from that of J. J. Egozcue, Díaz-Barrero, and Pawlowsky-Glahn (2006) and of Van Den Boogaart, Juan José Egozcue, and Pawlowsky-Glahn (2014), in the sense that we choose the classical definition of the inner product in L2_0(a, b): for two functions u and v in L2_0(a, b),

⟨u, v⟩_{L2_0} = ∫_a^b u(t) v(t) dλ(t), (8)

so that the corresponding inner product between two densities f and g in the Bayes space B2(a, b) is given by

⟨f, g⟩_{B2} = (1/(2λ(a, b))) ∫_a^b ∫_a^b (log f(t) − log f(s))(log g(t) − log g(s)) dλ(t) dλ(s). (9)

This normalisation yields an inner product that is homogeneous to the measure λ, whereas the Van Den Boogaart, Juan José Egozcue, and Pawlowsky-Glahn (2014) normalisation is unitless. Note that, for clarity and improved readability, the interval over which the spaces L2_0 and B2 are defined is omitted from some notations.

For a random density f(·) in the infinite-dimensional space B2(a, b), the expectation and covariance operators can be defined as follows, whenever they exist:

E_{B2}[f] = clr⁻¹ E[clr f] ∈ B2(a, b),

Cov_{B2}[f] g = E_{B2}[⟨f ⊖ E_{B2}[f], g⟩_{B2} ⊙ (f ⊖ E_{B2}[f])] = clr⁻¹ E[⟨f, g⟩_{B2} clr f] = clr⁻¹ E[⟨clr f, clr g⟩_{L2_0} clr f] for any g ∈ B2(a, b),

where ⊖ is the negative perturbation defined by f ⊖ g = f ⊕ [(−1) ⊙ g].

Reminder on compositional splines

Following (Machalová, Talská, et al. 2021), in order to construct a basis of E = C∆γ_d(a, b), which is required in practice, it is convenient to first construct a basis of a finite-dimensional spline subspace of L2_0(a, b), which we then transfer to B2(a, b) by the inverse clr transformation. More precisely, Machalová, Hron, and Monti (2016)
propose a basis of zero-integral splines in L2_0(a, b) that are called ZB-splines. The corresponding inverse images of these basis functions by clr are called CB-splines. A ZB-spline basis, denoted by Z = {Z1, ..., Z_{k+d−1}}, is characterised by the degree d of the splines (order d + 1), the number k and the positions of the so-called inside knots ∆γ = {γ1, ..., γk} in (a, b). The dimension of the resulting subspace Z∆γ_d is p = k + d − 1. Let C∆γ_d be the subspace generated by C = {C1, ..., Cp} in B2(a, b), where the Cj = clr⁻¹(Zj) are the back-transforms in B2(a, b) of the basis functions of the subspace Z∆γ_d. The expansion of a density f in B2(a, b) is then given by

f(t) = ⊕_{j=1}^p [f]_{Cj} ⊙ Cj(t), (10)

so that the corresponding expansion of its clr in L2_0(a, b) is given by

clr f(t) = Σ_{j=1}^p [f]_{Cj} Zj(t). (11)

Note that the coordinates of f in the basis C are the same as the coordinates of clr(f) in the basis Z: for j = 1, ..., p, [f]_{Cj} = [clr f]_{Zj}. Following Machalová, Hron, and Monti (2016), the basis functions of Z∆γ_d can be written in a B-spline basis (see Schumaker 1981), which is convenient because it allows using existing code for their computation.

Proofs

Proposition 1. First, let us verify that the problem ICS(X_F, S1_F, S2_F) is well defined on ℱ:

(a) The application φ is linear, so it is measurable. Moreover, if X ∈ ℰ, A ∈ GL(F) and b ∈ F, then ‖φ(X)‖_F = ‖X‖_E and Aφ(X) + b = φ(φ⁻¹∘A∘φ(X) + φ⁻¹(b)), where φ⁻¹∘A∘φ(X) + φ⁻¹(b) ∈ ℰ.

(b) If X ∈ ℰ, then S_ℓ^F[φ(X)] = φ∘S_ℓ^E[X]∘φ⁻¹ is a non-negative symmetric operator, and if Y ∈ ℰ verifies φ(X) ∼ φ(Y), then X ∼ Y (because the Borel σ-algebra on E is the pullback by φ of that on F), so that, for ℓ ∈ {1, 2},

S_ℓ^F[φ(X)] = φ∘S_ℓ^E[X]∘φ⁻¹ = φ∘S_ℓ^E[Y]∘φ⁻¹ = S_ℓ^F[φ(Y)]

and

S_ℓ^F[Aφ(X) + b] = φ∘S_ℓ^E[φ⁻¹∘A∘φ(X) + φ⁻¹(b)]∘φ⁻¹ = A∘φ∘S_ℓ^E[X]∘φ⁻¹∘A* = A S_ℓ^F[φ(X)] A*.

(c) The isometry φ preserves the linear rank of any finite sequence of vectors of E.
Now, (H_E, Λ) solves ICS(X_E, S1_E, S2_E) in the space E if and only if

⟨S1_E[X] h_j^E, h_{j′}^E⟩_E = δ_{jj′} and ⟨S2_E[X] h_j^E, h_{j′}^E⟩_E = λ_j δ_{jj′} for all 1 ≤ j, j′ ≤ p,

which holds if and only if

⟨φ(S1_E[X] h_j^E), φ(h_{j′}^E)⟩_F = δ_{jj′} and ⟨φ(S2_E[X] h_j^E), φ(h_{j′}^E)⟩_F = λ_j δ_{jj′} for all 1 ≤ j, j′ ≤ p,

which holds if and only if

⟨S1_F[X] h_j^F, h_{j′}^F⟩_F = δ_{jj′} and ⟨S2_F[X] h_j^F, h_{j′}^F⟩_F = λ_j δ_{jj′} for all 1 ≤ j, j′ ≤ p,

which is equivalent to the fact that (H_F, Λ) solves ICS(X_F, S1_F, S2_F) in the space F.

Corollary 1. Let ℓ ∈ {1, 2} and X̃ = X − EX. In order to prove Equation 4, we need to show that, for any (x, y) ∈ F²,

⟨φ∘Cov_{wℓ}^E[X]∘φ⁻¹(x), y⟩_F = ⟨Cov_{wℓ}^E[X] φ⁻¹(x), φ⁻¹(y)⟩_E
= E[wℓ(‖Cov_E[X]^{−1/2} X̃‖_E)² ⟨X̃, φ⁻¹(x)⟩_E ⟨X̃, φ⁻¹(y)⟩_E]
= E[wℓ(‖Cov_F[φ(X)]^{−1/2} φ(X̃)‖_F)² ⟨φ(X̃), x⟩_F ⟨φ(X̃), y⟩_F]
= ⟨Cov_{wℓ}^F[φ(X)] x, y⟩_F. (12)

It is enough to show the equality between the second and third lines of Equation 12, for which we treat separately the cases wℓ = 1 and wℓ ≠ 1. If wℓ = 1, there is nothing to prove, so Equation 4 holds for the covariance operator. If wℓ ≠ 1, we know from the case wℓ = 1 that Cov_F[φ(X)]^{−1/2} = φ∘Cov_E[X]^{−1/2}∘φ⁻¹, so that

‖Cov_E[X]^{−1/2} X̃‖_E = ‖Cov_F[φ(X)]^{−1/2} φ(X̃)‖_F. (13)

Once Equation 4 is proved, one only needs to apply Proposition 1 to finish the proof.

Corollary 2. Applying Corollary 1 to the isometry φ_B: (E, ⟨·,·⟩_E) → (R^p, ⟨·,·⟩_{R^p}), x ↦ G_B^{1/2}[x]_B, we obtain the equivalence between the following assertions:

(i) (H, Λ) solves ICS(X, Cov_{w1}, Cov_{w2}) in the space E;
(ii) (G_B^{1/2}[H]_B, Λ) solves ICS(G_B^{1/2}[X]_B, Cov_{w1}, Cov_{w2}) in the space R^p,

which gives the equivalence between assertions (1) and (2). The equivalence between the other assertions is deduced from the fact that, for any ℓ ∈ {1, 2} and any (x, y) ∈ E²,

⟨Cov_{wℓ}^E[X] x, y⟩_E = ⟨Cov_{wℓ}(G_B^{1/2}[X]_B) G_B^{1/2}[x]_B, G_B^{1/2}[y]_B⟩_{R^p}
= ⟨Cov_{wℓ}(G_B[X]_B) [x]_B, [y]_B⟩_{R^p} = ⟨Cov_{wℓ}([X]_B) G_B[x]_B, G_B[y]_B⟩_{R^p}, (14)

where the first equality in Equation 14 comes from Equation 4, and the second and third equalities come from the affine equivariance of Cov_{wℓ}.

Proposition 3. Let us decompose S1[X]⁻¹(X − EX) over the basis H, which is orthonormal in (E, ⟨·, S1[X]·⟩):

S1[X]⁻¹(X − EX) = Σ_{j=1}^p ⟨S1[X]⁻¹(X − EX), S1[X] h_j⟩ h_j = Σ_{j=1}^p ⟨X − EX, h_j⟩ h_j = Σ_{j=1}^p z_j h_j.

The dual basis H* of H is the one that satisfies ⟨h_j, h*_{j′}⟩ = δ_{jj′} for all 1 ≤ j, j′ ≤ p, and we know from the definition of ICS that this holds for (S1[X] h_j)_{1≤j≤p}.

Code & reproducibility

In order to implement coordinate-free ICS, we created the R package ICSFun, which is used to generate the figures (see the code in the HTML version of this article).

References

Aggarwal, Charu C. (2017). Outlier Analysis. en. Cham: Springer International Publishing. ISBN: 978-3-319-47577-6, 978-3-319-47578-3. DOI: 10.1007/978-3-319-47578-3. (Visited on 08/05/2024) (cit. on p. 2).

Alfons, Andreas et al. (Mar. 2024). "Tandem clustering with invariant coordinate selection". In: Econometrics and Statistics. ISSN: 2452-3062. DOI: 10.1016/j.ecosta.2024.03.002. (Visited on 07/18/2024) (cit. on pp. 2, 19).

Archimbaud, Aurore (2018). "Détection non-supervisée d'observations atypiques en contrôle de qualité : un survol". In: Journal de la Société Française de Statistique 159.3, pp. 1–39 (cit. on p. 19).

Archimbaud, Aurore, Feriel Boulfani, et al. (Mar. 2022). "ICS for multivariate functional anomaly detection with applications to predictive maintenance and quality control". In: Econometrics and Statistics. ISSN: 2452-3062. DOI: 10.1016/j.ecosta.2022.03.003. (Visited on 01/10/2024) (cit. on pp. 2, 3, 6, 10, 19).

Archimbaud, Aurore, Zlatko Drmač, et al. (Mar. 2023). "Numerical Considerations and a new implementation for invariant coordinate selection". en. In: SIAM Journal on Mathematics of Data Science 5.1, pp. 97–121.
ISSN: 2577-0187. DOI: 10.1137/22M1498759. (Visited on 05/03/2024) (cit. on p. 5).

Archimbaud, Aurore, Klaus Nordhausen, and Anne Ruiz-Gazen (Dec. 2018). "ICS for multivariate outlier detection with application to quality control". en. In: Computational Statistics & Data Analysis 128, pp. 184–199. ISSN: 0167-9473. DOI: 10.1016/j.csda.2018.06.011. (Visited on 10/13/2022) (cit. on pp. 2, 8).

Azcorra, A. et al. (May 2018). "Unsupervised Scalable Statistical Method for Identifying Influential Users in Online Social Networks". en. In: Scientific Reports 8.1. Publisher: Nature Publishing Group, p. 6955. ISSN: 2045-2322. DOI: 10.1038/s41598-018-24874-2. URL: https://www.nature.com/articles/s41598-018-24874-2 (visited on 04/03/2025) (cit. on p. 13).

Dai, Wenlin et al. (Sept. 2020). "Functional outlier detection and taxonomy by sequential transformations". In: Computational Statistics & Data Analysis 149, p. 106960. ISSN: 0167-9473. DOI: 10.1016/j.csda.2020.106960. (Visited on 08/05/2024) (cit. on p. 2).

Egozcue, J. J., J. L. Díaz-Barrero, and V. Pawlowsky-Glahn (July 2006). "Hilbert Space of Probability Density Functions Based on Aitchison Geometry". en. In: Acta Mathematica Sinica, English Series 22.4, pp. 1175–1182. ISSN: 1439-8516, 1439-7617. DOI: 10.1007/s10114-005-0678-2. (Visited on 04/08/2024) (cit. on p. 21).

Eubank, Randall L. (Apr. 2014). Nonparametric Regression and Spline Smoothing. 2nd ed. Boca Raton: CRC Press. ISBN: 978-0-429-18267-9. DOI: 10.1201/9781482273144 (cit. on p. 6).

Ferraty, Frédéric and Philippe Vieu (Dec. 2002). "The Functional Nonparametric Model and Application to Spectrometric Data". en. In: Computational Statistics 17.4, pp. 545–564. ISSN: 1613-9658. DOI: 10.1007/s001800200126. (Visited on 12/30/2024) (cit. on p. 7).
Lei, Xinyi, Zhicheng Chen, and Hui Li (July 2023). "Functional Outlier Detection for Density-Valued Data with Application to Robustify Distribution-to-Distribution Regression". In: Technometrics 65.3, pp. 351–362. ISSN: 0040-1706. DOI: 10.1080/00401706.2022.2164063. (Visited on 03/27/2024) (cit. on p. 2).

Li, Bing et al. (2021). Functional independent component analysis: an extension of fourth-order blind identification. URL: https://sites.google.com/site/germainvanbever/publica (visited on 10/18/2023) (cit. on p. 2).

Loperfido, Nicola (Nov. 2021). "Some theoretical properties of two kurtosis matrices, with application to invariant coordinate selection". In: Journal of Multivariate Analysis 186, p. 104809. ISSN: 0047-259X. DOI: 10.1016/j.jmva.2021.104809. (Visited on 03/13/2024) (cit. on p. 1).

López-Pintado, Sara and Juan Romo (June 2009). "On the Concept of Depth for Functional Data". In: Journal of the American Statistical Association 104.486, pp. 718–734. ISSN: 0162-1459. DOI: 10.1198/jasa.2009.0108. URL: https://doi.org/10.1198/jasa.2009.0108 (visited on 04/03/2025) (cit. on p. 13).

Machalová, J., K. Hron, and G.S. Monti (June 2016). "Preprocessing of centred logratio transformed density functions using smoothing splines". en. In: Journal of Applied Statistics 43.8, pp. 1419–1435. ISSN: 0266-4763, 1360-0532. DOI: 10.1080/02664763.2015.1103706. (Visited on 03/12/2024) (cit. on pp. 7, 13, 21).

Machalová, J., Renáta Talská, et al. (June 2021). "Compositional splines for representation of density functions". en. In: Computational Statistics 36.2, pp. 1031–1064. ISSN: 0943-4062, 1613-9658. DOI: 10.1007/s00180-020-01042-7. (Visited on 03/12/2024) (cit. on pp. 7, 21).

Menafoglio, Alessandra (Apr. 2021). Anomaly detection for density data based on control charts. IASC-ERS Course.
URL: https://iasc-isi.org/events/iasc-ers-course-an-introduction-to-functional-data-analysis-for-density-functions-in-bayes-spaces/ (cit. on p. 2).

Murph, Alexander C. and Justin D. Strait (Dec. 2023). DeBoinR: Box-Plots and Outlier Detection for Probability Density Functions. URL: https://cran.r-project.org/web/packages/DeBoinR/ (visited on 03/27/2024) (cit. on p. 13).

Murph, Alexander C., Justin D. Strait, et al. (2024). "Visualisation and outlier detection for probability density function ensembles". en. In: Stat 13.2, e662. ISSN: 2049-1573. DOI: 10.1002/sta4.662. (Visited on 04/12/2024) (cit. on pp. 2, 12, 13).

Nordhausen, Klaus, Aurore Archimbaud, and Anne Ruiz-Gazen (Dec. 2023). ICSOutlier: Outlier Detection Using Invariant Coordinate Selection. URL: https://cran.r-project.org/web/packages/ICSOutlier/ (visited on 07/19/2024) (cit. on pp. 2, 8).

Nordhausen, Klaus and Anne Ruiz-Gazen (Mar. 2022). "On the usage of joint diagonalization in multivariate statistics". In: Journal of Multivariate Analysis. 50th Anniversary Jubilee Edition 188, p. 104844. ISSN: 0047-259X. DOI: 10.1016/j.jmva.2021.104844. (Visited on 07/02/2024) (cit. on pp. 1, 2, 4).

Ojo, Oluwasegun Taiwo, Antonio Fernández Anta, et al. (Sept. 2022). "Detecting and classifying outliers in big functional data". en. In: Advances in Data Analysis and Classification 16.3, pp. 725–760. ISSN: 1862-5355. DOI: 10.1007/s11634-021-00460-9. (Visited on 12/27/2024) (cit. on p. 12).

Ojo, Oluwasegun Taiwo, Rosa Elvira Lillo, and Antonio Fernandez Anta (Sept. 2023). fdaoutlier: Outlier Detection Tools for Functional Data Analysis. URL: https://cran.r-project.org/web/packages/fdaoutlier/index.html (visited on 03/19/2025) (cit. on p. 13).

Pawlowsky-Glahn, Vera, Juan José Egozcue, and Raimon Tolosana-Delgado (Mar. 2015). Modeling and Analysis of Compositional Data. en. 1st ed. Wiley. ISBN: 978-1-118-44306-4, 978-1-119-00314-4. DOI: 10.1002/9781119003144. (Visited on 07/19/2024) (cit. on p. 5).
Ramsay, James, Giles Hooker, and Spencer Graves (Mar. 2024). fda: Functional Data Analysis. URL: https://cran.r-project.org/web/packages/fda/ (visited on 03/16/2024) (cit. on p. 7).

Rendón Aguirre, Janeth Carolina (May 2017).
"Clustering in high dimension for multivariate and functional data using extreme kurtosis projections". eng. PhD thesis. Universidad Carlos III de Madrid. Handle: 10016/25286. (Visited on 03/14/2024) (cit. on p. 2).

Rousseeuw, Peter (1985). "Multivariate Estimation with High Breakdown Point". en. In: Mathematical Statistics and Applications. Ed. by Wilfried Grossmann et al. Dordrecht: Springer Netherlands, pp. 283–297. ISBN: 978-94-010-8901-2, 978-94-009-5438-0. DOI: 10.1007/978-94-009-5438-0_20. (Visited on 12/30/2024) (cit. on p. 5).

Ruiz-Gazen, Anne et al. (2023). "Detecting Outliers in Compositional Data Using Invariant Coordinate Selection". en. In: Robust and Multivariate Statistical Methods: Festschrift in Honor of David E. Tyler. Ed. by Mengxi Yi and Klaus Nordhausen. Cham: Springer International Publishing, pp. 197–224. ISBN: 978-3-031-22687-8. DOI: 10.1007/978-3-031-22687-8_10. (Visited on 10/12/2023) (cit. on pp. 2, 5).

Schumaker, Larry (1981). Spline Functions: Basic Theory. 3rd ed. Cambridge University Press. ISBN: 978-0-521-70512-7, 978-0-511-61899-4. DOI: 10.1017/CBO9780511618994. (Visited on 10/20/2023) (cit. on p. 21).

Silverman, B. W. (Sept. 1982). "On the Estimation of a Probability Density Function by the Maximum Penalized Likelihood Method". In: The Annals of Statistics 10.3, pp. 795–810. ISSN: 0090-5364, 2168-8966. DOI: 10.1214/aos/1176345872. (Visited on 12/05/2023) (cit. on p. 7).

Stojanovic, Milica et al. (Mar. 2020). "Trends and Extremes of Drought Episodes in Vietnam Sub-Regions during 1980–2017 at Different Timescales". en. In: Water 12.3, p. 813. ISSN: 2073-4441. DOI: 10.3390/w12030813. (Visited on 06/26/2024) (cit. on pp. 14, 18).

Stone, Mervyn (1987). Coordinate-Free Multivariable Statistics: An Illustrated Geometric Progression from Halmos to Gauss and Bayes. eng. Oxford Statistical Science Series 2. Oxford: Clarendon Press. ISBN: 978-0-19-852210-2 (cit. on p. 2).
Sun, Ying and Marc G. Genton (Jan. 2011). "Functional Boxplots". In: Journal of Computational and Graphical Statistics 20.2, pp. 316–334. ISSN: 1061-8600. DOI: 10.1198/jcgs.2011.09224. URL: https://doi.org/10.1198/jcgs.2011.09224 (visited on 04/03/2025) (cit. on p. 12).

Trinh, Thi Huong, Christine Thomas-Agnan, and Michel Simioni (Feb. 2023). Discrete and Smooth Scalar-on-Density Compositional Regression for Assessing the Impact of Climate Change on Rice Yield in Vietnam. URL: https://www.tse-fr.eu/publications/discrete-and-smooth-scalar-density-compositional-regression-assessing-impact-climate-change-rice (cit. on p. 14).

Tyler, David E. (Sept. 2010). "A note on multivariate location and scatter statistics for sparse data sets". In: Statistics & Probability Letters 80.17, pp. 1409–1413. ISSN: 0167-7152. DOI: 10.1016/j.spl.2010.05.006. (Visited on 03/13/2024) (cit. on pp. 3, 19).

Tyler, David E. et al. (June 2009). "Invariant co-ordinate selection". en. In: Journal of the Royal Statistical Society: Series B (Statistical Methodology) 71.3, pp. 549–592. ISSN: 1369-7412, 1467-9868. DOI: 10.1111/j.1467-9868.2009.00706.x. (Visited on 10/13/2022) (cit. on pp. 1–3, 8).

Van Den Boogaart, Karl Gerald, Juan José Egozcue, and Vera Pawlowsky-Glahn (June 2014). "Bayes Hilbert Spaces". en. In: Australian & New Zealand Journal of Statistics 56.2, pp. 171–194. ISSN: 1369-1473. DOI: 10.1111/anzs.12074. (Visited on 09/15/2023) (cit. on pp. 7, 20, 21).

Virta, Joni et al. (2020). "Independent component analysis for multivariate functional data". In: Journal of Multivariate Analysis 176, p. 104568. DOI: 10.1016/j.jmva.2019.104568 (cit. on p. 2).
arXiv:2505.19438v1 [math.ST] 26 May 2025

Sampling from Binary Quadratic Distributions via Stochastic Localization

Chenguang Wang*¹² Kaiyuan Cui*³ Weichen Zhao⁴ Tianshu Yu¹

Abstract

Sampling from binary quadratic distributions (BQDs) is a fundamental but challenging problem in discrete optimization and probabilistic inference. Previous work established theoretical guarantees for stochastic localization (SL) in continuous domains, where MCMC methods efficiently estimate the required posterior expectations during SL iterations. However, achieving similar convergence guarantees for discrete MCMC samplers in posterior estimation presents unique theoretical challenges. In this work, we present the first application of SL to general BQDs, proving that after a certain number of iterations, the external field of the posterior distributions constructed by SL tends to infinity almost everywhere; these posteriors hence satisfy Poincaré inequalities with probability close to 1, leading to polynomial-time mixing. This theoretical breakthrough enables efficient sampling from general BQDs, even those that may not originally possess fast mixing properties. Furthermore, our analysis, covering numerous discrete MCMC samplers based on Glauber dynamics and Metropolis-Hastings algorithms, demonstrates the broad applicability of our theoretical framework. Experiments on instances with quadratic unconstrained binary objectives, including maximum independent set, maximum cut, and maximum clique problems, demonstrate consistent improvements in sampling efficiency across different discrete MCMC samplers.

*Equal contribution. ¹School of Data Science, The Chinese University of Hong Kong, Shenzhen. ²Shenzhen Research Institute of Big Data. ³The Academy of Mathematics and Systems Science, Chinese Academy of Sciences. ⁴The School of Statistics and Data Science, LPMC & KLMDASR, Nankai University. Correspondence to: Weichen Zhao <zhaoweichen@nankai.edu.cn>, Tianshu Yu <yutianshu@cuhk.edu.cn>.
Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

1. Introduction

In this work, we consider sampling from the Gibbs measure taking the form of binary quadratic distributions (BQDs):

ν(x) ∝ exp(−(β/2)⟨x, Wx⟩ + ⟨x, b⟩), x ∈ {−1, 1}^N, (1)

where β is the inverse temperature. This distribution class arises naturally in statistical physics, e.g. in Ising models (Nishimori, 2001; Bauerschmidt & Dagallier, 2024) and spin systems (Bauerschmidt & Bodineau, 2019), and in combinatorial optimization as quadratic unconstrained binary optimization (QUBO) problems (Lucas, 2014; Glover et al., 2022a;b), making it a fundamental object of study in both theoretical and applied research. Sampling from such distributions poses significant challenges due to their discrete nature and complex dependencies (Sly & Sun, 2012). To sample from such distributions, various approaches have been proposed, including Markov chain Monte Carlo (MCMC) methods (Metropolis et al., 1953; Titsias & Yau, 2017; Grathwohl et al., 2021; Sun et al., 2021; 2023b), simulated annealing (Kirkpatrick et al., 1983; Sun et al., 2022; Sanokowski et al., 2023), diffusion models (Sanokowski et al., 2024) and variational inference (Koehler et al., 2022).

Recently, stochastic localization (SL) has emerged as a promising sampling framework. In the continuous domain, SLIPS (Grenioux et al., 2024) established theoretical guarantees by proving a "duality of log-concavity" for sampling from the posterior and marginal distributions during SL iterations under certain assumptions, leading to significant improvements in sampling efficiency using the Metropolis-Adjusted Langevin Algorithm (MALA) (Roberts &
|
https://arxiv.org/abs/2505.19438v1
|
Tweedie, 1996). Intuitively, rather than directly sampling from the continuous target distribution, SL achieves sampling by iteratively adding Gaussian noise to posterior estimates of the target distribution. This results in posterior distributions that are Gaussian convolutions of the target distribution, which effectively reduces sampling complexity by smoothing out the target distribution's irregular features (Huang et al., 2023; Chen et al., 2024). This idea of decomposing a difficult direct sampling task into a sequence of simpler sampling problems is also prevalent in diffusion models for generative tasks and is considered one of the key factors behind their remarkable performance (Ho et al., 2020; Song et al., 2020a;b; Rombach et al., 2022).

However, this elegant theoretical machinery relies heavily on properties specific to continuous spaces and cannot be directly extended to discrete settings, where the fundamental nature of the state space requires different analytical approaches. While several pioneering works have explored SL-based sampling for specific discrete statistical physics models, including Sherrington-Kirkpatrick (SK) models (El Alaoui et al., 2022) and random field Ising models (Alaoui et al., 2023b), these efforts have often employed model-specific techniques for posterior estimation or focused on distributions with particular structural properties. For instance, El Alaoui et al. (2022) addresses SK models at specific temperature regimes using TAP free energy (Nishimori, 2001) optimization to approximate posterior expectations in SL iterations. Consequently, a general framework for leveraging SL with standard, off-the-shelf discrete MCMC samplers, with theoretical guarantees broadly applicable to the rich class of BQDs, has been less explored.
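For intuition, the target class in Equation (1) can be instantiated exactly for small N by brute-force enumeration, which is useful as ground truth when testing samplers. A minimal sketch (illustrative, not from the paper):

```python
import itertools
import numpy as np

def bqd_probs(W, b, beta=1.0):
    """Exact probabilities of ν(x) ∝ exp(-(β/2)⟨x, Wx⟩ + ⟨x, b⟩) over
    x ∈ {-1, 1}^N by brute-force enumeration (small N only: 2^N states)."""
    N = len(b)
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=N)))
    log_w = -0.5 * beta * np.einsum("si,ij,sj->s", states, W, states) \
            + states @ b
    w = np.exp(log_w - log_w.max())      # stabilised unnormalised weights
    return states, w / w.sum()

rng = np.random.default_rng(3)
N = 8
A = rng.standard_normal((N, N))
W = (A + A.T) / 2                        # symmetric coupling matrix
b = rng.standard_normal(N)
states, prob = bqd_probs(W, b, beta=0.5)
print(len(prob) == 2 ** N, np.isclose(prob.sum(), 1.0))  # → True True
```

Subtracting the maximum log-weight before exponentiating avoids overflow at low temperature (large β), without changing the normalised distribution.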
A natural question thus arises: Can SL reduce sampling difficulty for BQDs, as it does in continuous settings, by constructing easily samplable posterior distributions using general-purpose discrete MCMC?

In this work, we answer this question affirmatively for general BQDs by establishing strong theoretical guarantees. Our core theoretical contribution is proving that after a certain number of SL iterations, the posterior distributions constructed by SL satisfy Poincaré inequalities with probability close to 1. This result is crucial, as it implies polynomial-time mixing for any underlying discrete MCMC sampler applied to these posteriors. We explicitly derive a bound on the required number of SL iterations, providing concrete guidance for practical implementation. Unlike prior results tied to specific cases, our framework applies to general BQDs (for arbitrary W and b), demonstrating the broad applicability of SL in this discrete setting. Importantly, our guarantees hold for discrete MCMC samplers based on both Glauber dynamics and Metropolis-Hastings algorithms, encompassing most commonly used discrete sampling methods.

To demonstrate the practical value of our theoretical insights, we conduct extensive experiments on three types of QUBO problems: maximum independent set, maximum cut, and maximum clique, each evaluated across multiple datasets. We assess three representative discrete MCMC samplers based on Glauber dynamics and Metropolis-Hastings algorithms: Gibbs with Gradients (GWG) (Grathwohl et al., 2021), the Path Auxiliary Sampler (PAS) (Sun et al., 2021), and the Discrete Metropolis-Adjusted Langevin Algorithm (DMALA) (Zhang et
al., 2022). Our experimental results demonstrate that SL consistently improves the sampling performance of all three samplers across every dataset, providing strong empirical support for our theoretical guarantees.

2. Preliminaries

In this section, we introduce key concepts and notations used throughout this paper.

2.1. Notations

Throughout this paper, we consider a system of size N, where [N] := {1, 2, …, N} denotes the index set and Ω := {−1, 1}^N represents the hypercube state space. For any configuration x, we use x_{∼i} to denote x with site i excluded, x^i to denote the configuration after a transition at site i, and x = y off j to denote x_i = y_i for all i ≠ j. We define the trivial metric as d(x, y) := 1_{x ≠ y}, where 1_{x ≠ y} = 1 if x ≠ y and 1_{x ≠ y} = 0 if x = y. We use ||·||_TV to denote the total variation distance. The conditional distribution of x_i is denoted by ν_i(·|x), while E_ν represents expectation under the measure ν. Finally, ⟨·,·⟩ denotes the standard inner product.

2.2. Discrete MCMC Samplers

In this part, we introduce three recent and effective discrete MCMC samplers, which will facilitate our theoretical analysis in Section 4: Gibbs With Gradients (GWG) (Grathwohl et al., 2021), the Path Auxiliary Sampler (PAS) (Sun et al., 2021), and the Discrete Metropolis-Adjusted Langevin Algorithm (DMALA) (Zhang et al., 2022). These samplers share a common foundation in locally balanced proposals, incorporating local information about the target distribution to construct more efficient transitions. Essentially, they can all be viewed as generalized Glauber dynamics that incorporate gradient-based information from the target distribution and different dimension-update schemes, followed by an optional Metropolis-Hastings (MH) adjustment.

For GWG, the locally informed proposal is defined as:

p(x′|x) ∝ e^{d(x)/2} 1_{x′ ∈ H(x)},   (2)

where d(x) = −(2x − 1)∇ log ν(x), H(x) denotes the Hamming ball of size 1 around x, and the site to flip is determined by sampling i ∼ Categorical(Softmax(d(x)/2)).
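A single GWG-style site selection for the BQD can be sketched as follows. This is a simplification with our own naming: for the BQD, ∇ log ν(x) = −βWx + b, and for spins in {−1, 1} the first-order flip score is −2x ⊙ ∇ log ν(x) (the {−1, 1} analogue of the d(x) above):

```python
import numpy as np

def gwg_flip_site(x, W, b, beta, rng):
    """Sample the site to flip from Softmax(d(x)/2), as in Eq. (2).

    Flipping site i changes the log-density by approximately -2 * x_i * grad_i,
    which is the score d_i used for the categorical proposal.
    """
    grad = -beta * (W @ x) + b          # gradient of the BQD log-density
    d = -2.0 * x * grad                 # first-order flip scores
    logits = d / 2.0
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    return rng.choice(len(x), p=p)

rng = np.random.default_rng(1)
N = 6
W = rng.normal(size=(N, N)); W = (W + W.T) / 2
b = rng.normal(size=N)
x = rng.choice([-1.0, 1.0], size=N)
i = gwg_flip_site(x, W, b, beta=0.5, rng=rng)
x_new = x.copy(); x_new[i] *= -1        # the proposed neighbor in H(x)
```

The proposed state differs from x in exactly one coordinate, i.e., it lies in the Hamming ball H(x) of size 1.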
For PAS, the proposal is formulated as:

p_y(x′|x) ∝ g(e^{⟨∇ log ν(y), x′ − x⟩}),   (3)

where g(·) is the local balancing function (specifically, g(t) = √t in PAS) and y represents the current state serving as the starting point of the auxiliary path.

For DMALA, the proposal admits the conditional independence relationship p(x′|x) = ∏_{i=1}^N p(x′_i|x) and takes the form:

p(x′|x) = ∏_{i=1}^N Softmax( (1/2)(∇ log ν(x))_i (x′_i − x_i) ),   (4)

where the quadratic term present in the original form becomes negligible in the binary case.

Each of these samplers may include an optional MH adjustment step with the acceptance ratio:

min( 1, ν(x′) p(x|x′) / (ν(x) p(x′|x)) ).   (5)

2.3. SL for Sampling

El Alaoui & Montanari (2022) established a connection between stochastic localization and information theory through an observation process (Y_t)_{t≥0}:

Y_t = tX + B_t,   (6)

where X ∼ ν and (B_t)_{t≥0} is a standard Brownian motion independent of X. The conditional distribution q_t of X given Y_t follows a stochastic localization process, which, by Theorem 7.12 of Liptser & Shiryaev (2013), satisfies:

dY_t = u_t(Y_t) dt + dB_t,   Y_0 = 0,   (7)

where u_t(y) = ∫ x q_t(x|y) dx. Grenioux et al. (2024) later generalized this to:

Y^α_t = α(t)X + σB_t,   t ∈ [0, T_gen),   (8)

with α(t) = t^{1/2} g(t) satisfying certain regularity conditions¹. The corresponding SDE becomes:

dY^α_t = α̇(t) u^α_t(Y^α_t) dt + σ dB_t,   Y^α_0 = 0,   (9)

where u^α_t(y) = ∫ x q^α_t(x|y) dx. We refer the reader to Appendix B.1 for more fundamental definitions of SL.
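To build intuition for the observation process (6), one can simulate Y_t = tX + σB_t for a fixed spin configuration X and watch Y_t/t concentrate on X: the noise contribution has magnitude O(σ√T / T). A toy sketch, with our own step size and horizon:

```python
import numpy as np

def simulate_observation(X, T=200.0, n_steps=2000, sigma=1.0, seed=0):
    """Euler simulation of Y_t = t*X + sigma*B_t on [0, T]; returns Y_T / T."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Y = np.zeros_like(X, dtype=float)
    for _ in range(n_steps):
        # Each increment adds the drift dt*X plus a Brownian increment.
        Y += dt * X + sigma * np.sqrt(dt) * rng.normal(size=X.shape)
    return Y / T

X = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
est = simulate_observation(X)
# For T = 200 the per-coordinate noise std is sqrt(200)/200 ≈ 0.07,
# so est is already very close to X.
```

This is exactly the "localization" effect: conditioning on Y_t pins the posterior of X down ever more tightly as t grows.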
3. Binary Quadratic Sampling via Stochastic Localization

In this section, we present the SL-based sampling framework for binary quadratic distributions. Based on the SL framework introduced in Section 2.3, estimating the posterior expectation u^α_t(y) = ∫ x q^α_t(x|y) dx is crucial for sampling. The most straightforward approach is to approximate this expectation by Monte Carlo using discrete MCMC samplers:

Ũ^α_t = (1/n) ∑_{j=1}^n x^t_j ≈ u^α_t(y),   where {x^t_j}_{j=1}^n ∼ q^α_t(x|y).

The procedure is detailed in Algorithm 1. Understanding the properties of this posterior distribution becomes central to our analysis.

¹Specifically, g ∈ C⁰([0, T_gen), R₊) ∩ C¹((0, T_gen), R₊) is strictly increasing with g(t) ∼ C t^{β/2} as t → 0 for some β ≥ 1, C > 0, and lim_{t→T_gen} g(t) = ∞.

Algorithm 1: SL with Discrete MCMC Samplers
Input: discrete MCMC sampler (DMS), time steps for SDE iterations {t_i}_{i=0}^T, sample size n for posterior estimation, SL parameters α(·), σ
Initialize Ỹ_0 ∼ N(0, t_0 σ² I)
for i = 0 to T − 1 do
  Set δ_i = t_{i+1} − t_i, w_i = α(t_{i+1}) − α(t_i)
  Simulate {x^i_j}_{j=1}^n ∼ P(X | Ỹ_i) with the DMS
  Estimate the posterior expectation Ũ^α_i = (1/n) ∑_{j=1}^n x^i_j
  Simulate Ỹ_{i+1} ∼ N(Ỹ_i + w_i Ũ^α_i, σ² δ_i I)
end for
Output: Ỹ_T / α(t_T)

Given the observation process in Equation (8), Bayes' theorem yields the posterior distribution of X given Y_t:

q^α_t(x|y) ∝ p^α_t(y|x) ν(x),   (10)

where p^α_t(y|x) is the Gaussian likelihood induced by the observation process. For the binary quadratic distribution ν(x) defined in Equation (1), the posterior distribution takes the explicit form:

q^α_t(x|y) ∝ e^{−(β/2)⟨x, Wx⟩ + ⟨x, b + α(t)Y_t/(σ²t)⟩}.   (11)

Note that the quadratic term in x from the Gaussian likelihood p^α_t(y|x) vanishes since ⟨x, x⟩ = N is constant for x ∈ {−1, 1}^N. In contrast to continuous settings, where x² terms persist and log-concavity can be analyzed through the Hessian of the log-likelihood, the discrete case presents unique challenges. Here, the posterior distribution differs from ν(x) only in its linear term, and we cannot assess sampling difficulty through derivatives.
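Algorithm 1 can be sketched end to end in a few lines. In this sketch the posterior sampler is a plain Glauber sweep over the tilted measure (11), not one of the paper's advanced DMCMC samplers, and all parameter choices (schedule α(t) = t, uniform grid, sample size) are our own illustrative defaults:

```python
import numpy as np

def glauber_posterior_samples(W, b_eff, beta, n, sweeps, rng):
    """Approximate samples from q(x) ∝ exp(-(beta/2)<x,Wx> + <x,b_eff>)
    via Glauber dynamics: each site is resampled from its conditional,
    a sigmoid of the local field."""
    N = len(b_eff)
    x = rng.choice([-1.0, 1.0], size=N)
    out = []
    for _ in range(n):
        for _ in range(sweeps):
            for i in range(N):
                local = b_eff[i] - beta * (W[i] @ x - W[i, i] * x[i])
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * local))
                x[i] = 1.0 if rng.random() < p_plus else -1.0
        out.append(x.copy())
    return np.array(out)

def sl_sample(W, b, beta, T=30, dt=0.2, sigma=1.0, n=8, seed=0):
    """Sketch of Algorithm 1 with alpha(t) = t on a uniform grid, so the
    increment weight is w_i = dt and the tilted field of Eq. (11) is
    b + alpha(t) Y_t / (sigma^2 t) = b + Y_t / sigma^2."""
    rng = np.random.default_rng(seed)
    N = len(b)
    t = dt
    Y = rng.normal(scale=sigma * np.sqrt(t), size=N)   # Y_0 ~ N(0, t_0 sigma^2 I)
    for _ in range(T - 1):
        b_eff = b + Y / sigma**2                        # posterior tilt of Eq. (11)
        xs = glauber_posterior_samples(W, b_eff, beta, n, sweeps=2, rng=rng)
        U = xs.mean(axis=0)                             # estimated posterior mean
        Y = Y + dt * U + sigma * np.sqrt(dt) * rng.normal(size=N)
        t += dt
    return np.sign(Y)                                   # Y_T / alpha(t_T), rounded to spins
```

The outer loop is cheap; the cost is dominated by the inner posterior-estimation step, which is why the step-allocation strategy discussed in Section 6.2 matters.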
This fundamental difference necessitates theoretical guarantees to understand how SL affects the mixing properties of discrete MCMC samplers.

Physical Intuition. Before delving into rigorous theoretical analysis, we provide an intuitive understanding from a physical perspective. Our starting point is the following convergence result:

Theorem 3.1. Consider the observation process Y_t = α(t)X + σB_t. In the case α(t)/(σ√t) → +∞ as t → +∞, for arbitrary ζ > 0 and ε > 0, there is a T(ζ, ε) large enough such that for t ≥ T, the observation process satisfies P(|Y_t| ≥ ζ) ≥ 1 − ε.

Examine the linear-term coefficient in the posterior distribution (11), which serves as an effective external field h_t:

h_t = b + α(t)Y_t/(σ²t).   (12)

The behavior of this term is intrinsically linked to the stochastic localization process. Specifically, the dynamics of the localization variable Y_t are governed by the SL formulation arising from an observation process, as detailed in Section 2.3 (cf. Equation (8)). Theorem 3.1 provides a theoretical guarantee on the asymptotic behavior of this process under our chosen α(t) schedule. It proves that as t → T_gen, the term Y_t/α(t) → X ∈ {−1, 1}^N and the scaling factor α(t)²/(σ²t) → +∞. As a result, the magnitude of the effective external field |h_t| grows infinitely. In physical terms, this corresponds to applying an increasingly strong external field to an interacting system. For some statistical physics models, the presence of a strong external field generally enhances mixing properties compared to the zero-field
case (Martinelli et al., 1994; Martinelli & Olivieri, 1994; Kerimov, 2012). Intuitively, when this field becomes sufficiently large, the quadratic interaction term becomes negligible in comparison, and the posterior distribution is approximately proportional to:

q^α_t(x|y) ≈ e^{⟨x, b + α(t)Y_t/(σ²t)⟩}.

This simplifies to a product of independent Bernoulli distributions, which is significantly easier to sample. This property of a strong, growing external field is crucial for establishing favorable mixing properties of the posterior distribution at late times. A formal condition on the external field strength required for our subsequent theoretical guarantees is introduced later in Condition 4.1.

4. Theoretical Analysis

This section establishes theoretical guarantees for sampling from posterior distributions in the SL framework. The spectral gap γ_gap determines the convergence rate of an algorithm and takes a different form for each DMCMC sampler:

• 1 − |h| / (e^{3|h|/4} + e^{−3|h|/4}) for Glauber dynamics;
• 1 − 2|h| e^{−|h|} for classical Metropolis chains;
• 1 − |h| / (e^{|h|/4} + e^{−|h|/4})² for single-site gradient-informed MH algorithms;
• 1 − 4|h|N / (e^{|h|/4} + e^{−|h|/4})² for DULA (Zhang et al., 2022).

Notably, in all cases γ_gap increases with |h| and approaches 1 as |h| → +∞.

We organize this section as follows. Section 4.1 introduces the essential definitions and assumptions that underpin our analysis. Section 4.2 examines two warm-up cases, establishing Poincaré inequalities for classical Glauber dynamics and Metropolis-Hastings algorithms. Building on these foundations, Section 4.3 extends our theoretical guarantees to more sophisticated sampling algorithms. All proofs are deferred to Appendix C.

4.1. Setup and Key Assumptions

For simplicity, we represent the posterior distribution in Equation (11) using the following Gibbs measure:

ν_{β,h} = (1/Z̃_{β,h}) e^{−(β/2)⟨x, Wx⟩ + ⟨x, h⟩},   x ∈ {−1, 1}^N,   (13)

where β denotes the inverse temperature and Z̃_{β,h} is the partition function.
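The four spectral-gap lower bounds listed at the start of this section are easy to tabulate numerically. The snippet below (helper name and the sample field values are ours) confirms that each bound approaches 1 as |h| grows:

```python
import numpy as np

def gap_bounds(h, N=10):
    """Spectral-gap lower bounds as functions of the field magnitude |h|."""
    c = (np.exp(h / 4) + np.exp(-h / 4)) ** 2
    return {
        "glauber":    1 - h / (np.exp(3 * h / 4) + np.exp(-3 * h / 4)),
        "metropolis": 1 - 2 * h * np.exp(-h),
        "grad_mh":    1 - h / c,
        "dula":       1 - 4 * h * N / c,
    }

for h in (10.0, 20.0, 40.0):
    print(h, gap_bounds(h))
```

Note the DULA bound carries an extra factor of N, reflecting its simultaneous update of all sites, so a correspondingly larger |h| is needed for the same gap.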
Based on the physical intuition provided in Section 3, the external field h in Equation (13) grows to infinity, which motivates our fundamental assumption for all subsequent theoretical analysis:

Condition 4.1. The strong external field h satisfies

|h| ≥ 2β sup_{i∈[N]} ∑_{k≠i} |W_{ik}|.   (14)

Remark 4.2. Theorem 3.1 guarantees that this condition holds with high probability after sufficiently many SL iterations. Moreover, for low-temperature models (large inverse temperature β), this condition is naturally satisfied due to the dominance of the external field term.

Following Condition 4.1 and Theorem 3.1, setting ζ = 2β sup_{i∈[N]} ∑_{k≠i} |W_{ik}| and choosing a sufficiently small ε > 0, we can ensure that for t ≥ T(α, ε), the external field |Y_t| in Equation (11) is larger than ζ, and therefore exceeds 2β sup_{i∈[N]} ∑_{k≠i} |W_{ik}|, with probability at least 1 − ε. In subsequent sections, we will demonstrate that when the external field is sufficiently large, certain discrete MCMC samplers for the Gibbs distribution (13) satisfy Poincaré inequalities. This theoretical guarantee, combined with the high-probability bound on the external field, establishes that the Poincaré inequalities in Theorem 4.3 and Theorem 4.5 hold with probability close to 1, thereby ensuring polynomial-time mixing² for sampling from the posterior distribution (11).

4.2. Warm-up Cases: Poincaré Inequality for Glauber Dynamics and Metropolis Chains

In this subsection, we present our theoretical results through two fundamental examples, Glauber dynamics and Metropolis chains. Here, we derive explicit coefficients for the Poincaré inequality that depend on the external field h.

²As shown in Theorem 20.6 of Levin & Peres (2017),
stochastic dynamics satisfying a Poincaré inequality exhibit polynomial mixing time.

Glauber Dynamics. Glauber dynamics, or Gibbs sampling, provides a fundamental approach for sampling from BQDs. The algorithm iteratively updates each spin i by sampling from its conditional distribution:

ν_{β,h}(x_i | x_{∼i}) := e^{−(β/2)⟨x, Wx⟩ + ⟨x, h⟩} / ∑_{x_i = ±1} e^{−(β/2)⟨x, Wx⟩ + ⟨x, h⟩}.

For Glauber dynamics, we establish the following theorem with spectral gap γ_gap = 1 − |h| / (e^{3|h|/4} + e^{−3|h|/4}):

Theorem 4.3. For the Gibbs measure (13) satisfying the large-field Condition 4.1, the following Poincaré inequality holds:

Var_{ν_{β,h}}(f) ≤ [1 − |h| / (e^{3|h|/4} + e^{−3|h|/4})]^{−1} E_GD(f, f),   (15)

where E_GD(f, f) is the Dirichlet form (23) associated with Glauber dynamics on L²(ν_{β,h}).

Building upon the Poincaré inequality in Theorem 4.3, which quantifies the mixing rate of Glauber dynamics, we can further analyze the convergence properties of Monte Carlo estimators. Specifically, the spectral gap γ_gap plays a crucial role in deriving concentration inequalities for averages of samples generated by the dynamics. This leads to the following corollary, a Chernoff-type error bound for our Monte Carlo estimator of the posterior mean.

Corollary 4.4. Under the conditions of Theorem 4.3, for a sequence {X_1, X_2, …, X_n} sampled from Glauber dynamics, for all ε > 0 we have:

P_q[ |(1/n) ∑_{i=1}^n X_i − E_{ν_{β,h}}[X]| ≥ ε ] ≤ C_{γ_gap,n,q} e^{−nε²γ_gap/c},

where q is the initial distribution of the samples (X_1 ∼ q), the spectral gap is γ_gap = 1 − |h| / (e^{3|h|/4} + e^{−3|h|/4}), c is an absolute constant, and C_{γ_gap,n,q} is a rational function.

This error bound demonstrates that the Monte Carlo estimator achieves exponential error reduction with respect to the sample size n.

Metropolis Chains. The Metropolis-Hastings (MH) algorithm is a cornerstone of MCMC sampling methods.
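The Glauber conditional above reduces, for spins in {−1, 1}, to a sigmoid of the local field h_i − β ∑_{j≠i} W_{ij} x_j (the shared terms of numerator and denominator cancel). The check below verifies this closed form against brute-force enumeration of the two states of site i; helper names are ours:

```python
import numpy as np

def conditional_prob_plus(x, i, W, h, beta):
    """P(x_i = +1 | x_{~i}) under nu_{beta,h}: sigmoid of twice the local field."""
    local = h[i] - beta * (W[i] @ x - W[i, i] * x[i])   # h_i - beta * sum_{j!=i} W_ij x_j
    return 1.0 / (1.0 + np.exp(-2.0 * local))

def conditional_prob_plus_bruteforce(x, i, W, h, beta):
    """Same quantity computed directly from the unnormalized measure (13)."""
    def logp(z):
        return -0.5 * beta * z @ (W @ z) + h @ z
    xp = x.copy(); xp[i] = 1.0
    xm = x.copy(); xm[i] = -1.0
    wp, wm = np.exp(logp(xp)), np.exp(logp(xm))
    return wp / (wp + wm)

rng = np.random.default_rng(2)
N = 6
W = rng.normal(size=(N, N)); W = (W + W.T) / 2
h = rng.normal(size=N)
x = rng.choice([-1.0, 1.0], size=N)
```

This cancellation is exactly what makes single-site updates cheap: each step touches only one row of W.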
As a warm-up case, we analyze the simplified Metropolis chains (Martinelli, 1999) where, for each site i ∈ [N], the transition x → x^i occurs with probability:

P(x^i|x) = min{1, ν_{β,h}(x^i) / ν_{β,h}(x)} = min{1, e^{2βx_i ∑_{j≠i} W_{ij} x_j − 2h_i x_i}},

and the chain remains unchanged with probability:

1 − P(x^i|x) = 1 − min{1, e^{2βx_i ∑_{j≠i} W_{ij} x_j − 2h_i x_i}}.

For these Metropolis chains, we establish the following Poincaré inequality:

Theorem 4.5. For the Gibbs measure (13) satisfying the large-field Condition 4.1, the following Poincaré inequality holds:

Var_{ν_{β,h}}(f) ≤ [1 − 2|h| e^{−|h|}]^{−1} E_MH(f, f),   (16)

where E_MH(f, f) is the Dirichlet form (24) associated with the MH algorithm on L²(ν_{β,h}).

4.3. Poincaré Inequality for Advanced DMCMC Algorithms

While Section 4.2 provides warm-up cases for classical algorithms, here we analyze more sophisticated DMCMC samplers. We propose a general condition that not only encompasses these warm-up cases but also extends to the advanced gradient-informed proposals introduced in Section 2.2. Under this natural condition, we establish similar Poincaré inequalities for these sampling methods.

Single-Site Metropolis-Hastings Algorithms. Consider an MH algorithm that updates one site at a time with transition kernel:

P(x^i|x) = Ψ(x^i|x) min{1, ν_{β,h}(x^i) Ψ(x|x^i) / (ν_{β,h}(x) Ψ(x^i|x))},   (17)

remaining at the current state with probability:

1 − Ψ(x^i|x) min{1, ν_{β,h}(x^i) Ψ(x|x^i) / (ν_{β,h}(x) Ψ(x^i|x))},

where Ψ denotes the proposal distribution. This formulation generalizes both the GWG transition kernel (2) and the PAS kernel (3) in their single-site update variants.

Theorem 4.6. For the MH kernel (17), assume the proposal Ψ(x^i|x) is chosen such that P(x^i|x) is Lipschitz continuous
for all x = x̂ off j:

|P(x^i|x) − P(x̂^i|x̂)| ≤ C_Lip(β, h) |βW_{ij} x_i x_j|,

where C_Lip(β, h) decreases exponentially to 0 as the external field |h| tends to infinity. Then, for the Gibbs measure (13) satisfying the large-field Condition 4.1, the following Poincaré inequality holds:

Var_{ν_{β,h}}(f) ≤ [1 − C_Lip(β, h)|h|]^{−1} E_MH(f, f),   (18)

where E_MH(f, f) is the Dirichlet form (24) associated with the MH dynamics on L²(ν_{β,h}).

Remark 4.7. Judging from the warm-up cases above and the calculation examples below, for the measure ν_{β,h} we consider, the Lipschitz property can be expected to hold in general.

In particular, for gradient-informed proposals

Ψ(x^i|x) := (1/Z_{β,h}(x)) e^{(1/2)∇U(x)_i (x^i − x)},

where U(x) = −(β/2)⟨x, Wx⟩ + ⟨x, h⟩, we establish the following result:

Theorem 4.8. For the Gibbs measure (13) satisfying the large-field Condition 4.1, the following Poincaré inequality holds for the gradient-informed Metropolis-Hastings algorithm:

Var_{ν_{β,h}}(f) ≤ [1 − |h| / (e^{|h|/4} + e^{−|h|/4})²]^{−1} E_MH(f, f),   (19)

where E_MH(f, f) denotes the Dirichlet form associated with the MH algorithm on L²(ν_{β,h}).

DULA. The Discrete Unadjusted Langevin Algorithm (DULA) performs simultaneous updates across all sites. Its transition kernel takes the form:

P(x^{[N]}|x) = Ψ(x^{[N]}|x) = ∏_{i∈[N]} Ψ(x^i|x),   (20)

with probability 1 − Ψ(x^{[N]}|x) of remaining at the current state. The gradient-informed proposal for each site is given by:

Ψ(x^i|x) := (1/Z_{β,h}(x)) e^{(1/2)∇U(x)_i (x^i − x)} = (1/Z_{β,h}(x)) e^{−(β/2)⟨x, W(x^i − x)⟩ + (1/2)⟨x^i − x, h⟩} = (1/Z_{β,h}(x)) e^{βx_i ∑_{j≠i} W_{ij} x_j − h_i x_i},

which aligns with the proposal (4). For this algorithm, we establish:

Theorem 4.9. For the Gibbs measure (13) satisfying the large-field Condition 4.1, the following Poincaré inequality holds:

Var_{ν_{β,h}}(f) ≤ [1 − 4|h|N / (e^{|h|/4} + e^{−|h|/4})²]^{−1} E(f, f),   (21)

where

E(f, f) := (1/2) ∫ (f(x) − f(x^{[N]}))² ν_{β,h}(dx) P(dx^{[N]}|x)

is the Dirichlet form associated with DULA on L²(ν_{β,h}).
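For spins in {−1, 1}, the per-site proposal above makes DULA's product kernel (20) a vector of independent Bernoulli flips: the weight for x′_i = +1 versus x′_i = −1 is a sigmoid of the local field. A sketch with our own naming (DMALA would add the MH correction (5) on top):

```python
import numpy as np

def dula_step(x, W, h, beta, rng):
    """One DULA update: every site independently resamples its sign.

    The per-site proposal weight exp((1/2) grad_i * (x'_i - x_i)) over
    x'_i in {-1, +1} normalizes to a sigmoid of grad_i = h_i - beta*(Wx)_i
    (diagonal term excluded), matching the proposal aligned with Eq. (4).
    """
    local = h - beta * (W @ x - np.diag(W) * x)   # h_i - beta * sum_{j!=i} W_ij x_j
    p_plus = 1.0 / (1.0 + np.exp(-local))         # softmax over the two spin values
    u = rng.random(size=len(x))
    return np.where(u < p_plus, 1.0, -1.0)
```

Because all N sites move at once, rejection rates (and the bound in Theorem 4.9, with its extra factor of N) are more sensitive to N than for single-site kernels.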
Remark 4.10. Similar to Corollary 4.4, for Metropolis chains, single-site gradient-informed MH algorithms, and DULA, the corresponding Chernoff-type error bounds of the Monte Carlo estimator for the posterior mean can also be obtained. These follow from Theorems 4.5, 4.8, and 4.9, respectively. For a detailed exposition, please refer to Appendix C.8.

5. Related Work

Our work connects to two main research areas: discrete MCMC sampling and stochastic localization. Recent advances in discrete MCMC sampling have moved beyond traditional Gibbs sampling (Dai et al., 2020; Wang & Cho, 2019) to gradient-informed methods, including Gibbs With Gradients (Grathwohl et al., 2021), the Path Auxiliary Sampler (Sun et al., 2021), and the Discrete Langevin Algorithm (Zhang et al., 2022). These methods enhance sampling efficiency by incorporating local structure information into proposal distributions. Stochastic localization (SL) was initially developed for proving properties of measures (Eldan, 2013; 2020) and has recently emerged as a powerful sampling framework (El Alaoui et al., 2022; Grenioux et al., 2024). While SL has shown success in continuous domains through SLIPS (Grenioux et al., 2024) and in specific discrete models such as SK models (El Alaoui et al., 2022), theoretical analysis for general discrete distributions remains challenging due to their inherent structural constraints. For a comprehensive review of related work, please refer to Appendix A.

6. Empirical Results

In this section, we empirically validate the sampling performance of the SL framework. Following the DISCS benchmark (Goshvadi et al., 2024)³, we evaluate on three types of combinatorial
optimization problems with the form of binary quadratic distributions: maximum independent set, maximum cut, and maximum clique, each containing multiple diverse datasets (14 datasets in total). Detailed information about these problems and datasets is provided in Appendix D.1.

Experimental Settings. We employ three advanced discrete MCMC samplers: Gibbs with Gradients (GWG) (Grathwohl et al., 2021), the Path Auxiliary Sampler (PAS) (Sun et al., 2021), and the Discrete Metropolis-Adjusted Langevin Algorithm (DMALA) (Zhang et al., 2022), all incorporating Glauber dynamics and Metropolis-Hastings algorithms. To align with our theoretical framework, we implement GWG and PAS with single-site updates, and use the Metropolis-Hastings-adjusted version of DULA, that is, DMALA. For the SL implementation, we adopt the GEOM(2,1) α schedule and uniform SDE time discretization from SLIPS (Grenioux et al., 2024). Hyperparameters are tuned through a comprehensive search. For a fair comparison when estimating posterior expectations, SL uses identical parameters to the corresponding discrete MCMC sampler and maintains the same 10000 MCMC steps in total. Furthermore, motivated by our theoretical results showing that posterior distributions become easier to sample from as SL iterations proceed, the experimental results in Section 6.1 are obtained by exponentially decaying the allocation of the 10000 MCMC steps across SL iterations. The impact of this allocation strategy is analyzed in Section 6.2. Detailed experimental settings are provided in Appendix D.2.

³While MCMC samplers ideally generate i.i.d. samples from the target distribution after mixing, SL converges to a Dirac distribution of a single sample from the target distribution, making effective-sample-size metrics from DISCS (Goshvadi et al., 2024) unsuitable for performance evaluation. Therefore, we do not compare sampling algorithms on traditional physics models under this metric.

Table 1. Objective values (↑) on MIS benchmarks with different samplers and their SL variants. For each sampler, we compare the original method and its SL counterpart. Bold values indicate better performance between the paired methods.

             ER-DENSITY                                                 SATLIB
             R-0.05         R-0.10        R-0.20        R-0.25
GWG          104.219±1.63   61.750±0.90   34.125±0.48   27.813±0.63    419.063±14.29
SL-GWG       104.375±1.52   61.938±0.93   34.375±0.74   28.000±0.56    419.165±14.39
PAS          104.375±1.64   61.750±0.94   34.063±0.61   27.813±0.46    419.539±14.39
SL-PAS       104.531±1.48   61.906±0.88   34.375±0.55   28.031±0.59    419.707±14.35
DMALA        103.063±1.62   61.000±0.83   33.813±0.68   27.438±0.56    415.722±14.51
SL-DMALA     103.594±1.80   61.344±0.96   34.000±0.56   27.750±0.66    415.718±14.38

6.1. Main Results

In this section, we evaluate the SL framework against three discrete MCMC samplers (GWG, PAS, DMALA) on maximum independent set (MIS), maximum cut (MaxCut), and maximum clique (MaxClique) problems. Following the setting in DISCS (Goshvadi et al., 2024), for MIS we directly report the objective values, while for MaxCut and MaxClique we report the percentage of the baseline objective (obj/baseline × 100%), where baseline values are obtained from classic solvers collected by the DISCS benchmark. The best solution found during the entire sampling process is used for evaluation. To assess sampling performance, we implement SL using each MCMC sampler for posterior estimation, enabling a direct comparison: MCMC sampler vs. SL
+ MCMC sampler under identical MCMC steps.

Results on MIS. Table 1 shows the performance comparison on MIS problems across five datasets, including four Erdős-Rényi random graphs with different densities (0.05-0.25) and instances from the SATLIB dataset. For each MCMC sampler, our SL variant consistently achieves better or comparable objective values. The improvements are particularly noticeable on sparse graphs (R-0.05 and R-0.10), where SL-PAS outperforms baseline PAS by 0.15% and 0.25%, respectively. Among all methods, SL-PAS demonstrates the strongest overall performance, achieving the highest objective values on both the random graphs and the SATLIB instances.

Table 2. The percentage (%) (↑) of the solution provided by DISCS (Goshvadi et al., 2024) on MaxClique benchmarks. Bold values indicate better performance between paired methods.

             RB            TWITTER
GWG          87.509±6.19   100.000±0.00
SL-GWG       87.598±6.16   100.000±0.00
PAS          87.544±6.15   100.000±0.00
SL-PAS       87.649±6.14   100.000±0.00
DMALA        87.231±6.14   100.000±0.00
SL-DMALA     87.310±6.15   100.000±0.00

Results on MaxClique. Table 2 presents results on two MaxClique datasets: RB and TWITTER. On the RB dataset, all SL variants demonstrate consistent improvements over their MCMC counterparts, with SL-PAS achieving the highest ratio of 87.649%. For the TWITTER dataset, all methods achieve optimal solutions (100%), indicating its relative simplicity. The improvements on RB, while modest in magnitude, are consistent across all three MCMC samplers, further validating SL's effectiveness in enhancing sampling performance.

Results on MaxCut. Table 3 presents results on seven MaxCut datasets, including Erdős-Rényi (ER) graphs, Barabási-Albert (BA) graphs of varying sizes (256-1024 nodes), and OPTSICOM instances. SL variants consistently outperform their MCMC counterparts across most datasets, with particularly notable improvements on larger graphs.
SL-PAS achieves the best overall performance, showing improvements of up to 0.045% on BA-512 and 0.038% on BA-1024 compared to baseline PAS. On the OPTSICOM instances, all methods achieve optimal solutions.

The experimental results demonstrate the consistent effectiveness of our SL framework across different combinatorial optimization problems, datasets, and MCMC samplers. It is important to note that the DMCMC baselines employed are themselves highly effective and often perform near optimality on these challenging tasks, significantly outperforming commercial solvers like Gurobi (as shown in Table 3). The improvements shown by SL are achieved even when starting from these strong established methods, highlighting its ability to further boost performance. For comprehensive evaluation results, including the sampling trajectory, we refer readers to Appendix D.3.

Table 3. The percentage (%) (↑) of the solution provided by DISCS (Goshvadi et al., 2024) on MaxCut benchmarks. Bold values indicate better performance between paired methods.

             ER                                            BA                                            OPTSICOM
             256-300        512-600        1024-1100      256-300       512-600        1024-1100
GWG          101.881±1.76   100.144±0.12   100.098±0.14   99.947±0.06   100.850±0.48   101.544±0.38   100.000±0.00
SL-GWG       101.924±1.76   100.161±0.12   100.101±0.13   99.972±0.06   100.889±0.49   101.562±0.37   100.000±0.00
PAS          101.884±1.76   100.158±0.12   100.120±0.13   99.954±0.05   100.883±0.48   101.632±0.37   100.000±0.00
SL-PAS       101.920±1.76   100.174±0.12   100.123±0.14   99.975±0.05   100.928±0.48   101.670±0.37   100.000±0.00
DMALA        101.871±1.76   100.130±0.12   100.074±0.14   99.946±0.06   100.763±0.50   101.243±0.39   100.000±0.00
SL-DMALA     101.922±1.76   100.144±0.12   100.078±0.14   99.965±0.06   100.792±0.49   101.228±0.38   100.000±0.00

Figure 1. Ablation study comparing two design choices across MIS, MaxClique, and MaxCut datasets: (1) MCMC step allocation strategies (Exponential Decay vs. Identical) and (2) SDE time discretization methods (Uniform vs. Log-SNR). Hatched bars indicate the best-performing configuration for each algorithm-dataset combination.

6.2. Ablation Study

In this part, we conduct ablation studies on two aspects: (1) MCMC step allocation strategies (Exponential Decay vs. Identical) and (2) SDE time discretization methods (Uniform vs. Log-SNR), to validate the physical intuition proposed in Section 3 and the theoretical insights about the evolution of sampling difficulty established in Section 4.

Time Discretization Strategy. We compare two strategies for discretizing the SDE integration interval: uniform discretization (our default choice) and Log-SNR discretization proposed in SLIPS (Grenioux et al., 2024).
Under uniform discretization, time points are evenly spaced across the interval, while Log-SNR discretization allocates more SDE iterations to smaller time points, where posterior distributions are theoretically more challenging to sample from.

Impact of MCMC Step Allocation. Our theory suggests that sampling becomes easier as SL iterations proceed. To leverage this insight, we compare two strategies for allocating MCMC steps across SL iterations: exponential decay (our default choice) and uniform allocation. Under exponential decay, earlier iterations receive more MCMC steps for posterior estimation, while uniform allocation distributes steps equally.

As shown in Figure 1, Exp. Decay Allocation with Uniform Disc. achieves superior performance in most test cases, which aligns well with our physical intuition and theoretical analysis. However, we observe several interesting exceptions where alternative strategies perform marginally better, such as MAXCUT ER-256-300 with the PAS algorithm, where Exp. Decay Allocation with Log-SNR Disc. yields the best result, and some MIS SATLIB cases where Identical Allocation shows comparable performance. Despite these minor variations, the performance differences among the three strategies are generally modest (typically less than 1% in objective value or optimality gap), demonstrating the remarkable stability of the SL sampling framework across different configurations.

For more comprehensive ablation studies, including the impact of the α schedule in SL and the influence of hyperparameters on sampling performance, we refer readers to Appendix D.4, where we provide detailed experimental results and analyses.
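The exponential-decay step allocation compared above can be sketched as follows; the decay rate 0.7 is a hypothetical value for illustration, not the paper's setting:

```python
import numpy as np

def allocate_mcmc_steps(total_steps, n_iters, decay=0.7):
    """Split a total MCMC budget across SL iterations.

    decay < 1 gives earlier (theoretically harder) posteriors more steps;
    decay = 1 recovers the identical (uniform) allocation.
    """
    w = decay ** np.arange(n_iters, dtype=float)
    steps = np.floor(total_steps * w / w.sum()).astype(int)
    steps[0] += total_steps - steps.sum()   # hand the rounding remainder to iteration 0
    return steps

print(allocate_mcmc_steps(10000, 10))
```

Any schedule of this shape preserves the total budget, so the comparison between allocation strategies isolates where the steps are spent rather than how many are spent.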
7. Conclusion and Discussion

In this work, we have established the first theoretical framework for applying stochastic localization (SL) to binary quadratic distributions (BQDs), proving that the posterior distributions constructed by SL satisfy Poincaré inequalities with high probability after a certain number of iterations. Our theoretical
guarantees are particularly general, encompassing both Glauber dynamics and Metropolis-Hastings based discrete MCMC samplers, without restrictive assumptions on the underlying distributions. Extensive experiments on QUBO problems demonstrate that SL consistently improves the sampling efficiency of various discrete MCMC samplers, providing strong empirical support for our theoretical results.

These results also open several directions for future research. First, extending our framework beyond BQDs to distributions of unknown form (in particular, deep energy-based models) remains challenging and may require efficient second-order Taylor approximations. Second, while our current theoretical analysis provides mixing guarantees for the posterior sampling step within SL, establishing rigorous guarantees for the convergence rate of the overall SL process to the final target distribution is a crucial and challenging problem that warrants future investigation. Third, generalizing our framework to discrete variables with more than two states, moving beyond the binary case, is an important direction.

Acknowledgments

This work was supported by the National Key R&D Program of China under grant 2022YFA1003900, the National Natural Science Foundation of China (No. 12401666, 12326611, 12426303), the Guangdong Provincial Key Laboratory of Mathematical Foundations for Artificial Intelligence (2023B1212010001), and the Fundamental Research Funds for the Central Universities, Nankai University (No. 054-63241437).

Impact Statement

This work has significant implications for both theoretical research and practical applications in discrete optimization and probabilistic inference. Our theoretical guarantees for SL in discrete domains open new avenues for developing more efficient sampling algorithms.
The demonstrated im- provements in sampling efficiency across various discrete MCMC samplers suggest potential applications in diverse fields, including statistical physics, combinatorial optimiza- tion, and machine learning. As discrete optimization prob- lems continue to arise in emerging technologies such as quantum computing and molecular design, the practical impact of our theoretical advances is expected to grow sig- nificantly. References Alaoui, A. E., Montanari, A., and Sellke, M. Sampling from mean-field gibbs measures via diffusion processes. arXiv preprint arXiv:2310.08912 , 2023a. Alaoui, A. E. K., Eldan, R., Gheissari, R., and Piana, A. Fast relaxation of the random field ising dynam- ics. ArXiv , abs/2311.06171, 2023b. URL https: //api.semanticscholar.org/CorpusID: 265128744 . Anari, N., Koehler, F., and Vuong, T.-D. Trickle-down in localization schemes and applications. In Proceedings of the 56th Annual ACM Symposium on Theory of Comput- ing, pp. 1094–1105, 2024. Bauerschmidt, R. and Bodineau, T. A very simple proof of the lsi for high temperature spin systems. Journal of Functional Analysis , 276(8):2582–2588, 2019. Bauerschmidt, R. and Dagallier, B. Log-sobolev inequality for near critical ising models. Communications on Pure and Applied Mathematics , 77(4):2568–2576, 2024. Bertoin, J., Martinelli, F., Peres, Y ., and Martinelli, F. Lec- tures on glauber dynamics for discrete spin models. Lec- tures on Probability Theory and Statistics: Ecole d’Et ´e de Probailit ´es de Saint-Flour XXVII-1997 , pp. 93–191, 1999. Chen, W., Zhang, M., Paige, B., Hern ´andez-Lobato, J. M., and Barber, D. Diffusive gibbs sampling. arXiv preprint arXiv:2402.03008 , 2024. Chen, Y . An almost constant lower bound of the isoperi- metric coefficient in the kls conjecture. Geometric and Functional Analysis , 31:34–61, 2021. 9 Sampling from Binary Quadratic
Distributions via Stochastic Localization Chen, Y . and Eldan, R. Localization schemes: A frame- work for proving mixing bounds for markov chains. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS) , pp. 110–122. IEEE, 2022. Dai, H., Singh, R., Dai, B., Sutton, C., and Schuurmans, D. Learning discrete energy-based models via auxiliary- variable local exploration. Advances in Neural Informa- tion Processing Systems , 33:10443–10455, 2020. Dobruschin, P. The description of a random field by means of conditional probabilities and conditions of its regu- larity. Theory of Probability & Its Applications , 13(2): 197–224, 1968. Dobrushin, R. L. Prescribing a system of random variables by conditional distributions. Theory of Probability & Its Applications , 15(3):458–486, 1970. Dubhashi, D. P. and Panconesi, A. Concentration of measure for the analysis of randomized algorithms . Cambridge University Press, 2009. Dwivedi, R., Chen, Y ., Wainwright, M. J., and Yu, B. Log- concave sampling: Metropolis-hastings algorithms are fast. Journal of Machine Learning Research , 20(183): 1–42, 2019. El Alaoui, A. and Montanari, A. An information-theoretic view of stochastic localization. IEEE Transactions on Information Theory , 68(11):7423–7426, 2022. El Alaoui, A., Montanari, A., and Sellke, M. Sampling from the sherrington-kirkpatrick gibbs measure via algorithmic stochastic localization. In 2022 IEEE 63rd Annual Sym- posium on Foundations of Computer Science (FOCS) , pp. 323–334. IEEE, 2022. Eldan, R. Thin shell implies spectral gap up to polylog via a stochastic localization scheme. Geometric and Func- tional Analysis , 23(2):532–569, 2013. Eldan, R. Taming correlations through entropy-efficient measure decompositions with applications to mean-field approximation. Probability Theory and Related Fields , 176(3):737–755, 2020. Eldan, R. and Shamir, O. Log concavity and concentration of lipschitz functions on the boolean hypercube. 
Journal of functional analysis , 282(8):109392, 2022. Eldan, R., Koehler, F., and Zeitouni, O. A spectral condition for spectral gap: fast mixing in high-temperature ising models. Probability theory and related fields , 182(3): 1035–1051, 2022. Gillman, D. A chernoff bound for random walks on ex- pander graphs. SIAM Journal on Computing , 27(4):1203– 1220, 1998.Glover, F., Kochenberger, G., Hennig, R., and Du, Y . Quan- tum bridge analytics i: a tutorial on formulating and using qubo models. Annals of Operations Research , 314(1): 141–183, 2022a. Glover, F., Kochenberger, G., Ma, M., and Du, Y . Quantum bridge analytics ii: Qubo-plus, network optimization and combinatorial chaining for asset exchange. Annals of Operations Research , 314(1):185–212, 2022b. Goshvadi, K., Sun, H., Liu, X., Nova, A., Zhang, R., Grath- wohl, W., Schuurmans, D., and Dai, H. Discs: a bench- mark for discrete sampling. Advances in Neural Informa- tion Processing Systems , 36, 2024. Grathwohl, W., Swersky, K., Hashemi, M., Duvenaud, D., and Maddison, C. Oops i took a gradient: Scalable sam- pling for discrete distributions. In International Con- ference on Machine Learning , pp. 3831–3841. PMLR, 2021. Grenioux, L., Noble, M., Gabri ´e, M., and Durmus, A. O. Stochastic localization via iterative posterior sampling. In Forty-first International Conference on Machine Learn- ing, ICML 2024, Vienna, Austria, July 21-27, 2024 , 2024. Ho, J., Jain, A., and Abbeel, P. Denoising diffusion proba- bilistic models.
Advances in neural information process- ing systems , 33:6840–6851, 2020. Huang, B., Montanari, A., and Pham, H. T. Sampling from spherical spin glasses in total variation via algorithmic stochastic localization. arXiv preprint arXiv:2404.15651 , 2024. Huang, X., Dong, H., Yifan, H., Ma, Y ., and Zhang, T. Re- verse diffusion monte carlo. In The Twelfth International Conference on Learning Representations , 2023. Hubbard, J. Calculation of partition functions. Physical Review Letters , 3(2):77, 1959. Kerimov, A. The one-dimensional long-range ferromagnetic ising model with a periodic external field. Physica A: Statistical Mechanics and its Applications , 391(10):2931– 2935, 2012. Kirkpatrick, S., Gelatt Jr, C. D., and Vecchi, M. P. Opti- mization by simulated annealing. science , 220(4598): 671–680, 1983. Koehler, F., Lee, H., and Risteski, A. Sampling approxi- mately low-rank ising models: Mcmc meets variational methods. In Conference on Learning Theory , pp. 4945– 4988. PMLR, 2022. Langley, P. Crafting papers on machine learning. In Langley, P. (ed.), Proceedings of the 17th International Conference 10 Sampling from Binary Quadratic Distributions via Stochastic Localization on Machine Learning (ICML 2000) , pp. 1207–1216, Stan- ford, CA, 2000. Morgan Kaufmann. Levin, D. A. and Peres, Y . Markov chains and mixing times , volume 107. American Mathematical Soc., 2017. Lezaud, P. Chernoff-type bound for finite markov chains. Annals of Applied Probability , pp. 849–867, 1998. Liptser, R. S. and Shiryaev, A. N. Statistics of random processes: I. General theory , volume 5. Springer Science & Business Media, 2013. Lucas, A. Ising formulations of many np problems. Fron- tiers in physics , 2:5, 2014. Martinelli, F. Lectures on glauber dynamics for discrete spin models. Lectures on probability theory and statistics (Saint-Flour, 1997) , 1717:93–191, 1999. Martinelli, F. and Olivieri, E. Approach to equilibrium of glauber dynamics in the one phase region: I. 
the attractive case. Communications in Mathematical Physics , 161(3): 447–486, 1994. Martinelli, F., Olivieri, E., and Schonmann, R. H. For 2-d lattice spin systems weak mixing implies strong mixing. Communications in Mathematical Physics , 165(1):33–47, 1994. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. Equation of state calculations by fast computing machines. The journal of chemical physics , 21(6):1087–1092, 1953. Montanari, A. Sampling, diffusions, and stochastic localiza- tion. arXiv preprint arXiv:2305.10690 , 2023. Montanari, A. and Wu, Y . Posterior sampling from the spiked models via diffusion processes. arXiv preprint arXiv:2304.11449 , 2023. Nishimori, H. Statistical physics of spin glasses and in- formation processing: an introduction . Number 111. Clarendon Press, 2001. Rhodes, B. and Gutmann, M. Enhanced gradient- based mcmc in discrete spaces. arXiv preprint arXiv:2208.00040 , 2022. Roberts, G. O. and Tweedie, R. L. Exponential convergence of langevin distributions and their discrete approxima- tions. 1996. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF con- ference on computer vision and pattern recognition , pp. 10684–10695, 2022.Sanokowski, S., Berghammer, W., Hochreiter, S., and Lehner, S. Variational annealing on graphs for combi- natorial optimization. Advances in Neural Information Processing Systems , 36:63907–63930, 2023. Sanokowski, S.,
Hochreiter, S., and Lehner, S. A diffusion model framework for unsupervised neural combinatorial optimization. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024 , 2024. Sly, A. and Sun, N. The computational hardness of counting in two-spin models on d-regular graphs. In 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science , pp. 361–369. IEEE, 2012. Song, J., Meng, C., and Ermon, S. Denoising diffusion im- plicit models. arXiv preprint arXiv:2010.02502 , 2020a. Song, Y ., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Er- mon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456 , 2020b. Sun, H., Dai, H., Xia, W., and Ramamurthy, A. Path auxil- iary proposal for mcmc in discrete space. In International Conference on Learning Representations , 2021. Sun, H., Guha, E. K., and Dai, H. Annealed training for combinatorial optimization on graphs. In OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Work- shop) , 2022. Sun, H., Dai, B., Sutton, C., Schuurmans, D., and Dai, H. Any-scale balanced samplers for discrete space. In The Eleventh International Conference on Learning Repre- sentations , 2023a. Sun, H., Dai, H., Dai, B., Zhou, H., and Schuurmans, D. Discrete langevin samplers via wasserstein gradient flow. InInternational Conference on Artificial Intelligence and Statistics , pp. 6290–6313. PMLR, 2023b. Titsias, M. K. and Yau, C. The hamming ball sampler. Journal of the American Statistical Association , 112(520): 1598–1611, 2017. Wang, A. and Cho, K. Bert has a mouth, and it must speak: Bert as a markov random field language model. arXiv preprint arXiv:1902.04094 , 2019. Wu, L. Poincar ´e and transportation inequalities for gibbs measures under the dobrushin uniqueness condition. The Annals of Probability , 34(5):1960–1989, 2006. Zanella, G. Informed proposals for local mcmc in discrete spaces. 
Journal of the American Statistical Association , 115(530):852–865, 2020. 11 Sampling from Binary Quadratic Distributions via Stochastic Localization Zhang, R., Liu, X., and Liu, Q. A langevin-like sampler for discrete distributions. In International Conference on Machine Learning , pp. 26375–26396. PMLR, 2022. Zhang, Y ., Ghahramani, Z., Storkey, A. J., and Sutton, C. Continuous relaxations for discrete hamiltonian monte carlo. Advances in Neural Information Processing Sys- tems, 25, 2012. 12 Sampling from Binary Quadratic Distributions via Stochastic Localization A. Related Work Discrete MCMC Samplers The Gibbs sampling algorithm, which is equivalent to Glauber dynamics in statistical physics, has been widely used to study statistical physics models (refs) and train deep energy-based models (Dai et al., 2020; Wang & Cho, 2019). To improve sampling efficiency beyond traditional Gibbs sampling, recent works have focused on locally balanced proposals (Zanella, 2020) that leverage local information such as gradient-like quantities. These methods modify the proposal distribution by incorporating local structure of the target distribution, leading to more informed transitions. Notable examples include Gibbs With Gradients (Grathwohl et al., 2021), which approximates probability ratios using well-defined gradient information in discrete space, Path Auxiliary Sampler (Sun et al., 2021), which constructs auxiliary paths to enable long-range transitions while maintaining detailed balance, and Metropolis-Adjusted Langevin Algorithm (Zhang et al., 2022) also introduces
the gradient information and updates all dimensions in parallel by factorizing the joint distribution. When second-order information of the target distribution is available, several works (Zhang et al., 2012; Rhodes & Gutmann, 2022; Sun et al., 2023a) leverage the Gaussian integral trick (Hubbard, 1959) to transform discrete sampling into continuous sampling, enabling the use of numerous well-developed continuous MCMC samplers. Stochastic Localization Stochastic localization (SL) (Eldan, 2013; 2020; Chen, 2021; Eldan et al., 2022; Eldan & Shamir, 2022) is a recent method for proving properties of measures such as concentration inequalities and measure decompositions. Chen & Eldan (2022) further developed it into a technique for studying the mixing times of Markov chains. Additionally, it naturally yields a sampling framework that has attracted considerable research interest (El Alaoui et al., 2022; Montanari, 2023; Grenioux et al., 2024; Anari et al., 2024). The main challenge of sampling via SL lies in the Bayes estimation of the posterior expectation. For discrete targets, El Alaoui et al. (2022) first used SL to sample from the Sherrington-Kirkpatrick (SK) model, where the posterior estimate is obtained by an approximate message passing (AMP) algorithm together with a procedure that minimizes the Thouless-Anderson-Palmer (TAP) free energy via natural gradient descent (NGD). This approach was subsequently extended to mean-field models (Alaoui et al., 2023a), spiked models (Montanari & Wu, 2023), and spherical spin glasses (Huang et al., 2024). For a certain class of non-log-concave continuous distributions, SLIPS (Grenioux et al., 2024) samples from q_t via the Metropolis-Adjusted Langevin Algorithm (Roberts & Tweedie, 1996) to estimate the posterior mean. In the continuous case, log-concavity of the target implies easier sampling (Dwivedi et al., 2019), and SLIPS proves a duality with log-concavity to show its efficiency.
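To make the SL sampling recipe concrete, the following is a minimal sketch on a toy BQD in which the posterior expectation is computed by exact enumeration rather than AMP, TAP minimization, or MALA. The problem size, inverse temperature, step size, horizon, and all function names are illustrative choices for this sketch, not settings from the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, beta = 4, 0.3
W = rng.normal(size=(n, n)); W = (W + W.T) / 2
h = rng.normal(size=n)

# Enumerate the 2^n states of the toy BQD ν(x) ∝ exp(-β/2 ⟨x,Wx⟩ + ⟨x,h⟩).
states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
log_nu = -0.5 * beta * np.einsum('si,ij,sj->s', states, W, states) + states @ h

def tilted(y):
    # ν_t(x) ∝ ν(x) exp(⟨y,x⟩); the Gaussian tilt -t|x|²/2 is constant on {-1,1}^n.
    logw = log_nu + states @ y
    w = np.exp(logw - logw.max())
    return w / w.sum()

# Euler-Maruyama discretization of the observation process dy_t = m(ν_t) dt + dB_t.
dt, T, y = 0.01, 20.0, np.zeros(n)
for _ in range(int(T / dt)):
    m = tilted(y) @ states            # posterior mean m(ν_t), exact by enumeration
    y += m * dt + np.sqrt(dt) * rng.normal(size=n)

p_T = tilted(y)
print(p_T.max())
```

At a large time the tilted measure puts nearly all of its mass on a single corner of the hypercube, whose law is the target: this is the localization behavior that the SL sampling framework exploits.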
However, for binary distributions, quadratic term will collapse into constant ( x2= 1, x∈ {− 1,1}n) or linear term ( x2=x, x∈ {0,1}n), which poses difficulties for theoretical analysis. B. Background In this section we will outline some background of stochastic dynamics of algorithms and our theoretical techniques. B.1. Fundamental Definitions of Stochastic Localization A stochastic localization process (νt)tis defined as dνt dν(x) =Ft(x), where the functions Ftsolve following stochastic differential equations: F0(x) = 1 , dFt(x) =Ft(x)⟨x−m(νt), CtdBt⟩,∀x, (22) where m(ν) :=R xν(dx),(Ct)tis an adapted process which takes values in the space of n×npositive-definite matrices, (Bt)t≥0is a standard Brownian motion on Rn. We can also use linear-tilt form describing stochastic localization process νt(dx) ν(dx)=eZt−1 2⟨Σtx,x⟩+⟨yt,x⟩, where Σt=Rt 0C2 sds,yt=Rt 0 CsdBs+C2 sm(νs)ds andZtis a normalizing constant. A key property is that for any measurable set A⊂Ω, stochastic localization process νt(A)almost surely converges to either 0or1ast→ ∞ and the concentrated point is distributed according to the law ν. 13 Sampling from Binary Quadratic Distributions via Stochastic Localization Lemma B.1 (Proposition 10 of (Eldan & Shamir, 2022)) .Letat=m(νt), the process atalmost surely converges to a point in Ω, and a∞:= lim t→∞atis distributed according to the law ν. Moreover, the measure νtalmost-surely weakly converges to a Dirac measure at a∞. Lemma B.1 suggest samples a∞∼νcan be obtained by a stochastic localization process. B.2. Generator and Dirichlet Form of Glauber Dynamics and Metropolis-Hastings Algorithms Consider (E, d)is a polish
space equipped with the Borel field B, and the Gibbs measures νon spin space ET. Glauber Dynamics is a Markov process of pure jumps. If the configuration at present is x, then at each site i, it will change to xiaccording to the conditional distribution ν(xi|x∼i). The generator of Glauber dynamics is LGDf(x) =X i∈T(Eν[f(X)|X∼i=x∼i]−f(x)). And the associated Dirichlet form of Glauber dynamics is EGD(f, f) =EνX i∈T(Eν[f(X)|X∼i]−f(X))2. (23) Metropolis-Hastings Algorithms is a reversible Markov chain with transition function: P(y|x) = Ψ( y|x) min{1,ν(y)Ψ(x|y) ν(x)Ψ(y|x)},ifx̸=y, and P(y|x) = 1−X x̸=yΨ(y|x) min{1,ν(y)Ψ(x|y) ν(x)Ψ(y|x)},ifx=y, where Ψis the proposal distribution. The generator of Metropolis-Hastings algorithms is LMHf(x) =Z ETP(y|x)(f(y)−f(x)). And the associated Dirichlet form of Metropolis-Hastings algorithms is EMH(f, f) =1 2Z ET×ETν(dx)P(dy|x) (f(x)−f(y))2. (24) B.3. Dobrushin Interdependence Matrix Interdependence matrix serves as a fundamental tool in our analysis, providing a rigorous way to characterize dependency relationships among random variables. This concept, deeply rooted in probability theory and statistical physics, allows us to quantify the strength of interactions between variables. We now present its formal definition. Lp-Wasserstein Distance Let(Ω, d)be a Polish space equipped with the Borel field B. Forp≥1, we denote by Pd, p(Ω) the space of probability measures on (Ω,B)with finite p-th moment. For any ν1, ν2∈ Pd, p(Ω), theLp-Wasserstein distance is defined as Wp,d(ν1, ν2) := inf πZ Z Ω×Ωd(x, y)pπ(dx, dy )1/p , (25) where the infimum is taken over all probability measures πonΩ×Ωwith marginals ν1andν2respectively. In the special case where d(x, y) =1x̸=y, we have W1,d(ν1, ν2) = sup A∈B|ν1(A)−ν2(A)|=1 2∥ν1−ν2∥TV. 14 Sampling from Binary Quadratic Distributions via Stochastic Localization d-Dobrushin Interdependence Matrix Fori∈[N], letνi(·|x)denote the conditional distribution of xigiven x∼i. 
The d-Dobrushin interdependence matrix C:= (cij)i,j∈[N]is defined by cij:= sup x=yoffjW1,d(νi(· |x), νi(· |y)) d(xj, yj), i, j ∈[N], (26) where each entry cijquantifies the influence of site jon site i. The classical Dobrushin uniqueness condition (Dobruschin, 1968; Dobrushin, 1970), supiP jcij<1,provides a sufficient condition for the uniqueness of Gibbs measures in spin glasses. For a more general generator on {−1,1}Ngiven by Lf(x) =X S⊂[N]Z −1,1SJS(x, dz S)(f(xS)−f(x)), (27) where JSis a bounded nonnegative kernel representing the local jump rate, the Dobrushin interdependence matrix takes the form cij:=X i∈ScS(j). (28) Here, cS(j)≥0is the optimal constant satisfying sup x=yoffj1 d(xj, yj)|Z ESg(zS)(JS(x, dz S)−JS(y, dz S))| ≤cS(j)X i∈Sδi(g) for all Lipschitz continuous functions g, where δi(g) := supx=yoffi|g(y)−g(x)| d(yi,xi). When dis the trivial metric, we have cS(j)≤1 2sup x=yoff j∥JS(x,·)−JS(y,·)∥TV. B.4. Poincar ´e Inequalities for Gibbs Measures under the Dobrushin Uniqueness Condition In this subsection, we present the mathematical techniques employed in our study. For stochastic dynamics with a given generator, Wu (2006) derives a sharp estimate of the spectral gap (or Poincar ´e constant) through the spectral radius analysis of the Dobrushin interdependence matrix. Consider the generator Lf:=X i∈T[νi(f)−f], where νi:=νi(dxi|x)is the local specification of Gibbs measure νonET, that is, for each i∈Tthe conditional distribution ofxiknowing xT\{i}coincides with the given νi(·|x). It generates a Glauber dynamics. Then we have following Poincar ´e inequality for Glauber dynamics. Lemma B.2 (Theorem 2.1 of (Wu, 2006)) .Letrsp(C)be the spectral radius of the
Dobrushin interdependence matrix C= (cij)i,j∈Tdefined in (26) (which is an eigenvalue of C by the Perron–Frobenius theorem). If rsp(C)<1, then (1−rsp(C))ν(f, f)≤EνX i∈Tνi(f, f)∀f∈L2(ET, ν), where ν(f, g)denotes the covariance of f,gunder ν, and νi(f, g) =νi(fg)−νi(f)νi(g)is the conditional covariance of f,gunder νiwithxT\{i}fixed. For more general generator (27) Lf(x) =X S⊂NZ ESJS(x, dz S)(f(xS)−f(x)), 15 Sampling from Binary Quadratic Distributions via Stochastic Localization Assume that for each S⊂Tand for every j∈T, there is some finite optimal constant CS(j)≥0, sup x=yoffj1 d(xj, yj)|Z ESg(zS) (JS(x, dz S))−JS(y, dz S)| ≤cS(j)X i∈Sδi(g) for all Lipschitz continuous function gonET. And δi(f) := sup x=yoffi|f(y)−f(x)| d(yi, xi). Lemma B.3 (Theorem 3.1 and Corollary 3.5 of (Wu, 2006)) .Let η:= inf x∈ETinf i∈TX S∋iJS(x, ES). For Dobrushin interdependence matrix Cdefined in (28), assume rsp(C)<1. Then Markov semigroup Pt=etLhas a unique invariant measure νsuch thatR ETd(x, y)ν(dy)<+∞for every x. Moreover, for each x∈ET, W1,d(Pt(x,·), ν)≤e−ηtmax jX i(etC)ijZ ETd(x, y)ν(dy)≤e−t(η−∥C∥1)Z ETd(x, y)ν(dy). C. Proofs for Main Theorems General Proof Sketch. Our proof of the Poincar ´e inequality follows three key steps. • First, we construct the d-Dobrushin interdependence matrix Cfrom the algorithm’s transition kernel; • We then establish upper bounds on the matrix entries cij, which leads to a bound on the spectral radius of C. • Finally, we leverage this spectral bound to establish the Poincar ´e inequality. C.1. Proof for Theorem 3.1 Proof. Define the Gaussian integration Φ(x) =1√ 2πZx −∞e−y2 2dy. By definition P(|Yt| ≥ζ) =P(|α(t)X+σBt| ≥ζ) =P(|Bt±α(t) σ| ≥ζ σ) =1− Φζ σ√ t±α(t) σ√ t −Φ −ζ σ√ t±α(t) σ√ t . Obviously, 0≤Φζ σ√ t±α(t) σ√ t −Φ −ζ σ√ t±α(t) σ√ t ≤Φζ σ√ t−α(t) σ√ t ≤Φζ σ−α(t) σ√ t . Becauseα(t) σ√ t→+∞, there is a Tlarge enough, such that for t≥T, we have 0≤Φζ σ−α(t) σ√ t ≤ε. Finally, we know that for arbitrary ζ >0andε >0, when t≥T(α, ε) P(|Yt| ≥ζ)≥1−ε. 
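The three-step proof sketch above (construct the Dobrushin matrix, bound its entries, bound its spectral radius) can be checked numerically on small instances: on {-1,1}^n with the trivial metric, each entry c_ij is a supremum over finitely many configurations of a difference of conditional probabilities, so it can be computed by brute-force enumeration. The sketch below is illustrative only; the function names and the small random instance are not from the paper.

```python
import itertools
import numpy as np

def cond_p_plus(i, x, W, h, beta):
    """P(x_i = +1 | x_{~i}) under ν_{β,h}(x) ∝ exp(-β/2 ⟨x,Wx⟩ + ⟨x,h⟩)."""
    xp, xm = x.copy(), x.copy()
    xp[i], xm[i] = 1.0, -1.0
    lp = -0.5 * beta * xp @ W @ xp + h @ xp
    lm = -0.5 * beta * xm @ W @ xm + h @ xm
    return 1.0 / (1.0 + np.exp(lm - lp))

def dobrushin_matrix(W, h, beta):
    """Exact d-Dobrushin matrix for the trivial metric, by enumeration:
    c_ij = sup over x = y off j of |P(x_i=+1 | x) - P(x_i=+1 | y)|."""
    n = len(h)
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for rest in itertools.product([-1.0, 1.0], repeat=n):
                x = np.array(rest)
                y = x.copy()
                y[j] = -y[j]
                diff = abs(cond_p_plus(i, x, W, h, beta) - cond_p_plus(i, y, W, h, beta))
                C[i, j] = max(C[i, j], diff)
    return C

rng = np.random.default_rng(0)
n = 3
W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
h = rng.normal(size=n)
C = dobrushin_matrix(W, h, beta=0.1)
rsp = np.max(np.abs(np.linalg.eigvals(C)))
print(rsp)  # at high temperature (small β) the spectral radius stays below 1
```

When rsp(C) < 1, Lemma B.2 yields the Poincaré inequality with constant 1/(1 - rsp(C)) for the corresponding Glauber dynamics.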
16 Sampling from Binary Quadratic Distributions via Stochastic Localization C.2. Proof for Theorem 4.3 Proof. The conditional measure ν(xi|x∼i)is νβ,h(xi|x∼i) =e−β 2⟨x,Wx⟩+⟨x,h⟩ P xi=±1e−β 2⟨x,Wx⟩+⟨x,h⟩ =e−β 2P k̸=iWikxixk−β 2Wii−β 2P k,j̸=iWjkxjxk+hixi+P k̸=ihkxk P xi=±1e−β 2P k̸=iWikxixk−β 2Wii−β 2P k,j̸=iWjkxjxk+hixi+P k̸=ihkxk =e−β 2P k̸=iWikxixk+hixi e−β 2P k̸=iWikxk+hi+eβ 2P k̸=iWikxk−hi. Because W1,d(νβ,h(xi|x∼i), νβ,h(xi|ˆx∼i)) = sup A∈B|νβ,h(A|x∼i)−νβ,h(A|ˆx∼i)|, where B={∅,{+1,−1},{+1},{−1}}. For A={+1}, |νβ,h(xi= +1|x∼i)−νβ,h(xi= +1|ˆx∼i)| = e−β 2P k̸=i,jWikxk+hi−β 2Wijxj 2 cosh −β 2P k̸=i,jWikxk+hi−β 2Wijxj−e−β 2P k̸=i,jWikxk+hi−β 2Wijˆxj 2 cosh −β 2P k̸=i,jWikxk+hi−β 2Wijˆxj , and for A={−1}, |νβ,h(xi=−1|x∼i)−νβ,h(xi=−1|ˆx∼i)| = eβ 2P k̸=i,jWikxk−hi+β 2Wijxj 2 cosh −β 2P k̸=i,jWikxk+hi−β 2Wijxj−eβ 2P k̸=i,jWikxk−hi+β 2Wijˆxj 2 cosh −β 2P k̸=i,jWikxk+hi−β 2Wijˆxj . Consider function g1(x) =ex 2 cosh( x), g2(x) =e−x 2 cosh( x), we know that g′ 1(x) =−g′ 2(x) =2 (ex+e−x)2. Taking h0≥0such that βsup i∈TX k̸=i|Wik| ≤h0, then β 2X k̸=i,jWikxk+β 2Wijxj ≤β 2sup i∈TX k̸=i|Wik| ≤h0 2. Hence, for h= 2h0 3h0 2=h−h0 2≤ −β 2X k̸=i,jWikxk−β 2Wijxj+h≤h0 2+h, 17 Sampling from Binary Quadratic Distributions via Stochastic Localization and for h=−2h0 h−h0 2≤ −β 2X k̸=i,jWikxk−β 2Wijxj+h≤h0 2+h=−3h0 2. Putting everything together we get for |h|= 2h0 −β 2X k̸=i,jWikxk−β 2Wijxj+h ≥3h0 2. and g′ 1 −β 2X
k̸=i,jWikxk−β 2Wijxj+h ≤2 e3h0 2+e−3h0 2=2 e3|h| 4+e−3|h| 4. Hence, for 2βsup i∈TX k̸=i|Wik| ≤ |h|, we get W1,d(νβ,h(xi|x∼i), νβ,h(xi|ˆx∼i)) = sup A∈B|νβ,h(A|x∼i)−νβ,h(A|ˆx∼i)| ≤2 e3|h| 4+e−3|h| 4 β 2Wij(xj−ˆxj) ≤2β|Wij| e3|h| 4+e−3|h| 4, which implies cij:= sup x=yoffjW1,d(νi(· |x), νi(· |y)) d(xj, yj)≤2β|Wij| e3|h| 4+e−3|h| 4, i, j∈[N]. Hence, ∥C∥∞= sup iX j∈[N]cij≤sup iX j∈[N]2β|Wij| e3|h| 4+e−3|h| 4≤|h| e3|h| 4+e−3|h| 4. Therefore rsp(C)≤ ∥C∥∞≤|h| e3|h| 4+e−3|h| 4. Combine with Lemma B.2, we have Varνβ,h(f)≤1 1−|h| e3|h| 4+e−3|h| 4EGD(f, f). C.3. Proof of Corollary 4.4 Proof. The following lemma gives a Chernoff-type bound for Markov chains. Lemma C.1 (Theorem 3.6 of (Dubhashi & Panconesi, 2009)) .LetX1, X2, . . . , X nbe a sequence generated by a irreducible and reversible Markov chain on a finite set with invariant distribution πand spectral gap γ. Then, for any initial distribution q; any positive integer nand all ϵ >0, Pq" nX i=1f(Xi)−nπ(f) ≥ε# ≤Cγ,n,qexp −ε2γ cn , where cis an absolute constant and Cγ,n,q is a rational function. Lezaud (1998); Gillman (1998) also prove the same result. 18 Sampling from Binary Quadratic Distributions via Stochastic Localization It is well-known that Glauber dynamics is reversible with respect to Gibbs measure νβ,h(Bertoin et al., 1999). And we consider sampling from a finite set {−1,1}N, thus Glauber dynamics is irreducible. From lemma C.1, we have Pq" 1 nnX i=1Xi−Eνβ,h[X] ≥ε# ≤Cγgap,n,qe−nε2γgap c, where γgapis spectral gap of Glauber dynamics corresponding to νβ,h. Combining with Theorem 4.3, we have γgap= 1−|h| e3|h| 4+e−3|h| 4. C.4. Proof of Theorem 4.5 Proof. 
Fori∈T,x→xiwith probability P(xi|x) = min {1,νβ,h(xi) νβ,h(x)}= min {1, e2βxiP j̸=iWijxj−2hixi}, and keep invariant with probability 1−P(xi|x) = 1−min{1, e2βxiP j̸=iWijxj−2hixi}, Then Lf(x) =Z ET(f(y)−f(x))p(y|x) =TX i=1Z Ei(f(xyi)−f(x))P(xyi|x) =TX i=1(f(xi)−f(x)) min{1, e2βxiP j̸=iWijxj−2hixi}, Define the local kernel as follows νi(x, xi) = min {1, e2βxiP j̸=iWijxj−2hixi}, Obviously, νi(x, Ei) = 1 for any x, then η= 1. ci(j) = sup x=ˆxoffj|νi(x,·)−νi(ˆx,·)|TV= sup x=ˆxoffjsup B|νi(x, B)−νi(ˆx, B)| = sup x=ˆxoffjsup Bi|νi(xi, A)−νi(ˆxi, A)| = sup x=ˆxoffjmax{ min{1, e2βxiP k̸=i,jWikxk+2βxiWijxj−2hixi} −min{1, e2βxiP k̸=i,jWikxk−2βxiWijxj−2hixi} , 1−min{1, e2βxiP k̸=i,jWikxk+2βxiWijxj−2hixi} −1 + min {1, e2βxiP k̸=i,jWikxk−2βxiWijxj−2hixi} } = sup x=ˆxoffj min{1, e2βxiP k̸=i,jWikxk+2βxiWijxj−2hixi} −min{1, e2βxiP k̸=i,jWikxk−2βxiWijxj−2hixi} . Because 2βX k̸=i,jWikxk+ 2βWijxj ≤2βsup i∈TX k̸=i|Wik| ≤2h0. Hence, for h= 2h0 −3h≤2βX k̸=i,jWikxk+ 2βWijxj−2h≤ −h −3h≤2βX k̸=i,jWikxk−2βWijxj−2h≤ −h, 19 Sampling from Binary Quadratic Distributions via Stochastic Localization and for h=−2h0 2h0≤2βX k̸=i,jWikxk+ 2βWijxj−2h≤6h0 2h0≤2βX k̸=i,jWikxk−2βWijxj−2h≤6h0. Putting everything together we get for |h|= 2h0 2βX k̸=i,jWikxk±2βWijxj−2h ≥2h0. Hence, ci(j) = sup x=ˆxoffj min{1, e2βxiP k̸=i,jWikxk+2βxiWijxj−2hixi} −min{1, e2βxiP k̸=i,jWikxk−2βxiWijxj−2hixi} ≤e−2h04β|Wij|=e−|h|4β|Wij|. Finally, ∥C∥∞= sup iX j∈[N]ci(j)≤2|h|e−|h|, and by Lemma B.2, we can get Varνβ,h(f)≤1 1−2|h|e−|h|EMH(f, f). C.5. Proof of Theorem 4.6 Proof. The transition matrix of Metropolis chain is P(y|x) = Ψ( x, y) min{1,νβ,h(y)Ψ(x|y) νβ,h(x)Ψ(y|x)},ifx̸=y, and P(y|x) = 1−X x̸=yΨ(x, y) min{1,π(y)Ψ(x|y) π(x)Ψ(y|x)},ifx=y. 
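A hedged sketch of the classical single-site Metropolis chain analyzed in Theorem 4.5: propose flipping one uniformly chosen coordinate and accept with probability min{1, ν(x^i)/ν(x)}. The acceptance ratio is computed from the generic energy difference rather than the closed-form exponent, which keeps the sketch independent of sign conventions; the instance used in the usage check is illustrative.

```python
import numpy as np

def log_nu(x, W, h, beta):
    # Unnormalized log-density of ν_{β,h}(x) ∝ exp(-β/2 ⟨x,Wx⟩ + ⟨x,h⟩).
    return -0.5 * beta * x @ W @ x + h @ x

def metropolis_chain(W, h, beta, n_steps, rng):
    """Single-site-flip Metropolis chain on {-1,1}^N targeting ν_{β,h}."""
    N = len(h)
    x = rng.choice([-1.0, 1.0], size=N)
    samples = np.empty((n_steps, N))
    for t in range(n_steps):
        i = rng.integers(N)
        y = x.copy()
        y[i] = -y[i]                              # propose flipping site i
        log_ratio = log_nu(y, W, h, beta) - log_nu(x, W, h, beta)
        if np.log(rng.random()) < log_ratio:      # accept w.p. min{1, ν(y)/ν(x)}
            x = y
        samples[t] = x
    return samples
```

On a 3-spin instance the post-burn-in magnetizations agree with exact enumeration to Monte Carlo accuracy, a quick sanity check that the kernel leaves ν_{β,h} invariant.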
Particularly, on the configuration {+1,−1}N, the dynamic flips one site each time, that is for |x−y| ≥2 P(y|x) = 0 , and for i∈[N],x→xiwith probability P(xi|x) = Ψ( xi|x) min{1,νβ,h(y)Ψ(x|xi) νβ,h(x)Ψ(xi|x)}, and keep invariant with probability 1−P(xi|x) = 1−Ψ(xi|x) min{1,νβ,h(xi)Ψ(x|xi) νβ,h(x)Ψ(xi|x)}, 20 Sampling from Binary Quadratic Distributions via Stochastic Localization Then Lf(x) =Z ET(f(y)−f(x))p(y|x) =TX
i=1Z Ei(f(xyi)−f(x))P(xyi|x) =TX i=1(f(xi)−f(x))Ψ(xi|x) min{1,π(xi)Ψ(x|xi) π(x)Ψ(xi|x)}, Define the local kernel as follows νi(x, xi) =Ψ( xi|x) min{1,νβ,h(xi)Ψ(x|xi) νβ,h(x)Ψ(xi|x)}. Obviously, νi(x, Ei) = 1 for any x, then η= 1. ForB={+1,−1}N,i̸=j ci(j) = sup x=ˆxoffj|νi(x,·)−νi(ˆx,·)|TV= sup x=ˆxoffjsup B|νi(x, B)−νi(ˆx, B)| = sup x=ˆxoffjsup Bi|νi(x∼i, A)−νi(ˆx∼i, A)| = sup x=ˆxoffjmax{|νi(x∼i,+1)−νi(ˆx∼i,+1)|,|νi(x∼i,−1)−νi(ˆx∼i,−1)|} = sup x=ˆxoffjmax{ Ψ(xi|x) min{1,νβ,h(xi)Ψ(x|xi) νβ,h(x)Ψ(xi|x)} −Ψ(ˆxi|ˆx) min{1,νβ,h(ˆxi)Ψ(ˆx|ˆxi) νβ,h(ˆx)Ψ(ˆxi|ˆx)} , 1−Ψ(xi|x) min{1,νβ,h(xi)Ψ(x|xi) νβ,h(x)Ψ(xi|x)} −1 + Ψ(ˆ xi|ˆx) min{1,νβ,h(ˆxi)Ψ(ˆx|ˆxi) νβ,h(ˆx)Ψ(ˆxi|ˆx)} } = sup x=ˆxoffj Ψ(xi|x) min{1,νβ,h(xi)Ψ(x|xi) νβ,h(x)Ψ(xi|x)} −Ψ(ˆxi|ˆx) min{1,νβ,h(ˆxi)Ψ(ˆx|ˆxi) νβ,h(ˆx)Ψ(ˆxi|ˆx)} ≤CLip(β, h)|βWij|. Hence, ∥C∥∞= sup iX j∈[N]ci(j)≤CLip(β, h)|h|, which implies the Poincar ´e inequality Varνβ,h(f)≤1 1−CLip(β, h)|h|EMH(f, f). C.6. Proof of Theorem 4.8 Proof. For Gibbs measure νβ,h=1 Zβ,he−β 2⟨x,Wx⟩+⟨x,h⟩, we consider the following with kernel P(xi|x) = Ψ( xi|x) min{1,νβ,h(xi)Ψ(x|xi) νβ,h(x)Ψ(xi|x)}, where Ψ(xi|x) =1 Zβ,h(x)e−β 2⟨x,W(xi−x)⟩+1 2⟨xi−x,h⟩=eβxiP j̸=iWijxj−hxi Zβ,h(x), 21 Sampling from Binary Quadratic Distributions via Stochastic Localization and Zβ,h(x) =1 + eβxiP j̸=iWijxj−hxi, then Ψ(xi|x) =1 Zβ,h(x)e−β 2⟨x,W(xi−x)⟩+1 2⟨xi−x,h⟩=eβxiP j̸=iWijxj−hxi 1 +eβxiP j̸=iWijxj−hxi =e1 2βxiP j̸=iWijxj−1 2hxi e−1 2βxiP j̸=iWijxj+1 2hxi+e1 2βxiP j̸=iWijxj−1 2hxi, and Ψ(x|xi) =e−1 2βxiP j̸=iWijxj+1 2hxi e−1 2βxiP j̸=iWijxj+1 2hxi+e1 2βxiP j̸=iWijxj−1 2hxi. Hence, Ψ(x|xi) Ψ(xi|x)=e−1 2βxiP j̸=iWijxj+1 2hxi e1 2βxiP j̸=iWijxj−1 2hxi=e−βxiP j̸=iWijxj+hxi. Furthermore, Ψ(ˆxi|ˆx) =e1 2βxiP k̸=i,jWikxk−1 2βxiWijxj−1 2hxi e−1 2βxiP k̸=i,jWikxk+1 2βxiWijxj+1 2hxi+e1 2βxiP k̸=i,jWikxk−1 2βxiWijxj−1 2hxi Ψ(ˆx|ˆxi) =e−1 2βxiP k̸=i,jWikxk+1 2βxiWijxj+1 2hxi e−1 2βxiP k̸=i,jWikxk+1 2βxiWijxj+1 2hxi+e1 2βxiP k̸=i,jWikxk−1 2βxiWijxj−1 2hxi. 
and Ψ(ˆx|ˆxi) Ψ(ˆxi|ˆx)=e−βxiP k̸=i,jWikxk+βxiWijxj+hxi. Besides, νβ,h(xi) νβ,h(x)=e2βxiP k̸=i,jWikxk+2βxiWijxj−2hxi,νβ,h(ˆxi) νβ,h(ˆx)=e2βxiP k̸=i,jWikxk−2βxiWijxj−2hxi, and νβ,h(xi) νβ,h(x)Ψ(x|xi) Ψ(xi|x)=e−βxiP k̸=i,jWikxk−βxiWijxj+hxie2βxiP k̸=i,jWikxk+2βxiWijxj−2hxi =eβxiP k̸=i,jWikxk+βxiWijxj−hxi νβ,h(ˆxi) νβ,h(ˆx)Ψ(ˆx|ˆxi) Ψ(ˆxi|ˆx)=eβxiP k̸=i,jWikxk−βxiWijxj−hxi. Consider the function f(x, y) =min{1, ex+y} 1 +ex+y=1 2−|1−ex+y| 2(1 + ex+y), then for y1, y2∈(−∞,−x)∪(−x,+∞) |f(x, y1)−f(x, y2)| ≤1 (ex+y 2+e−x+y 2)2|y2−y1|, 22 Sampling from Binary Quadratic Distributions via Stochastic Localization fory1≤ −x≤y2 |f(x, y1)−f(x, y2)| ≤|f(x, y1)−f(x,−x)| ≤1 (ex+y 2+e−x+y 2)2| −x−y1| ≤1 (ex+y 2+e−x+y 2)2|y2−y1|. Denote that x=βxiX k̸=i,jWikxk−hxi y=βxiWijxj. Recall that for h= 2h0 h0 2=h 2−h0 2≤ −β 2X k̸=i,jWikxk−β 2Wijxj+h 2≤h0 2+h 2, and for h=−2h0 h 2−h0 2≤ −β 2X k̸=i,jWikxk−β 2Wijxj+h 2≤h0 2+h 2=−h0 2. Putting everything together we get for |h|= 2h0 −β 2X k̸=i,jWikxk−β 2Wijxj+h 2 ≥h0 2. Hence, Ψ(xi|x) min{1,νβ,h(xi)Ψ(x|xi) νβ,h(x)Ψ(xi|x)} −Ψ(ˆxi|ˆx) min{1,νβ,h(ˆxi)Ψ(ˆx|ˆxi) νβ,h(ˆx)Ψ(ˆxi|ˆx)} = e1 2(x+y) e1 2(x+y)+e−1 2(x+y)min{1, ex+y} −e1 2(x−y) e1 2(x−y)+e−1 2(x−y)min{1, ex−y} = 1 1 +e−(x+y)min{1, ex+y} −1 1 +e−(x−y)min{1, ex−y} ≤2 (ex+y 2+e−x+y 2)2|y| ≤2β|Wij| (eβxiP k̸=i,jWikxk−hxi+βxiWijxj 2 +e−βxiP k̸=i,jWikxk+hxi−βxiWijxj 2 )2 ≤2β|Wij| (e|h| 4+e−|h| 4)2 Finally, we get the estimate ci(j)≤2β|Wij| (e|h| 4+e−|h| 4)2. Using Lemma B.2, we have Varνβ,h(f)≤1 1−|h| (e|h| 4+e−|h| 4)2EMH(f, f). 23 Sampling from Binary Quadratic Distributions via Stochastic Localization C.7. Proof of Theorem 4.9 Define the transition kernel JT(x,−x)as follows JT(x,−x) =TY i=1Ψ(xi|x) =TY i=1eβxiP j̸=iWijxj−hxi 1 +eβxiP j̸=iWijxj−hxi, and TY i=1Ψ(ˆxi|ˆx) =TY i=1eβˆxiP j̸=iWijˆxj−hˆxi 1 +eβˆxiP j̸=iWijˆxj−hˆxi, where x= ˆxoffj. 
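The kernels above are built from single-site sigmoid proposals Ψ(x^i|x). A hedged sketch of one such proposal with its Metropolis-Hastings correction follows; the field expression mirrors the displayed formula (with a per-site field h_i in place of the scalar h), and the helper names are illustrative, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def informed_flip_step(x, W, h, beta, i, rng):
    """One MH step at site i with the sigmoid proposal
    Ψ(x^i|x) = e^f / (1 + e^f),  f = β x_i Σ_{j≠i} W_ij x_j - h_i x_i,
    targeting ν_{β,h}(x) ∝ exp(-β/2 ⟨x,Wx⟩ + ⟨x,h⟩)."""
    f = beta * x[i] * (W[i] @ x - W[i, i] * x[i]) - h[i] * x[i]
    p_fwd = sigmoid(f)                      # probability of proposing the flip
    if rng.random() >= p_fwd:
        return x                            # proposal keeps the configuration
    y = x.copy()
    y[i] = -y[i]
    f_rev = beta * y[i] * (W[i] @ y - W[i, i] * y[i]) - h[i] * y[i]
    p_rev = sigmoid(f_rev)                  # reverse proposal probability Ψ(x|x^i)
    log_nu = lambda z: -0.5 * beta * z @ W @ z + h @ z
    log_acc = (log_nu(y) - log_nu(x)) + np.log(p_rev) - np.log(p_fwd)
    return y if np.log(rng.random()) < log_acc else x
```

Since flipping only site i gives f_rev = -f, the correction simplifies, but computing it generically keeps the sketch honest about the full MH ratio.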
Then TY i=1Ψ(xi|x)−TY i=1Ψ(ˆxi|ˆx) =Ψ(xj|x)TY i̸=jΨ(xi|x)−Ψ(ˆxj|x)TY i̸=jΨ(ˆxi|ˆx) =Ψ(xj|x)TY i̸=jeβxiP k̸=i,jWikxk+βxiWijxj−hxi 1 +eβxiP k̸=i,jWikxk+βxiWijxj−hxi−Ψ(ˆxj|ˆx)TY i̸=jeβxiP k̸=i,jWikxk−βxiWijxj−hxi 1 +eβxiP k̸=i,jWikxk−βxiWijxj−hxi :=NY i=1fi(x)−NY i=1fi(ˆx), where fi(x) =eβxiP k̸=i,jWikxk+βxiWijxj−hxi 1 +eβxiP k̸=i,jWikxk+βxiWijxj−hxi, i̸=j, and fj(x) = Ψ( xj|x) =eβxjP k̸=jWjkxk−hxj 1 +eβxjP k̸=jWjkxk−hxj, fj(ˆx) = Ψ(ˆ xj|ˆx) =e−βxjP k̸=jWjkxk+hxj 1 +e−βxjP k̸=jWjkxk+hxj. Recall that for h0≥0such that βsup i∈TX k̸=i|Wik| ≤h0, then βX k̸=i,jWikxk±βWijxj ≤βsup i∈TX k̸=i|Wik| ≤h0. Hence, for h=
2h0 −3h0≤βX k̸=i,jWikxk±βWijxj−h≤ −h0, and for h=−2h0 h0≤βX k̸=i,jWikxk±βWijxj−h≤3h0. 24 Sampling from Binary Quadratic Distributions via Stochastic Localization Putting everything together we get for |h|= 2h0 βX k̸=i,jWikxk±βWijxj−h ≥h0. For|y| ≥h0 0≤d dy1 1 +ey ≤1 (eh0 2+e−h0 2)2. Because for any i∈[N],0<Ψ(xi|x)≤1, then NY i=1Ψ(xi|x)−NY i=1Ψ(ˆxi|ˆx) = Ψ(x1|x)NY i=2Ψ(xi|x)−Ψ(ˆx1|ˆx)NY i=2Ψ(xi|x) + Ψ(ˆx1|ˆx)NY i=2Ψ(xi|x)−NY i=1Ψ(ˆxi|ˆx) = Ψ(x1|x)−Ψ(ˆx1|ˆx)NY i=2Ψ(xi|x) + Ψ(ˆx1|ˆx) NY i=2Ψ(xi|x)−NY i=2Ψ(ˆxi|ˆx)! ≤ Ψ(x1|x)−Ψ(ˆx1|ˆx) + NY i=2Ψ(xi|x)−NY i=2Ψ(ˆxi|ˆx) ≤NX i=1 Ψ(xi|x)−Ψ(ˆxi|ˆx) . Fori̸=j Ψ(xi|x)−Ψ(ˆxi|ˆx) ≤2β|Wij| (eh0 2+e−h0 2)2. Fori=j Ψ(xj|x)−Ψ(ˆxj|ˆx) ≤1 (eh0 2+e−h0 2)2 2βX k̸=jWjkxk−2h ≤2βP k̸=j|Wjk|+ 2|h| (eh0 2+e−h0 2)2. Hence cij=c[N](j) = sup x=ˆxoffj NY i=1Ψ(xi|x)−NY i=1Ψ(ˆxi|ˆx) ≤X i̸=j2β|Wij| (eh0 2+e−h0 2)2+2βP k̸=j|Wjk|+ 2|h| (eh0 2+e−h0 2)2 =2β(P i̸=j|Wij|+P k̸=j|Wjk|) + 2|h| (e|h| 4+e−|h| 4)2≤4|h| (e|h| 4+e−|h| 4)2. Finally, ∥C∥∞= sup iX j∈[N]ci(j)≤4|h|N (e|h| 4+e−|h| 4)2. Combine with Lemma B.2, we have Varνβ,h(f)≤1 1−4N|h| (e|h| 4+e−|h| 4)2E(f, f). 25 Sampling from Binary Quadratic Distributions via Stochastic Localization C.8. Chernoff-type Error Bounds for DMCMC Samplers This section details the derivation of Chernoff-type error bounds for the Monte Carlo estimators based on DMCMC samplers, expanding upon the theoretical results presented in Section 4. These bounds quantify the concentration of the sample average around the true posterior mean. For a sequence of samples {X1, X2, . . . , X n}generated by a DMCMC chain, we establish the following corollary: Corollary C.2. LetX1∼qdenote the initial distribution of the samples. 
Under the assumption that the Gibbs measure (13) satisfies the large-field Condition 4.1, for any $\varepsilon>0$, the following Chernoff-type error bound holds:
\[
\mathbb{P}_{q}\Bigg[\bigg|\frac{1}{n}\sum_{i=1}^{n}X_i-\mathbb{E}_{\nu_{\beta,h}}[X]\bigg|\ge\varepsilon\Bigg]
\le C_{\gamma_{\mathrm{gap}},n,q}\,e^{-\frac{n\varepsilon^{2}\gamma_{\mathrm{gap}}}{c}},
\]
where $c$ is an absolute constant, $C_{\gamma_{\mathrm{gap}},n,q}$ is a rational function, and the spectral gap $\gamma_{\mathrm{gap}}$ is specific to each DMCMC algorithm:
• For classical Metropolis chains, $\gamma_{\mathrm{gap}}=1-2|h|e^{-|h|}$;
• For the single-site gradient-informed MH algorithm, $\gamma_{\mathrm{gap}}=1-\frac{|h|}{(e^{|h|/4}+e^{-|h|/4})^{2}}$;
• For DULA, $\gamma_{\mathrm{gap}}=1-\frac{4|h|N}{(e^{|h|/4}+e^{-|h|/4})^{2}}$.

Proof. Our approach leverages general results on the concentration of measure for Markov chains. A Metropolis-Hastings (MH) kernel $P(x'\mid x)$ is typically defined by
\[
P(x'\mid x)=\Psi(x'\mid x)\,\min\!\Bigg(1,\frac{\nu_{\beta,h}(x')\,\Psi(x\mid x')}{\nu_{\beta,h}(x)\,\Psi(x'\mid x)}\Bigg),
\]
where $\Psi(\cdot\mid\cdot)$ represents the proposal distribution. A key property of these MH kernels is their reversibility with respect to the target Gibbs measure $\nu_{\beta,h}$. Furthermore, given that we are sampling from the finite state space $\{-1,1\}^{N}$, these MH algorithms are inherently irreducible. Reversibility and irreducibility are precisely the conditions required for Lemma C.1 to apply. Lemma C.1 states that for a reversible and irreducible Markov chain, a Chernoff-type error bound of the form
\[
\mathbb{P}_{q}\Bigg[\bigg|\frac{1}{n}\sum_{i=1}^{n}X_i-\mathbb{E}_{\nu_{\beta,h}}[X]\bigg|\ge\varepsilon\Bigg]
\le C_{\gamma_{\mathrm{gap}},n,q}\,e^{-\frac{n\varepsilon^{2}\gamma_{\mathrm{gap}}}{c}}
\]
holds, where $\gamma_{\mathrm{gap}}$ is the spectral gap of the underlying Markov chain. Finally, by incorporating the specific spectral gap calculations derived for each DMCMC variant, namely classical Metropolis chains (Theorem 4.5), single-site gradient-informed MH (Theorem 4.8), and DULA (Theorem 4.9), we substitute their respective $\gamma_{\mathrm{gap}}$ values into this general bound, thus completing the proof.

D. Experimental Miscellaneous
In this section, we first benchmark the QUBO instances in Section D.1 and provide detailed experimental settings for reproducibility in Section D.2. We then present additional experimental results in Section D.3 to further validate our framework's effectiveness. Finally, in Section D.4, we conduct
https://arxiv.org/abs/2505.19438v1
comprehensive ablation studies on various components of our framework, such as the $\alpha$-schedule in SL and the influence of hyperparameters on sampling performance and convergence properties.

D.1. QUBO Instances
In this section, we describe the form of the Binary Quadratic Distributions for the QUBO instances used in our experiments. For instances with $x\in\{0,1\}^{N}$, we transform them to $\{-1,1\}^{N}$ through the mapping $y=\frac{x+1}{2}$ in our experimental setup.

Maximum Independent Set  Given a graph $G=(V,E)$ with $N$ nodes, the Maximum Independent Set (MIS) problem can be formulated as
\[
\max_{x\in\{0,1\}^{N}}\ \sum_{i=1}^{N}c_i x_i\quad\text{subject to}\quad x_i x_j=0\ \text{for all }(i,j)\in E,
\]
which can be transformed into a BQD:
\[
p(x)\propto e^{-\frac{\beta}{2}\left(-c^{\top}x+\frac{\lambda\,x^{\top}Ax}{2}\right)},
\]
where $A$ is the adjacency matrix, $\beta$ is the inverse temperature, and $\lambda$ is the penalty coefficient ensuring constraint satisfaction. Following DISCS (Goshvadi et al., 2024), we set $c=1$, $\lambda=1.0001$, and use their annealing schedule and post-processing method to obtain feasible solutions. The evaluation is conducted on five datasets: four from Erdős-Rényi random graphs ER-[700-800] with densities {0.05, 0.10, 0.15, 0.25} and one from SATLIB.

Maximum Cut  The Maximum Cut (MaxCut) problem on a graph $G=(V,E)$ with $N$ nodes can be formulated as
\[
\max_{x\in\{-1,1\}^{N}}\ \sum_{(i,j)\in E}w_{ij}\,\frac{1-x_i x_j}{2},
\]
which corresponds to the following BQD:
\[
p(x)\propto e^{-\beta\sum_{(i,j)\in E}w_{ij}\frac{1-x_i x_j}{2}},
\]
where $w_{ij}=1$ in the experiments. Following DISCS (Goshvadi et al., 2024), we evaluate on seven datasets: three from different sizes of Erdős-Rényi random graphs with density 0.15 (ER-256-300, ER-512-600, ER-1024-1100), three from different sizes of Barabási-Albert random graphs (ba-256-300, ba-512-600, ba-1024-1100), and one from Optsicom.
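As a concrete illustration, the unnormalized log-densities above can be sketched as follows. This is a minimal reading of the formulas as written (with $c=1$ and $\lambda=1.0001$ as in the text), not the DISCS implementation, and the helper names are ours.

```python
import numpy as np

def mis_log_density(x, A, beta, c=1.0, lam=1.0001):
    """Unnormalized log-density of the MIS BQD for x in {0,1}^N:
    log p(x) = -(beta/2) * (-c^T x + lam * x^T A x / 2) + const."""
    return -0.5 * beta * (-c * x.sum() + 0.5 * lam * (x @ A @ x))

def maxcut_log_density(x, A, beta):
    """Unnormalized log-density of the MaxCut BQD for x in {-1,1}^N,
    with unit edge weights w_ij = 1 read off an adjacency matrix A."""
    iu = np.triu_indices_from(A, k=1)                  # count each edge once
    cut = np.sum(A[iu] * (1.0 - np.outer(x, x)[iu]) / 2.0)
    return -beta * cut

def to_pm_one(x01):
    """Affine map between the {0,1}^N and {-1,1}^N cubes used in the setup."""
    return 2 * x01 - 1
```

For instance, on the two-node complete graph the configuration $x=(1,-1)$ cuts the single edge, so `maxcut_log_density` returns $-\beta$.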
Maximum Clique  The Maximum Clique problem on a graph $G=(V,E)$ with $N$ nodes can be formulated as
\[
\min_{x\in\{0,1\}^{N}}\ -\sum_{i=1}^{N}c_i x_i\quad\text{subject to}\quad x_i x_j=0\ \text{for all }(i,j)\notin E,
\]
which corresponds to the following BQD:
\[
p(x)\propto e^{-\beta\left(-c^{\top}x+\frac{\lambda}{2}\left(\mathbf{1}^{\top}x\,(\mathbf{1}^{\top}x-1)-x^{\top}Ax\right)\right)}.
\]
Following DISCS (Goshvadi et al., 2024), we set $c=1$, $\lambda=1.0001$, and maintain the same post-processing method. We evaluate on two datasets: Twitter and RB.

D.2. Experimental Setting
In this part, we provide detailed experimental settings for the configuration of three key components in our framework: the discrete MCMC sampler settings, the Stochastic Localization algorithm parameters, and the exponential decay step allocation strategy.

Discrete MCMC Sampler Settings  We employ three different discrete MCMC samplers in our experiments: Gibbs With Gradients (GWG) (Grathwohl et al., 2021), Path Auxiliary Sampler (PAS) (Sun et al., 2021), and Discrete Metropolis-Adjusted Langevin Algorithm (DMALA) (Zhang et al., 2022). For all samplers, we use adaptive step sizes with a target acceptance rate of 0.574 and the balancing function $g(t)=\sqrt{t}$. The specific configurations are as follows: for GWG and PAS, we configure them to flip one bit at each step; for DMALA, we set the initial step size to 0.2, schedule the step-size adjustment every 100 steps, and reset the gradient estimate every 20 steps.

Exponential Decay Step Allocation  Given the total number of SL SDE iterations $K$ and the total number of MCMC steps $N_{\mathrm{tot}}$ (10,000 in our work), we need to allocate $N_t$ MCMC steps for posterior estimation at each time step $t$, ensuring $\sum_{t=1}^{K}N_t=N_{\mathrm{tot}}$. Our exponential decay allocation strategy works as follows: First, we assign a minimum
number of steps $N_{\min}$ to each time step. Then, we distribute the remaining steps $N_{\mathrm{tot}}-KN_{\min}$ exponentially according to $N_t=N_{\min}+c\cdot r^{t/K}$, where $r$ is the decay rate and $c$ is a normalization constant ensuring the sum constraint. To handle discretization effects, we floor the continuous schedule to integers and carefully distribute the remaining steps to maintain the exact total.

Table 4. Results of ablative experiments for MIS ER-0.05 (left) and MIS ER-0.10 (right).

MIS ER-0.05
Ablation    SL-GWG         SL-PAS         SL-DMALA
CLASSIC     103.69 ± 1.57  104.31 ± 1.79  103.06 ± 1.82
GEOM(1,1)   103.97 ± 1.55  104.19 ± 1.67  103.41 ± 1.85
GEOM(2,1)   104.37 ± 1.52  104.53 ± 1.48  103.59 ± 1.80
K=256       104.19 ± 1.65  104.53 ± 1.48  103.34 ± 1.57
K=512       104.19 ± 1.65  104.19 ± 1.72  103.59 ± 1.80
K=1024      104.37 ± 1.52  104.22 ± 1.49  103.03 ± 1.69
σ=1         103.25 ± 1.70  104.16 ± 1.42  103.19 ± 1.74
σ=5         103.81 ± 1.63  104.22 ± 1.67  103.34 ± 1.61
σ=10        104.00 ± 1.54  104.34 ± 1.81  103.28 ± 1.64

MIS ER-0.10
Ablation    SL-GWG        SL-PAS        SL-DMALA
CLASSIC     60.25 ± 0.79  61.63 ± 0.96  60.94 ± 1.06
GEOM(1,1)   61.47 ± 1.06  61.75 ± 0.83  60.88 ± 0.96
GEOM(2,1)   61.94 ± 0.93  61.91 ± 0.88  61.34 ± 0.96
K=256       61.41 ± 0.82  61.91 ± 0.88  60.78 ± 0.99
K=512       61.94 ± 0.93  61.78 ± 0.96  60.94 ± 1.00
K=1024      61.53 ± 0.83  61.69 ± 0.95  61.34 ± 0.96
σ=1         61.47 ± 0.83  61.69 ± 0.77  61.09 ± 1.16
σ=5         61.69 ± 1.04  61.69 ± 0.88  60.81 ± 1.04
σ=10        61.66 ± 0.92  61.50 ± 0.83  61.22 ± 1.05

SL Settings  For the $\alpha$-schedule in Stochastic Localization, we adopt the GEOM(2,1) schedule $\alpha(t)=t(1-t)^{-1/2}$, as suggested in prior work. We use uniform time discretization and employ an exponential integrator for SDE integration. When performing posterior estimation in SL, we use the same MCMC sampler configurations as in their standalone sampling counterparts. Other hyperparameters are determined through extensive hyperparameter optimization.
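The exponential-decay step allocation described in Section D.2 can be sketched as follows. This is a minimal implementation of the stated rule $N_t=N_{\min}+c\cdot r^{t/K}$; the default $N_{\min}$ and $r$ values are illustrative picks from the tuning ranges reported below, not the released configuration.

```python
import numpy as np

def allocate_steps(K, N_total, N_min=4, r=0.01):
    """Allocate N_total MCMC steps over K SL iterations with exponential decay:
    N_t = N_min + c * r**(t/K), with c a normalization constant chosen so the
    floored schedule sums to N_total exactly."""
    t = np.arange(1, K + 1)
    weights = r ** (t / K)                              # decaying profile, r < 1
    remaining = N_total - K * N_min
    assert remaining >= 0, "N_total must cover the per-step minimum"
    cont = N_min + remaining * weights / weights.sum()  # continuous schedule
    steps = np.floor(cont).astype(int)                  # floor to integers
    shortfall = int(N_total - steps.sum())              # leftover from flooring
    if shortfall > 0:
        top = np.argsort(cont - steps)[::-1][:shortfall]  # largest fractional parts
        steps[top] += 1                                 # redistribute to keep the total
    return steps

steps = allocate_steps(K=256, N_total=10_000)
print(steps.sum())  # 10000
```

Early SL iterations receive the most steps (the schedule is monotonically decaying), and the fractional-part redistribution preserves the exact budget after flooring.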
Specifically, we tune the following parameters: the number of discretization steps $K\in\{256,512,1024\}$, the initial and final noise scales $\epsilon,\epsilon_{\mathrm{end}}\in[10^{-4},10^{-1}]$, the MCMC sample ratio (proportion of last samples used for posterior estimation) in $[0.1,1]$ with step $0.1$, the MCMC step exponential decay rate $r\in[10^{-4},10^{-1}]$, the minimum number of MCMC steps $N_{\min}\in\{2,4,6,8,16,32\}$, and the noise parameter $\sigma\in\{1,2,4,6,8,10,15,20\}$. The total number of MCMC steps is fixed at 10,000. The optimal hyperparameters found through our search will be released with our open-source code. All experiments are conducted on a single NVIDIA A40 GPU and an AMD EPYC 7513 CPU. The code is available at https://github.com/LOGO-CUHKSZ/SLDMCMC.

D.3. Further Experimental Results
To provide a more comprehensive understanding of our framework's behavior, we visualize the sampling trajectories for both the base MCMC samplers (DMALA, GWG, and PAS) and the SL-enhanced framework across different test cases. The shaded areas in the trajectories, representing the variance across multiple instances, are generally narrow, indicating the stability of all methods. Most notably, SL-based samplers (solid lines) and their base MCMC counterparts (dashed lines) achieve comparable final performance across all test cases, demonstrating the reliability of both approaches.

D.4. Further Ablations
Based on the comprehensive ablation experiments shown in Tables 4-9, we analyze the impact of three key components in SL: the $\alpha$-schedule, the number of SL iterations $K$, and the noise parameter $\sigma$. First, regarding the $\alpha$-schedule, we adopt the geometric schedule GEOM($\alpha_1$,$\alpha_2$) from SLIPS (Grenioux et al., 2024), defined as GEOM($\alpha_1$,$\alpha_2$)$(t)=t^{\alpha_1/2}(1-t)^{-\alpha_2/2}$, where $t\in[0,1]$, $\alpha_1\ge 1$, and $\alpha_2>0$. Both GEOM(1,1)
and GEOM(2,1) demonstrate superior performance over the CLASSIC schedule (defined as $\alpha(t)=t$) in MIS problems. However, this impact becomes negligible in MaxCut instances, where all schedules perform similarly, with variations within 0.1%. Second, the number of SL iterations $K$ shows a problem-dependent sensitivity pattern. While $K$ exhibits significant influence on performance in MIS tasks, its impact becomes minimal in MaxCut problems, where the performance differences among $K=256$, $512$, and $1024$ are statistically insignificant ($\le 0.01\%$ variation). Third, the noise parameter $\sigma$ demonstrates similar problem-specific characteristics. In MIS instances, moderate noise levels ($\sigma=5$) generally yield optimal performance, with notable improvements over other settings. However, in MaxCut problems, the choice of $\sigma$ has minimal impact on the final performance, with variations typically less than 0.05%. When comparing methods across problem types, SL-PAS exhibits superior performance in MIS tasks compared to SL-GWG and SL-DMALA. However, this advantage diminishes in MaxCut instances, where all three methods achieve nearly identical results.

Table 5. Results of ablative experiments for MIS ER-0.20 (left) and MIS ER-0.25 (right).
MIS ER-0.20
Ablation    SL-GWG        SL-PAS        SL-DMALA
CLASSIC     34.06 ± 0.61  33.91 ± 0.58  33.03 ± 0.77
GEOM(1,1)   34.09 ± 0.63  34.06 ± 0.50  33.97 ± 0.64
GEOM(2,1)   34.38 ± 0.74  34.38 ± 0.54  34.00 ± 0.56
K=256       34.19 ± 0.73  34.25 ± 0.61  33.84 ± 0.62
K=512       34.03 ± 0.53  34.09 ± 0.52  34.00 ± 0.56
K=1024      34.38 ± 0.74  34.38 ± 0.54  33.72 ± 0.84
σ=1         33.72 ± 0.57  33.69 ± 0.58  33.84 ± 0.67
σ=5         34.22 ± 0.74  34.00 ± 0.66  33.81 ± 0.68
σ=10        34.00 ± 0.43  34.22 ± 0.54  33.88 ± 0.65

MIS ER-0.25
Ablation    SL-GWG        SL-PAS        SL-DMALA
CLASSIC     27.88 ± 0.54  26.50 ± 0.56  27.31 ± 0.58
GEOM(1,1)   27.56 ± 0.50  27.84 ± 0.51  27.44 ± 0.50
GEOM(2,1)   28.00 ± 0.56  28.03 ± 0.59  27.75 ± 0.66
K=256       27.84 ± 0.57  27.94 ± 0.50  27.41 ± 0.55
K=512       27.88 ± 0.48  27.91 ± 0.63  27.47 ± 0.56
K=1024      28.00 ± 0.56  28.03 ± 0.59  27.75 ± 0.66
σ=1         27.81 ± 0.53  27.72 ± 0.51  27.06 ± 0.50
σ=5         27.72 ± 0.57  27.94 ± 0.43  27.47 ± 0.50
σ=10        27.88 ± 0.65  27.94 ± 0.61  27.75 ± 0.66

Table 6. Results of ablative experiments for MIS SATLib (left) and MaxClique RB (right).

MIS SATLib
Ablation    SL-GWG          SL-PAS          SL-DMALA
CLASSIC     418.93 ± 14.39  419.35 ± 14.37  415.23 ± 14.51
GEOM(1,1)   419.03 ± 14.31  419.61 ± 14.37  415.72 ± 14.48
GEOM(2,1)   419.16 ± 14.39  419.71 ± 14.35  415.72 ± 14.37
K=256       419.03 ± 14.32  419.71 ± 14.35  415.57 ± 14.55
K=512       419.16 ± 14.39  419.62 ± 14.37  415.72 ± 14.37
K=1024      419.03 ± 14.45  419.59 ± 14.35  415.66 ± 14.52
σ=1         418.85 ± 14.31  419.51 ± 14.33  415.43 ± 14.29
σ=5         419.00 ± 14.37  419.55 ± 14.35  415.70 ± 14.39
σ=10        419.01 ± 14.39  419.71 ± 14.35  415.72 ± 14.37

MaxClique RB
Ablation    SL-GWG        SL-PAS        SL-DMALA
CLASSIC     87.52 ± 6.18  87.59 ± 6.19  87.16 ± 6.11
GEOM(1,1)   87.52 ± 6.14  87.43 ± 6.11  87.10 ± 6.09
GEOM(2,1)   87.60 ± 6.16  87.65 ± 6.14  87.31 ± 6.15
K=256       87.43 ± 6.16  87.65 ± 6.14  87.19 ± 6.08
K=512       87.42 ± 6.09  87.41 ± 6.13  87.18 ± 6.06
K=1024      87.60 ± 6.16  87.56 ± 6.17  87.31 ± 6.15
σ=1         86.77 ± 6.11  87.35 ± 6.13  87.08 ± 6.11
σ=5         87.42 ± 6.18  87.55 ± 6.20  87.26 ± 6.16
σ=10        87.42 ± 6.21  87.45 ± 6.17  87.31 ± 6.15

E. Computational Complexity Analysis
We analyze the computational complexity of the Stochastic Localization (SL) framework combined with Discrete MCMC (DMCMC) samplers, compared to using DMCMC alone.
We consider the total operational cost in terms of fundamental operations like function evaluations, pseudo-gradient calculations, or matrix operations, as these often dominate the runtime for the target distributions we consider.
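To fix a concrete picture of the operations being counted, the following toy single-site Metropolis chain has the cost profile this section analyzes: each step evaluates an energy whose matrix-vector product is $O(N^2)$. It is an illustrative stand-in for the GWG/PAS/DMALA samplers, and the Ising-type energy form is an assumption of the sketch.

```python
import numpy as np

def log_density(x, W, h, beta):
    # Assumed Ising-type unnormalized log-density: (beta/2) x^T W x - h * sum(x).
    # The matrix-vector product makes each evaluation O(N^2), the per-step cost
    # charged to every one of the M DMCMC steps in the analysis.
    return 0.5 * beta * (x @ W @ x) - h * x.sum()

def metropolis_chain(W, h, beta, n_steps, seed=0):
    """Toy single-site Metropolis chain on {-1,1}^N: propose one spin flip,
    accept with probability min(1, density ratio)."""
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    x = rng.choice([-1.0, 1.0], size=N)
    out = np.empty((n_steps, N))
    for s in range(n_steps):
        j = rng.integers(N)                   # uniform single-site proposal
        x_prop = x.copy()
        x_prop[j] = -x_prop[j]
        log_ratio = log_density(x_prop, W, h, beta) - log_density(x, W, h, beta)
        if np.log(rng.random()) < log_ratio:  # Metropolis accept/reject
            x = x_prop
        out[s] = x
    return out

def total_ops(M, T, N):
    """Operation-count model from this section: O(M N^2) for the MCMC steps
    plus O(T N) SL overhead (posterior-mean estimate and SDE update)."""
    return M * N**2 + T * N

# At the experiments' scales, the SL overhead is tiny relative to the MCMC cost:
M, T, N = 10_000, 512, 1_000
print(f"{total_ops(M, T, N) / (M * N**2) - 1:.2e}")  # 5.12e-05
```

The final line previews the comparison made in Section E.3: with $T\le 1024$ SL iterations and $M=10{,}000$ MCMC steps, the relative overhead $TN/(MN^2)$ is on the order of $10^{-5}$.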
Let $N$ be the dimension of the problem (number of binary variables), $T$ be the total number of SL iterations, and $M$ be the total budget of DMCMC steps across all SL iterations.

E.1. Baseline DMCMC Complexity
For standard DMCMC methods used as baselines in our study, such as Gibbs With Gradients (GWG) (Grathwohl et al., 2021), Path Auxiliary Sampler (PAS) (Sun et al., 2021), and Discrete Metropolis-adjusted Langevin Algorithm (DMALA) (Zhang et al., 2022), the dominant computational cost per MCMC step typically involves evaluating the target distribution's pseudo-gradient or related quantities. For many relevant models, including the Binary Quadratic Distribution (BQD) studied in this paper, these operations scale at least quadratically with the dimension, $O(N^2)$. Given a total budget of $M$ MCMC steps, the overall complexity of a baseline DMCMC method is thus $O(MN^2)$.

E.2. SL + DMCMC Complexity
When using SL with DMCMC, the total MCMC budget $M$ is distributed across $T$ SL iterations. The core cost associated with performing $M$ DMCMC steps remains $O(MN^2)$ in total. In addition to the MCMC sampling, each of the $T$ SL iterations involves a few key operations: (1) MC estimation of the conditional mean: this step involves drawing samples from the current posterior distribution and computing their mean. If we draw a fixed, relatively small number of samples per SL iteration, this calculation takes $O(N)$ time per iteration, as it involves vector operations in $\mathbb{R}^N$. (2) SDE simulation: updating the localization variable $Y^{\alpha}_t$ according to the SDE involves vector additions and scaling, which takes $O(N)$ time per iteration. The total computational overhead introduced by the SL process itself, over $T$ iterations, is therefore $O(T\times(N+N))=O(TN)$. Combining the cost of the total MCMC steps and the SL overhead, the total computational complexity of the SL+DMCMC framework is $O(MN^2+TN)$.

Table 7.
Results of ablative experiments for MaxCut ER 256-300 (left) and MaxCut BA 256-300 (right).

MaxCut ER 256-300
Ablation    SL-GWG         SL-PAS         SL-DMALA
CLASSIC     101.92 ± 1.76  101.91 ± 1.76  101.92 ± 1.76
GEOM(1,1)   101.92 ± 1.76  101.92 ± 1.76  101.92 ± 1.76
GEOM(2,1)   101.92 ± 1.76  101.92 ± 1.76  101.92 ± 1.76
K=256       101.91 ± 1.76  101.92 ± 1.76  101.91 ± 1.76
K=512       101.92 ± 1.76  101.92 ± 1.76  101.92 ± 1.76
K=1024      101.92 ± 1.76  101.92 ± 1.76  101.92 ± 1.76
σ=1         101.90 ± 1.76  101.89 ± 1.76  101.90 ± 1.76
σ=5         101.92 ± 1.76  101.92 ± 1.76  101.92 ± 1.76
σ=10        101.92 ± 1.76  101.92 ± 1.76  101.92 ± 1.76

MaxCut BA 256-300
Ablation    SL-GWG        SL-PAS        SL-DMALA
CLASSIC     99.96 ± 0.07  99.97 ± 0.06  99.95 ± 0.07
GEOM(1,1)   99.97 ± 0.06  99.97 ± 0.06  99.95 ± 0.08
GEOM(2,1)   99.97 ± 0.05  99.98 ± 0.05  99.96 ± 0.06
K=256       99.96 ± 0.07  99.97 ± 0.06  99.96 ± 0.07
K=512       99.97 ± 0.06  99.97 ± 0.06  99.96 ± 0.07
K=1024      99.97 ± 0.06  99.97 ± 0.05  99.96 ± 0.06
σ=1         99.94 ± 0.08  99.95 ± 0.07  99.81 ± 0.12
σ=5         99.97 ± 0.06  99.97 ± 0.05  99.95 ± 0.07
σ=10        99.97 ± 0.06  99.97 ± 0.06  99.96 ± 0.07

Table 8. Results of ablative experiments for MaxCut ER 512-600 (left) and MaxCut ER 1024-1100 (right).

MaxCut ER 512-600
Ablation    SL-GWG         SL-PAS         SL-DMALA
CLASSIC     100.16 ± 0.12  100.17 ± 0.12  100.14 ± 0.13
GEOM(1,1)   100.16 ± 0.12  100.17 ± 0.12  100.14 ± 0.13
GEOM(2,1)   100.16 ± 0.12  100.17 ± 0.12  100.14 ± 0.12
K=256       100.15 ± 0.12  100.16 ± 0.12  100.14 ± 0.12
K=512       100.16 ± 0.12  100.17 ± 0.12  100.14 ± 0.12
K=1024      100.16 ± 0.12  100.17 ± 0.12  100.14 ± 0.12
σ=1         100.15 ± 0.12  100.16 ± 0.12  100.11 ± 0.13
σ=5         100.16 ± 0.12  100.17 ± 0.12  100.14 ± 0.13
σ=10        100.16 ± 0.12  100.17 ± 0.12  100.14 ± 0.13

MaxCut ER 1024-1100
Ablation    SL-GWG         SL-PAS         SL-DMALA
CLASSIC     100.08 ± 0.14  100.12 ± 0.14  100.06 ± 0.14
GEOM(1,1)   100.10 ± 0.14  100.12 ± 0.13  100.07 ± 0.14
GEOM(2,1)   100.10 ± 0.13  100.12 ± 0.14  100.08 ± 0.14
K=256       100.09 ± 0.14  100.12 ± 0.14  100.08 ± 0.14
K=512       100.09 ± 0.14  100.12 ± 0.14  100.07 ± 0.14
K=1024      100.10 ± 0.14  100.12 ± 0.14  100.07 ± 0.14
σ=1         100.07 ± 0.14  100.11 ± 0.13  100.07 ± 0.14
σ=5         100.10 ± 0.14  100.12 ± 0.13  100.07 ± 0.14
σ=10        100.09 ± 0.14  100.12 ± 0.14  100.08 ± 0.14

E.3. Comparison
Comparing the total complexity $O(MN^2+TN)$ with the baseline $O(MN^2)$, the SL framework adds an $O(TN)$ overhead. In the high-dimensional settings where advanced MCMC methods are typically applied ($N$ is large), and with practical choices for the number of SL iterations $T$ relative to the total MCMC budget $M$, this overhead is often negligible. For example, in our experiments, we used $T\in\{256,512,1024\}$ while the total number of MCMC steps $M$ went up to 10,000. In such scenarios, $TN$ is much smaller than $MN^2$. Thus, the SL framework allows for potential improvements in mixing and convergence properties by transforming the target distribution towards an easier-to-sample form over time, without introducing a prohibitive increase in computational cost compared to the baseline DMCMC methods.

To validate our theoretical complexity analysis, we measured the actual running times across various benchmark problems, as shown in Tables 10 and 11. The results largely confirm our theoretical expectations. For MIS problems, the SL variants show a small overhead (typically 1-6%) compared to their base counterparts. This slight increase is consistent with our complexity analysis, which predicts an additional $O(TN)$ term. In some instances (e.g., MaxClique RB and TWITTER, and some larger MaxCut instances), the SL framework appears marginally faster than the base methods. However, these apparent improvements should be attributed to hardware-specific factors, implementation details, and measurement variability.
Such minor variations are common in empirical runtime measurements and do not contradict our theoretical analysis. Overall, the empirical results demonstrate that the SL framework introduces only marginal computational overhead in practice, which aligns with our theoretical analysis, where $TN\ll MN^2$ in high-dimensional settings.

Table 9. Results of ablative experiments for MaxCut BA 512-600 (left) and MaxCut BA 1024-1100 (right).

MaxCut BA 512-600
Ablation    SL-GWG         SL-PAS         SL-DMALA
CLASSIC     100.87 ± 0.48  100.92 ± 0.48  100.77 ± 0.49
GEOM(1,1)   100.88 ± 0.48  100.92 ± 0.48  100.78 ± 0.48
GEOM(2,1)   100.89 ± 0.49  100.92 ± 0.48  100.79 ± 0.49
K=256       100.87 ± 0.49  100.92 ± 0.48  100.78 ± 0.49
K=512       100.88 ± 0.49  100.92 ± 0.48  100.79 ± 0.49
K=1024      100.89 ± 0.49  100.92 ± 0.48  100.78 ± 0.49
σ=1         100.77 ± 0.49  100.90 ± 0.48  100.76 ± 0.49
σ=5         100.88 ± 0.48  100.92 ± 0.49  100.78 ± 0.49
σ=10        100.88 ± 0.49  100.92 ± 0.48  100.78 ± 0.49

MaxCut BA 1024-1100
Ablation    SL-GWG         SL-PAS         SL-DMALA
CLASSIC     101.53 ± 0.38  101.66 ± 0.38  101.21 ± 0.39
GEOM(1,1)   101.56 ± 0.37  101.66 ± 0.37  101.23 ± 0.39
GEOM(2,1)   101.55 ± 0.37  101.66 ± 0.37  101.23 ± 0.38
K=256       101.55 ± 0.38  101.66 ± 0.37  101.22 ± 0.38
K=512       101.56 ± 0.38  101.66 ± 0.37  101.23 ± 0.38
K=1024      101.56 ± 0.38  101.67 ± 0.37  101.21 ± 0.39
σ=1         101.31 ± 0.37  101.63 ± 0.37  101.16 ± 0.39
σ=5         101.55 ± 0.37  101.66 ± 0.37  101.22 ± 0.39
σ=10        101.55 ± 0.37  101.67 ± 0.37  101.22 ± 0.39

Table 10. Running time (in seconds) on MIS and MaxClique benchmarks for various methods and their SL variants.

Method      ER-0.05  ER-0.10  ER-0.20  ER-0.25  SATLIB   RB        TWITTER
DMALA       23.873   35.511   77.866   103.636  188.901  3025.712  120.634
SL-DMALA    24.997   37.621   78.494   104.367  200.743  3013.018  117.597
PAS         32.648   44.558   86.655   112.305  377.038  3049.668  125.222
SL-PAS      36.134   49.425   90.428   115.630  388.979  3041.619  125.502
GWG         27.249   37.743   79.305   104.461  326.739  3030.762  117.964
SL-GWG      28.056   40.556   81.412   106.017  330.687  3031.967  119.806

Table 11. Running time (in seconds) on MaxCut benchmarks for various methods and their SL variants.

Method      ER-256-300  ER-512-600  ER-1024-1100  BA-256-300  BA-512-600  BA-1024-1100  OPTSICOM
DMALA       167.341     755.462     2814.459      75.166      141.391     271.073       4.772
SL-DMALA    169.672     759.201     2807.121      81.436      148.428     284.511       7.459
PAS         196.667     815.633     2873.359      133.742     267.957     591.858       6.647
SL-PAS      201.665     820.853     2869.082      140.680     279.453     619.575       10.487
GWG         178.039     780.655     2943.244      714.469     2749.896    7872.727      6.476
SL-GWG      181.171     784.497     2936.596      710.698     2695.718    6802.453      9.146

[Figure: sampling-trajectory plots (objective or best ratio versus MCMC steps, 1-10,000) for the MIS, MaxClique, MaxCut-ER, MaxCut-BA, and MaxCut-Optsicom benchmarks; plot data omitted.]

Figure 2. Sampling trajectories: comparison of different samplers across datasets. MCMC samplers (dashed lines) and their SL-based counterparts (solid lines): DMALA/SL-DMALA, GWG/SL-GWG, PAS/SL-PAS. The shaded areas in the trajectories represent the variance across multiple instances.
arXiv:2505.19824v1 [math.ST] 26 May 2025

Weighted Tail Random Variable: A Novel Framework with Stochastic Properties and Applications

Sarikul Islam∗ and Nitin Gupta∗∗
∗,∗∗Department of Mathematics, Indian Institute of Technology Kharagpur, 721302, West Bengal, India.
Email address: ∗sarikul phdmath@kgpian.iitkgp.ac.in, ∗∗nitin.gupta@maths.iitkgp.ac.in.

Abstract
This paper introduces a novel framework to construct the probability density function (PDF) of non-negative continuous random variables. The proposed framework uses two functions: one is the survival function (SF) of a non-negative continuous random variable, and the other is a weight function, which is an increasing and differentiable function satisfying some properties. The resulting random variable is referred to as the weighted tail random variable (WTRV) corresponding to the given random variable and the weight function. We investigate several reliability properties of the WTRV and establish various stochastic orderings between a random variable and its WTRV, as well as between two WTRVs. Using this framework, we construct a WTRV of the Kumaraswamy distribution. We conduct goodness-of-fit tests for two real-world datasets, applied to the Kumaraswamy distribution and its corresponding WTRV. The test results indicate that the WTRV offers a superior fit compared to the Kumaraswamy distribution, which demonstrates the utility of the proposed framework.

Keywords: reliability; time-to-event distribution; failure rate order; likelihood ratio order; goodness-of-fit.
2020 MSC: Primary 62N05; 60E15, Secondary 62N02.

1. Introduction
Probability models are fundamental tools for representing and analyzing a wide range of real-world data, including time-to-event data in survival analysis and failure-time data in reliability engineering.
Many probability distributions, including the beta, exponential, gamma, Kumaraswamy, logistic, and Weibull models, may sometimes fail to capture some features of certain real-world data. Their functional shapes may not consistently provide an optimal fit for specific data, indicating the necessity for the development of more adaptable models. Researchers have proposed various generalizations of classical probability distributions to enhance flexibility and expand applicability across multiple disciplines. Many of these methods may achieve greater flexibility by introducing one or more new parameters to the baseline distribution, employing transformation techniques, or combining existing distributions. Barreto-Souza et al. [3] introduced the beta generalized exponential model, which includes the beta exponential and generalized exponential models as its special cases. They presented a probability model for positive data and thoroughly investigated its properties and parameter estimation methods. Alexander et al. [1] presented the generalized beta-generated (GBG) distributions, which include classical beta-generated, Kumaraswamy-generated, and exponentiated distributions as specific cases. It emphasized the flexibility attributes and improved modeling capabilities by providing an additional shape parameter, increasing central entropy while preserving tail dynamics. Cordeiro and de Castro [5] introduced a family of generalized probability models, which includes conventional models such as the gamma, normal, Weibull, and Gumbel distributions. They expressed the ordinary moments of the generalized distribution as linear functions of the probability weighted moments (PWMs) of the parent distribution, and illustrated the versatility of the new family through applications using real data. Eugene et al. [6] presented a new class of
https://arxiv.org/abs/2505.19824v1
probability models derived from the logit of the beta distribution, highlighting the beta-normal distribution as a particular case. Their research demonstrated flexibility in modeling symmetric, skewed, and bimodal data, supported by maximum likelihood estimates and empirical analysis. Some researchers have proposed probability frameworks for constructing new probability distributions by leveraging one or more existing ones as foundational components. Alzaatreh et al. [2] presented a novel framework for generating continuous distribution families using a transformation framework. The generated family is called the T–X family of probability models. Several well-known probability models are particular cases of the T–X family. For a comprehensive account of the generalized probability models and their applications, see Mandouh et al. [16], Sagrillo et al. [20], Tahir and Cordeiro [22], and the references therein. To the best of our knowledge, no established probabilistic framework has been proposed in the literature that systematically harnesses the SF of time-to-event distributions as a foundational tool for innovating new classes of non-negative continuous distributions.

In this paper, we adopt a novel approach to develop a framework for constructing non-negative continuous probability models. The proposed framework utilizes the SF of a non-negative continuous probability model and incorporates an additional function, referred to as a weight function, to formulate the PDF of another random variable. Such a framework can preserve some characteristics of the input random variable while offering enhanced flexibility and superior goodness-of-fit across distributions with differing asymmetry and tail behavior, compared to the input random variable of the framework.

Consider a random variable $X$ with lower bound $0$, upper bound $u>0$, distribution function $F_X(\cdot)$, and SF $\bar F_X(\cdot)=1-F_X(\cdot)$.
Further, consider a non-constant, increasing and differentiable function $w:[0,u)\to[0,\infty)$ satisfying $w(0)=0$ and
\[
\int_0^{u}\int_0^{x}|w'(t)|\,dt\,dF_X(x)<\infty.
\]
Then we define the random variable $X_w$, referred to as the WTRV of $X$ with weight function $w(\cdot)$, by the PDF $f_{X_w}(\cdot)$ as follows:
\[
f_{X_w}(x)=\frac{w'(x)\,\bar F_X(x)}{E[w(X)]},\qquad 0\le x<u, \tag{1}
\]
where $E[w(X)]$ denotes the expectation of $w(X)$, and if $X$ is unbounded above, then $u=\infty$. This framework enables the systematic construction of several well-known distributions and, in some cases, introduces new shape parameters. Examples include the WTRV of beta, Burr Type XII, gamma, Kumaraswamy, logistic, and Weibull distributions, each obtained by selecting an appropriate non-negative continuous random variable and suitable weight function (see Table 1).

This framework encompasses an important class of random variables in renewal theory, namely, equilibrium random variables. The duration between a specified time $t$ and the following renewal in a renewal process is known as the forward recurrence time, and the equilibrium distribution describes the limiting distribution of the forward recurrence time of the renewal process, also known as the asymptotic equilibrium distribution. Numerous studies have explored equilibrium random variables and their reliability properties in recent decades. Let $X$ be a non-negative continuous random variable with a finite mean $\mu>0$. If we set $w(x)=x$, $x\ge 0$, in the proposed framework given by Equation (1), then the PDF of $X_w$ becomes
\[
f_{X_w}(x)=\frac{\bar F_X(x)}{\mu},\qquad x\ge 0. \tag{2}
\]
This PDF characterizes the equilibrium distribution of the variable
$X$; see Bon and Illayk [4]. We denote the corresponding random variable by $\widetilde X$. Hence, if $w(x)=x$, $x\ge 0$, then the framework constructs the equilibrium random variable $\widetilde X$. One may refer to Bon and Illayk [4], Gupta [9], Gupta and Sankaran [10], Li and Xu [14], Nanda et al. [18], and the references therein for relevant literature on the equilibrium random variable.

We investigate various reliability properties of $X_w$ by considering some reliability properties of $X$ and some conditions on the weight function $w(\cdot)$. We also investigate the conditions on $X$ and $w(\cdot)$ under which the aging properties of $X$, such as decreasing failure rate (DFR), increasing failure rate (IFR), decreasing mean residual life (DMRL), and increasing mean residual life (IMRL), are preserved by $X_w$. Subsequently, we assume that the variables $X$ and $Y$ are stochastically ordered and the function $w(\cdot)$ satisfies certain properties; we then establish various stochastic ordering relationships between $X_w$ and $Y_w$. Furthermore, we investigate conditions on the stochastically ordered variables $X$ and $Y$, as well as on $w(\cdot)$, under which $X_w$ and $Y_w$ preserve the presumed stochastic order between $X$ and $Y$. We specifically examine the preservation of presumed stochastic orderings between $X$ and $Y$, such as failure rate ordering, reversed failure rate ordering, and the usual stochastic ordering, by the corresponding WTRVs $X_w$ and $Y_w$. Some of the results in this paper generalize existing results on equilibrium random variables.

By appropriately selecting the weight function $w(\cdot)$, we may construct a variable $X_w$ that provides superior fits to empirical data in contrast to the variable $X$. As an application of the proposed framework, we constructed a novel distribution, termed the weighted Kumaraswamy distribution, by selecting the weight function $w(x)=x^c$, $0\le x<1$, where $c>0$, and assuming that $X$ follows the Kumaraswamy distribution. The resulting model, with three shape parameters, offers greater flexibility for modeling empirical data.
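A numerical sketch of this construction: Equation (1) with the Kumaraswamy survival function $(1-x^a)^b$ and weight $w(x)=x^c$. The parameter values and quadrature grid are illustrative, not the paper's fitted values.

```python
import numpy as np

def trapezoid(y, x):
    # Small quadrature helper (avoids version-specific numpy trapz/trapezoid names).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def weighted_kumaraswamy_pdf(x, a, b, c):
    """WTRV pdf from Eq. (1): f_{X_w}(x) = w'(x) * SF_X(x) / E[w(X)], where
    X ~ Kumaraswamy(a, b) on [0, 1) and w(x) = x**c with c > 0, so that
    E[w(X)] = integral of w'(t) * SF_X(t) over [0, 1] (integration by parts)."""
    grid = np.linspace(1e-9, 1.0 - 1e-9, 200_001)
    norm = trapezoid(c * grid ** (c - 1.0) * (1.0 - grid**a) ** b, grid)
    return c * x ** (c - 1.0) * (1.0 - x**a) ** b / norm

xs = np.linspace(1e-6, 1.0 - 1e-6, 100_001)
pdf = weighted_kumaraswamy_pdf(xs, a=2.0, b=3.0, c=1.5)
print(f"{trapezoid(pdf, xs):.3f}")  # 1.000  (the pdf integrates to one)
```

With $c=1$ this reduces to the equilibrium pdf of Equation (2), $f_{X_w}(x)=\bar F_X(x)/\mu$.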
Goodness-of-fit tests on monsoon rainfall data from India (1901-2021) suggest that the weighted Kumaraswamy distribution outperforms the Kumaraswamy distribution for modeling the mentioned rainfall data. This demonstrates the utility of the framework in constructing flexible distributions that can capture the characteristics of real-world data with greater accuracy.

The structure of this paper is as follows. Section 2 provides the notations, preliminary concepts, and lemmas useful for subsequent sections. Section 3 provides the methodology of the proposed framework and presents the WTRVs of some conventional distributions in Table 1. Section 4 is dedicated to the analysis of various reliability properties of a random variable and its WTRV. Section 5 establishes various stochastic orders involving random variables and their WTRVs. It also investigates conditions under which several stochastic orders between two random variables are preserved by their corresponding WTRVs. Section 6 provides an application of the proposed framework to demonstrate its usefulness in real-world data modeling. Section 7 concludes the paper.

2. Notations, Preliminary Concepts, and Lemmas
This section presents the notations, key concepts, and lemmas that are useful for the later sections of this paper. The term 'increasing (decreasing) function' refers to a monotonically increasing (monotonically decreasing) function, unless stated otherwise. The PDF of a random variable $X$ is denoted by $f_X(\cdot)$; the
https://arxiv.org/abs/2505.19824v1
cumulative distribution function (CDF) by FX(·); the SF by F̄X(·) = 1 − FX(·); the failure rate function by rX(·) = fX(·)/F̄X(·); the reversed failure rate function by r̃X(·) = fX(·)/FX(·); the mean residual life (MRL) function by mX(t) = ∫_t^∞ F̄X(x) dx / F̄X(t); and the support of the PDF fX(·) of the variable X by SX = (lX, uX), where lX = inf{x ∈ R : fX(x) > 0} and uX = sup{x ∈ R : fX(x) > 0}, and we say that X has the support SX. The random variable Xw, with support S1 = (l1, u1) such that l1 = inf{x ∈ R : fXw(x) > 0} and u1 = sup{x ∈ R : fXw(x) > 0}, denotes the WTRV of X with weight function w(·). The variable Yw, with support S2 = (l2, u2) such that l2 = inf{x ∈ R : fYw(x) > 0} and u2 = sup{x ∈ R : fYw(x) > 0}, denotes the WTRV of Y with weight function w(·). From now on, X and Y denote non-negative, continuous random variables with respective supports SX = (lX, uX) and SY = (lY, uY), unless specified otherwise. The functions w(·), w1(·), and w2(·) denote weight functions, while their first derivatives are denoted by w′(·), w′1(·), and w′2(·), respectively. We now present several definitions that will be used in the subsequent sections of the paper.

Definition 1. (Definition 2 of Liu [15]) Consider the random variable X with lower bound lX ∈ R ∪ {−∞} and upper bound uX ∈ R ∪ {∞}. A non-constant function w(·) is called absolutely integrable with respect to X if it fulfills one of the following conditions:
(i) lX is a finite lower bound of X, and ∫_{lX}^{uX} ∫_{lX}^{x} |w(t)| dt dFX(x) < ∞. Let uX → ∞ when X is unbounded above.
(ii) uX is a finite upper bound of X, and ∫_{lX}^{uX} ∫_{x}^{uX} |w(t)| dt dFX(x) < ∞. Let lX → −∞ when X is unbounded below.

We now present the definitions of several stochastic orderings between two random variables (see Misra et al. [17]).

Definition 2. Let SX = (lX, uX) and SY = (lY, uY) be the respective supports of X and Y.
Then X is said to be less than Y in the
(i) likelihood ratio (lr) ordering (expressed as X ≤lr Y), if fY(x1)fX(x2) ≤ fX(x1)fY(x2) for all x1, x2 ∈ (−∞, ∞) such that x1 ≤ x2; or equivalently, if lX ≤ lY, uX ≤ uY, and fY(·)/fX(·) is increasing on SX ∩ SY;
(ii) failure rate (fr) ordering (expressed as X ≤fr Y), if F̄Y(x1)F̄X(x2) ≤ F̄X(x1)F̄Y(x2) for all x1, x2 ∈ (−∞, ∞) such that x1 ≤ x2; or equivalently, if lX ≤ lY, uX ≤ uY, and rX(x) ≥ rY(x) for all x ∈ (−∞, uX); or equivalently, if F̄Y(·)/F̄X(·) is increasing on (−∞, max(uX, uY));
(iii) reversed failure rate (rfr) ordering (expressed as X ≤rfr Y), if FY(x1)FX(x2) ≤ FX(x1)FY(x2) for all x1, x2 ∈ (−∞, ∞) such that x1 ≤ x2; or equivalently, if lX ≤ lY, uX ≤ uY, and r̃X(x) ≤ r̃Y(x) for all x ∈ (lY, ∞); or equivalently, if FY(·)/FX(·) is increasing on (min(lX, lY), ∞);
(iv) usual stochastic (st) ordering (expressed as X ≤st Y), if F̄X(x) ≤ F̄Y(x) for every x ∈ (−∞, ∞); or equivalently, if lX ≤ lY, uX ≤ uY, and F̄X(x) ≤ F̄Y(x) for all x ∈ SX ∩ SY.

We proceed by defining a log-concave [log-convex] function from Misra et al. [17].

Definition 3. Let f : (a, b) → (0, ∞) be a function such that log f(·) is bounded below [bounded above] in a neighborhood of a point x0 ∈ (a, b). Then f is said to be log-concave [log-convex] on (a, b) if, for all 0 < θ < b − a and x1, x2 such that a < x1 < x2 < b − θ, the following inequality holds: f(x1 + θ)f(x2) ≥ [≤] f(x1)f(x2 + θ).

We now present definitions of some reliability properties of a random variable useful in the subsequent sections of this paper (see Misra et al. [17]).

Definition 4. Consider the variable X with the support SX = (lX, uX). Then, X is said to be an
(i) increasing likelihood ratio (ILR) [decreasing likelihood ratio (DLR)] if the PDF fX(·) is log-concave [log-convex] on SX, or equivalently, if the Glaser function ηX(·) = −f′X(·)/fX(·) is increasing [decreasing] on SX;
(ii) IFR [DFR] if the SF F̄X(·) is log-concave [log-convex] on SX, or equivalently, if rX(·) is increasing [decreasing] on SX;
(iii) IMRL [DMRL] if mX(·) is increasing
[decreasing] on SX. One may refer to Shaked and Shanthikumar [21] and the references therein for details of stochastic orderings and stochastic aging/reliability notions. The following facts on aging properties of a random variable are useful (see Misra et al. [17]):
(i) if fX(·) is log-concave, then X has the IFR property;
(ii) if X has the IFR property, then it has the DMRL property;
(iii) if X has the DFR property, then it has the IMRL property, which further implies uX = ∞, and hence SX = (lX, ∞);
(iv) if fX(·) is log-convex and uX = ∞, then X has the DFR property.

We now present two lemmas from Misra et al. [17], which are used in subsequent sections to establish several theorems.

Lemma 1. Consider the independent variables X and Y having their respective supports SX = (lX, uX) and SY = (lY, uY). Then X ≤st Y if and only if lX ≤ lY, uX ≤ uY, and E[φ2(X, Y)] ≥ E[φ1(X, Y)] for all such functions φ2(·,·), φ1(·,·), and ∆φ21(·,·) = φ2(·,·) − φ1(·,·) that satisfy conditions (i)–(iv) outlined below:
(i) ∆φ21(x, y) ≥ 0, whenever (x, y) ∈ SX × SY and x ≤ y;
(ii) if SX ∩ SY ≠ ∅, then for every fixed x ∈ S̄X, the function ∆φ21(x, y) is non-decreasing in y ∈ [max(x, lY), uY);
(iii) if SX ∩ SY ≠ ∅, then for every fixed y ∈ S̄Y, the function ∆φ21(x, y) is non-increasing in x ∈ [lX, min(y, uX));
(iv) if SX ∩ SY ≠ ∅, then ∆φ21(x, y) + ∆φ21(y, x) ≥ 0 for all (x, y) ∈ (S̄X ∩ S̄Y) × (S̄X ∩ S̄Y) such that x ≤ y;
where S̄X = [lX, uX] and S̄Y = [lY, uY] denote the closures of SX and SY, respectively.

Lemma 2. Consider the independent variables X and Y having their respective supports SX = (lX, uX) and SY = (lY, uY). Then X ≤fr Y if and only if lX ≤ lY, uX ≤ uY, and E[φ2(X, Y)] ≥ E[φ1(X, Y)] for all functions φ2(·,·), φ1(·,·), and ∆φ21(·,·) = φ2(·,·) − φ1(·,·) that satisfy conditions (i)–(iii) outlined below:
(i) ∆φ21(x, y) ≥ 0, for all (x, y) ∈ SX × SY such that x ≤ y;
(ii) if SX ∩ SY ≠ ∅, then for every fixed x ∈ SX ∩ SY, the function ∆φ21(x, y) is increasing in y ∈ [x, uY);
(iii) if SX ∩ SY ≠ ∅, then for all (x, y) ∈ (SX ∩ SY) × (SX ∩ SY) such that x ≤ y, the following inequality holds: ∆φ21(x, y) + ∆φ21(y, x) ≥ 0.

The following lemma from Gupta [8, p.
20] will be used later to establish a theorem concerning the reversed failure rate order.

Lemma 3. Consider the independent variables X and Y having their respective supports SX = (lX, uX) and SY = (lY, uY). Then X ≤rfr Y if and only if lX ≤ lY, uX ≤ uY, and E[φ2(X, Y)] ≥ E[φ1(X, Y)] for all functions φ2(·,·), φ1(·,·), and ∆φ21(·,·) = φ2(·,·) − φ1(·,·) satisfying conditions (i)–(iii) outlined below:
(i) ∆φ21(x, y) ≥ 0, for all (x, y) ∈ SX × SY such that x ≤ y;
(ii) if SX ∩ SY ≠ ∅, then for every fixed y ∈ SX ∩ SY, the function ∆φ21(x, y) is decreasing in x ∈ (lX, y);
(iii) if SX ∩ SY ≠ ∅, then for all (x, y) ∈ (SX ∩ SY) × (SX ∩ SY) such that x ≤ y, the following inequality holds: ∆φ21(x, y) + ∆φ21(y, x) ≥ 0.

3. Methodology of the Framework

This framework constructs the PDF of a new random variable utilizing: (i) the SF F̄X(·) of a non-negative continuous random variable X, and (ii) a weight function w(·) that is increasing and differentiable on (0, uX), satisfies w(0) = 0, and has its first derivative absolutely integrable with respect to X, where uX is the upper bound of X. The foundation of this framework is built on well-established theoretical results, which are presented chronologically to support its development. The formula for the expectation of a non-negative variable X is given as

E[X] = ∫_0^∞ F̄X(x) dx.

This formula is often called an ‘alternative formula’ for computing the expectation of X. A generalization of the above result, given by Hong [11], states that if X has a finite k-th moment, that is, E|X^k| < ∞ for an integer k > 0, then E[X^k] is given by

E[X^k] = ∫_0^∞ k x^{k−1} F̄X(x) dx.   (3)

Ogasawara [19] provides an alternative formula for the expectation of a strictly increasing function of a continuous variable. A further generalization of the alternative formula for computing the expectation of a
function of X is established in Liu [15]. The formula is given by the following lemma (see Theorem 1 of Liu [15]).

Lemma 4. Let the continuous variable X have support (lX, uX) ⊂ R. Let a differentiable function w(·) be defined on (lX, uX) with its first derivative absolutely integrable with respect to X. If lX is real-valued, then

E[w(X)] = w(lX) + ∫_{lX}^{uX} w′(x) F̄X(x) dx.   (4)

Let uX → ∞ when X is unbounded above. This lemma is crucial for the proposed framework. Lemma 4 indicates that the expectation of w(X) can be formulated using the derivative of w(·) and the SF of X, provided that w(·) satisfies the conditions of Lemma 4. This lemma is proved in Liu [15].

3.1. Derivation of the Framework

By setting lX = 0 and w(0) = 0 in Lemma 4, Equation (4) becomes

E[w(X)] = ∫_0^{uX} w′(x) F̄X(x) dx.   (5)

We further assume that the function w(·), referred to as a weight function, is increasing; this condition is in addition to the assumptions on w(·) stated in Lemma 4. Then, clearly, the expectation E[w(X)] is positive. Dividing both sides of Equation (5) by E[w(X)] yields

∫_0^{uX} w′(x) F̄X(x) / E[w(X)] dx = 1.   (6)

We denote the integrand of Equation (6) by fXw(x). The function fXw(·) satisfies the following: (i) both w′(x) and F̄X(x) are non-negative, so fXw(x) ≥ 0 for all x ∈ (0, uX); (ii) by construction, ∫_0^{uX} fXw(x) dx = 1. Hence, fXw(·) defines the PDF of a non-negative continuous probability model; let us denote the corresponding variable by Xw and refer to it as the WTRV of X with weight function w(·). This methodology provides a systematic approach to constructing a new probability model using the SF of a variable X and a weight function w(·) defined on [0, uX). Table 1 applies the proposed framework to construct several well-known probability models.
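Before turning to the table, the construction above is easy to sanity-check numerically. The following sketch (illustrative code, not from the paper; grid sizes and the truncation point 40 are arbitrary choices) builds fXw(x) = w′(x)F̄X(x)/E[w(X)] for X ~ Exponential(λ) with the weight w(x) = x^k, and confirms that the resulting density is the Gamma(k, λ) density listed in Table 1.

```python
import math

# Assumed example: X ~ Exponential(lam), weight w(x) = x**k.
# The framework predicts X_w ~ Gamma(k, lam) (second row of Table 1).
lam, k = 2.0, 3.0
S = lambda x: math.exp(-lam * x)          # survival function of X
w_prime = lambda x: k * x**(k - 1)        # derivative of the weight w(x) = x^k

def integrate(f, a, b, n=100_000):
    # simple midpoint rule, adequate for this smooth integrand
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

Ew = integrate(lambda x: w_prime(x) * S(x), 0.0, 40.0)   # E[w(X)] via Eq. (5)
f_Xw = lambda x: w_prime(x) * S(x) / Ew                  # weighted PDF, Eq. (6)

# (i) the construction is normalized
assert abs(integrate(f_Xw, 0.0, 40.0) - 1.0) < 1e-4

# (ii) it matches the Gamma(k, lam) density lam^k x^{k-1} e^{-lam x} / Gamma(k)
gamma_pdf = lambda x: lam**k * x**(k - 1) * math.exp(-lam * x) / math.gamma(k)
for x in (0.5, 1.0, 2.0):
    assert abs(f_Xw(x) - gamma_pdf(x)) < 1e-4
print("f_Xw coincides with the Gamma(k, lam) density")
```

Swapping in any other survival function and weight from Table 1 reproduces the corresponding row in the same way.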
Table 1: Construction of some distributions employing the proposed framework

| X ∼ | F̄X(x), x ≥ 0 | w(x) | fXw(x) = w′(x)F̄X(x)/E[w(X)] | Xw ∼ |
|---|---|---|---|---|
| Exponential(λ) | e^{−λx}, λ > 0 | w1(x) = x | λ e^{−λx} | Exponential(λ) |
| Exponential(λ) | e^{−λx}, λ > 0 | w2(x) = x^k, k > 0 | λ^k x^{k−1} e^{−λx}/Γ(k) | Gamma(k, λ) |
| Weibull(α, β) | e^{−(x/β)^α}, α, β > 0 | w3(x) = (x/β)^α | (α/β)(x/β)^{α−1} e^{−(x/β)^α} | Weibull(α, β) |
| X with SF F̄X | (1−x)^{β−1}, x ∈ (0,1), β > 1 | w4(x) = x^α, α > 0 | x^{α−1}(1−x)^{β−1}/B(α, β) | Beta(α, β) |
| Exponential(1/2) | e^{−x/2} | w5(x) = x^{k/2}, k > 0 | (1/2)^{k/2} x^{k/2−1} e^{−x/2}/Γ(k/2) | Chi-square(k) |
| Rayleigh(σ) | e^{−x²/(2σ²)}, σ > 0 | w6(x) = x²/(2σ²) | (x/σ²) e^{−x²/(2σ²)} | Rayleigh(σ) |
| Weibull(2, √2σ) | e^{−x²/(2σ²)}, σ > 0 | w7(x) = x/(√2σ) | (√2/(σ√π)) e^{−x²/(2σ²)} | Half-normal(σ) |
| Weibull(p, a) | e^{−(x/a)^p}, a, p > 0 | w8(x) = (x/a)^d, d > 0 | p x^{d−1} e^{−(x/a)^p}/(a^d Γ(d/p)) | Generalized gamma(p, a, d) |
| Burr XII(c, k) | (1+x^c)^{−k}, c, k > 0 | w9(x) = log(1+x^c) | c k x^{c−1}(1+x^c)^{−(k+1)} | Burr XII(c, k) |
| Burr XII(c, k) | (1+x^c)^{−k}, c, k > 0 | w10(x) = x^a, a > 0 | a x^{a−1}(1+x^c)^{−k}/(k B((ck−a)/c, (c+a)/c)) | WTRV of Burr XII(c, k) |
| Kumaraswamy(a, b) | (1−x^a)^b, a, b > 0 | w11(x) = x^a | a(b+1) x^{a−1}(1−x^a)^b | Kumaraswamy(a, b+1) |
| Kumaraswamy(a, b) | (1−x^a)^b, a, b > 0 | w12(x) = x^c, c > 0 | c x^{c−1}(1−x^a)^b/(b B(1+c/a, b)) | Weighted Kumaraswamy(a, b, c) |

4. Reliability Properties of a Random Variable and Its WTRV

Here we investigate several reliability properties of a random variable X and its WTRV Xw. Initially, we assume some reliability properties of X and provide a sufficient condition on the weight function w(·) to construct an Xw that possesses the ILR [DLR] property. Furthermore, we investigate sufficient conditions on X and w(·) that ensure the preservation of reliability properties of X by Xw. In particular, we focus on whether aging properties such as IFR, DFR, IMRL, and DMRL of the variable X are preserved by Xw. This study may help us to construct a WTRV Xw that belongs to a prespecified aging class. The significance of studying the aging relationship between X and Xw lies in the fact that many reliability properties of Xw may be directly inferred from those of X and the properties of w(·).
This may facilitate the reliability analysis of
some random variables more straightforwardly, despite their survival or failure rate functions not admitting closed-form expressions. The following theorem assumes that X is IFR [DFR] and that w′(·) is log-concave [log-convex] to ensure that Xw belongs to the ILR [DLR] aging class.

Proposition 1. If (a) X is IFR [DFR], and (b) w′(·) is a log-concave [log-convex] function on SX, then Xw is ILR [DLR], that is, ηXw(·) = −f′Xw(·)/fXw(·) is an increasing [decreasing] function.

Proof. To prove that Xw is ILR [DLR], we need to show that fXw(·) is a log-concave [log-convex] function on its support S1 = (l1, u1). To proceed, let 0 < θ < u1 − l1, and let x1, x2 ∈ S1 satisfy l1 < x1 < x2 < u1 − θ. Consider the expression ∆ = fXw(x1+θ)fXw(x2) − fXw(x1)fXw(x2+θ). Now, E[w(X)]²∆ is expressed as

E[w(X)]²∆ = w′(x1+θ)F̄X(x1+θ)w′(x2)F̄X(x2) − w′(x1)F̄X(x1)w′(x2+θ)F̄X(x2+θ).

Given that X is IFR [DFR], it follows that F̄X(·) is log-concave [log-convex] on SX. Also, w′(·) is a log-concave [log-convex] function on SX. It follows from Definition 3 that

F̄X(x1+θ)F̄X(x2) ≥ [≤] F̄X(x1)F̄X(x2+θ), and w′(x1+θ)w′(x2) ≥ [≤] w′(x1)w′(x2+θ).

Using these inequalities, we have E[w(X)]²∆ ≥ [≤] 0, which further implies that ∆ ≥ [≤] 0. From Definition 3, we get that fXw(·) is a log-concave [log-convex] function, that is, Xw is ILR [DLR].

The ILR [DLR] attribute of a variable X is crucial when its failure rate and MRL function do not admit closed-form expressions. In 1980, Glaser [7] introduced the function ηX(x) = −f′X(x)/fX(x), x ∈ SX. The Glaser function ηX(·) serves as an effective instrument for revealing the shape of the failure rate and MRL functions for many distributions. Examples include the gamma, beta, and positively truncated normal distributions. Although closed-form expressions for the failure rate and MRL function are not available for these distributions, they admit simple expressions for ηX(·).
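This tractability is easy to see in practice. The sketch below (an assumed illustration, not code from the paper) evaluates ηX(x) = −f′X(x)/fX(x) by a central difference on log fX for the gamma density, whose failure rate has no closed form, and recovers the well-known monotonicity of η in the shape parameter: increasing for shape k > 1, decreasing for k < 1.

```python
import math

def glaser(f, x, h=1e-6):
    # central difference of -log f, i.e. eta(x) = -(d/dx) log f(x)
    return -(math.log(f(x + h)) - math.log(f(x - h))) / (2 * h)

def gamma_pdf(k, lam):
    # Gamma(k, lam) density; eta has the closed form lam - (k-1)/x
    return lambda x: lam**k * x**(k - 1) * math.exp(-lam * x) / math.gamma(k)

xs = [0.5 + 0.1 * i for i in range(40)]

# shape k > 1: eta is increasing on the grid
eta_inc = [glaser(gamma_pdf(3.0, 1.0), x) for x in xs]
assert all(a < b for a, b in zip(eta_inc, eta_inc[1:]))

# shape k < 1: eta is decreasing on the grid
eta_dec = [glaser(gamma_pdf(0.5, 1.0), x) for x in xs]
assert all(a > b for a, b in zip(eta_dec, eta_dec[1:]))
print("Glaser function is monotone for the gamma density, as expected")
```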
Glaser [7] proved that if ηX(·) of X is increasing [decreasing], then the failure rate function rX(·) is also increasing [decreasing], and the MRL function mX(·) of X is decreasing [increasing].

Example 1. Consider X with SF F̄X(x) = (1−x)^{β−1}, 0 < x < 1, where β > 1. Let w(x) = x^α, 0 ≤ x < 1, where α > 1. Then rX(·) of X is obtained as rX(x) = (β−1)/(1−x), 0 ≤ x < 1. As rX(·) is increasing, X is IFR. Now, we have d²/dx² (log w′(x)) = (1−α)/x² < 0, that is, w′(·) is a log-concave function. The PDF of the WTRV Xw is (see Table 1) given by fXw(x) = (Γ(α+β)/(Γ(α)Γ(β))) x^{α−1}(1−x)^{β−1}, 0 < x < 1. The Glaser function ηXw(·) of Xw is obtained as ηXw(x) = (β−1)/(1−x) − (α−1)/x, 0 < x < 1. Since ηXw(·) is an increasing function, Xw exhibits the ILR property.

Setting w(x) = x in Proposition 1, we have w′(x) = 1, which is trivially both log-concave and log-convex. This yields the following corollary.

Corollary 1. Let X be a non-negative continuous random variable. Then, X is IFR [DFR] if and only if X̃ is ILR [DLR].

Proof. The proof follows from the expression of the Glaser function of X̃ as ηX̃(·) = rX(·).

We now assume that X is IFR and establish conditions on w(·) and rX(·) that ensure Xw preserves the IFR property.

Theorem 1. Let X be IFR. If the function w′(·)/rX(·) is both increasing and log-concave on SX, then Xw is also IFR.

Proof. Since w(·) is an increasing function on (0, uX), it follows that uX = u1. For a fixed 0 < θ < u1 − l1, and for x1 and x2 such that l1 − θ < x1 < x2 < u1 = uX, define ∆1 = F̄Xw(x1+θ)F̄Xw(x2) − F̄Xw(x1)F̄Xw(x2+θ).
Then, E[w(X)]²∆1 is expressed as

E[w(X)]²∆1 = [∫_{x1+θ}^{∞} w′(z1)F̄X(z1) dz1][∫_{x2}^{∞} w′(z2)F̄X(z2) dz2] − [∫_{x1}^{∞} w′(z2)F̄X(z2) dz2][∫_{x2+θ}^{∞} w′(z1)F̄X(z1) dz1]
= [∫_{x1}^{uX−θ} (w′(z1+θ)/rX(z1+θ)) fX(z1+θ) dz1][∫_{x2}^{uX} (w′(z2)/rX(z2)) fX(z2) dz2] − [∫_{x1}^{uX} (w′(z2)/rX(z2)) fX(z2) dz2][∫_{x2}^{uX−θ} (w′(z1+θ)/rX(z1+θ)) fX(z1+θ) dz1].

Consider the independent variables Z1 and Z2, with PDFs fX(z1+θ) and fX(z2), respectively. Let Ki(·) represent the CDF of Zi, where lZi = inf{z ∈ R : Ki(z) > 0}, uZi = sup{z ∈ R : Ki(z) < 1}, and SZi = (lZi, uZi), i = 1, 2. Then the supports of Z1 and Z2 are given by SZ1 = (lX−θ, uX−θ) and SZ2 = (lX, uX), respectively. Given that X exhibits the IFR property, it follows that

rZ1(x) = fX(x+θ)/F̄X(x+θ) = rX(x+θ) ≥ rX(x) = fX(x)/F̄X(x) = rZ2(x),

which implies that Z1 ≤fr Z2. Now we verify
the conditions (i)–(iii) of Lemma 2 for Z1 and Z2 to show that ∆1 ≥ 0. Since lX−θ < x1 < x2 < uX−θ < uX, it follows that SZ1 ∩ SZ2 = (lX−θ, uX−θ) ∩ (lX, uX) = (lX, uX−θ) ≠ ∅. Now define

φ2(x, y) = (w′(x+θ)w′(y)/(rX(x+θ)rX(y))) I(x1 ≤ x < uX−θ) I(x2 ≤ y < uX), and
φ1(x, y) = (w′(x+θ)w′(y)/(rX(x+θ)rX(y))) I(x2 ≤ x < uX−θ) I(x1 ≤ y < uX),

where (x, y) ∈ SZ1 × SZ2. Then, utilizing the independence of Z1 and Z2, E[w(X)]²∆1 is expressed as E[w(X)]²∆1 = E[φ2(Z1, Z2)] − E[φ1(Z1, Z2)]. For (x, y) ∈ SZ1 × SZ2, let ∆φ21(x, y) = φ2(x, y) − φ1(x, y); then

∆φ21(x, y) = w′(x+θ)w′(y)/(rX(x+θ)rX(y)) if x1 ≤ x < x2 and x2 ≤ y < uX; = −w′(x+θ)w′(y)/(rX(x+θ)rX(y)) if x2 ≤ x < uX−θ and x1 ≤ y < x2; = 0 otherwise.

It is evident that ∆φ21(x, y) ≥ 0 for all (x, y) ∈ SZ1 × SZ2 with x ≤ y. Using the non-negativity and the increasing property of w′(·)/rX(·) on SZ2 = SX, we have that for each fixed x ∈ SZ1 ∩ SZ2 = (lX, uX−θ), ∆φ21(x, y) is an increasing function in y ∈ [x, uZ2] = [x, uX]. Further, for all (x, y) ∈ (SZ1 ∩ SZ2) × (SZ1 ∩ SZ2) with x ≤ y, using the log-concavity of w′(·)/rX(·) over SX we get

∆φ21(x, y) + ∆φ21(y, x) = w′(x+θ)w′(y)/(rX(x+θ)rX(y)) − w′(x)w′(y+θ)/(rX(x)rX(y+θ)) ≥ 0.

Thus, the functions φ1(·,·), φ2(·,·), and ∆φ21(·,·) satisfy conditions (i)–(iii) of Lemma 2, with the independent random variables Z1 and Z2 satisfying Z1 ≤fr Z2. Hence, by applying Lemma 2, one can deduce that E[φ2(Z1, Z2)] ≥ E[φ1(Z1, Z2)]. Consequently, we have ∆1 = F̄Xw(x1+θ)F̄Xw(x2) − F̄Xw(x1)F̄Xw(x2+θ) ≥ 0, which establishes the log-concavity of F̄Xw(·) on SXw by Definition 3. Therefore, Xw possesses the IFR property.

Example 2. Let X follow the standard uniform distribution. Then the failure rate rX(·) is given by rX(x) = 1/(1−x), 0 < x < 1. Let w(x) = −log(1−x²), 0 < x < 1. Then w′(x)/rX(x) = 2x(1−x)/(1−x²) = 2x/(1+x), 0 < x < 1, which is an increasing function. Moreover, the second derivative d²/dx² log(w′(x)/rX(x)) = −1/x² + 1/(1+x)² < 0 for 0 < x < 1. Therefore, w′(·)/rX(·) is log-concave over SX = (0, 1). Hence, by applying Theorem 1, we conclude that Xw exhibits the IFR property.
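Example 2 can also be verified numerically. The sketch below (illustrative, not the authors' code; the grid is an arbitrary choice) normalizes w′(x)F̄X(x) = 2x/(1+x) on (0, 1) and checks on a grid that the resulting failure rate of Xw is increasing, as Theorem 1 guarantees.

```python
# X ~ Uniform(0,1), w(x) = -log(1 - x^2), so w'(x) * S_X(x) = 2x/(1+x).
def integrate(f, a, b, n=20_000):
    # midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

g = lambda x: 2 * x / (1 + x)          # unnormalized weighted density
Z = integrate(g, 0.0, 1.0)             # normalizing constant E[w(X)] = 2 - 2 log 2
f_Xw = lambda x: g(x) / Z

def failure_rate(x):
    # r(x) = f(x) / S(x), with S(x) computed by integrating the density
    return f_Xw(x) / integrate(f_Xw, x, 1.0)

xs = [0.05 * i for i in range(1, 19)]  # grid in (0, 0.9]
rates = [failure_rate(x) for x in xs]
assert all(a < b for a, b in zip(rates, rates[1:]))
print("r_{X_w} is increasing on the grid: X_w is IFR, matching Theorem 1")
```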
Now we assume that X is DFR and establish conditions on w(·) and rX(·) ensuring that Xw is also DFR.

Theorem 2. Let X be DFR. If the function w′(·)/rX(·) is increasing and log-convex on SX, then Xw is also DFR.

Proof. Since X exhibits the DFR property, the upper bound of X is infinity, i.e., uX = ∞, which implies that SX = (lX, ∞). Furthermore, from the increasing property of w(·) over (lX, ∞), we get that the support of Xw is SXw = (l1, ∞). For a fixed θ > 0 and for x1, x2 satisfying l1 < x1 < x2 < ∞, consider the following expression:

∆2 = F̄Xw(x1)F̄Xw(x2+θ) − F̄Xw(x1+θ)F̄Xw(x2).

Then, lX ≤ l1 < x1 < x2 < ∞. After some manipulations, we obtain

E[w(X)]²∆2 = [∫_{x1}^{∞} (w′(z1)/rX(z1)) fX(z1) dz1][∫_{x2}^{∞} (w′(z2+θ)/rX(z2+θ)) fX(z2+θ) dz2] − [∫_{x1}^{∞} (w′(z2+θ)/rX(z2+θ)) fX(z2+θ) dz2][∫_{x2}^{∞} (w′(z1)/rX(z1)) fX(z1) dz1].

Let Z1 and Z2 be two independent random variables whose PDFs are given by fZ1(z1) = fX(z1) for z1 > lX, and fZ2(z2) = fX(z2+θ) for z2 > lX − θ, respectively. Denote by Ki(·) the CDF of Zi, i = 1, 2, such that lZi = inf{zi ∈ R : Ki(zi) > 0}, uZi = sup{zi ∈ R : Ki(zi) < 1}, and SZi = (lZi, uZi), i = 1, 2. Then SZ1 ∩ SZ2 = SX = (lX, ∞). Since X has the DFR property, it follows that Z1 ≤fr Z2. Now, using the independence of Z1 and Z2, we have

E[w(X)]²∆2 = E[(w′(Z1)/rX(Z1))(w′(Z2+θ)/rX(Z2+θ)) I(Z1 ≥ x1) I(Z2 ≥ x2)] − E[(w′(Z1)/rX(Z1))(w′(Z2+θ)/rX(Z2+θ)) I(Z1 ≥ x2) I(Z2 ≥ x1)].

Define φ2(x, y) = (w′(x)w′(y+θ)/(rX(x)rX(y+θ))) I(x ≥ x1) I(y ≥ x2), and φ1(x, y) = (w′(x)w′(y+θ)/(rX(x)rX(y+θ))) I(x ≥ x2) I(y ≥ x1), where (x, y) ∈ SZ1 × SZ2. Then E[w(X)]²∆2 is expressed as E[w(X)]²∆2 = E[φ2(Z1, Z2)] − E[φ1(Z1, Z2)]. For (x, y) ∈ SZ1 × SZ2, define ∆φ21(x, y) = φ2(x, y) − φ1(x, y); then we have

∆φ21(x, y) = w′(x)w′(y+θ)/(rX(x)rX(y+θ)) if x1 ≤ x ≤ x2 and y ≥ x2; = −w′(x)w′(y+θ)/(rX(x)rX(y+θ)) if x ≥ x2 and x1 ≤ y ≤ x2; = 0 otherwise.

Observe that the intersection of the supports is SZ1 ∩ SZ2 = (lX, ∞) ≠ ∅.
We now verify conditions (i)–(iii) of Lemma 2 for the independent variables Z1 and Z2. From the definition of ∆φ21(·,·), we have ∆φ21(x, y) ≥ 0 for all (x, y) ∈ SZ1 × SZ2 such that x ≤ y. Since w′(·)/rX(·) is an increasing function on SX, for each fixed x ∈ SZ1 ∩ SZ2, ∆φ21(x, y) is increasing in y ∈ [x, ∞). Moreover, for x ≤ y, the log-convexity of w′(·)/rX(·) on SX implies that

∆φ21(x, y) + ∆φ21(y, x) = (w′(x)/rX(x))·(w′(y+θ)/rX(y+θ)) − (w′(x+θ)/rX(x+θ))·(w′(y)/rX(y)) ≥ 0,

for (x, y) ∈ (SZ1 ∩ SZ2) × (SZ1 ∩ SZ2). Now, since Z1 and Z2 are independent and Z1 ≤fr Z2, applying Lemma 2 to Z1 and Z2, it follows that

E[w(X)]²∆2 = E[φ2(Z1, Z2)] − E[φ1(Z1, Z2)] ≥ 0.

Consequently, ∆2 = F̄Xw(x1)F̄Xw(x2+θ) − F̄Xw(x1+θ)F̄Xw(x2) ≥ 0. From Definition 3, we get that F̄Xw(·) is log-convex on SXw. Hence, Xw exhibits the DFR property.

Example 3. Let X follow the Pareto distribution with SF F̄X(x) = (1+x)^{−α}, x ≥ 0, α > 0. Let w(x) = e^{(x+1)²} − e, x ≥ 0. Then w′(·) and rX(·) are given by w′(x) = 2(x+1)e^{(x+1)²}, x ≥ 0, and rX(x) = α/(x+1), x ≥ 0, respectively. We observe that w′(x)/rX(x) = 2(x+1)² e^{(x+1)²}/α, x ≥ 0, is increasing. Furthermore, we observe that d²/dx² log(w′(x)/rX(x)) = 2 − 2/(x+1)² ≥ 0 for x ≥ 0, which ensures the log-convexity of w′(·)/rX(·). Hence, by applying Theorem 2, we conclude that Xw exhibits the DFR property.

We now assume that X is DMRL and establish conditions on w(·), rX(·), and the
MRL function mX(·) to ensure that Xw has the DMRL property.

Theorem 3. Let X be DMRL. If w′(·)/rX(·) is increasing and log-concave on SX and mX(·) is log-convex on SX, then Xw is IFR and hence DMRL.

Proof. By applying Theorem 2.5(a) of Misra et al. [17], which assumes the DMRL property of X and the log-convexity of mX(·) on SX, we get that X has the IFR property. Now, applying the IFR property of X and the increasing and log-concave properties of w′(·)/rX(·) on SX to Theorem 1, we get that Xw is IFR and hence DMRL.

We now assume that X is IMRL and establish conditions on w(·), rX(·), and mX(·) to ensure that Xw is also IMRL.

Theorem 4. Let X be IMRL. If w′(·)/rX(·) is increasing and log-convex on SX, and the function mX(·) is log-concave on SX, then Xw is DFR and hence IMRL.

Proof. By applying Theorem 2.5(b) of Misra et al. [17], which assumes the IMRL property of X and the log-concavity of mX(·) on SX, we get that X has the DFR property. Now, applying the DFR property of X and the increasing and log-convex properties of w′(·)/rX(·) on SX to Theorem 2, we conclude that Xw is DFR and hence IMRL.

5. Stochastic Ordering Properties of Random Variables and Their WTRVs

This section focuses on several stochastic ordering properties involving X and its WTRV Xw. At first, we establish conditions on X, Y, w1(·), and w2(·) under which the WTRVs Xw1 and Yw2 are stochastically ordered by the likelihood ratio ordering. We also establish the conditions under which the likelihood ratio ordering holds between X and Xw as well as between X̃ and Xw. We then establish the usual stochastic ordering between X and Y when their WTRVs Xw1 and Yw2 are stochastically ordered by the reversed failure rate ordering. Furthermore, we examine and find conditions under which various stochastic orderings between two random variables X and Y are preserved between their respective WTRVs Xw and Yw. We consider orderings such as the usual stochastic order, the failure rate order, and the reversed failure rate order for this purpose.
This study may provide a tool to construct WTRVs for X and Y when some ordering property between Xw and Yw is prespecified. We first assume the failure rate order between X and Y, and conditions on w1(·) and w2(·) that ensure the likelihood ratio order between Xw1 and Yw2. We present the following theorem.

Theorem 5. (i) If l1 ≤ l2, u1 ≤ u2, X ≤fr Y, and w′2(·)/w′1(·) is increasing on S1 ∩ S2, with w′1(x) ≠ 0 for all x ∈ S1, then Xw1 ≤lr Yw2.
(ii) If SX = S1, SY = S2, Xw1 ≤lr Yw2, and w′1(·)/w′2(·) is an increasing function on S1 ∩ S2, with w′2(x) ≠ 0 for all x ∈ S2, then X ≤fr Y.

Proof. (i) It is given that l1 ≤ l2 and u1 ≤ u2. For x ∈ S1 ∩ S2, consider the following relation obtained by using the PDFs of Xw1 and Yw2:

fYw2(x)/fXw1(x) = (w′2(x)F̄Y(x))/(w′1(x)F̄X(x)) · E[w1(X)]/E[w2(Y)].   (7)

Here, X ≤fr Y implies that F̄Y(·)/F̄X(·) is increasing on SX ∩ SY, and hence on S1 ∩ S2, since S1 ∩ S2 ⊂ SX ∩ SY. Further, it is given that w′2(·)/w′1(·) is increasing on S1 ∩ S2 and that w′1(x) ≠ 0 for all x ∈ S1. Hence, fYw2(·)/fXw1(·) is increasing on S1 ∩ S2, that is, Xw1 ≤lr Yw2.
(ii) The proof is similar to part (i) and follows directly from the relation below, obtained from (7):

F̄Y(x)/F̄X(x) = (fYw2(x)/fXw1(x)) · (w′1(x)/w′2(x)) · E[w2(Y)]/E[w1(X)].   (8)

Example 4. (i) Let X and Y follow exponential distributions with rate parameters λ1 and λ2, respectively, where 0 < λ2 < λ1. Consider w1(x) = x^{k1} and w2(x) = x^{k2}, x ≥ 0, where 0 < k1 < k2. We first observe that

F̄Y(x)/F̄X(x) = e^{(λ1−λ2)x}, x > 0, and w′2(x)/w′1(x) = (k2/k1) x^{k2−k1}, x > 0,

are increasing functions of x.
Consequently, X ≤fr Y and w′2(·)/w′1(·) is increasing. Observe that l1 = l2 = 0 and u1 = u2 = ∞. Now, by applying Theorem 5, it follows that Xw1 ≤lr Yw2.

In Theorem 5, if we set w1(x) = w2(x) = x, then the WTRVs Xw1 and Yw2 become the equilibrium variables X̃ and Ỹ of X and Y, respectively. We then have the following corollary.

Corollary 2. Suppose X and Y are non-negative random variables. Then X ≤fr Y if and only if X̃ ≤lr Ỹ.

Remark 1. The above corollary was proved by Gupta [9].

We now assume that X is IFR [DFR] and impose conditions on w(·) and rX(·) to establish a likelihood ratio order between X and Xw. We state the following proposition.

Proposition 2. Let X be defined on SX = S1 = (l1, u1). If (i) X is IFR [DFR], and (ii) w(·) is a strictly increasing and concave [convex] function on SX, then Xw ≤lr X [X ≤lr Xw].

Proof. Since SX = S1, it follows that lX = l1 and uX = u1. Now, for x ∈ SX ∩ S1, we consider

fXw(x)/fX(x) = (F̄X(x) w′(x))/(fX(x) E[w(X)]) = (1/rX(x)) × (w′(x)/E[w(X)]).

The IFR [DFR] property of X implies that 1/rX(·) is decreasing [increasing] on SX. Using the strictly increasing and concave [convex] properties of w(·), we get that w′(·) is decreasing [increasing] on SX. Therefore, the function fXw(·)/fX(·) is decreasing [increasing] on SX ∩ S1, implying that Xw ≤lr X [X ≤lr Xw].

Example 5. Let X have CDF F(x) = 1 − e^{−x²}, x > 0. Consider w(x) = x^k, x > 0, where 0 < k < 1. Then rX(·) is given by rX(x) = 2x, x > 0. Therefore, X is IFR. Now, for 0 < k < 1, we obtain w′′(x) = k(k−1)x^{k−2} < 0 for x > 0. So w(·) is strictly increasing and concave. The PDFs of X and its WTRV Xw are fX(x) = 2x e^{−x²} and fXw(x) = k x^{k−1} e^{−x²}/Γ(k/2 + 1), x ≥ 0, respectively. Now, by applying Proposition 2, we get Xw ≤lr X.

In Proposition 2, if we take w(x) = x, the WTRV Xw becomes X̃. We present the following corollary.

Corollary 3. X is IFR [DFR] if and only if X̃ ≤lr X [X ≤lr X̃].

Remark 2. The proof of the above corollary follows from fX(·)/fX̃(·) = E[X] · rX(·); see Bon and Illayk [4].
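Example 5 admits a direct numerical check: by Definition 2(i), Xw ≤lr X holds when fXw(·)/fX(·) is decreasing. The sketch below (illustrative code with the arbitrary choice k = 0.5, not from the paper) evaluates this ratio on a grid using the two closed-form densities from the example.

```python
import math

# Example 5: F_X(x) = 1 - exp(-x^2), weight w(x) = x^k with 0 < k < 1.
# Proposition 2 predicts X_w <=_lr X, i.e. f_{X_w}/f_X is decreasing.
k = 0.5
f_X  = lambda x: 2 * x * math.exp(-x * x)
f_Xw = lambda x: k * x**(k - 1) * math.exp(-x * x) / math.gamma(k / 2 + 1)

xs = [0.1 * i for i in range(1, 30)]
ratio = [f_Xw(x) / f_X(x) for x in xs]   # equals (k / (2 Gamma(k/2+1))) x^{k-2}
assert all(a > b for a, b in zip(ratio, ratio[1:]))
print("f_{X_w}/f_X is decreasing: X_w <=_lr X, matching Proposition 2")
```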
We now impose conditions on X and w(·) to establish a likelihood ratio order between X̃ and Xw. We state the following proposition.

Proposition 3. Let X̃ be defined on SX̃ = S1 = (l1, u1). Then X̃ ≤lr Xw [Xw ≤lr X̃] if and only if w(·) is a convex [concave] function on S1.

Proof. From SX̃ = S1, we get lX̃ = l1 and uX̃ = u1. The PDF fX̃(·) is defined by Equation (2). For x ∈ SX̃ ∩ S1, consider

fXw(x)/fX̃(x) = (w′(x)F̄X(x)/E[w(X)]) × (E[X]/F̄X(x)) = w′(x) E[X]/E[w(X)].

Now, fXw(·)/fX̃(·) is increasing [decreasing] on SX̃ ∩ S1 if and only if w′(·) is increasing [decreasing] on S1. Consequently, X̃ ≤lr Xw [Xw ≤lr X̃] if and only if w(·) is convex [concave] on S1.

The theorem below provides conditions on w1(·) and w2(·) under which the reversed failure rate ordering between Xw1 and Yw2 is transferred to the usual stochastic ordering between X and Y.

Theorem 6. Let X and Y be defined on SX = S1 = (l1, u1) and SY = S2 = (l2, u2), respectively. If Xw1 ≤rfr Yw2, w′1(·)/w′2(·) is an increasing function, and w′2(x), w′1(0) ≠ 0 on SX ∩ SY, then X ≤st Y.

Proof. Since Xw1 ≤rfr Yw2, SX = S1, and SY = S2, we have lX = l1 ≤ l2 = lY and uX = u1 ≤ u2 = uY. Now, Xw1 ≤rfr Yw2 implies that the function ζ(·) defined by

ζ(t) = ∫_0^t w′2(x)F̄Y(x) dx / ∫_0^t w′1(x)F̄X(x) dx,  t ∈ S1 ∩ S2,

is increasing in t. Thus, for t ≥ z > 0, we have ζ(t) ≥ ζ(z), which gives

∫_0^t w′2(x)F̄Y(x) dx / ∫_0^t w′1(x)F̄X(x) dx ≥ ∫_0^z w′2(x)F̄Y(x) dx / ∫_0^z w′1(x)F̄X(x) dx.   (9)

Since ζ(·) is increasing, we have ζ′(t) ≥ 0 for t ∈ S1 ∩ S2, that is,

[w′2(t)F̄Y(t) ∫_0^t w′1(x)F̄X(x) dx − w′1(t)F̄X(t) ∫_0^t w′2(x)F̄Y(x) dx] / (∫_0^t w′1(x)F̄X(x) dx)² ≥ 0,

or, (w′2(t)F̄Y(t))/(w′1(t)F̄X(t)) ≥ ∫_0^t w′2(x)F̄Y(x) dx / ∫_0^t w′1(x)F̄X(x) dx.   (10)

Using inequalities (9) and (10), we obtain

(w′2(t)F̄Y(t))/(w′1(t)F̄X(t)) ≥ lim_{z→0} ∫_0^z w′2(x)F̄Y(x) dx / ∫_0^z w′1(x)F̄X(x) dx.
(11) Since the right-hand side of inequality (11) takes the form 0/0, the application of L'Hôpital's rule yields

(w′2(t)F̄Y(t))/(w′1(t)F̄X(t)) ≥ (w′2(0)F̄Y(0))/(w′1(0)F̄X(0)) = w′2(0)/w′1(0),

or, F̄Y(t)/F̄X(t) ≥ (w′1(t)/w′2(t)) · (w′2(0)/w′1(0)). Using the
fact that w′1(·)/w′2(·) is increasing on SX ∩ SY, we obtain F̄Y(t) ≥ F̄X(t) for all t ∈ S1 ∩ S2 = SX ∩ SY. Hence, X ≤st Y.

Example 6. Let X and Y have CDFs FX(x) = 1 − e^{−2x}, x > 0, and FY(x) = 1 − e^{−x}, x > 0, respectively. Let w1(x) = x² and w2(x) = x, x > 0. Then observe that w′1(x)/w′2(x) = 2x, x > 0, is increasing. Furthermore, the function FYw2(x)/FXw1(x) = 2e^{2x} − 2(1+x)e^{x}, x > 0, is increasing on (0, ∞). Therefore, Xw1 ≤rfr Yw2. Here, SX = S1 = SY = S2 = (0, ∞). Applying Theorem 6, we conclude that X ≤st Y.

If we set w1(x) = w2(x) = x in Theorem 6, then the WTRVs Xw1 and Yw2 become X̃ and Ỹ, respectively. We state the following corollary.

Corollary 4. If X̃ ≤rfr Ỹ, then X ≤st Y.

Remark 3. The above corollary was proved by Li and Xu [14].

We now consider the minimum of two independent variables X and Y, denoted by X∧Y, and establish a likelihood ratio order between Xw ∧ Yw and (X∧Y)w, the WTRV of X∧Y.

Theorem 7. Let X and Y be independent with supports SX = SY, and let Xw and Yw also be independent. If Xw ≤fr [≥fr] X and Yw ≤fr [≥fr] Y, then Xw ∧ Yw ≤lr [≥lr] (X∧Y)w.

Proof. Since SX = SY and a common w(·) is used to formulate Xw and Yw, we have S1 = (l1, u1) = S2 = (l2, u2). For x ∈ S1 ∩ S2, consider the PDFs

fXw∧Yw(x) = fXw(x)F̄Yw(x) + fYw(x)F̄Xw(x), and f(X∧Y)w(x) = w′(x)F̄X(x)F̄Y(x)/E[w(X∧Y)].

Now, for x ∈ S1 ∩ S2, we have

fXw∧Yw(x)/f(X∧Y)w(x) = [fXw(x)F̄Yw(x) + fYw(x)F̄Xw(x)]/(w′(x)F̄X(x)F̄Y(x)) × E[w(X∧Y)]
= [w′(x)F̄X(x)F̄Yw(x)/(E[w(X)] w′(x)F̄X(x)F̄Y(x)) + w′(x)F̄Y(x)F̄Xw(x)/(E[w(Y)] w′(x)F̄X(x)F̄Y(x))] × E[w(X∧Y)]
= [F̄Yw(x)/(E[w(X)] F̄Y(x)) + F̄Xw(x)/(E[w(Y)] F̄X(x))] × E[w(X∧Y)].   (12)

Thus, using Equation (12), the assumptions of the theorem, and the fact that the sum of two non-negative decreasing [increasing] functions is again a decreasing [increasing] function, the theorem follows.

If we set w(x) = x in Theorem 7, then the conditions X̃ ≤fr X and Ỹ ≤fr Y become equivalent to X and Y being DMRL. We state the following corollary.

Corollary 5. Let X and Y be independent with SX = SY, and let X̃ and Ỹ also be independent.
If X and Y are DMRL, then X̃ ∧ Ỹ is less than the equilibrium variable of X ∧ Y in the likelihood ratio order.

Remark 4. The above corollary was proved by Bon and Illayk [4]. Theorem 7 can be generalized to more than two variables. We provide the following corollary.

Corollary 6. Let X1, X2, ..., Xn be independent variables with common support, and let X1w, X2w, ..., Xnw also be independent. If Xiw ≤fr [≥fr] Xi for i = 1, 2, ..., n, then X1w ∧ ··· ∧ Xnw ≤lr [≥lr] (X1 ∧ ··· ∧ Xn)w.

Remark 5. The proof of the above corollary is similar to the proof of Theorem 7.

The following theorem provides conditions under which the usual stochastic ordering between X and Y remains preserved in their WTRVs Xw and Yw.

Theorem 8. Let X and Y be two independent variables. If w′1(·)/rX(·) is decreasing on S̄X = [lX, uX], w′2(·)/rY(·) is increasing on S̄Y = [lY, uY], and X ≤st Y, then Xw1 ≤st Yw2.

Proof. Since X ≤st Y, it follows that lX ≤ lY and uX ≤ uY. Furthermore, since by definition w1(·) and w2(·) have the same supports as the variables X and Y, i.e., SX = (lX, uX) and SY = (lY, uY), respectively, it follows that l1 = lX and u2 = uY. Hence, we have lX = l1 ≤ lY ≤ l2 and u1 ≤ uX ≤ uY = u2. Thus, l1 ≤ l2 and u1 ≤ u2. If S1 ∩ S2 = ∅, i.e., l1 = lX ≤ u1 < l2 ≤ uY = u2, then Xw1 ≤st Yw2 holds trivially. Now, if S1 ∩ S2 ≠ ∅, i.e., l1 = lX ≤ l2 < u1 ≤ uY = u2, then S1 ∩ S2 = (l2, u1). To prove Xw1 ≤st Yw2, we fix θ ∈ S1 ∩ S2 and consider ∆3 = F̄Yw2(θ) − F̄Xw1(θ). It then follows that l1 ≤ l2 < θ < u1 ≤ u2, and the expression for E[w1(X)]E[w2(Y)]∆3 is

E[w1(X)]E[w2(Y)]∆3 = [∫_{l1}^{u1} w′1(x)F̄X(x) dx][∫_{θ}^{u2} w′2(y)F̄Y(y) dy] − [∫_{l2}^{u2} w′2(y)F̄Y(y) dy][∫_{θ}^{u1} w′1(x)F̄X(x) dx]
= [∫_{lX}^{uX} (w′1(x)/rX(x)) fX(x) dx][∫_{θ}^{uY} (w′2(y)/rY(y)) fY(y) dy] − [∫_{lY}^{uY} (w′2(y)/rY(y)) fY(y) dy][∫_{θ}^{uX} (w′1(x)/rX(x)) fX(x) dx].
We define the functions φ2(x,y)=w′ 1(x)w′ 2(y) rX(x)rY(y)·I(θ≤y≤uY), φ 1(x,y)=w′ 1(x)w′ 2(y) rX(x)rY(y)·I(θ≤x≤uX), and∆φ21(x,y)=φ2(x,y)−φ1(x,y)for(x,y)∈SX×SY. Under these definitions and the independence of XandY, expression for E[w1(X)]E[w2(Y)]∆3becomes E[w1(X)]E[w2(Y)]∆3=/integraldisplayuX lX/integraldisplayuY θw′ 1(x)w′ 2(y) rX(x)rY(y)fX(x)fY(y))dydx −/integraldisplayuY lY/integraldisplayuX θw′ 1(x)w′ 2(y) rX(x)rY(y)fY(y)fX(x)dxdy =E[φ2(X,Y)]−E[φ1(X,Y)], 18 and for (x,y)∈SX×SY, we obtain ∆φ21(x,y)=w′ 1(x)w′ 2(y) rX(x)rY(y),iflX<x<θandθ≤y<uY −w′ 1(x)w′ 2(y) rX(x)rY(y),ifθ≤x<uXand lY<y<θ 0, otherwise. We now verify conditions (i)–(iv) of Lemma
1 for the independ ent variable XandY. Since SX∩SY=(lY,uX)/nequal∅, it follows that∆φ21(x,y)≥0for every (x,y)∈SX×SYwith x≤y. Using the non-negative and decreasing properties of w′ 1(·)/rX(·)onSX, and non-negative and increasing properties of w′ 2(·)/rY(·)onSY, we have for each fixed y∈SY, the∆φ21(x,y)is decreasing in xover the interval [lX,min( y,uX)); similarly, for each fixed x∈SX,∆φ21(x,y)is increasing in yon the interval [max( x,lY),uY). Furthermore, for all (x,y)in(SX∩SY)×(SX∩SY) with x≤y, we obtain ∆φ21(x,y)+∆φ21(y,x)=w′ 1(x)w′ 2(y) rX(x)rY(y)−w′ 1(y)w′ 2(x) rX(y)rY(x)≥0, using the monotonicity of w′ 1(·)/rX(·)andw′ 2(·)/rY(·). Since X≤stY, the independent variable XandYsatisfy all the conditions of Lemma 1, we have E[w1(X)]E[w2(Y)]∆3=E(φ2(X,Y))−E(φ1(X,Y))≥0, which further implies that FYw2(θ)≥FXw1(θ)onS1∩S2. This completes the proof. The following theorem provides conditions under which the f ailure rate order between X andYis preserved by XwandYw. Theorem 9. LetXandYbe two independent variables. If l1≤l2,u1≤u2,w′ 1(·)/rX(·)is a decreasing function on SX=(lX,uX),w′ 2(·)/rY(·)is an increasing function on SY=(lY,uY), andX≤f rY, then Xw1≤f rYw2. Proof. From the ordering X≤f rY, we have lX≤lYanduX≤uY. In the case of S1∩S2=∅, i.e., when l1<u1≤l2<u2,Xw1≤f rYw2holds trivially. Now consider the case where S1∩S2/nequal∅, that is, l1≤l2<u1≤u2, so that S1∩S2=(l2,u1). Now, for x∈(−∞,l2], we have FYw2(x)=1, and the ratio FYw2(x)/FXw1(x)is increasing. For x∈[u1,u2), we have FXw1(x)=0, and the ratio is infinity, assuming that c/0=∞forc>0. Therefore, the ratio FYw2(x)/FXw1(x)is increasing on (−∞,l2]∪[u1,u2). To establish that the ratio FYw2(x)/FXw1(x)is increasing on S1∩S2=(l2,u1), take arbitrary sandtwith l2<s<t<u1, and consider the expression ∆4=FYw2(t)FXw1(s)−FYw2(s)FXw1(t). 
Then, using the PDFs of Xw1andYw2, we have the expression for E[w1(X)]E[w2(Y)]∆4as follows E[w1(X)]E[w2(Y)]∆4=/bracketleftbigg/integraldisplayu2 tw′ 2(y)FY(y) dy/bracketrightbigg/bracketleftbigg/integraldisplayu1 sw′ 1(x)FX(x) dx/bracketrightbigg −/bracketleftbigg/integraldisplayu2 sw′ 2(y)FY(y) dy/bracketrightbigg/bracketleftbigg/integraldisplayu1 tw′ 1(x)FX(x) dx/bracketrightbigg =/bracketleftbigg/integraldisplayu2 tw′ 2(y) rY(y)fY(y) dy/bracketrightbigg/bracketleftbigg/integraldisplayu1 sw′ 1(x) rX(x)fX(x) dx/bracketrightbigg 19 −/bracketleftbigg/integraldisplayu2 sw′ 2(y) rY(y)fY(y) dy/bracketrightbigg/bracketleftbigg/integraldisplayu1 tw′ 1(x) rX(x)fX(x) dx/bracketrightbigg . Now, define the functions φ2(x,y)=w′ 1(x)w′ 2(y) rX(x)rY(y)·I(s≤x<uX)I(t≤y<uY),(x,y)∈SX×SY, φ1(x,y)=w′ 1(x)w′ 2(y) rX(x)rY(y)·I(t≤x<uX)I(s≤y<uY),(x,y)∈SX×SY, and∆φ21(x,y)=φ2(x,y)−φ1(x,y),(x,y)∈SX×SY. Now, using the independence of XandY, we have the expression for E[w1(X)]E[w2(Y)]∆4as follows E[w1(X)]E[w2(Y)]∆4=/integraldisplayuY t/integraldisplayuX sw′ 1(x) rX(x)w′ 2(y) rY(y)fX(x)fY(y)dxdy −/integraldisplayuY s/integraldisplayuX tw′ 2(y) rY(y)w′ 1(x) rX(x)fX(x)fY(y)dxdy. =E[φ2(X,Y)]−E[φ1(X,Y)]. For(x,y)∈SX×SY, ∆φ21(x,y)=w′ 1(x)w′ 2(y) rX(x)rY(y),ifs≤x<tandt≤y<uY, −w′ 1(x)w′ 2(y) rX(x)rY(y),ift≤x<uXands≤y<t, 0, otherwise. Since S1⊆SX,S2⊆SY, and S1∩S2/nequal∅, it follows that SX∩SY/nequal∅,SX∩SY=(lY,uX), and lX≤lY≤l2<s<t<u1≤uX≤uY. We now verify conditions (i)–(iii) of Lemma 2 for the indepen dent variables XandY. From the definition of∆φ21(·,·), it is evident that∆φ21(x,y)≥0for every (x,y)∈SX×SYandx≤y. Moreover, for any fixed x∈SX∩SY, the increasing property of the function w′ 2(·)/rY(·)ensures the increasing property of ∆φ21(x,y)fory∈(x,uY). Furthermore, using the decreasing property ofw′ 1(·)/rX(·), the increasing property of w′ 2(·)/rY(·), and that x≤y, we get ∆φ21(x,y)+∆φ21(y,x)=w′ 1(x)w′ 2(y) rX(x)rY(y)−w′ 1(y)w′ 2(x) rX(y)rY(x)≥0, whenever (x,y)∈SX×SY. 
Since X≤f rYandXandYare independent variables satisfying all conditions of Lemma 2, we have E[w1(X)]E[w2(Y)]∆4=E[φ2(X,Y)]−E[φ1(X,Y)]≥0, which further implies that ∆4=FYw2(t)FXw1(s)−FYw2(s)FXw1(t)≥0. Therefore, the function FYw2(·)/FXw1(·)is increasing on (−∞,u2), which implies that Xw1≤f rYw2. Example 7. LetXfollow the standard uniform distribution and Yfollow the exponential distribution with rate λ=1. Then, rX(·)andrY(·)are defined as rX(x)=1 1−x,0<x<1,and rY(x)=1forx>0, 20 respectively. For x<0,rX(x)=rY(x)=0; for 0<x<1,rX(x)≥rY(x). Hence, X≤f rY. Let w1(·)andw2(·)are defined by w1(x)=ex−1,x>0,and w2(x)=x,x>0, respectively. Now observe that w′ 1(x) rX(x)=ex(1−x),0<x<1,andw′ 2(x) rY(x)=1,x>0, which are decreasing and increasing functions, respective ly. Hence, applying Theorem 9, we deduce that Xw1≤f rYw2. Remark 6. In Example 7, w′ 2(x)/w′ 1(x)=e−xis not increasing for x>0, so
the condition of Theorem 5 is not satisfied. The PDFs of Xw1 and Yw2 are given by

fXw1(x) = e^x (1−x)/(e−2), 0 < x < 1, and fYw2(x) = e^{−x}, x > 0.

Now, consider the function

fYw2(x)/fXw1(x) = (e−2) e^{−2x}/(1−x), 0 < x < 1.

We plot fYw2(·)/fXw1(·) to demonstrate that it is not increasing. Therefore, the likelihood ratio order Xw1 ≤lr Yw2 does not hold.

Figure 1: Plot of the function fYw2(·)/fXw1(·).

The following theorem provides conditions under which the reversed failure rate order between X and Y is preserved by Xw and Yw.

Theorem 10. Let X and Y be two independent variables. If l1 ≤ l2, u1 ≤ u2, w′1(·)/rX(·) is a decreasing function on SX = (lX, uX), w′2(·)/rY(·) is an increasing function on SY = (lY, uY), and X ≤rfr Y, then Xw1 ≤rfr Yw2.

Proof. From the ordering X ≤rfr Y, we have lX ≤ lY and uX ≤ uY. In the case of S1 ∩ S2 = ∅, i.e., when l1 < u1 ≤ l2 < u2, Xw1 ≤rfr Yw2 holds trivially. Now consider the case where S1 ∩ S2 ≠ ∅, that is, l1 ≤ l2 < u1 ≤ u2, so that S1 ∩ S2 = (l2, u1). We now show that FYw2(x)/FXw1(x) is increasing on (l1, ∞). For x ∈ (l1, l2], we have FYw2(x) = 0, and consequently the ratio FYw2(x)/FXw1(x) is zero. For x ∈ [u1, ∞), we have FXw1(x) = 1, and hence FYw2(x)/FXw1(x) = FYw2(x), which is again an increasing function. Therefore, the ratio FYw2(x)/FXw1(x) is increasing on (l1, l2] ∪ [u1, ∞). To establish that the ratio FYw2(x)/FXw1(x) is increasing on S1 ∩ S2 = (l2, u1), take arbitrary s and t with l2 < s < t < u1, and consider the expression

Δ5 = FYw2(t) FXw1(s) − FYw2(s) FXw1(t).
Then, using the PDFs of Xw1andYw2, we have the expression for E[w1(X)]E[w2(Y)]∆5as follows E[w1(X)]E[w2(Y)]∆5=/bracketleftbigg/integraldisplayt l2w′ 2(y)FY(y) dy/bracketrightbigg/bracketleftbigg/integraldisplays l1w′ 1(x)FX(x) dx/bracketrightbigg −/bracketleftbigg/integraldisplays l2w′ 2(y)FY(y) dy/bracketrightbigg/bracketleftbigg/integraldisplayt l1w′ 1(x)FX(x) dx/bracketrightbigg =/bracketleftbigg/integraldisplayt l2w′ 2(y) rY(y)fY(y) dy/bracketrightbigg/bracketleftbigg/integraldisplays l1w′ 1(x) rX(x)fX(x) dx/bracketrightbigg −/bracketleftbigg/integraldisplays l2w′ 2(y) rY(y)fY(y) dy/bracketrightbigg/bracketleftbigg/integraldisplayt l1w′ 1(x) rX(x)fX(x) dx/bracketrightbigg . Now, define the functions φ2(x,y)=w′ 1(x)w′ 2(y) rX(x)rY(y)·I(lX≤x<s)I(lY≤y<t),(x,y)∈SX×SY, φ1(x,y)=w′ 1(x)w′ 2(y) rX(x)rY(y)·I(lX≤x<t)I(lY≤y<s),(x,y)∈SX×SY, and∆φ21(x,y)=φ2(x,y)−φ1(x,y),(x,y)∈SX×SY. Now, using the independence of XandY, we have the expression for E[w1(X)]E[w2(Y)]∆5as follows E[w1(X)]E[w2(Y)]∆5=/integraldisplayt lY/integraldisplays lXw′ 1(x) rX(x)w′ 2(y) rY(y)fX(x)fY(y)dxdy −/integraldisplays lY/integraldisplayt lXw′ 2(y) rY(y)w′ 1(x) rX(x)fX(x)fY(y)dxdy. =E[φ2(X,Y)]−E[φ1(X,Y)]. For(x,y)∈SX×SY, ∆φ21(x,y)=w′ 1(x)w′ 2(y) rX(x)rY(y),iflX≤x<sands≤y<t, −w′ 1(x)w′ 2(y) rX(x)rY(y),ifs≤x<tandlY≤y<s, 0, otherwise. Since S1⊆SX,S2⊆SY, and S1∩S2/nequal∅, it follows that SX∩SY/nequal∅,SX∩SY=(lY,uX), and lX≤lY≤l2<s<t<u1≤uX≤uY. 22 We now verify conditions (i)–(iii) of Lemma 3 for the indepen dent variables XandY. From the definition of∆φ21(·,·), it is evident that∆φ21(x,y)≥0for every (x,y)∈SX×SYandx≤y. Moreover, for any fixed y∈SX∩SY, the decreasing property of the function w′ 1(·)/rX(·)ensures the decreasing property of ∆φ21(x,y)forx∈(lX,y). Furthermore, using the decreasing property ofw′ 1(·)/rX(·), the increasing property of w′ 2(·)/rY(·), and that x≤y, we get ∆φ21(x,y)+∆φ21(y,x)=w′ 1(x)w′ 2(y) rX(x)rY(y)−w′ 1(y)w′ 2(x) rX(y)rY(x)≥0, whenever (x,y)∈SX×SY. 
Since X ≤rfr Y, and X and Y are independent variables satisfying all conditions of Lemma 3, it follows from Lemma 3 that

E[w1(X)] E[w2(Y)] Δ5 = E[φ2(X,Y)] − E[φ1(X,Y)] ≥ 0,

which further implies that Δ5 = FYw2(t) FXw1(s) − FYw2(s) FXw1(t) ≥ 0. Therefore, the function FYw2(·)/FXw1(·) is increasing on (l1, ∞), which implies that Xw1 ≤rfr Yw2.

Example 8. Let X follow the exponential distribution with rate λ = 1, and let Y follow the standard uniform distribution. Then, the ratio

FY(x)/FX(x) = x/(1−e^{−x})

is increasing on (0,∞). Hence, X ≤rfr Y. Now consider w1(x) = x, x > 0, and w2(x) = −x − ln(1−x), 0 < x < 1. Then we have

w′1(x)/rX(x) = 1, x > 0, and w′2(x)/rY(x) = (x/(1−x)) (1−x) = x, 0 < x < 1,

which are decreasing and increasing functions, respectively. Therefore, by applying Theorem 10, it follows that Xw1 ≤rfr Yw2.

6. An Application of the Framework

In this section, we leverage the proposed framework to derive a continuous and bounded random variable Xw, under the assumption that X follows a Kumaraswamy distribution. Subsequently, we model
real-world data using the random variables X, Xw, and the Beta distribution, and assess their goodness-of-fit metrics by employing the method of maximum likelihood estimation (MLE), followed by statistical tests to compare their fitting performance. Poondi Kumaraswamy introduced a family of double-bounded continuous probability distributions in Kumaraswamy [12] to model data with finite lower and upper bounds, incorporating zero inflation, that is, the left end point is zero and lies within the support of the distribution. The Kumaraswamy distribution is comparable to the beta distribution and provides enhanced simplicity, especially for simulation-based applications. This advantage of the Kumaraswamy distribution comes from the closed-form availability of its PDF, CDF, and quantile function. We model the data using X, defined on the interval (0,1). To construct the WTRV Xw, we use the weight function w(x) = x^c, x ∈ (0,1), where c > 0. The resulting distribution of Xw is called the weighted Kumaraswamy (or simply, weighted Kw) distribution. We demonstrate that Xw provides substantially better goodness-of-fit measures compared to X when modeling rainfall datasets1,2 from Northwest and Northeast India for the period 1901–2021. We denote X ∼ Kw(a,b) [X ∼ WK(a,b,c)] to say that X follows the Kumaraswamy [weighted Kumaraswamy] distribution with shape parameters a,b > 0 [a,b,c > 0].
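The simulation convenience mentioned above comes from the closed-form Kumaraswamy quantile function FX^{−1}(u) = (1−(1−u)^{1/b})^{1/a}. As a brief illustration (not part of the original study; the parameter values below are arbitrary), inverse-transform sampling from Kw(a,b) and a check of the sample mean against the first raw moment E[X] = b B(1+1/a, b) can be sketched as:

```python
import random
from math import gamma
random.seed(0)

def kw_quantile(u, a, b):
    # Closed-form Kumaraswamy quantile: F^{-1}(u) = (1 - (1-u)^(1/b))^(1/a)
    return (1.0 - (1.0 - u)**(1.0 / b))**(1.0 / a)

def kw_sample(n, a, b):
    # Inverse-transform sampling: if U ~ Uniform(0,1), then F^{-1}(U) ~ Kw(a,b)
    return [kw_quantile(random.random(), a, b) for _ in range(n)]

# Illustrative check: the sample mean should approach E[X] = b*B(1 + 1/a, b)
a, b = 2.0, 3.0
mean_exact = b * gamma(1 + 1 / a) * gamma(b) / gamma(1 + 1 / a + b)
xs = kw_sample(50_000, a, b)
print(abs(sum(xs) / len(xs) - mean_exact) < 0.01)  # -> True
```

Because the quantile is available in closed form, no numerical root-finding is needed; this is the simplicity advantage over the beta distribution noted in the text.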
1 https://www.data.gov.in/resource/rainfall-nw-india-and-its-departure-normal-monsoon-session-1901-2021
2 https://www.data.gov.in/resource/rainfall-ne-india-and-its-departure-normal-monsoon-session-1901-2021

6.1. Model Description

The PDF of X ∼ Kw(a,b), as given in Lemonte [13], is

fX(x) = a b x^{a−1} (1−x^a)^{b−1}, x ∈ (0,1), a,b > 0,

the SF FX(·) and quantile function FX^{−1}(·) of X are

FX(x) = (1−x^a)^b and FX^{−1}(x) = (1−(1−x)^{1/b})^{1/a}, x ∈ (0,1),

respectively, and its n-th raw moment is

E[X^n] = a b ∫_0^1 x^{a+n−1} (1−x^a)^{b−1} dx = b B(1+n/a, b), n ∈ N,

where B(p,q) = ∫_0^1 t^{p−1} (1−t)^{q−1} dt, p,q > 0. The PDF of Xw ∼ WK(a,b,c) (see Table 1) is

fXw(x) = c x^{c−1} (1−x^a)^b / (b B(1+c/a, b)), x ∈ (0,1), a,b,c > 0, (13)

the SF of Xw is

FXw(x) = c/(b B(1+c/a, b)) ∫_x^1 t^{c−1} (1−t^a)^b dt
       = c/(a b B(1+c/a, b)) ∫_{x^a}^1 z^{c/a−1} (1−z)^b dz
       = c B1(x^a, 1; c/a, b+1) / (a b B(1+c/a, b)),

where B1(y,1; p,q) = ∫_y^1 t^{p−1} (1−t)^{q−1} dt, 0 ≤ y ≤ 1, p,q > 0, and the n-th raw moment of Xw is

E[Xw^n] = ∫_0^1 c x^{c+n−1} (1−x^a)^b / (b B(1+c/a, b)) dx = c B((c+n)/a, b+1) / (a b B(1+c/a, b)), n ∈ N.

Figure 2: Density plot of the weighted Kumaraswamy distribution for parameter combinations (a,b,c) ∈ {(3,3,2), (3,3,0.5), (3,2,3), (3,0.5,3), (2,3,3), (0.5,3,3), (2,2,2), (0.5,0.5,0.5)}.

6.2. Estimation of Model Parameters

The weighted Kumaraswamy distribution shares similarities with both the beta and Kumaraswamy distributions, since all three are continuous random variables defined on the interval (0,1). To assess their comparative modeling performance, we fit the monsoon rainfall datasets to these distributions using the method of MLE. Suppose we are given the data z = {z1, z2, ..., zn} such that zmin, zmax are the minimum and maximum observations in z, respectively.
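As an aside, the weighted Kw density in Equation (13) above can be sanity-checked numerically; the sketch below (with arbitrary illustrative parameters, not part of the original analysis) verifies that it integrates to one using a midpoint rule:

```python
from math import gamma

def beta_fn(p, q):
    # Euler beta function B(p, q) = Gamma(p)Gamma(q)/Gamma(p+q)
    return gamma(p) * gamma(q) / gamma(p + q)

def wk_pdf(x, a, b, c):
    # Weighted Kumaraswamy PDF, Eq. (13):
    # f(x) = c x^(c-1) (1 - x^a)^b / (b B(1 + c/a, b)), 0 < x < 1
    norm = b * beta_fn(1 + c / a, b)
    return c * x**(c - 1) * (1 - x**a)**b / norm

# Midpoint-rule check that the density integrates to one
# (illustrative parameters; any a, b, c > 0 work)
a, b, c = 2.0, 3.0, 2.0
n = 100_000
total = sum(wk_pdf((i + 0.5) / n, a, b, c) for i in range(n)) / n
print(round(total, 4))  # -> 1.0
```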
To fit the dataset to the considered models, we normalize the observations as follows:

xi = (zi − zmin)/(zmax − zmin), i ∈ {1, 2, ..., n}.

The log-likelihood function L obtained using Equation (13) is:

L(a,b,c) = n log( c/(b B(1+c/a, b)) ) + (c−1) Σ_{i=1}^n log(xi) + b Σ_{i=1}^n log(1−xi^a). (14)

We obtain the first-order partial derivatives of L(a,b,c) with respect to a, b, and c and equate them to zero to get the likelihood equations. However, the likelihood equations cannot be solved analytically because of the presence of a, b, and c as arguments of the beta function in the log-likelihood expression. Thus, we employ the "Limited-memory Broyden–Fletcher–Goldfarb–Shanno with Box" (L-BFGS-B) constrained optimisation method and the mle() function in the R software to estimate the model parameters, effectively minimizing −L(a,b,c) to obtain the maximum likelihood estimates of a, b, and c. Similarly, we estimate the model parameters for the beta and Kumaraswamy
distributions employing the same optimisation technique. Table 2 presents the descriptive statistics of the monsoon rainfall data (in millimeters) from Northwest (NW) and Northeast (NE) India, while Table 3 shows the estimated parameters of the fitted probability models for these datasets.

Table 2: Descriptive statistics of the datasets

Dataset   Mean      Median  Mode   Std. Dev.  Variance  Skewness  Kurtosis  Min     Max
NW India  593.595   602.8   557.2  104.84     10992.46  -0.056    3.07      338.8   928.4
NE India  1491.061  1486.8  1783   200.48     40192.68  0.226     2.83      1078.9  2090.3

Table 3: Estimated model parameters using the method of MLE

Dataset   Probability Model  a       b        c
NW India  Beta               3.1297  4.2115   –
          Kumaraswamy (Kw)   2.5667  5.8036   –
          Weighted Kw        2.0092  13.2024  6.1306
NE India  Beta               2.0394  3.0538   –
          Kw                 1.8521  3.5741   –
          Weighted Kw        1.6085  3.6069   3.2833

Figure 3: Plots of the histograms and corresponding fitted densities of the datasets.

Table 4: Goodness-of-fit and Model Selection Criteria

Dataset   Metric          Beta      Kw        Weighted Kw
NW India  Log-likelihood  43.3819   45.6155   47.6259
          AIC             -82.7638  -87.2309  -89.2517
          BIC             -77.2055  -81.6727  -80.9144
          RMSE            0.2057    0.1448    0.1052
NE India  Log-likelihood  28.1876   29.3296   30.0229
          AIC             -52.3752  -54.6593  -54.0457
          BIC             -46.8509  -49.1349  -45.7592
          RMSE            0.1753    0.1377    0.0958

6.3. Statistical Testing for the Goodness-of-Fit of the Datasets

We perform several goodness-of-fit tests based on empirical CDFs for the selection of the most suitable probability models for the datasets. We use the Kolmogorov-Smirnov (KS), Anderson-Darling (AD), Cramér-von Mises (CvM), and Chi-Square with 10 bins (ChiSq) goodness-of-fit tests to determine the goodness of the fitted models.
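For intuition on the KS test used below, the KS distance sup_x |Fn(x) − F(x)| can be hand-rolled in a few lines. The sketch below uses synthetic draws from Kw(2,3) rather than the rainfall data, and in practice library routines (e.g., in R or SciPy) would be preferred:

```python
import random
random.seed(1)

def kw_cdf(x, a, b):
    # Kumaraswamy CDF: F(x) = 1 - (1 - x^a)^b
    return 1.0 - (1.0 - x**a)**b

def ks_statistic(sample, cdf):
    # Kolmogorov-Smirnov distance between the empirical CDF and a model CDF,
    # evaluated at the sorted sample points (where the supremum is attained)
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        d = max(d, i / n - f, f - (i - 1) / n)
    return d

# Illustrative: data drawn from Kw(2, 3) by inverse transform, tested against
# its own CDF; the KS distance should be small for a well-fitting model.
a, b = 2.0, 3.0
data = [(1 - (1 - random.random())**(1 / b))**(1 / a) for _ in range(2000)]
d = ks_statistic(data, lambda x: kw_cdf(x, a, b))
print(d < 0.05)  # -> True
```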
Let F(·) represent the distribution function of a probability model obtained by estimating the model parameters for a given dataset, and let Fn(·) represent the empirical CDF of that dataset. Now, the underlying hypothesis for a goodness-of-fit test is given by

H0: Fn(x) = F(x) for all x   vs.   H1: Fn(x) ≠ F(x) for at least one x.

The above-mentioned hypothesis tests allow us to statistically determine whether a fitted model significantly deviates from the empirical distribution of the given dataset. Table 5 provides the results of the goodness-of-fit tests for the three candidate models across each considered dataset.

Table 5: Goodness-of-fit test statistics and p-values for the considered models

                 Test statistic values             p-values
Dataset   Test   Beta    Kw      Weighted Kw      Beta    Kw      Weighted Kw
NW India  KS     0.0665  0.0513  0.0451           0.6677  0.9129  0.9687
          AD     0.8442  0.4813  0.1950           0.4500  0.7656  0.9917
          CvM    0.1247  0.0627  0.0260           0.4772  0.7978  0.9877
          ChiSq  9.2350  6.6068  5.8144           0.1608  0.3587  0.3247
NE India  KS     0.0570  0.0543  0.0484           0.8417  0.8800  0.9468
          AD     0.6728  0.4713  0.4024           0.5816  0.7759  0.8461
          CvM    0.0885  0.0569  0.0422           0.6453  0.8340  0.9223
          ChiSq  4.3261  3.4122  3.5281           0.7415  0.8444  0.7402

7. Conclusion

This study introduces a novel framework for formulating a non-negative continuous probability distribution by utilizing the SF of a non-negative continuous random variable and a weight function. The proposed framework has the potential to construct probability distributions that can model real-world data with superior goodness-of-fit
metrics than the input random variable of the framework, as evidenced by the weighted Kumaraswamy distribution, which exhibits a superior goodness-of-fit compared to the original Kumaraswamy probability model. Future research may comprehensively investigate the proposed framework, with particular emphasis on the tail behavior of the WTRV of a random variable. Special focus may also be given to its ability to generate flexible probability distributions useful in various domains, including reliability theory and survival analysis.

Acknowledgements

The first author gratefully acknowledges the research fellowship granted by the Ministry of Education, Government of India.

References

[1] Alexander, C., Cordeiro, G.M., Ortega, E.M., Sarabia, J.M., (2012). Generalized beta-generated distributions. Computational Statistics & Data Analysis 56(6), 1880–1897.
[2] Alzaatreh, A., Lee, C., Famoye, F., (2013). A new method for generating families of continuous distributions. Metron 71(1), 63–79.
[3] Barreto-Souza, W., Santos, A.H., Cordeiro, G.M., (2010). The beta generalized exponential distribution. Journal of Statistical Computation and Simulation 80(2), 159–172.
[4] Bon, J.L., Illayk, A., (2005). Ageing properties and series system. Journal of Applied Probability 42(1), 279–286.
[5] Cordeiro, G.M., de Castro, M., (2010). A new family of generalized distributions. Journal of Statistical Computation and Simulation 81(7), 883–898.
[6] Eugene, N., Lee, C., Famoye, F., (2002). Beta-normal distribution and its applications. Communications in Statistics - Theory and Methods 31(4), 497–512.
[7] Glaser, R.E., (1980). Bathtub and related failure rate characterizations. Journal of the American Statistical Association 75(371), 667–672.
[8] Gupta, N., (2009). Some Contributions to the Theory of Stochastic Orders and Their Applications. Ph.D. thesis. Indian Institute of Technology Kanpur. Lemma 1.3.3, p. 20.
[9] Gupta, R.C., (2007).
Role of equilibrium distribution in reliability studies. Probability in the Engineering and Informational Sciences 21(2), 315–334.
[10] Gupta, R.P., Sankaran, P.G., (1998). Bivariate equilibrium distribution and its applications to reliability. Communications in Statistics - Theory and Methods 27(2), 385–394.
[11] Hong, L., (2012). A remark on the alternative expectation formula. The American Statistician 66(4), 232–233.
[12] Kumaraswamy, P., (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology 46(1), 79–88.
[13] Lemonte, A.J., (2011). Improved point estimation for the Kumaraswamy distribution. Journal of Statistical Computation and Simulation 81(12), 1971–1982.
[14] Li, X., Xu, M., (2008). Reversed hazard rate order of equilibrium distributions and a related aging notion. Statistical Papers 49(4), 749–767.
[15] Liu, Y., (2020). A general treatment of alternative expectation formulae. Statistics and Probability Letters 166, 108863.
[16] Mandouh, R.M., Mahmoud, M.R., Abdelatty, R.E., (2024). A new (T-Xθ) family of distributions: properties, discretization and estimation with applications. Scientific Reports 14(1), 1613.
[17] Misra, N., Gupta, N., Dhariyal, I.D., (2008). Preservation of some aging properties and stochastic orders by weighted distributions. Communications in Statistics - Theory and Methods 37(5), 627–644.
[18] Nanda, A.K., Jain, K., Singh, H., (1996). Properties of moments for s-order equilibrium distributions. Journal of Applied Probability 33(4), 1108–1111.
[19] Ogasawara, H., (2019). Alternative expectation formulas for real-valued random vectors. Communications in Statistics - Theory and Methods
arXiv:2505.19835v1 [math.ST] 26 May 2025

On a retarded stochastic system with discrete diffusion modeling life tables

Tomás Caraballo1, Francisco Morillas2, José Valero3
1 Universidad de Sevilla, Departamento de Ecuaciones Diferenciales y Análisis Numérico, Apdo. de Correos 1160, 41080-Sevilla, Spain. E-mail: caraball@us.es
2 Universitat de València, Departament d'Economia Aplicada, Facultat d'Economia, Campus dels Tarongers s/n, 46022-València, Spain. E-mail: Francisco.Morillas@uv.es
3 Universidad Miguel Hernández de Elche, Centro de Investigación Operativa, Avda. Universidad s/n, Elche (Alicante), 03202, Spain. E-mail: jvalero@umh.es

Abstract

This work proposes a method for modeling and forecasting mortality rates. It constitutes an improvement over previous studies by incorporating both the historical evolution of the mortality phenomenon and its random behavior. In the first part, we introduce the model and analyze mathematical properties such as the existence of solutions and their asymptotic behavior. In the second part, we apply this model to forecast mortality rates in Spain, showing that it yields better results than classical methods.

1 Introduction

In actuarial or demographic sciences, life tables are very useful to study biometric functions such as the probability of survival or death; therefore they are crucial to calculate the insurance premium. In our previous papers [34], [35], [7], we have shown that a system of ordinary differential equations with nonlocal discrete diffusion is appropriate to model dynamical life tables. In [34], we implemented the deterministic model

d/dt ui(t) = Σ_{r∈D} j_{i−r} ur(t) − ui(t) + Σ_{r∈Z\D} j_{i−r} gr(t), i ∈ D, t > 0, (1)
ui(0) = u0_i, i ∈ D,

where D = {m1, . . . , m2}, m1 < m2, mi ∈ Z, and j : Z → R+. Using the observed data published by Spain's National Institute of Statistics, we compared the results of our model with those obtained through classical techniques.
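A minimal explicit-Euler sketch of system (1) is given below. The three-point kernel, boundary data g, and initial values are hypothetical placeholders, not the calibrated quantities used in [34]; the sketch only illustrates the structure of the nonlocal discrete diffusion term:

```python
# Hypothetical kernel and data; illustrates the structure of Eq. (1) only.

def simulate(u0, g, j, dt=0.01, steps=500):
    """Explicit Euler for du_i/dt = sum_r j[i-r] u_r - u_i + (outside terms).

    u0 : initial values on the index range D = {0, ..., m-1}
    g  : function g(r) giving boundary data for indices r outside D
    j  : dict mapping offsets k to kernel weights j_k (weights sum to 1, H1-H2)
    """
    m = len(u0)
    u = list(u0)
    for _ in range(steps):
        new = []
        for i in range(m):
            conv = 0.0
            for k in j:
                r = i - k
                conv += j[k] * (u[r] if 0 <= r < m else g(r))
            new.append(u[i] + dt * (conv - u[i]))
        u = new
    return u

# Three-point kernel j_{-1} = j_1 = 0.25, j_0 = 0.5 (non-negative, sums to 1)
kernel = {-1: 0.25, 0: 0.5, 1: 0.25}
u = simulate([0.1 * i for i in range(10)], g=lambda r: 0.5, j=kernel)
print(all(0.0 <= v <= 1.0 for v in u))  # -> True
```

Each Euler step is a convex combination of the current values and the boundary data, so iterates starting in [0,1] stay in [0,1], mirroring the role of hypotheses (H1)-(H2) below.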
Our numerical simulations indicate that, in the short run—specifically for predictions within three years—we achieved improvements in certain indicators of goodness and smoothness. Since some memory effects are present on life tables, we considered in [35] a modification of model (1) in which we added some delay terms. Namely, we studied the problem: d dtui(t) =X r∈DZ0 −hji−rur(t+s)αi(s)dµ(s)−ui(t) (2) +X r∈Z\DZ0 −hji−rgr(t+s)αi(s)dµ(s),i∈D, t > 0, ui(τ+s)≡ϕi(s) ,i∈D,s∈[−h,0], 1 where αi: [−s,0]→R+anddµ(s) =ξ(s)dsbeing ξ(·) a probability density. In [35] it was shown that with this model the prediction horizon can be extended up to 8 years. In addition, it gives coherent values, in magnitude, when comparing it with other classical techniques such as the Lee-Carter model, up to 18 years. Despite the good results provided by these deterministic models, in the real world there is always a certain level of noise, which either can be intrinsic to the model or can appear due to the presence of errors in the observed data. This is why in [7] we considered the stochastic version of model (1) given by d dtui(t) =X r∈Dji−rur(t)−ui(t) +X r∈Z\Dji−rgr(t) +bσi(ui(t))dwi dt,i∈D,t >0, (3) ui(0) = u0 i,i∈D, where wi(t) are independent Brownian motions and b >0 is the intensity of the white noise, and two specific type of noises were considered: 1) σi(v) =v(linear case); 2) σi(v) =v(1−v). The choice of the noise in the second case is motivated by the fact
that we are interested in studying variables like the probability of death, which take values in the interval [0,1]. Although the results given by the numerical simulations are fine from the qualitative point of view, it was necessary to make a correction of the estimates by using the average annual improvement rate. However, this correction is not equally adequate for all ages. Thus, in order to take into account in the model appropriate correction rates for each age, we need to introduce delay terms in the equation as in (2). Therefore, we will study in this paper the following stochastic system of differential equations with delay:

d/dt ui(t) = [J(t, ut)]i − ui(t) + b σi(ui(t)) dwi/dt, i ∈ D, t > τ, (4)
ui(τ+s) ≡ ϕi(s), i ∈ D, s ∈ [−h,0],

where D = {m1, m1+1, ..., m2}, −∞ < m1 < m2 < ∞, m = m2 − m1 + 1, τ is the initial moment of time, h > 0, wi(t) are independent Brownian motions, b ≥ 0 is the intensity of the white noise, σi : R → R, and J : R+ × C([−h,0], R^m) → R^m is the non-autonomous convolution operator defined by

[J(t, ut)]i = Σ_{r∈D} ∫_{−h}^0 j_{i−r} ur(t+s) αi(s) dµ(s) + Σ_{r∈Z\D} ∫_{−h}^0 j_{i−r} gr(t+s) αi(s) dµ(s), if i ∈ D,

where u(t) = (ui(t))_{i∈D}, ut is the segment of solution defined by ut(s) = u(t+s) for s ∈ [−h,0], j : Z → R, g : R × Z\D → R, and dµ(s) = ξ(s) ds, ξ(·) being a probability density. We define the Banach space

l∞2 = {(ui)_{i∈Z\D} : sup_{i∈Z\D} |ui| < ∞}

with the norm ∥u∥_{l∞2} = sup_{i∈Z\D} |ui| and assume the following conditions:

(H1) jk ≥ 0 for all k ∈ Z.
(H2) Σ_{i∈Z} ji = 1.
(H3) g ∈ C(R, l∞2).
(H4) αi ∈ C([−h,0], R), αi(s) ≥ 0 for i ∈ D, s ∈ [−h,0].

As for problem (3), we will consider the noises σi(v) = v (linear case) and σi(v) = v(1−v). In the first part of the paper, we study several properties of the solutions to problem (4). In the case where the noise is linear, we establish the existence of a unique globally defined positive solution if the initial condition is positive. When the noise is non-linear with σi(v) = v(1−v), we prove that if the initial condition belongs to (0,1), then the solution remains in this interval for any future moment of time.
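The invariance of (0,1) just described can be observed numerically. The Euler–Maruyama sketch below is an illustrative simplification of system (4), not the paper's calibrated model: the distributed delay dµ is replaced by a single discrete lag h, αi ≡ 1, the boundary data g is a constant, all parameter values are hypothetical, and the noise is the logistic choice σi(v) = v(1−v):

```python
import math
import random
random.seed(42)

def euler_maruyama(u0, g, j, b=0.05, h=0.1, dt=0.01, steps=300):
    # hist stores the last lag+1 states, so hist[0] is the state at t - h;
    # the initial history is taken constant equal to u0.
    m, lag = len(u0), int(round(h / dt))
    hist = [list(u0) for _ in range(lag + 1)]
    for _ in range(steps):
        u = hist[-1]
        delayed = hist[-1 - lag]
        new = []
        for i in range(m):
            # Delayed nonlocal convolution term [J(t, u_t)]_i (single lag h)
            conv = sum(j[k] * (delayed[i - k] if 0 <= i - k < m else g)
                       for k in j)
            # Logistic noise sigma(v) = v(1-v) scaled by sqrt(dt)
            noise = b * u[i] * (1 - u[i]) * math.sqrt(dt) * random.gauss(0, 1)
            new.append(u[i] + dt * (conv - u[i]) + noise)
        hist.append(new)
        hist.pop(0)
    return hist[-1]

kernel = {-1: 0.25, 0: 0.5, 1: 0.25}
u = euler_maruyama([0.2 + 0.06 * i for i in range(10)], g=0.5, j=kernel)
print(all(0.0 < v < 1.0 for v in u))  # -> True
```

With small noise intensity b the drift keeps each coordinate a near-convex combination of values in (0,1), so the trajectory stays inside the unit interval, in line with the invariance result for σi(v) = v(1−v).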
Finally, we analyze the asymptotic behaviour of the solutions as time goes to + ∞, showing under certain assumptions that, for large enough time, the solution belongs to a neighbourhood of the unique fixed point of the deterministic system. 2 In the second part of the paper, we apply model (4) to forecast age-specific mortality rates in Spain. Using observed data from 2008 to 2018, we perform numerical simulations to generate multiple realizations of predicted mortality values for the period 2019–2023. Based on these realizations, we construct confidence intervals and calculate several error indicators, comparing the results with those obtained from classical techniques such as the Lee-Carter and Renshaw-Haberman models. Our model achieved the best results for all years within the validation period (2019-2023). Thus, we conclude that our method should be regarded as a promising alternative to classical models. 2 Properties of solutions In this section, we shall obtain some properties of the solutions to problem (4). 3 The linear case We will first consider a standard linear noise, that is, we study the system d dtui(t) = [J(t, ut)]i−ui(t) +bui(t)dwi dt,i∈D,t > τ, (5) ui(τ+s)≡ϕi(s) ,i∈D,s∈[−h,0]. Denote Rm +={v∈Rm:vj>0 for all j}andαi=R0 −hαi(s)dµ(s). Our aim is to establish the existence
of global positive solutions. Lemma 1 Assume that gr(t)≥0,for all randt,and that αi≤1, for all i. Then for any ϕ∈C([−h,0],Rm +) there exists a unique globally defined solution u(·)such that u(t)∈Rm +almost sure for t≥τ. Proof. The existence of a unique local solution to problem (5) follows from standard results for functional stochastic differential equations governed by locally Lipschitz functions [31, Theorem 2.8, P. 154]. Given that any solution u(·) is defined in the maximal interval [0 , τe), we need to prove that τe= +∞and that u(t)∈Rm +for all t≥0 a.s. We choose k0>0 such that 1 k0<min s∈[−h,0]|ϕi(s)| ≤ max s∈[−h,0]|ϕi(s)|< k0for all i∈D. For each k≥k0we define the stopping time τk= inf{t∈[τ, τe) :ui(t)̸∈(1 k, k) for some i∈D}. This sequence is increasing as k↗+∞. Ifτ∞= lim k→+∞τk= +∞a.s., then τe= +∞andu(t)∈Rm +for t≥0 almost surely, proving the assertion. If lim k→+∞τk̸= +∞a.s, there would exist T, ε > 0 such that P(τ∞≤T)> ε, and then there would be k1≥k0for which P(τk≤T)≥εfork≥k1. Further, we consider the C2-function V:Rm +→R1 +given by V(u) =X i∈D(ui−1−log(ui)). 3 Letτ≤t≤τk∧T.Then u(t)∈Rm +and by Itˆ o’s formula we have dV(u(t)) (6) =X i∈D 1−1 ui(t) X r∈DZ0 −hji−rur(t+s)αi(s)dµ(s) +X r∈Z\DZ0 −hji−rgr(t+s)αi(s)dµ(s)−ui(t) dt +X i∈D1 2b2dt+X i∈D 1−1 ui(t) bui(t)dwi(t) =I(t)dt+X i∈DIi(t)dwi(t), where Ii(t) = 1−1 ui(t) bui(t). By using ( H1)−(H4),u(t)∈Rm +andgr(t)≥0 the first term is estimated by: I(t)≤X i∈DX r∈DZ0 −hji−rur(t+s)αi(s)dµ(s)−X i∈Dui(t) +m+m 2b2 +X i∈DX r∈Z\DZ0 −hji−rgr(t+s)αi(s)dµ(s) −X i∈D1 ui(t) X r∈DZ0 −hji−rur(t+s)αi(s)dµ(s) +X r∈Z\DZ0 −hji−rgr(t+s)αi(s)dµ(s) ≤KT+X i∈DX r∈DZ0 −hji−rur(t+s)αi(s)dµ(s)−X i∈Dui(t), where we have used that X i∈D1 ui(t) X r∈DZ0 −hji−rur(t+s)αi(s)dµ(s) +X r∈Z\DZ0 −hji−rgr(t+s)αi(s)dµ(s) ≥0, X i∈DX r∈Z\DZ0 −hji−rgr(t+s)αi(s)dµ(s)≤mCT. 
Integrating in (6) over ( τ, τk∧T) and taking expectations we obtain that 0≤EV(u(τk∧T)) ≤V(ϕ(0)) + EZτk∧T τKTdt+EZτk∧T τX i∈D 1−1 ui(t) bui(t)dwi(t) +EX i∈DX r∈DZ0 −hZτk∧T τji−rur(t+s)αi(s)dtdµ(s)−EX i∈DZτk∧T τui(t)dt. Using αi≤1 we have X i∈DX r∈DZ0 −hZτk∧T τji−rur(t+s)αi(s)dtdµ(s) =X i∈DX r∈DZ0 −hji−rαi(s)Zτk∧T τur(t+s)dtdµ(s) ≤X r∈DX i∈Dji−rZ0 −hαi(s)dµ(s)Zτk∧T τ−hur(t)dt ≤X r∈DZτk∧T τ−hur(t)dt=X r∈DZτk∧T τur(t)dt+X r∈DZτ τ−hur(t)dt. 4 Hence, 0≤EV(u(τk∧T)) ≤V(ϕ(0)) + KTE(τk∧T) +X r∈DZτ τ−hϕ(t)dt ≤V(ϕ(0)) + KTT+Kϕ. Let Ω k={ω:τk≤T}, which satisfies P(Ωk)≥εfork≥k1. For any ω∈Ωkthere is i∈Dsuch that either ui(τk, ω) =korui(τk, ω) = 1 /k, which implies that V(u(τk∧T, ω))≥(k−1−log(k))∧(1 k−1 + log( k)). Hence, V(ϕ(0)) + KTT+Kϕ≥E(1ΩkV(u(τk∧T)))≥ε((k−1−log(k))∧(1 k−1 + log( k))) = εR(k), where 1 Astands for the indicator function of the set A. Passing to the limit as k→+∞we obtain a contradiction as R(k)→+∞. As a consequence, the following result follows. Corollary 2 Letϕ1, ϕ2∈C([−h,0],Rm +)be two initial conditions satisfying ϕ1 i(s)> ϕ2 i(s)for any i∈D, s∈[−h,0]. Also, g1, g1∈C([0,+∞), l∞ 2)are such that g1 i(t)≥g2 i(t)for all i∈Dandt≥τ. Then, ui(t)> vi(t), for all i∈Dandt≥τ, where u(·),v(·)are the unique solutions to problem (4) corresponding to{ϕ1, g1}and{ϕ2, g2}, respectively. 4 The non-linear case Let us consider now the system d dtui(t) = [J(t, ut)]i−ui(t) +bui(t)(1−ui(t))dwi dt,i∈D,t > τ, (7) ui(τ+s)≡ϕi(s) ,i∈D,s∈[−h,0]. We are now interested in proving that the components of the solution remain in the interval (0 ,1) for every moment of time. In this way, we guarantee that the variables are probabilities if the initial
conditions are as well. Lemma 3 Assume that gr(t)∈[0,1],for all randt,and that αi≤1, for all i. Then, for any ϕ∈ C([−h,0],Rm +)such that ϕi(s)∈(0,1), for any i∈Dands∈[−h,0], the unique solution u(·)to (7) satisfies almost surely that ui(t)∈(0,1)for all i∈Dandt≥τ. Proof. The existence and uniqueness of local solution is again guaranteed by [31, Theorem 2.8, P. 154]. Now we prove the statement of the Lemma. Letk0>0 be such that 1 k0<min s∈[−h,0]|ϕi(s)| ≤ max s∈[−h,0]|ϕi(s)|< k0for all i∈D. We define now the stopping time τk= inf{t∈[0,∞) :ui(t)̸∈(1 k,1−1 k) for some i∈D}. Since this sequence is increasing as k↗+∞, ifτ∞= lim k→+∞τk= +∞a.s., then 0 < ui(t)<1 almost sure for t≥0. 5 By contradiction, assume the existence of T, ε > 0 such that P(τ∞≤T)> ε. In such a case there would exist k1≥k0such that P(τk≤T)≥εfork≥k1. We denote K0={u= (um1, ..., u m2)∈Rm +: 0< ui<1} and define the C2-function V:K0→R1 +given by V(u) =−X i∈D(log(1 −ui) + log( ui)). Forτ≤t≤τk∧Twe have u(t)∈K0, and then by Itˆ o’s formula we have dV(u(t)) =X i∈D1 1−ui(t)−1 ui X r∈DZ0 −hji−rur(t+s)αi(s)dµ(s) +X r∈Z\DZ0 −hji−rgr(t+s)αi(s)dµ(s)−ui(t) dt +X i∈D1 2 u2 i(t) + (1 −ui(t))2 b2dt+X i∈Db(2ui(t)−1)dwi(t) =I(t)dt+X i∈DIi(t)dwi(t), where Ii(t) =b(2ui(t)−1). First, X i∈D −1 ui X r∈Dji−rur(t)−ui(t) +X r∈Z\Dji−rgr(t) ≤X i∈D1. Second, by αi≤1,0≤gr(t)≤1 and ( H2) we obtain that X r∈DZ0 −hji−rur(t+s)αi(s)dµ(s) +X r∈Z\DZ0 −hji−rgr(t+s)αi(s)dµ(s)−ui(t) ≤X r∈Zji−rZ0 −hαi(s)dµ(s)−ui(t) ≤1−ui(t). Then the term I(t) is estimated as follows: I(t)≤X i∈D 2 +b2 =m 2 +b2 . Integrating over (0 , τk∧T) and taking expectations we deduce that 0≤EV(u(τk∧T))≤V(ϕ(0)) + EZτk∧T 0m 2 +b2 dt+EZτk∧T 0X i∈Db(2ui(t)−1)dwi(t) =V(ϕ(0)) + m 2 +b2 E(τk∧T)≤V(ϕ(0)) + m 2 +b2 T. Let Ω k={ω:τk≤T}, which satisfies P(Ωk)≥εfork≥k1. For any ω∈Ωkthere exists i∈Dsuch that either ui(τk, ω) =1 korui(τk, ω) = 1−1 k, so that V(u(τk∧T, ω))≥log(k)−log(1−1 k). 6 Thus, V(ϕ(0)) + m 2 +b2 T≥E(1ΩkV(u(τk∧T)))≥ε log(k)−log(1−1 k) . 
Passing to the limit as $k\to+\infty$ we arrive at a contradiction.

5 Asymptotic behaviour

If we consider model (4) in the deterministic and autonomous case, that is, $b=0$ and $g_r(t)\equiv g_r\in\mathbb{R}$, and assume that $\alpha_i(s)=\alpha(s)$ for all $i\in D$, then it is well known [35] that there exists a unique fixed point $\overline u$ given by the solution of the system
$$-M_1\sum_{r\in D}j_{i-r}\overline u_r+\overline u_i=M_1\sum_{r\in\mathbb{Z}\setminus D}j_{i-r}g_r=b_i,\quad i\in D,\qquad(8)$$
where $M_1=M_1(h):=\int_{-h}^{0}\alpha(s)\,d\mu(s)$, provided that
$$M_1\sum_{r\in D}j_{i-r}<1\quad\forall i\in D.\qquad(9)$$
Moreover, $\overline u_r\ge0$ for any $r\in D$ (see Remark 3.1 in [7]). We will show that the solutions of the stochastic system remain close to this fixed point for large times in a suitable sense. We start with the linear case.

Theorem 4 Assume that $g_r\ge0$ for all $r\in D$, $b<1$, and that
$$M_1(h)\sum_{r\in D}j_{i-r}<1-b^2\quad\forall i\in D,\qquad(10)$$
$$h<\frac{1}{2(1-b^2)}\log\frac{1-b^2}{1-\delta(h)-b^2},\qquad(11)$$
where
$$\delta(h)=\min_{i\in D}\Bigl(1-b^2-M_1(h)\sum_{r\in D}j_{i-r}\Bigr).$$
Then, for any $\phi\in C([-h,0],\mathbb{R}^m_+)$, the unique solution to problem (5) satisfies
$$\limsup_{t\to+\infty}\frac1t\int_0^t\sup_{\theta\in[-h,0]}E\bigl(\|u(s+\theta)-\overline u\|^2_{\mathbb{R}^m}\bigr)\,ds\le\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda^*-L(h,\lambda^*)},$$
where $L=L(h,\lambda):=2(1-\delta(h)-b^2)e^{\lambda h}$ and $\lambda^*\in\bigl(0,2(1-b^2)\bigr)$ is such that $\lambda^*>L(h,\lambda^*)$.

Remark 5 It is easy to see that (11) is satisfied for $h$ small enough. Indeed, let $h_0$ be such that (10) holds for all $h\le h_0$. If $h<h_0$ is small enough that
$$h<\frac{1}{2(1-b^2)}\log\frac{1-b^2}{1-\delta(h_0)-b^2},$$
the fact that $\delta(h)$ is non-increasing implies that
$$h<\frac{1}{2(1-b^2)}\log\frac{1-b^2}{1-\delta(h)-b^2}.$$

Remark 6 Let $f(\lambda)=2(1-\delta(h)-b^2)e^{\lambda h}-\lambda$. Condition (11) implies that $f(2(1-b^2))<0$. Then, choosing $\lambda^*$ close enough to $2(1-b^2)$, we have that $f(\lambda^*)<0$, so $\lambda^*>L(h,\lambda^*)$.

Proof. We define the $C^2$ function $V:\mathbb{R}^m\to\mathbb{R}$ by
$$V(u)=\sum_{i\in D}(u_i-\overline u_i)^2=\|u-\overline u\|^2_{\mathbb{R}^m}.$$
For $\lambda\in\bigl(0,2(1-b^2)\bigr)$ we have $d(e^{\lambda t}V(u(t)))=\lambda e^{\lambda t}V(u(t))\,dt+e^{\lambda t}\,dV(u(t))$, and using Itô's formula we obtain that
$$dV(u(t))=\sum_{i\in D}2(u_i(t)-\overline u_i)\Bigl(\sum_{r\in D}\int_{-h}^{0}j_{i-r}u_r(t+s)\alpha(s)\,d\mu(s)+\sum_{r\in\mathbb{Z}\setminus D}\int_{-h}^{0}j_{i-r}g_r\alpha(s)\,d\mu(s)-u_i(t)\Bigr)dt+\sum_{i\in D}u_i^2(t)b^2\,dt+\sum_{i\in D}2b(u_i(t)-\overline u_i)u_i(t)\,dw_i(t)=I(t)\,dt+\sum_{i\in D}I_i(t)\,dw_i(t).$$
Using (8) the term $I(t)$ is estimated by
$$I(t)=\sum_{i\in D}2(u_i(t)-\overline u_i)\Bigl(\sum_{r\in D}\int_{-h}^{0}j_{i-r}u_r(t+s)\alpha(s)\,d\mu(s)+M_1\sum_{r\in\mathbb{Z}\setminus D}j_{i-r}g_r-u_i(t)\Bigr)+\sum_{i\in D}u_i^2(t)b^2$$
$$\le\sum_{i\in D}2(u_i(t)-\overline u_i)\Bigl(\sum_{r\in D}\int_{-h}^{0}j_{i-r}(u_r(t+s)-\overline u_r)\alpha(s)\,d\mu(s)-(u_i(t)-\overline u_i)\Bigr)+2b^2\|u-\overline u\|^2_{\mathbb{R}^m}+2b^2\|\overline u\|^2_{\mathbb{R}^m}$$
$$=\sum_{i\in D}2(u_i(t)-\overline u_i)\sum_{r\in D}\int_{-h}^{0}j_{i-r}(u_r(t+s)-\overline u_r)\alpha(s)\,d\mu(s)-2(1-b^2)\|u-\overline u\|^2_{\mathbb{R}^m}+2b^2\|\overline u\|^2_{\mathbb{R}^m}=I_1(t)+I_2(t).$$
Using the definition of $\delta(h)$ the first term is estimated by
$$I_1(t)\le M_1(h)\sum_{i\in D}(u_i(t)-\overline u_i)^2\sum_{r\in D}j_{i-r}+\sum_{i\in D}\sum_{r\in D}\int_{-h}^{0}j_{i-r}(u_r(t+s)-\overline u_r)^2\alpha(s)\,d\mu(s)$$
$$\le(1-\delta(h)-b^2)\sum_{i\in D}(u_i(t)-\overline u_i)^2+\frac{1-\delta(h)-b^2}{M_1(h)}\int_{-h}^{0}\sum_{r\in D}(u_r(t+s)-\overline u_r)^2\alpha(s)\,d\mu(s).$$
Hence,
$$d(e^{\lambda t}V(u(t)))\le\Bigl[\lambda e^{\lambda t}\|u(t)-\overline u\|^2_{\mathbb{R}^m}-2(1-b^2)\|u(t)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda t}+2b^2\|\overline u\|^2_{\mathbb{R}^m}e^{\lambda t}+(1-\delta(h)-b^2)\|u(t)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda t}+\frac{1-\delta(h)-b^2}{M_1(h)}e^{\lambda t}\int_{-h}^{0}\sum_{r\in D}(u_r(t+s)-\overline u_r)^2\alpha(s)\,d\mu(s)\Bigr]dt+e^{\lambda t}\sum_{i\in D}I_i(t)\,dw_i(t).$$
Integrating over $(0,t)$ and taking into account that $\lambda<2(1-b^2)$, we can discard the first two terms appearing on the right-hand side and deduce
$$e^{\lambda t}V(u(t))\le V(u(0))+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda t}+(1-\delta(h)-b^2)\int_0^t\|u(s)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda s}\,ds+\frac{1-\delta(h)-b^2}{M_1(h)}\int_0^te^{\lambda l}\int_{-h}^{0}\sum_{r\in D}(u_r(l+s)-\overline u_r)^2\alpha(s)\,d\mu(s)\,dl+\int_0^te^{\lambda s}\sum_{i\in D}I_i(s)\,dw_i(s).$$
Taking expectations we obtain
$$e^{\lambda t}E\|u(t)-\overline u\|^2_{\mathbb{R}^m}\le E\|u(0)-\overline u\|^2_{\mathbb{R}^m}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda t}+(1-\delta(h)-b^2)\int_0^tE\|u(s)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda s}\,ds+\frac{1-\delta(h)-b^2}{M_1(h)}\int_0^te^{\lambda l}\int_{-h}^{0}E\|u(l+s)-\overline u\|^2_{\mathbb{R}^m}\alpha(s)\,d\mu(s)\,dl$$
$$\le E\|u(0)-\overline u\|^2_{\mathbb{R}^m}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda t}+(1-\delta(h)-b^2)\int_0^tE\|u(s)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda s}\,ds+(1-\delta(h)-b^2)\int_0^t\sup_{s\in[-h,0]}E\|u(l+s)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda l}\,dl$$
$$\le E\|u(0)-\overline u\|^2_{\mathbb{R}^m}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda t}+2(1-\delta(h)-b^2)\int_0^t\sup_{s\in[-h,0]}E\|u(l+s)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda l}\,dl.$$
Notice that the last term makes sense thanks to [31, Lemma 2.3, p. 150], since this implies that $u\in L^2(\Omega;C([-h,T];\mathbb{R}^m))\subset C([-h,T],L^2(\Omega;\mathbb{R}^m))$. We next replace $t$ by $t+\theta$, $\theta\in[-h,0]$, $t+\theta\ge0$, in the above inequality. Then
$$E\|u(t+\theta)-\overline u\|^2_{\mathbb{R}^m}\le e^{-\lambda(t+\theta)}E\|u(0)-\overline u\|^2_{\mathbb{R}^m}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}+2(1-\delta(h)-b^2)e^{-\lambda(t+\theta)}\int_0^{t+\theta}\sup_{s\in[-h,0]}E\|u(l+s)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda l}\,dl$$
$$\le e^{-\lambda t}e^{\lambda h}E\|u(0)-\overline u\|^2_{\mathbb{R}^m}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}+2(1-\delta(h)-b^2)e^{-\lambda t}e^{\lambda h}\int_0^t\sup_{s\in[-h,0]}E\|u(l+s)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda l}\,dl.$$
For $t+\theta<0$ we have
$$e^{\lambda t}E\|u(t+\theta)-\overline u\|^2_{\mathbb{R}^m}\le e^{\lambda t}\sup_{\theta\in[-h,0]}E\|u(\theta)-\overline u\|^2_{\mathbb{R}^m}\le e^{\lambda h}\sup_{\theta\in[-h,0]}E\|u(\theta)-\overline u\|^2_{\mathbb{R}^m}.$$
Thus, if we define the function
$$y(t)=\sup_{\theta\in[-h,0]}E\|u(t+\theta)-\overline u\|^2_{\mathbb{R}^m},$$
then
$$e^{\lambda t}y(t)\le y(0)e^{\lambda h}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda t}+L(h)\int_0^ty(l)e^{\lambda l}\,dl,$$
and Gronwall's lemma implies
$$e^{\lambda t}y(t)\le y(0)e^{\lambda h}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda t}+L(h)\int_0^t\Bigl(y(0)e^{\lambda h}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda h}e^{\lambda l}\Bigr)e^{L(h)(t-l)}\,dl$$
$$\le y(0)e^{\lambda h}+y(0)e^{\lambda h}(e^{L(h)t}-1)+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda t}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda h}\frac{L(h)}{\lambda-L(h)}e^{L(h)t}\bigl(e^{(\lambda-L(h))t}-1\bigr)\le y(0)e^{\lambda h}e^{L(h)t}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda-L(h)}e^{\lambda t},$$
and, consequently,
$$y(t)\le y(0)e^{\lambda h}e^{(L(h)-\lambda)t}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda-L(h)}.$$
Integrating over $(0,t)$ we have
$$\frac1t\int_0^ty(s)\,ds\le\frac{y(0)e^{\lambda h}}{\lambda-L(h)}\,\frac{1-e^{(L(h)-\lambda)t}}{t}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda-L(h)}\le\frac{y(0)e^{\lambda h}}{\lambda-L(h)}\,\frac1t+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda-L(h)}.$$
Therefore,
$$\limsup_{t\to+\infty}\frac1t\int_0^ty(s)\,ds\le\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda-L(h)},$$
which holds whenever $\lambda>L(h)$. Thus, the statement is proved.

Let us consider now system (7).

Theorem 7 Assume that $g_r\in[0,1]$ for all $r\in D$, $b<1$, and that (10), (11) hold. Then for any $\phi\in C([-h,0],\mathbb{R}^m_+)$ such that $\phi_i(s)\in(0,1)$ for all $i\in D$ and $s\in[-h,0]$, the unique solution to problem (7) satisfies
$$\limsup_{t\to+\infty}\frac1t\int_0^t\sup_{\theta\in[-h,0]}E\bigl(\|u(s+\theta)-\overline u\|^2_{\mathbb{R}^m}\bigr)\,ds\le\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda^*-L(h,\lambda^*)},$$
where $L=L(h,\lambda):=2(1-\delta(h)-b^2)e^{\lambda h}$ and $\lambda^*\in\bigl(0,2(1-b^2)\bigr)$ is such that $\lambda^*>L(h,\lambda^*)$.

Proof.
As before, we define the $C^2$-function $V:\mathbb{R}^m\to\mathbb{R}$ given by
$$V(u)=\sum_{i\in D}(u_i-\overline u_i)^2=\|u-\overline u\|^2_{\mathbb{R}^m}.$$
For $\lambda\in\bigl(0,2(1-b^2)\bigr)$ we have $d(e^{\lambda t}V(u(t)))=\lambda e^{\lambda t}V(u(t))\,dt+e^{\lambda t}\,dV(u(t))$. Then, Itô's formula yields
$$dV(u(t))=\sum_{i\in D}2(u_i(t)-\overline u_i)\Bigl(\sum_{r\in D}\int_{-h}^{0}j_{i-r}u_r(t+s)\alpha(s)\,d\mu(s)+\sum_{r\in\mathbb{Z}\setminus D}\int_{-h}^{0}j_{i-r}g_r\alpha(s)\,d\mu(s)-u_i(t)\Bigr)dt+\sum_{i\in D}u_i^2(t)(1-u_i(t))^2b^2\,dt+\sum_{i\in D}2b(u_i(t)-\overline u_i)u_i(t)(1-u_i(t))\,dw_i(t)=I(t)\,dt+\sum_{i\in D}I_i(t)\,dw_i(t).$$
Since $\sum_{i\in D}u_i^2(t)(1-u_i(t))^2b^2\le\sum_{i\in D}u_i^2(t)b^2$, repeating the same steps as in the proof of Theorem 4 we derive
$$e^{\lambda t}V(u(t))\le V(u(0))+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda t}+(1-\delta(h)-b^2)\int_0^t\|u(s)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda s}\,ds+\frac{1-\delta(h)-b^2}{M_1(h)}\int_0^te^{\lambda l}\int_{-h}^{0}\sum_{r\in D}(u_r(l+s)-\overline u_r)^2\alpha(s)\,d\mu(s)\,dl+\int_0^te^{\lambda s}\sum_{i\in D}I_i(s)\,dw_i(s)$$
and, taking expectations, we infer
$$e^{\lambda t}E\|u(t)-\overline u\|^2_{\mathbb{R}^m}\le E\|u(0)-\overline u\|^2_{\mathbb{R}^m}+\frac{2b^2\|\overline u\|^2_{\mathbb{R}^m}}{\lambda}e^{\lambda t}+2(1-\delta(h)-b^2)\int_0^t\sup_{s\in[-h,0]}E\|u(l+s)-\overline u\|^2_{\mathbb{R}^m}e^{\lambda l}\,dl.$$
Again, the last term in the previous inequality is well defined thanks to [31, Lemma 2.3, p. 150]. Notice that, as the solutions belong to (0,1) almost surely, the term in front of the noise has sub-linear growth, so Lemma 2.3 in [31] can be applied in this nonlinear case. The rest of the proof repeats the same argument as in Theorem 4.

6 Application to Life Tables

In this section we apply model (7) to predict the probability of death by age in Spain.

6.1 Life Tables: mortality modeling through a stochastic delay approach

Life tables are among the most widely used tools for studying survival and mortality patterns in a population. In demography, for instance, mortality constitutes one of the terms of the component method ([18], [26]) used for population estimates. In actuarial science, particularly in insurance applications, life tables are a fundamental tool for calculating, for example, the adverse scenarios that insurance companies must be prepared to face. These tables are structured around interrelated biometric functions (see, for example, [11], [4], [2]). Among these functions we can highlight the probability of surviving to completed age $x$, $p_x$; its complement, the probability of death, $q_x$; the life expectancy at age $x$, $e_x$; and the central death rate by age, $m_x$.

In practice, the true values of these functions are generally unknown and must be estimated from observed data. This estimation process can be approached from different methodological perspectives, typically grouped into three main categories: (i) stochastic versus non-stochastic models, (ii) parametric versus non-parametric models, and (iii) static versus dynamic approaches.
Each of these axes defines a key aspect of the modeling process: whether randomness is considered, whether structured functional forms are imposed, and whether the temporal evolution of mortality is incorporated.

Traditionally, the estimation of age-specific death probabilities ($q_x$) has been performed using data from a single time period. This procedure generates so-called crude death rates, which may lack desirable smoothness properties, such as the expected continuity between adjacent ages. To correct these irregularities, classical laws such as Gompertz's law [20], the Gompertz-Makeham law [30], or the Heligman-Pollard model [23], among others, have been proposed. However, these approaches generally adopt a parametric and static perspective, in which the function $q_x^t$ is assumed to remain constant over nearby time intervals. Nevertheless, it is well known that mortality rates evolve over time, so assuming $q_x^t=q_x^{t+h}$ can lead to significant errors in many applications, such as pension expenditure projections or the calculation of technical reserves for insured portfolios.

This realization has driven the development of dynamic mortality models, in which the temporal evolution of $q_x^t$ is modeled explicitly. Among the best-known and most widely used dynamic models are the Lee-Carter models [29], the CBD model [8], and the extended M3-M7 family of models [9], [10]. These models introduce temporal improvement factors and stochastic components that capture the uncertainty associated with future mortality, offering interpretable structures and reasonable predictive performance. In line with these models, a non-parametric dynamic model based on kernel smoothing techniques to estimate mortality rates was proposed in [34]. That model avoids rigid functional assumptions and uses a system of non-local differential equations to approximate the evolution of $q_x^t$ over time. Although it successfully reproduces qualitative features of observed mortality, its predictive horizon is short (two to three years) due to the absence of historical information.

To overcome this limitation, [35] proposed an improved model that incorporates past information through a delay term, specifically using mortality improvement rates [8]. This formulation retains the dynamic and non-parametric character of the original model while substantially enhancing its predictive capacity, extending its utility to time horizons of five to ten years. The underlying idea is that mortality trajectories are partially path-dependent, and thus their historical evolution must be considered.

This delay-based model remains within a non-stochastic framework, using observed improvement rates to incorporate past dynamics. It offers a robust alternative to stochastic models when a precise deterministic prediction is required, as in regulatory contexts (e.g., Solvency II, [17]). However, despite its improved predictive performance, a major limitation persists: it offers neither 'alternative' mortality evolution scenarios nor the possibility of calculating quantiles or constructing confidence intervals. To address this issue, the non-local model proposed in [34] was extended into a stochastic model by introducing a random term into the original system of non-local differential equations. This extension aimed to capture both the temporal evolution and the intrinsic variability of the mortality phenomenon. The resulting model [7] strikes a compelling balance between interpretability, flexibility, and robustness, and aligns with the family of stochastic, non-parametric, and dynamic models (see [6], [14], [12]).
Despite providing a novel approach to integrating stochastic variability and non-parametric smoothing within a single mortality modeling framework, the need to incorporate historical information about mortality evolution became apparent. This leads to the model proposed in the present work, which can be classified as a dynamic, non-parametric, and stochastic model that also integrates historical information to capture the temporal evolution of mortality.

6.2 Dynamical kernel graduation: combining stochastic behavior and historical data

The model proposed in equation (4) contains both delay and stochastic terms. In a similar way as in [35], in this work the delay term takes the history into account using the concept of Improvement Rate (see also [15], [21], [22] and [1]). These rates, denoted by $r_x^{t_0,t_1}$, aim to characterize the evolution of mortality year-to-year or between two arbitrary moments, $t_0$ and $t_1$, at a given age $x$; they are defined by $r_x^{t_0,t_1}=q_x^{t_1}/q_x^{t_0}$. The difference $d=t_1-t_0>0$ is the delay. Using these improvement rates, we define the global improvement rate, denoted by $r_{t_0}^{t_1}$, as the coefficient of the linear model (without constant term) of the observed death rates; that is, we fit the linear model $q_\cdot^{t_1}=r_{t_0}^{t_1}\cdot q_\cdot^{t_0}$ with the data $(q_0^{t_0},\dots,q_\omega^{t_0})$ and $(q_0^{t_1},\dots,q_\omega^{t_1})$.

The procedure can be summarized as in [35]:

1. We consider the observed mortality rates at each age $x$ and each year $t$: $q_x^t$, $x\in D=\{0,\dots,100\}$, $t\in\{1908,\dots,2018\}$. We also consider the values of $g_x(t)$, the death rate either at "negative ages" or beyond the actuarial infinity (which we have chosen equal to 100).

2. We estimate the improvement rates for each age and delay. Arranged with base years $t_0$ in rows and delays $d=1,\dots,110$ in columns, they form the triangular array
$$r_{t_0}^{t_1}\sim\frac{q_\cdot^{t_1}}{q_\cdot^{t_0}},\qquad 1908\le t_0<t_1\le2018,$$
where the row for base year $t_0$ contains $r_{t_0}^{t_0+1},\dots,r_{t_0}^{2018}$; the first row (1908) contains the rates up to delay 110, while the last row (2017) contains only $r_{2017}^{2018}$.

3. We consider the modified spheric function $\tilde\gamma$ given by
$$\tilde\gamma(s)=\begin{cases}T,&\text{if }s<b-a,\\ \gamma(b-s),&\text{if }b-a\le s\le b,\\ 0,&\text{if }s>b,\end{cases}$$
where
$$\gamma(s)=\begin{cases}\dfrac{T}{2}\Bigl\{3\dfrac{s}{a}-\Bigl(\dfrac{s}{a}\Bigr)^3\Bigr\},&\text{if }0\le s\le a,\\ T,&\text{if }s>a,\end{cases}$$
with $a=20$, $b=30$ and $T=1$. We use this function to calculate the annual improvement rates by delay, $r^d$, which summarize the previous information of the mortality process. We calculate a weighted mean, with respect to the delay, of the improvement rates. To this end, we define the vector $v=(v_i)$, $i=1,\dots,n-1$, $n=111$, given by $v_i=\tilde\gamma(i)$. Then, for each delay $d=1,\dots,110$, we define the vector $v^d=(v_i^d)$, $i=1,\dots,n-d$, by
$$v_i^d=\frac{v_i}{\sum_{j=1}^{n-d}v_j},$$
and
$$r^d=\sum_{i=1}^{n-d}v_i^d\,r_{t+i}^{t+i+d},\qquad(12)$$
with $t=1907$. In this way, the values $r_{t_0}^{t_1}$ that are far enough from the present moment are not taken into account. We refer to these values as the global improvement rates for the delay $d$.

4.
Using the improvement rates by delay, $r^d$, we define the function $\alpha(s)$ from (4) by putting
$$\alpha(-s)=1+\beta s,\quad s\in[0,h],\qquad(13)$$
where $\beta$ is the coefficient of the linear regression obtained from the data $r^d$, $d=1,2,\dots,h$, and $h\le110$ is the maximum delay to be considered.

5. Experience in studying the mortality phenomenon allows us to assert that the importance of these rates is not the same for all delays. Indeed, the importance of the improvement rates increases as they get closer to the time of prediction. To take this into account, we assume that the importance of these rates is modulated by a probability distribution function. To do this, we consider the exponential probability density function
$$f_\lambda(s)=\begin{cases}\lambda e^{-\lambda s}&\text{if }s\ge0,\\0&\text{if }s<0,\end{cases}$$
and obtain a density function on the interval $[0,h]$ by putting
$$\hat f_\lambda(s)=\frac{\lambda e^{-\lambda s}}{\int_0^h\lambda e^{-\lambda s}\,ds},\quad s\in[0,h].\qquad(14)$$
Then we define the density function $\xi(\cdot)$ from (4) by $\xi(s)=\hat f_\lambda(-s)$ for $s\in[-h,0]$. In the particular case when the numerical approximations consider only integer delays, we can discretize the interval $[0,h]$ by using a finite number of integer delays $s\in\{0,1,\dots,d_{\max}\}$, where $d_{\max}=h\le n-1$ is the maximum delay to be considered. Thus, instead of (14), we use the discretized probability function
$$f^*(s)=\frac{\hat f_\lambda(s)}{\sum_{s=0}^{d_{\max}}\hat f_\lambda(s)},\qquad(15)$$
which approximates (14) at $s=0,1,\dots,d_{\max}$. Using the annual improvement rates and the exponential distribution, we can define the weighted improvement rates as
$$R^d_{\mathrm{weight}}=r^d\cdot f^*(d),\qquad(16)$$
but they are not used in the numerical method.

In a similar way as in previous works (see [34], [35]), for each age $x$, an arbitrary moment $t$ and a time step $\tau>0$, the probability of death at $t+\tau$, denoted by $q_x(t+\tau)$, depends on:
•all graduated values at moment $t$, $q_z(t)$, $z\in D\subset\mathbb{Z}$ (via Gaussian kernel graduation, see [2]);
•all previous moments of time $t+s$, $s\in[-h,0]$ (via improvement rates).

In the real world, when a phenomenon has a random nature, that is, when there exists some kind of noise intrinsic to the process under study, it is more suitable to introduce random fluctuations, for example to forecast adverse scenarios. Hence, in this work, in a similar way as in [7], we introduce the stochastic term $b\,q_i(t)(1-q_i(t))\frac{dw_i}{dt}$ into the model, giving rise to equation (7). Applying the Euler-Maruyama discrete-time approximation [28], the relation between $q_x(t+\tau)$ and $\{q_z(t)\}$, $\{q_z(t+s)\}_{s\in[-h,0[}$ becomes
$$q_x(t+\tau)=\frac{1}{1+\tau}q_x(t)+\frac{\tau}{1+\tau}\sum_{z\in D}\int_{-h}^{0}j_{x-z}q_z(t+s)\alpha(s)\,d\mu(s)+\frac{\tau}{1+\tau}\sum_{z\in\mathbb{Z}\setminus D}\int_{-h}^{0}j_{x-z}g_z(t+s)\alpha(s)\,d\mu(s)+b\,q_x(t)(1-q_x(t))(W_{x,t+\tau}-W_{x,t}),\qquad(17)$$
where $j$ is a suitable kernel (in this work a Gaussian kernel), $g_z(\cdot)$ is the death rate either at "negative ages" or beyond the actuarial infinity, and $d\mu(s)=\hat f_\lambda(-s)\,ds$, with $\hat f_\lambda$ and $\alpha(s)$ defined above. Relation (17) is consistent with empirical experience in actuarial practice.
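As a concrete illustration, one step of the update (17) can be sketched in a few lines of Python/NumPy. This is an illustrative sketch, not the authors' MATLAB implementation: the function name `em_step`, the array layout (an extended age grid z = -50,...,150 holding both q_z and g_z, and a matrix of past rates for the integer delays), and the final clipping to (0,1) as a numerical safeguard are our own assumptions.

```python
import numpy as np

def em_step(q, hist, delay_w, J, b, tau=1.0, rng=None):
    """One Euler-Maruyama step of the NLSD update (17).

    q       : current rates q_x(t) on ages x = 0..100, shape (101,)
    hist    : past rates q_z(t+s) (with g_z outside 0..100) for integer
              delays s = 0..h, shape (h+1, 201), on the grid z = -50..150
    delay_w : products alpha(s) * f_star(s) for s = 0..h, shape (h+1,)
    J       : kernel weights j_{x-z}, shape (101, 201)
    b       : noise intensity
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # non-local delayed term: weighted sum over delays s, then over ages z
    delayed = (delay_w[:, None] * hist).sum(axis=0)        # shape (201,)
    drift = q / (1.0 + tau) + (tau / (1.0 + tau)) * (J @ delayed)
    dW = rng.normal(0.0, np.sqrt(tau), size=q.shape)       # Brownian increment
    # clip to (0,1): the continuous model keeps rates in (0,1) (Lemma 3),
    # but the discrete scheme needs a numerical safeguard
    return np.clip(drift + b * q * (1.0 - q) * dW, 1e-12, 1.0 - 1e-12)
```

With $\tau=1$ and delay weights summing to 1, a constant mortality surface is a fixed point of the drift part, which is a quick sanity check for an implementation.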
In order to evaluate the integral $\int_{-h}^{0}j_{x-z}g_z(t+s)\alpha(s)\,d\mu(s)$ we use the classical Riemann sum with time step 1 and approximate the function $\hat f_\lambda(s)$ using the discretized probability function (15). We observe that when the values $q_x(t)$ decrease over time, which is our case, the coefficient $\beta$ in (13) is negative. Then
$$\overline\alpha=\int_{-h}^{0}\alpha(s)\hat f_\lambda(s)\,ds=\int_{-h}^{0}(1-\beta s)\hat f_\lambda(s)\,ds=1-\beta\int_{-h}^{0}s\,\hat f_\lambda(s)\,ds\le1,$$
so that the assumption $\alpha\le1$ in Lemma 3 is satisfied.

6.3 Numerical simulation

We implement our nonlinear stochastic model with delay (17) (NLSD for short) to predict the probability of death in Spain.

6.3.1 Data and parameters

The dataset used in this work has been obtained from [24]. The variables are the population and the central mortality rates for each age, taken from 0 to 100 years old (the actuarial infinity). We use the values observed in Spain in the period 1908-2023. The dataset has been split into two subsets: the period 1908-2018 is used to fit and calibrate the model, and the period 2019-2023 is used for the validation of the model.

We have chosen the value $\lambda=\frac{11}{12}$ in the function (14). The maximum delay $h$ was set to 90; with this value, the estimated slope is $\beta=-0.003473$. As a kernel we have chosen a discrete Gaussian kernel with mean 0, variance 1 and bandwidth $bw=0.25$, which is defined as follows. We consider a finite set of ages $A=\{-m,-m+1,\dots,x,\dots,M-1,M\}$, where $0\le x\le100$, $m\ge0$, $M\ge100$. We define the set of distances
$$D_x=\{d_{-m},d_{-m+1},\dots,d_0,d_1,\dots,d_M\}=\{x+m,x+m-1,\dots,1,0,1,\dots,M-x-1,M-x\}.$$
Then we define the truncated Gaussian kernel $\hat K_b(\cdot)$ as
$$\hat K_b(k)=\frac{K_b(k)}{\sum_{\xi\in D_x}K_b(\xi)},\quad k\in D_x,$$
where $K(\cdot)$ denotes the density of a standard Gaussian random variable and $K_b(x)=\frac{1}{bw}K\bigl(\frac{x}{bw}\bigr)$, $bw=0.25$. In expression (17) we consider the truncated sums with the restriction $-50\le z\le150$; thus we set $m=50$, $M=150$ for the discrete Gaussian kernel. In this way, for $x\in\{0,1,\dots,100\}$ we obtain
$$j_{x-z}=\frac{K_b(|x-z|)}{\sum_{\xi\in D_x}K_b(\xi)},\quad\text{for }z\in\{-50,\dots,150\},$$
where $D_x=\{x+50,\dots,150-x\}$.

Regarding the actuarial infinity (100 years old), the values of $g_x$ for $x>100$ are taken equal to 0.385, in a similar way as in [1]. For $x<0$, the values of $g_x^t$ are estimated using the expression $\frac34 q_0^t+\frac14 q_1^t$.

The proposed method enables forecasting over an arbitrary time horizon. It also makes it possible to obtain several trajectories, that is, an ensemble of predictions. In particular, we have considered a time horizon of 15 years and 500 trajectories. The time step $\tau$ in (17) is taken equal to 1. Although it is possible to use a smaller value of $\tau$ by interpolating past values of the variable, the results are rather similar. The increments $W_{x,t+\tau}-W_{x,t}$ are obtained using the Box-Muller algorithm [28], which gives pairs of pseudo-random numbers $(z_1,z_2)$ that are independent and normally distributed with zero mean and variance equal to $\tau$. We have obtained the predicted trajectories for several values of the parameter $b$ determining the intensity of the noise: $b=0.1$, 0.05 and 0.025.

6.3.2 Indicators

To validate the method and to determine whether the proposed technique can be used in real applications, we define several indicators. These indicators are also used to compare different models.
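Before turning to the concrete indicators, the truncated Gaussian kernel weights $j_{x-z}$ of Section 6.3.1 can be computed with a short script. This is an illustrative Python sketch, not the authors' MATLAB code; the function name `kernel_weights` and the row-wise normalisation over the whole truncated grid are our reading of the definition:

```python
import numpy as np

def kernel_weights(bw=0.25, m=50, M=150):
    """Truncated discrete Gaussian kernel weights j_{x-z} (Section 6.3.1).

    Returns a (101, 201) matrix: row x holds the weights over the extended
    grid z = -m..M, normalised so that each row sums to 1.
    """
    ages = np.arange(101)                       # x = 0..100
    z = np.arange(-m, M + 1)                    # z = -50..150
    d = np.abs(ages[:, None] - z[None, :])      # distances |x - z|
    # K_b(x) = K(x / bw) / bw, with K the standard Gaussian density
    W = np.exp(-0.5 * (d / bw) ** 2) / (bw * np.sqrt(2.0 * np.pi))
    return W / W.sum(axis=1, keepdims=True)
```

With $bw=0.25$, almost all of the mass sits on $z=x$, with only a tiny spill-over to the adjacent ages, so the graduation is very local.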
We consider measures of three types: error measures, count measures, and central measures.

Error measures. These measures compare the observed mortality rates with a synthetic trajectory; in this case, we use the mean (or median) trajectory of the realizations. We then calculate the mean quadratic difference, $I^t_{MqD}$, or the mean relative quadratic difference, $I^t_{MRqD}$, for each year. In particular, we use the expressions
$$I^t_{MqD}=\frac{\sum_{x=0}^{100}\bigl(q_x^{t,obs}-E(q_x^t)\bigr)^2}{101}\quad\text{or}\quad I^t_{MRqD}=\frac{\sum_{x=0}^{100}\dfrac{\bigl(q_x^{t,obs}-E(q_x^t)\bigr)^2}{E(q_x^t)}}{101}.\qquad(18)$$
Here, $q_x^{t,obs}$ are the observed rates at age $x$ and time $t$, and $E(q_x^t)$ is the mean value over the trajectories of the computed realizations.

Count measures. In the stochastic paradigm, it can be appropriate to use other indicators to determine whether a method is good. The model proposed in this work, as well as the other models used in the validation step, allows us to construct synthetic empirical confidence intervals $IC_{1-\alpha}$ with a given level $\alpha$. We then define several indicators using these confidence intervals. In particular, we put
$$I^t_{c,1-\alpha}=\#\bigl\{x:\ q_x^{t,obs}\notin IC_{1-\alpha}\bigr\}.\qquad(19)$$
For each year, $I^t_{c,1-\alpha}$ counts the ages at which the observed data at year $t$ do not belong to the $(1-\alpha)$-synthetic confidence interval, $\alpha\in[0,1]$. We use $1-\alpha=0.98$, 0.90 and 0.80.

Central measures and variability. It is important to point out that a stochastic model is suitable if it achieves a good balance between coverage and precision. For instance, if the confidence intervals are narrow but do not contain the observed (or future) values, the model underestimates uncertainty and may lead to serious consequences, for example if an insurance company fails to allocate sufficient capital reserves to meet future claims. On the other hand, if the resulting confidence intervals are too wide, even if they always include the observed or future values, the model overestimates uncertainty, thus losing predictive value and potentially causing significant harm, for example by requiring capital to be reserved for specific purposes and thereby limiting its availability for others, such as healthcare or pensions. Hence, in indicators of this type, we take into account both the precision of the mean values and the dispersion of the realizations in order to compare the methods. Using the same notation as in (18), we define, in a similar way as in [13], the following indicator:
$$I^t_{CT,\tau}=\sum_{x=0}^{100}\frac{\bigl(q_x^{t,obs}-E(q_x^t)\bigr)^2}{(\sigma_x^t)^\tau},\qquad(20)$$
where $\sigma_x^t$ is the standard deviation over the trajectories of the computed realizations at age $x$ and time $t$, and either $\tau=1$ or $\tau=2$.

6.3.3 Software

The numerical method has been implemented in MATLAB (version R2024b). The R software ([36]) has been used to download the dataset from [24] (using the package demography, [25]). R has also been used to implement the Lee-Carter, Renshaw-Haberman, CBD and M3-M7 family of models through the package StMoMo ([37]); this package has been used to determine the best models.
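The indicators (18)-(20) of Section 6.3.2 are straightforward to compute from an ensemble of simulated trajectories. The following is a minimal illustrative Python sketch; the function name, the array layout, and the use of empirical quantiles for the synthetic confidence interval $IC_{1-\alpha}$ are our own assumptions:

```python
import numpy as np

def indicators(q_obs, trajs, alpha=0.02, tau=1):
    """Indicators (18)-(20) from an ensemble of simulated trajectories.

    q_obs : observed rates q_x^{t,obs} for one year, shape (101,)
    trajs : simulated rates for the same year, shape (n_traj, 101)
    Returns (I_MqD, I_c, I_CT) for confidence level 1 - alpha.
    """
    mean_q = trajs.mean(axis=0)
    i_mqd = np.mean((q_obs - mean_q) ** 2)                     # eq. (18)
    # empirical synthetic confidence interval IC_{1-alpha}
    lo, hi = np.quantile(trajs, [alpha / 2, 1 - alpha / 2], axis=0)
    i_count = int(np.sum((q_obs < lo) | (q_obs > hi)))         # eq. (19)
    sd = trajs.std(axis=0, ddof=1)                             # sigma_x^t
    i_ct = float(np.sum((q_obs - mean_q) ** 2 / sd ** tau))    # eq. (20)
    return i_mqd, i_count, i_ct
```

Since $\sigma_x^t$ appears in the denominator of (20), the ensemble must contain genuine variability (b > 0) for $I_{CT,\tau}$ to be well defined.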
R has also been used to estimate the count, error and central measures used to validate our model and to determine the best models. The figures have been created with R.

6.3.4 Numerical results

This section is dedicated to the validation of the proposed method and to its comparison with other techniques. In the first part, we show graphically how the NLSD method reproduces the qualitative behavior of the mortality curve. In the second part, we compare the NLSD method with classical ones, in particular with the Lee-Carter and Renshaw-Haberman methods.

Figure 1 shows, for the year 2023, that the mean trajectory obtained by the NLSD method is close to the observed rates. The same behavior can be verified for the rest of the years in the validation period (2019-2023). Qualitatively, we can also observe how the mean realization reproduces the shape of the mortality curve, with its usual parts: adaptation to the environment (ages 0-16), natural longevity (ages 16-100) and social hump (ages 16-27).

[Figure 1: Mean trajectory, b = 0.1.]

Figures 2a-2c and Figures 3a-3c show, for the 5-year forecasting horizon, whether or not the observed rates belong to the intervals $IC_{0.98}$, $IC_{0.90}$ and $IC_{0.80}$ for $b=0.1$ and $b=0.025$.

[Figure 2: Confidence intervals for several confidence levels (b = 0.1); panels (a)-(c) show 1-α = 0.98, 0.90 and 0.80.]

[Figure 3: Confidence intervals for several confidence levels (b = 0.025); panels (a)-(c) show 1-α = 0.98, 0.90 and 0.80.]

In the context of real applications, forecasting several plausible scenarios often requires more than just the mean trajectory. For example, when mortality rates are used as input data in nonlinear estimations, as in the calculation of the cost of claims using mechanisms based on compound interest rates, it becomes convenient to account for random fluctuations. As we can see in Figure 4, the proposed method allows us to estimate not only the mean trajectory but also an arbitrary number of equally probable realizations.

[Figure 4: Ensemble of realizations, b = 0.1.]

Furthermore, with the aim of comparison, we apply the different methods to the same data in the period 1908-2018 and forecast the mortality rates for the period 2019-2023. We then calculate the indicators defined previously.
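The ensembles of realizations above are driven by the Brownian increments of (17), generated, as described in Section 6.3.1, with the Box-Muller transform. A minimal Python sketch (the function name and the vectorised layout are ours, not the authors' MATLAB code):

```python
import numpy as np

def box_muller(n, tau=1.0, rng=None):
    """Draw n independent N(0, tau) increments via the Box-Muller transform.

    Each uniform pair (u1, u2) yields two independent standard normal
    draws, which are then scaled by sqrt(tau).
    """
    if rng is None:
        rng = np.random.default_rng(42)
    pairs = (n + 1) // 2
    u1 = 1.0 - rng.random(pairs)                # in (0, 1], so log is finite
    u2 = rng.random(pairs)
    r = np.sqrt(-2.0 * np.log(u1))
    z = np.concatenate([r * np.cos(2.0 * np.pi * u2),
                        r * np.sin(2.0 * np.pi * u2)])[:n]
    return np.sqrt(tau) * z
```

In practice a library routine such as NumPy's `Generator.normal` gives the same distribution; the explicit transform is shown only to mirror the description in the text.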
As we said before, we initially considered several methods, such as the Lee-Carter (LC), Renshaw-Haberman (RH) and CBD methods, and the models M3-M7. To facilitate the interpretation of the results, we have selected the best-fitting methods. The selection is done using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). With these indicators, and using the package StMoMo in R, it has been determined that the Renshaw-Haberman and Lee-Carter methods are the most suitable.

Figure 5 shows the results for the year 2023: the observed data and the mean value of the trajectories for each technique (for the NLSD method, b = 0.1). Figure 5 provides evidence of the suitability of the proposed method. Even though the proposed technique seems, graphically, to be the best one, it is important to note that none of the three evaluated techniques has been specifically calibrated, and the default parameter values of the StMoMo package have been used for the LC and RH methods.
Complementarily, we can use the quantitative measures to determine the goodness of each model and to compare them. We start with the count indicators. Table 1 shows the number of ages (for each year of the validation period) that do not belong to the confidence interval $IC_{0.98}$, for each of the evaluated methods. Tables 2 and 3 show the same information for the confidence intervals $IC_{0.90}$ and $IC_{0.80}$. Our method has been tested with different values of the parameter b (0.1, 0.05 and 0.025), which determines the intensity of the noise.

Method \ year   2019  2020  2021  2022  2023
NLSD 0.1           7     6     4     8     5
NLSD 0.05         17    56    21    18    33
NLSD 0.025        50    75    53    39    67
LC                65    80    70    63    71
RH                71    88    81    81    84

Table 1: $I^t_{c,0.98}$

[Figure 5: Mean trajectories.]

Method \ year   2019  2020  2021  2022  2023
NLSD 0.1          10    18     9    11    16
NLSD 0.05         26    66    36    28    59
NLSD 0.025        64    84    66    50    82
LC                77    93    80    72    82
RH                78    93    82    87    87

Table 2: $I^t_{c,0.9}$

Method \ year   2019  2020  2021  2022  2023
NLSD 0.1          15    44    17    15    28
NLSD 0.05         41    72    48    34    64
NLSD 0.025        72    91    75    67    86
LC                79    95    85    75    88
RH                85    94    89    91    91

Table 3: $I^t_{c,0.8}$

By analyzing the tables of count-based indicators, we can conclude that the proposed technique captures the observed mortality more effectively. This suggests that, in this regard, it provides a better fit than the other approaches. However, this does not necessarily imply that the technique is more accurate, as the result may be due to a more conservative forecast, so we have to consider other indicators as well.

Complementary to the count-based indicators, error metrics can be used to assess whether the technique is as accurate as, or more or less accurate than, the LC and RH models. Using the indicators (18) over the full 0-100 age range, we obtain the results in Tables 4 and 5.
Method \ year   2019          2020          2021          2022          2023
NLSD 0.1        2.256290e-05  2.236960e-05  2.260896e-05  2.961290e-05  1.518093e-05
NLSD 0.05       1.103389e-04  1.091354e-04  1.090309e-04  1.070186e-04  1.554116e-04
NLSD 0.025      1.075917e-05  1.107349e-05  1.083946e-05  1.529299e-05  1.772229e-05
LC              3.135750e-05  3.088876e-05  3.099892e-05  2.791308e-05  4.069674e-05
RH              3.186545e-05  3.139071e-05  3.097631e-05  2.684819e-05  1.960841e-05

Table 4: $I^t_{MqD}$

Method \ year   2019          2020          2021          2022          2023
NLSD 0.1        1.162379e-04  1.168913e-04  1.174218e-04  1.815611e-04  9.268740e-05
NLSD 0.05       4.740293e-04  4.682191e-04  4.692663e-04  4.908082e-04  7.605474e-04
NLSD 0.025      7.714122e-05  7.831104e-05  7.724071e-05  1.283547e-04  1.373643e-04
LC              1.075780e-04  1.061766e-04  1.066380e-04  1.506715e-04  2.467346e-04
RH              2.252991e-04  2.267182e-04  2.237182e-04  1.858389e-04  1.321769e-04

Table 5: $I^t_{MRqD}$

Tables 4 and 5 allow us to compare the methods. According to Table 4, the NLSD method has the lowest error across all years; in particular, NLSD with b = 0.025 outperforms both the LC and RH methods in every year. In Table 5, NLSD (with b = 0.025) is the most accurate method in four out of five years, while RH performs best in one (although the error value for NLSD is nearly identical to that of RH in that case). The LC model does not achieve the lowest error in any of the five years for either indicator.

Central measures are also very useful to assess the accuracy of the methods. Table 6 shows the values of the indicators (20) for the year 2023.

Method       $I^{2023}_{CT,1}$   $I^{2023}_{CT,2}$
NLSD 0.1     0.18                149.54
NLSD 0.05    0.36                623.98
NLSD 0.025   0.70                2490.81
LC           0.88                2109.89
RH           1.74                103496.19

Table 6: $I^t_{CT,\tau}$

The indicator $I^{2023}_{CT,1}$ yields better results for NLSD across all three levels of noise intensity. The indicator $I^{2023}_{CT,2}$ is better for NLSD for b = 0.1 and b = 0.05, while LC slightly outperforms NLSD for b = 0.025. The values obtained for RH are significantly worse than for the other methods.

Figure 6 shows the estimated density functions for several ages and each method. From this figure we can see the variability graphically, which differs across ages and methods. This change in variability from one technique to another, when directly comparing the estimated mean values and the observed values, highlights the importance of accounting for such variability in order to accurately assess the precision of each technique. When b = 0.025, for most ages, the NLSD method yields mean values closest to the observed values while maintaining a level of variability comparable to the other methods. At the oldest age (80), the LC method provides the best coverage, albeit at the cost of high variability in the realizations. Moreover, for younger and middle ages, the realizations from the LC method fail to cover the observed value, despite exhibiting greater variability.

We have seen that the NLSD method can be applied to real-world scenarios with high short-term accuracy.
To assess whether this technique is also effective in the medium or long term, we examine whether its predicted values align with those obtained from the RH and LC methods. Figure 7 displays the mean prediction trajectories over a 10-year horizon (with predictions for 2028 based on observed data up to 2018). We observe that the predicted values are similar in magnitude across the three techniques. However, there are differences at the youngest and oldest ages. The predictions diverge most at the earliest ages, where the NLSD model produces the highest values, followed by LC with intermediate values, and RH with the lowest.

[Figure 6: Estimated density functions of q_x at ages 5, 15, 20, 35, 45, 65, 75 and 80 (b = 0.025), showing the observed rates and the means of the NLSD, LC and RH methods.]
[Figure 7: Mean trajectories of mortality rates by age over a 10-year horizon for the NLSD, LC and RH methods, with observed rates and an ensemble of realizations.]

From a qualitative point of view, it is worth noting that the analysis of historical time series indicates a decreasing intensity of the social hump over time. In this regard, the NLSD model exhibits a more realistic behavior than the LC and RH models. It is also important to highlight that the LC and RH models generate predictions using autoregressive and moving average time series models (ARIMA), which have the following characteristics:

1. They make linear forecasts by extrapolating the dynamics of the most recent values. For instance, recent improvement in mortality rates may be projected forward, potentially leading to underestimations of future mortality levels.

2. They exhibit an uncertainty cone that grows rapidly, so their predictive performance deteriorates significantly as the forecast horizon increases. In Economics, for example, it is common to use a 12-step monthly forecast horizon. For annual forecasts, typical horizons are 3, 5, or 7 years.

[Figure 8: Mean trajectories of mortality rates by age over a 15-year horizon for the NLSD, LC and RH methods.]

Figure 8 shows the predictions for the year 2033 (15-year horizon). This figure exhibits a pattern similar to that observed for the 10-year horizon, but the differences between the methods become more pronounced.

7 Conclusions

This work proposes a method for modeling and forecasting mortality rates. It constitutes an improvement over previous studies ([34], [35], [7]) by incorporating both the historical evolution of the mortality phenomenon and its random behavior. The first part of the study introduces the NLSD model and analyzes mathematical properties such as the existence of solutions and their asymptotic behavior. The second part presents an application of the NLSD model. For this purpose, the Euler–Maruyama method is applied to data obtained from the Human Mortality Database [24].
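For reference, the Euler–Maruyama scheme itself is only a few lines. The sketch below is a minimal illustration with a placeholder mean-reverting drift toward a log-mortality level of -4 and a constant noise intensity b; these are illustrative choices, not the NLSD model's actual drift and diffusion:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_grid, rng):
    """Simulate one path of dX_t = drift(X_t, t) dt + diffusion(X_t, t) dW_t."""
    x = np.empty_like(t_grid)
    x[0] = x0
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        x[k + 1] = x[k] + drift(x[k], t_grid[k]) * dt \
                        + diffusion(x[k], t_grid[k]) * dw
    return x

# Illustrative placeholder dynamics (NOT the NLSD drift/diffusion):
b = 0.025                                           # noise intensity
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 501)
path = euler_maruyama(lambda x, s: -0.5 * (x + 4.0),  # pull toward log-rate -4
                      lambda x, s: b,                 # constant noise intensity
                      x0=-2.0, t_grid=t, rng=rng)
print(path[-1])
```

Repeating the simulation yields the ensemble of realizations from which the mean trajectories and density estimates above are built.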
The choice of the HMD is justified by the fact that it contains mortality datasets from a large number of countries, all of which have been methodologically harmonized. The use of Spanish data is arbitrary; the method has also been tested with data from other countries, such as the UK, although we do not show those results in this paper.

To assess the validity of the proposed method, the observation period was divided into two subsets: one for fitting and calibration (1908–2018) and another for validation (2019–2023). The evaluation was carried out by comparing the proposed model with other widely used approaches, such as the LC, RH, CBD, and M3–M7 models. Count-based, error-based and central metrics were used in the comparison. The NLSD model achieved the best results for all years within the validation period. Based on this study, we conclude that the NLSD model should be regarded as a promising alternative to classical models. While a more exhaustive validation remains to be conducted, the method has shown the best performance among the models tested. As extensions of this study, we propose conducting a sensitivity analysis of the parameters, as well as an exhaustive
comparison across different regions and time periods. From a technical perspective, it would be valuable to incorporate optimization techniques for parameter estimation and to assess the applicability of cross-validation strategies.

Acknowledgements. The research has been partially supported by the Spanish Ministerio de Ciencia e Innovación (MCI), Agencia Estatal de Investigación (AEI) and Fondo Europeo de Desarrollo Regional (FEDER) under the project PID2021-122991NB-C21.

Conflict of interest. The authors declare that they do not have any conflict of interest.

References

[1] I. Albarrán Lozano, F. Ariza Rodríguez, V.M. Cóbreces Juárez, M.L. Durbán Reguera, J.M. Rodríguez-Pardo del Castillo, Riesgo de Longevidad y su aplicación práctica a Solvencia II. VIII International Awards "Julio Castelo Matrán", Fundación Mapfre. Available online: https://www.fundacionmapfre.org/fundacion/es es/publicaciones/diccionario-mapfre-seguros/r/riesgo-de-longevidad.jsp.

[2] M. Ayuso, H. Corrales, M. Guillén, A.M. Perez-Marín, J.L. Rojo, Estadística Actuarial Vida, Universitat de Barcelona Edicions, Barcelona, 2007.

[3] E. Barbi, F. Lagona, M. Marsili, J.W. Vaupel, W.W. Kenneth, The plateau of human mortality: demography of longevity pioneers, Science, 360 (2018), 1459-1461.

[4] B. Benjamin, J.H. Pollard, The analysis of mortality and other actuarial statistics, Institute of Actuaries and the Faculty of Actuaries, Oxford, 1993.

[5] M. Bogoya, C.A. Gómez S., Discrete model of a nonlocal diffusion equation, Rev. Colombiana Mat., 47 (2013), 83-94.

[6] N. Brouhns, M. Denuit, I.V. Keilegom, Bootstrapping the Poisson log-bilinear model for mortality forecasting, Scand. Actuar. J., 3 (2005), 212-224.

[7] T. Caraballo, F. Morillas, J. Valero, On a stochastic nonlocal system with discrete diffusion modeling life tables, Stoch. Dyn., 22 (2022).

[8] A.J.G. Cairns, D. Blake, K. Dowd, Modelling and management of mortality risk: a review.
Scandinavian Actuarial Journal, 2008 (2008), 79-113.

[9] A.J.G. Cairns, D. Blake, K. Dowd, G.D. Coughlan, D. Epstein, M. Khalaf-Allah, Mortality density forecasts: an analysis of six stochastic mortality models, Insurance Math. Econom., 48 (2011), 355-367.

[10] A.J.G. Cairns, D. Blake, K. Dowd, G.D. Coughlan, D. Epstein, A. Ong and I.A. Balevich, Quantitative comparison of stochastic mortality models using data from England and Wales and the United States, N. Am. Actuar. J., 13 (2009), 1-35.

[11] C.L. Chiang, The life table and its applications, Krieger Publishing Company, Malabar, FL, USA, 1984.

[12] J.B. Copas, S. Haberman, Non-parametric graduation using kernel methods, J. Inst. Actuaries, 110 (1983), 135-156.

[13] A.P. Dawid, P. Sebastiani, Coherent dispersion criteria for optimal experimental design, The Annals of Statistics, 27 (1999), 65-81. https://doi.org/10.1214/aos/1018031108.

[14] E. Dodd, J.J. Forster, J. Bijak, P.W.F. Smith, Smoothing mortality data: producing the English life table, 2010-12, J. R. Stat. Soc. Ser. A Stat. Soc., 181 (2016), 1-19.

[15] E. Dodd, J.J. Forster, J. Bijak, P.W.F. Smith, Stochastic modelling and projection of mortality improvements using a hybrid parametric/semi-parametric age-period-cohort model, Scand. Actuar. J., 2021 (2), 134-155. DOI: 10.1080/03461238.2020.1815238.

[16] E. Dolgin, Longevity data hint at no natural limit on lifespan, Nature, 559 (2018), 14-15.

[17] European Parliament and of the Council, Risk management and supervision of insurance companies (Solvency II) - Consolidated text: Directive 2009/138/EC of the European Parliament and of the Council of
25 November 2009 on the taking-up and pursuit of the business of Insurance and Reinsurance. Available online: https://eur-lex.europa.eu/ [on-line: 2020/12/6].

[18] Eurostat, Population projections in the EU. Statistics Explained, 2024, March 4, https://ec.europa.eu/eurostat/statistics-explained/index.php?oldid=596339 [29/04/2025].

[19] J. Gavin, S. Haberman and R. Verrall, Graduation by kernel and adaptive kernel methods with a boundary correction, Transactions of the Society of Actuaries, XLVII (1995), 1-37.

[20] B. Gompertz, On the nature of the function expressive of the law of human mortality and on a new mode of determining the value of life contingencies, Transactions of The Royal Society, 115 (1825), 513-585.

[21] S. Haberman, A. Renshaw, Parametric mortality improvement rate modelling and projecting, Insurance Math. Econom., 50 (2012), 309-333.

[22] S. Haberman, A. Renshaw, Modelling and projecting mortality improvement rates using a cohort perspective, Insurance Math. Econom., 53 (2013), 150-168.

[23] L.M.A. Heligman, J.H. Pollard, The age pattern of mortality, J. Inst. Actuaries, 107 (1980), 49-82.

[24] Human Mortality Database, Series of death rates of Spain, University of California, Berkeley (USA), and Max Planck Institute for Demographic Research (Germany). Available at www.mortality.org or www.humanmortality.de (data downloaded on 2025/04/15).

[25] R.J. Hyndman (with contributions by Heather Booth, Leonie Tickle and John Maindonald), Package demography: forecasting mortality, fertility, migration and population data (R package version 1.22), 2019, https://CRAN.R-project.org/package=demography.

[26] Instituto Nacional de Estadística (INE), Metodología de las estimaciones intercensales de población, https://www.ine.es/inebaseDYN/ecp30282/docs/meto estimaciones pobla.pdf [29/04/2025].

[27] Instituto Nacional de Estadística (INE), Life tables: Methodology.

[28] P.E. Kloeden, E. Platen, H.
Schurz, Numerical Solution of SDE through Computer Experiments, Springer, Berlin, 1994.

[29] R. Lee, L. Carter, Modelling and forecasting US mortality, J. Am. Stat. Assoc., 87 (1992), 659-671.

[30] W. Makeham, On the law of mortality and the construction of annuity tables, Journal of the Institute of Actuaries, 8 (1869), 301-310.

[31] X. Mao, Stochastic Differential Equations and Applications, Woodhead Publishing Limited, Cambridge, UK, 1997.

[32] T. Missov, Gamma-Gompertz life expectancy at birth, Demographic Research, 28 (2013), 259-270.

[33] F.G. Morillas, I.B. Sampere, Using wavelet techniques to approximate the subjacent risk of death, in Modern Mathematics and Mechanics, V.A. Sadovnichiy and M.Z. Zgurovsky eds. (Cham, Switzerland, Springer International Publishing, 2019), Chapter 28, pp. 541-557.

[34] F. Morillas, J. Valero, On a nonlocal discrete diffusion system modeling life tables, RACSAM, 108 (2014), 935-955.

[35] F. Morillas, J. Valero, On a retarded nonlocal ordinary differential system with discrete diffusion modeling life tables, Mathematics, 9 (2021), 220.

[36] R Core Team, R: a language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria, 2020. Available online: https://www.R-project.org/.

[37] A.M. Villegas, V.K. Kaishev, P. Millossovich, StMoMo: An R package for stochastic mortality modeling, J. Stat. Software, 84 (2018), 1-38.
arXiv:2505.20005v1 [math.ST] 26 May 2025

Existence of the solution to the graphical lasso

Jack Storror Carter
Dept. of Economics and Business, Universitat Pompeu Fabra, Spain
Data Science Center, Barcelona School of Economics, Spain

Abstract

The graphical lasso (glasso) is an l1-penalised likelihood estimator for a Gaussian precision matrix. A benefit of the glasso is that it exists even when the sample covariance matrix is not positive definite but only positive semidefinite. This note collects a number of results concerning the existence of the glasso, both when the penalty is applied to all entries of the precision matrix and when the penalty is only applied to the off-diagonals. New proofs are provided for these results which give insight into how the l1 penalty achieves these existence properties. These proofs extend to a much larger class of penalty functions, allowing one to easily determine whether new penalised likelihood estimates exist for positive semidefinite sample covariance.

A common method for sparse estimation of a Gaussian precision (inverse covariance) matrix is an l1-penalised likelihood, often called the graphical lasso (glasso) (Banerjee et al., 2008; Yuan and Lin, 2007; Friedman et al., 2008). While the glasso has some drawbacks (Mazumder and Hastie, 2012; Williams and Rast, 2020; Carter et al., 2024) and other methods can achieve superior performance, it remains popular due to its simple formulation and fast computation and is often used as a benchmark for new methods. A key benefit of the glasso is that it exists even when the sample covariance matrix S is not positive definite, but only positive semidefinite - a case where the maximum likelihood estimate (MLE) does not exist. This allows the glasso to still be used when the sample size is smaller than the number of variables.
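This degeneracy of the sample covariance matrix when the sample size is at most the dimension is easy to reproduce numerically; a minimal sketch (the dimensions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 5                       # fewer samples than variables
X = rng.normal(size=(n, p))
Xc = X - X.mean(axis=0)           # centre, since the mean is unknown
S = Xc.T @ Xc / n                 # sample covariance matrix

eigvals = np.linalg.eigvalsh(S)
n_zero = np.sum(eigvals < 1e-10)  # eigenvalues that are numerically zero
print(n_zero)                     # p - (n - 1) = 3 zero eigenvalues
```

With such an S the log-likelihood has no maximiser, which is exactly the situation the penalised estimators below are designed to handle.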
While this property of the glasso is well established, the literature lacks a single clear reference for both versions of the glasso - with and without the penalty on the diagonal. In fact, the existence property, and how the glasso achieves it, is slightly different for these two versions. When the diagonal penalty is included, the existence of the glasso for any positive semidefinite S is a simple corollary of Banerjee et al. (2008) Theorem 1, which shows that the unique solution to the glasso has bounded eigenvalues. When the diagonal penalty is omitted, the glasso exists for positive semidefinite S with non-zero diagonal, which occurs with probability 1 when using Gaussian data. This was shown by Lauritzen and Zwiernik (2022) Theorem 8.7. However, both proofs use the dual of the optimisation problem and therefore focus on the covariance matrix, rather than the precision matrix. This makes it harder to understand how the glasso achieves these existence properties and to design new penalised likelihoods that also achieve existence, since such likelihoods usually penalise the precision matrix directly due to its correspondence with conditional independence. This paper collects the two existence results for the glasso, providing additional context. New proofs
|
https://arxiv.org/abs/2505.20005v1
|
for these existence results will be provided that do not utilise the dual optimisation problem, but instead show how the objective function behaves when certain eigenvalues are allowed to tend to infinity. The idea of these proofs can be extended to any penalty function that is separable in the entries of the precision matrix. Hence it can easily be determined whether other such penalised likelihood estimates exist for positive semidefinite S.

1 Background and notation

The log-likelihood function for a p x p Gaussian precision matrix Θ = (θ_ij), given a p x p positive semidefinite matrix S = (s_ij), after removing additive and multiplicative constants, and the corresponding MLE are

l(Θ|S) = log(det(Θ)) − tr(SΘ),    Θ̂ = argmax_{Θ ≻ 0} l(Θ|S),

where tr(A) denotes the trace of a matrix A and Θ ≻ 0 refers to the set of p x p positive definite matrices. The glasso subtracts an l1 penalty function with penalty parameter ρ > 0 from the log-likelihood, giving objective function and glasso estimate

G(Θ|S) = l(Θ|S) − ρ Σ_{i,j} |θ_ij|,    Θ̂_G = argmax_{Θ ≻ 0} G(Θ|S).

An alternative version of the glasso only penalises the off-diagonal entries. To distinguish this from the glasso, it will be referred to as the off-diagonal glasso (odglasso), which has objective function and odglasso estimate

G̃(Θ|S) = l(Θ|S) − ρ Σ_{i≠j} |θ_ij|,    Θ̂_G̃ = argmax_{Θ ≻ 0} G̃(Θ|S).

It will be useful to consider these optimisation problems in terms of the eigenvalues and eigenvectors of S and Θ. Because both matrices are symmetric, they are guaranteed to have an orthonormal basis of eigenvectors. Write the eigenvalues of S as λ = (λ_1, ..., λ_p) with corresponding orthonormal eigenvectors V = (v_1, ..., v_p), and the eigenvalues of Θ as σ = (σ_1, ..., σ_p) with corresponding orthonormal eigenvectors W = (w_1, ..., w_p). The jth entry of the eigenvector w_i is written as w_ij.
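For concreteness, the three objective functions can be evaluated directly in code; a minimal numpy sketch (the test matrix, Θ and ρ are arbitrary illustrative choices):

```python
import numpy as np

def log_likelihood(theta, s):
    """l(Theta|S) = log det(Theta) - tr(S Theta)."""
    return np.linalg.slogdet(theta)[1] - np.trace(s @ theta)

def glasso_objective(theta, s, rho):
    """G: l1 penalty on all entries of Theta."""
    return log_likelihood(theta, s) - rho * np.abs(theta).sum()

def odglasso_objective(theta, s, rho):
    """G-tilde: l1 penalty on the off-diagonal entries only."""
    off = np.abs(theta).sum() - np.abs(np.diag(theta)).sum()
    return log_likelihood(theta, s) - rho * off

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
S = A @ A.T / 4                   # a positive (semi)definite test matrix
Theta = np.eye(4)
rho = 0.1

# The penalties are non-negative, so G <= G-tilde <= l pointwise.
assert (glasso_objective(Theta, S, rho)
        <= odglasso_objective(Theta, S, rho)
        <= log_likelihood(Theta, S))
```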
The determinant of a matrix is the product of its eigenvalues, and the trace can be written as tr(SΘ) = Σ_{i,j=1}^p σ_i λ_j (w_i^T v_j)^2. Hence the log-likelihood function and MLE can be rewritten in terms of eigenvalues and eigenvectors as

l(σ, W | λ, V) = Σ_{i=1}^p ( log(σ_i) − σ_i Σ_{j=1}^p λ_j (w_i^T v_j)^2 ),    (σ̂, Ŵ) = argmax_{σ > 0, W ∈ 𝒱} l(σ, W | λ, V),

where 𝒱 is the space of orthonormal bases of R^p. The glasso and odglasso optimisation problems can similarly be rewritten by noting that |θ_jk| = |Σ_{i=1}^p σ_i w_ij w_ik|.

In the optimisation problems, S could be any p x p positive semidefinite matrix. However, it is usually the sample covariance matrix from a p-variate Gaussian i.i.d. sample X_1, ..., X_n ~ N_p(μ, Θ^{-1}) with mean vector μ and covariance matrix Θ^{-1}. When μ is unknown, the sample covariance matrix is S = (1/n) Σ_{i=1}^n (X_i − X̄)(X_i − X̄)^T, where X̄ = (1/n)(X_1 + ... + X_n). When n > p, S is positive definite with probability 1. However, when n ≤ p, S has exactly p − (n − 1) eigenvalues equal to 0 with probability 1 (Mathai et al., 2022, Section 8.3). Hence S is positive semidefinite but not positive definite - for the remainder of the paper this case will be called only positive semidefinite. When μ is known, the sample covariance matrix is instead S = (1/n) Σ_{i=1}^n (X_i − μ)(X_i − μ)^T, which is positive definite with probability 1 when n ≥ p, but is only positive semidefinite when n < p, with exactly p − n eigenvalues equal to 0 with probability 1.

2 Maximum likelihood estimate

We begin by considering the existence of
the MLE for positive definite and positive semidefinite S. While these results hardly need proving, and the existence of the glasso and odglasso for positive definite S easily follows, they provide simple examples of the style of proof that will be used for the glasso and odglasso, and understanding why the MLE does not exist when S is only positive semidefinite helps focus the proofs for the glasso and odglasso.

Proposition 1. The MLE exists for any positive definite S.

Proof. Since the likelihood function is continuous, the existence of the MLE follows if l(Θ|S) → −∞ whenever Θ approaches the boundary of the space of positive definite matrices. Positive definite matrices are characterised by positive eigenvalues σ_1, ..., σ_p > 0 and 𝒱 is closed, so the boundary of the space occurs when σ_i → 0 or σ_i → ∞. Hence it will instead be shown that l(σ, W | λ, V) → −∞ whenever any (potentially more than one) σ_i → 0, ∞ for any W ∈ 𝒱. Because the log-likelihood is separable in the σ_i, each σ_i can be considered separately. S is positive definite so it has strictly positive eigenvalues λ_1, ..., λ_p > 0. Also, for each w_i there must be at least one v_j such that w_i^T v_j ≠ 0, otherwise v_1, ..., v_p, w_i are p + 1 orthogonal vectors of length p. Hence Σ_{j=1}^p λ_j (w_i^T v_j)^2 > 0 and so log(σ_i) − σ_i Σ_{j=1}^p λ_j (w_i^T v_j)^2 → −∞ as σ_i → 0 or σ_i → ∞. It follows that l(σ, W | λ, V) → −∞ whenever any σ_i → 0, ∞.

Corollary 1. The glasso and odglasso estimates exist for any positive definite S.

Proof. Since the penalty functions ρ Σ_{i,j} |θ_ij| and ρ Σ_{i≠j} |θ_ij| are non-negative, it follows that G(σ, W | λ, V) → −∞ and G̃(σ, W | λ, V) → −∞ as any σ_i → 0, ∞.

Proposition 2. The MLE does not exist when S is only positive semidefinite.

Proof. Consider Θ with the same eigenvectors as S, W = V. Then w_i^T v_j is equal to 1 for i = j and 0 otherwise, and so l(σ, W | λ, V) = Σ_{i=1}^p (log(σ_i) − σ_i λ_i). Since S is only positive semidefinite, it has at least one eigenvalue equal to 0, say λ_1 = 0.
Then, keeping σ_2, ..., σ_p > 0 fixed, as σ_1 → ∞, l(σ, W | λ, V) → ∞ and so the MLE does not exist.

This proof shows how the log-likelihood function is unbounded when the eigenvectors of Θ are set equal to those of S. This extends to whenever an eigenvector of Θ is in the null space of S, in which case the trace term does not depend on the corresponding eigenvalue. However, for any eigenvector of Θ not in the null space of S, the trace term is a linear function of the corresponding eigenvalue. It also remains true that l(σ, W | λ, V) → −∞ as σ_i → 0, when the other eigenvalues are fixed. This means that in the subsequent proofs for the existence of the glasso and odglasso, attention need only be paid to eigenvalues σ_i → ∞ corresponding to eigenvectors in the null space of S.

3 Graphical lasso

While the MLE does not exist for only positive semidefinite S, the addition of the l1 penalty ensures that the glasso solution exists for any positive semidefinite S.

Proposition 3. The glasso estimate exists for any positive semidefinite S.

Proof. The penalty function for only the diagonal entries θ_jj is ρ Σ_{j=1}^p |Σ_{i=1}^p σ_i w_ij^2|. Because σ_i w_ij^2 ≥ 0, this is equal to ρ Σ_{i=1}^p Σ_{j=1}^p σ_i w_ij^2. All terms in the full penalty function are non-negative, so removing the off-diagonal penalty terms obtains the upper bound

G(σ, W | λ, V) ≤ Σ_{i=1}^p ( log(σ_i) − σ_i Σ_{j=1}^p ( λ_j (w_i^T v_j)^2 + ρ w_ij^2 ) ).

Since the eigenvectors w_1, ..., w_p are orthonormal, for each i = 1, ..., p there exists a j such that w_ij ≠ 0, and so Σ_{j=1}^p ( λ_j (w_i^T v_j)^2 + ρ w_ij^2 ) > 0. It follows that the upper bound, and therefore G, tends to −∞ as any σ_i → ∞.

This proof shows that the penalty on the diagonal entries alone is enough to ensure the existence of the glasso for any positive semidefinite S. This is because, for any fixed eigenvectors, the penalty on the diagonal is a linear function of all the eigenvalues.

4 Off-diagonal graphical lasso

When the penalty on the diagonal is removed, as in the odglasso, the solution no longer exists for every positive semidefinite S. For certain S, eigenvectors of Θ can be found such that the penalty term does not depend on certain eigenvalues.

Example. Consider

S = [ 0  0 ]
    [ 0  1 ],

which has eigenvalues λ_1 = 0, λ_2 = 1 with corresponding eigenvectors v_1 = (1, 0)^T, v_2 = (0, 1)^T. Then the odglasso objective function is

G̃(σ, W | λ, V) = log(σ_1) + log(σ_2) − σ_1 w_12^2 − σ_2 w_22^2 − 2ρ |σ_1 w_11 w_12 + σ_2 w_21 w_22|.

By taking w_1 = (1, 0)^T, w_2 = (0, 1)^T, G̃ depends on σ_1 only through the log(σ_1) term, and so, for fixed σ_2, G̃(σ, W | λ, V) → ∞ as σ_1 → ∞.

This is of course a very specific example, in which the diagonal of S has an entry equal to 0. In fact, having zeros on the diagonal of S is the only case in which the odglasso does not exist.

Proposition 4. The odglasso estimate exists for positive semidefinite S if and only if the diagonal entries of S are non-zero.

Proof.
Recall that the objective function for odglasso is

G̃(σ, W | λ, V) = Σ_{i=1}^p ( log(σ_i) − σ_i Σ_{j=1}^p λ_j (w_i^T v_j)^2 ) − ρ Σ_{j≠k} | Σ_{i=1}^p σ_i w_ij w_ik |.

First suppose, without loss of generality, that s_11 = 0. Choose Θ to have eigenvector w_1 = (1, 0, ..., 0)^T, which is in the null space of S, so the trace term does not depend on σ_1. The penalty term also does not depend on σ_1 because w_1j w_1k = 0 for all j ≠ k. So, for any fixed σ_2, ..., σ_p and w_2, ..., w_p, G̃(σ, W | λ, V) → ∞ as σ_1 → ∞.

Now suppose that all diagonals of S are non-zero. Let w_1 be in the null space of S. Then w_1 must have at least two non-zero entries, because if w_1 had ith entry equal to 1 and all other entries equal to 0, then S w_1 is non-zero in the ith entry and so w_1 is not in the null space. Hence there exist j ≠ k such that w_1j w_1k ≠ 0, and so σ_1 w_1j w_1k → ±∞ as σ_1 → ∞. It follows that, for any fixed σ_2, ..., σ_p and w_2, ..., w_p, the penalty term tends to −∞ as σ_1 → ∞ at a linear rate, and therefore G̃(σ, W | λ, V) → −∞ as σ_1 → ∞.

If two eigenvalues σ_1, σ_2 → ∞, both corresponding to eigenvectors in the null space of S, it is possible for the sum σ_1 w_1j w_1k + σ_2 w_2j w_2k to remain finite. Specifically, if w_1j w_1k > 0 and w_2j w_2k < 0, taking σ_1 = x/(w_1j w_1k), σ_2 = −x/(w_2j w_2k) results in σ_1 w_1j w_1k + σ_2 w_2j w_2k = 0 even as x → ∞. However, for this to occur for all j ≠ k requires that w_1j w_1k = a w_2j w_2k for some constant a for all j ≠ k. For this relationship to hold, w_1 and w_2 must match in the position of non-zero entries. We have
already seen that both must have at least two non-zero entries. If w_1, w_2 both have exactly two non-zero entries in the same positions then, since w_1, w_2 are in the null space of S, all non-null-space eigenvectors of S must be equal to 0 in these two entries by orthogonality. This would result in S having a diagonal entry equal to 0. Hence w_1, w_2 must have at least three non-zero entries, say the i ≠ j ≠ k entries. Then we have w_1i w_1j = a w_2i w_2j and w_1i w_1k = a w_2i w_2k. Dividing, we get w_1j / w_1k = w_2j / w_2k, and so w_1j = c w_2j, where c = w_1k / w_2k. This holds with the same c for all j. Since w_1, w_2 are unit vectors, this means that c = ±1. In both cases w_1, w_2 are not orthogonal. Hence this situation cannot occur. The same argument extends to when more than two eigenvalues σ_1, ..., σ_l → ∞, and so the penalty function tends to −∞ at a linear rate, meaning G̃(σ, W | λ, V) → −∞.

When S is a Gaussian sample covariance matrix, the diagonal entries are positive with probability 1.

Corollary 2. The odglasso estimate exists with probability 1 when S is a Gaussian sample covariance matrix with unknown μ and n ≥ 2, or with known μ and n ≥ 1.

Proof. For unknown μ, a diagonal entry of S can be written in terms of X_1, ..., X_n as s_jj = (1/n) Σ_{i=1}^n (X_ij − X̄_j)^2, where X_ij is the jth entry of X_i and X̄_j = (1/n) Σ_{i=1}^n X_ij. Hence s_jj = 0 if and only if X_1j = ... = X_nj. Since X_1, ..., X_n are independent Gaussian random vectors, this occurs with probability 0 when n ≥ 2. For known μ, instead s_jj = 0 if and only if X_1j = ... = X_nj = μ_j, which occurs with probability 0 when n ≥ 1.

5 Uniqueness

Each of the objective functions l, G, G̃ is strictly concave. It therefore follows that the solutions to the corresponding optimisation problems are unique, whenever they exist. This gives the following result.

Proposition 5. Whenever they exist, the MLE, glasso estimate and odglasso estimate are unique.

6 Discussion

In this paper we have focused on the l1 penalty function.
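The existence dichotomy established above can be checked numerically on the 2x2 example of Section 4 (ρ is an arbitrary illustrative choice): along the degenerate eigendirection the unpenalised log-likelihood and the odglasso objective grow without bound, while the glasso objective is driven to −∞.

```python
import numpy as np

def loglik(theta, s):
    return np.linalg.slogdet(theta)[1] - np.trace(s @ theta)

S = np.array([[0.0, 0.0],
              [0.0, 1.0]])                  # only positive semidefinite, s_11 = 0
rho = 0.1

vals_l, vals_G, vals_Gt = [], [], []
for sigma1 in [1e1, 1e3, 1e5]:
    Theta = np.diag([sigma1, 1.0])          # eigenvectors of Theta = those of S
    l = loglik(Theta, S)
    G = l - rho * np.abs(Theta).sum()       # glasso: all entries penalised
    off = np.abs(Theta).sum() - np.trace(np.abs(Theta))
    Gt = l - rho * off                      # odglasso: off-diagonals only
    vals_l.append(l); vals_G.append(G); vals_Gt.append(Gt)

assert vals_l[0] < vals_l[1] < vals_l[2]    # l unbounded above (Prop. 2)
assert vals_G[0] > vals_G[1] > vals_G[2]    # G driven to -inf (Prop. 3)
assert vals_Gt[0] < vals_Gt[1] < vals_Gt[2] # odglasso unbounded since s_11 = 0 (Prop. 4)
```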
Since this provides a linear penalty, it is enough to dominate the logarithmic term in the log-likelihood and ensure the existence of the solution. However, these results can be extended to other penalised likelihoods l(Θ|S) − Pen(Θ), where Pen(Θ) = Σ_{i,j=1}^p pen_ij(θ_ij) with pen_ij(θ_ij) non-decreasing in |θ_ij|. Specifically, when Pen is continuous and non-negative (or, more generally, bounded from below), the same results apply as long as pen_ij(θ_ij) → ∞ as |θ_ij| → ∞ at a faster-than-logarithmic rate. This includes, for example, all monomial penalties pen_ij(θ_ij) = ρ|θ_ij|^a with a > 0. Of course, further attention must be paid to the uniqueness of these solutions when the objective function is no longer strictly concave.

On the other hand, when pen_ij is bounded, as is the case for many popular non-convex penalties like the MCP (Zhang, 2010) and SCAD penalty (Fan and Li, 2001; Fan et al., 2009), and penalties approximating the l0, such as the seamless l0 (Dicker et al., 2013) and ATAN (Wang and Zhu, 2016) penalties, the solution does not exist when S is only positive semidefinite. This is also the case for the l0 penalty itself, even if the penalty function is non-continuous. However, a key part of the proof of Proposition 3 is that the diagonal penalty alone is enough to ensure the existence of the solution. Hence, if a
bounded or sub-logarithmic penalty is preferred for the off-diagonals, the solution will still exist for all positive semidefinite S as long as it is paired with a suitably strong penalty on the diagonal. The diagonal penalty could be allowed to depend on the sample size of the data in such a way that it disappears when n > p and existence is already guaranteed. Penalties that diverge at a logarithmic rate, for example pen_ij(θ_ij) = ρ log(1 + θ_ij), require more investigation to determine their existence for only positive semidefinite S. Additional care must also be taken with penalties that are not bounded from below, with pen_ij(θ_ij) → −∞ as θ_ij → 0. This is because the objective function may no longer tend to −∞ as the eigenvalues σ_i → 0. The horseshoe-like penalty (Sagar et al., 2024) provides an interesting case where the penalty is not bounded from below and diverges at a logarithmic rate.

Acknowledgements

This research was supported by the EUTOPIA Science and Innovation Fellowship Programme and funded by the European Union Horizon 2020 programme under the Marie Skłodowska-Curie grant agreement No 945380.

References

Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. The Journal of Machine Learning Research, 9:485-516, 2008.

Jack Storror Carter, David Rossell, and Jim Q Smith. Partial correlation graphical lasso. Scandinavian Journal of Statistics, 51(1):32-63, 2024.

Lee Dicker, Baosheng Huang, and Xihong Lin. Variable selection and estimation with the seamless-l0 penalty. Statistica Sinica, pages 929-962, 2013.

Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348-1360, 2001.

Jianqing Fan, Yang Feng, and Yichao Wu. Network exploration via the adaptive lasso and scad penalties.
The Annals of Applied Statistics, 3(2):521, 2009.

Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.

Steffen Lauritzen and Piotr Zwiernik. Locally associated graphical models and mixed convex exponential families. The Annals of Statistics, 50(5):3009-3038, 2022.

Arak M Mathai, Serge B Provost, and Hans J Haubold. Multivariate Statistical Analysis in the Real and Complex Domains. Springer Nature, 2022.

Rahul Mazumder and Trevor Hastie. The graphical lasso: New insights and alternatives. Electronic Journal of Statistics, 6:2125, 2012.

Ksheera Sagar, Sayantan Banerjee, Jyotishka Datta, and Anindya Bhadra. Precision matrix estimation under the horseshoe-like prior-penalty dual. Electronic Journal of Statistics, 18(1):1-46, 2024.

Yanxin Wang and Li Zhu. Variable selection and parameter estimation with the atan regularization method. Journal of Probability and Statistics, 2016(1):6495417, 2016.

Donald R Williams and Philippe Rast. Back to the basics: Rethinking partial correlation network methodology. British Journal of Mathematical and Statistical Psychology, 73(2):187-212, 2020.

Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19-35, 2007.

Cun-Hui Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894, 2010.
Kernel Ridge Regression with Predicted Feature Inputs and Applications to Factor-Based Nonparametric Regression

Xin Bing*, Xin He†, Chao Wang‡

May 27, 2025

Abstract

Kernel methods, particularly kernel ridge regression (KRR), are time-proven, powerful nonparametric regression techniques known for their rich capacity, analytical simplicity, and computational tractability. The analysis of their predictive performance has received continuous attention for more than two decades. However, in many modern regression problems where the feature inputs used in KRR cannot be directly observed and must instead be inferred from other measurements, the theoretical foundations of KRR remain largely unexplored. In this paper, we introduce a novel approach for analyzing KRR with predicted feature inputs. Our framework is not only essential for handling predicted feature inputs - enabling us to derive risk bounds without imposing any assumptions on the error of the predicted features - but also strengthens existing analyses in the classical setting by allowing arbitrary model misspecification, requiring weaker conditions under the squared loss (particularly allowing both an unbounded response and an unbounded function class), and being flexible enough to accommodate other convex loss functions. We apply our general theory to factor-based nonparametric regression models and establish the minimax optimality of KRR when the feature inputs are predicted using principal component analysis. Our theoretical findings are further corroborated by simulation studies.

Keywords: Reproducing kernel Hilbert space, kernel ridge regression, kernel complexity, nonparametric regression, factor regression model, dimension reduction.
∗Department of Statistical Sciences, University of Toronto, Toronto, Canada. E-mail: xin.bing@utoronto.ca
†School of Statistics and Data Science, Shanghai University of Finance and Economics, Shanghai, China. E-mail: he.xin17@mail.shufe.edu.cn
‡School of Statistics and Data Science, Shanghai University of Finance and Economics, Shanghai, China. E-mail: wang.chao@163.sufe.edu.cn

arXiv:2505.20022v1 [math.ST] 26 May 2025

1 Introduction

Regression is one of the most fundamental problems in statistics and machine learning, where the goal is to construct a prediction function f : \mathcal{Z} \to \mathbb{R} based on n independent pairs (Y_i, Z_i) \in (\mathbb{R}, \mathcal{Z}), with 1 \le i \le n, such that for a new pair (Y, Z) the prediction f(Z) is close to Y. Among existing regression approaches, Kernel Ridge Regression (KRR) is a cornerstone of nonparametric regression due to its flexibility, analytical simplicity, and computational efficiency (Bach and Jordan, 2002; Cucker and Smale, 2002; Kimeldorf and Wahba, 1971; Steinwart and Christmann, 2008), making it widely applicable across diverse fields, including finance, bioinformatics, signal processing, image recognition, natural language processing, and climate modeling. In this paper, we study the prediction of KRR when the regression features Z_i are not directly observed and must be predicted from other measurements.

Let K : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R} be a pre-specified kernel function. In the classical setting where n pairs of (Y_i, Z_i) are observed, KRR searches for the best predictor over the function class \mathcal{H}_K, the Reproducing Kernel Hilbert Space (RKHS) induced by K, by minimizing the penalized least squares loss:

\tilde f = \arg\min_{f \in \mathcal{H}_K} \Big\{ \frac{1}{n} \sum_{i=1}^{n} \big(Y_i - f(Z_i)\big)^2 + \lambda \|f\|_K^2 \Big\}.    (1)

Here \lambda > 0 is some regularization parameter and \| \cdot \|_K is the endowed norm of \mathcal{H}_K. It is well known that the RKHS induced by a universal kernel, such as the Gaussian or Laplacian kernel, is a large
https://arxiv.org/abs/2505.20022v1
function space, as any continuous function can be approximated arbitrarily well by an intermediate function in the RKHS under the infinity norm (Micchelli et al., 2006; Steinwart, 2005). More importantly, the representer theorem (Kimeldorf and Wahba, 1971) ensures that \tilde f in (1) admits a closed-form solution and can be computed efficiently. Theoretical properties of \tilde f have been studied extensively over the past two decades; see, for instance, Blanchard and Mücke (2018); Caponnetto and De Vito (2007); Cui et al. (2021); Fischer and Steinwart (2020); Rudi and Rosasco (2017); Smale and Zhou (2007); Steinwart and Christmann (2008), to name just a few.

However, in many modern regression problems across machine learning, natural language processing, healthcare, and finance, the regression features Z_1, \dots, Z_n in (1) are not directly observed and must be predicted from other measurable features. Consequently, regression such as KRR is performed by regressing the response onto the predicted feature inputs. Below, we present prominent examples, including well-known feature extraction and representation learning techniques in large language models and deep learning.

Example 1 (Dimension reduction via principal component analysis). KRR, or more broadly, nonparametric regression approaches, are particularly appealing due to their flexibility in approximating any form of the regression function when the feature dimension is small and the sample size is large. However, in many modern applications, the number of features is often comparable to, or even greatly exceeds, the sample size. Classical nonparametric approaches, including KRR, are affected by the curse of dimensionality. To address the high-dimensionality issue, one common approach is to first find a low-dimensional representation of the observed high-dimensional features and then regress the response onto the obtained low-dimensional representation.
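As a minimal illustration of the estimator in (1), the representer theorem reduces the infinite-dimensional problem to solving an n × n linear system: \tilde f(z) = \sum_i \alpha_i K(Z_i, z) with \alpha = (K + n\lambda I_n)^{-1} Y. The sketch below uses a Gaussian kernel and a toy target of our own choosing, not anything from the paper:

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit(Z, y, lam):
    """Representer theorem: the minimizer of (1) is f(z) = sum_i alpha_i K(Z_i, z)
    with alpha = (K + n * lam * I)^{-1} y."""
    n = len(y)
    K = gaussian_kernel(Z, Z)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(alpha, Z_train, Z_new):
    return gaussian_kernel(Z_new, Z_train) @ alpha

# toy regression: f*(z) = sin(2z) on [0, 3] with small noise
rng = np.random.default_rng(0)
Z = rng.uniform(0, 3, size=(200, 1))
y = np.sin(2 * Z[:, 0]) + 0.1 * rng.standard_normal(200)
alpha = krr_fit(Z, y, lam=1e-3)
Z_new = rng.uniform(0, 3, size=(500, 1))
mse = np.mean((krr_predict(alpha, Z, Z_new) - np.sin(2 * Z_new[:, 0])) ** 2)
```

The only model-dependent ingredients are the kernel and the regularization level λ; everything else is dense linear algebra of size n.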
Principal Component Analysis (PCA) is the most commonly used method to construct a small number of linear combinations of the original features, known as the Principal Components (PCs) (Hotelling, 1957). The prediction is then made by regressing the response onto the obtained PCs. When this regression step is performed using ordinary least squares, the approach is known as Principal Component Regression (PCR) (Bai and Ng, 2006; Bing et al., 2021; Kelly and Pruitt, 2015; Stock and Watson, 2002), which is fully justified under the following factor-based linear regression model:

Y = Z^\top \beta + \epsilon,    (2)
X = AZ + W.    (3)

Here both X \in \mathbb{R}^p and Y \in \mathbb{R} are observable while Z \in \mathbb{R}^r, with r \ll p, is some unobservable, latent factor. The p \times r loading matrix A is deterministic but unknown, the vector \beta contains the linear coefficients of the factor Z, and both \epsilon \in \mathbb{R} and W \in \mathbb{R}^p represent additive errors. By centering, we can assume Y, X and Z have zero mean. The leading PCs in PCR can be regarded as a linear prediction of the latent factor Z. While the linear factor model in (3) is reasonable and widely adopted in the factor model literature (Anderson, 2003; Bai, 2003; Chamberlain and Rothschild, 1983; Lawley and Maxwell, 1962), PCR often yields unsatisfactory predictive performance due to its reliance on linear regression in the low-dimensional space. As a running example of our general theory, we adopt KRR to capture more complex nonlinear relationships between Y and Z in place of (2), while still using (3) to model
the dependence between X and Z.

Example 2 (Feature extraction via autoencoder). Unlike PCA, an autoencoder is a powerful alternative in machine learning and data science for constructing a nonlinear low-dimensional representation of high-dimensional features (Goodfellow et al., 2016). Specifically, an autoencoder learns an encoding function E that maps the original features X \in \mathcal{X} \subseteq \mathbb{R}^p to its latent representation in \mathcal{Z} \subseteq \mathbb{R}^r while simultaneously learning a decoding function D : \mathcal{Z} \to \mathcal{X} to reconstruct the original features from the encoded representation (Bank et al., 2023). Both the encoder and decoder are typically parameterized using deep neural networks, as E_\theta and D_\vartheta, respectively. Given a metric d : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+, for instance, d(x, x') = \|x - x'\|_2^2 for any x, x' \in \mathcal{X}, the encoder and decoder are learned by solving the optimization problem:

(\hat\theta, \hat\vartheta) := \arg\min_{\theta, \vartheta} \frac{1}{n} \sum_{i=1}^{n} d\big( (D_\vartheta \circ E_\theta)(X_i), X_i \big).

The trained autoencoder provides a method for constructing a low-dimensional representation of the original features, given by E_{\hat\theta}(X), which are then utilized in various downstream supervised learning tasks (Berahmand et al., 2024), with applications found in finance (Bieganowski and Ślepaczuk, 2024), healthcare (Afzali et al., 2024) and climate science (Chen et al., 2023).

Example 3 (Representation learning in large language models). Recent advancements in natural language processing have demonstrated the effectiveness of large language models (LLMs), such as Transformer-based architectures, in learning meaningful representations of text data (Devlin et al., 2019; Radford et al., 2021; Vaswani et al., 2017). These models convert text input into numerical vector representations, commonly referred to as word embeddings or contextualized embeddings, which capture semantic and syntactic properties of words and phrases.
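Returning to Example 2, the autoencoder objective can be sketched with a purely linear encoder and decoder trained by gradient descent; the synthetic data, linear parameterization, and learning rate below are illustrative simplifications (real autoencoders use deep networks for E_\theta and D_\vartheta):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, r = 400, 20, 2
Z_true = rng.standard_normal((n, r))                      # latent structure
X = Z_true @ rng.standard_normal((r, p)) + 0.05 * rng.standard_normal((n, p))
X /= np.sqrt(np.mean(X ** 2))                             # normalize overall scale

# linear encoder E_theta(x) = x @ We and decoder D_vartheta(z) = z @ Wd,
# trained to minimize the reconstruction loss (1/n) sum_i ||(D o E)(X_i) - X_i||^2
We = 0.1 * rng.standard_normal((p, r))
Wd = 0.1 * rng.standard_normal((r, p))
lr = 0.02
for _ in range(3000):
    Zc = X @ We                   # encoded representation E(X)
    Xr = Zc @ Wd                  # reconstruction (D o E)(X)
    G = 2.0 * (Xr - X) / n        # gradient of the mean squared loss w.r.t. Xr
    gWd = Zc.T @ G                # chain rule through the decoder
    gWe = X.T @ (G @ Wd.T)        # chain rule through the encoder
    Wd -= lr * gWd
    We -= lr * gWe

# relative reconstruction error; small here since X is close to rank r
recon_err = np.mean((X @ We @ Wd - X) ** 2) / np.mean(X ** 2)
```

After training, `X @ We` plays the role of E_{\hat\theta}(X), the low-dimensional representation passed to the downstream regression.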
Formally, a large language model learns an embedding function E : \mathcal{X} \to \mathcal{Z}, where \mathcal{X} represents the input text space (e.g., tokenized sequences) and \mathcal{Z} is the latent representation space. This includes traditional static word embeddings (e.g., Word2Vec or GloVe), where the same token is mapped to a unique representation. More recently, modern transformer-based models produce contextualized embeddings based on the surrounding words. Given a sequence of tokens X = (x_1, x_2, \dots, x_T), a trained transformer model computes the embeddings as

E_{\hat\theta}(X) = \big( E_{\hat\theta}(x_1), E_{\hat\theta}(x_2), \dots, E_{\hat\theta}(x_T) \big),

where E_{\hat\theta} denotes the parametrized embedding function, pretrained on large-scale corpora using unsupervised learning objectives such as masked language modeling (Devlin et al., 2019). The embeddings serve as powerful feature representations for various downstream supervised learning tasks.

Let f_1 \circ f_2 denote the composition of two functions f_1 and f_2. In regression applications, the embeddings of the input text together with their corresponding responses are used to train a regression function \hat f : \mathcal{Z} \to \mathbb{R}, with the final predictor given by \hat f \circ E_{\hat\theta} : \mathcal{X} \to \mathbb{R}. This approach is widely adopted in natural language processing tasks such as sentiment analysis with continuous sentiment scores, automated readability assessment, and financial text-based forecasting, where leveraging pretrained embeddings has been shown to significantly improve prediction performance (Brown et al., 2020; Raffel et al., 2020).

Motivated by the aforementioned examples, in this paper we study the KRR in (1) when the feature inputs Z_i need to be predicted from other measurable features. Concretely, consider n i.i.d. training data \mathcal{D} = \{(Y_i,
X_i)\}_{i=1}^n \in (\mathbb{R}, \mathcal{X})^n with \mathcal{X} \subseteq \mathbb{R}^p and the response Y_i generated according to the following nonparametric regression model

Y = f^*(Z) + \epsilon,    (4)

where Z \in \mathcal{Z} \subseteq \mathbb{R}^r is some random, unobservable, latent factor with some r \in \mathbb{N}, f^* : \mathcal{Z} \to \mathbb{R} is the true regression function, and \epsilon is some regression error with zero mean and finite second moment \sigma^2 < \infty. Let \hat g : \mathcal{X} \to \mathcal{Z} be any generic measurable predictor of Z that is constructed independently of \mathcal{D}. The KRR predictor \hat f is obtained as in (1) with Z_i replaced by its predicted value \hat g(X_i) for i \in [n] := \{1, \dots, n\}. The final predictor of KRR using predicted inputs based on \hat g is \hat f \circ \hat g : \mathcal{X} \to \mathbb{R}, with its excess risk defined as

\mathcal{E}(\hat f \circ \hat g) := \mathbb{E}\big(Y - (\hat f \circ \hat g)(X)\big)^2 - \sigma^2 = \mathbb{E}\big(f^*(Z) - (\hat f \circ \hat g)(X)\big)^2,    (5)

where (Y, X) is a new random pair following the same model as \mathcal{D}.

As detailed below, despite existing analyses and results on KRR in the classical setting, little is known about its predictive behavior when feature inputs are subject to prediction errors. In this paper, our goal is to provide a general treatment for analyzing \mathcal{E}(\hat f \circ \hat g) that is valid for any generic predictor \hat g. The new framework we develop for analyzing KRR is not only indispensable for handling predicted feature inputs but also improves existing analyses in the classical setting by allowing arbitrary model misspecification, requiring weaker conditions under the squared loss, and being flexible enough to accommodate other convex loss functions in (1). We start by reviewing existing approaches for analyzing KRR in the classical setting.

1.1 Existing approaches for analyzing KRR

In the classical setting where the features Z_i in (1) are directly observed, existing techniques for bounding the excess risk of \tilde f under model (4) mainly fall into two streams. The first proof strategy relies on the integral operator approach (see, e.g., Blanchard and Mücke (2018); Caponnetto and De Vito (2007); Fischer and Steinwart (2020); Rudi and Rosasco (2017); Smale and Zhou (2007), and references therein).
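To make the setup concrete, the sketch below simulates models (3) and (4), builds ĝ by PCA on an independent sample (so ĝ is independent of 𝒟, as required), fits KRR on the predicted inputs ĝ(X_i), and estimates the excess risk (5) by Monte Carlo. The particular f*, kernel, dimensions, and regularization level are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
p, r, n = 40, 2, 300

def rbf(A, B, gamma=0.5):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

A = rng.standard_normal((p, r))                      # loading matrix in (3)
f_star = lambda z: np.sin(z[:, 0]) + z[:, 1] ** 2    # toy choice of f* in (4)

def sample(m):
    Z = rng.standard_normal((m, r))
    X = Z @ A.T + 0.2 * rng.standard_normal((m, p))  # model (3)
    Y = f_star(Z) + 0.1 * rng.standard_normal(m)     # model (4)
    return Z, X, Y

# g_hat from PCA on an independent sample; scores rescaled to unit variance
_, X0, _ = sample(n)
_, S0, Vt0 = np.linalg.svd(X0, full_matrices=False)
g_hat = lambda X: (X @ Vt0[:r].T) / (S0[:r] / np.sqrt(n))

# KRR (1) with Z_i replaced by the predicted inputs g_hat(X_i)
_, X_tr, Y_tr = sample(n)
Zh_tr = g_hat(X_tr)
lam = 1e-3
alpha = np.linalg.solve(rbf(Zh_tr, Zh_tr) + n * lam * np.eye(n), Y_tr)

# Monte Carlo estimate of the excess risk (5): E[(f*(Z) - (f o g)(X))^2]
Z_te, X_te, _ = sample(2000)
pred = rbf(g_hat(X_te), Zh_tr) @ alpha
excess_risk = np.mean((pred - f_star(Z_te)) ** 2)
baseline = np.var(f_star(Z_te))   # risk of the constant (mean) predictor
```

PCA recovers Z only up to an invertible linear map, but since KRR is trained and evaluated on the same predicted coordinates, the composite \hat f \circ \hat g still targets f* directly, which is exactly the point of the excess risk (5).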
Denote by \rho the probability measure of the feature Z \in \mathcal{Z} and by L^2(\rho) = \{f : \int_{\mathcal{Z}} f^2(z)\, d\rho(z) < \infty\} its induced space of square-integrable functions, equipped with the inner product \langle \cdot, \cdot \rangle_\rho and norm \| \cdot \|_\rho. For any z \in \mathcal{Z}, by writing K_z := K(z, \cdot) \in \mathcal{H}_K, the integral operator L_K : L^2(\rho) \to L^2(\rho) is defined as

L_K : f \mapsto \int_{\mathcal{Z}} K_z f(z)\, d\rho(z).

Let \{\mu_j\}_{j \ge 1} be the eigenvalues of L_K and \{\phi_j\}_{j \ge 1} be the corresponding eigenfunctions, assuming their existence. When the regression function f^* belongs to \mathcal{H}_K and the eigenvalues \mu_j satisfy the following polynomial decay condition

c\, j^{-1/\alpha} \le \mu_j \le C\, j^{-1/\alpha}, for all j \ge 1 and some \alpha \in (0, 1),    (6)

Caponnetto and De Vito (2007) established the minimax optimal rate of \|\tilde f - f^*\|_\rho^2, which depends on the decay rate \alpha. While the integral operator approach allows for sharp rate analysis of KRR, it oftentimes has limitations in handling model misspecification, and the required conditions on \mu_j could be difficult to verify. Without assuming (6), Smale and Zhou (2007) derive upper bounds on \|\tilde f - f^*\|_\rho^2, but their analysis still requires f^* \in \mathcal{H}_K along with some additional smoothness condition on f^*. Allowing f^* \notin \mathcal{H}_K is addressed by Rudi and Rosasco (2017) in the context of learning with random features, where the authors resort to a so-called effective dimension condition \sum_j \mu_j/(\mu_j + \delta) \le C \delta^{-\alpha}, for all \delta > 0 and some
\alpha \in [0, 1], which is related to but weaker than (6). Alternatively, Fischer and Steinwart (2020); Zhang et al. (2023) handle f^* \notin \mathcal{H}_K by relying on the smoothness condition f^* = L_K^r g^* for some g^* \in L^2(\rho) and some r \in (0, 1/2], as well as the upper bound requirement in condition (6).

The other proof strategy for analyzing KRR adopts the empirical risk minimization perspective and employs probabilistic tools from empirical process theory to establish excess risk bounds. See, e.g., Bartlett et al. (2005); Mendelson and Neeman (2010); Steinwart et al. (2009); more recently, Duan et al. (2024); Ma et al. (2023), and references therein. Inherited from classical learning theory, this proof strategy has the potential to deal with model misspecification and can be easily applied to KRR in (1) in which the squared loss is replaced by other convex loss functions (Eberts and Steinwart, 2013). However, existing works typically assume boundedness conditions on the response variable and the function class, or restrictive conditions on the eigen-decomposition of L_K. For instance, the analyses in Duan et al. (2024); Ma et al. (2023); Mendelson and Neeman (2010) require the eigenfunctions \{\phi_j\}_{j \ge 1} of L_K to be uniformly bounded, a useful condition for obtaining an improved \| \cdot \|_\infty bound but one that is not always satisfied even for the most popular kernel functions (Zhou, 2002). On the other hand, the authors of Bartlett et al. (2005) derive prediction risk bounds for general empirical risk minimization based on local Rademacher complexity, and apply them to analyze \tilde f_{bd}, which is computed from (1) with \lambda = 0 but constrained to the unit ball B_{\mathcal{H}} := \{f : \|f\|_K \le 1\}. A key step in their proof is connecting the local Rademacher complexity of B_{\mathcal{H}} with the kernel complexity function

R(\delta) = \Big( \frac{1}{n} \sum_{j=1}^{\infty} \min\{\delta, \mu_j\} \Big)^{1/2}, \quad \forall \delta \ge 0,

using the result proved in Mendelson (2002).
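The kernel complexity function and its fixed point δ_n (the solution of R(δ_n) = δ_n, which drives the risk bounds discussed below) are straightforward to evaluate numerically. Under the polynomial decay (6) with μ_j = j^{-1/α}, the fixed point is known to scale as n^{-1/(1+α)}; this sketch checks that scaling (the eigenvalue truncation and bisection bracket are implementation choices of ours):

```python
import numpy as np

def R(delta, mu, n):
    """Kernel complexity R(delta) = ((1/n) * sum_j min(delta, mu_j)) ** 0.5."""
    return np.sqrt(np.minimum(delta, mu).sum() / n)

def fixed_point(mu, n, lo=1e-12, hi=10.0, iters=200):
    """Bisection for delta_n with R(delta_n) = delta_n; R(delta)/delta decreases."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if R(mid, mu, n) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = 0.5
mu = np.arange(1, 200001) ** (-1.0 / alpha)    # polynomial decay as in (6)

# delta_n should scale like n^{-1/(1+alpha)} = n^{-2/3} here
d1 = fixed_point(mu, 1000)
d2 = fixed_point(mu, 100000)
ratio = d1 / d2                                # expect roughly 100 ** (2/3)
```

Increasing n by a factor of 100 shrinks the fixed point by roughly 100^{2/3} ≈ 21.5, matching the n^{-1/(1+α)} rate.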
The kernel complexity function depends not only on \mathcal{H}_K but also on the probability measure \rho of Z through the integral operator L_K, and is commonly used to characterize the complexity of \mathcal{H}_K. Indeed, it is closely related to the effective dimension (Caponnetto and De Vito, 2007; Rudi and Rosasco, 2017), covering numbers (Cucker and Smale, 2002), and related capacity measures (Steinwart and Christmann, 2008). Let \delta_n > 0 be the fixed point of R(\delta), that is, the point such that R(\delta_n) = \delta_n. The risk bound for \tilde f_{bd} in Bartlett et al. (2005) reads

\mathbb{P}\Big( \|\tilde f_{bd} - f^*\|_\rho^2 \lesssim \delta_n + \frac{t}{n} \Big) \ge 1 - e^{-t}, \quad \forall t \ge 0.    (7)

However, the analysis of Bartlett et al. (2005) requires the response variable Y and \mathcal{H}_K to be bounded, as well as f^* \in \mathcal{H}_K. The case f^* \notin \mathcal{H}_K is considered in Steinwart et al. (2009), where the authors derive an oracle inequality for the prediction risk of a truncated version of the KRR predictor \tilde f. But their analysis requires Y to be bounded, condition (6), and the approximation error to satisfy

\inf_{f \in \mathcal{H}_K} \big\{ \|f^* - f\|_\rho^2 + \lambda \|f\|_K^2 \big\} \le C \lambda^\beta for some \beta \in (0, 1] and \lambda > 0.

The last requirement turns out to be related to the aforementioned smoothness condition f^* = L_K^r g^* with r \in (0, 1/2].

Summarizing, even in the classical setting where the feature inputs are observed, existing analyses of KRR require either that the true regression function belongs to the specified RKHS, that both the RKHS and the response Y are bounded, or that certain conditions on the
eigenvalues or eigenfunctions of the RKHS hold. As explained in the next section, our new framework for analyzing KRR removes such restrictions under the squared loss in (1), while remaining applicable to the analysis of general convex loss functions.

More substantially, when the feature inputs Z_i in (1) must be predicted by \hat g(X_i), neither of the above two proof strategies can be applied. This is because both the integral operator L_K and the kernel complexity function R(\delta), defined via the eigenvalues of L_K, depend on the probability measure \rho of the latent feature Z. When regressing Y onto the predicted feature \hat g(X), following either strategy would lead to a different integral operator L_{x,K} and a different kernel complexity function, both of which depend on the probability measure \rho_x of \hat g(X). Let \{\mu_{x,j}\}_{j \ge 1} be the eigenvalues of L_{x,K}. The new kernel complexity function is

R_x(\delta) = \Big( \frac{1}{n} \sum_{j=1}^{\infty} \min\{\delta, \mu_{x,j}\} \Big)^{1/2}, \quad \forall \delta \ge 0.

Establishing a direct relationship between either R_x(\delta) and R(\delta), or between \{\mu_j\}_{j \ge 1} and \{\mu_{x,j}\}_{j \ge 1}, however, is generally intractable without imposing strong assumptions on the relationship between \rho_x and \rho. Handling predicted feature inputs thus requires a new proof strategy.

1.2 Our contributions

We summarize our main contributions in this section.

1.2.1 Non-asymptotic excess risk bounds of KRR with predicted feature inputs

Our first contribution is to establish new non-asymptotic upper bounds on the excess risk of the KRR predictor \hat f \circ \hat g. In particular, our risk bounds are valid for any generic predictor \hat g, without imposing any assumptions on the proximity of \hat g(X) to Z. Moreover, our results allow arbitrary model misspecification and do not require either the response or the RKHS to be bounded. To allow for model misspecification, we assume in Assumption 1 of Section 3.1 only the existence of f_{\mathcal{H}} \in \mathcal{H}_K, which is the L^2(\rho) projection of f^* onto \mathcal{H}_K.
Using f_{\mathcal{H}}, the excess risk of \hat f \circ \hat g is decomposed in (19) of Section 3.1 as

\mathcal{E}(\hat f \circ \hat g) \lesssim \mathbb{E}[\ell_{\hat f \circ \hat g}(Y, X)] + \mathbb{E}\Delta_{\hat g} + \|f_{\mathcal{H}} - f^*\|_\rho^2,    (8)

where \|f_{\mathcal{H}} - f^*\|_\rho^2 represents the approximation error, \Delta_{\hat g} := \|K_Z - K_{\hat g(X)}\|_K^2 reflects the error of predicting Z by \hat g(X) through the kernel function K, and

\ell_{\hat f \circ \hat g}(y, x) := \big(y - (\hat f \circ \hat g)(x)\big)^2 - \big(y - (f_{\mathcal{H}} \circ \hat g)(x)\big)^2

denotes the squared loss of \hat f \circ \hat g relative to f_{\mathcal{H}} \circ \hat g. Analyzing the term \mathbb{E}[\ell_{\hat f \circ \hat g}(Y, X)] in (8) turns out to be the main technical challenge, and its bound also depends on both \mathbb{E}\Delta_{\hat g} and \|f_{\mathcal{H}} - f^*\|_\rho^2. In Theorem 2 of Section 3.2, we state our main result:

Theorem 1 (Informal). For any \eta \in (0, 1) and a suitable choice of \lambda in (1), with probability 1 - \eta, one has

\mathcal{E}(\hat f \circ \hat g) \lesssim \delta_n \log(1/\eta) + \mathbb{E}\Delta_{\hat g} + \|f_{\mathcal{H}} - f^*\|_\rho^2 + \frac{\log(1/\eta)}{n}.    (9)

Theorem 1 holds for any predictor \hat g of Z that is constructed independently of \mathcal{D} and any f^* \in L^2(\rho). In particular, it does not impose any restrictions on \mathbb{E}\Delta_{\hat g}, the prediction error for Z, nor on the approximation error \|f_{\mathcal{H}} - f^*\|_\rho^2. Regardless of their magnitudes, both terms appear additively in the risk bound, a particularly surprising result for \mathbb{E}\Delta_{\hat g}. This is a new result, to the best of our knowledge. Even in the classical setting, the rate in (9) with \mathbb{E}\Delta_{\hat g} = 0 generalizes the existing result in (7) by allowing arbitrary model misspecification, as well as unboundedness of both the response and the RKHS. Moreover, even when the model is correctly specified, our analysis achieves the same optimal rate without imposing any
regularity or decay conditions on \mu_j, such as those in (6), in contrast to existing integral operator approaches, for instance, Caponnetto and De Vito (2007); Fischer and Steinwart (2020); Rudi and Rosasco (2017); Steinwart et al. (2009).

In Section 3.2.1 we further derive a more transparent expression for \mathbb{E}\Delta_{\hat g}. For kernel functions K that satisfy a Lipschitz property (see Assumption 5), \mathbb{E}\Delta_{\hat g} can be simplified to \mathbb{E}\|\hat g(X) - Z\|_2^2, the L^2 prediction error of \hat g(X). To discuss the order of \delta_n in (9), we present in Section 3.2.2 its slow rate in Corollary 1 and its fast rates under three common classes of kernel functions: linear kernels in Corollary 2, kernels with polynomially decaying eigenvalues in Corollary 3, and kernels with exponentially decaying eigenvalues in Corollary 4. Our new risk bounds generalize existing results in all cases by characterizing the dependence on both the approximation error and the error in predicting the feature inputs. In Section 5, we further extend our analysis to other convex loss functions in (1) that satisfy some classical smoothness and strong convexity conditions.

1.2.2 A new framework for analyzing KRR with predicted feature inputs

Along with our new rate results comes a novel approach to analyzing KRR with predicted inputs. The framework we develop offers two key contributions: (1) it improves existing analyses of KRR in the classical setting by accommodating arbitrary model misspecification and allowing both an unbounded response and an unbounded RKHS; (2) it is essential for handling predicted feature inputs, enabling us to derive risk bounds without imposing any assumptions on the proximity of \hat g(X) to Z. To accomplish the first point, we adopt a hybrid approach that combines the integral operator method with the local Rademacher complexity based empirical process theory from Bartlett et al. (2005), along with a careful reduction argument.
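The quantity 𝔼Δ_ĝ from the decomposition above can be evaluated directly through the reproducing property: Δ_ĝ = ‖K_Z − K_{ĝ(X)}‖²_K = K(Z, Z) − 2K(Z, ĝ(X)) + K(ĝ(X), ĝ(X)). The sketch below uses a Gaussian kernel (a Lipschitz kernel in the sense just discussed) and a toy additive prediction error, both our own assumptions, to compare 𝔼Δ_ĝ with the L² prediction error 𝔼‖ĝ(X) − Z‖²₂:

```python
import numpy as np

def rbf(z, zp, gamma=1.0):
    return np.exp(-gamma * np.sum((np.asarray(z) - np.asarray(zp)) ** 2))

def delta_g(z, z_hat, gamma=1.0):
    """||K_z - K_{z_hat}||_K^2 = K(z,z) - 2 K(z, z_hat) + K(z_hat, z_hat)."""
    return rbf(z, z, gamma) - 2.0 * rbf(z, z_hat, gamma) + rbf(z_hat, z_hat, gamma)

# Monte Carlo: g_hat(X) = Z + small additive error (a toy predictor)
rng = np.random.default_rng(4)
Z = rng.standard_normal((5000, 2))
Zh = Z + 0.1 * rng.standard_normal((5000, 2))
e_delta = float(np.mean([delta_g(z, zh) for z, zh in zip(Z, Zh)]))
l2_err = float(np.mean(np.sum((Zh - Z) ** 2, axis=1)))

# For the Gaussian kernel, delta_g = 2 * (1 - exp(-gamma * ||z - z_hat||^2)),
# hence E[delta_g] <= 2 * gamma * E||g_hat(X) - Z||^2: the two errors match
# up to constants, consistent with the Lipschitz simplification above.
```

For small prediction errors the two quantities agree to first order, which is the content of the simplification 𝔼Δ_ĝ ≍ 𝔼‖ĝ(X) − Z‖²₂.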
This is necessary because the integral operator approach cannot easily handle model misspecification, while the empirical process theory requires both a bounded response and a bounded RKHS. Specifically, as seen in the decomposition (8), the key is to bound \mathbb{E}[\ell_{\hat f \circ \hat g}(Y, X)] from above. Since \sup_{f \in \mathcal{H}_K} \|f\|_K could be unbounded, we first apply a reduction argument, showing that bounding \mathbb{E}[\ell_{\hat f \circ \hat g}(Y, X)] can be reduced to controlling the empirical process

\sup_{f \in \mathcal{F}_b} \Big\{ \mathbb{E}[\ell_{f \circ \hat g}(Y, X)] - 2\, \mathbb{E}_n[\ell_{f \circ \hat g}(Y, X)] \Big\},    (10)

where \mathcal{F}_b is a bounded subset of \mathcal{H}_K, given by (37) of Section 3.4. We use \mathbb{E}_n in this paper to denote the expectation with respect to the empirical measure of n i.i.d. samples. To further address the unboundedness of Y, we need to bound the cross-term \mathbb{E}_n[\epsilon (f \circ \hat g)(X) - \epsilon (f_{\mathcal{H}} \circ \hat g)(X)] uniformly over f \in \mathcal{F}_b. To accomplish this, we employ both the empirical process theory based on local Rademacher complexity from Bartlett et al. (2005) and the integral operator approach, as detailed in Appendix B.2.3; see also Lemma 7. Our analysis only requires moment conditions on the regression error \epsilon. Finally, the local Rademacher complexity technique from Bartlett et al. (2005) is applied once again to control the remaining bounded components in the empirical process (10). Due to our new hybrid approach, the analysis can be easily applied to learning algorithms in (1) in which other convex loss functions are
used in place of the squared loss, as demonstrated in Section 5.

Another fundamental difficulty in our analysis arises from handling predicted feature inputs. As noted at the end of Section 1.1, the primary challenge lies in establishing a precise connection between the local Rademacher complexity and the kernel complexity function R(\delta) in the presence of errors in predicting the features. The existing analysis in Mendelson (2002) only establishes a connection to R_x(\delta), which depends on the probability measure \rho_x of \hat g(X), and it is hopeless to relate R_x(\delta) with R(\delta) without imposing restrictive assumptions on the closeness of \rho_x and \rho. To circumvent this issue, rather than working with the population-level local Rademacher complexity and R_x(\delta), both of which rely on \rho_x, we instead focus on their empirical counterparts. The first step is to relate the local Rademacher complexity to its empirical counterpart, where we borrow existing concentration inequalities for local Rademacher complexities from Bartlett et al. (2005); Boucheron et al. (2000, 2003). By adapting the argument in Bartlett et al. (2005, Lemma 6.6), we further link the empirical local Rademacher complexity to \hat R_x(\delta), the empirical counterpart of R_x(\delta). The next step is to connect \hat R_x(\delta) to \hat R(\delta), with the latter being the empirical counterpart of R(\delta). This is the most challenging step, as \hat R_x(\delta) and \hat R(\delta) depend, respectively, on the empirical integral operators associated with \hat g(X_i) and Z_i for i \in [n]. Our proof reveals that this requires controlling the spectral difference between two n \times n kernel matrices with entries K(\hat g(X_i), \hat g(X_j)) and K(Z_i, Z_j) for i, j \in [n]. As accomplished in Lemmas 11 and 12 of Appendix B.3.3, this is highly non-trivial, as we do not impose any requirements on the proximity of \hat g(X_i) to Z_i. Finally, we establish sharp concentration inequalities between \hat R(\delta) and its population-level counterpart R(\delta) to close the loop of relating the local Rademacher complexity to R(\delta).
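The spectral comparison at the heart of this step can be illustrated directly: build the two n × n kernel matrices from Z_i and from ĝ(X_i) and compare their spectra. By Weyl's inequality, each eigenvalue moves by at most the operator-norm difference of the matrices (the additive-error model for ĝ below is a toy assumption, chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 200, 2
Z = rng.standard_normal((n, r))
Zh = Z + 0.1 * rng.standard_normal((n, r))    # predicted inputs g_hat(X_i)

def gram(A, gamma=1.0):
    sq = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K_z, K_zh = gram(Z), gram(Zh)                 # entries K(Z_i, Z_j) and K(Zh_i, Zh_j)
op_diff = np.linalg.norm(K_z - K_zh, ord=2)   # operator-norm (spectral) difference

eig_z = np.sort(np.linalg.eigvalsh(K_z))[::-1]
eig_zh = np.sort(np.linalg.eigvalsh(K_zh))[::-1]
max_eig_gap = np.max(np.abs(eig_z - eig_zh))  # Weyl: bounded by op_diff
```

Since the empirical complexities \hat R(\delta) and \hat R_x(\delta) are functionals of these eigenvalues, controlling the operator-norm gap between the two kernel matrices controls the gap between the two complexity functions.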
Completing the loop requires a series of technical lemmas, collected in Lemmas 9, 10, 11, 12, and 13 of Appendices B.3.1 to B.3.4. Developing this proof strategy is part of our contribution, and we provide a more detailed discussion in Section 3.4.3. For illustration, Figure 1 below depicts how we relate the local Rademacher complexity to the kernel complexity R(\delta), represented by the solid arrow.

1.2.3 Application to factor-based nonparametric regression models

Our third contribution is to provide a complete analysis of KRR with feature inputs predicted by PCA under the factor-based nonparametric regression models (3) and (4). As an application of our general theory, together with the existing literature on high-dimensional factor models (for instance, Bai (2003); Fan et al. (2013)), we state in Corollary 5 of Section 4.2 an explicit upper bound on the excess risk of KRR with inputs predicted by PCA. Furthermore, in Theorem 3 of Section 4.2, we establish a matching minimax lower bound on the excess risk under models (3) and (4), thereby demonstrating the minimax optimality of KRR using the leading PCs.

Prediction consistency of Principal Component Regression (PCR) has been established in Stock and Watson (2002) under the factor-based linear regression models (2) and (3). Explicit rates of the excess
risk are later given in Bing et al. (2021), where a general linear predictor of X_i, including PCA as a particular instance, is used to predict Z_i.

Figure 1: Proof strategy of relating local Rademacher complexity to kernel complexity R(\delta).

In a more recent work (Fan and Gu, 2024), the authors investigate the predictive performance of fitting neural networks between the response and the retained PCs, proving minimax optimal risk bounds under the factor model (3) when the regression function belongs to a hierarchical composition of functions in the Hölder class. Under models (3) and (4) with f^* belonging to some RKHS, our results in Section 4 provide the first minimax optimal risk bound for using KRR with the leading PCs. Moreover, our theory extends beyond the factor model (3), as it applies to any predictor \hat g of Z and accommodates arbitrary dependence between X and Z.

This paper is organized as follows: we introduce the learning algorithm of KRR with predicted inputs in Section 2. Theoretical results for analyzing KRR with predicted inputs are stated in Section 3. The risk decomposition is given in Section 3.1, while the risk bounds along with the main assumptions are stated in Section 3.2. Risk bounds for specific kernels are given in Section 3.3, while in Section 3.4 we explain the proof sketch and highlight the main technical difficulties. In Section 4 we apply our general result to factor-based nonparametric regression models. In Section 5 we extend our analysis and derive risk bounds for general convex loss functions. Simulation studies are stated in Appendix A, and all proofs are deferred to the Appendix.

Notation. For any a, b \in \mathbb{R}, we write a \wedge b = \min\{a, b\} and a \vee b = \max\{a, b\}. For any integer d, we let [d] = \{1, \dots, d\}. For any symmetric, positive semi-definite matrix Q \in \mathbb{R}^{d \times d}, we use \lambda_1(Q) \ge \lambda_2(Q) \ge \dots \ge \lambda_d(Q) to denote its eigenvalues. For any matrix A, we use \|A\|_{op} and \|A\|_F = (\sum_{i,j} A_{ij}^2)^{1/2} to denote its operator norm and Frobenius norm, respectively.
We use I_d to denote the d \times d identity matrix. For d_1 \ge d_2, we write \mathbb{O}_{d_1 \times d_2} for the set of d_1 \times d_2 matrices with orthonormal columns. The inner product and the endowed norm in the Euclidean space are denoted by \langle \cdot, \cdot \rangle_2 and \| \cdot \|_2, respectively. For any function f : \mathcal{Z} \to \mathbb{R}, \|f\|_\infty = \sup_{z \in \mathcal{Z}} |f(z)|. For any two sequences a_n and b_n, we write a_n \lesssim b_n if there exists some constant C such that a_n \le C b_n for all n. We write a_n \asymp b_n if a_n \lesssim b_n and b_n \lesssim a_n. In this paper, we use c, c', C and C' to denote constants whose values may vary from line to line unless otherwise stated.

2 Kernel ridge regression with predicted feature inputs

In this section, we state the learning algorithm for Kernel Ridge Regression (KRR) with predicted feature inputs. Recall that \mathcal{D} = \{(Y_i, X_i)\}_{i=1}^n \in (\mathbb{R}, \mathcal{X})^n contains the n training data. Let \hat g : \mathcal{X} \to \mathcal{Z} be any measurable function that is used to predict the features Z_i \in \mathcal{Z}. For instance, \hat g could be obtained from one of the feature extraction or representation learning approaches mentioned in the Introduction. The predicted feature inputs are \hat g(X_i) for all i \in [n]. Let K : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R} be some pre-specified kernel function and denote by \mathcal{H}_K the Reproducing Kernel Hilbert Space (RKHS) induced by K. Write the