affine varieties. Let $\phi : V \to W$ be a surjective morphism, and let $f \in \mathbb{C}[W]$ be a regular function. If the differential $d\phi_z : T_z(V) \to T_p(W)$ is surjective, then $z \in V$ is a critical point of $f' = f \circ \phi$ if and only if $p = \phi(z)$ is a critical point of $f$.

Proof. The differential $d\phi_z$ is, in coordinates, equal to the Jacobian of $\phi$ restricted to $T_z(V)$. By the chain rule, the gradient of $f'$ equals $\nabla_z(f') = \nabla_p(f) \cdot \mathrm{Jac}_z\phi$, where $\nabla_p(f)$ is a row vector. Suppose $z \in V$ is a critical point of $f'$. This means that $\nabla_z(f') \in T_z(V)^\perp$, i.e., $\nabla_z(f') \cdot v = 0$ for all $v \in T_z(V)$. Equivalently, $\nabla_p(f) \cdot \mathrm{Jac}_z\phi \cdot v = 0$ for all $v \in T_z(V)$. This implies that $\nabla_p(f)$ is orthogonal to the subspace of the tangent space $T_p(W)$ that is the image of $T_z(V)$ under the differential $d\phi_z$. Now, if $d\phi_z$ is surjective, then $\nabla_p(f)$ is in $T_p(W)^\perp$, and hence $p$ is a critical point of $f$. For the converse, pick a critical point $p \in W$, so $\nabla_p(f)$ is in $T_p(W)^\perp$. Since $\phi$ is surjective, let $z$ be any point in the fiber $\phi^{-1}(p)$. By the surjectivity of the differential $d\phi_z$, the image of $T_z(V)$ under the differential is $T_p(W)$, and since $\nabla_z(f') \cdot v = \nabla_p(f) \cdot \mathrm{Jac}_z\phi \cdot v = 0$ for all $v \in T_z(V)$, we conclude that $z$ is a critical point of $f'$ for every $z \in \phi^{-1}(p)$.

Theorem 3.3. The degree of the multi-eigenvector problem (9) over the projection Grassmannian $\mathrm{pGr}(k,n)$ is $\binom{n}{k}$. The critical points are the $k$-dimensional subspaces spanned by $k$ eigenvectors of $A$.

Proof. Consider the parametrization $\Phi$ from the Stiefel variety $V_{k,n}$ to the projection Grassmannian which sends $Z \mapsto ZZ^T$. This is a surjective morphism between the two smooth varieties (see the proof of Theorem 5.2 in [5] for the smoothness of the projection Grassmannian), where an entire orbit in Theorem 1.1 is sent to a single projection matrix. Now we prove that the differential $d\Phi_Z$ is surjective. Since $\mathrm{O}(n)$ acts transitively on $V_{k,n}$, we consider the tangent space $T_{[e_1 \cdots e_k]}(V_{k,n})$. We note that the columns of $\mathrm{Jac}(Z)$ as in (5) are indexed by $z_{ij}$ for $i = 1, \ldots, n$ and $j = 1, \ldots, k$. We see that the standard unit vectors $e_{ij}$ for $i = k+1, \ldots, n$ and $j = 1, \ldots, k$ are in the kernel of $\mathrm{Jac}([e_1 \cdots e_k])$.

We next compute the Jacobian of $\Phi$. For this, we order the $kn$ columns of $\mathrm{Jac}_Z\Phi$ by the entries of the rows of $Z$. It will be more convenient to look at blocks of entries in each row corresponding to the rows of $Z$, which we denote by $\tilde{Z}_1, \ldots, \tilde{Z}_n$. The rows of $\mathrm{Jac}_Z\Phi$ correspond to the $\binom{n+1}{2}$ entries of the symmetric matrix $ZZ^T$. We order the rows by first the diagonal entries of this matrix, then the first superdiagonal, followed by the second superdiagonal, etc. Then the first $n$ rows of $\mathrm{Jac}_Z\Phi$ are $\mathrm{diag}(2\tilde{Z}_1, \ldots, 2\tilde{Z}_n)$. The next $n-1$ rows are of the form $(0 \cdots 0\ \tilde{Z}_{i+1}\ \tilde{Z}_i\ 0 \cdots 0)$ for $i = 1, \ldots, n-1$, where $\tilde{Z}_i$ appears at block $i+1$. The following $n-2$ rows are of the form $(0 \cdots 0\ \tilde{Z}_{i+2}\ 0\ \tilde{Z}_i\ 0 \cdots 0)$ for $i = 1, \ldots, n-2$, where $\tilde{Z}_i$ appears at block $i+2$, etc. The last row is $(\tilde{Z}_n\ 0 \cdots 0\ \tilde{Z}_1)$. Evaluating this Jacobian at $[e_1 \cdots e_k]$ produces $k(n-k)$ distinct
https://arxiv.org/abs/2505.15969v1
standard unit vectors of $\mathbb{C}^{kn}$ in the last $k(n-k)$ columns indexed by the entries of $\tilde{Z}_{k+1}, \ldots, \tilde{Z}_n$. Hence, multiplying $\mathrm{Jac}_Z\Phi$ with the vectors $e_{ij}$ for $i = k+1, \ldots, n$ and $j = 1, \ldots, k$ that are tangent vectors to $V_{k,n}$ at $Z$ gives $k(n-k)$ linearly independent vectors in the tangent space to $\mathrm{pGr}(k,n)$ at $ZZ^T$. Since the dimension of this smooth Grassmannian is $k(n-k)$, we conclude that $d\Phi_Z$ is surjective. We finish the proof by applying Lemma 3.2.

The trace of the objective function is a generic linear function in the entries of $A$:
$$\mathrm{trace}(AP) = \sum_i a_{ii}p_{ii} + 2\sum_{i<j} a_{ij}p_{ij}.$$
Thus solving the multi-eigenvector problem on the projection Grassmannian is equivalent to solving a generic linear optimization problem on the same variety. The number of critical points of optimizing a generic linear form on a variety is called the linear optimization degree (LO degree) of the variety [11].

Corollary 3.4. The LO degree of $\mathrm{pGr}(k,n)$ is $\binom{n}{k}$.

3.3 Critical points in isospectral coordinates

We now present the most general formulation of the multi-eigenvector problem. While the Stiefel and projection formulations are optimization problems over the Grassmannian, writing this problem in isospectral coordinates allows us to generalize to an optimization problem over a flag variety $\mathrm{Fl}(k;n) = \mathrm{Fl}(k_1, \ldots, k_r; n)$. We fix $c_1, \ldots, c_n \in \mathbb{C}$ with $c_1 = \cdots = c_{k_1}$, $c_{k_1+1} = \cdots = c_{k_2}$, $\ldots$, $c_{k_r+1} = \cdots = c_n$. We let $X_0 = \mathrm{diag}(c_1, \ldots, c_n)$. With this, we introduce the following linear optimization problem over the isospectral model $\mathrm{Fl}_c(k;n)$:
$$\min/\max_{S \in \mathrm{Fl}_c(k;n)} \mathrm{trace}(AS). \qquad (10)$$
If the $c_i$ are distinct, then the optimization problem is over the complete flag variety and, as we will see, it has $n!$ critical points. On the other hand, if $c_1 = \cdots = c_k = 1$ and $c_{k+1} = \cdots = c_n = 0$, then $S = QX_0Q^T$ is a point in $\mathrm{pGr}(k,n)$ and we recover the optimization problem (9). The following theorem interpolates between these extreme cases.

Theorem 3.5.
The linear optimization problem (10) over the isospectral flag variety $\mathrm{Fl}_c(k;n)$ has $\binom{n}{k_1,\, k_2-k_1,\, \ldots,\, n-k_r}$ critical points.

We first prove the following, which is an analog of Theorem 1.1.

Theorem 3.6. Let $A$ be a generic real symmetric $n \times n$ matrix and $X_0 = \mathrm{diag}(c_1, \ldots, c_n)$ with $c_{k_j+1} = \cdots = c_{k_{j+1}}$ for $j = 0, \ldots, r$, where $k_0 = 0$ and $k_{r+1} = n$. The algebraic set of complex critical points of the optimization problem
$$\min/\max_{Q^TQ = \mathrm{Id}_n} \mathrm{trace}(AQX_0Q^T) \qquad (11)$$
is equal to
$$\bigsqcup_{\sigma \in S_n/(S_{k_1} \times S_{k_2-k_1} \times \cdots \times S_{n-k_r})} \{U(\sigma \cdot R) : R \in \mathrm{O}(k_1) \times \mathrm{O}(k_2-k_1) \times \cdots \times \mathrm{O}(n-k_r)\} \qquad (12)$$
where $U = [u_1\, u_2 \cdots u_n]$ for $u_1, \ldots, u_n$ an orthonormal eigenbasis of $A$, and $\sigma$ acts by permuting the rows of $R$. This algebraic set is a disjoint union of $\binom{n}{k_1,\, k_2-k_1,\, \ldots,\, n-k_r}$ varieties isomorphic to $\mathrm{O}(k_1) \times \mathrm{O}(k_2-k_1) \times \cdots \times \mathrm{O}(n-k_r)$.

Proof. Similar to the proof of Theorem 1.1, a matrix $Q \in \mathrm{O}(n)$ is a critical point of the problem if and only if there exists a symmetric matrix $M$ such that $AQX_0 = QM$. The condition that $Q^TAQX_0$ is symmetric is equivalent to requiring that $Q^TAQ$ and $X_0$ commute, which occurs precisely when $Q^TAQ$ is a block diagonal matrix with blocks of size $k_1, (k_2-k_1), \ldots, (n-k_r)$. If $Q$ is in the disjoint union (12), then $Q^TAQ = (R^T \cdot \sigma^{-1})\,U^TAU\,(\sigma \cdot R)$
$= R^T\Lambda R$, where $\Lambda$ is the diagonal matrix with the eigenvalues of $A$ on the diagonal. Since $\Lambda$ is diagonal and $R$ has the desired block structure, the points in the disjoint union are critical points. Conversely, suppose $Q^TAQ$ has the desired block structure. Since $A$ is diagonalizable, every block of $Q^TAQ$ is diagonalizable. Assembling these blocks gives a diagonalization $Q^TAQ = R\Lambda R^T$, where $R \in \mathrm{O}(k_1) \times \mathrm{O}(k_2-k_1) \times \cdots \times \mathrm{O}(n-k_r)$, up to a permutation of the rows of $R$. But then $A = QR\Lambda R^TQ^T$, which implies that $Q = UR^T$, as desired.

Proof of Theorem 3.5. The parametrization $\mathrm{SO}(n) \to \mathrm{Fl}_c(k;n)$ can naturally be extended to $\Phi : \mathrm{O}(n) \to \mathrm{Fl}_c(k;n)$. The fiber of a point in $\mathrm{Fl}_c(k;n)$ is isomorphic to a copy of $\mathrm{O}(k_1) \times \cdots \times \mathrm{O}(n-k_r)$. Once we prove that the parametrization takes critical points to critical points, we will be finished as in the proof of Theorem 3.3. Indeed, we may apply Lemma 3.2, since the parametrization is surjective and $\mathrm{Fl}_c(k;n)$ is smooth by Theorem 2.9. Since there is a transitive $\mathrm{O}(n)$-action on $\mathrm{O}(n)$, it suffices to compute the dimension of the image of the Jacobian evaluated at the tangent space of the identity matrix $\mathrm{Id}_n$. For all $1 \le i < j \le n$ we have $e_{ij} - e_{ji} \in T_{\mathrm{Id}_n}(\mathrm{O}(n))$. These $\binom{n}{2}$ vectors are linearly independent. We now compute $\mathrm{Jac}_{\mathrm{Id}_n}(\Phi)$. This matrix has rows of the form $2c_ie_{ii}^T$ for all $i = 1, \ldots, n$ and of the form $c_je_{ij}^T + c_ie_{ji}^T$ for $1 \le i < j \le n$. Therefore $\mathrm{Jac}_{\mathrm{Id}_n}(\Phi) \cdot (e_{ij} - e_{ji})$ is zero when $c_i = c_j$ and otherwise has one nonzero entry. Since the nonzero entry has a different index for distinct pairs $i, j$, this process produces $\dim(\mathrm{Fl}_c(k;n))$ linearly independent vectors.

Corollary 3.7. The LO degree of the isospectral model $\mathrm{Fl}_c(k;n)$ is $\binom{n}{k_1,\, k_2-k_1,\, \ldots,\, n-k_r}$.

4 Heterogeneous Quadratics Minimization Problem

The heterogeneous quadratics minimization problem generalizes the multi-eigenvector problem considered in the previous section. Its most natural formulation is in Stiefel coordinates:
$$\min_{Z^TZ = \mathrm{Id}_k} \sum_{i=1}^k Z_i^TA_iZ_i \qquad (13)$$
where $A_i \in \mathrm{Sym}(\mathbb{R}^n)$ and $Z_i$ denotes the $i$th column of $Z$ for $i = 1, \ldots, k$.
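Before turning to the heterogeneous problem, the critical-point criterion from the proof of Theorem 3.6, namely that $Q^TAQX_0$ is symmetric exactly at the critical points of (11), can be sanity-checked numerically. The following sketch (a minimal check with an arbitrary size, block structure, permutation, and rotation angles, not part of the original computations) verifies that a point of the form $Q = U(\sigma \cdot R)$ from (12) satisfies the criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# block sizes k1 = 2 and k2 - k1 = 2, i.e. c = (1, 1, 0, 0)
c = np.array([1.0, 1.0, 0.0, 0.0])
X0 = np.diag(c)

# generic symmetric matrix A and an orthonormal eigenbasis U
A = rng.standard_normal((n, n))
A = A + A.T
_, U = np.linalg.eigh(A)

# block-orthogonal R in O(2) x O(2)
def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

R = np.block([[rot(0.7), np.zeros((2, 2))],
              [np.zeros((2, 2)), rot(-1.1)]])

# a permutation sigma acting on the rows of R
P = np.eye(n)[[2, 0, 3, 1]]
Q = U @ (P @ R)

# critical point criterion: M = Q^T A Q X0 must be symmetric
M = Q.T @ A @ Q @ X0
print(np.allclose(M, M.T))  # True
```

Since $U^TAU = \Lambda$ is diagonal, $Q^TAQ = R^TP^T\Lambda PR$ is block diagonal with symmetric blocks, and multiplying by $X_0$ scales each block by a single constant, so the symmetry of $M$ is preserved.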
The multi-eigenvector problem is recovered by choosing $A_i = A$ for all $i = 1, \ldots, k$. In Stiefel coordinates, the critical points can be computed using
$$\nabla\Big(\sum_{i=1}^k Z_i^TA_iZ_i\Big) = 2\,\big[Z_1^TA_1 \;\; Z_2^TA_2 \;\cdots\; Z_k^TA_k\big]$$
and $\mathrm{Jac}(Z)$ as in (5). A more convenient system of equations whose solutions are the same critical points is the following (see [15, Lemma 1]):
$$[A_1Z_1\; A_2Z_2 \cdots A_kZ_k]\,Z^T - Z\,[A_1Z_1\; A_2Z_2 \cdots A_kZ_k]^T = 0 \quad\text{and}\quad Z^TZ = \mathrm{Id}_k.$$
The optimization problem is invariant under the action of $\mathrm{O}(1)^k$: if $[Z_1\; Z_2 \cdots Z_k]$ is a critical point, so is $[\pm Z_1\; \pm Z_2 \cdots \pm Z_k]$. Computing the number of complex critical points of (13) is a challenging open problem. Our numerical computations, produced with HomotopyContinuation.jl [2], are summarized in Table 1.

          n=2   n=3   n=4    n=5    n=6    n=7   n=8    n=9
   k=2     8    40    112    240    440    728   1120   1632
   k=3           80   960    5536   21440  64624
   k=4                1920   57216

Table 1: Degrees of the heterogeneous quadratics minimization problem for small $k, n$.

Conjecture 4.1. The number of critical points of the heterogeneous quadratics minimization problem for $k = 2$ is $8\sum_{j=1}^{n-1} j^2$.

4.1 Diagonal case

Our computational experiments indicate that the number of critical points of the heterogeneous quadratics minimization problem stays stable if we take the input matrices $A_1, \ldots, A_k$ to be generic diagonal matrices. While we do not have a general proof for this observation, we present a result addressing the first nontrivial case.

Proposition 4.2. Let $A_1 = \mathrm{diag}(a_{11}, a_{12}, a_{13})$ and $A_2 = \mathrm{diag}(a_{21}, a_{22}, a_{23})$ be generic diagonal matrices. Then the algebraic degree of the corresponding heterogeneous quadratics minimization problem (13) is 40.

Proof. We will explicitly describe these 40 critical points. The critical points are defined by the Lagrange multiplier equations
$$A_1Z_1 = q_{11}Z_1 + q_{12}Z_2, \quad A_2Z_2 = q_{12}Z_1 + q_{22}Z_2, \quad Z_1^TZ_1 = 1, \quad Z_1^TZ_2 = 0, \quad Z_2^TZ_2 = 1,$$
where $q_{11}, q_{12}, q_{22}$ are Lagrange multipliers. This is a square system with 9 variables and 9 equations. We obtain $2^2 \cdot 3 \cdot 2 = 24$ solutions by taking $Z_1^* = \pm e_i$, $Z_2^* = \pm e_j$, $q_{11}^* = a_{1i}$, $q_{12}^* = 0$, and $q_{22}^* = a_{2j}$ for all $i \neq j$. By computing a Gröbner basis of the ideal given by the above equations over the rational function field $\mathbb{Q}(a_{1j}, a_{2j} : j = 1, 2, 3)$, we see that this ideal is zero-dimensional and has 40 solutions. In addition to the 24 solutions we already described, the remaining 16 solutions come via row and column sign flips of the $Z^*$ whose rows we list below:
$$(Z^{*T})_1 = \frac{\sqrt{a_{12}-a_{13}+a_{23}-a_{22}}}{\alpha}\Big(\sqrt{-(a_{12}-a_{13})(a_{21}-a_{22})(a_{21}-a_{23})}\;\;\;\sqrt{(a_{22}-a_{23})(a_{11}-a_{12})(a_{11}-a_{13})}\Big)$$
$$(Z^{*T})_2 = \frac{\sqrt{a_{11}-a_{13}+a_{23}-a_{21}}}{\alpha}\Big(\sqrt{-(a_{11}-a_{13})(a_{22}-a_{21})(a_{22}-a_{23})}\;\;\;\sqrt{(a_{21}-a_{23})(a_{12}-a_{11})(a_{12}-a_{13})}\Big)$$
$$(Z^{*T})_3 = \frac{\sqrt{a_{11}-a_{12}+a_{22}-a_{21}}}{\alpha}\Big(\sqrt{-(a_{11}-a_{12})(a_{23}-a_{21})(a_{23}-a_{22})}\;\;\;\sqrt{(a_{21}-a_{22})(a_{13}-a_{11})(a_{13}-a_{12})}\Big)$$
where $\alpha = a_{11}a_{22} - a_{11}a_{23} + a_{12}a_{23} - a_{12}a_{21} + a_{13}a_{21} - a_{13}a_{22}$. The corresponding Lagrange multipliers are
$$q_{11}^* = \frac{1}{\alpha}\big(-a_{11}a_{12}(a_{21}-a_{22}) + a_{11}a_{13}(a_{21}-a_{23}) - a_{12}a_{13}(a_{22}-a_{23})\big)$$
$$q_{12}^* = \frac{2}{\alpha}\sqrt{-(a_{11}-a_{12})(a_{11}-a_{13})(a_{12}-a_{13})(a_{21}-a_{22})(a_{21}-a_{23})(a_{22}-a_{23})}$$
$$q_{22}^* = \frac{1}{\alpha}\big(a_{21}a_{22}(a_{11}-a_{12}) - a_{21}a_{23}(a_{11}-a_{13}) + a_{22}a_{23}(a_{12}-a_{13})\big).$$
These computations were performed in Oscar.jl [14].
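The 24 coordinate solutions described in the proof of Proposition 4.2 are easy to verify directly. The following sketch (a minimal numerical check with randomly drawn generic diagonal matrices, not part of the original Oscar.jl computation) confirms that each choice $Z_1^* = \pm e_i$, $Z_2^* = \pm e_j$ with $i \neq j$ satisfies the Lagrange multiplier equations:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
A1 = np.diag(rng.standard_normal(3))
A2 = np.diag(rng.standard_normal(3))

def is_critical(Z1, Z2, q11, q12, q22, tol=1e-12):
    # the five Lagrange multiplier equations from Proposition 4.2
    return (np.allclose(A1 @ Z1, q11 * Z1 + q12 * Z2, atol=tol)
            and np.allclose(A2 @ Z2, q12 * Z1 + q22 * Z2, atol=tol)
            and abs(Z1 @ Z1 - 1) < tol
            and abs(Z1 @ Z2) < tol
            and abs(Z2 @ Z2 - 1) < tol)

count = 0
E = np.eye(3)
for i, j in product(range(3), repeat=2):
    if i == j:
        continue
    for s1, s2 in product([1, -1], repeat=2):
        # coordinate solution with multipliers q11 = a_{1i}, q12 = 0, q22 = a_{2j}
        if is_critical(s1 * E[i], s2 * E[j], A1[i, i], 0.0, A2[j, j]):
            count += 1
print(count)  # 24
```

The count matches the $2^2 \cdot 3 \cdot 2 = 24$ solutions of the proof: six ordered pairs $(i, j)$ with $i \neq j$, each with four sign choices.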
4.2 Projection coordinates

We now formulate the heterogeneous quadratics minimization problem as an optimization problem over a flag variety in projection coordinates. We rewrite the objective function as
$$\sum_{i=1}^k Z_i^TA_iZ_i = \sum_{i=1}^k \mathrm{trace}(Z_i^TA_iZ_i) = \sum_{i=1}^k \mathrm{trace}(B_iP_i)$$
where $P_i = \sum_{j=1}^i Z_jZ_j^T$ for $i = 1, \ldots, k$, and $B_i = A_i - A_{i+1}$ for $i = 1, \ldots, k-1$ with $B_k = A_k$. Over $\mathrm{pFl}(1, 2, \ldots, k; n)$ we can reformulate our problem as follows:
$$\text{minimize} \quad \sum_{i=1}^k \mathrm{trace}(B_iP_i) \qquad (14)$$
$$\text{subject to} \quad P_iP_j = P_j, \quad \mathrm{trace}(P_i) = i \quad \text{for } 1 \le j \le i \le k.$$

Proposition 4.3. If the heterogeneous quadratics minimization problem (14) has $m$ critical points, then (13) has $2^km$ critical points in $V_{k,n}$.

Proof. The map from the Stiefel formulation of the flag variety $\mathrm{Fl}(1, 2, \ldots, k; n)$ to the projection formulation $\mathrm{pFl}(1, 2, \ldots, k; n)$ given by $Z \mapsto (P_1, \ldots, P_k)$, where $P_i = \sum_{j=1}^i Z_jZ_j^T$, is $2^k$ to 1, since $[\pm Z_1\; \pm Z_2 \ldots \pm Z_k]$ all map to the same point. We proceed as in the proof of Theorem 3.3. A basis for the tangent space of $V_{k,n}$ at $Z = [e_1 \cdots e_k]$ consists of the $k(n-k)$ standard unit vectors $e_{ij}$ for $i = k+1, \ldots, n$ and $j = 1, \ldots, k$, and the vectors $e_{ij} - e_{ji}$ for $1 \le i < j \le k$. The Jacobian of the above $2^k$-to-1 parametrization map consists of a stack of $k$ Jacobians, where each individual Jacobian is the Jacobian of the parametrization map from $V_{j,n}$ for $j = 1, \ldots, k$ as we computed in the proof of Theorem 3.3. A careful
computation shows that the images of the $k(n-k) + \binom{k}{2}$ vectors under the Jacobian of the parametrization map stay linearly independent. Hence, this image has dimension $\dim(\mathrm{pFl}(1, 2, \ldots, k; n))$. Therefore, the differential of the parametrization map is surjective. Lemma 3.2 implies that the critical points on $V_{k,n}$ are mapped to the critical points on $\mathrm{pFl}(1, 2, \ldots, k; n)$.

Corollary 4.4. The degree of the heterogeneous quadratics minimization problem over the flag variety $\mathrm{pFl}(1, 2, \ldots, k; n)$ is equal to the LO degree of $\mathrm{pFl}(1, 2, \ldots, k; n)$. Therefore, both (13) and (14) have finitely many critical points.

5 Two Problems from Statistics

In this section, we discuss two optimization problems from statistics which involve flags, namely canonical correlation analysis and correspondence analysis. Our goal is to describe and count the number of complex critical points of these optimization problems. The formulations are taken from [17, Section 1.2].

5.1 Canonical correlation analysis

Canonical correlation analysis is a technique in statistics for pairing up corresponding parts of a pair of data sets [10]. The problem is formulated as follows. Let $X, Y$ be $n \times p$ and $n \times q$ data matrices, where $n$ is the common sample size. Let $S_X, S_Y, S_{XY}$ denote the sample covariance matrices. The $k$th pair $(a_k, b_k) \in \mathbb{R}^p \times \mathbb{R}^q$ of canonical correlation loadings is
$$(a_k, b_k) = \operatorname{argmax}\,\{a^TS_{XY}b : a^TS_Xa = b^TS_Yb = 1,\; a^TS_Xa_j = a^TS_{XY}b_j = b^TS_{YX}a_j = b^TS_Yb_j = 0,\; j = 1, \ldots, k-1\}.$$
We perform a standard simplification via the Cholesky factorizations $S_X = P^TP$ and $S_Y = Q^TQ$, where $P, Q$ are upper triangular. We substitute $A = P^{-T}S_{XY}Q^{-1}$, $u = Pa$, and $v = Qb$ to obtain the simpler problem
$$(u_k, v_k) = \operatorname{argmax}\,\{u^TAv : u^Tu = v^Tv = 1,\; u^Tu_j = u^TAv_j = v^TA^Tu_j = v^Tv_j = 0,\; j = 1, \ldots, k-1\}.$$
If we collect $u_1, \ldots, u_k$ and $v_1, \ldots, v_k$ into the $p \times k$ matrix $U$ and the $q \times k$ matrix $V$, respectively, then $(U, V)$ represents a pair of flags in $\mathrm{Fl}(1, 2, \ldots, k; p) \times \mathrm{Fl}(1, 2, \ldots, k; q)$ [17].
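The Cholesky substitution above preserves the canonical correlations: the singular values of $A = P^{-T}S_{XY}Q^{-1}$ agree with the classical characterization via the eigenvalues of $S_X^{-1}S_{XY}S_Y^{-1}S_{YX}$. The following sketch checks this numerically (a minimal illustration on random data; the dimensions and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 200, 3, 3
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
SX = Xc.T @ Xc / (n - 1)
SY = Yc.T @ Yc / (n - 1)
SXY = Xc.T @ Yc / (n - 1)

# numpy returns a lower-triangular L with S = L L^T, so the
# upper-triangular factor with S = P^T P is P = L^T
P = np.linalg.cholesky(SX).T
Q = np.linalg.cholesky(SY).T
A = np.linalg.solve(P.T, SXY) @ np.linalg.inv(Q)   # A = P^{-T} S_XY Q^{-1}

# squared singular values of A vs. eigenvalues of SX^{-1} SXY SY^{-1} SXY^T
svals = np.linalg.svd(A, compute_uv=False)
eigs = np.linalg.eigvals(np.linalg.inv(SX) @ SXY @ np.linalg.inv(SY) @ SXY.T)
print(np.allclose(np.sort(svals**2), np.sort(eigs.real)))  # True
```

The agreement follows from $AA^T = P^{-T}S_{XY}S_Y^{-1}S_{YX}P^{-1}$, which is similar to $S_X^{-1}S_{XY}S_Y^{-1}S_{YX}$.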
We say the pair $(U, V) \in \mathbb{C}^{p \times k} \times \mathbb{C}^{q \times k}$ is a critical point of the canonical correlation problem if for all $i = 1, \ldots, k$, the pair $(u_i, v_i)$ is a critical point of the optimization problem
$$\text{maximize} \quad u^TAv \qquad (15)$$
$$\text{subject to} \quad u^Tu = v^Tv = 1, \quad u^Tu_j = u^TAv_j = 0, \quad v^TA^Tu_j = v^Tv_j = 0, \quad j = 1, \ldots, i-1.$$
The critical points are given by the singular value decomposition of $A$ [4].

Theorem 5.1. The critical points of the canonical correlation analysis problem are the pairs $(U, V) \in \mathbb{C}^{p \times k} \times \mathbb{C}^{q \times k}$ where the columns of $U$ are left singular unit vectors of $A$, the columns of $V$ are right singular unit vectors of $A$, and corresponding columns have the same singular value. There are $\binom{\min(p,q)}{k}\,k!\,2^k$ critical points.

Proof. The count follows from the description of the critical points: choose $k$ unit left and right singular vectors of $A$, permute the corresponding vectors simultaneously, and flip their signs as desired. We will prove by induction that for each $i$ there exist $\lambda_i, \eta_i$ such that $Av_i = \lambda_iu_i$ and $A^Tu_i = \eta_iv_i$ if and only if the pair $(u_i, v_i)$
is a critical point of (15). We proceed by induction on $i$. When $i = 1$, the optimization problem simplifies to $\max_{u^Tu = v^Tv = 1} v^TAu$. By computing the transpose of the Jacobian, we find that the critical points of this problem are characterized by the leftmost column of the matrix
$$\begin{pmatrix} Av & u & 0 \\ A^Tu & 0 & v \end{pmatrix}$$
being in the span of the rest of the columns. We therefore have that $(u, v)$ is a critical point if and only if there exist $\lambda_1, \eta_1$ such that $Av = \lambda_1u$ and $A^Tu = \eta_1v$.

Suppose now that for $j = 1, \ldots, i-1$, we have $Av_j = \lambda_ju_j$ and $A^Tu_j = \eta_jv_j$. The transpose of the augmented Jacobian matrix of (15) is
$$\begin{pmatrix} Av_i & u_1 \cdots u_i & 0 \cdots 0 & Av_1 \cdots Av_{i-1} & 0 \cdots 0 \\ A^Tu_i & 0 \cdots 0 & v_1 \cdots v_i & 0 \cdots 0 & A^Tu_1 \cdots A^Tu_{i-1} \end{pmatrix} = \begin{pmatrix} Av_i & u_1 \cdots u_i & 0 \cdots 0 & \lambda_1u_1 \cdots \lambda_{i-1}u_{i-1} & 0 \cdots 0 \\ A^Tu_i & 0 \cdots 0 & v_1 \cdots v_i & 0 \cdots 0 & \eta_1v_1 \cdots \eta_{i-1}v_{i-1} \end{pmatrix}.$$
Proving that this matrix drops rank is equivalent to proving that the matrix
$$\begin{pmatrix} Av_i & u_1 \cdots u_i & 0 \cdots 0 \\ A^Tu_i & 0 \cdots 0 & v_1 \cdots v_i \end{pmatrix}$$
drops rank. It is clear that the matrix drops rank if $Av_i \in \mathrm{span}(u_i)$ and $A^Tu_i \in \mathrm{span}(v_i)$. Conversely, suppose that
$$Av_i = \alpha_1u_1 + \cdots + \alpha_iu_i, \qquad A^Tu_i = \beta_1v_1 + \cdots + \beta_iv_i.$$
Multiplying the first equation by $u_j^T$ for $j = 1, \ldots, i-1$ gives $0 = u_j^TAv_i = \alpha_j$. By symmetry, $\alpha_j = \beta_j = 0$ for $j = 1, \ldots, i-1$. Thus $Av_i = \alpha_iu_i$ and $A^Tu_i = \beta_iv_i$.

5.2 Correspondence Analysis

Correspondence analysis (CA) is a statistical optimization problem over a pair of Grassmannians; see [4, 17]. CA is an analog of principal component analysis for categorical data. The data for CA come in the form of an $n \times p$ matrix $X$ known as a contingency table. We let $\mathbf{1}$ be the all-ones vector of appropriate size and set $t = \mathbf{1}^TX\mathbf{1} \in \mathbb{R}$. The row and column weights are defined as
$$r = \tfrac{1}{t}X\mathbf{1} \in \mathbb{R}^n, \qquad c = \tfrac{1}{t}X^T\mathbf{1} \in \mathbb{R}^p, \qquad D_r = \tfrac{1}{t}\operatorname{diag}(r) \in \mathbb{R}^{n \times n}, \qquad D_c = \tfrac{1}{t}\operatorname{diag}(c) \in \mathbb{R}^{p \times p}.$$
For $k = 1, \ldots, p$, we seek a pair of matrices $(U_k, V_k) \in \mathbb{R}^{n \times k} \times \mathbb{R}^{p \times k}$ such that
$$(U_k, V_k) = \operatorname{argmax}\,\{\mathrm{trace}(U^T(\tfrac{1}{t}X - rc^T)V) : U^TD_rU = V^TD_cV = \mathrm{Id}_k\}.$$
We begin with two simplifications of this problem. The first is standard in correspondence analysis: since $D_r, D_c$ are diagonal, they can be factored into $U, V$, respectively, by replacing $U$ with $\sqrt{D_r}U$ and $V$ with $\sqrt{D_c}V$.
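The weight definitions above can be illustrated on a small contingency table. The following sketch (a toy example with made-up counts) computes $t$, $r$, $c$, $D_r$, $D_c$, and checks the defining property of the centered matrix $\frac{1}{t}X - rc^T$, namely that its row and column sums vanish:

```python
import numpy as np

# a small 3 x 2 contingency table of made-up counts
X = np.array([[20., 10.],
              [ 5., 15.],
              [25., 25.]])
one_n, one_p = np.ones(3), np.ones(2)

t = one_n @ X @ one_p          # grand total, here 100
r = X @ one_p / t              # row weights, summing to 1
c = X.T @ one_n / t            # column weights, summing to 1
Dr = np.diag(r) / t
Dc = np.diag(c) / t

# the centered matrix X/t - r c^T has zero row and column sums
C = X / t - np.outer(r, c)
print(np.allclose(C.sum(axis=0), 0), np.allclose(C.sum(axis=1), 0))  # True True
```

This is why the objective uses $\frac{1}{t}X - rc^T$ rather than $X$ itself: the rank-one "independence" part $rc^T$ is removed before optimizing.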
The second is to replace the matrix $\frac{1}{t}X - rc^T$ from statistics with a generic matrix $A \in \mathbb{R}^{n \times p}$. The new optimization problem is
$$\max_{U^TU = V^TV = \mathrm{Id}_k} \mathrm{trace}(U^TAV). \qquad (16)$$
The solution to this problem is given by the singular value decomposition of $A$; see [4]. The problem is invariant under a simultaneous $\mathrm{O}(k)$-action on $U$ and $V$; hence it is an optimization problem over the product of Grassmannians $\mathrm{Gr}(k, n) \times \mathrm{Gr}(k, p)$.

Theorem 5.2. Let $A$ be a generic real $n \times p$ matrix with $p > n$, let $U$ be an $n \times k$ variable matrix, and let $V$ be a $p \times k$ variable matrix. The algebraic set of complex critical points of the optimization problem (16) is equal to
$$\bigsqcup_{\{i_1, \ldots, i_k\} \in \binom{[n]}{k}} \{([u_{i_1}\,u_{i_2} \cdots u_{i_k}]Q,\; [v_{i_1}\,v_{i_2} \cdots v_{i_k}]Q) : Q \in \mathrm{O}(k)\} \qquad (17)$$
where $u_1, \ldots, u_n$ is an orthonormal basis of left singular vectors for $A$ and $v_1, \ldots, v_n$ is an orthonormal set of right singular vectors for $A$ such that $u_i, v_i$ share a common singular value for $i = 1, \ldots, n$. This algebraic set is a disjoint union of $\binom{n}{k}$ varieties isomorphic to $\mathrm{O}(k)$.

Proof. As in the proof of Theorem 1.1, a pair $(U, V)$ is a critical point if and
only if it satisfies $U^TU = V^TV = \mathrm{Id}_k$ and there exist symmetric matrices $M, N$ such that $AV = UM$ and $A^TU = VN$. We must have $M = N$, since $U^TAV = U^TUM = M$ and $V^TA^TU = V^TVN = N$ and $M, N$ are symmetric. Thus $(U, V)$ is a critical point if and only if $U^TU = V^TV = \mathrm{Id}_k$ and there exists a single symmetric matrix $M$ with $AV = UM$ and $A^TU = VM$. The matrix $M$ satisfies $AA^TU = UM^2$, where $AA^T$ is full rank, so by Lemma 3.1, $M^2$ is orthogonally diagonalizable, which implies $M$ is orthogonally diagonalizable. We write $M = Q^T\Lambda Q$ for the spectral decomposition of $M$. Then the constraints become $AVQ^T = UQ^T\Lambda$ and $A^TUQ^T = VQ^T\Lambda$, which is precisely what it means for $(UQ^T, VQ^T)$ to be in the set (17).

References

[1] Madeline Brandt, Juliette Bruce, Taylor Brysiewicz, Robert Krone, and Elina Robeva. The degree of SO(n, C). In Combinatorial algebraic geometry, volume 80 of Fields Inst. Commun., pages 229–246. Fields Inst. Res. Math. Sci., Toronto, ON, 2017.

[2] Paul Breiding and Sascha Timme. HomotopyContinuation.jl: A package for homotopy continuation in Julia. In Mathematical Software – ICMS 2018, pages 458–465. Springer International Publishing, 2018.

[3] Taylor Brysiewicz and Fulvio Gesmundo. The degree of Stiefel manifolds. Enumer. Comb. Appl., 1(3):Paper No. S2R20, 18, 2021.

[4] Carles M. Cuadras and Michael Greenacre. A short history of statistical association: from correlation to correspondence analysis to copulas. J. Multivariate Anal., 188:Paper No. 104901, 21, 2022.

[5] Karel Devriendt, Hannah Friedman, Bernhard Reinke, and Bernd Sturmfels. The two lives of the Grassmannian. To appear in Acta Univ. Sapientiae Math., 2024.

[6] William Fulton. Young Tableaux: With Applications to Representation Theory and Geometry. London Mathematical Society Student Texts. Cambridge University Press, 1996.

[7] Felix R. Gantmacher. The Theory of Matrices. Chelsea Publishing, New York, 1959.

[8] Lek-Heng Lim and Ke Ye. Degree of the Grassmannian as an affine variety, 2024. arXiv preprint arXiv:2405.05128.

[9] Lek-Heng Lim and Ke Ye.
Simple matrix models for the flag, Grassmann, and Stiefel manifolds, 2024. arXiv preprint arXiv:2407.13482.

[10] Kantilal Varichand Mardia, John T. Kent, and John M. Bibby. Multivariate analysis. Probability and Mathematical Statistics: A Series of Monographs and Textbooks. Academic Press, London-New York-Toronto, 1979.

[11] Laurentiu G. Maxim, Jose Israel Rodriguez, Botong Wang, and Lei Wu. Linear optimization on varieties and Chern-Mather classes. Adv. Math., 437:Paper No. 109443, 22, 2024.

[12] Jiawang Nie and Kristian Ranestad. Algebraic degree of polynomial optimization. SIAM J. Optim., 20(1):485–502, 2009.

[13] Jiawang Nie, Kristian Ranestad, and Bernd Sturmfels. The algebraic degree of semidefinite programming. Math. Program., 122(2):379–405, 2010.

[14] OSCAR – Open Source Computer Algebra Research system, Version 1.3.1, 2025.

[15] Harry Oviedo. Implicit steepest descent algorithm for optimization with orthogonality constraints. Optim. Lett., 16(6):1773–1797, 2022.

[16] Kristian Ranestad. Algebraic degree in semidefinite and polynomial optimization. In Handbook on semidefinite, conic and polynomial optimization, volume 166 of Internat. Ser. Oper. Res. Management Sci., pages 61–75. Springer, New York, 2012.

[17] Ke Ye, Ken Sze-Wai Wong, and Lek-Heng Lim. Optimization on flag manifolds. Math. Program., 194(1-2):621–660, 2022.

Authors' addresses: Department of Mathematics, University
arXiv:2505.16124v1 [stat.ME] 22 May 2025

Controlling the False Discovery Rate in High-Dimensional Linear Models Using Model-X Knockoffs and p-values

Jinyuan Chang (1,2), Chenlong Li (3), Cheng Yong Tang* (4), and Zhengtian Zhu (2)

1 Joint Laboratory of Data Science and Business Intelligence, Southwestern University of Finance and Economics, Chengdu, China
2 Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
3 School of Mathematics, Taiyuan University of Technology, Taiyuan, China
4 Department of Statistics, Operations, and Data Science, Temple University, Philadelphia, PA, USA

Abstract

In this paper, we propose novel multiple testing methods for controlling the false discovery rate (FDR) in the context of high-dimensional linear models. Our development innovatively integrates model-X knockoff techniques with debiased penalized regression estimators. The proposed approach addresses two fundamental challenges in high-dimensional statistical inference: (i) constructing valid test statistics and corresponding p-values in solving problems with a diverging number of model parameters, and (ii) ensuring FDR control under complex and unknown dependence structures among test statistics. A central contribution of our methodology lies in the rigorous construction and theoretical analysis of two paired sets of test statistics. Based on these test statistics, our methodology adopts two p-value-based multiple testing algorithms. The first applies the conventional Benjamini–Hochberg procedure, justified by the asymptotic mutual independence and normality of one set of the test statistics. The second leverages the paired structure of both sets of test statistics to improve detection power while maintaining rigorous FDR control. We provide comprehensive theoretical analysis, establishing the validity of the debiasing framework and ensuring that the proposed methods achieve proper FDR control.
Extensive simulation studies demonstrate that our procedures outperform existing approaches – particularly those relying on empirical evaluations of false discovery proportions – in terms of both power and empirical control of the FDR. Notably, our methodology yields substantial improvements in settings characterized by weaker signals, smaller sample sizes, and lower pre-specified FDR levels.

Keywords: Benjamini–Hochberg procedure; Debiased estimator; False discovery rate; Model-X knockoff; Multiple testing; Penalized regression; p-values.

*Address for correspondence: C. Y. Tang, Department of Statistics, Operations, and Data Science, Temple University, 1810 Liacouras Walk, Philadelphia, PA 19122 USA. Email: yongtang@temple.edu

1 Introduction

Multiple testing is a foundational yet challenging aspect of statistical analysis and applied research. Adjusting for the multiplicity of statistical tests is essential for developing reliable methodologies that support valid scientific discoveries; see Benjamini (2010) for a comprehensive overview and further insights. This problem has received considerable attention across a wide range of disciplines, including molecular biology, genetics, clinical research, epidemiology, cognitive neuroscience, and others. In these settings, controlling the false discovery rate (FDR) has emerged as a central methodological objective. FDR control ensures that the expected proportion of false positives among the rejected hypotheses does not exceed a pre-specified level. The Benjamini–Hochberg (BH) procedure (Benjamini and Hochberg, 1995), which ranks p-values and compares them to an increasing sequence of thresholds, is the most widely used approach for achieving this goal.

Investigating and understanding the impact of dependence among multiple test statistics has been a central research focus in the area of multiple testing. A methodologically valid approach must
https://arxiv.org/abs/2505.16124v1
control the FDR regardless of the dependence structure among the test statistics. The BH procedure is known to control the false discovery rate at its target level under the assumption of independence among test statistics. However, when test statistics are dependent, the situation becomes more complex. Under the condition of positive regression dependence on subsets (PRDS), the BH procedure has been shown to control the FDR conservatively (Benjamini and Yekutieli, 2001; Sarkar, 2002). Recent studies have further explored this issue by incorporating information and structure from additional models for the test statistics; see, for example, Sun and Cai (2008), Fan et al. (2012), and Lei and Fithian (2018). Fithian and Lei (2022) investigate calibrating the FDR control procedure using information from the conditional distribution to handle dependent test statistics.

The knockoff method, introduced by Barber and Candès (2015), has emerged as an influential tool for controlling the FDR. Its key advantage lies in its ability to provide valid FDR control across a broad class of dependence structures in the design matrix. The central idea is to construct "knockoff" variables that replicate the dependence structure of the original variables while being conditionally independent of the response variable, given the original variables. By fitting a model using an augmented design matrix that includes both the original and knockoff variables, and comparing test statistics for the original variables with those of their knockoff counterparts, knockoff-based procedures can identify relevant variables while maintaining control of the FDR.

In high-dimensional settings, particularly in the context of multiple testing with a diverging number of model parameters, developing procedures for controlling the FDR has garnered substantial interest.
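The BH procedure referenced throughout this discussion (rank the p-values and compare them with an increasing sequence of thresholds) can be sketched in a few lines. This is a minimal illustrative implementation, not the code accompanying the paper; the example p-values are made up:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    """Return indices of rejected hypotheses at target FDR level alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    sorted_p = p[order]
    # find the largest k (1-indexed) with p_(k) <= alpha * k / m
    below = sorted_p <= alpha * np.arange(1, m + 1) / m
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0])      # 0-indexed cutoff
    return np.sort(order[:k + 1])         # reject all hypotheses up to the cutoff

# ten p-values: three small "signals", seven uniform-looking nulls
pvals = [0.001, 0.004, 0.012, 0.3, 0.5, 0.62, 0.71, 0.8, 0.9, 0.95]
print(benjamini_hochberg(pvals, alpha=0.05))  # [0 1 2]
```

Note that the procedure rejects all hypotheses below the cutoff index, even those whose individual p-values exceed their own threshold, which is what distinguishes it from a simple per-hypothesis comparison.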
To this end, a class of methods has been proposed that leverages the limiting properties of high-dimensional Gaussian random variables; see Liu (2013), Cai and Liu (2016), and the review by Cai et al. (2023). For high-dimensional regression models, the model-X knockoff method introduced by Candès et al. (2018) extends the original approach of Barber and Candès (2015) with a dedicated focus on addressing challenges specific to the high-dimensional regime. Notably, as a key distinction from the conventional BH procedure, the class of knockoff methods does not rely on p-values; instead, it estimates empirical false discovery proportions directly by leveraging joint distributional properties, particularly the correlation structure between the knockoff variables and the original variables. Related to this perspective, several recent developments have focused on empirically evaluating false discovery proportions without relying on p-values; see Liu et al. (2020), Xing et al. (2021), Dai et al. (2023a), and Dai et al. (2023b).

In contrast, within the same setting as Barber and Candès (2015), Sarkar and Tang (2022) propose constructing paired p-values upon incorporating knockoff copies into linear models. They introduce a two-step procedure that first conducts a selection step using the Bonferroni method, followed by applying the BH procedure to the selected variables in the second step. Their method is capable of controlling the FDR under arbitrary dependence structures in the design matrix. The approach of Sarkar and Tang (2022) demonstrates competitive performance, particularly when
the target FDR level is low or the signal strength is weak. However, because the method requires evaluating ordinary least squares (OLS) estimators and their associated p-values, it is not applicable in high-dimensional settings where the number of predictors is much larger than the sample size.

Our aim in this study is to address the multiple testing problem in high-dimensional linear models. This investigation presents two major challenges: first, the inherent difficulty of analyzing the properties associated with parameter estimation in complex high-dimensional models, a topic that has received considerable attention in recent research; and second, ensuring valid FDR control in the presence of complex dependencies among high-dimensional test statistics. Our methodological strategy seeks to leverage the advantages of p-values in solving high-dimensional multiple testing problems. Specifically, p-values can explicitly account for the uncertainty associated with the estimators, thereby capturing the most relevant information from the data. These advantages are particularly valuable in challenging situations, such as when the signal strength is weak or the target FDR level is low. However, given the challenges mentioned above, obtaining p-values in high-dimensional models is not straightforward, and even when they are available, their dependence structure is often complex and unknown. This poses a formidable obstacle to developing multiple testing methods that control the FDR.

To address these challenges, we propose a new multiple testing framework that effectively controls the FDR while also aiming to improve power in high-dimensional testing problems. Specifically, we develop p-values for high-dimensional linear models using model-X knockoffs.
We first establish the methodology and supporting theory to ensure that debiasing techniques (van de Geer et al., 2014; Zhang and Zhang, 2013; Ning and Liu, 2017) can be effectively applied to a working linear model augmented with knockoff variables, thereby enabling the construction of valid test statistics and corresponding p-values for high-dimensional model parameters. By leveraging the correlation structure between the knockoff and original variables, our approach constructs two sets of test statistics that are naturally paired, as each tests the same hypothesis in the high-dimensional setting. Notably, one set of test statistics is shown to be asymptotically mutually independent, enabling direct application of the BH procedure. Furthermore, we demonstrate that these paired test statistics and their corresponding p-values facilitate the application of the two-step procedure introduced by Sarkar and Tang (2022), allowing for effective FDR control while enhancing power in more challenging high-dimensional regimes. Building on this foundation, we establish that the proposed multiple testing framework successfully controls the FDR in high-dimensional linear models.

Our approach differs from existing p-value-based methods – particularly those proposed in Liu (2013), Cai and Liu (2016), and Cai et al. (2023) – for high-dimensional multiple testing problems in a crucial way: it fully incorporates the limiting joint distribution of high-dimensional test statistics. In contrast, existing methods rely on two regimes for approximating the limiting distributions in the development of multiple testing procedures. The first regime is the joint asymptotic normality of the test statistics, while the second involves approximating the tail behavior of the limiting distribution using
https://arxiv.org/abs/2505.16124v1
extreme value distributions, leveraging asymptotic results as the number of hypotheses diverges. As a result, the performance of these approaches depends on the accuracy of distributional approximations in both regimes. Moreover, an implicit rationale behind the use of extreme value approximations is that the tails of the joint limiting normal distribution are asymptotically independent. This reflects only a partial utilization of the limiting joint distribution of the test statistics, potentially overlooking dependence structures that could be informative for improving power in multiple testing problems.

We make several notable contributions in this study. Foremost, we develop a novel and practically effective p-value-based multiple testing framework tailored to high-dimensional settings. Our approach addresses key challenges that have traditionally limited the use of p-values in such contexts, particularly: (a) constructing valid p-values for high-dimensional model parameters, and (b) accommodating arbitrary design matrices that may involve unknown and complex dependence structures. Methodologically, we show that combining debiasing techniques with model-X knockoff variables enables new inferential insights. By applying the debiased procedure to an augmented model with knockoff variables, we construct a set of naturally paired p-values for each hypothesis, leveraging the structure induced by the knockoffs. Our theoretical analysis establishes the validity of these p-values and the debiased inference procedure, ensuring rigorous FDR control. Empirically, our proposed methods outperform existing approaches that rely on empirical false discovery proportion estimates. They achieve effective FDR control while offering improved power, particularly in challenging scenarios involving weaker signals and smaller sample sizes.

The rest of this paper is organized as follows. Section 2 outlines our methodology.
Section 3 presents the main results, including the theoretical properties of the proposed p-value-based multiple testing procedures. Section 4 reports simulations, and Section 5 provides a demonstration with a real data example. We conclude the paper with a discussion in Section 6. The real data used and the codes for implementing our proposed methods are available at the GitHub repository: https://github.com/JinyuanChang-Lab/HD FDRPV.

Notation. For any positive integer $m$, we write $[m]=\{1,\dots,m\}$. Denote the indicator function by $I(\cdot)$ and the $m$-dimensional identity matrix by $I_m$. For a vector $v=(v_1,\dots,v_m)^\top\in\mathbb{R}^m$, $|v|_2=(\sum_{i=1}^m|v_i|^2)^{1/2}$, and $|v|_0$ represents the number of nonzero entries of $v$. For two vectors $v,u\in\mathbb{R}^m$, we define the inner product $\langle v,u\rangle=v^\top u$. For a matrix $A=(a_{i,j})\in\mathbb{R}^{p\times q}$, we define the elementwise $\ell_\infty$-norm $|A|_\infty=\max_{i\in[p],j\in[q]}|a_{i,j}|$, the elementwise $\ell_1$-norm $|A|_1=\sum_{i\in[p],j\in[q]}|a_{i,j}|$, and the $\ell_1$-norm $\|A\|_1=\max_{j\in[q]}\sum_{i=1}^p|a_{i,j}|$. The maximum and minimum singular values of $A$ are denoted by $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$, respectively. For a random variable $x$, we denote its sub-Gaussian norm $\|x\|_{\psi_2}=\sup_{q\ge 1}q^{-1/2}\{E(|x|^q)\}^{1/q}$. For a random vector $x\in\mathbb{R}^m$, its sub-Gaussian norm is defined as $\|x\|_{\psi_2}=\sup_{u\in\mathbb{S}^{m-1}}\|\langle x,u\rangle\|_{\psi_2}$, where $\mathbb{S}^{m-1}$ denotes the unit sphere in $\mathbb{R}^m$.

2 Methodology

2.1 Setting and background

We denote the design matrix by $X=(x_1,\dots,x_n)^\top$, where $x_i=(x_{i,1},\dots,x_{i,d})^\top\in\mathbb{R}^d$ for $i\in[n]$. In high-dimensional settings, the number of predictors $d$ may exceed the sample size $n$. For a random design, the rows $\{x_i\}_{i=1}^n$ are treated as $n$ independent
copies of a $d$-dimensional random vector $x$. Our investigation takes place in the context of the linear model:
$$y = X\beta + \varepsilon, \quad (1)$$
where $y=(y_1,\dots,y_n)^\top\in\mathbb{R}^n$ is the response vector, $\varepsilon=(\varepsilon_1,\dots,\varepsilon_n)^\top\in\mathbb{R}^n$ is the error term with $E(\varepsilon_i)=0$ and $\mathrm{Var}(\varepsilon_i)=\sigma^2$, and $\beta\in\mathbb{R}^d$ is the parameter vector. The primary objective of interest is to simultaneously test the hypotheses:
$$H_{0,j}: \beta_j = 0 \quad \text{versus} \quad H_{1,j}: \beta_j \ne 0, \quad j\in[d]. \quad (2)$$
Recently, the development of so-called knockoff-variable-based approaches has been influential in addressing the multiple testing problem. In a seminal work, Barber and Candès (2015) propose generating a new design matrix $\tilde X=(\tilde x_1,\dots,\tilde x_n)^\top$, consisting of knockoff copies $\{\tilde x_i\}_{i=1}^n$ of the variables $\{x_i\}_{i=1}^n$ in $X$. The new variables in $\tilde X$ are conditionally independent of the response $y$ given the observed explanatory variables $X$. As a remarkable property of $\tilde X$, replacing any subset of the explanatory variables in the original design matrix $X$ with their corresponding knockoff copies in $\tilde X$ preserves the variance-covariance structure of $X$ exactly. By fitting a working linear model using the augmented design matrix $(X,\tilde X)$:
$$y = (X,\tilde X)\gamma + \varepsilon, \quad (3)$$
we obtain a parameter estimate in $2d$ dimensions, denoted as $\hat\gamma=(\hat\beta_1^\top,\hat\beta_2^\top)^\top$. Due to the conditional independence of $\tilde X$ from $y$ given $X$, the true parameter corresponding to $\hat\beta_2$ is $0$. Specifically, if the true model is (1) with true parameter $\beta_0:=(\beta^0_1,\dots,\beta^0_d)^\top$, then the true value of $\gamma$ is $\gamma_0=(\beta_0^\top,0^\top)^\top$. Based on the construction of knockoff variables, Barber and Candès (2015) introduce a procedure that controls the false discovery rate (FDR), regardless of the dependence structure in the design matrix. Their approach leverages the properties of the knockoff copies $\tilde X$, using them in conjunction with the original data $(X,y)$ to empirically estimate the false discovery proportion (FDP).
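For a Gaussian design $x\sim N(0,\Sigma)$, knockoffs with exactly this covariance-preserving property can be sampled from the conditional-Gaussian construction used in the model-X framework discussed later in this paper. A minimal numerical check of the population covariance identities (this sketch and its variable choices are ours, assuming $\tilde x \mid x \sim N(x - D\Sigma^{-1}x,\ 2D - D\Sigma^{-1}D)$ with $D=\mathrm{diag}(s)$):

```python
import numpy as np

# Population-level check that the Gaussian knockoff construction
#   xt | x ~ N(x - D S^{-1} x, 2D - D S^{-1} D),  D = diag(s),
# yields Cov(xt) = S and Cov(x, xt) = S - D, so swapping any coordinate
# pair leaves the joint covariance unchanged. Sigma and s are arbitrary
# illustrative choices, not values from the paper.
S = np.array([[1.0, 0.4, 0.16],
              [0.4, 1.0, 0.4],
              [0.16, 0.4, 1.0]])
s = 0.5 * np.ones(3)
D = np.diag(s)
Sinv = np.linalg.inv(S)

A = np.eye(3) - D @ Sinv      # conditional mean map: xt = A x + noise
V = 2 * D - D @ Sinv @ D      # conditional covariance of the noise

cov_xt = A @ S @ A.T + V      # law of total covariance
cov_x_xt = S @ A.T            # Cov(x, A x + noise) = S A^T

assert np.allclose(cov_xt, S)          # knockoffs match Cov(x)
assert np.allclose(cov_x_xt, S - D)    # cross-covariance is S - D
```

The two assertions are exactly the covariance-preservation property stated above: the marginal covariance of the knockoffs equals $\Sigma$, and the cross-covariance differs from $\Sigma$ only on the diagonal.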
To this end, Barber and Candès (2015) propose the construction of test statistics $\{W_j\}_{j=1}^d$, derived from the procedure used to obtain the parameter estimates $\hat\gamma=(\hat\beta_1^\top,\hat\beta_2^\top)^\top\in\mathbb{R}^{2d}$. These statistics are designed to quantify the evidence against each of the $d$ null hypotheses $H_{0,j}:\beta^0_j=0$, corresponding to the model in (1). A key requirement is that the statistics $W_1,\dots,W_d$ are symmetric around zero under the null hypothesis. The FDP is then estimated at a threshold $t$ via
$$\widehat{\mathrm{FDP}}(t) = \frac{\#\{j: W_j < -t\}}{\#\{j: W_j > t\} \vee 1}. \quad (4)$$
Barber and Candès (2015) further propose to use a more conservative estimator:
$$\widehat{\mathrm{FDP}}_+(t) = \frac{1 + \#\{j: W_j < -t\}}{\#\{j: W_j > t\} \vee 1}. \quad (5)$$
They show that selecting the threshold $\tau=\min\{t: \widehat{\mathrm{FDP}}_+(t)\le\alpha\}$ and rejecting all hypotheses for which $W_j>\tau$ yields a multiple testing procedure that controls the FDR at the nominal level $\alpha$. The additive adjustment in (5) is necessary, as noted by Barber and Candès (2015), to ensure both theoretical and empirical control of the FDR.

Recently, Xing et al. (2021) propose an approach using a device called the Gaussian Mirror. For each $j\in[d]$, the $j$-th mirror statistic is constructed by replacing the $j$-th column $x_{\cdot,j}$ in the design matrix $X$ with two perturbed versions: $x^+_{\cdot,j}=x_{\cdot,j}+c_j\xi_j$ and $x^-_{\cdot,j}=x_{\cdot,j}-c_j\xi_j$, where $\xi_j\sim N(0,I_n)$.
A new set of parameter estimates is obtained by fitting the linear model (1) with $y$ and the new design matrix $(x^+_{\cdot,j}, x^-_{\cdot,j}, X_{-j})$, where $X_{-j}$ denotes the matrix $X$ without the $j$-th column. Let $\hat\beta^+_j$ and $\hat\beta^-_j$ be the components of the estimator corresponding to $x^+_{\cdot,j}$ and $x^-_{\cdot,j}$, respectively. Based on these estimates, the so-called mirror statistics can be constructed. An example of the mirror statistics is $M_j = |\hat\beta^+_j + \hat\beta^-_j| - |\hat\beta^+_j - \hat\beta^-_j|$. In the same spirit as (4), Xing et al. (2021) propose to estimate the FDP as
$$\widehat{\mathrm{FDP}}(t) = \frac{\#\{j: M_j < -t\}}{\#\{j: M_j > t\} \vee 1}. \quad (6)$$
A multiple testing procedure is then developed by finding $\tau=\min\{t: \widehat{\mathrm{FDP}}(t)\le\alpha\}$ and rejecting all null hypotheses for which $M_j>\tau$.

With empirical FDP estimates such as those in (4)–(6), the resulting multiple testing procedures only require the test statistics, e.g., $\{W_j\}_{j=1}^d$ or $\{M_j\}_{j=1}^d$, without explicitly relying on the joint distributions of these statistics. As a result, these procedures become p-value-free, a key distinction from conventional methods rooted in Benjamini and Hochberg (1995). On the one hand, this offers significant practical convenience by eliminating the need to explicitly evaluate the joint distribution of the test statistics. On the other hand, it raises some critical concerns, particularly when addressing multiple testing problems in more challenging scenarios. Foremost, since FDP estimators such as (4)–(6) are ratio estimators, they tend to exhibit higher variance when the numerator is small, especially at a low target FDR level $\alpha$. This issue is further exacerbated when the sample size $n$ is small, leading to greater variability in the FDP estimates and making the results of the multiple testing procedures unstable. In fact, the procedures of Barber and Candès (2015) can become overly conservative in situations where the FDR level $\alpha$ is low or the signal strength is weak; see Sarkar and Tang (2022).
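The threshold selection based on (5) amounts to a short search over candidate values of $t$. A minimal pure-Python sketch (our own illustrative code, not the authors' implementation; it follows the strict inequalities in (4)–(5) and uses the magnitudes $|W_j|$ as the candidate grid):

```python
import math

def fdp_plus_threshold(W, alpha):
    """Smallest candidate threshold t with FDP_+(t) <= alpha, cf. (5).

    Candidate thresholds are the nonzero magnitudes |W_j|; returns
    math.inf (reject nothing) when no feasible threshold exists.
    """
    candidates = sorted(abs(w) for w in W if w != 0)
    for t in candidates:
        num = 1 + sum(1 for w in W if w < -t)          # conservative numerator
        den = max(sum(1 for w in W if w > t), 1)       # rejections at t
        if num / den <= alpha:
            return t
    return math.inf
```

Rejecting all hypotheses with $W_j$ above the returned threshold reproduces the selection rule $\tau=\min\{t: \widehat{\mathrm{FDP}}_+(t)\le\alpha\}$; the same search applies verbatim to the mirror statistics $M_j$ in (6) once the "+1" is dropped from the numerator.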
In contrast, p-value-based multiple testing approaches attempt to fully incorporate data evidence in assessing the strength of evidence against the null hypotheses. Given the same model and data setup, p-value-based methods, when appropriately designed, can be more powerful while still maintaining control of the FDR at a given level $\alpha$. As an example, Sarkar and Tang (2022) propose a multiple testing procedure for settings with paired p-values for each null hypothesis. Their approach utilizes the properties of these paired p-values and follows a two-step procedure: first performing an initial selection using a Bonferroni-type method, and then applying the BH procedure to determine the final rejections. They demonstrate that the augmented design matrix containing the knockoff variables of Barber and Candès (2015), in the context of linear models, supports the development of paired p-values, and that the resulting multiple testing procedure controls the FDR at the desired level. Empirically, Sarkar and Tang (2022) show that the p-value-based multiple testing procedure tends to be more powerful in challenging situations, such as when the sample size is small, the signal is weak, and the target FDR level is low.

When handling high-dimensional multiple testing problems – e.g., when $d \gg n$ – substantial new challenges arise. For knockoff-based methods, the primary
difficulty lies in constructing appropriate knockoff variables. In the high-dimensional linear model setting, Candès et al. (2018) propose the model-X knockoff framework. Using model-X knockoff variables, an augmented design matrix can be constructed similarly to that in Barber and Candès (2015). To estimate the high-dimensional linear model with the resulting $n\times 2d$ augmented design matrix, penalized regression methods are applicable; see Bühlmann and van de Geer (2011) and Fan et al. (2020); a representative example is the Lasso estimator. To construct the statistics $\{W_j\}_{j=1}^d$, one strategy is to use the solution path of the Lasso estimator for high-dimensional linear models. Specifically, let
$$\hat\gamma = \arg\min_{\gamma\in\mathbb{R}^{2d}} \Big\{ \frac{1}{2n}|y-(X,\tilde X)\gamma|_2^2 + \varrho_1|\gamma|_1 \Big\}, \quad (7)$$
where $\varrho_1$ is a tuning parameter that controls the penalty on the $\ell_1$-norm of $\gamma$. The Lasso estimator is convenient in practice due to the convex nature of the objective function in (7), as both the squared loss and the $\ell_1$-penalty are convex. As $\varrho_1$ decreases, $\hat\gamma$ includes selected variables in a sequential manner. Let $\{\varrho_{1,j}\}_{j=1}^{2d}$ be the sequence of tuning parameter values such that $\varrho_{1,j}$ is the value at which the $j$-th component of the parameter first enters the solution path. Then the FDP can be estimated via (5) by letting $W_j = \varrho_{1,j} - \varrho_{1,j+d}$.

In the Gaussian Mirror approach, Xing et al. (2021) develop a two-stage procedure. In the first stage, a variable selection step is applied to obtain a low-dimensional estimated model, effectively reducing the dimensionality from $d$ to $s$, where $s \ll d$. In the second stage, mirror statistics $\{M_j\}_{j=1}^s$ are constructed based on the reduced model, and multiple testing is performed using the FDP estimate defined in (6).

In the context of high-dimensional multiple testing, p-value-based methods face a fundamental difficulty: how to obtain reliable p-values associated with the test statistics for the high-dimensional model parameters.
Penalized regression methods with sparsity-inducing penalty functions are effective for estimation; however, the joint distribution of the resulting sparse parameter estimates is difficult to characterize; see Fan et al. (2020) and references therein. Without appropriate p-values, approaches such as that of Sarkar and Tang (2022) are not applicable for solving high-dimensional multiple testing problems.

2.2 Multiple testing with high-dimensional p-values

The primary objective of our study is to develop an effective p-value-based approach for high-dimensional multiple testing problems, based on the working model (3) that incorporates the regenerated model matrix $\tilde X$. Our methodology is designed to accommodate the dependence among test statistics arising in such settings. In this study, we focus on the specific scenario with the model-X knockoffs elaborated extensively in Candès et al. (2018). In particular, in the context of the linear model (1) with a random design, the model-X knockoffs for a generic $d$-dimensional random vector $x=(x_1,\dots,x_d)^\top$ are a new family of random variables $\tilde x=(\tilde x_1,\dots,\tilde x_d)^\top$ constructed with the following two properties:

(a) $(x^\top,\tilde x^\top)^\top_{\mathrm{swap}(S)} \overset{d}{=} (x^\top,\tilde x^\top)^\top$ for any $S\subseteq[d]$, where $\overset{d}{=}$ denotes equality in distribution, and $(x^\top,\tilde x^\top)^\top_{\mathrm{swap}(S)}$ is obtained from $(x^\top,\tilde x^\top)^\top$ by swapping the entries $x_j$ and $\tilde x_j$ for each $j\in S$.

(b) $\tilde x \perp\!\!\perp y \mid x$.

Let $\Sigma = \mathrm{Cov}(x)$. Then the properties of the model-X knockoff variables imply that the
covariance matrix of $(x^\top,\tilde x^\top)^\top$ is structured as
$$\Gamma = \mathrm{Cov}\{(x^\top,\tilde x^\top)^\top\} = \begin{pmatrix} \Sigma & \Sigma - D \\ \Sigma - D & \Sigma \end{pmatrix}, \quad (8)$$
where $D=\mathrm{diag}(s)$, and $s$ is a hyperparameter chosen to ensure that the covariance matrix $\Gamma$ is positive semidefinite. We refer to Candès et al. (2018) for a comprehensive discussion of the model-X knockoff methodology and its constructions within high-dimensional linear models for practical implementations.

To address the high-dimensional estimation problem under model (3), the Lasso estimator (7) offers a convenient solution. However, conducting statistical inference for high-dimensional model parameters remains challenging; see Hastie et al. (2015), Fan et al. (2020), and references therein. In the context of multiple testing, a central challenge stems from the unknown distribution of the Lasso estimator, which involves intricate and high-dimensional dependence among its components. To overcome these challenges, we propose utilizing a debiased estimator (Zhang and Zhang, 2013; van de Geer et al., 2014) as the foundational element for constructing suitable test statistics and corresponding p-values. In brief, debiasing procedures begin with a sparse initial estimator, such as the Lasso estimator (7), and subsequently apply an additive correction to render the limiting joint distribution of the bias-corrected estimators tractable; see Ning and Liu (2017), Chang et al. (2020), and Chang et al. (2023) for comprehensive expositions in a broad context. Upon consistently estimating the limiting variances and covariances of the debiased estimator, one can construct the test statistics and associated p-values, thereby enabling valid statistical inference for high-dimensional model parameters. In our setting with knockoff variables, let $Z=(X,\tilde X)$, and define the debiased estimator as
$$\hat\gamma^{(bc)} = \hat\gamma + n^{-1}\hat\Theta^\top Z^\top (y - Z\hat\gamma). \quad (9)$$
Here, $\hat\gamma$ denotes the Lasso estimator defined in (7), and $\hat\Theta$ is the so-called "decorrelating" matrix associated with $\hat\Gamma = n^{-1}Z^\top Z$, satisfying $|\hat\Theta^\top\hat\Gamma - I_{2d}|_\infty \le \varrho_2$ for some hyperparameter $\varrho_2$. A rigorous specification of $\hat\Theta$ is provided in Section 2.3. Upon noting that the true parameter value in the working model (3) is $\gamma_0$, we obtain
$$n^{1/2}\{\hat\gamma^{(bc)} - \gamma_0\} = n^{-1/2}\hat\Theta^\top Z^\top\varepsilon + n^{1/2}(\hat\Theta^\top\hat\Gamma - I_{2d})(\gamma_0 - \hat\gamma). \quad (10)$$
The properties of $\hat\gamma^{(bc)}$, which incorporate the structure of the knockoff variables, are established in Section 3, forming the theoretical basis for the validity of our proposed multiple testing procedure. These technical developments are of independent interest, as they characterize the behavior of debiased estimators in the presence of knockoff variables, and are expected to be broadly useful for statistical inference in such settings.

Based on $\hat\gamma^{(bc)}$ given in (9), we propose constructing a transformed debiased estimator
$$T\hat\gamma^{(bc)} = \begin{pmatrix} \hat\beta^{(bc)}_1 \\ \hat\beta^{(bc)}_2 \end{pmatrix} \quad \text{with} \quad T = \begin{pmatrix} I_d & I_d \\ I_d & -I_d \end{pmatrix}. \quad (11)$$
Since the true value of $\gamma$ is $\gamma_0=(\beta_0^\top,0^\top)^\top$, both $\hat\beta^{(bc)}_1$ and $\hat\beta^{(bc)}_2 \in \mathbb{R}^d$ are valid estimators of $\beta_0$, provided that $\hat\gamma^{(bc)}$ is consistent. Write $\Theta_0 = \Gamma^{-1}$. By the properties of the model-X knockoffs in (8), we have
$$T\Theta_0 T^\top = \begin{pmatrix} 2(2\Sigma - D)^{-1} & 0 \\ 0 & 2D^{-1} \end{pmatrix}.$$
As elaborated in Section 3.2, $\Theta_0$ determines the limiting variance-covariance matrix of $\hat\gamma^{(bc)}$, which also serves as the probability limit of the high-dimensional estimator $\hat\Theta$ under appropriate conditions. Intuitively, since $T\Theta_0 T^\top$ is block-diagonal, $\hat\beta^{(bc)}_1$ and $\hat\beta^{(bc)}_2$ are asymptotically independent. Additionally, because $D$ is a diagonal matrix, the components of $\hat\beta^{(bc)}_2$ are asymptotically mutually independent. These intuitions will be rigorously confirmed by the theoretical results in Section 3. Write $\hat\Lambda = T\hat\Theta^\top\hat\Gamma\hat\Theta T^\top$, whose diagonal components are denoted
by $\{\hat\Lambda_{j,j}\}_{j=1}^{2d}$. Let $\hat\sigma$ be an estimator of the standard deviation $\sigma$ of the model error $\varepsilon_i$, which will be specified in Section 2.3. Write $\hat\beta^{(bc)}_1 = (\hat\beta^{(bc)}_{1,1},\dots,\hat\beta^{(bc)}_{1,d})^\top$ and $\hat\beta^{(bc)}_2 = (\hat\beta^{(bc)}_{2,1},\dots,\hat\beta^{(bc)}_{2,d})^\top$. Based on (10) and (11), for $j\in[d]$, we define a set of paired test statistics $(t_{1,j}, t_{2,j})$ as follows:
$$t_{1,j} := \frac{n^{1/2}\hat\beta^{(bc)}_{1,j}}{\hat\sigma\hat\Lambda^{1/2}_{j,j}} \quad \text{and} \quad t_{2,j} := \frac{n^{1/2}\hat\beta^{(bc)}_{2,j}}{\hat\sigma\hat\Lambda^{1/2}_{j+d,j+d}}. \quad (12)$$
As rigorously established in our main theoretical analysis in Section 3, the two sets of test statistics, $\{t_{1,j}\}_{j=1}^d$ and $\{t_{2,j}\}_{j=1}^d$, are asymptotically independent of each other, and each set converges jointly in distribution to a multivariate normal distribution. As a result of our rigorous theoretical development, the p-values associated with each $t_{1,j}$ and $t_{2,j}$ can be calculated with guaranteed validity, enabling a key component for developing a practically useful multiple testing method.

The two sets of test statistics provide opportunities for developing high-dimensional multiple testing methodologies. In particular, we note that, since $D$ is diagonal, the statistics $\{t_{2,j}\}_{j=1}^d$ are asymptotically mutually independent. Consequently, the conventional BH procedure, as described in Algorithm 1, is applicable for solving (2). More importantly, the paired nature of the two sets of test statistics provides an opportunity to combine their strengths to achieve improved power. This paired p-value-based approach is detailed in Algorithm 2. Remarkably, although $t_{1,1},\dots,t_{1,d}$ are not independent, their asymptotic independence from $\{t_{2,j}\}_{j=1}^d$ can still be leveraged to develop a potentially more powerful multiple testing procedure. Concretely, Algorithm 2 is a two-step procedure: the first step performs an initial selection using the Bonferroni method, and the second step makes the final decision using the BH method with a set of adjusted p-values derived from the first step.
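Given the test statistics in (12), the two selection rules just described reduce to a few lines. A minimal pure-Python sketch (our own illustration, not the authors' implementation; the function names are ours):

```python
import math

def normal_two_sided_pvalue(t):
    """G(t) = 2{1 - Phi(|t|)} for the standard normal CDF Phi."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))

def bh_reject(pvals, level):
    """Benjamini-Hochberg: reject the R smallest p-values, where
    R = max{i : P_(i) <= i * level / d} (R = 0 if no such i exists)."""
    d = len(pvals)
    order = sorted(range(d), key=lambda j: pvals[j])
    R = 0
    for i, j in enumerate(order, start=1):
        if pvals[j] <= i * level / d:
            R = i
    return set(order[:R])

def bbh_reject(t1, t2, alpha):
    """Two-step Bonferroni-BH with paired statistics (t_{1,j}, t_{2,j}):
    screen with P^(1)_j <= sqrt(alpha), set the surviving hypotheses'
    adjusted p-values to P^(2)_j (others to 1), then run BH at level
    sqrt(alpha), i.e. with threshold i * sqrt(alpha) / d."""
    root = math.sqrt(alpha)
    p1 = [normal_two_sided_pvalue(t) for t in t1]
    p2 = [normal_two_sided_pvalue(t) for t in t2]
    adjusted = [p2[j] if p1[j] <= root else 1.0 for j in range(len(t1))]
    return bh_reject(adjusted, root)
```

Calling `bh_reject([normal_two_sided_pvalue(t) for t in t2], alpha)` gives the first procedure on $\{t_{2,j}\}$ alone, while `bbh_reject` combines both sets of paired statistics.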
Accordingly, we refer to the procedure in Algorithm 2 as the Bonferroni–Benjamini–Hochberg method. The theoretical guarantees of FDR control for both Algorithms 1 and 2 are established as part of our main results in Section 3.

Algorithm 1 Benjamini–Hochberg method with test statistics $\{t_{2,j}\}_{j=1}^d$

Step 1. Let $\tilde P_j = P^{(2)}_j = G(|t_{2,j}|)$ for each $j\in[d]$, where $G(t) = 2\{1-\Phi(t)\}$ and $\Phi(t)$ is the cumulative distribution function of the standard normal distribution.

Step 2. Given $\alpha\in(0,1)$, let $\tilde P_{(1)} \le \cdots \le \tilde P_{(d)}$ be the ordered versions of the $\tilde P_j$'s, and find
$$\tilde R = \max\Big\{ i\in[d] : \tilde P_{(i)} \le \frac{i\alpha}{d} \Big\},$$
provided that the maximum exists; otherwise, let $\tilde R = 0$.

Step 3. Reject the null hypotheses corresponding to $\tilde P_{(j)}$ with $j \le \tilde R$.

Algorithm 2 Bonferroni–Benjamini–Hochberg method with paired test statistics $\{(t_{1,j}, t_{2,j})\}_{j=1}^d$

Step 1. Given $0<\sqrt{\alpha}<1$, for each $j\in[d]$, let
$$\tilde P_j = \begin{cases} 1, & \text{if } P^{(1)}_j > \sqrt{\alpha}, \\ P^{(2)}_j, & \text{if } P^{(1)}_j \le \sqrt{\alpha}, \end{cases}$$
where $P^{(1)}_j = G(|t_{1,j}|)$ and $P^{(2)}_j = G(|t_{2,j}|)$.

Step 2. Let $\tilde P_{(1)} \le \cdots \le \tilde P_{(d)}$ be the ordered versions of the $\tilde P_j$'s. Find
$$\tilde R = \max\Big\{ i\in[d] : \tilde P_{(i)} \le \frac{i\sqrt{\alpha}}{d} \Big\},$$
provided that the maximum exists; otherwise, let $\tilde R = 0$.

Step 3. Reject the null hypotheses corresponding to $\tilde P_{(j)}$ with $j \le \tilde R$.

Here, Algorithm 2 involves two pre-specified significance levels: one for the Bonferroni step and the other for the BH step. For simplicity in our presentation, we set both levels to $\sqrt{\alpha}$ as a
recommended balanced choice. While it is feasible within our framework to develop adaptive multiple testing procedures by incorporating an estimate of the proportion of true null hypotheses – e.g., following the approach of Storey (2002) – we omit this extension to maintain focus on our primary objective. Moreover, in high-dimensional multiple testing, the proportion of true nulls is typically high under reasonable scenarios, rendering such adjustments less critical in practice.

2.3 Selection of $(\hat\Theta,\hat\sigma)$

We conclude this section with specific choices for estimating the intermediate parameters $\hat\Theta$ and $\hat\sigma$ involved in the multiple testing procedures elaborated in Section 2.2. To ensure the effectiveness of the debiasing procedure, a suitable estimator $\hat\Theta$ is required in (9). In this study, we adopt the CLIME approach proposed by Cai et al. (2011). Specifically, the estimator $\hat\Theta\in\mathbb{R}^{2d\times 2d}$ is obtained by solving the following optimization problem:
$$\min |\Theta|_1 \quad \text{subject to} \quad |\hat\Gamma\Theta - I_{2d}|_\infty \le \varrho_2, \quad (13)$$
where $\varrho_2>0$ is a tuning parameter. In our implementation, $\varrho_2$ is selected via 5-fold cross-validation. For the estimator $\hat\sigma$ of the standard deviation of the model error involved in (12), we employ the scaled Lasso estimator (Sun and Zhang, 2012). This estimator is obtained by solving the following optimization problem:
$$\min_{\gamma\in\mathbb{R}^{2d},\, \sigma>0} \Big\{ \frac{1}{2\sigma n}|y-Z\gamma|_2^2 + \frac{\sigma}{2} + \varrho_3|\gamma|_1 \Big\}, \quad (14)$$
where $\varrho_3>0$ is a tuning parameter. For the choice of $\varrho_3$, we follow the quantile-based penalty level introduced and analyzed by Sun and Zhang (2013).

In summary, our work presents a novel multiple testing framework tailored to high-dimensional linear models. The core of our framework lies in a debiased Lasso methodology applied to an augmented design matrix $(X,\tilde X)$ that incorporates knockoff variables, effectively addressing key inferential challenges in high-dimensional settings.
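The scaled-Lasso objective (14) is biconvex: for fixed $\sigma$ the $\gamma$-step is an ordinary Lasso with penalty $\sigma\varrho_3$, and for fixed $\gamma$ the optimal scale is $\sigma = |y-Z\gamma|_2/\sqrt{n}$. A minimal sketch of this alternating minimization, with a plain coordinate-descent Lasso as the inner solver (our own illustrative code, not the authors' implementation; function names, iteration counts, and initialization are our choices):

```python
import numpy as np

def lasso_cd(Z, y, lam, n_iter=200):
    """Coordinate descent for (1/(2n))|y - Z g|_2^2 + lam * |g|_1."""
    n, p = Z.shape
    g = np.zeros(p)
    col_sq = (Z ** 2).sum(axis=0) / n          # per-coordinate curvature
    r = y.copy()                               # running residual y - Z g
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0:
                continue
            rho = Z[:, j] @ r / n + col_sq[j] * g[j]
            new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += Z[:, j] * (g[j] - new)        # keep residual in sync
            g[j] = new
    return g

def scaled_lasso(Z, y, rho3, n_outer=20):
    """Alternating minimization of the scaled-Lasso objective (14):
    gamma-step = Lasso with penalty sigma * rho3; sigma-step = |r|_2 / sqrt(n)."""
    n = Z.shape[0]
    sigma = np.linalg.norm(y) / np.sqrt(n)     # crude initial scale
    for _ in range(n_outer):
        g = lasso_cd(Z, y, sigma * rho3)
        sigma = max(np.linalg.norm(y - Z @ g) / np.sqrt(n), 1e-12)
    return g, sigma
```

The $\sigma$-update follows from setting the derivative of (14) in $\sigma$ to zero, which gives $\sigma^2 = |y-Z\gamma|_2^2/n$; the fixed point of the alternation is a stationary point of the joint objective.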
A major innovation lies in the analytical properties of the transformed estimators, which enable the construction of a novel set of paired test statistics and the evaluation of the associated p-values. Our methodology, as elaborated in Algorithms 1 and 2, facilitates not only the application of the conventional BH procedure but also a novel two-step Bonferroni–Benjamini–Hochberg procedure. Our theoretical results establish that both approaches guarantee control of the FDR, even under unknown dependence structures among the high-dimensional test statistics.

3 Theory

3.1 Overview

Our investigation not only advances existing methodologies in practice, but also provides a theoretically sound solution to multiple testing in complex high-dimensional settings. The first objective of our comprehensive theoretical analysis is to establish that the transformed bias-corrected estimators defined in (11), along with their associated p-values, possess the necessary asymptotic properties. Our main theoretical results confirm that the resulting high-dimensional test statistics in (12) exhibit the required distributional behavior. As a second objective, we rigorously demonstrate that the proposed multiple testing procedures – Algorithms 1 and 2 – control the FDR at the nominal level.

Analyzing the theoretical properties of our proposed p-value-based multiple testing methods presents several significant challenges due to the intricate nature of high-dimensional models with knockoff variables. First, the linear model with the augmented matrix $(X,\tilde X)$ requires dedicated analysis of the Lasso estimator (7), the debiased estimator (9), and the transformation (11). The inclusion of knockoff variables
introduces complex dependence structures, making parameter estimation more challenging and requiring careful theoretical treatment to ensure proper control of these dependencies. Further challenges arise from analyzing and ensuring the properties of the intermediate parameter estimates, including the high-dimensional precision matrix $\hat\Theta$ obtained via the CLIME estimator (13), and the standard deviation estimate $\hat\sigma$ obtained via the scaled Lasso estimator (14). We present the relevant technical conditions.

Condition 1. The model error $\varepsilon=(\varepsilon_1,\dots,\varepsilon_n)^\top \sim N(0,\sigma^2 I_n)$ and is independent of $Z$.

Condition 2. The covariance matrix $\Gamma$ satisfies $C_{\min} \le \sigma_{\min}(\Gamma) \le \sigma_{\max}(\Gamma) \le C_{\max}$ for some constants $C_{\min}\in(0,\infty)$ and $C_{\max}\in[1,\infty)$, and $Z\Gamma^{-1/2}$ has independent sub-Gaussian rows with zero mean and sub-Gaussian norm $\|\Gamma^{-1/2}z\|_{\psi_2}=\kappa$ for some constant $\kappa\in(0,\infty)$, where $z=(x^\top,\tilde x^\top)^\top$.

Condition 3. Let
$$\mathcal{U} := \mathcal{U}(M,q,s_d) = \Big\{ \Theta=(\theta_{i,j})_{2d\times 2d} : \Theta \succ 0,\ \|\Theta\|_1 \le M,\ \max_{i\in[2d]} \sum_{j=1}^{2d} |\theta_{i,j}|^q \le s_d \Big\}$$
for some constants $q\in[0,1)$ and $M\in(0,\infty)$. Assume that the precision matrix $\Theta_0=\Gamma^{-1}\in\mathcal{U}(M,q,s_d)$, and that the estimator $\hat\Theta$ is obtained from (13) with tuning parameter $\varrho_2 \ge a\{n^{-1}\log(2d)\}^{1/2}$, where $a=\max\{C_0M,\ 4e\kappa^2\sqrt{6(\tau+2)C_{\max}C_{\min}^{-1}}\}$ with $\tau>0$ and $C_0=\eta^{-1}(2+\tau+\eta^{-1}K_0^2)$, $\eta=\min\{1/8,(4C_{\max}\kappa^2 e)^{-1}\}$, and $K_0=(1-2C_{\max}\kappa^2 e\eta)^{-1}$.

Condition 4. There exists $\mathcal{H}=\mathcal{H}_n\subseteq[d]$ such that $|\mathcal{H}| \ge \log\log d$ and $|\beta^0_j| \ge 2\sqrt{2}\,C_{\min}^{-1/2}\sigma(n^{-1}\log d)^{1/2}$ for all $j\in\mathcal{H}$, where $C_{\min}$ is specified in Condition 2.

Condition 1 is standard for analyzing linear models and has been widely assumed in many existing studies; see, for example, van de Geer et al. (2014). The independence between $\varepsilon$ and $\tilde X$ follows directly from property (b) of the model-X knockoffs. Condition 2 imposes restrictions on the maximum and minimum singular values of $\Gamma$ and characterizes the tail behavior of $\Gamma^{-1/2}z$. This condition is essential for establishing that, as $n\to\infty$, with high probability, the compatibility constant and the generalized coherence parameter associated with the random design matrix $Z$ remain bounded.
These concepts play a crucial role in the theoretical analysis of high-dimensional Lasso estimators; see Hastie et al. (2015) and references therein. The conditions on $\Gamma$, $Z$, and $z$ in Condition 2 are equivalent to imposing constraints on the singular values of $\Sigma$ and $D$, as well as the sub-Gaussian behavior of $x$. More detailed discussions of these equivalences can be found in Remarks 1 and 2 below. Condition 3 pertains to the analysis of the CLIME estimator (13), following the same principles as in Cai et al. (2011). It assumes that the true precision matrix $\Theta_0$ belongs to a class of sparse matrices and requires the tuning parameter $\varrho_2$ to be properly scaled to ensure that the estimator converges to the true matrix at a desirable rate under the elementwise $\ell_\infty$-norm. Finally, Condition 4 concerns the minimal signal strength. It also imposes constraints on the number of false null hypotheses, ensuring it is not too small – similar to requirement (2) in Theorem 2.1 of Liu and Shao (2014).

Remark 1. Condition 2 for $\Gamma$ is equivalent to imposing conditions on the singular values of $\Sigma$ and $D$, respectively. Notice that $D=\mathrm{diag}(s)$ with $s\in\mathbb{R}^d$, and $\Gamma$ defined in (8) is similar to $\mathrm{diag}(2\Sigma-D,\ D)$. On one hand, by the Weyl inequality and Condition 2, we have $C_{\min} \le \sigma_{\min}(D) \le 2\sigma_{\min}(\Sigma) - C_{\min}$, $2\sigma_{\max}(\Sigma) - C_{\max} \le \sigma_{\max}(D) \le C_{\max}$, and $C_{\min} \le \sigma_{\min}(\Sigma) \le \sigma_{\max}(\Sigma) \le C_{\max}$. On the other hand, if $\tilde C_{\min} \le \sigma_{\min}(\Sigma) \le \sigma_{\max}(\Sigma) \le \tilde C_{\max}$ and $C_{\min} \le \sigma_{\min}(D) \le \sigma_{\max}(D) \le C_{\max} < 2\tilde C_{\min}$ for some positive constants $\tilde C_{\min}$ and $\tilde C_{\max}$, then $\sigma_{\min}(\Gamma) \ge \min(2\tilde C_{\min} - C_{\max},\ C_{\min}) > 0$ and $\sigma_{\max}(\Gamma) \le \max(2\tilde C_{\max} - C_{\min},\ C_{\max})$.

Remark 2. If $X$ has independent
sub-Gaussian rows, with zero mean and sub-Gaussian norm $\|x\|_{\psi_2}=\kappa_1$, then the model-X knockoff matrix $\tilde X$ also has independent sub-Gaussian rows, with zero mean and sub-Gaussian norm $\|\tilde x\|_{\psi_2}=\kappa_1$. Furthermore, the augmented covariate $Z$ has independent sub-Gaussian rows, with zero mean and sub-Gaussian norm $\kappa_1 \le \|z\|_{\psi_2} \le 2\kappa_1$, and satisfies $\|z\|_{\psi_2}\sigma^{-1}_{\max}(\Gamma^{1/2}) \le \|\Gamma^{-1/2}z\|_{\psi_2} \le \|z\|_{\psi_2}\sigma_{\max}(\Gamma^{-1/2})$, where $z=(x^\top,\tilde x^\top)^\top$.

3.2 Main results

Recall that $Z=(X,\tilde X)\in\mathbb{R}^{n\times 2d}$ is the augmented design matrix, and $\hat\gamma^{(bc)}$ in (9) is the debiased estimator with the estimator $\hat\Theta\in\mathbb{R}^{2d\times 2d}$ in (13). By letting $w_n = n^{-1/2}\hat\Theta^\top Z^\top\varepsilon$, we have
$$n^{1/2}\{\hat\gamma^{(bc)} - \gamma_0\} = w_n + \delta_n, \quad (15)$$
where $w_n \mid Z \sim N(0, \sigma^2\hat\Theta^\top\hat\Gamma\hat\Theta)$ and $\delta_n = n^{1/2}(\hat\Theta^\top\hat\Gamma - I_{2d})(\gamma_0 - \hat\gamma)$. The properties of $\hat\gamma^{(bc)}$ are given in Theorem 1.

Theorem 1. Let Conditions 1–3 hold and $|\gamma_0|_0 \le s_0$ for some integer $1 \le s_0 < 2d$. For any given $\tau>0$ specified in Condition 3, let $\hat\gamma$ be the Lasso estimator given in (7) with $\varrho_1$ satisfying $\varrho_1 \ge 4\sigma\{3C_{\max}(\tau+1)n^{-1}\log(2d)\}^{1/2}$. Write $v_0 = 240000\,c_*C_{\max}C_{\min}^{-1}\kappa^4$, $c_1=(4c_*\kappa^4)^{-1}$, and $c_2 = a^2C_{\min}/(96e^2\kappa^4C_{\max}) - 2$ with $c_* = 32000e^2$. The following two assertions hold:

(i) If $n \ge \max\{v_0 s_0\log(240eds_0^{-1}),\ 5000(\tau+1)\kappa^4\log(2d),\ 12.5(\tau+1)\log(2d),\ 6(c_2+2)\log(2d)\}$, then
$$P\Big( |\delta_n|_\infty > \frac{16\varrho_1\varrho_2 s_0 n^{1/2}}{C_{\min}} \Big) \le 2e^{-c_1 n} + 12d^{-\tau}.$$

(ii) If $n \ge 6(c_2+2)\log(2d)$, then $P(|\hat\Theta^\top\hat\Gamma\hat\Theta - \Theta_0|_\infty \le 5M\varrho_2) \ge 1 - 10d^{-\tau}$.

Theorem 1 ensures that $\delta_n$ in (15) is uniformly bounded from above with high probability. It guarantees that the estimation error $\hat\gamma^{(bc)} - \gamma_0$ is dominated by the zero-mean normally distributed term $n^{-1/2}w_n$, and that the bias $n^{-1/2}\delta_n$ is uniformly small. Let $\mathcal{H}_0=\{j\in[d]: \beta^0_j=0\}$, and define
$$\mathrm{FDP} = \frac{\sum_{j\in\mathcal{H}_0} I\{\tilde P_j \le \tilde P_{(\tilde R)}\}}{\tilde R \vee 1}, \qquad \mathrm{FDR} = E(\mathrm{FDP}).$$
Based on the properties of $\hat\gamma^{(bc)}$ given in (9) and $\hat\sigma$ defined in (14), we establish the following theorems, which guarantee control of the FDR under our proposed procedures.

Theorem 2. Let Conditions 1–4 hold. Assume that $\tau>2$ and $|\gamma_0|_0 \le s_0 < 2d$ with $s_0 \ll n^{1/2}(\log d)^{-3/2}$. Consider a suitable choice of the tuning parameters $\varrho_1 \asymp \varrho_2 \asymp \varrho_3 \asymp \{n^{-1}\log(2d)\}^{1/2}$ in (7), (13), and (14), respectively.
If $\log d \ll n^{1/(5+2\vartheta)}$ for some $\vartheta>0$, then Algorithm 1 controls the FDR at $\pi_0\alpha$; that is,
$$\lim_{n\to\infty} P(\mathrm{FDP} \le \pi_0\alpha + \epsilon) = 1 \ \text{ for any } \epsilon>0, \quad \text{and} \quad \limsup_{n\to\infty} \mathrm{FDR} \le \pi_0\alpha,$$
where $\pi_0 = d_0/d$, and $d_0$ is the number of true null hypotheses.

Theorem 3. Let the conditions of Theorem 2 hold. If $s_d^2 \ll d$ and $s_0 \lesssim n^{1/2}(\log d)^{-5/2-2\vartheta}$ for the $\vartheta$ specified in Theorem 2, then Algorithm 2 controls the FDR at $\pi_0\alpha$; that is,
$$\lim_{n\to\infty} P(\mathrm{FDP} \le \pi_0\alpha + \epsilon) = 1 \ \text{ for any } \epsilon>0, \quad \text{and} \quad \limsup_{n\to\infty} \mathrm{FDR} \le \pi_0\alpha,$$
where $\pi_0 = d_0/d$, and $d_0$ is the number of true null hypotheses.

4 Simulations

4.1 Setting

In the simulations, data are generated from the linear model (1), with model errors drawn independently from a standard normal distribution. The design matrix $X$ varies across three settings, reflecting different dependence structures between variables:

• Setting 1: The rows of $X$ are generated from a multivariate normal distribution $N(0, I_d)$.

• Setting 2: The rows of $X$ are drawn from a multivariate normal distribution with an AR(1) dependence structure and correlation coefficient 0.4. This setting has been explored in the literature; see Barber and Candès (2015), Candès et al. (2018), and Sarkar and Tang (2022).

• Setting 3: The rows of $X$ are generated from a multivariate normal distribution with block dependence. Specifically, samples are drawn from $N(0,\Sigma)$, where $\Sigma=(\Sigma_{i,j})_{d\times d}$ with $\Sigma_{i,i}=1$ and $\Sigma_{i,j}=0.2\cdot I(\lceil i/20\rceil = \lceil j/20\rceil)$ for $i\ne j$. This block structure has also been studied in prior work; see Fithian and Lei (2022).

These designs allow us to investigate how different correlation structures in $X$ affect the performance of the
statistical methods under consideration. The matrix containing the knockoff variables is constructed following the second-order model-X knockoff procedure of Candès et al. (2018). We explore combinations of $n\in\{200,500\}$ and $d\in\{n,\ 1.5n,\ 2n\}$. For each setting, we randomly assign $k$ components of $\beta_0$ to be nonzero, with the remaining $d-k$ components set to zero. The nonzero coefficients are assigned a common amplitude, which is varied over the interval $[0.1, 0.5]$ using equally spaced values. The simulations consider FDR control levels $\alpha\in\{0.05, 0.1, 0.2\}$. Each simulation setting is repeated 300 times to ensure reliable performance estimates. In the simulations, we compare the proposed methods in Algorithms 1 and 2 with the FDR-controlling knockoff procedure of Candès et al. (2018) and the Gaussian Mirror method of Xing et al. (2021).

4.2 Summary of findings

4.2.1 FDR control

Our results demonstrate that both proposed methods, as detailed in Algorithms 1 and 2, consistently control the FDR at the specified levels. In the absence of signal (i.e., when all null hypotheses are true), for all pre-specified FDR levels $\alpha = 0.05, 0.1, 0.2$, the empirical FDRs achieved by our methods closely match the nominal levels. Figure 1 shows the empirical FDR across simulation replications at $\alpha=0.1$. Additional results for other FDR levels, with all true null hypotheses, are provided in Section A.1 of the Supplementary Material; see Figures S1 and S2 for $\alpha=0.05$ and $0.2$, respectively. These findings confirm the theoretical guarantees established in Section 3 and provide strong empirical support for the validity of our procedures. In contrast, the FDP-based knockoff approach using (5) tends to be overly conservative, often yielding no discoveries. This behavior is expected, and possible reasons for it are discussed in Sections 1 and 2.
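The population covariance matrices underlying Settings 1–3 described above can be constructed as follows (a small illustrative sketch, not the authors' simulation code; the function names are ours):

```python
import numpy as np

def make_sigma(d, setting):
    """Population covariance for the three simulation settings:
    1: identity; 2: AR(1) with coefficient 0.4, Sigma_ij = 0.4^|i-j|;
    3: blocks of 20 variables with within-block correlation 0.2."""
    if setting == 1:
        return np.eye(d)
    if setting == 2:
        idx = np.arange(d)
        return 0.4 ** np.abs(idx[:, None] - idx[None, :])
    if setting == 3:
        block = np.arange(d) // 20               # block id of each variable
        sigma = 0.2 * (block[:, None] == block[None, :]).astype(float)
        np.fill_diagonal(sigma, 1.0)
        return sigma
    raise ValueError("setting must be 1, 2, or 3")

def draw_design(n, d, setting, rng):
    """Draw n rows from N(0, Sigma) via a Cholesky factor of Sigma."""
    L = np.linalg.cholesky(make_sigma(d, setting))
    return rng.standard_normal((n, d)) @ L.T
```

For example, `draw_design(200, 300, 2, np.random.default_rng(0))` produces a design matrix for one replication of Setting 2 with $n=200$ and $d=1.5n$.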
A notable observation is that the Gaussian Mirror method, when using (6) to estimate the empirical FDP, tends to inflate the empirical FDR when all null hypotheses are true. For instance, in the absence of signal, the empirical FDR of the Gaussian Mirror method often exceeds the nominal level by a substantial margin. As its FDR frequently surpasses 0.25 at α = 0.1, we have omitted these results from Figure 1 for clarity. A likely explanation lies in the behavior of the empirical FDP estimator (6); see also the related discussion in Barber and Candès (2015) regarding the estimator (4). To investigate this issue, we applied the adjustment defined in (5), in the spirit of the approach proposed by Barber and Candès (2015), to estimate the empirical FDP for the Gaussian Mirror method. This adjustment allows the Gaussian Mirror method to control the FDR, albeit conservatively, similar to the knockoff procedure, but often at the cost of yielding no discoveries; see Figure 1.

4.2.2 Power comparisons

Figure 2 presents power results at α = 0.1 for selected configurations of (n, d) under Setting 2 for generating the design matrix. Additional power results are provided in Section A.2 of the Supplementary Material; see Figures S3 and S4 for α = 0.05 and 0.2, respectively. With respect to power, our methods outperform the model-X knockoff approach in settings characterized by lower
pre-specified FDR levels, weaker signals, and smaller sample sizes, demonstrating their effectiveness under more challenging conditions. A likely explanation is the variability introduced by the ratio estimators used in the empirical FDP calculations of knockoff-based methods (Candès et al., 2018), which can substantially affect performance when signals are sparse and the total number of rejections is small. While the power of the knockoff method tends to improve with larger sample sizes and higher dimensionality, our methods remain competitive across all scenarios; see Figures S5–S7 in Section A.2 of the Supplementary Material for further details. When the empirical FDP is estimated using (6), the Gaussian Mirror method of Xing et al. (2021) tends to exhibit higher power. However, in these cases, the empirical FDR often exceeds the nominal level by a relatively large margin, consistent with the observations when all null hypotheses are true. If the FDP estimation is adjusted using the FDP+ procedure in (5), the empirical FDR is controlled at the nominal level, but the power of the Gaussian Mirror method decreases to levels comparable to those of the knockoff methods; see Figure 2 and additional results in the Supplementary Material. Notably, the Gaussian Mirror method employs a two-step procedure (variable selection in the first step and statistic evaluation in the second step). In this respect, we offer a cautionary observation: when the two-step procedure of the Gaussian Mirror method fails to correctly identify the true contributing variables in the first step, the approach becomes less competitive in terms of power. This limitation arises because once the true signals are missed during the initial variable selection step, they cannot be recovered in the second step.
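The generic two-step pattern just described (select variables first, then evaluate statistics on the selected design) can be sketched as a wrapper. Here `screen` and `test` are placeholders of our own, standing in for any first-step selector and second-step testing procedure, not the specific components of Xing et al. (2021) or of Algorithms 1–2:

```python
def two_stage(cols, y, screen, test, alpha):
    """Generic screen-then-test wrapper.

    screen(cols, y) -> indices kept after first-step variable selection;
    test(sub_cols, y, alpha) -> positions (within sub_cols) rejected in
    step two. Note that a signal dropped by `screen` can never be
    recovered by `test`, which is the limitation discussed above.
    """
    kept = screen(cols, y)
    sub = [cols[j] for j in kept]
    rejected_local = test(sub, y, alpha)
    return [kept[i] for i in rejected_local]  # map back to original indices
```

The return value maps second-step rejections back to the original variable indices, so downstream FDP bookkeeping is unaffected by the screening step.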
As demonstrated in the examples provided in Section A.3 of the Supplementary Material, the Gaussian Mirror method struggles in settings with higher correlations among variables in the design matrix; see Figures S10–S12 for details. In such cases, Lasso-based variable selection in the first step may fail, leading to relatively more frequent false exclusions of true contributing variables, particularly when the signal strength is relatively weak. In these scenarios, the numerical results confirm that our methods outperform the Gaussian Mirror method, demonstrating better power, especially under the more challenging configurations.

4.2.3 Further investigations of our methods

Although our framework is formulated as a single-stage procedure based on debiasing techniques, it can be naturally extended to a two-stage approach that first performs greedy variable screening, followed by refined inference using our proposed method. This extension may offer computational benefits and help reduce the effective dimensionality of the problem. In the same spirit as the two-stage approach employed in the Gaussian Mirror method of Xing et al. (2021), we implement a two-stage variant: we first apply variable selection using the same initial screening method, and then apply our p-value-based multiple testing procedures (Algorithms 1 and 2) to the design matrix constructed from the selected variables. The results, presented in Figure 3, highlight the competitive power of our methods and demonstrate strong potential for further enhancing their performance through this two-stage adaptation. We also conduct additional numerical studies to
demonstrate the benefits of using the debiased estimator, particularly in settings where low-dimensional approaches, such as that of Sarkar and Tang (2022), are applicable. Figure 4 presents results highlighting these advantages in cases where both our proposed methods and the method of Sarkar and Tang (2022) are applicable, specifically when n > 2d. Additional results are provided in Section A.4 of the Supplementary Material, particularly in Figures S13 and S14. While the method of Sarkar and Tang (2022) performs competitively in conventional low-dimensional linear models, it struggles in moderate-dimensional problems due to the limitations of the OLS approach. In contrast, the debiased method is crucial for generating valid p-values in high-dimensional settings. Unlike Sarkar and Tang (2022), which does not incorporate a penalized estimator or a debiasing step, our methods leverage a parsimonious model structure and exploit the properties of high-dimensional estimators. High-dimensional variable selection promotes sparsity, while the debiasing step yields more accurate variance estimation. The OLS-based method suffers from inflated variance when the number of predictors is moderately large, rendering it inapplicable when n ≤ 2d. Between our two proposed procedures, Algorithm 2 consistently demonstrates superior power compared to Algorithm 1, highlighting the advantage of combining information from paired sets of test statistics, an integration made possible through the generation of model-X knockoffs. In summary, we find from the numerical results that our proposed methods effectively control the FDR while maintaining competitive power across various simulation settings. Overall, these findings establish the efficacy and practicality of our proposed approaches in high-dimensional multiple testing scenarios.
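Our procedures are p-value based, so a concrete reference point is the classical Benjamini-Hochberg step-up (Benjamini and Hochberg, 1995), on which p-value-based FDR methods of this kind build. The sketch below is the generic BH procedure, not the exact Algorithms 1 and 2 of this paper:

```python
def bh_stepup(pvalues, alpha):
    """Benjamini-Hochberg step-up: indices rejected at FDR level alpha.

    Sort the d p-values, find the largest rank k (1-based) with
    p_(k) <= k * alpha / d, and reject the hypotheses attaining the
    k smallest p-values.
    """
    d = len(pvalues)
    order = sorted(range(d), key=lambda j: pvalues[j])
    k = 0
    for rank, j in enumerate(order, start=1):
        if pvalues[j] <= rank * alpha / d:
            k = rank
    return sorted(order[:k])
```

Under independence (and certain positive dependence) of the p-values, this step-up controls the FDR at π0α, which matches the π0α level appearing in Theorems 2 and 3.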
Figure 1: Simulated false discovery rate (FDR) when all null hypotheses are true, for Setting 1 (left column), Setting 2 (middle column), and Setting 3 (right column). The sample sizes of the top and bottom rows are n = 200 and n = 500, respectively. The FDR level is α = 0.1. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and green solid line), the knockoff-based method of Candès et al. (2018) (triangles and blue dotted line), and the Gaussian Mirror method of Xing et al. (2021) with FDP+ procedure (diamonds and purple dashed line).

Figure 2: Simulated FDR and power for the settings of n = 200 and d = 200 (left column), n = 200 and d = 300 (middle column), and n = 200 and d = 400 (right column). The rows of the design matrix were generated from Setting 2. The sparsity level is k = 0.04d and the FDR level is α = 0.1. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and yellow solid line), the knockoff-based method of Candès et al. (2018) (triangles and green dotted line), the Gaussian Mirror method of Xing et al. (2021) (diamonds and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (squares and purple dashed line).

Figure 3: Simulated FDR and power for the settings of n = 400 and d = 400 (left column), n = 600 and d = 600 (middle column), and n = 800 and d = 800 (right column). The rows of the design matrix were generated from Setting 2. The sparsity level is k = 15 and the FDR level is α = 0.1. The methods compared are Algorithm 1 (squares and green dotted line), two-stage Algorithm 1 (squares and red solid line), Algorithm 2 (circles and blue dotted line), two-stage Algorithm 2 (circles and yellow solid line), the Gaussian Mirror method of Xing et al. (2021) (triangles and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (triangles and purple two-dashed line).

Figure 4: Simulated FDR and power for the settings of n = 300 and d = 100 (left column), n = 500 and d = 200 (middle column), and n = 700 and d = 300 (right column). The rows of the design matrix were generated from Setting 2. The sparsity level is k = 0.1d and the FDR level is α = 0.1. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and green solid line), and the Bonferroni-Benjamini-Hochberg method of Sarkar and Tang (2022) (triangles and blue dotted line).

5 Real data example

We apply our methods to identify mutations in Human Immunodeficiency Virus Type 1 (HIV-1) associated with drug resistance, comparing them with competing approaches. The dataset analyzed, as described in Rhee et al. (2006), comprises HIV-1 subtype B sequences from individuals with prior antiretroviral treatment. It includes mutations at protease and reverse transcriptase (RT) positions of HIV-1 subtype B sequences, conferring resistance to Protease Inhibitors (PIs), nucleoside reverse transcriptase inhibitors (NRTIs), and non-nucleoside RT inhibitors (NNRTIs). In preprocessing the dataset, we follow the steps outlined in Barber and Candès (2015). The design matrix X ∈ {0, 1}^{n×d} is constructed such that xi,j = 1 if the i-th sample contains the j-th mutation, and xi,j = 0 otherwise. For a specific drug, the i-th entry of the response vector, yi, represents the logarithm of the increase in resistance to that drug in the i-th patient. Using linear models to capture these effects,
we apply our proposed methods to identify mutations in HIV-1 associated with resistance to Protease Inhibitors (PIs). Specifically, we focus on six of the seven drugs: APV, ATV, IDV, LPV, NFV, and RTV. As an example with comparable n and d, analyzing this real dataset provides valuable insights into the advantages of our p-value-based multiple testing methods. In particular, the methods of Barber and Candès (2015) and Sarkar and Tang (2022) remain applicable, enabling a more extensive comparison to highlight the benefits of the proposed methodology. We compare our proposed methods with the knockoff method (Barber and Candès, 2015), the Gaussian Mirror method (Xing et al., 2021), and the Bonferroni-Benjamini-Hochberg method introduced by Sarkar and Tang (2022). Although there is no definitive ground truth in the real data, we validate our discoveries by comparing them against the treatment-selected mutation (TSM) panels provided in Rhee et al. (2005). These panels identify mutations observed more frequently in virus samples from patients treated with each drug than in samples from patients never treated with that drug. Since these panels are derived independently of the dataset we analyze, they serve as a benchmark for validating the discoveries made by the respective methods. We investigate three levels of FDR control: α = 0.05, 0.1, and 0.2. Representative results are included in Figure 5, with additional figures in Section B of the Supplementary Material. Summarizing the results, our methods demonstrate consistent performance across all cases, producing a reasonable number of findings with stable “false discoveries”, defined here as those not in the TSM panels. In contrast, the knockoff method (Barber and Candès, 2015) does not yield any discoveries at α = 0.05 for six out of the seven drugs, as shown in Figure S15 of the Supplementary Material. This is consistent with our observations from the simulation studies.
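The TSM-panel validation just described amounts to partitioning each method's discoveries into those inside and outside the panel. A minimal helper (the function name and list encoding are ours) makes the bookkeeping explicit:

```python
def validate_against_panel(discoveries, tsm_panel):
    """Split discoveries into those in / not in the TSM panel.

    Discoveries outside the panel serve as the proxy 'false discoveries'
    used in the real-data comparison, since the panel is derived
    independently of the analyzed dataset.
    """
    panel = set(tsm_panel)
    in_panel = [m for m in discoveries if m in panel]
    out_panel = [m for m in discoveries if m not in panel]
    return in_panel, out_panel
```

The proxy false discovery proportion for a method is then `len(out_panel) / max(len(in_panel) + len(out_panel), 1)`.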
The Gaussian Mirror method (Xing et al., 2021), with the empirical FDP evaluated by (6), appears to identify the most discoveries across all scenarios, but its proportion of false discoveries also tends to exceed the target FDR level in most cases; see Figure 5, and Figures S15 and S16 in the Supplementary Material. This aligns with our observations in the simulations, where the Gaussian Mirror method with empirical FDP evaluated by (6) tends to have a higher level of false discoveries when the FDR level is low and/or the number of signals is small. When n > 2d, where the method of Sarkar and Tang (2022) is applicable, our proposed methods demonstrate superior performance. Specifically, at α = 0.2, as shown in Figure 5, our methods consistently identify more findings, while the method of Sarkar and Tang (2022) makes fewer discoveries or even fails to detect any. This outcome is likely due to the higher variance of the estimators used in their approach, especially when n is not sufficiently large and/or d is relatively large. Qualitatively similar observations are reported in Figures S15 and S16 in the Supplementary Material. These comparisons highlight the advantages of using Lasso-penalized regression and a debiased approach in constructing the test statistics, which effectively address the limitations of conventional OLS methods
in high-dimensional settings. In summary, our proposed p-value-based multiple testing methods demonstrate competitive performance in addressing multiple testing in practical settings. We recommend our approach particularly in cases where the target FDR level is lower or the signals from contributing variables are weaker, allowing for further investigation and comparisons.

Figure 5: Results of the real data example for α = 0.2. Panels: (a) resistance to APV (n = 767, d = 201), (b) resistance to ATV (n = 328, d = 147), (c) resistance to IDV (n = 825, d = 206), (d) resistance to LPV (n = 515, d = 184), (e) resistance to NFV (n = 842, d = 207), and (f) resistance to RTV (n = 793, d = 205). Blue represents the number of discoveries that are in the treatment-selected mutation panels list, and yellow represents the number of discoveries not in that list. The total number of HIV-1 protease positions in the treatment-selected mutation panels list is 34. The methods compared are the proposed Algorithm 1 (Algorithm1), the proposed Algorithm 2 (Algorithm2), the Gaussian Mirror method of Xing et al. (2021) (GM), the knockoff-based method of Barber and Candès (2015) (Knockoff), and the Bonferroni-Benjamini-Hochberg method of Sarkar and Tang (2022) (B-BH).

6 Discussion

In conclusion, our study proposes and analyzes innovative p-value-based methodologies for high-dimensional multiple testing that effectively control the FDR while enhancing power in challenging scenarios. The empirical results demonstrate the promising performance of our methods, particularly those utilizing p-values derived from the debiased estimator and paired test statistics from model-X knockoffs.
By combining theoretical insights with methodological applications, our work represents a significant advancement in statistical methodology and theory, providing valuable tools for practical multiple testing applications. High-dimensional multiple testing problems are inherently challenging, with numerous avenues for future development. For instance, constructing valid and useful test statistics, along with corresponding p-values, remains a difficult open problem for more general models beyond linear regression. Generalized linear models, such as Poisson and logistic regression, are particularly valuable tools. Although penalized likelihood approaches have been developed for model estimation (Fan et al., 2020), multiple testing methods for high-dimensional generalized linear models remain underexplored. The primary challenge lies in constructing valid test statistics and associated p-values, compounded by the complex dependence structures involved, which require further dedicated effort. We plan to address these challenges in future projects.

References

Barber, R. F. and Candès, E. J. (2015). Controlling the false discovery rate via knockoffs. The Annals of Statistics, 43, 2055–2085.

Benjamini, Y. (2010). Simultaneous and selective inference: current successes and future challenges. Biometrical Journal, 52, 708–721.

Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B: Statistical Methodology, 57, 289–300.

Benjamini, Y. and Yekutieli, D. (2001).
The control of the false discovery rate in multiple testing under dependency. The Annals of Statistics, 29, 1165–1188.

Bühlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, Heidelberg.

Cai, T., Liu, W., and Luo, X. (2011). A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106, 594–607.

Cai, T. T., Guo, Z., and Xia, Y. (2023). Statistical inference and large-scale multiple testing for high-dimensional regression models. TEST, 32, 1135–1171.

Cai, T. T. and Liu, W. (2016). Large-scale multiple testing of correlations. Journal of the American Statistical Association, 111, 229–240.

Candès, E., Fan, Y., Janson, L., and Lv, J. (2018). Panning for gold: ‘model-X’ knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80, 551–577.

Chang, J., Chen, S. X., Tang, C. Y., and Wu, T. T. (2020). High-dimensional empirical likelihood inference. Biometrika, 108, 127–147.

Chang, J., Shi, Z., and Zhang, J. (2023). Culling the herd of moments with penalized empirical likelihood. Journal of Business & Economic Statistics, 41, 791–805.

Dai, C., Lin, B., Xing, X., and Liu, J. S. (2023a). False discovery rate control via data splitting. Journal of the American Statistical Association, 118, 2503–2520.

Dai, C., Lin, B., Xing, X., and Liu, J. S. (2023b). A scale-free approach for false discovery rate control in generalized linear models. Journal of the American Statistical Association, 118, 1551–1565.

Fan, J., Han, X., and Gu, W. (2012). Estimating false discovery proportion under arbitrary covariance dependence. Journal of the American Statistical Association, 107, 1019–1035.

Fan, J., Li, R., Zhang, C.-H., and Zou, H. (2020). Statistical Foundations of Data Science. Chapman & Hall/CRC.

Fithian, W. and Lei, L. (2022).
Conditional calibration for false discovery rate control under dependence. The Annals of Statistics, 50, 3091–3118.

Hastie, T., Tibshirani, R., and Wainwright, M. (2015). Statistical Learning with Sparsity: The Lasso and Generalizations. Chapman & Hall/CRC.

Lei, L. and Fithian, W. (2018). AdaPT: an interactive procedure for multiple testing with side information. Journal of the Royal Statistical Society Series B: Statistical Methodology, 80, 649–679.

Liu, W. (2013). Gaussian graphical model estimation with false discovery rate control. The Annals of Statistics, 41, 2948–2978.

Liu, W., Ke, Y., Liu, J., and Li, R. (2020). Model-free feature screening and FDR control with knockoff features. Journal of the American Statistical Association, 117, 428–443.

Liu, W. and Shao, Q.-M. (2014). Phase transition and regularized bootstrap in large-scale t-tests with false discovery rate control. The Annals of Statistics, 42, 2003–2025.

Ning, Y. and Liu, H. (2017). A general theory of hypothesis tests and confidence regions for sparse high dimensional models. The Annals of Statistics, 45, 158–195.

Rhee, S.-Y., Fessel, W. J., Zolopa, A. R., Hurley, L., Liu, T., Taylor, J., Nguyen, D. P., Slome, S., Klein, D., Horberg, M., Flamm, J., Follansbee, S., Schapiro, J. M., and Shafer, R. W. (2005). HIV-1 protease and reverse-transcriptase mutations: correlations with antiretroviral
therapy in subtype B isolates and implications for drug-resistance surveillance. The Journal of Infectious Diseases, 192, 456–465.

Rhee, S.-Y., Taylor, J., Wadhera, G., Ben-Hur, A., Brutlag, D. L., and Shafer, R. W. (2006). Genotypic predictors of human immunodeficiency virus type 1 drug resistance. Proceedings of the National Academy of Sciences of the United States of America, 103, 17355–17360.

Sarkar, S. K. (2002). Some results on false discovery rate in stepwise multiple testing procedures. The Annals of Statistics, 30, 239–257.

Sarkar, S. K. and Tang, C. Y. (2022). Adjusting the Benjamini-Hochberg method for controlling the false discovery rate in knockoff-assisted variable selection. Biometrika, 109, 1149–1155.

Storey, J. D. (2002). A direct approach to false discovery rates. Journal of the Royal Statistical Society Series B: Statistical Methodology, 64, 479–498.

Sun, T. and Zhang, C.-H. (2012). Scaled sparse linear regression. Biometrika, 99, 879–898.

Sun, T. and Zhang, C.-H. (2013). Sparse matrix inversion with scaled Lasso. Journal of Machine Learning Research, 14, 3385–3418.

Sun, W. and Cai, T. (2008). Large-scale multiple testing under dependence. Journal of the Royal Statistical Society Series B: Statistical Methodology, 71, 393–424.

van de Geer, S., Bühlmann, P., Ritov, Y., and Dezeure, R. (2014). On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics, 42, 1166–1202.

Xing, X., Zhao, Z., and Liu, J. S. (2021). Controlling false discovery rate using Gaussian mirrors. Journal of the American Statistical Association, 118, 222–241.

Zhang, C.-H. and Zhang, S. S. (2013). Confidence intervals for low dimensional parameters in high-dimensional linear models. Journal of the Royal Statistical Society Series B: Statistical Methodology, 76, 217–242.
Supplementary Material for “Controlling the False Discovery Rate in High-Dimensional Linear Models Using Model-X Knockoffs and p-values”

A Additional simulation results

Additional results of the simulations are reported here.

A.1 Additional simulation results on FDR control

We first provide all additional simulation results when all null hypotheses are true. Figures S1 and S2 report the FDR control results under FDR levels 0.05 and 0.2, respectively.

Figure S1: Simulated FDR when all null hypotheses are true, for Setting 1 (left column), Setting 2 (middle column), and Setting 3 (right column). The sample sizes of the top and bottom rows are n = 200 and n = 500, respectively. The FDR level is α = 0.05. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and green solid line), the knockoff-based method of Candès et al. (2018) (triangles and blue dotted line), and the Gaussian Mirror method of Xing et al. (2021) with FDP+ procedure (diamonds and purple dashed line).

A.2 Additional simulation results on power

All additional simulation results when some null hypotheses are not true, under Setting 2 and sparsity level k = 0.04d, are
reported here. Figures S3 and S4 present the results with relatively small n and d, while Figures S5–S7 present the results with relatively large n and d. We implement the two-stage method and report relevant results in Figures S8 and S9.

Figure S2: Simulated FDR when all null hypotheses are true, for Setting 1 (left column), Setting 2 (middle column), and Setting 3 (right column). The sample sizes of the top and bottom rows are n = 200 and n = 500, respectively. The FDR level is α = 0.2. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and green solid line), the knockoff-based method of Candès et al. (2018) (triangles and blue dotted line), and the Gaussian Mirror method of Xing et al. (2021) with FDP+ procedure (diamonds and purple dashed line).

A.3 Additional simulation results with highly correlated covariates

Figures S10–S12 report simulation results with highly correlated covariates. The rows of X are drawn from a multivariate normal distribution with an autoregressive (AR(1)) dependence structure and correlation coefficient 0.67. The sparsity level is k = 15.

A.4 Additional simulations in comparison with the approach of Sarkar and Tang (2022)

Figures S13 and S14 report additional simulation results under the low-dimensional setting with FDR levels 0.05 and 0.2, respectively.
Figure S3: Simulated FDR and power for the settings of n = 200 and d = 200 (left column), n = 200 and d = 300 (middle column), and n = 200 and d = 400 (right column). The rows of the design matrix were generated from Setting 2. The sparsity level is k = 0.04d and the FDR level is α = 0.05. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and yellow solid line), the knockoff-based method of Candès et al. (2018) (triangles and green dotted line), the Gaussian Mirror method of Xing et al. (2021) (diamonds and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (squares and purple dashed line).
Figure S4: Simulated FDR and power for the settings of n = 200 and d = 200 (left column), n = 200 and d = 300 (middle column), and n = 200 and d = 400 (right column). The rows of the design
matrix were generated from Setting 2. The sparsity level is k = 0.04d and the FDR level is α = 0.2. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and yellow solid line), the knockoff-based method of Candès et al. (2018) (triangles and green dotted line), the Gaussian Mirror method of Xing et al. (2021) (diamonds and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (squares and purple dashed line).

Figure S5: Simulated FDR and power for the settings of n = 500 and d = 500 (left column), n = 500 and d = 750 (middle column), and n = 500 and d = 1000 (right column). The rows of the design matrix were generated from Setting 2. The sparsity level is k = 0.04d and the FDR level is α = 0.05. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and yellow solid line), the knockoff-based method of Candès et al. (2018) (triangles and green dotted line), the Gaussian Mirror method of Xing et al. (2021) (diamonds and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (squares and purple dashed line).
0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 500, d = 500 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 500, d = 750 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 500, d = 1000 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 500, d = 500 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 500, d = 750 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 500, d = 1000 Figure S6: Simulated FDR and power for the settings of n= 500 and d= 500 (left column), n= 500 andd= 750 (middle column), n= 500 and d= 1000 (right column). The rows of the design matrix were generated from setting 2. The sparsity level is k= 0.04dand the FDR level is α= 0.1. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and yellow solid line), the knockoff-based method of Cand` es et al. (2018) (triangles and green dotted line), the Gaussian Mirror method of Xing et al. (2021) (diamonds and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (squares and purple dashed line). 
S4 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 500, d = 500 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 500, d = 750 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 500, d = 1000 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 500, d = 500 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 500, d = 750 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 500, d = 1000Figure S7: Simulated FDR and power for the settings of n= 500 and d= 500 (left column), n= 500 andd= 750 (middle column), n= 500 and d= 1000 (right column). The rows of the design matrix were generated from setting 2. The sparsity level is k= 0.04dand the FDR
https://arxiv.org/abs/2505.16124v1
level is α= 0.2. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and yellow solid line), the knockoff-based method of Cand` es et al. (2018) (triangles and green dotted line), the Gaussian Mirror method of Xing et al. (2021) (diamonds and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (squares and purple dashed line). 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 400, d = 400 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 600, d = 600 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 800, d = 800 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 400, d = 400 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 600, d = 600 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 800, d = 800 Figure S8: Simulated FDR and power for the settings of n= 400 and d= 400 (left column), n= 600 andd= 600 (middle column), n= 800 and d= 800 (right column). The rows of the design matrix were generated from setting 2. The sparsity level is k= 15 and the FDR level is α= 0.05. The methods compared are Algorithm 1 (squares and green dotted line), two-stage Algorithm 1 (squares and red solid line), Algorithm 2 (circles and blue dotted line), two-stage Algorithm 2 (circles and yellow solid line), the Gaussian Mirror method of Xing et al. (2021) (triangles and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (triangles and purple two-dashed line). 
S5 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 400, d = 400 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 600, d = 600 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 800, d = 800 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 400, d = 400 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 600, d = 600 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 800, d = 800Figure S9: Simulated FDR and power, for the settings of n= 400 and d= 400 (left column), n= 600 and d= 600 (middle column), n= 800 and d= 800 (right column). The rows of the design matrix were generated from setting 2. The sparsity level is k= 15 and the FDR level is α= 0.2. The methods compared are Algorithm 1 (squares and green dotted line), two-stage Algorithm 1 (squares and red solid line), Algorithm 2 (circles and blue dotted line), two-stage Algorithm 2 (circles and yellow solid line), the Gaussian Mirror method of Xing et al. (2021) (triangles and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (triangles and purple two-dashed line). 
0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 600, d = 600 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 800, d = 800 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 1000, d = 1000 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 600, d = 600 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 800, d = 800 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 1000, d = 1000 Figure S10: Simulated FDR and power for the settings of n= 600 and d= 600 (left column), n= 800 and d= 800 (middle column), n= 1000 and d= 1000 (right column). The rows of Xare drawn from a multivariate normal distribution with an AR(1) dependence structure and correlation coefficient 0.67. The sparsity level is k= 15
https://arxiv.org/abs/2505.16124v1
and the FDR level is α= 0.05. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and yellow solid line), the knockoff-based method of Cand` es et al. (2018) (triangles and green dotted line), the Gaussian Mirror method of Xing et al. (2021) (diamonds and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (squares and purple dashed line). S6 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 600, d = 600 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 800, d = 800 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 1000, d = 1000 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 600, d = 600 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 800, d = 800 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 1000, d = 1000Figure S11: Simulated FDR and power for the settings of n= 600 and d= 600 (left column), n= 800 and d= 800 (middle column), n= 1000 and d= 1000 (right column). The rows of Xare drawn from a multivariate normal distribution with an AR(1) dependence structure and correlation coefficient 0.67. The sparsity level is k= 15 and the FDR level is α= 0.1. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and yellow solid line), the knockoff-based method of Cand` es et al. (2018) (triangles and green dotted line), the Gaussian Mirror method of Xing et al. (2021) (diamonds and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (squares and purple dashed line). B Additional results of the real data analysis Additional results of the real data analysis are included in Figures S15 and S16. 
S7 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 600, d = 600 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 800, d = 800 0.000.100.200.300.400.500.60 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated FDRn = 1000, d = 1000 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 600, d = 600 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 800, d = 800 0.000.250.500.751.00 0.100.150.200.250.300.350.400.450.50 AmplitudeSimulated powern = 1000, d = 1000Figure S12: Simulated FDR and power for the settings of n= 600 and d= 600 (left column), n= 800 and d= 800 (middle column), n= 1000 and d= 1000 (right column). The rows of Xare drawn from a multivariate normal distribution with an AR(1) dependence structure and correlation coefficient 0.67. The sparsity level is k= 15 and the FDR level is α= 0.2. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and yellow solid line), the knockoff-based method of Cand` es et al. (2018) (triangles and green dotted line), the Gaussian Mirror method of Xing et al. (2021) (diamonds and blue dashed line), and the Gaussian Mirror method with FDP+ procedure (squares and purple dashed line). 
0.000.050.100.150.200.25 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated FDRn = 300, d = 100 0.000.050.100.150.200.25 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated FDRn = 500, d =200 0.000.050.100.150.200.25 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated FDRn = 700, d = 300 0.000.250.500.751.00 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated powern = 300, d = 100 0.000.250.500.751.00 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated powern = 500, d =200 0.000.250.500.751.00 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated powern = 700, d = 300 Figure S13: Simulated FDR and power for the settings of n= 300 and d= 100 (left column), n= 500 and
https://arxiv.org/abs/2505.16124v1
d= 200 (middle column), n= 700 and d= 300 (right column). The rows of the design matrix were generated from setting 2. The sparsity level is k= 0.1dand the FDR level is α= 0.05. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and green solid line), and the Bonferroni-Benjamini-Hochberg method of Sarkar and Tang (2022) (triangles and blue dotted line). S8 0.000.050.100.150.200.25 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated FDRn = 300, d = 100 0.000.050.100.150.200.25 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated FDRn = 500, d = 200 0.000.050.100.150.200.25 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated FDRn = 700, d = 300 0.000.250.500.751.00 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated powern = 300, d = 100 0.000.250.500.751.00 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated powern = 500, d = 200 0.000.250.500.751.00 0.100.200.300.400.500.600.700.800.901.00 AmplitudeSimulated powern = 700, d = 300Figure S14: Simulated FDR power for the settings of n= 300 and d= 100 (left column), n= 500 andd= 200 (middle column), n= 700 and d= 300 (right column). The rows of the design matrix were generated from setting 2. The sparsity level is k= 0.1dand the FDR level is α= 0.2. The methods compared are Algorithm 1 (squares and red solid line), Algorithm 2 (circles and green solid line), and the Bonferroni-Benjamini-Hochberg method of Sarkar and Tang (2022) (triangles and blue dotted line). 
S9 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to APV0 10 20 30 40(a)n= 767 , d= 201 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to ATV0 10 20 30 40 (b)n= 328 , d= 147 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to IDV0 10 20 30 40 (c)n= 825 , d= 206 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to LPV0 10 20 30 40 (d)n= 515 , d= 184 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to NFV0 10 20 30 40 (e)n= 842 , d= 207 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to RTV0 10 20 30 40 (f)n= 793 , d= 205 Figure S15: Results of the real data example for α= 0.05. Blue represents the number of dis- coveries that are in the treatment-selected mutation panels list, and yellow represents the number of discoveries not in the treatment-selected mutation panels list. The total number of HIV-1 pro- tease positions in the treatment-selected mutation panels list is 34. The methods compared are the proposed Algorithm 1 (Algorithm1), the proposed Algorithm 2 (Algorithm2), the Gaussian Mir- ror method of Xing et al. (2021) (GM), the knockoff-based method of Barber and Cand` es (2015) (Knockoff), and the Bonferroni-Benjamini-Hochberg method of Sarkar and Tang (2022) (B-BH). S10 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to APV0 10 20 30 40(a)n= 767 , d= 201 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to ATV0 10 20 30 40 (b)n= 328 , d= 147 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to IDV0 10 20 30 40 (c)n= 825 , d= 206 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to LPV0 10 20 30 40 (d)n= 515 , d= 184 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to NFV0 10 20 30 40 (e)n= 842 , d= 207 Algorithm1 Algorithm2 GM Knockoff B−BHResistance to RTV0 10 20 30 40 (f)n= 793 , d= 205 Figure S16: Results of the real data example for α= 0.1. Blue represents the number
https://arxiv.org/abs/2505.16124v1
arXiv:2505.16275v1 [math.ST] 22 May 2025

Semiparametric Bernstein–von Mises theorems for reversible diffusions

Matteo Giordano and Kolyan Ray
University of Turin and Imperial College London

Abstract

We establish a general semiparametric Bernstein–von Mises theorem for Bayesian nonparametric priors based on continuous observations in a periodic reversible multidimensional diffusion model. We consider a wide range of functionals satisfying an approximate linearization condition, including several nonlinear functionals of the invariant measure. Our result is applied to Gaussian and Besov-Laplace priors, showing these can perform efficient semiparametric inference and thus justifying the corresponding Bayesian approach to uncertainty quantification. Our theoretical results are illustrated via numerical simulations.

Keywords: Bernstein–von Mises, multidimensional diffusions, reversibility, semiparametric inference, uncertainty quantification.

Contents

1 Introduction
2 Main results
  2.1 Reversible diffusions with periodic drift and Bayesian inference
  2.2 A general semiparametric Bernstein–von Mises theorem
  2.3 Gaussian priors
  2.4 Besov-Laplace priors
3 Numerical illustrations with Gaussian priors
4 Proofs
  4.1 Proof of Theorem 1: general semiparametric BvM
  4.2 Auxiliary results
  4.3 Proof of Theorem 2: Gaussian priors
  4.4 Proof of Theorem 3: Besov-Laplace priors
  4.5 Functional expansions
  4.6 A PDE estimate
Bibliography

1 Introduction

Let (X_t = (X^1_t, ..., X^d_t) : t ≥ 0) be the multidimensional diffusion process arising as the solution to the SDE

dX_t = ∇B(X_t) dt + dW_t,  t ≥ 0,  X_0 = x_0 ∈ R^d,  (1)

where (W_t = (W^1_t, ..., W^d_t) : t ≥ 0) is a standard d-dimensional Brownian motion and B : R^d → R is a twice continuously differentiable scalar potential function. We observe the continuous trajectory X^T = (X_t : 0 ≤ t ≤ T) over a time horizon T > 0, and consider statistical inference for low-dimensional functionals of the potential B when B is modelled using a Bayesian nonparametric prior, leading to a semiparametric inference problem. The SDE (1) describes
the position of a particle diffusing in a potential energy field that exerts a force directed towards its local extrema, see Figure 1. By a classic result of Kolmogorov, the drift taking the form of a gradient vector field ∇B is equivalent to time reversibility of the process X (e.g. [7], p. 46). Reversible systems are widespread in the natural sciences [34, 59, 45, 46], and one must thus model the scalar potential B to correctly incorporate such physical dynamics. Furthermore, B typically has a strong physical interpretation, and estimating various aspects of it is often of significant interest. This supports directly modelling B, which is the approach we take here, assigning to B a Bayesian prior. We study statistical inference for low-dimensional functionals Ψ(B) satisfying an approximate linearization condition, which includes several interesting nonlinear functionals. In the reversible setting (1), there is a one-to-one correspondence between the potential B and the invariant measure µ_B, see (2) below, so that one can further embed functionals of the invariant measure into this framework. This allows us to treat several new and physically interesting cases, such as the entropy or integrated square root of the invariant measure. The natural Bayesian approach is to assign a nonparametric prior to B and consider the induced marginal posterior for Ψ(B). We provide rigorous frequentist guarantees for this approach in the shape of a semiparametric Bernstein–von Mises (BvM) theorem as the time horizon T → ∞. It gives general conditions under which the marginal posterior for Ψ(B) behaves asymptotically like a normal distribution centered at an efficient estimator of Ψ(B) and with posterior variance equal to the inverse efficient Fisher information, which in model (1) can be expressed as the abstract solution to an elliptic PDE. This implies that the limiting covariances obtained coincide with the semiparametric information lower bounds for these estimation problems.
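Model (1) is straightforward to simulate on a time grid, which is also how trajectories such as the one in Figure 1 are typically produced. The following is a minimal Euler–Maruyama sketch; the periodic potential B(x) = (1/2π) Σ_i cos(2πx_i) is an illustrative assumption (any smooth one-periodic potential would do):

```python
import numpy as np

def grad_B(x):
    # ∇B for the illustrative one-periodic potential
    # B(x) = (1/(2*pi)) * sum_i cos(2*pi*x_i), so dB/dx_i = -sin(2*pi*x_i).
    return -np.sin(2 * np.pi * x)

def euler_maruyama(x0, T, n_steps, rng):
    """Simulate dX_t = grad_B(X_t) dt + dW_t on [0, T] on a uniform time grid."""
    dt = T / n_steps
    x = np.asarray(x0, dtype=float)
    path = np.empty((n_steps + 1, x.size))
    path[0] = x
    for k in range(n_steps):
        # Gaussian increment with variance dt per coordinate
        x = x + grad_B(x) * dt + rng.normal(scale=np.sqrt(dt), size=x.size)
        path[k + 1] = x
    return path

rng = np.random.default_rng(0)
path = euler_maruyama(x0=[1.0, 1.0], T=1.0, n_steps=1000, rng=rng)  # d = 2, as in Figure 1
```

The continuous observation X^T is approximated by the discrete array `path`; refining `n_steps` moves the discretization towards the continuous-record setting in which the theory below is formulated.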
In particular, such a result guarantees the validity of semiparametric Bayesian uncertainty quantification in the sense that posterior credible intervals for Ψ(B) are also asymptotic frequentist confidence intervals of the correct level, see for instance [12]. This is especially relevant since uncertainty quantification is a major motivation for using Bayesian methods in practice [21]. We apply our general theorem to two classes of priors: Gaussian processes and Besov-Laplace priors. Gaussian processes are widely used in diffusion models, partly due to computational advances in applying them to 'real world' discrete data [43, 56, 30, 65, 9, 28]. While often the standard Bayesian approach for such problems, Gaussian priors are known to be unsuited to modelling spatially inhomogeneous functions [6, 29, 4], which motivated the use of heavier-tailed priors, such as Besov-Laplace priors, in the inverse problems and imaging communities [35, 17, 33, 31, 5, 4]. The latter priors are better suited to modelling such inhomogeneities due to several attractive properties, such as edge-preservation and sparse solution representations, whilst also maintaining a log-concave structure amenable to posterior sampling.

[Figure 1: Left: a periodic potential energy field B. Right: a continuous trajectory (X_t, 0 ≤ t ≤ T) started at X_0 = (1,1) and run until time T = 1.]

Our results provide the first statistical guarantees for semiparametric inference using both these prior classes in a reversible diffusion setting. We further illustrate the applicability of our results in numerical simulations using Gaussian priors. We study
statistical inference in the T → ∞ regime, where one can use the average behaviour of the particle trajectory for inference. This requires a suitable notion of statistical ergodicity to ensure the particle exhibits enough recurrence to use long-time averages. We ensure this by following [43, 47, 68, 1, 20, 40, 27] in restricting to periodic potentials B. Periodicity simplifies several technical arguments, notably the elliptic PDE techniques, leading to a cleaner exposition of the main statistical ideas. Note that a periodic potential B still implies that the corresponding (periodised) diffusion is reversible ([20], Proposition 2), which maintains the modelling link between reversibility and potential functions behind our approach. For further discussion of alternative approaches to recurrence, such as confining potentials or reflecting boundary conditions, see Section 3.2 of [27]. Bayesian nonparametric theory for drift estimation of diffusions is well studied in dimension d = 1 (e.g. [66, 47, 68, 42, 1]). In general dimensions d ≥ 2, there has been recent progress on posterior contraction rates for continuous observations [41, 27] and discrete data [39, 32], though little is known regarding the performance of Bayesian uncertainty quantification. For d ≤ 3, Nickl and Ray [40] obtain nonparametric BvM results for certain non-reversible drift vector fields, which amongst other things imply semiparametric BvMs for smooth linear functionals. However, their approach relies on specific properties of truncated Gaussian series product priors on the drift b = (b_1, ..., b_d), which crucially uses that the priors for each coordinate are independent and supported on the same finite-dimensional projection spaces. Since product priors for b draw gradient vector fields b = ∇B with probability zero, these cannot model reversibility.
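The role of periodicity can be checked numerically: reducing the trajectory modulo Z^d, long-time averages stabilise around integrals against the density proportional to e^{2B} (the invariant measure (2) introduced below). A one-dimensional sketch, with the potential B(x) = 0.5 cos(2πx) chosen purely for illustration:

```python
import numpy as np

# Illustrative one-periodic potential (an assumption) and its derivative.
B = lambda x: 0.5 * np.cos(2 * np.pi * x)
grad_B = lambda x: -np.pi * np.sin(2 * np.pi * x)

# Invariant density mu_B = exp(2B) / integral(exp(2B)) on a grid of the torus.
grid = np.arange(0.0, 1.0, 1e-3)
w = np.exp(2 * B(grid))
mu = w / w.mean()                         # Riemann normalisation: mean(mu) = ∫ mu = 1

# Long-time average of phi(X_t mod 1) along an Euler-Maruyama path.
phi = lambda x: np.cos(2 * np.pi * x)
rng = np.random.default_rng(1)
dt, n_steps = 0.01, 200_000               # time horizon T = 2000
dw = rng.normal(scale=np.sqrt(dt), size=n_steps)
x, acc = 0.0, 0.0
for k in range(n_steps):
    x += grad_B(x) * dt + dw[k]
    acc += phi(x % 1.0)                   # reduce modulo Z before averaging

time_avg = acc / n_steps                  # (1/T) * integral_0^T phi(X_s) ds
space_avg = np.mean(phi(grid) * mu)       # integral of phi with respect to mu_B
```

The particle itself wanders over all of R, but its values modulo 1 are ergodic, so `time_avg` and `space_avg` agree up to Monte Carlo and discretization error; this is exactly the notion of statistical recurrence used in the text.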
Their approach thus deals with a fundamentally different physical model, whereas here we deal with reversible dynamics, general dimension d ≥ 1 and possibly nonlinear functionals. Convergence guarantees have also been obtained in multiple dimensions for various frequentist methods, typically kernel methods; see for instance [16, 58, 60, 61, 62, 2] and the references therein. Our proof builds on the semiparametric BvM ideas of Castillo and Rousseau [15], extending these to the reversible diffusion model (1). Elliptic PDE theory plays a key role in our results and proofs, for instance using mapping properties of the generator of the underlying semigroup (via the Poisson equation and Itô's formula) to access martingale techniques to deal with the dependent data. Moreover, the representor ψ_L of the linearization of the functional Ψ(B) in the information (LAN) norm in model (1) can typically only be expressed implicitly as the solution to an elliptic PDE, whose regularity properties therefore play a central role in the analysis of semiparametric inference in this setting. In particular, the usual "prior invariance condition" [12, 15], which reflects how well the prior aligns with the model and functional and is known to play a key role in the quality of semiparametric Bayesian inference, involves regularity properties of the solution map ψ_L. The present reversible diffusion setting also shares many connections with nonlinear inverse problems, where Bayesian methods have found significant use [63, 38]. Modelling B is equivalent to modelling the invariant measure (see (2)), which leads to a nonlinear regression problem; see [27] for discussion regarding posterior contraction rates. In our context, this connection
can be seen via the semiparametric information bounds in Section 2.2, which involve the solution to an elliptic PDE; see [25] for a similar situation in inverse problems. In contrast, in the non-reversible case studied in [40], such bounds are simpler weighted L^2-norms.

Notation. Let T^d = (0,1]^d denote the d-dimensional torus, L^p(T^d) the usual Lebesgue spaces on T^d, ⟨·,·⟩_2 the inner product on L^2(T^d), and

˙L^2(T^d) = { f ∈ L^2(T^d) : ∫_{T^d} f(x) dx = 0 },

the subspace of centered L^2(T^d) functions. For an invariant measure µ_0, set ⟨f, g⟩_{µ_0} = ∫ f g dµ_0 with corresponding norm ∥f∥²_{µ_0} = ∫_{T^d} f² dµ_0. For f ∈ L^2(T^d), define the empirical process

G_T[f] = (1/√T) ∫_0^T ( f(X_s) − ∫_{T^d} f(x) dµ_0(x) ) ds,

where µ_0 is the invariant measure of X on T^d, see (2) below. Let C(T^d) be the space of continuous functions on T^d equipped with ∥·∥_∞. For s > 0, denote by C^s(T^d) the usual Hölder space of ⌊s⌋-times continuously differentiable functions whose ⌊s⌋th derivative is (s − ⌊s⌋)-Hölder continuous. We let H^s(T^d), s ∈ R, denote the usual L^2-Sobolev spaces on T^d, defined by duality when s < 0. We further define the Sobolev norms ∥f∥_{W^{1,q}} = ∥f∥_q + Σ_{i=1}^d ∥∂_{x_i} f∥_q, q ≥ 1, and note that ∥·∥_{W^{1,2}} is equivalent to ∥·∥_{H^1}.

Let {Φ_{lr} : l ∈ {−1, 0} ∪ N, r = 0, ..., max(2^{ld} − 1, 0)} be an orthonormal tensor-product wavelet basis of L^2(T^d), obtained from a periodised Daubechies wavelet basis of L^2(T), which we take to be S-regular for S ∈ N large enough; see Section 4.3 in [24] for details. For 0 ≤ t ≤ S, 1 ≤ p, q ≤ ∞, define the Besov spaces via their wavelet characterisation:

B^t_{pq}(T^d) = { f ∈ L^p(T^d) : ∥f∥^q_{B^t_{pq}} := Σ_l 2^{ql(t + d/2 − d/p)} ( Σ_r |⟨f, Φ_{lr}⟩_2|^p )^{q/p} < ∞ },

replacing the ℓ^p- or ℓ^q-norm above with ℓ^∞ if p = ∞ or q = ∞, respectively. Recall that H^t(T^d) = B^t_{22}(T^d) and the continuous embedding C^s(T^d) ⊆ B^s_{∞∞}(T^d) for s ≥ 0; see Chapter 3 in [57]. We sometimes suppress the dependence of the function spaces on the underlying domain, writing for example B^t_{pq} instead of B^t_{pq}(T^d). We also employ the same function space notation for vector fields f = (f_1, ..., f_d).
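The wavelet characterisation above is also how Besov norms are evaluated in practice: compute wavelet coefficients level by level, take the ℓ^p-norm within each level, weight by 2^{l(t+d/2−d/p)}, and take the ℓ^q-norm across levels. A toy sketch in d = 1 using the Haar basis (Haar is not S-regular for large S, so this is purely illustrative, and discrete samples only approximate the true coefficients):

```python
import numpy as np

def haar_detail_coeffs(f_vals):
    """Detail coefficients of an orthonormal Haar basis on [0, 1), approximated
    from samples of f at 2^J equispaced points."""
    v = f_vals / np.sqrt(f_vals.size)            # discrete L2 normalisation
    levels = []
    while v.size > 1:
        even, odd = v[0::2], v[1::2]
        levels.append((even - odd) / np.sqrt(2.0))   # finest level first
        v = (even + odd) / np.sqrt(2.0)              # coarser approximation
    return list(reversed(levels))                # levels[l] has 2^l coefficients

def besov_norm(levels, t, p, q, d=1):
    """Wavelet Besov norm (detail part): weighted l^q over levels of l^p per level."""
    terms = []
    for l, c in enumerate(levels):
        inner = np.sum(np.abs(c) ** p) ** (1.0 / p)
        terms.append((2.0 ** (l * (t + d / 2.0 - d / p)) * inner) ** q)
    return float(np.sum(terms) ** (1.0 / q))

x = np.arange(1024) / 1024.0
f = np.cos(2 * np.pi * x)                        # a mean-zero test function
levels = haar_detail_coeffs(f)
```

For t = 0, p = q = 2 the level weights equal one and Parseval returns the L^2-norm of the mean-zero signal; larger t penalises fine-scale coefficients more heavily, mirroring H^t = B^t_{22}.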
For instance, f ∈ H^s = (H^s)^{⊗d} will mean each f_i ∈ H^s, and the norm on H^s is ∥f∥_{H^s} = Σ_{i=1}^d ∥f_i∥_{H^s}. Similarly, ∥∇g∥_p = Σ_{i=1}^d ∥∂_{x_i} g∥_p. Finally, for a function space X ⊂ L^2, we denote ˙X = X ∩ ˙L^2. We write ≲, ≳ and ≃ to denote one- or two-sided inequalities up to multiplicative constants that may either be universal or 'fixed' in the context where the symbols appear. We also write a_+ = max(a, 0) and a ∨ b = max(a, b) for real numbers a, b. The ε-covering number of a set Θ for a semimetric d, denoted N(Θ, d, ε), is the minimal number of d-balls of radius ε needed to cover Θ.

2 Main results

2.1 Reversible diffusions with periodic drift and Bayesian inference

Consider the SDE (1) with B : R^d → R a twice continuously differentiable and one-periodic potential, i.e. B(x + m) = B(x) for all m ∈ Z^d. There exists a strong pathwise solution X = (X_t : t ≥ 0) to (1) on the path space C([0,∞), R^d); see Chapters 24 and 39 in [8]. We denote the law of the corresponding observed process X^T = (X_t : 0 ≤ t ≤ T) by P_B = P^T_B, omitting the dependence on the initial condition X_0 = x_0 since this does not affect our results. Using the periodicity, we can analogously consider B as a function on T^d. Since the law of X in (1) depends on B only through the drift ∇B, potentials differing only by additive constants yield the same law. To ensure identifiability, we thus
consider the unique equivalence class with ∫_{T^d} B(x) dx = 0, namely B ∈ ˙L^2(T^d). In the present model, the process lives on all of R^d, but will not be globally recurrent in this space. However, the periodicity of B means that the values of (X_t)_t modulo Z^d encode all the relevant statistical information about ∇B contained in the trajectory X^T. The periodic model thus effectively restricts the diffusion to the bounded state space T^d, providing a notion of statistical recurrence that suffices to ensure ergodicity in the asymptotic limit T → ∞. Specifically, one can define an invariant measure on T^d since it holds that (arguing as in the proof of Lemma 6 of [41]):

(1/T) ∫_0^T φ(X_s) ds →_{P_B} ∫_{T^d} φ(x) dµ_B(x),  ∀ φ ∈ C(T^d),

as T → ∞, where µ_B is a uniquely defined probability measure on T^d and we identify φ with its periodic extension on the left side of the last display. In the reversible diffusion model (1), the potential function B uniquely defines the invariant measure via its density, also denoted µ_B:

µ_B(x) = e^{2B(x)} / ∫_{T^d} e^{2B(z)} dz,  x ∈ T^d,  (2)

see pp. 45-47 of [7]. As usual, µ_B can equivalently be identified as the solution to the PDE L*_B u = 0 for L*_B the L^2-adjoint of the generator L_B defined in (23). The log-likelihood of B ∈ ˙C^2(T^d) is given by Girsanov's theorem (e.g. [8], Section 17.7):

ℓ_T(B) := log (dP_B/dP_{B=0})(X^T) = −(1/2) ∫_0^T ∥∇B(X_t)∥² dt + ∫_0^T ∇B(X_t)·dX_t,  (3)

where P_{B=0} is the law of a d-dimensional Brownian motion (W_t : 0 ≤ t ≤ T). Assigning to B a possibly T-dependent prior Π = Π_T, which is supported on ˙C^2(T^d), the posterior is given by Bayes' formula as usual:

dΠ(B | X^T) = e^{ℓ_T(B)} dΠ(B) / ∫_{˙C^2(T^d)} e^{ℓ_T(B')} dΠ(B'),  B ∈ ˙C^2(T^d).

For a given functional Ψ : ˙C^2(T^d) → R, the Bayesian uses the marginal posterior distribution, whose law equals the pushforward Π(·|X^T) ∘ Ψ^{−1}, for inference on Ψ(B). More concretely, we can generate posterior samples Ψ(B) | X^T from B ∼ Π(·|X^T) via evaluations of the functional.
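The log-likelihood (3) is easy to evaluate along a discretised trajectory: the stochastic integral becomes a sum of ∇B(X_{t_k})·(X_{t_{k+1}} − X_{t_k}) terms. A one-dimensional sketch, where the two candidate drifts are illustrative assumptions, shows that the data-generating potential typically receives a higher log-likelihood than a misspecified one:

```python
import numpy as np

def log_lik(grad_B, path, dt):
    """Discretised Girsanov log-likelihood (3):
    sum_k gB(X_k)*(X_{k+1}-X_k) - 0.5 * sum_k gB(X_k)^2 * dt."""
    x, dx = path[:-1], np.diff(path)
    g = grad_B(x)
    return np.sum(g * dx) - 0.5 * np.sum(g**2) * dt

grad_B0 = lambda x: -np.sin(2 * np.pi * x)   # true drift, ∇B_0
grad_B1 = lambda x: np.sin(2 * np.pi * x)    # a wrong candidate drift

# Simulate under B_0 with Euler-Maruyama; horizon T = 200.
rng = np.random.default_rng(2)
dt, n = 0.01, 20_000
path = np.empty(n + 1)
path[0] = 0.0
for k in range(n):
    path[k + 1] = path[k] + grad_B0(path[k]) * dt + rng.normal(scale=np.sqrt(dt))

ll_true, ll_wrong = log_lik(grad_B0, path, dt), log_lik(grad_B1, path, dt)
```

Under P_{B_0} the expected gap ℓ_T(B_0) − ℓ_T(B_1) grows like (T/2)∥∇B_0 − ∇B_1∥²_{µ_0}, with only O(√T) martingale fluctuations; this quadratic structure is the LAN expansion exploited in Section 2.2.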
This provides a principled Bayesian approach to inference on Ψ(B) that, as we will show below, can be optimal from an information-theoretic point of view. Note that by the injectivity of the map B ↦ µ_B in (2), functionals Φ(µ_B) of the invariant measure can equivalently be considered as functionals of B, and hence are contained within this framework. We will study the behaviour of the marginal posterior distribution assuming the data X^T ∼ P_{B_0} is generated from a 'ground truth' law following the SDE (1) with potential B_0. We will often write P_0 := P_{B_0} for notational convenience, differentiating this from the notation P_{B=0} used in the likelihood (3).

2.2 A general semiparametric Bernstein–von Mises theorem

We provide here a general result giving conditions on the prior and functional under which the marginal posterior provides asymptotically optimal semiparametric inference. Let Ψ : ˙C^2(T^d) → R be a one-dimensional functional admitting an expansion about B_0 of the form

Ψ(B) = Ψ(B_0) + ⟨ψ, B − B_0⟩_2 + r(B, B_0),  (4)

where ψ ∈ L^2(T^d) and r(B, B_0) is a remainder term of size o(T^{−1/2}) satisfying conditions to be made precise below. Since B, B_0 ∈ ˙C^2(T^d), we may take ψ to satisfy ∫_{T^d} ψ = 0 since this does not change (4) (e.g. by considering Parseval's theorem in the Fourier basis). If Ψ is a linear functional, then simply r(B, B_0) = 0. Thus ψ is the Riesz representor of the linearization of Ψ in the L^2(T^d)-inner product. The optimal asymptotic variance in semiparametric estimation theory is connected to
the information or local asymptotic normality (LAN) norm induced by the statistical model (e.g. Chapter 25 of [67]). In the present reversible diffusion setting, the LAN inner product is ⟨B, ¯B⟩_L := ⟨∇B, ∇¯B⟩_{µ_0} (see Lemma 2), giving the corresponding functional expansion Ψ(B) = Ψ(B_0) + ⟨ψ_L, B − B_0⟩_L + r(B, B_0). Define the second-order elliptic partial differential operator A_µ : H^2(T^d) → L^2(T^d) given by

A_µ u := ∇·(µ∇u) = µ∆u + ∇µ·∇u,  (5)

in which case the functional representors are related by ψ_L = A^{−1}_{µ_0}[ψ], i.e. ψ_L solves the elliptic PDE A_{µ_0} ψ_L = ψ. The corresponding information bound in this model is then ∥ψ_L∥²_L = ∥∇ψ_L∥²_{µ_0} = ∥∇A^{−1}_{µ_0}ψ∥²_{µ_0}. However, for most functionals Ψ, especially nonlinear ones, we typically have access to ψ rather than ψ_L (see below for examples), since the latter is usually only defined implicitly as the solution to an elliptic PDE. We therefore consider the expansion (4) to make our regularity conditions concrete, deducing the regularity of the corresponding ψ_L by elliptic regularity theory.

Assuming Ψ is differentiable in a suitable sense, a sequence of estimators ˆΨ_T is asymptotically efficient for estimating Ψ(B) at the true parameter B_0 if

ˆΨ_T = Ψ(B_0) + (1/T) ∫_0^T ∇ψ_L(X_t)·dW_t + o_{P_0}(T^{−1/2}) = Ψ(B_0) + (1/T) ∫_0^T ∇A^{−1}_{µ_0}ψ(X_t)·dW_t + o_{P_0}(T^{−1/2}),  (6)

see equation (25.22) in [67]. Using the martingale central limit theorem, the sequence √T(ˆΨ_T − Ψ(B_0)) is then asymptotically normal with mean zero and variance ∥∇ψ_L∥²_{µ_0} = ∥∇A^{−1}_{µ_0}ψ∥²_{µ_0}, which is the best possible in a local minimax sense.

We write L_Π(√T(Ψ(B) − ˆΨ_T) | X^T) for the marginal posterior distribution of √T(Ψ(B) − ˆΨ_T), where ˆΨ_T is any random sequence satisfying (6). The semiparametric Bernstein–von Mises theorem states that this distribution asymptotically resembles a centered normal distribution with variance ∥∇A^{−1}_{µ_0}ψ∥²_{µ_0}. We now make this statement precise via the bounded Lipschitz distance d_{BL} on probability distributions on R (see Chapter 11 of [18]) before stating our general result.

Definition 1.
Let X^T = (X_t : 0 ≤ t ≤ T) denote an observation from the SDE (1) with potential B_0, whose distribution we denote by P_{B_0} = P_0. We say the posterior satisfies the semiparametric Bernstein–von Mises (BvM) for a functional Ψ satisfying expansion (4) if, for ˆΨ_T satisfying (6) and as T → ∞,

d_{BL}( L_Π(√T(Ψ(B) − ˆΨ_T) | X^T), N(0, ∥∇A^{−1}_{µ_0}ψ∥²_{µ_0}) ) →_{P_0} 0.

Theorem 1 (Semiparametric Bernstein–von Mises). Let Π = Π_T be a prior for B that is supported on ˙C^2(T^d). Suppose B_0 ∈ ˙C^{(d/2+1+κ)∨2}(T^d) for some κ > 0, and let Ψ : ˙C^2(T^d) → R be a functional satisfying the expansion (4) with representor ψ ∈ ˙H^t(T^d) with t > (d/2 − 1)_+. For 1 ≤ p ≤ 2 and 2 ≤ q ≤ ∞ such that 1/p + 1/q = 1, assume there exist measurable sets D_T satisfying Π(D_T | X^T) →_{P_0} 1 and such that

D_T ⊆ { B ∈ ˙C^2(T^d) : ∥∇B − ∇B_0∥_p ≤ M ε_T, ∥B − B_0∥_{H^{d/2+1+κ}} ≤ ζ_T, ∥B∥_{H^{d+1+κ}} ≤ M, |r(B, B_0)| ≤ ξ_T/√T }  (7)

for some M > 0 and ε_T, ζ_T, ξ_T → 0 with √T ε_T → ∞. Let γ_T ∈ ˙H^{d/2+1+κ}(T^d) be a sequence of fixed functions such that, as T → ∞,

∥γ_T∥_{H^{d/2+1+κ}} = O(1);  ∥A^{−1}_{µ_0}ψ − γ_T∥_{W^{1,q}} = o(1/(√T ε_T)),  (8)

where A_{µ_0} is the second-order operator (5). For u ∈ R, define the perturbations B_u = B − u γ_T/√T. If for every u ∈ R in a neighbourhood of 0,

∫_{D_T} e^{ℓ_T(B_u)} dΠ(B) / ∫_{D_T} e^{ℓ_T(B)} dΠ(B) →_{P_0} 1  (9)

as T → ∞, then the semiparametric Bernstein–von Mises holds for Ψ.

The condition (7) requires the posterior to concentrate on sets around the true B_0 on which one can perform a LAN expansion of the likelihood with uniform control of the remainder terms. These conditions can be verified using general tools for proving posterior contraction as in [22], which have been applied in the multivariate diffusion setting with continuous observations in [40, 27]. The condition
https://arxiv.org/abs/2505.16275v1
r(B, B0) = o(1/√T) means the functional Ψ(B) is approximately linear with expansion (4), which nonetheless allows us to cover several interesting nonlinear functionals. Condition (9) requires invariance of the prior for the full parameter B under a shift B_u = B − uγ_T/√T in the approximate least favourable direction γ_T, which should be close to the true least favourable direction ψ_L = A_{µ0}^{−1}ψ. This condition reflects how well the prior aligns with the model and functional, and, if it is not satisfied, may prevent the √T-rate in the semiparametric BvM theorem (see [12, 13, 22] for further discussion). Considering an approximation γ_T to A_{µ0}^{−1}ψ allows one to weaken condition (9) for concrete priors. Note that verifying (9) for specific priors may impose additional smoothness conditions.

We next consider examples of functionals covered by Theorem 1, including functionals of the invariant measure µ_B. Proofs of the expansions (4) and remainder terms can be found in Section 4.5.

Example 1 (Linear functionals). If Ψ(B) = ∫_{T^d} B(x)a(x)dx for a ∈ L²(T^d), then ψ = a − ∫_{T^d} a and r(B, B0) = 0.

Example 2 (Square functional). If Ψ(B) = ∫_{T^d} B(x)²dx, then ψ = 2B0 and r(B, B0) = ∥B − B0∥²₂.

Example 3 (Power functionals). Let Ψ(B) = ∫B(x)^q dx for q ≥ 2 an integer. Then ψ = q[B0^{q−1} − ∫B0^{q−1}] and, for any K, M > 0 and ν_T → 0,

sup_{B,B0: ∥B∥_∞,∥B0∥_∞ ≤ K, ∥B−B0∥₂ ≤ Mν_T} |r(B, B0)| = O(ν²_T).  (10)

Remark 1 (Posterior contraction rates and remainder). For B, B0 ∈ L̇²(T^d), i.e. satisfying ∫B = ∫B0 = 0, the Poincaré inequality implies ∥B − B0∥₂ ≤ C_d∥∇B − ∇B0∥₂ (e.g. p. 290 in [19]) for some C_d > 0. One can thus replace the L²-norm in the above remainder bound by ∥∇B − ∇B0∥₂. When p = 2, the other conditions in (7), together with (10) and the Sobolev embedding theorem, imply that |r(B, B0)| = O(ε²_T) uniformly over D_T, i.e. we may take ξ_T = √Tε²_T (provided ε_T = o(T^{−1/4})).
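As a quick sanity check of Example 2, the identity r(B, B0) = ∥B − B0∥²₂ can be verified numerically on a grid over the one-dimensional torus; the grid and the test functions below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Numerical check of Example 2: for Psi(B) = int B^2, the representor is
# psi = 2*B0 and the remainder r(B, B0) equals ||B - B0||_2^2 exactly.
# Grid and test functions are illustrative (hypothetical) choices.
N = 1000
x = np.arange(N) / N          # grid on the 1-d torus [0, 1)
dx = 1.0 / N

def integrate(f):
    """Riemann-sum approximation of the integral over the torus."""
    return float(np.sum(f) * dx)

B0 = np.cos(2 * np.pi * x)                 # zero-mean "true" potential
B = B0 + 0.1 * np.sin(4 * np.pi * x)       # zero-mean perturbation

Psi = lambda f: integrate(f ** 2)          # square functional, Example 2
linear_term = integrate(2 * B0 * (B - B0)) # <psi, B - B0>_2
remainder = Psi(B) - Psi(B0) - linear_term

# Example 2 asserts r(B, B0) = ||B - B0||_2^2 exactly (an algebraic identity).
print(abs(remainder - integrate((B - B0) ** 2)))
```

Since ∫B² − ∫B0² − ∫2B0(B − B0) = ∫(B − B0)² holds pointwise in the integrand, the printed discrepancy is floating-point round-off only.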
If only a posterior contraction rate for ∥∇B− ∇B0∥pis available, it may still be possible to exploit additional prior support properties to deduce that ∥B−B0∥2≲νTfor some νT=o(T−1/4), generally slower than εT. This is done for Besov-Laplace priors in Section 2.4. Example 4 (Linear functionals of the invariant measure) .LetΨ(B) =R TdµB(x)φ(x) dxforφ∈L∞(Td), where µB=e2B/R Tde2Bis the invariant measure. Then ψ= 2µB0[φ−R µB0φ]and the remainder term r(B, B 0)satisfies (10). Note that Ψis non- linear in B. Example 5 (Entropy of the invariant measure) .LetΨ(B) =R TdµB(x) logµB(x)dx. Then ψ= 2µB0[logµB0−R µB0logµB0]and the remainder term r(B, B 0)satisfies (10). Example 6 (Square-root of the invariant measure) .LetΨ(B) =R Tdp µB(x)dx. Then ψ=µB0[1√µB0−R√µB0]and the remainder term r(B, B 0)satisfies (10). Example 7 (Power functionals of the invariant measure) .LetΨ(B) =R µB(x)qdxfor q≥2an integer. Then ψ= 2µB0[qµq−1 B0−qR µq B0]and the remainder term r(B, B 0) satisfies (10). 8 The condition r(B, B 0)≤ξT/√ Tin (7) of Theorem 1 is thus satisfied in Examples 2 - 7 as soon as νT=o(T−1/4)(in Example 1 it automatically follows with ξT= 0). For functionals of the invariant measure, the remainder condition (10) also ensures µB is bounded away from zero and infinity, so that the regularity of the various integrands matches that of µB, and hence B. For example, it is known that the square root of an infinitely differentiable function near zero need not be more than C1in general [11] without additional assumptions,
e.g. [49]. Taking µ_B bounded away from zero rules out such situations, where statistical estimation problems can behave qualitatively differently [48, 44, 50] and it is unclear whether √T-rates are attained.

Remark 2 (Functional regularity). Employing a standard Bayesian nonparametric prior for B and considering the induced marginal posterior for Ψ(B) can be viewed as a "plug-in" method. In terms of uniformly controlling linear functionals of B, one expects that a smoothness condition of the form t > d/2 − 1, as in Theorem 1, is sharp for such methods when d ≥ 2; see Sections 2.4-2.5 of [40] and also [14] for more general discussion. For certain models and functionals, Bayesian methods can be tailored to provide efficient semiparametric inference at lower regularities, for instance using targeted priors [51, 52] or posterior corrections [15, 69], but this is beyond the scope of this article.

2.3 Gaussian priors

We first apply our general theory to Gaussian priors, which are widely used for estimating the coefficients of SDEs [43, 47, 56, 30, 65, 9, 28]. For continuously-observed multidimensional reversible diffusions, Giordano and Ray [27] showed that Gaussian priors are conjugate, with explicitly available formulae for posterior inference, and derived minimax-optimal posterior contraction rates under suitable tuning of the hyperparameters. We now show that Gaussian priors also lead to efficient semiparametric inference for a large class of functionals of the potential. As in [27], we consider rescaled Gaussian priors constructed from a base probability measure satisfying the following mild regularity condition. For definitions and background information on Gaussian processes and measures, see [24, Chapter 2] or [22, Chapter 11].

Condition 1.
Fors > dand some κ >0, letΠW= Π W,Tbe a (possibly T-dependent) centred Gaussian Borel probability measure on the Banach space ˙C(Td)that is supported on a separable (measurable) linear subspace of ˙Hd+1+κ(Td)∩˙C(d/2+κ)∨2(Td), and as- sume that its reproducing kernel Hilbert space (RKHS) (HW,∥ · ∥HW)is continuously embedded into ˙Hs+1(Td). Note that the above constant κ > 0can be arbitrarily small. Concrete choices of Gaussian priors satisfying Condition 1 are given in Examples 8 and 9 below. For W∼ΠWwith ΠWsatisfying Condition 1, we construct the rescaled Gaussian prior Π = Π Tby taking the law of the random function B(x) :=W(x) Td/(4s+2d), x ∈Td. (11) By linearity, Πis also a centred Gaussian Borel probability measure on ˙C(Td), with the samesupportas ΠWandwithRKHS Hsatisfying H=HWand∥·∥H=Td/(4s+2d)∥·∥HW (cf. Exercise 2.6.5 in [24]). Remark 3 (Rescaling) .Rescaling by the diverging term Td/(4s+2d)in(11)is a tech- nique borrowed from the theoretical literature on Bayesian nonlinear inverse problems, 9 e.g. [37, 26, 38], which ensures the posterior for Bplaces most of its mass on sets of bounded higher-order smoothness. In that setting, it is typically needed to control stability estimates, uniformly over the bulk of the posterior. In the context of multidi- mensional reversible diffusions, this was similarly used in the posterior contraction rate analysis of [27] to control the nonlinearity of the map B7→µBin(2). Via the localising setsDTin(7), Theorem 1 requires the posterior to contract about the truth in order to perform a LAN expansion of the likelihood, while the bounded Sobolev regularity of the posterior is needed to control stochastic bias
terms in the likelihood (e.g. Lemma 3). The rescaling in (11)ensures such conditions are satisfied. Deriving posterior contraction rates, let alone BvM results, for non-rescaled Gaussian priors in such nonlinear settings is currently an open problem. Theorem 2. LetΠbe the rescaled Gaussian prior from (11)withW∼ΠWsatisfying Condition 1 for some s > d, some κ >0and RKHS HW. Set εT=T−s/(2s+d), suppose thatB0∈˙Hs+1(Td)∩˙C(d+1/2+κ)∨2(Td)and that there exists a sequence B0,T∈HW such that ∥B0,T∥HW=O(1)and∥B0−B0,T∥C1=O(εT)asT→ ∞. LetΨ : ˙C2(Td)→Rbe a functional satisfying the expansion (4)with representor ψ∈˙Ht(Td)for some t >(d/2−1)+and remainder satisfying (10), and assume there exists a sequence γT∈HWsuch that ∥γT∥Hd/2+1+ κ=O(1); ∥γT∥Hd+1+κ=o(√ T); ∥γT∥HW=o(1/(√ Tε2 T)),(12) as well as ∥A−1 µ0ψ−γT∥H1=o(1/(√ TεT)). (13) Then the semiparametric Bernstein–von Mises holds for Ψ. Theorem 2 shows that suitably rescaled Gaussian priors satisfy a semiparametric BvM under the mild regularity Condition 1. The requirements on the ground truth B0 are (essentially) the same as those in Theorem 2.1 in [27], which establishes posterior contraction rates that are used in the proof of Theorem 2 to construct the localising setsDTfrom (7). These conditions entail that B0be (at least) (s+ 1)-Sobolev-regular, and that it be well-approximated by elements of the RKHS HW. For Gaussian priors modelling Sobolev smooth functions, the approximating sequence B0,Tcan be readily constructed; cf. Examples 8 and 9 below. In Theorem 2, we restrict to functionals Ψwith remainders satisfying (10) for con- creteness, since the remainder condition can then be more easily verified via Remark 1. As shown by Examples 1 - 7, this already allows to cover several interesting nonlinear instances. The additional norm bounds in (12) for the approximate LAN representors γTcompared to (8) are used in the verification of the asymptotic invariance property (9) for Gaussian priors. 
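To fix ideas, the following is a minimal numerical sketch of a draw from a rescaled Gaussian series prior in the spirit of (11) on the two-dimensional torus. The Matérn-type Fourier weights v_k, the truncation level K, and the grid size are illustrative assumptions (hypothetical choices echoing Example 8 below), not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of a draw from a rescaled Gaussian series prior a la (11) on T^2.
# The weights v_k, truncation K and grid are illustrative assumptions.
d, s, T, K = 2, 3.0, 100.0, 10

def prior_draw(grid):
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    W = np.zeros_like(X)
    for k1 in range(-K, K + 1):
        for k2 in range(-K, K + 1):
            if (k1, k2) == (0, 0):
                continue  # drop the constant mode: zero-integral constraint
            # Matern-type standard deviation for frequency k = (k1, k2)
            v_k = (1.0 + 4 * np.pi**2 * (k1**2 + k2**2)) ** (-(s + 1) / 2)
            phase = 2 * np.pi * (k1 * X + k2 * Y)
            W += v_k * (rng.standard_normal() * np.cos(phase)
                        + rng.standard_normal() * np.sin(phase))
    # rescaling as in (11): divide the base draw W by T^{d/(4s+2d)}
    return W / T ** (d / (4 * s + 2 * d))

grid = np.linspace(0.0, 1.0, 32, endpoint=False)
B = prior_draw(grid)
print(B.shape)  # (32, 32)
```

Dropping the constant mode enforces the zero-integral identifiability constraint, so the draw averages to (numerically) zero over the grid.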
For priors modelling Sobolev-regular functions, conditions (12) and (13) can typically be verified via standard elliptic regularity estimates and approximation theory for any ψ ∈ Ḣ^t(T^d), t > (d/2−1)+, and thus impose no additional restrictions.

In the following examples, to enforce the identifiability condition ∫_{T^d} B(x)dx = 0, we set the coefficient of the constant function (e_0 ≡ 1 for the Fourier basis and Φ_{−1,0} ≡ 0 for wavelets) equal to zero. For more general Gaussian processes, one can directly recenter prior draws B ↦ B − ∫_{T^d} B(x)dx.

Example 8 (Periodic Matérn process). For s > 3d/2, consider a base Gaussian prior Π_W given by the law of a periodic Matérn process of regularity s + 1 − d/2 (cf. Example 11.8 in [22]), namely the centred Gaussian process W = {W(x), x ∈ T^d} with covariance kernel

K_per(x, y) := (2π)^d Σ_{k∈Z^d\{0}} (1 + 4π²|k|²)^{−(s+1)/2} e_k(x) e_k(y),  x, y ∈ T^d,  (14)

where {e_k(x) = e^{2πik·x} : k ∈ Z^d} is the Fourier basis of L²(T^d); see Section A.1.1 of [27] for further details. By the Fourier series characterization of periodic Sobolev spaces, it has RKHS H_W = Ḣ^{s+1}(T^d) with RKHS norm equivalence ∥·∥_{H_W} ≃ ∥·∥_{H^{s+1}}. The periodic Matérn process satisfies Condition 1, being supported in Ċ^{s+1−d/2−η}(T^d) for any η > 0, which is a separable linear subspace of Ḣ^{d+1+κ}(T^d) ∩ Ċ^{(d/2+κ)∨2}(T^d) for sufficiently small κ > 0 since s > 3d/2 (see Section A.1.1 of [27] for details). Given B0 ∈ Ḣ^{s+1}(T^d) = H, we may apply Theorem 2 with the trivial choice B_{0,T} ≡ B0.

Let Ψ : Ċ²(T^d) → R be a functional with expansion (4), representor ψ ∈ Ḣ^t(T^d) for some t > (d/2−1)+ and remainder satisfying
(10). Since B0∈˙Hs+1(Td)and s > 3d/2, we also have µ0∝e2B0∈Hs+1(Td)⊂Cd+1+κ(Td), the last inclusion holding by the Sobolev embedding. Hence by Lemma 9, there exists a unique ele- ment A−1 µ0ψ∈˙H(d/2+1+ κ)∨2(Td)such that Aµ0A−1 µ0ψ=ψalmost everywhere, and ∥A−1 µ0ψ∥H(d/2+1+ κ)∨2≲∥ψ∥H(d/2−1+κ)+for any κ > 0small enough. Define its trun- cated Fourier series γT:=X k∈Zd\{0}:|k|∞≤K⟨A−1 µ0ψ, ek⟩2ek, forK=KT≃T1/(2s+d). Using the Fourier characterization of Sobolev spaces, it holds thatγT∈˙Hr(Td)for all r≥0with the estimate ∥γT∥Hr≲Kmax{r−(d/2+1+ κ)∨2,0}∥A−1 µ0ψ∥H(d/2+1+ κ)∨2≲  T(r−d/2−1−κ)+ 2s+d ,ifd≥2, T(r−2)+ 2s+d, ifd= 1, (15) using which one can readily verify (12). Similarly, (13)holds since ∥A−1 µ0ψ−γT∥H1≲ K−(d/2+κ)∨1∥A−1 µ0ψ∥H(d/2+1+ κ)∨1≲T−(d/2+κ)∨1 2s+d. We may thus apply Theorem 2 to the periodic Matérn base prior (14)and any B0∈˙Hs+1(Td),s >3d/2, for any functional Ψ :˙C2(Td)→Radmitting expansion (4)with representor ψ∈˙Ht(Td),t >(d/2−1)+, and remainder satisfying (10). The above priors are equivalent to mean-zero Gaussian processes with covariance operatorequaltoaninversepoweroftheLaplacian[47,68], forwhichposteriorinference based on discrete data can be computed using finite element methods [43]. Another approach is to truncate a Gaussian series expansion in a suitable basis. We illustrate this in the following example. Example 9 (Truncated wavelet series) .Let{Φlr, l∈ {− 1,0}∪N, r= 0, . . . , (2ld−1)∨ 0}be an orthonormal tensor product wavelet basis of L2(Td), obtained from S-regular periodised Daubechies wavelets in L2(T); see Section 4.3 in [24] for details. Consider a base truncated Gaussian wavelet series prior ΠW:=L(W), W (x) :=JX l=0X r2−l(s+1)glrΦlr(x)glriid∼N(0,1), x∈Td,(16) for some s > dandJ=JT∈Nsuch that 2JT≃T1/(2s+d). As shown in Example 2.2 of [27], ΠWsatisfies Condition 1 with support equal to the wavelet approximation space VJ:=span(Φlr, l∈ {0} ∪N, r= 0, . . . , (2ld−1)∨0), and has RKHS HW=VJ 11 with∥h∥HW=∥h∥Hs+1. 
IfB0∈˙Cs+1(Td), then by standard wavelet approximation properties (e.g. Section 4.3 in [24]), the projection B0,T:=PJ l=0P r⟨B0,Φlr⟩2Φlrsat- isfies∥B0,T∥Hs+1≤ ∥B0∥Hs+1<∞and∥B0−B0,T∥C1≲2−Js≃T−s/(2s+d)=εTas required for Theorem 2. LetΨ :˙C2(Td)→Rbe a functional admitting expansion (4)with representor ψ∈ ˙Ht(Td)for some t >(d/2−1)+and remainder satisfying (10). Since B0∈˙Cs+1(Td) with s > d, arguing as in Example 8 implies that for sufficiently small κ > 0, there exists a unique element A−1 µ0ψ∈˙H(d/2+1+ κ)∨2(Td)such that ∥A−1 µ0ψ∥H(d/2+1+ κ)∨2≲ ∥ψ∥H(d/2−1+κ)+. Consider the truncated wavelet series γT:=JX l=0X r⟨A−1 µ0ψ,Φlr⟩2Φlr, which satisfies γT∈˙Hr(Td)for all r≥0and the norm estimate (15). As in Ex- ample 8, one can verify (12)and(13). Theorem 2 therefore applies to the truncated Gaussian wavelet series prior (16)and any B0∈˙Cs+1(Td),s > d, for any functional Ψ : ˙C2(Td)→Rwith expansion (4), representor ψ∈˙Ht(Td),t >(d/2−1)+, and remainder satisfying (10). 2.4 Besov-Laplace priors WenextconsiderBesov-Laplacepriors, constructedviarandomwaveletexpansionswith i.i.d. random coefficients following the Laplace (or double exponential) distribution with density λ(z) =e−|z|/2,z∈R. Specifically, we employ rescaled and truncated priors obtained starting from a base probability measure ΠWgiven by the law of the random function W(x) :=JX l=0X r2−l(s+1−d/2)glrΦlr(x), g lriid∼λ, x ∈Td,(17) with J=JT∈Nsuch that 2J≃T1/(2s+d), and where {Φlr, l∈ {− 1,0} ∪N, r= 0, . . . , (2ld−1)∨0}is an S-regular orthonormal tensor product wavelet basis of L2(Td) (with S∈Nfixed but arbitrarily large), cf. Example 9. This extends the construction of Gaussian wavelet series priors, and represents a specific instance of the more gen- eral class of p-exponential (or Besov) priors [35, 6, 3], which
prescribe random wavelet coefficients with tail behaviour between the Laplace (corresponding to p = 1) and the Gaussian distribution (p = 2). Besov-Laplace priors have recently enjoyed significant popularity within the inverse problems and imaging communities [35, 17, 33, 31, 4, 36], since they exhibit attractive sparsity-promoting and edge-preserving properties, while also maintaining a log-concave structure favourable to computation and theoretical analysis. Note that in (17), the wavelet coefficient corresponding to the constant function Φ_{−1,0} ≡ 1 has been set to zero to ensure the zero-integral identifiability condition.

In a similar spirit to the Gaussian priors studied in Section 2.3, we construct rescaled truncated Besov-Laplace priors Π = Π_T given by, for W ∼ Π_W as in (17),

B(x) = W(x)/T^{d/(2s+d)},  x ∈ T^d.  (18)

In the present setting, these priors are known to achieve minimax-optimal contraction rates [27], and the following result shows that they also satisfy a semiparametric BvM.

Theorem 3. For s > d + (d/2)∨2, let Π be the rescaled Besov-Laplace prior from (18), and suppose B0 ∈ Ḣ^{s+1}(T^d). Let Ψ : Ċ²(T^d) → R be a functional admitting expansion (4) with representor ψ ∈ Ċ^t(T^d) for some t > (d/2−1)+ and remainder satisfying (10). Then the semiparametric Bernstein–von Mises holds for Ψ.

The conditions on Π and B0 in Theorem 3 match those in Theorem 2.4 of [27], which establishes posterior contraction for these priors. These contraction results play a localising role in our proof, similar to that played by the corresponding result for Gaussian priors in Theorem 2. In Theorem 3, we again restrict for concreteness to functionals Ψ with remainder satisfying (10). The (slightly stronger) Hölder-smoothness condition imposed on ψ, as opposed to the Sobolev requirements in Theorems 1 and 2, is due to the necessity of approximating the LAN representor A_{µ0}^{−1}ψ in the stronger ∥·∥_{W^{1,∞}}-norm (cf.
the second display in (8)), which we approach by invoking Hölder-type regularity estimates for elliptic PDEs and wavelet approximation properties in sup-norm. We note, however, that the full smoothness range t > (d/2−1)+ is allowed in Theorem 3.

3 Numerical illustrations with Gaussian priors

In this section, we illustrate our theoretical findings at finite sample sizes via a small numerical simulation study. We focus on modelling the unknown potential B in (1) using Gaussian priors, which were shown to be conjugate in [29] with explicit formulae for the posterior mean and covariance, cf. (20) below. We provide an implementation of these procedures and empirically explore the applicability of our theorems.

For three different periodic 'ground truths' B defined on the bi-dimensional torus T² = (0, 1]², cf. eq. (19) and Figure 2 (top row), we simulate the continuous diffusion trajectories X^T for increasing T and compute the resulting conjugate posterior distributions of B | X^T. We then select three nonlinear real-valued functionals Ψ and compute the (non-Gaussian) plug-in posteriors for Ψ(B) | X^T via Monte Carlo simulation, reporting coverage scores and lengths of the 95%-credible intervals, as well as root-mean-square errors (RMSEs) of the posterior mean. We run each simulation 250 times, recording average values and standard deviations where relevant. All experiments were carried out on a MacBook Pro with M1 processor and 8GB RAM. The code to reproduce these simulations is available at: https://github.com/MattGiord/Rev-Diff .

Data generation. We take the three true potential energy fields on T² to be:

B^(1)(x, y) = e^{−(7.5x−5)²−(7.5y−5)²} + e^{−(7.5x−2.5)²−(7.5y−2.5)²};
B^(2)(x, y) = 2 + e^{−(7.5x−5)²−(7.5y−5)²} − e^{−(7.5x−2.5)²−(7.5y−2.5)²};
B^(3)(x, y) = e^{−(7.5x−5.5)²−(7.5y−5.5)²} + 0.75 e^{−(5x−1.25)²−(7.5y−5.5)²} +
1.25 e^{−(7.5x−5.5)²−(5y−1.25)²} + e^{−(7.5x−2)²−(7.5y−2)²}.  (19)

For each given B, we simulate the continuous trajectory (X_t : t ≥ 0) via the Euler–Maruyama scheme, iterating

x_{r+1} = x_r + ∇B(x_r)δ_t + √δ_t W_r,  W_r iid∼ N(0, I₂),  r ≥ 0.

Across all the experiments, we set the time stepsize to δ_t = 10^{−4}, resulting in realistic approximations of the continuous diffusion paths, cf. Figure 1 (right). The (uninfluential) initial condition was fixed to x_0 = (1, 1). We repeat the Euler–Maruyama scheme 5×10^5 and 10^6 times, yielding time horizons T = 50 and T = 100, respectively.

Figure 2: Top row: the three potential energy fields B^(i), i = 1, 2, 3, from (19). Bottom row: the corresponding posterior means B̄^(i)_T, i = 1, 2, 3, at time T = 100. The relative L²-estimation errors ∥B^(i) − B̄^(i)_T∥₂/∥B^(i)∥₂ are equal to 0.21, 0.03, 0.12, respectively.

Prior specification and posterior inference. We employ the periodic Matérn process prior from Example 8, modelling

B(x) = Σ_{k∈Z²\{0}: |k|_∞≤K} v_k g_k e_k(x),  g_k iid∼ N(0, 1),  x ∈ T²,

where v_k = (1 + |k|²)^{−(s+1)/2} T^{−1/(2s+2)} with s = 3 = 3d/2, {e_k : k ∈ Z²} is the bi-dimensional Fourier basis, and K ∈ N is a sufficiently high truncation level. Identifying any function B = Σ_{k∈Z²\{0}: |k|_∞≤K} B_k e_k with its Fourier coefficient vector B = (B_k)_k, this corresponds to assigning the multivariate Gaussian prior with diagonal covariance matrix, B ∼ N(0, Υ), Υ = diag[(v_k)_k], whence the conjugate computation in Section 2.3.3 of [27] yields the Gaussian posterior distribution

B | X^T ∼ N((Σ + Υ^{−1})^{−1}H, (Σ + Υ^{−1})^{−1}),  (20)

where

Σ = [∫₀^T ∇e_k(X_t)·∇e_{k′}(X_t)dt]_{k,k′},  H = [∫₀^T ∇e_k(X_t)·dX_t]_k.

For each simulated continuous trajectory, we numerically compute the above matrix Σ and vector H by approximating the integrals with Riemann sums, which we then use to evaluate the posterior mean and covariance matrix in (20). For nonlinear functionals Ψ : Ċ²(T²) → R, the plug-in posterior distribution of Ψ(B) | X^T is generally non-Gaussian and not available in closed form.
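The Euler–Maruyama data-generation step described above can be sketched as follows. The ground truth B^(1) and the step size follow (19) and the text; the wrapping of the path onto the torus, the reduced number of steps, and the analytic gradient are implementation choices of this illustration rather than details taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama simulation of dX_t = grad B(X_t) dt + dW_t on the torus
# T^2 = (0,1]^2, with the ground truth B^(1) from (19). The sqrt(dt)
# scaling of the Gaussian increments is the standard choice; the step
# count is kept small for illustration.
def bump(x, y, ax, ay):
    return np.exp(-(7.5 * x - ax) ** 2 - (7.5 * y - ay) ** 2)

def grad_B1(p):
    x, y = p
    g1 = bump(x, y, 5.0, 5.0)
    g2 = bump(x, y, 2.5, 2.5)
    dx = -2 * 7.5 * ((7.5 * x - 5.0) * g1 + (7.5 * x - 2.5) * g2)
    dy = -2 * 7.5 * ((7.5 * y - 5.0) * g1 + (7.5 * y - 2.5) * g2)
    return np.array([dx, dy])

def euler_maruyama(x0, dt=1e-4, n_steps=5000):
    path = np.empty((n_steps + 1, 2))
    path[0] = x0
    for r in range(n_steps):
        drift = grad_B1(path[r])
        noise = np.sqrt(dt) * rng.standard_normal(2)
        path[r + 1] = (path[r] + drift * dt + noise) % 1.0  # wrap onto T^2
    return path

path = euler_maruyama(np.array([1.0, 1.0]))
print(path.shape)  # (5001, 2)
```

Running the iteration 10^6 times with δ_t = 10^{−4}, as in the paper, yields the time horizon T = 100.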
Table 1: Coverage scores (and average length) of the 95% credible intervals, and RMSEs (with standard deviations) of the posterior mean for three nonlinear functionals Ψ(B), obtained over 250 repeated experiments.

              Coverage (Length)                RMSE (Std. dev.)
Ψ(B)          T = 50          T = 100         T = 50             T = 100
Ground truth: B^(1)
Ψ1            0.61 (0.033)    0.91 (0.023)    0.013 (0.0079)     0.0061 (0.0037)
Ψ2            0.88 (0.022)    0.95 (0.014)    0.0053 (0.0075)    0.0024 (0.0018)
Ψ3            0.82 (0.024)    0.96 (0.017)    0.0075 (0.0057)    0.0037 (0.0022)
Ground truth: B^(2)
Ψ1            0.72 (0.042)    0.92 (0.025)    0.016 (0.0073)     0.0074 (0.0063)
Ψ2            0.95 (0.038)    0.96 (0.016)    0.0078 (0.0066)    0.0048 (0.0051)
Ψ3            0.86 (0.029)    0.93 (0.016)    0.0084 (0.0054)    0.0045 (0.0042)
Ground truth: B^(3)
Ψ1            0.66 (0.029)    0.91 (0.024)    0.013 (0.0087)     0.0054 (0.0043)
Ψ2            0.84 (0.021)    0.96 (0.025)    0.0085 (0.0071)    0.0032 (0.0027)
Ψ3            0.76 (0.021)    0.97 (0.019)    0.0079 (0.0059)    0.0032 (0.0026)

To compute the corresponding posterior mean and credible intervals, we use Monte Carlo approximation, which is straightforward to implement by sampling B^(1), . . . , B^(M) from the explicitly available posterior distribution (20) of B | X^T and computing the samples Ψ(B^(1)), . . . , Ψ(B^(M)). For each Monte Carlo approximation, we used M = 1000 samples.

Results. For the three instances of potential energy fields B^(1), B^(2), B^(3) from (19), cf. Figure 2 (top row), Table 1 reports the coverage and average lengths of the 95% credible intervals for three nonlinear functionals Ψ1, Ψ2, Ψ3 at times T = 50, 100, each obtained through 250 repeated experiments. To illustrate our results over a range of functionals, we took Ψ1(B) = ∫B(x)²dx (Example 2), Ψ2(B) = ∫B(x)⁴dx (Example 3 with
q= 4) and Ψ3(B) =R µB(x) logµB(x)dxto be the entropy of the invariant measure (Example 5). For each combination of ground truth Band functional Ψ, the obtained coverages are higher at the larger time horizon; in particular, for T= 100, they are very close to the nominal level 95%predicted by Theorem 2. The average lengths of the credible interval are also seen to decrease as the time horizon increases. Table 1 further reports the average estimation errors for the posterior mean of the plug- in posteriors, with the associated standard deviation relative to the 250 experiments. As expected from our theoretical results, the RMSE scores (and standard deviations) become smaller across the board as Tincreases. For the ground truth B(1), Figure 3 compares individual realisations of the plug-in posterior distributions of Ψi(B)|XT,i= 1,2,3, at times T= 50andT= 100, ob- tained via Monte Carlo approximation. The plot displays a progressively more accurate Gaussian approximation for the plug-in posterior of all three functionals, in line with the findings of Theorem 2. For these specific realisations, the 95%credible intervals (vertical blue lines) correctly cover the ground truths. Lastly, the bottom row of Figure 2 shows the posterior means of the three considered potentials B(1),B(2),B(3)at time T= 100, displaying an excellent visual quality in the reconstruction. 15 Figure 3: Top row: plug-in posterior distributions of Ψi(B)|XT,i= 1,2,3, atT= 50. Bottom row: plug-in posteriors at T= 100. The vertical red lines indicate the ‘ground truths’ Ψi(B1), for B1as in Figure 2 (top-left). The vertical blue lines identify the 95% credible intervals. The green line corresponds to a normal PDF centred at the posterior mean and with variance equal to the posterior variance. 
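The Monte Carlo plug-in step described in this section can be sketched as follows. The posterior mean and covariance below are synthetic stand-ins for the conjugate formulae (20), which in the paper are computed from the observed trajectory; only the sampling-and-quantile mechanics match the text, and Parseval's identity is used to evaluate Ψ1 on Fourier coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo plug-in posterior for a nonlinear functional: sample
# coefficient vectors from a Gaussian posterior, push each sample through
# Psi, and read off the posterior mean and a 95% credible interval.
# The mean/covariance here are toy stand-ins for (20).
n_coef, M = 20, 1000
post_mean = 0.5 * rng.standard_normal(n_coef)  # stand-in for (Sigma+Ups^-1)^-1 H
post_cov = 0.01 * np.eye(n_coef)               # stand-in posterior covariance

def Psi1(coefs):
    # For B = sum_k B_k e_k in an orthonormal basis, Psi_1(B) = int B^2
    # reduces to the squared coefficient norm (Parseval).
    return float(np.sum(coefs ** 2))

samples = rng.multivariate_normal(post_mean, post_cov, size=M)
values = np.array([Psi1(b) for b in samples])

estimate = values.mean()                        # plug-in posterior mean
lo, hi = np.quantile(values, [0.025, 0.975])    # 95% credible interval
print(estimate, (lo, hi))
```

With the conjugate posterior (20) substituted for the toy Gaussian, this is exactly the procedure used to produce Table 1 and Figure 3.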
4 Proofs 4.1 Proof of Theorem 1: general semiparametric BvM Following the approach of [54, 15], the proof proceeds by showing that the Laplace transform of the rescaled marginal posterior for Ψ(B)converges to that of the limiting Gaussian distribution, which then implies weak convergence to this Gaussian limit. We first localize to sets on which the full posterior concentrates. LetDT⊂˙C2(Td)be measurable sets such that Π(DT|XT)P0−→1asT→ ∞, and let ΠDT(·) =Π(· ∩ D T) Π(DT)(21) denote the prior Πconditioned to DT. The corresponding posteriors satisfy ∥Π(·|XT)− ΠDT(·|XT)∥TV≤2Π(Dc T|XT)(e.g. p142 of [67]), which tends to zero in P0-probability asT→ ∞. Since convergence in total variation is stronger than weak convergence, it suffices to show the desired result for the posterior based on the conditioned prior ΠDT instead of Π. The next lemma expands the posterior Laplace transform of√ T(Ψ(B)− eΨT)for the conditioned prior and a suitable centering eΨT: EΠDTh eu√ T(Ψ(B)−eΨT) XTi =R DTeu√ T(Ψ(B)−eΨT)eℓT(B)dΠ(B)R DTeℓT(B)dΠ(B). Lemma 1. Suppose B0∈˙C(d/2+1+ η)∨2(Td)for some η >0, and let Ψ :˙C2(Td)→Rbe a functional satisfying the expansion (4)with representor ψ∈˙L2(Td). For 1≤p, q≤ ∞ 16 such that 1/p+ 1/q= 1,M > 0andεT, ζT, ξT→0, let DT⊆ B∈˙C2(Td) :∥∇B− ∇B0∥p≤MεT,∥B−B0∥Hd/2+1+ η≤ζT, ∥B∥Hd+1+η≤M,|r(B, B 0)| ≤ξT/√ T . LetγT∈˙Hd/2+η(Td)∩˙C1(Td)be a sequence of fixed functions such that as T→ ∞, ∥γT∥C1∨ ∥γT∥H(d/2+η)∨1=O(1); ∥A−1 µ0ψ−γT∥W1,q=o(1/(√ TεT)), where Aµ0is the second-order operator (5). For fixed u∈R, define the perturbations Bu=B−uγT/√ T. Then the (localized) posterior Laplace
transform satisfies EΠDTh eu√ T(Ψ(B)−eΨT) XTi =eu2 2TRT 0∥∇γT(Xt)∥2dtR DTeℓT(Bu)dΠ(B)R DTeℓT(B)dΠ(B)(1 +oP0(1)) asT→ ∞, with centering eΨT= Ψ( B0) +1 TZT 0∇γT(Xt).dWt. (22) Proof.Using Bayes formula and setting ZT=R DTeℓT(B)dΠ(B)to be the normalizing constant, IT(u) :=EΠDT[eu√ T(Ψ(B)−eΨT)|XT] =e−u√ TRT 0∇γT(Xt).dWt1 ZTZ DTeu√ T(⟨B−B0,ψ⟩2+r(B,B0))eℓT(B)−ℓT(Bu)eℓT(Bu)dΠ(B), where we have used the functional expansion (4). Using the LAN expansion given in Lemma 2, ℓT(B)−ℓT(Bu) =u√ TZT 0∇γT(Xt).dWt+u2 2TZT 0∥∇γT(Xt)∥2dt −u√ TZT 0∇(B−B0)(Xt).∇γT(Xt)dt =u√ TZT 0∇γT(Xt).dWt+u2 2TZT 0∥∇γT(Xt)∥2dt −uGT[∇(B−B0).∇γT]−u√ T⟨∇(B−B0),∇γT⟩µ0. Using this and supB∈DT|r(B, B 0)|=o(1/√ T), the second to last display equals eu2 2TRT 0∥∇γT(Xt)∥2dt+oP0(1) ×1 ZTZ DTe−uGT[∇(B−B0).∇γT]eu√ T[⟨B−B0,ψ⟩2−⟨∇(B−B0),∇γT⟩µ0]eℓT(Bu)dΠ(B). By Lemma 3, supB∈DT|GT[∇(B−B0).∇γT]|=oP0(1). Integrating by parts, ⟨∇(B−B0),∇γT⟩µ0=Z Td∇(B−B0)(x).∇γT(x)µ0(x)dx =Z Td(B−B0)(x)∇ ·(µ0∇γT)(x)dx =⟨B−B0, Aµ0γT⟩2 17 forAµ0the second order elliptic operator defined in (5). Therefore, writing ψ= Aµ0A−1 µ0ψ(cf. Lemma 9), for B∈ DT, √ T|⟨B−B0, ψ⟩2− ⟨∇ (B−B0),∇γT⟩µ0|=√ T|⟨B−B0, ψ−Aµ0γT⟩2| =√ T|⟨∇(B−B0), µ0∇(A−1 µ0ψ−γT)⟩2| ≲√ T∥µ0∥∞∥∇(B−B0)∥p∥∇(A−1 µ0ψ−γT)∥q ≲√ TεT∥A−1 µ0ψ−γT∥W1,q=o(1) by assumption and since µ0is bounded on Td. The Laplace transform in question then equals IT(u) =eu2 2TRT 0∥∇γT(Xt)∥2dt+oP0(1)1 ZTZ DTeℓT(Bu)dΠ(B) as desired. Proof of Theorem 1. Since DTsatisfies Π(DT|XT)P0−→ 1by assumption, it suffices to prove the result for the prior (21) conditioned to DTby the argument following that equation. We shall show that the resulting posterior Laplace transform JT(u) = EΠDT[eu√ T(Ψ(B)−ˆΨT)|XT]converges in probability to exp(u2∥∇A−1 µ0ψ∥2 µ0/2), which is the Laplace transform of a N(0,∥∇A−1 µ0ψ∥2 µ0)distribution, for every u∈Rin a neigh- bourhoodof0. SinceconvergenceofsuchconditionalLaplacetransformsin P0-probability implies conditional convergence in distribution in P0-probability (e.g. 
Lemma 1 of the supplement of [15] or Corollary 2 of [53]), this will complete the proof. By Lemma 5, we have ˆΨT=eΨT+oP0(1/√ T), so we may replace the centering in JT(u)byeΨTdefined in (22) at the cost of a multiplicative eoP0(1)term. The resulting Laplace transform is then exactly the one considered in Lemma 1. Since the sets DTand functions (γT)in the present theorem satisfy the conditions of Lemma 1, that lemma implies JT(u) =eu2 2TRT 0∥∇γT(Xt)∥2dt+oP0(1)R DTeℓT(Bu)dΠ(B)R DTeℓT(B)dΠ(B). ApplyingLemma5tothefirstterm(since ∥∇A−1 µ0ψ−∇γT∥µ0≲∥A−1 µ0ψ−γT∥W1,q=o(1) forq≥2)andusingassumption(9)forthesecond,weconclude JT(u) =eu2∥∇A−1 µ0ψ∥2 2/2+oP0(1). The theorem then follows from the convergence of Laplace transforms. 4.2 Auxiliary results In this section, we present technical results that are used in the main proofs. We have the following local asymptotic normality (LAN) expansion. Lemma 2. IfB0∈C(d/2+1+ κ)∨2(Td)andh∈Hd/2+1+ κfor some κ >0, then ℓT(B0+h/√ T)−ℓT(B0) =WT(h)−1 2TZT 0∥∇h(Xt)∥2dt, where, under PB0and as T→ ∞, WT(h) =1√ TZT 0∇h(Xt).dWt→dN(0,∥∇h∥2 µ0), 1 2TZT 0∥∇h(Xt)∥2dt→PB01 2∥∇h∥2 µ0. 18 Proof.This is a specific instance of Lemma 6 of [40]. We next require some additional definitions. Let LB0:H2(Td)→L2(Td)be the generator of the diffusion (1) given by Lu=LB0u=1 2∆u+∇B0.∇u. (23) Further equip L2 µ0(Td) = f∈L2(Td) :R Tdf(x)dµ0(x) = 0 with the corresponding pseudo-distance d2 L(f, g) =dX i=1 ∂xiL−1 B0[f−g] 2 ∞, which is well-defined since the solution map L−1 B0:L2 µ0(Td)→˙H2(Td)acts on L2 µ0(Td) as soon as B∈C2, see Section 6 of [40] for details. Let N(F, d, τ)denote the covering number of F, i.e. the minimal number of d-balls of radius τneeded to cover a set F. Further define JF=ZDF 0p 2 log 2 N(F,6dL, τ)dτ, (24) where DFis the dL-diameter of F. Using these quantities,
we obtain the following lemma controlling the remainder term in the LAN expansion, uniformly over a function class. The constant η >0below can be arbitrarily small and does not affect the required regularity in a significant way. Lemma 3. For some η >0, suppose B0∈C(d/2+η)∨2(Td)and for M > 0andζT→0, let DT⊆ {B∈˙C2(Td) :∥B∥Hd+1+η≤M,∥B−B0∥Hd/2+1+ η≤ζT}. LetγT∈˙Hd/2+η(Td)∩˙C1(Td)be (a sequence of) fixed functions and suppose Γ := lim sup T→∞max(∥γT∥C1,∥γT∥H(d/2+η)∨1)<∞. Then as T→ ∞, E0sup B∈DT|GT[∇(B−B0).∇γT]| →0. Proof.Applying Lemma 1 of [40] with the µ0-centered function class FT={gB(x) =∇(B−B0)(x).∇γT(x)− ⟨∇ (B−B0),∇γT⟩µ0:B∈ DT} ∪ {0}, the expected supremum under consideration is upper bounded as E0sup B∈DT|GT[gB]| ≤2√ Tsup gB∈FT∥L−1 B0gB∥∞+ 4√ 2JFT (25) with JFTdefined in (24). Turning to the entropy integral JFT, for all B,¯B∈ DT, using the Sobolev embedding Hd/2+κ,→L∞for any κ >0and the PDE estimate in Lemma A.1 of [27], dL(gB, g¯B)≲∥L−1 B0[gB−g¯B]∥Hd/2+1+ κ≲∥gB−g¯B∥Hd/2−1+κ 19 for0< κ < min(η,1/2). Therefore, using the Runst-Sickel lemma ([55], p. 345 or Lemma 2 of [40]), dL(gB, g¯B)≲∥∇(B−¯B).∇γT− ⟨∇ (B−¯B),∇γT⟩µ0∥H(d/2−1+κ)+ ≲∥∇(B−¯B)∥H(d/2−1+κ)+∥∇γT∥∞+∥∇(B−¯B)∥∞∥∇γT∥H(d/2−1+κ)+ +|⟨∇(B−¯B),∇γT⟩µ0|∥1∥H(d/2−1+κ)+ ≲∥B−¯B∥Hd/2+1+ κmax(∥γT∥C1,∥γT∥H(d/2+κ)∨1) ≲Γ∥B−¯B∥Hd/2+1+ κ forT >0large enough. Since 0< κ < η, the dL-diameter of FTthus satisfies DFT≲ ΓζT→0by assumption. Writing Hr 1for the unit ball of Hr(Td), the metric entropy is then bounded as, for T >0large enough, logN(FT,6dL, τ)≤logN(DT, CΓ∥ · ∥Hd/2+1+ κ, τ) ≤logN(MHd+1+η 1 ,∥ · ∥Cd/2+1+ κ, τ/(C′Γ)) ≲C′MΓ τ d (d+1+η)−(d/2+1+ κ) , where the last inequality follows by arguing as in the proof of Theorem 4.3.36 in [24] as soon as (d+ 1 + η)−(d/2 + 1 + κ)> d/2, i.e. 0< κ < η. This yields JFT≲ZDFT 0p log 2 + ( MΓ/τ)d/2 d/2+η−κdτ≲DFT+Dη−κ d/2+η−κ FT→0. The second term in (25) is thus o(1)asT→ ∞. 
For the first term, arguing as above, for all B∈ DTandκ >0small enough, ∥L−1 B0gB∥∞≲∥gB∥H(d/2−2+κ)+ ≲∥∇(B−B0)∥H(d/2−2+κ)+∥∇γT∥∞+∥∇(B−B0)∥∞∥∇γT∥H(d/2−2+κ)+ ≲Γ∥B−B0∥Hd/2+1+ κ≲ΓζT→0, where we have used the Sobolev embedding theorem, Lemma A.1 of [27] and Lemma 2 of [40]. This shows that the right side of (25) tends to zero as T→ ∞, completing the proof. We require the following L2(P0)-bound for averages of square functions of the diffu- sion process. Lemma 4. Suppose B0∈C(d/2+1+ κ)∨2(Td)andh∈Hd/2+κ(Td)for some κ > 0. Then E0 1 TZT 0h(Xs)2ds−Z Tdh(x)2dµ0(x)!2 ≤C1 T∥h∥2 Hd/2+κ+1 T2∥h∥4 Hd/2+κ , where Cdepends only on d, κand∥B0∥Cd/2+1+ κ. Proof.Since t7→t2is a smooth map, the function fh(x) =h(x)2− ∥h∥2 µ0∈L2 µ0(Td)∩ Hd/2+κ(Td)⊂C(Td), and moreover, ∥fh∥Hd/2+κ≤C∥h∥2 Hd/2+κ+∥h∥2 µ0∥1∥Hd/2+κ≤C′∥h∥2 Hd/2+κ 20 foraconstant C′=C′(d, κ,∥µ0∥∞)since Hd/2+κisanalgebrafor κ >0. Let L−1=L−1 B0 be theinverse ofthe generator Ldefined in (23), see Section 6of [40] forits construction. Lemma 11 of [40] implies that fh=LL−1[fh]everywhere and ∥L−1[fh]∥Hd/2+2+ κ≤ C(d,∥B0∥Cd/2+1+ κ)∥fh∥Hd/2+κ. By the Sobolev embedding theorem, L−1[fh]∈C2and so we may apply Itô’s formula to obtain ZT 0fh(Xs)ds=ZT 0LL−1[fh](Xs)ds =L−1[fh](XT)−L−1[fh](X0)−ZT 0∇L−1[fh](Xs).dWs. For the first term, we use the bound ∥L−1[fh]∥C2≲∥L−1[fh]∥Hd/2+2+ κ≤C∥h∥2 Hd/2+κ. Using Itô’s isometry, E ZT 0∇L−1[fh](Xs).dWs!2 =EZT 0∥∇L−1[fh](Xs)∥2ds ≲T∥L−1[fh]∥C1≲T∥h∥2 Hd/2+κ with the same dependence on the constants. Normalizing everything by 1/Tthen gives the result. From this lemma, we deduce the limiting covariance in the expansion of the Laplace transform in Lemma 1. Lemma5. Suppose B0∈C(d/2+1+ κ)∨2(Td)for some κ >0and let ψ∈˙Ht(Td)witht > d/2−1.
https://arxiv.org/abs/2505.16275v1
Further let (γT:T >0)⊂Hd/2+1+ κ(Td)be a sequence of functions such that Kγ:= lim supT→∞∥γT∥Hd/2+1+ κ<∞and∥∇A−1 µ0ψ− ∇γT∥µ0→0asT→ ∞. Then forˆΨTandeΨTdefined in (6)and(22), respectively, we have ˆΨT−eΨT=oP0(1/√ T). Furthermore, as T→ ∞, 1 TZT 0∥∇γT(Xt)∥2dtP0−→ ∥∇ A−1 µ0ψ∥2 µ0. Proof.Using the definitions (6) and (22), ˆΨT−eΨT=1 TZT 0 ∇A−1 µ0ψ(Xt)− ∇γT(Xt) .dWt+oP0(T−1/2).(26) We then have E0 1 TZT 0 ∇A−1 µ0ψ(Xt)− ∇γT(Xt) .dWt!2 =1 T2E0ZT 0dX i=1[∂xiA−1 µ0ψ(Xs)−∂xiγT(Xs)]2ds. We may without loss of generality take κ >0small enough that 0< κ < t −d/2 + 1. For such κ, Lemma 9 implies ∥∇A−1 µ0ψ∥Hd/2+κ≲∥A−1 µ0ψ∥Hd/2+1+ κ≤C∥ψ∥Hd/2−1+κ≤ C∥ψ∥Ht, where Cdepends only on d, κand∥B0∥C|d/2−1+κ|+1. We thus have that ∥∇A−1 µ0ψ− ∇γT∥Hd/2+κ≲∥ψ∥Ht+Kγ<∞forT >0large enough. Applying Lemma 4 with each hi(x) =∂xiA−1 µ0ψ(x)−∂xiγT(x)then gives E0 1 TZT 0∥∇A−1 µ0ψ(Xt)− ∇γT(Xt)∥2dt− ∥∇ A−1 µ0ψ− ∇γT∥2 µ0!2 ≲1 T(27) 21 forT >0large enough. Therefore, E0 1 TZT 0 ∇A−1 µ0ψ(Xt)− ∇γT(Xt) .dWt!2 =1 T∥∇A−1 µ0ψ− ∇γT∥2 µ0+O(T−3/2) =o(1/T) by assumption. Since convergence in L2(P0)implies convergence in P0-probability, the stochastic integral term in (26) is oP0(T−1/2)as desired. Turning to the second assertion, arguing as for (27) and using Lemma 4, E0 1 TZT 0∥∇A−1 µ0ψ(Xt)∥2dt− ∥∇ A−1 µ0ψ∥2 µ0!2 ≤C T(28) forC=C(d, κ,∥B0∥Cd/2+κ+1,∥ψ∥Ht), so that in particular1 TRT 0∥∇A−1 µ0ψ(Xt)∥2dtP0−→ ∥∇A−1 µ0ψ∥2 µ0. Expanding out the difference of two squares and using Cauchy-Schwarz, 1 TZT 0∥∇γT(Xt)∥2dt−1 TZT 0∥∇A−1 µ0(Xt)∥2dt = 1 TZT 0dX i=1 ∂xiγT(Xt)−∂xiA−1 µ0ψ(Xt) ∂xiγT(Xt) +∂xiA−1 µ0ψ(Xt) dt ≤ 1 TZT 0dX i=1(∂xiγT(Xt)−∂xiA−1 µ0ψ(Xt))2dt!1/2 × 1 TZT 0dX i=1(∂xiγT(Xt) +∂xiA−1 µ0ψ(Xt))2dt!1/2 . The first term equals ∥∇A−1 µ0ψ−∇γT∥2 µ0+OP0(1/√ T)by (27). Writing ∇γT= (∇γT− ∇A−1 µ0ψ) +∇A−1 µ0ψ, the square of the second term can be upper bounded by a multiple of 1 TZT 0∥∇γT(Xt)− ∇A−1 µ0ψ(Xt)∥2+∥∇A−1 µ0ψ(Xt)∥2dt =∥∇A−1 µ0ψ− ∇γT∥2 µ0+∥∇A−1 µ0ψ∥2 µ0+OP0(1/√ T) using (27) and (28). 
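The $1/T$ decay of the variance of ergodic averages used in (27) and (28), via Lemma 4, can be illustrated numerically. The following toy simulation is not part of the paper: it takes $d = 1$ with an illustrative potential $B$, test function $h$, horizon and step size, all chosen here for the sketch.

```python
import numpy as np

# Toy illustration of the Lemma 4 variance bound: for the Langevin diffusion
# dX_t = B'(X_t) dt + dW_t on the circle, with invariant density mu_0
# proportional to exp(2B), time averages of h(X)^2 concentrate around the
# stationary mean at rate 1/sqrt(T). All concrete choices below (B, h, T, dt)
# are illustrative assumptions.
rng = np.random.default_rng(0)

def B(x):   # hypothetical potential
    return 0.3 * np.cos(x)

def dB(x):  # drift = derivative of B
    return -0.3 * np.sin(x)

def h(x):   # test function
    return np.cos(x)

# Stationary mean of h^2 under mu_0, by Riemann sum on a uniform grid.
xs = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
w = np.exp(2.0 * B(xs))
target = float(np.sum(h(xs) ** 2 * w) / np.sum(w))

def time_average(T, dt=1e-2):
    """Euler-Maruyama on the circle; returns (1/T) * int_0^T h(X_s)^2 ds."""
    n = int(T / dt)
    noise = rng.standard_normal(n) * np.sqrt(dt)
    x, acc = 0.0, 0.0
    for k in range(n):
        acc += h(x) ** 2 * dt
        x = (x + dB(x) * dt + noise[k]) % (2.0 * np.pi)
    return acc / T

err = abs(time_average(1000.0) - target)
print(err)  # typically of order 1/sqrt(T), i.e. a few percent here
```

Shrinking the step size reduces the discretization bias, and doubling $T$ should shrink the fluctuation by roughly $\sqrt{2}$, in line with the $C/T$ bound of Lemma 4.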
Combining the above, the before last display is bounded by  ∥∇A−1 µ0ψ− ∇γT∥2 µ0+OP0(T−1/2)1/2 × ∥∇A−1 µ0ψ− ∇γT∥2 µ0+∥∇A−1 µ0ψ∥2 µ0+OP0(T−1/2)1/2 , which is oP0(1)since∥∇A−1 µ0ψ− ∇γT∥µ0→0by assumption. This shows 1 TZT 0∥∇γT(Xt)∥2dtP0−→ ∥∇ A−1 µ0ψ∥2 µ0 asT→ ∞. 22 4.3 Proof of Theorem 2: Gaussian priors We verify the conditions of Theorem 1 with p=q= 2. First note that by Condition 1,Πis supported on ˙C2(Td)by construction. Further, since s > d, we have HW⊂ ˙Hs+1(Td)⊂˙Hd+1+κ(Td)for all sufficiently small κ, whence γT∈˙Hd+1+κ(Td)⊂ ˙Hd+1/2+κ(Td). Since γTsatisfies the conditions (8) by assumption, it remains to show (i) the posterior concentrates on sets DTsatisfying (7) and (ii) the change of measure condition (9) in order to apply Theorem 1. (i) For εT=T−s/(2s+d)andM > 0, define the sets DT=DT(M) := B∈˙C2(Td) :∥∇B− ∇B0∥2≤MεT,∥B∥Hd+1+κ≤M, |⟨B, γ T⟩HW| ≤M∥γT∥HW . (29) ForMlarge enough, Lemma 6 below implies that Π(DT|XT)P0−→1asT→ ∞. By the Poincaré inequality (e.g. p. 290 in [19]), for all B∈ DT⊂˙C2(Td)it holds that ∥B−B0∥H1≃ ∥∇ B− ∇B0∥2≲εT, whence, recalling that B0∈˙Hs+1(Td)⊂˙Hd+1+κ(Td)and that ∥B∥Hd+1+κ≤Mfor all B∈ DT, by the Sobolev interpolation inequality (e.g. Theorems 1.3.3 and 4.3.1 in [64]), ∥B−B0∥Hd+1/2+κ≲∥B−B0∥d 2d+2κ H1∥B−B0∥d/2+κ d+κ Hd+1+κ≲εd 2d+2κ T =o(1). Lastly, since the remainder r(B, B 0)of the functional Ψsatisfies (10) by assumption, we have by Remark 1 that for all B∈ DT, sup B∈DT|r(B,
B 0)|=O(ε2 T) =o(1/√ T) since s > d. We conclude that, for all sufficiently large M > 0, the set DTsatisfies the condition (7) of Theorem 1 with the choices ζT=εd/(2d+2κ) T andξT=√ Tε2 T. (ii) It remains to verify the asymptotic invariance property (9). For Bu=B− uγT/√ TandΠu:=L(Bu), using the Cameron-Martin theorem (e.g. Theorem 2.6.13 in [24]), R DTeℓT(Bu)dΠ(B)R DTeℓT(B)dΠ(B)=R DT,ueℓT(B′)dΠu dΠ(B′)dΠ(B′) R DTeℓT(B)dΠ(B) =R DT,ueℓT(B′)e−u√ T⟨γT,B′⟩H−u2 2T∥γT∥2 HdΠ(B′) R DTeℓT(B)dΠ(B), where DT,u:={B′=Bu:B∈ DT}. Using that ∥·∥2 H=Tε2 T∥·∥2 HWand the definition ofDT, sup B′∈DT,u u√ T⟨γT, B′⟩H+u2 2T∥γT∥2 H ≤ |u|√ Tε2 Tsup B∈DT|⟨γT, B⟩HW|+u2ε2 T∥γT∥2 HW+u2 2ε2 T∥γT∥2 HW ≤M|u|√ Tε2 T∥γT∥HW+3 2u2ε2 T∥γT∥2 HW=o(|u|+u2), 23 since by assumption ∥γT∥HW=o(1/(√ Tε2 T))(which also implies ∥γT∥HW=o(1/εT)). Hence for all |u|<1, R DTeℓT(Bu)dΠ(B)R DTeℓT(B)dΠ(B)=eo(1)R DT,ueℓT(B′)dΠ(B′) R DTeℓT(B)dΠ(B)=eo(1)Π(DT,u|XT) Π(DT|XT). As already observed at the beginning of the proof, the denominator in the right hand side satisfies Π(DT|XT)P0−→1. Moreover, Dc T,u=n B:∥∇(B+u√ TγT)− ∇B0∥2> Mε To ∪n B:∥B+u√ TγT∥Hd+1+κ> Mo ∪n B:|⟨B+u√ TγT, γT⟩HW|> M∥γT∥HWo . Since ∥γT∥2≲∥γT∥Hd/2+1+ κ=O(1)by assumption, the first set is contained in {B:∥∇B− ∇B0∥2> Mε T−C/√ T}. The second set is similarly contained in {B:∥B∥Hd+1+κ> M −o(1)}using ∥γT∥Hd+1+κ=o(√ T), while the third is con- tained in {B:|⟨B, γ T⟩HW|>(M−o(1))∥γT∥HW}since∥γT∥HW=o(1/(Tε2 T)) =o(1). Since T−1/2=o(εT), we conclude that DT,u(M)c⊂ D T(M/2)cforT >0large enough. For sufficiently large M > 0, we thus have Π(DT,u(M)|XT)≥Π(DT(M/2)|XT)P0−→1 by Lemma 6, which completes the proof of (9). Lemma 6. LetΠandB0be as in the statement of Theorem 2, and let DTbe the set defined in (29). Then for all sufficiently large M > 0, asT→ ∞, Π(DT|XT)P0−→1. Proof.Write DT=DT,1∩ DT,2∩ DT,3withDT,1:={B∈˙C2(Td) :∥∇B− ∇B0∥2≤ MεT},DT,2:={B∈˙C2(Td) :∥B∥Hd+1+κ≤M}, and DT,3:={B∈˙C2(Td) : |⟨B, γ T⟩HW| ≤M∥γT∥HW . 
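The Cameron-Martin computation above has an elementary finite-dimensional analogue that can be checked directly: if $B \sim N(0, \Sigma)$ on $\mathbb{R}^n$ with inner product $\langle x, y \rangle_H = x^\top \Sigma^{-1} y$, the law of $B - c\gamma$ has density $\exp(-c\langle \gamma, b \rangle_H - \tfrac{c^2}{2}\|\gamma\|^2_H)$ relative to that of $B$. A minimal numerical sketch follows; $\Sigma$, $\gamma$, $c$ and the evaluation point are arbitrary choices made here, with $c$ playing the role of $u/\sqrt{T}$.

```python
import numpy as np

# Finite-dimensional sketch of the Cameron-Martin step: for B ~ N(0, Sigma),
# the density of the law of B - c*gamma, relative to that of B, equals
#   exp(-c <gamma, b>_H - (c^2/2) ||gamma||_H^2),  <x, y>_H := x' Sigma^{-1} y.
# Illustration only; all concrete values below are arbitrary choices.
rng = np.random.default_rng(1)

n = 4
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)   # a positive-definite covariance
Sigma_inv = np.linalg.inv(Sigma)
gamma = rng.standard_normal(n)
c = 0.3                           # plays the role of u / sqrt(T)

def log_density(mean, x):
    """Log-density of N(mean, Sigma), up to the common normalizing constant."""
    r = x - mean
    return -0.5 * r @ Sigma_inv @ r

b = rng.standard_normal(n)        # an arbitrary evaluation point
lhs = log_density(-c * gamma, b) - log_density(np.zeros(n), b)
rhs = -c * (gamma @ Sigma_inv @ b) - 0.5 * c**2 * (gamma @ Sigma_inv @ gamma)
print(abs(lhs - rhs))  # agrees up to floating-point rounding
```

In the proof the same identity is applied in the infinite-dimensional Gaussian setting, where Theorem 2.6.13 in [24] supplies the rigorous statement.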
We show that each set has posterior probability tending to one in P0-probability as T→ ∞. ForM > 0large enough, this holds for DT,1by Theorem 2.1 of [27], whose assump- tionsaresatisfiedundertheconditionsofTheorem2. Further, since ΠWissupportedon ˙Hd+1+κ(Td)∩˙C(d/2+κ)∨2(Td)by assumption, arguing exactly as in the proof of Lemma 5.2 of [27], it follows that for all K > 0there exists sufficiently large M > 0such that Π(Dc T,2)≤e−KTε2 T. By an analogue of the Theorem 8.20 in [22] for the present setting, whose validity is implied by the proof of Theorem 2.1 of [27], we then have Π(DT,2|XT)P0−→1asT→ ∞. Turningto DT,3,notethatif B∼Π,then ⟨B, γ T⟩H=Tε2 T⟨B, γ T⟩HW∼N(0,∥γT∥2 H) = N(0, Tε2 T∥γT∥2 HW), and therefore by the standard tail inequality for normal random variables, Π Dc T,3 = Π B:|⟨B, γ T⟩HW| ∥γT∥HW> M = Π B:|⟨B, γ T⟩H| ∥γT∥H> M√ TεT ≤e−M2Tε2 T. This implies that Π(DT,3|XT)P0−→1asT→ ∞, again by an analogue of Theorem 8.20 in [22]. 24 4.4 Proof of Theorem 3: Besov-Laplace priors We verify the conditions of Theorem 1 with p= 1andq=∞. By construction, the support of ΠW, and hence also of Π, is equal to the wavelet approximation space VJ=span(Φlr, l= 0,1, . . . , J, r = 0, . . . , (2ld−1−1)∨0)⊂˙C2(Td). Since B0∈˙Hs+1(Td)ands > d +(d/2)∨2, it holds that B0∈˙C(d/2+1+ κ)∨2(Td)by the Sobolev embedding. Let B0,T:=PJ l=0P r⟨B0,Φlr⟩2Φlrbe the projection of B0onto VJ, and for εT:=T−s/(2s+d)andM >
0, define the sets DT=DT(M) := B∈VJ:∥∇B− ∇B0,T∥1≤MεT,∥B∥Hd+1+κ≤M ,(30) where κ > 0is an arbitrarily small constant. For Mlarge enough, Lemma 7 below implies that Π(DT|XT)P0−→1asT→ ∞. Using (31), ∥∇B−∇B0∥1≲εTforall B∈ DT, whichverifiesthefirstrequirementin (7). Next, in view of the continuous embedding W1,1(Td)⊂B1 1∞(Td)(e.g. Proposition 4.3.20 in [24]) and the Poincaré inequality, for all B∈ DT, ∥B−B0,T∥B1 1∞≲∥B−B0,T∥W1,1≲∥∇B− ∇B0,T∥1≲εT, and hence, using that B, B 0,T∈VJ, for any κ >0, ∥B−B0,T∥Bd/2+κ 1∞≲2J(d/2+κ−1)+∥B−B0,T∥B1 1∞≲T−s−(d/2+κ−1)+ 2s+d . Using the continuous embedding Bd/2+κ 1∞(Td)⊆L2(Td)(e.g. Proposition 4.3.9 in [24]) and that B0∈˙Hs+1, ∥B−B0∥2≲∥B−B0,T∥2+∥B0,T−B0∥2≲T−s−(d/2+κ−1)+ 2s+d +εT. ForB∈ DT, by the Sobolev interpolation inequality (e.g. Theorems 1.3.3 and 4.3.1 in [64]), ∥B−B0∥Hd+1/2+κ≲∥B−B0∥d/2 1+d+κ 2∥B−B0∥1+d/2+κ 1+d+κ Hd+1+κ ≲T−[s−(d/2+κ−1)+]d/2 (2s+d)(1+d+κ)=o(1), since s > d + (d/2)∨2, which verifies the second requirement in (7). Lastly, using the functional remainder assumption (10) and the second last display, r(B, B 0)≲T−2(s−(d/2+κ−1)+) 2s+d =o(1/√ T), since s > d + (d/2)∨2> d/2 + 2( d/2 +κ−1)∨0.We conclude that for all sufficiently large M > 0, the set DTsatisfies the condition of Theorem 1 (with choices ζT= T−[s−(d/2+κ−1)+]d/2 (2s+d)(1+d+κ)andξT=T−s−d/2−2(d/2+κ−1)+ 2s+din (7)). Now consider the representor ψ∈˙Ct(Td), where t > (d/2−1)+. Using that B0∈˙Hs+1(Td)with s > d + (d/2)∨2, and ψ∈˙Ct(Td)⊂˙B(d/2−1+κ)+ ∞1 (Td)(e.g. p. 335 in [24]) for sufficiently small κ >0, by Lemma 9 there exists a unique element A−1 µ0ψ∈ ˙B(d/2+1+ κ)∨2 ∞1 (Td)suchthat Aµ0A−1 µ0ψ=ψalmosteverywhereand ∥A−1 µ0ψ∥B(d/2+1+ κ)∨2 ∞1≲ ∥ψ∥B(d/2−1+κ)+ ∞1. Take the wavelet projections onto VJ: γT:=JX l=0X r⟨A−1 µ0ψ,Φlr⟩2Φlr, 25 which satisfy γT∈˙Ht′(Td)for all t′≥0. These verify (8) since by continuous embed- ding Bt′ ∞1(Td)⊂Ct′(Td)(e.g. p. 
347 in [24]), ∥γT∥Hd/2+1+ κ≲∥γT∥Bd/2+1+ κ ∞1<∞,and forκ >0small enough, ∥A−1 µ0ψ−γT∥W1,∞≲∥A−1 µ0ψ−γT∥B1 ∞1≲2−J(d/2+κ)∥A−1 µ0ψ∥Bd/2+1+ κ ∞1=o(1/(√ TεT)). It remains to show the asymptotic invariance property (9). Following the notation and terminology set out in [6], the space of ‘admissible shifts’ associated to the re- scaled (truncated) Besov-Laplace prior ΠisQ=VJ∩˙L2(Td). Since γT∈VJ, it follows from Proposition 2.7 in [6] that Πu:=L(Bu), with Bu=B−uγT√ T, is absolutely continuous with respect to Πwith Radon-Nikodym derivative dΠu dΠ(B) = expn Td 2s+d ∥B∥Bs+1 11− ∥B+uγT/√ T∥Bs+1 11o . Therefore, it holds that R DTeℓT(Bu)dΠ(B)R DTeℓT(B)dΠ(B)=R DT,ueℓT(B)eTd/(2s+d)(∥B∥Bs+1 11−∥B+uγT/√ T∥Bs+1 11)dΠ(B) R DTeℓT(B)dΠ(B), where DT,u:={B′=Bu:B∈ D T}. Using the reverse triangle inequality and that Bs+1 ∞1⊂Bs+1 11, sup B∈DT,uTd/(2s+d) ∥B∥Bs+1 11− ∥B+uγT/√ T∥Bs+1 11 ≤Td/(2s+d)∥uγT/√ T∥Bs+1 11 ≲T−s−d/2 2s+d∥γT∥Bs+1 ∞1 ≲T−s−d/2 2s+d2J(s+1−d/2−1−κ)∥γT∥Bd/2+1+ κ ∞1≲T−κ 2s+d=o(1). This implies that as T→ ∞, for all u∈R, R DTeℓT(Bu)dΠ(B)R DTeℓT(B)dΠ(B)=eo(1)R DT,ueℓT(B)dΠ(B) R DTeℓT(B)dΠ(B)=eo(1)Π(DT,u|XT) Π(DT|XT). The denominator satisfies Π(DT|XT)→1inP0-probability by Lemma 7. Moreover, Dc T,u=n B: ∇(B+u√ TγT)− ∇B0,T 2> Mε To ∪n B:∥B+u√ TγT∥Hd+1+κ> Mo . Since t >(d−1)+, ∥∇γT/√ T∥2≲∥A−1 µ0ψ∥H1/√ T≲∥ψ∥L2/√ T=o(εT), while using γT∈VJgives ∥γT/√ T∥Hd+1+κ≲2Jd/2T−1/2∥γT∥Hd/2+1+ κ≲εT=o(1). Thus DT,u(M)⊃ D T(M/2)forTlarge enough. But for M > 0enough, the P0- probability of this last set tends to 1 by Lemma 7, which completes the proof. Lemma 7. LetΠandB0be as in the statement of Theorem 3, and let DTbe the set defined in (30). Then for all sufficiently large M > 0, asT→ ∞, Π(DT|XT)P0−→1. 26 Proof.Write DT:=DT,1∩ DT,2, with DT,1:={B∈VJ:∥∇B− ∇B0,T∥1≤MεT} andDT,2:={B∈VJ:∥B∥Hd+1+κ≤M}. We show that both sets have posterior probability tending to
one in P0-probability as T→ ∞. Starting with DT,1, note that since B0∈Hs+1(Td), ∥∇B0− ∇B0,T∥1≲∥B0−B0,T∥H1≤2−Js∥B0∥Hs+1≲εT. (31) Hence, provided that Mis sufficiently large, DT,1⊇ {B∈˙C2(Td) :∥∇B− ∇B0∥1≤MεT/2} By Theorem 2.4 of [27] (with the choice p= 1), whose assumptions are precisely recov- ered under the conditions of Theorem 3, it then follows that Π(Dc T,1|XT)≤Π(B∈˙C2(Td) :∥∇B− ∇B0∥1> Mε T/2|XT)P0−→0.(32) Further, for sufficiently large M > 0, DT,2⊇ D′ T,2:=n B=B1+B2:B1, B2∈VJ,∥B1∥∞≤M 2T−s+1 2s+d,∥B2∥Bs+1 11≤M 2o . Indeed, since s > d + (d/2)+, forB1, B2∈VJas above, ∥B1∥Hd+1+κ≲2J(d+1+κ)T−s+1 2s+d≃T−s−d−κ 2s+d=o(1), and ∥B2∥Hd+1+κ≲∥B2∥Bs+1 11≲M, holding in view of the embedding Bs+1 11(Td)⊂Hd+1+κ(Td)(cf. eq. (69) in [35]). By Lemma 5.2 of [27] (with the choice p= 1), we then have that for all K > 0, we may choose M > 0large enough such that Π(Dc T,2)≤Π(D′c T,2)≤e−KTε2 T. Similarly to the conclusion of the proof of Lemma 6, we then obtain via an analogue of Theorem 8.20 in [22] for the present setting that Π(DT,2|XT)P0−→1asT→ ∞. Combined with (32), this concludes the proof. 4.5 Functional expansions In this section, we study conditions under which nonlinear (in B) functionals satisfy the approximately linear expansion (4). Examples 1 and 2 follow immediately. Proof of Example 3. ForΨ(B) =R Bq, using the binomial theorem, Ψ(B)−Ψ(B0)− ⟨qBq−1 0−qZ Bq−1 0, B−B0⟩2=ZqX k=2q k (B−B0)kBq−k 0. For∥B∥∞,∥B0∥∞≤K, using the interpolation equality ∥f∥k≤ ∥f∥2/k 2∥f∥1−2/k ∞for any2≤k≤ ∞, the right-hand side is bounded by a multiple of qX k=2∥B−B0∥k k∥B0∥q−k ∞≤qX k=2Kq−k∥B−B0∥2 2∥B−B0∥k−2 ∞≲∥B−B0∥2 2 as required. 27 Thenextlemmaprovidesanexpansionforlinearfunctionalsoftheinvariantmeasure µB=e2B/R e2Bas in Example 4, which are nonlinear in the potential B. Lemma 8. LetB, B 0∈˙L2(Td)andΨ(B) =R TdµB(x)φ(x)dxforφ∈L∞(Td). 
Then the functional Ψsatisfies the expansion Ψ(B) = Ψ( B0) +⟨2µB0[φ−Ψ(B0)], B−B0⟩2+r(B, B 0), where for any K, M > 0andεT→0, sup B,B0:∥B∥∞,∥B0∥∞≤K ∥B−B0∥2≤MεT|r(B, B 0)|=O(ε2 T) (33) asT→ ∞. In particular, Ψsatisfies the expansion (4)with ψ= 2µB0[φ−R µB0φ]∈ ˙L2(Td). Proof.Consider B, B 0∈˙L2(Td)such that ∥B∥∞,∥B0∥∞≤Kand∥B−B0∥2≤ MεT, i.e. in the supremum over which we consider the remainder term. Write NB=R Tde2B(x)dxand recall that µB=e2B/NBfrom (2). Expanding, Ψ(B)−Ψ(B0) =Z (µB−µB0)φ =Ze2B NB0−e2B0 NB0+e2B NB−e2B NB0 φ =Ze2B0 NB0h e2(B−B0)−1i φ+1 NB−1 NB0Z e2Bφ.(34) Letρ(x) =e2x−1−2x, which satisfies |ρ(x)| ≤2x2e2|x|using the Lagrange form of the Taylor expansion remainder. The first term above equals 2Z (B−B0)µB0φ+Z ρ(B−B0)µB0φ. Moreover, 1 NB−1 NB0=R e2B0−e2B NBNB0=R e2B0(1−e2(B−B0)) NBNB0 =−1 NBZ µB0[2(B−B0) +ρ(B−B0)] and so the second term in (34) equals −Z µB0[2(B−B0) +ρ(B−B0)]Z µBφ. Substituting these into (34) then yields Ψ(B)−Ψ(B0) = 2Z (B−B0)µB0φ+Z ρ(B−B0)µB0φ −2Z (B−B0)µB0Z µBφ−Z ρ(B−B0)µB0Z µBφ.(35) Since∥µB0∥∞≤e4∥B0∥∞≤e4K, the second term in (35) satisfies Z ρ(B−B0)µB0φ ≤Z 2(B−B0)2e2|B−B0|µB0|φ| ≤2e8K∥φ∥∞∥B−B0∥2 2. 28 The same bound holds for the fourth term in (35), so that Ψ(B)−Ψ(B0) = 2Z (B−B0)µB0φ−2Z (B−B0)µB0Z µBφ+O(∥B−B0∥2 2), where the constants in the remainder term depend only on Kand∥φ∥∞. Adding and subtracting 2R (B−B0)µB0R µB0φ= 2R (B−B0)µB0Ψ(B0)to the right hand side, Ψ(B)−Ψ(B0) = 2Z (B−B0)µB0φ−2Z (B−B0)µB0[Ψ(B)−Ψ(B0) + Ψ( B0)] +O(∥B−B0∥2 2) =O(εT) + [Ψ( B)−Ψ(B0)]O(εT), where all remainder terms are uniform over K, M, ∥φ∥∞and we used ∥B−B0∥1≤ ∥B−B0∥2. Since εT→0, we deduce that |Ψ(B)−Ψ(B0)|=O(εT). Substituting this back into the right side of the
last display, Ψ(B)−Ψ(B0) = 2Z (B−B0)µB0(φ−Ψ(B0)) +O(ε2 T). Many interesting functionals are approximately linear in the invariant measure µ, and Lemma 8 allows us to perform a further linearization in Bon the linearizations in µ. This is the approach we take for several examples. Proof of Example 5. For the entropy functional Ψ(B) =R µBlogµB, Ψ(B)−Ψ(B0) =Z (µB−µB0) logµB0+Z µBlogµB µB0. Applying Lemma 8 to the functional Φ(B) =R µBlogµB0since µB0∈L∞(Td), Ψ(B)−Ψ(B0) =⟨2µB0[logµB0−Φ(B0)], B−B0⟩2+r1(B, B 0) +Z µBlogµB logµB0, where r1satisfies(33). ThelasttermequalstheKullback-Leiblerdivergence KL(µB, µB0) between µBandµB0, which is bounded by a multiple of h(µB, µB0)2∥µB/µB0∥∞by Lemma B.2 of [22], where hdenotes the Hellinger distance. Arguing as in the proof of Lemma 2.5 of [22], h(µB, µB0)2≤4R e2B0e2|B−B0||B−B0|2 R e2B0≤4e8K∥B−B0∥2 2 for∥B∥∞,∥B0∥∞≤K, so that KL(µB, µB0)≲e12K∥B−B0∥2 2. The last term in the second last display therefore satisfies the same bound as (33). Proof of Example 6. ForΨ(B) =R√µB, Ψ(B)−Ψ(B0) =1 2Z (µB−µB0)1√µB0−1 2Z(√µB−√µB0)2 √µB0. Similar to the last example, the third term is bounded by 2e10K∥B−B0∥2 2. Applying Lemma 8 to Φ(B) =1 2R µBµ−1/2 B0then gives Ψ(B)−Ψ(B0) =⟨µB0[µ−1/2 B0−Z√µB0], B−B0⟩2+r(B, B 0), with the remainder term satisfying (33). 29 Proof of Example 7. As in the proof of Example 3, for Ψ(B) =R µq B, Ψ(B)−Ψ(B0)− ⟨qµq−1 B0−qZ µq−1 B0, µB−µB0⟩2=O ∥µB−µB0∥2 2 , wheretheremaindertermontherightsidedependsonlyon Kandq. But∥µB−µB0∥2 2≤ ∥√µB+√µB0∥2 ∞h(µB, µB0)2≤8e12K∥B−B0∥2 2by the above. Applying Lemma 8 as in the previous examples thus gives the result with ψ= 2µB0[qµq−1 B0−qR µq B0]. 4.6 A PDE estimate We require the following regularity estimate for solutions to the Poisson equation Aµu= f, where Aµis the second order elliptic operator defined in (5). Recall that ˙L2(Td) = {f∈L2(Td) :R Tdf(x)dx= 0}. Lemma 9. Lett∈Rand assume µ∈C|t−2|+1(Td)is strictly positive on Td. 
For any f∈˙L2(Td), there exists a unique solution A−1 µ[f]∈˙L2(Td)to the equation Aµu=f satisfying AµA−1 µ[f] =falmost everywhere. Moreover, ∥A−1 µ[f]∥Btpq≲∥f∥Bt−2 pq, with constants depending on t, p, q, dand on ∥1/µ∥B|t−2| ∞∞∨ ∥µ∥B|t−2|+1 ∞∞. Proof.Let{ek=e2πik.·, k= (k1, . . . , k d)∈Zd}denote the trigonometric basis of L2(Td)and(ψj)j≥0form a Littlewood-Paley resolution of unity such that supp (ψ0)⊂ {x:|x| ≤2}andψj(x) =ψ(2−jx),j≥1, for some non-negative Schwartz function ψ∈ S(Rd)with supp (ψ)⊂ {x: 1/2≤ |x| ≤2}(see p. 81-82 of [57] for a full definition and construction). Then for t∈R,1≤p, q≤ ∞, an equivalent norm for the periodic Besov space Bt pq(Td)is given by ∥u∥Btpq= X j≥02tqj X k∈Zdψj(k)⟨u, ek⟩2ek q Lp(Td) 1/q , (36) with the natural modification when q=∞, see p. 162 of [57]. Letq <∞. For any u∈ H:=˙Bt pq(Td), since ⟨u, e0⟩2=⟨u,1⟩2= 0and⟨∆u, ek⟩2= −(2π)2P jk2 j⟨u, ek⟩2fork̸= 0, ∥u∥q Btpq=X j≥02tqj X k∈Zd,k̸=01 4π2|k|2ψj(k)⟨∆u, ek⟩2ek q Lp(Td) =X j≥02(t−2)qj X k∈Zd,k̸=0Mj(k)ψj(k)⟨∆u, ek⟩2ek q Lp(Td), where Mj=M(2−j·)andM= Φ/(4π2| · |2)with Φa smooth function supported on {x: 1/4≤ |x| ≤9/4}such that Φ = 1on{x: 1/2≤ |x| ≤2}. By Lemma 10, the last display is bounded by X j≥02(t−2)qj X k∈Zd,k̸=0ψj(k)⟨∆u, ek⟩2ek q Lp(Td)∥F−1[Mj]∥q L1(Rd), 30 where F−1is the inverse Fourier transform. Since Φ∈ S(Rd), so are both Mand F−1M, which implies supj∥F−1[Mj]∥L1(Rd)=∥F−1[M]∥L1(Rd)<∞.Using again the norm equivalence (36), this yields ∥u∥Btpq≤C∥∆u∥Bt−2 pq for an absolute constant
C > 0. The case q=∞follows identically. Using the last display, the definition of Aµand the multiplication inequality for Besov norms, ∥u∥Btpq≲∥(1/µ)Aµu∥Bt−2 pq+∥(1/µ)∇µ.∇u∥Bt−2 pq ≲∥1/µ∥B|t−2| ∞∞∥Aµu∥Bt−2 pq+∥1/µ∥B|t−2| ∞∞∥µ∥B|t−2|+1 ∞∞∥u∥Bt−1 pq(37) for all u∈ H, with constants depending on t, p, q, d. We will deduce from this the inequality ∥u∥Bt pq≲∥Aµu∥Bt−2 pq, ∀u∈ H. (38) Indeed, suppose the last inequality it is not true. Then there exists a sequence um∈ H such that ∥um∥Btpq= 1for all mbut∥Aµum∥Bt−2 pq→0asm→ ∞. By compactness, umhas a subsequence, also denoted by um, which converges in ∥ · ∥Bt−1 pq-norm to some u∈ Hsatisfying Aµu= 0. Using (37) with fixed constant depending only on K, t, p, q, d , implies that umis also Cauchy in Bt pq, and its limit must satisfy ∥u∥Btpq= 1. However, the only solution u∈ HtoAµu= 0onTdequals u=const = 0, contradicting ∥u∥Btpq= 1and thereby proving (38). By the Fredholm property, a solution uftoAµu=fexists wheneverR Tdf= 0, and for f∈Ht−2(Td)any such solution belongs to Ht(Td)(see Theorem 3.5.3 in [10], which is proved for smooth b, but the proof remains valid for b∈C|t−2|(Td)). The weak maximum principle (p.179 in [23]) now implies that ufis unique up to an additive constant, and applying (38) to the unique selection uf=A−1 µ[f]∈ Hcompletes the proof. Lemma 10. Letf=P k∈Zd:∥k∥2≤t⟨f, ek⟩ekbe a finite trigonometric polynomial, m∈ C∞(Rd)andΦ∈ S(Rd)satisfysupp(Φ)⊂ {x:|x| ≤2}andΦ = 1on{x:|x| ≤1}. Then for every 1≤p≤ ∞, X k∈Zd:∥k∥2≤t⟨f, ek⟩ek Lp(Td)≤ ∥f∥Lp(Td)∥F−1[m]∗F−1[Φ(·/t)]∥L1(Rd). Proof.The function Φ(·/t)is supported on {x:|x| ≤2t}and equal to 1 on {x:|x| ≤t}. Define M(u) =m(u)Φ(u/t), u ∈Rd, which is in C∞(Rd), has compact support and coincides with mon{x:|x| ≤t}. Since M∈ S(Rd), the Fourier inversion F[F−1[M]](k) =M(k)holds pointwise. 
Therefore, 31 forck=⟨f, ek⟩2, X ∥k∥≤tckm(k)ek(x) =X ∥k∥≤tckM(k)ek(x) =X ∥k∥≤tckF[F−1[M]](k)ek(x) =X ∥k∥≤tZ RdF−1[M](y)e−ik.ydy c kek(x) = (2π)dX ∥k∥≤tZ RdF−1[M](2πz)cke2πi(x−z).kdz = (2π)dZ RdF−1[M](2πz)f(x−z)dz. Taking the Lp(Td)norm of the last display and using Minkowski’s integral inequality, X ∥k∥≤tckm(k)ek Lp(Td)≤ ∥f∥Lp(Td)∥F−1[M]∥L1(R), where we have also used a change of variable in the last term. The result then follows since F−1[M] =F−1[m]∗F−1[Φ(·/t)]. Acknowledgements. We are grateful to Richard Nickl for helpful discussions. M. G. was supported by MUR - Prin 2022 - Grant no. 2022CLTYP4, funded by the European Union – Next Generation EU, and by the “de Castro” Statistics Initiative, Collegio Carlo Alberto, Torino. Part of this research was done while KR was visiting the LPSM at the Sorbonne University in Paris, funded by a CNRS Poste Rouge. Bibliography [1] K. Abraham. Nonparametric Bayesian posterior contraction rates for scalar dif- fusions with high-frequency data. Bernoulli , 25(4A):2696–2728, 2019. ISSN 1350- 7265. doi: 10.3150/18-BEJ1067. URL https://doi.org/10.3150/18-BEJ1067 . [2] C. Aeckerle-Willems and C. Strauch. Sup-norm adaptive drift estimation for mul- tivariate nonreversible diffusions. Ann. Statist. , 50(6):3484–3509, 2022. ISSN 0090- 5364. doi: 10.1214/22-aos2237. URL https://doi.org/10.1214/22-aos2237 . [3] S. Agapiou and A. Savva. Adaptive inference over Besov spaces in the white noise model using p-exponential priors. Bernoulli , 30(3):2275–2300, 2024. ISSN 1350- 7265. doi: 10.3150/23-bej1673. URL https://doi.org/10.3150/23-bej1673 . [4] S. Agapiou and S. Wang. Laplace priors and spatial inhomogeneity in Bayesian inverse problems. Bernoulli , 30(2):878–910, 2024. ISSN 1350-7265. doi: 10.3150/ 22-bej1563.
URL https://doi.org/10.3150/22-bej1563 . [5] S. Agapiou, M. Dashti, and T. Helin. Rates of contraction of posterior distributions based on p-exponential priors. Bernoulli , 27(3):1616–1642, 2021. ISSN 1350-7265. doi: 10.3150/20-bej1285. URL https://doi.org/10.3150/20-bej1285 . [6] S. Agapiou, M. Dashti, and T. Helin. Rates of contraction of posterior distributions based on p-exponential priors. Bernoulli , 27(3):1616–1642, 2021. 32 [7] D. Bakry, I. Gentil, and M. Ledoux. Analysis and geometry of Markov diffu- sion operators , volume 348 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] . Springer, Cham, 2014. ISBN 978-3-319-00226-2; 978-3-319-00227-9. doi: 10.1007/978-3-319-00227-9. URL https://doi.org/10.1007/978-3-319-00227-9 . [8] R. F. Bass. Stochastic processes . Cambridge Univ. Press, Cambridge, 2011. ISBN 978-1-107-00800-7. doi: 10.1017/CBO9780511997044. URL https://doi.org/ 10.1017/CBO9780511997044 . [9] P. Batz, A. Ruttor, and M. Opper. Approximate Bayes learning of stochastic differential equations. Phys. Rev. E , 98:022109, 2018. doi: 10.1103/PhysRevE.98. 022109. URL https://link.aps.org/doi/10.1103/PhysRevE.98.022109 . [10] L. Bers, F. John, and M. Schechter. Partial differential equations . Lectures in Applied Mathematics, Vol. III. Wiley & Sons, Inc. New York-London-Sydney, 1964. [11] J.-M. Bony, F. Broglia, F. Colombini, and L. Pernazza. Nonnegative functions as squares or sums of squares. J. Funct. Anal. , 232(1):137–147, 2006. ISSN 0022-1236. doi: 10.1016/j.jfa.2005.06.011. URL https://doi.org/10.1016/j.jfa.2005.06. 011. [12] I. Castillo. A semiparametric Bernstein–von Mises theorem for Gaussian pro- cess priors. Probab. Theory Related Fields , 152(1-2):53–99, 2012. ISSN 0178- 8051. doi: 10.1007/s00440-010-0316-5. URL https://doi.org/10.1007/ s00440-010-0316-5 . [13] I. Castillo. Semiparametric Bernstein–von Mises theorem and bias, illustrated with Gaussianprocesspriors. 
Sankhya A ,74(2):194–221, 2012. ISSN0976-836X. doi: 10. 1007/s13171-012-0008-6. URL https://doi.org/10.1007/s13171-012-0008-6 . [14] I. Castillo and R. Nickl. Nonparametric Bernstein-von Mises theorems in Gaussian white noise. Ann. Statist. , 41(4):1999–2028, 2013. ISSN 0090-5364. doi: 10.1214/ 13-AOS1133. URL https://doi.org/10.1214/13-AOS1133 . [15] I. Castillo and J. Rousseau. A Bernstein–von Mises theorem for smooth functionals in semiparametric models. Ann. Statist. , 43(6):2353–2383, 2015. ISSN 0090-5364. doi: 10.1214/15-AOS1336. URL https://doi.org/10.1214/15-AOS1336 . [16] A. Dalalyan and M. Reiß. Asymptotic statistical equivalence for ergodic diffusions: the multidimensional case. Probab. Theory Related Fields , 137(1-2):25–47, 2007. ISSN 0178-8051. doi: 10.1007/s00440-006-0502-7. URL https://doi.org/10. 1007/s00440-006-0502-7 . [17] M. Dashti, S. Harris, and A. Stuart. Besov priors for Bayesian inverse problems. Inverse Probl. Imaging , 6(2):183–200, 2012. ISSN1930-8337. doi: 10.3934/ipi.2012. 6.183. URL https://doi-org.ezp.lib.cam.ac.uk/10.3934/ipi.2012.6.183 . [18] R. M. Dudley. Real analysis and probability , volume 74 of Cambridge Studies in Advanced Mathematics . Cambridge University Press, Cambridge, 2002. ISBN 0- 521-00754-2. doi: 10.1017/CBO9780511755347. URL https://doi.org/10.1017/ CBO9780511755347 . Revised reprint of the 1989 original. 33 [19] L. C. Evans. Partial differential equations , volume 19 of Graduate Studies in Mathematics . American Mathematical Society, Providence, RI, second edition, 2010. ISBN 978-0-8218-4974-3. doi: 10.1090/gsm/019. URL https://doi.org/ 10.1090/gsm/019 . [20] E. García-Portugués, M. Sørensen, K. V. Mardia, and T. Hamelryck. Langevin diffusions on the torus: estimation and applications. Stat. Comput. , 29(1):1–22, 2019. ISSN 0960-3174. doi: 10.1007/s11222-017-9790-2. URL https://doi.org/ 10.1007/s11222-017-9790-2 . [21] R. Ghanem, D. Higdon, H. Owhadi, et al. Handbook of uncertainty quantification , volume 6. 
Springer New York, 2017. [22] S. Ghosal and A. van der Vaart. Fundamentals of nonparametric Bayesian in- ference, volume 44 of Cambridge Series in Statistical and Probabilistic Mathemat- ics. Cambridge
University Press, Cambridge, 2017. ISBN 978-0-521-87826-5. doi: 10.1017/9781139029834. URL https://doi.org/10.1017/9781139029834 . [23] D. Gilbarg, N. S. Trudinger, D. Gilbarg, and N. Trudinger. Elliptic partial differ- ential equations of second order , volume 224. Springer, 1977. [24] E. Giné and R. Nickl. Mathematical foundations of infinite-dimensional statis- tical models . Cambridge University Press, New York, 2016. ISBN 978-1-107- 04316-9. doi: 10.1017/CBO9781107337862. URL http://dx.doi.org/10.1017/ CBO9781107337862 . [25] M. Giordano and H. Kekkonen. Bernstein–von mises theorems and uncertainty quantification for linear inverse problems. SIAM/ASA Journal on Uncertainty Quantification , 8(1):342–373, 2020. [26] M. Giordano and R. Nickl. Consistency of bayesian inference with gaussian process priors in an elliptic inverse problem. Inverse Problems , 36(8):085001, 2020. [27] M. Giordano and K. Ray. Nonparametric Bayesian inference for reversible multi- dimensional diffusions. The Annals of Statistics , 50(5):2872–2898, 2022. [28] M. Giordano and S. Wang. Statistical algorithms for low-frequency diffusion data: A pde approach. arXiv preprint arXiv:2405.01372 , 2024. [29] M. Giordano, K. Ray, and J. Schmidt-Hieber. On the inability of gaussian pro- cess regression to optimally learn compositional functions. Advances in Neural Information Processing Systems , 35:22341–22353, 2022. [30] S. Gugushvili and P. Spreij. Nonparametric Bayesian drift estimation for multidi- mensional stochastic differential equations. Lith. Math. J. , 54(2):127–141, 2014. [31] T. Helin and M. Burger. Maximum a posteriori probability estimates in infinite- dimensional Bayesian inverse problems. Inverse Problems , 31(8):085009, 22, 2015. ISSN 0266-5611. doi: 10.1088/0266-5611/31/8/085009. URL https://doi-org. ezp.lib.cam.ac.uk/10.1088/0266-5611/31/8/085009 . [32] M. Hoffmann and K. Ray. Nonparametric Bayesian estimation in a multidimen- sional diffusion model with high frequency data. 
Probab. Theory Related Fields , 191(1-2):103–180, 2025. ISSN 0178-8051. doi: 10.1007/s00440-024-01317-w. URL https://doi.org/10.1007/s00440-024-01317-w . 34 [33] V. Kolehmainen, M. Lassas, K. Niinimäki, and S. Siltanen. Sparsity-promoting Bayesian inversion. Inverse Problems , 28(2):025005, 28, 2012. ISSN 0266-5611. doi: 10.1088/0266-5611/28/2/025005. URL https://doi.org/10.1088/0266-5611/ 28/2/025005 . [34] H. A. Kramers. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica, 7:284–304, 1940. ISSN 0031-8914. [35] M.Lassas, E.Saksman, andS.Siltanen. Discretization-invariantBayesianinversion and Besov space priors. Inverse Probl. Imaging , 3(1):87–122, 2009. ISSN 1930- 8337. doi: 10.3934/ipi.2009.3.87. URL https://doi-org.ezp.lib.cam.ac.uk/ 10.3934/ipi.2009.3.87 . [36] A. Magra, A. van der Vaart, and H. van Zanten. Semi-parametric Bernstein-von Mises theorem in linear inverse problems. Electron. J. Stat. , 19(1):–, 2025. doi: 10.1214/25-ejs2372. URL https://doi.org/10.1214/25-ejs2372 . [37] F. Monard, R. Nickl, and G. P. Paternain. Consistent inversion of noisy non- Abelian X-ray transforms. Comm. Pure Appl. Math. , 74(5):1045–1099, 2021. ISSN 0010-3640. doi: 10.1002/cpa.21942. URL https://doi.org/10.1002/cpa.21942 . [38] R. Nickl. Bayesian non-linear statistical inverse problems . EMS press Berlin, 2023. [39] R. Nickl. Consistent inference for diffusions from low frequency measurements. Ann. Statist. , 52(2):519–549, 2024. ISSN 0090-5364. doi: 10.1214/24-aos2357. URL https://doi.org/10.1214/24-aos2357 . [40] R. Nickl and K. Ray. Nonparametric statistical inference for drift vector fields of multi-dimensional diffusions. The Annals of Statistics , 48(3):1383–1408, 2020. [41] R. Nickl and K. Ray. Nonparametric statistical inference for drift vector fields of multi-dimensional diffusions. Ann. Statist. , 48(3):1383–1408, 2020. ISSN 0090- 5364. doi: 10.1214/19-AOS1851. URL https://doi.org/10.1214/19-AOS1851 . [42] R. Nickl and J. 
Söhl. Nonparametric Bayesian posterior contraction rates for dis- cretely observed scalar diffusions. Ann. Statist.
arXiv:2505.16302v1 [math.ST] 22 May 2025

Covariance matrix estimation in the singular case using regularized Cholesky factor

Olivier Besson∗

May 23, 2025

Abstract

We consider estimating the population covariance matrix when the number of available samples is less than the size of the observations. The sample covariance matrix (SCM) being singular, regularization is mandatory in this case. For this purpose we consider minimizing Stein's loss function and we investigate a method based on augmenting the partial Cholesky decomposition of the SCM. We first derive the finite sample optimum estimator, which minimizes the loss for each data realization, then the Oracle estimator, which minimizes the risk, i.e., the average value of the loss. Finally, a practical scheme is presented where the missing part of the Cholesky decomposition is filled in. We conduct a numerical performance study of the proposed method and compare it with available related methods. In particular, we investigate the influence of the condition number of the covariance matrix as well as of the shape of its spectrum.

1 Introduction

The need for accurate covariance matrix estimation, whether for multivariate analysis, data decorrelation or adaptive filtering, is pressing in many fields such as economics or engineering [1–3]. When the number of samples n available to estimate the population covariance matrix Σ is only slightly above the size of the observations p, the sample covariance matrix (SCM) is known to become unreliable, and accordingly it is advisable to regularize it. In the singular case, i.e., when n < p, the case we consider herein, the SCM is singular and hence no longer positive definite, as a covariance matrix should be, so regularization becomes mandatory. The most widespread method consists in linear shrinkage of the SCM, i.e.,
a weighted linear combination of the SCM and a target matrix, typically the identity matrix (the latter technique is often referred to as diagonal loading in the engineering community). When the target matrix is the identity matrix, linear shrinkage actually belongs to the class of orthogonally invariant estimators (OIE), which retain the SCM eigenvectors and apply a transformation to its eigenvalues: in this case linear shrinkage corresponds to an affine transformation of the SCM eigenvalues. More precisely, let S be the SCM and let its eigenvalue decomposition be S = UΛU^T with U the matrix of eigenvectors and Λ = diag(λ_1, ..., λ_p) the diagonal matrix of eigenvalues. OIE are of the form ˆΣ = UDU^T with D = diag(d_1, ..., d_p) a diagonal matrix constructed from Λ. OIE have been extensively studied, see e.g. [4–8] for early works. More recently, Ledoit and Wolf in a series of papers [9–14] have thoroughly investigated them under the statistical framework of random matrix theory (RMT), i.e., p, n → ∞ with p/n → c. Their methods mostly rely on RMT coupled with Stein's approach [4, 15–17]. Briefly stated, an estimate of a specific form (in their case ˆΣ = UDU^T) is chosen along with a loss function L(ˆΣ, Σ).

∗ISAE-SUPAERO, Université de Toulouse, 10 avenue Marc Pélegrin, 31055 Toulouse, France. Email: olivier.besson@isae-supaero.fr

Ledoit and Wolf define as finite sample optimal (FSOPT) the estimate obtained by minimizing L(ˆΣ, Σ): obviously this estimate
is hypothetical since it depends on Σ. Tables 1, 2 and 3 of [18] provide the expressions of the FSOPT estimator for a large number of loss functions. The next step is to derive the asymptotic limit (as p, n → ∞ with p/n → c) of L(ˆΣ, Σ), say L_∞(ˆΣ, Σ). Then the Oracle estimator is defined as the one which minimizes this limit. Note that in a classical framework, where asymptotics are understood as p fixed and n → ∞, L_∞(ˆΣ, Σ) is usually replaced by R(ˆΣ, Σ) = E{L(ˆΣ, Σ)}. Finally, to come up with a practical scheme, a consistent (unbiased in the classical framework) estimate of L_∞(ˆΣ, Σ) is derived, whose minimization leads to the final estimator. Proceeding along these lines, reference [9] derives a linear-shrinkage-based bona fide estimator which now constitutes a reference, see also [19] for a similar approach and a close estimator. While linear shrinkage (LS) is simple and quite effective, it can be improved upon using more complex transformations such as nonlinear shrinkage (NLS) [10–13] or quadratic inverse shrinkage [14].

All the techniques referred to above rely on keeping the eigenvectors of the SCM and modifying its eigenvalues. Another solution consists in regularizing the Cholesky factor of the SCM [16]: if S = GG^T denotes the Cholesky decomposition of S, where G is lower triangular with positive diagonal elements, the estimates are of the form ˆΣ = GDG^T with D a diagonal matrix. While deemed less performant than OIE, this type of estimator has two merits. First, it is usually simpler, as Cholesky factors can be computed more efficiently than eigenvectors and eigenvalues. Second, this approach enables one to obtain analytical expressions of the regularization parameters for a certain number of risks, see [16, 20, 21], because the risk E{L(ˆΣ, Σ)} no longer depends on unknown parameters, yielding closed-form expressions for D. As a result, the risk itself is minimized, not just an unbiased estimate of the risk.
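As a quick sanity check of the claim that shrinkage toward the identity is an orthogonally invariant estimator with an affine eigenvalue map, the following NumPy sketch (the shrinkage weight ρ and the dimensions are arbitrary illustration choices, not from the paper) verifies that (1−ρ)S + ρI equals U((1−ρ)Λ + ρI)U^T:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, rho = 5, 20, 0.3

X = rng.standard_normal((p, n))
S = X @ X.T / n                      # sample covariance matrix (SCM)

# Linear shrinkage toward the identity (diagonal loading).
Sigma_ls = (1 - rho) * S + rho * np.eye(p)

# OIE form: same eigenvectors U as S, eigenvalues mapped affinely.
lam, U = np.linalg.eigh(S)
Sigma_oie = U @ np.diag((1 - rho) * lam + rho) @ U.T

assert np.allclose(Sigma_ls, Sigma_oie)
```

The identity holds because U((1−ρ)Λ + ρI)U^T = (1−ρ)UΛU^T + ρUU^T and U is orthogonal.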
The above-mentioned approaches deal with the case n ≥ p, but the paper [22] considers the case n < p. However, the regularized SCM then has rank n and is thus singular, which is problematic for an estimate of a covariance matrix, which in principle is positive definite. Another class of regularization schemes is based on the modified Cholesky decomposition and relies on the fact that the elements of the Cholesky factor can be obtained from the coefficients of a sequence of regression models [23–25]. Based on this property, reference [26] considers a penalized likelihood approach to regularize the Cholesky decomposition of the inverse of the covariance matrix. However, this method is not applicable when n < p. In order to cope with the singular case, banding the Cholesky factor is proposed in [27, 28]. More precisely, a sequence of reduced-dimension linear regression models is computed for the Cholesky factor of the covariance matrix [28] or of its inverse [27]. These methods work in the singular case and provide a positive definite covariance matrix estimate. Herein, we also aim at deriving a full-rank Cholesky-based covariance matrix estimate in the singular case. Some information is necessarily missing when n < p: the (p−n)×(p−n) lower-right
tail of the Cholesky factor of the population covariance matrix is not identifiable. Therefore, there is a need to fill in this unknown part, and a method is proposed to fulfill this need.

2 Covariance matrix estimation

2.1 Outline

Let X = (x_1 x_2 ··· x_n) be the p×n data matrix whose columns x_k are independent and drawn from a multivariate Gaussian distribution with zero mean and full-rank population covariance matrix Σ. We assume here that n < p, so that S = XX^T has rank n with probability one. We let

S = GG^T = \begin{pmatrix} G_{11} \\ G_{21} \end{pmatrix} \begin{pmatrix} G_{11}^T & G_{21}^T \end{pmatrix}    (1)

denote the (partial) Cholesky decomposition of S, where G_{11} is an n×n lower triangular matrix with positive diagonal entries. Below we consider estimates of the form

\hat{Σ} = \tilde{G} D \tilde{G}^T = \begin{pmatrix} G_{11} & 0 \\ G_{21} & \tilde{G}_{22} \end{pmatrix} \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix} \begin{pmatrix} G_{11}^T & G_{21}^T \\ 0 & \tilde{G}_{22}^T \end{pmatrix}    (2)

where D = diag(d_1, ..., d_n, d_{n+1}, ..., d_p) is diagonal and \tilde{G}_{22} is a lower triangular matrix with positive diagonal entries. We wish to examine such estimates under Stein's loss

L(\hat{Σ}, Σ) = tr(\hat{Σ} Σ^{-1}) − log|\hat{Σ} Σ^{-1}|    (3)

Towards this end we define the Cholesky decomposition of Σ as

Σ = LL^T = \begin{pmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{pmatrix} \begin{pmatrix} L_{11}^T & L_{21}^T \\ 0 & L_{22}^T \end{pmatrix}    (4)

where L_{11} and L_{22} are n×n and (p−n)×(p−n) lower triangular matrices with positive diagonal entries. Note that, due to n < p, L_{22} is not identifiable from S. Indeed, if we decompose Σ as

Σ = \begin{pmatrix} Σ_{11} & Σ_{12} \\ Σ_{21} & Σ_{22} \end{pmatrix} = \begin{pmatrix} L_{11}L_{11}^T & L_{11}L_{21}^T \\ L_{21}L_{11}^T & L_{21}L_{21}^T + L_{22}L_{22}^T \end{pmatrix}    (5)

and define Σ_{2.1} = Σ_{22} − Σ_{21}Σ_{11}^{-1}Σ_{12} = L_{22}L_{22}^T, the likelihood of the observations is

L(X; Σ) ∝ |Σ|^{−n/2} etr(−(1/2) Σ^{-1} S)
        = |Σ_{11}|^{−n/2} |Σ_{2.1}|^{−n/2} exp(−(1/2) ‖L^{-1}G‖²)
        = |L_{11}|^{−n} |L_{22}|^{−n} exp(−(1/2) ‖L_{11}^{-1}G_{11}‖²) × exp(−(1/2) ‖L_{22}^{-1}(G_{21} − L_{21}L_{11}^{-1}G_{11})‖²)    (6)

where etr(·) stands for the exponential of the trace of a matrix and ‖·‖ for the Frobenius norm. Clearly the maximum is achieved at L_{11} = n^{-1/2} G_{11}, L_{21} = n^{-1/2} G_{21}, and

max_{L_{11}, L_{21}} L(X; Σ) ∝ |L_{22}|^{−n}    (7)

which is unbounded, revealing that L_{22} is not identifiable. Consequently, nothing can be inferred about L_{22} from the data and therefore we will need to fill in the missing information.
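Since S is singular when n < p, `np.linalg.cholesky` cannot be applied to it directly; one standard way to obtain the p×n factor G of (1) is a QR decomposition of X^T. The sketch below (dimensions and replication count are arbitrary illustration choices) builds G this way and also checks, by Monte Carlo on standard normal data, the singular-Wishart column-norm identity E{Σ_i Ḡ_ij²} = (p − j) + (n − j + 1) = p + n − 2j + 1 that is used later for the Oracle estimator:

```python
import numpy as np

def partial_cholesky(X):
    """Return G (p x n, lower trapezoidal, diag > 0) with X @ X.T == G @ G.T."""
    n = X.shape[1]
    R = np.linalg.qr(X.T)[1]            # X^T = Q R  =>  X X^T = R^T R
    s = np.sign(np.diag(R[:, :n]))
    return (R * s[:, None]).T           # flip row signs so diag(G) > 0

rng = np.random.default_rng(1)
p, n = 12, 8
X = rng.standard_normal((p, n))
G = partial_cholesky(X)
assert np.allclose(G @ G.T, X @ X.T)
assert np.all(np.diag(G[:n, :]) > 0)

# Monte Carlo check of E{sum_i Gbar_ij^2} = p + n - 2j + 1, the coefficient
# that will multiply d_j in the Oracle risk, for Gbar built from N(0,1) data.
reps, acc = 6000, np.zeros(n)
for _ in range(reps):
    Gb = partial_cholesky(rng.standard_normal((p, n)))
    acc += np.sum(Gb**2, axis=0)        # squared norm of each column of Gbar
acc /= reps
j = np.arange(1, n + 1)
assert np.max(np.abs(acc / (p + n - 2 * j + 1) - 1)) < 0.05
```

The identity reflects that the p − j subdiagonal entries of column j are standard normal while the diagonal entry squared is χ²_{n−j+1}.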
In the sequel we will successively derive the FSOPT estimator, which minimizes L(ˆΣ, Σ), then the Oracle estimator, which minimizes E{L(ˆΣ, Σ)}, before proposing a bona fide estimator that can be implemented in practice.

2.2 Finite-sample optimum estimator

The FSOPT estimator aims at minimizing L(\tilde{G} D \tilde{G}^T, Σ) with respect to D and \tilde{G}_{22}. First we rewrite the loss function in a more convenient form. We have

L^{-1}\tilde{G} = \begin{pmatrix} L_{11}^{-1} & 0 \\ -L_{22}^{-1}L_{21}L_{11}^{-1} & L_{22}^{-1} \end{pmatrix} \begin{pmatrix} G_{11} & 0 \\ G_{21} & \tilde{G}_{22} \end{pmatrix} = \begin{pmatrix} L_{11}^{-1}G_{11} & 0 \\ L_{22}^{-1}(G_{21} - L_{21}L_{11}^{-1}G_{11}) & L_{22}^{-1}\tilde{G}_{22} \end{pmatrix}    (8)

so that the loss function can be written as

L(\tilde{G}D\tilde{G}^T, Σ) = tr(L^{-1}\tilde{G}D\tilde{G}^T L^{-T}) − log|L^{-1}\tilde{G}D\tilde{G}^T L^{-T}|
= tr(L^{-1}G D_1 G^T L^{-T}) − log|D_1| − log|L_{11}^{-1}G_{11}G_{11}^T L_{11}^{-T}| + tr(L_{22}^{-1}\tilde{G}_{22}D_2\tilde{G}_{22}^T L_{22}^{-T}) − log|L_{22}^{-1}\tilde{G}_{22}D_2\tilde{G}_{22}^T L_{22}^{-T}|    (9)

Observing that

tr(L^{-1}G D_1 G^T L^{-T}) = \sum_{j=1}^{n} d_j g_j^T Σ^{-1} g_j    (10)

where g_j is the j-th column of G, we find that minimization with respect to D_1 provides

d_j = (g_j^T Σ^{-1} g_j)^{-1},  j = 1, ..., n    (11)

As for D_2 and \tilde{G}_{22}, the minimum is achieved when

\tilde{G}_{22} D_2 \tilde{G}_{22}^T = L_{22}L_{22}^T = Σ_{2.1}    (12)

Of course these values are hypothetical and cannot be computed since they depend on Σ, but the FSOPT estimator will serve as a reference, as the loss function is minimized for each realization of X.

2.3 Oracle estimator

The Oracle estimator is obtained by minimizing R(\tilde{G}D\tilde{G}^T, Σ) = E{L(\tilde{G}D\tilde{G}^T, Σ)}. Since the columns of X are independent and follow a multivariate Gaussian distribution with zero mean
and covariance matrix Σ, which we denote as X =_d N_{p,n}(0, Σ, I_n), it follows that X =_d L X̄, where X̄ =_d N_{p,n}(0, I_p, I_n) and "=_d" means "has the same distribution as". Consequently S =_d L W̄ L^T, where W̄ follows a singular Wishart distribution [29]. We let W̄ = ḠḠ^T with Ḡ a lower triangular matrix with positive diagonal elements. Then all elements of Ḡ are independent, with Ḡ_{ij} =_d N(0,1) for i > j and Ḡ_{jj} =_d (χ²_{n−j+1})^{1/2}. Moreover we have G =_d LḠ. It follows that the loss function in (9) can be rewritten as

L(\tilde{G}D\tilde{G}^T, Σ) = tr(Ḡ D_1 Ḡ^T) − log|D_1| − log|Ḡ_{11}Ḡ_{11}^T| + tr(L_{22}^{-1}\tilde{G}_{22}D_2\tilde{G}_{22}^T L_{22}^{-T}) − log|L_{22}^{-1}\tilde{G}_{22}D_2\tilde{G}_{22}^T L_{22}^{-T}|    (13)

Now

E{tr(Ḡ D_1 Ḡ^T)} = E{ \sum_{i≥j} d_j Ḡ_{ij}² } = \sum_{i>j} d_j + \sum_{j=1}^{n} d_j E{χ²_{n−j+1}} = \sum_{j=1}^{n} (p − j + E{χ²_{n−j+1}}) d_j    (14)

E{log|Ḡ_{11}Ḡ_{11}^T|} = \sum_{j=1}^{n} E{log χ²_{n−j+1}}    (15)

which implies that

R(\tilde{G}D\tilde{G}^T, Σ) = −\sum_{j=1}^{n} E{log χ²_{n−j+1}} + \sum_{j=1}^{n} (p + n − 2j + 1) d_j − \sum_{j=1}^{n} log d_j + tr(L_{22}^{-1}\tilde{G}_{22}D_2\tilde{G}_{22}^T L_{22}^{-T}) − log|L_{22}^{-1}\tilde{G}_{22}D_2\tilde{G}_{22}^T L_{22}^{-T}|    (16)

The risk is thus minimized for d_j = (p + n − 2j + 1)^{-1}, j = 1, ..., n, and \tilde{G}_{22} D_2 \tilde{G}_{22}^T = L_{22}L_{22}^T. In contrast to FSOPT, the Oracle estimator provides a value of D_1 that can be computed. Yet, similarly to the FSOPT estimator, \tilde{G}_{22} and D_2 still depend on the unknown Σ_{2.1}. However, this was expected, since nothing can be inferred about Σ_{2.1} with only n < p observations.

2.4 Practical scheme

In order to come up with a bona fide estimator, we first set d_j = (p + n − 2j + 1)^{-1}, j = 1, ..., n, as in the Oracle estimator. Then we must decide upon D_2 and \tilde{G}_{22}. Their optimal choice is \tilde{G}_{22} D_2 \tilde{G}_{22}^T = L_{22}L_{22}^T, which cannot be met in practice. Therefore we must set these matrices arbitrarily, just as a target matrix is chosen in conventional linear shrinkage. Before proceeding, our intuition is that the less influential Σ_{2.1} is, the smaller the impact of a wrong guess for D_2 and \tilde{G}_{22}. Our idea is to use a Cholesky decomposition of the SCM with complete pivoting [30], so that the diagonal elements of the so-obtained lower triangular factor are in decreasing order.
Alternatively, a QR decomposition with pivoting [31] of X^T can be computed as X^T Π = Q H^T, where Π is a p×p permutation matrix, Q is an n×n orthogonal matrix and H is a p×n lower triangular matrix with positive and decreasing diagonal elements. This amounts to temporarily replacing X by Y = Π^T X, which corresponds to a permutation of the rows of X. The sample covariance matrix of Y is given by YY^T = Π^T XX^T Π = HQ^TQH^T = HH^T, and hence H is the Cholesky factor of YY^T with decreasing diagonal elements. If we partition H = \begin{pmatrix} H_{11} \\ H_{21} \end{pmatrix}, then H can be augmented to

\tilde{H} = \begin{pmatrix} H_{11} & 0 \\ H_{21} & \alpha I_{p-n} \end{pmatrix}    (17)

where α = H_{11}(n, n), with the underlying idea that the diagonal elements of \tilde{H} should decrease smoothly; we therefore set the p−n last diagonal elements to the minimal value of the n first diagonal elements. In the same spirit we set D_2 = β I_{p−n} with β = d_n = (p − n + 1)^{-1}. The regularized estimate of the covariance matrix of Y is thus \tilde{H} D \tilde{H}^T and, coming back to X, we obtain

\hat{Σ} = Π \begin{pmatrix} H_{11} & 0 \\ H_{21} & \alpha I_{p-n} \end{pmatrix} \begin{pmatrix} D_1 & 0 \\ 0 & \beta I_{p-n} \end{pmatrix} \begin{pmatrix} H_{11}^T & H_{21}^T \\ 0 & \alpha I_{p-n} \end{pmatrix} Π^T    (18)

Note that the regularization parameters α and β are chosen automatically: α depends on the data while β is fixed. The scheme allows one to obtain a positive definite covariance matrix estimate at a reasonable cost, since only a pivoted Cholesky (or QR) decomposition is required.

3 Numerical simulations

In this section we study the performance of ˆΣ in (18) by evaluating its Stein's risk. Through preliminary simulations
we observed that the FSOPT and the Oracle estimators perform the same, so we only consider the latter. For comparison purposes we also evaluate Stein's risks of the benchmark linear shrinkage estimator of [9] (LW-LS in the figures), the nonlinear shrinkage estimator of [11] (LW-NLS) and the methods of [27] (BL) and [28] (RLZ). We consider a scenario where p = 200. As for the spectrum of Σ, it is divided in two parts: a set of ηp large eigenvalues uniformly distributed on [λ_max/2, λ_max] and a set of (1−η)p small eigenvalues uniformly distributed on [0.5, 1]. η controls the shape of the spectrum of Σ while λ_max controls its condition number, say cond(Σ). The influence of these parameters is seldom studied, yet we show that they have an important impact on which method should be retained. In a first experiment we fix n = 120, we vary the condition number between 4 and 1024, and we consider two different values of η, namely η = 0.25 and η = 0.4. The results are reported in Figure 1. Several key observations can be made. First, we observe that LW-LS and RLZ are highly sensitive to the condition number of Σ and also depend on the shape of the spectrum. LW-LS performs better than ˆΣ only when cond(Σ) is below some threshold, and this threshold decreases as η increases. In most situations the risk of ˆΣ is smaller than that of LW-LS. In contrast, the risks of our method, BL and LW-NLS are independent of cond(Σ) and of η, which is a very desirable feature in practice. Note that LW-NLS is the best estimator and our method performs slightly worse than BL, except with large condition numbers. However, BL requires the user to set the regularized Cholesky factor bandwidth, while our method is fully automatic and parameter-free. In Figure 2 we study the influence of n when the condition number of Σ is held constant at cond(Σ) = 256.
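A minimal sketch of the practical scheme (18) and of Stein's loss (3), assuming SciPy's column-pivoted QR; the helper names, the problem sizes and the two-part spectrum parameters are illustration choices, not the paper's code:

```python
import numpy as np
from scipy.linalg import qr

def stein_loss(Sig_hat, Sig):
    """Stein's loss (3): tr(Sig_hat Sig^-1) - log det(Sig_hat Sig^-1)."""
    A = Sig_hat @ np.linalg.inv(Sig)
    return np.trace(A) - np.linalg.slogdet(A)[1]

def regularized_cholesky_estimate(X):
    """Estimate (18): pivoted QR of X^T, then augmented Cholesky factor."""
    p, n = X.shape
    Q, R, perm = qr(X.T, mode="economic", pivoting=True)   # X^T[:, perm] = Q R
    s = np.sign(np.diag(R))
    H = (R * s[:, None]).T                       # p x n, diag(H) > 0, decreasing
    alpha = H[n - 1, n - 1]                      # last retained diagonal element
    beta = 1.0 / (p - n + 1)                     # beta = d_n
    d1 = 1.0 / (p + n - 2 * np.arange(1, n + 1) + 1)   # Oracle weights
    Ht = np.zeros((p, p))
    Ht[:, :n] = H
    Ht[n:, n:] = alpha * np.eye(p - n)           # augmentation (17)
    Sig_Y = Ht @ np.diag(np.r_[d1, beta * np.ones(p - n)]) @ Ht.T
    Sig_hat = np.empty((p, p))
    Sig_hat[np.ix_(perm, perm)] = Sig_Y          # undo the pivoting permutation
    return Sig_hat

# Population covariance with the two-part spectrum of Section 3 (smaller sizes).
rng = np.random.default_rng(3)
p, n, eta, lam_max = 100, 60, 0.25, 64.0
k = int(eta * p)
lam = np.r_[rng.uniform(lam_max / 2, lam_max, k), rng.uniform(0.5, 1.0, p - k)]
Qb = np.linalg.qr(rng.standard_normal((p, p)))[0]   # random orthogonal basis
Sigma = Qb @ np.diag(lam) @ Qb.T

X = np.linalg.cholesky(Sigma) @ rng.standard_normal((p, n))
Sig_hat = regularized_cholesky_estimate(X)

assert np.allclose(Sig_hat, Sig_hat.T)
assert np.linalg.eigvalsh(Sig_hat).min() > 0     # positive definite despite n < p
assert stein_loss(Sig_hat, Sigma) >= p           # Stein's loss attains p only at the truth
```

The estimate is positive definite by construction, since the augmented factor is nonsingular and D has positive entries.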
It can be seen that LW-LS and RLZ result in the largest risks, while LW-NLS provides the smallest risk, except when η = 0.4 and n ≤ 120, where our method performs better. Again we observe a similar risk between our method and BL, unless n is small and η large. Finally, the influence of η is investigated in Figure 3. It shows that LW-NLS performs very well for η ≤ 0.3. Our method has a constant risk over η and so provides an interesting alternative for η ≥ 0.4.

4 Conclusions

We addressed covariance matrix estimation in the singular case (n < p) through regularization of the rank-deficient Cholesky factor of the sample covariance matrix. We introduced the finite sample optimum estimator, which minimizes Stein's loss for every data matrix, the Oracle estimator, which minimizes the average loss, and a new practical scheme obtained by augmenting the Cholesky factor. The latter method results in a rather simple, positive definite covariance matrix estimate whose regularization parameters are chosen automatically. We illustrated the impact of the condition number of the population covariance matrix as well as of the shape of its spectrum. We showed that the estimator proposed here provides an interesting solution
when n is small and the large eigenvalues occupy at least 30% of the spectrum.

References

[1] M. Pourahmadi. High dimensional covariance estimation. Wiley Series in Probability and Statistics. John Wiley & Sons, Hoboken, NJ, 2013.

[2] A. Zagidullina. High-dimensional covariance matrix estimation - An introduction to random matrix theory. SpringerBriefs in Applied Statistics and Econometrics, 2021.

[3] O. Ledoit and M. Wolf. The power of (non-)linear shrinking: A review and guide to covariance matrix estimation. Journal of Financial Econometrics, 20(1):187–218, 2022.

[4] C. Stein. Lectures on the theory of estimation of many parameters. Journal of Mathematical Sciences, 34:1373–1403, July 1986.

[5] D. K. Dey and C. Srinivasan. Estimation of a covariance matrix under Stein's loss. The Annals of Statistics, 13(4):1581–1591, December 1985.

Figure 1: Mean value of Stein's loss versus cond(Σ). n = 120 and varying η.

Figure 2: Mean value of Stein's loss versus n. cond(Σ) = 256 and varying η.

[6] D. K. Dey and C. Srinivasan. Trimmed minimax estimator of a covariance matrix. Annals of the Institute of Statistical Mathematics, 38:101–108, 1986.

[7] Y. Sheena and A. Takemura. Inadmissibility of non-order preserving orthogonally invariant estimators of the covariance matrix in the case of Stein's loss. Journal of Multivariate Analysis, 41:117–131, 1992.
[8] F. Perron. Minimax estimators of a covariance matrix. Journal of Multivariate Analysis, 43(1):16–28, 1992.

[9] O. Ledoit and M. Wolf. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2):365–411, February 2004.

[10] O. Ledoit and M. Wolf. Nonlinear shrinkage estimation of large-dimensional covariance matrices. The Annals of Statistics, 40(2):1024–1060, April 2012.

[11] O. Ledoit and M. Wolf. Direct nonlinear shrinkage estimation of large-dimensional covariance matrices. Technical report, University of Zurich, 2017. Working paper no. 264.

[12] O. Ledoit and M. Wolf. Optimal estimation of a large dimensional covariance matrix under Stein's loss. Bernoulli, 24(4B):3791–3832, November 2018.

[13] O. Ledoit and M. Wolf. Analytical nonlinear shrinkage of large-dimensional covariance matrices. The Annals of Statistics, 48(5):3043–3065, October 2020.

[14] O. Ledoit and M. Wolf. Quadratic shrinkage for large covariance matrices. Bernoulli, 28(3):1519–1547, August 2022.

[15] C. Stein. Inadmissibility of the usual estimator for the mean of a multivariate distribution. In Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability, pages 197–206, 1956.

[16] W. James and C. Stein. Estimation with quadratic loss. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, pages 361–380, 1961.

[17] C. Stein. Estimation of the
mean of a multivariate normal distribution. The Annals of Statistics, 9(6):1135–1151, November 1981.

[18] O. Ledoit and M. Wolf. Shrinkage estimation of large covariance matrices: Keep it simple, statistician? Journal of Multivariate Analysis, 186:104796, November 2021.

[19] T. Bodnar, A. K. Gupta, and N. Parolya. On the strong convergence of the optimal linear shrinkage estimator for large dimensional covariance matrix. Journal of Multivariate Analysis, 132:215–228, November 2014.

[20] J. B. Selliah. Estimation and testing problems in a Wishart distribution. Technical report no. 10, Department of Statistics, Stanford University, January 10, 1964.

[21] M. L. Eaton and I. Olkin. Best equivariant estimators of a Cholesky decomposition. The Annals of Statistics, 15(4):1639–1650, 1987.

[22] H. Tsukuma and T. Kubokawa. Unified improvements in estimation of a normal covariance matrix in high and low dimensions. Journal of Multivariate Analysis, 143:233–248, January 2016.

[23] M. Pourahmadi. Joint mean-covariance models with applications to longitudinal data: Unconstrained parameterisation. Biometrika, 86(3):677–690, September 1999.

[24] M. Pourahmadi. Maximum likelihood estimation of generalised linear models for multivariate normal covariance matrix. Biometrika, 87(2):425–435, June 2000.

[25] W. B. Wu and M. Pourahmadi. Nonparametric estimation of large covariance matrices of longitudinal data. Biometrika, 90(4):831–844, December 2003.

[26] J. Z. Huang, N. Liu, M. Pourahmadi, and L. Liu. Covariance matrix selection and estimation via penalised normal likelihood. Biometrika, 93(1):85–98, March 2006.

[27] P. J. Bickel and E. Levina. Regularization of large covariance matrices. The Annals of Statistics, 36(1):199–227, February 2008.

[28] A. J. Rothman, E. Levina, and J. Zhu. A new approach to Cholesky-based covariance regularization in high dimensions. Biometrika, 97(3):539–550, September 2010.

[29] M. S. Srivastava.
Singular Wishart and multivariate beta distributions. The Annals of Statistics, 31(5):1537–1560, October 2003.

[30] N. J. Higham. Accuracy and stability of numerical algorithms. Society for Industrial and Applied Mathematics, Philadelphia, 1992.

[31] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 3rd edition, 1996.

Figure 3: Mean value of Stein's loss versus η. n = 120 and cond(Σ) = 256.
arXiv:2505.16428v1 [math.ST] 22 May 2025

Sharp Asymptotic Minimaxity for One-Group Priors in Sparse Normal Means Problem

Sayantan Paul, Prasenjit Ghosh and Arijit Chakrabarti

Abstract

In this paper, we consider the asymptotic properties of Bayesian multiple testing rules when the mean parameter of the sparse normal means problem is modeled by a broad class of global-local priors, expressed as a scale mixture of normals. We are interested in studying the least possible risk, i.e., the minimax risk, for two frequentist losses: one being the usual misclassification (or Hamming) loss, and the other measured as the sum of FDR and FNR. Under the beta-min separation condition used by Abraham et al. (2024) in the same context, at first, assuming the level of sparsity to be known, we propose a condition on the global parameter of our chosen class of priors such that the resultant decision rule attains the minimax risk for both of the losses mentioned above. When the level of sparsity is unknown, we either use an estimate of the global parameter obtained from the data, the empirical Bayes approach, or propose an absolutely continuous prior on it, a full Bayes one. For both of the procedures, under some assumption on the unknown level of sparsity, we show that the decision rules also attain the minimax risk, again for both of the losses. Our results also provide a guideline regarding the selection of priors, in the sense that beyond a subclass (horseshoe-type priors) of our chosen class of priors, the minimax risk is not achievable with respect to either of the two loss functions considered in this article. However, the subclass of horseshoe-type priors is so large that it contains the Horseshoe, Strawderman-Berger, standard double Pareto, and inverse-gamma priors with α = 0.5, just to name a few.
In this way, along with the most popular BH procedure and the ℓ-value based approach using the spike and slab prior, a multiple testing rule based on one-group priors also achieves the optimal boundary. To the best of our knowledge, these are the first results in the literature on global-local priors which ensure the optimal minimax risk can be achieved exactly.

Keywords— Multiple testing, False Discovery Rate, global-local priors, Classification Loss.

1 Introduction

Multiple hypothesis testing is a prominent area of research in statistics due to its wide range of applications related to biology, image processing, astronomy, just to name a few. The primary objective in a multiple testing problem is to form a decision rule which controls the overall type I error. The most popular criterion in this context is the False Discovery Rate (FDR, defined below), introduced by Benjamini and Hochberg (1995). Suppose we have an observation vector X ∈ R^n, X = (X_1, X_2, ..., X_n), where each observation X_i is expressed as

X_i = θ_i + ε_i,  i = 1, 2, ..., n,    (1)

where the parameters θ_1, θ_2, ..., θ_n are unknown mean parameters and the ε_i's are the random errors present in the model. Generally, the ε_i's are assumed to be independent N(0, σ²) random variables. In the literature of multiple testing, usually σ² is assumed to be known and hence, without loss
of generality, a typical choice of σ² is σ² = 1. This is the famous normal means model or Gaussian sequence model. In the normal means problem as given in (1), one is interested in testing simultaneously

H_{0i}: θ_i = 0  vs  H_{1i}: θ_i ≠ 0,  for i = 1, ..., n.

A multiple testing procedure, defined in terms of a measurable function of the data, is as follows: let ψ be a measurable function of the data, i.e., ψ : x ∈ R^n ↦ (ψ_i(x))_{i=1,...,n} ∈ {0,1}^n, where ψ_i ≡ ψ_i(X) = 1 means rejecting the i-th null hypothesis H_{0i}. Given θ = (θ_1, ..., θ_n) ∈ R^n and any testing procedure ψ, the FDR of ψ at θ for the sparse normal model is defined as

FDR(θ, ψ) = E_θ \left[ \frac{\sum_{i: θ_i = 0} ψ_i}{\max(\sum_{i=1}^{n} ψ_i, 1)} \right]    (2)

Benjamini and Hochberg (1995) established that their proposed procedure based on p-values controls the FDR at level α under the assumption that the test statistics are independent. This work was later extended by Benjamini and Yekutieli (2001), who proved the same even under positive dependence of the test statistics. After this, several authors, including Sarkar (2002), Chen and Sarkar (2004), Sarkar (2007), Storey (2007), Sarkar (2008), and Sarkar and Guo (2010), have investigated the problem of proposing procedures for controlling the FDR in the context of the famous sparse normal means model. Given that a testing rule controls the FDR at some desired level, the next obvious question is whether the same rule can provide any control over the type II error, which translates into good power behaviour of the procedure. This probability of error is measured by the False Negative Rate (FNR). For a testing rule ψ, the FNR at θ is of the form

FNR(θ, ψ) = E_θ \left[ \frac{\sum_{i: θ_i ≠ 0} (1 − ψ_i)}{\max(\sum_{i=1}^{n} \mathbf{1}_{θ_i ≠ 0}, 1)} \right]
(3)

In this context, a question in which researchers are often interested is the optimality of the testing procedure, i.e., the least possible value of the sum of the type I and type II errors that is attainable by any multiple testing rule. In the case of a single testing problem, this question was addressed in depth by Giné and Nickl (2021) and Ingster and Suslina (2012) from a nonparametric viewpoint, and by Baraud (2002), Ingster et al. (2010) and Nickl and Van De Geer (2013) for different loss functions in high-dimensional problems. However, the same question was studied only very recently in the context of multiple hypothesis testing. Fromont et al. (2016) studied the problem in terms of the familywise error rate. On the other hand, when the signals are non-random, this problem was initially investigated by Arias-Castro and Chen (2017), and subsequent work has been done by Belitser and Nurushev (2020) and Rabinovich et al. (2020). The multiple testing risk at θ for the testing procedure ψ is of the form

R(θ, ψ) = FDR(θ, ψ) + FNR(θ, ψ).    (4)

Given FDR(θ, ψ) and FNR(θ, ψ), for obtaining the optimal risk, the benchmark is to consider the minimax risk, which is defined as

R(Θ) = inf_ψ sup_{θ∈Θ} R(θ, ψ),    (5)

where the infimum is taken over all multiple testing rules and the parameter set Θ is some subset of ℓ_0[q_n] discussed before. Note that the most naive testing rule ψ_i = 0
for all i implies that the corresponding minimax risk is exactly 1. Hence, it is important for practitioners to consider a non-trivial testing procedure such that the resultant R(Θ) is strictly less than 1. Intuition suggests that it is necessary to assume some condition on the least possible value of the signals, in order to separate them from the noise present in the data. The most natural condition in this context is the beta-min condition (stated formally later), which ensures that the non-null θ_i's exceed a certain threshold. As already stated, it was Arias-Castro and Chen (2017) who first studied the problem from a frequentist's perspective. Under the beta-min condition, assuming that all signals have the same absolute value, say M, they first established that if M > a √(2 log(n/q_n)) for some a < 1, there does not exist any multiple testing rule, at least among those based on thresholding, which yields a non-trivial amount of minimax risk uniformly over Θ(M). On the other hand, if M > a √(2 log(n/q_n)) for some a greater than 1, the parameter α corresponding to the Benjamini-Hochberg (BH) procedure can be chosen appropriately, such that the minimax risk based on the BH procedure converges to 0. As mentioned by Abraham et al. (2024), these two results act as a stepping stone for confirming the importance of the beta-min condition in order to obtain a non-trivial amount of minimax risk. In other words, they ensure that for a multiple testing rule based on thresholding, the detection boundary for signals should be close to the universal threshold √(2 log(n/q_n)). Rabinovich et al. (2020) considered the same problem and derived non-asymptotic lower and upper bounds on R(Θ) when a = a_n > 1 is known. They also established that these proposed bounds are of the same order as those of the BH procedure.
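The BH procedure referenced throughout can be stated in a few lines; this is a standard textbook sketch (the p-values below are made-up illustration numbers):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha):
    """Return a 0/1 rejection vector: reject the k smallest p-values, where
    k = max{j : p_(j) <= alpha * j / n} (and k = 0 if that set is empty)."""
    n = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, n + 1) / n
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(n, dtype=int)
    reject[order[:k]] = 1
    return reject

p = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.5, 0.9])
assert benjamini_hochberg(p, 0.05).sum() == 2   # only the two smallest survive
```

Note the step-up structure: once the largest admissible k is found, all k smallest p-values are rejected, even those above their own per-rank threshold.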
However, they did not study the situation in which their testing procedures result in a non-vanishing minimax risk; i.e., the precise asymptotic boundary on the signal strength M at which R(Θ) moves from 0 to 1 was missing from these existing works.

The rest of the work is organized as follows. In Section 2, we provide an extensive review of the literature on finding the minimax risk for the Gaussian location model and its extensions. The results obtained based on one-group priors are stated in Section 3. It consists of two subsections: Subsection 3.1 is related to the risk obtained for the classification loss, whereas results related to the loss defined as the sum of FDR and FNR are discussed in Subsection 3.2. Section 4 contains an overall discussion of our work along with some related future problems. Proofs corresponding to our results are provided in Section 5.

1.1 Notations

For any two sequences {a_n} and {b_n} with b_n ≠ 0, we write a_n ≍ b_n to denote that there exist two constants c_1 and c_2 such that 0 < c_1 ≤ a_n/b_n ≤ c_2 < ∞ for sufficiently large n. Similarly, a_n ≲ b_n denotes that, for sufficiently large n, there exists a constant c > 0 such that
a_n ≤ c b_n. Similarly, a_n ∝ b_n implies there exists some constant C > 0 such that a_n = C b_n for all n. Finally, a_n = o(b_n) denotes lim_{n→∞} a_n/b_n = 0, and φ(·) denotes the density of the standard normal distribution.

2 Existing results on minimax risk

The most recent and widely studied work on the minimax risk for the Gaussian sequence model is due to Abraham et al. (2024). At the initial stage of their work, under the Gaussianity assumption, they introduced the beta-min condition, which is designed to separate the alternative hypotheses from the nulls, and is defined as follows: given ã ∈ R, set

Θ = Θ(ã, q_n) = {θ ∈ l_0[q_n] : |θ_i| ≥ ã for i ∈ S_θ, |S_θ| = q_n}, (6)

where S_θ = {i : θ_i ≠ 0}. Motivated by the previous works of Arias-Castro and Chen (2017) and Rabinovich et al. (2020), they chose ã in (6) to be close to the universal threshold, √(2 log(n/q_n)), and considered the set

Θ_b = Θ(ã_b, q_n), ã_b = √(2 log(n/q_n)) + b, (7)

with Θ(ã, q_n) the same as in (6). They established that for any b ∈ R, the minimax risk over Θ_b is 1 − Φ(b) + o(1), i.e.,

inf_ψ sup_{θ∈Θ_b} R(θ,ψ) = 1 − Φ(b) + o(1). (8)

This result continues to hold even if b = b_n → ∞ or b = b_n → −∞. As a byproduct of this result, they obtained that the oracle thresholding rule, ψ_i = 1{|X_i| ≥ √(2 log(n/q_n))}, i = 1,...,n, is asymptotically minimax. This result also helps them to identify a sharp boundary for nontrivial testing, which is of the form ã_b = √(2 log(n/q_n)) + b with b → −∞. This boundary corresponds to a regime in which no multiple testing procedure has a lower minimax risk than the trivial ones (ψ_i = 0 for all i and ψ_i = 1 for all i). However, restricting b to be finite (even if negative) yields some non-trivial value of the minimax risk.
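The oracle thresholding rule and the 1 − Φ(b) limit can be illustrated by simulation (a sketch; the values of n, q_n, and b are illustrative):

```python
import numpy as np
from math import erf, sqrt, log

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(1)
n, q, b = 200_000, 2_000, 1.0
t = sqrt(2 * log(n / q))                 # universal threshold sqrt(2 log(n/q_n))
theta = np.zeros(n)
theta[:q] = t + b                        # signals on the boundary of Theta_b in (7)
X = theta + rng.standard_normal(n)
rej = np.abs(X) >= t                     # oracle thresholding rule
fnr = (~rej[:q]).sum() / q               # per-signal miss prob. = Phi(-b) - Phi(-2t-b) ~ 1 - Phi(b)
fdr = rej[q:].sum() / max(rej.sum(), 1)  # vanishes only asymptotically; non-negligible at finite n
```

The empirical FNR sits near 1 − Φ(b) ≈ 0.159 for b = 1, matching the dominant term of the minimax risk (8); the FDR contribution is of smaller order only asymptotically.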
Since the oracle thresholding rule depends on the number of non-zero means, q_n, which is quite often unknown, an obvious question is whether there exists any multiple testing rule that can attain the minimax risk of (8) even when the level of sparsity is unknown. Assuming that q_n ≲ n^c for some 0 < c < 1, they proved that both the BH procedure of Benjamini and Hochberg (1995), with appropriately chosen α = α_n → 0 as n → ∞, and the ℓ-value based procedure using two-groups spike-and-slab priors with a proper choice of slab distribution provide a positive answer to the question, with b in (8) either finite or b = b_n tending to ∞.

Next, we describe the ℓ-value based procedure briefly. For a detailed discussion, see Section S-8 in the supplementary material of Abraham et al. (2024) and the references therein. Consider the normal means model given in (1), where each θ_i is modeled by a two-groups spike-and-slab prior given by

θ_i iid∼ (1−p)δ_0 + pF, i = 1,2,...,n, (9)

where δ_0 denotes the Dirac measure degenerate at 0 and F denotes a non-degenerate, absolutely continuous distribution over R. The distribution F in (9) is chosen so that it can be used to model the intermediate and large values of θ_i. In a sparse situation, p is assumed to be small and denotes the probability that the true value of θ_i is non-zero. This modeling is widely studied in the literature; see, e.g., Mitchell and Beauchamp (1988), Johnstone and Silverman (2004), Efron (2004). As a consequence, the marginal distribution of X_i is of the form

X_i ind∼ (1−p)φ(x) + p ∫_R φ(x−θ) dF(θ), i = 1,2,...,n. (10)

In this problem,
Abraham et al. (2024) assumed the slab density to be quasi-Cauchy, so that the convolution φ ∗ f (f being the density of F given in (9)), say ˜f, satisfies

˜f(x) = (1/√(2π)) x^{−2} (1 − exp(−x²/2)), x ∈ R.

Now, given any t ∈ (0,1), they considered a multiple testing rule depending on the posterior inclusion probability of θ_i, of the form

ψ_ℓ(X) = (1{ℓ_{i,p}(X) < t})_{i=1,2,...,n}, where ℓ_{i,p}(X) = Π_p(θ_i = 0 | X) = (1−p)φ(X_i) / [(1−p)φ(X_i) + p ˜f(X_i)].

Since p is unknown, they used the Marginal Maximum Likelihood Estimate (MMLE) of p, defined as the maximizer of the likelihood (or, equivalently, log-likelihood) function of X given p, i.e., p̂ = argmax_{p∈[1/n,1]} L(p), with L(p) the log-likelihood function. Hence, Abraham et al. (2024) considered the decision rule ψ_ℓ̂(X), i.e., ψ_ℓ(X) evaluated at p = p̂.

Next, they generalized the problem of obtaining the minimax risk by relaxing the assumptions on both the signal strength and the noise. To weaken the assumption on the noise, they assumed that the observations are generated from a continuous distribution on R whose CDF is ˜L-Lipschitz for some constant ˜L. Some assumptions on the expected number of nulls were also required. On the other hand, in order to study more general situations regarding the signal strength, they considered a more general set of signals containing different signal strengths. For any given ã = (ã_1, ã_2,..., ã_{q_n}), with ã_j > 0 for all j = 1,2,...,q_n, consider the set

Θ(ã, q_n) = {θ ∈ l_0[q_n] : ∃ i_1, i_2,..., i_{q_n} all distinct, |θ_{i_j}| ≥ ã_j for j = 1,2,...,q_n}. (11)

To study the risk, they next introduced Λ_n(ã) = (1/q_n) Σ_{j=1}^{q_n} F_{ã_j}(a*_n), where a*_n = (ζ log(n/q_n))^{1/ζ} for some ζ > 1.
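The quasi-Cauchy marginal ˜f and the resulting ℓ-value can be coded directly. This is a sketch: the value p = 0.1, the evaluation points, and the integration range are illustrative, and in practice p would be replaced by its MMLE p̂:

```python
import numpy as np

SQ2PI = np.sqrt(2.0 * np.pi)

def phi(x):
    """Standard normal density."""
    return np.exp(-0.5 * x**2) / SQ2PI

def f_tilde(x):
    """Quasi-Cauchy marginal (1/sqrt(2 pi)) x^{-2} (1 - exp(-x^2/2)),
    extended by continuity to 1/(2 sqrt(2 pi)) at x = 0."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.full_like(x, 1.0 / (2.0 * SQ2PI))
    nz = np.abs(x) > 1e-8
    out[nz] = -np.expm1(-0.5 * x[nz] ** 2) / (SQ2PI * x[nz] ** 2)
    return out

def l_value(x, p):
    """l_{i,p}(x) = (1-p) phi(x) / ((1-p) phi(x) + p f_tilde(x)): posterior
    probability of theta_i = 0 under the two-groups prior (9); the rule
    psi_l rejects H0i when this falls below t."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    num = (1.0 - p) * phi(x)
    return num / (num + p * f_tilde(x))

# sanity check: f_tilde is a density; its x^{-2} tails leave ~0.008 mass outside [-100, 100]
xs = np.linspace(-100.0, 100.0, 400_001)
fx = f_tilde(xs)
mass = np.sum((fx[1:] + fx[:-1]) * np.diff(xs)) / 2.0   # trapezoid rule
lv = l_value(np.array([0.0, 3.0, 6.0]), p=0.1)           # decreasing in |x|
```

The ℓ-value decreases in |x|, so large observations are the ones flagged as non-null.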
Under the assumptions mentioned above, for location models, the minimax risk is obtained as

inf_ψ sup_{θ∈Θ(ã,q_n)} R(θ,ψ) = Λ_n(ã) + o(1), (12)

and for scale models, the minimax risk is of the form

inf_ψ sup_{θ∈Θ(ã,q_n)} R(θ,ψ) = 2Λ_n(ã) − 1 + o(1). (13)

Note that for the normal means model under the varying signal strength assumption in (11), we have Λ_n(ã) = (1/q_n) Σ_{j=1}^{q_n} [1 − Φ(ã_j − a*_n)], where a*_n = √(2 log(n/q_n)). Similar to the signal strength setting of (7), in this case, too, they were able to establish that both the BH procedure (with a proper choice of α converging to zero) and the ℓ-value based procedure achieve the risk bound, i.e.,

sup_{θ∈Θ(ã,q_n)} R(θ,ψ) = Λ_n(ã) + o(1), (14)

even if the level of sparsity is unknown.

In the latter half of the paper, Abraham et al. (2024) studied the minimax risk corresponding to the classification (or Hamming) loss L(θ,ψ), which is defined as

L(θ,ψ) = Σ_{i=1}^n [1{θ_i = 0} 1{ψ_i ≠ 0} + 1{θ_i ≠ 0} 1{ψ_i = 0}]. (15)

Earlier, Butucea et al. (2018) and Butucea et al. (2023) studied the variable selection problem based on the Hamming loss. They introduced the notion of almost full recovery of a testing procedure, which is defined as follows: a testing procedure ψ achieves almost full recovery with respect to the classification loss for a subset of l_0[q_n], say Θ_{q_n}, if, as n → ∞,

(1/q_n) sup_{θ∈Θ_{q_n}} E L(θ,ψ) = o(1). (16)

Abraham et al. (2024), for the Gaussian location problem, obtained the minimax risk corresponding to the classification loss,
which is of the form

(1/q_n) inf_ψ sup_{θ∈Θ_b} E L(θ,ψ) = 1 − Φ(b) + o(1), (17)

where Θ_b is as defined in (7) and b may be either finite or b = b_n → ±∞. This result also holds for the BH and ℓ-value based procedures under the assumption of polynomial sparsity stated before.

To the best of our knowledge, the only work based on one-group priors related to minimax risk in the normal means model is due to Salomond (2017). Under the assumption that observations are generated from a normal distribution as given in (1), they considered modeling each θ_i by a continuous prior, instead of a two-groups prior, with the hierarchical formulation

θ_i | σ_i² ind∼ N(0, σ_i²), σ_i² ind∼ π(σ_i²), (18)

where π(·) denotes a density on the positive real numbers. Here σ_i² plays the dual role of capturing the overall sparsity of the model and detecting the real signals. It is clear from (18) that all the one-group global-local priors stated earlier belong to this rich class of priors. This class of priors was previously studied by van der Pas et al. (2016), who were interested in obtaining the optimal posterior contraction rate of the parameter vector around the truth. In this work, they were interested in testing H_{0i}: θ_i = 0 vs H_{1i}: θ_i ≠ 0 simultaneously. Motivated by the existing works of Carvalho et al. (2009), Datta and Ghosh (2013), Ghosh et al. (2016) and Ghosh and Chakrabarti (2017), they proposed a multiple testing rule ψ = (ψ_1,...,ψ_n) with ψ_i = 1(E(ϕ_i | X = x) > α) for some fixed threshold α, where ϕ_i = σ_i²/(1 + σ_i²). They remarked that for α = 1/2, their proposed testing rule becomes equivalent to that of Ghosh and Chakrabarti (2017). Motivated by the works of Arias-Castro and Chen (2017) and Rabinovich et al. (2020), they also considered the problem of studying the minimax bound of the risk defined as the sum of FDR and FNR. Based on the conditions proposed by van der Pas et al.
(2016), along with some additional assumptions on the level of sparsity, they obtained an upper bound on the minimax risk. This risk converges to zero as the number of hypotheses n diverges to ∞. However, there were some gaps in their work. First, their chosen boundary for the signal strengths exceeded the universal threshold. Hence, it remained uncertain whether the same upper bound (on the minimax risk) holds when the non-zero θ_i's exceed the universal threshold, i.e., for the parameter space stated in (7). Second, they did not study the situation in which there are multiple levels of signals. Hence, some questions remain unanswered regarding the behavior of their proposed rule in that problem. Third, there is no mention of minimaxity under the classification loss in their work. All this implies that the question of whether a one-group prior can attain the minimax risk for the Gaussian location model was not answered in depth. In a nutshell, the study of minimax risk based on one-group priors needs to be revisited once again. In the next Section, we try to provide positive answers to the
questions raised in the last part of this Section based on one-group priors.

3 Results based on one-group priors

Inspired by the works of Abraham et al. (2024) and Salomond (2017), we are interested in studying the multiple testing problem H_{0i}: θ_i = 0 vs H_{1i}: θ_i ≠ 0 simultaneously, when each θ_i is modeled as a scale mixture of normals:

θ_i | λ_i, τ ind∼ N(0, λ_i² σ² τ²), λ_i² ind∼ π_1(λ_i²), τ ∼ π_2(τ), (19)

for some choices of π_1 and π_2. Note that, in the hierarchical formulation (19), the posterior mean of θ_i is obtained as

E(θ_i | X, τ, σ) = (1 − E(κ_i | X_i, τ, σ)) X_i, (20)

where κ_i = 1/(1 + λ_i² τ²). As a result, in the literature on one-group priors, E(κ_i | X_i, τ, σ) is interpreted as a shrinkage coefficient, and hence such priors are also known as one-group shrinkage priors. Of the two sets of hyperparameters (λ_i and τ) present in the hierarchy (19), the λ_i's are named local shrinkage parameters and τ the global shrinkage parameter. Hence, alternatively, one-group priors are also termed global-local shrinkage priors. Different choices of π_1(·) proposed by several authors have enriched the literature over the years. Examples of such priors include the t-prior (Tipping (2001)), the Laplace prior (Park and Casella (2008)), the normal-exponential-gamma prior (Griffin and Brown (2005)), the horseshoe prior (Carvalho et al. (2009), Carvalho et al. (2010), Polson and Scott (2010), Polson and Scott (2012)), the three parameter beta normal priors (Armagan et al. (2011)), the generalized double Pareto priors (Armagan et al. (2013)), the Dirichlet–Laplace prior (Bhattacharya et al. (2015)), the horseshoe+ prior (Bhadra et al. (2017)), etc.

In order to mimic the optimal properties (in terms of either estimation or testing) of its two-groups counterparts, this class of priors should have the dual nature of squelching the noise towards zero while keeping the true signals intact.
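A quick draw from the hierarchy (19) with a horseshoe-type choice (λ_i half-Cauchy, σ = 1, and τ fixed at an illustrative small value) displays exactly this dual behavior:

```python
import numpy as np

rng = np.random.default_rng(2)
n, tau = 100_000, 0.01
lam = np.abs(rng.standard_cauchy(n))        # lambda_i ~ half-Cauchy(0, 1): horseshoe-type pi_1
theta = lam * tau * rng.standard_normal(n)  # theta_i | lambda_i, tau ~ N(0, lambda_i^2 tau^2)

near_zero = np.mean(np.abs(theta) < 0.01)   # bulk of the prior mass is squelched towards 0
escaped = np.mean(np.abs(theta) > 1.0)      # heavy tails: a small fraction stays large
```

Most draws are tightly concentrated near zero while a small but non-negligible fraction escapes far into the tails, which is the tail-robustness property described above.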
In this hierarchical formulation, the global parameter τ controls the level of sparsity, and hence a smaller value of it ensures a large mass around the origin; on the other hand, the λ_i's are used to detect the signals present in the model. This results in noise observations being shrunk towards zero while the real signals are left unshrunk. This property is known as tail robustness, and hence such priors are named tail robust priors.

Motivated by the works of Polson and Scott (2010), Ghosh et al. (2016) considered a general class of priors of the form

π_1(λ_i²) = K (λ_i²)^{−a−1} L(λ_i²), (21)

where K > 0 is the constant of proportionality, a is any positive real number, and L is the slowly varying function defined earlier. They established that, for different values of a and different functions L(·), the class of priors they consider contains the three parameter beta normal priors introduced by Armagan et al. (2011) and the generalized double Pareto prior due to Armagan et al. (2013), to name just a few. The three parameter beta normal family is a rich class containing the horseshoe, the Strawderman–Berger prior, and the normal-exponential-gamma priors. Using a simulation study, Carvalho et al. (2009) first provided an indication that E(1−κ_i | X_i, τ), with κ_i = 1/(1 + λ_i² τ²), mimics the role of the posterior inclusion probability (Π(
θ_i ≠ 0 | X)) corresponding to the spike-and-slab prior. They proposed the following intuitive decision rule:

Reject H_{0i} if E(1−κ_i | X_i, τ) > 1/2, i = 1,2,...,n. (22)

Later, Datta and Ghosh (2013) and Ghosh and Chakrabarti (2017) established the optimality of the decision rule (22) in the context of minimizing the Bayes risk under the 0–1 loss function. Motivated by these works, in order to study the minimax risk of different loss functions based on one-group priors for the normal means problem, we also consider the decision rule ψ = (ψ_1, ψ_2,...,ψ_n), where, for i = 1,2,...,n,

ψ_i ≡ ψ_i(X_i) = 1{E(1−κ_i | X_i, τ) > 1/2}. (23)

Based on the decision rule (23), we want to investigate whether any global-local prior of the form (19) satisfying (21) can attain the minimax risk, either under the Hamming loss or under the sum of FDR and FNR, irrespective of whether the level of sparsity is known. For the theoretical study of the decision rule (23), we assume the slowly varying function L(·) defined in (21) satisfies

Assumption 1. For a ≥ 1/2:
(a) There exists some c_0 > 0 such that L(t) ≥ c_0 for all t ≥ t_0, for some t_0 > 0 depending on both L and c_0.
(b) There exists some M ∈ (0,∞) such that sup_{t∈(0,∞)} L(t) ≤ M.

It is evident from the earlier results of Datta and Ghosh (2013), van der Pas et al. (2014), Ghosh et al. (2016), and Ghosh and Chakrabarti (2017) that the global shrinkage parameter τ plays an important role in capturing the underlying sparsity of the model. However, the sparsity level is usually unknown. Since the decision rule (23) also contains τ, it is of equal importance to propose a modified version of it for the case when the level of sparsity is unknown. Using the empirical Bayes estimate of τ proposed by van der Pas et al. (2014), the decision rule is modified to, for i = 1,2,...,n,

ψ_i ≡ ψ_i^{EB}(X) = 1{E(1−κ_i | X_i, τ̂) > 1/2}, (24)

where

τ̂ = max{1/n, (1/(c_2 n)) Σ_{i=1}^n 1(|X_i| > √(c_1 log n))}, (25)

with c_1 ≥ 2 and c_2 ≥ 1.
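A numerical sketch of rule (23) and of the plug-in estimate (25), specialized to the horseshoe prior π_1(λ²) ∝ (λ²)^{-1/2}(1+λ²)^{-1}; the quadrature grid and the values of x and τ below are implementation choices, not quantities from the paper:

```python
import numpy as np

U = np.logspace(-8, 10, 4001)   # grid for u = lambda_i^2, uniform in log u

def e_one_minus_kappa(x, tau):
    """E(1 - kappa_i | X_i = x, tau) for the horseshoe prior, by quadrature.
    The constant log-grid spacing cancels in the posterior-mean ratio."""
    prior = U ** -0.5 / (1.0 + U)                  # horseshoe: a = 1/2, L(u) = u/(1+u) in (21)
    var = 1.0 + U * tau**2                         # Var(X_i | u, tau)
    w = prior * np.exp(-0.5 * x**2 / var) / np.sqrt(var) * U   # extra U: du = u d(log u)
    omk = U * tau**2 / var                         # 1 - kappa_i = u tau^2 / (1 + u tau^2)
    return float(np.sum(omk * w) / np.sum(w))

def psi(x, tau):
    """Decision rule (23): reject H0i iff the shrinkage weight exceeds 1/2."""
    return e_one_minus_kappa(x, tau) > 0.5

def tau_eb(X, c1=2.0, c2=1.0):
    """Empirical Bayes estimate (25) of tau (van der Pas et al. (2014))."""
    n = len(X)
    count = np.sum(np.abs(X) > np.sqrt(c1 * np.log(n)))
    return max(1.0 / n, count / (c2 * n))
```

For τ = 0.1, the shrinkage weight is small at x = 0 and close to 1 at x = 10, so ψ rejects only the large observation; with an all-zero sample of length 100, tau_eb returns the floor 1/n = 0.01.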
An alternative approach is to place an absolutely continuous prior π(τ) on τ, known as the full Bayes approach. Motivated by the previous work of Paul and Chakrabarti (2022), here we consider a general class of priors π(·) satisfying the following condition:

(C4) ∫_{1/n}^{α_n} π(τ) dτ = 1 for some α_n such that α_n → 0 and nα_n → ∞ as n → ∞.

In this procedure, the decision rule is modified to, for i = 1,2,...,n,

ψ_i ≡ ψ_i^{FB}(X) = 1{E(1−κ_i | X) > 1/2}, (26)

where the full Bayes posterior mean of 1−κ_i is defined as

E(1−κ_i | X) = ∫_{1/n}^{α_n} E(1−κ_i | X, τ) π(τ | X) dτ = ∫_{1/n}^{α_n} E(1−κ_i | X_i, τ) π(τ | X) dτ.

We will study the optimality of the decision rules (23), (24), and (26), depending on whether the level of sparsity is known, for both the classification loss and the loss measured as the sum of FDR and FNR. Results on the minimax risk related to the Hamming or classification loss using one-group priors are discussed in Subsection 3.1. On the other hand, Subsection 3.2 contains results on the minimax risk corresponding to the loss defined as the sum of FDR and FNR using one-group shrinkage priors.

We end this Section with a Proposition giving a lower bound on the probability of type II error corresponding to the i-th hypothesis. It shows why a > 1/2 in (21) cannot be used
as a good choice for the broad class of one-group priors considered here.

Proposition 1. Suppose X_i ind∼ N(θ_i, 1), i = 1,2,...,n. We want to test the hypotheses H_{0i}: θ_i = 0 vs H_{1i}: θ_i ≠ 0 simultaneously based on the decision rule (23) using one-group priors. Assume that τ → 0 as n → ∞ such that nτ/q_n → C ∈ (0,∞). Also assume that the slowly varying function L(·) satisfies Assumption 1. Then, for a > 1/2 in (21), for i = 1,...,q_n and b ∈ R, we have

sup_{|θ_i| ≥ √(2 log(n/q_n)) + b} t_{2i} → 1, as n → ∞, (27)

where t_{2i} denotes the type II error corresponding to the i-th hypothesis.

Observe that for Theorems 1 and 2, the loss function L(θ,ψ) for which the minimax risk has to be obtained depends on t_{1i} and t_{2i}, the type I and type II errors corresponding to the i-th hypothesis, respectively. It will be clear from the proofs of these two Theorems that, of the type I and type II errors, the type II error is the dominating one. Hence, the lower bound on the type II error over the subset Θ_b for a > 1/2 implies that the overall risk also exceeds the minimax risk, 1 − Φ(b) + o(1), in this case. On the other hand, when one is interested in studying the minimax risk based on the loss defined as the sum of FDR and FNR, the proofs of Theorems 4 and 5 imply that those quantities mainly depend on the FNR, which is the same as the type II error. Hence, the conclusion of Proposition 1 goes through here, too. As a consequence, Proposition 1 acts as a guideline for our choice of a in (21) in the subsequent subsections in order to obtain the minimax risk.

3.1 Results related to the classification loss

Motivated by the result of Proposition 1, from this Subsection onwards we are interested in the special case a = 1/2 in (21). This subclass of priors is named horseshoe-type priors. Note that it is a rich subclass which contains the three parameter beta normal mixtures with α = 0.5 and β > 0 (e.g., horseshoe, Strawderman–Berger), the generalized double Pareto priors with α = 1 (e.g.
standard double Pareto), the inverse-gamma priors with α = 0.5, etc.

As stated earlier, we are interested in studying the optimality of our decision rule (23) based on horseshoe-type priors, in the sense of whether (23) can attain the minimax risk over the set Θ_b. At first, we assume the level of sparsity is known and use the global parameter τ as a tuning parameter, proposing a condition on τ that depends on the level of sparsity. The first result of our work is as follows:

Theorem 1. Suppose X_i ind∼ N(θ_i, 1), i = 1,2,...,n. We want to test the hypotheses H_{0i}: θ_i = 0 vs H_{1i}: θ_i ≠ 0 simultaneously based on the decision rule (23) using one-group priors. Assume that τ → 0 as n → ∞ such that nτ/q_n → C ∈ (0,∞). Also assume that the slowly varying function L(·) satisfies Assumption 1. Then, for horseshoe-type priors (i.e., for a = 1/2), we have

(1/q_n) sup_{θ∈Θ_b} E L(θ,ψ) = 1 − Φ(b) + o(1), (28)

where L(θ,ψ) is as defined in (15) and Θ_b is defined in (7). This result holds for any b ∈ R or b = b_n → ∞.

Remark 1. The importance of Theorem 1 lies in proposing a decision rule based on one-group priors which attains
the minimax risk corresponding to the classification or Hamming loss. To the best of our knowledge, it is the first result in the literature on global-local priors in this context. To establish this result, the only condition one needs on τ is that τ is asymptotically of the same order as q_n/n, i.e., the proportion of non-zero means. The same condition was previously used by Ghosh et al. (2016) when they obtained the optimality of the same testing rule with respect to the Bayes risk for the normal means model, assuming the observations are generated from a two-groups spike-and-slab distribution. Note that the result is established by providing bounds on the type I and type II errors. For the type I error, Theorem 6 of Ghosh et al. (2016) comes in handy. However, the calculation for the type II error involves some technical observations and many non-trivial arguments. In a nutshell, this result illustrates that, like the BH procedure of Benjamini and Hochberg (1995) and the ℓ-value approach based on two-groups spike-and-slab priors, a decision rule (23) based on a broad class of global-local priors can also attain the minimax risk for the normal means model under the assumption that the sparsity level is known.

The main drawback of Theorem 1 is that it assumes the sparsity of the underlying model to be known, which seldom holds. This motivates us to consider the adaptive decision rule (24) and study it in terms of the minimax risk. Now consider the following Theorem.

Theorem 2. Consider the setup of Theorem 1. Suppose we want to test the hypotheses H_{0i}: θ_i = 0 vs H_{1i}: θ_i ≠ 0 simultaneously based on the decision rule (24) using one-group priors, where τ̂ is the same as in (25). Assume that the number of non-zero means q_n satisfies n^{β_1} ≤ q_n ≤ n^{β_2} for some 0 < β_1 < β_2 < 1. Also assume that the slowly varying function L(·) satisfies Assumption 1. Then, for horseshoe-type priors (i.e.
for a = 1/2), we have

(1/q_n) sup_{θ∈Θ_b} E L(θ,ψ) = 1 − Φ(b) + o(1), (29)

where L(θ,ψ) and Θ_b are the same as defined in Theorem 1. This result holds for any b ∈ R or b = b_n → ∞.

Remark 2. The key importance of Theorem 2 lies in providing a positive answer to the question of whether the minimax risk based on the classification loss can still be attained using the decision rule even if the level of sparsity is unknown. Though the two results (Theorems 1 and 2) attain the same risk, the calculations are quite different. Note that, due to the introduction of τ̂ in place of τ as stated in (24), the decision rule for rejecting the i-th null hypothesis depends on the entire dataset X, not on X_i only, and also has a complicated form. Hence, in order to bypass direct calculations based on E(1−κ_i | X_i, τ̂), we try to incorporate some salient features of E(1−κ_i | x_i, τ) for any fixed x_i ∈ R. Observe that, for any fixed x_i ∈ R, E(1−κ_i | x_i, τ) is non-decreasing in τ. Hence, for the calculations corresponding to the type I error, we first truncate τ̂ to a range in which the monotonicity in τ can be used. In that range, after using monotonicity, one can employ the same
arguments used in Theorem 1 for controlling the type I error on that part. On the complementary part of the range, we need to provide nontrivial arguments exploiting the structure of τ̂. For controlling the type II error, we apply the procedure to E(κ_i | x_i, τ), using the fact that, for any fixed x_i ∈ R, E(κ_i | x_i, τ) is non-increasing in τ. Nevertheless, we should mention that, though the stated approach looks similar to those used in Theorems 10 and 11 of Ghosh et al. (2016), our procedure is quite different from those, since the observations generated in this case differ from that problem; to be specific, there is no assumption of an underlying two-groups structure here. In this sense, the techniques used to derive Theorems 10 and 11 of Ghosh et al. (2016) cannot be applied in this context. Finally, we should not forget that this result implies that the decision rule (24) acts as an alternative to the famous BH procedure of Benjamini and Hochberg (1995) and the ℓ-value approach based on two-groups spike-and-slab priors, even if the sparsity level is unknown.

Theorem 3. Consider the setup of Theorem 1. Suppose we want to test the hypotheses H_{0i}: θ_i = 0 vs H_{1i}: θ_i ≠ 0 simultaneously based on the decision rule (26) using one-group priors, where the prior on τ satisfies (C4) with α_n ∝ (log n)^{δ_2}/n for some 0 < δ_2 < 1/2. Assume that the number of non-zero means q_n satisfies q_n ∝ (log n)^{δ_3} for some δ_3 > 0. Also assume that the slowly varying function L(·) satisfies Assumption 1. Then, for horseshoe-type priors (i.e., for a = 1/2), we have

(1/q_n) sup_{θ∈Θ_b} E L(θ,ψ) = 1 − Φ(b) + o(1), (30)

where L(θ,ψ) and Θ_b are the same as defined in Theorem 1. This result holds for any b ∈ R or b = b_n → ∞.

3.2 Results related to minimax risk for FDR+FNR

As alluded to earlier, in this Subsection, too, we confine our interest to studying whether the decision rules (23) and (24) based on horseshoe-type priors can achieve the minimax risk asymptotically, irrespective of whether the level of sparsity is known.
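The role of a highlighted by Proposition 1 can also be seen numerically: for a > 1/2, the rejection threshold of rule (23) sits strictly above the one for the horseshoe-type case a = 1/2 (asymptotically √(4a log(1/τ)) versus √(2 log(1/τ))). A sketch, using the hypothetical slowly varying choice L(u) = (u/(1+u))^{2a}, which satisfies Assumption 1 and reduces exactly to the horseshoe at a = 1/2; the grids and the value of τ are illustrative:

```python
import numpy as np

U = np.logspace(-8, 14, 6001)   # quadrature grid for u = lambda_i^2

def e_one_minus_kappa(x, tau, a):
    """E(1 - kappa_i | x, tau) under pi_1(u) ∝ u^{-a-1} L(u),
    L(u) = (u/(1+u))^{2a} (hypothetical; the horseshoe when a = 1/2)."""
    prior = U ** (a - 1.0) / (1.0 + U) ** (2.0 * a)
    var = 1.0 + U * tau**2
    w = prior * np.exp(-0.5 * x**2 / var) / np.sqrt(var) * U   # log-grid measure
    omk = U * tau**2 / var
    return np.sum(omk * w) / np.sum(w)

def rejection_threshold(tau, a):
    """Smallest x >= 0 at which rule (23) rejects, i.e. E(1 - kappa | x, tau) > 1/2."""
    for x in np.linspace(0.0, 15.0, 1501):
        if e_one_minus_kappa(x, tau, a) > 0.5:
            return x
    return np.inf

tau = 1e-5                                 # tau of order q_n/n, cf. the condition in Theorem 4
t_half = rejection_threshold(tau, a=0.5)   # leading order sqrt(2 log(1/tau)) ~ 4.8
t_one = rejection_threshold(tau, a=1.0)    # leading order sqrt(4 log(1/tau)) ~ 6.8: strictly larger
```

The inflated threshold for a = 1 is what drives the type II error in (27) to 1 at signals near the universal threshold, and is the reason the results below fix a = 1/2.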
Similar to the previous subsection, here, too, at first we assume the level of sparsity to be known and model τ accordingly. Our result provides the affirmative conclusion that, like the BH and ℓ-value procedures, as stated by Abraham et al. (2024), the decision rule (23) can also attain the minimax risk assuming the proportion of non-zero means to be known. The result is stated formally below.

Theorem 4. Suppose X_i ind∼ N(θ_i, 1), i = 1,2,...,n. We want to test the hypotheses H_{0i}: θ_i = 0 vs H_{1i}: θ_i ≠ 0 simultaneously based on the decision rule (23) using one-group priors. Assume that τ → 0 as n → ∞ such that nτ/q_n → C ∈ (0,∞). Also assume that the slowly varying function L(·) satisfies Assumption 1. Then, for horseshoe-type priors (i.e., for a = 1/2), we have

sup_{θ∈Θ_b} R(θ,ψ) = 1 − Φ(b) + o(1), (31)

where Θ_b is the same as defined in Theorem 1 and R(θ,ψ) is defined in (4). This result continues to hold for any b ∈ R or b = b_n → ∞.

Remark 3. Similar to the results of the previous Subsection, this result also justifies the selection of the decision rule (23) based on one-group priors, since it attains the minimax risk for the loss function defined as the sum of
FDR and FNR. As stated earlier, the only prior work on an upper bound for the minimax risk under this loss induced by one-group priors is due to Salomond (2017). However, their choice of the minimum value of the non-zero means exceeds the universal threshold. Moreover, they did not provide any answer as to whether their proposed decision rule can attain the minimax risk. This result provides a positive answer to that question for horseshoe-type priors, for the minimax risk when the supremum over θ is taken over a set that contains the universal threshold. An advantage of this result is that the upper bound can be made arbitrarily small by taking b = b_n → ∞. The proof proceeds by providing bounds on the FDR and FNR. The truncation used in bounding the FDR is inspired by their approach. However, we have to perform delicate calculations after that; in particular, we need Hoeffding's inequality to bound a quantity related to the FDR. The technique used in bounding the FNR is quite different from that of Salomond (2017); there, the bound on the type II error, as derived in previous results, comes in useful. Our proof also demonstrates that some calculations used in Salomond (2017) can be done by a simpler approach.

Given that (23) achieves the minimax risk, it is natural to ask whether the adaptive decision rule (24), too, can provide the same result when the number of non-zero means is unknown. The following result answers the question in the affirmative.

Theorem 5. Consider the setup of Theorem 1. Suppose we want to test the hypotheses H_{0i}: θ_i = 0 vs H_{1i}: θ_i ≠ 0 simultaneously based on the decision rule (24) using one-group priors, where τ̂ is the same as defined in van der Pas et al. (2014). Assume that the number of non-zero means q_n satisfies n^{β_1} ≤ q_n ≤ n^{β_2} for some 0 < β_1 < β_2 < 1. Also assume that the slowly varying function L(·) satisfies Assumption 1. Then, for horseshoe-type priors (i.e.
for a = 1/2), we have

sup_{θ∈Θ_b} R(θ,ψ) = 1 − Φ(b) + o(1), (32)

where Θ_b is the same as defined in Theorem 1 and R(θ,ψ) is defined in (4). This result holds for any b ∈ R or b = b_n → ∞.

Remark 4. This result elucidates that, like the BH procedure of Benjamini and Hochberg (1995) and the ℓ-value approach based on two-groups spike-and-slab priors, a decision rule (24) based on a broad class of global-local priors can also attain the minimax risk for the normal means model, even if the sparsity level is unknown. A result of this type was earlier considered by Salomond (2017) for a broader class of priors, which contains our chosen class. They proved that the upper bound of the minimax risk for their proposed decision rule converges to 0 as n → ∞. However, their work did not answer the question of whether their decision rule can attain the minimax risk. On the contrary, our result does provide a positive answer to that question. As a byproduct of our result, the minimax
risk can be made arbitrarily small by choosing b = b_n → ∞, even in an unknown-sparsity situation. In this way, their result becomes a special case of ours for the class of priors under study.

Theorem 6. Consider the setup of Theorem 1. Suppose we want to test the hypotheses H_{0i}: θ_i = 0 vs H_{1i}: θ_i ≠ 0 simultaneously based on the decision rule (26) using one-group priors, where the prior on τ satisfies (C4) with α_n ∝ (log n)^{δ_2}/n for some 0 < δ_2 < 1/2. Assume that the number of non-zero means q_n satisfies q_n ∝ (log n)^{δ_3} for some δ_3 > 0. Also assume that the slowly varying function L(·) satisfies Assumption 1. Then, for horseshoe-type priors (i.e., for a = 1/2), we have

sup_{θ∈Θ_b} R(θ,ψ) = 1 − Φ(b) + o(1), (33)

where Θ_b is the same as defined in Theorem 1 and R(θ,ψ) is defined in (4). This result holds for any b ∈ R or b = b_n → ∞.

4 Concluding Remarks

With the availability of large datasets in fields such as Astronomy, Bioinformatics, Image Processing, and many more, multiple hypothesis testing is a serious concern for researchers. The main task is to control the overall type I error of the problem, the most famous criterion being the FDR. Given a procedure that controls the FDR, it is natural to ask about its performance in controlling the FNR, i.e., the optimal value of the risk corresponding to the loss defined as the sum of FDR and FNR. Another related problem of concern is the optimal risk of a testing procedure under the classification loss. For the normal means model, Abraham et al. (2024) obtained the minimax risk for both these problems and established that the BH procedure of Benjamini and Hochberg (1995) and the ℓ-value procedure using a spike-and-slab prior attain the risk, even if the sparsity pattern is unknown. This raises the question of whether any continuous one-group prior can attain the minimax risk. In this work, we provide a positive answer to this question by considering a broad class of one-group priors.
To the best of our knowledge, these are the first results of their kind. As a consequence, our results give practitioners the freedom to choose one-group priors as an alternative to their two-groups counterparts with more confidence. Though our results are promising and ensure that a general class of continuous priors can be considered as an alternative to the 'gold standard' spike-and-slab priors, some questions still need to be addressed from a decision-theoretic viewpoint. First, when the level of sparsity is unknown, the most general estimate of $\tau$, as stated by van der Pas et al. (2017), is the MMLE of $\tau$, defined as the maximizer of the marginal likelihood function of $X$ given $\tau$. So it is natural to ask whether the decision rule (24) using the MMLE of $\tau$ can attain the minimax risk for the two loss functions considered here. Hence, one might be interested in studying the optimality of this procedure in this context. Another possible extension of our work is to study the class of priors proposed by van der Pas
et al. (2016). For that class of priors, one first needs to prove whether, assuming the level of sparsity to be known, any multiple testing rule can attain the minimax risk asymptotically under the beta-min condition of Abraham et al. (2024). The question regarding the optimality of a testing procedure for an unknown level of sparsity is still open. All of these can be considered problems of interest to be discussed in detail in the future.

5 Proofs

Proof of Proposition 1. From the definition of the type II error, we have
$$
t_{2i} = P_{\theta_i\neq 0}\big(E(\kappa_i\mid X_i,\tau) > \tfrac12\big)
= P_{\theta_i\neq 0}\big(E(1-\kappa_i\mid X_i,\tau) \leq \tfrac12\big)
\geq P_{\theta_i\neq 0}\Big(K_1\tau^{2a}\exp\big(\tfrac{X_i^2}{2}\big)(1+o(1)) \leq \tfrac12\Big)
= P_{\theta_i\neq 0}\Big(X_i^2 \leq 4a\log\big(\tfrac{1}{\tau}\big)(1+o(1))\Big)
= \Phi\Big(\sqrt{4a\log\big(\tfrac{1}{\tau}\big)}\,(1+o(1)) - \theta_i\Big) - \Phi\Big(-\sqrt{4a\log\big(\tfrac{1}{\tau}\big)}\,(1+o(1)) - \theta_i\Big). \qquad (34)
$$
The inequality in the third line holds by Theorem 4 of Ghosh et al. (2016). Note that in order to provide a lower bound on the type II error on the subset of $\Theta_b$, it is enough to consider only one candidate $\theta_i$ from that set and show that the lower bound in (34) for that choice of $\theta_i$ tends to 1 as $n\to\infty$. Here, take $b=0$ and $\theta_i = \sqrt{2\log(n/q_n)}$. Next, note that under the assumption on $\tau$ that $\frac{n\tau}{q_n}\to C\in(0,\infty)$, we obtain
$$\log\big(\tfrac{1}{\tau}\big) = \log\big(\tfrac{n}{q_n}\big) + \log\big(\tfrac{1}{C}\big) + \log(1+o(1)) = \log\big(\tfrac{n}{q_n}\big)(1+o(1)).$$
This implies, for all sufficiently large $n$,
$$
\sup_{|\theta_i|\geq\sqrt{2\log(n/q_n)}+b} t_{2i} \geq \Phi\big((\sqrt{4a}-\sqrt{2})\sqrt{\log(n/q_n)}\big)(1+o(1)) - \Phi\big((-\sqrt{4a}-\sqrt{2})\sqrt{\log(n/q_n)}\big)(1+o(1)) = \Phi\big((\sqrt{4a}-\sqrt{2})\sqrt{\log(n/q_n)}\big)(1+o(1)) \to 1, \quad \text{as } n\to\infty.
$$
Here the equality holds since for $a>\frac12$, $\Phi\big((\sqrt{4a}-\sqrt{2})\sqrt{\log(n/q_n)}\big)(1+o(1)) \to 1$ as $n\to\infty$, and for any $a>0$, $\Phi\big((-\sqrt{4a}-\sqrt{2})\sqrt{\log(n/q_n)}\big)(1+o(1)) \to 0$ as $n\to\infty$.

Proof of Theorem 1. The lower bound on the left-hand side of (28) holds due to Theorem 7 of Abraham et al. (2024). Hence, we need to obtain the corresponding upper bound. Using arguments similar to Theorem 7 of Abraham et al. (2024), note that
$$\mathbb{E}\,L(\theta,\psi) = \sum_{i:\theta_i=0} t_{1i} + \sum_{i:\theta_i\neq 0} t_{2i}, \qquad (35)$$
where $t_{1i}$ and $t_{2i}$ denote the type I and type II errors corresponding to the $i$th hypothesis, respectively. Hence, it is enough to show that
$$\frac{1}{q_n}\sum_{i:\theta_i=0} t_{1i} = o(1), \qquad (36)$$
and
$$\frac{1}{q_n}\sum_{i:\theta_i\neq 0}\;\sup_{|\theta_i|\geq\sqrt{2\log(n/q_n)}+b} t_{2i} \leq 1-\Phi(b)+o(1). \qquad (37)$$
First, we prove (36). In this case, using Theorem 6 of Ghosh et al. (2016), we have
$$t_{1i} \equiv t_1 \leq K_1\,\frac{\tau^{2a}\,L(1/\tau^2)}{\sqrt{\log(1/\tau^2)}}\,(1+o(1)), \qquad (38)$$
where $K_1$ is a positive constant independent of $\tau$ and $i$, and the $o(1)$ term satisfies $\lim_{n\to\infty} o(1)=0$ and is independent of $i$. Hence, using (38) and the assumption that $L(\cdot)$ satisfies Assumption 1, for $a\geq\frac12$,
$$\frac{1}{q_n}\sum_{i:\theta_i=0} t_{1i} \leq K_1 M\,\frac{(n-q_n)}{q_n}\,\frac{\tau\,(\tau^2)^{a-\frac12}}{\sqrt{\log(1/\tau^2)}}\,(1+o(1)) \leq K_1 M\,\frac{n\tau}{q_n}\,\frac{(\tau^2)^{a-\frac12}}{\sqrt{\log(1/\tau^2)}}\,(1+o(1)) = o(1), \quad \text{as } n\to\infty.$$
The equality in the last line follows from the assumption on $\tau$. This completes the proof of (36). Now, we need an upper bound on $t_{2i}$ for each $i=1,2,\ldots,n$. Recall from Lemma 3 of Ghosh and Chakrabarti (2017) that for each $x\in\mathbb{R}$ and $0<\tau<1$, $E(\kappa\mid x,\tau) \leq g(x,\tau,\eta,\delta)$, with $g(x,\tau,\eta,\delta) = g_1(x)+g_2(x,\tau,\eta,\delta)$ for any $\eta,\delta\in(0,1)$, where
$$g_1(x) = C_1\Big[x^2\int_0^{x^2/(1+t_0)}\exp\big(-\tfrac{u}{2}\big)\,du\Big]^{-1} \quad \text{and} \quad g_2(x,\tau,\eta,\delta) = C_2\,\tau^{-1}\exp\Big(-\frac{\eta(1-\delta)}{2}x^2\Big),$$
where $C_1$ and $C_2$ are two constants and $t_0$ is defined in Assumption 1. Fix any $\eta\in(0,1)$ and $\delta\in(0,1)$, and choose $\rho = \frac{2}{\eta(1-\delta)}[1+g(\tau)]$, where $g(\tau)>0$ tends to zero as $\tau\to 0$ in such a way that $\tau^{g(\tau)}\to 0$ as $\tau\to 0$.
One possible choice is $g(\tau) = \big(\log(1/\tau)\big)^{-\delta_1}$ for
some $0<\delta_1<1$. Later we may make a more specific choice of $g(\tau)$, if required. Now define $B_n$ and $C_n$ as $B_n = \{g(X_i,\tau,\eta,\delta) > \frac12\}$ and $C_n = \{|X_i| > \sqrt{\rho\log(1/\tau)}\}$, with $\rho$ as defined before. Clearly,
$$t_{2i} = P_{\theta_i}\big(E(\kappa_i\mid X_i,\tau) > \tfrac12\big) \leq P_{\theta_i}(B_n) \leq P_{\theta_i}(B_n\cap C_n) + P_{\theta_i}(C_n^c). \qquad (39)$$
Observe that, for any $\theta_i$,
$$
P_{\theta_i}\big(g(X_i,\tau,\eta,\delta) > \tfrac12,\; |X_i| > \sqrt{\rho\log(1/\tau)}\big)
\leq P_{\theta_i}\big(g_1(X_i) > \tfrac14,\; |X_i| > \sqrt{\rho\log(1/\tau)}\big) + P_{\theta_i}\big(g_2(X_i,\tau,\eta,\delta) > \tfrac14,\; |X_i| > \sqrt{\rho\log(1/\tau)}\big)
\leq P_{\theta_i}\big(g_1(X_i) > \tfrac14,\; |X_i| > \sqrt{2\log(1/\tau)}\big) + P\big(\tau^{g(\tau)} > \tfrac14\big)
\leq P\big(\log(1/\tau)\,(1-\tau^{1/(1+t_0)}) < C_1\big) + P\big(\tau^{g(\tau)} > \tfrac14\big)
\to 0 \quad \text{as } \tau\to 0. \qquad (40)
$$
Here, in the chain of inequalities, the second one holds since $\rho>2$; next we use the fact that both $g_1(x)$ and $g_2(x,\tau,\eta,\delta)$ are decreasing functions of $|x|$. Hence, for any $\theta_i$ and all sufficiently large $n$,
$$P_{\theta_i}(B_n\cap C_n) = 0. \qquad (41)$$
As a result, (40) and (41) combine to give, for all sufficiently large $n$,
$$t_{2i} \leq P_{\theta_i}\big(|X_i| \leq \sqrt{\rho\log(1/\tau)}\big) = \Phi(T_n-\theta_i) - \Phi(-T_n-\theta_i), \qquad (42)$$
where $T_n = \sqrt{\rho\log(1/\tau)}$. Next, we divide our calculations depending on whether $\theta_i$ is positive or not.

Case 1: $\theta_i>0$ for some $i$. Then, using the lower bound on $\theta_i$, the upper bound on $t_{2i}$ obtained in (42) becomes
$$t_{2i} \leq \Phi(T_n-\theta_i) \leq \Phi\Big(\sqrt{\rho\log(1/\tau)} - \sqrt{2\log(n/q_n)} - b\Big),$$
where the choice of $\rho$ is as discussed before. Observe that the decision rule does not depend on how $\eta\in(0,1)$, $\delta\in(0,1)$ and $\rho>\frac{2}{\eta(1-\delta)}$ are chosen. Hence, taking the infimum over all such $\rho$'s, subsequently over all possible choices of $(\eta,\delta)\in(0,1)\times(0,1)$, and finally using the continuity of $\Phi(\cdot)$, we have
$$t_{2i} \leq \Phi\Big(\sqrt{2(1+g(\tau))\log(1/\tau)} - \sqrt{2\log(n/q_n)} - b\Big).$$
(43)

Next, under the assumption $\frac{n\tau}{q_n}\to C\in(0,\infty)$, we obtain
$$\log(1/\tau) = \log(n/q_n) + \log(1/C) + \log(1+o(1)) \qquad (44)$$
and
$$g(\tau)\log(1/\tau) = \big(\log(n/q_n)\big)^{1-\delta_1}(1+o(1)). \qquad (45)$$
Combining all these observations, we have
$$t_{2i} \leq \Phi\big(\sqrt{a_n+d_n}-\sqrt{a_n}-b\big), \qquad (46)$$
where $a_n = 2\log(n/q_n)$ and $d_n = 2\big[\log(1/C)+\log(1+o(1))\big] + 2\big(\log(n/q_n)\big)^{1-\delta_1}(1+o(1))$. Now, note that
$$
\Phi\big(\sqrt{a_n+d_n}-\sqrt{a_n}-b\big) - \Phi(-b) = \frac{1}{\sqrt{2\pi}}\int_{-b}^{\sqrt{a_n+d_n}-\sqrt{a_n}-b}\exp\big(-\tfrac{x^2}{2}\big)\,dx \leq \sqrt{a_n+d_n}-\sqrt{a_n} = \frac{(\sqrt{a_n+d_n}-\sqrt{a_n})(\sqrt{a_n+d_n}+\sqrt{a_n})}{\sqrt{a_n+d_n}+\sqrt{a_n}} = \frac{d_n}{\sqrt{a_n+d_n}+\sqrt{a_n}} \leq \frac{d_n}{2\sqrt{a_n}}. \qquad (47)
$$
Note that for $d_n = \big(\log(n/q_n)\big)^{1-\delta_1}(1+o(1))$ with $\frac12<\delta_1<1$, we have $d_n = o(\sqrt{a_n})$ as $n\to\infty$. As a result, we obtain
$$t_{2i} \leq \Phi\big(\sqrt{a_n+d_n}-\sqrt{a_n}-b\big) = \Phi(-b)+o(1). \qquad (48)$$
Note that this $o(1)$ is independent of $i$.

Case 2: $\theta_i<0$ for some $i$. Then, using the lower bound on $-\theta_i$, the upper bound on $t_{2i}$ obtained in (42) becomes
$$t_{2i} \leq 1-\Phi(-T_n-\theta_i) \leq 1-\Phi\Big(-\sqrt{\rho\log(1/\tau)}+\sqrt{2\log(n/q_n)}+b\Big). \qquad (49)$$
Again using arguments similar to those of (43)-(46), an upper bound on the probability of type II error in this case is
$$t_{2i} \leq 1-\Phi\big(-\sqrt{a_n+d_n}+\sqrt{a_n}+b\big), \qquad (50)$$
where $a_n$ and $d_n$ are the same as before. Next, the same arguments as in (47) establish that
$$\Phi\big(-\sqrt{a_n+d_n}+\sqrt{a_n}+b\big) = \Phi(b)+o(1), \qquad (51)$$
where $o(1)$ is independent of $i$. Now, combining (49)-(51),
$$t_{2i} \leq 1-\Phi(b)+o(1). \qquad (52)$$
Finally, (48) and (52) imply, for each $i=1,2,\ldots,q_n$ with $|\theta_i|\geq\sqrt{2\log(n/q_n)}+b$ for each $\theta_i\neq 0$,
$$t_{2i} \leq 1-\Phi(b)+o(1). \qquad (53)$$
This ensures that the upper bound on the left-hand side of (37) is of the order $1-\Phi(b)+o(1)$, and completes the proof of Theorem 1.

Proof of Theorem 2.
Similar to Theorem 1, here too we need to obtain an upper bound on the left-hand side of (29). Hence, it suffices to prove that
$$\sum_{i:\theta_i=0} t^{EB}_{1i} = o(q_n) \qquad (54)$$
and
$$\frac{1}{q_n}\sum_{i:\theta_i\neq 0}\;\sup_{|\theta_i|\geq\sqrt{2\log(n/q_n)}+b} t^{EB}_{2i} \leq 1-\Phi(b)+o(1), \qquad (55)$$
where $t^{EB}_{1i}$ and $t^{EB}_{2i}$ are the type I and type II errors for
the empirical Bayes approach corresponding to the $i$th hypothesis, respectively. For any $\alpha_n>0$, we have
$$t^{EB}_{1i} = P_{\theta_i=0}\big(E(1-\kappa_i\mid X_i,\hat\tau) > \tfrac12\big) = P_{\theta_i=0}\big(E(1-\kappa_i\mid X_i,\hat\tau) > \tfrac12,\; \hat\tau > 2\alpha_n\big) + P_{\theta_i=0}\big(E(1-\kappa_i\mid X_i,\hat\tau) > \tfrac12,\; \hat\tau \leq 2\alpha_n\big) = I + II, \text{ say.} \qquad (56)$$
Recall that, by definition, for any given $x\in\mathbb{R}$, $E(1-\kappa\mid x,\tau)$ is increasing in $\tau$. Hence, using (38), we have
$$II \leq P_{\theta_i=0}\big(E(1-\kappa_i\mid X_i,2\alpha_n) > \tfrac12\big) \leq K_1 M\,\frac{(2\alpha_n)^{2a}}{\sqrt{\log\big(\frac{1}{4\alpha_n^2}\big)}}\,(1+o(1)). \qquad (57)$$
Next, note that
$$I \leq P_{\theta_i=0}(\hat\tau > 2\alpha_n) \leq P_{\theta_i=0}(\hat\tau_1 > 2\alpha_n) + P_{\theta_i=0}(\hat\tau_2 > 2\alpha_n), \qquad (58)$$
where $\hat\tau_1 = \frac{1}{n}$ and $\hat\tau_2 = \frac{1}{c_2 n}\sum_{i=1}^n \mathbf{1}\big(|X_i|>\sqrt{c_1\log n}\big)$. Now we choose $\alpha_n>0$ such that $\hat\tau_1 \leq 2\alpha_n$ for all sufficiently large $n$. Hence, (58) implies, for all sufficiently large $n$,
$$I \leq P_{\theta_i=0}(\hat\tau_2 > 2\alpha_n) \leq P_{\theta_i=0}(\hat\tau_3 > \alpha_n) + P_{\theta_i=0}(\hat\tau_4 > \alpha_n), \qquad (59)$$
where $\hat\tau_3 = \frac{1}{c_2 n}\sum_{i:\theta_i\neq 0}\mathbf{1}\big(|X_i|>\sqrt{c_1\log n}\big)$ and $\hat\tau_4 = \frac{1}{c_2 n}\sum_{i:\theta_i=0}\mathbf{1}\big(|X_i|>\sqrt{c_1\log n}\big)$. Observe that $\hat\tau_3 \leq \frac{q_n}{c_2 n}$. Hence we choose $\alpha_n$ such that $\alpha_n \geq \frac{q_n}{c_2 n}$ for all sufficiently large $n$. This implies
$$I \leq P_{\theta_i=0}(\hat\tau_4 > \alpha_n) = P_{\theta_i=0}\Big(\frac{1}{n-q_n}\sum_{i:\theta_i=0}\mathbf{1}\big(|X_i|>\sqrt{c_1\log n}\big) > \frac{c_2 n\alpha_n}{n-q_n}\Big) \leq P_{\theta_i=0}\Big(\frac{1}{n-q_n}\sum_{i:\theta_i=0}\mathbf{1}\big(|X_i|>\sqrt{c_1\log n}\big) \geq \alpha_n\Big) = P_{\theta_i=0}\big(U_n - E(U_n) \geq \alpha_n(n-q_n) - E(U_n)\big), \qquad (60)$$
where $U_n = \sum_{i:\theta_i=0}\mathbf{1}\big(|X_i|>\sqrt{c_1\log n}\big)$. Next, we use a generalized version of the Chernoff-Hoeffding bound for independent but not identically distributed random variables, stated in the next lemma.

Lemma 1. Let $Z_1,Z_2,\ldots,Z_m$ be $m$ independent $0$-$1$ random variables with $E(Z_i)=p_i$, $i=1,2,\ldots,m$. Let $Z=\sum_{i=1}^m Z_i$, $\mu = E(Z) = \sum_{i=1}^m p_i$ and $p = \frac{\mu}{m}$. Then
$$P(Z \geq \mu+\lambda) \leq \exp\Big(-m H_p\big(p+\tfrac{\lambda}{m}\big)\Big), \quad \text{for } 0<\lambda<m-\mu,$$
and
$$P(Z \leq \mu-\lambda) \leq \exp\Big(-m H_{1-p}\big(1-p+\tfrac{\lambda}{m}\big)\Big), \quad \text{for } 0<\lambda<\mu,$$
where $H_p(x) = x\log\big(\frac{x}{p}\big) + (1-x)\log\big(\frac{1-x}{1-p}\big)$ is the relative entropy of $x$ w.r.t. $p$.
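Lemma 1 can be sanity-checked numerically. The sketch below simulates independent heterogeneous Bernoulli variables and compares the empirical upper-tail probability with the entropy bound; the probabilities, $m$, and $\lambda$ are arbitrary illustrative choices.

```python
import numpy as np

def H(p, x):
    """Relative entropy of x w.r.t. p (Bernoulli KL divergence)."""
    return x * np.log(x / p) + (1 - x) * np.log((1 - x) / (1 - p))

rng = np.random.default_rng(2)
m = 500
p_i = rng.uniform(0.01, 0.1, size=m)   # heterogeneous success probabilities
mu = p_i.sum()
p = mu / m
lam = 10.0                              # 0 < lam < m - mu

# Monte Carlo estimate of P(Z >= mu + lam) vs. the Chernoff-Hoeffding bound
Z = (rng.random((10_000, m)) < p_i).sum(axis=1)
mc = np.mean(Z >= mu + lam)
bound = np.exp(-m * H(p, p + lam / m))
print(mc <= bound)
```

The bound is loose here but always valid; in the proof it is applied with $\lambda = \alpha_n(n-q_n) - \mu$, where the entropy term reduces to roughly $\alpha_n\log(\alpha_n/p)$.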
Here, note that $Z_i = \mathbf{1}\big(|X_i|>\sqrt{c_1\log n}\big)$, $E(Z_i) = p_i \lesssim n^{-c_1/2}(\log n)^{-1/2}$, $\mu = E(Z) \lesssim n^{1-c_1/2}(\log n)^{-1/2}$, and $p = \frac{\mu}{n-q_n} \lesssim n^{-c_1/2}(\log n)^{-1/2}(1+o(1))$. For $\lambda = \alpha_n(n-q_n)-\mu$, we have $0<\lambda<m-\mu$. Also, we have $p+\frac{\lambda}{n-q_n} = \alpha_n$. Hence,
$$H_p\Big(p+\frac{\lambda}{n-q_n}\Big) = \alpha_n\log\Big(\frac{\alpha_n}{p}\Big) + (1-\alpha_n)\log\Big(\frac{1-\alpha_n}{1-p}\Big). \qquad (61)$$
Also note that, since $c_1\geq 2$, $\frac{p}{\alpha_n}\to 0$ as $n\to\infty$. Recall that $\frac{\log(1/(1-y))}{y}\to 1$ as $y\to 0$. Hence, with $y = \frac{p-\alpha_n}{1-\alpha_n}$, the second term on the right-hand side of (61) is of the form
$$\log\Big(\frac{1-\alpha_n}{1-p}\Big) = \frac{p-\alpha_n}{1-\alpha_n}(1+o(1)), \qquad (62)$$
where $o(1)$ depends only on $n$ and satisfies $\lim_{n\to\infty} o(1)=0$. Hence, using (61) and (62), a lower bound on $H_p(\alpha_n)$ is given by
$$H_p(\alpha_n) = \alpha_n\log\Big(\frac{\alpha_n}{p}\Big) + (1-\alpha_n)\cdot\frac{p-\alpha_n}{1-\alpha_n}(1+o(1)) = \alpha_n\log\Big(\frac{\alpha_n}{p}\Big)(1+o(1)) \gtrsim \alpha_n(1+o(1)). \qquad (63)$$
Using Lemma 1 and (63), together with the assumption $q_n = o(n)$, the upper bound in (60) is obtained as
$$P_{\theta_i=0}(\hat\tau_4 > \alpha_n) \leq e^{-n\alpha_n(1+o(1))} \leq e^{-\frac{q_n}{c_2}(1+o(1))}, \qquad (64)$$
where the inequality in the last line holds due to the requirement $\alpha_n \geq \frac{q_n}{c_2 n}$. By choosing $\alpha_n = \frac{c_2 q_n}{n}$, we immediately see that $\alpha_n\to 0$ and that both requirements on $\alpha_n$ are satisfied. Thus, with the above choice of $\alpha_n$, combining (60) and (64), we obtain
$$I \leq e^{-\frac{q_n}{c_2}(1+o(1))}. \qquad (65)$$
Hence, combining (56), (57) and (65), we have
$$\frac{1}{q_n}\sum_{\theta_i=0} t^{EB}_{1i} \lesssim \frac{n\alpha_n}{q_n}\,\frac{(\alpha_n)^{2a-1}}{\sqrt{\log(1/\alpha_n)}}\,(1+o(1)) + \frac{n}{q_n}\,e^{-\frac{q_n}{c_2}(1+o(1))}. \qquad (66)$$
Note that the choice of $\alpha_n$ satisfies $\frac{n\alpha_n}{q_n}\to C_3$ for some $0<C_3<\infty$. Hence, the first term on the right-hand side of (66) goes to zero as $n\to\infty$. On the other hand, since $q_n$ satisfies $n^{\beta_1}\leq q_n\leq n^{\beta_2}$ for some $0<\beta_1<\beta_2<1$, the second term also tends to zero as $n\to\infty$. This implies (54) holds. Now we move towards establishing (55).
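The plug-in estimator $\hat\tau$ used above can be sketched as follows; $c_1$ and $c_2$ are the tuning constants from the proof, and the max with $1/n$ reflects the restriction $\hat\tau \geq 1/n$. This is a minimal illustration under an assumed sparse normal-means setup, not the authors' simulation code.

```python
import numpy as np

def tau_hat(X, c1=2.0, c2=1.0):
    """Empirical Bayes plug-in for tau used in the proof: the larger of
    1/n and a rescaled count of observations exceeding sqrt(c1 log n).
    (c1 >= 2 as assumed in the proof; c2 is a tuning constant.)"""
    n = len(X)
    count = np.sum(np.abs(X) > np.sqrt(c1 * np.log(n)))
    return max(1.0 / n, count / (c2 * n))

rng = np.random.default_rng(1)
n, qn = 10_000, 50
theta = np.zeros(n)
theta[:qn] = np.sqrt(2 * np.log(n / qn)) + 1   # signals above the boundary
X = theta + rng.standard_normal(n)
print(tau_hat(X))
```

Because pure-noise coordinates rarely exceed $\sqrt{c_1\log n}$, the count is driven mainly by the $q_n$ signals, so $\hat\tau$ tracks the sparsity level $q_n/n$ up to constants, which is exactly what the bounds on $\hat\tau_3$ and $\hat\tau_4$ formalize.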
In this context, note that for any $\gamma\in(0,\frac{1}{c_2})$ and any $\zeta_n>0$,
$$t^{EB}_{2i} = P_{\theta_i\neq 0}\big(E(\kappa_i\mid X_i,\hat\tau) > \tfrac12\big) = P_{\theta_i\neq 0}\big(E(\kappa_i\mid X_i,\hat\tau) > \tfrac12,\; \hat\tau > \gamma\zeta_n\big) + P_{\theta_i\neq 0}\big(E(\kappa_i\mid X_i,\hat\tau) > \tfrac12,\; \hat\tau \leq \gamma\zeta_n\big) = III + IV, \text{ say.} \qquad (67)$$
Recall that, by definition, for any given $x\in\mathbb{R}$, $E(\kappa\mid x,\tau)$ is non-increasing in $\tau$. Hence,
$$III \leq P_{\theta_i\neq 0}\big(E(\kappa_i\mid X_i,\gamma\zeta_n) > \tfrac12\big). \qquad (68)$$
Now assume that $\zeta_n\to 0$ such that $\frac{n\zeta_n}{q_n}\to C$ for some $C\in(0,\infty)$. Hence, replacing $\tau$ by $\gamma\zeta_n$ in (39)-(49) and (68), we have for each $\theta_i\neq 0$, $i=1,2,\ldots,n$,
$$\sup_{|\theta_i|\geq\sqrt{2\log(n/q_n)}+b} III \leq 1-\Phi(b)+o(1), \qquad (69)$$
where the $o(1)$ term is independent of $i$. Hence, our next target is to show that $IV\to 0$ as $n\to\infty$. Note that
$$IV \leq P_{\theta_i\neq 0}(\hat\tau \leq \gamma\zeta_n)$$
$$\leq P_{\theta_i\neq 0}\Big(\frac{1}{c_2 n}\sum_{j=1,\,j\neq i}^{n}\mathbf{1}\big(|X_j|>\sqrt{c_1\log n}\big) \leq \gamma\zeta_n\Big) \leq P\Big(\Big|\frac{1}{n-1}\sum_{j=1,\,j\neq i}^{n}\mathbf{1}\big(|X_j|>\sqrt{c_1\log n}\big) - \nu_n\Big| \geq \nu_n - \frac{c_2\gamma n}{n-1}\zeta_n\Big) \leq \frac{\nu_n(1-\nu_n)}{(n-1)\big(\nu_n - \frac{c_2\gamma n}{n-1}\zeta_n\big)^2}, \qquad (70)$$
where $\nu_n = P\big(|X_i|>\sqrt{c_1\log n}\big)$. Since $\nu_n(1-\nu_n)\leq\frac14$ for any $0<\nu_n<1$, it is enough to show that the denominator on the right-hand side of (70) goes to $\infty$ as $n\to\infty$. In this chain of inequalities, the first holds since, by definition, $\hat\tau \geq \frac{1}{c_2 n}\sum_{j\neq i}\mathbf{1}\big(|X_j|>\sqrt{c_1\log n}\big)$. Next, we use the fact that the distribution of the remaining $X_j$'s does not depend on the distribution of $X_i$. The final one follows from Markov's inequality. Observe that
$$\nu_n = 1-\Phi\big(\sqrt{c_1\log n}-\theta_i\big) + 1-\Phi\big(\sqrt{c_1\log n}+\theta_i\big). \qquad (71)$$
Next, we divide our calculations depending on whether $\theta_i$ is positive or not.

Case 1: $\theta_i>0$ for some $i$. Using the assumption on the growth of $q_n$,
$$\nu_n \geq 1-\Phi\big(\sqrt{c_1\log n}-\theta_i\big) \geq 1-\Phi\big(K_2\sqrt{\log n}-b\big), \qquad (72)$$
where $K_2 = \sqrt{c_1}-\sqrt{2(1-\beta_2)} > 0$. Now we again divide our calculations depending on whether $b$ is positive or negative.

Subcase 1: $b\geq 0$. Then the lower bound on $\nu_n$ becomes
$$\nu_n \geq 1-\Phi\big(K_2\sqrt{\log n}\big) = \frac{1}{K_2\sqrt{\log n}}\exp\Big(-\frac{K_2^2}{2}\log n\Big)(1+o(1)) = \frac{1}{K_2\sqrt{\log n}}\,n^{-K_2^2/2}(1+o(1)). \qquad (73)$$
Hence,
$$(n-1)\Big(\nu_n-\frac{c_2\gamma n}{n-1}\zeta_n\Big)^2 \geq (n-1)\Big(\frac{1}{K_2\sqrt{\log n}}\,n^{-K_2^2/2}(1+o(1)) - C c_2\gamma\, n^{-(1-\beta_2)}\Big)^2. \qquad (74)$$
This implies that (74) goes to $\infty$ if
$$\frac{K_2^2}{2} < 1-\beta_2 \quad \text{and} \quad K_2^2 < 1. \qquad (75)$$
Simplification yields
$$c_1 < 8(1-\beta_2) \quad \text{and} \quad \sqrt{\tfrac{c_1}{2}}-\tfrac{1}{\sqrt{2}} < \sqrt{1-\beta_2} < \sqrt{\tfrac{c_1}{2}}+\tfrac{1}{\sqrt{2}}.$$
Combining these two, we obtain that the denominator of (70) goes to $\infty$ as $n\to\infty$ for any $0<\beta_2<1$. This implies that when $\theta_i>0$ for some $i$ and $\theta_i\geq\sqrt{2\log(n/q_n)}+b$ with $b\geq 0$,
$$IV = o(1), \quad \text{as } n\to\infty. \qquad (76)$$
Subcase 2: $b<0$. In this case, we have $-b < K_2\sqrt{\log n}$. Hence the lower bound on $\nu_n$ is of the form
$$\nu_n \geq 1-\Phi\big(2K_2\sqrt{\log n}\big) = \frac{\exp\big(-2K_2^2\log n\big)}{2K_2\sqrt{\log n}}(1+o(1)).$$
As a result, $(n-1)\big(\nu_n-\frac{c_2\gamma n}{n-1}\zeta_n\big)^2 \to \infty$ as $n\to\infty$ if $2K_2^2 < (1-\beta_2)$ and $4K_2^2 < 1$.
This also holds for any $0<\beta_2<1$. This implies that when $\theta_i>0$ for some $i$ and $\theta_i\geq\sqrt{2\log(n/q_n)}+b$ with $b<0$,
$$IV = o(1), \quad \text{as } n\to\infty. \qquad (77)$$
Combining (76) and (77) implies that when $\theta_i>0$ for some $i$ and $\theta_i\geq\sqrt{2\log(n/q_n)}+b$,
$$IV = o(1), \quad \text{as } n\to\infty. \qquad (78)$$
Hence, when $\theta_i>0$ for some $i$ and $\theta_i\geq\sqrt{2\log(n/q_n)}+b$, combining (67), (68) and (78) ensures that
$$t^{EB}_{2i} \leq 1-\Phi(b)+o(1), \qquad (79)$$
where the $o(1)$ term is independent of $i$.

Case 2: $\theta_i<0$ for some $i$. Using the assumption on the growth of $q_n$,
$$\nu_n \geq 1-\Phi\big(\sqrt{c_1\log n}+\theta_i\big) \geq 1-\Phi\big(K_2\sqrt{\log n}-b\big), \qquad (80)$$
where $K_2$ is the same as before. Note that the lower bound on $\nu_n$ in (80) is exactly the same as that in (72). Hence, applying the same calculations as in (73)-(77), we again obtain that when $\theta_i<0$ for some $i$ and $-\theta_i\geq\sqrt{2\log(n/q_n)}+b$,
$$t^{EB}_{2i} \leq 1-\Phi(b)+o(1), \qquad (81)$$
where the $o(1)$ term is independent of $i$. Finally, (79) and (81) imply that
$$\sup_{|\theta_i|\geq\sqrt{2\log(n/q_n)}+b} t^{EB}_{2i} \leq 1-\Phi(b)+o(1), \qquad (82)$$
with $o(1)$ independent of any $i$. This implies (55) and completes the proof of Theorem 2.

Proof of Theorem 3. Similar to Theorem 2, here too we need to obtain an upper bound on the left-hand side of (30). Hence, it suffices to prove that
$$\sum_{i:\theta_i=0} t^{FB}_{1i} = o(q_n) \qquad (83)$$
and
$$\frac{1}{q_n}\sum_{i:\theta_i\neq 0}\;\sup_{|\theta_i|\geq\sqrt{2\log(n/q_n)}+b} t^{FB}_{2i} \leq 1-\Phi(b)+o(1), \qquad (84)$$
where $t^{FB}_{1i}$ and $t^{FB}_{2i}$ are the type I and type II errors for the full Bayes approach corresponding to the $i$th hypothesis, respectively. Note that
$$E(1-\kappa_i\mid X) = \int_{1/n}^{\alpha_n} E(1-\kappa_i\mid X,\tau)\,\pi(\tau\mid X)\,d\tau = \int_{1/n}^{\alpha_n} E(1-\kappa_i\mid X_i,\tau)\,\pi(\tau\mid X)\,d\tau \leq E(1-\kappa_i\mid X_i,\alpha_n). \qquad (85)$$
To prove the inequality above, we first use the fact that for any given $x_i\in\mathbb{R}$, $E(1-\kappa_i\mid x_i,\tau)$ is non-decreasing in $\tau$, and next we use (C4). Now, using Theorem 4 of Ghosh et al. (2016), we
have, for $a=\frac12$, as $n\to\infty$,
$$E(1-\kappa_i\mid X_i,\alpha_n) \leq K_1\, e^{X_i^2/2}\,\alpha_n\,(1+o(1)). \qquad (86)$$
Here $o(1)$ depends only on $n$ with $\lim_{n\to\infty} o(1)=0$ and is independent of $i$, and $K_1$ is a constant depending on $M$. Hence, we have the following:
$$t^{FB}_{1i} = P_{H_{0i}}\big(E(1-\kappa_i\mid X) > \tfrac12\big) \leq P_{H_{0i}}\Big(\frac{X_i^2}{2} > \log\big(\tfrac{1}{\alpha_n}\big) - \log K_1 - \log(1+o(1))\Big) = P_{H_{0i}}\Big(|X_i| > \sqrt{2\log\big(\tfrac{1}{\alpha_n}\big)}\Big)(1+o(1)).$$
Note that, as $\alpha_n<1$ for all $n\geq 1$, $2\log\big(\frac{1}{\alpha_n}\big)>0$ for all $n\geq 1$. Next, using the fact that under $H_{0i}$, $X_i \overset{iid}{\sim} N(0,1)$, together with $1-\Phi(t) < \frac{\phi(t)}{t}$ for $t>0$, we get
$$t^{FB}_{1i} \leq \frac{2\phi\big(\sqrt{2\log(1/\alpha_n)}\big)}{\sqrt{2\log(1/\alpha_n)}}(1+o(1)) = \frac{\alpha_n}{\sqrt{\pi\log(1/\alpha_n)}}(1+o(1)), \qquad (87)$$
where $o(1)$ depends only on $n$ with $\lim_{n\to\infty} o(1)=0$. This implies, for sufficiently large $n$,
$$\frac{1}{q_n}\sum_{i:\theta_i=0} t^{FB}_{1i} \lesssim \frac{n\alpha_n}{q_n\sqrt{\log(1/\alpha_n)}}(1+o(1)) = \frac{(\log n)^{\delta_2}}{(\log n)^{\delta_3}\sqrt{\log n - \delta_2\log\log n}}(1+o(1)) = (\log n)^{\delta_2-\frac12-\delta_3}(1+o(1)) \to 0, \quad \text{as } n\to\infty, \qquad (88)$$
since $\delta_2<\frac12$ and $\delta_3>0$. This implies (83) holds. Next, in order to establish (84), it is enough to show that, for each $i=1,\ldots,q_n$, $\sup_{|\theta_i|\geq\sqrt{2\log(n/q_n)}+b} t^{FB}_{2i} \leq 1-\Phi(b)+o(1)$, where $o(1)$ is independent of $i$. In order to provide an upper bound on the probability of type II error induced by the decision rule (26), $t^{FB}_{2i} = P_{H_{1i}}\big(E(\kappa_i\mid X) > \tfrac12\big)$, we first note that
$$E(\kappa_i\mid X) = \int_{1/n}^{\alpha_n} E(\kappa_i\mid X,\tau)\,\pi(\tau\mid X)\,d\tau = \int_{1/n}^{\alpha_n} E(\kappa_i\mid X_i,\tau)\,\pi(\tau\mid X)\,d\tau \leq E\big(\kappa_i\mid X_i,\tfrac{1}{n}\big), \qquad (89)$$
where the inequality in the last line follows from the fact that for any given $x_i\in\mathbb{R}$, $E(\kappa_i\mid x_i,\tau)$ is non-increasing in $\tau$. Hence, using arguments similar to (39)-(43) with $\tau=\frac{1}{n}$, when $\theta_i>0$ for $i=1,\ldots,q_n$, we have
$$t^{FB}_{2i} \leq \Phi\Big(\sqrt{2\big(1+(\log n)^{-\delta_1}\big)\log n} - \sqrt{2\log\big(\tfrac{n}{q_n}\big)} - b\Big).$$
(90)

Hence, in order to obtain the desired upper bound on $t^{FB}_{2i}$, it is sufficient to show that $\sqrt{(1+(\log n)^{-\delta_1})\log n} - \sqrt{\log(n/q_n)} \to 0$ as $n\to\infty$. Note that, for $q_n\propto(\log n)^{\delta_3}$,
$$\sqrt{\big(1+(\log n)^{-\delta_1}\big)\log n} - \sqrt{\log\big(\tfrac{n}{q_n}\big)} = \sqrt{\big(1+(\log n)^{-\delta_1}\big)\log n} - \sqrt{\log n - \delta_3\log\log n} > 0$$
for all $n$. Hence, it is enough that the upper bound converges to zero. Observe that
$$\sqrt{\big(1+(\log n)^{-\delta_1}\big)\log n} - \sqrt{\log\big(\tfrac{n}{q_n}\big)} = \frac{(\log n)^{1-\delta_1} + \delta_3\log\log n}{\sqrt{(1+(\log n)^{-\delta_1})\log n} + \sqrt{\log n - \delta_3\log\log n}} \leq \Big((\log n)^{\frac12-\delta_1} + \frac{\delta_3\log\log n}{\sqrt{\log n}}\Big)(1+o(1)) \to 0, \quad \text{as } n\to\infty, \qquad (91)$$
since $\frac12<\delta_1<1$. Combining (90) and (91) establishes the desired upper bound on $t^{FB}_{2i}$ for $\theta_i>0$. The proof for $\theta_i<0$ follows from similar arguments. This completes the proof of Theorem 3.

Proof of Theorem 4. The lower bound on the left-hand side of (31) follows from Theorem 1 of Abraham et al. (2024). Hence, it is enough to establish that the upper bound on the left-hand side of (31) is of the order $1-\Phi(b)+o(1)$. Using the definition of $FDR(\theta,\psi)$, we have for any $\lambda\in(0,1)$,
$$FDR(\theta,\psi) = E_\theta\Big[\frac{\sum_{i:\theta_i=0}\psi_i}{\max\big(\sum_{i=1}^n\psi_i,1\big)}\Big] = E_\theta\Big[\frac{\sum_{i:\theta_i=0}\psi_i}{\max\big(\sum_{i=1}^n\psi_i,1\big)}\mathbf{1}\Big(\sum_{i=1}^n\psi_i > \lambda q_n\Big)\Big] + E_\theta\Big[\frac{\sum_{i:\theta_i=0}\psi_i}{\max\big(\sum_{i=1}^n\psi_i,1\big)}\mathbf{1}\Big(\sum_{i=1}^n\psi_i \leq \lambda q_n\Big)\Big] = V + VI, \text{ say.} \qquad (92)$$
Observe that on the event $\{\sum_{i=1}^n\psi_i > \lambda q_n\}$, $\max\big(\sum_{i=1}^n\psi_i,1\big) > \lambda q_n$. Hence $V$ can be bounded as
$$V \leq \frac{\sum_{i:\theta_i=0} t_{1i}}{\lambda q_n} = \frac{(n-q_n)t_1}{\lambda q_n} \leq \frac{M}{\lambda}\,\frac{n\tau}{q_n}\,\frac{(\tau^2)^{a-\frac12}}{\sqrt{\log(1/\tau^2)}}\,(1+o(1)) = o(1), \quad \text{as } n\to\infty. \qquad (93)$$
The equality in the second line holds since, by definition, the type I error $t_{1i}$ for the $i$th hypothesis is the same for all $i$, and is denoted $t_1$. The second inequality follows from (38) and the assumption on $L(\cdot)$.
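The boundary value $1-\Phi(b)$ appearing in these bounds can be illustrated numerically. The minimal Monte Carlo below uses a simplified hard-thresholding rule in place of the paper's shrinkage-based rule $\psi$ (an illustrative stand-in, not the exact procedure), places signals exactly at $\sqrt{2\log(n/q_n)}+b$, and recovers a miss rate near $1-\Phi(b)$.

```python
import numpy as np
from math import erf, sqrt

Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))

rng = np.random.default_rng(3)
n, qn, b = 50_000, 200, 1.0
thr = np.sqrt(2 * np.log(n / qn))
theta = np.zeros(n)
theta[:qn] = thr + b                        # signals exactly on the boundary

fnp = []
for _ in range(50):
    X = theta + rng.standard_normal(n)
    psi = np.abs(X) > thr                   # simplified thresholding rule
    fnp.append(np.mean(~psi[:qn]))          # fraction of missed signals
print(round(float(np.mean(fnp)), 3))        # close to 1 - Phi(b)
```

Each boundary signal is missed with probability approximately $\Phi(\mathrm{thr}-(\mathrm{thr}+b)) = \Phi(-b) = 1-\Phi(b)$, which is the sharp constant in (37), (55) and (84).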
Now, note that
$$\Big\{\sum_{i=1}^n\psi_i \leq \lambda q_n\Big\} \subseteq \Big\{\sum_{i:\theta_i\neq 0}\psi_i \leq \lambda q_n\Big\}. \qquad (94)$$
This implies
$$VI \leq P_\theta\Big(\sum_{i:\theta_i\neq 0}\psi_i \leq \lambda q_n\Big). \qquad (95)$$
Since the $\psi_i$'s are independent Bernoulli random variables, we use Hoeffding's inequality. Choose $0<\lambda<\Phi(b)$. As a result, for sufficiently large $n$, the quantity $t$ appearing in Hoeffding's inequality is
$$t = q_n - \sum_{i:\theta_i\neq 0} t_{2i} - \lambda q_n = (1-\lambda)q_n - \sum_{i:\theta_i\neq 0} t_{2i} \geq [\Phi(b)-\lambda+o(1)]q_n. \qquad (96)$$
Note that,
the choice of $\lambda$ implies $t>0$ for sufficiently large $n$. To obtain (96), we use the uniform upper bound on $t_{2i}$, $i:\theta_i\neq 0$, derived in (53). Using (96), we obtain, for all $|\theta_i|\geq\sqrt{2\log(n/q_n)}+b$, $i=1,2,\ldots,q_n$,
$$P_\theta\Big(\sum_{i:\theta_i\neq 0}\psi_i \leq \lambda q_n\Big) \leq \exp\Big(-\frac{2t^2}{q_n}\Big) \leq \exp\big(-2[\Phi(b)-\lambda+o(1)]^2 q_n\big). \qquad (97)$$
Next, combining (95)-(97), for sufficiently large $n$,
$$\sup_{\theta\in\Theta_b} VI = o(1), \quad \text{as } n\to\infty. \qquad (98)$$
Now, (92), (93) and (98) combine to prove that
$$\sup_{\theta\in\Theta_b} FDR(\theta,\psi) = o(1), \quad \text{as } n\to\infty. \qquad (99)$$
On the other hand, for $FNR(\theta,\psi)$, from the definition we obtain
$$FNR(\theta,\psi) = \frac{E_\theta\big(\sum_{i:\theta_i\neq 0}(1-\psi_i)\big)}{q_n} = \frac{1}{q_n}\sum_{i:\theta_i\neq 0} t_{2i}. \qquad (100)$$
Recall that in (53) we have already derived that, for each $|\theta_i|\geq\sqrt{2\log(n/q_n)}+b$, $i=1,2,\ldots,q_n$, $t_{2i}\leq 1-\Phi(b)+o(1)$, where the $o(1)$ term is independent of any $i$. Combined with (100), this implies, for all sufficiently large $n$,
$$\sup_{\theta\in\Theta_b} FNR(\theta,\psi) \leq 1-\Phi(b)+o(1). \qquad (101)$$
Finally, (99) and (101) imply that the upper bound on the left-hand side of (31) is of the order $1-\Phi(b)+o(1)$, which completes the proof.

Proof of Theorem 5. Here we follow the basic architecture of the proof of Theorem 4. Hence, we only need to show that the upper bound on the left-hand side of (32) is of the order $1-\Phi(b)+o(1)$. Using the argument of (92), we have
$$V \leq \frac{\sum_{i:\theta_i=0} t^{EB}_{1i}}{\lambda q_n} \lesssim \frac{n\alpha_n}{q_n}\,\frac{(\alpha_n)^{2a-1}}{\sqrt{\log(1/\alpha_n)}}\,(1+o(1)) + \frac{n}{q_n}\,e^{-\frac{q_n}{c_2}(1+o(1))}. \qquad (102)$$
Here the inequality in the second line holds due to (66). Note that the choice of $\alpha_n$ satisfies $\frac{n\alpha_n}{q_n}\to C_3$ for some $0<C_3<\infty$. Hence, the first term on the right-hand side of (102) goes to zero as $n\to\infty$. On the other hand, since $q_n$ satisfies $n^{\beta_1}\leq q_n\leq n^{\beta_2}$ for some $0<\beta_1<\beta_2<1$, the second term also tends to zero as $n\to\infty$. This implies
$$V = o(1), \quad \text{as } n\to\infty.$$
(103)

Again employing (94) and (95), we obtain, for sufficiently large $n$,
$$t = q_n - \sum_{i:\theta_i\neq 0} t^{EB}_{2i} - \lambda q_n \geq [\Phi(b)-\lambda+o(1)]q_n, \qquad (104)$$
where the upper bound on $t^{EB}_{2i}$ obtained in (82) is used. This lower bound with Hoeffding's inequality confirms, for sufficiently large $n$,
$$\sup_{\theta\in\Theta_b} VI = o(1), \quad \text{as } n\to\infty. \qquad (105)$$
Combining (103) and (105) ensures
$$\sup_{\theta\in\Theta_b} FDR(\theta,\psi) = o(1), \quad \text{as } n\to\infty. \qquad (106)$$
For $FNR(\theta,\psi)$, from the definition we obtain
$$FNR(\theta,\psi) = \frac{E_\theta\big(\sum_{i:\theta_i\neq 0}(1-\psi_i)\big)}{q_n} = \frac{1}{q_n}\sum_{i:\theta_i\neq 0} t^{EB}_{2i}. \qquad (107)$$
Recall that in (82) we have already derived that, for each $|\theta_i|\geq\sqrt{2\log(n/q_n)}+b$, $i=1,2,\ldots,q_n$, $t^{EB}_{2i}\leq 1-\Phi(b)+o(1)$, where the $o(1)$ term is independent of any $i$. Combined with (107), this implies, for all sufficiently large $n$,
$$\sup_{\theta\in\Theta_b} FNR(\theta,\psi) \leq 1-\Phi(b)+o(1). \qquad (108)$$
Finally, (106) and (108) imply that the upper bound on the left-hand side of (32) is of the order $1-\Phi(b)+o(1)$, which completes the proof.

Proof of Theorem 6. Here also we follow the basic architecture of the proof of Theorem 4. Hence, in this case too, it is sufficient to show that the upper bound on the left-hand side of (33) is of the order $1-\Phi(b)+o(1)$. Note that, using the arguments of (92) and (88), we have
$$V \leq \frac{\sum_{i:\theta_i=0} t^{FB}_{1i}}{\lambda q_n} \lesssim \frac{n\alpha_n}{q_n\sqrt{\log(1/\alpha_n)}}(1+o(1)) = o(1), \quad \text{as } n\to\infty. \qquad (109)$$
Again employing the arguments of (94) and (95), we obtain, for sufficiently large $n$,
$$t = q_n - \sum_{i:\theta_i\neq 0} t^{FB}_{2i} - \lambda q_n \geq [\Phi(b)-\lambda+o(1)]q_n, \qquad (110)$$
where the upper bound on $t^{FB}_{2i}$ obtained in Theorem 3 is used. This lower bound with Hoeffding's inequality confirms, for sufficiently large $n$,
$$\sup_{\theta\in\Theta_b} VI = o(1), \quad \text{as } n\to\infty. \qquad (111)$$
Combining (109) and (111) ensures
$$\sup_{\theta\in\Theta_b} FDR(\theta,\psi) = o(1), \quad \text{as } n\to\infty. \qquad (112)$$
For $FNR(\theta,\psi)$, from the definition we obtain
$$FNR(\theta,\psi) = \frac{E_\theta\big(\sum_{i:\theta_i\neq 0}(1-\psi_i)\big)}{q_n} = \frac{1}{q_n}\sum_{i:\theta_i\neq 0} t^{FB}_{2i}. \qquad (113)$$
Recall that in Theorem 3 we have already derived that, for each $|\theta_i|\geq\sqrt{2\log(n/q_n)}+b$, $i=1,2,\ldots,q_n$, $t^{FB}_{2i}\leq 1-\Phi(b)+o(1)$, where the $o(1)$ term is independent of any $i$. Combined with (113), this implies, for all sufficiently large $n$,
$$\sup_{\theta\in\Theta_b} FNR(\theta,\psi) \leq 1-\Phi(b)+o(1). \qquad (114)$$
Finally, (112) and (114) imply that the upper bound on the left-hand side of (33) is of the order $1-\Phi(b)+o(1)$, which completes the proof.

References

Abraham, K., Castillo, I. and Roquain, É. (2024). Sharp multiple testing boundary for sparse sequences. The Annals of Statistics, 52(4):1564–1591.
Arias-Castro, E. and Chen, S. (2017). Distribution-free multiple testing. Electronic Journal of Statistics.
Armagan, A., Clyde, M. and Dunson, D. (2011). Generalized beta mixtures of Gaussians. Advances in Neural Information Processing Systems, 24:523–531.
Armagan, A., Dunson, D. and Lee, J. (2013a). Generalized double Pareto shrinkage. Statistica Sinica, 23(1):119.
Baraud, Y. (2002). Non-asymptotic minimax rates of testing in signal detection.
Belitser, E. and Nurushev, N. (2020). Needles and straw in a haystack: Robust confidence for possibly sparse sequences.
Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289–300.
Benjamini, Y. and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 1165–1188.
Bhadra, A., Datta, J., Polson, N. and Willard, B. (2017). The horseshoe+ estimator of ultra-sparse signals. Bayesian Analysis, 12(4):1105–1131.
Bhattacharya, A., Pati, D., Pillai, N. S. and Dunson, D. (2015). Dirichlet–Laplace priors for optimal shrinkage. Journal of the American Statistical Association, 110(512):1479–1490.
Butucea, C., Ndaoud, M., Stepanova, N. A. and Tsybakov, A. B. (2018). Variable selection with Hamming loss. Annals of Statistics, 46(5):1837–1875.
Butucea, C., Mammen, E., Ndaoud, M. and Tsybakov, A. B. (2023). Variable selection, monotone likelihood ratio and group sparsity. Annals of Statistics, 51(1):312–333.
Carvalho, C. M., Polson, N. G. and Scott, J. G. (2009). Handling sparsity via the horseshoe. In Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 73–80.
Carvalho, C. M., Polson, N. G. and Scott, J. G. (2010). The horseshoe estimator for sparse signals. Biometrika, 97(2):465–480.
Chen, J. and Sarkar, S. K. (2004). Multiple testing of response rates with a control: A Bayesian stepwise approach. Journal of Statistical Planning and Inference, 125(1-2):3–16.
Datta, J. and Ghosh, J. K. (2013). Asymptotic properties of Bayes risk for the horseshoe prior. Bayesian Analysis, 8(1):111–132.
Efron, B. (2004). Large-scale simultaneous hypothesis testing: the choice of a null hypothesis. Journal of the American Statistical Association, 99(465):96–104.
Fromont, M., Lerasle, M. and Reynaud-Bouret, P. (2016). Family-wise separation rates for multiple testing.
Ghosh, P. and Chakrabarti, A. (2017). Asymptotic optimality of one-group shrinkage priors in sparse high-dimensional problems. Bayesian Analysis, 12(4):1133–1161.
Ghosh, P., Tang, X., Ghosh, M.
and Chakrabarti, A. (2016). Asymptotic properties of Bayes risk of a general class of shrinkage priors in multiple hypothesis testing under sparsity. Bayesian Analysis, 11(3):753–796.
Giné, E. and Nickl, R. (2021). Mathematical Foundations of Infinite-Dimensional Statistical Models. Cambridge University Press.
Griffin, J. E. and Brown, P. J. (2005). Alternative prior distributions for variable selection with very many more variables than observations. Technical report, University of Warwick.
Ingster, Y. and Suslina, I. A. (2012). Nonparametric Goodness-of-Fit Testing Under Gaussian Models. Springer Science & Business Media, volume 169.
Ingster, Y., Tsybakov, A. B. and Verzelen, N. (2010). Detection boundary in sparse regression.
Johnstone, I. M. and Silverman, B. W. (2004). Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences. The Annals of Statistics, 32(4):1594–1649.
Mitchell, T. J. and Beauchamp, J. J. (1988). Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032.
Nickl, R. and van de Geer, S. (2013). Confidence sets in sparse regression. Annals of Statistics, 2852–2876.
Park, T. and Casella, G. (2008). The Bayesian lasso. Journal of the American Statistical Association, 103(482):681–686.
Paul, S. and Chakrabarti, A. (2023). Posterior contraction rate and asymptotic Bayes optimality for one-group global-local shrinkage priors in sparse normal means problem. arXiv preprint arXiv:2211.02472v2.
Polson, N. G. and Scott, J. G. (2010). Shrink globally, act locally: Sparse Bayesian regularization and prediction. Bayesian Statistics, 9(501-538):105.
Polson, N. G. and Scott, J. G. (2012). Local shrinkage rules, Lévy processes and regularized regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(2):287–311.
Rabinovich, M., Ramdas, A., Jordan, M. I. and Wainwright, M. J. (2020).
Optimal rates and trade-offs in multiple testing. Statistica Sinica, 30(2):741–762.
Salomond, J. B. (2017). Risk quantification for the thresholding rule for multiple testing using Gaussian scale mixtures. arXiv preprint arXiv:1711.08705.
Sarkar, S. K. (2002). Some results on false discovery rate in stepwise multiple testing procedures. Annals of Statistics, 30(1):239–257.
Sarkar, S. K. (2007). Stepup procedures controlling generalized FWER and generalized FDR.
Sarkar, S. K. (2008). On methods controlling the false discovery rate. Sankhyā: The Indian Journal of Statistics, 135–168.
Sarkar, S. K. (2008). Procedures controlling the k-FDR using bivariate distributions of the null p-values. Statistica Sinica, 1227–1238.
Storey, J. D. (2007). The optimal discovery procedure: a new approach to simultaneous significance testing. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(3):347–368.
Tipping, M. E. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1(Jun):211–244.
van der Pas, S. L., Kleijn, B. J. and van der Vaart, A. W. (2014). The horseshoe estimator: Posterior concentration around nearly black vectors. Electronic Journal of Statistics, 8(2):2585–2618.
van der Pas, S. L., Salomond, J. B. and Schmidt-Hieber, J. (2016). Conditions for posterior contraction in the sparse normal means problem. Electronic Journal of Statistics, 10(1):976–1000.
van der Pas, S. L., Szabó, B. and van der Vaart, A. W. (2017). Adaptive posterior contraction rates for the horseshoe. Electronic Journal of Statistics.
https://arxiv.org/abs/2505.16428v1
arXiv:2505.16651v1 [math.OC] 22 May 2025

Risk-averse formulations of Stochastic Optimal Control and Markov Decision Processes

Alexander Shapiro∗, Yan Li†

May 23, 2025

Abstract

The aim of this paper is to investigate risk-averse and distributionally robust modeling of Stochastic Optimal Control (SOC) and Markov Decision Processes (MDP). We discuss the construction of conditional nested risk functionals, with particular attention given to the Value-at-Risk measure. Necessary and sufficient conditions for the existence of non-randomized optimal policies in the framework of robust SOC and MDP are derived. We also investigate the sample complexity of optimization problems involving the Value-at-Risk measure.

Keywords: Stochastic Optimal Control, Markov Decision Process, risk measures, distributional robustness, Value-at-Risk, sample complexity, rectangularity, Bellman equation

∗Georgia Institute of Technology, Atlanta, Georgia 30332, USA, ashapiro@isye.gatech.edu. Research of this author was partially supported by the Air Force Office of Scientific Research (AFOSR) under Grant FA9550-22-1-0244.
†Texas A&M University, College Station, Texas 77840, USA, gzliyan113@tamu.edu.

1 Introduction

The aim of this paper is to investigate risk-averse and distributionally robust approaches to optimization problems where decisions are made sequentially under conditions of uncertainty. Specifically, we discuss the Stochastic Optimal Control (SOC) and Markov Decision Process (MDP) formulations of such optimization problems. For static (single-stage) stochastic programs, the risk-averse and distributionally robust counterparts of the respective risk-neutral problems are by now well studied. The distributionally robust approach to stochastic programming goes back to the pioneering paper by Scarf [16]. For a recent survey we refer to [8], and to [4], where special attention is given to the construction of ambiguity sets based on the Wasserstein metric.
The modern theory of risk measures started with Artzner et al. [1], where the axioms of the so-called coherent risk measures were formulated. For a discussion of the corresponding risk-averse optimization problems we refer to [15]. The relation between the distributionally robust and risk-averse approaches to stochastic optimization is based on the dual representation of coherent risk measures (e.g., [19, Section 6.3]). An extension of the risk-averse and distributionally robust approaches to sequential optimization is not straightforward and remains somewhat controversial. An approach to risk-averse multistage stochastic programming based on nested compositions of coherent risk measures was suggested in [14] (see also [19, Section 6.5.4]). That approach involves the construction of conditional counterparts of law invariant coherent risk measures. It can be readily extended to SOC problems where the underlying random process does not depend on states and controls. An extension of the risk-averse approach to MDPs is more involved, since there the evolution of the system is determined by conditional probabilities (transition kernels) rather than by an explicitly defined random process. It was suggested by Ruszczyński [13] to construct nested risk measures directly on the histories of the state process, equipped with the probability law driven by (non-randomized) policies of the considered MDP. The approach in [13] focuses on coherent risk measures and is based on their dual representation. A different approach to the distributionally robust formulation of MDPs was suggested in [6] and [10], and has been used since then in numerous publications. The approach
in [6] and [10] is in a sense static, since the ambiguity sets of transition kernels are defined before realizations of the decision process. As a consequence, in order to derive the corresponding dynamic equations one needs to introduce so-called rectangularity conditions on the ambiguity sets of transition kernels. The distributionally robust and risk-averse approaches to MDPs can be unified by defining the process as a dynamic game between the decision maker (the controller) and the adversary (nature) (cf. [9]). The main contributions of this manuscript can be summarized as follows.

• We discuss the construction of conditional and nested risk functionals without assuming convexity. The construction is general and is not based on the dual (distributionally robust) formulation. Particular attention is given to the Value-at-Risk measure, which is not convex.
• We discuss the sample complexity of optimization problems involving the Value-at-Risk measure.
• We give necessary and sufficient conditions for the existence of non-randomized optimal policies.
• We discuss both the SOC and MDP formulations in finite and infinite horizon settings. In particular, in the case of MDP our approach is more general and simpler than the one of [13].

We use the following notation and terminology. For $a\in\mathbb{R}$, we denote $[a]_+:=\max\{0,a\}$. For $a,b\in\mathbb{R}$ we denote $a\vee b:=\max\{a,b\}$. A set $\Omega$ equipped with a sigma algebra $\mathcal{F}$ is called a measurable space. The measurable space $(\Omega,\mathcal{F})$ is said to be Polish if $\Omega$ is a separable complete metric space and $\mathcal{F}$ is its Borel sigma algebra. In particular, any closed subset of $\mathbb{R}^n$ equipped with its Borel sigma algebra is a Polish space. By $\mathfrak{P}$ we denote the set of probability measures on $(\Omega,\mathcal{F})$. For $P\in\mathfrak{P}$, $(\Omega,\mathcal{F},P)$ is called a probability space. It is said that $P\in\mathfrak{P}$ is nonatomic if for any $A\in\mathcal{F}$ such that $P(A)>0$ there exists $A'\in\mathcal{F}$ such that $A'\subset A$ and $P(A)>P(A')>0$.
For a set $A\subset\Omega$ denote by $\mathbf{1}_A$ its indicator function, i.e., $\mathbf{1}_A(\omega)=1$ for $\omega\in A$, and $\mathbf{1}_A(\omega)=0$ for $\omega\in\Omega\setminus A$. By $\mathrm{supp}(P)$ we denote the support of a probability measure $P$ defined on a metric space $\Omega$ equipped with its Borel sigma algebra, i.e., $A=\mathrm{supp}(P)$ is the smallest closed subset of $\Omega$ such that $P(A)=1$. For $P\in\mathfrak{P}$ and a random variable $Z:\Omega\to\mathbb{R}$ we denote by $\mathbb{E}_P[Z]=\int_\Omega Z\,dP$ its expected value, and by
$$F^P_Z(z):=P(Z\le z),\quad z\in\mathbb{R},$$
its cumulative distribution function (cdf). By $\mathfrak{F}$ we denote the set of right-side continuous, monotonically nondecreasing functions $\phi:\mathbb{R}\to\mathbb{R}$ such that $\lim_{z\to-\infty}\phi(z)=0$ and $\lim_{z\to+\infty}\phi(z)=1$, i.e., $\mathfrak{F}$ is the space of cdfs. By $L_p(\Omega,\mathcal{F},P)$ we denote the space of random variables $Z$ such that $\int_\Omega|Z|^p\,dP<\infty$, $p\in[1,\infty)$. By $\delta_\xi$ we denote the measure of mass one at a point $\xi$ (Dirac measure). For $Q,P\in\mathfrak{P}$ it is said that $Q$ is absolutely continuous with respect to $P$, denoted $Q\ll P$, if for $A\in\mathcal{F}$ such that $P(A)=0$ it follows that $Q(A)=0$. For a sigma subalgebra $\mathcal{G}$ of $\mathcal{F}$ we denote by $\mathbb{E}^P_{|\mathcal{G}}[Z]$ the respective conditional expectation. That is, $\mathbb{E}^P_{|\mathcal{G}}[Z]$ is $\mathcal{G}$-measurable and $P$-integrable, and^1
$$\int_A \mathbb{E}^P_{|\mathcal{G}}[Z]\,dP=\int_A Z\,dP,\quad \forall A\in\mathcal{G}.$$
Note that $\mathbb{E}^P_{|\mathcal{G}}[Z]$ consists of a family
of $\mathcal{G}$-measurable random variables (called versions) which are equal to each other $P$-almost surely. By $P_{|\mathcal{G}}(A)=\mathbb{E}_{|\mathcal{G}}[\mathbf{1}_A]$, $A\in\mathcal{F}$, is denoted the respective conditional probability. For a random variable $Y$, we denote by $\mathbb{E}_{|Y}[Z]$ the conditional expectation and by $P_{|Y}(A)=\mathbb{E}_{|Y}[\mathbf{1}_A]$, $A\in\mathcal{F}$, the respective conditional probability.

2 Risk functionals

Let $(\Omega,\mathcal{F})$ be a measurable space. Assume that with every probability measure $P\in\mathfrak{P}$ is associated a linear space $\mathcal{Z}$ of measurable functions (random variables) $Z:\Omega\to\mathbb{R}$ and a functional $\mathcal{R}:\mathcal{Z}\to\mathbb{R}$. Consider the following axioms (conditions) that $\mathcal{R}$ may satisfy.

(A1) (monotonicity) If $Z,Z'\in\mathcal{Z}$ and $Z\ge Z'$, $P$-a.s., then $\mathcal{R}(Z)\ge\mathcal{R}(Z')$.
(A2) (convexity) If $Z,Z'\in\mathcal{Z}$ and $\tau\in[0,1]$, then $\mathcal{R}(\tau Z+(1-\tau)Z')\le\tau\mathcal{R}(Z)+(1-\tau)\mathcal{R}(Z')$.
(A3) (translation equivariance) For $Z\in\mathcal{Z}$ and $\tau\in\mathbb{R}$, it follows that $\mathcal{R}(Z+\tau)=\mathcal{R}(Z)+\tau$.
(A4) (positive homogeneity) If $Z\in\mathcal{Z}$ and $\tau\ge 0$, then $\mathcal{R}(\tau Z)=\tau\mathcal{R}(Z)$.

In order to emphasize that the functional $\mathcal{R}$ depends on $P\in\mathfrak{P}$, we write it as $\mathcal{R}^P$. The corresponding linear space $\mathcal{Z}$ may also depend on $P$, but we suppress this in the notation. In any case we assume that if $Z\in\mathcal{Z}$ and $\tau\in\mathbb{R}$, then $Z+\tau\in\mathcal{Z}$. Note that it follows from axiom (A1) that if $Z=Z'$, $P$-a.s., then $\mathcal{R}^P(Z)=\mathcal{R}^P(Z')$. It follows from axiom (A4) that $\mathcal{R}^P(0)=0$.

^1 This is the classical definition of conditional expectation due to Kolmogorov.

Functionals satisfying axioms (A1)–(A4) are called coherent risk measures [1]. An important example of a coherent risk measure is the Average Value-at-Risk (also called Conditional Value-at-Risk, expected shortfall, expected tail loss). The following representation of the Average Value-at-Risk is due to [12]:
$$\mathsf{AV@R}^P_\alpha(Z):=\inf_{\tau\in\mathbb{R}}\big\{\tau+\alpha^{-1}\mathbb{E}_P[Z-\tau]_+\big\},\quad \alpha\in(0,1]. \tag{2.1}$$
In that example the respective space is $\mathcal{Z}=L_1(\Omega,\mathcal{F},P)$. An important example of a convex functional satisfying axioms (A1)–(A3) is the entropic risk measure
$$\mathcal{R}^P_\tau(Z):=\tau^{-1}\log\mathbb{E}_P[e^{\tau Z}],\quad \tau>0.$$
Here the respective space $\mathcal{Z}$ consists of random variables $Z$ such that $\mathbb{E}_P[e^{\tau Z}]<\infty$ for all $\tau\in\mathbb{R}$.
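On a finite sample (uniform empirical measure), the Average Value-at-Risk representation (2.1) and the entropic risk measure can be evaluated directly. A minimal sketch, assuming a made-up sample and using the fact that the infimum in (2.1) over $\tau$ is attained at a sample point (the objective is piecewise linear and convex with kinks at the sample values):

```python
import numpy as np

def avar(z, alpha):
    """AV@R_alpha via (2.1); scanning tau over the sample points suffices,
    since the convex piecewise-linear objective attains its minimum at a kink."""
    z = np.asarray(z, dtype=float)
    return min(tau + np.mean(np.maximum(z - tau, 0.0)) / alpha for tau in z)

def entropic(z, tau):
    """Entropic risk measure tau^{-1} log E[exp(tau Z)] on the empirical measure."""
    z = np.asarray(z, dtype=float)
    return np.log(np.mean(np.exp(tau * z))) / tau

z = np.arange(1.0, 11.0)    # illustrative sample 1, 2, ..., 10
print(avar(z, 0.2))         # mean of the worst 20% of outcomes: (9 + 10)/2 = 9.5
# translation equivariance (A3) holds for both functionals:
print(round(avar(z + 5.0, 0.2) - avar(z, 0.2), 6))          # 5.0
print(round(entropic(z + 5.0, 0.5) - entropic(z, 0.5), 6))  # 5.0
```

Both functionals shift by exactly the added constant, illustrating axiom (A3); the entropic measure is not positively homogeneous, so it satisfies (A1)–(A3) but not (A4).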
An important example of a non-convex risk measure is the Value-at-Risk, $\mathcal{R}^P=\mathsf{V@R}^P_\alpha$, where
$$\mathsf{V@R}^P_\alpha(Z):=\inf\{z: F^P_Z(z)\ge 1-\alpha\},\quad \alpha\in(0,1). \tag{2.2}$$
That is, $\mathsf{V@R}^P_\alpha(Z)$ is the left-side $(1-\alpha)$-quantile of the distribution of $Z$. In that example the corresponding space $\mathcal{Z}$ consists of all measurable $Z:\Omega\to\mathbb{R}$. The $\mathsf{V@R}^P_\alpha(\cdot)$ functional satisfies axioms (A1), (A3) and (A4), but not axiom (A2). It is said that $\mathcal{R}^P$ is strictly monotone if $Z\ge Z'$, $P$-a.s., and $P(Z>Z')>0$, imply $\mathcal{R}^P(Z)>\mathcal{R}^P(Z')$. The $\mathsf{AV@R}^P_\alpha$ and $\mathsf{V@R}^P_\alpha$, $\alpha\in(0,1)$, functionals are monotone but not strictly monotone. It is said that $Z,Z'\in\mathcal{Z}$ are distributionally equivalent (with respect to $P\in\mathfrak{P}$) if their cdfs coincide, i.e., $P(Z\le z)=P(Z'\le z)$ for all $z\in\mathbb{R}$.

Definition 2.1. It is said that $\mathcal{R}^P$ is law invariant if for any distributionally equivalent $Z,Z'\in\mathcal{Z}$ it follows that $\mathcal{R}^P(Z)=\mathcal{R}^P(Z')$.

That is, a law invariant risk measure $\mathcal{R}^P(Z)$ is a function of the cdf $F^P_Z$, i.e., it can be represented as
$$\mathcal{R}^P(Z)=\rho(F^P_Z),\quad Z\in\mathcal{Z}, \tag{2.3}$$
where $\rho$ maps the corresponding $\phi\in\mathfrak{F}$ into $\mathbb{R}$. For example, for $\mathcal{R}^P=\mathsf{AV@R}^P_\alpha$ the corresponding mapping is
$$\rho(\phi)=\inf_{\tau\in\mathbb{R}}\Big\{\tau+\alpha^{-1}\int_{-\infty}^{+\infty}[z-\tau]_+\,d\phi(z)\Big\},$$
defined for those $\phi\in\mathfrak{F}$ such that $\int_{-\infty}^{+\infty}[z-\tau]_+\,d\phi(z)<\infty$ for all $\tau\in\mathbb{R}$. For $\mathcal{R}^P=\mathsf{V@R}^P_\alpha$ the corresponding mapping is $\rho(\phi)=\inf\{z:\phi(z)\ge 1-\alpha\}$, $\phi\in\mathfrak{F}$. We also consider robust counterparts of risk functionals. That is, let $\mathfrak{M}\subset\mathfrak{P}$ be a (nonempty) set of probability
measures. Then the robust counterpart of $\mathcal{R}^P$ is defined as
$$\mathcal{R}(Z):=\sup_{P\in\mathfrak{M}}\mathcal{R}^P(Z), \tag{2.4}$$
assuming that $\sup_{P\in\mathfrak{M}}\mathcal{R}^P(Z)<\infty$ for all $Z\in\mathcal{Z}$. If $\mathcal{R}^P$ satisfies any of axioms (A2)–(A4) for all $P\in\mathfrak{M}$, then so does its robust counterpart $\mathcal{R}$. The monotonicity axiom is more involved, because the inequality $Z\ge Z'$ is defined $P$-almost surely with respect to a particular $P\in\mathfrak{P}$. Note that if $Z\ge Z'$ almost surely with respect to $\bar P$, then $Z\ge Z'$ almost surely with respect to any measure $P\in\mathfrak{P}$ absolutely continuous with respect to $\bar P$. Therefore we consider the following setting. Consider a probability measure $\bar P\in\mathfrak{P}$, viewed as a reference measure. Suppose that $\bar P\in\mathfrak{M}$ and every $P\in\mathfrak{M}$ is absolutely continuous with respect to $\bar P$. Suppose that for every $P\in\mathfrak{M}$ the functional $\mathcal{R}^P$ is monotone. Then the robust functional is monotone with respect to $\bar P$, i.e., if $Z\ge Z'$, $\bar P$-a.s., then $\mathcal{R}(Z)\ge\mathcal{R}(Z')$. In particular, let $\mathcal{R}^P:=\mathbb{E}_P$. Then the corresponding functional $\mathcal{R}$, referred to as the distributionally robust functional, becomes
$$\mathcal{R}(Z)=\sup_{P\in\mathfrak{M}}\mathbb{E}_P(Z). \tag{2.5}$$
Of course, $\mathbb{E}_P$ is monotone for every $P\in\mathfrak{P}$, and hence the distributionally robust functional $\mathcal{R}$ is monotone with respect to $\bar P$. Note that if the distributionally robust functional $\mathcal{R}$ is law invariant with respect to^2 $\bar P$, then every $P\in\mathfrak{M}$ is absolutely continuous with respect to $\bar P$. Indeed, if $A\in\mathcal{F}$ is such that $\bar P(A)=0$, then $\mathbf{1}_A$ is distributionally equivalent (with respect to $\bar P$) to $0$, and hence by law invariance it follows that $\mathcal{R}(\mathbf{1}_A)=0$. Consequently $P(A)\le\mathcal{R}(\mathbf{1}_A)=0$ for any $P\in\mathfrak{M}$.

Example 2.1 (Robust Value-at-Risk). Consider $\mathsf{V@R}^P_\alpha$. Then the corresponding robust Value-at-Risk is
$$\mathsf{RV@R}_\alpha(Z):=\sup_{P\in\mathfrak{M}}\mathsf{V@R}^P_\alpha(Z)=\inf\{z: P(Z\le z)\ge 1-\alpha,\ \forall P\in\mathfrak{M}\}=\inf\Big\{z: \inf_{P\in\mathfrak{M}} P(Z\le z)\ge 1-\alpha\Big\}. \tag{2.6}$$
Consider a nonatomic probability measure $\bar P\in\mathfrak{P}$, viewed as a reference measure. Suppose that $\bar P\in\mathfrak{M}$ and the respective distributionally robust functional $\mathcal{R}(Z)$, defined in (2.5), is law invariant with respect to $\bar P$. We have that for sets $A,A'\in\mathcal{F}$, the indicator functions $Z=\mathbf{1}_A$ and $Z'=\mathbf{1}_{A'}$ are distributionally equivalent with respect to $\bar P$ iff $\bar P(A)=\bar P(A')$.
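For a finite ambiguity set the two expressions in (2.6) can be checked numerically. A minimal sketch, with invented scenarios and two candidate probability vectors over them:

```python
import numpy as np

# Hypothetical finite ambiguity set M: each P is a weight vector over the same
# scenarios.  Robust V@R per (2.6): inf{z : inf_{P in M} P(Z <= z) >= 1-alpha}.
scenarios = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
M = [np.array([0.2, 0.2, 0.2, 0.2, 0.2]),   # uniform reference measure
     np.array([0.1, 0.1, 0.2, 0.3, 0.3])]   # shifts mass to the right tail

def var_single(scenarios, p, alpha):
    """V@R_alpha under one measure: left-side (1-alpha)-quantile, cf. (2.2)."""
    order = np.argsort(scenarios)
    cdf = np.cumsum(p[order])
    return scenarios[order][int(np.searchsorted(cdf, 1.0 - alpha))]

def robust_var(scenarios, M, alpha):
    """Smallest z whose worst-case cdf inf_P P(Z <= z) reaches 1 - alpha."""
    for z in np.sort(scenarios):
        if min(np.sum(p[scenarios <= z]) for p in M) >= 1.0 - alpha:
            return z
    return np.max(scenarios)

print(robust_var(scenarios, M, 0.25))                       # 4.0
print(max(var_single(scenarios, p, 0.25) for p in M))       # 4.0, matching (2.6)
```

Both computations return the same value, illustrating that the supremum of the individual Value-at-Risks equals the quantile of the worst-case cdf.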
Therefore, if $\bar P(A)=\bar P(A')$, then $\mathcal{R}(\mathbf{1}_A)=\mathcal{R}(\mathbf{1}_{A'})$. Consider the function $p:[0,1]\to\mathbb{R}$ defined as
$$p(\tau):=\inf_{P\in\mathfrak{M}} P(A)\ \text{ for some } A\in\mathcal{F} \text{ such that } \bar P(A)=\tau. \tag{2.7}$$
Since $\bar P(A)=\mathbb{E}_{\bar P}[\mathbf{1}_A]$ and because of the assumed law invariance of the respective distributionally robust functional, $p(\tau)$ is well defined; and since $\bar P$ is nonatomic, $p(\tau)$ is defined for every $\tau\in[0,1]$. Properties of the function $p(\tau)$ are discussed in [19, Proposition 7.16]. In particular, $p(\tau)\le\tau$, and $p(\cdot)$ is monotonically nondecreasing and continuous on the interval $(0,1]$. We then have (cf. [19, Proposition 7.14])
$$\mathsf{RV@R}_\alpha(Z)=\inf\big\{z: \bar P(Z\le z)\ge p^{-1}(1-\alpha)\big\}. \tag{2.8}$$
That is, $\mathsf{RV@R}_\alpha(\cdot)=\mathsf{V@R}^{\bar P}_{\alpha^*}(\cdot)$, where $1-\alpha^*=p^{-1}(1-\alpha)$. Note that $1-\alpha^*\ge 1-\alpha$.

2.1 Sample complexity

Let $Z^1,...,Z^N$ be iid samples of a random variable $Z\sim P$, and let $P_N$ be the corresponding empirical measure, i.e., $P_N=N^{-1}\sum_{i=1}^N\delta_{Z^i}$. Furthermore, let $F_{P_N}(z)=P_N(Z\le z)=N^{-1}\sum_{i=1}^N\mathbf{1}\{Z^i\le z\}$ be the empirical cdf and
$$\mathsf{V@R}^{P_N}_\alpha:=\inf\big\{z: F_{P_N}(z)\ge 1-\alpha\big\}$$
be the empirical Value-at-Risk, viewed as an estimator of $\mathsf{V@R}^P_\alpha(Z)$. We are interested in evaluating the sample size $N$ such that $|\mathsf{V@R}^{P_N}_\alpha-\mathsf{V@R}^P_\alpha(Z)|<\varepsilon$ with high probability for a given $\varepsilon>0$. Note that if the equation $F^P_Z(z)=1-\alpha$ has more than one solution, i.e., the $(1-\alpha)$ left-side quantile

^2 That is, if $\bar P(Z\le z)=\bar P(Z'\le z)$ for all $z\in\mathbb{R}$, then $\mathcal{R}(Z)=\mathcal{R}(Z')$.

is smaller than the $(1-\alpha)$ right-side quantile,
then $\mathsf{V@R}^{P_N}_\alpha$ may not converge, in probability, to $\mathsf{V@R}^P_\alpha(Z)$ as $N\to\infty$. That is, in general there is no guarantee that $\mathsf{V@R}^{P_N}_\alpha$ is a consistent estimator of $\mathsf{V@R}^P_\alpha(Z)$. Recall the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality (e.g., [7, Theorem 11.6]): for $Z\sim P$, $F_P=F^P_Z$ and $\varepsilon>0$ the following inequality holds:
$$P\Big(\sup_{z\in\mathbb{R}}\big|F_{P_N}(z)-F_P(z)\big|>\varepsilon\Big)\le 2e^{-2N\varepsilon^2}. \tag{2.9}$$
Consider first the case where $P(Z=\nu)>0$, with $\nu:=\mathsf{V@R}^P_\alpha(Z)$, i.e.,^3 $\lim_{z\uparrow\nu}F_P(z)<\lim_{z\downarrow\nu}F_P(z)$. Moreover, suppose that
$$\kappa:=\min\Big\{(1-\alpha)-\lim_{z\uparrow\nu}F_P(z),\ \lim_{z\downarrow\nu}F_P(z)-(1-\alpha)\Big\}>0. \tag{2.10}$$
Then by the DKW inequality we have that
$$P\big(\mathsf{V@R}^{P_N}_\alpha=\mathsf{V@R}^P_\alpha(Z)\big)\ge 1-2e^{-2N\kappa^2}. \tag{2.11}$$
This implies the following.

Proposition 2.1. Suppose that condition (2.10) holds. Then for $\delta>0$ and $N\ge\frac{1}{2}\kappa^{-2}\log(2/\delta)$ the empirical $\mathsf{V@R}^{P_N}_\alpha$ is equal to $\mathsf{V@R}^P_\alpha(Z)$ with probability at least $1-\delta$.

Another case which we consider is the following. Suppose that $F_P=F^P_Z$ has the following local growth property.

• There are constants $b>0$ and $c>0$ such that
$$F_P(z')-F_P(z)\ge c(z'-z)\ \text{ for all } z,z'\in[\nu-b,\nu+b] \text{ such that } z'\ge z, \tag{2.12}$$
where $\nu:=\mathsf{V@R}^P_\alpha(Z)$.

In particular, this condition holds if the density $dF_P(z)/dz$ exists and $dF_P(z)/dz\ge c$ for all $z\in[\nu-b,\nu+b]$. For $\phi\in\mathfrak{F}$ and $\alpha\in(0,1)$ denote $\mathsf{V@R}_{\alpha,\phi}:=\inf\{z:\phi(z)\ge 1-\alpha\}$. For $\epsilon>0$ and $\phi\in\mathfrak{F}$ suppose that $\sup_{z\in[\nu-b,\nu+b]}|\phi(z)-F_P(z)|\le\epsilon$. It follows from the growth condition (2.12) that if $\epsilon<bc$, then $\mathsf{V@R}_{\alpha,\phi}\in[\nu-b,\nu+b]$ and
$$\big|\mathsf{V@R}^P_\alpha(Z)-\mathsf{V@R}_{\alpha,\phi}\big|\le\epsilon/c. \tag{2.13}$$
By the DKW inequality this implies the following.

Proposition 2.2. Suppose that the growth condition (2.12) holds and let $\delta>0$. Then for any $\varepsilon\in(0,b)$ and $N\ge\frac{1}{2}c^{-2}\varepsilon^{-2}\log(2/\delta)$ it follows that
$$P\big(|\mathsf{V@R}^{P_N}_\alpha(Z)-\mathsf{V@R}^P_\alpha(Z)|<\varepsilon\big)\ge 1-\delta. \tag{2.14}$$

^3 Note that $F_P(\cdot)$ is right-side continuous and $\lim_{z\downarrow\nu}F_P(z)=F_P(\nu)$.

Proof. We have that $\mathsf{V@R}^{P_N}_\alpha\in[\nu-b,\nu+b]$ if $\sup_{z\in[\nu-b,\nu+b]}|F_{P_N}(z)-F_P(z)|\le bc$. By the DKW inequality and (2.13), for $N\ge\frac{1}{2}c^{-2}b^{-2}\log(2/\delta)$,
$$P\bigg(\big|\mathsf{V@R}^{P_N}_\alpha(Z)-\mathsf{V@R}^P_\alpha(Z)\big|<\min\Big\{b,\sqrt{\tfrac{\log(2/\delta)}{2Nc^2}}\Big\}\bigg)\ge 1-\delta.$$
Thus we have that for
$$N\ge\Big(\tfrac{1}{2}c^{-2}b^{-2}\log(2/\delta)\Big)\vee\Big(\tfrac{1}{2}c^{-2}\varepsilon^{-2}\log(2/\delta)\Big) \tag{2.15}$$
the bound (2.14) follows.
Of course, for $\varepsilon<b$ the first term on the right hand side of (2.15) can be omitted, and hence the proof is complete. □

Consider now the following optimization problem:
$$\min_{x\in\mathcal{X}}\ \mathsf{V@R}^P_\alpha(\psi(x,\xi)), \tag{2.16}$$
where $\mathcal{X}$ is a (nonempty) compact subset of $\mathbb{R}^n$, $\xi$ is a random vector whose probability distribution $P$ is supported on a closed set $\Xi\subset\mathbb{R}^d$, and $\psi:\mathcal{X}\times\Xi\to\mathbb{R}$. Denote $Z_x(\cdot):=\psi(x,\cdot)$. We assume that $Z_x:\Xi\to\mathbb{R}$ is (Borel) measurable for every $x\in\mathcal{X}$. We say that the growth condition (2.12) holds uniformly if there are constants $b>0$ and $c>0$ such that for every $x\in\mathcal{X}$,
$$F_x(z')-F_x(z)\ge c(z'-z)\ \text{ for all } z,z'\in[\nu_x-b,\nu_x+b] \text{ such that } z'\ge z, \tag{2.17}$$
where $F_x$ is the cdf of $Z_x$ and $\nu_x:=\mathsf{V@R}^P_\alpha(Z_x)$.

Assumption 2.1. There is a positive constant $L$ such that
$$\big|\psi(x,\xi)-\psi(x',\xi)\big|\le L\|x-x'\|,\quad \forall (x,\xi)\in\mathcal{X}\times\Xi. \tag{2.18}$$

Let $\xi^1,...,\xi^N$ be an iid sample of $\xi\sim P$, and let $P_N$ be the corresponding empirical measure. Denote by $D:=\sup_{x,x'\in\mathcal{X}}\|x-x'\|$ the diameter of the set $\mathcal{X}$. Note that $D$ is finite, since $\mathcal{X}$ is assumed to be compact and hence is bounded.

Theorem 2.1. Suppose Assumption 2.1 and the uniform growth condition (2.17) hold. Then for any $\varepsilon\in(0,b)$, $\delta>0$ and
$$N\ge 2c^{-2}\varepsilon^{-2}\Big[n\log\tfrac{4LD}{\varepsilon}+\log\big(\tfrac{1}{\delta}\big)\Big], \tag{2.19}$$
it follows that
$$P\Big(\sup_{x\in\mathcal{X}}\big|\mathsf{V@R}^P_\alpha(\psi(x,\xi))-\mathsf{V@R}^{P_N}_\alpha(\psi(x,\xi))\big|\le\varepsilon\Big)\ge 1-\delta. \tag{2.20}$$

Proof. We begin by first showing that for $Z_x(\cdot)=\psi(x,\cdot)$,
$$\big|\mathsf{V@R}^P_\alpha(Z_x)-\mathsf{V@R}^P_\alpha(Z_{x'})\big|\le L\|x-x'\|,\quad x,x'\in\mathcal{X}.$$
Indeed, let $\nu^x_\alpha:=\mathsf{V@R}^P_\alpha(Z_x)$. It is clear that for any $\epsilon>0$ we have $P(Z_x\le\nu^x_\alpha+\epsilon)\ge 1-\alpha$, and hence $P(Z_{x'}-\|Z_{x'}-Z_x\|_\infty\le\nu^x_\alpha+\epsilon)\ge 1-\alpha$. This implies $\nu^{x'}_\alpha\le\nu^x_\alpha+\|Z_{x'}-Z_x\|_\infty+\epsilon$. Since this holds for any $\epsilon>0$, it follows that $\nu^{x'}_\alpha\le\nu^x_\alpha+\|Z_{x'}-Z_x\|_\infty$. Interchanging $\nu^{x'}_\alpha$ and $\nu^x_\alpha$ yields
$$\big|\nu^{x'}_\alpha-\nu^x_\alpha\big|\le\|Z_{x'}-Z_x\|_\infty\le L\|x-x'\|,\quad x,x'\in\mathcal{X}, \tag{2.21}$$
where the last inequality follows from (2.18). Note that the same argument also shows
$$\big|\mathsf{V@R}^{P_N}_\alpha(Z_x)-\mathsf{V@R}^{P_N}_\alpha(Z_{x'})\big|\le L\|x-x'\|,\quad x,x'\in\mathcal{X}. \tag{2.22}$$
Suppose the uniform growth condition (2.17) holds. Let $\eta:=\varepsilon/(4L)$ and consider an $\eta$-net of $\mathcal{X}$, denoted $\mathcal{N}_\eta$. That is, for any $x\in\mathcal{X}$ there is $x_\eta\in\mathcal{N}_\eta$ such that $\|x-x_\eta\|\le\eta$. For $x\in\mathcal{X}$ we can write
$$\big|\mathsf{V@R}^P_\alpha(Z_x)-\mathsf{V@R}^{P_N}_\alpha(Z_x)\big|\le \big|\mathsf{V@R}^P_\alpha(Z_{x_\eta})-\mathsf{V@R}^{P_N}_\alpha(Z_{x_\eta})\big| +\big|\mathsf{V@R}^P_\alpha(Z_x)-\mathsf{V@R}^P_\alpha(Z_{x_\eta})\big| +\big|\mathsf{V@R}^{P_N}_\alpha(Z_x)-\mathsf{V@R}^{P_N}_\alpha(Z_{x_\eta})\big|. \tag{2.23}$$
By (2.21) and (2.22), the sum of the last two terms in (2.23) is at most $\varepsilon/2$. It follows that
$$P\Big(\sup_{x\in\mathcal{X}}\big|\mathsf{V@R}^P_\alpha(Z_x)-\mathsf{V@R}^{P_N}_\alpha(Z_x)\big|\le\varepsilon\Big)\ge P\Big(\max_{x_\eta\in\mathcal{N}_\eta}\big|\mathsf{V@R}^P_\alpha(Z_{x_\eta})-\mathsf{V@R}^{P_N}_\alpha(Z_{x_\eta})\big|\le\varepsilon/2\Big). \tag{2.24}$$
Now by Proposition 2.2, for every $x_\eta\in\mathcal{N}_\eta$ we have that $|\mathsf{V@R}^P_\alpha(Z_{x_\eta})-\mathsf{V@R}^{P_N}_\alpha(Z_{x_\eta})|<\varepsilon/2$ with probability at least $1-\delta$ for sample size $N\ge 2c^{-2}\varepsilon^{-2}\log(2/\delta)$. Consequently,
$$P\Big(\max_{x_\eta\in\mathcal{N}_\eta}\big|\mathsf{V@R}^P_\alpha(Z_{x_\eta})-\mathsf{V@R}^{P_N}_\alpha(Z_{x_\eta})\big|\le\varepsilon/2\Big)\ge 1-M\delta,$$
where $M:=|\mathcal{N}_\eta|$ is the cardinality of the net. By (2.24) it follows that for $N\ge 2c^{-2}\varepsilon^{-2}\log(2M/\delta)$ the bound (2.20) follows. The cardinality $M=|\mathcal{N}_\eta|$ of the net can be bounded by $C(D/\eta)^n=C(4LD/\varepsilon)^n$ for some constant $C>0$. The choice of $C$ may depend on the considered norm $\|\cdot\|$; for example, for the $\ell_\infty$ norm (the max-norm) we can use $C=1$. The exact value of the constant $C$ is not important here; we can use generically $C=2$. We obtain that (2.20) holds for the sample size $N\ge 2c^{-2}\varepsilon^{-2}\log\big(\tfrac{(4LD/\varepsilon)^n}{\delta}\big)$. This completes the proof. □

Denote by $\mathcal{S}_\varepsilon$ the set of $\varepsilon$-optimal solutions of problem (2.16), i.e.,
$$\mathcal{S}_\varepsilon=\Big\{x\in\mathcal{X}: \mathsf{V@R}^P_\alpha(\psi(x,\xi))\le\inf_{x'\in\mathcal{X}}\mathsf{V@R}^P_\alpha(\psi(x',\xi))+\varepsilon\Big\},$$
and let $\hat{\mathcal{S}}_N:=\arg\min_{x\in\mathcal{X}}\mathsf{V@R}^{P_N}_\alpha(\psi(x,\xi))$ be the set of optimal solutions of the respective empirical (Sample Average Approximation) problem.
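The Sample Average Approximation of (2.16) can be sketched numerically. The loss $\psi(x,\xi)=(x-1)^2+x\,\xi$ with $\xi\sim N(0,1)$ and the grid over $\mathcal{X}=[0,2]$ below are invented for this illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, N = 0.1, 20000
xi = rng.standard_normal(N)            # iid sample xi^1, ..., xi^N
grid = np.linspace(0.0, 2.0, 201)      # discretized feasible set X = [0, 2]

def empirical_var(z, a):
    """Empirical Value-at-Risk: left-side (1-a)-quantile of the sample."""
    z = np.sort(z)
    return z[int(np.ceil(len(z) * (1.0 - a))) - 1]

values = [empirical_var((x - 1.0) ** 2 + x * xi, alpha) for x in grid]
x_hat = grid[int(np.argmin(values))]
# For x >= 0 the true objective is (x-1)^2 + x*z_{0.9} with z_{0.9} ~ 1.2816,
# so the true minimizer is x* = 1 - z_{0.9}/2 ~ 0.36; x_hat lands nearby.
print(round(float(x_hat), 2))
```

Since the standard normal cdf has positive density near the quantile, the growth condition (2.12) holds locally and the empirical minimizer concentrates around the true one, as Theorem 2.1 predicts.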
By the uniform bound of Theorem 2.1 we have the following (this can be compared with [19, Theorem 5.18]).

Corollary 2.1. Suppose Assumption 2.1 and the uniform growth condition (2.17) hold. Then for any $\varepsilon\in(0,b)$, $\delta>0$ and $N$ satisfying (2.19), it follows that the event $\{\hat{\mathcal{S}}_N\subset\mathcal{S}_\varepsilon\}$ happens with probability at least $1-\delta$.

2.2 Conditional risk functionals

We discuss in this section conditional counterparts of risk functionals. For a random variable $Z:\Omega\to\mathbb{R}$ and a sigma subalgebra $\mathcal{G}\subset\mathcal{F}$ consider the conditional cdf
$$F^P_{Z|\mathcal{G}}(z):=P_{|\mathcal{G}}(Z\le z)=\mathbb{E}^P_{|\mathcal{G}}[\mathbf{1}\{Z\le z\}],\quad z\in\mathbb{R}.$$
Suppose that $\mathcal{R}^P$ is law invariant and hence can be represented in the form (2.3). Then the conditional counterpart of $\mathcal{R}^P$ is defined as
$$\mathcal{R}^P_{|\mathcal{G}}(Z):=\rho\big(F^P_{Z|\mathcal{G}}\big). \tag{2.25}$$
For law invariant coherent risk measures this definition of the respective conditional counterparts is equivalent to the standard definitions used in the literature. For example, in the definition (2.1) of the Average Value-at-Risk the minimum is attained at $\bar\tau=\mathsf{V@R}^P_\alpha(Z)$, and hence its conditional counterpart can be obtained by
replacing $\mathsf{V@R}^P_\alpha(Z)$ with its conditional counterpart. For the Value-at-Risk, its conditional counterpart is
$$\mathsf{V@R}^P_{\alpha|\mathcal{G}}(Z)=\inf\big\{z: P_{|\mathcal{G}}(Z\le z)\ge 1-\alpha\big\},\quad \alpha\in(0,1). \tag{2.26}$$
In particular we consider the following construction, where we follow [21, Appendix A1-A2]. Assume that the measurable space $(\Omega,\mathcal{F})$ is given as the product of measurable spaces $(\Omega_1,\mathcal{F}_1)$ and $(\Omega_2,\mathcal{F}_2)$, i.e., $\Omega=\Omega_1\times\Omega_2$ and $\mathcal{F}=\mathcal{F}_1\otimes\mathcal{F}_2$. Denote by $\mathfrak{P}_1$ and $\mathfrak{P}_2$ the sets of probability measures on the respective measurable spaces $(\Omega_1,\mathcal{F}_1)$ and $(\Omega_2,\mathcal{F}_2)$. For a probability measure $P\in\mathfrak{P}$ we denote by $P_1\in\mathfrak{P}_1$ the respective marginal probability measure on $(\Omega_1,\mathcal{F}_1)$, that is, $P_1(A)=P(A\times\Omega_2)$ for $A\in\mathcal{F}_1$. The marginal probability measure $P_2\in\mathfrak{P}_2$ is defined in a similar way. We assume that $(\Omega_1,\mathcal{F}_1)$ and $(\Omega_2,\mathcal{F}_2)$ are Polish spaces. Consider the sigma subalgebra $\mathcal{G}$ of $\mathcal{F}$ consisting of the sets $A\times\Omega_2$, $A\in\mathcal{F}_1$, that is,
$$\mathcal{G}:=\{A\times\Omega_2: A\in\mathcal{F}_1\}. \tag{2.27}$$
Note that the elements of the subalgebra $\mathcal{G}$ are determined by sets (events) $A\in\mathcal{F}_1$, and in that sense $\mathcal{G}$ can be identified with the sigma algebra $\mathcal{F}_1$; we write this as $\mathcal{G}\equiv\mathcal{F}_1$. A random variable $Z(\omega_1,\omega_2)$, $\omega=(\omega_1,\omega_2)\in\Omega_1\times\Omega_2$, is $\mathcal{G}$-measurable iff $Z(\omega_1,\cdot)$ is constant on $\Omega_2$ for every $\omega_1\in\Omega_1$, and $Z(\cdot,\omega_2)$ is $\mathcal{F}_1$-measurable for every $\omega_2\in\Omega_2$. Therefore, with some abuse of notation, we write a $\mathcal{G}$-measurable variable as a function $Z(\omega_1)$ of $\omega_1\in\Omega_1$. We also use the notation $Z_{\omega_1}(\omega_2):=Z(\omega_1,\omega_2)$, viewed as a random variable $Z_{\omega_1}:\Omega_2\to\mathbb{R}$.

Definition 2.2 (Regular Probability Kernel). A function $K:\mathcal{F}_2\times\Omega_1\to[0,1]$ is said to be a Regular Probability Kernel (RPK) of a probability measure $P\in\mathfrak{P}$ if the following properties hold: (i) $K(\cdot|\omega_1)$ is a probability measure for $P_1$-almost every $\omega_1\in\Omega_1$; (ii) for every $B\in\mathcal{F}_2$ the function $K(B|\cdot)$ is $\mathcal{F}_1$-measurable; (iii) for every $A\in\mathcal{F}_1$ and $B\in\mathcal{F}_2$ it follows that
$$P(A\times B)=\int_A K(B|\omega_1)\,dP_1(\omega_1). \tag{2.28}$$
In particular, $P_2(B)=\int_{\Omega_1} K(B|\omega_1)\,dP_1(\omega_1)$ is the respective marginal probability measure.
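On a finite product space the kernel is just a table of conditional probability vectors, and the conditional Value-at-Risk of (2.26) can be read off from each conditional cdf. A minimal sketch with invented numbers:

```python
import numpy as np

# Finite product space Omega = Omega1 x Omega2 with an explicit kernel
# K(.|w1), illustrating (2.28) and the conditional V@R of (2.26).
P1 = {"a": 0.5, "b": 0.5}                  # marginal on Omega1 = {a, b}
K = {"a": np.array([0.7, 0.2, 0.1]),       # P_{|w1 = a} on Omega2 = {0, 1, 2}
     "b": np.array([0.1, 0.2, 0.7])}       # P_{|w1 = b}
Z = {"a": np.array([0.0, 1.0, 2.0]),       # Z_{w1}: Omega2 -> R
     "b": np.array([0.0, 1.0, 2.0])}

def cond_var(w1, alpha):
    """V@R_{alpha|w1}(Z) = inf{z : F^P_{Z|w1}(z) >= 1 - alpha}."""
    vals, probs = Z[w1], K[w1]
    order = np.argsort(vals)
    cdf = np.cumsum(probs[order])          # conditional cdf F^P_{Z|w1}
    idx = int(np.searchsorted(cdf, 1.0 - alpha))
    return vals[order][idx]

print(cond_var("a", 0.25), cond_var("b", 0.25))   # 1.0 2.0
```

The same random variable has different conditional Values-at-Risk under the two kernel rows, while the marginal $P_2$ is the $P_1$-average of the rows, as in (2.28).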
The Disintegration Theorem (e.g., [3, III-70]) ensures the existence of the RPK for a wide class of measurable spaces, in particular if $(\Omega_1,\mathcal{F}_1)$ and $(\Omega_2,\mathcal{F}_2)$ are Polish spaces. Therefore we assume the existence of the RPK for every $P\in\mathfrak{P}$. The function $K(B|\cdot)$ is defined up to sets of $P_1$-measure zero, and is uniquely determined on $\mathrm{supp}(P_1)$. The RPK is associated with a specified $P\in\mathfrak{P}$; we sometimes write $K_P$ to emphasize this. For $P\in\mathfrak{P}$ and $\omega_1\in\mathrm{supp}(P_1)$ we can define a probability measure $P_{|\omega_1}\in\mathfrak{P}_2$ as
$$P_{|\omega_1}(B):=K_P(B|\omega_1),\quad B\in\mathcal{F}_2. \tag{2.29}$$
Then we can define the conditional counterpart of the cdf $F^P_Z$ as
$$F^P_{Z|\omega_1}(z):=P_{|\omega_1}(Z_{\omega_1}\le z),\quad P_1\text{-a.e. } \omega_1\in\Omega_1. \tag{2.30}$$
That is, $F^P_{Z|\omega_1}$ is the cdf of $Z_{\omega_1}:\Omega_2\to\mathbb{R}$ associated with the probability measure $P_{|\omega_1}$. The corresponding conditionally law invariant risk functional is a function of the conditional cdf. That is, the conditional law invariant risk functional $\mathcal{R}^P_{|\omega_1}:\mathcal{Z}\to\mathbb{R}$ is defined as
$$\mathcal{R}^P_{|\omega_1}(Z)=\rho_1\big(F^P_{Z|\omega_1}\big), \tag{2.31}$$
where $\rho_1$ is the corresponding mapping. Note that for the sigma algebra $\mathcal{G}\equiv\mathcal{F}_1$ defined in (2.27), $F^P_{Z|\omega_1}$ can be viewed as a version $F^P_{Z|\omega_1}=F^P_{Z|\mathcal{G}}(\omega_1)$ of the conditional cdf, and $\mathcal{R}^P_{|\omega_1}$ can be viewed as a version $\mathcal{R}^P_{|\omega_1}=\mathcal{R}^P_{|\mathcal{G}}(\omega_1)$ of the conditional risk measure. For example, for the Value-at-Risk its conditional counterpart is
$$\mathsf{V@R}^P_{\alpha|\omega_1}(Z)=\inf\big\{z: F^P_{Z|\omega_1}(z)\ge 1-\alpha\big\}. \tag{2.32}$$
Note that $F^P_{Z|\omega_1}$ and $\mathcal{R}^P_{|\omega_1}(Z)$ are uniquely defined for $\omega_1\in\mathrm{supp}(P_1)$, and $\mathcal{R}^P_{|\omega_1}(Z)$ can
be arbitrary for $\omega_1\in\Omega_1\setminus\mathrm{supp}(P_1)$.

2.2.1 Multistage setting

This can be extended in a straightforward way to the setting where $\Omega=\Omega_1\times\cdots\times\Omega_T$ and $\mathcal{F}=\mathcal{F}_1\otimes\cdots\otimes\mathcal{F}_T$, with $T\ge 2$. For $P\in\mathfrak{P}$ and $t\in\{2,...,T\}$ we can define conditional probabilities $P_{|\omega_{[t-1]}}$ determined by the respective Regular Probability Kernel $K^P_t:\mathcal{F}_t\times\Omega_{[t-1]}\to[0,1]$, where $\omega_{[t-1]}=(\omega_1,...,\omega_{t-1})$, $\Omega_{[t-1]}=\Omega_1\times\cdots\times\Omega_{t-1}$ and $\mathcal{F}_{[t-1]}=\mathcal{F}_1\otimes\cdots\otimes\mathcal{F}_{t-1}$. Note that $P_{|\omega_{[t-1]}}$ is a probability measure on $(\Omega_t,\mathcal{F}_t)$ for given (conditional on) $\omega_{[t-1]}$. Then for $Z_t=Z_t(\omega_1,...,\omega_t)$, with $Z_{\omega_{[t-1]}}(\cdot):=Z_t(\omega_{[t-1]},\cdot)$ viewed as a random variable on $(\Omega_t,\mathcal{F}_t)$, define
$$F^P_{Z_t|\omega_{[t-1]}}(z):=P_{|\omega_{[t-1]}}\big(Z_{\omega_{[t-1]}}\le z\big),\quad P_{[t-1]}\text{-a.e. } \omega_{[t-1]}\in\Omega_{[t-1]}. \tag{2.33}$$
The corresponding conditional law invariant functional $\mathcal{R}^P_{|\omega_{[t-1]}}(Z_t)$ is defined as a function of $F^P_{Z_t|\omega_{[t-1]}}$, in a way similar to (2.31), that is,
$$\mathcal{R}^P_{|\omega_{[t-1]}}(Z_t)=\rho_{t-1}\big(F^P_{Z_t|\omega_{[t-1]}}\big). \tag{2.34}$$

Rectangularity. In particular, suppose that $P=P_1\times\cdots\times P_T$. We refer to such a setting as the rectangularity condition. In that case $P_{|\omega_{[t-1]}}=P_t$ for $P_{[t-1]}$-almost every $\omega_{[t-1]}\in\Omega_{[t-1]}$, where $P_{[t-1]}=P_1\times\cdots\times P_{t-1}$ is the respective marginal distribution on $(\Omega_{[t-1]},\mathcal{F}_{[t-1]})$. Consequently we can define the corresponding conditional risk measure as
$$\mathcal{R}^P_{|\omega_{[t-1]}}(Z_t):=\mathcal{R}^{P_t}\big(Z_{\omega_{[t-1]}}\big). \tag{2.35}$$
For example, for the Value-at-Risk, definition (2.32) becomes
$$\mathsf{V@R}^P_{\alpha|\omega_{[t-1]}}(Z_t)=\inf\big\{z: P_t\big(Z_{\omega_{[t-1]}}\le z\big)\ge 1-\alpha\big\}. \tag{2.36}$$
Let us now consider the robust conditional counterpart of $\mathcal{R}$ defined in (2.4). Suppose that the ambiguity set is of the following form (the rectangularity assumption):
$$\mathfrak{M}:=\{P=P_1\times\cdots\times P_T: P_t\in\mathfrak{M}_t,\ t=1,...,T\}, \tag{2.37}$$
where $\mathfrak{M}_t\subset\mathfrak{P}_t$ are sets of marginal probability measures. In that case the robust counterpart of (2.35) is
$$\mathcal{R}_{|\omega_{[t-1]}}(Z_t):=\sup_{P_t\in\mathfrak{M}_t}\mathcal{R}^{P_t}_{|\omega_{[t-1]}}(Z_t). \tag{2.38}$$

Remark 2.1. In the rectangular case we can proceed without assuming conditional law invariance.
For example, we can use the distributionally robust functionals $\mathcal{R}_t(\cdot):=\sup_{P_t\in\mathfrak{M}_t}\mathbb{E}_{P_t}(\cdot)$ even if the ambiguity set $\mathfrak{M}_t$ consists of probability measures which are not absolutely continuous with respect to the reference measure. For example, the ambiguity set $\mathfrak{M}_t$ can consist of probability measures within Wasserstein distance less than a positive constant $r$ from a reference probability measure (a Wasserstein ball of radius $r$). On the other hand, in non-rectangular settings the assumption of conditional law invariance is essential in the construction of conditional counterparts of risk functionals.

Nested functionals. Consider the risk functionals (risk measures) defined in (2.34). For $t=T$ the functional $\mathcal{R}^P_{|\omega_{[T-1]}}$ maps a random variable $Z_T=Z_T(\omega_1,...,\omega_T)$ into $Z_{T-1}=Z_{T-1}(\omega_1,...,\omega_{T-1})$, i.e., $Z_{T-1}=\mathcal{R}^P_{|\omega_{[T-1]}}(Z_T)$. Recall that $Z_T:\Omega_{[T]}\to\mathbb{R}$ is a measurable function, that $Z_{T-1}:\Omega_{[T-1]}\to\mathbb{R}$ is measurable by construction, and that $\Omega_{[T]}=\Omega$. We continue this process iteratively by defining
$$Z_{t-1}=\mathcal{R}^P_{|\omega_{[t-1]}}(Z_t),\quad t=T,...,2. \tag{2.39}$$
The corresponding nested functional is given by $\mathcal{R}^P_{|\omega_1}(Z_2)$, which is a real number. That is, the nested risk functional is
$$\mathcal{R}^P(Z_T)=\mathcal{R}^P_{|\omega_1}\big(\cdots\mathcal{R}^P_{|\omega_{[T-1]}}(Z_T)\big). \tag{2.40}$$
We write this as $\mathcal{R}^P=\mathcal{R}^P_{|\omega_1}\circ\cdots\circ\mathcal{R}^P_{|\omega_{[T-1]}}$. In the rectangular case we can use risk functionals of the form (2.35). For robust functionals of the form (2.38), the corresponding nested functional $\mathcal{R}=\mathcal{R}_{|\omega_1}\circ\cdots\circ\mathcal{R}_{|\omega_{[T-1]}}$ is defined in a similar way,
$$\mathcal{R}(Z_T)=\mathcal{R}_{|\omega_1}\big(\cdots\mathcal{R}_{|\omega_{[T-1]}}(Z_T)\big). \tag{2.41}$$
It should be verified that the considered functionals are well defined. As was already mentioned, the $\mathcal{F}_{[t-1]}$-measurability of $\mathcal{R}^P_{|\omega_{[t-1]}}(Z_t)$ follows by the construction. In general, the considered variables $Z_t$ may be restricted to an appropriate linear space of measurable functions (such a linear space is explicitly mentioned in the definition of axioms (A1)–(A4)). For the Value-at-Risk functionals the corresponding linear space consists of all measurable functions, and $\mathcal{R}^P_{|\omega_{[t-1]}}(Z_t)=\mathsf{V@R}^P_{\alpha|\omega_{[t-1]}}(Z_t)$ is well defined for any $\mathcal{F}_{[t]}$-measurable $Z_t$. For the robust functionals defined in (2.38), such verification can be more involved. Of course, if the space $\Omega$ is finite, the measurability and existence issues hold automatically.

3 Risk averse Stochastic Optimal Control

Consider the Stochastic Optimal Control (SOC) problem (e.g., [2]):
$$\min_{\pi\in\Pi}\ \mathbb{E}^\pi\Big[\sum_{t=1}^T c_t(x_t,u_t,\xi_t)+c_{T+1}(x_{T+1})\Big], \tag{3.1}$$
where $\xi_1,...,\xi_T$ is a sequence of random vectors, and $\Pi$ is the set of policies governed by the functional equation $x_{t+1}=\Phi_t(x_t,u_t,\xi_t)$, $t=1,...,T$, and the minimization over controls $u_t\in\mathcal{U}_t$. In SOC it is usually assumed that the random vectors $\xi_t$ are mutually independent (the stagewise independence assumption), while the distribution of $\xi_t$ is allowed to depend on the state $x_t$ and control $u_t$, $t=1,...,T$. Unless stated otherwise, we assume that the distribution of $(\xi_1,...,\xi_T)$ does not depend on states and controls, while we allow interstage dependence of $\xi_1,...,\xi_T$. In that case it suffices to consider policies
$$\Pi=\big\{\pi=(\pi_1,\dots,\pi_T): u_t=\pi_t(x_t,\xi_{[t-1]}),\ u_t\in\mathcal{U}_t,\ x_{t+1}=\Phi_t(x_t,u_t,\xi_t),\ t=1,...,T\big\}, \tag{3.2}$$
where $\xi_{[t]}:=(\xi_1,...,\xi_t)$ denotes the history of the process, with (deterministic) initial values $x_1$ and $\xi_0$. We assume that $\xi_t\in\Xi_t$, where $\Xi_t\subset\mathbb{R}^{d_t}$ is a closed set and $\mathcal{F}_t$ is the Borel sigma algebra of $\Xi_t$; that $\mathcal{U}_t$ is a (nonempty) closed subset of $\mathbb{R}^{m_t}$ and $\mathcal{X}_t$ is a closed subset of $\mathbb{R}^{n_t}$; and that the cost functions $c_t:\mathcal{X}_t\times\mathcal{U}_t\times\Xi_t\to\mathbb{R}$ and mappings $\Phi_t:\mathcal{X}_t\times\mathcal{U}_t\times\Xi_t\to\mathcal{X}_{t+1}$ are continuous, $t=1,...,T$.
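Before turning to the dynamic equations, the nested construction (2.40) of Section 2.2.1 can be sketched on a finite tree; the iid Bernoulli stages and the payoff below are illustrative only:

```python
import numpy as np

# Nested Value-at-Risk (2.40) under rectangularity (2.35) on a finite T = 3
# tree: omega_t iid uniform on {0, 1}, Z = omega_1 + omega_2 + omega_3.
alpha = 0.4

def var_discrete(vals, probs):
    """V@R_alpha of a finite distribution: inf{z : F(z) >= 1 - alpha}."""
    order = np.argsort(vals)
    cdf = np.cumsum(np.asarray(probs, dtype=float)[order])
    return np.asarray(vals, dtype=float)[order][int(np.searchsorted(cdf, 1.0 - alpha))]

def Z2(w1, w2):   # innermost stage: V@R over omega_3, cf. (2.39) with t = 3
    return var_discrete([w1 + w2 + w3 for w3 in (0, 1)], [0.5, 0.5])

def Z1(w1):       # next stage: V@R over omega_2
    return var_discrete([Z2(w1, w2) for w2 in (0, 1)], [0.5, 0.5])

nested = var_discrete([Z1(w1) for w1 in (0, 1)], [0.5, 0.5])
plain = var_discrete([w1 + w2 + w3 for w1 in (0, 1) for w2 in (0, 1)
                      for w3 in (0, 1)], [1.0 / 8] * 8)
print(nested, plain)   # 3.0 2.0
```

The nested value (3.0) differs from the plain Value-at-Risk of the total (2.0): composing the stagewise quantiles is not the same as taking one quantile of the sum, which is precisely why the nested formulation is treated as a separate object.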
It is well known that the dynamic equations for the value functions of problem (3.1) can be written as follows: $V_{T+1}(x_{T+1})=c_{T+1}(x_{T+1})$, and for $t=T,...,1$,
$$V_t(x_t,\xi_{[t-1]})=\inf_{u_t\in\mathcal{U}_t}\mathbb{E}_{|\xi_{[t-1]}}\big[c_t(x_t,u_t,\xi_t)+V_{t+1}(\Phi_t(x_t,u_t,\xi_t),\xi_{[t]})\big]. \tag{3.3}$$
Moreover, if the process $\xi_1,...,\xi_T$ is stagewise independent, then the value functions depend only on the state variables, i.e., the dynamic equations take the form
$$V_t(x_t)=\inf_{u_t\in\mathcal{U}_t}\mathbb{E}\big[c_t(x_t,u_t,\xi_t)+V_{t+1}(\Phi_t(x_t,u_t,\xi_t))\big]. \tag{3.4}$$
Let us consider the following risk averse counterpart of the SOC problem. Consider the set $\Xi:=\Xi_1\times\cdots\times\Xi_T$ equipped with the sigma algebra $\mathcal{F}=\mathcal{F}_1\otimes\cdots\otimes\mathcal{F}_T$, and denote by $\mathfrak{P}$ the set of probability measures on $(\Xi,\mathcal{F})$. For $P\in\mathfrak{P}$ consider conditional law invariant risk functionals $\mathcal{R}^P_{|\xi_{[t-1]}}$, defined as in (2.34), and the corresponding nested functional $\mathcal{R}^P=\mathcal{R}^P_{|\xi_1}\circ\cdots\circ\mathcal{R}^P_{|\xi_{[T-1]}}$, defined as in (2.40). The respective risk averse counterpart of problem (3.1) is
$$\min_{\pi\in\Pi}\ \mathcal{R}^P\Big[\sum_{t=1}^T c_t(x_t,u_t,\xi_t)+c_{T+1}(x_{T+1})\Big], \tag{3.5}$$
where the minimization is over policies of the form (3.2). The dynamic programming equations for problem (3.5) can be written as $V_{T+1}(x_{T+1})=c_{T+1}(x_{T+1})$, and for $t=T,...,1$,
$$V_t(x_t,\xi_{[t-1]})=\inf_{u_t\in\mathcal{U}_t}\mathcal{R}^P_{|\xi_{[t-1]}}\big[c_t(x_t,u_t,\xi_t)+V_{t+1}(\Phi_t(x_t,u_t,\xi_t),\xi_{[t]})\big], \tag{3.6}$$
which can be viewed as the counterpart of the dynamic equations (3.3). A sufficient condition for a policy $\bar u_t=\bar\pi_t(x_t,\xi_{[t-1]})$, $t=1,...,T$, to be optimal is that
$$\bar u_t\in\arg\min_{u_t\in\mathcal{U}_t}\mathcal{R}^P_{|\xi_{[t-1]}}\big[c_t(x_t,u_t,\xi_t)+V_{t+1}(\Phi_t(x_t,u_t,\xi_t),\xi_{[t]})\big]. \tag{3.7}$$
The proof of the above dynamic equations is based on an interchangeability property of risk functionals satisfying the axioms of monotonicity and translation equivariance (cf. [17]). It is also pointed out in [17] that without the strict monotonicity property of the risk functionals, conditions (3.7) may not be necessary for optimality, i.e., there may exist a policy $\bar\pi\in\Pi$ which is optimal for problem (3.5) but does not satisfy the dynamic equations (3.7). Such an optimal policy is not dynamically consistent, in the sense that although it is optimal from the point of view of the first stage, it may not be optimal at later stages conditional on some realizations of the data process.

Rectangular setting. Consider now the rectangular setting, i.e., assume that $P=P_1\times\cdots\times P_T$, where $P_t$ is the marginal distribution of $\xi_t$. Then the risk functionals become of the form (2.35), that is, $\mathcal{R}^P_{|\xi_{[t-1]}}(Z_t):=\mathcal{R}^{P_t}(Z_{\xi_{[t-1]}})$. In that case the value functions depend on the state variables only and the dynamic equations become
$$V_t(x_t)=\inf_{u_t\in\mathcal{U}_t}\mathcal{R}^{P_t}\big[c_t(x_t,u_t,\xi_t)+V_{t+1}(\Phi_t(x_t,u_t,\xi_t))\big], \tag{3.8}$$
where $\xi_t\sim P_t$, $t=1,...,T$. The rectangularity condition is a counterpart of the stagewise independence condition, and the risk averse dynamic equations (3.8) are the counterpart of equations (3.4). In the rectangular case we can also consider the robust risk averse setting. That is, let $\mathfrak{M}_t$ be a set of probability measures on $(\Xi_t,\mathcal{F}_t)$. Then the counterparts of the dynamic equations (3.8) become
$$V_t(x_t)=\inf_{u_t\in\mathcal{U}_t}\sup_{P_t\in\mathfrak{M}_t}\mathcal{R}^{P_t}\big[c_t(x_t,u_t,\xi_t)+V_{t+1}(\Phi_t(x_t,u_t,\xi_t))\big]. \tag{3.9}$$
This corresponds to replacing the functional $\mathcal{R}^{P_t}(\cdot)$ with its robust counterpart (compare with (2.4))
$$\mathcal{R}_t(\cdot):=\sup_{P_t\in\mathfrak{M}_t}\mathcal{R}^{P_t}(\cdot),\quad t=1,...,T. \tag{3.10}$$
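The rectangular backward recursion (3.8) can be sketched on a finite grid; the inventory-style dynamics, cost and grids below are invented for this illustration, with $\mathcal{R}^{P_t}=\mathsf{AV@R}_\alpha$ at every stage:

```python
import numpy as np

XI = np.array([0.0, 1.0, 2.0])       # demand scenarios, xi_t uniform (made up)
P_XI = np.full(3, 1.0 / 3.0)
STATES = np.arange(5)                # stock levels 0..4
CONTROLS = np.arange(3)              # order quantities 0..2
T, alpha = 3, 0.3

def avar(vals, probs, a):
    """AV@R_a of a finite distribution via representation (2.1)."""
    order = np.argsort(vals)
    v, p = np.asarray(vals)[order], np.asarray(probs)[order]
    tau = v[int(np.searchsorted(np.cumsum(p), 1.0 - a))]
    return tau + np.sum(p * np.maximum(v - tau, 0.0)) / a

V = np.zeros(len(STATES))            # terminal condition V_{T+1} = 0
for t in range(T):                   # backward recursion (3.8)
    V_new = np.empty_like(V)
    for x in STATES:
        q = []
        for u in CONTROLS:
            nxt = np.clip(x + u - XI, 0, len(STATES) - 1).astype(int)
            cost = u + 2.0 * np.maximum(XI - x - u, 0.0)   # order + shortage
            q.append(avar(cost + V[nxt], P_XI, alpha))
        V_new[x] = min(q)
    V = V_new
print(np.round(V, 3))                # first-stage value function on the grid
```

Because the stage cost is nonincreasing in the stock level and the dynamics preserve the ordering of states, the computed value function is nonincreasing in $x$, consistent with monotonicity of $\mathsf{AV@R}_\alpha$ propagating through the recursion.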
These dynamic equations correspond to the nested formulation of the risk averse SOC problem:
$$\min_{\pi\in\Pi}\ \mathcal{R}\Big[\sum_{t=1}^T c_t(x_t,u_t,\xi_t)+c_{T+1}(x_{T+1})\Big], \tag{3.11}$$
where $\mathcal{R}=\mathcal{R}_{|\xi_1}\circ\cdots\circ\mathcal{R}_{|\xi_{[T-1]}}$ (compare with (2.41)).

3.1 Infinite horizon risk averse SOC

We can also consider the risk averse infinite horizon SOC problem and its robust extension. The risk averse counterpart of the classical Bellman equation is
$$V(x)=\inf_{u\in\mathcal{U}}\mathcal{R}^P\big[c(x,u,\xi)+\beta V(\Phi(x,u,\xi))\big], \tag{3.12}$$
and
$$V(x)=\inf_{u\in\mathcal{U}}\sup_{P\in\mathfrak{M}}\mathcal{R}^P\big[c(x,u,\xi)+\beta V(\Phi(x,u,\xi))\big] \tag{3.13}$$
for the robust risk averse problem. Here $\beta\in(0,1)$ is the discount factor, $\mathcal{X}\subset\mathbb{R}^n$, $\mathcal{U}\subset\mathbb{R}^m$, $\Xi\subset\mathbb{R}^d$ are (nonempty) closed sets, $\mathcal{F}$ is the Borel sigma algebra of $\Xi$, $c:\mathcal{X}\times\mathcal{U}\times\Xi\to\mathbb{R}$ and $\Phi:\mathcal{X}\times\mathcal{U}\times\Xi\to\mathcal{X}$ are continuous functions, and $\mathfrak{M}$ is a set of probability distributions on $(\Xi,\mathcal{F})$. We assume that for every $P\in\mathfrak{M}$ the functional $\mathcal{R}^P$ satisfies the axioms of monotonicity and translation equivariance, and that $\mathcal{R}^P(0)=0$. It follows that if $|Z|\le\kappa$, $P$-almost surely for some $\kappa\in\mathbb{R}$, then $|\mathcal{R}^P(Z)|\le\mathcal{R}^P(0)+\kappa=\kappa$. Assuming that the cost function is bounded, it follows that the corresponding Bellman operator has the contraction property, and hence by the fixed point
https://arxiv.org/abs/2505.16651v1
theorem, equation (3.13) has a unique solution. That is, consider the space $\mathbb B$ of bounded measurable functions $g:\mathcal X\to\mathbb R$, equipped with the sup-norm $\|g\|_\infty=\sup_{x\in\mathcal X}|g(x)|$, and the Bellman operator $\mathcal T:\mathbb B\to\mathbb B$,
$$\mathcal T(g)(\cdot):=\inf_{u\in\mathcal U}\sup_{P\in\mathfrak M}\mathcal R_P\big(c(\cdot,u,\xi)+\beta g(\Phi(\cdot,u,\xi))\big),\quad g\in\mathbb B. \tag{3.14}$$
We have that if $|Z|\le\kappa$, $P$-almost surely for some $\kappa\in\mathbb R$, then $|\mathcal R_P(Z)|\le\kappa$. This implies that if the cost function is bounded, then for any bounded $g:\mathcal X\to\mathbb R$ the function $\mathcal T(g)$ is bounded. That is, the operator $\mathcal T$ maps the space of bounded functions into the space of bounded functions.

The operator $\mathcal T$ has the following properties. It is monotone, i.e., if $g,g'\in\mathbb B$ and $g(\cdot)\ge g'(\cdot)$, then $\mathcal T(g)(\cdot)\ge\mathcal T(g')(\cdot)$. This follows from the monotonicity property of $\mathcal R_P$. Also, from the translation equivariance property of $\mathcal R_P$ follows the constant shift property: for any $g\in\mathbb B$ and $c\in\mathbb R$, $\mathcal T(g+c)=\mathcal T(g)+\beta c$. It follows that $\mathcal T$ is a contraction mapping, i.e.,
$$\|\mathcal T(g)-\mathcal T(g')\|_\infty\le\beta\|g-g'\|_\infty,\quad g,g'\in\mathbb B,$$
and hence by the Banach fixed point theorem it has a unique fixed point, denoted by $V$. Equation (3.13) corresponds to the nested formulation of the respective infinite horizon risk averse problem.

Remark 3.1. To ensure that (3.13) is well-defined in the first place, we proceed as follows. Suppose that the set $\Xi$ is compact and consider the space $C(\Xi)$ of continuous functions $\phi:\Xi\to\mathbb R$ equipped with the sup-norm. The dual $C(\Xi)^*$ of that space is the space of finite signed (Borel) measures on $\Xi$ (Riesz representation). Since $\Xi$ is compact, the weak* topology of $C(\Xi)^*$ is metrizable. Let $\mathfrak M$ be weakly* closed (and hence weakly* compact by the Banach-Alaoglu theorem) and $\mathcal U$ be compact. Suppose $\mathcal R_P(Z)$ is continuous in $P$ in the weak* topology and continuous in $Z$ in the sup-norm; then for any continuous $g(\cdot)$, the objective of the optimization problem on the right hand side of (3.14) is continuous in $(u,x,P)$. Since $\mathfrak M$ and $\mathcal U$ are compact, it follows that $\mathcal T(g)(\cdot)$ is continuous. Combining this with $\mathcal T$ being contractive, it follows that the value function $V$ is continuous and (3.13) is well-defined.
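The contraction argument above translates directly into Picard (value) iteration on the operator (3.14). A minimal sketch on a finite toy model follows; here $\mathfrak M$ is a finite ambiguity set over a finite noise set and $\mathcal R_P$ is taken to be the expectation $\mathbb E_P$ (which satisfies monotonicity and translation equivariance). All model data in the test are assumptions for illustration.

```python
# Robust Bellman operator (3.14) on a finite state space, and its fixed point
# computed by Picard iteration (valid since T is a beta-contraction).

def bellman(g, states, controls, noises, ambiguity, cost, dyn, beta):
    """T(g)(x) = min_u max_{P in M} E_P[ c(x,u,xi) + beta * g(Phi(x,u,xi)) ]."""
    return {x: min(max(sum(p * (cost(x, u, xi) + beta * g[dyn(x, u, xi)])
                           for p, xi in zip(P, noises))
                       for P in ambiguity)
                   for u in controls)
            for x in states}

def solve(states, controls, noises, ambiguity, cost, dyn, beta, tol=1e-10):
    """Iterate g <- T(g) from g = 0 until successive iterates are tol-close."""
    g = {x: 0.0 for x in states}
    while True:
        g_new = bellman(g, states, controls, noises, ambiguity, cost, dyn, beta)
        if max(abs(g_new[x] - g[x]) for x in states) < tol:
            return g_new
        g = g_new
```

The returned function satisfies the fixed point equation $V=\mathcal T(V)$ up to the tolerance, and one can check numerically that $\|\mathcal T(g)-\mathcal T(g')\|_\infty\le\beta\|g-g'\|_\infty$.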
3.1.1 Sample complexity of infinite horizon risk averse SOC

We focus in this section on sample complexity in the infinite-horizon setting; the sample complexity for the finite-horizon setting can be derived similarly. Specifically we deal with the Value-at-Risk measure, and the analysis can be viewed as an extension of Section 2.1.

Let $\xi^1,\dots,\xi^N$ be iid samples of $\xi\sim P$, and $P_N=N^{-1}\sum_{i=1}^N\delta_{\xi^i}$ be the corresponding empirical measure. The empirical counterpart of the Bellman equation is obtained by replacing the probability measure $P$ in (3.12) with the empirical measure $P_N$. Consider the counterpart of the Bellman operator (3.14) with respect to $P_N$, that is
$$\mathcal T_N(g)(\cdot):=\inf_{u\in\mathcal U}\mathcal R_{P_N}\big(c(\cdot,u,\xi)+\beta g(\Phi(\cdot,u,\xi))\big).$$
Assuming that the cost function is bounded, the operator $\mathcal T_N$ has a unique fixed point, denoted $V_N$, which is the solution of the empirical counterpart of the Bellman equation. Our interest in this section is to determine the sample size $N$ such that $\|V-V_N\|_\infty<\varepsilon$ with high probability for a given $\varepsilon>0$. We make the following analogue of Assumption 2.1.

Assumption 3.1. (i) The sets $\mathcal X$ and $\mathcal U$ are compact. (ii) There is a positive constant $L$ such that
$$\big|c(x,u,\xi)-c(x',u',\xi)\big|\le L\|(x,u)-(x',u')\|,\qquad \big\|\Phi(x,u,\xi)-\Phi(x',u',\xi)\big\|\le L\|(x,u)-(x',u')\|,$$
for all $x,x'\in\mathcal X$, $u,u'\in\mathcal U$, $\xi\in\Xi$.

Unfortunately the above assumption does not guarantee Lipschitz continuity of the value function $V(\cdot)$. Nevertheless we can proceed as follows. Let $\tilde V$ denote an approximation of $V$ such that $\tilde V$ is $\tilde L$-Lipschitz continuous. Then, using $V_N=\mathcal T_N V_N$ and the contraction property of $\mathcal T_N$,
$$\|\tilde V-V_N\|_\infty=\|\mathcal T\tilde V+\tilde V-\mathcal T\tilde V-\mathcal T_N V_N\|_\infty\le\|\mathcal T\tilde V-\mathcal T_N V_N\|_\infty+\|\tilde V-\mathcal T\tilde V\|_\infty$$
$$=\|\mathcal T\tilde V-\mathcal T_N\tilde V+\mathcal T_N\tilde V-\mathcal T_N V_N\|_\infty+\|\tilde V-\mathcal T\tilde V\|_\infty\le\|(\mathcal T-\mathcal T_N)\tilde V\|_\infty+\beta\|\tilde V-V_N\|_\infty+\|\tilde V-\mathcal T\tilde V\|_\infty,$$
and consequently
$$\|\tilde V-V_N\|_\infty\le\frac{1}{1-\beta}\Big(\|(\mathcal T-\mathcal T_N)\tilde V\|_\infty+\|\tilde V-\mathcal T\tilde V\|_\infty\Big). \tag{3.15}$$
This immediately implies
$$\|V-V_N\|_\infty\le\|V-\tilde V\|_\infty+\|\tilde V-V_N\|_\infty\le\|V-\tilde V\|_\infty+\frac{1}{1-\beta}\Big(\|(\mathcal T-\mathcal T_N)\tilde V\|_\infty+\|\tilde V-V+\mathcal T V-\mathcal T\tilde V\|_\infty\Big)$$
$$\le\frac{1}{1-\beta}\Big(\|(\mathcal T-\mathcal T_N)\tilde V\|_\infty+2\|V-\tilde V\|_\infty\Big), \tag{3.16}$$
where the last inequality uses $V=\mathcal T V$ and $\|\tilde V-V\|_\infty+\|\mathcal T V-\mathcal T\tilde V\|_\infty\le(1+\beta)\|V-\tilde V\|_\infty$.

Lemma 3.1. Suppose that Assumption 3.1 holds and $\mathcal R_P(Z)$ is $\mathcal L_R$-Lipschitz continuous in $Z$ w.r.t. the $\|\cdot\|_\infty$ norm, i.e.,
$$|\mathcal R_P(Z)-\mathcal R_P(Z')|\le\mathcal L_R\|Z-Z'\|_\infty. \tag{3.17}$$
Then for any $\varepsilon>0$, there exists $\tilde V$ such that $\|\tilde V-V\|_\infty\le\varepsilon$, and $\tilde V$ is $\tilde L$-Lipschitz continuous with
$$\tilde L:=\frac{\mathcal L_R L\big[(\beta\mathcal L_R L)^k-1\big]}{\beta\mathcal L_R L-1}\quad\text{and}\quad k:=\Big\lceil\tfrac{1}{1-\beta}\log\tfrac{1}{\varepsilon}\Big\rceil, \tag{3.18}$$
where we assume (without loss of generality) that $\beta\mathcal L_R L>1$.

Proof. Consider a function $g:\mathcal X\to\mathbb R$ that is $L_g$-Lipschitz continuous, and define $Q_g(x,u,\xi)=c(x,u,\xi)+\beta g(\Phi(x,u,\xi))$. Then we have $\mathcal T(g)(x)=\inf_{u\in\mathcal U}\mathcal R_P(Q_g(x,u,\cdot))$. Furthermore,
$$\big|\mathcal R_P(Q_g(x,u,\cdot))-\mathcal R_P(Q_g(x',u',\cdot))\big|\le\mathcal L_R\|Q_g(x,u,\cdot)-Q_g(x',u',\cdot)\|_\infty\le\mathcal L_R L(1+\beta L_g)\|(x,u)-(x',u')\|,$$
from which we conclude that $\mathcal T(g)(\cdot)$ is also $\mathcal L_R L(1+\beta L_g)$-Lipschitz continuous. Now consider $V^{(0)}\equiv 0$, and define $V^{(k)}=\mathcal T(V^{(k-1)})$. Denote by $L_k$ the Lipschitz constant of $V^{(k)}$; then $L_0=0$ and
$$L_k\le\mathcal L_R L+\beta\mathcal L_R L\,L_{k-1}\le\begin{cases}\dfrac{\mathcal L_R L\big[(\beta\mathcal L_R L)^k-1\big]}{\beta\mathcal L_R L-1}, & \beta\mathcal L_R L\ne 1,\\ k\,\mathcal L_R L, & \beta\mathcal L_R L=1,\end{cases}$$
for any $k\ge 1$. Since $\mathcal T(\cdot)$ is a contractive operator in the $\|\cdot\|_\infty$-norm, it suffices to take $k=\lceil\frac{1}{1-\beta}\log(\frac{1}{\varepsilon})\rceil$ to ensure $\|V-V^{(k)}\|_\infty\le\varepsilon$. Taking $\tilde V=V^{(k)}$ concludes the proof.

In the remainder of this section we deal with $\mathcal R_P(\cdot):=\mathrm{V@R}^P_\alpha(\cdot)$. From the first inequality in (2.21), it is clear that in the case of $\mathcal R_P(\cdot):=\mathrm{V@R}^P_\alpha(\cdot)$ we can take $\mathcal L_R=1$ in (3.17). Consider now the case where the probability measure $P$ has finite support.
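The empirical operator $\mathcal T_N$ with $\mathcal R_P=\mathrm{V@R}_\alpha$ can be sketched concretely. The code below is a toy illustration under assumptions: finite state/control sets, and $\mathrm{V@R}_\alpha$ read as the $(1-\alpha)$-quantile (the convention suggested by (3.19); the precise definition is fixed in Section 2).

```python
import math, random

# Empirical Bellman operator T_N with R_P = V@R_alpha: the measure P in (3.12)
# is replaced by the empirical measure P_N of iid draws xi^1, ..., xi^N.

def var_alpha(values, alpha):
    """Empirical V@R_alpha: the (1-alpha)-quantile of the sample."""
    v = sorted(values)
    idx = max(math.ceil((1 - alpha) * len(v)) - 1, 0)
    return v[idx]

def bellman_N(g, states, controls, samples, cost, dyn, beta, alpha):
    """T_N(g)(x) = min_u V@R_alpha^{P_N}[ c(x,u,xi) + beta * g(Phi(x,u,xi)) ]."""
    return {x: min(var_alpha([cost(x, u, xi) + beta * g[dyn(x, u, xi)]
                              for xi in samples], alpha)
                   for u in controls)
            for x in states}

def empirical_fixed_point(states, controls, samples, cost, dyn, beta, alpha,
                          tol=1e-10):
    """V_N as the fixed point of T_N; since V@R is monotone and translation
    equivariant, T_N is again a beta-contraction and the iteration converges."""
    g = {x: 0.0 for x in states}
    while True:
        g_new = bellman_N(g, states, controls, samples, cost, dyn, beta, alpha)
        if max(abs(g_new[x] - g[x]) for x in states) < tol:
            return g_new
        g = g_new
```

The fixed point $V_N$ then satisfies the empirical Bellman equation up to the tolerance, and the section's question is how large $N$ must be for $\|V-V_N\|_\infty<\varepsilon$ with high probability.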
For $\alpha\in(0,1)$, define
$$l_\alpha:=\sup\{P(A):P(A)\le 1-\alpha,\;A\in\mathcal F\},\qquad r_\alpha:=\inf\{P(A):P(A)\ge 1-\alpha,\;A\in\mathcal F\},$$
$$\kappa_\alpha:=\min\{(1-\alpha)-l_\alpha,\;r_\alpha-(1-\alpha)\}. \tag{3.19}$$
Since $P$ has finite support, the set of $\alpha$ where $\kappa_\alpha=0$ is finite. The condition $\kappa_\alpha>0$ and the following theorem can be viewed as counterparts of condition (2.10) and Proposition 2.1. The estimate (3.20) below suggests that in the present case the number of samples needed in the risk averse setting grows only linearly (up to a logarithmic factor) in $(1-\beta)^{-1}$ as the discount factor $\beta$ approaches 1.

Theorem 3.1. Let $\mathcal R_P=\mathrm{V@R}^P_\alpha$ in (3.12), for some $\alpha\in(0,1)$. Suppose Assumption 3.1 holds and the probability measure $P$ has finite support. Suppose further that $\kappa_\alpha>0$, with $\kappa_\alpha$ defined in (3.19). Let $D$ denote the diameter of the set $\mathcal X\times\mathcal U$. Then for any $\delta\in(0,1)$, $\varepsilon>0$, and
$$N\ge\tfrac{1}{2}\kappa_\alpha^{-2}\Big[(n+m)\log\Big(\tfrac{8DL^2}{\varepsilon(1-\beta)(\beta L-1)}\Big)+\tfrac{n+m}{1-\beta}\log(\beta L)\log\Big(\tfrac{4}{\varepsilon(1-\beta)}\Big)+\log\Big(\tfrac{2}{\delta}\Big)\Big], \tag{3.20}$$
it follows that $\|V-V_N\|_\infty\le\varepsilon$ with probability at least $1-\delta$.

Proof. To begin, let us recall Lemma 3.1, and let $\tilde V$ denote the approximation of $V$ such that
$$\|V-\tilde V\|_\infty\le(1-\beta)\varepsilon/4 \tag{3.21}$$
with Lipschitz constant $\tilde L=\frac{L[(\beta L)^k-1]}{\beta L-1}$, $k=\lceil\frac{1}{1-\beta}\log\frac{4}{\varepsilon(1-\beta)}\rceil$. For notational convenience, define
$$\tilde Z_{x,u}(\xi):=c(x,u,\xi)+\beta\tilde V(\Phi(x,u,\xi))$$
and $\mathcal T(\tilde V)(x,u):=\mathrm{V@R}^P_\alpha\big(\tilde Z_{x,u}\big)$, and $\mathcal T_N(\tilde V)(x,u):=\mathrm{V@R}^{P_N}_\alpha\big(\tilde Z_{x,u}\big)$. Note that from Assumption 3.1
and the $\tilde L$-Lipschitz continuity of $\tilde V$, we have
$$\|\tilde Z_{x,u}-\tilde Z_{x',u'}\|_\infty\le\mathcal L\|(x,u)-(x',u')\|,\qquad \mathcal L=L(1+\beta\tilde L). \tag{3.22}$$
For fixed $(x,u)\in\mathcal X\times\mathcal U$, from Proposition 2.1 and the definition of $\kappa_\alpha$, for $N\ge\frac{1}{2}\kappa_\alpha^{-2}\log(2/\delta)$, with probability $1-\delta$ we have $\mathcal T(\tilde V)(x,u)=\mathcal T_N(\tilde V)(x,u)$.

Consider an $\eta$-net of $\mathcal X\times\mathcal U$, denoted by $\mathcal N_\eta$. That is, for any $(x,u)\in\mathcal X\times\mathcal U$, there is $(x_\eta,u_\eta)\in\mathcal N_\eta$ such that $\|(x,u)-(x_\eta,u_\eta)\|\le\eta$. For $(x,u)\in\mathcal X\times\mathcal U$ we can write
$$\big|\mathrm{V@R}^P_\alpha(\tilde Z_{x,u})-\mathrm{V@R}^{P_N}_\alpha(\tilde Z_{x,u})\big|\le\big|\mathrm{V@R}^P_\alpha(\tilde Z_{x_\eta,u_\eta})-\mathrm{V@R}^{P_N}_\alpha(\tilde Z_{x_\eta,u_\eta})\big|$$
$$+\big|\mathrm{V@R}^P_\alpha(\tilde Z_{x,u})-\mathrm{V@R}^P_\alpha(\tilde Z_{x_\eta,u_\eta})\big|+\big|\mathrm{V@R}^{P_N}_\alpha(\tilde Z_{x,u})-\mathrm{V@R}^{P_N}_\alpha(\tilde Z_{x_\eta,u_\eta})\big|. \tag{3.23}$$
Hence for $N\ge\frac{1}{2}\kappa_\alpha^{-2}\log(2|\mathcal N_\eta|/\delta)$ and $\eta\le\varepsilon(1-\beta)/(4\mathcal L)$, with probability at least $1-\delta$, for any $(x,u)\in\mathcal X\times\mathcal U$ we have
$$\big|\mathcal T(\tilde V)(x,u)-\mathcal T_N(\tilde V)(x,u)\big|\le\big|\mathrm{V@R}^P_\alpha(\tilde Z_{x,u})-\mathrm{V@R}^P_\alpha(\tilde Z_{x_\eta,u_\eta})\big|+\big|\mathrm{V@R}^{P_N}_\alpha(\tilde Z_{x,u})-\mathrm{V@R}^{P_N}_\alpha(\tilde Z_{x_\eta,u_\eta})\big|\overset{(a)}{\le}2\mathcal L\eta\le(1-\beta)\varepsilon/2,$$
where (a) follows from $\mathrm{V@R}^P_\alpha(\cdot)$ being 1-Lipschitz continuous w.r.t. the $\|\cdot\|_\infty$-norm and (3.22). Since $\mathcal T(\tilde V)(x)=\inf_{u\in\mathcal U}\mathcal T(\tilde V)(x,u)$ and $\mathcal T_N(\tilde V)(x)=\inf_{u\in\mathcal U}\mathcal T_N(\tilde V)(x,u)$, this immediately implies $\|(\mathcal T-\mathcal T_N)\tilde V\|_\infty\le(1-\beta)\varepsilon/2$. Combining this together with (3.21) and (3.16) yields $\|V-V_N\|_\infty\le\varepsilon$. The rest of the proof pertains to bounding $|\mathcal N_\eta|$ and follows a similar argument as in Theorem 2.1.

Remark 3.2. For continuous $P$, following similar arguments as in Theorems 2.1 and 3.1, a sample complexity of $O\big((1-\beta)^{-2}c^{-2}\varepsilon^{-2}\big)$ could be established if $\beta\in(0,L^{-1})$, and there are constants $b>0$ and $c>0$ such that for all $(x,u)\in\mathcal X\times\mathcal U$,
$$F_{x,u}(z')-F_{x,u}(z)\ge c(z'-z)\quad\text{for all }z,z'\in[\nu_{x,u}-b,\nu_{x,u}+b]\text{ such that }z'\ge z, \tag{3.24}$$
where $F_{x,u}$ is the cdf of $Z_{x,u}(\xi):=c(x,u,\xi)+V(\Phi(x,u,\xi))$ and $\nu_{x,u}:=\mathrm{V@R}^P_\alpha(Z_{x,u})$. That is, the sample size required to attain a given relative error of the empirical (SAA) solution is not sensitive to the discount factor, even if the discount factor is very close to one. This is in line with a similar observation made in [18], where it was based on different arguments. It should be noted, however, that condition (3.24) depends on the unknown value function $V(\cdot)$ and is in general difficult to verify.
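In simple instances both margin conditions of this subsection can be checked numerically: $\kappa_\alpha>0$ of (3.19) by enumerating events of a small finite support, and the cdf growth condition (3.24) on a grid when the cdf is known explicitly. A sketch (all model data below are assumptions for illustration):

```python
from itertools import combinations

def kappa_alpha(probs, alpha):
    """l_alpha, r_alpha, kappa_alpha of (3.19) for P with finite support, by
    brute force over all events (subsets of the support; fine for toy sizes)."""
    target = 1 - alpha
    event_probs = {round(sum(c), 12)
                   for r in range(len(probs) + 1)
                   for c in combinations(probs, r)}
    l = max(q for q in event_probs if q <= target)   # l_alpha
    r = min(q for q in event_probs if q >= target)   # r_alpha
    return min(target - l, r - target)

def check_324(F, nu, b, c, n_grid=50):
    """Grid check of F(z') - F(z) >= c*(z' - z) on [nu - b, nu + b]."""
    grid = [nu - b + i * (2 * b) / n_grid for i in range(n_grid + 1)]
    return all(F(zp) - F(z) >= c * (zp - z) - 1e-12
               for z in grid for zp in grid if zp >= z)
```

For example, for atoms with probabilities (0.2, 0.3, 0.5) one gets $\kappa_{0.6}=0.1>0$, while $\kappa_{0.5}=0$ since an event of probability exactly $1-\alpha$ exists; and for $Z_{x,u}$ uniform on $[0,1]$ condition (3.24) holds with $c=1$ around any interior quantile.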
In particular, it remains open to establish the existence of a density function⁴ for $Z_{x,u}$.

⁴When $\beta L<1$, one can show that $V$ is Lipschitz continuous, and hence $Z_{x,u}(\xi)$ is Lipschitz continuous in $\xi$. Suppose $P$ is absolutely continuous w.r.t. the Lebesgue measure, and the first-order stationary points of $Z_{x,u}(\cdot)$ have Lebesgue measure zero; then from the coarea formula [5], it follows that $Z_{x,u}^{-1}(A)$ is a $P$-null set for any $A\subset\mathbb R$ of Lebesgue measure zero. In this case, the density function of $Z_{x,u}$ exists due to the Radon-Nikodym theorem.

3.2 Randomized policies

Consider the (robust) rectangular setting and suppose that the decision maker can choose controls at random. That is, for $t=1,\dots,T$, let $\mathfrak S_t$ be the set of (Borel) probability measures on $\mathcal U_t$, and let $(u_t,\xi_t)\sim Q_t\times P_t$ be random with $Q_t\in\mathfrak S_t$ and $P_t\in\mathfrak M_t$. Consider the following extension of the dynamic equations (3.9): $\widetilde V_{T+1}(x_{T+1})=c_{T+1}(x_{T+1})$, and for $t=T,\dots,1$,
$$\widetilde V_t(x_t)=\inf_{Q_t\in\mathfrak S_t}\sup_{P_t\in\mathfrak M_t}\mathcal R_{Q_t\times P_t}\Big(c_t(x_t,u_t,\xi_t)+\widetilde V_{t+1}\big(\Phi_t(x_t,u_t,\xi_t)\big)\Big), \tag{3.25}$$
where $\mathcal R_{Q_t\times P_t}$ is a risk functional on a linear space of random variables $Z:\mathcal U_t\times\Xi_t\to\mathbb R$. Specifically
we assume that
$$\mathcal R_{Q_t\times P_t}\big(Z_t(u_t,\xi_t)\big):=\mathbb E_{Q_t}\big[\mathcal R_{P_t}\big(Z_t(u_t,\xi_t)\big)\big], \tag{3.26}$$
where $u_t\sim Q_t$ and $\mathcal R_{P_t}$ is a risk functional with respect to $\xi_t\sim P_t$. In particular, if $\mathcal R_{P_t}=\mathbb E_{P_t}$, equation (3.25) becomes
$$\widetilde V_t(x_t)=\inf_{Q_t\in\mathfrak S_t}\sup_{P_t\in\mathfrak M_t}\mathbb E_{Q_t\times P_t}\Big(c_t(x_t,u_t,\xi_t)+\widetilde V_{t+1}\big(\Phi_t(x_t,u_t,\xi_t)\big)\Big). \tag{3.27}$$
We say that there exists a non-randomized optimal policy if for $t=1,\dots,T$, the minimum on the right hand side of (3.25) is attained at a Dirac measure $Q_t=\delta_{u_t}$. That is, existence of a non-randomized policy means that the dynamic equations (3.25) are equivalent to the dynamic equations (3.9).

The dual of the min-max problem on the right hand side of (3.25) is obtained by interchanging the 'inf' and 'sup' operators. That is, the value functions of the dual problem are defined as $\widetilde W_{T+1}(x_{T+1})=c_{T+1}(x_{T+1})$, and for $t=T,\dots,1$,
$$\widetilde W_t(x_t)=\sup_{P_t\in\mathfrak M_t}\inf_{Q_t\in\mathfrak S_t}\mathbb E_{Q_t}\Big[\mathcal R_{P_t}\big(c_t(x_t,u_t,\xi_t)+\widetilde W_{t+1}(\Phi_t(x_t,u_t,\xi_t))\big)\Big], \tag{3.28}$$
the composition $\mathbb E_{Q_t}[\mathcal R_{P_t}(\cdot)]$ being the functional $\mathcal R_{Q_t\times P_t}$ of (3.26). The function $\psi(x_t,u_t):=\mathcal R_{P_t}\big(c_t(x_t,u_t,\xi_t)+\widetilde W_{t+1}(\Phi_t(x_t,u_t,\xi_t))\big)$ inside the brackets on the right hand side of (3.28) is a function of $x_t$ and $u_t$, and consequently the expectation $\mathbb E_{Q_t}[\psi(x_t,u_t)]$ is minimized over $Q_t\in\mathfrak S_t$. Let us observe that for every $x_t$ it suffices to perform this minimization over Dirac measures $Q_t=\delta_{u_t}$, $u_t\in\mathcal U_t$. That is, $\widetilde W_t(\cdot)=W_t(\cdot)$, where $W_t$ is defined by the equation
$$W_t(x_t)=\sup_{P_t\in\mathfrak M_t}\inf_{u_t\in\mathcal U_t}\mathcal R_{P_t}\Big(c_t(x_t,u_t,\xi_t)+W_{t+1}\big(\Phi_t(x_t,u_t,\xi_t)\big)\Big). \tag{3.29}$$
By the standard theory of min-max problems we have that $V_t(\cdot)\ge W_t(\cdot)$. It is said that there is no duality gap between the dual problems (3.9) and (3.29) if $V_t(\cdot)=W_t(\cdot)$. For $x_t\in\mathcal X_t$, it is said that $(u^*_t,P^*_t)$ is a saddle point of the min-max problem (3.9) if
$$u^*_t\in\arg\min_{u_t\in\mathcal U_t}\mathcal R_{P^*_t}\Big(c_t(x_t,u_t,\xi_t)+V_{t+1}\big(\Phi_t(x_t,u_t,\xi_t)\big)\Big), \tag{3.30}$$
$$P^*_t\in\arg\max_{P_t\in\mathfrak M_t}\mathcal R_{P_t}\Big(c_t(x_t,u^*_t,\xi_t)+V_{t+1}\big(\Phi_t(x_t,u^*_t,\xi_t)\big)\Big). \tag{3.31}$$
Existence of a saddle point is a sufficient condition for no duality gap between the min-max problems (3.9) and (3.29).
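A toy numeric illustration (matching-pennies data, all assumed) of why randomized controls matter here: take $\mathcal R_P=\mathbb E_P$, two pure controls, and the convex ambiguity set $\mathfrak M=\{pP_1+(1-p)P_2:p\in[0,1]\}$. Then $\mathbb E_Q[\mathcal R_P(\cdot)]$ is bilinear in $(Q,P)$, the randomized value $\widetilde V$ equals the dual value $\widetilde W$, while the non-randomized value $V=\min_u\max_P$ is strictly larger (no pure saddle point).

```python
# psi[u][j] = R_{P_j}( c(x,u,xi) + V_{t+1}(...) ) for the two extreme measures
# P_1, P_2; matching-pennies payoffs, chosen so no pure saddle point exists.
psi = [[1.0, 0.0],
       [0.0, 1.0]]

grid = [i / 100 for i in range(101)]      # discretized mixing weights

def f(q, p):
    """E_Q[ R_{p*P1 + (1-p)*P2}(...) ] for Q = (q, 1-q) over the two controls."""
    return sum(w * (p * psi[u][0] + (1 - p) * psi[u][1])
               for u, w in enumerate((q, 1 - q)))

eV = min(max(f(q, p) for p in grid) for q in grid)   # randomized primal (3.25)
fW = max(min(f(q, p) for q in grid) for p in grid)   # dual problem (3.28)
V = min(max(p * psi[u][0] + (1 - p) * psi[u][1] for p in grid)
        for u in (0, 1))                             # non-randomized (3.9)
```

Here `eV` = `fW` = 1/2 < `V` = 1: randomization closes the duality gap, and consistently with the discussion around (3.30)-(3.31) no pure-strategy saddle point, hence no non-randomized optimal control, exists.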
Conversely, given optimal solutions $u^*_t$ and $P^*_t$ of the respective problems (3.9) and (3.29), the no duality gap property implies that $(u^*_t,P^*_t)$ is a saddle point.

In order to verify that there is no duality gap between the min-max problems in (3.25) and (3.28), i.e., that $\widetilde V_t(\cdot)=\widetilde W_t(\cdot)$, we need the following condition.

Assumption 3.2. For $t=1,\dots,T$, the set $\mathfrak M_t$ is convex, the set $\mathcal U_t$ is compact, and for every $P\in\mathfrak M_t$ the functional $\mathcal R_P$ is concave in $P$, i.e., if $P,P'\in\mathfrak M_t$ and $\tau\in[0,1]$, then
$$\mathcal R_{\tau P+(1-\tau)P'}(\cdot)\ge\tau\mathcal R_P(\cdot)+(1-\tau)\mathcal R_{P'}(\cdot).$$

Suppose that the above assumption holds. Then the functional $\mathcal R_{Q_t\times P_t}$, defined in (3.26), is concave in $P_t$, and is convex (linear) in $Q_t$. By Sion's theorem [22], there is no duality gap between the min-max problems (3.25) and (3.28).

There is a large class of risk functionals which are concave with respect to the probability distribution. For example, consider risk functionals of the form
$$\mathcal R_P(Z)=\inf_{\theta\in\Theta}\mathbb E_P[\Psi(Z,\theta)],$$
where $\Theta$ is a subset of a finite dimensional vector space and $\Psi:\mathbb R\times\Theta\to\mathbb R$ is a real valued function. The $\mathrm{AV@R}_\alpha$ risk measure is of that form. Since $\mathcal R_P(Z)$ is given by the infimum of functionals linear in $P$, it is
concave in $P$. Another example of a concave in $P$ functional is the Value-at-Risk measure $\mathcal R_P=\mathrm{V@R}^P_\alpha$.

The following result about existence of non-randomized optimal policies is an extension of [20, Theorem 4.1].

Proposition 3.1. Suppose that for $t=1,\dots,T$, there exists a saddle point $(u^*_t,P^*_t)$ satisfying conditions (3.30) and (3.31). Then there exists a non-randomized optimal policy given by $\pi^*_t(x_t)=u^*_t$. Conversely, suppose that Assumption 3.2 holds and there exists a non-randomized optimal policy. Then the saddle point exists.

Proof. Suppose that there exists a saddle point $(u^*_t,P^*_t)$. Then $V_t(\cdot)=W_t(\cdot)$. Moreover, we have that $V_t(\cdot)\ge\widetilde V_t(\cdot)$, and by duality $\widetilde V_t(\cdot)\ge\widetilde W_t(\cdot)$. Since $W_t(\cdot)=\widetilde W_t(\cdot)$, it follows that $V_t(\cdot)=\widetilde V_t(\cdot)$. That is, the value functions with respect to randomized and non-randomized policies are the same. This implies existence of an optimal policy, which is given by $\pi^*_t(x_t)=u^*_t$.

Conversely, suppose there exists a non-randomized optimal policy. This implies that $V_t(\cdot)=\widetilde V_t(\cdot)$. Moreover, suppose that Assumption 3.2 holds. Then by Sion's theorem $\widetilde V_t(\cdot)=\widetilde W_t(\cdot)$. Since $\widetilde W_t(\cdot)=W_t(\cdot)$, it follows that $V_t(\cdot)=W_t(\cdot)$. Because both respective problems have optimal solutions, existence of the saddle point follows.

In particular, suppose that $\mathfrak M_t=\{P^*_t\}$ is a singleton and the minimization problem in (3.30) has an optimal solution $u^*_t$. Then clearly the saddle point exists, and consequently there exists a non-randomized optimal policy.

Suppose that for $t=1,\dots,T$, the functionals $\mathcal R_{P_t}$ are convex, i.e., satisfy the axiom of convexity. Suppose further that the sets $\mathcal U_t$ and $\mathfrak M_t$ are convex, the cost functions $c_t(x_t,u_t,\xi_t)$ are convex in $(x_t,u_t)$, and the mappings $\Phi_t(x_t,u_t,\xi_t)=A_t(\xi_t)x_t+B_t(\xi_t)u_t+b_t(\xi_t)$ are affine. Then the value functions $V_t(x_t)$ are convex and the min-max problem on the right hand side of (3.9) becomes convex-concave. Suppose further that the sets $\mathcal U_t$ are compact.
Then by Sion's theorem there is no duality gap between problems (3.9) and (3.29), and thus the saddle point exists provided the primal (3.9) and dual (3.29) problems have optimal solutions. Consequently, in that case existence of non-randomized policies follows (cf. [20, Section 4]). For non-convex functionals $\mathcal R_{P_t}$, verification of existence of saddle points can be more involved.

4 Risk averse Markov Decision Processes

Consider a (finite horizon) Markov Decision Process (MDP) (e.g., [11])
$$\min_{\pi\in\Pi}\mathbb E^\pi\Big[\sum_{t=1}^T c_t(s_t,a_t,s_{t+1})+c_{T+1}(s_{T+1})\Big]. \tag{4.1}$$
Here $\mathcal S_t$ is the state space, $\mathcal A_t$ is the action set and $c_t:\mathcal S_t\times\mathcal A_t\times\mathcal S_{t+1}\to\mathbb R$ is the cost function at stage $t=1,\dots,T$. We assume that the state spaces $\mathcal S_t$ and action sets $\mathcal A_t$ are Polish spaces, and denote by $\mathcal F_t$ the (Borel) sigma algebra of $\mathcal S_t$. The dynamic process is defined by transition kernels (transition probabilities) $P_t(s_{t+1}|s_t,a_t)$ of moving from state $s_t\in\mathcal S_t$ to the next state $s_{t+1}\in\mathcal S_{t+1}$ given action $a_t\in\mathcal A_t$. Unless stated otherwise we consider non-randomized policies, that is, $\Pi=\{\pi_1,\dots,\pi_T\}$, where $\pi_t:\mathcal S_t\to\mathcal A_t$ is a (measurable) mapping from the state space to the action space. The value $s_1$ is a deterministic initial condition. We also use the notation $P^{s_t,a_t}_t(\cdot)=P_t(\cdot|s_t,a_t)$ for the transition kernel. Assuming that the data process $\xi_t$ is stagewise independent, the SOC can be formulated in