$1$, $\|X-x_0\|\le h>0$. Call this infimum $\delta>0$; $\|v^TU\|$ is bounded, so $\delta$ is finite. Define:
$$\lambda_n \equiv v_n^T\, E_{P^{(n)}}\!\left[U\!\Big(\tfrac{X-x_{0,n}}{h_n}\Big)U\!\Big(\tfrac{X-x_{0,n}}{h_n}\Big)^{T}\,\Big|\,D=1,\ X\in A_n\right] v_n
= E_{P^{(n)}}\!\left[\Big(v_n^T U\!\Big(\tfrac{X-x_{0,n}}{h_n}\Big)\Big)^2\,\Big|\,D=1,\ X\in A_n\right]$$
$$\ge \varepsilon^2\, P^{(n)}\!\left(\Big|v_n^T U\!\Big(\tfrac{X-x_{0,n}}{h_n}\Big)\Big|\ge \varepsilon\,\Big|\,D=1,\ X\in A_n\right)\ \ge\ \varepsilon^2\delta\ >\ 0.$$
Therefore, for all $\bar h\le h'$, $\lambda(\bar h)\ge\varepsilon^2\delta$. Therefore $\lambda^*\ge\varepsilon^2\delta>0$.

Lemma 23 (Uniform functional approximation across multiple gridpoints). Suppose the conditions of Theorem 2 hold. Let $f_{k,n}(v):[-1,1]^d\to\mathbb R$, $k=1,\dots,k_n$, be a set of $k_n$ functions that are uniformly bounded. Let $\mathcal S(m,\underline h)$ be the set of sets of $m$ tuples $(x,h)$ with $x\in[-1,1]^d$ and $h\in[\underline h_n,h']$, where $h'$ comes from Lemma 22's notion of $h$ small enough, and where for all $(x_1,h_1),(x_2,h_2)\in S\in\mathcal S(m,\underline h)$ with $x_2\neq x_1$ there are no points $x\in[-1,1]^d$ with both $\|x-x_1\|\le h_1$ and $\|x-x_2\|\le h_2$. Write $\tilde c=2^{\frac{2(1+\gamma_0)}{1-\gamma_0}}\,C^{\frac{2}{1-\gamma_0}}\,\big(\sup_v|f(v)|\big)^{-2}$. Suppose (i) $\underline h_n^{\frac{d\gamma_0}{\gamma_0-1}}\gg n^{-1}$; (ii) $m_n$ is a sequence tending to infinity such that for all fixed $a>0$, $m_n\ll\exp\big(a\,n\,\underline h_n^{\frac{d\gamma_0}{\gamma_0-1}}\big)$; and (iii)
$$k_n\left(1-\big(1-e^{-1}\big)^{m_n\big/\exp\big(\log(1/2)+\tilde c\,n\,\underline h_n^{d\gamma_0/(\gamma_0-1)}\big)}\right)\to 0.$$
Then there is a sequence $\varepsilon_n\to 0^+$ such that for all $k=1,\dots,k_n$:
$$\sup_{P\in\mathcal P,\,S\in\mathcal S(m_n,\underline h_n)} P\!\left(\max_j\left|\frac{\sum_i D_iK\big(\tfrac{\|X_i-x_j\|}{h_{n,j}}\big)f_{k,n}\big(\tfrac{X_i-x_j}{h_{n,j}}\big)-E\big[DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)f_{k,n}\big(\tfrac{X-x_j}{h_{n,j}}\big)\big]}{n\,E_{P^{(n)}}\big[DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)\big]}\right|\ge\varepsilon_n\right)=o(1).$$

Proof of Lemma 23. This proof will get hairy. For simplicity, I proceed assuming $K$ is the uniform kernel scaled downwards by one-half; the proof would require far more care if the kernel took on nonzero values on an unbounded set. Also for simplicity, assume that $e(X)$ is bounded above by one-half: the difficulty here comes from small propensity scores, and is already substantial. Let the upper bound of $|f|$ be $b\ge 0$. Define
$$R_n\equiv \tilde c\,n\,\underline h_n^{\frac{d\gamma_0}{\gamma_0-1}}\quad\text{and}\quad V_{n,j,k}\equiv\frac{\sum_i D_iK\big(\tfrac{\|X_i-x_j\|}{h_{n,j}}\big)\Big[f_{k,n}\big(\tfrac{X_i-x_j}{h_{n,j}}\big)-E\big[f_{k,n}\big(\tfrac{X-x_j}{h_{n,j}}\big)\,\big|\,DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)=1\big]\Big]}{n\,E_{P^{(n)}}\big[DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)\big]}.$$
By construction, there is a sequence $\varepsilon^+_n\to 0$ such that $k_n\Big(1-\big(1-1/e\big)^{m_n/\exp(\log(1/2)+R_n\varepsilon_n^2)}\Big)\to 0$.
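The proof below controls gridpoint counts with the multiplicative Chernoff bound for binomial random variables, $P(\mathrm{Bin}(n,p)\ge 2np)\le\exp(-np/3)$. That inequality can be sanity-checked against the exact binomial tail; a minimal stdlib-only sketch with illustrative values (not the paper's quantities):

```python
import math
from math import comb

def binom_tail(n, p, k):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k, n + 1))

# Multiplicative Chernoff bound at twice the mean:
# P(X >= 2*mu) <= exp(-mu/3), where mu = n*p.
n, p = 200, 0.1
mu = n * p
exact = binom_tail(n, p, math.ceil(2 * mu))
bound = math.exp(-mu / 3)
print(exact, bound)
assert exact <= bound
```

The exact tail at twice the mean sits far below the Chernoff bound, which is all the union-bound step in the proof requires.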
Note that the events $\{D_iK(\|X_i-x_j\|/h_{n,j})=1\}$ are mutually disjoint for a given $i$ across $j$; therefore write $j(i)$ for the $j$ such that $D_iK(\|X_i-x_{j(i)}\|/h_{n,j})=1$ if feasible, and write $j(i)=0$ if no such $j$ exists. Let $P^{(n)}$ be a sequence of distributions in $\mathcal P$, let $S_n$ be a sequence of sets in $\mathcal S(m_n,\underline h_n)$, and let $\{(x_{n,j},h_{n,j})\}$ be a sequence of associated points and bandwidths. By Lemma 19, for all $j,n$,
$$E_{P^{(n)}}\!\left[DK\!\Big(\tfrac{\|X-x_j\|}{h_{n,j}}\Big)\right]\ \ge\ C^{\frac{1}{1-\gamma_0}}\,2^{\frac{1+\gamma_0}{1-\gamma_0}}\,\underline h_n^{\frac{d\gamma_0}{\gamma_0-1}}.$$
Note that I use a laxer bound for $\underline h_n$ because the polynomial order here is found elsewhere, and the polynomial order in Lemma 19 is not found elsewhere; thus, the argument will continue to hold in the presence of certain typos. Consider the event $A$ of the assignments $\{i,j(i)\}$. Note that for any given $j$, by the Chernoff bound for binomial random variables,
$$P^{(n)}\!\left(\frac{\sum_i D_iK\big(\tfrac{\|X_i-x_j\|}{h_{n,j}}\big)}{n\,E\big[DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)\big]}\ge 2\right)\le\exp\!\left(-\frac{n\,E\big[DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)\big]}{3}\right)\le\exp\!\left(-\frac{\tilde c\,b^2}{3}\,n\,\underline h_n^{\frac{d\gamma_0}{\gamma_0-1}}\right),$$
so that
$$P^{(n)}\!\left(\max_j\frac{\sum_i D_iK\big(\tfrac{\|X_i-x_j\|}{h_{n,j}}\big)}{n\,E\big[DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)\big]}\le 2\right)\ge 1-\sum_j P^{(n)}\!\left(\frac{\sum_i D_iK\big(\tfrac{\|X_i-x_j\|}{h_{n,j}}\big)}{n\,E\big[DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)\big]}\ge 2\right)\ge 1-O\!\left(m_n\exp\!\left(-\frac{\tilde c\,b^2}{3}\,n\,\underline h_n^{\frac{d\gamma_0}{\gamma_0-1}}\right)\right)=1-o(1).$$
I therefore proceed under the high-probability event that $A$ is such
https://arxiv.org/abs/2504.13273v2
that $\max_j \sum_i D_iK\big(\tfrac{\|X_i-x_j\|}{h_{n,j}}\big)\big/\big(n\,E\big[DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)\big]\big)\le 2$. I now apply the Hoeffding inequality to the $\sum_i D_iK\big(\tfrac{\|X_i-x_j\|}{h_{n,j}}\big)\le 2n\,E\big[DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)\big]$ summands of $V_{n,j,k}$, conditional on $A$: for all $k,n,j$,
$$P^{(n)}(|V_{n,j,k}|\ge\varepsilon_n)\le 2\exp\!\left(-\frac{2\big(n\,E_{P^{(n)}}\big[DK\big(\tfrac{\|X-x_j\|}{h_{n,j}}\big)\big]\big)^2\varepsilon_n^2}{\sum_i D_iK\big(\tfrac{\|X_i-x_j\|}{h_{n,j}}\big)\,b^2}\right)\le 2\exp\!\big(-R_n\varepsilon_n^2\big).$$
Then:
$$P^{(n)}\!\left(\max_{k=1}^{k_n}\max_{j=1}^{m_n}|V_{n,j,k}|\ge\varepsilon_n\,\Big|\,A\right)\le\sum_{k=1}^{k_n}\left(1-\prod_{j=1}^{m_n}\big(1-P^{(n)}(|V_{n,j,k}|\ge\varepsilon_n)\big)\right)\le k_n\left(1-\prod_{j=1}^{m_n}\big(1-2\exp(-R_n\varepsilon_n^2)\big)\right)$$
$$=k_n\left(1-\left(1-\frac{1}{\exp(\log(1/2)+R_n\varepsilon_n^2)}\right)^{\exp(\log(1/2)+R_n\varepsilon_n^2)\cdot\frac{m_n}{\exp(\log(1/2)+R_n\varepsilon_n^2)}}\right)=k_n\left(1-\big(1-1/e-o(1)\big)^{m_n/\exp(\log(1/2)+R_n\varepsilon_n^2)}\right)=o(1).$$
Therefore $P\big(\max_{k=1}^{k_n}\max_{j=1}^{m_n}|V_{n,j,k}|\ge\varepsilon_n\big)=o(1)$. $\square$

Lemma 24 (Nondegeneracy of local polynomial eigenvalues at estimated bandwidths over gridpoints). Suppose the conditions of Theorem 2 hold. Fix some $k>0$ and let $S_n$ be a set of $g_n$ points $x_{n,j}\in[-1,1]^d$, with $g_n\le k'n$ for some fixed $k'>0$. For each $x_{n,j}$, let $h_{n,j}=\sup\{h:\sum_i D_i\mathbf 1\{\|X_i-x_{n,j}\|\le h\}\le kh^{-2\beta_\mu}\}$ and let $h^*_{n,j}$ solve $nE[D\mathbf 1\{\|X-x_{n,j}\|\le h\}]=kh^{-2\beta_\mu}$. Then there is a $\delta_n\to 0^+$ such that
$$\limsup_{n\to\infty}\sup_{P\in\mathcal P}P\!\left(\max_j\left|\frac{\lambda_{\min}\!\left(\frac{\sum_i D_iK\big(\tfrac{\|X_i-x_j\|}{h_{n,j}}\big)U\big(\tfrac{X_i-x_j}{h_{n,j}}\big)U\big(\tfrac{X_i-x_j}{h_{n,j}}\big)^T}{\sum_i D_iK\big(\tfrac{\|X_i-x_j\|}{h_{n,j}}\big)}\right)}{\lambda_{\min}\!\left(\frac{E_P\big[DK\big(\tfrac{\|X-x_j\|}{h^*_{n,j}}\big)U\big(\tfrac{X-x_j}{h^*_{n,j}}\big)U\big(\tfrac{X-x_j}{h^*_{n,j}}\big)^T\big]}{E_P\big[DK\big(\tfrac{\|X-x_j\|}{h^*_{n,j}}\big)\big]}\right)}-1\right|\ge\delta_n\right)=o(1).$$

Proof of Lemma 24. For convenience, I proceed assuming $K$ is the uniform kernel, scaled downwards by one-half. Let $P^{(n)}$ be a sequence of distributions in $\mathcal P$. Apply Lemma 23 to the sequence $\underline h_n=n^{-\frac{1}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}/\log(n)$ and $m_n=3g_n\ll\exp\big(a\,n\,\underline h_n^{d\gamma_0/(\gamma_0-1)}\big)$ for all $a>0$, to yield a sequence $\varepsilon^{(a)}_n\to 0^+$. Let $h_{n,j}$ solve $\min_h\big|N_n(h\mid x_{n,j})-kh^{-2\beta_\mu}\big|$, and let $h^*_{n,j}$ solve $\min_h\big|N^*_n(h\mid x_{n,j})-kh^{-2\beta_\mu}\big|$, where $N^*_n(h\mid x_{n,j})=nE[e(X)\mathbf 1\{\|X-x_{n,j}\|\le h\}]$. Further, let $[\underline h_{n,j},\overline h_{n,j}]$ be the convex hull of the set of $h$ that solve $N^*_n(h\mid x_{n,j})(1+\varepsilon^{(a)}_n)^{-1}=kh^{-2\beta_\mu}$ or $N^*_n(h\mid x_{n,j})(1+\varepsilon^{(a)}_n)=kh^{-2\beta_\mu}$. By Lemma 23, with probability tending to one, $h_{n,j}\in[\underline h_{n,j},\overline h_{n,j}]$.
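Hoeffding's inequality, as applied above to bounded kernel-weighted summands, states that for $N$ independent mean-zero summands each bounded in $[-b,b]$, $P(|S|\ge t)\le 2\exp(-t^2/(2Nb^2))$. A minimal simulation check, with illustrative values rather than the paper's quantities:

```python
import math
import random

random.seed(0)

def hoeffding_bound(N, b, t):
    """P(|S - E[S]| >= t) <= 2*exp(-t^2/(2*N*b^2)) for N independent
    summands each bounded in [-b, b] (range 2b)."""
    return 2.0 * math.exp(-t * t / (2.0 * N * b * b))

N, b, t, trials = 400, 1.0, 60.0, 2000
exceed = sum(
    1 for _ in range(trials)
    if abs(sum(random.uniform(-b, b) for _ in range(N))) >= t
)
freq = exceed / trials
print(freq, hoeffding_bound(N, b, t))  # empirical frequency vs. the bound
```

The empirical exceedance frequency is far below the bound, as the exponential tail predicts; the proof only needs the bound itself, uniformly over gridpoints.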
Write:
$$A_{n,j}(h,h')=\frac{E_{P^{(n)}}\!\big[DK\big(\tfrac{\|X-x_{n,j}\|}{h}\big)U\big(\tfrac{X-x_{n,j}}{h}\big)U\big(\tfrac{X-x_{n,j}}{h}\big)^T\big]}{E_{P^{(n)}}\!\big[DK\big(\tfrac{\|X-x_{n,j}\|}{h'}\big)\big]},\qquad
B_{n,j}(h,h')=\frac{\sum_{i=1}^n D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h}\big)U\big(\tfrac{X_i-x_{n,j}}{h}\big)U\big(\tfrac{X_i-x_{n,j}}{h}\big)^T}{\sum_{i=1}^n D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h'}\big)}.$$
The claim is that there is a $\delta_n\to 0^+$ such that
$$P^{(n)}\!\left(\left|\frac{\lambda_{\min}(B_{n,j}(h_{n,j},h_{n,j}))}{\lambda_{\min}(A_{n,j}(h^*_{n,j},h^*_{n,j}))}-1\right|\ge\delta_n\right)=o(1).$$
$A^{-1}_{n,j}$ is symmetric, so that $\|A^{-1}_{n,j}\|^2_{(op)}$ is equal to the squared largest eigenvalue of $A^{-1}_{n,j}$, where $\|\cdot\|_{(op)}$ is the operator norm. By Lemma 22, $\|A^{-1}_{n,j}\|^2_{(op)}=\lambda_{\min}(A_{n,j})^{-2}$ is bounded above. Take $m_n=3g_n\le 3k'n$, $k_n=(n+1)$, and $\underline h_n=n^{-\frac{1}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}\log(n)^{-\frac{\gamma_0-1}{d\gamma_0}}$, so that the first condition of Lemma 23 holds by Lemma 22. The second condition holds because $m_n=O(n)$. The third condition holds by L'Hôpital's rule applied to
$$n\left(1-\big(1-e^{-1}\big)^{an/\exp(b+c\,n^d/\log(n))}\right)\to 0\quad\text{for any fixed } a,b,c,d>0.$$
Thus, I may apply Lemma 23 to the $k_n=n+1$ bounded functions $f(v)=U_k(v)U_\ell(v)$ and $\mathbf 1\big\{\underline h_{n,j}/h^*_{n,j}\le v\le\overline h_{n,j}/h^*_{n,j}\big\}$ at the bandwidths $\underline h_n$. Let the associated $\varepsilon_n$ terms be $\varepsilon^{(b)}_n$. Then:
$$\max_j\big|A_{n,j}(h^*_{n,j},h^*_{n,j})_{k,\ell}-B_{n,j}(h_{n,j},h_{n,j})_{k,\ell}\big|\le\max_j\big|A_{n,j}(h^*_{n,j},h^*_{n,j})_{k,\ell}-B_{n,j}(h^*_{n,j},h^*_{n,j})_{k,\ell}\big|$$
$$+\max_j\big|B_{n,j}(h^*_{n,j},h^*_{n,j})_{k,\ell}-B_{n,j}(h^*_{n,j},h_{n,j})_{k,\ell}\big|+\max_j\big|B_{n,j}(h^*_{n,j},h_{n,j})_{k,\ell}-B_{n,j}(h_{n,j},h_{n,j})_{k,\ell}\big|$$
$$\le o_{P^{(n)}}(1)+O_{P^{(n)}}\Big(\max_j\big|A_{n,j}(h^*_{n,j},h^*_{n,j})_{k,\ell}\big|\Big)\times
max j  1−PDiK ∥Xi−xn,j∥ h∗ n,j PDiK ∥Xi−xn,j∥ hn,j  +O max jPDi1 ∥Xi−xn,j∥ ∈[h¯n,j,¯hn,j] PDi1 ∥Xi−xn,j∥ ≤h¯n,j ! =oP(n)(1) + OP(n) ε(a) n +o 1 +ε(a) n2 (1 +ε(b) n) =oP(n)(1). Therefore max j∥An,j−Bn,j∥(op)=o(1). Then, by well-known arguments (Horn and Johnson, 2013, p. 381): max j|λmin(Bn,j)−λmin(A,nj)|= max i λmax(B−1 n,j)−λmax(A−1 ,nj) ≤max j∥B−1 n,j−A−1 n,j∥(op) ≤max j∥A−1 n,j∥2 (op)∥An,j−Bn,j∥(op) 1− ∥A−1 n,j(An,j−Bn,j)∥(op)= max jOP(n)(1)oP(n)(1) 1−oP(n)(1)=oP(n)(1), so that max j λmin(Bn,j) λmin(An,j)−1 = max j λmin(Bn,j)−λmin(An,j) λmin(An,j) ≤max j|λmin(Bn,j)−λmin(A,nj)| minjλmin(An,j)=oP(n)(1), so that the full claim holds. Lemma 25 (Minimal small-propensity points) .Suppose the conditions of Theorem 2 hold, and there are mnpoints xj∈[−1,1]dsuch that infj̸=j′∥xj−xj′∥ ≥handmax jnE[D1{∥X−xj∥ ≤h/d}]≤(h/d)−2βµ. Then there is a universal constant B≥2that depends on C, γ 0, βµ, dsuch that mn≤B nh2βµ+dγ0 γ0−11−γ0 . Proof of Lemma 25. Note that the set of points with 1 {∥X−xj∥ ≤h/d}are mutually exclusive across x. As a result, for all such j: E e(X)| ∥X−xj∥ ≤h d} ≤nE[e(X)1{∥X−xj∥ ≤h/d}] nP(∥X−xj∥ ≤h/d)≤dd+2βµ nhd+2βµ mnh dd /2≤X jP ∥X−xj∥ ≤h d 2≤P e(X)≤2dd+2βµ nhd+2βµ ≤C2dd+2βµ nhd+2βµγ0−1 mn≤C2γ0d2βµ(γ0−1)+dγ0 nh2βµ+dγ0 γ0−11−γ0 . Finally, if the constant is below 2, without loss of generality set B= 2. 69 Lemma 26 (KL divergence) .For any given L′>0andx0∈[−1,1]d, construct distributions Pn,mform= 1,2as follows. Draw X∼U([−1,1]d). Draw D|X∼Bern C−(γ0−1)Pn,m(∥X−x0∥ ≤ ∥ x−x0∥)1/(γ0−1) . Finally, draw Y|X, D∼ N(Dµn,m(X), σ2 min)where µn,1(X) = 0 and µn,2(X) =L′ exp−4 3n−βµ 2βµ+dγ0 γ0−1exp −1 1− 2∥X−x0∥ n−1 2βµ+dγ0 γ0−1!2 1  ∥X−x0∥ ≤n−1 2βµ+dγ0 γ0−1 2  . Finally, define P={Pn,1}n=1∞,m=1,2. Then (i) if there exists a P0satisfying Assumptions 6 and 7, then there exists a fixed L′such that Psatisfies Assumptions 6 and 7. (ii) there is an α > 0finite such that KL(Pn,1, Pn,2)≤α. Proof of Lemma 26. First, I show (i) that such an L′exists. 
Because there exists a $P\in\mathcal P_0$ in this set and every $P_{n,m}$ has the smallest possible range of $Y-\mu(X)\mid X,D=1$, it must be that for every $P_{n,m}\in\mathcal P$, Assumption 1(a) and Assumption 1(c) hold. Also, with $V(x)=P_{n,m}(\|X-x_0\|\le\|x-x_0\|)$, which is distributed $\mathrm{Unif}([0,1])$, for all $P_{n,m}$:
$$P_{n,m}(e(X)\le\pi)=P_{n,m}\big((V(X)/C)^{\frac{1}{\gamma_0-1}}\le\pi\big)=P_{n,m}\big(V(X)\le C\pi^{\gamma_0-1}\big)=C\pi^{\gamma_0-1}.$$
Therefore, Assumption 1(d) holds. Also,
$$P_{n,m}\big(e(X)\le\tfrac\pi2\big)=C\big(\tfrac\pi2\big)^{\gamma_0-1}=2^{1-\gamma_0}P_{n,m}(e(X)\le\pi),$$
so that Assumption 4(i) holds with $\rho=2^{1-\gamma_0}$. It is also clear that $E_{P_{n,m}}[Y\mid X,D=0]=E_{P_{n,1}}[Y\mid X,D=1]\in\Sigma(\beta_\mu,L)$, since these functions are a constant zero. It remains to show that there is an $L'>0$ such that for all $n,m$, $\mathrm{Var}_{P_{n,2}}(\mu_{n,2}(X))\le M$ (Assumption 1(b), completing Assumption 6) and $\mu_{n,2}(X)\in\Sigma(\beta_\mu,L)$ (Assumption 7). For the variance upper bound:
$$\mathrm{Var}_{P_{n,2}}(\mu_{n,2}(X))\le(L')^2\,n^{-\frac{2\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}\exp\!\big(\tfrac83-2\big)\le(L')^2\exp(2/3),$$
so that it suffices to take $L'\le\sqrt{M\exp(-2/3)}$. For Hölder continuity, write
$$\mu_{n,2}(X)=\frac{L'}{a}\,n^{-\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}\,g_a\!\left(\frac{\|X-x_0\|}{n^{-\frac{1}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}}\right).$$
If $a>0$ is small enough, then $g_a$ is infinitely differentiable and in $\Sigma(\beta_\mu,1/2)$. Thus, by standard arguments (Tsybakov, 2009), if $L'$ is small enough, $\mu_{n,2}(X)\in\Sigma(\beta_\mu,L)$. Thus, there is an $L'>0$ such that $\mathcal P$ satisfies Assumptions 6 and 7.

Second, I show the main claim (ii). It is useful to write $h_n=n^{-\frac{1}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}$.
Then:
$$KL(P_{n,1},P_{n,2})=n\,E_{P_{n,1}}\!\left[\log\frac{dP_{n,1}}{dP_{n,2}}\right]\le n\,P_{n,1}\!\left(\|X-x_0\|\le\frac{h_n}{4}\right)P_{n,1}\!\left(D=1\,\Big|\,\|X-x_0\|\le\frac{h_n}{4}\right)\left(L'\exp\big(\tfrac43\big)\,h_n^{\beta_\mu}\,\exp\!\Big(-\frac{1}{1-(1/2)^2}\Big)\right)^2$$
$$=n\,k\,2^{-2d-1}(L')^2\Big(\frac kC\Big)^{\frac{1}{\gamma_0-1}}h_n^{\,d+\frac{d}{\gamma_0-1}+2\beta_\mu}=k\,2^{-2d-1}(L')^2\Big(\frac kC\Big)^{\frac{1}{\gamma_0-1}}=:\alpha,$$
where the last equality uses $h_n^{2\beta_\mu+d\gamma_0/(\gamma_0-1)}=n^{-1}$, so $\alpha$ is finite as claimed. $\square$

Proof of Proposition 6. Let $\bar\delta\ge 1$ solve:
$$\left(\frac{\bar\delta}{\log\bar\delta}\right)^{\frac{2\beta_\mu+d\gamma_0/(\gamma_0-1)}{2\beta_\mu}}=\bar\delta\cdot\frac{\bar\delta}{\log\bar\delta}\cdot 4^{-1}-1.$$
This is well defined, because as $\delta\to 1^+$ the left-hand side tends to infinity faster than the right-hand side; but as $\delta$ tends to infinity, the left-hand side (of order $\delta^{\frac{2\beta_\mu+d\gamma_0/(\gamma_0-1)}{2\beta_\mu}}$) grows more slowly than the right-hand side (of order $\delta\cdot\delta/\log(\delta)$). I will assume that $(\bar\delta/\log\bar\delta)^{\frac{1}{2\beta_\mu}}\ge 4^{1/(2\beta_\mu)}$; if not, increase $\bar\delta$ to have $(\bar\delta/\log\bar\delta)^{\frac{1}{2\beta_\mu}}=4^{1/(2\beta_\mu)}$. In either case,
$$\left(\frac{\bar\delta}{\log\bar\delta}\right)^{\frac{2\beta_\mu+d\gamma_0/(\gamma_0-1)}{2\beta_\mu}}\le\bar\delta\cdot\frac{\bar\delta}{\log\bar\delta}\cdot 4^{-1}-1.$$
Also define
$$\bar r=\left(\frac{\bar\delta}{\log\bar\delta}\right)^{\frac{2\beta_\mu+d\gamma_0/(\gamma_0-1)}{2\beta_\mu}}\quad\text{and}\quad\pi=\left(\frac{\bar\delta}{\log\bar\delta}\right)^{-\frac{1}{2\beta_\mu}}.$$
Note that by construction, $\pi\le 1/4^{1/(2\beta_\mu)}$.

First, I claim that if $\delta\ge\bar\delta$, then $h^{(k+1)}_n\le\bar r^{-\frac{1}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}h^{(k)}_n$ and $m^{(k+1)}_n\ge\bar r\,m^{(k)}_n$ for all $k=0,1,\dots$; I show this inductively. It is convenient to write $m^{(0)}_n=\delta$ and to let $h^{(0)}_n$ solve $\delta=\exp\big(\delta\,(h^{(1)}_n/h^{(0)}_n)^{2\beta_\mu}\big)$, so that for all $k\ge 0$,
$$m^{(k+1)}_n=m^{(k)}_n\left(\frac{h^{(k)}_n}{2^{1/\beta_\mu}h^{(k+1)}_n}\right)^{2\beta_\mu}=\frac{m^{(0)}_n\,(h^{(0)}_n)^{2\beta_\mu}}{2^{2(k+1)}\,(h^{(k+1)}_n)^{2\beta_\mu}},\qquad
h^{(k+1)}_n=h^{(0)}_n\left(\frac{m^{(k)}_n}{m^{(0)}_n}\right)^{-\frac{1}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}.$$
Then, for $k=0$:
$$\delta=\exp\left(\delta\Big(\frac{h^{(1)}_n}{h^{(0)}_n}\Big)^{2\beta_\mu}\right)\ \Longrightarrow\ \frac{h^{(1)}_n}{h^{(0)}_n}=\Big(\frac{\log\delta}{\delta}\Big)^{\frac{1}{2\beta_\mu}}\le\bar r^{-\frac{1}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}},$$
$$\frac{m^{(1)}_n}{m^{(0)}_n}=m^{(0)}_n\cdot\frac14\Big(\frac{h^{(0)}_n}{h^{(1)}_n}\Big)^{2\beta_\mu}-1\ \ge\ \bar\delta\,\bar r^{\frac{2\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}4^{-1}-1=\bar\delta\cdot\frac{\bar\delta}{\log\bar\delta}\cdot 4^{-1}-1\ \ge\ \bar r.$$
I now proceed assuming the claim holds for all $k'=0,\dots,k-1$. Then:
$$\frac{h^{(k+1)}_n}{h^{(k)}_n}=\left(\frac{m^{(k)}_n}{m^{(k-1)}_n}\right)^{-\frac{1}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}\le\bar r^{-\frac{1}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}},$$
$$\frac{m^{(k+1)}_n}{m^{(k)}_n}=\frac{m^{(0)}_n\,(h^{(0)}_n)^{2\beta_\mu}}{4^{k}\,(h^{(k)}_n)^{2\beta_\mu}}\cdot\frac14\Big(\frac{h^{(k)}_n}{h^{(k+1)}_n}\Big)^{2\beta_\mu}-1\ \ge\ \bar\delta\cdot\frac14\Big(\frac{h^{(0)}_n}{h^{(1)}_n}\Big)^{2\beta_\mu}-1\ \ge\ \bar r.$$
Thus, if $\delta\ge\bar\delta$, $\sum_{k=1}^\infty m^{(k)}_n\ge\sum_{k=1}^\infty\delta=\infty$. $\square$

Proof of Theorem 2. Recall that $\psi_n=n^{-\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}$. There are two main directions to show.

Lower bound on the pointwise rate. Define $x_0=(0,\dots,0)$. Let $\mathcal P$ be as in Lemma 26, with associated distributions $P_{n,m}$ for $m=1,2$.
By Lemma 26, $KL(P_{n,1},P_{n,2})\le\alpha$. Define the seminorm $d(P,Q)=|E_P[Y\mid X=x_0,D=1]-E_Q[Y\mid X=x_0,D=1]|$. By construction,
$$d(P_{n,1},P_{n,2})=\overbrace{L'\exp(1/3)}^{=:2A}\,n^{-\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}},$$
where $L'>0$ is fixed. Therefore $d(P_{n,1},P_{n,2})\ge 2s_n$, where $s_n=A\,n^{-\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}$. Thus, standard arguments (Tsybakov, 2009) show that for any fixed estimator $\hat\mu$ of $E[Y\mid X=x_0,D=1]$ and all $n$ large enough,
$$\sup_{P\in\mathcal P}P\big(|\hat\mu(x_0)-\mu(x_0)|\ge s_n\big)\ge\max_{j=1,2}P_j\big(|\hat\mu(x_0)-\mu_{n,j}(x_0)|\ge s_n\big)\ge\max\!\left(\frac{\exp(-\alpha)}{4},\ \frac{1-\sqrt{\alpha/2}}{2}\right)>0.$$
Thus, for all $t>1$ (see Tsybakov Theorem 2.3),
$$\liminf_{n\to\infty}\inf_{\hat\mu}\sup_{P\in\mathcal P}P\!\left(n^{\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}|\hat\mu(x_0)-\mu(x_0)|\ge t^{\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}\right)\ge\max\!\left(\frac{\exp(-ct)}{4},\ \frac{1-\sqrt{ct/2}}{2}\right),$$
where $c$ is a constant that only depends on the parameters of the problem. Thus, $n^{-\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}$ is a lower bound on the pointwise rate of convergence.

Achievable global rate. This is the more difficult and interesting direction. Let $\bar\delta$ and $\pi$ be taken from Proposition 6. Also let $k^*_n$ be the smallest $k$ for which $\pi^{(k-1)2\beta_\mu}\le 1/\log(n)^2$; note that $k^*_n=O(\log(\log(n)))$. Choose some $\delta\ge\bar\delta$, where $\bar\delta$ is constructed in Proposition 6. Define $h^{(k)}_n$ and $m^{(k)}_n$ as in Proposition 6. I
define the estimator through a series of steps. At a high level, a first step finds the grid width through a minimum level of implied overlap within grid regions. A second step chooses the local polynomial bandwidth for each gridpoint based on the number of treated observations nearby, and then runs the associated regression. A third step interpolates between gridpoints.

Split $[-1,1]^d$ into a first-pass grid of hypercubes of edge length $1/\lfloor n^{\frac{1}{2\beta_\mu+d}}\rfloor$. Let the associated first-pass gridpoints be $\{\tilde x_{n,k}\}$. Define $N_n(h\mid x)=\sum_i D_i\mathbf 1\{\|X_i-x\|\le h\}$. Let $\tilde h_{n,j}=\sup\{h\le 1: N_n(h\mid\tilde x_{n,j})\le 2h^{-2\beta_\mu}\}$, which is well-defined because $N_n(0\mid x)\le n$. Define $\bar h_n=\max_j\tilde h_{n,j}$.

Now construct the true grid using the upper-bound gridpoint distance $\bar h_n$. Split $[-1,1]^d$ into a grid of hypercubes of edge length $1/\lfloor 1/\bar h_n\rfloor$. For simplicity, I proceed assuming $1/\bar h_n$ is an integer, so that the edges are of length $\bar h_n$; the rounding error is second-order. Write $g_n=(2/\bar h_n)^d$ for the number of gridpoints and call the points on the grid $x_{n,1},x_{n,2},\dots,x_{n,g_n}$.

For each gridpoint $x_{n,j}$, choose the bandwidth $h_{n,j}=\sup\{h: N_n(h\mid x_{n,j})\le 2h^{-2\beta_\mu}\}$. By Lemma 23 applied to $O(n)$ points and $f=1$ at the fixed bandwidths $h_{n,0}/d$, with probability tending to one, $h_{n,j}\ge h_{n,0}/d$ for all $j$. Construct the regression estimate $\hat\mu(x_{n,j})$ at gridpoints from a local polynomial regression of $Y$ on $U\big(\tfrac{X-x_{n,j}}{h_{n,j}}\big)$ with weights $DK\big(\tfrac{\|X-x_{n,j}\|}{h_{n,j}}\big)$, using the uniform kernel. (If this regression is degenerate, take $\hat\mu(x_{n,j})=0$.) For all other points $x\in[-1,1]^d$, construct $\hat\mu(x)$ through a linear regression of the gridpoint estimates on $U\big(\tfrac{x_{n,j}-x}{\bar h_n}\big)$ among gridpoints $j$ with $\|x_{n,j}-x\|\le\lceil\beta_\mu\rceil\,\bar h_n$. By inspection, this estimator only requires knowledge of $\beta_\mu$.

It remains to show that this estimator achieves a global consistency rate of at least $\psi_n$. Define $h^{(k(j))}_n$ to be the largest $h^{(k)}_n$, $k=1,\dots,k^*_n$, satisfying $nE[D\mathbf 1\{\|X-x_{n,j}\|\le h/d\}]\le(h/d)^{-2\beta_\mu}$. As a result of this conditioning, $h_{n,j}\le h^{(k(j))}_n$.
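The bandwidth rule $h_{n,j}=\sup\{h:N_n(h\mid x_j)\le 2h^{-2\beta_\mu}\}$ can be computed exactly: $N_n(\cdot\mid x)$ is a nondecreasing step function while $2h^{-2\beta_\mu}$ is decreasing, so the feasible set is an interval $[0,h^*]$ and $h^*$ is found by scanning the sorted distances to treated observations. A minimal sketch in $d=1$ (synthetic data; names and constants are illustrative, not the paper's):

```python
import random

random.seed(1)

def bandwidth(x, X, D, beta_mu, c=2.0):
    """sup{h : N_n(h|x) <= c*h^(-2*beta_mu)} in d = 1, where
    N_n(h|x) = #{i : D_i = 1, |X_i - x| <= h}.  The feasible set is an
    interval [0, h*], since N_n is nondecreasing in h while the target
    c*h^(-2*beta_mu) is decreasing."""
    radii = sorted(abs(xi - x) for xi, di in zip(X, D) if di == 1)
    if not radii:
        return float("inf")
    radii.append(float("inf"))
    t = 0.0
    for k in range(1, len(radii)):
        r_k, r_next = radii[k - 1], radii[k]
        t = (c / k) ** (1.0 / (2.0 * beta_mu))  # where c*h^(-2*beta_mu) = k
        if t <= r_k:
            return r_k   # the constraint binds as the k-th treated point enters
        if t < r_next:
            return t     # it binds strictly between the k-th and (k+1)-th point
    return t

X = [random.uniform(-1.0, 1.0) for _ in range(500)]
D = [1 if random.random() < 0.3 else 0 for _ in range(500)]
h = bandwidth(0.0, X, D, beta_mu=1.0)
print(h)
```

The same scan extends to $d>1$ by replacing $|X_i-x|$ with $\|X_i-x\|$; the paper's rule uses the constant $c=2$ at gridpoints and $c=k$ in Lemma 24.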
Write $k(j)=1$ if no such $k$ exists; this will turn out to be an ignorable event. I first show that there is a $k>0$ such that $\bar h_n\le k\,n^{-1/(2\beta_\mu+d\gamma_0/(\gamma_0-1))}$ almost surely. By Lemma 19, there is a $k'>0$ such that $E[D\mathbf 1\{\|X-x\|\le h\}]\ge k'h^{d\gamma_0/(\gamma_0-1)}$ for all $h\le 1$ and all $x\in[-1,1]^d$. Let $h_{(\mathrm{init})}$ solve $k'n\,h_{(\mathrm{init})}^{d\gamma_0/(\gamma_0-1)}=4h_{(\mathrm{init})}^{-2\beta_\mu}$, so that $h_{(\mathrm{init})}=(k'n/4)^{-1/(2\beta_\mu+d\gamma_0/(\gamma_0-1))}$. Split up $[-1,1]^d$ into a second-step set of hypercubes of edge length $1/\lfloor d/h_{(\mathrm{init})}\rfloor$, with gridpoints $\tilde x^*_{n,i}$. For convenience, assume that $d/h_{(\mathrm{init})}$ is an integer; the difference is a rounding error. Note that the number of second-step gridpoints is $O(n)$. By Lemma 24, with probability tending to one, $N_n(h_{(\mathrm{init})}\mid\tilde x^*_{n,i})\ge 2h_{(\mathrm{init})}^{-2\beta_\mu}$. I proceed under this event. Now fix an $x\in[-1,1]^d$ and consider a bandwidth of $h\ge(1+1/d)h_{(\mathrm{init})}$. The ball of radius $h$ around $x$ includes at least one $\tilde x^*_{n,i}$ (within a distance of $h_{(\mathrm{init})}/d$) and all points within $h_{(\mathrm{init})}$ of that $\tilde x^*_{n,i}$; call such a point $\tilde x^*_{n,i(x)}$. Thus, for such an $h\ge(1+1/d)h_{(\mathrm{init})}$,
$$N_n(h\mid x)\ \ge\ N_n\big(h_{(\mathrm{init})}\mid\tilde x^*_{n,i(x)}\big)\ \ge\ \min_i N_n\big(h_{(\mathrm{init})}\mid\tilde x^*_{n,i}\big)\ \ge\ 2h_{(\mathrm{init})}^{-2\beta_\mu}.$$
Thus, on the high-probability event on which I am proceeding, $\bar h_n\le(1+1/d)h_{(\mathrm{init})}=k\,n^{-1/(2\beta_\mu+d\gamma_0/(\gamma_0-1))}$, where $k=(1+1/d)(4/k')^{1/(2\beta_\mu+d\gamma_0/(\gamma_0-1))}$ for some $k'>0$. I now apply Lemma 19, Lemma 22, Lemma 23, and Lemma 24 to the gridpoints. There are $g_n=O(n)$ gridpoints, so that Lemma
19, Lemma 22, and Lemma 24 apply immediately, so that the eigenvalues at $h_{n,j}$ are nondegenerate. Now consider the tuples $(x_{n,j},h^{(k^*_n)}_n)$ and the $k_n=k^*_n+1$ functions: first, $f_k(v)=\mathbf 1\big\{h^{(k^*_n)}_n v\le h^{(k)}_n\big\}$ for $k=1,\dots,k^*_n$, and $f_{k^*_n+1}(v)=1$. Define $\underline h^*_n=\underline h_n\log(n)^{2(\gamma_0-1)/d}$. By Lemma 19 and construction, $h^{(k^*_n)}_n\ge\underline h^*_n$ for large enough $n$. It remains to show that (i) $(\underline h^*_n)^{\frac{d\gamma_0}{\gamma_0-1}}\gg n^{-1}$; (ii) $g_n\ll\exp\big(\tfrac1{16}n(\underline h^*_n)^{\frac{d\gamma_0}{\gamma_0-1}}\big)$; and (iii)
$$k_n\left(1-\big(1-e^{-1}\big)^{g_n\big/\exp\big(\log(1/2)+\frac1{16}n(\underline h^*_n)^{d\gamma_0/(\gamma_0-1)}\big)}\right)\to 0.$$
(i) and (ii) hold by inspection. (iii) holds by bounding $k_n=O(n\log(\log(n)))$ and then applying L'Hôpital's rule. As a result, I proceed under the high-probability event that $N_n(h^{(k)}_n/d\mid x_{n,j})\ge\frac n2 E[D\mathbf 1\{\|X-x_{n,j}\|\le h^{(k)}_n/d\}]$, so that the fallback assignment $k(j)=1$ is never necessary, and the smallest eigenvalue of the $\sum DKUU^T/\sum DK$ matrices at the gridpoint bandwidths $h_{n,j}$ is at least $\lambda^*/2>0$. These conditions also ensure that there are at least $(h^{(1)}_n/d)^{-2\beta_\mu}=(\bar h_n/d)^{-2\beta_\mu}\to\infty$ treated observations within the bandwidth $h_{n,j}$ of each gridpoint $j$.

Next, I claim that there are at most $m^{(k)}_n$ gridpoints with $h_{n,j}\ge h^{(k)}_n$. Recall from the above that $h_{n,j}\le h^{(k(j))}_n$; thus, the number of gridpoints with $h_{n,j}\ge h^{(k)}_n$ is bounded by the number of $j$ with $h^{(k(j))}_n\ge h^{(k)}_n$. By Lemma 25, this is bounded above by:
$$B\left(n\,(h^{(k)}_n)^{2\beta_\mu+\frac{d\gamma_0}{\gamma_0-1}}\right)^{1-\gamma_0}=B\left(\frac{h^{(k)}_n}{h^{(1)}_n}\right)^{-(\gamma_0-1)\big(2\beta_\mu+\frac{d\gamma_0}{\gamma_0-1}\big)}\le\exp\!\left(2^{-k}\delta\Big(\frac{h^{(k)}_n}{h^{(1)}_n}\Big)^{-2\beta_\mu}\right)=m^{(k)}_n.$$

Next, I bound the largest conditional expected squared gridpoint error from above. Let $Z$ be the data $\{X_i,D_i\}$. The local polynomial estimator is:
$$\hat\mu(x_{n,j})=\mathbf 1^T\underbrace{\left(\frac{\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)U\big(\tfrac{X_i-x_{n,j}}{h_{n,j}}\big)U\big(\tfrac{X_i-x_{n,j}}{h_{n,j}}\big)^T}{\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)}\right)}_{=:\hat\Sigma_n}^{-1}\frac{\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)U\big(\tfrac{X_i-x_{n,j}}{h_{n,j}}\big)Y_i}{\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)}.$$
Also write $\tilde\mu_n(x)$ for the prediction of $E[Y\mid X=x,D=1]$ based on the $\ell$-th order Taylor expansion of $\mu(X)$ at $x_{n,j}$.
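In one dimension with the uniform kernel, the gridpoint regression above is ordinary weighted least squares of $Y$ on the polynomial basis $U(v)=(1,v,\dots,v^{\ell})$ among treated observations inside the bandwidth. A minimal stdlib-only sketch (synthetic data, illustrative names; the paper's multivariate basis and scaling are simplified away):

```python
import random

random.seed(2)

def local_poly(x0, h, X, D, Y, degree=1):
    """Weighted LS of Y on (1, v, ..., v^degree), v = (X - x0)/h,
    keeping treated points with |X - x0| <= h (uniform kernel).
    Returns the intercept, i.e. the estimate of E[Y | X = x0, D = 1]."""
    rows, ys = [], []
    for xi, di, yi in zip(X, D, Y):
        if di == 1 and abs(xi - x0) <= h:
            v = (xi - x0) / h
            rows.append([v ** p for p in range(degree + 1)])
            ys.append(yi)
    # Solve the normal equations (U'U) b = U'y by Gaussian elimination.
    m = degree + 1
    A = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    b = [sum(r[i] * yi for r, yi in zip(rows, ys)) for i in range(m)]
    for c in range(m):                      # forward elimination with pivoting
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
            b[r] -= f * b[c]
    coef = [0.0] * m
    for c in reversed(range(m)):            # back substitution
        coef[c] = (b[c] - sum(A[c][j] * coef[j] for j in range(c + 1, m))) / A[c][c]
    return coef[0]

mu = lambda x: 1.0 + 2.0 * x                # true regression function
X = [random.uniform(-1, 1) for _ in range(400)]
D = [1 if random.random() < 0.5 else 0 for _ in range(400)]
Y = [mu(x) + random.gauss(0, 0.1) for x in X]
print(local_poly(0.0, 0.3, X, D, Y))        # close to mu(0) = 1.0
```

Degenerate fits (too few treated points in the window) are not handled here; the paper's convention is to return $0$ in that case.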
On the above event, the conditional bias of the local polynomial estimator at a given gridpoint is:
$$E[\hat\mu(x_{n,j})\mid Z]-\mu(x_{n,j})=\mathbf 1^T\hat\Sigma_n^{-1}\frac{\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)U\big(\tfrac{X_i-x_{n,j}}{h_{n,j}}\big)\big(\mu(X_i)-\tilde\mu_n(X_i)\big)}{\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)}=O\big(\lambda_{\min}(\hat\Sigma_n)^{-1}\big)\,O\big(h_{n,j}^{\beta_\mu}\big)=O\big(h_{n,j}^{\beta_\mu}\big),$$
with a constant that is independent of $P^{(n)}$ and $x_0,h$. Thus, on the above event, the largest gridpoint conditional bias is bounded. On the other hand, one gridpoint's conditional variance is:
$$\mathrm{Var}(\hat\mu(x_{n,j})\mid Z)=\mathbf 1^T\hat\Sigma_n^{-1}\frac{\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)^2U\big(\tfrac{X_i-x_{n,j}}{h_{n,j}}\big)U\big(\tfrac{X_i-x_{n,j}}{h_{n,j}}\big)^T\,\mathrm{Var}(Y\mid X=X_i,D=1)}{\big(\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)\big)^2}\hat\Sigma_n^{-1}\mathbf 1$$
$$\le O(1)\,\mathbf 1^T\hat\Sigma_n^{-1}\frac{\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)U\big(\tfrac{X_i-x_{n,j}}{h_{n,j}}\big)U\big(\tfrac{X_i-x_{n,j}}{h_{n,j}}\big)^T\,\sigma^2_{\max}}{\big(\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)\big)^2}\hat\Sigma_n^{-1}\mathbf 1\qquad(K\le 1)$$
$$=O\Big(\big(\textstyle\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)\big)^{-1}\Big)\,\mathbf 1^T\hat\Sigma_n^{-1}\mathbf 1\,\sigma^2_{\max}
=O\Big(\big(\textstyle\sum D_iK\big(\tfrac{\|X_i-x_{n,j}\|}{h_{n,j}}\big)\big)^{-1}\Big)\,O\big(\lambda_{\min}(\hat\Sigma_n)^{-1}\big)$$
$$=O\big(n^{-1}h_{n,j}^{-d-d/(\gamma_0-1)}\big)\,O\big(\lambda_{\min}(E_{P^{(n)}}[\hat\Sigma_n])^{-1}\big)\qquad\text{(Lemma 19, Lemma 24)}$$
$$=O\big(n^{-1}\,n^{\frac{d\gamma_0/(\gamma_0-1)}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}\big)\,O(1)\qquad\text{(Lemma 22)}
=O\big(n^{-\frac{2\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}\big),$$
once again with a constant that is independent of $P^{(n)}$ and $x_0,h$.

The argument for characterizing the largest gridpoint error is not quite standard. By standard arguments (Tsybakov, 2009), if there are at most $m^{(k)}_n$ gridpoints with $h_{n,j}\ge h^{(k)}_n$, so that $N_n(h_{n,j}\mid x_{n,j})\ge(h_{n,j})^{-2\beta_\mu}$, and the $h_{n,j}$-balls around the $x_{n,j}$ are by construction non-overlapping subsets of $[-1,1]^d$, then
$$E_P\Big[\max_{j:h_{n,j}\ge h^{(k)}_n}\big(\hat\mu(x_{n,j})-E[\hat\mu(x_{n,j})\mid Z]\big)^2\,\Big|\,Z\Big]=O_P\Big(\log\big(m^{(k)}_n\big)\,(h_{n,j})^{2\beta_\mu}\Big).$$
Then the conditional expected largest
gridpoint squared difference overall is bounded above by
$$E_P\Big[\max_{j=1}^{g_n}\big(\hat\mu(x_{n,j})-E[\hat\mu(x_{n,j})\mid Z]\big)^2\,\Big|\,Z\Big]\le\sum_{k=1}^{k^*_n}E_P\Big[\max_{j:h_{n,j}=h^{(k)}_n}\big(\hat\mu(x_{n,j})-E[\hat\mu(x_{n,j})\mid Z]\big)^2\,\Big|\,Z\Big]$$
$$=O_p\left(\sum_{k=1}^{k^*_n-1}\log\big(m^{(k)}_n\big)\big(h^{(k)}_n\big)^{2\beta_\mu}\right)+O_p\left(\log(g_n)\big(h^{(k^*_n)}_n\big)^{2\beta_\mu}\right)$$
$$=O_p\left(\delta\sum_{k=1}^{k^*_n}2^{-k}\Big(\frac{h^{(k)}_n}{h^{(1)}_n}\Big)^{-2\beta_\mu}\big(h^{(k)}_n\big)^{2\beta_\mu}\right)\ \text{(Proposition 6)}\ +O_p\left(\log(g_n)\,\pi^{(k^*_n-1)2\beta_\mu}\big(h^{(1)}_n\big)^{2\beta_\mu}\right)$$
$$=O_p\left(\sum_{k=1}^{k^*_n}2^{-k}\big(h^{(1)}_n\big)^{2\beta_\mu}\right)+O_p\left(\frac{\log(n)}{\log(n)^2}\big(h^{(1)}_n\big)^{2\beta_\mu}\right)\ (\delta\ \text{fixed})$$
$$=O_p\left(\sum_{k=1}^{\infty}2^{-k}\big(h^{(1)}_n\big)^{2\beta_\mu}\right)+O_p\left(\frac{\log(n)}{\log(n)^2}\big(h^{(1)}_n\big)^{2\beta_\mu}\right)\le O_p\Big(2\big(h^{(1)}_n\big)^{2\beta_\mu}\Big)=O_p\Big(2\,n^{-\frac{2\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}\Big).$$
Thus, returning to more standard arguments, $\max_{j=1}^{g_n}|\hat\mu(x_{n,j})-\mu(x_{n,j})|=O_{P^{(n)}}\big(n^{-\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}\big)$. But the prediction for non-gridpoints has nondegenerate eigenvalues, so by standard arguments,
$$\max_{x\in[-1,1]^d}|\hat\mu(x)-\mu(x)|=O\left(\max_{j=1}^{g_n}|\hat\mu(x_{n,j})-\mu(x_{n,j})|+L\Big(\frac{1}{\lfloor 1/\bar h_n\rfloor}\Big)^{\beta_\mu}\right)=O_{P^{(n)}}(\psi_n).$$
Therefore $\hat\mu$ achieves the global rate $\psi_n$, and with no polylogarithmic penalty.

Completing the proof. An achievable global rate of convergence is also an achievable pointwise rate of convergence; thus, $n^{-\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}$ is the optimal pointwise rate of convergence. A lower bound on the pointwise rate of convergence is also a lower bound on the global rate of convergence; thus, $n^{-\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}}$ is also the optimal global rate of convergence. $\square$

Proof of Corollary 5. Write $\alpha_\mu=\frac{\beta_\mu}{2\beta_\mu+d\gamma_0/(\gamma_0-1)}$ and $\alpha_e=\frac{\beta_e}{2\beta_e+d}$. By standard arguments (Stone, 1982) and Theorem 2, there are cross-fit estimators $\hat\mu(X)$ and $\hat e(X)$ such that $r_{\mu,n}\precsim(n/\log(n))^{-\alpha_\mu}$ and $r_{e,n}\precsim(n/\log(n))^{-\alpha_e}$, neither of which depends on $\gamma_0$. As a result, Assumption 2 holds. Take $b_n=r_{e,n}\log(n)$. Because $\alpha_e>0$, $1\gg b_n\gg r_{e,n}$. By Corollary 1, it only remains to show that the conditions of Assumption 3 hold.

(a) Consistency. It is clear that $r_{\mu,n},r_{e,n}\to 0$.

(b) Product of errors. If $\gamma_0\ge 2$, the claim holds by inspection. If not:
$$r_{\mu,n}r_{e,n}b_n^{(\gamma_0-2)/2}\ll r_{\mu,n}b_n^{\gamma_0/2}\ll r_{\mu,n}r_{e,n}^{\gamma_0/2}\log(n)^{\gamma_0/2}\precsim(n/\log(n))^{-\alpha_\mu-\alpha_e\gamma_0/2}\log(n)^{\gamma_0/2}$$
$$=\log(n)^{\alpha_\mu+\alpha_e\gamma_0/2+\gamma_0/2}\,n^{-\alpha_\mu-\alpha_e\gamma_0/2}=n^{-1/2}\log(n)^{\alpha_\mu+\alpha_e\gamma_0/2+\gamma_0/2}\,n^{1/2-\alpha_\mu-\alpha_e\gamma_0/2}\ll n^{-1/2}.$$
(Equation (2))

(c) Regression error near singularities. By inspection of the previous argument, $r_{\mu,n}b_n^{\gamma_0/2}\ll n^{-1/2}$.

(d) Asymptotically known thresholding. By construction, $r_{e,n}\ll b_n$.

As a result, by Corollary 1, the result for Wald confidence interval validity holds. $\square$

C.7 Choice of Threshold

Proof of Lemma 1. First, I show that there is at least one such solution. Recall the equation:
$$f_n(b)=\frac{b\,\frac1n\sum\mathbf 1\{\hat e(X)\le b\}}{\sqrt{\frac1n\sum\frac{D}{\max\{\hat e,b\}^2}}}+b^2\sqrt{\frac1n\sum\frac{D}{\max\{\hat e,b\}^2}}-n^{-1/2}.$$
When $b=0$, $f_n(b)$ is well-defined: $\sum D/\bar e$ is finite, so $\sup D/\bar e^2$ is finite. Because the first two terms of $f_n(b)$ include multiplication by $b$, they vanish at $b=0$, so $f_n(0)=-n^{-1/2}\le 0$. When $b=1$:
$$f_n(1)=\left(\sqrt{\tfrac1n\textstyle\sum D}\right)^{-1}+\sqrt{\tfrac1n\textstyle\sum D}-n^{-1/2}>\left(\sqrt{\tfrac1n\textstyle\sum D}\right)^{-1}-1\ge 0.$$
The final inequality holds because $\frac1n\sum D\in(0,1]$ by assumption. Define $b^-_n=\sup\{b\le 1: f_n(b)\le 0\}$ and $b^+_n=\inf\{b\ge b^-_n: f_n(b)\ge 0\}$. Because $f_n(0)\le 0\le f_n(1)$, both of these values are well-defined. For every $b$ satisfying $b^-_n<b<b^+_n$, $f_n(b)$ would be a well-defined real number satisfying both $f_n(b)>0$ and $f_n(b)<0$; no such number exists, so it must be that $b^-_n=b^+_n$. Define $b_n$ to be that value.

Next, I show that there is a unique solution. In particular, I show that
$$\hat g_n(b)\equiv\frac{b\,\frac1n\sum\mathbf 1\{\hat e(X)\le b\}}{\sqrt{\frac1n\sum\frac{D}{\max\{\hat e,b\}^2}}}+b^2\sqrt{\frac1n\sum\frac{D}{\max\{\hat e,b\}^2}}$$
is a strictly increasing function of $b$ for $b\ge\min_i\hat e_i$. As $b$ increases, the first term's numerator strictly increases and its denominator weakly decreases; as a result, the first term strictly increases in that range. For $b<\min_i\hat e_i$, the first
arXiv:2504.13322v1 [math.PR] 17 Apr 2025

FOUNDATIONS OF LOCALLY-BALANCED MARKOV PROCESSES

Samuel Livingstone
Department of Statistical Science, University College London, U.K.
samuel.livingstone@ucl.ac.uk

Giorgos Vasdekis
School of Mathematics, Statistics and Physics, Newcastle University, U.K.
giorgos.vasdekis@newcastle.ac.uk

Giacomo Zanella
Department of Decision Sciences and BIDSA, Bocconi University, Italy
giacomo.zanella@unibocconi.it

Abstract. We formally introduce and study locally-balanced Markov jump processes (LBMJPs) defined on a general state space. These continuous-time stochastic processes with a user-specified limiting distribution are designed for sampling in settings involving discrete parameters and/or non-smooth distributions, addressing limitations of other processes such as the overdamped Langevin diffusion. The paper establishes the well-posedness, non-explosivity, and ergodicity of LBMJPs under mild conditions. We further explore regularity properties such as the Feller property and characterise the weak generator of the process. We then derive conditions for exponential ergodicity via spectral gaps and establish comparison theorems for different balancing functions. In particular we show an equivalence between the spectral gaps of Metropolis–Hastings algorithms and LBMJPs with bounded balancing function, but show that LBMJPs can exhibit uniform ergodicity on unbounded state spaces when the balancing function is unbounded, even when the limiting distribution is not sub-Gaussian. We also establish a diffusion limit for an LBMJP in the small jump limit, and discuss applications to Monte Carlo sampling and non-reversible extensions of the processes.
Keywords: Markov Processes, Sampling Algorithms, Mixing Times, Ergodicity, Markov Chain Monte Carlo, Locally-balanced processes

1. Introduction

Continuous-time stochastic processes with a user-specified limiting distribution are the backbone of modern sampling algorithms, making them an indispensable tool for modern scientific research. Celebrated examples include the overdamped Langevin diffusion Roberts and Stramer [2002], Xifara et al. [2014], its kinetic/underdamped counterpart Dalalyan and Riou-Durand [2020], randomized Hamiltonian Monte Carlo Bou-Rabee and Sanz-Serna [2017], and piecewise-deterministic Markov processes such as the zigzag and bouncy particle samplers Bierkens et al. [2019b], Bouchard-Côté et al. [2018]. These processes are now well understood, mixing properties have been meticulously studied (e.g. Roberts and Tweedie [1996b], Mattingly et al. [2002], Bou-Rabee and Sanz-Serna [2017], Bierkens et al. [2019a], Vasdekis and Roberts [2022], Durmus et al. [2021], Deligiannidis et al. [2019]), and the resulting sampling algorithms have been successfully used in many applied domains (e.g. Diggle [2019], Rossky et al. [1978], Duane et al. [1987], Betancourt and Girolami [2015], Hardcastle et al. [2024], Koskela [2022]).

An important component of the above processes is their use of gradients to drive sample paths into regions of high probability under the limiting distribution. The overdamped Langevin diffusion, for example, is governed by the equation $dX_t=-(1/2)\nabla U(X_t)\,dt+dB_t$ and has a limiting distribution with density $\pi(x)\propto e^{-U(x)}$. To simulate the process, the potential $U$ must therefore be sufficiently smooth. This restricts applicability to smooth manifolds (e.g. Betancourt et al. [2017], Livingstone and Girolami [2014]). Many problems requiring sampling algorithms are defined on more general spaces involving discrete parameters and/or non-smooth distributions of interest.
There is, therefore, a need to develop and study processes that can be applied in these settings. Locally-balanced Markov processes are a relatively new class of pure jump-type stochastic processes that have been designed for this purpose. Based on the locally informed algorithm of Zanella
[2020], the continuous-time formulation in finite state spaces was introduced by Power and Goldman [2019]. Many approaches for both discrete and continuous sampling problems have since been motivated by these processes, such as discrete state-space algorithms Zhou et al. [2022], Chang and Zhou [2024], Grathwohl et al. [2021], Zhang et al. [2022], Liang et al. [2023b], van den Boom et al. [2022], Liang et al. [2023a, 2022], Sansone [2022], Sun et al. [2022, 2023], multiple-try algorithms Gagnon et al. [2023], Chang et al. [2022], gradient-based algorithms Livingstone and Zanella [2022], Vogrinc et al. [2022], Mauri and Zanella [2024], Hird et al. [2022], reversible jump algorithms Gagnon [2021], use as a Stein operator Shi et al. [2022] and importance sampling of Markov chain sample paths Zhou and Smith [2022], Li et al. [2023].

In this article we formally introduce and study the locally-balanced Markov process defined on a general state space. In Section 2.1 we define the process and in Section 2.2 we show that it is well-defined, non-explosive and has a user-defined invariant distribution, and that it is ergodic under mild conditions. In Section 2.3 we prove regularity properties, such as the Feller property, while in Section 2.4 we study the generator of the process. In Section 3 we focus on mixing properties. In particular, in Section 3.1 we provide conditions under which the process converges exponentially quickly to equilibrium, and provide comparison theorems to compare the spectral gaps of different locally-balanced processes. In Section 3.2 we show that in some cases the process is uniformly ergodic on unbounded state spaces, even when the limiting distribution is not sub-Gaussian. We note that this is qualitatively different behaviour to the overdamped Langevin diffusion, which is typically not uniformly ergodic when the limiting distribution is not sub-Gaussian. In Section 4 we consider a particular regime in which we show that the process has a diffusion limit.
Finally, in Section 5.1 we discuss using the locally-balanced Markov process for Monte Carlo sampling, and in Section 5.2 we suggest non-reversible extensions of the process, a topic of recent interest among the sampling community (e.g. Diaconis et al. [2000], Andrieu and Livingstone [2021], Faulkner and Livingstone [2024]). The appendices contain some of the longer and more technical proofs.

1.1. Notation. Let $E$ be a Polish space. For any measure $\pi$ defined on the measurable space $(E,\mathcal E)$, if $\pi$ admits a density with respect to some dominating measure, we will write $\pi$ to denote both the measure and the density when there is no ambiguity. We write $\mathrm{supp}(\pi)$ to indicate the support of $\pi$. Furthermore, for a Markov kernel $\gamma:E\times\mathcal E\to[0,1]$, we will sometimes write $[\pi\otimes\gamma]$ to denote the measure on $E\times E$ defined as $[\pi\otimes\gamma](dx,dy):=\pi(dx)\gamma(x,dy)$. We denote the indicator function of the set $A$ as $\mathbb I_A$. We write $L^1(\pi)$, $L^2(\pi)$ and $B(E)$ to denote the set of functions $f:E\to\mathbb R$ that are respectively integrable with respect to $\pi$, square-integrable with respect to $\pi$, and bounded. For $f,g\in L^2(\pi)$ we write $\langle f,g\rangle_\pi:=\int_E f(x)g(x)\,\pi(dx)$. We write $C_b(E)$ and $C_0(E)$ to denote the continuous real functions $f$ with domain $E$ that are bounded or converging to zero at the boundary of $E$, respectively. Finally, in the case where $E=\mathbb R^d$ we write $C^\infty$ for the set of functions $f:\mathbb R^d\to\mathbb R$ that are infinitely differentiable and $C^\infty_c$ for those that are infinitely differentiable with compact support. The composition $f\circ h(x):=f(h(x))$
for any two functions $f,h$ such that the range of $h$ is within the domain of $f$.

2. Definition and basic properties

2.1. Definition of the process. Consider a Borel measure space $(E,\mathcal E)$ where $E$ is Polish. Consider also an underlying probability space $(\Omega,\mathcal F,\mathbb P)$. Let $\pi$ be a measure on $(E,\mathcal E)$ with $\pi(E)=1$, and $\gamma:E\times\mathcal E\to[0,1]$ be a Markov transition kernel on $(E,\mathcal E)$. We denote by $t(x,y)$ the Radon–Nikodym derivative
$$t(x,y):=\frac{\pi(dy)\gamma(y,dx)}{\pi(dx)\gamma(x,dy)}=\frac{[\pi\otimes\gamma](dy,dx)}{[\pi\otimes\gamma](dx,dy)},\qquad(1)$$
and note that a version of $t(x,y)$ is always well-defined and satisfies $0<t(x,y)<\infty$ and $t(y,x)=1/t(x,y)$ for all $(x,y)\in R$ for some $R\subset E\times E$ satisfying $[\pi\otimes\gamma](R)=1$, and $t(x,y):=0$ for $(x,y)\notin R$, by Proposition 1 of Tierney [1998].

Before defining a locally-balanced Markov jump process in Definition 2.2, we must first introduce the notion of a balancing function.

Definition 2.1 (Balancing function). We call $g:\mathbb R_{\ge 0}\to\mathbb R_{\ge 0}$ a balancing function if $g(1)=1$ and $g$ satisfies the balancing property
$$g(t)=t\,g(1/t)\qquad(2)$$
for all $t>0$, with the conventions that $g(0):=0$ and $g(t)>0$ for $t>0$.

Remark 2.1. The requirement $g(1)=1$ is essentially without loss of generality, as a different choice of balancing function $\tilde g=c\cdot g$ such that $\tilde g(1)=c$ for some $c>0$ will induce a process with the same law at time $t$ as the original process at time $ct$, for all $t>0$.

Remark 2.2. There are an infinite number of balancing functions. For example, there is a bijection between the space of balancing functions and the space $\{f:[0,1)\to\mathbb R_{>0}\}$, since one can always set
$$g(t)=\begin{cases}f(t)&\text{if }t\in(0,1)\\ 1&\text{if }t=1\\ t\,f(1/t)&\text{if }t>1.\end{cases}$$
Furthermore, it is easy to see that the space of balancing functions is convex with respect to addition and multiplication, i.e. for any $g_1,g_2$ balancing and $a\in(0,1)$, $a g_1+(1-a)g_2$ and $g_1^a g_2^{1-a}$ are also balancing. Finally, for any even function $h:\mathbb R\to\mathbb R_{>0}$, the function $g_h(t)=\sqrt t\,h(\log t)$ is also balancing (see e.g. Proposition 1 of Vogrinc et al. [2022]).

Remark 2.3.
Popular choices of $g$ include the bounded functions $g(t)=\min(1,t)$ and $g(t)=2t/(1+t)$, as well as the unbounded choices $g(t)=\max(1,t)$ and $g(t)=t^a\,\mathbb I_{[0,1)}(t)+t^{1-a}\,\mathbb I_{[1,\infty)}(t)$ for any $a>0$ (e.g. setting $a=1/2$ gives $g(t)=\sqrt t$).

Given $g$ and the Radon–Nikodym derivative $t(x,y)$ we can define the function
$$\lambda(x):=\int_E g\circ t(x,y)\,\gamma(x,dy)\qquad(3)$$
and the operator
$$\Gamma_g f:=\int_E f(y)\,\frac{g\circ t(x,y)}{\lambda(x)}\,\gamma(x,dy)\qquad(4)$$
for any $f\in L^2(\pi)$. We use $\Gamma_g$ to denote both the operator and the corresponding Markov kernel $\Gamma_g(x,A):=\Gamma_g\mathbb I_A(x)$ for any $x\in E$ and $A\in\mathcal E$ when there is no ambiguity. We will often make the following assumption.

Assumption 2.1. For all $x\in\mathrm{supp}(\pi)$, $\lambda(x)>0$.

With these ingredients we are now able to define the process.

Definition 2.2 (Locally-balanced Markov jump process). Let $x\in\mathrm{supp}(\pi)$, and consider a discrete-time Markov chain $X=(X_n)_{n\in\mathbb N}$, with $X_0=x$ and transition kernel $\Gamma_g$ as in (4). Consider also a sequence of random variables $\tau_n\sim\mathrm{Exp}(\lambda(X_{n-1}))$ for $n\ge 1$, conditionally independent given $X$, with $\lambda$ as in (3). Set $T_0=0$ and
$$T_n:=\sum_{k=1}^n\tau_k$$
for all $n\in\mathbb N$. Given $(X_n)_{n\ge 0}$ and $(T_n)_{n\ge 0}$, the Locally-Balanced Markov Jump Process (LBMJP) with balancing function $g$, base kernel $\gamma$, target distribution $\pi$ and state space $E\cup\partial$ is $(Y_t)_{t\ge 0}$ such that $Y_0=x$ and
$$Y_t=\begin{cases}X_{\sup\{n:T_n\le t\}},&\text{if }t<\sup\{T_n:n\in\mathbb N\}\\ \partial,&\text{otherwise}.\end{cases}$$
We refer to $\partial$ as the graveyard state.

Note that the above construction is problematic if $\lambda$ takes infinite values on a set of positive Lebesgue measure. In Section 2.2, however, we provide conditions on $\lambda$, $g$, $\gamma$, and $\pi$ under which the process
https://arxiv.org/abs/2504.13322v1
is well-posed and non-explosive, meaning that λisπ-almost surely (a.s.) finite and the process will never reach the graveyard state ∂. The jump kernel associated to ( Yt)t≥0is defined for any x∈Eand any A∈EasJ(x,A) := λ(x)Γg(x,A). This can also be written J(x,dy) =g◦t(x,y)γ(x,dy), (5) from which the below proposition is straightforward. Proposition 2.1. The jump kernel Jassociated with a LBMJP (Yt)t≥0satisfies /integraldisplay A×Bπ(dx)J(x,dy) =/integraldisplay A×Bπ(dy)J(y,dx) for anyA,B∈E. Proof.Note first that π(dx)J(x,dy) =g◦t(x,y)[π⊗γ](dx,dy). The key step of the argument is that g◦t(x,y)[π⊗γ](dx,dy) =g◦t(y,x)[π⊗γ](dy,dx), (6) which is simply a rearrangement of (2) when t(x,y) is chosen as in (1). The right-hand side of (6) is equal to π(dy)J(y,dx), which concludes the proof. /square A simple method to simulate the process is given in Algorithm 1. 2.2.Well-posedness, non-explosivity and ergodicity. The below non-decreasing assumption ongwill prove sufficient to establish a number of fundamental pro perties for a locally-balanced Markov jump process. Assumption 2.2. g:R≥0→R≥0is non-decreasing and continuous. FOUNDATIONS OF LOCALLY-BALANCED MARKOV PROCESSES 5 Algorithm 1 Simulation of a locally-balanced Markov jump process Require: x∈E,T∗>0 Seti←0,T0←0,X0←x whileTi< T∗do Drawτi+1∼Exp(λ(Xi)) SetTi+1←Ti+τi+1 SetYt←Xifor allt∈[Ti,Ti+1). DrawXi+1∼Γg(Xi,·) i←i+1. end while return(Yt)0≤t≤T∗. Remark 2.4. Assuming that gis non-decreasing is arguably very natural when the kernel γis symmetric with respect to some canonical reference measure µ(e.g. Lebesgue or counting), in which caset(x,y) =f(y)/f(x)withf=dπ/dµ, andggives higher weight to areas with higher density f. More generally, however, any choice of locally balancing fu nction will have the correct stationary measure. We leave exploration of other choices for future wo rk. The following lemma details some constraints on gimposed by Assumption 2.2. Lemma 2.1. Letgbe a balancing function satisfying Assumption 2.2. 
Then:
(i) g(t) ≤ 1 + t for all t ≥ 0.
(ii) For all t > 0,

min(1, t) ≤ g(t) ≤ max(1, t).   (7)

Proof. (i) Since g(1) = 1, by Assumption 2.2 we have g(t) ≤ 1 for t ∈ [0,1], from which direct calculation shows that g(t) = t g(1/t) ≤ t for t ≥ 1. Combining gives the result. (ii) For t > 1, g(t) ≥ 1 follows immediately from the non-decreasing assumption, and for t < 1 direct calculation gives g(t) = t g(1/t) ≥ t, from which it follows that g(t) ≥ min(1, t). Switching the direction of the inequalities in each case gives the upper bound g(t) ≤ max(1, t). □

Lemma 2.1 implies that non-decreasing balancing functions can grow at most linearly, which is in fact enough to deduce that the associated locally-balanced Markov jump process is both π-a.e. well-defined (Proposition 2.2) and non-explosive (Theorem 2.1).

Proposition 2.2 (Well-posedness). If g satisfies Assumption 2.2 then the following hold:
(a) λ(x) < ∞ for π-almost every x.
(b) Z_λ := ∫_E λ(x) π(dx) < ∞, meaning π̃(dx) = Z_λ^(−1) λ(x) π(dx) is a proper probability measure on E.
(c) Γ_g is a π̃-reversible discrete-time Markov transition kernel.

Proof of Proposition 2.2. We first show (b), which implies part (a). Using Lemma 2.1(i) and the definition of t(x,y) in (1) gives

Z_λ = ∫_E λ(x) π(dx) = ∫_{E×E} g(t(x,y)) γ(x,dy) π(dx)   (8)
≤ ∫_{E×E} (1 + t(x,y)) γ(x,dy) π(dx) = 1 + ∫_{E×E} π(dy) γ(y,dx) = 1 + 1 = 2 < ∞,

as desired. Part (c) follows from (6) after noting that

[π̃ ⊗ Γ_g](A×B) = Z_λ^(−1) ∫_{A×B} π(dx) g(t(x,y)) γ(x,dy) = Z_λ^(−1) ∫_{A×B} π(dy) g(t(y,x)) γ(y,dx) = [π̃ ⊗ Γ_g](B×A)

for every A, B ∈ E. □

Next we consider whether the process can explode in finite time. Recall that a jump process is said to explode if the inter-arrival times τ_1, τ_2, ... satisfy ∑_{k=1}^{+∞} τ_k < ∞, meaning the process reaches the graveyard state ∂. Conversely, the process is called non-explosive if ∑_{k=1}^{+∞} τ_k = ∞ almost surely.
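The construction in Definition 2.2, i.e. the recipe of Algorithm 1, is straightforward to realise on a finite state space. The following is a minimal Python sketch, not from the paper: it assumes a uniform proposal γ(x, ·) over a user-supplied neighbourhood map (so that t(x,y) reduces to π(y)/π(x), as in Remark 2.4), and all function names are illustrative.

```python
import random

def lbmjp_simulate(pi, g, neighbours, x0, t_max, rng):
    """Simulate a locally-balanced Markov jump process (cf. Algorithm 1)
    on a finite state space, with gamma(x, .) uniform over neighbours(x).
    For such a symmetric proposal, t(x, y) = pi(y) / pi(x)."""
    path = []                 # (entry_time, state) pairs
    t, x = 0.0, x0
    while t < t_max:
        nbrs = neighbours(x)
        weights = [g(pi(y) / pi(x)) for y in nbrs]   # g o t(x, y)
        lam = sum(weights) / len(nbrs)               # jump rate lambda(x), as in (3)
        path.append((t, x))
        t += rng.expovariate(lam)                    # holding time ~ Exp(lambda(x))
        x = rng.choices(nbrs, weights=weights)[0]    # next state ~ Gamma_g(x, .), as in (4)
    return path
```

Weighting each visited state by its holding time then gives the ergodic time averages appearing in (9): on a small target the occupation fractions settle near the normalised π.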
To ensure non-explosivity of the process, we make th e following assumption on the kernel Γg, which is connected with the concept of φ−irreducibility (see e.g. Meyn and Tweedie [1993]). While irreducibility is not in general necessary for non-ex plosivity, it allows for a simpler treatment and it is a natural requirement in sampling applications. Assumption 2.3. For allx∈supp(π)and any B∈Ewithφ(B)>0, there exists n∈Nsuch that Px(Xn∈B)>0. Remark 2.5. For most applications in sampling (such as MCMC), Assumption 2 .3 is rather minimal. The assumption will be satisfied, for example, in the case where for all x∈E,γ(x,·)is supported on the entire state space, with continuous g,t(x,·)that satisfy g(s)>0, for all s >0, andt(x,y)>0for allx,y∈E. The next theorem shows that irreducibility of Γ gis sufficient to ensure that Yis non-explosive for π-almost every starting state. Theorem 2.1 (Non-explosivity, reversibility & ergodicity) .Under Assumptions 2.1-2.3, the fol- lowing hold: (a)Yis almost surely (a.s.) non-explosive for π-a.e. starting state x∈E. (b)Yisπ-reversible, hence π-invariant. (c)Yis ergodic for π-a.e. starting state x∈E, meaning for every f:E→R, withf∈L1(π) we have a.s. that lim t→∞1 t/integraldisplayt 0f(Ys)ds=/integraldisplay Ef(x)π(dx). (9) Proof of Theorem 2.1. For (a), by definition ( Yt)t≥0explodes if and only if sup {n:Tn≤t}=∞ for some t <∞, i.e. if/summationtext∞ k=1τk<∞. Conditionally on X, by Theorem 2.3.2 of Norris [1997], a.s. FOUNDATIONS OF LOCALLY-BALANCED MARKOV PROCESSES 7 /summationtext∞ k=1τk<∞if and only if/summationtext+∞ k=1λ(Xk)−1<∞, thus unconditionally Px/parenleftBigg∞/summationdisplay k=1τk<∞/parenrightBigg =Px/parenleftBigg+∞/summationdisplay k=11 λ(Xk)<∞/parenrightBigg . By Proposition 2.2(c) Γ gis ˜π-invariant. 
Since Γ gis alsoφ-irreducible for some non-trivial φ, by Theorem 1 of Asmussen and Glynn [2011], for ˜ π-almost any x∈Eand any ˜ π-integrable f:E→ [0,∞) 1 NN/summationdisplay k=1f(Xk)N→∞−−−−→/integraldisplay f(x)˜π(dx) almost surely. Setting f(x) := 1/λ(x) gives 1 NN/summationdisplay k=11 λ(Xk)N→∞−−−−→/integraldisplay1 λ(x)1 Zλλ(x)π(dx) =1 Zλ>0 almost surely, implying that Px(/summationtext+∞ k=1λ(Xk)−1<+∞) = 0 for ˜ π-almost any x. For (b)itisclassical thatif Yisnon-explosivethentheuniquesolution totheKolmogorov backward equation is stochastic, see e.g. Section X.3 of Feller [1991 ]. Combining with Proposition 2.1 implies the result, a proof of which can be found as e.g. Proposition 2 of Serfozo [2005]. Consider now part (c). Without loss of generality, we will as sume that f(x)≥0 for all x∈E, and the result will follow by considering the positive and ne gative part of a general f∈L1(π). By Definition 2.2, we have 1 t/integraldisplayt 0f(Ys)ds=1 tn(t)/summationdisplay i=1τif(Xi−1)+1 t(t−Tn(t))f(Xn(t)), wheren(t) := sup{n:Tn≤t}. Using the fact that a.s. t−Tn(t)≤τn(t)+1, and that Tn(t)≤t≤ Tn(t)+1, we write 1 Tn(t)+1n(t)/summationdisplay i=1τif(Xi−1)≤1 t/integraldisplayt 0f(Ys)ds≤1 Tn(t)n(t)+1/summationdisplay i=1τif(Xi−1). (10) By the proof of Theorem 1 of Asmussen and Glynn [2011], ( Xn)n∈Nis ˜π-irreducible, i.e. one can choose ˜πto be the non-trivial measure φin the definition of φ-irreducibility. Then, (( Xn−1,τn))n∈N is aMarkov chain that is also φ-irreduciblewith measure φ(dx,dτ) := ˜π(dx)dτonE×[0,+∞), with dτdenoting the Lebesgue measure. It also has an invariant dist ribution ˆ πdefined on E×[0,+∞), withX-marginal equal to ˜ πand conditional distribution of τ|Xequal to Exp( λ(X)), meaning ˆπ(dx,dτ) =1 Zλλ2(x)exp{−λ(x)τ}dτ π(dx). By Theorem 1 of Asmussen and Glynn [2011], using the Law of Lar ge Numbers for the functions g(x,s) =f(x)s, andh(x,s) =swe deduce that for ˜ π-a.e. starting points x∈E, a.s. lim n→∞1
nn/summationdisplay i=1τif(Xi−1) =Eˆπ[g] =/integraldisplay E/integraldisplay+∞ 0f(x)s1 Zλλ2(x)exp{−λ(x)s}ds π(dx) =1 Zλ/integraldisplay Ef(x)π(dx), (11) 8 SAMUEL LIVINGSTONE, GIORGOS VASDEKIS AND GIACOMO ZANELLA and lim n→∞1 nTn= lim n→∞1 nn/summationdisplay i=1τi=Eˆπ[h] =/integraldisplay E/integraldisplay+∞ 0s1 Zλλ2(x)exp{−λ(x)s}dsπ(dx) =1 Zλ.(12) Finally, observe that since λ(x)>0 for all x∈E, we have a.s. n(t)t→+∞−−−−→+∞. Combining (11) and (12), we see that on letting t→+∞on the left-hand side of (10) we get limsup t→+∞1 Tn(t)+1n(t)/summationdisplay i=1τif(Xi−1) = limsup t→+∞n(t) n(t)+11 (n(t)+1)−1Tn(t)+1n(t)−1n(t)/summationdisplay i=1τif(Xi−1) =/integraldisplay Ef(x)π(dx) and a similar argument shows that for the right-hand side of ( 10) we get liminf t→+∞1 Tn(t)n(t)+1/summationdisplay i=1τif(Xi−1) =/integraldisplay Ef(x)π(dx). The result follows on noting that since 0 < λ(x)<∞forπ-a.e.x, any set of full ˜ π-measure is also of fullπ-measure, therefore (9) holds for π-a.e. starting point. /square Next, we consider the average number of jumps of the LBMJP in a finite time horizon. The assumption that this is finite is required for many theoretic al results of interest. With a view towards using the theory developed in Davis [1984] for piece wise-deterministic Markov processes, we present the following. Lemma 2.2. LetnTbe the number of jumps of a LBMJP until time T∈(0,∞). Under Assump- tions 2.1-2.3, there exists S∈Ewithπ(S) = 1such that for all T >0andx∈S, Ex[nT]<∞. (13) Proof of Lemma 2.2. The jumps follow a non-homogeneous Poisson process with rat eλ(Yt). Stan- dard results (see e.g. Bierkens et al. [2019a]) show that Ex[nT] =Ex/bracketleftbigg/integraldisplayT 0λ(Yt)dt/bracketrightbigg . Assume that the starting point xof the process is distributed according to π, which is invariant for the process Yfrom Theorem 2.1(b). 
Then Eπ[nT] =Eπ/bracketleftbigg/integraldisplayT 0λ(Yt)dt/bracketrightbigg =/integraldisplayT 0Eπ[λ(Yt)]dt=/integraldisplayT 0/integraldisplay Eλ(x)π(dx)dt=T/integraldisplay Eλ(x)π(dx)<∞ since/integraltext Eλ(x)π(dx)<∞from Proposition 2.2. The result follows. /square To conclude with the well-posedness of the LBMJP, we address the concern that Ymight hit a statexwithλ(x) = +∞when starting from some zero measure set. To avoid this, we ma ke the following assumption. Assumption 2.4. For allx∈supp(π), and any A∈Ewithπ(A) = 0, we have Γg(x,A) = 0. Remark 2.6. Under Assumption 2.1, we have that π(A) = 0⇐⇒˜π(A) = 0. Therefore, Assumption 2.4 asks that for any x∈supp(˜π),Γg(x,·)is absolutely continuous with respect to ˜π. Since˜πis also the invariant measure of the kernel Γg, this will certainly hold for ˜π-almost all FOUNDATIONS OF LOCALLY-BALANCED MARKOV PROCESSES 9 statesx, and therefore for π-almost all states x. Therefore, Assumption 2.1 merely imposes an extra condition on a zero π-measure set. Under Assumption 2.4, we have the following strengthening o f Theorem 2.1 parts (a) and (c) from holding for π-a.e. starting state to every starting state xin the support of π, along with some additional stability of the process. Theorem 2.2 (Stability and ergodicity from any starting state) .Under Assumptions 2.1-2.4, from any starting state x∈supp(π)the following hold: (a)The process Ytis a.s. non-explosive. (b)Yt∈supp(π)andλ(Yt)<∞. (c)For allf∈L1(π), a.s. lim t→∞1 t/integraldisplayt 0f(Ys)ds=/integraldisplay Ef(x)π(dx). (14) Proof of Theorem 2.2. Let Λ<∞={y∈E:λ(y)<∞}. By Proposition 2.2(a), π(Λ<∞∩ supp(π)) = 1. Therefore, by Assumption 2.4, and since we start from x∈supp(π), almost surely the process X1,X2,...will stay on Λ <∞∩supp(π) for alln∈N, and, therefore, the same will hold for the process Ytfor allt≥0. Using the same argument
as in Theorem 2.1(b), we get that th e process is π-reversible and invariant. Finally, (14) follows by the sam e arguments of the proof of Theorem 2.1(c) by appealing to Theorem 2 of Asmussen and Glyn n [2011]. /square Remark 2.7. Asλ∈L1(π)(by Proposition 2.2), and λ(Yt)<∞a.s. for all t≥0(by Theo- rem 2.2), we can always consider a version of λ∈L1(π)such that λ(x)<∞for allx∈E, and this will not affect the behaviour of the process. 2.3.Feller property. For a continuous time Markov process with transition kernel Pt(x,·) at time t,letthecorrespondingsemigroup( Pt)t≥0bedefinedthroughtherelation Ptf(x) :=/integraltext f(y)Pt(x,dy). We will refer to a Markov process for which Ptf(x)→f(x) ast→0 point-wise for all x∈Eas strong Feller if whenf:E→Ris bounded then Ptf∈Cb(E). Iff∈Cb(E) =⇒Ptf∈Cb(E) then we will call the process weak Feller. In general a Markov jump process will not be strong Feller, since at time tif there is some non-zero probability that no jumps have occu rred then Ptf is a convex combination of functions including f, which may not be continuous. To establish the weak Feller property in some generality, we make the following assumptions. Assumption 2.5. The following hold. (1)Let(U,B(U),ν)be a probability space, and h:E×U→Esuch that for any x∈E,γ(x,·) = Pu∼ν(h(x,u)∈·). Assume that for all u∈Uthe function x→h(x,u)is continuous. (2)The Radon–Nikodym derivative t:E×E→R≥0as in (1) is continuous. Assumption 2.5 will be satisfied in many applied settings. As sumption 2.5.1, for example, will be verified for a kernel γthat induces a random walk. In particular, using the notatio n of Assumption 2.5.1, we then have U=E,h(x,u) =x+u, soh(·,u) is continuous. Assumption 2.5.2 will be satisfied if for example πandγadmit continuous densities with respect to a common referen ce measure. Proposition 2.3 (Weak Feller) .Under Assumptions 2.4 and 2.5, a locally-balanced Markov ju mp process is weak Feller. 
10 SAMUEL LIVINGSTONE, GIORGOS VASDEKIS AND GIACOMO ZANELL A The proof of Proposition 2.3 relies on the following lemma. Lemma 2.3. Under Assumptions 2.4 and 2.5, the rate λof the LBMJP is a continuous function. Proof of Lemma 2.3. Using Assumption 2.5 we write for any y∈E, λ(y) =/integraldisplay Eg◦t(y,z)γ(y,dz) =/integraldisplay Ug◦t(y,h(y,u))ν(du). Let us fix x∈Eandu∈U. From Assumption 2.5.2, we have lim y→xg◦t(y,h(y,u)) =g◦ t(x,h(x,u)). For some ǫ >0, consider the ball B(x,ǫ) with radius ǫ, centred at x. Then, using Assumption2.5.2, thereexists an Mxsuchthat t(y,h(y,u))≤Mxforally∈B(x,ǫ). ByLemma2.1 this implies that for all y∈B(x,ǫ) g◦t(y,h(y,u))≤1+t(y,h(y,u))≤1+Mx. By the Bounded Convergence Theorem this then implies that li my→xλ(y) =λ(x). /square Proof of Proposition 2.3. We construct the process using a sequence of i.i.d. variable sE1,E2, ···∼Exp(1) and U1,U2,...∼νin the following way. Assume that the process starts from x∈E. We setτx 1=λ(x)E1andτx 1=Tx 1the first jump time. Ys=xfors∈[0,τx 1). Then the process jumps to h(x,U1) soYx T1=h(x,U1). The process is then constructed inductively, i.e. if it ha s been constructed until the n’th jump time Tx nthen we set τx n+1=En+1λ(Yx τn),Tx n+1=Tx n+τx n+1, Yx s=Yx Txnfors∈[Tx n,Tx n+1) andYx Tx n+1=h(Yx Txn,Un+1). Let us fix t≥0,x∈Eand let us fix a specific configuration of these E1,E2,...andU1,U2,.... Our goal will be to prove that a.s. on the configuration of E’s andU’s, limy→xYy t=Yx t.
If we have that then for any f∈Cb(E), we have lim y→xf(Yy t) =f(Yx t) a.s. and from bounded convergence, Ptf(y) =Ey[f(Yt)]y→x−−−→Ex[f(Yt)] =Ptf(x) which proves that Ptfis continuous. We now turn on proving that a.s. Yy ty→x−−−→Yx t. Due to non-explosivity of the process, a.s. there existsn∈Nsuch that t∈[Tx n,Tx n+1) and since a.s. Tx n∝ne}ationslash=t,t∈(Tx n,Tx n+1). Consider the discrete time chain/parenleftbig Xy k/parenrightbig k≥0defined as Xy k=Yy τy k. Sinceh(·,u) is continuous for all u, we observe that lim y→xXy 1=h(y,U1) =h(x,U1) =Xx 1, and since Xy k+1=h(Xy k,Uk+1). using induction we get that for any k∈N,Xy kis continuous with resepct to y. Furthermore, we can write Ty 1=λ(y)E1 which is continuous with respect to ysinceλis continuous. We also observe that for any k∈N, Ty k+1=Ty k+λ(Xy k)Ek+1 so ifTy kis continuous, then Ty k+1is as well. Via induction, Ty kis continuous with respect to yfor allk. Therefore, since Ty n∈Nfor ally, there exists δ >0 such that if y∈B(x,δ), thenTy n=Tx n. This further implies that t∈(Ty n,Ty n+1) for any y∈B(x,δ). Therefore, Yx t=Xx Yxn= lim y→xXy Txn= lim y→xXy Ty n= lim y→xYy t. This proves that Ptfis continuous. Finally, since fis bounded and ( Pt)t≥0defines a contraction semigroup then Ptf(x) =Ex[f(Yt)] is also bounded, meaning that Ptf∈Cb(E). FOUNDATIONS OF LOCALLY-BALANCED MARKOV PROCESSES 11 It remains to prove that for any bounded f:E→R, and any x∈Ewe have Ptf(x)t→0−−→f(x). To prove this, we observe that |Ptf(x)−f(x)|=|Ex[f(Yt)−f(x)]| ≤Ex[|f(Yt)−f(x)|] =Ex[|f(Yt)−f(x)|Iτ1≤h]+Ex[|f(Yt)−f(x)|Iτ1>h] =Ex[|f(Yt)−f(x)|Iτ1≤h] ≤2∝ba∇dblf∝ba∇dbl∞Px(τ1≤h) = 2∝ba∇dblf∝ba∇dbl∞(1−exp{−λ(x)t})t→0−−→0. This completes the proof. /square 2.4.Generator. In this sub-section we study the generator of a locally-bala nced Markov jump process. This will become important in the next sections whe n we discuss ergodicity properties and the behaviour of the process in the limit when the size of jump s becomes small. 
Write L to denote the operator such that for any bounded function f : E → R, we have

L f(x) := ∫_E g∘t(x,y) (f(y) − f(x)) γ(x,dy) = λ(x)(Γ_g f(x) − f(x)),   (15)

where Γ_g is as in (4). It is well known (see e.g. Theorem 3.1 of Ethier and Kurtz [2009]) that under suitable conditions the operator defined by

A f(x) = a(x)(Γ f(x) − f(x))

is the (strong) generator of a Markov jump process with jump kernel of the form a·Γ (see for example Ethier and Kurtz [2009]). Some of these conditions are, however, quite restrictive for our setting. But taking advantage of results of Davis [1984] for piecewise-deterministic Markov processes, we have the following.

Theorem 2.3 (Weak generator). Let Assumptions 2.1-2.4 hold. Then for any bounded f : E → R, t ≥ 0 and x ∈ supp(π) ∩ S, with S as in Lemma 2.2, the process

M_t = f(Y_t) − f(x) − ∫_0^t L f(Y_s) ds,  t ≥ 0   (16)

is a martingale with respect to the natural filtration of Y. Therefore, the domain D(L) of the weak generator of Y contains all bounded functions B(E), and the weak generator of the LBMJP restricted to B(E) is the operator L.

Proof of Theorem 2.3. From Lemma 2.2, E_x[n_T] < ∞ for all T ≥ 0. Furthermore, since f is bounded, for any T > 0, letting Y_{t−} := lim_{s→t−} Y_s, we have

E_x[ ∑_{k=1}^{n_T} |f(Y_{T_k}) − f(Y_{T_k−})| ] ≤ 2‖f‖_∞ E_x[n_T] < ∞.

From Theorem 5.5 of Davis [1984] we get that M_t is a local martingale. We further observe that for any x ∈ E,

|L f(x)| = |∫_E (f(y) − f(x)) g∘t(x,y) γ(x,dy)| ≤ 2‖f‖_∞ λ(x),
therefore for any T ≥ 0,

sup_{t≤T} |∫_0^t L f(Y_s) ds| ≤ 2‖f‖_∞ ∫_0^T λ(Y_s) ds,

which has finite expectation since x ∈ supp(π) ∩ S and from Lemma 2.2. Since f is bounded, we get that M is a true martingale. □

We also have the following point-wise limit, which will prove useful in Section 3.

Proposition 2.4. Assume that Assumption 2.4 holds. For any bounded f : E → R and any x ∈ E,

lim_{h→0} E_x[f(Y_h) − f(x)] / h = L f(x).

Proof of Proposition 2.4. The proof is presented in Appendix A. □

Next we focus on the case of bounded g. Here the generator has numerous additional regularity properties.

Proposition 2.5. Under Assumption 2.1, if g is bounded above then the following hold:
(a) L is a bounded operator on L2(π).
(b) The domain D(L) = L2(π).
(c) L generates a uniformly continuous contraction semigroup (P_t)_{t≥0}.
(d) The semigroup can be written

P_t := exp(tL) = ∑_{n=0}^∞ (t^n / n!) L^n.

Proof of Proposition 2.5. For (a) set λ̄ := sup_x λ(x). Then we can equivalently define the process as evolving at the constant rate λ̄ (see Section 4.2 of Ethier and Kurtz [2009]) and write the generator as

L f(x) = λ̄ ∫ [f(y) − f(x)] Γ̄_g(x,dy),  where  Γ̄_g(x,dy) := (1 − λ(x)/λ̄) δ_x(dy) + (λ(x)/λ̄) Γ_g(x,dy).

Note that Γ̄_g is π-reversible and Γ̄_g(x,E) = 1, implying that the corresponding operator Γ̄_g f(x) := ∫ f(y) Γ̄_g(x,dy) for f ∈ L2(π) satisfies ‖Γ̄_g‖ ≤ 1. For any f ∈ D(L) it therefore holds that

‖L f‖² = ∫ (L f(x))² π(dx) = ∫ λ̄² (Γ̄_g f(x) − f(x))² π(dx) ≤ λ̄²‖Γ̄_g f‖² + λ̄²‖f‖² + 2λ̄²‖f‖‖Γ̄_g f‖,

implying that ‖L‖ < ∞ as required. Parts (b), (c) and (d) follow directly from (a), see e.g. Theorem 1.2 of Pazy [2012]. □

3. Mixing properties

Here we study exponential and uniform ergodicity properties of a locally-balanced Markov jump process.
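On a finite state space the generator is simply a matrix, and the claims above can be checked numerically. Below is a small self-contained sketch, assuming a hypothetical three-state target, a uniform proposal over the other states, and the bounded balancing choice g(t) = 2t/(1+t); since the series in (d) is only numerically stable for small t‖L‖, it is truncated at a small t and extended to large t by repeated squaring via the semigroup property.

```python
# Hypothetical three-state example: unnormalised target pi and a
# uniform proposal gamma(x, .) over the other two states.
pi = [1.0, 2.0, 3.0]
Z = sum(pi)
g = lambda t: 2 * t / (1 + t)          # a bounded balancing function

n = len(pi)
# Generator from (15): L[x][y] = g(t(x,y)) * gamma(x, y) off the diagonal,
# with t(x, y) = pi(y) / pi(x) since the proposal is symmetric.
L = [[0.0] * n for _ in range(n)]
for x in range(n):
    for y in range(n):
        if y != x:
            L[x][y] = g(pi[y] / pi[x]) / (n - 1)
    L[x][x] = -sum(L[x])

# pi-reversibility of the jump rates: pi(x) J(x,y) = pi(y) J(y,x).
for x in range(n):
    for y in range(n):
        if y != x:
            assert abs(pi[x] * L[x][y] - pi[y] * L[y][x]) < 1e-12

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(L, t, terms=25):
    """P_t = exp(tL) by the truncated series of Proposition 2.5(d);
    accurate only while t * ||L|| is small."""
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[v * t / k for v in row] for row in matmul(term, L)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

# Semigroup property: reach t = 0.05 * 2**10 = 51.2 by repeated squaring.
P = expm(L, 0.05)
for _ in range(10):
    P = matmul(P, P)
# Every row of P_t approaches the normalised target as t grows.
for x in range(n):
    assert all(abs(P[x][y] - pi[y] / Z) < 1e-9 for y in range(n))
```

The row-convergence check at the end is a numerical instance of the ergodicity statements of Section 3 for this toy chain.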
InSection 3.1 we definea weak notion of spectral gap , which implies exponential ergodicity for the process. We then show that in the case of bounded gthere is an equivalence between the existence of a spectral gap for locally-balanced processes and Metropolis–Hastings algorithms in Section 3.1.1. After this we provide tools to compare two diffe rent locally-balanced processes with different choices of gin Section 3.1.2. Finally, in Section 3.2 we show conditions under which a locally-balanced process can be uniformly ergodic. FOUNDATIONS OF LOCALLY-BALANCED MARKOV PROCESSES 13 3.1.Spectral gaps and exponential ergodicity. For anyf∈B(E) we define a Dirichlet form associated with L(defined in (15)) as E(L,f) :=∝an}b∇acketle{tf,−Lf∝an}b∇acket∇i}htπ. (17) Note that typically the Dirichlet form is defined using the st rong generator. Here we instead use the weak generator and rely on the point-wise convergence es tablished in Proposition 2.4, which will be sufficient for our needs whilst also allowing us to avoi d questions about the domain of the strong generator. We will define a form of spectral gap for the process as γL:= inf f∈B(E)E(L,f) Varπ(f). (18) We note the small difference to the usual definition of spectral gap G(L), defined as G(L) := inf f∈L2(π)E(L,f) Varπ(f)(19) whenListhestronggenerator. For aLBMJPtheseemingly weaker con ditionγL>0infactdirectly impliesexponential ergodicity intotal variation distanc eprovidedthattheprocessisinitialised from a distribution µthat is absolutely-continuous with respect to πwith finite χ2−divergence, as stated below. Proposition 3.1 (Exponential ergodicity under spectral gap) .Let Assumptions 2.1-2.4 hold. If an LBMJP with balancing function gand weak generator Lis such that γL>0andY0∼µwith µ≪π, then ∝ba∇dblµPt−π∝ba∇dblTV≤1 2χ2(µ∝ba∇dblπ)1 2exp{−γLt}. (20) Proof.Letf∈B(E). We haveE(L,f)≥γLVarπ(f). 
By Theorem 2.2, since π is invariant for the process, it holds that

d/dt Var_π(P_t f) = d/dt ‖P_t f‖²_π = d/dt ∫ P_t f(x)² π(dx).
NotethatbyProposition2.4, point-wisethederivativeof Ptf(x)2withrespectto tis2Ptf(x)LPtf(x). Furthermore, usingthe same argument as in theProof of Theor em 2.3, we have|2Ptf(x)LPtf(x)|≤ 4∝ba∇dblf∝ba∇dbl2 ∞λ(x), which is π-integrable using Proposition 2.2(b) and recalling that f∈B(E). Therefore we can switch the integral and derivative in the above and we h ave d dtVarπ(Ptf) = 2∝an}b∇acketle{tPtf,LPtf∝an}b∇acket∇i}htπ=−2E(L,Ptf)≤−2γLVarπ(Ptf). Applying the Gr¨ onwall inequality and using the semi-group property of Ptthen gives the familiar result Varπ(Ptf)≤e−2γLtVarπ(f) (21) forf∈B(E). Now turning to the total variation distance, using that π(Ptf) =π(f) and applying the Cauchy–Schwarz inequality shows that ∝ba∇dblµPt−π∝ba∇dblTV= sup A∈E/vextendsingle/vextendsingle/vextendsingle/vextendsingle/integraldisplay (PtIA(x)−π(A))/parenleftbiggdµ dπ(x)−1/parenrightbigg π(dx)/vextendsingle/vextendsingle/vextendsingle/vextendsingle≤χ2(µ∝ba∇dblπ)1 2sup A∈EVarπ(PtIA)1 2.(22) Finally applying (21) and noting that Var π(IA)≤1/4 for any A∈Egives the result. /square Inthenexttwo sub-sections wewill provideconditions that guarantee existence of a spectral gap for the process. We first treat the case where the balancing funct iongis bounded before generalising to the unbounded case. 14 SAMUEL LIVINGSTONE, GIORGOS VASDEKIS AND GIACOMO ZANELL A 3.1.1.Bounded gand connections with Metropolis–Hastings. Whengisboundedthelocally-balanced Markov jump process can be strongly connected to Metropolis –Hastings algorithms. We will call a Markov transition kernel of Metropolis–Hastings type if it can be written P(x,dy) :=α(x,y)γ(x,dy) +/parenleftbigg/integraldisplay (1−α(x,y))γ(x,dy)/parenrightbigg δx(dy), for some αsatisfying α(x,y)/α(y,x) =t(x,y). Note that for any balancing function g≤1, choosing α(x,y) =g(t(x,y)) is a valid choice. 
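The identity α(x,y)/α(y,x) = t(x,y) is exactly the balancing property (2), so any balancing function with g ≤ 1 plugs in directly as a Metropolis–Hastings acceptance rate. A quick numerical check with the Metropolis choice g(t) = min(1, t) on a hypothetical three-state target (the states, masses and uniform proposal below are illustrative, not from the paper):

```python
# alpha(x, y) = g(t(x, y)) with g bounded by 1 is a valid MH acceptance
# rate; the resulting kernel satisfies detailed balance with respect to pi.
pi = [1.0, 2.0, 3.0]
n = len(pi)
g = lambda t: min(1.0, t)              # the Metropolis choice

# Metropolis-Hastings kernel with uniform proposal over the other states.
P = [[0.0] * n for _ in range(n)]
for x in range(n):
    for y in range(n):
        if y != x:
            P[x][y] = g(pi[y] / pi[x]) / (n - 1)   # accept * propose
    P[x][x] = 1.0 - sum(P[x])                      # rejection mass stays put

for x in range(n):
    assert abs(sum(P[x]) - 1.0) < 1e-12            # P is stochastic
    for y in range(n):
        if y != x:
            # detailed balance: pi(x) P(x,y) = pi(y) P(y,x)
            assert abs(pi[x] * P[x][y] - pi[y] * P[y][x]) < 1e-12
```

Replacing min(1, t) by any other non-decreasing balancing function scaled to lie below 1 passes the same detailed-balance check, which is the content of the equivalence developed next.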
We further observe that the generator of any LBMJP with g≤λfor some λ <∞can be written Lf(x) =¯λ/integraldisplay [f(y)−f(x)]¯Γg(x,dy), where ¯Γg(x,dy) =/parenleftbigg 1−/integraldisplayg(t(x,y)) λγ(x,dy)/parenrightbigg δx(dy)+g(t(x,y)) λγ(x,dy). Note, therefore, that ¯Γgis a Markov transition kernel of Metropolis–Hastings type w ith the choice of acceptance rate ˜ α(x,y) = ˜g(t(x,y)) :=g(t(x,y))/λ≤1. The following result shows that any choice of ˜ gis not too dissimilar to min(1 ,t). Lemma 3.1. Ifg≤λ,g(1) = 1andgis non-decreasing balancing function then min(1,t)≤g(t)≤λmin(1,t). Proof.For the right-hand side inequality, for any balancing funct iong(t)≤λ⇐⇒g(s)/s≤λ wheres:= 1/t, meaning g(s)≤λs. Sincetis arbitrary then so is s. Combined with the fact that g(s)≤¯λ, this implies the result. The left-hand side inequality fol lows directly from Lemma 2.1. /square Using Lemma 3.1, an equivalence between the spectral gaps of a LBMJP with bounded gand Metropolis–Hastings algorithms is shown in the below propo sition. Proposition 3.2 (Equivalence of spectral gaps between LBMJP and Metropolis –Hastings) .If g(t)≤λfor some λ <∞is non-decreasing balancing function, then the locally-ba lanced jump process with generator Lf(x) =/integraltext (f(y)−f(x))g(t)γ(x,dy)has a positive spectral gap γL(as in(18)) if and only if the Metropolis–Hastings Markov chain with tr ansition kernel P(x,dy) = min(1,t)γ(x,dy)+/parenleftbig/integraltext (1−min(1,t))γ(x,dy)/parenrightbig δx(dy)does. The same equivalence holds for the spec- tral gap G (P), defined in (19). Proof.Note by Lemma 3.1 that for any f∈L2(π) /integraldisplay (f(y)−f(x))2min(1,t(x,y))π(dx)γ(x,dy)≥1 λ/integraldisplay (f(y)−f(x))2g(t(x,y))π(dx)γ(x,dy) ≥1 λ/integraldisplay (f(y)−f(x))2min(1,t)π(dx)γ(x,dy). 
WritingP(x,dy) = min(1 ,t)γ(x,dy) +/parenleftbig/integraltext (1−min(1,t))γ(x,dy)/parenrightbig δx(dy), then from the variational characterisation of the spectral gap it directly follows th at Gap(L)≥Gap(P)≥Gap(L) λ, and γL≥γP≥γL λ whereγis defined in (18). /square FOUNDATIONS OF LOCALLY-BALANCED MARKOV PROCESSES 15 Given the above, we can deduce a stronger equivalence by tran slating between spectral gaps and exponential ergodicity, as stated below. Corollary 3.1. The following hold: (i)Under Assumptions 2.1-2.4, if the Metropolis-Hastings ker nelPwith proposal γ, targeting πis geometrically ergodic when initialised from µwithχ2(µ||π)<∞, then the LBMJP with the
same γ, bounded gand initialised from µis exponentially ergodic. (ii)If two LBMJPs with bounded and non-decreasing g1andg2have the same γ, then either both have positive spectral gap or neither does. Proof.(i) Using Theorem 2.1 of Roberts and Rosenthal [1997] then if Pis geometrically ergodic andχ2(µ||π)<∞then∝ba∇dblP∝ba∇dbl<1, from which it follows that Gap( P)>0. By Proposition 3.2 we get that Gap( L)>0, and therefore γL>0. The result follows by Proposition 3.1. (ii) This is an immediate consequence of Proposition 3.2 since both locall y balanced processes will have a positive spectral gap iff the Metropolis-Hastings algorithm with pro posalγhas a positive one. /square 3.1.2.Comparison theorems for unbounded g.Proposition 3.1 above shows that a positive spectral gap combined with a suitably warm start leads to exponential ergodicity in total variation distanc e. In the next result we show that among non-decreasing choices ofgthere is a natural ordering on the gaps, which can then be used to greatly simplify the quest ion of exponential ergodicity for a process when gis not bounded. Proposition 3.3 (Comparisonofspectralgapswithunbounded g).Consider two LBMJPs with the same kernel γand with balancing functions g1andg2respectively. Let L1andL2be the generators of the two processes respectively. Assume that g1(t)≥ωg2(t)for allt≥0and some ω >0. Then γL1≥ω·γL2. Proof.Note thatB(E)⊂D(L1)∩D(L2). For any f∈B(E) recall the integral representation E(L1,f) =1 2/integraldisplay (f(y)−f(x))2g1◦t(x,y)π(dx)γ(x,dy), (23) from which it is straightforward to see that E(L1,f)≥ωE(L2,f) using that g1(t)≥ωg2(t) for all t≥0. /square The above result, when combined with Lemma 2.1, allows us to f ocus attention wholly on the process with (bounded) balancing function g(t) = min(1 ,t). With this knowledge, it is also imme- diatethat existingresultson spectral gaps/geometric erg odicity forMetropolis–Hastings algorithms (e.g. Roberts and Tweedie [1996a,b], Livingstone et al. 
[20 19]) can be leveraged in their entirety to establish exponential convergence to equilibrium of a loca lly-balanced Markov jump process. Corollary 3.2. For a given γ(x,·), if the LBMJP with g(t) = min(1 ,t)has a positive spectral gap, then it will also have a positive spectral gap γLfor all non-decreasing choices of g. Corollary 3.3. For a given γ(x,dy), if a Metropolis–Hastings algorithm with proposal γ(x,dy) has a positive spectral gap, then a LBMJP with the same invaria nt distribution and any choice of non-decreasing gwill also have a positive spectral gap γL. 3.2.Uniform Ergodicity. The goal of this sub-section is to show that a locally-balanc ed pro- cess can be uniformly ergodic, even on unbounded state space s, which suggests that their mixing properties can be robust with respect to the starting positi on. We begin by recalling the following definition. 16 SAMUEL LIVINGSTONE, GIORGOS VASDEKIS AND GIACOMO ZANELL A Definition 3.1 (Uniform Ergodicity) .The process (Yt)t≥0is called uniformly ergodic if there exists K >0andρ <1such that for all x∈supp(π) ∝ba∇dblPx(Yt∈·)−π(·)∝ba∇dblTV≤Kρt. Two important notions for studying ergodicity properties o f the process are petite and small sets. We recall (see e.g. Meyn and Tweedie [2009]) that for a discre te time Markov chain ( Xn)n∈N, a set Cis called petiteif there exists a probability measure αonN, a non-trivial measure ηonEand ǫ >0 such that for all x∈C,/integraldisplay Px(Xk∈·)α(dk)≥ǫ η(·). The
setCis called smallifαcan be taken as the Dirac measure α=δnfor some n∈N. These definitions naturally extend to continuous time processes. We begin with a result that guarantees that the class of small sets contains all the compact subsets ofE. This will be helpful later when we establish uniform ergodi city for particular examples. Proposition 3.4 (Compact sets are small) .Let Assumptions 2.1-2.5 hold. Let δ >0and consider theδ-skeleton of the LBMJP, i.e. the chain (Zn)n∈NwithZn=Ynδ. There exists a δ >0such that for theδ-skeleton all compact sets are small. Proof of Proposition 3.4. Letδ >0 and for n∈Nlet us write Qδ nfor then-step transition kernel of theδ-skeleton Z. Since Γ g, isπ-irreducible and 0 < λ(x)<∞, it is easy to see that the LBMJP is alsoπ-irreducibe. Therefore, the skeleton chain is π-irreducible, and since Eis separable, there exists an open ball Bwithπ(B)>0. By Proposition 2.2.1, there exists x∈Ewithλ(x)< ∞, therefore Qδ 1(x,{x}) =Px(Yδ=x)>0, so the chain is (strongly) aperiodic in the sense of Meyn and Tweedie [2009]. From Proposition 2.3 the chain is we ak Feller. All Assumptions of Theorem 3.4 of Meyn and Tweedie [1992] are satisfied, therefo re all compact sets are petite for the skeleton chain. Finally, any such petite set is small as from Theorem 5.5.7 of Meyn and Tweedie [2009]. /square For any set C, we define hC:= inf{t≥0|Yt∈C}to be the hitting time to set C. It is well known that the notion of uniform ergodicity is closely connected w ith the behaviour of hitting times of small sets of the process. For the LBMJP we have the following result. Proposition 3.5 (Uniform ergodicity of a LBMJP) .Let Assumption 2.4 hold. Assume further that for any petite set Cthere exist M,λ∗>0such that for any x∈E,Ex[hC]< M, and for any y∈C,λ(y)≤λ∗. Then the LBMJP Yis uniformly ergodic. Proof.Using Markov’s inequality Ex[hC]< Mimplies that Px(hC< t)≥1−M/t. Choose ts.t. M/t <1. 
Then, writing $f^x_C$ for the law of $h_C$ when the process starts from $x$, it holds that
$$P^t(x,C) \ge \int_0^t P_x(X_t \in C \mid h_C = s)\, f^x_C(ds).$$
By the definition of $h_C$, $h_C = s$ implies that $X_{\tilde s} \in C$ for some $\tilde s \in (s,t)$. Therefore $P_x(X_t \in C \mid h_C = s) \ge e^{-\lambda(X_{\tilde s})(t-\tilde s)} \ge e^{-\lambda^* t}$, which implies that
$$P^t(x,C) \ge e^{-\lambda^* t} \int_0^t f^x_C(ds) = e^{-\lambda^* t} P_x(h_C < t) \ge e^{-\lambda^* t}\left(1 - \frac{M}{t}\right). \quad (24)$$

FOUNDATIONS OF LOCALLY-BALANCED MARKOV PROCESSES

Since $C$ is petite there exist $\epsilon_0 > 0$, a probability distribution $\eta$ defined on $E$ and another, $\alpha$, defined on $[0,\infty)$ such that $\int P^s(y,A)\,\alpha(ds) \ge \epsilon_0\,\eta(A)$ for all $A \in \mathcal{E}$ and $y \in C$. Thus for any $x \in E$,
$$\int P^{t+s}(x,A)\,\alpha(ds) \ge \int_C \int P^s(y,A)\,\alpha(ds)\, P^t(x,dy) \ge \epsilon_0\,\eta(A)\,P^t(x,C) \ge \epsilon\,\eta(A),$$
for $\epsilon = \epsilon_0 e^{-\lambda^* t}(1 - M/t) > 0$. This means that $E$ is petite. The process is strongly aperiodic as shown in the proof of Proposition 3.4. Uniform ergodicity of $Y$ follows from applying Theorem 5.2(b) of Down et al. [1995], upon noting that the drift condition $P^s V_T \le \beta(s) V_T + b\,\mathbb{I}_E$ for $s \le T$, with $\beta(s)$ bounded on $[0,T]$ and $\beta(T) < 1$, is trivially satisfied by setting $V_T \equiv 1$, $\beta(s) = 0$ for all $s \in [0,T]$ and $b = 1$. □

3.2.1. Example 1: Simple random walk. To illustrate how uniform ergodicity can arise for a locally-balanced Markov process when $E$ is unbounded, we consider the case $E = \mathbb{N}$ and the locally-balanced process with base kernel
$$\gamma(x,dy) := \tfrac{1}{2}(\delta_{x-1}(dy) + \delta_{x+1}(dy)). \quad (25)$$
Let us also make the following assumption regarding the tails of $\pi$ and the growth of the function $g$.

Assumption 3.1. There exist $\tilde a, a > 0$, $\beta$
$> 1$ and $k > 0$ such that for all $t \ge 1$, $g(t) \ge t^{\tilde a}$; for all $n \in \mathbb{N}$, $\pi(n) > 0$; and for all $n \ge k$,
$$\frac{\pi(n)}{\pi(n+1)} \ge \exp\left\{a\beta n^{\beta-1}\right\}.$$
Under this assumption we have the following.

Theorem 3.1. Assume that $\gamma$ is of the form (25) and Assumption 3.1 holds. Then the LBMJP is uniformly ergodic.

Proof of Theorem 3.1. The proof is presented in Appendix B. □

As will become evident from the proof of Theorem 3.1, the same result holds when the state space is $\{hn,\, n \in \mathbb{N}\}$ for some $h > 0$ and when $\gamma(x,dy) = 2^{-1}(\delta_{x-h} + \delta_{x+h})$. Consider a distribution of interest $\pi$ on $\mathbb{R}$ of the form $\pi(x) \propto \exp\{-x^a\}$ for $a \in (1,2)$. As we will see in Section 4, when $h \to 0$, after an appropriate rescaling of time the LBMJP converges weakly to an overdamped Langevin diffusion $(S_t)_{t\ge 0}$, with state space $\mathbb{R}$, that solves the stochastic differential equation
$$dS_t = \tfrac{1}{2}\nabla \log \pi(S_t)\,dt + dB_t \quad (26)$$
where $B$ is a Brownian motion. We observe that $\pi$ will satisfy Assumption 3.1. Therefore, the LBMJP defined on $\{hn,\, n \in \mathbb{N}\}$ will be uniformly ergodic for a target distribution of the form $\pi(n) \propto \exp\{-n^a\}$ for any $a > 1$. On the other hand, the Langevin diffusion (26) will not be uniformly ergodic for a target distribution of the form $\pi(x) \propto \exp\{-|x|^a\}$ defined on $\mathbb{R}$ unless $a > 2$ (e.g. Roberts and Tweedie [1996b], Sandrić [2025]), highlighting a qualitative difference in the mixing behaviour between the two processes.

4. Weak Convergence to an overdamped Langevin diffusion

In this section we focus on the state space $E = \mathbb{R}^d$ and study the behaviour of the process when the size of the jumps decreases and the frequency increases arbitrarily. We will show that, with an appropriate space and time rescaling, in the limit a locally-balanced Markov jump process converges weakly to the overdamped Langevin diffusion. Scaling limits of this form have been shown for various Metropolis–Hastings algorithms and have been used to analyse their behaviour in high dimensions and to optimally tune various algorithmic parameters (e.g.
Gelman et al. [1997]). In this section we make the following additional assumption.

Assumption 4.1. Assume that $\pi$ admits a Lebesgue density $\pi \in C^3(\mathbb{R}^d)$ with $\tilde M \ge \pi(x) > 0$ for some $\tilde M < \infty$ and all $x \in \mathbb{R}^d$, that $g \in C^2(\mathbb{R}_{\ge 0})$, and that there exists $M > 0$ such that for all $x \in \mathbb{R}^d$,
$$-M I_d \preceq \nabla^2 \log \pi(x) \preceq M I_d. \quad (27)$$

Remark 4.1. Condition (27) is usually called $M$-smoothness of the potential $-\log \pi(x)$, and is common in numerical analysis (e.g. Lytras and Mertikopoulos [2024]). The assumption restricts the eigenvalues of $\nabla^2 \log \pi$ to lie in $[-M, M]$.

The main result of this section is the following.

Theorem 4.1 (Weak convergence to an overdamped Langevin diffusion). Let Assumptions 2.1-2.4 and 4.1 hold, and let $\sigma_n \in (0,1)$ be a sequence with $\lim_{n\to\infty} \sigma_n = 0$. For every $n \in \mathbb{N}$, let $(Y^n_t)_{t\ge 0}$ be the $d$-dimensional LBMJP with $\gamma(x,\cdot)$ chosen to be a $N(x, \sigma^2_n I_d)$ distribution, and let $(S^n_t)_{t\ge 0}$ be defined as $S^n_t := Y^n_{\sigma^{-2}_n t}$. Let $(S_t)_{t\ge 0}$ be the overdamped Langevin diffusion process governed by the stochastic differential equation
$$dS_t = \tfrac{1}{2}\nabla \log \pi(S_t)\,dt + dB_t.$$
Then $S^n \xrightarrow{n\to\infty} S$ weakly in the Skorokhod topology.

Proof of Theorem 4.1. The proof is presented in Appendix C. □

Remark 4.2. The proof of Theorem 4.1 can be straightforwardly generalised to non-Gaussian $\gamma$ with similarly decaying variance. If $z_n$ is the jump of the $n$th process under the law of $\gamma$, then the main restriction is to control the moments of $e(x,\sigma_n) = \varphi(x + z_n) - \varphi(x)$, and in particular to guarantee that
$\sup_{x\in C} \mathbb{E}_\gamma\big[e(x,\sigma_n)^8\big]$ can be controlled, in order to handle the higher-order terms (such as the term $A(x,\sigma_n)$ in the proof). In particular, we observe that kernels $\gamma$ such as those considered in Section 3.2 satisfy this property.

5. Discussion

5.1. Use in Monte Carlo simulation. There are two natural ways to use locally-balanced Markov processes for Monte Carlo computation of the integral $\int_E f(x)\,\pi(dx)$. The first is to simulate a realisation of an LBMJP for $T$ units of time, and then compute ergodic averages along the trajectory. For simplicity we assume that $T := T_N = \sum_{j=1}^N \tau_j$, where $\tau_j \sim \mathrm{Exp}(\lambda(X_{j-1}))$ and $\{X_0, X_1, \ldots, X_{N-1}\}$ is the embedded Markov chain with transition kernel $\Gamma_g$ as in (4). This equates to using the estimator
$$\hat f^{MC}_N := \frac{1}{T_N} \int_0^{T_N} f(Y_t)\,dt = \frac{1}{T_N} \sum_{j=1}^N \tau_j f(X_{j-1}).$$

Another approach to computing the same integral is to refrain from simulating the continuous-time process in its entirety, and instead only simulate the embedded Markov chain $\{X_0, X_1, \ldots, X_{N-1}\}$ and use the importance-weighted estimator
$$\hat f^{IS}_N := \sum_{j=1}^N \frac{\lambda(X_{j-1})^{-1}}{\sum_{k=1}^N \lambda(X_{k-1})^{-1}} f(X_{j-1}).$$
In fact it can be shown that $\hat f^{IS}_N$ always has lower asymptotic variance than $\hat f^{MC}_N$ using a Rao–Blackwellisation argument. Computing $\hat f^{IS}_N$ is called importance tempering in Li et al. [2023], and in Theorem B1 of that work the authors quantify how much improvement will be made when comparing the two approaches, depending on the particular LBMJP and function $f$, when $E$ is finite. See also Zhou and Smith [2022] for more details.

Another approach to computing the integral $\int_E f(x)\,\pi(dx)$ is to use the kernel $\Gamma_g$ defined in equation (4) to generate proposals within a Metropolis–Hastings algorithm (Metropolis et al. [1953], Hastings [1970]). If the current state of the chain is $\tilde X_i$ and $Y \sim \Gamma_g(\tilde X_i, \cdot)$, then the next state $\tilde X_{i+1}$ is set to be $Y$ with probability $\min(1, \lambda(\tilde X_i)/\lambda(Y))$; otherwise $\tilde X_{i+1}$ is set to be $\tilde X_i$.
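On a discrete space the Metropolis–Hastings step just described is easy to implement, since $\Gamma_g(x,\cdot)$ can be sampled by normalising the jump rates to the neighbours of $x$, and the acceptance probability involves only the two normalising constants $\lambda(\tilde X_i)$ and $\lambda(Y)$. The following is a minimal sketch on $\mathbb{Z}$ with nearest-neighbour base kernel and $g(t) = \sqrt{t}$; the target and all names here are illustrative assumptions, not choices made in the paper.

```python
import math
import random

def lb_mh_step(x, log_pi, g, rng):
    """One Metropolis-Hastings step using the locally-balanced proposal
    Gamma_g(x, dy) on the integers, with nearest-neighbour base kernel
    gamma(x, .) = (delta_{x-1} + delta_{x+1}) / 2.  The acceptance
    probability reduces to min(1, lambda(x) / lambda(y))."""
    def rates(z):
        # unnormalised locally-balanced jump rates to the neighbours of z
        return [0.5 * g(math.exp(log_pi(z + s) - log_pi(z))) for s in (-1, 1)]

    rx = rates(x)
    lam_x = sum(rx)                        # lambda(x), the normalising constant
    y = x - 1 if rng.random() < rx[0] / lam_x else x + 1   # Y ~ Gamma_g(x, .)
    lam_y = sum(rates(y))
    return y if rng.random() < min(1.0, lam_x / lam_y) else x

# toy target on Z: pi(n) proportional to exp(-n^2 / 8)   (illustrative)
log_pi = lambda n: -n * n / 8.0
g = math.sqrt                              # balancing function, g(t) = t g(1/t)
rng = random.Random(1)
x, chain = 0, []
for _ in range(20000):
    x = lb_mh_step(x, log_pi, g, rng)
    chain.append(x)
mean_est = sum(chain) / len(chain)         # estimate of the mean of pi
```

Since the proposal already favours moves that increase $\pi$, the $\min(1, \lambda(x)/\lambda(y))$ correction is often close to one; on continuous spaces sampling from $\Gamma_g$ exactly is harder, which is one motivation for simulating the jump process itself.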
Following this, $\int_E f(x)\,\pi(dx)$ is estimated by
$$\hat f^{MH}_N := \frac{1}{N} \sum_{j=1}^N f(\tilde X_{j-1}).$$
In Zanella [2020] locally-balanced processes are used in this way to construct Monte Carlo estimators. It is not immediately clear which of $\hat f^{IS}_N$ and $\hat f^{MH}_N$ is more effective for a given problem, but some useful discussion in this direction is provided in Zhou and Smith [2022].

5.2. Non-reversible extensions. There has been recent interest within the Markov chain Monte Carlo sampling community in designing algorithms based on non-reversible Markov processes. The motivation is that such processes can often have desirable mixing properties (e.g. Diaconis et al. [2000]). Here we briefly illustrate how to extend the framework of locally-balanced Markov processes to allow for non-reversible processes to be constructed in a natural way.

Take an arbitrary Markov kernel $\check P$ and mapping $T : E \to E$ such that $Q(z,A) := \delta_{T(z)}(A)$, $\mu Q = \mu$ and the corresponding operator $Qf(z) := \int f(z')\,Q(z,dz')$ is an isometric involution (see Definition 1 of Andrieu and Livingstone [2021]). Then we define the Markov jump kernel
$$\check J(z,dz') = g\left(\frac{\mu(dz')\,Q\check P Q(z',dz)}{\mu(dz)\,\check P(z,dz')}\right) \check P(z,dz').$$
A rigorous treatment of the above Radon–Nikodym derivative is given in Thin et al. [2020]. It is straightforward to show that the condition
$$\mu(dz)\,\check J(z,dz') = \mu(dz')\,Q\check P Q(z',dz),$$
known as skew or modified detailed balance (e.g. Andrieu and Livingstone [2021], Thin et al. [2020]), is satisfied. Similarly the corresponding operator
$$\check L f(z) := \int [f(z') - f(z)]\, g\left(\frac{\mu(dz')\,Q\check P Q(z',dz)}{\mu(dz)\,\check P(z,dz')}\right) \check P(z,dz')$$
is $(\mu, Q)$ self-adjoint, meaning that for any suitable $f, g$ it holds that $\langle f, Lg \rangle_\mu = \langle QLQf, g \rangle_\mu$. The above set-up can therefore be used to construct non-reversible locally balanced Markov jump processes.
We leave a thorough exploration of these non-reversible locally-balanced processes to future work.

Acknowledgements

The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme Stochastic systems for anomalous diffusion, where work on this paper was undertaken. They would also like to thank Andrea Bertazzi for helpful discussions, and Codina Cotar for inviting them to participate at the INI programme. Part of the research was conducted while GV was a postdoctoral fellow at UCL, under SL.

Funding

SL and GV were supported by an EPSRC New Investigator Award (EP/V055380/1). GZ was supported by ERC, through StG “PrSc-HDBayLe” grant (101076564). This work was supported by EPSRC (EP/Z000580/1).

References

Christophe Andrieu and Samuel Livingstone. Peskun–Tierney ordering for Markovian Monte Carlo: beyond the reversible scenario. The Annals of Statistics, 49(4):1958–1981, 2021.
S. Asmussen and P. W. Glynn. A new proof of convergence of MCMC via the ergodic theorem. Statistics & Probability Letters, 81(10):1482–1485, 2011.
Michael Betancourt and Mark Girolami. Hamiltonian Monte Carlo for hierarchical models. Current trends in Bayesian methodology with applications, 79(30):2–4, 2015.
Michael Betancourt, Simon Byrne, Sam Livingstone, and Mark Girolami. The geometric foundations of Hamiltonian Monte Carlo. Bernoulli, pages 2257–2298, 2017.
J. Bierkens, G. O. Roberts, and Pierre-André Zitt. Ergodicity of the zigzag process. The Annals of Applied Probability, 29(4):2266–2301, 2019a. doi: 10.1214/18-AAP1453.
Joris Bierkens, Paul Fearnhead, and Gareth Roberts. The Zig-Zag process and super-efficient sampling for Bayesian analysis of big data. The Annals of Statistics, 47(3):1288, 2019b.
Nawaf Bou-Rabee and Jesús María Sanz-Serna. Randomized Hamiltonian Monte Carlo.
The Annals of Applied Probability, 27(4):2159–2194, 2017.
Alexandre Bouchard-Côté, Sebastian J Vollmer, and Arnaud Doucet. The bouncy particle sampler: A nonreversible rejection-free Markov chain Monte Carlo method. Journal of the American Statistical Association, 113(522):855–867, 2018.
Hyunwoong Chang and Quan Zhou. Dimension-free relaxation times of informed MCMC samplers on discrete spaces. arXiv preprint arXiv:2404.03867, 2024.
Hyunwoong Chang, Changwoo Lee, ZhaoTang Luo, Huiyan Sang, and Quan Zhou. Rapidly mixing multiple-try Metropolis algorithms for model selection problems. Advances in Neural Information Processing Systems, 35:25842–25855, 2022.
Arnak Dalalyan and Lionel Riou-Durand. On sampling from a log-concave density using kinetic Langevin diffusions. Bernoulli, 26(3), 2020.
M. H. A. Davis. Piecewise-Deterministic Markov Processes: A General Class of Non-Diffusion Stochastic Models. Journal of the Royal Statistical Society. Series B (Methodological), 46(3):353–388, 1984. ISSN 00359246.
George Deligiannidis, Alexandre Bouchard-Côté, and Arnaud Doucet. Exponential ergodicity of the bouncy particle sampler. The Annals of Statistics, 47(3), 2019.
P. Diaconis, S. Holmes, and R. M. Neal. Analysis of a nonreversible Markov chain sampler. Ann. Appl. Probab., 10(3):726–752, 2000. ISSN 1050-5164. doi: 10.1214/aoap/1019487508.
Peter J Diggle. Modeling infectious disease distributions: Applications of point process methods. In Handbook of infectious disease data analysis, pages 387–409. Chapman and Hall/CRC, 2019.
D. Down, S. P. Meyn, and R. L. Tweedie.
Exponential and uniform ergodicity of Markov processes. The Annals of Probability, 23(4):1671–1691, 1995.
Simon Duane, Anthony D Kennedy, Brian J Pendleton, and Duncan Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216–222, 1987.
A. Durmus, A. Guillin, and P. Monmarché. Piecewise deterministic Markov processes and their invariant measures. Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, 57(3):1442–1475, 2021. doi: 10.1214/20-AIHP1125.
S. N. Ethier and T. G. Kurtz. Markov processes: characterization and convergence. John Wiley & Sons, 2009.
Michael F Faulkner and Samuel Livingstone. Sampling algorithms in statistical physics: a guide for statistics and machine learning. Statistical Science, 39(1):137–164, 2024.
W. Feller. An introduction to probability theory and its applications, Volume 2, volume 81. John Wiley & Sons, 1991.
Philippe Gagnon. Informed reversible jump algorithms. Electronic Journal of Statistics, 15(2):3951–3995, 2021.
Philippe Gagnon, Florian Maire, and Giacomo Zanella. Improving multiple-try Metropolis with local balancing. Journal of Machine Learning Research, 24(248):1–59, 2023.
A. Gelman, W. R. Gilks, and G. O. Roberts. Weak convergence and optimal scaling of random walk Metropolis algorithms. The Annals of Applied Probability, 7(1):110–120, 1997. doi: 10.1214/aoap/1034625254.
Will Grathwohl, Kevin Swersky, Milad Hashemi, David Duvenaud, and Chris Maddison. Oops I took a gradient: Scalable sampling for discrete distributions. In International Conference on Machine Learning, pages 3831–3841. PMLR, 2021.
Luke Hardcastle, Samuel Livingstone, and Gianluca Baio. Averaging polyhazard models using piecewise deterministic Monte Carlo with applications to data with long-term survivors. arXiv preprint arXiv:2406.14182, 2024.
W. K. Hastings. Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika, 57(1):97–109, 1970. ISSN 00063444, 14643510.
Max Hird, Samuel Livingstone, and Giacomo Zanella. A fresh Take on ‘Barker Dynamics’ for MCMC. In Alexander Keller, editor, Monte Carlo and Quasi-Monte Carlo Methods, pages 169–184, Cham, 2022. Springer International Publishing.
Jere Koskela. Zig-zag sampling for discrete structures and nonreversible phylogenetic MCMC. Journal of Computational and Graphical Statistics, 31(3):684–694, 2022.
D. A. Levin, Y. Peres, and E. L. Wilmer. Markov chains and mixing times. American Mathematical Society, 2006.
Guanxun Li, Aaron Smith, and Quan Zhou. Importance is important: A guide to informed importance tempering methods. arXiv e-prints, pages arXiv–2304, 2023.
Xitong Liang, Samuel Livingstone, and Jim Griffin. Adaptive random neighbourhood informed Markov chain Monte Carlo for high-dimensional Bayesian variable selection. Statistics and Computing, 32(5):84, 2022.
Xitong Liang, Alberto Caron, Samuel Livingstone, and Jim Griffin. Structure learning with adaptive random neighborhood informed MCMC. Advances in Neural Information Processing Systems, 36:40760–40772, 2023a.
Xitong Liang, Samuel Livingstone, and Jim Griffin. Adaptive MCMC for Bayesian variable selection in generalised linear models and survival models. Entropy, 25(9):1310, 2023b.
S Livingstone, M Betancourt, S Byrne, and M Girolami. On the Geometric Ergodicity of Hamiltonian Monte Carlo. Bernoulli, 25(4A):3109–3138, 2019.
Samuel Livingstone and Mark Girolami. Information-geometric Markov chain Monte Carlo methods using diffusions. Entropy, 16(6):3074–3102, 2014.
Samuel Livingstone and Giacomo Zanella. The Barker Proposal: Combining Robustness and Efficiency in
Gradient-Based MCMC. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(2):496–523, 2022. ISSN 1369-7412. doi: 10.1111/rssb.12482.
I. Lytras and P. Mertikopoulos. Tamed Langevin sampling under weaker conditions, 2024.
Jonathan C Mattingly, Andrew M Stuart, and Desmond J Higham. Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise. Stochastic Processes and their Applications, 101(2):185–232, 2002.
Lorenzo Mauri and Giacomo Zanella. Robust Approximate Sampling via Stochastic Gradient Barker Dynamics, 2024.
Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092, 1953.
S. P. Meyn and R. L. Tweedie. Stability of Markovian processes I: criteria for discrete-time chains. Advances in Applied Probability, 24(3):542–574, 1992. doi: 10.2307/1427479.
S. P. Meyn and R. L. Tweedie. Stability of Markovian processes II: continuous-time processes and sampled chains. Advances in Applied Probability, 25(3):487–517, 1993. doi: 10.2307/1427521.
Sean Meyn and Richard L. Tweedie. Markov Chains and Stochastic Stability. Cambridge Mathematical Library. Cambridge University Press, 2nd edition, 2009.
P. Monmarché, M. Rousset, and P.-A. Zitt. Exact targeting of Gibbs distributions using velocity-jump processes. Stochastics and Partial Differential Equations: Analysis and Computations, 2022. doi: 10.1007/s40072-022-00247-9.
J. R. Norris. Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1997. doi: 10.1017/CBO9780511810633.
A. Pazy. Semigroups of linear operators and applications to partial differential equations, volume 44. Springer Science & Business Media, 2012.
Samuel Power and Jacob Vorstrup Goldman.
Accelerated sampling on discrete spaces with non-reversible Markov processes. arXiv preprint arXiv:1912.04681, 2019.
G. O. Roberts and J. S. Rosenthal. Geometric Ergodicity and hybrid Markov chains. Electronic Communications in Probability, 2:13–25, 1997.
G. O. Roberts and R. L. Tweedie. Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika, 83(1):95–110, 1996a.
Gareth O Roberts and Osnat Stramer. Langevin diffusions and Metropolis–Hastings algorithms. Methodology and Computing in Applied Probability, 4:337–357, 2002.
Gareth O Roberts and Richard L Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4):341–363, 1996b.
Peter J Rossky, Jimmie D Doll, and Harold L Friedman. Brownian dynamics as smart Monte Carlo simulation. The Journal of Chemical Physics, 69(10):4628–4633, 1978.
Nikola Sandrić. A Note on the Uniform Ergodicity of Diffusion Processes. Bulletin of the Malaysian Mathematical Sciences Society, 48(3):1–16, 2025.
Emanuele Sansone. LSB: Local self-balancing MCMC in discrete spaces. In International Conference on Machine Learning, pages 19205–19220. PMLR, 2022.
R. F. Serfozo. Reversible Markov processes on general spaces and spatial migration processes. Advances in Applied Probability, 37(3):801–818, 2005.
Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis Titsias, and Lester Mackey. Gradient estimation with discrete Stein operators. Advances in Neural Information Processing Systems, 35:25829–25841, 2022.
Haoran Sun, Hanjun Dai, and Dale Schuurmans. Optimal scaling for locally balanced proposals in discrete spaces. Advances in
Neural Information Processing Systems, 35:23867–23880, 2022.
Haoran Sun, Bo Dai, Charles Sutton, Dale Schuurmans, and Hanjun Dai. Any-scale Balanced Samplers for Discrete Space. In The Eleventh International Conference on Learning Representations, 2023.
Achille Thin, Nikita Kotelevskii, Christophe Andrieu, Alain Durmus, Eric Moulines, and Maxim Panov. Nonreversible MCMC from conditional invertible transforms: a complete recipe with convergence guarantees. arXiv preprint arXiv:2012.15550, 2020.
L. Tierney. A note on Metropolis–Hastings kernels for general state spaces. Annals of Applied Probability, pages 1–9, 1998.
Willem van den Boom, Alexandros Beskos, and Maria De Iorio. The G-Wishart weighted proposal algorithm: Efficient posterior computation for Gaussian graphical models. Journal of Computational and Graphical Statistics, 31(4):1215–1224, 2022.
Giorgos Vasdekis and Gareth O Roberts. A note on the polynomial ergodicity of the one-dimensional Zig-Zag process. Journal of Applied Probability, 59(3):895–903, 2022.
Jure Vogrinc, Samuel Livingstone, and Giacomo Zanella. Optimal design of the Barker proposal and other locally balanced Metropolis–Hastings algorithms. Biometrika, 110(3):579–595, 2022. ISSN 1464-3510. doi: 10.1093/biomet/asac056.
Tatiana Xifara, Chris Sherlock, Samuel Livingstone, Simon Byrne, and Mark Girolami. Langevin diffusions and the Metropolis-adjusted Langevin algorithm. Statistics & Probability Letters, 91:14–19, 2014.
Giacomo Zanella. Informed proposals for local MCMC in discrete spaces. Journal of the American Statistical Association, 115(530):852–865, 2020.
Ruqi Zhang, Xingchao Liu, and Qiang Liu. A Langevin-like sampler for discrete distributions. In International Conference on Machine Learning, pages 26375–26396. PMLR, 2022.
Quan Zhou and Aaron Smith. Rapid convergence of informed importance tempering.
In International Conference on Artificial Intelligence and Statistics, pages 10939–10965. PMLR, 2022.
Quan Zhou, Jun Yang, Dootika Vats, Gareth O Roberts, and Jeffrey S Rosenthal. Dimension-free mixing for high-dimensional Bayesian variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 84(5):1751–1784, 2022.

Appendix A. Proof of Proposition 2.4

Proof of Proposition 2.4. We will prove the result when $h \to 0^+$. The case where $h \to 0^-$ follows similarly. Recall that when the process starts from $x \in E$, $\tau_1 \sim \mathrm{Exp}(\lambda(x))$ is the first jumping time. Therefore, for $h > 0$, on the event $\{\tau_1 > h\}$ we have $X_h = x$. At the same time the density of $\tau_1$ is given by $f_{\tau_1}(s) = \lambda(x)\exp\{-\lambda(x)s\}$. We then write
$$\frac{1}{h}\big(E_x[f(X_h)] - f(x)\big) = \frac{1}{h}E_x\big[(f(X_h) - f(x))\,\mathbb{I}_{\tau_1 \le h}\big] + \frac{1}{h}E_x\big[(f(X_h) - f(x))\,\mathbb{I}_{\tau_1 > h}\big]$$
$$= \frac{1}{h}\int_E \int_0^h E_y[f(X_{h-s})]\,\lambda(x)\exp\{-\lambda(x)s\}\,ds\,\Gamma_g(x,dy) - \frac{1}{h}f(x)P_x(\tau_1 > h)$$
$$= \int_E \frac{1}{h}\int_0^h E_y[f(X_{h-s})]\,\lambda(x)\exp\{-\lambda(x)s\}\,ds\,\Gamma_g(x,dy) - f(x)\frac{1}{h}\big(1 - \exp\{-\lambda(x)h\}\big). \quad (28)$$
A simple calculation shows that
$$f(x)\frac{1}{h}\big(1 - \exp\{-\lambda(x)h\}\big) \xrightarrow{h\to 0} \lambda(x)f(x). \quad (29)$$
Furthermore, for any $y \in E$, consider the quantity
$$A(h,y) := \frac{1}{h}\int_0^h E_y[f(X_{h-s})]\,\lambda(x)\exp\{-\lambda(x)s\}\,ds = \lambda(x)\frac{1}{h}\int_0^h E_y[f(X_u)]\exp\{-\lambda(x)(h-u)\}\,du$$
$$= \lambda(x)\exp\{-\lambda(x)h\}\frac{1}{h}\int_0^h E_y[f(X_u)]\exp\{\lambda(x)u\}\,du. \quad (30)$$
Now, for any $u, h > 0$ let $J = \{\text{there exists a jump on the interval } (u, u+h)\}$. We calculate
$$|E_y[f(X_{u+h})] - E_y[f(X_u)]| = |E_y[f(X_{u+h}) - f(X_u)]| = |E_y[(f(X_{u+h}) - f(X_u))\,\mathbb{I}_J]|$$
$$\le E_y[|f(X_{u+h}) - f(X_u)|\,\mathbb{I}_J] \le 2\|f\|_\infty P_y(J) \xrightarrow{h\to 0} 0.$$
Therefore the function $u \mapsto E_y[f(X_u)]$ is continuous, and the same holds for the function $u \mapsto E_y[f(X_u)]\exp\{\lambda(x)u\}$. From the Fundamental Theorem of Calculus and (30), we get for all $y \in E$,
$$A(h,y) \xrightarrow{h\to 0} \lambda(x)f(y),$$
and since $|f|$ is bounded, for all $h \in (0,1)$, $|A(h,y)| \le \|f\|_\infty \lambda(x)$. By the Bounded Convergence Theorem,
$$\int_E \frac{1}{h}\int_0^h E_y[f(X_{h-s})]\,\lambda(x)\exp\{-\lambda(x)s\}\,ds\,\Gamma_g(x,dy) \xrightarrow{h\to 0} \lambda(x)\int_E f(y)\,\Gamma_g(x,dy).$$
Combining this with (28) and (29), the result follows. □

Appendix B. Proof of Theorem 3.1

In order to prove the result, we will need a series of intermediate propositions and lemmas. We begin by defining the required quantities.

Definition B.1. For $k \in \mathbb{N}$, we write $h_k =$
$\inf_{t\ge 0}\{Y_t = k\}$ for the hitting time of $k$. Assume that the current state of the process is $n \in \mathbb{N}$. We denote the probabilities associated with the process moving to the right or to the left by
$$p(n) := P(X_1 = n+1 \mid X_0 = n) = \frac{g\left(\frac{\pi(n+1)}{\pi(n)}\right)}{g\left(\frac{\pi(n-1)}{\pi(n)}\right) + g\left(\frac{\pi(n+1)}{\pi(n)}\right)}, \qquad q(n) := 1 - p(n),$$
respectively. Let
$$a_n := \frac{1}{\lambda(n)q(n)} = \frac{2}{g\left(\frac{\pi(n-1)}{\pi(n)}\right)},$$
and define the odds associated with moving to the right as $b(n) = p(n)/q(n)$. For a fixed $k \in \mathbb{N}$, and for $n \ge k+1$, we also define
$$\gamma_n := 1 + b(n-1) + b(n-1)b(n-2) + \ldots + b(n-1)b(n-2)b(n-3)\cdots b(k+1),$$
where we omit the dependence of the quantity $\gamma$ on $k$ for ease of notation.

The following lemma relates the hitting times of the process to the quantities defined above.

Lemma B.1. For all $N > k \in \mathbb{N}$, assume that the LBMJP starts from $N$. Then we have
$$E_N[h_k] = \sum_{n=k}^{N-1} a_{n+1}\gamma_{n+1} + E_{N+1}[h_N]\,b(N)\gamma_N. \quad (31)$$

Proof of Lemma B.1. We observe that
$$E_N[h_k] = \sum_{n=k}^{N-1} E_{n+1}[h_n]. \quad (32)$$
Let us write $X^n$ to denote the Markov chain with transition kernel $\Gamma_g$ starting from $n$. We also write $\tau_m \sim \mathrm{Exp}(\lambda(m))$ to denote the time the process $Y$ stays in $m \in \mathbb{N}$ before jumping. Note that
$$E_n[h_{n-1}] = E_n\big[E[h_{n-1} \mid X_1]\big] = E_n\big[q(n)\tau_n + p(n)\big(\tau_n + E_{n+1}[h_{n-1}]\big)\big] \quad (33)$$
$$= q(n)\frac{1}{\lambda(n)} + p(n)\frac{1}{\lambda(n)} + p(n)E_{n+1}[h_{n-1}] = \frac{1}{\lambda(n)} + p(n)E_{n+1}[h_{n-1}].$$
Now, from the Markov property we have $E_{n+1}[h_{n-1}] = E_{n+1}[h_n] + E_n[h_{n-1}]$, so (33) becomes
$$E_n[h_{n-1}] = \frac{1}{\lambda(n)} + p(n)E_{n+1}[h_n] + p(n)E_n[h_{n-1}].$$
Rearranging, we get
$$E_n[h_{n-1}] = \frac{1}{\lambda(n)q(n)} + \frac{p(n)}{q(n)}E_{n+1}[h_n] = a(n) + b(n)E_{n+1}[h_n]. \quad (34)$$
We therefore have that $E_n[h_{n-1}] = a(n) + a(n+1)b(n) + b(n+1)b(n)E_{n+2}[h_{n+1}]$. Applying this formula recursively gives
$$E_n[h_{n-1}] = a(n) + a(n+1)b(n) + a(n+2)b(n+1)b(n) + \ldots + a(N)b(N-1)b(N-2)\cdots b(n) + b(N)b(N-1)\cdots b(n)E_{N+1}[h_N]. \quad (35)$$
Using (32) and summing over $n$, we recover (31). □

An interesting corollary is the following.
It shows that the behaviour of the series of the $a_n$ is crucial for the process to come down from infinity in finite time, which is itself crucial for uniform ergodicity.

Corollary B.1. If $\sum_{n=1}^\infty a_n = +\infty$, then for any $k$, $\lim_{N\to+\infty} E_N[h_k] = +\infty$.

Proof of Corollary B.1. Since $\gamma(n) \ge 1$, from (31) we get for all $N > k$,
$$E_N[h_k] \ge \sum_{n=k}^{N-1} a_n,$$
and the result follows on letting $N \to +\infty$. □

We will now make the following assumption and state any further results in this subsection assuming that it holds. As we will see later, this assumption is weaker than Assumption 3.1.

Assumption B.1. Assume that there exist $k \in \mathbb{N}$, $p = p(k)$ and $\lambda = \lambda(k)$ such that for all $n \ge k$,
$$p(n) \le p < \tfrac{1}{2} \quad \text{and} \quad \lambda(n) \ge \lambda > 0.$$
Assume further that
$$n\,b(n) \xrightarrow{n\to\infty} 0. \quad (36)$$

We then have the following.

Proposition B.1. Assume that Assumption B.1 holds. Then for any compact set $C \subset \mathbb{N}$,
$$\limsup_{N\to\infty} E_N[h_C] < \infty \iff \sum_{n=1}^\infty a(n) < \infty. \quad (37)$$

Proof of Proposition B.1. Let $C \subset \mathbb{N}$ be a compact set and let $k = \sup C$. We first observe that for any $N > k$, for the LBMJP with $\gamma$ as in (25), $h_C = h_k$. We will therefore prove (37) with $h_k$ in place of $h_C$. First of all, the LHS of (37) implies the RHS due to Corollary B.1. Let us assume that the RHS of (37) holds. Let $\lambda > 0$ and $p \in (0,1/2)$ be as in Assumption B.1. Let $\tilde Y$ be a Markov jump process defined on $\mathbb{N}$ with constant jump rate $\lambda$ and with jump law given by $\tilde\Gamma(x,dy) = p\,\delta_{x+1} + (1-p)\,\delta_{x-1}$. Let $\tilde h_n = \inf_{t\ge 0}\{\tilde Y_t = n\}$, and let $\tilde X$ be the discrete-time random walk on $\mathbb{N}$ with constant probability $p$ of jumping to the right. We can couple the processes $(Y_t)_{t\ge 0}$ and $(\tilde Y_t)_{t\ge 0}$ by using the same $\mathrm{Exp}(1)$ random variables to generate their jumping times and
the same uniform distributions to generate their jumps. A simple comparison between the two coupled processes shows that for any $n \ge k$,
$$E_{n+1}[h_n] \le E_{n+1}\big[\tilde h_n\big],$$
and therefore
$$E_{n+1}[h_n] \le E_{n+1}\big[\tilde h_n\big] \le \frac{1}{\lambda}\, E_{n+1}\left[\inf_{m\in\mathbb{N}}\{\tilde X_m = n\}\right] = \frac{1}{(1-2p)\lambda}, \quad (38)$$
where for the final equality we use standard results on the hitting times of random walks (e.g. Levin et al. [2006], above equation (2.14)). Furthermore, we observe that for all $n \ge k$, $b(n) < 1$. Therefore, using (36), we get that for $n \ge k$,
$$b(n)\gamma_n = b(n) + b(n)b(n-1) + \cdots + b(n)b(n-1)\cdots b(k+1) \le (n-k)\,b(n) \le n\,b(n) \xrightarrow{n\to+\infty} 0. \quad (39)$$
Combining this with (38), we get
$$\lim_{N\to\infty} b(N)\gamma_N E_{N+1}[h_N] = 0. \quad (40)$$
Finally, since for all $n \ge k$, $b(n) < 1$ and $b(n) \xrightarrow{n\to\infty} 0$, we have $\gamma(n) \xrightarrow{n\to\infty} 1$. Since $\sum_{n=1}^\infty a(n) < \infty$, we get
$$\sum_{n=k}^\infty a(n+1)\gamma(n+1) < \infty. \quad (41)$$
This, along with (40) and Lemma B.1, shows that $\limsup_{N\to\infty} E_N[h_k] < \infty$. This completes the proof. □

Proof of Theorem 3.1. From Assumption 3.1 we get that there exists $k \in \mathbb{N}$ such that for all $n > k$,
$$\frac{\pi(n+1)}{\pi(n)} \le 1, \qquad \frac{\pi(n-1)}{\pi(n)} \ge \exp\left\{a\beta(n-1)^{\beta-1}\right\},$$
and since $g$ is increasing, $g(1) = 1$ and $g(t) \ge t^{\tilde a}$ for $t \ge 1$, we get
$$b(n) = \frac{g\left(\frac{\pi(n+1)}{\pi(n)}\right)}{g\left(\frac{\pi(n-1)}{\pi(n)}\right)} \le \exp\left\{-a\tilde a\beta(n-1)^{\beta-1}\right\}, \quad (42)$$
and therefore $\lim_{n\to+\infty} n\,b(n) = 0$.
(43)
Furthermore, for all $n > k$,
$$\lambda(n) = g\left(\frac{\pi(n-1)}{\pi(n)}\right) + g\left(\frac{\pi(n+1)}{\pi(n)}\right) \ge \left(\frac{\pi(n-1)}{\pi(n)}\right)^{\tilde a} \ge \exp\left\{\tilde a a\beta n^{\beta-1}\right\} \ge \exp\left\{\tilde a a\beta k^{\beta-1}\right\},$$
and due to (42),
$$p(n) = \frac{g\left(\frac{\pi(n+1)}{\pi(n)}\right)}{g\left(\frac{\pi(n-1)}{\pi(n)}\right) + g\left(\frac{\pi(n+1)}{\pi(n)}\right)} \le \frac{g\left(\frac{\pi(n+1)}{\pi(n)}\right)}{g\left(\frac{\pi(n-1)}{\pi(n)}\right)} \le \exp\left\{-a\tilde a\beta k^{\beta-1}\right\} < \frac{1}{2},$$
where the last inequality holds by choosing $k$ sufficiently large. Therefore, Assumption B.1 holds. Furthermore, we observe that for $n > k$,
$$a(n) = \frac{2}{g\left(\frac{\pi(n-1)}{\pi(n)}\right)} \le 2\exp\left\{-\tilde a a\beta(n-1)^{\beta-1}\right\},$$
and therefore $\sum_{n=1}^\infty a(n) < \infty$. By Proposition B.1, for any compact set $C$, there exists $M > 0$ such that $E_N[h_C] \le M$ for all $N \in \mathbb{N}$. Note that by Assumption 3.1 and the form of $\gamma$, the conditions of Proposition 3.4 are satisfied, so every compact set $C$ is small. Therefore, all assumptions of Proposition 3.5 are satisfied and the result follows. □

Appendix C. Proof of Theorem 4.1

Proof. Let $t(x,y) = \pi(y)/\pi(x)$ and let $b(x) = \log g(\exp\{x\})$, so that $g(t) = \exp\{b(\log t)\}$. For convenience, let us write $\varphi = \log \pi \in C^3$. Let $L^n$ and $L$ be the weak generators of $S^n$ and $S$ respectively. Recall from Theorem 2.3 that for $f \in C^\infty_c(\mathbb{R}^d)$,
$$L^n f(x) = \sigma_n^{-2}\,\mathbb{E}_{Y\sim N(x,\sigma_n^2 I_d)}\big[(f(Y) - f(x))\,g\circ t(x,Y)\big] = \sigma_n^{-2}\,\mathbb{E}_{Z\sim N(0,I_d)}\big[(f(x+\sigma_n Z) - f(x))\exp\{b(\varphi(x+\sigma_n Z) - \varphi(x))\}\big],$$
while it is well known (see e.g. Gelman et al. [1997]) that
$$Lf(x) = \tfrac{1}{2}\nabla\varphi(x)\cdot\nabla f(x) + \tfrac{1}{2}\Delta f(x).$$
Let $f \in C^\infty_c$ and let $C$ be a compact set. From $g(t) = t\,g(1/t)$ we get $g'(t) = g(1/t) - t^{-1}g'(1/t)$, so $g'(1) = g(1)/2 = 1/2$. Therefore $b'(0) = 1/2$. Also $b(0) = 0$. Noting that $\varphi$ is continuous, a Taylor series expansion of $b$ around $0$ shows that there exists $\sigma_n^{(1)} \in (0,\sigma_n)$ such that
$$\exp\{b(\varphi(x+\sigma_n Z) - \varphi(x))\} = \exp\left\{\tfrac{1}{2}e(x,\sigma_n) + \tfrac{1}{2}b''\big(e(x,\sigma_n^{(1)})\big)\,e(x,\sigma_n)^2\right\} \quad (44)$$
for $e(x,\sigma_n) := \varphi(x+\sigma_n Z) - \varphi(x)$.
Using another Taylor series expansion, we observe that for any $\sigma_n$ there exists $\sigma_n^{(3)} \in (0,\sigma_n)$ such that
$$e(x,\sigma_n) = \sigma_n \nabla\varphi(x)\cdot Z + \tfrac{1}{2}\sigma_n^2 Z^T \nabla^2\varphi(x + \sigma_n^{(3)} Z) Z. \quad (45)$$
Therefore, the facts that $-M I_d \le \nabla^2\varphi(x) \le M I_d$ and $\|\nabla\varphi\|_{\infty,C} = \sup_{x\in C}|\nabla\varphi(x)| < \infty$ imply that for any $Z$,
$$\sup_{x\in C} |e(x,\sigma_n)| \le \sigma_n \|\nabla\varphi\|_{\infty,C}\, \|Z\|_2 + \tfrac{1}{2}\sigma_n^2 M \|Z\|_2^2,$$
which in turn implies that for all $k > 0$ there exists $M_k > 0$ such that
$$\mathbb{E}_{Z\sim N(0,I_d)}\left[\sup_{x\in C} |e(x,\sigma_n)|^k\right] \le M_k \sigma_n^k. \quad (46)$$
Now $\exp\{u\} = 1 + u + \exp\{\xi\}u^2/2$ for some $\xi \in (-u,u)$. Therefore, using (44) we get that there exists a $\xi \in (\min\{0, b(e(x,\sigma_n))\}, \max\{0, b(e(x,\sigma_n))\})$ such that
$$\exp\{b(\varphi(x+\sigma_n Z) - \varphi(x))\} = \exp\left\{\tfrac{1}{2}e(x,\sigma_n) + \tfrac{1}{2}b''\big(e(x,\sigma_n^{(1)})\big)e(x,\sigma_n)^2\right\}$$
$$= 1 + \tfrac{1}{2}e(x,\sigma_n) + \tfrac{1}{2}b''\big(e(x,\sigma_n^{(1)})\big)e(x,\sigma_n)^2 + \tfrac{1}{2}\exp\{\xi\}\left(\tfrac{1}{2}e(x,\sigma_n) + \tfrac{1}{2}b''\big(e(x,\sigma_n^{(1)})\big)e(x,\sigma_n)^2\right)^2$$
$$= 1 + \tfrac{1}{2}e(x,\sigma_n) + \left(\tfrac{1}{2}b''\big(e(x,\sigma_n^{(1)})\big) + \tfrac{1}{8}\exp\{\xi\}\right)e(x,\sigma_n)^2 + \tfrac{1}{4}\exp\{\xi\}\,b''\big(e(x,\sigma_n^{(1)})\big)e(x,\sigma_n)^3 + \tfrac{1}{8}\exp\{\xi\}\left[b''\big(e(x,\sigma_n^{(1)})\big)\right]^2 e(x,\sigma_n)^4.$$
At the same time, from a Taylor expansion up to second degree, there exists $\sigma_n^{(2)} \in (0,\sigma_n)$ such that for any $Z \in \mathbb{R}^d$,
$$f(x+\sigma_n Z) - f(x) = \sigma_n \nabla f(x)\cdot Z + \tfrac{1}{2}\sigma_n^2 Z^T\nabla^2 f(x) Z + \tfrac{1}{6}\sigma_n^3 \sum_{i,j,k=1}^d \partial_i\partial_j\partial_k f(x+\sigma_n^{(2)}Z)\, Z_i Z_j Z_k.$$
Overall this gives
$$(f(x+\sigma_n Z) - f(x))\exp\{b(\varphi(x+\sigma_n Z) - \varphi(x))\} = \quad (47)$$
$$\sigma_n \nabla f(x)\cdot Z + \tfrac{1}{2}\sigma_n^2 Z^T\nabla^2 f(x)Z + \tfrac{1}{6}\sigma_n^3 \sum_{i,j,k=1}^d \partial_i\partial_j\partial_k f(x+\sigma_n^{(2)}Z)Z_iZ_jZ_k$$
$$+ \tfrac{1}{2}\sigma_n e(x,\sigma_n)\nabla f(x)\cdot Z + \tfrac{1}{4}\sigma_n^2 e(x,\sigma_n) Z^T\nabla^2 f(x)Z + \tfrac{1}{12}\sigma_n^3 e(x,\sigma_n)\sum_{i,j,k=1}^d \partial_i\partial_j\partial_k f(x+\sigma_n^{(2)}Z)Z_iZ_jZ_k$$
$$+ \sigma_n e(x,\sigma_n)^2\left(\tfrac{1}{2}b''\big(e(x,\sigma_n^{(1)})\big) + \tfrac{1}{8}\exp\{\xi\}\right)\nabla f(x)\cdot Z + \tfrac{1}{2}\sigma_n^2 e(x,\sigma_n)^2\left(\tfrac{1}{2}b''\big(e(x,\sigma_n^{(1)})\big) + \tfrac{1}{8}\exp\{\xi\}\right)Z^T\nabla^2 f(x)Z$$
$$+ \tfrac{1}{6}\sigma_n^3 e(x,\sigma_n)^2\left(\tfrac{1}{2}b''\big(e(x,\sigma_n^{(1)})\big) + \tfrac{1}{8}\exp\{\xi\}\right)\sum_{i,j,k=1}^d \partial_i\partial_j\partial_k f(x+\sigma_n^{(2)}Z)Z_iZ_jZ_k$$
$$+ \tfrac{1}{4}\sigma_n e(x,\sigma_n)^3\exp\{\xi\}\,b''\big(e(x,\sigma_n^{(1)})\big)\nabla f(x)\cdot Z + \tfrac{1}{8}\sigma_n^2 e(x,\sigma_n)^3\exp\{\xi\}\,b''\big(e(x,\sigma_n^{(1)})\big)Z^T\nabla^2 f(x)Z$$
$$+ \tfrac{1}{24}\sigma_n^3 e(x,\sigma_n)^3\exp\{\xi\}\,b''\big(e(x,\sigma_n^{(1)})\big)\sum_{i,j,k=1}^d \partial_i\partial_j\partial_k f(x+\sigma_n^{(2)}Z)Z_iZ_jZ_k$$
$$+ \tfrac{1}{8}\sigma_n e(x,\sigma_n)^4\exp\{\xi\}\left[b''\big(e(x,\sigma_n^{(1)})\big)\right]^2\nabla f(x)\cdot Z + \tfrac{1}{16}\sigma_n^2 e(x,\sigma_n)^4\exp\{\xi\}\left[b''\big(e(x,\sigma_n^{(1)})\big)\right]^2 Z^T\nabla^2 f(x)Z$$
$$+ \tfrac{1}{48}\sigma_n^3 e(x,\sigma_n)^4\exp\{\xi\}\left[b''\big(e(x,\sigma_n^{(1)})\big)\right]^2\sum_{i,j,k=1}^d \partial_i\partial_j\partial_k f(x+\sigma_n^{(2)}Z)Z_iZ_jZ_k.$$
Noting that $\mathbb{E}_{Z\sim N(0,I_d)}[\nabla f(x)\cdot Z] = 0$ and that $\mathbb{E}_{Z\sim N(0,I_d)}[Z^T\nabla^2 f(x)Z] = \Delta f(x)$, we get that
$$L^n f(x) = \sigma_n^{-2}\,\mathbb{E}_{Z\sim N(0,I_d)}\big[(f(x+\sigma_n Z) - f(x))\exp\{b(\varphi(x+\sigma_n Z) - \varphi(x))\}\big] \quad (48)$$
$$= \tfrac{1}{2}\Delta f(x) + \tfrac{1}{2}\sigma_n^{-1}\,\mathbb{E}_{Z\sim N(0,I_d)}[e(x,\sigma_n)\nabla f(x)\cdot Z] + \sigma_n^{-2}A(x,\sigma_n),$$
where $A(x,\sigma_n)$ is the expectation of the remaining terms of the sum in (47) (i.e. all except the first, second and fourth terms). We will now control the term $A(x,\sigma_n)$ and prove it to be of order $\sigma_n^3$. First of all, recall that $\xi \le \max\{0, b(e(x,\sigma_n))\}$.
Furthermore, from Lemma 2.1 we have $\min\{1,t\} \le g(t) \le \max\{1,t\}$; therefore, for any $k \in \mathbb{N}$,
$$\begin{aligned}\exp\{k\xi\} &\le \exp\{\max\{0, k\,b(e(x,\sigma_n))\}\} = \exp\{\max\{0, k\log g(\exp\{e(x,\sigma_n)\})\}\} \quad (49)\\ &= \max\big\{1, \big(g(\exp\{e(x,\sigma_n)\})\big)^k\big\} \le \max\big\{1, \big(\exp\{e(x,\sigma_n)\}\big)^k\big\} = \max\left\{1, \left(\frac{\pi(x+\sigma_nZ)}{\pi(x)}\right)^{k}\right\} \le \max\left\{1, \frac{\tilde M^k}{\pi(x)^k}\right\},\end{aligned}$$
which is bounded uniformly in $x$ on the compact set $C$, since $\pi \in C^0$ and $\pi(x)>0$ for all $x\in\mathbb{R}^d$. At the same time, in order to control $A(x,\sigma_n)$ we need to control the term $b''(e(x,\sigma_n^{(1)}))$ appearing in the sum. To do this, note from (44) that
$$b''(e(x,\sigma_n^{(1)})) = 2\,\frac{b(e(x,\sigma_n)) - \frac12 e(x,\sigma_n)}{e(x,\sigma_n)^2}$$
when $x, Z \in \mathbb{R}^d$ are such that $e(x,\sigma_n) \ne 0$, and we can define $b''(e(x,\sigma_n^{(1)})) = b''(0)$ when $e(x,\sigma_n) = 0$. Let $z(e) = 2(b(e)-\frac12 e)/e^2$. Since $b(0) = 0$ and $b'(0) = \frac12$, from L'Hôpital's rule we get
$$\lim_{e\to 0} z(e) = b''(0) = \frac12 + g''(1) - g'(1)^2.$$
Therefore, there exists $e_0$ such that for any $x\in C$ and $Z\in\mathbb{R}^d$ with $|e(x,\sigma_n)| < e_0$,
$$\left|2\,\frac{b(e(x,\sigma_n))-\frac12 e(x,\sigma_n)}{e(x,\sigma_n)^2}\right| \le 1 + |g''(1)| + g'(1)^2.$$
On the other hand, using a similar argument as in (49), we get that $\sup_{x\in C}|b(e(x,\sigma))| < \infty$, which implies that
$$\sup_{x\in C,\ Z\in\mathbb{R}^d,\ e(x,\sigma_n)\ge e_0}\left|2\,\frac{b(e(x,\sigma_n))-\frac12 e(x,\sigma_n)}{e(x,\sigma_n)^2}\right| < \infty.$$
Therefore,
$$\sup_{x\in C,\ Z\in\mathbb{R}^d}\big|b''(e(x,\sigma_n^{(1)}))\big| = \sup_{x\in C,\ Z\in\mathbb{R}^d}\left|2\,\frac{b(e(x,\sigma_n))-\frac12 e(x,\sigma_n)}{e(x,\sigma_n)^2}\right| < \infty.$$
(50)

Using the fact that $f\in C_c^{\infty}$, that $Z\sim N(0,I_d)$, that for all $n\in\mathbb{N}$ the variable $e(x,\sigma_n)$ has all moments finite (due to (46)), using (50) and (46), and by carefully considering all the terms of the sum in $A(x,\sigma_n)$, we get that
$$\limsup_{n\to\infty}\frac{\sup_{x\in C}|A(x,\sigma_n)|}{\sigma_n^3} < \infty. \quad (51)$$
Furthermore, recall that due to (45) we have
$$\begin{aligned} e(x,\sigma_n)\nabla f(x)\cdot Z &= \sigma_n(\nabla f(x)\cdot Z)(\nabla\varphi(x)\cdot Z) + \frac12\sigma_n^2\big(Z^{T}\nabla^2\varphi(x+\sigma_n^{(3)}Z)Z\big)(\nabla f(x)\cdot Z) \\ &= \sigma_n\sum_{i,j=1}^d \partial_i f(x)\,\partial_j\varphi(x)\,Z_iZ_j + \frac12\sigma_n^2\sum_{i,j,k=1}^d\partial_k f(x)\,\partial_i\partial_j\varphi(x+\sigma_n^{(3)}Z)\,Z_iZ_jZ_k,\end{aligned}$$
and therefore
$$E_{Z\sim N(0,I_d)}\big[e(x,\sigma_n)\nabla f(x)\cdot Z\big] = \sigma_n\nabla\varphi(x)\cdot\nabla f(x) \quad (52)$$
$$\qquad + \sigma_n^2\,E_{Z\sim N(0,I_d)}\!\left[\frac12\sum_{i,j,k=1}^d\partial_k f(x)\,\partial_i\partial_j\varphi(x+\sigma_n^{(3)}Z)\,Z_iZ_jZ_k\right], \quad (53)$$
and, using again the fact that for all $y\in\mathbb{R}^d$, $-MI_d \preceq \nabla^2\varphi(y) \preceq MI_d$, we get
$$\sup_{x\in C}\left|E_{Z\sim N(0,I_d)}\!\left[\frac12\sum_{i,j,k=1}^d\partial_k f(x)\,\partial_i\partial_j\varphi(x+\sigma_n^{(3)}Z)\,Z_iZ_jZ_k\right]\right| < \infty. \quad (54)$$
Overall, using (48), (51), (52) and (54), we get
$$L_nf(x) = \left(\frac12\nabla\varphi(x)\cdot\nabla f(x) + \frac12\Delta f(x)\right) + \sigma_n^{-2}B(x,\sigma_n) \quad\text{with}\quad \limsup_{n\to\infty}\frac{\sup_{x\in C}|B(x,\sigma_n)|}{\sigma_n^3} < \infty.$$
For all $n$, the martingale problem $(\mu, L_n, C_c^{\infty})$ is solved by the jump process, due to Theorem 2.3. All assumptions of Theorem 8.1 of Monmarché et al. [2022] are satisfied and the result follows. $\square$
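The convergence of the prelimit generator $L_n f$ to the Langevin generator $\frac12\nabla\varphi\cdot\nabla f + \frac12\Delta f$ can be checked numerically in one dimension by evaluating the Gaussian expectation with Gauss–Hermite quadrature. A minimal sketch, assuming numpy; the standard-normal target, the test function $f(x)=x^2$, and the rescaled Barker function $g(t)=2t/(1+t)$ are illustrative choices, not taken from the paper:

```python
import numpy as np

# probabilists' Gauss-Hermite nodes/weights: E[h(Z)] for Z ~ N(0, 1)
z, w = np.polynomial.hermite_e.hermegauss(60)
w = w / np.sqrt(2 * np.pi)

g = lambda t: 2 * t / (1 + t)   # rescaled Barker balancing function, g(1) = 1
phi = lambda x: -x**2 / 2       # log-density of a standard normal target
f = lambda x: x**2              # smooth stand-in for a C_c^infty test function

def Ln_f(x, s):
    # prelimit generator: s^{-2} E[(f(x+sZ) - f(x)) g(exp(phi(x+sZ) - phi(x)))]
    y = x + s * z
    return np.sum(w * (f(y) - f(x)) * g(np.exp(phi(y) - phi(x)))) / s**2

x0 = 0.5
limit = 0.5 * (-x0) * (2 * x0) + 0.5 * 2   # (1/2) phi'(x) f'(x) + (1/2) f''(x) = 1 - x0^2
approx = Ln_f(x0, 0.01)
```

For small $\sigma_n$ the quadrature value `approx` should sit close to `limit` $= 1 - x_0^2$, with a remainder of the order established in the proof.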
https://arxiv.org/abs/2504.13322v1
Continuous-time filtering in Lie groups: estimation via the Fréchet mean of solutions to stochastic differential equations

Marc Arnaudon¹, Magalie Bénéfice², and Audrey Giremus³

¹Univ. Bordeaux, CNRS, Bordeaux INP, IMB, UMR 5251, F-33400 Talence, France
²Université de Lorraine, CNRS, IECL, F-54000 Nancy, France
³Univ. Bordeaux, CNRS, Bordeaux INP, IMS, UMR 5218, F-33400 Talence, France

Abstract. We compute the Fréchet mean $E_t$ of the solution $X_t$ to a continuous-time stochastic differential equation in a Lie group. It provides an estimator with minimal variance of $X_t$. We use it in the context of Kalman filtering and, more precisely, to infer rotation matrices. In this paper, we focus on the prediction step between two consecutive observations. Compared to state-of-the-art approaches, our assumptions on the model are minimal.

Keywords: Filtering · Lie groups · rotation matrices · Fréchet mean · Prediction

1 Introduction

Optimal filtering has been an active area of research since the introduction of the Kalman filter in the 1960s [7], with applications spanning from navigation and target tracking to industrial control. Over the past decades, different variants have been proposed to overcome the limitations of the seminal algorithm. They make it possible to deal with non-Gaussian uncertainties or non-linearities in the observation or dynamic models, which jointly define the state space representation. More recently, the topic of inferring parameters that do not lie on Euclidean spaces has emerged, with a focus on Lie groups. This issue occurs for instance when estimating rotation matrices, as required in inertial navigation, or, more generally, rigid body transformations such as the pose of a camera in computer vision. Filters designed for that purpose either provide analytical solutions at the cost of restrictive assumptions on the state space representation or leverage the Bayesian formalism to calculate approximate estimates and confidence intervals.
arXiv:2504.13502v1 [math.PR] 18 Apr 2025

In the first case, either invariant or equivariant properties are usually required [3,6], whereas in the latter case, the posterior distribution of the parameters to be inferred, conditionally upon the measurements, is enforced to be a concentrated Gaussian distribution in several works [4]. Conversely, this paper sets the basis for an alternative continuous-time filter on Lie groups that relaxes the assumptions on the state space representation while yielding estimates accurate up to the second order. We focus on the calculation of the prediction of the parameters of interest in the time interval between two consecutive observations. More precisely, we establish a system of differential equations that describes the joint evolution of their Fréchet mean, taken as their estimate, and of the covariance matrix of the estimation error in the Lie algebra.

The outline of the paper is the following: Section 2 is dedicated to prerequisites on the Fréchet mean on Lie groups; Section 3 addresses its calculation in the case of a generic stochastic differential equation on Lie groups; the proof is provided in Section 4; Section 5 presents simulation results; and conclusions and perspectives are proposed in Section 6.

2 Prerequisites: Fréchet means on Lie groups

Let $G$ be a Lie group with Lie algebra $\mathcal{G}$ and neutral element $e$.

Definition 1. For a random variable $X$ on $G$ with support $\mathrm{supp}(X)$, we say that $E(X) \in G$ is the exponential barycenter of $X$ if there exists a $\mathcal{G}$-valued integrable and centered random variable $\nu(X)$ such that
$$X = E(X)\exp(\nu(X)), \quad (1)$$
the segment $\{E(X)\exp(t\nu(X)),\ t\in[0,1]\}$ is included in the convex hull of $\mathrm{supp}(X)$, and the couple $(E(X), \nu(X))$ is unique.

It is well known ([5] Propositions 4 and 5) that if the support of $X$ is sufficiently small, then the exponential barycenter of $X$ exists (and is unique by definition). In the sequel we will assume that $G$ has a left-invariant metric, and we will denote by $\langle\cdot,\cdot\rangle_g$ the scalar product at point $g\in G$.

Definition 2. An open ball $B(g,R)$ of $G$ centered at $g$ with radius $R>0$ is said to be a regular geodesic ball if $R < \frac{\pi}{2\sqrt K}$, where $K>0$ is an upper bound of the sectional curvatures in $G$, and, for all $x\in B(g,R)$, the cut locus of $x$ does not meet $B(g,R)$.

In this situation, we have the following result ([8] Theorem 4.2 and [9] Proposition 3.6).

Proposition 1. If the support of the random variable $X$ is included in a regular geodesic ball of $G$, then $E(X)$ is the unique minimizer in $H(X)$ of $g \mapsto E\big[\rho^2(g,X)\big]$, where $\rho$ is the Riemannian distance in $G$. Moreover, we have
$$E\big[\rho^2(E(X),X)\big] = E\big[\|\nu\|^2\big]. \quad (2)$$
In other words, $E(X)$ is the Fréchet mean of $X$. Consequently, $E(X)$ is the estimator of $X$ minimizing the variance of the error.

3 Computing the Fréchet mean of a stochastic differential equation

Consider the Stratonovich stochastic differential equation (SDE) in $G$
$$\circ dX_t = X_t\big(b(X_t)dt + \circ dW_t\big), \quad X_0 = e, \quad (3)$$
with $b \in C^{\infty}(G,\mathcal{G})$, $W_t = \sum_{i=1}^m \sigma_i W_t^i$ where, for all $i$, $\sigma_i \in \mathcal{G}$, and the $(W_t^i)_{1\le i\le m}$ are independent real-valued Brownian motions. Denote by $\tau$ the exit time of a fixed regular geodesic ball $B(e,R)$. Denote by $E_t = E(X_t^\tau)$ the exponential barycenter of the stopped process $X_t^\tau = X_{t\wedge\tau}$ and by $\nu_t^\tau = \nu(X_t^\tau)$ the error of the estimator. For $y, x \in B(e,R)$, denote $\log(y,x) = y^{-1}\log_y(x) \in \mathcal{G}$. Notice that
$$\nu_t^\tau = \log(E_t, X_t^\tau). \quad (4)$$
We will also denote $b^\tau(X_t) = b(X_t)1_{\{t<\tau\}}$ and $\sigma_i^\tau = \sigma_i 1_{\{t<\tau\}}$, $i=1,\dots,m$.
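An SDE of the form (3) on a matrix group can be discretized with an exponential (Lie–Euler) scheme, in which the algebra-valued increment is mapped back onto the group by the matrix exponential. A minimal sketch on $SO(3)$, assuming scipy; the drift $b(X)=X^{-1}AX$ mirrors the choice later used in the paper's Section 5, but the matrix $A$ and step sizes here are illustrative, and the sketch ignores the Itô–Stratonovich correction at this step size:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# skew-symmetric generators spanning so(3)
G1 = np.array([[0., 0, 0], [0, 0, -1], [0, 1, 0]])
G2 = np.array([[0., 0, 1], [0, 0, 0], [-1, 0, 0]])
G3 = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 0]])
basis = [G1, G2, G3]

A = 0.3 * G3                 # hypothetical antisymmetric drift matrix
b = lambda X: X.T @ A @ X    # drift b(X) = X^{-1} A X (X^{-1} = X^T on SO(3))

sigma, dt, N = 0.1, 1e-3, 100
X = np.eye(3)
for _ in range(N):
    dW = rng.normal(0.0, np.sqrt(dt), size=3)
    incr = b(X) * dt + sigma * sum(dWi * Gi for dWi, Gi in zip(dW, basis))
    X = X @ expm(incr)       # skew increment exponentiated back onto the group
```

Because each increment is skew-symmetric, every update is an exact rotation, so the simulated path stays on $SO(3)$ up to floating-point error.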
The following theorem yields an ordinary differential equation for the Fréchet mean, which is the basis for our computations.

Theorem 1. The Fréchet mean $E_t$ of the solution to Equation (3) solves the McKean–Vlasov equation $dE_t = E_t h_t dt$ with
$$h_t = -E\big[T_1\log(E_t, X_t^\tau)\big]^{-1}\cdot E\bigg[T_2\log(E_t, X_t^\tau)\big(X_t^\tau b^\tau(X_t^\tau)\big) + \frac12\sum_{i=1}^m \mathrm{Hess}_2\log(E_t, X_t^\tau)\big(X_t^\tau\sigma_i^\tau\otimes X_t^\tau\sigma_i^\tau\big)\bigg]. \quad (5)$$
Here $T_1$, $T_2$ and $\mathrm{Hess}_2$ denote respectively the derivative with respect to the first and the second variable, and the Hessian with respect to the second variable.

Proof. The existence, uniqueness and smoothness of $E_t$ with respect to the law of $X_t^\tau$ is guaranteed by the fact that $\mathrm{supp}(X_t^\tau)$ is included in a regular geodesic ball ([8] Theorem 4.2, [5] Propositions 4 and 5). On the other hand we have
$$\circ d\nu_t^\tau = T\log(dE_t, \circ dX_t^\tau), \quad (6)$$
yielding in Itô form
$$d\nu_t^\tau = T_1\log(\cdot\,, X_t^\tau)(dE_t) + T_2\log(E_t, \cdot)\big(X_t b^\tau(X_t)\big)dt + \frac12\sum_{i=1}^m \mathrm{Hess}_2\log(E_t, X_t^\tau)\big(X_t^\tau\sigma_i^\tau\otimes X_t^\tau\sigma_i^\tau\big)dt + dM_t \quad (7)$$
with $M_t$ a martingale. Since $E[\nu_t^\tau] \equiv 0$, we get from (7)
$$0 = E\big[T_1\log(\cdot\,, X_t^\tau)\big](E_t h_t) + E\big[T_2\log(E_t, \cdot)\big(X_t^\tau b^\tau(X_t)\big)\big] + \frac12 E\bigg[\sum_{i=1}^m \mathrm{Hess}_2\log(E_t, X_t^\tau)\big(X_t^\tau\sigma_i^\tau\otimes X_t^\tau\sigma_i^\tau\big)\bigg]. \quad (8)$$
By [1] Lemma 2.6, the endomorphism $E\big[T_1\log(E_t, X_t^\tau)\big]$ of $\mathcal{G}$ is invertible. From this last point we get (5), and this achieves the proof of Theorem 1. $\square$

With Theorem 1 at hand, we propose to look for computable approximations of the Fréchet mean $E_t$ of the solution $X_t$ to the stochastic differential equation (3), together with the error term $\nu_t$.

Theorem 2. We have the expansions $dE_t = E_t h_t dt$ with
$$h_t = b(E_t) + \frac12\bigg(\mathrm{Hess}_{E_t}b\big(E_t(\cdot),E_t(\cdot)\big) - \big[T_{E_t}b(E_t(\cdot)),\cdot\big] + \frac18\sum_{i=1}^m\big[[[\sigma_i,\cdot],\cdot],\sigma_i\big]\bigg)\cdot E[\nu_t^\tau\otimes\nu_t^\tau] + O(t^2). \quad (9)$$
For the error term we have
$$\begin{aligned}\frac{d}{dt}E[\nu_t^\tau\otimes\nu_t^\tau] &= \bigg\{\bigg(2T_{E_t}b(E_t(\cdot)) - 2[b(E_t),\cdot] - \frac13\sum_{i=1}^m\big[[\sigma_i,\cdot],\sigma_i\big]\bigg)\odot \mathrm{Id}\bigg\}E[\nu_t^\tau\otimes\nu_t^\tau] \\ &\quad + \frac14\bigg(\sum_{i=1}^m[\sigma_i,\cdot]\otimes[\sigma_i,\cdot]\bigg)E[\nu_t^\tau\otimes\nu_t^\tau] + \sum_{i=1}^m\sigma_i\otimes\sigma_i + O(t^2), \quad (10)\end{aligned}$$
where $A\odot B = \frac12(A\otimes B + B\otimes A)$ is the symmetric tensor product. The proof of Theorem 2 is postponed to Section 4.

The last theorem, Theorem 3, specializes Theorem 2 to the case $G = SO(3)$. The Lie group involved in inertial navigation, which is our main application, is indeed isometric to products of two copies of $SO(3)$ together with Euclidean spaces. Note that the simulations in Section 5 will be done on $SO(3)$.

Theorem 3. Assume that $G = SO(3)$ is endowed with its canonical bi-invariant metric. Also assume that $(\sigma_1,\sigma_2,\sigma_3) = \sigma(G_1,G_2,G_3)$, where $(G_1,G_2,G_3)$ is an orthonormal basis of $\mathcal{G}$ and $\sigma>0$. Then the expansions of Theorem 2 take the form $dE_t = h_t dt$ with
$$h_t = b(E_t) + \frac12\big(\mathrm{Hess}_{E_t}b\big(E_t(\cdot),E_t(\cdot)\big) - \big[T_{E_t}b(E_t(\cdot)),\cdot\big]\big)\cdot E[\nu_t^\tau\otimes\nu_t^\tau] + O(t^2) \quad (11)$$
and, for the error term,
$$\frac{d}{dt}E[\nu_t^\tau\otimes\nu_t^\tau] = \sigma^2\mathrm{Id} + \big\{2\big(T_{E_t}b(E_t(\cdot)) - [b(E_t),\cdot]\big)\odot\mathrm{Id}\big\}E[\nu_t^\tau\otimes\nu_t^\tau] + \frac18 E\big[\|\nu_t^\tau\|^2\,\mathrm{Id}_{(\nu_t^\tau)^\perp}\big] + O(t^2). \quad (12)$$
In terms of the Levi-Civita connection $\nabla$ and the curvature tensor $R$, Equations (11) and (12) rewrite as
$$h_t = b(E_t) + \bigg(\frac12\mathrm{Hess}_{E_t}b\big(E_t(\cdot),E_t(\cdot)\big) + \nabla_{\cdot}\, T_{E_t}b(E_t(\cdot))\bigg)\cdot E[\nu_t^\tau\otimes\nu_t^\tau] + O(t^2) \quad (13)$$
and
$$\frac{d}{dt}E[\nu_t^\tau\otimes\nu_t^\tau] = \sigma^2\mathrm{Id} + \bigg\{\big(2T_{E_t}b(E_t(\cdot)) + 4\nabla_{\cdot}\, b(E_t)\big)\odot\mathrm{Id} + \frac12 R(\cdot^{\sharp},\cdot)\,\cdot\bigg\}E[\nu_t^\tau\otimes\nu_t^\tau] + O(t^2). \quad (14)$$

Proof. Note that, on the algebra $\mathcal{G}$ associated to $SO(3)$, any orthonormal basis $(G_1,G_2,G_3)$ satisfies, up to a change of the index numbering:
$$[G_1,G_2] = \frac{G_3}{\sqrt2};\quad [G_2,G_3] = \frac{G_1}{\sqrt2};\quad [G_3,G_1] = \frac{G_2}{\sqrt2}.$$
(15)

The first part of Theorem 3 is a direct application of Theorem 2, considering that, for any $\nu\in\mathcal{G}$:
$$\sum_{i=1}^n\big[[[G_i,\nu],\nu],G_i\big] = 0 \quad\text{and}\quad \sum_{i=1}^n[G_i,\nu]\otimes[G_i,\nu] = \frac{\|\nu\|^2}{2}\,\mathrm{Id}_{\nu^\perp}. \quad (16)$$
Indeed, as these values do not depend on the choice of the orthonormal basis, we can choose $(\tilde G_1,\tilde G_2,\tilde G_3)$ satisfying (15) such that $\tilde G_1 = \nu/\|\nu\|$. With this choice of basis the above results are immediate. To prove the second part of Theorem 3, we notice that, as the chosen metric is bi-invariant, $\nabla_X\nu = \frac12[X,\nu]$ and $R(X,\nu)\nu = -\frac14[[X,\nu],\nu]$ for any $X,\nu\in\mathcal{G}$. In particular, decomposing $X$ in the basis $(\tilde G_1,\tilde G_2,\tilde G_3)$, we get $[[X,\nu],\nu] = -\frac12\|\nu\|^2 P_{\nu^\perp}(X)$ and $R(X,\nu)\nu = \frac18\|\nu\|^2 P_{\nu^\perp}(X)$. $\square$

4 Proof of Theorem 2

From Theorem 1, the assumptions on $X_t$ and Equation (6) for $\nu_t^\tau$, we can deduce that for all $k\ge1$,
$$\|\nu^\tau\|_{t,k}^k := E\Big[\sup_{s\le t}|\nu_s^\tau|^k\Big] \le C_k t^{k/2} \quad (17)$$
for some $C_k>0$. In the sequel,
$$R_{t,k} = \int_0^t\beta_k(s)\,ds + \int_0^t\sum_{i=1}^m\alpha_{k,i}(s)\,dW_s^i, \quad (18)$$
with $\beta_k(s)$ and $\alpha_{k,i}(s)$ taking their values in $\mathcal{G}$, will denote error terms satisfying
$$E\bigg[\sup_{s\le t}\bigg(\sum_{i=1}^m|\alpha_{k,i}(s)|^k + |\beta_k(s)|^k\bigg)\bigg] \le C\,\|\nu^\tau\|_{t,k}^k \quad (19)$$
for some $C>0$, and will change from one line to another. Denote by $S_t$ the $\mathcal{G}$-valued semimartingale defined by $S_0=0$ and $\circ dS_t = \exp(-\nu_t^\tau)\circ d(\exp(\nu_t^\tau))$. By differentiating (1), we get:
$$\circ dX_t^\tau = dE_t\exp(\nu_t^\tau) + E_t\circ d(\exp(\nu_t^\tau)) = X_t^\tau\big(\exp(\mathrm{ad}(-\nu_t^\tau))(h_t)\,dt + \circ dS_t\big). \quad (20)$$
Using (3), we obtain the following relation on $\mathcal{G}$:
$$b(X_t^\tau)\,dt + \circ dW_t^\tau = \exp(\mathrm{ad}(-\nu_t^\tau))(h_t)\,dt + \circ dS_t$$
and thus, passing from Stratonovich to Itô:
$$b(X_t^\tau)\,dt + dW_t^\tau = \exp(\mathrm{ad}(-\nu_t^\tau))(h_t)\,dt + dS_t. \quad (21)$$
We first look at the left-hand side term. As $X_t^\tau = E_t\exp(\nu_t^\tau)$, we get:
$$b(X_t^\tau) = b(E_t) + T_{E_t}b(E_t\nu_t^\tau) + \frac12\mathrm{Hess}_{E_t}b\big(E_t\nu_t^\tau\otimes E_t\nu_t^\tau\big) + O(|\nu_t^\tau|^3). \quad (22)$$
We now turn to the right-hand side term. We first have:
$$\exp(\mathrm{ad}(-\nu_t^\tau))(h_t) = h_t + [h_t,\nu_t^\tau] + \frac12\big[[h_t,\nu_t^\tau],\nu_t^\tau\big] + O(|\nu_t^\tau|^3). \quad (23)$$
For $u\in\mathcal{G}$, the differential of the exponential map at point $u$ is $d_u\exp = \exp(u)\sum_{k\ge0}\frac{\mathrm{ad}(-u)^{(k)}}{(k+1)!}$, where $\mathrm{ad}(-u)^{(k)}$ denotes the $k$-th iteration of $\mathrm{ad}(-u)$. Then:
$$\circ dS_t = \exp(-\nu_t^\tau)\circ d(\exp(\nu_t^\tau)) = \sum_{k\ge0}\frac{\mathrm{ad}(-\nu_t^\tau)^{(k)}}{(k+1)!}(\circ d\nu_t^\tau). \quad (24)$$
Passing from Stratonovich to Itô and using the Jacobi identity to show that $[[[X,Y],Y],X] = -[[X,[X,Y]],Y] = [[[X,Y],X],Y]$:
$$dS_t = d\nu_t^\tau + \frac12[d\nu_t^\tau,\nu_t^\tau] + \frac1{12}\big[[d\nu_t^\tau,\nu_t^\tau],\nu_t^\tau\big] + \frac16\big[[d\nu_t^\tau,\nu_t^\tau],d\nu_t^\tau\big] + \frac1{24}\big[[[d\nu_t^\tau,\nu_t^\tau],d\nu_t^\tau],\nu_t^\tau\big] + dR_{t,3}. \quad (25)$$
Write the decomposition $\nu_t^\tau = M_t + D_t$, where $M_t$ is the martingale part of $\nu_t^\tau$ and $D_t$ its finite-variation drift part. Putting together (22), (23), (25) and (21), we have:
$$dM_t = dW_t^\tau - \frac12[dM_t,\nu_t^\tau] - \frac1{12}\big[[dM_t,\nu_t^\tau],\nu_t^\tau\big] + dR_{t,3}; \quad (26)$$
$$\begin{aligned} dD_t &= -h_t\,dt + b(E_t)\,dt + T_{E_t}b(E_t\nu_t^\tau)\,dt - [h_t,\nu_t^\tau]\,dt - \frac16\big[[dM_t,\nu_t^\tau],dM_t\big] \\ &\quad - \frac12[dD_t,\nu_t^\tau] - \frac1{12}\big[[dD_t,\nu_t^\tau],\nu_t^\tau\big] - \frac12\big[[h_t,\nu_t^\tau],\nu_t^\tau\big]\,dt \\ &\quad + \frac12\mathrm{Hess}_{E_t}b\big(E_t\nu_t^\tau\otimes E_t\nu_t^\tau\big)\,dt - \frac1{24}\big[[[dM_t,\nu_t^\tau],dM_t],\nu_t^\tau\big] + dR_{t,3}. \quad (27)\end{aligned}$$
Applying $\mathrm{ad}(-\nu_t^\tau)$ twice and once to (26), we get:
$$\big[[dM_t,\nu_t^\tau],\nu_t^\tau\big] = \big[[dW_t^\tau,\nu_t^\tau],\nu_t^\tau\big] + dR_{t,3}, \quad (28)$$
$$[dM_t,\nu_t^\tau] = [dW_t^\tau,\nu_t^\tau] - \frac12\big[[dW_t^\tau,\nu_t^\tau],\nu_t^\tau\big] + dR_{t,3}. \quad (29)$$
This provides a nice estimate for $dM_t$:
$$dM_t = dW_t^\tau - \frac12[dW_t^\tau,\nu_t^\tau] + \frac16\big[[dW_t^\tau,\nu_t^\tau],\nu_t^\tau\big] + dR_{t,3}. \quad (30)$$
In particular we obtain:
$$d\nu_t^\tau\otimes d\nu_t^\tau = dM_t\otimes dM_t = \sum_{i=1}^m\bigg(\sigma_i^\tau\otimes\sigma_i^\tau - \sigma_i^\tau\odot[\sigma_i^\tau,\nu_t^\tau] + \Big(\frac13\sigma_i^\tau\odot\big[[\sigma_i^\tau,\cdot],\cdot\big] + \frac14[\sigma_i^\tau,\cdot]\otimes[\sigma_i^\tau,\cdot]\Big)(\nu_t^\tau\otimes\nu_t^\tau)\bigg)dt + dR_{t,3}. \quad (31)$$
We now look for the estimate of $dD_t$.
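The $SO(3)$ identities used in the proof of Theorem 3 — the bracket relations (15) and the two identities (16) — can be verified numerically in coordinates. A minimal sketch, assuming numpy; the normalization $G_i = E_i/\sqrt2$ of the standard skew generators, orthonormal for $\langle A,B\rangle = \mathrm{tr}(A^{\intercal}B)$, is an assumption chosen to be consistent with (15):

```python
import numpy as np

E1 = np.array([[0., 0, 0], [0, 0, -1], [0, 1, 0]])
E2 = np.array([[0., 0, 1], [0, 0, 0], [-1, 0, 0]])
E3 = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 0]])
# ||E_i||^2 = tr(E_i^T E_i) = 2, so rescale to an orthonormal basis
G = [E / np.sqrt(2) for E in (E1, E2, E3)]

comm = lambda A, B: A @ B - B @ A

# relations (15): [G1, G2] = G3 / sqrt(2), and cyclic permutations
rel15 = all(
    np.allclose(comm(G[i], G[(i + 1) % 3]), G[(i + 2) % 3] / np.sqrt(2))
    for i in range(3)
)

rng = np.random.default_rng(0)
v = rng.standard_normal(3)               # coordinates of nu in the basis G
nu = sum(vk * Gk for vk, Gk in zip(v, G))

# first identity in (16): sum_i [[[G_i, nu], nu], G_i] = 0
lhs1 = sum(comm(comm(comm(Gi, nu), nu), Gi) for Gi in G)

# second identity in (16), as a 3x3 operator in coordinates:
# sum_i [G_i, nu] (x) [G_i, nu] = (||nu||^2 / 2) * projector onto nu-perp
coords = lambda Y: np.array([np.trace(Gk.T @ Y) for Gk in G])
M = sum(np.outer(coords(comm(Gi, nu)), coords(comm(Gi, nu))) for Gi in G)
P = np.eye(3) - np.outer(v, v) / (v @ v)
```

In these coordinates the bracket acts as a rescaled cross product, which is why the second sum collapses to $\frac{\|\nu\|^2}{2}$ times the projector onto $\nu^\perp$.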
From (31), we get:
$$\big[[dM_t,\nu_t^\tau],dM_t\big] = \sum_{i=1}^m\Big(\big[[\sigma_i^\tau,\nu_t^\tau],\sigma_i^\tau\big] - \frac12\big[[[\sigma_i^\tau,\cdot],\cdot],\sigma_i^\tau\big](\nu_t^\tau\otimes\nu_t^\tau)\Big)dt + dR_{t,3},$$
$$\big[[[dM_t,\nu_t^\tau],dM_t],\nu_t^\tau\big] = \sum_{i=1}^m\big[[[\sigma_i^\tau,\cdot],\sigma_i^\tau],\cdot\big](\nu_t^\tau\otimes\nu_t^\tau)\,dt + dR_{t,3}.$$
Applying $\mathrm{ad}(-\nu_t^\tau)$ twice and once to (27), we get:
$$\big[[dD_t,\nu_t^\tau],\nu_t^\tau\big] = \big[[-h_t+b(E_t),\nu_t^\tau],\nu_t^\tau\big]\,dt + dR_{t,3} \quad (32)$$
and
$$[dD_t,\nu_t^\tau] = [-h_t+b(E_t),\nu_t^\tau]\,dt + \Big[T_{E_t}b(E_t\cdot) - \frac12[h_t+b(E_t),\cdot] - \frac16\sum_{i=1}^m\big[[\sigma_i^\tau,\cdot],\sigma_i^\tau\big],\ \cdot\Big](\nu_t^\tau\otimes\nu_t^\tau)\,dt + dR_{t,3}. \quad (33)$$
Finally:
$$\begin{aligned}\frac{d}{dt}D_t &= -h_t + b(E_t) + \Big(T_{E_t}b(E_t\cdot) - \frac12[h_t+b(E_t),\cdot]\Big)(\nu_t^\tau) - \frac16\sum_{i=1}^m\big[[\sigma_i,\cdot],\sigma_i\big]\big(1_{\{t\le\tau\}}\nu_t^\tau\big) \\ &\quad + \bigg(\Big[-\frac12 T_{E_t}b(E_t\cdot) + \frac16[-h_t+b(E_t),\cdot],\ \cdot\Big] + \frac12\mathrm{Hess}_{E_t}b\big(E_t\cdot\otimes E_t\cdot\big) + \frac18\sum_{i=1}^m\big[[[\sigma_i,\cdot],\cdot],\sigma_i\big]\bigg)\big(1_{\{t\le\tau\}}\nu_t^\tau\otimes\nu_t^\tau\big) + \beta_3(t), \quad (34)\end{aligned}$$
with $\beta_3$ defined in (18). From Definition 1, we have $E[\nu_t^\tau]=0$; in particular, $E[D_t]=0$. Moreover, similarly to Lemma 2.3 in [2], we can prove that $P(\tau\le t)\le Ce^{-C'/t}$ for some $C,C'>0$. Then there exists $C''>0$ such that
$$E\big[1_{\{t\le\tau\}}\nu_t^\tau\otimes\nu_t^\tau\big] = E[\nu_t^\tau\otimes\nu_t^\tau] + O\big(e^{-C''/t}\big) \quad\text{and}\quad E\big[1_{\{t\le\tau\}}\nu_t^\tau\big] = O\big(e^{-C''/t}\big).$$
Looking at the mean of (34) we obtain:
$$h_t = b(E_t) + \bigg(\Big[-\frac12 T_{E_t}b(E_t\cdot) + \frac16[-h_t+b(E_t),\cdot],\ \cdot\Big] + \frac12\mathrm{Hess}_{E_t}b\big(E_t\cdot\otimes E_t\cdot\big) + \frac18\sum_{i=1}^m\big[[[\sigma_i,\cdot],\cdot],\sigma_i\big]\bigg)E[\nu_t^\tau\otimes\nu_t^\tau] + O(t^{3/2}). \quad (35)$$
In particular, $h_t = b(E_t) + O(t)$ and $\big[[-h_t+b(E_t),\cdot],\cdot\big]E[\nu_t^\tau\otimes\nu_t^\tau] = O(t^{3/2})$. Now, $h_t$ being smooth, $O(t^{3/2})$ implies $O(t^2)$. This proves Relation (9). We are left to obtain the estimate of the error term. Using the Itô relation, we have:
$$dE[\nu_t^\tau\otimes\nu_t^\tau] = E\big[2\,dD_t\odot d\nu_t^\tau + d\nu_t^\tau\otimes d\nu_t^\tau\big]. \quad (36)$$
We have:
$$dD_t\odot d\nu_t^\tau = \big(-h_t+b(E_t)\big)\odot d\nu_t^\tau\,dt + \Big(T_{E_t}b(E_t\cdot)\odot\mathrm{Id} - \frac16\sum_{i=1}^m\big[[\sigma_i^\tau,\cdot],\sigma_i^\tau\big]\odot\mathrm{Id} - \frac12[h_t+b(E_t),\cdot]\odot\mathrm{Id}\Big)(\nu_t^\tau\otimes\nu_t^\tau)\,dt + dR_{t,3}. \quad (37)$$
To obtain (10), we then just have to look at the means of (31) and (37) and use again that $h_t = b(E_t)+O(t)$, that $E[\nu_t^\tau]=0$, and that $t\mapsto E[\nu_t^\tau\otimes\nu_t^\tau]$ is smooth (with the same arguments as for the smoothness of $t\mapsto E_t$).

5 Simulations

We performed Matlab simulations to validate the proposed approach in the case of a rotation matrix $X_t$ that evolves on the Lie group $SO(3)$. For that purpose, we simulated $N_{MC}=500$ realisations of (3) by taking $b(X) = X^{-1}AX$, with $A$ a given antisymmetric matrix, and a standard deviation $\sigma=0.1$ for the Brownian motions. A total time interval of 0.1 s was considered, decomposed into $N=100$ steps for the discretization. Finally, we computed the Fréchet mean of $X_t$ either by leveraging the proposed set of equations (13) and (14) or, alternatively, by using a Gauss–Newton method on the Lie group to minimize an empirical approximation of (2) based on the $N_{MC}$ Monte Carlo runs. To visually compare the two rotation matrices obtained at the last time step, we represent in Figure 1 their effect on the unit vectors $(e_1,e_2,e_3)$ of the canonical basis of $\mathbb{R}^3$ (plotted as black arrows). The differently colored clouds of points are obtained by applying the $N_{MC}$ simulated matrices to $e_1$, $e_2$ and $e_3$, respectively. The blue and red lines, which are nearly superimposed, show the three transformed basis vectors using the theoretical and Monte Carlo Fréchet means, respectively; a very good agreement between them can be observed.

Fig. 1. Comparison between the empirical and the proposed Fréchet mean.

6 Conclusion

In this paper, we focused on the prediction step of a continuous-time filter on Lie groups. A first extension of our results will be to let the diffusion process $X_t$ start from a non-Dirac law. The next step will be to update the Fréchet mean of the posterior law by taking into account the current measurement.
Finally, let us stress the fact that our method requires a minimal set of assumptions, which allows it to be extended to more general state spaces, such as symmetric spaces, homogeneous spaces and Riemannian manifolds.

References

1. Arnaudon, M., Li, X.M.: Barycenters of measures transported by stochastic flows. Ann. Probab. 33(4), 1509–1543 (2005)
2. Arnaudon, M., Thalmaier, A., Wang, F.Y.: Gradient estimates and Harnack inequalities on non-compact Riemannian manifolds. Stochastic Process. Appl. 119(10), 3653–3670 (2009)
3. Barrau, A., Bonnabel, S.: The invariant extended Kalman filter as a stable observer. IEEE Transactions on Automatic Control 62(4), 1797–1812 (2017)
4. Bourmaud, G., Megret, R., Arnaudon, M., Giremus, A.: Continuous-discrete extended Kalman filter on matrix Lie groups using concentrated Gaussian distributions. Journal of Mathematical Imaging and Vision 51, 209–228 (2017)
5. Émery, M., Mokobodzki, G.: Sur le barycentre d'une probabilité dans une variété. In: Séminaire de Probabilités, XXV, Lecture Notes in Math., vol. 1485, pp. 220–233. Springer, Berlin (1991)
6. van Goor, P., Hamel, T., Mahony, R.: Equivariant filter (EqF). IEEE Transactions on Automatic Control 68(6), 3501–3512 (2023)
7. Kalman, R.: A new approach to linear filtering and prediction problems. Transactions of the ASME – Journal of Basic Engineering
https://arxiv.org/abs/2504.13502v1
Bayesian Model Averaging in Causal Instrumental Variable Models

Gregor Steiner∗ and Mark Steel†
Department of Statistics, University of Warwick

May 14, 2025

Abstract

Instrumental variables are a popular tool to infer causal effects under unobserved confounding, but choosing suitable instruments is challenging in practice. We propose gIVBMA, a Bayesian model averaging procedure that addresses this challenge by averaging across different sets of instrumental variables and covariates in a structural equation model. Our approach extends previous work through a scale-invariant prior structure and accommodates non-Gaussian outcomes and treatments, offering greater flexibility than existing methods. The computational strategy uses conditional Bayes factors to update models separately for the outcome and treatments. We prove that this model selection procedure is consistent. By explicitly accounting for model uncertainty, gIVBMA allows instruments and covariates to switch roles and provides robustness against invalid instruments. In simulation experiments, gIVBMA outperforms current state-of-the-art methods. We demonstrate its usefulness in two empirical applications: the effects of malaria and institutions on income per capita, and the returns to schooling. A software implementation of gIVBMA is available in Julia.

Keywords: Conditional Bayes factors, Endogeneity, Invalid instruments, Many instruments, Unobserved confounding

1 Introduction

Instrumental variables offer a way to infer causal effects in the presence of unobserved confounding. To be suitable, an instrumental variable must meet two criteria: it must not be affected by the unobserved confounder and must not affect the outcome directly (valid), and it must be associated with the regressor of interest (relevant). In practice, finding variables that fulfill these assumptions is challenging.
https://arxiv.org/abs/2504.13520v3
arXiv:2504.13520v3 [stat.ME] 13 May 2025

Furthermore, when multiple instrumental variables are available, the results can be highly sensitive to the specific instruments chosen. In many applications, there is a large degree of uncertainty over which instruments to use, in addition to the usual uncertainty over covariate inclusion in a regression model. To address this uncertainty, we propose a Bayesian model averaging (BMA) procedure that averages across different sets of instrumental variables and covariates in a multivariate sampling model with correlated residuals. Borrowing from the Econometrics literature, we use the term endogenous variables for treatments that are affected by unobserved confounding.

We build on earlier work by Karl and Lenkoski (2012). Our computational strategy is closely inspired by theirs: we use a Gibbs structure to update the outcome, treatment, and covariance parameters separately and use conditional Bayes factors in the model updates. However, we make a number of contributions that go beyond Karl and Lenkoski (2012):

1. We explicitly explain the statistical modelling from a probabilistic perspective and link it to causal inference.
2. A careful analysis of prior structures is provided, and we use suitably adapted g-priors on the regression coefficients, which make the analysis scale-invariant. Our approach uses independent priors on the outcome and treatment model space without enforcing identification through zero prior probability on non-identified models.
3. We adapt the latent Gaussian framework with univariate link functions (ULLGM) proposed in Steel and Zens (2024) to allow for non-Gaussian outcomes and endogenous variables. This greatly extends the applicability of our method while not substantially adding to the computational cost.
4. We prove that our model selection procedure is consistent in the sense that the conditional Bayes factors used in the model updates tend to infinity in favor of the true model as the sample size grows.
5. We allow the BMA framework to freely assign variables as either instruments or covariates, thus providing additional robustness against invalid instruments.
6. The method proposed in this paper is implemented in the gIVBMA.jl package written in the Julia language (Bezanson et al., 2017). While Karl and Lenkoski (2012) previously released an R package for their method, it has been withdrawn from CRAN.

∗gregor.steiner@warwick.ac.uk, †m.steel@warwick.ac.uk

A subsequent paper by Lee and Lenkoski (2022) addresses multiple endogenous and non-Gaussian variables. However, they only focus on the Poisson case, which they analyse using a Laplace approximation. Through the ULLGM framework, we provide a flexible and exact way of incorporating a wide range of non-Gaussian variables, which can easily be adapted to the data. We refer to the method of Karl and Lenkoski (2012) and Lee and Lenkoski (2022) as IVBMA and call our proposed procedure generalised IVBMA (gIVBMA). Through extensive experiments on real and simulated data, we evaluate gIVBMA against IVBMA, BMA, and a number of classical methods tailored for this problem. The gIVBMA methodology works well, especially in settings with potentially invalid instruments. A version of gIVBMA identifies such invalid instruments in a data-driven manner and uses them as covariates, thus providing additional robustness. Our proposed procedure also performs well when many individually weak instruments are available. Another advantage of gIVBMA (and IVBMA) over most classical methods is that we obtain the posterior distribution of the covariance matrix, which informs us on the degree of endogeneity. This also provides these methods with the ability to borrow strength from the model for the endogenous variables.

1.1 Related literature

Rossi et al.
(2005) propose a simple Bayesian approach to instrumental variables (IV) based on conjugate normal and inverse Wishart priors and a Gibbs sampler iterating between the outcome, treatment, and covariance parameters. Koop et al. (2012) were among the first to propose BMA in IV models, offering a general approach for simultaneous equations. However, their complex reversible jump MCMC scheme (which can mix quite slowly in practice) makes their method less accessible for practitioners. Karl and Lenkoski (2012) build on the Bayesian IV framework of Rossi et al. (2005), using conditional Bayes factors to iteratively update the outcome and treatment models within a Gibbs sampler. Lee and Lenkoski (2022) further extend this approach to approximately accommodate non-Gaussian (Poisson) distributions. Lenkoski et al. (2014) introduce a hybrid Bayesian version of the classical Two-Stage Least Squares (TSLS) estimator. Our proposed method performs well in settings with many individually weak instruments. Traditional classical estimators tend to exhibit substantial bias and can even become inconsistent in such settings. To address these issues, common approaches include first-stage regularisation (e.g. Belloni et al., 2012; Carrasco, 2012), using jackknife-fitted values in the first stage (e.g. Angrist et al., 1999; Hansen and Kozbur, 2014), selecting instruments based on minimizing a mean-square
error criterion (e.g. Donald and Newey, 2001; DiTraglia, 2016), or model-averaging to obtain model-averaged first-stage predictions of the endogenous variable (e.g. Kuersteiner and Okui, 2010). An important strand of the literature focuses on identification and estimation when some instrumental variables are invalid (Kang et al., 2016; Windmeijer et al., 2019, 2021). Kang et al. (2016) conclude that identification is still possible as long as a plurality rule holds, i.e. the valid instruments outnumber the invalid ones. 1.2 Notation Letι= (1, . . . , 1)⊺denote a vector of ones of length n, while Inis the n×nidentity matrix. For any vector v, letvidenote the i-th component of v. For any matrix V, letVibe the i-th row of VandVijthe element in row iand column j. For an n×kmatrix Rof full column rank, let PR=R(R⊺R)−1R⊺be the n×n projection matrix and MR=In−PRbe the projection onto the orthogonal space. For matrices SandT 2 with the same number of rows, R= [S:T]is the matrix that results from horizontally concatenating S andT. We use vec(R) to denote the column vector resulting from stacking the columns of R, while S⊗T denotes the Kronecker product of SandTfor any matrices SandT. Letp(x) denote the probability density function (pdf) or probability mass function (pmf) of the random variable x.x∼N(µ,Σ) to indicate that the random vector x∈ ℜkfollows a Gaussian distribution with mean vector µ∈ ℜkand positive definite symmetric (PDS) k×kcovariance matrix Σ. We say an n×kmatrix Rfollows a matrix normal distribution, R∼MN(M,S,T) if and only if vec(R)∼N(vec(M),T⊗S), where M∈ ℜn×kandSandTare PDS matrices, respectively of dimension n×nandk×k. 2 The Sampling Model We consider a structural model with a single outcome and l≥1 endogenous variables: y=αι+Xτ+Wβ +ϵ,X=ιΓ+[Z:W]∆+H, (1) where yis an n×1 outcome vector, X=[x1:. . .:xl]is an n×lmatrix of causes or treatments that are potentially endogenous, that is X̸⊥ ⊥ϵ|W,Wis an n×kmatrix of exogenous covariates (i.e. 
W⊥ ⊥ϵ),Zis ann×pmatrix of potential instruments, ϵis an n×1 vector of outcome residuals, and H= [η1:. . .:ηl] is an n×lmatrix of treatment residuals. The outcome model is parameterized by an intercept α, anl×1 vector of “effects” τ, and a k×1 vector of covariate coefficients β. The treatment model is parameterized by a 1 ×l(row) vector Γand a ( p+k)×lmatrix of slope coefficients ∆. The main parameter of interest is the vector of treatment effects τ. This setup can be motivated by the potential outcomes framework (here we follow the exposition in Imbens, 2014). Suppose that yi(x) is the potential outcome for the i-th outcome observation with conditional expectation E[yi(x)|Wi] =α+xτ+Wiβsuch that the average treatment effect (ATE) of increasing xby one unit is exactly τ. If we have multiple endogenous variables, i.e. xis a vector, any component of τis the ATE of only increasing the corresponding component of xby 1 while the other components remain constant. Define the outcome residual ϵi=yi(x)−E[yi(x)|Wi], which is by definition uncorrelated with the exogenous covariates Wi. Given observations ( yi,Xi,Wi), this potential outcomes model implies the observed data model yi=yi(Xi) =α+Xiτ+Wiβ+ϵi,which is our outcome equation for a single observation. Thus, τ has a causal interpretation. Subsection 2.2 contains more discussion on the identification
https://arxiv.org/abs/2504.13520v3
of τ. We assume the residuals are jointly normal, [ϵ : H] ∼ MN(0, I_n, Σ), where the structural covariance matrix Σ can be partitioned into

    Σ = ( σ_yy    Σ_yx )   ( var(ϵ_i)       cov(ϵ_i, H_i) )
        ( Σ_yx^⊺  Σ_xx ) = ( cov(H_i, ϵ_i)  var(H_i)      ).

If Σ_yx ≠ 0, the outcome and the treatment residuals are correlated, signaling the presence of unobserved confounding or endogeneity. We assume the errors are homoskedastic and serially uncorrelated across observations, but this could be generalised relatively easily. The conditional distribution of y | X is given by (see supplementary Section A)

    y | X ∼ N(αι + Xτ + Wβ + HΣ_xx^{-1}Σ_yx^⊺, σ_{y|x} I_n),

where σ_{y|x} = σ_yy − Σ_yx Σ_xx^{-1} Σ_yx^⊺ and H = X − (ιΓ + [Z : W]∆). The marginal distribution of the treatment matrix, X ∼ MN(ιΓ + [Z : W]∆, I_n, Σ_xx), then completes the joint distribution of y and X.

2.1 Model Uncertainty

This paper aims to incorporate model uncertainty into the framework outlined above (or its extension to non-Gaussian distributions as described in Subsection 2.3). In particular, a model refers to the exclusion of a specific set of covariates or instruments, or, equivalently, exact zero restrictions on the corresponding regression coefficients. We consider such uncertainty in both the treatment model and the outcome model. Unlike Karl and Lenkoski (2012), we choose to always include the endogenous variables in the outcome model. This is a conscious choice and not necessary, as our prior specification and computational implementation would also easily allow for excluding the endogenous variables. We believe, however, that this is more natural as our primary focus is on estimating the effects of the endogenous variables while accounting for other covariates. This implies that we obtain a non-zero effect estimate in every model we consider. Let L ∈ 𝓛 denote a possible outcome model and let M ∈ 𝓜 denote a possible treatment model, where 𝓛 and 𝓜 are the sets of all models considered.
Then, the likelihood conditional on the models is given by

    y | X, L, M ∼ N(αι + Xτ + W_L β_L + HΣ_xx^{-1}Σ_yx^⊺, σ_{y|x} I_n),
    X | M ∼ MN(ιΓ + [Z_M : W_M]∆_M, I_n, Σ_xx),

where the subscripts indicate the appropriate subset of the columns of the design matrices and equivalent zero restrictions on the coefficients. Note that the outcome also depends on the treatment model M through the residual matrix H. The number of potential outcome models is 2^k, while we have 2^{k+p} potential treatment models. We do not introduce any cross-restrictions, so the number of combinations of outcome and treatment models is 2^{2k+p}. Unlike Lee and Lenkoski (2022), our treatment model M puts the same zero restrictions on all columns of the coefficient matrix ∆. This results in slightly less flexibility (for l > 1) but leads to more interpretable results and substantially reduces the computational cost.

2.2 Identification

The regression coefficient of X in the conditional model y | X is generally not equal to τ. To see this, write the mean of the conditional distribution as

    E[y | X, L, M] = αι + X(τ + Σ_xx^{-1}Σ_yx^⊺) + W_L β_L − (ιΓ + [Z_M : W_M]∆_M)Σ_xx^{-1}Σ_yx^⊺.

Naively regressing y on X targets τ + Σ_xx^{-1}Σ_yx^⊺ instead of τ itself. Whenever Σ_yx ≠ 0, this approach leads to biased results. This illustrates why it is necessary to consider the joint distribution of y and X instead of only the conditional distribution of y | X. Inference on τ is still possible if we have suitable instruments (i.e. columns of [Z_M : W_M] that are not in W_L). Then, one can use the inference on ∆ from
the treatment model to infer the covariance “ratio” Σ_xx^{-1}Σ_yx^⊺ and subsequently identify τ. Our approach does not do this explicitly, but instead, in the outcome model, we condition on HΣ_xx^{-1}Σ_yx^⊺, which is known conditionally on the treatment and covariance parameters. The treatment residual H contains all the variation in X that the instruments and covariates do not explain. Therefore, including H acts as a control for the unobserved confounding. This is similar to classical control function approaches (see e.g. Wooldridge, 2015), but the coefficient of the control function is fixed (conditional on Σ) and implied by our modeling assumptions. The treatment effect τ is identified in a “classical” sense if at least l instruments satisfy the following:

1. Relevance: The j-th instrument z_j is relevant if it is associated with at least one of the endogenous variables, that is, ∆_j is not the zero vector.

2. Validity/Exogeneity: The instruments are valid (or exogenous) if they are conditionally independent of the error in the outcome model, Z_M ⊥⊥ ϵ | W_L, W_M.

The exogeneity assumption above combines the unconfoundedness and exclusion restriction assumptions usually featured in a potential outcomes setup. The monotonicity assumption is trivially satisfied in our setup as the treatment equation is linear in the instruments (for more details see e.g. Imbens, 2014). For a given combination of L and M, the model is just-identified if the number of valid and relevant instruments implied by L and M is exactly equal to l. The model is under-identified if there are fewer instruments than l and over-identified if there are more. It is important to emphasize that identification is not a binary characteristic as it tends to be in classical inference, for two reasons. First, if the model is under-identified, then the likelihood does not add information on some components of Σ_xx^{-1}Σ_yx^⊺.
However, with a proper prior, we can still obtain a proper posterior distribution, albeit one that may be quite diffuse in some components. This carries through to the inference on τ. The second point is specific to our BMA approach: each combination of outcome and treatment models can have different instruments with varying degrees of instrument strength. While some of these models will be well-identified, others might be very uninformative. Consequently, the marginal posterior of τ can be very diffuse if the majority of the posterior weight is concentrated on these uninformative models. This merely reflects that we are not learning much about τ from the available data, but it does not prevent us from conducting inference.

2.3 Non-Gaussian models

We can relax the Gaussianity assumption using the Univariate Link Latent Gaussian Models (ULLGM) framework proposed in Steel and Zens (2024). The idea is to assign a latent Gaussian representation to any column of [y : X] that is not Gaussian and then perform posterior inference conditional on the latent Gaussian. Unlike Lee and Lenkoski (2022), we do not need to rely on approximations. More precisely, assume y and X are non-Gaussian and let the n × 1 vector q and the n × l matrix Q be their latent Gaussian representations, respectively. Then, we model (y_i, X_i), i = 1, ..., n, independently as

    y_i | q_i, r_y ∼ F^(y)(h_y(q_i), r_y),    X_i | Q_i, r_x ∼ ∏_{j=1}^{l} F^(x_j)(h_{x_j}(Q_ij), r_{x_j}),

where

    q_i | X_i, Q_i ∼ N(α + X_iτ + W_iβ + (Q_i − Γ^⊺ − [Z_i : W_i]∆)Σ_xx^{-1}Σ_yx^⊺,
σ_{y|x}),    Q ∼ MN(ιΓ + [Z : W]∆, I_n, Σ_xx),

F^(y) and F^(x_j) are the distributions of y and x_j, and h_y and h_{x_j} are invertible univariate link functions that map the latent Gaussian to the appropriate parameter of F^(y) and F^(x_j), j = 1, ..., l. If required, r_y and r_{x_j} group any additional parameters of F^(y) and F^(x_j), which are assumed to be the same for all observations. We also assume the endogenous variables are independent conditional on their latent Gaussian representation, that is, all the dependence is in the Gaussian part. Many distributions can be expressed as members of the ULLGM family. Two examples that we use in our simulations and applications are (for more examples see Steel and Zens, 2024):

• Poisson-Log-Normal: F = Poisson(λ_i), λ_i = h(q_i) = exp(q_i)

• Beta-Logistic: F = Beta(µ_i, r), µ_i = h(q_i) = exp(q_i)/(1 + exp(q_i)),

where µ_i represents the mean and r a dispersion parameter (using the alternative parameterization of the Beta distribution introduced in Ferrari and Cribari-Neto, 2004). The interpretation of the Gaussian parameters varies with the distributions. For instance, in a Poisson-Log-Normal distribution for the outcome, the outcome regression parameters have a log-linear interpretation. The causal interpretation of τ is then that of an expected log-ratio of expected potential outcomes (assuming l = 1 for simplicity),

    τ = E_q[q_i(x + 1) − q_i(x)] = E_q[ log( E_{y|q}[y_i(x + 1)] / E_{y|q}[y_i(x)] ) ] = E_q[ log( λ_i(x + 1) / λ_i(x) ) ],

where q_i(x) = log E_{y|q}[y_i(x)] = log λ_i(x) is the latent Gaussian potential outcome and y_i(x) is the potential outcome on the observed level for a given value of x. Note that the causal interpretation of τ is not affected by the choice of F^(x_j) and h_{x_j}(·), j = 1, ..., l, as X_i (and not Q_i) multiplies τ in the mean for q_i. If we choose to parameterize a location parameter of X_i by the latent Gaussian process Q_i, then the invertibility of h_{x_j}, j = 1, ..., l, gives us a stochastic version of the monotonicity condition.
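The two example links above are simple to state in code. The following is an illustrative sketch (ours, not the authors' implementation) of the link functions h and their inverses:

```python
import math

# Sketch of the two ULLGM example links from the text. Each h maps the
# latent Gaussian q_i to a parameter of the observation distribution F;
# both links are invertible.

def poisson_link(q):
    """Poisson-Log-Normal: lambda_i = exp(q_i)."""
    return math.exp(q)

def poisson_link_inv(lam):
    return math.log(lam)

def beta_logistic_link(q):
    """Beta-Logistic: mu_i = exp(q_i) / (1 + exp(q_i)), the Beta mean."""
    return math.exp(q) / (1.0 + math.exp(q))

def beta_logistic_link_inv(mu):
    return math.log(mu / (1.0 - mu))
```

Invertibility is what gives the stochastic monotonicity property mentioned above: a larger latent Q_ij always maps to a larger (or, for a decreasing link, smaller) location parameter.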
Model uncertainty is now introduced in the equations for the latent variables, similar to the discussion in Subsection 2.1.

3 Prior Specification

3.1 Regression coefficients

As a prior on the regression coefficients, we adopt a version of the widely used g-prior (Zellner, 1986). However, we incorporate the intercepts into the g-prior as we cannot guarantee posterior propriety with an improper prior. In the outcome model, the prior distribution under model L on the coefficient vector ρ = (α, τ^⊺, β^⊺)^⊺ then has the following continuous prior on ρ_L = (α, τ^⊺, β_L^⊺)^⊺, while the other elements of ρ are exactly zero:

    ρ_L | L, Σ ∼ N(0, g_L σ_{y|x} (U_L^⊺U_L)^{-1}),    (2)

where U_L = [ι : X : W_L]. In the treatment model, we use a matrix version of the g-prior on the coefficient matrix Λ = [Γ^⊺ : ∆^⊺]^⊺, given by a matrix normal prior on Λ_M = [Γ^⊺ : ∆_M^⊺]^⊺:

    Λ_M | M, Σ ∼ MN(0, g_M (V_M^⊺V_M)^{-1}, Σ_xx),    (3)

where V_M = [ι : Z_M : W_M] and the other elements of Λ are zero. We will assume that for all models that we consider, the matrices U_L and V_M are of full column rank. This will ensure that the inverses in (2) and (3) exist for all models. Since the g-priors include the intercept, we recommend centering all the Gaussian components of [y : X] to ensure that the zero mean prior on the intercept is reasonable. These priors are conditionally
conjugate for our sampling model, so we obtain closed-form expressions for conditional posteriors and marginal likelihoods (see Subsection 4.2). In addition, we only have to elicit two scalar hyperparameters, g_L and g_M. The choice of g_L, g_M > 0 controls the prior variance and the complexity penalty of the Bayes factors. We can either fix g_L and g_M or put hyperpriors on them. We will mostly focus on two possible choices: an adapted version of the benchmark or BRIC prior (Fernández et al., 2001) and the hyper-g/n prior (Liang et al., 2008). In the adapted benchmark prior, we fix g_L = max{n, (k + l + 1)^2} and g_M = max{n, (k + p + 1)^2}. In the context of the standard normal linear regression model, the benchmark prior results in model selection consistency (Fernández et al., 2001), which will be examined in our setting in Subsection 5.1. The hyper-g/n prior is characterised by its pdf,

    p(g) = ((a − 2)/(2n)) (1 + g/n)^{−a/2},

where a > 2. This prior (unlike the regular hyper-g prior) leads to consistent model selection in normal linear regression but does not yield analytic expressions for the marginal likelihood. Typical choices are a = 3 and a = 4, which behave similarly. As an alternative, we also consider the two-component prior by Zhang et al. (2016) for the treatment model. This is primarily motivated by settings with very weak instruments, where it may be desirable to use different degrees of penalisation for instruments and covariates to ensure that weak instruments can still be included. We can only use the two-component prior with a single endogenous variable, as it does not yield a closed-form (conditional) posterior for l > 1. Then, the prior on the parameter λ_M = (γ, δ_M) is

    λ_M | M, Σ ∼ N(0, σ_xx G_M (V_M^⊺V_M)^{-1} G_M),

where G_M is a diagonal matrix with entries √g_C for the intercept and the covariates and √g_I for the instruments. To encourage the inclusion of weak instruments, we specify g_I = c·g_C, where c ∈ (0, 1) is a constant which does not depend on n. Our default choice is c = 1/2 throughout the paper.
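As a rough numerical illustration (our own sketch, not the paper's software), the two-component prior covariance σ_xx G_M (V_M^⊺V_M)^{-1} G_M can be assembled as follows, assuming the columns of V_M are ordered as intercept, instruments, covariates:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p_M, k_M = 100, 3, 2          # sample size, #instruments, #covariates in model M
g_C, c = float(n), 0.5           # covariate component and ratio g_I = c * g_C
g_I = c * g_C

# Design matrix V_M = [iota : Z_M : W_M] (toy columns for illustration)
V = np.column_stack([np.ones(n), rng.standard_normal((n, p_M + k_M))])

# G_M: sqrt(g_C) for intercept and covariates, sqrt(g_I) for instruments
scales = np.concatenate([[np.sqrt(g_C)],
                         np.full(p_M, np.sqrt(g_I)),
                         np.full(k_M, np.sqrt(g_C))])
G = np.diag(scales)

sigma_xx = 1.0
prior_cov = sigma_xx * G @ np.linalg.inv(V.T @ V) @ G
```

With g_I = g_C (i.e. c = 1) this collapses to the ordinary g-prior covariance g_C σ_xx (V_M^⊺V_M)^{-1}; c < 1 penalises the instrument coefficients less severely.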
This structure can be used for fixed or stochastic g's. On any additional parameters r_y and r_{x_j} we adopt proper priors to ensure posterior propriety.

3.2 Covariance Matrix

As in Karl and Lenkoski (2012), we put an inverse Wishart prior on the structural covariance matrix, i.e. Σ ∼ W^{-1}(ν, I_{l+1}). Centering the prior over the (diagonal) identity matrix reflects the fact that we want inference on the degree of endogeneity to be primarily driven by the data. The degrees of freedom parameter ν controls the amount of information in the prior and must satisfy ν > l so that the prior is proper. The choice of ν can be quite influential as, inter alia, it controls how tight the prior on all off-diagonal elements of Σ is around zero. It can substantially impact the inference on the marginal variances and the degree of endogeneity. To avoid being too dogmatic, we assume a hyperprior on ν. An Exponential prior that is shifted by at least l, such that ν > l, works well in our experience. For more numerical stability, it may be desirable to shift it slightly further. We use an Exponential with scale 1 shifted
by l + 1 as our default choice throughout the remainder of the paper. This prior is quite uninformative on ν itself while implying a prior on the covariance ratio Σ_xx^{-1}Σ_yx^⊺ that is (almost) identical to fixing ν = 3 (as done by Karl and Lenkoski, 2012). Figure S.1 in the supplementary material illustrates the implied priors on the covariance ratio and the outcome variance for shifted Exponential priors with different scales.

3.3 Model Space

We use a model prior based on the independent inclusion of variables in both the outcome and treatment equations. That is, we have inclusion probabilities, say w_L and w_M, and the prior probability of model L_j is p(L_j) = w_L^{k_j}(1 − w_L)^{k − k_j}, where k is the number of potential covariates and k_j is the number included in model L_j. The choice of w_L can be very influential, so we recommend a hyperprior on w_L to make the procedure more adaptive. Using a Beta(a, b_L) hyperprior results in the Beta-binomial prior

    p(L_j) = [Γ(a + b_L) / (Γ(a)Γ(b_L))] · [Γ(a + k_j)Γ(b_L + k − k_j) / Γ(a + b_L + k)].

Following Ley and Steel (2009), we choose a = 1 and b_L = (k − m_L)/m_L, where the user specifies the prior mean outcome model size m_L. If not stated otherwise, we use m_L = k/2 as our default choice. For the treatment model we use a similar procedure, leading to uniform hyperpriors on the prior inclusion probabilities, as in Scott and Berger (2010).

4 Posterior Inference

4.1 Computational Strategy

Obtaining a tractable joint posterior distribution of all parameters and models across both the outcome and treatment models is not possible. Instead, we tackle the problem conditionally and take inspiration from the computational strategy in Karl and Lenkoski (2012). We use a Gibbs sampler to iteratively sample the outcome parameters, treatment parameters, and the covariance matrix. Within the first and second steps, we update the models via a Metropolis-Hastings (MH) step in model space, update g if it is not fixed, and then, given the model, we draw the parameters from their conditional posteriors.
We describe each of these steps for updating the outcome model in more detail below, and the treatment model update is analogous. Algorithm 1 summarises the Gibbs sampler. Moving in model space is implemented through an MH step where the probability of accepting the proposed model L′ given the current model L is

    a(L′, L) = min{ [p(y | X, Λ, Σ, L′, M) / p(y | X, Λ, Σ, L, M)] · [p(L′) / p(L)] · [h(L | L′) / h(L′ | L)], 1 },

where h is a proposal kernel. Throughout, models are proposed by randomly permuting the inclusion index of one covariate. This proposal is symmetric in the sense that h(L | L′)/h(L′ | L) = 1, so the acceptance probability reduces to the (conditional) Bayes factor times the prior ratio (capped at 1). This can lead to slow mixing in very high dimensions, where other proposals might be more attractive, but we leave this to future work. If g_L is random, we update it using an MH step with a lognormal proposal. The proposal scale is tuned adaptively, targeting an acceptance rate of 0.234. Finally, we update the parameter vector ρ = (α, τ^⊺, β^⊺)^⊺ conditional on L and g_L. The conditionally conjugate priors lead to a known distribution for the conditional posterior, as described in (4) below in Subsection 4.2.

4.2 Conditional Bayes factors and posteriors
We can obtain closed-form conditional posteriors and marginal likelihoods based on the prior specification described above. A detailed derivation is provided in Section B of the supplementary material.

Algorithm 1: The gIVBMA Gibbs Sampling Algorithm
Input: Data D = (y, X, Z, W), number of posterior samples S, prior mean model sizes (m_L, m_M), initial values α^(0), τ^(0), β^(0), Γ^(0), ∆^(0), Σ^(0), L^(0), M^(0), and optionally q^(0), Q^(0), r_y^(0), r_x^(0), g_L^(0), g_M^(0), ν^(0).
for s = 1, ..., S do
  (a) Latent Gaussian Update:
    1. If any column of [y : X] is not Gaussian, draw a latent Gaussian representation q^(s) or Q^(s) via an MH step using a Barker proposal.
    2. If required, draw additional parameters r_y^(s) or r_x^(s) via an MH step.
  (b) Outcome Model Update:
    1. Sample the inclusion indicators L^(s) via an MH step in model space.
    2. If g_L is random, draw g_L^(s) via an MH step.
    3. Sample the outcome model parameters (α^(s), τ^(s), β^(s)) from their full conditional distribution in (4) (see Subsection 4.2).
  (c) Treatment Model Update:
    1. Sample the inclusion indicators M^(s) via an MH step in model space.
    2. If g_M is random, draw g_M^(s) via an MH step.
    3. Sample the treatment model parameters (Γ^(s), ∆^(s)) from their full conditional distribution in (5).
  (d) Covariance Matrix Update:
    1. If ν is random, draw ν^(s) via an MH step.
    2. Sample the covariance matrix Σ^(s) from its full conditional distribution in (7).
Output: A posterior sample for s = 1, ..., S: (α^(s), τ^(s), β^(s), Γ^(s), ∆^(s), Σ^(s), L^(s), M^(s), q^(s), Q^(s), r_y^(s), r_x^(s), g_L^(s), g_M^(s), ν^(s)).

In the outcome model, we have the conditional posterior

    ρ_L | Λ, Σ, L, M, y, X ∼ N( (g_L/(g_L + 1)) (U_L^⊺U_L)^{-1}U_L^⊺ỹ,  σ_{y|x} (g_L/(g_L + 1)) (U_L^⊺U_L)^{-1} ),    (4)

where ỹ = y − HΣ_xx^{-1}Σ_yx^⊺ is the endogeneity-corrected outcome.
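A minimal sketch (ours, with made-up toy data) of one draw from the conditional posterior (4); the shrinkage factor g_L/(g_L + 1) pulls the posterior mean slightly toward zero relative to least squares on the corrected outcome ỹ:

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_rho(U_L, y_tilde, g_L, sigma_y_given_x, rng):
    """One draw from (4): rho_L ~ N(s * A_inv @ U' y~, sigma_{y|x} * s * A_inv),
    with s = g/(g+1) and A = U'U."""
    s = g_L / (g_L + 1.0)
    A_inv = np.linalg.inv(U_L.T @ U_L)
    mean = s * A_inv @ U_L.T @ y_tilde
    cov = sigma_y_given_x * s * A_inv
    return rng.multivariate_normal(mean, cov)

# Toy design: intercept, one "treatment", one covariate
n = 500
U = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
rho_true = np.array([0.5, 1.0, -0.3])
y_tilde = U @ rho_true + 0.1 * rng.standard_normal(n)

draws = np.array([draw_rho(U, y_tilde, g_L=n, sigma_y_given_x=0.01, rng=rng)
                  for _ in range(200)])
post_mean = draws.mean(axis=0)
```

With g_L = n, the posterior mean is essentially the least-squares fit of ỹ on U_L scaled by n/(n + 1), so it recovers the toy coefficients closely.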
The conditional Bayes factor (CBF) of model L_i versus model L_j is given by

    CBF(L_i, L_j) = p(y | X, Λ, Σ, L_i, M) / p(y | X, Λ, Σ, L_j, M)
                  = (g_L + 1)^{(d_{U_j} − d_{U_i})/2} exp( −(1/(2σ_{y|x})) (g_L/(g_L + 1)) ỹ^⊺(P_{U_j} − P_{U_i})ỹ ),

where U_i, d_{U_i} and U_j, d_{U_j} are the design matrices and their number of columns in models L_i and L_j, respectively. In the treatment model, we obtain the (conditional) posterior

    Λ_M | ρ, Σ, L, M, y, X ∼ MN( (V_M^⊺V_M)^{-1}V_M^⊺X̃ ((I_l + g_M^{-1}B_Σ^{-1})^{-1})^⊺, (V_M^⊺V_M)^{-1}, (B_Σ + g_M^{-1}I_l)^{-1}Σ_xx ),    (5)

where B_Σ = I_l + (1/σ_{y|x}) Σ_yx^⊺Σ_yx Σ_xx^{-1} and X̃ = X − (1/σ_{y|x}) ϵΣ_yx (B_Σ^{-1})^⊺. The CBF of model M_i versus model M_j is

    CBF(M_i, M_j) = |g_M B_Σ + I_l|^{(d_{V_j} − d_{V_i})/2} exp( −(1/2) tr( A_Σ X̃^⊺(P_{V_j} − P_{V_i})X̃ ) ),    (6)

where A_Σ = ((I_l + g_M^{-1}B_Σ^{-1})^{-1})^⊺ Σ_xx^{-1} B_Σ and V_i, d_{V_i} and V_j, d_{V_j} are the design matrices and their number of columns in models M_i and M_j, respectively. Using the two-component prior in the treatment model (with l = 1) leads to similar results; see Section B of the supplementary material. The treatment model uses the joint distribution of y and X for its marginal likelihood, while the outcome model uses the conditional distribution y | X. This distinction arises because the marginal distribution of X is independent of the outcome parameters, but the outcome model y | X depends on the treatment parameters. Finally, we have the conditional posterior for the covariance matrix,

    Σ | ρ, Λ, L, M, y, X ∼ W^{-1}(ν + n, I_{l+1} + [ϵ : H]^⊺[ϵ : H]).    (7)

Conditional on the data and the outcome and treatment parameters, the residuals ϵ and H are known, so the posterior scale matrix can be computed within the Gibbs steps. If we put a hyperprior on ν, we add an extra
step to the Gibbs sampler updating ν. Given Σ, ν is independent of everything else, so the full conditional is proportional to the prior on Σ given ν times the prior on ν. We use an adaptive MH step, targeting an acceptance rate of 0.234.

4.3 The non-Gaussian case

Here, we discuss the computational strategy to deal with the ULLGM models introduced in Subsection 2.3. We add an extra step to the Gibbs sampler that draws the latent Gaussian (and potentially additional parameters r_y or r_x) and then perform posterior inference on the Gaussian parameters conditional on the latent Gaussian representation. Steel and Zens (2024) propose to use an MH step with an adaptive Barker proposal (Livingstone and Zanella, 2022) to sample the latent Gaussians. This is a good compromise between using gradient information to increase the mixing speed and maintaining robustness. Define the residuals based on the latent Gaussian, ϵ_i = q_i − (α + X_iτ + W_iβ) and H_i = Q_i − Γ^⊺ − V_i∆. For the outcome, the gradient used in the Barker proposal is

    ∂ log p(q_i | ρ, Λ, Σ, y, X)/∂q_i = ∂ log p(y_i | q_i, r_y)/∂q_i − (ϵ_i − H_iΣ_xx^{-1}Σ_yx^⊺)/σ_{y|x},

where the first term depends on the specific distribution of y_i, and the second term comes from the Gaussian “prior”. For the j-th endogenous variable, the gradient is

    ∂ log p(Q_ij | ρ, Λ, Σ, q_i, X)/∂Q_ij = ∂ log p(X_ij | Q_ij, r_{x_j})/∂Q_ij + ([Σ_xx^{-1}Σ_yx^⊺]_j/σ_{y|x}) (ϵ_i − H_iΣ_xx^{-1}Σ_yx^⊺) − [Σ_xx^{-1}Q_i]_j.

The first term depends on the distribution of the j-th endogenous variable, the second arises from the outcome distribution, and the last is the contribution of the Gaussian “prior”. If additional parameters r_y or r_{x_j} are required, we add an extra step to update them separately after the respective component of the latent Gaussian. This is done through an adaptive MH step, targeting an acceptance rate of 0.234. After updating the latent Gaussian representation and any additional parameters, we can proceed with the same steps as in the Gaussian case.
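For the Poisson-Log-Normal example, the likelihood term of the outcome gradient has the closed form ∂ log p(y_i | q_i)/∂q_i = y_i − exp(q_i). A sketch (ours, not the authors' code) for the l = 1 case, where `cov_ratio` stands in for the scalar Σ_xx^{-1}Σ_yx^⊺:

```python
import math

def dlog_poisson_dq(y_i, q_i):
    """d/dq of log Poisson(y | lambda = exp(q)) = y - exp(q)
    (the log pmf is y*q - exp(q) - log y!)."""
    return y_i - math.exp(q_i)

def grad_latent_q(y_i, q_i, mean_i, h_i, cov_ratio, sigma_y_given_x):
    """Gradient of the full conditional of q_i in the l = 1 case:
    Poisson likelihood term plus the Gaussian 'prior' term, where
    eps_i = q_i - (alpha + X_i tau + W_i beta) = q_i - mean_i."""
    eps_i = q_i - mean_i
    return dlog_poisson_dq(y_i, q_i) - (eps_i - h_i * cov_ratio) / sigma_y_given_x
```

This gradient is exactly what the adaptive Barker proposal consumes at each latent update; only the first term changes when a different ULLGM member is chosen.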
The formulas from above remain valid using the latent Gaussian representation in place of any non-Gaussian variables.

5 Data-driven instrument selection

Our method does not distinguish between invalid instruments and exogenous covariates (both are included in the outcome and treatment models). This motivates a modification of the methodology described above: the user provides only a single matrix of instruments and covariates Z, all columns of which can be included in the outcome and treatment equation (rather than two matrices, Z and W). Then, the sampling model becomes

    y | X, L, M ∼ N(αι + Xτ + Z_L β_L + HΣ_xx^{-1}Σ_yx^⊺, σ_{y|x} I_n),
    X | M ∼ MN(ιΓ + Z_M ∆_M, I_n, Σ_xx),

where H = X − (ιΓ + Z_M ∆_M) and Z_L and Z_M are subsets of the same set of potential instruments and covariates. The set of valid instruments that are used in the endogeneity correction consists of Z_M \ Z_L, i.e., all columns of Z_M that are not included in Z_L. The posterior results from above remain valid with a slight modification of the design matrices, U_L = [ι : X : Z_L] and V_M = [ι : Z_M]. The definition of the BRIC prior has to be adjusted to g_L = max{n, (p + l + 1)^2} and g_M = max{n, (p + 1)^2}, where p is the number of columns of Z. Similarly, we adjust the hyperparameters of the model space prior to b_L = (p − m_L)/m_L and b_M = (p − m_M)/m_M, where our default choice for the prior mean model size is now m_L = m_M = p/2. In this setting, we do not use the
two-component prior as there is no a priori distinction between instruments and covariates. As before, our method works well when at least l relevant and valid instruments are available, without requiring the analyst to identify these instruments in advance. The main benefit of this data-driven instrument selection is additional robustness against invalidity by allowing variables to switch roles in both directions. Previously, covariates could become instruments by being excluded from the outcome model, but now we also allow instruments to become covariates by being included in the outcome model. The added flexibility and robustness are only possible because of our explicit treatment of model uncertainty and come at almost no extra cost. The formal description of the model only changes minimally. Many classical methods explicitly developed with invalid instruments in mind allow including instruments in the outcome model but put an ℓ1-penalty on their outcome coefficients (e.g. Kang et al., 2016; Windmeijer et al., 2019). Under the plurality rule that the valid instruments constitute the largest group of instruments, the causal effect can still be identified (Kang et al., 2016). Unlike these methods, gIVBMA is not limited by the plurality rule as it allows instruments and covariates to change roles, and it can perform well even when the plurality rule is not satisfied, as evidenced in our simulation experiments. Based on simulation experiments and our consistency result in Subsection 5.1, we believe this flexible modification works well whenever the sample size n is sufficiently large. In small samples, especially with a high proportion of invalid instruments, restricting certain instruments to the treatment model may still be beneficial. Therefore, we believe that both variants of gIVBMA are useful in practice.
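In code, the implied instrument set Z_M \ Z_L can be read directly off the inclusion indicators; a minimal sketch (ours, using column indices rather than the authors' data structures):

```python
def valid_instruments(included_outcome, included_treatment):
    """Columns included in the treatment model M but excluded from the
    outcome model L act as instruments: the set difference Z_M \\ Z_L,
    returned as sorted column indices."""
    return sorted(set(included_treatment) - set(included_outcome))
```

For example, if columns {0, 1, 2, 3} enter the treatment model but only {0, 2} enter the outcome model, columns 1 and 3 serve as instruments in that model combination; the set changes with every MH move in model space.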
In our software, we implement this by letting the user specify either two separate matrices for instruments and covariates or just a single matrix of instruments and covariates. In the first case, the instruments are confined to the treatment model, while in the second case, all variables can be included in both models. If not otherwise specified, gIVBMA will include data-driven instrument selection.

5.1 Model Selection consistency of gIVBMA

It is important to know that if the sample size n increases, we end up putting more and more posterior mass on the “correct” model. We are assuming here that the model that actually generated the data lies within the model space considered.¹ Let us assume that the data are generated from models L_i and M_i for, respectively, the outcome and treatment equations. Then we say that our gIVBMA procedure is model selection consistent if both CBF(L_i, L_j) and CBF(M_i, M_j) tend to ∞ with n for any L_j ≠ L_i and M_j ≠ M_i. We show that the following results hold:

Theorem 1. The procedure gIVBMA, detailed in Sections 2 and 3 (with or without the data-driven instrument selection from the earlier part of this section), is model selection consistent in the Gaussian case if and only if the following conditions are satisfied:

• lim_{n→∞} g = ∞ for fixed g ∈ {g_L, g_M}

• if we assume a hyperprior p(g_L), we have that lim_{n→∞} ∫_{ℜ₊} (1 + g_L)^{1/2} p(g_L) dg_L = ∞

• under a hyperprior p(g_M), we have for c ∈ {l, 2l, ...
..., (p + k)l}:

    lim_{n→∞} ∫_{ℜ₊} g_M^{c/2} p(g_M) dg_M = ∞

¹ This is what is often referred to as an M-closed setting. In most situations, model selection consistency naturally extends to the M-open framework in an intuitive manner (Mukhopadhyay et al., 2015).

For the two-component prior explained in Section 3 with l = 1, we have consistency under the same necessary and sufficient conditions as above, after replacing g_M by g_I. If (some of) the components in (y, X) are assigned a non-Gaussian sampling distribution as in Section 2.3, with (q, Q) the latent Gaussian counterparts, then model selection consistency is assured if, in addition to the conditions on g above, we have independence between any additional parameters r_y, r_x and (q, Q, L_i, M_j).

Proof. See Supplementary Section C.

For both the fixed (BRIC) and random g (hyper-g/n) options used throughout this paper, we can thus show that model selection consistency holds. This result provides additional justification for data-driven instrument selection. If the sample size is large enough, posterior mass concentrates on the “correct” models and, therefore, instruments and covariates will be correctly separated.

6 Simulation experiments

We evaluate the performance of gIVBMA in three different scenarios and compare it to naive BMA, IVBMA, and several classical methods. Throughout, we consider the median absolute error (MAE) and median bias of the point estimates of τ, (credible or confidence) interval coverage of τ, and the log predictive score (LPS) on a separate holdout dataset. For all Bayesian methods, we use the posterior mean as our point estimate for the MAE and bias calculations. While MAE, bias, and coverage relate specifically to the quality of the estimation of τ, LPS is a measure of predictive adequacy, focused on probabilistic predictions of the outcomes. Supplementary Section E.1 provides more details.
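The evaluation metrics just listed can be sketched as follows (our own illustration, not the paper's implementation; it assumes a Gaussian predictive density for the LPS, and the function name is ours):

```python
import numpy as np

def evaluation_metrics(tau_hat, ci_lower, ci_upper, tau_true,
                       y_holdout, pred_mean, pred_var):
    """Across-replication metrics: median absolute error, median bias,
    interval coverage of tau_true, and the log predictive score (LPS),
    here the negative average Gaussian log predictive density on holdout data."""
    tau_hat = np.asarray(tau_hat)
    mae = np.median(np.abs(tau_hat - tau_true))
    bias = np.median(tau_hat - tau_true)
    cover = np.mean((ci_lower <= tau_true) & (tau_true <= ci_upper))
    lps = np.mean(0.5 * np.log(2 * np.pi * pred_var)
                  + (y_holdout - pred_mean) ** 2 / (2 * pred_var))
    return mae, bias, cover, lps
```

Lower MAE, bias closer to zero, coverage near the nominal level, and lower LPS are all better; smaller LPS means the predictive density places more mass on the realised holdout outcomes.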
The predictive comparison is perhaps not entirely fair to some of the classical methods (in particular those that are two-stage in nature), as they reduce the bias in τ but lack a correction term like gIVBMA. Consequently, when making predictions, these methods use an estimate of τ instead of τ + Σ_xx^{-1}Σ_yx^⊺, which would be the appropriate coefficient for the conditional model y | X. This can lead to poor predictions when Σ_xx^{-1}Σ_yx^⊺ becomes large. However, in practice, these methods might not be employed when prediction is the primary objective. Methods that do not correct for endogeneity (OLS or naive BMA) target the biased coefficient τ + Σ_xx^{-1}Σ_yx^⊺, but provide good predictions as this is the right coefficient for predicting y conditional on X. This discrepancy reflects the fact that predicting observed and counterfactual outcomes are different problems under endogeneity. Our probabilistic method allows us to provide endogeneity-corrected inference on τ while maintaining good predictive power in the observational model.

6.1 Many weak instruments

We now consider a setting with many individually weak instruments and l = 1. Our simulation setup is similar to model C in Kuersteiner and Okui (2010), where half of the instruments are relevant but increasingly weaker, and the other half is irrelevant. The main difference from their design is that we also consider exogenous covariates. We consider p = 20 instruments and k = 10 covariates generated from independent
standard normal distributions and two different sample sizes n ∈ {50, 500}. Then, we multiply all instruments and covariates with even indices by 100 to have more variation in the variables' scale. We expect IVBMA to struggle as its prior variance does not account for the scale, while the g-prior used in gIVBMA is scale-invariant. The instruments' coefficients are generated by

δ_i = c(p)(1 − i/(p/2 + 1))^4 for i = 1, …, p/2,

and all other coefficients are set to zero. The constant c(p) is chosen so that the first-stage R² of the instruments is approximately 0.01 or 0.1. The coefficients of all instruments with even indices are then divided by 100 to adjust for the scaling. The covariates' coefficients are set to 1/10 or 1/1000 for the first k/2 odd and even covariates, respectively, and zero otherwise, and the treatment effect is set to τ = 1/10. The covariance matrix Σ is chosen to have σ_yy = σ_xx = 1 and σ_yx = 1/2.

We compare gIVBMA's performance to naive BMA (not accounting for endogeneity), IVBMA, OLS, TSLS on the full model, oracle TSLS (O-TSLS) where we know the correct model specifications, which is of course not attainable in empirical work, TSLS with a jackknife-type fitted value in the first stage (JIVE; Angrist et al., 1999), TSLS with a jackknife and ridge-regularised first stage (RJIVE; Hansen and Kozbur, 2014), the post-LASSO estimator by Belloni et al. (2012), and the model-averaged TSLS estimator (MATSLS) with unrestricted weights by Kuersteiner and Okui (2010). Additional details on the implementation of these methods are presented in Supplementary Section E. For gIVBMA, we consider three different g-prior specifications: BRIC, hyper-g/n, and the two-component prior (2C) with a hyper-g/n prior on the covariate component g_C and c = 1/2. We do not use data-driven instrument selection in this example.

n = 50                      R²_f = 0.01                  R²_f = 0.1
                       MAE   Bias  Cov.  LPS       MAE   Bias  Cov.  LPS
BMA (hyper-g/n)        0.5   0.5   0.06  1.36      0.44  0.44  0.08  1.38
gIVBMA (BRIC)          0.24  0.23  1.0   1.35      0.24  0.21  1.0   1.37
gIVBMA (hyper-g/n)     0.16  0.07  1.0   1.36      0.14  0.06  0.99  1.38
gIVBMA (2C)            0.2   0.08  1.0   1.36      0.15  0.05  0.99  1.39
IVBMA (KL)             0.54  0.54  0.83  9.71      0.34  0.34  0.91  6.26
OLS                    0.52  0.52  0.05  1.43      0.45  0.45  0.07  1.47
TSLS                   0.5   0.5   0.3   1.43      0.39  0.39  0.37  1.48
O-TSLS                 0.46  0.46  0.63  1.39      0.36  0.36  0.63  1.41
JIVE                   0.78  0.68  0.86  1.91      0.73  0.56  0.92  1.93
RJIVE                  0.58  0.51  0.79  1.8       0.67  0.51  0.86  1.9
MATSLS                 0.89  0.34  0.95  2.18      0.41  0.18  0.87  1.77
Post-LASSO             0.58  0.51  1.0   -         0.61  0.35  1.0   -

n = 500                     R²_f = 0.01                  R²_f = 0.1
                       MAE   Bias  Cov.  LPS       MAE   Bias  Cov.  LPS
BMA (hyper-g/n)        0.5   0.5   0.0   1.28      0.45  0.45  0.0   1.29
gIVBMA (BRIC)          0.26  0.15  0.95  1.28      0.12  0.05  0.99  1.28
gIVBMA (hyper-g/n)     0.32  0.3   0.88  1.28      0.13  0.06  0.94  1.28
gIVBMA (2C)            0.41  0.33  0.81  1.28      0.12  0.04  0.93  1.28
IVBMA (KL)             0.26  0.26  0.67  76.94     0.09  -0.07 0.76  188.29
OLS                    0.5   0.5   0.0   1.28      0.45  0.45  0.0   1.3
TSLS                   0.39  0.39  0.45  1.31      0.15  0.14  0.75  1.37
O-TSLS                 0.33  0.33  0.66  1.33      0.08  0.06  0.92  1.4
JIVE                   0.96  0.79  0.82  1.72      0.18  -0.15 0.98  1.57
RJIVE                  0.97  0.74  0.82  1.75      0.18  -0.16 0.98  1.57
MATSLS                 0.34  0.02  0.87  1.62      0.1   0.01  0.96  1.45
Post-LASSO             1.13  0.26  1.0   -         0.11  0.02  0.99  -

Table 1: Many Weak Instruments: MAE, bias, coverage, and mean LPS on 100 simulated datasets with many weak instruments. The best values in each column are printed in bold. Post-Lasso only returns estimates for τ but not for the other coefficients, so we cannot compute the LPS. When no instrument is selected, no effect estimates are provided; therefore, we do not consider those cases. The number of times no instruments were selected by Post-Lasso in the first stage is (from top-left to bottom-right): 93, 85, 74, 1.

https://arxiv.org/abs/2504.13520v3

Table 1 shows the results. The gIVBMA methods do best overall in terms of LPS and, in the small sample, also in terms of MAE and median bias. Their coverage is too high for small n, while the hyper-g/n specifications slightly undercover for larger n. In some cases, TSLS and O-TSLS come close, but these can also perform very badly, in particular with low coverage. OLS and naive BMA are not too far from TSLS in terms of MAE and bias, but do considerably worse with n = 500 and stronger instruments. In line with the discussion before the start of this subsection, the predictive performance of BMA (and OLS in the larger sample) is comparable to gIVBMA, but coverage is far too low. The jackknife-based methods tend to have higher MAE and bias but better coverage than regular TSLS. MATSLS performs very well overall in the n = 500 case but also has good coverage for n = 50. Post-Lasso performs well with the stronger instruments. However, only in the n = 500 scenario with stronger instruments does the Lasso tend to select instruments, whereas in the other cases, it often does not select any. IVBMA undercovers in all four scenarios, and in the smaller sample, it also performs substantially worse than gIVBMA in terms of MAE and bias.
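The many-weak-instruments design above can be sketched in a few lines. This is a simplified illustration, not the authors' code: covariates are omitted, c is a placeholder for c(p) (which the paper calibrates to hit the target first-stage R²), and the coefficient decay δ_i = c(p)(1 − i/(p/2 + 1))^4 is assumed.

```python
import numpy as np

def simulate_many_weak_iv(n=500, p=20, c=0.5, tau=0.1, seed=0):
    """Sketch of the many-weak-instruments DGP (covariates omitted).

    Even-indexed instruments are rescaled by 100; their coefficients are
    divided by 100 so the fitted first-stage values are unchanged.
    """
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, p))
    Z[:, 1::2] *= 100.0                      # rescale even-indexed instruments
    # decaying first-stage coefficients on the first p/2 instruments
    i = np.arange(1, p // 2 + 1)
    delta = np.zeros(p)
    delta[: p // 2] = c * (1.0 - i / (p / 2 + 1)) ** 4
    delta[1::2] /= 100.0                     # undo the scaling in fitted values
    # correlated errors: sigma_yy = sigma_xx = 1, sigma_yx = 1/2
    errs = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
    eps, eta = errs[:, 0], errs[:, 1]
    x = Z @ delta + eta                      # endogenous treatment
    y = tau * x + eps                        # outcome with treatment effect tau
    return y, x, Z
```

The scale difference between odd- and even-indexed instruments is exactly the feature that a non-scale-invariant prior (as in IVBMA) fails to absorb.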
IVBMA's predictive performance is very poor in all scenarios (especially for n = 500) since the prior on the regression coefficients is not scale-invariant.

Section E.3 in the supplementary material presents simulation results for a similar setting where the treatment is a Poisson-distributed count variable. The results are qualitatively very similar, but the differences between the methods tend to be smaller. An exception is the poor prediction for IVBMA, which is even more extreme.

6.2 Invalid Instruments

To investigate the performance in the presence of invalid instruments, we consider a simulation setup similar to Kang et al. (2016) and Windmeijer et al. (2019). The full results are presented in Supplementary Section E.4. The performance of all gIVBMA variants is very favorable and superior to sisVIVE (a bespoke method by Kang et al., 2016) and oracle TSLS (in the smaller sample). An oracle version of gIVBMA tends to perform better, suggesting that using prior information on the separation of instruments and covariates can
be beneficial. All methods except for sisVIVE are largely unaffected by the plurality rule.

6.3 Multiple endogenous variables with correlated instruments

We consider an example with two endogenous variables, one Gaussian and one Beta-distributed, and correlated instruments. We generate 15 valid instruments (five of which are also relevant) with a correlation structure similar to Fernández et al. (2001). The data-generating process is given in detail in Supplementary Section E.5. We vary the sample size n ∈ {50, 500}. We compare gIVBMA against naive BMA, IVBMA, OLS, TSLS (using all instruments and an oracle version), and MATSLS. The latter is included as it is the only classical method that is competitive with oracle TSLS in all scenarios of Subsection 6.1. To evaluate the model selection performance of gIVBMA and IVBMA, we also compare the distribution of the estimated posterior probabilities of the true treatment model M = {1, 5, 7, 11, 13} and the mean treatment model sizes. IVBMA estimates separate treatment models for the two endogenous variables, and we report both. IVBMA cannot directly model the Beta-distributed endogenous variable and instead relies on a Gaussian approximation.

Both gIVBMA variants outperform all other methods in terms of MAE and median bias, even in the larger sample. With n = 500, they slightly undercover for the Beta endogenous variable, while their coverage is too high for n = 50. BMA and OLS are severely biased, as expected, but their coverage for the second endogenous variable is adequate (in contrast with the coverage for X1). This is a consequence of the non-linear transformation from the latent Gaussian to the Beta variable, which dilutes the endogeneity. TSLS, oracle TSLS, and MATSLS all perform similarly well, with only some under-coverage for n = 50. IVBMA results in a substantial bias for both sample sizes, yet it predicts and covers relatively well. Supplementary Table S.3 presents the full estimation error, coverage, and LPS results.
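The exact data-generating process appears in Supplementary Section E.5. As a generic sketch only (with an assumed equicorrelation parameter rho, not the paper's actual Fernández et al., 2001 structure), instruments with a given correlation matrix can be drawn via a Cholesky factor:

```python
import numpy as np

def correlated_instruments(n, p=15, rho=0.5, seed=0):
    # Draw p instruments whose population correlation matrix is
    # equicorrelated with off-diagonal rho (illustrative choice).
    rng = np.random.default_rng(seed)
    corr = np.full((p, p), rho) + (1.0 - rho) * np.eye(p)
    L = np.linalg.cholesky(corr)            # corr = L @ L.T
    return rng.standard_normal((n, p)) @ L.T
```

Any positive-definite correlation matrix can be substituted for the equicorrelation structure in the same way.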
Figure 1 presents the distributions of the posterior probability of the true treatment model and the mean treatment model size. Both gIVBMA variants put substantial posterior mass on the true model, and their mean model size is accordingly very close to that of the true model (5). For n = 50, IVBMA tends to select too many variables for X1 and too few for X2. The performance of IVBMA is better in the n = 500 setting, but it still cannot compete with gIVBMA. Supplementary Table S.4 presents median posterior inclusion probabilities (PIP) for all instruments. The selection performance of gIVBMA is clearly superior for both sample sizes.

Figure 1: Multiple endogenous variables with correlated instruments: Posterior probabilities of the true treatment model and mean model sizes for 100 simulated datasets of size n = 50 (top) and n = 500 (bottom). The true treatment model size is 5. IVBMA uses separate treatment models for the two endogenous variables X1 and X2.

7 Empirical Examples

7.1 Geography or institutions?

Carstensen and Gundlach (2006) show that geographic factors such as disease ecology, particularly malaria prevalence, have
a substantial negative impact on income. DiTraglia (2016) reanalyzes these data to illustrate the usefulness of their proposed Focused Moment Selection Criterion. They consider the regression model

log gdpc_i = β1 + β2 rule_i + β3 malfal_i + ϵ_i,

where gdpc is real GDP per capita in 1995 prices, rule is an average governance indicator measuring the quality of institutions, and malfal is the fraction of the population at risk of malaria transmission in 1994. The quality of institutions and the prevalence of malaria are likely endogenous. The dataset (for 44 countries) contains various potential instruments such as historical settler mortality (lnmort), malaria transmission stability (maleco), winter frost levels (frost), the maximum temperature during peak humidity (humid), latitude (distance from the equator), the proportions of Western Europeans and English speakers (eurfrac and engfrac), proximity to the coast (coast), and predicted trade share (trade). Settler mortality (lnmort) and malaria ecology (maleco) are very plausibly exogenous, but there is uncertainty about the exogeneity of all other potential instruments. We will consider both institutional quality and malaria prevalence as jointly endogenous variables (i.e., l = 2) and compare the PIP of all instruments in the outcome and treatment models to assess their validity.

The rule variable is approximately Gaussian. The malfal variable is a proportion and only takes values in [0, 1], so we model it using a Beta distribution. We put an Exponential prior with rate 1 on the additional dispersion parameter of the Beta distribution and draw it in an MH step with a Gaussian proposal. One challenge is that several countries in our sample have recorded malfal values of exactly 0 or 1. This is a problem for our algorithm as the logit transformation (e.g., used in computing the gradient for the proposal) is not finite at 0 or 1.
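Boundary observations of this kind are typically handled by sampling from the distribution truncated away from the endpoints. The following sketch draws from a Beta truncated to an interval via the inverse-CDF method; the interval (0, 0.0005) mirrors the set used for zero observations below, and the function name and parameters are illustrative, not the package's API:

```python
import numpy as np
from scipy import stats

def draw_truncated_beta(a, b, lo, hi, size=1, rng=None):
    # Inverse-CDF draw from Beta(a, b) truncated to (lo, hi): map uniforms
    # into the CDF interval [F(lo), F(hi)] and invert with the quantile
    # function. One way to implement the extra set-observation step for
    # malfal values recorded as exactly 0 (use (0, 0.0005)) or exactly 1
    # (use (0.9995, 1)).
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(stats.beta.cdf(lo, a, b), stats.beta.cdf(hi, a, b), size=size)
    return stats.beta.ppf(u, a, b)
```

Each MCMC sweep can redraw these latent values from the current Beta parameters, keeping them consistent with the rest of the sampler.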
We deal with this incompatibility of our observations with the sampling model by using set observations (Fernández and Steel, 1999). We treat zero observations (and analogously ones) as belonging to a set (0, 0.0005) and add an extra step to draw the value from its sampling distribution truncated to that set. This matches these observations to the dominating measure (Lebesgue) of the sampling model.

Table 2 shows the results. All outcome PIP are very low, so there is not much evidence for a violation of the exclusion restriction. As expected, maleco and lnmort are relevant instruments. The humidity and coast variables are highly relevant as well. The hyper-g/n prior tends to yield a larger g in the outcome model and a smaller g in the treatment model than the fixed g corresponding to BRIC, leading to lower and higher PIP, respectively. Our treatment effect estimates are in line with those in DiTraglia (2016). Their point estimate for rule is between 0.81 and 0.97, depending on which instrument set they use. For malfal, their point estimates are between −1.16 and −0.9. The posterior distributions of both components of the covariance vector, σ12 and σ13, concentrate near zero. Thus, the gIVBMA results are similar to naive BMA, though gIVBMA credible intervals are slightly narrower, reflecting the borrowing of strength from the treatment equation. To assess the predictive
performance on this dataset, we compute the LPS using leave-one-out cross-validation. For each observation, the model is trained excluding that observation, and the LPS is then calculated for the excluded data point. We compare the performance of gIVBMA, IVBMA, BMA, and TSLS. Although sisVIVE would be relevant given potential concerns about instrument validity (at least a priori), we exclude it as this approach does not accommodate multiple endogenous variables. Methods designed for many weak instruments were also omitted since their predictive performance was consistently inferior to TSLS in our simulations. Table 3 presents the mean LPS values. The predictive performance of TSLS is competitive with that of gIVBMA, but IVBMA does considerably worse. Naive BMA also predicts very well.

              gIVBMA (BRIC)         gIVBMA (hyper-g/n)     BMA (hyper-g/n)
              Mean   95% CI         Mean   95% CI          Mean   95% CI
rule          0.85   [0.59, 1.12]   0.87   [0.61, 1.12]    0.79   [0.45, 1.12]
malfal        -1.01  [-1.47, -0.54] -1.02  [-1.47, -0.57]  -1.05  [-1.67, -0.43]
σ12           -0.03  [-0.1, 0.03]   -0.03  [-0.1, 0.03]    -      -
σ13           0.01   [-0.14, 0.16]  0.01   [-0.12, 0.14]   -      -
              PIP L  PIP M          PIP L  PIP M           PIP L  PIP M
maleco        0.02   1.0            0.01   1.0             0.02   -
lnmort        0.02   1.0            0.01   1.0             0.03   -
frost         0.01   0.03           0.0    0.12            0.02   -
humid         0.02   1.0            0.0    1.0             0.04   -
latitude      0.02   0.13           0.0    0.21            0.01   -
eurfrac       0.03   0.23           0.0    0.61            0.03   -
engfrac       0.01   0.12           0.01   0.15            0.02   -
coast         0.04   1.0            0.0    1.0             0.02   -
trade         0.01   0.21           0.0    0.34            0.02   -

Table 2: Geography or institutions? Treatment effect estimates (posterior mean and 95% credible interval) and posterior inclusion probabilities (PIP) in outcome (L) and treatment (M) models for rule and malfal as endogenous variables. The algorithm was run for 10,000 iterations (the first 2,000 of which were discarded as burn-in).

Method                 Mean LPS
gIVBMA (BRIC)          0.669
gIVBMA (hyper-g/n)     0.654
IVBMA (KL)             0.827
BMA (hyper-g/n)        0.654
TSLS                   0.666

Table 3: Geography or institutions? The mean LPS calculated across all iterations of leave-one-out cross-validation on the Carstensen and Gundlach (2006) dataset, where each iteration uses a single observation as the holdout set and the rest as the training set.

7.2 Returns to schooling

A prominent problem in microeconomics is estimating the causal relationship between educational attainment and earnings. Education levels are not randomly assigned but are likely influenced by unobservable characteristics that simultaneously affect earning potential. Therefore, the observed correlation between education and wages may not accurately reflect the true economic returns to schooling. Many econometricians have tried to isolate the causal effect by finding instrumental variables that provide plausibly exogenous variation in educational attainment, such as quarter of birth (Angrist and Krueger, 1991) or geographic variation in college proximity (Card, 1995).

We revisit this problem using the dataset from Card (1995), a subset of the National Longitudinal Survey of Young Men, containing all men who were interviewed in 1976 and provided valid wage and education responses. Our outcome is the logarithm of hourly wages with years of schooling as the single endogenous explanatory variable of interest. Our
set of potential exogenous covariates and instruments includes age and age squared, college proximity (distance to a 2- or 4-year college), variables on family background, marital status, race, and regional indicators. The dataset also includes information on parents' educational attainment. However, due to a substantial proportion of missing values in these variables, we conduct our analysis both including and excluding parental education to assess the robustness of our results. After removing missing values, the data consist of 3,003 observations without and 2,215 observations with parental education, and 19 or 21 potential instruments and exogenous controls. In both cases, we allow for fully data-driven instrument selection such that all variables can be used as instruments or covariates.

Figure 2: Returns to schooling: Posterior distributions of the treatment effect of education and the covariance ratio σyx/σxx based on the Card (1995) dataset (a) without parental education (n = 3,003) and (b) with parental education (n = 2,215). The algorithm was run for 50,000 iterations, the first 5,000 of which were discarded as burn-in. In (b), IVBMA includes a point mass at zero of 6.6% of the posterior distribution.

Figure 2 shows our posterior results. In line with Card (1995), the gIVBMA posterior for the treatment effect, τ, indicates higher returns to schooling compared to BMA without endogeneity correction. However, our results vary significantly depending on whether parental education is included. Without parental education, the posterior distribution concentrates slightly above 0.2, suggesting relatively high returns to schooling. When parental education is included, the posterior mass centers around 0.05.
These latter results indicate lower returns to schooling than those reported by Card (1995), whose estimates ranged from 0.094 to 0.194 across different specifications.

The two datasets lead to different instruments. In the case without parental education, indicators for being Black, living in a Metropolitan area (SMSA), and living in the South consistently appear in the treatment model but rarely in the outcome model, serving as instruments in most analyzed models. When parental education is included, both mother's and father's education levels are consistently selected in the treatment model but rarely in the outcome model, making them the primary instruments. The Black and SMSA indicators no longer serve as instruments in this scenario. The college proximity indicators play a less significant role than anticipated. Proximity to a four-year college is occasionally included for the data without parental education, but rarely in the case with parental education, while proximity to a two-year college is rarely included in either case. The IVBMA approach selects college proximity indicators more frequently, using them as instruments in most models without parental education. Complete posterior inclusion probabilities for both cases are presented in Section F.1 of the supplementary material.

As a robustness check, we also fit the model without parental education, using only the observations that have complete parental education data (n = 2,215). The gIVBMA results are relatively similar
to those from the full dataset (see Supplementary Section F.1). These findings suggest that the differences, at least for gIVBMA, are primarily driven by the additional instruments rather than the missing observations, highlighting how posterior inference can be highly sensitive to the choice of the instrument set. We believe the model including parental education provides a better representation of reality. A supporting indicator is the lower estimated variance in the treatment model when parental education is included, indicating a higher degree of instrument relevance (see Supplementary Figure S.3). This example serves as a cautionary tale: in a misspecified model (in the sense that important covariates or instruments are missing from the set of available variables), the selection of instruments and the resulting inference on treatment effects can be misleading. Supplementary Table S.9 shows that the LPS performance of gIVBMA, IVBMA, and naive BMA is relatively similar on all three subsets of the data, while TSLS does worse.

8 Conclusion

We have introduced the gIVBMA method, which performs posterior inference in structural equation models with at least one endogenous regressor, averaging over all possible sets of potential instruments and covariates. We allow the model to freely choose its instruments based on the data, which provides additional robustness against invalid instruments. Our computational strategy relies on updating the outcome model, the treatment model, and the covariance matrix separately, using conditional Bayes factors in the model updates. Exploiting the ULLGM framework (Steel and Zens, 2024), we can accommodate non-Gaussian outcomes and endogenous variables. We provide simple and unrestrictive necessary and sufficient conditions for model selection consistency of gIVBMA.
In both simulation and real data experiments, gIVBMA outperforms the earlier method proposed by Karl and Lenkoski (2012) and many classical estimators considered in the literature. Future work could extend the framework to more complex covariance structures, possibly rendering gIVBMA available for settings such as time series or network data.

Supplementary Material

In the online appendix, we present technical details and additional information. Appendix A derives the conditional outcome distribution used in the main text. Appendix B derives the conditional posterior distributions and marginal likelihoods. Appendix C proves Theorem 1 in the main text, while Appendix D further discusses the hyperprior distribution for ν. Appendices E and F provide additional details on the simulation experiments and empirical examples, respectively. Code to replicate all findings is available at https://github.com/gregorsteiner/gIVBMA-Code and the accompanying Julia package can be installed from https://github.com/gregorsteiner/gIVBMA.jl.

References

Angrist, J. D., Imbens, G. W., and Krueger, A. B. (1999). Jackknife instrumental variables estimation. Journal of Applied Econometrics, 14(1):57–67.
Angrist, J. D. and Krueger, A. B. (1991). Does Compulsory School Attendance Affect Schooling and Earnings? The Quarterly Journal of Economics, 106(4):979–1014.
Belloni, A., Chen, D., Chernozhukov, V., and Hansen, C. (2012). Sparse Models and Methods for Optimal Instruments With an Application to Eminent Domain. Econometrica, 80(6):2369–2429.
Bezanson, J., Edelman, A., Karpinski, S., and Shah, V. B. (2017). Julia: A fresh approach to numerical computing. SIAM Review, 59(1):65–98.
Card, D. (1995). Using geographic variation in college proximity to estimate
the return to schooling. In Christofides, L. N., Grant, E. K., and Swidinsky, R., editors, Aspects of Labour Market Behaviour: Essays in Honour of John Vanderkamp, pages 201–222. University of Toronto Press, Toronto.
Carrasco, M. (2012). A regularization approach to the many instruments problem. Journal of Econometrics, 170(2):383–398.
Carstensen, K. and Gundlach, E. (2006). The Primacy of Institutions Reconsidered: Direct Income Effects of Malaria Prevalence. The World Bank Economic Review, 20(3):309–339.
Chernozhukov, V., Hansen, C., and Spindler, M. (2016). hdm: High-dimensional metrics. R Journal, 8(2):185–199.
Davidson, R. and MacKinnon, J. G. (2004). Econometric Theory and Methods, volume 5. Oxford University Press, New York.
DiTraglia, F. J. (2016). Using invalid instruments on purpose: Focused moment selection and averaging for GMM. Journal of Econometrics, 195(2):187–208.
Donald, S. G. and Newey, W. K. (2001). Choosing the Number of Instruments. Econometrica, 69(5):1161–1191.
Fernández, C. and Steel, M. (1999). Multivariate Student-t regression models: Pitfalls and inference. Biometrika, 86(1):153–167.
Fernández, C., Ley, E., and Steel, M. F. J. (2001). Benchmark priors for Bayesian model averaging. Journal of Econometrics, 100(2):381–427.
Ferrari, S. and Cribari-Neto, F. (2004). Beta Regression for Modelling Rates and Proportions. Journal of Applied Statistics, 31(7):799–815.
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2014). Bayesian Data Analysis. CRC Press, Taylor and Francis Group, Boca Raton, third edition.
Hansen, C. and Kozbur, D. (2014). Instrumental variables estimation with many weak instruments using regularized JIVE. Journal of Econometrics, 182(2):290–308.
Imbens, G. W. (2014). Instrumental Variables: An Econometrician's Perspective. Statistical Science, 29(3):323–358.
Kang, H. (2017). sisVIVE: Some Invalid Some Valid Instrumental Variables Estimator.
R package version 1.4.
Kang, H., Zhang, A., Cai, T. T., and Small, D. S. (2016). Instrumental Variables Estimation With Some Invalid Instruments and its Application to Mendelian Randomization. Journal of the American Statistical Association, 111(513):132–144.
Karl, A. and Lenkoski, A. (2012). Instrumental Variable Bayesian Model Averaging via Conditional Bayes Factors. arXiv:1202.5846 [stat].
Koop, G., Leon-Gonzalez, R., and Strachan, R. (2012). Bayesian model averaging in the instrumental variable regression model. Journal of Econometrics, 171(2):237–250.
Kuersteiner, G. and Okui, R. (2010). Constructing Optimal Instruments by First-Stage Prediction Averaging. Econometrica, 78(2):697–718.
Lee, J. and Lenkoski, A. (2022). Incorporating Model Uncertainty in Market Response Models with Multiple Endogenous Variables by Bayesian Model Averaging. In Econometrics - Recent Advances and Applications. IntechOpen.
Lenkoski, A., Eicher, T. S., and Raftery, A. E. (2014). Two-Stage Bayesian Model Averaging in Endogenous Variable Models. Econometric Reviews, 33(1-4):122–151.
Ley, E. and Steel, M. F. (2009). On the effect of prior assumptions in Bayesian model averaging with applications to growth regression. Journal of Applied Econometrics, 24(4):651–674.
Liang, F., Paulo, R., Molina, G., Clyde, M. A., and Berger, J. O. (2008). Mixtures of g Priors for Bayesian Variable Selection. Journal of the American Statistical Association, 103(481):410–423.
Livingstone, S. and Zanella, G. (2022). The Barker Proposal: Combining Robustness and Efficiency in Gradient-Based MCMC. Journal of the Royal Statistical
Society Series B: Statistical Methodology, 84(2):496–523.
Mukhopadhyay, M., Samanta, T., and Chakrabarti, A. (2015). On consistency and optimality of Bayesian variable selection based on g-prior in normal linear regression models. Annals of the Institute of Statistical Mathematics, 67:963–997.
Mullahy, J. (1997). Instrumental-Variable Estimation of Count Data Models: Applications to Models of Cigarette Smoking Behavior. The Review of Economics and Statistics, 79(4):586–593.
Muller, K. E. and Stewart, P. W. (2006). Linear Model Theory: Univariate, Multivariate, and Mixed Models. John Wiley & Sons.
Rossi, P. E., Allenby, G. M., and McCulloch, R. E. (2005). Bayesian Statistics and Marketing. Wiley, Hoboken, NJ.
Scott, J. and Berger, J. (2010). Bayes and empirical Bayes multiplicity adjustment in the variable-selection problem. Annals of Statistics, 38:2587–619.
Steel, M. F. J. and Zens, G. (2024). Model Uncertainty in Latent Gaussian Models with Univariate Link Function. arXiv:2406.17318.
Windmeijer, F., Farbmacher, H., Davies, N., and Davey Smith, G. (2019). On the Use of the Lasso for Instrumental Variables Estimation with Some Invalid Instruments. Journal of the American Statistical Association, 114(527):1339–1350.
Windmeijer, F., Liang, X., Hartwig, F. P., and Bowden, J. (2021). The Confidence Interval Method for Selecting Valid Instrumental Variables. Journal of the Royal Statistical Society Series B: Statistical Methodology, 83(4):752–776.
Wooldridge, J. M. (2015). Control Function Methods in Applied Econometrics. The Journal of Human Resources, 50(2):420–445.
Zellner, A. (1986). On assessing prior distributions and Bayesian regression analysis with g-prior distributions. Bayesian Inference and Decision Techniques.
Zhang, H., Huang, X., Gan, J., Karmaus, W., and Sabo-Attwood, T. (2016). A Two-Component G-Prior for Variable Selection. Bayesian Analysis, 11(2):353–380.
A Derivation of the conditional outcome distribution

Here, we derive the conditional distribution of the outcome given all endogenous variables presented in Section 2 of the main text. Consider again the structural model

y = αι + Xτ + Wβ + ϵ
X = ιΓ + [Z : W]∆ + H,

with [ϵ : H] ∼ MN(0, I_n, Σ), where the structural covariance matrix Σ can be partitioned into

Σ = ( σ_yy    Σ_yx
      Σ_yx⊺   Σ_xx ).

First, note that the reduced-form equation of y is given by

y = αι + (ιΓ + [Z : W]∆)τ + Wβ + Hτ + ϵ,

and therefore the joint distribution of y and X is given by

( y, vec(X) )⊺ ∼ N( ( αι + (ιΓ + [Z : W]∆)τ + Wβ, vec(ιΓ + [Z : W]∆) )⊺, Ψ ⊗ I_n ),

where we call

Ψ = ( 1  τ⊺ )    ( 1  τ⊺ )⊺     ( σ_yy + 2τ⊺Σ_yx⊺ + τ⊺Σ_xx τ    Σ_yx + τ⊺Σ_xx
    ( 0  I_l )  Σ ( 0  I_l )   =  ( Σ_yx⊺ + Σ_xx τ                Σ_xx          )

the reduced-form covariance. Now, we can condition on X to get y | X ∼ N(μ_y|x, σ_y|x I_n), where, using well-known properties of the multivariate Normal distribution, we have

μ_y|x = αι + (ιΓ + [Z : W]∆)τ + Wβ + ((Σ_yx Σ_xx⁻¹ + τ⊺) ⊗ I_n) vec(H)
      = αι + (ιΓ + [Z : W]∆)τ + Wβ + H(Σ_xx⁻¹Σ_yx⊺ + τ)
      = αι + Xτ + Wβ + HΣ_xx⁻¹Σ_yx⊺

and

σ_y|x = σ_yy − Σ_yx Σ_xx⁻¹ Σ_yx⊺.

B Derivation of the conditional posteriors and marginal likelihoods

B.1 The Gaussian case

Here, we derive the conditional posterior distributions and marginal likelihoods. Let ρ = (α, τ⊺, β⊺) be the outcome parameters and U = [ι : X : W] be the outcome design matrix (we drop the model subscripts in this section for convenience). Similarly, define Λ = [Γ⊺ : ∆⊺]⊺ and V = [ι : Z : W]. Then, the likelihood is

y | X ∼ N(Uρ + HΣ_xx⁻¹Σ_yx⊺, σ_y|x I_n)
X ∼ MN(VΛ, I_n, Σ_xx)

and the coefficient priors are given by

ρ | Σ ∼ N(0, g_L σ_y|x (U⊺U)⁻¹)
Λ | Σ ∼ MN(0, g_M (V⊺V)⁻¹, Σ_xx).

Define the corrected outcome ỹ = y − HΣ_xx⁻¹Σ_yx⊺. Then, the conditional posterior for ρ is

p(ρ | Λ, Σ, y, X) ∝ p(y | X, ρ, Λ, Σ) p(X | Λ, Σ) p(ρ | Σ)
∝ p(y | X, ρ, Λ, Σ) p(ρ | Σ)
∝ exp( −(1/(2σ_y|x)) (ỹ − Uρ)⊺(ỹ − Uρ) ) exp( −(1/(2σ_y|x)) ρ⊺(g_L⁻¹U⊺U)ρ )
= exp( −(1/(2σ_y|x)) [ ỹ⊺ỹ + ρ⊺((g_L + 1)/g_L)U⊺U ρ − 2ỹ⊺Uρ + (g_L/(g_L + 1))ỹ⊺P_U ỹ − (g_L/(g_L + 1))ỹ⊺P_U ỹ ] )
∝ N( ρ | (g_L/(g_L + 1))(U⊺U)⁻¹U⊺ỹ, (g_L/(g_L + 1))σ_y|x(U⊺U)⁻¹ ) · exp( −(1/(2σ_y|x)) ỹ⊺( I_n − (g_L/(g_L + 1))P_U )ỹ )
∝ N( ρ | (g_L/(g_L + 1))(U⊺U)⁻¹U⊺ỹ, (g_L/(g_L + 1))σ_y|x(U⊺U)⁻¹ ).

The (conditional) marginal likelihood is the normalising constant of this distribution (adjusted by the prior). So it will be the remaining exponential times the ratio of normalising constants of the prior and this posterior. Thus,

p(y | X, Λ, Σ) ∝ ( |(g_L/(g_L + 1))σ_y|x(U⊺U)⁻¹| / |g_L σ_y|x(U⊺U)⁻¹| )^{1/2} exp( −(1/(2σ_y|x)) ỹ⊺( I_n − (g_L/(g_L + 1))P_U )ỹ )
∝ (g_L + 1)^{−d_U/2} exp( −(1/(2σ_y|x)) ỹ⊺( I_n − (g_L/(g_L + 1))P_U )ỹ ),

where d_U is the number of columns included in U. Note that we condition on X in the marginal likelihood. This is because, when integrating out ρ, it is sufficient to consider the conditional model, as the marginal distribution of X does not depend on ρ. However, this marginal likelihood is proportional to the one based on the full joint distribution, where the proportionality constant does not depend on the outcome model. Therefore, this is merely a matter of notation and does not affect the results.

For the treatment model, define

B_Σ = I_l + (1/σ_y|x) Σ_yx⊺ Σ_yx Σ_xx⁻¹
X̃ = X − (1/σ_y|x) ϵ Σ_yx (B_Σ⁻¹)⊺.

Also note that p(y | X, ρ, Λ, Σ) depends on Λ through the residual matrix H, so we cannot drop this factor. Then, we have

p(Λ | ρ, Σ, y, X) ∝ p(y | X, ρ, Λ, Σ) p(X | Λ, Σ) p(Λ | Σ)
∝ exp( −(1/(2σ_y|x)) ( ϵ − (X − VΛ)Σ_xx⁻¹Σ_yx⊺ )⊺( ϵ − (X − VΛ)Σ_xx⁻¹Σ_yx⊺ ) )
  · exp( −(1/2) tr( Σ_xx⁻¹(X − VΛ)⊺(X − VΛ) ) ) exp( −(1/2) tr( Σ_xx⁻¹Λ⊺(g_M⁻¹V⊺V)Λ ) )
∝ exp( −(1/2) tr{ Σ_xx⁻¹(B_Σ + g_M⁻¹I_l) [ Λ⊺V⊺VΛ − 2(I_l + g_M⁻¹B_Σ⁻¹)⁻¹X̃⊺VΛ
  + (I_l + g_M⁻¹B_Σ⁻¹)⁻¹X̃⊺P_V X̃ ((I_l + g_M⁻¹B_Σ⁻¹)⁻¹)⊺ − (I_l + g_M⁻¹B_Σ⁻¹)⁻¹X̃⊺P_V X̃ ((I_l + g_M⁻¹B_Σ⁻¹)⁻¹)⊺ ] } )
  · exp( −(1/(2σ_y|x)) ( ϵ⊺ϵ − 2ϵ⊺XΣ_xx⁻¹Σ_yx⊺ ) − (1/2) tr( Σ_xx⁻¹B_Σ X⊺X ) )
∝ MN( Λ | (V⊺V)⁻¹V⊺X̃ ((I_l + g_M⁻¹B_Σ⁻¹)⁻¹)⊺, (V⊺V)⁻¹, (B_Σ + g_M⁻¹I_l)⁻¹Σ_xx ).

Again, the (conditional) marginal likelihood is the normalising constant of this unnormalised distribution.
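As a sanity check (a numerical sketch with arbitrary data, not part of the derivation), the shrinkage form of the conditional posterior for ρ can be verified against the generic conjugate-update formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gL, sigma = 50, 3, 25.0, 2.0
U = rng.standard_normal((n, d))
y_tilde = rng.standard_normal(n)            # stands in for the corrected outcome

# generic conjugate update with prior rho ~ N(0, gL * sigma * (U'U)^{-1}):
# posterior precision = U'U/sigma + (U'U)/(gL*sigma)
UtU = U.T @ U
post_cov = np.linalg.inv(UtU / sigma + UtU / (gL * sigma))
post_mean = post_cov @ (U.T @ y_tilde) / sigma

# closed form from the derivation: mean gL/(gL+1) (U'U)^{-1} U' y_tilde,
# covariance gL/(gL+1) sigma (U'U)^{-1}
closed_mean = gL / (gL + 1.0) * np.linalg.solve(UtU, U.T @ y_tilde)
closed_cov = gL / (gL + 1.0) * sigma * np.linalg.inv(UtU)

assert np.allclose(post_mean, closed_mean)
assert np.allclose(post_cov, closed_cov)
```

The same check applies with ỹ replaced by the latent q̃ in the non-Gaussian case below.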
If we omit all factors that do not depend on the design matrix and, therefore, the model (we only care about ratios between different models), we have

p(y, X | ρ, Σ) ∝ |g_M B_Σ + I_l|^{−d_V/2} exp( (1/2) tr( A_Σ X̃⊺P_V X̃ ) ),

where d_V is the total number of columns included in V and

A_Σ = ( (I_l + g_M⁻¹B_Σ⁻¹)⁻¹ )⊺ Σ_xx⁻¹ B_Σ.

Note that in the treatment model, the marginal likelihood is now based on the joint distribution of y and X. This is because the conditional outcome model depends on the treatment parameters Λ. Therefore, when integrating out Λ, we have to consider the joint distribution.

For the two-component prior, the derivation is very similar. The squared term in Λ now becomes

B_Σ Λ⊺V⊺VΛ + Λ⊺G⁻¹V⊺V G⁻¹Λ.

In the one-dimensional case, B_Σ = b_Σ is a scalar and we can factorise this expression as Λ⊺( b_Σ V⊺V + G⁻¹V⊺V G⁻¹ )Λ and then complete the square in Λ. In this case, we have the following conditional posterior

Λ | ρ, Σ, y, x ∼ N( b_Σ A_G V⊺x̃, σ_xx A_G ),

where

b_Σ = 1 + (1/σ_y|x)(σ_yx²/σ_xx)
x̃ = x − (σ_yx/(b_Σ σ_y|x)) ϵ
A_G = ( b_Σ V⊺V + G⁻¹V⊺V G⁻¹ )⁻¹.

The marginal likelihood is given by

p(y, x | ρ, Σ) ∝ ( |A_G| / |G(V⊺V)⁻¹G| )^{1/2} exp( (b_Σ²/(2σ_xx)) x̃⊺V A_G V⊺x̃ ).

B.2 The non-Gaussian case

The conditional posterior distributions and marginal likelihoods change slightly in the non-Gaussian case. For the latent Gaussian outcome q, define q̃ = q − (Q − VΛ)Σ_xx⁻¹Σ_yx⊺. Now, the formulas from above remain valid after replacing ỹ with q̃. In particular, we have

ρ | q, Q, X, Λ, Σ ∼ N( (g_L/(g_L + 1))(U⊺U)⁻¹U⊺q̃, (g_L/(g_L + 1))σ_y|x(U⊺U)⁻¹ )

and

p(q | X, Q, Λ, Σ) ∝ (g_L + 1)^{−d_U/2} exp( −(1/(2σ_y|x)) q̃⊺( I_n − (g_L/(g_L + 1))P_U )q̃ ).   (8)

There is some subtlety in that the posterior quantities
now depend on both the actual treatment X, through the projection on U = [ι : X : W], and its latent Gaussian representation Q, through the latent residual H = Q − VΛ. For the treatment model, define

Q̃ = Q − (1/σ_y|x) (q − Uρ) Σ_yx (B_Σ⁻¹)⊺

such that we have

Λ | q, Q, X, ρ, Σ ∼ MN( (V⊺V)⁻¹V⊺Q̃ ((I_l + g_M⁻¹B_Σ⁻¹)⁻¹)⊺, (V⊺V)⁻¹, (B_Σ + g_M⁻¹I_l)⁻¹Σ_xx )

and

p(q, Q | X, ρ, Σ) ∝ |g_M B_Σ + I_l|^{−d_V/2} exp( (1/2) tr( A_Σ Q̃⊺P_V Q̃ ) ),   (9)

where d_V is the total number of columns included in V. Note that this marginal likelihood still depends on X through the latent outcome residual ϵ = q − Uρ, but not on the actual outcome y. To identify the dependence on X, it is helpful to factorise the joint treatment marginal likelihood as

p(q, Q | X, ρ, Σ) ∝ |g_M B_Σ + I_l|^{−d_V/2} exp( (1/2) tr( A_Σ Q⊺P_V Q ) )
  · exp( (1/2) tr{ (1/σ_y|x) c_Σ (q − Uρ)⊺P_V [ (q − Uρ) Σ_yx (B_Σ⁻¹)⊺ − 2Q ] } )
= |g_M B_Σ + I_l|^{−d_V/2} exp( (1/2) tr( A_Σ Q⊺P_V Q ) )
  · exp( −(1/(2σ_y|x)) [ 2(q − Uρ)⊺P_V Q c_Σ − Σ_yx (B_Σ⁻¹)⊺ c_Σ (q − Uρ)⊺P_V (q − Uρ) ] ),   (10)

where we have defined c_Σ = A_Σ B_Σ⁻¹ Σ_yx⊺. The marginal distribution p(Q | Σ) is proportional to the first exponential, and the conditional distribution p(q | Q, X, ρ, Σ) is proportional to the second exponential factor. The latter involves terms with q, Q, and U, and thus still depends on X.

C Proof of Theorem 1

Given the proper prior, we know that the posterior exists for all models under consideration, almost surely in the sampling distributions. One important condition that all models share is that the matrices U_L and V_M are of full column rank for any model we consider (an assumption made in Subsection 3.1). This can easily be imposed by the prior on the model space. First, we will examine model selection consistency for the Gaussian case, and then we will show that consistency extends to non-Gaussian settings.

C.1 Outcome model

In order to prove model selection consistency of the procedure using CBFs for the outcome equation, we assume, without loss of generality, that model L_i is the true model generating the data, and we consider CBF(L_i, L_j), which measures the relative support for L_i versus L_j.
Thus, model selection consistency implies that CBF($L_i, L_j$) tends to $\infty$ for any $L_j \neq L_i$ as $n$ grows without bound. We need to distinguish two cases:
• Model $L_j$ nests $L_i$. In this case, $L_j$ has all the important covariates but is overparameterised, i.e., some of its covariates are not in the true model. We use the expression for CBF($L_i, L_j$) in Subsection 4.2:
$$\mathrm{CBF}(L_i, L_j) = (g_L + 1)^{(d_{U_j} - d_{U_i})/2} \exp\left(-\frac{1}{2 \sigma_{y|x}} \frac{g_L}{g_L + 1} \tilde{y}^\top (P_{U_j} - P_{U_i}) \tilde{y}\right), \quad (11)$$
where $U_i, d_{U_i}$ and $U_j, d_{U_j}$ are the design matrices and their numbers of columns in models $L_i$ and $L_j$ respectively. Clearly, here we have that $d_{U_j} > d_{U_i}$.
First, we show that the exponential factor does not tend to zero (or $\infty$) with $n$. From Appendix A we know that the corrected outcome $\tilde{y} = y - H \Sigma_{xx}^{-1} \Sigma_{yx}^\top$ given $X$ is distributed as
$$\tilde{y} \mid X \sim N(U_i \rho_i,\, \sigma_{y|x} I_n) \quad (12)$$
in the true model. This means that $\tilde{y} / \sqrt{\sigma_{y|x}} \mid X \sim N(U_i \rho_i / \sqrt{\sigma_{y|x}},\, I_n)$, and we can now use standard results on quadratic forms of Normal random variables (see e.g. Theorem 9.4 in Muller and Stewart, 2006, p. 175) to establish that $\frac{1}{\sigma_{y|x}} \tilde{y}^\top (P_{U_j} - P_{U_i}) \tilde{y}$ is distributed as a central $\chi^2$ random variable with $d_{U_j} - d_{U_i}$ degrees of freedom, given that $P_{U_j} - P_{U_i}$ is an idempotent matrix of rank $d_{U_j} - d_{U_i}$ and $(P_{U_j} - P_{U_i}) U_i = 0$. This ensures that the exponential factor in (11) is almost surely a finite positive number and does not dictate the asymptotic behavior. Now consider the first factor in (11), where
we distinguish between the following two prior options:
Fixed $g_L$: Here a necessary and sufficient condition for CBF($L_i, L_j$) $\to \infty$ with $n$ is that $g_L$ increases without bound in $n$. This is the case for, e.g., the BRIC prior we use, and would also hold for the unit information prior, and many more.
Random $g_L$: If we assume a hyperprior $p(g_L)$, the CBF above will asymptotically behave as
$$I(L_i, L_j) = \int_{\Re^+} (1 + g_L)^{c/2}\, p(g_L)\, dg_L,$$
where $c = d_{U_j} - d_{U_i}$. If we opt, as done in this paper, for the hyper-$g/n$ prior, then
$$I(L_i, L_j) = \frac{a - 2}{2n} \int_{\Re^+} (1 + g_L)^{c/2} \left(1 + \frac{g_L}{n}\right)^{-a/2} dg_L,$$
where $a > 2$ for prior propriety. For any given $n$, the integral in $I(L_i, L_j)$ will be nonzero and will not explode for small $g_L$, but might do so for large values of $g_L$. If we focus on the right-hand tail, we obtain
$$I(L_i, L_j) \approx n^{\frac{a}{2} - 1}\, \frac{a - 2}{2} \int g_L^{\frac{c - a}{2}}\, dg_L.$$
In case the degree of overparameterisation $c = d_{U_j} - d_{U_i}$ is smaller than $a - 2$ (e.g. $d_{U_j} = d_{U_i} + 1$ and $a > 3$), the right-hand tail will integrate and the integral will be finite. But even then, the factor in front will increase without bound with $n$, leading the CBF to do the same thing. In case $d_{U_j} - d_{U_i} \geq a - 2$, the integral will not be finite and thus CBF($L_i, L_j$) $\to \infty$ with $n$.²

²If we use a hyper-$g$ prior instead for $g_L$, a similar reasoning shows that model selection consistency does not hold for cases where $d_{U_j} - d_{U_i} < a - 2$. This is qualitatively similar to the behaviour in the standard linear regression model, as in Liang et al. (2008), but for different reasons, as a slightly different prior structure is used there.

In summary, for the case where $L_j$ nests the true model $L_i$, we have that CBF($L_i, L_j$) $\to \infty$ with $n$, thus showing model selection consistency for gIVBMA under the prior assumptions mentioned here. In fact, these are necessary and sufficient conditions.
• Model $L_j$ does not nest $L_i$. Here, the model $L_j$ lacks at least one important covariate. We slightly rewrite the expression for the CBF in (11) as
$$\mathrm{CBF}(L_i, L_j) = (g_L + 1)^{(d_{U_j} - d_{U_i})/2} \exp\left(-\frac{n}{2 \sigma_{y|x}} \frac{g_L}{g_L + 1} \tilde{y}^\top \left(\frac{M_{U_i}}{n} - \frac{M_{U_j}}{n}\right) \tilde{y}\right), \quad (13)$$
using $M_U = I_n - P_U$.
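As a quick numerical sanity check of the tail behaviour discussed above (purely illustrative, not part of the proof), one can evaluate $I(L_i, L_j)$ for growing $n$ under the hyper-$g/n$ density $p(g_L) = \frac{a-2}{2n}\left(1 + g_L/n\right)^{-a/2}$. The values $a = 5$ and $c = 1$ below are arbitrary choices satisfying $a > 2$ and $c < a - 2$:

```python
import numpy as np
from scipy.integrate import quad

def I_factor(n, c=1, a=5.0):
    # E[(1 + g_L)^{c/2}] under p(g_L) = ((a - 2) / (2n)) * (1 + g_L / n)^{-a/2}
    val, _ = quad(lambda g: (1 + g) ** (c / 2) * (1 + g / n) ** (-a / 2),
                  0, np.inf)
    return (a - 2) / (2 * n) * val

vals = [I_factor(n) for n in (10, 100, 1000)]
print(vals)  # increases with n
```

The sequence increases without bound in $n$, in line with the consistency argument above; in the other regime ($c \geq a - 2$) the integral itself already diverges.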
We then apply the same reasoning as in Lemma A.1 of Fernández et al. (2001) to (12), and we use the fact that for any model $L_j$ that does not nest $L_i$,
$$\lim_{n \to \infty} \frac{\rho_i^\top U_i^\top M_{U_j} U_i \rho_i}{n} = b_j \in (0, \infty), \quad (14)$$
which means that $U_i$ is not in the column space of $U_j$, i.e., we cannot express $U_i$ as $U_i = U_j A$ for some matrix $A$ of dimension $d_{U_j} \times d_{U_i}$. In fact, (14) is satisfied if the $U$ corresponding to the full model has full column rank, which was assumed in Subsection 3.1. Then we can state the following probability limit for any model $L_j$ not nesting the true $L_i$:
$$\operatorname*{plim}_{n \to \infty} \frac{\tilde{y}^\top M_{U_j} \tilde{y}}{n} = \sigma_{y|x} + b_j,$$
while for $L_i$ (or any model nesting it), we obtain
$$\operatorname*{plim}_{n \to \infty} \frac{\tilde{y}^\top M_{U_i} \tilde{y}}{n} = \sigma_{y|x}.$$
This immediately shows us that the CBF $\to \infty$ as the sample size grows, since the exponential term behaves as $\exp(c_j n)$ for some $c_j > 0$. This is irrespective of $d_{U_j}$ and the value or prior for $g_L$. Thus, we also have model selection consistency for gIVBMA in the case that $L_j$ does not nest the
true model $L_i$.
Summarising, for any model $L_j$, we have shown that CBF($L_i, L_j$) in favour of the true model $L_i$ increases without bound for the choice of the outcome model. Outcome model selection on the basis of gIVBMA is thus consistent, if and only if
• $\lim_{n \to \infty} g_L = \infty$ for fixed $g_L$;
• we have that $\lim_{n \to \infty} \int_{\Re^+} (1 + g_L)^{c/2}\, p(g_L)\, dg_L = \infty$ for any $c \in \{1, \dots, p + k\}$ if we assume a hyperprior $p(g_L)$. The latter condition is always satisfied if it holds for $c = 1$.

C.2 Treatment model

Assume $M_i$ is the true treatment model generating the matrix of endogenous variables $X$. We show that CBF($M_i, M_j$) tends to infinity for all $M_j \neq M_i$. As above, we distinguish two cases:
• Model $M_j$ nests $M_i$. We can rewrite the conditional Bayes factor for model $M_j$ versus $M_i$ as
$$\mathrm{CBF}(M_i, M_j) = \frac{g_M^{l (d_{V_j} - d_{V_i})/2}}{|B_\Sigma + g_M^{-1} I_l|^{(d_{V_j} - d_{V_i})/2}} \exp\left(-\tfrac{1}{2} \operatorname{tr}\left(A_\Sigma \tilde{X}^\top (P_{V_j} - P_{V_i}) \tilde{X}\right)\right), \quad (15)$$
where $A_\Sigma = (I_l + g_M^{-1} B_\Sigma^{-1})^{-\top} \Sigma_{xx}^{-1} B_\Sigma$. Since $M_j$ nests $M_i$, we have $d_{V_j} > d_{V_i}$. We first show that the trace in the exponential term does not tend to infinity with $n$. Note that $\tilde{X}$ is a linear transformation of the residual matrix $[\epsilon : H]$ and is therefore again matrix Normal, $\tilde{X} \sim MN(V_i \Lambda_i, I_n, \tilde{\Sigma}_{xx})$, where
$$\tilde{\Sigma}_{xx} = \begin{pmatrix} -\frac{1}{\sigma_{y|x}} \Sigma_{yx} B_\Sigma^{-\top} \\ I_l \end{pmatrix}^{\!\top} \Sigma \begin{pmatrix} -\frac{1}{\sigma_{y|x}} \Sigma_{yx} B_\Sigma^{-\top} \\ I_l \end{pmatrix}.$$
As $P_{V_j} - P_{V_i}$ is idempotent and $(P_{V_j} - P_{V_i}) V_i = 0$, the matrix quadratic form $\tilde{X}^\top (P_{V_j} - P_{V_i}) \tilde{X}$ follows a (central) Wishart distribution with $\operatorname{tr}(P_{V_j} - P_{V_i}) = d_{V_j} - d_{V_i}$ degrees of freedom (see e.g. Corollary 10.8.2 and Theorem 10.9 in Muller and Stewart, 2006, p. 202). Using linearity of the trace operator and Theorem 10.10 in Muller and Stewart (2006), we have
$$E\left[\operatorname{tr}\left(A_\Sigma \tilde{X}^\top (P_{V_j} - P_{V_i}) \tilde{X}\right)\right] = \operatorname{tr}\left(E\left[A_\Sigma \tilde{X}^\top (P_{V_j} - P_{V_i}) \tilde{X}\right]\right) = (d_{V_j} - d_{V_i}) \operatorname{tr}\left(A_\Sigma \tilde{\Sigma}_{xx}\right).$$
For small $g_M$, $A_\Sigma$ behaves like $g_M B_\Sigma^\top \Sigma_{xx}^{-1} B_\Sigma$, and for large $g_M$ it tends to $\Sigma_{xx}^{-1} B_\Sigma$. The latter does not explode, and therefore this expectation is finite for all values of $g_M$. Since the expectation is finite, the trace is finite almost surely, and the exponential term does not tend to zero. Therefore, similarly to the previous subsection, we need to establish that the first term tends to infinity with $n$.
Note that the determinant $|B_\Sigma + g_M^{-1} I_l|$ behaves like $|B_\Sigma|$ for large $g_M$ and therefore does not dictate the asymptotic behavior. Thus, it is sufficient to focus on the first term. For any fixed $g_M$, a necessary and sufficient condition is that $g_M \to \infty$ with $n$ (e.g., BRIC). If we use a hyperprior on $g_M$, we need to ensure that
$$\lim_{n \to \infty} \int_{\Re^+} g_M^{c/2}\, p(g_M)\, dg_M = \infty$$
for $c \in \{l, 2l, \dots, (p + k) l\}$. Similarly to the situation for the outcome model, we can easily show that this is always satisfied for the hyper-$g/n$ prior.
• Model $M_j$ does not nest $M_i$. Again, we define the orthogonal projection $M_V = I_n - P_V$ to rewrite the CBF as
$$\mathrm{CBF}(M_i, M_j) = \frac{g_M^{l (d_{V_j} - d_{V_i})/2}}{|B_\Sigma + g_M^{-1} I_l|^{(d_{V_j} - d_{V_i})/2}} \exp\left(-\frac{n}{2} \operatorname{tr}\left(A_\Sigma \tilde{X}^\top \left(\frac{M_{V_i}}{n} - \frac{M_{V_j}}{n}\right) \tilde{X}\right)\right). \quad (16)$$
We then use a version of Lemma A.1 in Fernández et al. (2001) modified to the matrix setting:
Lemma 1. If $M_i$ is nested within (or equal to) $M_j$,
$$\operatorname*{plim}_{n \to \infty} \frac{\tilde{X}^\top M_{V_j} \tilde{X}}{n} = \tilde{\Sigma}_{xx}.$$
If, for any $M_j$ that does not nest $M_i$,
$$\lim_{n \to \infty} \frac{\Lambda_i^\top V_i^\top M_{V_j} V_i \Lambda_i}{n} = D_j,$$
where $D_j$ is a positive definite $l \times l$ matrix, then
$$\operatorname*{plim}_{n \to \infty} \frac{\tilde{X}^\top M_{V_j} \tilde{X}}{n} = \tilde{\Sigma}_{xx} + D_j.$$
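A scalar ($l = 1$) Monte Carlo analogue of Lemma 1 can illustrate the two probability limits; the data-generating process below is hypothetical and chosen only for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_tilde = 1.0  # residual variance playing the role of Sigma_tilde_xx (l = 1)

def quad_form(X, V):
    # X' M_V X / n with M_V = I - P_V, computed via least-squares residuals
    resid = X - V @ np.linalg.lstsq(V, X, rcond=None)[0]
    return resid @ resid / len(X)

for n in (1_000, 100_000):
    v = rng.normal(size=n)                     # the relevant first-stage column
    X = 4.0 + 2.0 * v + rng.normal(scale=np.sqrt(sigma_tilde), size=n)
    V_full = np.column_stack([np.ones(n), v])  # nests the true model
    V_small = np.ones((n, 1))                  # drops the relevant column
    print(n, quad_form(X, V_full), quad_form(X, V_small))
```

As $n$ grows, the first quadratic form settles at the residual variance, while the second settles at the residual variance plus a strictly positive constant, mirroring $\tilde{\Sigma}_{xx}$ versus $\tilde{\Sigma}_{xx} + D_j$.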
The assumption for the non-nested case in Lemma 1 is satisfied if the $V$ corresponding to the full model has full column rank, which was assumed in Subsection 3.1.
Applying Lemma 1, we have that the exponential term behaves (as $n \to \infty$) as $\exp(n \cdot \operatorname{tr}(A_\Sigma D_j))$. Note that $A_\Sigma$ can be written as the quadratic form
$$A_\Sigma = \left(I_l + g_M^{-1} B_\Sigma^{-1}\right)^{-\top} \Sigma_{xx}^{-1} \left(B_\Sigma + g_M^{-1} I_l\right) \left(I_l + g_M^{-1} B_\Sigma^{-1}\right)^{-1}.$$
Since
$$\Sigma_{xx}^{-1} \left(B_\Sigma + g_M^{-1} I_l\right) = \frac{g_M + 1}{g_M} \Sigma_{xx}^{-1} + \frac{1}{\sigma_{y|x}} \Sigma_{xx}^{-1} \Sigma_{yx}^\top \Sigma_{yx} \Sigma_{xx}^{-1}$$
is symmetric and positive-semidefinite, we can conclude that $A_\Sigma$ is also positive-semidefinite, implying that $\operatorname{tr}(A_\Sigma D_j) \geq 0$. The equality holds only if $A_\Sigma D_j$ is the zero matrix, as the trace of the product of two positive-semidefinite matrices is a (squared) Frobenius norm. Therefore, the trace is positive almost surely, and the exponential tends to infinity as $n \to \infty$.
For any treatment model $M_j$, we have shown that CBF($M_i, M_j$) in favour of the true treatment model $M_i$ tends to infinity as the sample size $n$ tends to infinity. Thus, the treatment model selection is consistent under the following (necessary and sufficient) prior assumptions on $g_M$:
• $\lim_{n \to \infty} g_M = \infty$ for fixed $g_M$;
• under a hyperprior $p(g_M)$, we have $\lim_{n \to \infty} \int_{\Re^+} g_M^{c/2}\, p(g_M)\, dg_M = \infty$ for $c \in \{l, 2l, \dots, (p + k) l\}$.

C.2.1 The two-component prior

The conditional Bayes factor under the two-component prior is given by
$$\mathrm{CBF}(M_i, M_j) = \left(\frac{|A_G^{(i)}|}{|A_G^{(j)}|} \frac{|G_j (V_j^\top V_j)^{-1} G_j|}{|G_i (V_i^\top V_i)^{-1} G_i|}\right)^{1/2} \exp\left(-\frac{b_\Sigma^2}{2 \sigma_{xx}} \tilde{x}^\top \left(V_j A_G^{(j)} V_j^\top - V_i A_G^{(i)} V_i^\top\right) \tilde{x}\right),$$
where $A_G^{(i)} = \left(b_\Sigma V_i^\top V_i + G_i^{-1} V_i^\top V_i G_i^{-1}\right)^{-1}$. The exponential factor in this CBF behaves very similarly to the one in the single-$g$ case (only with $l = 1$), as the quadratic form in the exponent can be bounded by that of the single-$g$ case (Zhang et al., 2016, Appendix B). Recall from Section 3 that the relation between $g_I$ for the instruments and $g_C$ for the exogenous covariates is given by $g_I = c\, g_C$ for some constant $c < 1$ which does not depend on $n$.
Then, note that
$$\left(\frac{|G_j (V_j^\top V_j)^{-1} G_j|}{|A_G^{(j)}|}\right)^{1/2} \geq \left(b_\Sigma^{d_{V_j}} |G_j|^2 + 1\right)^{1/2} \geq (b_\Sigma g_I)^{d_{V_j}/2},$$
where the first inequality is based on Minkowski's determinant theorem (see Zhang et al., 2016, Appendix B) and the second inequality holds since $g_C > g_I$. Similarly, we can get a lower bound on the inverse as follows:
$$\left(\frac{|A_G^{(j)}|}{|G_j (V_j^\top V_j)^{-1} G_j|}\right)^{1/2} = \left(\frac{|G_j^{-1} (V_j^\top V_j) G_j^{-1}|}{|b_\Sigma (V_j^\top V_j) + G_j^{-1} (V_j^\top V_j) G_j^{-1}|}\right)^{1/2} \geq \left(\frac{|G_j^{-1} (V_j^\top V_j) G_j^{-1}|}{|b_\Sigma (V_j^\top V_j) + G_{I,j}^{-1} (V_j^\top V_j) G_{I,j}^{-1}|}\right)^{1/2},$$
where $G_{I,j}$ is a diagonal matrix with all entries equal to $\sqrt{g_I}$ and the inequality follows from the monotonicity of determinants with respect to positive semi-definite matrices and the fact that $G_{I,j}^{-1} (V_j^\top V_j) G_{I,j}^{-1} - G_j^{-1} (V_j^\top V_j) G_j^{-1}$ is positive semi-definite. We can then write
$$\left(\frac{|A_G^{(j)}|}{|G_j (V_j^\top V_j)^{-1} G_j|}\right)^{1/2} \geq \frac{|G_j|^{-1}}{(b_\Sigma + g_I^{-1})^{d_{V_j}/2}} \geq \frac{g_C^{-d_{V_j}/2}}{(b_\Sigma + g_I^{-1})^{d_{V_j}/2}} = \left(\frac{c}{b_\Sigma g_I + 1}\right)^{d_{V_j}/2}.$$
Thus, we can bound the relevant ratio both ways:
$$(b_\Sigma g_I)^{d_{V_j}/2} \leq \left(\frac{|G_j (V_j^\top V_j)^{-1} G_j|}{|A_G^{(j)}|}\right)^{1/2} \leq \left(\frac{b_\Sigma g_I + 1}{c}\right)^{d_{V_j}/2},$$
so that it is clear the ratio behaves like $g_I^{d_{V_j}/2}$ and the same necessary and sufficient condition as for the single-$g$ case applies, with $g_M$ replaced by $g_I$ (and $l = 1$).

C.3 Extension to the non-Gaussian case

A mild assumption we make is
that the prior on any additional parameters in $r_y$ and $r_x = (r_{x_1}, \dots, r_{x_l})$ is proper and independent of $q, Q, L_i, M_j$. The conditional Bayes factor for the outcome model will now be based on the marginal likelihood for $y \mid X, Q, L_i, M_j, \Lambda, \Sigma$, which can be written as
$$p(y \mid X, Q, L_i, M_j, \Lambda, \Sigma) = \int p(y \mid q, r_y)\, p(q \mid X, Q, L_i, M_j, \Lambda, \Sigma)\, p(r_y)\, dq\, dr_y,$$
where the second factor on the right-hand side is (8). Now, assume that the true model is $L_i$ and write the conditional marginal likelihood of $L_i$ as
$$p(y \mid X, Q, L_i, M_j, \Lambda, \Sigma) = \int p(y \mid q, r_y)\, p(q \mid X, Q, L_k, M_j, \Lambda, \Sigma)\, \mathrm{CBF}_q(L_i, L_k)\, p(r_y)\, dq\, dr_y,$$
where $\mathrm{CBF}_q(\cdot, \cdot)$ is the expression in (11), but with $\tilde{y}$ replaced by $\tilde{q}$. We can now replace the integral in $r_y$ by
$$p(y \mid q) = \int p(y \mid q, r_y)\, p(r_y)\, dr_y, \quad (17)$$
which has to be finite a.s. since the posterior exists. This gives us
$$p(y \mid X, Q, L_i, M_j, \Lambda, \Sigma) = \int p(y \mid q)\, p(q \mid X, Q, L_k, M_j, \Lambda, \Sigma)\, \mathrm{CBF}_q(L_i, L_k)\, dq.$$
The expression in (8) is almost surely positive and finite, so that the ratio $\mathrm{CBF}_q(L_i, L_k)$ is bounded from below by some positive number $B_{ik}$ almost surely in $q$. Thus, we can write
$$p(y \mid X, Q, L_i, M_j, \Lambda, \Sigma) > B_{ik} \int p(y \mid q)\, p(q \mid X, Q, L_k, M_j, \Lambda, \Sigma)\, dq.$$
The integral on the right-hand side is the conditional marginal likelihood for $L_k$, so the conditional Bayes factor for the true $L_i$ versus the misspecified $L_k$ is larger than $B_{ik}$. Now $B_{ik}$ is the minimum conditional Bayes factor of $L_i$ versus $L_k$ over $q$, and we know from the analysis in Subsection C.1 that the underlying Gaussian model for $q$ with the prior in Section 3 is model-selection consistent for the unit information prior and the hyper-$g/n$ prior. As a consequence, under these choices for $g$ (or any other choice that satisfies the criteria for consistency in the Gaussian model), $\lim_{n \to \infty} \mathrm{CBF}_q(L_i, L_k) = \infty$ almost surely in $q$. Thus, it must be the case that $\lim_{n \to \infty} B_{ik} = \infty$, which immediately leads to model selection consistency for the outcome model.
To examine the consistent selection of the treatment model, we need to consider
$$p(y, X \mid L_j, M_i, \rho, \Sigma) = \int p(y \mid q, r_y)\, p(X \mid Q, r_x)\, p(q, Q \mid L_j, M_i, X, \rho, \Sigma)\, p(r_y)\, p(r_x)\, dq\, dQ\, dr_y\, dr_x, \quad (18)$$
where the third factor under the integral is given in (9), and we need to make sure that the expression we integrate characterises a valid factorisation of a probability distribution. Using (10) in Appendix B.2, we can write this as
$$p(y, X \mid L_j, M_i, \rho, \Sigma) = \int \left[\int p(y \mid q, r_y)\, p(q \mid L_j, M_i, Q, X, \rho, \Sigma)\, p(r_y)\, dq\, dr_y\right] p(X \mid Q, r_x)\, p(Q \mid M_i, \Sigma)\, p(r_x)\, dQ\, dr_x,$$
which ensures that (18) represents a valid probability statement. We can then rewrite (18) as
$$p(y, X \mid L_j, M_i, \rho, \Sigma) = \int p(y \mid q, r_y)\, p(X \mid Q, r_x)\, p(q, Q \mid L_j, M_k, X, \rho, \Sigma)\, \mathrm{CBF}_{qQ}(M_i, M_k)\, p(r_y)\, p(r_x)\, dq\, dQ\, dr_y\, dr_x,$$
where $\mathrm{CBF}_{qQ}(M_i, M_k)$ is the conditional Bayes factor in (15) with $\tilde{X}$ replaced by $\tilde{Q}$. Following the same reasoning as for the outcome model, we can then prove model selection consistency for the treatment model.

D Additional Details on the prior specification for ν

Figure S.1 presents different shifted Exponential priors on $\nu$ and their implied priors on the covariance ratio $\sigma_{yx}/\sigma_{xx}$ and the conditional outcome variance $\sigma_{y|x}$. The Exponential with scale 1 strikes a good balance between being uninformative on $\nu$ itself while also keeping the implied prior on the covariance ratio relatively uninformative, and is, therefore, our default choice. The latter is almost equivalent to fixing $\nu = 3$, which is done in Karl and Lenkoski (2012).
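For illustration, the implied priors shown in Figure S.1 can be approximated by simulation. The sketch below assumes $l = 1$, draws $\nu$ from the shifted Exponential prior, and, as an assumption of this sketch only, takes $\Sigma \sim$ Inverse-Wishart$(\nu, I_2)$; the paper's exact prior on $\Sigma$ may use a different scale matrix:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(3)
l = 1
draws = []
for _ in range(2000):
    nu = (l + 1) + rng.exponential(scale=1.0)  # shifted Exponential(1) prior on nu
    Sigma = invwishart.rvs(df=nu, scale=np.eye(l + 1), random_state=rng)
    s_yy, s_yx, s_xx = Sigma[0, 0], Sigma[0, 1], Sigma[1, 1]
    # covariance ratio and conditional outcome variance implied by Sigma
    draws.append((s_yx / s_xx, s_yy - s_yx ** 2 / s_xx))

ratio, cond_var = np.array(draws).T
print(np.median(ratio), np.median(cond_var))
```

The covariance ratio draws are symmetric around zero, and the conditional variance draws are strictly positive, matching the qualitative shapes of the implied densities in Figure S.1.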
E Additional details on the simulation experiments

E.1 Performance measures

This Section defines the performance measures used in our simulation experiments. Define the median absolute error (MAE) as the median $\ell_1$ difference between the point estimator and the true parameter,
$$\mathrm{MAE}(\tau, \hat{\tau}) = \operatorname*{Median}_{i = 1, \dots, M} \left\| \tau - \hat{\tau}^{(i)} \right\|_1,$$
where $\hat{\tau}^{(i)}$ is the posterior estimate obtained from the $i$-th simulated dataset and $M$ is the number of dataset replications. If $l = 1$ (i.e., $\tau$ is a scalar), the $\ell_1$ norm simplifies to the absolute value. Similarly, we estimate the median bias as the (componentwise) median of the estimation errors,
$$\mathrm{Bias}(\tau, \hat{\tau}) = \operatorname*{Median}_{i = 1, \dots, M} \left(\hat{\tau}^{(i)} - \tau\right).$$
For all Bayesian methods, the posterior mean is used as the point estimator for the MAE and median bias calculations. Finally, we define the coverage of each method as the proportion of credible intervals (or confidence intervals for the classical estimators) that include the true parameter value.

[Figure S.1: Exponential priors (scale parameterisation) on $\nu$ and the implied prior on the covariance ratio $\sigma_{yx}/\sigma_{xx}$ and the conditional variance $\sigma_{y|x}$ in the case of $l = 1$. The Exponential priors are shifted by $l + 1 = 2$.]

For the log-predictive score (LPS) calculation, let $\theta = (\rho, \Lambda, \Sigma, L, M)$ denote all parameters and model indices considered, and note that we only focus on the conditional distribution $y \mid X$, as this is of primary interest in most applications. Then, the LPS is given by (with some abuse of notation, as $\theta$ has continuous and discrete components)
$$\mathrm{LPS} = -\frac{1}{n^*} \sum_{i = 1}^{n^*} \log \int p(y_i^* \mid X_i^*, \theta)\, p(\theta \mid y, X)\, d\theta,$$
where the integral is the posterior predictive distribution for holdout observations $(y_i^*, X_i^*)$, $i = 1, \dots, n^*$. We divide by the holdout sample size $n^*$ to make the results comparable across different sizes of the holdout dataset. This integral is approximated by averaging over our posterior sample of $\{\theta^{(s)} : s = 1, \dots, S\}$ (see e.g. Gelman et al., 2014):
$$\mathrm{LPS} \approx -\frac{1}{n^*} \sum_{i = 1}^{n^*} \log \left(\frac{1}{S} \sum_{s = 1}^{S} p(y_i^* \mid X_i^*, \theta^{(s)})\right).$$
For the classical estimators, we compute an analogue of the log predictive density by taking the log of the outcome density evaluated at the point estimates, i.e. $-(n^*)^{-1} \sum_{i = 1}^{n^*} \log p(y_i^* \mid X_i^*, \hat{\theta})$. We generate an additional 20% of holdout observations in each dataset to compute the LPS.

E.2 Implementation of competing estimators

Here, we present some additional details on how the competing methods are implemented in our simulation study. Throughout this Section, define $U = [\iota : X : W]$ to be the outcome (or second stage) design matrix and $V = [\iota : Z : W]$ to be the treatment (or first stage) design matrix, where $W$ is the matrix of all available exogenous control variables and $Z$ is the matrix of all available instruments. Let $k_U$ and $k_V$ denote their numbers of columns, respectively. Also, let $P_V = V (V^\top V)^{-1} V^\top$ denote the first stage projection matrix. The aim is to estimate the outcome parameter $\rho$.
• Ordinary Least Squares (OLS): The OLS estimator is defined as $\hat{\rho}_{OLS} = (U^\top U)^{-1} U^\top y$ and its covariance matrix can be estimated as $\widehat{\mathrm{var}}_{OLS} = \hat{\sigma}^2 (U^\top U)^{-1}$, where $\hat{\sigma}^2 = \| y - U \hat{\rho}_{OLS} \|_2^2 / (n - k_U)$.
• Two-Stage Least Squares (TSLS): TSLS linearly regresses the endogenous
variables on the instruments and uses the resulting fitted values in the outcome model. This is equivalent to using the linear projection $P_V X$ instead of $X$. Accordingly, the TSLS estimator is given by $\hat{\rho}_{TSLS} = (U^\top P_V U)^{-1} U^\top P_V y$ and its variance is estimated as $\widehat{\mathrm{var}}_{TSLS} = \hat{\sigma}^2 (U^\top P_V U)^{-1}$, where $\hat{\sigma}^2 = \| y - U \hat{\rho}_{TSLS} \|_2^2 / (n - k_U)$. The oracle TSLS (O-TSLS) estimator refers to the one that uses the correct subset of columns in $U$ and $V$ in the formula above. For more details on TSLS estimation, we refer to any (graduate) Econometrics textbook (e.g. Davidson and MacKinnon, 2004).
• Jackknife instrumental variable estimation (JIVE): Motivated by the bias of TSLS in overidentified settings, Angrist et al. (1999) propose to use jackknife fitted values in the first stage. The intuition behind their approach is that not using the $i$-th observation when predicting the first stage removes the correlation between the predicted value and the residual for observation $i$. Let $U^{(i)}$ and $V^{(i)}$ be the design matrices with the $i$-th observation left out. Then, define $\hat{U}_{JIVE}$ as the matrix with row $i$ containing $V_i (V^{(i)\top} V^{(i)})^{-1} V^{(i)\top} U^{(i)}$ (the predicted value for the $i$-th observation). Then, the JIVE estimator is defined as³
$$\hat{\rho}_{JIVE} = (\hat{U}_{JIVE}^\top U)^{-1} \hat{U}_{JIVE}^\top y.$$
Angrist et al. (1999) use TSLS standard errors based on a just-identified setting with instruments $\hat{U}_{JIVE}$ (which is asymptotically valid).
• Regularised JIVE (RJIVE): Hansen and Kozbur (2014) propose JIVE with regularisation (in particular, Ridge) in the first stage. The jackknife prediction in the first stage then becomes $V_i (V^{(i)\top} V^{(i)} + \lambda I_{k_V})^{-1} V^{(i)\top} U^{(i)}$. They propose to set $\lambda = c^2 \cdot p$, where the constant $c$ is set to the sample standard deviation of $X$ (after partialling out control variables). We use this value of $\lambda$ in our simulation experiments. The second stage estimation remains unchanged from that of regular JIVE.
• Post-Lasso: Belloni et al. (2012) propose to use Lasso regularisation in the first stage.
We have not implemented this manually, but use the rlassoIV function in the hdm R package (Chernozhukov et al., 2016). However, this function only provides estimates (and standard errors) for the endogenous variables but not for the exogenous covariates (which we need to compute the LPS). Also, when no instruments are selected, the software only returns a warning but no effect estimates. Thus, we do not provide LPS estimates and compute the other criteria only in those cases where at least one instrument is selected.

³We use what they refer to as JIVE1. They also propose a slightly different estimator named JIVE2. However, the difference between JIVE1 and JIVE2 is minimal in their simulations, so we do not expect this to make a big difference.

• Model-averaged TSLS (MATSLS): Kuersteiner and Okui (2010) propose to use model-averaged first-stage predictions of the endogenous variable. That is, they use a TSLS estimator with a weighted average first-stage projection, $P(w) = \sum_{i = 1}^{M} w_i P_{V_i}$, where the sum goes over all possible models. The weights are estimated by minimising a mean-squared error criterion. We allow the weights to be unrestricted (i.e. they can be negative). Then, the TSLS estimator is computed as above using $P(w)$ in place of $P_V$. To implement this procedure, we have translated the MATLAB code found on Ryo Okui's website
to Julia.
• sisVIVE: Kang et al. (2016) propose the sisVIVE estimator for settings with potentially invalid instruments. Instruments can be included in the outcome model but are subject to an $\ell_1$ penalty. The estimation strategy is to minimise the moment condition subject to that $\ell_1$ penalty term. We use the sisVIVE R package (Kang, 2017) to implement the estimator. To the best of our knowledge, there is no simple way of computing valid standard errors for the sisVIVE estimator (the software does not provide standard errors, and Kang et al. (2016) do not discuss how one would compute them). Therefore, our simulations do not provide any coverage results for sisVIVE.
• BMA: Naive BMA refers to using BMA on the outcome model only (and ignoring the treatment equation). We use a $g$-prior specification on the model-specific slope coefficients and uninformative priors on the intercept and the residual variance (see e.g. Fernández et al., 2001). Note that this is different from the prior specification of gIVBMA, where the intercept is included in the $g$-prior. BMA is strictly based on the assumption that the outcome is Gaussian. This could be relaxed using the ULLGM approach of Steel and Zens (2024), but the approximation seems reasonable in our birthweight example, which is the only setting where we consider a non-Gaussian outcome.
• IVBMA: To implement the IVBMA method by Karl and Lenkoski (2012), we use the most recent version (1.05, 2014-09-18) of the ivbma R package. This R package is not available on CRAN anymore (withdrawn on 2018-12-07) but can still be downloaded from the archive. Their code does not accommodate the approximate extension to Poisson variables suggested in Lee and Lenkoski (2022), so when we use IVBMA in the presence of non-Gaussian variables, we use a Gaussian approximation for any non-Gaussian component.
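A minimal simulated-data sketch contrasting the OLS and TSLS formulas above (the data-generating process is hypothetical and not one of the designs in this paper; JIVE and RJIVE differ only in how the first-stage fitted values are constructed):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
Z = rng.normal(size=(n, 2))                        # instruments
w = rng.normal(size=n)                             # exogenous control
errs = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
x = Z @ np.array([1.0, 0.5]) + w + errs[:, 1]      # endogenous treatment
y = 1.0 + 0.5 * x + w + errs[:, 0]                 # true coefficient on x is 0.5

U = np.column_stack([np.ones(n), x, w])            # outcome design [iota : X : W]
V = np.column_stack([np.ones(n), Z, w])            # treatment design [iota : Z : W]

rho_ols = np.linalg.solve(U.T @ U, U.T @ y)

# TSLS via first-stage fitted values U_hat = P_V U, without forming P_V explicitly.
U_hat = V @ np.linalg.lstsq(V, U, rcond=None)[0]
rho_tsls = np.linalg.solve(U_hat.T @ U, U_hat.T @ y)
print(rho_ols[1], rho_tsls[1])  # OLS inflated by endogeneity; TSLS close to 0.5
```

The positive error correlation pushes the OLS coefficient on $x$ upward, while TSLS, using only the variation in $x$ explained by $V$, recovers a value near the true 0.5.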
Whenever we use purely data-driven instrument selection in our simulations, we also allow naive BMA and OLS to include all instruments in the outcome model. Letting gIVBMA use the instrument information in the outcome model while the naive methods cannot would unfairly skew the comparison towards gIVBMA. This is particularly relevant when some of the instruments may be invalid, as in Section 6.2 of the main text.

E.3 Count data experiment

To investigate the performance of our method if the treatment is a count variable, we consider the setting of Section 6.1 with many weak instruments. However, the treatment $x_i$ is now simulated from a Poisson with rate parameter $\exp(q_i)$, where the Gaussian $q_i$ is generated as before. Table S.1 presents the results, which are qualitatively somewhat similar to the ones in the main text. Notably, the naive methods (OLS and regular BMA) are much closer than in the Gaussian setting. This is perhaps a consequence of the additional variance induced by the non-linear transformation, diluting the degree of endogeneity. Also, IVBMA's small sample performance in terms of MAE and bias is much better, but its predictive performance is still very poor. Post-Lasso has even more instances of not selecting a single instrument, so its performance is based on very few iterations.

E.4 Invalid instruments
To investigate the performance in the presence of invalid instruments, we consider a simulation setup similar to the ones in Kang et al. (2016) and Windmeijer et al. (2019). Again, we consider two different sample sizes $n \in \{50, 500\}$ and $p = 10$ potential instruments simulated from independent standard Normals. Their coefficients in the outcome model are set to $\beta = (1, \dots, 1, 0, \dots, 0)$, where the first $s$ coefficients are one (and, therefore, the first $s$ instruments are invalid). We vary $s \in \{3, 6\}$ such that the plurality rule only holds in the first scenario. The instruments' treatment coefficient is set to $\delta = c \iota$, where $c > 0$ is chosen such that the first-stage $R^2_f$ is approximately 0.2. The treatment effect is set to $\tau = 0.1$. Then the data is generated from (1), while $\sigma_{yy} = \sigma_{xx} = 1$ and $\sigma_{yx} = 1/2$.

n = 50                  R^2_f = 0.01                 R^2_f = 0.1
                     MAE    Bias   Cov.   LPS     MAE    Bias   Cov.   LPS
BMA (hyper-g/n)      0.06   0.06   0.32   1.47    0.05   0.05   0.39   1.44
gIVBMA (BRIC)        0.02   0.01   0.95   1.5     0.02  -0.0    0.99   1.51
gIVBMA (hyper-g/n)   0.02   0.0    0.94   1.51    0.02  -0.01   0.98   1.51
gIVBMA (2C)          0.02   0.0    0.95   1.51    0.02  -0.01   0.97   1.51
IVBMA (KL)           0.11   0.06   0.53   14.84   0.1   -0.04   0.6    12.64
OLS                  0.06   0.06   0.28   1.6     0.05   0.05   0.36   1.51
TSLS                 0.07   0.07   0.61   1.62    0.05   0.05   0.6    1.51
O-TSLS               0.07   0.06   0.77   1.59    0.04   0.04   0.87   1.47
JIVE                 0.11   0.05   0.98   2.07    0.14   0.05   1.0    2.13
RJIVE                0.14   0.05   1.0    2.69    0.12   0.07   1.0    2.62
MATSLS               0.17   0.05   0.98   2.49    0.08   0.03   0.92   1.91
Post-LASSO           0.08   0.08   1.0    -       0.15   0.01   0.75   -

n = 500                 R^2_f = 0.01                 R^2_f = 0.1
                     MAE    Bias   Cov.   LPS     MAE    Bias   Cov.   LPS
BMA (hyper-g/n)      0.06   0.06   0.0    1.35    0.05   0.05   0.0    1.37
gIVBMA (BRIC)        0.01  -0.0    0.94   1.42    0.01  -0.0    0.95   1.44
gIVBMA (hyper-g/n)   0.01  -0.0    0.94   1.42    0.01  -0.0    0.95   1.44
gIVBMA (2C)          0.01  -0.0    0.95   1.42    0.01  -0.0    0.94   1.44
IVBMA (KL)           0.1   -0.05   0.29   216.37  0.08  -0.05   0.54   140.27
OLS                  0.06   0.06   0.0    1.36    0.05   0.05   0.0    1.38
TSLS                 0.04   0.04   0.69   1.38    0.02   0.02   0.81   1.41
O-TSLS               0.04   0.03   0.85   1.4     0.02   0.01   0.9    1.42
JIVE                 0.1    0.08   0.83   1.64    0.05  -0.03   0.97   1.79
RJIVE                0.09   0.07   0.89   1.69    0.09   0.0    0.97   1.99
MATSLS               0.08   0.01   0.98   1.75    0.02  -0.0    0.94   1.48
Post-LASSO           0.42  -0.28   1.0    -       0.02   0.0    1.0    -

Table S.1: Many Weak Instruments: MAE, bias, coverage, and mean LPS with a Poisson endogenous variable and many weak instruments based on 100 simulated datasets. The best values in each column are printed in bold. Post-Lasso only returns estimates for $\tau$, but not for the other coefficients, so we cannot compute the LPS. When no instrument is selected, no effect estimates are provided; therefore, we do not consider those cases. The number of times no instruments were selected in the first stage is (from top-left to bottom-right): 98, 96, 95, 48.

We compare the performance of our method to IVBMA (Karl and Lenkoski, 2012), TSLS using all instruments, oracle TSLS, and the sisVIVE estimator
of Kang et al. (2016). We also add what we call oracle gIVBMA (O-gIVBMA), which correctly separates the valid and invalid instruments (or covariates) a priori. For this oracle, the invalid instruments can be included in both models, but the valid instruments can only be included in the treatment model. This corresponds to a situation where the analyst has some extra prior information to distinguish valid instruments from potentially invalid instruments. We would expect this to be slightly more efficient in smaller samples. We cannot use the two-component prior for gIVBMA with purely data-driven instrument selection: there is no way to split the potential instruments into two groups without any additional prior knowledge. For the oracle, however, we can still use the two-component prior.

n = 50                   s = 3                        s = 6
                      MAE    Bias   Cov.   LPS     MAE    Bias   Cov.   LPS
BMA (hyper-g/n)       0.48   0.48   0.09   1.38    0.48   0.48   0.09   1.41
gIVBMA (BRIC)         0.35   0.34   1.0    1.37    0.26   0.25   0.98   1.41
gIVBMA (hyper-g/n)    0.13   0.09   0.97   1.37    0.2    0.0    0.97   1.41
O-gIVBMA (hyper-g/n)  0.16   0.09   0.99   1.37    0.14  -0.0    0.98   1.39
O-gIVBMA (2C)         0.14   0.07   1.0    1.37    0.17  -0.02   0.97   1.39
IVBMA (KL)            0.39   0.39   0.98   1.38    0.6    0.6    0.89   1.41
OLS                   0.53   0.53   0.08   1.44    0.51   0.51   0.05   1.5
TSLS                  1.45   1.45   0.21   2.15    2.44   2.44   0.03   2.46
O-TSLS                0.28   0.25   0.77   1.45    0.31   0.24   0.88   1.5
sisVIVE               0.66   0.66   -      1.69    2.41   2.41   -      2.39

n = 500                  s = 3                        s = 6
                      MAE    Bias   Cov.   LPS     MAE    Bias   Cov.   LPS
BMA (hyper-g/n)       0.46   0.46   0.0    1.29    0.48   0.48   0.0    1.27
gIVBMA (BRIC)         0.08   0.06   0.96   1.29    0.19   0.19   0.94   1.27
gIVBMA (hyper-g/n)    0.1    0.06   0.95   1.29    0.25   0.24   0.91   1.27
O-gIVBMA (hyper-g/n)  0.06   0.02   0.96   1.29    0.08   0.03   0.92   1.27
O-gIVBMA (2C)         0.07   0.03   0.96   1.29    0.1    0.04   0.93   1.27
IVBMA (KL)            0.09  -0.06   0.82   1.29    0.08  -0.05   0.83   1.27
OLS                   0.5    0.5    0.0    1.29    0.5    0.5    0.0    1.27
TSLS                  1.79   1.79   0.0    2.18    3.57   3.57   0.0    2.72
O-TSLS                0.06   0.02   0.96   1.42    0.08   0.02   0.95   1.41
sisVIVE               0.22   0.22   -      1.61    4.36   4.36   -      3.05

Table S.2: Invalid Instruments: MAE, bias, coverage, and mean LPS on 100 simulated datasets with $s$ invalid instruments. The best values in each column are printed in bold. The sisVIVE estimator does not provide any uncertainty quantification, so we do not report any coverage results.

Table S.2 shows that all gIVBMA variants predict well and lead to small bias and MAE. For small $n$, the BRIC specification has a higher MAE and median bias than the hyper-$g/n$ variants. The oracle versions tend to perform slightly better, suggesting that using prior information on the separation of instruments and covariates can be beneficial. The IVBMA method also predicts well, but incurs significant bias for small $n$ and undercovers for large $n$. TSLS results in a very large positive bias, very small coverage, and poor prediction. The oracle version of TSLS does better, but is not competitive with the gIVBMA methods for small $n$. All the methods above appear largely unaffected by the plurality rule. The
latter does affect sisVIVE, however, which performs very poorly when $s = 6$, as expected.

E.5 Multiple endogenous variables with correlated instruments

We consider an example with two endogenous variables, one following a Gaussian distribution and one following a Beta distribution, and correlated instruments. We take inspiration from the simulation design in Fernández et al. (2001): there are 15 instruments, where $Z_1, \dots, Z_{10}$ are generated from independent standard Gaussians and $Z_{11}, \dots, Z_{15}$ are generated according to
$$[Z_{11} : \dots : Z_{15}] = [Z_1 : \dots : Z_5]\, [0.3, 0.5, 0.7, 0.9, 1.1]^\top [1, 1, 1, 1, 1] + E,$$
where $E$ is an $n \times 5$ matrix of independent Gaussian errors. This data-generating process leads to moderate correlations between the groups $Z_1, \dots, Z_5$ and $Z_{11}, \dots, Z_{15}$ and relatively strong correlations within $Z_{11}, \dots, Z_{15}$. Then, we generate the (latent Gaussian) endogenous variables according to
$$Q = \iota [4, -1] + Z_1 [2, -2] + Z_5 [-1, 1] + Z_7 [1.5, 1] + Z_{11} [1, 1] + Z_{13} [0.5, -0.5] + H,$$
and we set $X_1 = Q_1$ and sample $X_2$ from a Beta distribution with mean $\exp(Q_{i2}) / (1 + \exp(Q_{i2}))$ for observation $i$ and dispersion parameter $r = 1$. The outcome is then generated from $y = \iota + X [0.5, -0.5]^\top + \epsilon$, where $[\epsilon : H]$ are jointly Gaussian with mean zero and covariance matrix $\Sigma$. The covariance is set to $\Sigma_{ij} = c^{|i - j|}$, $i, j = 1, 2, 3$ with $c = 2/3$, so the endogeneity is relatively strong. None of the instruments directly affects the outcome; therefore, they are all valid. Again, we vary the sample size $n \in \{50, 500\}$.
Table S.4 presents the median posterior inclusion probabilities from the simulation with multiple endogenous variables. The gIVBMA variants always include the correct variables, while only very sparsely including wrong instruments in the smaller sample and not at all in the larger sample. IVBMA correctly selects the included instruments, but also includes other unnecessary variables too often for the first endogenous variable.
For the second endogenous variable, some of the true instruments are hardly included in the $n = 50$ scenario, while the performance is good for $n = 500$.

n = 50               MAE    Bias   Cov. X1   Cov. X2   LPS
BMA (hyper-g/n)      0.48   0.42   0.47      0.92      1.39
gIVBMA (BRIC)        0.24   0.06   0.93      1.0       1.23
gIVBMA (hyper-g/n)   0.22   0.04   0.92      0.99      1.23
IVBMA (KL)           0.52   0.5    0.92      0.94      1.26
OLS                  0.95   0.65   0.0       0.93      1.46
TSLS                 0.28   0.16   0.86      0.98      1.46
O-TSLS               0.27   0.09   0.89      0.98      1.47
MATSLS               0.46   0.13   0.91      0.95      1.57

n = 500              MAE    Bias   Cov. X1   Cov. X2   LPS
BMA (hyper-g/n)      0.76   0.66   0.0       0.99      1.14
gIVBMA (BRIC)        0.08   0.01   0.99      0.87      1.15
gIVBMA (hyper-g/n)   0.1    0.01   0.94      0.89      1.15
IVBMA (KL)           0.25   0.23   0.98      0.92      1.14
OLS                  0.77   0.67   0.0       0.97      1.16
TSLS                 0.11   0.01   0.95      0.94      1.43
O-TSLS               0.11   0.02   0.96      0.95      1.43
MATSLS               0.1    0.01   0.94      0.94      1.43

Table S.3: Multiple endogenous variables with correlated instruments: MAE, bias, coverage, and mean LPS based on 100 simulated datasets with two endogenous variables (one Gaussian and one Beta). The best values in each column are in bold.

F Additional details on the empirical examples

F.1 Returns to schooling

Here, we present additional details for the returns to schooling example based on the Card (1995) dataset.
https://arxiv.org/abs/2504.13520v3
Table S.5 includes definitions of all the variables used. Tables S.6 and S.7 present the posterior inclusion probabilities obtained with the Card (1995) dataset, including and excluding parental education, respectively, by gIVBMA, IVBMA, and naive BMA (only the outcome model). Figure S.2 and Table S.8 present the posterior results and inclusion probabilities for the model without parental education on the smaller shared set of observations. Figure S.3 shows the posterior distributions of the treatment residual variance on all three subsets of the dataset.

[Figure: two panels, (a) the treatment effect τ and (b) the covariance ratio σ_yx/σ_xx, each for gIVBMA (BRIC), gIVBMA (hyper-g/n), BMA (hyper-g/n), and IVBMA.]

Figure S.2: Returns to schooling: Posterior distributions of the treatment effect of education (Rao-Blackwellized for gIVBMA; kernel density estimate for IVBMA) and the covariance ratio σ_yx/σ_xx based on the Card (1995) dataset without parental education, but only using the observations that have complete entries for parental education (n = 2,215). The algorithm was run for 50,000 iterations, the first 5,000 of which were discarded as burn-in. IVBMA includes a point mass at zero of 22.8%.
            gIVBMA (BRIC)  gIVBMA (hyper-g/n)  IVBMA (X1)     IVBMA (X2)
Variable    n=50   n=500   n=50   n=500        n=50   n=500   n=50   n=500
Z1          1.0    1.0     1.0    1.0          1.0    1.0     0.998  1.0
Z2          0.003  0.0     0.012  0.0          0.223  0.057   0.071  0.017
Z3          0.006  0.0     0.008  0.0          0.241  0.044   0.101  0.02
Z4          0.002  0.0     0.006  0.0          0.246  0.07    0.087  0.013
Z5          1.0    1.0     1.0    1.0          1.0    1.0     0.334  1.0
Z6          0.0    0.0     0.01   0.0          0.188  0.047   0.061  0.012
Z7          1.0    1.0     1.0    1.0          1.0    1.0     0.422  1.0
Z8          0.001  0.0     0.003  0.0          0.166  0.052   0.066  0.015
Z9          0.009  0.0     0.008  0.0          0.205  0.056   0.061  0.018
Z10         0.0    0.0     0.004  0.0          0.178  0.056   0.064  0.012
Z11         1.0    1.0     1.0    1.0          1.0    1.0     0.578  1.0
Z12         0.0    0.0     0.006  0.0          0.167  0.042   0.074  0.016
Z13         1.0    1.0     1.0    1.0          1.0    1.0     0.089  1.0
Z14         0.004  0.0     0.011  0.0          0.173  0.031   0.076  0.012
Z15         0.002  0.0     0.002  0.0          0.164  0.039   0.08   0.011

Table S.4: Multiple endogenous variables with correlated instruments: Median treatment posterior inclusion probabilities across 100 simulated datasets. The instruments included in the true model are printed in bold. Note that IVBMA uses separate treatment models for the two endogenous variables X1 and X2.

Variable   Definition
lwage      Logarithm of the hourly wage (in cents) in 1976
educ       Years of schooling in 1976
age        Age in years
agesq      Age squared
exper      Experience as measured by age − educ − 6
expersq    Experience squared
nearc2     =1 if near 2-year college in 1966
nearc4     =1 if near 4-year college in 1966
fatheduc   Father's schooling in years
motheduc   Mother's schooling in years
momdad14   =1 if live with mom and dad at 14
sinmom14   =1 if with single mom at 14
step14     =1 if with step parent at 14
black      =1 if black
south      =1 if in south in 1976
smsa       =1 if in a standard metropolitan statistical area (SMSA) in 1976
married    =1 if married in 1976
reg66i     =1 for region i in 1966

Table S.5: Returns to schooling: Definitions of all variables used from the Card (1995) dataset.

We evaluate the predictive performance on this dataset in a 5-fold cross-validation exercise, comparing gIVBMA under the hyper-g/n and BRIC priors with IVBMA, naive BMA, and TSLS. We do not believe any of the other classical methods in our simulation experiments are particularly suited to this setting, as we neither have many weak instruments nor evidence that the intended instruments are invalid. As shown in Table S.9, the performance of both gIVBMA variants, IVBMA, and naive BMA is relatively similar on all three subsets of the data. TSLS performs the worst in this predictive comparison. However, this assessment is not entirely fair, both for the reasons discussed in Section 6 and because TSLS is the only method lacking flexibility in covariate and instrument selection.
           gIVBMA (hyper-g/n)  gIVBMA (BRIC)   IVBMA           BMA (hyper-g/n)
Variable   L      M            L      M        L      M        L
age        0.742  0.176        0.492  0.533    0.989  1.0      0.89
agesq      0.262  0.174        0.536  0.534    0.013  1.0      0.352
nearc2     0.071  0.062        0.385  0.08     0.343  0.304    0.672
nearc4     0.001  0.298        0.032  0.572    0.069  0.893    0.091
momdad14   0.989  1.0          0.906  1.0      0.997  1.0      0.104
sinmom14   0.003  0.008        0.012  0.01     0.032  0.132    0.082
step14     0.001  0.006        0.009  0.011    0.059  0.148    0.124
black      0.035  1.0          0.167  1.0      0.223  1.0      1.0
south      0.005  1.0          0.177  0.995    0.125  0.998    1.0
smsa       0.006  1.0          0.152  1.0      0.063  1.0      1.0
married    1.0    1.0          1.0    1.0      1.0    1.0      1.0
reg662     0.001  0.004        0.015  0.015    0.033  0.128    0.089
reg663     0.206  0.052        0.632  0.054    0.669  0.309    0.757
reg664     0.006  0.005        0.022  0.016    0.09   0.141    0.21
reg665     0.0    0.005        0.016  0.016    0.03   0.079    0.147
reg666     0.0    0.004        0.016  0.01     0.031  0.109    0.089
reg667     0.0    0.004        0.009  0.018    0.024  0.098    0.085
reg668     0.395  0.027        0.665  0.126    0.938  0.713    0.922
reg669     0.004  0.038        0.047  0.111    0.06   0.536    0.184

Table S.6: Returns to schooling: Posterior inclusion probabilities for the Card (1995) dataset excluding parental education, using the full sample of n = 3,003 observations.

           gIVBMA (hyper-g/n)  gIVBMA (BRIC)   IVBMA           BMA (hyper-g/n)
Variable   L      M            L      M        L      M        L
age        0.704  0.809        0.781  0.903    0.999  1.0      0.801
agesq      0.298  0.607        0.233  0.774    0.367  1.0      0.329
nearc2     0.12   0.004        0.397  0.018    0.488  0.15     0.823
nearc4     0.001  0.052        0.013  0.102    0.02   0.457    0.083
momdad14   0.003  0.023        0.016  0.041    0.049  0.317    0.094
sinmom14   0.003  0.004        0.007  0.01     0.11   0.355    0.079
step14     0.004  0.291        0.032  0.532    0.076  0.912    0.195
black      0.999  0.011        1.0    0.039    1.0    0.292    1.0
south      1.0    0.006        1.0    0.018    1.0    0.108    1.0
smsa       1.0    0.944        1.0    0.958    1.0    0.982    1.0
married    1.0    0.879        1.0    0.983    1.0    0.993    1.0
reg662     0.001  0.002        0.007  0.011    0.025  0.135    0.106
reg663     0.042  0.007        0.163  0.017    0.25   0.126    0.555
reg664     0.007  0.016        0.04   0.044    0.144  0.464    0.169
reg665     0.003  0.022        0.008  0.032    0.029  0.257    0.094
reg666     0.0    0.004        0.011  0.015    0.032  0.16     0.08
reg667     0.001  0.009        0.005  0.026    0.03   0.269    0.094
reg668     0.244  0.005        0.611  0.009    0.841  0.347    0.771
reg669     0.001  0.006        0.016  0.019    0.036  0.21     0.105
fatheduc   0.006  1.0          0.14   1.0      0.073  1.0      0.105
motheduc   0.008  1.0          0.056  1.0      0.085  1.0      0.796

Table S.7: Returns to schooling: Posterior inclusion probabilities for the Card (1995) dataset including parental education, based on the reduced sample of n = 2,215 observations.

           gIVBMA (hyper-g/n)  gIVBMA (BRIC)   IVBMA           BMA (hyper-g/n)
Variable   L      M            L      M        L      M        L
age        0.99   0.229        0.501  0.739    1.0    1.0      0.805
agesq      0.001  0.157        0.521  0.662    0.22   1.0      0.306
nearc2     0.135  0.142        0.59   0.131    0.501  0.37     0.874
nearc4     0.001  0.024        0.012  0.073    0.032  0.4      0.082
momdad14   0.001  0.014        0.01   0.033    0.07   0.277    0.09
sinmom14   0.001  0.005        0.008  0.019    0.1    0.321    0.103
step14     0.002  0.106        0.027  0.209    0.09   0.658    0.186
black      0.006  1.0          0.055  1.0      0.384  1.0      1.0
south      0.01   0.996        0.282  0.82     0.402  0.797    1.0
smsa       0.002  1.0          0.074  1.0      0.289  1.0      1.0
married    1.0    0.999        1.0    1.0      1.0    1.0      1.0
reg662     0.001  0.007        0.011  0.009    0.041  0.135    0.113
reg663     0.031  0.017        0.347  0.04     0.418  0.195    0.57
reg664     0.001  0.003        0.024  0.016    0.11   0.351    0.155
reg665     0.001  0.001        0.01   0.018    0.03   0.193    0.096
reg666     0.001  0.004        0.007  0.021    0.033  0.143    0.081
reg667     0.001  0.003        0.011  0.011    0.031  0.13     0.066
reg668     0.057  0.011        0.3    0.028    0.747  0.614    0.742
reg669     0.001  0.003        0.012  0.019    0.039  0.165    0.097

Table S.8: Returns to schooling: Posterior inclusion probabilities for the Card (1995) dataset without parental education, but only using the observations that have complete entries for parental education (n = 2,215).
[Figure: three panels of posterior densities of σ_xx for gIVBMA (BRIC), gIVBMA (hyper-g/n), and IVBMA, each overlaying the subsets "No parent edu. (n = 3,003)", "With parent edu. (n = 2,215)", and "No parent edu. (n = 2,215)".]

Figure S.3: Returns to schooling: Posterior distribution of the treatment residual variance σ_xx for gIVBMA (BRIC and hyper-g/n) and IVBMA.

Method              (a)    (b)    (c)
gIVBMA (hyper-g/n)  0.432  0.442  0.441
gIVBMA (BRIC)       0.432  0.439  0.442
BMA (hyper-g/n)     0.431  0.439  0.44
IVBMA               0.432  0.441  0.441
TSLS                0.698  0.591  0.72

Table S.9: Returns to schooling: The mean LPS calculated over each fold of the Card (1995) data (a) without parental education (n = 3,003), (b) with parental education (n = 2,215), and (c) without parental education only using the shared set of observations (n = 2,215) in a 5-fold cross-validation procedure.

F.2 Additional Example: Maternal smoking and birthweight

As an additional application, we revisit the effect of maternal smoking during pregnancy on birth weight using data from Mullahy (1997). The smoking habits of mothers are likely to be correlated with other unobserved determinants of the child's birth weight, potentially leading to endogeneity. The dependent variable is birth weight
in ounces, and the endogenous regressor is the typical number of cigarettes smoked daily during the pregnancy. As instruments, we use the father's years of education, the mother's years of education, the cigarette price in their home state, and the excise tax on cigarettes in their home state. These have a plausible effect on the mother's smoking habits but are unlikely to affect the child's birth weight directly. Further explanatory variables are birth order, the child's sex, and race. We also include all pairwise interactions between the instruments and exogenous covariates as potential covariates (they could end up being selected as instruments). In total, this yields 19 potential instruments and covariates. After removing missing values, the dataset contains 1,191 observations.

The data record both birth weight and the number of cigarettes smoked daily as integers. However, the birth weight counts are reasonably large, such that a Gaussian approximation could work well. This gives us two modelling options: a Poisson with a latent log-normal parameter or a Gaussian log-linear model. The latter uses the natural logarithm of birth weight as the outcome, so that the interpretation is comparable across these two models. We fit both models with hyper-g/n priors and compare the results. In both cases, the number of cigarettes (the only endogenous variable) is modeled as a Poisson with a latent log-normal parameter. We also compare this to naive BMA (also under a hyper-g/n prior) and IVBMA, both applied to the log-linear Gaussian model. Note that IVBMA also treats the number of cigarettes in the treatment model as Gaussian.

Figure S.4 shows the marginal posterior for the cigarettes coefficient in the Poisson and the Gaussian model. Both are centered just above −0.005, but the Poisson posterior is slightly tighter. For each additional cigarette smoked per day, we would expect a decrease in birth weight of around 0.5%.
IVBMA does not select the endogenous variable most of the time and, therefore, has most of its posterior mass on zero. Both the Gaussian and the Poisson models always select the mother's years of education as an instrument, while other instruments are only sporadically included. IVBMA, on the other hand, selects the father's years of education in almost all treatment models, while the mother's education is only included about two-thirds of the time.

Again, we compare the predictive performance on this dataset using the LPS in a cross-validation procedure. For the log-linear models, we adjust the posterior predictive by the Jacobian of y ↦ log(y) to be on the same scale as the Poisson log-normal model. So, the approximate LPS for the log-linear models is

LPS ≈ −(1/n*) Σ_{i=1}^{n*} log( (1/S) Σ_{s=1}^{S} p(log y*_i | X*_i, θ_s) / |y*_i| )
    = −(1/n*) Σ_{i=1}^{n*} log( (1/S) Σ_{s=1}^{S} p(log y*_i | X*_i, θ_s) ) + (1/n*) Σ_{i=1}^{n*} log |y*_i|.

Table S.10 presents the results of the cross-validation exercise. The Poisson log-normal model achieves the lowest mean LPS.

[Figure: two panels, the effect τ and the covariance ratio σ_yx/σ_xx, for gIVBMA (Poisson), gIVBMA (Gaussian), BMA (Gaussian), and IVBMA.]

Figure S.4: Maternal smoking and birthweight: The marginal posterior distribution (Rao-Blackwellised) of the effect of smoking an
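The Jacobian-adjusted LPS above is straightforward to implement. The sketch below is a generic helper under hypothetical names; it assumes the S posterior draws have been summarised as an (S, n*) array of log-densities log p(log y*_i | X*_i, θ_s), which is our assumption, not the authors' interface.

```python
import numpy as np

def lps_loglinear(y_test, logdens_draws):
    """Approximate LPS for a log-linear model, adjusted by the Jacobian of
    y -> log(y) so it is comparable to models for y on the original scale.

    logdens_draws[s, i] = log p(log y_i | X_i, theta_s) for posterior draw s.
    """
    S = logdens_draws.shape[0]
    # log of the posterior-averaged density, computed stably (log-sum-exp)
    m = logdens_draws.max(axis=0)
    log_mix = m + np.log(np.exp(logdens_draws - m).sum(axis=0)) - np.log(S)
    # LPS = -(1/n*) sum_i log mixture_i + (1/n*) sum_i log|y_i|
    return float(-log_mix.mean() + np.log(np.abs(y_test)).mean())
```

With a single posterior draw, the two terms reduce to minus the mean log-density of log y plus the mean of log y, matching the second line of the display above.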
arXiv:2504.13620v1 [math.PR] 18 Apr 2025

Set-valued conditional functionals of random sets

Tobias Fissler*   Ilya Molchanov†

April 21, 2025

Abstract

Many key quantities in statistics and probability theory such as the expectation, quantiles, expectiles and many risk measures are law-determined maps from a space of random variables to the reals. We call such a law-determined map, which is normalised, positively homogeneous, monotone and translation equivariant, a gauge function. Considered as a functional on the space of distributions, we can apply such a gauge to the conditional distribution of a random variable. This results in conditional gauges, such as conditional quantiles or conditional expectations. In this paper, we apply such scalar gauges to the support function of a random closed convex set X. This leads to a set-valued extension of a gauge function. We also introduce a conditional variant whose values are themselves random closed convex sets. In special cases, this functional becomes the conditional set-valued quantile or the conditional set-valued expectation of a random set. In particular, in the unconditional setup, if X is a random translation of a deterministic cone and the gauge is either a quantile or an expectile, we recover the cone distribution functions studied by Andreas Hamel and his co-authors. In the conditional setup, the conditional quantile of a random singleton yields the conditional version of the half-space depth-trimmed regions.

Keywords: Random set · Quantile · Random cone · Depth-trimmed region · Gauge function · Conditional distribution

Mathematics Subject Classification: 60D05 · 62H05 · 91G70

1 Introduction

Due to the lack of a natural total order, many classical concepts for random variables are not directly applicable for random vectors. In the course of extending such concepts to random vectors, the classical definitions often acquire new features.
* tobias.fissler@math.ethz.ch, RiskLab, ETH Zurich, 8092 Zurich, Switzerland
† ilya.molchanov@unibe.ch, Institute of Mathematical Statistics and Actuarial Science, University of Bern, 3012 Bern, Switzerland

The most well-known example is that of the median and of quantiles: while on the line they are numbers, in the space they are sometimes defined as sets or collections of numbers depending on direction, see Hallin et al. (2010) and Paindaveine and Virta (2021). Alternatively, it is possible to work with partial orders as done by Belloni and Winkler (2011), or to apply tools from the theory of optimal transport as pursued by Carlier et al. (2016), or to derive a multivariate median from a solution of a certain minimisation procedure, see Chaudhuri (1996).

A useful replacement of quantiles in multivariate statistics is provided by depth-trimmed regions. In this course, the distribution of a random vector or its empirical counterpart is associated with a nested family of sets, depending on the parameter α ∈ (0, 1]. These depth-trimmed regions are exactly the upper level sets of the depth function, which allocates to each point its depth with respect to the underlying probability distribution. The first such concept is the half-space depth introduced by Tukey (1975) (see Nagy et al. (2019) for a recent survey), later followed by an axiomatic approach of Zuo and Serfling (2000) and numerous further concepts, including the simplicial depth due to Liu (1988) and the zonoid depth defined by Koshevoy and Mosler (1997) and Mosler (2002). Molchanov and Turin (2021) showed that many such constructions arise from an application of a sublinear expectation to the projections of random vectors and interpreting the obtained function as the support function of the depth-trimmed region.

Motivated by financial applications, it is natural to associate with a vector a certain random closed convex set which describes all financial positions attainable from the value of the vector by admissible transactions. If all transactions are forbidden, this random set is X + R^d_−. The most classical nontrivial example arises when the components of a vector X represent different currencies, and the set of admissible positions is obtained by adding to the vector all points from a cone C which describes positions attainable from price zero, see Kabanov and Safarian (2009). In the financial setting the cone C always contains R^d_−, meaning that disposal or consumption of assets is always allowed. A number of works have addressed assessing the risk of X + C, which is the cone translated by X, see Hamel and Heyde (2010) and Molchanov and Cascos (2016). The key issue is to associate with X + C a closed convex set in R^d which describes its risk and regards X + C as acceptable if this set contains the origin. This set can be described as the family of all a ∈ R^d such that a + X + C is acceptable. Abstracting away from the financial interpretation, it is possible to associate with each point x from R^d its position in relation to X + C. If C = {0} is degenerate, then this is similar to considering the depth of x in relation to the distribution of the random vector X. In case of a nontrivial cone C, its orientation can be rather arbitrary and can address other features, not necessarily inherent to the financial setting.
This line of research was initiated by Hamel and Kostner (2018), who defined the cone distribution function F_{X,C}(z), z ∈ R^d, as the infimum of the probability content of closed half-spaces which pass through z and have outer normals from the polar cone to C. The excursion sets of the cone distribution function are termed the C-quantile function of X. There is a one-to-one correspondence between such set-valued quantile functions and the cone distribution function. A similar programme has been carried over by Hamel and Ha (2025) for expectiles replacing quantiles. The sublinearity property of expectiles established by Bellini et al. (2014) makes it possible to naturally preserve the convexity property instead of taking the convex hull involved in the constructions based on quantiles.

In this paper we extend the mentioned works of Andreas Hamel and his coauthors in three directions. First, instead of randomly translated cones, we allow for random closed convex sets X in R^d, not necessarily being a random translation of a deterministic cone. Second, instead of considering a set-valued version of quantiles or expectiles, we construct a set-valued version of a general gauge function g. A general gauge function is a law-determined real-valued mapping from the space of random variables, which is constant preserving, positively homogeneous, monotone and translation equivariant. Besides the prime examples of quantiles and expectiles, our definition of the gauge also comprises the average-value-at-risk or expected shortfall, an important risk measure in quantitative
risk management, or the gauges based on moments as defined by Fischer (2003). Third, we construct a conditional version of the set-valued gauge of the random set X. That means, for a sub-σ-algebra A of the σ-algebra of the underlying probability space, a gauge function g, and a random closed convex set X, we construct another random A-measurable closed convex set Y = G(X|A). In the unconditional case and if the random set is a deterministic cone translated by a random vector, our set-valued quantiles and expectiles yield the objects defined by Hamel and Kostner (2018) and Hamel and Ha (2025).

We first explain our construction without conditioning. Recall that a closed convex set F ⊆ R^d can be uniquely represented as the intersection of all half-spaces containing F defined via supporting hyperplanes of F with any normal vector w from the unit sphere S^{d−1}. That is,

F = ⋂_{w ∈ S^{d−1}} H_w(h(F, w)),

where

h(F, w) = sup{⟨w, x⟩ : x ∈ F},   w ∈ R^d,

is the support function of F (also called a scalarisation) and a generic half-space with normal vector w ∈ R^d is defined by

H_w(t) = {x ∈ R^d : ⟨x, w⟩ ≤ t},   t ∈ R.   (1.1)

Note that H_w(∞) = H_0(t) = R^d for any w ∈ R^d and t ∈ R. This approach is also applicable if a deterministic F is replaced by a compact convex random set X. In this case, the support function h(X, w) is a path-continuous random function of w from the unit sphere. The values h(X, w), w ∈ S^{d−1}, make it possible to describe X in terms of a collection of random variables, sometimes called scalarisations of X. We can thus apply the gauge function g (e.g., a quantile) to each of these scalarisations, considering g(h(X, w)), w ∈ S^{d−1}. Hence, we end up with the unconditional set-valued gauge, which is the closed random set

G(X) = ⋂_{w ∈ S^{d−1}} H_w(g(h(X, w))).   (1.2)

In order to build the corresponding conditional version, we first need to define the conditional scalar gauge function with respect to a sub-σ-algebra A. Since the gauge itself is law-determined, this can be achieved by applying the gauge to the conditional distribution. Hence, we replace the gauge of the scalarisation, g(h(X, w)), w ∈ S^{d−1}, in (1.2) by the conditional gauge of the scalarisation. This leads us to the random closed convex set

⋂_{w ∈ S^{d−1}} H_w(g(h(X, w)|A)).

While we deal with uncountable intersections, it is known that the largest A-measurable random closed convex subset of this intersection is well defined (Lépinette and Molchanov, 2019). This construction relies on letting the direction w be deterministic. This may lead to problems when X is unbounded and so the support function in some (possibly, random) directions becomes infinite. For instance, if X is a half-space with the normal having a non-atomic distribution, then h(X, w) = ∞ almost surely for each w ∈ S^{d−1}, so that X cannot be recovered from the values of the support function on any countable subset of S^{d−1}. We can address this problem by replacing the deterministic w ∈ S^{d−1} by all A-measurable random vectors. With this idea, our set-valued conditional gauge function is the largest A-measurable random closed convex set contained in the intersection of H_W(g(h(X, W)|A)) over all A-measurable W with values in S^{d−1}.

As the outcome of our construction, a random closed convex set is associated with a (possibly, random in the conditional setting) set which serves as a kind of depth-trimmed region for the random set. This construction is principally different from the one developed by Cascos et al. (2021), where a depth-trimmed region for a random set is a collection of sets. Following the works by Hamel and Heyde (2010), Hamel et al. (2011), Hamel et al. (2013), and Molchanov and Cascos (2016), in the special cases of subadditive or superadditive gauge functions and in the unconditional setting, Molchanov and Mühlemann (2021) systematically studied subadditive and superadditive functionals of random closed convex sets. The conditional setting calls for new tools, which are developed in the current paper. Working out these conditional versions makes it possible to calculate conditional risks given some information about the financial position or about the market. The importance of such a conditional approach in risk measurement has been emphasised in (McNeil et al., 2015, Chapter 9.2).

The paper is organised as follows. Section 2 introduces gauge functions and their conditional variants, and provides several examples of important gauge functions. Section 3 provides necessary background material on random closed convex sets and several concepts related to them. A representation theorem using the intersection of a countable number of random half-spaces is also proved. Section 4 describes the construction of a set-valued conditional gauge. Particular cases resulting from choosing some of the most important gauge functions are discussed in Section 5. In particular, choosing the essential infimum or essential supremum as the gauge leads to concepts close to the ideas of the conditional core and conditional convex hull developed by Lépinette and Molchanov (2019). Section 6 discusses conditional gauges for random singletons and thus provides conditional versions of some well known depth-trimmed regions. For instance, we obtain conditional quantiles, mentioned by Hallin et al. (2010), Kim et al. (2021) and Kim et al. (2021).
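For a random singleton X and the lower α-quantile as the gauge, the unconditional construction (1.2) can be approximated numerically on a grid of directions. The following sketch is our own illustration, not part of the paper: an empirical sample stands in for the distribution of X, and membership of a point in the resulting quantile region, a half-space depth-trimmed region, is checked direction by direction.

```python
import numpy as np

def in_quantile_region(z, sample, alpha, n_dir=360):
    """Approximate membership of z in G(X) = intersection over w of
    H_w(q_alpha(h(X, w))), where for a singleton X the scalarisation
    is h(X, w) = <w, X>, using a grid of directions w on the unit circle."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_dir, endpoint=False)
    W = np.column_stack([np.cos(theta), np.sin(theta)])   # directions on S^1
    proj = sample @ W.T                                   # h(X, w) for each direction
    q = np.quantile(proj, alpha, axis=0)                  # gauge applied per direction
    return bool(np.all(W @ np.asarray(z, dtype=float) <= q))
```

For α > 1/2 and a centred Gaussian sample, the origin lies in the region while distant points do not; refining the grid of directions tightens the outer approximation of the intersection.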
Section 7 concerns conditional gauges for (possibly, translated) random cones.

2 Conditional gauge functions

2.1 Gauge functions

Let (Ω, F, P) be a complete atomless probability space, and let us denote by Lp(R^d) the set of all p-integrable random vectors in R^d (actually, their a.s. equivalence classes), where p ∈ [1, ∞]. Endow Lp(R^d), p ∈ [1, ∞], with the σ(Lp, Lq)-topology based on the pairing of Lp(R^d) and Lq(R^d) with p^{−1} + q^{−1} = 1. Further, let L0(R^d) be the family of all random vectors with the topology induced by the convergence in probability, equivalently, E min(|X_n − X|, 1) → 0. These families become families of random variables if d = 1. For a sub-σ-algebra A of F, we write Lp(R^d; A) to indicate the set of all A-measurable random vectors in Lp(R^d), where p ∈ {0} ∪ [1, ∞]. Further, denote by Lp(R^d) the family of random vectors ξ which are representable as the sum ξ = ξ′ + ξ″, where ξ′ ∈ Lp(R^d) for p ∈ {0} ∪ [1, ∞] and ξ″ belongs to the family L0([0, ∞]^d) of random vectors whose components take values from the extended positive half-line. Similarly, for a sub-σ-algebra A of F, the family Lp(R^d; A) consists of all A-measurable elements of Lp(R^d).

We introduce the following concept of a gauge function. Denote by R̄ = [−∞, ∞] the extended real line.

Definition 2.1. A gauge function is a map g : Lp(R) → R̄, satisfying the following properties for all X, Y ∈ Lp(R):
(g1) law-determined: g(X) = g(Y) whenever X and Y share the same distribution;
(g2) constant preserving: g(c) = c for all c ∈ R;
(g3) positive homogeneity: g(cX) = c g(X) for all c ∈ (0, ∞);
(g4) monotonicity: g(X) ≤ g(Y) if X ≤ Y almost surely;
(g5) translation equivariance: g(X + c) = g(X) + c for all c ∈ R.

In some cases, we also impose the following optional properties:
(g6) Lipschitz property (if p ∈ [1, ∞]): |g(X) − g(Y)| ≤ ‖X − Y‖_p;
(g7) subadditivity: g(X + Y) ≤ g(X) + g(Y) if the right-hand side is well defined;
(g8) superadditivity: g(X + Y) ≥ g(X) + g(Y) if the right-hand side is well defined;
(g9) sensitivity with respect to infinity: P{X = ∞} > 0 implies that g(X) = ∞.

Note that in (g7) and (g8) the right-hand sides are well defined unless the expressions (∞ − ∞) or (−∞ + ∞) appear.

Remark 2.2. It suffices to replace (g2) by the requirement that g(0) is finite. Indeed, positive homogeneity (g3) implies that g(0) = g(c · 0) = c g(0) for all c ∈ (0, ∞). Therefore, g(0) ∈ {0, ±∞}. Assuming that g(0) is finite implies that g(0) = 0, and then the translation equivariance yields that g(c) = c for all c ∈ R. On the other hand, the translation equivariance implies for all c ∈ R that g(∞) = g(∞ + c) = g(∞) + c. Hence, g(∞) = ∞. Thus, (g3) and (g5) in combination with the finiteness of g(0) readily imply (g2).

Since the gauge is law-determined, it is actually a functional on distributions of random variables. In view of this, the monotonicity property (g4) holds if X is stochastically smaller than Y, meaning that the cumulative distribution functions satisfy F_X(t) ≥ F_Y(t) for all t ∈ R. Recall that X ≤ Y almost surely implies the first order stochastic dominance.

To ensure better analytical properties, it may be useful to require that gauge functions are lower semicontinuous in σ(Lp, Lq), that is,

g(X) ≤ liminf_{n→∞} g(X_n)

for each sequence {X_n, n ≥ 1} ⊆ Lp(R) converging to X ∈ Lp(R) in the σ(Lp, Lq)-topology if p ∈ [1, ∞], and converging in probability if p = 0. While such properties are very natural for subadditive gauges, checking and interpreting them becomes more complicated if subadditivity is not imposed. It is easy to see that the translation equivariance property implies the Lipschitz property of gauge functions defined on L∞(R). If the gauge is subadditive, it will be called a sublinear expectation and denoted by e (possibly, with some sub- and superscripts).
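Properties (g2)-(g5) can be sanity-checked numerically for the empirical lower quantile, which is a gauge by Definition 2.9 below. This is our own illustrative check, not part of the paper; the sample, seed, and tolerance choices are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(1000)
alpha = 0.75

def g(x):
    # empirical lower alpha-quantile: inf{t : P{X <= t} >= alpha},
    # realised as an order statistic via the inverted-cdf method
    return np.quantile(x, alpha, method="inverted_cdf")

checks = {
    "g2_constant_preserving": np.isclose(g(np.full(100, 3.0)), 3.0),
    "g3_positive_homogeneity": np.isclose(g(2.5 * X), 2.5 * g(X)),
    "g5_translation_equivariance": np.isclose(g(X + 1.7), g(X) + 1.7),
    # Y = X + |noise| dominates X pointwise, so g(X) <= g(Y)
    "g4_monotonicity": g(X) <= g(X + np.abs(rng.standard_normal(1000))),
}
```

Because the inverted-cdf quantile is an order statistic, positive scaling and translation act on it exactly, so (g3) and (g5) hold without numerical slack here.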
A monotone, translation equivariant, and superadditive gauge is termed a utility function or a superlinear gauge and is denoted by u. If g is a gauge function, its dual is defined by

X ↦ −g(−X),   X ∈ Lp(R).   (2.1)

Passing to the dual relates sublinear and superlinear gauges. A superadditive gauge satisfies all properties of a coherent utility function, see Delbaen (2012), where the input X stands for a gain. Its subadditive dual is a coherent risk measure in the sign convention of McNeil et al. (2015), where the input X stands for a loss.

2.2 Concatenation approach

For p ∈ {0} ∪ [1, ∞], denote by Mp the family of Borel probability measures on R̄ which correspond to distributions of random variables in Lp(R). The fact that the gauge function is law-determined means that g induces a functional T_g : Mp → R̄. We equip Mp with the smallest σ-algebra B(Mp) such that for any Borel set B ⊆ R̄ the evaluation map I_B : Mp → [0, 1] given by I_B(μ) = μ(B) is Borel measurable.

Let A ⊆ F be a sub-σ-algebra of F. Our aim is to define a conditional gauge function with values in L0(R̄; A), which inherits the corresponding properties of g and which coincides with g if A is trivial. Denote by P_{X|A} a regular version of the conditional distribution of X ∈ Lp(R) given A. Since for p ∈ [1, ∞] and X ∈ Lp(R) the negative part of X is p-integrable, we have that P_{X|A} ∈ Mp almost surely. For p = 0, it trivially holds that P_{X|A} ∈ M0 almost surely. The conditional gauge is defined following the construction described by Fissler and Holzmann (2022) via the concatenation g(X|A) := T_g(P_{X|A}). To make this approach mathematically meaningful, we need to assume that T_g : Mp → R̄ is B(Mp)/B(R̄)-measurable. This assumption holds for all practically relevant examples discussed in Section 2.3 due to the results of Fissler and Holzmann (2022). This measurability assumption directly yields the following result, establishing A-measurability of g(X|A). It follows from Lemma 2 of Fissler and Holzmann (2022).

Lemma 2.3. If T_g : Mp → R̄ is B(Mp)/B(R̄)-measurable, then T_g(P_{X|A}) is A/B(R̄)-measurable.

The construction of the conditional gauge can be summarised in the following proposition.

Proposition 2.4. Let g : Lp(R) → R̄, p ∈ {0} ∪ [1, ∞], be a gauge function such that T_g : Mp → R̄ is B(Mp)/B(R̄)-measurable. Then, for any sub-σ-algebra A ⊆ F, the conditional gauge function

g(·|A) : Lp(R) → L0(R̄; A),   X ↦ g(X|A) := T_g(P_{X|A}),

is such that for each X ∈ Lp(R), the conditional gauge g(X|A) exists, and it is an A-measurable random variable with values in R̄.

The properties of Definition 2.1 and the construction via concatenation from Proposition 2.4 directly imply the following properties.

Proposition 2.5. Under the assumptions of Proposition 2.4, the conditional gauge function satisfies the following properties for all X, Y ∈ Lp(R):
(i) constant preserving: g(X|A) = X a.s. if X is A-measurable;
(ii) positive homogeneity: g(γX|A) = γ g(X|A) a.s. for all A-measurable γ with γ ∈ L∞(R_+) if p ∈ [1, ∞] and γ ∈ L0(R_+) if p = 0;
(iii) law-determined: g(X|A) = g(Y|A) a.s. if P_{X|A} = P_{Y|A} almost surely;
(iv) monotonicity: g(X|A) ≤ g(Y|A) a.s. if X ≤ Y in the first order stochastic dominance conditionally on A (that is, almost surely F_{X|A}(t) ≥ F_{Y|A}(t) for all t ∈ R for the conditional c.d.f.s of X and Y given A), e.g., if X ≤ Y a.s.;
(v) conditional translation equivariance: g(X + Y|A) = g(X|A) + Y a.s. for all A-measurable Y if g(X|A) > −∞.

Remark 2.6. It is possible to extend the domain of the conditional gauge from Lp(R) to all X ∈ L0(R) such that P_{X|A} ∈ Mp almost surely. The simplest situation arises when X itself is A-measurable, so that the conditional distribution P_{X|A} becomes degenerate and thus an element of Mp for p ∈ {0} ∪ [1, ∞].
Hence, property (i) in Proposition 2.5can be extended tog(X|A) =Xa.s. for all X∈L0(R;A). Similarly, the positive homogeneity in (ii) holds for allγ∈L0([0,∞);A). Indeed, the conditional c.d.f. of γXgivenAis FγX|A(t;ω) =/braceleftBigg FX|A(t/γ(ω);ω),ifγ(ω)>0, 1[0,∞)(t), ifγ(ω) = 0, forP-almost all ω∈Ω. Moreover, the translation equivariance in (v) can be extended t o anyA-measurable Y. This argument appears also later in the construction of the gener alised conditional expectation. Further properties like sub- and superadditivity easily translate to conditional gauges. For the sensitivity with respect to infinity, (g9), we obtain the follow ing result. Lemma 2.7. Suppose the gauge function satisfies (g9) and the assumption s of Proposi- tion2.4are satisfied. Then, for X∈Lp(R), on the event/braceleftbig P{X=∞|A}>0/bracerightbig , the conditional gauge g(X|A)is∞almost surely. We close this subsection by a direct observation, which follows from t he fact that PX|A= PXalmost surely if Xis independent of A. Proposition 2.8. Under the assumptions of Proposition 2.4, it holds that g(X|A) =g(X) almost surely for all X∈Lp(R)which are independent of A. In particular, g(X|A) =g(X) almost surely for all X∈Lp(R)ifAis trivial. Following the approach of Guo et al. (2017), it is alternatively possible to consider a gauge function as a norm on the module of A-measurable random variables. However, then a conditional gauge is not necessarily law-determined. Furthermor e, we do not only work with sublinear gauges, as considered by Guo et al. (2017). 2.3 Examples Below we mention main examples of gauges and their conditional varian ts. Quantiles. These are gauges defined on L0(R). Definition 2.9. Forα∈(0,1], thelower quantile of a random variable X∈L0(R) is defined as q− α(X) := inf{t∈R:P{X≤t} ≥α}, and theupper quantile is defined for α∈[0,1) as q+ α(X) := inf{t∈R:P{X≤t}> α}. 8 Asusual, we set inf ∅:=∞andinfR:=−∞. Asshown
https://arxiv.org/abs/2504.13620v1
by Fissler and Holzmann (2022), quantile functionals are $\mathcal{B}(\mathcal{M}_p)/\mathcal{B}(\bar{\mathbb{R}})$-measurable. They satisfy properties (g1)–(g5), but not (g9). For $\alpha\in(0,1)$ they are neither sub- nor superadditive. We refer to de Castro et al. (2023) for an exhaustive overview of the properties of conditional quantiles. Even though conditional quantiles are introduced differently, Theorem 2.6 from the cited paper shows that they coincide with those obtained by our concatenation approach.

A special case of the lower quantile with $\alpha=1$ yields the essential supremum $\operatorname{esssup} X := q^-_1(X)$, which is subadditive. Letting $\alpha=0$ in the upper quantile yields the essential infimum $\operatorname{essinf} X := q^+_0(X)$, which is superadditive. These two functions are dual to each other. Moreover, it is easy to show that each gauge function satisfies
\[
\operatorname{essinf} X \le g(X) \le \operatorname{esssup} X.
\]
Alternatively to the concatenation approach, a construction of the conditional essential supremum (or infimum) can be found in Föllmer and Schied (2004, Appendix A.5).

Generalised expectation. Another typical choice for a gauge is the expectation, which is well defined on $L^1(\mathbb{R})$; it satisfies all properties (g1)–(g9), and the expectation functional is $\mathcal{B}(\mathcal{M}_p)/\mathcal{B}(\bar{\mathbb{R}})$-measurable due to Fissler and Holzmann (2022). The corresponding conditional gauge function is the well-known conditional expectation. As detailed in Remark 2.6, we can extend the domain of the conditional expectation to all $X\in L^0(\mathbb{R})$ such that $P_{X|\mathcal{A}}\in\mathcal{M}_1$ almost surely. This basically amounts to considering the generalised conditional expectation, see, e.g., Lépinette and Molchanov (2019, Appendix B).

Average quantiles. For a fixed value of $\alpha\in[0,1)$ and $X\in L^0(\mathbb{R})$, define the right-average quantile
\[
e_\alpha(X) := \frac{1}{1-\alpha}\int_\alpha^1 q^-_t(X)\,dt. \tag{2.2}
\]
Since $q^-_t(X)<q^+_t(X)$ only for at most countably many $t\in(0,1)$, it is immaterial if the integrand is the lower or the upper quantile function. If $X$ describes a loss, then $e_\alpha$ is known in quantitative risk management as the average value-at-risk, also termed the expected shortfall.
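For empirical distributions, the gauges above admit exact finite-sample formulas. The following minimal sketch (function names are ours, not from the paper) computes the lower quantile of Definition 2.9 and the right-average quantile (2.2) for a sample, the latter by integrating the piecewise-constant empirical quantile function:

```python
import math

def lower_quantile(sample, alpha):
    """Empirical lower quantile q^-_alpha(X) = inf{t : P{X <= t} >= alpha}."""
    xs = sorted(sample)
    k = max(math.ceil(alpha * len(xs)), 1)  # smallest k with k/n >= alpha
    return xs[k - 1]

def right_average_quantile(sample, alpha):
    """e_alpha(X) = (1-alpha)^{-1} * int_alpha^1 q^-_t(X) dt (expected
    shortfall), computed exactly: q^-_t equals the (i+1)-th order statistic
    for t in (i/n, (i+1)/n]."""
    xs = sorted(sample)
    n = len(xs)
    total = 0.0
    for i, x in enumerate(xs):
        a, b = max(i / n, alpha), (i + 1) / n
        if b > a:
            total += (b - a) * x
    return total / (1 - alpha)
```

For $\alpha=0$ the right-average quantile reduces to the sample mean, while $\alpha=1$ in the lower quantile recovers the essential supremum (the maximum), matching the limiting cases discussed above.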
It is a coherent risk measure and one of the most widely used risk measures in the financial industry and in regulation, see McNeil et al. (2015). If $X\in L^1(\mathbb{R})$, then $e_\alpha(X)$ is finite. The right-average quantiles satisfy properties (g1)–(g5), and subadditivity (g7) together with (g9). For $\alpha=0$ and $X\in L^1(\mathbb{R})$, we simply obtain the expectation.

The left-average quantile arises from averaging quantiles in the left tail. It is defined for $\alpha\in(0,1]$ and $X\in L^1(\mathbb{R})$ as
\[
u_\alpha(X) := \frac{1}{\alpha}\int_0^\alpha q^-_t(X)\,dt. \tag{2.3}
\]
We have the dual relationship $u_\alpha(X) = -e_{1-\alpha}(-X)$. The functional $u_\alpha(X)$ is a superadditive gauge, satisfying (g1)–(g5) and (g8), but not (g9). Measurability of $e_\alpha$ and $u_\alpha$ follows from Fissler and Holzmann (2022), so that their conditional versions are well defined.

Expectiles. Following Newey and Powell (1987), the $\tau$-expectile, $\tau\in(0,1)$, of a random variable $X\in L^1(\mathbb{R})$ is defined as the unique solution in $z\in\mathbb{R}$ of the equation
\[
\tau\, E\big[(X-z)_+\big] = (1-\tau)\, E\big[(X-z)_-\big],
\]
where $x_+=\max\{0,x\}$ and $x_-=\max\{0,-x\}$. For $\tau=1/2$, we retrieve the usual expectation. The expectile is finite on $L^1(\mathbb{R})$. It has been shown by Bellini et al. (2014) that expectiles satisfy properties (g1)–(g5), and that they are subadditive for $\tau\ge 1/2$ and superadditive for $\tau\le 1/2$. If $X=\infty$ with positive probability, we let the expectile take the value $\infty$, so that (g9) holds. The measurability property follows again from Fissler and Holzmann (2022).

$L^p$-norm based gauges. For $p\in[1,\infty)$, the $L^p$-norm is
defined on $L^p(\mathbb{R})$ as $\|X\|_p := \big(E|X|^p\big)^{1/p}$. This function is not translation equivariant and not necessarily monotone. It is possible to turn it into a gauge on $L^p(\mathbb{R})$ by letting
\[
e_{p,a}(X) := \begin{cases} EX + a\,\big(E(X-EX)_+^p\big)^{1/p}, & \text{if } X\in L^p(\mathbb{R}),\\ \infty, & \text{if } X=\infty \text{ with positive probability}, \end{cases} \tag{2.4}
\]
where $a\in[0,1]$. This gauge satisfies (g1)–(g5), (g6), (g7) and (g9). The $\mathcal{B}(\mathcal{M}_p)/\mathcal{B}(\bar{\mathbb{R}})$-measurability property follows as above.

Extensions of gauges. For each gauge, we define its maximum and minimum extensions as
\[
g_{\vee m}(X) = g\big(\max(X_1,\dots,X_m)\big), \tag{2.5}
\]
\[
g_{\wedge m}(X) = g\big(\min(X_1,\dots,X_m)\big), \tag{2.6}
\]
where $X_1,\dots,X_m$ are i.i.d. copies of $X$. The maximum extension preserves the sublinearity property and the minimum one preserves the superlinearity property. This construction may be used to obtain parametric families of gauges, as was done by Molchanov and Turin (2021) for sublinear expectations.

3 Random closed convex sets and related cones

For a nonempty set $F\subseteq\mathbb{R}^d$, the support function is defined as
\[
h(F,u) := \sup\big\{\langle u,x\rangle : x\in F\big\}, \qquad u\in\mathbb{R}^d.
\]
The support function of the empty set is defined to be $-\infty$. A support function can be identified by its subadditivity and positive homogeneity properties. These properties are summarised by saying that $h$ is sublinear. Each lower semicontinuous sublinear function is the support function of a closed convex set, see Schneider (2014). By homogeneity, it suffices to restrict the support function to the unit sphere $S^{d-1}$.

Recall that a generic half-space $H_w(t)$ with normal vector $w\in\mathbb{R}^d$ is defined at (1.1). Note that $H_w(\infty)=H_0(t)=\mathbb{R}^d$ for any $w\in\mathbb{R}^d$ and $t\in\mathbb{R}$. We let $H_w(-\infty)=\emptyset$. The support function of $H_w(t)$ is finite only at $u=cw$ for $w\neq 0$ and $c\ge 0$, and then takes the value $ct$. For a set $F\subseteq\mathbb{R}^d$, denote its polar by
\[
F^o := \big\{u\in\mathbb{R}^d : h(F,u)\le 1\big\}.
\]
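The support function and the polar can be computed directly for finite point sets (and hence for their convex hulls, since both notions depend only on the convex hull). A small sketch in Python, with names of our choosing, also illustrating the subadditivity of $h$ and the polarity criterion $h(F,u)\le 1$:

```python
def support(F, u):
    """h(F,u) = sup{<u,x> : x in F} for a finite set F in R^2; this equals
    the support function of the convex hull of F."""
    return max(u[0] * x[0] + u[1] * x[1] for x in F)

def in_polar(F, u):
    """u belongs to the polar F^o iff h(F,u) <= 1."""
    return support(F, u) <= 1.0

# The square [-1,1]^2; its polar is the cross-polytope {u : |u_1|+|u_2| <= 1}.
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
```

For the square, `support(square, (1.0, 1.0))` returns `2.0`, and subadditivity $h(F,u+v)\le h(F,u)+h(F,v)$ can be checked numerically for arbitrary directions.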
If $F=C$ is a convex cone, then the polar to $C$ is equivalently defined by
\[
C^o := \big\{u\in\mathbb{R}^d : h(C,u)\le 0\big\}.
\]
The barrier cone $B_F$ of $F$ consists of all $u\in\mathbb{R}^d$ such that $h(F,u)<\infty$. It is easy to see that $B_F$ is indeed a convex cone and that $F^o\subseteq B_F$. If $F=C$ is a cone, then $B_F=C^o$.

Denote by $\mathcal{F}^d_c$ the family of closed convex sets in $\mathbb{R}^d$. A map $X$ from the probability space $(\Omega,\mathcal{F},P)$ to $\mathcal{F}^d_c$ is said to be a random closed convex set if $\{\omega: X\cap K\neq\emptyset\}\in\mathcal{F}$ for all compact sets $K$, see Molchanov (2017). If $\{\omega: X\cap K\neq\emptyset\}\in\mathcal{A}$ for a sub-$\sigma$-algebra $\mathcal{A}\subseteq\mathcal{F}$ and all compact sets $K$, then $X$ is said to be $\mathcal{A}$-measurable. Equivalently, $X$ is a random closed convex set if and only if its support function $h(X,u)$ is a random function on $\mathbb{R}^d$, which may take infinite values if $X$ is not bounded. The only case when the support function takes the value $-\infty$ is when $X$ is empty.

By applying the definition of the barrier cone to realisations of $X$, we obtain the barrier cone of $X$,
\[
B_X := \big\{u\in\mathbb{R}^d : h(X,u)<\infty\big\}.
\]
If $X$ is a.s. compact, then $B_X=\mathbb{R}^d$. Note that $B_X$ is a random set itself, which is not necessarily closed: its values are so-called $F_\sigma$ sets, being countable unions of the random closed sets $\{u\in\mathbb{R}^d: h(X,u)\le n\}$, $n\ge 1$, see Molchanov (2017, Lemma 1.8.23). Measurability of $B_X$ is understood as graph measurability of the map $\omega\mapsto B_X(\omega)$, see Molchanov (2017, Sec. 1.3.6). By $\sigma(X)$ we denote the smallest $\sigma$-algebra which makes $X$ measurable, and by $\sigma(B_X)$ the smallest $\sigma$-algebra which makes $B_X$ graph measurable.

Example 3.1. Important examples of random sets are related to deterministic or random cones. In the following we denote a deterministic closed
convex cone by $C$ and a random one by $\mathbf{C}$. If $X=\mathbf{C}$, then $B_X=\mathbf{C}^o$, and so $B_X$ is a random closed convex set. Consider the random set $X+\mathbf{C}$, where $X$ is a random vector and $\mathbf{C}$ is a random closed convex cone in $\mathbb{R}^d$ which contains $(-\infty,0]^d$. In the financial setting, such a cone is called the set of portfolios available at price zero, see Kabanov and Safarian (2009). The polar cone $\mathbf{C}^o$ is said to be a cone of consistent price systems.

A random vector $\xi$ is called a selection of $X$ if $\xi\in X$ almost surely. Let $L^p(X)$ denote the family of $p$-integrable $\sigma(X)$-measurable selections of $X$ for $p\in[1,\infty)$, essentially bounded ones if $p=\infty$, and all selections if $p=0$. Sometimes it is convenient to consider selections which are measurable with respect to a larger $\sigma$-algebra than the one generated by $X$, i.e., such selections may involve extra randomisation. For instance, a deterministic convex set may have random selections. A point $x\in\mathbb{R}^d$ is said to be a fixed point of $X$ if $P\{x\in X\}=1$. Clearly, a fixed point is a selection of $X$.

If $L^p(X)$ is not empty, then $X$ is called $p$-integrable, shortly integrable if $p=1$. This is the case if $X$ is $p$-integrably bounded, that is,
\[
\|X\| := \sup\big\{\|x\| : x\in X\big\}
\]
is a $p$-integrable random variable (essentially bounded if $p=\infty$). If $p=0$, then the $p$-integrability of $X$ means that $X$ is almost surely nonempty; equivalently, $L^0(X)$ is not empty.

Each a.s. nonempty random closed set $X$ is the closure of a countable family of its selections $\{\xi_n, n\ge 1\}\subseteq L^0(X)$. This is called a Castaing representation of $X$, see Molchanov (2017, Definition 1.3.6). Let us stress that all members of a Castaing representation are assumed to be measurable with respect to the $\sigma$-algebra $\sigma(X)$ generated by $X$. If $X$ is $p$-integrable, then all selections in its Castaing representation can also be chosen to be $p$-integrable. A random closed set in $\mathbb{R}^d$ is said to be regular closed if it almost surely coincides with the closure of its interior.
If $X$ is regular closed and admits a fixed point, then it has a Castaing representation which consists of the random vectors
\[
\xi_n := u_n \mathbf{1}_{u_n\in X} + x\, \mathbf{1}_{u_n\notin X},
\]
where $\{u_n, n\ge 1\}$ is a countable dense set in $\mathbb{R}^d$ and $x$ is a fixed point of $X$.

If $X$ is integrable, then its selection expectation is defined by
\[
EX := \operatorname{cl}\big\{E\xi : \xi\in L^1(X)\big\}, \tag{3.1}
\]
which is the closure of the set of expectations of all integrable selections of $X$, see Molchanov (2017, Section 2.1.2). If $X$ is integrably bounded, then the closure on the right-hand side is not needed and $EX$ is compact. This expectation (and its conditional variant) can be equivalently defined using the support function, see Section 5.1, thus providing a dual construction of the selection expectation.

If $X$ a.s. attains compact values, then
\[
X = \bigcap_{n\ge 1} H_{w_n}\big(h(X,w_n)\big), \tag{3.2}
\]
where $\{w_n, n\ge 1\}$ is a countable dense set in $S^{d-1}$. This representation may fail if $X$ is unbounded with positive probability, e.g., if $X$ is a random half-space. Still, even if (3.2) fails, Lépinette and Molchanov (2019, Theorem 3.4) establishes that each random closed convex set satisfies
\[
X = \bigcap_{n\ge 1} H_{\eta_n}\big(h(X,\eta_n)\big), \tag{3.3}
\]
where $\{\eta_n, n\ge 1\}\subseteq L^0(\mathbb{R}^d)$ is a family of $\mathcal{F}$-measurable random vectors, whose choice may depend on $X$. By scaling with $\|\eta_n\|$ if $\eta_n\neq 0$ and letting $H_0(t)=\mathbb{R}^d$ if $\eta_n=0$, it is possible to assume that $\eta_n\in S^{d-1}$ a.s. for all $n$. The following result generalises this by showing that the $\eta_n$'s can be taken to be $\sigma(B_X)$-measurable, and even constants if $B_X$ is regular closed.

Theorem 3.2. Let $X$ be an
almost surely nonempty random closed convex set in $\mathbb{R}^d$. Then, for each Castaing representation $\{W_n, n\ge 1\}$ of $B_X\cap S^{d-1}$,
\[
X = \bigcap_{n\ge 1}\big\{x\in\mathbb{R}^d : \langle W_n, x\rangle \le h(X,W_n)\big\}. \tag{3.4}
\]
If $B_X$ is regular closed, then (3.4) holds with $W_n=w_n$, where $\{w_n, n\ge 1\}$ is a deterministic dense set in $S^{d-1}$.

Proof. Assume first that $0\in X$ a.s. Then
\[
\big\{u\in\mathbb{R}^d : h(X^o,u)>0\big\} = B_X\setminus\{0\}.
\]
Indeed, for $u\neq 0$, we have $h(X^o,u)>0$ if and only if $h(X,u)<\infty$, and so $u\in B_X$. In particular, $X^o\cap S^{d-1} = B_X\cap S^{d-1}$ a.s. Let $\{W_n, n\ge 1\}$ be a Castaing representation of $B_X\cap S^{d-1}$, which consists of $\sigma(B_X)$-measurable random vectors. Then $X^o$ is the closure of the union of $X^o\cap\{W_n t : t\ge 0\}$ for $n\ge 1$. For each $n\ge 1$, let $\{W_n\zeta_{nm}, m\ge 1\}$ be a Castaing representation of $X^o\cap\{W_n t : t\ge 0\}$. Note that $\zeta_{nm}$ is $\mathcal{F}$-measurable for all $n$ and $m$. By the bipolar theorem, $X=(X^o)^o$. Hence, $X$ is the polar set to $\{W_n\zeta_{nm},\ m,n\ge 1\}$, meaning that
\[
X = \bigcap_{n,m\ge 1}\big\{x\in\mathbb{R}^d : \langle W_n\zeta_{nm}, x\rangle \le 1\big\} = \bigcap_{n,m\ge 1}\big\{x\in\mathbb{R}^d : \langle W_n\zeta_{nm}, x\rangle \mathbf{1}_{\zeta_{nm}>0} \le 1\big\}.
\]
Since $W_n\zeta_{nm}\in X^o$ a.s., we have $h(X, W_n\zeta_{nm})\le 1$ a.s. Hence,
\[
X \supseteq \bigcap_{n,m\ge 1}\big\{x\in\mathbb{R}^d : \langle W_n\zeta_{nm}, x\rangle \le h(X, W_n\zeta_{nm})\big\}.
\]
Since $\zeta_{nm}>0$ a.s.,
\[
X \supseteq \bigcap_{n\ge 1}\big\{x\in\mathbb{R}^d : \langle W_n, x\rangle \le h(X, W_n)\big\}.
\]
The opposite inclusion is obvious.

Assume now that $B_X$ is regular closed, so that $B_X\cap S^{d-1}$ is regular closed in the relative topology of $S^{d-1}$. Then $W_n := w_n\zeta_{nm}\mathbf{1}_{w_n\in B_X}\mathbf{1}_{\zeta_{nm}>0}$, $n\ge 1$, is a Castaing representation of $X^o$ and the above argument applies.

If $X$ does not necessarily contain the origin, let $Y=X-\xi$ for an arbitrary $\xi\in L^0(X)$. Note that the barrier cone of $Y$ coincides with the barrier cone of $X$, so that their Castaing representations are identical.
Then
\[
Y = \bigcap_{n\ge 1}\big\{y\in\mathbb{R}^d : \langle W_n, y\rangle \le h(Y,W_n)\big\} = \bigcap_{n\ge 1}\big\{y\in\mathbb{R}^d : \langle W_n, y+\xi\rangle \le h(X,W_n)\big\} = \bigcap_{n\ge 1}\big\{x\in\mathbb{R}^d : \langle W_n, x\rangle \le h(X,W_n)\big\} - \xi.
\]
Finally, note that $x\in X$ if and only if $x-\xi\in Y$.

If $X$ and $Y$ are two random closed convex sets, then $X\subseteq Y$ a.s. if $h(X,W)\le h(Y,W)$ a.s. for all $W\in L^0(\mathbb{R}^d)$, see Lépinette and Molchanov (2019, Corollary 3.6), or equivalently, by rescaling, for all $W\in L^0(S^{d-1})$. The following result shows that in some cases it is possible to reduce the choice of $W$ to obtain the same characterisation.

Lemma 3.3. Let $X$ and $Y$ be two random closed convex sets. Assume that $Y$ is a random compact convex set or that $B_Y$ is regular closed. Then $X\subseteq Y$ a.s. if and only if $h(X,w_n)\le h(Y,w_n)$ a.s. for any countable dense set $\{w_n, n\ge 1\}\subseteq S^{d-1}$.

Proof. Under the imposed conditions, (3.2) holds for $Y$ and implies that
\[
Y = \bigcap_{n\ge 1}\big\{x\in\mathbb{R}^d : \langle w_n, x\rangle \le h(Y,w_n)\big\} \supseteq \bigcap_{n\ge 1}\big\{x\in\mathbb{R}^d : \langle w_n, x\rangle \le h(X,w_n)\big\} \supseteq X \quad\text{a.s.}
\]

4 Conditional set-valued gauge functions

Fix a $p\in\{0\}\cup[1,\infty]$. Let $X$ be a $p$-integrable random closed convex set, which is then almost surely nonempty. For each $W\in L^0(\mathbb{R}^d)$, the support function $h(X,W)$ is a random variable with values in $(-\infty,\infty]$, see Lépinette and Molchanov (2019, Lemma 3.1). While $h(X,W)$ is not necessarily integrable, its negative part is always integrable if $X$ is $p$-integrable and $W\in L^q(\mathbb{R}^d)$, where $1/p+1/q=1$. Indeed, choose any $\xi\in L^p(X)$ and write
\[
h(X,W) = h(X-\xi, W) + \langle \xi, W\rangle.
\]
The second summand on the right-hand side is integrable, while the first one is nonnegative. Thus, $h(X,W)^-\in L^1(\mathbb{R})$.

Consider a gauge function $g$ satisfying the conditions of Proposition 2.4.
For each $W\in L^\infty(\mathbb{R}^d)$, the random variable $h(X,W)$ belongs to $L^p(\mathbb{R})$, so that its conditional gauge $g(h(X,W)|\mathcal{A})$ is well defined (up to an almost sure equivalence). Then $H_W\big(g(h(X,W)|\mathcal{A})\big)$ is a random half-space, which becomes the whole space if $g(h(X,W)|\mathcal{A})=\infty$, and is empty if $g(h(X,W)|\mathcal{A})=-\infty$. Note that this half-space does not change if $W$ is scaled, that is, if $W$ is replaced by $\gamma W$ for a random variable $\gamma\in L^\infty\big((0,\infty);\mathcal{A}\big)$.

Theorem 4.1. Fix a $p\in\{0\}\cup[1,\infty]$. Let $X$ be a $p$-integrable random closed convex set, and let $\mathcal{A}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. There exists the largest (in the inclusion
order) $\mathcal{A}$-measurable random closed convex set $Y$ such that
\[
h(Y,W) \le g\big(h(X,W)|\mathcal{A}\big) \quad\text{a.s.} \tag{4.1}
\]
for all $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$.

Proof. Consider
\[
Y' := \bigcap_{W\in L^\infty(\mathbb{R}^d;\mathcal{A})} H_W\big(g(h(X,W)|\mathcal{A})\big). \tag{4.2}
\]
By construction, $Y'$ has closed realisations, so it is a random closed set in a $\sigma$-algebra chosen to be suitably rich. By Lépinette and Molchanov (2019, Lemma 4.3), there exists the largest $\mathcal{A}$-measurable random closed set $Y$ such that $Y\subseteq Y'$ almost surely (it is called the measurable version of $Y'$). Recall that the probability space is assumed to be complete. By construction, $Y$ satisfies (4.1). Assume that $Y$ is not the largest set which satisfies (4.1). Then there exists an $\mathcal{A}$-measurable random vector $\xi$ such that $\xi\notin Y$ with positive probability and $Z=Y\cup\{\xi\}$ satisfies (4.1). However, then $Y'\cup\{\xi\}$ is a subset of the right-hand side of (4.2). Hence, $Y\cup\{\xi\}$ is an $\mathcal{A}$-measurable subset of $Y'$, which is a contradiction unless $\xi\in Y$ a.s.

The set $Y$ from Theorem 4.1 is denoted by $G(X|\mathcal{A})$ and is called a conditional set-valued gauge of $X$ given $\mathcal{A}$. The function $G(\cdot|\mathcal{A})$ is said to be the set-valued extension of the (scalar) conditional gauge $g(\cdot|\mathcal{A})$. We replace $\mathcal{A}$ in our notation by a random variable if $\mathcal{A}$ is generated by it.

Lemma 4.2. Condition (4.1) can equivalently be imposed for all $W\in L^0(S^{d-1};\mathcal{A})$ or for all $W\in L^0(\mathbb{R}^d;\mathcal{A})$.

Proof. If $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$, it can be represented as the product of $\|W\|$ and a random vector $W'$ such that $W'\neq 0$ a.s. For this, one may need to redefine $W$ in a measurable way outside of the event $\{W=0\}$. It remains to notice that it is possible to take $\|W\|$ as a factor out of both sides of (4.1), due to the homogeneity property, see Proposition 2.5(ii).

Remark 4.3. The conditional translation equivariance property (see Proposition 2.5(v)) implies that (4.1) can be equivalently written as
\[
g\big(h(X,W)-h(Y,W)\,\big|\,\mathcal{A}\big) \ge 0 \quad\text{a.s.}
\]
for all $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$, and (4.2) as
\[
Y' = \bigcap_{W\in L^\infty(\mathbb{R}^d;\mathcal{A})} \big\{x\in\mathbb{R}^d : g\big(h(X,W)-\langle x,W\rangle\,\big|\,\mathcal{A}\big)\ge 0 \text{ a.s.}\big\}. \tag{4.3}
\]
The latter expression yields an $\mathcal{A}$-measurable random closed convex set even for intersections taken over all $W$ which belong to $L^0(\mathbb{R}^d;\mathcal{F})$, that is, for those which are not necessarily measurable with respect to $\mathcal{A}$. This approach was pursued by Lépinette and Molchanov (2019) for the gauges being the expectation, essential supremum and essential infimum. Computing the intersection in (4.3) over all (not necessarily $\mathcal{A}$-measurable) $W$ may be complicated in the case of further gauges. Moreover, it may result in smaller sets, as the following example illustrates.

Example 4.4. Let $X=H_U(0)$ be a half-space with unit outer normal $U\in L^0(S^{d-1})$ such that the distribution of $U$ is non-atomic. Note that in this case $B_X$ is the line $\{tU : t\ge 0\}$, which is not regular closed; in particular, Lemma 3.3 is not applicable. For any fixed direction $w\in S^{d-1}$, the support function $h(X,w)$ is infinite almost surely, so that $g(h(X,w))$ is infinite too. Hence, the intersection of $H_w\big(g(h(X,w)|\mathcal{A})\big)$ over all $w\in S^{d-1}$ is the whole space.

If $\mathcal{A}$ is generated by $U$, then the right-hand side of (4.2) is nontrivial and equal to $H_U(0)$. However, if $\mathcal{A}$ is independent of $U$, then $h(X,W)=\infty$ a.s. for all $\mathcal{A}$-measurable $W$. In this case, $g(h(X,W)|\mathcal{A})=\infty$ a.s., and so $G(X|\mathcal{A})$ becomes the whole space. However, taking the intersection in (4.3) over all $\mathcal{F}$-measurable direction vectors yields the random set
\[
\bigcap_{V\in L^\infty(\mathbb{R}^d;\mathcal{F})}\big\{x\in\mathbb{R}^d : g\big(h(X,V)-\langle x,V\rangle\,\big|\,\mathcal{A}\big)\ge 0 \text{ a.s.}\big\} \subseteq \big\{x\in\mathbb{R}^d : g\big(h(X,U)-\langle x,U\rangle\,\big|\,\mathcal{A}\big)\ge 0 \text{ a.s.}\big\} = \big\{x\in\mathbb{R}^d : -\langle x,U\rangle\ge 0 \text{ a.s.}\big\} = H_U(0) = X,
\]
which is not the whole space.

Theorem 4.5. Fix a $p\in\{0\}\cup[1,\infty]$.
The conditional gauge $G(\cdot|\mathcal{A})$ is a map from the family of almost surely nonempty ($p$-integrable) random closed convex sets to the family of $\mathcal{A}$-measurable random closed convex sets which is

(G1) law-determined, that is, $G(X|\mathcal{A})=G(X'|\mathcal{A})$ if $X$ and $X'$ have the same conditional distribution given $\mathcal{A}$;

(G2) constant preserving: $G(X|\mathcal{A})=X$ a.s. if $X$ is $\mathcal{A}$-measurable;

(G3) positively homogeneous: $G(\Gamma X|\mathcal{A})=\Gamma\, G(X|\mathcal{A})$ a.s. for all invertible $d\times d$ matrices $\Gamma$ with entries from $L^0([0,\infty);\mathcal{A})$;

(G4) monotone: $G(X|\mathcal{A})\subseteq G(X'|\mathcal{A})$ a.s. if $X\subseteq X'$ a.s. for two $p$-integrable random closed sets $X$ and $X'$;

(G5) translation equivariant: $G(X+Z|\mathcal{A})=G(X|\mathcal{A})+Z$ a.s. for all $\mathcal{A}$-measurable $p$-integrable random closed convex sets $Z$ if $G(X|\mathcal{A})$ is not empty.

Proof. (G1) follows from the fact that the conditional gauges are law-determined and that $h(X,W)$ and $h(X',W)$ share the same distribution for any $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$.

(G2) If $X$ is $\mathcal{A}$-measurable, then for $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$ it holds that $g(h(X,W)|\mathcal{A})=h(X,W)$, so that the set in (4.2) equals $X$ a.s.

(G3) Note that $h(\Gamma X,W)=h(X,\Gamma^\top W)$ for any $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$. Thus, for $Y=G(\Gamma X|\mathcal{A})$ it holds that $h(Y,W)\le g(h(\Gamma X,W)|\mathcal{A})$ if and only if
\[
h(\Gamma^{-1}Y, V) = h\big(Y, (\Gamma^{-1})^\top V\big) \le g\big(h(X,V)|\mathcal{A}\big)
\]
for $V=\Gamma^\top W$. Since $\Gamma$ is invertible, the set $L^0(\mathbb{R}^d;\mathcal{A})$ and its image under $\Gamma$ coincide. Thus, $\Gamma^{-1}G(\Gamma X|\mathcal{A}) = G(X|\mathcal{A})$.

(G4) follows from the fact that the set constructed by the right-hand side of (4.2) for $X$ is a subset of the one constructed for $X'$.

(G5) follows from the fact that for any $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$,
\[
g\big(h(X+Z,W)|\mathcal{A}\big) = g\big(h(X,W)+h(Z,W)|\mathcal{A}\big) = g\big(h(X,W)|\mathcal{A}\big) + h(Z,W).
\]

Remark 4.6 (Unconditional set-valued gauge). If $\mathcal{A}$ is trivial, then (4.1) becomes $h(Y,w)\le g(h(X,w))$ for all deterministic $W=w\in S^{d-1}$, and so the unconditional gauge is given by
\[
G(X) = \bigcap_{w\in S^{d-1}} H_w\big(g(h(X,w))\big). \tag{4.4}
\]
Since the right-hand side is deterministic, $G(X)$ is a deterministic closed convex set. Such a set is called the Wulff shape associated with the function $g(h(X,w))$, $w\in S^{d-1}$, see Schneider (2014, Section 7.5).
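Representation (4.4) suggests a direct numerical approximation of the unconditional gauge: discretise $S^{d-1}$ and intersect the corresponding half-spaces; the discretised intersection is a superset of $G(X)$. A sketch (all names are ours, not from the paper) for $d=2$ with $X=\xi+rB$, where $\xi$ is uniform on a finite set, $B$ is the unit disk and $g$ is the expectation, in which case $G(X)$ is the ball of radius $r$ centred at $E\xi$:

```python
import math

def directions(m=360):
    """m equally spaced unit vectors discretising S^1."""
    return [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m))
            for k in range(m)]

def in_wulff(x, supp_gauge, dirs):
    """x lies in the discretised intersection (4.4) iff <x,w> <= g(h(X,w))
    for every sampled direction w."""
    return all(x[0] * w[0] + x[1] * w[1] <= supp_gauge(w) + 1e-9 for w in dirs)

# X = xi + r*B with xi uniform on four atoms; h(X,w) = <xi,w> + r for unit w,
# so the expectation gauge gives g(h(X,w)) = <E xi, w> + r and G(X) = E xi + r*B.
atoms = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
r = 1.0

def supp_gauge(w):
    return sum(p[0] * w[0] + p[1] * w[1] + r for p in atoms) / len(atoms)
```

Here $E\xi=(1,1)$, so points within distance $r$ of $(1,1)$ pass the test, while points outside fail for some sampled direction.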
If $g$ is sublinear and $g(h(X,w))$ is lower semicontinuous in $w$, then we have $h(G(X),w) = g(h(X,w))$ for all $w\in S^{d-1}$. In this case, $G(X)$ becomes the sublinear expectation of $X$, studied by Molchanov and Mühlemann (2021).

Proposition 4.7. If the conditional gauge $g(\cdot|\mathcal{A})$ is superadditive, then its set-valued extension $G(\cdot|\mathcal{A})$ satisfies
\[
G(X'+X''|\mathcal{A}) \supseteq G(X'|\mathcal{A}) + G(X''|\mathcal{A}).
\]

Proof. Superadditivity of $G(\cdot|\mathcal{A})$ follows from the fact that for all $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$,
\[
g\big(h(X'+X'',W)|\mathcal{A}\big) = g\big(h(X',W)+h(X'',W)|\mathcal{A}\big) \ge g\big(h(X',W)|\mathcal{A}\big) + g\big(h(X'',W)|\mathcal{A}\big).
\]
Thus, $G(X'+X''|\mathcal{A})$ contains the sum of the largest random closed convex sets whose support functions are dominated by $g(h(X',W)|\mathcal{A})$ and $g(h(X'',W)|\mathcal{A})$, which are $G(X'|\mathcal{A})$ and $G(X''|\mathcal{A})$, respectively.

Remark 4.8 (Set-valued utility). Proposition 4.7 implies that if $g(\cdot|\mathcal{A})$ is superadditive, then $G(\cdot|\mathcal{A})$ becomes a conditional set-valued utility function, which can be used as an acceptance criterion for set-valued portfolios. Namely, $X$ is said to be conditionally acceptable if $0\in G(X|\mathcal{A})$. Then $0\in G(X'|\mathcal{A})$ and $0\in G(X''|\mathcal{A})$ for two random closed convex sets $X'$ and $X''$ imply that $0\in G(X'+X''|\mathcal{A})$, meaning that $X'+X''$ is also conditionally acceptable.

The following examples work for any choice of the gauge function $g$ which admits a conditional version.

Example 4.9. Assume that $X:=\xi Z$, where $\xi\ge 0$ is independent of $\sigma(Z)$. Then
\[
g\big(h(X,W)|Z\big) = g\big(h(Z,W)\,\xi\,\big|\,Z\big) = g(\xi|Z)\, h(Z,W).
\]
Thus, $G(X|Z) = g(\xi|Z)\, Z$.

Example 4.10. Let $X:=(-\infty,V_1]\times(-\infty,V_2]$ be the quadrant in $\mathbb{R}^2$ with its upper right corner at $(V_1,V_2)$. Let $\mathcal{A}$ be generated by $V_2$. If $w=(w_1,w_2)\in\mathbb{R}^2_+$, then
\[
h(X,w) = w_1 V_1 + w_2 V_2,
\]
and the support function is infinite for all other $w\in\mathbb{R}^2$. For any gauge $g$, since $W\in L^\infty(\mathbb{R}^2_+;\mathcal{A})$,
\[
g\big(h(X,W)|V_2\big) = W_1\, g(V_1|V_2) + W_2 V_2.
\]
Thus, $G(X|V_2) = \big(-\infty, g(V_1|V_2)\big]\times\big(-\infty, V_2\big]$.

Example 4.11. Let $X:=[0,V_1]\times[0,V_2]$ be the rectangle in $\mathbb{R}^2$ with its upper right corner at $(V_1,V_2)$. Let $\mathcal{A}$ be generated by $V_2$. If $W=(W_1,W_2)$, then
\[
h(X,W) = \begin{cases} 0, & W_1\le 0,\ W_2\le 0,\\ W_1V_1+W_2V_2, & W_1>0,\ W_2>0,\\ W_1V_1, & W_1>0,\ W_2\le 0,\\ W_2V_2, & W_1\le 0,\ W_2>0. \end{cases}
\]
Since $W$ is assumed
to be $\mathcal{A}$-measurable,
\[
g\big(h(X,W)|\mathcal{A}\big) = \begin{cases} 0, & W_1\le 0,\ W_2\le 0,\\ W_1\, g(V_1|V_2)+W_2V_2, & W_1>0,\ W_2>0,\\ W_1\, g(V_1), & W_1>0,\ W_2\le 0,\\ W_2V_2, & W_1\le 0,\ W_2>0. \end{cases}
\]
Thus, $G(X|V_2)$ is the rectangle $\big[0, g(V_1|V_2)\big]\times\big[0, V_2\big]$.

5 Special cases of set-valued gauges

While we write $G(X|\mathcal{A})$ for a generic conditional set-valued gauge, we use special notation for the most important gauge functions to denote their set-valued variants. For example, $e_\alpha(X|\mathcal{A})$ denotes the set-valued gauge constructed from the right-average quantile $e_\alpha$.

5.1 Generalised conditional expectation

Let $g(\cdot|\mathcal{A})$ be the generalised conditional expectation, which is defined on $L^1(\mathbb{R})$. Then $G(X|\mathcal{A})$ is the generalised conditional expectation of $X$ given $\mathcal{A}$, see Molchanov (2017, Definition 2.1.79). Indeed, the generalised conditional expectation $E(X|\mathcal{A})$ satisfies
\[
h\big(E(X|\mathcal{A}), W\big) = E\big(h(X,W)|\mathcal{A}\big) \quad\text{a.s.} \tag{5.1}
\]
for all $\mathcal{A}$-measurable $W$. For deterministic $W$, this fact is well known (see, e.g., Molchanov (2017, Theorem 2.1.72)); for random $W$ it is proved in Lépinette and Molchanov (2019, Lemma 6.7). Thus, the generalised conditional expectation is the largest $\mathcal{A}$-measurable random closed set $Y$ which satisfies $h(Y,W)\le E(h(X,W)|\mathcal{A})$ a.s. for all $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$, meaning that (4.1) is satisfied with equality. If the left-hand side of (5.1) is finite, then necessarily $W\in B_X$. Hence, $B_{E(X|\mathcal{A})}\subseteq B_X$. The inverse inclusion does not always hold.

If we consider gauges given by the maximum and minimum extensions of the expectation defined at (2.5) and (2.6), then we obtain the set-valued gauges given by $E\big(X_1\cup\dots\cup X_m|\mathcal{A}\big)$ and $E\big(X_1\cap\dots\cap X_m|\mathcal{A}\big)$, respectively, where $X_1,\dots,X_m$ are i.i.d. copies of $X$. The intersection-based gauge is nonempty only if $X_1\cap\dots\cap X_m\neq\emptyset$ with probability one.

5.2 Conditional quantiles

Assume now that $g=q^-_\alpha$ is the lower $\alpha$-quantile.
Then $q^-_\alpha(X|\mathcal{A})$ is the largest closed convex set such that its support function is dominated by $q^-_\alpha(h(X,W))$ for all $W\in L^0(S^{d-1};\mathcal{A})$.

The set-valued extension $\operatorname{essinf}(X|\mathcal{A})$ of the conditional essential infimum is the largest $\mathcal{A}$-measurable random closed convex set $Y$ such that
\[
h(Y,W) \le \operatorname{essinf}\big(h(X,W)|\mathcal{A}\big) \quad\text{a.s.}
\]
for all $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$. A similar concept is the conditional core $m(X|\mathcal{A})$ of $X$, which is defined by Lépinette and Molchanov (2019) as the largest $\mathcal{A}$-measurable random closed convex subset of $X$, that is, $h(m(X|\mathcal{A}),V)\le h(X,V)$ for all $\mathcal{F}$-measurable $V$. By Theorem 4.12 from Lépinette and Molchanov (2019),
\[
m(X|\mathcal{A}) = \bigcap_{V\in L^\infty(\mathbb{R}^d)} H_V\big(\operatorname{essinf}(h(X,V)|\mathcal{A})\big),
\]
where the intersection is taken over all $\mathcal{F}$-measurable $V$, not necessarily those which are $\mathcal{A}$-measurable, cf. (4.3), where the intersection is taken over all $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$. Thus,
\[
m(X|\mathcal{A}) \subseteq \operatorname{essinf}(X|\mathcal{A}),
\]
and the inclusion may be strict, as Example 4.4 shows. This inclusion shows that $\operatorname{essinf}(X|\mathcal{A})$ is not empty if the conditional core is not empty, which is the case if $X$ admits an $\mathcal{A}$-measurable selection.

In the unconditional case, $m(X)$ is the set of fixed points of $X$. Still, $\operatorname{essinf}(X)$ may be different: for example, if $X$ is the random half-space from Example 4.4, then $\operatorname{essinf} h(X,u)=\infty$ for all deterministic $u$, and so $\operatorname{essinf}(X)=\mathbb{R}^d$, while $m(X)=\{0\}$. On the other hand, by Lemma 3.3, we have $m(X)=\operatorname{essinf}(X)$ if $B_X$ is regular closed. Indeed, then $m(X)$ is the largest deterministic set such that $h(m(X),w)\le h(X,w)$ a.s. for all $w\in S^{d-1}$, which is the same as $h(m(X),w)\le \operatorname{essinf} h(X,w)$. The latter inequality identifies $\operatorname{essinf}(X)$.

The conditional convex hull $M(X|\mathcal{A})$ of $X$ is the smallest (in the sense of inclusion) $\mathcal{A}$-measurable random closed convex set which a.s. contains $X$, see Lépinette and Molchanov (2019, Definition 5.1). It
follows from Lépinette and Molchanov (2019, Theorem 5.2) that the support function of $M(X|\mathcal{A})$ equals $\operatorname{esssup} h(X,W)$ for all $W\in L^0(S^{d-1};\mathcal{A})$. If $g$ is the essential supremum, then the right-hand side of (4.2) equals $M(X|\mathcal{A})$, meaning that
\[
M(X|\mathcal{A}) = \operatorname{esssup}(X|\mathcal{A}). \tag{5.2}
\]
This equality is due to the fact that the essential supremum is sublinear. For a general gauge, we always have
\[
\operatorname{essinf}(X|\mathcal{A}) \subseteq G(X|\mathcal{A}) \subseteq \operatorname{esssup}(X|\mathcal{A}).
\]

5.3 Average quantiles

The conditional right-average quantile $e_\alpha(\cdot|\mathcal{A})$ is subadditive. Then $e_\alpha(h(X,W)|\mathcal{A})$ is a sublinear function of $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$. The conditional left-average quantile $u_\alpha$ is superlinear. If $Y=u_\alpha(X|\mathcal{A})$, then its width in direction $w\in S^{d-1}$ is bounded from above by
\[
h(Y,w)+h(Y,-w) \le u_\alpha\big(h(X,w)|\mathcal{A}\big) + u_\alpha\big(h(X,-w)|\mathcal{A}\big) \le u_\alpha\big(h(X,w)+h(X,-w)\,\big|\,\mathcal{A}\big).
\]
Thus, the width of $Y$ is at most the conditional left-average quantile of the width of $X$.

6 Random singletons and conditional depth-trimmed regions

Let $X\in L^p(\mathbb{R}^d)$. By definition, $Y=G(\{X\}|\mathcal{A})$ is the largest $\mathcal{A}$-measurable random closed convex set such that
\[
h(Y,W) \le g\big(\langle X,W\rangle\,\big|\,\mathcal{A}\big)
\]
for all $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$. If $g$ is the expectation and $X\in L^1(\mathbb{R}^d)$, then $G(\{X\}|\mathcal{A})=\{E(X|\mathcal{A})\}$.

Example 6.1 (Unconditional setting with sublinear gauges). Fix a $p\in[1,\infty]$ and assume that $g: L^p(\mathbb{R})\to\bar{\mathbb{R}}$ is subadditive and lower semicontinuous. If $w_n\to w$, then $\langle X,w_n\rangle$ converges to $\langle X,w\rangle$ in $\sigma(L^p,L^q)$, so that the function $g(\langle X,w\rangle)$ is lower semicontinuous. Since $g$ is subadditive, this function is sublinear in $w$. Indeed, it is clearly homogeneous and
\[
g\big(\langle X,w+v\rangle\big) = g\big(\langle X,w\rangle + \langle X,v\rangle\big) \le g\big(\langle X,w\rangle\big) + g\big(\langle X,v\rangle\big).
\]
Thus, there exists a convex body $G(\{X\})$ such that $h(G(\{X\}),w) = g(\langle X,w\rangle)$ for all $w\in\mathbb{R}^d$, see Schneider (2014). This way of associating multivariate distributions with convex bodies was suggested by Molchanov and Turin (2021), where examples and properties of this construction can be found.

For a random vector $X=(X_1,\dots,X_d)\in L^p(\mathbb{R}^d)$, we denote
\[
g(X|\mathcal{A}) = \big(g(X_1|\mathcal{A}),\dots,g(X_d|\mathcal{A})\big).
\]

Proposition 6.2. Let $X=\{X\}$ be a singleton with $X=(X_1,\dots,X_d)\in L^p(\mathbb{R}^d)$. Then the following holds.

(i) If the gauge $g$ is superadditive, then $G(\{X\}|\mathcal{A})\subseteq\{g(X|\mathcal{A})\}$ almost surely.

(ii) If the gauge $g$ is subadditive, then $G(\{X\}|\mathcal{A})\subseteq g(X|\mathcal{A})+\mathbb{R}^d_-$ a.s.

Proof. (i) If the conditional gauge is superadditive, then the width of $Y=G(\{X\}|\mathcal{A})$ in direction $W$ satisfies
\[
h(Y,W)+h(Y,-W) \le g\big(\langle X,W\rangle|\mathcal{A}\big) + g\big(\langle X,-W\rangle|\mathcal{A}\big) \le g(0|\mathcal{A}) = 0
\]
for all $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$. Thus, $G(\{X\}|\mathcal{A})$ is a.s. a singleton $\{Y\}$ or it is empty. Assuming the former, the components $Y_i$ of $Y=(Y_1,\dots,Y_d)$ are given by $g(X_i|\mathcal{A})$.

(ii) If the conditional gauge is subadditive, then
\[
g\big(\langle X,W\rangle|\mathcal{A}\big) \le \big\langle g(X|\mathcal{A}), W\big\rangle
\]
for all $W\in L^\infty(\mathbb{R}^d_+;\mathcal{A})$. Note here that the non-negativity of the components of $W$ is essential to apply the subadditivity property. Then $h(Y,W)\le\langle g(X|\mathcal{A}),W\rangle$ for all $W\in L^\infty(\mathbb{R}^d_+;\mathcal{A})$, so that $Y\subseteq g(X|\mathcal{A})+\mathbb{R}^d_-$.

The proof for the superadditive case actually implies that $G(\{X\}|\mathcal{A})$ is empty if
\[
g\big(\langle X,W\rangle|\mathcal{A}\big) + g\big(\langle X,-W\rangle|\mathcal{A}\big) < 0
\]
for at least one $W\in L^\infty(\mathbb{R}^d;\mathcal{A})$.

Example 6.3 (Conditioning on one component). Let $X=(X_1,X_2)$ be a random vector in $\mathbb{R}^2$, and let $\mathcal{A}$ be generated by $X_2$. Then
\[
\bigcap_{W\in L^\infty(\mathbb{R}^2;\mathcal{A})} H_W\big(g(\langle X,W\rangle|X_2)\big) = \bigcap_{W\in L^\infty(\mathbb{R}^2;\mathcal{A})} H_W\big(g(X_1W_1|X_2) + \langle (0,X_2), W\rangle\big) = (0,X_2) + \bigcap_{W\in L^\infty(\mathbb{R}^2;\mathcal{A})} H_W\big(g(X_1W_1|X_2)\big).
\]
If $W_1\ge 0$ a.s., it is possible to use the homogeneity property to obtain
\[
G(\{X\}|\mathcal{A}) \subseteq (0,X_2) + \bigcap_{W\in L^\infty(\mathbb{R}_+\times\mathbb{R};\mathcal{A})} H_W\big(\langle (g(X_1|X_2), 0), W\rangle\big) = \big(g(X_1|X_2), X_2\big) + (-\infty,0]\times\{0\}.
\]
Note here that the polar to $\mathbb{R}_+\times\mathbb{R}$ is $(-\infty,0]\times\{0\}$. If $W_1\le 0$ a.s., then $g(X_1W_1|X_2) = (-W_1)\, g(-X_1|X_2)$. Thus,
\[
G(\{X\}|X_2) \subseteq \big[-g(-X_1|X_2),\ g(X_1|X_2)\big]\times\{X_2\}.
\]
Note that the first component is bounded between the conditional scalar gauge and its dual.

Consider now several particular gauges applied to random singletons.

Example 6.4 (Conditional half-space depth). Consider the gauge given by a quantile $g=q^-_\alpha$. It is defined for all $X\in L^0(\mathbb{R}^d)$. In the unconditional setting, $G(\{X\})$ is the Tukey (or half-space) depth-trimmed region at level $1-\alpha$, see Tukey (1975) and Nagy et al. (2019). If
$X$ is uniformly distributed in a convex set $K$, the set $G(\{X\})$ is called the floating body of $K$, see Nagy et al. (2019). It is known from Bobkov (2010) that, if the distribution of $X$ is log-concave and $\alpha\in(1/2,1)$, then
\[
h\big(G(\{X\}),u\big) = q^-_\alpha(\langle X,u\rangle), \qquad u\in S^{d-1}.
\]
If $\alpha=1$, then $g$ is the essential supremum, and $\operatorname{esssup}(\{X\})$ is the convex hull of the support of $X$. The conditional variant of the half-space depth may be used in a multiple-output regression setting to introduce a notion of depth (for the responses) conditioned on the value of the regressors, as indicated in Hallin et al. (2010) and Wei (2008).

Example 6.5. Assume that $X$ has a spherically symmetric distribution, that is, the distribution of $X$ is invariant under orthogonal transformations, see Fang et al. (1990, Section 2.1). It is known (see Theorem 2.4 ibid.) that this holds if and only if for all $w\in S^{d-1}$ the projection $\langle X,w\rangle$ has the same distribution as $\|w\| X_1$, where $X_1$ is the first component of $X$. If $g(X_1)\ge 0$, then
\[
G(\{X\}) = \bigcap_{w\in S^{d-1}}\big\{x\in\mathbb{R}^d : \langle x,w\rangle \le g(\langle X,w\rangle)\big\} = \bigcap_{w\in S^{d-1}}\big\{x\in\mathbb{R}^d : \langle x,w\rangle \le \|w\|\, g(X_1)\big\} = g(X_1)\, B,
\]
where $B$ is the unit Euclidean ball. If $Y=\mu+\Gamma X$ for a deterministic location $\mu\in\mathbb{R}^d$ and an invertible scale matrix $\Gamma\in\mathbb{R}^{d\times d}$, then (G2) and (G3) yield that $G(\{Y\})=\mu+g(X_1)\,\Gamma B$, which is a translated ellipsoid. For instance, if $Y$ follows the centred normal distribution with covariance matrix $\Sigma$, then $\Gamma=\Sigma^{1/2}$ and $X_1$ is standard normal. Conveniently, a closed-form solution for the gauge of a one-dimensional standard normal is known in many important cases. E.g., for the (upper and lower) quantile, $q^-_\alpha(X_1)=\Phi^{-1}(\alpha)$, $\alpha\in(0,1)$, and for average quantiles
\[
e_\alpha(X_1) = (1-\alpha)^{-1}\varphi\big(\Phi^{-1}(\alpha)\big), \qquad u_\alpha(X_1) = -\alpha^{-1}\varphi\big(\Phi^{-1}(\alpha)\big), \tag{6.1}
\]
where $\Phi^{-1}$ is the quantile function of a standard normal and $\varphi$ its density function.
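The closed forms in (6.1) can be evaluated with the Python standard library: `statistics.NormalDist` provides $\Phi^{-1}$ (`inv_cdf`) and $\varphi$ (`pdf`). A brief sketch, with helper names of our choosing:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal

def e_alpha(alpha):
    """Right-average quantile of N(0,1): (1-alpha)^{-1} * phi(Phi^{-1}(alpha)),
    formula (6.1)."""
    return N.pdf(N.inv_cdf(alpha)) / (1 - alpha)

def u_alpha(alpha):
    """Left-average quantile of N(0,1): -alpha^{-1} * phi(Phi^{-1}(alpha)),
    formula (6.1)."""
    return -N.pdf(N.inv_cdf(alpha)) / alpha
```

By symmetry of the standard normal, the duality $u_\alpha(X)=-e_{1-\alpha}(-X)$ reduces here to $u_\alpha(X_1)=-e_{1-\alpha}(X_1)$; for instance, $e_{0.975}(X_1)\approx 2.34$, the familiar expected shortfall of a standard normal at level $97.5\%$.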
Finally, for the conditional setup, assume that X given A follows a normal distribution with conditional mean vector µ and invertible conditional covariance matrix Σ. Such a situation arises, e.g., when X is jointly normal and A is generated by a subvector of X. Another relevant example is a mean–normal variance mixture such as a GARCH process with Gaussian innovations, which is a relevant model class in quantitative risk management; see McNeil et al. (2015, Chapter 6.2) and references therein. Then the above arguments work similarly, and we obtain

G({X}|A) = µ + g(X₁)Σ^{1/2}B.

That is, the conditional gauge is a random ellipsoid obtained by multiplying the Euclidean ball with g(X₁) and the random matrix Σ^{1/2} and translating by the random vector µ.

Example 6.6 (Conditional zonoids). If g = e_α is the right-average quantile, then G({X}) is the zonoid-trimmed region of X ∈ L¹(R^d), see Koshevoy and Mosler (1997) and Mosler (2002). In this case, the support function of G({X}) is equal to e_α(⟨X, u⟩) for all u ∈ S^{d−1}. Its conditional variant yields a random convex body, which may be understood as a conditional zonoid of a multivariate distribution. An application of this construction to the random vector (1, X) in dimension (d + 1) yields lift zonoids (see Mosler (2002)) and their conditional variants.

Example 6.7 (Conditional expectation and its maximum extension). If g is the maximum extension of the conditional expectation, then G({X}|A) is the conditional expectation of the random polytope P obtained as the convex hull of the finite set {X₁, ..., X_m} built by i.i.d. copies of X. This expectation taken with respect to a filtration appears in Ararat and Ma (2023) as an example of a set-valued martingale.
Let g = e_{p,a} be defined at (2.4). Assume that E(X|A) = 0 a.s. for X ∈ L^p(R^d). Then the corresponding conditional depth-trimmed region is the largest convex set such that

h(G({X}|A), W) = (E[(⟨X, W⟩)₊^p | A])^{1/p}

for all W ∈ L⁰(S^{d−1}; A). The depth-trimmed regions based on expectiles have been studied by Cascos and Ochoa (2021). Our construction provides their conditional variant, which has also been mentioned in the cited paper in view of applications to the regression setting.

7 Translations of cones

7.1 The case of a deterministic cone

Let 𝒳 = X + C, where X ∈ L^p(R^d) and C is a deterministic convex cone in R^d which is different from the origin and the whole space. In this case, h(𝒳, w) = ⟨X, w⟩ if w belongs to the polar cone C^o, and otherwise h(𝒳, w) = ∞ a.s. Thus, B_𝒳 = C^o. In the unconditional setting, G(X + C) is the largest closed convex set satisfying

h(G(X + C), w) ≤ g(⟨X, w⟩), w ∈ C^o.

Then

G(X + C) = ∩_{w∈S^{d−1}} H_w(g(h(X + C, w))) = ∩_{w∈C^o} H_w(g(⟨X, w⟩)).

If g = q⁻_α is the quantile, we recover the construction from Hamel and Kostner (2018), namely, q⁻_α(X + C) is the lower C-quantile function Q⁻_{X,C}(α) of X introduced in (Hamel and Kostner, 2018, Definition 4). If g is the expectile, we recover the downward cone expectile from Definition 2.1 of Hamel and Ha (2025). If g is the dual to the expectile, we recover the upward cone expectile from the same reference. Our construction works for a number of further gauge functions and also in the conditional setting. The following result immediately follows from (G5).

Proposition 7.1. Let g be a sublinear expectation defined on L^p(R). If X ∈ L^p(R^d) with G({X}|A) almost surely nonempty, and C is a deterministic cone, then

G(X + C|A) = G({X}|A) + C.

With this result, the examples from Section 6 can be easily reformulated for the case of random translations of deterministic cones.
Example 7.2. Let g be a sublinear expectation which is sensitive with respect to infinity, e.g., the right-average quantile or the expectile with τ ≥ 1/2. Assume that C^o is a subset of R^d₊. Then

g(h(X + C, w)) = g(⟨X, w⟩) ≤ h(g(X), w) = h(g(X) + C, w), w ∈ C^o.

Thus, G(X + C) = g(X) + C. In particular, if g = e_α is the right-average quantile, then G(X + C) is the sum of the zonoid-trimmed region of X and the cone C.

7.2 Random cones translated by a deterministic point

Let 𝒳 = x + C, where x is deterministic and C is a random cone. Since the gauge is translation equivariant, G(𝒳|A) = x + G(C|A). So we assume without loss of generality that x = 0 and let 𝒳 = C. The constant-preserving property (G2) implies that G(C|A) = C if C is A-measurable. Since B_𝒳 = C^o, we have h(C, W) = 0 if W is a selection of C^o. Otherwise, h(C, W) = ∞ on the event {W ∉ C^o}.

Proposition 7.3. Assume that the chosen gauge satisfies (g9). Then, for the corresponding set-valued conditional gauge, we have

G(C|A) = (m(C^o|A))^o. (7.1)

Proof. Due to (g9), for any W ∈ L^∞(R^d; A), on the event {P{W ∉ C^o|A} > 0} the conditional gauge is g(h(C, W)|A) = ∞. Therefore, G(C|A) becomes the intersection of H_W(0) over all W ∈ L^∞(R^d; A) such that P{W ∈ C^o|A} = 1 almost surely. In particular, it is possible to take W which almost surely belongs to C^o. The family of such W is the family of selections of the conditional core m(C^o|A), so that the intersection becomes the right-hand side of (7.1). It remains to notice that, for each W ∈ L^∞(R^d; A), the support function of (m(C^o|A))^o in direction W is at most g(h(C, W)|A).

In the unconditional setting, and assuming that the chosen gauge function satisfies (g9), G(C) is the polar cone to the set of fixed points of C^o. By Lépinette and
Molchanov (2019, Proposition 5.5), (m(C^o|A))^o equals the conditional convex hull M(C|A) of C. Since the essential supremum also satisfies (g9), we obtain as a corollary of Proposition 7.3 that esssup(C|A) equals the conditional convex hull of C, which also follows from (5.2).

If g is the essential infimum, then the situation is different. Then essinf(h(C, W)|A) = ∞ on the event {P{W ∉ C^o|A} = 1}, meaning that P{h(C, W) = ∞|A} = 1 a.s. and implying that W ∉ C^o almost surely. This is the polar set to esssup(C^o|A), and so

essinf(C|A) = (esssup(C^o|A))^o = m(C|A).

In particular, essinf(C) is the set of fixed points of C.

From our list of examples of gauge functions, the only ones not satisfying (g9) are the quantiles and left-average quantiles. So first suppose that g = q⁻_α. Consider the random variable ζ_t, where ζ_t = ∞ with probability 1 − t and otherwise ζ_t = 0, for t ∈ (0, 1). Define r(t) = q⁻_α(ζ_t). Then r(t) = 0 if t ≥ α and otherwise r(t) = ∞. Thus, in the unconditional setting, for w ∈ R^d,

q⁻_α(h(C, w)) = r(P{w ∈ C^o}) = 0 if P{w ∈ C^o} ≥ α, and ∞ otherwise.

This set-valued gauge is the polar to the cone of all w ∈ R^d such that P{w ∈ C^o} ≥ α. The set

Q_α(C^o) = {w : P{w ∈ C^o} ≥ α} (7.2)

is called the Vorob'ev quantile of C^o at level α, see Molchanov (2017, Section 2.2.2). Note that Q_α(C^o) is itself a cone, and Q_α(C^o) converges to the set of all x such that P{x ∈ C^o} > 0 as α ↓ 0. Thus, q⁻_α(C) = (Q_α(C^o))^o, that is, the gauge of C is the polar set to Q_α(C^o). The same expression holds for left-average quantiles, since u_α(ζ_t) = q⁻_α(ζ_t).

7.3 Random translations of random cones

Let 𝒳 = X + C for a p-integrable random vector X and a (possibly random) cone C. There are two natural ways to introduce the conditional σ-algebra A in this setting. If A is generated by X (more generally, if X is A-measurable), then (G5) yields that G(X + C|A) = X + G(C|A). Then the arguments from Section 7.2 apply. If C is A-measurable, then (G5) yields that G(X + C|A) = G(X|A) + C.
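The polar-cone computations used in this section can be checked numerically for two-dimensional polyhedral cones. The helper below is our own sketch (not from the paper), under the convention C^o = {x : ⟨x, v⟩ ≤ 0 for all v ∈ C}; it returns the boundary rays of the polar of the cone spanned by two generators:

```python
def polar_rays(v1, v2):
    """Boundary rays of the polar cone {x : <x,v1> <= 0, <x,v2> <= 0}
    of the 2-D cone spanned by the generators v1, v2."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    rays = []
    for v, other in ((v1, v2), (v2, v1)):
        # the polar boundary ray is orthogonal to a generator and on the
        # non-positive side of the other generator
        for w in ((-v[1], v[0]), (v[1], -v[0])):
            if dot(w, other) <= 0:
                rays.append(w)
                break
    return rays

# Cone generated by (-2,-1) and (-1,-2) (the cone C^o of Example 7.5 at
# its essential infimum pi = 2): its polar has (-1,2) and (2,-1) on its boundary.
print(polar_rays((-2, -1), (-1, -2)))
```

Applying the helper twice returns the original generators, illustrating that polarity is an involution on closed convex cones.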
In the remainder of this section we adopt the unconditional setting. If (g9) holds, then

G(X + C) = ∩_{w∈S^{d−1}} H_w(g(h(X + C, w))) = ∩_{w∈m(C^o)} H_w(g(⟨X, w⟩)),

where m(C^o) is the set of points w which almost surely belong to C^o. Without (g9), the same holds with m(C^o) replaced by Q_α(C^o) from (7.2).

Example 7.4. Consider the convex cone C in R² with points (−κ₁, 1) and (1, −κ₂) on its boundary, where κ₁, κ₂ are positive random variables such that κ₁κ₂ ≥ 1 a.s. This cone describes the family of portfolios on two assets available at price zero and such that κ₁ units of the first asset can be exchanged for one unit of the second asset, and also κ₂ units of the second asset can be exchanged for a single unit of the first asset. The higher the value of κ₁κ₂, the higher the transaction costs paid when exchanging the assets.

If the gauge function satisfies (g9), then G(C) = (m(C^o))^o. If essinf κᵢ = kᵢ, i = 1, 2, and k₁k₂ ≥ 1, then m(C^o) is the cone with (k₂, 1) and (1, k₁) on its boundary, so that G(C) is the cone with points (1, −k₂) and (−k₁, 1) on its boundary. If k₁k₂ < 1, then m(C^o) = R²₊, and so G(C) = {0}. If the essential suprema of κ₁ and κ₂ are infinite, then essinf(C) = R²₋.

If g is a quantile, then G(C) is the polar to the Vorob'ev quantile of C^o, which is given by the cone with points
(1, q⁻_α(κ₁)) and (q⁻_α(κ₂), 1) on its boundary.

Assume that 𝒳 = X + C for X ∈ L^p(R^d). Then h(𝒳, w) = ⟨X, w⟩ if w ∈ C^o, and otherwise h(𝒳, w) = ∞. If the gauge function satisfies (g9), then G(𝒳) is the sum of G({X}) (which is the corresponding depth-trimmed region of X) and G(C) as described above. If the gauge is a quantile q⁻_α, we have

q⁻_α(h(𝒳, w)) = inf{t ∈ R : P{⟨X, w⟩ ≤ t, w ∈ C^o} ≥ α}.

Example 7.5. Let X be normally distributed in R² with mean µ = (1, 2), standard deviations σ₁ = 0.3, σ₂ = 0.5, and correlation coefficient ρ = 0.6. Denote by Σ the covariance matrix of X and note that

Σ = ( 0.09  0.09
      0.09  0.25 ).

Let C be a random cone that contains R²₊ and has points (−1, π) and (π, −1) on its boundary, where (π − 2) is lognormally distributed with zero log-mean and volatility σ = 0.2. Note that C^o is the cone with points (−π, −1) and (−1, −π) on its boundary, so that m(C^o) is the cone with points (−2, −1) and (−1, −2) on its boundary. Below we calculate G(X + C) for different choices of the underlying gauge: the expectation, the quantile and the lower average quantile with α = 0.1, the upper average quantile with α = 0.9, and the norm-based one with p = 2 and a = 1.
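The numerical constants quoted in Example 7.5 can be reproduced with the standard library; the variable names below are ours:

```python
from math import exp, sqrt
from statistics import NormalDist

nd = NormalDist()  # standard normal

# covariance entries of X from sigma1 = 0.3, sigma2 = 0.5, rho = 0.6
sigma1, sigma2, rho = 0.3, 0.5, 0.6
Sigma_offdiag = rho * sigma1 * sigma2        # 0.09, the off-diagonal entry

alpha_hi, alpha_lo = 0.9, 0.1
e09 = nd.pdf(nd.inv_cdf(alpha_hi)) / (1 - alpha_hi)   # upper average quantile, ≈ 1.75
u01 = -nd.pdf(nd.inv_cdf(alpha_lo)) / alpha_lo        # lower average quantile, ≈ -1.75
q01 = nd.inv_cdf(alpha_lo)                            # lower quantile, ≈ -1.28
e21 = 1 / sqrt(2)   # (E[zeta_+^2])^{1/2} for a standard normal zeta

# pi = 2 + exp(0.2 Z): its 0.1-quantile gives the boundary coordinate 2.77
pi_q01 = 2 + exp(0.2 * nd.inv_cdf(alpha_lo))

print(Sigma_offdiag, round(e09, 2), round(u01, 2), round(q01, 2), round(pi_q01, 2))
```

This confirms the coefficients 1.75, −1.75, −1.28 and the boundary coordinate 2.77 appearing in the example.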
If u ∈ m(C^o) and ζ is a standard normal random variable, then

E h(X + C, u) = ⟨µ, u⟩,
e_{0.9}(h(X + C, u)) = ⟨µ, u⟩ + ⟨Σu, u⟩^{1/2} e_{0.9}(ζ) = ⟨µ, u⟩ + ⟨Σu, u⟩^{1/2}(1 − α)^{−1}ϕ(Φ^{−1}(α)) ≈ ⟨µ, u⟩ + 1.75⟨Σu, u⟩^{1/2},
u_{0.1}(h(X + C, u)) = ⟨µ, u⟩ + ⟨Σu, u⟩^{1/2} u_{0.1}(ζ) = ⟨µ, u⟩ − ⟨Σu, u⟩^{1/2}α^{−1}ϕ(Φ^{−1}(α)) ≈ ⟨µ, u⟩ − 1.75⟨Σu, u⟩^{1/2},
e_{2,1}(h(X + C, u)) = ⟨µ, u⟩ + (E(⟨X − µ, u⟩)₊²)^{1/2} = ⟨µ, u⟩ + ⟨Σu, u⟩^{1/2}/√2.

All these gauges are infinite for all u ∉ m(C^o). Thus, E(X + C) = µ + (m(C^o))^o.

The Vorob'ev quantile Q_α(C^o) at level α = 0.1 of C^o is the cone which is a subset of R²₋ and has points (−2.77, −1) and (−1, −2.77) on its boundary. If u ∈ Q_α(C^o), then

q⁻_{0.1}(h(X + C, u)) = ⟨µ, u⟩ + ⟨Σu, u⟩^{1/2}Φ^{−1}(α) ≈ ⟨µ, u⟩ − 1.28⟨Σu, u⟩^{1/2}.

Acknowledgements

The authors are grateful to Ignacio Cascos for comments on earlier versions of this paper.

Compliance with ethical standards

No funds, grants, or other support was received. The authors have no relevant financial or non-financial interests to disclose. The authors declare that they have no conflict of interest.

References

Ararat C, Ma J (2023) Path-regularity and martingale properties of set-valued stochastic integrals. Technical report, arXiv:2308.13110
Bellini F, Klar B, Müller A, Rosazza Gianin E (2014) Generalized quantiles as risk measures. Insurance Mathematics and Economics 54: 41–48
Belloni A, Winkler RL (2011) On multivariate quantiles under partial orders.
Ann Statist 39: 1125–1179
Bobkov SG (2010) Convex bodies and norms associated to convex measures. Probab Theory Related Fields 147: 303–332
Carlier G, Chernozhukov V, Galichon A (2016) Vector quantile regression: an optimal transport approach. Ann Statist 44: 1165–1192
Cascos I, Li Q, Molchanov I (2021) Depth and outliers for samples of sets and random sets distributions. Aust N Z J Stat 63: 55–82
Cascos I, Ochoa M (2021) Expectile depth: theory and computation for bivariate datasets. J Multivariate Anal 184: Paper No. 104757, 17
de Castro L, Costa BN, Galvao AF, Zubelli JP (2023) Conditional quantiles: An operator-theoretical approach. Bernoulli 29: 2392–2416
Chaudhuri P (1996) On a geometric notion of quantiles for multivariate data. J Amer Statist Assoc 91: 862–872
Delbaen F (2012) Monetary Utility Functions. Osaka University Press
Fang KT, Kotz S, Ng
KW (1990) Symmetric Multivariate and Related Distributions. Chapman and Hall
Fischer T (2003) Risk capital allocation by coherent risk measures based on one-sided moments. Insurance Math Econom 32: 135–146
Fissler T, Holzmann H (2022) Measurability of functionals and of ideal point forecasts. Electronic Journal of Statistics 16: 5019–5034
Föllmer H, Schied A (2004) Stochastic Finance: An Introduction in Discrete Time. De Gruyter Studies in Mathematics. Walter de Gruyter
Guo T, Zhang E, Wu M, Yang B, Yuan G, Zeng X (2017) On random convex analysis. J Nonlinear Convex Anal 18: 1967–1996
Hallin M, Paindaveine D, Šiman M (2010) Multivariate quantiles and multiple-output regression quantiles: from L1 optimization to halfspace depth. Ann Statist 38: 635–669
Hamel AH, Ha TKL (2025) Set-valued expectiles for ordered data analysis. J Multivariate Anal 208: Paper No. 105425
Hamel AH, Heyde F (2010) Duality for set-valued measures of risk. SIAM J Financial Math 1: 66–95
Hamel AH, Heyde F, Rudloff B (2011) Set-valued risk measures for conical market models. Math Finan Economics 5: 1–28
Hamel AH, Kostner D (2018) Cone distribution functions and quantiles for multivariate random variables. J Multivariate Anal 167: 97–113
Hamel AH, Rudloff B, Yankova M (2013) Set-valued average value at risk and its computation. Math Finan Economics 7: 229–246
Kabanov Y, Safarian M (2009) Markets with Transaction Costs: Mathematical Theory. Springer Finance. Springer-Verlag, Berlin
Kim S, Cho HR, Wu C (2021) Risk-predictive probabilities and dynamic nonparametric conditional quantile models for longitudinal analysis. Statist Sinica 31: 1415–1439
Koshevoy G, Mosler K (1997) Zonoid trimming for multivariate distributions. Ann Statist 25: 1998–2017
Lépinette E, Molchanov I (2019) Conditional cores and conditional convex hulls of random sets. J Math Anal Appl 478: 368–392
Liu RY (1988) On a notion of simplicial depth.
Proc Nat Acad Sci USA 85: 1732–1734
McNeil AJ, Frey R, Embrechts P (2015) Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press, Princeton, revised edition
Molchanov I (2017) Theory of Random Sets. Springer, London, 2nd edition
Molchanov I, Cascos I (2016) Multivariate risk measures: a constructive approach based on selections. Math Finance 26: 867–900
Molchanov I, Mühlemann A (2021) Nonlinear expectations of random sets. Finance Stoch 25: 5–41
Molchanov I, Turin R (2021) Convex bodies generated by sublinear expectations of random vectors. Adv in Appl Math 131: Paper No. 102251, 31
Mosler K (2002) Multivariate Dispersion, Central Regions and Depth: The Lift Zonoid Approach, volume 165 of Lect. Notes Stat. Springer, Berlin
Nagy S, Schütt C, Werner E (2019) Halfspace depth and floating body. Stat Surv 13: 52–118
Newey WK, Powell JL (1987) Asymmetric least squares estimation and testing. Econometrica 55: 819–847
Paindaveine D, Virta J (2021) On the behavior of extreme d-dimensional spatial quantiles under minimal assumptions. In: Advances in Contemporary Statistics and Econometrics: Festschrift in Honor of Christine Thomas-Agnan, pp. 243–259. Springer, Cham
Schneider R (2014) Convex Bodies: The Brunn–Minkowski Theory. Cambridge University Press
Testing Random Effects for Binomial Data

Lucas Kania, Larry Wasserman and Sivaraman Balakrishnan
Department of Statistics and Data Science, Carnegie Mellon University

In modern scientific research, small-scale studies with limited participants are increasingly common. However, interpreting individual outcomes can be challenging, making it standard practice to combine data across studies using random effects to draw broader scientific conclusions. In this work, we introduce an optimal methodology for assessing the goodness of fit between a given reference distribution and the distribution of random effects arising from binomial counts. Using the minimax framework, we characterize the smallest separation between the null and alternative hypotheses, called the critical separation, under the 1-Wasserstein distance that ensures the existence of a valid and powerful test. The optimal test combines a plug-in estimator of the Wasserstein distance with a debiased version of Pearson's chi-squared test. We focus on meta-analyses, where a key question is whether multiple studies agree on a treatment's effectiveness before pooling data. That is, researchers must determine whether treatment effects are homogeneous across studies. We begin by analyzing scenarios with a specified reference effect, such as testing whether all studies show the treatment is effective 80% of the time, and describe how the critical separation depends on the reference effect. We then extend the analysis to homogeneity testing without a reference effect and construct an optimal test by debiasing Cochran's chi-squared test. Finally, we illustrate how our proposed methodologies improve the construction of p-values and confidence intervals, with applications to assessing drug safety in the context of rare adverse outcomes and modeling political outcomes at the county level.
Contents
1 Introduction
2 The minimax framework
3 Goodness of fit testing
4 Homogeneity testing with a reference effect
5 Homogeneity testing without a reference effect
6 Applications to meta-analyses and model selection
7 Discussion
Appendices
References

arXiv:2504.13977v1 [math.ST] 17 Apr 2025

1 Introduction

In performance evaluation, Lord [1965], Lord and Cressie [1975] and Grilli et al. [2015] study the problem of estimating an individual's true performance in a task based on a limited number of evaluations. They consider a scenario where n individuals answer t questions, and the number of correct responses is recorded. When the number of questions is small compared to the number of participants, it becomes difficult to make meaningful inferences about any single participant. However, aggregating data across all participants enables drawing conclusions about the population. To facilitate such data pooling, the authors propose a binomial mixture model, where the probability of a correct response for each individual is drawn from an underlying distribution:

X_i | p_i ∼ Bin(t, p_i), p_i ∼ π, 1 ≤ i ≤ n, (1)

where X_i represents the observed scores, p_i are the true scores of the candidates, and π is the mixing distribution, which captures the variability in individual abilities. The research primarily focuses on analyzing the properties of the mixing distribution, which may belong to a parametric or nonparametric family. For instance, while Lord [1969] allowed π to be any distribution supported on [0, 1], Thomas [1989] and Grilli et al. [2015] restricted their analysis to finite binomial mixtures to
https://arxiv.org/abs/2504.13977v1
better account for known individual differences in task performance.

Binomial mixtures also play an important role in other fields. They are used to account for variation in mouse mortality rates [Brooks et al., 1997], word frequencies [Lowe, 1999], welfare program participation [Melkersson and Saarela, 2004], genetic heterogeneity [Zhou and Pan, 2009], genomic dependencies [Snipen et al., 2009, Hogg et al., 2007], and RNA composition [Jürges et al., 2018, Lin et al., 2023]. In all cases, assessing the compatibility between a reference mixing distribution and the data is crucial.

A central statistical question is how accurately the mixing distribution can be recovered. The main challenge is that a binomial mixture preserves only the first t moments of the underlying distribution. Consequently, any two mixing distributions sharing these moments produce statistically indistinguishable observations. Research on this problem falls into two categories: methods that reliably recover the mixing distribution but require strong conditions on the number of trials, and methods that avoid such conditions but offer weaker statistical guarantees. Teicher [1963] studied the identifiability of finite mixing distributions. Building on his work, Dedecker and Michel [2013], Dedecker et al. [2015], Nguyen [2013], Ho and Nguyen [2016], Heinrich and Kahn [2018] analyzed the convergence of the method of moments (MOM) and the maximum likelihood estimator (MLE) under strong identifiability conditions, which require smoothness assumptions on the mixing distribution. In particular, Manole and Khalili [2021] investigated binomial mixtures under these conditions. More generally, Ye and Bickel [2021], Tian et al. [2017], Vinayak et al. [2019] study the estimation of the mixing distribution without requiring identifiability conditions, using the plug-in, MOM and MLE estimators.
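The binomial mixture model (1) is straightforward to simulate. The sketch below uses only the Python standard library and assumes, purely for illustration, a Beta mixing distribution:

```python
import random

def sample_binomial_mixture(n, t, sample_pi, rng):
    """Draw (X_1, ..., X_n) from model (1): p_i ~ pi, X_i | p_i ~ Bin(t, p_i)."""
    xs = []
    for _ in range(n):
        p = sample_pi(rng)                              # random effect p_i ~ pi
        xs.append(sum(rng.random() < p for _ in range(t)))  # Bin(t, p_i) count
    return xs

rng = random.Random(0)
# illustrative mixing distribution: Beta(2, 5), so E[p] = 2/7
data = sample_binomial_mixture(1000, 10, lambda r: r.betavariate(2, 5), rng)
mean = sum(data) / len(data)   # close to t * E[p] = 10 * 2/7 ≈ 2.86
```

The sample mean concentrates around t·E_π[p], but, as discussed above, the observed counts reveal only the first t moments of π.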
Their findings indicate that, without stronger assumptions, standard estimators of the mixing distribution may be unreliable when the number of trials is much smaller than the number of studies.

In this work, we focus on testing rather than estimation. Instead of approximating the mixing distribution, we assess its proximity to a reference distribution. To compare them, we use the 1-Wasserstein distance, denoted by W₁ and defined in section 2, since it does not impose restrictions on the support of the compared distributions. We study goodness-of-fit testing, also known as the identity testing problem, under the binomial mixture model. Given an arbitrary reference mixing distribution π₀, called the null distribution, we aim to test whether the mixing distribution underlying the observed data in (1) equals π₀ or deviates significantly from it as measured by the 1-Wasserstein distance:

H₀: π = π₀ vs. H₁: W₁(π, π₀) ≥ ϵ.

The parameter ϵ represents the separation between the hypotheses. When ϵ = 0, the hypotheses overlap, making them indistinguishable. Our goal is to determine the smallest ϵ for which a test can successfully differentiate the hypotheses while controlling the probability of making a mistake. To characterize the minimum separation, we adopt the non-parametric minimax framework for hypothesis testing, which can be traced back to the foundational work of Mann and Wald [1942], Ermakov [1990, 1991], Ingster [1993a,b,c], and Lepski and Spokoiny [1999]. Intuitively, some null distributions are easier to test because fewer observations fall within the statistical fluctuations allowed under the null hypothesis. This phenomenon, known as locality,
has been observed in binomial, multinomial, and Poisson distributions, and in smooth densities, in the context of fixed effects testing [Valiant and Valiant, 2014, Balakrishnan and Wasserman, 2019, Chhor and Carpentier, 2022].

A key application of locality arises in statistical meta-analysis of treatment effectiveness. Suppose n studies are conducted, each applying the treatment to t participants. To obtain a precise estimate of treatment effectiveness, scientists pool the data across studies. However, before combining results, it is crucial to verify whether the treatment effect is homogeneous across studies. This requires testing if the mixing distribution is concentrated around a single point:

H₀: π = δ_{p₀} for some 0 ≤ p₀ ≤ 1 vs. H₁: W₁(π, δ_{p₀}) ≥ ϵ,

where p₀, called the reference effect, is typically unknown. In clinical trials, p₀ can be very small, making commonly used asymptotically valid tests, such as Pearson's chi-squared test [Pearson, 1900] and Cochran's chi-squared test [Cochran, 1954], unreliable since their asymptotic approximations do not hold [Park, 2019]. In this work, we address the following questions:

• How difficult is goodness-of-fit testing for an arbitrarily complex null hypothesis?
• How difficult is homogeneity testing?

For goodness-of-fit testing, the worst case is as difficult as estimation. When the null hypothesis is arbitrarily complex, no consistent test exists for a fixed number of trials. The optimal testing algorithm combines a plug-in estimator of the Wasserstein distance with a debiased Pearson's chi-squared test. Our main contribution is proving its optimality using Kravchuk polynomials [Kravchuk, 1929], which are orthogonal under the binomial distribution. They enable unbiased estimation of the mixing distribution's moments and link probabilistic distances between marginal distributions to moment differences between the mixing distributions.
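Polynomials orthogonal under the binomial law can be constructed directly by Gram–Schmidt orthogonalization of the monomials against the Bin(t, p) weight; up to normalization, this reproduces the Kravchuk polynomials. The sketch below is our own construction, not the paper's code:

```python
from math import comb

def binom_pmf(t, p):
    """Weights of the Bin(t, p) distribution on {0, ..., t}."""
    return [comb(t, j) * p**j * (1 - p)**(t - j) for j in range(t + 1)]

def inner(f, g, w):
    """Inner product <f, g> = sum_x w(x) f(x) g(x) under the binomial weight."""
    return sum(wj * fj * gj for wj, fj, gj in zip(w, f, g))

def orthogonal_basis(t, p, degree):
    """Gram-Schmidt on the monomials 1, x, ..., x^degree under Bin(t, p);
    up to normalisation this yields the Kravchuk polynomials (as value tables)."""
    w = binom_pmf(t, p)
    xs = range(t + 1)
    basis = []
    for k in range(degree + 1):
        v = [float(x)**k for x in xs]        # values of x^k on {0, ..., t}
        for b in basis:
            c = inner(v, b, w) / inner(b, b, w)
            v = [vj - c * bj for vj, bj in zip(v, b)]
        basis.append(v)
    return basis

B = orthogonal_basis(t=6, p=0.3, degree=3)
```

By construction, distinct elements of `B` have zero inner product under the Bin(6, 0.3) weight, which is the orthogonality property the testing arguments exploit.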
Analogously to the Hermite and Charlier polynomials for the normal and Poisson distributions [Wu and Yang, 2020b], they significantly simplify constructing worst-case scenarios where all testing algorithms fail.

For homogeneity testing with respect to a reference effect, the optimal test combines two Kravchuk polynomials. The problem's difficulty depends on the null distribution's location. When there are few trials, the data sparsity induced by the null distribution significantly affects the hardness of the problem, meaning that mixing distributions that generate non-sparse data are harder to test. Nevertheless, it remains possible to consistently test whether the mixing distribution concentrates around the reference effect, even with as few as two trials. This result extends to homogeneity testing without a reference effect, for which an optimal test can be constructed by debiasing Cochran's chi-squared test. Both methods can be used to construct tighter confidence intervals and enhance the power of p-values in clinical meta-analyses.

The remainder of the paper is organized as follows: Section 2 introduces the minimax framework, and section 3 studies goodness of fit testing. Section 4 introduces the local minimax framework and applies it to homogeneity testing with simple null distributions. Section 5 extends the analysis to homogeneity testing with a composite null distribution. Finally, section 6 applies the methodologies to problems in the medical and political sciences.

Notation. Henceforth, we write aₙ ≲ bₙ if there exists a positive constant C such that aₙ ≤ C·bₙ for all n large enough. Analogously, aₙ ≍ bₙ denotes that aₙ ≲ bₙ and bₙ ≲ aₙ. Furthermore, given a distribution π, we use m_l(π) = E_{p∼π}[p^l] to
denote its l-th moment, and V(π) = m₂(π) − m₁(π)² to denote its variance. Let δ_A(a) equal 1 if a ∈ A, and 0 otherwise. Alternatively, we use the notation I(condition) to denote the indicator function that evaluates to 1 whenever the condition is true and returns 0 otherwise. Finally, let a ∧ b = min(a, b) and a ∨ b = max(a, b).

2 The minimax framework

We develop the minimax framework to evaluate test performance. Consider n binomial observations, each consisting of t trials. The random effects model is given by (1). The probability mass function of the binomial distribution is

P(A) = Bin(t, p)(A) = Σ_{j∈A} C(t, j) p^j (1 − p)^{t−j} = Σ_{j=0}^t δ_A(j)·B_{j,t}(p),

where B_{j,t}(p) = C(t, j) p^j (1 − p)^{t−j} is the j-th element of the t-order Bernstein basis and C(t, j) denotes the binomial coefficient. The marginal measure of the data follows a mixture of binomial distributions:

P_π(A) = ∫₀¹ Bin(t, p)(A) dπ(p) = Σ_{j=0}^t δ_A(j)·b_{j,t}(π) for A ⊆ {0, ..., t}, (2)

where b_{j,t}(π) = E_{p∼π}[B_{j,t}(p)] for 0 ≤ j ≤ t are called the expected fingerprints under π, representing the probability of observing j successes under the binomial mixture. Thus, (1) is equivalent to stating that we have n observations from the marginal measure: X_i ∼ P_π for 1 ≤ i ≤ n.

A key challenge is that the marginal measure of the data P_π preserves only the first t moments of the mixing distribution. This can be easily seen by noting that (2) depends on π only through the expected fingerprints and rewriting them as a function of the first t moments:

b_{j,t}(π) = Σ_{l=j}^t C(t, l) C(l, j) (−1)^{l−j}·m_l(π), where m_l(π) = E_{p∼π}[p^l]. (3)

Hence, tests can estimate only the first t moments of the mixing distribution, making mixing distributions that share these moments indistinguishable. Given a null distribution π₀, we test whether a mixing distribution equals the null or differs from it:

H₀: π = π₀ vs. H₁: π ≠ π₀.

Under the alternative hypothesis, the distributions may be arbitrarily close. If we want to distinguish them while providing statistical guarantees, we can only do so asymptotically.
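Identity (3) can be verified numerically for a discrete mixing distribution: the expected fingerprints computed directly as E_π[B_{j,t}(p)] agree with the alternating sum over the first t moments. A sketch (the helper names are ours):

```python
from math import comb

def fingerprint_direct(j, t, ps, ws):
    """b_{j,t}(pi) = E_pi[B_{j,t}(p)] for a discrete mixing distribution
    placing weight ws[k] on the point ps[k]."""
    return sum(w * comb(t, j) * p**j * (1 - p)**(t - j) for p, w in zip(ps, ws))

def fingerprint_moments(j, t, ps, ws):
    """The same fingerprint via formula (3):
    b_{j,t}(pi) = sum_{l=j}^t C(t,l) C(l,j) (-1)^{l-j} m_l(pi)."""
    def m(l):  # l-th moment of the mixing distribution
        return sum(w * p**l for p, w in zip(ps, ws))
    return sum(comb(t, l) * comb(l, j) * (-1)**(l - j) * m(l)
               for l in range(j, t + 1))

# a two-point mixing distribution, chosen purely for illustration
ps, ws, t = [0.2, 0.7], [0.6, 0.4], 5
```

Since the fingerprints depend on π only through its first t moments, any two mixing distributions with the same first t moments produce identical marginal laws.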
To provide a finite-sample analysis, some separation between the hypotheses must be required. In this work, we measure their distance under the 1-Wasserstein distance, also called the earth mover's distance:

Test H₀: π = π₀ vs. H₁: W₁(π, π₀) ≥ ϵ. (4)

The 1-Wasserstein distance quantifies the cost of transporting mass from π to π₀ [Villani, 2009, Santambrogio, 2015]. Let D denote the set of all distributions supported on [0, 1] and Γ be the set of distributions supported on [0, 1]² whose marginals are π and π₀; then W₁ is defined as:

W₁(π, π₀) = inf_{γ∈Γ} E_{(p,q)∼γ}|p − q|.

Moreover, it admits a dual representation as an integral probability metric [Kantorovich and Rubinshtein, 1958]:

W₁(π, π₀) = sup_{f∈Lip₁[0,1]} E_{p∼π}[f(p)] − E_{p∼π₀}[f(p)], (5)

where Lip₁[0, 1] is the set of all 1-Lipschitz functions supported on [0, 1], and the function achieving the supremum is called the witness function.

Our goal is to understand how small ϵ in (4) can be such that there exists a test that consistently distinguishes the hypotheses. A test ψ maps the sample space to {0, 1}, returning 0 if it considers that the data supports the null hypothesis and 1 otherwise. Its type I error is measured as the probability of choosing the alternative hypothesis when the null hypothesis is true. Let Ψ be the set of all tests that control
the type I error by α ∈ (0, 1) for any null distribution, called valid tests:

Ψ = ∩_{π₀∈D} Ψ(π₀), where Ψ(π₀) = {ψ : Pⁿ_{π₀}(ψ(X) = 1) ≤ α},

where X = (X₁, ..., Xₙ) is a vector containing n observations. The risk of a valid test is given by its maximum type II error, i.e., the probability of choosing the null hypothesis when the alternative is true:

R(ψ, π₀, ϵ) = sup_{W₁(π,π₀)≥ϵ} Pⁿ_π(ψ(X) = 0).

A test with low risk is called powerful. The minimax risk quantifies the best performance over all valid tests under the worst-case null distribution:

R*(ϵ) = inf_{ψ∈Ψ} sup_{π₀∈D} R(ψ, π₀, ϵ).

A valid test is called minimax or optimal if its risk equals the minimax risk. For a fixed sample size n and number of trials t, there might not exist any powerful valid test if ϵ is too small. Thus, we define the critical separation as the smallest ϵ such that there exists a powerful valid test:

ϵ*(n, t) = inf{ϵ : R*(ϵ) ≤ β}, (6)

where β is some arbitrary constant in (0, 1 − α). For any ϵ < ϵ*, no valid test can control the type II error by β. Furthermore, if lim infₙ ϵ*(n, t) = 0, we say that there exists a sequence of tests that consistently distinguishes the hypotheses. That is, there exists a sequence of tests that, in the limit of infinite observations, detects any deviation from the null distribution.

To characterize the critical separation, we derive upper and lower bounds that match up to constants. Given any valid test ψ that controls the risk by β, its risk function provides an upper bound on the critical separation:

ϵ*(n, t) ≤ inf{ϵ : sup_{π₀∈D} R(ψ, π₀, ϵ) ≤ β}. (7)

In the simplest case, a lower bound follows by identifying two mixing distributions, π₀ and π₁, that are far when measured by the Wasserstein distance but induce statistically indistinguishable marginal measures. Consequently, no test can distinguish them, and the critical separation is lower-bounded by

ϵ*(n, t) ≥ W₁(π₁, π₀) if P_{π₀} =_d P_{π₁}.
(8)

When the upper and lower bounds in (7) and (8) match up to constants, we say that we have characterized the minimax separation up to constants.

Figure 1: The left panel displays the Wasserstein distance between the moment-matching distributions as the number of matched moments increases ($W_1$ decreases as $1/k$ for $k$ matched moments). The right panel displays two distributions (null and alternative) that match 10 moments.

3 Goodness-of-fit testing

We derive the critical separation for goodness-of-fit testing under arbitrary null distributions. Sections 3.1 and 3.2 analyze the performance of the plug-in and debiased Pearson chi-squared tests. Section 3.3 shows that the plug-in test is optimal when $t > n$, while the debiased Pearson chi-squared test is optimal when $t < n$.

3.1 The plug-in test

Since we aim to detect deviations under the Wasserstein distance, a natural approach is to use the plug-in estimator. Consider the Wasserstein distance between the empirical and null distributions:

$$\widehat{W}_1(X) = W_1(\hat\pi, \pi_0) \quad \text{where} \quad \hat\pi = \frac{1}{n}\sum_{i=1}^n \delta_{X_i/t}.$$

We reject the null hypothesis when $\widehat{W}_1(X)$ exceeds its $1-\alpha$ quantile under the null distribution, denoted by $q_\alpha(P_{\pi_0}, \widehat{W}_1)$, ensuring type I error control:

$$\psi^\alpha_{\widehat{W}_1}(X) = \mathbb{I}\big(\widehat{W}_1(X) \geq q_\alpha(P_{\pi_0}, \widehat{W}_1)\big). \quad (9)$$

Intuitively, the test works whenever $W_1(\hat\pi, \pi_0)$ is a good approximation of $W_1(\pi, \pi_0)$. To understand the quality of the approximation, it is useful to separate the uncertainty due to the mixing distribution from that due to binomial sampling. Let $p_i$ be the random effect associated with $X_i$ in (1) and define the unobserved empirical measure of random effects

$$\tilde\pi = \frac{1}{n}\sum_{i=1}^n \delta_{p_i}. \quad (10)$$

By the triangle inequality, it holds that

$$|W_1(\pi, \pi_0) - W_1(\hat\pi, \pi_0)| \leq W_1(\pi, \tilde\pi) + W_1(\tilde\pi, \hat\pi). \quad (11)$$

Hence, whenever the right-hand side is small, the plug-in statistic is a good approximation of $W_1(\pi, \pi_0)$. Since $\tilde\pi$ is an empirical measure sampled from $\pi$, Theorem 3.2 of Bobkov and Ledoux [2019] tells us that the first term should not be large on average:

$$\mathbb{E}[W_1(\pi, \tilde\pi)] \leq \frac{J(\pi)}{\sqrt{n}} \quad \text{where} \quad J(\pi) = \int_0^1 \sqrt{F_\pi(x)(1 - F_\pi(x))}\, dx$$

and $F_\pi(x) = \int_0^x d\pi$ is the cumulative distribution function of $\pi$. For the second term, conditioning on $\tilde\pi$, the concentration of $\hat\pi$ around $\tilde\pi$ can be measured by a Bernstein bound [Ye and Bickel, 2021]. Using this strategy and localizing the result around the null distribution, we derive the separation rates for the plug-in test. The proof is in appendix G.1.2.

Theorem 1. The plug-in test (9) controls the type I error by $\alpha$. Moreover, there exists a universal positive constant $C$ such that the test controls the type II error by $\beta$ whenever

$$\epsilon(n, t, \pi_0) \geq C \cdot \left[ \frac{J(\pi_0)}{\sqrt{n}} + \sqrt{\frac{\mathbb{E}_{p\sim\pi_0}[p(1-p)]}{t}} + \frac{1}{n} + \frac{1}{t} \right].$$

The first term indicates that the plug-in test adapts to the concentration of the null distribution, vanishing when $\pi_0$ is a point mass. The second term accounts for the adaptation to low binomial variance, meaning that the location of $\pi_0$ influences detection. When $\pi_0$ concentrates near the endpoints of $[0,1]$, the test detects smaller deviations from the null distribution.
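The plug-in test (9) can be sketched numerically. Below is a minimal illustration in which the null quantile $q_\alpha(P_{\pi_0}, \widehat W_1)$ is approximated by Monte Carlo simulation under $H_0$; the helper names, seed, and calibration budget `B` are illustrative choices, not part of the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def plugin_stat(X, t, support, weights):
    # W1 between the empirical measure of X_i / t and the null mixing distribution pi_0.
    return wasserstein_distance(np.asarray(X) / t, support, v_weights=weights)

def plugin_test(X, t, support, weights, alpha=0.05, B=500):
    # Reject when the statistic exceeds a Monte Carlo estimate of its
    # (1 - alpha)-quantile under the null, as in (9).
    n = len(X)
    null_stats = []
    for _ in range(B):
        p = rng.choice(support, size=n, p=weights)   # random effects drawn from pi_0
        null_stats.append(plugin_stat(rng.binomial(t, p), t, support, weights))
    return int(plugin_stat(X, t, support, weights) >= np.quantile(null_stats, 1 - alpha))

# Point-mass null pi_0 = delta_{1/2}; data generated far from it should be rejected.
X_alt = rng.binomial(20, 0.95, size=200)
print(plugin_test(X_alt, 20, np.array([0.5]), np.array([1.0])))  # prints 1
```

The statistic itself vanishes exactly when every observation sits at the null's point mass, e.g. `plugin_stat(np.full(50, 10), 20, np.array([0.5]), np.array([1.0]))` is 0.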
In the worst case, when $\pi_0$ does not concentrate, the plug-in test requires the separation between hypotheses to satisfy $\epsilon(n, t) \gtrsim n^{-1/2} + t^{-1/2}$. For $n \geq t$, the test achieves a parametric rate. However, when $t$ is smaller, the term $t^{-1/2}$ dominates the separation. Intuitively, this occurs because $\hat\pi$ approximates the mixing distribution by coarsening it into a mixture of $t+1$ point masses.

We can gain some insight from approximation theory. Let $n_j$ denote the $j$th observed fingerprint:

$$n_j = \sum_{i=1}^n \mathbb{I}(X_i = j) \quad \text{for } j \in \{0, \dots, t\}. \quad (12)$$

The empirical distribution can be written as $\hat\pi = \sum_{j=0}^t \delta_{j/t} \cdot \frac{n_j}{n}$. Taking the limit as $n \to \infty$, so that the only remaining uncertainty is due to the number of trials, we obtain $\tilde\pi \stackrel{d}{\to} \pi$ and $\hat\pi \stackrel{d}{\to} \sum_{j=0}^t \delta_{j/t} \cdot b_{j,t}(\pi)$. Thus, by (11), the error of the plug-in estimate is dominated by $W_1(\hat\pi, \pi)$. Exploiting the duality of $W_1$ (5), we obtain the following representation of the error:

$$W_1(\hat\pi, \pi) = \sup_{f \in \mathrm{Lip}_1[0,1]} \mathbb{E}_{p\sim\pi}[f(p) - B_t(f, p)]$$

where $B_t(f, p) = \sum_{j=0}^t f(j/t) \cdot B_{j,t}(p)$ is the $t$-order Bernstein polynomial approximation of $f$. Thus, the plug-in approach is constrained by the error of approximating Lipschitz functions by Bernstein polynomials, which is of order $t^{-1/2}$ [Bustamante, 2017]. This error is known to be suboptimal: over all polynomials of degree $t$, the best uniform approximation of a Lipschitz function has an error of at most $\pi/(2t)$ [Plaskota, 2021]. Therefore, the plug-in test can be improved by modifying the polynomial approximation of the witness function in (5). In the following section, we show that directly comparing the observed fingerprints to their expected values under the null hypothesis can be related to this improved approximation.

3.2 The debiased Pearson's chi-squared test

Since $P_\pi$ is a $t$-dimensional multinomial, a natural approach is to use a chi-squared test to compare the observed and expected fingerprints. Indyk [2003], Weed and Bach [2019] show that the $W_1$ distance is closely tied to the $\ell_1$ distance, suggesting that a chi-squared test that is powerful under $\ell_1$ separation can detect deviations under $W_1$. Formally, we want to reduce problem (4) to:

$$\text{Given } (n_1, \dots, n_t) \sim \text{Multinomial}(n, b_t(\pi)), \quad (13)$$
$$\text{Test } H_0: b_t(\pi) = b_t(\pi_0) \quad \text{vs.} \quad H_1: \|b_t(\pi) - b_t(\pi_0)\|_1 \geq \epsilon.$$

To relate the Wasserstein distance between the mixing distributions to the $\ell_1$ distance between the fingerprints, we use the duality of the Wasserstein distance (5). Let $\mathcal{P}_k$ be the space of polynomials supported on $[0,1]$ of degree at most $k$. Then, approximating the witness function by the best $k$-order polynomial gives the bound

$$W_1(\pi, \pi_0) \leq 2 \cdot \inf_{p_k \in \mathcal{P}_k} \|f - p_k\|_\infty + |\mathbb{E}_{q\sim\pi}[p_k(q)] - \mathbb{E}_{q\sim\pi_0}[p_k(q)]| \quad (14)$$

where $\|f - p_k\|_\infty = \sup_{x\in[0,1]} |f(x) - p_k(x)|$. The second term compares the first $k$ moments of $\pi$ and $\pi_0$. Since $P_\pi$ contains only information about the first $t$ moments, we consider only polynomials of degree at most $t$. To compare fingerprints instead of moments, we express the polynomial $p_k$ in the $t$-order Bernstein basis: $p_k(x) = \sum_{j=0}^t c_{j,k} \cdot B_{j,t}(x)$ for $k \leq t$. Recalling that $\mathbb{E}_{q\sim\pi}[B_{j,t}(q)] = b_{j,t}(\pi)$ and propagating the expectation in (14), the second term becomes bounded by the fingerprint difference under $\ell_1$:

$$W_1(\pi, \pi_0) \leq 2 \cdot \inf_{p_k \in \mathcal{P}_k} \|f - p_k\|_\infty + \|c_k\|_\infty \cdot \|b_t(\pi) - b_t(\pi_0)\|_1.$$
(15)

Thus, optimally reducing testing under $W_1$ (4) to fingerprint testing under $\ell_1$ (13) requires balancing the approximation error and the fingerprint separation under $\ell_1$.

Testing multinomials under the $\ell_1$ distance has been previously studied by Batu et al. [2000], Paninski [2008], Valiant and Valiant [2014], Chan et al. [2014], Balakrishnan and Wasserman [2019], and Chhor and Carpentier [2022], among others. We revisit the minimax test for (13) using the machinery of the Kravchuk polynomials, which are central to both the upper and lower bounds in this work. These are the discrete orthogonal polynomials with respect to the binomial distribution. Let $m \in \{0, \dots, t\}$ and $0 < p < 1$; the $m$-th Kravchuk polynomial [Kravchuk, 1929, Szegő, 1975, Dominici, 2008] is given by

$$K_m(x, p, t) = \sum_{v=0}^m (-1)^{m-v} \binom{t-x}{m-v} \binom{x}{v}\, p^{m-v} (1-p)^v. \quad (16)$$

We also define the $m$-th normalized Kravchuk polynomial $\widetilde K_m(x, p, t) = K_m(x, p, t) / \binom{t}{m}$. They are orthogonal under the binomial distribution [Szegő, 1975]:

$$\mathbb{E}_{X\sim\mathrm{Bin}(t,p)}\big[\widetilde K_n(X, p, t) \cdot \widetilde K_m(X, p, t)\big] = \frac{\mu_p^n}{\binom{t}{n}} \cdot \mathbb{I}(n = m) \quad \text{where } \mu_p = p(1-p), \quad (17)$$

and facilitate the estimation of centered moments:

$$\mathbb{E}_{X\sim\mathrm{Bin}(t,q)}\big[\widetilde K_m(X, p, t)\big] = (-1)^m (p - q)^m \quad \text{for } m \geq 1 .$$

To solve the fingerprint testing problem (13), we construct an unbiased estimator of the $\ell_2$ distance between the observed and expected fingerprints using the second Kravchuk polynomial:

$$\hat\ell_2(X) = \frac{1}{t+1} \sum_{j=0}^t \frac{\widetilde K_2(n_j, b_{j,t}(\pi_0), n)}{\max\{1/(t+1),\, \mu_{b_{j,t}(\pi_0)}\}}.$$

We call the corresponding test the debiased Pearson's chi-squared test,

$$\psi^\alpha_{\hat\ell_2} = \mathbb{I}\big(\hat\ell_2(X) > q_\alpha(P_{\pi_0}, \hat\ell_2)\big), \quad (18)$$

since expanding the definition of $\widetilde K_2$ shows that it is a centered chi-squared statistic:

$$\widetilde K_2(n_j, b_{j,t}(\pi_0), n) = \Big(\frac{n_j}{n} - b_{j,t}(\pi_0)\Big)^2 - \frac{\mu_{n_j/n}}{n-1}.$$

Unlike the usual chi-squared statistic, the centering is random and debiases the estimator [Valiant and Valiant, 2014]. Balakrishnan and Wasserman [2019] proved that this test is minimax optimal for testing under the $\ell_1$ distance.

Lemma 1 (Theorem 3.2 of Balakrishnan and Wasserman [2019]). For the fingerprint testing problem (13), the debiased Pearson's $\chi^2$ test (18) controls the type I error by $\alpha$. Furthermore, there exists a constant $C > 0$ such that the type II error of the test is bounded by $\beta$ whenever

$$\epsilon(n, t) \geq C \cdot \frac{t^{1/4}}{n^{1/2}}.$$

Although the above result was originally derived for independent fingerprints, it holds for the case of dependent fingerprints due to the Poissonization trick; see appendix C of Canonne [2022]. By leveraging lemma 1 and approximating the witness function in (15), the following theorem shows that the debiased Pearson's chi-squared test improves over the plug-in test when $t \lesssim n$.

Theorem 2. For testing under $W_1$ (4), the debiased Pearson's $\chi^2$ test (18) controls the type I error by $\alpha$. Furthermore, for any constant $\delta > 0$, there exists a positive constant $C$ depending on $\delta$ such that the type II error is bounded by $\beta$ whenever

$$\epsilon(n, t) \geq C \cdot \begin{cases} \dfrac{1}{t} & \text{for } t \lesssim \log n \\[4pt] \dfrac{1}{\sqrt{t \log n}} & \text{for } \log n \lesssim t \lesssim n^{1/4-\delta} (\log n)^{3/8} \end{cases}$$

The proof can be found in appendix G.1.1. We remark that the method of moments [Tian et al., 2017] can be used to construct a method that achieves the first regime, while maximum likelihood estimation can achieve both regimes [Vinayak et al., 2019]. The next section argues that combining the plug-in and debiased Pearson's $\chi^2$ tests via a Bonferroni correction,

$$\psi^\alpha_{GM}(X) = \max\big\{\psi^{\alpha/2}_{\hat\ell_2}(X),\, \psi^{\alpha/2}_{\widehat W_1}(X)\big\}, \quad (19)$$

is optimal, up to constants and $\log n$ factors, outside the region $n^{1/4-\delta} \lesssim t \lesssim n$.
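The Kravchuk machinery in (16)-(18) is easy to check numerically. The sketch below (helper names are mine) implements $K_m$ directly from (16), verifies the orthogonality relation (17) and the centered-moment identity, and confirms that $\widetilde K_2$ expands into the centered chi-squared form used in the debiased test.

```python
import numpy as np
from math import comb
from scipy.stats import binom

def kravchuk(m, x, p, t):
    # K_m(x, p, t), eq. (16)
    return sum((-1) ** (m - v) * comb(t - x, m - v) * comb(x, v)
               * p ** (m - v) * (1 - p) ** v for v in range(m + 1))

def kravchuk_norm(m, x, p, t):
    # Normalized polynomial: K~_m = K_m / C(t, m)
    return kravchuk(m, x, p, t) / comb(t, m)

t, p, q = 10, 0.3, 0.6
xs = np.arange(t + 1)
wp, wq = binom.pmf(xs, t, p), binom.pmf(xs, t, q)   # exact binomial weights
K = {m: np.array([kravchuk_norm(m, x, p, t) for x in xs]) for m in (1, 2)}
mu = p * (1 - p)

print(wp @ (K[2] * K[2]), mu ** 2 / comb(t, 2))     # orthogonality (17): equal
print(wp @ (K[1] * K[2]))                            # cross term: ~0
print(wq @ K[2], (p - q) ** 2)                       # centered-moment identity: equal

# K~_2(x, p, t) equals the centered chi-squared form (here with t playing the role of n).
x = 7
lhs = kravchuk_norm(2, x, p, t)
f = x / t
rhs = (f - p) ** 2 - f * (1 - f) / (t - 1)
print(abs(lhs - rhs) < 1e-12)                        # True
```

Replacing $t$ by the sample size $n$ and $x$ by the fingerprint $n_j$ gives exactly the summands of $\hat\ell_2$.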
3.3 Lower bounds on the critical separation

The minimax lower bounds rely on Le Cam's two-point method. We find $\pi_0$ and $\pi_1$ that maximize $W_1(\pi_1, \pi_0)$ while ensuring they induce similar marginal distributions, i.e., their total variation distance $V(P_{\pi_0}, P_{\pi_1})$ is small. Prior work [Tian et al., 2017, Vinayak et al., 2019, Kong and Valiant, 2017] has established the following results by explicitly constructing mixing distributions or cleverly bounding the total variation distance. Here, we present an alternative proof of those results using approximation theory and Kravchuk polynomials, which avoids explicit constructions and simplifies controlling the total variation between marginal distributions.

Le Cam's method states that an upper bound on the total variation between the marginal measures, denoted by $V$, implies a lower bound on the critical separation.

Lemma 2 (Le Cam's method, theorem 2.2 of Tsybakov [2009]).

$$\epsilon^*(n, t) \geq \sup_{\pi_0, \pi_1 \in \mathcal{D}} W_1(\pi_0, \pi_1) \quad \text{s.t.} \quad V(P_{\pi_0}, P_{\pi_1}) < \frac{C_\alpha}{2n} \quad (20)$$

where $C_\alpha = 1 - (\alpha + \beta)$.

The key to simplifying this optimization is relating the total variation distance between the marginal measures to a suitable distance between the mixing distributions. Since the binomial density preserves only the first $t$ moments of the mixing distributions, it is natural to compare them using moment differences. The following lemma establishes this connection.

Lemma 3. For any $0 < p < 1$ and mixing distributions $\pi_0$ and $\pi_1$ in $\mathcal{D}$, it holds that

$$V(P_{\pi_0}, P_{\pi_1}) \leq \frac{1}{2}\sqrt{M_p(\pi_0, \pi_1)} \quad \text{where} \quad M_p(\pi_0, \pi_1) = \sum_{m=1}^t \binom{t}{m} \frac{\Delta_m^2(\pi_1, \pi_0)}{\mu_p^m}$$

and $\Delta_m(\pi_1, \pi_0) = \mathbb{E}_{u\sim\pi_1}[(u-p)^m] - \mathbb{E}_{u\sim\pi_0}[(u-p)^m]$ is the moment difference centered at $p$.

The proof, found in appendix C.1, relies on the orthogonality of the Kravchuk polynomials. It follows a structure similar to analogous results for mixtures of Gaussian or Poisson distributions, which use Hermite and Charlier polynomials [Wu and Yang, 2020a]. Using this bound, the optimization problem (20) reduces to:

$$\epsilon^*(n, t) \geq \sup_{\pi_0, \pi_1 \in \mathcal{D}} W_1(\pi_0, \pi_1) \quad \text{s.t.} \quad M_p(\pi_0, \pi_1) < (C_\alpha/n)^2. \quad (21)$$

To connect the problem to approximation theory, let $\mathcal{D}_\delta$ be the set of distributions supported on $[1/2 - \delta, 1/2 + \delta]$, and choose $(\delta, L)$ so that any pair of distributions in $\mathcal{D}_\delta$ that share $L$ moments satisfies $M_p(\pi_0, \pi_1) < (C_\alpha/n)^2$; then (21) is lower-bounded by:

$$\epsilon^*(n, t) \geq \sup_{\pi_0, \pi_1 \in \mathcal{D}_\delta} W_1(\pi_0, \pi_1) \quad \text{s.t.} \quad m_l(\pi_0) = m_l(\pi_1) \text{ for } 1 \leq l \leq L. \quad (22)$$

Recall from (5) that the Wasserstein distance equals the maximum mean difference over all 1-Lipschitz functions. A lower bound for (22) arises by selecting a specific 1-Lipschitz function, such as the absolute value:

$$\epsilon^*(n, t) \geq M(\delta, L) \quad (23)$$

where $M(\delta, L) = \sup_{\pi_0, \pi_1 \in \mathcal{D}_\delta} \mathbb{E}_{p\sim\pi_1}|p| - \mathbb{E}_{p\sim\pi_0}|p|$ s.t. $m_l(\pi_0) = m_l(\pi_1)$ for $1 \leq l \leq L$. The dual of this moment-matching problem is the best polynomial approximation of the absolute value function.

Lemma 4 (Appendix E of Wu and Yang [2016]).

$$M(\delta, L) = 2 \cdot A(\delta, L) \quad \text{where} \quad A(\delta, L) = \inf_{f \in \mathcal{P}_L} \sup_{|x| \leq \delta} \big||x| - f(x)\big|$$

and $\mathcal{P}_L$ is the set of all $L$-order polynomials supported on $[-\delta, \delta]$.

Bernstein [1912] studied the best polynomial approximation of the absolute value function and proved that

$$A(\delta, L) = (\beta_1 + C_L) \cdot \frac{\delta}{L} \quad (24)$$

where $C_L \to 0$ as $L \to \infty$ and $\beta_1 \approx 0.28$. Consequently, by (23), lemma 4, and (24), we obtain:

$$\epsilon^*(n, t) \geq 2(\beta_1 + C_L) \cdot \frac{\delta}{L}.$$

By adjusting the support of the mixing distributions and the number of matched moments, one can derive different lower bounds on the critical separation.
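Lemma 3 can be checked numerically for simple discrete mixing distributions. The sketch below (the two-point distributions and helper names are illustrative choices of mine) compares the exact total variation between the binomial marginals to the moment-difference bound.

```python
import numpy as np
from math import comb
from scipy.stats import binom

def marginal_pmf(t, support, weights):
    # P_pi(X = x) = E_{p ~ pi}[ Bin(t, p) pmf at x ], x = 0..t
    xs = np.arange(t + 1)
    return sum(w * binom.pmf(xs, t, p) for p, w in zip(support, weights))

def M_p(t, p, pi0, pi1):
    # Moment-difference functional of lemma 3, centered at p.
    mu = p * (1 - p)
    total = 0.0
    for m in range(1, t + 1):
        d1 = sum(w * (u - p) ** m for u, w in zip(*pi1))
        d0 = sum(w * (u - p) ** m for u, w in zip(*pi0))
        total += comb(t, m) * (d1 - d0) ** 2 / mu ** m
    return total

t, p = 8, 0.5
pi0 = ([0.3, 0.7], [0.5, 0.5])   # two symmetric two-point mixing distributions:
pi1 = ([0.4, 0.6], [0.5, 0.5])   # odd centered moments match, even ones differ
tv = 0.5 * np.abs(marginal_pmf(t, *pi0) - marginal_pmf(t, *pi1)).sum()
bound = 0.5 * np.sqrt(M_p(t, p, pi0, pi1))
print(tv, bound)  # tv <= bound, as lemma 3 asserts
```

Matching more centered moments shrinks every term of $M_p$, which is exactly the mechanism exploited in (21)-(22).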
This technique, known as the moment-matching method [Ingster, 2001, Wu and Yang, 2020b], yields the following result.

Theorem 3. The critical separation (6) is lower-bounded by

$$\epsilon^*(n, t) \gtrsim \begin{cases} \dfrac{1}{t} & \text{for } t \lesssim \log n \\[4pt] \dfrac{1}{\sqrt{t \log n}} & \text{for } \log n \lesssim t \lesssim \dfrac{n}{\log n} \\[4pt] \dfrac{1}{\sqrt{n}} & \text{for } t \gtrsim \dfrac{n}{\log n} \end{cases} \quad (25)$$

The proof is provided in appendix G.2. The theorem states that whenever the number of trials is much smaller than the sample size, meaning $t \lesssim \log n$, the statistical uncertainty with respect to $n$ is negligible. This follows from lemma 3, which shows that two mixing distributions that share $t$ moments produce identical marginal measures. The farthest two such distributions can be under $W_1$ is of order $1/t$. For applications, this implies that if $t$ is constant, increasing the number of observations does not help us distinguish the hypotheses, which motivates the local analysis in section 4.

When the number of trials is larger than the sample size, i.e. $t \gtrsim n/\log n$, we are in the parametric regime. This can be foreseen by an asymptotic argument. Scale the data, $\widetilde X_i = X_i / t$, and let $\widetilde P_\pi$ denote its distribution. For a function $f$, let $B_t(f)$ denote its Bernstein polynomial approximation. If $f$ is a continuous bounded function, $B_t(f)$ converges to $f$ uniformly on $[0,1]$. Consequently, $\widetilde P_\pi$ converges weakly to $\pi$, since

$$\lim_{t\to\infty} \mathbb{E}_{\widetilde P_\pi}[f(\widetilde X)] = \lim_{t\to\infty} \mathbb{E}_\pi[B_t(f, p)] = \mathbb{E}_\pi[f(p)] .$$

In the limit, the testing problem (4) reduces to: given $\widetilde X_i \stackrel{iid}{\sim} \pi$ for $i \in \{1, \dots, n\}$, test $H_0: \pi = \pi_0$ vs. $H_1: W_1(\pi, \pi_0) \geq \epsilon$, for which the separation rate is known to be $\epsilon \asymp n^{-1/2}$ [Ba et al., 2011].

In the intermediate regime, where $\log n \lesssim t \lesssim n/\log n$, the lower bound follows from constructing mixing distributions that match on the order of $\log n$ moments, which is the optimal number of moments that one needs to estimate to approximate the underlying mixing distribution in this regime [Vinayak et al., 2019].

Up to constants and log factors, the lower bound (25) matches the separation rates achieved by the test (19), which indicates its optimality up to constants and $\log n$ factors. Moreover, the critical separation matches the known estimation rates [Tian et al., 2017, Vinayak et al., 2019]. Thus, testing the goodness of fit of arbitrary distributions is as hard as estimating the underlying mixing distribution.

3.4 Simulations

To compare the plug-in test (9) and the debiased Pearson's $\chi^2$ test (18), we evaluate them against the distributions used to produce the lower bounds on the critical separation.

The plug-in test (9) achieves parametric rates for large $t$. To obtain that behaviour, the lower bound is constructed using the following family of distributions, where the probabilities of the null mixing distribution are perturbed:

$$\pi_0 = \frac{1}{2}(\delta_0 + \delta_1) \quad \text{and} \quad \pi_1 = \Big(\frac{1}{2} - \epsilon\Big)\delta_0 + \Big(\frac{1}{2} + \epsilon\Big)\delta_1 \quad \text{for } 0 \leq \epsilon \leq \frac{1}{2}. \quad (26)$$

Under these mixing distributions, the marginal measures are supported on $\{0, t\}$. Thus, increasing the number of trials does not yield additional information, limiting the power a test can achieve as $t$ grows. Figure 2 confirms that all tests have the same power curve regardless of $t$.

Figure 2: Power curves (power vs. $W_1(\pi, \pi_0)$ between the null and alternative mixing distributions) for valid tests under alternatives generated by the probability-perturbation construction (26), for $n = 10$ and $t \in \{2, 10, 100\}$. Tests shown: global minimax, plug-in, debiased Pearson's $\chi^2$, modified LRT, and modified Pearson's $\chi^2$.
All tests have the same power curves regardless of the number of trials $t$.

Note that additional tests appear in our simulations; their definitions are provided in appendix K.1. In particular, the modified Pearson's $\chi^2$ and likelihood-ratio tests are inspired by the debiased Pearson's $\chi^2$ test (18) and truncate the denominator whenever it vanishes. Without truncation, these tests produce numerical issues in our simulations because the mixing distributions put a non-negligible amount of mass at the endpoints of $[0,1]$.

We use moment-matching distributions to match the separation rates of the debiased Pearson's $\chi^2$ test in theorem 2. A simple construction of such distributions is provided by Kong and Valiant [2017]. Figure 1 shows that as we match more moments, the $W_1$ distance between them decays like $1/k$, where $k$ is the number of matched moments.

Proposition 1 (Proposition 2 of Kong and Valiant [2017]). Given $k \in \mathbb{N}$, there exist mixing distributions $\pi_0$ and $\pi_1$ supported on the $[0,1]$ interval that match their first $k$ moments and satisfy $W_1(\pi_1, \pi_0) \geq c/k$, where $c$ is a positive constant.

Figure 3 shows the power curves of the tests under the moment-matching mixing distributions. The minimax theory predicts that when the number of trials $t$ is sufficiently smaller than the number of observations $n$, the debiased Pearson's $\chi^2$ test should perform better in the worst case. Conversely, when $t$ is much larger, the plug-in test should be able to obtain higher power. Both phenomena can be observed in the figure.

Figure 3: Power curves (power vs. $W_1(\pi, \pi_0)$ between the null and alternative mixing distributions) for valid tests under alternatives generated by proposition 1, for $n = 10$ and $t \in \{2, 10, 100\}$.

Finally, figure 4 illustrates the empirical critical separation, which represents the minimum distance between the null and alternative mixing distributions required for the tests to have power greater than $1 - \alpha$ and type I error below $\alpha$. Two regimes arise from the minimax critical separation (25). When $t$ is small relative to $n$, the empirical critical separation is controlled by the moment-matching construction. Initially, increasing $t$ reduces the minimum detectable separation between the null and alternative mixing distributions. However, for sufficiently large $t$, the probability-perturbation construction dominates, causing the empirical critical separation to plateau.

The figure also highlights a key issue of global minimax analysis. Although figure 3 shows that the modified Pearson's $\chi^2$ and likelihood-ratio tests perform much better than the other tests, the global minimax perspective, which takes into account all possible mixing distributions, suggests a different conclusion. From this viewpoint, their performance is comparable to that of the debiased Pearson's $\chi^2$ test.

Figure 4: Empirical critical separation (minimum $W_1(\pi, \pi_0)$ such that power $\geq 0.95$ and type I error $\leq 0.05$) as the number of trials $t$ is varied below and above the number of observations $n$.

4 Homogeneity testing with a reference effect

Homogeneity testing is a key case of goodness-of-fit testing, with applications in statistical meta-studies, where assessing agreement among studies is necessary before pooling their information. It corresponds to testing whether the random effects (1) come from a single point mass:

$$\text{Test } H_0: \pi = \delta_{p_0} \quad \text{vs.} \quad H_1: W_1(\pi, \delta_{p_0}) \geq \epsilon \quad (27)$$

where $p_0 \in [0,1]$ is known. The results of the previous section are loose when applied to homogeneity testing because the null distribution is not arbitrarily complex. To understand the dependence of the critical separation on the null hypothesis, we introduce the local minimax framework. The local minimax risk is given by the worst type II error among all valid tests for $\pi_0$:

$$R^*(\epsilon, \pi_0) = \inf_{\psi \in \Psi(\pi_0)} R(\psi, \pi_0, \epsilon) \quad \text{where } \pi_0 = \delta_{p_0},$$

and the local critical separation is the smallest detectable $\epsilon$-separation with respect to $\pi_0$:

$$\epsilon^*(n, t, \pi_0) = \inf\{\epsilon : R^*(\epsilon, \pi_0) \leq \beta\} \quad \text{where } \pi_0 = \delta_{p_0}. \quad (28)$$

In section 4.1, we reduce testing the homogeneity of random effects to testing the homogeneity of fixed effects. This equivalence reveals how the local critical separation depends on the location of the null hypothesis. In section 4.2, we discuss the construction of lower bounds for the local critical separation.

4.1 Equivalence between random and fixed effects

Homogeneity testing of random effects closely relates to homogeneity testing of fixed effects. Consider the unobserved random effects $p_i$ of each observation