neral assumption of independent (but not necessarily identically distributed) $X_i$. $\square$

Proof of Theorem 7. ($\Rightarrow$) Assume the Pufferfish condition is satisfied, and fix $P_\theta \in \Theta$ such that $D \sim P_\theta$. Without loss of generality, consider the test for $H_0 : s_0$ vs. $H_1 : s_1$. By the post-processing invariance of Pufferfish (Kifer and Machanavajjhala, 2014), $T \circ M$ satisfies $\epsilon$-Pufferfish$(\mathbb{S}, \mathbb{S}_{\mathrm{pairs}}, \mathcal{D})$, and so for any $(s_0, s_1) \in \mathbb{S}_{\mathrm{pairs}}$ satisfying $\min\{P_\theta(s_0), P_\theta(s_1)\} > 0$:

$T$ is level-$\alpha$ $\implies P_\theta((T \circ M)(D) = 1 \mid s_0) \le \alpha \implies \beta = P_\theta((T \circ M)(D) = 1 \mid s_1) \le e^\epsilon \alpha.$

($\Leftarrow$) Assume the Pufferfish condition is not satisfied. Then, without loss of generality in the order of $s_0$ and $s_1$, there exists a distribution $P_\theta \in \Theta$, a discriminative pair $(s_0, s_1) \in \mathbb{S}_{\mathrm{pairs}}$, and an event $R \subseteq \mathrm{range}(M)$ such that:

$P_\theta(M(D) \in R \mid s_0) > e^\epsilon P_\theta(M(D) \in R \mid s_1).$

Let $\alpha = P_\theta(M(D) \in R \mid s_0)$, and let $T(\omega) = \mathbb{1}_{\omega \in R}$. Then $\alpha > 0$ and $T$ is a level-$\alpha$ test, since

$P_\theta(T(M(D)) = 1 \mid s_0) = P_\theta(M(D) \in R \mid s_0) = \alpha,$

but this test has power

$\beta = P_\theta(T(M(D)) = 1 \mid s_1) = P_\theta(M(D) \in R \mid s_1) > e^\epsilon \alpha. \quad \square$

Proof of Corollary 8. ($\Rightarrow$) Suppose the Pufferfish condition is satisfied. Fix $D_0 \sim D_1$, and let $P_\theta$ be the probability measure on $\mathcal{D}$ that assigns probability $1/2$ to each of $D_0$ and $D_1$. Clearly we have $P_\theta \in \mu(\mathcal{D})$, and so for $D \sim P_\theta$, Pufferfish implies:

$P(M(D_0) \in E) = P_\theta(M(D) \in E \mid D = D_0) \le e^\epsilon P_\theta(M(D) \in E \mid D = D_1) = e^\epsilon P(M(D_1) \in E).$

This is precisely the DP condition. ($\Leftarrow$) Suppose the Pufferfish condition is not satisfied.
By Theorem 7, there exist $D_0 \sim D_1$ from $\mathcal{D}$, $P_\theta \in \mu(\mathcal{D})$, $\alpha \in (0,1)$, and a test $T(P_\theta, M, M(D))$ such that for $D \sim P_\theta$, $T$ is level-$\alpha$ for $H_0 : D = D_0$ vs. $H_1 : D = D_1$, but $T$ has power $\beta > e^\epsilon \alpha$. Define $T_\theta(\cdot, \cdot) = T(P_\theta, \cdot, \cdot)$. Draw $D \sim P_\theta$. By the above, $T_\theta$ is level-$\alpha$ for $H_0 : D = D_0$ vs. $H_1 : D = D_1$ while having power $\beta > e^\epsilon \alpha$. Per Theorem 2, $M$ is not $\epsilon$-DP$(\mathcal{D}, \sim)$. $\square$

Proof of Theorem 9. The proof is similar to that of Kifer and Machanavajjhala (2014, Theorem 6.1). ($\Leftarrow$) Fix $g_0 \sim_e g_1 \in \mathcal{G}_n$. Then $g_0, g_1$ differ on one edge, say, $\{i^*, j^*\} \notin g_0$ and $\{i^*, j^*\} \in g_1$. Let $P_\theta$ be the distribution over $\mathcal{G}_n$ defined by

$P_\theta(G = g) = \prod_{\{i,j\} \in I_V} \pi_{ij}^{\mathbb{1}_{\{i,j\} \in g}} (1 - \pi_{ij})^{1 - \mathbb{1}_{\{i,j\} \in g}},$

where $\pi_{i^*j^*} = \frac{1}{2}$ and $\pi_{ij} = \mathbb{1}_{\{i,j\} \in g_0}$ for $\{i,j\} \in I_V \setminus \{i^*, j^*\}$. Note that $P_\theta \in \Theta_{\mathrm{ind}}$, and the support of $P_\theta$ is precisely $\{g_0, g_1\}$. Specifically, $g_1$ is the only graph in the support of $P_\theta$ satisfying $\sigma_{i^*j^*}$, and $g_0$ is the only graph satisfying $\neg\sigma_{i^*j^*}$. So if $M$ is $\epsilon$-Pufferfish$(\mathbb{S}, \mathbb{S}_{\mathrm{pairs}}, \Theta_{\mathrm{ind}})$ we have:

$P_\theta(M(G) = \omega \mid \sigma_{i^*j^*}) \le e^\epsilon P_\theta(M(G) = \omega \mid \neg\sigma_{i^*j^*}),$

or equivalently:

$P(M(g_1) = \omega) \le e^\epsilon P(M(g_0) = \omega).$

A similar argument shows $P(M(g_0) = \omega) \le e^\epsilon P(M(g_1) = \omega)$, and so we have $\epsilon$-edge DP, since $g_0$ and $g_1$ are arbitrary neighboring databases in $\mathcal{G}_n$. ($\Rightarrow$) Let $\{i^*, j^*\} \in I_V$, $P_\theta \in \Theta_{\mathrm{ind}}$.
Assuming $M$ is $\epsilon$-edge DP, we have:

$P_\theta(M(G) = \omega \mid \sigma_{i^*j^*}) = \sum_{g \in \mathcal{G}_n} P(M(g) = \omega) \, P_\theta(G = g \mid \sigma_{i^*j^*})$

$= \sum_{g \in \mathcal{G}_n} P(M(g \cup \{i^*, j^*\}) = \omega) \, P_\theta(G = g \mid \sigma_{i^*j^*})$ \hfill ($g = g \cup \{i^*, j^*\}$ under $\sigma_{i^*j^*}$)

$\le e^\epsilon \sum_{g \in \mathcal{G}_n} P(M(g \setminus \{i^*, j^*\}) = \omega) \, P_\theta(G = g \mid \sigma_{i^*j^*})$ \hfill (by edge DP)

$\stackrel{(a)}{=} e^\epsilon \sum_{g \in \mathcal{G}_n} P(M(g \setminus \{i^*, j^*\}) = \omega) \, P_\theta(G = g \mid \neg\sigma_{i^*j^*})$

$= e^\epsilon \sum_{g \in \mathcal{G}_n} P(M(g) = \omega) \, P_\theta(G = g \mid \neg\sigma_{i^*j^*})$ \hfill ($g = g \setminus \{i^*, j^*\}$ under $\neg\sigma_{i^*j^*}$)

$= e^\epsilon P_\theta(M(G) = \omega \mid \neg\sigma_{i^*j^*}).$

Equality $(a)$ comes from the fact that for any $P_\theta \in \Theta_{\mathrm{ind}}$, the set $\{g \in \mathcal{G}_n : P_\theta(G = g \mid \sigma_{i^*j^*}) > 0\}$ has a one-to-one correspondence with the set $\{g' \in \mathcal{G}_n : P_\theta(G = g' \mid \neg\sigma_{i^*j^*}) > 0\}$ given by $g \mapsto g \setminus \{i^*, j^*\}$, and

$P_\theta(G = g \mid \sigma_{i^*j^*}) = \prod_{\{i,j\} \in I_V \setminus \{i^*, j^*\}} f_{ij}(\mathbb{1}_{\{i,j\} \in g}) = P_\theta(G = g' \mid \neg\sigma_{i^*j^*}),$

where each $f_{ij} : \{0,1\} \to \mathbb{R}_{\ge 0}$ is some function whose particular form is not of concern. By a similar argument, we have

$P_\theta(M(G) = \omega \mid \neg\sigma_{i^*j^*}) \le e^\epsilon P_\theta(M(G) = \omega \mid \sigma_{i^*j^*}).$

Since $i^*, j^*$ are arbitrary, we meet the $\epsilon$-Pufferfish constraints for $(\mathbb{S}, \mathbb{S}_{\mathrm{pairs}}, \Theta_{\mathrm{ind}})$. $\square$

Proof of Lemma 10.
To satisfy $(\epsilon + \alpha)$-Pufferfish$(\Theta, \mathbb{S}, \mathbb{S}_{\mathrm{pairs}})$, we need to satisfy

$P_\theta(M(G) = \omega \mid \{i,j\} \in E) \le e^{\epsilon+\alpha} P_\theta(M(G) = \omega \mid \{i,j\} \notin E)$

$P_\theta(M(G) = \omega \mid \{i,j\} \notin E) \le e^{\epsilon+\alpha} P_\theta(M(G) = \omega \mid \{i,j\} \in E)$

for all $P_\theta \in \Theta$, $\{i,j\} \in I_V$, and $\omega \in \mathrm{range}(M)$ such that $P_\theta(\{i,j\} \in E) \in (0,1)$. Fix $P_\theta$, $i$, $j$, and $\omega$. We begin by rewriting

$P_\theta(M(G) = \omega \mid \{i,j\} \in E) = \sum_{g \in \mathcal{G}_n} P(M(g) = \omega) \, P_\theta(G = g \mid \{i,j\} \in E)$

$= \frac{1}{2} \sum_{g \in \mathcal{G}_n} P(M(g \cup \{i,j\}) = \omega) \, P_\theta(G = g \cup \{i,j\} \mid \{i,j\} \in E),$

where the last line follows from the fact that replacing $g$ with $g \cup \{i,j\}$ results in double-counting, as $P_\theta(G = g \mid \{i,j\} \in E) = 0$ for all $g$ not containing the edge
https://arxiv.org/abs/2504.12520v1
$\{i,j\}$. We similarly rewrite:

$P_\theta(M(G) = \omega \mid \{i,j\} \notin E) = \frac{1}{2} \sum_{g \in \mathcal{G}_n} P(M(g \setminus \{i,j\}) = \omega) \, P_\theta(G = g \setminus \{i,j\} \mid \{i,j\} \notin E).$

For any fixed $g$, since $M$ is $\epsilon$-edge DP, we know that

$P(M(g \cup \{i,j\}) = \omega) \le e^\epsilon P(M(g \setminus \{i,j\}) = \omega)$, and $P(M(g \setminus \{i,j\}) = \omega) \le e^\epsilon P(M(g \cup \{i,j\}) = \omega),$

since $g \cup \{i,j\}$ neighbors $g \setminus \{i,j\}$, while by assumption we also have:

$P_\theta(G = g \cup \{i,j\} \mid \{i,j\} \in E) \le e^\alpha P_\theta(G = g \setminus \{i,j\} \mid \{i,j\} \notin E)$, and

$P_\theta(G = g \setminus \{i,j\} \mid \{i,j\} \notin E) \le e^\alpha P_\theta(G = g \cup \{i,j\} \mid \{i,j\} \in E).$

Consequently we may conclude

$P_\theta(M(G) = \omega \mid \{i,j\} \in E) = \frac{1}{2} \sum_{g \in \mathcal{G}_n} P(M(g \cup \{i,j\}) = \omega) \, P_\theta(G = g \cup \{i,j\} \mid \{i,j\} \in E)$

$\le \frac{1}{2} \sum_{g \in \mathcal{G}_n} \Big( e^\epsilon P(M(g \setminus \{i,j\}) = \omega) \Big) \Big( e^\alpha P_\theta(G = g \setminus \{i,j\} \mid \{i,j\} \notin E) \Big)$

$= e^{\epsilon+\alpha} P_\theta(M(G) = \omega \mid \{i,j\} \notin E),$

and similarly:

$P_\theta(M(G) = \omega \mid \{i,j\} \notin E) \le e^{\epsilon+\alpha} P_\theta(M(G) = \omega \mid \{i,j\} \in E). \quad \square$

Proof of Corollary 11. The derivation of $\alpha$ follows from an application of Bayes' theorem to the definition of the change statistic in Eq. 3.
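As a non-authoritative numeric sketch (not part of the paper), Lemma 10 can be checked by brute force on the eight graphs over three nodes. Here `mech_dist` is edge-wise randomized response, which is an $\epsilon$-edge-DP mechanism, `P` is an arbitrary correlated graph distribution standing in for $P_\theta$, and `a` is the tightest $\alpha$ for which the lemma's assumption on conditional graph probabilities holds; all identifiers and parameter values are illustrative.

```python
import itertools
import math
import random

eps = math.log(2.0)                          # illustrative privacy level
p_keep = math.exp(eps) / (1 + math.exp(eps))
PAIRS = [(0, 1), (0, 2), (1, 2)]             # I_V for n = 3 nodes
graphs = [frozenset(c) for r in range(4)
          for c in itertools.combinations(PAIRS, r)]

def mech_dist(g):
    """Edge-wise randomized response: eps-edge DP, since neighboring
    graphs differ in exactly one edge indicator."""
    d = {}
    for bits in itertools.product([0, 1], repeat=3):
        d[bits] = math.prod(p_keep if b == (e in g) else 1 - p_keep
                            for e, b in zip(PAIRS, bits))
    return d

random.seed(1)
w = {g: random.uniform(0.5, 2.0) for g in graphs}  # arbitrary correlated P_theta
Z = sum(w.values())
P = {g: w[g] / Z for g in graphs}

# Tightest alpha for which Lemma 10's assumption on P_theta holds
a = 0.0
for e in PAIRS:
    p_in = sum(P[g] for g in graphs if e in g)
    for g in graphs:
        ratio = (P[g | {e}] / p_in) / (P[g - {e}] / (1 - p_in))
        a = max(a, abs(math.log(ratio)))

# Conclusion of Lemma 10: M satisfies (eps + a)-Pufferfish for edge secrets
for e in PAIRS:
    p_in = sum(P[g] for g in graphs if e in g)
    for omega in itertools.product([0, 1], repeat=3):
        c_in = sum(mech_dist(g)[omega] * P[g] for g in graphs if e in g) / p_in
        c_out = sum(mech_dist(g)[omega] * P[g]
                    for g in graphs if e not in g) / (1 - p_in)
        assert c_in <= math.exp(eps + a) * c_out + 1e-9
        assert c_out <= math.exp(eps + a) * c_in + 1e-9
print("(eps + alpha)-Pufferfish verified on all edge secrets")
```

The assertions trace the inequality chain of the proof: the mechanism contributes the factor $e^\epsilon$ and the prior contributes $e^\alpha$.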
In particular, finding $\alpha$ to satisfy Lemma 10 requires us to bound the quantity

$\left| \log \left( \frac{P_\theta(G = g \cup \{i,j\} \mid \{i,j\} \in E)}{P_\theta(G = g \setminus \{i,j\} \mid \{i,j\} \notin E)} \right) \right|.$

To simplify notation, we write the event $(G \cup \{i,j\} = g \cup \{i,j\}) = (G \setminus \{i,j\} = g \setminus \{i,j\})$ as $G^c_{ij} = g^c_{ij}$. By Bayes' theorem, we have that:

$P_\theta(G = g \cup \{i,j\} \mid \{i,j\} \in E) = P_\theta(G \cup \{i,j\} = g \cup \{i,j\} \mid \{i,j\} \in E) = P_\theta(G^c_{ij} = g^c_{ij} \mid \{i,j\} \in E) = \frac{P_\theta(\{i,j\} \in E \mid G^c_{ij} = g^c_{ij}) \, P_\theta(G^c_{ij} = g^c_{ij})}{P_\theta(\{i,j\} \in E)},$

and similarly

$P_\theta(G = g \setminus \{i,j\} \mid \{i,j\} \notin E) = \frac{P_\theta(\{i,j\} \notin E \mid G^c_{ij} = g^c_{ij}) \, P_\theta(G^c_{ij} = g^c_{ij})}{P_\theta(\{i,j\} \notin E)}.$
Dividing these two quantities, we obtain:

$\frac{P_\theta(G = g \cup \{i,j\} \mid \{i,j\} \in E)}{P_\theta(G = g \setminus \{i,j\} \mid \{i,j\} \notin E)} = \underbrace{\frac{P_\theta(\{i,j\} \in E \mid G^c_{ij} = g^c_{ij})}{P_\theta(\{i,j\} \notin E \mid G^c_{ij} = g^c_{ij})}}_{(a)} \Bigg/ \underbrace{\frac{P_\theta(\{i,j\} \in E)}{P_\theta(\{i,j\} \notin E)}}_{(b)}.$

By Eq. 3, $(a)$ is precisely $\exp(\beta_\theta^T \Delta_\theta(g, i, j))$.
Thus:

$\left| \log \left( \frac{P_\theta(\{i,j\} \in E \mid G^c_{ij} = g^c_{ij})}{P_\theta(\{i,j\} \notin E \mid G^c_{ij} = g^c_{ij})} \Bigg/ \frac{P_\theta(\{i,j\} \in E)}{P_\theta(\{i,j\} \notin E)} \right) \right| = \left| \beta_\theta^T \Delta_\theta(g, i, j) - \log \left( \frac{P_\theta(\{i,j\} \in E)}{P_\theta(\{i,j\} \notin E)} \right) \right|.$

Taking the supremum over all choices of $\theta, g, i, j$ yields $\alpha$. We note that $\log((b))$ is also bounded in absolute value by a similar quantity, yielding the upper bound on $\alpha$. Let $\zeta = \sup_{\theta, g, i, j} \left| \beta_\theta^T \Delta_\theta(g, i, j) \right|$. Then:

$\frac{P_\theta(\{i,j\} \in E)}{P_\theta(\{i,j\} \notin E)} = \frac{\sum_{g \in \mathcal{G}_n} P_\theta(\{i,j\} \in E \mid G^c_{ij} = g^c_{ij}) \, P_\theta(G^c_{ij} = g^c_{ij})}{\sum_{g \in \mathcal{G}_n} P_\theta(\{i,j\} \notin E \mid G^c_{ij} = g^c_{ij}) \, P_\theta(G^c_{ij} = g^c_{ij})}.$

It follows from Eq. 3 that for each $g \in \mathcal{G}_n$:

$P_\theta(\{i,j\} \in E \mid G^c_{ij} = g^c_{ij}) \le e^\zeta P_\theta(\{i,j\} \notin E \mid G^c_{ij} = g^c_{ij}),$

and similarly

$P_\theta(\{i,j\} \in E \mid G^c_{ij} = g^c_{ij}) \ge e^{-\zeta} P_\theta(\{i,j\} \notin E \mid G^c_{ij} = g^c_{ij}).$
Plugging these bounds into the previous equation yields:

$e^{-\zeta} \le \frac{P_\theta(\{i,j\} \in E)}{P_\theta(\{i,j\} \notin E)} \le e^\zeta \iff \left| \log \left( \frac{P_\theta(\{i,j\} \in E)}{P_\theta(\{i,j\} \notin E)} \right) \right| \le \zeta.$

Putting it all together, we have that:

$\alpha = \sup_{\theta, g, i, j} |\log((a)) - \log((b))| \le 2\zeta. \quad \square$

References

Blocki, Jeremiah, Avrim Blum, Anupam Datta, and Or Sheffet (2013). "Differentially private data analysis of social networks via restricted sensitivity". In: Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, pp. 87–96.
Chen, Rui, Benjamin CM Fung, Philip S Yu, and Bipin C Desai (2014). "Correlated network data publication via differential privacy". In: The VLDB Journal 23, pp. 653–676.
Chen, Xihui, Sjouke Mauw, and Yunior Ramírez-Cruz (2019). "Publishing community-preserving attributed social graphs with a differential privacy guarantee". In: arXiv preprint arXiv:1909.00280.
Cormode, Graham (2011). "Personal privacy vs population privacy: learning to attack anonymization". In:
Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1253–1261.
Cuff, Paul and Lanqing Yu (2016). "Differential privacy as a mutual information constraint". In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 43–54.
Cummings, Rachel, Damien Desfontaines, et al. (2023). "Challenges towards the Next Frontier in Privacy". In: arXiv preprint arXiv:2304.06929.
Cummings, Rachel, Gabriel Kaptchuk, and Elissa M Redmiles (2021). "'I need a better description': An Investigation Into User Expectations For Differential Privacy". In: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pp. 3037–3052.
Desfontaines, Damien and Balázs Pejó (2020). "SoK: differential privacies". In: Proceedings on Privacy Enhancing Technologies 2020.2, pp. 288–313.
Dong, Jinshuo, Aaron Roth, and Weijie J Su (2022). "Gaussian differential privacy". In: Journal of the Royal Statistical Society Series B: Statistical Methodology 84.1, pp. 3–37.
Duchi, John C, Michael I Jordan, and Martin J Wainwright (2013). "Local privacy and statistical minimax rates". In: 2013 IEEE 54th Annual Symposium on Foundations of Computer Science. IEEE, pp. 429–438.
Dwork, Cynthia (2006). "Differential privacy". In: Automata, Languages and Programming: 33rd International Colloquium, ICALP 2006, Venice, Italy, July 10-14, 2006, Proceedings, Part II 33. Springer, pp. 1–12.
Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith (2006). "Calibrating noise to sensitivity in private data analysis". In: Theory of Cryptography Conference. Ed. by Shai Halevi and Tal Rabin. Vol. 3876. Springer Berlin Heidelberg, pp. 265–284. ISBN: 9783540327318. DOI: 10.1007/11681878_14. URL: https://doi.org/10.1007/11681878_14.
Dwork, Cynthia and Moni Naor (2010). "On the difficulties of disclosure prevention in statistical databases or the case for differential privacy".
In: Journal of Privacy and Confidentiality 2.1.
Ghosh, Arpita and Robert Kleinberg (2016). "Inferential privacy guarantees for differentially private mechanisms". In: arXiv preprint arXiv:1603.01508.
Hay, Michael, Chao Li, Gerome Miklau, and David Jensen (2009). "Accurate estimation of the degree distribution of private networks". In: 2009 Ninth IEEE International Conference on Data Mining. IEEE, pp. 169–178.
Hehir, Jonathan, Aleksandra Slavkovic, and Xiaoyue Niu (2022). "Consistent Spectral Clustering of Network Block Models under Local Differential Privacy". In: Journal of Privacy and Confidentiality 12.2.
Hunter, David R and Mark S Handcock (2006). "Inference in curved exponential family models for networks". In: Journal of Computational and Graphical Statistics 15.3, pp. 565–583.
Hunter, David R, Pavel N Krivitsky, and Michael Schweinberger (2012). "Computational statistical methods for social network models". In: Journal of Computational and Graphical Statistics 21.4, pp. 856–882.
Imola, Jacob, Takao Murakami, and Kamalika Chaudhuri (2021). "Locally Differentially Private Analysis of Graph Statistics." In: USENIX Security Symposium, pp. 983–1000.
Ji, Tianxi, Changqing Luo, Yifan Guo, Jinlong Ji, Weixian Liao, and Pan Li (2019). "Differentially private community detection in attributed social networks". In: Asian Conference on Machine Learning. PMLR, pp. 16–31.
Jiang, Honglu, Jian Pei, Dongxiao Yu, Jiguo Yu, Bei Gong, and Xiuzhen Cheng (2021). "Applications of differential privacy in social network
analysis: A survey". In: IEEE Transactions on Knowledge and Data Engineering 35.1, pp. 108–127.
Jorgensen, Zach, Ting Yu, and Graham Cormode (2016). "Publishing attributed social graphs with formal privacy guarantees". In: Proceedings of the 2016 International Conference on Management of Data, pp. 107–122.
Kairouz, Peter, Sewoong Oh, and Pramod Viswanath (2015). "The composition theorem for differential privacy". In: International Conference on Machine Learning. PMLR, pp. 1376–1385.
Karwa, Vishesh, Pavel N Krivitsky, and Aleksandra B Slavković (2017). "Sharing social network data: differentially private estimation of exponential family random-graph models". In: Journal of the Royal Statistical Society: Series C (Applied Statistics) 66.3, pp. 481–500.
Karwa, Vishesh, Sofya Raskhodnikova, Adam Smith, and Grigory Yaroslavtsev (2011). "Private analysis of graph structure". In: Proceedings of the VLDB Endowment 4.11, pp. 1146–1157.
Karwa, Vishesh and Aleksandra Slavkovic (2016). "Inference using noisy degrees: Differentially private $\beta$-model and synthetic graphs". In: The Annals of Statistics 44.1, pp. 87–112. DOI: 10.1214/15-AOS1358.
Kasiviswanathan, Shiva Prasad, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith (2013). "Analyzing graphs with node differential privacy". In: Theory of Cryptography: 10th Theory of Cryptography Conference, TCC 2013, Tokyo, Japan, March 3-6, 2013. Proceedings. Springer, pp. 457–476.
Kifer, Daniel (2009). "Attacks on privacy and deFinetti's theorem". In: Proceedings of the 2009 ACM SIGMOD International Conference on Management of Data, pp. 127–138.
Kifer, Daniel, John M Abowd, et al. (2022). "Bayesian and Frequentist Semantics for Common Variations of Differential Privacy: Applications to the 2020 Census". In: arXiv preprint arXiv:2209.03310.
Kifer, Daniel and Ashwin Machanavajjhala (2014). "Pufferfish: A framework for mathematical privacy definitions".
In: ACM Transactions on Database Systems (TODS) 39.1, pp. 1–36.
Liu, Changchang, Supriyo Chakraborty, and Prateek Mittal (2016). "Dependence makes you vulnerable: Differential privacy under dependent tuples." In: NDSS. Vol. 16, pp. 21–24.
McSherry, Frank (2016a). Differential privacy and correlated data. URL: https://github.com/frankmcsherry/blog/bl
— (2016b). Lunchtime for differential privacy. URL: https://github.com/frankmcsherry/blog/blob/master/post
Mueller, Tamara T, Dmitrii Usynin, Johannes C Paetzold, Daniel Rueckert, and Georgios Kaissis (2022). "SoK: Differential privacy on graph-structured data". In: arXiv preprint arXiv:2203.09205.
Nanayakkara, Priyanka, Mary Anne Smart, Rachel Cummings, Gabriel Kaptchuk, and Elissa Redmiles (2023). "What Are the Chances? Explaining the Epsilon Parameter in Differential Privacy". In: arXiv preprint arXiv:2303.00738.
Qin, Zhan, Ting Yu, Yin Yang, Issa Khalil, Xiaokui Xiao, and Kui Ren (2017). "Generating synthetic decentralized social graphs with local differential privacy". In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 425–438.
Snijders, Tom AB, Philippa E Pattison, Garry L Robins, and Mark S Handcock (2006). "New specifications for exponential random graph models". In: Sociological Methodology 36.1, pp. 99–153.
Task, Christine and Chris Clifton (2012). "A guide to differential privacy theory in social network analysis". In: 2012 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining. IEEE, pp. 411–417.
— (2014). "What should we protect? Defining differential privacy for social network analysis".
In: State of the Art Applications of Social Network Analysis, pp. 139–161.
Tschantz, Michael Carl, Shayak Sen, and Anupam Datta (2020). "SoK: Differential privacy as a causal property". In: 2020 IEEE Symposium on Security and Privacy (SP). IEEE, pp. 354–371.
Wasserman, Larry and Shuheng Zhou (2010). "A statistical framework for differential privacy". In: Journal of the American Statistical Association 105.489, pp. 375–389.
Wasserman, Stanley and Philippa Pattison (1996). "Logit models and logistic regressions for social networks: I. An introduction to Markov graphs and p*". In: Psychometrika 61.3, pp. 401–425.
Xiao, Qian, Rui Chen, and Kian-Lee Tan (2014). "Differentially private network data release via structural inference". In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 911–920.
Yang, Bin, Issei Sato, and Hiroshi Nakagawa (2015). "Bayesian differential privacy on correlated data". In: Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pp. 747–762.
Shrinkage priors for circulant correlation structure models

Michiko Okudo¹ and Tomonari Sei¹

¹Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, JAPAN. Email: okudo@mist.i.u-tokyo.ac.jp; sei@mist.i.u-tokyo.ac.jp

Abstract

We consider a new statistical model called the circulant correlation structure model, which is a multivariate Gaussian model with unknown covariance matrix and a scale-invariance property. We construct shrinkage priors for circulant correlation structure models and show that Bayesian predictive densities based on those priors asymptotically dominate Bayesian predictive densities based on Jeffreys priors under the Kullback–Leibler (KL) risk function. While shrinkage of the eigenvalues of covariance matrices of Gaussian models has been successful, the proposed priors shrink a non-eigenvalue part of covariance matrices.

Keywords: exchangeable correlation structure, shrinkage priors, Bayes, multivariate analysis

1 Introduction

We study shrinkage prediction of covariance matrices of multivariate Gaussian models. Shrinkage estimation and prediction have been widely studied, originating from Stein's paradox and the James–Stein estimator. Shrinkage of $\mu$ of $N_p(\mu, I_p)$ toward the origin is known to be effective, and shrinkage priors for Bayes estimation and prediction have also been studied; see, for example, Komaki (2006) and George et al. (2012). On the other hand, in shrinkage estimation of $\Sigma$ of $N_p(0, \Sigma)$, eigenvalue shrinkage toward the origin has been successful, and there has been extensive research including Donoho et al. (2018). Yang and Berger (1994) investigated priors shrinking the differences between eigenvalues. In this paper, we propose a new model called the circulant correlation structure model, which is a submodel of $N_p(0, \Sigma)$, and show that shrinkage of a non-eigenvalue part of $\Sigma$ has a favorable effect in this model. Our model is constructed as follows.
arXiv:2504.12615v1 [math.ST] 17 Apr 2025

We consider $p$-dimensional Gaussian models $N_p(0, \Sigma)$ with covariance matrices expressed as

$\Sigma = D_\alpha R D_\alpha$ (1)

and

$R = Q D_\lambda Q^*,$ (2)

where $D_\alpha = \mathrm{diag}(\alpha_1, \ldots, \alpha_p)$, $D_\lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_p)$, and $Q$ is a constant matrix expressed by

$Q = \frac{1}{\sqrt{p}} \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega & \omega^2 & \cdots & \omega^{p-1} \\ 1 & \omega^2 & \omega^4 & \cdots & \omega^{2(p-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{p-1} & \omega^{2(p-1)} & \cdots & \omega^{(p-1)^2} \end{pmatrix}.$

Here $\omega = \exp(2\pi\sqrt{-1}/p)$ is a primitive $p$-th root of unity and $Q^*$ denotes the Hermitian transpose of $Q$. The matrix $Q = (q_{ij})$ appears in the discrete Fourier transform, and $Q$ is unitary. The parameters $\alpha_i$ and $\lambda_k$ are assumed to be positive. We call the model defined by (1) and (2) the circulant correlation structure model. The structure of $R$ is sometimes called a circular model in time series analysis (Anderson 1971, Section 6.5), where $\lambda$ determines the spectrum of $R$. Our model has an additional parameter vector $\alpha$, which plays the role of amplitude modulation (Jiang and Hui 2004) and makes the model scale invariant. The $(i,j)$ component of the matrix $R = (r_{ij})$ is

$r_{ij} = \sum_{k=1}^p \lambda_k q_{ik} \bar{q}_{jk} = \frac{1}{p} \sum_{k=1}^p \lambda_k \omega^{(i-j)(k-1)},$ (3)

where $\bar{q}_{ij}$ denotes the complex conjugate of $q_{ij}$. The $(i,j)$ component of the matrix $R$ depends only on $i - j$ modulo $p$. Therefore, $R$ is a circulant matrix. Conversely, $\lambda_k$ is determined from $R$ by

$\lambda_k = \frac{1}{p} \sum_{i=1}^p \sum_{j=1}^p r_{ij} \bar{\omega}^{(i-j)(k-1)}.$ (4)

Because each component of $R$ is a real number, $\lambda_1, \ldots, \lambda_p$ must satisfy

$\lambda_{a+1} = \lambda_{p-a+1}, \quad 1 \le a \le \lfloor (p-1)/2 \rfloor.$ (5)

To
remove a multiplicative redundancy between $\alpha$ and $\lambda$, we assume

$\prod_{i=1}^p \lambda_i = 1$ (6)

without loss of generality.

Lemma 1.1. Under the restriction (6), the parameters $\alpha$ and $\lambda$ are identifiable from $\Sigma$.

Proof. We see from the expression (3) that all the diagonal elements of $R$ have the same value. This means that $R$ is a constant multiple of a correlation matrix determined from $\Sigma$. The multiplicative constant is obtained from the identity $\det(R) = \det(D_\lambda) = 1$. Then, $\alpha$ and $\lambda$ are determined by (1) and (4), respectively.

The problem settings are as follows. We consider a statistical model

$\mathcal{P} = \{ N_p(0, \Sigma) \mid \Sigma = D_\alpha R(\theta) D_\alpha,\ R(\theta) = Q D_{\lambda(\theta)} Q^*,\ \theta \in \mathbb{R}^d,\ \alpha \in \mathbb{R}^p_+ \}$

with parameters $\theta = (\theta_1, \ldots, \theta_d)$ and $\alpha$, where $\lambda(\theta)$ is a parametric family satisfying (5) and (6). Specific examples of $\lambda(\theta)$ are provided in Section 2 and Section 3. Suppose that we have observations $x^n = \{x^{(1)}, \ldots, x^{(n)}\}$ from $p(x; \theta, \alpha) \in \mathcal{P}$. We address the problem of constructing a predictive density $\hat{p}(y \mid x^n)$ for a future sample $y$ that follows the same distribution $p(y; \theta, \alpha)$ as $x^{(i)}$, $i = 1, \ldots, n$. The performance of a predictive density $\hat{p}(y \mid x^n)$ is evaluated by the Kullback–Leibler (KL) divergence

$D\{p(y; \theta, \alpha); \hat{p}(y \mid x^n)\} = \int p(y; \theta, \alpha) \log \frac{p(y; \theta, \alpha)}{\hat{p}(y \mid x^n)} \, dy,$

and the risk function and the Bayes risk function are

$E[D\{p(y; \theta, \alpha); \hat{p}(y \mid x^n)\}] = \int p(x^n; \theta, \alpha) \, D\{p(y; \theta, \alpha); \hat{p}(y \mid x^n)\} \, dx^n$

and

$\int \pi(\theta, \alpha) \int p(x^n; \theta, \alpha) \, D\{p(y; \theta, \alpha); \hat{p}(y \mid x^n)\} \, dx^n \, d\theta \, d\alpha,$

respectively. The Bayesian predictive density based on a prior $\pi(\theta, \alpha)$ for a future sample $y$ is obtained by

$p_\pi(y \mid x^n) = \int p(y; \theta, \alpha) \, \pi(\theta, \alpha \mid x^n) \, d\theta \, d\alpha,$

where $\pi(\theta, \alpha \mid x^n)$ is the posterior density

$\pi(\theta, \alpha \mid x^n) = \frac{p(x^n; \theta, \alpha) \, \pi(\theta, \alpha)}{\int p(x^n; \theta, \alpha) \, \pi(\theta, \alpha) \, d\theta \, d\alpha}.$

It is shown in Aitchison (1975) that Bayesian predictive densities are optimal with respect to the Bayes risk, so we adopt them as the predictive density for each prior. We propose shrinkage priors for the circulant correlation structure model, which shrink the correlation part of covariance matrices toward the identity matrix, and show that they dominate the Jeffreys prior asymptotically.
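As an illustrative numeric sketch (parameter values are arbitrary, not from the paper), the model (1)-(6) can be constructed with the DFT matrix $Q$ (0-indexed here), and the identifiability facts used in Lemma 1.1 checked directly:

```python
import numpy as np

p = 5
rng = np.random.default_rng(0)
alpha = rng.uniform(0.5, 2.0, size=p)

lam = rng.uniform(0.5, 2.0, size=p)
lam[4], lam[3] = lam[1], lam[2]       # conjugate symmetry (5) so that R is real
lam /= lam.prod() ** (1.0 / p)        # normalization (6): prod(lam) = 1

# DFT matrix Q (0-indexed), which is unitary
Q = np.exp(2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p) / np.sqrt(p)
R = (Q @ np.diag(lam) @ Q.conj().T).real          # Eq. (2); imaginary part vanishes
Sigma = np.diag(alpha) @ R @ np.diag(alpha)       # Eq. (1)

assert np.allclose(Q @ Q.conj().T, np.eye(p))     # Q is unitary
assert np.allclose(R[1], np.roll(R[0], 1))        # R is circulant
assert np.allclose(np.diag(R), lam.mean())        # equal diagonal (Lemma 1.1)
assert np.isclose(np.linalg.det(R), 1.0)          # det(R) = det(D_lambda) = 1

# Recover lambda from R via Eq. (4) (0-indexed version)
D = np.subtract.outer(np.arange(p), np.arange(p))
lam_rec = np.array([(R * np.exp(-2j * np.pi * D * k / p)).sum() / p
                    for k in range(p)]).real
assert np.allclose(lam_rec, lam)
```

The last assertion mirrors the inversion step of Lemma 1.1: once $R$ is isolated from $\Sigma$, the spectrum $\lambda$ follows from (4) and then $\alpha$ from (1).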
We consider shrinkage of $\theta$, which controls the eigenvalues of $R$, not of $\Sigma$. The density of the Jeffreys prior of the model increases exponentially as $\theta$ increases. We propose a uniform prior in $\theta$ and $\log \alpha_i$ ($i = 1, \ldots, p$); compared to the Jeffreys prior, it has a shrinkage effect on $\theta$. The model is intentionally designed so that the cross components of the Fisher information matrix between $\theta$ and $\alpha$ are 0, which makes it easier to construct a shrinkage prior for $\theta$ alone. This property is analogous to the fact that in the full model $\{N_p(0, \Sigma) \mid \Sigma\}$, the cross components of the Fisher information metric between the eigenvalues and eigenvectors of $\Sigma$ are 0. The rest of the paper is organized as follows. In Section 2, we give the specific form of $\lambda(\theta)$ and propose a prior that dominates the Jeffreys prior asymptotically. In Section 3, optimality results for the proposed priors are given with respect to the asymptotic KL risk for the full model and for a submodel called the exchangeable correlation structure model. Numerical experiments illustrating the difference in asymptotic KL risks between the Jeffreys prior
and the proposed prior are also given.

2 Construction of shrinkage priors

In this section, we assume that $\lambda(\theta)$ is log-linear, that is, $\log \lambda(\theta) = (\log \lambda_k(\theta))_{k=1}^p$ is linear in $\theta = (\theta_1, \ldots, \theta_d)$. We also assume that the vectors $(\partial/\partial\theta_i) \log \lambda$ ($i = 1, \ldots, d$) are linearly independent. For example, the full model for $\lambda$ that satisfies (5) and (6) is written in log-linear form as

$\lambda(\theta) = (e^{-\theta_1 - \cdots - \theta_{p-1}}, e^{\theta_1}, \ldots, e^{\theta_{p-1}}),$

where $d = \lfloor p/2 \rfloor$ and $\theta_{p-a} = \theta_a$ for $1 \le a \le \lfloor (p-1)/2 \rfloor$. Another example is the exchangeable correlation structure model

$\lambda(\theta) = (e^{-(p-1)\theta}, e^\theta, \ldots, e^\theta)$

with $d = 1$, which is discussed in Section 3. We begin by evaluating the Fisher metric of the model. Let $\mathrm{mod}(a) \in \{1, \ldots, p\}$ denote the residue of $a$ modulo $p$. It is well known that the Fisher metric of $N(0, \Sigma(\omega))$ with respect to a parameter $\omega$ is

$g_{\omega_i \omega_j} = \frac{1}{2} \mathrm{tr}\left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \omega_i} \Sigma^{-1} \frac{\partial \Sigma}{\partial \omega_j} \right).$

Lemma 2.1. Suppose that $\lambda(\theta)$ is log-linear. The component of the Fisher information matrix $g$ for $\theta_i$ and $\theta_j$ ($i, j = 1, \ldots, d$) is expressed as the constant matrix

$g_{\theta_i \theta_j} = \frac{1}{2} \sum_{k=1}^p \frac{\partial}{\partial \theta_i} \log \lambda_k \, \frac{\partial}{\partial \theta_j} \log \lambda_k.$

The component of $g$ for $\alpha_i$ and $\alpha_j$ ($i, j = 1, \ldots, p$) is

$g_{\alpha_i \alpha_j} = (\delta_{ij} + r_{ij} r^{ij}) \alpha_i^{-1} \alpha_j^{-1} = \left( \delta_{ij} + \sum_m q_{im} \bar{q}_{jm} \sum_{\mathrm{mod}(k+l-1)=m} \frac{1}{p} \lambda_k \lambda_l^{-1} \right) \alpha_i^{-1} \alpha_j^{-1}.$

The other components $g_{\alpha_i \theta_j}$ ($i = 1, \ldots, p$; $j = 1, \ldots, d$) are 0.

Proof. The component of $g$ for $\theta$ is obtained by

$g_{\theta_i \theta_j} = \frac{1}{2} \mathrm{tr}\left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_i} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_j} \right) = \frac{1}{2} \mathrm{tr}\left( R^{-1} \frac{\partial R}{\partial \theta_i} R^{-1} \frac{\partial R}{\partial \theta_j} \right) = \frac{1}{2} \mathrm{tr}\left( D_\lambda^{-1} \frac{\partial D_\lambda}{\partial \theta_i} D_\lambda^{-1} \frac{\partial D_\lambda}{\partial \theta_j} \right) = \frac{1}{2} \sum_{k=1}^p \frac{\partial}{\partial \theta_i} \log \lambda_k \, \frac{\partial}{\partial \theta_j} \log \lambda_k.$

In particular, $g_{\theta_i \theta_j}$ is constant because $\lambda(\theta)$ is log-linear. The component of $g$ for $\alpha$ is

$g_{\alpha_i \alpha_j} = \frac{1}{2} \mathrm{tr}\left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \alpha_i} \Sigma^{-1} \frac{\partial \Sigma}{\partial \alpha_j} \right) = \frac{1}{2} \mathrm{tr}\left( R^{-1} \left( D_\alpha^{-1} \frac{\partial D_\alpha}{\partial \alpha_i} R + R D_\alpha^{-1} \frac{\partial D_\alpha}{\partial \alpha_i} \right) R^{-1} \left( D_\alpha^{-1} \frac{\partial D_\alpha}{\partial \alpha_j} R + R D_\alpha^{-1} \frac{\partial D_\alpha}{\partial \alpha_j} \right) \right) = (\delta_{ij} + r_{ij} r^{ij}) \alpha_i^{-1} \alpha_j^{-1},$

where $r_{ij}$ and $r^{ij}$ are the $(i,j)$ components of $R$ and $R^{-1}$, respectively. Because

$r_{ij} = \sum_{k=1}^p \lambda_k q_{ik} \bar{q}_{jk}$ and $r^{ij} = \sum_{k=1}^p \lambda_k^{-1} q_{ik} \bar{q}_{jk},$

we have

$r_{ij} r^{ij} = \left( \sum_{k=1}^p \lambda_k q_{ik} \bar{q}_{jk} \right) \left( \sum_{l=1}^p \lambda_l^{-1} q_{il} \bar{q}_{jl} \right) = \sum_k \sum_l \lambda_k \lambda_l^{-1} q_{ik} \bar{q}_{jk} q_{il} \bar{q}_{jl}.$

We obtain

$r_{ij} r^{ij} = \sum_k \sum_l \lambda_k \lambda_l^{-1} \frac{1}{p} \omega^{(i-1)(k+l-2)} \frac{1}{p} \bar{\omega}^{(j-1)(k+l-2)} = \frac{1}{p} \sum_{m=1}^p q_{im} \bar{q}_{jm} \sum_{\mathrm{mod}(k+l-1)=m} \lambda_k \lambda_l^{-1},$

because $q_{ij} = \frac{1}{\sqrt{p}} \omega^{(i-1)(j-1)}$.
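As a quick numerical sanity check (illustrative, not from the paper) of the $g_{\theta_i\theta_j}$ formula in Lemma 2.1, take the exchangeable example $\lambda(\theta) = (e^{-(p-1)\theta}, e^\theta, \ldots, e^\theta)$ with $d = 1$, where $\frac{1}{2}\sum_k (\partial_\theta \log \lambda_k)^2 = \frac{1}{2}\,p(p-1)$, and compare it against the trace formula evaluated by finite differences:

```python
import numpy as np

p, theta = 5, 0.3   # illustrative dimension and parameter value

def R_of(t):
    """R(theta) for the exchangeable model lam = (e^{-(p-1)t}, e^t, ..., e^t)."""
    lam = np.exp(np.r_[-(p - 1) * t, np.full(p - 1, t)])
    Q = np.exp(2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p) / np.sqrt(p)
    return (Q @ np.diag(lam) @ Q.conj().T).real

h = 1e-6
dR = (R_of(theta + h) - R_of(theta - h)) / (2 * h)     # numerical dR/dtheta
Rinv = np.linalg.inv(R_of(theta))
g_trace = 0.5 * np.trace(Rinv @ dR @ Rinv @ dR)        # (1/2) tr(R^-1 R' R^-1 R')

g_formula = 0.5 * ((p - 1) ** 2 + (p - 1))             # (1/2) sum_k (dlog lam_k)^2
assert np.isclose(g_trace, g_formula, rtol=1e-4)
```

Note the trace does not depend on $\theta$, consistent with the constancy claimed in the lemma for log-linear $\lambda(\theta)$.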
The other components of $g$ are 0, which is confirmed as follows. We have

$\frac{1}{2} \mathrm{tr}\left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \alpha_i} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_k} \right) = \frac{1}{2} \mathrm{tr}\left( R^{-1} \left( D_\alpha^{-1} \frac{\partial D_\alpha}{\partial \alpha_i} R + R \frac{\partial D_\alpha}{\partial \alpha_i} D_\alpha^{-1} \right) R^{-1} \frac{\partial R}{\partial \theta_k} \right) = \mathrm{tr}\left( \frac{\partial D_\alpha}{\partial \alpha_i} D_\alpha^{-1} R^{-1} \frac{\partial R}{\partial \theta_k} \right)$ (7)

and when all the diagonal components of $R^{-1} \frac{\partial R}{\partial \theta_k}$ are 0, (7) is 0. The $i$-th diagonal component of $R^{-1}(\partial R / \partial \theta_k)$ is

$\sum_j |q_{ij}|^2 \frac{\partial \log \lambda_j}{\partial \theta_k} = \frac{1}{p} \frac{\partial}{\partial \theta_k} \left( \sum_j \log \lambda_j \right) = 0$

due to $|q_{ij}|^2 = 1/p$ and the condition (6).

Because $g_{\alpha_i \theta_j} = 0$ ($i = 1, \ldots, p$; $j = 1, \ldots, d$), the parameters $\theta$ and $\alpha$ are orthogonal with respect to the Fisher metric. This property is called adaptivity if either $\alpha$ or $\theta$ is considered as a nuisance parameter; see e.g. Segers et al. (2014). The density of the Jeffreys prior $\pi_J$ is evaluated as

$\pi_J(\theta, \alpha) = |g|^{1/2} = (|g_\theta| \, |g_\alpha|)^{1/2} \propto |g_\alpha|^{1/2},$

where $g_\alpha = (g_{\alpha_i \alpha_j})$, and we used the fact that $g_\theta = (g_{\theta_i \theta_j})$
[Source: https://arxiv.org/abs/2504.12615v1]
is constant. Let
\[ \mu_m = \sum_{\mathrm{mod}(k+l-1)=m} \frac1p\,\lambda_k\lambda_l^{-1}; \]
then
\[ |I + R\circ R^{-1}| = |I + Q\,\mathrm{diag}(\mu_1,\dots,\mu_p)\,Q^*| = \prod_{m=1}^p (1+\mu_m) = \prod_{m=1}^p \Big(1 + \sum_{\mathrm{mod}(k+l-1)=m}\frac1p\,\lambda_k\lambda_l^{-1}\Big), \quad (8) \]
where $A\circ B$ is the Hadamard product of $A$ and $B$. We consider a new parametrization $\beta_i = \log\alpha_i$ ($i = 1,\dots,p$) so that $g_{\beta_i\beta_j} = \delta_{ij} + r_{ij}r^{ij}$. Then, from (8),
\[ |g_\beta| = |I + R\circ R^{-1}| = \prod_{m=1}^p \Big(1 + \sum_{\mathrm{mod}(k+l-1)=m}\frac1p\,\lambda_k\lambda_l^{-1}\Big) \quad (9) \]
and $\pi_J(\theta,\beta) = |g_\beta|^{1/2}$. In particular, $\pi_J(\theta,\beta)$ does not depend on $\beta$.

We obtain a superharmonic prior for this model.

Theorem 2.1. Suppose that $\lambda(\theta)$ is log-linear. Let $\pi_S(\theta,\beta)\propto 1$. The Bayesian predictive density based on $\pi_S$ asymptotically dominates the Bayesian predictive density based on $\pi_J$ with respect to the KL risk.

Proof. From the result in Komaki (2006), it is enough to show that $\pi_S/\pi_J$ satisfies $\Delta(\pi_S/\pi_J) < 0$. Here, $\Delta$ denotes the Laplace–Beltrami operator
\[ \Delta f = \sum_{a,b} |g|^{-1/2}\,\partial_a\big(|g|^{1/2} g^{ab}\,\partial_b f\big), \]
where $\partial_a = \partial/\partial\omega_a$ and $\omega = (\beta,\theta)$. We show the following lemma.

Lemma 2.2. For any function $f(\theta)$ depending only on $\theta$, we have
\[ f^{-1}\Delta f = \sum_{i,j} g^{\theta_i\theta_j}\big\{\partial_{\theta_i}\partial_{\theta_j}\log f + (\partial_{\theta_i}\log f)(\partial_{\theta_j}\log f) + \partial_{\theta_i}\log|g|^{1/2}\,\partial_{\theta_j}\log f\big\}. \quad (10) \]

Proof. Because the $g^{\beta_i\theta_j}$ are $0$ and the $g^{\theta_i\theta_j}$ are constant, we have
\[ \Delta f = \sum_{i,j} |g|^{-1/2}\,\partial_{\theta_i}\big(|g|^{1/2} g^{\theta_i\theta_j}\,\partial_{\theta_j} f\big) = \sum_{i,j} g^{\theta_i\theta_j}\,|g|^{-1/2}\,\partial_{\theta_i}\big(|g|^{1/2}\,\partial_{\theta_j} f\big) = \sum_{i,j} g^{\theta_i\theta_j}\big\{\partial_{\theta_i}\partial_{\theta_j} f + \partial_{\theta_i}\log|g|^{1/2}\,\partial_{\theta_j} f\big\}, \]
and we immediately obtain (10). $\square$

To complete the proof of the theorem, we show that the function $f = \pi_S/\pi_J$ satisfies $\Delta f < 0$. For $f = \pi_J^{-1} \propto |g_\beta|^{-1/2}$, (10) is
\[ f^{-1}\Delta f = -\frac12 \sum_{i,j} g^{\theta_i\theta_j}\,\partial_{\theta_i}\partial_{\theta_j}\log|g_\beta|. \quad (11) \]
We show that the Hessian matrix of $\log|g_\beta|$ is positive definite. From (9),
\[ |g_\beta| = \prod_{m=1}^p \Big(1 + \sum_{\mathrm{mod}(k+l-1)=m}\frac1p\,\lambda_k\lambda_l^{-1}\Big). \]
The function $h(x_1,\dots,x_p) = \log\big(1 + \sum_{i=1}^p \exp(x_i)\big)$ is convex, and its Hessian matrix $H_h$ is positive definite. The Hessian matrix of the function
\[ (\theta_1,\dots,\theta_d) \mapsto \log\Big(1 + \sum_{\mathrm{mod}(k+l-1)=m}\frac1p\,\lambda_k\lambda_l^{-1}\Big) \]
for a fixed $m$ is decomposed as $W_m^\top H_h W_m$, where $W_m\in\mathbb{R}^{p\times d}$ is a constant matrix defined by
\[ (W_m)_{li} = \frac{\partial}{\partial\theta_i}\log\Big(\frac1p\,\lambda_k\lambda_l^{-1}\Big), \qquad k = \mathrm{mod}(m+1-l). \]
Since $W_m^\top H_h W_m$ is positive semi-definite, the Hessian matrix of $\log|g_\beta|$ is also positive semi-definite.
To see positive definiteness of the Hessian matrix, suppose that there exists $v\in\mathbb{R}^d$ such that $v^\top W_m^\top H_h W_m v = 0$ for all $m$. This implies $W_m v = 0$ by positive definiteness of $H_h$. From the expression of $W_m$, we obtain
\[ \sum_i v_i\,\frac{\partial}{\partial\theta_i}\log\lambda_k = \sum_i v_i\,\frac{\partial}{\partial\theta_i}\log\lambda_l \]
for all $k$ and $l$. From the condition (6), we further obtain
\[ \sum_i v_i\,\frac{\partial}{\partial\theta_i}\log\lambda_k = 0. \]
Then $v = 0$ follows from the linear independence of $(\partial/\partial\theta_i)\log\lambda$. Hence, the Hessian matrix of $\log|g_\beta|$ is positive definite. From (11), we conclude that $f^{-1}\Delta f < 0$. $\square$

3 Optimality in a class of priors

In this section, we consider the full model and the exchangeable model as specific examples of the circulant correlation structure models. The superharmonic prior derived in Section 2 has an optimal property in each case.

3.1 Full model

We recall the definition of the full model:
\[ \lambda(\theta) = (e^{-\theta_1-\cdots-\theta_{p-1}}, e^{\theta_1},\dots,e^{\theta_{p-1}}), \qquad \theta_a = \theta_{p-a},\quad 1\le a\le \lfloor (p-1)/2\rfloor, \quad (12) \]
with $d = \lfloor p/2\rfloor$. The prior $\pi_S$ is the uniform density with respect to the parameter $(\theta,\beta)$, where $\beta_i = \log\alpha_i$. If $p = 2$, the prior $\pi_S$ coincides with the correlation-shrinkage prior proposed by Sei
and Komaki (2022), where finite-sample properties of $\pi_S$ are studied. The prior $\pi_S$ has an optimal property regarding the asymptotic KL risk in a class of priors $\pi_a(\theta,\beta)\propto\pi_J(\theta,\beta)^a$ ($a\in\mathbb{R}$).

Theorem 3.1. Suppose that $\lambda(\theta)$ is the full model (12). Let $\pi_a(\theta,\beta)\propto\pi_J(\theta,\beta)^a$ ($a\in\mathbb{R}$). Regarding the asymptotic KL risk, when $\theta_1,\dots,\theta_d\to\infty$, $a = 0$ is optimal, that is, the prior $\pi_S(\theta,\beta)\propto 1$ is optimal. Here, $\theta_1,\dots,\theta_d\to\infty$ means that $\theta_i = r\theta_{i0}$ for a fixed positive vector $(\theta_{i0})$ and $r\to\infty$.

Proof. In order to simplify calculation, we consider the class of priors $\pi_t(\theta,\beta)\propto\pi_J(\theta,\beta)^{2t+1}$ ($t\in\mathbb{R}$). Let $\hat p_J$ and $\hat p_t$ be the Bayesian predictive densities based on $\pi_J$ and $\pi_t$, respectively. The asymptotic risk difference of $\hat p_J$ and $\hat p_t$ is obtained following Komaki (2006):
\[ \mathrm{E}[D(p(y;\theta,\beta);\hat p_J)] - \mathrm{E}[D(p(y;\theta,\beta);\hat p_t)] = -\frac{2}{n^2}\Big(\frac{\pi_J}{\pi_t}\Big)^{1/2}\Delta\Big(\frac{\pi_t}{\pi_J}\Big)^{1/2} + o(n^{-2}). \quad (13) \]
Let $f = (\pi_t/\pi_J)^{1/2} = \pi_J^t = |g|^{t/2}$; then
\[ \Big(\frac{\pi_J}{\pi_t}\Big)^{1/2}\Delta\Big(\frac{\pi_t}{\pi_J}\Big)^{1/2} = \sum_{i,j} g^{\theta_i\theta_j}\big\{\partial_{\theta_i}\partial_{\theta_j}\log f + (\partial_{\theta_i}\log f)(\partial_{\theta_j}\log f) + \partial_{\theta_i}\log|g|^{1/2}\,\partial_{\theta_j}\log f\big\} = \sum_{i,j} g^{\theta_i\theta_j}\Big\{\partial_{\theta_i}\partial_{\theta_j}\log f + \frac14(t^2+t)\,\partial_{\theta_i}\log|g|\,\partial_{\theta_j}\log|g|\Big\}. \]
Let $S_m = \{(k,l)\mid \mathrm{mod}(k+l-1) = m\}$. We have
\[ \partial_{\theta_i}\partial_{\theta_j}\log f = \frac t2 \sum_{m=1}^p \partial_{\theta_i}\partial_{\theta_j}\log\Big(1 + \sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}\Big) = \frac t2 \sum_{m=1}^p \Bigg\{\frac{\partial_{\theta_i}\partial_{\theta_j}\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}}{1+\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}} - \frac{\big(\partial_{\theta_i}\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}\big)\big(\partial_{\theta_j}\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}\big)}{\big(1+\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}\big)^2}\Bigg\}. \]
We prove that $\partial_{\theta_i}\partial_{\theta_j}\log f$ converges to $0$ as $\theta_1,\dots,\theta_d\to\infty$. Let $c_i^k = \partial_{\theta_i}\log\lambda_k$. Then,
\[ \lim_{\theta_1,\dots,\theta_d\to\infty} \sum_{m=1}^p \frac{\partial_{\theta_i}\partial_{\theta_j}\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}}{1+\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}} = \lim \sum_{m=2}^p \frac{\partial_{\theta_i}\partial_{\theta_j}\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}}{1+\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}} = \lim \sum_{m=2}^p \frac{\tfrac1p\sum_{(k,l)\in S_m}(c_i^k-c_i^l)(c_j^k-c_j^l)\lambda_k\lambda_l^{-1}}{1+\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}} = \lim \sum_{m=2}^p \frac{\tfrac1p(c_i^m-c_i^1)(c_j^m-c_j^1)\lambda_1^{-1}\lambda_m}{1+\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}} = \sum_{m=2}^p (c_i^m-c_i^1)(c_j^m-c_j^1), \]
where the first equality follows from $\lambda_k = \lambda_l$ for all $(k,l)\in S_1$ due to (5), and the third equality holds because $\lambda_1^{-1}\lambda_m$ is the dominant term as $\theta_1,\dots,\theta_d\to\infty$.
Also, because
\[ \lim_{\theta_1,\dots,\theta_d\to\infty} \sum_{m=1}^p \frac{\big(\partial_{\theta_i}\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}\big)\big(\partial_{\theta_j}\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}\big)}{\big(1+\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}\big)^2} = \lim \sum_{m=2}^p \frac{\big(\partial_{\theta_i}\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}\big)\big(\partial_{\theta_j}\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}\big)}{\big(1+\sum_{(k,l)\in S_m}\tfrac1p\lambda_k\lambda_l^{-1}\big)^2} = \sum_{m=2}^p (c_i^m-c_i^1)(c_j^m-c_j^1), \]
we have $\lim_{\theta_1,\dots,\theta_d\to\infty}\partial_{\theta_i}\partial_{\theta_j}\log f = 0$. Therefore, from (13),
\[ \lim_{\theta_1,\dots,\theta_d\to\infty}\big\{\mathrm{E}[D(p(y;\theta,\beta);\hat p_J)] - \mathrm{E}[D(p(y;\theta,\beta);\hat p_t)]\big\} = -\frac{2}{n^2}\lim \sum_{i,j} g^{\theta_i\theta_j}\Big\{\partial_{\theta_i}\partial_{\theta_j}\log f + \frac14(t^2+t)\,\partial_{\theta_i}\log|g|\,\partial_{\theta_j}\log|g|\Big\} + o(n^{-2}) = -\frac{2}{n^2}\lim \sum_{i,j} g^{\theta_i\theta_j}\,\frac14\Big\{\Big(t+\frac12\Big)^2 - \frac14\Big\}\,\partial_{\theta_i}\log|g|\,\partial_{\theta_j}\log|g| + o(n^{-2}), \]
and this is maximized when $t = -1/2$. $\square$

Theorem 3.1 assumes that $\theta_1,\dots,\theta_d$ are positive. The same consequence holds in other cases whenever only one term in $\sum_{(k,l)\in S_m}\lambda_k\lambda_l^{-1}$ is dominant for each $m$.

3.2 Exchangeable model

If $\lambda(\theta) = (e^{-(p-1)\theta}, e^\theta,\dots,e^\theta)$, the matrix $R(\theta)$ has exchangeable correlations. Indeed, we have
\[ r_{ij} = \sum_{k=1}^p q_{ik}\lambda_k\bar q_{jk} = e^{-(p-1)\theta} q_{i1}\bar q_{j1} + e^\theta\sum_{k=2}^p q_{ik}\bar q_{jk} = \frac1p\Big(e^{-(p-1)\theta} + e^\theta\sum_{k=2}^p \omega^{(i-j)(k-1)}\Big) = \frac1p\big(e^{-(p-1)\theta} + e^\theta(p\delta_{ij}-1)\big), \]
since $\sum_{k=2}^p \omega^{(i-j)(k-1)} = -1$ if $i\neq j$. The $p\times p$ submatrix $g_\alpha$ and the component of the Fisher information matrix regarding $\theta$ are
\[ g_\alpha = -\frac{1}{p^2}\Big(2\sinh\frac{p\theta}{2}\Big)^2\,\alpha^{-1}(\alpha^{-1})^\top + \Big(\frac1p\Big(2\sinh\frac{p\theta}{2}\Big)^2 + 2\Big)\,\mathrm{diag}(\alpha_1^{-2},\dots,\alpha_p^{-2}), \qquad g_{\theta\theta} = \frac{p(p-1)}{2}, \]
respectively. The Jeffreys prior is
\[ \pi_J(\theta,\alpha) \propto \Big(\frac1p\Big(2\sinh\frac{p\theta}{2}\Big)^2 + 2\Big)^{\frac{p-1}{2}}\,\prod_{i=1}^p \alpha_i^{-1}. \]
We have some results about shrinkage priors in this setting. Recall that $\beta_i = \log\alpha_i$.

Proposition 3.1. Let $\pi_c(\theta,\beta)\propto\pi_J(\theta,\beta)^c$ ($c\in\mathbb{R}$). The Bayesian predictive density based on $\pi_c$ asymptotically dominates the Bayesian predictive density based on $\pi_J$ regarding the KL risk when $-1\le c<1$.

Proof. Let $f = (\pi_c/\pi_J)^{1/2} = |g|^\gamma$, where $\gamma = (c-1)/4$. We show that $f$ is superharmonic when $-1/2\le\gamma<0$. We have
\[ f^{-1}\Delta f = \frac{2}{p(p-1)}\Big\{\gamma\,\partial_\theta^2\log|g_\alpha| + \Big(\gamma^2+\frac{\gamma}{2}\Big)(\partial_\theta\log|g_\alpha|)^2\Big\}. \]
Because $\log|g_\alpha|$ can be expressed as
\[ \log|g_\alpha| = (p-1)\log\big(a e^{p\theta} + b e^{-p\theta} + c\big) + (\text{terms independent of }\theta) \]
with some $a, b, c > 0$, $\log|g_\alpha|$ is convex as a function of $\theta$, and we have $f^{-1}\Delta f\le 0$ when $-1/2\le\gamma<0$. $\square$

Corollary 3.1. Let $\pi_c(\theta,\beta)\propto\pi_J(\theta,\beta)^c$ ($c\in\mathbb{R}$). Regarding the asymptotic KL risk, when $\theta\to\infty$, $c = 0$ is optimal, that is, the prior $\pi_S(\theta,\beta)\propto 1$ is optimal.

Proof. Let $f = (\pi_c/\pi_J)^{1/2} = |g|^\gamma$, where $\gamma = (c-1)/4$. We have
\[ f^{-1}\Delta f = \frac{2}{p(p-1)}\Big\{\gamma\,\partial_\theta^2\log|g_\alpha| + \Big(\gamma^2+\frac{\gamma}{2}\Big)(\partial_\theta\log|g_\alpha|)^2\Big\}. \quad (14) \]
Let $S = \sinh(p\theta/2)$ and $C = \cosh(p\theta/2)$. Here,
\[ \partial_\theta\log|g| = (p-1)\,\frac{4SC}{\frac4p S^2+2}, \quad (15) \]
\[ \partial_\theta^2\log|g| = \frac{2p(p-1)}{\big(\frac4p S^2+2\big)^2}\Big\{(C^2+S^2)\Big(\frac4p S^2+2\Big) - \frac8p S^2C^2\Big\}, \quad (16) \]
and as $\theta\to\infty$,
\[ \partial_\theta\log|g| \to p(p-1), \qquad \partial_\theta^2\log|g| \to 0. \]
Thus the limit of (14) is minimized, and the resulting risk improvement maximized, by $\gamma = -1/4$. $\square$

We plot the difference of the asymptotic KL risks of the Bayesian predictive densities based on the Jeffreys prior $\pi_J$ and on the proposed prior for exchangeable correlation structure models. Let $\hat p_\gamma$ be the Bayesian predictive density based on the prior $\pi_\gamma(\theta,\beta)\propto\pi_J(\theta,\beta)^{4\gamma+1}$. The asymptotic risk difference in this model is evaluated from (13), (14), (15), and (16) as
\[ \mathrm{E}[D(p(y;\theta,\beta);\hat p_J)] - \mathrm{E}[D(p(y;\theta,\beta);\hat p_\gamma)] = -\frac{2}{n^2}\cdot\frac{2}{p(p-1)}\Bigg[\gamma\,\frac{2p(p-1)}{\big(\frac4p S^2+2\big)^2}\Big\{(C^2+S^2)\Big(\frac4p S^2+2\Big) - \frac8p S^2C^2\Big\} + \Big(\gamma^2+\frac\gamma2\Big)\Big((p-1)\,\frac{4SC}{\frac4p S^2+2}\Big)^2\Bigg] + o(n^{-2}). \quad (17) \]
The asymptotic KL risk difference (17) is shown in Figure 1.
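As a numerical sanity check (our sketch, not part of the paper), the closed form of $r_{ij}$ and the value $g_{\theta\theta} = p(p-1)/2$ for the exchangeable model can be reproduced from the spectral construction $R = Q\,\mathrm{diag}(\lambda)\,Q^*$; all names below are our own.

```python
# Exchangeable model check (our sketch): closed-form r_ij and g_thetatheta.
import numpy as np

p, theta = 4, 0.7
omega = np.exp(2j * np.pi / p)
Q = omega ** np.outer(np.arange(p), np.arange(p)) / np.sqrt(p)

def R_of(th):
    # eigenvalues (e^{-(p-1)th}, e^th, ..., e^th)
    lam = np.full(p, np.exp(th))
    lam[0] = np.exp(-(p - 1) * th)
    return (Q @ np.diag(lam) @ Q.conj().T).real

R = R_of(theta)
# closed form: r_ij = (e^{-(p-1)theta} + e^theta (p delta_ij - 1)) / p
closed = (np.exp(-(p - 1) * theta) + np.exp(theta) * (p * np.eye(p) - 1)) / p
print(np.allclose(R, closed))  # True

# g_thetatheta = (1/2) tr(R^{-1} dR R^{-1} dR) via central differences
h = 1e-5
dR = (R_of(theta + h) - R_of(theta - h)) / (2 * h)
Rinv = np.linalg.inv(R)
g_tt = 0.5 * np.trace(Rinv @ dR @ Rinv @ dR)
print(round(g_tt, 4))  # p(p-1)/2 = 6.0
```

By Lemma 2.1 the value $g_{\theta\theta}$ does not depend on $\theta$, so the choice $\theta = 0.7$ is arbitrary.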
The sample size is $n = 100$ in every plot. The risk difference $\mathrm{E}[D(p(y;\theta,\beta);\hat p_J)] - \mathrm{E}[D(p(y;\theta,\beta);\hat p_\gamma)]$ is positive, which shows that $\pi_\gamma$ dominates $\pi_J$ asymptotically. As shown in Corollary 3.1, $\gamma = -1/4$ is optimal when $\theta\to\infty$, and the risk differences for $\gamma = -1/4$ and $p = 2, 3, 10$ are positive even in the region where $\theta$ is not near $0$. When $p = 2$, the risk difference is large around $\theta = 0$, which means that the proposed prior decreases the risk effectively around $D_\lambda = I_2$. On the other hand, when $p = 3, 10$ and $\gamma = -1/4, -1/100$, the risk difference around $\theta = 0$ is smaller, which suggests that stronger shrinkage around $\theta = 0$ could be effective.

4 Acknowledgements

The authors thank Fumiyasu Komaki for helpful comments on an early version of this work. This work was supported in part by JSPS KAKENHI Grant Numbers JP20K23316, JP21K11781 and JP22H00510.

References

J. Aitchison. Goodness of prediction fit. Biometrika, 62:547–554, 1975.

T. W. Anderson. The Statistical Analysis of Time Series. Wiley, New York, 1971.

D. L. Donoho, M. Gavish, and I. M. Johnstone. Optimal shrinkage of eigenvalues in the
spiked covariance model. Annals of Statistics, 46:1742–1778, 2018.

E. I. George, F. Liang, and X. Xu. From minimax shrinkage estimation to minimax shrinkage prediction. Statistical Science, 27:92–94, 2012.

J. Jiang and Y. V. Hui. Spectral density estimation with amplitude modulation and outlier detection. Annals of the Institute of Statistical Mathematics, 56:611–630, 2004.

F. Komaki. Shrinkage priors for Bayesian prediction. Annals of Statistics, 34:808–819, 2006.

J. Segers, R. van den Akker, and B. J. M. Werker. Semiparametric Gaussian copula models: Geometry and efficient rank-based estimation. The Annals of Statistics, 42:1911–1940, 2014.

T. Sei and F. Komaki. A correlation-shrinkage prior for Bayesian prediction of the two-dimensional Wishart model. Biometrika, 109:1173–1180, 2022.

R. Yang and J. O. Berger. Estimation of a covariance matrix using the reference prior. Annals of Statistics, 22:1195–1211, 1994.

Figure 1: KL risk difference $\mathrm{E}[D(p(y;\theta,\beta);\hat p_J)] - \mathrm{E}[D(p(y;\theta,\beta);\hat p_\gamma)]$ for $p = 2, 3, 10$ and $\gamma = -1/2, -1/4, -1/100$.
On perfect sampling: ROCFTP with Metropolis-multishift coupler

Majid Nabipoor
nabipoor@ualberta.ca
University Health Network, Toronto, Canada

April 18, 2025

Abstract

ROCFTP is a perfect sampling algorithm that employs various random operations, and it requires a specific Markov chain construction for each target. To overcome this requirement, the Metropolis algorithm is incorporated as a random operation within ROCFTP. While the Metropolis sampler functions as a random operation, it isn't a coupler. However, by employing the normal multishift coupler as a symmetric proposal for Metropolis, we obtain ROCFTP with Metropolis-multishift. Initially designed for bounded state spaces, ROCFTP's applicability to targets with unbounded state spaces is extended through the introduction of the Most Interest Range (MIR) for practical use. It is demonstrated that selecting the MIR decreases the likelihood of ROCFTP hitting its complement $\mathrm{MIR}^C$ by a factor of $(1-\varepsilon)$, which is beneficial for practical implementation. The algorithm exhibits a convergence rate characterized by exponential decay. Its performance is rigorously evaluated across various targets, and goodness-of-fit tests confirm the quality of the samples. Lastly, an R package is provided for generating exact samples using ROCFTP Metropolis-multishift.

Keywords: Exact sampling, Generation, MCMC algorithms, CFTP, Coupling

arXiv:2504.12872v1 [stat.CO] 17 Apr 2025

1 Introduction

Many applications of Markov Chain Monte Carlo (MCMC) are centered around Bayesian inference. Bayesian inference relies on posterior features such as moments, quantiles, or the highest posterior density region, which can be expressed in terms of posterior expectations of functions of $\theta$, denoted as $\mathrm{E}[f(\theta)\mid Y]$, where $\theta$ is a model parameter and $Y$ represents the observed data. The integration in this expression has been a challenging aspect of Bayesian inference, and in most applications the analytic evaluation of $\mathrm{E}[f(\theta)\mid Y]$ is not possible.
Alternative approaches include: (1) numerical evaluation, which can be difficult and occasionally inaccurate; (2) analytic approximation methods such as the Laplace approximation, which are sometimes suitable; and (3) Monte Carlo integration, including MCMC. The classic approach of MCMC involves evaluating $\mathrm{E}[f(X)]$ by drawing samples $X_t$, $t = 1,\dots,n$, from the distribution of $X$ and approximating it as
\[ \mathrm{E}[f(X)] \approx \frac1n \sum_{t=1}^n f(X_t). \]
This equation represents an ergodic average, and convergence to the required expectation is guaranteed by the strong ergodic theorem. However, obtaining the samples $X_t$ can be challenging, since posteriors are often non-standard. One approach to address this is through a Markov chain with $\pi(\cdot)$ as its stationary distribution.

The ROCFTP algorithm, proposed by Wilson (2000a), offers a coupling diagnostic approach for drawing independent and identically distributed (i.i.d.) samples from a Markov chain. However, it requires the construction of a Markov chain for each target and can be executed using various random-operation couplers. The Metropolis sampler functions as a random operation but not as a coupler, whereas the normal multishift serves as a symmetric coupler that can be employed as a proposal in the Metropolis algorithm. ROCFTP with the Metropolis-multishift coupler can generate i.i.d. samples without the necessity of constructing a Markov chain for each target.

This manuscript introduces the perfect sampling algorithm of ROCFTP with the Metropolis-multishift coupler and investigates its properties thoroughly. The associated R package ROCFTP.MMS is available on the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/ROCFTP.MMS/index.html.
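A minimal illustration of the ergodic average (our example; the target $N(0,1)$, the proposal, and all names are our own, not from the paper): a random-walk Metropolis chain used to estimate $\mathrm{E}[X^2] = 1$.

```python
# Ergodic-average sketch (ours): random-walk Metropolis targeting N(0,1).
import numpy as np

def metropolis_normal(n, rng, step=1.0):
    x, out = 0.0, np.empty(n)
    for t in range(n):
        y = x + step * rng.standard_normal()              # symmetric proposal
        # accept with probability min(1, pi(y)/pi(x)) for pi = N(0,1)
        if np.log(rng.random()) < 0.5 * (x * x - y * y):
            x = y
        out[t] = x
    return out

rng = np.random.default_rng(0)
xs = metropolis_normal(50_000, rng)
print(round(float(np.mean(xs ** 2)), 2))  # ergodic average, close to E[X^2] = 1
```

The samples are dependent, which is exactly the issue the burn-in, regeneration, and perfect-sampling literature reviewed below addresses.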
Section 2 delves into the fundamentals and definitions of random operations, the normal multishift coupler, CFTP, and ROCFTP, while also addressing the determination of the block length $T$ in ROCFTP. Section 3 further explores the Metropolis-multishift coupler, including its functionality, convergence properties, the impact of starting-range selection on sampling, and considerations regarding the number of starting paths. In Section 4, the performance of ROCFTP with the Metropolis-multishift coupler is assessed on both unimodal and multimodal targets. Lastly, Section 5 provides a detailed discussion of the results, along with a comparison with other algorithms.

1.1 Literature review

The convergence of an MCMC algorithm to its stationary distribution is a necessary condition for drawing inferential conclusions from MCMC runs (Frigessi 1995). Convergence properties of MCMC algorithms have been widely studied for decades: Peskun (1973), Sokal & Thomas (1988), Besag & Green (1993), Frigessi (1995), Mengersen & Tweedie (1996), Johnson (1998). One way of drawing samples in MCMC algorithms is to discard the first $m$ iterations, that is, to draw samples after a sufficiently long burn-in of, say, $m$ iterations, to allow the Markov chain to converge to its stationary distribution (Besag et al. 1995). Frigessi (1995) defines the burn-in $t^*$ such that for all $t > t^*$, $\|P(X^{(t)} = \cdot \mid x^{(1)}) - \pi(\cdot)\| \le \varepsilon$, and investigates different methods to determine upper and lower bounds for $t^*$. In this approach, sample points are dependent (Mykland et al. 1995). As a potential remedy, it is suggested that many independent runs of MCMC be used; however, this is expensive (Besag & Green 1993). The other approach is sub-sampling the chain at equally spaced times after a sufficiently long burn-in (Besag & Green 1993). Regenerative simulation and splitting the Markov chain are two proposed techniques for sub-sampling from a chain. Mykland et al. (1995) assumed times $T_0\le T_1\le\dots$
such that at each $T_i$ the future of the process is independent of the past and identically distributed, and proposed a scaled regeneration quantile plot to determine such $T_i$'s, which are known as regeneration times. Consider a discrete Markov chain and fix a particular state; the times at which the chain returns to that state are regeneration times. Nummelin (1978) generalized this idea to Markov chains with continuous state spaces by introducing the concept of an atom, defined as a set of points from which all transitions are identical; the chain is split by entering the atom.

Coupling is the joint construction of two or more random variables or Markov chains, and as a probabilistic technique it dates back to Doeblin (1938). Later, this technique was used to prove ergodic theorems for Markov lattice interactions: Wasserstein (1969), Dobrushin (1971), Holley & Liggett (1975). Advancements in coupling theory led to the development of various coupling schemes, including quantile coupling, maximal coupling, Ornstein coupling, epsilon-coupling, and shift coupling. Maximal coupling, which attains the upper bound of the coupling-event inequality, was proposed by Goldstein (1979) and later utilized for Poisson approximation by Thorisson (1995). Lindvall (1992) compiled the scattered literature on coupling schemes, laying the groundwork for Thorisson (2000)'s book. Letac (1986) observed
that if a Markov chain moves backward in time, it will converge pointwise to the stationary distribution. Aldous (1990) investigated a Markov chain with a random-walk stationary distribution and, through a reversed-time transition matrix, proved that the chain converges to the uniform distribution. Johnson (1996) proposed monotone coupling for a $p$-dimensional Markov chain where all conditional distributions are monotonically decreasing outside a $p$-dimensional hypercube. A novel approach proposed by Propp & Wilson (1996), Coupling From The Past (CFTP), involves running two or more chains in parallel until all chains coalesce. This algorithm outputs an exact draw from the stationary distribution. Monotone CFTP looks backward to find the two chains that coalesce in a block, and the state at time 0 is distributed according to the stationary distribution $\pi$. The selection of random operations in CFTP is as crucial as the choice of the Markov chain itself. Monotone random operations are particularly important, as they enable the algorithm to confine the entire state space to a single point. The CFTP algorithm necessitates the storage of randomness sources, rendering it costly. Wilson (2000a) proposed Read-Once CFTP (ROCFTP), which operates Markov chains in a forward manner. The algorithm reads a stream of randomness only once, leading to significant reductions in memory and time requirements. Later, Wilson (2000b) explored various monotone random operations, with a focus on the uniform multishift and the normal multishift. CFTP is applicable on bounded state spaces, with $\hat 0$, $\hat 1$ representing the unique minimum and maximum of the state space, respectively. The algorithm initiates at time $-t$ in the past with initial values $\hat 0$, $\hat 1$. If these two chains coalesce in a block, the observation at time zero serves as the output. Fill's algorithm, introduced by Fill (1997), is structured in a manner similar to CFTP.
Fill's algorithm initiates a chain $X$ at time zero from $X_0 = \hat 0$ and advances the chain forward for $t$ steps while storing the randomness source. Then, at time $t$, a second chain $Y$ starts from $Y_t = \hat 1$ and runs backward based on the stored randomness. If $Y_0 = \hat 0$, the algorithm outputs $X_t$; if not, it doubles $t$ and repeats the process. Fill explained that every iteration of doubling $t$ implies rejection sampling and investigated the reasons behind the effectiveness of this algorithm. Moreover, this algorithm can be interrupted with respect to a deadline specified in terms of Markov chain steps. Fill's algorithm was extended by Fill et al. (2000), making it applicable to general chains and capable of alleviating computational burdens. The extended algorithm initiates a chain at time $t$ with initial value $X_t$ and runs the time-reversed chain for $t$ steps to obtain $X_0$, while storing the randomness. Then, at time zero, it starts a second chain $Y$ with an arbitrary initial value from the state space and runs it forward according to the stored randomness. If both chains coalesce at time $t$, it outputs $X_0$; otherwise, it repeats the algorithm with a new $t$ and $X_t$. Fill's algorithm outputs $X_t$, while the extended version outputs $X_0$. Berthelsen et al. (2010) used ROCFTP with a Gibbs sampler to generate samples for mixture models with unknown mixture weights. Glynn
& Rhee (2014) proposed a new approach for unbiased estimation of Markov chain stationary expectations. They defined $Z := \sum_{k=0}^{N} \Delta_k/P(N\ge k)$ as an estimator of $\mathrm{E}[f(X)]$, where $\Delta_k = f(X_k) - f(X_{k-1})$ and $N$ is a $\mathbb{Z}^+$-valued random variable independent of the $\Delta_k$. They established a coupling between $X_k$ and $X_{k-1}$ to ensure $\Delta_k\to 0$ as $k\to\infty$. They demonstrated that if (1) the Markov chain $X$ is contractive on average, (2) $f$ is Lipschitz, and (3) the jump in each transition of the Markov chain $X$ is finite, then $Z$ is an unbiased estimator for $\mathrm{E}[f(X)]$. Jacob et al. (2020) employed the telescoping-sum argument of Glynn and Rhee combined with the coupling of Markov chains to derive an unbiased estimate of $\mathrm{E}[f(X)]$. Additionally, Leigh et al. (2023) demonstrated that unbiased simulation, together with the coupling of two Markov chains, achieves perfect simulation.

2 Preliminaries

This section provides preliminaries, including random operations, the normal multishift coupler, CFTP, and ROCFTP, for constructing ROCFTP with Metropolis-multishift. Advanced readers may skim through this section; however, Subsection 2.4 is recommended, as it provides a simulation for ROCFTP to investigate the block length $T$. The scheme for storing randomness in the CFTP algorithm, along with related R code, is provided in the Appendix, laying the foundation for further material.

2.1 Random operations

A random operation $\phi()$ is a deterministic function that takes the input state $X_t$ at time $t$ together with some intrinsic randomness $U_{t+1}$, updates the state $X_t$, and outputs $X_{t+1} = \phi(X_t, U_{t+1})$ (Wilson 2000b). The intrinsic randomness terms $U_t$ are mutually independent. The random operation $\phi()$ preserves the stationary distribution $\pi$: if $X_t$ is distributed according to $\pi$ and $U_t$ is random, then $X_{t+1}$ is distributed according to $\pi$.

Definition 1. A random operation $\phi()$ is monotone if
\[ X_t \le Y_t \implies \phi(X_t, U_{t+1}) \le \phi(Y_t, U_{t+1}). \]

Monotonicity is a useful property for sandwiching the entire state space at time $t$ in a state $X_t$ (Wilson 2000b, Murdoch 2000).
Consider a finitely bounded continuous state space, and denote by $\hat 0$ the minimum and by $\hat 1$ the maximum of the state space. In order to test whether the random operation $\phi()$ simultaneously maps the entire state space into $X_t$, by monotonicity it is necessary only to apply it to the two starting values $\hat 0$ and $\hat 1$.

A multishift coupler is a random operation $\phi()$ such that for each real value $x$, the random value $\phi(x, u) - x$ has the same target distribution (Wilson 2000b). In other words, the update distribution belongs to a location family. There are different multishift couplers, such as the uniform multishift coupler, the normal multishift coupler, and the unimodal multishift coupler. We will discuss the normal multishift coupler.

2.2 Normal multishift coupler

Wilson (2000b) proposed the normal multishift coupler and explained it as a combination of layered uniform rectangles, as depicted in Figure A.1; he referred to it as the layered normal multishift coupler. In the normal multishift coupler, we select a rectangle from the mixture of uniform distributions according to the mixing proportions, and then pick a random point within the rectangle. The result is a normally distributed random variable. Let $L$ and $R$ denote the left and right end points of a uniform rectangle, and let $f$ be the density of the normal distribution. Choose $Z\sim \mathrm{Normal}(0,1)$ and $U\sim \mathrm{Uniform}(0, f(Z))$.
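The construction just described, completed in the text by the endpoints $L = -f^{-1}(U)$, $R = f^{-1}(U)$ and the interval mapping $M_{L,R,X}$, can be sketched in code as follows. This is our sketch with $\sigma = 1$; the helper names are our own, and the shared $(L, R, X)$ define one monotone map applied to every state.

```python
# Layered normal multishift coupler (our sketch, sigma = 1):
# Z ~ N(0,1), U ~ Uniform(0, f(Z)), R = f^{-1}(U), L = -R, X ~ Uniform(L, R),
# and M(s) = floor((s + R - X)/(R - L)) * (R - L) + X.
import numpy as np

def multishift_map(rng):
    z = rng.standard_normal()
    u = rng.uniform(0.0, np.exp(-0.5 * z * z) / np.sqrt(2 * np.pi))
    r = np.sqrt(-2.0 * np.log(u * np.sqrt(2 * np.pi)))  # f^{-1}(u) for the N(0,1) density
    l, x = -r, rng.uniform(-r, r)
    return lambda s: np.floor((s + r - x) / (r - l)) * (r - l) + x

rng = np.random.default_rng(1)
phi = multishift_map(rng)
states = np.linspace(-5.0, 5.0, 11)
print(phi(states))  # one monotone step; states falling in the same cell coalesce

# marginally, phi(s) - s should be N(0,1): check the update at s = 0
draws = np.array([multishift_map(rng)(0.0) for _ in range(20_000)])
print(round(float(draws.mean()), 2), round(float(draws.std()), 2))  # ~ 0, ~ 1
```

The same shared randomness $(Z, U, X)$ is used for every input $s$, which is what makes the map a coupler rather than an ordinary proposal.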
Therefore $L = -f^{-1}(U)$ and $R = f^{-1}(U)$, where $f^{-1}$ is the inverse function of $f$ (restricted to the nonnegative half-line). Take a random point within the rectangle, $X\sim\mathrm{Uniform}(L, R)$, and apply the mapping
\[ M_{L,R,X}(s) = \Big\lfloor \frac{s+R-X}{R-L} \Big\rfloor (R-L) + X. \]
This method is illustrated in Figure 1. In this algorithm $\sigma = 1$, but it can be changed to any $\sigma$. Note that $s$ is the initial value or the previous state. In fact, if we define a Markov chain, $s$ will change with every update of the chain. For instance, let $X_{t+1} = \rho X_t + \varepsilon_{t+1}$, where $\rho$ is given. Take $s = \rho X_t$ in the mapping and repeat the procedure.

Figure 1: Illustration of the normal multishift coupler.

2.3 CFTP

Consider an ergodic discrete Markov chain with transition probability $p_{ij} > 0$ from state $i$ to state $j$; ergodicity implies that there exists a unique stationary probability distribution $\pi(\cdot)$. Let the Markov chain start in some state and run for a long time; the probability that it ends up in state $i$ converges to $\pi(i)$. CFTP is the extreme implementation of this principle: it considers all possible starting values simultaneously and runs the chains until dependence on the starting values has vanished. Consider a collection of coupled Markov chains started simultaneously at every possible state of the chain at time $t = -2^n$ and ended at time $t = 0$. If the paths coalesce into one path in the block $(-2^n, -2^{n-1})$, then we are done; if not, double $t$ and start again, using the stored sequence of randomness. CFTP is a simulation algorithm that can be executed using various random operations. If the random operation is monotone, it allows the algorithm to sandwich the whole state space in one state; hence it suffices to start only two chains, from $\hat 0$ and $\hat 1$. In fact, CFTP looks backward block by block (doubling each time) until the two chains started from $\hat 0$ and $\hat 1$ at $t = -2^n$ coalesce in the $n$th block, and it outputs $X_0$. Suppose $\{X_t\}$ is a Markov chain with state space $\mathcal{X}$, $\phi()$ is a random operation with $X_t = \phi(X_{t-1}, U_t)$, and $U_t$ is the randomness at time $t$.
Then the CFTP algorithm is:

CFTP(2^n):
    t <- -2^n
    B_t <- X
    While t < 0:
        t <- t + 1
        B_t <- phi(B_{t-1}, U_t)
    If |B_{-2^{n-1}}| = 1:
        Return(B_0)
    Else:
        CFTP(2^{n+1})

where $|\cdot|$ denotes the cardinality of a set. CFTP is based on the observation that if the chain were run from all initial states in $\mathcal{X}$ at time $t = -\infty$ ($B_{-\infty} = \mathcal{X}$), all paths would coalesce to a singleton at time $t = 0$ ($B_0 = \{X_0\}$), and $X_0$ has the stationary distribution $\pi$. Figure 2 illustrates CFTP in the monotone case with a normal multishift coupler. The trajectories have the same starting points but at different times. The interested reader is referred to Huber (2016) for more detail.

Figure 2: Illustration of CFTP in the monotone case with a normal multishift coupler.

It is useful to describe how to illustrate CFTP, because it explains how and why this algorithm stores the randomness $U_t$. Figure A.2 shows how to construct the matrix of CFTP to store randomness. Suppose we have a function, let's call it multishift(), which, given a random vector, generates a new random vector. In fact, it takes the initial values and produces
the start block in Figure A.2, increasing it by $2^n$, $n = 2, 3, \dots$, in the subsequent iterations. The second block contains $2^2$ new vectors, shown as a white block at the top right of the start block. It is necessary to continue this block to time $t = 0$. Note that this sampling is simultaneous, so we have to store the randomness $U_t$ used in the start block. Then add the NA block as shown in Figure A.2, and continue the process recursively until coalescence occurs in a block. The R code for Figure 2 is provided in Appendix A.3.

2.4 ROCFTP

When CFTP increases the block length from $M$ to $2M$, it requires $U_{-2M},\dots,U_0$ in such a way that $U_{-M+1},\dots,U_0$ from the previous blocks must not change. It is necessary to store the randomness from $U_{-M+1}$ to $U_0$; not only is this storage difficult to implement, it also requires a lot of storage space, specifically when the random operation is not monotone. These issues led to the development of ROCFTP, proposed by Wilson (2000a), which employs a retroactive stopping rule. It involves executing random operations in a forward direction over time; then, at a certain point, it decides to halt and return a previous state instead of continuing from the current state. The time notation in ROCFTP is an inner notation, and it uses each $U_t$ once, without any storage. Let $C_{-n} = \{\text{paths coalesce in block } [-nT, -(n-1)T]\}$; the pairs $(C_{-n}, X_{-(n-1)T})$ depend only on $\{U_t\}_{t=-nT+1}^{-(n-1)T}$, so such pairs are independent across $n$. In Figure 3, suppose CFTP runs at $mT$ as the present time. This means that at least one $C_{-n}$ event, $n = 1,\dots,m$, has occurred. Then we could search backward for the first previous $C_{-n}$ event; the output will be the update at the present time, $t = mT$. This means we could relabel the time axis, and $X_{mT}$ could be relabeled as $X_0$, which is the output of CFTP. This is a simpler perfect sampler, and we don't need to reuse the $U_t$ values; it only requires saving the $X_{mT}$ values as output. The algorithm of ROCFTP is as follows:

1. Run from $t = 0$ forward to the first $C_n$ event.
Figure 3: Illustration of ROCFTP; the output is $X_{mT}$.

2. Follow the paths from $X_{(n+1)T}$, and run from $t = (n+1)T$ until $C_m$, where $m\ge n$, occurs.

3. Output $X_{mT}$.

With this algorithm one question arises: what is $T$, and how do we choose it? Suppose we can choose $T\le t$ and a coupling such that $P(\text{coalescence in the first block}\mid T) = p$. Blocks run independently; therefore
\[ P(\text{first coalescence in block } n \mid T) = p(1-p)^{n-1}. \]
This is a geometric random variable, so $\mathrm{E}[n\mid T] = 1/p$. Suppose $\tau$ is the total time that ROCFTP needs to generate one sample. Then $\tau = nT$, and by considering $T\le t$:
\[ \mathrm{E}[\tau] = \mathrm{E}[nT] = \mathrm{E}[\mathrm{E}[nT\mid T]] \le \frac tp. \]
A simulation was conducted to explore the effect of the block length $T$ in the ROCFTP algorithm. Let $p$ denote the probability of coalescence in the first block, $n$ the number of blocks needed until the first block with coalescence, and $\tau$ the total time to generate a sample. We implemented ROCFTP with a multishift coupler and target distribution $N(0,1)$, and executed
the algorithm 100K times for starting ranges $(-100, 100)$, $(-50, 50)$, and $(-10, 10)$, with various selections of the block time $T$. It is evident that as $T$ increases, $p$ also increases, and the total time $\tau$ to generate a sample decreases up to a certain range, beyond which $\tau$ starts to increase. For more details, refer to Table 1.

Table 1: Simulation for the block time $T$, running 100K iterations of ROCFTP with a normal multishift coupler and target distribution $N(0,1)$. The total time $\tau$ decreases as $T$ increases up to a certain range.

  Starting range $(\hat 0,\hat 1) = (-100, 100)$:
    T     p      $\bar n$   $\bar\tau$
    70    0.244  4.113      288
    80    0.481  2.062      165
    90    0.674  1.481      133
    100   0.800  1.249      125
    110   0.880  1.137      125
    120   0.930  1.075      129

  Starting range $(\hat 0,\hat 1) = (-50, 50)$:
    T     p      $\bar n$   $\bar\tau$
    50    0.102  9.710      485
    60    0.327  3.051      183
    70    0.556  1.794      126
    80    0.725  1.380      110
    90    0.833  1.200      108
    100   0.901  1.109      111

  Starting range $(\hat 0,\hat 1) = (-10, 10)$:
    T     p      $\bar n$   $\bar\tau$
    20    0.107  9.395      188
    30    0.356  2.821       85
    40    0.585  1.712       68
    50    0.745  1.343       67
    60    0.844  1.185       71

In CFTP or ROCFTP, once coalescence occurs, the past is forgotten and no initialization bias remains. However, removing the initialization bias alone may not suffice, as there could still be a coupling-time bias. The running time $T$ of CFTP is an unbounded random variable, which is unknown a priori. Fill (1997) noted that if we terminate the process when $T > T^*$, we introduce the impatience bias: in such cases, we are more likely to observe outcomes resulting from fast coalescence. For instance, consider an asymmetric random walk on $\{1,\dots,N\}$ that steps up 1 and down 2. It can coalesce at 1 in $N/2$ steps, but requires at least $N-1$ steps to coalesce at $N$. To address this, Fill introduced a variation that does not exhibit impatience bias, i.e., it is interruptible. Later, Fill et al. (2000) extended Fill's algorithm to general chains on quite general state spaces. In fact, they showed that, given coalescence, the value at the end of the block is independent of the value at the start.
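The block mechanics of ROCFTP can be sketched with a toy monotone chain: a reflecting $\pm 1$ random walk on $\{0,\dots,N\}$, whose stationary distribution is uniform. This is our construction for illustration (the paper uses a normal multishift coupler with target $N(0,1)$), and we follow the common reading of the read-once scheme in which the state entering each coalescent block, after one warm-up coalescence, is emitted as a perfect sample.

```python
# Toy ROCFTP sketch (ours): reflecting random walk on {0,...,N}, uniform target.
import numpy as np

def rocftp_samples(N, T, n_samples, rng):
    out, x, seen = [], 0, False
    while len(out) < n_samples:
        lo, hi, y = 0, N, x              # envelope (0-hat, 1-hat) plus the running path
        for _ in range(T):
            step = 1 if rng.random() > 0.5 else -1   # shared randomness for all paths
            lo = min(N, max(0, lo + step))
            hi = min(N, max(0, hi + step))
            y = min(N, max(0, y + step))
        if lo == hi:                     # the block's map coalesced
            if seen:
                out.append(x)            # state that entered this coalescent block
            seen = True
        x = y
    return np.array(out)

rng = np.random.default_rng(2)
samples = rocftp_samples(N=4, T=30, n_samples=4000, rng=rng)
print(np.bincount(samples, minlength=5) / len(samples))  # ~ 0.2 for each state
```

Each block uses its randomness once and discards it, which is the memory saving over CFTP; shrinking $T$ lowers the per-block coalescence probability $p$ and raises the expected number of blocks $1/p$, mirroring Table 1.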
3 Metropolis-multishift coupler

In the previous section, we covered CFTP and ROCFTP, both employing different random operations. While the multishift coupler offers monotonicity, ideal for encompassing the entire state space, it lacks a built-in Markov chain, so a suitable Markov chain for the target distribution must be constructed. On the other hand, traditional methods like the Metropolis algorithm do not demand this construction, but they lack monotonicity; consequently, they cannot collapse the entire state space into one point.

In the Metropolis sampler, we have the freedom to choose the proposal distribution. The algorithm is theoretically guaranteed to converge to its stationary distribution regardless of the proposal distribution selected. However, in practice, the choice of proposal distribution is critical, as it significantly impacts the convergence time. While the Metropolis algorithm with a normal multishift proposal tends to drive proposed values toward the target distribution in a monotonically convergent manner, the acceptance-rejection mechanism within the algorithm prevents strict monotonicity in the generated chain. Nevertheless, owing to the normal multishift proposal, the
chain typically demonstrates behavior closely resembling monotonicity. We will begin by developing and applying the Metropolis-multishift algorithm to various target distributions before integrating it as a random operation in ROCFTP.

3.1 Metropolis sampler

The Metropolis algorithm stands as one of the most influential algorithms in Markov Chain Monte Carlo (MCMC) methods, recognized as one of the top ten algorithms of the 20th century (Dongarra & Sullivan 2000). Let us run the traditional Metropolis algorithm simultaneously over the range (−10,10) with target distribution N(0,1) and proposal N(0,1). In each step, X_{t+1} = X_t + Z_{t+1}, where Z_{t+1} ∼ N(0,1). Figure 4 illustrates that all states within the considered starting range move independently and converge towards the target distribution, albeit without coalescing. Despite the lack of monotonicity in the Metropolis algorithm, the starting range tends to converge and move within a narrow band. To induce coalescence, one approach is to utilize the floor() function, as follows:

Figure 4: Applying Metropolis simultaneously on (−10,10) with target N(0,1) and Markov chain X_{t+1} = X_t + N(0,1).

X_{t+1} = ⌊X_t⌋ + ΔX_{t+1}, where ΔX_{t+1} ∼ N(0,1). Figure A.3 illustrates the Metropolis algorithm with this new Markov chain, showcasing the occurrence of coalescence. It visually demonstrates the convergence of the state space within each interval [n, n+1], where signals are emitted from each interval, leading to subsequent monotonic coalescence. By including the floor function, however, the chain is changed, and we can no longer use the Metropolis algorithm, since it requires a symmetric proposal; this leads us to the normal multishift coupler.

3.2 Metropolis-multishift coupler

When applied simultaneously to the considered state space, the Metropolis algorithm fails to induce coalescence. A well-chosen proposal for the Metropolis sampler can significantly enhance convergence speed and induce coalescence.
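To see why the floor makes coalescence possible, note that under shared randomness every state in the same unit interval proposes the identical value, so chains merge whenever they share an interval and both accept. The following is a toy coupled simulation of ours (as noted above, the floor proposal is not exactly symmetric, so this is an illustration of the merging mechanism rather than a valid Metropolis sampler):

```python
import math, random

def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def coupled_floor_metropolis(starts, steps, seed=0):
    """Run X' = floor(X) + Z for every start, reusing the same (Z, U) at
    each step (a grand coupling).  Chains sharing a unit interval propose
    the identical value, so they merge once both accept."""
    rng = random.Random(seed)
    states = list(starts)
    for _ in range(steps):
        z, u = rng.gauss(0, 1), rng.random()
        for i, x in enumerate(states):
            y = math.floor(x) + z
            if u <= min(1.0, normal_pdf(y) / normal_pdf(x)):
                states[i] = y
    return states

starts = [-10.0 + k for k in range(21)]  # 21 integer starts on [-10, 10]
final = coupled_floor_metropolis(starts, 200)
print(len(set(final)))  # far fewer distinct values than the 21 starts
```

After a couple of hundred shared-randomness steps, the grid of starting states has collapsed to a handful of values, mirroring the interval-wise merging visible in Figure A.3.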
The symmetry of the normal multishift coupler makes it suitable for integration within the Metropolis algorithm framework. This coupler does not require constructing a specific Markov chain to achieve the target distribution; it is not strictly monotone, but it exhibits behavior closely resembling monotonicity. This characteristic distinguishes it from traditional Metropolis algorithms, particularly in scenarios involving shifting between modes, where coalescence occurs more rapidly compared to conventional approaches. The Metropolis algorithm with a multishift proposal offers two distinct advantages: (1) it eliminates the need for constructing a specific Markov chain, and (2) it induces coalescence. The algorithm for Metropolis with a multishift coupler is outlined as follows:

1. Start with an arbitrary point X_0.
2. Generate a proposal y by the multishift coupler based on the previous value, y = ϕ(X_t, U_{t+1}).
3. Compute α = min{1, π(y)/π(X_t)}.
4. Generate u ∼ Uniform(0,1).
5. If u ≤ α, set X_{t+1} = y; else set X_{t+1} = X_t.
6. Repeat from the second step until t = T.

If the starting range is chosen significantly distant from the target distribution, or if the target distribution is multimodal, coalescence may occur at an incorrect position, or the chain may become trapped within one of the modes. To explore these scenarios, we will apply the Metropolis-multishift coupler to six different target distributions, including both unimodal and multimodal distributions. The considered target distributions are:

1. N(0,1)
2. N(30,1)
3. 0.8N(−2,1) + 0.2N(2,1)
4. 0.2N(−5,1) + 0.2N(5,1) + 0.6N(15,1)
5. 0.8Uniform(−100,100) + 0.2Beta(50,50)
6. 0.9Uniform(−100,100)
+ 0.1Beta(500,500)

The Metropolis-multishift coupler is implemented simultaneously over the range (−10,10) for the first case, N(0,1). Figure A.4 depicts the initial partitioning of the starting range into intervals, or sources, by this coupler, with one or several signals emitted from each source. Subsequently, these signals tend to undergo monotonic coalescence. This behavior contrasts with that of the Metropolis algorithm, which uniformly moves the entire state space and converges it into a narrow band.

The Metropolis-multishift coupler is also applied to the range (−10,10) for the target distribution N(30,1). However, this choice of starting range is unsuitable for this target distribution. Figure A.5 demonstrates how the algorithm gradually adjusts towards the target position. Selecting an inappropriate starting range can cause several issues, including slower coalescence and potential coalescence at incorrect positions, particularly when the starting range is significantly distant from the target. Nevertheless, by persisting with the chain, it eventually converges to the target distribution.

The third scenario features a multimodal target distribution, with modes located at −2 and 2. These modes are close enough for the standard normal multishift coupler to transition between them. We apply the Metropolis-multishift coupler to the range (−30,30) with the target distribution 0.8N(−2,1) + 0.2N(2,1). The results are illustrated in Figure A.6. The chain effectively shifts between both modes while preserving the respective weights of each mode.

The fourth scenario involves three modes that are relatively distant from each other, with a gap of 10 between each pair of modes. Given this setup, it is worth considering whether the standard normal multishift coupler can seamlessly transition between these modes. Can we anticipate behavior similar to the third scenario, where the coupler moves between modes effortlessly?
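The sampler described by the six steps listed in this subsection can be sketched compactly in code. This is our own sketch: `pi` is an unnormalised target density, and `phi` abstracts the multishift coupler; here it is replaced by the toy floor proposal from Subsection 3.1 purely so the snippet is self-contained (that stand-in is not exactly symmetric, so it is for illustration only):

```python
import math, random

def metropolis_multishift(pi, phi, x0, steps, seed=0):
    """Steps listed above: propose y = phi(x, noise), accept with
    probability alpha = min(1, pi(y)/pi(x))."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        y = phi(x, rng.gauss(0, 1))
        if rng.random() <= min(1.0, pi(y) / pi(x)):
            x = y
    return x

pi = lambda x: math.exp(-x * x / 2)    # unnormalised N(0,1) target
phi = lambda x, z: math.floor(x) + z   # toy stand-in for the multishift coupler
sample = metropolis_multishift(pi, phi, x0=8.0, steps=500)
print(sample)  # a value in the bulk of the target, despite starting at 8
```

Swapping `phi` for a true multishift coupler (and running the chain jointly from every point of the starting range) gives the coalescing behavior discussed in the scenarios above.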
Figure A.7 illustrates the behavior of the Metropolis-multishift with the standard normal coupler. It shows that the coupler moves away from the initial state space and becomes trapped within the three modes. The gap between modes exceeds the deviation of the proposal distribution, resulting in the chains becoming trapped. In this case, we used the multishift of the standard normal as the proposal for the Metropolis algorithm. To address this issue, a normal multishift with a larger scale, such as N(0, 3.5²), is needed. Choosing the appropriate scale significantly impacts the coalescence time. Figure 5 illustrates how the chain transitions between modes with the adjusted proposal distribution.

Figure 5: Applying Metropolis-multishift simultaneously on (−30,30) with target 0.2N(−5,1) + 0.2N(5,1) + 0.6N(15,1) and proposal multishift N(0, 3.5²).

The fifth case comprises a target distribution with a mode between 0.4 and 0.6, featuring long tails spanning from −100 to 100, as depicted in Figure A.8. Overall it resembles a Dirac delta function, but upon closer inspection a beta density emerges. We apply the Metropolis-multishift coupler simultaneously over the range (−100,100) with the target distribution 0.8Uniform(−100,100) + 0.2Beta(50,50). Two different scales, 3.5² and 1, are considered for the multishift proposal. Given the wider range of this target distribution, coalescence occurs more rapidly with the larger scale, as
illustrated in Figures A.9 and A.10, where the mode is discernible in both cases. In the sixth case, we increased the weight of the uniform distribution to 0.9. While the mode is detectable in the figure, it becomes more ambiguous for larger weights. Despite the increased computational cost of the Metropolis-multishift method for targets such as this, the method remains effective. Note that not only are the weights changed, but the beta density is also altered to be narrower in the mode area. To enhance mode detection, the scale of the proposal is reduced, causing the chain to move more slowly. Consequently, the number of steps is increased to one and a half million. Figure A.11 demonstrates that the chains coalesce after 500K steps, with the mode clearly detectable.

The Metropolis-multishift coupler is not monotone, due to the acceptance-rejection process inherent in the Metropolis algorithm. However, it tends to exhibit nearly monotonic behavior towards the target distribution. Selecting two or more paths from the considered range does not alter the time of coalescence. Non-monotonic behavior typically arises in areas where the gap between paths exceeds the scale of the proposal. The size and proximity of the considered state space significantly affect the time of coalescence, particularly if it is large or distant from the target. Choosing an appropriate proposal, especially with a suitable scale, can decrease the time of coalescence. Therefore, the time of coalescence in the Metropolis-multishift coupler depends on: (1) the starting range, and (2) the chosen scale of the proposal. The next two subsections delve into the convergence properties of the Metropolis-multishift coupler and the optimal ranges, with the goal of selecting suitable starting ranges and offering solutions for targets with unbounded state spaces.

3.3 Convergence properties

Let π represent the target distribution.
Consider a bounded starting range with probability greater than 1 − ε, where ˆ0 and ˆ1 denote the minimum and maximum of this range, respectively. The selection of the starting range and its impact on sampling is discussed in Subsection 3.4. The chain {ˆ0_t} starts from ˆ0 at t = 0, governed by the transition kernel of a Metropolis-multishift coupler with π as its stationary distribution; the same applies to the chain {ˆ1_t}. Initiate a primary chain {X_t} using the same Metropolis-multishift coupler from an arbitrary point X_0 at t = 0 within the starting range, ensuring that all points in the starting range coalesce with the two extreme chains ˆ0_t and ˆ1_t as auxiliary chains. The chains {ˆ0_t}, {ˆ1_t}, and {X_t} run simultaneously, or jointly. Let T be the length of a block such that P(coupling event at time t | t < T) = p > 0. The chain ˆ0_t resets to ˆ0 when mod(t, T) = 0, and similarly for the chain ˆ1_t.

The coupling event inequality (Lindvall 1992, Thorisson 2000) connects the distribution of the coupling time to the convergence properties of an MCMC algorithm. We provide some definitions to explain this inequality for the Metropolis-multishift coupler. Let X and X̂ be random variables with distributions L(X) and L(X̂) respectively, defined on a probability space (X, F). The total variation distance between L(X) and L(X̂) is given by:

∥L(X) − L(X̂)∥ = 2
sup_{A∈F} |L(X ∈ A) − L(X̂ ∈ A)|.

Suppose T*_ˆ0 denotes the coupling time of the Markov chains {X_t} and {ˆ0_t}, and T*_ˆ1 the coupling time of the Markov chains {X_t} and {ˆ1_t}, in a successful block of length T. The coupling event inequality (Lindvall 1992, Thorisson 2000) implies that

X_{T*_ˆ0} = ˆ0_{T*_ˆ0} =⇒ ∥L(X_t) − L(ˆ0_t)∥ ≤ 2P(T*_ˆ0 > t),

and similarly for X_{T*_ˆ1} = ˆ1_{T*_ˆ1}. Define T* = max(T*_ˆ0, T*_ˆ1); hence

∥L(X_t) − L(ˆ0_t)∥ + ∥L(X_t) − L(ˆ1_t)∥ ≤ 4P(T* > t),   (1)

where T* denotes the coupling time of the Markov chains {X_t}, {ˆ0_t}, and {ˆ1_t}.

The ergodic properties of the Metropolis algorithm have been thoroughly investigated by Andrieu & Moulines (2006), Atchadé & Perron (2007), and Wang & Ling (2017), particularly regarding symmetric proposals. Let us investigate this property for the Metropolis-multishift coupler. Consider P as a π-irreducible Metropolis-multishift kernel, with a one-step transition kernel denoted P(x, dy) and a t-step transition kernel P^t(x, dy), where π represents the stationary distribution. We denote the joint density of two independent draws from π as π², and Q^t represents the joint distribution of t iterates of the primary Markov chain X_t: Q^t(dx, dy) = P^t(x, dy)π(dx). Let the auxiliary chain ˆ0_t start from ˆ0 in a successful block t < T; then Q^t_ˆ0(dy) = P^t(ˆ0, dy), and the same for ˆ1_t. By Theorem 1 of Johnson (1998),

∥π² − Q^t∥ ≤ 2∥Q^t_ˆ0 − Q^t∥ and ∥π² − Q^t∥ ≤ 2∥Q^t_ˆ1 − Q^t∥,   (2)

hence by using (1),

∥π² − Q^t∥ ≤ 4P(T* > t),   (3)

where P(T* > t) is the proportion of times that the Markov chains {X_t}, {ˆ0_t}, and {ˆ1_t} fail to couple in t or fewer steps. Note that if the number of auxiliary paths increases from 2 to k, inequality (3) still holds, where T* represents the coupling time of the k auxiliary paths and the primary chain X_t. In fact, if the Metropolis-multishift coupler coalesces the whole starting range to a single point, then the convergence remains independent of the number of auxiliary paths.
Subsection 3.5 demonstrates that the two extreme auxiliary paths are sufficient for the Metropolis-multishift coupler to coalesce the entire starting range in t steps.

Proposition 1 Suppose Q^t represents the joint distribution of t iterations of the Markov chain {X_t} governed by a π-irreducible Metropolis-multishift coupler, and T* is the coupling time of the two extreme auxiliary chains {ˆ0_t} and {ˆ1_t} with the primary chain {X_t}. In this context, inequality (3) holds.

Proposition 1 offers an upper bound for the total variation, and if this bound decreases rapidly as t increases, then inequality (3) serves as a mechanism for generating samples using the Metropolis-multishift coupler. The Metropolis-multishift coupler is uniformly ergodic if there exist positive constants M and r < 1 such that ∥π² − Q^t∥ ≤ M r^t (Tierney 1994). In the remainder of this subsection, the goal is to explain P(T* > t), supported by a simulation to provide justification.

Suppose a Metropolis-multishift coupler operates within a suitable starting range simultaneously, as discussed in Subsection 3.4. The multishift coupler, acting as the proposal, is a step function that partitions the starting range into intervals and proposes a unique value for each interval. In particular, the mapping ⌊(s + R − X)/(R − L)⌋(R − L) + X proposes a unique value for all s ∈ [X − L, X + R]. Note that L and R are randomly generated from a normal distribution at each step, while X is
uniformly selected from the interval [L, R]. Once X is generated, the algorithm creates an interval around it. Similar intervals are then constructed on the left and right sides of this interval, effectively dividing the starting range into intervals. The mapping then proposes a unique value for each of these intervals. The proposed value is accepted with probability α = min{1, π(X_t)/π(X_{t−1})} and rejected with probability 1 − π(X_t)/π(X_{t−1}). Note that when the proposed value is rejected, π(X_t)/π(X_{t−1}) < 1.

Consider a t sufficiently large for the coalescence of all paths. Since the starting range is bounded, achieving the coalescence of all paths requires at least a minimum number of successive iterations, say m iterates. Non-coalescence then implies that the number of successive accepted iterations is less than m. Define M as the maximum of all m − 1 values of α, and r as the maximum of the rejection probabilities 1 − π(X_t)/π(X_{t−1}). Note that π(X_t) > 0, since if π(X_t) = 0, then X_t would lie in a null set. Hence,

P(T* > t) ≤ M^{m−1} r^{t−m+1},   (4)

where m < t and 0 < r < 1. This heuristic explanation demonstrates that P(T* > t) decays exponentially as the number of iterations t increases.

To demonstrate the exponential decrease of P(T* > t), we executed the Metropolis-multishift coupler on three paths starting from (−10, 0, 10) for a target of N(0,1), with 100K replications for each t = 1 to 100. Figure 6 depicts the results for t = 1 to 79; beyond t = 79, P(T* > t) becomes zero. This indicates that more than 100K replications would be required to observe at least one run without coalescence for t = 80 and beyond, given the exceedingly small probability of non-coalescence. Note that P(T* > t) = 1 for the first 7 observations, signifying that the Metropolis-multishift coupler requires a minimum of 8 successive steps for the first coalescence, as described by m in equation (4).
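The step-function behavior of the quoted mapping can be verified directly: for fixed L < R, the map ⌊(s + R − X)/(R − L)⌋(R − L) + X shifts every s by an amount lying in (L, R], so all points of one width-(R − L) interval receive the same proposal. A quick numeric check of ours (the endpoint values below are arbitrary illustrations):

```python
import math, random

def multishift_map(s, X, L, R):
    """Step-function proposal map from the text:
    floor((s + R - X) / (R - L)) * (R - L) + X."""
    w = R - L
    return math.floor((s + R - X) / w) * w + X

rng = random.Random(7)
a, b = rng.gauss(-2, 1), rng.gauss(2, 1)
L, R = min(a, b), max(a, b)      # arbitrary endpoints with L < R
X = L + rng.random() * (R - L)   # X uniform on [L, R]
for _ in range(1000):
    s = rng.uniform(-50, 50)
    shift = multishift_map(s, X, L, R) - s
    # every point is shifted by an amount inside (L, R]
    assert L - 1e-9 < shift <= R + 1e-9
print("all shifts inside (L, R]")
```

Because the shift is bounded by (L, R] for every s simultaneously, the map moves the whole starting range coherently while collapsing each interval to a single point, which is exactly the source of the coupler's near-monotone coalescence.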
The fitted curve for log(P(T* > t)) from t = 1 to 79 is

log(P(T* > t)) = −69.3064 e^{−84.4791 t^{−0.8734}}.

Extrapolating to t = 150, the fit yields P(T* > t) = e^{−23.96207}. According to Theorem 2 of Johnson (1998), for n independent samples from the target π(·), the total variation is bounded by ∥π^n − Q^t_n∥ ≤ 4nP(T* > t). If we take a sample size of n = 1000K and a block length of T = 150, then the total variation from the target is bounded by ∥π^n − Q^t_n∥ ≤ 4000000 e^{−23.96207} ≤ 0.000157. Figure 6 illustrates an exponential decay much stronger than the linear decays depicted in Johnson (1998).

Figure 6: Logarithms of P(T* > t) versus t, with 100K replications for each t.

3.4 The most interesting range

In Subsection 3.2, we discussed two factors influencing fast coalescence: the starting range and the scale of the proposal. Here, we delve into the starting range. Selecting a starting range significantly distant from the target may cause the two starting chains to coalesce at a point far from the target distribution. When combined with a small value of T, this can introduce bias into the sampling process. For instance, in Figure 7, the target in both cases
is N(30,1). The first case selects a starting range of [−100, −95], resulting in coalescence around −70 after 100 steps, while the second case uses the starting range [−200, −190], leading to coalescence around −100 after 200 steps.

When dealing with mixture distributions featuring multiple modes, selecting the wrong starting range alongside a small scale can lead to chains becoming trapped in a specific mode, or prevent coalescence altogether. For instance, in the fourth case with three modes and a small-scale proposal, Figure A.12 illustrates how adjusting the range can inadvertently trap chains in one mode, potentially misleading experimenters into believing the algorithm is functioning correctly. Conversely, opting for a larger starting range with a small scale may result in chains getting stuck in two modes, completely avoiding coalescence. In another scenario where three modes have varying gaps, selecting a larger scale and an incorrect starting range can result in rapid coalescence with a shorter block length T, leading to biased samples that favor the two closer modes. Such issues arise from the wrong choice of range. Evaluating the target distribution π(x) by graphing it can assist in selecting an appropriate starting range. While this process may not be straightforward, it provides a general understanding of the distribution's modes, aiding in better range selection.

Figure 7: An illustration of Metropolis-multishift applied simultaneously with target N(30,1) and proposal N(0,1), starting from different ranges.

Foss & Tweedie (1998) highlighted the necessity of a uniformly ergodic underlying Markov chain for the proper functioning of CFTP. However, numerous common chains used in Bayesian inference possess unbounded state spaces and lack uniform ergodicity. Murdoch (2000) tackled this challenge by proposing solutions through various Markov chain constructions.
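As a quick arithmetic check of the bound quoted in Subsection 3.3: the closed form below is our reconstruction of the reported fit, chosen so that it reproduces the quoted extrapolation at t = 150.

```python
import math

# Reconstructed fit for log P(T* > t) (coefficients as reported in the text):
def log_tail(t):
    return -69.3064 * math.exp(-84.4791 * t ** -0.8734)

lp = log_tail(150)
print(lp)                        # close to -23.96207, as quoted
print(4_000_000 * math.exp(lp))  # TV bound for n = 1000K: about 0.000157
```

The extrapolated value and the resulting total-variation bound agree with the numbers reported in Subsection 3.3.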
Additionally, Wilson (2000b) introduced ˆ0 and ˆ1 for bounded state spaces. Here, we delve into a more practical solution to address this issue.

Considering the standard normal distribution, most normal tables provide a largest bound of [−3.59, 3.59], with P([−3.59, 3.59]) = 0.9996. Therefore, selecting the range [−10,10] for the target N(0,1) ensures that almost all samples fall within this range, which is more than sufficient for practical purposes. The likelihood of generating a sample outside the range [−10,10] is practically zero. Consequently, incurring significant computational cost for such a negligible probability is unreasonable, leading us to define the "Most Interesting Range" (MIR).

Definition 2 Consider a Markov chain with target π, state space X, and an interest probability 1 − ε, sufficiently large, and define F_ε = {A ⊂ X : π(A) ≥ 1 − ε}. Let µ be the Lebesgue measure for continuous state spaces, or the cardinality for discrete state spaces; then

MIR = arg inf_{A∈F_ε} µ(A).

MIR offers two significant advantages: (1) it provides a small, bounded starting range, and (2) it selects the correct part of the state space, steering clear of positions distant from the target. By this definition, we can set ˆ0 = inf(MIR) and ˆ1 = sup(MIR), with MIR^C = X \ MIR representing the unlikely range, where π(MIR^C) = ε. The MIR encourages the selection
of a range around the mode(s). While identifying the MIR in complex posteriors is challenging, keeping it in mind proves invaluable. Now, the question arises: can the algorithm generate a sample from the unlikely range? If so, with what probability?

Suppose ϕ : X → X represents the random operation of the Metropolis-multishift, running with two auxiliary chains {ˆ0_t} and {ˆ1_t}, and a primary chain {X_t}, starting from ˆ0, ˆ1, and an arbitrary position X_0 in the MIR, respectively. In a successful block, the coalescence of the three chains ˆ0_t, ˆ1_t, X_t indicates that the primary chain has forgotten its past states. In fact, X_T = ϕ_T ∘ ϕ_{T−1} ∘ ··· ∘ ϕ_0(ˆ0) = ϕ_T ∘ ϕ_{T−1} ∘ ··· ∘ ϕ_0(ˆ1) does not depend on ˆ0, ˆ1, or any x ∈ MIR. If T is sufficiently large, then X_T converges to the stationary distribution π, as indicated by ∫ P(ϕ(x) ∈ A) π(dx) = π(A), and becomes independent of X_0. Note that the pair (C, X_T) depends solely on the randomness {U_t}_{t=1}^{T}. For such a sufficiently large T:

P(X_0 ∈ MIR, X_T ∈ MIR^C | C) = (1 − ε)ε,

where C is the coupling event of the three chains, ˆ0_t = ˆ1_t = X_t with t < T. In ROCFTP, the diagnosis mechanism ensures convergence and independent draws from the target distribution π. Therefore, in ROCFTP with the Metropolis-multishift coupler, it is sufficient to choose the length of the block T such that P(C | T) = p > 0.

Proposition 2 Choosing MIR as the starting range reduces the likelihood of the Metropolis-multishift coupler reaching MIR^C by a factor of 1 − ε.

The probability that a chain attains a value from the unlikely range is (1 − ε)ε. Thus, selecting the MIR affects ROCFTP hitting MIR^C by a factor of 1 − ε, which is adequate for practical purposes.

3.5 How many starting paths are needed?

To reduce the computational cost of ROCFTP, one strategy is to minimize the number of starting paths. When dealing with a monotone coupler and a bounded state space, it suffices to demonstrate the coalescence of two paths originating from ˆ0 and ˆ1.
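Returning to Definition 2 for a moment: for a symmetric unimodal target, the MIR is just the central quantile interval. A minimal numeric sketch of ours for N(0,1), where ε is a hypothetical choice picked to echo the 0.9996 table coverage cited above:

```python
from statistics import NormalDist

eps = 0.0004                   # hypothetical epsilon
nd = NormalDist(0, 1)
hat0 = nd.inv_cdf(eps / 2)     # hat0 = inf(MIR)
hat1 = nd.inv_cdf(1 - eps / 2) # hat1 = sup(MIR)
print(hat0, hat1)              # roughly (-3.54, 3.54)
print(nd.cdf(3.59) - nd.cdf(-3.59))  # about 0.9997, the table bound cited above
```

Any bounded range containing this interval, such as the [−10, 10] used throughout, makes the probability of starting outside the MIR negligible at essentially no extra cost.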
The monotonic property of a coupler ensures that all points in the state space will ultimately coalesce. Although the Metropolis-multishift coupler is not strictly monotone, due to its acceptance-rejection process, it exhibits behavior close to monotonicity, largely thanks to the multishift coupler. If the state space is continuous, the multishift coupler transforms the entire state space into a countable set of randomly distributed points. This characteristic underscores the effectiveness of the multishift coupler, as these points progressively coalesce in each subsequent step. Figure A.13 visually demonstrates how the Metropolis-multishift coupler condenses the interval [−2, 2] to three points within four steps.

We aim to demonstrate that the near-monotone behavior of the Metropolis-multishift coupler is sufficient for use in ROCFTP. This implies that if the paths ˆ0_t and ˆ1_t coalesce, then the entire state space will also coalesce. Therefore, constructing ROCFTP based on the two paths ˆ0_t and ˆ1_t is adequate. To provide simulation justification, we considered the first four targets introduced in Subsection 3.2, representing the most critical cases. We calculated the time of coalescence simultaneously for 2, 10, 100, 1000, and 10000 paths. Simultaneously, in this context, means that these paths are inclusive of each other; for instance, the experiment with
100 paths encompasses the experiment with 10 paths. In fact, these experiments run jointly using the same randomness.

Table 2 displays the mean time of coalescence for the selected four targets, calculated from 1000 replications. It shows minimal differences in the mean coalescence time between two paths and more paths, especially for the first three targets. This indicates that whether we start with two paths or 10000 paths, the results are nearly identical, with all paths coalescing at almost the same time. In essence, any increase in coalescence time due to using more paths is negligible. The results highlight that if the two paths ˆ0_t and ˆ1_t coalesce, then the entire state space will also coalesce, even if it takes slightly longer.

Target                              MIR        Proposal    2 Paths   10 Paths   100 Paths   1000 Paths   10000 Paths
N(0,1)                              [−10,10]   N(0,1)      29.663    29.671     29.678      29.678       29.678
N(30,1)                             [20,40]    N(0,1)      29.892    29.901     29.901      29.901       29.901
0.8N(−2,1)+0.2N(2,1)                [−10,10]   N(0,1)      42.542    42.546     42.546      42.546       42.546
0.2N(−5,1)+0.2N(5,1)+0.6N(15,1)     [−15,25]   N(0,3.5²)   148.320   150.978    151.343     151.343      151.343

Table 2: Simulated mean coalescence time of the Metropolis-multishift coupler with different targets, based on 1000 independent iterations.

Table 3 presents the percentage of equal coalescence times with different numbers of starting paths. In most cases, whether the chains start from ˆ0 and ˆ1 or from the entire state space, they coalesce at the same time. The number of starting paths does not significantly affect the coalescence time. This observation underscores the nearly monotone behavior of the Metropolis-multishift coupler.
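The subset argument behind Tables 2 and 3 (with shared randomness, the coalescence time of the full path set can never be smaller than that of the two extreme paths, and is rarely much larger) can be reproduced with the toy floor-coupler used earlier. This is our illustration, not the paper's multishift coupler:

```python
import math, random

def coalescence_time(starts, seed, cap=100_000):
    """Steps until all coupled chains X' = floor(X) + Z (shared Z, U,
    target N(0,1)) merge to a single value; a toy stand-in for the
    Metropolis-multishift coupler."""
    rng = random.Random(seed)
    states = set(starts)
    t = 0
    while len(states) > 1 and t < cap:
        t += 1
        z, u = rng.gauss(0, 1), rng.random()
        nxt = set()
        for x in states:
            y = math.floor(x) + z
            accept = u <= math.exp((x * x - y * y) / 2)  # min(1, pi(y)/pi(x))
            nxt.add(y if accept else x)
        states = nxt
    return t

two = [coalescence_time([-10.0, 10.0], s) for s in range(200)]
many = [coalescence_time([-10.0 + k for k in range(21)], s) for s in range(200)]
print(sum(two) / 200, sum(many) / 200)  # compare mean coalescence times
```

Because the same seed drives both experiments, the two extreme chains follow identical trajectories in either run, so the full-set coalescence time is always at least the two-path time, matching the inclusivity of the experiments described above.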
Target                              2 Paths   10 Paths   100 Paths   1000 Paths   10000 Paths
N(0,1)                              99.2      99.9       100         100          100
N(30,1)                             99.6      100        100         100          100
0.8N(−2,1)+0.2N(2,1)                99.9      100        100         100          100
0.2N(−5,1)+0.2N(5,1)+0.6N(15,1)     97.6      99.6       100         100          100

Table 3: The percentage of equal coalescence times with different numbers of starting paths for the Metropolis-multishift coupler. The range and proposal settings are consistent with those in Table 2.

Table 4 presents the summary statistics of coalescence time with two paths, based on 10000 replications. It shows a notable gap between the central tendency and the maximum values, indicating a long right tail in the distribution of coalescence time (see Figure A.14). Based on the results in this subsection, we will construct ROCFTP using the Metropolis-multishift coupler with paths starting from ˆ0 and ˆ1.

Target                              Min    1st Q.   Median   Mean    3rd Q.   Max
N(0,1)                              7.00   24.00    29.00    29.59   34.00    84.00
N(30,1)                             7.00   24.00    29.00    29.60   34.00    90.00
0.8N(−2,1)+0.2N(2,1)                9.00   29.00    38.00    42.59   51.00    259.00
0.2N(−5,1)+0.2N(5,1)+0.6N(15,1)     3.00   62.00    116.00   151.1   202.00   1384.00

Table 4: Summary statistics of coalescence time using the Metropolis-multishift coupler with different targets and two starting paths. The range and proposal settings are consistent with those in Table 2.

4 ROCFTP with Metropolis-multishift coupler

ROCFTP initiates sampling from the second coupling event, where the first coupling event establishes
the primary chain for sampling. The two auxiliary chains {ˆ0_t} and {ˆ1_t} function as a diagnostic mechanism to select an independent sample from the stationary distribution. The convergence of the chain to the stationary distribution π is established by the first coupling event. We are now prepared to sample using ROCFTP with the Metropolis-multishift coupler. In this context, we conduct ROCFTP with the Metropolis-multishift coupler on the four targets outlined in Table 2, along with their related MIRs and proposals. The median time of coalescence from Table 4 is used as the block length T. To assess goodness of fit, we use Q-Q plots based on 10K samples from each target. Figure A.15 illustrates a comparison between the histogram and the related theoretical density, and includes the Q-Q plot for each target.

In panel (d) of Figure A.15, the histogram aligns well with the theoretical density, but the Q-Q plot reveals gaps among the modes. These outliers in the Q-Q plot appear unusual because they deviate from the expected proportions. Ideally, there should be around 2000, 2000, and 6000 samples around each mode according to the target distribution. However, ROCFTP with the Metropolis-multishift coupler may generate samples that do not exactly match these proportions. As a result, if the generated samples number more or fewer than 2000 for the first (left) mode, the Q-Q plot will show outliers either above or below the Q-Q line. To identify these outliers, we calculate the absolute difference between the empirical and theoretical quantiles, as shown in Figure A.16. There are 30 outliers detected, amounting to about 0.3% of all the samples. Despite these outliers, ROCFTP with the Metropolis-multishift coupler achieves a close match to the expected proportions overall.

5 Discussion

ROCFTP with the Metropolis-multishift coupler provides perfect samples for complex posteriors and converges rapidly to the target distribution at an exponential rate.
It achieves faster convergence compared to the algorithm proposed by Johnson (1998). Moreover, it is applicable to posteriors with unbounded state spaces by leveraging the MIR, which reduces the probability of the algorithm hitting MIR^C by a factor of 1 − ε. Additionally, its performance on mixture distributions has been investigated.

Berthelsen et al. (2010) utilized ROCFTP with the Gibbs sampler to generate samples for mixture models, but they did not examine the performance and properties of ROCFTP with the Gibbs sampler. On the other hand, the approach by Jacob et al. (2020) provides an unbiased estimate at a lower computational cost. However, ROCFTP with the Metropolis-multishift coupler provides exact samples, making it suitable for estimating and investigating the properties of more complex parameters in posteriors.

References

Aldous, D. J. (1990), 'The random walk construction of uniform spanning trees and uniform labelled trees', SIAM Journal on Discrete Mathematics 3(4), 450–465.

Andrieu, C. & Moulines, É. (2006), 'On the ergodicity properties of some adaptive MCMC algorithms'.

Atchadé, Y. F. & Perron, F. (2007), 'On the geometric ergodicity of Metropolis-Hastings algorithms', Statistics 41(1), 77–84.

Berthelsen, K. K., Breyer, L. A. & Roberts, G. O. (2010), 'Perfect posterior simulation for mixture and hidden Markov models', LMS
Journal of Computation and Mathematics 13, 246–259.

Besag, J., Green, P., Higdon, D. & Mengersen, K. (1995), 'Bayesian computation and stochastic systems', Statistical Science pp. 3–41.

Besag, J. & Green, P. J. (1993), 'Spatial statistics and Bayesian computation', Journal of the Royal Statistical Society: Series B (Methodological) 55(1), 25–37.

Dobrushin, R. L. (1971), 'Markov processes with a large number of locally interacting components: existence of a limit process and its ergodicity', Problemy Peredachi Informatsii 7(2), 70–87.

Doeblin, W. (1938), 'Exposé de la théorie des chaînes simples constantes de Markov à un nombre fini d'états', Mathématique de l'Union Interbalkanique 2(77-105), 78–80.

Dongarra, J. & Sullivan, F. (2000), 'Guest editors' introduction: The top 10 algorithms', Computing in Science & Engineering 2(1), 22–23.

Fill, J. A. (1997), An interruptible algorithm for perfect sampling via Markov chains, in 'Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing', ACM, pp. 688–695.

Fill, J. A., Machida, M., Murdoch, D. J. & Rosenthal, J. S. (2000), 'Extension of Fill's perfect rejection sampling algorithm to general chains', Random Structures and Algorithms 17(3-4), 290–316.

Foss, S. G. & Tweedie, R. L. (1998), 'Perfect simulation and backward coupling', Stochastic Models 14(1-2), 187–203.

Frigessi, A. (1995), '[Bayesian computation and stochastic systems]: Comment', Statistical Science 10(1), 41–43.

Glynn, P. W. & Rhee, C.-h. (2014), 'Exact estimation for Markov chain equilibrium expectations', Journal of Applied Probability 51(A), 377–389.

Goldstein, S. (1979), 'Maximal coupling', Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 46(2), 193–204.

Holley, R. A. & Liggett, T. M. (1975), 'Ergodic theorems for weakly interacting infinite systems and the voter model', The Annals of Probability pp. 643–663.

Huber, M. L. (2016), Perfect Simulation, Vol. 148, CRC Press.

Jacob, P. E., O'Leary, J.
& Atchad´ e, Y. F. (2020), ‘Unbiased markov chain monte carlo methods with couplings’, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 82(3), 543–600. Johnson, V. E. (1996), ‘Studying convergence of markov chain monte carlo algorithms using coupled sample paths’, Journal of the American Statistical Association 91(433), 154–166. Johnson, V. E. (1998), ‘A coupling-regeneration scheme for diagnosing convergence in markov chain monte carlo algorithms’, Journal of the American Statistical Association 93(441), 238–248. Leigh, G. M., Yang, W.-H., Wickens, M. E. & Northrop, A. R. (2023), ‘Perfect simulation from unbiased simulation’, arXiv preprint arXiv:2308.07176 . 32 Letac, G. (1986), ‘A contraction principle for certain markov chains and its applications’, Contemp. Math 50, 263–273. Lindvall, T. (1992), ‘Lectures on the coupling method. wiley series in probability and mathematical statistics: probability and mathematical statistics’. Mengersen, K. L. & Tweedie, R. L. (1996), ‘Rates of convergence of the hastings and metropolis algorithms’, The annals of Statistics 24(1), 101–121. Murdoch, D. (2000), ‘Exact sampling for bayesian inference: Unbounded state spaces’, Monte Carlo Methods 26, 111–121. Mykland, P., Tierney, L. & Yu, B. (1995), ‘Regeneration in markov chain samplers’, Journal of the American Statistical Association 90(429), 233–241. Nummelin, E. (1978), ‘A splitting technique for harris recurrent markov chains’, Zeitschrift f¨ ur Wahrscheinlichkeitstheorie und verwandte Gebiete 43(4), 309–318. Peskun, P. H. (1973), ‘Optimum monte-carlo sampling using markov chains’,
|
https://arxiv.org/abs/2504.12872v1
|
Biometrika 60(3), 607–612. Propp, J. G. & Wilson, D. B. (1996), ‘Exact sampling with coupled markov chains and applications to statistical mechanics’, Random structures and Algorithms 9(1-2), 223– 252. Sokal, A. D. & Thomas, L. E. (1988), ‘Absence of mass gap for a class of stochastic contour models’, Journal of Statistical Physics 51(5), 907–947. Thorisson, H. (1995), ‘Coupling methods in probability theory’, Scandinavian journal of statistics pp. 159–182. Thorisson, H. (2000), ‘Coupling, stationarity and regeneration’. Tierney, L. (1994), ‘Markov chains for exploring posterior distributions’, the Annals of Statistics pp. 1701–1728. 33 Wang, Z. & Ling, C. (2017), ‘On the geometric ergodicity of metropolis-hastings algorithms for lattice gaussian sampling’, IEEE Transactions on Information Theory 64(2), 738–751. Wasserstein, L. N. (1969), ‘Markov processes on countable product space describing large systems of automata’, Probl. Pered. Inform 5, 64–73. Wilson, D. B. (2000 a), ‘How to couple from the past using a read-once source of random- ness’, Random Structures & Algorithms 16(1), 85–113. Wilson, D. B. (2000 b), ‘Layered multishift coupling for use in perfect sampling algorithms (with a primer on cftp)’, Monte Carlo Methods 26, 141–176. 34 A Appendix A.1 Figures Figure A.1: Expression of normal distribution as a combination of layered uniform rectan- gles. 35 Figure A.2: Storing scheme of randomness in a matrix for CFTP algorithm. The storing process starts from the left, and the white and dotted blocks under initial values determine the new and previous randomness in each step. Figure A.3: Applying Metropolis simultaneously on ( −10,10) with target N(0,1) and Markov chain Xt+1=⌊Xt⌋+N(0,1). 36 Figure A.4: Applying Metropolis-multishift simultaneously on ( −10,10) with target N(0,1). Figure A.5: Applying Metropolis-multishift simultaneously on ( −10,10) with target N(30,1). 
37 Figure A.6: Applying Metropolis-multishift simultaneously on ( −30,30) with target 0.8N(−2,1) + 0 .2N(2,1). Figure A.7: Applying Metropolis-multishift simultaneously on ( −30,30) with target 0.2N(−5,1) + 0 .2N(5,1) + 0 .6N(15,1). 38 Figure A.8: Illustration of 0 .8Uniform (−100,100) + 0 .2Beta (50,50) in far view of (−100,100) looks like a single line and in close up (0 ,1) it shows the density shape. Figure A.9: Applying Metropolis-multishift simultaneously on range ( −100,100) with tar- get 0.8Uniform (−100,100) + 0 .2Beta (50,50), and proposal multishift N(0,1). Figure A.10: Applying Metropolis-multishift simultaneously on range ( −100,100) with target 0 .8Uniform (−100,100) + 0 .2Beta (50,50), and proposal multishift N(0,3.52). 39 Figure A.11: Applying Metropolis-multishift simultaneously on eleven paths on the range (−100,100) with target 0 .9Uniform (−100,100) + 0 .1Beta (500,500), and proposal multi- shift N(0,0.12). Figure A.12: Visual demonstration of the Metropolis-multishift algorithm applied simulta- neously over different starting ranges with a target distribution of 0 .2N(−5,1)+0 .2N(5,1)+ 0.6N(15,1) using a proposal distribution of N(0,1). 40 Figure A.13: Illustration of Metropolis-multishift coupler in the first six steps. Figure A.14: Histogram of coalescence time with different targets. 41 (a) N(0,1) (b) N(30,1) (c) 0.8N(-2,1)+0.2N(2,1) (d) 0.2N(-5,1)+0.2N(5,1)+0.6N(15,1) Figure A.15: Test of fitting generated samples with target distribution of Table 2. 42 Figure A.16: The number of out fit points in Figure A.15. 43 A.2 R package R-package ROCFTP.MMS containing code to perform ROCFTP with Metropolis-multishift
described in the article. The package also contains the examples provided in the article.
https://cran.r-project.org/web/packages/ROCFTP.MMS/index.html

A.3 CFTP R code

# Indices of the even positions of a vector.
ev <- function(vec) {
  even <- NULL
  for (i in (1:floor(length(vec) / 2))) {
    even <- c(even, 2 * i)
  }
  return(even)
}

# Draw one layer (L, R) of the standard normal via a uniform slice.
L.R <- function() {
  z <- rnorm(1, 0, 1)
  u <- runif(1, 0, dnorm(z))
  main <- sqrt(-2 * log(sqrt(2 * pi) * u))
  L <- -main
  R <- main
  m <- c(L, R)
  return(m)
}

# Multishift update: snap each path onto the shifted lattice of width R - L.
# (The even and odd branches apply the same update map.)
update <- function(ro, inits, m, x) {
  even <- ev(inits)
  odd <- (1:length(inits))[-even]
  f <- rep(0, length(inits))
  for (i in even) {
    f[i] <- floor((inits[i] * ro + m[2] - x) / (m[2] - m[1])) * (m[2] - m[1]) + x
  }
  for (j in odd) {
    f[j] <- floor((inits[j] * ro + m[2] - x) / (m[2] - m[1])) * (m[2] - m[1]) + x
  }
  return(f)
}

# Run the multishift coupler forward niter steps from all starting values,
# generating fresh randomness; returns the paths with the randomness used.
multishift <- function(ro, inits, niter) {
  pred <- matrix(inits, nrow = 1)
  random <- NULL
  for (i in 1:niter) {
    m <- L.R()
    x <- runif(1, m[1], m[2])
    g <- update(ro, pred[i, ], m, x)
    pred <- rbind(pred, g)
    random <- rbind(random, c(m, x))
  }
  return(cbind(pred[2:nrow(pred), ], random))
}

# Same as multishift(), but replaying previously stored randomness.
multishift1 <- function(ro, inits, niter, random) {
  pred <- matrix(inits, nrow = 1)
  for (i in 1:niter) {
    m <- random[i, 1:2]
    x <- as.numeric(random[i, 3])
    g <- update(ro, pred[i, ], m, x)
    pred <- rbind(pred, g)
  }
  return(cbind(pred[2:nrow(pred), ], random))
}

# Coupling from the past with binary backoff: double the look-back window
# until all starting values have coalesced by time 0, reusing the stored
# randomness on each restart.
CFTP <- function(ro, start) {
  random <- NULL
  inits <- start
  niter <- 2
  tempsamp <- multishift(ro, start, niter)
  random <- tempsamp[, 3:5]
  tempsamp <- tempsamp[, 1:2]
  repeat {
    niter <- 2 * niter
    samples <- multishift(ro, start, niter)
    random1 <- samples[, 3:5]
    samples <- samples[, 1:2]
    cond <- samples[nrow(samples), ]
    inits <- samples[nrow(samples), ]
    b <- matrix(c(rep(NA, nrow(samples) * ncol(tempsamp) - 2), start),
                ncol = ncol(tempsamp), byrow = TRUE)
    padded <- cbind(b, samples)
    h <- multishift1(ro, inits, nrow(random), random)
    h <- cbind(tempsamp, h[, 1:2])
    tempsamp <- rbind(padded, h)
    random <- rbind(random1, random)
    if (length(unique(cond)) == 1) break
  }
  b <- c(rep(NA, ncol(tempsamp) - 2), start)
  tempsamp <- rbind(b, tempsamp)
  matplot(tempsamp, type = "l", col = "blue", lty = 1, axes = FALSE, ylab = " ")
}

CFTP(0.92, c(-100, 100))
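The doubling scheme used by CFTP above can also be illustrated compactly outside R. The following is a minimal, non-authoritative Python sketch for a two-state Markov chain (the transition matrix is invented for illustration, not taken from the article); the essential point is that the same stored randomness is reused on every restart, which is what makes the returned value an exact draw from the stationary distribution.

```python
import random

# Hypothetical 2-state transition matrix (rows sum to 1) -- illustration only.
P = [[0.7, 0.3], [0.4, 0.6]]

def step(x, u):
    # Deterministic update driven by a Uniform(0,1) draw (inverse-CDF coupling).
    return 0 if u < P[x][0] else 1

def cftp(rng):
    us = []   # stored randomness for times -1, -2, -3, ...
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())   # extend further into the past
        # Run every starting state from time -T to 0 with the SAME randomness.
        xs = [0, 1]
        for t in range(T - 1, -1, -1):
            xs = [step(x, us[t]) for x in xs]
        if len(set(xs)) == 1:         # coalesced: exact stationary sample
            return xs[0]
        T *= 2                        # otherwise look further into the past

rng = random.Random(1)
samples = [cftp(rng) for _ in range(2000)]
freq0 = samples.count(0) / len(samples)  # stationary P(X = 0) is 4/7 here
```

Note that `us[0]` always drives the transition at time −1, `us[1]` the one at time −2, and so on, so doubling the window never changes the randomness already assigned to a given time step.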
Query Complexity of Classical and Quantum Channel Discrimination

Theshani Nuradha* and Mark M. Wilde†
School of Electrical and Computer Engineering, Cornell University, Ithaca, New York 14850, USA

Quantum channel discrimination has been studied from an information-theoretic perspective, wherein one is interested in the optimal decay rate of error probabilities as a function of the number of unknown channel accesses. In this paper, we study the query complexity of quantum channel discrimination, wherein the goal is to determine the minimum number of channel uses needed to reach a desired error probability. To this end, we show that the query complexity of binary channel discrimination depends logarithmically on the inverse error probability and inversely on the negative logarithm of the (geometric and Holevo) channel fidelity. As a special case of these findings, we precisely characterize the query complexity of discriminating between two classical channels. We also provide lower and upper bounds on the query complexity of binary asymmetric channel discrimination and multiple quantum channel discrimination. For the former, the query complexity depends on the geometric Rényi and Petz Rényi channel divergences, while for the latter, it depends on the negative logarithm of the (geometric and Uhlmann) channel fidelity. For multiple channel discrimination, the upper bound scales as the logarithm of the number of channels.

CONTENTS

I. Introduction
II. Preliminaries and Notations
 A. Quantum Information-Theoretic Quantities
 B. Setup
  1. Symmetric Binary Channel Discrimination
  2. Asymmetric Binary Channel Discrimination
  3. Multiple Channel Discrimination
III. Problem Statement
IV. Query Complexity of Channel Discrimination
 A. Symmetric Binary Channel Discrimination
 B. Asymmetric Channel Discrimination
 C.
Multiple Channel Discrimination
Acknowledgments
References
A. Proof of Proposition 5
B. Proof of Theorem 6
C. Proof of Theorem 9
D. Proof of Theorem 10

*pt388@cornell.edu
†wilde@cornell.edu
arXiv:2504.12989v1 [quant-ph] 17 Apr 2025

I. INTRODUCTION

Quantum hypothesis testing has been studied in early works of quantum information theory [1–3]. In quantum state discrimination, the hypotheses are modelled as quantum states, and the task is to devise a measurement strategy that maximizes the probability of correctly identifying the state of a given quantum system. In the basic version of this problem, a quantum system is prepared in one of two possible states, denoted by ρ and σ, and it is the goal of the distinguisher, who does not know a priori which state was prepared, to determine the identity of the unknown state. In the case that n ∈ ℕ copies (or samples) are provided, the states are then denoted by ρ^{⊗n} and σ^{⊗n}. Having extra samples for the discrimination task decreases the probability of making an error in the process. It is well known that the error probability decreases exponentially fast in the number of samples, provided that ρ ≠ σ [4–9]. Quantum hypothesis testing can be generalized to the setting where there are multiple hypotheses [10], and it is also known in this case that the error probability generally decays exponentially fast with n [11]. A scenario analogous to quantum hypothesis testing with quantum channels is referred to as quantum channel discrimination.
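The exponential decay of the optimal error probability with the number of samples can be checked directly in the commuting (classical) special case. The sketch below, an illustration assuming nothing beyond the Python standard library, computes the exact equal-prior Bayes error for n i.i.d. flips of two hypothetical biased coins.

```python
import math

def binom_pmf(n, k, p):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def bayes_error(n, p, q):
    # Equal-prior optimal error for n i.i.d. samples of Bernoulli(p) vs
    # Bernoulli(q): (1 - (1/2) * ||P_n - Q_n||_1) / 2, where P_n and Q_n are
    # the induced Binomial(n, .) output laws and ||.||_1 is the L1 distance.
    l1 = sum(abs(binom_pmf(n, k, p) - binom_pmf(n, k, q)) for k in range(n + 1))
    return 0.5 * (1.0 - 0.5 * l1)

# errors[1] is exactly 0.45 for this pair, and the error shrinks with n.
errors = {n: bayes_error(n, 0.5, 0.6) for n in (1, 10, 50, 100)}
```

The decay rate of this sequence is governed by the classical Chernoff exponent of the two distributions.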
Here, the task is to use the unknown quantum channel several times (n times) to determine which channel was applied. Quantum channel discrimination in the asymptotic regime (i.e., n → ∞) has been studied in [12–20]. Ref. [21] also studied some settings in which a finite number of channel uses are sufficient to distinguish between two channels. More broadly, there has been significant interest in the topic of quantum channel discrimination [22–27]. Recently, Ref. [28] studied the non-asymptotic regime of this fundamental statistical task by considering the sample complexity of quantum hypothesis testing (discrimination of quantum states). Here, sample complexity refers to the optimal number of samples of the unknown state required to achieve a fixed non-zero error probability. Furthermore, Ref. [28] also explored several applications of the sample complexity of quantum hypothesis testing in quantum algorithms for simulation and unstructured search, and in quantum learning and classification. This non-asymptotic study also provided a foundation for studying information-constrained settings of quantum hypothesis testing, with the constraints being ensuring privacy [29, 30], where privacy is quantified by quantum local differential privacy [31, 32], and access to noisy quantum states instead of noiseless states [33]. In this work, we extend the study of quantum hypothesis testing to the more general setting of channel discrimination. Here, we consider how many uses of the unknown channel are required to achieve a fixed non-zero error probability of discriminating between two or multiple channels. We use the term "query complexity" to refer to the optimal number of channel uses in this scenario, as this terminology is common in theoretical computer science for similar settings, including unitary operator and gate discrimination [34–38].

Similar to the sample complexity of quantum hypothesis testing, we suspect that our findings will be useful in applications related to quantum learning theory, quantum computing, and quantum algorithms, where one has to distinguish between several channels.

Contributions: In this paper, we provide a comprehensive analysis of the query complexity of classical and quantum channel discrimination. First, we define the query complexity of three quantum channel discrimination settings, including symmetric binary (Definition 2), asymmetric binary (Definition 3), and symmetric multiple channel discrimination (Definition 4), while providing equivalent expressions in Remark 2. Next, we study the query complexity of symmetric binary channel discrimination in Theorem 6. To this end, we show that the query complexity scales logarithmically in the inverse error probability ε ∈ (0, 1) and inversely in the negative logarithm of the geometric channel fidelity (lower bound) and the Holevo channel fidelity (upper bound). Moreover, in Corollary 8, we obtain a tight characterization of the query complexity of discriminating two classical channels N and M. Furthermore, we characterize the query complexity of asymmetric binary channel discrimination with the use of the geometric Rényi and Petz Rényi channel divergences in Theorem 9. As the third setting, we establish bounds on the query complexity of symmetric multiple channel discrimination in Theorem 10. Here, we find that the query complexity scales with the negative logarithm of the channel fidelities, and the upper bound scales as O(ln(M)), where M is
the number of distinct channels to be discriminated.

Note: After the completion of our work, we came across the independent study [39], which also studies the non-asymptotic regime of quantum channel discrimination.

II. PRELIMINARIES AND NOTATIONS

In this section, we establish some notation and recall various quantities of interest and relations used in our paper. Let ℕ := {1, 2, ...}. Throughout our paper, we let ρ and σ denote quantum states, which are positive semi-definite operators acting on a separable Hilbert space and with trace equal to one. Let N_{A→B} and M_{A→B} be quantum channels, which are completely positive trace-preserving maps. Let I denote the identity operator. For every bounded operator A and p ≥ 1, we define the Schatten p-norm as

  ∥A∥_p := ( Tr[ (A†A)^{p/2} ] )^{1/p}.  (1)

A. Quantum Information-Theoretic Quantities

A divergence D(ρ∥σ) is a function of two quantum states ρ and σ, and we say that it obeys the data-processing inequality if the following holds for all states ρ and σ and every channel N:

  D(ρ∥σ) ≥ D(N(ρ)∥N(σ)).  (2)

Definition 1 (Generalized Divergences). Let ρ and σ be quantum states.

1. The normalized trace distance is defined as

  (1/2) ∥ρ − σ∥_1.  (3)

2. The Petz–Rényi divergence of order α ∈ (0, 1) ∪ (1, ∞) is defined as [40]

  D_α(ρ∥σ) := (1/(α − 1)) ln Q_α(ρ∥σ),  (4)

where

  Q_α(A∥B) := lim_{ε→0⁺} Tr[ A^α (B + εI)^{1−α} ],  for all A, B ≥ 0.  (5)

It obeys the data-processing inequality for all α ∈ (0, 1) ∪ (1, 2] [40]. Note also that D_{1/2}(ρ∥σ) = −ln F_H(ρ, σ), where the Holevo fidelity is defined as [41]

  F_H(ρ, σ) := ( Tr[ √ρ √σ ] )².  (6)

3. The sandwiched Rényi divergence of order α ∈ (0, 1) ∪ (1, ∞) is defined as [42, 43]

  D̃_α(ρ∥σ) := (1/(α − 1)) ln Q̃_α(ρ∥σ),  (7)

where

  Q̃_α(A∥B) := lim_{ε→0⁺} Tr[ ( A^{1/2} (B + εI)^{(1−α)/α} A^{1/2} )^α ],  for all A, B ≥ 0.  (8)

It obeys the data-processing inequality for all α ∈ [1/2, 1) ∪ (1, ∞) [44, 45]. Note also that D̃_{1/2}(ρ∥σ) = −ln F(ρ, σ), where the quantum fidelity is defined as [46]

  F(ρ, σ) := ∥ √ρ √σ ∥_1².  (9)

The Bures distance is defined as [47, 48]

  d_B(ρ, σ) := √( 2 ( 1 − √F(ρ, σ) ) ).  (10)

4.
The geometric Rényi divergence of order α ∈ (0, 1) ∪ (1, ∞) is defined as [49]

  D̂_α(ρ∥σ) := (1/(α − 1)) ln Q̂_α(ρ∥σ),  (11)

where

  Q̂_α(A∥B) := lim_{ε→0⁺} Tr[ (B + εI) ( (B + εI)^{−1/2} A (B + εI)^{−1/2} )^α ],  for all A, B ≥ 0.  (12)

It obeys the data-processing inequality for all α ∈ (0, 1) ∪ (1, 2] [50, Theorem 7.45]. For α = 1/2, we have

  D̂_{1/2}(ρ∥σ) = −ln F̂(ρ, σ),  (13)

where the geometric fidelity [51] is defined as

  F̂(ρ, σ) := inf_{ε>0} Tr[ ρ(ε) # σ(ε) ].  (14)

Here A # B denotes the matrix geometric mean of the positive definite operators A and B:

  A # B := A^{1/2} ( A^{−1/2} B A^{−1/2} )^{1/2} A^{1/2},  (15)

with ρ(ε) := ρ + εI and σ(ε) := σ + εI. Using the geometric fidelity, we also define

  d̂_B(ρ, σ) := √( 2 ( 1 − √F̂(ρ, σ) ) ).  (16)

5. The quantum relative entropy is defined as [52]

  D(ρ∥σ) := lim_{ε→0⁺} Tr[ ρ ( ln ρ − ln(σ + εI) ) ],  (17)

and it obeys the data-processing inequality [53].

Generalized divergences can be extended to channels as follows [54, Definition II.2]:

  D(N∥M) := sup_{ρ_{RA}} D( N_{A→B}(ρ_{RA}) ∥ M_{A→B}(ρ_{RA}) ).  (18)

Then, we can also define the channel variants of all the generalized divergences presented in Definition 1 (replacing D with the following: D_α, D̃_α, D̂_α, d_B, d̂_B). In the same spirit, channel fidelities (denoted as F to include F_H, F, F̂) are defined as

  F(N, M) := inf_{ρ_{RA}} F( N_{A→B}(ρ_{RA}), M_{A→B}(ρ_{RA}) ).  (19)

As shown in [55, Proposition 55], the channel fidelity based on F can be efficiently computed by means of a semi-definite program (SDP), and a similar statement can be made for F̂ by using the SDP for the geometric fidelity (see [56, Section 6] and [57, Eq. (1.11) and Lemma
3.20]) and the reasoning used to establish [55, Proposition 55]. Precisely, the SDP is given by

  [ F̂(N, M) ]^{1/2} = sup { λ : λ ≥ 0, W_{RB} Hermitian, λ I_R ≤ Tr_B[W_{RB}], ( Γ^N_{RB}  W_{RB} ; W_{RB}  Γ^M_{RB} ) ≥ 0 },  (20)

where the constraint is positivity of the indicated 2×2 block operator, and Γ^N_{RB} and Γ^M_{RB} are the Choi operators of N and M, respectively.

We make use of the following inequalities throughout our work. Let N_{A→B} and M_{A→B} denote quantum channels, and let ρ_{RA} and σ_{RA} denote quantum states.

1. Chain rule for the geometric Rényi divergence (for α ∈ (1, 2), [58, Theorem 3.4]; for α ∈ (0, 1), [55, Proposition 45]):

  D̂_α( N_{A→B}(ρ_{RA}) ∥ M_{A→B}(σ_{RA}) ) ≤ D̂_α(N∥M) + D̂_α(ρ_{RA}∥σ_{RA}).  (21)

2. Since D̂_{1/2}(ρ∥σ) = −ln F̂(ρ, σ), the following is a rewriting of (21):

  F̂( N_{A→B}(ρ_{RA}), M_{A→B}(σ_{RA}) ) ≥ F̂(N, M) F̂(ρ_{RA}, σ_{RA}).  (22)

3. Let n ∈ ℕ. Then,

  F( N^{⊗n}, M^{⊗n} ) ≤ ( F(N, M) )^n.  (23)

This follows because

  F( N^{⊗n}, M^{⊗n} ) = inf_{ρ_{RⁿAⁿ}} F( N^{⊗n}(ρ_{RⁿAⁿ}), M^{⊗n}(ρ_{RⁿAⁿ}) )  (24)
    ≤ inf_{ρ_{RA}} F( N^{⊗n}(ρ_{RA}^{⊗n}), M^{⊗n}(ρ_{RA}^{⊗n}) )  (25)
    = inf_{ρ_{RA}} ( F( N(ρ_{RA}), M(ρ_{RA}) ) )^n  (26)
    = ( F(N, M) )^n,  (27)

where the first inequality holds by choosing a smaller set comprised of product states for the optimization, and the penultimate equality follows from the multiplicativity of fidelity.

B. Setup

Given two quantum channels N_{A→B} and M_{A→B} acting on an input system A and an output system B, the most general adaptive strategy for discriminating these two channels is as follows. First, we prepare an arbitrary input state ρ_{R₁A₁} = τ_{R₁A₁}, where R₁ is a reference system (ancillary register). The i-th use of the respective channel takes a register A_i as input and gives a register B_i as output. After each use of the channel N or M, we apply an (adaptive) channel A^{(i)}_{R_iB_i→R_{i+1}A_{i+1}} to the registers R_i and B_i such that the following holds (depending on the channel applied). First, for 1 ≤ i ≤ n,

  ρ_{R_iB_i} := N_{A_i→B_i}(ρ_{R_iA_i}),  (28)
  τ_{R_iB_i} := M_{A_i→B_i}(τ_{R_iA_i}),  (29)

and

  ρ_{R_{i+1}A_{i+1}} := ( A^{(i)}_{R_iB_i→R_{i+1}A_{i+1}} ∘ N_{A_i→B_i} )(ρ_{R_iA_i}),  (30)
  τ_{R_{i+1}A_{i+1}} := ( A^{(i)}_{R_iB_i→R_{i+1}A_{i+1}} ∘ M_{A_i→B_i} )(τ_{R_iA_i}),  (31)

for every 1 ≤ i < n on the left-hand side, where n ∈ ℕ. Finally, a quantum measurement (Q_{R_nB_n}, I_{R_nB_n} − Q_{R_nB_n}) is applied on the systems R_nB_n to determine which channel was queried.
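The state fidelities entering the channel quantities of Definition 1 are easy to evaluate for small examples. The following Python sketch (an illustration with invented 2×2 density matrices, not code from the paper) computes the Holevo and Uhlmann fidelities and checks the known ordering F_H(ρ, σ) ≤ F(ρ, σ), which follows from Tr[X] ≤ ∥X∥₁.

```python
import numpy as np

def sqrtm_psd(A):
    # Square root of a Hermitian positive semi-definite matrix
    # via its eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def holevo_fidelity(rho, sigma):
    # F_H(rho, sigma) = (Tr[sqrt(rho) sqrt(sigma)])^2
    return float(np.real(np.trace(sqrtm_psd(rho) @ sqrtm_psd(sigma))) ** 2)

def uhlmann_fidelity(rho, sigma):
    # F(rho, sigma) = ||sqrt(rho) sqrt(sigma)||_1^2
    # (trace norm = sum of singular values).
    s = np.linalg.svd(sqrtm_psd(rho) @ sqrtm_psd(sigma), compute_uv=False)
    return float(np.sum(s) ** 2)

# Invented example states (both positive semi-definite with unit trace).
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])
```

Both functions return 1 when the two states coincide, and they agree whenever ρ and σ commute, which is the classical special case used later for Corollary 8.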
The outcome Q corresponds to guessing the channel N at the end of the protocol, while the outcome I − Q corresponds to guessing the channel M. The probability of guessing N when M is applied is given by Tr[Q_{R_nB_n} τ_{R_nB_n}] (type II error probability), while the probability of guessing M when N is applied is given by Tr[(I_{R_nB_n} − Q_{R_nB_n}) ρ_{R_nB_n}] (type I error probability). In what follows, we use the notation {Q, A} to identify a strategy consisting of the set of channels {A^{(i)}_{R_iB_i→R_{i+1}A_{i+1}}}_i and a final measurement (Q_{R_nB_n}, I_{R_nB_n} − Q_{R_nB_n}), as described above. In this setup, we also consider that we have access to n ∈ ℕ uses of the channel N or M.

1. Symmetric Binary Channel Discrimination

In the setting of symmetric binary channel discrimination, we suppose that there is a prior probability p ∈ (0, 1) associated with the use of the channel N and a prior probability q ≡ 1 − p associated with the use of the channel M. Indeed, the unknown channel is selected by flipping a coin, with the probability of heads being p and the probability of tails being q. If the outcome of the coin flip is heads, then n channel uses of N are applied, and if the outcome is tails, then n channel uses of M are applied, along with the adaptive channels {A^{(i)}_{R_iB_i→R_{i+1}A_{i+1}}}_i for 1 ≤ i < n and the input state ρ_{R₁A₁} = τ_{R₁A₁} in both cases. Thus, the expected error probability in
this experiment is as follows:

  p_{e,{Q,A},ρ_{R₁A₁}}(p, N, q, M, n) := p Tr[(I_{R_nB_n} − Q_{R_nB_n}) ρ_{R_nB_n}] + q Tr[Q_{R_nB_n} τ_{R_nB_n}].  (32)

Given p, the distinguisher can minimize the error-probability expression in (32) over all adaptive strategies {Q, A} and the initial state ρ_{R₁A₁}. The optimal error probability p_e(p, N, q, M, n) of hypothesis testing is as follows:

  p_e(p, N, q, M, n) := inf_{{Q,A}, ρ_{R₁A₁}} p_{e,{Q,A},ρ_{R₁A₁}}(p, N, q, M, n)  (33)
    = inf_{{Q,A}, ρ_{R₁A₁}} { p Tr[(I_{R_nB_n} − Q_{R_nB_n}) ρ_{R_nB_n}] + q Tr[Q_{R_nB_n} τ_{R_nB_n}] }  (34)
    = inf_{A, ρ_{R₁A₁}} (1/2) ( 1 − ∥ p ρ_{R_nB_n} − q τ_{R_nB_n} ∥_1 ),  (35)

where the last equality follows by applying the Helstrom–Holevo theorem [2, 3] for states and recalling that the optimization over all final measurements is abbreviated by Q.

2. Asymmetric Binary Channel Discrimination

In the setting of asymmetric channel discrimination, there are no prior probabilities associated with the selection of the channel N or M; we simply assume that one of them is chosen deterministically, but the chosen channel is unknown to the distinguisher. The goal of the distinguisher is to minimize the probability of the second kind of error, subject to a constraint on the probability of the first kind of error in guessing which channel is selected. Given a fixed ε ∈ [0, 1], the scenario reduces to the following optimization problem:

  β_ε(N^{(n)} ∥ M^{(n)}) := inf_{{Q,A}, ρ_{R₁A₁}} { Tr[Q_{R_nB_n} τ_{R_nB_n}] : Tr[(I_{R_nB_n} − Q_{R_nB_n}) ρ_{R_nB_n}] ≤ ε }.  (36)

3. Multiple Channel Discrimination

Here the goal is to select one among M possible channels as the guess for the selected channel. Let S := {(p_m, N_m)}_{m=1}^M be an ensemble of M channels with prior probabilities taking values in the set {p_m}_{m=1}^M. Without loss of generality, let us assume that p_m > 0 for all m ∈ {1, 2, ..., M}. Fix the adaptive channels applied after each channel use as {A^{(i)}_{R_iB_i→R_{i+1}A_{i+1}}}_i and the input to the protocol as ρ_{R₁A₁}. Denote, for all 1 ≤ m ≤ M, ρ^m_{R₁A₁} := ρ_{R₁A₁} and

  ρ^m_{R_{i+1}A_{i+1}} := ( A^{(i)}_{R_iB_i→R_{i+1}A_{i+1}} ∘ N^m_{A_i→B_i} )(ρ^m_{R_iA_i}),  (37)

for all 1 ≤ i < n.
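For a single use (n = 1) and fixed output states, the optimum in (35) reduces to the Helstrom error probability, which is straightforward to evaluate numerically. The sketch below is an illustration with invented states, not code from the paper; it computes the optimal Bayesian error from the trace norm.

```python
import numpy as np

def trace_norm(A):
    # ||A||_1 = sum of singular values.
    return float(np.sum(np.linalg.svd(A, compute_uv=False)))

def helstrom_error(p, rho, sigma):
    # Optimal Bayesian error for one-shot discrimination of rho (prior p)
    # versus sigma (prior 1 - p): (1 - ||p rho - (1 - p) sigma||_1) / 2.
    return 0.5 * (1.0 - trace_norm(p * rho - (1.0 - p) * sigma))

# Invented example: a pure state versus the maximally mixed qubit state.
rho = np.diag([1.0, 0.0])
sigma = np.diag([0.5, 0.5])
pe = helstrom_error(0.5, rho, sigma)   # equals 0.25 for this pair
```

Orthogonal states give error 0 (one use suffices), while identical states give error 1/2, matching the trivial cases treated later in Proposition 5.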
The minimum error probability of M-channel discrimination, given n channel uses of the unknown channel, is as follows:

  p_e(S, n) := inf_{{Q,A}, ρ_{R₁A₁}} Σ_{m=1}^M p_m Tr[ (I_{R_nB_n} − Q^m_{R_nB_n}) ρ^m_{R_nB_n} ],  (38)

where the minimization is over all adaptive strategies denoted by A, every initial state ρ_{R₁A₁}, and every positive operator-valued measure (POVM) denoted as Q applied at the final stage (i.e., a tuple (Q¹_{R_nB_n}, ..., Q^M_{R_nB_n}) satisfying Q^m_{R_nB_n} ≥ 0 for all m ∈ {1, ..., M} and Σ_{m=1}^M Q^m_{R_nB_n} = I_{R_nB_n}).

III. PROBLEM STATEMENT

Formally, we state the definitions of the query complexity of symmetric binary, asymmetric binary, and multiple channel discrimination in the following Definitions 2, 3, and 4, respectively. In each case, we define them by employing the error-probability metrics in (34), (36), and (38), and in such a way that the query complexity is the minimum value of n ∈ ℕ (number of channel uses) needed to get the error probability below a threshold ε ∈ [0, 1].

Definition 2 (Query Complexity of Symmetric Binary Channel Discrimination). Let p ∈ (0, 1), q = 1 − p, and ε ∈ [0, 1], and let N and M be quantum channels. The query complexity n*(p, N, q, M, ε) of symmetric binary channel discrimination is defined as follows:

  n*(p, N, q, M, ε) := inf{ n ∈ ℕ : p_e(p, N, q, M, n) ≤ ε }.  (39)

Definition 3 (Query Complexity of Asymmetric Binary Channel Discrimination). Let ε, δ ∈ [0, 1], and let N and M be quantum channels. The query complexity n*(N, M, ε, δ) of asymmetric binary channel discrimination is defined as follows: n*(N, M,
ε, δ) := inf{ n ∈ ℕ : β_ε(N^{(n)} ∥ M^{(n)}) ≤ δ }.  (40)

Definition 4 (Query Complexity of M-ary Channel Discrimination). Let ε ∈ [0, 1], and let S := {(p_m, N_m)}_{m=1}^M be an ensemble of M channels. The query complexity n*(S, ε) of M-ary channel discrimination is defined as follows:

  n*(S, ε) := inf{ n ∈ ℕ : p_e(S, n) ≤ ε }.  (41)

Remark 1 (Equivalent Expression). Note that we can rewrite the query complexity of asymmetric binary channel discrimination as follows:

  n*(N, M, ε, δ) = inf{ n ∈ ℕ : β_δ(N^{(n)} ∥ M^{(n)}) ≤ ε }.  (42)

Remark 2 (Equivalent Expressions for Query Complexities). The query complexity n*(p, N, q, M, ε) of symmetric binary quantum channel discrimination has the following equivalent expressions:

  n*(p, N, q, M, ε) = inf_{{Q,A}, ρ_{R₁A₁}} { n ∈ ℕ : p Tr[(I_{R_nB_n} − Q_{R_nB_n}) ρ_{R_nB_n}] + q Tr[Q_{R_nB_n} τ_{R_nB_n}] ≤ ε }  (43)
    = inf_{A, ρ_{R₁A₁}} { n ∈ ℕ : (1/2) ( 1 − ∥ p ρ_{R_nB_n} − q τ_{R_nB_n} ∥_1 ) ≤ ε }.  (44)

By recalling the quantity β_ε(N^{(n)} ∥ M^{(n)}) defined in (36), we can rewrite the query complexity n*(N, M, ε, δ) of asymmetric binary quantum channel discrimination in the following two ways:

  n*(N, M, ε, δ) = inf_{{Q,A}, ρ_{R₁A₁}} { n ∈ ℕ : Tr[(I_{R_nB_n} − Q_{R_nB_n}) ρ_{R_nB_n}] ≤ ε, Tr[Q_{R_nB_n} τ_{R_nB_n}] ≤ δ }  (45)
    = inf{ n ∈ ℕ : β_δ(M^{(n)} ∥ N^{(n)}) ≤ ε }.  (46)

This follows from reasoning similar to that presented in [28, Appendix C], because the expression in (45) indicates that the query complexity for asymmetric binary quantum channel discrimination can be thought of as the minimum number of queries to the unknown channels needed for the type I error probability to be below ε and the type II error probability to be below δ. Finally, the query complexity n*(S, ε) of M-ary quantum channel discrimination can be rewritten as follows:

  n*(S, ε) = inf_{{Q,A}, ρ_{R₁A₁}} { n ∈ ℕ : Σ_{m=1}^M p_m Tr[ (I_{R_nB_n} − Q^m_{R_nB_n}) ρ^m_{R_nB_n} ] ≤ ε }.  (47)

IV.
QUERY COMPLEXITY OF CHANNEL DISCRIMINATION

In this section, we provide lower and upper bounds on the query complexity of quantum channel discrimination, applicable to the three main settings discussed in Section II B (namely, symmetric binary channel discrimination, asymmetric binary channel discrimination, and symmetric multiple channel discrimination).

A. Symmetric Binary Channel Discrimination

Proposition 5 (Trivial Cases). Let p, q, ε, N, and M be as stated in Definition 2. If one of the following conditions holds:

1. ε ∈ [1/2, 1],
2. there exists s ∈ [0, 1] such that ε ≥ p^s q^{1−s},
3. there exists ψ_{RA}, with the dimensions of the systems R and A being equal, such that supp(N_{A→B}(ψ_{RA})) ∩ supp(M_{A→B}(ψ_{RA})) = ∅,

then

  n*(p, N, q, M, ε) = 1.  (48)

If N = M and min{p, q} > ε ∈ [0, 1/2), then

  n*(p, N, q, M, ε) = +∞.  (49)

Proof. See Appendix A.

Define, for s ∈ [0, 1],

  C_s(N∥M) := − inf_{ρ_{RA}} ln Tr[ N_{A→B}(ρ_{RA})^s M_{A→B}(ρ_{RA})^{1−s} ].  (50)

Theorem 6 (Query Complexity: Symmetric Binary Channel Discrimination). Let p, q, ε, N, and M be as stated in Definition 2, such that the conditions in Proposition 5 do not hold. Then the following bounds hold:

  max{ ln(pq/ε) / (−ln F̂(N, M)), ( 1 − ε(1−ε)/(pq) ) / [ d̂_B(N, M) ]² } ≤ n*(p, N, q, M, ε) ≤ inf_{s∈[0,1]} ⌈ ln(p^s q^{1−s}/ε) / C_s(N∥M) ⌉,  (51)

where C_s(N∥M) is defined in (50).

Proof. See Appendix B.

Corollary 7. Let p, q, ε, N, and M be as stated in Definition 2, such that the conditions in Proposition 5 do not hold. Then the following bounds hold:

  ln(pq/ε) / (−ln F̂(N, M)) ≤ n*(p, N, q, M, ε) ≤ ⌈ 2 ln(√(pq)/ε) / (−ln F_H(N, M)) ⌉.  (52)

Proof. The proof follows directly from Theorem 6, together with the substitution s = 1/2 in the upper bound therein.

For classical channels, where the input and output systems of a channel are classical, the query complexity characterization is tight, and it is a function of the channel fidelity.

Corollary 8 (Query Complexity: Classical Channels). Let N and M be classical
channels, then we have that

  n*(p, N, q, M, ε) = Θ( ln(1/ε) / (−ln F(N, M)) ).  (53)

Proof. The proof follows by adapting Corollary 7 and identifying that, for classical channels with classical inputs, the following equality holds: F(N, M) = F̂(N, M) = F_H(N, M).

Remark 3 (Special Classes of Channels). For some special classes of channels, including environment-seizable channels [14, Definition 36], the query complexity bounds boil down to those for quantum hypothesis testing of states, with the environment states playing the role of the states. In that case, the results derived in [28] can be directly used to derive bounds on the query complexity. Furthermore, for the task of discriminating bosonic dephasing channels, it is possible to provide analytical expressions for the query complexity by utilizing the findings in [59].

B. Asymmetric Channel Discrimination

Theorem 9 (Query Complexity: Asymmetric Channel Discrimination). Fix ε, δ ∈ (0, 1), and let N and M be quantum channels. Then the following bounds hold for the query complexity n*(N, M, ε, δ) of asymmetric binary channel discrimination defined in Definition 3:

  max{ sup_{α∈(1,2]} ln( (1−ε)^{α′}/δ ) / D̂_α(N∥M), sup_{α∈(1,2]} ln( (1−δ)^{α′}/ε ) / D̂_α(M∥N) } ≤ n*(N, M, ε, δ) ≤ min{ inf_{α∈(0,1)} ⌈ ln( ε^{α′}/δ ) / D_α(N∥M) ⌉, inf_{α∈(0,1)} ⌈ ln( δ^{α′}/ε ) / D_α(M∥N) ⌉ },  (54)

where α′ := α/(α − 1).

Proof. See Appendix C.

C. Multiple Channel Discrimination

Theorem 10 (Query Complexity: M-ary Channel Discrimination). Let n*(S, ε) be as stated in Definition 4. Then,

  max_{m ≠ m̄} ln( p_m p_m̄ / ((p_m + p_m̄) ε) ) / (−ln F̂(N_m, N_m̄)) ≤ n*(S, ε) ≤ max_{m ≠ m̄} ⌈ 2 ln( M(M−1) √(p_m) √(p_m̄) / (2ε) ) / (−ln F(N_m, N_m̄)) ⌉.  (55)

Proof. See Appendix D.

ACKNOWLEDGMENTS

We thank Amira Abbas, Marco Cerezo, and Ludovico Lami for helpful discussions. TN and MMW acknowledge support from NSF Grant No. 2329662.

[1] C. W. Helstrom, Detection theory and quantum mechanics, Information and Control 10, 254 (1967).
[2] C. W. Helstrom, Quantum detection and estimation theory, Journal of Statistical Physics 1, 231 (1969).
[3] A. S.
Holevo, Statistical decision theory for quantum systems, Journal of Multivariate Analysis 3, 337 (1973).
[4] F. Hiai and D. Petz, The proper formula for relative entropy and its asymptotics in quantum probability, Communications in Mathematical Physics 143, 99 (1991).
[5] T. Ogawa and H. Nagaoka, Strong converse and Stein's lemma in quantum hypothesis testing, IEEE Transactions on Information Theory 46, 2428 (2000).
[6] K. M. R. Audenaert, J. Calsamiglia, R. Muñoz-Tapia, E. Bagan, L. Masanes, A. Acín, and F. Verstraete, Discriminating states: The quantum Chernoff bound, Physical Review Letters 98, 160501 (2007).
[7] M. Nussbaum and A. Szkoła, The Chernoff lower bound for symmetric quantum hypothesis testing, Annals of Statistics 37, 1040 (2009).
[8] H. Nagaoka, The converse part of the theorem for quantum Hoeffding bound (2006), arXiv:quant-ph/0611289.
[9] M. Hayashi, Error exponent in asymmetric quantum hypothesis testing and its application to classical-quantum channel coding, Physical Review A 76, 062301 (2007).
[10] H. P. Yuen, R. S. Kennedy, and M. Lax, Optimum testing of multiple hypotheses in quantum detection theory, IEEE Transactions on Information Theory 21, 125 (1975).
[11] K. Li, Discriminating quantum states: The multiple Chernoff distance, The Annals of Statistics 44, 1661 (2016).
[12] M. Hayashi, Discrimination of
two channels by adaptive methods and its application to quan- tum system, IEEE Transactions on Information Theory 55, 3807 (2009). [13] T. Cooney, M. Mosonyi, and M. M. Wilde, Strong converse exponents for a quantum channel discrimination problem and quantum-feedback-assisted communication, Communications in Mathematical Physics 344, 797 (2016), arXiv:1408.3373. [14] M. M. Wilde, M. Berta, C. Hirche, and E. Kaur, Amortized channel divergence for asymptotic quantum channel discrimination, Letters in Mathematical Physics 110, 2277 (2020). [15] X. Wang and M. M. Wilde, Resource theory of asymmetric distinguishability for quantum channels, Physical Review Research 1, 033169 (2019). [16] K. Fang, O. Fawzi, R. Renner, and D. Sutter, Chain rule for the quantum relative entropy, Physical Review Letters 124, 100501 (2020). [17] B. Bergh, N. Datta, R. Salzmann, and M. M. Wilde, Parallelization of adaptive quantum chan- nel discrimination in the non-asymptotic regime, IEEE Transactions on Information Theory 70, 2617 (2024). [18] F. Salek, M. Hayashi, and A. Winter, Usefulness of adaptive strategies in asymptotic quantum channel discrimination, Physical Review A 105, 022419 (2022). [19] B. Bergh, J. Kochanowski, R. Salzmann, and N. Datta, Infinite dimensional asymmetric quan- tum channel discrimination (2023), arXiv:2308.12959 [quant-ph]. [20] Z. Huang, L. Lami, and M. M. Wilde, Exact quantum sensing limits for bosonic dephasing channels, PRX Quantum 5, 020354 (2024). [21] A. W. Harrow, A. Hassidim, D. W. Leung, and J. Watrous, Adaptive versus nonadaptive strategies for quantum channel discrimination, Physical Review A 81, 032339 (2010). [22] G. Chiribella, G. M. D’Ariano, and P. Perinotti, Memory effects in quantum channel discrim- ination, Physical Review Letters 101, 180501 (2008), arXiv:0803.3237. [23] R. Duan, Y. Feng, and M. Ying, Perfect distinguishability of quantum operations, Physical Review Letters 103, 210501 (2009), arXiv:0908.0119. [24] M. Piani and J. 
Watrous, All entangled states are useful for channel discrimination, Physical Review Letters 102, 250501 (2009). [25] W. Matthews, M. Piani, and J. Watrous, Entanglement in channel discrimination with restricted measurements, Physical Review A 82, 032302 (2010). [26] D. Puzzuoli and J. Watrous, Ancilla dimension in quantum channel discrimination, Annales Henri Poincaré 18, 1153 (2017), arXiv:1604.08197. [27] V. Katariya and M. M. Wilde, Evaluating the advantage of adaptive strategies for quantum channel distinguishability, Physical Review A 104, 052406 (2021). [28] H.-C. Cheng, N. Datta, N. Liu, T. Nuradha, R. Salzmann, and M. M. Wilde, An invitation to the sample complexity of quantum hypothesis testing (2024), arXiv:2403.17868v3. [29] T. Nuradha and M. M. Wilde, Contraction of private quantum channels and private quantum hypothesis testing, IEEE Transactions on Information Theory 71, 1851 (2025), arXiv:2406.18651. [30] H. Cheng, C. Hirche, and C. Rouzé, Sample complexity of locally differentially private quantum hypothesis testing, in IEEE International Symposium on Information Theory (ISIT) (2024) pp. 2921–2926, arXiv:2406.18658. [31] C. Hirche, C. Rouzé, and D. S. França, Quantum differential privacy: An information theory perspective, IEEE Transactions on Information Theory 69, 5771 (2023). [32] T. Nuradha, Z. Goldfeld, and M. M. Wilde, Quantum pufferfish privacy: A flexible privacy framework for quantum systems, IEEE Transactions on Information Theory 70, 5731 (2024), arXiv:2306.13054. [33] I. George, C. Hirche, T.
Nuradha, and M. M. Wilde, Quantum Doeblin coefficients: Interpretations and applications (2025), arXiv:2503.22823. [34] X. Huang and L. Li, Query complexity of unitary operation discrimination, Physica A: Statistical Mechanics and its Applications 604, 127863 (2022). [35] A. Kawachi, K. Kawano, F. Le Gall, and S. Tamaki, Quantum query complexity of unitary operator discrimination, IEICE Transactions on Information and Systems 102, 483 (2019). [36] G. Chiribella, G. M. D'Ariano, and M. Roetteler, On the query complexity of perfect gate discrimination, in 8th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2013) (Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2013) pp. 178–191. [37] Z. M. Rossi, J. Yu, I. L. Chuang, and S. Sugiura, Quantum advantage for noisy channel discrimination, Physical Review A 105, 032401 (2022). [38] R. Ito and R. Mori, Lower bounds on the error probability of multiple quantum channel discrimination by the Bures angle and the trace distance (2021), arXiv:2107.03948. [39] L. Li, Sampling complexity of quantum channel discrimination, Communications in Theoretical Physics (2025). [40] D. Petz, Quasi-entropies for finite quantum systems, Reports on Mathematical Physics 23, 57 (1986). [41] A. S. Holevo, On quasiequivalence of locally normal states, Theoretical and Mathematical Physics 13, 1071 (1972). [42] M. Müller-Lennert, F. Dupuis, O. Szehr, S. Fehr, and M. Tomamichel, On quantum Rényi entropies: A new generalization and some properties, Journal of Mathematical Physics 54, 122203 (2013). [43] M. M. Wilde, A. Winter, and D. Yang, Strong converse for the classical capacity of entanglement-breaking and Hadamard channels via a sandwiched Rényi relative entropy, Communications in Mathematical Physics 331, 593 (2014). [44] R. L. Frank and E. H. Lieb, Monotonicity of a relative Rényi entropy, Journal of Mathematical Physics 54, 122201 (2013). [45] M. M.
Wilde, Optimized quantum f-divergences and data processing, Journal of Physics A: Mathematical and Theoretical 51, 374002 (2018). [46] A. Uhlmann, The "transition probability" in the state space of a ∗-algebra, Reports on Mathematical Physics 9, 273 (1976). [47] C. W. Helstrom, Minimum mean-squared error of estimates in quantum statistics, Physics Letters A 25, 101 (1967). [48] D. Bures, An extension of Kakutani's theorem on infinite product measures to the tensor product of semifinite ω∗-algebras, Transactions of the American Mathematical Society 135, 199 (1969). [49] K. Matsumoto, A new quantum version of f-divergence, in Nagoya Winter Workshop: Reality and Measurement in Algebraic Quantum Theory (Springer, 2015) pp. 229–273. [50] S. Khatri and M. M. Wilde, Principles of quantum communication theory: A modern approach (2024), arXiv:2011.04672v2. [51] K. Matsumoto, Reverse test and quantum analogue of classical fidelity and generalized fidelity (2010), arXiv:1006.0302. [52] H. Umegaki, Conditional expectation in an operator algebra. IV. Entropy and information, Kodai Mathematical Seminar Reports 14, 59 (1962). [53] G. Lindblad, Completely positive maps and entropy inequalities, Communications in Mathematical Physics 40, 147 (1975). [54] F. Leditzky, E. Kaur, N. Datta, and M. M. Wilde, Approaches for approximate additivity of the Holevo information of quantum channels, Physical Review A 97, 012332 (2018), arXiv:1709.01111. [55] V. Katariya and M. M. Wilde, Geometric distinguishability measures limit
quantum channel estimation and discrimination, Quantum Information Processing 20, 78 (2021). [56] K. Matsumoto, Quantum fidelities, their duals, and convex analysis (2014), arXiv:1408.3462 [quant-ph]. [57] S. Cree and J. Sikora, A fidelity measure for quantum states based on the matrix geometric mean (2020), arXiv:2006.06918. [58] K. Fang and H. Fawzi, Geometric Rényi divergence and its applications in quantum channel capacities, Communications in Mathematical Physics 384, 1615 (2021). [59] Z. Huang, L. Lami, and M. M. Wilde, Exact quantum sensing limits for bosonic dephasing channels, PRX Quantum 5, 020354 (2024).

Appendix A: Proof of Proposition 5

We first prove (48).

First condition: Assume that $\varepsilon\in[1/2,1]$. The trivial strategy of guessing the channel uniformly at random achieves an error probability of $1/2$ (i.e., with $n=1$, $Q=I/2$ in (34)). Thus, if $\varepsilon\in[1/2,1]$, then the trivial strategy accomplishes the task with just a single access to the channel.

Second condition: Now assume that there exists $s\in[0,1]$ such that $\varepsilon\ge p^{s}q^{1-s}$. With a single use of the channel, we can achieve the error probability $\frac{1}{2}\left(1-\|p\rho_{R_1B_1}-q\tau_{R_1B_1}\|_1\right)$ for the initial state $\rho_{R_1A_1}$, and by invoking [6, Theorem 1] with $A=p\rho_{R_1B_1}$ and $B=q\tau_{R_1B_1}$, we conclude that
\begin{align}
\frac{1}{2}\left(1-\|p\rho_{R_1B_1}-q\tau_{R_1B_1}\|_1\right) &\le p^{s}q^{1-s}\,\mathrm{Tr}\!\left[\rho_{R_1B_1}^{s}\,\tau_{R_1B_1}^{1-s}\right] \tag{A1}\\
&\le p^{s}q^{1-s} \tag{A2}\\
&\le \varepsilon. \tag{A3}
\end{align}
In the above, we used the Hölder inequality $|\mathrm{Tr}[AB]|\le\|A\|_r\|B\|_t$, which holds for $r,t\ge1$ such that $r^{-1}+t^{-1}=1$, to conclude that $\mathrm{Tr}[\rho_{R_1B_1}^{s}\tau_{R_1B_1}^{1-s}]\le1$: that is, set $r=s^{-1}$, $t=(1-s)^{-1}$, $A=\rho_{R_1B_1}^{s}$, and $B=\tau_{R_1B_1}^{1-s}$. The last inequality follows by the assumption. So we conclude that only a single access to the channel is needed in this case also.

Third condition: Assume that there exists $\psi_{RA}$ such that $\mathrm{supp}(\mathcal{N}_{A\to B}(\psi_{RA}))\cap\mathrm{supp}(\mathcal{M}_{A\to B}(\psi_{RA}))=\emptyset$. In this case, choose the initial state to be $\rho_{R_1A_1}=\psi_{RA}$. Then the minimum error evaluates to $\frac{1}{2}\left(1-\|p\,\mathcal{N}_{A\to B}(\psi_{RA})-q\,\mathcal{M}_{A\to B}(\psi_{RA})\|_1\right)$. Since the supports of the two output states are disjoint, we get $\|p\,\mathcal{N}_{A\to B}(\psi_{RA})-q\,\mathcal{M}_{A\to B}(\psi_{RA})\|_1=p+q=1$.
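As an aside, the chain (A1)–(A3) can be sanity-checked numerically in the commuting (classical) case, where the two channel outputs reduce to probability vectors and both the trace norm and $\mathrm{Tr}[\rho^{s}\tau^{1-s}]$ become elementary sums. All concrete values below (dimension, priors, the random vectors) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sanity check of (A1)-(A3) in the commuting (classical) case.
rng = np.random.default_rng(0)
rho = rng.random(8); rho /= rho.sum()   # output distribution under channel N
tau = rng.random(8); tau /= tau.sum()   # output distribution under channel M
p, q = 0.3, 0.7                         # priors, p + q = 1

# One-shot error probability (Helstrom/Holevo expression, commuting case).
lhs = 0.5 * (1.0 - np.abs(p * rho - q * tau).sum())

# Chernoff-type bound p^s q^(1-s) Tr[rho^s tau^(1-s)], scanned over s.
s_grid = np.linspace(0.0, 1.0, 1001)
overlaps = [np.sum(rho**s * tau**(1 - s)) for s in s_grid]
rhs = min(p**s * q**(1 - s) * ov for s, ov in zip(s_grid, overlaps))

assert all(ov <= 1 + 1e-9 for ov in overlaps)  # the Hoelder step (A1) -> (A2)
assert lhs <= rhs + 1e-12                      # the bound (A1) itself
```

The element-wise inequality $\min(x,y)\le x^{s}y^{1-s}$ makes both assertions hold term by term in this commuting setting.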
Then, we achieve zero error by following this procedure and the choice of the initial state, with just one access to the channel, concluding the proof.

We finally prove (49). This follows because it is impossible to distinguish the states, and the desired inequality $\frac{1}{2}\left(1-\|p\rho_{R_nB_n}-q\tau_{R_nB_n}\|_1\right)\le\varepsilon$ cannot be satisfied for any possible value of $n$. Indeed, in this case, since $\mathcal{N}=\mathcal{M}$ we have $\rho_{R_nB_n}=\tau_{R_nB_n}$ and
\begin{align}
\frac{1}{2}\left(1-\|p\rho_{R_nB_n}-q\tau_{R_nB_n}\|_1\right) &= \frac{1}{2}\left(1-|p-q|\,\|\rho_{R_nB_n}\|_1\right) \tag{A4}\\
&= \frac{1}{2}\left(1-|p-(1-p)|\right) \tag{A5}\\
&= \frac{1}{2}\left(1-|1-2p|\right) \tag{A6}\\
&= \min\{p,q\}. \tag{A7}
\end{align}
Then, with the assumption $\min\{p,q\}\ge\varepsilon$, for all $n\in\mathbb{N}$ the error probability is always at least $\varepsilon$. Thus, (49) follows.

Appendix B: Proof of Theorem 6

Upper bound: We first set
\begin{equation}
n := \inf_{s\in[0,1]}\left\lceil\frac{\ln\!\left(p^{s}q^{1-s}/\varepsilon\right)}{C_{s}(\mathcal{N}\|\mathcal{M})}\right\rceil, \tag{B1}
\end{equation}
and let $s^{*}\in[0,1]$ be the optimal value of $s$ above. With $C_{s}(\mathcal{N}\|\mathcal{M})\ge0$, due to $-\ln\mathrm{Tr}[\rho^{s}\sigma^{1-s}]\ge0$ for all states $\rho$ and $\sigma$ and $s\in[0,1]$, we then have
\begin{align}
&n \ge \frac{\ln\!\left(p^{s^{*}}q^{1-s^{*}}/\varepsilon\right)}{C_{s^{*}}(\mathcal{N}\|\mathcal{M})} \tag{B2}\\
\Leftrightarrow\;& n\,C_{s^{*}}(\mathcal{N}\|\mathcal{M}) \ge \ln\!\left(p^{s^{*}}q^{1-s^{*}}\right)-\ln\varepsilon \tag{B3}\\
\Leftrightarrow\;& -\inf_{\rho_{RA}}\ln\mathrm{Tr}\!\left[\left((\mathcal{N}(\rho_{RA}))^{\otimes n}\right)^{s^{*}}\left((\mathcal{M}(\rho_{RA}))^{\otimes n}\right)^{1-s^{*}}\right]-\ln\!\left(p^{s^{*}}q^{1-s^{*}}\right) \ge -\ln\varepsilon \tag{B4}\\
\Leftrightarrow\;& \inf_{\rho_{RA}}\mathrm{Tr}\!\left[\left(p\,(\mathcal{N}(\rho_{RA}))^{\otimes n}\right)^{s^{*}}\left(q\,(\mathcal{M}(\rho_{RA}))^{\otimes n}\right)^{1-s^{*}}\right] \le \varepsilon. \tag{B5}
\end{align}
Now applying the quantum Chernoff bound in [6, Theorem 1] with $A=p\rho^{\otimes n}$ and $B=q\sigma^{\otimes n}$, we conclude that
\begin{align}
\varepsilon &\ge \inf_{\rho_{RA}}\mathrm{Tr}\!\left[\left(p\,\mathcal{N}^{\otimes n}(\rho_{RA}^{\otimes n})\right)^{s^{*}}\left(q\,\mathcal{M}^{\otimes n}(\rho_{RA}^{\otimes n})\right)^{1-s^{*}}\right] \tag{B6}\\
&\ge \inf_{\rho_{RA}}\frac{1}{2}\left(1-\left\|p\,\mathcal{N}^{\otimes n}(\rho_{RA}^{\otimes n})-q\,\mathcal{M}^{\otimes n}(\rho_{RA}^{\otimes n})\right\|_1\right). \tag{B7}
\end{align}
Consider that
\begin{equation}
p_{e}(p,\mathcal{N},q,\mathcal{M},n) = \inf_{\{Q,\mathcal{A}\},\rho_{R_1A_1}}\frac{1}{2}\left(1-\|p\rho_{R_nB_n}-q\tau_{R_nB_n}\|_1\right). \tag{B8}
\end{equation}
Choosing $n$ copies of the input state and passing them through the channel $\mathcal{N}$ or $\mathcal{M}$ in its $n$ uses, the final state before the measurement is applied evaluates to either $\mathcal{N}^{\otimes n}(\rho_{RA}^{\otimes n})$ or $\mathcal{M}^{\otimes n}(\rho_{RA}^{\otimes n})$. This is one strategy included in the set of all adaptive strategies, which leads to the following upper bound:
\begin{equation}
p_{e}(p,\mathcal{N},q,\mathcal{M},n) \le \inf_{\rho_{RA}}\frac{1}{2}\left(1-\left\|p\,\mathcal{N}^{\otimes n}(\rho_{RA}^{\otimes n})-q\,\mathcal{M}^{\otimes n}(\rho_{RA}^{\otimes n})\right\|_1\right). \tag{B9}
\end{equation}
The choice of $n$ in (B1) thus satisfies the constraint (44) in Definition 2, and since the optimal query complexity cannot exceed this choice, this concludes our proof of the upper bound in (51).

Lower bound 1: Let us consider the case where
\begin{equation}
\varepsilon \ge p_{e}(p,\mathcal{N},q,\mathcal{M},n) = \inf_{\{\mathcal{A}\},\rho_{R_1A_1}}\frac{1}{2}\left(1-\|p\rho_{R_nB_n}-q\tau_{R_nB_n}\|_1\right). \tag{B10}
\end{equation}
Applying the fact that
\begin{equation}
\|p\rho-q\sigma\|_1^{2} \le 1-4pq\,F(\rho,\sigma), \tag{B11}
\end{equation}
we arrive at
\begin{align}
\inf_{\{\mathcal{A}\},\rho_{R_1A_1}}\frac{1}{2}\left(1-\|p\rho_{R_nB_n}-q\tau_{R_nB_n}\|_1\right) &\ge \inf_{\{\mathcal{A}\},\rho_{R_1A_1}}\frac{2pq\,F(\rho_{R_nB_n},\tau_{R_nB_n})}{1+\|p\rho_{R_nB_n}-q\tau_{R_nB_n}\|_1} \tag{B12}\\
&\ge \inf_{\{\mathcal{A}\},\rho_{R_1A_1}} pq\,F(\rho_{R_nB_n},\tau_{R_nB_n}) \tag{B13}\\
&\ge \inf_{\{\mathcal{A}\},\rho_{R_1A_1}} pq\,\widehat{F}(\rho_{R_nB_n},\tau_{R_nB_n}), \tag{B14}
\end{align}
where the last inequality follows because $\widehat{F}\le F$. Recall that, for $1\le i\le n$,
\begin{align}
\rho_{R_iB_i} &:= \mathcal{N}_{A_i\to B_i}(\rho_{R_iA_i}), \tag{B15}\\
\tau_{R_iB_i} &:= \mathcal{M}_{A_i\to B_i}(\tau_{R_iA_i}), \tag{B16}
\end{align}
and
\begin{align}
\rho_{R_{i+1}A_{i+1}} &:= (\mathcal{A}^{(i)}_{R_iB_i\to R_{i+1}A_{i+1}}\circ\mathcal{N}_{A_i\to B_i})(\rho_{R_iA_i}), \tag{B17}\\
\tau_{R_{i+1}A_{i+1}} &:= (\mathcal{A}^{(i)}_{R_iB_i\to R_{i+1}A_{i+1}}\circ\mathcal{M}_{A_i\to B_i})(\tau_{R_iA_i}), \tag{B18}
\end{align}
for every $1\le i<n$. With the above notation, and by using the chain rule for the geometric fidelity in (22), we have that
\begin{align}
\widehat{F}(\rho_{R_nB_n},\tau_{R_nB_n}) &= \widehat{F}(\mathcal{N}_{A_n\to B_n}(\rho_{R_nA_n}),\mathcal{M}_{A_n\to B_n}(\tau_{R_nA_n})) \tag{B19}\\
&\ge \widehat{F}(\mathcal{N},\mathcal{M})\,\widehat{F}(\rho_{R_nA_n},\tau_{R_nA_n}). \tag{B20}
\end{align}
Then, using the data processing of the geometric fidelity under the channel $\mathcal{A}^{(n-1)}_{R_{n-1}B_{n-1}\to R_nA_n}$, we have that
\begin{equation}
\widehat{F}(\rho_{R_nA_n},\tau_{R_nA_n}) \ge \widehat{F}\!\left(\mathcal{N}_{A_{n-1}\to B_{n-1}}(\rho_{R_{n-1}A_{n-1}}),\mathcal{M}_{A_{n-1}\to B_{n-1}}(\tau_{R_{n-1}A_{n-1}})\right). \tag{B21}
\end{equation}
Proceeding with the application of the chain rule and the data processing under the adaptive channels $n-1$ times, we obtain
\begin{align}
\widehat{F}(\rho_{R_nB_n},\tau_{R_nB_n}) &\ge \left[\widehat{F}(\mathcal{N},\mathcal{M})\right]^{n-1}\widehat{F}(\rho_{R_1B_1},\tau_{R_1B_1}) \tag{B22}\\
&= \left[\widehat{F}(\mathcal{N},\mathcal{M})\right]^{n-1}\widehat{F}(\mathcal{N}_{A_1\to B_1}(\rho_{R_1A_1}),\mathcal{M}_{A_1\to B_1}(\tau_{R_1A_1})). \tag{B23}
\end{align}
Combining the above inequality with (B14), we arrive at
\begin{align}
\inf_{\{\mathcal{A}\},\rho_{R_1A_1}}\frac{1}{2}\left(1-\|p\rho_{R_nB_n}-q\tau_{R_nB_n}\|_1\right) &\ge pq\left[\widehat{F}(\mathcal{N},\mathcal{M})\right]^{n-1}\inf_{\{\mathcal{A}\},\rho_{R_1A_1}}\widehat{F}(\mathcal{N}_{A_1\to B_1}(\rho_{R_1A_1}),\mathcal{M}_{A_1\to B_1}(\tau_{R_1A_1})) \tag{B24}\\
&= pq\left[\widehat{F}(\mathcal{N},\mathcal{M})\right]^{n}, \tag{B25}
\end{align}
where the last equality follows by the definition of the channel fidelity with the choice $\widehat{F}$ in (19). Together with the error constraint in (B10), this concludes the proof of the first lower bound.

Lower bound 2: Using [28, Eq. (F13)], we have that
\begin{align}
\|p\rho-q\sigma\|_1^{2} &\le 1-4pq\,F(\rho,\sigma) \tag{B26}\\
&\le 1-4pq+4pq\,[d_{B}(\rho,\sigma)]^{2} \tag{B27}\\
&\le 1-4pq+4pq\,[\widehat{d}_{B}(\rho,\sigma)]^{2} \tag{B28}\\
&= 1-4pq+8pq\left(1-\left(1-\frac{[\widehat{d}_{B}(\rho,\sigma)]^{2}}{2}\right)\right), \tag{B29}
\end{align}
where the third inequality follows by $\widehat{F}\le F$. Rewriting (B10) by algebraic manipulations, we get
\begin{align}
(1-2\varepsilon)^{2} &\le \sup_{\{\mathcal{A}\},\rho_{R_1A_1}}\|p\rho_{R_nB_n}-q\tau_{R_nB_n}\|_1^{2} \tag{B30}\\
&\le 1-4pq+8pq\left(1-\inf_{\{\mathcal{A}\},\rho_{R_1A_1}}\left(1-\frac{[\widehat{d}_{B}(\rho_{R_nB_n},\tau_{R_nB_n})]^{2}}{2}\right)\right) \tag{B31}\\
&\le 1-4pq+8pq\left(1-\left(1-\frac{[\widehat{d}_{B}(\mathcal{N},\mathcal{M})]^{2}}{2}\right)^{n}\right) \tag{B32}\\
&\le 1-4pq+4pq\,n\,[\widehat{d}_{B}(\mathcal{N},\mathcal{M})]^{2}, \tag{B33}
\end{align}
where the last inequality follows from $1-(1-x)^{n}\le nx$ for $0\le x\le1$. Now we prove the penultimate inequality: first observe that
\begin{equation}
1-\frac{[\widehat{d}_{B}(\rho,\sigma)]^{2}}{2} = \widehat{F}(\rho,\sigma). \tag{B34}
\end{equation}
This relation is also true for the channel variants:
\begin{equation}
1-\frac{[\widehat{d}_{B}(\mathcal{N},\mathcal{M})]^{2}}{2} = \widehat{F}(\mathcal{N},\mathcal{M}). \tag{B35}
\end{equation}
Using (B23), we have that
\begin{align}
\widehat{F}(\rho_{R_nB_n},\tau_{R_nB_n}) &\ge \left[\widehat{F}(\mathcal{N},\mathcal{M})\right]^{n-1}\widehat{F}(\mathcal{N}_{A_1\to B_1}(\rho_{R_1A_1}),\mathcal{M}_{A_1\to B_1}(\tau_{R_1A_1})) \tag{B36}\\
&\ge \left[\widehat{F}(\mathcal{N},\mathcal{M})\right]^{n} \tag{B37}\\
&= \left(1-\frac{[\widehat{d}_{B}(\mathcal{N},\mathcal{M})]^{2}}{2}\right)^{n}, \tag{B38}
\end{align}
where the second inequality follows by the definition of the channel fidelities in (19), and the last equality by (B35). With that, we have shown
\begin{equation}
\inf_{\{\mathcal{A}\},\rho_{R_1A_1}}\left(1-\frac{[\widehat{d}_{B}(\rho_{R_nB_n},\tau_{R_nB_n})]^{2}}{2}\right) \ge \left(1-\frac{[\widehat{d}_{B}(\mathcal{N},\mathcal{M})]^{2}}{2}\right)^{n}, \tag{B39}
\end{equation}
which completes the proof of (B33). By rearranging terms in (B33), we obtain
\begin{equation}
n \ge \frac{(1-2\varepsilon)^{2}-(1-4pq)}{4pq\,[\widehat{d}_{B}(\mathcal{N},\mathcal{M})]^{2}} \tag{B40}
\end{equation}
\begin{equation}
n \ge \frac{pq-\varepsilon(1-\varepsilon)}{pq\,[\widehat{d}_{B}(\mathcal{N},\mathcal{M})]^{2}}, \tag{B41}
\end{equation}
since $(1-2\varepsilon)^{2}-(1-4pq)=4\left(pq-\varepsilon(1-\varepsilon)\right)$, which concludes the proof of the lower bound.

Appendix C: Proof of Theorem 9

Lower bound: Let $\alpha\in(1,2]$. Then, by the data-processing inequality for the geometric Rényi divergence under the measurement channel (comprised of the POVM elements $Q$, $I-Q$),
\begin{align}
\widehat{D}_{\alpha}(\rho\|\tau) &\ge \frac{1}{\alpha-1}\ln\left(\mathrm{Tr}[Q\rho]^{\alpha}\,\mathrm{Tr}[Q\tau]^{1-\alpha}+\mathrm{Tr}[(I-Q)\rho]^{\alpha}\,\mathrm{Tr}[(I-Q)\tau]^{1-\alpha}\right) \tag{C1}\\
&\ge \frac{\alpha}{\alpha-1}\ln\mathrm{Tr}[Q\rho]-\ln\mathrm{Tr}[Q\tau]. \tag{C2}
\end{align}
Let $\rho=\rho_{R_nB_n}$ and $\tau=\tau_{R_nB_n}$, with the constraint that $\mathrm{Tr}[Q_{R_nB_n}\rho_{R_nB_n}]\ge1-\varepsilon$. Then the above inequality can be rewritten as
\begin{equation}
\widehat{D}_{\alpha}(\rho_{R_nB_n}\|\tau_{R_nB_n}) \ge \frac{\alpha}{\alpha-1}\ln(1-\varepsilon)-\ln\mathrm{Tr}[Q_{R_nB_n}\tau_{R_nB_n}]. \tag{C3}
\end{equation}
From here, let us optimize over all $\{Q,\mathcal{A}\}$ and $\rho_{R_1A_1}$, so that
\begin{align}
\sup_{\{Q,\mathcal{A}\},\rho_{R_1A_1}}\widehat{D}_{\alpha}(\rho_{R_nB_n}\|\tau_{R_nB_n}) &\ge \frac{\alpha}{\alpha-1}\ln(1-\varepsilon)-\ln\inf_{\{Q,\mathcal{A}\},\rho_{R_1A_1}}\left\{\mathrm{Tr}[Q_{R_nB_n}\tau_{R_nB_n}] : \mathrm{Tr}[Q_{R_nB_n}\rho_{R_nB_n}]\ge1-\varepsilon\right\} \tag{C4}\\
&= \frac{\alpha}{\alpha-1}\ln(1-\varepsilon)-\ln\beta_{\varepsilon}\!\left(\mathcal{N}^{(n)}\big\|\mathcal{M}^{(n)}\right). \tag{C5}
\end{align}
By assuming $\beta_{\varepsilon}(\mathcal{N}^{(n)}\|\mathcal{M}^{(n)})\le\delta$, we also have that
\begin{align}
\sup_{\{Q,\mathcal{A}\},\rho_{R_1A_1}}\widehat{D}_{\alpha}(\rho_{R_nB_n}\|\tau_{R_nB_n}) &\ge \frac{\alpha}{\alpha-1}\ln(1-\varepsilon)-\ln\delta \tag{C6}\\
&= \ln\frac{(1-\varepsilon)^{\alpha'}}{\delta}, \tag{C7}
\end{align}
where $\alpha'$ is defined in the statement of Theorem 9. By using the chain rule for the geometric Rényi divergence in (21), we have that
\begin{align}
\widehat{D}_{\alpha}(\rho_{R_nB_n}\|\tau_{R_nB_n}) &= \widehat{D}_{\alpha}(\mathcal{N}_{A_n\to B_n}(\rho_{R_nA_n})\|\mathcal{M}_{A_n\to B_n}(\tau_{R_nA_n})) \tag{C8}\\
&\le \widehat{D}_{\alpha}(\mathcal{N}\|\mathcal{M})+\widehat{D}_{\alpha}(\rho_{R_nA_n}\|\tau_{R_nA_n}). \tag{C9}
\end{align}
Then, using the data-processing inequality for the geometric Rényi divergence under the channel $\mathcal{A}^{(n-1)}_{R_{n-1}B_{n-1}\to R_nA_n}$, we have that
\begin{equation}
\widehat{D}_{\alpha}(\rho_{R_nA_n}\|\tau_{R_nA_n}) \le \widehat{D}_{\alpha}\!\left(\mathcal{N}_{A_{n-1}\to B_{n-1}}(\rho_{R_{n-1}A_{n-1}})\,\big\|\,\mathcal{M}_{A_{n-1}\to B_{n-1}}(\tau_{R_{n-1}A_{n-1}})\right). \tag{C10}
\end{equation}
Proceeding with the application of the chain rule and the data-processing inequality under the adaptive channels $n-1$ times, we obtain
\begin{align}
\widehat{D}_{\alpha}(\rho_{R_nB_n}\|\tau_{R_nB_n}) &\le (n-1)\,\widehat{D}_{\alpha}(\mathcal{N}\|\mathcal{M})+\widehat{D}_{\alpha}(\rho_{R_1B_1}\|\tau_{R_1B_1}) \tag{C11}\\
&= (n-1)\,\widehat{D}_{\alpha}(\mathcal{N}\|\mathcal{M})+\widehat{D}_{\alpha}(\mathcal{N}_{A_1\to B_1}(\rho_{R_1A_1})\|\mathcal{M}_{A_1\to B_1}(\tau_{R_1A_1})) \tag{C12}\\
&\le n\,\widehat{D}_{\alpha}(\mathcal{N}\|\mathcal{M}), \tag{C13}
\end{align}
where the last inequality follows by the definition of the channel variant of the geometric Rényi divergence, obtained by replacing $D$ in (18) with $\widehat{D}_{\alpha}$. Combining (C7) and (C13), we arrive at the following for all $\alpha\in(1,2)$:
\begin{equation}
n\,\widehat{D}_{\alpha}(\mathcal{N}\|\mathcal{M}) \ge \ln\frac{(1-\varepsilon)^{\alpha'}}{\delta}, \tag{C14}
\end{equation}
which leads to
\begin{equation}
n \ge \sup_{\alpha\in(1,2)}\frac{\ln\frac{(1-\varepsilon)^{\alpha'}}{\delta}}{\widehat{D}_{\alpha}(\mathcal{N}\|\mathcal{M})}. \tag{C15}
\end{equation}
The other lower bound is obtained by switching the roles of $\mathcal{N}$ with $\mathcal{M}$, and $\varepsilon$ with $\delta$, with the use of the fact
\begin{equation}
n^{*}(\mathcal{N},\mathcal{M},\varepsilon,\delta) := \inf\left\{n\in\mathbb{N} : \beta_{\delta}\!\left(\mathcal{M}^{(n)}\big\|\mathcal{N}^{(n)}\right)\le\varepsilon\right\} \tag{C16}
\end{equation}
as given in (46).

Upper bound: By [28, Lemma 20], for all $\alpha\in(0,1)$ and $\varepsilon\in(0,1)$, we have that
\begin{equation}
-\ln\inf_{Q}\left\{\mathrm{Tr}[Q\sigma] : \mathrm{Tr}[Q\rho]\ge1-\varepsilon\right\} \ge D_{\alpha}(\rho\|\sigma)+\frac{\alpha}{\alpha-1}\ln\frac{1}{\varepsilon}. \tag{C17}
\end{equation}
With that, for our setup, let us consider $\rho=\rho_{R_nB_n}$, $\sigma=\tau_{R_nB_n}$, and $Q=Q_{R_nB_n}$, and supremize over all adaptive strategies $\{Q,\mathcal{A}\}$ and input states $\rho_{R_1A_1}$ to obtain
\begin{align}
-\ln\beta_{\varepsilon}\!\left(\mathcal{N}^{(n)}\big\|\mathcal{M}^{(n)}\right) &= -\ln\inf_{\{Q,\mathcal{A}\},\rho_{R_1A_1}}\left\{\mathrm{Tr}[Q_{R_nB_n}\tau_{R_nB_n}] : \mathrm{Tr}[Q_{R_nB_n}\rho_{R_nB_n}]\ge1-\varepsilon\right\} \tag{C18}\\
&\ge \sup_{\{Q,\mathcal{A}\},\rho_{R_1A_1}}D_{\alpha}(\rho_{R_nB_n}\|\tau_{R_nB_n})+\frac{\alpha}{\alpha-1}\ln\frac{1}{\varepsilon}. \tag{C19}
\end{align}
Consider a product strategy, in which we discriminate between the states $[\mathcal{N}(\rho_{RA})]^{\otimes n}$ and $[\mathcal{M}(\rho_{RA})]^{\otimes n}$ with the input state $\rho_{RA}$; optimizing over such strategies with different input states is included in the set of all possible strategies. This provides the following lower bound:
\begin{align}
\sup_{\{Q,\mathcal{A}\},\rho_{R_1A_1}}D_{\alpha}(\rho_{R_nB_n}\|\tau_{R_nB_n}) &\ge \sup_{\rho_{RA}}D_{\alpha}\!\left([\mathcal{N}(\rho_{RA})]^{\otimes n}\,\big\|\,[\mathcal{M}(\rho_{RA})]^{\otimes n}\right) \tag{C20}\\
&= n\cdot\sup_{\rho_{RA}}D_{\alpha}(\mathcal{N}(\rho_{RA})\|\mathcal{M}(\rho_{RA})), \tag{C21}
\end{align}
where the last equality follows by the additivity of the Petz–Rényi relative entropy. Combining (C19) and (C21), we have that
\begin{equation}
-\ln\beta_{\varepsilon}\!\left(\mathcal{N}^{(n)}\big\|\mathcal{M}^{(n)}\right) \ge n\cdot\sup_{\rho_{RA}}D_{\alpha}(\mathcal{N}(\rho_{RA})\|\mathcal{M}(\rho_{RA}))+\frac{\alpha}{\alpha-1}\ln\frac{1}{\varepsilon}, \tag{C22}
\end{equation}
for all $\alpha\in(0,1)$. Using that, with the choice
\begin{equation}
n \ge \inf_{\alpha\in(0,1)}\left\lceil\frac{\ln\frac{\varepsilon^{\alpha'}}{\delta}}{D_{\alpha}(\mathcal{N}\|\mathcal{M})}\right\rceil, \tag{C23}
\end{equation}
we have that $\beta_{\varepsilon}(\mathcal{N}^{(n)}\|\mathcal{M}^{(n)})\le\delta$. The optimum number of channel uses with this strategy is then
\begin{equation}
n = \inf_{\alpha\in(0,1)}\left\lceil\frac{\ln\frac{\varepsilon^{\alpha'}}{\delta}}{D_{\alpha}(\mathcal{N}\|\mathcal{M})}\right\rceil, \tag{C24}
\end{equation}
which leads to
\begin{equation}
n^{*}(\mathcal{N},\mathcal{M},\varepsilon,\delta) \le \inf_{\alpha\in(0,1)}\left\lceil\frac{\ln\frac{\varepsilon^{\alpha'}}{\delta}}{D_{\alpha}(\mathcal{N}\|\mathcal{M})}\right\rceil. \tag{C25}
\end{equation}

Appendix D: Proof of Theorem 10

Lower bound: Recall (38), so that
\begin{equation}
p_{e}(S,n) := \inf_{\{Q,\mathcal{A}\},\rho_{R_1A_1}}\sum_{m=1}^{M}p_{m}\,\mathrm{Tr}\!\left[(I_{R_nB_n}-Q^{m}_{R_nB_n})\rho^{m}_{R_nB_n}\right], \tag{D1}
\end{equation}
where $Q$ denotes a POVM $(Q^{1}_{R_nB_n},\ldots,Q^{M}_{R_nB_n})$ satisfying $Q^{m}_{R_nB_n}\ge0$ for all $m\in\{1,\ldots,M\}$ and $\sum_{m=1}^{M}Q^{m}_{R_nB_n}=I_{R_nB_n}$. For $1\le m\ne\tilde{m}\le M$, choose $L_{R_nB_n}$ and $T_{R_nB_n}$ to be positive semi-definite operators satisfying $L_{R_nB_n}+T_{R_nB_n}=I_{R_nB_n}-Q^{m}_{R_nB_n}-Q^{\tilde{m}}_{R_nB_n}$. With that, define $\widetilde{Q}^{m}_{R_nB_n}:=Q^{m}_{R_nB_n}+L_{R_nB_n}$ and $\widetilde{Q}^{\tilde{m}}_{R_nB_n}:=Q^{\tilde{m}}_{R_nB_n}+T_{R_nB_n}$, so that we have $\widetilde{Q}^{m}_{R_nB_n}+\widetilde{Q}^{\tilde{m}}_{R_nB_n}=I$. Consider
\begin{align}
\sum_{m=1}^{M}p_{m}\,\mathrm{Tr}\!\left[(I_{R_nB_n}-Q^{m}_{R_nB_n})\rho^{m}_{R_nB_n}\right] &\ge p_{m}\,\mathrm{Tr}\!\left[(I_{R_nB_n}-Q^{m}_{R_nB_n})\rho^{m}_{R_nB_n}\right]+p_{\tilde{m}}\,\mathrm{Tr}\!\left[(I_{R_nB_n}-Q^{\tilde{m}}_{R_nB_n})\rho^{\tilde{m}}_{R_nB_n}\right] \tag{D2}\\
&\ge p_{m}\,\mathrm{Tr}\!\left[(I_{R_nB_n}-\widetilde{Q}^{m}_{R_nB_n})\rho^{m}_{R_nB_n}\right]+p_{\tilde{m}}\,\mathrm{Tr}\!\left[(I_{R_nB_n}-\widetilde{Q}^{\tilde{m}}_{R_nB_n})\rho^{\tilde{m}}_{R_nB_n}\right] \tag{D3}\\
&= (p_{m}+p_{\tilde{m}})\left(\frac{p_{m}}{p_{m}+p_{\tilde{m}}}\mathrm{Tr}\!\left[(I_{R_nB_n}-\widetilde{Q}^{m}_{R_nB_n})\rho^{m}_{R_nB_n}\right]+\frac{p_{\tilde{m}}}{p_{m}+p_{\tilde{m}}}\mathrm{Tr}\!\left[(I_{R_nB_n}-\widetilde{Q}^{\tilde{m}}_{R_nB_n})\rho^{\tilde{m}}_{R_nB_n}\right]\right) \tag{D4}\\
&\ge (p_{m}+p_{\tilde{m}})\inf_{\{\widetilde{Q},\mathcal{A}\},\rho_{R_1A_1}}\left(\frac{p_{m}}{p_{m}+p_{\tilde{m}}}\mathrm{Tr}\!\left[\widetilde{Q}_{R_nB_n}\rho^{m}_{R_nB_n}\right]+\frac{p_{\tilde{m}}}{p_{m}+p_{\tilde{m}}}\mathrm{Tr}\!\left[(I_{R_nB_n}-\widetilde{Q}_{R_nB_n})\rho^{\tilde{m}}_{R_nB_n}\right]\right) \tag{D5}\\
&\ge \frac{p_{m}p_{\tilde{m}}}{p_{m}+p_{\tilde{m}}}\left[\widehat{F}(\mathcal{N}_{m},\mathcal{N}_{\tilde{m}})\right]^{n}, \tag{D6}
\end{align}
where the second inequality follows by the explicit construction of $\widetilde{Q}$, so that $Q^{i}_{R_nB_n}\le\widetilde{Q}^{i}_{R_nB_n}$ for $i\in\{m,\tilde{m}\}$, and the last inequality by applying (B25). By optimizing the left-hand side over all $\{Q,\mathcal{A}\}$ and $\rho_{R_1A_1}$ and imposing the constraint on the error, we have that
\begin{align}
\varepsilon &\ge p_{e}(S,n) \tag{D7}\\
&\ge \frac{p_{m}p_{\tilde{m}}}{p_{m}+p_{\tilde{m}}}\left[\widehat{F}(\mathcal{N}_{m},\mathcal{N}_{\tilde{m}})\right]^{n}. \tag{D8}
\end{align}
Rearranging the terms in the above inequality and maximizing over all arbitrary pairs $m,\tilde{m}$ such that $1\le m\ne\tilde{m}\le M$ concludes the proof of the desired lower bound.

Upper bound: By employing the product strategy of discriminating the states $\{(\mathcal{N}^{m}_{A\to B}(\rho_{RA}))^{\otimes n}\}_{m}$ with the initial state $\rho_{RA}$, we have that
\begin{align}
p_{e}(S,n) &\le \inf_{Q,\rho_{RA}}\sum_{m=1}^{M}p_{m}\,\mathrm{Tr}\!\left[(I_{R_nB_n}-Q^{m}_{R_nB_n})\,(\mathcal{N}^{m}(\rho_{RA}))^{\otimes n}\right] \tag{D9}\\
&\le \inf_{Q,\rho_{RA}}\frac{1}{2}M(M-1)\max_{\bar{m}\ne m}\sqrt{p_{m}}\sqrt{p_{\bar{m}}}\sqrt{F\!\left((\mathcal{N}^{m}(\rho_{RA}))^{\otimes n},(\mathcal{N}^{\bar{m}}(\rho_{RA}))^{\otimes n}\right)} \tag{D10}\\
&= \frac{1}{2}M(M-1)\max_{\bar{m}\ne m}\sqrt{p_{m}}\sqrt{p_{\bar{m}}}\left[\sqrt{F(\mathcal{N}^{m},\mathcal{N}^{\bar{m}})}\right]^{n}, \tag{D11}
\end{align}
where the second inequality follows by applying [28, Eq. (J8)] to this setting, and the last equality by the definition of the channel fidelities in (19) with the choice $F$, together with the multiplicativity of the fidelity under tensor powers. Then, by choosing
\begin{equation}
n \ge \max_{m\ne\bar{m}}\left\lceil\frac{2\ln\!\left(\frac{M(M-1)\sqrt{p_{m}}\sqrt{p_{\bar{m}}}}{2\varepsilon}\right)}{-\ln F(\mathcal{N}^{m},\mathcal{N}^{\bar{m}})}\right\rceil, \tag{D12}
\end{equation}
we obtain that $p_{e}(S,n)\le\varepsilon$ using (D11). The optimum $n$ to choose with this strategy would be
\begin{equation}
n = \max_{m\ne\bar{m}}\left\lceil\frac{2\ln\!\left(\frac{M(M-1)\sqrt{p_{m}}\sqrt{p_{\bar{m}}}}{2\varepsilon}\right)}{-\ln F(\mathcal{N}^{m},\mathcal{N}^{\bar{m}})}\right\rceil. \tag{D13}
\end{equation}
With this, we conclude that
\begin{equation}
n^{*}(S,\varepsilon) \le \max_{m\ne\bar{m}}\left\lceil\frac{2\ln\!\left(\frac{M(M-1)\sqrt{p_{m}}\sqrt{p_{\bar{m}}}}{2\varepsilon}\right)}{-\ln F(\mathcal{N}^{m},\mathcal{N}^{\bar{m}})}\right\rceil. \tag{D14}
\end{equation}
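The choice of $n$ in (D12)–(D14) can be checked arithmetically: for any priors and pairwise channel fidelities, plugging the resulting $n$ back into the bound (D11) must give a value below $\varepsilon$. The priors and fidelity values below are hypothetical, chosen only for illustration:

```python
import math

# Verify the choice of n in (D12): the product-strategy error bound (D11),
# (1/2) M (M-1) sqrt(p_m p_mbar) F^(n/2), drops below epsilon.
p = [0.5, 0.3, 0.2]                              # assumed priors
F = {(0, 1): 0.80, (0, 2): 0.60, (1, 2): 0.70}   # assumed pairwise fidelities
M, eps = len(p), 0.01

def n_required(p, F, eps):
    """Smallest n per (D12): max over pairs of a ceil of a log ratio."""
    M = len(p)
    worst = 0
    for (m, mb), f in F.items():
        num = 2 * math.log(M * (M - 1) * math.sqrt(p[m] * p[mb]) / (2 * eps))
        worst = max(worst, math.ceil(num / (-math.log(f))))
    return worst

n = n_required(p, F, eps)
bound = 0.5 * M * (M - 1) * max(
    math.sqrt(p[m] * p[mb]) * f ** (n / 2) for (m, mb), f in F.items()
)
assert bound <= eps  # (D11) evaluated at the n from (D12)
```

By construction of the ceiling in (D12), the assertion holds for any choice of priors and fidelities in $(0,1)$.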
Spatial Confidence Regions for Excursion Sets with False Discovery Rate Control

Howon Ryu1, Thomas Maullin-Sapey2,3, Armin Schwartzman1,4, and Samuel Davenport1

1Division of Biostatistics and Bioinformatics, University of California, San Diego, San Diego, CA, USA
2Big Data Institute, Nuffield Department of Population Health, University of Oxford, Oxford, United Kingdom
3School of Mathematics, University of Bristol, Bristol, United Kingdom
4Halıcıoğlu Data Science Institute, University of California, San Diego, San Diego, CA, USA

April 18, 2025

Abstract

Identifying areas where the signal is prominent is an important task in image analysis, with particular applications in brain mapping. In this work, we develop confidence regions for spatial excursion sets above and below a given level. We achieve this by treating the confidence procedure as a testing problem at the given level, allowing control of the False Discovery Rate (FDR). Methods are developed to control the FDR, separately for positive and negative excursions, as well as jointly over both. Furthermore, power is increased by incorporating a two-stage adaptive procedure. Simulation results with various signals show that our confidence regions successfully control the FDR under the nominal α level. We showcase our methods with an application to functional magnetic resonance imaging (fMRI) data from the Human Connectome Project, illustrating the improvement in statistical power over existing approaches.

1 Introduction

Spatial inference is an important topic in image analysis, especially in neuroimaging applications. The goal of spatial inference is to provide localization of signals of interest, that is, to statistically identify regions within the image where the effect of interest is thought to be the strongest or exceeds a certain level.
A common example of this problem is to identify regions of the brain that show activation in response to a stimulus in a task-based functional magnetic resonance imaging (fMRI) experiment.

A traditional approach to solving this problem is via a mass univariate voxel-wise testing procedure, which attempts to identify the locations of voxels where the activation is non-zero. Mass univariate inference typically involves performing hundreds of thousands of tests throughout the brain, leading to a substantial multiple testing problem [10, 53]. In order to address this, standard analyses control the family-wise error rate (FWER) [42] or the false discovery rate (FDR) [28]. FWER control can be achieved using permutation- or bootstrap-based inference [35, 34, 33, 55], or using random field theory [24, 53, 21]. Other alternatives include thresholding based on topological features such as peaks [18, 16] or clusters [25, 47].

The above approaches seek to identify brain regions where the activation is non-zero. With the increasing size of fMRI datasets, however, this type of spatial inference suffers from the "null hypothesis fallacy". Due to large sample sizes, even small effects are picked up by the model as significant [46, 22], and commonly used methods identify large regions of the brain as active (see [15] and Figure 1 of [19]) [30].

As an alternative spatial inference approach, to avoid the null hypothesis fallacy, Sommerfeld et al. [41] (which from hereon will be referred to as SSS after the names of the authors) proposed the construction of confidence regions at a level $c\neq0$. Confidence regions, in their general sense, address spatial
https://arxiv.org/abs/2504.13124v1
uncertainty for the excursion set, $A_c=\{v\in\mathcal{V} : \mu(v)>c\}$, corresponding to the locations where $\mu(v)$, the true mean at each element $v$ in the image space $\mathcal{V}$, is greater than the threshold $c\in\mathbb{R}$. We will refer to the elements of $\mathcal{V}$ as locations. For images defined on a 2D/3D lattice, the locations are pixels/voxels respectively, and for images defined on a surface the locations are vertices. Confidence regions consist of an upper region $\hat{A}_c^+$ and a lower region $\hat{A}_c^-$. The upper region is constructed to indicate where the signal is greater than $c$, and the complement of the lower region is constructed to indicate where the signal is less than $c$. SSS originally introduced confidence regions for $A_c$ in the setting of 2D geospatial climate data. In the neuroimaging context, Bowring et al. [15] and [14] extended this work to the raw effect size and standardized effect size of 3D fMRI data, and [40] extended the SSS approach to account for conjunction and disjunction effects.

The confidence regions in SSS and Bowring et al. [15] aim to control a rather strict error rate, ensuring complete inclusion of the confidence regions with high probability, i.e., $\mathbb{P}(\hat{A}_c^+\subseteq A_c\subseteq\hat{A}_c^-)\ge1-\alpha$ for some $\alpha\in(0,1)$. The inclusion statement is violated if even a single location belongs to $\hat{A}_c^+$ but not $A_c$, or to $A_c$ but not $\hat{A}_c^-$. In this sense, this type of coverage is analogous to FWER, which is the probability of making any error when using a testing procedure. Such a strict criterion is conservative and may produce large confidence regions with the correct coverage rate but low spatial precision. Moreover, the theory behind SSS requires the data to be defined on a continuous domain and not the lattice structures which often arise in neuroimaging.

To increase spatial precision, we consider another error criterion based on the false discovery rate (FDR), which is known to be less conservative.
FDR control in neuroimaging was first introduced by [28], and since then it has been widely used as an alternative measure for error in spatial inference in fMRI [4, 16, 17]. Targeting the FDR provides more powerful inference than targeting the FWER, resulting in a greater number of discoveries. This occurs because it aims to control only the false positives over all the rejected tests, rather than over all the tests. Various FDR corrections include the Benjamini–Hochberg (BH) procedure [5], BH under dependency [9], and others [7, 54, 6].

Our goal in this work is to construct confidence regions that quantify the uncertainty in the spatial location and the extent of excursion sets representing effects above a level $c\neq0$ (which can be positive or negative), while taking advantage of the increase in statistical power brought by adopting FDR control. To this end, we propose a location-wise testing method that searches for signals above (or below) a level $c$ with FDR control over the entire image space. The key here is that the level being tested is not 0 but $c$, thus avoiding the null hypothesis fallacy which occurs at 0. The confidence regions are then constructed from the outcomes of the location-wise tests.

Confidence regions make directional inclusion statements. Directional errors occur in two-sided hypothesis testing when the null hypothesis is rejected in the wrong direction. Some early works in directional error control in two-tailed tests involve
[23], and [51, 8] for multiple testing confidence intervals. Different correctional methods for directional FDR control are presented in [52] for neuroimaging and [32] for genetics data applications, while [31] presents error control under different correlational structures, or under independence [36, 38].

Figure 1 (panels (a) Separate Method, c = 1.5; (b) Joint Method, c = 1.5; (c) Separate Method, c = 2.5; (d) Joint Method, c = 2.5): An illustration of confidence regions, developed in this work, applied to task-based fMRI: the red area denotes the upper confidence region $\hat{A}_c^+$, the blue area denotes the lower confidence region $\hat{A}_c^-$, and the yellow area denotes the point estimate $\hat{A}_c$ for the excursion set. The rows differ in the threshold $c$ in % blood-oxygen-level-dependent (BOLD) change, and the columns differ in confidence region construction methods.

In this paper, we introduce testing-based confidence regions $\hat{A}_c^+$ and $\hat{A}_c^-$ which control the FDR over space in the upper and lower directions, respectively, with respect to the true excursion set $A_c$. Our proposed approach uses one-sided testing at the level $c$, instead of the usual two-sided testing, which allows us to control directional errors. We investigate two modes of multiple testing for confidence region construction: one using the Benjamini–Hochberg procedure, and the other using a two-stage adaptive procedure proposed in [13]. This two-stage procedure is useful in increasing statistical power when the targeted excursion set is small, which occurs frequently for higher levels $c$. Testing in the positive and negative directions separately, we obtain confidence sets for each direction. We continue by constructing confidence sets with joint error control over the two directions, in which the tests for the positive and negative directions are considered together in a single multiple testing procedure. An illustration of the separate and joint confidence regions in practice is presented in Figure 1.
We evaluate the performance of the proposed methods in terms of FDR, for type-I error, and the false non-discovery rate (FNDR), for type-II error, defined with respect to the excursion set.

The rest of the paper is organized as follows. In Section 2, we present the problem formally, define the error rates for separate error control, and present the algorithms for controlling them. In Section 3, this approach is extended to joint error control over the two directions. In Section 4, we evaluate the performance of the methods via simulation. In Section 5, a real data application to task fMRI data from the Human Connectome Project working memory task is presented.

2 Confidence Regions with Separate Error Control

In this section, we introduce a spatial inference framework that controls the FDR separately through two one-sided tests. The upper and lower confidence regions are defined via positive and negative one-sided hypothesis testing, where the spatial FDR is controlled at level α separately for each test.

2.1 Upper Confidence Region

2.1.1 Construction

Given locations $v$ within a finite lattice $\mathcal{V}\subset\mathbb{R}^D$, we can construct an upper confidence region by testing in the positive direction at the level $c$. For each $v\in\mathcal{V}$, we consider the null and alternative hypotheses given by
\begin{equation}
H_0^U(v): \mu(v)\le c \quad\text{vs.}\quad H_A^U(v): \mu(v)>c \tag{1}
\end{equation}
at each location $v\in\mathcal{V}$, with corresponding null set $(A_c)^C\subseteq\mathcal{V}$, the complement of $A_c$. To do so, let
$$t(v) = \frac{\hat{\mu}(v)-c}{\hat{\sigma}(v)/\sqrt{n}}$$
be the t-statistic for testing against the level $c$ as in (1), where $\hat{\mu}(v)$ and $\hat{\sigma}(v)$ are the point estimates of the true mean and standard deviation at each location $v\in\mathcal{V}$, respectively, from $n$ i.i.d. samples. We can transform these test statistics into a set of p-values, $p_U=(p_U(1),\ldots,p_U(m))$, to obtain an upper confidence region, at a given level $\alpha\in(0,1)$, by applying the Benjamini–Hochberg procedure [5, 9], as demonstrated in Algorithm 1.

Algorithm 1 Upper confidence region
Input: $(t(v),\alpha)$. Output: $\hat{A}_c^+$.
1: Obtain $m$ p-values $p_U(v)=1-F_{T_{n-1}}(t(v))$, $v=1,\ldots,m$, where $F_{T_{n-1}}$ is the cdf of the t-distribution with $n-1$ degrees of freedom, from the positive one-sided test.
2: Apply the Benjamini–Hochberg procedure to the p-values $p_U=(p_U(1),\ldots,p_U(m))$ to get $p_k$, the threshold for rejection.
3: Define $\hat{A}_c^+=\{v\in\mathcal{V} : p(v)\le p_k\}$, the set of locations rejected.
4: Return the upper confidence region $\hat{A}_c^+$.

The image space $\mathcal{V}$ can be naturally partitioned according to the rejection regions and the true non-null regions; see Table 1. For a given level $c$, the test and the corresponding regions can be visualized as shown in Figure 2.

Table 1: Partition of the image space $\mathcal{V}$ according to the ground truth and the rejection status (positive direction, one-sided). The color of the text in each cell corresponds to the color of the area in Figure 2 (b) and (c).

| Hypothesis | Not rejected | Rejected | Total |
|---|---|---|---|
| $H_0^U$ | $(A_c)^C\cap(\hat{A}_c^+)^C$ | $(A_c)^C\cap\hat{A}_c^+$ | $(A_c)^C$ |
| $H_A^U$ | $A_c\cap(\hat{A}_c^+)^C$ | $A_c\cap\hat{A}_c^+$ | $A_c$ |
| Total | $(\hat{A}_c^+)^C$ | $\hat{A}_c^+$ | $\mathcal{V}$ |

Figure 2: Upper confidence region schematic: (a) Example signal and the projection of the excursion set above $c$ ($A_c$). (b) A hypothetical upper confidence region $\hat{A}_c^+$ superimposed on the ground truth $A_c$. (c) The division of (b) into regions corresponding to those specified in Table 1, as a visual representation of false positives (green) and false non-positives (orange).
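A minimal sketch of Algorithm 1 in Python (using numpy and scipy; the toy data and all variable names here are our own illustrative assumptions, not part of the paper):

```python
import numpy as np
from scipy import stats

def upper_confidence_region(data, c, alpha):
    """Algorithm 1 sketch: upper confidence region via the BH procedure.

    data : (n, m) array of n i.i.d. images over m locations.
    Returns a boolean mask over the m locations, the estimated A_c^+.
    """
    n, m = data.shape
    mu_hat = data.mean(axis=0)
    sigma_hat = data.std(axis=0, ddof=1)
    t = (mu_hat - c) / (sigma_hat / np.sqrt(n))
    # Step 1: one-sided p-values p_U(v) = 1 - F_{T_{n-1}}(t(v)).
    p = stats.t.sf(t, df=n - 1)
    # Step 2: Benjamini-Hochberg step-up at level alpha.
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    # Step 3: reject the k smallest p-values.
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Toy example: the true mean exceeds c = 1 only on the first 20 locations.
rng = np.random.default_rng(1)
mu = np.concatenate([np.full(20, 2.0), np.zeros(80)])
data = mu + rng.standard_normal((50, 100))
region = upper_confidence_region(data, c=1.0, alpha=0.05)
```

The step-up rule takes the largest rank $k$ whose ordered p-value falls below $\alpha k/m$, which is exactly the BH threshold $p_k$ of Step 2.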
2.1.2 Spatial FDR and FNDR

The false discovery proportion is given by the ratio of the number of true null locations rejected ($|(A_c)^C\cap\hat{A}_c^+|=|\hat{A}_c^+\setminus A_c|$) to the number of all rejected locations ($|\hat{A}_c^+|$). The construction of $\hat{A}_c^+$ ensures control of the upper spatial false discovery rate ($\mathrm{FDR}_U$), which is defined as the expectation of the false discovery proportion, namely
\begin{equation}
\mathrm{FDR}_U = \mathbb{E}\left[\frac{|\hat{A}_c^+\setminus A_c|}{|\hat{A}_c^+|\vee1}\right]. \tag{2}
\end{equation}
Algorithm 1 ensures that this is controlled in expectation to a level $\alpha$. Note that the p-values created this way satisfy the assumption of Positive Regression Dependency on each Subset (PRDS) [43], a sufficient condition for application of the Benjamini–Hochberg procedure [9], which is considered reasonable for neuroimaging data [28].

In order to measure statistical power in Section 4, we shall use the false non-discovery proportion, which measures the ratio between the number of false non-discoveries and the number of total non-discoveries. Then, the upper false non-discovery rate ($\mathrm{FNDR}_U$) is defined as
\begin{equation}
\mathrm{FNDR}_U = \mathbb{E}\left[\frac{|A_c\setminus\hat{A}_c^+|}{|(\hat{A}_c^+)^C|\vee1}\right], \tag{3}
\end{equation}
where $\mathrm{FNDR}_U$ is the expected value of the false non-discovery proportion.

2.2 Lower Confidence Region

2.2.1 Construction

In order to construct a lower confidence region
$\hat{A}_c^-$, we consider testing against the negative one-sided null hypothesis at level $c$, defined as
\begin{equation}
H_0^L(v): \mu(v)\ge c \quad\text{vs.}\quad H_A^L(v): \mu(v)<c. \tag{4}
\end{equation}
Construction of the lower confidence region is then formally given by Algorithm 2. This differs from Algorithm 1 in that it incorporates a two-stage adaptive procedure, as explained below.

Algorithm 2 Lower confidence region
Input: $(t(v),\alpha)$. Output: $\hat{A}_c^-$.
1: Obtain $m$ p-values $p_L(v)=F_{T_{n-1}}(t(v))$, where $F_{T_{n-1}}$ is the cdf of the t-distribution with $n-1$ degrees of freedom, from the negative one-sided test.
2: Apply the two-stage adaptive procedure (Blanchard et al. [13]) to the p-values $p_L=(p_L(1),\ldots,p_L(m))$ to get $p_k$, the threshold for rejection.
3: Define $\hat{A}_c^-=\{v\in\mathcal{V} : p(v)\le p_k\}^C$, the complement of the set of locations rejected.
4: Return the lower confidence region $\hat{A}_c^-$.

Then, the null set corresponding to the hypothesis (4) is the closure $\overline{A_c}$, defined as $\overline{A_c}=\{v\in\mathcal{V} : \mu(v)\ge c\}$. Note that in order to test $\mu\ge c$ against $\mu<c$, we can instead test $-\mu\le-c$ against $-\mu>-c$. Thus, the problem is symmetric and well defined in terms of the one-sided test in the negative direction. The choice of formulation is dictated by the specific hypothesis constructed by the analysts.

Note here that, unlike the case of the upper confidence region $\hat{A}_c^+$, the lower confidence region $\hat{A}_c^-$ now represents the non-significant set of locations. This asymmetry in the definition is by choice, stemming from the fact that while the null hypothesis in the negative direction is used in constructing $\hat{A}_c^-$, the excursion set is still in the positive direction.

While having a similar construction procedure, the biggest difference between Algorithms 1 and 2 is that the latter uses an adaptive procedure to perform multiple testing. The BH procedure controls the FDR at the level $\alpha\cdot\pi_0$, where $\pi_0:=m_0/m$, with $m$ being the total number of hypotheses and $m_0$ being the number of true null hypotheses.
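The $\alpha\cdot\pi_0$ behaviour is easy to see in a quick Monte Carlo sketch (a toy Gaussian setup of our own, not from the paper): with $\pi_0=0.2$, the realized FDR of plain BH hovers near $\pi_0\cdot\alpha=0.01$, well below the nominal $\alpha=0.05$.

```python
import numpy as np
from scipy import stats

# Toy illustration: plain BH is conservative when pi0 = m0/m is small.
rng = np.random.default_rng(2)
m, m0, alpha, reps = 200, 40, 0.05, 500   # pi0 = m0/m = 0.2 (assumed values)
null = np.arange(m) < m0                  # first m0 locations are true nulls
fdp = []
for _ in range(reps):
    z = rng.standard_normal(m) + np.where(null, 0.0, 3.0)  # signal on non-nulls
    p = stats.norm.sf(z)                  # one-sided p-values
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m    # BH step-up
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, bool)
    rejected[order[:k]] = True
    fdp.append(rejected[null].sum() / max(rejected.sum(), 1))
avg_fdr = float(np.mean(fdp))
assert avg_fdr <= alpha                   # BH guarantee
assert avg_fdr <= 2.0 * (m0 / m) * alpha  # empirically near pi0 * alpha
```

This slack in the guarantee is exactly what the adaptive procedure of the next section recovers by plugging in an estimate of $\pi_0$.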
When the fraction of true null hypotheses $\pi_0$ is small, as is typically the case in the test (4) for high values of $c$, control of the FDR can be very conservative. This warrants an adaptive procedure that corrects for $\pi_0$ so that the actual error rate is kept at a higher (less conservative) level. Various adaptive FDR control procedures have been suggested, including Benjamini and Hochberg's adaptive modification of the BH procedure [6] and others that followed [48, 49, 27, 39, 26, 38] under different assumptions on the data. We employ the two-stage adaptive procedure of Blanchard and Roquain (2009) [13] to form the lower confidence region. It works by estimating the number of null hypotheses in a first stage and using this information in a second stage to perform inference; see Section 2.2.3 for details.

The negative one-sided test partitions the image space $\mathcal{V}$ in the following way (Table 2). For a given level $c$, the test and the corresponding regions are visualized in Figure 3.

Hypothesis | Not rejected | Rejected | Total
$H_0$ | $\bar{A}_c \cap \hat{A}^-_c$ | $\bar{A}_c \cap (\hat{A}^-_c)^C$ | $\bar{A}_c$
$H_A$ | $(\bar{A}_c)^C \cap \hat{A}^-_c$ | $(\bar{A}_c)^C \cap (\hat{A}^-_c)^C$ | $(\bar{A}_c)^C$
Total | $\hat{A}^-_c$ | $(\hat{A}^-_c)^C$ | $\mathcal{V}$

Table 2: Partition of the image space $\mathcal{V}$ according to the ground truth and the rejection status (negative direction, one-sided). The color of the text in each cell corresponds to the color of the area in Figure 3 (b) and (c).

Figure 3: Lower confidence region schematic: (a) Example signal and the projection of the excursion set at and above $c$ ($\bar{A}_c$). (b) A hypothetical lower confidence region $\hat{A}^-_c$ superimposed on the ground truth $\bar{A}_c$. (c) The division of (b) into regions corresponding to those specified in Table 2 as a visual representation of false positives (green) and false non-positives (orange).

2.2.2 Spatial FDR and FNDR

Similar to the upper regions, the lower false discovery proportion is measured by the number of true null locations rejected ($|\bar{A}_c \cap (\hat{A}^-_c)^C| = |\bar{A}_c \setminus \hat{A}^-_c|$) divided by the number of rejected locations ($|(\hat{A}^-_c)^C|$) in the negative one-sided test. The false discovery rate for the lower confidence region (FDR$_L$) is defined as the expectation of the false discovery proportion, namely

$$\mathrm{FDR}_L = \mathbb{E}\left[\frac{|\bar{A}_c \setminus \hat{A}^-_c|}{|(\hat{A}^-_c)^C| \vee 1}\right]. \quad (5)$$

The false non-discovery proportion is measured by the number of false null locations that are not rejected in the negative testing, $|(\bar{A}_c)^C \cap \hat{A}^-_c| = |\hat{A}^-_c \setminus \bar{A}_c|$, divided by the number of all non-rejected locations in the negative one-sided test, $|\hat{A}^-_c|$. Again, the FNDR for the lower confidence region (FNDR$_L$) is defined as the expectation of the false non-discovery proportion,

$$\mathrm{FNDR}_L = \mathbb{E}\left[\frac{|\hat{A}^-_c \setminus \bar{A}_c|}{|\hat{A}^-_c| \vee 1}\right]. \quad (6)$$

2.2.3 The Two-stage Adaptive Procedure

Here we provide formal details on the two-stage adaptive procedure of Blanchard and Roquain (2009) [13] used for our lower confidence region. Two-stage adaptive procedures typically use a threshold collection of the form $\Delta_i = \pi_0^{-1} \frac{\beta(i)}{m} \alpha$. The weights $\beta(i)$ take the form $\beta(i) = \int_0^i u \, \mathrm{d}f(u)$, with $f$ a fixed probability distribution on $(0, \infty)$, under unspecified dependence, or $\beta(i) = i$ under independence of the p-values or PRDS. By multiplying the threshold collection by a factor of $\pi_0^{-1}$, we obtain a less conservative procedure.
In practice, we use $\Delta_i = \hat{\pi}_0^{-1} \frac{\beta(i)}{m} \alpha$, where $\hat{\pi}_0^{-1} = m/\hat{m}_0$ is an estimator of the true $\pi_0^{-1}$. The two-stage adaptive procedure of Blanchard and Roquain [13] provably controls the FDR at the $\alpha$ level under positive dependence assumptions. In the first stage, a step-up procedure is conducted with threshold collection $\Delta_i = \frac{\beta(i)}{m}\alpha_0$ to get an estimate $\hat{m}_0 = m - |R_0|$, where $|R_0|$ is the number of rejected hypotheses. In the second stage, an adaptive step-up procedure is conducted with $\Delta_i = \hat{\pi}_0^{-1}\frac{\beta(i)}{m}\alpha_1$, where $\hat{\pi}_0^{-1} = F_\kappa(\hat{m}_0/m)$ and, for $x \in [0,1]$,

$$F_\kappa(x) = \begin{cases} 1, & \text{if } x \leq \kappa^{-1}, \\[4pt] \dfrac{2\kappa^{-1}}{1 - \sqrt{1 - 4(1-x)\kappa^{-1}}}, & \text{otherwise}. \end{cases}$$

Blanchard and Roquain state that the two-stage procedure with $\alpha_0 = \alpha/4$, $\alpha_1 = \alpha/2$ and $\kappa = 2$ produces less conservative FDR control than the linear step-up (BH) procedure alone when $F_{\kappa=2}(\hat{m}_0/m) \geq 2$, which translates to expecting at least $F_2^{-1}(2) = 62.5\%$ of the tests to be rejected [13]. In the confidence region setting, rejection corresponds to $(\hat{A}^-_c)^C$. When $c$ is high enough that $(\hat{A}^-_c)^C$ is expected to be larger than $\hat{A}^-_c$ (which is typically the case in neuroimaging inference), the two-stage adaptive procedure produces more powerful lower confidence regions.

3 Confidence Regions with Joint Error Control

In this section we develop upper and lower confidence regions which have joint (instead of separate) error rate control. To do so, we shall combine the directional p-values from the upper and lower direction tests into a single BH algorithm.

3.1 Hypothesis Testing

We now propose a confidence region construction procedure for jointly testing two null hypotheses:

$$H^U_0(v): \mu(v) \leq c \quad \text{vs.} \quad H^U_A(v): \mu(v) > c, \quad (7)$$
$$H^L_0(v): \mu(v) \geq c \quad \text{vs.} \quad H^L_A(v): \mu(v) < c, \quad (8)$$

where the superscripts $U$ and $L$ denote upper and lower respectively. Define $L = \mathcal{V} \times \{-1\}$ and $U = \mathcal{V} \times \{1\}$ to be two homeomorphic copies of $\mathcal{V}$. Given two sets $A, B \subseteq \mathcal{V}$, let $A \sqcup B := A \times \{-1\} \cup B \times \{1\}$ refer to the disjoint union of the corresponding copies in $L$ and $U$, following the notation of [37]. Given this notation we can define the joint hypothesis space $\mathcal{H} = L \sqcup U$. The joint null set for the hypotheses is then $(A_c)^C \sqcup \bar{A}_c$. Considering the disjoint union allows us to provide joint confidence statements which hold for the upper and lower spaces simultaneously. The space $\mathcal{V}$ is partitioned by the joint error control hypothesis testing framework as shown in Table 3.

Hypothesis | Not rejected | Rejected | Total
$H_0$ | $\{(A_c)^C \cap (\hat{A}^+_c)^C\} \sqcup \{\bar{A}_c \cap \hat{A}^-_c\}$ | $\{(A_c)^C \cap \hat{A}^+_c\} \sqcup \{\bar{A}_c \cap (\hat{A}^-_c)^C\}$ | $(A_c)^C \sqcup \bar{A}_c$
$H_A$ | $\{A_c \cap (\hat{A}^+_c)^C\} \sqcup \{(\bar{A}_c)^C \cap \hat{A}^-_c\}$ | $\{A_c \cap \hat{A}^+_c\} \sqcup \{(\bar{A}_c)^C \cap (\hat{A}^-_c)^C\}$ | $A_c \sqcup (\bar{A}_c)^C$
Total | $(\hat{A}^+_c)^C \sqcup \hat{A}^-_c$ | $\hat{A}^+_c \sqcup (\hat{A}^-_c)^C$ | $\mathcal{V} \times \{L, U\}$

Table 3: Partition of $\mathcal{V}$ according to the true hypothesis and rejection status.

Figure 4: Upper and lower confidence regions from the joint control method, schematic: (a) Example signal and the projection of the excursion set above $c$ ($A_c$) and at and above $c$ ($\bar{A}_c$). (b) Hypothetical upper and lower confidence regions $\hat{A}^+_c$ and $\hat{A}^-_c$ superimposed on the ground truth $A_c$ and $\bar{A}_c$ respectively. (c) The division of (b) into regions corresponding to those specified in Table 3 as a visual representation of false positives (green) and false non-positives (orange).
3.2 Joint Confidence Regions: Construction

Given the same setting of an image space $\mathcal{V}$ consisting of $m$ locations, we construct the upper and lower confidence regions for the area above the threshold $c$, $A_c$. Let $p^U(v)$ be the p-value calculated from the t-statistic corresponding to the positive test (7), and $p^L(v) = 1 - p^U(v)$ be the p-value calculated from the t-statistic corresponding to the negative test (8). Subsequently, $p = (p^L(1), \ldots, p^L(m), p^U(1), \ldots, p^U(m))$ collectively undergoes the BH procedure for joint FDR control, through which the voxels are identified as rejected or not rejected.

Algorithm 3: Upper and lower confidence regions with joint error control
Input: $(t(v), \alpha)$. Output: $(\hat{A}^+_c, \hat{A}^-_c)$.
1: Obtain $m$ p-values $p^U(v) = 1 - F_{T_{n-1}}(t(v))$, where $F_{T_{n-1}}$ is the cdf of the t-distribution with $n-1$ degrees of freedom.
2: Obtain $p^L(v) = 1 - p^U(v)$.
3: Apply the Benjamini-Hochberg procedure to $p = (p^L(1), \ldots, p^L(m), p^U(1), \ldots, p^U(m))$ to get $p_k$, the biggest p-value for rejection.
4: Define $\hat{A}^+_c = \{v \in \mathcal{V}: p^U(v) \leq p_k\}$, the set of voxels rejected from $U$.
5: Define $\hat{A}^-_c = \{v \in \mathcal{V}: p^L(v) \leq p_k\}^C$, the complement of the set of voxels rejected from $L$.
6: Return the upper confidence region $\hat{A}^+_c$ and the lower confidence region $\hat{A}^-_c$.

Importantly, the BH procedure controls the FDR at level $\frac{m_0}{m} \cdot \alpha$, where $m$ denotes the number of hypotheses tested and $m_0$ the number of true null hypotheses [9]. With the joint error control hypothesis testing, the locations in the image are tested twice, with each location being truly non-null for at least one of the directions considered. This means that the FDR is effectively controlled at level $\frac{m}{2m} \cdot \alpha = \alpha/2$. We can take advantage of this and use a nominal level of $2\alpha$ instead of $\alpha$ while still providing FDR control at level $\alpha$, leading to a substantial power improvement.

Technically, PRDS does not hold for the jointly defined p-values due to a limited amount of negative correlation between the lower and upper p-values at $A_c$; however, in practice this effect is very small and does not affect error control, as shown in Section 4.3.

3.3 Joint Confidence Regions: Spatial FDR and FNDR

The false discovery proportion in confidence regions with joint FDR control is defined by the number of true null hypotheses that are rejected, $|\{(A_c)^C \cap \hat{A}^+_c\} \sqcup \{\bar{A}_c \cap (\hat{A}^-_c)^C\}| = |(\hat{A}^+_c \setminus A_c) \sqcup (\bar{A}_c \setminus \hat{A}^-_c)|$, divided by the total number of rejections across both one-sided tests, $|\hat{A}^+_c \sqcup (\hat{A}^-_c)^C|$. The FDR for confidence regions with joint error rate control (FDR$_J$) is thus defined as the expected false discovery proportion,

$$\mathrm{FDR}_J = \mathbb{E}\left[\frac{|(\hat{A}^+_c \setminus A_c) \sqcup (\bar{A}_c \setminus \hat{A}^-_c)|}{|\hat{A}^+_c \sqcup (\hat{A}^-_c)^C| \vee 1}\right] = \mathbb{E}\left[\frac{|\hat{A}^+_c \setminus A_c| + |\bar{A}_c \setminus \hat{A}^-_c|}{(|\hat{A}^+_c| + |(\hat{A}^-_c)^C|) \vee 1}\right]. \quad (9)$$

Similarly, the false non-discovery proportion for joint FDR control is defined by the number of false non-discoveries over the upper and lower sets ($|\{A_c \cap (\hat{A}^+_c)^C\} \sqcup \{(\bar{A}_c)^C \cap \hat{A}^-_c\}| = |(A_c \setminus \hat{A}^+_c) \sqcup (\hat{A}^-_c \setminus \bar{A}_c)|$) and the number of total non-discoveries ($|(\hat{A}^+_c)^C \sqcup \hat{A}^-_c|$). Therefore, the FNDR for joint error control (FNDR$_J$) is defined as the expected false non-discovery proportion,

$$\mathrm{FNDR}_J = \mathbb{E}\left[\frac{|(A_c \setminus \hat{A}^+_c) \sqcup (\hat{A}^-_c \setminus \bar{A}_c)|}{|(\hat{A}^+_c)^C \sqcup \hat{A}^-_c| \vee 1}\right] = \mathbb{E}\left[\frac{|A_c \setminus \hat{A}^+_c| + |\hat{A}^-_c \setminus \bar{A}_c|}{(|(\hat{A}^+_c)^C| + |\hat{A}^-_c|) \vee 1}\right]. \quad (10)$$

Table 4 summarizes the spatial FDR and FNDR for the upper, lower and joint methods.
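A minimal sketch of the joint construction (Algorithm 3) follows; this is illustrative only, assumes an (n, m) data array as before, and uses our own helper names:

```python
import numpy as np
from scipy import stats

def bh_pk(pvals, alpha):
    """Largest p-value passing the BH step-up condition, or -inf if none."""
    m = pvals.size
    p_sorted = np.sort(pvals)
    passed = p_sorted <= (np.arange(1, m + 1) / m) * alpha
    return p_sorted[passed][-1] if passed.any() else -np.inf

def joint_confidence_regions(y, c, alpha=0.10):
    """Joint upper/lower confidence regions: pool the m lower and m upper
    one-sided p-values into a single BH run at level alpha."""
    n = y.shape[0]
    t = (y.mean(axis=0) - c) / (y.std(axis=0, ddof=1) / np.sqrt(n))
    p_upper = stats.t.sf(t, df=n - 1)   # p^U(v) = 1 - F_{T_{n-1}}(t(v))
    p_lower = 1.0 - p_upper             # p^L(v) = F_{T_{n-1}}(t(v))
    pk = bh_pk(np.concatenate([p_lower, p_upper]), alpha)
    a_plus = p_upper <= pk              # rejected from U
    a_minus = ~(p_lower <= pk)          # complement of the L rejections
    return a_plus, a_minus
```

Note that whenever $p_k < 1/2$, at most one of the two directional p-values at a voxel can be rejected, so $\hat{A}^+_c \subseteq \hat{A}^-_c$ and the two regions nest around the excursion set.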
Method | FDR | FNDR
Separate upper | $\mathbb{E}\big[|\hat{A}^+_c \setminus A_c| \,/\, (|\hat{A}^+_c| \vee 1)\big]$ | $\mathbb{E}\big[|A_c \setminus \hat{A}^+_c| \,/\, (|(\hat{A}^+_c)^C| \vee 1)\big]$
Separate lower | $\mathbb{E}\big[|\bar{A}_c \setminus \hat{A}^-_c| \,/\, (|(\hat{A}^-_c)^C| \vee 1)\big]$ | $\mathbb{E}\big[|\hat{A}^-_c \setminus \bar{A}_c| \,/\, (|\hat{A}^-_c| \vee 1)\big]$
Joint | $\mathbb{E}\big[(|\hat{A}^+_c \setminus A_c| + |\bar{A}_c \setminus \hat{A}^-_c|) \,/\, ((|\hat{A}^+_c| + |(\hat{A}^-_c)^C|) \vee 1)\big]$ | $\mathbb{E}\big[(|A_c \setminus \hat{A}^+_c| + |\hat{A}^-_c \setminus \bar{A}_c|) \,/\, ((|(\hat{A}^+_c)^C| + |\hat{A}^-_c|) \vee 1)\big]$

Table 4: Summary of the spatial FDR and FNDR for confidence regions produced by the different methods.

4 Simulations

4.1 Confidence Region Illustration

For the purpose of demonstrating the confidence regions, we apply the method to 2-dimensional synthetic images $y_i$ of dimension $(50, 50)$ with sample size 80, such that $y_i: \mathcal{V} \to \mathbb{R}$, $i = 1, \ldots, 80$. The images $y_i$ are generated from three different signals in combination with a noise field (Figure 5), in that $y_i(v) = \mu(v) + \epsilon_i(v)$, where $\mu(v)$ denotes the underlying signal function across voxels $v$ and $\epsilon_i(v) \sim \mathcal{N}(0, \sigma^2)$ denotes the noise on top of the signal.

We consider three signals: ramp, step, and circle. The ramp signal increases linearly from -1 to 1 over 50 pixels horizontally, with the ramp repeated vertically, creating a pattern that increases from left to right of the image. The step signal is generated by smoothing a 2D signal whose left half has value -1 and whose right half has value 1; the full width at half maximum (FWHM) used for smoothing the step signal is 8 pixels. Note that the ramp is invariant to, and thus unaffected by, different signal smoothing levels, as it is defined as a linearly gradual increase across the grid. The circle signal is generated by smoothing (FWHM 8 pixels) a circle of radius 12 pixels of magnitude 1 on a background field of value -1.

The noise field is obtained by smoothing a Gaussian random field $\mathcal{N}(0, 1.5^2)$ with a kernel of FWHM 4. The noise field is scaled by the square root of the sum of squares of the smoothing kernel to preserve the standard deviation.

Figure 5 shows the underlying signal images, and the 2D fields comprised of the underlying signals and the smoothed noise fields. Figure 6 shows the upper and the lower confidence regions superimposed on $\hat{A}_c = \{v \in \mathcal{V}: \hat{\mu}(v) > c\}$, where $\hat{\mu}(v)$ is the sample mean of the images. The confidence regions were constructed for the images presented in Figure 5, each with a different error control method (separate and joint).

Figure 5: Synthetic images used for confidence region illustration. The first row shows the ramp, step, and circle signals. The second row shows the same synthetic images with added smoothed Gaussian noise. The pixels in the noise field follow $\mathcal{N}(0, 1.5^2)$, constituting an uncorrelated Gaussian noise field. The noise field is smoothed using FWHM 8 pixels.

Figure 6: Confidence regions for the 2D synthetic images ($n = 80$) with $\alpha = 0.05$ for separate control and $\alpha = 0.1$ for the joint control; the red area denotes the upper confidence region $\hat{A}^+_c$, the yellow area (including the red area) denotes the excursion set $\hat{A}_c$, and the blue area (including the yellow area) denotes the lower confidence region $\hat{A}^-_c$.

4.2 Error Rate Simulation

The error rate simulation was conducted to demonstrate the performance of the proposed confidence region methods in terms of empirical FDR and empirical FNDR at different $c$ levels. The simulation was repeated 1,000 times, where in each simulation instance a set of 80 synthetic 2D images was generated, with each image consisting of 2,500 voxels.
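The synthetic fields described above can be generated along the following lines. This is a sketch under our own assumptions: scipy's `gaussian_filter` is used for kernel smoothing, FWHM is converted to the Gaussian sigma via FWHM $= 2\sqrt{2\ln 2}\,\sigma$, and we apply a crude global rescaling of the smoothed noise rather than the exact kernel sum-of-squares factor:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # sigma = FWHM * this

def ramp_signal(size=50):
    """Linear increase from -1 to 1 left to right, repeated down the rows."""
    return np.tile(np.linspace(-1.0, 1.0, size), (size, 1))

def step_signal(size=50, fwhm=8.0):
    """-1 on the left half, +1 on the right half, Gaussian-smoothed."""
    img = np.tile(np.where(np.arange(size) < size // 2, -1.0, 1.0), (size, 1))
    return gaussian_filter(img, sigma=fwhm * FWHM_TO_SIGMA)

def circle_signal(size=50, radius=12, fwhm=8.0):
    """Disc of value 1 on a background of -1, Gaussian-smoothed."""
    yy, xx = np.mgrid[:size, :size]
    disc = (yy - size / 2) ** 2 + (xx - size / 2) ** 2 <= radius ** 2
    return gaussian_filter(np.where(disc, 1.0, -1.0), sigma=fwhm * FWHM_TO_SIGMA)

def noisy_sample(mu, n=80, sd=1.5, noise_fwhm=4.0, seed=0):
    """n images y_i = mu + smoothed Gaussian noise, crudely rescaled so the
    pointwise noise standard deviation is approximately sd after smoothing."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n,) + mu.shape)
    s = noise_fwhm * FWHM_TO_SIGMA
    eps = gaussian_filter(eps, sigma=(0.0, s, s))  # smooth within images only
    eps *= sd / eps.std()
    return mu[None] + eps
```

Feeding such a sample into the region constructors above reproduces the shape of the illustration, though not the paper's exact figures.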
Identical to the confidence region illustration, we denote each image as $y_i: \mathcal{V} \to \mathbb{R}$, $i = 1, \ldots, 80$, where $y_i(v) = \mu(v) + \epsilon_i(v)$ for all voxels $v$ in the image, $\mu(v)$ denotes the signal function, and $\epsilon_i(v) \sim \mathcal{N}(0, \sigma^2)$ denotes the noise. The confidence regions were constructed via

1. joint error control for the upper and lower confidence regions at $\alpha = 0.1$,
2. separate error control for the upper confidence region (BH procedure) at $\alpha = 0.05$,
3. separate error control for the lower confidence region (BH procedure) at $\alpha = 0.05$, and
4. separate error control for the lower confidence region (two-stage adaptive procedure) at $\alpha = 0.05$.

Based on these four sets of confidence regions, false discovery and false non-discovery proportions were calculated at $c$ levels ranging from $-2$ to $2$ in increments of 0.2, resulting in 21 different $c$ levels in total. Empirical FDR and FNDR were then measured as the mean of the false discovery and false non-discovery proportions over the 1,000 simulations at each $c$ level.

This process is repeated across different simulation settings generated using the combinations of three signals, three noise smoothing levels (FWHM = 0, 5, 10), and three signal smoothing levels (FWHM = 5, 10, 15). This resulted in 25 different simulation setting combinations, as only one signal smoothing level (no smoothing) was considered for the ramp signal. Among the noise smoothing settings, only FWHM = 5 is reported, as the other two settings showed similar results.

The signals used for the error simulation are identical to the signals in the confidence region illustration (Figure 5), where the signal generation process is explained in detail. There are some benefits to using the ramp, step and circle signals for the error simulation. The ramp signal increases gradually from $-1$ to $1$ from left to right, with the values in the image evenly distributed across $[-1, 1]$. Thus, the ramp signal is effective in demonstrating how the error rates behave at different $c$ levels when there is a linear change of slope in the signal. The step signal has two flat regions, of value -1 on the first half and of value 1 on the second half, bridged by Gaussian kernel smoothing around 0 (FWHM = 8); it illustrates the effect of flat areas in the image on the error rates. Finally, the circle signal is a variation on the step signal where the flat region of magnitude 1 is sloped out by Gaussian kernel smoothing (FWHM = 8) over a background of value -1, creating a blob-shaped flat region that resembles signals observed in fMRI scans.

Given the signals, noise fields are added to the signal to create a synthetic image for simulation. The noise field is obtained by smoothing a Gaussian random field $\mathcal{N}(0, 1)$ at differing smoothness levels, to emulate the correlated noise fields that are realistic and commonly observed in real-world fMRI data.

For the step and circle fields, the simulation supposes three different signal smoothing levels (FWHM = 5, 10, 15) and three different noise smoothing levels (FWHM = 0, 5, 10). The ramp fields do not have differing signal smoothing levels, as the ramp signal by construction is a gradation and thus does not include a signal smoothing step, although the ramp fields still take different noise smoothing levels.
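For reference, the per-realization error proportions behind equations (2) and (3) reduce to a few set counts; the following helper (ours, with boolean masks for the true and estimated regions) is what gets averaged over the 1,000 realizations at each $c$ level to produce the empirical FDR and FNDR:

```python
import numpy as np

def fdp_fndp(a_true, a_hat):
    """False discovery and false non-discovery proportions for an estimated
    upper region a_hat of the true excursion set a_true (boolean arrays).
    FDP divides the falsely rejected count by the rejected count (or 1);
    FNDP divides the missed count by the non-rejected count (or 1)."""
    a_true = np.asarray(a_true, bool).ravel()
    a_hat = np.asarray(a_hat, bool).ravel()
    fdp = (a_hat & ~a_true).sum() / max(a_hat.sum(), 1)
    fndp = (a_true & ~a_hat).sum() / max((~a_hat).sum(), 1)
    return fdp, fndp
```

The same helper applies to the lower region after complementing, and the joint quantities of equations (9) and (10) sum the corresponding counts before dividing.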
4.3 FDR

Figure 7: FDR simulation results for the three signals (ramp, step and circle, top to bottom) and three signal smoothing levels (FWHM = 5, 10, 15, left to right) with noise smoothing at FWHM = 5. The x-axis denotes the threshold $c$, ranging over $[-2, 2]$ in 0.2 increments. The y-axis denotes the empirical FDR. The red line denotes the joint method, the blue line the separate upper (BH), the yellow the separate lower (BH), and the green the separate lower (adaptive). The red dotted line signifies the nominal FDR level. Note that the ramp signal has the same plots for the different signal smoothing levels, as it is unaffected by signal smoothing.

Figure 7 shows the empirical FDR change over the different $c$ levels for each combination of smoothing levels and signals; the results for the entire simulation are presented in the supplementary materials. For the ramp signal, signal smoothing is non-existent, so the result is the same for the three levels of signal smoothing.

The simulation results suggest that, in general, the empirical FDR is effectively controlled under the 0.05 level across the four methods and for all values of $c$, with the exception of the peak at $c = -1$ for the circle signal with FWHM = 5 under the joint method. Although maintained below the 0.05 level, similar behaviour is observed at $c = -1$ and $c = 1$ for the step signal, and at $c = -1$ for the circle signal, under the joint method. This is because PRDS breaks down near the flat regions of the signals at $c = 1$ and $c = -1$ when using the joint method; see the discussion for further details.

The asymmetry between the peaks at $c = -1$ and $c = 1$ in the circle signal joint method simulation is explained by the asymmetry in the number of pixels included in the area below 1 (the background) and above -1 (the circle). More voxels are rejected when testing the lower side $H^L_A(v): \mu(v) < c$ with $c = 1$ than when testing the upper side $H^U_A(v): \mu(v) > c$ with $c = -1$. This leads to a decreased denominator for the joint FDR at $c = -1$, while the other numbers involved in the calculation of the joint FDR remain relatively insignificant, ultimately resulting in a higher FDR at $c = -1$.

As predicted, the lower-side adaptive method shows less conservative control of the FDR than the lower BH method for the circle signal images. This is in accordance with Blanchard and Roquain [13], which states that the proposed two-stage method works better when more tests are expected to be rejected, which is equivalent to there being more voxels below $c$ in the context of the negative one-sided test. This also explains why the lower-side adaptive method shows good control of the FDR when the threshold is higher.

4.4 FNDR

Figure 8: FNDR simulation results for the three signals (ramp, step and circle, top to bottom) and three signal smoothing levels with noise smoothing at FWHM = 5.
The x-axis denotes the threshold $c$, ranging over $[-2, 2]$ in 0.2 increments. The y-axis denotes the empirical FNDR. The red line denotes the joint method, the blue line the separate upper (BH), the yellow the separate lower (BH), and the green the separate lower (adaptive). Note that the ramp signal has the same plots for the different signal smoothing levels, as it is unaffected by signal smoothing.

The simulation results for the empirical FNDR are presented in Figure 8, with the same simulation settings as for the FDR. Overall, the joint method shows consistently lower empirical FNDR compared to the other methods. High peaks are displayed around the boundary values $c \in (-2, -1)$ and $c \in (1, 2)$ for the separate methods. This is because once the threshold is set too extreme, i.e. below -1 for the positive test or above 1 for the negative test, essentially all voxels should be rejected, with $A_c$ being $\mathcal{V}$; any voxels left unrejected due to noise are then all false non-rejections. Since the noise is distributed $\mathcal{N}(0, 1)$, if the threshold is set past -2 or 2, voxels are rarely falsely non-rejected, as that would require very large or very small noise values.

Much like the FDR simulation, the asymmetric pattern in the circle signal for the joint method stems from the asymmetry in the number of voxels in the circle signal and the background. Otherwise, the patterns of empirical FNDR demonstrated by the step and circle signals are similar. Generally, lower BH shows higher FNDR than lower adaptive as the threshold gets higher, signifying that the two-stage adaptive procedure exhibits higher power than the BH procedure when more tests are expected to be rejected. As can be seen from Figure 8, signal smoothing does not bring much improvement in detection power, with the empirical FNDR plots showing a similar range of values across the different signal smoothing levels.

5 Human Connectome Project Data Application

5.1 The Human Connectome Project Data Description

The Human Connectome Project (HCP) is a five-year study which aims to map structural and functional human brain connectivity in young adults through multiple neuroimaging modalities [50]. In this section, we detail the results of a real-data analysis performed on the HCP dataset with 77 participants, using the confidence region methodologies proposed in this paper as well as the method from SSS, which refers to the bootstrap-based confidence region construction algorithm following Bowring (2019) [15] and SSS [41]. The confidence regions are built on the 77 subject-level contrast images from the N-back working memory task, comparing the 0-back vs. 2-back memory conditions. In the N-back working memory task, the participants are shown a sequence of pictures of objects and asked to remember them either immediately or 2 turns later. The image pre-processing details are presented in [29, 2].

5.2 Confidence Region Results

The 95% confidence regions are constructed on fMRI scans from 77 subjects as a real-data application of the proposed methods, after applying additional smoothing with a Gaussian kernel of FWHM = 2.25 to match the results shown in Bowring (2019) [15].
Confidence regions using 1) the joint method with $\alpha = 0.1$, 2) the separate method with BH adjustment for the upper and lower sides, each with $\alpha = 0.05$, 3) the separate method with BH adjustment for the upper side and the two-stage adaptive procedure for the lower side, with $\alpha = 0.05$, and 4) SSS ($\alpha = 0.05$) were compared at threshold levels of 1.0%, 1.5%, and 2.0% Blood Oxygenation Level Dependent (BOLD) change. Joint control confidence regions are produced with $\alpha = 0.1$ instead of 0.05 for the reasons given in Section 3.

For all slices, the FDR-controlling methods show tighter inference for both the upper and lower CRs compared to the SSS method. SSS shows smaller upper CRs and larger lower CRs, which suggests more conservative inference compared to the FDR-controlling, testing-based methodologies. This is because, by controlling the FDR, the method allows for more false discoveries in exchange for more discoveries in general. Despite the higher $\alpha$ level of 0.1, joint control confidence regions still show comparable results to the other methods.

Figure 9: Confidence region application to the HCP data with 77 subjects in the sagittal slice view (X=-4, MNI space). The red area denotes the upper confidence region $\hat{A}^+_c$, the yellow area (including the red area) denotes the excursion set $\hat{A}_c$, and the blue area (including the yellow area) denotes the lower confidence region $\hat{A}^-_c$. The rows differ in the threshold level $c$ (1.0%, 1.5%, 2.0%) by which the confidence regions are constructed; the columns denote the different methods (joint, separate (BH), separate (adaptive), SSS). Overall, the FDR-controlling hypothesis testing methods, presented in the first three columns, show tighter spatial inference compared to SSS, thus providing a higher degree of localization and interpretability for the identified regions. The confidence regions show activation in areas such as the precuneus cortex, paracingulate gyrus, middle frontal gyrus, and angular gyrus, which matches the expectation from the literature for areas involved in working memory.

Figure 10: As Figure 9, but for the coronal slice view (Y=16, MNI space).

Figure 11: As Figure 9, but for the axial slice view (Z=48, MNI space).
Naturally, as the threshold goes up, the area enclosed between the upper and lower confidence regions decreases.

Confidence regions with separate control of the FDR for the lower and upper sides are presented in two forms for comparison: one with the BH procedure for the lower confidence region, and the other with the two-stage adaptive procedure for the lower confidence region. The upper confidence region remains the same, as both methods use the BH procedure for the upper-set FDR control. Lower confidence regions with the adaptive method are smaller than lower sets with the BH procedure, which is to be expected, as the two-stage adaptive procedure is less conservative when more voxels are expected to be rejected. In the context of negative one-sided testing, this is equivalent to there being fewer voxels above $c$ than below $c$.

6 Conclusion and Discussion

This paper has developed upper and lower confidence regions for the excursion set $A_c$, with spatial FDR control, based on directional hypothesis testing. These regions were constructed for separate upper and lower control and for joint control over both directions. In order to boost power, a two-stage adaptive approach was incorporated for the separate control, and a natural adaptation (using twice the desired $\alpha$ level whilst still controlling at level $\alpha$) was used for the joint approach. Error rate simulations across a variety of scenarios showed that the empirical FDR was almost always controlled at the nominal level $\alpha = 0.05$. Real-data applications using fMRI data from the HCP showed that the confidence region bounds were informative and, in particular, more spatially precise than CRs obtained by existing methods [15, 41].

For the joint confidence regions, we use two one-sided tests rather than naively using a two-sided testing approach in which the hypothesis being tested is of the form $H_0(v): \mu(v) = c$ vs. $H_A(v): \mu(v) \neq c$. At first glance, naively testing against this null hypothesis and applying BH might seem like a simpler approach for creating joint confidence regions. However, rejecting this null hypothesis does not provide directional error control, as the direction of the effect is not specified. Instead, testing against the two one-sided null hypotheses (7) and (8), in the positive and negative directions, as we do in our joint construction, allows us to provide directional inference.

For the confidence regions with separate error control, the FDR is controlled separately for the positive and negative directions. The joint confidence regions instead control the FDR simultaneously by considering both sets of directional nulls.
Joint error control is theoretically more desirable than separate error control, but the right choice of which to use may depend on the questions that users are trying to answer. The two approaches are complementary, and which to use depends on the error rate the analyst wishes to control. The separate control method is more informative, because it admits possibly different error rates in the upper and lower confidence regions. For example, when estimating a positive excursion set, the upper confidence region gives an indication of the minimal extent of the excursion set, whereas the lower confidence region gives an indication of its maximal extent, and controlling the error rate in one may be more important than in the other. The joint method is more concise in that it controls the FDR over the entire image, but it does not distinguish whether the errors occur in the upper confidence region or the lower confidence region. If the question of interest is one-directional in nature, using the separate method provides more informative confidence regions in terms of interpretability and error control.

In our simulations we observed that when the underlying signals in the data are relatively evenly distributed below and above the threshold $c$, the joint method usually provides less conservative confidence regions under the nominal $\alpha$ level (see e.g. the ramp and step signal simulations in Figure 7, first and second rows). Meanwhile, in the presence of signals with an unbalanced distribution of areas below and above $c$, the lower confidence region with the two-stage adaptive procedure provides better results (as signified by the circle signal simulation; cf. Figure 7, third row). Most fMRI data would fall under this latter case.

A further observation of note is the presence of the peaks in the empirical FDR plots at $c = -1, 1$ for the circle and step signals in Figure 7. This can be attributed to the fact that PRDS, a sufficient condition involving positive dependency for the BH procedure to hold [9, 44], breaks down around $c = -1$ and $1$ due to stronger negative dependency among the p-values for the lower and upper directions. The peaks of the FDR in Figure 7 are reduced in height in the presence of stronger signal smoothing, as the gradients of the signal become less sharp.

Compared to SSS, the FDR-controlling method we propose in this work provides less conservative, and thus tighter, confidence regions. Aside from the power gain, the FDR-controlling framework based on hypothesis testing allows error control in finite samples, without requiring asymptotic assumptions.

As a closing note, we discuss potential extensions of this work. More immediate applications include the construction of confidence regions for Cohen's d or $R^2$ in place of the mean values in the image [14, 1]. Testing-based confidence regions easily extend to other correlation structures by combining them with other methods for controlling the FDR (e.g. [7, 9, 6]). Confidence regions could also be constructed to control other measures of error, such as the false discovery proportion (FDP) [45, 20, 11], or in the context of selective inference on hierarchies of hypotheses, such as the whole brain, regions, and individual voxels [3, 12, 45].

7 Acknowledgements

HR, AS and SD were partially supported by NIH grant R01MH128923. TMS, AS and SD were partially supported by NIH grant R01EB026859.
Data were provided in part by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.
References
[1] James Algina. "A comparison of methods for constructing confidence intervals for the squared multiple correlation coefficient". In: Multivariate Behavioral Research 34.4 (1999), pp. 493–504.
[2] Deanna M Barch et al. "Function in the human connectome: task-fMRI and individual differences in behavior". In: Neuroimage 80 (2013), pp. 169–189.
[3] Yoav Benjamini and Marina Bogomolov. "Selective inference on multiple families of hypotheses". In: Journal of the Royal Statistical Society Series B: Statistical Methodology 76.1 (2014), pp. 297–318.
[4] Yoav Benjamini and Ruth Heller. "False discovery rates for spatial signals". In: Journal of the American Statistical Association 102.480 (2007), pp. 1272–1281.
[5] Yoav Benjamini and Yosef Hochberg. "Controlling the false discovery rate: a practical and powerful approach to multiple testing". In: Journal of the Royal Statistical Society: Series B (Methodological) 57.1 (1995), pp. 289–300.
[6] Yoav Benjamini and Yosef Hochberg. "On the adaptive control of the false discovery rate in multiple testing with independent statistics". In: Journal of Educational and Behavioral Statistics 25.1 (2000), pp. 60–83.
[7] Yoav Benjamini, Abba M Krieger, and Daniel Yekutieli. "Adaptive linear step-up procedures that control the
false discovery rate". In: Biometrika 93.3 (2006), pp. 491–507.
[8] Yoav Benjamini and Daniel Yekutieli. "False discovery rate–adjusted multiple confidence intervals for selected parameters". In: Journal of the American Statistical Association 100.469 (2005), pp. 71–81.
[9] Yoav Benjamini and Daniel Yekutieli. "The control of the false discovery rate in multiple testing under dependency". In: Annals of Statistics (2001), pp. 1165–1188.
[10] Craig M Bennett, Michael B Miller, and George L Wolford. "Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction". In: Neuroimage 47.Suppl 1 (2009), S125.
[11] Alexandre Blain, Bertrand Thirion, and Pierre Neuvial. "Notip: Non-parametric true discovery proportion control for brain imaging". In: NeuroImage 260 (2022), p. 119492.
[12] Gilles Blanchard, Pierre Neuvial, and Etienne Roquain. "Post hoc confidence bounds on false positives using reference families". In: (2020).
[13] Gilles Blanchard and Etienne Roquain. "Adaptive False Discovery Rate Control under Independence and Dependence". In: Journal of Machine Learning Research 10.12 (2009).
[14] Alexander Bowring et al. "Confidence Sets for Cohen's d effect size images". In: NeuroImage 226 (2021), p. 117477. issn: 1053-8119.
[15] Alexander Bowring et al. "Spatial confidence sets for raw effect size images". In: NeuroImage 203 (2019), p. 116187.
[16] Justin Chumbley et al. "Topological FDR for neuroimaging". In: Neuroimage 49.4 (2010), pp. 3057–3064.
[17] Justin R Chumbley and Karl J Friston. "False discovery rate revisited: FDR and topological inference using Gaussian random fields". In: Neuroimage 44.1 (2009), pp. 62–70.
[18] Samuel Davenport and Thomas E Nichols. "Selective peak inference: Unbiased estimation of raw and standardized effect size at local maxima". In: NeuroImage 209 (2020), p. 116375.
[19] Samuel Davenport, Thomas E Nichols, and Armin Schwartzman.
"Confidence regions for the location of peaks of a smooth random field". In: arXiv preprint arXiv:2208.00251 (2022).
[20] Samuel Davenport, Bertrand Thirion, and Pierre Neuvial. "FDP control in multivariate linear models using the bootstrap". In: arXiv preprint arXiv:2208.13724 (2022).
[21] Samuel Davenport et al. "Robust FWER control in neuroimaging using Random Field Theory: Riding the SuRF to continuous land Part 2". In: (2023).
[22] Anders Eklund, Thomas E Nichols, and Hans Knutsson. "Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates". In: Proceedings of the National Academy of Sciences 113.28 (2016), pp. 7900–7905.
[23] Helmut Finner. "Stepwise multiple test procedures and control of directional errors". In: The Annals of Statistics 27.1 (1999), pp. 274–289.
[24] Guillaume Flandin and Karl J Friston. "Analysis of family-wise error rates in statistical parametric mapping using random field theory". In: Human Brain Mapping 40.7 (2019), pp. 2052–2054.
[25] Karl J Friston. "Functional and effective connectivity in neuroimaging: a synthesis". In: Human Brain Mapping 2.1-2 (1994), pp. 56–78.
[26] Yulia Gavrilov, Yoav Benjamini, and Sanat K Sarkar. "An adaptive step-down procedure with proven FDR control under independence". In: (2009).
[27] Christopher Genovese and Larry Wasserman. "A stochastic process approach to false discovery control". In: (2004).
[28] Christopher R Genovese, Nicole A Lazar, and Thomas Nichols. "Thresholding of statistical maps in functional neuroimaging using the false discovery rate". In: Neuroimage 15.4 (2002), pp. 870–878.
[29] Matthew F Glasser et
al. "The minimal preprocessing pipelines for the Human Connectome Project". In: Neuroimage 80 (2013), pp. 105–124.
[30] Javier Gonzalez-Castillo et al. "Whole-brain, time-locked activation with simple tasks revealed using massive averaging and model-free analysis". In: Proceedings of the National Academy of Sciences 109.14 (2012), pp. 5487–5492.
[31] Wenge Guo and Joseph P Romano. "On stepwise control of directional errors under independence and some dependence". In: Journal of Statistical Planning and Inference 163 (2015), pp. 21–33.
[32] Wenge Guo, Sanat K Sarkar, and Shyamal D Peddada. "Controlling false discoveries in multidimensional directional decisions, with applications to gene expression data on ordered categories". In: Biometrics 66.2 (2010), pp. 485–492.
[33] Satoru Hayasaka and Thomas E Nichols. "Combining voxel intensity and cluster extent with permutation test framework". In: Neuroimage 23.1 (2004), pp. 54–63.
[34] Satoru Hayasaka and Thomas E Nichols. "Validating cluster size inference: random field and permutation methods". In: Neuroimage 20.4 (2003), pp. 2343–2356.
[35] Satoru Hayasaka et al. "Nonstationary cluster-size inference with random field and permutation methods". In: Neuroimage 22.2 (2004), pp. 676–687.
[36] Ruth Heller and Aldo Solari. "Simultaneous directional inference". In: Journal of the Royal Statistical Society Series B: Statistical Methodology (2023), qkad137.
[37] John Lee. Introduction to Topological Manifolds. Vol. 202. Springer Science & Business Media, 2010.
[38] Dennis Leung and Ninh Tran. "Adaptive procedures for directional false discovery rate control". In: Electronic Journal of Statistics 18.1 (2024), pp. 706–741.
[39] Kun Liang and Dan Nettleton. "Adaptive and dynamic adaptive procedures for false discovery rate control and estimation". In: Journal of the Royal Statistical Society Series B: Statistical Methodology 74.1 (2012), pp. 163–182.
[40] T Maullin-Sapey, A Schwartzman, and TE Nichols.
"Spatial confidence regions for combinations of excursion sets in image analysis: supplementary theory". In: Journal of the Royal Statistical Society: Statistical Methodology Series B (2023).
[41] Max Sommerfeld, Stephan Sain, and Armin Schwartzman. "Confidence Regions for Spatial Excursion Sets From Repeated Random Field Observations, With an Application to Climate". In: Journal of the American Statistical Association 113.523 (2018), pp. 1327–1340.
[42] Thomas Nichols and Satoru Hayasaka. "Controlling the familywise error rate in functional neuroimaging: a comparative review". In: Statistical Methods in Medical Research 12.5 (2003), pp. 419–446.
[43] Marco Perone Pacifico et al. "False discovery control for random fields". In: Journal of the American Statistical Association 99.468 (2004), pp. 1002–1014.
[44] E Roquain and G Blanchard. "Two simple sufficient conditions for FDR control". In: HAL 2008 (2008).
[45] Jonathan D Rosenblatt et al. "All-resolutions inference for brain imaging". In: Neuroimage 181 (2018), pp. 786–796.
[46] William W Rozeboom. "The fallacy of the null-hypothesis significance test". In: Psychological Bulletin 57.5 (1960), p. 416.
[47] Stephen M Smith and Thomas E Nichols. "Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference". In: Neuroimage 44.1 (2009), pp. 83–98.
[48] John D Storey. "A direct approach to false discovery rates". In: Journal of the Royal Statistical Society Series B: Statistical Methodology 64.3 (2002), pp. 479–498.
[49] John D Storey, Jonathan E Taylor, and David Siegmund. "Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach". In: Journal of the Royal Statistical Society Series B: Statistical Methodology 66.1 (2004), pp. 187–205.
[50] David C
Van Essen et al. "The Human Connectome Project: a data acquisition perspective". In: Neuroimage 62.4 (2012), pp. 2222–2231.
[51] Asaf Weinstein, William Fithian, and Yoav Benjamini. "Selection adjusted confidence intervals with more power to determine the sign". In: Journal of the American Statistical Association 108.501 (2013), pp. 165–176.
[52] Anderson M Winkler et al. "False Discovery Rate and Localizing Power". In: arXiv preprint arXiv:2401.03554 (2024).
[53] Keith J Worsley et al. "A three-dimensional statistical analysis for CBF activation studies in human brain". In: Journal of Cerebral Blood Flow & Metabolism 12.6 (1992), pp. 900–918.
[54] Daniel Yekutieli and Yoav Benjamini. "Resampling-based false discovery rate controlling multiple test procedures for correlated test statistics". In: Journal of Statistical Planning and Inference 82.1-2 (1999), pp. 171–196.
[55] Hui Zhang, Thomas E. Nichols, and Timothy D. Johnson. "Cluster mass inference via random field theory". In: NeuroImage 44.1 (2009), pp. 51–61. issn: 1053-8119.
8 Supplementary Materials
8.1 Simulation Results
Figure 12: FDR simulation result for the ramp signal. The rows differ in noise smoothing (FWHM 0, 5, and 10) and the columns differ in signal smoothing levels (FWHM 5, 10, and 15). The x-axis denotes threshold c ranging [-2, 2] with 0.2 increments. The y-axis denotes the empirical FDR. The red line denotes the joint method, the blue line the separate upper (BH), the yellow the separate lower (BH), and the green the separate lower (adaptive). The red dotted line signifies the nominal FDR level.
Figure 13: FDR simulation result for the step signal. The rows differ in noise smoothing (FWHM 0, 5, and 10) and the columns differ in signal smoothing levels (FWHM 5, 10, and 15). The x-axis denotes threshold c ranging [-2, 2] with 0.2 increments. The y-axis denotes the empirical FDR.
The red line denotes the joint method, the blue line the separate upper (BH), the yellow the separate lower (BH), and the green the separate lower (adaptive). The red dotted line signifies the nominal FDR level.
Figure 14: FDR simulation result for the circle signal. The rows differ in noise smoothing (FWHM 0, 5, and 10) and the columns differ in signal smoothing levels (FWHM 5, 10, and 15). The x-axis denotes threshold c ranging [-2, 2] with 0.2 increments. The y-axis denotes the empirical FDR. The red line denotes the joint method, the blue line the separate upper (BH), the yellow the separate lower (BH), and the green the separate lower (adaptive). The red dotted line signifies the nominal FDR level.
Figure 15: FNDR simulation result for the ramp signal. The rows differ in noise smoothing (FWHM 0, 5, and 10) and the columns differ in signal smoothing levels (FWHM 5, 10, and 15). The x-axis denotes threshold c ranging [-2, 2] with 0.2 increments. The y-axis denotes the empirical FNDR. The red line denotes the joint method, the blue line the separate upper (BH), the yellow the separate lower (BH), and the green the separate lower (adaptive).
Figure 16: FNDR simulation result for the step signal. The rows differ in noise smoothing (FWHM 0, 5, and 10) and the columns differ in signal smoothing levels (FWHM 5, 10, and 15). The x-axis denotes threshold c ranging [-2, 2] with 0.2 increments. The y-axis
denotes the empirical FNDR. The red line denotes the joint method, the blue line the separate upper (BH), the yellow the separate lower (BH), and the green the separate lower (adaptive).
Figure 17: FNDR simulation result for the circle signal. The rows differ in noise smoothing (FWHM 0, 5, and 10) and the columns differ in signal smoothing levels (FWHM 5, 10, and 15). The x-axis denotes threshold c ranging [-2, 2] with 0.2 increments. The y-axis denotes the empirical FNDR. The red line denotes the joint method, the blue line the separate upper (BH), the yellow the separate lower (BH), and the green the separate lower (adaptive).
8.2 HCP Applications
Figure 18: Confidence region application to the HCP data with 77 subjects in sagittal (X=-32), coronal (Y=24), and axial (Z=-2) views from top to bottom. The red area denotes the upper confidence region Â⁺_c, the yellow area including the red area denotes the excursion set Â_c, and the blue area including the yellow area denotes the lower confidence region Â⁻_c. The rows differ in the threshold level c by which the confidence regions are constructed. The columns denote different methods. Overall, the FDR-controlling hypothesis testing methods, presented in the first three columns, show tighter spatial inference results when compared to SSS, thus providing a higher degree of localization and interpretability for the identified regions. The confidence regions show activation in areas such as the precuneus cortex, paracingulate gyrus, middle frontal gyrus, and angular gyrus, which matches the expectation from the literature of areas involved in working memory.
How Much Weak Overlap Can Doubly Robust T-Statistics Handle?
Jacob Dorn∗
April 23, 2025
Abstract
In the presence of sufficiently weak overlap, it is known that no regular root-n-consistent estimators exist and standard estimators may fail to be asymptotically normal. This paper shows that a thresholded version of the standard doubly robust estimator is asymptotically normal with well-calibrated Wald confidence intervals even when constructed using nonparametric estimates of the propensity score and conditional mean outcome. The analysis implies a cost of weak overlap in terms of black-box nuisance rates, borne when the semiparametric bound is infinite, and the contribution of outcome smoothness to the outcome regression rate, which is incurred even when the semiparametric bound is finite. As a byproduct of this analysis, I show that under weak overlap, the optimal global regression rate is the same as the optimal pointwise regression rate, without the usual polylogarithmic penalty. The high-level conditions yield new rules of thumb for thresholding in practice. In simulations, thresholded AIPW can exhibit moderate overrejection in small samples, but I am unable to reject a null hypothesis of exact coverage in large samples. In an empirical application, the clipped AIPW estimator that targets the standard average treatment effect yields similar precision to a heuristic 10% fixed-trimming approach that changes the target sample.
1 Introduction
In observational data, it is common to find at least some treated observations with small propensity scores, in which case overlap is weak. Under weak overlap, outcome regression is more difficult and inverse propensity estimators can divide by small numbers. This paper studies the extent to which standard asymptotic results for doubly robust estimators continue to apply when strict overlap does not hold, but there is a tail bound on inverse propensity scores.
∗I am grateful for suggestions from Xiaohong Chen, Rebecca Dorn, Kevin Guo, Edward Kennedy, Samir Khan, Michal Kolesár, Lihua Lei, Xinwei Ma, Ulrich Müller, Yuya Sasaki, Yulong Wang, and Larry Wasserman, and participants at seminars at the University of Pennsylvania. Artificial intelligence was used to suggest changes in the editing process. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-2039656 and by grant T32 HS026116 from the Agency for Healthcare Research and Quality. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation or the Agency for Healthcare Research and Quality.
arXiv:2504.13273v2 [econ.EM] 22 Apr 2025
The paper aims to answer two questions: (1) Black-box rates: under what conditions on nuisance errors do Wald confidence intervals constructed using doubly robust estimators have well-calibrated coverage? (2) Feasibility: when are the black-box rates in (1) feasible under weak overlap? Overlap weakness is measured with a tail-bound parameter γ0. It is known that there is a phase transition when γ0 crosses two, which corresponds to a uniform propensity density. Values of the tail parameter γ0 above two correspond to stronger overlap assurances, a case which I call somewhat weak overlap. Under somewhat weak overlap, it is
known that the semiparametric bound for estimation of ψ0is finite and can be achieved by the doubly robust Augmented Inverse Propensity Weighted (AIPW) estimator with known conditional mean outcome and propensity functions. I show that semi- parametric efficiency holds even when using nonparametric estimates of the two AIPW nuisance functions, provided the nuisance estimates are constructed by cross-fitting and achieve error rates rµ,nandre,nthat satisfy the product condition that n1/2rµ,nre,ntends to zero in probability. The one modification I make relative to the usual analysis is that I require the nuisance rates to apply uniformly in regions with small propensity scores, which is immediate under L∞bounds. Values of γ0below two allow the density of propensity scores to be unbounded at zero, a case which I callvery weak overlap. Under very weak overlap, it is known that the semiparametric efficiency bound is infinite and Inverse Propensity Weighting (IPW) estimators fail to be asymptotically normal, even if the propensity function is known. Ma and Wang (2020) show that thresholding strategies that clip (Winsorize) or trim (discard) observations with small propensity scores at a sequence of bntending to zero can restore asymptotic normality, but at a slower-than-√nrate and with first-order bias. Previous work has proposed constructing confidence intervals for this case using debiasing strategies and self-normalized subsampling. I show that if thresholding is applied to AIPW, then there is no first-order bias and simple Wald confidence have exact asymptotic coverage. The analysis provides sufficient black-box conditions for AIPW with estimated nuisance functions to achieve these asymptotic results: first, that the threshold bntends to zero more slowly than the propensity error re,n; and second, that the product n1/2rµ,nbγ0/2 ntends to zero in probability. 
That analysis characterizes the feasibility of AIPW Wald confidence intervals under black-box conditions on the propensity estimates and outcome regression. An added difficulty is that in regions with weak overlap, there cannot be many treated observations to use in outcome regression. I quantify the added difficulty for the case of regression within a Hölder smoothness class of order β_μ > 0. I show that weak overlap scales the effective outcome smoothness by 1 − 1/γ0, exhibiting a cost of weak overlap even in the somewhat weak overlap regime in which the other asymptotic results carry through nearly unchanged. I show that in this setting, the optimal global rate is equal to the optimal pointwise rate, without the usual polylogarithmic factor separating the optimal pointwise and global rates (Stone, 1982). The argument leverages a novel construction partitioning the covariate space into regions with increasingly strong overlap assurances, in such a way that the contribution of each region to the worst-case error is half as large as the previous contribution. In this way, the contribution of infinitely many regions to the global error is controlled uniformly. Taken together, these results provide a precise answer to the question posed by this work's title: doubly robust t-statistics can handle weak overlap of tail bound γ0, provided the outcome and propensity nuisance functions are in Hölder smoothness classes of order β_μ and β_e and

β_μ(1 − 1/γ0) / (2β_μ(1 − 1/γ0) + d) + β_e min{γ0/2, 1} / (2β_e + d) > 1/2.

In this case, the
threshold b_n = n^{−β_e/(2β_e+d)} log(n)^{(3β_e+d)/(2β_e+d)} suffices, regardless of the weak overlap parameter γ0. When the outcome and propensity smoothness orders are the same β > 0, thresholded AIPW can handle weak overlap of order γ0 so long as:

γ0 > max{ (2β² + 2βd + d²) / (β(2β + d)), 4β² / (4β² − d²) }.

Under Lipschitz continuity of both nuisance functions in one dimension, doubly robust t-statistics can handle weak overlap of order γ0 > 5/3. In higher dimensions, there is always some sufficient smoothness order that yields valid t-statistics for any fixed tail bound. The conditions here yield new rules of thumb for thresholding in applied work. In my favored regime, the econometrician is willing to posit a minimal consistency rate for one of the two nuisance function estimates. Given such a minimal rate, a simple plug-in procedure predicts the threshold with the laxest restriction on the other nuisance function needed to achieve well-calibrated Wald confidence intervals. In the absence of any such information, a third rule of thumb derives a threshold that imposes the laxest equal minimal consistency rate on both nuisance estimates. None of these rules of thumb depend directly on knowledge of the tail bound parameter γ0. In simulations, I find that clipped AIPW achieves the promised properties asymptotically. I consider a setting of very weak overlap with nonparametric outcome regression and propensity estimates. Unthresholded IPW and AIPW estimators perform poorly, with large errors and nonnormal asymptotic distributions. In this setting, clipped IPW displays its known first-order bias, and clipped AIPW displays the second-order bias justified by the theoretical analysis. With access to 1,000 or 10,000 observations, I find that p-values based on clipped AIPW t-statistics exhibit moderate overrejection.
In large samples with 100,000 observations, a Kolmogorov-Smirnov test based on 5,000 simulations is unable to reject a null hypothesis that clipped AIPW p-values on the true causal effect are exactly uniformly distributed. I apply the clipped AIPW estimator to data on right heart catheterization. I consider the setting of Connors et al. (1996), which has become a canonical setting with weak overlap, including providing the empirical application for Crump et al. (2009)’s paper proposing a 10% fixed-trimming rule of thumb. I compare the clipped AIPW estimator that targets the full-population effect to estimators that apply AIPW to a sample trimmed based on a fixed rule. I find that by including observations with small estimated propensities, the clipped AIPW strategy increases the estimated harm of the procedure by 0.17 standard errors relative to the 10 percent fixed-trimming rule, while increasing the estimated standard error by only 5.1%. These results show that targeting the full-population treatment effect does not need to introduce a major efficiency loss, and show that thresholded AIPW can easily be added as a robustness test when practitioners apply a fixed-trimming rule. Weak overlap is a common phenomenon in practice and in theory. The dominant response to weak overlap in inverse propensity score practice is trimming: dropping samples with small propensity estimates in order to estimate average effects within a more precise population (Currie and Walker, 2011; Bailey and Goodman-Bacon, 2015; Galiani et al., 2005), typically following the 10% rule of thumb
from Crump et al. (2009). Other proposals targeting new samples include reweighting towards higher-precision populations (Yang and Ding, 2018; Li et al., 2018) or clipping strategies that Winsorize weights above (Lee et al., 2011; Ionides, 2008).¹ D'Amour et al. (2021) argue that weak overlap is likely to be prevalent in modern settings with high-dimensional covariates. Imbens (2004) argues that changing the target estimand may be necessary in the absence of sufficient precision. There is theoretical work characterizing estimation of nonstandard estimands under weak overlap. Khan and Tamer (2010) show that very weak overlap yields an irregularly identified parameter, an infinite semiparametric efficiency bound, slower-than-√n estimation rates for the traditional average causal effects, and no clear notion of best estimator. An important theoretical literature has proposed novel point and confidence interval estimators with desirable properties under weak overlap (Rothe, 2017; Armstrong and Kolesár, 2017, 2021; Sasaki and Ura, 2022; Ma et al., 2023; Chaudhuri and Hill, 2024). Many of these procedures have favorable properties relative to the simpler thresholded estimator I consider here, but to my knowledge there has been little take-up by practitioners. Ma and Wang (2020) and Khan and Ugander (2022) show that sufficiently trimmed AIPW and IPW can remain asymptotically normal, but at the cost of introducing first-order bias for the standard causal effects that often calls for a nonstandard debiasing strategy that can enable laxer regression conditions than the standard smoothness conditions I explore.
¹Awkwardly, the epidemiological literature sometimes refers to the Winsorization strategy as "trimming." My results hold for both dropping and Winsorizing extreme propensities, so the confused reader can view this as a work deriving simple asymptotics for trimmed AIPW regardless of their preferred meaning of "trim."
Crump et al.
(2009), Yang and Ding (2018), Li et al. (2018), and Goldsmith-Pinkham et al. (2024) propose changing estimands in response to weak overlap, which introduces a discontinuous estimand based on whether or not the econometrician detects meaningful overlap weakness. Other theoretical literature so far has either proposed a nonstandard causal estimator, required nonstandard techniques to construct confidence intervals, or done both. Ma and Wang (2020) and Heiler and Kazak (2021) propose using self-normalized subsampling methods that enable valid statistical inference for standard estimands without clipping or trimming, but empirical practice has favored simple t-tests. Ma and Wang also propose a debiasing procedure. Heiler and Kazak also find that estimated untrimmed AIPW is first-order equivalent to the oracle AIPW estimator with known nuisance functions, and the associated asymptotic distribution is alpha-stable, if the product of nuisance estimation rates is of a lower order than the oracle standard deviation; I find this result does not extend to the thresholded AIPW estimator that I show is asymptotically normal. Ma et al. (2024) and Lei et al. (2021) propose statistical tests under a null of sufficient or strict overlap, respectively, presumably in the hopes of avoiding these complications. My analysis of nonparametric regression rates is also relevant to the literature on regression with degenerate designs, although to my knowledge the possibility of a global rate with no polylogarithmic penalty
is new (Stone, 1982; Hall et al., 1997; Gaïffas, 2005; Mou et al., 2023). The plan of the paper is as follows. Section 2 presents the setting and main theoretical results. Section 3 interprets these results as minimal black-box consistency rates and as minimal smoothness rates. Section 4 considers implications for parametric estimators and derives some rules of thumb for empirical use. Section 5 presents numerical results for simulations and the empirical application to right-heart catheterization. Section 6 concludes.
Notation. I follow Heiler and Kazak (2021) and use "strict overlap" to refer to the case in which the propensity score is bounded away from zero almost surely; I use "weak overlap" to refer to the case in which the infimum of the support of the propensity score is zero, which is sometimes called "limited overlap" (Khan and Tamer, 2010; Chaudhuri and Hill, 2024). I focus my attention on distributions with weak overlap that may possess subexponential tails. I use "very weak overlap" to refer to the case in which the associated heavy tails can fail to generate inverse propensity moments, a class which is sometimes called "heavy tailed" (Chaudhuri and Hill, 2024). I use "somewhat weak overlap" to refer to the case in which I allow only subexponential tails that are sufficiently light, a class which is sometimes said to satisfy "strict overlap" or "overlap" (Heiler and Kazak, 2021; Bruns-Smith et al., 2024). I write ψ̃_{AIPW(Oracle)}(b_n) = (1/n) Σ_i ϕ(Z_i | b_n, η) for the oracle AIPW estimate with pseudo-outcome ϕ(Z | b_n, {ē(X), μ̄(X)}) = μ̄(X) + D(Y − μ̄(X)) / max{ē(X), b} and clipping threshold b_n, and

σ_n = n^{−1/2} √( (1/n) Σ_i ϕ(Z_i | b_n, η)² − ψ̃_{AIPW(Oracle)}(b_n)² ), σ̂_n = n^{−1/2} √( (1/n) Σ_i ϕ(Z_i | b_n, η̂)² − ((1/n) Σ_i ϕ(Z_i | b_n, η̂))² )

for the associated oracle and estimated sample standard deviations, respectively. I refer to regions of the covariate space in which the propensity can be arbitrarily close to zero as singularities.
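To make the notation concrete, here is a minimal numpy sketch (my own illustration, not the paper's code) of the clipped pseudo-outcome ϕ(Z | b, η̂) and the plug-in standard deviation σ̂_n, assuming the nuisance values μ̂(X_i) and ê(X_i) are supplied as arrays:

```python
import numpy as np

def phi(y, d, mu_hat, e_hat, b):
    """Clipped pseudo-outcome: mu_hat(X) + D*(Y - mu_hat(X)) / max{e_hat(X), b}."""
    return mu_hat + d * (y - mu_hat) / np.maximum(e_hat, b)

def sigma_hat(y, d, mu_hat, e_hat, b):
    """Plug-in sigma_hat_n = n^{-1/2} * sqrt(mean(phi^2) - mean(phi)^2)."""
    p = phi(y, d, mu_hat, e_hat, b)
    return np.sqrt(np.mean(p ** 2) - np.mean(p) ** 2) / np.sqrt(len(p))
```

The point estimate is then the sample mean of ϕ, and a Wald interval takes the form mean(ϕ) ± z_{1−α/2} · σ̂_n.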
I use the notation E_P[·] and E[·] to refer to the expectation under the maintained distribution P, and I use the notation ψ(P) to refer to the statistical average potential outcome E_P[E_P[Y | X, D = 1]] where the right-hand side is well-defined under P. I abuse notation and write ψ = ψ(P) and use sup_{P∈A} B to refer to the supremum of B over distributions P in A under any maintained restrictions on the distribution and nuisance functions. I write that a set of nuisance functions are cross-fit if the data is partitioned into K folds and the nuisance functions in fold k are independent of the data in fold k. I write A_n ≤_P B_n to refer to the case that for all ϵ > 0, P(A_n > B_n + ϵ) → 0. I write P(E_n) for the probability of event E_n occurring under the distribution P, with the number of draws n sometimes left implicit. I use the notation c_n ≪ d_n for nonnegative sequences c_n, d_n to indicate that d_n > 0 for all n large enough and c_n/d_n → 0. I use the notation c_n ≾ d_n and d_n ≿ c_n to indicate that there is some δ > 0 such that d_n ≥ δ c_n for all n large enough. I write c_n = o_P(d_n) for a sequence d_n > 0 to indicate that for all δ > 0, P(|c_n|/d_n > δ) → 0; if there is only one distribution in a statement, c_n = o(d_n) should be understood to mean c_n = o_P(d_n). I use log to refer to the natural logarithm and a ∨ b to
indicate max{a, b}. I define Hölder smoothness using a multivariate version of the notation of Tsybakov (2009): a function f is in the Hölder smoothness class Σ(β, L) if the ⌊β⌋-order multivariate derivatives D^α f = ∂^{∥α∥} f / (∂x_1^{α_1} ∂x_2^{α_2} ⋯) satisfy ∥D^α f(x) − D^α f(x′)∥ ≤ L ∥x − x′∥^{β−⌊β⌋}, where I write D^α f(x) for D^α f evaluated at x. For simplicity, I use local polynomial regression to refer specifically to kernel regression with a uniform bandwidth: μ̂^{(NW)}(x|h) = Σ_i D_i 1{∥X_i − x∥ ≤ h} Y_i / Σ_i D_i 1{∥X_i − x∥ ≤ h} when feasible, and μ̂^{(NW)}(x|h) = 0 when no nearby treated observations are available. For an estimator η̂ of η, I use ∥η̂ − η∥_∞ to refer to the sup norm sup_{x∈Support(P)} |η̂(x) − η(x)|.
2 Setting, Consistency, and Asymptotic Normality
This section presents asymptotic results under black-box nuisance conditions.
2.1 Setting
I derive uniform convergence rates under lower bounds on overlap weakness. I follow Ma and Wang (2020), who provide important building blocks in my analysis, and parameterize overlap weakness through a tail parameter γ0. Unlike their analysis, the results will be uniform over a model family P satisfying certain restrictions, including some basic regularity conditions.
Figure 1: Simulations of 10,000 observations of e(X) with P(e(X) ≤ π) = π^{γ0−1} for increasing values of γ0. [Panels show histograms of the propensity score (# observations, treated vs. control) for γ0 = 1.8, 2, and 2.2.]
Assumption 1. Let P be a nonempty family of distributions, and write e(X) = P(D = 1|X) and μ(X) = E_P[Y | X, D = 1]. Then every P ∈ P satisfies the following conditions for some q > 3, M, σ_min, C > 0 and γ0 > 1:
(a) Conditional moments. E[|Y − μ(X)|^q | X, D = 1] ≤ M^q < ∞ almost surely.
(b) Unconditional moments. Var(μ(X)) ≤ M.
(c) Residuals. Var(Y | X, D) ≥ σ²_min.
(d) Propensity tail. P(e(X) ≤ π) ≤ C π^{γ0−1} for all π ∈ [0, 1].
Assumption 1 generalizes Ma and Wang (2020)'s slowly varying tails assumption. Assumptions 1(a) through 1(c) are regularity conditions that rule out cases like perfectly predictable outcomes.
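The boundary case of the tail bound in Assumption 1(d) is easy to simulate by inverse-CDF sampling: if U is uniform on (0, 1), then e(X) = U^{1/(γ0−1)} has CDF P(e(X) ≤ π) = π^{γ0−1}, which is the design behind Figure 1. A minimal sketch (my own illustration, not the paper's simulation code):

```python
import numpy as np

def simulate_propensities(n, gamma0, rng):
    """Draw e(X) with CDF P(e <= pi) = pi^{gamma0 - 1} via inverse-CDF sampling,
    then draw D | X ~ Bernoulli(e(X))."""
    u = rng.uniform(size=n)
    e = u ** (1.0 / (gamma0 - 1.0))
    d = rng.uniform(size=n) < e
    return e, d

rng = np.random.default_rng(0)
# gamma0 < 2: the density of e(X) blows up near zero (very weak overlap);
# gamma0 > 2: the density vanishes near zero (somewhat weak overlap).
e_weak, d_weak = simulate_propensities(10_000, 1.8, rng)
e_strong, d_strong = simulate_propensities(10_000, 2.2, rng)
```

Drawing histograms of `e_weak` and `e_strong` split by the treatment indicator reproduces the qualitative pattern described around Figure 1: many untreated observations with tiny propensities when γ0 < 2, far fewer when γ0 > 2.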
Assumption 1(d) provides the substantial restriction on P: overlap may be weak in the sense that γ0 is finite, but there are some minimal γ0 and C that provide a lower bound on the propensity's tail behavior. Under strict overlap, Assumption 1(d) holds for any finite γ0 > 1, and most results here hold after replacing γ0 with infinity. Under weak overlap, Assumption 1(d) may only hold for some values of γ0, in which case the inverse propensity distribution may be heavy-tailed. I refer to the case γ0 > 2 as "somewhat weak overlap" and to the case γ0 < 2 as "very weak overlap." As γ0 shrinks below 2, overlap is permitted to be increasingly weak. γ0 ≤ 1 corresponds to no bound on the propensity distribution. Figure 1 illustrates behavior for simulated data with various values of γ0. When the propensity score e(X) has a well-defined density, γ0 = 2 corresponds to a roughly uniform distribution of propensity scores (Ma and Wang, 2020). When γ0 is above two, the density of propensity scores tends to zero at zero; when γ0 is below two, the density of propensity scores can tend to infinity at zero. Heuristically, there are never too many treated observations with very small propensity scores, but γ0 governs the degree to which there can be many untreated observations with very small propensity scores. A phase transition occurs when γ0 crosses two.

Proposition 1. (i) Suppose Assumption 1 holds for some γ0 > 2. Then the semiparametric bound
is finite for all P ∈ P. (ii) Suppose Assumption 1 holds for some γ0 ∈ (1, 2), and there is a P ∈ P and C′ > 0 such that P(e(X) ≤ π) ≥ C′ π^{γ0−1} for all π ∈ (0, 1]. Then the semiparametric bound is infinite for P.

I will require certain rates on the nuisance functions e(X) and μ(X). I write the worst-case rates as r_{e,n} and r_{μ,n}.

Assumption 2 (Cross-fitting and L∞ rates). The nuisances μ̂ and ê are estimated with cross-fitting with a fixed number of folds K. If n_k is the number of observations per fold, then inf_k n_k / sup_k n_k → 1. Further, for all k ∈ 1, ..., K and all P ∈ P, the cross-fit nuisances satisfy the uniform consistency rates E_P[‖μ̂_n^{(−k)} − μ‖_∞] ≤ r_{μ,n} and E_P[‖ê_n^{(−k)} − e‖_∞] ≤ r_{e,n}, where r_{μ,n}, r_{e,n} are uniformly bounded above.

Cross-fitting is a common strategy for simplifying the analysis of Neyman-orthogonal estimators like AIPW (Chernozhukov et al., 2018). In practice, nuisances satisfying Assumption 2 may only be achieved with arbitrarily high probability. I impose uniformity to ensure these rates hold in regions of x with weak overlap; the uniformity can be bypassed with L2 error conditions in other regions. Such uniformity assumptions are standard in studying semiparametric estimators under irregular identification (Semenova, 2024).

2.2 Estimator and Consistency

My formal analysis considers the clipped AIPW estimator with cross-fit nuisance function estimates. Results for the other standard thresholding procedure, trimming, generally follow by the same arguments. I begin by providing sufficient conditions for consistency. For simplicity, I focus the theoretical analysis on estimating the average potential outcome ψ = E[DY / e(X)]. The average treatment effect follows as a corollary. The clipped AIPW estimator of ψ is:

ψ̂^{AIPW}_{clip}(b_n) = (1/n) Σ_{k=1}^{K} Σ_{i ∈ F_k} φ(Z_i | b_n, η̂^{(−k)}), where φ(Z | b, η̂) = μ̂(X) + D(Y − μ̂(X)) / max{ê(X), b}.   (1)

In that equation, F_k is the set of observations i randomly partitioned into fold k, and η̂^{(−k)} denotes the nuisance function estimates constructed only on observations in folds other than k.
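Equation (1) can be sketched in a few lines. The following is an illustrative implementation (not the paper's code): `fit_nuisances` is a hypothetical user-supplied routine that fits (μ̂, ê) on the folds other than k, mimicking the cross-fitting of Assumption 2.

```python
import random
import statistics

def phi(y, d, mu_hat, e_hat, b):
    """AIPW score phi(Z | b, eta_hat) = mu_hat + D (Y - mu_hat) / max(e_hat, b)."""
    return mu_hat + d * (y - mu_hat) / max(e_hat, b)

def clipped_aipw(data, fit_nuisances, b_n, K=5, seed=0):
    """Cross-fit clipped AIPW estimate of psi = E[DY / e(X)], as in equation (1).

    `data` is a list of (x, d, y) triples.  `fit_nuisances` maps a training
    subsample to a pair of functions (mu_hat, e_hat); for fold k it is fit
    only on the other folds, and phi is then averaged over all folds.
    """
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    folds = [idx[k::K] for k in range(K)]
    scores = []
    for k in range(K):
        train = [data[i] for j in range(K) if j != k for i in folds[j]]
        mu_hat, e_hat = fit_nuisances(train)
        for i in folds[k]:
            x, d, y = data[i]
            scores.append(phi(y, d, mu_hat(x), e_hat(x), b_n))
    return statistics.mean(scores)
```

Setting `b_n = 0` recovers the unthresholded estimator discussed next.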
The unthresholded AIPW estimator is the special case b_n = 0. I analyze the clipped AIPW estimator because results for the trimmed AIPW estimator follow somewhat more easily.

A standard result for the unthresholded AIPW estimator is double robustness: when e(X) is bounded away from zero, unthresholded AIPW is consistent for ψ if either r_{e,n} or r_{μ,n} tends to zero. The existence of weak overlap introduces a subtlety to double robustness.

Proposition 2 (Consistency). Suppose b_n satisfies n^{−1/2} ≪ b_n ≪ 1, the conditions of Assumption 2 hold, and either (i) r_{e,n} b_n^{min{γ0−2, 0}} → 0 or (ii) r_{μ,n}(r_{e,n} + b_n)/b_n → 0. Then for all ε > 0,

sup_{P ∈ P} P(|ψ̂^{AIPW}_{clip}(b_n) − ψ(P)| > ε) → 0.

Under very weak overlap, condition (i) is stronger than the classic strict-overlap condition that r_{e,n} consistency implies estimator consistency. It requires that r_{e,n} go to zero faster than b_n^{2−γ0}, so that as overlap is allowed to be weaker, the propensity consistency rate may need to be as fast as b_n itself. Under even somewhat weak overlap, condition (ii) is stronger than the classic r_{μ,n} → 0 condition. With an inconsistent propensity score, a meaningful fraction of the data may be clipped even asymptotically, in which case the outcome regression error rate must offset the positive probability of incorrectly assigning an inverse propensity weight of b_n^{−1}.

2.3 Asymptotic Normality and Confidence Intervals

This subsection presents the main theoretical claims of the paper.
It shows that under suitable rate restrictions, the clipped AIPW estimator is first-order equivalent to an oracle clipped AIPW estimator, both estimators are consistent and asymptotically normal, and simple Wald confidence intervals are well-calibrated.

A common strategy for deriving confidence intervals for unthresholded AIPW under strict overlap is Neyman orthogonality. In those classic settings, the difference between the feasible AIPW estimator with estimated nuisance functions and the hypothetical oracle AIPW estimator with known nuisance functions is

(1/n) Σ [φ(Z | 0, η̂) − φ(Z | 0, η)] = (1/n) Σ [(μ̂ − μ)(1 − D/ê) + (Y − μ)(D/ê − D/e)].

Intuitively, the regression errors μ̂ − μ are debiased by the inverse propensity estimates D/ê of the number one. As a result, in classical settings, slowly consistent nuisance estimates can yield quickly consistent causal estimates. When all nuisances are consistent at o(n^{−1/4}) rates and e(X) is bounded away from zero, inverse propensity errors are of the same order as propensity errors, classical AIPW estimates are first-order equivalent to oracle estimates with known nuisances, and simple Wald confidence intervals cover the true causal effect by appeal to the asymptotically normal oracle AIPW estimator.

Under very weak overlap, thresholded AIPW does not obtain the standard debiasing benefit. The analogous decomposition for clipped AIPW is

(1/n) Σ [φ(Z | b_n, η̂) − φ(Z | b_n, η)] = (1/n) Σ [(μ̂ − μ)(1 − D/max{ê, b_n}) + (Y − μ)(D/max{ê, b_n} − D/max{e, b_n})].

Above the clipping threshold b_n, thresholded AIPW's nuisance estimation error enjoys a product-of-errors character that is similar to classical settings, albeit with multiplication by weights as large as b_n^{−1}. Below the clipping threshold, the regression errors are not debiased by any inverse propensity estimate. Thresholded AIPW enjoys a subtly different form of debiasing: as the threshold tends to zero, increasingly little mass remains.
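The feasible-minus-oracle decomposition for the clipped score is an exact algebraic identity, which can be verified numerically. An illustrative sketch, with the score written as in equation (1):

```python
import random

def phi(y, d, mu, e, b):
    """AIPW score phi(Z | b, eta) = mu + d (y - mu) / max(e, b)."""
    return mu + d * (y - mu) / max(e, b)

# Check: phi(Z|b, eta_hat) - phi(Z|b, eta)
#   = (mu_hat - mu)(1 - D/max{e_hat, b})
#     + (y - mu)(D/max{e_hat, b} - D/max{e, b}).
rng = random.Random(0)
for _ in range(1000):
    y, d, b = rng.gauss(0, 1), rng.randint(0, 1), 0.1
    mu, mu_hat = rng.gauss(0, 1), rng.gauss(0, 1)
    e, e_hat = rng.uniform(0.01, 1), rng.uniform(0.01, 1)
    lhs = phi(y, d, mu_hat, e_hat, b) - phi(y, d, mu, e, b)
    rhs = (mu_hat - mu) * (1 - d / max(e_hat, b)) \
        + (y - mu) * (d / max(e_hat, b) - d / max(e, b))
    assert abs(lhs - rhs) < 1e-9
```

The first term vanishes where D/max{ê, b_n} is close to one, which is exactly the debiasing that fails below the clipping threshold.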
If the threshold b_n tends to zero quickly enough, the bias in the thresholded region is debiased by the threshold itself. If the threshold b_n tends to zero slowly enough, the product-of-errors condition can remain feasible. My formal contribution is to show that there can be a Goldilocks range where b_n tends to zero neither too slowly nor too quickly, so that thresholded AIPW is first-order equivalent to oracle AIPW and is asymptotically normal by appeal to the asymptotically normal thresholded oracle AIPW estimator.

My results for asymptotic normality and statistical inference will proceed under the following rate requirements.

Assumption 3 (Minimal rates). Assumption 2 holds, with the following rates on the regression error r_{μ,n} and the propensity error r_{e,n}:

(a) Consistency. r_{μ,n}, r_{e,n} → 0.

(b) Product of errors. r_{μ,n} r_{e,n} (1 + b_n^{(γ0−2)/2} log(1/b_n)^{1{γ0=2}/2}) ≪ n^{−1/2}.

(c) Regression error near singularities. r_{μ,n} b_n^{γ0/2} ≪ n^{−1/2}.

(d) Asymptotically known thresholding. r_{e,n} ≪ b_n.

Under very weak overlap, conditions (b) and (c) are stronger than the standard product-of-errors condition r_{μ,n} r_{e,n} ≪ n^{−1/2}. For example, when γ0 ≥ 1.5 and r_{μ,n} = n^{−1/4}, then r_{e,n} ≪ n^{−1/3} will suffice, provided r_{e,n} ≪ b_n ≪ n^{−1/3}. However, these conditions never require parametric n^{−1/2} consistency rates: shared regression rates of n^{−1/3} will always suffice for these conditions, provided the clipping threshold b_n goes to zero at a rate sufficiently close to n^{−1/3}. I discuss these conditions further in Section 3.2.

Certain technical possibilities call for one of two alternative further assumptions: a distributional smoothness assumption or a stronger rate
assumption.

Assumption 4 (Nondegeneracy or faster rates). One of the following two conditions holds:

(i) Nondegenerate overlap. There exists some ρ > 0 such that for all P ∈ P and π ∈ [0, 1], P(e(X) ≤ π/2) ≤ (1 − ρ) P(e(X) ≤ π).

(ii) Faster rates. r_{μ,n} b_n^{(γ0−1)²/γ0} ≪ n^{−1/2}.

Assumption 4(i) is a uniform version of the requirement that P(e(X) ≤ x) = c(x) x^{γ0−1} for c(x) tending to a constant at zero. The definition formalizes the notion that a distribution may place some propensity mass near zero, but it may not place mass adversarially within the region of the origin. When γ0 < 2, Assumption 4(ii) is stronger than Assumption 3(c). As γ0 tends to one, the condition approaches the parametric requirement r_{μ,n} = O(n^{−1/2}).

I now provide the main theoretical result.

Theorem 1 ((Slow) Asymptotic Normality). Suppose b_n satisfies n^{−1/2} ≪ b_n ≪ 1, and Assumptions 1, 2, 3, and 4 hold. Then the clipped AIPW estimator is oracle-equivalent:

lim_{n→∞} sup_{P ∈ P} σ_n^{−2} E_P[(ψ̂^{AIPW}_{clip}(b_n) − ψ̃^{AIPW}_{(Oracle)}(b_n))²] = 0.

Further, clipped AIPW is asymptotically normal:

lim_{n→∞} sup_{P ∈ P} sup_{t ∈ R} |P((ψ̂^{AIPW}_{clip}(b_n) − ψ(P)) / σ̂_n ≤ t) − Φ(t)| = 0.

Theorem 1 is the core theoretical claim of this paper. The first result shows that thresholded AIPW is first-order equivalent to an oracle estimator with known nuisances: the effect of nuisance estimation error on the treatment effect estimate tends to zero faster than the standard deviation of the oracle estimator. The second result leverages this first-order equivalence to characterize the asymptotic distribution of the clipped AIPW estimates and t-statistics: the estimator is asymptotically normal, and estimated t-statistics are asymptotically standard normal. Both results are standard for AIPW under strict overlap, but substantial care is required to handle unbounded inverse propensities under weak overlap. The argument for normality builds on Ma and Wang (2020)'s proof that aggressively-trimmed oracle IPW with known propensities achieves asymptotic normality with first-order bias.
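The rate conditions entering Theorem 1 can be checked mechanically under the illustrative assumption of polynomial rates r_{μ,n} = n^{−a_μ}, r_{e,n} = n^{−a_e}, and b_n = n^{−c}: each "≪ n^{−1/2}" requirement in Assumption 3 then reduces to a strict inequality on exponents (a sketch, not the paper's code; the log factor at γ0 = 2 does not change polynomial exponents).

```python
def assumption3_ok(a_mu, a_e, c, gamma0):
    """Check Assumption 3 for polynomial rates r_mu = n^{-a_mu},
    r_e = n^{-a_e}, and threshold b_n = n^{-c} with 0 < c < 1/2."""
    consistency = a_mu > 0 and a_e > 0                        # (a)
    # (b): r_mu r_e (1 + b_n^{(gamma0-2)/2}) << n^{-1/2}
    product = a_mu + a_e - max(0.0, c * (2 - gamma0) / 2) > 0.5
    # (c): r_mu b_n^{gamma0/2} << n^{-1/2}
    singular = a_mu + c * gamma0 / 2 > 0.5
    # (d): r_e << b_n
    known_thresh = a_e > c
    return consistency and product and singular and known_thresh
```

For example, γ0 = 1.5 with a_μ = 0.25 and a_e slightly above c slightly above 1/3 passes, matching the n^{−1/4}/n^{−1/3} discussion of Assumption 3 above.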
I extend their argument to a uniform family of distributions using the Berry-Esseen Theorem and note that oracle AIPW must have zero finite-sample bias. The main task of Theorem 1 is to show that replacing the true nuisances with estimated nuisances has a second-order effect on clipped AIPW estimates under appropriate conditions. This is nontrivial, because under weak overlap, there is an asymptotically unbounded number of observations with arbitrarily large inverse propensities even with known nuisance functions. Nevertheless, by taking appropriate care and leveraging that clipping introduces bias by reducing inverse propensities, I am able to show that the effect of nuisance estimation is second-order even under the very weak overlap case in which unthresholded AIPW fails to be asymptotically normal and no regular root-n estimators exist.

Next, I show that Theorem 1 yields the natural result for inference: simple t-tests based on Wald confidence intervals are well-calibrated.

Corollary 1 (T-tests are well-calibrated). Suppose the conditions of Theorem 1 hold. Consider the Wald confidence interval

Ĉ_n(α) = [ψ̂^{AIPW}_{clip}(b_n) + z_{α/2} σ̂_n, ψ̂^{AIPW}_{clip}(b_n) + z_{1−α/2} σ̂_n].

Then for all α ∈ (0, 1/2),

lim sup_{n→∞} sup_{P ∈ P} |P(ψ(P) ∈ Ĉ_n(α)) − (1 − α)| = 0.

Under somewhat weak overlap, unthresholded AIPW is semiparametrically efficient. I now show thresholding is also unnecessary.

Corollary 2 (Thresholding is second-order under somewhat weak overlap). Suppose Assumption 2 holds, Assumption 1 holds for some γ0 > 2, r_{e,n} and r_{μ,n} → 0, and r_{e,n} r_{μ,n} ≪ n^{−1/2}. Then the feasible AIPW estimator
with b_n = 0 is semiparametrically efficient, and the associated Wald confidence interval Ĉ_n(α) satisfies

lim sup_{n→∞} sup_{P ∈ P} |P(ψ(P) ∈ Ĉ_n(α)) − (1 − α)| = 0.

The logic of Corollary 2 is to show that any sequence of b_n → 0 has a second-order effect on estimation, so that unthresholded and thresholded AIPW are first-order equivalent, and that there is some b_n → 0 slowly enough satisfying the conditions of Theorem 1, so that there is a thresholded AIPW estimator achieving Corollary 1.

Taken together, this subsection yields a remarkable result for practice. The distribution P may place so much propensity mass near the origin that the semiparametric efficiency bound is infinite, the lower bound on the density of propensity mass near the origin can be so weak that identification nearly fails, and the nuisance estimator may be so poorly designed that it pushes all observations' estimated propensities towards the origin at a slower-than-parametric rate. Nevertheless, Neyman orthogonality is sufficiently powerful to ensure the validity of the simple t-test. The next section interprets the rate requirements of Assumption 3.

Table 1: Summary of degradation of asymptotic behavior and requirements as overlap is permitted to be increasingly weak.

                                  Strict (inf e(x) > 0)             Somewhat Weak (γ0 > 2)             Very Weak (γ0 < 2)
Double robustness /               r_{e,n} → 0 or                    r_{e,n} → 0 or                     r_{e,n} b_n^{γ0−2} → 0 or
  consistency conditions          r_{μ,n} → 0                       r_{μ,n} b_n^{−1} → 0               r_{μ,n} r_{e,n} b_n^{−1} → 0
Oracle AIPW asymptotics
  Unthresholded distribution      Normal                            Normal                             Nonnormal
  Thresholded convergence rate    n^{−1/2}                          n^{−1/2}                           n^{−1/2} b_n^{(γ0−2)/2}
  Semiparametrically efficient?   Yes                               Yes                                No
Nuisance requirements
  Black box                       n^{1/2} r_{μ,n} r_{e,n} → 0 (L2)  n^{1/2} r_{μ,n} r_{e,n} → 0 (L∞)   n^{1/2} r_{μ,n} r_{e,n}^{γ0/2} → 0 (L∞)
  Smoothness                      βµ/(2βµ+d) + βe/(2βe+d) > 1/2     βµ/(2βµ+dγ0/(γ0−1)) + βe/(2βe+d) > 1/2   βµ/(2βµ+dγ0/(γ0−1)) + (γ0/2)·βe/(2βe+d) > 1/2
Regression rates
  Pointwise optimum               n^{−βµ/(2βµ+d)}                   n^{−βµ(1−1/γ0)/(2βµ(1−1/γ0)+d)}    n^{−βµ(1−1/γ0)/(2βµ(1−1/γ0)+d)}
  Global optimum                  (n/log n)^{−βµ/(2βµ+d)}           n^{−βµ(1−1/γ0)/(2βµ(1−1/γ0)+d)}    n^{−βµ(1−1/γ0)/(2βµ(1−1/γ0)+d)}

3 Interpretation of Nuisance Requirements

This section interprets the rate requirements for Wald confidence intervals to cover asymptotically. I summarize the results in Table 1. Under somewhat weak overlap, thresholded and unthresholded AIPW remain semiparametrically efficient and √n-consistent, and the traditional product-of-errors nuisance condition remains in place with a modification to an L∞ norm on errors. Under very weak overlap, clipped AIPW achieves a slower consistency rate and the required black-box nuisance rates are more stringent. Both cases make outcome regression more difficult, but never so difficult as to require parametric assumptions. As a byproduct of this analysis, I show that the optimal pointwise and global regression rates under weak overlap are the same, without the usual polylogarithmic factor in the global rate.

3.1 Degradation of Consistency Rate

The previous analysis suggests that smaller values of b_n are preferable because they admit weaker black-box requirements. However, under very weak overlap, larger values of b_n correspond to faster AIPW rates. I characterize the consistency rate of any oracle-equivalent estimator as follows.

Proposition 3 (Consistency rate). Suppose the assumptions of Proposition 2 hold. Then there exist positive constants c_min and c_max such that

c_min n^{−1} E_P[D / max{e(X), b_n}²] ≤ σ²_n ≤ c_max n^{−1} E_P[D / max{e(X), b_n}²]

for all P ∈ P, where σ²_n = n^{−1}((1/n) Σ φ(Z | b_n, η)² − ψ̃^{AIPW}_{(Oracle)}(b_n)²) is the oracle sample variance.

If the estimator were trimmed instead of clipped, E_P[D / max{e(X), b_n}²] would be replaced by a term like E_P[D 1{e(X) ≥ b_n} / e(X)²]. Weaker overlap corresponds to larger values of E_P[D / max{e(X), b_n}²] and
slower consistency rates. Conditional on P, larger values of b_n correspond to a smaller value of E_P[D / max{e(X), b_n}²], faster oracle consistency, and greater asymptotic power. Proposition 3 implies a worst-case consistency rate over distributions in P. I focus on the case of very weak overlap, because Corollary 2 shows that under somewhat weak overlap, clipped and traditional AIPW achieve a traditional √n consistency rate.

Corollary 3 (Worst-case consistency rate). Suppose γ0 < 2 and let b_n be a fixed sequence satisfying 1 ≫ b_n ≫ n^{−1/2}. There exists a C′ > 0 such that for P satisfying Assumption 1, C′ n^{−1} b_n^{γ0−2} ≥ sup_{P ∈ P} σ²_n for all n large enough. Further, there exists a family P satisfying Assumption 1 and a C′′ ∈ (0, C′) such that sup_{P ∈ P} σ²_n ≥ C′′ n^{−1} b_n^{γ0−2} for all n large enough.

The rate n^{−1} b_n^{γ0−2} is a worst-case consistency rate in b_n: every distribution in P achieves a consistency at least as fast as n^{−1} b_n^{γ0−2}, and it is possible to find a distribution for which the consistency rate is no faster. The combination of Corollary 3 and Theorem 1 yields a trade-off under very weak overlap: smaller values of b_n yield laxer requirements on regression estimation near singularities, but lead to larger variance and slower consistency.

3.2 Degradation of Black-Box Nuisance Requirements

Under very weak overlap, the black-box rates of Assumption 3 are more stringent than the usual r_{μ,n} r_{e,n} ≪ n^{−1/2} condition. The main requirement is that r_{μ,n} r_{e,n}^{min{γ0/2, 1}} goes to zero faster than n^{−1/2}. As a result, outcome regression rates are more valuable than nominally equivalent propensity rates under very weak overlap.

The usual product-of-errors condition under strict overlap often takes a form like r_{μ,n} r_{e,n} ≪ σ_n, where σ_n is the standard deviation of the oracle estimator. For example, Heiler and Kazak (2021) argue that this condition is sufficient for estimated unthresholded AIPW to be first-order equivalent to an oracle estimator.
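The variance scale in Proposition 3 has a direct sample analogue, n^{−1} times the empirical mean of D/max{ê(X), b}². The following hypothetical helper (an illustrative sketch, not the paper's code) makes the Corollary 3 trade-off concrete: raising the threshold weakly lowers the variance proxy.

```python
def oracle_variance_proxy(d, e, b):
    """Sample analogue of n^{-1} E_P[D / max{e(X), b}^2], the order of the
    clipped oracle variance sigma_n^2 in Proposition 3 (up to constants)."""
    n = len(d)
    return sum(di / max(ei, b) ** 2 for di, ei in zip(d, e)) / n ** 2

d = [1, 1, 0, 1]
e = [0.05, 0.5, 0.02, 0.9]
# Raising b weakly lowers the proxy: smaller variance, but more clipping bias.
assert oracle_variance_proxy(d, e, 0.2) <= oracle_variance_proxy(d, e, 0.05)
```

Swapping `max(ei, b)**2` for the trimmed analogue `1/ei**2` on `ei >= b` gives the corresponding proxy for trimmed AIPW.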
An alternative characterization of the usual product-of-errors condition is that r_{μ,n} r_{e,n} ≪ n^{−1/2}; this requirement is equivalent under somewhat weak overlap, but is more stringent under very weak overlap. Even this stronger product-of-errors requirement is insufficient for Wald confidence intervals constructed with thresholded AIPW to be valid.

Corollary 4 (Under very weak overlap, faster rates are necessary). For any overlap bound γ0 ∈ (1, 2), there exist a P and cross-fit nuisance estimators satisfying:

1. The model is regular. P satisfies Assumption 1 for this γ0.

2. Classic product of errors. sup_P E_P[‖μ̂ − μ‖_∞] E_P[‖ê − e‖_∞] ≪ n^{−1/2} and b_n ≫ E_P[‖ê − e‖_∞].

3. Wald inference fails. For any fixed target coverage level α ∈ (0, 1), sup_{P ∈ P} P(ψ(P) ∈ Ĉ_n) → 0.

A heuristic sufficient condition for Wald confidence interval validity is r_{μ,n} r_{e,n}^{min{γ0, 2}/2} ≪ n^{−1/2}. When γ0 < 2, there is a range of nuisance estimates such that r_{μ,n} r_{e,n} ≪ n^{−1/2} ≪ r_{μ,n} r_{e,n}^{min{γ0, 2}/2} and estimation bias can be of a higher order than the oracle variance.

I calculate the black-box rate requirements under Assumption 4(i) in a few special cases. I omit an analysis of the stronger rate requirement in Assumption 4(ii) that would be needed to handle degenerate distributions.

Assumption 5. Assumptions 1, 2, and 4(i) hold, and r_{e,n}, r_{μ,n} → 0.

I characterize the following special cases.

Example 1 (Somewhat weak overlap). Suppose Assumption 5 holds, γ0 > 2, and r_{μ,n} r_{e,n} ≪ n^{−1/2}. Then there exists a b_n → 0 such that clipped AIPW t-statistics are asymptotically well-calibrated.

Example 2 (Second moments barely fail to exist). Suppose Assumption 5 holds for γ0 = 2 and
there is some η > 0 such that r_{μ,n} r_{e,n} log(1/r_{e,n}) ≪ n^{−1/2}. Then there exists a b_n → 0 such that clipped AIPW t-statistics are asymptotically well-calibrated.

Example 3 (Shared rates, very weak overlap). Suppose Assumption 5 holds for some γ0 > 1 and r_{μ,n}, r_{e,n} ≪ n^{−1/3}. Then there exists a b_n → 0 such that clipped AIPW t-statistics are asymptotically well-calibrated.

Example 4 (Parametric rates). Suppose Assumption 5 holds for some γ0 > 1 and either (i) r_{μ,n} = O(n^{−1/2}) and r_{e,n} = o(1) or (ii) r_{e,n} = O(n^{−1/2}) and r_{μ,n} = o(n^{(γ0−2)/4}). Then there exists a b_n → 0 such that clipped AIPW t-statistics are asymptotically well-calibrated.

I now unpack the black box and quantify the degree to which weak overlap makes a given outcome regression rate more difficult to achieve.

3.3 Necessary Smoothness Conditions

Weak overlap makes outcome regression more difficult: there may be few treated observations in precisely the regions in which thresholded AIPW depends most acutely on outcome regression. In this section, I show that for pointwise rates, even somewhat weak overlap can be viewed as degrading effective outcome smoothness for optimal local polynomial estimators. I also derive a blessing of weak overlap: the optimal global rate is equal to the optimal pointwise rate, without the usual polylogarithmic penalty.

I characterize optimal nonparametric regression rates under Hölder continuity. For convenience, I fix the covariates to be uniform over a specific hypercube in R^d with constant variance.

Assumption 6 (Hölder smoothness and fixed domain). P satisfies Assumptions 1 and 4(i), and for all P ∈ P, X ∼ Unif([−1, 1]^d) and Y | X, D ∼ N(D μ_P(X) + (1 − D) μ′_P(X), σ²_min), with μ, μ′ in the Hölder smoothness class Σ(βµ, L) for some fixed βµ, L > 0.

Most of these assumptions are standard for studying local polynomial regression under strict overlap (Stone, 1982). I assume normal outcomes in order to simplify the characterization of the optimal rate.
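The uniform-kernel estimator fixed in the notation section is the regression tool studied here. A one-dimensional illustrative sketch (not the paper's code):

```python
def nw_uniform(x0, h, data):
    """Uniform-kernel regression at x0 with bandwidth h:
    sum_i D_i 1{|X_i - x0| <= h} Y_i / sum_i D_i 1{|X_i - x0| <= h},
    defined as 0 when no nearby treated observations are available.
    `data` is a list of (x, d, y) triples."""
    num = sum(d * y for x, d, y in data if abs(x - x0) <= h)
    den = sum(d for x, d, y in data if abs(x - x0) <= h)
    return num / den if den > 0 else 0.0
```

Under weak overlap, windows near propensity singularities contain few treated points, which is the mechanism behind the degraded rates derived in this section.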
It is known that in this setting under strict overlap, the optimal pointwise rate is n^{−βµ/(2βµ+d)}, and the optimal global rate (n/log n)^{−βµ/(2βµ+d)} has a polylogarithmic penalty, in the sense that the optimal global rate is worse by some polynomial factor of log(n). I require that treated observations cannot concentrate in small regions.

Assumption 7 (Non-trivial concentration). There are parameters ρ, γ > 0 such that for all h > 0 small enough, all P ∈ P, and all x_0 ∈ [−1, 1]^d,

P(e(X) ≥ ρ sup_{‖x − x_0‖ ≤ h} e(x) | D = 1, ‖X − x_0‖ ≤ h) > γ.

I show in Appendix Proposition 5 that this condition holds if the propensity function is sufficiently smooth. When the propensity function is nonsmooth, it is possible for nature to concentrate treated observations within a given bandwidth in a region that is too small, introducing local polynomial degeneracy issues that are outside the scope of this work.

In the worst case, weak overlap of order γ0 > 1 plays a role equivalent to scaling the effective outcome smoothness downward by (1 − 1/γ0), but also removes the polylogarithmic factor in the optimal global rate.

Theorem 2 (Weak overlap reduces effective outcome smoothness). Define ψ_n = n^{−β*/(2β*+d)}, where β* = βµ(1 − 1/γ0). Then

(i) ψ_n is a pointwise (and global) rate upper bound. There exist a c > 0 and a P satisfying Assumptions 6 and 7 such that

lim inf_{n→∞} inf_{μ̂} sup_{P ∈ P} P(|μ̂(0) − μ(0)| > c ψ_n) > 0.

(ii) ψ_n is an
achievable global (and pointwise) rate. Suppose P satisfies Assumptions 6 and 7. Then there exists an estimator μ̂(x) such that for all ε > 0, there is a finite c(ε) such that

lim sup_{n→∞} sup_{P ∈ P} P(‖μ̂ − μ‖_∞ > c(ε) ψ_n) ≤ ε.

Further, the estimator can be computed without knowledge of the overlap bound γ0.

Recall that under strict overlap, the optimal pointwise convergence rate for a Hölder-smooth regression function is n^{−βµ/(2βµ+d)}. Theorem 2 shows that weak overlap has the effect of degrading the effective smoothness from βµ to βµ(1 − 1/γ0). As overlap is allowed to become increasingly weak and other parameters are held constant, there can be regions of the covariate space with increasingly few treated observations, so that the optimal pointwise regression rate is slower. This penalty occurs even under somewhat weak overlap. In the limit in which γ0 tends to one, the rate in Theorem 2 can become arbitrarily poor. For example, when γ0 = 2, the difficulty of estimating a twice continuously differentiable function is comparable to the difficulty of estimating a Lipschitz-continuous function under strict overlap.

Theorem 2 also presents a blessing of weak overlap: the optimal global rate is equal to the pointwise rate, with no polylogarithmic penalty. Usually, optimal global rates are worse than optimal pointwise rates by a polylogarithmic factor due to the need to have accuracy at a number of gridpoints that grows with n. Under weak overlap, the global rate still must be consistent at all gridpoints, but most gridpoints must satisfy strict overlap and have a negligible contribution to the global consistency rate. The challenge is to partition the remaining observations in a way which does not impose a polylogarithmic penalty. Naively partitioning the remaining observations into regions of increasing overlap with a constant penalty yields log(log(n)) partitions and a penalty potentially as large as log(log(log(n))).
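The effective-smoothness calculus of Theorem 2 is simple enough to compute directly; the following hypothetical helper (an illustration, not from the paper) evaluates the rate exponent under the stated substitution β* = βµ(1 − 1/γ0):

```python
def effective_rate_exponent(beta_mu, d, gamma0=None):
    """Exponent r in the regression rate psi_n = n^{-r} from Theorem 2.

    Strict overlap (gamma0 = None) gives the classical r = beta_mu /
    (2 beta_mu + d); weak overlap of order gamma0 replaces beta_mu with
    the effective smoothness beta* = beta_mu * (1 - 1/gamma0).
    """
    beta = beta_mu if gamma0 is None else beta_mu * (1 - 1 / gamma0)
    return beta / (2 * beta + d)
```

With βµ = 2, d = 1, and γ0 = 2, the exponent coincides with the strict-overlap exponent for βµ = 1, which is the twice-differentiable-versus-Lipschitz comparison made above.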
The proof of Theorem 2 instead uses a more subtle construction, partitioning [−1, 1]^d into observations with increasing overlap in such a way that each partition's worst-case global penalty is half as large as the previous worst-case penalty, even after accounting for the increasing number of gridpoints. Appendix Proposition 6 shows that if the smallest partition's penalty is sufficiently large, then this construction eventually includes all relevant observations. Then, summing over the penalties yields a global penalty that is large but bounded, so that the optimal pointwise rate is an achievable (and therefore optimal) global rate.

Theorem 2 yields minimal smoothness assumptions for Wald confidence interval validity.

Corollary 5 (Minimal smoothness conditions). Suppose Assumption 6 holds and there is a βe > 0 such that e(X) ∈ Σ(βe, L) and

βµ / (2βµ + d γ0/(γ0 − 1)) + min{γ0/2, 1} · βe / (2βe + d) > 1/2.   (2)

Then there is a sequence of nuisance estimators and thresholds that are independent of γ0 such that for all γ0 > 1, the associated Wald confidence interval Ĉ_n(α) constructed using ψ̂^{AIPW}_{clip}(b_n) satisfies

lim sup_{n→∞} sup_{P ∈ P} |P(ψ(P) ∈ Ĉ_n(α)) − (1 − α)| = 0.

In one dimension, thresholded AIPW with a Lipschitz-continuous conditional outcome mean and propensity function can handle weak overlap of order γ0 > (4 + 1/βe)/3. In multiple dimensions, the econometrician must assume stronger smoothness restrictions than Lipschitz continuity in order to achieve the necessary nuisance rate guarantees under even strict overlap. When
the propensity function is infinitely differentiable, thresholded AIPW with a Lipschitz-continuous conditional outcome mean can handle weak overlap of order γ0 > 2(d + 1)/(d + 2). More generally, under very weak overlap, if

βµ, βe > d(√(γ0² + 4γ0 − 4) + 2 − γ0) / (4(γ0 − 1)),

then it is feasible to achieve standard inference with thresholded AIPW. This concludes the substantive theoretical analysis. In the next section, I use these results to infer lessons for empirical practice.

4 Lessons for Empirical Practice

This section leverages the theoretical analysis to consider misspecified parametric estimators and some rules of thumb for empirical use.

4.1 Parametric Estimators and Misspecification

When both nuisance functions are estimated nonparametrically, consistency is achievable and AIPW is generally preferable under strict overlap. When both nuisance functions are estimated parametrically, it is possible for one or both nuisance functions to be inconsistent, and the choice of estimator may be ambiguous. I now provide some intuition on the two estimators when nuisance functions are estimated parametrically and through cross-fitting.

I will consider IPW and AIPW with the same sequence of thresholds b_n satisfying 1 ≫ b_n ≫ n^{−1/2}. I write that a nuisance estimate η̂ is consistent if it tends to the correct limit η, and I write that η̂ is inconsistent otherwise. In this subsection, I will assume that parametric nuisance estimators η̂ achieve an L∞ error relative to a limiting nuisance function η̄ that is of the order n^{−1/2}. For example, consider logit estimation of a propensity model of the form ē(X) = exp(X′β) / (1 + exp(X′β)) for a pseudo-true parameter β. If the support of X is bounded, then n^{−1/2}-consistent estimation of β is sufficient to achieve n^{−1/2}-consistent estimation of ē(X) everywhere. However, weak overlap may emerge from unbounded tails, in which case the L∞ rate may not go to zero. Unbounded covariates are an important case in general.
For example, Ma and Wang (2020) motivate weak overlap tails through the distribution of covariates under a logistic propensity model. Nevertheless, a careful treatment of parametric estimation of nuisances with unbounded covariates is outside the scope of this work.

The analysis above is easiest to extend when either both or neither nuisance function is consistent. If both nuisance estimates are consistent, then the AIPW and IPW estimators will be consistent and will have variance of the same order, but the IPW estimator may have higher-order bias than the AIPW estimator. This higher-order bias follows because IPW can be viewed as a particular case of AIPW with an inconsistent outcome regression estimator. If both the propensity and outcome regression estimates are inconsistent, then both the IPW and AIPW estimators fail to be consistent, and as in the case of inconsistent nuisance functions with strict overlap, there is no general reason to prefer one or the other.

When the outcome regression estimate is inconsistent, there is no general reason to prefer IPW or AIPW, but both estimators may have bias that is of a higher order than the estimator's standard deviation. When μ̂ is inconsistent, both IPW and AIPW can be viewed as instances of AIPW with an inconsistent outcome regression estimate. Suppose P is a distribution from the second half of Corollary
3, which has P(e(X) ≤ π) ∼ π^{γ0−1} for all π small enough. The bias in the thresholded region with an inconsistent outcome regression estimate is generally on the order of P(e(X) ≤ b_n) ∼ b_n^{γ0−1}. However, by Corollary 3, the oracle AIPW (and oracle IPW) standard deviation is on the order of n^{−1/2} b_n^{γ0/2−1} ≪ b_n^{γ0−1}. This heuristic analysis suggests that in many cases, IPW or AIPW-with-inconsistent-outcome-regression will have bias that is of a higher order than the estimator's standard error. That intuition is similar to Ma et al. (2023)'s analysis of trimmed AIPW with a tailored debiasing procedure.

The case of a consistent outcome regression estimate with inconsistent propensity estimates is more interesting. In this case, AIPW should have lower-order bias than IPW, because IPW will be inconsistent. The Berry-Esseen argument for AIPW asymptotic normality with known nuisance functions only requires cross-fitting and b_n going to zero slower than n^{−1/2}, so that thresholded AIPW should also be asymptotically normal under appropriate error product conditions. However, it is unclear how the bias compares to sampling error. In any event, this robustness intuition is useful, because I apply parametric nuisance estimators in the application to right heart catheterization. Careful treatment of the parametric case is left for future work.

Before proceeding to apply clipped AIPW, I derive some rules of thumb for choosing a threshold.

4.2 Choice of Threshold

The theoretical analysis above provides conditions under which there is some sequence of thresholds for which AIPW is asymptotically normal and centered around the true causal estimand. I now provide guidance
Input: Rate upper bounds ¯ rµ,nand ¯re,n(maybe null), data {Di,ˆei}n i=1 Output: Rule-of-thumb threshold bn Function calculateRuleofThumb( ¯rµ,n,¯re,n,{Di,ˆei}n i=1): ifis.null( ¯rµ,n)and!is.null( ¯re,n)then bn←¯re,n else Aµ←!is.null (¯rµ,n) Ae←!is.null (¯re,n) bn←supb: 0≥errorBoundDiff (b,¯rµ,n,¯re,n,{Di,ˆei}n i=1, Aµ, Ae) end return bn Function errorBoundDiff( b,¯rµ,n,¯re,n,{Di,ˆei}n i=1, Aµ, Ae): ˜rµ,n←Aµ¯rµ,n+ (1−Aµ)b ˜re,n←Ae¯re,n+ (1−Ae)b second moment ←1 nPn i=1Di max{ˆei,b}2 error bound ←˜rµ,n1 nP1{ˆei≤b}√ second moment+ ˜rµ,n˜re,n√ second moment return error bound −n−1/2 Algorithm 1: Rule-of-thumb for choice of threshold bngiven (possibly null) rate upper bounds ¯ rµ,nand ¯rµ,n, to ensure that rµ,n≪¯rµ,n(orrµ,n≪bnif no bound is given) and re,n≪¯re,n(orre,n≪bnif no bound is given) is sufficient to achieve well-calibrated Wald confidence intervals. I propose different rules of thumb based on whether the econometrician is willing to provide an upper bound on the rate of convergence for the propensity estimate, the outcome regression estimate, or both. The combined proposal is presented in Algorithm 1. In practice, it is often relatively easy to identify an upper bound on the propensity rate of convergence. For instance, if the propensity score is estimated with local polynomial regression of order ℓe, then the econometrician is implicitly asserting a H¨ older smoothness of some order βe> ℓe, and a feasible consistency rate of ( n/log(n))−βe/(2βe+d)(Stone, 1982). In this case, the econometrician can safely conjecture that if their estimator is well-founded, then it will achieve a global consistency rate re,n≪n−ℓe/(2ℓe+d), and a threshold of bn=n−ℓe/(2ℓe+d)= “¯re,n”. This rule of thumb is practical and the safest rule with respect to outcome regression that does not impose stronger requirements on the
propensity estimate, but it also often corresponds to a relatively slow consistency rate with relatively little power. Three alternative rules of thumb target faster consistency rates and laxer propensity requirements. While the main text focuses on black-box requirements on consistency rates in terms of γ0, the proof goes through provided that bn tends to zero more slowly than n^{−1/2}, re,n ≪ bn, and

rµ,n P(e(X) ≤ bn) √(E_P[D / max{e(X), bn}²]) + rµ,n re,n √(E_P[D / max{e(X), bn}²]) ≪ n^{−1/2}.   (Appendix Assumption 3')

In these alternative cases, I propose replacing bn with the putative threshold b, and replacing rµ,n and re,n with predicted upper bounds r̄µ,n and r̄e,n where feasible and with the putative threshold b otherwise. This yields an empirical function error_bound(b). I then solve for the putative threshold b where error_bound(b) crosses n^{−1/2}. The result is a threshold bn that targets maximal efficiency and minimal propensity consistency requirements, calculated without use of the outcome data or direct knowledge of γ0, and with the property that if the nuisance errors go to zero faster than the specified upper bounds (or estimated threshold), then the resulting Wald confidence intervals will be well-calibrated. An interesting avenue for future work is whether there is a convenient choice of context-dependent constant multiples in the error bound function. This rule of thumb is also always feasible, even when no upper bounds are specified.

Lemma 1 (Well-defined rule of thumb). Suppose ê ∈ (0, 1] and (1/n) Σi Di/êi > 0. Let fn(b) be the error bound function with no upper bound nuisance rates given. Then there is exactly one bn such that lim sup_{b→bn⁻} fn(b) ≤ 0 ≤ lim inf_{b→bn⁺} fn(b).

A rule of thumb with nonspecified rates seems particularly attractive, since it is an entirely data-driven way to choose the AIPW threshold.
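Algorithm 1 is straightforward to implement. The sketch below is a minimal Python rendering in which a grid search stands in for the sup; the function and variable names are mine, and the grid resolution and the no-feasible-point fallback are arbitrary choices.

```python
import math

def error_bound_diff(b, d, e_hat, r_mu=None, r_e=None):
    # Empirical error bound from Algorithm 1, minus the n**(-1/2) target.
    # When a rate upper bound is not supplied, the putative threshold b
    # itself stands in for the unknown nuisance rate.
    n = len(d)
    rt_mu = r_mu if r_mu is not None else b
    rt_e = r_e if r_e is not None else b
    second_moment = sum(di / max(ei, b) ** 2 for di, ei in zip(d, e_hat)) / n
    share_below = sum(ei <= b for ei in e_hat) / n
    bound = (rt_mu * share_below * math.sqrt(second_moment)
             + rt_mu * rt_e * math.sqrt(second_moment))
    return bound - n ** -0.5

def rule_of_thumb(d, e_hat, r_mu=None, r_e=None, grid_size=1000):
    # Largest grid value of b whose error bound still sits below n**(-1/2);
    # a finite grid stands in for the sup in Algorithm 1.
    if r_mu is None and r_e is not None:
        return r_e
    grid = [k / grid_size for k in range(1, grid_size)]
    feasible = [b for b in grid if error_bound_diff(b, d, e_hat, r_mu, r_e) <= 0]
    return max(feasible) if feasible else grid[0]  # fallback is my own choice
```

Lemma 1 guarantees a unique crossing in the no-specified-rates case; in the grid-search sketch this corresponds to taking the largest grid value at which the empirical error bound has not yet crossed n^{−1/2}.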
However, in practice, a given outcome regression rate is more difficult to achieve than a given propensity rate under weak overlap, so rules of thumb with specified nuisance rates are more appropriate for nonparametric outcome regression estimates.

5 Applications

In this section, I present simulated results for the clipped AIPW estimator as well as empirical results from an application to right heart catheterization. I find that clipped AIPW performs well asymptotically, producing near-perfect calibration of p-values with 100,000 observations, but exhibits some undercoverage in small samples. When studying the right heart catheterization data, I find that the rule-of-thumb approach increases the estimated harm of the procedure by 0.17 standard errors relative to the usual 10% trimming rule, while increasing the estimated standard error by 5.1%.

5.1 Simulation Evidence

I now study the performance of the clipped AIPW estimator in simulations. My simulation design is based on the design in Ma and Wang (2020). As in their work, I simulate data with P(e(X) ≤ π) = π^{γ−1} and DY = κD(1 − e(X)) + D(ε − 4)/√8, where ε | X, D ∼ χ²₄, so that (ε − 4)/√8 has zero mean and unit variance. However, I increase γ from 1.5 to 1.8 to ensure feasible outcome regression rates, set κ = 2 rather than κ = 1 to avoid coincidental offsetting bias of the IPW lower and upper tails in small samples, and reduce DY by κE[D(1 − e(X))] so that the true average potential outcome is zero. I achieve this propensity distribution by taking X ∼ Unif
([0,1]) i.i.d. and setting e(X) = X^{1/(γ0−1)}. I present results for 5,000 simulations of increasingly large samples. I estimate both the propensity and outcome regressions with five-fold cross-fitting. I use shrinkage cubic splines and REML estimation, as implemented by the mgcv package in R. In this setting, Theorem 2 establishes that kernel regression can achieve a pointwise rate of n^{−1/(3+1/(γ0−1))}. I conjecture that rµ,n ≪ n^{−1/5}, which is feasible if γ0 > 1.5, and choose the clipping threshold bn based on Algorithm 1.

Figure 2: Histograms of point estimates in simulations for the various methods considered. Vertical dotted and solid lines indicate the true causal effect and the median estimate, respectively. Clipped estimators achieve much better performance than unthresholded estimators, and clipped AIPW's debiasing property is also apparent. Panel RMSEs at n = 100,000: unadjusted IPW 2.32e+15, unadjusted AIPW 1.33e+15, clipped IPW 0.0136, clipped AIPW 0.00935; at n = 10,000: clipped IPW 0.0332, clipped AIPW 0.0277; at n = 1,000: clipped IPW 0.0835, clipped AIPW 0.08.

I begin by summarizing point estimates in Figure 2.
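The propensity leg of this design can be sketched directly. The snippet below simulates the design with my own variable names, omitting the final demeaning by κE[D(1 − e(X))], and checks the implied tail P(e(X) ≤ π) = π^{γ0−1}.

```python
import math
import random

random.seed(0)
gamma0 = 1.8          # tail index from the simulation design
kappa = 2
n = 100_000

# X ~ Unif(0,1) with e(X) = X**(1/(gamma0-1)) delivers the target
# propensity tail P(e(X) <= pi) = pi**(gamma0-1).
x = [random.random() for _ in range(n)]
e = [xi ** (1 / (gamma0 - 1)) for xi in x]
d = [1 if random.random() < ei else 0 for ei in e]

# chi-squared(4) noise; (eps - 4)/sqrt(8) has zero mean and unit variance.
eps = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(4)) for _ in range(n)]
dy = [di * (kappa * (1 - ei) + (ep - 4) / math.sqrt(8))
      for di, ei, ep in zip(d, e, eps)]

pi = 0.1
print(sum(ei <= pi for ei in e) / n, pi ** (gamma0 - 1))  # should be close
```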
The unthresholded estimators are approximately median-unbiased, but possess sufficiently heavy inverse propensity tails that the mean performance degrades with increasing sample size. The clipped estimators perform much better, but the clipped IPW estimator exhibits its known first-order bias. The clipped AIPW estimator exhibits less bias than the clipped IPW estimator, and has slightly better performance in terms of mean squared error.

Figure 3: Histograms of simulated t-statistics on the true null hypothesis for various sample sizes. Vertical solid and dotted lines indicate the mean t-statistic and the target mean t-statistic of zero, respectively. The dashed line corresponds to the calibrated Gaussian density targeted by the Shapiro-Wilk test for normality. Shapiro-Wilk p-values for clipped IPW/AIPW are 3.83e−26/1.5e−09 at n = 1,000, 9.87e−24/2.81e−11 at n = 10,000, and 1.16e−14/7.07e−06 at n = 100,000.

I find in Figure 3 that the clipped AIPW estimator's t-statistics are reasonably well-calibrated. The plot presents t-statistics on the true average potential outcome. The t-statistics of unthresholded IPW and AIPW estimators are visibly non-Gaussian, and often exhibit a multimodal distribution. This poor performance is unsurprising: unthresholded estimators are known to fail to be asymptotically normal in this setting.
Both thresholded estimators are known to be asymptotically normal in
this setting when the propensity score is known, and both the asymptotic normality and the clipped IPW estimator's first-order bias are visible to the naked eye, although the clipped IPW estimator also exhibits visible skew in small samples. I test for t-statistic normality using a Shapiro-Wilk test. The test rejects normality for both clipped estimators. Still, the clipped AIPW estimator's violations are less severe by this criterion. I find in Figure 4 that the clipped AIPW estimator's p-values are well-calibrated in large samples. I use Wald confidence intervals to calculate two-sided p-values on the null of the true average potential outcome. If Wald confidence intervals are well-calibrated, then the simulated p-values on the true average potential outcome will be exactly uniformly distributed. The unthresholded IPW and AIPW estimators exhibit known poor performance.

Figure 4: Histograms of simulation p-values on the null hypothesis of the true average potential outcome for various sample sizes. Dotted lines correspond to the target Uniform(0, 1) density. P-values in panel labels correspond to Kolmogorov-Smirnov tests against the Uniform(0, 1) distribution; for clipped IPW/AIPW they are 1.26e−30/4.53e−20 at n = 1,000, 8.73e−28/2.57e−06 at n = 10,000, and 1.52e−86/0.692 at n = 100,000.
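The calibration check itself is simple to reproduce. The sketch below (with my own helper names) converts t-statistics to two-sided Wald p-values and measures their distance from Uniform(0, 1) with a Kolmogorov-Smirnov statistic; with standard normal t-statistics, the distance is small.

```python
import math
import random

def two_sided_p(t):
    # Two-sided p-value against a standard normal reference distribution,
    # as implied by a Wald confidence interval.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))

def ks_distance_uniform(pvals):
    # Kolmogorov-Smirnov distance between the empirical CDF of the
    # p-values and the Uniform(0,1) CDF.
    s = sorted(pvals)
    n = len(s)
    return max(max((i + 1) / n - v, v - i / n) for i, v in enumerate(s))

# Under exact calibration, t-statistics are standard normal, so the
# two-sided p-values should be approximately Uniform(0,1).
random.seed(1)
t_stats = [random.gauss(0.0, 1.0) for _ in range(5000)]
pvals = [two_sided_p(t) for t in t_stats]
print(ks_distance_uniform(pvals))
```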
The clipped IPW estimator exhibits overrejection even with large samples, as even oracle clipped IPW would provide well-calibrated inference for a biased estimand. The clipped AIPW estimator also overrejects in small samples, but the bias is less severe: with 1,000 observations, clipped IPW rejects the true null in 12.0% of simulations, while clipped AIPW rejects in 8.8% of simulations. As the sample size increases, the asymptotic calibration of Corollary 1 becomes apparent. With 100,000 observations, clipped IPW rejects the true null hypothesis in 12.8% of simulations, while clipped AIPW rejects in 5.3% of simulations. The Kolmogorov-Smirnov p-value on exact calibration of the two-sided test statistics for clipped AIPW with 100,000 observations is 0.692. This is a remarkable result: despite the known extreme difficulty of statistical inference in this setting, 5,000 simulated draws are insufficient to detect a meaningful failure of Wald confidence intervals based on the clipped AIPW estimator. In moderate samples, clipped AIPW can undercover due to the difficulty of outcome regression in this setting. Figure 5 presents an example with 1,000 observations.

Figure 5: Simulated treated observations for one simulation of 1,000 observations. It is rare to see treated observations with small X, which corresponds to small values of e(X) = X^{1/(γ0−1)}. As a result, such observations can have high leverage when predicting E[Y | X = 0, D = 1], and can lead to substantial errors between the true (dashed) and predicted (solid) regression lines.

It is rare to have treated observations with
small values of e(X). As a result, when such observations are treated, a small number of observations can receive substantial leverage in outcome regression, and the predictions of E[Y | X = 0, D = 1] can be driven by a small number of observations. In Appendix B (Figures 9 through 11), I conduct the same experiments, but with the estimated outcome regression function replaced by the true outcome regression function. The root-mean-squared error and failures of normality are comparable, suggesting these non-inferential patterns are driven by propensity estimation and clipping. However, the two-sided p-values exhibit better performance in small samples, and if anything slightly underreject with 100,000 observations. In Appendix B (Figures 12 through 14), I show that these conclusions would largely carry through if clipping were replaced by trimming. The notable differences are that trimmed AIPW exhibits slightly better estimation performance in small samples, while if anything trimmed IPW is slightly worse; trimmed t-statistics exhibit less severe violations of normality; and p-values based on trimmed propensities exhibit more severe undercoverage for both IPW and AIPW.

5.2 Application to Right Heart Catheterization

I apply the clipped AIPW estimator to study the effect of right heart catheterization (RHC) on survival. This dataset was first analyzed by Connors et al. (1996), and is a common benchmark in the weak overlap literature (Crump et al., 2009; Armstrong and Kolesár, 2017).

Figure 6: Histogram of estimated propensity scores for treated (left) and control (right) observations in the right heart catheterization data. The plot is designed to parallel Figure 1 in Crump et al. (2009). Slight differences reflect the use of cross-fitting.

I analyze a version of the dataset from Armstrong and Kolesár (2017).
The dataset comprises 5,735 adult patients, and the treatment D corresponds to receiving RHC within 24 hours of admission. The target causal effect is the average treatment effect of RHC on 30-day survival. The data include 52 covariates X (72 covariates if counting factor levels separately). I estimate the nuisance functions e(X) and µ(X) using five-fold cross-fitting. I estimate nuisance functions with logistic regression to align with Crump et al. (2009)'s empirical application. I estimate standard errors by bootstrapping the procedure, keeping fold assignments fixed across bootstrap draws to minimize the risk of overfitting. Crump et al. propose a weak overlap rule of thumb that estimates the treatment effect for the subpopulation with propensity scores between 10% and 90%. This rule-of-thumb trimming rule is chosen to approximately minimize asymptotic variance. This strategy ensures asymptotic normality, but changes the target estimand even asymptotically. By comparison, the clipped and trimmed AIPW estimators I analyze have thresholds bn that tend to zero asymptotically. As a result, the estimators proposed here are able to target the full-population average treatment effect, potentially at the cost of increased variance. I compare these procedures to the 10% rule and other fixed trimming rules using the same nuisance estimates. I present the distribution of estimated propensity scores for treated and control units in Figure 6. The figure is an analog of Crump et al. (2009)'s Figure 1.
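For concreteness, here is a minimal sketch of a clipped AIPW point estimate with a plug-in standard error. The function name and the plug-in variance are mine; the application additionally uses five-fold cross-fitted nuisances and bootstrapped standard errors with fixed folds, which this sketch does not reproduce.

```python
import math

def clipped_aipw(y, d, mu_hat, e_hat, b_n):
    # Clipped AIPW score for an average potential outcome: the inverse
    # propensity is evaluated at max(e_hat, b_n) rather than dropping
    # low-propensity observations (the trimmed variant would discard them).
    scores = [mi + di * (yi - mi) / max(ei, b_n)
              for yi, di, mi, ei in zip(y, d, mu_hat, e_hat)]
    n = len(scores)
    psi = sum(scores) / n
    var = sum((s - psi) ** 2 for s in scores) / n
    return psi, math.sqrt(var / n)  # estimate and plug-in standard error
```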
https://arxiv.org/abs/2504.13273v2
There is a meaningful density of units with estimated propensities near zero, suggesting weak overlap. This pattern is similar to the findings of Crump et al., although there are slight differences, presumably due to my use of cross-fitting. I compare AIPW estimators for various trimmed subsamples to the clipped AIPW estimator. I choose the clipping threshold bn through the no-specified-upper-bound version of Algorithm 1 because I estimate both nuisance functions parametrically.

Figure 7: The value of error_bound(b) from Algorithm 1 for e(x) and 1 − e(x) thresholding with no specified nuisance upper bounds, which corresponds to an error bound implied by rµ,n, re,n ≪ bn. The procedure chooses bn to set this function equal to n^{−1/2}, indicated by the horizontal dashed line. The more favorable distribution of estimated treatment propensities allows for a more aggressive clipping threshold.

I plot the functions used in choosing bn in Figure 7. The estimated lower clipping threshold is 0.068 and affects 10.5% of observations. The Crump et al. 10% rule of thumb would exclude the 16.3% of observations below it. The estimated upper clipping threshold is 0.09 below one: there are few observations with large estimated propensities, so the rule of thumb concludes there is no need to trim observations with large estimated propensities. This upper threshold affects 1.4% of observations, comparable to the 1.8% of observations excluded above by the 10% rule of thumb. I present estimated effects and confidence intervals for various potential fixed trimming rules in Figure 8. The 10% trimming rule yields an estimated reduction in survival rates of 5.79 percentage points among the trimmed sample, with an estimated 95% Wald confidence interval of [-9.14, -2.43].
Other trimming rules would yield larger confidence intervals, as expected, because the 10% rule is chosen to roughly minimize asymptotic variance over target populations. I compare the fixed-trimmed-sample AIPW estimates to a clipped AIPW estimator that targets the full-population treatment effect. The estimated harm increases to -6.07 percentage points, a change of 0.168 standard errors under the 10% rule-of-thumb estimator. The clipped AIPW confidence interval of [-9.6, -2.55] is 5.14% wider than the 10% trimmed-sample interval. The clipped AIPW point estimates are similar to the point estimates under a 1% or 5% trimming rule, but the associated confidence interval is narrower under the full-population estimator. Part of the added width is driven by inverse propensities among clipped observations: if I used a trimmed, rather than clipped, AIPW estimator, the estimated effect would move by 0.256 standard errors, and the standard error would only increase by 0.54%. However, the simulation results of Section 5.1 suggest that trimmed AIPW may slightly undercover.

Figure 8: Estimated effects (solid line) and 95% confidence intervals (shaded region) for AIPW applied to various trimmed subsamples. The estimate and confidence interval for clipped AIPW are represented by the dashed and dotted horizontal lines, respectively. Clipped AIPW produces estimates and standard errors similar to those of the trimming procedures while targeting a more interpretable estimand. A threshold of zero is omitted from the graph because the resulting confidence interval of [-568.8, 581.3] would make the graph difficult to read.

Taken together, these results illustrate that under weak overlap, targeting the causal effect within the full population need not come at a large precision cost. In this application, clipped AIPW with a rule-of-thumb clipping rate yields similar estimates to estimators that target a fixed trimmed sample, while targeting a population that is often more relevant and adding only a small precision cost.

6 Conclusion

This work shows that standard Wald confidence intervals for clipped AIPW can achieve target coverage for standard causal effects under plausible conditions. I provide sufficient conditions on nuisance regression rates for clipped (or trimmed) AIPW to be uniformly valid over distributions with even very weak overlap. I use these theoretical results to derive new rules of thumb for choosing a threshold. I find that Wald confidence intervals perform well in simulations, especially in large samples, and can achieve comparable precision to a fixed 10% trimming rule in practice. These results can be extended in many interesting directions. This work exploits Neyman orthogonality to achieve standard statistical inference in the presence of a small region of irregular identification. Sasaki and Ura (2022) and Ma et al. (2023) propose estimators for ratio estimands beyond IPW; the arguments here are likely to extend to their more general framework. Issues of weak overlap hold for inverse propensity and other importance sampling estimators in settings like difference-in-differences estimation (Callaway and Sant'Anna, 2021) or statistical inference for parameters that are identified at infinity (Andrews and Schafgans, 1998; Khan and Nekipelov, 2024); the results and rules of thumb here can likely be adapted to those settings.
Semenova (2024) applies thresholding strategies to intersection bounds, where at a high level a margin condition plays the role of the minimal overlap bound here. Perhaps similar ideas could apply to other forms of irregular identification. The regression analysis in Section 3.3 may also extend to estimating the effects of continuous treatments. The results here suggest that thresholded AIPW is a viable alternative to fixed-trimming rules. I provide rules of thumb that enable practitioners to easily report results that target the population average effect. When, as in my empirical application, the fixed-trimming and sequence-of-thresholds approaches yield similar causal conclusions, then there is strong evidence that causal conclusions are driven by causal effects, and not by how the researcher treats observations with extreme propensity scores.

References

Andrews, D. W. K. and Schafgans, M. M. A. (1998). Semiparametric estimation of the intercept of a sample selection model. The Review of Economic Studies, 65(3):497–517.

Armstrong, T. B. and Kolesár, M. (2017). A simple adjustment for bandwidth snooping. The Review of Economic Studies, 85(2):732–765.

Armstrong, T. B. and Kolesár, M. (2021). Finite-sample optimal estimation and inference on average treatment effects under unconfoundedness. Econometrica, 89(3):1141–1177.

Bailey, M. J. and Goodman-Bacon, A. (2015). The war on poverty's experiment in public medicine: Community health centers and
the mortality of older Americans. American Economic Review, 105(3):1067–1104.

Bruns-Smith, D., Dukes, O., Feller, A., and Ogburn, E. L. (2024). Augmented balancing weights as linear regression.

Callaway, B. and Sant'Anna, P. H. (2021). Difference-in-differences with multiple time periods. Journal of Econometrics, 225(2):200–230. Themed Issue: Treatment Effect 1.

Chaudhuri, S. and Hill, J. B. (2024). Heavy tail robust estimation and inference for average treatment effects. Econometric Reviews.

Chen, X., Hong, H., and Tarozzi, A. (2008). Semiparametric efficiency in GMM models of nonclassical measurement errors, missing data and treatment effects. Technical Report 1644, Cowles Foundation.

Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters. Econometrics Journal, 21(1):C1–C68.

Connors, A. F., Jr., Speroff, T., Dawson, N. V., Thomas, C., Harrell, F. E., Jr., Wagner, D., Desbiens, N., Goldman, L., Wu, A. W., Califf, R. M., Fulkerson, W. J., Jr., Vidaillet, H., Broste, S., Bellamy, P., Lynn, J., and Knaus, W. A. (1996). The effectiveness of right heart catheterization in the initial care of critically ill patients. JAMA, 276(11):889–897.

Crump, R. K., Hotz, V. J., Imbens, G. W., and Mitnik, O. A. (2009). Dealing with limited overlap in estimation of average treatment effects. Biometrika, 96(1):187–199.

Currie, J. and Walker, R. (2011). Traffic congestion and infant health: Evidence from E-ZPass. American Economic Journal: Applied Economics, 3(1):65–90.

D'Amour, A., Ding, P., Feller, A., Lei, L., and Sekhon, J. (2021). Overlap in observational studies with high-dimensional covariates. Journal of Econometrics, 221(2):644–654.

Galiani, S., Gertler, P., and Schargrodsky, E. (2005). Water for life: The impact of the privatization of water services on child mortality. Journal of Political Economy, 113(1):83–120.

Gaïffas, S.
(2005). Convergence rates for pointwise curve estimation with a degenerate design. Mathematical Methods of Statistics, 14(1).

Goldsmith-Pinkham, P., Hull, P., and Kolesár, M. (2024). Contamination bias in linear regressions. American Economic Review, 114(12):4015–51.

Hahn, J. (1998). On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica, 66(2):315–331.

Hall, P., Marron, J. S., Neumann, M. H., and Titterington, D. M. (1997). Curve estimation when the design density is low. The Annals of Statistics, 25(2):756–770.

Heiler, P. and Kazak, E. (2021). Valid inference for treatment effect parameters under irregular identification and many extreme propensity scores. Journal of Econometrics, 222(2):1083–1108.

Horn, R. A. and Johnson, C. R. (2013). Matrix Analysis: Second Edition. Cambridge University Press.

Imbens, G. W. (2004). Nonparametric estimation of average treatment effects under exogeneity: A review. Review of Economics and Statistics, 86(1):4–29.

Ionides, E. L. (2008). Truncated importance sampling. Journal of Computational and Graphical Statistics, 17(2):295–311.

Khan, S. and Nekipelov, D. (2024). On uniform inference in nonlinear models with endogeneity. Journal of Econometrics, 240(2):105261.

Khan, S. and Tamer, E. (2010). Irregular identification, support conditions, and inverse weight estimation. Econometrica, 78(6):2021–2042.

Khan, S. and Ugander, J. (2022). Doubly-robust and heteroscedasticity-aware sample trimming for causal inference.

Lee, B. K., Lessler, J., and Stuart, E. A.
(2011). Weight trimming and propensity score weighting. PLoS ONE, 6(3).

Lei, L., D'Amour, A., Ding, P., Feller, A., and Sekhon, J. (2021). Distribution-free assessment of population overlap in observational studies.

Li, F., Morgan, K. L., and Zaslavsky, A. M. (2018). Balancing covariates via propensity score weighting. Journal of the American Statistical Association, 113(521):390–400.

Ma, X., Sasaki, Y., and Wang, Y. (2024). Testing limited overlap. Econometric Theory.

Ma, X. and Wang, J. (2020). Robust inference using inverse probability weighting. Journal of the American Statistical Association, 115(532):1851–1860.

Ma, Y., Sant'Anna, P. H. C., Sasaki, Y., and Ura, T. (2023). Doubly robust estimators with weak overlap.

Mou, W., Ding, P., Wainwright, M. J., and Bartlett, P. L. (2023). Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency.

Newey, W. K. (1994). The asymptotic variance of semiparametric estimators. Econometrica, 62(6):1349–1382.

Rothe, C. (2017). Robust confidence intervals for average treatment effects under limited overlap. Econometrica, 85(2):645–660.

Sasaki, Y. and Ura, T. (2022). Estimation and inference for moments of ratios with robustness against large trimming bias. Econometric Theory, 38(1):66–112.

Semenova, V. (2024). Aggregated intersection bounds and aggregated minimax values.

Stone, C. J. (1982). Optimal global rates of convergence for nonparametric estimators. The Annals of Statistics, 10(4):1040–1053.

Tsybakov, A. B. (2009). Introduction to Nonparametric Estimation. Springer.

Yang, S. and Ding, P. (2018). Asymptotic inference of causal effects with observational studies trimmed by the estimated propensity scores. Biometrika, 105(2):487–493.

A Key Technical Assumptions and Claims

I make use of the following assumptions.

Assumption 3'.
Assumption 2 holds, with the following rates on the regression error rµ,n and the propensity error re,n for any sequence of P(n) ∈ P:

(a) Consistency. rµ,n, re,n → 0.

(b) Product of errors. rµ,n re,n √(E_{P(n)}[D / max{e(X), bn}²]) ≪ n^{−1/2}.

(c) Regression error near singularities. rµ,n P(n)(e(X) ≤ bn) √(E_{P(n)}[D / max{e(X), bn}²]) ≪ n^{−1/2}.

(d) Asymptotically known thresholding. re,n ≪ bn.

These conditions adapt to the distributions in the sequence P(n). The conditions of Assumption 3' are weaker than the more interpretable conditions in the main text.

Corollary 6 (Sufficiency of Assumption 3'). Suppose Assumptions 3, 4, and Assumption 4(i) hold and let ρ > 0 be given. Then Assumption 3' holds.

I will show that the feasible clipped estimator ψ̂AIPW_clip(bn) is first-order equivalent to the oracle clipped estimator ψ̃AIPW(Oracle)(bn). The oracle clipped AIPW estimator is asymptotically normal by the trimmed IPW arguments in Ma and Wang (2020). By construction, the oracle clipped AIPW estimator is finite-sample unbiased. The following asymptotic normality follows as a result.

Proposition 4 (Oracle asymptotic normality). Suppose bn satisfies n^{−1/2} ≪ bn ≪ 1. Then the oracle clipped AIPW estimator has uniform convergence to a normal distribution in the sense that

lim sup_{n→∞} sup_{P∈P} sup_{t∈R} | P( (ψ̃AIPW(Oracle)(bn) − ψ(P)) / σn ≤ t ) − Φ(t) | = 0.

Proposition 4 will be an extension of the following claim. In addition to this modified theorem, Theorem 1 replaces the oracle standard deviation σn with the estimated standard deviation σ̂n when constructing t-statistics.

Theorem 1' ((Slow) Asymptotic Normality). Suppose the conditions of Theorem 1 hold, and P(n) is a sequence of distributions P ∈ P. Then

σn^{−1}( ψ̂AIPW_clip(bn) − ψn ) ⇝ N(0, 1) under P(n),

where σn is the oracle
standard deviation defined in Proposition 4.

Next, I describe the key new results for nonparametric regression. This result shows that in nonparametric regression, if the propensity function is sufficiently smooth, then nature cannot severely concentrate treated observations within a given bandwidth of any point. The non-concentration ensures that the eigenvalues of the local polynomial regression matrix are nondegenerate.

Proposition 5 (Non-trivial concentration). Suppose Assumption 6 holds and there are L > 0 and βe > d/(γ0 − 1) such that P(D = 1 | X) ∈ Σ(βe, L) for all P ∈ P. Then Assumption 7 holds.

Note that d/(γ0 − 1) is also a key parameter in Mou et al. (2023). A broader connection is outside the scope of this work. The next result presents the main construction involved in avoiding a polylogarithmic penalty under weak overlap. The claim iteratively constructs a sequence of minimal bandwidths and gridpoint counts that ensure that no more than m_n^{(k)} gridpoints x_{n,j} can have n E[D 1{∥X − x_{n,j}∥ ≤ h}] = h^{−2βµ} solved by some h ≤ h_n^{(k)}, and that each k can contribute at most half as much as the previous iteration to the polylogarithmic penalty. Here, I show that if the first set is expansive enough to incur what I later show is a polylogarithmic penalty bounded by log(δ̄), then if δ̄ is at least some large, but finite, number, this iterative process eventually covers all gridpoints.

Proposition 6 (Inductive grouping). Let βµ > 0, γ0 > 1, and d ≥ 1 be given. For any δ > 1, inductively construct sequences of h and m as follows. Take

h_n^{(1)} = n^{−1/(2βµ + dγ0/(γ0−1))}.

Then for all k = 1, 2, . . ., inductively define

m_n^{(k)} = exp( 2^{−k} δ (h_n^{(k)} / h_n^{(1)})^{−2βµ} ) and h_n^{(k+1)} = ( n m_n^{(k)} δ^{γ0−1} )^{−1/(2βµ + dγ0/(γ0−1))}.

Then there is a δ̄ > 0 and a π ∈ (0, 1/4^{2βµ}] such that if δ ≥ δ̄, then Σ_{k=1}^∞ m_n^{(k)} = ∞, and for all k > 1, h_n^{(k)} ≤ h_n^{(1)} π^{k−1}.

B Other Simulation Evidence

In this section, I present simulated evidence for estimators that use the oracle outcome regression and for trimmed estimators.
Figure 9: Histograms of point estimates in simulations for the various methods considered, but using the oracle µ regression function instead of the estimated µ̂ regression function.
|
https://arxiv.org/abs/2504.13273v2
|
p = 2.43e−08 −4−202 −4−202 −6−4−202 −2.5 0.02.5−4−202 −202 −6−4−202 −4−202−4−202−4−202 −4−202−4−2020.00.10.20.30.4 0.00.10.20.30.4 0.00.10.20.30.40.00.10.20.30.4 0.00.10.20.30.4 0.00.10.20.30.40.00.20.40.6 0.00.30.60.9 0.000.250.500.751.000.00.20.40.60.8 0.00.51.0 0.00.51.01.5 T StatisticDensity Figure 10: Histograms of simulation t-statistics for various sample sizes, but using the oracle µregression function instead of the estimated ˆ µregression function. 35 IPW (Unadj., n=100,000) KS p = 0AIPW (Unadj., n=100,000) KS p = 0IPW (Clip, n=100,000) KS p = 1.52e−86AIPW (Clip, n=100,000) KS p = 0.00742IPW (Unadj., n=10,000) KS p = 9.12e−315AIPW (Unadj., n=10,000) KS p = 4.77e−301IPW (Clip, n=10,000) KS p = 8.73e−28AIPW (Clip, n=10,000) KS p = 0.748IPW (Unadj., n=1,000) KS p = 1.28e−94AIPW (Unadj., n=1,000) KS p = 6.16e−95IPW (Clip, n=1,000) KS p = 1.26e−30AIPW (Clip, n=1,000) KS p = 0.0706 0.000.250.500.751.000.000.250.500.751.00 0.000.250.500.751.00 0.000.250.500.751.000.000.250.500.751.000.000.250.500.751.00 0.000.250.500.751.00 0.000.250.500.751.000.000.250.500.751.000.000.250.500.751.00 0.000.250.500.751.00 0.000.250.500.751.000.00.30.60.9 0.00.30.60.9 0.00.30.60.91.20.00.51.01.52.0 0.00.51.01.52.0 0.00.51.01.52.02.501234 0246 024601234 0246 0246 T StatisticDensityFigure 11: Histograms of simulation p-values on null hypothesis of true average potential outcome for various sample sizes, but using the oracle µregression function instead of the estimated ˆ µregression function. 
IPW (Unadj., n=100,000) RMSE = 2.32e+15AIPW (Unadj., n=100,000) RMSE = 1.33e+15IPW (Trim, n=100,000) RMSE = 0.0217AIPW (Trim, n=100,000) RMSE = 0.00876IPW (Unadj., n=10,000) RMSE = 1.15e+12AIPW (Unadj., n=10,000) RMSE = 7.68e+11IPW (Trim, n=10,000) RMSE = 0.044AIPW (Trim, n=10,000) RMSE = 0.0264IPW (Unadj., n=1,000) RMSE = 6.04e+08AIPW (Unadj., n=1,000) RMSE = 4.55e+08IPW (Trim, n=1,000) RMSE = 0.0938AIPW (Trim, n=1,000) RMSE = 0.0763 0e+00 1e+165e+15 −5e+15 5e+150e+00 −0.04 0.02 −0.020.00 −0.02 0.020.000e+00 8e+12 4e+12 −2e+12 6e+12 2e+12 −0.10 0.05 −0.050.00 −0.10 0.10 0.00−0.05 0.050e+00 8e+09 4e+09 −2e+09 6e+09 2e+09 −0.3 0.2 −0.1−0.2 0.00.1 −0.2 0.20.0 0 2 4 0 51015 0102030400246 0 51015 0102030400e+001e−092e−09 0.0e+005.0e−131.0e−121.5e−122.0e−12 −3.2e−17 1.4e−16 3.2e−16 4.9e−16 6.6e−160e+001e−092e−093e−09 0.0e+005.0e−131.0e−121.5e−122.0e−12 −3.0e−17 1.4e−16 3.0e−16 4.7e−16 6.4e−16 EstimateDensity Figure 12: Histograms of point estimates in simulations for the various methods considered in the simulations, but with trimming instead of clipping. 36 IPW (Unadj., n=100,000) SW p = 1.66e−60AIPW (Unadj., n=100,000) SW p = 2.21e−45IPW (Trim, n=100,000) SW p = 2.04e−10AIPW (Trim, n=100,000) SW p = 3.17e−05IPW (Unadj., n=10,000) SW p = 3.49e−53AIPW (Unadj., n=10,000) SW p = 1.68e−39IPW (Trim, n=10,000) SW p = 5.16e−16AIPW (Trim, n=10,000) SW p = 4.29e−07IPW (Unadj., n=1,000) SW p = 7.48e−42AIPW (Unadj., n=1,000) SW p = 2.1e−22IPW (Trim, n=1,000) SW p = 7.88e−22AIPW (Trim, n=1,000) SW p = 5.38e−08 −4−202 −4−202 −5.0−2.50.0 −5.0−2.50.02.5−4−202 −4−202 −5.0−2.50.0 −6−4−2024−4−202−5.0−2.50.02.5 −6−4−202−5.0−2.50.02.50.00.10.20.3 0.00.10.20.3 0.00.10.20.30.40.00.10.20.3 0.00.10.20.3 0.00.10.20.30.40.00.20.40.6 0.000.250.500.751.00 0.000.250.500.751.000.00.20.40.60.8 0.00.51.0 0.00.51.01.5 T StatisticDensityFigure 13: Histograms of simulation t-statistics for various sample sizes, but with trimming instead of clipping. 
IPW (Unadj., n=100,000) KS p = 0AIPW (Unadj., n=100,000) KS p = 0IPW (Trim, n=100,000) KS p = 0AIPW (Trim, n=100,000) KS p = 0.00115IPW (Unadj., n=10,000) KS p = 9.12e−315AIPW (Unadj., n=10,000) KS p = 2.4e−314IPW (Trim, n=10,000) KS p = 0AIPW (Trim, n=10,000) KS p = 7.25e−21IPW (Unadj., n=1,000) KS p = 1.28e−94AIPW (Unadj., n=1,000) KS p = 1.6e−125IPW (Trim, n=1,000) KS p = 8.98e−214AIPW (Trim, n=1,000) KS p = 1.34e−37 0.000.250.500.751.000.000.250.500.751.00 0.000.250.500.751.000.000.250.500.751.000.000.250.500.751.000.000.250.500.751.00 0.000.250.500.751.000.000.250.500.751.000.000.250.500.751.000.000.250.500.751.00 0.000.250.500.751.00 0.000.250.500.751.000.00.51.01.52.0 0.00.51.01.5 0.00.51.001234 0246 0.02.55.07.510.001234 0246 024601234 0246 0246 T StatisticDensity Figure 14: Histograms of simulation p-values on null hypothesis of true average potential outcome
|
https://arxiv.org/abs/2504.13273v2
|
for various sample sizes, but with trimming instead of clipping.

C Proofs

The proofs, including proofs of the claims in Appendix A, are split into sections showing asymptotic properties of oracle clipped AIPW (Appendix C.1), consistency of estimated clipped AIPW (Appendix C.2), and black-box consistency rates (Appendix C.3). Note that Appendix C.3 is out of order from the perspective of the main text and corresponds to claims in Section 3.1, but Proposition 3 is used to show claims that appear earlier in the main text. Appendix C.4 presents proofs for the main claims: the black-box asymptotic properties of estimated clipped AIPW. Finally, Appendix C.5, Appendix C.6, and Appendix C.7 prove claims about black-box nuisance rates, outcome regression rates, and rules of thumb, respectively.

Additional Notation. In these proofs, I use $P(n)$ to refer to an arbitrary sequence of distributions for the purposes of computing suprema; for such sequences, I use $\psi_n = \psi(P(n))$ to denote the sequence of average potential outcomes. I use $\mathbb{P}_n[c_n]$ to refer to the average of $c_n$ over $n$ draws from $P$ (sometimes abusing notation and including nuisance functions in $c_n$), and I use $P[c_n]$ to refer to the expectation of $c_n$ over $P$. This can occasionally lead to unfortunate notation like $P(n)_n(E_n)$ for a sequence of event probabilities under a sequence of distributions. I write $\lim_{x \to z^+} f(x)$ and $\lim_{x \to z^-} f(x)$ for the right- and left-hand limits of $f$ at $z$. I write $c_n = o_{P(n)}(d_n)$ if for all $\delta > 0$, $P(n)(|c_n|/d_n > \delta) \to 0$; if no $P(n)$ is defined, I use $c_n = o_P(d_n)$ to mean that for any sequence of $P(n) \subset \mathcal{P}$, $c_n = o_{P(n)}(d_n)$. I write $c_n = O_{P(n)}(d_n)$ if for all $\epsilon > 0$, there exists a $\delta > 0$ such that $P(n)(|c_n|/d_n > \delta) < \epsilon$ for all large $n$. If there is a sequence of distributions to be considered, then I use $o(d_n)$ and $O(d_n)$ to implicitly refer to $o_{P(n)}(d_n)$ and $O_{P(n)}(d_n)$.
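To fix ideas before the proofs, the two estimators compared in the simulations above can be sketched as follows. This is a toy implementation with oracle nuisances; the function names, the toy design, and the specific trimming rule (zeroing the IPW correction below the threshold, one common variant) are illustrative assumptions rather than the paper's exact code.

```python
import numpy as np

rng = np.random.default_rng(0)

def clipped_aipw(y, d, mu, e, b):
    # phi(Z | b, eta) = mu(X) + D (Y - mu(X)) / max{e(X), b}
    phi = mu + d * (y - mu) / np.maximum(e, b)
    return phi.mean(), phi.std(ddof=1) / np.sqrt(len(y))

def trimmed_aipw(y, d, mu, e, b):
    # One common trimming variant: drop the IPW correction where e(X) < b.
    keep = e >= b
    phi = mu + keep * d * (y - mu) / np.where(keep, e, 1.0)
    return phi.mean(), phi.std(ddof=1) / np.sqrt(len(y))

# Weak-overlap toy design: e(X) = X with X ~ U(0,1), so P(e <= t) = t.
n = 100_000
x = rng.uniform(size=n)
e = x
d = rng.uniform(size=n) < e          # D | X ~ Bernoulli(e(X))
mu = x                               # oracle regression mu(X) = E[Y | X, D=1]
y = mu + rng.normal(size=n)          # only enters the estimator where D = 1
b = n ** (-1 / 3)                    # illustrative clipping/trimming threshold

psi_clip, se_clip = clipped_aipw(y, d, mu, e, b)
psi_trim, se_trim = trimmed_aipw(y, d, mu, e, b)
# True average potential outcome is E[mu(X)] = 0.5; both land nearby.
```

With oracle nuisances both versions are unbiased in this design, so the comparison in the simulations isolates how the two threshold rules behave once the nuisances must be estimated.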
I write that $c_n \overset{P(n)}{\rightsquigarrow} N(0,1)$ if $\sup_{t \in \mathbb{R}} |P(n)(c_n \le t) - \Phi(t)| \to 0$, where $\Phi$ is the standard normal cumulative distribution function; I write that $c_n \to_{P(n)} c$ if $c_n - c = o_{P(n)}(1)$; and I write that $c_n \overset{P}{\to} c$ if for all sequences of $P(n) \in \mathcal{P}$, $c_n \to_{P(n)} c$. I write $c_n = \Theta(d_n)$ if there exist $k_1, k_2 > 0$ such that $P(n)[c_n \in [k_1 d_n, k_2 d_n]] \to 1$, and I write $c_n = \Omega(d_n)$ if there exists a $k_1 > 0$ such that $P(n)[c_n \ge k_1 d_n] \to 1$.

C.1 Oracle Normality

Proof of Proposition 1. For (i), it is known that the efficiency bound is
\[
n^{-1}\left( Var(\mu(X)) + E\!\left[ \frac{Var(Y \mid X, D=1)}{e(X)} \right] \right)
\]
(Hahn, 1998). By Assumptions 1(a) and 1(b), $Var(\mu(X))$ and $Var(Y \mid X, D=1)$ are uniformly bounded above. It therefore only remains to show the standard condition (Newey, 1994; Hahn, 1998; Chen et al., 2008) that $\sup_{P \in \mathcal{P}} E_P[1/e(X)]$ is bounded above:
\begin{align*}
E_P[1/e(X)] &= \int_0^\infty P\!\left( \tfrac{1}{e(X)} > t \right) dt = \int_0^\infty P(e(X) < 1/t)\, dt \\
&= 1 + \int_1^\infty P(e(X) < 1/t)\, dt \\
&\le 1 + \int_1^\infty C (t^{-1})^{\gamma_0 - 1}\, dt && \text{(Lemma 2)} \\
&= 1 + C \int_1^\infty t^{1-\gamma_0}\, dt = 1 + \frac{C}{\gamma_0 - 2},
\end{align*}
which is finite.

(ii) holds by extension of Khan and Tamer (2010), Theorem 4.1. In particular, note that by Assumption 1(c), $Var(Y \mid X, D=1) \ge \sigma^2_{\min} > 0$, so it only remains to show that $E_P[1/e(X)]$ is infinite:
\[
E_P[1/e(X)] = 1 + \int_1^\infty P(e(X) < 1/t)\, dt \ge 1 + C' \int_1^\infty t^{1-\gamma_0}\, dt = \infty. \qquad \square
\]

Lemma 2. Define $\pi_{\min} = 1 - C/\gamma_0$. Then $\inf_{P \in \mathcal{P}} P(D=1) \ge \pi_{\min} > 0$.

Proof of Lemma 2. For any $P \in \mathcal{P}$,
\[
P[D=1] = E_P[e(X)] = \int_0^\infty P(e(X) > t)\, dt = 1 - \int_0^1 P(e(X) \le t)\, dt \ge 1 - \int_0^1 C t^{\gamma_0 - 1}\, dt = 1 - \frac{C}{\gamma_0}. \qquad \square
\]

Lemma 3. Assume $b_n \to 0$. Then for all large $n$, the following inequalities hold throughout $\mathcal{P}$:
(i) $P(e(X) > \pi_{\min}/2) \ge \pi_{\min}/2$
(ii) $E[e(X)/\{e(X) \vee b_n\}^2] \ge \pi_{\min}/2$
(iii) $E[|\phi_n - E_{P(n)}[\phi_n]|^q] \le (4M)^q\, E[e(X)/\{e(X) \vee b_n\}^2] / b_n^{q-2}$
(iv) $E[|\phi_n|^q] \le (8M)^q\, E[e(X)/\{e(X) \vee b_n\}^2] / b_n^{q-2}$

Proof of Lemma 3. I take these proofs one at a time.

(i) Start from the following inequalities:
\begin{align*}
\pi_{\min} \le E[e(X)] &= E[e(X) 1\{e(X) \le \pi_{\min}/2\}] + E[e(X) 1\{e(X) > \pi_{\min}/2\}] \\
&\le (\pi_{\min}/2)\big[1 - P(e(X) > \pi_{\min}/2)\big] + P(e(X) > \pi_{\min}/2) \\
&< \pi_{\min}/2 + P(e(X) > \pi_{\min}/2).
\end{align*}
Subtracting $\pi_{\min}/2$ from the far left- and right-hand sides of this inequality gives the desired conclusion.

(ii) If $b_n \le \pi_{\min}/2$ (which happens for all large $n$), then:
\[
E[e(X)/\{e(X) \vee b_n\}^2] \ge E[(1/e(X))\, 1\{e(X) \ge b_n\}] \ge P(e(X) \ge b_n) \ge P(e(X) \ge \pi_{\min}/2) \ge \pi_{\min}/2.
\]

(iii) By Jensen's inequality:
\begin{align*}
E[|\phi_n - E_{P(n)}[\phi_n]|^q] &\le 2^{q-1}\left( E[|\mu(X) - E_{P(n)}[\mu(X)]|^q] + E[|Y - \mu(X)|^q D/\{e(X) \vee b_n\}^q] \right) \\
&\le 2^{q-1}\left( 2^q E[|\mu(X)|^q] + E\big[ E[|Y - \mu(X)|^q \mid X, D=1]\, e(X)/\{e(X) \vee b_n\}^q \big] \right) \\
&\le 2^{q-1}\left( 2^q M^q + 2^q E\big[ E[|Y|^q \mid X, D=1]\, e(X)/\{e(X) \vee b_n\}^q \big] \right) \\
&\le 2^{q-1}\left( 2^q M^q + 2^q M^q E\!\left[ \frac{e(X)}{\{e(X) \vee b_n\}^2} \cdot \frac{1}{\{e(X) \vee b_n\}^{q-2}} \right] \right) \\
&\le 2^{q-1}\left( 2^q M^q + 2^q M^q E[e(X)/\{e(X) \vee b_n\}^2]/b_n^{q-2} \right).
\end{align*}
Since $E[e(X)/\{e(X) \vee b_n\}^2]/b_n^{q-2} \ge \pi_{\min}/2b_n^{q-2} \to \infty$ by Item (ii), I may further bound the above quantity by $2^{2q} M^q E[e(X)/\{e(X) \vee b_n\}^2]/b_n^{q-2}$ once $n$ is large enough.

(iv) By Jensen's inequality:
\begin{align*}
E[|\phi_n|^q] &= E[|\phi_n - E_{P(n)}[\phi_n] + E_{P(n)}[\phi_n]|^q] \\
&\le 2^{q-1}\left( E[|\phi_n - E_{P(n)}[\phi_n]|^q] + |E_{P(n)}[\phi_n]|^q \right) \\
&\le 2^{q-1}(4M)^q E[e(X)/\{e(X) \vee b_n\}^2]/b_n^{q-2} + 2^{q-1}|E_{P(n)}[\mu(X)]|^q && \text{(Item (iii))} \\
&\le 2^{q-1}(4M)^q E[e(X)/\{e(X) \vee b_n\}^2]/b_n^{q-2} + 2^{q-1} E\big[ E[|Y|^q \mid X, D=1] \big] && \text{(Jensen)} \\
&\le 2^{q-1}(4M)^q E[e(X)/\{e(X) \vee b_n\}^2]/b_n^{q-2} + 2^{q-1} M^q.
\end{align*}
As before, since $E[e(X)/\{e(X) \vee b_n\}^2]/b_n^{q-2} \to \infty$, the first term in the upper bound is eventually larger than the second and I may bound the whole expression by $(8M)^q E[e(X)/\{e(X) \vee b_n\}^2]/b_n^{q-2}$ once $n$ is large enough. $\square$

Lemma 4. Let $c(\gamma) = \frac{\gamma - 1}{\gamma} C^{-1/(\gamma-1)} > 0$. Then for any $P \in \mathcal{P}$, I have:
\[
E_P[e(X) 1\{e(X) \le b_n\}] \ge c(\gamma)\, P(e(X) \le b_n)^{\gamma/(\gamma-1)}. \tag{3}
\]
This lower bound is attained when $P(e(X) \le t) = t^{\gamma-1}$.

Proof of Lemma 4. Let $p = P(e(X) \le b_n)$. If $p = 0$, then the bound holds trivially, so I will assume throughout that $p > 0$.
Then I may write: EP[e(X)1{e(X)≤bn}] =Z∞ 0P(e(X)1{e(X)≤bn}> t)dt=Zbn 0P(t < e (X)≤bn)dt=Zbn 0p−P(e(X)≤t)dt ≥bnp−Zbn 0min{p, Ctγ−1}dt=bnp−C γp Cγ γ−1−bnp+pp C1 γ−1) =c(γ)pγ/(γ−1). This proves the lower bound. When P(e(X)≤t) =tγ−1, a direct calculation gives EP[e(X)1{e(X)≤bn}] = [(γ−1)/γ]bγ n= [(γ−1)/γ]P(e(X)≤bn)γ/(γ−1). Therefore, the lower bound is also sharp. Lemma 5. For any P∈P, V ar P(ϕ(Z|bn, η))≥σ2 minEP e(X)/max{e(X), bn}2 ≥σ2 min[c(γ)P(e(X)≤bn)γ/(γ−1)/b2 n+πmin/2] ≥σ2 minπmin/2>0. Proof of Lemma 5. For the first line: VarP(ϕn) =E[Var( ϕn|X)] + Var( E[ϕn|X]) =E[Var( ϕn|X)] =E E[|Y−µ(X)|2|X, D = 1]e(X) {e(X)∨bn}2 ≥σ2 minEe(X) {e(X)∨bn}2 . Since Definition 1 implies e(X)>0, Var P(ϕn)>0. For the second line, I assume nis so large that bn≤πmin/2. Then: E[e(X)/max{e(X), bn}2] =E[e(X)/b2 n1{e(X)≤bn}] +E[1/e(X)1{e(X)> bn}] ≥E[e(X)/b2 n1{e(X)≤bn}] +P(e(X)> bn) ≥E[e(X)/b2 n1{e(X)≤bn}] +P(e(X)> πmin) ≥E[e(X)/b2 n1{e(X)≤bn}] +πmin/2 (Lemma 3.(i)) ≥c(γ)(1/bn)2P(e(X)≤bn)γ/(γ−1)+πmin/2. (Lemma 4) The final line is immediate. Lemma 6.1 √ V ar(ϕ(Z|bn,η))≤1 √ σ2 minEP(n)[D/max{e(X),bn}2]. 41 Proof of Lemma 6. By Lemma 5, I have: σ−1 n≤n1/2/q σ2 minEP(n)[D/max{e(X), bn}2], where σ−1 n=n−1/2/p V ar P(n)(ϕn). Lemma 7. Define ˜ϕ(Z|b, P)≡ϕ(Z|b, η(P))−EP[µ(X)]forP∈P. Further define ρ(b, P)≡EP[|˜ϕ(Z| b, P)|3]andσ(b, P)≡q V ar P(˜ϕ(Z|b, P)). Then the following hold: 1.EP[˜ϕ(Z|b, P)] = 0 2.σ(b, P)>0 3.ρ(b, P)<∞(though it may be arbitrarily large) 4. If bnbe a sequence of positive real numbers such that n−1/2≪bnandP(n)be a sequence of distributions inP, thenρ(bn,P(n)) σ(bn,P(n))3√n=o(1). Proof of Lemma 7. EP[˜ϕ(Z|bn, P)] = 0 is immediate. V ar P[˜ϕ(Z|b, P)]>0 follows by Lemma 5. For the third moment being finite: ρ(b, P) =EP[|˜ϕ(Z|b, P)|3]≤8EP |µ(X)−EP[µ(X)]|3+b−3|Y−µ(X)|3 ≤O(Mq) + 16 b−3EP |Y|3 . This is finite (and O(b−3)O(Mq)) by assumption. Finally, I have the claim for sequences. Recall that by Lemmas 5 and 3,1 σ(bn,P(n))3√n=o(1) and EP(n)h D max{e(X),bn}2i ≥σ2 min/2. As a result:
ρ(bn, P(n)) σ(bn, P(n))3√n≤8EP(n)h D|Y−µ(X)|3 max{e(X),bn}3+|µ(X)−EP(n)[µ(X)]|3i σ(bn, P(n))3√n ≤O(Mq)EP(n)h D max{e(X),bn}2i bnσ(bn, P(n))3√n+O(Mq) σ(bn, P(n))3√n =O(Mq, σ2 min)EP(n)D max{e(X), bn}2−1/2 (b2 nn)−1/2+o(1) = o(1). 42 Proof of Proposition 4. LetP(n) be a sequence of distributions in P. By Lemma 7 and the Berry Esseen Theorem, the difference between the CDF of oracle clipped AIPW t-statistic˜ψAIPW clip −ψn σn=P˜ϕ(Z|bn,P(n))√ V ar(ϕ(Z|bn,η))√n and the standard normal CDF Φ is uniformly bounded above by3ρ(bn,P(n)) σ(bn,P(n))3√n. By Lemma 7.4, this difference tends to zero. Therefore: lim sup n→∞sup P∈Psup t∈R P ˜ψAIPW (Oracle )(bn)−ψ(P) σn≤t! −Φ(t) = lim sup n→∞o(1) = 0 . C.2 Consistency Lemma 8. Under cross-fitting, I have the bias bound: Eh ˆψAIPW clip (bn)−ψni |ˆµ,ˆe ≤rµ,nP(n) (e(X)≤bn+re,n) +rµ,nre,nE1{e > b n+re,n} e−re,n . Proof. Fix one fold and take ˆ µ(−k)and ˆe(−k)as given. I write the bias relative to oracle clipped AIPW as: E (ˆµ−µ) 1−D max{ˆe, bn} + (µ−Y)D max{e, bn}−D max{ˆe, bn} =E (ˆµ−µ) 1−D max{ˆe, bn} , with the equality following by cross-fitting. Forp= 1,2, let cn,psolve: |max{cn,p, bn}p−max{cn,p−re,n, bn}p| max{cn,p−re,n, bn}p=|max{cn,p, bn}p−max{cn,p+re,n, bn}p| max{cn,p+re,n, bn}p. (4) cn,pis useful because it is the changeover point between whether the worst-case ˆ eis above or below e. Note that cn,p∈(bn, bn+re,n) by the intermediate value theorem: when cn,p=bn, the left-hand side is zero, while when cn,p=bn+re,n, the left hand side has the smaller denominator but equal numerator. cn,1is useful, because when e(X) =cn,1, ˆe(X) =e(X)−re,nand ˆe(X) =e(X) +re,nproduce equal levels of observation-wise bias from clipped inverse propensities relative to unity. 
Then I have the bound: Eh ˆψAIPW clip (bn)−ψni ≤ E (ˆµ−µ)max{ˆe, bn} −e max{ˆe, bn}1{e≤bn−re,n} |ˆe,ˆµ + E (ˆµ−µ)max{ˆe, bn} −e max{ˆe, bn}1{e∈(bn−re,n, cn,1]} |ˆe,ˆµ + E (ˆµ−µ)max{ˆe, bn} −e max{ˆe, bn}1{e∈(cn,1, bn+re,n]} |ˆe,ˆµ 43 + E (ˆµ−µ)max{ˆe, bn} −e max{ˆe, bn}1{e > b n+re,n} |ˆe,ˆµ ≤ E rµ,nbn−e bn1{e≤bn−re,n} |ˆe,ˆµ + E rµ,nre,n e+re,n1{e∈(bn−re,n, cn,1]} |ˆe,ˆµ + E rµ,ne−bn bn1{e∈(cn,1, bn+re,n]} |ˆe,ˆµ + E (ˆµ−µ)re,n e−re,n1{e > b n+re,n} |ˆe,ˆµ ≤rµ,nP(n)(e(X)≤bn+re,n) +rµ,nre,nE1 e−re,n1{e > b n+re,n} , where the final line follows because rµ,nis always multiplied by a term that is bounded above by one for all e(X)≤bn+re,n. Proof of Proposition 2. LetP(n) be a sequence of distributions in P, and fix some k∈1, ..., K . Write ˆψAIPW clip (bn) =1 KP kˆψAIPW clip,k (bn), where ˆψAIPW clip,k (bn) is the fold −kaverage potential outcome esti- mate. I will show that ˆψAIPW clip,k (bn)−ψn=oP(n)(1). First I show that Eh ˆψAIPW clip,k (bn)−ψn|ˆµ(−k),ˆe(−k)i =oP(n)(1). This holds by the assumptions of Propo- sition 2 applied to the bias bound from Lemma 8: Eh ˆψAIPW clip,k (bn)−ψn|ˆµ(−k),ˆe(−k)i ≤Crµ,n(bn+re,n)γ0−1+rµ,nre,nE1{e(X)> bn+re,n e(X)−re,n . Ifrµ,nre,n+bn bn→P(n)0, then this term is oP(n)(1) because rµ,nandre,n+bnare bounded above by Assump- tion 2, and at least one of the two terms must tend to zero because bn→P(n)0 and rµ,nre,n+bn bn→P(n)0. Suppose re,nbmin{γ0−2,0} n →P(n)0. For the first term, rµ,n(bn+re,n)γ0−1→P(n)0 because rµ,nis bounded above and γ0>1. For the final term, suppose that γ0<2, so that the claim is not immediate. Then: EP(n)1{e(X)> bn+re,n} e(X)−re,n =C(bn+re,n)γ0−1b−1 n+C(γ0−1)ZC−1/(γ0−1) bn+re,n(t−re,n)−1tγ0−2dt ≤C(bn+re,n)γ0−1b−1 n+C(γ0−1)ZC−1/(γ0−1) bn+re,n(t−re,n)γ0−3dt ≤C(bn+re,n)γ0−1b−1 n+C(γ0−1)Z1 bnxγ0−3dx =C(bn+re,n)γ0−1b−1 n+C(γ0−1) 2−γ0 bγ0−2 n−1 ≤
2γ0−1C+γ0−1 2−γ0C bγ0−2 n+ 2Crγ0−1 e,nb−1 n =O bγ0−2 n 1 +δ1−γ0 n =O(bγ0−2 n), 44 so that re,nEP(n)h 1{e(X)>bn+re,n} e(X)−re,ni =o(1). Note that this bound may be lax. Whether the propensity rate requirement could be weakened is an open question for future work. Regardless, in this remaining case under the propensity rate requirement (i), rµ,nre,nbγ0−2 nδn→P(n)0 by construction of δn, so that the final term of the bias bound tends to zero. Next, I show that V(ˆψAIPW clip,k (bn)|ˆµ(−k),ˆe(−k)) =oP(n)(1). I have: V(ˆψAIPW clip,k (bn)|ˆµ(−k),ˆe(−k))≤n−1E" ˆµ+D(Y−ˆµ) max{ˆe, bn}2 |ˆµ,ˆe# ≤8n−1b−2 n (Lemma 3.(iv)) =o(1). Therefore E ˆψAIPW clip,k (bn)−ψn2 |ˆµ(−k),ˆe(−k) =oP(n)(1). Therefore, for all ϵ >0, every δ >0, and every sequence of P(n), there is an nlarge enough such that P(n) ˆψAIPW clip,k (bn)−ψn2 > ϵ2 ≤δ. Thus, consistency holds. C.3 Degradation of Consistency Rate Proof of Proposition 3. Define σ2 max= supP∈PsupX,DV ar(Y|X, D ). By the presence of q >2 moments, σ2 maxis finite. By Lemma 5, EP(n)h Dσ2 min max{e(X),bn}2i ̸→0. By iid sampling and the oracle AIPW conditional mean being equal to µ(X), I obtain: V ar P(n) ˜ψAIPW (Oracle )(bn) −n−1V ar P(n)(µ(X)) =EP(n)h V ar P(n) ˜ψAIPW (Oracle )(bn)| {X}i =n−1EP(n) V ar P(n)D(Y−µ(X)) max{e(X), bn}|X =n−1EP(n) e(X)V ar P(n)(Y−µ(X)) max{e(X), bn}|X, D = 1 =n−1EP(n) e(X)V ar(Y|X, D = 1) max{e(X), bn}2 = Θ n−1EP(n)D max{e(X), bn}2 . In addition, n−1V ar P(n)(µ(X))≤n−1M=O n−1EP(n)h D max{e(X),bn}2i , proving the claim. Proof of Corollary 3. Letnbe large enough that bn≤1 and bγ0−2 n >2. Recall the definition of σ2 maxfrom the proof of Proposition 3. 
45 For the upper bound, let Pbe arbitrary: σ2 n−n−1V ar(µ) =n−1V ar P µ(X) +D(Y−µ(X)) max{e(X), bn} −n−1V ar P(µ(X)) =n−1EPD(Y−µ(X))2 max{e(X), bn}2 ≤n−1EPDσ2 max max{e(X), bn}2 =σ2 maxEPe(X) max{e(X), bn}2 =n−1σ2 maxZ∞ 0Pe(X) max{e(X), bn}2≥t dt =n−1σ2 maxZ∞ 0P e(X)≤bn, e(X)≥tb2 n dt+n−1σ2 maxZ∞ 0P(e(X)> bn, e(X)≤1/t)dt =n−1σ2 maxZb−1 n 0P e(X)∈[tb2 n, bn] dt+n−1σ2 maxZb−1 n 0P(e(X)∈[bn,1/t])dt =n−1σ2 maxZb−1 n 0P e(X)∈[tb2 n,1/t] dt≤n−1σ2 maxZb−1 n 0P(e(X)≤1/t)dt ≤Cn−1σ2 maxZb−1 n 0t1−γ0dt=Cσ2 max γ0−2|{z} C′n−1bγ0−2 n. For the remaining term, n−1V ar(µ) =O(n−1) =o C′n−1bγ0−2 n . For the lower bound, define P={P}, where Pis the distribution which draws e(X) from the CDF P(e(X)≤π) = (1 −πmin) min{Cπγ0−1,1}+πmin1{π≥1}andY|X, D∼ N(0, σ2 min). This distribution has valid conditional moments and residual variance by the minimal value of Mand the choice of V ar(Y|X < D). The treated fraction is at least πmin>0. For all π < C−1/(γ0−1),P(e(X)≤π) = (1 −πmin)Cπγ0−1≤ Cπγ0−1; for all π > C−1/(γ0−1),P(e(X)≤π) = (1 −πmin) +πmin1{π= 1}, which must be below Cπγ0−1 for all such πin order for Pto be non-empty. Finally, note that: σ2 n−n−1V ar P(µ) =n−1EPD(Y−µ(X))2 max{e, bn}2 =n−1σ2 min Rbn 0t b2n(1−πmin)(γ0−1)Ctγ0−2dt +R1 bn1 t(1−πmin)(γ0−1)Ctγ0−2dt +πmin =n−1σ2 min b−2 n(1−πmin)(γ0−1)CRbn 0tγ0−1dt + (1 −πmin)(γ0−1)CR1 bntγ0−3dt +πmin =n−1σ2 min bγ0−2 n(1−πmin)C γ0−1 γ0+γ0−1 2−γ0 +πmin−(1−πmin)Cγ0−1 2−γ0 ≥σ2 minC(1−πmin)γ0−1 γ0+γ0−1 2(2−γ0) | {z } C′′n−1bγ0−2 n. 46 Note also that C′′>0. Thus, C′′n−1bγ0−2 n≤supP∈Pσ2 n−VarP(µ(X))≤C′n−1bγ0−2 n. Analogously to before, n−1V ar(µ) =o C′′n−1bγ0−2 n , completing the proof. C.4
Asymptotic Normality and Rates Lemma 9. Suppose the conditions of Proposition 4 hold and let P(n)be a sequence of distributions in P. Then, EP(n)h D max{e(X),bn}2i =O 1 +bγ0−2 nlog(1/bn)1{γ0−2} , with a constant that only depends on Cand γ0. Proof of Lemma 9. LetP(n) be given. Then: EP(n)D max{e(X), bn}2 =EP(n)e(X) max{e(X), bn}2 ≤EP(n)1 max{e(X), bn} =Z∞ 0P(n)1 max{e(X), bn}> t dt =Z∞ 0P(n) (max {e(X), bn}<1/t)dt= 1 +Z∞ 1P(n) (max {e(X), bn}<1/t)dt = 1 +Z1/bn 1P(n) (max {e(X), bn}<1/t)dt≤1 +CZb−1 n 1t1−γ0dt. First, suppose γ0= 2. Then EP(n)h D max{e(X),bn}2i ≤1 +C(log(1 /bn)−1).Alternatively, suppose γ0̸= 2. Then EP(n)h D max{e(X),bn}2i ≤1 +C γ0−2 1−bγ0−2 n . Lemma 10. Suppose the conditions of Proposition 4 hold and P(n)is a sequence of distributions in P. Then 1r EP(n)h D max{e,bn}2i=O 1q 1 +b−2nP(n)(e(X)≤bn)γ0/(γ0−1) . Proof of Lemma 10. For any m≥0, define Pm,n={P∈P|P(e(X)≤bn)≤m). For each n, define mn=P(n)(e(X)≤bn)γ0/(γ0−1). I have: sup P∈Pmn,nEPD max{e, bn}2 = sup P∈Pmn,nEPD1{e(X)> bn} max{e, bn}2 +EPD1{e(X)≤bn} b2n ≥1 + sup P∈Pmn,nb−2 nEP[e(X)1{e(X)≤bn}] ≥1 +c(γ0) sup P∈Pmn,nP(e(X)≤bn)γ0/(γ0−1)(Lemma 4) = 1 + c(γ0)b−2 nmγ0/(γ0−1) n . Therefore 1 +b−2 nP(n)(e(X)≤bn)γ0/(γ0−1)2=O EP(n)h D max{e,bn}2i . Taking the square root and inverting both sides completes the proof. 47 Proof of Corollary 6. The two changes are the product-of-errors term being stated as 3’(b) and the regression- error-near-singularities term being stated as 3’(c). I begin with the product-of-errors term (b). Suppose Assumption 3(b) holds. I wish to show that if rµ,nre,n 1 +b(γ0−2)/2 n log(1/bn)1{γ0=2}/2 =o(n−1/2), then rµ,nre,nr EP(n)h D max{e(X),b2n}i =oP(n)(n−1/2). Letα >0 from Assumption 3 be given. By Lemma 9, for all α >0, rµ,nre,ns EP(n)D max{e(X), b2n} =OP(n) rµ,nre,n 1 +b(γ0−2)/2 n log(1/bn)1{γ0=2}/2 =o(n−1/2). Next I consider the regression error near the singularities term (c). I first verify (c) for a sequence of P(n)∈Punder Assumption 4(i). 
Let a sequence of P(n)∈Pandbnbe given, and consider an arbitrary sub-sequence. If there is a further sub-sub-sequence with P(n)(e(X)≤bn) = 0, take this sub-sub-sequence and the claim holds. If not, take a sub-sub-sequence with P(n)(e(X)≤bn)>0. On this sub-sub-sequence, I have the bound: EP(n)D max{e, bn}2 ≥EP(n)D1{e∈(bn/2, bn] max{e, bn}2 ≥1 2bnP(n)(e∈(bn/2, bn]) =1 2bn(P(n)(e≤bn)−P(n)(e≤bn/2)) ≥ρ 2bnP(n)(e≤bn). (Assumption 4(i)) Then, applying Assumption 1(d), rµ,nP(n)(e(X)≤bn)r EP(n)h D max{e,bn}2i≤rµ,n2bnP(n)(e≤bn) ρ1/2 ≤s 2C ρrµ,nbγ0/2 n=o(n−1/2). Finally, I verify (c) assuming Assumption 4(ii) holds. I wish to show that if there is a sequence of P(n)∈Pand associated constants such that rµ,nbγ0/2 n=o(n−1/2), then rµ,nP(n)(e(X)≤bn)r EP(n)h D max{e,bn}2i=o(n−1/2). Under somewhat weak overlap ( γ0>2), then rµ,nP(n)(e(X)≤bn)r EP(n)h D max{e,bn}2i=O rµ,nbγ0−1 n =O rµ,nb2(γ0−1)/γ0+(γ2 0+2−3γ0)/γ0 n =o(n−1/2). 48 I therefore proceed for γ0≤2. I will use the bound from Lemma 10: rµ,nP(n)(e(X)≤bn)r EP(n)h D max{e,bn}2i≤rµ,nP(n)(e(X)≤bn)q 1 +b−2nP(n)(e(X)≤bn)γ0/(γ0−1)= “RHS. ” Consider a sub-sequence of P(n)∈Pand constants. I will show that there is a further sub-sub-sequence for which this right-hand side RHS iso(n−1/2). Suppose there is a sub-sub-sequence such that b−2 nP(n)(e(X)≤ bn)γ0/(γ0−1)→0. Then for that sub-sub-sequence, I have: RHS≤rµ,nP(n)(e(X)≤bn) =rµ,no b2(γ0−1)/γ0 n =o(n−1/2). If not, then b2 n≾P(n)(e(X)≤bn)γ0/(γ0−1)and I have the bound: RHS≤O rµ,nbnP(n)(e(X)≤bn)1−γ0/(2(γ0−1)) =O rµ,nbn P(n)(e(X)≤bn)(1/2−1/γ0)∗γ0/(γ0−1) =O rµ,nbn P(n)(e(X)≤bn)γ0/(γ0−1)(γ0−2)/(2γ0) =O rµ,nbn b2 n(γ0−2)/(2γ0) =O rµ,nb2(γ0−1)/γ0 n =o(n−1/2). Therefore Assumption 3’(c)
holds. Lemma 11. Suppose the requirements of Theorem 1’ hold. Then by implication, rµ,nP(n)(e(X)≤bn)≪ n−1/2r EP(n)h D max{e,bn}2i . Proof of Lemma 11. rµ,nP(n)(e(X)≤bn) = rµ,nP(n)(e(X)≤bn)r EP(n)h D max{e,bn}2i s EP(n)D max{e, bn}2 ≪n−1/2s EP(n)D max{e, bn}2 . (A3’(c)) Lemma 12 (Oracle consistency) .Ifn−1/2≪bn≪1, then |Pn[ϕn]−EP(n)[µ(x)]|P− →0. Proof of Lemma 12. LetP(n) be a sequence of distributions in P. For any t >0, I have: P(n) |P(n)n[ϕn]−EP(n)[µ(x)]|> t ≤E[|ϕn−EP(n)[µ(x)]|2] nt2(Chebyshev’s inequality) 49 ≤E[|ϕn−EP(n)[ϕn]|q]2/q nt2(Jensen’s inequality) ≤[(4M)qE[e(X)/{e(X)∨bn}2]]2/q t2nb2(q−2)/q n(Lemma 3.(iii)) ≤(4M)2 t21 nb2n. This upper bound tends to zero and holds simultaneously for all P∈P. Hence, |P(n)n[ϕn]−EP(n)[µ(x)]|= oP(n)(1). Lemma 13 (Oracle variance consistency) .Letσ2 n=n−1(Pn[ϕ2 n]−Pn[ϕn]2)be the oracle sample variance. Ifn−1/2≪bn≪1, then nσ2 n/VarP(n)(ϕn)P− →1. Proof of Lemma 13. LetP(n) be a sequence of distributions in P. First, I argue that for any q >2: P P(n)n[ϕ2 n]−P[ϕ2 n] VarP(n)(ϕn) > t ≤E{|P(n)n[ϕ2 n]−P[ϕ2 n]|q/2} tq/2V ar P(n)(ϕn)q/2(Markov inequality) ≤2 tq/2nq/2−1E{|ϕ2 n−P[ϕ2 n]|q/2}] V ar P(n)(ϕn)q/2(von Bahr-Esseen inequality) ≤2q/2+1 tq/2nq/2−1E[|ϕn|q] V ar P(n)(ϕn)q/2(Jensen’s inequality) ≤2q/2+1 tq/2nq/2−1(8M)qE[e(X)/{e(X)∨bn}2] bq−2 n(Var P(n)(ϕn))q/2(Lemma 3.(iv)) ≤2q/2+1 tq/2nq/2−1(8M)qE[e(X)/{e(X)∨bn}2] bq−2 nσq minE[e(X)/{e(X)∨bn}2]q/2(Lemma 5) ≤(8M)q2q/2+1 tq/2σq min(πmin/2)q/2−11 nq/2−1bq−2 n. (Lemma 5) Since bn≫n−1/2,nq/2−1bq−2 n→ ∞ , so that P(n)n[ϕ2 n]−P[ϕ2 n] VarP(n)(ϕn) =oP(n)(1). Then, by the triangle inequality: |nσ2 n/VarP(n)(ϕn)−1| ≤ P(n)n[ϕn]2−P[ϕn]2 VarP(n)(ϕn) + P(n)n[ϕ2 n]−P[ϕ2 n] VarP(n)(ϕn) ≤ |P(n)n[ϕn] +P[ϕn]| × |P(n)n[ϕn]−P[ϕn]|/(σ2 minπmin/2) +oP(n)(1) (Lemma 5 + above) ≤(2M+oP(n)(1))oP(n)(1)OP(n)(1) + oP(n)(1) ( P[ϕn]≤M+ Lemma 12) =oP(n)(1), where σ2 nis the oracle sample variance. Therefore, this upper bound tends to zero uniformly over P. 
50 Lemma 14 (Orthogonalized inverse propensities) .Suppose the conditions of Proposition 2 hold, re,n≪bn, andP(n)is a sequence of distributions in P. Then EP(n)"D max{ˆe, bn}−D max{e, bn}2# =oP(n) EP(n)D max{e, bn}2 . Proof of Lemma 14. Write ( I) =EP(n) D max{ˆe,bn}−D max{e,bn}2 . Since re,n≪bn, letnbe large enough that bn≥2re,n, so that e≥bn+re,nimplies e−re,n≥e/2. Then the squared error has the following decomposition: (I) =EP(n)D max{ˆe, bn}2−D max{e, bn}2 −2EP(n)D max{e, bn}D max{ˆe, bn}−D max{e, bn} =EP(n)D max{e, bn}2max{e, bn}2−max{ˆe, bn}2 max{ˆe, bn}2 −2EP(n)D max{e, bn}2max{e, bn} −max{ˆe, bn} max{ˆe, bn} =EP(n)D max{e, bn}2max{e, bn}2−max{ˆe, bn}2 max{ˆe, bn}2 + 2EP(n)D max{e, bn}2max{ˆe, bn} −max{e, bn} max{ˆe, bn} ≤EP(n)D1{ˆe≤e} max{e, bn}2max{e, bn}2−max{ˆe, bn}2 max{ˆe, bn}2 + 2EP(n)D max{e, bn}2re,n max{e+re,n, bn} ≤EP(n)D max{e, bn}2max{e, bn}2−max{e−re,n, bn}2 max{e−re,n, bn}2 + 2EP(n)D max{e, bn}2re,n bn ≤EP(n)D1{e∈[bn, bn+re,n)} max{e, bn}2(bn+re,n)2−b2 n b2n +EP(n)D1{e∈[bn+re,n,1]} max{e, bn}2e2−(e−re,n)2 (e−re,n)2 + 2EP(n)D max{e, bn}2re,n bn ≤EP(n)" D1{e∈[bn, bn+re,n)} max{e, bn}22bnre,n+r2 e,n b2n# +EP(n)D1{e∈[bn+re,n,1]} max{e, bn}22ere,n (e−re,n)2 + 2EP(n)D max{e, bn}2re,n bn ≤EP(n)" D max{e, bn}2 4bnre,n+r2 e,n b2n!# + 4EP(n)D1{e∈[bn+re,n,1]} max{e, bn}22re,n e ≤EP(n)" D max{e, bn}2 12bnre,n+r2 e,n b2n!# . Since re,n=o(bn) by assumption, this upper bound is o EP(n)h D max{e,bn}2i . Lemma 15. For all P∈P,EP[D/max{e(X), bn}2]≥1− b2 nγ0−1 c(γ0)γ0−1 γ−γ0 0. Proof of Lemma 15. EP[D/max{e(X), bn}2] =EP[D1{e(X)≤bn}/max{e(X), bn}2] +EP[D1{e(X)> bn}/max{e(X), bn}2] ≥b−2 nEP[D1{e(X)≤bn}] + (1−P(e(X)≤bn)) 51 ≥b−2 nc(γ0)P(e(X)≤bn)γ0/(γ0−1)+ (1−P(e(X)≤bn)) (Lemma 4) = 1 + P(e(X)≤bn) c(γ0)b−2 nP(e(X)≤bn)1/(γ0−1)−1 . This term is minimized over P(e(X)≤bn) atP(e(X)≤bn) = γ0−1 c(γ0)b−2 nγ0γ0−1 , which produces EP[D/max{e(X), bn}2] = 1−P(e(X)≤bn) γ0= 1− b2 nγ0−1 c(γ0)γ0−1 γ−γ0 0. Lemma 16. Suppose the conditions of Lemma
14 hold and rµ,n→0. Let P(n)be a sequence of distributions inP. Recall the definitions of ϕnas the oracle clipped influence function and ˆϕnthe estimated influence function. Then EP(n)[ϕ2 n] =V ar P(n)(ϕn) +O(1)andP(n)nh ˆϕ2 n−ϕ2 ni =oP(n) V ar P(n)(ϕn) . Proof of Lemma 16. First, note that: EP(n)[ϕ2 n] =V ar P(n)(ϕn) +EP(n)[ϕn]2=V ar P(n)(ϕn) +O(1)2. (Assumption 1(a)) Next, I show that P(n)nh ˆϕ2 n−ϕ2 ni =oP(n) EP(n) D/max{e(X), bn}2 . I have: P(n)nh ˆϕ2 n−ϕ2 ni =P(n)n[(ˆϕn−ϕn)2] + P(n)nh 2(ˆϕn−ϕn)ϕni ≤P(n)n[(ˆϕn−ϕn)2] + 2r P(n)nh (ˆϕn−ϕn)2i P(n)n[ϕ2n] (Cauchy-Schwarz) =P(n)n[(ˆϕn−ϕn)2] +r P(n)nh (ˆϕn−ϕn)2i OP(n) EP(n)[ϕ2n] (Lemma 13) =EP(n)[(ˆϕn−ϕn)2] +r EP(n)h (ˆϕn−ϕn)2i OP(n) EP(n)[ϕ2n] +oP(n)q EP(n)[ϕ2n] . (SLLN) I decompose the nuisance error as: (ˆϕn−ϕn)2≤ |ˆµ−µ| 1−D max{e, bn} +|Y−ˆµ| D max{ˆe, bn}−D max{e, bn} 2 (Tri. Ineq.) ≤4 r2 µ,n 1 +D max{e, bn}2 + (Y−µ)2+oP(n)(rµ,n)D max{ˆe, bn}−D max{e, bn}2! ≾r2 µ,nD max{e, bn}2+D max{ˆe, bn}−D max{e, bn}2 +oP(n)(1) 52 =oP(n)(1)D max{e, bn}2EP(n)D max{e, bn}2 +D max{ˆe, bn}−D max{e, bn}2 +oP(n)(1) (rµ,n→0, Lemma 15) EP(n)h (ˆϕn−ϕn)2i =oP(n) EP(n)D max{e, bn}2 +EP(n)"D max{ˆe, bn}−D max{e, bn}2# +oP(n)(1) =oP(n) EP(n)D max{e, bn}2 +oP(n)(1). (Lemma 14) =oP(n) V ar P(n)(ϕn) +oP(n)(1). (Lemma 5) As a result: P(n)nh ˆϕ2 n−ϕ2 ni =q oP(n) V ar P(n)(ϕn) OP(n) EP(n)[ϕ2n] +oP(n) V ar P(n)(ϕn) +oP(n)q EP(n)[ϕ2n] =oP(n) V ar P(n)(ϕn) +q V ar P(n)(ϕn) =oP(n) V ar P(n)(ϕn) . (Lemma 15) Lemma 17 (Estimated variance consistency) .Suppose the conditions of Proposition 2 and Lemma 14 hold. LetP(n)be a sequence of distributions in P. Then ˆσ2 n/σ2 nP− →1. Proof of Lemma 17. The conditions of Proposition 2 hold, so the conditions of Proposition 3 hold as well. The conditions of Lemma 14 hold, so the conditions of Lemma 16 hold as well. Recall the definition ¯ σ2 n=n−1VarP(n)(ϕn). Letσ2 n=n−1(P(n)n[ϕ2 n]−P(n)n[ϕn]2) be the oracle sample variance. By Lemma 13, σ2 n/¯σ2 nP− →1. 
Therefore it suffices to show that (ˆ σ2 n−σ2 n)/¯σ2 n=P(n)n[ˆϕ2 n]−P(n)n[ϕ2 n]−ˆψAIPW clip (bn)2+˜ψAIPW (Oracle )(bn)2 V arP(n)(ϕn)P− →0. Note that by Proposition 3, EP(n) D/max{e(X), bn}2 = Θ( n¯σ2 n) = Θ P(n) VarP(n)(ϕn) . By the triangle inequality: ˆσ2 n−σ2 n ¯σ2n ≤ P(n)nh ˆϕ2 n−ϕ2 ni V ar P(n)(ϕn) + ˆψAIPW clip (bn)2−˜ψAIPW (Oracle )(bn)2 V ar P(n)(ϕn) ≾ P(n)nh ˆϕ2 n−ϕ2 ni O1 EP(n)[D/max{e(X), bn}2] (Lemma 6) +OP(n) ˆψAIPW clip (bn)−˜ψAIPW (Oracle )(bn) (Lemma 5) = P(n)nh ˆϕ2 n−ϕ2 ni OP(n)1 EP(n)[D/max{e(X), bn}2] +oP(n)(1) (Proposition 2) = P(n)nh ˆϕ2 n−ϕ2 ni OP(n) 1/V ar P(n)(ϕn) +oP(n)(1) (Proposition 3) 53 =oP(n)(V ar P(n)(ϕn))OP(n) 1/V ar P(n)(ϕn) +oP(n)(1) (Lemma 16) =oP(n)(1). Therefore (ˆ σ2 n−σ2 n)/¯σ2 n→P(n)0. By Lemma 13, σ2 n/¯σ2 n→P(n)1. As a result, (ˆ σ2 n−¯σ2 n)/¯σ2 n→P(n)0 and ˆσ2 n/¯σ2 nP− →1. Lemma 18. Suppose the conditions of Theorem 1’ hold. Then σ−1 n ˆψAIPW clip (bn)−˜ψAIPW (Oracle )(bn) =oP(n)(1). Proof of Lemma 18. I write k(i) for observation i’s fold and nkfor the number of observations in fold k. Then the oracle and clipped AIPW estimators are: ˜ψAIPW (Oracle )(bn) =1 nnX i=1ϕ(Zi|bn, η) =X knk n“˜ψAIPW, (k) clip(bn)”z }| { 1 nkX i:k(i)=kϕ(Zi|bn, η) ˆψAIPW
clip (bn) =1 nnX i=1ϕ(Zi|bn,ˆη(−k)) =X knk n1 nkX i:k(i)=kϕ(Zi|bn,ˆη(−k)) | {z } “ˆψAIPW, (k) clip(bn)”. I write ˆ rk≡σ−1 n ˜ψAIPW, (k) clip(bn)−ˆψAIPW, (k) clip(bn) . I wish to show thatP knk nˆrk=oP(n)(1). I consider an arbitrary kand quantify the bias and variance of ˆ rkgiven the data and nuisance estimates from the other folds−k. I write the standard decomposition: ˆrk=σ−1 nPn (ˆµ−µ) 1−D max{ˆe, bn} + (µ−Y)D max{e, bn}−D max{ˆe, bn} . By cross-fitting, the bias satisfies E[ˆrk|ˆη(−k)] =σ−1 nEh (ˆµ−µ)max{ˆe,bn}−e max{ˆe,bn}i . I now bound this term. E[ˆrk|ˆe(−k),ˆµ(−k)] ≤σ−1 nrµ,nP(n)(e(X)≤bn+re,n) +σ−1 nrµ,nre,nEP(n)1 e−re,n1{e > b n+re,n} (Lemma 8) ≤oP(n)(1)+n1/2rµ,nre,nE D/max{e, bn}2 p E[D/max{e, bn}2](Lemma 11 + Proposition 3) =oP(n)(1). (Assumption 3’(b)) Next, I show that V ar P(n) ˆψAIPW clip (bn)−˜ψAIPW (Oracle )(bn) =oP(n)(¯σ2 n), where ¯ σ2 n=n−1VarP(n)(ϕn). . Consider the estimates in one fold k, with the nuisances from other folds fixed. For the estimates in that 54 fold: V ar P(n) ˆψAIPW clip (bn)−˜ψAIPW (Oracle )(bn) =V ar P(n)nˆϕ−ϕ =n−1EP(n)h (ˆϕ−ϕ)2−EP(n)E[ˆϕ−ϕ]2i =n−1EP(n)h (ˆϕ−ϕ)2−oP(n)(1)i (Bias) =n−1oP(n) V ar P(n)(ϕn) +oP(n)(1) (Lemma 16) =n−1oP(n) EP(n)[D/max{e(X), bn}2]2 +oP(n)(1) (Lemma 3.(iii)) =oP(n)(¯σ2 n+ 1) (Lemma 6) =oP(n)EP(n)[D/max{e(X), bn}2] n+ 1 (Proposition 3) =oP(n)(¯σ2 n). (bn≫n−1/2) As a result: Eh ˆr2 k|ˆη(−k)i =Eh ˆrk|ˆη(−k)i2 +V ar ˆrk|ˆη(−k) =oP(n)(1) ˆrk=oP(n)(1) σ−1 n ˆψAIPW clip (bn)−˜ψAIPW (Oracle )(bn) =oP(n)(1) ≤X knk n|ˆrk|=X knk noP(n)(1) = oP(n)(1). Finally, I am ready to prove the central claims of this work. Proof of Theorem 1’. By Lemma 18, σ−1 n ˆψAIPW clip (bn)−˜ψAIPW (Oracle )(bn) =oP(n)(1). Therefore, by Proposi- tion 4, σ−1 n ˆψAIPW clip (bn)−ψn =σ−1 n ˆψAIPW clip (bn)−˜ψAIPW (Oracle )(bn) +σ−1 n ˜ψAIPW (Oracle )(bn)−ψnP(n)⇝N(0,1). Proof of Theorem 1. For either claim, let P(n) be a sequence of distributions Pin the relevant set. 
Note that in either case, the assumptions of Theorem 1' hold by Corollary 6. Therefore, by Theorem 1',
$$\sup_{t\in\mathbb{R}}\left|P^{(n)}\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi_n}{\sigma_n}\le t\right)-\Phi(t)\right|\to 0,$$
where $P^{(n)}(\cdot)$ denotes probability under the distribution $P^{(n)}$ and $\sigma_n$ is defined in Proposition 4. Now I expand the t-statistic for any fixed $t$:
$$\begin{aligned}
P^{(n)}\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi_n}{\hat\sigma_n}\le t\right)&=P^{(n)}\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi_n}{\sigma_n}\cdot\frac{\sigma_n}{\hat\sigma_n}\le t\right)\\
&=P^{(n)}\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi_n}{\sigma_n}\le t\,\frac{\hat\sigma_n}{\sigma_n}\right)\\
&=P^{(n)}\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi_n}{\sigma_n}-t\,\frac{\hat\sigma_n}{\sigma_n}\le 0\right)\to\Phi(t),
\end{aligned}$$
with the final result holding by Slutsky's theorem. Therefore,
$$\sup_{t\in\mathbb{R}}\left|P^{(n)}\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi_n}{\hat\sigma_n}\le t\right)-\Phi(t)\right|\to 0$$
by properties of a cumulative distribution function.

Proof of Corollary 1. For simplicity of exposition, I prove the result for the class $\mathcal P$ under Assumption 4(ii):
$$\begin{aligned}
&\limsup_{n\to\infty}\sup_{P\in\mathcal P}\left|P\big(\psi(P)\in\hat C_n\big)-(1-\alpha)\right|\\
&\quad=\limsup_{n\to\infty}\sup_{P\in\mathcal P}\left|P\left(\frac{\psi(P)-\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)}{\hat\sigma_n}\in\big[z_{\alpha/2},z_{1-\alpha/2}\big]\right)-(1-\alpha)\right|\\
&\quad=\limsup_{n\to\infty}\sup_{P\in\mathcal P}\left|\left[P\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)}{\hat\sigma_n}>z_{1-\alpha/2}\right)-\frac{\alpha}{2}\right]-\left[P\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)}{\hat\sigma_n}>z_{\alpha/2}\right)-\Big(1-\frac{\alpha}{2}\Big)\right]\right|\\
&\quad=\limsup_{n\to\infty}\sup_{P\in\mathcal P}\left|\left[P\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)}{\hat\sigma_n}<z_{1-\alpha/2}\right)-\Big(1-\frac{\alpha}{2}\Big)\right]-\left[P\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)}{\hat\sigma_n}<z_{\alpha/2}\right)-\frac{\alpha}{2}\right]\right|\\
&\quad\le\limsup_{n\to\infty}\sup_{P\in\mathcal P}\left|P\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)}{\hat\sigma_n}<z_{1-\alpha/2}\right)-\Phi(z_{1-\alpha/2})\right|+\limsup_{n\to\infty}\sup_{P\in\mathcal P}\left|P\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)}{\hat\sigma_n}<z_{\alpha/2}\right)-\Phi(z_{\alpha/2})\right|\\
&\quad=0.\qquad\text{(Theorem 1)}
\end{aligned}$$

https://arxiv.org/abs/2504.13273v2

Proof of Corollary 2. By construction, there is a sequence of $b_n\to 0$ such that $r_{e,n}\ll b_n\ll\big(n^{-1/2}/r_{\mu,n}\big)^{2\min\{1/\gamma_0,\ \gamma_0/(4(\gamma_0-1))\}}$. For such a $b_n$, Assumption 2(b) holds by $r_{\mu,n}r_{e,n}\ll n^{-1/2}$ and $\gamma_0>2$. Further, Assumption 4(ii) holds because $(\gamma_0-1)^2/\gamma_0>1$.

Recall the definition of the oracle clipped AIPW estimator $\tilde\psi^{\mathrm{AIPW}}_{(\mathrm{Oracle})}(b_n)$ and the feasible estimator $\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)$. It is clear by the proof of Proposition 1(i) that $\tilde\psi^{\mathrm{AIPW}}_{(\mathrm{Oracle})}(0)$ achieves the finite semiparametric efficiency bound. I claim that $\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\tilde\psi^{\mathrm{AIPW}}_{(\mathrm{Oracle})}(0)=o_{P^{(n)}}(n^{-1/2})$, so that the remaining claims will hold by the proofs of Theorem 1 and Corollary 1. By the proof of Theorem 1, $\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\tilde\psi^{\mathrm{AIPW}}_{(\mathrm{Oracle})}(b_n)=o_{P^{(n)}}(n^{-1/2})$, so it only remains to show that $\tilde\psi^{\mathrm{AIPW}}_{(\mathrm{Oracle})}(b_n)-\tilde\psi^{\mathrm{AIPW}}_{(\mathrm{Oracle})}(0)=o_{P^{(n)}}(n^{-1/2})$. By construction, $E_{P^{(n)}}\big[\tilde\psi^{\mathrm{AIPW}}_{(\mathrm{Oracle})}(b_n)-\tilde\psi^{\mathrm{AIPW}}_{(\mathrm{Oracle})}(0)\big]=0$, so it suffices to show that:
$$\begin{aligned}
\mathrm{Var}_{P^{(n)}}\Big(n^{1/2}\big(\tilde\psi^{\mathrm{AIPW}}_{(\mathrm{Oracle})}(b_n)-\tilde\psi^{\mathrm{AIPW}}_{(\mathrm{Oracle})}(0)\big)\Big)&=\mathrm{Var}_{P^{(n)}}\left(n^{-1/2}\sum_i\left[\frac{D_i(Y_i-\mu(X_i))}{\max\{e(X_i),b_n\}}-\frac{D_i(Y_i-\mu(X_i))}{e(X_i)}\right]\right)\\
&=\mathrm{Var}_{P^{(n)}}\left(n^{-1/2}\sum_i\frac{D_i(Y_i-\mu(X_i))(e(X_i)-b_n)\mathbb{1}\{e(X_i)\le b_n\}}{b_n e(X_i)}\right)\\
&=\mathrm{Var}_{P^{(n)}}\left(\frac{D(Y-\mu(X))(e(X)-b_n)\mathbb{1}\{e(X)\le b_n\}}{b_n e(X)}\right)\\
&=E_{P^{(n)}}\left[\left(\frac{D(Y-\mu(X))(e(X)-b_n)\mathbb{1}\{e(X)\le b_n\}}{b_n e(X)}\right)^2\right]\\
&=E_{P^{(n)}}\left[\frac{(Y-\mu(X))^2(b_n-\max\{e(X),b_n\})^2}{b_n^2\,e(X)}\right]\\
&\le\sigma^2_{\max}E_{P^{(n)}}\left[\frac{\mathbb{1}\{e(X)\le b_n\}}{e(X)}\right]\\
&=\sigma^2_{\max}\int_0^\infty P^{(n)}\left(\frac{\mathbb{1}\{e(X)\le b_n\}}{e(X)}>t\right)dt\\
&=\sigma^2_{\max}\int_0^\infty P^{(n)}\big(e(X)\le\min\{b_n,1/t\}\big)\,dt\\
&=\sigma^2_{\max}\left(\int_0^{1/b_n}P^{(n)}\big(e(X)\le b_n\big)\,dt+\int_{1/b_n}^\infty P^{(n)}\big(e(X)\le 1/t\big)\,dt\right)\\
&=\sigma^2_{\max}\left(\frac{P^{(n)}(e(X)\le b_n)}{b_n}+\int_{1/b_n}^\infty P^{(n)}\big(e(X)\le 1/t\big)\,dt\right)\\
&\le C\sigma^2_{\max}\left(b_n^{\gamma_0-2}+\int_{1/b_n}^\infty t^{1-\gamma_0}\,dt\right)\\
&=C\sigma^2_{\max}\,b_n^{\gamma_0-2}\,\frac{\gamma_0-1}{\gamma_0-2}=o_{P^{(n)}}(1).
\end{aligned}$$

C.5 Degradation of Black-Box Nuisance Requirements

Proof of Corollary 4. Take $P$ to be the distribution that draws $e(X)$ from the CDF $P(e(X)\le\pi)=\pi^{\gamma_0-1}$ and draws $Y\mid X,D\sim N(0,1)$. Let $\mathcal P=\{P\}$. Such a family conforms to the requirements of Assumption 1 by construction. Take $r_{e,n}=n^{-1/4}/\log(n)$ and $r_{\mu,n}=n^{-1/4}/\log(n)$. Note that for any $b_n\gg r_{e,n}$, $r_{\mu,n}b_n^{\gamma_0/2}\gg n^{-1/2}$.
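As a hedged numerical illustration of the two tail calculations above, the sketch below checks the layer-cake identity from the proof of Corollary 2 against exact quadrature, and draws from the power-law propensity family $P(e(X)\le\pi)=\pi^{\gamma_0-1}$ used in the Corollary 4 construction by inverse-transform sampling. The specific numeric choices ($\gamma_0=3$, $C=1$, $b=0.05$) are mine for illustration, not values from the paper.

```python
import numpy as np
from scipy.integrate import quad

gamma0 = 3.0   # illustrative value with gamma0 > 2, as Corollary 2 requires
C = 1.0        # tail constant of Assumption 1(d), set to 1 for illustration
b = 0.05       # clipping level b_n at a fixed n

# Density implied by the tail law P(e <= t) = C * t**(gamma0 - 1) on (0, 1].
def pdf(t):
    return C * (gamma0 - 1.0) * t ** (gamma0 - 2.0)

# Layer-cake computation: for this pure power law,
# E[1{e <= b} / e] equals C * (gamma0-1)/(gamma0-2) * b**(gamma0-2) exactly.
lhs, _ = quad(lambda t: pdf(t) / t, 0.0, b)
rhs = C * (gamma0 - 1.0) / (gamma0 - 2.0) * b ** (gamma0 - 2.0)

# Inverse-transform sampling from the same law, as in the Corollary 4 family:
# P(e <= t) = t**(gamma0 - 1) gives e = U**(1/(gamma0 - 1)), U ~ Uniform(0, 1).
rng = np.random.default_rng(0)
e = rng.uniform(size=100_000) ** (1.0 / (gamma0 - 1.0))
print(lhs, rhs, np.mean(e <= b), C * b ** (gamma0 - 1.0))
```

For $\gamma_0>2$ the bound vanishes as $b\to 0$, which is exactly why the clipping-induced variance term in Corollary 2 is negligible.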
I claim that this implies the zero-coverage property for a certain set of nuisance estimators satisfying these sup-norm rates. Because $r_{e,n}\ll b_n$, let $n$ be large enough that $b_n>2r_{e,n}$ and $(b_n-r_{e,n})^{\gamma_0-2}>2$. Recall that $\gamma_0<2$ by assumption, so that I can divide by $2-\gamma_0$.

Take the nuisance estimates for the sequence $P^{(n)}=P$ as $\hat\mu(X)=\mu(X)-r_{\mu,n}$ and $\hat e(X)=e(X)+r_{e,n}$. The bias of the clipped estimator $\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)$ with $n$ observations is:
$$\begin{aligned}
E_P\big[\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)\big]&=E\left[(\hat\mu(X)-\mu(X))\left(\frac{D}{\max\{\hat e(X),b_n\}}-1\right)\right]\\
&=r_{\mu,n}E\left[1-\frac{D}{\max\{e(X)+r_{e,n},b_n\}}\right]\\
&=r_{\mu,n}E\left[\mathbb{1}\{e(X)\le b_n-r_{e,n}\}\frac{b_n-e(X)}{b_n}+\mathbb{1}\{e(X)>b_n-r_{e,n}\}\frac{r_{e,n}}{e(X)+r_{e,n}}\right]\\
&\ge\frac{r_{\mu,n}}{b_n}E\big[\mathbb{1}\{e(X)\le b_n-r_{e,n}\}(b_n-e(X))\big]\ge\frac{r_{\mu,n}}{b_n}E\left[\mathbb{1}\Big\{e(X)\le\frac{b_n}{2}\Big\}(b_n-e(X))\right]\\
&\ge\frac{r_{\mu,n}}{2}E\left[\mathbb{1}\Big\{e(X)\le\frac{b_n}{2}\Big\}\right]=r_{\mu,n}b_n^{\gamma_0-1}2^{-\gamma_0}.
\end{aligned}$$
It is convenient to write $B_n=\frac{E_P[\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)]}{\sigma_n}$ for this proof. By the proof of Corollary 3, there is a $C>0$ such that $\sigma_n\ge Cn^{-1/2}b_n^{\gamma_0/2-1}$ for all $n$ large enough. For such $n$:
$$B_n=\frac{E_P\big[\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)\big]}{\sigma_n}\ge C^{-1}2^{-\gamma_0}n^{1/2}r_{\mu,n}b_n^{\gamma_0-1}b_n^{1-\gamma_0/2}=C^{-1}2^{-\gamma_0}n^{1/2}r_{\mu,n}b_n^{\gamma_0/2}\ge C^{-1}2^{-\gamma_0}n^{1/2}r_{\mu,n}r_{e,n}^{\gamma_0/2}\to\infty.$$
Note also that:
$$\begin{aligned}
\mathrm{Var}\big(\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)\big)&=\frac{1}{n}\mathrm{Var}\left(\mu+\frac{D}{\max\{\hat e,b_n\}}(Y-\mu)+r_{\mu,n}\mathbb{1}\{e\ge b_n,X\in\mathcal X_Q\}\left(1-\frac{D}{e+r_{e,n}}\right)\right)\\
&\le\frac{1}{n}\mathrm{Var}\left(\mu+\frac{D}{\max\{\hat e,b_n\}}(Y-\mu)\right)+\frac{r_{\mu,n}}{n}\mathrm{Var}\left(\mathbb{1}\{e\ge b_n,X\in\mathcal X_Q\}\left(1-\frac{D}{e+r_{e,n}}\right)\right)\\
&\le\sigma^2_n+\frac{\epsilon r_{\mu,n}}{n}\left(\mathrm{Var}_Q\left(\frac{r_{e,n}\mathbb{1}\{e\ge b_n\}}{e+r_{e,n}}\right)+E_Q\left[\frac{e(1-e)\mathbb{1}\{e\ge b_n\}}{(e+r_{e,n})^2}\right]\right)\\
&\le\sigma^2_n+\frac{\epsilon r_{\mu,n}}{n}E_Q\left[\mathbb{1}\{e\ge b_n\}\frac{r_{e,n}^2+e}{e^2}\right]\\
&\le\sigma^2_n+\frac{2\epsilon r_{\mu,n}}{n}E_Q\big[\mathbb{1}\{e\ge b_n\}e^{-1}\big]\\
&=\sigma^2_n+\frac{2C\epsilon r_{\mu,n}(\gamma_0-1)}{n}\int_{b_n}^1 t^{\gamma_0-3}\,dt\\
&=\sigma^2_n+\frac{2C\epsilon r_{\mu,n}(\gamma_0-1)}{n(2-\gamma_0)}\big(b_n^{\gamma_0-2}-1\big)\le\sigma^2_n+r_{\mu,n}\frac{2C\epsilon(\gamma_0-1)}{2-\gamma_0}\frac{b_n^{\gamma_0-2}}{n}\\
&=\sigma^2_n+o(\sigma^2_n).\qquad\text{(Proof of Corollary 3)}
\end{aligned}$$
Next, I show that the conditions of Lemma 17 hold. It suffices to show that $n^{-1/2}\ll b_n\ll 1$ and $r_{e,n}\ll b_n$, so that $r_{e,n}b_n^{\min\{\gamma_0-2,0\}}\le r_{e,n}/b_n\to 0$. By assumption, $n^{-1/2}\ll r_{e,n}\ll b_n\ll 1$. Therefore Lemma 17 applies, $\hat\sigma^2_n/\mathrm{Var}\big(\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)\big)\to_P 1$ and $E\big[\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)\big]/\hat\sigma_n\to_P\infty$. As a result, for any fixed $\alpha\in(0,1/2)$,
$$\begin{aligned}
P\big(\psi(P)\in\hat C_n\big)&=P\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-\psi(P)}{\hat\sigma_n}\in\big[z_{\alpha/2},z_{1-\alpha/2}\big]\right)\\
&=P\left(\frac{\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)-E\big[\hat\psi^{\mathrm{AIPW}}_{\mathrm{clip}}(b_n)\big]}{\sigma_n+o_P(\sigma_n)}\in\big[z_{\alpha/2}-B_n(1+o_P(1)),\,z_{1-\alpha/2}-B_n(1+o_P(1))\big]\right)\\
&=P\left(\frac{O_P(\sigma_n)}{\sigma_n+o_P(\sigma_n)}\in\big[z_{\alpha/2}-B_n(1+o_P(1)),\,z_{1-\alpha/2}-B_n(1+o_P(1))\big]\right)\to_P 0,
\end{aligned}$$
with the limit holding because $B_n$ tends to infinity.

Proof of Example 1. For simplicity, I proceed assuming $r_{e,n}\gg n^{-1/2}$. By Theorem 1, it remains to show that there is a $b_n\to 0$ such that 3(c) ($r_{\mu,n}b_n^{\gamma_0/2}\ll n^{-1/2}$) and 3(d) ($r_{e,n}\ll b_n$) hold. Because $r_{\mu,n}r_{e,n}\ll n^{-1/2}$, there exists some $\delta_n\to\infty$ such that $r_{\mu,n}r_{e,n}\delta_n\ll n^{-1/2}$. Choose some $b_n$ such that $1\gg b_n$ and $r_{e,n}\ll b_n\ll(r_{e,n}\delta_n)^{2/\gamma_0}$. This is feasible because $\gamma_0>2$, so that $r_{e,n}^{2/\gamma_0}\gg r_{e,n}$ and $\delta_n^{2/\gamma_0}\to\infty$. Then $r_{e,n}\ll b_n$ and $r_{\mu,n}b_n^{\gamma_0/2}\ll r_{\mu,n}r_{e,n}\delta_n\ll n^{-1/2}$. Thus, both conditions hold.

Proof of Example 2. For simplicity, I proceed assuming $r_{e,n}\gg n^{-1/2}$. By Theorem 1, it remains to show that there is a $b_n\to 0$ such that $r_{\mu,n}r_{e,n}\log(1/b_n)\ll n^{-1/2}$, $r_{\mu,n}b_n\ll n^{-1/2}$, and $r_{e,n}\ll b_n$. Because $r_{\mu,n}r_{e,n}\log(1/r_{e,n})\ll n^{-1/2}$, there exists a $b_n$ such that $r_{e,n}\ll b_n\ll 1$ and $r_{\mu,n}b_n\log(1/b_n)\ll n^{-1/2}$. For this $b_n$, all three conditions hold by inspection.

Proof of Example 3. For simplicity, I proceed assuming $r_{e,n},r_{\mu,n}\gg n^{-1/2}$. If $\gamma_0\ge 2$, the claim holds by Example 1 and Example 2. Now suppose that $\gamma_0<2$. By Theorem 1, it remains to show that there is a $b_n\to 0$ such that $r_{\mu,n}r_{e,n}b_n^{(\gamma_0-2)/2}\ll n^{-1/2}$, $r_{\mu,n}b_n^{\gamma_0/2}\ll n^{-1/2}$, and $r_{e,n}\ll b_n$. Because $r_{e,n}\ll n^{-1/3}$, there exists a $b_n$ such that $n^{-1/3}\gg b_n\gg\max\{r_{e,n},n^{-1}r_{\mu,n}^{-2}\}$. For this $b_n$:
$$r_{\mu,n}r_{e,n}b_n^{(\gamma_0-2)/2}\ll r_{\mu,n}r_{e,n}b_n^{-1/2}\ll r_{\mu,n}r_{e,n}\big(n^{-1}r_{\mu,n}^{-2}\big)^{-1/2}=n^{1/2}r_{\mu,n}^2r_{e,n}\ll n^{-1/2},$$
$$r_{\mu,n}b_n^{\gamma_0/2}\ll r_{\mu,n}b_n^{1/2}\ll n^{-1/2},$$
verifying all conditions.

Proof of Example 4. If $\gamma_0\ge 2$, the claim holds by Example 1 and Example 2. Now suppose that $\gamma_0<2$. By Theorem 1, it remains to show that there is a $b_n\to 0$ such that $r_{\mu,n}r_{e,n}b_n^{(\gamma_0-2)/2}\ll n^{-1/2}$, $r_{\mu,n}b_n^{\gamma_0/2}\ll n^{-1/2}$, and $r_{e,n}\ll b_n$.

(i) $r_{\mu,n}=O(n^{-1/2})$. Take $b_n\to 0$ such that $b_n\gg r_{e,n}$.
The first condition holds because $b_n^{(\gamma_0-2)/2}\ll r_{e,n}^{(\gamma_0-2)/2}\ll r_{e,n}^{-1/2}$. The second condition holds because $r_{\mu,n}b_n^{\gamma_0/2}\ll n^{-1/2}$.

(ii) $r_{e,n}=O(n^{-1/2})$. There is a $\delta_n\to\infty$ such that $\delta_n^{1/2}\ll n^{-1/4}/r_{\mu,n}$. Take $b_n=n^{-1/2}\delta_n$, so that $r_{e,n}\ll b_n$. Then:
$$r_{\mu,n}r_{e,n}b_n^{(\gamma_0-2)/2}\ll r_{\mu,n}r_{e,n}b_n^{-1/2}\ll r_{\mu,n}r_{e,n}n^{1/4}\delta_n^{-1/2}\ll n^{-1/2},$$
$$r_{\mu,n}b_n^{\gamma_0/2}\ll r_{\mu,n}b_n^{1/2}\ll r_{\mu,n}n^{-1/4}\delta_n^{1/2}\ll n^{-1/2},$$
verifying all conditions.

C.6 Necessary Smoothness Conditions

Lemma 19 (Minimal expected nearby observations). Suppose the conditions of Theorem 2 hold. Define $A(x_0\mid h)=\{x\in[-1,1]^d:\|x-x_0\|\le h\}$. Then
$$\inf_{x_0\in[-1,1]^d,\,P\in\mathcal P,\,h>0}E_P\big[D\mid X\in A(x_0\mid h)\big]\ge 2^{-(\gamma_0+1)/(\gamma_0-1)}C^{-1/(\gamma_0-1)}h^{\frac{d}{\gamma_0-1}}.$$

Proof of Lemma 19. Proof by contradiction. Suppose not, so that there are a $P\in\mathcal P$, $x_0\in[-1,1]^d$, and $h>0$ with
$$E_P\big[D\mid X\in A(x_0\mid h)\big]<2^{-(\gamma_0+1)/(\gamma_0-1)}C^{-1/(\gamma_0-1)}h^{\frac{d}{\gamma_0-1}}.$$
Define $\pi=2^{-(\gamma_0+1)/(\gamma_0-1)}C^{-1/(\gamma_0-1)}h^{\frac{d}{\gamma_0-1}}$ and $B=\{X\in A(x_0\mid h):e(X)\le 2\pi\}$. By Markov's inequality, $P\big(e(X)>2\pi\mid X\in A(x_0\mid h)\big)\le E_P\big[e(X)\mid X\in A(x_0\mid h)\big]/(2\pi)<1/2$, so $P(X\in B)>P\big(X\in A(x_0\mid h)\big)/2\ge h^d/2$. As a result:
$$P\big(e(X)\le 2\pi\big)>\frac{h^d}{2}=2C(2\pi)^{\gamma_0-1}>C(2\pi)^{\gamma_0-1},$$
contradicting Assumption 1(d).

Lemma 20 (Minimal coefficient). Suppose Assumption 6 holds and $e(X)\in\Sigma(\beta_e,L)$ for some $\beta_e>\frac{d}{\gamma_0-1}$. For $0\le j<\beta_e$, let $c_j(v\mid x_0)$ be the coefficient in the $j$th-order Taylor expansion of $e(x)$ around $x_0$ applied in the direction of $v$. Then there is a $c^*>0$ such that for all $x_0$, there exist an $x\ne x_0$ and a $0\le j\le\alpha(\mathrm{Mou})$ such that $x_0+\frac{x-x_0}{\|x-x_0\|}\in[-1,1]^d$ and $c_j\big(\frac{x-x_0}{\|x-x_0\|}\mid x_0\big)\ge c^*$.

Proof of Lemma 20. If $\beta_e\le 1$, the claim is immediate by Lemma 19. I therefore proceed assuming $\beta_e>1$. Proof by contradiction. Suppose for all $c>0$, there exist a $P$ and an $x_0$ such that $c_j\big(\frac{x-x_0}{\|x-x_0\|}\mid x_0\big)<c$ for all $j\le\alpha(\mathrm{Mou})$. Define:
$$h^*=\operatorname*{arg\,sup}_{h\in(0,1]}\left\{\sum_{\alpha(\mathrm{Mou})<j\le\ell_e}\bar c_j h^j+L_e h^{\beta_e}\le\frac{C^{\frac{1}{1-\gamma_0}}}{\lceil\alpha(\mathrm{Mou})+2\rceil}h^{\frac{d}{\gamma_0-1}}\right\}.$$
This $h^*$ is well-defined because, as $h$ tends to zero from above, the left-hand side is of a lower order than the right-hand side. By continuity, $h^*\in(0,1]$ and satisfies this weak inequality. Take $c^*=\min_{0\le j\le\alpha(\mathrm{Mou})}\frac{C^{\frac{1}{1-\gamma_0}}}{\lceil\alpha(\mathrm{Mou})+2\rceil}(h^*)^{\frac{d}{\gamma_0-1}-j}$. I claim that the content of Lemma 20 holds for this $c^*$.

Proof by contradiction. Suppose not, so that there are an $x_0\in[-1,1]^d$ and an $x$ as above such that $c_j\big(\frac{x-x_0}{\|x-x_0\|}\mid x_0\big)<c^*$ for all $j\le\alpha(\mathrm{Mou})$. Then for all $x\in[-1,1]^d\setminus\{x_0\}$ such that $\|x-x_0\|\le h^*$ and $x_0+\frac{x-x_0}{\|x-x_0\|}\in[-1,1]^d$:
$$\begin{aligned}
e(x)&=f(x\mid x_0)+g(x\mid x_0)\\
&\le\sum_{0\le j\le\alpha(\mathrm{Mou})}c_j\Big(\tfrac{x-x_0}{\|x-x_0\|}\mid x_0\Big)\|x-x_0\|^j+\sum_{\alpha(\mathrm{Mou})<j\le\ell_e}c_j\Big(\tfrac{x-x_0}{\|x-x_0\|}\mid x_0\Big)\|x-x_0\|^j+L\|x-x_0\|^{\beta_e}\\
&\le\sum_{0\le j\le\alpha(\mathrm{Mou})}\frac{C^{\frac{1}{1-\gamma_0}}}{\lceil\alpha(\mathrm{Mou})+2\rceil}(h^*)^{\frac{d}{\gamma_0-1}-j}\|x-x_0\|^j+\frac{C^{\frac{1}{1-\gamma_0}}}{\lceil\alpha(\mathrm{Mou})+2\rceil}(h^*)^{\frac{d}{\gamma_0-1}}\\
&\le C^{\frac{1}{1-\gamma_0}}\frac{\lceil\alpha(\mathrm{Mou})+1\rceil}{\lceil\alpha(\mathrm{Mou})+2\rceil}(h^*)^{\frac{d}{\gamma_0-1}}.
\end{aligned}$$
So that:
$$\begin{aligned}
P\left(e(X)\le C^{\frac{1}{1-\gamma_0}}\frac{\lceil\alpha(\mathrm{Mou})+1\rceil}{\lceil\alpha(\mathrm{Mou})+2\rceil}(h^*)^{\frac{d}{\gamma_0-1}}\right)&\ge P\left(x_0+\frac{X-x_0}{\|X-x_0\|}\in[-1,1]^d,\ \|X-x_0\|\le h^*\right)\ge(h^*)^d\\
&=C\left(C^{\frac{1}{1-\gamma_0}}(h^*)^{\frac{d}{\gamma_0-1}}\right)^{\gamma_0-1}\\
&=C\left(\frac{\lceil\alpha(\mathrm{Mou})+2\rceil}{\lceil\alpha(\mathrm{Mou})+1\rceil}\right)^{\gamma_0-1}\left(C^{\frac{1}{1-\gamma_0}}\frac{\lceil\alpha(\mathrm{Mou})+1\rceil}{\lceil\alpha(\mathrm{Mou})+2\rceil}(h^*)^{\frac{d}{\gamma_0-1}}\right)^{\gamma_0-1}\\
&>C\left(C^{\frac{1}{1-\gamma_0}}\frac{\lceil\alpha(\mathrm{Mou})+1\rceil}{\lceil\alpha(\mathrm{Mou})+2\rceil}(h^*)^{\frac{d}{\gamma_0-1}}\right)^{\gamma_0-1}.
\end{aligned}$$
But by Assumption 1(d), $P\Big(e(X)\le C^{\frac{1}{1-\gamma_0}}\frac{\lceil\alpha(\mathrm{Mou})+1\rceil}{\lceil\alpha(\mathrm{Mou})+2\rceil}(h^*)^{\frac{d}{\gamma_0-1}}\Big)\le C\Big(C^{\frac{1}{1-\gamma_0}}\frac{\lceil\alpha(\mathrm{Mou})+1\rceil}{\lceil\alpha(\mathrm{Mou})+2\rceil}(h^*)^{\frac{d}{\gamma_0-1}}\Big)^{\gamma_0-1}$. Contradiction.

Therefore, for this $c^*>0$ and all $x_0$, there exist an $x\ne x_0$ and a $0\le j\le\alpha(\mathrm{Mou})$ such that $c_j\big(\frac{x-x_0}{\|x-x_0\|}\mid x_0\big)\ge c^*$.

Proof of Proposition 5. Proof by contradiction. Suppose not, so that for all $\rho,\gamma>0$, there are $h\in\Big(0,\big(C^{1/d}/(2L)\big)^{\frac{1}{\beta_e-\frac{d}{\gamma_0-1}}}\Big]$, $x_0\in[-1,1]^d$, and $P\in\mathcal P$ such that $P\big(e(X)\ge\rho\sup_{\|x-x_0\|\le h}e(x)\mid D=1,\|X-x_0\|\le h\big)\le\gamma$. Take some $\rho>0$ and some sequence of $\gamma_n\to 0^+$, with associated bandwidths $h_n$, such that
$$P^{(n)}\Big(e(X)\ge\rho\sup_{\|x-x_0\|\le h_n}e(x)\ \Big|\ D=1,\|X-x_0\|\le h_n\Big)\le\gamma_n.$$
Because $[-1,1]^d$ is compact and the coefficients are in a compact space, there is a subsequence of $n$ for which $x_0$ and the local polynomial coefficients of $e(x)$ around $x_0$ are convergent. Without loss of generality, suppose this is the full sequence. Write $g_n(x\mid x_0)=P^{(n)}(D=1\mid X=x)-f_n(x\mid x_0)$ for the local polynomial propensity residuals. Also write $f^*(\cdot\mid x_0)$ for the local polynomial at the limiting coefficients.
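Before continuing, the small-ball bound of Lemma 19 admits a quick numerical sanity check. The one-dimensional design below ($X\sim\mathrm{Unif}[-1,1]$, $e(x)=|x|$, so that $C=1$, $\gamma_0=2$, $d=1$) is my illustrative choice, not a construction from the paper; $x_0=0$ is the natural worst case, since the propensity vanishes there.

```python
import numpy as np

# One-dimensional design: X ~ Uniform[-1, 1] and e(x) = |x|, so that
# P(e(X) <= t) = t, i.e. Assumption 1(d) holds with C = 1, gamma0 = 2, d = 1.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=500_000)
D = (rng.uniform(size=X.size) < np.abs(X)).astype(float)  # D | X ~ Bernoulli(e(X))

x0, h = 0.0, 0.2                 # x0 = 0 is the worst case: e vanishes there
ball = np.abs(X - x0) <= h       # the set A(x0 | h)
lhs = D[ball].mean()             # estimates E[D | X in A(x0 | h)] = h/2 = 0.1
# Lemma 19 lower bound: 2^{-(g0+1)/(g0-1)} * C^{-1/(g0-1)} * h^{d/(g0-1)} = h/8
rhs = 2.0 ** (-(2.0 + 1.0) / (2.0 - 1.0)) * 1.0 * h ** (1.0 / (2.0 - 1.0))
print(lhs, rhs)
```

Here the exact conditional mean is $h/2$, comfortably above the lemma's lower bound of $h/8$, consistent with the slack in its constants.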
First, I show that $h_n\to 0$. Suppose not, and $\limsup_{n\to\infty}h_n=h^*>0$. Without loss of generality, suppose $\lim_{n\to\infty}h_n=h^*$. Write $A=\{x:\|x-x_0\|\le h^*\}$. Then by continuity of densities and bounds on derivatives, for all $n$ large enough,
$$\begin{aligned}
\gamma_n&\ge P^{(n)}\Big(e(X)\ge\frac{\rho}{2}\sup_{\|x-x_0\|\le h^*}e(x)\ \Big|\ D=1,\|X-x_0\|\le h^*\Big)\\
&\ge P^{(n)}\Big(e(X)\ge\frac{3\rho}{8}\sup_{\|x-x_0\|\le h^*}f^*(x\mid x_0)\ \Big|\ D=1,\|X-x_0\|\le h^*\Big)-\mathbb{1}\Big\{\sup_{\|x-x_0\|\le h^*}\big|e(x)-f^*(x\mid x_0)\big|\ge\frac{\rho}{8}\sup_{\|x-x_0\|\le h^*}f^*(x\mid x_0)\Big\}\\
&\ge P^{(n)}\Big(e(X)\ge\frac{3\rho}{8}\sup_{\|x-x_0\|\le h^*}f^*(x\mid x_0)\ \Big|\ D=1,\|X-x_0\|\le h^*\Big)-\mathbb{1}\Big\{\sup_{x\in A}\big(\|f_n(x\mid x_0)-f^*(x\mid x_0)\|+|g_n(x\mid x_0)|\big)\ge\frac{\rho}{16}\sup_{\|x-x_0\|\le h^*}f^*(x\mid x_0)\Big\}\\
&=P^{(n)}\Big(e(X)\ge\frac{3\rho}{8}\sup_{\|x-x_0\|\le h^*}f^*(x\mid x_0)\ \Big|\ D=1,\|X-x_0\|\le h^*\Big)-\mathbb{1}\Big\{O\big((h^*)^{\beta_e}\big)\ge\Omega\big((h^*)^{\frac{d}{\gamma_0-1}}\big)\Big\}\\
&\ge P^{(n)}\Big(e(X)\ge\frac{\rho}{4}\sup_{\|x-x_0\|\le h^*}f^*(x\mid x_0)\ \Big|\ D=1,\|X-x_0\|\le h^*\Big),
\end{aligned}$$
which is a positive constant. Therefore $\gamma_n\not\to 0$. Contradiction. I therefore proceed assuming $h_n\to 0$.

Let the lowest-order nonzero coefficient in $f^*$ be of order $j^*$; $j^*$ is defined and finite by Lemma 20. Define
$$G_n=P^{(n)}\Big(e(X)\ge\frac{\rho}{2}\sup_{\|x-x_0\|\le h_n}e(x)\ \Big|\ D=1,\|X-x_0\|\le h_n\Big)-\gamma_n.$$
I wish to show that $G_n$ does not converge to zero. By the Bolzano-Weierstrass theorem, it suffices to show that there is a nonconvergent subsequence of $n$. Write $m_{n,j}\equiv\sup_{\|v\|=1}|\tilde c_j(v)|h_n^{j-j^*}$ and $m_n\equiv\max_{0\le j\le j^*}m_{n,j}$. I consider two cases: (i) $m_n\to 0$ or (ii) $\liminf_{n\to\infty}m_n>0$ (and potentially infinite).

Case (i): suppose $m_n\to 0$. Then for all $x\in A_n$,
$$\begin{aligned}
e(x)&=f^*(x\mid x_0)+\tilde f_n(x\mid x_0)+g_n(x\mid x_0)\\
&=\sum_{|\alpha|=j^*}D^\alpha e(x_0)(x-x_0)^\alpha+O\big(h_n^{\min\{j^*+1,\beta_e\}}\big)+o\big(h_n^{j^*}\big)+O\big(h_n^{\beta_e}\big)\\
&=h_n^{j^*}c_{j^*}\Big(\frac{x-x_0}{\|x-x_0\|}\Big)\Big(\frac{\|x-x_0\|}{h_n}\Big)^{j^*}+o\big(h_n^{j^*}\big).
\end{aligned}$$
Let $n\ge n'$ imply that the $o(h_n^{j^*})$ term is at most half as large as $\sup_{x\in A_n}h_n^{j^*}c_{j^*}\big(\frac{x-x_0}{\|x-x_0\|}\big)\big(\frac{\|x-x_0\|}{h_n}\big)^{j^*}$, as well as that $x_0+h_nv\in[-1,1]^d$ if and only if there is an $h>0$ such that $x_0+hv\in[-1,1]^d$. Then:
$$\begin{aligned}
&P^{(n)}\Big(e(X)\ge\rho\sup_{x\in A_n}e(x)\ \Big|\ D=1,X\in A_n\Big)\\
&\quad\ge P^{(n)}\left(h_n^{j^*}c_{j^*}\Big(\tfrac{X-x_0}{\|X-x_0\|}\Big)\Big(\tfrac{\|X-x_0\|}{h_n}\Big)^{j^*}\ge\frac{\rho}{2}\sup_{x\in A_n}h_n^{j^*}c_{j^*}\Big(\tfrac{x-x_0}{\|x-x_0\|}\Big)\Big(\tfrac{\|x-x_0\|}{h_n}\Big)^{j^*}\ \Big|\ D=1,X\in A_n\right)\\
&\quad=P^{(n')}\left(c_{j^*}\Big(\tfrac{X-x_0}{\|X-x_0\|}\Big)\Big(\tfrac{\|X-x_0\|}{h_{n'}}\Big)^{j^*}\ge\frac{\rho}{2}\sup_{x\in A_{n'}}c_{j^*}\Big(\tfrac{x-x_0}{\|x-x_0\|}\Big)\Big(\tfrac{\|x-x_0\|}{h_{n'}}\Big)^{j^*}\ \Big|\ D=1,X\in A_{n'}\right)>0.
\end{aligned}$$
Contradiction.

Case (ii): suppose $\liminf_{n\to\infty}m_n>0$. Write
$$\tilde f_n(x\mid x_0)=h_n^{j^*}m_n\sum_{j=0}^{\ell_e}\underbrace{\frac{\sum_{|\alpha|=j}D^\alpha e(x_0)\big(\frac{x-x_0}{\|x-x_0\|}\big)^\alpha h_n^{j-j^*}}{m_n}}_{\text{``}\tilde d_{j,n}\big(\frac{x-x_0}{\|x-x_0\|}\big)\text{''}}\Big(\frac{\|x-x_0\|}{h_n}\Big)^j.$$
By construction, $\tilde d_{j,n}\big(\frac{x-x_0}{\|x-x_0\|}\big)$ is bounded between $-1$ and $1$, so that there is a convergent subsequence for all $\alpha$. Without loss of generality I proceed assuming this is the full sequence. Write $\tilde d^*_j(v)=\lim_{n\to\infty}\tilde d_{j,n}(v)$ for all $v$, and write $\tilde d^*_{j^*}(v)=c_{j^*}(v)/m_n$, where $c_{j^*}(v)$ is the $j^*$-order coefficient in $f^*$. By construction, for all $\|x-x_0\|\le h_n$,
$$\begin{aligned}
e(x\mid x_0)&=f^*(x\mid x_0)+\tilde f_n(x\mid x_0)+g_n(x\mid x_0)\\
&=m_nh_n^{j^*}\sum_{j=0}^{j^*}\tilde d^*_j\Big(\tfrac{x-x_0}{\|x-x_0\|}\Big)\Big(\tfrac{\|x-x_0\|}{h_n}\Big)^j+o(1)+O\big(h_n^{\min\{1,\beta_e-j^*\}}\big)\\
&=m_nh_n^{j^*}\sum_{j=0}^{j^*}\tilde d^*_j\Big(\tfrac{x-x_0}{\|x-x_0\|}\Big)\Big(\tfrac{\|x-x_0\|}{h_n}\Big)^j+o\big(m_nh_n^{j^*}\big).
\end{aligned}$$
Let $n\ge n'$ imply that the $o(m_nh_n^{j^*})$ term is at most half as large as the largest value of the first term over $x\in A_n$, as well as that $x_0+h_nv\in[-1,1]^d$ if and only if there is an $h>0$ such that $x_0+hv\in[-1,1]^d$. Then:
$$\begin{aligned}
&P^{(n)}\Big(e(X)\ge\rho\sup_{x\in A_n}e(x)\ \Big|\ D=1,X\in A_n\Big)\\
&\quad\ge P^{(n)}\left(m_nh_n^{j^*}\sum_{j=0}^{j^*}\tilde d^*_j\Big(\tfrac{X-x_0}{\|X-x_0\|}\Big)\Big(\tfrac{\|X-x_0\|}{h_n}\Big)^j\ge\frac{\rho}{2}\sup_{x\in A_n}m_nh_n^{j^*}\sum_{j=0}^{j^*}\tilde d^*_j\Big(\tfrac{x-x_0}{\|x-x_0\|}\Big)\Big(\tfrac{\|x-x_0\|}{h_n}\Big)^j\ \Big|\ D=1,X\in A_n\right)\\
&\quad=P^{(n')}\left(\sum_{j=0}^{j^*}\tilde d^*_j\Big(\tfrac{X-x_0}{\|X-x_0\|}\Big)\Big(\tfrac{\|X-x_0\|}{h_{n'}}\Big)^j\ge\frac{\rho}{2}\sup_{x\in A_{n'}}\sum_{j=0}^{j^*}\tilde d^*_j\Big(\tfrac{x-x_0}{\|x-x_0\|}\Big)\Big(\tfrac{\|x-x_0\|}{h_{n'}}\Big)^j\ \Big|\ D=1,X\in A_{n'}\right)>0.
\end{aligned}$$
Contradiction.

Therefore there are $\rho,\gamma>0$ such that for all $h>0$ small enough, for all $P\in\mathcal P$ and $x_0\in[-1,1]^d$, $P\big(e(X)\ge\rho\sup_{\|x-x_0\|\le h}e(x)\mid D=1,\|X-x_0\|\le h\big)>\gamma$.

Lemma 21 (Minimal eigenvalue technical result). Suppose the conditions of Theorem 2 hold. There is an $h'>0$ such that
$$\lim_{\varepsilon\to 0^+}\sup_{P\in\mathcal P,\,x_0\in[-1,1]^d,\,h\in(0,h'],\,\|v\|=1}P\big(\|v^TU\|\le\varepsilon\mid D=1,\|X-x_0\|\le h\big)=0.$$

Proof of Lemma 21. Take some sequence of $\varepsilon_n\to 0^+$.
Take $h',\rho,\gamma$ from Assumption 7. Let $P^{(n)}$ be a sequence of distributions in $\mathcal P$, let $v_n$ be a sequence of vectors with $\|v_n\|=1$, let $h_n$ be a sequence in $(0,h']$, and write $A_n=\{x:\|x-x_0\|\le h_n\}$. Then:
$$\begin{aligned}
P^{(n)}\big(\|v_n^TU\|\le\sqrt{\varepsilon_n}\mid D=1,X\in A_n\big)&=\frac{E_{P^{(n)}}\big[e(X)\mathbb{1}\{\|v_n^TU\|\le\sqrt{\varepsilon_n}\}\mid X\in A_n\big]}{E_{P^{(n)}}\big[e(X)\mid X\in A_n\big]}\\
&\le\frac{\sup_{x\in A_n}e(x)\cdot P^{(n)}\big(\|v_n^TU\|\le\sqrt{\varepsilon_n}\mid X\in A_n\big)}{P^{(n)}\big(e(X)\ge\rho\sup_{x\in A_n}e(x)\big)\cdot\rho\sup_{x\in A_n}e(x)}\\
&\le\frac{\sup_{x\in A_n}e(x)\cdot P^{(n)}\big(\|v_n^TU\|\le\sqrt{\varepsilon_n}\mid X\in A_n\big)}{\gamma\rho\sup_{x\in A_n}e(x)}\\
&=\frac{P^{(n)}\big(\|v_n^TU\|\le\sqrt{\varepsilon_n}\mid X\in A_n\big)}{\rho\gamma}=O(\sqrt{\varepsilon_n})=o(1).
\end{aligned}$$

Lemma 22 (Uniform minimal expected eigenvalue). Suppose the conditions of Theorem 2 hold, and let $U(v)$ be the vector of zero- through $\lfloor\beta_e\rfloor$-order interactions of $v$. Define:
$$\lambda(\bar h)\equiv\inf_{h\in(0,\bar h],\,\|v\|=1,\,x_0\in[-1,1]^d,\,P\in\mathcal P}v^TE_P\left[U\Big(\frac{X-x_0}{h}\Big)U\Big(\frac{X-x_0}{h}\Big)^T\ \Big|\ D=1,\|X-x_0\|\le h\right]v,\qquad\lambda^*\equiv\liminf_{\bar h\to 0^+}\lambda(\bar h),$$
where $U(\cdot)$ is the $\lfloor\beta_\mu\rfloor$-order local polynomial interaction matrix. Then $\lambda^*>0$.

Proof of Lemma 22. Let $h'$ be from Lemma 21. Fix some $\bar h\in(0,h']$. Let $h_n$ be a sequence in $(0,\bar h]$, let $v_n$ be a sequence of vectors with $\|v_n\|=1$, let $x_{0,n}$ be a sequence of points in $[-1,1]^d$, and let $P^{(n)}$ be a sequence of distributions in $\mathcal P$. Let $A_n=\{x:\|x-x_{0,n}\|\le h_n\}$. By Lemma 21, there is an $\varepsilon>0$ such that:
$$\inf_{P\in\mathcal P,\,x_0\in[-1,1]^d,\,h\in(0,h_n],\,\|v\|=1}P\big(\|v^TU\|\ge\varepsilon\mid D=$$