$(x,y)\rangle|$. This completes the proof of our result.

5 Proof of Corollary 1.2

Recall the statement of Corollary 1.2. In the limit $N \to \infty$, where the graph $G$ in the main result approaches a graphon $W \colon [0,1]^2 \to [0,1]$ defined over the random variables $x, y \in [0,1]$,
$$ d_{G,W}(g,h) := 1 - \mathbb{E}_x \exp\left( \mathbb{E}_{y|x}\, \frac{\log\left(1 - d(g(y), h(y))\right)}{W(x,y)} \right) $$
is a metric defined over the functions $g, h \colon [0,1] \to \mathbb{C}$, where $\mathbb{E}_x(\cdot)$ denotes the expectation with respect to $x$ and $\mathbb{E}_{y|x}(\cdot)$ denotes the conditional expectation of $y$ given $x$, whenever it is defined.

Proof of Corollary 1.2. We modify the original definition of $d_{X,G,P}(g,h)$ as follows:
$$ d_{X,G,P}(g,h) := 1 - \frac{1}{N} \sum_{j=1}^{N} \exp\left[ \sum_{i=1}^{N} \frac{\log\left(1 - d(g_j, h_j)\right)}{p_{j|i}} \right], \qquad (5.1) $$
where $p_{j|i}$ is a conditional probability satisfying $\sum_{j=1}^{N} p_{j|i} = 1$. We rewrite equation (5.1) in terms of the marginal probability as
$$ d_{X,G,P}(g,h) := 1 - \frac{1}{N} \sum_{j=1}^{N} \exp\left[ \sum_{i=1}^{N} \frac{p_i}{p_{ji}} \log\left(1 - d(g_j, h_j)\right) \right]. $$
In this form, our metric can be viewed as a distance function estimated from finite $N \times N$ samples drawn from an underlying distribution. Similar to the framework of graphons, where the vertices $i, j$ are associated with random variables $x, y \in [0,1]$ with marginal distributions $p(x), p(y)$, and the edges between $i$ and $j$ are sampled from a distribution $W(x,y)$, as $N \to \infty$ the distance $d_{X,G,P}(g,h) \to d_{X,G,W}$ can be extended to continuous functions $g, h \colon [0,1] \to \mathbb{C}$ over graphons as
$$ d_{G,W}(g,h) = d_{W,G,W}(g,h) := 1 - \int_0^1 \exp\left( \int_0^1 \frac{\log\left(1 - d(g(y), h(y))\right)}{W(x,y)}\, p(y)\, dy \right) p(x)\, dx. $$
This finishes the proof of our corollary.

6 Proofs of Corollaries 1.3 and 1.4

The concept of a semiring generalizes that of a ring by dropping the requirement of additive inverses. This allows for algebraic structures in fields where subtraction does not naturally exist or is not meaningful, such as optimization, theoretical computer science, and, indeed, graph theory. For a broad review of the theory of semirings and their applications to these fields, we recommend the monograph [5].
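The passage between the exponential form in (5.1) and the product form used in the disjoint-union proofs of the next section rests on the elementary identity $\exp\left(\sum_i a_i \log b\right) = b^{\sum_i a_i}$. A quick numeric sanity check of this rewriting (the distance value and conditional probabilities below are made-up toy numbers, not data from the paper):

```python
import math

# Toy values: one pair (g_j, h_j) at distance d_j, and hypothetical
# conditional probabilities p_{j|i} for i = 1..4.
d_j = 0.3
p_cond = [0.2, 0.5, 0.25, 0.05]

# Exponential-of-sum form, as in the modified definition (5.1).
exp_form = math.exp(sum(math.log(1 - d_j) / p for p in p_cond))

# Product-of-powers form, with exponents 1/p_{j|i}.
prod_form = math.prod((1 - d_j) ** (1 / p) for p in p_cond)

assert abs(exp_form - prod_form) < 1e-12
```

Both forms agree up to floating-point error, which is why the two presentations of the metric can be used interchangeably.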
In this section, we discuss the properties of joint metrics with respect to the semiring structure on directed graphs. We begin with the proof of Corollary 1.3, whose statement we recall for convenience. Let $(X_1, d_1), \ldots, (X_N, d_N)$ be metric spaces as in the main result of our paper. Let $G^{(1)}$ (resp. $G^{(2)}$) be a directed graph on the set $\{1, \ldots, N_1\}$ (resp. on the set $\{N_1+1, \ldots, N\}$). We denote by $X^{(1)}$ (resp. by $X^{(2)}$) the product metric space $X_1 \times \cdots \times X_{N_1}$ (resp. $X_{N_1+1} \times \cdots \times X_N$). Let $P^{(1)}$ (resp. $P^{(2)}$) be the matrix of probability scores on the edges of $G^{(1)}$ (resp. on the edges of $G^{(2)}$). If the directed graph $G$ is a disjoint union of two sub-directed graphs, $G := G^{(1)} \sqcup G^{(2)}$, then we have
$$ d_{X,G,P} = \frac{N_1}{N}\, d_{X^{(1)},G^{(1)},P^{(1)}} + \frac{N_2}{N}\, d_{X^{(2)},G^{(2)},P^{(2)}}, $$
where $X = X^{(1)} \times X^{(2)}$ and $N_2 := N - N_1$.

Proof of Corollary 1.3. Let $E_1$ (resp. $E_2$) denote the edge set of $G_1$ (resp. of $G_2$). We now look closely at the definition of $d_{X, G_1 \sqcup G_2, P}$. Let $(\mathbf{g}, \mathbf{h}) \in X \times X$. If $\mathbf{g}$ is given by $(g_1, \ldots, g_N)$, then we will write $g^{(1)}$ (resp. $g^{(2)}$) for $(g_1, \ldots, g_{N_1})$ (resp.
https://arxiv.org/abs/2505.06405v1
for $(g_{N_1+1}, \ldots, g_N)$). The vectors $h^{(1)}$ and $h^{(2)}$ are defined similarly. Then we have
$$ d_{X, G_1 \sqcup G_2, P}(\mathbf{g}, \mathbf{h}) = 1 - \frac{1}{N} \sum_{j=1}^{N} \prod_{\substack{i \in \{1,\ldots,N\} \\ (j,i) \in E_1 \sqcup E_2}} \left[1 - d_i(g_i, h_i)\right]^{\frac{1}{p_{ji}}} = 1 - \frac{1}{N} \Bigg( \underbrace{\sum_{j=1}^{N_1} \prod_{\substack{i \in \{1,\ldots,N\} \\ (j,i) \in E_1}} \left[1 - d_i(g_i, h_i)\right]^{\frac{1}{p_{ji}}}}_{A} + \underbrace{\sum_{j=N_1+1}^{N} \prod_{\substack{i \in \{1,\ldots,N\} \\ (j,i) \in E_2}} \left[1 - d_i(g_i, h_i)\right]^{\frac{1}{p_{ji}}}}_{B} \Bigg). $$
We reorganize this expression as follows:
$$ \begin{aligned} d_{X, G_1 \sqcup G_2, P}(\mathbf{g}, \mathbf{h}) &= 1 - \frac{1}{N} \left( N_1 - N_1\left(1 - \frac{1}{N_1} A\right) + N_2 - N_2\left(1 - \frac{1}{N_2} B\right) \right) \\ &= 1 - \frac{N_1 + N_2}{N} + \frac{1}{N} \left( N_1\left(1 - \frac{1}{N_1} A\right) + N_2\left(1 - \frac{1}{N_2} B\right) \right) \\ &= \frac{N_1}{N} \left(1 - \frac{1}{N_1} A\right) + \frac{N_2}{N} \left(1 - \frac{1}{N_2} B\right). \end{aligned} $$
Since the term $1 - \frac{1}{N_1} A$ (resp. the term $1 - \frac{1}{N_2} B$) is the metric value $d_{X^{(1)},G^{(1)},P^{(1)}}(g^{(1)}, h^{(1)})$ (resp. $d_{X^{(2)},G^{(2)},P^{(2)}}(g^{(2)}, h^{(2)})$), the proof follows.

Let $(X_1, d_1), \ldots, (X_N, d_N)$ be metric spaces as in the main result of our paper. Consider a list of directed graphs $G_1, \ldots, G_r$, where $G_i$ has $N_i$ nodes for $i = 1, \ldots, r$, and assume that $N_1 + \cdots + N_r = N$. Define the following subsets of the product space:

• Let $X^{(1)}$ denote the product of the first $N_1$ metric spaces, that is, $X^{(1)} = X_1 \times X_2 \times \cdots \times X_{N_1}$.
• Similarly, let $X^{(2)}$ denote the product of the next $N_2$ metric spaces, that is, $X^{(2)} = X_{N_1+1} \times X_{N_1+2} \times \cdots \times X_{N_1+N_2}$.
• Continuing inductively, define $X^{(i)}$ for $i = 3, \ldots, r$ as the product of the next $N_i$ metric spaces, ensuring that the full product $X_1 \times \cdots \times X_N$ is partitioned into the subproducts $X^{(1)}, X^{(2)}, \ldots, X^{(r)}$.

This construction associates each graph $G_i$ with the corresponding product $X^{(i)}$. In this notation, we have the following corollary of our previous result.

Corollary 6.1. We maintain the notation from the previous paragraph. Let $X$ denote the metric space $X_1 \times \cdots \times X_N$. Let $G$ denote the directed graph obtained by taking the disjoint union of the graphs $G_1, \ldots, G_r$. Finally, let $P$ denote the block diagonal matrix obtained by putting together the matrices of probability scores $P_1, \ldots, P_r$ of the edges of the graphs $G_1, \ldots, G_r$ in the written order. Then we have
$$ d_{X,G,P} = \frac{N_1}{N}\, d_{X^{(1)},G_1,P_1} + \cdots + \frac{N_r}{N}\, d_{X^{(r)},G_r,P_r}. $$
Proof.
In light of our previous theorem, the proof follows by induction on the number $r$ of directed graphs. We omit details.

We now proceed to discuss what happens to our metrics under Cartesian products of directed graphs. Let us recall the statement of our relevant result, Corollary 1.4. We maintain our notation from the introductory section. Then we have
$$ d_{F, G_1 \square G_2, P}(\mathbf{g}, \mathbf{h}) = 1 - \frac{N_1}{N}\left( |V(G_1)| - d_{F_1,G_1,P_1} \right) \frac{N_2}{N}\left( |V(G_2)| - d_{F_2,G_2,P_2} \right). $$
Proof of Corollary 1.4. Let $(\mathbf{g}, \mathbf{h}) \in F \times F$. Then $d_{X, G_1 \square G_2, P}(\mathbf{g}, \mathbf{h})$ is given by the expression
$$ 1 - \frac{1}{N^2} \sum_{(v_1,v_2) \in V(G_1 \square G_2)} \underbrace{\prod_{\substack{(u_1,u_2) \in V(G_1 \square G_2) \\ ((u_1,u_2),(v_1,v_2)) \in E(G_1 \square G_2)}} \left[ 1 - d_{(u_1,u_2)}\!\left(g_{(u_1,u_2)}, h_{(u_1,u_2)}\right) \right]^{\frac{1}{p_{(u_1,u_2),(v_1,v_2)}}}}_{C_{(v_1,v_2)}}. $$
The product $C_{(v_1,v_2)}$ inside the summation can be split into two products, $C_{(v_1,v_2)} = A_{(v_1,v_2)} B_{(v_1,v_2)}$, where
$$ A_{(v_1,v_2)} := \prod_{\substack{(u_1,u_2) \in V(G_1 \square G_2) \\ ((u_1,u_2),(v_1,v_2)) \in E(G_1 \square G_2) \\ u_1 = v_1}} \left[ 1 - d_{(u_1,u_2)}\!\left(g_{(u_1,u_2)}, h_{(u_1,u_2)}\right) \right]^{\frac{1}{p_{(u_2,v_2)}}} $$
and
$$ B_{(v_1,v_2)} := \prod_{\substack{(u_1,u_2) \in V(G_1 \square G_2) \\ ((u_1,u_2),(v_1,v_2)) \in E(G_1 \square G_2) \\ u_2 = v_2}} \left[ 1 - d_{(u_1,u_2)}\!\left(g_{(u_1,u_2)}, h_{(u_1,u_2)}\right) \right]^{\frac{1}{p_{(u_1,v_1)}}}. $$
Clearly, an edge $((u_1,u_2),(v_1,v_2)) \in E(G_1 \square G_2)$ cannot satisfy the two conditions $u_1 = v_1$ and $u_2 = v_2$ at the same time. Hence, we have a natural splitting of the summation:
$$ 1 - \frac{1}{N^2} \sum_{\substack{(u_1,u_2) \in V(G_1 \square G_2) \\ ((u_1,u_2),(v_1,v_2)) \in E(G_1 \square G_2)}} C_{(v_1,v_2)} = 1 - \frac{1}{N^2} \sum_{\substack{(u_1,u_2) \in V(G_1 \square G_2) \\ ((u_1,u_2),(v_1,v_2)) \in E(G_1 \square G_2)}} A_{(v_1,v_2)} B_{(v_1,v_2)} = 1 - \frac{1}{N^2} \Bigg( \sum_{\substack{u_1 \in V(G_1) \\ (u_2,v_2) \in E(G_2)}} A_{(u_1,v_2)} \Bigg) \Bigg( \sum_{\substack{u_2 \in V(G_2) \\ (u_1,v_1) \in E(G_1)}} B_{(v_1,u_2)} \Bigg). \qquad (6.2) $$
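The disjointness fact used in this splitting — an edge of $G_1 \square G_2$ changes the coordinate in exactly one factor — can be illustrated on small toy graphs (the edge sets below are hypothetical; the construction follows the standard definition of the Cartesian product of directed graphs):

```python
# Toy directed graphs (no self-loops).
V1, E1 = [0, 1], {(0, 1)}
V2, E2 = [0, 1, 2], {(0, 1), (1, 2)}

# ((u1,u2),(v1,v2)) is an edge of G1 □ G2 iff
#   u1 == v1 and (u2,v2) in E2   (a "G2-type" edge), or
#   u2 == v2 and (u1,v1) in E1   (a "G1-type" edge).
g2_type = {((u1, u2), (u1, v2)) for u1 in V1 for (u2, v2) in E2}
g1_type = {((u1, u2), (v1, u2)) for u2 in V2 for (u1, v1) in E1}
edges = g1_type | g2_type

# No edge satisfies both conditions, so the two classes partition E(G1 □ G2).
assert g1_type.isdisjoint(g2_type)
assert len(edges) == len(g1_type) + len(g2_type)  # |V2|*|E1| + |V1|*|E2| = 3 + 4
```

This disjointness is exactly what lets the product $C_{(v_1,v_2)} = A_{(v_1,v_2)} B_{(v_1,v_2)}$ factor the summation.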
Notice that for each vertex $u_1 \in V(G_1)$, the corresponding multiplicand in $A_{(v_1,v_2)}$, that is,
$$ \left[ 1 - d_{(u_1,u_2)}\!\left(g_{(u_1,u_2)}, h_{(u_1,u_2)}\right) \right]^{\frac{1}{p_{(u_2,v_2)}}}, \qquad (u_2, v_2) \in E(G_2), $$
is independent of the vertex $u_1$ being connected to any other vertex in $G_1$. Hence, by varying $\mathbf{g}$ and $\mathbf{h}$, we can view $A_{(u_1,v_2)}$ as a real-valued function on the space
$$ A_{(u_1,v_2)} \colon \Bigg( \prod_{j=N_1+1}^{N} F(X_i, X_j) \Bigg) \times \Bigg( \prod_{j=N_1+1}^{N} F(X_i, X_j) \Bigg) \to \mathbb{R}, $$
where $X_i$ is the metric space indexed by the vertex $u_1 \in V(G_1)$. Then the expression $d_{X(u_1),G_2,P_2} := 1 - \frac{1}{N_2} A_{(u_1,v_2)}$ is a joint-metric on the space $X(u_1) := \prod_{j=N_1+1}^{N} F(X_i, X_j)$. Notice also that
$$ \sum_{\substack{u_1 \in V(G_1) \\ (u_2,v_2) \in E(G_2)}} A_{(u_1,v_2)} = \sum_{\substack{u_1 \in V(G_1) \\ (u_2,v_2) \in E(G_2)}} \left( N_2 - N_2\left( 1 - \frac{1}{N_2} A_{(u_1,v_2)} \right) \right) = N_1 |V(G_1)| - N \sum_{\substack{u_1 \in V(G_1) \\ (u_2,v_2) \in E(G_2)}} \frac{N_2}{N}\, d_{X(u_1),G_2,P_2}. \qquad (6.3) $$
By using similar arguments, we obtain
$$ \sum_{\substack{u_2 \in V(G_2) \\ (u_1,v_1) \in E(G_1)}} B_{(v_1,u_2)} = N_2 |V(G_2)| - N \sum_{\substack{u_2 \in V(G_2) \\ (u_1,v_1) \in E(G_1)}} \frac{N_1}{N}\, d_{X(u_2),G_1,P_1}, \qquad (6.4) $$
where $d_{X(u_2),G_1,P_1} := 1 - \frac{1}{N_1} B_{(u_1,u_2)}$ is a joint-metric on the space $X(u_2) := \prod_{i=N_2+1}^{N} F(X_i, X_j)$ for $u_2 \in V(G_2)$. By Corollary 6.1, the sums $\sum_{u_1 \in V(G_1),\, (u_2,v_2) \in E(G_2)} \frac{N_2}{N} d_{X(u_1),G_2,P_2}$ and $\sum_{u_2 \in V(G_2),\, (u_1,v_1) \in E(G_1)} \frac{N_1}{N} d_{X(u_2),G_1,P_1}$ are joint-metrics on the disjoint unions
$$ \prod_{j=1}^{N_2} F(X_1, X_j) \sqcup \cdots \sqcup \prod_{j=1}^{N_2} F(X_{N_1}, X_j) \quad \text{and} \quad \prod_{i=1}^{N_1} F(X_i, X_{N_1+1}) \sqcup \cdots \sqcup \prod_{i=1}^{N_1} F(X_i, X_N), $$
respectively. Hence, after substituting (6.3) and (6.4) into (6.2), we obtain
$$ d_{X, G_1 \square G_2, P}(\mathbf{g}, \mathbf{h}) = 1 - \frac{N_1}{N}\left( |V(G_1)| - d_{F_1,G_1,P_1} \right) \frac{N_2}{N}\left( |V(G_2)| - d_{F_2,G_2,P_2} \right). $$
This finishes the proof.

Acknowledgements. M.B.C. gratefully acknowledges the support provided by the Louisiana Board of Regents award LEQSF(2023-25)-RD-A-21 for this research. S.C. acknowledges the support from the National Science Foundation grant FET:2208770.

References

[1] Christian Borgs and Jennifer T. Chayes. Graphons: A nonparametric method to model, estimate, and design algorithms for massive networks. CoRR, abs/1706.01143, 2017.

[2] Richard A. Brualdi, Janine Smolin Graves, and K. Mark Lawrence. Codes with a poset metric. Discrete Math., 147(1-3):57–72, 1995.
[3] Sophie Dove, Monika Böhm, Rebecca Freeman, Stina Jellesmark, and David J. Murrell. A user-friendly guide to using distance measures to compare time series in ecology. Ecology and Evolution, 13(10):e10520, 2023.

[4] Tuvi Etzion, Marcelo Firer, and Roberto Assis Machado. Metrics based on finite directed graphs and coding invariants. IEEE Trans. Inform. Theory, 64(4):2398–2409, 2018.

[5] Michel Gondran and Michel Minoux. Graphs, dioids and semirings: New models and algorithms, volume 41 of Operations Research/Computer Science Interfaces Series. Springer, New York, 2008.

[6] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

[7] Tamas Lazar, Mainak Guharoy, Wim Vranken, Steffen Rauscher, Shoshana J. Wodak, and Peter Tompa. Distance-based metrics for comparing conformational ensembles of intrinsically disordered proteins. Biophysical Journal, 118(12):2952–2965, 2020.

[8] C. Y. Lee. Some properties of nonbinary error-correcting codes. IRE Trans., IT-4:77–82, 1958.

[9] László Lovász. Large Networks and Graph Limits. American Mathematical Society, 2012.

[10] László Lovász and Balázs Szegedy. Limits of dense graphs and measures. Journal of Combinatorial Theory, Series B, 96(6):933–957, 2006.

[11] Hugues Mercier, Vijay K. Bhargava, and Vahid Tarokh. A survey of error-correcting codes for channels with symbol synchronization
errors. IEEE Communications Surveys & Tutorials, 12(1):87–96, 2010.

[12] Deen Dayal Mohan, Bhavin Jawade, Srirangaraj Setlur, and Venu Govindaraju. Deep metric learning for computer vision: A brief overview. In Handbook of Statistics, Special Issue - Deep Learning, volume 48, page 59. 2023.

[13] John G. Proakis and Masoud Salehi. Digital Communications. McGraw-Hill, New York, 5th edition, 2008.

[14] Juan Luis Suárez-Díaz, Salvador García, and Francisco Herrera. A tutorial on distance metric learning: Mathematical foundations, algorithms, experimental analysis, prospects and challenges (with appendices on mathematical background and detailed algorithms explanation). arXiv preprint arXiv:1812.05944, 2018.

[15] B. Ozgode Yigin and G. Saygili. Effect of distance measures on confidences of t-SNE embeddings and its implications on clustering for scRNA-seq data. Scientific Reports, 13:6567, 2023.
Semiparametric semi-supervised learning for general targets under distribution shift and decaying overlap

Lorenzo Testa 1,2, Qi Xu 1, Jing Lei 1, and Kathryn Roeder 1,3

1 Department of Statistics & Data Science, Carnegie Mellon University
2 L'EMbeDS, Sant'Anna School of Advanced Studies
3 Department of Computational Biology, Carnegie Mellon University
{ltesta,qixu,jinglei,roeder}@andrew.cmu.edu

May 13, 2025

Abstract

In modern scientific applications, large volumes of covariate data are readily available, while outcome labels are costly, sparse, and often subject to distribution shift. This asymmetry has spurred interest in semi-supervised (SS) learning, but most existing approaches rely on strong assumptions – such as missing completely at random (MCAR) labeling or strict positivity – that put substantial limitations on their practical usefulness. In this work, we introduce a general semiparametric framework for estimation and inference in SS settings where labels are missing at random (MAR) and the overlap may vanish as the sample size increases. Our framework accommodates a wide range of smooth statistical targets – including means, linear coefficients, quantiles, and causal effects – and remains valid under high-dimensional nuisance estimation and distributional shift between labeled and unlabeled samples. We construct estimators that are doubly robust and asymptotically normal by deriving influence functions under this decaying MAR-SS regime. A key insight is that classical root-n convergence fails under vanishing overlap; we instead provide corrected asymptotic rates that capture the impact of the decay in overlap. We validate our theory through simulations and demonstrate practical utility in real-world applications on the Internet of Things and breast cancer, where labeled data are scarce.
arXiv:2505.06452v1 [math.ST] 9 May 2025

1 Introduction

In contemporary scientific applications, as highlighted by the success of prediction-powered inference (Angelopoulos et al., 2023a), semi-supervised (SS) learning has emerged as a powerful and increasingly essential paradigm, particularly in contexts where labeled data are expensive, difficult, or ethically challenging to obtain. These situations are ubiquitous in scientific domains such as biomedical research (e.g., large biobanks with partial annotations), public policy analysis (e.g., outcomes available only for selected sub-populations), and digital platforms (e.g., engagement or intervention data available only for active users). In these settings, researchers typically have access to a large volume of unlabeled covariate information, whereas labeled observations – i.e., data points with observed outcomes of interest – remain sparse.

A fundamental and often overlooked consequence of this asymmetry is the inherent violation of the positivity or overlap condition. As the amount of unlabeled data grows much faster than the number of labeled samples, it becomes increasingly likely that certain covariate regions are labeled with vanishing probability. Formally, the labeling propensity, defined as the conditional probability of observing the outcome given the covariates, may approach zero on non-negligible portions of the covariate space, violating the standard assumption that the labeling propensity is bounded away from zero almost surely. This phenomenon, known as overlap decay, is not merely a technicality, but a structural feature of semi-supervised learning. It affects identifiability, estimation, and inference, and renders many traditional estimators unstable or undefined. Crucially, this distinguishes the semi-supervised setting from standard missing data problems, where the labeling mechanism is always assumed to satisfy overlap
conditions (Tsiatis, 2006). In SS learning, by contrast, decaying overlap is the norm, and asymptotic theory must be adapted accordingly. Moreover, much of the existing work in semi-supervised learning (Angelopoulos et al., 2023a; Xu et al., 2025) has been developed under the assumption that labels are missing completely at random (MCAR), where the probability of observing a label is independent of both covariates and outcomes. Although analytically convenient, this assumption is rarely plausible in practice, especially in observational studies. In real-world applications where a randomized design cannot be implemented, the labeling process is almost always influenced by observable characteristics, such as patient demographics, policy priorities, or platform usage patterns, making the MCAR assumption often inappropriate. A more realistic framework is the missing at random (MAR) setting, where the labeling probability depends on the covariates, but is conditionally independent of the unobserved outcome. The MAR assumption, also known as the ignorability condition or selection bias, allows systematic distribution shift patterns in the labeling mechanism, while still permitting identifiability of the outcome distribution from observed data. Under this decaying missing at random semi-supervised (decaying MAR-SS) framework, naive applications of existing semi-supervised or missing data methods can yield biased or inefficient estimates, or even fail to be well defined.

In this paper, we propose a general semiparametric framework for estimation and inference in semi-supervised settings under distribution shift and decaying overlap. Our framework covers a wide range of statistical inference targets – including means, causal effects, quantiles, and smooth functionals of the data-generating process – and offers principled strategies for estimation even when the probability of observing a label vanishes in large regions of the covariate space.
Crucially, our theory allows for distributional shift between the labeled and unlabeled samples, meaning that the covariate distributions can be different, so long as appropriate reweighting can be performed with accurate nuisance parameter estimation. We build on semiparametric efficiency theory to derive influence functions for general targets in the decaying MAR-SS setting, and characterize distribution shift semi-supervised (DS3) estimators that are doubly robust, as they remain consistent and asymptotically normal even if one of the two nuisance components (labeling or projection) is misspecified. Moreover, we show that asymptotic normality remains attainable – though at a slower rate – provided that the decay in labeling probability is appropriately controlled. To our knowledge, this is the first asymptotically normal, doubly robust estimator for general parameters of interest under the decaying overlap MAR-SS setting. A detailed comparison of our results with existing literature is provided in Section 1.1.

1.1 Related work

Our work, lying at the intersection of several branches of scholarship, combines the virtues of multiple literatures into a unified inferential framework for semi-supervised learning under distribution shift and decaying overlap.

"Pure" semi-supervised learning. SS learning has traditionally focused on settings in which the labeled data are assumed to be missing completely at random (MCAR) – see Chapelle et al. (2009); Zhu (2005) for a review. Most existing work in this area focuses on estimation methods tailored to specific problems, such as mean estimation (Zhang et al., 2019; Zhang and Bradic,
|
https://arxiv.org/abs/2505.06452v1
|
2022), quantile estimation (Chakrabortty et al., 2022), and linear regression (Azriel et al., 2022; Cai and Guo, 2020). Notable exceptions are Song et al. (2024), which studies general M-estimation problems, and Xu et al. (2025), which encompasses all parameters satisfying some form of functional smoothness. However, the MCAR assumption is increasingly recognized as unrealistic in many scientific applications, limiting the practical utility of such techniques in the presence of distribution shift.

Prediction-powered inference. A recent line of work addressing semi-supervised learning under MCAR is the prediction-powered inference (PPI) framework. PPI integrates machine learning predictions into formal statistical procedures, using both labeled and unlabeled data to improve estimation efficiency while maintaining valid inference, resembling the approach proposed by Chen and Chen (2000). Since the seminal work of Angelopoulos et al. (2023a), several extensions have expanded its scope. Angelopoulos et al. (2023b) and Miao et al. (2023) propose weighting strategies that ensure the PPI estimator performs at least as well as estimators relying solely on labeled data. Zrnic and Candès (2024) further develops the framework by introducing a cross-validation-based approach to learn the predictive model directly from the labeled sample.

Decaying MAR-SS setting. Classical results in semiparametric theory offer general principles for constructing doubly robust estimators in settings where outcomes are missing at random (Tsiatis, 2006). These estimators combine outcome projection models and propensity score models – estimated either from the labeled sample alone or by incorporating information from both labeled and unlabeled data – to yield consistent estimators of population-level parameters. However, most existing work assumes strong overlap.
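Returning to the PPI framework just discussed: for the outcome mean under MCAR, the basic PPI point estimate of Angelopoulos et al. (2023a) is the average of model predictions on the unlabeled data plus a "rectifier" – the average residual on the labeled data. A minimal sketch on synthetic data (the predictor f and all constants are invented for illustration):

```python
import random

random.seed(0)

def f(x):
    # Any fixed predictive model; here a deliberately crude linear guess.
    return 2.0 * x

# Synthetic data: Y = 2X + noise, with labels missing completely at random.
n, N = 200, 20_000
X_lab = [random.gauss(0, 1) for _ in range(n)]
Y_lab = [2.0 * x + random.gauss(0, 0.5) for x in X_lab]
X_unlab = [random.gauss(0, 1) for _ in range(N)]

# PPI point estimate of theta = E[Y]: prediction mean + rectifier.
pred_mean = sum(f(x) for x in X_unlab) / N
rectifier = sum(y - f(x) for x, y in zip(X_lab, Y_lab)) / n
theta_ppi = pred_mean + rectifier
print(theta_ppi)  # should land near E[Y] = 0 on this synthetic example
```

The rectifier corrects the systematic error of the predictions, so validity does not require f to be accurate – only the MCAR sampling of the labels.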
On the other hand, the literature on limited overlap and positivity violations has largely emerged from the causal inference community, where imbalance in treatment assignment and high-dimensional covariates pose analogous challenges (Crump et al., 2009; D'Amour et al., 2021; Khan and Tamer, 2010; Rothe, 2017; Yang and Ding, 2018). Proposed solutions include trimming low-overlap regions, covariate balancing, and targeted reweighting schemes that stabilize estimation by controlling variance in sparse areas. Nevertheless, these methods typically assume that the propensity score can approach zero only in some specific regions of the support of the covariates $X$ and does not depend on the sample size. As such, they are ill-equipped to handle settings where the labeling probability vanishes uniformly across the covariate space as the total sample size grows. A notable exception is the recent work of Zhang et al. (2023), who develop a doubly robust estimator for the outcome mean in a semi-supervised MAR setting with decaying overlap.

Semiparametric statistics and missing data. From a technical standpoint, our work lies in the tradition of semiparametric statistics, which focuses on estimation in the presence of high-dimensional nuisance parameters. The missing data literature provides tools such as inverse probability weighting (IPW), augmented IPW (AIPW), and targeted maximum likelihood estimation (TMLE), many of which naturally extend to SS learning with MAR labeling. Foundational results on influence functions, tangent spaces, and semiparametric efficiency bounds – developed by Bickel et al. (1993); Tsiatis (2006); Van der Vaart (2000) – serve as the theoretical backbone of modern SS learning inference. However, most of
|
https://arxiv.org/abs/2505.06452v1
|
these frameworks assume strict positivity. As a result, applying semiparametric tools in semi-supervised learning with shrinking labeling fractions and decaying overlap requires substantial modifications to both asymptotic analysis and estimator construction.

1.2 Our contributions

We propose a general framework for identification, estimation, and inference of arbitrary target parameters in the semi-supervised setting with missing at random labeling and decaying overlap. Our work builds on semiparametric theory, but departs significantly from classical setups by relaxing the positivity assumption and allowing the labeling probability to vanish uniformly as the sample size increases. Our main contributions are as follows:

• General-purpose inference under decaying overlap. We develop a flexible and model-agnostic framework to construct DS3 estimators of general target parameters under MAR labeling with decaying overlap. Our approach accommodates arbitrary functionals – such as means, causal effects, or linear coefficients – and handles distribution shifts without relying on parametric or Donsker-type assumptions about the projection functions or the missingness mechanism.

• Nonstandard asymptotics and double robustness. We establish rigorous theoretical guarantees for our DS3 estimators, showing that they are doubly robust and asymptotically normal. Importantly, the central limit theorem we prove exhibits a non-standard dependence on the sample size that captures the effect of overlap decay. This opens up the question of semiparametric efficiency in the decaying MAR-SS regime, in contrast to classical settings where the propensity score is fixed, i.e., does not decay as the sample size increases.

Scholarship                          MAR   Decaying overlap   General target
Tsiatis (2006)                        ✓           ✗                ✓
Zhang et al. (2023)                   ✓           ✓                ✗
PPI (Angelopoulos et al., 2023a)      ✗           ✗                ✓
Xu et al. (2025)                      ✗           ✗                ✓
DS3 (our proposal)                    ✓           ✓                ✓

Table 1: Framing our contribution

• Prediction-powered inference under MAR.
As a side product of our theoretical and methodological development, we provide a natural extension of PPI to missing at random settings, greatly increasing its applicability in real-world scenarios.

Table 1 summarizes our contribution in relation to the existing literature, highlighting how our framework integrates MAR labeling, accommodates overlap decay, and supports general statistical targets.

1.3 Organization

This paper is organized as follows. Section 2 introduces the semi-supervised learning framework under MAR labeling with decaying overlap, and formally defines the statistical estimation problem. In Section 3, we present a general strategy for building DS3 estimators and characterize their asymptotic behavior. Following Tsiatis (2006); Zhang et al. (2023), we begin by analyzing the case where the propensity score is known by design, and then extend our results to the more realistic setting where the propensity score must be estimated from the data. Here, we also extend the prediction-powered inference framework proposed by Angelopoulos et al. (2023a) to the decaying MAR-SS setting. Section 4 applies our general theory to specific statistical functionals and assesses the empirical performance of our proposed DS3 estimators through simulation studies. In Section 5, we illustrate the practical value of our methodology using data from the BLE-RSSI study (Mohammadi et al., 2017), where we estimate the spatial position of a device based on signal strength measurements captured by remote sensors, and the METABRIC study (Curtis et al., 2012; Pereira et al.,
2016), where we revisit a question posed by Hsu and Lin (2023) and estimate the association between previously identified biomarkers (Cheng et al., 2021) and survival outcomes in patients with breast cancer. Concluding remarks and potential directions for future research are discussed in Section 6. Additional theoretical results and simulation details are provided in the Supplementary Material. All code for reproducing our analysis is available at https://github.com/testalorenzo/decMAR_inf.

2 Problem setup

We assume that we observe a small collection of $n$ independent and identically distributed labeled samples $\{D^L_i = (X_i, Y_i)\}_{i=1}^{n}$, where $X_i \in \mathbb{R}^p$ is a $p$-dimensional vector of covariates and $Y_i \in \mathbb{R}^k$ is a $k$-dimensional outcome. We let $D^L = (X, Y)$ denote an independent copy of $D^L_i = (X_i, Y_i)$. We also assume access to a further large collection of $N$ unlabeled data samples $\{D^U_i = (X_i)\}_{i=1}^{N}$. As before, we let $D^U = (X)$ denote an independent copy of $D^U_i = (X_i)$. Following traditional nomenclature from the semiparametric statistics literature, we recast our problem in a missing data framework. We denote the observed data as $\{D_i = (X_i, R_i, R_i Y_i)\}_{i=1}^{n+N}$, where $R_i$ is a binary indicator of whether observation $D_i$ is labeled ($R_i = 1$) or unlabeled ($R_i = 0$). We let $D = (X, R, RY)$ denote an independent copy of $D_i = (X_i, R_i, R_i Y_i)$. We denote the data-generating distribution as $P^\star \in \mathcal{P}$, where $\mathcal{P}$ is the set of distributions induced by a nonparametric model, and thus we write $D \sim P^\star$. For theoretical convenience, we also define the full data $D^F = (X, Y)$, that is, the data that we would observe if there were no missingness mechanism in place. The potential discrepancy between the labeled and unlabeled datasets due to distribution shift naturally leads us to adopt a missing at random (MAR) labeling mechanism. Moreover, a distinctive aspect of our setting is that the proportion of labeled data $n$ may become asymptotically negligible compared to the volume of unlabeled data $N$.
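This regime – MAR labeling whose overall level shrinks as the unlabeled pool grows – can be made concrete with a toy simulation; all names and constants below are illustrative inventions, not part of the paper's setup:

```python
import math
import random

random.seed(1)

def propensity(x, a):
    """MAR labeling propensity: depends on x, with overall level a."""
    return a / (1 + math.exp(-x))    # a * sigmoid(x), so it never exceeds a

fracs = []
for total in [1_000, 10_000, 100_000]:
    a = 50 / total                    # overall labeling level decays with n + N
    X = [random.gauss(0, 1) for _ in range(total)]
    R = [1 if random.random() < propensity(x, a) else 0 for x in X]
    fracs.append(sum(R) / total)      # observed labeled fraction n / (n + N)

# The labeled fraction shrinks toward zero: decaying overlap.
assert fracs[0] > fracs[1] > fracs[2]
```

Because the propensity depends on X, the labeled covariates are a biased sample (MAR, not MCAR), and because its level shrinks, no positivity bound holds uniformly in the sample size.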
Specifically, we consider the extreme regime in which $\lim_{n,N \to \infty} n/(n+N) = 0$. This scenario effectively violates the standard assumption of positivity, typically required in classical missing data analyses, where the ratio $n/(n+N)$ is always bounded away from zero. We formally state these assumptions below.

Assumption 2.1 (Decaying MAR-SS). Let the following assumptions hold:
a. Missing at random. $R \perp\!\!\!\perp Y \mid X$.
b. Decaying overlap. The ratio $n/(n+N)$ satisfies $\lim_{n,N \to \infty} n/(n+N) \in [0, 1)$.

Remark 2.2. Assumption a corresponds to the well-known ignorability condition, commonly invoked in both the missing data and causal inference literature (Kennedy, 2024; Tsiatis, 2006). Unlike classical semi-supervised settings that assume labels are missing completely at random (MCAR), our setup allows for selection bias, or distribution shift, in the labeled data. As a result, population-level parameters cannot be consistently estimated using the labeled sample alone. The goal of our work is not merely to enhance supervised estimators using additional unlabeled data (which are no longer unbiased under MAR), but rather to construct from scratch a theoretically grounded estimator of general functionals of the full data-generating process. Assumption b, inspired by the asymptotic framework of Zhang et al. (2023), formalizes the regime under which our theoretical guarantees are derived. In contrast to much of the existing literature, which assumes strong overlap (i.e., a labeling propensity uniformly bounded away from zero), we allow for
overlap to decay with the sample size, requiring new tools to characterize identifiability, efficiency, and convergence rates in this more general setting.

Remark 2.3 (Triangular arrays). The propensity score, defined as $\pi^\star(x) = P[R = 1 \mid X = x]$, plays a central role in our theoretical development under decaying overlap. In particular, Assumption b imposes structural constraints on the behavior of $\pi^\star$, by implicitly allowing it to depend on the sample sizes $n$ and $N$. Assuming $E\left[\pi^{\star\,-1}(X)\right] < \infty$ for every $n$ and $N$, we introduce the shorthand quantity $a_{n,N}^{-1} = E\left[\pi^{\star\,-1}(X)\right]$. Under Assumption b, we have $a_{n,N} > 0$ for all $n$ and $N$, but we allow $a_{n,N} \to 0$ as $n, N \to \infty$, capturing the regime of decaying overlap. As a canonical example, in the MCAR case the propensity score reduces to $\pi(x) = n/(n+N)$ for all $x \in \mathbb{R}^p$, implying that $a_{n,N} = n/(n+N)$ in this special case. Hence, both $R_i$ and $\pi(X_i)$ form triangular arrays over $n$, $N$, and $i$, but we suppress the dependency on $n$ and $N$ for notational simplicity.

Throughout this paper, we focus on the identification and estimation of a target quantity $\theta^\star \in \mathbb{R}^q$ defined as the Euclidean functional solving $\theta^\star = \theta(P^\star)$, with $\theta \colon \mathcal{P} \to \mathbb{R}^q$. We restrict our attention to target parameters that can be estimated using regular and asymptotically linear full-data estimators, that is, targets that admit the expansion
$$ \sqrt{n+N}\left( \hat{\theta}^F - \theta^\star \right) = (n+N)^{-1/2} \sum_{i=1}^{n+N} \phi^F\!\left( D^F_i; \theta^\star \right) + o_P(1), \qquad (1) $$
for some regular and asymptotically linear full-data estimator $\hat{\theta}^F$ and some function $\phi^F\left( D^F_i; \theta^\star \right)$, referred to as the full-data influence function, evaluating the contribution of the full-data observation $D^F_i$ to the overall estimator $\hat{\theta}^F$ (Hampel, 1974). The influence function $\phi^F\left( D^F_i; \theta^\star \right)$ depends on the true parameter $\theta^\star$ and is such that $E\left[\phi^F\left( D^F; \theta^\star \right)\right] = 0$ and $V\left[\phi^F\left( D^F; \theta^\star \right)\right]$ is positive definite. This framework is very general and encompasses M-estimation, Z-estimation, U-statistics, and many estimands in causal inference.
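The simplest instance of expansion (1) is the outcome mean: for $\theta^\star = E[Y]$ the full-data influence function is $\phi^F(D^F; \theta) = Y - \theta$, the estimator solving $\sum_i \phi^F(D^F_i; \hat{\theta}) = 0$ is the sample mean, and the asymptotic variance is $V[Y]/(n+N)$. A quick numeric illustration on synthetic data:

```python
import random
import statistics

random.seed(2)

# Full data: for the mean functional only Y matters.
Y = [random.gauss(3.0, 1.5) for _ in range(5_000)]

# Influence function of the mean: phi(y; theta) = y - theta.
# The sample mean solves the estimating equation sum_i phi(Y_i; theta) = 0.
theta_hat = sum(Y) / len(Y)
assert abs(sum(y - theta_hat for y in Y)) < 1e-8

# Plug-in asymptotic variance: V[phi] / (number of samples) = Var(Y) / n.
avar = statistics.pvariance(Y) / len(Y)
assert avar > 0
```

The same recipe – write down the influence function, solve its estimating equation, read the variance off the influence function – is what the observed-data construction below must reproduce without access to the missing outcomes.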
Examples of such targets are the outcome mean, outcome quantiles, least squares coefficients, average treatment effects, and general minimizers of convex loss functions depending on both $X$ and $Y$ (Tsiatis, 2006; Van der Vaart, 2000). There is a deep connection between regular and asymptotically linear estimators and influence functions – see Theorem 3.1 in Tsiatis (2006). In fact, given a full-data influence function $\phi^F\left( D^F; \theta^\star \right)$, one can recover the associated estimator $\hat{\theta}^F$ by solving the estimating equation
$$ \sum_{i=1}^{n+N} \phi^F\!\left( D^F_i; \theta^\star \right) = 0. \qquad (2) $$
Standard semiparametric theory guarantees that, under mild conditions, the classical estimator $\hat{\theta}^F$ characterized in Equations (1) and (2) is consistent for $\theta^\star$ and asymptotically normal. This follows from well-known properties of influence functions and an application of the Central Limit Theorem. The asymptotic variance of the estimator $\hat{\theta}^F$ is given by the variance of its influence function divided by the sample size, i.e., $V\left[\phi^F\left( D^F; \theta^\star \right)\right]/(n+N)$. Notice that if we make no assumptions on the distribution of the data-generating process, we are left with a problem that is inherently nonparametric. In this case, semiparametric theory shows that each functional is associated with one and only one full-data influence function, which is therefore also the efficient one (i.e., the one delivering the smallest asymptotic variance). The efficient full-data estimator is optimal in the sense that it achieves the semiparametric efficiency lower bound. While convenient from a
https://arxiv.org/abs/2505.06452v1
theoretical standpoint, the full-data estimator $\hat{\theta}^F$ is of little practical relevance, as it relies on outcome labels that are missing in the unlabeled portion of the data. A natural first approximation is to construct an estimator using only the labeled sample. However, under the missing at random (MAR) assumption, such an estimator is generally biased due to distribution shifts in the labeling mechanism. Our objective is therefore to understand how to construct an unbiased and asymptotically normal estimator for the target parameter $\theta^\star$, using the observed data $D = (X, R, RY)$ in the presence of decaying overlap. We then develop a general and flexible strategy for constructing estimators that remain valid under this challenging regime.

2.1 Notation

Before proceeding, we introduce some more notation. We denote weak convergence of a random object $Z_n$ to a limit $Z$ as $Z_n \rightsquigarrow Z$. Similarly, we denote convergence in $L^p$ norm as $Z_n \overset{L^p}{\to} Z$. Given a random object $Z_n$, we denote its variance as $\mathbb{V}[Z_n]$. If $Z_n$ is $q$-dimensional, then $\mathbb{V}[Z_n]$ must be understood as a $q \times q$ positive definite variance-covariance matrix. We also define the $L^p$ norm of a $q$-dimensional vector $Z$ as $\|Z\|_p = \left(\sum_{j=1}^{q} |Z_j|^p\right)^{1/p}$. Given a $q \times q$ matrix $A$, $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ indicate its smallest and largest eigenvalues, respectively. The $q \times q$ identity matrix is denoted as $I_q$.

3 Learning in the decaying MAR-SS setting

3.1 Known propensity score model

In this section, we develop results for inference in the decaying MAR-SS setting under the assumption that the propensity score $\pi^\star$ is known. This setup is particularly relevant in experimental designs where the labeling mechanism is controlled and fully specified. When outcome labels are missing at random, the full-data estimator defined in Equation 1 is not directly computable, as it relies on unobserved outcomes in the unlabeled sample. To properly account for the MAR mechanism, we derive the observed-data influence function by projecting the full-data influence function onto the observed data model.
Proposition 3.1 (Observed-data influence function). Let $\theta^\star$ be a target parameter that admits a regular and asymptotically linear (RAL) expansion in the full-data model as in Equation 1, and assume the decaying MAR-SS setting described in Assumption 2.1. Then, the observed-data influence function for the target $\theta^\star$ is given by:
$$\phi(D;\theta^\star) = \frac{R}{\pi^\star(X)}\phi^F(D;\theta^\star) - \frac{R-\pi^\star(X)}{\pi^\star(X)}\,\mathbb{E}\left[\phi^F(D;\theta^\star) \mid X\right] = \mathbb{E}\left[\phi^F(D;\theta^\star) \mid X\right] + \frac{R}{\pi^\star(X)}\left(\phi^F(D;\theta^\star) - \mathbb{E}\left[\phi^F(D;\theta^\star) \mid X\right]\right), \quad (3)$$
where $\phi^F\left(D^F;\theta^\star\right)$ is any valid full-data influence function, and $\pi^\star : \mathbb{R}^p \to (0,1)$ denotes the propensity score, defined as $\pi^\star(x) = \mathbb{P}[R=1 \mid X=x]$.

In Equation 3, the only unknown component is the nuisance function $\mu^\star(X) = \mathbb{E}\left[\phi^F(D;\theta^\star) \mid X\right]$, which we refer to as the *projection model*. In fact, for the moment, we assume that the propensity score is known, which amounts to assuming that $\hat{\pi} = \pi^\star$. To make the dependence on nuisance functions explicit, we denote the influence function evaluated at $\hat{\mu}$ and $\hat{\pi}$ as $\phi(D;\theta^\star;\hat{\mu};\hat{\pi})$. In particular, the observed-data influence function with $\hat{\mu} = \mu^\star$ and $\hat{\pi} = \pi^\star$ is denoted as $\phi(D;\theta^\star;\mu^\star;\pi^\star)$.

Since $\mu^\star$ must be estimated from the data, we employ cross-fitting to avoid restrictive Donsker conditions and to retain full-sample efficiency. Cross-fitting works as follows. We first randomly split the observations $\{D_1, \ldots, D_{n+N}\}$ into $J$ disjoint folds (without loss of generality, we assume that the number of observations $n+N$ is divisible by $J$). For each $j = 1, \ldots, J$ we form $\hat{\mathbb{P}}^{[-j]}$ with all but the $j$-th fold, and
$\mathbb{P}^{[j]}_{n+N}$ with the $j$-th fold. Then, we learn $\hat{\mu}^{[-j]}$ on $\hat{\mathbb{P}}^{[-j]}$. For most supervised methods, learning $\hat{\mu}^{[-j]}$ on $\hat{\mathbb{P}}^{[-j]}$ will involve using only labeled observations within $\hat{\mathbb{P}}^{[-j]}$. Then, we compute the estimator $\hat{\theta}_{\hat{\mu}}$ by solving the estimating equation
$$\sum_{j=1}^{J} \sum_{i \in \mathbb{P}^{[j]}_{n+N}} \phi\left(D_i; \theta^\star; \hat{\mu}^{[-j]}; \pi^\star\right) = 0. \quad (4)$$
Estimating the true projection function $\mu^\star$ can be challenging in practice, especially when the number of labeled samples is limited. However, due to *double robustness*, the estimator derived from Equation 4 remains consistent and asymptotically normal even when $\hat{\mu}$ is misspecified, as long as it converges to some limit $\bar{\mu} \neq \mu^\star$ (in a sense we will make precise shortly). The cost of such misspecification is an increase in asymptotic variance relative to the case where $\hat{\mu} = \mu^\star$.

We can now state some regularity conditions needed to show the theoretical properties of our estimators.

Assumption 3.2 (Inference). Let the number of cross-fitting folds be fixed at $J$, and assume that:

a. There exist $\bar{\mu}$ and $\bar{\pi}$ such that, for each $j \in \{1, \ldots, J\}$, one has
$$\phi\left(D; \theta^\star; \hat{\mu}^{[-j]}; \hat{\pi}^{[-j]}\right) \overset{L^2}{\to} \phi(D; \theta^\star; \bar{\mu}; \bar{\pi}). \quad (5)$$

b. For each $j \in \{1, \ldots, J\}$, one has
$$(n+N)^{1/2} a_{n,N}^{1/2} \sum_{j=1}^{J} \left\|\eta^{[j]}\right\|_2 = o_P(1), \quad (6)$$
where the remainder term $\eta^{[j]}$ is defined as $\eta^{[j]} = \theta\left(\hat{\mathbb{P}}^{[-j]}\right) - \theta(\mathbb{P}^\star) + \mathbb{E}\left[\phi\left(D; \theta^\star; \hat{\mu}^{[-j]}; \hat{\pi}^{[-j]}\right)\right]$.

c. $\lambda_{\min}\left(\mathbb{V}[\phi(D;\theta^\star;\bar{\mu};\bar{\pi})]\right) \asymp a_{n,N}^{-1}$ and $\lambda_{\max}\left(\mathbb{V}[\phi(D;\theta^\star;\bar{\mu};\bar{\pi})]\right) \asymp a_{n,N}^{-1}$.

d. For every $\epsilon > 0$, it holds that
$$\mathbb{E}\left[a_{n,N} \left\|\phi(D;\theta^\star;\bar{\mu};\bar{\pi})\right\|_2^2 \, \mathbb{1}\left\{\left\|\phi(D;\theta^\star;\bar{\mu};\bar{\pi})\right\|_2 > \epsilon \sqrt{\frac{n+N}{a_{n,N}}}\right\}\right] \to 0 \quad \text{as } n, N \to \infty. \quad (7)$$

Remark 3.3. Fixing the number of cross-fitting folds prevents undesirable asymptotic behavior and is standard in modern semiparametric literature. Assumptions a and b are also conventional; see, for example, Kennedy (2024). Assumption a is used to control the empirical process fluctuations, while Assumption b ensures that the remainder term in the von Mises expansion is negligible. Assumptions c and d restrict the class of admissible targets within our framework, while still allowing for a broad and general set of results.
Specifically, Assumption c imposes a condition on the scaling of the asymptotic variance of the influence function, whereas Assumption d introduces a technical requirement on the rate at which the observed-data influence function decays with the sample sizes $n$ and $N$. This latter condition is motivated by the classical Lindeberg condition, which is commonly assumed in proving Central Limit Theorems (CLTs), and is in fact close to being necessary for asymptotic normality. Our formulation generalizes the analogous condition used by Zhang et al. (2023) to accommodate a wider class of targets. In particular, we recover Assumption 3.2 of Zhang et al. (2023) by noticing that, for a univariate target (and thus a unidimensional influence function), the norm in Assumption d reduces to the absolute value of the influence function itself.

Remark 3.4 (Lindeberg condition). At first glance, Assumption d may appear benign: as $n, N \to \infty$, the term $a_{n,N} \to 0$, which might suggest that the Lindeberg-type condition becomes increasingly easy to satisfy. However, this is not the case. A key subtlety lies in the fact that the observed-data influence function $\phi(D;\theta^\star)$ is itself a function of $n$ and $N$, particularly through its dependence on the labeling mechanism and the propensity scores. This introduces a non-trivial coupling between the behavior of the influence function and the rate $a_{n,N}$. As $a_{n,N} \to 0$, the weights applied to individual observations in the influence function increase, especially in regions of the covariate
space where the propensity score is small. Consequently, while the factor $a_{n,N}$ in the condition vanishes, the tails of the influence function may become heavier due to extreme reweighting. This interplay can make the Lindeberg condition more delicate, not less. In this sense, the condition acts as a technical safeguard that balances asymptotic variance control with tail robustness under decaying overlap.

We are now ready to provide our first result, characterizing the asymptotic behavior of our DS3 estimators with known propensity score.

Theorem 3.5 (Consistency and asymptotic normality, known $\pi^\star$). Assume $(n+N)a_{n,N} \to \infty$ and that $\pi^\star$ is known. Under Assumption 2.1 (decaying MAR-SS) and Assumptions 3.2 (inference) a, c, and d, one has
$$(n+N)^{1/2} a_{n,N}^{1/2} \left\|\hat{\theta}_{\hat{\mu}} - \theta^\star\right\|_2 = O_P(1), \quad (8)$$
and
$$(n+N)^{1/2}\,\mathbb{V}[\phi(D;\theta^\star;\hat{\mu};\pi^\star)]^{-1/2}\left(\hat{\theta}_{\hat{\mu}} - \theta^\star\right) \rightsquigarrow \mathcal{N}\left(0, I_q\right). \quad (9)$$

Remark 3.6 (Effective sample size). The quantity $(n+N)a_{n,N}$ can be interpreted as the *effective sample size* that governs the convergence rate of the DS3 estimator $\hat{\theta}$ to the target parameter $\theta^\star$. The factor $a_{n,N}$ reflects the severity of overlap decay, by capturing both how concentrated the propensity score $\pi^\star$ is around zero and the relative size of the labeled dataset $n$. The greater the concentration of $\pi^\star$ near zero, the slower the rate of convergence. In the special case where labels are missing completely at random (MCAR), we have $\pi^\star(x) = n/(n+N)$ for all $x \in \mathbb{R}^p$, which implies $a_{n,N} = n/(n+N)$ and thus an effective sample size equal to $n$. In this regime, the rate of convergence reduces to the standard root-$n$ rate typically achieved when using labeled data alone. Our findings align with asymptotic results established in Angelopoulos et al. (2023b) and Xu et al. (2025).

3.2 Unknown propensity score model

In general observational studies, both the projection model $\mu^\star$ and the propensity score $\pi^\star$ are unknown and need to be estimated. We employ again *cross-fitting*. We randomly split the observations $\{D_1, \ldots, D_{n+N}\}$ into $J$ disjoint folds. For each $j = 1, \ldots$
, $J$ we form $\hat{\mathbb{P}}^{[-j]}$ with all but the $j$-th fold, and $\mathbb{P}^{[j]}_{n+N}$ with the $j$-th fold. Then, we learn $\hat{\mu}^{[-j]}$ and $\hat{\pi}^{[-j]}$ on $\hat{\mathbb{P}}^{[-j]}$, and compute the estimator $\hat{\theta}_{\hat{\mu};\hat{\pi}}$ by solving the estimating equation
$$\sum_{j=1}^{J} \sum_{i \in \mathbb{P}^{[j]}_{n+N}} \phi\left(D_i; \theta^\star; \hat{\mu}^{[-j]}; \hat{\pi}^{[-j]}\right) = 0. \quad (10)$$
We can now extend the results of Theorem 3.5 to the more general case where all nuisance functions are estimated from the data.

Theorem 3.7 (Consistency and asymptotic normality, unknown nuisances). Assume $(n+N)a_{n,N} \to \infty$. Under Assumption 2.1 (decaying MAR-SS) and Assumptions 3.2 (inference) a, b, c, and d, one has
$$(n+N)^{1/2} a_{n,N}^{1/2} \left\|\hat{\theta}_{\hat{\mu};\hat{\pi}} - \theta^\star\right\|_2 = O_P(1), \quad (11)$$
and
$$(n+N)^{1/2}\,\mathbb{V}[\phi(D;\theta^\star;\hat{\mu};\hat{\pi})]^{-1/2}\left(\hat{\theta}_{\hat{\mu};\hat{\pi}} - \theta^\star\right) \rightsquigarrow \mathcal{N}\left(0, I_q\right). \quad (12)$$

Remark 3.8 (Asymptotic normality under misspecification). When both the projection model and the propensity score are correctly specified, Theorem 3.7 ensures that $\hat{\theta}_{\hat{\mu};\hat{\pi}}$ is asymptotically normal. If only one of the two nuisance components is correctly specified, the DS3 estimator remains consistent due to its *double robustness* property. However, achieving asymptotic normality in this case is more demanding, as reflected in the remainder condition in Assumption 3.2, part b. Specifically, the correctly specified nuisance function must converge at the faster rate of $(n+N)^{-1/2} a_{n,N}^{-1/2}$, which is typically attainable only when its population counterpart belongs to a low-dimensional, parametric model class. Therefore, in complex applications, the relaxation of Assumption 3.2, b, may inhibit asymptotic normality and preserve
only consistency.

3.3 Prediction-powered inference under decaying MAR-SS

In this section, we extend the framework developed so far to incorporate additional information from an off-the-shelf predictive model $f$ that maps the covariate space to the outcome space. We do not impose assumptions on the quality of $f$, treating it as a generic black-box machine learning prediction model. This extension connects naturally to the prediction-powered inference (PPI) framework introduced by Angelopoulos et al. (2023a), but significantly broadens its applicability in real-world scientific settings. In particular, while the original PPI framework assumes that labels are missing completely at random (MCAR), our extension relaxes this to the more general missing at random (MAR) setting, where the labeling mechanism may depend on the covariates. Additionally, while the original PPI framework is limited to M-estimation problems, our approach supports any target functional satisfying Equation 1. As a result, our formulation enables the use of predictive models within PPI even under covariate-dependent labeling, allowing the benefits of modern machine learning to be realized in more realistic and challenging scenarios. Notably, standard PPI methods are not valid under MAR.

As in previous sections, we recast the problem in the language of missing data. The observed data are now given by $D = (X, f(X), R, RY)$, with $n+N$ realizations. In this context, the observed-data influence function for PPI takes the form:
$$\phi^{PPI}(D;\theta^\star) = \frac{R}{\pi^\star(X, f(X))}\phi^F(D;\theta^\star) - \frac{R-\pi^\star(X, f(X))}{\pi^\star(X, f(X))}\,\mathbb{E}\left[\phi^F(D;\theta^\star) \mid X, f(X)\right]. \quad (13)$$
All results presented in the previous sections extend naturally to this setting. The key distinction is that both the projection model and the propensity score can now be conditional on the additional covariate $f(X)$, which is always observed and may improve the quality of both nuisance estimates. In particular, we can establish consistency and asymptotic normality for the resulting PPI estimator under decaying MAR.
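For concreteness, the influence function in Equation 13 can be evaluated numerically for the outcome-mean target, whose full-data influence function is $Y - \theta$. The sketch below is our own illustration (function and variable names are ours; the oracle nuisances are assumed known for the toy design):

```python
import numpy as np

def phi_ppi_mean(y, r, pi, mu, theta):
    """Observed-data PPI influence function (Eq. 13) for the mean target,
    where phi^F(D; theta) = Y - theta and mu approximates E[Y | X, f(X)].

    y  : outcomes (arbitrary placeholder where r == 0; the r/pi factor zeroes it)
    r  : labeling indicators in {0, 1}
    pi : propensity scores pi*(X, f(X))
    mu : projection-model predictions of E[Y | X, f(X)]
    """
    phi_full = y - theta
    m = mu - theta  # E[phi^F | X, f(X)]
    return r / pi * phi_full - (r - pi) / pi * m

# Tiny illustration with a hypothetical predictor folded into mu = E[Y | X, f(X)].
rng = np.random.default_rng(1)
n_tot = 50_000
x = rng.normal(size=n_tot)
y = 2 * x + rng.normal(size=n_tot)                  # true mean E[Y] = 0
pi = np.clip(1 / (1 + np.exp(-(-1 + x))), 0.05, 0.95)
r = rng.binomial(1, pi)
mu = 2 * x                                          # oracle projection model
theta_hat = 0.0                                     # candidate parameter value

# At the true parameter the influence function averages to roughly zero.
print(phi_ppi_mean(np.where(r == 1, y, 0.0), r, pi, mu, theta_hat).mean())
```

Averaging this influence function over the sample and solving for the root in $\theta$ is exactly the estimating-equation recipe of Equation 10, now with nuisances conditioned on $(X, f(X))$.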
Corollary 3.9 (PPI asymptotic normality under decaying MAR-SS). Assume $(n+N)a_{n,N} \to \infty$. Under Assumption 2.1 (decaying MAR-SS) and Assumption 3.2 (inference), one has
$$(n+N)^{1/2} a_{n,N}^{1/2} \left\|\hat{\theta}^{PPI} - \theta^\star\right\|_2 = O_P(1), \quad (14)$$
and
$$(n+N)^{1/2}\,\mathbb{V}\left[\phi^{PPI}(D;\theta^\star;\hat{\mu};\hat{\pi})\right]^{-1/2}\left(\hat{\theta}^{PPI} - \theta^\star\right) \rightsquigarrow \mathcal{N}\left(0, I_q\right). \quad (15)$$

Remark 3.10. The utility of incorporating an external predictive model $f$ into the estimation procedure, as in the PPI framework, depends critically on the information that $f(X)$ carries relative to the covariates $X$. In many practical applications, $f$ is a deterministic function, such as a black-box machine learning predictor trained externally. In this case, $f(X)$ is entirely determined by $X$, and the conditional expectations in Equation 3 project the full-data influence function $\phi^F\left(D^F;\theta^\star\right)$ onto the same $\sigma$-field as in the original setting. As a result, the asymptotic variance of the resulting estimator remains unchanged, and the inclusion of $f$ does not yield a theoretical efficiency gain, though it can still offer great practical advantages by simplifying nuisance model specification. More interesting behavior arises when $f$ introduces additional randomness beyond $X$. For example, models such as variational autoencoders can yield stochastic predictions $f(X)$ even for identical covariates $X$. In such cases, the $\sigma$-field onto which the conditional expectations project the full-data influence function may change, and the augmented variable $f(X)$ can reduce the residual variance by capturing information orthogonal to $X$, especially if it correlates with omitted features influencing the outcome. Finally, we note that our framework can
naturally accommodate the integration of multiple predictive models; that is, we can simultaneously leverage a collection of off-the-shelf models $\{f_1, \ldots, f_m\}$ within our estimation procedure, allowing us to flexibly combine information from diverse sources and enhance robustness in practical applications.

4 Simulation study

In this section, we instantiate the general theory developed above for two specific target parameters: the *multivariate outcome mean* and the *coefficients of a linear regression model*. For each case, we conduct extensive simulation studies to assess the finite-sample performance of the proposed DS3 estimators under varying degrees of overlap and model misspecification. We compare our DS3 estimators to a naive estimator computed only on the labeled dataset, and to the outcome regression (OR) and inverse probability weighting (IPW) counterparts. The OR influence function is
$$\phi^{OR}(D;\theta^\star) = \mathbb{E}\left[\phi^F(D;\theta^\star) \mid X\right], \quad (16)$$
while the IPW influence function is
$$\phi^{IPW}(D;\theta^\star) = \frac{R}{\pi^\star(X)}\phi^F(D;\theta^\star). \quad (17)$$
We employ cross-fitting with $J = 5$ folds to learn the nuisance functions involved in the computation of our DS3, OR, and IPW estimators. We are interested in probing double robustness and measuring actual coverage. Therefore, we assess performance in terms of estimation accuracy and inferential coverage. To measure accuracy, we compute the root mean squared error under the $L^2$-distance between the true $\theta^\star \in \mathbb{R}^q$ and its estimate $\hat{\theta}$, i.e. $\mathrm{RMSE}(\hat{\theta}) = \|\hat{\theta} - \theta^\star\|_2$. To measure coverage, we evaluate the empirical fraction of times $\Delta$ in which $\theta^\star$ falls into the asymptotic $1-\alpha$ confidence regions of $\hat{\theta}$. In particular, we employ the Hotelling one-sample $T^2$ test, which rejects the null hypothesis $\hat{\theta} = \theta^\star$ at the $1-\alpha$ level if
$$\left(\hat{\theta} - \theta^\star\right)^T \hat{V}_{n,N}^{-1}\left(\hat{\theta} - \theta^\star\right) \geq F_{q,\,n+N-q,\,1-\alpha}, \quad (18)$$
where $\hat{V}_{n,N} = (n+N)^{-1}\sum_{i=1}^{n+N}\phi(D_i;\theta^\star;\hat{\mu};\hat{\pi})\,\phi(D_i;\theta^\star;\hat{\mu};\hat{\pi})^T$ and $F_{q,\,n+N-q,\,1-\alpha}$ is the $1-\alpha$ quantile of an $F$-distribution with $q$ and $n+N-q$ degrees of freedom. We mimic and extend the data-generating process of the simulation study conducted by Zhang et al. (2023). For each $i = 1, \ldots$
, $n+N$, we first generate $p$-dimensional i.i.d. Gaussian covariates $X_i \sim \mathcal{N}(0, I_p)$ and residuals $\epsilon_i \sim \mathcal{N}(0, I_k)$. Then we assign the missingness label according to a Bernoulli law, i.e. $R_i \mid X_i \sim \mathrm{Ber}(\pi^\star(X_i))$, where the propensity score $\pi^\star$ is modeled as:

• $\pi^\star(X_i) = n/(n+N)$, corresponding to a *decaying MCAR*.

• $\pi^\star(X_i) = \mathrm{expit}\left(\log(n/(n+N)) + X_i^T \gamma\right)$, where $\mathrm{expit}(x) = \exp(x)/(1+\exp(x))$. The coefficients $\gamma$ are randomly sampled from a normal distribution, that is $\gamma \sim \mathcal{N}(0, I_p)$. This setting corresponds to a *logistic propensity score with decaying offset*.

We then generate outcomes $Y_i \in \mathbb{R}^k$ according to the following linear model:
$$Y_i = \beta^T X_i + \epsilon_i, \quad (19)$$
where $\beta \in \mathbb{R}^{p \times k}$ and, for each $\ell = 1, \ldots, k$, $\beta_\ell \sim \mathcal{N}(0, I_p)$. We model the propensity score using a constant estimator, which essentially corresponds to an MCAR estimator, and an offset logistic regression. Within each fold $j$, the constant estimator computes $\sum_{i \in \hat{\mathbb{P}}^{[j]}} R_i / |\hat{\mathbb{P}}^{[j]}|$, while the offset in the logistic regression is set at $\log\left(\sum_{i \in \hat{\mathbb{P}}^{[j]}} R_i / |\hat{\mathbb{P}}^{[j]}|\right)$. Similarly, we model the projection model using linear regression. We note that the estimators for the propensity scores are the same as the ones proposed by Zhang et al. (2023). For each simulation scenario, we set $\alpha = 0.05$, $p = 10$, $n \in \{100, 1000, 10000\}$, and $N \in \{10n, 100n\}$. Each configuration is replicated across 100 random seeds.

4.1 Multivariate outcome mean

Given full data $D^F = (X, Y)$, with $X \in \mathbb{R}^p$ and $Y \in \mathbb{R}^k$, the $k$-dimensional full-data influence function for the $k$-dimensional target parameter $\theta^\star = \mathbb{E}[Y]$ is
$$\phi^F\left(D^F;\theta^\star\right) = Y - \theta^\star. \quad (20)$$
The observed data are $D = (X, R, RY)$, and thus the
observed-data influence function takes the form
$$\phi(D;\theta^\star) = \mu^\star(X) + \frac{R}{\pi^\star(X)}\left(Y - \mu^\star(X)\right) - \theta^\star, \quad (21)$$
where $\mu^\star : \mathbb{R}^p \to \mathbb{R}^k$ is equal to $\mathbb{E}[Y \mid X]$. The previous influence function motivates the well-known augmented inverse probability weighting (AIPW) estimator (Bang and Robins, 2005)
$$\hat{\theta}^{AIPW}_{\hat{\mu};\hat{\pi}} = \frac{1}{n+N}\sum_{i=1}^{n+N}\left[\hat{\mu}(X_i) + \frac{R_i}{\hat{\pi}(X_i)}\left(Y_i - \hat{\mu}(X_i)\right)\right]. \quad (22)$$
Similarly, the OR and IPW (Rosenbaum and Rubin, 1983) estimators for the multivariate outcome mean are:
$$\hat{\theta}^{OR}_{\hat{\mu}} = \frac{1}{n+N}\sum_{i=1}^{n+N}\hat{\mu}(X_i), \qquad \hat{\theta}^{IPW}_{\hat{\pi}} = \frac{1}{n+N}\sum_{i=1}^{n+N}\frac{R_i Y_i}{\hat{\pi}(X_i)}. \quad (23)$$
Here, we set $k = 2$, so that the problem becomes bivariate outcome mean estimation. Figure 1 reports results for the decaying offset scenario with $n = 100$ labeled samples and $N = 1000$ unlabeled samples, the outcome model $\hat{\mu}$ estimated using linear regression, and the propensity score model $\hat{\pi}$ estimated via logistic regression with offset. All the other simulation results are displayed in Supplementary Material Section C. The boxplot of RMSE shows that both our proposal and OR achieve low and stable error across replications, while IPW exhibits slightly greater variability and higher median error. The naive estimator, which does not adjust for distribution shift, performs markedly worse, with large variability and many outliers. The right panel confirms these trends in terms of coverage: our approach attains near-nominal coverage (around 95%), while OR slightly undercovers. IPW suffers from substantial undercoverage, and the naive estimator fails entirely, with average coverage close to zero. These results highlight the robustness and efficiency of our proposal under the decaying overlap regime. Similar conclusions can be drawn by analyzing other simulation scenarios in Supplementary Material Section C.

Figure 1: Multivariate outcome mean simulation results.
The simulation is run 100 times under the decaying offset scenario with $n = 100$ labeled samples and $N = 1000$ unlabeled samples; the outcome model $\hat{\mu}$ is estimated using linear regression, and the propensity score model $\hat{\pi}$ is estimated via logistic regression with offset. The left panel displays boxplots summarizing the RMSE distribution over different seeds of the considered estimators. The right panel shows the empirical coverage and $\pm 2$ standard error bars around it.

4.2 Linear regression coefficients

Given full data $D^F = (X, Y)$, with $X \in \mathbb{R}^p$ and $Y \in \mathbb{R}$, the $p$-dimensional full-data influence function for the $p$-dimensional target parameter $\theta^\star = \arg\min_\theta \mathbb{E}\left[(Y - X^T\theta)^2\right]$ is
$$\phi^F\left(D^F;\theta^\star\right) = \Sigma_X^{-1} X\left(Y - X^T\theta^\star\right), \quad (24)$$
where $\Sigma_X = \mathbb{E}\left[X X^T\right] \in \mathbb{R}^{p \times p}$. The observed data are $D = (X, R, RY)$, and thus the observed-data influence function takes the form
$$\phi(D;\theta^\star) = \Sigma_X^{-1} X \mu^\star(X) + \frac{R}{\pi^\star(X)}\Sigma_X^{-1} X\left(Y - \mu^\star(X)\right) - \Sigma_X^{-1} X X^T\theta^\star, \quad (25)$$
where $\mu^\star : \mathbb{R}^p \to \mathbb{R}$ is equal to $\mathbb{E}[Y \mid X]$. We are implicitly ignoring the further nuisance $\mathbb{E}\left[X X^T\right]$, which can easily be estimated at a parametric rate, i.e. $(n+N)^{-1/2}$, using the covariates in both the labeled and unlabeled datasets via $\hat{\Sigma} = (n+N)^{-1}\sum_{i=1}^{n+N} X_i X_i^T$, and thus does not represent an obstacle to our approach. The previous influence function motivates the augmented inverse probability weighting (AIPW) estimator
$$\hat{\theta}^{AIPW}_{\hat{\mu};\hat{\pi}} = \hat{\Sigma}^{-1}\frac{1}{n+N}\sum_{i=1}^{n+N} X_i\left[\hat{\mu}(X_i) + \frac{R_i}{\hat{\pi}(X_i)}\left(Y_i - \hat{\mu}(X_i)\right)\right]. \quad (26)$$
Similarly, the OR and IPW estimators for the linear regression coefficients are:
$$\hat{\theta}^{OR}_{\hat{\mu}} = \hat{\Sigma}^{-1}\frac{1}{n+N}\sum_{i=1}^{n+N} X_i\hat{\mu}(X_i), \qquad \hat{\theta}^{IPW}_{\hat{\pi}} = \hat{\Sigma}^{-1}\frac{1}{n+N}\sum_{i=1}^{n+N}\frac{X_i R_i Y_i}{\hat{\pi}(X_i)}. \quad (27)$$
Figure 2 reports results for the decaying offset scenario with $n = 100$ labeled samples and $N = 1000$ unlabeled samples, the outcome model $\hat{\mu}$ estimated using linear regression, and the propensity score model $\hat{\pi}$ estimated using logistic regression with offset.
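The estimators compared in this section can be sketched end to end. The following self-contained illustration (our own code, not the authors' implementation) simulates the Section 4 design in its decaying MCAR variant and computes the naive, OR, IPW, and AIPW outcome-mean estimates with a linear projection model and the constant propensity estimator; cross-fitting is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n, N = 10, 2, 100, 1000
n_tot = n + N

# Data-generating process of Section 4, decaying MCAR variant.
X = rng.normal(size=(n_tot, p))
beta = rng.normal(size=(p, k))
Y = X @ beta + rng.normal(size=(n_tot, k))
pi = np.full(n_tot, n / n_tot)                   # pi*(x) = n/(n+N)
R = rng.binomial(1, pi)

theta_star = np.zeros(k)                          # E[Y] = 0 under this DGP

# Nuisances: linear-regression projection model fitted on labeled rows,
# constant (MCAR) propensity estimator.
lab = R == 1
coef, *_ = np.linalg.lstsq(X[lab], Y[lab], rcond=None)
mu_hat = X @ coef
pi_hat = np.full(n_tot, R.mean())

w = (R / pi_hat)[:, None]                         # inverse-probability weights
theta_naive = Y[lab].mean(axis=0)
theta_or = mu_hat.mean(axis=0)                    # Eq. (23), left
theta_ipw = (w * Y).mean(axis=0)                  # Eq. (23), right; w zeroes unlabeled rows
theta_aipw = (mu_hat + w * (Y - mu_hat)).mean(axis=0)   # Eq. (22)

for name, est in [("naive", theta_naive), ("OR", theta_or),
                  ("IPW", theta_ipw), ("AIPW", theta_aipw)]:
    print(name, np.linalg.norm(est - theta_star))
```

Under MCAR all four estimators are consistent; the differences the paper documents emerge under the covariate-dependent decaying-offset mechanism, which simply replaces the constant `pi` above with the offset logistic model.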
All other simulation results are shown in Supplementary Material Section C. The boxplot of root mean squared error (RMSE) indicates that our proposed DS3 estimator and the OR and IPW methods achieve low and stable error across simulation replications. In contrast, the naive estimator, which fails to account for distribution shift, performs considerably worse, exhibiting high variability and numerous outliers. The right panel, which reports empirical coverage, reinforces these trends: both our method and OR attain coverage close to the nominal 95% level, with our approach being slightly overconservative in this setting. In contrast, both IPW and the naive estimator display significant undercoverage. These results illustrate the robustness and efficiency of our proposal in the presence of decaying overlap. Similar conclusions hold across additional simulation scenarios, as detailed in Supplementary Material Section C.

Figure 2: Linear regression coefficients simulation results. The simulation is run 100 times under the decaying offset scenario with $n = 100$ labeled samples and $N = 1000$ unlabeled samples; the outcome model $\hat{\mu}$ is estimated using linear regression, and the propensity score model $\hat{\pi}$ is estimated via logistic regression with offset. The left panel displays boxplots summarizing the RMSE distribution over different seeds of the considered estimators. The right panel shows the empirical coverage and $\pm 2$ standard error bars around it.

5 Real-data applications

5.1 BLE-RSSI study

To demonstrate the practical utility of our approach, we first apply it to the BLE-RSSI dataset (Mohammadi et al., 2017). This dataset was collected in a library using Received Signal Strength Indicator (RSSI) measurements obtained from an array of 13 Bluetooth Low Energy (BLE) iBeacons. Data acquisition was performed using a smartphone during the library's operational hours.
The primary objective motivating this dataset is to enable indoor localization, that is, to accurately estimate the spatial position of a mobile device based on wireless signal strength measurements, a key problem in many real-world applications including navigation, resource management, and smart building systems. The dataset comprises two components: a labeled subset containing $n = 1420$ observations and an unlabeled subset containing $N = 5191$ observations. For each labeled observation, the recorded features include a location identifier and the RSSI readings from the 13 iBeacons. Location identifiers combine a letter (representing longitude) and a number (representing latitude) within the physical layout of the floor. A schematic diagram illustrating the arrangement of the iBeacons and the labeled locations is provided in Supplementary Material Figure D.17. Each location label is then mapped to a two-dimensional real-valued coordinate, representing the longitude and latitude of the device at the time of measurement.

The inferential goal in this application is to estimate the mean spatial position of the device across the observed population. Thus, the problem can be formulated as one of bivariate outcome mean estimation,

Figure 3: BLE-RSSI application results. Left panel: estimated propensity scores across observations, along with the proportion of labeled data $n/(n+N)$, shown as a vertical dashed line. The heterogeneity in the estimated propensity scores may suggest a missing-at-random (MAR) labeling mechanism. Right
panel: joint bivariate distribution over the floor grid of the observed device positions $Y$ compared to the distribution of positions predicted by the estimated outcome model using the signals recorded by remote sensors. The discrepancy between observed and predicted outcomes may suggest a missing-at-random (MAR) labeling mechanism. Dots represent the bivariate means computed using our approach and the naive approach.

where the target parameter is the mean of a two-dimensional random vector. Formally, let $Y_i \in \mathbb{R}^2$ denote the bivariate spatial coordinates (longitude and latitude) for observation $i$; $R_i \in \{0, 1\}$ indicate whether the location label is observed; and $X_i \in \mathbb{R}^p$ represent the vector of $p = 13$ RSSI readings from the iBeacons, used as auxiliary covariates. The observed data take the form $D_i = (X_i, R_i, R_i Y_i)$, for $i = 1, \ldots, n+N$.

We compare two estimation strategies: the naive estimator, which only uses labeled observations and ignores selection bias, and the DS3 estimator defined in Equation 22, which corrects for the semi-supervised structure. For the DS3 estimator, the outcome regression function $\hat{\mu}$ is estimated using random forests (Breiman, 2001), with RSSI measurements as predictors. The propensity score $\hat{\pi}$ is estimated using logistic regression with offset, also based on RSSI predictors. To mitigate overfitting and ensure valid asymptotic properties, both nuisance functions are estimated using cross-fitting with $J = 5$ folds.

Results are presented in Figure 3. The heterogeneous distribution of the estimated propensity scores indicates that the probability of label observation varies across the covariate space, supporting the plausibility of a missing-at-random (MAR) assumption conditional on the RSSI features. Furthermore, the distribution of the spatial coordinates imputed from the predictive model $\hat{\mu}$ shows a marked shift compared to the labeled outcomes $Y$, moving upward and to the left in the two-dimensional grid.
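The offset logistic regression used for $\hat{\pi}$ here (and in the simulations of Section 4) fixes the offset at the log labeling fraction and estimates only the slope coefficients. Since many off-the-shelf logistic implementations do not expose a fixed offset, a minimal Newton-Raphson sketch (our own illustration, not the authors' code; all names are ours) is:

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_offset_logistic(X, R, offset, n_iter=25, ridge=1e-8):
    """Fit P[R=1|X] = expit(offset + X @ gamma) by Newton-Raphson.
    The offset is a fixed, known term; only gamma is estimated."""
    gamma = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = expit(offset + X @ gamma)
        grad = X.T @ (R - p)                       # score of the Bernoulli log-likelihood
        hess = (X * (p * (1 - p))[:, None]).T @ X  # Fisher information
        gamma += np.linalg.solve(hess + ridge * np.eye(X.shape[1]), grad)
    return gamma

# Recover a known slope vector under a decaying offset log(n/(n+N)).
rng = np.random.default_rng(0)
n_lab, n_unlab, p = 1000, 9000, 3
X = rng.normal(size=(n_lab + n_unlab, p))
gamma_true = np.array([0.5, -0.5, 0.25])
offset = np.log(n_lab / (n_lab + n_unlab))
R = rng.binomial(1, expit(offset + X @ gamma_true))
gamma_hat = fit_offset_logistic(X, R, offset)
print(np.round(gamma_hat, 2))
```

In the application, the fitted probabilities $\mathrm{expit}(\text{offset} + X\hat{\gamma})$ would serve as $\hat{\pi}(X_i)$ inside the AIPW estimator of Equation 22.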
This discrepancy between the labeled and imputed outcome distributions highlights the necessity of properly accounting for the semi-supervised structure of the data. Ignoring the unlabeled data or treating the labeled set as representative would likely introduce substantial bias in the estimation of the mean device position. Overall, this example demonstrates how the proposed methodology can successfully leverage partially labeled data to produce robust and unbiased inference in complex real-world settings.

5.2 METABRIC study

We further apply our approach to the METABRIC dataset (Curtis et al., 2012; Pereira et al., 2016), one of the largest open-access breast cancer cohorts with gene expression profiles, clinical covariates, and long-term survival follow-up. Cheng et al. (2021) and Hsu and Lin (2023) identified 20 biomarkers (reported in Supplementary Material Table D.1) as potentially predictive of 5-year disease-specific survival (DSS) status, and defined two prognosis classes accordingly: the poor prognosis class includes patients who died of breast cancer within five years of diagnosis, while the good prognosis class includes those who died of breast cancer after five years. Here, we ask a related but complementary question. Instead of discretizing survival outcomes into prognosis classes, we treat survival time (in months) as a continuous response variable. Then, we aim to estimate the effect of the selected biomarkers on survival among patients who died of breast cancer. Following the labeling conventions of Cheng et al. (2021) and Hsu and Lin (2023), we treat censored
patients – either still alive or deceased from other causes – as unlabeled. The resulting dataset includes $n = 611$ labeled observations and $N = 1231$ unlabeled observations.

We frame this task as a coefficient estimation problem in linear regression. Let $Y_i$ denote the survival time (in months) since initial diagnosis for individual $i$; $R_i \in \{0, 1\}$ indicate whether the individual died of breast cancer ($R_i = 1$) or not; $X_i \in \mathbb{R}^p$ represent the vector of expression values for $p = 20$ biomarkers of interest; and $W_i \in \mathbb{R}^s$, with $s = 10$, denote additional clinical features (also described in Supplementary Table D.1) used to aid estimation of nuisance components. The observed data are $D_i = (X_i, W_i, R_i, R_i Y_i)$, for $i = 1, \ldots, n+N$.

We employ both the naive estimator, which only analyzes labeled data, and the DS3 estimator in Equation 26. For the latter, the outcome regression model $\hat{\mu}$ is estimated using random forests (Breiman, 2001), with both biomarker and clinical covariates included as predictors. The propensity score $\hat{\pi}$ is estimated via logistic regression, again leveraging both sets of covariates. All nuisance functions are estimated using cross-fitting with $J = 5$ folds to ensure valid inference.

While our approach casts the problem as one of linear regression under distribution shift, we acknowledge that alternative methodological frameworks could also be applied. In particular, since the outcome of interest is survival time, this setting naturally lends itself to tools from survival analysis, such as Cox proportional hazards models (Cox, 1972), which are designed to explicitly account for censoring and time-to-event structure. Moreover, our focus on a fixed, literature-informed set of biomarkers connects to the field of inference after feature selection (Berk et al., 2013; Kuchibhotla et al., 2022), which provides tools for valid inference after variable selection.
Although we do not pursue these directions here, our method complements these perspectives by offering a flexible and semiparametrically grounded approach to semi-supervised inference under distribution shift, especially in regimes where labeling is both biased and sparse.

Results are shown in Figure 4. At the significance level $\alpha = 0.1$, the naive estimator identifies three biomarkers – BTRC, FBXO6, and ZDHHC17 – as significantly associated with survival outcomes. In contrast, our semi-supervised approach identifies two additional significant biomarkers: KRAS and PLK1. Prior studies provide biological support for these findings. Specifically, Kim et al. (2015) and Tokumaru et al. (2020) report that elevated KRAS expression is linked to poorer survival outcomes, while Garlapati et al. (2023) and Ueda et al. (2019) associate higher PLK1 levels with shorter survival in breast cancer patients. Moreover, of the 20 biomarkers analyzed, 18 exhibit consistent coefficient signs between our DS3 estimator and the naive estimator, indicating strong qualitative agreement. However, for two biomarkers, the naive approach yields opposite signs. One such case is GSK3B, where our DS3 estimator identifies a positive association with survival time among patients who died of breast cancer, suggesting that higher expression is linked to better outcomes. This aligns with findings from Luo (2009), which report that GSK3B may function as a "tumor suppressor" for mammary tumors. The second case is DBN1, for which we estimate a negative effect, indicating that higher DBN1 expression is associated with shorter survival, while the naive estimator suggests the opposite. This
https://arxiv.org/abs/2505.06452v1
result aligns with the findings of Alfarsi et al. (2020), who identify DBN1 as a prognostic biomarker in luminal breast cancer. Their study shows that elevated DBN1 expression is associated with more aggressive tumor characteristics and higher rates of relapse among patients receiving adjuvant endocrine therapy. While these results are consistent with existing literature, we emphasize that they are not statistically significant and should be interpreted cautiously.

[Figure 4: METABRIC application results. Left panel: estimated propensity scores across observations, along with the proportion of labeled data $n/(n+N)$, shown as a vertical dashed line. The heterogeneity in the estimated propensity scores may suggest a missing-at-random (MAR) labeling mechanism. Right panel: marginal distribution of the observed survival times $Y$ (in months) compared to the distribution of predicted survival times from the estimated outcome model using both biomarkers and clinical covariates. Vertical lines indicate the means of each distribution. The discrepancy between observed and predicted outcomes further suggests a MAR mechanism. Bottom panel: estimated linear coefficients from our semi-supervised approach and the naive estimator based solely on labeled data. Confidence intervals are at the $\alpha = 0.1$ significance level.]

We note that, because survival times are subject to right-censoring, the missing-at-random (MAR) assumption does not hold: patients with longer survival times are more likely to be censored. However, many patients in METABRIC entered the study at different times, leading to censoring across a range of survival times, not only after five years.
Consequently, although the MAR assumption is imperfect, its violation may be moderate in practice. To investigate this issue, we repeat our analysis using only those censored observations corresponding to deaths from competing causes, and report results in Supplementary Figure D.18. The results using only competing-risk censored data are qualitatively similar to those obtained using all censored observations, supporting the robustness of our approach despite the potential violation of MAR. Summarizing, our analysis supports a favorable prognostic role for GSK3B and adverse roles for KRAS, PLK1, and DBN1 in breast cancer. These findings underscore the importance of adjusting for distribution shift in semi-supervised survival analyses and demonstrate the practical utility of our proposed approach in real-world biomedical applications.

6 Conclusions

In this work, we have developed a general semiparametric framework for estimation and inference in semi-supervised learning problems where outcome labels are missing at random (MAR) and the overlap decays with sample size. Our approach accommodates arbitrary statistical targets and remains valid under flexible modeling of high-dimensional nuisance functions. Leveraging the tools of semiparametric theory, we have derived influence functions under decaying overlap and constructed DS3 estimators that are both doubly robust and asymptotically normal. A key insight of our analysis is that overlap decay imposes fundamental adaptations of inference, invalidating classical root-$n$ asymptotics and requiring a redefinition of the effective sample size. We formally characterized this rate through a generalization of
Lindeberg conditions and provided practical estimators that remain consistent in this more challenging regime. We further extended our methodology to incorporate black-box prediction models in the spirit of prediction-powered inference (PPI), broadening its applicability to MAR labeling mechanisms. Our simulation studies confirmed the theoretical properties of our DS3 estimators, including double robustness, asymptotic normality, and valid coverage under finite samples and severe overlap decay. These findings suggest that our framework is both theoretically sound and practically useful, especially in settings where labeled data are scarce and distribution shift is non-negligible. Despite these contributions, several limitations remain. First, while we allow for decaying overlap and general targets, our current framework assumes that the MAR assumption holds, i.e., distribution shift depends only on covariates and not on unobserved variables. Relaxing this assumption toward a missing-not-at-random (MNAR) setting would require substantially different identification strategies. Second, the performance of our DS3 estimators under strong model misspecification remains to be fully understood. Lastly, while we allow for high-dimensional covariates and flexible machine learning models, our theoretical guarantees rely on sufficient regularity and convergence rates that may be difficult to verify or achieve in practice. Future research directions include extending our theory to MNAR mechanisms and exploring adaptive procedures that optimize the bias-variance trade-off in regions of low overlap. Another important direction is the integration of these techniques into large-scale machine learning pipelines for high-throughput scientific applications, such as genomic association studies, fairness-aware recommendation systems, and policy evaluation in digital platforms.
By relaxing traditional assumptions while preserving rigorous inferential guarantees, our work provides a foundation for reliable statistical inference in the increasingly common regime of biased labeling and partial supervision.

Acknowledgments

L.T. wishes to thank Francesca Chiaromonte and Edward Kennedy for very helpful discussions. This project was funded by National Institute of Mental Health (NIMH) grant R01MH123184.

References

Lutfi H Alfarsi, Rokaya El Ansari, Brendah K Masisi, Ruth Parks, Omar J Mohammed, Ian O Ellis, Emad A Rakha, and Andrew R Green. Integrated analysis of key differentially expressed genes identifies DBN1 as a predictive marker of response to endocrine therapy in luminal breast cancer. Cancers, 12(6):1549, 2020.

Anastasios N Angelopoulos, Stephen Bates, Clara Fannjiang, Michael I Jordan, and Tijana Zrnic. Prediction-powered inference. Science, 382(6671):669–674, 2023a.

Anastasios N Angelopoulos, John C Duchi, and Tijana Zrnic. PPI++: Efficient prediction-powered inference. arXiv preprint arXiv:2311.01453, 2023b.

David Azriel, Lawrence D Brown, Michael Sklar, Richard Berk, Andreas Buja, and Linda Zhao. Semi-supervised linear regression. Journal of the American Statistical Association, 117(540):2238–2251, 2022.

Heejung Bang and James M Robins. Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962–973, 2005.

Richard Berk, Lawrence Brown, Andreas Buja, Kai Zhang, and Linda Zhao. Valid post-selection inference. The Annals of Statistics, pages 802–837, 2013.

Peter J Bickel, Chris AJ Klaassen, Ya'acov Ritov, and Jon A Wellner. Efficient and Adaptive Estimation for Semiparametric Models, volume 4. Springer, 1993.

Leo Breiman. Random forests. Machine Learning, 45:5–32, 2001.

T Cai and Zijian Guo. Semisupervised inference for explained variance in high dimensional linear regression and
its applications. Journal of the Royal Statistical Society Series B: Statistical Methodology, 82(2):391–419, 2020.

Abhishek Chakrabortty, Guorong Dai, and Raymond J Carroll. Semi-supervised quantile estimation: Robust and efficient inference in high dimensional settings. arXiv preprint arXiv:2201.10208, 2022.

Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. Semi-supervised learning (Chapelle, O. et al., eds.; 2006) [book reviews]. IEEE Transactions on Neural Networks, 20(3):542–542, 2009.

Yi-Hau Chen and Hung Chen. A unified approach to regression analysis under double-sampling designs. Journal of the Royal Statistical Society Series B: Statistical Methodology, 62(3):449–460, 2000.

Li-Hsin Cheng, Te-Cheng Hsu, and Che Lin. Integrating ensemble systems biology feature selection and bimodal deep neural network for breast cancer prognosis prediction. Scientific Reports, 11(1):14914, 2021.

David R Cox. Regression models and life-tables. Journal of the Royal Statistical Society: Series B (Methodological), 34(2):187–202, 1972.

Richard K Crump, V Joseph Hotz, Guido W Imbens, and Oscar A Mitnik. Dealing with limited overlap in estimation of average treatment effects. Biometrika, 96(1):187–199, 2009.

Christina Curtis, Sohrab P Shah, Suet-Feung Chin, Gulisa Turashvili, Oscar M Rueda, Mark J Dunning, Doug Speed, Andy G Lynch, Shamith Samarajiwa, Yinyin Yuan, et al. The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups. Nature, 486(7403):346–352, 2012.

Alexander D'Amour, Peng Ding, Avi Feller, Lihua Lei, and Jasjeet Sekhon. Overlap in observational studies with high-dimensional covariates. Journal of Econometrics, 221(2):644–654, 2021.

Chakravarthy Garlapati, Shriya Joshi, Shristi Bhattarai, Jayashree Krishnamurthy, Ravi Chakra Turaga, Thi Nguyen, Xiaoxian Li, and Ritu Aneja. PLK1 and AURKB phosphorylate survivin differentially to affect proliferation in racially distinct triple-negative breast cancer. Cell Death & Disease, 14(1):12, 2023.

Frank R Hampel. The influence curve and its role in robust estimation. Journal of the American Statistical Association, 69(346):383–393, 1974.

Te-Cheng Hsu and Che Lin. Learning from small medical data—robust semi-supervised cancer prognosis classifier with Bayesian variational autoencoder. Bioinformatics Advances, 3(1):vbac100, 2023.

Edward H Kennedy. Semiparametric doubly robust targeted double machine learning: a review. Handbook of Statistical Methods for Precision Medicine, pages 207–236, 2024.

Shakeeb Khan and Elie Tamer. Irregular identification, support conditions, and inverse weight estimation. Econometrica, 78(6):2021–2042, 2010.

Rae-Kwon Kim, Yongjoon Suh, Ki-Chun Yoo, Yan-Hong Cui, Hyeonmi Kim, Min-Jung Kim, In Gyu Kim, and Su-Jae Lee. Activation of KRAS promotes the mesenchymal features of basal-type breast cancer. Experimental & Molecular Medicine, 47(1):e137–e137, 2015.

Arun K Kuchibhotla, John E Kolassa, and Todd A Kuffner. Post-selection inference. Annual Review of Statistics and Its Application, 9(1):505–527, 2022.

Jia Luo. Glycogen synthase kinase 3β (GSK3β) in tumorigenesis and cancer chemotherapy. Cancer Letters, 273(2):194–200, 2009.

Jiacheng Miao, Xinran Miao, Yixuan Wu, Jiwei Zhao, and Qiongshi Lu. Assumption-lean and data-adaptive post-prediction inference. arXiv preprint arXiv:2311.14220, 2023.

Mehdi Mohammadi, Ala Al-Fuqaha, Mohsen Guizani, and Jun-Seok Oh. Semisupervised deep reinforcement learning in support of IoT and smart city services. IEEE Internet of Things Journal, 5(2):624–635, 2017.

Bernard Pereira, Suet-Feung Chin, Oscar M Rueda, Hans-Kristian Moen Vollan, Elena Provenzano, Helen A Bardwell, Michelle Pugh, Linda Jones, Roslin Russell, Stephen-John Sammut, et al. The somatic mutation profiles
of 2,433 breast cancers refine their genomic and transcriptomic landscapes. Nature Communications, 7(1):11479, 2016.

Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, 1983.

Christoph Rothe. Robust confidence intervals for average treatment effects under limited overlap. Econometrica, 85(2):645–660, 2017.

Shanshan Song, Yuanyuan Lin, and Yong Zhou. A general M-estimation theory in semi-supervised framework. Journal of the American Statistical Association, 119(546):1065–1075, 2024.

Lorenzo Testa, Tobia Boschi, Francesca Chiaromonte, Edward H Kennedy, and Matthew Reimherr. Doubly-robust functional average treatment effect estimation. arXiv preprint arXiv:2501.06024, 2025.

Yoshihisa Tokumaru, Masanori Oshi, Eriko Katsuta, Li Yan, Vikas Satyananda, Nobuhisa Matsuhashi, Manabu Futamura, Yukihiro Akao, Kazuhiro Yoshida, and Kazuaki Takabe. KRAS signaling enriched triple negative breast cancer is associated with favorable tumor immune microenvironment and better survival. American Journal of Cancer Research, 10(3):897, 2020.

Anastasios A Tsiatis. Semiparametric Theory and Missing Data, volume 4. Springer, 2006.

Ai Ueda, Keiki Oikawa, Koji Fujita, Akio Ishikawa, Eiichi Sato, Takashi Ishikawa, Masahiko Kuroda, and Kohsuke Kanekura. Therapeutic potential of PLK1 inhibition in triple-negative breast cancer. Laboratory Investigation, 99(9):1275–1286, 2019.

Aad W Van der Vaart. Asymptotic Statistics, volume 3. Cambridge University Press, 2000.

Zichun Xu, Daniela Witten, and Ali Shojaie. A unified framework for semiparametrically efficient semi-supervised learning. arXiv preprint arXiv:2502.17741, 2025.

Shu Yang and Peng Ding. Asymptotic inference of causal effects with observational studies trimmed by the estimated propensity scores. Biometrika, 105(2):487–493, 2018.

Anru Zhang, Lawrence D Brown, and T Tony Cai. Semi-supervised inference: General theory and estimation of means. Annals of Statistics, 2019.

Yuqian Zhang and Jelena Bradic. High-dimensional semi-supervised learning: in search of optimal inference of the mean. Biometrika, 109(2):387–403, 2022.

Yuqian Zhang, Abhishek Chakrabortty, and Jelena Bradic. Double robust semi-supervised inference for the mean: selection bias under MAR labeling with decaying overlap. Information and Inference: A Journal of the IMA, 12(3):2066–2159, 2023.

Xiaojin Jerry Zhu. Semi-supervised learning literature survey. 2005.

Tijana Zrnic and Emmanuel J Candès. Cross-prediction-powered inference. Proceedings of the National Academy of Sciences, 121(15):e2322083121, 2024.

Supplementary Material

A Proofs of main statements

Wherever convenient, integrals and expectations are written in linear functional notation. Thus, instead of $\mathbb{E}[f(X)] = \int f(x)\, P(dx)$, we write $P[f]$. We also write $\phi^F$ to indicate the full-data influence function for notational simplicity.

A.1 Proof of Proposition 3.1

Proof. Given a full-data influence function $\phi^F$, by Theorem 7.2 in Tsiatis (2006), we know that the space of associated observed-data influence functions $\Lambda^\perp$ is given by
$$\Lambda^\perp = \left\{ \frac{R}{\pi^\star(X)}\, \phi^F \oplus \Lambda_2 \right\}, \qquad (28)$$
where $\Lambda_2 = \big\{ L_2(D) : \mathbb{E}[L_2(D) \mid D^F] = 0 \big\}$. By Theorem 10.1 in Tsiatis (2006), for a fixed $\phi^F$, the optimal observed-data influence function among the class in Equation 28 is obtained by choosing
$$L_2(D) = -\Pi\left( \frac{R}{\pi^\star(X)}\, \phi^F \,\middle|\, \Lambda_2 \right), \qquad (29)$$
where the operator $\Pi$ projects the element $R \phi^F / \pi^\star(X)$ onto the space $\Lambda_2$ – see Theorem 2.1 in Tsiatis (2006). Finally, by Theorem 10.2 in Tsiatis (2006), we know that
$$\Pi\left( \frac{R}{\pi^\star(X)}\, \phi^F \,\middle|\, \Lambda_2 \right) = \frac{R - \pi^\star(X)}{\pi^\star(X)}\, h_2(X) \in \Lambda_2, \qquad (30)$$
where $h_2(X) = \mathbb{E}[\phi^F \mid X]$. This concludes the proof.

A.2 Proof of Theorem 3.5

Proof. For simplicity,
we show the result for a single cross-fitting fold. In particular, we denote the distribution where $\hat\mu$ is trained as $\hat P$ and the distribution where the influence functions are approximated as $P_{n+N}$. Similarly, we denote as $\bar\mu$ the population limit of $\hat\mu$. The propensity score is known, so that $\hat\pi = \bar\pi = \pi^\star$. By Von Mises expansion, we have
$$\hat\theta_{\hat\mu} - \theta^\star = P_{n+N}[\phi(D;\theta^\star;\bar\mu;\pi^\star)] + (P_{n+N} - P^\star)\big[ \phi(D;\theta^\star;\hat\mu;\pi^\star) - \phi(D;\theta^\star;\bar\mu;\pi^\star) \big] + \eta(\hat\mu, \mu^\star). \qquad (31)$$
We refer to the three elements on the right-hand side respectively as the influence function term, the empirical process term, and the remainder term.

Consistency. To prove consistency, we need to show that the norm of each term on the right-hand side is $O_P\big( (n+N)^{-1/2} a_{n,N}^{-1/2} \big)$. We do so by analyzing each term separately.

First term (influence function). By definition of influence function, $\mathbb{E}[\phi(D;\theta^\star;\bar\mu;\pi^\star)] = 0$, and thus the first term has mean zero. We now analyze the variance. Define $\bar V_{\theta^\star} = \mathbb{V}[\phi(D;\theta^\star;\bar\mu;\pi^\star)] = \mathbb{E}\big[ \phi(D;\theta^\star;\bar\mu;\pi^\star)\, \phi(D;\theta^\star;\bar\mu;\pi^\star)^T \big]$. By Assumption 3.2, c, we have $\lambda_{\max}(\bar V_{\theta^\star}) \asymp a_{n,N}^{-1}$. Therefore, we have
$$\mathbb{E}\Big[ \big\| (n+N)^{1/2} a_{n,N}^{1/2}\, P_{n+N}[\phi(D;\theta^\star;\bar\mu;\pi^\star)] \big\|_2^2 \Big] = a_{n,N}\, \mathrm{tr}\big( \bar V_{\theta^\star} \big) \le a_{n,N}\, q\, \lambda_{\max}\big( \bar V_{\theta^\star} \big) = O(1), \qquad (32)$$
which in turn implies $\big\| P_{n+N}[\phi(D;\theta^\star;\bar\mu;\pi^\star)] \big\|_2 = O_P\big( (n+N)^{-1/2} a_{n,N}^{-1/2} \big)$.

Second term (empirical process). We need to show that the norm of $(P_{n+N} - P^\star)\big[ \phi(D;\theta^\star;\hat\mu;\pi^\star) - \phi(D;\theta^\star;\bar\mu;\pi^\star) \big]$ is of order $O_P\big( (n+N)^{-1/2} a_{n,N}^{-1/2} \big)$. This follows from Lemma 3.7 in Testa et al. (2025) under Assumption 3.2, a.

Third term (remainder). Again, we need to show that the norm of $\eta(\hat\mu, \mu^\star)$ equals $O_P\big( (n+N)^{-1/2} a_{n,N}^{-1/2} \big)$. This directly follows from the fact that the propensity score is known.

The previous results, combined, imply that $\big\| \hat\theta_{\hat\mu} - \theta^\star \big\|_2 = O_P\big( (n+N)^{-1/2} a_{n,N}^{-1/2} \big)$.

Central Limit Theorem. We now turn to the asymptotic normality of $\hat\theta_{\hat\mu}$ by showing that the first term on the right-hand side satisfies a form of Central Limit Theorem, and that it dominates the other components. By Assumption 3.2, c, we have $\lambda_{\min}(\bar V_{\theta^\star}) \asymp a_{n,N}^{-1}$. This implies:
$$\big\| \bar V_{\theta^\star}^{-1/2} \big\|_2^2 = \Big( \lambda_{\max}\big( \bar V_{\theta^\star}^{-1/2} \big) \Big)^2 = \big( \lambda_{\min}(\bar V_{\theta^\star}) \big)^{-1} \asymp a_{n,N}. \qquad (33)$$
By the tail condition d in Assumption 3.2, we have that
$$\mathbb{E}\Big[ \big\| \bar V_{\theta^\star}^{-1/2} \phi(D;\theta^\star;\bar\mu;\pi^\star) \big\|_2^2\, \mathbf{1}\big\{ \big\| \bar V_{\theta^\star}^{-1/2} \phi(D;\theta^\star;\bar\mu;\pi^\star) \big\|_2 > \epsilon \sqrt{n+N} \big\} \Big] \le \mathbb{E}\Big[ \big\| \bar V_{\theta^\star}^{-1/2} \big\|_2^2\, \big\| \phi(D;\theta^\star;\bar\mu;\pi^\star) \big\|_2^2\, \mathbf{1}\big\{ \big\| \bar V_{\theta^\star}^{-1/2} \big\|_2\, \big\| \phi(D;\theta^\star;\bar\mu;\pi^\star) \big\|_2 > \epsilon \sqrt{n+N} \big\} \Big] \le \mathbb{E}\Big[ a_{n,N}\, \big\| \phi(D;\theta^\star;\bar\mu;\pi^\star) \big\|_2^2\, \mathbf{1}\big\{ \sqrt{a_{n,N}}\, \big\| \phi(D;\theta^\star;\bar\mu;\pi^\star) \big\|_2 > \epsilon \sqrt{n+N} \big\} \Big] \to 0 \quad \text{as } n, N \to \infty, \qquad (34)$$
where the first inequality follows from $\| A v \|_2 \le \| A \|_2 \| v \|_2$ and the second inequality is due to the chain of equalities in Equation 33. Therefore the following Lindeberg condition also holds
$$(n+N)^{-1} \sum_{i=1}^{n+N} \mathbb{E}\Big[ \big\| \bar V_{\theta^\star}^{-1/2} \phi(D_i;\theta^\star;\bar\mu;\pi^\star) \big\|_2^2\, \mathbf{1}\big\{ \big\| \bar V_{\theta^\star}^{-1/2} \phi(D_i;\theta^\star;\bar\mu;\pi^\star) \big\|_2 > \epsilon \sqrt{n+N} \big\} \Big] \to 0 \quad \text{as } n, N \to \infty, \qquad (35)$$
which implies, by the Lindeberg–Feller Central Limit Theorem (Van der Vaart, 2000), that
$$(n+N)^{-1/2}\, \bar V_{\theta^\star}^{-1/2} \sum_{i=1}^{n+N} \phi(D_i;\theta^\star;\bar\mu;\pi^\star) \rightsquigarrow \mathcal{N}\big( 0, I_q \big). \qquad (36)$$
In summary, we have
$$(n+N)^{1/2}\, \bar V_{\theta^\star}^{-1/2} \big( \hat\theta_{\hat\mu} - \theta^\star \big) = (n+N)^{-1/2}\, \bar V_{\theta^\star}^{-1/2} \sum_{i=1}^{n+N} \phi(D_i;\theta^\star;\bar\mu;\pi^\star) + o_P(1). \qquad (37)$$

A.3 Proof of Theorem 3.7

Proof. For simplicity, we show the result for a single cross-fitting fold. In particular, we denote the distribution where $\hat\mu$ and $\hat\pi$ are trained as $\hat P$ and the distribution where the influence functions are approximated as $P_{n+N}$. Similarly, we denote as $\bar\mu$ and $\bar\pi$ the population limits of $\hat\mu$ and $\hat\pi$, respectively. By Von Mises expansion, we have
$$\hat\theta_{\hat\mu;\hat\pi} - \theta^\star = P_{n+N}[\phi(D;\theta^\star;\bar\mu;\bar\pi)] + (P_{n+N} - P^\star)\big[ \phi(D;\theta^\star;\hat\mu;\hat\pi) - \phi(D;\theta^\star;\bar\mu;\bar\pi) \big] + \eta\big( (\hat\mu, \hat\pi), (\mu^\star, \pi^\star) \big). \qquad (38)$$

Consistency. To prove consistency, we need to show that the norm of each term on the right-hand side is $O_P\big( (n+N)^{-1/2} a_{n,N}^{-1/2} \big)$. We
do so by analyzing each term separately.

First term (influence function). By definition of influence function, $\mathbb{E}[\phi(D;\theta^\star;\bar\mu;\bar\pi)] = 0$, and thus the first term has mean zero. We now analyze the variance. Define $\bar V_{\theta^\star} = \mathbb{V}[\phi(D;\theta^\star;\bar\mu;\bar\pi)] = \mathbb{E}\big[ \phi(D;\theta^\star;\bar\mu;\bar\pi)\, \phi(D;\theta^\star;\bar\mu;\bar\pi)^T \big]$. By Assumption 3.2, c, we have $\lambda_{\max}(\bar V_{\theta^\star}) \asymp a_{n,N}^{-1}$. Therefore, we have
$$\mathbb{E}\Big[ \big\| (n+N)^{1/2} a_{n,N}^{1/2}\, P_{n+N}[\phi(D;\theta^\star;\bar\mu;\bar\pi)] \big\|_2^2 \Big] = a_{n,N}\, \mathrm{tr}\big( \bar V_{\theta^\star} \big) \le a_{n,N}\, q\, \lambda_{\max}\big( \bar V_{\theta^\star} \big) = O(1), \qquad (39)$$
which in turn implies $\big\| P_{n+N}[\phi(D;\theta^\star;\bar\mu;\bar\pi)] \big\|_2 = O_P\big( (n+N)^{-1/2} a_{n,N}^{-1/2} \big)$.

Second term (empirical process). We need to show that the norm of $(P_{n+N} - P^\star)\big[ \phi(D;\theta^\star;\hat\mu;\hat\pi) - \phi(D;\theta^\star;\bar\mu;\bar\pi) \big]$ is of order $O_P\big( (n+N)^{-1/2} a_{n,N}^{-1/2} \big)$. This follows from Lemma 3.7 in Testa et al. (2025) under Assumption 3.2, a.

Third term (remainder). Again, we need to show that the norm of $\eta\big( (\hat\mu, \hat\pi), (\mu^\star, \pi^\star) \big)$ equals $O_P\big( (n+N)^{-1/2} a_{n,N}^{-1/2} \big)$. This directly follows from Assumption 3.2, b.

The previous results, combined, imply that $\big\| \hat\theta_{\hat\mu;\hat\pi} - \theta^\star \big\|_2 = O_P\big( (n+N)^{-1/2} a_{n,N}^{-1/2} \big)$.

Central Limit Theorem. We now turn to the asymptotic normality of $\hat\theta_{\hat\mu;\hat\pi}$ by showing that the first term on the right-hand side satisfies a form of Central Limit Theorem, and that it dominates the other components. By Assumption 3.2, c, we have $\lambda_{\min}(\bar V_{\theta^\star}) \asymp a_{n,N}^{-1}$. This implies:
$$\big\| \bar V_{\theta^\star}^{-1/2} \big\|_2^2 = \Big( \lambda_{\max}\big( \bar V_{\theta^\star}^{-1/2} \big) \Big)^2 = \big( \lambda_{\min}(\bar V_{\theta^\star}) \big)^{-1} \asymp a_{n,N}. \qquad (40)$$
By the tail condition d in Assumption 3.2, we have that
$$\mathbb{E}\Big[ \big\| \bar V_{\theta^\star}^{-1/2} \phi(D;\theta^\star;\bar\mu;\bar\pi) \big\|_2^2\, \mathbf{1}\big\{ \big\| \bar V_{\theta^\star}^{-1/2} \phi(D;\theta^\star;\bar\mu;\bar\pi) \big\|_2 > \epsilon \sqrt{n+N} \big\} \Big] \le \mathbb{E}\Big[ \big\| \bar V_{\theta^\star}^{-1/2} \big\|_2^2\, \big\| \phi(D;\theta^\star;\bar\mu;\bar\pi) \big\|_2^2\, \mathbf{1}\big\{ \big\| \bar V_{\theta^\star}^{-1/2} \big\|_2\, \big\| \phi(D;\theta^\star;\bar\mu;\bar\pi) \big\|_2 > \epsilon \sqrt{n+N} \big\} \Big] \le \mathbb{E}\Big[ a_{n,N}\, \big\| \phi(D;\theta^\star;\bar\mu;\bar\pi) \big\|_2^2\, \mathbf{1}\big\{ \sqrt{a_{n,N}}\, \big\| \phi(D;\theta^\star;\bar\mu;\bar\pi) \big\|_2 > \epsilon \sqrt{n+N} \big\} \Big] \to 0 \quad \text{as } n, N \to \infty, \qquad (41)$$
where the first inequality follows from $\| A v \|_2 \le \| A \|_2 \| v \|_2$ and the second inequality is due to the chain of equalities in Equation 40. Therefore the following Lindeberg condition also holds
$$(n+N)^{-1} \sum_{i=1}^{n+N} \mathbb{E}\Big[ \big\| \bar V_{\theta^\star}^{-1/2} \phi(D_i;\theta^\star;\bar\mu;\bar\pi) \big\|_2^2\, \mathbf{1}\big\{ \big\| \bar V_{\theta^\star}^{-1/2} \phi(D_i;\theta^\star;\bar\mu;\bar\pi) \big\|_2 > \epsilon \sqrt{n+N} \big\} \Big] \to 0 \quad \text{as } n, N \to \infty, \qquad (42)$$
which implies, by the Lindeberg–Feller Central Limit Theorem (Van der Vaart, 2000), that
$$(n+N)^{-1/2}\, \bar V_{\theta^\star}^{-1/2} \sum_{i=1}^{n+N} \phi(D_i;\theta^\star;\bar\mu;\bar\pi) \rightsquigarrow \mathcal{N}\big( 0, I_q \big). \qquad (43)$$
In summary, we have
$$(n+N)^{1/2}\, \bar V_{\theta^\star}^{-1/2} \big( \hat\theta_{\hat\mu;\hat\pi} - \theta^\star \big) = (n+N)^{-1/2}\, \bar V_{\theta^\star}^{-1/2} \sum_{i=1}^{n+N} \phi(D_i;\theta^\star;\bar\mu;\bar\pi) + o_P(1). \qquad (44)$$

A.4 Proof of Corollary 3.9

Proof. This is a straightforward application of Theorem 3.7, where $f(X)$ is employed as an additional covariate.

B Additional theoretical results

Here, we provide some high-level conditions that imply Assumption 3.2, c, for the multivariate outcome mean target. We assume that the propensity score $\pi^\star$ is known, so that $\hat\pi = \bar\pi = \pi^\star$. We first provide a simple yet useful lemma.

Lemma B.1. Assume $\hat\pi = \bar\pi = \pi^\star$. Then
$$\mathbb{E}\left[ \left( \frac{R}{\hat\pi(X)} - 1 \right)^2 \,\middle|\, X \right] = \frac{1}{\hat\pi(X)} - 1. \qquad (45)$$

Proof. The following chain holds:
$$\mathbb{E}\left[ \left( \frac{R}{\hat\pi(X)} - 1 \right)^2 \middle|\, X \right] = \mathbb{E}\left[ \frac{R^2 + \hat\pi^2(X) - 2 R\, \hat\pi(X)}{\hat\pi^2(X)} \,\middle|\, X \right] = \mathbb{E}\left[ \frac{R}{\hat\pi^2(X)} \,\middle|\, X \right] + 1 - 2\, \mathbb{E}\left[ \frac{R}{\hat\pi(X)} \,\middle|\, X \right] = \frac{\mathbb{E}[R \mid X]}{\hat\pi^2(X)} + 1 - \frac{2\, \mathbb{E}[R \mid X]}{\hat\pi(X)} = \frac{1}{\hat\pi(X)} - 1, \qquad (46)$$
where we are using the facts $R^2 = R$ and $\hat\pi(X) = \pi^\star(X)$.

Proposition B.2 (High-level conditions for multivariate outcome mean target). Let $\theta^\star = \mathbb{E}[Y]$ and $\hat\pi = \bar\pi = \pi^\star$. Assume that, for all $\ell = 1, \ldots, k$, $\mathbb{E}[(Y_\ell - \mu^\star_\ell(X))^2 \mid X] \ge \sigma^2_\ell$, $\mathbb{E}[(Y_\ell - \bar\mu_\ell(X))^2 \mid X] \le \bar\sigma^2_\ell$, and $\mathbb{V}[Y_\ell] \le \bar\sigma^2_\ell$. Then $\lambda_{\min}\big( \mathbb{V}[\phi(D;\theta^\star;\bar\mu;\pi^\star)] \big) \asymp a_{n,N}^{-1}$.

Proof. We first derive additional uniform lower bounds for $\mathbb{E}[(Y_\ell - \bar\mu_\ell(X))^2 \mid X]$ and $\mathbb{V}[Y_\ell]$. By iterated expectation, we have:
$$\mathbb{E}\big[ (Y_\ell - \bar\mu_\ell(X))^2 \mid X \big] = \mathbb{E}\big[ (Y_\ell - \mu^\star_\ell(X) + \mu^\star_\ell(X) - \bar\mu_\ell(X))^2 \mid X \big] = \mathbb{E}\big[ (Y_\ell - \mu^\star_\ell(X))^2 \mid X \big] + \mathbb{E}\big[ (\mu^\star_\ell(X) - \bar\mu_\ell(X))^2 \mid X \big] + 2\, \mathbb{E}\big[ (Y_\ell - \mu^\star_\ell(X)) (\mu^\star_\ell(X) - \bar\mu_\ell(X)) \mid X \big] = \mathbb{E}\big[ (Y_\ell - \mu^\star_\ell(X))^2 \mid X \big] + \mathbb{E}\big[ (\mu^\star_\ell(X) - \bar\mu_\ell(X))^2 \mid X \big] \ge \sigma^2_\ell. \qquad (47)$$
Similarly, we can show that:
$$\mathbb{V}[Y_\ell] = \mathbb{E}\big[ (Y_\ell - \mathbb{E}[Y_\ell])^2 \big] = \mathbb{E}\Big[ \mathbb{E}\big[ (Y_\ell - \mu^\star_\ell(X))^2 \mid X \big] \Big] + \mathbb{E}\big[ (\mu^\star_\ell(X) - \mathbb{E}[Y_\ell])^2 \big] \ge \sigma^2_\ell. \qquad (48)$$
We now show that each diagonal element of $\bar V_{\theta^\star}$ is of the same rate as $a_{n,N}^{-1}$. In particular, for each $\ell = 1, \ldots, k$ we have:
$$\bar V_{\theta^\star}[\ell, \ell] = \mathbb{E}\big[ \phi(D;\theta^\star;\bar\mu;\pi^\star)_\ell^2 \big] = \mathbb{E}\left[ \left( \bar\mu_\ell(X) + \frac{R\, (Y_\ell - \bar\mu_\ell(X))}{\pi^\star(X)} - \mathbb{E}[Y_\ell] \right)^2 \right] = \mathbb{E}\left[ \left( \frac{(R - \pi^\star(X))(Y_\ell - \bar\mu_\ell(X))}{\pi^\star(X)} + Y_\ell - \mathbb{E}[Y_\ell] \right)^2 \right] = \mathbb{E}\left[ \frac{(R - \pi^\star(X))^2}{(\pi^\star(X))^2}\, (Y_\ell - \bar\mu_\ell(X))^2 \right] + \mathbb{V}[Y_\ell] = \mathbb{E}\left[ \frac{1 - \pi^\star(X)}{\pi^\star(X)}\, (Y_\ell - \bar\mu_\ell(X))^2 \right] + \mathbb{V}[Y_\ell], \qquad (49)$$
where we are using Lemma B.1. Therefore we have that
$$a_{n,N}\, \bar V_{\theta^\star}[\ell, \ell] \ge a_{n,N} \left[ \left( \frac{1}{\pi^\star(X)} - 1 \right) \sigma^2_\ell + \sigma^2_\ell \right] = \sigma^2_\ell > 0, \qquad (50)$$
and similarly we can show that
$$a_{n,N}\, \bar V_{\theta^\star}[\ell, \ell] \le a_{n,N} \left[ \left( \frac{1}{\pi^\star(X)} - 1 \right) \bar\sigma^2_\ell + \bar\sigma^2_\ell \right] = \bar\sigma^2_\ell < \infty. \qquad (51)$$
The last two inequalities imply that $\bar V_{\theta^\star}[\ell, \ell] \asymp a_{n,N}^{-1}$. We can now exploit Cauchy–Schwarz to extend the previous results to the off-diagonal elements of $\bar V_{\theta^\star}$. In fact:
$$\big| \bar V_{\theta^\star}[\ell_1, \ell_2] \big| \le \sqrt{ \bar V_{\theta^\star}[\ell_1, \ell_1]\, \bar V_{\theta^\star}[\ell_2, \ell_2] } \asymp a_{n,N}^{-1}. \qquad (52)$$
Therefore, for all $\ell_1 = 1, \ldots, k$ and $\ell_2 = 1, \ldots, k$, it holds that
$$\bar V_{\theta^\star}[\ell_1, \ell_2] \asymp a_{n,N}^{-1}, \qquad (53)$$
which concludes the proof.

C Further simulation results

C.1 Multivariate outcome mean

[Figure C.1: Multivariate outcome mean simulation results. Empirical coverage is reported for the setting with logistic true propensity score $\pi^\star$, linear outcome regression model $\hat\mu$, and a constant propensity model $\hat\pi$. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with $n \in \{100, 1000, 10000\}$ and $N \in \{10n, 100n\}$.]
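The empirical coverage reported in these figures is the fraction of simulation replicates whose confidence interval contains the true parameter. A minimal sketch of that computation, assuming normal-based intervals at the $\alpha = 0.1$ level used throughout (hypothetical helper, not the authors' simulation code):

```python
import random

random.seed(1)

def coverage(true_theta, estimates, std_errors, z=1.645):
    """Fraction of normal-based CIs (est +/- z * se) containing true_theta.

    z = 1.645 is (approximately) the normal quantile for alpha = 0.1.
    """
    hits = sum(
        1 for est, se in zip(estimates, std_errors)
        if est - z * se <= true_theta <= est + z * se
    )
    return hits / len(estimates)

# Toy replicates: an unbiased estimator with known standard error 0.1,
# so empirical coverage should be close to the nominal 90%.
reps = 2000
ests = [random.gauss(0.0, 0.1) for _ in range(reps)]
ses = [0.1] * reps
cov = coverage(0.0, ests, ses)
```

In the paper's panels the same quantity is computed per method (DS3, OR, IPW, Naive) and per $(n, N)$ configuration.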
DS3OR IPW Naive0.00.51.0Coverage n = 100, N = 1000 DS3OR IPW Naive0.00.51.0 n = 1000, N = 10000 DS3OR IPW Naive0.00.51.0 n = 10000, N = 100000 DS3OR IPW Naive Method0.00.51.0Coverage n = 100, N = 10000 DS3OR IPW Naive Method0.00.51.0 n = 1000, N = 100000 DS3OR IPW Naive Method0.00.51.0 n = 10000, N = 1000000 Figure C.2: Multivariate outcome mean simulation results. Empirical coverage is reported for the setting with logistic true propensity score π⋆, linear outcome regression model ˆµ, and a logistic propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100, 1000, 10000}andN∈{10n, 100 n}. 25 DS3OR IPW Naive0.00.51.0Coverage n = 100, N = 1000 DS3OR IPW Naive0.00.51.0 n = 1000, N = 10000 DS3OR IPW Naive0.00.51.0 n = 10000, N = 100000 DS3OR IPW Naive Method0.00.51.0Coverage n = 100, N = 10000 DS3OR IPW Naive Method0.00.51.0 n = 1000, N = 100000 DS3OR IPW Naive Method0.00.51.0 n = 10000, N = 1000000Figure C.3: Multivariate outcome mean simulation results. Empirical coverage is reported for the MCAR setting, linear outcome regression model ˆµ, and a constant propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100,1000 ,10000}and N∈{10n, 100 n}. DS3OR IPW Naive0.00.51.0Coverage n = 100, N = 1000 DS3OR IPW Naive0.00.51.0 n = 1000, N = 10000 DS3OR IPW Naive0.00.51.0 n = 10000, N = 100000 DS3OR IPW Naive Method0.00.51.0Coverage n
|
https://arxiv.org/abs/2505.06452v1
|
= 100, N = 10000 DS3OR IPW Naive Method0.00.51.0 n = 1000, N = 100000 DS3OR IPW Naive Method0.00.51.0 n = 10000, N = 1000000 Figure C.4: Multivariate outcome mean simulation results. Empirical coverage is reported for the MCAR setting, linear outcome regression model ˆµ, and a logistic propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100,1000 ,10000}and N∈{10n, 100 n}. 26 DS3OR IPW Naive012RMSE n = 100, N = 1000 DS3OR IPW Naive02 n = 1000, N = 10000 DS3OR IPW Naive012n = 10000, N = 100000 DS3OR IPW Naive Method02RMSE n = 100, N = 10000 DS3OR IPW Naive Method02 n = 1000, N = 100000 DS3OR IPW Naive Method02 n = 10000, N = 1000000Figure C.5: Multivariate outcome mean simulation results. RMSE is reported for the setting with logistic true propensity score π⋆, linear outcome regression model ˆµ, and a constant propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100, 1000, 10000}andN∈{10n, 100 n}. DS3OR IPW Naive012RMSE n = 100, N = 1000 DS3OR IPW Naive02 n = 1000, N = 10000 DS3OR IPW Naive012n = 10000, N = 100000 DS3OR IPW Naive Method02RMSE n = 100, N = 10000 DS3OR IPW Naive Method02 n = 1000, N = 100000 DS3OR IPW Naive Method02 n = 10000, N = 1000000 Figure C.6: Multivariate outcome mean simulation results. RMSE is reported for the setting with logistic true propensity score π⋆, linear outcome regression model ˆµ, and a logistic propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100, 1000, 10000}andN∈{10n, 100 n}. 27 DS3OR IPW Naive0.00.5RMSEn = 100, N = 1000 DS3OR IPW Naive0.00.10.2 n = 1000, N = 10000 DS3OR IPW Naive0.000.05 n = 10000, N = 100000 DS3OR IPW Naive Method0.000.250.50RMSE n = 100, N = 10000 DS3OR IPW Naive Method0.00.10.2n = 1000, N = 100000 DS3OR IPW Naive Method0.000.05 n = 10000, N = 1000000Figure C.7: Multivariate outcome mean simulation results. 
RMSE is reported for the MCAR setting, linear outcome regression model ˆµ, and a constant propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100,1000 ,10000}and N∈{10n, 100 n}. DS3OR IPW Naive0.00.51.0RMSE n = 100, N = 1000 DS3OR IPW Naive0.00.10.2 n = 1000, N = 10000 DS3OR IPW Naive0.000.05 n = 10000, N = 100000 DS3OR IPW Naive Method0.000.250.50RMSE n = 100, N = 10000 DS3OR IPW Naive Method0.00.10.2n = 1000, N = 100000 DS3OR IPW Naive Method0.000.05 n = 10000, N = 1000000 Figure C.8: Multivariate outcome mean simulation results. RMSE is reported for the MCAR setting, linear outcome regression model ˆµ, and a logistic propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100,1000 ,10000}andN∈{10n,100n}. 28 C.2 Linear coefficients DS3OR IPW Naive0.00.51.0Coverage n = 100, N = 1000 DS3OR IPW Naive0.00.51.0 n = 1000, N = 10000 DS3OR IPW Naive0.00.51.0 n = 10000, N = 100000 DS3OR IPW Naive Method0.00.51.0Coverage n
|
https://arxiv.org/abs/2505.06452v1
|
= 100, N = 10000 DS3OR IPW Naive Method0.00.51.0 n = 1000, N = 100000 DS3OR IPW Naive Method0.00.51.0 n = 10000, N = 1000000 Figure C.9: Linear coefficients simulation results. Empirical coverage is reported for the setting with logistic true propensity score π⋆, linear outcome regression model ˆµ, and a constant propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100, 1000, 10000}andN∈{10n, 100 n}. DS3OR IPW Naive0.00.51.0Coverage n = 100, N = 1000 DS3OR IPW Naive0.00.51.0 n = 1000, N = 10000 DS3OR IPW Naive0.00.51.0 n = 10000, N = 100000 DS3OR IPW Naive Method0.00.51.0Coverage n = 100, N = 10000 DS3OR IPW Naive Method0.00.51.0 n = 1000, N = 100000 DS3OR IPW Naive Method0.00.51.0 n = 10000, N = 1000000 Figure C.10: Linear coefficients simulation results. Empirical coverage is reported for the setting with logistic true propensity score π⋆, linear outcome regression model ˆµ, and a logistic propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100, 1000, 10000}andN∈{10n, 100 n}. 29 DS3OR IPW Naive0.00.51.0Coverage n = 100, N = 1000 DS3OR IPW Naive0.00.51.0 n = 1000, N = 10000 DS3OR IPW Naive0.00.51.0 n = 10000, N = 100000 DS3OR IPW Naive Method0.00.51.0Coverage n = 100, N = 10000 DS3OR IPW Naive Method0.00.51.0 n = 1000, N = 100000 DS3OR IPW Naive Method0.00.51.0 n = 10000, N = 1000000Figure C.11: Linear coefficients simulation results. Empirical coverage is reported for the MCAR setting, linear outcome regression model ˆµ, and a constant propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100,1000 ,10000}and N∈{10n, 100 n}. 
DS3OR IPW Naive0.00.51.0Coverage n = 100, N = 1000 DS3OR IPW Naive0.00.51.0 n = 1000, N = 10000 DS3OR IPW Naive0.00.51.0 n = 10000, N = 100000 DS3OR IPW Naive Method0.00.51.0Coverage n = 100, N = 10000 DS3OR IPW Naive Method0.00.51.0 n = 1000, N = 100000 DS3OR IPW Naive Method0.00.51.0 n = 10000, N = 1000000 Figure C.12: Linear coefficients simulation results. Empirical coverage is reported for the MCAR setting, linear outcome regression model ˆµ, and a logistic propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100,1000 ,10000}and N∈{10n, 100 n}. 30 DS3OR IPW Naive05RMSE n = 100, N = 1000 DS3OR IPW Naive0.02.55.0 n = 1000, N = 10000 DS3OR IPW Naive0.02.55.0 n = 10000, N = 100000 DS3OR IPW Naive Method02550RMSE n = 100, N = 10000 DS3OR IPW Naive Method020 n = 1000, N = 100000 DS3OR IPW Naive Method02550 n = 10000, N = 1000000Figure C.13: Linear coefficients simulation results. RMSE is reported for the setting with logistic true propensity score π⋆, linear outcome regression model ˆµ, and a constant propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈ {100, 1000, 10000}andN∈{10n, 100 n}. DS3OR IPW Naive05RMSE n = 100, N = 1000 DS3OR IPW Naive0.02.55.0 n = 1000, N = 10000 DS3OR IPW Naive0.02.55.0 n = 10000,
|
https://arxiv.org/abs/2505.06452v1
|
N = 100000 DS3OR IPW Naive Method02550RMSE n = 100, N = 10000 DS3OR IPW Naive Method020 n = 1000, N = 100000 DS3OR IPW Naive Method02550 n = 10000, N = 1000000 Figure C.14: Linear coefficients simulation results. RMSE is reported for the setting with logis- tic true propensity score π⋆, linear outcome regression model ˆµ, and a logistic propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100, 1000, 10000}andN∈{10n, 100 n}. 31 DS3OR IPW Naive010RMSE n = 100, N = 1000 DS3OR IPW Naive010 n = 1000, N = 10000 DS3OR IPW Naive010 n = 10000, N = 100000 DS3OR IPW Naive Method0100RMSEn = 100, N = 10000 DS3OR IPW Naive Method0100 n = 1000, N = 100000 DS3OR IPW Naive Method0100 n = 10000, N = 1000000Figure C.15: Linear coefficients simulation results. RMSE is reported for the MCAR setting, linear outcome regression model ˆµ, and a constant propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100,1000 ,10000}andN∈{10n,100n}. DS3OR IPW Naive010RMSE n = 100, N = 1000 DS3OR IPW Naive010 n = 1000, N = 10000 DS3OR IPW Naive010 n = 10000, N = 100000 DS3OR IPW Naive Method0100RMSE n = 100, N = 10000 DS3OR IPW Naive Method0100 n = 1000, N = 100000 DS3OR IPW Naive Method0100 n = 10000, N = 1000000 Figure C.16: Linear coefficients simulation results. RMSE is reported for the MCAR setting, linear outcome regression model ˆµ, and a logistic propensity model ˆπ. Each panel corresponds to a specific configuration of labeled and unlabeled sample sizes, with n∈{100,1000 ,10000}andN∈{10n,100n}. 32 D Further application details Figure D.17: Layout of library room in BLE-RSSI study. Retrieved from Mohammadi et al. (2017). 
Table D.1: Variables employed in the METABRIC application as described by Hsu and Lin (2023).

  Biomarkers: ESR1, PGR, ERBB2, MKI67, PLAU, ELAVL1, EGFR, BTRC, FBXO6, SHMT2, KRAS, SRPK2, YWHAQ, PDHA1, EWSR1, ZDHHC17, ENO1, DBN1, PLK1, GSK3B
  Clinical features: Age, menopausal state, tumor size, radiotherapy, chemotherapy, hormone therapy, neoplasm histologic grade, cellularity, surgery (breast conserving), surgery (mastectomy)

[Figure D.18: METABRIC application using only competing risk censored data as unlabeled data.]
arXiv:2505.07016v1 [cs.IT] 11 May 2025

Multi-Terminal Remote Generation and Estimation Over a Broadcast Channel With Correlated Priors

Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Nir Weinberger and Deniz Gündüz

Abstract—We study the multi-terminal remote estimation problem under a rate constraint, in which the goal of the encoder is to help each decoder estimate a function over a certain distribution: while the distribution is known only to the encoder, the function to be estimated is known only to the decoders, and can also be different for each decoder. The decoders can observe correlated samples from prior distributions, instantiated through shared randomness with the encoder. To achieve this, we employ remote generation, where the encoder helps decoders generate samples from the underlying distribution by using the samples from the prior through importance sampling. While methods such as minimal random coding can be used to efficiently transmit samples to each decoder individually using their importance scores, it is unknown whether the correlation among the samples from the priors can reduce the communication cost using the availability of a broadcast link. We propose a hierarchical importance sampling strategy that facilitates, in the case of non-zero Gács-Körner common information among the priors of the decoders, a common sampling step leveraging the availability of a broadcast channel. This is followed by a refinement step for the individual decoders. We present upper bounds on the bias and the estimation error for unicast transmission, which is of independent interest. We then introduce a method that splits into two phases, dedicated to broadcast and unicast transmission, respectively, and show the reduction in communication cost.

I. INTRODUCTION

In machine learning (ML), learning generative distributions to model data in some domain has become a prominent discipline.
In our problem, an entity, hereby referred to as the encoder, aims to allow a group of decoders to estimate statistical properties of its distribution, such as population inferences derived from ML models (e.g., in generative adversarial networks [1] or variational auto-encoders [2]). The distribution is known only to the encoder, while each decoder may want to estimate a different function of that distribution. A direct solution for the encoder is to sample from its distribution and transmit deterministic samples to the decoders. However, this method can lead to substantial communication overheads, introduce considerable latency in remote estimation, and potentially compromise the privacy of the encoder's distribution. Similarly, compressing and sending the distribution directly is suboptimal unless the distribution belongs to a simple class. To address these issues, prior distributions available at the decoders (also known to the encoder) can be used to remotely generate samples, which can then be used to estimate the desired statistics through importance sampling [3]. In this multi-terminal setting, the use of a broadcast channel from the encoder to the decoders can exploit potential correlations of the samples observed from the decoders' prior distributions, in order to reduce the communication costs. In this paper, we explore the gains of this approach.

(M.E., R.B. and A.W-Z. are with the Technical University of Munich. Emails: {maximilian.egger, rawad.bitar, antonia.wachter-zeh}@tum.de. D.G. is with Imperial College London. Email: d.gunduz@imperial.ac.uk. N.W. is at Technion — Israel Institute of Technology. Email: nirwein@technion.ac.il. This project has received funding from the German Research Foundation (DFG) under Grant Agreement Nos. BI 2492/1-1 and WA 3907/7-1, and from UKRI for project AI-R (ERC-Consolidator Grant, EP/X030806/1). The work of N.W. was partly supported by the Israel Science Foundation (ISF), grant no. 1782/22.)

A point-to-point remote version of sample generation and estimation was considered in [4]. A scheme called minimal random coding (MRC) was proposed, which is a lossy compression method. The scheme is based on selecting a sample from a proposal distribution using the relative importance of scores with respect to the target distribution. In MRC, the encoder and the decoder draw the same samples from the prior distribution using shared randomness. The encoder then draws an index of one of these samples from a distribution that reflects the relative importance of the sample with respect to the target distribution. This problem is mathematically equivalent to channel simulation (also known as reverse channel coding) [5]–[7] and relative entropy coding (REC) [8], where the general narrative is to make a decoder sample from a distribution that is close or equal to the target distribution. Closely related is also the functional representation lemma [9]. REC was initially proposed for image compression, with various follow-up works aiming to improve its complexity, e.g., [10]–[12]. In parallel, MRC was improved by ordered random coding (ORC) [5], thereby connecting importance sampling with the Poisson functional representation. Building on ORC, importance sampling-based lossy compression was further developed in [13]. The aforementioned works have recently attracted interest in ML, as they allow for alleviating various negative impacts of quantization in these systems [14].
For example, in [15], a combination of REC and differential privacy was used for communication-efficient federated learning. In [16], [17], general frameworks for stochastic federated learning were proposed, leveraging MRC for communication-efficient transmission of samples from the clients' posteriors, i.e., their model updates. In this setting, multiple clients transmit importance samples to the federator, and proposal distributions capture temporal correlation in the learning process. With the exception of [16], most works study the case of a single encoder and a single decoder, whereas the multi-terminal setting, and specifically, multiple decoders, has received much less attention. However, this setting is particularly interesting when the decoders can observe correlated samples from their joint prior distribution. Naively, using shared randomness between encoder and decoder, the encoder can perform MRC to convey to each decoder samples from its respective prior distribution, reflecting the relative importance with respect to the target distribution. The communication cost of this strategy scales linearly in the number of decoders. However, such a large cost can be reduced in the presence of a broadcast channel. For instance, in the extreme case where the joint prior is such that the samples are fully correlated (equal), it suffices for the encoder to transmit just a single sample index to all the decoders via a broadcast channel. Our goal in this paper is to characterize the potential gains in the case of joint prior distributions with arbitrary correlation. We focus on a specific instance of the problem, where the samples obtained by the decoders are used to evaluate a function of interest.

Our contributions are as follows. As a preliminary step, we refine a known result on the sampling process in MRC, and provide in Section II a high-probability error bound for the standard procedure in the single-decoder case. Then, in Section III, we propose a hierarchical sampling strategy that operates in two stages: it first samples a block from the support of the prior distribution, and second, conditionally samples the final index within the chosen block. We analyze the bias and the error incurred in the function evaluation using this sampling strategy, the transmission cost, and the distance of the final sample distribution to the target distribution. The results are compared to non-hierarchical (standard) sampling procedures. Then, using the Gács-Körner common information shared by two decoders, we apply hierarchical sampling in the multi-decoder setting, and establish the reduction in communication cost with a broadcast channel, followed by decoder-specific refinements through unicast transmission.

II. PRELIMINARIES AND SYSTEM MODEL

Notation. We denote the Kullback-Leibler divergence between two distributions $p_U$ and $p_V$ by $D_{\mathrm{KL}}(p_U \| p_V)$, the total variation distance by $d_{\mathrm{TV}}(p_U \| p_V)$, and the $\chi^2$-divergence by $\chi^2(p_U, p_V)$. For a natural number $u$, we let $[u] \triangleq \{1, \dots, u\}$. We denote the entropy of $U$ by $H(U)$, and the mutual information between $U$ and $V$ by $I(U;V)$. The Gács-Körner common information $C$ between $U$ and $V$ is given by
$$C_{\mathrm{GK}}(U;V) = \max_C H(C) \quad \text{s.t.} \quad C = g_1(U) = g_2(V)$$
for two deterministic functions $g_1$ and $g_2$. We denote the $p$-norm of a function $h(\cdot)$ over a distribution $p_u$ as $\|h\|_{L_p(p_u)} \triangleq \mathbb{E}_{U \sim p_u}\left[|h(U)|^p\right]^{1/p}$.

We consider the setting of a single encoder and multiple decoders $i \in [N]$.
The encoder shares randomness with the decoders $i \in [N]$, each of which can observe samples from a prior distribution $p_{Y_i}$ induced by a joint distribution $p_{Y_1,\dots,Y_N}$. Following the common assumption, the joint distribution $p_{Y_1,\dots,Y_N}$ is known to all parties, yet decoder $i$ can only observe samples from the marginal $p_{Y_i}$, and not from $p_{Y_{i'}}$ for $i' \in [N] \setminus \{i\}$. According to the joint distribution, the samples drawn from the priors $p_{Y_i}$ at the different decoders might be arbitrarily dependent.

The encoder has access to a random variable $Q$, defining a target probability measure $p_Q$ over $\mathcal{X}$, where $p_Q$ is unknown to the decoders. Let $f_i : \mathcal{X} \to \mathbb{R}$, $\forall i \in [N]$, be arbitrary measurable functions over an alphabet $\mathcal{X}$, where $f_i$ is only available to decoder $i$. For each $i$, the encoder wants decoder $i$ to estimate the evaluation of the function $f_i$ over the target measure $p_Q$, i.e., $I(f_i) \triangleq \int_{\mathcal{X}} f_i(y)\, dp_Q(y)$. The encoder, having observations $Y_{1,j},\dots,Y_{N,j} \sim p_{Y_1,\dots,Y_N}$, sends a message $m$ to all decoders over a rate-constrained broadcast channel, and transmits a message $m_i$ to decoder $i$ over a rate-constrained unicast channel. Potentially using the observations $Y_{i,j}$ from the prior $p_{Y_i}$ shared with the encoder and the message $m_i$, each decoder $i$ reconstructs samples $X_i^{(k)}$, $\forall k \in [K]$, for some $K > 0$, to be used as the input to the function $f_i$, thereby obtaining an estimate $I_{i,K}(f) \triangleq \frac{1}{K} \sum_{k=1}^{K} f_i(X_i^{(k)})$.

[Fig. 1: System model for two decoders.]

Although our results apply to the case where the function is different for each of the decoders, we assume for clarity the same function $f$ at all decoders. The system model is depicted in Fig. 1.

The direct method for estimating $I(f)$ by the decoder is for the encoder to sample from $p_Q$, and then transmit deterministic quantized samples to each decoder, which allows them to estimate $I(f)$. However, the communication cost can be significantly reduced by leveraging the prior information $p_{Y_1,\dots,Y_N}$, known to both the encoder and the decoders, especially when the prior distributions $p_{Y_i}$ are close to the target distribution $p_Q$. We use a channel simulation approach, where, using the priors $p_{Y_i}$, we make the decoders sample from the target distribution $p_Q$, or a distribution $\tilde{p}_Q \approx p_Q$. Those samples are then used to estimate the function evaluation.

A naive solution is to treat each of the decoders $i \in [N]$ individually. Following this solution, let us consider for now one of the decoders $i$, and utilize the MRC method [4], which does not require or utilize the knowledge of $p_Q$ at the decoder. The encoder draws $n$ samples $Y_{i,1},\dots,Y_{i,n} \sim p_{Y_i}$ and constructs a categorical auxiliary distribution $p_{W_i}$ as
$$p_{W_i}(j) = \frac{p_Q(Y_{i,j})/p_{Y_i}(Y_{i,j})}{\sum_{j'=1}^{n} p_Q(Y_{i,j'})/p_{Y_i}(Y_{i,j'})}.$$
The encoder then draws an index $p_{i,k} \sim p_{W_i}$ and transmits it to decoder $i$ using $\log_2(n)$ bits. Decoder $i$ reconstructs the sample $Y_{i,p_{i,k}}$ by drawing the same samples $Y_{i,1},\dots,Y_{i,n} \sim p_{Y_i}$ using the common prior and shared randomness¹.
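As a concrete illustration, the following Python sketch mimics this MRC step over a small finite alphabet. The alphabet, prior, target, sample count, and seeds are toy assumptions introduced here for the sketch, not quantities from the paper.

```python
import numpy as np

# Minimal sketch of the MRC step for one decoder over a finite alphabet.
# All concrete values below are illustrative assumptions.

shared_seed = 1234            # shared randomness between encoder and decoder
alphabet = np.arange(4)
prior = np.array([0.5, 0.25, 0.125, 0.125])   # p_{Y_i}, known to both sides
target = np.array([0.125, 0.125, 0.25, 0.5])  # p_Q, known to the encoder only
n = 16                                        # number of shared prior samples

def shared_prior_samples(seed, n):
    # Both parties regenerate identical samples from the shared seed.
    return np.random.default_rng(seed).choice(alphabet, size=n, p=prior)

# Encoder: weight each shared sample by its importance score, build the
# categorical p_{W_i}, and draw the index transmitted with log2(n) bits.
ys = shared_prior_samples(shared_seed, n)
weights = target[ys] / prior[ys]
p_W = weights / weights.sum()
index = np.random.default_rng(0).choice(n, p=p_W)

# Decoder: regenerate the same prior samples and pick the received index.
sample = shared_prior_samples(shared_seed, n)[index]
print(sample)
```

Only the index is communicated; the samples themselves are never transmitted, which is what makes the shared randomness essential.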
By repeating this process for multiple samples $k \in [K]$ and separately for each of the decoders, each decoder $i \in [N]$ obtains samples used to estimate the function's value as $I_{i,K}(f) \triangleq \frac{1}{K} \sum_{k=1}^{K} f(Y_{i,p_{i,k}})$. Letting $\tau_i \triangleq \sum_{j=1}^{n} p_Q(Y_{i,j})/p_{Y_i}(Y_{i,j})$, we obtain the following expression for the expected value of the estimate $I_{i,K}(f)$:
$$\mathbb{E}_{p_{i,k} \sim p_{W_i}}\left[f(Y_{i,p_{i,k}})\right] = \frac{1}{\tau_i} \sum_{p_{i,k} \in [n]} \frac{p_Q(Y_{i,p_{i,k}})}{p_{Y_i}(Y_{i,p_{i,k}})} f(Y_{i,p_{i,k}}). \quad (1)$$

The analysis of importance sampling in [3] states that an accurate estimate of $I(f)$ can be obtained by observing $n = O(\exp(D_{\mathrm{KL}}(p_Q \| p_{Y_i})))$ samples $Y_{i,1},\dots,Y_{i,n}$ from $p_{Y_i}$ and normalizing by the importance weights, i.e., $I_i(f) = \frac{1}{n} \sum_{j=1}^{n} f(Y_{i,j})\, \rho_i$, where $\rho_i \triangleq \frac{p_Q(Y_{i,j})}{p_{Y_i}(Y_{i,j})}$. Up to a normalization constant, this estimate corresponds to the expectation of the MRC approach given in (1). Hence, the error bound of the importance-sampling estimate in [3] directly leads to a bound on the bias of the MRC approach incurred by the sampling:

Lemma 1 ([3, Theorem 2.1]). For some $t \geq 0$, let $n = \exp(D_{\mathrm{KL}}(p_Q \| p_{Y_i}) + t)$ and
$$\epsilon \triangleq e^{-t/4} + 2\left[\Pr_{X \sim p_Q}\left(\log \rho_i(X) > D_{\mathrm{KL}}(p_Q \| p_{Y_i}) + t/2\right)\right]^{1/2};$$
then with probability at least $1 - 2\epsilon$,
$$\left|\mathbb{E}_{p_{i,k} \sim p_{W_i}}\left[f(Y_{i,p_{i,k}})\right] - \mathbb{E}_{X \sim p_Q}[f(X)]\right| \leq \frac{2 \|f\|_{L_2(p_Q)}\, \epsilon}{1 - \epsilon},$$
where the probability is over the randomness of $Y_{i,j} \sim p_{Y_i}$.

¹Improved REC strategies were proposed in [5], [8], [10], [11]. However, for simplicity, we focus here on the MRC strategy introduced in [4].

Using this result on the bias of the sampling process, along with concentration of the samples of the estimator, we derive the following guarantee on our estimator $I_{i,K}(f)$:

Proposition 1. Let $n$ and $\epsilon$ be defined as in Lemma 1. For some $\epsilon^\star > 0$, with probability at least $1 - \epsilon^\star - 4\epsilon$, we have
$$|I_{i,K}(f) - I(f)| \leq \frac{2\epsilon \|f\|_{L_2(p_Q)}}{1 - \epsilon} + \sqrt{\frac{2\epsilon}{1 - \epsilon}}\, \|f\|_{L_4(p_Q)} + \sqrt{\frac{\|f\|_{L_2(p_Q)}}{K \epsilon^\star}}.$$

For each decoder, the number of samples $n$ required for low variance of the estimator is given by $O(\exp(D_{\mathrm{KL}}(p_Q \| p_{Y_i})))$, and this cost scales linearly with the number of decoders. Whenever only a unicast link is available, it appears that this cannot be improved. However, when a broadcast link is available, and the joint prior distribution is of correlated samples, the cost compared to the unicast case can be reduced. As discussed above, in the extreme case of equal and fully dependent $p_{Y_i}$, broadcasting the same index to all decoders is trivially effective, but the solution is much less obvious for joint distributions with partial correlation. We propose to gauge the correlation using the Gács-Körner common information and develop a method that reduces the communication cost, utilizing the availability of a broadcast channel.

III. HIERARCHICAL SAMPLING

For methodological reasons, we introduce the hierarchical sampling approach for a point-to-point setting, where the goal is to make one decoder $i \in [N]$ estimate $I(f)$ by leveraging its prior $p_{Y_i}$. In the following section, we will show how this technique can reduce the communication cost for multiple decoders that are required to estimate the function $I(f)$, and whose samples follow a joint prior distribution with non-zero Gács-Körner common information.

A. Hierarchical Sampling Methodology

Let $g_i(\cdot)$ describe a mapping from the alphabet $\mathcal{X}$ to a smaller alphabet $\mathcal{C}$ that partitions the alphabet $\mathcal{X}$ into $|\mathcal{C}|$ disjoint blocks. Hence, $g_i(Y_i)$ induces a random variable $C = g_i(Y_i)$ with probability law $p_C$ determined by disjoint blocks of $p_{Y_i}$. Let $p_{Y_i|C}$ describe the conditional distribution over a subset of $\mathcal{X}$ determined by the choice of the block $c \in \mathcal{C}$, such that $p_{Y_i|C}\, p_C = p_{C|Y_i}\, p_{Y_i} = p_{Y_i}$. With $g_i(\cdot)$, we can define a random variable $Q_C = g_i(X)$, $X \sim p_Q$, that describes the target distribution $p_Q$ over the blocks $\mathcal{C}$. The corresponding probability measure is denoted $p_{Q_C}$. Similarly, we let the conditional distribution based on the block choice be $p_{Q|C}$.
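To make these induced laws concrete, the following small numerical sketch (toy alphabet, hypothetical prior, and hypothetical block map, all introduced here for illustration) builds $p_C$ and $p_{Y_i|C}$ from a partition and checks the stated factorization.

```python
import numpy as np

# Toy illustration of the coarsening g_i : X -> C and the induced laws.
# The alphabet, prior, and partition are hypothetical choices for the sketch.

p_Y = np.array([0.125, 0.375, 0.25, 0.25])  # prior p_{Y_i} on X = {0,1,2,3}
g = np.array([0, 0, 1, 1])                  # block label g_i(x) for each x

num_blocks = g.max() + 1
p_C = np.array([p_Y[g == c].sum() for c in range(num_blocks)])  # law of C

# Conditional p_{Y_i|C=c}: renormalize p_Y within each block.
p_Y_given_C = np.zeros((num_blocks, len(p_Y)))
for c in range(num_blocks):
    mask = g == c
    p_Y_given_C[c, mask] = p_Y[mask] / p_C[c]

# The factorization in the text: p_{Y_i|C} p_C recovers p_{Y_i}.
recovered = (p_Y_given_C * p_C[:, None]).sum(axis=0)
assert np.allclose(recovered, p_Y)
print(p_C)  # -> [0.5 0.5]
```

The same construction applied to $X \sim p_Q$ yields $p_{Q_C}$ and $p_{Q|C}$.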
The hierarchical sampling scheme we propose is to perform MRC using $p_C$ to obtain a sample $c$ from $\mathcal{C}$ with high importance with respect to $Q_C$, and then conduct conditional MRC using $p_{Q|C=c}$.

1) Block Sampling: The encoder draws $n_c$ samples from $p_{Y_i}$ and transforms them using $g_i(\cdot)$, thereby obtaining samples $C_1,\dots,C_{n_c} \sim p_C$ from $p_C$. The importance of the samples from $p_C$ w.r.t. $p_{Q_C}$ is determined by the quantity $\rho_C \triangleq \frac{dp_{Q_C}}{dp_C}$. The encoder constructs a categorical distribution $p_{W_C}$ reflecting the normalized importance scores $\rho_C$ of the samples $C_j$, $j \in [n_c]$, i.e.,
$$p_{W_C}(j) = \frac{p_{Q_C}(C_j)/p_C(C_j)}{\sum_{j'=1}^{n_c} p_{Q_C}(C_{j'})/p_C(C_{j'})}. \quad (2)$$
The encoder then draws $K$ indices $m_k$, $k \in [K]$, from the weight distribution $p_{W_C}$ and transmits those indices to decoder $i$ with $K \log_2(n_c)$ bits.

2) Conditional Sampling: The following is repeated for all $k \in [K]$. Having selected an index $m_k$, and hence a sample $C_{m_k}$, the encoder draws samples from $p_{Y_i}$ until obtaining $n_{i,C_{m_k}}$ samples $Y^{(C_{m_k})}_{i,1},\dots,Y^{(C_{m_k})}_{i,n_{i,C_{m_k}}} \sim p_{Y_i|C=C_{m_k}}$ for which it holds that $g_i(Y^{(C_{m_k})}_{i,j}) = C_{m_k}$ for all $j \in [n_{i,C_{m_k}}]$. For each $c \in \mathcal{C}$, we refer to the conditional importance as $\rho^c_i \triangleq \frac{dp_{Q|C}(\cdot|c)}{dp_{Y_i|C}(\cdot|c)}$. The encoder constructs a second categorical distribution reflecting the conditional importance scores², i.e., setting $c = C_{m_k}$,
$$p_{W_i|C=c}(j) = \frac{p_{Q|C}(Y^{(c)}_{i,j}|c)/p_{Y_i|C}(Y^{(c)}_{i,j}|c)}{\sum_{j'=1}^{n_{i,c}} p_{Q|C}(Y^{(c)}_{i,j'}|c)/p_{Y_i|C}(Y^{(c)}_{i,j'}|c)}. \quad (3)$$
The encoder draws an index $\ell_{i,k} \sim p_{W_i|C=C_{m_k}}$ and transmits it with $\log_2(n_{i,C_{m_k}})$ bits. For each $k \in [K]$, the decoder thus reconstructs the sample $Y^{(C_{m_k})}_{i,\ell_{i,k}}$. The resulting function estimate is given by $I_{i,K}(f) = \frac{1}{K} \sum_{k=1}^{K} f(Y^{(C_{m_k})}_{i,\ell_{i,k}})$. Uniquely determining a sample requires $\log_2(n_c) + \log_2(n_{i,c})$ bits to transmit the respective indices, where $c$ is the block sample selected in the first round. The block index $m_k$ corresponds to a rough sample, and $\ell_{i,k}$ describes the "refinement" of the sample for the prior $p_{Y_i}$ w.r.t. the common part $p_C$. We thus refer to this strategy as "hierarchical sampling". It is not obvious whether the proposed strategy is inferior to the standard MRC procedure described in Section II. Thus, we next analyze the performance.

²The distribution $p_{W_i|C=C_{m_k}}$ is reused for equal block samples, i.e., when $C_{m_k} = C_{m_{k'}}$ for $k \neq k' \in [K]$.

Remark 1. We note that the communication cost to transmit the indices $m_k$ and $\ell_{i,k}$ can be further reduced by known improved strategies leveraging the non-uniformity of the indices, cf. ORC [5]. We omit such improvements, as they distract from the main theme. The functionality and improvements brought by the scheme presented in Section IV remain unaffected.

B. On the Bias and Estimation Error

First, we derive an upper bound on the bias of the hierarchical sampling procedure, followed by a bound on the average communication complexity. For clarity in the statement of the theorem, we define the following quantities. For $t_c \geq 0$, let
$$\epsilon^2(t_c) \triangleq e^{-t_c/4} + 2\sqrt{\Pr_{Q_C \sim p_{Q_C}}\left(\log \rho_C(Q_C) > D_{\mathrm{KL}}(p_{Q_C} \| p_C) + t_c/2\right)}.$$
Further, for $t \geq 0$ and $d_c \triangleq D_{\mathrm{KL}}(p_{Q|C=c} \| p_{Y_i|C=c})$, we have
$$\epsilon^2_i(c, t) \triangleq e^{-t/4} + 2\sqrt{\Pr_{X \sim p_{Q|C=c}}\left(\log \rho^c_i(X) > d_c + t/2\right)}.$$
We let $\hat{\epsilon}_i(t) \triangleq \max_{c \in \mathcal{C}} \epsilon_i(c, t)$.

Theorem 1. For some fixed $t_c, t \geq 0$, let $\hat{\epsilon}_i \triangleq \hat{\epsilon}_i(t)$ and $\epsilon \triangleq \epsilon(t_c)$. When $n_c \geq \exp(D_{\mathrm{KL}}(p_{Q_C} \| p_C) + t_c)$ and $\forall c \in \mathcal{C}$: $n_{i,c} \geq \exp(D_{\mathrm{KL}}(p_{Q|C=c} \| p_{Y_i|C=c}) + t)$, we have with probability at least $1 - 2(|\mathcal{C}|\, \hat{\epsilon}_i + \epsilon)$ that
$$\left|\mathbb{E}_{m_k \sim p_{W_C},\, \ell_{i,k} \sim p_{W_i|C=C_{m_k}}}\left[f(Y^{(C_{m_k})}_{i,\ell_{i,k}})\right] - \mathbb{E}_{X \sim p_Q}[f(X)]\right| \leq \|f\|_{L_2(p_Q)} \left[\frac{2\sqrt{2}\,\epsilon}{1-\epsilon}\left(\frac{2\hat{\epsilon}_i}{1-\hat{\epsilon}_i} + 1\right) + \frac{2\hat{\epsilon}_i}{1-\hat{\epsilon}_i}\right] \quad (4)$$
$$\stackrel{(\star)}{\leq} 2\sqrt{2}\, \|f\|_{L_2(p_Q)} \left(\frac{\epsilon}{1-\epsilon} + \frac{\hat{\epsilon}_i}{1-\hat{\epsilon}_i}\right),$$
where $(\star)$ holds for $\epsilon \leq \frac{1}{9}$.

Using Theorem 1, we can obtain an upper bound on the total variation distance between the distribution from which the sample is drawn and the desired distribution $p_Q$.

Corollary 1. Let $\tilde{p}_Q$ be the distribution of the selected sample $Y^{(C_{m_k})}_{i,\ell_{i,k}}$. Under the assumptions stated in Theorem 1 and for $\epsilon \leq \frac{1}{9}$, the total variation is bounded as
$$d_{\mathrm{TV}}(\tilde{p}_Q \| p_Q) \leq 2(|\mathcal{C}| + 1)\,\hat{\epsilon}_i + 4\epsilon.$$

Following similar lines as in the proof of Proposition 1, we further obtain the following guarantee on the estimate $I_{i,K}(f)$.

Proposition 2.
Let $n_c$, $n_{i,c}$, $\forall c \in \mathcal{C}$, $\epsilon$, and $\hat{\epsilon}_i$ be defined as in Theorem 1. For $\epsilon^\star > 0$ and $\hat{\epsilon} \triangleq \frac{2\sqrt{2}\,\epsilon}{1-\epsilon}\left(\frac{2\hat{\epsilon}_i}{1-\hat{\epsilon}_i} + 1\right) + \frac{2\hat{\epsilon}_i}{1-\hat{\epsilon}_i}$, we have with probability at least $1 - \epsilon^\star - 4(|\mathcal{C}|\, \hat{\epsilon}_i + \epsilon)$ that
$$|I_{i,K}(f) - I(f)| \leq \|f\|_{L_2(p_Q)}\, \hat{\epsilon} + \|f\|_{L_4(p_Q)} \sqrt{\hat{\epsilon}} + \sqrt{\frac{\|f\|_{L_2(p_Q)}}{K \epsilon^\star}}.$$

We compare our bound in (4) to the established result in Lemma 1 in two edge cases: i) When each block contains only one symbol, the bound degenerates to $\|f\|_{L_2(p_Q)} \frac{2\sqrt{2}\,\epsilon}{1-\epsilon}$. The additional factor of $\sqrt{2}$ is an artifact of the bounding technique. In fact, a slightly modified proof of Theorem 1 can recover $\|f\|_{L_2(p_Q)} \frac{2\epsilon}{1-\epsilon}$ in this special case. ii) When one block contains all symbols, the bound yields $\|f\|_{L_2(p_Q)} \frac{2\hat{\epsilon}_i}{1-\hat{\epsilon}_i} = \|f\|_{L_2(p_Q)} \frac{2\epsilon_i(1,t)}{1-\epsilon_i(1,t)}$, where in this case $\epsilon_i(1,t)$ is defined over $D_{\mathrm{KL}}(p_Q \| p_{Y_i})$, i.e., $\epsilon_i(1,t) = e^{-t/4} + 2\left[\Pr_{X \sim p_Q}\left(\log \rho_i(X) > D_{\mathrm{KL}}(p_Q \| p_{Y_i}) + t/2\right)\right]^{1/2}$, and $\rho_i$ is defined as in Section II. Hence, we recover Lemma 1. Interpolating between the edge cases i) and ii), i.e., when $g_i(\cdot)$ is non-trivial, the bound incurs a slight additional cost.

Despite the possible additional cost, the hierarchical sampling strategy is beneficial over MRC even in the point-to-point setting, depending on the function properties. Indeed, consider the following illustrative example. In certain cases, sampling from $p_{Q_C}$ with large precision is more important than sampling from $p_{Q|C}$. Therefore, suppose that the function $f$ of interest is smooth on the support of $p_{Q|C}$ for all $c \in \mathcal{C}$. In the most extreme case, the function has zero variance conditioned on a block. Then, an intermediate result of the proof of Theorem 1 shows the following improved result.

Corollary 2. Let $f$ be constant on the support of $p_{Q|C}(\cdot|c)$ for all $c \in \mathcal{C}$. For some fixed $t_c \geq 0$, let $\epsilon \triangleq \epsilon(t_c)$. When $n_c \geq \exp(D_{\mathrm{KL}}(p_{Q_C} \| p_C) + t_c)$, we have with probability at least $1 - 2\epsilon$ that
$$\left|\mathbb{E}_{m_k, \ell_{i,k}}\left[f(Y^{(C_{m_k})}_{i,\ell_{i,k}})\right] - \mathbb{E}_{X \sim p_Q}[f(X)]\right| \leq \|f\|_{L_2(p_Q)} \frac{2\epsilon}{1-\epsilon}.$$

This result significantly improves over Lemma 1 by requiring a lower number of samples $n_c$ and providing a tighter probabilistic guarantee determined by $\epsilon$. Generalizing this concept to the case where $f$ is smooth but not constant on the support of the conditionals $p_{Q|C}$, i.e., where the index $\ell_{i,k}$ carries only little information compared to the index $m_k$, illustrates the potential of hierarchical sampling for certain point-to-point problems, e.g., successive refinement.

C. Complexity Analysis

We continue to analyze the cost of the proposed strategy. From Theorem 1, it is immediate that the sample complexity is upper bounded by $O(\exp(D_{\mathrm{KL}}(p_{Q_C} \| p_C))) + \max_{c \in \mathcal{C}} O(\exp(D_{\mathrm{KL}}(p_{Q|C=c} \| p_{Y_i|C=c})))$. However, based on $p_C$ and $p_{Q_C}$ and their divergence $\chi^2(p_C, p_{Q_C})$, we can obtain a tighter guarantee on the average complexity.

Lemma 2. The average sample complexity from Theorem 1 per transmission $k \in [K]$ is upper bounded by
$$\mathbb{E}_{C_1,\dots,C_{n_c} \sim p_C,\, m_k \sim p_{W_C}}[n_c + n_{i,c}] \leq O(\exp(D_{\mathrm{KL}}(p_{Q_C} \| p_C))) + \frac{\chi^2(p_C, p_{Q_C}) + 1}{(n_c - 1)/n_c}\, \mathbb{E}_{Q_C \sim p_{Q_C}}\left[O(\exp(D_{\mathrm{KL}}(p_{Q|C=Q_C} \| p_{Y_i|C=Q_C})))\right].$$
The communication cost is given by $O(D_{\mathrm{KL}}(p_{Q_C} \| p_C)) + \frac{\chi^2(p_C, p_{Q_C}) + 1}{(n_c - 1)/n_c}\, \mathbb{E}_{Q_C \sim p_{Q_C}}\left[O(D_{\mathrm{KL}}(p_{Q|C=Q_C} \| p_{Y_i|C=Q_C}))\right]$ bits.

IV. MULTI-TERMINAL SETTING

We next consider the case of multiple decoders, and for clarity, assume that $N = 2$. The two decoders $i \in [2]$ observe samples from priors $p_{Y_1}$ and $p_{Y_2}$ that are marginals of a correlated joint distribution, thus $I(Y_1; Y_2) > 0$. Recall that this joint distribution $p_{Y_1,Y_2}$ is known to all parties, but only the encoder can sample from both $p_{Y_1}$ and $p_{Y_2}$. We first describe a straightforward, yet suboptimal, solution to the problem.
The encoder draws a sufficient number of samples from the prior, $Y_{1,j}, Y_{2,j} \sim p_{Y_1,Y_2}$. Each decoder should be indicated $K$ appropriate samples $Y_{i,p_{i,k}}$, $k \in [K]$, through indices $p_{i,k}$ to obtain an estimate $I_{i,K}(f) = \frac{1}{K} \sum_{k=1}^{K} f(Y_{i,p_{i,k}})$. To transmit the $k$-th sample, a naive strategy selects for decoder 1 an appropriate sample index $p_{1,k}$ from $Y_{1,1},\dots,Y_{1,n_1}$, where $n_1 = O(\exp(D_{\mathrm{KL}}(p_Q \| p_{Y_1})))$, and for decoder 2 a sample index $p_{2,k} \in [n_2]$ to select a sample from $Y_{2,1},\dots,Y_{2,n_2}$, where $n_2 = O(\exp(D_{\mathrm{KL}}(p_Q \| p_{Y_2})))$.

Following the methodology of hierarchical sampling in the point-to-point case from Section III, we instead propose to first select an appropriate sample from $p_C$ common to both decoders (transmitted through broadcast), and then select a sample from $p_{Y_1|C}$ and $p_{Y_2|C}$ individually for each decoder, transmitted by unicast. This utilizes the availability of a broadcast channel and the non-zero Gács-Körner common information to reduce the communication cost. According to Gács and Körner [18], if $Y_1$ and $Y_2$ have non-zero common information, then there exist two deterministic functions $g_1(\cdot)$ and $g_2(\cdot)$ defining a common random variable $C = g_1(Y_1) = g_2(Y_2)$ over the alphabet $\mathcal{C}$. The Gács-Körner common information is then given by the $C$ that maximizes the common information,
$$C_{\mathrm{GK}}(Y_1; Y_2) \triangleq \max_C H(C) \quad \text{s.t.} \quad C = g_1(Y_1) = g_2(Y_2).$$
Given $C$ with maximum entropy, the random variables $Y_1$ and $Y_2$ are conditionally independent. It is well known that non-zero common information only exists if the joint distribution $p_{Y_1,Y_2}$ exhibits a block structure. The alphabet $\mathcal{C}$ of $C$ is then given by the blocks of the joint distribution. Having $p_{Y_1,Y_2}$, the encoder and both decoders can compute $g_i(\cdot)$, and thereby the probability law $p_C$ of the random variable $C$. The encoder now samples from $p_{Y_1,Y_2}$, and the decoders observe samples from the marginals through shared randomness. Decoder 1, by observing samples from $p_{Y_1}$, can thus obtain samples from $p_C$ by invoking $g_1(\cdot)$. Decoder 2 draws the same samples for $C$ by observing samples from $p_{Y_2}$ induced by the joint $p_{Y_1,Y_2}$ and invoking $g_2(\cdot)$. The samples from $p_C$ can be used to optimize the importance sampling strategy under dependent yet different priors.

We use as $C$ in Section III the common random variable resulting from the Gács-Körner common information over blocks of the joint distribution $p_{Y_1,Y_2}$ with alphabet $\mathcal{C}$, such that the partitioning functions $g_1(Y_1)$ and $g_2(Y_2)$ map the priors $Y_1$ and $Y_2$ to $\mathcal{C}$. As in Section III, we define a probability measure $p_{Q_C}$ analogous to $p_C$ over the alphabet $\mathcal{C}$. For a low bias of the sampling (cf. Theorem 1), the number of common samples $n_c$ from $p_C$ needs to be in the order of $\exp(D_{\mathrm{KL}}(p_{Q_C} \| p_C))$. Following the methodology in Section III-A1, the encoder draws $n_c$ samples from $p_C$ to obtain $C_1,\dots,C_{n_c} \sim p_C$, by sampling from $p_{Y_1,Y_2}$ and invoking the function $g_1(\cdot)$ or $g_2(\cdot)$ on the samples of $Y_1$ or $Y_2$. The encoder selects an index $m_k$ by sampling from the weight distribution $p_{W_C}$ defined in (2). Conditioned on the common information $C$, the priors are independent. The encoder draws samples from $p_{Y_1,Y_2|C} = p_{Y_1|C}\, p_{Y_2|C}$; hence, for each decoder $i$ individually and each $m_k \in [n_c]$, $k \in [K]$, the encoder follows the methodology in Section III-A2 and draws $n_{i,C_{m_k}}$ samples $Y^{(C_{m_k})}_{i,1},\dots,Y^{(C_{m_k})}_{i,n_{i,C_{m_k}}}$ from the conditional $p_{Y_i|C}(\cdot|C_{m_k})$. The encoder selects for each decoder an index $\ell_{i,k}$ by sampling from $p_{W_i|C=C_{m_k}}(j)$ as defined in (3). Hence, the importance sampling estimate reads $I_{i,K}(f) = \frac{1}{K} \sum_{k=1}^{K} f(Y^{(C_{m_k})}_{i,\ell_{i,k}})$. The common index $m_k$ can be broadcast to both clients, and $\ell_{1,k}$ and $\ell_{2,k}$ are transmitted via point-to-point links.
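The two-phase procedure can be sketched numerically as follows. The alphabet, block map, priors, target, and sample counts below are toy assumptions introduced for illustration; the two priors are chosen to share a common block variable.

```python
import numpy as np

# Toy sketch of the two-phase scheme: one broadcast MRC step over the common
# block variable C, then a unicast conditional MRC refinement per decoder.
# All concrete values are illustrative assumptions, not from the paper.

rng = np.random.default_rng(7)

# X = {0,1,2,3}; common blocks: g(0)=g(1)=0, g(2)=g(3)=1.
g = np.array([0, 0, 1, 1])
p_Y1 = np.array([0.25, 0.25, 0.25, 0.25])
p_Y2 = np.array([0.125, 0.375, 0.25, 0.25])
p_Q = np.array([0.125, 0.125, 0.25, 0.5])  # target, known only to the encoder

def block_law(p):
    return np.array([p[g == c].sum() for c in (0, 1)])

p_C, p_QC = block_law(p_Y1), block_law(p_Q)
assert np.allclose(p_C, block_law(p_Y2))  # same law of C from either prior

def mrc_index(samples, p_target, p_proposal):
    # Categorical distribution over sample indices by normalized importance.
    w = p_target[samples] / p_proposal[samples]
    return rng.choice(len(samples), p=w / w.sum())

# Phase 1 (broadcast): one block index m_k serves both decoders.
n_c = 8
blocks = rng.choice(2, size=n_c, p=p_C)
m_k = mrc_index(blocks, p_QC, p_C)
c = blocks[m_k]

# Phase 2 (unicast): refine within block c for each decoder separately.
reconstructed = {}
for i, p_Yi in ((1, p_Y1), (2, p_Y2)):
    support = np.flatnonzero(g == c)
    p_Yi_c = p_Yi[support] / p_Yi[support].sum()  # p_{Y_i|C=c}
    p_Q_c = p_Q[support] / p_Q[support].sum()     # p_{Q|C=c}
    ys = rng.choice(len(support), size=8, p=p_Yi_c)
    l_ik = mrc_index(ys, p_Q_c, p_Yi_c)           # index sent to decoder i
    reconstructed[i] = support[ys[l_ik]]

print(reconstructed)
```

In this sketch only `m_k` would be broadcast, while each `l_ik` is unicast; with fully correlated priors the refinement phase degenerates and the broadcast index alone suffices.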
While the first sampling operation is common to all decoders, the latter (conditional) sampling can be regarded as the refinement for the individual priors. We depict in Fig. 2 a minimal schematic of our method.

[Fig. 2: Simple illustration of hierarchical sampling for two decoders with correlated priors $p_{Y_1}$ and $p_{Y_2}$ using Gács-Körner common information $C$.]

When $n_c \geq \exp(D_{\mathrm{KL}}(p_{Q_C} \| p_C) + t_c)$ and, $\forall c \in \mathcal{C}$ and $i \in [2]$, $n_{i,c} \geq \exp(D_{\mathrm{KL}}(p_{Q|C=c} \| p_{Y_i|C=c}) + t)$, the guarantee from Theorem 1 holds uniformly for both decoders with probability $1 - 2(|\mathcal{C}|(\hat{\epsilon}_1 + \hat{\epsilon}_2) + 2\epsilon)$. According to Lemma 2, the average communication cost is given by
$$O(D_{\mathrm{KL}}(p_{Q_C} \| p_C)) + \frac{\chi^2(p_C, p_{Q_C}) + 1}{(n_c - 1)/n_c}\, \mathbb{E}_{Q_C \sim p_{Q_C}}\left[\sum_{i=1}^{N} O(D_{\mathrm{KL}}(p_{Q|C=Q_C} \| p_{Y_i|C=Q_C}))\right].$$
The major advantage of this scheme compared to the standard scheme is that the cost introduced by the divergence of the distributions $p_{Q_C}$ and $p_C$ over the blocks $\mathcal{C}$ is only incurred once per transmission, instead of once for each of the decoders. This can significantly reduce the communication cost for non-zero common information, and the effect is amplified in the case of multiple receivers.

The choice of $C$ as the entropy-maximizing common random variable does not directly relate to the KL divergence between the target distributions $p_{Q_C}, p_{Q|C}$ and the prior distributions $p_C, p_{Y_i|C}$. Even without any information about $p_{Q_C}$ and $p_{Q|C}$, $C$ could be chosen to minimize the communication cost in expectation. Furthermore, the encoder can send refinements of the functions $g_1(\cdot)$ and $g_2(\cdot)$ to both decoders, in order to optimize the KL divergences $D_{\mathrm{KL}}(p_{Q_C} \| p_C)$ and $D_{\mathrm{KL}}(p_{Q|C} \| p_{Y_i|C})$, at the expense of a one-time overhead. This overhead vanishes with increasing $K$. If the joint distribution $p_{Y_1,Y_2}$ were unknown to the decoders, it is similarly possible that the encoder determines desirable functions $g_i(\cdot)$ to minimize subsequent communication costs and transmits the partitioning of the alphabet given by $g_i(\cdot)$ to the individual decoders. The remainder of the method remains unchanged.

Remark 2. The concept of common information can be generalized to more than two random variables [19], facilitating a generalization of our methodology to the case of more than two decoders. Going beyond, the introduced hierarchical sampling strategy generalizes to more hierarchical levels, which can be leveraged in the case of many decoders. For example, it is possible to first encode a common sample for all decoders, conditioned on which one can create common samples for subsets of the clients. This can be repeated until ultimately resorting to pair-wise common information. However, we note that such common information structures can be rare in practice.

V. CONCLUSION

We introduced a hierarchical remote generation and estimation strategy based on MRC that leverages prior information shared between encoder and decoder to evaluate a given function over a target distribution through a rate-constrained communication link. We proved an error bound and compared it to the MRC baseline. We showed that hierarchical sampling can be beneficial in the point-to-point setting (depending on the function properties), and that significant gains can be achieved for priors with non-zero common information. The scheme exhibits the structure of successive refinement, where the broadcast link is used to transmit a coarse sample, later refined for each decoder individually over the unicast links. Extending our methodology to multiple hierarchies can facilitate its usage in successive refinement.
The details of such a variant remain an interesting open problem, as well as studying alternatives to the Gács-Körner common information. REFERENCES [1] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y . Bengio, “Generative adversarial nets,” Advances in neural information processing systems , vol. 27, 2014. [2] D. P. Kingma, M. Welling et al. , “Auto-encoding variational bayes,” in International Conference on Learning Representations , 2014. [3] S. Chatterjee and P. Diaconis, “The sample size required in importance sampling,” The Annals of Applied Probability , vol. 28, no. 2, pp. 1099– 1135, 2018. [4] M. Havasi, R. Peharz, and J. M. Hernández-Lobato, “Minimal random code learning: Getting bits back from compressed model parameters,” inInternational Conference on Learning Representations , 2019. [5] L. Theis and N. Yosri, “Algorithms for the communication of samples,” inInternational Conference on Machine Learning . PMLR, 2022, pp. 21 308–21 328. [6] C. T. Li, “Channel simulation: Theory and applications to lossy com- pression and differential privacy,” Foundations and Trends® in Commu- nications and Information Theory , vol. 21, no. 6, pp. 847–1106, 2024. [7] G. Flamich and L. Wells, “Some notes on the sample complexity of approximate channel simulation,” arXiv preprint arXiv:2405.04363 , 2024. [8] G. Flamich, M. Havasi, and J. M. Hernández-Lobato, “Compressing images by encoding their latent representations with relative entropy coding,” Advances in Neural Information Processing Systems
https://arxiv.org/abs/2505.07016v1
, vol. 33, pp. 16131–16141, 2020.
[9] C. T. Li and A. El Gamal, "Strong functional representation lemma and applications to coding theorems," IEEE Transactions on Information Theory, vol. 64, no. 11, pp. 6967–6978, 2018.
[10] G. Flamich, S. Markou, and J. M. Hernández-Lobato, "Fast relative entropy coding with A* coding," in International Conference on Machine Learning. PMLR, 2022, pp. 6548–6577.
[11] ——, "Faster relative entropy coding with greedy rejection coding," Advances in Neural Information Processing Systems, vol. 36, 2024.
[12] J. He, G. Flamich, and J. M. Hernández-Lobato, "Accelerating relative entropy coding with space partitioning," arXiv preprint arXiv:2405.12203, 2024.
[13] B. Phan, A. Khisti, and C. Louizos, "Importance matching lemma for lossy compression with side information," in International Conference on Artificial Intelligence and Statistics. PMLR, 2024, pp. 1387–1395.
[14] J. Chen, Y. Fang, A. Khisti, A. Ozgur, N. Shlezinger, and C. Tian, "Information compression in the AI era: Recent advances and future challenges," arXiv preprint arXiv:2406.10036, 2024.
[15] A. Triastcyn, M. Reisser, and C. Louizos, "DP-REC: Private & communication-efficient federated learning," arXiv preprint arXiv:2111.05454, 2021.
[16] B. Isik, F. Pase, D. Gunduz, S. Koyejo, T. Weissman, and M. Zorzi, "Adaptive compression in federated learning via side information," in International Conference on Artificial Intelligence and Statistics, 2024, pp. 487–495.
[17] M. Egger, R. Bitar, A. Wachter-Zeh, N. Weinberger, and D. Gündüz, "BICompFL: Stochastic federated learning with bi-directional compression," arXiv preprint arXiv:2502.00206, 2025.
[18] P. Gács and J. Körner, "Common information is far less than mutual information," Problems of Control and Information Theory, vol. 2, 1973.
[19] H. Tyagi, P. Narayan, and P. Gupta, "When is a function securely computable?" IEEE Transactions on Information Theory, vol. 57, no. 10, pp. 6337–6350, 2011.
arXiv:2505.07295v1 [math.ST] 12 May 2025

GMM with Many Weak Moment Conditions and Nuisance Parameters: General Theory and Applications to Causal Inference

Rui Wang¹, Kwun Chuen Gary Chan*¹, and Ting Ye*¹

¹Department of Biostatistics, University of Washington

Abstract: Weak identification is a common issue for many statistical problems – for example, when instrumental variables are weakly correlated with treatment, or when proxy variables are weakly correlated with unmeasured confounders. Under weak identification, standard estimation methods, such as the generalized method of moments (GMM), can have sizeable bias in finite samples or even asymptotically. In addition, many practical settings involve a growing number of nuisance parameters, adding further complexity to the problem. In this paper, we study estimation and inference under a general nonlinear moment model with many weak moment conditions and many nuisance parameters. To obtain debiased inference for finite-dimensional target parameters, we demonstrate that Neyman orthogonality plays a stronger role than in conventional settings with strong identification. We study a general two-step debiasing estimator that allows for possibly nonparametric first-step estimation of nuisance parameters, and we establish its consistency and asymptotic normality under a many weak moment asymptotic regime. Our theory accommodates both high-dimensional moment conditions and infinite-dimensional nuisance parameters. We provide high-level assumptions for a general setting and discuss specific applications to the problems of estimation and inference with weak instruments and weak proxies.

Keywords: Causal inference; Instrumental variables; Neyman orthogonality; Proximal causal inference; Weak identification

*Corresponding authors: tingye1@uw.edu and kcgchan@uw.edu

1 Introduction

1.1 Weak identification problem

Weak identification is a common issue for many statistical applications.
A well-known example is the problem of weak instrumental variables (IVs). IV methods are widely used to identify and estimate causal effects when there is unmeasured confounding (Angrist et al., 1996; Hernán and Robins, 2006). These methods typically rely on three standard assumptions: (i) IV independence: the IVs are independent of unmeasured confounders; (ii) exclusion restriction: the IVs have no direct effect on the outcome variable; and (iii) IV relevance: the IVs are correlated with the treatment (Baiocchi et al., 2014). While the IV relevance assumption can be empirically assessed and justified in many examples, the correlation between the instruments and the treatment is often weak. For example, in epidemiology, Mendelian randomization is a popular strategy for studying the causal effects of exposures in the presence of unmeasured confounding, using single-nucleotide polymorphisms (SNPs) as instruments. However, SNPs are usually weakly correlated with the exposure of interest, which leads to substantial bias under traditional methods (Sanderson et al., 2022), such as two-stage least squares or the generalized method of moments (GMM) (Hansen, 1982). As another example, labor economists used quarter of birth and its interactions with year of birth as instruments for estimating the return to education (Angrist and Krueger, 1991). Even if the total sample size is large, the weak instruments problem can produce biased point and variance estimates, as discussed in Bound et al. (1995). Another important but less recognized example of weak identification arises in proximal causal inference (Tchetgen Tchetgen et al., 2024) with weak proxies. Similar to IV methods,
proximal causal inference is a general framework for the identification and estimation of causal effects with unmeasured confounding. This approach typically requires two types of proxies for the unmeasured confounders: treatment proxies and outcome proxies (Miao et al., 2018; Tchetgen Tchetgen et al., 2024), where the treatment proxy is assumed to have no causal effect on the outcome, and the exposure is assumed to have no causal effect on the outcome proxy. When the proxies are only weakly correlated with the unmeasured confounders, they may fail to provide sufficient information about the unmeasured confounders. As a result, standard methods (e.g., those proposed by Tchetgen Tchetgen et al. (2024)) may yield inconsistent estimates and invalid inference, even in large samples, as demonstrated in our simulation study. The weak identification problem is usually modeled through a drifting data-generating process in which identification fails in the limit (Stock and Wright, 2000; Newey and Windmeijer, 2009). Under weak identification, standard estimators, such as estimating equation estimators (Van der Vaart, 2000) or GMM estimators with a fixed number of moment conditions (Hansen, 1982), can be biased, and their asymptotic distributions can become complex and nonstandard (Stock and Wright, 2000). Nevertheless, information about the target parameter may accumulate as the number of moment conditions increases. This has motivated the development of many weak moment asymptotics (Han and Phillips, 2006; Newey and Windmeijer, 2009). Under this regime, traditional GMM estimators remain biased in general (Han and Phillips, 2006), but consistent and asymptotically normal estimates can be obtained using the continuous updating estimator (CUE) (Newey and Windmeijer, 2009), which belongs to the broader family of estimators for moment conditions. However, modern semiparametric inference often involves flexibly estimating infinite-dimensional nuisance parameters.
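To make the weak-instrument bias concrete, consider a toy simulation (our illustration, not from this paper): one treatment, 50 weak instruments, and an unmeasured confounder. All variable names and parameter values below are hypothetical choices; both OLS and two-stage least squares (2SLS) are biased, with 2SLS pulled toward OLS.

```python
import numpy as np

def one_draw(rng, n=5000, m=50, beta=1.0):
    """One dataset with weak instruments and an unmeasured confounder."""
    Z = rng.normal(size=(n, m))                # instruments
    U = rng.normal(size=n)                     # unmeasured confounder
    pi = np.full(m, 0.02)                      # weak first-stage coefficients
    D = Z @ pi + U + rng.normal(size=n)        # treatment
    Y = beta * D + U + rng.normal(size=n)      # outcome
    b_ols = (D @ Y) / (D @ D)                  # biased: cov(D, U) > 0
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T @ D)  # first-stage fitted values
    b_2sls = (P @ Y) / (P @ D)                 # many-weak-IV bias toward OLS
    return b_ols, b_2sls

rng = np.random.default_rng(0)
draws = np.array([one_draw(rng) for _ in range(20)])
b_ols, b_2sls = draws.mean(axis=0)
print(f"mean OLS estimate:  {b_ols:.3f}")   # well above the true beta = 1
print(f"mean 2SLS estimate: {b_2sls:.3f}")  # between the truth and OLS
```

Averaging over replications makes the bias pattern visible above sampling noise. In practice such moment models also carry infinite-dimensional nuisance components, which is the second challenge discussed next.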
For example, the structural mean model approach for IV analysis involves modeling the conditional expectations of the instruments, treatment, and outcome given baseline covariates (Hernán and Robins, 2006). To mitigate model misspecification, flexible nonparametric estimation methods are frequently used to estimate infinite-dimensional nuisance functions. The inclusion of such nuisance models complicates the estimation and inference of the target parameter, especially under weak identification.

1.2 Previous works

In this paper, we address two major estimation challenges that appear simultaneously, each of which has gathered a body of literature. The first challenge is to handle nuisance parameters. Function-valued nuisance parameters are very common in modern semiparametric models, especially in efficient influence function based methods (Bickel et al., 1993). Usually, a two-step estimator is considered, where the first step is to estimate the nuisance parameters and the second step is to construct an efficient influence function based estimator by plugging in the nuisance estimator. In the earlier literature, parametric estimation of nuisance parameters was widely adopted for constructing doubly robust (Robins et al., 1994) or multiply robust estimators (Tchetgen Tchetgen and Shpitser, 2012). More recently, based on the orthogonality property of the efficient influence function (Chernozhukov et al., 2018), flexible nonparametric nuisance estimation coupled with cross-fitting was considered in one-step estimation (Pfanzagl, 1990; Rotnitzky et al., 2020), estimating equations (Chernozhukov et al., 2018), targeted maximum likelihood estimation (van der Laan and Rose, 2011), GMM
(Chernozhukov et al., 2022) and empirical likelihood (Bravo et al., 2020). Another challenge is to deal with many weak moment conditions. The related weak IV problem has been extensively studied in the econometrics literature (Staiger and Stock, 1997; Stock and Wright, 2000; Han and Phillips, 2006; Newey and Windmeijer, 2009; Andrews et al., 2019). See Andrews et al. (2019) for a comprehensive review of the weak IV literature. Newey and Windmeijer (2009) considered the case of high-dimensional weak moment conditions in the absence of nuisance parameters. A few recent papers consider weak moments with nuisance parameters. Ye et al. (2024) considered many weak moment conditions with nuisance parameters under specific linear moment conditions and a one-dimensional parameter of interest in the setting of weak instrumental variables and Mendelian randomization. Andrews (2018) constructed valid confidence intervals under weak identification with first-step nuisance estimates. However, they did not consider estimation, and the number of moment conditions was fixed. Around the time we completed this paper, we became aware of a preprint by Zhang and Sun (2025) on arXiv that proposed a similar idea for constructing an estimator under a partially linear IV model. In contrast, our work develops a more general theoretical framework that applies to both linear and nonlinear weak-identification problems and is not restricted to IV problems.

1.3 Main contributions

In this paper, we develop a general theory for estimation and inference with many weak moment conditions and flexible first-step nuisance parameter estimators. Our work contributes to the modern semiparametric inference literature and the weak identification literature (Rotnitzky et al., 2020; Chernozhukov et al., 2018; van der Laan and Rose, 2011; Chernozhukov et al., 2022; Bravo et al., 2020; Newey and Windmeijer, 2009).
Interestingly, the interplay between function-valued nuisance parameters and many weak moments results in some unique features of our theory compared to the previous semiparametric inference literature:

1. The presence of many weak moments requires a global Neyman orthogonality condition, in the sense of Definition 1. Different from standard semiparametric inference theory, the global Neyman orthogonality condition plays a central role in both the consistency and the asymptotic normality results. More details are given in Section 2.3 and Section 2.4.

2. The presence of many weak moments requires faster convergence rate conditions for nuisance parameter estimation, which departs from the classical condition requiring the nuisance convergence rate to be faster than $N^{-1/4}$, where $N$ is the sample size (Chernozhukov et al., 2018). We discuss our assumptions and asymptotic theory in more detail in Section 3.1 and Section 3.2.

3. The convergence rate of the proposed estimator is slower than $\sqrt{N}$, which means our estimator is not regular in the usual sense (Van der Vaart, 2000). The asymptotic expansion of our estimator involves a usual first-order term, which is equivalent to the influence function of the classical GMM estimator, plus an additional U-statistic term which is not negligible due to many weak moments. An additional control of the contribution of the estimated nuisance parameters to the U-statistic term is needed. Moreover, the classical influence function-based variance estimator no longer works. Details are given in
Section 3.2. We establish high-level conditions for our proposed two-step CUE to be consistent and asymptotically normal. Consequently, our results can be applied to future problems by verifying our conditions. We provide three examples and verify the high-level regularity conditions in those cases. The first two examples are the additive structural mean model (ASMM) and the multiplicative structural mean model (MSMM) with instrumental variables. These two examples extend Clarke et al. (2015) to the many weak instruments case. The third example concerns many weak proxies. To the best of our knowledge, our paper is the first to study the weak identification problem for the parameter of interest in proximal causal inference. Although Bennett et al. (2023) considered the problem of weakly identified nuisance parameters in proximal causal inference, the parameter of interest there is still assumed to be strongly identified.

1.4 Organization of the paper

The rest of the paper is organized as follows. Section 2 introduces the setup, notations, and our estimation strategy. The CUE and global Neyman orthogonality are utilized, respectively, to address bias arising from weak identification and from the estimation of nuisance parameters. Section 3 introduces the main theorems for consistency and asymptotic normality of the proposed estimator, as well as an over-identification test. In Section 4, we apply our general theory to three examples from causal inference, including two IV examples and one proximal causal inference example. Simulation results are reported in Section 5. We revisit the return to education example in Section 6 and conclude in Section 7.

2 Setup and notations

2.1 Framework: many weak moment conditions model

We are interested in a finite-dimensional target parameter, whose true value is denoted as $\beta_0$. The parameter space of the parameter of interest is denoted as $\mathcal{B}$, which is a compact subset of $\mathbb{R}^p$.
The target parameter $\beta_0$ belongs to the interior of $\mathcal{B}$ and satisfies the following moment condition:
$$E_{P_{0,N}}[g(O;\beta_0,\eta_{0,N})] = 0, \qquad (1)$$
where $P_{0,N}$ is the law of $O$, which may depend on the sample size $N$, and $E_{P_{0,N}}[\cdot]$ denotes the expectation with respect to the distribution $P_{0,N}$; $g(o;\beta,\eta) = (g^{(1)}(o;\beta,\eta), \ldots, g^{(m)}(o;\beta,\eta))^T$ is an $m$-dimensional vector of known moment functions; $\eta_{0,N} \in \mathcal{T}$ is the true value of the nuisance parameter, which is a functional of the true data-generating distribution, where $\mathcal{T}$ is the parameter space for the nuisance parameter; and $(O_i)_{i=1}^N$ is a sequence of observed data which are independent and identically distributed (i.i.d.) copies of $O$. We assume the parameter $\beta_0$ is weakly identified. The weak identification of $\beta_0$ is characterized by Assumption 1, which is also given in Newey and Windmeijer (2009) in the absence of nuisance parameters. Let
$$G(o;\beta,\eta) = \frac{\partial}{\partial \beta} g(o;\beta,\eta), \quad G(\beta,\eta) = E_{P_{0,N}}[G(O;\beta,\eta)], \quad G = G(\beta_0,\eta_{0,N}),$$
$$\Omega(\beta,\eta) = E_{P_{0,N}}[g(O;\beta,\eta)\, g^T(O;\beta,\eta)], \quad \Omega = \Omega(\beta_0,\eta_{0,N}).$$

Assumption 1. (Many weak moment conditions).

(a) There is a $p \times p$ matrix $S_N = \tilde{S}_N \,\mathrm{diag}(\mu_{1N}, \ldots, \mu_{pN})$ such that $\tilde{S}_N$ is bounded, the smallest eigenvalue of $\tilde{S}_N \tilde{S}_N^T$ is bounded away from zero, for each $j$ either $\mu_{jN} = \sqrt{N}$ or $\mu_{jN}/\sqrt{N} \to 0$, $\mu_N = \min_{1 \le j \le p} \mu_{jN} \to \infty$, and $m/\mu_N^2 = O(1)$.

(b) $N S_N^{-1} G^T \Omega^{-1} G (S_N^{-1})^T \to H$, with $H$ nonsingular.

Under the standard asymptotic regime, when $\mu_{jN} = \mu_N = \sqrt{N}$, $m$ is finite, and $P_{0,N}$ does not depend on $N$, $\tilde{S}_N$ can be taken as
$I_p$ (the $p$-dimensional identity matrix), and Assumption 1(b) reduces to $G^T \Omega^{-1} G$ being a fixed positive semidefinite matrix. This scenario corresponds to the usual GMM asymptotics (Hansen, 1982), with $G^T \Omega^{-1} G$ being the asymptotic variance of efficient GMM. When $\mu_N/\sqrt{N} \to 0$, the parameter $\beta_0$ is weakly identified. Later on, in Section 2.2, we will define the objective function of the CUE (when the true value of the nuisance is known) to be $\widehat{Q}(\beta,\eta_{0,N}) = \widehat{g}^T(\beta,\eta_{0,N})\, \widehat{\Omega}(\beta,\eta_{0,N})^{-1}\, \widehat{g}(\beta,\eta_{0,N})/2$, where $\widehat{g}(\beta,\eta) = \frac{1}{N}\sum_{i=1}^N g(O_i;\beta,\eta)$ and $\widehat{\Omega}(\beta,\eta) = \frac{1}{N}\sum_{i=1}^N g(O_i;\beta,\eta)\, g(O_i;\beta,\eta)^T$, and it follows from Newey and Windmeijer (2009) that $\frac{\partial^2 \widehat{Q}(\beta,\eta_{0,N})}{\partial\beta\,\partial\beta^T}$ converges to $G^T \Omega^{-1} G$ at $\beta = \beta_0$. Therefore Assumption 1(b) implies that the objective function becomes nearly flat around the true value $\beta_0$ asymptotically in some directions, so that identification fails. Assumption 1 allows for different identification strengths for different linear combinations of $\beta_0$, characterized by $\tilde{S}_N$. To see this, Assumption 1(b) can be rewritten as
$$N \,\mathrm{diag}(\mu_{1N}^{-1}, \ldots, \mu_{pN}^{-1})\, E_{P_{0,N}}\!\left[\frac{\partial g(O;\beta_0,\eta_{0,N})}{\partial \tilde{S}_N^T \beta}\right]^T \Omega^{-1}\, E_{P_{0,N}}\!\left[\frac{\partial g(O;\beta_0,\eta_{0,N})}{\partial \tilde{S}_N^T \beta}\right] \mathrm{diag}(\mu_{1N}^{-1}, \ldots, \mu_{pN}^{-1}) \to H.$$
Therefore, the identification strength of each coordinate of $\tilde{S}_N^T \beta$ is measured by $\mu_{1N}, \ldots, \mu_{pN}$, respectively, and $\mu_N = \min_{1 \le j \le p} \mu_{jN}$ measures the overall identification strength. When $\beta_0$ is one-dimensional, $\tilde{S}_N$ can be taken as 1 and $S_N$ becomes $\mu_N$; then Assumption 1(b) simplifies to $N \mu_N^{-2} G^T \Omega^{-1} G \to H$ with $H$ a positive number. The condition $m/\mu_N^2 = O(1)$ is a constraint on the minimum strength of identification. The following notations will be used throughout the paper. For any vector $x$, we use $\|x\|$ to denote the $\ell_2$ norm. For a matrix $A$, we use $\|A\|$ to denote the spectral norm and $\|A\|_F$ to denote the Frobenius norm. We use $\|\cdot\|_{P,q}$ to denote the $L_q(P)$ norm, that is, $\|f\|_{P,q} = (\int |f|^q \, dP)^{1/q}$. For a vector $v$, $v^{(j)}$ is the $j$-th element of $v$; $A^{(j,k)}$ is the $(j,k)$-th element of the matrix $A$, and $A^{(j)}$ is the $j$-th column of $A$.
For two matrices $A$ and $B$, $A \preceq B$ means $B - A$ is positive semidefinite.

2.2 Removing weak identification bias

It is well known that GMM is typically biased under many weak asymptotics. Han and Phillips (2006) provided an intuitive explanation of the bias of GMM: the objective function of GMM might not be minimized at $\beta_0$ asymptotically. To explain the bias of GMM under many weak asymptotics, we assume $\eta_{0,N}$ is known to ease the discussion. In this case, the GMM estimator can be defined as
$$\widehat{\beta}_{\mathrm{GMM}} = \arg\min_{\beta \in \mathcal{B}} \widehat{g}(\beta,\eta_{0,N})^T W \widehat{g}(\beta,\eta_{0,N})/2,$$
where $W$ is an arbitrary positive semidefinite matrix and $\widehat{g}(\beta,\eta) = \frac{1}{N}\sum_{i=1}^N g(O_i;\beta,\eta)$ (Hansen, 1982). Let $g(\beta,\eta) = E_{P_{0,N}}[g(O;\beta,\eta)]$. The expectation of the objective function of GMM is
$$E_{P_{0,N}}\big[\widehat{g}^T(\beta,\eta_{0,N})\, W \widehat{g}(\beta,\eta_{0,N})\big] = \underbrace{(1 - N^{-1})\, g(\beta,\eta_{0,N})^T W g(\beta,\eta_{0,N})}_{\text{Signal}} + \underbrace{\mathrm{tr}\big(W \Omega(\beta,\eta_{0,N})\big)/N}_{\text{Noise}},$$
where the first term is called the signal term since its minimizer is $\beta_0$. However, the noise term may not be minimized at $\beta_0$. Under the many weak asymptotics, the signal term may not dominate the noise term, which leads to bias of the GMM estimator. One way to remove the bias incurred by the noise term is to choose $W$ to be $\Omega(\beta,\eta_{0,N})^{-1}$. With this choice, the noise term does not
depend on $\beta$ anymore, and therefore the expectation of the objective function will be minimized at $\beta_0$. In practice, $\Omega(\beta,\eta_{0,N})$ can be replaced by its empirical counterpart $\widehat{\Omega}(\beta,\eta_{0,N})$, where $\widehat{\Omega}(\beta,\eta) := \frac{1}{N}\sum_{i=1}^N g(O_i;\beta,\eta)\, g(O_i;\beta,\eta)^T$, as long as $\sup_{\beta \in \mathcal{B}} \|\widehat{\Omega}(\beta,\eta_{0,N}) - \Omega(\beta,\eta_{0,N})\|$ converges to 0 sufficiently fast. This leads to the objective function of the CUE:
$$\widehat{g}^T(\beta,\eta_{0,N})\, \widehat{\Omega}(\beta,\eta_{0,N})^{-1}\, \widehat{g}(\beta,\eta_{0,N})/2.$$
In the absence of nuisance parameters, this is the estimator of Newey and Windmeijer (2009), which removes the bias due to many weak moments. Although we may use a first-step nuisance parameter estimator in the CUE, we need to further remove the bias due to the estimation of nuisance parameters, which is discussed next.

2.3 Removing bias from estimated nuisance parameters

It is well known that the estimation of nuisance parameters affects the asymptotic behavior of the target parameter estimator (see Theorem 5.31 of Van der Vaart (2000)). To remove the impact of nuisance parameter estimation on the estimation of target parameters, we consider a special class of moment functions which satisfies a global Neyman orthogonality condition (van der Laan and Robins, 2003; Chernozhukov et al., 2018; Foster and Syrgkanis, 2023). For a function $f$, which does not have to be a moment function, we define the order-$k$ Gateaux derivative map $D^{(k)}_{t,\beta,f} : \mathcal{T} \to \mathbb{R}^m$ in the direction $\eta$ at $\beta \in \mathcal{B}$, $t \in [0,1]$, as
$$D^{(k)}_{t,\beta,f}(\eta) = \frac{\partial^k}{\partial t^k} E_{P_{0,N}}\Big[ f\big(O;\beta,(1-t)\eta_{0,N} + t\eta\big) \Big].$$

Definition 1. A function $f(o;\beta,\eta)$ is said to be globally Neyman orthogonal on a set $\mathcal{B}_1 \subset \mathcal{B}$ if for any $\beta \in \mathcal{B}_1$ and $\eta \in \mathcal{T}$, $D^{(1)}_{0,\beta,f}(\eta) = 0$. If $f(o;\beta,\eta)$ is a moment function, we call the corresponding moment condition $E_{P_{0,N}}[f(O;\beta_0,\eta_{0,N})] = 0$ a globally Neyman orthogonal moment condition.

The global Neyman orthogonality condition means that the estimation of the nuisance parameter $\eta_{0,N}$ does not induce first-order bias in $E_{P_{0,N}}[f(O;\beta,\eta_{0,N})]$, no matter what value $\beta$ takes in $\mathcal{B}_1$.
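Definition 1 can be checked numerically by a finite difference in $t$. The sketch below is our own construction (the partially linear model, the perturbation directions, and all variable names are illustrative assumptions, not from this paper): it contrasts an orthogonal partialling-out moment with a non-orthogonal one.

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta = 200_000, 1.0
X = rng.normal(size=n)
D = 0.8 * X + rng.normal(size=n)        # treatment depends on X
Y = theta * D + 2.0 * X + rng.normal(size=n)

h0 = 2.0 * X          # true E[Y - theta*D | X]
q0 = 0.8 * X          # true E[D | X]
dh, dq = X, 0.5 * X   # fixed perturbation directions (functions of X)

def d_dt(moment, eps=1e-4):
    """Central finite-difference Gateaux derivative at t = 0."""
    return (moment(eps) - moment(-eps)) / (2 * eps)

# Orthogonal (partialling-out) moment: E[(Y - theta*D - h)(D - q)]
orth = d_dt(lambda t: np.mean((Y - theta*D - (h0 + t*dh)) * (D - (q0 + t*dq))))

# Non-orthogonal moment: E[D * (Y - theta*D - h)]
nonorth = d_dt(lambda t: np.mean(D * (Y - theta*D - (h0 + t*dh))))

print(f"orthogonal moment derivative:     {orth:+.4f}")     # ~ 0
print(f"non-orthogonal moment derivative: {nonorth:+.4f}")  # ~ -E[D*X]
```

The derivative of the orthogonal moment vanishes up to Monte Carlo error, while the non-orthogonal moment retains a first-order sensitivity to the nuisance perturbation, which is exactly the sensitivity that many weak moments amplify.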
Later on, in Section 3, we assume the moment function $g(o;\beta,\eta)$ is globally Neyman orthogonal on $\mathcal{B}_1 = \mathcal{B}$, which is stronger than the Neyman orthogonality condition defined in Chernozhukov et al. (2018), where they take $\mathcal{B}_1 = \{\beta_0\}$. There are two reasons why we need a stronger version of Neyman orthogonality. First, as we will discuss in Sections 2.4 and 3.1, global Neyman orthogonality is needed for consistency, while in the conventional case of finitely many strong moments, Neyman orthogonality is typically not required for consistency. The intuition is that many weak moments amplify the first-order bias due to nuisance parameter estimation if the moment condition is not orthogonal. Second, the asymptotic expansion of the estimator under weak moment asymptotics involves a second-order term, and attaining asymptotic normality requires us to control the impact of nuisance parameter estimation on that additional term. A sufficient condition is the global Neyman orthogonality of $G(o;\beta,\eta) = \frac{\partial}{\partial\beta} g(o;\beta,\eta)$, which is guaranteed by the global Neyman orthogonality of $g(o;\beta,\eta)$ based on the following permanence property:

Lemma 1. (Permanence properties of global Neyman orthogonality).

(a) Suppose $g(o;\beta,\eta)$ satisfies global Neyman orthogonality on $\mathcal{B}_1$, and $\frac{\partial g(o;\beta,\eta)}{\partial \beta}$ exists on $\mathcal{B}_2 \subset \mathcal{B}_1$; then $\frac{\partial g(o;\beta,\eta)}{\partial \beta}$ is globally Neyman orthogonal on $\mathcal{B}_2$.

(b) Suppose $g_1(o;\beta,\eta)$ and $g_2(o;\beta,\eta)$ satisfy global Neyman orthogonality on $\mathcal{B}_1$, and $h_1(\beta)$ and $h_2(\beta)$ are functions
of $\beta$; then $g_1(o;\beta,\eta) h_1(\beta) + g_2(o;\beta,\eta) h_2(\beta)$ also satisfies global Neyman orthogonality on $\mathcal{B}_1$.

In practice, we may have an initial moment function $\tilde{g}(o;\beta,\tilde\eta)$, with $\tilde\eta$ being the nuisance parameter in the initial moment condition with true value $\tilde\eta_{0,N}$. Usually, $\tilde\eta_{0,N}$ is a subset of $\eta_{0,N}$, and the nuisance parameter can be written as a statistical functional of the underlying distribution; that is, we can write $\eta_{0,N}$ as $\eta(P_{0,N})$. One common way to orthogonalize a given moment function $\tilde{g}(o;\beta,\tilde\eta)$ is to add its first-order influence function. Let $P_{t,N}$ be the submodel defined as $\{(1-t) P_{0,N} + t P_{1,N},\ t \in [0,1]\}$ for some $P_{1,N}$ such that $\eta(P)$ exists for all $P \in P_{t,N}$. The nuisance parameter under a distribution $P$ is denoted as $\eta(P)$. The first-order influence function, denoted as $\phi$, can be characterized in the following way (Ichimura and Newey, 2022):
$$\frac{\partial}{\partial t} E_{P_{0,N}}\big[\tilde{g}(O;\beta,\tilde\eta(P_{t,N}))\big]\Big|_{t=0} = \int \phi(o;\beta,\eta_{0,N}) \, dP_{1,N}(o).$$
This is essentially the Gateaux derivative characterization of the influence function. Then $g(o;\beta,\eta)$ can be constructed as $g(o;\beta,\eta) = \tilde{g}(o;\beta,\tilde\eta) + \phi(o;\beta,\eta)$. By Theorem 1 of Chernozhukov et al. (2022), which is also included in the Supplementary Materials, this construction leads to a globally Neyman orthogonal moment condition. In practice, it is often more convenient to construct Neyman orthogonal moment conditions by inspection and then directly verify the definition of global Neyman orthogonality. Examples of globally Neyman orthogonal moment conditions are given in Section 4. To allow the use of flexible classes of nuisance parameter estimators, one can relax the requirement on the complexity of the function class using cross-fitting (van der Laan and Rose, 2011; Chernozhukov et al., 2018). The idea is to use sample splitting to ensure independent data for nuisance parameter and target parameter estimation.
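As a preview of the estimator stated formally next, here is a minimal end-to-end sketch of sample splitting combined with a CUE objective. The toy linear IV design, the out-of-fold least-squares nuisance fits, and the grid search over the parameter space are our illustrative assumptions, not this paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, L = 4000, 5, 3
X = rng.normal(size=N)                          # covariate entering the nuisances
Z = rng.normal(size=(N, m)) + 0.3 * X[:, None]  # instruments
D = Z @ np.full(m, 0.3) + X + rng.normal(size=N)
Y = 1.0 * D + X + rng.normal(size=N)            # true beta = 1

def linfit(x, v):  # least-squares slope of v on x (fit out-of-fold)
    return (x @ v) / (x @ x)

# Step I: partition indices into L folds; fit nuisances on each fold's complement.
folds = np.array_split(rng.permutation(N), L)
rY, rD, rZ = np.empty(N), np.empty(N), np.empty((N, m))
for l in range(L):
    I, out = folds[l], np.setdiff1d(np.arange(N), folds[l])
    rY[I] = Y[I] - linfit(X[out], Y[out]) * X[I]   # cross-fitted residuals
    rD[I] = D[I] - linfit(X[out], D[out]) * X[I]
    for j in range(m):
        rZ[I, j] = Z[I, j] - linfit(X[out], Z[out, j]) * X[I]

# Step II: minimize the cross-fitted CUE objective over a grid standing in for B.
def Q_hat(beta):
    g = rZ * (rY - beta * rD)[:, None]  # Neyman-orthogonal partialled-out moments
    g_bar = g.mean(axis=0)
    Omega = g.T @ g / N
    # With weight Omega^{-1}, the noise term tr(W Omega)/N = m/N is flat in beta.
    return 0.5 * g_bar @ np.linalg.solve(Omega, g_bar)

grid = np.linspace(-1.0, 3.0, 401)
beta_hat = grid[np.argmin([Q_hat(b) for b in grid])]
print(f"two-step cross-fitted CUE estimate: {beta_hat:.2f}")
```

Each observation's nuisance value is computed only from data outside its own fold, which is exactly the independence that cross-fitting is designed to create.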
We employ a version of cross-fitting combined with the CUE to arrive at the following two-step estimator:

Step I. We partition the observation indices ($i = 1, \ldots, N$) into $L$ groups $\{I_l,\ l = 1, \ldots, L\}$, and we use $N_l$ to denote the cardinality of group $l$. For simplicity, we assume each group has the same cardinality. Let $\widehat\eta_l$ be the estimator constructed using $\{O_i,\ i \in I_l^c\}$, which consists of all observations whose indices are not in $I_l$. We use $l(i)$ to denote the map from $i$ to the group $l \in \{1, \ldots, L\}$ in which $i$ lies, that is, $l(i) = \{l : i \in I_l\}$.

Step II. Let
$$\widehat{g}(\beta,\widehat\eta) = \frac{1}{N}\sum_{i=1}^N g(O_i;\beta,\widehat\eta_{l(i)}), \qquad \widehat{\Omega}(\beta,\widehat\eta) = \frac{1}{N}\sum_{i=1}^N g(O_i;\beta,\widehat\eta_{l(i)})\, g(O_i;\beta,\widehat\eta_{l(i)})^T,$$
$$\widehat{Q}(\beta,\widehat\eta) = \widehat{g}^T(\beta,\widehat\eta)\, \widehat{\Omega}(\beta,\widehat\eta)^{-1}\, \widehat{g}(\beta,\widehat\eta)/2.$$
Then, the target parameter can be estimated by solving the minimization problem
$$\widehat\beta = \arg\min_{\beta \in \mathcal{B}} \widehat{Q}(\beta,\widehat\eta).$$

2.4 Two key features of the theoretical results

There are two key features of our theoretical results:

1. In Section 3.1, we show the consistency of the proposed estimator. Interestingly, departing from the usual double machine learning literature, our consistency result relies on the Neyman orthogonality condition. One important ingredient in showing consistency is to control the rate of $\frac{\sqrt{N}}{\mu_N} \sup_{\beta \in \mathcal{B}} \|\widehat{g}(\beta,\widehat\eta) - \widehat{g}(\beta,\eta_{0,N})\|$ to be $o_p(1)$. This term is related to the bias from estimating $\widehat\eta$. If Neyman orthogonality does not hold, then $\sup_{\beta \in \mathcal{B}} \|\widehat{g}(\beta,\widehat\eta) - \widehat{g}(\beta,\eta_{0,N})\|$ is approximately $\sqrt{m}\,\delta_N$, where $\delta_N$ is roughly the convergence rate of the nuisance parameter estimator. Here the factor $\sqrt{m}$ is due to the many moment conditions. Then the
rate of $\frac{\sqrt{N}}{\mu_N} \sup_{\beta \in \mathcal{B}} \|\widehat{g}(\beta,\widehat\eta) - \widehat{g}(\beta,\eta_{0,N})\|$ becomes $\frac{\sqrt{Nm}\,\delta_N}{\mu_N}$. We can see that under the standard asymptotic regime, when $m$ is a constant and $\mu_N = \sqrt{N}$, we need $\delta_N = o(1)$; that is, as long as the nuisance estimate is consistent, the estimator will be consistent. However, when the moments are very weak, in the sense that $\sqrt{m}/\mu_N = C$ for some constant $C$, without Neyman orthogonality we would need $\delta_N = o(1/\sqrt{N})$, which is not possible even for parametric models. When $g(o;\beta,\eta)$ is globally Neyman orthogonal, we show that $\frac{\sqrt{N}}{\mu_N} \sup_{\beta \in \mathcal{B}} \|\widehat{g}(\beta,\widehat\eta) - \widehat{g}(\beta,\eta_{0,N})\| = o_p(1)$ under mild assumptions.

2. The asymptotic expansion under many weak moment asymptotics involves a higher-order term which requires additional theoretical development in the presence of nuisance parameters. Consider expanding the objective function around the true value. Let $\bar\beta$ be a value between $\widehat\beta$ and $\beta_0$; we have
$$0 = N S_N^{-1} \frac{\partial \widehat{Q}(\beta,\widehat\eta)}{\partial \beta}\bigg|_{\beta=\widehat\beta} = N S_N^{-1} \frac{\partial \widehat{Q}(\beta,\widehat\eta)}{\partial \beta}\bigg|_{\beta=\beta_0} + N S_N^{-1} \frac{\partial^2 \widehat{Q}(\beta,\widehat\eta)}{\partial \beta\, \partial \beta^T}\bigg|_{\beta=\bar\beta} (S_N^{-1})^T \big[S_N(\widehat\beta - \beta_0)\big].$$
Here $N S_N^{-1} \frac{\partial \widehat{Q}(\beta,\widehat\eta)}{\partial \beta}\big|_{\beta=\beta_0}$ gives the usual influence function term of GMM (up to a constant or matrix) plus a U-statistic term. The U-statistic term is negligible under the usual asymptotics but is no longer negligible under many weak asymptotics. We need to show that the contribution of the nuisance parameter estimation is negligible for the U-statistic term, in addition to the first-order term. This is different from the existing double machine learning literature and is unique to weak identification.

3 A general theory for estimation and inference under weak moment conditions

3.1 Consistency

In this section, we give the regularity conditions for consistency of the two-step CUE.

Assumption 2. We assume the following conditions for the moment condition:

(a) For all $\beta \in \mathcal{B}$, the identification relations hold: let $\delta(\beta) = S_N^T(\beta - \beta_0)/\mu_N$; there is a constant $C > 0$ with $\|\delta(\beta)\| \le C\sqrt{N}\|g(\beta,\eta_{0,N})\|/\mu_N$ for all $\beta \in \mathcal{B}$.

(b) There are $C > 0$ and $c_M = O_p(1)$ such that $\|\delta(\beta)\| \le C\sqrt{N}\|\widehat{g}(\beta,\eta_{0,N})\|/\mu_N + c_M$ for all $\beta \in \mathcal{B}$.
Since $\sqrt{N}\|g(\beta,\eta_{0,N})\|/\mu_N = \sqrt{N}\|g(\beta,\eta_{0,N}) - g(\beta_0,\eta_{0,N})\|/\mu_N$ (note that $g(\beta_0,\eta_{0,N}) = 0$), Assumption 2(a) says that when $g(\beta,\eta_{0,N})$ is close to $g(\beta_0,\eta_{0,N})$, $\beta$ is also close to $\beta_0$. Since $\sqrt{N}\|\widehat{g}(\beta,\eta_{0,N})\|/\mu_N = \sqrt{N}\|\widehat{g}(\beta,\eta_{0,N}) - \widehat{g}(\beta_0,\eta_{0,N})\|/\mu_N + O_p(1)$ (we show $\|\widehat{g}(\beta_0,\eta_{0,N})\| = O_p(1)$ in the Supplementary Materials), Assumption 2(b) says that when $\widehat{g}(\beta,\eta_{0,N})$ is close to $\widehat{g}(\beta_0,\eta_{0,N})$, $\delta(\beta)$ is $O_p(1)$. Assumption 2 implies global identification of $\beta_0$ (Newey and Windmeijer, 2009).

Assumption 3. $g(o;\beta,\eta)$ is continuous in $\beta$ and

(a) $\sup_{\beta \in \mathcal{B}} E_{P_{0,N}}[\{g(O;\beta,\eta_{0,N})^T g(O;\beta,\eta_{0,N})\}^2]/N \to 0$;

(b) there is a constant $C > 0$ such that, for all $\beta \in \mathcal{B}$, $1/C \le \xi_{\min}(\Omega(\beta,\eta_{0,N})) \le \xi_{\max}(\Omega(\beta,\eta_{0,N})) \le C$;

(c) $\sup_{\beta \in \mathcal{B}} \|\widehat{\Omega}(\beta,\eta_{0,N}) - \Omega(\beta,\eta_{0,N})\|_F = o_p(1)$;

(d) $|a^T\{\Omega(\beta',\eta_{0,N}) - \Omega(\beta,\eta_{0,N})\} b| \le C \|a\| \|b\| \|\beta' - \beta\|$;

(e) for every $C'$, there are $C$ and $c_M = O_p(1)$ such that for all $\beta, \beta' \in \mathcal{B}$ with $\|\delta(\beta)\| \le C'$ and $\|\delta(\beta')\| \le C'$, $\sqrt{N}\|g(\beta',\eta_{0,N}) - g(\beta,\eta_{0,N})\|/\mu_N \le C\|\delta(\beta') - \delta(\beta)\|$ and $\sqrt{N}\|\widehat{g}(\beta',\eta_{0,N}) - \widehat{g}(\beta,\eta_{0,N})\|/\mu_N \le c_M\|\delta(\beta') - \delta(\beta)\|$.

Assumption 3 gives conditions on the moment condition when $\eta = \eta_{0,N}$ and on how $m$ grows with the sample size $N$. If $g(o;\beta,\eta_{0,N})$ is uniformly bounded, Assumption 3(a) holds when $m^2/N = o(1)$. Assumption 3(b) says $\Omega$ is a positive definite matrix with bounded eigenvalues. For any fixed $\beta$, $m^2/N = o(1)$ is sufficient for Assumption 3(c) to hold if all entries of $g(O;\beta,\eta_{0,N})$ are uniformly bounded. When $g(o;\beta,\eta_{0,N})$ has a simple form, such as being separable as in (2), Assumption 3(c) is easy to verify. Assumptions 3(d) and (e)
are again easy to verify in the examples we provide. The next two assumptions are about nuisance parameter estimation.

Assumption 4. (Global Neyman orthogonality and continuous second-order Gateaux derivative). The moment function $g(o;\beta,\eta)$ satisfies global Neyman orthogonality on $\mathcal{B}$. Furthermore, the second-order Gateaux derivative $D^{(2)}_{t,\beta,g}(\eta)$ exists for all $\beta \in \mathcal{B}$, $t \in [0,1]$, $\eta \in \mathcal{T}$, and is continuous in $t$.

To introduce the next assumption, we need the following notation. Define the function classes
$$\mathcal{G}^{(j)}_\eta = \{g^{(j)}(o;\beta,\eta),\ \beta \in \mathcal{B}\}, \quad \mathcal{G}^{(j,k)}_\eta = \{G^{(j,k)}(o;\beta,\eta),\ \beta \in \mathcal{B}\}, \quad \mathcal{G}^{(j,k,r)}_\eta = \Big\{\tfrac{\partial^2}{\partial \beta^{(k)} \partial \beta^{(r)}} g^{(j)}(o;\beta,\eta),\ \beta \in \mathcal{B}\Big\}$$
(recall that $g^{(j)}(o;\beta,\eta)$ is the $j$-th coordinate of $g(o;\beta,\eta)$ and $G^{(j,k)}(o;\beta,\eta)$ is the $(j,k)$-th entry of $G(o;\beta,\eta)$). For any two function classes $\mathcal{G}_1$ and $\mathcal{G}_2$, we define $\mathcal{G}_1 \cdot \mathcal{G}_2 = \{g_1 \cdot g_2,\ g_1 \in \mathcal{G}_1,\ g_2 \in \mathcal{G}_2\}$. Furthermore, let $E^{(j)}_\eta$ be the envelope function of $\mathcal{G}^{(j)}_\eta$ (Chernozhukov et al., 2014a); similarly we can define envelope functions $E^{(j,k)}_\eta$ and $E^{(j,k,r)}_\eta$. The next assumption states requirements on the convergence rates of the nuisance parameter estimators.

Assumption 5. (Nuisance parameter estimation). Given a random subset $I$ of $[N]$ of size $N/L$, the nuisance parameter estimator $\widehat\eta$ (estimated using samples outside $I$) belongs to a realization set $\mathcal{T}_N$ with probability $1 - \Delta_N$, where $\Delta_N = o(1)$, and $\mathcal{T}_N$ contains $\eta_{0,N}$ and satisfies the following conditions:

(a) For each $\eta \in \mathcal{T}_N$ and $j \in [m]$, the function classes in $\mathcal{G} = \{\mathcal{G}^{(j)}_\eta,\ \mathcal{G}^{(i)}_\eta \times \mathcal{G}^{(j)}_{\eta'},\ i, j = 1, \ldots, m,\ \eta, \eta' \in \mathcal{T}_N\}$ are suitably measurable and are VC-type classes, which means their uniform covering numbers satisfy $\sup_Q \log N(\epsilon \|F\|_{Q,2}, \mathcal{F}, \|\cdot\|_{Q,2}) \le v \log(a/\epsilon)$ for all $0 < \epsilon \le 1$ and $\mathcal{F} \in \mathcal{G}$, where $F$ is a measurable envelope function for $\mathcal{F}$, $N(\epsilon \|F\|_{Q,2}, \mathcal{F}, \|\cdot\|_{Q,2})$ is the covering number (Chernozhukov et al., 2014a), and the supremum is over all finitely discrete measures $Q$. Furthermore, we assume $\|F\|_{P_{0,N},q}$ exists for all $q \ge 1$.
(b) Define the following statistical rates for $j, k \in \{1, \ldots, m\}$:
$$r^{(j)}_N := \sup_{\eta \in \mathcal{T}_N,\, \beta \in \mathcal{B}} \Big( E_{P_{0,N}}\big[\big(g^{(j)}(O;\beta,\eta) - g^{(j)}(O;\beta,\eta_{0,N})\big)^2\big] \Big)^{1/2},$$
$$\lambda^{(j)}_N := \sup_{\beta \in \mathcal{B},\, t \in (0,1),\, \eta \in \mathcal{T}_N} \Big| \frac{\partial^2}{\partial t^2} E_{P_{0,N}}\big[g^{(j)}(O;\beta,(1-t)\eta_{0,N} + t\eta)\big] \Big|,$$
$$r^{(j,k)}_N := \sup_{\beta \in \mathcal{B},\, \eta \in \mathcal{T}_N} \Big( E_{P_{0,N}}\big[\big(g^{(j)}(\beta,\eta)\, g^{(k)}(\beta,\eta) - g^{(j)}(\beta,\eta_{0,N})\, g^{(k)}(\beta,\eta_{0,N})\big)^2\big] \Big)^{1/2}.$$
We assume there exists $\epsilon > 0$ such that the statistical rates satisfy
$$a_N\Big(\sqrt{\log(1/a_N)} + N^{\epsilon}\Big) + N^{-1/2+\epsilon}\Big(\log(1/a_N) + N^{\epsilon}\Big) \le \delta_N \quad \text{for all } a_N \in \{r^{(j)}_N, r^{(j,k)}_N\},$$
$$\lambda^{(j)}_N \le N^{-1/2}\delta_N, \qquad \delta_N = o\big(\min(\sqrt{N}/m,\ \mu_N/\sqrt{m})\big).$$

Assumption 4 says the moment function is globally Neyman orthogonal on the whole parameter space $\mathcal{B}$. Assumption 5(a) is a mild condition restricting the size of the function classes. For example, the set of linear combinations of a finite number of fixed functions $f_i$, $\{\sum_{i=1}^p \beta^{(i)} f_i : \beta \in \mathcal{B}\}$, is a VC class (Van der Vaart, 2000). When both $\mathcal{G}^{(i)}_\eta$ and $\mathcal{G}^{(j)}_{\eta'}$ are VC classes and bounded, it can be shown that $\mathcal{G}^{(i)}_\eta \times \mathcal{G}^{(j)}_{\eta'}$ is automatically VC. Assumption 5(b) is a mild condition on the convergence rates of the nuisance estimators. Here $r^{(j)}_N$ and $r^{(j,k)}_N$ capture the first-order bias from estimating nuisance parameters. Typically, $r^{(j)}_N$ and $r^{(j,k)}_N$ are approximately equal to the convergence rate of the slowest nuisance parameter estimator. Since $\epsilon$ can be taken arbitrarily small, $\delta_N$ is approximately equal to $a_N\sqrt{\log(1/a_N)}$, which is slightly larger than $a_N$ and can also be interpreted approximately as the slowest convergence rate among the nuisance parameter estimators. In the examples in Section 4, $\lambda^{(j)}_N$ is a product of
|
https://arxiv.org/abs/2505.07295v1
|
convergence rates for two or more nuisance parameters estimators, see supplementary materials for more details. Under the usual asymptotics where mis finite and µN=√ N, we require δN=o(√ N) andλN=o(1), which coincides with usual estimation results of doubly-robust consistency result. 14 Unlike the usual GMM estimator, the weighting matrix in the objective function bΩ(β, η) of two- step CUE is estimated and also depends on βandη, therefore we also need further assumptions for matrix convergence. Assumption 6. The following conditions hold: (a) supβ∈B,η∈TN Ω(β, η)−Ω(β, η0,N) =o(1); (b) supβ∈B bΩ(β, η0,N)−Ω(β, η0,N) =op(1). Assumption 6 (a) further restricts the convergence rate of nuisance parameters, which needs to be verified case-by-case. See the examples in Section 4. Assumption 6 (b) is a mild condition for convergence of bΩ(β, η0,N). For any fixed β, a sufficient condition for Assumption 6 to be hold is thatm2/N=o(1). The requirement of uniform convergence is a slightly stronger condition but is still reasonable in many applications. Theorem 1. (Consistency). Under Assumptions 1-6 and m2/N→0, we have δ(bβ) =op(1), where δ(β) =ST N(β−β0)/µN. The consistency result is different from traditional consistency result of double machine learning literature in two ways. First, the consistency result ST N(bβ−β0)/µN=op(1) implies different linear combinations of βhave different rate of convergence in probability and ∥bβ−β0∥=op(1), therefore it is stronger than the usual notion of consistency. Second, the consistency result relies on the global Neyman orthogonality assumption, while the conventional estimators will be consistent under strong identification as long as the nuisance parameters can be consistently estimated. 3.2 Asymptotic normality Additional assumptions are needed for asymptotic normality. The next assumption restricts how the number of moments grows with sample size. Assumption 7. 
(a) $g(o;\beta,\eta)$ is twice continuously differentiable with respect to $\beta$ in a neighborhood of $\beta_0$, denoted $\mathcal B'$.

(b) $\big(E_{P_{0,N}}[\|g(O;\beta_0,\eta_{0,N})\|^4]+E_{P_{0,N}}[\|G(O;\beta,\eta_{0,N})\|_F^4]\big)\,m/N\to 0$.

(c) For a constant $C$ and $j=1,\dots,p$,
\[
\big\|E_{P_{0,N}}[G(O;\beta,\eta_{0,N})G^T(O;\beta,\eta_{0,N})]\big\|\le C,\qquad
\Big\|E_{P_{0,N}}\Big[\frac{\partial G(O;\beta_0,\eta_{0,N})}{\partial\beta^{(j)}}\,\frac{\partial G^T(O;\beta_0,\eta_{0,N})}{\partial\beta^{(j)}}\Big]\Big\|\le C,
\]
\[
\sqrt N\,\Big\|E_{P_{0,N}}\Big[\frac{\partial G(O;\beta_0,\eta_{0,N})}{\partial\beta^{(j)}}\Big](S_N^{-1})^T\Big\|_F\le C.
\]

Assumption 8. For $\bar\beta\xrightarrow{\,p\,}\beta_0$ and $k=1,\dots,p$,
\[
\sqrt N\,\Big\|\big[\hat G(\bar\beta,\eta_{0,N})-\hat G(\beta_0,\eta_{0,N})\big](S_N^{-1})^T\Big\|=o_p(1),\qquad
\sqrt N\,\Big\|\Big[\frac{\partial\hat G(\bar\beta,\eta_{0,N})}{\partial\beta^{(k)}}-\frac{\partial\hat G(\beta_0,\eta_{0,N})}{\partial\beta^{(k)}}\Big](S_N^{-1})^T\Big\|=o_p(1).
\]

A sufficient condition for Assumption 7(b) to hold is that every element of $g_i$ and $G_i$ is uniformly bounded and $m^3/N=o(1)$. The second and third inequalities of Assumption 7(c) and Assumption 8 hold trivially if $g(o;\beta,\eta)$ is linear in $\beta$ (and hence $G(o;\beta,\eta)$ does not depend on $\beta$). Assumption 8 is used to obtain the asymptotic distribution of $\hat\beta-\beta_0$ by showing that the second-order derivative of the objective evaluated at $\hat\beta$ converges to its counterpart evaluated at $\beta_0$ with $\eta$ fixed at $\eta_{0,N}$. The next assumption concerns the convergence rate of the nuisance estimators.

Assumption 9 (Nuisance parameter estimation). Given a random subset $I$ of $[N]$ of size $N/L$, the nuisance parameter estimator $\hat\eta$ (estimated using samples outside $I$) belongs to a realization set $\mathcal T_N$ with probability $1-\Delta_N$, where $\mathcal T_N$ contains $\eta_{0,N}$ and satisfies the following conditions.

(a) For each $\eta\in\mathcal T_N$ and $j\in[m]$, the function classes in $\mathcal G=\{\mathcal G^{(i)}_{\eta},\ \mathcal G^{(i,k)}_{\eta},\ \mathcal G^{(i,k,r)}_{\eta},\ \mathcal G^{(i)}_{\eta}\cdot\mathcal G^{(j)}_{\eta'},\ \mathcal G^{(i,k)}_{\eta}\cdot\mathcal G^{(j,r)}_{\eta'},\ \mathcal G^{(i)}_{\eta}\cdot\mathcal G^{(j,k)}_{\eta'}:\ i,j=1,\dots,m,\ k,r=1,\dots,p,\ \eta,\eta'\in\mathcal T_N\}$ are suitably measurable, and they are VC-type classes, which means their uniform covering numbers satisfy $\sup_Q\log N(\epsilon\|F\|_{Q,2},\mathcal F,\|\cdot\|_{Q,2})\le v\log(a/\epsilon)$ for all $0<\epsilon\le 1$ and $\mathcal F\in\mathcal G$, where $F$ is a measurable envelope function for $\mathcal F$. Furthermore, we assume $\|F\|_{P_{0,N},q}$ exists for all $q\ge 1$.

(b) The following statistical rates are defined for any $j,k\in\{1,\dots,m\}$ and $l,r\in\{1,\dots,p\}$:
\[
r^{(j)}_N:=\sup_{\eta\in\mathcal T_N,\,\beta\in\mathcal B}\Big(E_{P_{0,N}}\big|g^{(j)}(O;\beta,\eta)-g^{(j)}(O;\beta,\eta_{0,N})\big|^2\Big)^{1/2},
\]
\[
\lambda^{(j)}_N:=\sup_{\beta\in\mathcal B,\,t\in(0,1),\,\eta\in\mathcal T_N}\Big|\frac{\partial^2}{\partial t^2}E_{P_{0,N}}\,g^{(j)}\big(O;\beta,(1-t)\eta_{0,N}+t\eta\big)\Big|,
\]
\[
r^{(j,l)}_N:=\sup_{\beta\in\mathcal B',\,\eta\in\mathcal T_N}\Big(E_{P_{0,N}}\Big|\frac{\partial g^{(j)}(\beta,\eta)}{\partial\beta^{(l)}}-\frac{\partial g^{(j)}(\beta,\eta_{0,N})}{\partial\beta^{(l)}}\Big|^2\Big)^{1/2},
\]
\[
r^{(j),(k)}_N:=\sup_{\beta\in\mathcal B,\,\eta\in\mathcal T_N}\Big(E_{P_{0,N}}\big|g^{(j)}(\beta,\eta)g^{(k)}(\beta,\eta)-g^{(j)}(\beta,\eta_{0,N})g^{(k)}(\beta,\eta_{0,N})\big|^2\Big)^{1/2},
\]
\[
r^{(j,l,r)}_N:=\sup_{\beta\in\mathcal B',\,\eta\in\mathcal T_N}\Big(E_{P_{0,N}}\Big|\frac{\partial^2 g^{(j)}(\beta,\eta)}{\partial\beta^{(l)}\partial\beta^{(r)}}-\frac{\partial^2 g^{(j)}(\beta,\eta_{0,N})}{\partial\beta^{(l)}\partial\beta^{(r)}}\Big|^2\Big)^{1/2},
\]
\[
r^{(j),(k,l)}_N:=\sup_{\beta\in\mathcal B',\,\eta\in\mathcal T_N}\Big(E_{P_{0,N}}\Big|g^{(j)}(\beta,\eta)\frac{\partial g^{(k)}(\beta,\eta)}{\partial\beta^{(l)}}-g^{(j)}(\beta,\eta_{0,N})\frac{\partial g^{(k)}(\beta,\eta_{0,N})}{\partial\beta^{(l)}}\Big|^2\Big)^{1/2},
\]
\[
r^{(j,k,l,r)}_N:=\sup_{\beta\in\mathcal B',\,\eta\in\mathcal T_N}\Big(E_{P_{0,N}}\Big|\frac{\partial g^{(j)}(\beta,\eta)}{\partial\beta^{(r)}}\frac{\partial g^{(k)}(\beta,\eta)}{\partial\beta^{(l)}}-\frac{\partial g^{(j)}(\beta,\eta_{0,N})}{\partial\beta^{(r)}}\frac{\partial g^{(k)}(\beta,\eta_{0,N})}{\partial\beta^{(l)}}\Big|^2\Big)^{1/2}.
\]
We then assume there exists $\epsilon>0$ such that the statistical rates satisfy
\[
a_N\Big(\sqrt{\log(1/a_N)}+N^{\epsilon}\Big)+N^{-1/2+\epsilon}\big[\log(1/a_N)+1\big]\le\delta_N
\]
for $a_N\in\{r^{(j)}_N,\ r^{(j,l)}_N,\ r^{(j),(k)}_N,\ r^{(j,l,r)}_N,\ r^{(j),(k,l)}_N,\ r^{(j,k,l,r)}_N\}$, with $\lambda^{(j)}_N\le N^{-1/2}\delta_N$ and $\delta_N=o(1/\sqrt m)$.

Assumption 9 is a stronger version of Assumption 5. Under the usual asymptotics where $m$ is finite and $\mu_N=\sqrt N$, Assumption 9 requires $\delta_N=o(1)$ and $\lambda^{(j)}_N=o(1/\sqrt N)$, which again coincides with the usual rate conditions for rate double robustness (Chernozhukov et al., 2018; Rotnitzky et al., 2020). Under many weak asymptotics, we require $\delta_N=o(1/\sqrt m)$ and $\lambda^{(j)}_N\le\delta_N/\sqrt N$, which can be interpreted as requiring the convergence rate of every nuisance estimator to be faster than $1/\sqrt m$, and the product of the convergence rates of any two nuisance estimators to be faster than $1/\sqrt{mN}$; this is a stronger requirement than the corresponding rate condition under the usual asymptotics (Chernozhukov et al., 2018).
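The cross-fitting construction used in Assumptions 5 and 9 (a random held-out subset $I$ of size $N/L$, with $\hat\eta$ fit on the samples outside $I$) can be sketched as follows. This is an illustrative sketch only: the out-of-fold sample mean of $Y$ is a deliberately trivial stand-in for an arbitrary machine-learning nuisance estimator, so fold handling is the only point being shown.

```python
import numpy as np

def cross_fit_folds(N, L, rng):
    # Partition {0,...,N-1} into L disjoint random folds; each fold plays the
    # role of the held-out subset I of size N/L in Assumptions 5 and 9.
    return np.array_split(rng.permutation(N), L)

rng = np.random.default_rng(0)
N, L = 12, 3
Y = rng.normal(size=N)
folds = cross_fit_folds(N, L, rng)

eta_hat = np.empty(N)
for I in folds:
    outside = np.setdiff1d(np.arange(N), I)   # samples outside I
    # "Fit" the nuisance estimator on the complement of I, then evaluate on I;
    # a real implementation would train a flexible learner here instead.
    eta_hat[I] = Y[outside].mean()
```

Only the values of $\hat\eta$ evaluated on the held-out fold then enter the moment functions, which is what makes the nuisance estimate independent of the fold on which it is used.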
Define
\[
\Omega^{(k)}(o;\beta,\eta)=g(o;\beta,\eta)\frac{\partial g(o;\beta,\eta)^T}{\partial\beta^{(k)}},\qquad
\Omega^{(kl)}(o;\beta,\eta)=g(o;\beta,\eta)\frac{\partial^2 g(o;\beta,\eta)^T}{\partial\beta^{(k)}\partial\beta^{(l)}},\qquad
\Omega^{(k,l)}(o;\beta,\eta)=\frac{\partial g(o;\beta,\eta)}{\partial\beta^{(k)}}\frac{\partial g(o;\beta,\eta)^T}{\partial\beta^{(l)}}.
\]

Assumption 10. For all $\beta,\tilde\beta$ in a neighborhood $\mathcal B'$ of $\beta_0$ and $A(o;\beta,\eta)$ equal to $\Omega^{(k)}(o;\beta,\eta)$, $\Omega^{(kl)}(o;\beta,\eta)$, or $\Omega^{(k,l)}(o;\beta,\eta)$, we have (a) $\sup_{\beta\in\mathcal B'}\|\hat A(\beta,\eta_{0,N})-A(\beta,\eta_{0,N})\|\xrightarrow{\,p\,}0$; (b) $|a'[A(\tilde\beta,\eta_{0,N})-A(\beta,\eta_{0,N})]b|\le C\|a\|\|b\|\|\tilde\beta-\beta\|$, where $\hat A(\beta,\eta)=\frac1N\sum_{i=1}^N A(O_i;\beta,\eta)$ and $A(\beta,\eta)=E_{P_{0,N}}[A(O_i;\beta,\eta)]$.

Assumption 10 also appears in Newey and Windmeijer (2009); it imposes uniform convergence and smoothness conditions on $\Omega$, $\Omega^{(k)}$, $\Omega^{(kl)}$, and $\Omega^{(k,l)}$.

Assumption 11. The following conditions hold for $A(o;\beta,\eta)$ equal to $\Omega(o;\beta,\eta)$, $\Omega^{(k)}(o;\beta,\eta)$, $\Omega^{(kl)}(o;\beta,\eta)$, or $\Omega^{(k,l)}(o;\beta,\eta)$: (a) $\sup_{\beta\in\mathcal B',\eta\in\mathcal T_N}\|A(\beta,\eta)-A(\beta,\eta_{0,N})\|=o(1/\sqrt m)$; (b) $\sup_{\beta\in\mathcal B'}\|\hat A(\beta,\eta_{0,N})-A(\beta,\eta_{0,N})\|=o_p(1/\sqrt m)$.

Compared with Assumption 6, Assumption 11 imposes stronger conditions on nuisance parameter estimation and on the convergence rates of high-dimensional matrices.

We now discuss the special case where the moment function is separable. We call a moment function $g(o;\beta,\eta)$ separable if it has the form
\[
g(o;\beta,\eta)=\sum_{b=1}^B g^{[b]}(o;\eta)\,h^{[b]}(\beta),\qquad(2)
\]
where $g^{[b]}(o;\eta)$, $b=1,\dots,B$, are $m\times q$ matrices that do not depend on $\beta$, and $h^{[b]}(\beta)$, $b=1,\dots,B$, are $q\times 1$ vectors of twice continuously differentiable functions of $\beta$ that do not depend on $\eta$. We call a moment function satisfying (2) separable because the nuisance parameters and the parameter of interest enter through separate factors. For separable moment functions, we assume the following regularity conditions for the estimation of the nuisance parameters, which specialize Assumption 9 to the separable case.

Assumption 12 (Separable score regularity and nuisance parameter estimation). (a) $g^{[b]}(o;\eta)$ satisfies global Neyman orthogonality on $\mathcal B$ for every $b\in\{1,2,\dots,B\}$, and the second-order Gateaux derivative $D^{(2)}_{\beta,t,g^{[b]}}(\eta)$ is continuous.

(b) Given a random subset $I$ of $[N]$ of size $N/L$, the nuisance parameter estimator $\hat\eta$ (estimated using samples outside $I$) belongs to a realization set $\mathcal T_N$ with probability $1-\Delta_N$, where $\mathcal T_N$ contains $\eta_{0,N}$ and satisfies the following conditions. For any $j,k\in\{1,\dots,m\}$, $l,r\in\{1,\dots,q\}$, and $b_1,b_2\in\{1,\dots,B\}$, define
\[
r^{(b_1),(j,l)}_N:=\sup_{\eta\in\mathcal T_N}\Big(E_{P_{0,N}}\big|g^{[b_1],(j,l)}(O;\eta)-g^{[b_1],(j,l)}(O;\eta_{0,N})\big|^2\Big)^{1/2},
\]
\[
\lambda^{(j)}_N:=\sup_{\beta\in\mathcal B,\,t\in(0,1),\,\eta\in\mathcal T_N}\Big|\frac{\partial^2}{\partial t^2}E_{P_{0,N}}\,g^{(j)}\big(O;\beta,(1-t)\eta_{0,N}+t\eta\big)\Big|,
\]
\[
r^{(b_1,b_2),(j,k,l,r)}_N:=\sup_{\eta\in\mathcal T_N}\Big(E_{P_{0,N}}\big|g^{[b_1],(j,l)}(O;\eta)g^{[b_2],(k,r)}(O;\eta)-g^{[b_1],(j,l)}(O;\eta_{0,N})g^{[b_2],(k,r)}(O;\eta_{0,N})\big|^2\Big)^{1/2}.
\]
We then assume the statistical rates satisfy $a_N\le\delta_N$, $\lambda_N\le N^{-1/2}\delta_N$, and $\delta_N=o(1/\sqrt m)$ for $a_N\in\{r^{(b_1),(j,l)}_N,\ r^{(b_1,b_2),(j,k,l,r)}_N\}$.

Similar to Assumption 9, Assumption 12 imposes conditions on the convergence rates of the nuisance estimators for separable moment functions. Note that the definitions of $r^{(b_1),(j,l)}_N$ and $r^{(b_1,b_2),(j,k,l,r)}_N$ do not involve a supremum over $\beta$, which makes them more convenient to verify in practice. Assumption 11(b) is also easier to verify for separable moment functions, since the supremum over $\beta$ can be separated out and replaced by a constant by the compactness of $\mathcal B$; concentration inequalities for norms of high-dimensional matrices can then be used (Tropp, 2016).

Theorem 2.
Suppose Assumptions 1-11 hold for a general moment function, or Assumptions 1-3, 6-8, 10, and 12 hold for a separable moment function, and $m^3/N\to 0$. Then
\[
S_N^T(\hat\beta-\beta_0)\rightsquigarrow N(0,V),
\]
where $V=H^{-1}+H^{-1}\Lambda H^{-1}$, $S_N^{-1}E[U_i^T\Omega^{-1}U_i](S_N^{-1})^T\xrightarrow{\,p\,}\Lambda$, $\rightsquigarrow$ stands for weak convergence, $H$ is defined in Assumption 1, and
\[
U_i^{(k)}=G^{(k)}(O_i;\beta_0,\eta_{0,N})-E_{P_{0,N}}[G^{(k)}(O_i;\beta_0,\eta_{0,N})]-E_{P_{0,N}}[G^{(k)}(O_i;\beta_0,\eta_{0,N})g^T(O_i;\beta_0,\eta_{0,N})]\Omega^{-1}g(O_i;\beta_0,\eta_{0,N}),
\]
with $U_i=[U_i^{(1)},\dots,U_i^{(p)}]$.

Theorem 2 implies that the asymptotic distribution of the proposed two-step CUE is the same as that of $\arg\min_\beta\hat Q(\beta,\eta_{0,N})$ (Newey and Windmeijer, 2009); that is, knowing the true value of the nuisance parameter or not does not affect the asymptotic distribution. Although this property was also obtained in the double machine learning literature with an estimating equation or GMM as the second step (Chernozhukov et al., 2018, 2022), we generalize those results to the case with CUE as the second step under many weak moment asymptotics. The conditions and proofs, however, are very different, since here we need to show that the nuisance parameter estimation is negligible for the U-statistics term.

We now give a consistent estimator for the asymptotic variance. Let
\[
\hat H=\frac{\partial^2\hat Q(\hat\beta,\hat\eta)}{\partial\beta\,\partial\beta^T},\qquad
\hat D^{(j)}(\beta)=\frac1N\sum_{i=1}^N\frac{\partial g_i(\beta,\hat\eta)}{\partial\beta^{(j)}}-\frac1N\sum_{i=1}^N\frac{\partial g_i(\beta,\hat\eta_{l(i)})}{\partial\beta^{(j)}}\,g_i(\beta,\hat\eta_{l(i)})^T\,\hat\Omega(\beta,\hat\eta)^{-1}\hat g(\beta,\hat\eta),
\]
$\hat D=(\hat D^{(1)}(\hat\beta),\dots,\hat D^{(p)}(\hat\beta))$, and $\hat\Omega=\hat\Omega(\hat\beta,\hat\eta)$. The variance estimator is then $\hat V/N$, where $\hat V=\hat H^{-1}\hat D^T\hat\Omega^{-1}\hat D\hat H^{-1}$. The following theorem states that $\hat V$ is a consistent variance estimator.

Theorem 3 (Consistency of the variance estimator). Assume all the regularity assumptions in Theorem 2 hold. Then $S_N^T\hat V S_N/N\xrightarrow{\,p\,}V$. Furthermore, for a vector $c$, if there exist $r_N$ and $c^*\ne 0$ such that $r_N S_N^{-1}c\to c^*$, then
\[
\frac{c^T(\hat\beta-\beta_0)}{\sqrt{c^T\hat V c/N}}\xrightarrow{\,d\,}N(0,1).
\]

In the standard double machine learning literature, the asymptotic variance is usually estimated by the empirical variance of the influence function, with plug-in nuisance estimates (Chernozhukov et al., 2018).
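For concreteness, the sandwich form $\hat V=\hat H^{-1}\hat D^T\hat\Omega^{-1}\hat D\hat H^{-1}$ can be assembled as below. This is a shape-level sketch only: the per-observation moments `g` and Jacobians `J` are random placeholders (in practice they come from the orthogonal moment function with cross-fitted nuisance estimates), and the plug-in $\hat H=\hat D^T\hat\Omega^{-1}\hat D$ is an assumption of this sketch rather than the paper's Hessian of $\hat Q$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, p = 2000, 6, 2

g = rng.normal(size=(N, m))      # placeholder per-observation moments g_i
J = rng.normal(size=(N, m, p))   # placeholder per-observation Jacobians dg_i/dbeta

g_bar = g.mean(axis=0)
Omega_hat = np.cov(g, rowvar=False)   # estimated m x m weighting matrix

# Column j of D_hat: mean(dg_i/dbeta_j) minus the orthogonalization term
# mean((dg_i/dbeta_j) g_i') Omega^{-1} g_bar, mirroring the D^{(j)} formula.
D_hat = np.empty((m, p))
for j in range(p):
    Jj = J[:, :, j]                               # (N, m) slice for beta_j
    proj = np.einsum('ni,nj->ij', Jj, g) / N      # mean of (dg_i/dbeta_j) g_i'
    D_hat[:, j] = Jj.mean(axis=0) - proj @ np.linalg.solve(Omega_hat, g_bar)

H_hat = D_hat.T @ np.linalg.solve(Omega_hat, D_hat)   # assumed plug-in for H
V_hat = (np.linalg.inv(H_hat) @ D_hat.T @ np.linalg.solve(Omega_hat, D_hat)
         @ np.linalg.inv(H_hat))
# The standard error of c' beta_hat is then sqrt(c' V_hat c / N), as in Theorem 3.
```

The design choice to work with per-observation arrays (`g`, `J`) keeps the sketch close to how $\hat\Omega$ and $\hat D$ are averaged in the paper; only the plug-in $\hat H$ would change in a faithful implementation.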
However, this conventional variance estimator ignores the contribution of the U-statistics term to the asymptotic variance and therefore tends to exhibit downward bias.

3.3 Over-identification test

We propose an over-identification test as a diagnostic tool, extending the traditional J-test for over-identification (Hansen, 1982). In practice, one often wants to test whether all moment conditions hold simultaneously, that is, whether $E_{P_{0,N}}[g(O;\beta_0,\eta_{0,N})]=0$. The null hypothesis fails if at least one of the moment conditions fails to hold. Our version of the over-identification test is given below.

Theorem 4 (Over-identification test). Assume all the regularity assumptions in Theorem 2 hold. Under the null hypothesis $H_0:E_{P_{0,N}}[g(O;\beta_0,\eta_{0,N})]=0$,
\[
P_{0,N}\big(2N\hat Q(\hat\beta,\hat\eta)\ge\chi^2_{1-\alpha}(m-p)\big)\to\alpha
\]
as $N\to\infty$, where $\chi^2_{1-\alpha}(m-p)$ is the $1-\alpha$ quantile of the chi-square distribution with $m-p$ degrees of freedom.

The over-identification test can be viewed as a model specification test, that is, a test of whether the moment condition model is correctly specified. When the moment conditions are constructed using instrumental variables, the over-identification test can also be interpreted as a test of instrument validity. It should be noted, however, that other forms of misspecification, such as an incorrect functional form, can also lead to rejection of the null hypothesis.

4 Examples

In this section, we provide three examples from causal inference and study specialized conditions under which the high-level conditions given in the previous sections are satisfied.

4.1 Additive structural mean model with many instrumental variables

Suppose we have $\{O_i=(Y_i,A_i,Z_i,X_i,U_i),\ i=1,\dots,N\}$, which are iid draws from the true data-generating distribution $P_{0,N}$, where $Y$ is a continuous outcome, $A$ denotes treatment uptake, which can be continuous or categorical, $Z$ is an $m$-dimensional vector of instruments, $X$ represents baseline covariates, and $U$ are unobserved confounders between $A$ and $Y$. Let $Y(z,a)$ be the potential outcome (Rubin, 1972) of $Y$ under treatment value $a$ and instrument value $z$.
To construct moment conditions, we assume (a) consistency: $Y=Y(z,a)$ if $A=a$ and $Z=z$; (b) exclusion restriction: $Y(z,a)=Y(a)$, so $Y(z,a)$ can be written as $Y(a)$; (c) latent ignorability: $Y(a)\perp\!\!\!\perp(A,Z)\mid X,U$; (d) IV independence: $Z\perp\!\!\!\perp U\mid X$; and (e) IV relevance: $Z$ is not independent of $A$ given $X$. The following additive structural mean model (ASMM) is a commonly used model (Robins, 1994; Vansteelandt and Joffe, 2014):
\[
E_{P_{0,N}}[Y(a)-Y(0)\mid A=a,Z=z,X=x,U=u]=\gamma(a,z,x;\beta_0),
\]
where $\{\gamma(a,z,x;\beta),\ \beta\in\mathcal B\}$ is a user-specified function class with $\gamma(0,z,x;\beta)=0$ and $\gamma(a,z,x;0)=0$, and $\mathcal B$ is a compact subset of $\mathbb R^p$. Note that the ASMM above implicitly assumes $U$ is not an effect modifier. A popular choice is $\gamma(a,z,x;\beta)=\sum_{j=1}^p\beta^{(j)}f^{(j)}(a)h^{(j)}(x)$, where $f^{(j)}(a)$ and $h^{(j)}(x)$ are functions of $a$ and $x$ with $f^{(j)}(0)=0$. For example, if $p=2$, $X$ is one-dimensional, $f^{(1)}(a)=f^{(2)}(a)=a$, $h^{(1)}(x)=1$, and $h^{(2)}(x)=x$, then $\gamma(a,z,x;\beta)=a(\beta^{(1)}+\beta^{(2)}x)$, which explicitly models effect modification by $X$. If $p=1$, $f^{(1)}(a)=a$, and $h^{(1)}(x)=1$, then $\gamma(a,z,x;\beta)=\beta a$.

Under the ASMM, we have
\[
E_{P_{0,N}}[Y-\gamma(A,Z,X;\beta_0)\mid Z,X]
=E_{P_{0,N}}\big[E_{P_{0,N}}[Y-\gamma(A,Z,X;\beta_0)\mid Z,X,A,U]\mid Z,X\big]
=E_{P_{0,N}}\big[E_{P_{0,N}}[Y(0)\mid Z,X,A,U]\mid Z,X\big]
=E_{P_{0,N}}[Y(0)\mid X].
\]
Similarly, we can show that $E_{P_{0,N}}[Y-\gamma(A,Z,X;\beta_0)\mid X]=E_{P_{0,N}}[Y(0)\mid X]$, hence $E_{P_{0,N}}[Y-\gamma(A,Z,X;\beta_0)\mid X]=E_{P_{0,N}}[Y-\gamma(A,Z,X;\beta_0)\mid Z,X]$. Therefore, let $\tilde g(O;\beta,\eta_Z)=(Y-\gamma(A,Z,X;\beta))(Z-\eta_Z(X))$, where $\eta_Z$ is a function of $X$ with true value $\eta_{Z,0,N}(X)=E_{P_{0,N}}[Z\mid X]$. Then $E_{P_{0,N}}[\tilde g(O;\beta_0,\eta_{Z,0,N})]=0$, which defines a valid moment condition for estimating $\beta_0$.

For ease of presentation, we consider the simple model $\gamma(a,z,x;\beta)=\beta a$; the discussion for a more complicated $\gamma(a,z,x;\beta)=\sum_{j=1}^p\beta^{(j)}f^{(j)}(a)h^{(j)}(x)$ is similar. In this case, the orthogonal moment function can be defined as
\[
g(O;\beta,\eta)=\big(Y-\eta_Y(X)-\beta A+\beta\,\eta_A(X)\big)\big(Z-\eta_Z(X)\big),
\]
where $\eta=(\eta_Z,\eta_Y,\eta_A)$, $\eta_Y$ and $\eta_A$ are functions of $X$ with true values $\eta_{Y,0,N}(X)=E_{P_{0,N}}[Y\mid X]$ and $\eta_{A,0,N}(X)=E_{P_{0,N}}[A\mid X]$, and $\eta_Z=(\eta_{Z^{(1)}},\dots,\eta_{Z^{(m)}})$ with true values $\eta_{Z^{(k)},0,N}(X)=E_{P_{0,N}}[Z^{(k)}\mid X]$ for $k=1,\dots,m$. Note that when $\gamma(a,z,x;\beta)=\beta a$, the orthogonal estimating function takes exactly the same form as the Robinson-style score proposed for the partially linear IV model (Chernozhukov et al., 2018). Our theory, however, allows us to handle more complex ASMMs.

For consistency and asymptotic normality of the proposed two-step CUE in the presence of high-dimensional weak instrumental variables, we need the following regularity assumptions.

Assumption 13 (Regularity assumptions for the ASMM). We assume the following conditions hold: (a) $(X,Z)$ is uniformly bounded, and there exists a constant $C$ such that $E_{P_{0,N}}[Y^4\mid X]<C$, $E_{P_{0,N}}[A^4\mid X]<C$, and $E_{P_{0,N}}[A^2\mid Z,X]<C$. (b) There is a positive scalar $H$ such that $\mu_N^{-2}N\,G^T\Omega^{-1}G\to H$, where $\mu_N^2\to\infty$, $\mu_N=O(\sqrt N)$, $m/\mu_N^2$ is bounded for all $N$, and $m^3/N\to 0$. (c) $\frac1C I_m\preceq E_{P_{0,N}}[(Z-E_{P_{0,N}}[Z\mid X])(Z-E_{P_{0,N}}[Z\mid X])^T\mid X]\preceq C I_m$. (d) Given a random subset $l$ of $[N]$ of size $N/L$, the nuisance parameter estimator $\hat\eta_l$ satisfies the following conditions with $P_{0,N}$-probability no less than $1-\Delta_N$, for $j=1,\dots,m$:
\[
\|\hat\eta_{A,l}-\eta_{A,0,N}\|_{P_{0,N},2}\le\delta_N,\quad
\|\hat\eta_{Y,l}-\eta_{Y,0,N}\|_{P_{0,N},2}\le\delta_N,\quad
\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},2}\le\delta_N/m,
\]
\[
\|\hat\eta_{Y,l}-\eta_{Y,0,N}\|_{P_{0,N},\infty}\le C,\quad
\|\hat\eta_{A,l}-\eta_{A,0,N}\|_{P_{0,N},\infty}\le C,\quad
\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},\infty}\le C,
\]
\[
\|\hat\eta_{A,l}-\eta_{A,0,N}\|_{P_{0,N},2}\,\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},2}\le N^{-1/2}\delta_N,\quad
\|\hat\eta_{Y,l}-\eta_{Y,0,N}\|_{P_{0,N},2}\,\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},2}\le N^{-1/2}\delta_N,
\]
where $\delta_N=o(1/\sqrt m)$ and $\Delta_N=o(1)$.
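The orthogonal ASMM moment function above can be exercised numerically. The sketch below uses a toy linear data-generating process with oracle nuisance values (an illustrative assumption of this sketch, not the paper's simulation design): here $E[Z_j\mid X]=X$, $E[A\mid X]=2X$, and $E[Y\mid X]=7X$ by construction, and the moment has mean approximately zero at $(\beta_0,\eta_0)$.

```python
import numpy as np

def asmm_moment(Y, A, Z, etaY, etaA, etaZ, beta):
    # g(O; beta, eta) = (Y - etaY(X) - beta*A + beta*etaA(X)) * (Z - etaZ(X)),
    # evaluated per observation; Z is an (N, m) instrument matrix.
    return (Y - etaY - beta * (A - etaA))[:, None] * (Z - etaZ)

# Toy data-generating process (illustrative only).
rng = np.random.default_rng(2)
N, m, beta0 = 200_000, 3, 3.0
X = rng.normal(size=N)
U = rng.normal(size=N)                        # unmeasured confounder
Z = X[:, None] + rng.normal(size=(N, m))      # E[Z_j | X] = X
A = X + Z.mean(axis=1) + U + rng.normal(size=N)   # E[A | X] = 2X
Y = beta0 * A + X + U + rng.normal(size=N)        # E[Y | X] = 7X

g0 = asmm_moment(Y, A, Z, etaY=7 * X, etaA=2 * X, etaZ=X[:, None], beta=beta0)
# At the true (beta0, eta0), every coordinate of g0.mean(axis=0) is near zero,
# even though U confounds A and Y; at a wrong beta the mean moves away from zero.
```

This illustrates why the residualized form is a valid moment: at the truth the residual $Y-\eta_Y-\beta_0(A-\eta_A)$ reduces to noise that is independent of $Z-\eta_Z$.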
Assumption 13 provides sufficient conditions for the assumptions stated in Theorem 2, and thus guarantees consistency and asymptotic normality of the two-step CUE under the ASMM. Under the ASMM, $G=-E_{P_{0,N}}[(A-\eta_{A,0,N}(X))(Z-\eta_{Z,0,N}(X))]$, so Assumption 13(b) can be interpreted as a weak conditional correlation between $A$ and $Z$ given $X$. Assumption 13(d) gives mild conditions on the convergence rates of the nuisance estimation. Suppose $m$ is $O(N^{1/3-\epsilon})$ for some $0<\epsilon\le 1/3$; then we require the convergence rate of $\hat\eta_{Z^{(j)},l}$ to be $o(N^{-1/2+3\epsilon/2})$, so an approximately parametric rate for $\hat\eta_{Z^{(j)},l}$ is needed when $\epsilon$ is close to zero. For the estimation of $\eta_{A,0,N}$ and $\eta_{Y,0,N}$, the convergence rates can be as slow as $o(N^{-1/6+\epsilon/2})$, which is a mild requirement. Assumption 13(d) also requires our $m+2$ nuisance estimates to belong to the realization set simultaneously. In the traditional double machine learning literature (Chernozhukov et al., 2018), where $m$ is fixed, this is not an issue since the number of nuisance parameters is fixed. Under many weak asymptotics, the number of nuisance parameters $\eta_{Z^{(j)},0,N}$ is diverging, and it is therefore harder to control them simultaneously. When $\hat\eta_{Z^{(j)},l}$ is estimated through nonlinear least squares, under some regularity conditions the $L_2$ estimation error concentrates in an interval of size $O(\sqrt{\log N/N})$ with probability no less than $1-C/N$ (see Theorem 5.2 of Koltchinskii (2011) for more details); therefore, under suitable conditions, the probability that all $\hat\eta_{Z^{(j)},l}$ concentrate in an interval of size $O(\sqrt{\log N/N})$ is no less than $1-Cm/N$. Since $m/N=o(1)$, it is possible to control the convergence rates of $\hat\eta_{Z^{(j)},l}$, $j=1,\dots,m$, simultaneously even when $m$ goes to infinity. The consistency and asymptotic normality of the CUE when $Z$ is a high-dimensional weak instrumental variable is given by the following theorem.

Theorem 5. Suppose Assumption 13 holds. The CUE estimator is consistent and asymptotically normal, with the asymptotic distribution being the same as that in Theorem 2.

4.2 Multiplicative structural mean model with many instrumental variables

Next, we give an example where the moment condition is not linear in the parameter of interest. Following the same notation and setup as in the ASMM example, except that we consider $A\in\{0,1,\dots,K\}$ to be ordinal, we consider the following multiplicative structural mean model (MSMM):
\[
\log\frac{E_{P_{0,N}}[Y(a)\mid A=a,Z=z,U=u,X=x]}{E_{P_{0,N}}[Y(0)\mid A=a,Z=z,U=u,X=x]}=\gamma(a,z,x;\beta_0).
\]
Again, $\gamma(a,z,x;\beta)$ is a user-specified function with $\gamma(0,z,x;\beta)=0$; a simple but widely used example is $\gamma(a,z,x;\beta)=\beta a$. Similar to the ASMM, one can include effect modifiers in the model. The goal here is also to estimate $\beta$, since $\beta$ encodes the conditional average causal effect on the multiplicative scale. Similar to the ASMM, $E_{P_{0,N}}[\tilde g(O;\beta_0,\eta_{Z,0,N})]=0$ defines an initial moment condition for estimating $\beta_0$, where $\tilde g(O;\beta_0,\eta_{Z,0,N})=Y\exp(-\beta_0 A)(Z-\eta_{Z,0,N})$ and $\eta_{Z,0,N}$ is defined in the same way as in the ASMM example. A globally orthogonal form of the moment condition can be derived using
\[
E_{P_{0,N}}\Big[\Big(Y\exp(-\beta A)-\sum_{a\in\{0,1,\dots,K\}}E_{P_{0,N}}[Y\mid X,A=a]\exp(-\beta a)\,P_{0,N}(A=a\mid X)\Big)\big(Z-E_{P_{0,N}}[Z\mid X]\big)\Big]=0.
\]
For ease of presentation, we consider $A$ to be binary; the orthogonal moment function is then
\[
g(o;\beta,\eta)=\Big(Y\exp(-\beta A)-\sum_{a\in\{0,1\}}\eta_{Y,(a)}(X)\exp(-\beta a)\,\eta_{A,(a)}(X)\Big)\big(Z-\eta_Z(X)\big),
\]
where $\eta=(\eta_{Y,(0)},\eta_{Y,(1)},\eta_{A,(0)},\eta_{A,(1)},\eta_Z)$ with true values $\eta_{Y,(a),0,N}(X)=E_{P_{0,N}}[Y\mid A=a,X]$, $\eta_{A,(1),0,N}(X)=P_{0,N}(A=1\mid X)$, $\eta_{A,(0),0,N}(X)=1-P_{0,N}(A=1\mid X)$, and $\eta_{Z,0,N}(X)=E_{P_{0,N}}[Z\mid X]$. We need the following additional regularity assumption.

Assumption 14 (Regularity assumptions for the MSMM). (a) $(X,Z)$ is uniformly bounded, and there exists a constant $C$ such that $E_{P_{0,N}}[Y^4\mid A,X]<C$.
(b) There is a positive scalar $H$ such that $\mu_N^{-2}N\,G^T\Omega^{-1}G\to H$, where $\mu_N^2\to\infty$, $\mu_N=O(\sqrt N)$, $m/\mu_N^2$ is bounded for all $N$, and $m^3/N\to 0$. (c) There is a constant $C$ such that $\frac1C I_m\preceq E_{P_{0,N}}[(Z-E_{P_{0,N}}[Z\mid X])(Z-E_{P_{0,N}}[Z\mid X])^T\mid X]\preceq C I_m$. (d) Given a random subset $l$ of $[N]$ of size $N/L$, the nuisance parameter estimator $\hat\eta_l$ satisfies the following conditions with $P_{0,N}$-probability no less than $1-\Delta_N$:
\[
\|\hat\eta_{A,(1),l}-\eta_{A,(1),0,N}\|_{P_{0,N},2}\le\delta_N,\quad
\|\hat\eta_{Y,(a),l}-\eta_{Y,(a),0,N}\|_{P_{0,N},2}\le\delta_N,\quad
\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},2}\le\delta_N/m,
\]
\[
\|\hat\eta_{A,(1),l}-\eta_{A,(1),0,N}\|_{P_{0,N},\infty}\le C,\quad
\|\hat\eta_{Y,(a),l}-\eta_{Y,(a),0,N}\|_{P_{0,N},\infty}\le C,\quad
\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},\infty}\le C,
\]
\[
\|\hat\eta_{A,(1),l}-\eta_{A,(1),0,N}\|_{P_{0,N},2}\,\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},2}\le N^{-1/2}\delta_N,\quad
\|\hat\eta_{Y,(a),l}-\eta_{Y,(a),0,N}\|_{P_{0,N},2}\,\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},2}\le N^{-1/2}\delta_N,
\]
where $\delta_N=o(1/\sqrt m)$, $\Delta_N=o(1)$, and $C$ is a constant.

In this example,
\[
G=-\exp(-\beta_0)\,E_{P_{0,N}}\Big[\big(YI\{A=1\}-E_{P_{0,N}}[YI\{A=1\}\mid X]\big)\big(Z-E_{P_{0,N}}[Z\mid X]\big)\Big],
\]
so Assumption 14(b) can be interpreted as saying that the conditional correlation, given $X$, between $YI\{A=1\}$ and $Z$ is small. Since $Z$ affects $Y$ only through $A$, weak identification can still be interpreted as a small conditional correlation between $A$ and $Z$. The consistency and asymptotic normality of the two-step CUE when $Z$ is a high-dimensional weak instrumental variable is given by the following theorem.

Theorem 6. Suppose Assumption 14 holds. The CUE estimator is consistent and asymptotically normal, with the asymptotic distribution being the same as that in Theorem 2.

4.3 Proximal causal inference

In this section, we discuss a structural mean model approach to proximal causal inference with many weak treatment proxies, which can be viewed as a generalization of the method of Tchetgen Tchetgen et al. (2024). Let $P_{0,N}$ be the true data-generating mechanism. Let $A$ be a continuous treatment with reference level 0, $Y$ a continuous outcome, $X$ measured confounders, $U$ unmeasured confounders, $Z$ an $m$-dimensional vector of treatment proxies, and $W$ a one-dimensional outcome proxy. We assume the following structural mean model for the potential outcomes: $E_{P_{0,N}}[Y(a)-Y(0)\mid Z,A=a,U,X]=\beta_{a,0}\,a$. To construct moment conditions for $\beta_{a,0}$, we impose the following assumptions.

Assumption 15 (Identification of the proximal structural mean model). (a) (Latent ignorability for treatment and treatment proxies.) $(Y(0),W)\perp\!\!\!\perp(A,Z)\mid U,X$. (b) (Outcome proxy.) There exists $\beta_{w,0}$ such that $E_{P_{0,N}}[Y(0)-\beta_{w,0}W\mid U,X]=E_{P_{0,N}}[Y(0)-\beta_{w,0}W\mid X]$.

Assumption 15(a) is a mild condition that is also assumed in Tchetgen Tchetgen et al. (2024) and Liu et al. (2024); it means that, conditioning on all the covariates and unmeasured confounders, the causal relationship between $(A,Z)$ and $(Y,W)$ is no longer confounded. Assumption 15(b) can be justified under the following structural model:
\[
E_{P_{0,N}}[Y\mid A,Z,U,X]=\beta_0+\beta_{a,0}A+\beta_{u,0}U+f_1(X),\qquad
E_{P_{0,N}}[W\mid A,Z,U,X]=\alpha_0+\alpha_{u,0}U+f_2(X),
\]
where $f_1$ and $f_2$ are two unknown functions of $X$. Under this structural model, $\beta_{w,0}=\beta_{u,0}/\alpha_{u,0}$. This generalizes Tchetgen Tchetgen et al. (2024) and Liu et al. (2024), who assumed $f_1$ and $f_2$ are linear functions of $X$. We have the following identification result for $\beta_{a,0}$ and $\beta_{w,0}$.

Theorem 7. Under Assumption 15, the following moment condition holds:
\[
E_{P_{0,N}}\Big[\binom{A-E_{P_{0,N}}[A\mid X]}{Z-E_{P_{0,N}}[Z\mid X]}\big(Y-E_{P_{0,N}}[Y\mid X]-\beta_{a,0}(A-E_{P_{0,N}}[A\mid X])-\beta_{w,0}(W-E_{P_{0,N}}[W\mid X])\big)\Big]=0.
\]
Furthermore, it is globally Neyman orthogonal on $\mathcal B$.

Therefore, the moment function can be defined as
\[
g(O;\beta,\eta)=\binom{A-\eta_A(X)}{Z-\eta_Z(X)}\big(Y-\beta_a A-\beta_w W-\eta_Y(X)+\beta_a\eta_A(X)+\beta_w\eta_W(X)\big),\qquad(3)
\]
where $\beta=(\beta_a,\beta_w)$ with true value $\beta_0=(\beta_{a,0},\beta_{w,0})$, and $\eta=(\eta_Y,\eta_W,\eta_A,\eta_Z)$ with true values $\eta_{Y,0,N}(X)=E_{P_{0,N}}[Y\mid X]$, $\eta_{A,0,N}(X)=E_{P_{0,N}}[A\mid X]$, $\eta_{W,0,N}(X)=E_{P_{0,N}}[W\mid X]$, and $\eta_{Z,0,N}(X)=E_{P_{0,N}}[Z\mid X]$. For the consistency and asymptotic normality of the CUE with many weak treatment proxies, we need the following regularity conditions.

Assumption 16 (Regularity assumptions for the proximal causal inference example).
The following conditions hold: (a) $(A,W,Z)$ is uniformly bounded, $E_{P_{0,N}}[Y^4\mid A,X,Z]<C$, and $E_{P_{0,N}}[W^4\mid A,X,Z]<C$. (b) Assumption 1 holds and $m^3/N\to 0$. (c) $\frac1C I_{m+1}\preceq E_{P_{0,N}}[(\tilde Z-E_{P_{0,N}}[\tilde Z\mid X])(\tilde Z-E_{P_{0,N}}[\tilde Z\mid X])^T\mid X]\preceq C I_{m+1}$, where $\tilde Z=(A,Z^T)^T$. (d) Given a random subset $l$ of $[N]$ of size $N/L$, the nuisance parameter estimator $\hat\eta_l$ satisfies the following conditions with $P_{0,N}$-probability no less than $1-\Delta_N$:
\[
\|\hat\eta_{A,l}-\eta_{A,0,N}\|_{P_{0,N},2}\le\delta_N,\quad
\|\hat\eta_{Y,l}-\eta_{Y,0,N}\|_{P_{0,N},2}\le\delta_N,\quad
\|\hat\eta_{W,l}-\eta_{W,0,N}\|_{P_{0,N},2}\le\delta_N,\quad
\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},2}\le\delta_N/m,
\]
\[
\|\hat\eta_{A,l}-\eta_{A,0,N}\|_{P_{0,N},\infty}\le C,\quad
\|\hat\eta_{Y,l}-\eta_{Y,0,N}\|_{P_{0,N},\infty}\le C,\quad
\|\hat\eta_{W,l}-\eta_{W,0,N}\|_{P_{0,N},\infty}\le C,\quad
\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},\infty}\le C,
\]
\[
\|\hat\eta_{Z^{(j)},l}-\eta_{Z^{(j)},0,N}\|_{P_{0,N},2}\big(\|\hat\eta_{Y,l}-\eta_{Y,0,N}\|_{P_{0,N},2}+\|\hat\eta_{A,l}-\eta_{A,0,N}\|_{P_{0,N},2}+\|\hat\eta_{W,l}-\eta_{W,0,N}\|_{P_{0,N},2}\big)\le N^{-1/2}\delta_N,
\]
\[
\|\hat\eta_{A,l}-\eta_{A,0,N}\|_{P_{0,N},2}\big(\|\hat\eta_{Y,l}-\eta_{Y,0,N}\|_{P_{0,N},2}+\|\hat\eta_{A,l}-\eta_{A,0,N}\|_{P_{0,N},2}+\|\hat\eta_{W,l}-\eta_{W,0,N}\|_{P_{0,N},2}\big)\le N^{-1/2}\delta_N,
\]
where $\delta_N=o(1/\sqrt m)$ and $\Delta_N=o(1)$.

In this example,
\[
G=E_{P_{0,N}}\Big[\binom{A-E_{P_{0,N}}[A\mid X]}{Z-E_{P_{0,N}}[Z\mid X]}\big(-(A-E_{P_{0,N}}[A\mid X]),\ -(W-E_{P_{0,N}}[W\mid X])\big)\Big],
\]
so Assumption 16(b) can be interpreted as saying that the $Z$-$A$ and $Z$-$W$ correlations (conditional on $X$) are small, which can be further interpreted as the $Z$-$U$ correlation (conditional on $X$) being small; that is, $Z$ is a weak proxy for the unmeasured confounder. The following theorem gives consistency and asymptotic normality of the two-step CUE.

Theorem 8. Suppose Assumption 16 holds. The CUE estimator is consistent and asymptotically normal, with the asymptotic distribution being the same as that in Theorem 2.

5 Simulation

5.1 Additive and multiplicative structural mean models

The data-generating processes for the ASMM and MSMM are described below.

Sample sizes: We use five sample sizes: $N=1000$, 2000, 5000, 10000, and 20000.

Baseline covariates: We consider a three-dimensional covariate $X=(X_1,X_2,X_3)$, generated from a truncated normal distribution with mean 0 and covariance matrix $I_3+0.2J_3$, truncated at $-4$ and 4, where $I_3$ is the $3\times 3$ identity matrix and $J_3$ is the $3\times 3$ matrix with all elements equal to 1.

Instrumental variables: The dimension of the instrumental variable is $m=\lfloor 4N^{1/4}\rfloor$, i.e., 22, 26, 33, 40, and 47 under the respective sample sizes. The instruments are generated from the model $Z_j=X_1+X_2+X_3+\epsilon_j$, $j=1,\dots,m$, with $\epsilon_j\sim\mathrm{Uniform}[-3,3]$.

Treatment: We generate a latent continuous treatment variable $A^*$ from the model
\[
A^*=\alpha\sum_{j=1}^m N^{-1/2-j/(3m)}Z_j+X_1+\sin(X_2)+\mathrm{expit}(X_3)+U+N(0,1),
\]
where $U\sim\mathrm{Uniform}[-4,4]$ is an unmeasured confounder. For the ASMM we set $\alpha=1$ and $A=A^*$; for the MSMM we set $\alpha=1.5$ and $A=I(A^*>0.6)$.

Outcome: For the ASMM, we generate the outcome from the model
\[
Y=1+3A-X_1+\sin X_2-X_3^2+X_2X_3+U+N(0,1).
\]
For the MSMM, we generate $Y$ from
\[
Y(0)=1+0.5X_1+0.5X_2-0.5X_3+0.5\,I(U>0.5)+0.5\,N(0,1),\qquad
Y(1)=Y(0)\exp(1),\qquad
Y=AY(1)+(1-A)Y(0).
\]

The data-generating distributions reflect the idea that the moment conditions are weak: the effect of $Z_j$ on $A^*$ is $\alpha N^{-1/2-j/(3m)}$, so identification is weak in the sense that the effect of $Z_j$ on $A^*$ becomes smaller as $N$ grows. Furthermore, our data-generating distributions allow different strengths for different instruments. We also introduce nonlinear terms when generating $A^*$ and $Y$, since our proposed two-step CUE can deal with nonlinearity in the nuisance models.

For each simulated dataset, we construct the proposed estimator with all nuisance functions estimated using the R package SuperLearner (Van der Laan et al., 2007). We include random forests and generalized linear models in the SuperLearner library. We compare our method with two-step GMM with an optimal choice of weighting matrix, obtained in the following two steps. First, obtain an initial estimate $\tilde\beta_{GMM}$ by minimizing the objective function $\hat Q_{GMM}(\beta,\hat\eta):=\hat g(\beta,\hat\eta)^T\hat g(\beta,\hat\eta)/2$; second, obtain the final GMM estimate $\hat\beta_{GMM}$ by minimizing $\hat Q_{GMM}(\beta,\hat\eta):=\hat g(\beta,\hat\eta)^T W\hat g(\beta,\hat\eta)/2$, where the weighting matrix is the optimal choice $W=\hat\Omega^{-1}(\tilde\beta_{GMM},\hat\eta)$ (Hansen, 1982; Chernozhukov et al., 2018).
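The two-step GMM comparator just described has a closed form when the moment is linear in $\beta$. The sketch below uses a hypothetical linear IV moment $g_i(\beta)=(Y_i-\beta A_i)Z_i$ with no nuisance functions, so it illustrates only the two weighting steps, not the paper's simulation design or the nuisance estimation.

```python
import numpy as np

def gmm_two_step(Y, A, Z):
    # Sample moments are linear in beta: g_bar(beta) = a - beta * b.
    a = (Y[:, None] * Z).mean(axis=0)
    b = (A[:, None] * Z).mean(axis=0)

    def argmin(W):
        # Closed-form minimizer of (a - beta*b)' W (a - beta*b) / 2.
        return (b @ W @ a) / (b @ W @ b)

    beta1 = argmin(np.eye(Z.shape[1]))        # step 1: identity weighting
    g1 = (Y - beta1 * A)[:, None] * Z         # moments evaluated at beta1
    W_opt = np.linalg.inv(np.cov(g1, rowvar=False))
    return argmin(W_opt)                      # step 2: optimal weighting

# Toy IV data (hypothetical): Z drives A, U confounds A and Y.
rng = np.random.default_rng(3)
N, m, beta0 = 100_000, 5, 3.0
Z = rng.normal(size=(N, m))
U = rng.normal(size=N)
A = Z.mean(axis=1) + U + rng.normal(size=N)
Y = beta0 * A + U + rng.normal(size=N)
beta_hat = gmm_two_step(Y, A, Z)              # close to beta0
```

With valid instruments ($U$ independent of $Z$), both steps are consistent here; the paper's point is that under many weak moments this two-step GMM becomes biased, while the CUE re-estimates the weighting matrix continuously in $\beta$.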
The nuisance estimator is constructed in the same way as for the two-step CUE. When identification is strong and the number of moments is fixed, GMM is asymptotically unbiased and normal (Chernozhukov et al., 2022).

Table 1 shows the simulation results for the ASMM and MSMM. For the ASMM case, the two-step CUE has smaller bias and more accurate coverage under all settings, while the two-step GMM shows larger positive biases and undercovered confidence intervals for all sample sizes. The standard error of the CUE is larger than that of GMM. For the MSMM case, the CUE still shows smaller bias and more accurate coverage rates for the confidence intervals, although when the sample size is 1000 and $m$ is 22, the two-step CUE also shows a relatively large bias. The rejection rates of the over-identification test are close to the nominal type one error. Figure S.1 shows the sampling distributions of the two-step CUE and two-step GMM estimates across 1000 simulations under the different settings.

Table 1: Simulation results for the ASMM and MSMM. Column "rejection rate" reports the type one error of the over-identification test with nominal level 0.05. Column "Estimate" reports the average point estimate, "Bias" the average bias, "SE" the average of the estimated standard errors, "SD" the standard deviation of the estimates across 1000 simulations, and "Coverage rate" the coverage rate of the 95% Wald confidence interval. CUE and GMM refer to the two-step CUE and two-step GMM, respectively.

       N      m   rejection rate  Method  Estimate    Bias      SE      SD  Coverage rate
ASMM   1000   22  0.067           CUE       2.992   -0.008   0.087   0.087  0.952
                                  GMM       3.067    0.067   0.072   0.076  0.801
       2000   26  0.063           CUE       2.994   -0.006   0.063   0.063  0.955
                                  GMM       3.046    0.046   0.056   0.057  0.847
       5000   33  0.053           CUE       3.001    0.001   0.042   0.043  0.952
                                  GMM       3.032    0.032   0.039   0.041  0.851
       10000  40  0.048           CUE       2.999   -0.001   0.031   0.031  0.954
                                  GMM       3.021    0.021   0.029   0.030  0.868
       20000  47  0.061           CUE       2.999   -0.001   0.023   0.023  0.949
                                  GMM       3.014    0.014   0.022   0.022  0.894
MSMM   1000   22  0.037           CUE       0.899   -0.101   0.736   1.919  0.949
                                  GMM       1.440    0.440  15.282   0.256  0.248
       2000   26  0.056           CUE       0.947   -0.053   0.281   0.793  0.965
                                  GMM       1.334    0.334   1.064   0.171  0.238
       5000   33  0.039           CUE       0.977   -0.023   0.143   0.136  0.966
                                  GMM       1.206    0.206   0.088   0.108  0.396
       10000  40  0.045           CUE       0.991   -0.009   0.101   0.100  0.953
                                  GMM       1.155    0.155   0.072   0.084  0.432
       20000  47  0.031           CUE       0.996   -0.004   0.072   0.072  0.947
                                  GMM       1.109    0.109   0.057   0.063  0.509

5.2 Proximal causal inference

We generate datasets according to the following data-generating process.

Sample sizes: We use five sample sizes: $N=1000$, 2000, 5000, 10000, and 20000.

Baseline covariates: We consider a three-dimensional covariate $X=(X_1,X_2,X_3)$ with $X\sim N(0,I_3+0.2J_3)$, where $I_3$ is the $3\times 3$ identity matrix and $J_3$ is the $3\times 3$ matrix with all elements equal to 1. The covariates are truncated so that they are bounded by $-4$ and 4.

Treatment proxies: The dimension of the treatment proxy is $m=\lfloor 4N^{1/4}\rfloor$, i.e., 22, 26, 33, 40, and 47 under the respective sample sizes. The treatment proxy $Z$ is generated from the model
\[
Z^{(j)}=X_1+X_2+X_3+0.5\,m^{-1+3(j-1)/(4(m-1))}\,U+\epsilon_j,
\]
where $j=1,\dots,m$, $\epsilon_j\sim\mathrm{Uniform}[-3,3]$, and $U\sim\mathrm{Uniform}[-4,4]$ is an unmeasured confounder.

Outcome proxy: The outcome proxy is generated from the model
\[
W=X_1+X_2+\mathrm{expit}(X_3)+U+N(0,1).
\]

Treatment: The continuous treatment is generated from the model
\[
A=X_1+X_2-X_3+U+N(0,1).
\]
Outcome: The continuous outcome is generated from the following distribution: Y= 3A+ sin X1−X2 2−X3+X2X3+U+N(0,1) Again, the data-generating distribution reflect the idea that the moment conditions are weak. The effect of UonZjis 0.5m−1+3(j−1)/4(m−1), therefore the correlation between Z(j)andUdecreases when the sample size becomes larger. This data-generating distribution also allows for different strengths for different treatment proxies. We also introduce nonlinearity terms when we generated YandW. We compare two-step CUE with 2SLS proposed by Tchetgen Tchetgen et al. (2024). The 2SLS involves two steps. In the first step, a regression model of WonA, Z, X is fitted, the fitted value forWis denoted as cW. In the second stage, A regression model of YonA,cWandXis fitted. The final estimates for βa,0andβw,0are the estimated regression coefficients of AandcW, respectively. The 2SLS is implemented using the function ivreg inR. Table 2 shows simulation results for proximal causal inference. Similar to the IV case, we con- struct the proposed estimator with all nuisance functions estimated using the Rpackage SuperLearner (Van der Laan et al., 2007) and we include the random forest and generalized linear models in the SuperLearner library. When sample size is 1000, both CUE and 2SLS have large bias. When sam- 31 ple size is above 2000, CUE has smaller bias and more accurate coverage for 95% Wald confidence intervals. We observe upward bias of βaand
https://arxiv.org/abs/2505.07295v1
downward bias of βw for 2SLS. The standard errors of the CUE estimates are larger than those of the 2SLS estimates. The over-identification test controls the type-one error in all the simulation settings.

Table 2: Simulation results for proximal causal inference. Column rejection rate reports the type-one error for the over-identification test with nominal level 0.05. Column Estimate reports the average point estimates. Column Bias reports the average bias. Column SE reports the average of estimated standard errors. Column SD reports the standard deviation of estimates across 1000 simulations. Column Coverage rate reports the coverage rate for the 95% Wald confidence interval. CUE refers to the two-step CUE. 2SLS refers to the two-stage least squares method proposed by Tchetgen Tchetgen et al. (2024).

       N      m   Rej. rate  Method  Estimate   Bias    SE     SD     Coverage rate
βa     1000   22  0.047      CUE     2.879     -0.121   0.532  1.288  0.946
                             2SLS    3.186      0.186   0.201  0.202  0.826
       2000   26  0.043      CUE     2.984     -0.016   0.228  0.450  0.943
                             2SLS    3.123      0.123   0.153  0.153  0.846
       5000   33  0.051      CUE     3.013      0.013   0.113  0.111  0.948
                             2SLS    3.072      0.072   0.102  0.100  0.885
       10000  40  0.033      CUE     3.011      0.011   0.075  0.077  0.927
                             2SLS    3.045      0.045   0.073  0.074  0.893
       20000  47  0.048      CUE     3.005      0.005   0.050  0.051  0.944
                             2SLS    3.023      0.023   0.052  0.052  0.923
βw     1000   22  0.047      CUE     1.150      0.150   0.633  1.546  0.957
                             2SLS    0.780     -0.220   0.236  0.237  0.816
       2000   26  0.043      CUE     1.020      0.020   0.270  0.537  0.950
                             2SLS    0.852     -0.148   0.180  0.179  0.848
       5000   33  0.051      CUE     0.987     -0.013   0.133  0.130  0.952
                             2SLS    0.914     -0.086   0.120  0.116  0.889
       10000  40  0.033      CUE     0.989     -0.011   0.088  0.090  0.935
                             2SLS    0.946     -0.054   0.086  0.087  0.896
       20000  47  0.048      CUE     0.996     -0.004   0.059  0.059  0.944
                             2SLS    0.973     -0.027   0.062  0.061  0.921

6 Revisiting the return of education example

In this section, we revisit the example of estimating the effect of compulsory schooling requirements on earnings by Angrist and Krueger (1991).
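The two-stage procedure described above can be sketched numerically. This is a minimal illustration, not the paper's implementation: the data-generating step uses simplified linear stand-ins for the proxy, treatment, and outcome models, and the function and variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simplified linear stand-ins for the simulation design (illustrative only)
X = rng.normal(size=(n, 3))
U = rng.uniform(-4, 4, size=n)                           # unmeasured confounder
Z = X.sum(axis=1)[:, None] + 0.5 * U[:, None] + rng.uniform(-3, 3, size=(n, 2))
A = X[:, 0] + X[:, 1] - X[:, 2] + U + rng.normal(size=n)
W = X.sum(axis=1) + U + rng.normal(size=n)               # outcome proxy
Y = 3.0 * A + (X.sum(axis=1) + U) + rng.normal(size=n)   # beta_a = 3, beta_w = 1

def proximal_2sls(Y, A, Z, W, X):
    """Two-stage procedure as described in the text above."""
    ones = np.ones((len(Y), 1))
    # Stage 1: regress W on (A, Z, X); keep the fitted values W_hat
    D1 = np.hstack([ones, A[:, None], Z, X])
    W_hat = D1 @ np.linalg.lstsq(D1, W, rcond=None)[0]
    # Stage 2: regress Y on (A, W_hat, X); coefficients of A and W_hat are the estimates
    D2 = np.hstack([ones, A[:, None], W_hat[:, None], X])
    b = np.linalg.lstsq(D2, Y, rcond=None)[0]
    return b[1], b[2]        # (beta_a_hat, beta_w_hat)

beta_a_hat, beta_w_hat = proximal_2sls(Y, A, Z, W, X)
```

In this linear design the stage-1 fitted value Ŵ lies in the span of (A, Z, X) while the stage-2 residual is orthogonal to that span, so the stage-2 coefficients recover (βa, βw) = (3, 1) up to sampling error.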
In their paper, they estimated the effect of education attainment (measured in years) on the logarithm of weekly wage using 1970 and 1980 Census data. Since unmeasured confounders such as ability may be positively correlated with both education attainment and weekly wage, direct regression analysis may suffer from substantial bias. One important observation made by Angrist and Krueger (1991) is that quarter of birth is correlated with education attainment. This is mainly because people who are born early in the calendar year are typically older when they enter school, and therefore typically reach the legal dropout age with lower education attainment. This fact induces correlation between quarter of birth and education attainment. Furthermore, there are no obvious channels through which quarter of birth could directly affect weekly wages. Therefore, quarter of birth may serve as an instrument. We control for race, region dummies, year dummies, age, and age squared as observed confounders. Following Angrist and Krueger (1991), we construct instrumental variables using interactions of year dummies and quarter-of-birth dummies; the total number of instruments used for the analysis is 29. We reproduce the OLS and 2SLS estimates using the 1930-1939 birth cohort reported in Angrist and Krueger (1991). Table 3 shows our replication results using the lm and 2sls functions in R. However, as
pointed out by Bound et al. (1995), the 2SLS estimate may suffer from a weak instruments issue even though the sample size is enormous. To illustrate our methodology, we estimate the following three models using two-step GMM and our proposed two-step CUE to estimate the effect of education attainment on wage.
Model 1: An ASMM with the original continuous education attainment variable and log weekly wage.
Model 2: An ASMM with a binary education attainment variable (using 12 years as the cutoff) and log weekly wage.
Model 3: An MSMM with a binary education attainment variable (using 12 years as the cutoff) and weekly wage.
Let A* denote the continuous treatment, A denote the binary treatment, and Y denote the weekly wage on the original scale. The coefficient of A* from Model 1 can be interpreted as the average treatment effect on the log weekly wage, that is, E_{P0,N}[log(Y(a* + 1)) − log(Y(a*))]; the coefficient of A from Model 2 can be interpreted as the usual average treatment effect on the log weekly wage, that is, E_{P0,N}[log(Y(1)) − log(Y(0))]; and the coefficient of A from Model 3 can be interpreted as the logarithm of the average treatment effect on the weekly wage on the multiplicative scale, that is, log(E_{P0,N}[Y(1)]/E_{P0,N}[Y(0)]). For both GMM and two-step CUE, we use linear probability models for the instruments, and a super learner with generalized linear model and random forest libraries for the models of log weekly wage and education attainment. The results are presented in Table 3. Comparing the original results from Angrist and Krueger (1991) with the results from Model 1, we find that the point estimate of GMM (0.0556) is close to that of 2SLS (0.0599). The CUE estimate (0.0481) is smaller than both. The standard error of CUE (0.0770) is much larger than the standard errors of GMM and 2SLS (0.0354 and 0.0289, respectively).
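The distinction between the Model 2 and Model 3 estimands can be checked with a toy calculation on simulated potential outcomes (all numbers here are illustrative, not from the application): when the treatment effect is exactly proportional the two estimands coincide, while an additive effect separates them.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical potential weekly wages under control
Y0 = rng.lognormal(mean=5.0, sigma=0.5, size=n)

# Case 1: a purely proportional effect (every wage scaled by e^0.06)
Y1 = Y0 * np.exp(0.06)
ate_log = np.mean(np.log(Y1) - np.log(Y0))    # Model 2 estimand: E[log Y(1) - log Y(0)]
log_ratio = np.log(Y1.mean() / Y0.mean())     # Model 3 estimand: log(E[Y(1)]/E[Y(0)])
# Both equal 0.06 here.

# Case 2: an additive effect (50 per week) separates the two estimands
Y1b = Y0 + 50.0
ate_log_b = np.mean(np.log(Y1b) - np.log(Y0))
log_ratio_b = np.log(Y1b.mean() / Y0.mean())
```

Under the proportional effect, log Y(1) − log Y(0) is constant and E[Y(1)] = e^0.06 E[Y(0)], so both estimands equal 0.06; under the additive effect, Jensen's inequality drives the log-scale ATE above the log of the mean ratio.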
Similarly, we find that the point estimates from CUE are smaller than those from GMM, and the standard errors from CUE are larger than those from GMM, in both Model 2 and Model 3. This phenomenon coincides with the simulation results, where we also observed an upward bias for GMM, a downward bias for CUE, and larger standard errors for CUE than for GMM. The proposed over-identification test cannot reject the null hypothesis for Model 1, Model 2, or Model 3, which implies that there is no evidence of model misspecification.

Table 3: Effects of education attainment on weekly wage using different methods. Original results refers to the results reported in Table V of Angrist and Krueger (1991). Model 1: ASMM refers to the ASMM with the logarithm of weekly wage as the outcome and continuous education attainment as the treatment. Model 2: ASMM refers to the ASMM with the logarithm of weekly wage as the outcome and binary education attainment as the treatment. Model 3: MSMM refers to the MSMM with weekly wage as the outcome and binary education attainment as the treatment. Estimate refers to the estimated coefficient of the treatment in each model.

                            Estimate  SE      95% CI
Original results     OLS    0.0632    0.0003  [0.0626, 0.0638]
                     2SLS   0.0599    0.0289  [0.003, 0.116]
Model 1: ASMM        GMM    0.0556    0.0354  [-0.013, 0.125]
                     CUE    0.0481    0.0770  [-0.103, 0.199]
Model 2: ASMM        GMM    0.315     0.194   [-0.065, 0.695]
                     CUE    0.305     0.294   [-0.271, 0.881]
Model 3: MSMM        GMM    0.381     0.179   [0.030, 0.731]
                     CUE    0.232     0.238   [-0.234, 0.698]

7 Discussion

In this paper, we study estimation under many weak moment conditions in the presence of infinite-dimensional nuisance parameters. We find that the global Neyman orthogonality condition plays a central role in removing bias from nuisance estimation under many weak asymptotics, and we propose a two-step CUE combined with cross-fitting which accommodates flexible, possibly nonparametric, first-stage estimation. The asymptotic expansion of the two-step CUE includes the usual first-order GMM term as well as a non-negligible U-statistic term, which complicates the analysis of its asymptotic distribution and variance estimator. In contrast, the asymptotic expansion of the standard double machine learning estimator contains only a first-order influence function term, so its variance estimator can be constructed using the empirical variance of the influence function. We apply our general framework to both the many weak instruments setting and the many weak treatment proxies setting. Through extensive simulation studies, we demonstrate that, under weak instruments in both the ASMM and MSMM cases, our methodology outperforms the GMM estimator. In the proximal causal inference setting, the two-step CUE outperforms 2SLS. We also provide diagnostic tools, such as an over-identification test, for assessing model misspecification. Several extensions may be considered. First, standard double machine learning gives a semiparametrically efficient estimator if the moment function is the efficient score function. It is interesting to study whether our proposed estimator is optimal in some sense; this will involve new concepts of semiparametric efficiency under many weak asymptotics.
Secondly, it is interesting to extend our identification and estimation strategy for proximal causal inference to the longitudinal case and to nonlinear models. Thirdly, it is interesting to study estimation and inference for weakly identified function-valued parameters, such as the dose-response curve, the survival function, and conditional treatment effects.

Acknowledgment

This work was presented at the ENAR 2025 Spring Meeting (March 2025); we thank the audience for their helpful feedback. We thank Ethan Ashby, Albert Osom, James Peng, Yutong Jin, Ellen Graham and Jiewen Liu for their helpful comments and discussion. This research was partially supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number R35GM155070.

References

Andrews, I. (2018). Valid two-step identification-robust confidence sets for GMM. Review of Economics and Statistics, 100(2):337-348.
Andrews, I., Stock, J. H., and Sun, L. (2019). Weak instruments in instrumental variables regression: Theory and practice. Annual Review of Economics, 11:727-753.
Angrist, J. D., Imbens, G. W., and Rubin, D. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91(434):444-455.
Angrist, J. D. and Krueger, A. B. (1991). Does compulsory school attendance affect schooling and earnings? The Quarterly Journal of Economics, 106(4):979-1014.
Baiocchi, M., Cheng, J., and Small, D. S. (2014). Instrumental variable methods for causal inference. Statistics in Medicine, 33(13):2297-2340.
Bennett, A., Kallus, N., Mao, X., Newey, W., Syrgkanis, V., and Uehara, M. (2023). Inference
on strongly identified functionals of weakly identified functions.
Bickel, P. J., Klaassen, C. A. J., Ritov, Y., and Wellner, J. A. (1993). Efficient and Adaptive Estimation for Semiparametric Models, volume 4. Springer.
Bound, J., Jaeger, D. A., and Baker, R. (1995). Problems with instrumental variables estimation when the correlation between the instruments and the endogenous explanatory variable is weak. Journal of the American Statistical Association, 90(430):443-450.
Bravo, F., Escanciano, J. C., and Van Keilegom, I. (2020). Two-step semiparametric empirical likelihood inference. The Annals of Statistics, 48(1):1-26.
Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters.
Chernozhukov, V., Chetverikov, D., and Kato, K. (2014a). Gaussian approximation of suprema of empirical processes.
Chernozhukov, V., Chetverikov, D., and Kato, K. (2014b). Gaussian approximation of suprema of empirical processes. The Annals of Statistics, 42(4):1564-1597.
Chernozhukov, V., Escanciano, J. C., Ichimura, H., Newey, W. K., and Robins, J. M. (2022). Locally robust semiparametric estimation. Econometrica, 90(4):1501-1535.
Clarke, P. S., Palmer, T. M., and Windmeijer, F. (2015). Estimating structural mean models with multiple instrumental variables using the generalised method of moments. Statistical Science, 30(1):96-117.
Foster, D. J. and Syrgkanis, V. (2023). Orthogonal statistical learning. The Annals of Statistics, 51(3):879-908.
Han, C. and Phillips, P. C. B. (2006). GMM with many moment conditions. Econometrica, 74(1):147-192.
Hansen, L. P. (1982). Large sample properties of generalized method of moments estimators. Econometrica, 50(4):1029-1054.
Hernán, M. A. and Robins, J. M. (2006). Instruments for causal inference: an epidemiologist's dream? Epidemiology, 17(4):360-372.
Ichimura, H.
and Newey, W. K. (2022). The influence function of semiparametric estimators. Quantitative Economics, 13(1):29-61.
Koltchinskii, V. (2011). Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École d'Été de Probabilités de Saint-Flour XXXVIII-2008, volume 2033. Springer Science & Business Media.
Liu, J., Park, C., Li, K., and Tchetgen Tchetgen, E. J. (2024). Regression-based proximal causal inference. American Journal of Epidemiology, page kwae370.
Miao, W., Geng, Z., and Tchetgen Tchetgen, E. J. (2018). Identifying causal effects with proxy variables of an unmeasured confounder. Biometrika, 105(4):987-993.
Newey, W. K. and Windmeijer, F. (2009). Generalized method of moments with many weak moment conditions. Econometrica, 77(3):687-719.
Pfanzagl, J. (1990). Estimation in Semiparametric Models, pages 17-22. Springer US, New York, NY.
Robins, J., Rotnitzky, A., and Zhao, L. P. (1994). Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427):846-866.
Robins, J. M. (1994). Correcting for non-compliance in randomized trials using structural nested mean models. Communications in Statistics - Theory and Methods, 23(8):2379-2412.
Rotnitzky, A., Smucler, E., and Robins, J. M. (2020). Characterization of parameters with a mixed bias property. Biometrika, 108(1):231-238.
Rubin, D. (1972). Estimating causal effects of treatments in experimental and observational studies. ETS Research Bulletin Series, 1972(2):i-31.
Sanderson, E., Glymour, M. M., Holmes, M. V., Kang, H.,
Morrison, J., Munafò, M. R., Palmer, T., Schooling, C. M., Wallace, C., Zhao, Q., et al. (2022). Mendelian randomization. Nature Reviews Methods Primers, 2(1):6.
Staiger, D. and Stock, J. H. (1997). Instrumental variables regression with weak instruments. Econometrica, 65(3):557-586.
Stock, J. H. and Wright, J. H. (2000). GMM with weak identification. Econometrica, 68(5):1055-1096.
Tchetgen Tchetgen, E. and Shpitser, I. (2012). Semiparametric theory for causal mediation analysis: Efficiency bounds, multiple robustness and sensitivity analysis. The Annals of Statistics, 40(3):1816-1845.
Tchetgen Tchetgen, E., Ying, A., Cui, Y., Shi, X., and Miao, W. (2024). An introduction to proximal causal inference. Statistical Science, 39(3):375-390.
Tripathi, G. (1999). A matrix extension of the Cauchy-Schwarz inequality. Economics Letters, 63(1):1-3.
Tropp, J. A. (2016). The expected norm of a sum of independent random matrices: An elementary approach. In High Dimensional Probability VII: The Cargèse Volume, pages 173-202. Springer.
van der Laan, M. and Rose, S. (2011). Targeted Learning: Causal Inference for Observational and Experimental Data. Springer Series in Statistics. Springer New York.
Van der Laan, M. J., Polley, E. C., and Hubbard, A. E. (2007). Super learner. Statistical Applications in Genetics and Molecular Biology, 6(1).
van der Laan, M. J. and Robins, J. M. (2003). Unified Methods for Censored Longitudinal Data and Causality. Springer.
van der Vaart, A. and Wellner, J. (2023). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer.
Van der Vaart, A. W. (2000). Asymptotic Statistics, volume 3. Cambridge University Press.
Vansteelandt, S. and Joffe, M. (2014). Structural nested models and G-estimation: The partially realized promise. Statistical Science, 29(4):707-731.
Ye, T., Liu, Z., Sun, B., and Tchetgen Tchetgen, E. (2024).
Genius-mawii: For robust Mendelian randomization with many weak invalid instruments. Journal of the Royal Statistical Society Series B: Statistical Methodology, page qkae024.
Zhang, D. and Sun, B. (2025). Debiased continuous updating GMM with many weak instruments. arXiv preprint arXiv:2504.18107.
Zhang, H. and Chen, S. (2021). Concentration inequalities for statistical inference. Communications in Mathematical Research, 37(1):1-85.

Supporting Information to "GMM with Many Weak Moment Conditions and Nuisance Parameters: General Theory and Applications to Causal Inference"

We will use the following notation repeatedly throughout the supplementary materials:
g(β, η): o ↦ g(o; β, η),   Ω(β, η): o ↦ g(o; β, η) gᵀ(o; β, η),   G(β, η): o ↦ ∂g(o; β, η)/∂β.
For h = g, G, or Ω:
h_i(β, η) = h(O_i; β, η),   ĥ_l(β, η) = P_{n,l} h(β, η),   h_i = h_i(β0, η0,N),
h̄(β, η) = P_{0,N} h(β, η),   h̄ = h̄(β0, η0,N),
Q(β, η) = ḡ(β, η)ᵀ Ω̄(β, η)⁻¹ ḡ(β, η)/2 + m/(2N),
Q̃(β, η) = ĝ(β, η)ᵀ Ω̄(β, η0,N)⁻¹ ĝ(β, η)/2,
Q̂(β, η) = ĝ(β, η)ᵀ Ω̂(β, η)⁻¹ ĝ(β, η)/2.
Furthermore, we will use the empirical process notation:
P f := ∫ f dP,   P_n f := ∫ f dP_n = (1/n) Σ_{i=1}^n f(O_i),   G_n f := √n (P_n − P) f.
We also use P_{n,l} to denote the empirical distribution in the l-th fold; G_{n,l} is defined analogously.

S1 Bounding the supremum of empirical processes

In this section, we briefly review useful tools for bounding the supremum of empirical processes. Let P be the underlying data-generating distribution, let O_1, O_2, ..., O_n be iid samples from P, and denote the support of O_i by O. Let
P_n be the empirical measure. Let F be a class of measurable functions mapping from O to R with envelope function F. It is often of interest to study the convergence rate of empirical processes. Let
‖G_n‖_F = sup_{f∈F} |G_n f|.
Our goal is to give a high-probability tail bound for ‖G_n‖_F. Chapter 2.14 of Chernozhukov et al. (2014b) described such techniques. The technique is based on Dudley's entropy integral (van der Vaart and Wellner, 2023). The uniform entropy integral is defined as
J(δ, F | F, L_2) = sup_Q ∫_0^δ √(1 + log N(ϵ‖F‖_{Q,2}, F, L_2(Q))) dϵ,
where the supremum is taken over all discrete probability measures Q with ‖F‖_{Q,2} > 0. Suppose F is a VC-type class, which means
log sup_Q N(ϵ‖F‖_{Q,2}, F, ‖·‖_{Q,2}) ≤ v log(a/ϵ)
for some constants a ≥ e and v ≥ 1, where N(ϵ‖F‖_{Q,2}, F, ‖·‖_{Q,2}) is the covering number. Then we have the following result.

Lemma S1.
E_P[‖G_n‖_F] ≤ K [ √(v σ² log(a‖F‖_{P,2}/σ)) + (v‖M‖_{P,2}/√n) log(a‖F‖_{P,2}/σ) ],
where K is an absolute constant. Furthermore, for t ≥ 1, with probability greater than 1 − t^{−q/2},
‖G_n‖_F ≤ (1+α) E_P[‖G_n‖_F] + K(q) [ (σ + n^{−1/2}‖M‖_{P,q}) √t + α^{−1} n^{−1/2} ‖M‖_{P,2} t ],
where ‖M‖_{P,q} ≤ n^{1/q}‖F‖_{P,q} and K(q) > 0 is a constant depending only on q.

This result can be found in Chernozhukov et al. (2018). We will mainly use a special case of this result in our paper. Let t = n^{2/q}; then with probability greater than 1 − 1/n,
‖G_n‖_F ≤ (1+α) K [ σ √(v log(a‖F‖_{P,2}/σ)) + (v‖M‖_{P,2}/√n) log(a‖F‖_{P,2}/σ) ] + K(q) [ (σ + ‖M‖_{P,q}/√n) n^{1/q} + (1/(α√n)) ‖M‖_{P,2} n^{2/q} ],
where we used the fact that ‖M‖_{P,2} ≤ ‖M‖_{P,q} for q ≥ 2; we further have ‖M‖_{P,q} ≤ n^{1/q}‖F‖_{P,q}. The reason we choose t = n^{2/q} is that, in this case, m² t^{−q/2} = m²/n, which is o(1) if we assume m = o(n^{1/2}).

S2 Other useful results

In our paper, we need to bound terms like ‖(1/N) Σ_{i=1}^N g_i(β0, η0,N) g_iᵀ(β0, η0,N) − E g_i(β0, η0,N) g_iᵀ(β0, η0,N)‖ and ‖(1/n) Σ_{i=1}^n G_i(β0, η0,N) g_iᵀ(β0, η0,N) − E G_i(β0, η0,N) g_iᵀ(β0, η0,N)‖. Therefore, we need results for bounding the norm of random matrices. We will mainly use the following result, which is Theorem 1 of Tropp (2016).

Lemma S2.
(The expected norm of an independent sum of random matrices). Let X_1, ..., X_N be a sequence of iid d1×d2 random matrices with E[X_i] = 0 for all i. Let S := Σ_{i=1}^N X_i, define the matrix variance parameter
ν(S) = max{ ‖Σ_{i=1}^N E[X_i X_iᵀ]‖, ‖Σ_{i=1}^N E[X_iᵀ X_i]‖ },
and the large-deviation parameter
L := (E max_i ‖X_i‖²)^{1/2}.
Define the dimensional constant C(d1, d2) := 4(1 + 2⌈log(d1 + d2)⌉). Then
√(ν(S))/4 + L/4 ≤ (E‖S‖²)^{1/2} ≤ √(C(d1, d2) ν(S)) + C(d1, d2) L.

Lemma S2 is useful for establishing that the empirical mean of high-dimensional random matrices concentrates around its expectation.

Lemma S3. Let X be an m×m positive semidefinite random matrix and let Y be a random variable with |Y| ≤ K a.s. Then ‖E[XY]‖ ≤ 2‖E[XK]‖.

Proof. Write Y = Y₊ − Y₋; then
‖E[XY]‖ = ‖E[X(Y₊ − Y₋)]‖ ≤ ‖E[XY₊]‖ + ‖E[XY₋]‖.
For ‖E[XY₊]‖, by the definition of ‖·‖,
‖E[XY₊]‖ = sup_{‖v‖=1} vᵀ E[XY₊] v = sup_{‖v‖=1} E[vᵀ X v Y₊] ≤ sup_{‖v‖=1} E[vᵀ X v K] = sup_{‖v‖=1} vᵀ E[XK] v = ‖E[XK]‖.
Similarly, ‖E[XY₋]‖ ≤ ‖E[XK]‖; therefore, ‖E[XY]‖ ≤ 2‖E[XK]‖.

Lemma S4. Let X, Z be m×m positive semidefinite random matrices with X ≤ Z, and let Y be a positive random variable. Then ‖E[XY]‖ ≤ ‖E[ZY]‖.

Proof.
‖E[XY]‖ = λ_max(E[XY]) = λ_max(E[(X − Z)Y] + E[ZY]) ≤ λ_max(E[(X − Z)Y]) + λ_max(E[ZY]) ≤ λ_max(E[ZY]) = ‖E[ZY]‖,
where the first inequality is due to Weyl's inequality, and the second inequality holds because E[(X − Z)Y] is negative semidefinite.

The following lemma is Lemma A0 of Newey and
Windmeijer (2009).

Lemma S5. Let A be a positive semidefinite matrix. If ‖Â − A‖_F →p 0, λ_min(A) ≥ 1/C, and λ_max(A) ≤ C, then with probability approaching 1,
λ_min(Â) ≥ 1/(2C) and λ_max(Â) ≤ 2C.

The following lemma is Corollary 7.1 of Zhang and Chen (2021).

Lemma S6. Let {X_i}_{i=1}^N be identically distributed but not necessarily independent random variables, and assume E(|X_1|^p) < ∞ for some p ≥ 1. Then
E max_{1≤i≤n} |X_i| = o(n^{1/p}).

S3 Technical lemmas

In this section, we present some technical lemmas that will be used in subsequent proofs.

Proposition 1 (Chernozhukov et al., 2022). Suppose that η(P_{1,N}) = η_{1,N} ∈ T_N and η(P_{0,N}) = η_{0,N}. Let P_N = {P_{t,N}, t ∈ [0,1]} be a collection of distributions such that P_{t,N} = (1−t)P_{0,N} + tP_{1,N} and η(P_{t,N}) = (1−t)η_{0,N} + tη_{1,N}. Suppose that for all β ∈ B_1 and η_{1,N} ∈ T_N: (a) there exists a function ϕ(O; β, η_{0,N}) satisfying
d/dt E_{P0,N}[g̃(O; β, η(P_{t,N}))]|_{t=0} = ∫ ϕ(o; β, η_{0,N}) dP_{1,N}(o),   E_{P0,N}[ϕ(O; β, η_{0,N})] = 0,   E_{P0,N}[ϕ(O; β, η_{0,N})²] < ∞;
(b) ∫ ϕ(o; β, η(P_{t,N})) dP_{t,N}(o) = 0 for t ∈ [0,1]; and (c) ∫ ϕ(o; β, η(P_{t,N})) dP_{0,N}(o) and ∫ ϕ(o; β, η(P_{t,N})) dP_{1,N}(o) are continuous at t = 0. Then g(β, η) = g̃(β, η) + ϕ(β, η) is globally Neyman orthogonal on B_1.

Lemma S7 (Permanence properties of global Neyman orthogonality). (a) Suppose g(o; β, η) satisfies global Neyman orthogonality on B_1 and ∂g(o; β, η)/∂β exists on B_2 ⊂ B_1; then ∂g(o; β, η)/∂β is globally Neyman orthogonal on B_2. (b) Suppose g_1(o; β, η) and g_2(o; β, η) satisfy global Neyman orthogonality on B_1, and h_1(β), h_2(β) are functions of β; then g_1(o; β, η)h_1(β) + g_2(o; β, η)h_2(β) also satisfies global Neyman orthogonality on B_1.

Proof. For (a),
∂/∂t E_{P0,N}[ ∂/∂β g(O; β, (1−t)η_{0,N} + tη_{1,N}) ]|_{t=0} = ∂/∂β { ∂/∂t E_{P0,N}[ g(O; β, (1−t)η_{0,N} + tη_{1,N}) ]|_{t=0} } = 0.
(b) is obvious.

Lemma S8. Suppose g(o; β, η) is a separable moment function such that
g(o; β, η) = Σ_{b=1}^B g^(b)(o; η) h^(b)(o; β).
Furthermore, suppose h^(b)(β), b = 1, ..., B, are continuously differentiable functions of β. Then under Assumption 12,
sup_{β∈B} ‖(P_{n,l} − P_{0,N})(g(β, η̂_l) − g(β, η_{0,N}))‖ = O_{P0,N}(√(m/N) δ_N),
sup_{β∈B} ‖(P_{n,l} − P_{0,N})(G(β, η̂_l) − G(β, η_{0,N}))‖ = O_{P0,N}(√(m/N) δ_N).

Proof.
Direct calculation yields sup β∈B (Pn,l−P0,N)(g(β,bηl)−g(β, η0,N)) = sup β∈B (Pn,l−P0,N) BX i=1 g(b)(bηl)−g(b)(η0,N) h(b)(β)) ≤BX b=1 (Pn,l−P0,N) g(b)(bηl)−g(b)(η0,N) sup β∈B h(b)(β)) ≲BX b=1 (Pn,l−P0,N) g(b)(bηl)−g(b)(η0,N) , 6 where the third inequality is due to triangle inequality, the fourth inequality is due to the bound- edness of h(b)(β). For any b∈ {1, ..., B}, we consider 1 NEP0,N" Gn,l g(b)(bηl)−g(b)(η0,N) 2 Oi, i∈Ic l# ≤1 NEP0,N" g(b)(bηl)−g(b)(η0,N) 2 F Oi, i∈Ic l# ≤sup η∈TN1 NEP0,N" g(b)(η)−g(b)(η0,N) 2 F Oi, i∈Ic l# ≤sup η∈TN1√ NEP0,N g(b)(η)−g(b)(η0,N) 2 F = sup η∈TN1 NmX j=1qX r=1EP0,N g(b)(j,r)(η)−g(b)(j,r)(η0,N) 2 ≲mδ2 N N. where the last line is due to Assumption 12. Therefore, sup β∈B (Pn,l−P0,N)(g(β,bηl)−g(β, η0,N)) =OP rm NδN! . Similarly, we can prove that sup β∈B∥(Pn,l−P0,N)(G(β,bηl)−G(β, η0,N))∥=OP0,N rm NδN! . Lemma S9. Suppose Assumption 5 holds, we have sup β∈B (Pn,l−P0,N)(g(β,bηl)−g(β, η0,N)) =OP0,Nrm NδN . Suppose Assumption 9 holds, we further have sup β∈B∥(Pn,l−P0,N)(G(β,bηl)−G(β, η0,N))∥=OP0,N rm NδN! . 7 Proof. First we notice that sup β∈B Gn,l(g(β,bηl)−g(β, η0)) = sup β∈BvuutmX j=1 Gn,l(g(j)(β,bηl)−g(j)(β, η0))2 ≤vuutmX j=1 sup β∈B Gn,l(g(j)(β,bηl)−g(j)(β, η0)) 2 Since we have already assumed that, for any 1 ≤j≤m, and η∈ Tnthe function class G(j) η={(g(j)(β, η)−g(j)(β, η0,N)), β∈B} is a VC-type class such that log sup QN(ϵ∥F∥Q,2,G(j) η,∥ · ∥ Q,2)≤vlog(a/ϵ),0< ϵ < 1. We apply Lemma S1 conditional on Ic l, sobηlcan be treated as fixed. then we have that with probability 1 −mN−1, sup g∈G(j) η|Gng| ≲(1 +α)K σr vloga∥F∥P,2 σ +v∥M∥P,2√ Nloga∥F∥P,2 σ! + K(q) σ+∥M∥P,q√ N
N1 q+1 α√ N∥M∥P,2N2 q! holds for all 1 ≤j≤msimultaneously and for any α >0, supg∈G(j) η∥g∥2 P,2≤σ2≤ ∥F∥2 P,2. By the assumption ∥F∥P,q<∞for all q, taking σasr(j) N, we have with probability larger than 1 −mN−1, 8 sup g∈G(j) η|Gng| ≲(1 +α)K r(j) Ns vloga∥F∥P0,N,2 r(j) N +vN1/q∥F∥P0,N,q√ Nloga∥F∥P0,N,2 r(j) N! +K(q) r(j) N+N1/q∥F∥P0,N,q√ N N1 q+N1/q∥F∥P0,N,q α√ NN2 q! ≲r(j) Ns log1 r(j) N+N−1/2+ϵlog1 r(j) N+r(j) NNϵ+N−1/2+ϵ ≲r(j) N s log1 r(j) N+Nϵ! +N−1/2+ϵ 1 logr(j) N+ 1! ≤δN for all 1 ≤j≤m. For the second inequality, we choose qsuch that q≥3/ϵ. To sum up, sup β∈B Gn,l(g(β,bηl)−g(β, η0,N)) =Op(√mδN). Similarly, we can prove that sup β∈B Gn,l(G(β,bηl)−G(β, η0,N)) =Op(√mδN) under Assumption 9. Lemma S10. Suppose gis global Neyman orthogonal and either Assumption 5 or Assumption 9 holds, then for k∈Il sup β∈B EP0,N[g(Ok;β,bηl)|(Oi)i∈Ic l]−EP0,N[g(Ok;β, η0,N)] =Op(√mλ′ N) 9 Proof. With probability 1 −o(1),bηl∈ TN. Notice that sup β∈B EP0,N[g(Ok;β,bηl)|(Oi)i∈Ic l]−EP0,N[g(Ok;β, η0,N)] = sup β∈BvuutmX j=1(EP0,N[g(j)(Ok;β,bηl)|(Oi)i∈Ic l]−EP0,N[g(j)(Ok;β, η0,N)])2 ≤vuutmX j=1(sup β∈B|EP0,N[g(j)(Ok;β,bηl)|(Oi)i∈Ic l]−EP0,N[g(j)(Ok;β, η0,N)]|)2 Therefore, we only need to bound sup β∈B|EP0,N[g(j)(Ok;β,bηl)|(Oi)i∈Ic l]−EP0,N[g(j)(Ok;β, η0,N)]| for every j. Let fl(t;β, j) =EP0,N[g(j)(Ok;β, η0,N+t(bηl−η0,N))|(Oi)i∈Ic l]−EP0,N[g(j)(Ok;β, η0,N)], t∈[0,1]. Then by Taylor expansion, we have for some ˜t∈[0, t], fl(t;β, j) =fl(0;β, j) +f′ l(0;β, j) +f′′ l(˜t;β, j). Notice that |fl(0;β, j)|=|EP0,N[g(j)(Ok;β, η0,N)|(Oi)i∈Ic l]−E[g(j)(Ok;β, η0,N)]|= 0, |f′ l(0;β, j)| ≤sup η∈Tn ∂ ∂tEP0,Ng(j)(O;β, η0,N+t(η−η0,N)) = 0, sup β∈B|f′′ l(˜t;β, j)| ≤ sup β∈B,t∈(0,1),η∈TN ∂2 ∂t2EP0,Ng(j)(O;β, η0+t(η−η0,N)) ≤λN, where the first equation is due to EP0,N[g(j)(Ok;β, η0,N)|(Oi)i∈Ic l] =E[g(j)(Ok;β, η0,N)], the second line is due to Neyman orthogonality, the third line is due to Assumption 5 or Assumption 9. 
Therefore, sup β∈B EP0,N[g(Ok;β,bηl)|(Oi)i∈Ic l]−EP0,N[g(Ok;β, η0)] ≤q mλ2 N=√mλN 10 holds with probability 1 −o(1), hence sup β∈B EP0,N[g(Ok;β,bηl)|(Oi)i∈Ic l]−EP0,N[g(Ok;β, η0,N)] =Op(√mλN) . Lemma S11. Suppose gis global Neyman orthogonal, and either Assumption 5 or Assumption 9 holds, we have sup β∈B∥bg(β,bη)−bg(β, η0,N)∥=Oprm NδN+√mλN Proof. We only need to prove that sup β∈B∥bgl(β,bηl)−bgl(β, η0)∥=Oprm NδN+√mλN The result follows by noticing that sup β∈B∥bgl(β,bηl)−bgl(β, η0,N)∥ ≤1√ Nsup β∈B Gn,l(g(β,bηl)−g(β, η0,N)) + sup β∈B EP0,N[g(Ok;β,bηl)|(Oi)i∈Ic l]−EP0,N[g(Ok;β, η0,N)] Remark S1. Under Assumption 9, we further have sup β∈B∥bg(β,bη)−bg(β, η0,N)∥=oP(1/√ N). Lemma S12. Under Assumption 5, we have ∥bΩ(β0,bη)−Ω(β0, η0,N)∥=Opm√ NδN . Proof. We only need to derive the convergence rate for ∥bΩl(β0,bηl)−Ω(β0, η0,N)∥for some l. Note 11 that ∥bΩl(β0,bηl)−Ω(β0, η0)∥ =∥(Pn,l−P0,N)(Ω(β0,bηl)−Ω(β0, η0,N))∥+∥(Pn,l−P0,N)Ω(β0, η0,N)∥+∥P0,N[Ω(β0,bηl)−Ω(β0, η0,N)]∥ To derive the convergence rate for each term, we will vectorize the matrice bΩl(β0,bηl) and Ω(β0, η0,N) asm2dimensional vectors. Then we can apply the same techniques we used in Lemma S9 and S10. For the first term, EP0,N∥√n(Pn,l−P0,N)(Ω(β0,bηl)−Ω(β0, η0,N))∥2 ≤EP0,N∥bΩl(β0,bηl)−Ω(β0, η0,N)∥2 F ≤sup η∈TnEP0,N[∥vec(bΩl(β0, η))−vec(Ω(β0, η0,N))∥2] ≤mX j=1mX k=1r(j,k) n≤m2δ2 N. The second line is because the spectral norm is always upper bounded by the Frobenius Norm. The third line is due to cross-fitting, so that the randomness of bηlcan be ignored. Therefore, ∥(Pn,l−P0,N)(Ω(β0,bηl)−Ω(β0, η0,N))∥=Opm√ NδN . Remark S2. Under the assumption δN=o(1/√m), we further have sup β∈B∥(Pn,l−P0,N)(Ω(β,bηl)−Ω(β, η0,N))∥=op(1/√m). Lemma S13. Under Assumption 5, we have sup β∈B∥(Pn−P0,N)(Ω(β,bη)−Ω(β, η0,N))∥=Opm√ NδN . 12 If we further assume Assumption 9 or Assumption 12 holds, then sup
β∈B∥(Pn−P0,N)(Ω(β,bη)−Ω(β, η0,N))∥=oprm N . Proof. We will vectorize the matrices Ω( β,bηl) and Ω( β, η0,N) asm2dimensional vectors. Then we can apply the same techniques we used in Lemma S9 and S10. For the first term, we have sup β∈B∥(Pn,l−P0,N)(Ω(β,bηl)−Ω(β, η0,N))∥ ≤sup β∈B∥(Pn,l−P0,N)(Ω(β,bηl)−Ω(β, η0,N))∥F = sup β∈B∥(Pn,l−P0,N)(vec(Ω( β,bηl))−vec(Ω( β, η0,N)))∥ Note that vec(Ω( β, η)) is a m2-dimensional vector with each component of the form g(j)(β, η)g(k)(β, η), with i, k∈ {1, ..., m}. Following the similar derivations as in S9, we have sup β∥(Pn,l−P)(Ω(β,bηl)−Ω(β, η0,N))∥=Opr m2 NδN , The only difference here is that vec(Ω( β, η)) is a m2-dimensional vector, so here the size is Opq m2 NδN instead of Oppm NδN . If we further assume Assumption 9 holds, then sup β∈B∥(Pn,l−P0,N)(Ω(β,bηl)−Ω(β, η0,N))∥=oprm N . since δN=o(1/√m). Proof when Assumption 12 holds is very similar to the proof of Lemma S8, thus omitted. Remark S3. Combing Lemma S13 with Assumption 6 or Assumption 11 allows us to drive con- vergence rate of ∥bΩ(β,bη)−Ω(β, η0,N)∥ 13 by noticing that ∥bΩl(β,bηl)−Ω(β, η0,N)∥ =∥(Pn,l−P0,N)(Ω(β,bηl)−Ω(β, η0,N))∥+∥(Pn,l−P0,N)Ω(β, η0,N)∥+∥P0,N[Ω(β,bηl)−Ω(β, η0,N)]∥. Lemma S14. Under Assumption 3, bg(β0, η0,N) =Oprm N . Proof. We only need to bound EP0,N bg(β0, η0,N) 2 : EP0,N bg(β0, η0,N) 2=EP0,NbgT(β0, η0,N)bg(β0, η0,N) =1 NEP0,Ntr[gT(β0, η0,N)g(β0, η0,N)] =1 NEP0,Ntr[g(O;β0, η0,N)gT(β0, η0,N)] =1 NtrΩ(β0, η0,N)≲m N Therefore, bg(β0, η0,N) =Oprm N . Lemma S15. Assume m2/N→0,m2/µN=O(1), Assumption 1,2,3,4,5,6 hold, then ∥δ(bβ)∥=Op(1). 14 Proof. Direct calculation yields ∥bg(bβ,bη)∥2=bg(bβ,bη)Tbg(bβ,bη) ≲bg(bβ,bη)TbΩ(bβ,bη)−1bg(bβ,bη) =bQ(bβ,bη)≤bQ(β0,bη) ≤|bQ(β0,bη)−bQ(β0, η0)|+bQ(β0, η0,N) ≤|(bg(β0,bη)−bg(β0, η0,N))bΩ−1(β0,bη)bg(β0,bη)| +|bg(β0, η0,N)bΩ(β0,bη)−1(bΩ(β0,bη)−bΩ(β0, η0,N))bΩ(β0, η0,N)| +|bg(β0, η0,N)bΩ−1(β0,bη)(bg(β0,bη)−bg(β0, η0,N))|+bQ(β0, η0,N). 
Since ∥bg(β0,bη)−bg(β0, η0,N)∥=Oprm NδN , ∥bΩ(β0,bη)−bΩ(β0, η0,N)∥ ≤ ∥bΩ(β0,bη)−Ω(β0, η0,N)∥+∥Ω(β0, η0,N)−bΩ(β0, η0,N)∥=op(1), ∥bg(β0, η0,N)∥=Oprm N , bQ(β0, η0,N)≲∥bg(β0, η0,N)∥2=Opm N , we have ∥bg(bβ,bη)∥=Op(p m/N ). Therefore, ∥δ(bβ)∥ ≤C√ N∥bg(bβ, η0,N)∥/µN+cM ≤C√ N∥bg(bβ, η0,N)−bg(bβ,bη)∥/µN+C√ N∥bg(bβ,bη)∥/µN+cM =Op(1). Lemma S16. Assume conditions in Theorem 1 hold, then sup β∈B,δ(β)≤C bg(β, η0,N) =OpµN√ N 15 Proof. sup β∈B,δ(β)≤C bg(β, η0,N) ≤ sup β∈B,δ(β)≤C bg(β, η0,N)−bg(β0, η0,N) + bg(β0, η0,N) ≤µN√ NcM sup β∈B,δ(β)≤C∥δ(β)∥+Oprm N ≤OpµN√ N , where the second last inequality is due to Assumption 3, the last inequality is due to boundedness ofm/µ2 N. Lemma S17. Assume conditions in Theorem 1 hold, then sup β∈B,δ(β)≤C bg(β,bη) =OpµN√ N . Proof. sup β∈B,δ(β)≤C bg(β,bη) ≤ sup β∈B,δ(β)≤C bg(β,bη)−bg(β, η0,N) + sup β∈B,δ(β)≤C bg(β, η0,N)−bg(β0, η0,N) + bg(β0, η0,N) =Oprm NδN+√mλN +Oprm N +OPµN√ N =OPµN√ N . Lemma S18. Suppose gis global Neyman orthogonal and continuously differentiable in a neigh- borhood of β0, denoted as B′. Suppose in addition Assumption 9 holds, we have sup β∈B′∥bG(β,bη)−bG(β, η0,N)∥=Op1√ N Proof. since gis global Neyman orthogonal in B′andgis continuously differentiable in a neighbor- hood of B′, we have Gis also global Neyman orthogonal in B′. The rest of proof is similar to the proof of S11, thus omitted. 16 Lemma S19. Assume conditions for Theorem 2 hold, then Nµ−1 N∂bQ(β,bη) ∂β β=β0=Nµ−1 N∂˜Q(β,bη) ∂β β=β0+op(1). Proof. It suffices to show that for k−th coordinate of β, denoted as β(k)(k∈ {1, ..., p}), we have Nµ−1 N∂bQ(β,bη) ∂β(k) β=β0=Nµ−1 N∂˜Q(β,bη) ∂β(k) β=β0+op(1). Following the same derivation as in Ye et al. (2024), we have ∂bQ(β,bη)
∂β(k) β=β0=G(k),TbΩ(β0,bη)−1bg(β0,bη) + (N−1NX i=1ˇU(k) i)TbΩ(β0,bη)−1bg(β,bη) ∂˜Q(β,bη) ∂β(k) β=β0=G(k),T¯Ω(β0, η0,N)−1bg(β0,bη) + (N−1NX i=1˜U(k) i)TΩ(β0, η0,N)−1bg(β,bη) where ˜U(k) i=G(k) i(β0,bηl(i))−G(k)−EP0,N(G(k) igT i)¯Ω−1gi(β0,bηl(i)), ˇU(k) i=G(k) i(β0,bηl(i))−G(k)− N−1NX i=1G(k) i(β0,bηl(i))gi(β0,bηl(i)) bΩ(β0,bηl(i))−1gi(β0,bηl(i)). By contrasting the explicit formula for Nµ−1 N∂bQ(β,bη) ∂β(k) β=β0andNµ−1 N∂˜Q(β,bη) ∂β(k) β=β0, we need to show the following three conditions: 1.Nµ−1 N(N−1PN i=1(ˇU(k) i−˜U(k) i))TbΩ(β,bη)−1bg(β0,bη) =op(1) 2.Nµ−1 n(N−1PN i=1˜U(k) i)T{bΩ(β0,bη)−1−Ω−1(β0, η0,N)}bg(β0,bη) =op(1) 3.Nµ−1 NG(k)T{bΩ(β0,bη)−1−Ω−1(β0, η0,N)}bg(β0,bη) =op(1). 17 We firstly show 1., by derivations in Ye et al. (2024), |Nµ−1 N(N−1NX i=1(ˇU(k) i−˜U(k) i)bΩ(β,bη)−1bg(β0,bη)| ≤Cµ−1 NN∥bg(β0,bη)∥2n ∥bΩ(β0,bη)−Ω)∥+∥N−1NX i=1G(k) i(β0,bηl(i))gi(β0,bηl(i))−E(G(k) igT i)∥o =op(1) Since∥bg(β0,bη)∥ ≤ ∥bg(β0,bη)−bg(β0, η0,N)∥+∥bg(β0, η0,N)∥=Op(p m/N ),∥bΩ(β0,bη)−Ω∥=op(1/√m), ∥N−1PN i=1G(k) i(β0,bηl(i))gi(β0,bηl(i))−EP0,N(G(k) igT i)∥=op(1/√m) and√m/µ n=O(1). Here we will show that ∥N−1PN i=1G(k) i(β0,bηl(i))gi(β0,bηl(i))−EP0,N(G(k) igT i)∥=op(1/√m). It suffices to show that ∥Pn,lG(k)(β0,bη)gT(β0,bη)−P0,NG(k)(β0, η0,N)gT(β, η0,N)∥=op(1/√m). Following the similar derivations in Lemma S13 and Remark S3, we have ∥Pn,lG(k)(β0,bη)gT(β0,bη)−P0,NG(k)(β0, η0,N)gT(β, η0,N)∥F ≤∥(Pn,l−P0,N)(vec( G(k)(β0,bη)gT(β0,bη))−vec(G(k)(β0, η0,N)gT(β0, η0,N)))∥ +∥(Pn,l−P0,N)vec(G(k)(β0, η0,N)gT(β0, η0,N))∥ +∥P0,N(vec(G(k)(β0,bη)gT(β0,bη))−vec(G(k)(β0, η0,N)gT(β0, η0,N)))∥ ≤op rm N! +op1√m +o1√m =op1√m . For the last inequality, we used the similar derivations as in S13 and Assumption 11. For 2., we first show that 1√ mNNX i=1˜U(k) i =Op(1). 18 Recall U(k) i=G(k) i−G(k)−EP0,N[G(k) igT i]Ω−1gi. Therefore, U(k) iis the residual for projecting G(k) i−G(k)onto the space spanned by gi. Therefore, EP0,N(∥U(k) i∥2)≤EP0,N(∥G(k) i∥2)≤Cm2. 
Using Markov inequality, we have 1√ mN NX i=1U(k) i =Op(1). Next, 1√ NNX i=1(˜U(k) i−U(k) i) = 1√ NNX i=1 G(k) i(β0,bηl(i))−G(k) i(β0, η0,N)−EP0,N[G(k) igT i]Ω−1(gi(β0,bηl(i))−gi) ≲√ N bG(k)(β0,bη)−bG(k)(β0, η0,N) +√ N bg(β0,bη)−bg(β0, η0,N) =op(1). For the last line, we used the fact that bG(k)(β0,bη)−bG(k)(β0, η0,N) =op1√ N , bg(β0,bη)−bg(β0, η0,N) =op1√ N , which are results from Lemma S11 and Lemma S18. We also use the fact that EP0,N[G(k) igT i]≤C. This inequality can be derived using matrix Cauchy-Schwartz inequality, the assumption that all eignvalues of Ω are bounded above and below by some constants, and the Assumption that ∥EP0,NGiGT i∥ ≤C. 19 Therefore, we conclude that 1√ mNNX i=1˜U(k) i =Op(1). Following the derivations of Ye et al. (2024), we have Nµ−1 N(N−1NX i=1˜U(k) i)T{bΩ(β0,bη)−1−Ω−1}bg(β0,bη) ≲ 1√ mNNX i=1˜U(k) i ∥√m{bΩ(β0,bη)−Ω(β0, η0,N)}∥∥µ−1 N√ Nbg(β0,bη)∥ =op(1). The proof of 3. is similar to the proof of 2., thus omitted. Lemma S20. ∂bg(β0, η0,N) ∂β(k) =Op rm N! , ∂2bg(β0, η0,N) ∂β(k)∂β(r) =Op rm N! . Proof. Assumption 7 implies that EP0,N" ∂bg(β0, η0,N) ∂β(k)∂bg(β0, η0,N)T ∂β(k)# ≤C. Therefore, EP0,N ∂bg(β0, η0,N) ∂β(k) 2 =EP0,N" ∂bg(β0, η0,N)T ∂β(k)∂bg(β0, η0,N) ∂β(k)# = tr1 NEP0,N" ∂gi(β0, η0,N) ∂β(k)∂gi(β0, η0,N)T ∂β(k)# ≤m N. Therefore, ∂bg(β0, η0,N) ∂β(k) =Op rm N! . 20 Similarly, we can prove ∂2bg(β0, η0,N) ∂β(k)∂β(r) =Op rm N! . Remark S4. A direct consequence for this lemma is that bG(β0, η0,N) =Op rm N! , ∂bG(β0, η0,N) ∂β(r) =Op rm N! . Lemma S21. For any βp− →β0, NS−1 N ∂2bQ(β,bη) ∂β∂βT−∂2bQ(β, η0,N) ∂β∂βT! S−1T N=op(1). 21 Proof. We firstly calculate the explicit form of∂2bQ(β,η) ∂β(k)∂β(r). Direct calculation yields, ∂bΩ(β, η) ∂β(k)=∂ ∂β 1 NNX i=1gi(β, η)gi(β, η)T =1 NNX i=1" ∂gi(β, η) ∂β(k)gi(β, η)T+gi(β, η)∂gi(β, η)T ∂β(k)# , ∂2bΩ(β, η) ∂β(k)∂β(r)=2 NNX i=1" gi(β, η) ∂β(k)∂β(r)gi(β, η)T+∂gi(β, η) ∂β(k)∂gi(β, η)T ∂β(r)# , ∂bQ(β, η)
∂β(k)=∂bg(β, η)T ∂β(k)bΩ(β, η)−1bg(β, η)−1 2bg(β, η)bΩ−1∂bΩ(β, η) ∂β(k)bΩ−1bg(β, η), ∂2bQ(β, η) ∂β(k)∂β(r)=∂2bg(β, η)T ∂β(k)∂β(r)bΩ(β, η)−1bg(β, η) −∂bg(β, η)T ∂β(k)bΩ−1(β, η)∂bg(β, η)T ∂β(r)bΩ−1(β, η)bg(β, η) −∂bg(β, η)T ∂β(r)bΩ−1(β, η)∂bg(β, η)T ∂β(k)bΩ−1(β, η)bg(β, η) +∂bg(β, η)T ∂β(k)bΩ(β, η)−1∂bg(β, η)T ∂β(r) +bg(β, η)TbΩ(β, η)−1∂Ω(β, η) ∂β(r)bΩ(β, η)−1∂bΩ(β, η) ∂β(k)bΩ(β, η)−1bg(β, η) −1 2bg(β, η)bΩ(β, η)−1∂2bΩ(β, η) ∂β(r)∂β(k)bΩ(β, η)−1bg(β, η) =:J(k,r) 1(β, η)−J(k,r) 2(β, η)−J(k,r) 3(β, η) +J(k,r) 4(β, η) +J(k,r) 5(β, η)−1 2J(k,r) 6(β, η). Define Ji(β, η) to be the matrix with ( k, j)-th element equal to J(k,j) i(β, η) for i= 1, ...,6. It remains to show that for all i∈ {1,2,3,4,5,6}, we have NS−1 N(Ji(β,bη)−Ji(β, η0,N))S−1T N=op(1). We will firstly show that NS−1 N(J1(β,bη)−J1(β, η0,N))S−1T N=op(1), the proofs for other terms are similar thus omitted. 22 Firstly notice that J1(β, η) = ∂bG(β, η) ∂β(1), ...,∂bG(β, η) ∂β(p)!T bΩ(β, η)−1bg(β, η). ∥NS−1 N(J1(β,bη)−J1(β, η0,N))S−1T N∥ ≤ NS−1 N ∂bG(β,bη) ∂β(1), ...,∂bG(β,bη) ∂β(p)! − ∂bG(β, η0,N) ∂β(1), ...,∂bG(β, η0,N) ∂β(p)! T bΩ(β,bη)−1bg(β,bη)S−1T N + NS−1 N ∂bG(β, η0,N) ∂β(1), ...,∂bG(β, η0,N) ∂β(p)!T bΩ(β,bη)−1(bΩ(β,bη)−bΩ(β, η0,N))bΩ(β, η0,N)−1bg(β,bη)S−1T N + NS−1 N ∂bG(β, η0,N) ∂β(1), ...,∂bG(β, η0,N) ∂β(p)!T bΩ(β, η0,N)−1(bg(β,bη)−bg(β, η0,N))S−1T N :=J11+J12+J13 ForJ11, since∂2g(β,η) ∂β(k)∂β(r)satisfies global Neyman orthogonality in a neighborhood of β0, similar to derivations in Lemma S11, we have ∂bG(β,bη) ∂β(1), ...,∂bG(β,bη) ∂β(p)! − ∂bG(β, η0,N) ∂β(1), ...,∂bG(β, η0,N) ∂β(p)! =op1√ N . Since for any βwe have ∥bΩ(β,bη)−Ω(β, η0,N)∥=op(1), and the eigenvalues of Ω(β, η0,N) are bounded by 1 /CandCfor any β, therefore the eigenvalues ofbΩ(β,bη) are bounded by 1 /2Cand 2 C. Previously we have derived that sup∥δ(β)∥≤C∥bg(β,bη)∥= Op(µN/√ N). Therefore, J11≤Nµ−2 Nop(1/√ N)Op(µN/√ N) =op(1/µN) =op(1). 
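As an informal check of the first-derivative formula displayed earlier in this lemma, namely ∂Q̂/∂β^{(k)} = (∂ĝ/∂β^{(k)})' Ω̂^{-1} ĝ − ½ ĝ' Ω̂^{-1} (∂Ω̂/∂β^{(k)}) Ω̂^{-1} ĝ, one can compare the analytic gradient to a finite difference of Q̂ on a toy linear-IV moment condition. The design below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 2000, 3
z = rng.standard_normal((N, m))               # instruments (illustrative design)
x = z @ np.ones(m) + rng.standard_normal(N)
y = 0.5 * x + rng.standard_normal(N)

def pieces(beta):
    gi = z * (y - beta * x)[:, None]          # N x m matrix of g_i(beta)
    dgi = -z * x[:, None]                     # derivative of each g_i in beta
    ghat, dghat = gi.mean(0), dgi.mean(0)
    Om = gi.T @ gi / N
    dOm = (dgi.T @ gi + gi.T @ dgi) / N
    return ghat, dghat, Om, dOm

def Q(beta):                                  # Q(beta) = g' Om(beta)^{-1} g / 2
    ghat, _, Om, _ = pieces(beta)
    return 0.5 * ghat @ np.linalg.solve(Om, ghat)

def dQ(beta):                                 # analytic derivative from the text
    ghat, dghat, Om, dOm = pieces(beta)
    a = np.linalg.solve(Om, ghat)             # Om^{-1} ghat
    return dghat @ a - 0.5 * a @ dOm @ a

h = 1e-6
fd = (Q(0.3 + h) - Q(0.3 - h)) / (2 * h)      # central finite difference
assert abs(dQ(0.3) - fd) < 1e-6
```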
23 ForJ12, we firstly notice that NS−1 N ∂bG(β, η0,N) ∂β(1), ...,∂bG(β, η0,N) ∂β(p)!T = NS−1 N ∂bG(β, η0,N) ∂β(1), ...,∂bG(β, η0,N) ∂β(p)!T − ∂bG(β0, η0,N) ∂β(1), ...,∂bG(β0, η0,N) ∂β(p)!T + NS−1 N ∂bG(β0, η0,N) ∂β(1), ...,∂bG(β0, η0,N) ∂β(p)!T =op(√ N) +N∥S−1 N∥Op(p m/N ). Therefore, J12≤(op(√ N) +N∥S−1 N∥Op(p m/N ))op(1/√m)Op(µN/√ N)∥S−1 N∥=op(1). Similarly we can prove that J13=op(1). Therefore we have proved that NS−1 N(J1(β,bη)−J1(β, η0,N))S−1T N=op(1). ForJ2term, similarly we notice that J2(β, η) =bG(β, η)TbΩ−1(β, η)bG(β, η)bΩ−1(β, η)bg(β, η). Therefore, 24 ∥NS−1 N(J2(β,bη)−J2(β, η0,N))S−1T N∥ ≤ NS−1 N[bG(β,bη)−bG(β, η0,N)]bΩ−1(β,bη)bG(β,bη)bΩ−1(β,bη)bg(β,bη)S−1 N + NS−1 NbG(β, η0,N)[bΩ(β,bη)−1−bΩ(β, η0,N)−1]bG(β,bη)bΩ−1(β,bη)bg(β,bη)S−1 N + NS−1 NbG(β, η0,N)bΩ(β, η0,N)−1[bG(β,bη)−bG(β, η0,N)]bΩ−1(β,bη)bg(β,bη)S−1 N + NS−1 NbG(β, η0,N)bΩ(β, η0,N)−1bG(β, η0,N)[bΩ−1(β,bη)−bΩ−1(β, η0,N)]bg(β,bη)S−1 N + NS−1 NbG(β, η0,N)bΩ(β, η0,N)−1bG(β, η0,N)bΩ−1(β, η0,N)[bg(β,bη)−bg(β, η0,N)]S−1 N :=J21+J22+J23+J24+J25. Similar to the previous arguments, we have J21≤Nµ−2 NoP(1/√ N)OP(µN/√ N)OP(µN/√ N) =op(1) Now we consider the second term. Similar to previous arguments, we have ∥NS−1 NbG(β, η0,N)∥=op(√ N) +N∥S−1 N∥Op(p m/N ), ∥bΩ(β,bη)−1−bΩ(β, η0,N)−1∥=∥bΩ(β,bη)−1(bΩ(β,bη)−bΩ(β, η0,N))bΩ(β, η0,N)−1∥=op(1/√m). For∥bG(β,bη)∥, we have ∥bG(β,bη)∥ ≤ ∥bG(β,bη)−bG(β, η0,N)∥+∥bG(β, η0,N)∥ ≤op(1/√ N) +∥SN∥op(1/√ N) +Op(p m/N ) =∥SN∥op(1/√ N). Therefore J22≤(op(√ N) +N∥S−1 N∥Op(p m/N ))op(1/√m)∥SN∥op(1/√ N)Op(µN/√ N)∥S−1 N∥=op(1). Similarly we can show that J23,J24,J25areop(1). 25 Using the same technique we can show that NS−1 N(Ji(β,bη)−Ji(β, η0,N))S−1T N=op(1), fori= 3,4,5,6. Therefore, we have NS−1 N ∂2bQ(β,bη) ∂β∂βT−∂2bQ(β, η0,N) ∂β∂βT! S−1T N=op(1). S4 Proof for main Theorems S4.1 Proof for consistency Proof. Our goal is to prove that δ(bβ)p− →0, where δ(β) =1
µNST N(β−β0). By identification assump- tion, ST N(β−β0) µN ≤C√ N∥P0,N[g(β, η0,N)] µN. Therefore, it remains to prove that µ−1 N√ N∥P0,N[g(bβ, η0,N)]∥=op(1). Sincebβis the minimizer of bQ(β,bη), we have µ−2 NNbQ(bβ,bη)≤µ−2 NNbQ(β0,bη) Later on, we will prove that sup ∥δ(β)∥≤Cµ−2 NN|bQ(β,bη)−Q(β, η0,N)|=op(1). (4) for any ϵ, γ, letA1={∥δ(bβ)∥ ≤C}, since δ(bβ) =Op(1), we can choose Csuch that P(A1)≥1−ϵ. LetA2={sup∥δ(β)∥≤Cµ−2 NN|bQ(β,bη)−Q(β, η0,N)| ≤γ}, then by 4, we have for Nlarge enough, 26 P(A2)>1−ϵ. Therefore, with probability bigger than 1 −2ϵ, µ−2 NNQ(bβ, η0,N)≤µ−2 NNbQ(bβ,bη) +γ≤µ−2 NNbQ(β0,bη) +γ≤µ−2 NNQ(β0, η0) + 2γ, which implies µ−2 NN[Q(bβ, η0,N)−Q(β0, η0,N)] =op(1) since ϵandγare arbitrarily small. Then we have µ−2 NN[Q(bβ, η0,N)−Q(β0, η0,N)] +m/µ2 N=µ−2 NNQ(bβ, η0,N) =µ−2 NNPg (bβ, N)PgT(bβ, η0,N)g(bβ, η0,N)−1Pg(bβ, η0,N) ≥Cµ−2 NN∥Pg(bβ, η0,N)∥2, where the last inequality is due to Assumption 3. Therefore µ−1 N√ N∥P[g(bβ, η0,N)]∥=op(1), and hence δ(bβ)p− →0. To show (4), we make the following decomposition: sup ∥δ(β)∥≤Cµ−1 NN|bQ(β,bη)−Q(β, η0,N)| ≤sup ∥δ(β)∥≤Cµ−1 NN|bQ(β,bη)−˜Q(β,bη)| | {z } =:AN+ sup ∥δ(β)∥≤Cµ−1 NN|˜Q(β,bη)−˜Q(β, η0,N)| | {z } =:BN + sup ∥δ(β)∥≤Cµ−1 NN|˜Q(β, η0,N)−Q(β, η0,N)| | {z } =:CN, where Q(β, η) =g(β, η)TΩ(β, η)−1g(β, η)/2 +m 2N, ˜Q(β, η) =bg(β, η)TΩ(β, η0,N)−1bg(β, η)/2. Here we only discuss AN=op(1) and BN=op(1). Since CNdoes not involve any nuisance parameter, the proof for uniform convergence of CNis the same as Newey and Windmeijer (2009). 27 ForANterm, following the same derivation as in Ye et al. (2024), we have AN≤sup ∥δ(β)∥≤C ba(β,bη)T(˜Ω(β,bη)−Ω(β,bη))ba(β,bη) + sup ∥δ(β)∥≤C ba(β,bη)T(bΩ(β,bη)−Ω(β,bη))bΩ(β,bη)−1(bΩ(β,bη)−Ω(β,bη))ba(β,bη) ≤sup ∥δ(β)∥≤C∥ba(β,bη)∥2n sup ∥δ(β)∥≤C bΩ(β,bη)−¯Ω(β,bη) +Csup ∥δ(β)∥≤C bΩ(β,bη)−¯Ω(β,bη) 2o where ba(β, η) =µ−1 N√ NΩ(β, η0,N)−1bg(β, η). Since sup ∥δ(β)∥≤C∥ba(β,bη)∥2≤sup ∥δ(β)∥≤CCµ−2 nN∥bg(β,bη)∥2=Op(1), ∥bΩ(β0,bη)−Ω(β0, η0,N)∥=op(1), AN=op(1). 
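The structure of the consistency argument, identification plus uniform convergence of the objective, can be illustrated on a toy example. The sketch below grid-minimizes the continuously updated objective Q̂(β) = ĝ(β)' Ω̂(β)^{-1} ĝ(β)/2 on a simulated linear-IV design (all simulation choices are illustrative) and confirms that the estimation error shrinks as N grows.

```python
import numpy as np

def cue_beta_hat(N, seed=0, beta0=0.5, m=3):
    """Grid-minimize Q(b) = g' Omega(b)^{-1} g / 2 on a simulated
    linear-IV design (purely illustrative data-generating process)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((N, m))                # instruments
    x = z @ np.ones(m) + rng.standard_normal(N)    # endogenous regressor
    y = beta0 * x + rng.standard_normal(N)

    def Q(b):
        gi = z * (y - b * x)[:, None]
        ghat = gi.mean(0)
        return 0.5 * ghat @ np.linalg.solve(gi.T @ gi / N, ghat)

    grid = np.linspace(-1.0, 2.0, 1501)
    return grid[np.argmin([Q(b) for b in grid])]

# Estimation error shrinks with N, consistent with delta(beta_hat) -> 0.
errs = [abs(cue_beta_hat(N) - 0.5) for N in (500, 20_000)]
assert errs[1] < 0.05
```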
ForBN, following the same derivation of Ye et al. (2024), we have Bn≲µ−2 NNsup β∈B∥bg(β,bη)−bg(β, η0,N)∥sup ∥δ(β)∥≤C(∥bg(β,bη)∥+∥bg(β, η0,N)∥) =Op √mδN µN! =op(1) by Lemma S11, Lemma S16 and Lemma S17. S4.2 Proof for asymptotic normality By the first order condition ∂bQ(β,bη) ∂β β=bβ= 0 28 and Taylor expansion around β0, we have 0 =NS−1 N∂bQ(β,bη) ∂β β=bβ=NS−1 N∂bQ(β,bη) ∂β β=β0+NS−1 N∂bQ(β,bη) ∂β∂βT β=¯βS−1T N[SN(bβ−β0)] Next, we aim to show NS−1 N∂bQ(β,bη) ∂β β=β0−NS−1 N∂bQ(β, η0,N) ∂β β=β0 =op(1), it suffices to show that N/µ N∂˜Q(β,bη) ∂β β=β0=N/µ N∂bQ(β, η0,N) ∂β β=β0+op(1) due to ∥S−1 N∥ ≤µ−1 Nand Lemma S19. We have Nµ−1 N∂bQ(β,bη) ∂β(k) β=β0=Nµ−1 N∂˜Q(β,bη) ∂β(k) β=β0+op(1) =Nµ−1 NG(k)TΩbg(β0,bη) +1 NµNNX i,j=1˜U(k)T iΩ−1(g(Oj;β0,bη)−g(Oj;β0, η0,N)) +1 NµNNX i,j=1˜U(k)T i¯Ω−1g(Oj;β0, η0) +op(1) where ˜U(k) i=G(k)(β,bη)−¯G(k)−E[G(k) i(β0, η0,N)gT i(β0, η0,N)]Ω−1g(β0,bη). Notice that ∥Nµ−1 NG(k)TΩˆg(β0,bη)−Nµ−1 NG(k)TΩˆg(β0, η0,N)∥ =∥Nµ−1 NG(k)TΩ−1(bg(β0,bη)−bg(β0, η0,N))∥=op(1) 29 For the second term, 1 NµNNX i,j=1˜U(k)T i¯Ω−1(gj(β0,bη)−gj(β0, η0,N)) ≲1 NµNNX i,j=1˜U(k)T i(gj(β0,bη)−gj(β0, η0)) ≤1 NµNNX i=1˜U(k)T iNX j=1(gj(β0,bη)−gj(β0, η0,N)) ≤1 NµN NX i=1˜U(k)T i NX j=1(gj(β0,bη)−gj(β0, η0,N)) =op(1) Since1√ NµN∥˜Ui∥=Op(1). Lastly, 1 NµNNX i,j=1˜U(k)T i¯Ω−1g(Oj;β0, η0,N)−1 NµNNX i,j=1U(k)T i¯Ω−1g(Oj;β0, η0,N) ≤ 1 NµnNX i,j=1 ˜U(k)T i−U(k)T i ¯Ω−1g(Oj;β0, η0,N) ≲ 1 NµNNX i=1 ˜U(k)T i−U(k)T iNX j=1g(Oj;β0, η0,N) =op(1) Therefore, we have shown that NS−1 N∂bQ(β,bη) ∂β β=β0=NS−1 N∂bQ(β, η0,N) ∂β β=β0+op(1). Combing with nS−1 N ∂2bQ(β,bη) ∂β∂βT! S−1T N=nS−1 N ∂2bQ(β, η0,N) ∂β∂βT! S−1T N+op(1) and Theorem 3 of Newey and Windmeijer (2009), the asymptotic normality result has been proved. 30 S4.3 Proof of Theorem
4 Proof. The variance estimator can be written as S−1 NbV S−1,T N N= (NS−1 NbHS−1,T N)−1(NS−1 NbDTbΩbDS−1,T N)(NS−1 NbHS−1,T N)−1. We introduce some new notations to assist the proof: bD(j)(β, η) =1 NNX i=1∂gi(β, η) ∂βj−1 NNX i=1∂gi(β, η) ∂βjgi(β, η)TbΩ(β, η)−1bg(β, η), bD(β, η) = (bD(1), ...,bD(p)), ˜H=∂2bQ(bβ, η0,N) ∂β∂βT, ˜V=˜H−1bDT(bβ, η0,N)bΩ−1(bβ, η0,N)˜D(bβ, η0,N)˜H−1. We aim to show that S−1 NbV S−1 N N−S−1 N˜V S−1 N N=op(1). According to Newey and Windmeijer (2009), NS−1 NbHS−1 Np− →H, NS−1,T N˜DTbΩ(bβ, η0,N)˜DS−1,T Np− →HV H. Therefore, we only need to show 1. (NS−1 NbHS−1,T N)−1−(NS−1 N˜HS−1,T N)−1=op(1). 2.NS−1 NbDT(bβ,bη)bΩ(bβ,bη)bD(bβ,bη)S−1,T N−NS−1 NbDT(bβ, η0,N)bΩ(bβ, η0,N)bD(bβ, η0,N)S−1,T N=op(1). I will list some key ingredients for showing those two claim and omit related details. For the first 31 claim, by Lemma S21, we have (NS−1 NbHS−1,T N)−1−(NS−1 N˜HS−1,T N)−1 =N−1ST N(bH−1−˜H−1)SN =N−1ST NbH−1(˜H−bH)˜H−1SN =N−1ST NbH−1SN(NS−1 N(˜H−bH)S−1,T N)N−1ST N˜H−1SNp− →0. For the second claim, we will show that √ NS−1 N[bDT(bβ,bη)−bDT(bβ, η0,N)] =op(1), (5) √ NS−1 NbDT(bβ, η0,N) =Op(1). (6) Combing with the previous result ∥bΩ(bβ,bη)−bΩ(bβ, η0,N)∥=op(1/√m), the second claim can be shown. Equation (5) follows from the following rate results we have already shown before: 1 NNX i=1∂gi(bβ,bη) ∂βj−1 NNX i=1∂gi(bβ, η0,N) ∂βj =op(1/√m), ∥bG(bβ,bη)−bG(bβ, η0,N)∥=op(1/√ N), ∥bΩ(bβ,bη)−bΩ(bβ, η0,N)∥=op(1/√m), ∥bg(bβ,bη)−bg(bβ, η0,N)∥=op(1/√ N), ∥bg(bβ, η0,N)∥=Op(µN/√ N). As for (6), it suffices to show that ∥√ NS−1 NbDT(bβ, η0,N)−√ NS−1 NbDT(β0, η0,N)∥=op(1), (7) ∥√ NS−1 NbDT(β0, η0,N)∥=Op(1). (8) Equation (7) is a direct consequence of Assumption 8. Equation (8) is a direct consequence of √m/µ N=O(1),∥bG(β0, η0,N)∥=Op(√m/√ N),∥bg(β0, η0,N)∥=Op(√m/√ N). 32 S4.4 Proof of Theorem 5 Proof. Letβbe a value lying on the line segment between β0andbβ. 
By mean-value theorem, 2N[bQ(β0,bη)−bQ(bβ,bη)] =N(bβ−β0)T" ∂2bQ(β,bη) ∂β∂βT# (bβ−β0) =(bβ−β0)TSNNS−1 N" ∂2bQ(β,bη) ∂β∂βT# S−1,T NST N(bβ−β0) =(bβ−β0)TSNNS−1 N" ∂2bQ(β, η0,N) ∂β∂βT# S−1,T NST N(bβ−β0) +op(1), =2N[bQ(β0, η0,N)−bQ(bβ, η0,N)] +op(1) where the third equality is due to Lemma S21 and the fact that ST N(bβ−β0) =Op(1). By the proof of Theorem 5 in Newey and Windmeijer (2009), we have 2 N[bQ(β0, η0,N)−bQ(bβ, η0,N)] = Op(1). Therefore, 2N[bQ(β0,bη)−bQ(bβ,bη)]√m−pp− →0. (9) Next, we will show that N(bQ(β0,bη)−bQ(β0, η0,N)) =op(√m). |bQ(β0,bη)−bQ(β0, η0,N)| ≤|(bg(β0,bη)−bg(β0, η0,N))bΩ−1(β0,bη)bg(β0,bη)| +|bg(β0, η0,N)bΩ(β0,bη)−1(bΩ(β0,bη)−bΩ(β0, η0,N))bΩ−1(β0, η0,N)bg(β0,bη)| +|bg(β0, η0,N)bΩ−1(β0,bη)(bg(β0,bη)−bg(β0, η0,N))|. 33 Since ∥bg(β0,bη)−bg(β0, η0,N)∥=Oprm NδN , δN=o(1/√m) ∥bΩ(β0,bη)−bΩ(β0, η0,N)∥ ≤ ∥bΩ(β0,bη)−Ω(β0, η0,N)∥+∥Ω(β0, η0,N)−bΩ(β0, η0,N)∥=op(1/√m), ∥bg(β0, η0,N)∥=Oprm N ,∥bg(β0,bη)∥=Oprm N 1/(2C)≤λmin(bΩ−1(β0,bη))≤λmax(bΩ−1(β0,bη))≤2C, we have N|(bg(β0,bη)−bg(β0, η0,N))bΩ−1(β0,bη)bg(β0,bη)|=Op(mδN) =op(√m), N|bg(β0, η0,N)bΩ(β0,bη)−1(bΩ(β0,bη)−bΩ(β0, η0,N))bΩ−1(β0, η0,N)bg(β0,bη)|=o(√m) N|bg(β0, η0,N)bΩ−1(β0,bη)(bg(β0,bη)−bg(β0, η0,N))|=op(√m). Therefore, N(bQ(β0,bη)−bQ(β0, η0,N)) =op(√m). Combing this result with (9), we have 2N[bQ(β0, η0,N)−bQ(bβ,bη)]√m−pp− →0. Hence, 2NbQ(bβ,bη)−(m−p)√m−p =2NbQ(β0, η0,N)−(m−p)√m−p+op(1) =rm m−p2NbQ(β0, η0,N)−m√m+p√m−p+op(1) d− →N(0,1) by proof of Theorem 5 in Newey and Windmeijer (2009). By standard results for chi-squared distribution that ( χ2 1−α(m)−m)/√ 2mconverges to 1 −αquantile of standard normal as m→ ∞ , 34 we have P(2NbQ(bβ,bη)≥χ2 1−α(m−p)) =P 2NbQ(bβ,bη)−(m−p)p 2(m−p)≥χ2 1−α(m−p)−(m−p)p 2(m−p)! →α. The Theorem has been proved. S5 Linear moment condition case As an important example of separable moment, linear moment is very common in practice. That is, the estimating function have the following form: g(o;β, η) =G(o;η)Tβ+ga(o;η). 
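For moment functions of this affine form, ĝ(β) = Ĝβ + ĝ_a, minimizing a quadratic form ĝ(β)' W ĝ(β) in β admits a closed-form solution. The sketch below illustrates this with an ASMM-style score on simulated data, using oracle nuisance functions that are known by construction here (in practice they would be estimated, e.g. by cross-fitting); all data-generating choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, beta0 = 10_000, 3, 0.7
X = rng.standard_normal(N)
Z = X[:, None] + rng.standard_normal((N, m))     # instruments sharing X
A = X + Z.mean(1) + rng.standard_normal(N)       # treatment
Y = beta0 * A + X + rng.standard_normal(N)       # outcome

# Oracle nuisances E[.|X], exact for this toy design:
etaZ = X[:, None]                # E[Z_j | X] = X
etaA = 2.0 * X                   # E[A | X] = X + E[Z.mean(1) | X] = 2X
etaY = beta0 * etaA + X          # E[Y | X]

Gi = -(Z - etaZ) * (A - etaA)[:, None]           # G(O; eta),   N x m
gai = (Z - etaZ) * (Y - etaY)[:, None]           # g_a(O; eta), N x m
Ghat, gahat = Gi.mean(0), gai.mean(0)

# ghat(beta) = Ghat*beta + gahat is affine in beta, so the quadratic form
# ghat(beta)' W ghat(beta) is minimized in closed form.
W = np.linalg.inv(gai.T @ gai / N)  # weight from moments at preliminary beta=0
beta_hat = -(Ghat @ W @ gahat) / (Ghat @ W @ Ghat)
assert abs(beta_hat - beta0) < 0.1
```

The closed form β̂ = −(Ĝ'WĜ)^{-1} Ĝ'W ĝ_a is what "greatly simplifies the problem" in the linear case: no numerical optimization over β is needed.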
The linear moment case greatly simplifies the problem. For linear moment functions, Assumption 12 can be simplified as follows:
Assumption 17. (Linear score regularity and nuisance parameters estimation). (a) Given a random subset Iof[N]of size N/L, we have the nuisance parameter estimator bη (estimated using samples outside I) belongs to a realization set TN∈TNwith probability 1−∆N, where TNcontains η0,Nand satisfies the following conditions. (b)g(o;β, η)satisfies Neyman-orthogonality. (c) The following conditions on statistical rates hold for any j, k∈ {1, ..., m}, l, r∈ {1, ..., p}, 35 define: ra,(j) N:= sup η∈TN EP0,N[|ga,(j)(O;η)−ga,(j)(O;η0,N)|2]1/2 r(j,l) N:= sup β∈B,η∈TN E G(j,l)(β, η)−G(j,l)(β, η0,N) 21/2 λ(j) N:= sup β∈B,t∈(0,1),η∈Tn ∂2 ∂t2EP0,Ng(j)(O;β, η0,N+t(η−η0,N)) ra,(j),(k) N:= sup η∈TN E ga,(j)(η)ga,(k)(η)−ga,(j)(ηP0,N)ga,(k)(ηP0,N) 21/2 r(j,k,l,r ) N:= sup β∈B EP0,N G(j,l)(η)G(k,r)(η)−G(j,l)(η)G(k,r)(η0,N) 21/2 r(j,k,r) N:= sup β∈B EP0,N G(j,l)(η)g(k)(η)−G(j,l)(η)g(k)(η0,N) 2!1/2 Then the statistical rates satisfy aN≤δN, λN≤N−1/2δN, δN=o(1/√m), foraN∈ {ra,(j) N, r(j,l)) N, ra,(j),(k) N, r(j,k,l,r ) N, r(j,k,r) N}. S6 Additive SMM example We will verify the assumptions for ASMM example one by one. In this example, gis linear moment condition. Gandgaare defined as G(O;η) =−(Z−ηZ(X))(A−ηA(X)), ga(O;η) = (Z−ηZ(X))(Y−ηY(X)). We first verify the regularity conditions hold for the ASMM example. Lemma S22. For all β∈ B,1/C≤ξmin(Ω(β, η0,N))≤ξmax(Ω(β, η0,N))≤C. 36 Proof. Ω(β, η0) =E[(Z−ηZ,0(X))(Z−ηZ,0(X))T(Y−ηY,0(X)−βA+βηA,0(X))2] =Eh (Z−ηZ,0(X))(Z−ηZ,0(X))TE[(Y−ηY,0(X)−βA+βηA,0(X))2|Z, X]i ≲Eh (Z−ηZ,0(X))(Z−ηZ,0(X))Ti ≤CIm Similarly, Ω(β, η0,N) =E[(Z−ηZ,0(X))(Z−ηZ,0(X))T(Y−ηY,0(X)−βA+βηA,0(X))2] =Eh (Z−ηZ,0(X))(Z−ηZ,0(X))TE[(Y−ηY,0(X)−βA+βηA,0(X))2|Z, X]i ≳Eh (Z−ηZ,0(X))(Z−ηZ,0(X))Ti ≥1/CI m. Therefore, for all β∈ B, 1/C≤ξmin(Ω(β, η0,N))≤ξmax(Ω(β, η0,N))≤C. Lemma S23. (a) There is C > 0with|β−β0| ≤C√ N∥g(β, η0,N)∥/µNfor all β∈ B. (b) There isC >0andcM=Op(1)such that |β−β0| ≤C√ N∥g(β, η0,N)∥/µN+cMfor all β∈ B. Proof. Proof of (a): g(β, η0,N) =g(β, η0,N)−g(β0, η0,N) = (β−β0)G(η0,N). 
Therefore, √ N∥g(β, η0,N)∥/µN=√ N|β−β0|q GT(η0,N)G(η0,N)/µN≳|β−β0,N|, where the second inequality is due to boundedness of eigenvalues of Ω(β, η0) and weak asymptotics assumption. Proof of (b): √ Nbg(β, η0,N)/µN =√ Nbg(β0, η0,N)/µN+µ−1 N√ N(β−β0)1 NNX i=1(Gi−G) +√ N(β−β0)G/µ N. 37 We have already shown that ∥bg(β0, η0,N)∥=O(p m/N ). Similarly we can drive the rate for ∥1 NP i=1(Gi−G)∥, that is bG−G: EP0,N∥bG−G∥2=EP0,N(bG−G)T(bG−G) =1 NEP0,NGT iGi−1 NGTG=1 Ntr(EP0,NGiGT i)−1 NGTG. From the weak asymptotics assumption, we know 1 NGTG=O(µ2 N/N2). From the assumption EP0,N[A2|Z, X]< C 1 Ntr(EP0,NGiGT i) =1 Ntr(EP0,NGiGT i) =1 NtrEP0,Nh (Z−µZ,0,NX)(Z−µZ,0,N(X))T(A−µA,0,N(X))2i =1 NtrEP0,Nh (Z−µZ,0,N(X))(Z−µZ,0,N(X))TEP0,N[(A−µA,0,N(X))2|Z, X]i ≤1 NtrEP0,Nh (Z−µZ,0,N(X))(Z−µZ,0,N(X))Ti ≲m/N. To sum up, we have 1 NX i=1(Gi−G) =O rm N! . Therefore, we have cM:=µ−1 N√ Nsup β∈B bg(β0, η0,N) + (β−β0)1 NNX i=1(Gi−G) =Op(1), where the last inequality is due to boundedness of m2/µN. Therefore, |β−β0| ≤ ∥µ−1 N√ N(β−β0)G∥ ≤µ−1 N√ N∥bg(β, η0,N)∥+cM. Lemma S24. supβ∈BEP0,N[(g(β, η0,N)Tg(β, η0,N))2]/N→0. 38 Proof. sup β∈BEP0,N[(g(β, η0,N)Tg(β, η0,N))2]/N = sup β∈BEP0,N[((Z−ηZ,0,N(X))T(Z−ηZ,0,N(X)))2(Y−ηY,0,N(X)−βA+βηA,0,N(X))4]/N ≤m2/Nsup β∈BEP0,N[(Y−ηY,0,N(X)−βA+βηA,0,N(X))4] ≲m2/N→0, where the first inequality is due to boundedness of ZandX, the second inequality is due to boundedness of ηY,0,N(X),ηA,0,N(X), compactness of Band moment contraints EP0,NY4< C, EP0,NA4< C. Lemma S25. |aT{Ω(β′, η0,N)−Ω(β, η0,N)}b| ≤C∥a∥∥b∥∥β′−β∥. Proof. By matrix Cauchy-Schwarz inequality Tripathi (1999), we have EP0,N[Gigi(β, η0)T]Ω(β, η0,N)−1EP0,N[gi(β, η0,N)GT i]≤EP0,N[GiGT i]. Therefore, ∥E[Gigi(β, η0,N)T]∥ ≤C. Direct calculation yields: |aT{Ω(β′, η0,N)−Ω(β, η0,N)}b| ≤|(β′−β)2aTEP0,N{GiGT i}b|+|(β′−β)aTEP0,N{GigT i(β, η0,N)}b|+|(β′−β)aTEP0,N{gi(β, η0,N)GT i}b| ≤|β′−β|∥a∥∥b∥(|β′−β|∥EP0,N[GiGT i]∥+ 2∥EP0,N[Gigi(β, η0,N)T]∥) ≲|β′−β|∥a∥∥b∥. Lemma S26. There is CandcM=Op(1)such that for all β′, β∈ B. 
\[
\sqrt{N}\,\lVert g(\beta',\eta_{0,N})-g(\beta,\eta_{0,N})\rVert/\mu_N \le C\,|\beta'-\beta|, \qquad
\sqrt{N}\,\lVert \hat g(\beta',\eta_{0,N})-\hat g(\beta,\eta_{0,N})\rVert/\mu_N \le c_M\,|\beta'-\beta|.
\]
Proof. \(\mu_N^{-1}\sqrt{N}\,\lVert g(\beta',\eta_{0,N})-g(\beta,\eta_{0,N})\rVert = \mu_N^{-1}\sqrt{N}\,|\beta'-\beta|\,\lVert G\rVert \lesssim |\beta'-\beta|.\)
Similarly, µ−1 N√ N∥bg(β′, η0,N)−bg(β, η0,N)∥=µ−1 N√ N|β′−β|∥bG∥≲cM|β′−β|, where cM:=µ−1 N√ N∥bG∥=Op(1). Lemma S27. (EP0,N∥gi∥4+EP0,N∥Gi∥4)m/N→0. Proof. The proof is similar to Lemma S24. The only difference is that we use the assumption m3/N→0. Lemma S28. gsatisfies Global Neyman orthogonality. Proof. Letη1= (ηY,1, ηA,1, ηZ,1) be another nuisance parameter. For simplicity of notation, the subscript Nwill be suppressed. Then EP0,N" g(O;β,(1−t)η0,N+tη1)# =EP0,N" Z−[(1−t)ηZ,0,N+tηZ,1]! Y−[(1−t)ηY,0,N+tηY,1]−βA+β[(1−t)ηA,0,N+tηA,1]]!# 40 We can calculate the first order Gateaux derivative ∂ ∂tEP0,N" g(O;β,(1−t)η0,N+tη1)# =EP0,N" ηZ,0,N−ηZ,1! Y−[(1−t)ηY,0,N+tηY,1]−βA+β[(1−t)ηA,0,N+tηA,1]]!# + EP0,N" Z−[(1−t)ηZ,0,N+tηZ,1]! ηY,0,N−ηY,1−β(ηA,0,N−ηA,1)!# Therefore, ∂ ∂tEP0,N" g(O;β,(1−t)η0,N+tη1)# t=0 =EP0,N" ηZ,0,N−ηZ,1! Y−ηY,0,N−β(A−ηA,0,N)!# + EP0,N" Z−ηZ,0,N! ηY,0,N−ηY,1−β(ηA,0,N−ηA,1)!# =0 Therefore, gsatisfies Neyman-orthogonality. Lemma S29. gsatisfies the rate conditions in Assumption 17. Proof. Now we verify the rate conditions. Define the realization set to be the set of all η= (ηZ, ηA, ηY) (where ηZ= (ηZ(1), ..., ηZ(m))) such that ∥ηZ(j)−ηZ(j),0,N∥P0,N,∞≤C, j = 1, ...., m ∥ηY−ηY,0∥P0,N,∞≤C,∥ηA−ηA,0,N∥P0,N,∞≤C max(∥ηA−ηA,0,N∥P0,N,2,∥ηY−ηY,0,N∥P0,N,2)≤δN ∥ηZ(j)−ηZ(j),0,N∥P0,N,2≤δN/m ∥ηZ(j)−ηZ(j),0,N∥P0,N,2×(∥ηA−ηA,0,N∥P0,N,2+∥ηY−ηY,0,N∥P0,N,2)≤δNN−1/2 41 Forr(j,1) N, we have r(j,1) N:= sup η∈Tn EP0,N |G(j)(O;η)−G(j)(O;η0,N)|21/2 ≤sup η∈TN (ηZ(j)(X)−ηZ(j),0,N(X))(A−ηA,0,N(X)) P0,N,2+ sup η∈TN (Z−ηZ(j))(ηA(X)−ηA,0,N(X)) P0,N,2 ≲sup η∈TN ηZ(j)−ηZ(j),0,N P0,N,2+ sup η∈TN ηA−ηA,0,N P0,N,2 ≤δN. The first inequality is due to inequality, the second inequality is due to boundedness of ( Z, X) and EP0,N[(A−ηA,0,N(X))2|Z, X]. 
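Neyman orthogonality of this score (Lemma S28) can also be checked numerically: along a path η_0 + t(η_1 − η_0) the sample moment is exactly quadratic in t, so a central difference recovers its Gateaux derivative, which should be first-order insensitive to the nuisance, unlike a non-residualized score. The sketch below uses an illustrative data-generating process and arbitrary perturbation directions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, beta0 = 200_000, 2, 0.7
X = rng.standard_normal(N)
Z = X[:, None] + rng.standard_normal((N, m))
A = X + rng.standard_normal(N)
Y = beta0 * A + X + rng.standard_normal(N)
etaZ0, etaA0, etaY0 = X[:, None], X, beta0 * X + X   # true conditional means

# Fixed perturbation directions (functions of X, chosen for illustration only):
dZ, dA, dY = 0.5 * X[:, None], np.sin(X), X

def mbar(t):
    """Sample mean of the residualized (orthogonal) score at eta0 + t*delta."""
    rz = Z - (etaZ0 + t * dZ)
    return (rz * (Y - (etaY0 + t * dY)
                  - beta0 * (A - (etaA0 + t * dA)))[:, None]).mean(0)

def mbar_naive(t):
    """Same perturbation of eta_Y in a non-orthogonal score Z*(Y - eta_Y)."""
    return (Z * (Y - (etaY0 + t * dY))[:, None]).mean(0)

h = 1e-3   # mbar is exactly quadratic in t, so central differences are exact
d_orth = np.linalg.norm((mbar(h) - mbar(-h)) / (2 * h))
d_naive = np.linalg.norm((mbar_naive(h) - mbar_naive(-h)) / (2 * h))
assert d_orth < 0.1 * d_naive    # first-order insensitivity to the nuisance
```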
Forra,(j) N, we have r′a,(j) N:= sup η∈TN EP |ga,(j)(O;η)−ga,(j)(O;η0,N)|21/2 ≤sup η∈TN (ηZ(j)−ηZ(j),0,N)(Y−ηY,0,N(X)) P,2+ sup η∈TN (Z−ηZ(j))(ηY−ηY,0,N) P0,N,2 ≲sup η∈TN ηZ(j)−ηZ(j),0,N P0,N,2+ sup η∈TN ηY−ηY,0,N P0,N,2 ≤δN, The first inequality is due to inequality, the second inequality is due to boundedness of ( Z, X) and EP0,N[(Y−ηY,0,N(X))2|Z, X]. The second-order Gateaux derivative is ∂2 ∂t2EP0,N" g(O;β,(1−t)η0,N+tη1)# t=˜t =2EP0,N" ηZ,0,N(X)−ηZ,1(X)! ηY,0,N(X)−ηY,1(X)−β(ηA,0,N(X)−ηA,1(X))!# 42 Therefore, by triangle and Cauchy–Schwarz inequality, EP0,N ∂2 ∂t2EP0,N" g(O;β,(1−t)η0,N+tη1)# t=˜t ≲∥ηZ,0,N−ηZ,1∥P0,N,2 ∥ηY,0,N−ηY,1∥P0,N,2+∥ηA,0,N−ηA,1∥P0,N,2 ≤δNN−1/2. Forra,(j),(k)term, we have r′(j,k) N:= sup η∈TN EP0,N ga,(j)(η)ga,(k)(η)−ga,(j)(η0,N)ga,(k)(η0,N) 2!1/2 ≤sup η∈TN (ga,(j)(η)−ga,(j)(η0,N))ga,(k)(η) P0,N,2+ sup η∈TN (ga,(k)(η)−ga,(k)(η0,N))ga,(j)(η0,N) P0,N,2 ≤sup η∈TN (ga,(j)(η)−ga,(j)(η0,N))(ga,(k)(η)−ga,(k)(η0,N) P0,N,2 + sup η∈TN (ga,(k)(η)−ga,(k)(η0,N))ga,(j)(η0,N) P0,N,2+ sup η∈TN (ga,(j)(η)−ga,(j)(η0,N))ga,(k)(η0,N) P0,N,2. Note that ga,(j)(O;η)−ga,(j)(O;η0,N) = (Z(j)−ηZ(j)(X))(ηY,0,N(X)−ηY(X)) + ( ηZ(j),0,N(X)−ηZ(j)(X))(Y−ηY,0,N(X)) For the first term, sup η∈TN (ga,(j)(η)−ga,(j)(η0,N))(ga,(k)(η)−ga,(k)(η0,N) P0,N,2 ≤sup η∈TN (Z(j)−ηZ(j)(X))(Z(k)−ηZ(k)(X))(ηY,0,N(X)−ηY(X))2 P0,N,2 + sup η∈TN (Z(j)−ηZ(j)(X))(Z(k)−ηZ(k)(X))(Y−ηY,0,N(X))2 P0,N,2 + sup η∈TN (Z(j)−ηZ(j)(X))(Z(k)−ηZ(k)(X))(Y−ηY,0,N(X))(ηY,0,N(X)−ηY(X)) P0,N,2 ≲sup η∈TN ηY,0,N−ηY P0,N,2+ sup η∈TN ηZ(j),0,N−ηZ(j) P0,N,2≤δN, 43 where for the last inequality, I used ∥ηY−ηY,0,N∥∞≤C,EP0,N[(Y−ηY,0,N)2|X]≤C,ZandX are bounded. For the second term, sup η∈TN (ga,(k)(η)−ga,(k)(η0,N))ga,(j)(η0,N) P0,N,2 ≤sup η∈TN (Z(k)−ηZ(k)(X))(Z(j)−ηZ(j),0,N(X))(ηY,0,N(X)−ηY(X))(Y−ηY,0,N(X)) P0,N,2+ + sup η∈TN (ηZ(k)(X)−ηZ(k),0,N(X))(Z(j)−ηZ(j),0,N(X))(Y−ηY,0,N(X))2 P0,N,2 ≤δN Similarly we can verify the remaining conditions in Assumption 17. Lemma S30. Assumption 11 is satisfied. Proof. 
For the first part, we have P0,N[Ω(β, η)−Ω(β, η0,N)] = EP0,N(Z−ηZ(X))(Z−ηZ(X))T(Y−ηY(X)−βA+βηA)2− EP0,N(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(Y−ηY,0,N(X)−βA+βηA,0,N)2 ≤ EP0,Nh (Z−ηZ(X))(Z−ηZ(X))T−(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(Y−ηY(X)−βA+βηA)2i + EP0,N[(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T][(Y−ηY(X)−βA+βηA)2−(Y−ηY,0,N(X)−βA+βηA,0,N)2] 44 Now we firstly consider the first term. EP0,N[(Z−ηZ(X))(Z−ηZ(X))T−(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T](Y−ηY(X)−βA+βηA)2 ≤ EP0,N[(Z−ηZ(X))(Z−ηZ(X))T−(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T](Y−ηY(X)−βA+βηA)2 F =vuutmX i,j=1 EP0,Nh (Z(i)−ηZ(i)(X))(Z(j)−ηZ(j)(X))−(Z(i)−ηZ(i),0,N(X))(Z(j)−ηZ(j),0,N(X))(Y−ηY(X)−βA+βηA)2i!2 ≤vuutmX i,j=1 EP0,Nh (Z(i)−ηZ(i)(X))(Z(j)−ηZ(j)(X))−(Z(i)−ηZ(i),0,N(X))(Z(j)−ηZ(j),0,N(X)) (Y−ηY(X)−βA+βηA)2i!2 Note that EP0,Nh (Z(i)−ηZ(i)(X))(Z(j)−ηZ(j)(X))−(Z(i)−ηZ(i),0,N(X))(Z(j)−ηZ(j),0,N(X)) (Y−ηY(X)−βA+βηA)2i ≤EP0,Nh (Z(i)−ηZ(i)(X))(ηZ(j)(X)−ηZ(j),0,N(X)) + (Z(j)−ηZ(j),0,N(X))(ηZ(i)(X)−ηZ(i),0,N(X)) (Y−ηY(X)−βA+βηA)2i ≲∥ηZ(j)−ηZ(j),0,N∥P0,N,2hq EP0,N([Z(i)−ηZ(i))2(Y−ηY)4] +q EP0,N[(Z(i)−ηZ(i))2(A−ηA)2(Y−ηY)2] +q EP0,N[(Z(i)−ηZ(i))2(A−ηA)4]i +∥ηZ(i)−ηZ(i),0,N∥P0,N,2hq EP0,N([Z(i)−ηZ(j),0,N)2(Y−ηY)4] +q EP0,N[(Z(j)−ηZ(j))2(A−ηA)2(Y−ηY)2] +q EP0,N[(Z(j)−ηZ(j))2(A−ηA)4]i ≲1 mδN We used uniformly boundedness of ( Z, X) in the last inequality, for example, to bound EP([Z(i)− ηZ(i))2(Y−ηY)4], we have EP0,N([Z(i)−ηZ(i))2(Y−ηY)4] ≲EP0,N((Y−ηY)4]≤EP0,N((Y−ηY,0,N+ηY,0,N−ηY)4] ≲EP0,N((Y−ηY,0,N)4+ (ηY,0,N−ηY)4]≤C. Similarly we can show that EP0,N[(Z(j)−ηZ(j))2(A−ηA)4]≤CandEP0,N[(Z(i)−ηZ(i))2(A− ηA)2(Y−ηY)2]≤C. 
Therefore EP0,N[(Z−ηZ(X))(Z−ηZ(X))T−(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T](Y−ηY(X)−βA+βηA)2 ≤δN 45 For the second term, EP0,N[(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T][(Y−ηY(X)−βA+βηA)2−(Y−ηY,0,N(X)−βA+βηA,0,N)2] ≤ EP[(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T][|2Y−ηY(X)−ηY,0,N(X)−2βA+βηA+βηA,0,N||ηY,0,N−ηY+β(ηA−ηA,0,N)|] ≲ EP0,Nh EP0,N[(Z−ηZ,0(X))(Z−ηZ,0,N(X))T|X]|ηY,0−ηY+β(ηA−ηA,0,N)|i ≲EP0,N|ηY,0,N−ηY|+EP0,N|ηA,0,N−ηA| ≤∥ηY,0,N−ηY∥P0,N,2+∥ηA,0,N−ηA∥P0,N,2≤δN=o(1/√m) The last step is to show sup β∈B (Pn−P0,N)M(β, η0) =op(1/√m). ForM= Ω,Ωk,Ωkl,Ωk,l. I will only show the
case when M= Ω. The proof for other cases are similar. Ω(β, η0) =(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(Y−ηY,0,N−βA+βηA,0,N)2 =(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T[(A−ηA,0,N)2β2+ 2β(A−ηA,0,N)(Y−ηY,0,N) + (Y−ηY,0,N)2]. Therefore sup β∈B (Pn−P0,N)Ω(β, η0,N) ≤ (Pn−P0,N)(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(A−ηA,0,N)2 + (Pn−P0,N)(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(A−ηA,0,N)(Y−ηY,0,N) + (Pn−P0,N)(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(Y−ηY,0,N)2 . I will show how to drive the rate for the first term, the second and third term can be handled similarly. LetSi=1 n[(Zi−ηZ,0,N(Xi))(Zi−ηZ,0,N(Xi))T(Ai−ηA,0,N(Xi))2−EP0,N(Zi−ηZ,0,N(Xi))(Z− 46 ηZ,0,N(Xi))T(Ai−ηA,0,N(Xi))2]. Now we calculate the matrix variance parameter for R=Pn i=1Si: ν(R) = NX i=1EP0,N[SiST i] =1 N EP0,N" (Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(A−ηA,0,N)4# − EP0,N" (Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(A−ηA,0,N)2# EP0,N" (Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(A−ηA,0,N)2# ≤1 N EP0,N" (Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(A−ηA,0,N)4# +1 N EP0,N" (Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(A−ηA,0,N)2# EP0,N" (Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(A−ηA,0,N)2# ≤m N EP0,N" (Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(A−ηA,0,N)4# + 1 N EP0,N" (Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T(A−ηA,0,N)2# 2 ≲m N Next we calculate the large deviation parameter. ∥Si∥2=∥SiST i∥ =1 N2 (Zi−ηZ,0,N(Xi))(Zi−ηZ,0,N(Xi))T(Z−ηZ,0,N(Xi))(Zi−ηZ,0,N(Xi))T(Ai−ηA,0,N(Xi))4 −EP0,N[(Zi−ηZ,0,N(X))(Z−ηZ,0,N(X))T(Ai−ηA,0,N(Xi))2](Zi−ηZ,0,N(Xi))(Zi−ηZ,0,N(Xi))T(Ai−ηA,0,N(Xi))2 −(Zi−ηZ,0,N(Xi))(Z−ηZ,0,N(Xi))T(A−ηA,0,N(Xi))2EP0,N[(Zi−ηZ,0,N(Xi))(Zi−ηZ,0,N(Xi))T(Ai−ηA,0,N(Xi))2] +EP0,N[(Zi−ηZ,0,N(X))(Z−ηZ,0,N(Xi))T(Ai−ηA,0,N(Xi))2EP0,N[(Zi−ηZ,0,N(Xi))(Zi−ηZ,0,N(Xi))T(Ai−ηA,0,N(Xi))2 ≲m(Ai−ηA,0,N(Xi))4 N2 (Zi−ηZ,0,N(Xi))(Zi−ηZ,0,N(Xi))T +2(Ai−ηA,0,N(Xi))2 N2 (Zi−ηZ,0,N(Xi))(Zi−ηZ,0,N(Xi))T +1 N2 ≲m2 N2[(Ai−ηA,0,N(Xi))4+ (Ai−ηA,0,N(Xi))2] The large deviation parameter Lsatisfies L2=EP0,Nmax i∥Si∥ ≤m2 N2EP0,Nmax i[(Ai−ηA,0,N(Xi))4+ (Ai−ηA,0,N(Xi))2]≲m2 N2o(1) = o(m2 N2). 
The second last inequality is because we assumed EP0,N|A|4<∞, therefore EP0,Nmax i|A|4=o(1) 47 by Corollary 7.1 of Zhang and Chen (2021). Therefore, ∥R∥=Op r log(m)m N+ log( m)r m2 N2! =oP1√m . The proof for Theorem 5 follows from the previous lemmas. Proof of Theorem 5. So far we have verified all regularity condition except for Assumption 3 (c) and Assumption 10. To verify Assumption 3 (c), we have sup β∈B∥bΩ(β, η0,N)−Ω(β, η0,N)∥F≤sup β∈B√m∥bΩ(β, η0,N)−Ω(β, η0,N)∥=op(1). Similarly, since we have already had supβ∈B∥(Pn−P0,N)A(β, η0)∥=oP0,N(1/√m), we have supβ∈B∥bA(β, η0)− A(β, η0)∥=oP0,N(1) for A= Ω(k)orA= Ωk,l. Therefore, Assumption 10 (a) has been verified. Next, we verify that |aT[Ω(k)(˜β, η0)−Ω(k)(β, η0)]b| ≤C∥a∥∥b∥∥˜β−β∥. |aT[Ω(k)(˜β)−Ω(k)(β)]b| =|aT[EP0,N(gi(˜β, η0,N)Gi(η0,N))−EP0,N(gi(β, η0,N)Gi(η0,N))]b|=|(˜β−β)aTEP0,N(GiGT i)b| ≲|˜β−β|∥a∥∥b∥. Therefore, Assumption 10 is satisfied. According to Theorem 2, we obtain the desired result. S7 Multiplicative SMM example gcan be rewritten in the following way: g(O;β, ηN) =(Z−µZ,N(X))[I{A= 1}Y−ηY,(1),N(X)ηA,(1),N(X)] exp( −β) + (Z−µZ,N(X))[I{A= 0}Y−ηY,(0),N(X)(1−ηA,(1),N)(X)]. 48 Therefore, g(O;β, ηN) is a separable moment function with B= 2,q= 1, g[1](o;η) = (z−ηZ,N(x))[I{a= 1}y−ηY,(1),N(x)ηA,(1),N(x)], g[2](o;η) = (z−ηZ,N(x))[I{a= 0}y−ηY,(0),N(x)(1−ηA,(1),N)(x)], h[1](β) = exp( β), h[2](β) = 1 . Verifying conditions for MSMM is very similar to the ASMM case. Verifying assumption 2 and assumption 3 relies on the following lemma: Lemma S31. Suppose β1, β2∈ B ⊂ R, where Bis a compact set. Then there exists a constant C1, C2such that C1|β1−β2| ≤ |exp(−β1)−exp(−β2)| ≤C2|β1−β2|. Proof. Without loss of generality, we assume β1≤β2. Since β1, β2belongs to some compact set, we know there exist c1, c2such that c1≤β1≤β2≤c2. By mean value theorem, we know there exists β3such that β2∈[β1, β2] and|exp(−β1)−exp(−β2)|= exp( −β3)|β1−β2|. Let C1= exp( −c2) and C2= exp( −c1), we then have C1|β1−β2| ≤ |exp(−β1)−exp(−β2)| ≤C2|β1−β2|. 
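The two-sided Lipschitz bound in Lemma S31 is easy to confirm numerically. The sketch below checks C1|β1−β2| ≤ |exp(−β1)−exp(−β2)| ≤ C2|β1−β2| with C1 = exp(−c2) and C2 = exp(−c1) on random draws from an illustrative compact set [c1, c2].

```python
import numpy as np

c1, c2 = -1.0, 2.0                       # compact parameter set B = [c1, c2]
C1, C2 = np.exp(-c2), np.exp(-c1)        # constants from the mean value theorem

rng = np.random.default_rng(4)
b1 = rng.uniform(c1, c2, 100_000)
b2 = rng.uniform(c1, c2, 100_000)
gap = np.abs(np.exp(-b1) - np.exp(-b2))
diff = np.abs(b1 - b2)

# Both sides of the Lemma S31 sandwich (tolerance guards b1 == b2 ties):
assert np.all(C1 * diff <= gap + 1e-12)
assert np.all(gap <= C2 * diff + 1e-12)
```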
Lemma S32. For all β∈ B,1/C≤ξmin(Ω(β, η0,N))≤ξmax(Ω(β, η0,N))≤C. Proof. Ω(β, η0,N) =EP0,N[(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T ((AY−ηY,(1),0,N(X)ηA,(1),0,N(X)) exp( −β) + (1 −A)Y−ηY,(0),0,N(X)ηA,(0),0,N(X))2] =EP0,N[(Z−ηZ,0,N(X))(Z−ηZ,0,N(X))T EP0,N[((AY−ηY,(1),0,N(X)ηA,(1),0,N(X)) exp( −β) + (1 −A)Y−ηY,(0),0,N(X)ηA,(0),0,N(X))2|Z, X]] Since with probability one, 1/C≤EP0,N[((AY−ηY,(1),0,N(X)ηA,(1),0,N(X)) exp( −β) + (1 −A)Y−ηY,(0),0,N(X)ηA,(0),0,N(X))2|Z, X]≤C forβ∈ B. We have for all β∈ B, 1/C≤ξmin(Ω(β, η0,N))≤ξmax(Ω(β, η0,N))≤C. 49 Lemma S33. (a) There is C > 0with|β−β0| ≤C√ N∥g(β, η0,N)∥/µNfor all β∈ B. (b) There isC >0andcM=Op(1)such that |β−β0| ≤C√ N∥g(β, η0,N)∥/µN+cMfor all β∈ B. Proof. Proof for part (a): G=EP0,Nh (Z−µZ,0,N(X))(I{A= 1}Y−ηY,(1),0,N(X)ηA,(1),0,N(X)) exp( β0)i =EP0,N[g[1](O;η0,N)] exp( β0). Combing with the fact
that exp( β) is bounded and β∈ B, 1/C≤ξmin(Ω(β, η0,N))≤ξmax(Ω(β, η0,N))≤ Cfor all β∈ B, we have c1µ2 N/N≤g[1](O;η0)Tg[1](O;η0)≤c2µ2 N/N for some constants c1andc2. Since g(β, η0,N) =g(β, η0,N)−g(β0, η0,N) =g[1](η0,N)(exp( β)−exp(β0)), similar to the ASMM case, we have √ N∥g(β, η0,N)∥/µN=√ N/µ N|exp(β)−exp(β0)|q gT(η0,N)g(η0,N)≳|β−β0|. Proof for part (b). The proof for part (b) is similar to that of ASMM, thus omitted. Lemma S34. g[1]andg[2]satisfy Neyman orthogonality. Hence gsatisfies Neyman orthogonality. Proof. Letη1= (ηY,(0),1, ηY,(1),1, ηA,(1),1, ηZ,1). Then for g[1](η), EP0,Nh g[1](O;β,(1−t)η0,N+tη1)i =EP0,N (Z−(1−t)ηZ,0,N(X) +tηZ,1(X)) ×(I{A= 1}Y−((1−t)ηY,(1),0,N(X) +tηY,(1),1(X))((1−t)ηA,(1),0,N(X) +tηA,(1),1(X))) . 50 The first order Gatuex derivative is ∂ ∂tEP0,Nh g[1](O; (1−t)η0,N+tη1)i =EP0h (ηZ,0,N(X)−ηZ,1(X))(I{A= 1}Y−((1−t)ηY,(1),0,N(X) +tηY,(1),1(X)) ×((1−t)ηA,(1),0,N(X) +tηA,(1),1(X)))i + (Z−(1−t)ηZ,0,N(X) +tηZ,1(X))(ηY,(1),0,N(X)−ηY,(1),1(X))((1−t)ηA,(1),0,N(X) +tηA,(1),1(X)) + (Z−(1−t)ηZ,0,N(X) +tµZ,1(X))(ηA,(1),0,N(X)−ηA,(1),1(X))((1−t)ηY,(1),0,N(X) +tηY,(1),1(X)). Therefore, ∂ ∂tEP0,Nh g[1](O;β,(1−t)η0,N+tη1)i t=0 =EP0,Nh (µZ,0(X)−µZ,1(X))(I{A= 1}Y−ηY,(1),0,NηA,(1),0,N)i +EP0,Nh (Z−µZ,0,N(X))(ηY,(1),0,N(X)−ηY,(1),1(X))ηA,(1),0,N(X)i +EP0,Nh (Z−µZ,0,N(X))(ηA,(1),0,N(X)−ηA,(1),1(X))ηY,(1),0,N(X)i =0. Therefore, g[1](η) satisfies Neyman orthogonality. We can show g[2](η) is Neyman orthogonal in a similar way, and hence g(β, η) is Neyman orthogonal. The next result is related to the statistical rate for the second order Gateux drivative of g. Lemma S35. Forj= 1, ..., m , λ(j) N≤N−1/2δN. 51 Proof. 
We calculate the second-order Gateux derivative for g[1],(j)(η): ∂2 ∂2EP0,N[g[1],(j)((1−t)η0+tη1)] =3EP0,Nh (ηZ(j),0,N(X)−ηZ(j),1(X))(ηY,(1),0,N(X)−ηY,(1),1(X))((1−t)ηA,(1),0,N+tηA,(1),1)i + 3EP0,Nh (ηZ(j),0,N(X)−ηZ(j),1(X))(ηA,(1),0,N(X)−ηA,(1),1(X))((1−t)ηY,(1),0,N+tηY,(1),1)i + 2EP0,Nh (Z(j)−(1−t)ηZ(j),0,N(X)−tηZ(j),1(X))(ηA,(1),0,N(X)−ηA,(1),1(X))(ηY,(1),1(X)−ηY,(1),0,N(X))i . Therefore, sup t∈(0,1),η∈TN ∂2 ∂t2EP0,N[g[1],(j)((1−t)η0,N+tη1)] ≲EP0,Nh |(ηZ(j),0,N(X)−ηZ(j),1(X))(ηY,(1),0,N(X)−ηY,(1),1(X))|i +EP0,Nh |(ηZ(j),0,N(X)−ηZ(j),1(X))(ηA,(1),0,N(X)−ηA,(1),1(X))|i +EP0,Nh |(ηA,(1),0,N(X)−ηA,(1)(X))(ηY,(1)(X)−ηY,(1),0,N(X))|i ≤∥ηZ(j),0,N−ηZ(j),1∥P0,N,2∥ηY,(1),0,N−ηY,(1),1∥P0,N,2+∥ηZ(j),0,N−ηZ(j),1∥P0,N,2∥ηA,(1),0,N−ηA,(1),1∥P0,N,2 +∥ηY,(1),0,N−ηY,(1)∥P0,N,2∥ηA,(1),0,N−ηA,(1),1∥P0,N,2 ≤N−1/2δN. Similarly, we can prove that sup t∈(0,1),η1∈TN ∂2 ∂t2EP0,N[g[2],(j)((1−t)η0,N+tη1)] ≤N−1/2δN. Therefore λ(j) N:= sup β∈B,t∈(0,1),η∈TN ∂2 ∂t2EP0,Ng(j)(O;β, η0,N+t(η−η0,N)) ≤ sup t∈(0,1),η∈TN ∂2 ∂t2E[g[1],(j)((1−t)η0,N+tη)] sup β∈Bexp(β) + sup t∈(0,1),η∈TN ∂2 ∂t2E[g[2],(j)((1−t)η0,N+tη)] ≤N−1/2δN. 52 Lemma S36. supβ∈BEP0,N[(g(β, η0,N)Tg(β, η0,N))2]/N→0. Proof. The proof is similar to the proof of S24. Lemma S37. |aT{Ω(β′, η0,N)−Ω(β, η0,N)}b| ≤C∥a∥∥b∥∥β′−β∥. Proof. By matrix Cauchy-Schwarz inequality Tripathi (1999), we have EP0,N[Gi(β, η0,N)gi(β, η0,N)T]Ω(β, η0,N)−1EP0,N[gi(β, η0,N)G(β, η0,N)T i]≤EP0,N[Gi(β, η0,N)Gi(β, η0,N)T]. Since∥Gi(β, η0,N)∥ ≤C, we have ∥EP0,N[Gi(β, η0,N)gi(β, η0,N)T]∥ ≤Cand∥EP0,N[g[1] i(β, η0,N)gi(β, η0,N)T]∥ ≤ C. Since g(β′, η0,N) =g(β, η0,N) +g[1](η0,N)(exp( β′)−exp(β)), we have |aT{Ω(β′, η0,N)−Ω(β, η0,N)}b| ≤|(exp( β′)−exp(β))2aTEP0,N{g[1] i(η0,N)g[1] i(η0,N)T}b| +|(exp( β′)−exp(β))aTEP0,N{g[1] i(η0,N)gi(β, η0)T}b| +|(exp( β′)−exp(β))aTEP0,N{gi(β, η0)g[1] i(η0,N)T i}b| ≲|β′−β|∥a∥∥b∥. Lemma S38. There is CandcM=Op(1)such that for all β′, β∈ B. √ N∥g(β′, η0,N)−g(β, η0,N)∥/µN≤C|β′−β|, √ N∥bg(β′, η0,N)−bg(β0, η0,N)∥/µN≤C|β′−β|. Proof. 
the Proof is similar to that of Lemma S26, thus omitted. Lemma S39. (EP0,N∥gi∥4+EP0,N∥Gi∥4)m/N→0. Proof. The Proof is similar to that of Lemma S27, thus omitted. 53 Lemma S40. EP0,N[GiGT i] ≤C, EP0,N" ∂Gi(β0, η0,N) ∂β∂Gi(β0, η0,N)T ∂β# ≤C, √ N µN EP0,N∂Gi(β0) ∂β ≤C. Proof. The proof for the first and second inequality is similar to the proof of Lemma S32. To show the third one, we just need to note that EP0,Nh ∂Gi(β0) ∂βi =G, then the result follows by Assumption 14 (a). Lemma S41. If¯βp− →β0, then for k= 1, ..., p , √ N/µ Nh bG(¯β, η0,N)−bG(β0, η0,N)i =op(1), √ N/µ N" ∂bG(¯β, η0,N) ∂β(k)−∂bG(β, η0,N) ∂β(k)# =op(1). Proof. ∥√ N[bG(¯β, η0)−bG(β0, η0)]/µN∥ =√ N µN|exp(β)−exp(β0)|q bg[1],T(η0,N)bg[1],T(η0) =√ N µN|exp(β)−exp(β0)|exp(−β0)q bGT(η0,N)bGT(η0). Similar to the ASMM case, we can prove that ∥bGT(η0,N)bGT(η0,N)∥=OPµN√ N . Since|exp(β)−exp(β0)|=op(1), we have ∥√ N[bG(¯β, η0,N)−bG(β0, η0,N)]/µN∥=op(1). The proof for √ N" ∂bG(¯β, η0,N) ∂β(k)−∂bG(β0,N, η0,N) ∂β(k)# /µN =op(1) 54 is similar, thus omitted. The next lemma says the convergence rate conditions
https://arxiv.org/abs/2505.07295v1
stated in Assumption 12 hold.

Lemma S42. Assumption 12 holds for the MSMM example.

Proof. We have already shown the Neyman orthogonality condition and that $\lambda_N \le N^{-1/2}\delta_N$. It remains to show that $z_N \le \delta_N$ for $a_N = r^{(b_1),(j,q)}_N, r^{(b_1,b_2),(j,k,l,r)}_N$. We show $r^{(1),(1,1)}_N \le \delta_N$ as an example; the other cases are similar and omitted. Pick $\eta_1$ from the realization set $T_N$. For $r^{(1),(1,1)}_N$, we have
\[
r^{(1),(1,1)} = \big\|(Z^{(1)}-\eta_{Z^{(1)},1})(AY-\eta_{Y,(1),1}(X)\eta_{A,(1),1}(X)) - (Z^{(1)}-\eta_{Z^{(1)},0,N})(AY-\eta_{Y,(1),0,N}(X)\eta_{A,(1),0,N}(X))\big\|_{P_{0,N},2}
\]
\[
\le \|AY(\eta_{Z^{(1)},0,N}-\eta_{Z^{(1)},1})\|_{P_{0,N},2}
+ \big\|Z^{(1)}\big(\eta_{Y,(1),1}(X)\eta_{A,(1),1}(X)-\eta_{Y,(1),0,N}(X)\eta_{A,(1),0,N}(X)\big)\big\|_{P_{0,N},2}
+ \big\|\eta_{Z^{(1)},1}(X)\eta_{Y,(1),1}(X)\eta_{A,(1),1}(X)-\eta_{Z^{(1)},0,N}(X)\eta_{Y,(1),0,N}(X)\eta_{A,(1),0,N}(X)\big\|_{P_{0,N},2}.
\]
For the first term, we have
\[
\|AY(\eta_{Z^{(1)},0,N}-\eta_{Z^{(1)},1})\|_{P_{0,N},2}
\le \sqrt{E_{P_{0,N}}\big[(\eta_{Z^{(1)},0,N}(X)-\eta_{Z^{(1)},1}(X))^2\,E_{P_{0,N}}[Y^2\mid A,X]\,E_{P_{0,N}}[A^2\mid X]\big]}
\le \|\eta_{Z^{(1)},0,N}-\eta_{Z^{(1)},1}\|_{P_{0,N},2}.
\]
For the second term, we have
\[
\big\|Z^{(1)}\big(\eta_{Y,(1),1}(X)\eta_{A,(1),1}(X)-\eta_{Y,(1),0,N}(X)\eta_{A,(1),0,N}(X)\big)\big\|_{P_{0,N},2}
\le \big\|\eta_{Y,(1),1}\eta_{A,(1),1}-\eta_{Y,(1),0,N}\eta_{A,(1),0,N}\big\|_{P_{0,N},2}
\]
\[
\le \big\|(\eta_{Y,(1),1}-\eta_{Y,(1),0,N})\eta_{A,(1),1}\big\|_{P_{0,N},2} + \big\|(\eta_{A,(1),1}-\eta_{A,(1),0,N})\eta_{Y,(1),0,N}\big\|_{P_{0,N},2}
\lesssim \delta_N.
\]
Similarly, we can show that
\[
\big\|\eta_{Z^{(1)},1}(X)\eta_{Y,(1),1}(X)\eta_{A,(1),1}(X)-\eta_{Z^{(1)},0,N}(X)\eta_{Y,(1),0,N}(X)\eta_{A,(1),0,N}(X)\big\|_{P_{0,N},2} \le C\delta_N.
\]
Therefore $r^{(1),(1,1)} \le C\delta_N$.

Lemma S43. Assumption 10 holds for the MSMM example.

Proof. The proofs are similar to those of Lemmas S32 and S37.

Lemma S44. Assumption 11 holds for the MSMM example.

Proof. The proof is similar to that of the ASMM case and is thus omitted.

The proof of Theorem 6 follows directly from the lemmas in Supplemental Material S7 and Theorem 2.

S8 Proximal causal inference example

S8.1 Proof of Theorem 7

Proof. Note that
\[
E_{P_{0,N}}[Y-\beta_{a,0}A-\beta_{w,0}W \mid A,Z,U,X]
= E_{P_{0,N}}[Y(0)-\beta_{w,0}W \mid A,Z,U,X]
= E_{P_{0,N}}[Y(0)-\beta_{w,0}W \mid U,X]
= E_{P_{0,N}}[Y(0)-\beta_{w,0}W \mid X].
\]
On the other hand,
\[
E_{P_{0,N}}[Y-\beta_{a,0}A-\beta_{w,0}W \mid X]
= E_{P_{0,N}}\big[E_{P_{0,N}}[Y-\beta_{a,0}A-\beta_{w,0}W \mid A,Z,U,X] \mid X\big]
= E_{P_{0,N}}[Y(0)-\beta_{w,0}W \mid X].
\]
Therefore, we have
\[
E_{P_{0,N}}[Y-\beta_{a,0}A-\beta_{w,0}W \mid A,Z,U,X] = E_{P_{0,N}}[Y-\beta_{a,0}A-\beta_{w,0}W \mid X].
\]
Hence, for any function hof (A, Z, U ), we have EP0,Nh(A, Z, U )(Y−βa,0A−βw,0W−EP0,N[Y−βa,0A−βw,0W|X]) = 0 , 56 specifically, EP0,N(Z−EP0,N[Z|X])(Y−βa,0A−βw,0W−EP0,N[Y−βa,0A−βw,0W|X]) = 0 EP0,N(A−EP0,N[A|X])(Y−βa,0A−βw,0W−EP0,N[Y−βa,0A−βw,0W|X]) = 0 . Lemma S45. The moment function (3)satisfies global Neyman orthogonality. Proof. Let ˜g(O;β, η) = (Z−ηZ)(Y−ηY−βA(A−ηA)−βW(W−ηW)). The first-order Gateux derivative of ˜ gwith direction η1is ∂ ∂tEP0,N ˜g(O;β,(1−t)η0,N+tη) =EP0,Nh (ηZ,0,N(X)−ηZ,1(X)) ×[Y−(1−t)ηY,0,N(X)−tηY,1(X) −βA(A−(1−t)ηA,0,N(X)−tηA,1(X))−βW(W−(1−t)ηW,0,N(X)−tηW,1(X))] + (Z−(1−t)ηZ,0,N(X)−tηZ,1(X)) ×(ηY,0,N(X)−ηY,1(X)−βA(ηA,0,N(X)−ηA,1(X))−βW,1(ηW,0,N(X)−ηW,1(X)))i . Therefore, ∂ ∂tEP0,N ˜g(O;β,(1−t)η0,N+tη1) t=0= 0. The result still holds if we replace Zin ˜gwith A. Therefore, ∂ ∂tEP0,N g(O;β,(1−t)η0,N+tη1) t=0= 0. 57 Lemma S46. Under Assumption 16, we have for j= 1, ..., m + 1 λ(j) N≤N−1/2δN. Proof. Forj≥2, we have ∂2 ∂t2EP0,N g(O;β,(1−t)η0,N+tη) =2EP0,Nh (ηZ(j),0,N(X)−ηZ(j)(X))(ηY,0,N(X)−ηY(X)−βA(ηA,0,N−ηA)−βW(ηW,0,N−ηW))i Therefore, λ(j) N≤sup η∈TN,β∈BEP0,N|(ηZ(j),0,N(X)−ηZ(j)(X))(ηY,0,N(X)−ηY(X))| + sup β∈B|βA|sup η∈TN,β∈BEP0,N|(ηZ(j),0,N(X)−ηZ(j)(X))(ηA,0,N(X)−ηA(X))| + sup β∈B|βW|sup η∈TN,β∈BEP0,N|(ηZ(j),0,N(X)−ηZ(j),0,N(X))(ηW,0,N(X)−ηW(X))| ≤sup η∈TN∥ηZ(i),0,N−ηZ(i)∥P0,N,2(∥ηY,0,N−ηY∥P0,N,2+∥ηA,0,N−ηA∥P0,N,2+∥ηW,0,N−ηW∥P0,N,2) ≤N−1/2δN. The case when j= 1 is similar, thus omitted. Next, we will verify the regularity conditions for the proximal causal inference example. This example is similar to the ASMM example because the moment condition of both examples are linear in β. We will verify Assumption 2,3. Other assumptions can be verified similarly. Lemma S47. For all β∈ B,1/C≤ξmin(Ω(β, η0,N))≤ξmax(Ω(β, η0,N))≤C. 58 Proof. 
Ω(β, η0,N) =EP0,N[(˜Z−η˜Z,0,N(X))(˜Z−η˜Z,0,N(X))T ×(Y−ηY,0,N(X)−βaA+βaηA,0,N(X)−βwW+ηW,0,N(X))2] =EP0,Nh (˜Z−η˜Z,0(X))(˜Z−η˜Z,0(X))T ×EP0,N[(Y−ηY,0,N(X)−βaA+βaηA,0,N−βw+βwηW,0,N(X))2|Z, X, A ]i Since EP0,N[(Y−ηY,0,N(X)−βaA+βaηA,0,N−βw+βwηW,0,N(X))2|Z, X, A ] is bounded by 1 /C andCandEP0,N[(˜Z−η˜Z,0,N(X))(˜Z−η˜Z,0,N(X))T] is bounded by CIm+1and 1 /CI m+1. Lemma S48. (a) There is a constant C > 0with∥δ(β)∥ ≤C√ N∥g(β, η0,N)∥/µNfor all β∈B. (b) there is C > 0andcM=Op(1)such that ∥δ(β)∥ ≤C√ N∥bg(β, η0,N)∥/µN+cMfor all β∈B, where cMisOp(1). Proof. For (a), we have g(β, η0,N) =g(β, η0,N)−g(β0, η0,N) =G(β−β0). Here Gis a ( m+ 1)×2 matrix. Therefore, √ N∥g(β, η0,N)∥/µN =√ Nq (β−β0)TGT(η0,N)G(η0,N)(β−β0)/µN =q δ(β)TNS−1 NGTGS−1,T Nδ(β) ≳q δ(β)TNS−1 NGTΩ−1GS−1,T Nδ(β)≳∥δ(β)∥. 59 To prove part (b), we first have the following expansion: √ Nbg(β, η0,N)/µN =√ Nbg(β0, η0,N)/µN+µ−1 N√ N1 NNX i=1(Gi−G)(β−β0) +√ NG(β−β0)/µN =√ Nbg(β0, η0,N)/µN+µ−1 N√ N1 NNX i=1(Gi−G)(β−β0) +√ NGST,−1 Nδ(β).
Therefore, ∥δ(β)∥ ≲∥√ NGST,−1 Nδ(β)∥ ≤√ N∥bg(β, η0,N)∥/µN+ √ Nbg(β0, η0,N)/µN+µ−1 N√ N1 NNX i=1(Gi−G)(β−β0) | {z } cM. Since we have already shown that ∥bg(β0, η0)∥=Op(p m/N ), therefore√ N∥bg(β, η0,N)∥/µN= Op(√m/µ N) =O(1). It remains to show that sup β∈B µ−1 N√ N1 NNX i=1(Gi−G)(β−β0) =Op(1). Since EP0,N∥(bG−G)(β−β0)∥2 =1 NtrEP0,NGi(β−β0)(β−β0)TGT i−1 N(β−β0)GTG(β−β0). Similar to the ASMM example, we have 1 NtrEP0,NGi(β−β0)(β−β0)TGT i=O(m/N ) 1 N(β−β0)GTG(β−β0) =O(µ2 N/N2). Therefore, cM=Op(1). 60 Lemma S49. supβ∈BEP0,N[(g(β, η0,N)Tg(β, η0,N))2]/N→0. Proof. Omitted. Lemma S50. |aT{Ω(β′, η0,N)−Ω(β, η0,N)}b| ≤C∥a∥∥b∥∥β′−β∥. Proof. gi(β, η0,N) =Giβ+ (˜Zi−η˜Z,0,N(Xi))(Yi−ηY,0,N(Xi)) Therefore, gi(β, η0,N)gi(β, η0,N)T =GiββTGT i+ 2(˜Zi−η˜Z,0,N(Xi))(Yi−ηY,0,N(Xi))βTGT i + (˜Zi−η˜Z,0,N(Xi))(˜Zi−η˜Z,0,N(Xi))T(Yi−ηY,0,N(Xi))2 We start from bounding |aTEP0,N[GT iβ′β′TGT i]bT−aTEP0,N[GT iββTGT i]bT|: |aTEP0,N[GT iβ′β′TGT i]bT−aTEP0,N[GT iββTGT i]bT| = aTEP0,N (˜Z−η˜Z,0,N) (A−ηA,0,N(X)) (W−ηW,0,N(X)) T β′β′T (A−ηA,0,N(X)) (W−ηW,0,N(X)) (˜Z−η˜Z,0,N)T bT −aTEP0,N (˜Z−η˜Z,0,N) (A−ηA,0,N(X)) (W−ηW,0,N(X)) T ββT (A−ηA,0,N(X)) (W−ηW,0,N(X)) (˜Z−η˜Z,0,N)T bT = aTEP0,N (˜Z−η˜Z,0,N)EP0,N (A−ηA,0,N(X)) (W−ηW,0,N(X)) T β′β′T (A−ηA,0,N(X)) (W−ηW,0,N(X)) A, Z, X (˜Z−η˜Z,0,N)T bT −aTEP0,N (˜Z−η˜Z,0,N(X))EP0,N (A−ηA,0,N(X)) (W−ηW,0,N(X)) T ββT (A−ηA,0,N(X)) (W−ηW,0,N(X)) A, Z, X (˜Z−η˜Z,0,N(X))T bT ≲EP0,N[|aT(˜Z−η˜Z0,N(X))||bT(˜Z−η˜Z0,N(X))|]∥β′−β∥ ≤∥a∥∥b∥∥β′−β∥ where the second last inequality is due to boundedness of ηA,0,N,ηW,0,NEP0,N[Y4|A, X, Z ] and 61 EP0,N[W4|A, X, Z ]. We can use the similar way to bound |aTEP0,N[(˜Zi−η˜Z,0,N(Xi))(Yi−ηY,0,N(Xi))β′TGT i]bT−aTEP0,N[(˜Zi−η˜Z,0,N(Xi))(Yi−ηY,0,N(Xi))βTGT i]bT|. The details are omitted. Lemma S51. There is CandcM=Op(1)such that for all β′, β∈ B. √ N∥g(β′, η0,N)−g(β, η0,N)∥/µN≤C|δ(β′)−δ(β)|, √ N∥bg(β′, η0,N)−bg(β0, η0,N)∥/µN≤cM|δ(β′)−δ(β)|. Proof. 
\[
\mu_N^{-1}\sqrt{N}\,\|g(\beta',\eta_{0,N})-g(\beta,\eta_{0,N})\|
=\mu_N^{-1}\sqrt{N}\,\|G(\beta'-\beta)\|
=\mu_N^{-1}\sqrt{N}\sqrt{(\beta'-\beta)^{T}S_N S_N^{-1}G^{T}G S_N^{-1,T}S_N^{T}(\beta'-\beta)}
\lesssim \|\delta(\beta')-\delta(\beta)\|.
\]
Similarly,
\[
\mu_N^{-1}\sqrt{N}\,\|\hat g(\beta',\eta_{0,N})-\hat g(\beta,\eta_{0,N})\|
=\mu_N^{-1}\sqrt{N}\,\|\hat G(\eta_{0,N})(\beta'-\beta)\|
\le \sqrt{N}\,\|\hat G(\eta_{0,N})\|\,\|S_N^{-1}\|\,\|\delta(\beta')-\delta(\beta)\|,
\]
where $c_M=\sqrt{N}\,\|\hat G(\eta_{0,N})\|\,\|S_N^{-1}\|=O_p(1)$.

S9 Additional simulation details

Figure S.1 and Figure S.2 show the sampling distributions of the estimates for GMM and CUE across 1000 simulations.

[Figure S.1: Sampling distribution of the CUE and GMM estimators under the ASMM and MSMM settings, for n = 1000, 2000, 5000, 10000, 20000 and m = 22, 26, 33, 40, 47. The dashed red lines represent the ground truth in each setting.]

[Figure S.2: Sampling distribution of the CUE and 2SLS estimators of βa and βw under the proximal causal inference settings, for n = 1000, 2000, 5000, 10000, 20000 and m = 22, 26, 33, 40, 47. The dashed red lines represent the ground truth in each setting.]
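The GMM and CUE estimators compared in these figures can be illustrated on a toy overidentified linear IV model. This is a minimal, self-contained sketch under assumed data-generating choices (the sample size, instrument count, instrument strengths, and true parameter `beta0` are all illustrative, not the paper's simulation design):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n, m = 1000, 5          # illustrative sample size and number of instruments
beta0 = 3.0             # illustrative true parameter

# Simple linear IV data: Z are instruments, D is endogenous, Y is the outcome.
Z = rng.normal(size=(n, m))
u = rng.normal(size=n)                              # confounder in both D and Y
D = Z @ np.full(m, 0.4) + 0.5 * u + rng.normal(size=n)
Y = beta0 * D + u

def gbar(beta):
    """Sample mean of the moment vector g_i(beta) = Z_i (Y_i - beta D_i)."""
    return (Z * (Y - beta * D)[:, None]).mean(axis=0)

def omega(beta):
    """Sample second-moment matrix of g_i(beta), the weighting target."""
    g = Z * (Y - beta * D)[:, None]
    return (g.T @ g) / n

# Two-step GMM: identity weight first, then the estimated optimal weight.
b1 = minimize_scalar(lambda b: gbar(b) @ gbar(b),
                     bounds=(0, 6), method="bounded").x
W = np.linalg.inv(omega(b1))
gmm = minimize_scalar(lambda b: gbar(b) @ W @ gbar(b),
                      bounds=(0, 6), method="bounded").x

# CUE: the weight matrix is re-evaluated at every candidate beta.
cue = minimize_scalar(lambda b: gbar(b) @ np.linalg.solve(omega(b), gbar(b)),
                      bounds=(0, 6), method="bounded").x

print(round(gmm, 3), round(cue, 3))   # both should land near beta0
```

The only difference between the two objectives is whether the weight matrix is held fixed at a preliminary estimate (two-step GMM) or continuously updated (CUE), which is what drives the differing sampling distributions reported in the figures.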
arXiv:2505.07383v1 [math.ST] 12 May 2025

Some insights into depth estimators for location and scatter in the multivariate setting

Jorge G. Adrover∗  Marcelo Ruiz†

Abstract

The concept of statistical depth has received considerable attention as a way to extend the notions of the median and quantiles to other statistical models. These procedures aim to formalize the idea of identifying deeply embedded fits to a model that are less influenced by contamination. Since contamination introduces bias in estimators, it is well known in the location model that the median minimizes the worst-case performance, in terms of maximum bias, among all equivariant estimators. In the multivariate case, Tukey's median was a groundbreaking concept for location estimation, and its counterpart for scatter matrices has recently attracted considerable interest. The breakdown point and the maximum asymptotic bias are key concepts used to summarize an estimator's behavior under contamination. For the location and scale model, we consider two closely related depth formulations, whose deepest estimators display significantly different behavior in terms of breakdown point. In the multivariate setting, we analyze recently introduced concentration inequalities that provide a unified framework for studying both the statistical convergence rate and robustness of Tukey's median and depth-based scatter matrices. We observe that slight variations in these inequalities allow us to visualize the maximum bias behavior of the deepest estimators. Since the maximum bias for depth-based scatter matrices had not previously been derived, we explicitly calculate both the breakdown point and the maximum bias curve for the deepest scatter matrices.

Keywords: Multivariate scatter depth, Breakdown Point, Minimax Bias Function

1 Introduction

The concept of statistical depth has received considerable attention as a way to extend the notions of the median and quantiles to more general statistical models.
Originally introduced by Tukey (1975), depth was defined as the minimum proportion of data points lying on either side of a point, and this idea was later extended to the bivariate case. Donoho and Gasko (1992) formalized the general notion of the depth of a point zin ap-dimensional space with respect to a probability measure P. The seminal paper by Huber (1964) highlighted the median as an estimator with optimal worst-case behavior under contamination (i.e., its maximum bias), making it the most robust among the class of equivariant location estimators. A key feature of the median is that it is flanked on each side by half of the data mass, helping to shield it from the effects of outliers. This robustness—being well-surrounded by data—is echoed in Tukey’s concept of the multivariate median. In this case, a point in Euclidean space is considered a median if it maximizes the minimum mass contained in any closed half-space whose boundary includes the point. This pursuit of a point that is ”deeply inserted” into the data, to enhance robustness in univariate and multivariate location models, has been extended to other statistical frameworks. In the regression setting, Rousseeuw and Hubert (1999) defined regression depth, which measures how deeply a linear fit is embedded within the data. This is evaluated by the smallest amount of data mass in the two opposing ∗Facultad de
Matem´ atica, Astronom´ ıa, F´ ısica y Computaci´ on - Universidad Nacional de C´ ordoba, CIEM and CONICET, Argentina. jorge.adrover@unc.edu.ar †Departamento de Matem´ atica, Facultad de Ciencias Exactas, F´ ısico-Qu´ ımicas y Naturales, Universidad Nacional de R´ ıo Cuarto, Argentina. mruiz@exa.unrc.edu.ar 1 wedges formed by the fit plane and vertical planes orthogonal to the explanatory subspace. Mizera and M¨ uller (2004) further explored depth concepts in the context of location-scale models. Chen and Tyler (2002) investigated various properties of Tukey’s median, including its influence function and maximum contamination bias. Chen et al. (2018) extended the idea of depth to covariance matrix estimation by introducing the concept of matrix depth. Their estimator achieves the optimal rate under Huber’s ε-contamination model for estimating covariance/scatter matrices with various structural assumptions, such as bandedness and sparsity. Paindaveine and Van Bever (2018) also developed halfspace depth concepts for scatter, concentration, and shape matrices. While their concept of scatter depth is similar to that in Chen et al. (2018) and Zhang (2002), rather than focusing on the deepest scatter matrix, they studied the properties of the depth function and its associated depth regions. Nagy et al. (2019) discussed the relationships between halfspace depth and problems in affine and convex geometry, offering an extensive overview of various depth notions. Section 2 presents two proposals for joint location and scale depth estimation in the univariate location- scale model, comparing their differing breakdown behaviors despite conceptual similarity. Section 3 discusses the importance of maximum asymptotic bias in understanding how estimator behavior evolves as sample size nbecomes large. By examining the concentration inequalities in Chen et al. 
(2018), we show that slight variations in these inequalities allow us to visualize the maximum bias behavior of the deepest estimators. In particular, Section 4 derives the maximum bias function for the deepest scatter matrix estimator introduced by Chen et al. (2018) in the case of known location, showing that this estimator shares Tukey’s median’s asymptotic breakdown point of 1 /3. All proofs are deferred to the Appendix. 2 The location-scale model, depth and breakdown point Let us take the multivariate location and scatter model,. X=µ0+V0u (1) with µ0∈Rp,V0∈Rp×pan inversible matrix, and uan inobservable random vector with centrosym- metric distribution P0around 0. µ0andV0Vt 0(except for a constant) are identifiable. Tukey’s depth of a vector θ∈Rpis given by DT(θ, P) = inf u∈Sp−1minP utX≤utθ . withSp−1={z∈Rp:∥z∥= 1}.Carrizosa (1996) and Adrover et al. (2002) independently came up with a residual smallness concept which comprises the notion of depth given in the univariate and multivariate model as well as the regression model. More precisely, DT(θ, P) = inf ∥λ∥=1,γ∈RpP λt(x−θ) ≤ λt(x−γ) (2) and Tukey’s median is taken to be ˆθT(P) = arg sup θDT(θ, P). The idea behind that definition is that the depth of a fit θis determined by the bad performance displayed by the residuals |λt(x−θ)|compared to the best competitor γ,whose residuals |λt(x−γ)| have the minimum probability of being worse than those of |λt(x−θ)|, therefore the θwith the best worst performance is singled out. Observe that, if P=P0 V−1 0(· −µ0) ,ˆθT(P) =µ0. Ifp=
1, the deepest location estimator is the median. In the univariate case the model (1) is the usual location-scale model $Y=\mu_0+\sigma_0 U$, $\sigma_0>0$, with $U\sim P_0$ centrosymmetric. The residual smallness concept in (2) is easily adapted for joint estimation of location and scale as follows: the depth of $(\mu,\sigma)\in\mathbb{R}\times(0,\infty)$ is taken to be
\[
D^1_{LS}(\mu,\sigma,P) = \min\Big\{\inf_{\lambda\in\mathbb{R}} P\big(\big[|Y-\mu|\le|Y-\lambda|\big]\big),\ \inf_{\gamma>0} P\Big[\Big|\frac{|Y-\mu|}{\sigma}-1\Big| \le \Big|\frac{|Y-\mu|}{\gamma}-1\Big|\Big]\Big\},
\qquad
(\hat\mu_1,\hat\sigma_1) = \arg\max_{\mu,\sigma} D^1_{LS}(\mu,\sigma,P).
\]
With this definition, we obtain very well-known functionals for location and scale, as stated in the following lemma.

Lemma. $\hat\mu_1=\mathrm{med}(P)$, $\hat\sigma_1=\mathrm{med}_P(|Y-\mathrm{med}(P)|)$, and $D^1_{LS}(\hat\mu_1,\hat\sigma_1,P)=0.5$.

If $P=F_0(\sigma_0^{-1}(\cdot-\mu_0))$, $\mathrm{med}_{F_0}(U)=0$ and $\mathrm{med}_{F_0}(|U|)=1$, then $(\mu_0,\sigma_0)=(\hat\mu_1(P),\hat\sigma_1(P))$. If $P$ is taken to be the empirical distribution function, we recover the usual median and the median absolute deviation about the median as location and scale estimators. If $\Theta$ denotes the set in which the parameters are assumed to lie in the central model (for instance, $\Theta=\mathbb{R}\times(0,\infty)$ in the location-scale model), the breakdown point of an estimator quantifies the level of contamination required to drive the estimator outside every compact subset of $\Theta$. A usual way to model the presence of spurious data in a sample is the $\varepsilon$-contamination neighborhood, defined as
\[
\mathcal{P}_\varepsilon(F_0) = \Big\{(1-\varepsilon)F_0\Big(\frac{\cdot-\mu_0}{\sigma_0}\Big) + \varepsilon G(\cdot),\ G \text{ any distribution on } \mathbb{R}\Big\},
\]
and the asymptotic breakdown points $(\varepsilon^*_L,\varepsilon^*_S)$ of a functional $(\hat\mu,\hat\sigma):\mathcal{P}_\varepsilon(F_0)\to\Theta$ are defined as
\[
\varepsilon^*_L = \inf\Big\{\varepsilon>0 : \sup_{\mathcal{P}_\varepsilon(F_0)}|\hat\mu(\cdot)|=\infty\Big\},
\qquad
\varepsilon^*_S = \inf\Big\{\varepsilon>0 : \sup_{\mathcal{P}_\varepsilon(F_0)}\max\big(\hat\sigma(\cdot),\hat\sigma^{-1}(\cdot)\big)=\infty\Big\}.
\]
Both $\hat\mu_1$ and $\hat\sigma_1$ are estimators whose breakdown point is $0.5$, the best breakdown point attainable for location and scale equivariant estimators.
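The deepest fit under this first proposal is computable directly from data as the median and the MAD. A quick numeric sketch (illustrative only; the sample size and contamination fraction are arbitrary choices, not from the paper) shows that even with 40% of the sample replaced by gross outliers — below the 0.5 breakdown point — the estimates remain bounded, although biased:

```python
import numpy as np

rng = np.random.default_rng(1)

def deepest_ls1(y):
    """Deepest location-scale fit under the first proposal:
    (median, median absolute deviation about the median)."""
    mu = np.median(y)
    sigma = np.median(np.abs(y - mu))
    return mu, sigma

clean = rng.normal(loc=0.0, scale=1.0, size=10_000)

# Replace 40% of the sample by far-away outliers: still below the 0.5
# breakdown point, so both estimates stay bounded (though biased).
contaminated = clean.copy()
contaminated[:4_000] = 1e6

mu_c, s_c = deepest_ls1(clean)          # close to (0, 0.6745) for N(0,1)
mu_d, s_d = deepest_ls1(contaminated)   # bounded despite 40% contamination
print(mu_c, s_c, mu_d, s_d)
```

The clean-sample scale estimate concentrates around $\Phi^{-1}(3/4)\approx 0.6745$, the MAD of the standard normal, consistent with the Fisher-consistency statement above.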
Let us now suppose that we bind the estimation of both parameters together in one expression, by considering the following second proposal for location-scale depth,
\[
D^2_{LS}(\mu,\sigma,P) = \inf_{\gamma>0,\ \lambda\in\mathbb{R}} P\Big(\big[|Y-\mu| \le |Y-\lambda|\big] \cap \Big[\Big|\frac{|Y-\mu|}{\sigma}-1\Big| \le \Big|\frac{|Y-\mu|}{\gamma}-1\Big|\Big]\Big),
\]
with the deepest estimator
\[
(\hat\mu_2,\hat\sigma_2) = \arg\max_{\mu\in\mathbb{R},\,\sigma>0} D^2_{LS}(\mu,\sigma,P).
\]
It is easy to see that
\[
D^2_{LS}(\mu,\sigma,P) = \min\big\{P(\mu-\sigma\le y\le\mu),\ P(\mu\le y\le\mu+\sigma),\ P([y\le\mu-\sigma]),\ P([y\ge\mu+\sigma])\big\}.
\]
If $P=F_0(\sigma_0^{-1}(\cdot-\mu_0))$, then $(\hat\mu_2,\hat\sigma_2)=(\mu_0,\sigma_0)$. If $P$ is a symmetric distribution, $\hat\mu_2=\mathrm{med}(P)$ and $\hat\sigma_2=\mathrm{med}(P)-q_1(P)$, with $q_1(P)$ the first quartile of $P$. However, as shown in the Appendix, $(\hat\mu_2,\hat\sigma_2)$ has a much lower breakdown point than that of $D^1_{LS}$. We will assume that $F_0=\Phi$, the standard normal distribution. Let $\varepsilon_0$ be the solution to the equation
\[
\delta = (1-\delta)\,h\Big(\tfrac{1}{3}\big(y_\delta+2\sqrt{y_\delta^2+6\ln 2}\,\big),\ y_\delta\Big),
\]
with $h(x,y)=\Phi(x)-\Phi\big(\frac{y+x}{2}\big)$ and $y_\delta=\Phi^{-1}\big(\frac{\delta}{1-\delta}\big)$ a normal quantile, with $\delta<1/3$.

Theorem 1 The breakdown point $\varepsilon^*$ of $(\hat\mu_2,\hat\sigma_2)$ is $\varepsilon^*=\varepsilon_0$, with $1/5<\varepsilon_0<1/4$.

The second proposal for depth estimators in the location-scale model yields estimators with a much lower breakdown point than the first proposal. This phenomenon can be
considered in a broader scenario, showing a behavior also described for some other simultaneous procedures in the robust literature. For instance, a similar behavior is described for simultaneous M-estimation of location and scale (Huber and Ronchetti, 2009, p. 141) and in the context of simultaneous M-estimation of regression and scale in Maronna and Yohai (1991), which seems to discourage simultaneous estimation as a means of improving robustness.

3 Maximum bias and concentration inequalities

The asymptotic breakdown point is one aspect of a more powerful tool for measuring the performance of an estimator. The so-called maximum asymptotic bias function depicts the behavior of the estimator more accurately, since it captures the global behavior by quantifying how much the estimator deviates from the parameters of the central model over the whole contamination neighborhood, under different levels of contamination. In spite of the deep understanding that the maxbias function provides, it is usually neglected because of the technicalities its derivation requires. Let
\[
\mathcal{P}_\varepsilon(P_0) = \big\{(1-\varepsilon)P_0\big(V_0^{-1}(\cdot-\mu_0)\big) + \varepsilon G(\cdot),\ G \text{ any distribution on } \mathbb{R}^p\big\}.
\]
Set $P_0^E = P_0(V_0^{-1}(\cdot-\mu_0))$, $\Sigma_0 = V_0V_0^t$, and let $\mathcal{F}$ be a set of distributions containing $\mathcal{P}_\varepsilon(P_0^E)$ as well as the empirical distribution functions. Observe that if $\mathcal{L}(X)\in\mathcal{P}_\varepsilon(P_0^E)$ then $\mathcal{L}(X)=\mathcal{L}\big(\Sigma_0^{-1/2}(\tilde X-\mu_0)\big)$ with $\mathcal{L}(\tilde X)\in\mathcal{P}_\varepsilon(P_0)$. We say that the functionals $\hat\mu:\mathcal{F}\to\mathbb{R}^p$ and $\hat\Gamma:\mathcal{F}\to\mathcal{E}$, where $\mathcal{E}$ stands for the set of $p\times p$ symmetric positive definite matrices, are affine and translation equivariant if, for any invertible matrix $A\in\mathbb{R}^{p\times p}$, it holds that
\[
\hat\mu(\mathcal{L}(Ax+b)) = A\hat\mu(\mathcal{L}(x)) + b,\qquad
\hat\Gamma(\mathcal{L}(Ax+b)) = A\hat\Gamma(\mathcal{L}(x))A^t.
\]
We say that the functionals $\hat\mu$ and $\hat\Gamma$ are Fisher consistent if $\hat\mu(P_0^E)=\mu_0$ and $\hat\Gamma(P_0^E)=c\,V_0V_0^t$, $c>0$, for any $\mu_0$ and invertible $V_0\in\mathbb{R}^{p\times p}$.
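The equivariance property just defined can be checked numerically. As a generic sketch (using the sample mean and sample covariance as stand-in location and scatter functionals, not any particular depth estimator), the two identities hold exactly for every invertible $A$ and shift $b$:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 3, 500
X = rng.normal(size=(n, p))      # rows are observations in R^p
A = rng.normal(size=(p, p))      # an (almost surely) invertible matrix
b = rng.normal(size=p)

def location(X):
    """Sample mean: a simple affine- and translation-equivariant location."""
    return X.mean(axis=0)

def scatter(X):
    """Sample covariance: a simple affine-equivariant scatter functional."""
    return np.cov(X, rowvar=False)

Y = X @ A.T + b                  # the transformed sample A x + b

# hat mu(L(Ax+b)) = A hat mu(L(x)) + b
assert np.allclose(location(Y), A @ location(X) + b)
# hat Gamma(L(Ax+b)) = A hat Gamma(L(x)) A^t
assert np.allclose(scatter(Y), A @ scatter(X) @ A.T)
```

These identities are exactly what allows the maximum bias computations below to be carried out at the canonical model $\mu_0=\mathbf{0}$, $\Sigma_0=I$.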
The effect of the distorsion caused by having P∈ Pε PE 0 , it is customary measured in the following invariant manner, bL(ˆµ, ε, P ) = ˆµ(P)−ˆµ PE 0t ˆΓ PE 0−1 ˆµ(P)−ˆµ PE 0 bS ˆΓ, ε, P = sup u∈Sp−1utˆΓ (P)u utˆΓ (P0)u =λ(1) ˆΓ (P0)−1/2ˆΓ (P)ˆΓ (P0)−1/2 =λ(1) ˆΓ (P)ˆΓ (P0)−1 , bI ˆΓ, ε, P = inf u∈Sp−1utˆΓ (P)u utˆΓ (P0)u =λ(p) ˆΓ (P0)−1/2ˆΓ (P)ˆΓ (P0)−1/2 =λ(p) ˆΓ (P)ˆΓ (P0)−1 = sup u∈Sp−1utˆΓ−1(P)u utˆΓ−1(P0)u=λ(1) ˆΓ (P0)1/2ˆΓ−1(P)ˆΓ (P0)1/2 . 4 Then we can define the asymptotic biases for the largest and smallest eigenvalues as BL ˆµ, ε, PE 0 = sup P∈PεbL(ˆµ, ε, P ) BS ˆΓ, ε, PE 0 = sup P∈PεbS ˆΓ, ε, P , BI ˆΓ, ε, , PE 0 = sup P∈PεbI ˆΓ, ε, P . B ˆΓ, ε, PE 0 = maxn BS ˆΓ, ε , BI ˆΓ, εo We say that the asymptotic explosion and implosion breakdown points are given by ε∗ L= infn ε >0 :BL ˆΓ, ε =∞o ε∗ S= infn ε >0 :BS ˆΓ, ε =∞o , ε∗ I= infn ε >0 :BI ˆΓ, ε =∞o , ε∗= min ( ε∗ S, ε∗ I). If we consider equivariant and Fisher consistent functionals ˆµandˆΓ for location and scatter, it is easily proved that BL ˆµ, ε, PE 0 =cBL(ˆµ, ε, P 0) BS ˆΓ, ε,
PE 0 =BS ˆΓ, ε, P 0 BI ˆΓ, ε, , PE 0 =BI ˆΓ, ε, , P 0 which entails that the maximum bias can be computed using µ0=0and Σ 0=I.The maximum bias for Tukey’s median turns out to be B ˆθT, ε,Φ = Φ−1 1+ε 2(1−ε) (Chen and Tyler (2002)). Interest in scatter depth has been revitalized over the past decade, particularly following the seminal papers by Chen et al. (2018) and Paindaveine and Van Bever (2018), which explored different aspects of the concept of depth for multivariate scatter. If X∼P, X∈Rp,Γ∈ E, the depth of Γ is taken to be D(Γ, P) = inf u∈Sp−1min P |ut(X−v0(P))|2≤utΓu , P |ut(X−v0(P))|2≥utΓu , withv0a preliminary affine equivariant location functional to yield an affine equivariant multivariate scatter functional If location is known we can assume without loss of generality that v0=0.For the known location case, the depth and the deepest estimator are defined to be, D(Γ, P) = inf u∈Sp−1minn P utX 2≤utΓu , P utX 2≥utΓuo . ˆΓ = arg max Γ⪰0D(Γ, P). D M(P) =D ˆΓ, P , (⪰defines the usual partial order in the set Eof symmetric positive matrices). Chen et al.. (2018) broke new ground by introducing a unified way to study the statistical convergence rate and robustness jointly. Given δ∈(0,1/2) let α= 1−2δandε∈[0,1/2). Let Pε(P0) be the ε−contamination neighborhood with P0=N(θ,Σ). Take F(M) as the set of symmetric and definite positive matrices Σ such that the largest eigenvalue λ1(Σ) is less than a constant M > 0.They derived that, for ε∈[0, ε′], ε′<1/3 and ( p+ log (1 /δ))/nsufficiently small, there exists a constant C > 0 (depending on ε′but independent of p, n, ε ),such that inf θ,Σ∈F(M),P∈Pε(P0)P ˆθT−θ 2 ≤C maxnp n, ε2o +log (1 /δ) n ≥α (3) 5 (Theorem 2.1 of Chen et al. (2018)). The constant Cin the concentration inequality (3) is actually affected by the asymptotic maximum bias of Tukey ’s median. 
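The maximum bias curve for Tukey's median quoted above, $B(\hat\theta_T,\varepsilon,\Phi)=\Phi^{-1}\big(\frac{1+\varepsilon}{2(1-\varepsilon)}\big)$, can be evaluated directly. A small sketch showing that it is finite for $\varepsilon<1/3$ and explodes as $\varepsilon\to 1/3$, matching the asymptotic breakdown point:

```python
from scipy.stats import norm

def maxbias_tukey(eps):
    """Maximum asymptotic bias of Tukey's median under eps-contamination
    of the standard normal: Phi^{-1}((1 + eps) / (2 (1 - eps)))."""
    return norm.ppf((1 + eps) / (2 * (1 - eps)))

for eps in (0.0, 0.1, 0.2, 0.3, 0.33):
    print(f"eps={eps:.2f}  maxbias={maxbias_tukey(eps):.3f}")

# The argument of Phi^{-1} reaches 1 exactly at eps = 1/3,
# so the bias diverges there: the breakdown point is 1/3.
```

Note that at $\varepsilon=0.2$ the curve passes through $\Phi^{-1}(3/4)\approx 0.6745$, since $\frac{1+\varepsilon}{2(1-\varepsilon)}=3/4$ exactly there.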
However, the bound (3) can be derived in a more illuminating manner by explicitly incorporating the maximum bias, as the maximum bias governs the behavior of the estimator when the sample size is sufficiently large. Without enlarging the bound in (3), we obtain a more informative inequality, inf θ,Σ∈F(M),P∈Pε(P0)P ˆθT−θ 2 ≤˜C maxnp n, B2 ˆθT, ε,Φo +log (1 /δ) n ≥α. (see Lemma 9 in the Appendix). The reasoning behind expecting the asymptotic maximum bias function to appear in the concentration inequality is as follows: As the sample size tends to infinity, the estimator is expected to converge to the functional value ˆθT(P).The quantity ˆθT(P)−θ remains within a range bounded above by the maximum bias corresponding to the given level of contamination, since the distributions vary over the entire ε-contamination neighborhood. On the other hand, Chen et al. (2018) also derived a concentration inequality for the deepest estimator for the dispersion matrix. They showed that, with probability at least 1 −2δ, inf Σ∈F(M),P∈Pε(F0)P ˆΣ−Σ 2 op≤C maxnp n, ε2o +log (1 /δ) n ≥α. (4) With a similar reasoning to that of Lemma 9 we can
obtain (see Lemma 10 below) that
\[
\inf_{\Sigma\in\mathcal{F}(M),\,P\in\mathcal{P}_\varepsilon(F_0)} P\Big(\big\|\hat\Sigma-\Sigma\big\|_{op}^2 \le C^*\Big[\max\Big\{\frac{p}{n},\,B_E^2(\varepsilon)\Big\}+\frac{\log(1/\delta)}{n}\Big]\Big) \ge \alpha,
\]
with $B_E(\varepsilon)=\big[\frac{1}{\sqrt{\beta}}\Phi^{-1}(a(\varepsilon))-1\big]$. By these means, the concentration inequality is also able to uncover the maximum bias of the deepest estimator, as will be shown in the next section.

4 Maximum bias for the deepest scatter matrix

From now on we will assume that

A1. $P_0$ is a multivariate normal distribution centered at $\mathbf{0}$ with covariance matrix $I$.

The bounds obtained from the concentration inequalities of Chen et al. (2018) suggest studying the asymptotic maximum bias of the deepest scatter matrix to verify the accuracy of the bounds. We summarize the steps needed to obtain the maximum bias function for the deepest scatter matrix. First, we compute the depth of any matrix under the normal model and the deepest estimator under this distribution, which is given by $\Phi^{-1}(3/4)\,I$. We then verify that if a sequence of contaminations in the $\varepsilon$-contamination neighborhood drives the deepest estimator's largest eigenvalue to infinity or its smallest eigenvalue to $0$, then the level of contamination satisfies $\varepsilon\ge 1/3$, which entails that $\varepsilon^*\ge 1/3$. Next, we calculate the depth of any matrix in $\mathcal{E}$ under point mass contaminations located along the direction given by a vector $e\in\mathbb{R}^p$. Moreover, we consider the set of matrices whose eigenvector associated with the largest eigenvalue coincides with $e$, which yields a more specific formula than the general one for an arbitrary matrix obtained in the previous step. Then, within this class, we calculate a deepest scatter matrix (since uniqueness cannot be presumed) whose largest and smallest attainable eigenvalues under point mass contaminations coincide with those in the bounds found in the concentration inequalities.
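The first step above (the deepest scatter under the normal model) can be checked numerically in the univariate reduction. For $X\sim N(0,1)$ the scatter depth of a scale $\gamma$ is $\min\{P(X^2\le\gamma),\,P(X^2\ge\gamma)\}$, which is maximized at the median of $\chi^2_1$, i.e. at $\gamma=(\Phi^{-1}(3/4))^2\approx 0.455$ — the square of the quantile appearing in the text. A sketch of this computation (a grid scan, assuming the univariate reduction):

```python
import numpy as np
from scipy.stats import norm, chi2

def depth_1d(gamma):
    """Scatter depth of the scale gamma under P0 = N(0,1):
    the smaller of the two tail masses of X^2 around gamma."""
    f = chi2.cdf(gamma, df=1)       # P(X^2 <= gamma)
    return min(f, 1 - f)

# Scan a grid of candidate scales and pick the deepest one.
grid = np.linspace(0.01, 3.0, 10_000)
depths = np.array([depth_1d(g) for g in grid])
g_star = grid[depths.argmax()]

# Both of these are the chi^2_1 median, approximately 0.455.
print(g_star, norm.ppf(0.75) ** 2)
```

The maximal depth is $1/2$, attained when $\gamma$ splits the mass of $X^2$ in half, mirroring the depth value $0.5$ obtained for the deepest location-scale fit in Section 2.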
On the one hand we show that in case of having a sequence of matrices such that the maximum eigenvalue tends to infinity its depth could only converge to a values less or equal than min ( ε,1−ε).On the other hand, we prove that the deepest estimator in the case of using point mass contaminations is less or equal to (1 −ε)/2. Therefore we can conclude that if we have a level of contamination εsuch that the deepest estimator have smallest and largest eigenvalues bounded above and below for any contamination, the level of contamination must be less or equal to 1 /3 6 and we get that the breakdown ε∗= 1/3,which coincide with that of Tukey ’s median. The reminiscence ofTukey ’s median is even emphasized since we can use a similar reasoning to that of Chen and Tyler (2002) to get a bound for the asymptotic maximum bias curve. And the bound found let us confirm that it agrees with that of the bounds obtained in the concentration inequalities given in the former Section, that is Theorem 2 B ˆΓ, ε, P 0 = max 1√βΦ−1 3−ε 4(1−ε) ,√β Φ−1(3−5ε 4(1−ε)) . All the proofs are deferred to the Appendix. 5 Appendix Letg(δ) = (1 −δ)h 1 3 yδ+ 2p y2 δ+ 6 ln 2 , yδ with yδ=
Φ−1 δ 1−δ a normal quantile ,with δ < 1/3.Letε0be the fixed point of g; i.e., ε0=g(ε0). (5) LetPn= (1−ε)Φ (·) +εHn(·) be a sequence of contaminated distributions in the ε−contamination neighborhood Pε(Φ), Dnthe depth induced by these contaminated distributions and {(ˆµn,ˆσn)}nthe deepest estimators associated to those distributions. Take the sets determined by ( µ, σ)∈R×(0,∞), A(1)(µ, σ) = (−∞, µ−σ], A(2)(µ, σ) = [µ−σ, µ], A(3)(µ, σ) = [µ, µ+σ] and A(4)(µ, σ) = [µ+σ,∞). LetPε= (1−ε)Φ. Note that ∀n≥1 :Pn(A(j)(µ, σ))≥Pε(A(j)), j= 1,2,3,4. Lemma 3 The following statements hold. (i) Given y0<0fixed, the maximum of the function h0(x) =h(x, y0), x > 0occurs at M(y0) = 1 3 y0+ 2p y2 0+ 6 ln 2 . (ii) For ϵ∈(0,1/3)the following equivalence holds: ε <(1−ε)h(M(yε0), yε0)if and only if ε < ε 0. Proof . (i) Fixed y0,we want to maximize h0(x) = h(x, y0) = Φ ( x)−Φ y0+x 2 .Ifφstands for the derivative of Φ ,the derivative of h0turns out to be h′ 0(x) = φ(x)−1 2φ y0+x 2 ≥0 if and only if 2 exp −1 2x2 ≥exp −1 8 (x+y0)2 if and only if ln 2 −3x2 8+xy0 4+y2 0 8≥0 if and only if 1 3 y0−2p y2 0+ 6 ln 2 ≤x≤1 3 y0+ 2p y2 0+ 6 ln 2 . Then M(y0) =1 3 y0+ 2p y2 0+ 6 ln 2 yields the maximum value given by h0(M(y0)) = Φy0 3+2 3q y2 0+ 6 ln 2 −1 2Φ2 3y0+1 3q y2 0+ 6 ln 2 . (ii) If ε < ε 0, using (5), we have that (1−ε)h(M(yε0), yε0)>(1−ε0)h(M(yε0), yε0) =g(ε0) =ε0> ε. Assume now that ε <(1−ε)h(M(yε0), yε0) holds. Using again (5), we have that ε <(1−ε)h(M(yε0), yε0) =1−ε 1−ε0(1−ε0))h(M(yε0), yε0) =1−ε 1−ε0ε0. and the converse follows. 7 Lemma 4 The function g(δ) = (1 −δ)h 1 3 yδ+ 2p y2 δ+ 6 ln 2 , yδ is strictly decreasing for δ∈ (0,1/3).Therefore, δ < g (δ)if and only if δ < ε 0. Proof . Note that g(δ) = (1 −δ)f(yδ) =f Φ−1 δ 1−δ with f(y) = hy 3+2 3p y2+ 6 ln 2 , y = Φ1 3y+2 3p y2+ 6 ln 2 −Φ2 3y+1 3p y2+ 6 ln 2 . 
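Before continuing with the monotonicity argument, note that the fixed point $\varepsilon_0=g(\varepsilon_0)$ of (5) can be found numerically. A sketch using a root finder, where the bracket $[0.20,\,0.25]$ comes from Theorem 1's bound $1/5<\varepsilon_0<1/4$:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def g(delta):
    """g(delta) = (1 - delta) * h(M(y_delta), y_delta), with
    h(x, y) = Phi(x) - Phi((y + x)/2),
    y_delta = Phi^{-1}(delta / (1 - delta)),
    M(y)    = (y + 2 * sqrt(y^2 + 6 ln 2)) / 3."""
    y = norm.ppf(delta / (1 - delta))
    M = (y + 2 * np.sqrt(y**2 + 6 * np.log(2))) / 3
    h = norm.cdf(M) - norm.cdf((y + M) / 2)
    return (1 - delta) * h

# Solve g(delta) = delta on the bracket given by Theorem 1.
eps0 = brentq(lambda d: g(d) - d, 0.20, 0.25)
print(eps0)
```

The sign change is easy to confirm by hand: $g(0.20)\approx 0.225 > 0.20$ while $g(0.25)\approx 0.177 < 0.25$, so a unique root lies strictly inside the bracket, in agreement with $1/5<\varepsilon_0<1/4$.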
Let us see that the function fis decreasing in y∈(−∞,0).Note that y 3+2 3p y2+ 6 ln 2 =2 3y+1 3p y2+ 6 ln 2 +1 3p y2+ 6 ln 2 −y andy 3+2 3p y2+ 6 ln 2 >0 for all y.The derivative of this function turns out to be f′(y) = φ1 3y+2 3p y2+ 6 ln 2 2 3+1 3yp y2+ 6 ln 2+1 3 yp y2+ 6 ln 2−1!! −φ2 3y+1 3p y2+ 6 ln 2 2 3+1 3yp y2+ 6 ln 2! . Seta=2 3y+1 3p y2+ 6 ln 2 , b=1 3p y2+ 6 ln 2 −y >0 and f′(y) =1p y2+ 6 ln 2(φ(a+b)a−φ(a) (a+b)). Observe that2 3y+1 3p y2+ 6 ln 2 >0 if and only if y >−p ln (4) .Therefore, if y≤ −p ln (4) ,then a≤0, φ(a+b)a−φ(a) (a+b)<0 and
f′(y) =1p y2+ 6 ln 2[φ(a+b)a−φ(a) (a+b)]<0 Ify >−p ln (4) ,then a >0, a < a +b, φ(a+b)< φ(a) and f′(y)<0,which entails the result. Since the function yδ= Φ−1 δ 1−δ is increasing in δthen the composition f(yδ) =h 1 3 yδ+ 2p y2 δ+ 6 ln 2 , yδ is decreasing in δ. Consequently, g(y) is a product of two positive decreasing functions and it is decresing inδ. Therefore, the claim of the Lemma follows. Lemma 5 The following statements are valid. (i) If ε <1/5then there exists −∞< xL< xU<∞such that lim n→∞DnxL+xU 2,xU−xL 2 ≥ε. (ii) If there exists 0< α≤1verifying that inf x>0lim n→∞Hn([x,∞))≥αandΦ(−√ ln 4) 1+Φ(−√ ln 4)< ε < ε 0,then there exists −∞< xL< xU<∞such that lim n→∞DnxL+xU 2,xU−xL 2 > εα . Note:Φ(−√ ln 4) 1+Φ(−√ ln 4)= 0.1068. 8 (iii) If Pε A(j)(ˆµn,ˆσn) → n→∞0for some j∈ {1,2,3,4}then lim n→∞Dn(ˆµn,ˆσn)≤εandε >1/5. (iv) Given a sequence {(µn, σn)}n∈Nsuch that lim n→∞Dn(µn, σn)>0then lim n→∞Pn A(j)(µn, σn) >0 for any j∈ {1,2,3,4}. Proof . (i) It suffices to take xU=−xLsuch that Φ ( xL) = 1 /4.Indeed, considering µ= (xL+ xU)/2 and σ= (xL−xU)/2, the probabilities that define the notion of depth satisfy: Pn(A(1)(µ, σ))≥ Pε(A(1)(µ, σ)) = (1 −ϵ)Φ(xL) = (1 −ϵ)1 4,Pn(A(2)(µ, σ)))≥Pε(A(2)(µ, σ)) = (1 −ϵ)[Φ(0) −Φ(xL)] = (1−ϵ)1 4, and similarly for the other terms. Therefore D(0,−xL)≥(1−ε) 1/4> εand the statement follows. (ii) Given y0=yε0,and using Lemma 3 we can write (1 −ϵ)h0(x) =Pε A(3)yε0+x 2,x−yε0 2 .The maximum of h0occurs at M(yε0) =1 3 yε0+ 2q y2ε0+ 6 ln 2 and the maximum value is given by h0(M(yε0)) = Φyε0 3+2 3q y2ε0+ 6 ln 2 −1 2Φ2 3yε0+1 3q y2ε0+ 6 ln 2 . Ifε < ε 0, by (ii) of Lemma 3 Pε A(3)yε0+M(yε0) 2,M(yε0)−yε0 2 = (1−ε)h(M(yε0), yε0)> ε Moreover Pε A(1)yε0+M(yε0) 2,M(yε0)−yε0 2 =ε0> ε, and then, Pn A(j) yε0+M(yε0) 2,M(yε0)−yε0 2 > ε,j= 1,3. 
Observe that2 3yδ+1 3p y2 δ+ 6 ln 2 >0 if and only if yδ∈[−ln (4) ,ln (4)] if and only if δ≥Φ(−√ ln 4) 1+Φ(−√ ln 4).Therefore2 3yε0+1 3p y2ε0+ 6 ln 2 >0,and we can say that Pε A(2)yε0+M(yε0) 2,M(yε0)−yε0 2 > Pε A(3)yε0+M(yε0) 2,M(yε0)−yε0 2 > ε, and then Pn A(2) yε0+M(yε0) 2,M(yε0)−yε0 2 > ϵ. The remaining set A(4) yε0+M(yε0) 2,M(yε0)−yε0 2 has probability greater than εα.Set η=(1−ε) 2Φ A(4)yε0+M(yε0) 2,M(yε0)−yε0 2 , then we get, for nlarge enough that (1−ε)Φ A(4)yε0+M(yε0) 2,M(yε0)−yε0 2 +εHn A(4)yε0+M(yε0) 2,M(yε0)−yε0 2 ≥(1−ε)Φ A(4)yε0+M(yε0) 2,M(yε0)−yε0 2 +εα−η=εα+η, 9 and (ii) follows. (iii) Take −xLobtained in item (i) and j0such that lim nPε A(j0)(ˆµn,ˆσn) = 0.Then, (1−ε) 4≤D(0,−xL)≤Dn(ˆµn,ˆσn)≤Pε A(j0)(ˆµn,ˆσn) +ε and the result follows. (iv) It follows straightforwardly. Lemma 6 Ifinf x>0lim n→∞Hn([x,∞)) = 1 then there exists −∞< xL< xU<∞such that lim n→∞DnxL+xU 2,xU−xL 2 > ε if and only if ε < ε 0. Proof. Ifε < ε 0then Lemma (5) (i) and (ii) says the conclusion. If inf x>0lim n→∞Hn([x,∞)) = 1 and there exists −∞< xL< xU<∞such that ε < lim n→∞Dn xL+xU 2,xU−xL 2 then Pε A(j)xL+xU
2,xU−xL 2 > εforall j∈ {1,2,3}. IfPε A(1) xL+xU 2,xU−xL 2 = (1−ϵ)Φ(xL)> εthen xL>Φ−1(ϵ/(1−ϵ)) = yϵ. Moreover, by (i) of Lemma 1 ε < P ε A(3)xL+xU 2,xU−xL 2 = (1 −ϵ) Φ(xU)−(Φ(xL+xU 2) = (1 −ϵ)h(xL, xU) ≤(1−ε)h(M(yε0), yε0) =g(ϵ). By Lemma 4 we get that ε < ε 0. Lemma 7 Given the contaminated distributions Pn= (1−ε)Φ (·) +εδan(·)with point mass contami- nations, δanverifying that limn→∞an=∞,then there exists (M0, s0)∈(0,∞)×(0,∞)such that the deepest estimators {(ˆµn,ˆσn)}n∈N∈(−M0, M0)×(s0, M0)if and only ε < ε 0. Proof : Take {(µn, σn)}n∈Nany sequence such that lim n→∞µn+σn=∞andlim n→∞Dn(µn, σn)>0. (6) Then, (i) If lim n→∞µn−σn=−∞then lim n→∞Pn A(1)(µn, σn) = 0 and Lemma 5 (iv) yields a contra- diction. (ii) If µn+σn> anforanlarge enough, then lim a→∞Pn(A(1)(µn, σn)) = 0 and Lemma 5 (iv) yields a contradiction. Consequently, −∞<inf n(µn−σn) and µn+σn≤anfornsufficiently large. Setb= lim n→∞µn−σn.This allows to say that lim n→∞µn=∞since lim n→∞µn+σn−(µn−σn) =∞. (iii) If µn+σn< anfornsufficiently large then lim a→∞Pn A(3)(µn, σn) = 0,and Lemma 5 (iv) yields a contradiction. Then µn+σn=anfornsufficiently large. 10 (iv) Summing up, we can assume without loss of generality that lim n→∞Pn A(1)(µn, σn) = (1 −ε) Φ (b), lim a→∞Pn A(2)(µn, σn) = (1 −ε) (1−Φ (b)), lim n→∞Pn A(3)(µn, σn) =ε, lim a→∞Pn A(4)(µn, σn) =ε. By Lemma 6 there exists −∞< xL< xU<∞such that lim n→∞DnxL+xU 2,xU−xL 2 > ε if and only if ε < ε 0.Therefore, for nlarge enough, DnxL+xU 2,xU−xL 2 > D n(µn, σn) (7) if and only if ε < ε 0.Consequently, in case of having of point mass contaminations, {(µn, σn)}n∈Ncannot be the deepest estimators if and only if ε < ε 0. Theorem 8 The breakdown point of the depth estimator is ε∗=ε0. Proof. Given ε >0, (i) Assume that the deepest estimator remains bounded in the ε−contamination neighborhood, that is there exists M > 0 and r >0 such that (µ(P), σ(P))∈[−M, M ]×[r, M] for all P∈ Fε(Φ). An special case is given by the point mass contaminations. 
Thus Lemma 7 lets us infer that $\varepsilon < \varepsilon_0$, and the breakdown point verifies $\varepsilon^* \le \varepsilon_0$.

(ii) Assume that there is a sequence of distributions $P_n \in \mathcal{F}_\varepsilon(\Phi)$ such that the associated deepest estimators $\{(\hat\mu_n, \hat\sigma_n)\}$ verify $\lim_{n\to\infty}(\hat\mu_n + \hat\sigma_n) = \infty$. Set $A^{(j)}_n = A^{(j)}(\hat\mu_n, \hat\sigma_n)$. Then $\lim_{n\to\infty} P_\varepsilon(A^{(j)}_n) = 0$ for $j = 4$, and Lemma 5 (iii) says that $\varepsilon > 1/5$. Without loss of generality we can assume that there is a $b$ with $-\infty \le b$ such that
$$\lim_{n\to\infty} P_n(A^{(1)}_n) = (1-\varepsilon)\Phi(b) + \varepsilon\alpha_1, \qquad \lim_{n\to\infty} P_n(A^{(2)}_n) = (1-\varepsilon)(1-\Phi(b)) + \varepsilon\alpha_2,$$
$$\lim_{n\to\infty} P_n(A^{(3)}_n) = \varepsilon\alpha_3, \qquad \lim_{n\to\infty} P_n(A^{(4)}_n) = \varepsilon\alpha_4.$$
Then
$$\lim_{n\to\infty} D_n(\hat\mu_n, \hat\sigma_n) \le \varepsilon\alpha, \quad\text{with } \alpha = \min(\alpha_3, \alpha_4). \qquad (8)$$
Suppose by contradiction that $\varepsilon < \varepsilon_0$. By Lemma 5 (ii), there exists $-\infty < x_L < x_U < \infty$ such that
$$\lim_{n\to\infty} D_n\left(\frac{x_L+x_U}{2}, \frac{x_U-x_L}{2}\right) > \varepsilon\alpha. \qquad (9)$$
Then, by (8) and (9), for $n$ large enough we have $D_n\big(\tfrac{x_L+x_U}{2}, \tfrac{x_U-x_L}{2}\big) > D_n(\hat\mu_n, \hat\sigma_n)$, which is a contradiction. Therefore $\varepsilon \ge \varepsilon_0$, so $\varepsilon^* \ge \varepsilon_0$, and the result follows.

Lemma 9. Given $\delta \in (0, 1/2)$, let $\alpha = 1-2\delta$ and $\varepsilon \in [0, 1/2)$. Let $\mathcal{P}_\varepsilon(P_0)$ be the $\varepsilon$-contamination neighborhood with $P_0 = N(\theta, \Sigma)$. Take $\mathcal{F}(M)$ as the
set of symmetric and positive definite matrices $\Sigma$ such that the largest eigenvalue $\lambda_1(\Sigma)$ is less than a constant $M > 0$. For $\varepsilon \in [0, \varepsilon']$, $\varepsilon' < 1/3$, and $(p + \log(1/\delta))/n$ sufficiently small, there exists a constant $C > 0$ (depending on $\varepsilon'$ but independent of $p, n, \varepsilon$) such that, with probability greater than $1-2\delta$, it holds that
$$\inf_{\theta,\, \Sigma\in\mathcal{F}(M),\, P\in\mathcal{P}_\varepsilon(F_0)} P\left( \|\hat\theta_T - \theta\|^2 \le \tilde C\left[ \max\left\{\frac{p}{n},\, B^2_T(\varepsilon, I)\right\} + \frac{\log(1/\delta)}{n} \right] \right) \ge \alpha.$$

Proof. Chen et al. (2018), in the proof of Theorem 2.1, showed that, for $\theta = 0$,
$$\Phi(\|\hat\theta\|) \le \frac12 + \frac{\varepsilon}{1-\varepsilon} + 40\sqrt{\frac{6e\pi}{1-e^{-1}}}\sqrt{\frac{p+1}{n}} + \frac72\sqrt{\frac{\log(1/\delta)}{n}} \qquad (10)$$
$$= \frac{1+\varepsilon}{2(1-\varepsilon)} + 40\sqrt{\frac{6e\pi}{1-e^{-1}}}\sqrt{\frac{p+1}{n}} + \frac72\sqrt{\frac{\log(1/\delta)}{n}}.$$
The upper bound becomes useless for $\varepsilon \ge 1/3$, since it is then greater than 1. Therefore, we need to consider $\varepsilon < 1/3 - c$. Put $b(p,n) = 40\sqrt{\frac{6e\pi}{1-e^{-1}}}\sqrt{\frac{p+1}{n}} + \frac72\sqrt{\frac{\log(1/\delta)}{n}}$ and $a(\varepsilon) = \frac12 + \frac{\varepsilon}{1-\varepsilon} = \frac{1+\varepsilon}{2(1-\varepsilon)}$. Thus, (10) is equivalent to
$$\|\hat\theta\| \le \Phi^{-1}\left(\frac12 + \frac{\varepsilon}{1-\varepsilon} + b(p,n)\right) - \Phi^{-1}\left(\frac12\right) + \Phi^{-1}\left(\frac12\right) = \frac{1}{\phi(\Phi^{-1}(\gamma_C))}\left(\frac{\varepsilon}{1-\varepsilon} + b(p,n)\right) \le C\left(\sqrt{\frac{p}{n}} \vee \varepsilon + \sqrt{\frac{\log(1/\delta)}{n}}\right),$$
with $\gamma_C \in \left(\frac12,\; \frac12 + \frac{\varepsilon}{1-\varepsilon} + b(p,n)\right)$. Therefore, we get
$$\|\hat\theta\| \le \Phi^{-1}(a(\varepsilon) + b(p,n)) - \Phi^{-1}(a(\varepsilon)) + \Phi^{-1}(a(\varepsilon)) \qquad (11)$$
$$= \frac{1}{\phi(\Phi^{-1}(\eta_B))}\, b(p,n) + \Phi^{-1}(a(\varepsilon)), \qquad \eta_B \in (a(\varepsilon),\, a(\varepsilon)+b(p,n))$$
$$\le \frac{1}{\phi(\Phi^{-1}(a(\varepsilon)+b(p,n)))}\, b(p,n) + \Phi^{-1}(a(\varepsilon)).$$
If $\varepsilon < 1/3 - c$, $c$ a positive constant, $a(\varepsilon)$ is increasing on $[0, 1/3-c)$; let $a(m_c)$ be its maximum value. Take $d$ such that $1 - a(m_c) > d > 0$ and $(p,n) \in A_{c,d} = \{(p,n) : b(p,n) < 1 - a(m_c) - d\}$. Then there exists a constant $C_{c,d} = \sup_{(p,n)\in A_{c,d}} \big[\phi\big(\Phi^{-1}(a(m_c) + b(p,n))\big)\big]^{-1}$ such that
$$\|\hat\theta\| \le \tilde C_{c,d}\left(\sqrt{\frac{p}{n}} \vee B_L(\hat\theta_T, \varepsilon, I) + \sqrt{\frac{\log(1/\delta)}{n}}\right),$$
and the lemma follows.

Lemma 10. Given $\delta \in (0,1/2)$, let $\alpha = 1-2\delta$ and $\varepsilon \in [0,1/2)$. Let $\mathcal{P}_\varepsilon(P_0)$ be the $\varepsilon$-contamination neighborhood with $P_0 = N(\theta, \Sigma)$. Take $\mathcal{F}(M)$ as the set of symmetric and positive definite matrices $\Sigma$ such that the largest eigenvalue $\lambda_1(\Sigma)$ is less than a constant $M > 0$. For $\varepsilon \le 1/3 - c$ and $(p + \log(1/\delta))/n$ sufficiently small, there exists a constant $C > 0$ (depending on $\varepsilon'$ but independent of $p, n, \varepsilon$) such that, with probability greater than $1-2\delta$,
$$\inf_{\Sigma\in\mathcal{F}(M),\, P\in\mathcal{P}_\varepsilon(F_0)} P\left( \|\hat\Sigma - \Sigma\|^2_{op} \le C^*\left[ \max\left\{\frac{p}{n},\, B^2_E(\varepsilon)\right\} + \frac{\log(1/\delta)}{n}\right] \right) \ge \alpha.$$

Proof.
In the process of obtaining (4), Chen et al. (2018, p. 1955) showed that, with probability greater than $1-2\delta$,
$$\sup_{u\in S^{p-1}} \left| \Phi(\sqrt\beta) - \Phi\left(\sqrt\beta\,\sqrt{\frac{u^t\hat\Gamma u}{\beta\, u^t\Sigma u}}\right) \right| \le \frac{\varepsilon}{2(1-\varepsilon)} + \sqrt{\frac{2400\, e\pi}{1-e^{-1}}}\sqrt{\frac{3+2p}{n}} + \frac74\sqrt{\frac{\log(1/\delta)}{n}} \qquad (12)$$
holds. Put $a_{n,\delta} = 40\sqrt{\frac{6e\pi}{1-e^{-1}}}\sqrt{\frac{3+2p}{n}} + \frac72\sqrt{\frac{\log(1/\delta)}{n}}$ and $\sqrt\beta\,\sqrt{\frac{u^t(\hat\Gamma/\beta)u}{u^t\Sigma u}} = \sqrt\beta\, x_u$, with $a_{n,\delta} \ge 0$. Then
$$\sup_{u\in S^{p-1}} \left|\Phi(\sqrt\beta) - \Phi(\sqrt\beta\, x_u)\right| \qquad (13)$$
$$= \max\left\{ \Phi\Big(\sqrt\beta \sup_{u\in S^{p-1}} x_u\Big) - \Phi(\sqrt\beta),\;\; \Phi(\sqrt\beta) - \Phi\Big(\sqrt\beta \inf_{u\in S^{p-1}} x_u\Big) \right\}. \qquad (14)$$
If $\inf_u x_u \le \sup_u x_u \le 1$, then $\sup_u \big|\Phi(\sqrt\beta) - \Phi(\sqrt\beta x_u)\big| = \Phi(\sqrt\beta) - \Phi(\sqrt\beta \inf_u x_u)$. If $1 \le \inf_u x_u \le \sup_u x_u$, then $\sup_u \big|\Phi(\sqrt\beta) - \Phi(\sqrt\beta x_u)\big| = \Phi(\sqrt\beta \sup_u x_u) - \Phi(\sqrt\beta)$. If $\inf_u x_u \le 1 \le \sup_u x_u$, we have to analyze both cases.

If the maximum on the right-hand side of (13) occurs at $\Phi(\sqrt\beta \sup_u x_u) - \Phi(\sqrt\beta)$, then $\sup_u x_u \ge 1$, and we have
$$\Phi\Big(\sqrt\beta \sup_u x_u\Big) \le \Phi(\sqrt\beta) + \frac{\varepsilon}{2(1-\varepsilon)} + \frac12 a_{n,\delta}.$$
By calling $a(\varepsilon) = \frac34 + \frac{\varepsilon}{2(1-\varepsilon)} = \frac{3-\varepsilon}{4(1-\varepsilon)}$, an increasing function of $\varepsilon$ (recall that $\Phi(\sqrt\beta) = 3/4$), we have
$$\sup_u x_u \le \frac{1}{\sqrt\beta}\Phi^{-1}\left(a(\varepsilon) + \frac12 a_{n,\delta}\right) - \frac{1}{\sqrt\beta}\Phi^{-1}(a(\varepsilon)) + 1 + \left[\frac{1}{\sqrt\beta}\Phi^{-1}(a(\varepsilon)) - 1\right].$$
Then there exists $\eta \in \big(a(\varepsilon),\; a(\varepsilon) + \frac12 a_{n,\delta}\big)$ such that
$$\sup_u x_u - 1 \le \frac{1}{2\sqrt\beta}\,\frac{a_{n,\delta}}{\phi(\Phi^{-1}(\eta))} + \frac{1}{\sqrt\beta}\Phi^{-1}(a(\varepsilon)) - 1
$$and
$$0 \le \sup_u x_u - 1 \le \frac{1}{2\sqrt\beta}\,\frac{a_{n,\delta}}{\phi\big(\Phi^{-1}\big(a(\varepsilon)+\frac12 a_{n,\delta}\big)\big)} + \frac{1}{\sqrt\beta}\Phi^{-1}(a(\varepsilon)) - 1 = g_S(\varepsilon, n, \delta, p).$$
Then $B_E(\varepsilon) = \frac{1}{\sqrt\beta}\Phi^{-1}(a(\varepsilon)) - 1$.

If the maximum in (13) occurs at $\Phi(\sqrt\beta) - \Phi(\sqrt\beta \inf_u x_u)$, then $\inf_u x_u \le 1$, and we call $b(\varepsilon) = \frac34 - \frac{\varepsilon}{2(1-\varepsilon)} = \frac{3-5\varepsilon}{4(1-\varepsilon)}$, which is decreasing in $\varepsilon$. Thus, we get that
$$0 \le \Phi(\sqrt\beta) - \Phi\Big(\sqrt\beta \inf_u x_u\Big) \le \frac{\varepsilon}{2(1-\varepsilon)} + \frac12 a_{n,\delta},$$
$$\Phi\Big(\sqrt\beta \inf_u x_u\Big) \ge \Phi(\sqrt\beta) - \frac{\varepsilon}{2(1-\varepsilon)} - \frac12 a_{n,\delta},$$
$$\inf_u x_u \ge \left[\frac{1}{\sqrt\beta}\Phi^{-1}(b(\varepsilon)) - 1\right] + 1 - \frac{1}{\sqrt\beta}\Phi^{-1}(b(\varepsilon)) + \frac{1}{\sqrt\beta}\Phi^{-1}\left(b(\varepsilon) - \frac12 a_{n,\delta}\right).$$
Then, there exists $\eta \in \big(b(\varepsilon) - \frac12 a_{n,\delta},\; b(\varepsilon)\big)$ such that
$$\inf_u x_u - 1 \ge -\frac{1}{2\sqrt\beta}\,\frac{a_{n,\delta}}{\phi(\Phi^{-1}(\eta))} + \frac{1}{\sqrt\beta}\Phi^{-1}(b(\varepsilon)) - 1 \ge -\frac{1}{2\sqrt\beta}\,\frac{a_{n,\delta}}{\phi(\Phi^{-1}(b(\varepsilon)))} + \frac{1}{\sqrt\beta}\Phi^{-1}(b(\varepsilon)) - 1.$$
Then,
$$0 \le 1 - \inf_u x_u \le \frac{1}{2\sqrt\beta}\,\frac{a_{n,\delta}}{\phi(\Phi^{-1}(b(\varepsilon)))} + 1 - \frac{1}{\sqrt\beta}\Phi^{-1}(b(\varepsilon)) = g_I(\varepsilon, n, \delta, p).$$
Call
$$B_I(\varepsilon) = 1 - \frac{1}{\sqrt\beta}\Phi^{-1}\left(\frac{3-5\varepsilon}{4(1-\varepsilon)}\right).$$
It is easy to show that $B_I(\varepsilon) \le B_E(\varepsilon)$. Observe that there is an $\eta \in [\beta\inf_u x_u, \beta] \cup [\beta, \beta\sup_u x_u]$ such that
$$\sup_{u\in S^{p-1}} \left|\Phi(\sqrt\beta) - \Phi(\sqrt\beta\, x_u)\right| = \sup_{u\in S^{p-1}} \left|(\beta - \beta x_u)\,\frac{\phi(\eta)}{\sqrt\eta}\right| \ge \frac{\phi(1+g_S(\varepsilon,n,p,\delta))}{\sqrt{1+g_S(\varepsilon,n,p,\delta)}}\, \sup_{u\in S^{p-1}} |\beta - \beta x_u|.$$
Then, we can conclude that
$$\inf_{\Sigma\in\mathcal{F}(M),\, P\in\mathcal{P}_\varepsilon(F_0)} P\left( \|\hat\Sigma - \Sigma\|^2_{op} \le C^*\left[\max\left\{\frac{p}{n},\, B^2_E(\varepsilon)\right\} + \frac{\log(1/\delta)}{n}\right]\right) \ge \alpha.$$

5.1 Maximum bias of the deepest scatter estimator

We know that $\hat\Gamma(P_0)$ is Fisher consistent up to a constant, that is, $\hat\Gamma(P_0) = \beta\Sigma$. Without loss of generality, to study the breakdown point and maximum bias we can assume that $\Sigma = I$. If $F$ is a centrosymmetric distribution in $\mathbb{R}^p$ and $X \sim F$, then $w^t X \sim z^t X$ for all $w, z \in S^{p-1}$. Let us take a symmetric positive matrix $\Gamma$, with eigenvalues $l_1 \ge l_2 \ge \cdots \ge l_p > 0$ and eigenvectors $v_1, \ldots, v_p$ respectively,
$$\Gamma = \sum_{j=1}^p l_j v_j v_j^t.$$
Observe that
$$P_0\left(|u^t X|^2 \le u^t\Gamma u\right) = P_0\left(-\sqrt{u^t\Gamma u} \le u^t X \le \sqrt{u^t\Gamma u}\right),$$
$$P_0\left(|u^t X|^2 \ge u^t\Gamma u\right) = P_0\left(u^t X \le -\sqrt{u^t\Gamma u}\right) + P_0\left(u^t X \ge \sqrt{u^t\Gamma u}\right).$$
Let $g$ be the function $g : S^{p-1} \to [0,1]$,
$$g(u) = P_0\left(-\sqrt{u^t\Gamma u} \le u^t X \le \sqrt{u^t\Gamma u}\right).$$
Thus, we have the following result.

Lemma 11. $D(\Gamma, P_0) = \min_{u\in S^{p-1}} \min(g(u),\, 1-g(u)) = \min(g(v_p),\, 1-g(v_1))$.

Proof. Given $g(u) = \Phi(\sqrt{u^t\Gamma u}) - \Phi(-\sqrt{u^t\Gamma u})$, the Lagrangian is $h(u, \lambda) = g(u) + \lambda(u^t u - 1)$. The derivatives are given by
$$0 = \frac{\partial h(u,\lambda)}{\partial u} = \phi\big(\sqrt{u^t\Gamma u}\big)\frac{\Gamma u}{\sqrt{u^t\Gamma u}} - \phi\big(-\sqrt{u^t\Gamma u}\big)\left(-\frac{\Gamma u}{\sqrt{u^t\Gamma u}}\right) + 2\lambda u,$$
$$0 = \frac{\partial h(u,\lambda)}{\partial \lambda} = u^t u - 1.$$
Therefore, we get that
$$0 = \frac{\partial h(u,\lambda)}{\partial u} = \left[2\phi\big(\sqrt{u^t\Gamma u}\big)\right]\frac{\Gamma u}{\sqrt{u^t\Gamma u}} + 2\lambda u.$$
Call $b(u) = -\big[2\phi(\sqrt{u^t\Gamma u})\big]/\sqrt{u^t\Gamma u} < 0$, which yields $b(u)\,\Gamma u = 2\lambda u$ with $\lambda = \frac12 b(u)\, u^t\Gamma u$, and consequently $(\Gamma - [u^t\Gamma u]\, I)u = 0$. Thus, this entails that the critical points of the Lagrangian are the eigenvectors of $\Gamma$. Therefore, the function $g(u) = \Phi(\sqrt{u^t\Gamma u}) - \Phi(-\sqrt{u^t\Gamma u})$ satisfies, at the Lagrangian critical points,
$$g(v_j) = \Phi(l_j^{1/2}) - \Phi(-l_j^{1/2}) = 2\Phi(l_j^{1/2}) - 1, \qquad j = 1, \ldots, p,$$
$$g(v_p) \le g(v) \le g(v_1) \quad\text{for all } v \in S^{p-1},$$
$$1 - g(v_j) = 1 - \big[\Phi(l_j^{1/2}) - \Phi(-l_j^{1/2})\big] = 2\big[1 - \Phi(l_j^{1/2})\big], \qquad j = 1, \ldots, p,$$
$$1 - g(v_p) \ge 1 - g(v) \ge 1 - g(v_1) \quad\text{for all } v \in S^{p-1},$$
and the minimum is given by $m = \min(g(v_p),\, 1-g(v_1))$.

Corollary 12. The deepest estimator is a multiple of the identity and its depth is $1/2$.

Proof. Since $g(v_p) \le g(v_1)$ and both are increasing functions of $\sqrt{v^t\Gamma v}$, then $g(v_p) = 1 - g(v_1) = 0.5$, which says $l_p^{1/2} = l_1^{1/2} = \Phi^{-1}(3/4)$. Then $\Gamma = \Phi^{-1}(3/4)^2\, I_p$ ($I_p$ the identity matrix $p\times p$).

Corollary 13. Let $\{P_{\varepsilon,n}\}_{n=1}^\infty$ with $P_{\varepsilon,n} = (1-\varepsilon)P_0 + \varepsilon P_n$ be such that the associated depth estimators $\{\Gamma_n\}_{n=1}^\infty$, $\Gamma_n = \sum_{j=1}^p l_j^{(n)} v_j^{(n)} \big[v_j^{(n)}\big]^t$, have either their largest eigenvalue $l_1^{(n)} \to$
$\infty$ or their smallest eigenvalue $l_p^{(n)} \to 0$. Then $\varepsilon > 1/3$.

Proof. For a unit vector $w$, write $u_n = \Gamma_n^{-1/2}w/\|\Gamma_n^{-1/2}w\|$, so that $u_n^t\Gamma_n u_n = 1\big/\sum_{j=1}^p (l_j^{(n)})^{-1}(w^t e_j)^2$. Put
$$a_{\varepsilon,n}(w) = (1-\varepsilon)P_0\left( |u_n^t X|^2 \le \frac{1}{\sum_{j=1}^p (l_j^{(n)})^{-1}(w^t e_j)^2} \right) + \varepsilon P_n\left( |u_n^t X|^2 \le \frac{1}{\sum_{j=1}^p (l_j^{(n)})^{-1}(w^t e_j)^2} \right),$$
$$b_{\varepsilon,n}(w) = (1-\varepsilon)P_0\left( |u_n^t X|^2 \ge \frac{1}{\sum_{j=1}^p (l_j^{(n)})^{-1}(w^t e_j)^2} \right) + \varepsilon P_n\left( |u_n^t X|^2 \ge \frac{1}{\sum_{j=1}^p (l_j^{(n)})^{-1}(w^t e_j)^2} \right).$$
Let $w_n$ be such that $D(\Gamma_n, P_n) = \min(a_{\varepsilon,n}(w_n),\, b_{\varepsilon,n}(w_n))$.

(i) If $l_1^{(n)} \to \infty$: since $\min\big(a_{\varepsilon,n}(v_1^{(n)}),\, b_{\varepsilon,n}(v_1^{(n)})\big) \ge \min(a_{\varepsilon,n}(w_n),\, b_{\varepsilon,n}(w_n))$, then either $b_{\varepsilon,n}(v_1^{(n)}) \ge a_{\varepsilon,n}(v_1^{(n)}) \ge D(\Gamma_n, P_n)$ or $a_{\varepsilon,n}(v_1^{(n)}) \ge b_{\varepsilon,n}(v_1^{(n)}) \ge D(\Gamma_n, P_n)$, which, taking $w = v_1^{(n)}$, yields
$$(1-\varepsilon)P_0\left(|u_n^t X|^2 \ge l_1^{(n)}\right) + \varepsilon P_n\left(|u_n^t X|^2 \ge l_1^{(n)}\right) \ge D(\Gamma_n, P_n).$$
Observe that $P_0\big(|u_n^t X|^2 \ge l_1^{(n)}\big) \to 0$, and we get
$$\varepsilon \ge \lim_{n\to\infty} D(\Gamma_n, P_n) \ge \lim_{n\to\infty} D(I, P_n) \ge (1-\varepsilon)D(I, P_0) = (1-\varepsilon)\frac12,$$
which says that $\varepsilon \ge 1/3$.

(ii) If $l_p^{(n)} \to 0$: since $\min\big(a_{\varepsilon,n}(v_p^{(n)}),\, b_{\varepsilon,n}(v_p^{(n)})\big) \ge \min(a_{\varepsilon,n}(w_n),\, b_{\varepsilon,n}(w_n))$, then either $b_{\varepsilon,n}(v_p^{(n)}) \ge a_{\varepsilon,n}(v_p^{(n)}) \ge D(\Gamma_n, P_n)$ or $a_{\varepsilon,n}(v_p^{(n)}) \ge b_{\varepsilon,n}(v_p^{(n)}) \ge D(\Gamma_n, P_n)$, which, taking $w = v_p^{(n)}$, yields
$$(1-\varepsilon)P_0\left(|u_n^t X|^2 \le l_p^{(n)}\right) + \varepsilon P_n\left(|u_n^t X|^2 \le l_p^{(n)}\right) \ge D(\Gamma_n, P_n).$$
Thus, since $P_0\big(|u_n^t X|^2 \le l_p^{(n)}\big) \to 0$, we get
$$\varepsilon \ge \lim_{n\to\infty} D(\Gamma_n, P_n) \ge \lim_{n\to\infty} D(I, P_n) \ge (1-\varepsilon)D(I, P_0) = (1-\varepsilon)\frac12,$$
which says that $\varepsilon \ge 1/3$.

Given a unit vector $e$ and $r > 0$, take the point mass contamination $\delta_r = \delta_{re}$. If $P_{\varepsilon,r} = (1-\varepsilon)P_0 + \varepsilon\delta_r$ and $\delta[\cdot]$ stands for the Kronecker-type indicator (it is 1 if the inequality holds, 0 otherwise), we consider
$$m_r(u) = \min\Big( (1-\varepsilon)P_0\left(-\sqrt{u^t\Gamma u} \le u^t X \le \sqrt{u^t\Gamma u}\right) + \varepsilon\,\delta\big[r^2(u^t e)^2 \le u^t\Gamma u\big],$$
$$(1-\varepsilon)\left\{P_0\left(u^t X \le -\sqrt{u^t\Gamma u}\right) + P_0\left(u^t X \ge \sqrt{u^t\Gamma u}\right)\right\} + \varepsilon\,\delta\big[r^2(u^t e)^2 \ge u^t\Gamma u\big] \Big).$$

Notation 14. Put $h_e(v) = v^t\Gamma v/(v^t e)^2$,
$$B^e_r = \{v \in S^{p-1} : h_e(v) < r^2\}, \qquad F^e_r = \{v \in S^{p-1} : h_e(v) = r^2\}, \qquad A^e_r = \{v \in S^{p-1} : h_e(v) > r^2\},$$
and
$$gb^e_r = \inf_{v\in B^e_r} g(v), \qquad Gb^e_r = \sup_{v\in B^e_r} g(v), \qquad Ga^e_r = \sup_{v\in A^e_r} g(v), \qquad ga^e_r = \inf_{v\in A^e_r} g(v).$$
If either $B^e_r = \emptyset$ or $A^e_r = \emptyset$, put $gb^e_r = \infty$ and $Gb^e_r = -\infty$, or $ga^e_r = \infty$ and $Ga^e_r = -\infty$, respectively.

The following lemma gives us the depth of any symmetric positive matrix $\Gamma$ under point mass contaminations.

Lemma 15. It holds that
$$D(\Gamma, P_{\varepsilon,r}) = \min\big( (1-\varepsilon)(1 - Gb^e_r) + \varepsilon,\;\; (1-\varepsilon)\,ga^e_r + \varepsilon,\;\; (1-\varepsilon)\,gb^e_r,\;\; (1-\varepsilon)(1 - Ga^e_r) \big). \qquad (15)$$

Proof. If $h_e(v) < r^2$, then $(1-\varepsilon)g(v) + \varepsilon\,\delta[h_e(v) \ge r^2] \ge (1-\varepsilon)\,gb^e_r$ and $(1-\varepsilon)(1-g(v)) + \varepsilon\,\delta[h_e(v) \le r^2] \ge (1-\varepsilon)(1 - Gb^e_r) + \varepsilon$. If $h_e(v) > r^2$, then $(1-\varepsilon)g(v) + \varepsilon\,\delta[h_e(v) \ge r^2] \ge (1-\varepsilon)\,ga^e_r + \varepsilon$ and $(1-\varepsilon)(1-g(v)) + \varepsilon\,\delta[h_e(v) \le r^2] \ge (1-\varepsilon)(1 - Ga^e_r)$, and the lemma follows.

Corollary 16. Take $\Gamma$ a matrix whose eigenvector $v_1$ coincides with $e$. If $r > l_1^{1/2}$, consider $v_{0,r} \in S^{p-1}$ such that $v_{0,r}^t\big(\Gamma - r^2 v_1 v_1^t\big)v_{0,r} = 0$. Then
$$D(\Gamma, P_{\varepsilon,r}) = \begin{cases} \min\{(1-\varepsilon)(1-g(v_1)),\; (1-\varepsilon)g(v_p) + \varepsilon\} & \text{if } r \le l_1^{1/2}, \\[4pt] \min\{(1-\varepsilon)g(v_p) + \varepsilon,\; (1-\varepsilon)(1-g(v_1)) + \varepsilon,\; (1-\varepsilon)g(v_{0,r}),\; (1-\varepsilon)(1-g(v_{0,r}))\} & \text{if } r > l_1^{1/2}. \end{cases}$$

Proof. Observe that $q_1 = l_1^{1/2}/|v_1^t e| = l_1^{1/2}$ and $q_j = \infty$ for $j = 2, \ldots, p$.
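As a side check on the closed form above, the first case of Corollary 16 can be compared against a brute-force minimization over directions. The sketch below is our own illustration, not from the paper: pure Python with the standard normal CDF built from `math.erf`, and the 2-D matrix $\Gamma = \mathrm{diag}(l_1, l_2)$, contamination level $\varepsilon$, radius $r$ and grid size are hypothetical choices. It also confirms the depth-$1/2$ claim of Corollary 12 for $\Gamma = \Phi^{-1}(3/4)^2 I$.

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):
    # inverse CDF by bisection; ample accuracy for a sanity check
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative 2-D example (our choice of numbers): Gamma = diag(l1, l2),
# contamination direction e = v1 = (1, 0), point mass at distance r < sqrt(l1).
l1, l2, eps, r = 1.5, 0.4, 0.1, 0.9
assert r < math.sqrt(l1)

def g(s):
    # g(u) = P0(-s <= u'X <= s) with s = sqrt(u' Gamma u), X ~ N(0, I)
    return 2.0 * Phi(s) - 1.0

# brute-force depth of Gamma under P_{eps,r} over a grid of unit directions
depth = min(
    min((1 - eps) * g(s) + eps * (r**2 * c**2 <= l1 * c**2 + l2 * sn**2),
        (1 - eps) * (1 - g(s)) + eps * (r**2 * c**2 >= l1 * c**2 + l2 * sn**2))
    for k in range(20000)
    for th in [math.pi * k / 20000]
    for c, sn in [(math.cos(th), math.sin(th))]
    for s in [math.sqrt(l1 * c**2 + l2 * sn**2)]
)

# closed form of Corollary 16, case r <= sqrt(l1)
closed = min((1 - eps) * (1 - g(math.sqrt(l1))),
             (1 - eps) * g(math.sqrt(l2)) + eps)
assert abs(depth - closed) < 1e-6

# Corollary 12: for eps = 0 the deepest matrix Phi^{-1}(3/4)^2 I has depth 1/2
s0 = Phi_inv(0.75)   # ~0.6745, the paper's Phi^{-1}(3/4)
assert abs(min(g(s0), 1 - g(s0)) - 0.5) < 1e-9
```

Since $g$ is monotone in $\sqrt{u^t\Gamma u}$, the brute-force minimum is attained at the eigenvector directions, which is exactly what the closed form encodes.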
If $r < l_1^{1/2}$, then $B^e_r = \emptyset$, $Gb^e_r = -\infty$ and $gb^e_r = \infty$. Since $\{v_1, v_2, \ldots, v_p\} \subset A^e_r$, then $Ga^e_r = g(v_1)$ and $ga^e_r = g(v_p)$. Thus,
$$D(\Gamma, P_{\varepsilon,r}) = \min\{(1-\varepsilon)g(v_p) + \varepsilon,\;\; (1-\varepsilon)(1-g(v_1))\}.$$
If $r > l_1^{1/2}$, then $\{v_2, \ldots, v_p\} \notin B^e_r$ while $v_1 \in B^e_r$, so $Gb^e_r = g(v_1)$. Take now $v_{0,r} \in S^{p-1}$ such that $v_{0,r}^t\big(\Gamma - r^2 v_1 v_1^t\big)v_{0,r} = 0$. Since $g(v_2) \le g(v_{0,r}) < g(v_1)$, $gb^e_r = g(v_{0,r})$. Observe that $\{v_2, \ldots, v_p\} \subset A^e_r$ and $v_1 \notin A^e_r$. Then $Ga^e_r = g(v_{0,r})$ and $ga^e_r = g(v_p)$. Since
$$D(\Gamma, P_{\varepsilon,r}) = \min\{(1-\varepsilon)(1 - Gb^e_r) + \varepsilon,\;\; (1-\varepsilon)\,ga^e_r + \varepsilon,\;\; (1-\varepsilon)\,gb^e_r,\;\; (1-\varepsilon)(1 - Ga^e_r)\},$$
we have that
$$D(\Gamma, P_{\varepsilon,r}) = \begin{cases} \min\{(1-\varepsilon)g(v_p) + \varepsilon,\; (1-\varepsilon)(1-g(v_1))\} & \text{if } r \le l_1^{1/2}, \\[4pt] \min\{(1-\varepsilon)(1-g(v_1)) + \varepsilon,\; (1-\varepsilon)g(v_p) + \varepsilon,\; (1-\varepsilon)g(v_{0,r}),\; (1-\varepsilon)(1-g(v_{0,r}))\} & \text{if } r > l_1^{1/2}. \end{cases} \qquad (16)$$

The following lemma, restricted to matrices whose eigenvector associated with the largest eigenvalue shares the same direction as the contamination, gives us which should be the deepest estimator in this class.

Lemma 17. Given $\Gamma = \sum_j l_j v_j v_j^t$, $l_1 \ge \cdots \ge l_p$, $v_i^t v_j = \delta_{ij}$ and $v_1 = e$, the deepest matrices in this class are given by
$$\Gamma = \begin{cases} \beta I, & \text{if } r \le l_1^{1/2}, \\[2pt] \sum_j l_j v_j v_j^t, & \text{if } r > l_1^{1/2}, \end{cases}$$
with
$$\beta = \left[\Phi^{-1}\left(\frac{3-4\varepsilon}{4(1-\varepsilon)}\right)\right]^2, \qquad l_1 = \left[\Phi^{-1}\left(\frac{3-\varepsilon}{4(1-\varepsilon)}\right)\right]^2, \qquad l_p = \left[\Phi^{-1}\left(\frac{3-5\varepsilon}{4(1-\varepsilon)}\right)\right]^2,$$
$l_p \le l_{p-1} \le \cdots \le l_2 < \Phi^{-1}\big(\frac34\big)$, and $D(\Gamma, P_{\varepsilon,r}) = (1-\varepsilon)/2$.

Proof. If $r < l_1^{1/2}$, then the deepest matrix should verify
$$(1-\varepsilon)g(v_p) + \varepsilon = (1-\varepsilon)(1-g(v_1)), \qquad\text{i.e.}\qquad g(v_p) + g(v_1) = \frac{1-2\varepsilon}{1-\varepsilon}.$$
To increase the depth, $g(v_p)$ should increase and $g(v_1)$ decrease, until they coincide. This implies that
$$g(v_p) = g(v_1) = \frac{1-2\varepsilon}{2(1-\varepsilon)}, \qquad l_p^{1/2} = l_1^{1/2} = \Phi^{-1}\left(\frac{3-4\varepsilon}{4(1-\varepsilon)}\right),$$
and the deepest matrix is a multiple of the identity matrix. Let us now take $l_1^{1/2} < r$. Thus we get
$$D(\Gamma, P_{\varepsilon,r}) = \min\{(1-\varepsilon)(1-g(v_1)) + \varepsilon,\;\; (1-\varepsilon)g(v_p) + \varepsilon,\;\; (1-\varepsilon)g(v_{0,r}),\;\; (1-\varepsilon)(1-g(v_{0,r}))\},$$
with $g(v_p) \le \cdots \le g(v_2) < g(v_{0,r}) < g(v_1)$.
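The eigenvalue formulas in Lemma 17 make the $\varepsilon = 1/3$ breakdown visible numerically: as $\varepsilon \uparrow 1/3$, the argument of $\Phi^{-1}$ for $l_1$ tends to 1 (so $l_1 \to \infty$), while the argument for $l_p$ tends to $1/2$ (so $l_p \to 0$). A small self-contained check (our own sketch; the bisection inverse of $\Phi$ and the sample values of $\varepsilon$ are implementation conveniences, not from the paper):

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

def l1(eps):    # largest eigenvalue of the deepest matrix (Lemma 17)
    return Phi_inv((3 - eps) / (4 * (1 - eps))) ** 2

def lp(eps):    # smallest eigenvalue
    return Phi_inv((3 - 5 * eps) / (4 * (1 - eps))) ** 2

def beta(eps):  # the multiple of the identity in the r <= sqrt(l1) case
    return Phi_inv((3 - 4 * eps) / (4 * (1 - eps))) ** 2

# at eps = 0 all three collapse to Phi^{-1}(3/4)^2 (Corollary 12)
b0 = Phi_inv(0.75) ** 2
assert all(abs(f(0.0) - b0) < 1e-9 for f in (l1, lp, beta))

# ordering l_p < beta < l_1 for 0 < eps < 1/3
for e in (0.05, 0.1, 0.2, 0.3):
    assert lp(e) < beta(e) < l1(e)

# breakdown at eps = 1/3: (3-5e)/(4(1-e)) = 1/2, so l_p = 0 ...
assert abs(lp(1/3)) < 1e-9
# ... while the argument for l_1 tends to 1, so l_1 blows up
assert abs((3 - 1/3) / (4 * (1 - 1/3)) - 1.0) < 1e-12
assert l1(0.32) > l1(0.2) > l1(0.1)
```

This is the numerical counterpart of Remark 18: the bounds $B_E$ and $B_I$ of Chen et al. (2018) reach their extremes exactly at $\varepsilon = 1/3$.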
Given $\Gamma$, for any $l_p \le s \le l_1$ we have $l_p/r^2 \le s/r^2 \le l_1/r^2 < 1$, and the equation $\sqrt{v_{0,r}^t\Gamma v_{0,r}} = s = r\,|v_{0,r}^t v_1|$ has a solution $v_{0,r}$. Geometrically, $\sqrt{v_{0,r}^t\Gamma v_{0,r}} = s = r\,|v_{0,r}^t v_1|$ means that $v_{0,r}$ belongs to the intersection of an ellipsoid and a cone around $v_1$; $v_{0,r}$ is a function of $r$, and $v_{0,r} \to v_1$ as $r \to \infty$. The best estimator should be selected so that $(1-\varepsilon)g(v_{0,r}) = (1-\varepsilon)(1-g(v_{0,r}))$, which entails $g(v_{0,r}) = 1/2$, or $\sqrt{v_{0,r}^t\Gamma v_{0,r}} = \Phi^{-1}(3/4)$. Then we take matrices $\Gamma$ such that $\Phi^{-1}(3/4) \le l_1^{1/2}$ (in particular $s = \Phi^{-1}(3/4) = 0.6745$). All four quantities involved in $D(\Gamma, P_{\varepsilon,r})$ must be equal in order to get the best estimator. In this case we get that
$$(1-\varepsilon)(1-g_1) + \varepsilon = \frac{1-\varepsilon}{2} = (1-\varepsilon)g_p + \varepsilon,$$
$$(1-g_1) + \frac{\varepsilon}{1-\varepsilon} = \frac12, \qquad g_p + \frac{\varepsilon}{1-\varepsilon} = \frac12,$$
$$g_1 = \frac12 + \frac{\varepsilon}{1-\varepsilon} = \frac{1+\varepsilon}{2(1-\varepsilon)}, \qquad g_p = \frac12 - \frac{\varepsilon}{1-\varepsilon} = \frac{1-3\varepsilon}{2(1-\varepsilon)},$$
which implies that
$$g_1 = 2\Phi(l_1^{1/2}) - 1 = \frac12 + \frac{\varepsilon}{1-\varepsilon} \;\Longrightarrow\; \Phi(l_1^{1/2}) = \frac34 + \frac{\varepsilon}{2(1-\varepsilon)} = \frac{3-3\varepsilon+2\varepsilon}{4(1-\varepsilon)} = \frac{3-\varepsilon}{4(1-\varepsilon)},$$
$$g_p = 2\Phi(l_p^{1/2}) - 1 = \frac12 - \frac{\varepsilon}{1-\varepsilon} \;\Longrightarrow\; \Phi(l_p^{1/2}) = \frac34 - \frac{\varepsilon}{2(1-\varepsilon)} = \frac{3-3\varepsilon-2\varepsilon}{4(1-\varepsilon)} = \frac{3-5\varepsilon}{4(1-\varepsilon)}.$$

Remark 18. $\Phi(l_1^{1/2})$ and $\Phi(l_p^{1/2})$ are the quantities predicted by the bounds in Chen et al. (2018), $B_E(\varepsilon) = \left[\frac{1}{\sqrt\beta}\Phi^{-1}\left(\frac{3-\varepsilon}{4(1-\varepsilon)}\right) - 1\right]$ and $B_I(\varepsilon) = 1 - \frac{1}{\sqrt\beta}\Phi^{-1}\left(\frac{3-5\varepsilon}{4(1-\varepsilon)}\right)$. Both functions go to their maximum and minimum values when $\varepsilon = 1/3$.

Lemma 19. Let us consider the case of a family of matrices $\Gamma_r$ with largest eigenvalue $l_1^{(r)} \to \infty$, $r^2 (l_1^{(r)})^{-1/2} \to 1/2$, and the other eigenvalues going to 0. Then $\lim_{r\to\infty} D(\Gamma_r, P_{\varepsilon,r}) = \min(\varepsilon,\, 1-\varepsilon)$.

Proof. We have
to analyze
$$\min\Big( (1-\varepsilon)P_0\left( |u_r^t X|^2 \le \frac{1}{\sum_{j=2}^p (l_j^{(r)})^{-1}(w_r^t e_j)^2} \right) + \varepsilon\,\delta\big[ r^2 (l_1^{(r)})^{-1/2}\, w_r^t e e^t w_r \le 1 \big],$$
$$(1-\varepsilon)P_0\left( |u_r^t X|^2 \ge \frac{1}{\sum_{j=2}^p (l_j^{(r)})^{-1}(w_r^t e_j)^2} \right) + \varepsilon\,\delta\big[ r^2 (l_1^{(r)})^{-1/2}\, w_r^t e e^t w_r \ge 1 \big] \Big),$$
where $w_r$ is a vector which yields the minimum and $u_r = \Gamma_r^{-1/2}w_r/\|\Gamma_r^{-1/2}w_r\|$.

(i) If $w_r \in L(e_2, \ldots, e_p)$, we get, for $r$ sufficiently large, $\min\{\varepsilon,\, 1-\varepsilon\}$.

(ii) If $w_r \in [L(e_2, \ldots, e_p)]^c$, we get, for $r$ sufficiently large, $\min\{\varepsilon,\, 1-\varepsilon\}$.

Then, for $r$ large enough, since $\varepsilon \le 1/2$,
$$D(\Gamma_r, P_{\varepsilon,r}) = \varepsilon, \qquad (17)$$
and we conclude the statement of the lemma.

Lemma 20. The critical points of $\Phi(\sqrt{v^t\Gamma v})$ (respectively $1 - \Phi(\sqrt{v^t\Gamma v})$) subject to $v^t\Gamma v - r^2(v^t e)^2 \ge 0$ (idem $v^t\Gamma v - r^2(v^t e)^2 \le 0$) are either $v_p$ or $v_1$, or occur on $F^e_r$.

Proof. Take the Lagrangian $\Phi(\sqrt{v^t\Gamma v}) + \lambda\big(v^t\Gamma v - r^2(v^t e)^2\big) + \gamma(v^t v - 1)$. The Karush–Kuhn–Tucker conditions say that if a local minimum occurs at $\tilde v$, then either $\tilde v^t\Gamma\tilde v - r^2(\tilde v^t e)^2 = 0$ or we have to consider the Lagrangian $\Phi(\sqrt{v^t\Gamma v}) + \gamma(v^t v - 1)$. In the latter case, it was shown that the critical points are the eigenvectors of $\Gamma$. Suppose that $v_{j_0}$, $j_0 < p$, is the eigenvector corresponding to the minimum $l_{j_0}$ such that $v_{j_0}^t\Gamma v_{j_0} > r^2\big(v_{j_0}^t e\big)^2$; by the continuity of the functions we could find another $\bar v$ such that $v_{j_0}^t\Gamma v_{j_0} > \bar v^t\Gamma\bar v > r^2(\bar v^t e)^2 > r^2\big(v_{j_0}^t e\big)^2$, which is a contradiction. Therefore the critical points occur at $\tilde v = v_p$ ($v_1$) or satisfy $\tilde v^t\Gamma\tilde v - r^2(\tilde v^t e)^2 = 0$.

The following lemma will give us the maximum bias of the deepest estimator under point mass contaminations.

Lemma 21. If $\hat\Gamma$ stands for a deepest estimator, then $D(\hat\Gamma, P_{\varepsilon,r}) \le (1-\varepsilon)/2$.

Proof. Given $\Gamma = \sum_j l_j v_j v_j^t$, the depth of a matrix, if $r > l_1^{1/2}$, is given by
$$\min\{(1-\varepsilon)(1 - Gb^e_r) + \varepsilon,\;\; (1-\varepsilon)\,ga^e_r + \varepsilon,\;\; (1-\varepsilon)\,gb^e_r,\;\; (1-\varepsilon)(1 - Ga^e_r)\}.$$
Call $g_m = \min_{v\in F^e_r} g(v)$ and $g_M = \max_{v\in F^e_r} g(v)$.
The possible configurations of a matrix $\Gamma$, depending on whether each of $v_1$ and $v_p$ lies in $A^e_r$, $B^e_r$ or $F^e_r$, are the following:

(i) $v_1, v_p \in A^e_r$: the depth of $\Gamma$ is $\min\{(1-\varepsilon)(1-g_M) + \varepsilon,\; (1-\varepsilon)g_p + \varepsilon,\; (1-\varepsilon)g_m,\; (1-\varepsilon)(1-g_1)\}$;

(ii) $v_1, v_p \in B^e_r$: $\min\{(1-\varepsilon)(1-g_1) + \varepsilon,\; (1-\varepsilon)g_m + \varepsilon,\; (1-\varepsilon)g_p,\; (1-\varepsilon)(1-g_M)\}$;

(iii) $v_1, v_p \in F^e_r$: $\min\{(1-\varepsilon)(1-g_1) + \varepsilon,\; (1-\varepsilon)g_p + \varepsilon,\; (1-\varepsilon)g_p,\; (1-\varepsilon)(1-g_1)\}$;

(iv) $v_1 \in A^e_r$, $v_p \in B^e_r$: $\min\{(1-\varepsilon)(1-g_M) + \varepsilon,\; (1-\varepsilon)g_m + \varepsilon,\; (1-\varepsilon)g_p,\; (1-\varepsilon)(1-g_1)\}$;

(v) $v_p \in A^e_r$, $v_1 \in B^e_r$: $\min\{(1-\varepsilon)(1-g_1) + \varepsilon,\; (1-\varepsilon)g_p + \varepsilon,\; (1-\varepsilon)g_m,\; (1-\varepsilon)(1-g_M)\}$;

(vi) $v_1 \in A^e_r$, $v_p \in F^e_r$: $\min\{(1-\varepsilon)(1-g_M) + \varepsilon,\; (1-\varepsilon)g_p + \varepsilon,\; (1-\varepsilon)g_p,\; (1-\varepsilon)(1-g_1)\}$;

(vii) $v_1 \in B^e_r$, $v_p \in F^e_r$: $\min\{(1-\varepsilon)(1-g_1) + \varepsilon,\; (1-\varepsilon)g_p + \varepsilon,\; (1-\varepsilon)g_p,\; (1-\varepsilon)(1-g_M)\}$;

(viii) $v_1 \in A^e_r$, $v_p \in F^e_r$: $\min\{(1-\varepsilon)(1-g_M) + \varepsilon,\; (1-\varepsilon)g_p + \varepsilon,\; (1-\varepsilon)g_p,\; (1-\varepsilon)(1-g_1)\}$;

(ix) $v_1, v_p \in F^e_r$: $\min\{(1-\varepsilon)(1-g_1) + \varepsilon,\; (1-\varepsilon)g_p + \varepsilon,\; (1-\varepsilon)g_p,\; (1-\varepsilon)(1-g_1)\}$.

In any case, we have $D(\Gamma, P_{\varepsilon,r}) \le \min\{(1-\varepsilon)g_s,\; (1-\varepsilon)(1-g_t)\}$ with $s \in \{p, m\}$, $t \in \{1, M\}$ and $g_s \le 1-g_t$, and the statement follows.

Theorem 22. The asymptotic breakdown point of the deepest estimator is $1/3$.

Proof. (i) The lemma above says that if the estimator moves to the boundary of the domain of the eigenvalues, then $\varepsilon \ge 1/3$, and so $\varepsilon^* \ge 1/3$.

(ii) If the depth estimators $\{\hat\Gamma_r\}$ remain bounded in the $\varepsilon$-neighborhood, then for point mass contaminations we have $D(\Gamma_r, P_{\varepsilon,r}) \le D(\hat\Gamma_r, P_{\varepsilon,r})$ for $r$ large enough. By the preceding lemmas, we have $\varepsilon \le (1-\varepsilon)/2$, so $\varepsilon \le 1/3$. Then, if the depth estimator remains bounded in the $\varepsilon$-contamination neighborhood, the level of contamination $\varepsilon$ is less than $1/3$. Equivalently, if $\varepsilon > 1/3$ then the depth estimator becomes unbounded in the
$\varepsilon$-contamination neighborhood. Therefore $\varepsilon^* \le 1/3$.

Next, we follow closely Theorems 4.1 and 4.2 of Chen and Tyler (2002). Let us take the contaminated distribution $P_{\varepsilon,Q} = (1-\varepsilon)P_0 + \varepsilon Q$ and, for $A \subset \{\Gamma \succeq 0\}$, define
$$\|A\| = \sup_{\Gamma\in A} \max\left\{ \frac{\|\Gamma\|_{op}}{\|\beta I\|_{op}},\;\; \|\beta I\|_{op}\,\|\Gamma^{-1}\|_{op} \right\},$$
$$L(\eta, P_0) = \{\Gamma \succeq 0 : D(\Gamma, P_0) \ge D_M(P_0) - \eta\}, \qquad \Lambda(\varepsilon, P_0) = \inf_Q D_M(P_{\varepsilon,Q}),$$
$$\delta(\varepsilon, P_0) = \frac{\Lambda(\varepsilon, P_0) - (1-\varepsilon)D_M(P_0)}{1-\varepsilon}, \qquad M(P) = \{\Gamma \succeq 0 : D(\Gamma, P) = D_M(P)\} = \bigcap_{0 < \varepsilon < D_M(P)} L(\varepsilon, P).$$

Lemma 23. Let $\varepsilon < 1/3$ and $P_{\varepsilon,Q} = (1-\varepsilon)P_0 + \varepsilon Q$. It holds that:

1. $\Lambda(\varepsilon, P_0) \ge (1-\varepsilon)D_M(P_0)$ and $D_M(P_{\varepsilon,Q}) \le (1-\varepsilon)D_M(P_0) + \varepsilon$.

2. Set $\alpha = \frac{\varepsilon}{1-\varepsilon} - \delta(\varepsilon, P_0)$. If $\Gamma \notin L(\alpha, P_0)$, then $D(\Gamma, P_{\varepsilon,Q}) < \Lambda(\varepsilon, P_0)$ and $\Gamma$ cannot be a deepest estimator.

3. $B(\hat\Gamma, \varepsilon, P_0) \le \left\| L\left(\frac{\varepsilon}{1-\varepsilon}, P_0\right)\right\| = \max\left\{ \frac{1}{\beta}\Phi^{-1}\left(\frac{3-\varepsilon}{4(1-\varepsilon)}\right),\;\; \frac{\beta}{\Phi^{-1}\left(\frac{3-5\varepsilon}{4(1-\varepsilon)}\right)} \right\}$.

Proof. Since $P_0 = \Phi$, $D_M(\Phi) = 1/2$. Observe that $\delta \ge 0$ since, given any distribution $Q$ on $\mathbb{R}^p$,
$$P_{\varepsilon,Q}\left(|u^t X|^2 \le u^t\Gamma u\right) \ge (1-\varepsilon)P_0\left(|u^t X|^2 \le u^t\Gamma u\right), \qquad P_{\varepsilon,Q}\left(|u^t X|^2 \ge u^t\Gamma u\right) \ge (1-\varepsilon)P_0\left(|u^t X|^2 \ge u^t\Gamma u\right),$$
so that
$$\min\left\{P_{\varepsilon,Q}\big(|u^t X|^2 \le u^t\Gamma u\big),\, P_{\varepsilon,Q}\big(|u^t X|^2 \ge u^t\Gamma u\big)\right\} \ge (1-\varepsilon)\min\left\{P_0\big(|u^t X|^2 \le u^t\Gamma u\big),\, P_0\big(|u^t X|^2 \ge u^t\Gamma u\big)\right\} \ge (1-\varepsilon)D(\Gamma, P_0).$$
Since $D(\Gamma, P_{\varepsilon,Q}) = \inf_{u\in S^{p-1}} \min\big\{P_{\varepsilon,Q}(|u^tX|^2 \le u^t\Gamma u),\, P_{\varepsilon,Q}(|u^tX|^2 \ge u^t\Gamma u)\big\}$, it follows that
$$D(\Gamma, P_{\varepsilon,Q}) \ge (1-\varepsilon)D(\Gamma, P_0), \qquad D_M(P_{\varepsilon,Q}) \ge D(\beta I, P_{\varepsilon,Q}) \ge (1-\varepsilon)D_M(P_0), \qquad \Lambda(\varepsilon, P_0) = \inf_Q D_M(P_{\varepsilon,Q}) \ge (1-\varepsilon)D_M(P_0).$$
On the other hand,
$$P_{\varepsilon,Q}\left(|u^t X|^2 \le u^t\Gamma u\right) \le (1-\varepsilon)P_0\left(|u^t X|^2 \le u^t\Gamma u\right) + \varepsilon, \qquad P_{\varepsilon,Q}\left(|u^t X|^2 \ge u^t\Gamma u\right) \le (1-\varepsilon)P_0\left(|u^t X|^2 \ge u^t\Gamma u\right) + \varepsilon,$$
so that
$$\min\left\{P_{\varepsilon,Q}\big(|u^t X|^2 \le u^t\Gamma u\big),\, P_{\varepsilon,Q}\big(|u^t X|^2 \ge u^t\Gamma u\big)\right\} \le (1-\varepsilon)\min\left\{P_0\big(|u^t X|^2 \le u^t\Gamma u\big),\, P_0\big(|u^t X|^2 \ge u^t\Gamma u\big)\right\} + \varepsilon$$
and $D(\Gamma, P_{\varepsilon,Q}) \le (1-\varepsilon)D(\Gamma, P_0) + \varepsilon$, and (1) follows.

(2) Set $\alpha = \frac{\varepsilon}{1-\varepsilon} - \delta(\varepsilon, P_0)$. Then $0 \le \alpha < 1/2$. If $\Gamma \notin L(\alpha, P_0)$, we have $D(\Gamma, P_0) < D_M(P_0) - \alpha$. Therefore,
$$D(\Gamma, P_{\varepsilon,Q}) \le (1-\varepsilon)D(\Gamma, P_0) + \varepsilon < (1-\varepsilon)\left( D_M(P_0) - \frac{\varepsilon}{1-\varepsilon} + \delta(\varepsilon, P_0) \right) + \varepsilon$$
$$\le (1-\varepsilon)\left( D_M(P_0) - \frac{\varepsilon}{1-\varepsilon} + \frac{\Lambda(\varepsilon, P_0) - (1-\varepsilon)D_M(P_0)}{1-\varepsilon} \right) + \varepsilon = \Lambda(\varepsilon, P_0),$$
and (2) follows.

(3) $L(\alpha, P_0) = L(\varepsilon/(1-\varepsilon) - \delta, P_0) \subset L(\varepsilon/(1-\varepsilon), P_0)$. Then $\Gamma \in (L(\alpha, P_0))^c$ implies, by (2), that $\Gamma \in (M(P_{\varepsilon,Q}))^c$, i.e., $(L(\alpha, P_0))^c \subset (M(P_{\varepsilon,Q}))^c$, which says that $M(P_{\varepsilon,Q}) \subset L(\alpha, P_0) \subset L(\varepsilon/(1-\varepsilon), P_0)$ for all distributions $Q$.
Since $D(\Gamma, P_0) = \min(g(v_p),\, 1-g(v_1))$ and $D_M(P_0) = 1/2$,
$$L(\varepsilon/(1-\varepsilon), P_0) = \left\{\Gamma \succeq 0 : \min(g(v_p),\, 1-g(v_1)) \ge \frac12 - \frac{\varepsilon}{1-\varepsilon}\right\}$$
$$= \left\{\Gamma \succeq 0 : 1-g(v_1) \ge g(v_p) \ge \frac12 - \frac{\varepsilon}{1-\varepsilon}\right\} \cup \left\{\Gamma \succeq 0 : g(v_p) \ge 1-g(v_1) \ge \frac12 - \frac{\varepsilon}{1-\varepsilon}\right\}$$
$$= \left\{\Gamma \succeq 0 : \frac12 + \frac{\varepsilon}{1-\varepsilon} \ge g(v_1) \ge g(v_p) \ge \frac12 - \frac{\varepsilon}{1-\varepsilon}\right\}.$$
Then $\|\Gamma\|_{op} \le \Phi^{-1}\left(\frac{3-\varepsilon}{4(1-\varepsilon)}\right)$ and $\|\Gamma^{-1}\|_{op} \le 1\big/\Phi^{-1}\left(\frac{3-5\varepsilon}{4(1-\varepsilon)}\right)$ if $\Gamma \in L(\varepsilon/(1-\varepsilon), P_0)$, and
$$\|L(\varepsilon/(1-\varepsilon), P_0)\| = \sup_{\Gamma\in L(\varepsilon/(1-\varepsilon),\, P_0)} \max\left\{\frac{\|\Gamma\|_{op}}{\beta},\;\; \beta\,\|\Gamma^{-1}\|_{op}\right\} = \max\left\{\frac1\beta\Phi^{-1}\left(\frac{3-\varepsilon}{4(1-\varepsilon)}\right),\;\; \frac{\beta}{\Phi^{-1}\left(\frac{3-5\varepsilon}{4(1-\varepsilon)}\right)}\right\}.$$
The bias functions turn out to be
$$b_S(\hat\Gamma, \varepsilon, P) = \lambda^{(1)}\left(\hat\Gamma(P_0)^{-1/2}\,\hat\Gamma(P)\,\hat\Gamma(P_0)^{-1/2}\right) = \beta^{-1}\lambda^{(1)}(\hat\Gamma(P)) = \beta^{-1}\,\|\hat\Gamma(P)\|_{op},$$
$$b_I(\hat\Gamma, \varepsilon, P) = \lambda^{(1)}\left(\hat\Gamma(P_0)^{1/2}\,\hat\Gamma^{-1}(P)\,\hat\Gamma(P_0)^{1/2}\right) = \beta\,\lambda^{(1)}(\hat\Gamma^{-1}(P)) = \beta\,\|\hat\Gamma^{-1}(P)\|_{op},$$
$$B(\hat\Gamma, \varepsilon, P_0) = \max\left\{\beta^{-1}\sup_{P\in\mathcal{P}_\varepsilon}\|\hat\Gamma(P)\|_{op},\;\; \beta\sup_{P\in\mathcal{P}_\varepsilon}\|\hat\Gamma^{-1}(P)\|_{op}\right\}.$$
Since $M(P_{\varepsilon,Q}) \subset L(\varepsilon/(1-\varepsilon), P_0)$, then $B(\hat\Gamma, \varepsilon, P_0) \le \|L(\varepsilon/(1-\varepsilon), P_0)\|$.

Corollary 24. $B(\hat\Gamma, \varepsilon, P_0) = \|L(\varepsilon/(1-\varepsilon), P_0)\|$.

Proof. The point mass contaminations yield estimators whose bias reaches the upper bound.

References

• J. G. Adrover, R. A. Maronna and V. J. Yohai (2002). Relationships between maximum depth and projection estimates. Journal of Statistical Planning and Inference, 105, 363–375.

• E. Carrizosa (1996). A characterization of halfspace depth. Journal of Multivariate Analysis, 58, 21–26.

• M. Chen, C. Gao,
and Z. Ren (2018). Robust covariance and scatter matrix estimation under Huber's contamination model. The Annals of Statistics, 46(5), 1932–1960.

• Z. Chen and D. E. Tyler (2002). The influence function and maximum bias of Tukey's median. The Annals of Statistics, 30(6), 1737–1759.

• D. L. Donoho and M. Gasko (1992). Breakdown properties of location estimates based on halfspace depth and projected outlyingness. The Annals of Statistics, 20(4), 1803–1827.

• S. Nagy, C. Schütt and E. Werner (2019). Halfspace depth and floating body. Statistics Surveys, 13, 52–118.

• P. Huber (1964). Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1), 73–101.

• R. A. Maronna and V. J. Yohai (1991). The breakdown point of simultaneous general M-estimates of regression and scale. Journal of the American Statistical Association, 86, 699–703.

• I. Mizera and C. H. Müller (2004). Location-scale depth. Journal of the American Statistical Association, 99, 949–989.

• D. Paindaveine and G. Van Bever (2018). Halfspace depths for scatter, concentration and shape matrices. The Annals of Statistics, 46(6B), 3276–3307.

• P. J. Rousseeuw and M. Hubert (1999). Regression depth. Journal of the American Statistical Association, 94, 388–433.

• P. J. Rousseeuw and M. Hubert (2018). Computation of robust statistics: depth, median, and related measures. In Handbook of Discrete and Computational Geometry, third ed. (J. E. Goodman, J. O'Rourke and C. D. Tóth, eds.), Discrete Mathematics and its Applications, 1539–1552. Chapman & Hall/CRC.

• J. W. Tukey (1975). Mathematics and the picturing of data. In Proc. International Congress of Mathematicians, Vancouver 1974, vol. 2, 523–531. Canadian Mathematical Congress, Montreal.

• J. Zhang (2002). Some extensions of Tukey's depth function. Journal of Multivariate Analysis, 82, 134–165.
arXiv:2505.07649v1 [math.ST] 12 May 2025

Constructing Bayes Minimax Estimators via Integral Transformations

Dominique Fourdrinier, William E. Strawderman, Martin T. Wells

May 23, 2025

Abstract. The problem of Bayes minimax estimation for the mean of a multivariate normal distribution under quadratic loss has attracted significant attention recently. These estimators have the advantageous property of being admissible, similar to Bayes procedures, while also providing the conservative risk guarantees typical of frequentist methods. This paper demonstrates that Bayes minimax estimators can be derived using integral transformation techniques, specifically through the I-transform and the Laplace transform, as long as appropriate spherical priors are selected. Several illustrative examples are included to highlight the effectiveness of the proposed approach.

AMS 2010 subject classifications: Primary 62C20, 62C15, 62C10; secondary 62A15, 44A20.

Keywords and phrases: Bayes estimate, integral transformations, minimax estimate, multivariate normal mean, superharmonic functions.

Dominique Fourdrinier is Professor, Université de Rouen, LITIS EA 4108, Avenue de l'Université, BP 12, 76801 Saint-Étienne-du-Rouvray, France. E-mail: Dominique.Fourdrinier@univ-rouen.fr. William E. Strawderman is deceased and was Professor, Rutgers University, Department of Statistics. We gratefully acknowledge Bill Strawderman's deep and lasting contributions to statistics, particularly his foundational work in Bayes minimax theory. His insights have shaped the theoretical foundations of decision theory, bridging Bayesian and frequentist approaches, and continue to influence generations of statisticians. Martin T. Wells is Professor, Cornell University, Department of Statistics and Data Science, 1190 Comstock Hall, Ithaca, NY 14853, USA. E-mail: mtw1@cornell.edu.
1 Introduction

The two principal paradigms for estimating the mean of a multivariate normal distribution are the minimax and Bayesian approaches. The minimax framework has been extensively studied, while the Bayesian perspective, although widely applied in certain contexts, is sometimes criticized for its perceived lack of objectivity. In this paper, we investigate the construction of Bayes minimax estimators, which are particularly appealing as they combine the decision-theoretic optimality of Bayes rules with the robustness and conservatism of minimax procedures. As such, these estimators possess desirable properties from both Bayesian and frequentist viewpoints.

In a normal context of dimension $k$, that is, when $X \sim \mathcal{N}_k(\theta, I_k)$, Stein [Ste81] obtains the minimaxity of the general estimator $\delta$ of the form $\delta(X) = X + \gamma(X)$ through the unbiased estimator of the risk $k + 2\,\mathrm{div}\,\gamma(X) + \|\gamma(X)\|^2$. Thus, if, for every $x \in \mathbb{R}^k$, $2\,\mathrm{div}\,\gamma(x) + \|\gamma(x)\|^2 \le 0$, then $\delta$ is minimax. When (generalized) priors $\pi$ on $\theta$ are considered, formal Bayes estimators are, by the Brown identity, of the form $X + \nabla\log m(X)$ (where $\nabla$ denotes the gradient and $m$ the marginal density); this risk condition becomes, after some calculation, for every $x \in \mathbb{R}^k$, $\Delta\sqrt{m(x)} \le 0$ (where $\Delta$ denotes the Laplacian). Stein [Ste81] gives examples of generalized Bayes estimates. In many cases, the superharmonicity of $\sqrt m$ is quite difficult to verify. The difficulty has led researchers to consider the superharmonicity of $m$ instead of $\sqrt m$. This is reasonable as
$$\forall x \in \mathbb{R}^k \qquad \Delta\sqrt{m(x)} = \frac{1}{2\sqrt{m(x)}}\left( \Delta m(x) - \frac12\, \frac{\|\nabla m(x)\|^2}{m(x)} \right). \qquad (1.1)$$
Therefore, if $\Delta m(x) \le 0$, then $\Delta\sqrt{m(x)} \le 0$. A problem that occurs is that, as shown in Fourdrinier, Strawderman, and Wells [FSW98], if one uses a proper prior, the induced marginal cannot be superharmonic. Therefore, if one insists on a superharmonic
marginal, the underlying prior is necessarily improper. However, if the function $\sqrt m$ is superharmonic, the prior may be proper.

The results presented in this article focus on the construction of Bayes minimax procedures. In analyzing such estimators, it is useful to distinguish between construction and verification. Constructing Bayes minimax rules involves identifying a prior that yields the desired minimax properties. This often involves selecting a prior whose marginal distribution satisfies a condition based on superharmonicity, as exemplified in the approaches of [FSW98] and [WZ08]. In contrast, verifying that a given prior leads to a minimax rule is typically more straightforward. Verification generally involves demonstrating that the resulting marginal distribution satisfies an established sufficient condition. For example, [Str71] employed the Brown identity to show that the shrinkage function associated with the Strawderman prior meets the minimaxity condition introduced by [Bar70].

The objective of this paper is not to verify the minimaxity of a given Bayes estimator, but rather to address the inverse problem: the construction of Bayes estimators that are minimax. Section 2 presents the formulation of the problem, outlines the necessary setup, and discusses the superharmonicity of $\sqrt m$. Section 3 introduces a general construction method for spherical priors. Section 4 narrows the focus to the subclass of normal-variance mixtures and provides a concrete example. Section 5 concludes with a discussion of the implications of the results.

The findings of this paper complement those of Fourdrinier et al. [FSW98], who proposed a related construction method applicable specifically to variance mixture priors. While their approach is somewhat less general, it yields sharper results within that subclass. A broader overview of Bayes minimax estimation can be found in Chapter 3 of Fourdrinier, Strawderman, and Wells [FSW18].

2 The Model

Let $X \sim \mathcal{N}_k(\theta, I_k)$.
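Identity (1.1) from the Introduction holds for any smooth positive $m$, and it can be checked by finite differences before setting up the model. The snippet below is our own illustration with the hypothetical test function $m(x) = e^{-x^2/2}$ in one dimension (not a marginal from the paper); in 1-D the Laplacian is just the second derivative.

```python
import math

# Identity (1.1): Lap(sqrt(m)) = (1/(2 sqrt(m))) * (Lap(m) - |grad m|^2 / (2 m)).
# One-dimensional finite-difference check with the illustrative test function
# m(x) = exp(-x^2/2) (our own choice, not a marginal from the paper).
m = lambda x: math.exp(-x * x / 2.0)
h = 1e-4

def d1(f, x):
    # central first difference, approximates the gradient in 1-D
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x):
    # central second difference, approximates the 1-D Laplacian
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

for x in (-1.3, -0.2, 0.7, 2.1):
    lhs = d2(lambda t: math.sqrt(m(t)), x)
    rhs = (d2(m, x) - d1(m, x) ** 2 / (2 * m(x))) / (2 * math.sqrt(m(x)))
    assert abs(lhs - rhs) < 1e-5
```

For this particular $m$, both sides are negative only where $x^2 < 2$, which illustrates why superharmonicity of $\sqrt m$ (rather than of $m$) is the delicate global condition the paper is after.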
The central problem of this paper is that of constructing spherical Bayes minimax estimators of $\theta$ under the quadratic loss function $L(\theta, \delta) = \|\delta - \theta\|^2$. Before giving the main results, first recall the elements that lead to Bayes minimax estimates. Assume that $\theta$ is distributed according to a prior probability measure with density $\pi$. Then the marginal distribution of $X$ has density with respect to the Lebesgue measure in $\mathbb{R}^k$ given by
$$\forall x \in \mathbb{R}^k \qquad m(x) = \frac{1}{(2\pi)^{k/2}} \int_{\mathbb{R}^k} \exp\left(-\frac12\|x-\theta\|^2\right)\pi(\theta)\,d\theta. \qquad (2.1)$$
When the prior is assumed to be spherically symmetric, that is, $\pi(\theta) = g(\|\theta\|^2)$ for some function $g$, it is characterized by its radial density $\lambda$ related to $g$ by, setting $r = \|\theta\|$,
$$\lambda(r) = \frac{2\pi^{k/2}}{\Gamma(k/2)}\, r^{k-1}\, g(r^2). \qquad (2.2)$$
In that case, the marginal is spherically symmetric as well, that is, of the form $m(x) = \ell(\|x\|)$, for some function $\ell$. Setting $u = \|x\|$, the exact form of the marginal labeling function $\ell(u)$ is deduced in Fourdrinier and Wells [FW96] and equals
$$\ell(u) = \frac{\Gamma(k/2)}{2\pi^{k/2}}\, \frac{\exp(-u^2/2)}{u^{(k-2)/2}} \int_0^\infty \frac{\exp(-r^2/2)}{r^{(k-2)/2}}\, I_{(k-2)/2}(ur)\,\lambda(r)\,dr, \qquad (2.3)$$
where $I_\nu$ denotes the modified Bessel function of order $\nu$. The integral in (2.3) is related to the I-transform, which we discuss below.

The Bayes estimate $\delta_\pi(X)$, defined as the minimizer of the Bayes risk (see Berger [Ber85], p. 17), is given by $\delta_\pi(X) = X + \nabla\log m(X)$. Thus the Bayes estimator $\delta_\pi$ is of the form $\delta_\pi(X) = X + \gamma(X)$. Stein [Ste81] shows,
under suitable integrability conditions, that an unbiased estimator of the risk of $\delta_\pi$ equals $k + 2\,\mathrm{div}\,\gamma(X) + \|\gamma(X)\|^2 = k + 4\,\Delta\sqrt{m(X)}\big/\sqrt{m(X)}$, where the divergence and the Laplacian are denoted by div and $\Delta$, respectively. Thus the risk of $\delta_\pi$ equals $E_\theta[\|\delta_\pi - \theta\|^2] = k + 4\,E_\theta\big[\Delta\sqrt{m(X)}\big/\sqrt{m(X)}\big]$, where $E_\theta$ is the expectation with respect to $\mathcal{N}_k(\theta, I_k)$. Therefore a sufficient condition for $\delta_\pi$ to dominate the usual minimax estimate $\delta_0(X) = X$ is that the square root of the marginal density is superharmonic; in this case $\delta_\pi$ will be minimax as well. In Appendix A.1, we illustrate the difficulty of showing the superharmonicity of $\sqrt m$ for the Strawderman prior [Str71].

There is a connection between properness and superharmonicity. As is shown in Fourdrinier, Strawderman and Wells [FSW98], the fact that the prior is proper implies that the marginal cannot be superharmonic. Consequently, we shall search for priors among those for which the square root of the marginal density is superharmonic. It is important to work with the superharmonicity of the square root of the marginal rather than just the marginal itself because we wish to investigate the minimaxity of Bayes rules based on both proper and improper priors.

We are now in a position to relate the marginal density of $X$ to an I-transform (see Appendix A.3) and its representation $\ell$ in (2.3). Suppose that the prior $\pi$ is a spherically symmetric distribution with radial density $\lambda$. By definition of the I-transform, it follows that (2.3) equals
$$\ell(u) = h(u)\, \mathcal{I}_{(k-2)/2}[f](u), \qquad (2.4)$$
with
$$h(u) = \frac{\Gamma(k/2)}{2\pi^{k/2}}\, u^{(1-k)/2}\, \exp(-u^2/2) \qquad (2.5)$$
and
$$f(r) = r^{(1-k)/2}\, \exp(-r^2/2)\,\lambda(r). \qquad (2.6)$$
According to (1.1), the superharmonicity condition on $\sqrt m$ is equivalent to
$$\Delta m(x) - \frac12\, \frac{\|\nabla m(x)\|^2}{m(x)} \le 0 \qquad \forall x \in \mathbb{R}^k.
(2.7)$$
Since $\nabla m(x) = \ell'(\|x\|)\, x/\|x\|$ and $\Delta m(x) = \ell'(\|x\|)(k-1)/\|x\| + \ell''(\|x\|)$, Condition (2.7) becomes, through the change of variable $u = \|x\|$,
$$\ell'(u)\,\frac{k-1}{u} + \ell''(u) - \frac12\,\frac{(\ell'(u))^2}{\ell(u)} \le 0. \qquad (2.8)$$
The differential inequality (2.8) gives rise to a differential inequality for the I-transform of the function $f$ (related to the radial distribution $\lambda$) given in (2.4). We develop this differential inequality in the next section and use it to characterize classes of minimax priors.

3 General Spherical Priors

3.1 The Main Result

In this section, we focus on the construction of Bayes minimax rules based on priors that are spherically symmetric about the origin. The class of spherically symmetric priors contains a wide variety of distributions, some of which have quite heavy tails. For more on modeling with a normal sampling distribution and spherical prior, see Angers and Berger [AB91], DasGupta and Rubin [DR88], Berger and Robert [BR90], Pericchi and Smith [PS92], Fourdrinier and Wells [FW96] and Fourdrinier, Strawderman and Wells [FSW18].

Theorem 3.1 has two parts. Part 1 expresses Inequality (2.8) as previously announced. Part 2 is the main result of this section and gives a construction of a spherically symmetric minimax prior.

Theorem 3.1. Assume that the prior is spherically symmetric with the density of the radius $\lambda$ as in (2.2), and that the Bayes risk of the estimate is finite.

1. A sufficient condition for the corresponding spherical Bayes estimator of the mean of the multivariate normal distribution to be minimax is
$$\frac{F''(u)}{F(u)} - \frac12\left(\frac{F'(u)}{F(u)}\right)^2 + \frac{F'(u)}{F(u)}\left[\frac{k-1}{2}\,\frac1u - u\right] + \frac{(k-1)(7-3k)}{8}\,\frac{1}{u^2} + \frac{1}
|
https://arxiv.org/abs/2505.07649v1
|
2u2−k+1 2<0 (3.1) whereF(u)is theI-transform of order (k−2)/2offin(2.6). 2. Let ϕ(u) =1 u2∞/summationdisplay j=0bjuj be any non positive generalized series. The function F define d by F(u) = (c1z1(u)+c2z2(u))2u(k−1)/2exp(u2/2), (3.2) wherez1andz2are two linearly independent solutions of the second order d ifferential equation z′′+k−1 uz′−1 2ϕ(u)z= 0 (3.3) andc1andc2are arbitrary constant, satisfies ( 3.1). Provided the so obtained function Fis theI-transform of order (k−2)/2of a function, the radial density of the prior is given, for every r >0, by λ(r) =r(k−1)/2exp(r2/2)I−1 (k−2)/2[F](r), (3.4) whereI−1 (k−1)/2is the inverse I-transform of order (k−1)/2. If, in addition, the function λis integrable, then the resulting proper Bayes estimator is minimax. 5 ProofAccording to ( 2.4), we have ℓ(u) =h(u)F(u) whereh(u) is given by ( 2.5). Hence (2.8) becomes [p(u)F(u)+F′(u)]k−1 u+ (p2(u)+p′(u))F(u)+2p(u)F′(u)+F′′(u) −(p(u)F(u)+F′(u))2 2F(u)≤0, (3.5) whereF(u) =I(k−2)/2[f](u) andp(u) = (1−k)/2u−1−u. Upon defining R(u) = F′(u)/F(u) Inequality ( 3.5) can be written as [p(u)+R(u)]k−1 u+1 2(p(u)+R(u))2+(p′(u)+R′(u))≤0 (3.6) which, setting y(u) =p(u)+R(u), becomes y′(u)+k−1 uy(u)+1 2y2(u)≤0. (3.7) In order to solve this first order non linear differential inequality defi ne a function ϕ≤0 and solve the generalized Ricatti equation y′(u)+k−1 uy(u)+1 2y2(u) =ϕ(u). (3.8) This equation may be solved using the classical method for generalize d Ricatti equations. First, set y(u) =λv(u)u1−k,r(u) = 1/2λu1−k,s(u) =−1/λϕ(u)uk−1for some λ. Then solving (3.8) reduces to v′+r(u)v2+s(u) = 0. (3.9) Now applying Section 16.515 of Gradshteyn and Ryzhik [ GR94], the general solution of (3.7) is v(u) =1 r(uc1z′ 1(u)+c2z′ 2(u) c1z1(u)+c2z2(u), wherez1andz2are independent solutions of the second order differential equatio n z′′−r′(u) r(u)z′+r(u)s(u)z= 0 (3.10) andc1andc2are arbitrary constants. Upon evaluation of rands, (3.10) reduces to ( 3.3). 
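The substitution chain above can be sanity-checked numerically. The sketch below is an illustration under assumed values $k = 5$, $b = 1$, $\lambda = 1$ (none fixed by the paper), with the nonpositive choice $\varphi(u) = -2b/u^2$ that reappears in Section 3.2; it verifies that $v = z'/(r z)$, built from a power solution $z(u) = u^{\rho}$ of (3.3), satisfies the transformed equation (3.9):

```python
import math

# Illustrative assumptions (not fixed by the paper): k = 5, b = 1, lambda = 1.
k, b, lam = 5, 1.0, 1.0

# phi(u) = -2b/u**2, a nonpositive choice of the kind Theorem 3.1 allows.
phi = lambda u: -2.0 * b / u**2

# Riccati data coming from the substitution y = lam * v * u**(1-k).
r = lambda u: 0.5 * lam * u**(1 - k)
s = lambda u: -phi(u) * u**(k - 1) / lam

# z(u) = u**rho solves (3.3) for this phi when rho solves the indicial
# equation rho*(rho-1) + (k-1)*rho + b = 0, i.e. rho**2 + (k-2)*rho + b = 0.
rho = (2 - k + math.sqrt((k - 2)**2 - 4 * b)) / 2

# v = z'/(r z) = (rho/u)/r(u), with its closed-form derivative.
v = lambda u: (rho / u) / r(u)               # equals 2*rho*u**(k-2)
dv = lambda u: 2 * rho * (k - 2) * u**(k - 3)

# Check the transformed Riccati equation (3.9): v' + r v^2 + s = 0.
for u in (0.5, 1.0, 2.0, 7.3):
    residual = dv(u) + r(u) * v(u)**2 + s(u)
    assert abs(residual) < 1e-9, (u, residual)
print("Riccati residual vanishes at all test points")
```

The residual is identically zero in exact arithmetic because it collapses to a multiple of the indicial polynomial evaluated at $\rho$.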
The hypothesis on $\varphi$ allows us to construct solutions even when (3.3) has a singularity at 0 (see Krasnov, Kiss\'elev and Makarenko [KKM81], p. 167). Transforming back to (3.8), it follows that the general solution of that equation is
\[
y(u) = 2\,\frac{c_1 z_1'(u) + c_2 z_2'(u)}{c_1 z_1(u) + c_2 z_2(u)}.
\]
Hence, by definition of $y(u)$,
\[
\frac{F'(u)}{F(u)} = 2\,\frac{c_1 z_1'(u) + c_2 z_2'(u)}{c_1 z_1(u) + c_2 z_2(u)} + \frac{k-1}{2}\,\frac{1}{u} + u.
\]
Upon solving this elementary first order equation, it follows that
\[
F(u) = \big(c_1 z_1(u) + c_2 z_2(u)\big)^2\, u^{(k-1)/2} e^{u^2/2},
\]
which proves (3.2). In order to deduce the form of a (generalized) Bayes minimax prior density, recall that $F(u) = I_{(k-2)/2}[f](u)$. Therefore, $I^{-1}_{(k-2)/2}[F](r) = f(r)$, and applying (2.6) and solving for $\lambda(r)$ gives the result in (3.4) and completes the proof of Part 2. ✷

Remark According to (3.2), the function $F(u)$ is nonnegative, so that the radial density $\lambda$ in (3.4) is also nonnegative.

3.2 An Example of a General Spherical Prior

Choose $\varphi(u) = -2b/u^2$ with $b \ge 0$. Then (3.3) is
\[
z'' + \frac{k-1}{u}\, z' + \frac{b}{u^2}\, z = 0, \tag{3.11}
\]
which is an equation of Bessel type, that is, $z'' + p(u)z' + q(u)z = 0$ (see Appendix A.2), where the coefficients $p(u)$ and $q(u)$ can be represented as generalized series of the form
\[
p(u) = \frac{1}{u} \sum_{j=0}^{\infty} a_j u^j \qquad \text{and} \qquad q(u) = \frac{1}{u^2} \sum_{j=0}^{\infty} b_j u^j
\]
with $a_0 = k-1$, $b_0 = b$ and $a_j = b_j = 0$ for $j \ge 1$. According to Krasnov, Kiss\'elev and Makarenko [KKM81], the solutions of (3.11) are of the form $z = u^{\rho} \sum_{j=0}^{\infty} c_j u^j$ (with $c_0 \ne 0$), where $\rho$ is a solution of the indicial equation
\[
\rho(\rho - 1) + a_0 \rho + b_0 = 0. \tag{3.12}
\]
In this specific case the roots of (3.12) are
\[
\rho_1 = \frac{2 - k - \sqrt{(k-2)^2 - 4b}}{2} \qquad \text{and} \qquad \rho_2 = \frac{2 - k + \sqrt{(k-2)^2 - 4b}}{2}
\]
for $b \le (k-2)^2/4$. When the difference of the roots, $\rho_2 - \rho_1$, is not an integer or zero, two linearly independent solutions of (3.11) are
\[
z_1(u) = u^{\rho_1} \sum_{j=0}^{\infty} c_j u^j \qquad \text{and} \qquad z_2(u) = u^{\rho_2} \sum_{j=0}^{\infty} c_j u^j,
\]
where the coefficients $c_j$ can be determined by plugging the series solutions into (3.11), which yields
\[
\sum_{j=0}^{\infty} \big[(j+\rho)(j+\rho+k-2) + b\big]\, c_j\, u^{j+\rho-2} = 0 \qquad \forall u \ge 0, \tag{3.13}
\]
for $\rho$ equal to $\rho_1$ or $\rho_2$. The solution to (3.13) is $c_j = 0$ for all $j \ge 1$, with $c_0 \ne 0$ arbitrary, since the first term of the series is null by (3.12). Hence the solutions to (3.11) are
\[
z_1(u) = A_1 u^{\rho_1} \qquad \text{and} \qquad z_2(u) = A_2 u^{\rho_2}
\]
for $A_1$ and $A_2$ arbitrary constants.¹ Hence the I-transform of $r \mapsto r^{(1-k)/2}\exp(-r^2/2)\,\lambda(r)$ is
\[
F(u) = \big(A_1 u^{\rho_1} + A_2 u^{\rho_2}\big)^2\, u^{(k-1)/2} \exp(u^2/2).
\]
As in Theorem 3.1, the determination of the radial density of the prior which yields a minimax Bayes rule reduces to a linear combination of terms of the form
\[
r^{(k-1)/2} \exp(r^2/2)\, I^{-1}_{(k-2)/2}\big[u^{\gamma} \exp(u^2/2)\big](r). \tag{3.14}
\]
From Oberhettinger [Obe72], Formula 5.13 on the inversion of the Hankel transform,
\[
H_\nu\big[u^{\eta-1/2} e^{-\alpha u^2}\big](r) \propto r^{-1/2} \exp\left(-\frac{r^2}{8\alpha}\right) M_{\eta/2,\,\nu/2}\left(-\frac{r^2}{4\alpha}\right),
\]
with $\mathrm{Re}(\eta+\nu) > -1$ and $\mathrm{Re}(\alpha) > 0$, where $M_{x,\mu}$ is the Whittaker function connected with the confluent hypergeometric function (see Appendix A.2) by
\[
M_{x,\mu}(z) = \exp\left(-\frac{z}{2}\right) z^{\mu+1/2}\; {}_1F_1\left(\mu + \frac{1}{2} - x;\; 1 + 2\mu;\; z\right).
\]
Therefore, using (A.6), the expression in (3.14) is proportional to
\[
r^{(k-2)/2} \exp\left(\frac{r^2}{4}\right) M_{\gamma/2+1/4,\,(k-2)/4}\left(\frac{r^2}{2}\right), \tag{3.15}
\]
with $\gamma + (k+1)/2 > 0$. If $b$ is chosen such that $(k-2)^2/4 - 1 < b \le (k-2)^2/4$, all the regularity conditions above are valid. Thus the construction of Theorem 3.1 yields a radial prior which gives rise to a minimax Bayes estimator. It is interesting to note the striking similarity between (3.15) and the Strawderman prior in (A.3). However, note that the expression in (3.15) does not have a finite integral.
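To see concretely that (3.15) fails to be integrable, one can evaluate it through the ${}_1F_1$ representation of $M_{x,\mu}$ quoted above. The sketch below uses the illustrative, assumed values $k = 5$ and $\gamma = 1$ (so that $\gamma + (k+1)/2 > 0$ holds); the integrand grows roughly like $e^{r^2/2}$, so its integral over $(0,\infty)$ is infinite:

```python
import math
from scipy.special import hyp1f1

# Illustrative assumptions: k = 5 and gamma = 1 (not values fixed by the text).
k, gamma = 5, 1.0
mu = (k - 2) / 4.0
x = gamma / 2.0 + 0.25

def whittaker_M(z):
    # M_{x,mu}(z) via the 1F1 connection formula quoted in the text.
    return math.exp(-z / 2.0) * z**(mu + 0.5) * hyp1f1(mu + 0.5 - x, 1 + 2 * mu, z)

def integrand(r):
    # The expression in (3.15).
    return r**((k - 2) / 2.0) * math.exp(r**2 / 4.0) * whittaker_M(r**2 / 2.0)

# The values blow up roughly like exp(r^2/2): no finite integral.
vals = [integrand(r) for r in (2.0, 4.0, 6.0)]
assert vals[0] < vals[1] < vals[2] and vals[2] > 1e3 * vals[1]
print(vals)
```

This matches the large-$z$ behavior $M_{x,\mu}(z) \sim C e^{z/2} z^{-x}$, which cancels the $e^{-r^2/4}$ hidden in the Whittaker factor and leaves exponential growth.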
Perhaps choosing an appropriate linear combination of solutions would yield integrability, although we were unable to find one.

Remark: The second order differential equation in (3.3) has various other solutions depending on the choice of $\varphi$. A convenient class of solutions can be determined by the generalized Bessel differential equation
\[
u^2 z'' + (1 - 2\alpha) u z' + \big[(\beta\gamma u^{\gamma})^2 + \alpha^2 - v^2\gamma^2\big] z = 0.
\]
The solution to this equation is $z(u) = u^{\alpha} Z_v(\beta u^{\gamma})$, where $Z_v$ is a solution of Bessel's differential equation $u^2 z'' + u z' + (u^2 - v^2) z = 0$. Special solutions $Z_v$ of this equation are the Bessel, Neumann and Hankel functions $J_\nu$, $Y_\nu$, $H^{(1)}_\nu$, $H^{(2)}_\nu$, respectively.

¹ As a side remark, notice that it can be shown that, when $\rho_2 - \rho_1$ is an integer or zero, an additional solution of (3.11) is $z_3(u) = A z_1(u) \log u + B z_2(u)$.

4 The Variance Mixtures of Normals Case

4.1 The Main Result

In this section, we apply the general theory of the previous section and give a method to construct a family of priors which give rise to proper Bayes minimax rules for scale mixtures of normals. Notice that, in the case where the prior is a scale mixture of normals, we have
\[
\lambda(r) = \int_0^{\infty} h(v)\, \lambda_v(r)\, dv \qquad \text{where} \qquad \lambda_v(r) = \frac{2^{1-k/2}}{\Gamma(k/2)}\, \frac{r^{k-1}}{v^{k/2}} \exp\left(-\frac{r^2}{2v}\right)
\]
is the normal radial density. Hence the marginal labeling function has the representation
\begin{align*}
\ell(u) &= g(u)\, I_{(k-2)/2}\left[\int_0^{\infty} \lambda_v(r)\, h(v) \exp\left(-\frac{r^2}{2}\right) r^{(1-k)/2}\, dv\right](u) \\
&= g(u) \int_0^{\infty} I_{(k-2)/2}\left[\lambda_v(r)\, h(v) \exp\left(-\frac{r^2}{2}\right) r^{(1-k)/2}\right](u)\, dv \\
&= \frac{2^{1-k/2}}{\Gamma(k/2)}\, g(u) \int_0^{\infty} h(v) \int_0^{\infty} \sqrt{ru}\; I_{(k-2)/2}(ru)\, \frac{1}{v^{k/2}}\, r^{(k-1)/2} \exp\left(-\frac{r^2}{2v}\right) \exp\left(-\frac{r^2}{2}\right) dr\, dv.
\end{align*}
Then it follows that
\begin{align*}
\ell(u) &= \frac{2^{1-k/2}}{\Gamma(k/2)}\, g(u)\, \sqrt{u} \int_0^{\infty} h(v)\, \frac{1}{v^{k/2}} \int_0^{\infty} r^{k/2} \exp\left(-\left(1 + \frac{1}{v}\right)\frac{r^2}{2}\right) I_{(k-2)/2}(ru)\, dr\, dv \tag{4.1} \\
&= \frac{2^{1-k/2}}{\Gamma(k/2)}\, g(u)\, u^{(k-1)/2} \int_0^{\infty} h(v)\, \frac{1}{v^{k/2}} \left(1 + \frac{1}{v}\right)^{-k/2} \exp\left(\frac{u^2 v}{2(1+v)}\right) dv \\
&= \frac{1}{(2\pi)^{k/2}} \int_0^{\infty} \frac{h(v)}{(v+1)^{k/2}} \exp\left(\frac{-u^2}{2(1+v)}\right) dv. \tag{4.2}
\end{align*}
Corollary 4.1 Assume that the prior is a variance mixture of normal distributions with mixing density $h$.

1. A sufficient condition for the corresponding spherical (generalized) Bayes estimator of the mean of the normal distribution to be minimax is
\[
\frac{G'(s)}{G(s)} - 2\,\frac{G''(s)}{G'(s)} \le \frac{k}{s} \qquad \forall s > 0 \tag{4.3}
\]
where $G$ is the Laplace transform of
\[
f : t \mapsto t^{k/2-2}\, h\!\left(\frac{1-t}{t}\right) 1_{]0,1[}(t)
\]
($1_{]0,1[}$ being the indicator function of the interval $]0,1[$).

2. For any continuous function $\varphi$ such that $\varphi(s) \le k/s$ for every $s > 0$, the function $G$ defined, for any $s > 0$, by
\[
G(s) = \left(\int_b^s \exp\left(-\frac{1}{2}\int_a^t \varphi(u)\, du\right) dt\right)^2, \tag{4.4}
\]
where $a$ and $b$ are two real constants, satisfies (4.3). Provided that the function $G$ so obtained is the Laplace transform of a positive function defined on $]0,\infty[$ which vanishes outside $]0,1[$, the mixing density $h$ is given, for every $v > 0$, by
\[
h(v) = (v+1)^{k/2-2}\, \mathcal{L}^{-1}[G]\left(\frac{1}{v+1}\right),
\]
where $\mathcal{L}^{-1}$ is the inverse Laplace transform. If, in addition, the function $h$ is integrable, then the resulting proper Bayes estimator is minimax.

Proof Making the identification $G(u^2/2) = \ell(u)$, where $\ell(u)$ is defined in (2.3), it follows from some tedious calculations that (3.1) reduces to (4.3). This proves Part 1.

It is clear that any function $\varphi$ which satisfies
\[
\varphi(s) = \frac{G'(s)}{G(s)} - 2\,\frac{G''(s)}{G'(s)} \qquad \forall s > 0
\]
yields a solution of (4.3) provided that $\varphi(s) \le k/s$. Straightforward calculations lead to
\[
\left(\log\Big[\big(\sqrt{G(s)}\big)'\Big]\right)' = -\frac{\varphi(s)}{2} \qquad \forall s > 0
\]
and two integrations give the following expression for the Laplace transform $G$:
\[
G(s) = \alpha \left(\int_b^s \exp\left(-\frac{1}{2}\int_a^t \varphi(u)\, du\right) dt\right)^2 \qquad \forall s > 0
\]
where $a \in \mathbb{R}$ and $b \in \mathbb{R}$. Finally, inverting the Laplace transform $G$, we obtain
\[
t^{k/2-2}\, h\!\left(\frac{1-t}{t}\right) 1_{]0,1[}(t) = \mathcal{L}^{-1}[G](t) \qquad \forall t \in\, ]0,1[,
\]
that is,
\[
h(v) = (v+1)^{k/2-2}\, \mathcal{L}^{-1}[G]\left(\frac{1}{v+1}\right) \qquad \forall v > 0.
\]
This proves Part 2 of the corollary. ✷

4.2 An Example of a Variance Mixture of Normal Prior

Corollary 4.1 seems difficult to use directly, due to the requirement that the function $G$ be the Laplace transform of a positive function on $]0,1[$. If this is not the case, an obvious guess would be $\varphi(s) = k/s$, which gives $G(s) = \alpha\big(s^{1-k/2} + \beta\big)^2$ with $\alpha > 0$ and $\beta \in \mathbb{R}$. When $\beta = 0$, we would have
\[
f(t) = \mathcal{L}^{-1}[G](t) = \alpha\, \mathcal{L}^{-1}[s^{2-k}](t) = \alpha\, \frac{t^{k-3}}{(k-3)!} \qquad \forall t \in\, ]0,1[.
\]
Hence the corresponding mixing density $h$ equals
\[
h(v) = \alpha (v+1)^{k/2-2} f\!\left(\frac{1}{v+1}\right) = \alpha\, \frac{(v+1)^{1-k/2}}{(k-3)!} \qquad \forall v > 0,
\]
which is a special case of the form of Strawderman's prior. However, on closer inspection, the analysis is faulty, as $s^{2-k}$ is the Laplace transform of $t^{k-3}/(k-3)!$ on all of $]0,\infty[$. Since the Laplace transform is unique, $s^{2-k}$ is not the Laplace transform of $t^{k-3}/(k-3)!\; 1_{]0,1[}(t)$. It is quite interesting that the faulty analysis nevertheless gives a prior which is proper and for which the resulting Bayes estimator is minimax.

It is easy to see from another perspective that $\varphi(s) = k/s$ cannot correspond to the mixing distribution of a proper prior. If it did, the resulting Bayes estimator $\delta$ would have risk identically equal to $k$, the risk of the usual estimator $\delta_0 = X$. In this case, since $\delta$ cannot be equal to $X$ (an unbiased estimator cannot be a proper Bayes estimator in the normal case), $\frac{1}{2}(\delta_0 + \delta)$ would have a Bayes risk smaller than that of $\delta$. Hence $\delta$ cannot be Bayes, and the "boundary" case $\varphi(s) = k/s$ cannot correspond to a proper prior.

We now indicate that it is possible to use the theorem and its proof directly, although we bypass guessing at a suitable $\varphi$. Instead, we first guess a suitable inverse Laplace transform, show that its Laplace transform satisfies (4.3), and show that the function $h$ that results is positive and integrable.

4.2.1 Example 1

Take $\mathcal{L}^{-1}[G](t) = t^n 1_{]0,1[}(t)$ where $n$ is a nonnegative integer. Then an easy calculation gives
\[
G(s) = \frac{n!}{s^{n+1}} - e^{-s} \sum_{j=0}^{n} \frac{n!}{j!\, s^{n-j+1}} = \frac{n!}{s^{n+1}}\, e^{-s} \sum_{j=n+1}^{\infty} \frac{s^j}{j!},
\]
hence
\[
G'(s) = -\frac{(n+1)!}{s^{n+2}}\, e^{-s} \sum_{j=n+2}^{\infty} \frac{s^j}{j!}
\qquad \text{and} \qquad
G''(s) = \frac{(n+2)!}{s^{n+3}}\, e^{-s} \sum_{j=n+3}^{\infty} \frac{s^j}{j!}.
\]
Therefore the condition
\[
\varphi(s) = \frac{G'(s)}{G(s)} - 2\,\frac{G''(s)}{G'(s)} \le \frac{k}{s}
\]
becomes
\[
-\frac{n+1}{s}\, \frac{\sum_{j=n+2}^{\infty} s^j/j!}{\sum_{j=n+1}^{\infty} s^j/j!} + 2\,\frac{n+2}{s}\, \frac{\sum_{j=n+3}^{\infty} s^j/j!}{\sum_{j=n+2}^{\infty} s^j/j!} \le \frac{k}{s}
\]
or, equivalently,
\[
2(n+2)\left[\frac{s \sum_{j=n+2}^{\infty} \frac{s^j}{(j+1)!}}{\sum_{j=n+2}^{\infty} \frac{s^j}{j!}}\right] - (n+1)\left[\frac{s \sum_{j=n+1}^{\infty} \frac{s^j}{(j+1)!}}{\sum_{j=n+1}^{\infty} \frac{s^j}{j!}}\right] \le k. \tag{4.5}
\]
Now we note that both terms in brackets in (4.5) are less than 1. This follows since each term in the numerator is contained in the denominator. Additionally, note that the first bracketed term is smaller than the second bracketed term. In fact, each of the two terms is of the form $s$ times $E[1/(1+X)]$ where $X$ is a Poisson random variable truncated at $n+2$ and $n+1$, respectively. Since the Poisson truncated at $n+2$ is stochastically larger than that truncated at $n+1$, the claim follows immediately. Then (4.5) is true as soon as $2(n+2) - (n+1) \le k$, or equivalently $n \le k-3$.

It remains to find conditions under which the corresponding mixing density is integrable. The mixing density is given by
\[
h(v) = (v+1)^{k/2-2}\, \mathcal{L}^{-1}[G]\left(\frac{1}{v+1}\right) = (v+1)^{k/2-2-n},
\]
which is integrable when $n > k/2 - 1$. Hence, the resulting proper Bayes procedure is minimax when $k/2 - 1 < n \le k-3$. When $k = 5$, this reduces to $1.5 < n \le 2$, that is, $n = 2$, which corresponds to the Strawderman prior with $a = 1/2$.
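The boundary case $n = k - 3$ of this example can also be checked numerically: with $n = 2$ and $k = 5$, compute $G$, $G'$ and $G''$ by quadrature and verify $\varphi(s) \le k/s$ on a grid of $s$ values (a sketch, not a proof; the grid is an arbitrary choice):

```python
import math
from scipy.integrate import quad

n, k = 2, 5   # the boundary case n = k - 3 discussed above

def moment(s, m):
    # Computes the integral of t**(n+m) * exp(-s*t) over (0, 1).
    val, _ = quad(lambda t: t**(n + m) * math.exp(-s * t), 0.0, 1.0)
    return val

for s in (0.5, 1.0, 2.0, 5.0, 20.0):
    G, G1, G2 = moment(s, 0), -moment(s, 1), moment(s, 2)
    phi = G1 / G - 2 * G2 / G1
    assert phi <= k / s + 1e-12, (s, phi, k / s)
print("phi(s) <= k/s holds on the grid, as (4.5) predicts for n <= k-3")
```

For large $s$ the three moments behave like $\Gamma(n+m+1)/s^{n+m+1}$, so $\varphi(s) \to (n+3)/s = k/s$: the bound is asymptotically tight at the boundary, consistent with the analysis above.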
While this proof is perhaps less efficient than Strawderman's original result, it demonstrates the potential utility of the main theorem of this section.

4.2.2 Example 2

The next example is reminiscent of the generalized beta mixtures of Gaussians, a family of global-local shrinkage priors commonly used in high-dimensional Bayesian regression (see Armagan, Dunson and Clyde [ACD11]). These priors are designed to offer adaptive shrinkage, robustness, and computational tractability, while maintaining proper posterior behavior even when the number of predictors is large. This family provides a flexible and efficient framework for sparse estimation, unifying and extending many existing priors. Notably, it encompasses several important priors as special cases, including the horseshoe, normal-exponential-gamma, Strawderman-Berger, normal-gamma, and normal-beta prime priors.

In particular, take
\[
\mathcal{L}^{-1}[G](t) = t^{\alpha-1} (1-t)^{\beta-1} (1 - \sigma t)^{-\gamma}\, 1_{]0,1[}(t),
\]
with $\sigma > 0$. Then
\[
G(s) = \int_0^1 f_s(t)\, dt \qquad \text{with} \qquad f_s(t) = t^{\alpha-1} (1-t)^{\beta-1} (1-\sigma t)^{-\gamma} e^{-st},
\]
so that
\begin{align*}
-G'(s) &= \int_0^1 t f_s(t)\, dt \\
&= \int_0^1 t^{\alpha} (1-t)^{\beta-1} (1-\sigma t)^{-\gamma} e^{-st}\, dt \\
&= \left[t^{\alpha} (1-t)^{\beta-1} (1-\sigma t)^{-\gamma} \left(-\frac{1}{s}\right) e^{-st}\right]_0^1 + \frac{1}{s} \int_0^1 \frac{d}{dt}\Big(t^{\alpha} (1-t)^{\beta-1} (1-\sigma t)^{-\gamma}\Big)\, e^{-st}\, dt \\
&= \frac{1}{s} \int_0^1 \left[\frac{\alpha}{t} - \frac{\beta-1}{1-t} + \frac{\gamma\sigma}{1-\sigma t}\right] t f_s(t)\, dt \\
&= \frac{1}{s} \int_0^1 \left[\alpha - \frac{(\beta-1)t}{1-t} + \frac{\gamma\sigma t}{1-\sigma t}\right] f_s(t)\, dt.
\end{align*}
Expressing $G''$ the same way gives
\begin{align*}
\varphi(s) = \frac{G'(s)}{G(s)} - 2\,\frac{G''(s)}{G'(s)}
&= -\frac{1}{s}\, E^{*}_{s}\left[\alpha - \frac{(\beta-1)T}{1-T} + \frac{\gamma\sigma T}{1-\sigma T}\right] \tag{4.6} \\
&\quad + \frac{2}{s}\, E^{**}_{s}\left[(\alpha+1) - \frac{(\beta-1)T}{1-T} + \frac{\gamma\sigma T}{1-\sigma T}\right] \tag{4.7}
\end{align*}
where $E^{*}_{s}$ is the expectation with respect to a density proportional to $f_s(t)$ and $E^{**}_{s}$ is the expectation with respect to a density proportional to $t f_s(t)$.
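Before the case analysis, the integration-by-parts identity (4.6)–(4.7) can be confirmed numerically by computing $\varphi(s)$ both directly from $G$, $G'$, $G''$ and via the two expectations; the parameter values $\alpha = 1.5$, $\beta = 2$, $\gamma = -1$, $\sigma = 0.5$ and $s = 2$ below are illustrative assumptions:

```python
import math
from scipy.integrate import quad

# Illustrative parameters (alpha > 0 and beta > 1, so the boundary terms
# in the integration by parts vanish).
alpha, beta, gamma, sigma, s = 1.5, 2.0, -1.0, 0.5, 2.0

def f_s(t):
    return t**(alpha - 1) * (1 - t)**(beta - 1) * (1 - sigma * t)**(-gamma) * math.exp(-s * t)

def integrate(w):
    val, _ = quad(lambda t: w(t) * f_s(t), 0.0, 1.0)
    return val

# Direct route: G = ∫ f_s, G' = -∫ t f_s, G'' = ∫ t^2 f_s.
G, G1, G2 = integrate(lambda t: 1.0), -integrate(lambda t: t), integrate(lambda t: t * t)
phi_direct = G1 / G - 2 * G2 / G1

# Expectation route (4.6)-(4.7): E* wrt f_s, E** wrt t*f_s.
def h(t, a):
    return a - (beta - 1) * t / (1 - t) + gamma * sigma * t / (1 - sigma * t)

E_star = integrate(lambda t: h(t, alpha)) / G
E_starstar = integrate(lambda t: t * h(t, alpha + 1)) / (-G1)
phi_expect = -E_star / s + 2 * E_starstar / s

assert abs(phi_direct - phi_expect) < 1e-7
```

The $(1-t)^{\beta-1}$ factor in $f_s$ cancels the $1/(1-t)$ pole of the bracketed integrand at $t = 1$, so both quadratures are well behaved.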
Case 1: Assume $\gamma < 0$. Note that $t f_s(t)$ has a monotone likelihood ratio with respect to $f_s(t)$ and the integrand in (4.6) and (4.7) is decreasing. Then
\[
E^{**}_{s}\left[(\alpha+1) - \frac{(\beta-1)T}{1-T} + \frac{\gamma\sigma T}{1-\sigma T}\right] \le E^{*}_{s}\left[(\alpha+1) - \frac{(\beta-1)T}{1-T} + \frac{\gamma\sigma T}{1-\sigma T}\right].
\]
Therefore
\begin{align*}
\varphi(s) &\le \frac{1}{s}\, E^{*}_{s}\left[\{2(\alpha+1) - \alpha\} - \frac{(\beta-1)T}{1-T} + \frac{\gamma\sigma T}{1-\sigma T}\right] \\
&= \frac{1}{s}\left\{\alpha + 2 + E^{*}_{s}\left[\frac{(1-\beta)T}{1-T} + \frac{\gamma\sigma T}{1-\sigma T}\right]\right\} \\
&\le \frac{\alpha+2}{s},
\end{align*}
since the expectation term is negative.

Case 2: Assume $\gamma > 0$. Similarly,
\[
\varphi(s) \le \frac{1}{s}\left\{\alpha + 2 + E^{*}_{s}\left[\frac{(1-\beta)T}{1-T}\right] + 2 E^{**}_{s}\left[\frac{\gamma\sigma T}{1-\sigma T}\right] - E^{*}_{s}\left[\frac{\gamma\sigma T}{1-\sigma T}\right]\right\}.
\]
Hence
\[
\varphi(s) \le \frac{1}{s}\left\{\alpha + 2 + E^{*}_{s}\left[\frac{(1-\beta)T}{1-T}\right] + 2 E^{**}_{s}\left[\frac{\gamma\sigma T}{1-\sigma T}\right]\right\} \le \frac{1}{s}\left\{\alpha + 2 + \frac{2\gamma\sigma}{1-\sigma}\right\},
\]
since $\gamma\sigma T/(1-\sigma T)$ is increasing and $(1-\beta)T/(1-T) \le 0$.

Therefore the minimaxity condition $\varphi(s) \le k/s$ is expressed as follows.

Case 1: $\dfrac{\alpha+2}{s} \le \dfrac{k}{s} \iff \alpha + 2 \le k$.

Case 2: $\alpha + 2 + \dfrac{2\gamma\sigma}{1-\sigma} \le k$.

5 Concluding Remarks

For the class of spherical priors, we have shown that the superharmonicity of the square root of the marginal density leads to a differential inequality that, in many cases, can be solved using integral transformation techniques to yield a Bayes minimax rule. The methods developed in this paper provide a useful framework for constructing default priors for Bayesian analysts who seek estimators with favorable frequentist properties. We hope that the proposed technique can be extended to construct Bayes minimax estimators in a broad range of settings. While it would be desirable to apply our approach to construct proper Bayes minimax rules, the associated computations are currently intractable. The results of this paper complement those of Fourdrinier, Strawderman and Wells [FSW98], who proposed an alternative construction method that is specifically applicable to priors defined as variance mixtures of normal distributions.

Appendix

A.1 On the condition $\Delta\sqrt{m} \le 0$ for the Strawderman prior

Note that the condition $\Delta\sqrt{m} \le 0$ in (1.1) can turn out to be difficult to verify.
For instance, Strawderman's prior [Str71] was obtained through a two-stage hierarchical model: conditionally on $\lambda$ ($0 < \lambda \le 1$), $\theta$ is normal with mean $0$ and covariance matrix $\lambda^{-1}(1-\lambda) I_k$, and the unconditional density of $\lambda$ with respect to the Lebesgue measure is given by $(1-a)\lambda^{-a}$ for any $a$ such that $0 \le a < 1$. Hence, this gives rise to the prior density
\begin{align*}
\pi(\theta) &= (2\pi)^{-k/2} \int_0^1 \left(\frac{\lambda}{1-\lambda}\right)^{k/2} \exp\left(\frac{-\lambda \|\theta\|^2}{2(1-\lambda)}\right) (1-a)\, \lambda^{-a}\, d\lambda \\
&= (1-a)(2\pi)^{-k/2} \int_0^{\infty} t^{k/2-a} (1+t)^{a-2} \exp\left(-\frac{t\|\theta\|^2}{2}\right) dt. \tag{A.1}
\end{align*}
Therefore $\pi(\cdot)$ is a spherically symmetric density and hence by (A.1) has radial density
\[
\lambda(r) = \frac{2(1-a)}{2^{k/2}\,\Gamma(k/2)}\, r^{k-1} \int_0^{\infty} t^{k/2-a} (1+t)^{a-2} \exp\left(-\frac{t r^2}{2}\right) dt. \tag{A.2}
\]
In Magnus, Oberhettinger and Soni [MOS66], it is shown that the Whittaker function $W_{x,\mu}(z)$ is related to the integral in (A.2). In particular, they provide (p. 313) the following integral representation of the Whittaker function:
\[
\Gamma\left(\frac{1}{2} + \mu - x\right) W_{x,\mu}(z) = \exp\left(\frac{-z}{2}\right) z^{\mu+1/2} \int_0^{\infty} \exp(-zt)\, t^{\mu-x-1/2} (1+t)^{\mu+x-1/2}\, dt.
\]
Therefore, applying this representation, it follows that (A.2) equals
\[
\lambda(r) = \frac{1-a}{2^{k/4-1}}\, \frac{\Gamma(k/2-a+1)}{\Gamma(k/2)}\, r^{(k-2)/2} \exp\left(\frac{r^2}{4}\right) W_{a-1-k/4,\,(k-2)/4}\left(\frac{r^2}{2}\right). \tag{A.3}
\]
Hence, the Strawderman prior has a convenient representation in terms of Whittaker functions.

According to (2.3), the marginal can be derived through straightforward but tedious calculations. It is given by
\[
\ell(u) = \frac{1-a}{2\pi^{k/2}}\, \frac{\Gamma(k/2-a+1)}{\Gamma(k/2-a+2)}\; {}_1F_1\!\left(k/2-a+1;\; k/2-a+2;\; -u^2/2\right),
\]
where we used the integral representation of the confluent hypergeometric function ${}_1F_1(\alpha;\beta;z)$. Then the minimaxity condition $\Delta\sqrt{m(x)} \le 0$ reduces to
\[
-\frac{k/2+a-3}{(k/2-a+2)^2}\, \frac{\|x\|^2}{2}\; {}_1F_1^2\!\left(1;\; k/2-a+3;\; \frac{\|x\|^2}{2}\right) + 2\,\frac{2-a-\|x\|^2/2}{k/2-a+2}\; {}_1F_1\!\left(1;\; k/2-a+3;\; \frac{\|x\|^2}{2}\right) - 2 \le 0,
\]
where $a > 3 - k/2$.
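The change of variables $t = \lambda/(1-\lambda)$ behind the two expressions in (A.1) can be checked numerically; the values $a = 1/2$, $k = 3$, $\|\theta\|^2 = 2$ below are illustrative assumptions:

```python
import math
from scipy.integrate import quad

# Illustrative values: a = 0.5, k = 3, ||theta||^2 = 2.
a, k, theta2 = 0.5, 3, 2.0

# First form of (A.1): integral over lam in (0, 1).
def f1(lam):
    return ((lam / (1 - lam))**(k / 2)
            * math.exp(-lam * theta2 / (2 * (1 - lam)))
            * (1 - a) * lam**(-a))

# Second form: integral over t in (0, inf), after t = lam/(1-lam).
def f2(t):
    return (1 - a) * t**(k / 2 - a) * (1 + t)**(a - 2) * math.exp(-t * theta2 / 2)

I1, _ = quad(f1, 0.0, 1.0)
I2, _ = quad(f2, 0.0, math.inf)
assert abs(I1 - I2) < 1e-6 * I2
print(I1, I2)
```

The $\lambda^{-a}$ endpoint singularity at $0$ is integrable for $a < 1$, and `quad`'s adaptive rule handles it without a change of variables.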
It is easy to check that this inequality is satisfied for $x$ in a neighborhood of the origin and at infinity when $k/2 + a - 3 \ge 0$. However, it is considered considerably