arXiv:2504.16279v1 [math.ST] 22 Apr 2025

Detecting Correlation between Multiple Unlabeled Gaussian Networks

Taha Ameen and Bruce Hajek
Department of Electrical and Computer Engineering
Coordinated Science Laboratory
University of Illinois Urbana-Champaign
Urbana, IL 61801, USA
Email: {tahaa3, b-hajek}@illinois.edu

Abstract—This paper studies the hypothesis testing problem to determine whether m ≥ 2 unlabeled graphs with Gaussian edge weights are correlated under a latent permutation. Previously, a sharp detection threshold for the correlation parameter ρ was established by Wu, Xu and Yu [1] for this problem when m = 2. Presently, their result is leveraged to derive necessary and sufficient conditions for general m. In doing so, an interval for ρ is uncovered for which detection is impossible using 2 graphs alone but becomes possible with m > 2 graphs.

I. INTRODUCTION

Large datasets are pervasive in many tasks that involve detection and estimation. Often, multiple datasets are correlated because they convey information about an underlying ground truth. For instance, the topology of two social networks such as Facebook and Twitter is correlated because users are likely to connect with the same individuals in both networks. Such large datasets can be unlabeled or scrambled, and an important precursor to downstream tasks is to detect whether multiple datasets are correlated when the individual data points are unlabeled. This can be formulated as a hypothesis testing problem, where the datasets are independent under the null model, and correlated via a latent permutation under the alternative model. The present work studies the fundamental limits of the correlation detection problem for networked data (graphs) when more than two datasets are available.

Detecting correlation between graphs is a ubiquitous problem.
For instance, correlations between the protein-protein interaction networks of different species allow biologists to identify conserved functional components between them [2]. Similarly, the brain connectomes of healthy humans are correlated [3], and their alignment is useful in detecting abnormalities [4]. Other applications include object detection in computer vision [5], linkage attacks in correlated social networks [6], [7], and ontology alignment in natural language processing [8]. The number of correlated graphs varies depending on the application.

A. Related Work

Wu, Xu and Yu [1] studied the hypothesis testing problem to decide whether two unlabeled random graphs are correlated. In the setting where the graphs are Erdős–Rényi, they established the information-theoretic threshold for detection within a constant factor; this was later sharpened by Ding and Du [9]. Computationally efficient tests for this problem have also been studied, for example by Barak and co-authors [10], Mao and co-authors [11], and Ding, Du and Li [12]. The work of Wu, Xu and Yu [1] also established the sharp information-theoretic threshold for detection in the setting of graphs with Gaussian weights. The case of general distributions was also studied recently by Oren-Loberman, Paslev and Huleihel [13].

The problem of detecting correlation between two graphs has been studied in other settings as well. For instance, necessary and sufficient conditions for correlation detection between unlabeled Galton–Watson trees were established by Ganassali,
Lelarge and Massoulié [14] and later by Ganassali, Massoulié and Semerjian [15]. Another example is the work of Rácz and Sridhar [16], where a pair of graphs are either independent, or grow together until a time t* and independently afterwards according to an appropriate evolution model.

The correlation detection problem is closely related to the graph alignment problem, where the unlabeled graphs are correlated and the objective is to recover the underlying correspondence between them. Here too, the literature has broadly focused on two settings:

1) The Gaussian case, where two unlabeled complete graphs have Gaussian edge weights. The information-theoretic threshold for recovery for two graphs was independently established by Ganassali [17] and by Yu, Wu, and Xu [18]. Very recently, appearing after the submission of this paper, Vassaux and Massoulié [19] established the recovery threshold for m graphs for any m ≥ 2.

2) The binary case, where the graphs are correlated Erdős–Rényi graphs. Cullina and Kiyavash [20], [21] studied information-theoretic limits for exact recovery, i.e. matching all the nodes. A flurry of works established thresholds for almost-exact recovery [22] and partial recovery [18], [23]–[25]. Still other works studied information-theoretic thresholds under heterogeneity [26], perturbations [27] and partial correlation [28]. Variants of the recovery problem with multiple graphs have also been studied [29]–[31].

Another closely related problem is database alignment. In this problem, the two observations are collections of high-dimensional feature vectors which are independent under the null hypothesis, and correlated via a latent permutation under the alternative hypothesis. The bulk of the literature on database alignment assumes that the feature vectors are Gaussian [32], [33].
Information-theoretic limits for detection were investigated by Zeynep and Nazer in [34], and later sharpened by Elimelech and Huleihel [35], [36] and Jiao, Wu, and Xu [37]. The case of general distributions was also studied by Paslev and Huleihel [38]. Finally, information-theoretic limits for recovering the underlying permutation have also been studied [39]–[44].

B. Contributions

To our knowledge, the problem of detecting correlation when there are multiple unlabeled observations is an open problem for all the aforementioned instances. This paper considers the setting of m ≥ 2 complete graphs on n unlabeled nodes with standard Gaussian edge weights, such that the correlation between corresponding edges in any pair of graphs is ρ.

– By analyzing the generalized likelihood ratio, we derive a sufficient condition $\rho^2 \ge \frac{8}{m}\,\frac{\log n}{n-1}$ for detection. The 1/m dependence uncovers an interval of ρ where detection is impossible with 2 graphs but possible with m > 2 graphs.

– By inductively leveraging the impossibility result of Wu, Xu and Yu [1], we derive a necessary condition $\rho^2 \le \left(\frac{4}{m-1}-\varepsilon\right)\frac{\log n}{n}$.

II. PRELIMINARIES

For a natural number n, let [n] denote the set {1, 2, ..., n}. Denote by $S_n$ the set of all permutations on [n]. Standard asymptotic notation ($O(\cdot)$, $o(\cdot)$, $\Omega(\cdot)$, ...) is used in this paper.

Consider m random weighted graphs on the common node set [n] with adjacency matrices $X^1, X^2, \dots, X^m$, where $X^\ell = (X^\ell_{ij})_{1\le i<j\le n}$ and $X^\ell_{ij} \sim \mathcal{N}(0,1)$ for each $1 \le i < j \le n$ and $1 \le \ell \le m$. We are interested in the hypothesis testing problem to determine whether the m
graphs are correlated up to an (unknown) latent permutation. Under the null hypothesis $H_0$, the vectors $X^1,\dots,X^m$ are independent. Under the alternative hypothesis $H_1$, there exist uniformly random permutations $\pi^*_{12},\dots,\pi^*_{1m}$ on [n] such that
\[
\left(X^1_{ij},\, X^2_{\pi^*_{12}(i),\pi^*_{12}(j)},\, \dots,\, X^m_{\pi^*_{1m}(i),\pi^*_{1m}(j)}\right)_{1\le i<j\le n}
\]
are independent tuples of correlated Gaussian vectors: the correlation coefficient of any pair $X^k_{\pi^*_{1k}(i),\pi^*_{1k}(j)}$, $X^\ell_{\pi^*_{1\ell}(i),\pi^*_{1\ell}(j)}$ is ρ, for any $1\le k<\ell\le m$ and $1\le i<j\le n$. Stated thus, letting $\pi^*_{k\ell} = \pi^*_{1\ell}\circ(\pi^*_{1k})^{-1}$ for $k,\ell\in[m]$, $\pi^*_{k\ell}$ encodes the latent correspondence between $X^k$ and $X^\ell$, and it is implicit that the reference $\pi^*_{11}$ is the identity permutation.

Remark 1. The above problem is equivalently a hypothesis testing problem between multiple unlabeled graphs: randomly labeling the nodes of these graphs is equivalent to applying uniformly random permutations to their labeled analogs.

The objective is to establish necessary and sufficient conditions on ρ for which it is possible to statistically distinguish between $H_0$ and $H_1$. In light of Remark 1, such a test must rely solely on graph properties that are invariant with respect to relabeling the nodes. Let Q and P denote respectively the probability measures under $H_0$ and $H_1$. Further, let X denote the collection $(X^1,\dots,X^m)$ of graphs. A test statistic T(X) with threshold τ achieves

• Strong detection if the total error converges to 0: $P(T(X) < \tau) + Q(T(X) \ge \tau) = o(1)$.
• Weak detection if the test outperforms random guessing: $P(T(X) < \tau) + Q(T(X) \ge \tau) = 1 - \Omega(1)$.

Let $\delta(P,Q)$ denote the total variation distance between the two measures P and Q. It is well known from detection theory that strong detection is possible iff $\delta(P,Q) = 1 - o(1)$, whereas weak detection is possible iff $\delta(P,Q) = \Omega(1)$.
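As a concrete illustration of the model above, here is a minimal numpy sketch (my own; the function name and interface are not from the paper) that samples the m graphs under either hypothesis. It also returns the latent relabelings, so that the pairwise edge correlation ρ can be verified after realignment.

```python
import numpy as np

def sample_graphs(n, m, rho, correlated, rng):
    """Sample m weighted graphs on [n] as symmetric n x n matrices.

    Under H0 (correlated=False) all edge weights are i.i.d. N(0,1).
    Under H1 (correlated=True) each edge tuple across the m graphs is drawn
    from N(0, Sigma) with Sigma = (1-rho) I + rho E, and graph ell >= 2 is
    observed up to an independent uniform relabeling pi_{1,ell}.
    """
    pairs = n * (n - 1) // 2
    Sigma = (1 - rho) * np.eye(m) + rho * np.ones((m, m)) if correlated else np.eye(m)
    W = rng.multivariate_normal(np.zeros(m), Sigma, size=pairs)  # one row per edge
    iu = np.triu_indices(n, k=1)
    graphs, perms = [], []
    for ell in range(m):
        X = np.zeros((n, n))
        X[iu] = W[:, ell]
        X += X.T  # symmetric adjacency matrix, zero diagonal
        perm = rng.permutation(n) if (correlated and ell > 0) else np.arange(n)
        graphs.append(X[np.ix_(perm, perm)])  # hide the alignment
        perms.append(perm)
    return graphs, perms

# Realigning graph 2 with graph 1 recovers an edgewise correlation close to rho.
rng = np.random.default_rng(0)
graphs, perms = sample_graphs(n=200, m=3, rho=0.9, correlated=True, rng=rng)
inv = np.argsort(perms[1])
aligned = graphs[1][np.ix_(inv, inv)]
iu = np.triu_indices(200, k=1)
r = np.corrcoef(graphs[0][iu], aligned[iu])[0, 1]
```

Inverting the permutation via `np.argsort` undoes the relabeling; without that side information, a test must search over $S_n$, which is exactly what makes the problem hard.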
By the Neyman–Pearson lemma, the optimal test statistic is the likelihood ratio
\[
\frac{P(X)}{Q(X)} = \frac{1}{(n!)^{m-1}} \sum_{\pi_{12},\dots,\pi_{1m}\in S_n} \frac{P(X\mid \pi_{12},\dots,\pi_{1m})}{Q(X)},
\]
which is difficult to analyze due to the underlying dependence of each term on the corresponding permutation profile. Following the analysis in [1] for the two-graph setting, we consider instead the generalized likelihood ratio
\[
\max_{\pi_{12},\dots,\pi_{1m}\in S_n} \frac{P(X\mid \pi_{12},\dots,\pi_{1m})}{Q(X)}. \tag{1}
\]
By analyzing the generalized likelihood ratio for two graphs, Wu, Xu, and Yu [1] showed that strong detection is possible in this setting if $\rho^2 \ge 4\log(n)/(n-1)$, and that weak detection is impossible if $\rho^2 \le (4-\varepsilon)\log(n)/n$ for any $\varepsilon > 0$.

III. TEST STATISTIC AND MAIN RESULTS

First, we derive an expression for the generalized likelihood ratio. For a permutation profile $\pi = \{\pi_{12},\dots,\pi_{1m}\}$,
\[
\frac{P(X\mid\pi)}{Q(X)} = \prod_{1\le i<j\le n} \frac{P(X^1_{ij},\, X^2_{\pi_{12}(i),\pi_{12}(j)},\dots,X^m_{\pi_{1m}(i),\pi_{1m}(j)}\mid\pi)}{Q(X^1_{ij},\, X^2_{ij},\dots,X^m_{ij})}
= \prod_{1\le i<j\le n} \frac{1}{\sqrt{\det(\Sigma)}}\exp\!\left(-\tfrac12\, x_{ij}^\top(\Sigma^{-1}-I)\,x_{ij}\right),
\]
where $x_{ij} = [X^1_{ij},\, X^2_{\pi_{12}(i),\pi_{12}(j)},\dots,X^m_{\pi_{1m}(i),\pi_{1m}(j)}]^\top \in \mathbb{R}^m$ and
\[
\Sigma := (1-\rho)I + \rho E
\]
is the covariance matrix of the random vector $x_{ij}$. Here, I is the identity matrix and E is the all-ones matrix of size m × m. A straightforward computation yields
\[
\Sigma^{-1} - I = \frac{1}{1+(m-2)\rho-(m-1)\rho^2}\left(\left(\rho+(m-1)\rho^2\right)I - \rho E\right),
\]
and so
\[
x_{ij}^\top(\Sigma^{-1}-I)\,x_{ij} \;\propto\; (m-1)\rho \sum_{k=1}^m \left(X^k_{\pi_{1k}(i),\pi_{1k}(j)}\right)^2 \;-\; 2\sum_{1\le k<\ell\le m} X^k_{\pi_{1k}(i),\pi_{1k}(j)}\, X^\ell_{\pi_{1\ell}(i),\pi_{1\ell}(j)}.
\]
Summing over i and j with $1\le i<j\le n$, and noting that $\sum_{i<j}(X^k_{\pi_{1k}(i),\pi_{1k}(j)})^2 = \sum_{i<j}(X^k_{ij})^2$ does not depend on the permutation profile, it follows that
\[
\log\frac{P(X\mid\pi)}{Q(X)} \;\propto\; -\sum_{1\le i<j\le n} x_{ij}^\top(\Sigma^{-1}-I)\,x_{ij} \;\propto\; \sum_{1\le i<j\le n}\ \sum_{1\le k<\ell\le m} X^k_{\pi_{1k}(i),\pi_{1k}(j)}\, X^\ell_{\pi_{1\ell}(i),\pi_{1\ell}(j)}.
\]
Thus, the generalized likelihood ratio in (1) is equivalent to
\[
T := \max_{\pi_{12},\dots,\pi_{1m}\in S_n}\ \sum_{1\le i<j\le n}\ \sum_{1\le k<\ell\le m} X^k_{\pi_{1k}(i),\pi_{1k}(j)}\, X^\ell_{\pi_{1\ell}(i),\pi_{1\ell}(j)}.
\]
With that, the main results of this paper are now presented.

Theorem 2. Suppose that
\[
\rho^2 \ge \frac{8}{m}\,\frac{\log n}{n-1}.
\]
(2)

There exists a threshold τ for which the generalized likelihood ratio test based on T achieves strong detection, i.e. $P(T < \tau) + Q(T \ge \tau) = o(1)$.

Theorem 3. Suppose that
for some ε > 0,
\[
\rho^2 \le \left(\frac{4}{m-1}-\varepsilon\right)\frac{\log n}{n}. \tag{3}
\]
Then weak detection is impossible, i.e. $\delta(P,Q) = o(1)$.

The positive result in Theorem 2 establishes that for each m > 2, there is a region in parameter space where weak detection is impossible with 2 graphs alone, but using m graphs as side information allows strong detection. However, the thresholds in (2) and (3) differ by a multiplicative factor of $2(m-1)/m$. Closing the gap when m > 2 to establish a sharp detection threshold is an open problem. Vassaux and Massoulié [19] identified the threshold in (2) as the tight threshold for even weak recovery, giving evidence that the threshold in Theorem 3 can be improved. The method of proof of Theorem 3 can be used to extend the necessary conditions in [1] and [9] for strong detection for two Erdős–Rényi graphs to m ≥ 3 graphs, though first [1] and [9] would need to be extended to the case where the subsampling probability s differs between graphs $G_1$ and $G_2$.

IV. A VARIATION OF THE HANSON–WRIGHT INEQUALITY FOR GAUSSIANS

The following proposition is a version of the Hanson–Wright inequality [45]. It is much less general than the Hanson–Wright inequality because it holds only for Gaussian random variables, whereas Hanson–Wright applies to sub-Gaussian random variables. The proposition has the advantage of a simple proof with explicit constants, and it allows a tradeoff between the two matrix norms involved. A proof may be found in the appendix. Let $\|A\|$ denote the spectral norm and $\|A\|_F$ the Frobenius norm of a matrix A.

Proposition 4. Let $Z = X^\top A X$, where A is an n × n matrix and X has the standard n-dimensional Gaussian distribution. For any constant γ with 0 < γ < 1,
\[
P(Z - E[Z] \ge t) \le \exp\left(-\frac{t^2}{4\left(\|A\|_F^2 + \|A\|\, t\right)}\right) \tag{4}
\]
\[
\le \exp\left(-\frac14 \min\left\{\frac{\gamma t^2}{\|A\|_F^2},\ \frac{(1-\gamma)t}{\|A\|}\right\}\right). \tag{5}
\]

Remark 5. If $\|A\|$ is relatively small we could select γ close to one.
Since $\mathrm{Var}(Z) = 2\|A\|_F^2$, the central limit theorem implies the coefficient of $t^2/\|A\|_F^2$ cannot have magnitude greater than $\frac14$. Taking $\gamma = \frac12$ yields
\[
P(Z - E[Z] \ge t) \le \exp\left(-\frac18 \min\left\{\frac{t^2}{\|A\|_F^2},\ \frac{t}{\|A\|}\right\}\right).
\]

The following lemma will be used to apply the Hanson–Wright bound in the context of this paper.

Lemma 6. Let m ≥ 2 and 0 ≤ ρ < 1. Suppose Y is a mean-zero Gaussian m-vector such that $\mathrm{Var}(Y_k) = 1$ and $\mathrm{Cov}(Y_k, Y_\ell) = \rho$ for $1 \le k < \ell \le m$. Let $Z = \sum_{1\le k<\ell\le m} Y_k Y_\ell$. Then Z can be represented as in Proposition 4 for a matrix A with eigenvalue $\lambda_{\max} = \frac{m-1+(m-1)^2\rho}{2}$ of multiplicity one and eigenvalue $\lambda_{\min} = \frac{\rho-1}{2}$ of multiplicity m − 1.

Proof. The random vector Y can be represented as
\[
Y = \left(\sqrt{\rho}\,\mathbf{1}_{m\times 1}\ \ \sqrt{1-\rho}\,I_{m\times m}\right)W,
\]
where W is a $\mathcal{N}(0_{(m+1)\times 1},\, I_{(m+1)\times(m+1)})$ random vector, and $Z = \frac12 Y^\top(E-I)Y$. Thus $Z = W^\top \bar{A} W$, where
\[
\bar{A} = \frac12 \begin{pmatrix} \sqrt{\rho}\,\mathbf{1}^\top \\ \sqrt{1-\rho}\,I \end{pmatrix}(E-I)\left(\sqrt{\rho}\,\mathbf{1}\ \ \sqrt{1-\rho}\,I\right).
\]
Note that $\lambda_{\max}$ is an eigenvalue of $\bar{A}$ with eigenvector
\[
v_1 = \begin{pmatrix} m\sqrt{\frac{\rho}{1-\rho}} \\ \mathbf{1}_{m\times 1} \end{pmatrix},
\]
$\lambda_{\min}$ is an eigenvalue of $\bar{A}$ of multiplicity m − 1 with eigenspace spanned by vectors of the form $\begin{pmatrix} 0 \\ v \end{pmatrix}$ such that $v \perp \mathbf{1}_{m\times 1}$, and 0 is an eigenvalue of $\bar{A}$ with eigenvector
\[
v_0 = \begin{pmatrix} -\sqrt{\frac{1-\rho}{\rho}} \\ \mathbf{1}_{m\times 1} \end{pmatrix}.
\]
Finally, by an orthonormal transformation of W, we can diagonalize $\bar{A}$ and reduce it to an m × m matrix A by deleting the row and column corresponding to the zero eigenvalue.

V. ACHIEVABLE DETECTION: PROOF OF THEOREM 2

Assume $\rho^2 \ge \frac{8}{m}\cdot\frac{\log n}{n-1}$ and take the threshold to be $\tau = \binom{n}{2}\binom{m}{2}\rho - n^c$ for a constant c with 1 < c < 1.5. Without
loss of generality, we may assume that the underlying permutations $\pi^*_{1i}$ are all the identity permutation id. It then follows from the definition of T that $P(T \le \tau) \le P(T^* \le \tau)$, where $T^*$ is the log-likelihood for the identity permutation profile:
\[
T^* = \sum_{1\le k<\ell\le m}\ \sum_{1\le i<j\le n} X^k_{ij} X^\ell_{ij}.
\]
Under P, $T^*$ is the sum of $\binom{n}{2}$ independent quadratic forms, each with the distribution of the random variable Z in Lemma 6. Hence $T^*$ is also a quadratic form in jointly Gaussian random variables, and Proposition 4 yields
\[
P(T^* \le \tau) = P(T^* - E[T^*] \le -n^c) \le \exp\left(-\frac14\cdot\frac{n^{2c}}{\binom{n}{2}\left(\lambda_{\max}^2 + (m-1)\lambda_{\min}^2\right) + \lambda_{\max} n^c}\right).
\]
Both $\lambda_{\max}$ and $\lambda_{\min}$ are bounded as $n \to \infty$, so $P(T^* - E[T^*] \le -n^c) = o(1)$.

It remains to prove $Q(T > \tau) = o(1)$. The distribution of T under Q does not depend on ρ, so $Q(T > \tau)$ depends on ρ only through the value of τ. So, without loss of generality, for the remainder of the proof assume $\rho^2 = \frac{8\log n}{m(n-1)}$. By the union bound,
\[
Q(T > \tau) = Q\left(\max_{\pi_{12},\dots,\pi_{1m}} \sum_{\substack{1\le i<j\le n \\ 1\le k<\ell\le m}} X^k_{\pi_{1k}(i),\pi_{1k}(j)}\, X^\ell_{\pi_{1\ell}(i),\pi_{1\ell}(j)} > \tau\right)
\le (n!)^{m-1}\cdot Q\left(\sum_{\substack{1\le i<j\le n \\ 1\le k<\ell\le m}} X^k_{ij} X^\ell_{ij} > \tau\right).
\]
By Lemma 6 with ρ = 0, $\sum_{i<j}\sum_{1\le k<\ell\le m} X^k_{ij} X^\ell_{ij}$ corresponds to a Gaussian quadratic form with eigenvalue $\frac{m-1}{2}$ of multiplicity $\binom{n}{2}$ and eigenvalue $-\frac12$ of multiplicity $\binom{n}{2}(m-1)$. Hence, Proposition 4 and $n! \le \exp\left((n+\tfrac12)\log(n) - (n-1)\right)$ yield
\[
Q(T > \tau) \le \exp\left((m-1)n\log n + \frac{m-1}{2}\log n - (m-1)(n-1) - \frac{\tau^2}{\binom{n}{2}m(m-1) + 2(m-1)\tau}\right).
\]
Let $\mu = \binom{m}{2}\binom{n}{2}\rho$, so $\tau = \mu - n^c$. Note that
\[
(m-1)n\log n = \frac{\mu^2}{\binom{n}{2}m(m-1)},
\]
\[
\frac{\tau^2}{\binom{n}{2}m(m-1) + 2(m-1)\tau} \ge \frac{\tau^2}{\binom{n}{2}m(m-1)}\left(1 - \frac{2(m-1)\mu}{\binom{n}{2}m(m-1)}\right),
\]
\[
\tau^2 \ge \mu^2\left(1 - \frac{2n^c}{\mu}\right).
\]
Therefore, $Q(T > \tau) \le \exp\left(-(m-1)(n-1) + A_n + B_n\right)$, where
\[
A_n = (m-1)n(\log n)\,\frac{2(m-1)\mu}{\binom{n}{2}m(m-1)} = \Theta\!\left(n^{1/2}(\log n)^{3/2}\right),
\]
\[
B_n = (m-1)n(\log n)\,\frac{2n^c}{\mu} + \frac{m-1}{2}\log n = \Theta\!\left(n^c\left(\frac{\log n}{n}\right)^{1/2}\right).
\]
Thus, $Q(T > \tau) = o(1)$ and the proof is complete.

VI. IMPOSSIBLE DETECTION: PROOF OF THEOREM 3

The proof of Theorem 3 given here works directly with total variation distances. The idea, in terms of a decision maker faced with the hypothesis testing problem, is the following. Even if a genie revealed to the decision maker how matrices $X^2$ through $X^m$ were possibly aligned, the decision maker would not do better than guessing which hypothesis is true based on the information from the genie and the m matrices. A key idea is that, in case the null hypothesis is true, the genie should reveal a plausible alignment of matrices $X^2$ through $X^m$. We begin by summarizing some well-known properties of total variation distance in three lemmas. Proofs may be found in the appendix.

Lemma 7. Suppose P and Q are two joint probability distributions for random variables X, Y. Let $P_X$ and $Q_X$ denote the corresponding marginal probability distributions of X. If the conditional distribution of Y given X is the same for P and Q, then $\delta(P,Q) = \delta(P_X, Q_X)$.

Lemma 8. For any probability measure P and event A, the distance between P and the conditional distribution of P given A satisfies $\delta(P, P(\cdot\mid A)) \le P(A^c)$.

Lemma 9. Let $P_X$ and $Q_X$ be probability distributions for a random vector X.
These distributions can be extended to joint distributions P and Q for random variables S, X such that:
• $P(S \in \{0,1\}) = Q(S \in \{0,1\}) = 1$;
• $P(S=0) = Q(S=0) = \delta(P_X, Q_X) = \delta(P, Q)$;
• the marginal distribution of X under P is $P_X$;
• the marginal distribution of X under Q is $Q_X$;
• $P_{X\mid S=1} = Q_{X\mid S=1}$ (equality of conditional distributions).

Proof of Theorem 3. The proof is by induction on m, with base case m = 2. The base case holds because it is the converse part of Theorem 1 in [1]. For ease of notation we give here the proof
for m = 3, and briefly explain at the end how the proof for general m ≥ 3 can be given. So suppose m = 3 and suppose $\rho^2 \le (2-\varepsilon)\frac{\log n}{n}$.

To begin, we let P denote the joint distribution of $X^1, X^2, X^3$ and the unobserved random permutation profile $\pi^* = (\pi^*_{k\ell})_{k,\ell\in[m]}$ under hypothesis $H_1$, and Q denote the joint distribution of $X^1, X^2, X^3$ under $H_0$. We use ⊗ to denote product-form distributions corresponding to independent random variables.

Using Lemmas 7 and 9, we shall consider extensions of P and Q to a larger ensemble of random objects and continue to use P and Q to denote the extensions. For a given subset of the objects, we denote the marginal distribution of that set of objects under P or Q by using P or Q with the objects as subscripts, with no commas. For example, $P_{X^2 X^3 \pi^*_{23}}$ denotes the marginal probability distribution of $(X^2, X^3, \pi^*_{23})$ under the probability distribution P.

Let $\epsilon_n = \delta(P_{X^2X^3}, Q_{X^2X^3})$. By the assumption on ρ and the result for m − 1, $\epsilon_n = o(1)$ as $n \to \infty$. (This is the step that requires the proof by induction. It isn't the tightest part of the proof. For general m ≥ 3 we require $\rho^2 \le \left(\frac{4}{m-2}-\varepsilon\right)\frac{\log n}{n}$, which is implied by the assumption $\rho^2 \le \left(\frac{4}{m-1}-\varepsilon\right)\frac{\log n}{n}$.)

Extend the probability distribution Q by adjoining a random permutation $\pi^*_{23}$ such that the conditional distribution of $\pi^*_{23}$ given $(X^2, X^3)$ is the same under Q as under P. As in the theory of multiuser information theory, we can think of P as defining a channel with input $(X^2, X^3)$ and output $\pi^*_{23}$, and we apply that same channel under the probability distribution Q. Since $X^1$ was independent of $(X^2, X^3)$ under Q to begin with, $X^1$ is independent of $(X^2, X^3, \pi^*_{23})$ under Q extended to include $\pi^*_{23}$. By Lemma 7, $\delta(P_{X^2X^3\pi^*_{23}}, Q_{X^2X^3\pi^*_{23}}) = \epsilon_n$.

Focus further in this paragraph on the joint distributions of $(X^2, X^3, \pi^*_{23})$. By Lemma 9 we can extend $P_{X^2X^3\pi^*_{23}}$ and $Q_{X^2X^3\pi^*_{23}}$ so that there is a binary random variable S jointly distributed with $(X^2, X^3, \pi^*_{23})$ such that (i) $P(S=0) = Q(S=0) = \epsilon_n$ and (ii) $P_{X^2X^3\pi^*_{23}\mid S=1} = Q_{X^2X^3\pi^*_{23}\mid S=1}$.
Equivalently, there exist choices of conditional probability distributions $P_{S\mid X^2,X^3,\pi^*_{23}}$ and $Q_{S\mid X^2,X^3,\pi^*_{23}}$ so that when $P_{X^2X^3\pi^*_{23}}$ and $Q_{X^2X^3\pi^*_{23}}$ are extended using those conditional distributions, properties (i) and (ii) hold. Using those conditional probability distributions (again thinking of them as channels, as in multiuser information theory), S can be adjoined to the larger ensemble $(X^1, X^2, X^3, \pi^*_{23})$ under P and Q such that (i) and (ii) hold, and under P, $X^1 - (X^2, X^3, \pi^*_{23}) - S$ is a Markov sequence. And under Q, $X^1$ is independent of $(X^2, X^3, \pi^*_{23}, S)$.

Let $X^{23}$ denote the sum of $X^2$ and the version of $X^3$ that is aligned with $X^2$ using $\pi^*_{23}$:
\[
X^{23}_{ij} := X^2_{ij} + X^3_{\pi^*_{23}(i),\pi^*_{23}(j)} \quad \text{for } 1 \le i < j \le n.
\]
Note that $X^{23}$ is a function of $(X^2, X^3, \pi^*_{23})$. The conditional distribution $P_{X^1\mid X^2,X^3,\pi^*_{23}}$ can be described as follows. The permutation $\pi^*_{21}$ is independent of $(X^2, X^3, \pi^*_{23})$ and uniformly distributed. Given $(X^2, X^3, \pi^*_{23}, \pi^*_{21})$, the entries of $X^1$ are conditionally independent and
\[
X^1_{\pi^*_{21}(i),\pi^*_{21}(j)} \sim \mathcal{N}\left(\frac{\rho\, X^{23}_{ij}}{1+\rho},\ \frac{1+\rho-2\rho^2}{1+\rho}\right).
\]
In particular, the conditional distribution $P_{X^1\mid X^2,X^3,\pi^*_{23}}$ depends on $(X^2, X^3, \pi^*_{23})$ only through $X^{23}$, so that under P, $X^1 - X^{23} - (X^2, X^3, \pi^*_{23}) - S$ is a Markov sequence. Also, since $X^{23}$ is a function of $(X^2, X^3, \pi^*_{23})$, under Q, $X^1$ is independent of $(X^{23}, X^2, X^3, \pi^*_{23}, S)$.

By the choice of $\pi^*_{23}$ and S, and the fact that $X^{23}$ is a function of $(X^2, X^3, \pi^*_{23})$, it follows that the conditional distribution $(X^{23}, \pi^*_{23}, X^2, X^3 \mid S=1)$ is the same under P and Q. Therefore, the conditional distribution $(\pi^*_{23}, X^2, X^3 \mid S=1, X^{23})$ is also
the same under P and Q. By the Markov property (under P) and independence property (under Q) discussed in the previous paragraph, adding in conditioning on $X^1$ does not change the conditional distributions. In other words, the conditional distribution $(\pi^*_{23}, X^2, X^3 \mid S=1, X^{23}, X^1)$ is the same under P and Q. That is,
\[
P_{\pi^*_{23},X^2,X^3\mid S=1,X^{23},X^1} = Q_{\pi^*_{23},X^2,X^3\mid S=1,X^{23},X^1}. \tag{6}
\]
With the above preparations, we now have the string of inequalities:
\[
\begin{aligned}
\delta(P_{X^1X^2X^3}, Q_{X^1X^2X^3})
&\overset{(a)}{\le} \delta(P_{X^1X^2X^3X^{23}\pi^*_{23}},\ Q_{X^1X^2X^3X^{23}\pi^*_{23}}) \\
&\overset{(b)}{\le} \delta(P_{X^1X^2X^3X^{23}\pi^*_{23}\mid S=1},\ Q_{X^1X^2X^3X^{23}\pi^*_{23}\mid S=1}) + 2\epsilon_n \\
&\overset{(c)}{=} \delta(P_{X^1X^{23}\mid S=1},\ Q_{X^1X^{23}\mid S=1}) + 2\epsilon_n \\
&\overset{(d)}{\le} \delta(P_{X^1X^{23}\mid S=1},\ P_{X^1X^{23}}) + \delta(P_{X^1X^{23}},\ P_{X^1}\otimes P_{X^{23}}) + \delta(P_{X^1}\otimes P_{X^{23}},\ P_{X^1}\otimes P_{X^{23}\mid S=1}) + 2\epsilon_n \\
&\overset{(e)}{\le} \delta(P_{X^1X^{23}},\ P_{X^1}\otimes P_{X^{23}}) + 4\epsilon_n,
\end{aligned}
\]
where (a) follows because including more variables cannot decrease variational distance, (b) follows by the triangle inequality for δ and two applications of Lemma 8, (c) follows from Lemma 7 and (6), (d) follows from the triangle inequality for variational distance and the fact that $Q_{X^1X^{23}\mid S=1} = P_{X^1}\otimes P_{X^{23}\mid S=1}$, and (e) follows by applying Lemmas 7 and 8 to get
\[
\delta(P_{X^1}\otimes P_{X^{23}},\ P_{X^1}\otimes P_{X^{23}\mid S=1}) = \delta(P_{X^{23}},\ P_{X^{23}\mid S=1}) \le \epsilon_n.
\]
The term $\delta(P_{X^1X^{23}}, P_{X^1}\otimes P_{X^{23}})$ is the variational distance for the detection problem for the two matrices $X^1$ and $X^{23}$. This problem is an instance of the Gaussian detection problem for two matrices with correlation coefficient $\rho'$ given by $\rho' = \rho\sqrt{\frac{2}{1+\rho}}$. The assumption $\rho^2 \le (2-\varepsilon)\frac{\log n}{n}$ implies $\rho'/\rho \to \sqrt{2}$ and $(\rho')^2 \le \left(4-\frac{\varepsilon}{2}\right)\frac{\log n}{n}$ for all large n. Therefore $\delta(P_{X^1X^{23}}, P_{X^1}\otimes P_{X^{23}}) = o(1)$ by the result for m = 2. Hence, tracing back through the string of inequalities, $\delta(P_{X^1X^2X^3}, Q_{X^1X^2X^3}) = o(1)$. The proof for m = 3 is complete.

The proof for general m ≥ 3 is similar. For all m ≥ 3, matrices $X^2,\dots,X^m$ are essentially replaced by a single matrix $X^{2:m}$, where $X^{2:m}$ is the sum of matrices $X^2$ through $X^m$ after $X^3$ through $X^m$ are aligned with $X^2$. Then the joint distribution of $X^1, X^{2:m}$ under P and Q, after a scaling of $X^{2:m}$ by a constant, is the same as the model for two matrices with parameter $\rho'$, where $\rho' = \rho\sqrt{\frac{m-1}{1+(m-2)\rho}}$, so $\rho'/\rho \to \sqrt{m-1}$.
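The reduction in the last step is easy to sanity-check numerically. The sketch below (my own; not from the paper) computes the correlation between $X^1_{ij}$ and the aligned sum $X^{2:m}_{ij} = X^2_{ij} + \cdots + X^m_{ij}$ directly from the edge-tuple covariance $\Sigma = (1-\rho)I + \rho E$, and compares it with the claimed $\rho' = \rho\sqrt{(m-1)/(1+(m-2)\rho)}$.

```python
import numpy as np

def reduced_correlation(m, rho):
    """Correlation of X^1_ij with the aligned sum X^2_ij + ... + X^m_ij,
    computed from the edge-tuple covariance Sigma = (1-rho) I + rho E."""
    Sigma = (1 - rho) * np.eye(m) + rho * np.ones((m, m))
    v = np.zeros(m)
    v[1:] = 1.0                  # coefficients of the sum X^2 + ... + X^m
    cov = Sigma[0] @ v           # Cov(X^1, sum) = (m - 1) * rho
    var = v @ Sigma @ v          # Var(sum) = (m - 1) * (1 + (m - 2) * rho)
    return cov / np.sqrt(var)    # Var(X^1) = 1

for m in (3, 5, 8):
    for rho in (0.1, 0.3, 0.6):
        predicted = rho * np.sqrt((m - 1) / (1 + (m - 2) * rho))
        assert np.isclose(reduced_correlation(m, rho), predicted)
```

For m = 3 this specializes to $\rho' = \rho\sqrt{2/(1+\rho)}$, the expression used in the proof above.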
□

ACKNOWLEDGMENTS

This work was supported by NSF under Grant CCF 19-00636. T.A. thanks Sophie Yu for helpful discussions. The authors are also grateful to the reviewers for suggestions to improve the presentation of the paper.

REFERENCES

[1] Y. Wu, J. Xu, and S. H. Yu, "Testing correlation of unlabeled random graphs," The Annals of Applied Probability, vol. 33, no. 4, pp. 2519–2558, 2023.
[2] R. Singh, J. Xu, and B. Berger, "Global alignment of multiple protein interaction networks with application to functional orthology detection," Proceedings of the National Academy of Sciences, vol. 105, no. 35, pp. 12763–12768, 2008.
[3] O. Sporns, G. Tononi, and R. Kötter, "The human connectome: a structural description of the human brain," PLoS Computational Biology, vol. 1, no. 4, p. e42, 2005.
[4] A. Calissano, T. Papadopoulo, X. Pennec, and S. Deslauriers-Gauthier, "Graph alignment exploiting the spatial organization improves the similarity of brain networks," Human Brain Mapping, vol. 45, no. 1, p. e26554, 2024.
[5] C. Schellewald and C. Schnörr, "Probabilistic subgraph matching based on convex relaxation," in International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition. Springer, 2005, pp. 171–186.
[6] A. Narayanan and V. Shmatikov, "Robust de-anonymization of large sparse datasets," in 2008 IEEE Symposium on Security and Privacy (SP 2008). IEEE, 2008, pp. 111–125.
[7] ——, "De-anonymizing social networks," in 2009 30th IEEE Symposium on Security and Privacy. IEEE, 2009, pp. 173–187.
[8]
https://arxiv.org/abs/2504.16279v1
VII. APPENDIX: PROOF OF PROPOSITION 4

We assume without loss of generality that $A$ is a symmetric matrix. If not, we could replace it by its symmetrization $\frac{A+A^\top}{2}$, for which the norms, by convexity, are less
than or equal to the corresponding norms of $A$, while $Z$ is the same for $A$ replaced by its symmetrization. Since $A = U\Lambda U^\top$ for some orthonormal matrix $U$ and $\Lambda$ the diagonal matrix of eigenvalues, we have $Z = \sum_k \lambda_k W_k^2$, where $W = U^\top X$, so that the $W_k$ are independent standard Gaussian random variables. Also, $\|A\|_F^2 = \sum_k \lambda_k^2$ and $\|A\| = \max_k |\lambda_k|$. Consider $\theta > 0$ such that $1 - 2\|A\|\theta > 0$. Then
$$E[e^{\theta Z}] = \prod_k (1 - 2\lambda_k\theta)^{-1/2} = \exp\left( -\sum_k \tfrac{1}{2}\ln(1 - 2\lambda_k\theta) \right).$$
Since $|2\lambda_k\theta| \le 2\|A\|\theta < 1$ for all $k$, the power series expansion of $\ln(1+z)$ gives
$$-\tfrac{1}{2}\ln(1-2\lambda_k\theta) = \tfrac{1}{2}\left( 2\lambda_k\theta + \tfrac{1}{2}(2\lambda_k\theta)^2 + \tfrac{1}{3}(2\lambda_k\theta)^3 + \cdots \right) \le \lambda_k\theta + (\lambda_k\theta)^2\left[ 1 + 2\|A\|\theta + (2\|A\|\theta)^2 + \cdots \right] = \lambda_k\theta + \frac{(\lambda_k\theta)^2}{1 - 2\|A\|\theta}.$$
Using the fact that $E[Z] = \sum_k \lambda_k$, we then get
$$E[e^{\theta(Z - E[Z])}] \le \exp\left( \frac{\theta^2\|A\|_F^2}{1 - 2\|A\|\theta} \right).$$
Hence, for any $t \ge 0$, by the Chernoff inequality,
$$P(Z - E[Z] \ge t) \le E[e^{\theta(Z - E[Z] - t)}] \le \exp\left( \frac{\theta^2\|A\|_F^2}{1 - 2\|A\|\theta} - \theta t \right). \qquad (7)$$
Setting $\theta = \frac{t}{2(\|A\|_F^2 + \|A\|t)}$ in (7) yields (4). For any $a, b > 0$ and $\gamma \in (0,1)$, $\frac{1}{a+b} \ge \min\left\{\frac{\gamma}{a}, \frac{1-\gamma}{b}\right\}$ (easy to see if $a+b=1$), yielding (5).

VIII. APPENDIX: PROOF OF LEMMAS ABOUT TOTAL VARIATION DISTANCE

The lemmas on properties of total variation distance are well known, but for the reader's convenience they are restated and proved below.

Lemma 7. Suppose $P$ and $Q$ are two joint probability distributions for random variables $X, Y$. Let $P_X$ and $Q_X$ denote the corresponding marginal probability distributions of $X$. If the conditional distribution of $Y$ given $X$ is the same for $P$ and $Q$, then $\delta(P,Q) = \delta(P_X, Q_X)$.

Proof. Let $f$ and $g$ be the joint densities of $(X,Y)$ under $P$ and $Q$, respectively, relative to a suitable product reference measure.
Then
$$\delta(P,Q) = \iint |f(x,y) - g(x,y)|\,dx\,dy = \iint |f(x) - g(x)|\cdot f(y|x)\,dx\,dy = \int |f(x) - g(x)|\,dx = \delta(P_X, Q_X),$$
where the second equality uses $f(y|x) = g(y|x)$.

Lemma 8. For any probability measure $P$ and event $A$, the distance between $P$ and the conditional distribution of $P$ given $A$ satisfies $\delta(P, P(\cdot|A)) \le P(A^c)$.

Proof. We use $\delta(P, P(\cdot|A)) = \sup_B |P(B) - P(B|A)|$. For any event $B$,
$$P(B) - P(B|A) = P(B|A)P(A) + P(B|A^c)P(A^c) - P(B|A) = P(B|A)[P(A) - 1] + P(B|A^c)P(A^c) = \left(P(B|A^c) - P(B|A)\right)P(A^c),$$
so $|P(B) - P(B|A)| \le |P(B|A^c) - P(B|A)|\,P(A^c) \le P(A^c)$.

Lemma 9. Let $P_X$ and $Q_X$ be probability distributions for a random vector $X$. These distributions can be extended to joint distributions $P$ and $Q$ for random variables $S, X$ such that:
• $P(S \in \{0,1\}) = Q(S \in \{0,1\}) = 1$,
• $P(S=0) = Q(S=0) = \delta(P_X, Q_X) = \delta(P,Q)$,
• the marginal distribution of $X$ under $P$ is $P_X$,
• the marginal distribution of $X$ under $Q$ is $Q_X$,
• $P_{X|S=1} = Q_{X|S=1}$ (equality of conditional distributions).

Proof. Define a measure $\mu$ on the range space of $X$ to serve as a reference measure by $\mu = P_X + Q_X$. By the Radon-Nikodym theorem there are density functions $f$ and $g$ such that
$$P_X(A) = \int_A f\,d\mu, \qquad Q_X(A) = \int_A g\,d\mu,$$
for Borel measurable subsets $A$ of the range space of $X$. Then $\delta(P_X, Q_X) = \int |f - g|\,d\mu$. Let $\epsilon = \delta(P_X, Q_X)$. To avoid trivialities, assume $0 < \epsilon < 1$. Let $\lambda_a$ have density $\min\{f, g\}/(1-\epsilon)$, $\lambda_P$ have density $(f-g)^+/\epsilon$, and $\lambda_Q$ have density $(g-f)^+/\epsilon$. Then $P_X = (1-\epsilon)\lambda_a + \epsilon\lambda_P$ and $Q_X = (1-\epsilon)\lambda_a + \epsilon\lambda_Q$. (Here, $(1-\epsilon)\lambda_a$ represents the mutually absolutely continuous component of $P_X$ and $Q_X$, and the measures $\epsilon\lambda_P$ and $\epsilon\lambda_Q$ are mutually singular.) Let $P$ be the joint distribution of $(S,X)$ such that $P(S=0) = \epsilon$, $P(S=1) = 1-\epsilon$, $P_{X|S=1} = \lambda_a$, and $P_{X|S=0} = \lambda_P$. Similarly, let $Q$ be the joint distribution of $(S,X)$ such that $Q(S=0) = \epsilon$, $Q(S=1) = 1-\epsilon$, $Q_{X|S=1} = \lambda_a$, and $Q_{X|S=0} = \lambda_Q$. The required properties are readily verified.
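Lemma 9's construction can be illustrated for discrete distributions. A minimal sketch (the function name and the half-$L^1$ normalization of $\delta$ are illustrative choices, the latter so that $\lambda_a$, $\lambda_P$, $\lambda_Q$ come out as probability vectors):

```python
import numpy as np

def lemma9_decomposition(p, q):
    """Split discrete distributions p, q (arrays summing to 1) as in Lemma 9:
    p = (1-eps)*lam_a + eps*lam_P,  q = (1-eps)*lam_a + eps*lam_Q,
    where eps is the total variation distance sup_B |p(B) - q(B)|, lam_a is
    the shared (mutually absolutely continuous) component, and lam_P, lam_Q
    are mutually singular. Assumes 0 < eps < 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    eps = 0.5 * np.abs(p - q).sum()          # total variation distance
    lam_a = np.minimum(p, q) / (1 - eps)     # common part
    lam_P = np.maximum(p - q, 0) / eps       # where p exceeds q
    lam_Q = np.maximum(q - p, 0) / eps       # where q exceeds p
    return eps, lam_a, lam_P, lam_Q

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
eps, lam_a, lam_P, lam_Q = lemma9_decomposition(p, q)
assert abs(eps - 0.3) < 1e-12
assert np.allclose((1 - eps) * lam_a + eps * lam_P, p)   # recovers P_X
assert np.allclose((1 - eps) * lam_a + eps * lam_Q, q)   # recovers Q_X
assert np.all(lam_P * lam_Q == 0)                        # mutually singular
```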
Linear Regression Using Hilbert-Space-Valued Covariates with Unknown Reproducing Kernel

Xinyi Li^a, Margaret Hoch^b, and Michael R. Kosorok^b
^a Clemson University, ^b University of North Carolina at Chapel Hill

Abstract: We present a new method of linear regression based on principal components using Hilbert-space-valued covariates with unknown reproducing kernels. We develop a computationally efficient approach to estimation and derive asymptotic theory for the regression parameter estimates under mild assumptions. We demonstrate the approach in simulation studies as well as in data analysis using two-dimensional brain images as predictors.

Key words and phrases: Principal component analysis, Donsker class, Hilbert-space-valued random variable, weak convergence

1. Introduction

The increasing availability of high-dimensional and complex structured data has led to significant methodological advances in functional data analysis (FDA). An important application area is neuroimaging, where brain images, often represented as two- or three-dimensional objects, serve as high-dimensional predictors in statistical models. Modern imaging techniques and imaging biomarkers offer potential not only for characterizing morphological abnormalities and disease information, but also for defining imaging phenotypes of various diseases (Herold et al., 2016). For instance, pathophysiological changes related to Alzheimer's disease can be visualized up to 15 years before the onset of clinical dementia (Bateman et al., 2012). Understanding how such imaging biomarkers influence disease progression, especially in chronic conditions like Alzheimer's disease, is a crucial yet challenging task. Functional data analysis provides a natural framework for modeling such data, and recent efforts have extended traditional FDA techniques to accommodate images and other complex functional data structures (Chen et al., 2019; Li et al., 2021; Zhang et al., 2023; Zhu et al., 2014).
In FDA, two principal perspectives have been employed to analyze functional data: (i) treating functional data as realizations of underlying stochastic processes and (ii) considering them as random elements in Hilbert spaces. Under joint measurability assumptions, these perspectives are equivalent (Hsing and Eubank, 2015). In this work, we adopt the latter approach and propose a novel methodology for linear regression with Hilbert-space-valued covariates where the reproducing kernel is unknown.

Address for correspondence: Xinyi Li, School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29634, USA. Email: lixinyi@clemson.edu

arXiv:2504.16780v1 [math.ST] 23 Apr 2025

To be specific, we consider the following general regression model:
$$Y = \alpha + \beta^\top X + \langle \gamma, Z\rangle + \varepsilon,$$
where the response $Y \in \mathbb{R}$, the Euclidean covariate vector $X \in \mathbb{R}^d$, and $Z$ is a Hilbert space $\mathcal{H}$-valued random element; with $\varepsilon$ being conditionally mean zero given $X$ and $Z$; and with unknown coefficients $\alpha \in \mathbb{R}$, $\beta \in \mathbb{R}^d$, and $\gamma \in \mathcal{H}$. More details on the model will be provided later. Our goal is to provide an interpretable, finite-dimensional functional principal component approximation for $\langle \gamma, Z\rangle$ as well as root-$n$ estimation and inference for all unknown parameters. Our framework extends traditional functional regression models by incorporating an implicit, data-driven representation of the inner product structure within the Hilbert space.

A fundamental challenge in high-dimensional regression with functional predictors is dimensionality reduction. Principal Component Analysis (PCA) is a widely used tool for extracting dominant modes of variation in functional data (Morris, 2015; Wang et al., 2016), but its generalization to Hilbert-space-valued data with an unknown reproducing kernel is non-trivial. We
develop a new method for obtaining PCA representations in such settings and establish asymptotic theory. Specifically, we achieve the dimensionality reduction for Hilbert space elements using Karhunen-Loève expansions, derive the statistical learning properties (Donsker class results) ensuring uniform convergence, establish asymptotic normality and estimation stability for eigenvalues and eigenfunctions, and provide practical inference techniques using bootstrap methods. This theoretical development supports robust and efficient estimation of high-dimensional or functional regression models.

Several researchers have contributed to the development of statistical methods for Hilbert-space-valued functional data. Yuan and Cai (2010) studied several specific examples of separable Hilbert space covariates and derived optimal rates of convergence for the Hilbert-valued coefficient for a Hilbert-valued covariate in linear regression. Giulini (2017) focused on robust PCA and projection techniques within Hilbert spaces. Dai and Müller (2018) extended functional data analysis to Riemannian manifolds by embedding them into Euclidean spaces, performing eigenanalysis, and mapping results back via the exponential map. Lin and Yao (2019) further developed eigenanalysis methods by transforming Riemannian stochastic processes into tensor Hilbert spaces. Kim et al. (2020) proposed an autocovariance operator for Hilbert-space-valued stochastic processes, enabling interpretable eigenfunctions in $L^2(\mathbb{R})$. More recently, Perry et al. (2024) introduced inference techniques for the proportion of variance explained (PVE) criterion in PCA. These contributions have primarily focused on functional data indexed by one-dimensional temporal variables on compact intervals.

For multidimensional functional data, Lila et al.
(2016) extended traditional PCA to functional data on 2D manifolds using smoothing constraints, combined with finite element discretization to allow efficient computation while maintaining flexibility for handling missing data. Happ and Greven (2018) studied multivariate functional PCA for functional variables defined on compact domains. Kuenzer et al. (2021) employed spectral analysis of weakly stationary functional random fields and proposed a kernel-weighted smoothing approach for spectral density estimation. Shi et al. (2022) utilized spline-based methods to develop a two-dimensional FPCA framework for extracting imaging features. Despite these advancements, existing approaches lack a general, unified theoretical foundation for handling Hilbert-space-valued covariates with unknown reproducing kernels. Addressing this gap is crucial for extending regression methodologies to more general functional settings, where the structure of the underlying Hilbert space is not fully specified.

Our approach allows for a general high-dimensional domain, such as two-, three-, or higher-dimensional imaging data, or Hilbert-space-valued data without explicit indices. Our proposed method provides a unified strategy to handle such high-dimensional functional data while inheriting the good interpretability of PCA methods. The contributions of this work are threefold. First, we provide a unified theoretical framework for obtaining PCA representations in general Hilbert spaces, along with rigorous asymptotic results. Second, we introduce a computationally efficient estimation procedure for linear regression with Hilbert-space-valued covariates without explicitly specifying the reproducing kernel. Third, we propose a bootstrap-based inference method that allows for the uncertainty quantification of the proposed estimators.

The remainder of the paper is organized as follows.
Section 2 establishes the theoretical foundations for separable Hilbert-space-valued random variables, providing the necessary framework for regression analysis in infinite-dimensional spaces. In Section 3,
we introduce the proposed regression model, outlining key methodological principles and estimation strategies. Section 4 discusses the practical inference techniques using bootstrap methods and presents the corresponding asymptotic properties of the bootstrap estimators. Section 5 details computational aspects and implementation considerations, particularly in the context of neuroimaging applications. Section 6 explores the performance of the proposed methodology through numerical experiments, and Section 7 applies the framework to real neuroimaging data. Finally, Section 8 concludes with a discussion of potential extensions and future research directions.

2. Separable Hilbert-Valued Random Variables

In this section, we derive several properties of separable Hilbert-space-valued random variables and develop methods for the estimation of eigenvalues and eigenfunctions, including root-$n$ weak convergence. We also derive methods for validly selecting the number of principal components based on the percentage/proportion of variance explained (PVE). Section 2.1 focuses on properties of separable Hilbert-space-valued random variables. A key result is a Donsker theorem for Hilbert-valued random variables, which critically enables later results and is of independent interest. Section 2.2 develops estimation and weak limit properties for eigenvalues and eigenfunctions, including results for consistent PCA selection based on PVE.

2.1. Properties of Separable Hilbert Spaces

Denote the empirical measure as $\mathbb{P}_n$, denote the expectation over a single observation as $P$, and define the random measure $\mathbb{G}_n = \sqrt{n}(\mathbb{P}_n - P)$. Let $\mathbb{G}$ be a mean-zero Gaussian generalized Brownian bridge process indexed by $\mathcal{F}$ with covariance $P(fg) - Pf\,Pg$, for all $f, g \in \mathcal{F}$. We need the following assumption for the Hilbert-space-valued random variable $Z$:

(A1) Let $\mathcal{H}$ be a Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, and assume the random variable $Z \in \mathcal{H}$ is separable with $E\|Z\|^2 < \infty$.
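Assumption (A1) can be made concrete with a finite-dimensional numerical sketch (the eigenvalue sequence $\lambda_j = j^{-2}$, the truncation level, and the sample size are illustrative choices, not from the paper): a random element of a separable Hilbert space is represented by its coordinates in an orthonormal basis, and by Parseval's identity $E\|Z\|^2 < \infty$ reduces to summability of the $\lambda_j$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A separable Hilbert-space-valued random variable, sketched by truncating
# a random series in an orthonormal basis of L^2[0,1]:
#   Z = mu + sum_j lambda_j^{1/2} U_j phi_j,   with lambda_j = j^{-2}.
J = 50
lam = 1.0 / np.arange(1, J + 1) ** 2          # summable eigenvalues
n = 20_000
U = rng.standard_normal((n, J))               # mean-0, variance-1 scores
coords = np.sqrt(lam) * U                     # basis coordinates of Z - mu

# By Parseval, ||Z - mu||^2 equals the squared Euclidean norm of the
# coordinates, so E||Z||^2 < infty in (A1) reduces to sum_j lambda_j < infty.
emp_second_moment = (coords ** 2).sum(axis=1).mean()
assert abs(emp_second_moment - lam.sum()) < 0.1

# Empirical scores are uncorrelated with variances lambda_j, matching the
# Karhunen-Loeve structure.
emp_cov = coords.T @ coords / n
assert np.allclose(np.diag(emp_cov), lam, atol=0.05)
```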
For $Z \in \mathcal{H}$, denote the mean $\mu = EZ$, where $E$ is defined via the Bochner integral. The following theorem is the well-known Karhunen-Loève expansion plus a simple extension based on the assumption that $E\|Z\|^2 < \infty$:

Theorem 1. Under Assumption (A1),
$$Z = \mu + \sum_{j=1}^{\infty} \lambda_j^{1/2} U_j \phi_j,$$
where $\{\lambda_j\}_j$ are scalars with $\infty > \lambda_1 \ge \lambda_2 \ge \cdots$, $\{\phi_j\}_j$ form an orthonormal basis in $\mathcal{H}$, $\{U_j\}_j$ are mean zero and variance one random variables with $E(U_j U_{j'}) = 0$ for $1 \le j \ne j' \le \infty$, and $\mu$ is finite. Moreover, $\lambda_j^{1/2} U_j = \langle \phi_j, Z - \mu\rangle$ and $\sum_{j=1}^{\infty} \lambda_j < \infty$. The span of the bases $\{\phi_j\}$ is a separable Hilbert space $\mathcal{H}_0 \subset \mathcal{H}$, with $\Pr(Z \in \mathcal{H}_0) = 1$.

Proof. The expansion and properties of $\{\lambda_j\}$, $\{\phi_j\}$, and $\{U_j\}$ follow from the Karhunen-Loève expansion theorem for separable random variables in Hilbert spaces. Existence of the mean follows from $E\|Z\|^2 < \infty$, and so does the finiteness of $\sum_{j=1}^{\infty} \lambda_j$.

The following Donsker theorem for mean-zero, separable Hilbert-valued random variables under mild moment assumptions is our first main result. We can use it, along with the two corollaries that follow, to establish many of our later weak convergence results. The two corollaries allow the means to be non-zero.

Theorem 2. Let Hilbert spaces $\mathcal{H}_k$ with inner products $\langle\cdot,\cdot\rangle_k$ and norms $\|\cdot\|_k$ and random variables $Z_k \in \mathcal{H}_k$ satisfy Assumption (A1), for $1 \le k \le K$, where $K < \infty$. Assume also that $EZ_k = 0$ and $E(\prod_{k=1}^{K} \|Z_k\|_k^2) < \infty$, and let $\mathcal{B}_k = \{h \in \mathcal{H}_k : \|h\|_k \le 1\}$, $k = 1, \ldots, K$. Then $\mathcal{F} = \{f(Z) = \prod_{k=1}^{K} \langle h_k, Z_k\rangle_k : h_k \in \mathcal{B}_k, 1 \le k \le K\}$ is Donsker.

Proof. By Theorem 1, each $Z_k$ can be written as
$Z_k = \sum_{j=1}^{\infty} \lambda_{kj}^{1/2} U_{kj} \phi_{kj}$, where $\infty > \lambda_{k1} \ge \lambda_{k2} \ge \cdots \ge 0$, $\{\phi_{k1}, \phi_{k2}, \cdots\}$ is an orthonormal basis on $\mathcal{H}_k$, $EU_{kj} = 0$, $EU_{kj}^2 = 1$, and $EU_{kj}U_{kj'} = 0$, for all $1 \le j \ne j' < \infty$ and all $1 \le k \le K$. Denote $\mathcal{H}_{k0} \subset \mathcal{H}_k$ as the separable subspace spanned by $\{\phi_{k1}, \phi_{k2}, \cdots\}$, and let $\mathcal{B}_{k0} = \mathcal{B}_k \cap \mathcal{H}_{k0}$, $1 \le k \le K$. Since $\langle a_k, Z_k\rangle_k = 0$ almost surely for all $a_k \in \mathcal{H}_k \setminus \mathcal{H}_{k0}$, defining
$$\mathcal{F}_0 = \left\{ f(Z) = \prod_{k=1}^{K} \langle a_k, Z_k\rangle_k : a_k \in \mathcal{B}_{k0},\ 1 \le k \le K \right\},$$
we have $\mathcal{F} = \mathcal{F}_0$ almost surely. Since $F = \prod_{k=1}^{K} \|Z_k\|_k$ is an envelope for $\mathcal{F}$, and $EF^2 < \infty$ by assumption, all finite subsets $\mathcal{G} \subset \mathcal{F}$ are Donsker; and thus, all finite-dimensional distributions of $\{\mathbb{G}_n g : g \in \mathcal{G}\}$ converge. Thus, by Theorem 2.1 and Lemma 7.20 of Kosorok (2008), our proof will be complete if we can show that for every $\epsilon, \eta > 0$ there exists a finite partition $\mathcal{F}_0 = \bigcup_{m=1}^{M} \mathcal{F}_m$, such that $M < \infty$ and
$$\limsup_{n\to\infty} P^*\left( \sup_{1\le m\le M} \sup_{f,g\in\mathcal{F}_m} |\mathbb{G}_n(f-g)| > \epsilon \right) < \eta, \qquad (2.1)$$
where $P^*$ is the outer probability. Then $\mathcal{F}_0$, and hence also $\mathcal{F}$, is Donsker.

We now prove (2.1). First, let $a = (a_1, \cdots, a_K)$ and $b = (b_1, \cdots, b_K)$ satisfy $a_k, b_k \in \mathcal{B}_{k0}$, $1 \le k \le K$. Then $a_k = \sum_{j=1}^{\infty} a_{kj}\phi_{kj}$ and $b_k = \sum_{j=1}^{\infty} b_{kj}\phi_{kj}$, where $a_{kj} = \langle \phi_{kj}, a_k\rangle_k$ and $b_{kj} = \langle \phi_{kj}, b_k\rangle_k$, respectively. Consequently,
$$\mathbb{G}_n\left( \prod_{k=1}^{K}\langle a_k, Z_k\rangle_k - \prod_{k=1}^{K}\langle b_k, Z_k\rangle_k \right) = \sum_{k=1}^{K} \mathbb{G}_n\left\{ \left(\prod_{k'=1}^{k-1}\langle b_{k'}, Z_{k'}\rangle_{k'}\right) \langle a_k - b_k, Z_k\rangle_k \left(\prod_{k'=k+1}^{K}\langle a_{k'}, Z_{k'}\rangle_{k'}\right) \right\}$$
$$= \sum_{k=1}^{K} \sum_{(j_1,\ldots,j_K)=(1,\cdots,1)}^{(\infty,\cdots,\infty)} \left\{ \left(\prod_{k'=1}^{k-1} b_{k'j_{k'}}\right) (a_{kj_k} - b_{kj_k}) \left(\prod_{k'=k+1}^{K} a_{k'j_{k'}}\right) \right\} \mathbb{G}_n\left( \prod_{k'=1}^{K} U^*_{k'j_{k'}} \right) \equiv D_1,$$
where $U^*_{k'j_{k'}} = \langle \phi_{k'j_{k'}}, Z_{k'}\rangle_{k'} = \lambda_{k'j_{k'}}^{1/2} U_{k'j_{k'}}$ and products over empty sets have the value 1. Continuing,
$$|D_1| \le \sum_{k=1}^{K} \left[ \sum_{(1,\cdots,1)}^{(\infty,\cdots,\infty)} \left\{ \left(\prod_{k'=1}^{k-1} b_{k'j_{k'}}^2\right) (a_{kj_k} - b_{kj_k})^2 \left(\prod_{k'=k+1}^{K} a_{k'j_{k'}}^2\right) \right\} \right]^{1/2} \times \left[ \sum_{(1,\cdots,1)}^{(\infty,\cdots,\infty)} \left\{ \mathbb{G}_n\left( \prod_{k'=1}^{K} U^*_{k'j_{k'}} \right) \right\}^2 \right]^{1/2},$$
which implies
$$E|D_1| \le \left( \sum_{k=1}^{K} \|a_k - b_k\|_k \right) \left[ \sum_{(1,\cdots,1)}^{(\infty,\cdots,\infty)} E\left\{ \prod_{k'=1}^{K} \left(U^*_{k'j_{k'}}\right)^2 \right\} \right]^{1/2} = \left( \sum_{k=1}^{K} \|a_k - b_k\|_k \right) \left\{ E\left( \prod_{k'=1}^{K} \|Z_{k'}\|_{k'}^2 \right) \right\}^{1/2}. \qquad (2.2)$$

Now fix $\delta > 0$. Let $r = (r_1, \cdots, r_K)$ be $K$ positive integers. For $a = (a_1, \cdots, a_K) \in \mathcal{B}_{10}\times\cdots\times\mathcal{B}_{K0} \equiv \mathcal{B}^*_0$, let $a^{(r)} = (a_1^{(r_1)}, \cdots, a_K^{(r_K)})$, where $a_k^{(r_k)} = \sum_{j=1}^{r_k} a_{kj}\phi_{kj}$. For any $f \in \mathcal{F}_0$, we can write $f(Z) = \prod_{k=1}^{K}\langle a_k, Z_k\rangle_k$ for some $a \in \mathcal{B}^*_0$. Let $f^{(r)}(Z) = \prod_{k=1}^{K}\langle a_k^{(r_k)}, Z_k\rangle_k$.
Then
$$\sup_{f\in\mathcal{F}_0} \left|\mathbb{G}_n\left(f - f^{(r)}\right)\right| = \sup_{a\in\mathcal{B}^*_0} \left|\mathbb{G}_n\left( \prod_{k=1}^{K}\langle a_k, Z_k\rangle_k - \prod_{k=1}^{K}\langle a_k^{(r_k)}, Z_k\rangle_k \right)\right| = \sup_{a\in\mathcal{B}^*_0} \left| \sum_{k=1}^{K} \sum_{L_k(r)}^{U_k(r)} \left(\prod_{k'=1}^{K} a_{k'j_{k'}}\right) \mathbb{G}_n\left(\prod_{k'=1}^{K} U^*_{k'j_{k'}}\right) \right| \equiv D_2,$$
where $L_k(r) = (j_1 = 1, \cdots, j_{k-1} = 1, j_k = r_k + 1, j_{k+1} = 1, \cdots, j_K = 1)$ and $U_k(r) = (j_1 = r_1, \cdots, j_{k-1} = r_{k-1}, j_k = \infty, \cdots, j_K = \infty)$. Notice that for a single $k$ we have
$$\sup_{a\in\mathcal{B}^*_0} \left| \sum_{L_k(r)}^{U_k(r)} \left(\prod_{k'=1}^{K} a_{k'j_{k'}}\right) \mathbb{G}_n\left(\prod_{k'=1}^{K} U^*_{k'j_{k'}}\right) \right| \le \sup_{a\in\mathcal{B}^*_0} \left[ \sum_{L_k(r)}^{U_k(r)} \left(\prod_{k'=1}^{K} a_{k'j_{k'}}^2\right) \right]^{1/2} \left[ \sum_{L_k(r)}^{U_k(r)} \left\{ \mathbb{G}_n\left(\prod_{k'=1}^{K} U^*_{k'j_{k'}}\right) \right\}^2 \right]^{1/2} \le \left[ \sum_{L_k(r)}^{U_k(r)} \left\{ \mathbb{G}_n\left(\prod_{k'=1}^{K} U^*_{k'j_{k'}}\right) \right\}^2 \right]^{1/2} \equiv D_3.$$
Now,
$$E|D_3| \le \left[ E\sum_{L_k(r)}^{U_k(r)} \prod_{k'=1}^{K} \left(U^*_{k'j_{k'}}\right)^2 \right]^{1/2} \le \left[ E\sum_{L_k(r)}^{(\infty,\cdots,\infty)} \prod_{k'=1}^{K} \left(U^*_{k'j_{k'}}\right)^2 \right]^{1/2} = \left[ E\left\{ \left(\sum_{j=r_k+1}^{\infty} \left(U^*_{kj}\right)^2\right) \prod_{1\le k'\le K:\,k'\ne k} \|Z_{k'}\|_{k'}^2 \right\} \right]^{1/2} \equiv (EC_k)^{1/2}.$$
Note that for $1 \le k \le K$, $EC_k \le E(\prod_{k'=1}^{K} \|Z_{k'}\|_{k'}^2) < \infty$, and since $\sum_{j=r_k+1}^{\infty} (U^*_{kj})^2 \xrightarrow{P} 0$ as $r_k \to \infty$, we have by the bounded convergence theorem that $EC_k \to 0$ as $r_k \to \infty$, which implies that there exists $r_k < \infty$ such that $EC_k \le \delta^2$.

Now do this for all $k$, $1 \le k \le K$. Thus, there exists $r = (r_1, \cdots, r_K)$ such that $r_k < \infty$ for all $1 \le k \le K$ and, moreover, $E|D_2| \le K\delta$. Let $\mathcal{B}^*_0(r) = \mathcal{B}^*_{10}(r_1) \times \cdots \times \mathcal{B}^*_{K0}(r_K)$, where $\mathcal{B}^*_{k0}(r_k) =$
$\{a_k^{(r_k)} = \sum_{j=1}^{r_k} a_{kj}\phi_{kj} : a_k = \sum_{j=1}^{\infty} a_{kj}\phi_{kj} \in \mathcal{B}_{k0}\}$ for $1 \le k \le K$. Since $\mathcal{B}^*_0(r)$ is compact, there exists a finite subset $\mathcal{T}_\delta \subset \mathcal{B}^*_0(r)$ such that $\sup_{a\in\mathcal{B}^*_0(r)} \inf_{b\in\mathcal{T}_\delta} \sum_{k=1}^{K} \|a_k - b_k\|_k \le \delta$, where $a = (a_1, \cdots, a_K)$ and $b = (b_1, \cdots, b_K)$. For the given set $\mathcal{T}_\delta$ of $K$-tuples of finite sequences, define $\mathcal{T}^*_\delta$ as
$$\mathcal{T}^*_\delta = \left\{a^* = (a^*_1, \ldots, a^*_K) : \exists\, a \in \mathcal{T}_\delta \text{ such that } a^*_k = (a_{k1}, \ldots, a_{kr_k}, 0, 0, \ldots) \text{ for } k = 1, \ldots, K\right\},$$
where $a_{kj}$ denotes the $j$th element of the finite sequence $a_k$ of length $r_k$, and $a^*_k$ is the corresponding infinite sequence with all elements beyond position $r_k$ set to zero. Let $M = |\mathcal{T}^*_\delta| = |\mathcal{T}_\delta|$, the cardinality of $\mathcal{T}_\delta$. Let $c_1, \cdots, c_M$ be an enumeration of $\mathcal{T}^*_\delta$. For each $m = 1, \cdots, M$, define $\mathcal{F}_m = \{f(Z) = \prod_{k=1}^{K}\langle a_k, Z_k\rangle \in \mathcal{F}_0 : \sum_{k=1}^{K}\|a_k^{(r_k)} - c_{mk}\| \le \delta\}$. Note that
$$\sup_{f,g\in\mathcal{F}_m} |\mathbb{G}_n(f-g)| \le \sup_{f,g\in\mathcal{F}_m} \left| \mathbb{G}_n\left(f - f^{(r)}\right) + \mathbb{G}_n\left(f^{(r)} - \prod_{k=1}^{K}\langle c_{mk}, Z_k\rangle_k\right) - \mathbb{G}_n\left(g^{(r)} - \prod_{k=1}^{K}\langle c_{mk}, Z_k\rangle_k\right) - \mathbb{G}_n\left(g - g^{(r)}\right) \right|$$
$$\le 2\left\{ \sup_{f\in\mathcal{F}_m} \left|\mathbb{G}_n\left(f - f^{(r)}\right)\right| + \sup_{f\in\mathcal{F}_m} \left|\mathbb{G}_n\left(f^{(r)} - \prod_{k=1}^{K}\langle c_{mk}, Z_k\rangle_k\right)\right| \right\} \le 2\left\{ \sup_{f\in\mathcal{F}_0} \left|\mathbb{G}_n\left(f - f^{(r)}\right)\right| + \sup_{f,g\in\mathcal{F}_0:\,\sum_{k=1}^{K}\|a_k-b_k\|_k \le \delta} \|\mathbb{G}_n(f-g)\| \right\} \equiv D_4,$$
where $f(Z) = \prod_{k=1}^{K}\langle a_k, Z_k\rangle_k$ and $g(Z) = \prod_{k=1}^{K}\langle b_k, Z_k\rangle_k$. Then
$$E|D_4| \le 2\left[ K\delta + \delta\left\{ E\left(\prod_{1\le k\le K} \|Z_k\|_k^2\right) \right\}^{1/2} \right] \equiv D_5,$$
where the first term follows from $E|D_2| \le K\delta$; the second term follows from the fact that
$$\sup_{f\in\mathcal{F}_m} \left|\mathbb{G}_n\left(f^{(r)} - \prod_{k=1}^{K}\langle c_{mk}, Z_k\rangle_k\right)\right| \le \sup_{f,g\in\mathcal{F}_m:\,\sum_{k=1}^{K}\|a_k-b_k\|_k\le\delta} \|\mathbb{G}_n(f-g)\| \equiv D_6,$$
since for all $f \in \mathcal{F}_m$, $\sum_{k=1}^{K} \|a_k^{(r_k)} - \sum_{j=1}^{r_k} c_{mkj}\phi_{kj}\| \le \delta$ by construction, where $f(Z) = \prod_{k=1}^{K}\langle a_k, Z_k\rangle_k$ as above. Now $D_6$ is less than or equal to the second term of $D_4$ since $\mathcal{F}_m \subset \mathcal{F}_0$ for all $m = 1, \cdots, M$. We then use (2.2) to obtain the final form $\delta\{E(\prod_{1\le k\le K}\|Z_k\|_k^2)\}^{1/2}$. We make a note that $\bigcup_{m=1}^{M} \mathcal{F}_m \supset$
It is easy to verify that G0is countable, and moreover, that for every f∈ F0, there exists a sequence {gn} ∈ G 0such that gn(Z)→f(Z)pointwise onZ. Thus F0is pointwise measurable; as it is easy to verify that the two supremum in D4are also measurable, we have lim sup n→∞P∗( sup 1≤m≤Msup f,g∈Fm∥Gn(f−g)∥> ϵ) ≤lim sup n→∞P" 2( sup f∈F0∥Gn(f−f(r))∥+ sup f,g∈F0:PK k=1∥ak−bk∥k≤δ∥Gn(f−g)∥) > ϵ# ≤2ϵ−1 Kδ+δ( E KY k=1∥Zk∥2 k!)1/2 . For the above to be less than η, it is sufficient for δ≤ϵη(3[K+{E(QK k=1∥Zk∥2 k)}]1/2)−1, since δ, as well as ϵandηare arbitrary. We have thus satisfied (2.1) as needed, and our proof is complete. Corollary 1. LetHwith inner product ⟨·,·⟩and norm ∥ · ∥ andZ∈ H satisfy Assumption (A1). Then F={f(Z) =⟨h, Z⟩:∥h∥ ≤1, h∈ H} is Donsker. Proof. For any f∈ F, then, there exists some h∈ H with∥h∥ ≤1, such that Gnf=Gn⟨h, Z⟩= Gn⟨h, Z−µ⟩+Gn⟨h, µ⟩. Since ⟨h, µ⟩is not random, Gn⟨h, µ⟩= 0for all h∈ H with∥h∥ ≤1. The desired conclusion now follows from Theorem 2 by setting K= 1. Corollary 2. LetHkwith inner
product $\langle\cdot,\cdot\rangle_k$ and norm $\|\cdot\|_k$, and random variable $Z_k$, satisfy Assumption (A1), for each $1 \le k \le K$ for some $K < \infty$. Let $\mu_k = EZ_k$, $1 \le k \le K$, and assume that $\sum_{k=1}^{K}\sum_{r\in N_k} E\left(\prod_{k'=1}^{K} \|Z_{k'} - \mu_{k'}\|_{k'}^{2r_{k'}}\right) < \infty$, where $N_k = \{r = (r_1,\cdots,r_K) \in \{0,1\}^K : \sum_{k'=1}^{K} r_{k'} = k\}$. Then $\mathcal{F} = \{f(Z) = \prod_{k=1}^{K}\langle h_k, Z_k\rangle_k : h_k \in \mathcal{B}_k, 1 \le k \le K\}$ is Donsker.

Proof. We have
$$\prod_{k=1}^{K}\langle h_k, Z_k\rangle_k = \prod_{k=1}^{K}\left(\langle h_k, Z_k-\mu_k\rangle_k + \langle h_k, \mu_k\rangle_k\right) = \sum_{k'=0}^{K} \sum_{r\in N_{k'}} \prod_{k=1}^{K} \langle h_k, Z_k-\mu_k\rangle_k^{r_k}\, \langle h_k, \mu_k\rangle_k^{(1-r_k)}.$$
This implies that $\mathcal{F} \subset \sum_{k'=0}^{K}\sum_{r\in N_{k'}} \mathcal{F}^{(r)}$, where $\mathcal{F}^{(r)} = \{c\,g : g \in \mathcal{G}^{(r)}, |c| \le c^*\}$, with $c^* = \prod_{k=1}^{K}(1+\|\mu_k\|_k) < \infty$, and where $\mathcal{G}^{(r)} = \{\prod_{k=1}^{K}\langle h_k, Z_k-\mu_k\rangle_k^{r_k} : h_k \in \mathcal{B}_k, 1 \le k \le K\}$; here, the addition between classes is the Minkowski addition. By Theorem 2, each $\mathcal{G}^{(r)}$ is Donsker, and it is easy to see that $\mathcal{F}^{(r)}$ is also Donsker. Since finite sums of Donsker classes are also Donsker, we have that $\mathcal{F}$ is Donsker.

2.2. Eigenvalue-Eigenfunction Estimation

In this section, we develop a method for estimating the eigenvalues and eigenfunctions for $Z$ from a sample of size $n$. This is accomplished by starting with a candidate basis which asymptotically spans $\mathcal{H}_0 \subset \mathcal{H}$ but is not necessarily normal or orthogonal. For example, one could use B-splines or some other set of candidate bases. We will give some specific examples in Section 5 below. Let $\Psi^*_N = (\psi^*_1, \cdots, \psi^*_N)^\top$ be a vector of bases of length $1 \le N < \infty$, where $N$ can grow with $n$ and may be much larger than $n$; denote $J = n \wedge N$. Let $L$ be an $N\times N$ matrix with elements $\langle \psi^*_\ell, \psi^*_{\ell'}\rangle$, $1 \le \ell, \ell' \le N$. Let $\mathcal{G}_N$ denote the projection operator in $\mathcal{H}$ onto the subspace spanned by $\Psi^*_N$. Define $\dot Z = Z - \mu$. For any element $a \in \mathcal{H}$, we write $a^\top \equiv \langle a, \cdot\rangle$, so that for any $a_1, a_2 \in \mathcal{H}$, we have $a_1^\top a_2 = \langle a_1, a_2\rangle$. We need the following assumption:

(A2) $L$ is full rank for $n \ge 1$, and $\delta_n = E\|\dot Z - \mathcal{G}_N \dot Z\|^2 = o(n^{-1})$.

The requirement on the richness of $\mathcal{G}_N$ may appear to be quite strong, but this can be achieved by adding more bases as needed. Interestingly, we can also test for whether this assumption holds, as we show later in this section. Let $Z_1, \ldots$
$, Z_n$ be independent and identically distributed separable Hilbert-valued random variables. Let $M^*_n$ be the $N\times N$ matrix with elements
$$n^{-1}\sum_{i=1}^{n} \langle \psi^*_\ell, Z_i - \bar Z_n\rangle \langle Z_i - \bar Z_n, \psi^*_{\ell'}\rangle, \quad 1 \le \ell, \ell' \le N,$$
where $\bar Z_n$ is the sample mean of $\{Z_1, \cdots, Z_n\}$. Let $\widehat M_n = L^{-1/2} M^*_n L^{-1/2}$. By construction, $\widehat M_n$ is now an $N\times N$ matrix with elements $n^{-1}\sum_{i=1}^{n}\langle \psi_\ell, Z_i - \bar Z_n\rangle\langle Z_i - \bar Z_n, \psi_{\ell'}\rangle$, $1 \le \ell, \ell' \le N$, where $\{\psi_1, \cdots, \psi_N\}$ are an orthonormal basis that has the same span as the range of the projection $\mathcal{G}_N$. Accordingly, we can write the projection operator $\mathcal{G}_N = \sum_{\ell=1}^{N} \psi_\ell \psi_\ell^\top$. We can now obtain an operator $\widehat V_n : \mathcal{H} \to \mathcal{H}$ as follows:
$$\widehat V_n = \frac{1}{n} \sum_{\ell=1}^{N}\sum_{\ell'=1}^{N}\sum_{i=1}^{n} \psi_\ell \langle \psi_\ell, Z_i - \bar Z_n\rangle \langle Z_i - \bar Z_n, \psi_{\ell'}\rangle \psi_{\ell'}^\top.$$
Because $\widehat M_n$ is a positive semidefinite symmetric matrix, it has an eigenvalue-eigenvector decomposition $\widehat M_n = \sum_{j=1}^{J} \hat\lambda_j \omega_j \omega_j^\top$, where $\infty > \hat\lambda_1 \ge \hat\lambda_2 \ge \cdots \ge \hat\lambda_J \ge 0$, since it is only of rank $J$; further, $\omega_1, \cdots, \omega_J$ are vectors of length $N$. Now let $\hat\phi_j = \sum_{\ell=1}^{N} \omega_{j\ell}\psi_\ell$, for $j = 1, \cdots, J$; and, by the properties of a covariance matrix, $n^{-1}\sum_{i=1}^{n}\langle \psi_\ell, Z_i - \bar Z_n\rangle\langle Z_i - \bar Z_n, \psi_{\ell'}\rangle = 0$ when $\ell \ne \ell'$, and $(\hat\phi_1, \cdots, \hat\phi_J)$ are orthonormal. We can now represent $\widehat V_n$ as $\widehat V_n = \sum_{j=1}^{J} \hat\lambda_j \hat\phi_j \hat\phi_j^\top$. We also have that $\widehat V_n = n^{-1}\sum_{i=1}^{n} \mathcal{G}_N(Z_i - \bar Z_n)(Z_i - \bar Z_n)^\top \mathcal{G}_N$, which is suitable for estimating $V_0 = E\{(Z-\mu)(Z-\mu)^\top\} = \sum_{j=1}^{\infty} \lambda_j \phi_j \phi_j^\top$. We will view $V_0$, $\widehat V_n$, and similar style operators as objects living
in $\ell^\infty(\mathcal{B}\times\mathcal{B})$. This means the norm we want to use for operators of this type is $\|V\|_{\mathcal{B}\times\mathcal{B}} = \sup_{a_1,a_2\in\mathcal{B}} |a_1^\top V a_2|$, for $V \in \ell^\infty(\mathcal{B}\times\mathcal{B})$. We also remind the reader that a sequence $\{X_n\}$ converges outer almost surely to $X$ if there exists a sequence $\{\Delta_n\}$ of measurable random variables satisfying $\|X_n - X\| \le \Delta_n$ for all $n$ and $\Pr(\limsup_{n\to\infty} \Delta_n = 0) = 1$. This type of convergence we denote by $X_n \xrightarrow{as*} X$. The following theorem derives some important properties of $\widehat V_n$ which we will need soon:

Theorem 3. Let Hilbert space $\mathcal{H}$ with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$ and $Z \in \mathcal{H}$ satisfy Assumptions (A1) and (A2). Then
(a) $\sup_{a_1,a_2\in\mathcal{B}} |a_1^\top(\widehat V_n - V_0)a_2| \xrightarrow{as*} 0$.
(b) Moreover, provided that $E\|Z\|^4 < \infty$, we also have the following:
(i) $\sup_{a_1,a_2\in\mathcal{B}} |a_1^\top\{\sqrt{n}(\widehat V_n - V_0) - \mathbb{G}_n(\dot Z \dot Z^\top)\}a_2| = o_P(1)$.
(ii) $\mathcal{F}_2 = \{a_1^\top \dot Z\dot Z^\top a_2 : a_1, a_2 \in \mathcal{B}\}$ is Donsker.
(iii) $\sup_{a_1,a_2\in\mathcal{B}} |a_1^\top \sqrt{n}(\widehat V_n - V_0)a_2| = O_P(1)$.

Proof. From (A1), we know that $\mathcal{F}_1 \equiv \{\langle a, Z\rangle : a \in \mathcal{B}\}$ is Donsker. Thus $\mathcal{F}_1$ is also Glivenko-Cantelli, and hence $\|\bar Z_n - \mu\| \xrightarrow{as*} 0$. Since also
$$\widehat V_n = \mathcal{G}_N \mathbb{P}_n\left(\dot Z\dot Z^\top\right)\mathcal{G}_N - \mathcal{G}_N\left\{(\bar Z_n - \mu)(\bar Z_n - \mu)^\top\right\}\mathcal{G}_N,$$
we only need to show that $\sup_{a_1,a_2\in\mathcal{B}}|a_1^\top(\widehat V^*_n - V_0)a_2| \xrightarrow{as*} 0$, where $\widehat V^*_n = \mathbb{P}_n\{\mathcal{G}_N(\dot Z\dot Z^\top)\mathcal{G}_N\}$. Since $\|\dot Z\|$ is an envelope for $\mathcal{F}_1$, and since also $E\|\dot Z\|^2 < \infty$, we have that $\mathcal{F}_1\cdot\mathcal{F}_1$ is Glivenko-Cantelli. Since also $\mathcal{G}_N a \subset \mathcal{B}$ for any $a \in \mathcal{B}$ by the properties of projections, we therefore conclude that $C_n \equiv \sup_{a_1,a_2\in\mathcal{B}}|a_1^\top\{\mathcal{G}_N(\mathbb{P}_n - P)(\dot Z\dot Z^\top)\mathcal{G}_N\}a_2| \xrightarrow{as*} 0$. Note that this implies that $C^*_n \xrightarrow{as*} 0$ for some measurable majorant $C^*_n$ of $C_n$. Now fix $\epsilon > 0$. We then have that
$$P\left(\limsup_{n\to\infty} \sup_{a_1,a_2\in\mathcal{B}}\left|a_1^\top(\widehat V^*_n - V_0)a_2\right| > \epsilon\right) \le P\left(\limsup_{n\to\infty} C^*_n > \epsilon/2\right) + 1\left[\limsup_{n\to\infty}\sup_{a_1,a_2\in\mathcal{B}}\left|a_1^\top\left\{\mathcal{G}_N P\left(\dot Z\dot Z^\top\right)\mathcal{G}_N - V_0\right\}a_2\right| > \epsilon/2\right]$$
$$\le 0 + 1\left[\limsup_{n\to\infty} E\left(\|\mathcal{G}_N\dot Z - \dot Z\|\|\mathcal{G}_N\dot Z\| + \|\dot Z\|\|\mathcal{G}_N\dot Z - \dot Z\|\right) > \epsilon/2\right].$$
Thus,
$$P\left(\limsup_{n\to\infty}\sup_{a_1,a_2\in\mathcal{B}}\left|a_1^\top(\widehat V^*_n - V_0)a_2\right| > \epsilon\right) \le 1\left[2\left(E\|Z\|^2\right)^{1/2}\limsup_{n\to\infty}\left(E\|\mathcal{G}_N\dot Z - \dot Z\|^2\right)^{1/2} > \epsilon/2\right] = 0,$$
by Assumption (A2). Thus Part (a) follows.

To prove Part (b), let $a_1, a_2 \in \mathcal{B}$. Now
$$D_1 \equiv a_1^\top\left\{\sqrt{n}\left(\widehat V_n - V_0\right) - \mathbb{G}_n\left(\dot Z\dot Z^\top\right)\right\}a_2 = a_1^\top\left[\sqrt{n}\,\mathcal{G}_N\mathbb{P}_n\left\{(Z-\bar Z_n)(Z-\bar Z_n)^\top\right\}\mathcal{G}_N - \sqrt{n}P\left(\dot Z\dot Z^\top\right) - \mathbb{G}_n\left(\dot Z\dot Z^\top\right)\right]a_2$$
$$= a_1^\top\left[\mathcal{G}_N\left\{\mathbb{G}_n\left(\dot Z\dot Z^\top\right) - \sqrt{n}(\bar Z_n - \mu)(\bar Z_n - \mu)^\top\right\}\mathcal{G}_N + \sqrt{n}P\left(\mathcal{G}_N\dot Z\dot Z^\top\mathcal{G}_N - \dot Z\dot Z^\top\right) - \mathbb{G}_n\left(\dot Z\dot Z^\top\right)\right]a_2.$$
Now $\sup_{a_1,a_2\in\mathcal{B}}|a_1^\top\mathrm{G}_N\sqrt{n}(\bar{Z}_n-\mu)(\bar{Z}_n-\mu)^\top\mathrm{G}_N a_2|=o_P(1)$, since $\mathcal{F}_1$ is Donsker and also Glivenko-Cantelli as verified above. Also, we have
\[ \sqrt{n}\,\big|P\big\{a_1^\top(\mathrm{G}_N\dot{Z}\dot{Z}^\top\mathrm{G}_N-\dot{Z}\dot{Z}^\top)a_2\big\}\big| = \sqrt{n}\,\big|P\big(\langle a_1,\mathrm{G}_N\dot{Z}-\dot{Z}\rangle\langle\mathrm{G}_N\dot{Z},a_2\rangle + \langle a_1,\dot{Z}\rangle\langle\mathrm{G}_N\dot{Z}-\dot{Z},a_2\rangle\big)\big| \le \sqrt{n}\,P\big(2\|\mathrm{G}_N\dot{Z}-\dot{Z}\|\|\dot{Z}\|\big) \le 2\sqrt{n}\big(P\|\mathrm{G}_N\dot{Z}-\dot{Z}\|^2\big)^{1/2}\big(P\|\dot{Z}\|^2\big)^{1/2} = o(1). \]
Thus $D_1 = a_1^\top\{\mathrm{G}_N\mathbb{G}_n(\dot{Z}\dot{Z}^\top)\mathrm{G}_N-\mathbb{G}_n(\dot{Z}\dot{Z}^\top)\}a_2 + o_P(1)$, where the $o_P(1)$ is uniform over $a_1,a_2\in\mathcal{B}$. Now
\[ a_1^\top\big\{\mathrm{G}_N\mathbb{G}_n(\dot{Z}\dot{Z}^\top)\mathrm{G}_N-\mathbb{G}_n(\dot{Z}\dot{Z}^\top)\big\}a_2 = \mathbb{G}_n\big(\langle a_1,\mathrm{G}_N\dot{Z}\rangle\langle\mathrm{G}_N\dot{Z},a_2\rangle - \langle a_1,\dot{Z}\rangle\langle\dot{Z},a_2\rangle\big), \]
and
\[ D_2 \equiv \mathrm{E}\big|\mathbb{G}_n\big(\langle a_1,\mathrm{G}_N\dot{Z}\rangle\langle\mathrm{G}_N\dot{Z},a_2\rangle - \langle a_1,\dot{Z}\rangle\langle\dot{Z},a_2\rangle\big)\big| = \mathrm{E}\big|\mathbb{G}_n\big(\langle a_1,\mathrm{G}_N\dot{Z}-\dot{Z}\rangle\langle\mathrm{G}_N\dot{Z},a_2\rangle + \langle a_1,\dot{Z}\rangle\langle\mathrm{G}_N\dot{Z}-\dot{Z},a_2\rangle\big)\big| \]
\[ \le \sqrt{2}\,\big[\mathrm{E}\big(\langle a_1,\mathrm{G}_N\dot{Z}-\dot{Z}\rangle^2\langle\mathrm{G}_N\dot{Z},a_2\rangle^2 + \langle a_1,\dot{Z}\rangle^2\langle\mathrm{G}_N\dot{Z}-\dot{Z},a_2\rangle^2\big)\big]^{1/2} \le \sqrt{2}\,\big[\mathrm{E}\big(\|\mathrm{G}_N\dot{Z}-\dot{Z}\|^2\|\mathrm{G}_N\dot{Z}\|^2 + \|\dot{Z}\|^2\|\mathrm{G}_N\dot{Z}-\dot{Z}\|^2\big)\big]^{1/2} \le 2\big\{\mathrm{E}\big(\|\mathrm{G}_N\dot{Z}-\dot{Z}\|^2\|\dot{Z}\|^2\big)\big\}^{1/2}, \]
which does not depend on $a_1,a_2\in\mathcal{B}$. We have $D_2\le 4(\mathrm{E}\|\dot{Z}\|^4)^{1/2}<\infty$ by assumption. Moreover, $\mathrm{E}\|\mathrm{G}_N\dot{Z}-\dot{Z}\|^2=o(n^{-1})$ by Assumption (A2). This implies that $\|\mathrm{G}_N\dot{Z}-\dot{Z}\|^2\stackrel{P}{\to}0$, and thus, by the dominated convergence theorem, $D_2\to0$, and Part (i) is proved. Part (ii) follows from Theorem 2, and Part (iii) is a direct result of Parts (i) and (ii).

We now move to the consistency of eigenvalue and eigenfunction estimation. Before continuing, we need the following assumption, which requires a strict ordering of the eigenvalues over the range we desire to consistently estimate. There are potentially ways to avoid this assumption, but this is quite difficult under
the generality we are working in and goes beyond the scope of our paper.

(A3) For some $1\le m<\infty$, $\infty>\lambda_1>\lambda_2>\cdots>\lambda_m>\lambda_{m+1}\ge0$.

Theorem 4. Let Assumptions (A1)-(A3) be satisfied for a random variable $Z\in\mathcal{H}$ and for some $1\le m<\infty$. Then
(i) $\max_{1\le j\le m+1}|\widehat{\lambda}_j-\lambda_j|\stackrel{as*}{\to}0$.
(ii) $\max_{1\le j\le m}\|\widehat{\phi}_j-\phi_j\|\stackrel{as*}{\to}0$.

Proof. As a consequence of Part (a) of Theorem 3, $E_n\equiv\sup_{a_1,a_2\in\mathcal{B}}|a_1^\top(\widehat{V}_n-V_0)a_2|\stackrel{as*}{\to}0$. From the definitions of $\widehat{\lambda}_1$ and $\lambda_1$, $\widehat{\phi}_1$ and $\phi_1$, and $\widehat{V}_n$ and $V_0$, we have
\[ \widehat{\lambda}_1 = \widehat{\phi}_1^\top\widehat{V}_n\widehat{\phi}_1 \ge \phi_1^\top\widehat{V}_n\phi_1 \ge \phi_1^\top V_0\phi_1 - E_n = \lambda_1 - E_n \ge \widehat{\phi}_1^\top V_0\widehat{\phi}_1 - E_n \ge \widehat{\phi}_1^\top\widehat{V}_n\widehat{\phi}_1 - 2E_n = \widehat{\lambda}_1 - 2E_n, \]
which implies that $|\widehat{\lambda}_1-\lambda_1|\le E_n$, and thus $|\widehat{\lambda}_1-\lambda_1|\stackrel{as*}{\to}0$. Now, $0\ge\widehat{\phi}_1^\top V_0\widehat{\phi}_1-\phi_1^\top V_0\phi_1\ge\widehat{\phi}_1^\top\widehat{V}_n\widehat{\phi}_1-\phi_1^\top\widehat{V}_n\phi_1-2E_n\ge-2E_n$, and it follows that
\[ \sum_{j=1}^\infty\lambda_j\langle\widehat{\phi}_1,\phi_j\rangle^2 - \lambda_1 \stackrel{as*}{\to} 0. \tag{2.3} \]
Let $\rho_n=1-\sum_{j=1}^\infty\langle\widehat{\phi}_1,\phi_j\rangle^2$, and note that $\rho_n\ge0$ almost surely. Then (2.3) implies that $\sum_{j=1}^\infty\lambda_j\langle\widehat{\phi}_1,\phi_j\rangle^2-\lambda_1(\rho_n+\sum_{j=1}^\infty\langle\widehat{\phi}_1,\phi_j\rangle^2)\stackrel{as*}{\to}0$; equivalently, $-\sum_{j=1}^\infty(\lambda_1-\lambda_j)\langle\widehat{\phi}_1,\phi_j\rangle^2-\rho_n\lambda_1\stackrel{as*}{\to}0$. Therefore, both $\sum_{j=2}^\infty(\lambda_1-\lambda_j)\langle\widehat{\phi}_1,\phi_j\rangle^2\stackrel{as*}{\to}0$ and $\rho_n\stackrel{as*}{\to}0$. As a result, we have $\sum_{j\ge2}\langle\widehat{\phi}_1,\phi_j\rangle^2\stackrel{as*}{\to}0$, implying that $\langle\widehat{\phi}_1,\phi_1\rangle^2\stackrel{as*}{\to}1$. Thus either $\widehat{\phi}_1\stackrel{as*}{\to}\phi_1$ or $\widehat{\phi}_1\stackrel{as*}{\to}-\phi_1$. Without loss of generality, we can change the sign of $\phi_1$, since either sign is correct; hence $\widehat{\phi}_1\stackrel{as*}{\to}\phi_1$.

Now, suppose for some $1\le j<m$, $\max_{1\le j'\le j}\{\|\widehat{\phi}_{j'}-\phi_{j'}\|\vee|\widehat{\lambda}_{j'}-\lambda_{j'}|\}\stackrel{as*}{\to}0$. Let $\widehat{V}_n(j)=\widehat{V}_n-\sum_{j'=1}^j\widehat{\lambda}_{j'}\widehat{\phi}_{j'}\widehat{\phi}_{j'}^\top$ and $V_0(j)=V_0-\sum_{j'=1}^j\lambda_{j'}\phi_{j'}\phi_{j'}^\top$. Then $E_n(j)=\sup_{a_1,a_2\in\mathcal{B}}|a_1^\top\{\widehat{V}_n(j)-V_0(j)\}a_2|\stackrel{as*}{\to}0$, by the supposition and previous results. Moreover,
\[ \widehat{\lambda}_{j+1} = \widehat{\phi}_{j+1}^\top\widehat{V}_n(j)\widehat{\phi}_{j+1} \ge \phi_{j+1}^\top\widehat{V}_n(j)\phi_{j+1} \ge \phi_{j+1}^\top V_0(j)\phi_{j+1} - E_n(j) = \lambda_{j+1}-E_n(j) \ge \widehat{\phi}_{j+1}^\top V_0(j)\widehat{\phi}_{j+1} - E_n(j) \ge \widehat{\phi}_{j+1}^\top\widehat{V}_n(j)\widehat{\phi}_{j+1} - 2E_n(j) = \widehat{\lambda}_{j+1}-2E_n(j), \]
which implies that $|\widehat{\lambda}_{j+1}-\lambda_{j+1}|\stackrel{as*}{\to}0$. Now, $0\ge\widehat{\phi}_{j+1}^\top V_0(j)\widehat{\phi}_{j+1}-\phi_{j+1}^\top V_0(j)\phi_{j+1}\ge\widehat{\phi}_{j+1}^\top\widehat{V}_n(j)\widehat{\phi}_{j+1}-\phi_{j+1}^\top\widehat{V}_n(j)\phi_{j+1}-2E_n(j)\ge-2E_n(j)$, which implies that $\sum_{j'=j+1}^\infty\lambda_{j'}\langle\widehat{\phi}_{j+1},\phi_{j'}\rangle^2-\lambda_{j+1}\stackrel{as*}{\to}0$. Note that
\[ \rho_n(j) \equiv 1-\sum_{j'=j+1}^\infty\langle\widehat{\phi}_{j+1},\phi_{j'}\rangle^2 \ge 0. \]
Consequently, $\sum_{j'=j+1}^\infty\lambda_{j'}\langle\widehat{\phi}_{j+1},\phi_{j'}\rangle^2-\lambda_{j+1}\{\sum_{j'=j+1}^\infty\langle\widehat{\phi}_{j+1},\phi_{j'}\rangle^2+\rho_n(j)\}\stackrel{as*}{\to}0$; it follows that $-\sum_{j'=j+1}^\infty(\lambda_{j+1}-\lambda_{j'})\langle\widehat{\phi}_{j+1},\phi_{j'}\rangle^2\stackrel{as*}{\to}0$ and $\rho_n(j)\stackrel{as*}{\to}0$. This implies that $\sum_{j'=j+2}^\infty\langle\widehat{\phi}_{j+1},\phi_{j'}\rangle^2\stackrel{as*}{\to}0$, and as a result we have $\{\langle\widehat{\phi}_{j+1},\phi_{j+1}\rangle^2-1\}\stackrel{as*}{\to}0$, which implies that $\langle\widehat{\phi}_{j+1},\phi_{j+1}\rangle^2\stackrel{as*}{\to}1$. Provided we switch the sign of $\phi_{j+1}$ if needed, we obtain that $\|\widehat{\phi}_{j+1}-\phi_{j+1}\|\stackrel{as*}{\to}0$. Repeat this process until $j+1=m$, and we have $\widehat{V}_n(m)$ and $V_0(m)$ so that $E_n(m)\stackrel{as*}{\to}0$. Then
\[ \widehat{\lambda}_{m+1} = \sup_{\phi\in\mathcal{B}}\phi^\top\widehat{V}_n(m)\phi \ge \sup_{\phi\in\mathcal{B}}\phi^\top V_0(m)\phi - E_n(m) = \lambda_{m+1}-E_n(m) \ge \sup_{\phi\in\mathcal{B}}\phi^\top\widehat{V}_n(m)\phi - 2E_n(m) = \widehat{\lambda}_{m+1}-2E_n(m), \]
which implies $\widehat{\lambda}_{m+1}\stackrel{as*}{\to}\lambda_{m+1}$, even if $\lambda_{m+1}=0$.

The next theorem yields joint asymptotic linearity and asymptotic normality of the estimated eigenvalues and eigenvectors:

Theorem 5. Let Assumptions (A1)-(A3) be satisfied for some $1\le m<\infty$. Also assume $\mathrm{E}\|Z\|^4<\infty$. Then, for $j=1,\ldots,m$,
\[ \sqrt{n}(\widehat{\lambda}_j-\lambda_j) = \phi_j^\top\mathbb{G}_n(\dot{Z}\dot{Z}^\top)\phi_j + o_P(1), \]
\[ \sqrt{n}(\widehat{\phi}_j-\phi_j) = \sum_{j'\ne j:\,j'\ge1}(\lambda_j-\lambda_{j'})^{-1}\phi_{j'}\phi_{j'}^\top\mathbb{G}_n(\dot{Z}\dot{Z}^\top)\phi_j + o_P(1) \equiv \mathbb{G}_n L_{1j}(\dot{Z}\dot{Z}^\top) + o_P(1), \]
where $L_{1j}:\mathcal{H}\otimes\mathcal{H}\mapsto\mathcal{H}$ is linear. Moreover, all of the quantities on the left are jointly asymptotically a tight, mean-zero Gaussian process, with covariance defined by the covariance generated by the influence functions on the right, which are all Donsker classes.

Proof. By Theorem 4, we have $\max_{1\le j\le m}|\widehat{\lambda}_j-\lambda_j|\vee\|\widehat{\phi}_j-\phi_j\|\stackrel{as*}{\to}0$. By the eigenvalue structure, for any $1\le j\le m$, it follows that $\widehat{V}_n\widehat{\phi}_j=\widehat{\phi}_j\widehat{\lambda}_j$ and $V_0\phi_j=\phi_j\lambda_j$. Then
\[ (\widehat{V}_n-V_0)\widehat{\phi}_j + V_0(\widehat{\phi}_j-\phi_j) = (\widehat{\phi}_j-\phi_j)\widehat{\lambda}_j + \phi_j(\widehat{\lambda}_j-\lambda_j), \tag{2.4} \]
which indicates that $\phi_j^\top(\widehat{V}_n-V_0)\widehat{\phi}_j+\phi_j^\top V_0(\widehat{\phi}_j-\phi_j)=\phi_j^\top(\widehat{\phi}_j-\phi_j)\widehat{\lambda}_j+\phi_j^\top\phi_j(\widehat{\lambda}_j-\lambda_j)$, and consequently, $\phi_j^\top(\widehat{V}_n-V_0)\phi_j+\phi_j^\top(\widehat{V}_n-V_0)(\widehat{\phi}_j-\phi_j)=\phi_j^\top(\widehat{\phi}_j-\phi_j)(\widehat{\lambda}_j-\lambda_j)+(\widehat{\lambda}_j-\lambda_j)$. Therefore, $\widehat{\lambda}_j-\lambda_j=\phi_j^\top(\widehat{V}_n-V_0)\phi_j+o_P(n^{-1/2})+o_P(|\widehat{\lambda}_j-\lambda_j|)$, and we have $\sqrt{n}(\widehat{\lambda}_j-\lambda_j)=\sqrt{n}\,\phi_j^\top(\widehat{V}_n-V_0)\phi_j+o_P(1)=\phi_j^\top\mathbb{G}_n(\dot{Z}\dot{Z}^\top)\phi_j+o_P(1)$, where the last equality
follows from Theorem 3. It is also the case that (2.4) implies
\[ V_0(\widehat{\phi}_j-\phi_j) = (\widehat{\phi}_j-\phi_j)\lambda_j - (\widehat{V}_n-V_0)\phi_j - (\widehat{V}_n-V_0)(\widehat{\phi}_j-\phi_j) + (\widehat{\phi}_j-\phi_j)(\widehat{\lambda}_j-\lambda_j) + \phi_j(\widehat{\lambda}_j-\lambda_j) \]
\[ \Rightarrow (V_0-\lambda_j I)(\widehat{\phi}_j-\phi_j) = -(\widehat{V}_n-V_0)\phi_j + \phi_j\phi_j^\top(\widehat{V}_n-V_0)\phi_j + o_P(n^{-1/2}) \]
\[ \Rightarrow (V_0-\lambda_j I+\phi_j\phi_j^\top)(\widehat{\phi}_j-\phi_j) = -(\widehat{V}_n-V_0)\phi_j + \phi_j\phi_j^\top(\widehat{V}_n-V_0)\phi_j + \phi_j\phi_j^\top(\widehat{\phi}_j-\phi_j) + o_P(n^{-1/2}). \]
Since $\phi_j\phi_j^\top(\widehat{\phi}_j-\phi_j)=O_P(\|\widehat{\phi}_j-\phi_j\|^2)=o_P(\|\widehat{\phi}_j-\phi_j\|)$, this implies that
\[ \widehat{\phi}_j-\phi_j = -(V_0-\lambda_j I+\phi_j\phi_j^\top)^{-1}\big\{(\widehat{V}_n-V_0)\phi_j - \phi_j\phi_j^\top(\widehat{V}_n-V_0)\phi_j\big\} + o_P\big(n^{-1/2}+\|\widehat{\phi}_j-\phi_j\|\big) = \Big(\sum_{j'\ne j,\,j'\ge1}(\lambda_j-\lambda_{j'})^{-1}\phi_{j'}\phi_{j'}^\top\Big)(\widehat{V}_n-V_0)\phi_j + o_P\big(n^{-1/2}+\|\widehat{\phi}_j-\phi_j\|\big). \]
We then have $\sqrt{n}\|\widehat{\phi}_j-\phi_j\|=O_P(1)+o_P(\sqrt{n}\|\widehat{\phi}_j-\phi_j\|)$, and hence $\sqrt{n}\|\widehat{\phi}_j-\phi_j\|=O_P(1)$. Thus $\sqrt{n}(\widehat{\phi}_j-\phi_j)=(\sum_{j'\ne j,\,j'\ge1}(\lambda_j-\lambda_{j'})^{-1}\phi_{j'}\phi_{j'}^\top)\sqrt{n}(\widehat{V}_n-V_0)\phi_j+o_P(1)$, and the desired result follows from Theorem 3.

We now describe an approach to evaluating whether $\delta_n=o(n^{-1})$ from Assumption (A2). Let
\[ \widehat{\delta}_n = n^{-1}\sum_{i=1}^n\|Z_i-\bar{Z}_n\|^2 - \sum_{\ell=1}^N n^{-1}\sum_{i=1}^n\langle\psi_\ell,Z_i-\bar{Z}_n\rangle^2, \]
let $\dot{Z}_i=Z_i-\mu$, let $\check{Z}_n=n^{-1}\sum_{i=1}^n\dot{Z}_i$, and let $\rho^2_{Nj}=\sum_{\ell=1}^N\langle\psi_\ell,\phi_j\rangle^2$. Recall that we can write $\dot{Z}_i=\sum_{j=1}^\infty\lambda_j^{1/2}U_{ji}\phi_j$; we will denote $\bar{U}_{jn}=n^{-1}\sum_{i=1}^n U_{ji}$. Note that $\widehat{\delta}_n$ can be computed from the data and, moreover,
\[ \widehat{\delta}_n = n^{-1}\sum_{i=1}^n\|\dot{Z}_i-\check{Z}_n\|^2 - \sum_{\ell=1}^N n^{-1}\sum_{i=1}^n\langle\psi_\ell,\dot{Z}_i-\check{Z}_n\rangle^2 = \sum_{j=1}^\infty\lambda_j\, n^{-1}\sum_{i=1}^n\big(U_{ji}-\bar{U}_{jn}\big)^2\big(1-\rho^2_{Nj}\big) = n^{-1}\sum_{i=1}^n\|\dot{Z}_i-\mathrm{G}_N\dot{Z}_i-\check{Z}_n+\mathrm{G}_N\check{Z}_n\|^2 = \delta^*_n - \|\check{Z}_n-\mathrm{G}_N\check{Z}_n\|^2, \]
where $\delta^*_n=n^{-1}\sum_{i=1}^n\|\dot{Z}_i-\mathrm{G}_N\dot{Z}_i\|^2$, and thus $\widehat{\delta}_n\le\delta^*_n$. Let $\delta_n=\mathrm{E}\,\delta^*_n$, let
\[ \widehat{S}^2_n = n^{-1}\sum_{i=1}^n\big(\|\dot{Z}_i-\mathrm{G}_N\dot{Z}_i-\check{Z}_n+\mathrm{G}_N\check{Z}_n\|^2-\widehat{\delta}_n\big)^2 \]
(which also can be computed from the data), and let $\widehat{T}_n=\sqrt{n}\,\widehat{\delta}_n/\sqrt{\widehat{S}^2_n+1/n}$. We will denote $\sigma^2_0\equiv\mathrm{Var}(\|\dot{Z}-\mathrm{G}_N\dot{Z}\|^2)$. In the following theorem, the given moment condition implies that $\mathrm{E}\|\dot{Z}\|^4<\infty$; the proof requires this slightly stronger condition.

Theorem 6. Let $Z\in\mathcal{H}$ satisfy Assumption (A1); further assume that $L$ is positive definite and that $\sup_{j\ge1}\mathrm{E}\big(\langle\phi_j,\dot{Z}\rangle^4/\mathrm{E}^2\langle\phi_j,\dot{Z}\rangle^2\big)<\infty$.
Then
(i) if $\limsup_{n\to\infty}n\delta_n=0$, then $\widehat{T}_n\stackrel{P}{\to}0$;
(ii) for any $k<\infty$ and $\epsilon>0$, there exists a $k_1<\infty$ such that $\liminf_{n\to\infty}n\delta_n\ge k_1$ implies $\liminf_{n\to\infty}\Pr(\widehat{T}_n>k)\ge1-\epsilon$.

Proof. For Part (i), we have
\[ \widehat{T}_n = \frac{\sqrt{n}\,\widehat{\delta}_n}{\sqrt{\widehat{S}^2_n+1/n}} \le \frac{\sqrt{n}\,\delta^*_n}{\sqrt{\widehat{S}^2_n+1/n}} = \frac{n\delta^*_n}{\sqrt{1+n\widehat{S}^2_n}} \le n\delta^*_n, \]
and since $\mathrm{E}(n\delta^*_n)=n\delta_n\to0$, it follows that $\widehat{T}_n\stackrel{P}{\to}0$.

For Part (ii), we have
\[ \frac{\sqrt{n}\,\delta^*_n}{\sqrt{\widehat{S}^2_n+1/n}} = \Big(\frac{\widehat{S}^2_n+1/n}{\sigma^2_0+1/n}\Big)^{-1/2}\Bigg\{\frac{\sqrt{n}(\delta^*_n-\delta_n)}{\sqrt{\sigma^2_0+1/n}} + \frac{\sqrt{n}\,\delta_n}{\sqrt{\sigma^2_0+1/n}}\Bigg\} \ge \Big(\frac{\widehat{S}^2_n+1/n}{\sigma^2_0+1/n}\Big)^{-1/2}\Bigg\{-\frac{\sqrt{n}\,|\delta^*_n-\delta_n|}{\sigma_0} + \frac{\sqrt{n}\,\delta_n}{\sqrt{\sigma^2_0+1/n}}\Bigg\}. \]
Fix $\eta>0$; then
\[ \Pr\Bigg\{\Big(\frac{\widehat{S}^2_n+1/n}{\sigma^2_0+1/n}\Big)^{-1/2}<\eta\Bigg\} = \Pr\Bigg\{\frac{\widehat{S}^2_n+1/n}{\sigma^2_0+1/n}>\eta^{-2}\Bigg\} \le \eta^2\,\mathrm{E}\Big(\frac{\widehat{S}^2_n+1/n}{\sigma^2_0+1/n}\Big) \le \eta^2\,\frac{\sigma^2_0(1-1/n)+8\mathrm{E}\|\dot{Z}\|^4/n^2+1/n}{\sigma^2_0+1/n} \le \eta^2\big(1+O(n^{-1})\big), \tag{2.5} \]
for all $n$ large enough. We will verify below that $\mathrm{E}\widehat{S}^2_n\le\sigma^2_0(1-1/n)+8\mathrm{E}\|\dot{Z}\|^4/n^2$. For now, fix $c_1<\infty$; then $\Pr(-\sqrt{n}|\delta^*_n-\delta_n|/\sigma_0<-c_1)\le1/c_1^2$. Now,
\[ \sigma^2_0 = \mathrm{Var}\big(\|\dot{Z}-\mathrm{G}_N\dot{Z}\|^2\big) = \mathrm{Var}\Big(\sum_{j=1}^\infty\lambda_j U_j^2(1-\rho^2_{Nj})\Big) = \mathrm{E}\sum_{j=1}^\infty\sum_{j'=1}^\infty\lambda_j\lambda_{j'}\big(U_j^2U_{j'}^2-1\big)(1-\rho^2_{Nj})(1-\rho^2_{Nj'}) \]
\[ \le \sum_{j=1}^\infty\sum_{j'=1}^\infty\lambda_j\lambda_{j'}\big(\mathrm{E}U_j^4\big)^{1/2}\big(\mathrm{E}U_{j'}^4\big)^{1/2}(1-\rho^2_{Nj})(1-\rho^2_{Nj'}) = \Big\{\sum_{j=1}^\infty\lambda_j\big(\mathrm{E}U_j^4\big)^{1/2}(1-\rho^2_{Nj})\Big\}^2 \le \delta^2_n\sup_{j\ge1}\mathrm{E}U_j^4 \le \delta^2_n c_2, \]
where $c_2=\sup_{j\ge1}\mathrm{E}U_j^4<\infty$. We then have
\[ \frac{\sqrt{n}\,\delta_n}{\sqrt{\sigma^2_0+1/n}} = \frac{n\delta_n}{\sqrt{1+n\sigma^2_0}} \ge \frac{n\delta_n}{\sqrt{1+c_2 n\delta^2_n}} = \Big\{\frac{1}{n^2\delta^2_n}+\frac{c_2}{n}\Big\}^{-1/2}, \]
where $\liminf_{n\to\infty}\{1/(n^2\delta^2_n)+c_2/n\}^{-1/2}\ge k_1$, provided $\liminf_{n\to\infty}n\delta_n\ge k_1$. Now, fix $\epsilon>0$. Let $1/c_1^2=\epsilon/2$, or equivalently, $c_1=\sqrt{2/\epsilon}$. This implies that $\Pr(-\sqrt{n}|\delta^*_n-\delta_n|/\sigma_0>-c_1)\ge1-\epsilon/2$. Let $\eta^2=\epsilon/2$; then $\eta=\sqrt{\epsilon/2}$, and this implies that $\liminf_{n\to\infty}\Pr\{(\widehat{S}^2_n+1/n)^{-1/2}(\sigma^2_0+1/n)^{1/2}>\eta\}\ge1-\epsilon/2$. We can now pick $k_1$ such that $\sqrt{\epsilon/2}\,\big(-\sqrt{2/\epsilon}+k_1\big)\ge k$, and thus $k_1\ge k\sqrt{2/\epsilon}+\sqrt{2/\epsilon}$. As long as (2.5) holds, this implies that
\[ \liminf_{n\to\infty}\Pr\big(\widehat{T}_n\ge k\big) \ge \liminf_{n\to\infty}\Pr\Bigg\{-\frac{\sqrt{n}|\delta^*_n-\delta_n|}{\sigma_0}>-c_1,\ \Big(\frac{\widehat{S}^2_n+1/n}{\sigma^2_0+1/n}\Big)^{-1/2}>\eta\Bigg\} \ge 1-\frac{2\epsilon}{2} = 1-\epsilon. \]

To complete the proof, we need to verify that $\mathrm{E}\widehat{S}^2_n\le\sigma^2_0(1-1/n)+8\mathrm{E}\|\dot{Z}\|^4/n^2$. To that end, we have
\[ \widehat{S}^2_n = \frac{1}{n}\sum_{i=1}^n\big(\|\dot{Z}_i-\mathrm{G}_N\dot{Z}_i-\check{Z}_n+\mathrm{G}_N\check{Z}_n\|^2-\widehat{\delta}_n\big)^2 = \frac{1}{n}\sum_{i=1}^n\big(\|\dot{Z}_i-\mathrm{G}_N\dot{Z}_i\|^2-\delta^*_n - 2\langle\dot{Z}_i-\mathrm{G}_N\dot{Z}_i,\check{Z}_n-\mathrm{G}_N\check{Z}_n\rangle + 2\|\check{Z}_n-\mathrm{G}_N\check{Z}_n\|^2\big)^2, \]
which, after expanding the square and collecting terms, equals
\[ \frac{1}{n}\sum_{i=1}^n\big(\|\dot{Z}_i-\mathrm{G}_N\dot{Z}_i\|^2-\delta^*_n\big)^2 + \frac{4}{n}\sum_{i=1}^n\langle\dot{Z}_i-\mathrm{G}_N\dot{Z}_i,\check{Z}_n-\mathrm{G}_N\check{Z}_n\rangle^2 - \frac{4}{n}\sum_{i=1}^n\|\dot{Z}_i-\mathrm{G}_N\dot{Z}_i\|^2\langle\dot{Z}_i-\mathrm{G}_N\dot{Z}_i,\check{Z}_n-\mathrm{G}_N\check{Z}_n\rangle + 4\delta^*_n\|\check{Z}_n-\mathrm{G}_N\check{Z}_n\|^2 - 4\|\check{Z}_n-\mathrm{G}_N\check{Z}_n\|^4. \]
We have $\mathrm{E}\big(\|\dot{Z}_i-\mathrm{G}_N\dot{Z}_i\|^2-\delta^*_n\big)^2=\sigma^2_0(1-1/n)$, and
\[ \mathrm{E}\langle\dot{Z}_i-\mathrm{G}_N\dot{Z}_i,\check{Z}_n-\mathrm{G}_N\check{Z}_n\rangle^2 = \mathrm{E}\Big(\sum_{j=1}^\infty\lambda_j U_{ji}\bar{U}_{jn}(1-\rho^2_{Nj})\Big)^2 = \sum_{j=1}^\infty\sum_{j'=1}^\infty\lambda_j\lambda_{j'}\,\mathrm{E}\big(U_{ji}\bar{U}_{jn}U_{j'i}\bar{U}_{j'n}\big)(1-\rho^2_{Nj})(1-\rho^2_{Nj'}), \]
where
\[ \mathrm{E}\big(U_{ji}\bar{U}_{jn}U_{j'i}\bar{U}_{j'n}\big) = n^{-2}\,\mathrm{E}\Big\{U_{j1}U_{j'1}\Big(\sum_{i=1}^n U_{ji}\Big)\Big(\sum_{i=1}^n U_{j'i}\Big)\Big\} = n^{-2}\,\mathrm{E}\big(U_{j1}^2U_{j'1}^2\big), \]
which implies that $\mathrm{E}\langle\dot{Z}_i-\mathrm{G}_N\dot{Z}_i,\check{Z}_n-\mathrm{G}_N\check{Z}_n\rangle^2\le n^{-2}\mathrm{E}\|\dot{Z}\|^4$. Further,
\[ \mathrm{E}\big(\|\dot{Z}_i-\mathrm{G}_N\dot{Z}_i\|^2\langle\dot{Z}_i-\mathrm{G}_N\dot{Z}_i,\check{Z}_n-\mathrm{G}_N\check{Z}_n\rangle\big) = \mathrm{E}\Big\{\sum_{j=1}^\infty\sum_{j'=1}^\infty\lambda_j\lambda_{j'}U_{ji}^2U_{j'i}\bar{U}_{j'n}(1-\rho^2_{Nj})(1-\rho^2_{Nj'})\Big\} \le \frac{\mathrm{E}\|\dot{Z}\|^4}{n}. \]
Additionally, we have
\[ \mathrm{E}\big(\delta^*_n\|\check{Z}_n-\mathrm{G}_N\check{Z}_n\|^2\big) = \mathrm{E}\Big\{\sum_{j=1}^\infty\sum_{j'=1}^\infty\lambda_j\lambda_{j'}\Big(n^{-1}\sum_{i=1}^n U_{ji}^2\Big)\bar{U}_{j'n}^2(1-\rho^2_{Nj})(1-\rho^2_{Nj'})\Big\}, \]
where
\[ \mathrm{E}\Big\{\Big(n^{-1}\sum_{i=1}^n U_{ji}^2\Big)\bar{U}_{j'n}^2\Big\} = n^{-3}\,\mathrm{E}\Big\{\Big(\sum_{i=1}^n U_{ji}^2\Big)\Big(\sum_{i=1}^n U_{j'i}^2 + 2\sum_{i<i'}U_{j'i}U_{j'i'}\Big)\Big\} = n^{-3}\big\{n\,\mathrm{E}(U_{j1}^2U_{j'1}^2) + n(n-1)\big\}. \]
This implies that $\mathrm{E}(\delta^*_n\|\check{Z}_n-\mathrm{G}_N\check{Z}_n\|^2)\le\mathrm{E}\|\dot{Z}\|^4/n^2+\delta^2_n/n\le\mathrm{E}\|\dot{Z}\|^4/n^2+\mathrm{E}\|\dot{Z}\|^4/n$. We can ignore the $-4\|\check{Z}_n-\mathrm{G}_N\check{Z}_n\|^4$ term since it is $\le0$ almost surely.
Thus we have
\[ \mathrm{E}\widehat{S}^2_n \le \sigma^2_0\Big(1-\frac{1}{n}\Big) + \frac{4\mathrm{E}\|\dot{Z}\|^4}{n^2} - \frac{4\mathrm{E}\|\dot{Z}\|^4}{n} + \frac{4\mathrm{E}\|\dot{Z}\|^4}{n^2} + \frac{4\mathrm{E}\|\dot{Z}\|^4}{n} = \sigma^2_0\Big(1-\frac{1}{n}\Big) + \frac{8\mathrm{E}\|\dot{Z}\|^4}{n^2}, \]
and the proof is complete.

Remark 1. We can use $\widehat{T}_n$ and the preceding result to evaluate Assumption (A2). For example, we select $\alpha\in(0,0.5)$ and then reject the null hypothesis that Assumption (A2) holds if $\widehat{T}_n$ is larger than the $(1-\alpha)$ quantile of a standard normal distribution.

We now describe a procedure for selecting the number of PCA components needed to achieve a given level of PVE. For any $Z\in\mathcal{H}$ satisfying Assumption (A1) and any $\alpha\in(0,1)$, let $m_0(\alpha)=\inf\{m\ge1:\sum_{j=1}^m\lambda_j>(1-\alpha)\mathrm{E}\|\dot{Z}\|^2\}$. Note that $\mathrm{E}\|\dot{Z}\|^2=\sum_{j=1}^\infty\lambda_j$, so that $m_0(\alpha)$ is the smallest number of eigenvalues needed to capture over $(1-\alpha)$ of the total variation. Let $\widehat{m}_n(\alpha)=\inf\{m\ge1:\sum_{j=1}^m\widehat{\lambda}_j>(1-\alpha)\mathbb{P}_n\|Z-\bar{Z}_n\|^2\}$. This serves as an estimate of $m_0(\alpha)$, and is similar in spirit to using a scree plot to determine how many principal components to use. This gives a way of deciding how large $m$ should be in a given setting. For example, $\widehat{m}_n(0.1)$ is an estimate of how many eigenvalues are needed to capture over 90% of the total variance. The following corollary tells us that this is an asymptotically valid approach.

Corollary 3. Let $Z$ satisfy Assumption (A1), let $\mathrm{G}_N$ satisfy Assumption (A2), and suppose that Assumption (A3) is satisfied for $m=m_0(\alpha)$. Then $\widehat{m}_n(\alpha)\stackrel{as*}{\to}m_0(\alpha)$.

Proof. We know from previous arguments that $\mathbb{P}_n\|Z-\bar{Z}_n\|^2\stackrel{as*}{\to}\mathrm{E}\|\dot{Z}\|^2>0$. We also know by Theorem 4 that $\widehat{\lambda}_j\to\lambda_j$ for all $1\le j\le m+1$. If $m_0(\alpha)=1$, then $\lambda_1>(1-\alpha)\mathrm{E}\|\dot{Z}\|^2$ and $1\{\widehat{\lambda}_1>(1-\alpha)\mathbb{P}_n\|Z-\bar{Z}_n\|^2\}\stackrel{as*}{\to}1$ as $n\to\infty$. If $m_0(\alpha)>1$, then, for $m=m_0(\alpha)$,
we have $1\{\sum_{j=1}^{m-1}\widehat{\lambda}_j<(1-\alpha)\mathbb{P}_n\|Z-\bar{Z}_n\|^2<\sum_{j=1}^m\widehat{\lambda}_j\}\stackrel{as*}{\to}1$ as $n\to\infty$, and the desired result is proved.

Remark 2. Theorem 4 actually also gives us that $\widehat{\lambda}_{m+1}\to\lambda_{m+1}$; however, we will also need consistency of the accompanying eigenfunctions to be able to estimate all principal components for downstream analysis.

3. Linear Models

In this section, we further develop the previous results for functional principal component linear regression. Suppose we observe an i.i.d. sample $(Y_1,X_1,Z_1),\ldots,(Y_n,X_n,Z_n)$, where
\[ Y = \alpha + \beta^\top X + \langle\gamma,Z\rangle + \varepsilon, \]
where $Y$ is a real-valued outcome of interest; $\alpha\in\mathbb{R}$; $\beta,X\in\mathbb{R}^d$; and $\gamma,Z\in\mathcal{H}$, where $\mathcal{H}$ is a Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$; and where $\varepsilon$ has mean zero conditional on $X$ and $Z$. Our goal is to conduct $\sqrt{n}$-consistent estimation and inference for an approximation of this model based on estimated functional principal components for $Z$. Specifically, we approximate $Z$ with $m$ estimated principal components $\widehat{Z}_*=(\langle\widehat{\phi}_1,Z\rangle,\ldots,\langle\widehat{\phi}_m,Z\rangle)^\top$, where $m$ is selected, for example, as $\widehat{m}_n(\alpha)$ for a chosen value of $\alpha\in(0,1)$, as described at the end of Section 2.2 above, and $\widehat{\phi}_1,\ldots,\widehat{\phi}_m$ are the first $m$ estimated eigenfunctions for $Z$. We then apply least squares for estimation. Specifically, let $\widehat{U}=(X^\top,\widehat{Z}_*^\top)^\top\in\mathbb{R}^p$ be the combined vector of covariates, where $p=d+m$; assume Assumptions (A1)-(A3) for $Z$, $m$, and the estimated eigenfunctions; and also assume $\mathrm{E}\|Z\|^4<\infty$. While we assume here that there is only one such Hilbert-valued random $Z$ in addition to $X$, this can easily be extended to multiple Hilbert-valued random variables; we do not discuss this generalization here. Let $U=(X^\top,Z_*^\top)^\top$, where $U$ is the vector of true targeted regression covariates, with $Z_*=(\langle\phi_1,Z\rangle,\ldots,\langle\phi_m,Z\rangle)^\top$. Then $(\widehat{U}-U)=(0,\ldots,0,\langle\widehat{\phi}_1-\phi_1,Z\rangle,\ldots,\langle\widehat{\phi}_m-\phi_m,Z\rangle)^\top$. Let $\mu_U=\mathrm{E}U$ and $\mu_Z=PZ$; further let $\mu_{\widehat{U}}=P\widehat{U}$, the $P$ operator being an expectation over $X$ and $Z$ but not over the estimated terms $\widehat{\phi}_1,\ldots,\widehat{\phi}_m$, as is the standard interpretation of $P$ in empirical processes.
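The plug-in strategy just described — estimate the leading eigenfunctions of $Z$, form the scores $\widehat{Z}_*$, and regress $Y$ on $(1, X, \widehat{Z}_*)$ by least squares — can be sketched numerically as follows. This is an illustrative finite-dimensional sketch under our own naming conventions, not the authors' implementation:

```python
import numpy as np

def fpc_linear_regression(Y, X, Z, m):
    """Least-squares fit of Y on (1, X, Zhat_*), where Zhat_* holds the
    scores of Z on its first m estimated principal components.

    Y : (n,) outcomes;  X : (n, d) Euclidean covariates;
    Z : (n, p) finite-dimensional stand-in for the Hilbert-valued covariate.
    Returns (theta_hat, phi_hat): the fitted (alpha, beta, gamma) vector and
    the m estimated eigenvectors (columns of phi_hat).
    """
    n = Z.shape[0]
    Zc = Z - Z.mean(axis=0)
    V_hat = Zc.T @ Zc / n                      # empirical covariance operator
    eigval, eigvec = np.linalg.eigh(V_hat)
    order = np.argsort(eigval)[::-1]           # sort eigenvalues descending
    phi_hat = eigvec[:, order[:m]]             # first m eigenfunctions
    scores = Z @ phi_hat                       # Zhat_* = <phi_hat_j, Z>
    U = np.column_stack([np.ones(n), X, scores])
    theta_hat, *_ = np.linalg.lstsq(U, Y, rcond=None)
    return theta_hat, phi_hat
```

Because the score block is an orthogonal reparametrization of the retained $Z$ directions, the fitted intercept and $X$-coefficients agree with an ordinary least-squares fit on the original covariates when $m$ equals the full dimension.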
Here $\mu_{\widehat{U}}=(PX^\top,P\widehat{Z}_*^\top)^\top$, where $PX=\mathrm{E}X\equiv\mu_X$ and $P\widehat{Z}_*=(\langle\widehat{\phi}_1,\mathrm{E}Z\rangle,\ldots,\langle\widehat{\phi}_m,\mathrm{E}Z\rangle)^\top$. Define $\theta_0$ as the minimizer of $P(Y-\alpha-\beta^\top X-\gamma^\top Z_*)^2$; specifically, this takes the form
\[ \theta_0 = \big(\alpha_0\ \ \beta_0\ \ \gamma_0\big)^\top = \Big[P\big\{\big(1\ \ X\ \ Z_*\big)\big\}^{\otimes2}\Big]^{-1}P\big\{\big(1\ \ X\ \ Z_*\big)^\top Y\big\}, \]
where $\otimes2$ denotes the outer product. Let $\widehat{\theta}_n$ be the minimizer of $\mathbb{P}_n(Y-\alpha-\beta^\top X-\gamma^\top\widehat{Z}_*)^2$; specifically, this takes the form
\[ \widehat{\theta}_n = \big(\widehat{\alpha}_n\ \ \widehat{\beta}_n\ \ \widehat{\gamma}_n\big)^\top = \Big[\mathbb{P}_n\big\{\big(1\ \ X\ \ \widehat{Z}_*\big)\big\}^{\otimes2}\Big]^{-1}\mathbb{P}_n\big\{\big(1\ \ X\ \ \widehat{Z}_*\big)^\top Y\big\}. \]
We need the following additional assumption:

(A4) Let the outcome $Y$ satisfy $\mathrm{E}|Y|^2<\infty$ and $\mathrm{E}(\|U\|^2|Y|^2)<\infty$, let $\mathrm{E}\|X\|^2<\infty$, and assume that
\[ P\begin{pmatrix} 1 & U^\top \\ U & UU^\top \end{pmatrix} \]
is full rank.

The following is the main result of this section.

Theorem 7. Under Assumptions (A1)-(A4), and assuming $\mathrm{E}\|Z\|^4<\infty$, we have that
\[ \sqrt{n}(\widehat{\theta}_n-\theta_0) = \mathbb{G}_n\Bigg[\Bigg\{P\begin{pmatrix}1\\X\\Z_*\end{pmatrix}^{\otimes2}\Bigg\}^{-1}\begin{pmatrix}1\\X\\Z_*\end{pmatrix}(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top Z_*) + L_0(\dot{Z}\dot{Z}^\top)\Bigg] + o_P(1), \]
where $L_0(\dot{Z}\dot{Z}^\top)=\{Q_1(\dot{Z}\dot{Z}^\top)\ \ Q_2(\dot{Z}\dot{Z}^\top)\ \ Q_3(\dot{Z}\dot{Z}^\top)\}^\top$; where $Q_1(\dot{Z}\dot{Z}^\top)=-\sum_{j=1}^m\gamma_{0j}\langle\mu_Z,L_{1j}(\dot{Z}\dot{Z}^\top)\rangle\in\mathbb{R}$; where $Q_2(\dot{Z}\dot{Z}^\top)=\{Q_{21}(\dot{Z}\dot{Z}^\top),\ldots,Q_{2d}(\dot{Z}\dot{Z}^\top)\}^\top$, with $Q_{2\ell}(\dot{Z}\dot{Z}^\top)=-\sum_{j=1}^m\gamma_{0j}\langle\mathrm{E}(X_\ell Z),L_{1j}(\dot{Z}\dot{Z}^\top)\rangle\in\mathbb{R}$, for $\ell=1,\ldots,d$; and where $Q_3(\dot{Z}\dot{Z}^\top)=\{Q_{31}(\dot{Z}\dot{Z}^\top),\ldots,Q_{3m}(\dot{Z}\dot{Z}^\top)\}^\top$, with
\[ Q_{3\ell}(\dot{Z}\dot{Z}^\top) = \big\langle\mathrm{E}\{Z(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top Z_*)\},L_{1\ell}(\dot{Z}\dot{Z}^\top)\big\rangle - \sum_{j=1}^m\gamma_{0j}\big\langle\mathrm{E}(Z_{*\ell}Z),L_{1j}(\dot{Z}\dot{Z}^\top)\big\rangle \in\mathbb{R}, \]
for $\ell=1,\ldots,m$.

Proof. Let
\[ \widehat{\theta}^*_n = \big(\widehat{\alpha}^*_n\ \ \widehat{\beta}^*_n\ \ \widehat{\gamma}^*_n\big)^\top = \Big[P\big\{\big(1\ \ X\ \ \widehat{Z}_*\big)\big\}^{\otimes2}\Big]^{-1}P\big\{\big(1\ \ X\ \ \widehat{Z}_*\big)^\top Y\big\}. \]
Then
\[ \sqrt{n}(\widehat{\theta}_n-\widehat{\theta}^*_n) = \Bigg[\mathbb{P}_n\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\sqrt{n}\Bigg[\mathbb{P}_n\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}\big\{Y-\widehat{\alpha}^*_n-(\widehat{\beta}^*_n)^\top X-(\widehat{\gamma}^*_n)^\top\widehat{Z}_*\big\} - P\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}\big\{Y-\widehat{\alpha}^*_n-(\widehat{\beta}^*_n)^\top X-(\widehat{\gamma}^*_n)^\top\widehat{Z}_*\big\}\Bigg] \]
\[ = \Bigg[\mathbb{P}_n\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\sqrt{n}(\mathbb{P}_n-P)\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}\big\{Y-\widehat{\alpha}^*_n-(\widehat{\beta}^*_n)^\top X-(\widehat{\gamma}^*_n)^\top\widehat{Z}_*\big\} = \Bigg[P\begin{pmatrix}1\\X\\Z_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\sqrt{n}(\mathbb{P}_n-P)\begin{pmatrix}1\\X\\Z_*\end{pmatrix}\big(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top Z_*\big) + o_P(1), \]
where, via recycling previous arguments, the first equality holds almost surely for all $n$ large enough, since the first term is almost surely consistent for a full-rank matrix; the next two equalities follow from algebra; and the last equality follows from recycled empirical process arguments, including reuse of Corollary 2. Next, we have
\[ \sqrt{n}(\widehat{\theta}^*_n-\theta_0) = \Bigg[P\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\sqrt{n}\,P\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}\big(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top\widehat{Z}_*\big) \]
\[ = \Bigg[P\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\sqrt{n}\,P\Bigg\{\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}\big(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top\widehat{Z}_*\big) - \begin{pmatrix}1\\X\\Z_*\end{pmatrix}\big(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top Z_*\big)\Bigg\} \]
\[ = \Bigg[P\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\Bigg[P\begin{pmatrix}0\\0\\\sqrt{n}(\widehat{Z}_*-Z_*)\end{pmatrix}\big(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top\widehat{Z}_*\big) - P\begin{pmatrix}1\\X\\Z_*\end{pmatrix}\sqrt{n}(\widehat{Z}_*-Z_*)^\top\gamma_0\Bigg] \]
\[ = \Bigg[P\begin{pmatrix}1\\X\\Z_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\Bigg[P\begin{pmatrix}0\\0\\\sqrt{n}(\widehat{Z}_*-Z_*)\end{pmatrix}\big(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top Z_*\big) - P\begin{pmatrix}1\\X\\Z_*\end{pmatrix}\sqrt{n}(\widehat{Z}_*-Z_*)^\top\gamma_0\Bigg] + o_P(1) \equiv C_1 + o_P(1). \]
The remainder of the proof follows by first recalling that
\[ \sqrt{n}(\widehat{Z}_*-Z_*) = \big(\langle\sqrt{n}(\widehat{\phi}_1-\phi_1),Z\rangle,\ldots,\langle\sqrt{n}(\widehat{\phi}_m-\phi_m),Z\rangle\big)^\top, \]
applying some expectations, and then applying some algebraic derivations. This completes the proof.

Remark 3. By Lemma 19.8 of Kosorok (2008), the above asymptotic linearity implies that the block jackknife can be used for inference about $\theta_0$. To compute the block jackknife, let $\check{\theta}_n$ be an asymptotically linear estimate of $\theta_0$, and select $r>p+1=d+m+1$. Given our i.i.d. sample $(Y_1,X_1,Z_1),\ldots,(Y_n,X_n,Z_n)$, we define $k_{r,n}$ to be the largest integer satisfying $rk_{r,n}\le n$, and we define $N_{r,n}\equiv rk_{r,n}$. We calculate $\check{\theta}_n$ from our sample, and we then sample $N_{r,n}$ out of the $n$ observations without replacement. Then, for $\ell=1,\ldots,r$, we compute $\check{\theta}^*_{n,\ell}$ as the estimate of $\theta_0$ based on this set of observations after omitting the observations with index $\ell, r+\ell, 2r+\ell,\ldots,(k_{r,n}-1)r+\ell$. Details for performing inference with the block jackknife can be found in Kosorok (2008).
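The resampling scheme of Remark 3 can be sketched as follows. The leave-block-out replicates follow the text; the delete-group jackknife variance formula at the end is a standard choice and an assumption on our part, since the text defers those details to Kosorok (2008):

```python
import numpy as np

def block_jackknife(estimator, data, r, rng=None):
    """Block jackknife of Remark 3.

    estimator : maps an (n', ...) array of observations to an estimate in R^q.
    data      : array with n rows (one row per observation).
    r         : number of groups; the text requires r > p + 1.
    Returns the r leave-one-block-out estimates and a delete-group jackknife
    variance estimate (the variance formula is a common convention, not
    spelled out in the text).
    """
    rng = np.random.default_rng(rng)
    n = data.shape[0]
    k = n // r                                  # k_{r,n}: largest k with r*k <= n
    N = r * k                                   # N_{r,n}
    sub = rng.choice(n, size=N, replace=False)  # sample N of n without replacement
    reps = []
    for ell in range(r):
        keep = np.ones(N, dtype=bool)
        keep[ell::r] = False                    # omit indices ell, r+ell, 2r+ell, ...
        reps.append(estimator(data[sub[keep]]))
    reps = np.asarray(reps)
    var_hat = (r - 1) / r * np.sum((reps - reps.mean(axis=0)) ** 2, axis=0)
    return reps, var_hat
```

For the sample mean, the resulting variance estimate is close to the usual $\sigma^2/N_{r,n}$, which is a quick sanity check on the construction.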
The next section also verifies that both the nonparametric and wild bootstraps work for inference.

Remark 4. Suppose $Y=\check{\alpha}_0+\check{\beta}_0^\top X+\check{\gamma}_0^\top Z_*+\varepsilon$, where $\mathrm{E}[\varepsilon|X,Z_*]=0$, $\mathrm{E}\varepsilon^4<\infty$, $\mathrm{E}\|X\|^4<\infty$, and $\mathrm{E}\|Z\|^4<\infty$. Then $\theta_0=(\check{\alpha}_0,\check{\beta}_0,\check{\gamma}_0)$ and $\mathrm{E}Y^4<\infty$.

4. Bootstrap

In this section, we establish the validity of two types of bootstrap for inference for the linear model estimates obtained in the previous section. Let $\widetilde{\mathbb{P}}_n f\equiv n^{-1}\sum_{i=1}^n W_{ni}f(Y_i,X_i,Z_i)$, with $W_n=(W_{n1},\ldots,W_{nn})$; let $\widetilde{\mathbb{G}}_n=\sqrt{n}(\widetilde{\mathbb{P}}_n-\mathbb{P}_n)$.

(A5) Let each $W_{ni}$ be either a random draw (independent of the data) of a multinomial distribution with $n$ categories of probability $1/n$ each (the nonparametric bootstrap); or let each $W_{ni}=\xi_i/\bar{\xi}_n$, where $\xi_1,\ldots,\xi_n$ are i.i.d. positive random variables
with mean and variance 1, and $\bar{\xi}_n=n^{-1}\sum_{i=1}^n\xi_i$, with $\|\xi\|_{2,1}=\int_0^\infty\sqrt{\Pr(|\xi|>x)}\,dx<\infty$ (the wild bootstrap).

By Theorem 2.7 of Kosorok (2008), since $\mathcal{F}$ is Donsker and $P^*\{\sup_{f\in\mathcal{F}}(f(x)-Pf)^2\}<\infty$, we have that $\widetilde{\mathbb{G}}_n$ converges to $\mathbb{G}$ in $\ell^\infty(\mathcal{F})$, outer almost surely. Let $\widetilde{V}^*_n=\mathrm{G}_N^\top\widetilde{\mathbb{P}}_n\{(Z-\widetilde{Z}^*_n)(Z-\widetilde{Z}^*_n)^\top\}\mathrm{G}_N$, with $\widetilde{Z}^*_n=\widetilde{\mathbb{P}}_n Z$. We remind the reader that, for a sequence of bootstrapped processes $\{\widetilde{X}_n\}$ with random weights denoted by $W$ and a tight process $X$, we use the notation $\widetilde{X}_n\overset{P}{\underset{W}{\rightsquigarrow}}X$ to mean that $\sup_{h\in\mathrm{BL}_1}|\mathrm{E}_W h(\widetilde{X}_n)-\mathrm{E}h(X)|\stackrel{P}{\to}0$ and $\mathrm{E}_W h(\widetilde{X}_n)^*-\mathrm{E}_W h(\widetilde{X}_n)_*\stackrel{P}{\to}0$ for all $h\in\mathrm{BL}_1$, where the subscript $W$ in the expectations indicates conditional expectation over the weights $W$ given the remaining data; further, $\mathrm{BL}_1$ is the space of functions with Lipschitz norm bounded by 1. The next two theorems provide necessary preliminary results for establishing bootstrap validity:

Theorem 8. Let Assumptions (A1), (A2), and (A5) hold. Then $\sqrt{n}(\widetilde{V}^*_n-\widehat{V}_n)-\widetilde{\mathbb{G}}_n(\dot{Z}\dot{Z}^\top)\overset{P}{\underset{W}{\rightsquigarrow}}0$ in $\ell^\infty(\mathcal{B}\times\mathcal{B})$.

Proof. For any $a_1,a_2\in\mathcal{B}$, we have
\[ a_1^\top\big[\sqrt{n}(\widetilde{V}^*_n-\widehat{V}_n)-\widetilde{\mathbb{G}}_n(\dot{Z}\dot{Z}^\top)\big]a_2 = n^{-1/2}\sum_{i=1}^n(W_{ni}-1)\,a_1^\top\big[\mathrm{G}_N^\top(Z_i-\widetilde{Z}^*_n)(Z_i-\widetilde{Z}^*_n)^\top\mathrm{G}_N - \dot{Z}_i\dot{Z}_i^\top\big]a_2 \]
\[ = a_1^\top n^{-1/2}\sum_{i=1}^n(W_{ni}-1)\Big[\mathrm{G}_N^\top\dot{Z}_i\dot{Z}_i^\top\mathrm{G}_N - \dot{Z}_i\dot{Z}_i^\top - \mathrm{G}_N^\top\big\{(\widetilde{Z}^*_n-\mu)(\widetilde{\mathbb{G}}_n Z)^\top + (\widetilde{\mathbb{G}}_n Z)(\widetilde{Z}^*_n-\mu)^\top\big\}\mathrm{G}_N\Big]a_2 = a_1^\top(A_n-B_n)a_2, \]
where $A_n=n^{-1/2}\sum_{i=1}^n(W_{ni}-1)(\mathrm{G}_N^\top\dot{Z}_i\dot{Z}_i^\top\mathrm{G}_N-\dot{Z}_i\dot{Z}_i^\top)$ and $B_n=-n^{-1/2}\sum_{i=1}^n(W_{ni}-1)\mathrm{G}_N^\top\{(\widetilde{Z}^*_n-\mu)(\widetilde{\mathbb{G}}_n Z)^\top+(\widetilde{\mathbb{G}}_n Z)(\widetilde{Z}^*_n-\mu)^\top\}\mathrm{G}_N$. Note that $\widetilde{Z}^*_n-\mu=\widetilde{Z}^*_n-\bar{Z}_n+\bar{Z}_n-\mu$; this implies that $\Pr(\|\widetilde{Z}^*_n-\mu\|>\epsilon\,|\,Z_1,\ldots,Z_n)\le\Pr(\|\widetilde{Z}^*_n-\bar{Z}_n\|>\epsilon/2\,|\,Z_1,\ldots,Z_n)+1(\|\bar{Z}_n-\mu\|>\epsilon/2)$. As $\Pr(\|\widetilde{Z}^*_n-\bar{Z}_n\|>\epsilon/2\,|\,Z_1,\ldots,Z_n)$ converges to zero conditionally in probability and $1(\|\bar{Z}_n-\mu\|>\epsilon/2)\stackrel{P}{\to}0$, we have that $\Pr(\|\widetilde{Z}^*_n-\mu\|>\epsilon\,|\,Z_1,\ldots,Z_n)$ converges to zero in probability. Thus, we can proceed with $A_n$. Now, $|a_1^\top A_n a_2|=|a_1^\top\{n^{-1/2}\sum_{i=1}^n(W_{ni}-1)(\mathrm{G}_N^\top\dot{Z}_i\dot{Z}_i^\top\mathrm{G}_N-\dot{Z}_i\dot{Z}_i^\top)\}a_2|$. Since $\mathrm{E}(|W_{ni}-1|\,|\,Z_1,\ldots,Z_n)\le\mathrm{E}(W_{ni}+1\,|\,Z_1,\ldots,Z_n)=2$, this implies that
\[ \mathrm{E}\big(|a_1^\top A_n a_2|\,\big|\,Z_1,\ldots,Z_n\big) \le 2\sqrt{n}\Big\{n^{-1}\sum_{i=1}^n\big|a_1^\top(\mathrm{G}_N^\top\dot{Z}_i\dot{Z}_i^\top\mathrm{G}_N-\dot{Z}_i\dot{Z}_i^\top)a_2\big|\Big\} \le 2\sqrt{n}\Big\{n^{-1}\sum_{i=1}^n\big(\|\mathrm{G}_N^\top Z_i-Z_i\|\|\mathrm{G}_N Z_i\| + \|Z_i\|\|\mathrm{G}_N Z_i-Z_i\|\big)\Big\} \le 4\sqrt{n}\Big\{n^{-1}\sum_{i=1}^n\|Z_i\|\|\mathrm{G}_N^\top Z_i-Z_i\|\Big\} \equiv C_1, \]
since $\|\mathrm{G}_N Z_i\|\le\|Z_i\|$. Thus
\[ C_1 \le 4\Big(n^{-1}\sum_{i=1}^n\|Z_i\|^2\Big)^{1/2}\sqrt{n}\Big(n^{-1}\sum_{i=1}^n\|\mathrm{G}_N Z_i-Z_i\|^2\Big)^{1/2} = 4\big(\mathrm{E}\|Z\|^2+o_P(1)\big)^{1/2}\Big(\sum_{i=1}^n\|\mathrm{G}_N Z_i-Z_i\|^2\Big)^{1/2} = o_P(1), \]
since $\mathrm{E}\|\mathrm{G}_N Z_i-Z_i\|^2=o(n^{-1})$.
This implies that $\sup_{a_1,a_2\in\mathcal{B}}|a_1^\top A_n a_2|\overset{P}{\underset{W}{\rightsquigarrow}}0$, and thus the proof is complete.

Theorem 9. Let Assumptions (A1)-(A3) and (A5) hold for some $1\le m\le\infty$, and let $f:\ell^\infty(\mathcal{B}\times\mathcal{B})\to\mathbb{D}$, for a Banach space $\mathbb{D}$, be Hadamard differentiable tangentially to $\ell^\infty(\mathcal{B}\times\mathcal{B})$. Then
\[ \sqrt{n}\big\{f(\widetilde{V}^*_n)-f(\widehat{V}_n)\big\} - \dot{f}_{V_0}\mathbb{G}_n(\dot{Z}\dot{Z}^\top) \overset{P}{\underset{W}{\rightsquigarrow}} 0. \]

Proof. The bootstrap and measurability conditions of Theorem 12.1 of Kosorok (2008) are satisfied by $\widetilde{V}^*_n$, since $\sqrt{n}(\widetilde{V}^*_n-\widehat{V}_n)\overset{P}{\underset{W}{\rightsquigarrow}}\mathbb{G}_n(\dot{Z}\dot{Z}^\top)\rightsquigarrow X$, which is tight (since the Donsker class has bootstrap validity by Theorem 2.6 of Kosorok (2008)). Hence, the results given in the proof of Theorem 12.1 of Kosorok (2008) apply; namely, expression (12.1) of Kosorok (2008) yields the desired result.

We now give the main result of this section. Before doing so, we define
\[ \widetilde{\theta}_n = \Big[\widetilde{\mathbb{P}}_n\big\{\big(1\ \ X\ \ \widetilde{Z}_*\big)\big\}^{\otimes2}\Big]^{-1}\widetilde{\mathbb{P}}_n\big\{\big(1\ \ X\ \ \widetilde{Z}_*\big)^\top Y\big\}, \]
where $\widetilde{Z}_*$ is the bootstrapped $Z_*$. Recall that $\widehat{\theta}_n=[\mathbb{P}_n\{(1\ X\ \widehat{Z}_*)\}^{\otimes2}]^{-1}\mathbb{P}_n\{(1\ X\ \widehat{Z}_*)^\top Y\}$. The following theorem tells us that both the nonparametric and wild bootstraps converge, conditionally on the data, to the same limiting distribution as the linear model estimators, and can thus be used for inference:

Theorem 10. Under Assumptions (A1)-(A5), we have that
\[ \sqrt{n}(\widetilde{\theta}_n-\widehat{\theta}_n) - \widetilde{\mathbb{G}}_n\Bigg[\Bigg\{P\begin{pmatrix}1\\X\\Z_*\end{pmatrix}^{\otimes2}\Bigg\}^{-1}\begin{pmatrix}1\\X\\Z_*\end{pmatrix}(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top Z_*) + L_0(\dot{Z}\dot{Z}^\top)\Bigg] \overset{P}{\underset{W}{\rightsquigarrow}} 0. \]

Proof. Let
\[ \widetilde{\theta}^*_n = \big(\widetilde{\alpha}^*_n\ \ \widetilde{\beta}^*_n\ \ \widetilde{\gamma}^*_n\big)^\top = \Big[\mathbb{P}_n\big\{\big(1\ \ X\ \ \widetilde{Z}_*\big)\big\}^{\otimes2}\Big]^{-1}\mathbb{P}_n\big\{\big(1\ \ X\ \ \widetilde{Z}_*\big)^\top Y\big\}. \]
Then
\[ \sqrt{n}(\widetilde{\theta}_n-\widetilde{\theta}^*_n) = \Bigg[\widetilde{\mathbb{P}}_n\begin{pmatrix}1\\X\\\widetilde{Z}_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\sqrt{n}\Bigg[\widetilde{\mathbb{P}}_n\begin{pmatrix}1\\X\\\widetilde{Z}_*\end{pmatrix}\big\{Y-\widetilde{\alpha}^*_n-(\widetilde{\beta}^*_n)^\top X-(\widetilde{\gamma}^*_n)^\top\widetilde{Z}_*\big\} - \mathbb{P}_n\begin{pmatrix}1\\X\\\widetilde{Z}_*\end{pmatrix}\big\{Y-\widetilde{\alpha}^*_n-(\widetilde{\beta}^*_n)^\top X-(\widetilde{\gamma}^*_n)^\top\widetilde{Z}_*\big\}\Bigg] \]
\[ = \Bigg[\mathbb{P}_n\begin{pmatrix}1\\X\\\widetilde{Z}_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\sqrt{n}(\widetilde{\mathbb{P}}_n-\mathbb{P}_n)\begin{pmatrix}1\\X\\\widetilde{Z}_*\end{pmatrix}\big\{Y-\widetilde{\alpha}^*_n-(\widetilde{\beta}^*_n)^\top X-(\widetilde{\gamma}^*_n)^\top\widetilde{Z}_*\big\} = \Bigg[P\begin{pmatrix}1\\X\\Z_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\sqrt{n}(\widetilde{\mathbb{P}}_n-\mathbb{P}_n)\begin{pmatrix}1\\X\\Z_*\end{pmatrix}\big(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top Z_*\big) + E_n, \]
where $E_n\overset{P}{\underset{W}{\rightsquigarrow}}0$, recycling previous arguments. Next,
\[ \sqrt{n}(\widetilde{\theta}^*_n-\widehat{\theta}_n) = \Bigg[\mathbb{P}_n\begin{pmatrix}1\\X\\\widetilde{Z}_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\sqrt{n}\,\mathbb{P}_n\begin{pmatrix}1\\X\\\widetilde{Z}_*\end{pmatrix}\big(Y-\widehat{\alpha}_n-\widehat{\beta}_n^\top X-\widehat{\gamma}_n^\top\widetilde{Z}_*\big) \]
\[ = \Bigg[\mathbb{P}_n\begin{pmatrix}1\\X\\\widetilde{Z}_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\Bigg[\mathbb{P}_n\begin{pmatrix}0\\0\\\sqrt{n}(\widetilde{Z}_*-\widehat{Z}_*)\end{pmatrix}\big(Y-\widehat{\alpha}_n-\widehat{\beta}_n^\top X-\widehat{\gamma}_n^\top\widetilde{Z}_*\big) - \mathbb{P}_n\begin{pmatrix}1\\X\\\widehat{Z}_*\end{pmatrix}\sqrt{n}(\widetilde{Z}_*-\widehat{Z}_*)^\top\widehat{\gamma}_n\Bigg] \]
\[ = \Bigg[P\begin{pmatrix}1\\X\\Z_*\end{pmatrix}^{\otimes2}\Bigg]^{-1}\Bigg[P\begin{pmatrix}0\\0\\\sqrt{n}(\widetilde{Z}_*-\widehat{Z}_*)\end{pmatrix}\big(Y-\alpha_0-\beta_0^\top X-\gamma_0^\top Z_*\big) - P\begin{pmatrix}1\\X\\Z_*\end{pmatrix}\sqrt{n}(\widetilde{Z}_*-\widehat{Z}_*)^\top\gamma_0\Bigg] + E_{n1} \equiv C_2, \]
where $E_{n1}\overset{P}{\underset{W}{\rightsquigarrow}}0$, recycling previous arguments, and thus $C_2=L_0\{\widetilde{\mathbb{G}}_n(\dot{Z}\dot{Z}^\top)\}+E_{n2}$, where $E_{n2}\overset{P}{\underset{W}{\rightsquigarrow}}0$. Thus the proof is complete.

5. Application to Neuroimaging Data

The application of PCA in Hilbert spaces to neuroimaging data requires careful consideration of both the basis representation and the selection of principal components. Neuroimaging data present unique challenges due to their high dimensionality, complex spatial structure with irregular boundaries, and spatial heterogeneity across brain regions. In this section, we describe our approach to implementing the proposed methodology for neuroimaging applications. We first discuss multivariate splines over triangulations as our choice for the initial basis functions, which effectively capture the complex spatial patterns inherent in brain imaging. We then present criteria for selecting the number of principal components so as to balance model complexity with explanatory power. Together, these methodological considerations form the foundation for our analysis framework, enabling robust and interpretable decomposition of neuroimaging data in the proposed Hilbert space setting.

5.1. Multivariate Splines over Triangulations

Selecting efficient initial bases $\Psi^*_N$ lays the foundation for the entire estimation procedure.
Various nonparametric methods have been developed for neuroimage analysis; see, for example, tensor-product-based kernel smoothing (Zhu et al., 2014), tensor-product B-spline smoothing (Shi et al., 2022), and multivariate splines over triangulations (MST; Lai and Schumaker, 2007). Among these approaches, MST has demonstrated superiority in analyzing multi-dimensional imaging data, as evidenced by applications in bivariate penalized spline analysis of 2D images (Lai and Wang, 2013) and trivariate penalized spline analysis of 3D images (Li et al., 2024). While univariate splines are well understood and widely applied, their multivariate counterparts introduce additional theoretical and computational challenges due to the complexity induced by multiple dimensions. One effective approach to constructing multivariate splines is through triangulations, which partition a domain into non-overlapping simplices (triangles in two dimensions, tetrahedra in three dimensions). The construction of MST involves two main steps: first, constructing a triangulation
to approximate the domain of interest; and second, constructing multivariate splines based on this triangulation. A multivariate spline over a triangulation is defined as a piecewise polynomial function on a triangulated domain that ensures smooth connections across simplex boundaries, with the degree of smoothness typically specified by requiring continuity of derivatives up to a given order. These splines are especially valuable for statistical applications involving irregularly shaped domains, such as those arising in neuroimaging data analysis. By utilizing triangulations, multivariate splines provide flexible function approximations while maintaining computational efficiency. In contrast to tensor-product kernel or spline methods, which suffer from grid-alignment constraints, triangulation-based splines adapt more naturally to complex data distributions and heterogeneous spatial structures. The theoretical properties and statistical efficiency of MST have been established in Lai and Wang (2013) and Li et al. (2024), which guarantee that Assumption (A2) is satisfied. For further technical details on the construction and mathematical properties of MST, we refer to Lai and Schumaker (2007).

5.2. Selection Criteria for Principal Component Numbers

The selection of leading principal components (PCs) has been extensively studied in the FDA literature, particularly for functional data with univariate indices. Among the various methodologies, two methods have gained favor due to their high testing power and computational efficiency: ranking PCs based on the percentage of variance explained (PVE; Kong et al., 2016), and the percentage of association-variation explained (PAVE; Su et al., 2017). In theory, we have developed the results for consistent PCA selection based on PVE in Section 2.
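The PVE-based selection rule developed in Section 2 amounts to a cumulative-sum threshold on the estimated eigenvalues. A minimal sketch (illustrative only; the function name is ours, and the infinite eigenvalue sum is truncated to the available estimates):

```python
import numpy as np

def select_m_pve(eigvals, alpha=0.95):
    """Smallest J such that the first J estimated eigenvalues explain at
    least a fraction `alpha` of the total variance.

    eigvals : estimated eigenvalues lambda_hat_1 >= lambda_hat_2 >= ...,
              truncated to however many are available.
    """
    lam = np.asarray(eigvals, dtype=float)
    frac = np.cumsum(lam) / lam.sum()       # cumulative proportion of variance
    return int(np.argmax(frac >= alpha)) + 1  # first index meeting the threshold
```

With the eigenvalues used in the simulations of Section 6, (3.5, 3, 2.5, 2, 1.5, 1), a 0.95 threshold retains all six components, while a 0.90 threshold retains five.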
In practice, we employ both the PVE and PAVE criteria to select the leading PC bases, extending their application from the univariate functional linear model setting to our proposed framework. Specifically, for a given threshold $\alpha$ and the estimated eigenvalues $\widehat{\lambda}_j$, $j=1,\ldots,J$:

- PVE selects the number of bases $J$ such that $\sum_{j=1}^J\widehat{\lambda}_j\big/\sum_{j=1}^\infty\widehat{\lambda}_j\ge\alpha$ and $\sum_{j=1}^{J-1}\widehat{\lambda}_j\big/\sum_{j=1}^\infty\widehat{\lambda}_j<\alpha$;
- PAVE selects the number of bases $J$ such that $\sum_{j=1}^J\widehat{\lambda}_j\langle\widehat{\phi}_j,\widehat{\beta}\rangle^2\big/\sum_{j=1}^{J^*}\widehat{\lambda}_j\langle\widehat{\phi}_j,\widehat{\beta}\rangle^2\ge\alpha$ and $\sum_{j=1}^{J-1}\widehat{\lambda}_j\langle\widehat{\phi}_j,\widehat{\beta}\rangle^2\big/\sum_{j=1}^{J^*}\widehat{\lambda}_j\langle\widehat{\phi}_j,\widehat{\beta}\rangle^2<\alpha$, where $J^*$ is determined by pre-fitting the truncated model with a high PVE threshold, typically in the range $[0.95,0.99]$.

In practice, the threshold $\alpha$ is commonly chosen within $[0.95,0.99]$. For our implementation, we set $\alpha=0.95$ for both the PVE and PAVE criteria, and use $\alpha=0.99$ for the PVE threshold in the pre-fitting step required for PAVE computation. Throughout our subsequent analyses, we adopt both PVE and PAVE as selection criteria for the number of bases, and denote the corresponding results with the superscripts "PVE" and "PAVE", respectively.

6. Numerical Studies

We assess the finite-sample performance of the proposed method through simulation studies. The data are generated according to the model
\[ Y_i = \alpha + \beta^\top X_i + \langle\gamma,Z_i\rangle + \varepsilon_i, \qquad i=1,\ldots,n. \]
The covariates $X_i\in\mathbb{R}^d$ are drawn independently from $\mathrm{MVN}(0_d,\Omega_d(r))$, where the covariance structure $\Omega_d(r)$ has elements $\{\Omega_d(r)\}_{\ell,\ell'}=r^{|\ell-\ell'|}$, which allows us to control the dependence among covariates through the parameter $r$. The random errors $\varepsilon_i$ are generated independently from a standard normal distribution $N(0,1)$. As for the Hilbert-valued components, we considered several different settings as
|
https://arxiv.org/abs/2504.16780v1
|
shown below. In all simulations, we fix $d=4$, $\alpha_0=1$, and $\beta_0=(1,1,1,1)^\top$. We examine two correlation scenarios, $r=0$ (independent covariates) and $r=0.5$ (moderately correlated covariates), and three sample sizes, $n=100$, $500$, and $2000$. We conduct 100 Monte Carlo replications for each setting, compare the estimated bases and eigenvalues to their underlying truth, measure the estimation accuracy of the Euclidean covariates by mean squared errors (MSEs), and quantify the inference performance of the bootstrap confidence intervals.

6.1. 2D Imaging Analysis

For the Hilbert-valued component, we first conduct the analysis for 2D images. To mimic the full complexity of the real data and illustrate the performance of the proposed method in a challenging scenario with complicated patterns, we employ basis functions obtained from the real data analysis in Section 7. To be specific, we construct the imaging data as $Z_i(s_1,s_2)=\sum_{j=1}^6\lambda_j^{1/2}U_{ij}\phi_j(s_1,s_2)$ on a $79\times95$ pixel grid, where the eigenvalues are $(\lambda_1,\lambda_2,\lambda_3,\lambda_4,\lambda_5,\lambda_6)=(3.5,3,2.5,2,1.5,1)$ and $(U_{i1},\ldots,U_{i6})^\top$ follows $\mathrm{MVN}(0_6,I_6)$. The basis functions $\{\phi_j\}_{j=1}^6$ are the leading six principal component bases selected by both PVE and PAVE from the real data analysis; see Section 7.1 for more details. Figure 6.1 (top row) and Figure 7.1 (left three columns) display these basis functions. We set $\gamma_0=1.5\phi_1+\phi_2+2\phi_3+2.5\phi_4+1.5\phi_5+3\phi_6$.

Figure 6.1: Comparison of true basis functions $\{\phi_j\}_{j=1}^6$ (top) with their corresponding estimates $\{\widehat{\phi}_j\}_{j=1}^6$ (bottom) from a randomly selected single iteration with $n=2000$ observations. The plots demonstrate the high accuracy of the estimation method, capturing both the pattern and magnitude of each basis function.

The estimates of the basis functions are shown at the bottom of Figure 6.1, which is selected randomly from a single iteration.
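The simulation design above — AR(1)-correlated Euclidean covariates plus a Hilbert-valued image built from a fixed eigenbasis — can be sketched as follows. All names are our own, and the basis matrix here is a generic placeholder for the real-data bases used in the paper:

```python
import numpy as np

def simulate_fpca_data(n, phi, lam, alpha0=1.0, beta0=(1, 1, 1, 1),
                       gamma_coef=(1.5, 1, 2, 2.5, 1.5, 3), r=0.5, rng=None):
    """Generate data from Y_i = alpha + beta' X_i + <gamma, Z_i> + eps_i.

    phi : (J, p) array whose rows are (vectorised) orthonormal basis images;
    lam : length-J eigenvalues.  X_i ~ MVN(0, Omega_d(r)) with
    {Omega_d(r)}_{l,l'} = r^|l-l'|; scores U_i ~ MVN(0, I_J); eps_i ~ N(0,1).
    """
    rng = np.random.default_rng(rng)
    lam = np.asarray(lam, dtype=float)
    d = len(beta0)
    idx = np.arange(d)
    Omega = r ** np.abs(idx[:, None] - idx[None, :])   # AR(1)-type covariance
    X = rng.multivariate_normal(np.zeros(d), Omega, size=n)
    U = rng.normal(size=(n, len(lam)))                 # i.i.d. standard scores
    Z = (U * np.sqrt(lam)) @ phi                       # Z_i = sum_j lam_j^{1/2} U_ij phi_j
    gamma = np.asarray(gamma_coef) @ phi               # gamma = sum_j c_j phi_j
    Y = alpha0 + X @ np.asarray(beta0, dtype=float) + Z @ gamma + rng.normal(size=n)
    return Y, X, Z
```

Here $\langle\gamma, Z_i\rangle$ reduces to a Euclidean inner product because the basis rows are taken orthonormal in the discretized space.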
The figure shows that with $n=2000$ observations, the estimated basis functions (bottom row) closely reproduce both the pattern and magnitude of the true functions (top row), providing empirical validation of Theorem 4. In addition, Table 6.1 presents the estimation accuracy of the six eigenvalues ($\lambda_1$ through $\lambda_6$) and the estimated number of eigenvalues under the PVE ($\widehat{m}_n^{\mathrm{PVE}}$) and PAVE ($\widehat{m}_n^{\mathrm{PAVE}}$) criteria. One can observe that the MSEs for all eigenvalues decrease as sample size increases across both correlation scenarios, demonstrating improved estimation precision with larger samples. The PVE criterion performs better than PAVE, and accurately estimates the true number of eigenvalues (6.00) at moderate and large sample sizes, while slightly underestimating it at small sample sizes. The results in Table 6.1 further support the conclusions in Theorem 4 and Corollary 3.

Table 6.1: Mean squared errors (MSEs $\times10^{-2}$) for eigenvalue estimates $\lambda_j$ ($j=1,\ldots,6$), and estimated number of PCA components using the PVE ($\widehat{m}_n^{\mathrm{PVE}}$) and PAVE ($\widehat{m}_n^{\mathrm{PAVE}}$) criteria. Results are based on 100 Monte Carlo simulations with varying sample sizes ($n=100,500,2000$) and correlation scenarios ($r=0,0.5$). Estimation accuracy for the MSEs and the number of eigenvalues improves with increased sample size for both criteria.

  r     n      λ1     λ2    λ3    λ4    λ5    λ6    m̂_PVE   m̂_PAVE
  0     100    39.66  9.04  6.82  7.58  4.57  4.24  5.98    4.58
        500     4.77  3.67  2.41  1.42  0.83  0.45  6.00    5.20
        2000    1.25  0.97  0.64  0.45  0.18  0.09  6.00    5.53
  0.5   100    39.66  9.04  6.82  7.58  4.57  4.24  5.98    4.58
        500     4.77  3.67  2.41  1.42  0.83  0.45  6.00    5.20
        2000    1.25  0.97  0.64  0.45  0.18  0.09  6.00    5.53

To assess the estimation accuracy and uncertainty quantification of the proposed methods, we report MSEs and empirical coverage rates of the 95% bootstrap confidence intervals for the coefficients $\alpha$, $\beta$, and $\gamma$. Tables 6.2 and 6.3 summarize the results under varying sample sizes ($n=100,500,2000$) and correlation levels ($r=0,0.5$), using both the PVE and PAVE criteria. Note that with 100 Monte Carlo replications, the standard error for the 95% confidence level is around 2.18%. As expected, the MSEs decrease significantly with increasing sample size, reflecting the consistency of the estimators. The presence of correlation ($r=0.5$) generally leads to larger estimation errors for the linear coefficients $\alpha$ and $\beta$ compared with the independent setting ($r=0$), indicating increased complexity in estimation under dependence. Both criteria yield comparable performance in terms of MSE and coverage, with coverage rates close to the nominal level across all settings, and PVE outperforms PAVE in all settings. These findings support the robustness and practical applicability of the proposed estimation method, and provide numerical validation of the theoretical results established in Theorem 7. Figure 6.2 further illustrates the effectiveness of our approach in estimating the functional coefficient $\gamma$. Both the PVE and PAVE criteria produce estimates that accurately capture the true function's patterns. The visual similarity across all three plots confirms that both estimation criteria perform well in reconstructing the true signal.
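The coverage figures reported in Tables 6.2 and 6.3 can be reproduced in spirit with a percentile bootstrap: build an interval from resampled estimates, then count how often it covers the truth across Monte Carlo replications. A sketch of the interval construction (a generic helper of our own, not the paper's exact procedure):

```python
import numpy as np

def bootstrap_percentile_ci(estimator, data, level=0.95, B=200, rng=None):
    """Nonparametric-bootstrap percentile confidence interval for a scalar
    estimator.  Resampling rows with replacement corresponds to the
    multinomial weights W_ni of Assumption (A5).
    """
    rng = np.random.default_rng(rng)
    n = data.shape[0]
    boot = np.array([estimator(data[rng.integers(0, n, n)]) for _ in range(B)])
    lo, hi = np.quantile(boot, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi
```

Empirical coverage is then the fraction of replications whose interval contains the true coefficient; with 100 replications the binomial standard error at the 95% level is about 2.18%, matching the note in the text.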
6.2 3D Imaging Analysis

We further explore the performance of the proposed method for 3D imaging analysis. In this scenario, for the Hilbert-valued component, we construct the imaging data as Zi(s1, s2, s3) = Σ_{j=1}^{2} λj^{1/2} ζij ϕj(s1, s2, s3) on a 79 × 95 × 66 voxel grid, which mimics the typical

Table 6.2: Mean squared errors (MSEs ×10⁻³) and empirical coverage rates (in parentheses) of the 95% bootstrap confidence intervals for the estimates of the intercept α and the linear coefficients β, using the PVE and PAVE criteria. Results are reported for different sample sizes (n = 100, 500, 2000) and correlation scenarios (r = 0, 0.5), based on 100 Monte Carlo replications. Both criteria demonstrate improved estimation accuracy with increased sample size, while the presence of correlation leads to higher estimation error. Coverage rates remain close to the nominal level across all settings.

                 PVE                                      PAVE
 r    n      α      β1     β2     β3     β4       α      β1     β2     β3     β4
 0    100   15.48  14.14  10.69  10.30  14.69    19.62  24.75  30.68  23.37  23.78
            (98%)  (97%)  (100%) (97%)  (96%)    (100%) (100%) (100%) (100%) (100%)
      500    1.96   2.49   1.56   1.51   2.27     3.42   4.27   2.82   3.05   4.73
            (96%)  (92%)  (98%)  (97%)  (96%)    (100%) (98%)  (100%) (99%)  (99%)
      2000   0.54   0.51   0.45   0.58   0.42     0.82   0.81   0.70   0.92   0.62
            (94%)  (97%)  (98%)  (94%)  (97%)    (100%) (98%)  (100%) (99%)  (99%)
 0.5  100   15.48  14.96  22.90  22.77  14.25    19.62  36.26  48.92  39.14  33.31
            (98%)  (98%)  (97%)  (95%)  (97%)    (100%) (100%) (100%) (100%) (100%)
      500    1.96   2.46   3.32   3.74   2.62     3.42   4.02   6.76   6.24   4.91
            (96%)  (96%)  (94%)  (93%)  (96%)    (100%) (100%) (99%)  (99%)  (98%)
      2000   0.54   0.68   0.95   0.69   0.66     0.82   1.09   1.39   1.14   1.07
            (94%)  (95%)  (92%)  (97%)  (96%)    (100%) (99%)  (98%)  (98%)  (100%)

Figure 6.2: Comparison of the true function γ (left) with its estimates γ̂ using the PVE criterion (middle) and the PAVE criterion (right), based on a randomly selected single iteration with n = 2000 observations. Both estimation methods accurately capture the pattern and magnitude of the true function, demonstrating the high precision of the proposed method with an adequate sample size.

Table 6.3: Mean squared errors (MSEs ×10⁻¹) and empirical coverage rates (in parentheses) of the 95% bootstrap confidence intervals for the coefficient γ, using the PVE and PAVE criteria. Results are shown for various sample sizes (n = 100, 500, 2000) and correlation levels (r = 0, 0.5), based on 100 Monte Carlo replications. Estimation accuracy improves significantly with increasing sample size, and coverage remains close to the nominal level.
                 PVE                                            PAVE
 r    n     γ1     γ2     γ3     γ4     γ5     γ6       γ1     γ2     γ3     γ4     γ5     γ6
 0    100   6.60   7.73   10.41  13.02  8.25   5.79     8.87   9.29   14.49  18.50  13.61  5.67
            (93%)  (100%) (100%) (94%)  (100%) (93%)    (93%)  (100%) (99%)  (94%)  (100%) (93%)
      500   3.05   4.51   3.87   3.00   2.93   0.73     4.83   6.86   4.49   3.37   7.01   0.72
            (92%)  (100%) (98%)  (96%)  (98%)  (96%)    (92%)  (100%) (98%)  (96%)  (99%)  (96%)
      2000  0.52   1.32   1.19   0.84   0.89   0.12     0.52   3.57   1.19   0.85   4.50   0.12
            (96%)  (99%)  (93%)  (94%)  (93%)  (97%)    (96%)  (99%)  (93%)  (94%)  (93%)  (97%)
 0.5  100   6.60   7.73   10.41  13.02  8.25   5.79     8.87   9.29   14.49  18.50  13.61  5.67
            (93%)  (100%) (100%) (94%)  (100%) (93%)    (93%)  (100%) (99%)  (94%)  (100%) (93%)
      500   3.05   4.51   3.87   3.00   2.93   0.73     4.83   6.86   4.49   3.37   7.01   0.72
            (92%)  (100%) (98%)  (96%)  (98%)  (96%)    (92%)  (100%) (98%)  (96%)  (99%)  (96%)
      2000  0.52   1.32   1.19   0.84   0.89   0.12     0.52   3.57   1.19   0.85   4.50   0.12
            (96%)  (99%)  (93%)  (94%)  (93%)  (97%)    (96%)  (99%)  (93%)  (94%)  (93%)  (97%)

Figure 6.3: Multi-planar representation (transverse/axial, coronal/frontal, and sagittal views) of the true basis functions ϕ1 and ϕ2 with their corresponding estimates ϕ̂1 and ϕ̂2 for a randomly selected single iteration with n = 2000 observations. The plots demonstrate the high accuracy of the estimation method.

resolution of well-registered and preprocessed PET images in the ADNI database. The eigenvalues (λ1, λ2) are set to (2, 1), and (ζi1, ζi2)⊤ follows MVN(0₂, I₂). The basis functions ϕ1 and ϕ2 are derived from 20{(s1 − 0.5)² + (s2 − 0.5)² + (s3 − 0.5)²} and exp[−15{(s1 − 0.5)² + (s2 − 0.5)² + (s3 − 0.5)²}], respectively, after orthonormalization. Figure 6.3 (left two columns) displays the multi-planar representation of these basis functions. We set γ0 = 1.5ϕ1 − ϕ2.

Similar to the analysis in Section 6.1, we check the MSEs for estimation accuracy. Figure 6.3 demonstrates that, with n = 2000 observations, our method achieves high accuracy for a randomly selected single iteration. The estimated basis functions closely capture the main pattern and magnitude of
|
https://arxiv.org/abs/2504.16780v1
|
the true basis functions, which further supports the conclusions in Theorem 4. Table 6.4 reports the MSEs for the estimates of the eigenvalues λ1 and λ2, the regression coefficients α, β, and γ, and the estimated number of PCA components m̂_n, under both the PVE and PAVE criteria. As the sample size increases from n = 100 to n = 2000, the MSEs decrease significantly across all parameters, confirming estimation consistency. While both criteria demonstrate enhanced accuracy with larger samples, PVE shows more robustness in component selection regardless of sample size.

Table 6.4: Mean squared errors (MSEs ×10⁻²) of the estimates for the eigenvalues λ1 and λ2, the coefficients α, β, and γ, and the number of estimated PCA components, using both the PVE and PAVE criteria. Results are shown for different sample sizes (n = 100, 500, 2000) and correlation scenarios (r = 0, 0.5) based on 100 Monte Carlo simulations. Both methods demonstrate improved estimation accuracy with increased sample size.

                         PVE                                             PAVE
 r    n     λ1    λ2    α     β1    β2    β3    β4    γ1    γ2    m̂n    α     β1    β2    β3    β4    γ1    γ2    m̂n
 0    100   9.50  2.04  0.89  1.54  1.91  1.65  1.16  2.70  5.40  2     0.93  1.56  1.91  1.73  1.22  2.71  8.42  1.96
      500   1.43  0.83  0.19  0.31  0.31  0.34  0.24  0.47  1.21  2     0.19  0.31  0.31  0.34  0.24  0.47  1.21  2
      2000  0.38  0.67  0.03  0.07  0.07  0.07  0.07  0.07  0.39  2     0.03  0.07  0.07  0.07  0.07  0.07  0.39  2
 0.5  100   9.50  2.04  0.89  1.54  1.91  1.65  1.16  2.70  5.40  2     0.93  1.56  1.91  1.73  1.22  2.71  8.42  1.96
      500   1.43  0.83  0.19  0.31  0.31  0.34  0.24  0.47  1.21  2     0.19  0.31  0.31  0.34  0.24  0.47  1.21  2
      2000  0.38  0.67  0.03  0.07  0.07  0.07  0.07  0.07  0.39  2     0.03  0.07  0.07  0.07  0.07  0.07  0.39  2

Figure 6.4 shows the multi-planar representation of the coefficient γ compared to its estimates under the PVE and PAVE criteria. Both the PVE and PAVE methods produce estimates that accurately capture the patterns of the true function across all three views, including the radial pattern and magnitude.
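As a side note on mechanics, the percentile-bootstrap intervals whose coverage is reported in Tables 6.2 through 6.4 can be mimicked in a few lines. This sketch is our own minimal rendition for a scalar slope with a nonparametric pairs bootstrap; the paper's actual bootstrap operates on the full semi-functional model, so everything below (resampling scheme, sample sizes, noise level) is an illustrative assumption.

```python
import numpy as np

def percentile_ci(x, y, n_boot=200, level=0.95, rng=None):
    """Percentile-bootstrap CI for the slope in simple linear regression,
    obtained by resampling (x, y) pairs with replacement."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(y)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample pairs
        slopes[b] = np.polyfit(x[idx], y[idx], 1)[0]
    return np.quantile(slopes, [(1 - level) / 2, (1 + level) / 2])

# empirical coverage over Monte Carlo replications (true slope = 1)
rng = np.random.default_rng(1)
reps, hits = 50, 0
for _ in range(reps):
    x = rng.standard_normal(200)
    y = x + rng.standard_normal(200)
    lo, hi = percentile_ci(x, y, rng=rng)
    hits += (lo <= 1.0 <= hi)
print(hits / reps)  # close to the nominal 0.95
```

With 100 Monte Carlo replications, as in the paper, the binomial standard error of such a coverage estimate at the 95% level is about 2.18%, matching the remark in Section 6.1.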
The visual similarity across the three plots confirms that both estimation criteria perform well in reconstructing the true signal.

Figure 6.4: Multi-planar representation (transverse/axial, coronal/frontal, and sagittal views) of the true function γ (left) with its estimates γ̂ using the PVE criterion (middle) and the PAVE criterion (right), based on a randomly selected single iteration with n = 2000 observations. Both estimation methods accurately capture the radial pattern and edge irregularities of the true function, demonstrating the high precision of both approaches with an adequate sample size.

In summary, our numerical studies demonstrate that the proposed methodology delivers accurate estimation of both the Euclidean and Hilbert-valued components, particularly with adequate sample sizes. The performance remains robust even when covariates exhibit moderate correlation, though estimation precision is somewhat reduced in such cases. These findings support the practical capacity of our approach for analyzing neuroimaging data.

7. Data Analysis

In this section, we apply our proposed method to real data, using neuroimaging together with environmental and genetic risk factors to model, predict, and conduct statistical inference about Alzheimer’s
disease progression. Furthermore, we extend our modeling framework to a precision medicine setting, where imaging biomarkers, along with environmental and genetic risk factors, inform optimal treatment decisions for Alzheimer’s disease. This approach enables patient-specific intervention and treatment strategies based on multimodal data analysis.

7.1 Linear Regression Setting

The data that drive our research come from the large neuroimaging datasets in the Alzheimer’s Disease Neuroimaging Initiative (ADNI, http://adni.loni.usc.edu). The longitudinal cohort study in ADNI is a comprehensive neuroimaging study that collected a variety of necessary phenotypic measures, including structural, functional, and molecular neuroimaging, biomarkers, clinical and neuropsychological variables, and genomic information (Petersen et al., 2010; Weiner and Veitch, 2015). These data provide unprecedented resources for statistical methods development and scientific discovery. We now analyze the records from 441 participants through the ADNI1 and ADNI GO phases. The data contain the following variables:

• Mini-mental state examination (MMSE) scores: the response variable, ranging from 15 to 30, where lower values indicate a more severe AD status.

• Fludeoxyglucose positron emission tomography (PET) scans: neuroimaging data representing the brain’s metabolic activity level, which can be used to make early diagnoses of AD, with 79 × 95 pixels and measurements ranging from 0.013 to 2.149.

• Age: the participants’ ages, ranging from 55 to 89 years.

• Education: the participants’ educational status, ranging from 4 to 20 years.

• Gender: the participants’ gender, with 169 female and 278 male. We created a dummy variable with the value of one representing female and zero representing male.

• Ethnicity: the participants’ ethnic categories, with 12 Hispanic/Latino, 429 not Hispanic/Latino, and 6 unknown. We created a dummy variable with the value of one representing Hispanic/Latino and zero for others.
• Race: the participants’ racial categories, with 1 Indian/Alaskan, 7 Asian, 24 Black, 413 White, and 2 reporting more than one category. We created a dummy variable with the value of one representing White and zero for others.

• Marriage: the participants’ marital status, with 35 divorced, 344 married, 12 never married, and 56 widowed. We created a dummy variable with the value of one representing married and zero for others.

• Apolipoprotein (APOE) gene: the number of copies of the APOE4 gene, the most prevalent genetic risk factor for AD (Ashford and Mortimer, 2002), ranging from 0 to 2. We created two dummy variables, APOE1 and APOE2, to denote those with one and two copies of the APOE4 gene, respectively.

We apply the proposed method to the data using both the PVE and PAVE criteria. Both criteria select the same six leading principal components, whose basis maps are shown in the left three columns of Figure 7.1. All these estimated basis maps illustrate brain structures. The estimated coefficient map for γ, based on the six selected PCs, is shown on the right of Figure 7.1 and likewise illustrates brain structures.

Figure 7.1: Left three columns: the six leading PC basis maps ϕ̂1, …, ϕ̂6 selected by both PVE and PAVE. Right: the estimated coefficient map γ̂ based on the six leading PCs.

Table 7.1 presents the estimated coefficients for the nonfunctional predictors, along with the corresponding 95% bootstrap confidence intervals. As age increases, MMSE scores decrease significantly, and as education level increases, MMSE scores increase significantly; both findings are supported by studies on cognitive reserve in aging and AD (Fratiglioni et al., 2007; Stern, 2012). As for the well-known genetic risk factor, the APOE gene, MMSE scores decrease significantly with additional copies of the epsilon 4 allele, indicating a higher risk for the onset of AD, which agrees with other current studies (Schneider et al., 2011). The difference between females and males is not significant.

7.2 Precision Medicine Setting

In addition to the data analyzed in Section 7.1, we further incorporate treatment as a random variable in our analysis. To be specific, denote f(α, β, γ) = α + X⊤β + ⟨γ, Z⟩ for the linear regression model we studied; then, in the precision medicine setting, the model is

E(Y | X, Z, A) = f(α1, β1, γ1) + A f(α2, β2, γ2),

where A is the treatment, f(α1, β1, γ1) models the baseline effect, and f(α2, β2, γ2) models the treatment effect.

During the ADNI1 and ADNI GO study periods, the US FDA-approved therapies for AD symptoms included cholinesterase inhibitors and the NMDA-partial receptor antagonist memantine. Cholinesterase inhibitors, including donepezil, galantamine, and rivastigmine, are prescribed for mild-to-moderate-stage AD. Memantine is prescribed for the treatment of AD either as monotherapy or in combination with one of the cholinesterase inhibitors for moderate-to-severe-stage AD (Schneider et al.,

Table 7.1: Estimated coefficients α̂ and β̂ and corresponding 95% bootstrap confidence intervals.
 Term        Estimate   95% Bootstrap CI
 Intercept    17.814    (11.79, 24.59)
 Age          -0.071    (-0.11, -0.04)
 Education     0.219    (0.15, 0.29)
 Gender       -0.162    (-0.65, 0.35)
 APOE1        -0.572    (-1.01, -0.13)
 APOE2        -1.529    (-2.46, -0.65)
 Ethnicity    -0.151    (-1.12, 0.80)
 Race          1.048    (0.12, 1.95)
 Marriage     -0.398    (-0.94, 0.17)

2011). We denote by A = 1 those participants taking one or more of donepezil (Aricept), galantamine (Razadyne), rivastigmine (Exelon), and memantine (Namenda), while we use A = 0 for those without medical records or taking other treatments or supplements.

We apply the proposed method to the data using both the PVE and PAVE criteria. The leading principal component bases selected by both PVE and PAVE (three for Z and three for AZ) are displayed in the left three columns of Figure 7.2. All these estimated basis maps illustrate brain structures. The corresponding estimated coefficient maps γ̂1 and γ̂2, based on the selected top principal component bases, are shown in the right column of Figure 7.2, and both illustrate brain structures.

Figure 7.2: Left three columns: the leading PC basis maps selected by both the PVE and PAVE criteria for Z (top) and AZ (bottom), respectively. Right column: estimated coefficient maps of γ1 (top) and γ2 (bottom).

Table 7.2 presents the estimated coefficients for the nonfunctional predictors, along with the corresponding 95% bootstrap confidence intervals. The main effect of the treatment can improve the performance
in MMSE by around 9.85 units on average. The conclusions for the main effects generally agree with what we observed in Section 7.1.

Table 7.2: Estimated coefficients and 95% bootstrap confidence intervals for the linear covariates α and β.

 Term        Estimate   95% Bootstrap CI     Term                   Estimate   95% Bootstrap CI
 Intercept    12.488    (4.59, 20.68)        Treatment               9.850     (-4.03, 23.34)
 Age           0.005    (-0.03, 0.04)        Treatment × Age        -0.120     (-0.18, -0.07)
 Education     0.228    (0.13, 0.31)         Treatment × Education  -0.032     (-0.16, 0.12)
 Gender        0.014    (-0.62, 0.65)        Treatment × Gender     -0.090     (-1.02, 0.77)
 APOE1        -0.785    (-1.43, -0.13)       Treatment × APOE1       0.529     (-0.44, 1.40)
 APOE2        -0.362    (-1.77, 0.74)        Treatment × APOE2      -1.141     (-2.71, 0.54)
 Ethnicity    -0.396    (-1.77, 0.95)        Treatment × Ethnicity   0.657     (-2.26, 3.92)
 Race          0.120    (-0.80, 1.12)        Treatment × Race        2.594     (1.01, 4.25)
 Marriage      0.070    (-0.61, 0.81)        Treatment × Marriage   -0.214     (-1.34, 0.93)

8. Conclusions and Discussion

We propose a regression framework for Hilbert-space-valued covariates with unknown reproducing kernels. Our theoretical analysis establishes asymptotic properties for the regression parameter estimates, extending classical functional regression models to accommodate high-dimensional functional data, including multi-dimensional imaging data with complex domains. Through simulation studies and neuroimaging applications, we demonstrate the practical capability of our approach in modeling disease progression.

Our methodology offers a computationally efficient solution that maintains theoretical rigor while expanding functional regression to address increasingly complex data structures in modern biomedical research. As shown in Section 7.2, our approach naturally extends to precision medicine settings, enabling efficient utilization of neuroimaging biomarkers to guide optimal personalized treatment strategies.
This builds upon the foundation established by Laber and Staicu (2018), who integrated one-dimensional functional data analysis with individualized treatment regimes.

In Section 5.1, we adopt multivariate splines over triangulations as initial bases due to their efficiency in handling complex domains and their well-established theoretical properties. Future directions include the exploration of alternative initialization methods, such as deep neural networks or autoencoder methods, which may offer additional flexibility, although both the theoretical and computational development remain significant challenges.

Acknowledgements

Research reported in this publication was supported in part by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number P20GM139769 (Xinyi Li), and National Science Foundation awards DMS-2210658 (Xinyi Li) and DMS-2210659 (Michael R. Kosorok). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in the analysis or writing of this report. A complete listing of ADNI investigators can be found at http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf.

References

Ashford, J. W. and Mortimer, J. A. (2002), “Non-familial Alzheimer’s disease is mainly due to genetic factors,” Journal of Alzheimer’s Disease, 4, 169–177.

Bateman, R. J., Xiong, C., Benzinger, T. L., Fagan, A. M., Goate, A., Fox, N. C., Marcus, D. S., Cairns, N. J., Xie, X., Blazey, T. M., Holtzman, D. M., Santacruz, A., Buckles, V., Oliver, A., Moulder, K., Aisen, P. S., Ghetti, B., Klunk, W. E., McDade, E., Martins, R. N., Masters, C. L., Mayeux,
R., Ringman, J. M., Rossor, M. N., Schofield, P. R., Sperling, R. A., Salloway, S., and Morris, J. C. (2012), “Clinical and biomarker changes in dominantly inherited Alzheimer’s disease,” New England Journal of Medicine, 367, 795–804.

Chen, Y., Goldsmith, J., and Ogden, R. T. (2019), “Functional data analysis of dynamic PET data,” Journal of the American Statistical Association, 114, 595–609.

Dai, X. and Müller, H.-G. (2018), “Principal component analysis for functional data on Riemannian manifolds and spheres,” Annals of Statistics, 46, 3334–3361.

Fratiglioni, L., Winblad, B., and von Strauss, E. (2007), “Prevention of Alzheimer’s disease and dementia. Major findings from the Kungsholmen Project,” Physiology & Behavior, 92, 98–104.

Giulini, I. (2017), “Robust PCA and pairs of projections in a Hilbert space,” Electronic Journal of Statistics, 11, 3903–3926.

Happ, C. and Greven, S. (2018), “Multivariate functional principal component analysis for data observed on different (dimensional) domains,” Journal of the American Statistical Association, 113, 649–659.

Herold, C. J., Lewin, J. S., Wibmer, A. G., Thrall, J. H., Krestin, G. P., Dixon, A. K., Schoenberg, S. O., Geckle, R. J., Muellner, A., and Hricak, H. (2016), “Imaging in the age of precision medicine: summary of the proceedings of the 10th Biannual Symposium of the International Society for Strategic Studies in Radiology,” Radiology, 279, 226–238.

Hsing, T. and Eubank, R. (2015), Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators, John Wiley & Sons.

Kim, D., Lee, Y. K., and Park, B. U. (2020), “Principal component analysis for Hilbertian functional data,” Communications for Statistical Applications and Methods, 27, 149–161.

Kong, D., Staicu, A.-M., and Maity, A. (2016), “Classical testing in functional linear models,” Journal of Nonparametric Statistics, 28, 813–838.

Kosorok, M. R.
(2008), Introduction to Empirical Processes and Semiparametric Inference, Springer: New York.

Kuenzer, T., Hörmann, S., and Kokoszka, P. (2021), “Principal component analysis of spatially indexed functions,” Journal of the American Statistical Association, 116, 1444–1456.

Laber, E. B. and Staicu, A.-M. (2018), “Functional feature construction for individualized treatment regimes,” Journal of the American Statistical Association, 113, 1219–1227.

Lai, M.-J. and Schumaker, L. L. (2007), Spline Functions on Triangulations, Cambridge University Press.

Lai, M.-J. and Wang, L. (2013), “Bivariate penalized splines for regression,” Statistica Sinica, 23, 1399–1417.

Li, X., Wang, L., and Wang, H. J. (2021), “Sparse learning and structure identification for ultrahigh-dimensional image-on-scalar regression,” Journal of the American Statistical Association, 116, 1994–2008.

Li, X., Yu, S., Wang, Y., Wang, G., Wang, L., and Lai, M.-J. (2024), “Nonparametric regression for 3D point cloud learning,” Journal of Machine Learning Research, 25, 1–56.

Lila, E., Aston, J. A., and Sangalli, L. M. (2016), “Smooth principal component analysis over two-dimensional manifolds with an application to neuroimaging,” The Annals of Applied Statistics, 10, 1854–1879.

Lin, Z. and Yao, F. (2019), “Intrinsic Riemannian functional data analysis,” Annals of Statistics, 47, 3533–3577.

Morris, J. S. (2015), “Functional regression,” Annual Review of Statistics and Its Application,
2, 321–359.

Perry, R., Panigrahi, S., Bien, J., and Witten, D. (2024), “Inference on the proportion of variance explained in principal component analysis,” arXiv preprint arXiv:2402.16725.

Petersen, R. C., Aisen, P., Beckett, L. A., Donohue, M., Gamst, A., Harvey, D. J., Jack, C., Jagust, W., Shaw, L., Toga, A., and Trojanowski, J. (2010), “Alzheimer’s disease neuroimaging initiative (ADNI): clinical characterization,” Neurology, 74, 201–209.

Schneider, L. S., Insel, P. S., Weiner, M. W., Alzheimer’s Disease Neuroimaging Initiative, et al. (2011), “Treatment with cholinesterase inhibitors and memantine of patients in the Alzheimer’s Disease Neuroimaging Initiative,” Archives of Neurology, 68, 58–66.

Shi, H., Yang, Y., Wang, L., Ma, D., Beg, M. F., Pei, J., and Cao, J. (2022), “Two-dimensional functional principal component analysis for image feature extraction,” Journal of Computational and Graphical Statistics, 31, 1127–1140.

Stern, Y. (2012), “Cognitive reserve in ageing and Alzheimer’s disease,” The Lancet Neurology, 11, 1006–1012.

Su, Y.-R., Di, C.-Z., and Hsu, L. (2017), “Hypothesis testing in functional linear models,” Biometrics, 73, 551–561.

Wang, J.-L., Chiou, J.-M., and Müller, H.-G. (2016), “Functional data analysis,” Annual Review of Statistics and Its Application, 3, 257–295.

Weiner, M. W. and Veitch, D. P. (2015), “Introduction to special issue: overview of Alzheimer’s Disease Neuroimaging Initiative,” Alzheimer’s & Dementia, 11, 730–733.

Yuan, M. and Cai, T. T. (2010), “A reproducing kernel Hilbert space approach to functional linear regression,” Annals of Statistics, 38, 3412–3444.

Zhang, D., Li, L., Sripada, C., and Kang, J. (2023), “Image response regression via deep neural networks,” Journal of the Royal Statistical Society Series B: Statistical Methodology, 85, 1589–1614.

Zhu, H., Fan, J., and Kong, L.
(2014), “Spatially varying coefficient model for neuroimaging data with jump discontinuities,” Journal of the American Statistical Association, 109, 1084–1098.
Estimation and Inference for the Average Treatment Effect in a Score-Explained Heterogeneous Treatment Effect Model

Kevin Christian Wibisono (University of Michigan, Statistics), kwib@umich.edu
Debarghya Mukherjee (Boston University, Statistics), mdeb@bu.edu
Moulinath Banerjee (University of Michigan, Statistics), moulib@umich.edu
Ya’acov Ritov (University of Michigan, Statistics), yritov@umich.edu

April 25, 2025

Abstract

In many practical situations, randomly assigning treatments to subjects is uncommon due to feasibility constraints. For example, economic aid programs and merit-based scholarships are often restricted to those meeting specific income or exam score thresholds. In these scenarios, traditional approaches to estimating treatment effects typically focus solely on observations near the cutoff point, thereby excluding a significant portion of the sample and potentially leading to information loss. Moreover, these methods generally achieve a non-parametric convergence rate. While some approaches, e.g., Mukherjee et al. [2021], attempt to tackle these issues, they commonly assume that treatment effects are constant across individuals, an assumption that is often unrealistic in practice. In this study, we propose a differencing- and matching-based estimator of the average treatment effect on the treated (ATT) in the presence of heterogeneous treatment effects, utilizing all available observations. We establish the asymptotic normality of our estimator and illustrate its effectiveness through various synthetic and real data analyses. Additionally, we demonstrate that our method yields non-parametric estimates of the conditional average treatment effect (CATE) and individual treatment effect (ITE) as a byproduct.

Keywords: Average treatment effect on the treated, first-order differencing, heterogeneous treatment effect, non-random treatment allocation, residual matching.
arXiv:2504.17126v1 [stat.ME] 23 Apr 2025

1 Introduction

In numerous practical scenarios, the random assignment of treatments to subjects is impractical. For instance, economic initiatives targeting impoverished individuals may only be extended to those with incomes below a specified threshold. Within the medical realm, treatments are often administered to patients facing urgent needs. Academic scholarships, too, are commonly awarded based on merit exams, where applicants surpassing a predetermined cutoff score receive the scholarship. These instances, among others, emphasize the significance of investigating non-random treatment allocations.

In many of these examples, a variable, referred to as the “score variable” and henceforth denoted as Q, along with a predetermined cutoff τ0, determines the treatment allocation. In the economic initiative case, an individual’s income serves as the score variable Q, while in the scholarship case, Q corresponds to a student’s merit test score. Our focus is on estimating the impact of the treatment on some response variable Y. In the economic initiative case, we may seek to determine whether a specific initiative benefits the have-nots, as measured by an individual’s happiness level. In the scholarship case, we may be interested in assessing the effect of the scholarship on a student’s future prospects, as measured by their future income. In both cases, happiness level and future income serve as response variables, respectively. Additionally, we often have background information X on the individuals (e.g., socio-economic background, education, race, gender, and age) that may affect both the response variable Y and the score variable Q. One way to capture the effect of (X, Q) on Y is via the following model:

Yi = α(Xi, Qi) 1{Qi ≥ τ0} + Xi⊤β0 + νi,

where νi is the unobserved error. Here, α(·) denotes the individual
treatment effect (ITE), which can potentially depend on the background information and the score variable. However, as is true for most real-world applications, the score variable Q and the unobserved error ν can be correlated through unobserved confounders. In the exam-based scholarship example, ν may encode students’ innate abilities or intelligence, which affect both the score on the merit test and their future income. In other words, (Q, X) may fail to capture all factors that are essential for explaining Y. This is exactly what differentiates our setting from standard treatment effect models, where either (i) the treatment is allocated randomly (also known as a randomized controlled trial, or RCT); or (ii) the observed covariates (Q, X) are assumed to explain all the effects between the treatment and response variables, also known as the ignorability or unconfoundedness assumption (Imbens [2004], Robins et al. [1994], Rosenbaum and Rubin [1983]).

Traditionally, researchers address this endogeneity issue, i.e., a non-zero correlation between the score variable and the unobserved error, using regression discontinuity design (RDD) [Thistlethwaite and Campbell, 1960]. The key idea of RDD is to localize the problem: students whose scores belong to a small vicinity around the cutoff, say within the neighborhood [τ0 − h, τ0 + h] for some small h, are similar in terms of their abilities [Calonico et al., 2019, Cattaneo et al., 2019]. Consequently, it is enough to compare the future income of students who barely missed the scholarship (i.e., Q ∈ [τ0 − h, τ0)) and students who barely cleared the merit exam (i.e., Q ∈ [τ0, τ0 + h]). RDD has a rich literature and has found applications in numerous fields, such as education (Banks and Mazzonna [2012], Jacob and Lefgren [2004], Moss and Yeaton [2006]), health (Chen et al. [2018], Christelis et al. [2020], Venkataramani et al. [2016]), and epidemiology (Anderson et al. [2020], Basta and Halloran [2019], Mody et al. [2018]).
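To make the local comparison concrete, the sketch below estimates the jump at the cutoff with separate linear fits on each side of τ0 within a bandwidth h. This is a simplified stand-in for the local-polynomial RDD estimators cited above, and the data-generating process (linear trend, uniform scores, the bandwidth choice) is invented purely for illustration.

```python
import numpy as np

def rdd_local_linear(q, y, cutoff, h):
    """Sharp-RDD estimate of the jump at the cutoff: fit a line separately on
    each side within [cutoff - h, cutoff + h] and take the difference of the
    two fits evaluated at the cutoff."""
    below = (q >= cutoff - h) & (q < cutoff)
    above = (q >= cutoff) & (q <= cutoff + h)
    fit_below = np.polyfit(q[below], y[below], 1)
    fit_above = np.polyfit(q[above], y[above], 1)
    return np.polyval(fit_above, cutoff) - np.polyval(fit_below, cutoff)

rng = np.random.default_rng(0)
n, tau0, jump = 20_000, 1.0, 0.5
q = rng.uniform(0, 2, n)                          # score variable Q
y = 2.0 * q + jump * (q >= tau0) + 0.1 * rng.standard_normal(n)
est = rdd_local_linear(q, y, tau0, h=0.2)
print(est)  # close to the true jump 0.5
```

Note that with h = 0.2 and scores uniform on [0, 2], only about 20% of the sample enters the estimate, which previews the information-loss critique discussed next.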
While being general and often fully non-parametric, the local approach described above suffers from several drawbacks. First, it only takes into account the observations within a certain neighborhood (bandwidth) around the cutoff, consequently losing information by discarding the other observations. In situations where the treatment effect depends on the distance from the cutoff, such methods only assess the impact on individuals near the threshold, who may not be the primary focus of our analysis. A common example is the effect of a medicine on patients who are in dire need of it: these patients might be far from the eligibility threshold, but are the most relevant for analysis. Second, it yields an estimator of the treatment effect with a slow (non-parametric) rate of convergence: when the bandwidth h decreases to zero with the number of observations n, which is essential for consistent estimation of local effects, the number of effective samples, i.e., those whose scores lie within the neighborhood [τ0 − h, τ0 + h], is of order nh, which is smaller than n. As a partial solution, Angrist and Rokkanen [2015] introduced a covariate-based method that uses all available information and constructed a √n-consistent estimator of the treatment effect. However, their method relies on the conditional independence assumption, i.e., E(Y | Q, X) = E(Y | X), a variant of the exogeneity assumption which typically
does not hold as argued earlier. Motivated by these observations, Mukherjee et al. [2021] proposed an efficient√n-rate estimator of the treatment effect in the presence of endogeneity that uses all observations. Their approach assumes a homogeneous treatment effect model, where the response variable Yis modeled as Y=α01Q≥τ0+X⊤ iβ0+νi, with α0being the parameter of interest. The key step in their method is to model the score variable Qusing the background information Z—which may or may not be the same as X—via the equation Qi=Z⊤ iγ0+ηi. In this model, (η, ν)encodes all unobserved factors (i.e., innate abilities of students taking the merit test) and can be arbitrarily correlated. These insights lead to the following model: Yi=α01Qi≥τ0+X⊤ iβ0+ℓ(ηi) +ϵi (1.1) Qi=Z⊤ iγ0+ηi, (1.2) where ℓ(η) =E[ν|η]andϵis an error orthogonal to (X, Z, η ). Under the model given by Equations (1.1) and (1.2), Mukherjee et al. [2021] constructed an estimator of α0that is√n- consistent, asymptotically normal, and semi-parametrically efficient. The main shortcoming of the above model is that the treatment effect, i.e., α0, is assumed to be constant. This oversimplification restricts the applicability of this model to real-world problems. For example, the effect of scholarship on a student’s future income may be larger for older students or students with higher innate abilities. This crucial observation motivates a natural extension of this model that incorporates the effect of both the background information and unobserved confounders. 3 1.1 Our contributions In this paper, we analyze a generalized version of the endogenous treatment effect model of Mukherjee et al. [2021] (Equations (1.1) and(1.2)) that incorporates a heterogeneous treatment effect. Specifically, we analyze the following model: Yi=α0(Xi, ηi)1Qi≥τ0+X⊤ iβ0+ℓ(ηi) +ϵi (1.3) Qi=Z⊤ iγ0+ηi. 
(1.4)

Here, $\alpha_0(\cdot)$, which we call the individual treatment effect (ITE), is a non-parametric function of both the observed background information $X$ and the unobserved covariate $\eta$ (e.g., innate ability). It is important to note that $\alpha_0(\cdot)$ depends on the unobserved variable $\eta$, in contrast to a common assumption in the conditional average treatment effect (CATE) literature that the dependence is solely on the observed $X$. In the scholarship example, $\alpha_0$ depends not only on the observed background information $X$ but also on the unobserved innate merit $\eta$. This allows for a situation where, for instance, a brighter student can benefit from an advanced curriculum and become more successful in the future. One may argue that $\eta$ is not exactly the innate ability, but rather a noisy version of it. While this is true, it is not possible to encode innate ability exactly, as it is not observed. Nevertheless, if the score is obtained by aggregating multiple scores efficiently (e.g., the average of several test scores of a student), then $\eta$ is expected to be less noisy and consequently a good proxy for the unobserved innate ability. Furthermore, when $X = Z$ (which holds in many practical scenarios), $\alpha_0(\cdot)$ can be viewed as a function of $(X, Q)$, since the map $(X, Q) \leftrightarrow (X, \eta)$ is bijective. We also note that all results established in later sections continue to hold when $\alpha_0(\cdot)$ depends solely on $X$, which corresponds to the standard CATE. The relationship among the variables in
our model (with $X = Z$) is pictorially presented in Figure 1.

Figure 1: A graphical representation of the variables of Equations (1.3) and (1.4), with $X = Z$.

In this paper, our primary goal is to estimate the average treatment effect on the treated (ATT), defined as

$$\theta_0 = \mathbb{E}[\alpha_0(X, \eta) \mid Q \ge \tau_0]. \quad (1.5)$$

As elaborated previously, the key difficulty in estimating $\theta_0$ is precisely the unobserved covariate $\eta$. If $\eta$ were known, one could construct a consistent estimator of $\theta_0$ using the following steps:

1. On the control observations (i.e., observations for which $Q_i < \tau_0$), we have $Y_i = X_i^\top \beta_0 + \ell(\eta_i) + \epsilon_i$, which is a standard partially linear model (PLM). Therefore, we can use any standard technique from the PLM literature to estimate $(\beta_0, \ell)$.

2. Let $\hat\beta$ and $\hat\ell$ respectively be the estimators of $\beta_0$ and $\ell$ obtained in the previous step. Consider the residuals $R_i = Y_i - X_i^\top \hat\beta - \hat\ell(\eta_i)$ for all the treatment observations (i.e., those with $Q_i \ge \tau_0$) and set

$$\hat\theta = \frac{\sum_{i:\, Q_i \ge \tau_0} R_i}{\sum_i \mathbb{1}(Q_i \ge \tau_0)}.$$

However, we do not observe $\eta$ in practice and need to approximate it via regression residuals from Equation (1.4). This considerably complicates the analysis, as the approximation error now depends on $(Z, Q)$, and hence on $X$. Therefore, an appropriate modification of the above procedure is necessary, as outlined below. We first obtain an approximation of $\eta$ from Equation (1.4) by taking the residuals upon regressing $Q$ on $Z$. With this approximation, say $\hat\eta_i$, we broadly follow Steps 1 and 2 above, albeit with suitable modifications. First, we use the control observations (along with the estimated $\hat\eta_i$) to estimate $\beta_0$ using a difference-based technique for estimating the linear parameter in a PLM (see, e.g., Yatchew [1997] and Wang et al. [2011]), which precludes the need to estimate $\ell$ explicitly. The fundamental idea of this technique is to estimate $\beta_0$ in a PLM of the form $Y = X^\top \beta_0 + \ell(\eta) + \epsilon$ by ordering the $\eta_i$ values and computing the first-order differences of the correspondingly ordered $Y_i$ and $X_i$ values.
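To make the difference-based idea concrete, here is a minimal numpy sketch of such an estimator, assuming the $\eta_i$'s are observed; the function name and the simulated design are ours, not the paper's.

```python
import numpy as np

def difference_plm_beta(Y, X, eta):
    """Difference-based estimate of beta in Y = X'beta + l(eta) + eps.

    Sorting by eta and first-differencing cancels the smooth l(eta)
    term, leaving an (approximate) linear model in the differenced X.
    """
    order = np.argsort(eta)
    dY = np.diff(Y[order])           # Y_(i+1) - Y_(i)
    dX = np.diff(X[order], axis=0)   # X_(i+1) - X_(i)
    # OLS of differenced responses on differenced covariates
    beta_hat, *_ = np.linalg.lstsq(dX, dY, rcond=None)
    return beta_hat
```

On a simulated PLM with a smooth nuisance such as $\ell(\eta) = \sin(3\eta)$, the returned coefficients are close to the true $\beta_0$, illustrating why no explicit estimate of $\ell$ is needed.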
In our scenario, a careful adaptation of this method is necessary, as the $\eta_i$'s are unobserved and the ordering is based on their estimates $\hat\eta_i$. To obtain a $\sqrt{n}$-consistent estimator of $\theta_0$, we employ a residual matching technique inspired by the matching-based estimator studied in Abadie and Imbens [2016]. Specifically, for each treatment observation $i$, we identify its nearest control observation $c(i)$, whose $\hat\eta$ value is closest to that of $i$ (unlike Abadie and Imbens [2016], who match on estimated propensity scores). We then take the difference between the responses at the two indices, i.e., $Y_i - Y_{c(i)}$. As the two $\hat\eta$ values are close, the effect of the non-parametric function $\ell$ essentially vanishes, and only the linear term $(X_i - X_{c(i)})^\top \beta_0$, which is estimable at a $\sqrt{n}$-rate, and the ITE $\alpha_0(X_i, \eta_i)$ remain. Finally, we consider the differences between $Y_i - Y_{c(i)}$ and $(X_i - X_{c(i)})^\top \hat\beta$ and compute the average of these differences over all treatment observations $i$. The details of our method are elaborated in Section 2. At this point, we note that our model shares a degree of similarity with the simultaneous triangular equation framework studied in Newey et al. [1999] and Pinkse [2000]. However, the existing literature on such models typically assumes a smooth link function between the outcome variable $Y$ and the covariates $(X, Z, Q)$, whereas
in our setting, a natural discontinuity arises due to the deterministic assignment of treatment based on a thresholded score. Furthermore, the inclusion of Equation (1.4) may suggest that our method falls directly under the purview of the instrumental variable (IV) framework [Angrist et al., 1996], where the main idea is to identify one or more variables (called instruments) that are correlated with the covariates and affect the outcome solely through their association with the covariates (known as the exclusion restriction; see, e.g., Lousdal [2018]). However, our model relaxes the standard exclusion restriction by allowing $X = Z$ or, more generally, by permitting $X$ and $Z$ to share common predictors. For example, in the scholarship example, a student's high school grade can directly affect both their merit test score and their future income. We summarize our key contributions as follows:

(a) We propose a $\sqrt{n}$-consistent and asymptotically normal estimator of $\theta_0$ in the presence of a heterogeneous treatment effect (i.e., Equations (1.3) and (1.4)), where the effect of the treatment depends on both the observed $X$ and the unobserved $\eta$.

(b) We demonstrate the effectiveness of our estimator through various synthetic and real data analyses.

(c) As a byproduct, our analysis also yields non-parametric estimates of the ITE and the CATE.

Organization of the paper: Section 2 details our methodology for estimating the ATT, followed by an extension to estimate the ITE and CATE. Section 3 presents our theoretical results on the $\sqrt{n}$-consistency and asymptotic normality of the estimator. Section 4 conducts simulation studies to verify our theoretical results and to assess how well our method estimates the ITE and CATE. Section 5 applies our method to real data sets, examining the impact of Islamic political rule on women's empowerment and the effect of grade-based academic probation on students' future GPA.
Finally, Section 6 concludes the paper, and Section 7 includes proof sketches for the theoretical results.

2 Methodology

In this section, we present our methodology for constructing a $\sqrt{n}$-consistent and asymptotically normal estimator of $\theta_0$. Recall that we observe $\{(Y_i, X_i, Z_i, Q_i)\}_{1 \le i \le n}$ from Equations (1.3) and (1.4). The first central idea of our method involves taking first-order differences and performing matching on the estimated residuals $\hat\eta_i$. We draw inspiration from the literature on partially linear models (PLMs), particularly techniques for estimating a model's parametric component through first-order differences of its non-parametric counterpart (see, e.g., Yatchew [1997] and Wang et al. [2011]). To elaborate, consider a generic PLM of the form $Y = X^\top \beta_0 + \ell(\eta) + \epsilon$, where we observe $(Y, X, \eta)$¹ and assume that $\mathbb{E}(\epsilon \mid X, \eta) = 0$ and $\mathrm{var}(\epsilon \mid X, \eta) = \sigma_\epsilon^2$. The goal is to construct a $\sqrt{n}$-consistent and asymptotically normal estimator of $\beta_0$ based on an i.i.d. sample $\{(Y_i, X_i, \eta_i)\}_{i=1}^n$. The method involves two key steps. First, we sort the observations based on the $\eta_i$'s and compute the first-order differences of the sorted $Y_i$'s and $X_i$'s, denoted by $\Delta Y_i$ and $\Delta X_i$. If $(X_{(i)}, Y_{(i)})$ denotes the observation corresponding to $\eta_{(i)}$, where $\eta_{(i)}$ is the $i$th order statistic among $\{\eta_1, \dots, \eta_n\}$, then first-order differencing yields

$$Y_{(i+1)} - Y_{(i)} = (X_{(i+1)} - X_{(i)})^\top \beta_0 + \ell(\eta_{(i+1)}) - \ell(\eta_{(i)}) + \epsilon_{(i+1)} - \epsilon_{(i)}.$$

¹We first describe our approach in the simple case where the $\eta_i$'s are known. We then discuss
how this method can be adapted to the practical scenario where the $\eta_i$'s are unknown.

As $\eta_{(i+1)} - \eta_{(i)}$ is small (typically of order $n^{-1}$), we expect $\ell(\eta_{(i+1)}) - \ell(\eta_{(i)})$ to be negligible as long as $\ell$ has minimal smoothness (e.g., $\ell$ is $\alpha$-Hölder with $\alpha > 1/2$). Therefore, we have

$$Y_{(i+1)} - Y_{(i)} \approx (X_{(i+1)} - X_{(i)})^\top \beta_0 + \epsilon_{(i+1)} - \epsilon_{(i)}.$$

Now, regressing the differences of the sorted $Y_i$'s on the differences of the sorted $X_i$'s yields a $\sqrt{n}$-consistent and asymptotically normal estimator of $\beta_0$; see Yatchew [1997] or Wang et al. [2011] for more details on this approach. Let us now consider Equations (1.3) and (1.4), again assuming that the $\eta_i$'s are known. Observations in the control group satisfy $Q_i < \tau_0$, which implies $Y_i = X_i^\top \beta_0 + \ell(\eta_i) + \epsilon_i$. Therefore, we can use the first-order difference-based method described above to obtain an estimate $\hat\beta$ of $\beta_0$.

We now introduce the second central idea of our method: residual matching. For the $i$th observation in the treatment group, we identify the observation in the control group whose $\eta$ value is closest to it. More specifically, we define $c(i) = \arg\min_{j \in \text{control group}} |\eta_j - \eta_i|$. Recall that the response of an observation in the treatment group satisfies $Y_i = \alpha_0(X_i, \eta_i) + X_i^\top \beta_0 + \ell(\eta_i) + \epsilon_i$. Therefore, if the $i$th treatment observation is matched to the $c(i)$th control observation, we have

$$Y_i - Y_{c(i)} = \alpha_0(X_i, \eta_i) + (X_i - X_{c(i)})^\top \beta_0 + \ell(\eta_i) - \ell(\eta_{c(i)}) + \epsilon_i - \epsilon_{c(i)}.$$

Note that $\eta_{c(i)}$, by definition, is the control value closest to $\eta_i$. Thus, we expect $\eta_i - \eta_{c(i)}$ to be small and $\ell(\eta_i) - \ell(\eta_{c(i)})$ to be negligible under a minimal smoothness assumption on $\ell$. Therefore, we have

$$Y_i - Y_{c(i)} \approx \alpha_0(X_i, \eta_i) + (X_i - X_{c(i)})^\top \beta_0 + \epsilon_i - \epsilon_{c(i)}. \quad (2.1)$$

Recall that we have obtained a $\sqrt{n}$-consistent and asymptotically normal estimate $\hat\beta$ of $\beta_0$ as described above. Thus, we can replace $\beta_0$ by $\hat\beta$ in Equation (2.1), which yields

$$Y_i - Y_{c(i)} \approx \alpha_0(X_i, \eta_i) + (X_i - X_{c(i)})^\top \hat\beta + \epsilon_i - \epsilon_{c(i)}.$$
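The residual-matching step just described admits a short numpy sketch (our own illustrative code; for brevity it uses a brute-force nearest-neighbor search and takes $\hat\beta$ as given):

```python
import numpy as np

def residual_matching_att(Y, X, eta_hat, treated, beta_hat):
    """Match each treated unit to the control unit with the nearest
    eta_hat, then average (Y_i - Y_c(i)) - (X_i - X_c(i))' beta_hat.
    Brute-force O(n_T * n_C) search; a KD-tree would scale better."""
    trt = np.flatnonzero(treated)
    ctl = np.flatnonzero(~treated)
    # index of the nearest control (in eta_hat) for each treated unit
    c = ctl[np.abs(eta_hat[trt][:, None] - eta_hat[ctl][None, :]).argmin(axis=1)]
    diffs = (Y[trt] - Y[c]) - (X[trt] - X[c]) @ beta_hat
    return diffs.mean()
```

On a simulated design with a constant treatment effect and a Lipschitz nuisance $\ell$, the matched average recovers the ATT closely, since the $\ell(\eta_i) - \ell(\eta_{c(i)})$ terms are negligible.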
Finally, taking the average of $(Y_i - Y_{c(i)}) - (X_i - X_{c(i)})^\top \hat\beta$ over all the treatment observations yields our final estimate:

$$\hat\theta = \frac{1}{n_T} \sum_{i:\, Q_i \ge \tau_0} \left\{ (Y_i - Y_{c(i)}) - (X_i - X_{c(i)})^\top \hat\beta \right\},$$

where $n_T$ is the number of observations in the treatment group.

In practice, however, the $\eta_i$'s are unknown. In this case, they can be estimated by regressing $Q$ on $Z$ and taking the regression residuals, following Equation (1.4). We then implement the method described above, replacing the $\eta_i$'s with their estimates $\hat\eta_i$ (i.e., we perform the differencing and the residual matching with respect to $\hat\eta_i$). For technical convenience, we also perform data splitting. The entire method is summarized in Algorithm 1.

Let us now briefly elaborate on the steps of Algorithm 1. In Step 1, we partition the data into three roughly equal parts (i.e., when $n$ is divisible by 3, each part consists of $n/3$ data points; otherwise, each of the first two parts consists of $\lfloor n/3 \rfloor$ data points and the last part consists of $n - 2\lfloor n/3 \rfloor$ data points). The three sets are denoted by $I_1, I_2, I_3$. Data splitting affords certain technical advantages in our analysis, though in practice our algorithm can also be used without it. In Step 2, we approximate the unobserved confounder $\eta$ by regressing $Q$ on $Z$ using the observations in $I_1$. We let $\hat\gamma$ be the estimated coefficient and set $\hat\eta_i = Q_i - Z_i^\top \hat\gamma$ for all observations. In Step 3, we consider $I_2^C$, the observations in $I_2$ that belong to the control group.
We sort $\hat\eta_i$ for all $i \in I_2^C$ and let $\hat\eta_{(i)}$ be the $i$th order statistic among the $\hat\eta$'s in $I_2^C$. In Step 4, we take first-order differences of the $(X_{(i)}, Y_{(i)})$ observations corresponding to $\hat\eta_{(i)}$ and apply the method described previously to obtain $\hat\beta$. Finally, we focus on $I_3$ and define $I_3^T$ and $I_3^C$ analogously (i.e., the treatment and control observations in $I_3$). In Step 5, for each treatment observation $i \in I_3^T$, we select a control observation $c(i) \in I_3^C$ such that $c(i) = \arg\min_{j \in I_3^C} |\hat\eta_i - \hat\eta_j|$. In other words, we match each treatment observation $i \in I_3^T$ with the control observation $c(i) \in I_3^C$ whose $\hat\eta$ value is closest to that of $i$. Lastly, in Step 6, we compute the differences $(Y_i - X_i^\top \hat\beta) - (Y_{c(i)} - X_{c(i)}^\top \hat\beta)$ for all $i \in I_3^T$ and average them to obtain $\hat\theta$.

Algorithm 1 Estimation of the ATT $\theta_0$
Input: i.i.d. data $\{(X_i, Y_i, Z_i, Q_i)\}_{i=1}^n$, threshold $\tau_0$
Output: A $\sqrt{n}$-consistent and asymptotically normal estimator of $\theta_0$
1: Partition $\{1, \dots, n\}$ into $I_1, I_2, I_3$ of roughly equal sizes.
2: Perform OLS of $Q_i$ against $Z_i$ for $i \in I_1$; obtain $\hat\gamma$ and set $\hat\eta_i = Q_i - Z_i^\top \hat\gamma$ for all $i \in I_2 \cup I_3$.
3: Order $\{\hat\eta_i\}_{i \in I_2^C}$ and denote by $\{\hat\eta_{(i)}\}$ the corresponding order statistics. Denote by $\{(X_{(i)}, Y_{(i)})\}$ the induced order on $\{(X_i, Y_i)\}_{i \in I_2^C}$.
4: Regress the first-order differences $\Delta Y_{(i)}$ on $\Delta X_{(i)}$ (for $i \in I_2^C$) and set $\hat\beta$ to be the resulting coefficient vector.
5: Perform residual matching: for each $i \in I_3^T$, find $c(i) = \arg\min_{j \in I_3^C} |\hat\eta_i - \hat\eta_j|$.
6: return the estimated ATT:
$$\hat\theta = \frac{1}{|I_3^T|} \sum_{i \in I_3^T} \left\{ (Y_i - X_i^\top \hat\beta) - (Y_{c(i)} - X_{c(i)}^\top \hat\beta) \right\}.$$

Remark 1 (Cross-fitting). To reduce the asymptotic variance of our estimator, one may use cross-fitting, i.e., implement Algorithm 1 while interchanging the roles of $I_1, I_2, I_3$. In particular, note that the final estimator $\hat\theta$ in Algorithm 1 is computed from $I_3$. For cross-fitting, we can repeat the algorithm twice to obtain two additional versions of $\hat\theta$, one from each of $I_1$ and $I_2$, and take the average of the three $\hat\theta$'s as our final estimate.
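Under the simplifying assumption that plain OLS suffices for each regression step, Algorithm 1 can be sketched end-to-end as follows (illustrative code, ours; variable names mirror the algorithm, and the brute-force matching is for clarity rather than speed):

```python
import numpy as np

def estimate_att(Y, X, Z, Q, tau0):
    """Sketch of Algorithm 1: split-sample ATT estimation.
    I1: OLS of Q on Z -> eta_hat; I2 controls: difference-based beta;
    I3: residual matching on eta_hat and averaging."""
    I1, I2, I3 = np.array_split(np.arange(len(Y)), 3)
    # Step 2: residuals of Q on Z proxy the unobserved eta
    gamma, *_ = np.linalg.lstsq(Z[I1], Q[I1], rcond=None)
    eta_hat = Q - Z @ gamma
    # Steps 3-4: sort I2 controls by eta_hat, first-difference, OLS
    C2 = I2[Q[I2] < tau0]
    o = C2[np.argsort(eta_hat[C2])]
    beta, *_ = np.linalg.lstsq(np.diff(X[o], axis=0), np.diff(Y[o]),
                               rcond=None)
    # Steps 5-6: match each I3 treated unit to its nearest I3 control
    T3, C3 = I3[Q[I3] >= tau0], I3[Q[I3] < tau0]
    c = C3[np.abs(eta_hat[T3][:, None] - eta_hat[C3][None, :]).argmin(axis=1)]
    return np.mean((Y[T3] - Y[c]) - (X[T3] - X[c]) @ beta)
```

On the data-generating process of Section 4.1, this sketch recovers an ATT close to the true value of about 1.333; in practice one would add cross-fitting as in Remark 1.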
Remark 2 (Applications to the CATE and fixed treatment effects). It is easy to see that our method also applies when the ITE depends only on $X$ (i.e., $\alpha_0(X)$), known in the literature as the CATE, and more specifically to the fixed treatment effect model in which $\alpha_0$ is a constant.

Remark 3 (Comparison between our method and Mukherjee et al. [2021]). One key distinction between our approach and that of Mukherjee et al. [2021] is that ours avoids the need to estimate $\ell$ when estimating the ATT and ITE. Estimating non-parametric functions is typically more computationally intensive than dealing with finite-dimensional parameters, and it requires careful selection of tuning parameters (such as the bandwidth or the number of basis functions). Consequently, our method is more straightforward to implement in practice than that of Mukherjee et al. [2021].

2.1 Individual treatment effect estimation

So far, we have presented Algorithm 1 for estimating the ATT $\theta_0 = \mathbb{E}(\alpha_0(X, \eta) \mid Q \ge \tau_0)$ and established its theoretical properties. However, in certain scenarios, there might also be interest in the ITE itself, $\alpha_0(X, \eta)$. For instance, in the context of the scholarship example, the award committee might wish to understand how the effect of the scholarship varies based on students' background
characteristics $X$ (such as age and race) and innate ability $\eta$. We now present Algorithm 2, a slightly modified version of Algorithm 1, to estimate the ITE $\alpha_0(X, \eta)$.

Algorithm 2 Estimation of the ITE $\alpha_0(X, \eta)$
Input: i.i.d. data $\{(X_i, Y_i, Z_i, Q_i)\}_{i=1}^n$, threshold $\tau_0$
Output: An estimate of $\alpha_0(X, \eta)$
1: Follow Steps 1 to 5 of Algorithm 1.
2: Estimate $\alpha_0(\cdot)$ by regressing $(Y_i - X_i^\top \hat\beta) - (Y_{c(i)} - X_{c(i)}^\top \hat\beta)$ on $X_i$ and $\hat\eta_i$ using any non-parametric regression algorithm (e.g., basis expansion, kernel smoothing, neural networks, etc.), and call the estimator $\hat\alpha(\cdot)$.
3: return $\hat\alpha(\cdot)$.

Algorithm 2 can be summarized as follows. We first follow Steps 1 to 5 of Algorithm 1. We then utilize any non-parametric regression algorithm (e.g., B-splines or local polynomial regression) to estimate $\alpha_0(X, \eta)$. In the case where the ITE depends only on $X$ (equivalent to the standard CATE parameter), we can adjust the algorithm so that the non-parametric regression is performed on $X_i$ alone (instead of $X_i$ and $\hat\eta_i$), yielding an estimator $\hat\alpha(X)$ of $\alpha_0(X)$.

3 Theoretical results

In this section, we establish the theoretical properties of our estimator $\hat\theta$, demonstrating that it is $\sqrt{n}$-consistent and asymptotically normal (CAN). We denote by $d_X$ (resp. $d_Z$) the dimension of $X$ (resp. $Z$). Recall that our parameter of interest is the ATT $\theta_0 = \mathbb{E}(\alpha_0(X, \eta) \mid Q \ge \tau_0)$, which we estimate using $n$ i.i.d. realizations of $(X, Y, Z, Q)$. As elaborated in Algorithm 1, our method relies on splitting the data into three (almost) equal parts. Henceforth, we assume $n = 3\tilde n$ for some positive integer $\tilde n$ for ease of presentation and write the entire data set as $\mathcal{D}_n := I_1 \cup I_2 \cup I_3$, where each $I_j$ contains $\tilde n$ observations. Briefly, Algorithm 1 first uses $I_1$ to estimate $\gamma_0$; it then uses $I_2$ to estimate $\beta_0$ using the estimator of $\gamma_0$ obtained from $I_1$; and it finally uses $I_3$ to estimate $\theta_0$ using the estimators of $(\gamma_0, \beta_0)$. We now state the assumptions required to show that $\hat\theta$ is $\sqrt{n}$-CAN:

Assumption 1 (Covariates). $(X, Z)$ are compactly supported, and $0 \in \mathrm{supp}(Z)$.
Assumption 2 (Errors). The error $\epsilon$ satisfies $\mathbb{E}(\epsilon \mid X, \eta) = 0$, $\mathrm{var}(\epsilon \mid X, \eta) = \sigma_\epsilon^2$, and $\mathbb{E}(|\epsilon|^3 \mid X, \eta)$ is finite. The error $\eta$ satisfies $\mathbb{E}(\eta \mid Z) = 0$ and $\mathrm{var}(\eta \mid Z) = \sigma_\eta^2$. Furthermore, $\eta$ (equivalently $Q$) is compactly supported.

For notational simplicity, we define the following conditional mean, variance, and covariance functions of the covariates given the error and the control indicator:

$$g_\delta(b) = \mathbb{E}(X \mid \eta - Z^\top\delta = b,\, Q < \tau_0), \qquad q_\delta(b) = \mathbb{E}(Z \mid \eta - Z^\top\delta = b,\, Q < \tau_0),$$
$$v_\delta(b) = \mathrm{var}(X \mid \eta - Z^\top\delta = b,\, Q < \tau_0), \qquad w_\delta(b) = \mathrm{var}(Z \mid \eta - Z^\top\delta = b,\, Q < \tau_0),$$
$$k_\delta(b) = \mathrm{cov}(X, Z \mid \eta - Z^\top\delta = b,\, Q < \tau_0).$$

Assumption 3 (Smoothness of the conditional functions). For any $\delta \in \mathbb{R}^{d_Z}$, the functions $g_\delta(b)$, $q_\delta(b)$, $v_\delta(b)$, $w_\delta(b)$, and $k_\delta(b)$ are Lipschitz continuous w.r.t. $b \in \mathrm{supp}(\eta - Z^\top\delta)$. Furthermore, for any sequence $\{\delta_n\}_{n \ge 1}$ converging to $0$ and $b \in \mathrm{supp}(\eta)$, we have $g_{\delta_n}(b) \to g_0(b)$, $q_{\delta_n}(b) \to q_0(b)$, $v_{\delta_n}(b) \to v_0(b)$, $w_{\delta_n}(b) \to w_0(b)$, and $k_{\delta_n}(b) \to k_0(b)$.

Assumption 4. For large enough $\tilde n$, the quantity $\mathbb{E}\big[\lambda_{\min}(\tilde Z^\top \tilde Z / \tilde n)^{-6}\big]$ is finite, where $\lambda_{\min}(\Gamma)$ is the smallest eigenvalue of $\Gamma$ and $\tilde Z = (Z_1; Z_2; \dots; Z_{\tilde n})^\top \in \mathbb{R}^{\tilde n \times d_Z}$.

Assumption 5 (Model parameters). The non-linear function $\ell(\cdot)$ in Equation (1.3) is Lipschitz continuous and has a bounded second derivative. Also, $\mathbb{E}(|\alpha_0(X, \eta)|^3 \mid Q \ge \tau_0)$ is finite.

Assumption 6 (Density ratio). For $\delta \in \mathbb{R}^{d_Z}$ and $b \in \mathbb{R}$, let $f_{0,\delta}(b)$ and $f_{1,\delta}(b)$ denote the density of $\eta - Z^\top\delta$ at $b$ conditional on $Q < \tau_0$ and $Q \ge \tau_0$, respectively. Then, for any $\delta \in \mathbb{R}^{d_Z}$ and $b \in \mathrm{supp}(\eta - Z^\top\delta)$, the quantities $f_{1,\delta}(b)$, $f_{0,\delta}(b)$, and $f_{1,\delta}(b)/f_{0,\delta}(b)$ are uniformly bounded and uniformly bounded away
from zero. Furthermore, for any sequence $\{\delta_n\}_{n \ge 1}$ converging to $0$ and $b \in \mathrm{supp}(\eta)$, we have $f_{0,\delta_n}(b) \to f_{0,0}(b)$ and $f_{1,\delta_n}(b) \to f_{1,0}(b)$.

The compactness of $X$ and $Z$ in Assumption 1 is assumed for technical convenience and can be relaxed with careful truncation arguments. In practical scenarios, $X$ and $Z$ are naturally bounded or can be made so through proper scaling. Moreover, the assumption that $0 \in \mathrm{supp}(Z)$ ensures that $\mathrm{supp}(\eta)$ is contained in $\mathrm{supp}(\eta - Z^\top\delta)$, which is crucial for our proofs. Assumption 2 is standard in the non-parametric regression literature, where we posit homoskedasticity of the errors $\epsilon$ and $\eta$; this can be relaxed by assuming $\mathrm{var}(\epsilon \mid X, \eta)$ and $\mathrm{var}(\eta \mid Z)$ to be uniformly bounded and bounded away from zero. Furthermore, the compactness of $\eta$ is assumed for technical convenience and can likewise be relaxed through careful truncation arguments. The continuity of $g_\delta$, $q_\delta$, $v_\delta$, $w_\delta$, and $k_\delta$ w.r.t. $\delta$ in Assumption 3 is satisfied as long as the density of $(X, \eta - Z^\top\delta)$ (resp. $(Z, \eta - Z^\top\delta)$) is continuous w.r.t. $\delta$ for any fixed $X$ (resp. $Z$). Assumption 4 is a technical condition ensuring that the Lindeberg condition for the martingale central limit theorem [Billingsley, 1995] is satisfied. Assumption 5 sets a minimal requirement on the smoothness of the function $\ell$ and the boundedness of the third moment of the heterogeneous treatment effect $\alpha_0(X, \eta)$. Finally, Assumption 6 is standard for the density ratio of control and treatment observations (see, e.g., Assumption 2 in Abadie and Imbens [2016]). To grasp its role, consider the case $\delta = 0$: here $f_{1,0}(b)/f_{0,0}(b)$ is the density ratio of $\eta$ between the treatment and control groups. Our method's core idea is to match treatment and control observations with respect to $\eta$. If $P(\eta \in S \mid Q \ge \tau_0) / P(\eta \in S \mid Q < \tau_0) = \infty$ for some set $S \subseteq \mathbb{R}$, matching becomes infeasible, since on $S$ the $\eta$'s in the treatment group cannot be matched with $\eta$'s in the control group.
The need for the density ratio of $\eta - Z^\top\delta$ to be bounded for any fixed $\delta$ arises because we do not observe $\eta$ directly, but instead approximate it by $\hat\eta = \eta - Z^\top(\hat\gamma - \gamma_0)$.

It is easy to see that $\hat\gamma$ is $\sqrt{n}$-CAN. Our first proposition establishes that the estimator of $\beta_0$ obtained in Step 4 of Algorithm 1 is also $\sqrt{n}$-CAN.

Proposition 1 ($\hat\beta$ is $\sqrt{n}$-CAN). Consider the estimator $\hat\beta$ of $\beta_0$ obtained from the first four steps of Algorithm 1. Under Assumptions 1 to 5, the estimator is $\sqrt{n}$-CAN:

$$\sqrt{\tilde n}\,(\hat\beta - \beta_0) \xrightarrow{d} \mathcal{N}(0, \Sigma_\beta),$$

where the explicit form of $\Sigma_\beta$ can be found in Equation (A.2) of Appendix A. We elaborate the key steps of the proof in Section 7 and defer the complete proof to Appendix A.

Remark 4 ($\sqrt{\tilde n}$ versus $\sqrt{n}$). In Proposition 1, we present the asymptotic normality of $\sqrt{\tilde n}(\hat\beta - \beta_0)$, where $\tilde n = n/3$ by definition. Therefore, it is immediate that $\sqrt{n}(\hat\beta - \beta_0) \xrightarrow{d} \mathcal{N}(0, 3\Sigma_\beta)$. Note that the factor of $3$ can be removed by cross-fitting.

Having obtained a $\sqrt{n}$-CAN estimator of $\beta_0$, we are now ready to present our main result on the $\sqrt{n}$-consistency and asymptotic normality of the ATT estimator $\hat\theta$, which is obtained by matching each treatment observation with its nearest control observation in terms of $\hat\eta$.

Theorem 1 ($\hat\theta$ is $\sqrt{n}$-CAN). Consider the estimator $\hat\theta$ of $\theta_0$ summarized in Algorithm 1. Under Assumptions 2 to 6, the estimator is $\sqrt{n}$-CAN:

$$\sqrt{\tilde n}\,(\hat\theta - \theta_0) \xrightarrow{d} \mathcal{N}(0, \sigma_\theta^2),$$

where the explicit form of $\sigma_\theta^2$ can be found in Equation (B.1) of Appendix B. Theorem 1
is the main result of this paper: it proves that the ATT can be estimated at a parametric rate despite a non-parametric correlation between the unobserved errors in Equations (1.3) and (1.4). The key steps of the proof are presented in Section 7, while the detailed proof can be found in Appendix B.

Remark 5 (Cross-fitting). We can gain efficiency (in terms of asymptotic variance) by performing cross-fitting as described in Remark 1. In particular, if $\hat\theta^{cf}$ denotes our cross-fitted estimator, we have $\sqrt{n}(\hat\theta^{cf} - \theta_0) \xrightarrow{d} \mathcal{N}(0, \sigma_\theta^2)$.

3.1 Estimation of the asymptotic variance of the ATT

In our main theorem (Theorem 1), we established that the estimator obtained in Algorithm 1 is a $\sqrt{n}$-CAN estimator of the ATT $\theta_0$, with asymptotic variance $\sigma_\theta^2$. Therefore, if we have a consistent estimator of $\sigma_\theta$ (say $\hat\sigma_\theta$), then by Slutsky's theorem,

$$\frac{\sqrt{\tilde n}}{\hat\sigma_\theta} (\hat\theta - \theta_0) \xrightarrow{d} \mathcal{N}(0, 1).$$

This allows us to perform statistical inference. For example, one might be interested in testing $H_0: \theta_0 = 0$ vs. $H_1: \theta_0 \ne 0$, which quantifies whether there is any treatment effect on the treated individuals. However, $\hat\sigma_\theta$ is notoriously difficult to construct directly, as $\sigma_\theta^2$ involves numerous nuisance parameters. A more practical approach is to use techniques like bootstrapping. In a standard $n$-out-of-$n$ bootstrap, $n$ observations are drawn with replacement from the full data set of $n$ observations, and Algorithm 1 is applied to the resampled observations. This process is repeated $B$ times to yield $\{\hat\theta_b\}_{b=1}^B$. Based on these bootstrap estimators, we estimate $\sigma_\theta^2$ as their sample variance scaled by $\tilde n$:

$$\hat\sigma_\theta^2 = \tilde n \, \frac{\sum_{b=1}^B (\hat\theta_b - \bar\theta_B)^2}{B - 1}, \quad \text{where } \bar\theta_B = \frac{1}{B}(\hat\theta_1 + \dots + \hat\theta_B) \text{ and } \tilde n = n/3.$$

We empirically show in Section 4.3 that the bootstrap estimator is consistent, and leave its proof for future work.

4 Simulation studies

In this section, we conduct three simulation studies.
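Before turning to the simulations, we note that the bootstrap recipe of Section 3.1 is straightforward to code; the sketch below is ours, and the `estimator` callable stands in for Algorithm 1:

```python
import numpy as np

def bootstrap_variance(data, estimator, B=300, seed=0):
    """n-out-of-n bootstrap estimate of the asymptotic variance of
    sqrt(n/3) * (theta_hat - theta0), as in Section 3.1.
    `data` is an array whose first axis indexes observations;
    `estimator` maps such an array to a scalar."""
    rng = np.random.default_rng(seed)
    n = len(data)
    thetas = np.array([estimator(data[rng.integers(0, n, size=n)])
                       for _ in range(B)])
    # scale the bootstrap variance by n~ = n/3, matching the theory
    return (n / 3) * thetas.var(ddof=1)
```

As a sanity check, with `estimator=lambda d: d.mean()` on i.i.d. $\mathcal{N}(0, \sigma^2)$ data, the routine returns roughly $\sigma^2/3$, since the variance of the sample mean is $\sigma^2/n$.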
The first simulation numerically illustrates the consistency and asymptotic normality of the estimator $\hat\theta$ of the ATT $\theta_0$, as outlined in Algorithm 1; it also checks whether the asymptotic variance of the estimator matches the formula from Equation (B.1) and whether cross-fitting yields an efficiency gain, as explained in Remarks 1 and 5. The second simulation illustrates that bootstrapping, as described in Section 3.1, provides a good approximation of the true asymptotic variance. Lastly, the third simulation evaluates the capability of Algorithm 2 to estimate the ITE.

4.1 Data generating process

For the first and second simulations, we consider a simple data-generating process:

$$Y = (X_1^2 + X_2 X_3 + \eta^2)\, \mathbb{1}_{Q \ge 0} + X_1 + X_3 + \eta/2 + \epsilon, \quad (4.1)$$
$$Q = X_4 + \eta, \quad (4.2)$$

where $X_1, \dots, X_4 \sim \mathcal{N}(0, 1)$ i.i.d., $\epsilon \sim \mathcal{N}(0, 0.5)$, and $\eta \sim \mathcal{U}(-1, 1)$. Note that Equations (4.1) and (4.2) are a special instance of Equations (1.3) and (1.4) with $Z = (X_1, X_2, X_3, X_4)$, $X = (X_1, X_2, X_3)$, $\alpha_0(X, \eta) = X_1^2 + X_2 X_3 + \eta^2$, $\ell(\eta) = \eta/2$, $\tau_0 = 0$, $\beta_0 = (1, 0, 1)^\top$, and $\gamma_0 = (0, 0, 0, 1)^\top$. For this data-generating process, $\sigma_\theta^2 \approx 11.455$, and the true ATT is $\theta_0 = \mathbb{E}(X_1^2 + X_2 X_3 + \eta^2 \mid X_4 + \eta \ge 0) \approx 1.333$.

4.2 Simulation 1: Estimation of the ATT

By Theorem 1, $\sqrt{\tilde n}(\hat\theta - \theta_0) \xrightarrow{d} \mathcal{N}(0, 11.455)$. To verify this numerically, we generate $1{,}000$ Monte-Carlo iterations following the data-generating process in Equations (4.1) and (4.2), each of size $n = 12{,}000$. This means
that each split is of size $\tilde n = 4{,}000$. For each iteration $1 \le k \le 1{,}000$, we compute $\zeta_k = \sqrt{\tilde n}(\hat\theta_k - \theta_0)$, where $\hat\theta_k$ is the estimate of the ATT $\theta_0$ obtained from iteration $k$ via Algorithm 1. The sample mean and variance of the $\zeta_k$'s are around $0.05$ and $12.5$, respectively, close to the theoretical values of $0$ and $11.455$. Moreover, the histogram of the $\zeta_k$'s (not shown) resembles a Gaussian distribution. It is worth noting that the result of Theorem 1 remains roughly valid despite the violation of Assumption 1 on the compactness of the support of $X$ and $Z$, since $X$ and $Z$ are effectively compactly supported.

We now repeat the exercise, but instead compute $\zeta_k^{cf} = \sqrt{n}(\hat\theta_k^{cf} - \theta_0)$, where $\hat\theta_k^{cf}$ is the cross-fitted estimate of the ATT $\theta_0$ for iteration $k$ (see Remark 1). The sample mean and variance of the $\zeta_k^{cf}$'s are around $0.09$ and $11.2$, respectively, again close to $0$ and $11.455$. Also, the histogram of the $\zeta_k^{cf}$'s, shown in Figure 2, resembles that of a normal distribution, corroborating Remark 5.

Figure 2: The histogram of the $\zeta_k^{cf}$'s looks fairly normal and is centered around $0$.

4.3 Simulation 2: Estimation of the asymptotic variance via bootstrapping

Our next simulation assesses the validity of our asymptotic variance estimator obtained via bootstrapping. Recall that in the bootstrap, we start with a data set of size $n$ and follow the method described in Section 3.1, which involves sampling $n$ observations (with replacement) from the data set and applying Algorithm 1 to obtain an estimate of the ATT $\theta_0$. We repeat this procedure $B = 200$ times, with each iteration $1 \le b \le 200$ yielding an estimate $\hat\theta_b$. The bootstrap estimator of the asymptotic variance is then

$$\hat\sigma_\theta^2 = \tilde n \, \frac{\sum_{b=1}^B (\hat\theta_b - \bar\theta_B)^2}{B - 1}, \quad \text{where } \bar\theta_B = \frac{1}{B}(\hat\theta_1 + \dots + \hat\theta_B) \text{ and } \tilde n = n/3.$$

To evaluate the performance of the bootstrap, for each $n$ we generate $100$ different data sets of size $n$ and compute $\hat\sigma_\theta^2$ for each. Table 1 summarizes the mean and the 90% Monte-Carlo confidence region (CR) of the $\hat\sigma_\theta^2$'s for each $n \in \{2{,}000,\ 5{,}000,\ 12{,}000\}$.
n                                              2,000          5,000          12,000
Mean of the $\hat\sigma_\theta^2$'s            12.0           11.7           11.9
90% Monte-Carlo CR of the $\hat\sigma_\theta^2$'s  (10.2, 13.7)   (10.2, 13.5)   (10.3, 13.4)

Table 1: The bootstrap provides a good estimate of the true asymptotic variance $\sigma_\theta^2 = 11.455$.

4.4 Simulation 3: Estimation of the ITE

This section presents our simulation results for estimating the ITE function $\alpha_0$ via the method proposed in Section 2.1. We consider two scenarios: (i) $\alpha_0$ depends only on $X$, which is equivalent to the standard CATE parameter, i.e., $Y_i(1) - Y_i(0) = \alpha_0(X_i)$; and (ii) $\alpha_0$ depends on both $X_i$ and $\eta_i$, i.e., $Y_i(1) - Y_i(0) = \alpha_0(X_i, \eta_i)$.

4.4.1 Case I: $\alpha_0$ depends only on $X$

This subsection assumes that $\alpha_0$ depends only on the background information $X$; in particular, we take $\alpha_0(X) = X_1^2 + X_2 X_3$. The data-generating process is:

$$Y = (X_1^2 + X_2 X_3)\, \mathbb{1}_{Q \ge 0} + X_1 + X_3 + \eta/2 + \epsilon,$$
$$Q = X_4 + \eta,$$

where, as before, $X_1, \dots, X_4 \sim \mathcal{N}(0, 1)$ i.i.d., $\epsilon \sim \mathcal{N}(0, 0.5)$, $\eta \sim \mathcal{U}(-1, 1)$, $Z = (X_1, X_2, X_3, X_4)$, and $X = (X_1, X_2, X_3)$. To obtain an estimate $\hat\alpha(X)$ of the ITE, we first follow Steps 1 to 5 of Algorithm 1 and then regress $(Y_i - X_i^\top \hat\beta) - (Y_{c(i)} - X_{c(i)}^\top \hat\beta)$ on $X_i$ using cubic B-splines, with degrees of freedom
chosen via 4-fold cross-validation, together with quadratic interaction terms. Table 2 shows that the mean squared error (MSE) of $\hat\alpha(X)$ among all treated individuals tends to decrease as the sample size $n$ increases. Moreover, Figure 3 compares $\hat\alpha(X)$ and $\alpha_0(X)$ for $(X_2, X_3) = (\pm 0.7, \pm 0.2)$ when $n = 50{,}000$, demonstrating that our approach predicts the ITE well.

n        4-fold CV d.f.   MSE of $\hat\alpha(X)$   4-fold CV d.f.   MSE of $\hat\alpha(X, \hat\eta)$
5,000    3                0.16                     3                0.22
10,000   3                0.04                     3                0.08
20,000   3                0.04                     3                0.05
50,000   3                0.01                     3                0.03

Table 2: The MSEs of $\hat\alpha(X)$ and $\hat\alpha(X, \hat\eta)$ tend to decrease as the sample size $n$ increases.

Figure 3: Actual vs. predicted ITE for $(X_2, X_3) = (\pm 0.7, \pm 0.2)$.

4.4.2 Case II: $\alpha_0$ depends on $(X, \eta)$

We now consider the same data-generating process as in Sections 4.2 and 4.3, repeated here for clarity:

$$Y = (X_1^2 + X_2 X_3 + \eta^2)\, \mathbb{1}_{Q \ge 0} + X_1 + X_3 + \eta/2 + \epsilon,$$
$$Q = X_4 + \eta.$$

Here, the ITE $\alpha_0(X, \eta) = X_1^2 + X_2 X_3 + \eta^2$ depends on both $X$ and $\eta$. To obtain an estimate $\hat\alpha(X, \hat\eta)$ of the ITE, we first follow Steps 1 to 5 of Algorithm 1 and then regress $(Y_i - X_i^\top \hat\beta) - (Y_{c(i)} - X_{c(i)}^\top \hat\beta)$ on $X_i$ and $\hat\eta_i$ using cubic B-splines, with degrees of freedom (d.f.) chosen via 4-fold cross-validation (CV), together with quadratic interaction terms. Recall that $\eta$ is unknown in this scenario and thus needs to be estimated. Table 2 shows that the MSE of $\hat\alpha(X, \hat\eta)$ among all treated individuals tends to decrease as the sample size $n$ increases. Moreover, Figure 4 compares $\hat\alpha(X, \hat\eta)$ and $\alpha_0(X, \eta)$ for $(X_1, X_2, X_3, X_4) = (0.1, \pm 0.2, \pm 0.8, 1.5)$ and $\eta \in [-1, 1]$ when $n = 50{,}000$, again demonstrating that our approach predicts the ITE well.

Figure 4: Actual vs. predicted ITE for $(X_1, X_2, X_3, X_4) = (0.1, \pm 0.2, \pm 0.8, 1.5)$.

5 Real data analysis

In this section, we apply our method to two real data sets. The first examines how Islamic political rule impacts women's empowerment, while the second focuses on how grade-based academic probation affects a student's future GPA.
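The ITE recovery of Section 4.4.1 can be sketched with ordinary least squares on quadratic features standing in for the cubic B-spline basis (our simplification); for brevity the sketch also uses the true $\beta_0$ and $\eta$, whereas in practice Steps 1 to 5 of Algorithm 1 supply $\hat\beta$ and $\hat\eta$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
# DGP of Section 4.4.1: alpha0(X) = X1^2 + X2*X3 depends on X only
X = rng.normal(size=(n, 4))
eta = rng.uniform(-1, 1, size=n)
Q = X[:, 3] + eta
Y = ((X[:, 0]**2 + X[:, 1] * X[:, 2]) * (Q >= 0)
     + X[:, 0] + X[:, 2] + eta / 2
     + rng.normal(scale=np.sqrt(0.5), size=n))

# match each treated unit to the control unit with the nearest eta
trt, ctl = np.flatnonzero(Q >= 0), np.flatnonzero(Q < 0)
c = ctl[np.abs(eta[trt][:, None] - eta[ctl][None, :]).argmin(axis=1)]
beta0 = np.array([1.0, 0.0, 1.0])      # true beta, for illustration
D = (Y[trt] - Y[c]) - (X[trt, :3] - X[c, :3]) @ beta0

# quadratic features of (X1, X2, X3) stand in for the B-spline basis
def feats(W):
    return np.column_stack([np.ones(len(W)), W, W**2,
                            W[:, 0] * W[:, 1],
                            W[:, 0] * W[:, 2],
                            W[:, 1] * W[:, 2]])

coef, *_ = np.linalg.lstsq(feats(X[trt, :3]), D, rcond=None)
# coef[4] (X1^2 term) and coef[9] (X2*X3 term) come out close to 1
```

Since $\alpha_0(X) = X_1^2 + X_2 X_3$ is exactly quadratic, the fitted coefficients on the $X_1^2$ and $X_2 X_3$ features recover the ITE; with a genuinely non-parametric $\alpha_0$, a spline or kernel regression would replace the quadratic features.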
5.1 Effect of Islamic party rule on women's education

In this subsection, we use a data set originally introduced in Meyersson [2014] on the effect of Islamic political rule on women's empowerment. Specifically, we investigate whether the victory of an Islamic party affects women's educational outcomes. The data set consists of $2{,}629$ rows, each representing a municipality. The response variable $Y$ is computed as $Y_w - Y_m$, where $Y_w$ ($Y_m$) denotes the percentage of women (men) aged 15 to 20 who completed high school by 2020. The score $Q$ is the difference in vote share between the largest Islamic party and the largest secular party in the 1994 election; an Islamic party is considered elected if and only if $Q > 0$. The covariates $X$ used in the analysis are: (1) the Islamic vote share in 1994; (2) the number of parties receiving at least one vote in 1994; (3) the log of the population in 1994; (4) whether the municipality is a district center; (5) whether it is a province center; (6) whether it is a sub-metro center; (7) whether it is a metro center; (8) the share of the population below age 19 in 2000; (9) the share
of the population above age 60 in 2000; (10) the male-to-female ratio in 2000; and (11) the average household size in 2000. Using our method, we obtain an estimated ATT of $\hat\theta = 0.65$. To test the null hypothesis $H_0: \theta_0 = 0$ against the alternative $H_1: \theta_0 \ne 0$, we employ bootstrapping with $B = 500$ bootstrap samples, obtaining a bootstrap mean of $0.68$ and a 95% bootstrap confidence interval of $(-0.62, 2.13)$. This result is similar to that of Mukherjee et al. [2021], who assumed a homogeneous treatment effect and reported a 95% bootstrap confidence interval of $(-0.42, 1.42)$.

5.2 Effect of academic probation on subsequent GPA

We now use the data of Lindo et al. [2010] to analyze whether grade-based academic probation affects a student's subsequent GPA. This data set comprises $44{,}362$ rows, each representing a student from one of three large Canadian universities, denoted A, B, and C. For each student, the response variable $Y$ is their GPA in the first term of the second year, and the score $Q$ is the difference between their first-year GPA and the academic probation threshold; a student is placed on probation if and only if $Q > 0$. The covariates $X$ used are: (1) the student's high school grade; (2) the total credits taken in the first year; (3) whether the student is from university A; (4) whether the student is from university B; (5) whether the student is male; (6) whether the student was born in North America; (7) the student's age at college entry; and (8) whether the student is a native English speaker. Using our method, we obtain an estimated ATT of $\hat\theta = 0.28$. To test $H_0: \theta_0 = 0$ against $H_1: \theta_0 \ne 0$, we again employ bootstrapping with $B = 500$ bootstrap samples. The bootstrap mean and 95% confidence interval are $0.27$ and $(0.22, 0.32)$, respectively.
Since the confidence interval is entirely positive, we conclude that students who are placed on academic probation in their first year tend to see an improvement in their GPA in the first term of the second year. This conclusion is consistent with findings by Lindo et al. (2010) and Mukherjee et al. (2021), with the latter reporting a 95% bootstrap confidence interval of $(0.25, 0.32)$.

6 Conclusion

In this paper, we developed an algorithm to estimate the ATT in a non-randomized treatment setting, where the treatment assignment depends on whether a variable exceeds a pre-specified threshold. Our method assumes that the treatment effect for each individual depends on their observed and unobserved covariates, and it incorporates all individuals rather than only those close to the threshold. We proved that the resulting ATT estimator is both $\sqrt{n}$-consistent and asymptotically normal under standard regularity conditions, with empirical evidence showing that its asymptotic variance can be consistently estimated via the bootstrap. Moreover, a slight adjustment to our algorithm allows us to estimate both the ITE and CATE, though we do not explore the theoretical properties of the corresponding estimators in order to keep the manuscript focused. Finally, we validated the effectiveness of our method through synthetic and real data analyses. Future
work may focus on establishing the consistency of the bootstrap variance estimator and conducting inference on the ITE and CATE estimators.

7 Roadmap of theoretical proofs

In this section, we outline proof sketches for Proposition 1 and Theorem 1, with full proofs provided in Appendices A and B, respectively.

7.1 Proof sketch of Proposition 1

We divide the proof into 4 key steps.

Step 1: Asymptotic normality of $\hat\gamma_{\tilde n}$

In Algorithm 1, we first perform OLS of $Q_i$ against $Z_i$ for observations in $I_1$. The goal is to estimate each $\eta$ with $\hat\eta = Q - Z^\top\hat\gamma_{\tilde n}$, the main ingredient for differencing and matching in Steps 4 and 6, respectively. Observe that
\[
\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)=\sum_{i=1}^{\tilde n}\frac{1}{\sqrt{\tilde n}}\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\eta_i,
\]
where $\tilde Z=(Z_1;Z_2;\cdots;Z_{\tilde n})^\top\in\mathbb{R}^{\tilde n\times d_Z}$, converges in distribution to $N(0,\Sigma_\gamma)$, where $\Sigma_\gamma=\sigma_\eta^2\Sigma_Z^{-1}$.

To estimate $\beta_0$, we regress the first-order differences of $Y$ on $X$ (based on the $\hat\eta$ values) for observations in the second partition that belong to the control group, denoted by $I_2^C$. Observe that
\[
\sqrt{\tilde n}(\hat\beta_{\tilde n}-\beta_0)=\left(\frac{1}{\tilde n}(\Delta X)^\top\Delta X\right)^{-1}\left(\frac{1}{\sqrt{\tilde n}}(\Delta X)^\top\Delta w\right), \tag{7.1}
\]
where $\Delta X$ consists of $X_{(i+1)}-X_{(i)}$ terms, $\Delta Y$ consists of $Y_{(i+1)}-Y_{(i)}$ terms, and $\Delta w$ consists of $\ell(\eta_{(i+1)})-\ell(\eta_{(i)})+\epsilon_{(i+1)}-\epsilon_{(i)}$ terms. To establish the asymptotic normality of $\hat\beta_{\tilde n}$, we examine each term in the product on the RHS of Equation (7.1). We show that the first term converges in probability while the second term converges in distribution to a normal distribution, whence the conclusion follows via Slutsky's theorem.

Step 2: First term of the RHS of Equation (7.1)

We initially fix the first partition of the data; in other words, we first assume that $\hat\delta_{\tilde n}:=\hat\gamma_{\tilde n}-\gamma_0$ is fixed. Note that each observation in the control group within $I_2$ can be written as $X=g_{\hat\delta_{\tilde n}}(\hat\eta)+u_{\hat\delta_{\tilde n}}$, where $\hat\eta=\eta-Z^\top\hat\delta_{\tilde n}$ and $E(u_{\hat\delta_{\tilde n}}\mid\hat\eta,Q<\tau_0)=0$. We can also write $X=g_0(\eta)+u_0$, where $E(u_0\mid\eta,Q<\tau_0)=0$.
Since $X_{(i)}=g_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})+u_{\hat\delta_{\tilde n}(i)}$ for every $i$, we have
\[
\frac{1}{\tilde n}(\Delta X)^\top\Delta X=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}(X_{(i+1)}-X_{(i)})(X_{(i+1)}-X_{(i)})^\top=L+(M+M^\top)+P,
\]
where
\[
L=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(g_{\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big)\big(g_{\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big)^\top,
\]
\[
M=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(g_{\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)^\top,
\]
\[
P=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)^\top.
\]
Utilizing the fact that the ordering is done on the $\hat\eta_i$'s and $g_{\hat\delta_{\tilde n}}$ is Lipschitz, we can show via the Cauchy-Schwarz inequality that $L$ and $M$ are both $o_p(1)$. Moreover, we can show that conditional on $\hat\delta_{\tilde n}$,
\[
o_p(1)=P-\frac{2}{\tilde n}\sum_{i=1}^{|I_2^C|}u_{\hat\delta_{\tilde n}i}u_{\hat\delta_{\tilde n}i}^\top
=P-\frac{2|I_2^C|}{\tilde n}\cdot\frac{1}{|I_2^C|}\sum_{i=1}^{|I_2^C|}\big(X_i-E(X\mid\hat\eta_i,Q<\tau_0)\big)\big(X_i-E(X\mid\hat\eta_i,Q<\tau_0)\big)^\top,
\]
which implies
\[
P-2P(Q<\tau_0)E\big(\mathrm{var}(X\mid\hat\eta,Q<\tau_0)\mid Q<\tau_0\big)=o_p(1).
\]
We can now apply Lebesgue's DCT to prove that $P$ (and thus $(\Delta X)^\top\Delta X/\tilde n$) converges to $2P(Q<\tau_0)E(\mathrm{var}(X\mid\eta,Q<\tau_0)\mid Q<\tau_0)$ in probability. Finally, an application of the continuous mapping theorem yields
\[
\left(\frac{1}{\tilde n}(\Delta X)^\top\Delta X\right)^{-1}\xrightarrow{P}\frac{1}{2P(Q<\tau_0)}\Sigma_u^{-1}, \tag{7.2}
\]
where $\Sigma_u:=E(\mathrm{var}(X\mid\eta,Q<\tau_0)\mid Q<\tau_0)$.

Step 3: Second term of the RHS of Equation (7.1)

We have
\[
\frac{1}{\sqrt{\tilde n}}(\Delta X)^\top\Delta w=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}(X_{(i+1)}-X_{(i)})\big(\ell(\eta_{(i+1)})-\ell(\eta_{(i)})+\epsilon_{(i+1)}-\epsilon_{(i)}\big).
\]
As before, we expand the above equation by rewriting $X_i=g_{\hat\delta_{\tilde n}}(\hat\eta_i)+u_{\hat\delta_{\tilde n}i}$. Some customary algebra followed by ignoring lower-order terms shows
\[
\frac{1}{\sqrt{\tilde n}}(\Delta X)^\top\Delta w=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\ell(\eta_{(i+1)})-\ell(\eta_{(i)})\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)
+\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\epsilon_{(i+1)}-\epsilon_{(i)}\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)+o_p(1)
\triangleq F+H+o_p(1),
\]
where both $F$ and $H$ contribute to the asymptotic normality. First,
observe that
\[
F=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\ell(\eta_{(i+1)})-\ell(\hat\eta_{(i+1)})\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)
-\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\ell(\eta_{(i)})-\ell(\hat\eta_{(i)})\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)
+\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\ell(\hat\eta_{(i+1)})-\ell(\hat\eta_{(i)})\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big).
\]
The last summand can be shown to be asymptotically negligible, again due to the fact that the ordering is done on the $\hat\eta_i$'s and $\ell$ is Lipschitz. Also, the first and second summands can be approximated via a two-step Taylor expansion:
\[
\ell(\eta_{(i)})-\ell(\hat\eta_{(i)})=(\eta_{(i)}-\hat\eta_{(i)})\ell'(\hat\eta_{(i)})+\frac{(\eta_{(i)}-\hat\eta_{(i)})^2}{2}\ell''(\tilde\eta_{(i)}),
\]
where $\tilde\eta_{(i)}$ is between $\eta_{(i)}$ and $\hat\eta_{(i)}$ (we can similarly expand $\ell(\eta_{(i+1)})-\ell(\hat\eta_{(i+1)})$). The quadratic terms can be shown to be asymptotically negligible. For the linear terms, we first decompose $Z$ for control units into $q_{\hat\delta_{\tilde n}}(\hat\eta)+w_{\hat\delta_{\tilde n}}$, where $\hat\eta=\eta-Z^\top\hat\delta_{\tilde n}$ and $E(w_{\hat\delta_{\tilde n}}\mid\hat\eta,Q<\tau_0)=0$. Similarly, we can write $Z=q_0(\eta)+w_0$, where $E(w_0\mid\eta,Q<\tau_0)=0$. Some algebra followed by ignoring lower-order terms yields $F=\bar F\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)+o_p(1)$, where
\[
\bar F=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)\big(\ell'(\hat\eta_{(i+1)})(w_{\hat\delta_{\tilde n}(i+1)})^\top-\ell'(\hat\eta_{(i)})(w_{\hat\delta_{\tilde n}(i)})^\top\big).
\]
It is possible to show that conditional on $\hat\delta_{\tilde n}$,
\[
\bar F-\frac{2|I_2^C|}{\tilde n}\cdot\frac{1}{|I_2^C|}\sum_{i=1}^{|I_2^C|}\ell'(\hat\eta_i)u_{\hat\delta_{\tilde n}i}w_{\hat\delta_{\tilde n}i}^\top=o_p(1)
\;\Longrightarrow\;
\bar F-2P(Q<\tau_0)E\big(\ell'(\hat\eta)u_{\hat\delta_{\tilde n}}w_{\hat\delta_{\tilde n}}^\top\mid Q<\tau_0\big)=o_p(1).
\]
Following a similar approach as for the term $P$ in Step 2, we have that $\bar F$ converges to $2P(Q<\tau_0)E(\ell'(\eta)u_0w_0^\top\mid Q<\tau_0)$ in probability, where $u_0=X-E(X\mid\eta,Q<\tau_0)$ as before. Omitting $o_p(1)$ terms and using the expansion of $\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)$ in Step 1, we have
\[
F=\sum_{i=1}^{\tilde n}\frac{2}{\sqrt{\tilde n}}P(Q<\tau_0)E\big(\ell'(\eta)u_0w_0^\top\mid Q<\tau_0\big)\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\eta_i. \tag{7.3}
\]
Finally, we can rewrite $H$ as
\[
H=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\epsilon_{(i+1)}-\epsilon_{(i)}\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)
=\frac{1}{\sqrt{\tilde n}}\Big(\epsilon_{(1)}\big(u_{\hat\delta_{\tilde n}(1)}-u_{\hat\delta_{\tilde n}(2)}\big)+\epsilon_{(2)}\big(2u_{\hat\delta_{\tilde n}(2)}-u_{\hat\delta_{\tilde n}(1)}-u_{\hat\delta_{\tilde n}(3)}\big)+\cdots
+\epsilon_{(|I_2^C|-1)}\big(2u_{\hat\delta_{\tilde n}(|I_2^C|-1)}-u_{\hat\delta_{\tilde n}(|I_2^C|-2)}-u_{\hat\delta_{\tilde n}(|I_2^C|)}\big)+\epsilon_{(|I_2^C|)}\big(u_{\hat\delta_{\tilde n}(|I_2^C|)}-u_{\hat\delta_{\tilde n}(|I_2^C|-1)}\big)\Big)
:=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|}\epsilon_{(i)}a_{\hat\delta_{\tilde n}(i)}:=\frac{1}{\sqrt{\tilde n}}\sum_{i=\tilde n+1}^{2\tilde n}\epsilon_i a_{\hat\delta_{\tilde n},i}, \tag{7.4}
\]
where $a_{\hat\delta_{\tilde n},i}=0$ for any treatment observation $i$.
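The estimator analyzed in Steps 1-3 (OLS residuals $\hat\eta$, ordering the controls by $\hat\eta$, and regressing first differences of Y on first differences of X) can be sketched numerically as follows. The functional forms of $g$ and $\ell$ and all parameter values below are illustrative assumptions, not the paper's actual data-generating process.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
# Simulated model (all functional forms are illustrative):
#   Q = Z'gamma0 + eta,  X = g(eta) + u,  Y = X'beta0 + l(eta) + eps
gamma0, beta0 = np.array([1.0, -0.5]), np.array([2.0, 0.7])
Z = rng.normal(size=(n, 2))
eta = rng.normal(scale=0.5, size=n)
Q = Z @ gamma0 + eta
X = np.column_stack([np.sin(eta), eta**2]) + rng.normal(scale=0.3, size=(n, 2))
Y = X @ beta0 + np.cos(eta) + rng.normal(scale=0.3, size=n)

# Step 1: OLS of Q on Z on the first partition gives gamma_hat, hence eta_hat.
half = n // 2
g_hat = np.linalg.lstsq(Z[:half], Q[:half], rcond=None)[0]
eta_hat = Q - Z @ g_hat

# Steps 2-3: on the controls of the second partition (Q < tau0 = 0), sort by
# eta_hat and regress first differences of Y on first differences of X;
# differencing over adjacent eta_hat values removes the nuisance l(eta).
ctrl = np.arange(half, n)[Q[half:] < 0.0]
order = ctrl[np.argsort(eta_hat[ctrl])]
dX, dY = np.diff(X[order], axis=0), np.diff(Y[order])
beta_hat = np.linalg.lstsq(dX, dY, rcond=None)[0]
print(beta_hat)
```

On this toy model the differenced regression recovers beta0 = (2.0, 0.7) up to sampling noise, illustrating why the roadmap only needs to control the differenced nuisance terms $\Delta w$.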
Step 4: Putting everything together

Now, we can employ the martingale central limit theorem [Billingsley, 1995] to establish the asymptotic normality of $\frac{1}{\sqrt{\tilde n}}(\Delta X)^\top\Delta w=F+H+o_p(1)$. Effectively, we need to show that $F+H$ is asymptotically normal. Recall that $F$ is primarily a function of the $\eta_i$'s (see Equation (7.3)) and $H$ is a function of both the $\eta_i$'s and the $\epsilon_i$'s (see Equation (7.4)). Consequently, they are not independent, and we need to establish normality jointly. The detailed argument is presented in Appendix A. Lastly, we use Slutsky's theorem to establish the asymptotic normality of $\sqrt{\tilde n}(\hat\beta_{\tilde n}-\beta_0)$.

7.2 Proof sketch of Theorem 1

We first introduce some notation. Let $t_i=\mathbb{1}_{Q_i\ge\tau_0}$ denote the treatment status of observation $i$, $\tilde n_1=\sum_{i=2\tilde n+1}^{3\tilde n}t_i$ denote the number of treatment observations in $I_3$, and $\tilde n_0=\tilde n-\tilde n_1$ denote the number of control observations in $I_3$. Also, let $I_3^T$ (resp. $I_3^C$) denote the group of individuals in $I_3$ with $t_i=1$ (resp. $t_i=0$). For each $i\in I_3^T$, let $c(i)\in I_3^C$ be its nearest neighbor in the control group with respect to $\hat\eta$ (see Step 5 of Algorithm 1). Following Abadie and Imbens [2016], for each $i\in I_3^C$, we define $K_{\hat\delta_{\tilde n},i}$ to be the number of times observation $i$ is used as a match. In other words, $K_{\hat\delta_{\tilde n},i}$ denotes the number of treatment observations whose nearest neighbor (with respect to $\hat\eta$) is $i$. Here, $\hat\delta_{\tilde n}=\hat\gamma_{\tilde n}-\gamma_0$ is obtained from $I_1$. We divide the proof into 4 key steps.

Step 1: Decomposition of $\sqrt{\tilde n}(\hat\theta_{\tilde n}-\theta_0)$

Write $\sqrt{\tilde n}(\hat\theta_{\tilde n}-\theta_0)=(1)+(2)+(3)+(4)$, where
\[
(1)=\frac{\sqrt{\tilde n}}{\tilde n_1}\sum_{i=2\tilde n+1}^{3\tilde n}t_i\big(\alpha_0(X_i,\eta_i)-E(\alpha_0(X,\eta)\mid Q\ge\tau_0)\big),\qquad
(2)=\frac{\sqrt{\tilde n}}{\tilde n_1}(\beta_0-\hat\beta_{\tilde n})^\top\sum_{i=2\tilde n+1}^{3\tilde n}\big(t_i-(1-t_i)K_{\hat\delta_{\tilde n},i}\big)X_i,
\]
\[
(3)=\frac{\sqrt{\tilde n}}{\tilde n_1}\sum_{i=2\tilde n+1}^{3\tilde n}\big(t_i-(1-t_i)K_{\hat\delta_{\tilde n},i}\big)\epsilon_i,\qquad
(4)=\frac{\sqrt{\tilde n}}{\tilde n_1}\sum_{i=2\tilde n+1}^{3\tilde n}t_i\big(\ell(\eta_i)-\ell(\eta_{c(i)})\big).
\]
We initially focus on examining terms (2) and (4) before returning to terms (1) and (3) in Step 4.

Step 2: Term (2)

We can rewrite (2) as
\[
(2)=\sqrt{\tilde n}(\beta_0-\hat\beta_{\tilde n})^\top\frac{\tilde n}{\tilde n_1}\left(\frac{1}{\tilde n}\sum_{i=2\tilde n+1}^{3\tilde n}t_iX_i-\frac{1}{\tilde n}\sum_{i=2\tilde n+1}^{3\tilde n}(1-t_i)K_{\hat\delta_{\tilde n},i}X_i\right).
\]
As established in Proposition 1, the first term in the product converges in distribution to $N(0,\Sigma_\beta)$. Also, it is easy to see that the second term converges in probability to $1/P(Q\ge\tau_0)$, and the first part of the third term converges in probability to $E(tX)=P(Q\ge\tau_0)E(X\mid Q\ge\tau_0)$. For the remaining term, it is possible to show that
\[
\frac{1}{\tilde n}\sum_{i=2\tilde n+1}^{3\tilde n}(1-t_i)K_{\hat\delta_{\tilde n},i}X_i-P(Q\ge\tau_0)E\!\left(\frac{f_{1,\hat\delta_{\tilde n}}(\hat\eta)}{f_{0,\hat\delta_{\tilde n}}(\hat\eta)}X\Bigm|Q<\tau_0\right)=o_p(1)
\]
conditional on $\hat\delta_{\tilde n}$ by slightly modifying the proofs of Lemmas S.6, S.7 and S.10 of Abadie and Imbens [2016]. Here, $f_{i,\hat\delta_{\tilde n}}(\hat\eta)$ is the density of $\hat\eta:=\eta-Z^\top\hat\delta_{\tilde n}$ conditional on $t=i$ for $i\in\{0,1\}$. Using Lebesgue's DCT, we have
\[
\frac{1}{\tilde n}\sum_{i=2\tilde n+1}^{3\tilde n}(1-t_i)K_{\hat\delta_{\tilde n},i}X_i\xrightarrow{P}P(Q\ge\tau_0)E\!\left(\frac{f_1(\eta)}{f_0(\eta)}X\Bigm|Q<\tau_0\right),
\]
where $f_i(\eta)$ denotes the density of $\eta$ conditional on $t=i$ for $i\in\{0,1\}$. Intuitively, the density ratio $f_1(\eta)/f_0(\eta)$ appears since, for any control observation $i$, $K_{\hat\delta_{\tilde n},i}$ denotes the number of treatment observations having $i$ as their nearest neighbor. Following the derivations in Proposition 1 and after some algebra, we obtain
\[
(2)=\sqrt{\tilde n}(\hat\beta_{\tilde n}-\beta_0)^\top\left(E\!\left(\frac{f_1(\eta)}{f_0(\eta)}X\Bigm|Q<\tau_0\right)-E(X\mid Q\ge\tau_0)\right)+o_p(1)
=\sum_{i=1}^{\tilde n}\frac{1}{\sqrt{\tilde n}}A_3^\top A_1\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\eta_i+\sum_{i=\tilde n+1}^{2\tilde n}\frac{1}{\sqrt{\tilde n}}A_3^\top\epsilon_i a_{\hat\delta_{\tilde n},i}+o_p(1),
\]
where $A_1=2P(Q<\tau_0)E(\ell'(\eta)u_0w_0^\top\mid Q<\tau_0)$ and
\[
A_3=\frac{1}{2P(Q<\tau_0)}\Sigma_u^{-1}\left(E\!\left(\frac{f_1(\eta)}{f_0(\eta)}X\Bigm|Q<\tau_0\right)-E(X\mid Q\ge\tau_0)\right).
\]
The terms $A_1$ and $A_3$ emerge from the decomposition of $\sqrt{\tilde n}(\hat\beta_{\tilde n}-\beta_0)$ in the roadmap of Proposition 1's proof (see Equations (7.1) to (7.4)).
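The matching quantities in term (2), the nearest control neighbor $c(i)$ of each treated unit and the match counts $K_{\hat\delta_{\tilde n},i}$, can be computed directly; the inputs below are simulated placeholders rather than quantities from the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n3 = 200
eta_hat = rng.normal(size=n3)                    # estimated residuals in I3 (illustrative)
t = (rng.normal(size=n3) > 0).astype(int)        # treatment indicators

treat_idx = np.flatnonzero(t == 1)
ctrl_idx = np.flatnonzero(t == 0)

# c(i): for each treated i, the control unit nearest in eta_hat.
c = {i: ctrl_idx[np.argmin(np.abs(eta_hat[ctrl_idx] - eta_hat[i]))]
     for i in treat_idx}

# K_i: how many treated observations are matched to control i.
K = np.zeros(n3, dtype=int)
for i in treat_idx:
    K[c[i]] += 1

# Every treated unit is matched exactly once, so the K_i sum to n1,
# and K_i = 0 for every treated unit.
print(K.sum(), len(treat_idx))
```

This makes the weighting in term (2) concrete: each treated unit contributes weight $+1$ and each control unit weight $-K_i$, which is exactly the $t_i-(1-t_i)K_{\hat\delta_{\tilde n},i}$ pattern above.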
Step 3: Term (4)

Next, we can decompose (4) into
\[
(4)=\frac{\sqrt{\tilde n}}{\tilde n_1}\sum_{i=2\tilde n+1}^{3\tilde n}t_i\big(\ell(\hat\eta_i)-\ell(\hat\eta_{c(i)})\big)
+\frac{\sqrt{\tilde n}}{\tilde n_1}\sum_{i=2\tilde n+1}^{3\tilde n}t_i\big(\ell(\eta_i)-\ell(\hat\eta_i)\big)
-\frac{\sqrt{\tilde n}}{\tilde n_1}\sum_{i=2\tilde n+1}^{3\tilde n}t_i\big(\ell(\eta_{c(i)})-\ell(\hat\eta_{c(i)})\big). \tag{7.5}
\]
As before, we first develop our argument for a fixed $\hat\delta_{\tilde n}$. Following the proof of Proposition 1 in Abadie and Imbens [2016], we can show that the first summand is $o_p(1)$. To address the second and third summands, we again utilize two-step Taylor expansions akin to those employed for the term $F$. After some algebra, we obtain
\[
(4)=\frac{\sqrt{\tilde n}}{\tilde n_1}\sum_{i=2\tilde n+1}^{3\tilde n}\big(t_i-(1-t_i)K_{\hat\delta_{\tilde n},i}\big)(\eta_i-\hat\eta_i)\ell'(\hat\eta_i)
=\big(\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)\big)^\top\frac{\tilde n}{\tilde n_1}\left(\frac{1}{\tilde n}\sum_{i=2\tilde n+1}^{3\tilde n}\big(t_i-(1-t_i)K_{\hat\delta_{\tilde n},i}\big)Z_i\ell'(\hat\eta_i)\right).
\]
The expression $t_i-(1-t_i)K_{\hat\delta_{\tilde n},i}$ intuitively stems from the observation that, in the last two summands of Equation (7.5), each treated observation appears once and each control observation appears $K_{\hat\delta_{\tilde n},i}$ times. Finally, following a similar derivation to that for the term (2), we have
\[
(4)=\sum_{i=1}^{\tilde n}\frac{1}{\sqrt{\tilde n}}A_4^\top\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\eta_i+o_p(1),
\]
where
\[
A_4=E(Z\ell'(\eta)\mid Q\ge\tau_0)-E\!\left(\frac{f_1(\eta)}{f_0(\eta)}Z\ell'(\eta)\Bigm|Q<\tau_0\right).
\]
Again, the density ratio term appears due to the presence of $K_{\hat\delta_{\tilde n},i}$.

Step 4: Putting everything together

To show that $\sqrt{\tilde n}(\hat\theta_{\tilde n}-\theta_0)=(1)+(2)+(3)+(4)$ is asymptotically normal, we substitute the terms (2) and (4) following our derivations in Steps 2 and 3, and we use the formulas for (1) and (3) as in Step 1. We have
\[
\sqrt{\tilde n}(\hat\theta_{\tilde n}-\theta_0)=\sum_{i=1}^{\tilde n}\frac{1}{\sqrt{\tilde n}}A_5^\top\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\eta_i+\sum_{i=\tilde n+1}^{2\tilde n}\frac{1}{\sqrt{\tilde n}}A_3^\top\epsilon_i a_{\hat\delta_{\tilde n},i}
+\sum_{i=2\tilde n+1}^{3\tilde n}\frac{\sqrt{\tilde n}}{\tilde n_1}t_i\big(\alpha_0(X_i,\eta_i)-E(\alpha_0(X,\eta)\mid Q\ge\tau_0)\big)
+\sum_{i=2\tilde n+1}^{3\tilde n}\frac{\sqrt{\tilde n}}{\tilde n_1}\big(t_i-(1-t_i)K_{\hat\delta_{\tilde n},i}\big)\epsilon_i,
\]
where $A_5=A_1^\top A_3+A_4$. Since the terms are not independent, we
need to apply the martingale central limit theorem [Billingsley, 1995] to establish normality jointly. Details are provided in Appendix B.

References

A. Abadie and G. W. Imbens. Matching on the estimated propensity score. Econometrica, 84(2):781–807, 2016.

M. L. Anderson, C. Dobkin, and D. Gorry. The effect of influenza vaccination for the elderly on hospitalization and mortality: An observational study with a regression discontinuity design. Annals of Internal Medicine, 172(7):445–452, 2020.

J. D. Angrist and M. Rokkanen. Wanna get away? Regression discontinuity estimation of exam school effects away from the cutoff. Journal of the American Statistical Association, 110(512):1331–1344, 2015.

J. D. Angrist, G. W. Imbens, and D. B. Rubin. Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91(434):444–455, 1996.

J. Banks and F. Mazzonna. The effect of education on old age cognitive abilities: Evidence from a regression discontinuity design. The Economic Journal, 122(560):418–448, 2012.

N. E. Basta and M. E. Halloran. Evaluating the effectiveness of vaccines using a regression discontinuity design. American Journal of Epidemiology, 188(6):987–990, 2019.

P. Billingsley. Probability and Measure. John Wiley & Sons, 1995.

S. Calonico, M. D. Cattaneo, M. H. Farrell, and R. Titiunik. Regression discontinuity designs using covariates. Review of Economics and Statistics, 101(3):442–451, 2019.

M. D. Cattaneo, N. Idrobo, and R. Titiunik. A Practical Introduction to Regression Discontinuity Designs: Foundations. Cambridge University Press, 2019.

H. Chen, Q. Li, J. S. Kaufman, J. Wang, R. Copes, Y. Su, and T. Benmarhnia. Effect of air quality alerts on human health: A regression discontinuity analysis in Toronto, Canada. The Lancet Planetary Health, 2(1):e19–e26, 2018.

D. Christelis, D. Georgarakos, and A. Sanz-de Galdeano.
The impact of health insurance on stockholding: A regression discontinuity approach. Journal of Health Economics, 69:102246, 2020.

G. W. Imbens. Nonparametric estimation of average treatment effects under exogeneity: A review. Review of Economics and Statistics, 86(1):4–29, 2004.

B. A. Jacob and L. Lefgren. Remedial education and student achievement: A regression-discontinuity analysis. Review of Economics and Statistics, 86(1):226–244, 2004.

J. M. Lindo, N. J. Sanders, and P. Oreopoulos. Ability, gender, and performance standards: Evidence from academic probation. American Economic Journal: Applied Economics, 2(2):95–117, 2010.

M. L. Lousdal. An introduction to instrumental variable assumptions, validation and estimation. Emerging Themes in Epidemiology, 15(1):1, 2018.

E. Meyersson. Islamic rule and the empowerment of the poor and pious. Econometrica, 82(1):229–269, 2014.

A. Mody, I. Sikazwe, N. L. Czaicki, M. Wa Mwanza, T. Savory, K. Sikombe, L. K. Beres, P. Somwe, M. Roy, J. M. Pry, et al. Estimating the real-world effects of expanding antiretroviral treatment eligibility: Evidence from a regression discontinuity analysis in Zambia. PLoS Medicine, 15(6):e1002574, 2018.

B. G. Moss and W. H. Yeaton. Shaping policies related to developmental education: An evaluation using the regression-discontinuity design. Educational Evaluation and Policy Analysis, 28(3):215–229, 2006.

D. Mukherjee, M. Banerjee, and Y. Ritov. Estimation of a score-explained non-randomized treatment effect in fixed and high dimensions. arXiv preprint arXiv:2102.11229, 2021.

W. K. Newey, J.
L. Powell, and F. Vella. Nonparametric estimation of triangular simultaneous equations models. Econometrica, 67(3):565–603, 1999.

J. Pinkse. Nonparametric two-step regression estimation when regressors and error are dependent. Canadian Journal of Statistics, 28(2):289–300, 2000.

J. M. Robins, A. Rotnitzky, and L. P. Zhao. Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427):846–866, 1994.

P. R. Rosenbaum and D. B. Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, 1983.

D. L. Thistlethwaite and D. T. Campbell. Regression-discontinuity analysis: An alternative to the ex post facto experiment. Journal of Educational Psychology, 51(6):309, 1960.

A. S. Venkataramani, J. Bor, and A. B. Jena. Regression discontinuity designs in healthcare research. BMJ, 352, 2016.

L. Wang, L. D. Brown, and T. T. Cai. A difference based approach to the semiparametric partial linear model. Electronic Journal of Statistics, 5:619–641, 2011.

A. Yatchew. An elementary estimator of the partial linear model. Economics Letters, 57(2):135–143, 1997.

SUPPLEMENTARY MATERIAL

A Proof of Proposition 1

In this section we present the proof of Proposition 1. First, it is easy to see that
\[
\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)=\sum_{i=1}^{\tilde n}\frac{1}{\sqrt{\tilde n}}\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\eta_i,
\]
where $\tilde Z=(Z_1;Z_2;\cdots;Z_{\tilde n})^\top\in\mathbb{R}^{\tilde n\times d_Z}$. Also, it is a standard result in regression analysis that $\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)\xrightarrow{d}N(0,\Sigma_\gamma)$, where $\Sigma_\gamma=\sigma_\eta^2\cdot\operatorname{plim}\big(\tilde Z^\top\tilde Z/\tilde n\big)^{-1}$, due to Assumption 2.

Next, we define $\hat\delta_{\tilde n}:=\hat\gamma_{\tilde n}-\gamma_0$ and $\mathcal{W}_{\tilde n}$ to be the event where $||\hat\delta_{\tilde n}||_2<1$. Note that
\[
P(\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)\le\square)=P(\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)\le\square\cap\mathcal{W}_{\tilde n})+P(\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)\le\square\cap\mathcal{W}_{\tilde n}^c),
\]
where $P(\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)\le\square\cap\mathcal{W}_{\tilde n}^c)\le P(\mathcal{W}_{\tilde n}^c)\to0$ as $\tilde n\to\infty$ since $||\hat\delta_{\tilde n}||_2\xrightarrow{P}0$. Therefore, WLOG, we can work on the event $\mathcal{W}_{\tilde n}$ in the subsequent parts of the proof (e.g., establishing the asymptotic normality of $\hat\beta_{\tilde n}$ and $\hat\theta_{\tilde n}$ in Theorem 1).
Now, let $I_2^C\subseteq I_2$ be the indices of observations in the second partition that belong to the control group. Recall that $\{\hat\eta_{(i)}\}$ is the order statistics of $\{\hat\eta_i\}_{i\in I_2^C}$, which induces an ordering on the $Y_i$'s, $X_i$'s, $\epsilon_i$'s, and $\eta_i$'s. We then have
\[
Y_{(i+1)}-Y_{(i)}=(X_{(i+1)}-X_{(i)})^\top\beta_0+\ell(\eta_{(i+1)})-\ell(\eta_{(i)})+\epsilon_{(i+1)}-\epsilon_{(i)},
\]
compactly written as $\Delta Y=(\Delta X)\beta_0+\Delta w$. It is easy to see that
\[
\sqrt{\tilde n}(\hat\beta_{\tilde n}-\beta_0)=\left(\frac{1}{\tilde n}(\Delta X)^\top\Delta X\right)^{-1}\left(\frac{1}{\sqrt{\tilde n}}(\Delta X)^\top\Delta w\right). \tag{A.1}
\]
We first focus on the first term of Equation (A.1), assuming $\hat\delta_{\tilde n}:=\hat\gamma_{\tilde n}-\gamma_0$ is fixed (i.e., we conduct our analysis conditional on $I_1$). Define
\[
C=\bigcup_{||\delta||\le1}\mathrm{supp}(\eta-Z^\top\delta).
\]
It is clear that under Assumptions 1 and 2, $C$ is compact and $\mathrm{supp}(\eta)\subseteq C$. Note that for observations in the control group, we can decompose $X$ into $g_{\hat\delta_{\tilde n}}(\hat\eta)+u_{\hat\delta_{\tilde n}}$, where $\hat\eta=\eta-Z^\top\hat\delta_{\tilde n}$ and $E(u_{\hat\delta_{\tilde n}}\mid\hat\eta,Q<\tau_0)=0$. Similarly, we also have $X=g_0(\eta)+u_0$, where $E(u_0\mid\eta,Q<\tau_0)=0$. Using this decomposition, we have
\[
\frac{1}{\tilde n}(\Delta X)^\top\Delta X=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}(X_{(i+1)}-X_{(i)})(X_{(i+1)}-X_{(i)})^\top=L+M+M^\top+P,
\]
where
\[
L=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(g_{\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big)\big(g_{\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big)^\top,
\]
\[
M=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(g_{\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)^\top,
\]
\[
P=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)^\top.
\]
For each $1\le j,k\le d_X$, we have
\[
|L_{j,k}|=\left|\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(g_{j,\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{j,\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big)\big(g_{k,\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{k,\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big)\right|
\le\frac{\nu_1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(\hat\eta_{(i+1)}-\hat\eta_{(i)}\big)^2=O_p(n^{-2})
\]
for some Lipschitz constant $\nu_1$, using the Cauchy-Schwarz inequality, Assumption 3, and the fact that the ordering is done on the $\hat\eta_i$'s. Similarly, we have
\[
|M_{j,k}|=\left|\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(g_{j,\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{j,\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big)\big(u_{k,\hat\delta_{\tilde n}(i+1)}-u_{k,\hat\delta_{\tilde n}(i)}\big)\right|
\le\frac{\nu_2}{\tilde n}\sqrt{\sum_{i=1}^{|I_2^C|-1}\big(\hat\eta_{(i+1)}-\hat\eta_{(i)}\big)^2}\sqrt{2\sum_{i=1}^{|I_2^C|}u_{k,\hat\delta_{\tilde n}i}^2}=O_p(n^{-1})
\]
for some Lipschitz constant $\nu_2$, using the Cauchy-Schwarz inequality and the elementary inequality $(a-b)^2\le2(a^2+b^2)$, Assumptions 1 and 3, and the fact that the ordering is done on the $\hat\eta_i$'s. Specifically, Assumption 1 implies that $E(u_{k,\hat\delta_{\tilde n}}^2\mid Q<\tau_0)=E(\mathrm{var}(X_k\mid\hat\eta,Q<\tau_0)\mid Q<\tau_0)$ is finite. Thus, we have $L=o_p(1)$ and $M=o_p(1)$.

We now analyze $P$. Observe that
\[
P=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)^\top=P_1-P_2-P_3-P_3^\top,
\]
where
\[
P_1=\frac{2}{\tilde n}\sum_{i=1}^{|I_2^C|}u_{\hat\delta_{\tilde n}i}u_{\hat\delta_{\tilde n}i}^\top,\qquad
P_2=\frac{1}{\tilde n}\left(u_{\hat\delta_{\tilde n}(1)}u_{\hat\delta_{\tilde n}(1)}^\top+u_{\hat\delta_{\tilde n}(|I_2^C|)}u_{\hat\delta_{\tilde n}(|I_2^C|)}^\top\right),\qquad
P_3=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}u_{\hat\delta_{\tilde n}(i+1)}u_{\hat\delta_{\tilde n}(i)}^\top.
\]
It is easy to see that $P_1-2P(Q<\tau_0)\Sigma_{u,\hat\delta_{\tilde n}}=o_p(1)$, where $\Sigma_{u,\hat\delta_{\tilde n}}=E(\mathrm{var}(X\mid\hat\eta,Q<\tau_0)\mid Q<\tau_0)$, and that $P_2=o_p(1)$. Moreover, we can also show that $P_3=o_p(1)$. To see this, let
\[
W=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}u_{j,\hat\delta_{\tilde n}(i+1)}u_{k,\hat\delta_{\tilde n}(i)}
\]
be the $(j,k)$-th entry of $P_3$. Then, $E(W\mid Q_1<\tau_0,\cdots,Q_{|I_2^C|}<\tau_0)=0$ and
\[
\mathrm{var}(W\mid Q_1<\tau_0,\cdots,Q_{|I_2^C|}<\tau_0)=\frac{1}{\tilde n^2}\sum_{i=1}^{|I_2^C|-1}E(u_{j,\hat\delta_{\tilde n}}^2\mid Q<\tau_0)E(u_{k,\hat\delta_{\tilde n}}^2\mid Q<\tau_0)=O(n^{-1})
\]
due to Assumption 1. This implies $P_3=o_p(1)$ since $W=o_p(1)$ by Chebyshev's inequality. So far, we have shown that conditional on $\hat\delta_{\tilde n}$, we have
\[
\frac{1}{\tilde n}(\Delta X)^\top\Delta X-2P(Q<\tau_0)\Sigma_{u,\hat\delta_{\tilde n}}=o_p(1),
\]
where $\Sigma_{u,\hat\delta_{\tilde n}}=E(\mathrm{var}(X\mid\hat\eta,Q<\tau_0)\mid Q<\tau_0)$ and $\hat\eta=\eta-Z^\top\hat\delta_{\tilde n}$. We first show the following lemma:

Lemma 1. For any sequence $\{\hat\delta_{\tilde n}\}_{\tilde n\ge1}$ converging to 0, with $||\hat\delta_{\tilde n}||_2\le1$ for every $\tilde n$, we have $\Sigma_{u,\hat\delta_{\tilde n}}\to\Sigma_u:=E(\mathrm{var}(X\mid\eta,Q<\tau_0)\mid Q<\tau_0)$ as $\tilde n\to\infty$.

Proof. Note that for any $\tilde n$, we have
\[
\Sigma_{u,\hat\delta_{\tilde n}}=\int_C\mathrm{var}\big(X\mid\eta-Z^\top\hat\delta_{\tilde n}=t,Q<\tau_0\big)f_{\eta-Z^\top\hat\delta_{\tilde n}\mid Q<\tau_0}(t)\mathbb{1}_{t\in\mathrm{supp}(\eta-Z^\top\hat\delta_{\tilde n})}\,dt.
\]
Moreover,
\[
\Sigma_u=\int_C\mathrm{var}(X\mid\eta=t,Q<\tau_0)f_{\eta\mid Q<\tau_0}(t)\mathbb{1}_{t\in\mathrm{supp}(\eta)}\,dt.
\]
We first prove that for every $t\in C$, $\mathrm{var}(X\mid\eta-Z^\top\hat\delta_{\tilde n}=t,Q<\tau_0)\to\mathrm{var}(X\mid\eta=t,Q<\tau_0)$. We consider two cases: (1) $t\in\mathrm{supp}(\eta)$; and (2) $t\in C\setminus\mathrm{supp}(\eta)$. For the first case, the statement clearly follows from Assumptions 1, 2 and 3.
For the second case, note that Assumptions 1 and 2 and the fact that $\hat\delta_{\tilde n}\to0$ imply that we can find some $n^*$ such that for every $\tilde n\ge n^*$, we have $t\notin\mathrm{supp}(\eta-Z^\top\hat\delta_{\tilde n})$. The conclusion is thus immediate since $\mathrm{var}(X\mid\eta-Z^\top\hat\delta_{\tilde n}=t,Q<\tau_0)=\mathrm{var}(X)$ for every $\tilde n\ge n^*$, and $\mathrm{var}(X\mid\eta=t,Q<\tau_0)=\mathrm{var}(X)$.

In a similar manner, for every $t\in C$, we can show that $f_{\eta-Z^\top\hat\delta_{\tilde n}\mid Q<\tau_0}(t)\to f_{\eta\mid Q<\tau_0}(t)$ under Assumptions 1, 2 and 6, and $\mathbb{1}_{t\in\mathrm{supp}(\eta-Z^\top\hat\delta_{\tilde n})}\to\mathbb{1}_{t\in\mathrm{supp}(\eta)}$ under Assumptions 1 and 2. The lemma thus follows upon applying Lebesgue's dominated convergence theorem (DCT) under Assumptions 1 and 2, using the fact that $C$ is compact. This completes the proof.

We now use Lemma 1 to prove the following lemma:

Lemma 2. As $\tilde n\to\infty$, we have
\[
\frac{1}{\tilde n}(\Delta X)^\top\Delta X\xrightarrow{P}2P(Q<\tau_0)\Sigma_u,
\]
where $\Sigma_u=E(\mathrm{var}(X\mid\eta,Q<\tau_0)\mid Q<\tau_0)$.

Proof. Let $\Lambda_{\tilde n}=\frac{1}{\tilde n}(\Delta X)^\top\Delta X$. For every coordinate $t$ and every $\epsilon>0$, we have
\[
P\big(\big|\Lambda_{\tilde n,t}-2P(Q<\tau_0)\Sigma_{u,\hat\delta_{\tilde n},t}\big|\ge\epsilon\bigm|\hat\delta_{\tilde n}\big)\to0.
\]
Applying Lebesgue's DCT, we have
\[
P\big(\big|\Lambda_{\tilde n,t}-2P(Q<\tau_0)\Sigma_{u,\hat\delta_{\tilde n},t}\big|\ge\epsilon\big)\to0.
\]
Fix any $\epsilon>0$. From Lemma 1, we know that for every coordinate $t$, there exists some $\xi>0$ such that $|\Sigma_{u,\delta,t}-\Sigma_{u,t}|<\frac{\epsilon}{4P(Q<\tau_0)}$ whenever $||\delta||_2<\xi$. Now, observe that
\[
P\big(|\Lambda_{\tilde n,t}-2P(Q<\tau_0)\Sigma_{u,t}|\ge\epsilon\big)
=P\big(|\Lambda_{\tilde n,t}-2P(Q<\tau_0)\Sigma_{u,t}|\ge\epsilon\cap||\hat\delta_{\tilde n}||_2<\xi\big)
+P\big(|\Lambda_{\tilde n,t}-2P(Q<\tau_0)\Sigma_{u,t}|\ge\epsilon\cap||\hat\delta_{\tilde n}||_2\ge\xi\big)
\]
\[
\le P\big(\big|\Lambda_{\tilde n,t}-2P(Q<\tau_0)\Sigma_{u,\hat\delta_{\tilde n},t}\big|\ge\tfrac{\epsilon}{2}\cap||\hat\delta_{\tilde n}||_2<\xi\big)+P(||\hat\delta_{\tilde n}||_2\ge\xi)
\le P\big(\big|\Lambda_{\tilde n,t}-2P(Q<\tau_0)\Sigma_{u,\hat\delta_{\tilde n},t}\big|\ge\tfrac{\epsilon}{2}\big)+P(||\hat\delta_{\tilde n}||_2\ge\xi).
\]
Note that the first term goes to 0 as established above, and so does the second term since $\hat\delta_{\tilde n}\xrightarrow{P}0$. This implies $\Lambda_{\tilde n,t}\xrightarrow{P}2P(Q<\tau_0)\Sigma_{u,t}$ for every coordinate $t$, which means $\Lambda_{\tilde n}\xrightarrow{P}2P(Q<\tau_0)\Sigma_u$. This completes the proof.

From Lemma 2, an application of the continuous mapping theorem yields
\[
\left(\frac{1}{\tilde n}(\Delta X)^\top\Delta X\right)^{-1}\xrightarrow{P}\frac{1}{2P(Q<\tau_0)}\Sigma_u^{-1}.
\]
We now work on the second term of Equation (A.1). Note that we can decompose it as
\[
\frac{1}{\sqrt{\tilde n}}(\Delta X)^\top\Delta w=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}(X_{(i+1)}-X_{(i)})\big(\ell(\eta_{(i+1)})-\ell(\eta_{(i)})+\epsilon_{(i+1)}-\epsilon_{(i)}\big)=E+F+G+H,
\]
where
\[
E=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\ell(\eta_{(i+1)})-\ell(\eta_{(i)})\big)\big(g_{\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big),\qquad
F=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\ell(\eta_{(i+1)})-\ell(\eta_{(i)})\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big),
\]
\[
G=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\epsilon_{(i+1)}-\epsilon_{(i)}\big)\big(g_{\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-g_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big),\qquad
H=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\epsilon_{(i+1)}-\epsilon_{(i)}\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big).
\]
First, observe that
\[
\sum_{i=1}^{|I_2^C|-1}\big(\ell(\eta_{(i+1)})-\ell(\eta_{(i)})\big)^2
\le\nu_3^2\sum_{i=1}^{|I_2^C|-1}\big(|\eta_{(i+1)}-\hat\eta_{(i+1)}|+|\eta_{(i)}-\hat\eta_{(i)}|+|\hat\eta_{(i+1)}-\hat\eta_{(i)}|\big)^2
\le3\nu_3^2\sum_{i=1}^{|I_2^C|-1}\big((\eta_{(i+1)}-\hat\eta_{(i+1)})^2+(\eta_{(i)}-\hat\eta_{(i)})^2+(\hat\eta_{(i+1)}-\hat\eta_{(i)})^2\big)
\]
\[
\le6\nu_3^2\sum_{i=1}^{|I_2^C|}\big((Z_i^C)^\top(\hat\gamma_{\tilde n}-\gamma_0)\big)^2+3\nu_3^2\sum_{i=1}^{|I_2^C|-1}(\hat\eta_{(i+1)}-\hat\eta_{(i)})^2
=O_p(n^{-1})O_p(n)+O_p(n^{-1})=O_p(1)
\]
for some Lipschitz constant $\nu_3$. Here, we used Assumptions 1, 2 and 5, the triangle inequality and the elementary inequality $(a+b+c)^2\le3(a^2+b^2+c^2)$, as well as the facts that $||\hat\delta_{\tilde n}||_2^2=O_p(n^{-1})$ and that the ordering is done on the $\hat\eta_i$'s.

Now, we can easily utilize the Cauchy-Schwarz inequality to show that $E=O_p(n^{-1})=o_p(1)$ using Assumption 3, the above result, and the fact that the ordering is done on the $\hat\eta_i$'s. Similarly, we can show $G=O_p(n^{-1/2})=o_p(1)$ using the elementary inequality $(a-b)^2\le2(a^2+b^2)$ under Assumptions 2 and 3.

Let us look at $F$. We can rewrite it as
\[
F=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\ell(\eta_{(i+1)})-\ell(\hat\eta_{(i+1)})\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)
-\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\ell(\eta_{(i)})-\ell(\hat\eta_{(i)})\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)
+\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\ell(\hat\eta_{(i+1)})-\ell(\hat\eta_{(i)})\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big).
\]
A straightforward application of the Cauchy-Schwarz inequality allows us to show that the third term in $F$ is $o_p(1)$ under Assumptions 1, 2 and 5 and the elementary inequality $(a-b)^2\le2(a^2+b^2)$. Moreover, observe that
\[
\ell(\eta_{(i)})-\ell(\hat\eta_{(i)})=(\eta_{(i)}-\hat\eta_{(i)})\ell'(\hat\eta_{(i)})+\frac{(\eta_{(i)}-\hat\eta_{(i)})^2}{2}\ell''(\tilde\eta_{(i)})
\]
for some $\tilde\eta_{(i)}$ between $\eta_{(i)}$ and $\hat\eta_{(i)}$, and that
\[
\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\frac{(\eta_{(i+1)}-\hat\eta_{(i+1)})^2}{2}\ell''(\tilde\eta_{(i+1)})\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)=O_p(n^{-1/2})=o_p(1)
\]
using the Cauchy-Schwarz inequality under Assumptions 1, 2 and 5. Similar to $X$, we can decompose $Z$ for observations in the control group into $q_{\hat\delta_{\tilde n}}(\hat\eta)+w_{\hat\delta_{\tilde n}}$, where $E(w_{\hat\delta_{\tilde n}}\mid\hat\eta,Q<\tau_0)=0$ and $\hat\eta=\eta-Z^\top\hat\delta_{\tilde n}$. Similarly, we also have $Z=q_0(\eta)+w_0$, where $E(w_0\mid\eta,Q<\tau_0)=0$. Omitting $o_p(1)$ terms, we can further rewrite $F$ as
\[
F=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}(\eta_{(i+1)}-\hat\eta_{(i+1)})\ell'(\hat\eta_{(i+1)})\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)
-\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}(\eta_{(i)}-\hat\eta_{(i)})\ell'(\hat\eta_{(i)})\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)
\]
\[
=\left(\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)\big(\ell'(\hat\eta_{(i+1)})(Z_{(i+1)})^\top-\ell'(\hat\eta_{(i)})(Z_{(i)})^\top\big)\right)\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)
\]
\[
=\left(\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)\big(\ell'(\hat\eta_{(i+1)})(w_{\hat\delta_{\tilde n}(i+1)})^\top-\ell'(\hat\eta_{(i)})(w_{\hat\delta_{\tilde n}(i)})^\top\big)\right)\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)
\]
\[
+\left(\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)\ell'(\hat\eta_{(i+1)})\big(q_{\hat\delta_{\tilde n}}(\hat\eta_{(i+1)})-q_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})\big)^\top\right)\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)
\]
\[
+\left(\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)\big(\ell'(\hat\eta_{(i+1)})-\ell'(\hat\eta_{(i)})\big)q_{\hat\delta_{\tilde n}}(\hat\eta_{(i)})^\top\right)\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0).
\]
The last two terms of the third equality can be shown to be $o_p(1)$ using the Cauchy-Schwarz inequality under Assumptions 1, 2 and 5. Again, omitting $o_p(1)$ terms, we have $F=\bar F\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)$, where
\[
\bar F=\frac{1}{\tilde n}\sum_{i=1}^{|I_2^C|-1}\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)\big(\ell'(\hat\eta_{(i+1)})(w_{\hat\delta_{\tilde n}(i+1)})^\top-\ell'(\hat\eta_{(i)})(w_{\hat\delta_{\tilde n}(i)})^\top\big).
\]
Conditional on $\hat\delta_{\tilde n}$ (i.e., $I_1$), we can show that
\[
\bar F-2P(Q<\tau_0)E\big(\ell'(\hat\eta)u_{\hat\delta_{\tilde n}}(w_{\hat\delta_{\tilde n}})^\top\mid Q<\tau_0\big)=o_p(1),
\]
where $\hat\eta=\eta-Z^\top\hat\delta_{\tilde n}$, under Assumptions 1 and 5, using the same method as for the term $P$. We now prove the following lemma:

Lemma 3. For any sequence $\{\hat\delta_{\tilde n}\}_{\tilde n\ge1}$ converging to 0, with $||\hat\delta_{\tilde n}||_2\le1$ for every $\tilde n$, we have
\[
E\big(\ell'(\hat\eta)u_{\hat\delta_{\tilde n}}(w_{\hat\delta_{\tilde n}})^\top\mid Q<\tau_0\big)\to E\big(\ell'(\eta)u_0(w_0)^\top\mid Q<\tau_0\big),
\]
where $u_{\hat\delta_{\tilde n}}=X-E(X\mid\hat\eta,Q<\tau_0)$, $w_{\hat\delta_{\tilde n}}=Z-E(Z\mid\hat\eta,Q<\tau_0)$, $u_0=X-E(X\mid\eta,Q<\tau_0)$, and $w_0=Z-E(Z\mid\eta,Q<\tau_0)$.

Proof. It is easy to see that the statement we want to show is equivalent to
\[
E\big(\ell'(\hat\eta)\,\mathrm{cov}(X,Z\mid\hat\eta,Q<\tau_0)\mid Q<\tau_0\big)\to E\big(\ell'(\eta)\,\mathrm{cov}(X,Z\mid\eta,Q<\tau_0)\mid Q<\tau_0\big).
\]
Note that for any $\tilde n$, we have
\[
E\big(\ell'(\hat\eta)\,\mathrm{cov}(X,Z\mid\hat\eta,Q<\tau_0)\mid Q<\tau_0\big)=\int_C\ell'(t)\,\mathrm{cov}\big(X,Z\mid\eta-Z^\top\hat\delta_{\tilde n}=t,Q<\tau_0\big)f_{\eta-Z^\top\hat\delta_{\tilde n}\mid Q<\tau_0}(t)\mathbb{1}_{t\in\mathrm{supp}(\eta-Z^\top\hat\delta_{\tilde n})}\,dt.
\]
Moreover,
\[
E\big(\ell'(\eta)\,\mathrm{cov}(X,Z\mid\eta,Q<\tau_0)\mid Q<\tau_0\big)=\int_C\ell'(t)\,\mathrm{cov}(X,Z\mid\eta=t,Q<\tau_0)f_{\eta\mid Q<\tau_0}(t)\mathbb{1}_{t\in\mathrm{supp}(\eta)}\,dt.
\]
Using the same method as in Lemma 1, the conclusion follows via Lebesgue's DCT under Assumptions 1, 2, 3, 5 and 6, using the fact that $C$ is compact. This completes the proof.

From here, a simple adaptation of the proof of Lemma 2 yields
\[
\bar F\xrightarrow{P}2P(Q<\tau_0)E\big(\ell'(\eta)u_0(w_0)^\top\mid Q<\tau_0\big).
\]
Therefore, omitting $o_p(1)$ terms, we can write $F$ as
\[
F=2P(Q<\tau_0)E\big(\ell'(\eta)u_0(w_0)^\top\mid Q<\tau_0\big)\sqrt{\tilde n}(\hat\gamma_{\tilde n}-\gamma_0)=\sum_{i=1}^{\tilde n}\frac{1}{\sqrt{\tilde n}}A_1\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\eta_i,
\]
where $A_1=2P(Q<\tau_0)E(\ell'(\eta)u_0w_0^\top\mid Q<\tau_0)$.

Lastly, we consider $H$. Note that $H$ can be written as
\[
H=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|-1}\big(\epsilon_{(i+1)}-\epsilon_{(i)}\big)\big(u_{\hat\delta_{\tilde n}(i+1)}-u_{\hat\delta_{\tilde n}(i)}\big)
=\frac{1}{\sqrt{\tilde n}}\Big(\epsilon_{(1)}\big(u_{\hat\delta_{\tilde n}(1)}-u_{\hat\delta_{\tilde n}(2)}\big)+\epsilon_{(2)}\big(2u_{\hat\delta_{\tilde n}(2)}-u_{\hat\delta_{\tilde n}(1)}-u_{\hat\delta_{\tilde n}(3)}\big)+\cdots
+\epsilon_{(|I_2^C|-1)}\big(2u_{\hat\delta_{\tilde n}(|I_2^C|-1)}-u_{\hat\delta_{\tilde n}(|I_2^C|-2)}-u_{\hat\delta_{\tilde n}(|I_2^C|)}\big)+\epsilon_{(|I_2^C|)}\big(u_{\hat\delta_{\tilde n}(|I_2^C|)}-u_{\hat\delta_{\tilde n}(|I_2^C|-1)}\big)\Big)
:=\frac{1}{\sqrt{\tilde n}}\sum_{i=1}^{|I_2^C|}\epsilon_{(i)}a_{\hat\delta_{\tilde n}(i)}:=\frac{1}{\sqrt{\tilde n}}\sum_{i=\tilde n+1}^{2\tilde n}\epsilon_i a_{\hat\delta_{\tilde n},i},
\]
where $a_{\hat\delta_{\tilde n},i}=0$ for each observation $i$ in the treatment group.

Now, let $c\in\mathbb{R}^{d_X}$ be an arbitrary vector such that $||c||_2=1$. Up to $o_p(1)$ terms, we have
\[
c^\top\frac{1}{\sqrt{\tilde n}}(\Delta X)^\top\Delta w=\sum_{i=1}^{\tilde n}\frac{1}{\sqrt{\tilde n}}c^\top A_1\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\eta_i+\sum_{i=\tilde n+1}^{2\tilde n}\frac{1}{\sqrt{\tilde n}}\epsilon_i c^\top a_{\hat\delta_{\tilde n},i}=\xi_{\tilde n,1}+\xi_{\tilde n,2}+\cdots+\xi_{\tilde n,2\tilde n}.
\]
Now, we consider the following $\sigma$-fields: $\mathcal{F}_{\tilde n,1}=\sigma(Z_{1:\tilde n},\eta_1),\cdots,\mathcal{F}_{\tilde n,\tilde n}=\sigma(Z_{1:\tilde n},\eta_{1:\tilde n})$, $\mathcal{F}_{\tilde n,\tilde n+1}=\sigma(Z_{1:2\tilde n},\eta_{1:2\tilde n},X_{1:2\tilde n},\epsilon_{\tilde n+1}),\cdots$, and $\mathcal{F}_{\tilde n,2\tilde n}=\sigma(Z_{1:2\tilde n},\eta_{1:2\tilde n},X_{1:2\tilde n},\epsilon_{\tilde n+1:2\tilde n})$. For each $\tilde n$, it is easy to see that
\[
\left(\sum_{j=1}^{i}\xi_{\tilde n,j},\ \mathcal{F}_{\tilde n,i},\ 1\le i\le2\tilde n\right)
\]
is a martingale. We now use Billingsley's (1995) martingale central limit theorem.
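As an informal sanity check (not part of the proof), one can simulate a martingale-difference array of the kind appearing here and observe that its normalized partial sums behave like a Gaussian with the predicted variance. The toy model below, with non-Gaussian errors, is an assumption chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def martingale_sum(n):
    # xi_{n,i} = Z_i * eta_i / sqrt(n) is a martingale-difference array,
    # since E(eta_i | Z_1:n, eta_1:i-1) = 0; its partial sums form a martingale.
    Z = rng.normal(size=n)
    eta = rng.uniform(-1, 1, size=n)   # non-Gaussian errors on purpose
    return (Z * eta).sum() / np.sqrt(n)

draws = np.array([martingale_sum(500) for _ in range(2000)])
# The martingale CLT predicts approximately N(0, Var(Z) * Var(eta)) = N(0, 1/3).
print(draws.mean(), draws.var())
```

The empirical mean is near 0 and the empirical variance near 1/3, consistent with the limiting normal distribution asserted by the theorem in this simple setting.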
Note that using Assumption 2, we have
\[
\sum_{i=1}^{\tilde n}E(\xi_{\tilde n,i}^2\mid\mathcal{F}_{\tilde n,i-1})=\sum_{i=1}^{\tilde n}E\left(\left(\frac{1}{\sqrt{\tilde n}}c^\top A_1\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\eta_i\right)^2\Bigm|Z_{1:\tilde n},\eta_{1:i-1}\right)
=\sigma_\eta^2\,c^\top A_1\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}A_1^\top c\ \xrightarrow{P}\ c^\top A_1\Sigma_\gamma A_1^\top c
\]
and
\[
\sum_{i=\tilde n+1}^{2\tilde n}E(\xi_{\tilde n,i}^2\mid\mathcal{F}_{\tilde n,i-1})=\sum_{i=\tilde n+1}^{2\tilde n}E\left(\left(\frac{1}{\sqrt{\tilde n}}\epsilon_i c^\top a_{\hat\delta_{\tilde n},i}\right)^2\Bigm|Z_{1:2\tilde n},\eta_{1:2\tilde n},X_{1:2\tilde n},\epsilon_{\tilde n+1:i-1}\right)
=\sigma_\epsilon^2\sum_{i=\tilde n+1}^{2\tilde n}\frac{1}{\tilde n}\big(c^\top a_{\hat\delta_{\tilde n},i}\big)^2.
\]
Conditional on $\hat\delta_{\tilde n}$ (i.e., $I_1$), it is easy to show that
\[
\frac{1}{\tilde n}\sum_{i=\tilde n+1}^{2\tilde n}\big(c^\top a_{\hat\delta_{\tilde n},i}\big)^2-6P(Q<\tau_0)c^\top\Sigma_{u,\hat\delta_{\tilde n}}c=o_p(1)
\]
under Assumption 1, using the same method as for the term $P$. Following the proofs of Lemmas 1 and 2, we have
\[
\frac{1}{\tilde n}\sum_{i=\tilde n+1}^{2\tilde n}\big(c^\top a_{\hat\delta_{\tilde n},i}\big)^2\xrightarrow{P}6P(Q<\tau_0)c^\top\Sigma_uc,
\]
whence
\[
\sum_{i=\tilde n+1}^{2\tilde n}E(\xi_{\tilde n,i}^2\mid\mathcal{F}_{\tilde n,i-1})\xrightarrow{P}6P(Q<\tau_0)\sigma_\epsilon^2c^\top\Sigma_uc.
\]
Therefore, the martingale central limit theorem and the Cramér-Wold device give us
\[
\frac{1}{\sqrt{\tilde n}}(\Delta X)^\top\Delta w\xrightarrow{d}N\big(0,\ A_1\Sigma_\gamma A_1^\top+6P(Q<\tau_0)\sigma_\epsilon^2\Sigma_u\big).
\]
The asymptotic normality of $\sqrt{\tilde n}(\hat\beta_{\tilde n}-\beta_0)$ is now just a consequence of Slutsky's theorem. Concretely, we have
\[
\sqrt{\tilde n}(\hat\beta_{\tilde n}-\beta_0)\xrightarrow{d}N\left(0,\ \frac{1}{4P(Q<\tau_0)^2}\Sigma_u^{-1}\zeta\Sigma_u^{-1}\right), \tag{A.2}
\]
where

• $\zeta=4(P(Q<\tau_0))^2E(\ell'(\eta)u_0w_0^\top\mid Q<\tau_0)\Sigma_\gamma E(\ell'(\eta)w_0u_0^\top\mid Q<\tau_0)+6P(Q<\tau_0)\sigma_\epsilon^2\Sigma_u$,
• $\Sigma_u=E(\mathrm{var}(X\mid\eta,Q<\tau_0)\mid Q<\tau_0)$,
• $u_0=X-E(X\mid\eta,Q<\tau_0)$,
• $w_0=Z-E(Z\mid\eta,Q<\tau_0)$.

To finish the proof, we need to establish the Lindeberg condition for the martingale central limit theorem. A pair of sufficient conditions is $\sum_{i=1}^{\tilde n}E(|\xi_{\tilde n,i}|^3)\to0$ and $\sum_{i=\tilde n+1}^{2\tilde n}E(|\xi_{\tilde n,i}|^3)\to0$ as $\tilde n\to\infty$, whence the Lyapunov (and consequently Lindeberg) condition is satisfied. Observe that
\[
\sum_{i=1}^{\tilde n}E(|\xi_{\tilde n,i}|^3)=\sum_{i=1}^{\tilde n}E\left(E\left(\left|\frac{1}{\sqrt{\tilde n}}c^\top A_1\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\eta_i\right|^3\Bigm|Z_{1:\tilde n}\right)\right)
=\frac{1}{\tilde n^{3/2}}\sum_{i=1}^{\tilde n}E\left(\left|c^\top A_1\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_i\right|^3E(|\eta_i|^3\mid Z_i)\right)
\]
\[
\le\frac{\nu_4}{\tilde n^{1/2}}E\left(\left\|\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}Z_1\right\|^3\right)
\le\frac{\nu_5}{\tilde n^{1/2}}E\left(\left\|\left(\frac{\tilde Z^\top\tilde Z}{\tilde n}\right)^{-1}\right\|_{op}^3\|Z_1\|^3\right)
=\frac{\nu_5}{\tilde n^{1/2}}E\left(\frac{\|Z_1\|^3}{\lambda_{\min}\big(\tilde Z^\top\tilde Z/\tilde n\big)^3}\right)
\le\frac{\nu_6}{\tilde n^{1/2}}\sqrt{E\left(\frac{1}{\lambda_{\min}\big(\tilde Z^\top\tilde Z/\tilde n\big)^6}\right)}\sqrt{E\big(\|Z_1\|^6\big)}\to0
\]
under Assumptions 1, 2, 4 and 5. Here, $\nu_4,\nu_5,\nu_6$ are positive constants. Moreover,
\[
\sum_{i=\tilde n+1}^{2\tilde n}E(|\xi_{\tilde n,i}|^3)=\sum_{i=\tilde n+1}^{2\tilde n}E\left(E\left(\left|\frac{1}{\sqrt{\tilde n}}\epsilon_i c^\top a_{\hat\delta_{\tilde n},i}\right|^3\Bigm|Z_{1:2\tilde n},\eta_{1:2\tilde n},X_{1:2\tilde n}\right)\right)
=\sum_{i=\tilde n+1}^{2\tilde n}\frac{1}{\tilde n^{3/2}}E\left(\big|c^\top a_{\hat\delta_{\tilde n},i}\big|^3E(|\epsilon_i|^3\mid X_i,\eta_i)\right)
\le\frac{\nu_7}{\tilde n^{1/2}}E\big(\|u_{\hat\delta_{\tilde n}}\|^3\bigm|Q<\tau_0\big)\to0
\]
using Assumptions 1 and 2, as well as the inequalities $|a-b|^3\le(|a|+|b|)^3\le4(|a|^3+|b|^3)$. Here, $\nu_7$ is a positive constant. The proof is complete since we have verified the Lyapunov condition.

B Proof of Theorem 1

Let $t_i=\mathbb{1}_{Q_i\ge\tau_0}$ denote the treatment status of observation $i$, $\tilde n_1=\sum_{i=2\tilde n+1}^{3\tilde n}t_i$ denote the number of observations in $I_3$ belonging to the treatment group, and $\tilde n_0=\tilde n-\tilde n_1$ denote the number of observations in $I_3$ belonging to the control group. Moreover, for each $i\in I_3$, let $c(i)\in I_3$ be the index of the closest $\hat\eta_j$ to $\hat\eta_i$ such that $t_j=1-t_i$. Intuitively speaking, $c(i)$ represents the index of the closest observation (w.r.t. $\hat\eta$) to $i$ that belongs to the opposite side of the treatment group. Recall that Algorithm 1 involves matching each observation in the treatment group of $I_3$ to an observation in the control group of $I_3$ based on their $\hat\eta$ values. Following the notation of Abadie and Imbens [2016], we denote by $K_{\hat\delta_{\tilde n},i}$ the number of times observation $i$ is used as a match. Here, $\hat\delta_{\tilde n}=\hat\gamma_{\tilde n}-\gamma_0$ is obtained from $I_1$. It is easy to see that $\sqrt{\tilde n}(\hat\theta_{\tilde n}-\theta_0)=(1)+(2)+(3)+(4)$, where
\[
(1)=\frac{\sqrt{\tilde n}}{\tilde n_1}\sum_{i=2\tilde n+1}^{3\tilde n}t_i\big(\alpha_0(X_i,\eta_i)-E(\alpha_0(X,\eta)\mid Q\ge\tau_0)\big),\qquad
(2)=\frac{\sqrt{\tilde n}}{\tilde n_1}(\beta_0-\hat\beta_{\tilde n})^\top\sum_{i=2\tilde n+1}^{3\tilde n}\big(t_i-(1-t_i)K_{\hat\delta_{\tilde n},i}\big)X_i,
\]
\[
(3)=\frac{\sqrt{\tilde n}}{\tilde n_1}\sum_{i=2\tilde n+1}^{3\tilde n}\big(t_i-(1-t_i)K_{\hat\delta_{\tilde n},i}\big)\epsilon_i,\qquad
(4)=\frac{\sqrt{\tilde n}}{\tilde n_1}\sum_{i=2\tilde n+1}^{3\tilde n}t_i\big(\ell(\eta_i)-\ell(\eta_{c(i)})\big).
\]
We begin by looking at (2). Note that we have (2) =√ ˜n ˜n1(β0−ˆβ˜n)⊤3˜nX i=2˜n+1 ti−(1−ti)Kˆδ˜n,i Xi =√ ˜n(β0−ˆβ˜n)⊤˜n ˜n1 1 ˜n3˜nX i=2˜n+1tiXi−1 ˜n3˜nX i=2˜n+1(1−ti)Kˆδ˜n,iXi! . The first term in the product converges in distribution to N(0,Σβ)established in Proposition 1, while the second term converges in probability to 1/P(Q≥τ0). Moreover, the first part of the third term converges in probability to E(tX) =P(Q≥τ0)E(X|Q≥τ0). The remainder term can be written as 1 ˜n3˜nX i=2˜n+1(1−ti)Kˆδ˜n,iXi=˜n0 ˜n 1 ˜n03˜nX i:ti=0 2˜n+1Kˆδ˜n,iXi . As in the proof of Proposition 1, we first fix ˆδ˜n. Under Assumptions 1 and 6, a slight modification to the proofs of Lemmas S.6, S.7 and S.10 of Abadie and Imbens [2016] yields the following result, whose proof is omitted. 38 Lemma 4. Reorder the data in I3such that observations in the control group are first. Let (˜n1, P0,2˜n+1, ···, P0,2˜n+˜n0)be the parameter of the distribution of (Kˆδ˜n,2˜n+1,···, Kˆδ˜n,2˜n+˜n0)given t2˜n+1:3˜nand ˆη2˜n+1:2˜n+˜n0. For i∈ {2˜n+ 1,···,2˜n+ ˜n0}, letˆη(i)’s be the order statistics for the ˆηi’s. Also, let fi,ˆδ˜n(ˆη)(Fi,ˆδ˜n(ˆη)) be the density (distribution function) of ˆη:=η−Z⊤ˆδ˜nconditional on t=i, for i∈ {0,1}. We have 1 ˜n02˜n+˜n0X i=2˜n+1Xi(Kˆδ˜n,i−˜n1P0,i) =op(1), 2˜n+˜n0X
https://arxiv.org/abs/2504.17126v1
i=2˜n+1Xi P0,i−f1,ˆδ˜n(ˆηi) f0,ˆδ˜n(ˆηi)F0,ˆδ˜n(ˆη(i+1))−F0,ˆδ˜n(ˆη(i−1)) 2! =op(1), 2˜n+˜n0X i=2˜n+1Xif1,ˆδ˜n(ˆηi) f0,ˆδ˜n(ˆηi)F0,ˆδ˜n(ˆη(i+1))−F0,ˆδ˜n(ˆη(i−1)) 2−E f1,ˆδ˜n(ˆη) f0,ˆδ˜n(ˆη)X|Q < τ 0! =op(1). From Lemma 4, conditional on ˆδ˜n, we have 1 ˜n3˜nX i=2˜n+1(1−ti)Kˆδ˜n,iXi−P(Q≥τ0)E f1,ˆδ˜n(ˆη) f0,ˆδ˜n(ˆη)X|Q < τ 0! =op(1), where ˆη=η−Z⊤ˆδ˜n. We now prove the following lemma: Lemma 5. For any sequence {ˆδ˜n}˜n≥1, where ||ˆδ˜n||2≤1for every ˜n, that converges to 0, we have E f1,ˆδ˜n(ˆη) f0,ˆδ˜n(ˆη)X|Q < τ 0! →Ef1,0(η) f0,0(η)X|Q < τ 0 . Proof. Note that for any ˜n, we have E f1,ˆδ˜n(ˆη) f0,ˆδ˜n(ˆη)X|Q < τ 0! =E f1,ˆδ˜n(ˆη) f0,ˆδ˜n(ˆη)E(X|ˆη, Q < τ 0)|Q < τ 0! =Z Cf1,ˆδ˜n(t) f0,ˆδ˜n(t)E(X|η−Z⊤ˆδ˜n=t)fη−Z⊤ˆδ˜n|Q<τ 0(t)1t∈supp(η−Z⊤ˆδ˜n)dt and Ef1,0(η) f0,0(η)X|Q < τ 0 =Ef1,0(η) f0,0(η)E(X|η, Q < τ 0)|Q < τ 0 =Z Cf1,0(t) f0,0(t)E(X|η=t)fη|Q<τ 0(t)1t∈supp(η)dt. 39 Using the same method as in Lemma 1, the conclusion follows via Lebesgue’s DCT under Assump- tions 1, 2, 3 and 6 using the fact that Cis compact. Here, we define 0/0 = 0 . From here, an argument similar to Lemma 2 yields 1 ˜n3˜nX i=2˜n+1(1−ti)Kˆδ˜n,iXiP−→P(Q≥τ0)Ef1(η) f0(η)X|Q < τ 0 , where f0(η) := f0,0(η)andf1(η) := f1,0(η). Thus, up to op(1)terms, we can write (2) =√ ˜n(ˆβ˜n−β0)⊤ A2,where A2=Ef1(η) f0(η)X|Q < τ 0 −E(X|Q≥τ0). Following the derivations in Proposition 1, we can again rewrite (2) as (2) =˜nX i=11√ ˜nA⊤ 3A1 ˜Z⊤˜Z ˜n!−1 Ziηi+2˜nX i=˜n+11√ ˜nA⊤ 3ϵiaˆδ˜n,i up to op(1)terms, where A3=1 2P(Q < τ 0)Σ−1 uA2. We now consider (4), which can be decomposed into (4) =√ ˜n ˜n13˜nX i=2˜n+1ti ℓ(ηi)−ℓ(ηc(i)) = (5) + (6) −(7), where (5) =√ ˜n ˜n13˜nX i=2˜n+1ti ℓ(ˆηi)−ℓ(ˆηc(i)) , (6) =√ ˜n ˜n13˜nX i=2˜n+1ti(ℓ(ηi)−ℓ(ˆηi)), (7) =√ ˜n ˜n13˜nX i=2˜n+1ti ℓ(ηc(i))−ℓ(ˆηc(i)) . As usual, we condition on ˆδ˜n(i.e.,I1). Following the proof of the first part of Proposition 1 in Abadie and Imbens [2016], we can show that (5) is op(1)under Assumption 6. 
Now, observe that (6) and (7) can be respectively written as (6) =√ ˜n ˜n13˜nX i=2˜n+1ti(ηi−ˆηi)ℓ′(ˆηi) +√ ˜n ˜n13˜nX i=2˜n+1ti(ηi−ˆηi)2ℓ′′(˜ηi). 40 and (7) =√ ˜n ˜n13˜nX i=2˜n+1ti ηc(i)−ˆηc(i) ℓ′(ˆηc(i)) +√ ˜n ˜n13˜nX i=2˜n+1ti ηc(i)−ˆηc(i)2ℓ′′(˜ηc(i)), where ˜ηilies between ηiandˆηiand˜ηc(i)lies between ηc(i)andˆηc(i). It is easy to see that the second summands of (6) and (7) are op(1)under Assumptions 1 and 5. Therefore, omitting op(1)terms, we can rewrite (4) as (4) =√ ˜n ˜n13˜nX i=2˜n+1(ti−(1−ti)Kˆδ˜n,i)(ηi−ˆηi)ℓ′(ˆηi) = (√ ˜n(ˆγ˜n−γ0))⊤˜n ˜n1 1 ˜n3˜nX i=2˜n+1(ti−(1−ti)Kˆδ˜n,i)Ziℓ′(ˆηi)! . Under Assumptions 1, 5 and 6, conditional on ˆδ˜n, we have ˜n ˜n1 1 ˜n3˜nX i=2˜n+1(ti−(1−ti)Kˆδ˜n,i)Ziℓ′(ˆηi)! −E(Zℓ′(ˆη)|Q≥τ0)−E f1,ˆδ˜n(ˆη) f0,ˆδ˜n(ˆη)Zℓ′(ˆη)|Q < τ 0! =op(1), where ˆη=η−Z⊤ˆδ˜n, using a similar result to Lemma 4. An argument similar to Lemmas 1 and 2 yields ˜n ˜n1 1 ˜n3˜nX i=2˜n+1(ti−(1−ti)Kˆδ˜n,i)Ziℓ′(ˆηi)! P−→ E(Zℓ′(η)|Q≥τ0)−Ef1(η) f0(η)Zℓ′(η)|Q < τ 0 under Assumptions 1, 2, 3, 5 and 6. Up to op(1)terms, we can thus rewrite (4)as (4) =˜nX i=11√ ˜nA⊤ 4 ˜Z⊤˜Z ˜n!−1 Ziηi, where A4=E(Zℓ′(η)|Q≥τ0)−Ef1(η) f0(η)Zℓ′(η)|Q < τ 0 . 41 Now, we are ready to establish the asymptotic normality of ˆθ˜n. Ignoring op(1)terms, we have √ ˜n(ˆθ˜n−θ0) =˜nX i=11√ ˜nA⊤ 5 ˜Z⊤˜Z ˜n!−1 Ziηi+2˜nX i=˜n+11√ ˜nA⊤ 3ϵiaˆδ˜n,i +3˜nX i=2˜n+1√ ˜n ˜n1ti(α0(Xi, ηi)−E(α0(X, η)|Q≥τ0)) +3˜nX i=2˜n+1√ ˜n ˜n1 ti−(1−ti)Kˆδ˜n,i ϵi =ξ˜n,1+···+ξ˜n,˜n+ξ˜n,˜n+1+···+ξ˜n,2˜n+ξ˜n,2˜n+1+···+ξ˜n,3˜n+ξ˜n,3˜n+1+···+ξ˜n,4˜n, where A5=A⊤ 1A3+A4. Consider the following σ-fields: F˜n,1=σ(Z1:˜n, η1),···,F˜n,˜n=σ(Z1:˜n, η1:˜n), F˜n,˜n+1=σ(Z1:2˜n, η1:2˜n, X1:2˜n, ϵ˜n+1),···,F˜n,2˜n=σ(Z1:2˜n,
η1:2˜n, X1:2˜n, ϵ˜n+1:2˜n), F˜n,2˜n+1=σ(Z1:2˜n, η1:2˜n+1, X1:2˜n+1, ϵ˜n+1:2˜n),···,F˜n,3˜n=σ(Z1:2˜n, η1:3˜n, X1:3˜n, ϵ˜n+1:2˜n), F˜n,3˜n+1=σ(Z1:3˜n, η1:3˜n, X1:3˜n, ϵ˜n+1:2˜n+1),···,F˜n,4˜n=σ(Z1:3˜n, η1:3˜n, X1:3˜n, ϵ˜n+1:3˜n). For each ˜n, it is easy to see that(iX j=1ξ˜n,j,F˜n,i,1≤i≤4˜n) is a martingale. We now use Billingsley’s (1995) martingale central limit theorem. Note that using Assumption 2, we have ˜nX i=1E(ξ2 ˜n,i| F˜n,i−1) =˜nX i=1E 1√ ˜nA⊤ 5 ˜Z⊤˜Z ˜n!−1 Ziηi 2 |Z1:˜n, η1:i−1 =σ2 ηA⊤ 5 ˜Z⊤˜Z ˜n!−1 A5 P−→A⊤ 5ΣγA5 and 2˜nX i=˜n+1E(ξ2 ˜n,i| F˜n,i−1) =2˜nX i=˜n+1E 1√ ˜nA⊤ 3ϵiaˆδ˜n,i2 |Z1:2˜n, η1:2˜n, X1:2˜n, ϵ˜n+1:i−1! =σ2 ϵ2˜nX i=˜n+11 ˜n(A⊤ 3aˆδ˜n,i)2. Conditional on ˆδ˜n(i.e.,I1), it is easy to show that 1 ˜n2˜nX i=˜n+1(A⊤ 3aˆδ˜n,i)2−6P(Q < τ 0)A⊤ 3Σu,ˆδ˜nA3=op(1) 42 under Assumptions 1 using the same method as for the term Pin the proof of Proposition 1. Following the proof of Lemma 2, we have 1 ˜n2˜nX i=˜n+1(A⊤ 3aˆδ˜n,i)2P−→6P(Q < τ 0)A⊤ 3ΣuA3, whence 2˜nX i=˜n+1E(ξ2 ˜n,i| F˜n,i−1)P−→6P(Q < τ 0)σ2 ϵA⊤ 3ΣuA3. Moreover, we have 3˜nX i=2˜n+1E(ξ2 ˜n,i| F˜n,i−1) =3˜nX i=2˜n+1E √ ˜n ˜n1ti(α0(Xi, ηi)−E(α0(X, η)|Q≥τ0))!2 |Z1:2˜n, η1:i−1, X1:i−1, ϵ˜n+1:2˜n =˜n ˜n12 var(α0(X, η)|Q≥τ0)P(Q≥τ0)P−→var(α(X, η)|Q≥τ0) P(Q≥τ0). and 4˜nX i=3˜n+1E(ξ2 ˜n,i| F˜n,i−1) =3˜nX i=2˜n+1E √ ˜n ˜n1 ti−(1−ti)Kˆδ˜n,i ϵi!2 |Z1:3˜n, η1:3˜n, X1:3˜n, ϵ˜n+1:i−1 =˜n ˜n12 σ2 ϵ 1 ˜n3˜nX i=2˜n+1 ti−(1−ti)Kˆδ˜n,i2! P−→σ2 ϵ 2 P(Q≥τ0)+3 2P(Q < τ 0)E f1(η) f0(η)2 |Q < τ 0!! . In order to derive the last line, note that 1 ˜n3˜nX i=2˜n+1 ti−(1−ti)Kˆδ˜n,i2 =1 ˜n˜nX i=2˜n+1(t2 i+ (1−ti)2K2 ˆδ˜n,i) =1 ˜n3˜nX i=2˜n+1(ti+ (1−ti)K2 ˆδ˜n,i) =1 ˜n3˜nX i=2˜n+1ti+1 ˜n3˜nX i=2˜n+1(1−ti)K2 ˆδ˜n,i. 43 The first term clearly converges in probability to P(Q≥τ). 
Also, according to Lemma S.11 of Abadie and Imbens [2016], conditional on ˆδ˜n(i.e.,I1), 1 ˜n3˜nX i:ti=0 2˜n+1K2 ˆδ˜n,i=˜n0 ˜n 1 ˜n03˜nX i:ti=0 2˜n+1K2 ˆδ˜n,i =P(Q≥τ0) +3 2(P(Q≥τ0))2 P(Q < τ 0)E f1,ˆδ˜n(ˆη) f0,ˆδ˜n(ˆη)!2 |Q < τ 0 +op(1), where ˆη=η−Z⊤ˆδ˜n, whence the conclusion immediately follows under Assumption 6 by an argument similar to Lemmas 1 and 2 under Assumptions 1, 2, 3 and 6. Therefore, an application of the martingale central limit theorem [Billingsley, 1995] gives us √ ˜n(ˆθ˜n−θ0)d−→ N 0, σ2 θ , where σ2 θ=A⊤ 5ΣγA5+ 6P(Q < τ 0)σ2 ϵA⊤ 3ΣuA3+B+C , (B.1) with A3=1 2P(Q < τ 0)Σ−1 u Ef1(η) f0(η)X|Q < τ 0 −E(X|Q≥τ0) , A5= 2P(Q < τ 0)E(ℓ′(η)w0u⊤ 0|Q < τ 0)A3+E(Zℓ′(η)|Q≥τ0)−Ef1(η) f0(η)Zℓ′(η)|Q < τ 0 , B=var(α0(X, η)|Q≥τ0) P(Q≥τ0), C=σ2 ϵ 2 P(Q≥τ0)+3 2P(Q < τ 0)E f1(η) f0(η)2 |Q < τ 0!! . To complete the proof, we need to show that the Lindeberg’s condition for the martingale central limit theorem is satisfied. Following the proof of Proposition 1, we haveP˜n i=1E(|ξ˜n,i|3)→0andP2˜n i=˜n+1E(|ξ˜n,i|3)→0as˜n→ ∞ under Assumptions 1, 2, 4 and 5, whence the Lyapunov’s (and consequently Lindeberg’s) condition is satisfied. Moreover, we haveP3˜n i=2˜n+1E(|ξ˜n,i|3)→0provided E |α0(X, η)−E(α0(X, η)|Q≥τ0)|3 is finite, which follows from Assumption 5. Lastly, we haveP4˜n i=3˜n+1E(|ξ˜n,i|3)→0due to Assumption 2 and Lemma S.8 of Abadie and Imbens [2016] on the uniform boundedness of the moments of Kˆδ˜n,i. This finishes the proof. 44
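The match counts $K_{\hat\delta_{\tilde n},i}$ that drive this proof can be made concrete with a small simulation. The sketch below assumes single-nearest-neighbor matching on the estimated $\hat\eta$ values, in the spirit of Algorithm 1 as described above; for each control unit it counts how many times that unit is used as a match, and checks the bookkeeping identity $\sum_{i: t_i=0} K_i = \tilde n_1$ used implicitly throughout. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def match_counts(eta_hat, t):
    """For each treated unit, find the nearest control unit w.r.t. eta_hat;
    K[j] counts how many times unit j is used as a match."""
    treated = np.where(t == 1)[0]
    controls = np.where(t == 0)[0]
    K = np.zeros(len(t), dtype=int)
    for i in treated:
        # nearest control with respect to the estimated index eta_hat
        j = controls[np.argmin(np.abs(eta_hat[controls] - eta_hat[i]))]
        K[j] += 1
    return K

rng = np.random.default_rng(0)
n = 500
eta_hat = rng.normal(size=n)
t = (rng.random(n) < 0.4).astype(int)  # treatment indicator t_i = 1{Q_i >= tau_0}
K = match_counts(eta_hat, t)
# every treated unit is matched exactly once, so the counts sum to n_1
assert K.sum() == t.sum()
```

The same counts reappear in the variance terms above: units matched many times (large $K_i$) inflate the asymptotic variance through $E(K_i^2)$-type quantities.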
arXiv:2504.17175v1 [math.PR] 24 Apr 2025

Asymptotics of Yule's nonsense correlation for Ornstein-Uhlenbeck paths: The correlated case

Soukaina Douissi∗, Philip Ernst†, Frederi Viens‡

Friday 25th April, 2025

Abstract

We study the continuous-time version of the empirical correlation coefficient between the paths of two possibly correlated Ornstein-Uhlenbeck processes, known as Yule's nonsense correlation for these paths. Using sharp tools from the analysis on Wiener chaos, we establish the asymptotic normality of the fluctuations of this correlation coefficient around its long-time limit, which is the mathematical correlation coefficient between the two processes. This asymptotic normality is quantified in Kolmogorov distance, which allows us to establish speeds of convergence in the Type-II error for two simple tests of independence of the paths, based on the empirical correlation, and based on its numerator. An application to independence of two observations of solutions to the stochastic heat equation is given, with excellent asymptotic power properties using merely a small number of the solutions' Fourier modes.

1 Introduction and Setup

The first purpose of this paper is to provide a detailed asymptotic study of the empirical correlation coefficient $\rho(T)$ between two standard Ornstein-Uhlenbeck (OU) processes $X_1$ and $X_2$ on a time interval $[0,T]$, as the time horizon $T$ increases to infinity, where $\rho(T)$ is defined below in (1). The OU paths $X_1$ and $X_2$ may or may not be correlated. This paper's second purpose is to use those asymptotics to evaluate the power of independence tests based on $\rho(T)$ itself as a test statistic, or on its constituent components as test statistics. It is important to note from the outset that the data available to compute these test statistics are the single pair of paths $(X_1,X_2)$, not repeated measurements of $X_1$ and/or $X_2$. This is why we chose to investigate increasing-horizon (large-time) asymptotics.
This framework is well adapted to longitudinal observational studies with high-frequency observations, as can occur commonly in environmental data, financial data, and many other areas where it is inconvenient or impossible to work with highly repeatable designed experiments.

The notion of an empirical correlation coefficient $\rho(T)$ for any pair of paths of continuous stochastic processes $(X_1,X_2)$ defined on $[0,T]$ can be defined by analogy with the standard Pearson correlation coefficient for these same paths observed in discrete time, e.g. at regular time intervals. Because the paths are continuous, it is a trivial application of standard Riemann integration that the standard Pearson correlation coefficient for the discrete-time observations of $(X_1,X_2)$ converges, as the time step converges to 0, to the following continuous-time statistic
$$\rho(T) := \frac{Y_{12}(T)}{\sqrt{Y_{11}(T)}\sqrt{Y_{22}(T)}}, \qquad (1)$$

∗ Cadi Ayyad University, UCA, National School of Applied Sciences of Marrakech (ENSAM), BP 575, Avenue Abdelkrim Khattabi, 40000, Guéliz, Marrakech, Morocco. Email: s.douissi@uca.ac.ma
† Imperial College London, Department of Mathematics. E-mail: p.ernst@imperial.ac.uk
‡ Department of Statistics, Rice University, USA. E-mail: viens@rice.edu

where the random variables $Y_{ij}(T)$, $i,j = 1,2$, are given via the following Riemann integrals
$$Y_{ij}(T) := \int_0^T X_i(u)X_j(u)\,du - T\,\bar X_i(T)\bar X_j(T), \qquad \bar X_i(T) := \frac{1}{T}\int_0^T X_i(u)\,du. \qquad (2)$$
This classical analysis statement holds almost surely, as soon as the paths $(X_1,X_2)$ are continuous almost surely and are not constant over time. That the denominators in (1) are non-zero
follows from an application of Jensen's inequality, where equality does not hold because the paths are not constant. All other details, including the definition of $\rho$ in discrete time, are omitted, since many references, including several cited below such as [11, 2], cover this topic.

The topic of independence testing for continuous-time modeled stochastic processes, as a mathematical framework approximating time series observed over long horizons or in high frequency, using $\rho(T)$ as above, has gained renewed attention since Ernst, Shepp, and Wyner solved a long-standing mathematical conjecture regarding the exact quantitative behavior of the continuous-time version of Pearson's correlation coefficient for independent random walks. Their paper [11] discusses the history of how $\rho(T)$ relates to the discrete-time classical Pearson correlation coefficient. This paper can be consulted, along with its references, for why $\rho(T)$ is the correct object of study, as we claim above. In the case of random walks, their paper explains, as was known since the 1960s, that the pair of processes $(X_1,X_2)$ should be Brownian motions (Wiener processes), and their paper proves that when these paths are independent, neither $\rho(T)$ nor its discrete version converges to 0, as one would expect for standard interpretations of Pearson correlation coefficients. Rather, $\rho(T)$ is constant in distribution, with a variance which they compute explicitly. This is the so-called phenomenon of "Yule's nonsense correlation", and indeed this appellation is a label for $\rho(T)$ itself. It was named after G. Udny Yule, who discovered this phenomenon empirically in 1926 in [30], and who had conjectured that the variance of $\rho(T)$ should be computable.
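Since $\rho(T)$ in (1)-(2) is the small-time-step limit of the discrete Pearson coefficient, it can be approximated on a grid by Riemann sums. A minimal illustration (variable names and the independent-Brownian-motion example are ours, not the paper's):

```python
import numpy as np

def empirical_rho(x1, x2, dt):
    """Riemann-sum approximation of rho(T) = Y12 / sqrt(Y11 * Y22), with
    Y_ij(T) = int_0^T X_i X_j du - T * Xbar_i * Xbar_j as in (2)."""
    T = len(x1) * dt
    xbar1 = x1.mean()  # approximates (1/T) int_0^T X_1(u) du
    xbar2 = x2.mean()
    def Y(a, b, abar, bbar):
        return np.sum(a * b) * dt - T * abar * bbar
    y12 = Y(x1, x2, xbar1, xbar2)
    y11 = Y(x1, x1, xbar1, xbar1)
    y22 = Y(x2, x2, xbar2, xbar2)
    return y12 / np.sqrt(y11 * y22)

rng = np.random.default_rng(0)
dt, n = 0.01, 100_000  # horizon T = 1000
# two INDEPENDENT Brownian motions: rho(T) need not be near 0
# (Yule's nonsense correlation phenomenon)
w1 = np.cumsum(rng.normal(0, np.sqrt(dt), n))
w2 = np.cumsum(rng.normal(0, np.sqrt(dt), n))
rho = empirical_rho(w1, w2, dt)
print(rho)  # some value in [-1, 1]
```

Algebraically, this Riemann-sum version coincides with the ordinary Pearson correlation of the sampled path values, which is exactly the discrete-to-continuous limit statement made above.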
The fact that, for random walks, as for other self-similar processes such as fractional Brownian motion, $\rho(T)$ has a stationary distribution as $T$ increases, was presumably well known by the scholars who had studied Yule's nonsense correlation since the 1960s, and most likely by Yule himself for the case of standard random walks. This fact was not recorded in the literature until it was pointed out in the introduction of [2] when discussing the distinction in the behavior of $\rho(T)$ between highly non-stationary paths like random walks and Brownian motion on one hand, and i.i.d. data and stationary time series and processes like OU on the other.

This brings us to the core of this paper's topic. It has been known for some time that, when a pair of time series paths is sufficiently stationary (with some limits on how long their memory (auto-correlation) is), the phenomenon of Yule's nonsense correlation does not hold: the Pearson correlation of the pair of time series paths typically converges to their underlying mathematical correlation, just as one would expect for i.i.d. data. This was established in continuous time for OU processes in the paper [9]: if $(X_1,X_2)$ are two OU paths with correlation $r$, then $\rho(T) \to r$ almost surely as $T \to \infty$, and the fluctuations in this convergence are Gaussian, i.e. a Central Limit Theorem (CLT) holds for $\sqrt{T}(\rho(T)-r)$ as $T \to \infty$. The paper [9] was published in 2025, but an arXiv version with this result in the
case $r = 0$ was posted in 2022 as [10], which predates the publication of the paper [2]. In the paper [2], the case of $r = 0$ was studied in detail, and a speed of convergence in this CLT was established, at the so-called Berry-Esséen rate $1/\sqrt{T}$, using tools from the so-called Wiener chaos analysis. That paper also studied the discrete high-frequency version of $\rho(T)$ and established a rate of convergence of its normal fluctuations which depends on $T$ and on the rate of observations. A study of moderate deviations for $\rho(T)$ is given in the 2025 paper [31], where the OU processes are observed at discrete times and in high frequency, similarly to the discrete observation restrictions placed on the OU processes in the earlier contribution [2].

This leaves the case of correlated paths ($r \neq 0$) in continuous time open. That question was taken up in fully discrete time in the preprint [7] in the context of AR(1) processes, i.e. without simultaneous restrictions on high-frequency observations and increasing horizon. They establish an exact distribution theory, and they study asymptotics of the discrete-time version of the empirical correlation, quantitatively, using basic estimates from the Malliavin calculus, similar to the tools developed in [2]. Since that empirical correlation converges to the underlying mathematical correlation $r$, the paper [7]'s distribution theory is used to prove a Berry-Esséen-type theorem in Kolmogorov distance for the Gaussian fluctuations of the discrete empirical correlation. This immediately allows [7] to prove that a simple test of independence is asymptotically powerful, similar to what we do in the present article.

The current article picks up the framework in [2], in continuous time, now allowing $(X_1,X_2)$ to be correlated, and taking the analysis of independence testing further. That is also the topic of the preprint [7], in discrete time, as just mentioned.
This current paper compares with the fully discrete-time setting of [7] in the following ways. Superficially, both papers use estimates of distances between probability measures on Wiener chaos which can be found in the work of Nourdin and Peccati, though the current paper relies on the optimal version of these estimates in [23], while [7] works with the possibly suboptimal estimates in the earlier research, summarized in the book [26]. The extraordinarily detailed exact-distribution-theory calculations performed in [7] are helpful to achieving what appear to be sharp estimates via the tools in [26], which is why there does not appear to be any downside to using those less optimal methods, circumventing the need to perform third-cumulant calculations. In contrast, in the current paper, as seen for example in the proof of Proposition 2 below, third- and fourth-cumulant calculations are needed to apply the Optimal Fourth Moment theorem in [23]. The advantage of using this theorem is a guarantee of optimality assuming efficient cumulant estimations; another advantage is the avoidance of any exact distribution theory, which significantly lightens the technicalities needed to establish probability measure distance estimates on
Wiener chaos. Another major difference between [7] and the current paper is that the latter is in continuous time and the former is in discrete time; this is perhaps a superficial distinction in terms of results, since both papers concentrate on increasing-horizon asymptotics. However, in terms of proofs, whether the method of exact distribution theory can be applied to the continuous-time framework is an open question. The answer could be affirmative, but it is unclear whether the necessary technicalities are worth the effort. One could be particularly averse to engaging in the required spectral analysis, given how much effort and talent was expended in [7] to handle the finite-dimensional matrix analysis needed there. In terms of applications to testing, the current paper engages in a detailed quantitative power analysis, proposing two different tests depending on whether one uses the full empirical correlation coefficient, or only the covariance in its numerator; [7] applies its Berry-Esséen result to the empirical correlation for the power calculation, in an efficient way.

Specifically, in the remainder of this paper, $(X_1,X_2)$ are a pair of OU processes with the same known drift parameter $\theta > 0$; namely, $X_i$ solves the linear SDE, for $i = 1,2$,
$$dX_i(t) = -\theta X_i(t)\,dt + dW_i(t), \qquad t \geqslant 0, \qquad (3)$$
where we assume $X_i(0) = 0$, $i = 1,2$, for the sake of reducing technicalities, and where the driving noises $(W_1(t))_{t\geqslant 0}$, $(W_2(t))_{t\geqslant 0}$ are two standard Brownian motions (Wiener processes). As mentioned, this paper builds a statistical test of independence (or dependence) of the pair of OU processes $(X_1,X_2)$ using $\rho(T)$ for large $T$. That is, we propose a test for the following null hypothesis
$$H_0:\ (X_1) \text{ and } (X_2) \text{ are independent},$$
versus the alternative hypothesis
$$H_a:\ (X_1) \text{ and } (X_2) \text{ are correlated with some fixed } r = \mathrm{cor}(W_1,W_2) \in [-1,1]\setminus\{0\}.$$
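To make the testing problem concrete: under $H_0$, the paper [2] shows (as recalled below) that $\sqrt{T}\,\rho(T)$ is asymptotically $N(0,1/\theta)$, which suggests rejecting $H_0$ when $|\sqrt{\theta T}\,\rho(T)|$ exceeds a standard normal quantile. The following sketch simulates (3) by Euler-Maruyama under each hypothesis and computes that standardized statistic; all numerical choices (step size, horizon, seed) are ours, and this is only a caricature of the tests analyzed in Section 4.

```python
import numpy as np

def simulate_rho(theta, r, T, dt, rng):
    """Euler-Maruyama for dX_i = -theta X_i dt + dW_i with cor(W1, W2) = r;
    returns the empirical correlation rho(T) of the two sampled paths."""
    n = int(T / dt)
    dw1 = rng.normal(0.0, np.sqrt(dt), n)
    dw0 = rng.normal(0.0, np.sqrt(dt), n)
    dw2 = r * dw1 + np.sqrt(1.0 - r**2) * dw0  # one way to realize cor = r
    x1, x2 = np.zeros(n), np.zeros(n)
    for k in range(n - 1):
        x1[k + 1] = x1[k] - theta * x1[k] * dt + dw1[k]
        x2[k + 1] = x2[k] - theta * x2[k] * dt + dw2[k]
    return np.corrcoef(x1, x2)[0, 1]  # Riemann-sum version of rho(T)

rng = np.random.default_rng(7)
theta, T, dt = 1.0, 500.0, 0.01
z_h0 = np.sqrt(theta * T) * simulate_rho(theta, 0.0, T, dt, rng)
z_ha = np.sqrt(theta * T) * simulate_rho(theta, 0.5, T, dt, rng)
print(z_h0)  # approximately N(0, 1) under H0
print(z_ha)  # large under Ha: the rule |z| > 1.96 rejects with high probability
```

The growth of the statistic like $\sqrt{\theta T}\,r$ under $H_a$ is what makes such tests asymptotically powerful, which is the quantitative theme of Sections 3 and 4.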
The reader may note that this is a simple hypothesis test, in the sense that the alternative hypothesis is specific to a fixed value $r \neq 0$. Because of the infinite-dimensional nature of the objects of study, we believe that a more general hypothesis test, such as a full two-sided test where the alternative covers all non-zero values of $r$, would not be asymptotically powerful. For this reason, we do not consider such broader alternatives.

As mentioned, under $H_0$, by exploiting the second-Wiener-chaos properties of the three random variables $(Y_{i,j}(T),\ (i,j) = (1,1),(2,2),(1,2))$ appearing as components of the ratio $\rho(T)$ in (1), the paper [2] shows, using the connection between the Malliavin calculus and Stein's method, that the speed of convergence in law of $T^{1/2}\rho(T)$ to the normal law $N(0,1/\theta)$ in the Kolmogorov distance $d_{\mathrm{Kol}}$ is bounded above by a log-corrected Berry-Esséen rate $T^{-1/2}\log(T)$.

Therefore, as a first step in looking for an asymptotically powerful test to reject the null hypothesis of independence, we will study the Gaussian fluctuations of the statistic $\rho(T)$ under $H_a$. This is the topic of Section 3. We follow that with Section 4, where we identify an asymptotically powerful test for rejecting the null. Finally, Section 5.2 provides an interesting example of what Section 4 implies in the case of stochastic differential equations in infinite dimensions, namely how to build a test of
independence for solutions of the stochastic heat equation. But first, in Section 2, we begin with some preliminary information on analysis on Wiener space, to help make this paper essentially self-contained beyond the construction of basic objects like the Wiener process.

2 Elements of the analysis on Wiener space

This section provides essential facts from basic probability, the Malliavin calculus, and more broadly the analysis on Wiener space. These facts and their corresponding notations underlie all the results of this paper. This is because, as mentioned in the introduction, and as noted in the paper [2], the three constituent components of $\rho(T)$ involve random variables in the so-called second Wiener chaos. We have strived to make this section self-contained and logically articulated, presenting material needed to understand all technical details in this paper, and elements that help appreciate how these background results fit together as part of the analysis on Wiener space.

Of particular importance below, when performing exact calculations on these variables, are the isometry and product formula on Wiener chaos. Another important property of Wiener chaos explained below and used in this paper is the so-called hypercontractivity, or equivalence of norms on Wiener chaos. The crux of the quantitative arguments we make in this paper, to estimate the rate of normal fluctuations for $\rho(T)$ and its components, comes from the so-called optimal fourth moment theorem on Wiener chaos, also explained in detail below. It is the precision afforded by that theorem that allows us to produce tests of independence with good, quantitative properties of asymptotic power. That theorem, as explained below, supersedes a previous theorem known as the fourth moment theorem, which we also present below, along with related results about the connection between Stein's method and Malliavin derivatives, to give the full context of how all these techniques fit together.
Strictly speaking, the original fourth moment theorem, and the connection between Malliavin derivatives and Stein's method, are not used directly in the current paper, but we include them in this section's didactic overview because we believe omitting them would not be helpful to readers who have some familiarity with some of the tools but not others. The interested reader can find more details about the results in this section by consulting the books [25, Chapter 1] and [26, Chapter 2]. However, the details of the optimal fourth moment theorem should be consulted in the original article [23].

With $(\Omega,\mathcal{F},P)$ denoting the Wiener space of a standard Wiener process $W$, for a deterministic function $h \in L^2(\mathbb{R}_+) =: H$, the Wiener integral $\int_{\mathbb{R}_+} h(s)\,dW(s)$ is also denoted by $W(h)$. The inner product $\int_{\mathbb{R}_+} f(s)g(s)\,ds$ will be denoted by $\langle f,g\rangle_H$.

• The Wiener chaos expansion. For every $q \geqslant 1$, $\mathcal{H}_q$ denotes the $q$th Wiener chaos of $W$, defined as the closed linear subspace of $L^2(\Omega)$ generated by the random variables $\{H_q(W(h)),\ h \in H,\ \|h\|_H = 1\}$, where $H_q$ is the $q$th Hermite polynomial. Wiener chaoses of different orders are orthogonal in $L^2(\Omega)$. The so-called Wiener chaos expansion is the fact that any $X \in L^2(\Omega)$ can be written as
$$X = EX + \sum_{q=1}^{\infty} X_q \qquad (4)$$
for some
$X_q \in \mathcal{H}_q$ for every $q \geqslant 1$. This is summarized in the direct-orthogonal-sum notation $L^2(\Omega) = \oplus_{q=0}^{\infty}\mathcal{H}_q$. Here $\mathcal{H}_0$ denotes the constants.

• Relation with Hermite polynomials; multiple Wiener integrals. The mapping $I_q(h^{\otimes q}) := q!\,H_q(W(h))$ is a linear isometry between the symmetric tensor product space $H^{\odot q}$ of functions on $(\mathbb{R}_+)^q$ (equipped with the modified norm $\|\cdot\|_{H^{\odot q}} = \sqrt{q!}\,\|\cdot\|_{H^{\otimes q}}$) and the $q$th Wiener chaos space $\mathcal{H}_q$. To relate this to standard stochastic calculus, one first notes that $I_q(h^{\otimes q})$ can be interpreted as the multiple Wiener integral of $h^{\otimes q}$ w.r.t. $W$. By this we mean that the Riemann-Stieltjes approximation of such an integral converges in $L^2(\Omega)$ to $I_q(h^{\otimes q})$. This is an elementary fact from analysis on Wiener space, which can also be proved using standard stochastic calculus for square-integrable martingales, because the multiple-integral interpretation of $I_q(h^{\otimes q})$ as a Riemann-Stieltjes integral over $(\mathbb{R}_+)^q$ can be further shown to coincide with $q!$ times the iterated Itô integral over the first simplex in $(\mathbb{R}_+)^q$. More generally, for $X$ and its Wiener chaos expansion (4) above, each term $X_q$ can be interpreted as a multiple Wiener integral $I_q(f_q)$ for some $f_q \in H^{\odot q}$.

• The product formula; isometry property. For every $f,g \in H^{\odot q}$ the following extended isometry property holds:
$$E(I_q(f)I_q(g)) = q!\,\langle f,g\rangle_{H^{\otimes q}}. \qquad (5)$$
Similarly as for $I_q(h^{\otimes q})$, this formula is established using basic analysis on Wiener space, but it can also be proved using standard stochastic calculus, owing to the coincidence of $I_q(f)$ and $I_q(g)$ with iterated Itô integrals. To do so, one uses Itô's version of integration by parts, in which iterated calculations show coincidence of the expectation of the bounded-variation term with the right-hand side above. What is typically referred to as the Product Formula on Wiener space is the version of the above formula before taking expectations (see [26, Section 2.7.3]).
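The isometry (5) is easy to probe numerically for small $q$ by discretizing the Wiener integrals. The Monte Carlo sketch below (grid, integrands, sample size, and tolerances are our arbitrary choices) checks the case $q = 1$, and the case $q = 2$ for symmetric tensors $f^{\otimes 2}, g^{\otimes 2}$, using the classical identity $I_2(f^{\otimes 2}) = W(f)^2 - \|f\|_H^2$, for which (5) reads $E[I_2(f^{\otimes 2})I_2(g^{\otimes 2})] = 2\langle f,g\rangle_H^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, n_mc = 1.0, 100, 50_000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n
f = np.exp(-t)             # two deterministic integrands on [0, T]
g = np.cos(t)
inner = np.sum(f * g) * dt  # discretized <f, g>_H

# Wiener integrals W(h) ~ sum_k h(t_k) dW_k over n_mc independent paths
dW = rng.normal(0.0, np.sqrt(dt), size=(n_mc, n))
Wf = dW @ f
Wg = dW @ g

# q = 1 isometry: E[I_1(f) I_1(g)] = <f, g>_H
print(np.mean(Wf * Wg), inner)

# q = 2 isometry for f⊗f, g⊗g: E[I_2(f⊗2) I_2(g⊗2)] = 2 <f, g>_H^2
I2f = Wf**2 - np.sum(f * f) * dt
I2g = Wg**2 - np.sum(g * g) * dt
print(np.mean(I2f * I2g), 2 * inner**2)
```

The agreement improves like the usual Monte Carlo rate $n_{\mathrm{mc}}^{-1/2}$; the same discretization also illustrates the orthogonality of chaoses of different orders, since $E[W(f)\,I_2(g^{\otimes 2})] = 0$.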
In our work, beyond the zero-order term in that formula, which coincides with the expectation above, we will only need to know the full formula for $q = 1$, which is:
$$I_1(f)I_1(g) = 2^{-1}\,I_2(f \otimes g + g \otimes f) + \langle f,g\rangle_H. \qquad (6)$$

• Hypercontractivity in Wiener chaos. For $h \in H^{\otimes q}$, the multiple Wiener integrals $I_q(h)$, which exhaust the set $\mathcal{H}_q$, satisfy a hypercontractivity property (equivalence in $\mathcal{H}_q$ of all $L^p$ norms for all $p \geqslant 2$), which implies that for any $F \in \oplus_{l=1}^{q}\mathcal{H}_l$ (i.e. in a fixed sum of Wiener chaoses), we have
$$\left(E\left[|F|^p\right]\right)^{1/p} \leqslant c_{p,q}\left(E\left[|F|^2\right]\right)^{1/2} \quad \text{for any } p \geqslant 2. \qquad (7)$$
It should be noted that the constants $c_{p,q}$ above are known with some precision when $F$ is a single chaos term: indeed, by Corollary 2.8.14 in [26], $c_{p,q} = (p-1)^{q/2}$.

• Malliavin derivative. The Malliavin derivative operator $D$ on Wiener space is not needed explicitly in this paper. However, because of the fundamental role $D$ plays in evaluating distances between random variables, it is helpful to introduce it, to justify the estimates (9) and (10) below. For any univariate function $\Phi \in C^1(\mathbb{R})$ with bounded derivative, and any $h \in H$, the Malliavin derivative of the random variable $X := \Phi(W(h))$ is defined to be consistent with the following chain rule:
$$D: X \mapsto D_r X := \Phi'(W(h))\,h(r) \in L^2(\Omega\times\mathbb{R}_+).$$
A similar chain rule holds for multivariate $\Phi$. One then extends $D$ to the so-called Gross-Sobolev subset $\mathbb{D}^{1,2} \subsetneq L^2(\Omega)$ by closing $D$ inside $L^2(\Omega)$ under the norm defined by its square
$$\|X\|_{1,2}^2 := E\left[X^2\right] + E\left[\int_{\mathbb{R}_+} |D_r X|^2\,dr\right].$$
All Wiener chaos random variables are in the domain $\mathbb{D}^{1,2}$ of $D$. In fact this
domain can be expressed explicitly for any $X$ as in (4): $X \in \mathbb{D}^{1,2}$ if and only if $\sum_q q\,q!\,\|f_q\|_{H^{\otimes q}}^2 < \infty$.

• Generator $L$ of the Ornstein-Uhlenbeck semigroup. The linear operator $L$ is defined as being diagonal under the Wiener chaos expansion of $L^2(\Omega)$: $\mathcal{H}_q$ is the eigenspace of $L$ with eigenvalue $-q$, i.e. for any $X \in \mathcal{H}_q$, $LX = -qX$. We have $\mathrm{Ker}(L) = \mathcal{H}_0$, the constants. The operator $-L^{-1}$ is the negative pseudo-inverse of $L$, so that for any $X \in \mathcal{H}_q$, $-L^{-1}X = q^{-1}X$. Since the variables we will be dealing with in this article are finite sums of elements of $\mathcal{H}_q$, the operator $-L^{-1}$ is easy to manipulate thereon. The use of $L$ is crucial when evaluating the total variation distance between the laws of random variables in Wiener chaos, as we will see shortly.

• Distances between random variables. The following is classical. If $X, Y$ are two real-valued random variables, then the total variation distance between the law of $X$ and the law of $Y$ is given by
$$d_{TV}(X,Y) := \sup_{A \in \mathcal{B}(\mathbb{R})} \left|P[X \in A] - P[Y \in A]\right|,$$
where the supremum is over all Borel sets. The Kolmogorov distance $d_{\mathrm{Kol}}(X,Y)$ is the same as $d_{TV}$ except one takes the sup over $A$ of the form $(-\infty,z]$ for all real $z$. The Wasserstein distance uses Lipschitz rather than indicator functions:
$$d_W(X,Y) := \sup_{f \in \mathrm{Lip}(1)} |Ef(X) - Ef(Y)|,$$
$\mathrm{Lip}(1)$ being the set of all Lipschitz functions with Lipschitz constant $\leqslant 1$.

• Malliavin operators and distances between laws on Wiener space. There are two key estimates linking total variation distance and the Malliavin calculus, which were both obtained by Nourdin and Peccati. The first one is an observation relating an integration-by-parts formula on Wiener space with a classical result of Ch. Stein. The second is a quantitatively sharp version of the famous 4th moment theorem of Nualart and Peccati. Let $N$ denote the standard normal law.

– The observation of Nourdin and Peccati. Let $X \in \mathbb{D}^{1,2}$ with $E[X] = 0$ and $\mathrm{Var}[X] = 1$.
Then (see [23, Proposition 2.4], or [26, Theorem 5.1.3]), for $f \in C_b^1(\mathbb{R})$,
$$E[Xf(X)] = E\left[f'(X)\left\langle DX, -DL^{-1}X\right\rangle_H\right],$$
and by combining this with properties of solutions of Stein's equations, one gets
$$d_{TV}(X,N) \leqslant 2\,E\left|1 - \left\langle DX, -DL^{-1}X\right\rangle_H\right|. \qquad (8)$$
One notes in particular that when $X \in \mathcal{H}_q$, since $-L^{-1}X = q^{-1}X$, we obtain conveniently
$$d_{TV}(X,N) \leqslant 2\,E\left|1 - q^{-1}\|DX\|_H^2\right|. \qquad (9)$$
It is this last observation which leads to a quantitative version of the fourth moment theorem of Nualart and Peccati, which entails using Jensen's inequality to note that the right-hand side of (8) is bounded above by the variance of $\left\langle DX, -DL^{-1}X\right\rangle_H$, and then relating that variance in the case of Wiener chaos with the 4th cumulant (centered fourth moment) of $X$. However, this strategy was superseded by the following, which has roots in [3].

– The optimal fourth moment theorem. For each integer $n$, let $X_n \in \mathcal{H}_q$. Assume $\mathrm{Var}[X_n] = 1$ and $(X_n)_n$ converges in distribution to a normal law. It is known (original proof in [27], known as the fourth moment theorem) that this convergence is equivalent to $\lim_n E\left[X_n^4\right] = 3$. The following optimal estimate for $d_{TV}(X_n,N)$, known as the optimal fourth moment theorem, was proved in [23]: with the sequence $X_n$ as above, assuming convergence, there exist two constants $c, C > 0$, depending only on the law of $X$ but not on
$n$, such that
$$c\,\max\left\{E\left[X_n^4\right]-3,\ \left|E\left[X_n^3\right]\right|\right\} \leqslant d_{TV}(X_n,N) \leqslant C\,\max\left\{E\left[X_n^4\right]-3,\ \left|E\left[X_n^3\right]\right|\right\}. \qquad (10)$$

3 Fluctuations of $\rho(T)$ under $H_a$: CLTs and rates of convergence

In this section, we study the detailed asymptotics of the law of the empirical correlation coefficient $\rho(T)$ between our two OU paths $X_1, X_2$, under the alternative hypothesis of a non-zero true correlation between them, when the time horizon $T \to +\infty$. As mentioned, we interpret quantitatively the fact that $X_1$ and $X_2$ are correlated by letting the correlation coefficient $r$ between the driving noises $W_1$ and $W_2$ be a fixed non-zero value: $r \in [-1,1]\setminus\{0\}$, which is our alternative hypothesis $H_a$, while the null hypothesis $H_0$ is $r = 0$. These hypotheses are identical to assuming that $X_1$ and $X_2$ have a fixed non-zero correlation, and have a zero correlation, respectively. Since all these processes are Gaussian, $H_0$ is equivalent to independence of the pairs $(X_1,X_2)$ or $(W_1,W_2)$.

To facilitate the mathematical analysis quantitatively, we introduce a Brownian motion $W_0$ defined on the same probability space $(\Omega,\mathcal{F},P)$ as $W_1$ and assumed to be independent of $W_1$. We then realize the Brownian motion $W_2$ on this probability space from $W_1$ and $W_0$ via the following elementary construction: for any $t \geqslant 0$,
$$W_2(t) := r\,W_1(t) + \sqrt{1-r^2}\,W_0(t). \qquad (11)$$
The two OU paths $X_1, X_2$ are still given via their SDEs (3).
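The construction (11) also gives a direct way to observe the law-of-large-numbers behavior $\rho(T) \to r$ recalled in the introduction. A simulation sketch (Euler-Maruyama discretization and all numerical choices are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, r, dt = 1.0, 0.5, 0.02

def rho_of_T(T):
    # driving noises realized through the elementary construction (11)
    n = int(T / dt)
    dw1 = rng.normal(0.0, np.sqrt(dt), n)
    dw2 = r * dw1 + np.sqrt(1.0 - r**2) * rng.normal(0.0, np.sqrt(dt), n)
    x1, x2 = np.zeros(n), np.zeros(n)
    for k in range(n - 1):  # Euler-Maruyama for the SDEs (3)
        x1[k + 1] = x1[k] - theta * x1[k] * dt + dw1[k]
        x2[k + 1] = x2[k] - theta * x2[k] * dt + dw2[k]
    # Riemann-sum version of rho(T), i.e. the Pearson correlation of the paths
    return np.corrcoef(x1, x2)[0, 1]

for T in (100.0, 400.0, 1600.0):
    print(T, rho_of_T(T))  # rho(T) approaches r = 0.5 as T grows
```

The fluctuations of $\rho(T)$ around $r$ shrink like $T^{-1/2}$, consistent with the CLT whose rate is quantified in this section.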
Recall that we defined their empirical correlation coefficient, in (1), as
\[
\rho(T) := \frac{Y_{12}(T)}{\sqrt{Y_{11}(T)}\sqrt{Y_{22}(T)}}, \tag{12}
\]
where the random variables $Y_{ij}(T)$, $i,j=1,2$, are given as in (48) by
\[
Y_{ij}(T) := \int_0^T X_i(u)X_j(u)\,du - T\,\bar X_i(T)\bar X_j(T), \qquad \bar X_i(T) := \frac1T\int_0^T X_i(u)\,du. \tag{13}
\]

3.1 Gaussian fluctuations of the numerator $Y_{12}(T)$

The numerator $Y_{12}(T)$ is defined as follows:
\[
Y_{12}(T) = \int_0^T X_1(u)X_2(u)\,du - T\,\bar X_1(T)\bar X_2(T).
\]
From the construction (11), we can write, for any $0\leqslant u\leqslant T$,
\[
X_1(u)X_2(u) = \Bigl[r\int_0^u e^{-\theta(u-t)}dW_1(t) + \sqrt{1-r^2}\int_0^u e^{-\theta(u-t)}dW_0(t)\Bigr]\times\int_0^u e^{-\theta(u-t)}dW_1(t)
= r\,I^{W_1}_1(f_u)^2 + \sqrt{1-r^2}\,I^{W_0}_1(f_u)I^{W_1}_1(f_u)
= r\bigl[I^{W_1}_2(f_u^{\otimes2}) + \|f_u\|_H^2\bigr] + \sqrt{1-r^2}\,I^{W_0}_1(f_u)I^{W_1}_1(f_u),
\]
where $f_u(\cdot) := e^{-\theta(u-\cdot)}\mathbf{1}_{[0,u]}(\cdot)$ and $H := L^2([0,T])$. On the other hand, using a rotational trick and the linearity of Wiener integrals, we can write
\[
I^{W_0}_1(f_u)I^{W_1}_1(f_u) = \frac12\Bigl[\Bigl(\frac{I^{W_0}_1(f_u)+I^{W_1}_1(f_u)}{\sqrt2}\Bigr)^2-\Bigl(\frac{I^{W_0}_1(f_u)-I^{W_1}_1(f_u)}{\sqrt2}\Bigr)^2\Bigr]
= \frac12\bigl[(I^{U_1}_1(f_u))^2-(I^{U_0}_1(f_u))^2\bigr],
\]
where $U_0 := \frac{W_1-W_0}{\sqrt2}$, $U_1 := \frac{W_1+W_0}{\sqrt2}$. Therefore, using the product formula (6),
\[
\sqrt{1-r^2}\int_0^T I^{W_0}_1(f_u)I^{W_1}_1(f_u)\,du = \frac{\sqrt{1-r^2}}2\int_0^T I^{U_1}_1(f_u)^2\,du - \frac{\sqrt{1-r^2}}2\int_0^T I^{U_0}_1(f_u)^2\,du
= \frac{\sqrt{1-r^2}}2\int_0^T I^{U_1}_2(f_u^{\otimes2})\,du - \frac{\sqrt{1-r^2}}2\int_0^T I^{U_0}_2(f_u^{\otimes2})\,du,
\]
the two terms $\frac{\sqrt{1-r^2}}2\int_0^T\|f_u\|_H^2\,du$ produced by the product formula cancelling each other. Moreover, we can write
\[
r\int_0^T\bigl[I^{W_1}_2(f_u^{\otimes2})+\|f_u\|_H^2\bigr]\,du = r\int_0^T I^{\frac{\sqrt2}2(U_1+U_0)}_2(f_u^{\otimes2})\,du + r\int_0^T\|f_u\|_H^2\,du
= \frac{r\sqrt2}2\int_0^T I^{U_1}_2(f_u^{\otimes2})\,du + \frac{r\sqrt2}2\int_0^T I^{U_0}_2(f_u^{\otimes2})\,du + r\int_0^T\|f_u\|_H^2\,du.
\]
Therefore, we can write
\[
\int_0^T X_1(u)X_2(u)\,du = \Bigl[\frac{r\sqrt2}2+\frac{\sqrt{1-r^2}}2\Bigr]\int_0^T I^{U_1}_2(f_u^{\otimes2})\,du + \Bigl[\frac{r\sqrt2}2-\frac{\sqrt{1-r^2}}2\Bigr]\int_0^T I^{U_0}_2(f_u^{\otimes2})\,du + r\int_0^T\|f_u\|_H^2\,du.
\]
It follows that
\[
\frac1{\sqrt T}\int_0^T X_1(u)X_2(u)\,du := A_r(T) + \frac{r}{\sqrt T}\int_0^T\|f_u\|_H^2\,du = A_r(T) + \frac{r\sqrt T}{2\theta} - \frac{r}{4\theta^2\sqrt T}\bigl(1-e^{-2\theta T}\bigr).
\]
We therefore obtain the following expression for $Y_{12}(T)/\sqrt T$:
\[
\frac{Y_{12}(T)}{\sqrt T} = A_r(T) + \frac{r\sqrt T}{2\theta} + O\Bigl(\frac1{\sqrt T}\Bigr) - \sqrt T\,\bar X_1(T)\bar X_2(T). \tag{14}
\]
The following theorem gives the Gaussian fluctuations of the numerator term, along with its speed of convergence in the Wasserstein distance.

Theorem 1 There exists a constant $C(\theta,r)$ depending on $\theta$ and $r$ such that
\[
d_W\Bigl(\frac1{\sigma_{r,\theta}}\Bigl(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\Bigr),\,\mathcal N(0,1)\Bigr) \leqslant \frac{C(\theta,r)}{\sqrt T},
\]
where $\sigma_{r,\theta} := \bigl(\frac1{2\theta^3}\bigl(\frac12+\frac{r^2}2\bigr)\bigr)^{1/2}$. In particular,
\[
\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta} \xrightarrow{\ \mathcal L\ } \mathcal N\Bigl(0,\frac1{2\theta^3}\Bigl(\frac12+\frac{r^2}2\Bigr)\Bigr) \quad\text{as } T\to+\infty.
\]

Proof. We first prove a CLT for the second-Wiener-chaos term $A_r(T)$. Indeed, we can write
\[
A_r(T) := A_{r,1}(T) + A_{r,2}(T). \tag{15}
\]
We claim that, as $T\to+\infty$,
\[
A_{r,1}(T) := \frac{c_1(r)}{\sqrt T}\int_0^T I^{U_1}_2(f_u^{\otimes2})\,du \xrightarrow{\ \mathcal L\ } \mathcal N\Bigl(0,\frac{c_1(r)^2}{2\theta^3}\Bigr), \qquad
A_{r,2}(T) := \frac{c_2(r)}{\sqrt T}\int_0^T I^{U_0}_2(f_u^{\otimes2})\,du \xrightarrow{\ \mathcal L\ } \mathcal N\Bigl(0,\frac{c_2(r)^2}{2\theta^3}\Bigr),
\]
where
\[
c_1(r) = \frac{r\sqrt2}2+\frac{\sqrt{1-r^2}}2, \qquad c_2(r) = \frac{r\sqrt2}2-\frac{\sqrt{1-r^2}}2. \tag{16}
\]
We first compute the third and fourth cumulants of the term $A_r(T)$, in order to use the optimal fourth moment theorem (10). Since $A_{r,1}(T)$ and $A_{r,2}(T)$ are centered, and using the independence of $U_1$ and $U_0$, we can write
\[
k_3(A_r(T)) = \mathbb{E}[A_r(T)^3] = \mathbb{E}[(A_{r,1}(T)+A_{r,2}(T))^3]
= \mathbb{E}[A_{r,1}(T)^3] + 3\,\mathbb{E}[A_{r,1}(T)^2]\,\mathbb{E}[A_{r,2}(T)] + 3\,\mathbb{E}[A_{r,1}(T)]\,\mathbb{E}[A_{r,2}(T)^2] + \mathbb{E}[A_{r,2}(T)^3]
= \mathbb{E}[A_{r,1}(T)^3]+\mathbb{E}[A_{r,2}(T)^3] = k_3(A_{r,1}(T))+k_3(A_{r,2}(T)).
\]
For the fourth cumulant, we have
\[
\mathbb{E}[A_r(T)^4] = \mathbb{E}[A_{r,1}(T)^4] + 4\,\mathbb{E}[A_{r,1}(T)^3]\,\mathbb{E}[A_{r,2}(T)] + 6\,\mathbb{E}[A_{r,1}(T)^2]\,\mathbb{E}[A_{r,2}(T)^2] + 4\,\mathbb{E}[A_{r,1}(T)]\,\mathbb{E}[A_{r,2}(T)^3] + \mathbb{E}[A_{r,2}(T)^4]
= \mathbb{E}[A_{r,1}(T)^4] + 6\,\mathbb{E}[A_{r,1}(T)^2]\,\mathbb{E}[A_{r,2}(T)^2] + \mathbb{E}[A_{r,2}(T)^4].
\]
Therefore
\[
k_4(A_r(T)) = \mathbb{E}[A_r(T)^4]-3\,\mathbb{E}[A_r(T)^2]^2
= \bigl(\mathbb{E}[A_{r,1}(T)^4]-3\,\mathbb{E}[A_{r,1}(T)^2]^2\bigr) + \bigl(\mathbb{E}[A_{r,2}(T)^4]-3\,\mathbb{E}[A_{r,2}(T)^2]^2\bigr)
= k_4(A_{r,1}(T))+k_4(A_{r,2}(T)).
\]

Proposition 2 Let the constants $c_1(\theta,r),c_2(\theta,r)$ be defined by
\[
c_i(\theta,r) = \max\Bigl(\frac{16}{9}\frac1{\theta^5}\bigl|c_i(r)^3\bigr|,\ \frac{81}{8\theta^7}\,c_i(r)^4\Bigr), \quad i=1,2, \tag{17}
\]
where the constants $c_1(r)$ and $c_2(r)$ are defined in (16). Then we have, for $i=1,2$,
\[
\max\{k_3(A_{r,i}(T)),k_4(A_{r,i}(T))\} \leqslant \frac{c_i(\theta,r)}{\sqrt T}.
\]
Proof. The terms $A_{r,1}(T)$ and $A_{r,2}(T)$ can be treated similarly; we do the computations just for $A_{r,1}(T)$. We can write $A_{r,1}(T) = I^{U_1}_2(g_{r,T})$, with
\[
g_{r,T} := \frac{c_1(r)}{\sqrt T}\int_0^T f_t^{\otimes2}\,dt. \tag{18}
\]
Therefore, using the definition of the third cumulant and since
\[
\mathbb{E}[X_1(u)X_1(v)] = \frac{e^{-\theta(u+v)}}{2\theta}\bigl[e^{2\theta(u\wedge v)}-1\bigr] \leqslant \frac1{2\theta}e^{-\theta|u-v|} =: \delta(u-v),
\]
we get (writing $w$ for the third integration variable, to avoid a clash with the correlation parameter $r$)
\[
k_3(A_{r,1}(T)) = 8\,\langle g_{r,T},g_{r,T}\otimes_1 g_{r,T}\rangle_{H^{\otimes2}}
= 8\int_0^T\!\!\int_0^T g_{r,T}(x,y)\,(g_{r,T}\otimes_1 g_{r,T})(x,y)\,dx\,dy
= 8\int_0^T\!\!\int_0^T\!\!\int_0^T g_{r,T}(x,y)\,g_{r,T}(z,y)\,g_{r,T}(x,z)\,dx\,dy\,dz
= \frac{8\,c_1(r)^3}{T^{3/2}}\int_{[0,T]^6} f_u(x)f_u(y)\,f_v(x)f_v(z)\,f_w(y)f_w(z)\,du\,dv\,dw\,dx\,dy\,dz
= \frac{8\,c_1(r)^3}{T^{3/2}}\int_{[0,T]^3}\langle f_u,f_v\rangle\,\langle f_u,f_w\rangle\,\langle f_v,f_w\rangle\,du\,dv\,dw
= \frac{8\,c_1(r)^3}{T^{3/2}}\int_{[0,T]^3}\mathbb{E}[X_1(u)X_1(v)]\,\mathbb{E}[X_1(u)X_1(w)]\,\mathbb{E}[X_1(v)X_1(w)]\,du\,dv\,dw.
\]
It follows that
\[
|k_3(A_{r,1}(T))| \leqslant \frac{8}{T^{3/2}}\bigl|c_1(r)^3\bigr|\,\Bigl|\int_{[0,T]^3}\delta(u-v)\delta(v-w)\delta(u-w)\,du\,dv\,dw\Bigr| =: |k_3(F_T)|,
\]
where $F_T := I^{U_1}_2\bigl(c_1(r)\,\delta(t-s)\mathbf{1}_{[0,T]^2}(t,s)\bigr)$. We proved in Proposition 23 in the Appendix that, for all $p\geqslant3$,
\[
k_p(F_T) \underset{T\to+\infty}{\sim} c_1(r)^p\,\langle\delta^{*(p-1)},\delta\rangle_{L^2(\mathbb{R})}\,\frac{2^{p-1}(p-1)!}{T^{p/2-1}}, \tag{19}
\]
where $\delta^{*(p)}$ denotes the $p$-fold convolution of $\delta$, defined by $\delta^{*(p)}=\delta^{*(p-1)}*\delta$ for $p\geqslant2$, $\delta^{*(1)}=\delta$, and $*$ denotes the convolution of two functions, $(f*g)(x)=\int_{\mathbb{R}}f(x-y)g(y)\,dy$. It follows that there exists $T_0>0$ such that, for all $T\geqslant T_0$,
\[
|k_3(A_{r,1}(T))| \leqslant \bigl|c_1(r)^3\bigr|\,\frac{8\,\bigl|\langle\delta^{*(2)},\delta\rangle_{L^2(\mathbb{R})}\bigr|}{\sqrt T} \leqslant \frac{16}{9\theta^5}\bigl|c_1(r)^3\bigr|\,\frac1{\sqrt T}.
\]
In fact, for the last inequality we make use of Young's inequality, which we recall here: if $p,q,s\geqslant1$ are such that $\frac1p+\frac1q=\frac1s+1$, and $f\in L^p(\mathbb{R})$, $g\in L^q(\mathbb{R})$, then
\[
\|f*g\|_{L^s(\mathbb{R})} \leqslant \|f\|_{L^p(\mathbb{R})}\,\|g\|_{L^q(\mathbb{R})}. \tag{20}
\]
Therefore, using first Hölder's inequality and then Young's inequality (20) with $p=q=\frac32$ and $s=3$, we get
\[
\bigl|\langle\delta^{*(2)},\delta\rangle_{L^2(\mathbb{R})}\bigr| \leqslant \|\delta*\delta\|_{L^3(\mathbb{R})}\,\|\delta\|_{L^{3/2}(\mathbb{R})}
\leqslant \|\delta\|^3_{L^{3/2}(\mathbb{R})} = \Bigl(\int_{\mathbb{R}}\Bigl(\frac1{2\theta}e^{-\theta|u|}\Bigr)^{3/2}du\Bigr)^2 = \frac29\,\frac1{\theta^5}.
\]
For the fourth cumulant, we have
\[
|k_4(A_{r,1}(T))| = 16\bigl(\|g_{r,T}\otimes_1 g_{r,T}\|^2_{H^{\otimes2}} + 2\|g_{r,T}\widetilde\otimes_1 g_{r,T}\|^2_{H^{\otimes2}}\bigr) \leqslant 48\,\|g_{r,T}\otimes_1 g_{r,T}\|^2_{H^{\otimes2}}
= 48\int_{[0,T]^2}(g_{r,T}\otimes_1 g_{r,T})^2(x,y)\,dx\,dy
= 48\int_{[0,T]^4} g_{r,T}(x,z)\,g_{r,T}(x,t)\,g_{r,T}(z,y)\,g_{r,T}(t,y)\,dt\,dz\,dx\,dy
= \frac{48\,c_1(r)^4}{T^2}\int_{[0,T]^8} f_u(x)f_u(z)\,f_v(x)f_v(t)\,f_w(z)f_w(y)\,f_s(t)f_s(y)\,du\,dv\,dw\,ds\,dx\,dy\,dz\,dt
= \frac{48\,c_1(r)^4}{T^2}\int_{[0,T]^4}\mathbb{E}[X_1(u)X_1(v)]\,\mathbb{E}[X_1(u)X_1(w)]\,\mathbb{E}[X_1(v)X_1(s)]\,\mathbb{E}[X_1(w)X_1(s)]\,du\,dv\,dw\,ds.
\]
It follows that
\[
|k_4(A_{r,1}(T))| \leqslant 48\,c_1(r)^4\,\Bigl|\int_{[0,T]^4}\delta(u-v)\delta(v-s)\delta(s-w)\delta(w-u)\,du\,dv\,dw\,ds\Bigr| =: |k_4(F_T)|.
\]
Using the equivalent (19), it follows that there exists $T_0>0$ such that, for all $T\geqslant T_0$,
\[
|k_4(A_{r,1}(T))| \leqslant c_1(r)^4\,\frac{48\,\bigl|\langle\delta^{*(3)},\delta\rangle_{L^2(\mathbb{R})}\bigr|}{T} \leqslant \frac{81}{8\theta^7}\,\frac{c_1(r)^4}{T}.
\]
In fact, using first Hölder's inequality and then Young's inequality (20), we get
\[
\bigl|\langle\delta^{*(3)},\delta\rangle_{L^2(\mathbb{R})}\bigr| \leqslant \|\delta\|_{L^{4/3}(\mathbb{R})}\,\|\delta^{*(3)}\|_{L^4(\mathbb{R})} = \|\delta\|_{L^{4/3}(\mathbb{R})}\,\|\delta^{*(2)}*\delta\|_{L^4(\mathbb{R})}
\leqslant \|\delta\|^2_{L^{4/3}(\mathbb{R})}\,\|\delta^{*(2)}\|_{L^2(\mathbb{R})}
\leqslant \|\delta\|^4_{L^{4/3}(\mathbb{R})} = \Bigl(\int_{\mathbb{R}}\Bigl(\frac1{2\theta}e^{-\theta|u|}\Bigr)^{4/3}du\Bigr)^3 = \frac{27}{128}\,\frac1{\theta^7}.
\]

Proposition 3 There exists a constant $C(\theta,r)>0$ such that
\[
\bigl|\mathbb{E}[A_r^2(T)]-\sigma^2_{r,\theta}\bigr| \leqslant \frac{C(\theta,r)}{T}.
\]
In particular, $\mathbb{E}[A_r^2(T)]\to\sigma^2_{r,\theta}$ as $T\to+\infty$.

Proof. By the decomposition (15) and the independence of $A_{r,1}(T)$ and $A_{r,2}(T)$, we have
\[
\bigl|\mathbb{E}[A_r^2(T)]-\sigma^2_{r,\theta}\bigr| \leqslant \Bigl|\mathbb{E}[A^2_{r,1}(T)]-\frac1{2\theta^3}\Bigl(\frac{r\sqrt2}2+\frac{\sqrt{1-r^2}}2\Bigr)^2\Bigr| + \Bigl|\mathbb{E}[A^2_{r,2}(T)]-\frac1{2\theta^3}\Bigl(\frac{r\sqrt2}2-\frac{\sqrt{1-r^2}}2\Bigr)^2\Bigr|.
\]
Both terms on the right-hand side can be treated similarly; in fact, it suffices to show that, for $i=0,1$,
\[
\Bigl|\mathbb{E}\Bigl[\Bigl(\frac1{\sqrt T}\int_0^T I^{U_i}_2(f_t^{\otimes2})\,dt\Bigr)^2\Bigr]-\frac1{2\theta^3}\Bigr| = O\Bigl(\frac1T\Bigr).
\]
By the isometry property (5), we have
\[
\mathbb{E}\Bigl[I^{U_i}_2\Bigl(\frac1{\sqrt T}\int_0^T f_t^{\otimes2}\,dt\Bigr)^2\Bigr] = 2\,\Bigl\|\frac1{\sqrt T}\int_0^T f_t^{\otimes2}\,dt\Bigr\|^2_{L^2([0,T]^2)}
= \frac2T\int_0^T\!\!\int_0^T\langle f_t^{\otimes2},f_s^{\otimes2}\rangle\,dt\,ds
= \frac2T\int_0^T\!\!\int_0^T(\langle f_t,f_s\rangle)^2\,dt\,ds
= \frac2T\int_0^T\!\!\int_0^T\Bigl(\int_0^{t\wedge s}e^{-\theta(t-u)}e^{-\theta(s-u)}\,du\Bigr)^2 dt\,ds
= \frac1{\theta^2}\frac1T\int_0^T\!\!\int_0^t e^{-2\theta t}e^{-2\theta s}\bigl(e^{2\theta s}-1\bigr)^2\,ds\,dt
= \frac1{2\theta^3}\frac1T\Bigl(T-\int_0^T e^{-2\theta t}\,dt\Bigr) - \frac2{\theta^2T}\int_0^T te^{-2\theta t}\,dt + \frac1{2\theta^3}\frac1T\int_0^T e^{-2\theta t}\,dt - \frac1{2\theta^3}\frac1T\int_0^T e^{-4\theta t}\,dt
= \frac1{2\theta^3} - \frac34\frac1{\theta^4T}\bigl(1-e^{-2\theta T}\bigr) + \frac3{\theta^3}e^{-2\theta T} + \frac1{4\theta^3}\frac1T + \frac1{8\theta^4}\frac1T\bigl(e^{-4\theta T}-1\bigr).
\]
The desired result follows.

Proposition 4 Consider $A_r(T)$ defined in (15). There exists a constant $C$ depending only on $\theta$ and $r$, but not on $T$, such that
\[
d_W\Bigl(\frac{A_r(T)}{\mathbb{E}[A_r^2(T)]^{1/2}},\,\mathcal N(0,1)\Bigr) \leqslant \frac{C}{\mathbb{E}[A_r^2(T)]^2\wedge\mathbb{E}[A_r^2(T)]^{3/2}}\times\frac1{\sqrt T}. \tag{21}
\]
Moreover, there exists a constant $C$ depending on $\theta$ and $r$ such that
\[
d_W\Bigl(\frac1{\sigma_{r,\theta}}A_r(T),\,\mathcal N(0,1)\Bigr) \leqslant \frac C{\sqrt T}, \tag{22}
\]
with $\sigma_{r,\theta} := \bigl(\frac1{2\theta^3}\bigl(\frac12+\frac{r^2}2\bigr)\bigr)^{1/2}$.

Proof. First observe that the term $A_r(T)$ defined in (15) is a second-Wiener-chaos term with respect to a two-sided Brownian motion $(W(t))_{t\in\mathbb{R}}$ that we can construct from $(U_0(t))_{t\geqslant0}$ and $(U_1(t))_{t\geqslant0}$ as follows:
\[
W(t) := U_1(t)\mathbf{1}_{\{t\geqslant0\}} + U_0(-t)\mathbf{1}_{\{t<0\}}, \quad t\in\mathbb{R}.
\]
It is therefore easy to check that the following equality holds in law:
\[
A_r(T) = \frac1{\sqrt T}\Bigl[\frac{r\sqrt2}2+\frac{\sqrt{1-r^2}}2\Bigr]\int_0^T I^{U_1}_2(f_u^{\otimes2})\,du + \frac1{\sqrt T}\Bigl[\frac{r\sqrt2}2-\frac{\sqrt{1-r^2}}2\Bigr]\int_0^T I^{U_0}_2(f_u^{\otimes2})\,du
\overset{\text{law}}{=} I^W_2\Bigl(\frac1{\sqrt T}\int_0^T\Bigl(\Bigl[\frac{r\sqrt2}2+\frac{\sqrt{1-r^2}}2\Bigr]f_u^{\otimes2} + \Bigl[\frac{r\sqrt2}2-\frac{\sqrt{1-r^2}}2\Bigr]\bar{\bar f}_u^{\otimes2}\Bigr)du\Bigr),
\]
where $\bar{\bar f}(x) := -f(-x)\mathbf{1}_{\{x<0\}}$.
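The two closed-form constants appearing in the proof of Proposition 2, $\|\delta\|^3_{L^{3/2}} = \frac2{9\theta^5}$ and $\|\delta\|^4_{L^{4/3}} = \frac{27}{128\theta^7}$, can be checked by direct numerical quadrature. The sketch below is ours and is not part of the paper; the value of $\theta$ and the integration grid are arbitrary choices.

```python
import numpy as np

theta = 0.7                                         # arbitrary test value of theta
u = np.linspace(-80.0 / theta, 80.0 / theta, 400_001)
du = u[1] - u[0]
delta = np.exp(-theta * np.abs(u)) / (2.0 * theta)  # delta(u) = e^{-theta|u|}/(2 theta)

# ||delta||_{L^{3/2}}^3 = (int delta^{3/2} du)^2, claimed equal to 2/(9 theta^5)
norm_32 = (du * (delta ** 1.5).sum()) ** 2
# ||delta||_{L^{4/3}}^4 = (int delta^{4/3} du)^3, claimed equal to 27/(128 theta^7)
norm_43 = (du * (delta ** (4.0 / 3.0)).sum()) ** 3

print(norm_32, 2.0 / (9.0 * theta ** 5))
print(norm_43, 27.0 / (128.0 * theta ** 7))
```

Both quadratures agree with the closed forms to several decimal places, which also corroborates the Hölder/Young chains used to bound $\langle\delta^{*(2)},\delta\rangle$ and $\langle\delta^{*(3)},\delta\rangle$.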
Therefore, it is possible to apply the optimal fourth moment theorem (10) to the term $A_r(T)/\mathbb{E}[A_r^2(T)]^{1/2}$; we get
\[
d_W\Bigl(\frac{A_r(T)}{\mathbb{E}[A_r^2(T)]^{1/2}},\,\mathcal N(0,1)\Bigr) \asymp \max\Bigl\{k_3\Bigl(\frac{A_r(T)}{\mathbb{E}[A_r^2(T)]^{1/2}}\Bigr),\,k_4\Bigl(\frac{A_r(T)}{\mathbb{E}[A_r^2(T)]^{1/2}}\Bigr)\Bigr\}. \tag{23}
\]
Hence, using Proposition 2, we get the following estimate:
\[
d_W\Bigl(\frac{A_r(T)}{\mathbb{E}[A_r^2(T)]^{1/2}},\,\mathcal N(0,1)\Bigr) \leqslant \frac{C\,(c_1(\theta,r)+c_2(\theta,r))}{\mathbb{E}[A_r^2(T)]^2\wedge\mathbb{E}[A_r^2(T)]^{3/2}}\times\frac1{\sqrt T}, \tag{24}
\]
where $C$ is a constant coming from (23) and $c_1(\theta,r),c_2(\theta,r)$ are defined in (17). For (22), we will need the following proposition.

Proposition 5 Let $N\sim\mathcal N(0,1)$ and $\sigma>0$. Then
\[
d_W(\sigma N,N) \leqslant \frac{\sqrt2}{\sqrt\pi}\bigl|1-\sigma^2\bigr|.
\]
For $\mu\in\mathbb{R}$, $F\in L^2(\Omega)$, $Y\in L^1(\Omega)$, we have
\[
d_W(\sigma F+\mu+Y,N) \leqslant |\mu| + \mathbb{E}[|Y|] + \sigma\,d_W(F,N) + \frac{\sqrt2}{\sqrt\pi}\bigl|1-\sigma^2\bigr|.
\]
Proof. Let $N\sim\mathcal N(0,1)$. Using Stein's characterization of $d_W$, we get the following estimate:
\[
d_W(\sigma N,N) = \sup_{h\in\mathrm{Lip}(1)}\bigl|\mathbb{E}[h(\sigma N)]-\mathbb{E}[h(N)]\bigr| \leqslant \sup_{f\in\mathcal F_W}\bigl|\mathbb{E}[f'(\sigma N)-\sigma Nf(\sigma N)]\bigr|,
\]
where $\mathcal F_W := \bigl\{f:\mathbb{R}\to\mathbb{R},\ f\in C^1,\ \|f'\|_\infty\leqslant\sqrt{2/\pi}\bigr\}$. By an integration by parts, we have
\[
\mathbb{E}[Nf(\sigma N)] = \frac1{\sqrt{2\pi}}\int_{\mathbb{R}}xf(\sigma x)e^{-x^2/2}\,dx = \frac\sigma{\sqrt{2\pi}}\int_{\mathbb{R}}f'(\sigma x)e^{-x^2/2}\,dx = \sigma\,\mathbb{E}[f'(\sigma N)].
\]
It follows that
\[
d_W(\sigma N,N) \leqslant \frac{\sqrt2}{\sqrt\pi}\bigl|1-\sigma^2\bigr|.
\]
On the other hand, for $F\in L^2(\Omega)$, $Y\in L^1(\Omega)$, $\mu\in\mathbb{R}$, $\sigma>0$, we have
\[
d_W(\sigma F+\mu+Y,N) = \sup_{h\in\mathrm{Lip}(1)}\bigl|\mathbb{E}[h(\sigma F+\mu+Y)]-\mathbb{E}[h(N)]\bigr|
\leqslant \sup_{h\in\mathrm{Lip}(1)}\bigl|\mathbb{E}[h(\sigma F+\mu+Y)]-\mathbb{E}[h(\sigma F)]\bigr| + \sup_{h\in\mathrm{Lip}(1)}\bigl|\mathbb{E}[h(\sigma F)]-\mathbb{E}[h(N)]\bigr|
\leqslant |\mu| + \mathbb{E}[|Y|] + d_W(\sigma F,N).
\]
Using the triangle inequality, we have $d_W(\sigma F,N)\leqslant d_W(\sigma F,\sigma N)+d_W(\sigma N,N)$.
Moreover, we can check that $d_W(\sigma F,\sigma N)=\sigma\,d_W(F,N)$. Indeed, using the definition of the Wasserstein distance, we have
\[
d_W(\sigma F,\sigma N) = \sup_{h\in\mathrm{Lip}(1)}\bigl|\mathbb{E}[h(\sigma F)]-\mathbb{E}[h(\sigma N)]\bigr| = \sup_{h\in\mathrm{Lip}(\sigma)}\bigl|\mathbb{E}[h(F)]-\mathbb{E}[h(N)]\bigr|.
\]
Similarly,
\[
\sigma\,d_W(F,N) = \sigma\sup_{h\in\mathrm{Lip}(1)}\bigl|\mathbb{E}[h(F)]-\mathbb{E}[h(N)]\bigr| = \sup_{h\in\mathrm{Lip}(1)}\bigl|\mathbb{E}[(\sigma h)(F)]-\mathbb{E}[(\sigma h)(N)]\bigr| = \sup_{h\in\mathrm{Lip}(\sigma)}\bigl|\mathbb{E}[h(F)]-\mathbb{E}[h(N)]\bigr|.
\]
The desired result follows using (21).

It follows from Propositions 5 and 3 that
\[
d_W\Bigl(\frac{A_r(T)}{\sigma_{r,\theta}},\,\mathcal N(0,1)\Bigr) \leqslant \frac{\mathbb{E}[A_r^2(T)]^{1/2}}{\sigma_{r,\theta}}\,d_W\Bigl(\frac{A_r(T)}{\mathbb{E}[A_r^2(T)]^{1/2}},\,\mathcal N(0,1)\Bigr) + \frac{\sqrt2}{\sqrt\pi}\Bigl|1-\frac{\mathbb{E}[A_r^2(T)]}{\sigma^2_{r,\theta}}\Bigr|
\leqslant \frac C{\sigma_{r,\theta}}\,\frac1{\mathbb{E}[A_r^2(T)]^2\wedge\mathbb{E}[A_r^2(T)]^{3/2}}\times\frac1{\sqrt T} + \frac{\sqrt2}{\sqrt\pi}\frac1T \leqslant \frac{C(\theta,r)}{\sqrt T}.
\]
On the other hand, from the decomposition (45), we can write
\[
\frac1{\sigma_{r,\theta}}\Bigl(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\Bigr) = \frac1{\sigma_{r,\theta}}A_r(T) + \mu_\theta(T) + Y_\theta(T),
\]
where $\mu_\theta(T)=O(1/\sqrt T)$ and $Y_\theta(T):=\sqrt T\,\bar X_1(T)\bar X_2(T)$ (up to the constant factor $-1/\sigma_{r,\theta}$, which does not affect the bounds below). Therefore, by Proposition 5 and estimate (22), we can write
\[
d_W\Bigl(\frac1{\sigma_{r,\theta}}\Bigl(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\Bigr),\,\mathcal N(0,1)\Bigr) \leqslant |\mu_\theta(T)| + \mathbb{E}[|Y_\theta(T)|] + \frac{C(\theta,r)}{\sqrt T}.
\]
For the term $\mathbb{E}[|Y_\theta(T)|]$: we have $X_2(u)=rX_1(u)+\sqrt{1-r^2}\,X_0(u)$, where $X_0$ is the Ornstein--Uhlenbeck process driven by the Brownian motion $W_0$ considered at the beginning of this section. We can therefore write
\[
\mathbb{E}[|Y_\theta(T)|] \leqslant |r|\sqrt T\,\mathbb{E}[\bar X_1^2(T)] + \sqrt{1-r^2}\,\sqrt T\,\mathbb{E}[\bar X_1^2(T)]^{1/2}\,\mathbb{E}[\bar X_0^2(T)]^{1/2}.
\]
On the other hand,
\[
\mathbb{E}[\bar X_i^2(T)] = \frac1{T^2}\int_0^T\Bigl(\int_0^T f_t(u)\,dt\Bigr)^2du = \frac1{T^2}\int_0^T e^{2\theta u}\Bigl(\int_u^T e^{-\theta t}\,dt\Bigr)^2du = \frac1{T^2}\frac1{\theta^2}\int_0^T\bigl(1-e^{-\theta(T-u)}\bigr)^2\,du \leqslant \frac1{\theta^2}\frac1T. \tag{25}
\]
It follows that
\[
\mathbb{E}[|Y_\theta(T)|] \leqslant \frac1{\sqrt T}\frac1{\theta^2}\bigl(|r|+\sqrt{1-r^2}\bigr), \tag{26}
\]
which finishes the proof of Theorem 1.

Proposition 6 Let $p\geqslant1$. There exists a constant $c(p,\theta)$ depending only on $\theta$ and $p$ such that
\[
\mathbb{E}\Bigl[\Bigl|2\theta\sqrt{\frac{Y_{11}(T)}{T}\times\frac{Y_{22}(T)}{T}}-1\Bigr|^p\Bigr]^{1/p} \leqslant \frac{c(p,\theta)}{\sqrt T}.
\]
Moreover, as $T\to+\infty$, we have
\[
\sqrt{\frac{Y_{11}(T)}{T}\times\frac{Y_{22}(T)}{T}} \xrightarrow{\ a.s.\ } \frac1{2\theta}. \tag{27}
\]
Proof. We use the fact that if $X\geqslant0$ a.s., then for any $p\geqslant1$ we have $\mathbb{E}[|\sqrt X-1|^p]^{1/p}\leqslant\mathbb{E}[|X-1|^p]^{1/p}$. Using the notation $\bar Y_{ii}(T):=2\theta\frac{Y_{ii}(T)}{T}$ for $i=1,2$, by Minkowski's and Hölder's inequalities we can write
\[
\mathbb{E}\bigl[\bigl|\bar Y_{11}(T)\bar Y_{22}(T)-1\bigr|^p\bigr]^{1/p} \leqslant \mathbb{E}\bigl[\bigl|\bar Y_{11}(T)(\bar Y_{22}(T)-1)\bigr|^p\bigr]^{1/p} + \mathbb{E}\bigl[\bigl|\bar Y_{11}(T)-1\bigr|^p\bigr]^{1/p}
\leqslant \mathbb{E}\bigl[\bigl|\bar Y_{11}(T)\bigr|^{2p}\bigr]^{1/2p}\,\mathbb{E}\bigl[\bigl|\bar Y_{22}(T)-1\bigr|^{2p}\bigr]^{1/2p} + \mathbb{E}\bigl[\bigl|\bar Y_{11}(T)-1\bigr|^p\bigr]^{1/p}.
\]
On the other hand, we can show that for any $p\geqslant1$ there exists a constant $c(p,\theta)$ such that
\[
\mathbb{E}\bigl[\bigl|\bar Y_{11}(T)-1\bigr|^p\bigr]^{1/p} \leqslant \frac{c(p,\theta)}{\sqrt T},
\]
where
\[
c(p,\theta) := 3\max\Bigl\{\frac{2(2p-1)}\theta,\ \frac{(p-1)\sqrt2}{\sqrt\theta}\Bigl(3+\frac7{4\theta}\Bigr)^{1/2},\ \frac1{2\theta}\Bigr\}.
\]
We have
\[
\bar Y_{11}(T) = \frac{2\theta}T\int_0^T\bigl(X_1^2(u)-\mathbb{E}[X_1^2(u)]\bigr)\,du + \frac{2\theta}T\int_0^T\mathbb{E}[X_1^2(u)]\,du - 2\theta\bar X_1^2(T)
= \frac{2\theta}T\int_0^T\bigl((I^{W_1}_1(f_u))^2-\|f_u\|^2_{L^2([0,T])}\bigr)\,du + \frac{2\theta}T\int_0^T\mathbb{E}[X_1^2(u)]\,du - 2\theta\bar X_1^2(T)
= 2\theta\,I^{W_1}_2(k_T) + \frac{2\theta}T\int_0^T\mathbb{E}[X_1^2(u)]\,du - 2\theta\bar X_1^2(T),
\]
where
\[
k_T(x,y) := \frac1T\int_0^T f_u^{\otimes2}(x,y)\,du = \frac1T\int_0^T e^{-\theta(u-x)}e^{-\theta(u-y)}\mathbf{1}_{[0,u]}(x)\mathbf{1}_{[0,u]}(y)\,du
= \frac1T\frac1{2\theta}e^{\theta x}e^{\theta y}\bigl(e^{-2\theta(x\vee y)}-e^{-2\theta T}\bigr)\mathbf{1}_{[0,T]}(x)\mathbf{1}_{[0,T]}(y),
\]
and
\[
\frac{2\theta}T\int_0^T\mathbb{E}[X_1^2(u)]\,du = \frac{2\theta}T\int_0^T\|f_u\|^2_{L^2([0,T])}\,du = \frac{2\theta}T\int_0^T\!\!\int_0^u e^{-2\theta(u-t)}\,dt\,du = \frac1T\int_0^T\bigl(1-e^{-2\theta u}\bigr)\,du = 1-\frac1{2\theta T}\bigl(1-e^{-2\theta T}\bigr). \tag{28}
\]
The following inequality holds for any $p\geqslant1$:
\[
\mathbb{E}\bigl[\bigl|\bar Y_{11}(T)-1\bigr|^p\bigr]^{1/p} \leqslant 2\theta\,\mathbb{E}[|I^{W_1}_2(k_T)|^p]^{1/p} + \Bigl|\frac{2\theta}T\int_0^T\mathbb{E}[X_1^2(u)]\,du-1\Bigr| + 2\theta\,\mathbb{E}[|\bar X_1(T)|^{2p}]^{1/p} \tag{29}
\]
\[
\leqslant 2\theta(p-1)\,\mathbb{E}[|I^{W_1}_2(k_T)|^2]^{1/2} + \Bigl|\frac{2\theta}T\int_0^T\mathbb{E}[X_1^2(u)]\,du-1\Bigr| + 2\theta(2p-1)\,\mathbb{E}[\bar X_1^2(T)], \tag{30}
\]
where we used the hypercontractivity property for multiple Wiener integrals.
On the other hand,
\[
\mathbb{E}\bigl[I^{W_1}_2(k_T)^2\bigr] = 2\,\|k_T\|^2_{L^2([0,T]^2)}
= \frac1{T^2}\frac1{2\theta^2}\int_{[0,T]^2}e^{2\theta x}e^{2\theta y}\bigl(e^{-2\theta(x\vee y)}-e^{-2\theta T}\bigr)^2\,dx\,dy
= \frac1{T^2}\frac1{2\theta^3}\Bigl(\frac1{4\theta}\bigl(1-e^{-4\theta T}\bigr)+\frac1\theta\bigl(e^{-2\theta T}-1\bigr)+T\bigl(1+2e^{-2\theta T}\bigr)-\frac1{2\theta}\bigl(1-e^{-4\theta T}\bigr)\Bigr)
\leqslant \frac1{2\theta^3}\Bigl(3+\frac7{4\theta}\Bigr)\frac1T. \tag{31}
\]
The desired result follows from inequalities (29), (31), (51) and (28).

Notice that the denominator term of $\rho(T)$ also satisfies, as $T\to+\infty$,
\[
\sqrt{\frac{Y_{11}(T)}{T}\times\frac{Y_{22}(T)}{T}} \xrightarrow{\ a.s.\ } \frac1{2\theta}. \tag{32}
\]
Indeed, for the term $\frac{Y_{11}(T)}T$: we have $X_1(t)=\int_0^t e^{-\theta(t-u)}dW_1(u)$, which can also be written as $X_1(t)=Z_1(t)-e^{-\theta t}Z_1(0)$, where $Z_1(t)=\int_{-\infty}^t e^{-\theta(t-u)}dW_1(u)$; we recall that the process $Z_1$ is ergodic, stationary and Gaussian. Therefore, by the ergodic theorem, the following convergences hold as $T\to+\infty$:
\[
\frac1T\int_0^T X_1(t)\,dt = \frac1T\int_0^T Z_1(t)\,dt - \frac{Z_1(0)}{\theta T}\bigl(1-e^{-\theta T}\bigr) \xrightarrow{\ a.s.\ } \mathbb{E}[Z_1(0)]=0, \qquad \frac1T\int_0^T X_1^2(t)\,dt \xrightarrow{\ a.s.\ } \mathbb{E}[Z_1^2(0)]=\frac1{2\theta}. \tag{33}
\]
Hence we get, as $T\to+\infty$,
\[
\frac{Y_{11}(T)}T = \frac1T\int_0^T X_1^2(u)\,du - \Bigl(\frac1T\int_0^T X_1(u)\,du\Bigr)^2 \xrightarrow{\ a.s.\ } \frac1{2\theta}. \tag{34}
\]
It follows that $\sqrt{Y_{11}(T)/T}\xrightarrow{a.s.}1/\sqrt{2\theta}$ as $T\to+\infty$, and similarly $\sqrt{Y_{22}(T)/T}\xrightarrow{a.s.}1/\sqrt{2\theta}$.

3.2 Law of large numbers for $\rho(T)$ under $H_a$

Theorem 7 Under $H_a$, Yule's nonsense correlation $\rho(T)$ satisfies the following law of large numbers:
\[
\rho(T) \xrightarrow{\ a.s.\ } r, \quad\text{as } T\to+\infty.
\]
Proof. According to Proposition 6, we have
\[
\sqrt{\frac{Y_{11}(T)}{T}\times\frac{Y_{22}(T)}{T}} \xrightarrow{\ a.s.\ } \frac1{2\theta}. \tag{35}
\]
It remains to prove that the numerator term satisfies
\[
\frac{Y_{12}(T)}T \xrightarrow{\ a.s.\ } \frac r{2\theta}, \quad\text{as } T\to+\infty. \tag{36}
\]
From (45), we have
\[
\frac{Y_{12}(T)}T = \frac{A_r(T)}{\sqrt T} + \frac r{2\theta} + O\Bigl(\frac1T\Bigr) - \bar X_1(T)\bar X_2(T). \tag{37}
\]
We have
\[
\frac{A_r(T)}{\sqrt T} = \frac{c_1(r)}T\int_0^T I^{U_1}_2(f_u^{\otimes2})\,du + \frac{c_2(r)}T\int_0^T I^{U_0}_2(f_u^{\otimes2})\,du,
\]
with $c_1(r)$ and $c_2(r)$ as in (16). We start by proving that, for $i=0,1$,
\[
\frac1n\int_0^n I^{U_i}_2(f_u^{\otimes2})\,du \xrightarrow{\ a.s.\ } 0, \quad\text{as } n\to+\infty.
\]
We have $U_1\overset{\text{law}}=U_0$, so both terms can be treated similarly:
\[
\mathbb{E}\Bigl[\Bigl(\frac1n\int_0^n I^{U_1}_2(f_u^{\otimes2})\,du\Bigr)^2\Bigr] = \frac1{n^2}\int_0^n\!\!\int_0^n\mathbb{E}\bigl[I^{U_1}_2(f_u^{\otimes2})I^{U_1}_2(f_v^{\otimes2})\bigr]\,du\,dv
= \frac1{n^2}\int_0^n\!\!\int_0^n\langle f_u^{\otimes2},f_v^{\otimes2}\rangle\,du\,dv
= \frac1{n^2}\int_0^n\!\!\int_0^n(\langle f_u,f_v\rangle)^2\,du\,dv
= \frac1{n^2}\frac1{4\theta^2}\int_0^n\!\!\int_0^n e^{-2\theta(u+v)}\bigl[e^{2\theta(u\wedge v)}-1\bigr]^2\,du\,dv
\leqslant \frac1{n^2}\frac1{4\theta^2}\int_0^n\!\!\int_0^n e^{-2\theta|u-v|}\,du\,dv
= \frac1{n^2}\frac1{2\theta^2}\int_0^n\!\!\int_0^u e^{-2\theta t}\,dt\,du
= \frac1n\frac1{4\theta^3} + \frac1{8\theta^4n^2}\bigl(e^{-2\theta n}-1\bigr) = O(n^{-1}).
\]
Let $\varepsilon>0$ and $p>2$. Using the hypercontractivity property of multiple Wiener integrals (7), we can write
\[
\sum_{n=1}^{+\infty}\mathbb{P}\Bigl(\Bigl|\frac{A_r(n)}{\sqrt n}\Bigr|>\varepsilon\Bigr) = \sum_{n=1}^{+\infty}\mathbb{P}\Bigl(\Bigl|\frac{c_1(r)}n\int_0^n I^{U_1}_2(f_u^{\otimes2})\,du+\frac{c_2(r)}n\int_0^n I^{U_0}_2(f_u^{\otimes2})\,du\Bigr|>\varepsilon\Bigr)
\leqslant \sum_{n=1}^{+\infty}\mathbb{P}\Bigl(\Bigl|\frac{c_1(r)}n\int_0^n I^{U_1}_2(f_u^{\otimes2})\,du\Bigr|>\frac\varepsilon2\Bigr) + \sum_{n=1}^{+\infty}\mathbb{P}\Bigl(\Bigl|\frac{c_2(r)}n\int_0^n I^{U_0}_2(f_u^{\otimes2})\,du\Bigr|>\frac\varepsilon2\Bigr)
\leqslant \frac{2^{p+1}\bigl(|c_1(r)|^p+|c_2(r)|^p\bigr)}{\varepsilon^p}\sum_{n=1}^{+\infty}\mathbb{E}\Bigl[\Bigl|\frac1n\int_0^n I^{U_1}_2(f_u^{\otimes2})\,du\Bigr|^2\Bigr]^{p/2}
\leqslant \frac{C(r,p)}{\varepsilon^p}\sum_{n=1}^{+\infty}\frac1{n^{p/2}} < +\infty.
\]
It follows by the Borel--Cantelli lemma that $\frac{A_r(n)}{\sqrt n}\xrightarrow{a.s.}0$ as $n\to+\infty$. By Lemma 3.3 of [21], we conclude that we also have $\frac{A_r(T)}{\sqrt T}\xrightarrow{a.s.}0$ as $T\to+\infty$. Moreover, using (33), we have $\bar X_1(T)\bar X_2(T)\xrightarrow{a.s.}0$ as $T\to+\infty$. It therefore follows from (37) that
\[
\frac{Y_{12}(T)}T \xrightarrow{\ a.s.\ } \frac r{2\theta}.
\]
The desired result is obtained.

3.3 Gaussian fluctuations of $\rho(T)$ under $H_a$

From (35), we will use the following approximation of $\sqrt T(\rho(T)-r)$ for $T$ large:
\[
\sqrt T(\rho(T)-r) \simeq \frac{\dfrac{Y_{12}(T)}{\sqrt T}-\dfrac{r\sqrt T}{2\theta}}{\sqrt{\dfrac{Y_{11}(T)}T\times\dfrac{Y_{22}(T)}T}}.
\]
It follows that we can write, for $T$ large,
\[
\frac{\sqrt T\sqrt\theta}{\sqrt{1+r^2}}(\rho(T)-r) \simeq \frac{2\theta^{3/2}}{\sqrt{1+r^2}}\Bigl(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\Bigr)\Bigm/2\theta\sqrt{\frac{Y_{11}(T)}T\times\frac{Y_{22}(T)}T}.
\]
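The normalization above anticipates a limiting variance of $(1+r^2)/\theta$ for $\sqrt T(\rho(T)-r)$. This can be probed by a quick Monte Carlo experiment; the sketch below is illustrative only (the discretization, replication counts, and tolerances are our own assumptions, not the paper's).

```python
import numpy as np

def rho_samples(theta, r, T, n, reps, seed=0):
    """Monte Carlo draws of rho(T), vectorized over `reps` independent
    pairs of OU paths; exact OU transitions on an n-step grid, with the
    driving-noise coupling of (11). Integrals are left Riemann sums."""
    rng = np.random.default_rng(seed)
    dt = T / n
    a = np.exp(-theta * dt)
    s = np.sqrt((1.0 - a * a) / (2.0 * theta))
    X1 = np.zeros(reps); X2 = np.zeros(reps)
    S1 = np.zeros(reps); S2 = np.zeros(reps)               # int Xi du
    S11 = np.zeros(reps); S22 = np.zeros(reps); S12 = np.zeros(reps)
    for _ in range(n):
        S1 += X1 * dt; S2 += X2 * dt
        S11 += X1 * X1 * dt; S22 += X2 * X2 * dt; S12 += X1 * X2 * dt
        z1 = rng.standard_normal(reps)
        z2 = r * z1 + np.sqrt(1.0 - r * r) * rng.standard_normal(reps)
        X1 = a * X1 + s * z1
        X2 = a * X2 + s * z2
    Y11 = S11 - S1 * S1 / T                                # Y_ij(T) of (13)
    Y22 = S22 - S2 * S2 / T
    Y12 = S12 - S1 * S2 / T
    return Y12 / np.sqrt(Y11 * Y22)

theta, r, T = 1.0, 0.5, 100.0
stat = np.sqrt(T) * (rho_samples(theta, r, T, n=2000, reps=400, seed=1) - r)
print(stat.mean(), stat.var(), (1.0 + r * r) / theta)
```

Already at $T=100$, the empirical variance of $\sqrt T(\rho(T)-r)$ hovers around $(1+r^2)/\theta$, consistent with Theorem 8 below.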
Using the triangle inequality for $d_W$ together with the Cauchy--Schwarz and Hölder inequalities, and writing for brevity $G_T := \frac{2\theta^{3/2}}{\sqrt{1+r^2}}\bigl(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\bigr)$ and $D_T := 2\theta\sqrt{\frac{Y_{11}(T)}T\times\frac{Y_{22}(T)}T}$, we get
\[
d_W\Bigl(\frac{\sqrt T\sqrt\theta}{\sqrt{1+r^2}}(\rho(T)-r),\,N\Bigr) \leqslant d_W(G_T,N) + d_W\Bigl(\frac{G_T}{D_T},G_T\Bigr)
\leqslant d_W(G_T,N) + \mathbb{E}\Bigl[\Bigl|\frac{G_T}{D_T}\Bigr|\,\bigl|1-D_T\bigr|\Bigr]
\leqslant d_W(G_T,N) + \mathbb{E}\Bigl[\Bigl(\frac{G_T}{D_T}\Bigr)^2\Bigr]^{1/2}\,\mathbb{E}\bigl[(1-D_T)^2\bigr]^{1/2}
\leqslant d_W(G_T,N) + \|G_T\|_{L^4}\,\bigl\|D_T^{-1}\bigr\|_{L^4}\,\|1-D_T\|_{L^2}.
\]
According to Theorem 1, there exists a constant $c(\theta,r)$ such that $d_W(G_T,N)\leqslant\frac{c(\theta,r)}{\sqrt T}$; on the other hand, thanks to Proposition 6, $\|1-D_T\|_{L^2}\leqslant\frac{c(\theta)}{\sqrt T}$. It remains to prove that the two terms $\|D_T^{-1}\|_{L^4}$ and $\|G_T\|_{L^4}$ are bounded in $T$.
Using the decomposition (45), Minkowski's inequality, and the hypercontractivity property, we get for all $T>0$:
\[
\Bigl\|\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\Bigr\|_{L^4} \leqslant \|A_r(T)\|_{L^4} + O\Bigl(\frac1{\sqrt T}\Bigr) + \sqrt T\,\|\bar X_1(T)\bar X_2(T)\|_{L^4}
\leqslant 3\|A_r(T)\|_{L^2} + O\Bigl(\frac1{\sqrt T}\Bigr) + |r|\sqrt T\,\|\bar X_1^2(T)\|_{L^4} + \sqrt{1-r^2}\,\sqrt T\,\|\bar X_1(T)\bar X_0(T)\|_{L^4}
\leqslant 3\|A_r(T)\|_{L^2} + O\Bigl(\frac1{\sqrt T}\Bigr) + 7|r|\sqrt T\,\mathbb{E}[\bar X_1^2(T)] + 3\sqrt T\sqrt{1-r^2}\,\|\bar X_1(T)\|_{L^2}\|\bar X_0(T)\|_{L^2}
\leqslant 3\|A_r(T)\|_{L^2} + O\Bigl(\frac1{\sqrt T}\Bigr) \leqslant C(\theta,r),
\]
where we used inequality (51) and Proposition 3. It follows that
\[
\sup_{T>0}\Bigl\|\frac{2\theta^{3/2}}{\sqrt{1+r^2}}\Bigl(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\Bigr)\Bigr\|_{L^4} < \infty.
\]
For the term $\bigl\|1\bigm/2\theta\sqrt{\frac{Y_{11}(T)}T\frac{Y_{22}(T)}T}\bigr\|_{L^4}$, using the notation $\bar Y_{ii}(T):=2\theta\frac{Y_{ii}(T)}T$, $i=1,2$, it is sufficient to show that there exists $T_0>0$ such that
\[
\sup_{T\geqslant T_0}\mathbb{E}\bigl[\bar Y_{ii}(T)^{-4}\bigr] < +\infty, \quad i=1,2.
\]
From equation (34), it is easy to derive an estimator of the parameter $\theta$ if the latter is unknown; in fact, we showed that, for $i=1,2$,
\[
\tilde\theta_T := \frac12\Bigl(\frac{Y_{ii}(T)}T\Bigr)^{-1} = \frac12\Bigl(\frac1T\int_0^T X_i(t)^2\,dt-\Bigl(\frac1T\int_0^T X_i(t)\,dt\Bigr)^2\Bigr)^{-1} \tag{38}
\]
is strongly consistent. Moreover, in the reference [22] it is proved that $\tilde\theta_T$ is asymptotically Gaussian; more precisely, we have
\[
\sqrt T\Bigl(\frac12\Bigl(\frac{Y_{ii}(T)}T\Bigr)^{-1}-\theta\Bigr) \xrightarrow[T\to+\infty]{\ \mathcal L\ } \mathcal N(0,2\theta).
\]
Therefore, using the delta method, we can conclude that, for $i=1,2$,
\[
\sqrt T\bigl(\bar Y_{ii}(T)-1\bigr) \xrightarrow[T\to+\infty]{\ \mathcal L\ } \mathcal N\Bigl(0,\frac2\theta\Bigr). \tag{39}
\]
We can write
\[
\mathbb{E}[\bar Y_{11}^{-4}(T)] = \mathbb{E}\bigl[\bar Y_{11}^{-4}(T)\mathbf{1}_{\{|\bar Y_{11}(T)-1|\geqslant1/\sqrt T\}}\bigr] + \mathbb{E}\bigl[\bar Y_{11}^{-4}(T)\mathbf{1}_{\{|\bar Y_{11}(T)-1|<1/\sqrt T\}}\bigr].
\]
For the second expectation: on the event $\{|\bar Y_{11}(T)-1|<1/\sqrt T\}$ we have $1-1/\sqrt T<\bar Y_{11}(T)<1+1/\sqrt T$ a.s., so for all $T>T_0$ (with, say, $T_0=4$) the variable $\bar Y_{11}(T)$ is bounded away from $0$ on this event; hence
\[
\sup_{T\geqslant T_0}\mathbb{E}\bigl[\bar Y_{11}^{-4}(T)\mathbf{1}_{\{|\bar Y_{11}(T)-1|<1/\sqrt T\}}\bigr] < +\infty.
\]
For the first expectation, using (39), we can say that, for $T>\frac{2\pi}\theta$,
\[
\mathbb{E}\bigl[\bar Y_{11}^{-4}(T)\mathbf{1}_{\{|\bar Y_{11}(T)-1|\geqslant1/\sqrt T\}}\bigr] \sim \int_{\frac{\sqrt\theta}{\sqrt2}}^{+\infty}\Bigl(\frac{\sqrt2}{\sqrt\theta}\frac z{\sqrt T}+1\Bigr)^{-4}e^{-z^2/2}\,dz + \int_{\frac{\sqrt\theta}{\sqrt2}}^{+\infty}\Bigl|1-\frac{\sqrt2}{\sqrt\theta}\frac z{\sqrt T}\Bigr|^{-4}e^{-z^2/2}\,dz < +\infty.
\]
Therefore, for $T>(T_0\vee\frac{2\pi}\theta)$, $\mathbb{E}[\bar Y_{11}(T)^{-4}]<+\infty$, and the following theorem follows.

Theorem 8 There exists a constant $C(\theta,r)$ depending on $\theta$ and $r$ such that
\[
d_W\Bigl(\frac{\sqrt T\sqrt\theta}{\sqrt{1+r^2}}(\rho(T)-r),\,\mathcal N(0,1)\Bigr) \leqslant \frac{C(\theta,r)}{\sqrt T}.
\]
In particular,
\[
\sqrt T(\rho(T)-r) \xrightarrow{\ \mathcal L\ } \mathcal N\Bigl(0,\frac{1+r^2}\theta\Bigr) \quad\text{as } T\to+\infty.
\]

Remark 9 Notice that, in the particular case when $X_1$ and $X_2$ are independent, that is $r=0$, we recover the CLT that we proved in Theorem 3.8 of [6]: when $T\to+\infty$,
\[
\sqrt\theta\sqrt T\,\rho(T) \xrightarrow{\ \mathcal L\ } \mathcal N(0,1).
\]
Moreover, the rate of convergence in the $d_W$ metric is even better than the bound we found under $H_0$ in [6], Theorem 3.8, which was of order $\frac{\ln T}{\sqrt T}$; according to Theorem 8, we get, when $r=0$,
\[
d_W\bigl(\sqrt T\sqrt\theta\,\rho(T),\,\mathcal N(0,1)\bigr) \leqslant \frac{C(\theta)}{\sqrt T}.
\]

4 Some statistical applications of the Gaussian asymptotics of $\rho(T)$ under $H_a$

One possible scenario that may happen in practice is when the paths $X_1$ and $X_2$ are correlated but the value of the correlation $r$ is unknown. In this case, since Yule's nonsense correlation $\rho(T)$ satisfies a LLN under $H_a$ (Theorem 7), $\rho(T)$ approaches the value of the true correlation $r$ when the horizon $T$ is large. We can therefore consider $\rho(T)$ as a strongly consistent estimator of the parameter $r$.
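In code, the point estimate $\rho(T)$ and the confidence interval $I(T)$ built from the estimated drift $\tilde\theta_T$ of (38) can be assembled from a single discretized path pair. This is a hedged sketch, not the paper's procedure: the helper name, the Riemann-sum discretization, and the choice $i=1$ inside $\tilde\theta_T$ are our own.

```python
import numpy as np

def estimate_r_with_ci(X1, X2, T, q=1.959964):
    """From one observed pair of paths on [0, T]: the estimate rho(T) of r,
    the drift estimator tilde-theta_T of (38) (computed from X1, i.e. i = 1),
    and an asymptotic confidence interval for r with half-width
    q_{alpha/2} * sqrt(1 + rho^2) / sqrt(tilde-theta_T * T).
    The default q is the upper 0.025 Gaussian quantile (alpha = 0.05)."""
    n = len(X1) - 1
    dt = T / n
    m1 = dt * X1[:-1].sum() / T
    m2 = dt * X2[:-1].sum() / T
    v1 = dt * (X1[:-1] ** 2).sum() / T - m1 ** 2       # Y_11(T)/T
    v2 = dt * (X2[:-1] ** 2).sum() / T - m2 ** 2       # Y_22(T)/T
    c12 = dt * (X1[:-1] * X2[:-1]).sum() / T - m1 * m2
    rho = c12 / np.sqrt(v1 * v2)
    theta_hat = 0.5 / v1                               # tilde-theta_T, see (38)
    half = q * np.sqrt(1.0 + rho ** 2) / np.sqrt(theta_hat * T)
    return rho, theta_hat, (rho - half, rho + half)
```

For $\theta T\gg1$ the half-width shrinks like $T^{-1/2}$, matching the $\sqrt T$ rate in Theorem 8.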
Moreover, using the Gaussian fluctuations of $\rho(T)$ under $H_a$ (Theorem 8) together with Slutsky's lemma, we can write
\[
\frac{\sqrt T\sqrt\theta}{\sqrt{1+\rho^2(T)}}(\rho(T)-r) \xrightarrow{\ \mathcal L\ } \mathcal N(0,1) \quad\text{as } T\to+\infty.
\]
We can derive from this an asymptotic confidence interval of level $1-\alpha$ for the parameter $r$:
\[
I_\theta(T) := \Bigl[\rho(T)-\frac{\sqrt{1+\rho^2(T)}}{\sqrt\theta\sqrt T}\,q_{\alpha/2},\ \rho(T)+\frac{\sqrt{1+\rho^2(T)}}{\sqrt\theta\sqrt T}\,q_{\alpha/2}\Bigr],
\]
where $q_{\alpha/2}$ is the upper quantile of order $\alpha/2$ of the standard Gaussian law $\mathcal N(0,1)$.

It may also happen in practice that the drift parameter $\theta$, common to $X_1$ and $X_2$, is unknown. In this case, Theorem 8 together with Slutsky's lemma gives
\[
\frac{\sqrt T\sqrt{\tilde\theta_T}}{\sqrt{1+\rho^2(T)}}(\rho(T)-r) \xrightarrow{\ \mathcal L\ } \mathcal N(0,1) \quad\text{as } T\to+\infty,
\]
where $\tilde\theta_T$ is the estimator of $\theta$ defined in (38). In this case, an asymptotic confidence interval of level $1-\alpha$ for the parameter $r$ is given by
\[
I(T) := \Bigl[\rho(T)-\frac{\sqrt{1+\rho^2(T)}}{\sqrt{\tilde\theta_T}\sqrt T}\,q_{\alpha/2},\ \rho(T)+\frac{\sqrt{1+\rho^2(T)}}{\sqrt{\tilde\theta_T}\sqrt T}\,q_{\alpha/2}\Bigr].
\]

4.1 Testing independence of $X_1$ and $X_2$: rejection region and power of the test

The aim of this section is to build a statistical test of independence (or dependence) of the pair of processes $(X_1,X_2)$. That is, we propose a test of the hypotheses
\[
H_0:\ (X_1)\ \text{and}\ (X_2)\ \text{are independent}
\]
versus
\[
H_a:\ (X_1)\ \text{and}\ (X_2)\ \text{are correlated, for some } r=\mathrm{cor}(W_1,W_2)\in[-1,1]\setminus\{0\},
\]
based on the statistic $\rho(T)$ observed on the time interval $[0,T]$. Using the results of the previous section, we define the rejection regions and study the power of the test.

Let us fix a significance level $\alpha\in(0,1)$. We proved in [2] that, under $H_0$, the Yule statistic $\rho(T)$ satisfies the following CLT:
\[
\sqrt T\,\rho(T) \xrightarrow{\ \mathcal L\ } \mathcal N\Bigl(0,\frac1\theta\Bigr) \quad\text{as } T\to+\infty. \tag{40}
\]
Therefore, a natural test of independence of $(X_1)$ and $(X_2)$ rejects independence on the event $\{\sqrt T|\rho(T)|>c_\alpha\}$, with $c_\alpha$ a threshold depending on $\alpha$ that we now determine. By the definition of the type I error and by (40), we can write, for $T$ large,
\[
\alpha \approx \mathbb{P}_{H_0}\bigl(\sqrt T|\rho(T)|>c_\alpha\bigr) = \mathbb{P}_{H_0}\bigl(\sqrt T\sqrt\theta\,|\rho(T)|>\sqrt\theta\,c_\alpha\bigr).
\]
We infer that natural rejection regions $R_\alpha$ are of the form
\[
R_\alpha := \Bigl\{\sqrt T|\rho(T)|>\frac{q_{\alpha/2}}{\sqrt\theta}\Bigr\},
\]
where $q_{\alpha/2}$ is the upper $\alpha/2$-quantile of the standard normal distribution. The following proposition gives an estimate of the type II error, based on Theorem 8.

Proposition 10 Fix $\alpha\in(0,1)$, let $T$ be large and $r\in[-1,1]\setminus\{0\}$. Then there exists a constant $C(\theta,r,\alpha)$ depending on $\theta$, $\alpha$ and $r$ such that
\[
\beta = \mathbb{P}_{H_a}\Bigl[\sqrt T|\rho(T)|\leqslant\frac{q_{\alpha/2}}{\sqrt\theta}\Bigr] \leqslant \frac{C(\theta,r,\alpha)}{T^{1/4}}.
\]
Proof. The estimate for the type II error is a direct consequence of the rate of convergence found in Theorem 8. Denote $Z_T := \frac{\sqrt T}{\sigma_{r,\theta}}(\rho(T)-r)$, where here $\sigma_{r,\theta}:=\sqrt{(1+r^2)/\theta}$ denotes the limiting standard deviation of $\sqrt T(\rho(T)-r)$; let $F_{Z_T}(\cdot)$ be its cumulative distribution function, and set $c_\alpha := \frac{q_{\alpha/2}}{\sqrt\theta}$. Then, under $H_a$, we have
\[
\beta = \mathbb{P}_{H_a}\bigl[\sqrt T|\rho(T)|\leqslant c_\alpha\bigr] = \mathbb{P}_{H_a}\bigl[|\sqrt T(\rho(T)-r)+r\sqrt T|\leqslant c_\alpha\bigr]
= \mathbb{P}_{H_a}\Bigl[\Bigl|Z_T+\frac{r\sqrt T}{\sigma_{r,\theta}}\Bigr|\leqslant\frac{c_\alpha}{\sigma_{r,\theta}}\Bigr]
= \mathbb{P}_{H_a}\Bigl[\frac{-c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\leqslant Z_T\leqslant\frac{c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\Bigr]
= F_{Z_T}\Bigl(\frac{c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\Bigr)-F_{Z_T}\Bigl(\frac{-c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\Bigr).
\]
Thus the following upper bound holds:
\[
P_{H_a}\!\left[\sqrt T\,|\rho(T)|\leqslant c_\alpha\right]\leqslant
\left|F_{Z_T}\!\left(\tfrac{c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)-\phi\!\left(\tfrac{c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)\right|
+\left|\phi\!\left(\tfrac{c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)-\phi\!\left(\tfrac{-c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)\right|
\]
\[
+\left|F_{Z_T}\!\left(\tfrac{-c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)-\phi\!\left(\tfrac{-c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)\right|. \tag{41}
\]
Moreover, we have the following estimates for $T$ large enough:
\[
\left|\phi\!\left(\tfrac{c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)-\phi\!\left(\tfrac{-c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)\right|\leqslant
\begin{cases}
\dfrac{2c_\alpha}{\sigma_{r,\theta}\sqrt{2\pi}}\,e^{-\left(\frac{c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)^2} & \text{if }r\in\,]0,1],\\[3mm]
\dfrac{2c_\alpha}{\sigma_{r,\theta}\sqrt{2\pi}}\,e^{-\left(\frac{-c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)^2} & \text{if }r\in[-1,0[.
\end{cases} \tag{42}
\]
Moreover, we have:
\[
\left|F_{Z_T}\!\left(\tfrac{c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)-\phi\!\left(\tfrac{c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)\right|
+\left|F_{Z_T}\!\left(\tfrac{-c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)-\phi\!\left(\tfrac{-c_\alpha-r\sqrt T}{\sigma_{r,\theta}}\right)\right|
\leqslant 2\,d_{\mathrm{Kol}}(Z_T,\mathcal N(0,1))\leqslant 4\sqrt{d_W(Z_T,\mathcal N(0,1))}\leqslant\frac{C(\theta,r)}{T^{1/4}}.
\]
That is, the Type II error of this test satisfies, for any $T$ large:
\[
\beta=P_{H_a}\!\left[\sqrt T\,|\rho(T)|\leqslant c_\alpha\right]\leqslant\frac{C(\theta,r,\alpha)}{T^{1/4}},
\qquad\text{where }C(\theta,r,\alpha)=C(\theta,r)+\frac{2c_\alpha}{\sigma_{r,\theta}\sqrt{2\pi}}.
\]

A direct consequence of the previous proposition is that the above hypothesis test is asymptotically powerful.

Corollary 11 For $\alpha\in(0,1)$ fixed, $T$ large and $r\in[-1,1]\setminus\{0\}$, we have under $H_a$:
\[
P_{H_a}\!\left[|\sqrt T\,\rho(T)|>\frac{q_{\alpha/2}}{\sqrt\theta}\right]\geqslant 1-\frac{C(\theta,r,\alpha)}{T^{1/4}}.
\]
In particular, this hypothesis test is asymptotically powerful:
\[
P_{H_a}\!\left[\sqrt\theta\sqrt T\,|\rho(T)|>q_{\alpha/2}\right]\underset{T\to+\infty}{\longrightarrow}1,
\]
where $q_{\alpha/2}$ is the upper $\alpha/2$-quantile of the standard normal distribution.

Remark 12 Another scenario one could consider in this framework: what do the rejection regions become if the drift parameter $\theta$ is unknown? In this case, we make use of the estimator (38) of the parameter $\theta$:
\[
\tilde\theta_T:=\frac12\left(\frac{Y_{ii}(T)}{T}\right)^{-1}
:=\frac12\left[\frac1T\int_0^T X_i(t)^2\,dt-\left(\frac1T\int_0^T X_i(t)\,dt\right)^2\right]^{-1},\qquad i=1,2.
\]
Using Slutsky's lemma, we infer that in this case, for a significance level $\alpha\in(0,1)$, one may consider the following rejection regions:
\[
\tilde R_\alpha:=\left\{\sqrt{T\tilde\theta_T}\,|\rho(T)|>q_{\alpha/2}\right\}.
\]
Proposition 10 and Corollary 11 extend easily to this alternative test, so it is also asymptotically powerful.

4.2 A test of independence using the numerator

We showed that, under $H_a$, there exists a constant $C(\theta,r)>0$ such that:
\[
d_W\!\left(\frac1{\sigma_{r,\theta}}\left(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\right),\mathcal N(0,1)\right)\leqslant\frac{C(\theta,r)}{\sqrt T},
\qquad\text{where }\sigma_{r,\theta}:=\left(\frac1{2\theta^3}\left(\frac12+\frac{r^2}2\right)\right)^{1/2}.
\]
In particular, under $H_0$, we have, as $T\to+\infty$,
\[
\frac{Y_{12}(T)}{\sqrt T}\ \xrightarrow{\mathcal L}\ \mathcal N\!\left(0,\frac1{4\theta^3}\right).
\]
We can therefore consider the numerator itself as a
statistic for our test, and reject independence if $\left\{\left|\frac{Y_{12}(T)}{\sqrt T}\right|>c_\alpha\right\}$, where
\[
\alpha\approx P_{H_0}\!\left(\left|\frac{Y_{12}(T)}{\sqrt T}\right|>c_\alpha\right)=P_{H_0}\!\left(\left|\frac{Y_{12}(T)}{\sqrt T}\right|>\frac{q_{\alpha/2}}{2\theta^{3/2}}\right).
\]
We infer that another natural family of rejection regions $R_\alpha$ is of the form:
\[
R_\alpha:=\left\{\left|\frac{Y_{12}(T)}{\sqrt T}\right|>\frac{q_{\alpha/2}}{2\theta^{3/2}}\right\}.
\]
We have the following estimate of the Type II error for this test.

Proposition 13 Fix $\alpha\in(0,1)$ and $r\in[-1,1]\setminus\{0\}$. Then there exists a constant $C(\theta,\alpha,r)$ depending on $\theta$, $r$ and $\alpha$ such that, for all $T$ large:
\[
\beta=P_{H_a}\!\left[\left|\frac{Y_{12}(T)}{\sqrt T}\right|\leqslant\frac{q_{\alpha/2}}{2\theta^{3/2}}\right]\leqslant C(\theta,\alpha,r)\times\frac{\ln(T)}{\sqrt T}. \tag{43}
\]

A direct consequence of the previous proposition is that the above hypothesis test is asymptotically powerful.

Corollary 14 For $\alpha\in(0,1)$ fixed, $T$ large and $r\in[-1,1]\setminus\{0\}$, we have under $H_a$:
\[
P_{H_a}\!\left[\left|\frac{Y_{12}(T)}{\sqrt T}\right|>\frac{q_{\alpha/2}}{2\theta^{3/2}}\right]\geqslant 1-C(\theta,\alpha,r)\times\frac{\ln(T)}{\sqrt T}.
\]
In particular, this hypothesis test is asymptotically powerful:
\[
P_{H_a}\!\left[2\theta^{3/2}\left|\frac{Y_{12}(T)}{\sqrt T}\right|>q_{\alpha/2}\right]\underset{T\to+\infty}{\longrightarrow}1,
\]
where $q_{\alpha/2}$ is the upper $\alpha/2$-quantile of the standard normal distribution.

Proof.
(of Proposition 13) Denote $N_T:=\frac1{\sigma_{r,\theta}}\left(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\right)$, let $F_{N_T}(\cdot)$ be its cumulative distribution function, and set $c_\alpha=\frac{q_{\alpha/2}}{2\theta^{3/2}}$. Then, under $H_a$, we have
\[
\beta=P_{H_a}\!\left[\left|\frac{Y_{12}(T)}{\sqrt T}\right|\leqslant c_\alpha\right]
=P_{H_a}\!\left[\frac1{\sigma_{r,\theta}}\left|\left(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\right)+\frac{r\sqrt T}{2\theta}\right|\leqslant\frac{c_\alpha}{\sigma_{r,\theta}}\right]
=P_{H_a}\!\left[\left|N_T+\frac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right|\leqslant\frac{c_\alpha}{\sigma_{r,\theta}}\right]
\]
\[
=P_{H_a}\!\left[-\frac{c_\alpha}{\sigma_{r,\theta}}-\frac{r\sqrt T}{2\theta\sigma_{r,\theta}}\leqslant N_T\leqslant\frac{c_\alpha}{\sigma_{r,\theta}}-\frac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right]
=F_{N_T}\!\left(\frac{c_\alpha}{\sigma_{r,\theta}}-\frac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)-F_{N_T}\!\left(-\frac{c_\alpha}{\sigma_{r,\theta}}-\frac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right).
\]
Thus, the following upper bound holds:
\[
P_{H_a}\!\left[\left|\frac{Y_{12}(T)}{\sqrt T}\right|\leqslant c_\alpha\right]\leqslant
\left|F_{N_T}\!\left(\tfrac{c_\alpha}{\sigma_{r,\theta}}-\tfrac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)-\phi\!\left(\tfrac{c_\alpha}{\sigma_{r,\theta}}-\tfrac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)\right|
+\left|\phi\!\left(\tfrac{c_\alpha}{\sigma_{r,\theta}}-\tfrac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)-\phi\!\left(-\tfrac{c_\alpha}{\sigma_{r,\theta}}-\tfrac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)\right|
\]
\[
+\left|F_{N_T}\!\left(-\tfrac{c_\alpha}{\sigma_{r,\theta}}-\tfrac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)-\phi\!\left(-\tfrac{c_\alpha}{\sigma_{r,\theta}}-\tfrac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)\right|. \tag{44}
\]
The following decomposition of the numerator follows from equality (14):
\[
\frac1{\sigma_{r,\theta}}\left(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\right)
=\frac{c_1(r)}{\sigma_{r,\theta}}I_2^{U_1}\!\left(\frac1{\sqrt T}\int_0^T f_u^{\otimes2}\,du\right)
+\frac{c_2(r)}{\sigma_{r,\theta}}I_2^{U_0}\!\left(\frac1{\sqrt T}\int_0^T f_u^{\otimes2}\,du\right)
-\frac{r}{4\theta^2}\frac{(1-e^{-2\theta T})}{\sigma_{r,\theta}\sqrt T}
-\frac{\sqrt T}{\sigma_{r,\theta}}\bar X_1(T)\bar X_2(T). \tag{45}
\]

Proposition 15 Consider a two-sided Brownian motion $\{W(t),t\in\mathbb R\}$, constructed from $U_1$ and $U_0$ as follows:
\[
W(t)=U_1(t)\mathbf 1_{\{t\geqslant0\}}+U_0(-t)\mathbf 1_{\{t<0\}}.
\]
Then the following equalities hold a.s.:
\[
\frac{c_1(r)}{\sqrt T}\int_0^T I_2^{U_1}(f_u^{\otimes2})\,du+\frac{c_2(r)}{\sqrt T}\int_0^T I_2^{U_0}(f_u^{\otimes2})\,du
\overset{\text{a.s.}}{=}I_2^{W}\!\left(\frac1{\sqrt T}\int_0^T\left(c_1(r)f_u^{\otimes2}+c_2(r)\bar{\bar f}_u^{\otimes2}\right)du\right)
\overset{\text{a.s.}}{=}I_2^W(h_T)+I_2^W(g_T),
\]
where
\[
\bar{\bar f}_u(x)=-f_u(-x)=-e^{-\theta(u+x)}\mathbf 1_{[-u,0]}(x),
\qquad
h_T(t,s)=\frac1{2\theta}\frac1{\sqrt T}\left[c_1(r)\mathbf 1_{[0,T]^2}(t,s)+c_2(r)\mathbf 1_{[-T,0]^2}(t,s)\right]e^{-\theta|t-s|}.
\]
\[
g_T(t,s)=\frac1{2\theta}\frac1{\sqrt T}\left[c_1(r)\mathbf 1_{[0,T]^2}(t,s)+c_2(r)\mathbf 1_{[-T,0]^2}(t,s)\right]e^{-2\theta T}e^{\theta|t-s|}, \tag{46}
\]
with:
\[
c_1(r)=\frac{r\sqrt2}{2}+\frac{\sqrt{1-r^2}}{2},\qquad c_2(r)=\frac{r\sqrt2}{2}-\frac{\sqrt{1-r^2}}{2}.
\]

Proof. We have, for $t,s\in[0,T]$,
\[
\frac1{\sqrt T}\int_0^T f_u^{\otimes2}(t,s)\,du
=\frac1{\sqrt T}\int_0^T e^{-\theta(u-t)}e^{-\theta(u-s)}\mathbf 1_{[0,u]}(t)\mathbf 1_{[0,u]}(s)\,du
=\frac1{\sqrt T}e^{\theta(t+s)}\int_{t\vee s}^T e^{-2\theta u}\,du\,\mathbf 1_{[0,T]}(t)\mathbf 1_{[0,T]}(s)
\]
\[
=\frac1{\sqrt T}e^{\theta(t+s)}\frac1{2\theta}\left[e^{-2\theta (t\vee s)}-e^{-2\theta T}\right]\mathbf 1_{[0,T]}(t)\mathbf 1_{[0,T]}(s).
\]
It follows, by linearity of multiple Wiener integrals, that:
\[
I_2^{U_1}\!\left(\frac1{\sqrt T}\int_0^T f_u^{\otimes2}\,du\right)
=\frac1{2\theta}\frac1{\sqrt T}\int_0^T\!\!\int_0^T e^{-2\theta(t\vee s)}e^{\theta(t+s)}\,dU_1(t)\,dU_1(s)
-\frac1{2\theta}\frac1{\sqrt T}\int_0^T\!\!\int_0^T e^{-2\theta T}e^{\theta(t+s)}\,dU_1(t)\,dU_1(s)
\]
\[
=\frac1{2\theta}\frac1{\sqrt T}\int_0^T\!\!\int_0^T e^{-2\theta(t\vee s)}e^{\theta(t+s)}\,dW(t)\,dW(s)
-\frac1{2\theta}\frac1{\sqrt T}\int_0^T\!\!\int_0^T e^{-2\theta T}e^{\theta(t+s)}\,dW(t)\,dW(s).
\]
On the other hand, using the change of variables $s'=-s$, $t'=-t$, we get:
\[
I_2^{U_0}\!\left(\frac1{\sqrt T}\int_0^T f_u^{\otimes2}\,du\right)
=\frac1{2\theta}\frac1{\sqrt T}\int_0^T\!\!\int_0^T e^{-2\theta(t\vee s)}e^{\theta(t+s)}\,dU_0(t)\,dU_0(s)
-\frac1{2\theta}\frac1{\sqrt T}\int_0^T\!\!\int_0^T e^{-2\theta T}e^{\theta(t+s)}\,dU_0(t)\,dU_0(s)
\]
\[
=\frac1{2\theta}\frac1{\sqrt T}\int_{-T}^0\!\!\int_{-T}^0 e^{-\theta t'}e^{-\theta s'}e^{2\theta(t'\wedge s')}\,dU_0(-t')\,dU_0(-s')
-\frac1{2\theta}\frac1{\sqrt T}\int_{-T}^0\!\!\int_{-T}^0 e^{-2\theta T}e^{-\theta(t'+s')}\,dU_0(-t')\,dU_0(-s')
\]
\[
=\frac1{2\theta}\frac1{\sqrt T}\int_{-T}^0\!\!\int_{-T}^0 e^{-\theta|t-s|}\,dW(t)\,dW(s)
-\frac1{2\theta}\frac1{\sqrt T}\int_{-T}^0\!\!\int_{-T}^0 e^{-2\theta T}e^{-\theta(t+s)}\,dW(t)\,dW(s).
\]
The desired result follows.

It follows from the decomposition (45), along with Proposition 15, that we can write
\[
N_T=\frac1{\sigma_{r,\theta}}\left(\frac{Y_{12}(T)}{\sqrt T}-\frac{r\sqrt T}{2\theta}\right)
=\frac1{\sigma_{r,\theta}}I_2^W(h_T)+\frac1{\sigma_{r,\theta}}I_2^W(g_T)
-\frac{r}{4\theta^2}\frac{(1-e^{-2\theta T})}{\sigma_{r,\theta}\sqrt T}
-\frac{\sqrt T}{\sigma_{r,\theta}}\bar X_1(T)\bar X_2(T).
\]
We have, for any $x\in\mathbb R$ fixed and all $\varepsilon>0$, from Michel and Pfanzagl (1971) [1]:
\[
|F_{N_T}(x)-\phi(x)|\leqslant d_{\mathrm{Kol}}(N_T,\mathcal N(0,1))\leqslant d_{\mathrm{Kol}}\!\left(\frac1{\sigma_{r,\theta}}I_2^W(h_T),\mathcal N(0,1)\right)+P(|Y(T)|>\varepsilon)+\varepsilon, \tag{47}
\]
where
\[
Y(T):=\frac1{\sigma_{r,\theta}}I_2^W(g_T)-\frac{r}{4\theta^2}\frac{(1-e^{-2\theta T})}{\sigma_{r,\theta}\sqrt T}-\frac{\sqrt T}{\sigma_{r,\theta}}\bar X_1(T)\bar X_2(T). \tag{48}
\]
To control the first term $\frac1{\sigma_{r,\theta}}I_2^W(h_T)$, we will use the following proposition.

Proposition 16 There exists $T_0\geqslant0$ such that, for all $T\geqslant T_0$ and all $x\in\mathbb R$,
\[
\left|P\!\left(\frac1{\sigma_{r,\theta}}I_2^W(h_T)<x\right)-\phi(x)\right|\leqslant\eta(\theta,r)\frac{(1+x^2)e^{-\frac{x^2}2}}{\sqrt T}. \tag{49}
\]
In particular, for $T\geqslant T_0$,
\[
d_{\mathrm{Kol}}\!\left(\frac1{\sigma_{r,\theta}}I_2^W(h_T),\mathcal N(0,1)\right)\leqslant\frac{2\eta(\theta,r)}{\sqrt e}\frac1{\sqrt T}, \tag{50}
\]
where the constant $\eta(\theta,r)$ is defined in (62) of Proposition 22 of the Appendix.

Proof. The bound (49) is a direct consequence of Proposition 22 of the Appendix; the upper bound (50) on the Kolmogorov distance follows from the fact that
\[
\sup_{x\in\mathbb R}(1+x^2)e^{-\frac{x^2}2}=\frac2{\sqrt e}.
\]
For the tail of the second-chaos random variable $I_2^W(g_T)$, we recall the following deviation inequality for multiple Wiener integrals (Theorem 2 of [20]).

Theorem 17 For any symmetric function $f\in L^2([0,T]^n)$ and $x>0$, we have
\[
P(|I_n(f)|>x)\leqslant C\exp\left\{-\frac12\left(\frac{x}{\sqrt{n!}\,\|f\|_{L^2([0,T]^n)}}\right)^{2/n}\right\},
\]
where $I_n(f)$ is the $n$-th Wiener–Itô integral of $f$ with respect to the Wiener process and the constant $C>0$ depends only on the multiplicity of the integral.

A straightforward calculation shows the following.

Lemma 18 Consider the kernel $g_T$ defined by:
\[
g_T(t,s)=\frac1{2\theta}\frac1{\sqrt T}\left(c_1(r)\mathbf 1_{[0,T]^2}(t,s)+c_2(r)\mathbf 1_{[-T,0]^2}(t,s)\right)e^{-2\theta T}e^{\theta|t-s|}.
\]
Then we have
\[
\|g_T\|_{L^2([-T,T]^2)}=\frac{\sqrt{r^2+1}}{4\sqrt2\,\theta^2\sqrt T}\,(1-e^{-2\theta T}).
\]
Proof. In fact, we have:
\[
\|g_T\|_{L^2([-T,T]^2)}^2=\int_{-T}^T\!\!\int_{-T}^T g_T^2(t,s)\,dt\,ds
=\frac{c_1^2(r)+c_2^2(r)}{4\theta^2}\frac1T\int_0^T\!\!\int_0^T e^{-4\theta T}e^{2\theta t}e^{2\theta s}\,dt\,ds
=\frac{c_1^2(r)+c_2^2(r)}{16\theta^4}\frac{(1-e^{-2\theta T})^2}{T}
=\frac{r^2+1}{32\theta^4}\frac{(1-e^{-2\theta T})^2}{T}.
\]
A direct consequence of Theorem 17 and Lemma 18 is the following bound.

Lemma 19 For all $\varepsilon>0$ and $T$ large, we have:
\[
P\!\left(\frac1{\sigma_{r,\theta}}|I_2^W(g_T)|>\varepsilon\right)\leqslant C\exp\left\{-\frac{2\theta^2\sigma_{r,\theta}\,\varepsilon}{\sqrt{r^2+1}}\sqrt T\right\},
\]
where $C>0$ is a constant depending only on the multiplicity of the multiple integral.

The remaining term to control is the following:
\[
P\!\left(\frac{\sqrt T}{\sigma_{r,\theta}}\left|\bar X_1(T)\bar X_2(T)\right|+\frac{|r|}{4\theta^2}\times\frac1{\sigma_{r,\theta}\sqrt T}>\frac\varepsilon2\right)
=P\!\left(\frac{\sqrt T}{\sigma_{r,\theta}}\left|\bar X_1(T)\bar X_2(T)\right|>\frac\varepsilon2-\frac{|r|}{4\theta^2}\times\frac1{\sigma_{r,\theta}\sqrt T}\right).
\]
In the following, we will denote $\varepsilon(\theta,r):=\frac\varepsilon2-\frac{|r|}{4\theta^2}\frac1{\sigma_{r,\theta}\sqrt T}$. Assume in the sequel that $\varepsilon>\frac{|r|}{4\theta^2}\frac1{\sigma_{r,\theta}\sqrt T}$. On the other hand, recall that we have:
\[
X_2(u)=r\int_0^u e^{-\theta(u-t)}\,dW_1(t)+\sqrt{1-r^2}\int_0^u e^{-\theta(u-t)}\,dW_0(t).
\]
Therefore,
\[
\bar X_2(T)=\frac1T\int_0^T X_2(u)\,du=\frac rT\int_0^T\!\!\int_0^u e^{-\theta(u-v)}\,dW_1(v)\,du+\frac{\sqrt{1-r^2}}T\int_0^T\!\!\int_0^u e^{-\theta(u-v)}\,dW_0(v)\,du
=:r\bar X_1(T)+\sqrt{1-r^2}\,\bar X_0(T).
\]
It follows that:
\[
P\!\left(\sqrt T\,|\bar X_1(T)\bar X_2(T)|>\sigma_{r,\theta}\,\varepsilon(\theta,r)\right)
\leqslant P\!\left(\sqrt T\,\bar X_1^2(T)>\frac{\sigma_{r,\theta}\,\varepsilon(\theta,r)}{2|r|}\right)
+P\!\left(\sqrt T\,|\bar X_1(T)\bar X_0(T)|>\frac{\sigma_{r,\theta}\,\varepsilon(\theta,r)}{2\sqrt{1-r^2}}\right).
\]
For the term $\sqrt T\,\bar X_1(T)\bar X_0(T)$, we have, for $i=0,1$,
\[
E[\bar X_i^2(T)]=\frac1{T^2}\int_0^T e^{2\theta u}\left(\int_u^T e^{-\theta t}\,dt\right)^2du
=\frac1{T^2}\frac1{\theta^2}\int_0^T\left(1-e^{-\theta(T-u)}\right)^2du
=\frac1T\frac1{\theta^2}+O\!\left(\frac1{T^2}\right). \tag{51}
\]
Applying Proposition 3.5 of [6] to the random variable $\sqrt T\,\bar X_1(T)\bar X_0(T)$, for $T$ large enough there exists $\beta(T):=\frac{c(\theta)}{\sqrt T}>0$, where $c(\theta)$ is a constant depending only on $\theta$, such that $E\big[e^{\sqrt T\,\bar X_1(T)\bar X_0(T)/\beta(T)}\big]<2$. It follows that:
\[
P\!\left(\sqrt T\,|\bar X_1(T)\bar X_0(T)|>\frac{\sigma_{r,\theta}\,\varepsilon(\theta,r)}{2\sqrt{1-r^2}}\right)
\leqslant2\exp\left\{-\frac{\sigma_{r,\theta}\,\varepsilon(\theta,r)}{2\beta(T)\sqrt{1-r^2}}\right\}
=2\exp\left\{-\frac{\sigma_{r,\theta}\,\varepsilon(\theta,r)\sqrt T}{2c(\theta)\sqrt{1-r^2}}\right\}.
\]
For the term $P\!\left(\sqrt T\,\bar X_1^2(T)>\frac{\sigma_{r,\theta}\varepsilon(\theta,r)}{2|r|}\right)$, we have, for $T$ large:
\[
P\!\left(\sqrt T\,\bar X_1^2(T)>\frac{\sigma_{r,\theta}\varepsilon(\theta,r)}{2|r|}\right)
=P\!\left(\chi^2(1)>\frac{\sigma_{r,\theta}\varepsilon(\theta,r)}{2|r|}\frac{\sqrt T}{E[(\sqrt T\bar X_1(T))^2]}\right)
\approx P\!\left(\chi^2(1)>\frac{\theta^2\sqrt T\,\sigma_{r,\theta}\varepsilon(\theta,r)}{2|r|}\right)
\]
\[
=\frac1{\sqrt2\,\Gamma(1/2)}\int_{\frac{\theta^2\sqrt T\sigma_{r,\theta}\varepsilon(\theta,r)}{2|r|}}^{+\infty}y^{-1/2}e^{-y/2}\,dy
\leqslant\frac2{\sqrt2}\exp\left\{-\frac{\sigma_{r,\theta}\varepsilon(\theta,r)\,\theta^2\sqrt T}{8|r|}\right\}.
\]
It follows that:
\[
P\!\left(\sqrt T\,|\bar X_1(T)\bar X_2(T)|>\sigma_{r,\theta}\varepsilon(\theta,r)\right)
\leqslant2\exp\left\{-\left(\frac{\theta^2}{8|r|}\wedge\frac1{2c(\theta)\sqrt{1-r^2}}\right)\sigma_{r,\theta}\varepsilon(\theta,r)\sqrt T\right\}. \tag{52}
\]
Using the variable $Y(T)$ defined in (48), it follows from Lemma 19 and (52) that, for all $\varepsilon>\frac{|r|}{4\theta^2}\frac1{\sigma_{r,\theta}\sqrt T}$, we have:
\[
P(|Y(T)|>\varepsilon)+\varepsilon
\leqslant P\!\left(\frac1{\sigma_{r,\theta}}|I_2^W(g_T)|>\frac\varepsilon2\right)
+P\!\left(\frac{\sqrt T}{\sigma_{r,\theta}}|\bar X_1(T)\bar X_2(T)|>\varepsilon(\theta,r)\right)+\varepsilon
\]
\[
\leqslant(C\vee2)\times\exp\left\{-\left(\frac{\theta^2}{8|r|}\wedge\frac1{2c(\theta)\sqrt{1-r^2}}\wedge\frac{2\theta^2}{\sqrt{r^2+1}}\right)\sigma_{r,\theta}\varepsilon(\theta,r)\sqrt T\right\}+\varepsilon.
\]
We consider the
following constants:
\[
c_1(T,r,\theta)=K(\theta,r)\sqrt T,\qquad C'=C\vee2,\qquad
K(\theta,r)=\left(\frac{\theta^2}{8|r|}\wedge\frac1{2c(\theta)\sqrt{1-r^2}}\wedge\frac{2\theta^2}{\sqrt{r^2+1}}\right)\sigma_{r,\theta},
\qquad
c_2(T,r,\theta)=\frac{|r|}{4\theta^2}\frac1{\sigma_{r,\theta}\sqrt T}.
\]
Then it is easy to check that the function $\varepsilon\mapsto g(\varepsilon):=C'e^{-c_1(T,r,\theta)\left(\frac\varepsilon2-c_2(T,r,\theta)\right)}+\varepsilon$ is convex on $(0,+\infty)$ and that
\[
\arg\inf_{\varepsilon>0}g(\varepsilon)=\varepsilon^*(T)=\left(\frac{|r|}{2\theta^2\sigma_{r,\theta}}+\frac2{K(\theta,r)}\ln\left(\frac{C'K(\theta,r)}2\right)\right)\frac1{\sqrt T}+\frac1{K(\theta,r)}\frac{\ln(T)}{\sqrt T}.
\]
Therefore, for $T$ large, we get:
\[
\inf_{\varepsilon>0}g(\varepsilon)=g(\varepsilon^*)=\left[\frac{|r|}{2\theta^2\sigma_{r,\theta}}+\frac2{K(\theta,r)}\ln\left(\frac{C'K(\theta,r)}2\right)+\frac2{K(\theta,r)}\right]\times\frac1{\sqrt T}+\frac1{K(\theta,r)}\frac{\ln(T)}{\sqrt T}
\sim\frac1{K(\theta,r)}\frac{\ln(T)}{\sqrt T}.
\]
It follows from the decomposition (47) that there exists a constant $C(\theta,r)>0$ such that, for all $T\geqslant T_0$ and all $x\in\mathbb R$ fixed, we have
\[
|F_{N_T}(x)-\phi(x)|\leqslant C(\theta,r)\frac{\ln(T)}{\sqrt T}.
\]
On the other hand, for the normal tails, the following estimate holds for any $T>\frac{4\theta^2c_\alpha^2}{r^2}$:
\[
\left|\phi\!\left(\tfrac{c_\alpha}{\sigma_{r,\theta}}-\tfrac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)-\phi\!\left(-\tfrac{c_\alpha}{\sigma_{r,\theta}}-\tfrac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)\right|
\leqslant\sqrt{\frac2\pi}\frac{c_\alpha}{\sigma_{r,\theta}}\times
\begin{cases}
e^{-\frac12\left(-\frac{c_\alpha}{\sigma_{r,\theta}}-\frac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)^2} & \text{if }r\in[-1,0[,\\[3mm]
e^{-\frac12\left(\frac{c_\alpha}{\sigma_{r,\theta}}-\frac{r\sqrt T}{2\theta\sigma_{r,\theta}}\right)^2} & \text{if }r\in\,]0,1].
\end{cases} \tag{53}
\]
It follows that, for all $T\geqslant T_0\vee\frac{4\theta^2c_\alpha^2}{r^2}$, the Type II error of this test satisfies:
\[
\beta=P_{H_a}\!\left[\left|\frac{Y_{12}(T)}{\sqrt T}\right|\leqslant c_\alpha\right]\leqslant C(\theta,\alpha,r)\times\frac{\ln(T)}{\sqrt T},
\qquad\text{where }C(\theta,\alpha,r):=C(\theta,r)+\sqrt{\frac2\pi}\frac{c_\alpha}{\sigma_{r,\theta}},
\]
which finishes the proof.

5 Future directions and an application

We believe that the strategy behind our testing methodology should be broadly applicable to many pairs of stationary stochastic processes, and in particular to a broad class of stationary Gaussian stochastic processes.
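To illustrate the methodology before discussing extensions, here is a minimal numerical sketch of the two tests developed in Sections 4.1 and 4.2 for a single pair of OU processes. The helper names are ours, the paths are generated by an Euler–Maruyama scheme, and $Y_{12}(T)$ is taken to be the empirical covariance numerator $\int_0^T X_1X_2\,dt-T\bar X_1(T)\bar X_2(T)$, consistent with the decomposition used in the proof of Proposition 13; this is a sketch under those assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def simulate_ou_pair(theta, r, T, dt, rng):
    """Euler-Maruyama discretisation of dX_i = -theta X_i dt + dW_i,
    X_i(0) = 0, with corr(W_1, W_2) = r."""
    n = int(T / dt)
    sq = np.sqrt(dt)
    x1 = np.zeros(n + 1)
    x2 = np.zeros(n + 1)
    for i in range(n):
        z1, z2 = rng.standard_normal(2)
        dw1 = sq * z1
        dw2 = sq * (r * z1 + np.sqrt(1.0 - r * r) * z2)
        x1[i + 1] = x1[i] * (1.0 - theta * dt) + dw1
        x2[i + 1] = x2[i] * (1.0 - theta * dt) + dw2
    return x1, x2

def rho_test(x1, x2, theta, T, alpha=0.05):
    """Section 4.1 test: reject H0 when sqrt(theta*T)*|rho(T)| > q_{alpha/2}."""
    rho = np.corrcoef(x1, x2)[0, 1]
    return np.sqrt(theta * T) * abs(rho) > norm.ppf(1 - alpha / 2)

def numerator_test(x1, x2, theta, T, dt, alpha=0.05):
    """Section 4.2 test: Y12(T)/sqrt(T) is N(0, 1/(4 theta^3)) under H0,
    so reject when 2 theta^{3/2} |Y12(T)|/sqrt(T) > q_{alpha/2}.
    Y12(T) is taken here as the empirical covariance numerator (assumption)."""
    y12 = np.sum(x1 * x2) * dt - T * x1.mean() * x2.mean()
    return 2.0 * theta**1.5 * abs(y12) / np.sqrt(T) > norm.ppf(1 - alpha / 2)
```

Under $H_a$ with, e.g., $\theta=1$, $r=0.8$ and $T=100$, both tests reject at the 5% level, in line with Corollaries 11 and 14.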
The OU process is the simplest such process in continuous-time modeling. Extensions to other processes can go in several directions. We present some ideas about these extensions in this section's first subsection. Its second subsection covers one particular example of an extension to infinite-dimensional objects.

5.1 Future directions

One may ask whether stationary mean-reverting processes solving non-linear SDEs, like the Cox–Ingersoll–Ross (CIR) model, will respond to similar testing with computable rejection regions and provable asymptotic power. This seems likely in some cases. For instance, the stationary solution to the CIR SDE is Gamma distributed, which can be construed as a second-chaos distribution, or can interpolate between such second-chaos distributions, depending on the shape parameter. The methodology we have developed here could therefore apply, at the cost of slightly more involved Wiener chaos computations.

One can ask about discrete-time processes which are also mean-reverting. In the case of the AR(1) time series with Gaussian innovations, this is known to be equivalent to an OU process observed at evenly spaced time intervals. Therefore a discretisation of this paper's methodology should apply directly in this case, with increasing horizon, either using methods as in [6] or as in [8]. We believe that the same should hold for other time series models, including any AR($p$) model. However, when $p>1$, AR($p$) is not a Markov process, and therefore its interpretation as the solution of an SDE is much less straightforward. The case of AR($p$) with Gaussian innovations still gives rise to a Gaussian process, and therefore the same methodology as in the current paper could apply directly. Unlike in the case of the CIR model, the price to pay for handling a Gaussian AR($p$) process with $p>1$ lies
in the non-explicit nature of the Wiener chaos kernels needed to represent the solution of AR($p$) as a Gaussian time series, and of the functionals that go into computing the Pearson correlation of a pair of AR($p$) processes. This could be technically challenging, though not conceptually so. The case of time series with non-Gaussian innovations, particularly heavy-tailed ones, would require a different set of technical tools, beyond classical Wiener chaos analysis.

This begs the question of whether a more general framework, still based on Wiener chaos analysis, can be put in place for testing independence of stationary Gaussian processes in discrete or continuous time. We believe there is a limit to how long the Gaussian processes' memory can be while still allowing Gaussian fluctuations for their empirical Pearson correlation, in the same way that the central limit theorem holds for power and Hermite variations of fractional Gaussian noise (fGn) when the Hurst parameter $H$ is less than some threshold, e.g. $H<3/4$ for quadratic variations of fGn, but not beyond this point. For quadratic variations of fGn, for instance, the fluctuations beyond this threshold are distributed according to the Rosenblatt law, a second-chaos distribution, which would create practical statistical challenges in testing. See the Breuer–Major theorem, described for instance in [26], Chapter 7.

While we do not investigate any of these future directions herein, there is another significant extension which lends itself readily to a straightforward use of the tools we developed in the previous section, as an application to infinite-dimensional stochastic processes. We take this up in the next and final subsection of this paper.

5.2 An application: testing independence with stochastic PDEs

We close this paper with a method for testing independence of pairs of solutions to a basic instance of the stochastic heat equation with additive noise.
As we are about to see, the infinite-dimensional setting is actually an asset, which allows us to increase the power of independence tests significantly. Another peculiarity of our test is that the spatial structure of the underlying noise, or of the SHE's solution, is largely immaterial in our basic expository framework.

Thus, to place ourselves in the least technical context, consider the stochastic heat equation on the unit circle (i.e. with periodic boundary condition on $[-\pi,\pi]$) given by:
\[
dU(t,x)=\partial^2_{x,x}U(t,x)\,dt+dW(t,x),\qquad 0\leqslant t\leqslant T,\ x\in[-\pi,\pi];\qquad U(0,x)=0, \tag{54}
\]
where $W$ is a cylindrical Brownian motion defined on a probability space $\big(\Omega,\mathcal F,\{\mathcal F_t\}_{t\geqslant0},P\big)$. The term cylindrical is interpreted here as meaning white in space. As is clearly seen from the explicit Fourier expansion of the unique solution to (54), given below, this solution is an odd function which is zero at the boundaries of $[-\pi,\pi]$, and thus it is sufficient to restrict the space variable to $[0,\pi]$. The following facts are well known and easy to check directly; we omit references.

• The Laplacian $\partial^2_{x,x}$ has a discrete spectrum $v_k=k^2$, $k\in\mathbb N$.

• The space-time (cylindrical) noise $W$ can be written symbolically as
\[
dW(t,x)=\sum_{k=1}^{+\infty}h_k(x)\,dw_k(t), \tag{55}
\]
where $\{w_k,k\geqslant1\}$ is
a family of independent standard Brownian motions and $\{h_k,k\geqslant1\}$ are the eigenfunctions of $\Delta$, given by:
\[
h_k(x)=\sqrt{\frac2\pi}\sin(kx),\qquad k\geqslant1. \tag{56}
\]
• $\{h_k,k\geqslant1\}$ forms a complete orthonormal system in $L^2([0,\pi])$.

In this case, using the diagonalization afforded by the eigen-elements of the Laplacian $\partial^2_{x,x}$, the solution $U$ of equation (54) can be written as:
\[
U(t,x)=\sum_{k=1}^{+\infty}h_k(x)u_k(t), \tag{57}
\]
where the Fourier modes (or coefficients) are given by the solutions of the SDEs:
\[
du_k(t)=-k^2u_k(t)\,dt+dw_k(t). \tag{58}
\]
In other words, each Fourier mode $u_k$ is an Ornstein–Uhlenbeck process, as in (3), with rate of mean reversion $\theta$ equal to the respective eigenvalue $k^2$, $k\in\mathbb N\setminus\{0\}$, and $u_k(0)=0$. These are the same processes we have studied in the remainder of this paper.

We now consider the projection $U^N$ of the solution onto $H_N=\operatorname{Span}\{h_1,\dots,h_N\}$, that is:
\[
U^N(t,x)=\sum_{k=1}^N h_k(x)u_k(t).
\]
Since the eigenfunctions $h_k$ in (56) are explicit, we consider that we have direct access (e.g. via integration against each $h_k$) to each OU process $u_k$, and thus observing $U^N(t,x)$ over all space and time is equivalent to observing the set of $N$ independent OU processes $(u_1,\dots,u_N)$. Henceforth, we will slightly abuse notation by using $U^N$ for the set of these $N$ independent OU processes.

Let us now assume that we observe two instances (copies) of the random field $U$, called $U_1$ and $U_2$. As mentioned, we thus have access to the corresponding two copies of the OU processes, $u_{k,1}$ and $u_{k,2}$, for any $k$. In practice, we will restrict how we keep track of this information by limiting $k$ to be no greater than $N$. Thus, using the aforementioned notation, we assume we observe two copies $U_1^N$ and $U_2^N$ of the $N$ independent OU processes. For each $k$, the processes $u_{k,1}$ and $u_{k,2}$ correspond to solutions of (3) with $\theta=k^2$ and two standard Wiener processes $w_{k,1}$ and $w_{k,2}$.

• Our Question (Q$_N$): How can we measure (or test) the dependence or independence between $U_1^N$ and $U_2^N$?
More specifically, can we build a statistical test of independence (or dependence) of the pair $(U_1^N,U_2^N)$? We consider the following hypotheses:
\[
H_0:\ (U_1^N)\text{ and }(U_2^N)\text{ are independent}
\]
versus
\[
H_a:\ (U_1^N)\text{ and }(U_2^N)\text{ are correlated with correlation }r\neq0.
\]

• In order to make this question precise from a modeling perspective, one must choose how to represent $r=0$ and $r\neq0$ in these two hypotheses.

– We represent the first by assuming that $U_1$ and $U_2$ are solutions to (54) driven respectively by white noises $W_1$ and $W_2$ as in (55), and we require that, for every $k$, the OU processes $u_{k,1}$ and $u_{k,2}$ in (58), from the representations (57) of $U_1$ and $U_2$ respectively, are independent. This is equivalent to requiring that the respective Brownian motions $w_{k,1}$ and $w_{k,2}$ in (58) are independent. We represent the second by requiring that there is a fixed $r\neq0$ which equals the correlation of $w_{k,1}$ and $w_{k,2}$, i.e. the same $r\neq0$ for every $k$ simultaneously, in the respective representations (57).

– This question is slightly more involved than the one we treated in the remainder of this paper, where $N=1$, since now we must ask ourselves whether this condition relates to asymptotics as the time horizon $T$ tends to infinity, or as the number of modes $N$ tends to infinity, or both. There
are other possible options, such as using different numbers of Fourier modes for each copy, different time horizons (which could also occur if $N=1$), and different correlations $r_k$ for every $k$. We may also study other spatial structures for the white noise in (54). If a spatial covariance operator $Q$ for $W$ in (55) is co-diagonalizable with the Laplacian, this means that we may replace $h_k$ in (55) by $\sqrt{\lambda_k}\,h_k$ for some sequence of eigenvalues $\lambda_k$ of $Q$, and the solution to (54) is then the same as in (57) except for replacing $h_k$ by $\sqrt{\lambda_k}\,h_k$. We can also consider the case where the SHE (54) has an initial condition $U(0,x)=U_0(x)$ different from $0$; this is easily handled by starting each component $u_k$ at time $0$ at the corresponding value $u_{k,0}=\int h_k(x)U_0(x)\,dx$. We will not investigate any of these possibilities, for the sake of conciseness.

• One may consider the following scenarios:

1. The number of Fourier modes $N$ is fixed and $T\to+\infty$.
2. The time is fixed and $N\to+\infty$.
3. Both $T,N\to+\infty$.

For the sake of conciseness, we consider in detail only option 1 above, where we fix the number of Fourier modes $N$ and let the time horizon $T$ tend to infinity. See our comments below for the two other cases, where the number of Fourier modes $N$ tends to infinity.

Recall from Question (Q$_N$) that we are looking for a procedure to reject the null hypothesis $(H_0)$ that $U_1^N$ and $U_2^N$ are independent, versus the alternative that each of their $N$ components has a common correlation coefficient $r\neq0$. If $(H_0)$ holds, then by integrating $U_1^N$ and $U_2^N$ against $h_k$, we get that $u_{k,1}$ and $u_{k,2}$ are independent for every $k\leqslant N$. The converse holds true, of course.
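As an illustration of the preceding equivalence, the following sketch simulates the first $N$ Fourier modes of two correlated copies of (54) by an Euler scheme (the function names are ours, not from the paper), assembles the field $U^N(t,x)=\sum_k h_k(x)u_k(t)$ on a spatial grid, and recovers each mode by integrating the field against $h_k$, up to quadrature error.

```python
import numpy as np

def simulate_modes(N, r, T, dt, rng):
    """First N Fourier modes u_k of two copies of the SHE (54): each u_k is an
    OU process with mean-reversion rate theta_k = k^2, and the driving noises
    of the two copies have correlation r (Euler-Maruyama discretisation)."""
    n = int(T / dt)
    theta = np.arange(1, N + 1) ** 2
    u1 = np.zeros((N, n + 1))
    u2 = np.zeros((N, n + 1))
    for i in range(n):
        z1 = rng.standard_normal(N)
        z2 = r * z1 + np.sqrt(1.0 - r * r) * rng.standard_normal(N)
        u1[:, i + 1] = u1[:, i] * (1.0 - theta * dt) + np.sqrt(dt) * z1
        u2[:, i + 1] = u2[:, i] * (1.0 - theta * dt) + np.sqrt(dt) * z2
    return u1, u2

def field_and_recover(u, x_grid):
    """Assemble U^N(t,x) = sum_k h_k(x) u_k(t) with h_k = sqrt(2/pi) sin(kx),
    then recover each mode by integrating the field against h_k over [0, pi]."""
    N = u.shape[0]
    H = np.sqrt(2.0 / np.pi) * np.sin(np.outer(np.arange(1, N + 1), x_grid))
    UN = H.T @ u                       # field values, shape (len(x_grid), n_t)
    dx = x_grid[1] - x_grid[0]
    recovered = (H @ UN) * dx          # Riemann sum for int_0^pi U^N h_k dx
    return UN, recovered
```

On a uniform grid of $[0,\pi]$, the discrete orthogonality of the sines makes the recovery essentially exact, so observing the field is indeed equivalent to observing the $N$ OU modes.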
Therefore, to reject $(H_0)$ against the alternative $(H_a)$ that each of the $N$ components of $U_1^N$ and $U_2^N$ has correlation coefficient $r\neq0$, it is sufficient to reject the hypothesis $(H_{0,k})$ that $u_{k,1}$ and $u_{k,2}$ are independent, for some $k\leqslant N$, against the alternative $(H_{a,k})$ that $u_{k,1}$ and $u_{k,2}$ have correlation coefficient $r\neq0$ for that same value of $k$. Equivalently, the probability of a Type II error, of failing to reject $(H_0)$ against $(H_a)$, is the probability of the event that we fail to reject $(H_{0,k})$ against $(H_{a,k})$ for every $k$.

Working first with the test based on the empirical correlation $\rho_k$ of $u_{k,1}$ and $u_{k,2}$, we may simply use the test described in Section 4.1, based on $u_{k,1}$ and $u_{k,2}$, for every $k\leqslant N$. The Type II error for this test is computed under the alternative hypothesis. Under this hypothesis (and also under the null), we know that all the $u_k$'s are independent. Therefore, our Type II error using the test described in Section 4.1 for all $k\leqslant N$ is equal to
\[
\beta=\prod_{k=1}^N P_{H_a}\!\left[\sqrt T\,|\rho_k(T)|\leqslant c_\alpha\right].
\]
We may then simply use the upper bound provided by Proposition 10; noting that the mean-reversion rate $\theta_k$ for $u_k$ is simply $\theta_k=k^2$, we obtain
\[
\beta\leqslant\prod_{k=1}^N\frac{C(k^2,r,\alpha)}{T^{1/4}}=T^{-N/4}\prod_{k=1}^N C(k^2,r,\alpha). \tag{59}
\]
Since $N$ is fixed in our basic scenario, this leads to a marked improvement in the rate of convergence to $0$ of the Type II error as soon as $N\geqslant2$. Equivalently, using Corollary 11, the power of our test, using the test described in Section 4.1 for all $k\leqslant N$, converges to $1$ at the rate given in line (59) above. The exact
same arguments as above, combined with Proposition 13, show that if, instead, we define our test using the numerator $Y_{12,k}$ of the empirical correlation $\rho_k$ of $u_{k,1}$ and $u_{k,2}$, rather than the full $\rho_k$ itself, as defined in Section 4.2, then the Type II error $\beta$ is bounded above as
\[
\beta\leqslant\frac{(\ln T)^N}{T^{N/2}}\prod_{k=1}^N C(k^2,r,\alpha),
\]
and similarly for the rate of convergence to $1$ of the power of the test. As before, this improves the rate by squaring it, up to a logarithmic correction. However, with only a moderate number $N$ of components, even the test based on the $\rho_k$'s achieves a relatively fast, polynomial rate of convergence.

We close this section with some comments on what an appropriate value of $N$ might be, as in Scenario 3 defined above, with a view towards a practical implementation. In such a view, in practice, observations would be in discrete time, and the ability to compute an approximate value of $\rho_k$ based on discrete-time observations of the random field $U(t,x)$ hinges on being able to observe each Fourier component $u_k$ at a sufficiently high rate that the discrete version of $\rho_k$ is a good approximation. While this is generally an innocuous question, when considering values of $k$ which could be large, when $N$ is large, we need to keep in mind that the mean rate of reversion of the OU process $u_k$ is $\theta_k:=k^2$, which could then be a very large integer. In practice, this means that a faithful observation of the dynamics of this OU process has to entail a large number of observation points within each time period over which the process is likely to revert back and forth to its mean; such a length of time is of the order of $k^{-2}$. How many datapoints are needed to safely determine a Pearson correlation coefficient depends of course on how close the alternative $r$ is to $0$, but for values of $r$ which are not too small, a rule of thumb is $10^2$ as an order of magnitude.
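The combined procedure above — run the Section 4.1 test on each mode $k\leqslant N$ with $\theta_k=k^2$ and reject $(H_0)$ as soon as one mode rejects — can be sketched as follows; the simulator and function names are ours, and the per-mode statistic is the discretised $\sqrt{\theta_kT}\,|\rho_k(T)|$.

```python
import numpy as np
from scipy.stats import norm

def simulate_modes(N, r, T, dt, rng):
    """Euler scheme for the first N Fourier modes of two correlated copies of
    the SHE: u_k is an OU process with rate theta_k = k^2, driving noises of
    the two copies have correlation r."""
    n = int(T / dt)
    theta = np.arange(1, N + 1) ** 2
    u1 = np.zeros((N, n + 1))
    u2 = np.zeros((N, n + 1))
    for i in range(n):
        z1 = rng.standard_normal(N)
        z2 = r * z1 + np.sqrt(1.0 - r * r) * rng.standard_normal(N)
        u1[:, i + 1] = u1[:, i] * (1.0 - theta * dt) + np.sqrt(dt) * z1
        u2[:, i + 1] = u2[:, i] * (1.0 - theta * dt) + np.sqrt(dt) * z2
    return u1, u2

def she_independence_test(u1, u2, T, alpha=0.05):
    """Reject H0 (independence of U1^N and U2^N) as soon as the Section 4.1
    test rejects for some mode k <= N; mode k has mean-reversion rate k^2."""
    q = norm.ppf(1.0 - alpha / 2.0)
    for k in range(1, u1.shape[0] + 1):
        rho_k = np.corrcoef(u1[k - 1], u2[k - 1])[0, 1]
        if np.sqrt(k * k * T) * abs(rho_k) > q:   # sqrt(theta_k T)|rho_k| > q
            return True
    return False
```

Note that the Euler step must resolve the fastest mode, i.e. $dt\ll N^{-2}$, which is precisely the high-frequency constraint discussed above.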
With $N=10$, which might seem like a moderate value of $N$, this quickly entails at least $10^4$ observation points per unit of time, the mean-reversion period length being as small as $10^{-2}$ units of time for $k$ near $N=10$. This many datapoints per time unit places values $N\geqslant10$ out of the reach of many applications, as being a high-frequency regime, with significantly larger $N$ quickly entering the realm of ultra-high frequency. These comments clearly point us, as a practical matter, towards implementing our suggested Fourier-based independence test for solutions of high- or infinite-dimensional problems like the stochastic heat equation only with a small number $N$ of components, such as $N=2,3,4$. Since the Type II error converges to $0$ so quickly even for these moderate values of $N$, there seems to be little to gain from insisting on larger $N$.

A full quantitative analysis of Scenario 3, which depends on realistic practical parameter estimates, is beyond the scope of this paper, though it should be straightforward to carry out, since the constants in Propositions 10 and 13 are rather explicit functions of $\theta$. We pass on a quantitative analysis of Scenario
https://arxiv.org/abs/2504.17175v1
2, where $T$ is fixed and $N$ tends to infinity; this is a more complex endeavor, since the main propositions in this paper are not tailored to asymptotics for a fixed time horizon. However, the observation-frequency discussion above regarding Scenario 3 indicates that asymptotics for $N$ tending to infinity with $T$ fixed probably only lead to applicability in ultra-high-frequency studies, or to analog data with access to continuous-time observations, both of which are limiting factors.

6 Appendix

Lemma 20 Consider the kernel $h_T$ defined by:
$$h_T(t,s) = \frac{1}{2\theta}\frac{1}{\sqrt{T}}\Big(c_1(r)\,\mathbf{1}_{[0,T]^2}(t,s)+c_2(r)\,\mathbf{1}_{[-T,0]^2}(t,s)\Big)e^{-\theta|t-s|}.$$
Then, as $T\to+\infty$, we have:
$$\|h_T\|_{L^2([-T,T]^2)}\longrightarrow \frac{\sqrt{1+r^2}}{2\sqrt{2}\,\theta^{3/2}}=\frac{\sigma_{r,\theta}}{\sqrt{2}}.$$
Proof. We have:
$$\|h_T\|^2=\int_{-T}^{T}\!\int_{-T}^{T}h_T^2(t,s)\,dt\,ds
=\frac{c_1^2(r)}{4\theta^2}\frac{1}{T}\int_0^T\!\int_0^T e^{-\theta|t-s|}\,dt\,ds+\frac{c_2^2(r)}{4\theta^2}\frac{1}{T}\int_{-T}^0\!\int_{-T}^0 e^{-\theta|t-s|}\,dt\,ds$$
$$=\frac{c_1^2(r)+c_2^2(r)}{4\theta^2}\,\frac{1}{T}\int_0^T\!\int_0^T e^{-\theta|t-s|}\,dt\,ds
\longrightarrow \frac{c_1^2(r)+c_2^2(r)}{4\theta^3}=\frac{1}{4\theta^3}\left(\frac{r^2+1}{2}\right).$$
We now recall Proposition 9.4.1 of [26], which we will need in the sequel.

Proposition 21 Let $N\sim\mathcal N(0,1)$ and let $F_n=I_2(f_n)$, $n\geqslant 1$, be such that $f_n\in\mathfrak H^{\odot 2}$. Write $k_p(F_n)$, $p\geqslant 1$, for the sequence of cumulants of $F_n$. Assume that $k_2(F_n)=E[F_n^2]=1$ for all $n\geqslant 1$ and that $k_4(F_n)\to 0$ as $n\to+\infty$. If, in addition,
$$\frac{k_3(F_n)}{\sqrt{k_4(F_n)}}\longrightarrow\alpha,\qquad \frac{k_8(F_n)}{(k_4(F_n))^2}\longrightarrow 0$$
as $n\to+\infty$, then for all fixed $z\in\mathbb R$:
$$\frac{P(F_n\leqslant z)-P(N\leqslant z)}{\sqrt{k_4(F_n)}}\longrightarrow \frac{\alpha}{6\sqrt{2\pi}}(1-z^2)e^{-\frac{z^2}{2}},\quad\text{as }n\to+\infty.$$
In addition, if the constant $\alpha\neq 0$, then there exist a constant $c>0$ and $n_0\geqslant 1$ such that:
$$d_{\mathrm{Kol}}(F_n,N)\geqslant c\sqrt{k_4(F_n)}\qquad\forall n\geqslant n_0.\qquad(60)$$

Proposition 22 Consider $N\sim\mathcal N(0,1)$ and $\tilde F_T:=I_2^W(\tilde h_T)$, where $\tilde h_T:=\frac{h_T}{\sqrt{2}\,\|h_T\|}$, the kernel $h_T$ is defined in (46), and let
$$\delta(t-s):=\frac{1}{2\theta}e^{-\theta|t-s|},\qquad t,s\in[-T,T].$$
We have, for all fixed $z\in\mathbb R$:
$$P(\tilde F_T\leqslant z)-P(N\leqslant z)\underset{+\infty}{\sim}\eta(\theta,r)\times\frac{1-z^2}{\sqrt{T}}\,e^{-\frac{z^2}{2}},\qquad(61)$$
where
$$\eta(\theta,r):=\frac{\langle\delta^{*(2)},\delta\rangle_{L^2(\mathbb R)}}{\sqrt{\pi}}\,\frac{2^2\,\theta^{9/2}\,r(3-r^2)}{(1+r^2)^{3/2}}.\qquad(62)$$
Proof. Applying Proposition 23 to the random variable $\tilde F_T$, we get
$$k_3(\tilde F_T)\underset{+\infty}{\sim}\langle\delta^{*(2)},\delta\rangle_{L^2(\mathbb R)}\,(c_1^3(r)+c_2^3(r))\,\frac{2^6\,\theta^{9/2}}{T^{1/2}(1+r^2)^{3/2}}$$
and
$$k_4(\tilde F_T)\underset{+\infty}{\sim}\langle\delta^{*(3)},\delta\rangle_{L^2(\mathbb R)}\,(c_1^4(r)+c_2^4(r))\,\frac{2^7\,\theta^{6}\,3!}{T(1+r^2)^{2}}.\qquad(63)$$
Consequently, and since, by Remark 24, $c_1^4(r)+c_2^4(r)\neq 0$, the following convergence holds:
$$\frac{k_3(\tilde F_T)}{\sqrt{k_4(\tilde F_T)}}\underset{T\to+\infty}{\longrightarrow}\alpha(\theta,r):=\frac{\langle\delta^{*(2)},\delta\rangle_{L^2(\mathbb R)}\,\theta^{3/2}}{\sqrt{3}\,\sqrt{|\langle\delta^{*(3)},\delta\rangle_{L^2(\mathbb R)}|}}\,\frac{r\sqrt{2}\,(3-r^2)}{\sqrt{1+r^2}\,\sqrt{c_1^4(r)+c_2^4(r)}}\neq 0.$$
For the eighth cumulant of $\tilde F_T$, we have
$$k_8(\tilde F_T)\underset{+\infty}{\sim}\langle\delta^{*(7)},\delta\rangle_{L^2(\mathbb R)}\,(c_1^8(r)+c_2^8(r))\,\frac{2^{15}\,7!\,\theta^{12}}{T^3(1+r^2)^{4}}.$$
It follows that:
$$\frac{k_8(\tilde F_T)}{(k_4(\tilde F_T))^2}\underset{+\infty}{\sim}\frac{\langle\delta^{*(7)},\delta\rangle_{L^2(\mathbb R)}}{(\langle\delta^{*(3)},\delta\rangle_{L^2(\mathbb R)})^2}\times\frac{c_1^8(r)+c_2^8(r)}{(c_1^4(r)+c_2^4(r))^2}\,\theta^{6}\,\frac{2^{8}\times 7!}{3!}\,\frac{1}{T},$$
so that $\frac{k_8(\tilde F_T)}{(k_4(\tilde F_T))^2}\underset{T\to+\infty}{\longrightarrow}0$. Therefore, applying Proposition 21, we get, for all fixed $z\in\mathbb R$,
$$\frac{P(\tilde F_T\leqslant z)-P(N\leqslant z)}{\sqrt{k_4(\tilde F_T)}}\underset{T\to+\infty}{\longrightarrow}\frac{\alpha(\theta,r)}{6\sqrt{2\pi}}(1-z^2)e^{-\frac{z^2}{2}}.$$
Consequently, for all fixed $z\in\mathbb R$:
$$P(\tilde F_T\leqslant z)-P(N\leqslant z)\underset{+\infty}{\sim}\frac{\langle\delta^{*(2)},\delta\rangle_{L^2(\mathbb R)}}{\sqrt{\pi}\,\sqrt{T}}\,\frac{2^2\,\theta^{9/2}\,r(3-r^2)}{(1+r^2)^{3/2}}(1-z^2)e^{-\frac{z^2}{2}}\underset{+\infty}{\sim}\eta(\theta,r)\,\frac{1-z^2}{\sqrt{T}}\,e^{-\frac{z^2}{2}},\qquad(64)$$
which finishes the proof.

Proposition 23 Consider $\tilde F_T:=I_2^W(\tilde h_T)$, where $\tilde h_T:=\frac{h_T}{\sqrt{2}\,\|h_T\|}$, the kernel $h_T$ is defined in (46), and let $\delta(t-s):=\frac{1}{2\theta}e^{-\theta|t-s|}$, $t,s\in[-T,T]$. Then, for all $p\geqslant 3$,
$$k_p\big(\tilde F_T\big)\sim \langle\delta^{*(p-1)},\delta\rangle_{L^2(\mathbb R)}\,\frac{2^{2p-1}(p-1)!\,(c_1^p(r)+c_2^p(r))\,\theta^{3p/2}}{T^{\frac{p}{2}-1}(1+r^2)^{p/2}}.$$
Here $\delta^{*(p)}$ denotes the $p$-fold convolution of $\delta$, defined by $\delta^{*(p)}=\delta^{*(p-1)}*\delta$ for $p\geqslant 2$, with $\delta^{*(1)}=\delta$, where $*$ denotes the convolution of two functions, $(f*g)(x)=\int_{\mathbb R}f(x-y)g(y)\,dy$.
Proof. The proof of this proposition is an extension of the proof of Proposition 7.3.3 of [26] to the continuous-time setting. Recall that when $F=I_2(f)$, $f\in\mathfrak H^{\odot 2}$, the sequence of cumulants $k_p(F)$ can be computed as follows: for all $p\geqslant 2$,
$$k_p(F)=2^{p-1}\times(p-1)!\times\langle f\otimes_1^{(p-1)}f,\,f\rangle_{\mathfrak H^{\otimes 2}},$$
where the sequence of kernels $\{f\otimes_1^{(p)}f,\ p\geqslant 1\}$ is defined by $f\otimes_1^{(1)}f=f$ and, for $p\geqslant 2$, $f\otimes_1^{(p)}f=(f\otimes_1^{(p-1)}f)\otimes_1 f$. Let $p\geqslant 3$ and $\tilde F_T=I_2(\tilde h_T)$, where $\tilde h_T=\frac{h_T}{\sqrt 2\,\|h_T\|}$. Then
$$k_p(\tilde F_T)=2^{p-1}(p-1)!\,\langle \tilde h_T\otimes_1^{(p-1)}\tilde h_T,\,\tilde h_T\rangle_{L^2([-T,T]^2)}$$
$$=2^{p-1}(p-1)!\int_{[-T,T]^2}(\tilde h_T\otimes_1^{(p-1)}\tilde h_T)(u_1,u_2)\,\tilde h_T(u_1,u_2)\,du_1\,du_2$$
$$=2^{p-1}(p-1)!\int_{[-T,T]^3}(\tilde h_T\otimes_1^{(p-2)}\tilde h_T)(u_1,u_3)\,\tilde h_T(u_3,u_2)\,\tilde h_T(u_1,u_2)\,du_1\,du_2\,du_3$$
$$=\cdots=2^{p-1}(p-1)!\int_{[-T,T]^p}\tilde h_T(u_p,u_1)\,\tilde h_T(u_p,u_{p-1})\times\cdots\times\tilde h_T(u_3,u_2)\,\tilde h_T(u_1,u_2)\,du_1\,du_2\cdots du_p$$
$$=\frac{2^{p-1}c_1^p(r)\,(p-1)!}{(\sqrt T\sqrt 2\,\|h_T\|)^p}\int_{[0,T]^p}\delta(u_p-u_1)\,\delta(u_p-u_{p-1})\times\cdots\times\delta(u_3-u_2)\,\delta(u_2-u_1)\,du_1\cdots du_p$$
$$\quad+\frac{2^{p-1}c_2^p(r)\,(p-1)!}{(\sqrt T\sqrt 2\,\|h_T\|)^p}\int_{[-T,0]^p}\delta(u_p-u_1)\,\delta(u_p-u_{p-1})\times\cdots\times\delta(u_3-u_2)\,\delta(u_2-u_1)\,du_1\cdots du_p$$
$$=\frac{2^{p-1}(c_1^p(r)+c_2^p(r))\,(p-1)!}{(\sqrt T\sqrt 2\,\|h_T\|)^p}\int_{[0,T]^p}\delta(u_p-u_1)\,\delta(u_p-u_{p-1})\times\cdots\times\delta(u_3-u_2)\,\delta(u_2-u_1)\,du_1\cdots du_p.$$
Using the change of variables
$v_i=u_i-u_1$, $i\geqslant 2$, we then have:
$$k_p(\tilde F_T)=\frac{2^{p-1}(c_1^p(r)+c_2^p(r))\,(p-1)!}{(\sqrt T\sqrt 2\,\|h_T\|)^p}\int_0^T\!\int_{-u_1}^{T-u_1}\!\!\cdots\int_{-u_1}^{T-u_1}\delta(v_p)\,\delta(v_p-v_{p-1})\times\cdots\times\delta(v_3-v_2)\,\delta(v_2)\,dv_2\,dv_3\cdots dv_p\,du_1.$$
On the other hand, by the dominated convergence theorem, we have
$$\frac{1}{T}\int_0^T\!\int_{-u_1}^{T-u_1}\!\!\cdots\int_{-u_1}^{T-u_1}\delta(v_p)\,\delta(v_p-v_{p-1})\times\cdots\times\delta(v_3-v_2)\,\delta(v_2)\,dv_2\cdots dv_p\,du_1$$
$$=\frac{1}{T}\int_{\mathbb R^{p-1}}\left(\int_{0\vee(-v_2)\vee\cdots\vee(-v_p)}^{T\wedge(T-v_2)\wedge\cdots\wedge(T-v_p)}du_1\right)\delta(v_p)\,\delta(v_p-v_{p-1})\cdots\delta(v_3-v_2)\,\delta(v_2)\,\mathbf 1_{\{|v_2|<T,\dots,|v_p|<T\}}\,dv_2\cdots dv_p$$
$$=\int_{\mathbb R^{p-1}}\delta(v_p)\,\delta(v_p-v_{p-1})\cdots\delta(v_3-v_2)\,\delta(v_2)\left[1\wedge\Big(1-\frac{v_2\vee v_3\vee\cdots\vee v_p}{T}\Big)-0\vee\frac{v_2\wedge v_3\wedge\cdots\wedge v_p}{T}\right]\mathbf 1_{\{|v_2|<T,\dots,|v_p|<T\}}\,dv_2\cdots dv_p$$
$$\underset{T\to+\infty}{\longrightarrow}\int_{\mathbb R^{p-1}}\delta(v_p)\,\delta(v_p-v_{p-1})\cdots\delta(v_3-v_2)\,\delta(v_2)\,dv_2\cdots dv_p=\langle\delta^{*(p-1)},\delta\rangle_{L^2(\mathbb R)}<+\infty.$$
For the assertion $\langle\delta^{*(p-1)},\delta\rangle_{L^2(\mathbb R)}<+\infty$, it suffices to check that $\delta\in L^{\frac{p}{p-1}}(\mathbb R)$, because in this case the $L^p$ norm of the $(p-1)$-fold convolution is finite: $\|\delta^{*(p-1)}\|_{L^p(\mathbb R)}<+\infty$. Indeed, by Hölder's inequality and then Young's inequality, we get
$$|\langle\delta^{*(p-1)},\delta\rangle_{L^2(\mathbb R)}|\leqslant\|\delta\|_{L^{\frac{p}{p-1}}(\mathbb R)}\,\|\delta^{*(p-1)}\|_{L^p(\mathbb R)}=\|\delta\|_{L^{\frac{p}{p-1}}(\mathbb R)}\,\|\delta^{*(p-2)}*\delta\|_{L^p(\mathbb R)}$$
$$\leqslant\|\delta\|_{L^{\frac{p}{p-1}}(\mathbb R)}^2\,\|\delta^{*(p-2)}\|_{L^{p/2}(\mathbb R)}
\leqslant\|\delta\|_{L^{\frac{p}{p-1}}(\mathbb R)}^3\,\|\delta^{*(p-3)}\|_{L^{p/3}(\mathbb R)}\leqslant\cdots\leqslant\|\delta\|_{L^{\frac{p}{p-1}}(\mathbb R)}^p.$$
It remains to check that $\delta\in L^{\frac{p}{p-1}}(\mathbb R)$; we have
$$\|\delta\|_{L^{\frac{p}{p-1}}(\mathbb R)}^{\frac{p}{p-1}}=\frac{1}{2^{\frac{p}{p-1}}\theta^{\frac{p}{p-1}}}\int_{\mathbb R}e^{-\frac{p}{p-1}\theta|u|}\,du=\frac{p-1}{p}\times 2^{-\frac{1}{p-1}}\times\theta^{-\frac{2p-1}{p-1}}<+\infty.$$
Remark 24 Notice that $c_1^p(r)+c_2^p(r)\neq 0$ for all $p\geqslant 3$. In fact, one can easily check that:
$$c_1(r)=0\iff r=-\tfrac{1}{\sqrt 3},\qquad c_2(r)=0\iff r=\tfrac{1}{\sqrt 3}.\qquad(65)$$
It follows that:
• If $r=\tfrac{1}{\sqrt 3}$, then $c_1^p(\tfrac{1}{\sqrt 3})+c_2^p(\tfrac{1}{\sqrt 3})=\big(\tfrac{\sqrt 2}{\sqrt 3}\big)^p\neq 0$.
• If $r=-\tfrac{1}{\sqrt 3}$, then $c_1^p(-\tfrac{1}{\sqrt 3})+c_2^p(-\tfrac{1}{\sqrt 3})=(-1)^p\big(\tfrac{\sqrt 2}{\sqrt 3}\big)^p\neq 0$.
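For small $p$, the constants $\langle\delta^{*(p-1)},\delta\rangle_{L^2(\mathbb R)}$ admit closed forms: $\delta$ has Fourier transform $\hat\delta(\xi)=1/(\theta^2+\xi^2)$, so by Parseval $\langle\delta^{*(2)},\delta\rangle=\frac{1}{2\pi}\int_{\mathbb R}(\theta^2+\xi^2)^{-3}\,d\xi=\frac{3}{16\theta^5}$. The following sketch checks this numerically via a discretized convolution; the grid step, truncation length, and $\theta=1$ are arbitrary choices for illustration.

```python
import numpy as np

theta = 1.0
L, h = 20.0, 0.01
x = np.arange(-L, L + h / 2, h)                  # symmetric grid around 0
d = np.exp(-theta * np.abs(x)) / (2.0 * theta)   # the kernel delta
conv = np.convolve(d, d, mode="same") * h        # delta^{*(2)} on the grid
inner = np.sum(conv * d) * h                     # <delta^{*(2)}, delta>_{L^2}
print(inner)   # analytic value: 3/(16 theta^5) = 0.1875 for theta = 1
```

The exponential decay of $\delta$ makes the truncation error at $|x|=L$ negligible, which is the same integrability ($\delta\in L^{p/(p-1)}$) that the Hölder–Young argument above exploits.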
Consequently, the following convergence holds:
$$\frac{k_p(\tilde F_T)\times T^{\frac p2-1}\,2^{p/2}\,\|h_T\|^p}{2^{p-1}(c_1^p(r)+c_2^p(r))\,(p-1)!}\underset{T\to+\infty}{\longrightarrow}\langle\delta^{*(p-1)},\delta\rangle_{L^2(\mathbb R)}.$$
Finally, by Lemma 20, we have
$$2^{p/2}\,\|h_T\|^p\underset{\infty}{\sim}\frac{(1+r^2)^{p/2}}{2^p\,\theta^{3p/2}},$$
and the desired result follows.

References

[1] Michel, R., Pfanzagl, J. (1971). The accuracy of the normal approximation for minimum contrast estimates. Z. Wahrscheinlichkeitstheorie verw. Gebiete 18, 73–84.
[2] Douissi, S., Viens, F. and Es-Sebaiy, K. (2022). Asymptotics of Yule's nonsense correlation for Ornstein-Uhlenbeck paths: a Wiener chaos approach. Electron. J. Statist. 16(1): 3176–3211. DOI: 10.1214/22-EJS2021.
[3] Biermé, H., Bonami, A., Nourdin, I. and Peccati, G. (2012). Optimal Berry-Esséen rates on the Wiener space: the barrier of third and fourth cumulants. ALEA 9, no. 2, 473–500.
[4] Cheridito, P. (2004). Gaussian moving averages, semimartingales and option pricing. Stochastic Processes and their Applications, 109(1), 47–68.
[5] Cheridito, P., Kawaguchi, H. and Maejima, M. (2003). Fractional Ornstein-Uhlenbeck processes. Electr. J. Prob. 8, 1–14.
[6] Douissi, S., Es-Sebaiy, K., Viens, F. (2019). Berry-Esséen bounds for parameter estimation of general Gaussian processes. ALEA, Lat. Am. J. Probab. Math. Stat., 16, 633–664.
[7] Ernst, Ph. and Huang, D. (2023). Exact and asymptotic distribution theory for the empirical correlation of two AR(1) processes. Preprint, 103 pages, https://arxiv.org/pdf/2310.08575.
[8] Ernst, Ph., Huang, D., Viens, F.G. (2023). Yule's "nonsense correlation" for Gaussian random walks. Stochastic Processes and their Applications, 162, 423–455.
[9] Ernst, Ph. A., Rogers, L.C.G., Zhou, Q. (2025). Yule's "nonsense correlation": Moments and density. Bernoulli 31(1): 412–431. DOI: 10.3150/24-BEJ1733.
[10] Ernst, Ph. A., Rogers, L.C.G., Zhou, Q. (2022). The distribution of Yule's "nonsense correlation". Preprint, 20 pages, https://arxiv.org/abs/1909.02546v1.
[11] Ernst, Ph., Shepp, L., and Wyner, A. (2017). Yule's "nonsense correlation" solved! Annals of Statistics, 45(4): 1789–1809. DOI: 10.1214/16-AOS1509.
[12] Es-Sebaiy, K., Viens, F. (2019). Optimal rates for parameter estimation of stationary Gaussian processes. Stochastic Processes and their Applications, 129(9), 3018–3054.
[13] Hu, Y. and Nualart, D. (2010). Parameter estimation for fractional Ornstein-Uhlenbeck processes. Statist. Probab. Lett. 80, 1030–1038.
[14] Hu, Y., Nualart, D. and Zhou, H. (2019). Parameter estimation for fractional
Ornstein-Uhlenbeck processes of general Hurst parameter. Statistical Inference for Stochastic Processes, 22(1), 111–142.
[15] Hu, Y. and Song, J. (2013). Parameter estimation for fractional Ornstein-Uhlenbeck processes with discrete observations. In: F. Viens et al. (eds), Malliavin Calculus and Stochastic Analysis: A Festschrift in Honor of David Nualart, 427–442, Springer.
[16] Karhunen, K. (1950). Über die Struktur stationärer zufälliger Funktionen. Ark. Mat. 1(3), 141–160.
[17] Kosov, E. (2019). Total variation distance estimates via L2-norm for polynomials in log-concave random vectors. International Mathematics Research Notices, Vol. 00, No. 0, pp. 1–17. doi:10.1093/imrn/rnz278.
[18] Kloeden, P. and Neuenkirch, A. (2007). The pathwise convergence of approximation schemes for stochastic differential equations. LMS J. Comp. Math. 10, 235–253.
[19] Kříž, P., Maslowski, B. (2019). Central limit theorems and minimum-contrast estimators for linear stochastic evolution equations. Stochastics, 91, 1109–1140.
[20] Major, P. (2007). On a multivariate version of Bernstein's inequality. Electronic Journal of Probability, 12, 966–988.
[21] Douissi, S., Es-Sebaiy, K., Kerchev, G. and Nourdin, I. (2022). Berry-Esséen bounds of second moment estimators for Gaussian processes observed at high frequency. Electron. J. Statist. 16(1): 636–670. DOI: 10.1214/21-EJS1967.
[22] Xiao, W. and Yu, J. (2018). Asymptotic theory for estimating drift parameters in the fractional Vasicek model. Econometric Theory, 1–34.
[23] Nourdin, I. and Peccati, G. (2015). The optimal fourth moment theorem. Proc. Amer. Math. Soc. 143, 3123–3133.
[24] Nourdin, I. and Tran, T. D. (2019). Statistical inference for Vasicek-type model driven by Hermite processes. Stochastic Processes and their Applications, 129(10), 3774–3791.
[25] Nualart, D. (2006). The Malliavin calculus and related topics. Springer-Verlag, Berlin.
[26] Nourdin, I. and Peccati, G. (2012).
Normal approximations with Malliavin calculus: from Stein's method to universality. Cambridge Tracts in Mathematics 192. Cambridge University Press, Cambridge.
[27] Nualart, D. and Peccati, G. (2005). Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab. 33, no. 1, 177–193.
[28] Sottinen, T. and Viitasaari, L. (2018). Parameter estimation for the Langevin equation with stationary-increment Gaussian noise. Statistical Inference for Stochastic Processes, 21(3), 569–601.
[29] Wood, A. and Chan, G. (1994). Simulation of stationary Gaussian processes. Journal of Computational and Graphical Statistics, 3(4): 409–432.
[30] Yule, G. U. (1926). Why do we sometimes get nonsense-correlations between time-series? A study in sampling and the nature of time-series. J. Royal Statistical Society, 89, 1–63.
[31] Zhou, J., Jiang, H., Wang, W. (2025). Asymptotic normality and Cramér-type moderate deviations of Yule's nonsense correlation statistic for Ornstein-Uhlenbeck processes. Journal of Statistical Planning and Inference, 238, 106275. https://doi.org/10.1016/j.jspi.2025.106275.
arXiv:2504.17202v1 [math.ST] 24 Apr 2025

Graph Quasirandomness for Hypothesis Testing of Stochastic Block Models

Kiril Bangachev∗  Guy Bresler†

April 25, 2025

Abstract

The celebrated theorem of Chung, Graham, and Wilson on quasirandom graphs implies that if the 4-cycle and edge counts in a graph $G$ are both close to their typical number in $G(n,1/2)$, then this also holds for the counts of subgraphs isomorphic to $H$ for any $H$ of constant size. We aim to prove a similar statement where the notion of close is whether the given (signed) subgraph count can be used as a test between $G(n,1/2)$ and a stochastic block model $\mathsf{SBM}$. Quantitatively, this is related to approximately maximizing $H\longrightarrow|\Phi(H)|^{\frac{1}{|V(H)|}}$, where $\Phi(H)$ is the Fourier coefficient of $\mathsf{SBM}$ indexed by the subgraph $H$. This formulation turns out to be equivalent to approximately maximizing the partition function of a spin model over an alphabet equal to the community labels in $\mathsf{SBM}$. We resolve the approximate maximization when $\mathsf{SBM}$ satisfies one of four conditions: 1) the probability of an edge between any two vertices in different communities is exactly 1/2; 2) the probability of an edge between two vertices from any two communities is at least 1/2 (this case is also covered in a recent work of Yu, Zadik, and Zhang); 3) the probability of belonging to any given community is at least $c$ for some universal constant $c>0$; 4) $\mathsf{SBM}$ has two communities. In each of these cases, we show that there is an approximate maximizer of $|\Phi(H)|^{\frac{1}{|V(H)|}}$ in $\mathcal A=\{\text{stars},\ \text{4-cycle}\}$. This implies that if there exists a constant-degree polynomial test distinguishing $G(n,1/2)$ and $\mathsf{SBM}$, then the two distributions can also be distinguished via the signed count of some graph in $\mathcal A$. We conjecture that the same holds true for distinguishing $G(n,1/2)$ and any graphon if we also add triangles to $\mathcal A$.

Keywords: Stochastic Block Models, Quasirandom Graphs, Low Degree Polynomial Tests.

∗Dept. of EECS, MIT. kirilb@mit.edu
†Dept. of EECS, MIT. guy@mit.edu.
Supported by NSF Grant CCF-2428619.

Contents
1 Introduction
  1.1 Background
  1.2 Implications and Interpretations of Conjecture 1
  1.3 Signed Subgraph Counts, Stochastic Block Models, and Partition Functions
  1.4 Our Results
  1.5 Prior Techniques and Barriers
  1.6 Our Key Ideas
https://arxiv.org/abs/2504.17202v1
2 Preliminaries
  2.1 Notation
  2.2 Testing via Low Degree Polynomials
  2.3 Simple Facts about Fourier Coefficients of Stochastic Block Models
  2.4 Meta Theorems on Signed Subgraph Counts
3 Proofs of Main Results
  3.1 Diagonal SBMs
  3.2 Non-Negative SBMs
  3.3 SBMs with Non-Vanishing Community Probabilities
  3.4 General 2-SBMs
4 Comparison Inequalities
  4.1 Cycle Comparisons: A Spectral Approach
  4.2 Exploiting Symmetries: A Sum-of-Squares Approach
  4.3 Ghost Vertices: A Second Moment Approach
5 Future Directions
A Library of Examples
B Some Omitted Details
  B.1 Proof of Inequality (25)

1 Introduction

1.1 Background

Quasirandomness. In the seminal paper [CGW88], Chung, Graham, and Wilson initiated the study of quasirandom graphs. The central question addressed by quasirandomness is: Does the graph $G$ resemble a random graph? [CGW88] establishes the equivalence of several seemingly disparate properties that make a graph similar to
a sample from $G(n,1/2)$ or, in their terminology, quasirandom. To describe the result, we denote by $C_H(G)$ the number of labeled copies of $H$ in a graph $G$ (so, for example, $C_H(K_n)=n(n-1)\cdots(n-|V(H)|+1)$ for any $H$) and by $C^*_H(G)$ the number of induced labeled copies.

Theorem 1.1 ([CGW88]). The following conditions are equivalent for an $n$-vertex graph $G$:
P1) $|C^*_H(G)|=(1+o(1))\,n^{|V(H)|}2^{-\binom{|V(H)|}{2}}$, which is equivalent to $|C_H(G)|=(1+o(1))\,n^{|V(H)|}2^{-|E(H)|}$, for all graphs $H$ of constant size.
P2) $|E(G)|\geq(1+o(1))\frac{n^2}{4}$ and $|C_{\mathrm{Cyc}_t}(G)|\leq\frac{n^t}{2^t}(1+o(1))$ for some cycle $\mathrm{Cyc}_t$ of even length $t\geq 4$.
P3) $|E(G)|\geq(1+o(1))\frac{n^2}{4}$ and $\lambda_1(G)=\frac n2(1+o(1))$, $\lambda_2(G)=o(n)$, where these are the two largest eigenvalues of $G$ by absolute value.
P4) For each subset $S$ of $V(G)$, the number of edges restricted to this subset is $\frac{|S|^2}{4}+o(n^2)$.
P5) For each subset $S$ of $V(G)$ with $|S|=\lfloor n/2\rfloor$, the number of edges restricted to this subset is $(1+o(1))\frac{n^2}{16}$.
P6) $\sum_{v\neq v'\in V(G)}\Big|\sum_{u\in V(G)}\mathbf 1[G_{vu}=G_{v'u}]-n/2\Big|=o(n^3)$.
P7) $\sum_{v\neq v'\in V(G)}\Big|\sum_{u\in V(G)}\mathbf 1[G_{vu}=G_{v'u}=1]-n/4\Big|=o(n^3)$.

In particular, if $G$ has a number of 4-cycles and edges close to that of a typical sample from $G(n,1/2)$, then any other subgraph count, as well as the spectrum of $G$, resembles that of Erdős-Rényi.

Hypothesis Testing for Graph Structure. Subgraph counts and spectral statistics are commonly used to compare random graphs to Erdős-Rényi in a different framework – when testing for hidden structure in random graph distributions. Concretely, the following hypothesis testing question has received considerable attention from the probability, computer science, and theoretical statistics communities.
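To illustrate the typical counts in Theorem 1.1, the following sketch compares the labeled 4-cycle count of one $G(n,1/2)$ sample to its expectation $n(n-1)(n-2)(n-3)\cdot 2^{-4}$, extracting the count from $\mathrm{trace}(A^4)$ by removing degenerate closed 4-walks (a standard identity; the graph size and seed are arbitrary choices for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1)
A = A + A.T                                # a sample from G(64, 1/2)

deg = A.sum(axis=1)
t4 = np.trace(np.linalg.matrix_power(A, 4))
# closed 4-walks that are not 4-cycles revisit a vertex: i=k and/or j=l
count = t4 - 2 * np.sum(deg * (deg - 1)) - A.sum()   # labeled 4-cycle count

typical = n * (n - 1) * (n - 2) * (n - 3) / 2 ** 4   # expectation under G(n,1/2)
print(count, typical)   # close to each other for a quasirandom sample
```

Each unordered 4-cycle is counted 8 times here (4 rotations times 2 orientations), matching the labeled-copies convention $C_H(K_n)=n(n-1)\cdots(n-|V(H)|+1)$.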
For a family of distributions $(P_n)_{n\in\mathbb N}$ over $n$-vertex graphs, one aims to solve the following hypothesis testing problem on input an $n$-vertex graph $G$:
$$H_0:\ G(n,1/2)\quad\text{versus}\quad H_1:\ P_n.\qquad(\mathrm{HT})$$
Both the information-theoretic question (is there a consistent test?) and the computational question (is there a computationally efficient test?) are active research areas for many choices of $P_n$ in (HT). Popular choices of $P_n$ in (HT) include graph distributions with the following types of hidden structure: community structure such as stochastic block models [HLL83, BCLS84, DF89, Bop87, BJR07, RL08, CO10, DKMZ11, Mas14, MNS18] (and many more in [Abb18]), in particular the planted clique [Kuč95, Jer92, AKS98, BHK+19, BBH18, BB19, BB20, HS24]; geometric structure such as random geometric graphs [DGLU11, BDER14, BBN19, BBH24, LR23a, LMSY22, LR23b, FGKS24, BB24a, BB24b, BB25]; and planted dense subgraphs such as a matching, clique, or cycle [HWX15, DWXY21, MNWS+23, DMW25, YZZ24] (in addition to the planted clique references).
One natural and computationally efficient approach to (HT) is to compare simple statistics of $P_n$ and $G(n,1/2)$. For example, if $P_n$ has many more triangles than $G(n,1/2)$ (as in the case of random geometric graphs over the sphere, see [BDER14]), then one can use this fact to distinguish the two distributions. The resemblance to the classic quasirandomness theory, initiated in Theorem 1.1, is striking – subgraph counts are used to compare with Erdős-Rényi.

Towards a Quasirandomness Theory for Hypothesis Testing for Graph Structure. In the context of (HT), the question of graph quasirandomness takes the following form:
Is there a small set of simple graph statistics such that if $P_n$ and $G(n,1/2)$ are indistinguishable under these statistics, then they are also computationally / information-theoretically indistinguishable?
Of course, one may speculate that Theorem 1.1 already answers this question.
If the 4-cycle count and edge count of $P_n$ and $G(n,1/2)$ are sufficiently close, then so are all other subgraph counts. Hence, polynomials in the edges of the two graph distributions are also close. Within the
increasingly popular framework of low-degree polynomial tests (see Section 2.2 for definitions), this would give evidence for computational indistinguishability. The reason this argument fails is that Theorem 1.1 is quantitatively too weak in the $o(1)$ dependence on the number of vertices. For example, consider $P_n=G(n,1/2+1/\sqrt n)$. According to Theorem 1.1, (a sample from) this graph distribution is quasirandom. However, one can easily distinguish $P_n$ and $G(n,1/2)$: a simple concentration argument shows that, with high probability, the first distribution has more than $\frac12\binom n2+\frac18 n^{1.5}$ edges, while the second has fewer than $\frac12\binom n2+\frac18 n^{1.5}$ edges. In Section 1.5.2, we explain why even careful book-keeping of the $o(1)$-dependencies in Theorem 1.1 is not sufficient. There is a fundamental barrier to the techniques used for Theorem 1.1, and a different approach is needed to develop a quasirandomness theory in the setting of hypothesis testing. In the recent work [YZZ24], the authors make a first step towards this end.¹

Theorem 1.2 ([YZZ24]). Let $(H_n)_{n\in\mathbb N}$ be a sequence of fixed $n$-vertex graphs. The sequence of planted distributions $(P_n)_{n\in\mathbb N}$ over $n$-vertex graphs is defined as follows. To sample from $P_n$, one first samples $G$ from $G(n,1/2)$,² then adds the edges of $H_n$, producing $G\cup H_n$, and finally applies a uniformly random permutation to the vertices of $G\cup H_n$.
There exists a constant-degree polynomial test that distinguishes $H_0: G(n,1/2)$ and $H_1: P_n$ with high probability if and only if there exists some signed star count that distinguishes $H_0: G(n,1/2)$ and $H_1: P_n$ with high probability.

¹To the best of our knowledge, the connection of their work to graph quasirandomness was not known to the authors of [YZZ24].
²The results of [YZZ24] and the current work extend appropriately to any $G(n,q)$ when $q\in(0,1)$ is an absolute constant (independent of $n$). We choose to work with $q=1/2$ as this makes nearly no difference in the arguments.
We do note that if we allow $q$ to depend on $n$, we expect the behavior to change dramatically; see Section 5.
We will explain the precise meaning of constant-degree polynomial test and signed subgraph count in a moment. More important for now is the interpretation of the theorem from a quasirandomness perspective: if signed star counts between $P_n$ and $G(n,1/2)$ are sufficiently similar, then so are all other constant-degree polynomial statistics.
The goal of this work is to extend the above theorem to a more diverse family of distributions. Specifically, the choice of model in Theorem 1.2 makes the following strong structural assumption: even conditioned on the latent planted graph $H$, the edge $G_{ij}$ has marginal probability of appearance at least 1/2. Both the methods and the conclusion of Theorem 1.2 break down in some of the simplest models that do not exhibit this structure. For example, the following distribution is easily distinguishable from $G(n,1/2)$ via counting signed 4-cycles, but signed stars fail to distinguish it. Each of $n$ vertices receives an independent uniformly random label 1 or 2. If $u$ and $v$ receive the same label, they are adjacent with probability $1/2+n^{-1/10}$; if $u$ and $v$ receive different labels, they are adjacent with probability $1/2-n^{-1/10}$. One can show that the expected number of signed stars is the same as in $G(n,1/2)$: zero. The reason is that, even conditioned on the latent label of $u$, the edge
$G_{u,v}$ is distributed as $\mathrm{Bern}(1/2)$, due to the randomness of the label of $v$.

Our Conjecture and Results. We make the following quasirandomness conjecture for hypothesis testing of a stochastic block model (Definition 1 below) against Erdős-Rényi.

Conjecture 1. Suppose that $P_n$ is a stochastic block model (whose parameters may depend arbitrarily on $n$) which one aims to distinguish from $G(n,1/2)$. There exists a constant-degree polynomial test that distinguishes $H_0: G(n,1/2)$ and $H_1: P_n$ with high probability if and only if one of the following signed subgraph counts
{edge, stars, triangle, 4-cycle}
distinguishes $H_0: G(n,1/2)$ and $H_1: P_n$ with high probability.

We note that [YZZ24] implicitly proves the conjecture whenever, even conditioned on the labels, any two vertices have a probability of connection at least 1/2. Our main contribution is to prove this conjecture in the following additional cases:
1. $P_n$ is an arbitrary stochastic block model on 2 communities.
2. $P_n$ is an arbitrary stochastic block model on a constant number of communities in which each community label appears with constant positive probability.
3. $P_n$ is a stochastic block model in which every two vertices in distinct communities are adjacent with probability 1/2.
The stochastic block model is of special interest for several reasons. First, it is perhaps the most widely studied family of random graph distributions after Erdős-Rényi. Second, as we will see in the forthcoming section, the above results for stochastic block models with $k$ communities can be equivalently phrased as an approximate maximization of the partition function of a certain spin-glass model with alphabet of size $k$. The interaction matrix of the spin model is determined by the stochastic block model interaction matrix, and external fields correspond to the community probabilities. This gives an alternative statistical-mechanics interpretation of our results, in addition to the quasirandomness perspective.
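The failure of stars and success of 4-cycles in the two-community example above can be checked exactly: conditioned on the labels, each edge indicator has bias $\pm\varepsilon$ with $\varepsilon=n^{-1/10}$, and the Fourier coefficient of a shape $H$ is the average over labels of the product of these biases. A small enumeration sketch follows; the value of `eps` is an illustrative placeholder.

```python
import itertools

def phi(edges, nverts, eps):
    """Fourier coefficient E[prod over edges of the conditional bias] for the
    2-community model: bias +eps for equal labels, -eps otherwise."""
    total = 0.0
    for labels in itertools.product([0, 1], repeat=nverts):
        w = 1.0
        for (i, j) in edges:
            w *= eps if labels[i] == labels[j] else -eps
        total += w
    return total / 2 ** nverts   # uniform prior over label assignments

eps = 0.1
star3 = [(0, 1), (0, 2), (0, 3)]          # 3-star with center 0
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]     # 4-cycle
print(phi(star3, 4, eps))   # ~0 (exact value 0): stars carry no signal
print(phi(c4, 4, eps))      # eps**4: every label assignment contributes +eps^4
```

Writing labels as spins $\sigma_v=\pm 1$, the bias is $\varepsilon\sigma_u\sigma_v$; a star's product leaves unpaired spins (mean 0), while the 4-cycle's product squares every spin, which is why its coefficient survives.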
Finally, stochastic block models can approximate smooth graphons arbitrarily well. In fact, we believe that Conjecture 1 should hold more generally for graphons.

1.2 Implications and Interpretations of Conjecture 1

Fine-Grained Running Times. A degree-$D$ polynomial in the edges of an $n$-vertex graph may take up to time $n^{2D}$ to compute and, hence, be completely impractical. Yet, Conjecture 1 tells us that we only need to compute signed star counts or signed 3- and 4-cycle counts. Signed counts of stars can be computed in near-linear time, as the signed star count with a fixed central vertex $v$ is a simple one-dimensional functional of the degree of the vertex (which can trivially be computed in time $O(D)$, where $D$ is the star size); one simply needs to evaluate and add this function over all possible central vertices. On the other hand, signed triangle and 4-cycle counts can be evaluated in $n^\omega$ time, where $\omega\in[2,2.371\ldots]$ is the matrix-multiplication exponent [WXXZ24]. Indeed, the signed triangle count of a graph with adjacency matrix $A$ is $\mathrm{trace}((2A-\mathbf 1\mathbf 1^T)^3)$ and the signed 4-cycle count is $\mathrm{trace}((2A-\mathbf 1\mathbf 1^T)^4)-n(n-1)(2n-3)$. Hence, if there exists a constant-degree polynomial distinguisher, there also exists a practical one.
More succinctly, our result suggests a strong dichotomy for the complexity of testing stochastic block models with low-degree polynomial tests. Either
there exists a low-degree distinguisher implementable in nearly quadratic time, or there does not exist any constant-degree polynomial distinguisher.

On Finding A Distinguisher. The practical distinguisher is easy to find, as noted in [YZZ24] – it is a signed star, triangle, or 4-cycle count. Hence, a statistician in practice may also simply try each of these tests. They do not even need to know the specific parameters of the stochastic block model, only the fact that it belongs to a family satisfying Conjecture 1.
Conversely, the failure of a very small set of tests (the signed counts of triangles, 4-cycles, and stars) implies the failure of ALL low-degree polynomial tests.

Our Conjecture And Results in The Context of Low-Degree Hardness. The current work and [YZZ24] are preceded by a long line of low-degree polynomial hardness results on testing between Erdős-Rényi and random graph models [BHK+19, HS17a, Hop18, BB25, BB24a, BB24b, MWZ23, MWZ24, KVWX23, RSWY23, MVW24] (nearly all of which are graphons). We remark that there is an important conceptual difference between these prior results and ours. Our work does not prove hardness for any specific model (and, in fact, does not explicitly prove any hardness result). Instead, our work and [YZZ24] aim to derive certain universality principles for the low-degree polynomial framework itself over a rich family of testing problems. In this light, our work more closely resembles the universal (near-)equivalence between low-degree polynomial tests and SQ algorithms in [BBH+21].

1.3 Signed Subgraph Counts, Stochastic Block Models, and Partition Functions

We now introduce the main ingredients necessary to describe our results.

Signed Subgraph Counts. To derive a quantitatively stronger form of Theorem 1.1, we need to quantify what it means for a subgraph count to be different enough from that of Erdős-Rényi so that one can use it towards (HT). We borrow the notion from the framework of low-degree polynomial tests [HS17a, Hop18], as in [YZZ24]. Let $H$ be a graph. On input an $n$-vertex graph $G$,
We borrow the notion from the framework of low-degree polynomial tests [ HS17a,Hop18] as in [YZZ24]. LetHbe a graph. On input an n-vertex graph G, 4 compute the signed count of H, SCH(G):=/summationdisplay H1∼H/productdisplay (ij)∈E(H1)(2Gij−1), (1) wherethesumisoverallisomorphiccopiesof H1inthecompletegraph KnandGijistheindicatorof therespectiveedge. Comparewiththeunsignedcount CH(G) fromTheorem 1.1 , whichcorresponds toCH(G):=/summationtext H1∼H/producttext (ij)∈E(H1)Gij.Theadvantageofworkingwith signed counts inthehypothesis testingsettingisthattheyhaveasmallervariance(asnote dby[BDER14 ])withrespectto G(n,1/2) due to the fact that the signs of different subgraphs are uncorr elated (see Theorem 2.1 ). One condition that would guarantee that SCHdistinguishes H0:G(n,1/2) andH1:Pnis /vextendsingle/vextendsingle/vextendsingleIE G∼H1SCH(G)−IE G∼H0SCH(G)/vextendsingle/vextendsingle/vextendsingle=ω/parenleftBig max/parenleftBig Var G∼H0[SCH(G)]1/2,Var G∼H1[SCH(G)]1/2/parenrightBig/parenrightBig .(2) If (2) holds, one can test between H1andH0by evaluating SCHon the input G.IfG∼Hbfor b∈ {0,1},thenSCH(G)∈[ IE G∼HbSCH(G)±C×Var G∼Hb[SCH(G)]1/2] =:Ibwith high probability for large enough C=ω(1) by Chebyshev’s inequality. Condition ( 2) ensures that I0,I1are disjoint for some appropriate Cand, hence, one can test by reporting the membership of SCHto one of I0,I1. Under the G(n,1/2) distribution, when Hhas a constant number of edges, it holds that IE G∼G(n,1/2)[SCH(G)] = 0 and Var G∼G(n,1/2)[SCH(G)] =/summationdisplay H1∼H1 = Θ(n|V(H)|).(3) Thus, one can rephrase ( 2) as|IE G∼PnSCH(G)|=ω/parenleftBig max/parenleftbig n|V(H)|/2,Var G∼Pn[SCH(G)]1/2/parenrightbig/parenrightBig . If the distribution Pnis permutation invariant, that is any vertex permutation πis measure- preserving with respect to Pn,
|
https://arxiv.org/abs/2504.17202v1
|
one can further observe that IE G∼PnSCH(G) = IE G∼Pn/productdisplay (ij)∈E(H)(2Gij−1)×/parenleftBigg/summationdisplay H1∼H1/parenrightBigg = Θ/parenleftBigg n|V(H)|×IE G∼Pn/bracketleftBigg/productdisplay (ij)∈E(H)(2Gij−1)/bracketrightBigg/parenrightBigg . The quantity ΦPn(H):= IE G∼Pn/bracketleftBigg/productdisplay (ij)∈E(H)(2Gij−1)/bracketrightBigg (4) denotes the Fourier coefficient of the distribution (i.e., of the probability mass function of) Pn corresponding to shape H.Going back to ( 2), the inequality becomes n|V(H)||ΦPn(H)|=ω(n|V(H)|/2)⇐⇒ |ΦPn(H)|1 |V(H)|n1/2=ω(1). (5) Leaving aside the fact that ( 5) does not capture the variance under Pn3, (5) suggests that the “most powerful” signed subgraph counts for distinguishing G(n,1/2) andPnare the (approximate) maximizers of H−→ΨPn(H),where Ψ Pn(H):=|ΦPn(H)|1 |V(H)|. This motivates the following problem: 3In our setting, it turns out that ( 5) can “nearly” capture the variance under Pn.We return to this in Theorem 2.4 . 5 Problem 1 (Approximate Maximization of Scaled Fourier Coefficients / A pproximate Maximiza- tion of SBM Partition Function) .For every n∈N,letFnbe a family of random graph distributions overn-vertex graphs invariant under vertex permutations. Let F=/uniontext n∈NFn.LetD∈Nbe some fixed constant. Find some minimal set ADof graphs on at most Dedges with the following prop- erty. There exists some constant CD,F>0 (depending only on F,D) such that for any graph H without isolated vertices on at most Dedges and any F∈ F, ΨF(H)≤CD,F×max K∈ADΨF(K). (6) To gain intuition about Problem 1 , consider the setting of dense planted subgraphs of [ YZZ24]. The authors implicitly show that the set of stars on at most Dedges is such a set AD. Stochastic Block Models. Oneofthemotivations ofourworkistoextendtheresultsof[ YZZ24] to a setting when, conditioned on the latent structure of the model, some of the edges might have a probability of appearance less than half. 
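Before moving on, the signed count in (1) can be made concrete. The following brute-force sketch is our own illustration (the function name and graph representation are not from the paper) and is feasible only for constant-size patterns $H$:

```python
from itertools import permutations

def signed_count(adj, n, H_edges, H_vertices):
    """Brute-force SC_H(G) from (1): sum, over all copies of H in K_n,
    of the product of (2*G_ij - 1) over the copy's edges."""
    total = 0
    seen = set()  # deduplicate copies that differ only by an automorphism of H
    for verts in permutations(range(n), H_vertices):
        copy = frozenset(frozenset((verts[i], verts[j])) for i, j in H_edges)
        if copy in seen:
            continue
        seen.add(copy)
        prod = 1
        for e in copy:
            u, v = tuple(e)
            prod *= 2 * adj[u][v] - 1
        total += prod
    return total

triangle = [(0, 1), (1, 2), (0, 2)]
full = [[1] * 4 for _ in range(4)]          # K_4: every edge present
print(signed_count(full, 4, triangle, 3))   # 4 copies, each contributing +1
```

The test based on (2) then amounts to checking whether the observed value of $\mathrm{SC}_H(G)$ lands in the interval $I_0$ (of width $\Theta(C\, n^{|V(H)|/2})$ around 0, by (3)) or in $I_1$.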
Stochastic block models are a canonical example.

Definition 1. A stochastic block model on $k$ communities is parametrized by a probability vector $p=(p_1,p_2,\ldots,p_k)\in(0,1]^k$ and a symmetric matrix $Q\in[-1,1]^{k\times k}$. To generate an $n$-vertex sample from $\mathrm{SBM}(n;p,Q)$, one performs the following two-step process:

1. First, draw $n$ independent labels $x_1,x_2,\ldots,x_n$ from the distribution over $[k]$ specified by $p$.
2. Produce an $n$-vertex graph $G$ by drawing each edge $(i,j)$ as an independent $\mathrm{Bern}\big(\frac{1+Q_{x_i,x_j}}{2}\big)$ random variable.

More commonly, the SBM is defined with the probability matrix $M\in[0,1]^{k\times k}$ where $M_{i,j} = (1+Q_{i,j})/2$, but the current parametrization is more convenient for discussing Fourier coefficients. We proceed with some examples:

1. Suppose that $Q = 0_{k\times k}$ is the zero matrix. Then, regardless of labels, each edge $(i,j)$ appears with probability $1/2$. Hence, this is $G(n,1/2)$.
2. Suppose that $k=2$, $p = (1/n^{\alpha},\, 1 - 1/n^{\alpha})$ and $Q = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$. Then, $\mathrm{SBM}(n;p,Q)$ is the planted clique distribution with an expected clique of size $n^{1-\alpha}$. Namely, the vertices with label 1 form a clique and every other edge appears with probability $1/2$.

Partition Functions. We now record an explicit formula for $\Phi_{\mathrm{SBM}(p,Q)}(H)$. We will often write $\mathrm{SBM}(p,Q)$ instead of $\mathrm{SBM}(n;p,Q)$ whenever the dependence on $n$ is clear or unimportant. We write $x\sim p$ for a variable distributed over $[k]$ with p.m.f. $p$.

Proposition 1.3. Consider some $\mathrm{SBM}(p,Q)$ distribution and let $H$ be a graph on $h$ vertices.
Then,
\[ \Phi_{\mathrm{SBM}(p,Q)}(H) = \mathbb{E}_{(x_i)_{i=1}^h \overset{\mathrm{i.i.d.}}{\sim} p}\Big[ \prod_{(i,j)\in E(H)} Q_{x_i,x_j} \Big] = \sum_{x_1,x_2,\ldots,x_h\in[k]} \Big( \prod_{i=1}^h p_{x_i} \times \prod_{(i,j)\in E(H)} Q_{x_i,x_j} \Big). \tag{7} \]

Proof. Without loss of generality, assume that the vertices of $H$ are $\{1,2,\ldots,h\}$. By definition,
\begin{align*}
\Phi_{\mathrm{SBM}(p,Q)}(H) &= \mathbb{E}_{G\sim \mathrm{SBM}(p,Q)}\Big[ \prod_{(i,j)\in E(H)} (2G_{i,j}-1) \Big] = \mathbb{E}\Big[ \mathbb{E}\Big[ \prod_{(i,j)\in E(H)} (2G_{i,j}-1) \,\Big|\, (x_i)_{1\le i\le h} \Big] \Big] \\
&= \mathbb{E}\Big[ \prod_{(i,j)\in E(H)} \mathbb{E}\big[ (2G_{i,j}-1) \,\big|\, (x_i)_{1\le i\le h} \big] \Big] = \mathbb{E}\Big[ \prod_{(i,j)\in E(H)} \Big( 2\cdot\tfrac{1+Q_{x_i,x_j}}{2} - 1 \Big) \Big] \\
&= \mathbb{E}\Big[ \prod_{(i,j)\in E(H)} Q_{x_i,x_j} \Big] = \sum_{x_1,x_2,\ldots,x_h\in[k]} \Big( \prod_{i=1}^h p_{x_i} \times \prod_{(i,j)\in E(H)} Q_{x_i,x_j} \Big).
\end{align*}

The right-hand side would be the partition function of a $k$-spin system if the matrix $Q$ had non-negative entries. Denoting $f(s) := \log p_s$, $g(s,t) := \log Q_{s,t}$, we obtain the expression, more familiar from spin systems,
\[ \Phi_{\mathrm{SBM}(n;p,Q)}(H) = \sum_{x_1,\ldots,x_h\in[k]} \Big( \prod_{i=1}^h p_{x_i} \times \prod_{(i,j)\in E(H)} Q_{x_i,x_j} \Big) = \sum_{x_1,\ldots,x_h\in[k]} \exp\Big( \sum_{(i,j)\in E(H)} g(x_i,x_j) + \sum_{i\in V(H)} f(x_i) \Big) =: Z_H(g,f). \tag{8} \]
Hence, (approximately) maximizing $\Psi_{\mathrm{SBM}(p,Q)}(H) = |\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}}$ over graphs $H$ is the same as approximately maximizing $|Z_H(g,f)|^{\frac{1}{|V(H)|}}$ or, equivalently, $\frac{1}{|V(H)|}\log|Z_H(g,f)|$. This problem has received a lot of attention for spin systems, most notably in the case of the hard-core model [Alo91, Kah01, Zha09, Gal11, DJPR17, SSSZ18, SSSZ19]. Yet, our task differs from prior work in at least two notable ways. What makes our problem easier is that we are content with approximate maximization of $|Z_H(g,f)|^{\frac{1}{|V(H)|}}$ (up to constant multiplicative factors), as this does not change the asymptotics of testing. What makes our problem harder is that the interaction terms $Q_{i,j}$ might be negative, so prior techniques, for example those based on the bipartite swapping trick [Zha09], fail.
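For small $h$ and $k$, the explicit sum in (7) is directly computable. The sketch below is our own (the function name is ours, not the paper's) and evaluates $\Phi_{\mathrm{SBM}(p,Q)}(H)$ by enumerating all $k^h$ label assignments:

```python
from itertools import product

def phi(p, Q, edges, h):
    """Phi_{SBM(p,Q)}(H) via the explicit sum (7): over every label
    assignment x in [k]^h, weight prod_i p_{x_i} * prod_{(i,j)} Q_{x_i,x_j}."""
    total = 0.0
    for x in product(range(len(p)), repeat=h):
        w = 1.0
        for i in range(h):
            w *= p[x[i]]
        for i, j in edges:
            w *= Q[x[i]][x[j]]
        total += w
    return total

# Sanity checks against closed forms, using the planted-clique-style Q above:
p, Q = [0.5, 0.5], [[1.0, 0.0], [0.0, 0.0]]
print(phi(p, Q, [(0, 1)], 2))           # edge: sum_{x,y} p_x p_y Q_{x,y} = 0.25
print(phi(p, Q, [(0, 1), (0, 2)], 3))   # wedge: sum_x p_x (sum_y Q_{x,y} p_y)^2 = 0.125
```

The wedge value matches the star formula recorded later in Corollary 2.2, $\Phi(\mathrm{Star}_t) = \sum_x p_x\big(\sum_y Q_{x,y}p_y\big)^t$, with $t=2$.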
1.4 Our Results

1.4.1 Results for Restricted Stochastic Block Models

Theorem 1.4 (Main Results on Testing and Partition Function Maximization). Consider the $k$-community stochastic block model $\mathrm{SBM}(p,Q)$. Let $D$ be an even integer constant greater than 4. Then, in Problem 1, one can take the set $\mathcal{A}_D$ of approximate maximizers to be:

1. Diagonal SBMs (Theorem 3.1): If $Q$ satisfies $Q_{i,j} = 0$ whenever $i\neq j$, then one can take $\mathcal{A}^1_D = \{\text{edge, star on 2 edges}\}$.
2. Non-Negative SBMs (Theorem 3.3 and [YZZ24]): If $Q$ satisfies $Q_{i,j}\ge 0$ for each $i,j$, then one can take $\mathcal{A}^2_D = \{\text{stars on at most } D \text{ edges}\}$.
3. SBMs with Non-Vanishing Community Probabilities (Theorem 3.4): If $p$ satisfies $p_i \ge c$ for all $i\in[k]$ for some universal constant $c$, then one can take $\mathcal{A}^3_D = \{\text{edge, star on 2 edges, 4-cycle}\}$.
4. SBMs with Two Communities (Theorem 3.6): If $k=2$ and $D$ is even, then one can take $\mathcal{A}^4_D = \{\text{stars on at most } D \text{ edges}\}\cup\{\text{4-cycles}\}$.

Furthermore, in each of the cases above, if there is a constant-degree polynomial test distinguishing the respective SBM model from $G(n,1/2)$ with high probability, then one can also distinguish with high probability using the signed count of some graph in the respective $\mathcal{A}_D$.

We remark that, allowing for a small increase in the vertex size, the same theorem holds for degrees as large as $D = o(\log n/\log\log n)$. We formalize this in Proposition 5.1. We choose to write all the proofs for constant-degree polynomials, $D = O(1)$, for simplicity of exposition, but the extensions to $D = o(\log n/\log\log n)$ are only a matter of careful bookkeeping. We do not pursue this direction since we are not aware of any concrete advantages of degree $O(\log n/\log\log n)$ polynomials over constant-degree polynomials. The situation would have been different if our techniques also captured degree $O(\log n)$ polynomials, which encompass many spectral methods. This, however, seems challenging at present.

1.4.2 One-to-one comparisons of Fourier coefficients: Results and Barriers

Barrier to one-to-one comparisons.
One natural approach to identifying such optimal sets $\mathcal{A}_D$ is via one-to-one comparisons of Fourier coefficients. For example, in the case of diagonal SBMs, prove that for any $H$, there exists some $K\in\{\text{edge, wedge}\}$ such that $\Psi_{\mathrm{SBM}(p,Q)}(H) \le \Psi_{\mathrm{SBM}(p,Q)}(K)$ for any diagonal SBM. This indeed works for diagonal SBMs. This simple approach, however, fails in the case of SBMs with non-vanishing community fractions and 2-SBMs (as demonstrated in Theorem A.8). Instead, we rely on many-to-one comparisons. For example, in the case of 2-SBMs, we prove that for any $H$ and any $\mathrm{SBM}(p,Q)$, there exists some $K_{\mathrm{SBM}(p,Q)} \in \{\text{stars on at most } D \text{ edges}\}\cup\{\text{4-cycles}\}$ such that
\[ \Psi_{\mathrm{SBM}(p,Q)}(H) \lesssim \Psi_{\mathrm{SBM}(p,Q)}\big(K_{\mathrm{SBM}(p,Q)}\big). \]
The only difference from a one-to-one comparison is in the order of quantifiers – here $K$ can depend on the specific stochastic block model. This flexibility turns out to be necessary for results 3 and 4 in our Theorem 1.4, as demonstrated by Theorem A.8.

Some one-to-one comparisons. It is still useful to consider which one-to-one comparisons we can obtain. We summarize below.

Theorem 1.5 (One-to-one comparison of Fourier Coefficients). Consider any $\mathrm{SBM}(p,Q)$. Then:

1. Theorem 4.1: For any $t\ge 5$, $\Psi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_t) \le \Psi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4)$.
2. Theorem 4.3: If $H$ has a degree-$d$ vertex, then $|\Phi_{\mathrm{SBM}(p,Q)}(H)| \le |\Phi_{\mathrm{SBM}(p,Q)}(K_{2,d})|^{1/2}$.
3. Theorem 4.2: Let $K_4^-$ be the graph on 4 vertices with 5 edges. Then, $\Psi_{\mathrm{SBM}(p,Q)}(K_4^-) \le \Psi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4)$.

The first statement explains why the only signed cycle counts for distinguishing stochastic block models and $G(n,1/2)$ that appear in the literature are triangles (for example, planted coloring in [KVWX23]) and 4-cycles (for example, the quiet planting in [KVWX23]), but no larger cycle counts. The proof is a simple spectral argument, at a high level similar to the way spectrum (P3) and cycle counts (P2) in Theorem 1.1 are related in classical quasirandomness [CGW88, F4 and F5]. The second inequality also follows the proof of equivalence used in [CGW88, F12], namely the one used to show that (P6) implies (P1) in Theorem 1.1 (and it appears implicitly in [LR23a] for our setting of signed subgraph counts). And while the quantitative dependence in the second inequality is strong enough to imply Theorem 1.1, it is too weak for the purposes of hypothesis testing.
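One way to see the spectral argument behind Item 1 (our own sketch, not the paper's proof): by (7), $\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_t) = \mathrm{tr}\big((PQ)^t\big)$ with $P=\mathrm{diag}(p)$, so writing $\lambda_1,\ldots,\lambda_k$ for the eigenvalues of the symmetric matrix $P^{1/2}QP^{1/2}$ gives $\Phi(\mathrm{Cyc}_t) = \sum_i\lambda_i^t$, and for $t\ge 5$, $|\sum_i\lambda_i^t| \le (\max_i|\lambda_i|)^{t-4}\sum_i\lambda_i^4 \le (\sum_i\lambda_i^4)^{t/4}$, which is exactly $\Psi(\mathrm{Cyc}_t)\le\Psi(\mathrm{Cyc}_4)$. The code below (names are ours) checks the trace identity against the brute-force sum and the resulting inequality numerically:

```python
from itertools import product

def phi_cycle_bruteforce(p, Q, t):
    """Phi(Cyc_t) from (7): sum over labels of prod_i p_{x_i} * Q over cycle edges."""
    k = len(p)
    total = 0.0
    for x in product(range(k), repeat=t):
        w = 1.0
        for i in range(t):
            w *= p[x[i]] * Q[x[i]][x[(i + 1) % t]]
        total += w
    return total

def phi_cycle_trace(p, Q, t):
    """Same quantity as tr((PQ)^t) with P = diag(p): the spectral viewpoint."""
    k = len(p)
    M = [[p[i] * Q[i][j] for j in range(k)] for i in range(k)]  # M = P Q
    A = M
    for _ in range(t - 1):
        A = [[sum(A[i][l] * M[l][j] for l in range(k)) for j in range(k)]
             for i in range(k)]
    return sum(A[i][i] for i in range(k))

p = [0.3, 0.7]
Q = [[0.8, -0.5], [-0.5, 0.2]]  # a 2-community SBM with a negative entry
assert abs(phi_cycle_bruteforce(p, Q, 5) - phi_cycle_trace(p, Q, 5)) < 1e-9
psi4 = abs(phi_cycle_trace(p, Q, 4)) ** 0.25
for t in range(5, 8):
    assert abs(phi_cycle_trace(p, Q, t)) ** (1.0 / t) <= psi4 + 1e-9
```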
We would like to show the much stronger inequality $|\Phi_{\mathrm{SBM}(p,Q)}(H)| \le |\Phi_{\mathrm{SBM}(p,Q)}(K_{2,d})|^{|V(H)|/|V(K_{2,d})|}$ (note that $|\Phi_{\mathrm{SBM}(p,Q)}(K_{2,d})| \le 1$ and $|V(H)| \ge d+1 \ge |V(K_{2,d})|/2 = (d+2)/2$). Nevertheless, it turns out that in certain cases, one can improve the argument in Item 2 above. For example, Item 2 implies that $|\Phi_{\mathrm{SBM}(p,Q)}(K_4^-)| \le |\Phi_{\mathrm{SBM}(p,Q)}(K_{2,2})|^{1/2}$. However, we show that in this specific case, the argument can be appropriately modified to imply the stronger inequality from Item 3, which is sufficient for the hypothesis-testing setting: $|\Phi_{\mathrm{SBM}(p,Q)}(K_4^-)| \le |\Phi_{\mathrm{SBM}(p,Q)}(K_{2,2})|$. This inequality also appears implicitly in [LR23a]. In particular, this result demonstrates why no examples appear in the literature in which one tests via signed $K_4^-$-counts.

1.4.3 A Library of Examples

In Appendix A we provide example SBM distributions which demonstrate the necessity of the different subgraph (signed) counts appearing in the respective sets $\mathcal{A}_D$ in Theorem 1.4. In Theorem A.2, we show that for certain SBMs, even on just 2 balanced communities, one needs to go beyond the signed star counts proposed by [YZZ24] to distinguish the model from Erdős–Rényi with a constant-degree polynomial. In Theorem A.8, we illustrate the aforementioned necessity of many-to-one comparisons. Finally, with Theorems A.3 to A.7, we show that each of the triangle, 4-cycle, edge, wedge, and a large star necessarily belongs to the set in Conjecture 1.

1.5 Prior Techniques and Barriers

Our two main results are parts 3 and 4 of Theorem 1.4. We begin with an overview of previous techniques appearing in the recent work on planted graph models [YZZ24] and the classical quasirandomness theory [CGW88], and outline two barriers which prevent those techniques from working in our setting. Then, we explain our two new key ideas towards overcoming these barriers.

1.5.1 Signed Stars in Planted-Subgraph Models

We begin with the recent work [YZZ24]. Adapted to our setting of stochastic block models, one part of their argument shows that if $Q_{i,j}\ge 0$ for all $i,j$, then for any graph $H$ on at most $D$ edges without isolated vertices, there exists some star on $t\le D$ edges such that $\Psi_{\mathrm{SBM}(p,Q)}(H) \le \Psi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_t)$. For completeness, we give a full proof of this fact in Theorem 3.3, but here is a sketch of the proof. Recall (7) – for any graph $H$ without isolated vertices,
\[ \Phi_{\mathrm{SBM}(p,Q)}(H) = \sum_{x_1,x_2,\ldots,x_h\in[k]} \Big( \prod_{i=1}^h p_{x_i} \times \prod_{(i,j)\in E(H)} Q_{x_i,x_j} \Big). \]
Since all values of $Q_{x_i,x_j}$ are non-negative and $[0,1]$-valued, removing an edge from $H$ cannot decrease the quantity on the right-hand side. Hence, one can iteratively remove edges from $H$. The only condition one needs to ensure is that there are no isolated vertices. Therefore, the process will terminate with a graph $K$ in which every connected component is a star. Stars are the only connected graphs from which one cannot remove an edge without creating isolated vertices. Altogether, this means that $\Phi_{\mathrm{SBM}(p,Q)}(H) \le \Phi_{\mathrm{SBM}(p,Q)}(K)$, where $K$ has $|V(H)|$ vertices and every connected component is a star. An elementary factorization of Fourier coefficients of SBMs over connected components (Proposition 2.2) implies that $\Psi_{\mathrm{SBM}(p,Q)}(H) \le \Psi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_t)$ for some star graph on at most $D$ edges (one of the components of $K$).
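The edge-removal monotonicity in this sketch is easy to check numerically. The following minimal check is our own (function and variable names are ours): with non-negative $Q$, every deleted factor $Q_{x_i,x_j}$ lies in $[0,1]$, so the sum in (7) can only grow when an edge is removed (keeping the vertex set, hence the $p$-product, fixed):

```python
from itertools import product

def phi(p, Q, edges, h):
    """Phi_{SBM(p,Q)}(H) via the explicit sum (7)."""
    total = 0.0
    for x in product(range(len(p)), repeat=h):
        w = 1.0
        for i in range(h):
            w *= p[x[i]]
        for i, j in edges:
            w *= Q[x[i]][x[j]]
        total += w
    return total

p = [0.3, 0.7]
Q = [[0.9, 0.2], [0.2, 0.6]]             # non-negative SBM
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]    # a 4-cycle on vertices 0..3
for e in C4:
    # Deleting any edge leaves a path on the same 4 vertices; Phi cannot drop.
    assert phi(p, Q, C4, 4) <= phi(p, Q, [f for f in C4 if f != e], 4)
```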
The challenge of extending this argument to general $\mathrm{SBM}(p,Q)$ distributions when $Q$ has negative entries is that removing an edge can decrease the Fourier coefficient of a graph. This is illustrated in our Theorem A.2. Namely, take $k=2$, $p=(1/2,1/2)$ and $Q = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$. Let $H=K_{4,4}$, the complete bipartite graph with two parts of size 4. One can show that $\Phi_{\mathrm{SBM}(p,Q)}(H) = 1$ (as $H$ has only even-degree vertices). However, for any edge $e$, if $H\setminus e$ is the graph $H$ with edge $e$ deleted, $\Phi_{\mathrm{SBM}(p,Q)}(H\setminus e) = 0$ (as $H\setminus e$ has an odd-degree vertex). Hence, removing an edge can dramatically decrease a Fourier coefficient – from being 1 and sufficient to test against $G(n,1/2)$ to being 0 and, thus, completely useless towards distinguishing from $G(n,1/2)$. The fact that edge-removals fail when $Q$ has negative entries is a serious obstacle towards proving our results 3 and 4 in Theorem 1.4. The reason is that we always want to compare the Fourier coefficient of a graph $H$ to the Fourier coefficient of very sparse graphs – stars or a 4-cycle.

Challenge 1. The natural approach of comparing the Fourier coefficient of a graph $H$ to that of a sparser graph by iteratively removing edges sometimes fails when $Q$ has negative entries, even on two communities each appearing with probability 1/2.

Insight from Challenge 1: We need one-shot comparisons instead of iterative comparisons.

1.5.2 Classical Quasirandomness

The celebrated Theorem 1.1 of [CGW88] gives several equivalent conditions for when the subgraph counts of a graph resemble those of Erdős–Rényi. One way to rewrite their results to make them more similar to our setting is as follows. Let $G$ be any graph on $n$ vertices and let $\mathbf{G}$ be $G$ after a uniformly random vertex permutation. Then, the condition $|C_H(G)| = (1+o(1))\, n^{|V(H)|}\, 2^{-|E(H)|}$ can be equivalently rewritten as
\[ \Big| \mathbb{E}\Big[ \prod_{(i,j)\in E(H)} \mathbf{G}_{i,j} \Big] - 2^{-|E(H)|} \Big| = o(1). \tag{9} \]
There are several ways in which (9) differs from our result:

1. Distributional: The distribution of $\mathbf{G}$ is not a stochastic block model. However, the stochastic block model is a vertex-symmetric distribution. In particular, Theorem 1.1 applies to (a high-probability sample of) the SBM distributions.

2. Signed versus Unsigned Counts: Theorem 1.1 is for unsigned counts $|\mathbb{E}\prod_{(i,j)\in E(H)}\mathbf{G}_{i,j} - 2^{-|E(H)|}|$ instead of signed counts $|\mathbb{E}\prod_{(i,j)\in E(H)}(2\mathbf{G}_{i,j}-1)|$. This difference is again minor. Expanding the product in $\prod_{(i,j)\in E(H)}(2\mathbf{G}_{i,j}-1)$ and applying the triangle inequality shows that, for constant-size graphs $H$, (9) is equivalent to
\[ \Big| \mathbb{E}\prod_{(i,j)\in E(H)} (2\mathbf{G}_{i,j}-1) \Big| = o(1). \tag{10} \]

3. Scaling: The real problem with the approach of [CGW88] is the scaling. In place of (10), we need the much more fine-grained inequality (5): $\big|\mathbb{E}\big[\prod_{(i,j)\in E(H)}(2\mathbf{G}_{i,j}-1)\big]\big|^{\frac{1}{|V(H)|}} = o(n^{-1/2})$. It turns out that improving (10) is not simply a matter of carefully keeping track of the $o(1)$-dependence in [CGW88], as their methods are fundamentally too weak for our purposes.
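The parity phenomenon in the $K_{4,4}$ example above can be verified by direct enumeration (our own check; names are ours). With $p=(1/2,1/2)$ and $Q_{x,y}=\pm1$, each label acts as a sign $s_i\in\{\pm1\}$ and $\prod_{(i,j)\in E(H)} Q_{x_i,x_j} = \prod_i s_i^{\deg(i)}$, so $\Phi=1$ when all degrees are even, while a single edge removal creates two odd-degree vertices and drives $\Phi$ to 0:

```python
from itertools import product

def phi(p, Q, edges, h):
    """Phi_{SBM(p,Q)}(H) via the explicit sum (7)."""
    total = 0.0
    for x in product(range(len(p)), repeat=h):
        w = 1.0
        for i in range(h):
            w *= p[x[i]]
        for i, j in edges:
            w *= Q[x[i]][x[j]]
        total += w
    return total

p = [0.5, 0.5]
Q = [[1, -1], [-1, 1]]                                    # Q_{x,y} = +1 iff x == y
K44 = [(i, 4 + j) for i in range(4) for j in range(4)]    # parts {0..3} and {4..7}
print(phi(p, Q, K44, 8))       # all degrees even -> 1.0
print(phi(p, Q, K44[1:], 8))   # one edge removed -> 0.0
```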
For example, an intermediate step in their argument (Fact 12), rewritten in the setting of Fourier coefficients of stochastic block models (see Theorem 1.5), gives that for any $\mathrm{SBM}(p,Q)$ distribution and graph $H$ with a degree-$d$ vertex,
\[ \Big| \mathbb{E}\prod_{(i,j)\in E(H)} (2G_{i,j}-1) \Big| \le \Big| \mathbb{E}\prod_{(i,j)\in E(K_{2,d})} (2G_{i,j}-1) \Big|^{1/2}. \tag{11} \]
So, take for example $H=K_{4,4}$. This inequality implies that if $\big|\mathbb{E}\prod_{(i,j)\in E(K_{2,4})}(2G_{i,j}-1)\big| = o(1)$, then $\big|\mathbb{E}\prod_{(i,j)\in E(K_{4,4})}(2G_{i,j}-1)\big| = o(1)$. However, if $\big|\mathbb{E}\prod_{(i,j)\in E(K_{2,4})}(2G_{i,j}-1)\big|^{1/6} = o(n^{-1/2})$, as needed in (5), it only implies that $\big|\mathbb{E}\prod_{(i,j)\in E(K_{4,4})}(2G_{i,j}-1)\big|^{1/8} = o(n^{-3/16})$, which is too weak for our purposes. We outline the scaling of Fourier coefficients to the power $1/|V(H)|$ as a second main challenge.

Challenge 2. Developing techniques that not only compare Fourier coefficients, but do so with the appropriate $1/|V(H)|$ scaling appearing in (5).

We note that this challenge is directly related to the one-to-one versus many-to-one comparisons discussed in Section 1.4.2, as is apparent from the discussion above in Item 3.

Insight from Challenge 2: We need many-to-one comparison inequalities.

1.6 Our Key Ideas

We now describe our two main ideas used to overcome Challenge 1 and Challenge 2.

1.6.1 Main Idea 1: Leaf-Isolation Technique

This idea is used to overcome both challenges and appears in the proofs of both part 3 and part 4 of Theorem 1.4. For simplicity, we illustrate with part 3.
That is, $\mathrm{SBM}(p,Q)$ satisfies that each community label $i\in[k]$ appears with probability $p_i$ at least $c$ for some absolute constant $c>0$ (which implies that $k\le 1/c = O(1)$). An elementary calculation (Lemma 3.2) shows that
\[ \Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4) = \Omega\big( \max_{u,v} |Q_{u,v}|^4 \big). \tag{12} \]
Hence, for any graph $H$ that satisfies $h = |V(H)| \le |E(H)|$, by (7),
\[ |\Phi_{\mathrm{SBM}(p,Q)}(H)| = \Big| \sum_{x_1,\ldots,x_h\in[k]} \Big( \prod_{i=1}^h p_{x_i} \times \prod_{(i,j)\in E(H)} Q_{x_i,x_j} \Big) \Big| \le \sum_{x_1,\ldots,x_h\in[k]} \Big( \prod_{i=1}^h p_{x_i} \times \prod_{(i,j)\in E(H)} |Q_{x_i,x_j}| \Big) \lesssim \max_{u,v}|Q_{u,v}|^{|E(H)|} \le \max_{u,v}|Q_{u,v}|^{|V(H)|} \lesssim |\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|}{|V(\mathrm{Cyc}_4)|}}, \tag{13} \]
which is enough. Note that so far we have overcome Challenge 1 by directly comparing to a 4-cycle. What does remain a difficulty is Challenge 2. If we use (13) for a tree $T$, the scaling is too weak, as we obtain $|\Phi_{\mathrm{SBM}(p,Q)}(T)|^{\frac{1}{|E(T)|}} \le |\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{1}{|E(\mathrm{Cyc}_4)|}}$ and $|E(T)| = |V(T)|-1$.

[Figure 1: Illustration of the leaf-isolation inequality over the graph $H$ with leaves 5, 6. It effectively compares $|\Phi_{\mathrm{SBM}(p,Q)}(H)|$ to $|\Phi_{\mathrm{SBM}(p,Q)}(H'\sqcup\mathrm{Star}_2)| = |\Phi_{\mathrm{SBM}(p,Q)}(H')|\times|\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_2)|$. In this comparison, a new vertex 7 is created, which resolves the issue that $|E(H)| < |V(H)|$.]

Leaf-Isolation Technique. When $T$ is a tree on at least 3 vertices, $T$ has at least two leaves. Let these be $h-1, h$, with parents $\mathrm{par}(h-1), \mathrm{par}(h)$. Instead of the crude triangle inequality applied as a first step in (13), we use a more fine-grained "leaf-isolation" triangle inequality, which allows us to artificially create a star count in (13), as in the figure. On a more technical level, we perform the summation over all vertices but the two leaves first, and then over the two leaves:
\begin{align*}
\big|\Phi_{\mathrm{SBM}(p,Q)}(T)\big| &= \Big| \sum_{x_1,\ldots,x_h\in[k]} \Big( \prod_{i=1}^h p_{x_i} \times \prod_{(i,j)\in E(T)} Q_{x_i,x_j} \Big) \Big| \\
&= \Bigg| \sum_{x_1,\ldots,x_{h-2}} \Bigg( p_{x_1}p_{x_2}\cdots p_{x_{h-2}} \prod_{(ij)\in E(T)\setminus\{(\mathrm{par}(h-1),h-1),(\mathrm{par}(h),h)\}} Q_{x_i,x_j} \times \Big( \sum_{x_{h-1}} p_{x_{h-1}} Q_{x_{\mathrm{par}(h-1)},x_{h-1}} \Big) \times \Big( \sum_{x_h} p_{x_h} Q_{x_{\mathrm{par}(h)},x_h} \Big) \Bigg) \Bigg| \\
&\le \sum_{x_1,\ldots,x_{h-2}} \Bigg( p_{x_1}p_{x_2}\cdots p_{x_{h-2}} \prod_{(ij)\in E(T)\setminus\{(\mathrm{par}(h-1),h-1),(\mathrm{par}(h),h)\}} |Q_{x_i,x_j}| \times \Big| \sum_{x_{h-1}} p_{x_{h-1}} Q_{x_{\mathrm{par}(h-1)},x_{h-1}} \Big| \times \Big| \sum_{x_h} p_{x_h} Q_{x_{\mathrm{par}(h)},x_h} \Big| \Bigg) \\
&\lesssim \sum_{x_1,\ldots,x_{h-2}} \Bigg( \prod_{(ij)\in E(T)\setminus\{(\mathrm{par}(h-1),h-1),(\mathrm{par}(h),h)\}} |Q_{x_i,x_j}| \times \Bigg( \Big( \sum_{x_{h-1}} p_{x_{h-1}} Q_{x_{\mathrm{par}(h-1)},x_{h-1}} \Big)^2 + \Big( \sum_{x_h} p_{x_h} Q_{x_{\mathrm{par}(h)},x_h} \Big)^2 \Bigg) \Bigg), \tag{14}
\end{align*}
where we used $|a|\times|b| \le a^2+b^2$ in the last line. Now, the product of $|E(T)|-2 = |V(T)|-3$ values $Q_{x_i,x_j}$ can be bounded by $|\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(T)|-3}{|V(\mathrm{Cyc}_4)|}}$, which is comparable to the signed count of a graph on $|V(T)|-3$ vertices. On the other hand, $\big(\sum_{x_{h-1}} p_{x_{h-1}} Q_{x_{\mathrm{par}(h-1)},x_{h-1}}\big)^2 + \big(\sum_{x_h} p_{x_h} Q_{x_{\mathrm{par}(h)},x_h}\big)^2$ can easily be shown to be bounded by $\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_2)$, a graph on 3 vertices. Altogether, we have managed to show that
\[ \big|\Phi_{\mathrm{SBM}(p,Q)}(T)\big| \lesssim |\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(T)|-3}{|V(\mathrm{Cyc}_4)|}} \times |\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_2)|. \]
This immediately implies
\[ \big|\Phi_{\mathrm{SBM}(p,Q)}(T)\big|^{\frac{1}{|V(T)|}} \lesssim \max\Big( |\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{1}{|V(\mathrm{Cyc}_4)|}},\, |\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_2)|^{\frac{1}{|V(\mathrm{Star}_2)|}} \Big). \]

1.6.2 Main Idea 2: Comparison With a Non-Negative Model

The reason why the leaf-isolation technique alone is not sufficient for general SBMs on two communities is that (12) fails. Namely, one can show (Claim 3.4) that
\[ \Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4) = \Theta\Big( \max\big( p_1^4|Q_{1,1}|^4,\; p_1^2p_2^2|Q_{1,2}|^4,\; p_2^4|Q_{2,2}|^4 \big) \Big). \tag{15} \]
So, if $p_1$ is very small, say $p_1 = n^{-1/2}$, then the inequality used in (13) for non-tree graphs may fail. Indeed, in Theorem A.8 we show that there do exist two-community SBMs and connected $H$ which are not trees such that $|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}} = \omega\big(|\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{1}{|V(\mathrm{Cyc}_4)|}}\big)$. Thus, if we want to utilize (15) towards the leaf-isolation technique, we need to understand how the probability vector $(p_1,p_2)$ relates to the entries of $Q$.
Comparison with Non-Negative SBM. To do so, we compare with a non-negative SBM and utilize what we know from [YZZ24] (spelled out in Section 1.5.1). Namely, let $|Q|$ be the matrix $Q$ with entry-wise absolute values. Then, the triangle inequality applied to (7) implies that
\[ |\Phi_{\mathrm{SBM}(p,Q)}(H)| \le |\Phi_{\mathrm{SBM}(p,|Q|)}(H)|. \]
On the other hand, as explained in Section 1.5.1,
\[ |\Phi_{\mathrm{SBM}(p,|Q|)}(H)|^{\frac{1}{|V(H)|}} \le |\Phi_{\mathrm{SBM}(p,|Q|)}(\mathrm{Star}_T)|^{\frac{1}{|V(\mathrm{Star}_T)|}} \]
for some $T\in[1,|E(H)|]$. Combining the last two inequalities, we conclude that
\[ |\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}} \le |\Phi_{\mathrm{SBM}(p,|Q|)}(\mathrm{Star}_T)|^{\frac{1}{|V(\mathrm{Star}_T)|}}. \]
Hence, the desired conclusion for 2-SBMs in Theorem 1.4 would be implied by
\[ |\Phi_{\mathrm{SBM}(p,|Q|)}(\mathrm{Star}_T)| \lesssim |\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_T)| \iff p_1\big(p_1|Q_{1,1}|+p_2|Q_{1,2}|\big)^T + p_2\big(p_1|Q_{1,2}|+p_2|Q_{2,2}|\big)^T \lesssim \Big| p_1\big(p_1Q_{1,1}+p_2Q_{1,2}\big)^T + p_2\big(p_1Q_{1,2}+p_2Q_{2,2}\big)^T \Big|. \]
Unfortunately, such an inequality is wrong, as demonstrated, for example, by Theorem A.2. Yet, whenever this inequality is violated, there must be certain cancellations in $\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_T) = p_1(p_1Q_{1,1}+p_2Q_{1,2})^T + p_2(p_1Q_{1,2}+p_2Q_{2,2})^T$. The key question, used to address Challenge 1, is: what causes the cancellations? It turns out that exactly one of the following two types of cancellations occurs:

1. Within-community cancellations: when there is a cancellation within community 1, corresponding to $|p_1Q_{1,1}+p_2Q_{1,2}| = o\big(p_1|Q_{1,1}|+p_2|Q_{1,2}|\big)$, or, similarly, within community 2, $|p_1Q_{1,2}+p_2Q_{2,2}| = o\big(p_1|Q_{1,2}|+p_2|Q_{2,2}|\big)$.

2. Between-community cancellations: when there are no in-community cancellations, but there is a cancellation between $p_1(p_1Q_{1,1}+p_2Q_{1,2})^T$ and $p_2(p_1Q_{1,2}+p_2Q_{2,2})^T$.

In either case, the cancellations imply strong relationships between $Q$ and $p$, which help us boost (15) and then apply an appropriate version of the leaf-isolation technique. For example, if $p_1\le p_2$ (so, also, $p_2\ge 1/2$), in the case of within-community cancellations we can show that $p_2|Q_{1,2}| = O(p_1|Q_{1,1}|)$, which turns out to be sufficient for making the leaf-isolation technique work. The reason such an inequality is useful is the following. As $p_2\ge 1/2$, it implies that $|Q_{1,2}| = O(p_1|Q_{1,1}|)$. This means that when applying the triangle-inequality approach (13) (or the more sophisticated leaf-isolation variation of it, (14)), we can replace some of the $Q_{1,2}$ occurrences with $p_1|Q_{1,1}|$, thus adding more $p_1$ appearances in the expression. As (15) reads
\[ \Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4) = \Theta\Big( \max\big( p_1^4|Q_{1,1}|^4,\; p_1^2|Q_{1,2}|^4,\; |Q_{2,2}|^4 \big) \Big) \]
when $p_2\ge 1/2$, the extra occurrences of $p_1$ are key to a comparison with $\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4)$.

2 Preliminaries

2.1 Notation

Graph Notation. All graphs in the current paper are undirected and have no loops or multiple edges. For a graph $H$, its vertex and edge sets are denoted by $V(H)$ and $E(H)$. Whenever $A,B\subseteq V(H)$ for some graph $H$, we denote by $E_H(A,B)$ the subset of $E(H)$ with one endpoint in $A$ and one endpoint in $B$. For $S\subseteq V(H)$, we denote by $H|_S$ the graph on vertex set $S$ with edge set $E_H(S,S)$. We denote by $\mathrm{Cyc}_t$ the cycle on $t$ vertices, by $K_t$ the complete graph on $t$ vertices, by $K_{t,s}$ the complete bipartite graph with parts of $t$ and $s$ vertices, and by $\mathrm{Star}_t$ the star graph on $t+1$ vertices (so $K_{1,t} = \mathrm{Star}_t$).
In $\mathrm{Star}_t$, there is one central vertex of degree $t$, which is adjacent to $t$ leaves. Note that $\mathrm{Star}_1$ is simply an edge and $\mathrm{Star}_2$ is the path of length 2 (also called a wedge). For two graphs $H,K$, we denote by $H\sqcup K$ their disjoint union. If $H$ and $K$ are labeled, denote by $H\otimes K$ the graph in which $(i,j)$ is an edge if and only if it is an edge in exactly one of $H$ and $K$, and furthermore all isolated vertices are removed. For two graphs $H$ and $K$, we denote by $H\sim K$ the fact that they are isomorphic. Denote by $\mathrm{Graphs}_{\le D}$ the set of graphs without isolated vertices and with at most $D$ edges, and by $\mathrm{CGraphs}_{\le D}$ those of them that are furthermore connected.

Asymptotic Notation. For two quantities $x(n),y(n)\in\mathbb{R}$ depending on $n\in\mathbb{N}$, we denote $x\gtrsim y$ if $x\ge Cy$ for some absolute constant $C>0$. If $x\gtrsim y$ and $y\gtrsim x$, we denote $x\asymp y$. We similarly denote $x\gtrsim_D y$ if $x\ge C(D)\,y$, where the constant $C(D)$ can be an arbitrary function of $D$ (but of nothing else), and likewise $x\asymp_D y$. We also denote by $y=O(x)$ and $x=\Omega(y)$ the fact that $x\gtrsim y$. Finally, $y=o_n(x)$ and $x=\omega_n(y)$ denote the fact that $\lim_{n\to+\infty} y/x = 0$. In most places, we omit the dependence on $n$.

2.2 Testing via Low Degree Polynomials

Condition (2) is a special case of testing between two distributions via low-degree polynomial tests. Since the indicators of edges of a random graph are random variables, one naturally defines low-degree polynomial tests as follows, again exploiting Chebyshev's inequality.

Definition 2 ([HS17a, Hop18], specialized to graphs). Consider the families of graph distributions $(\mathbb{P}_n)_{n\in\mathbb{N}}$ and $(G(n,1/2))_{n\in\mathbb{N}}$. We say that a polynomial $f:\{0,1\}^{\binom{n}{2}}\to\mathbb{R}$ distinguishes the two families of distributions if
\[ \Big| \mathbb{E}_{G\sim\mathbb{P}_n} f(G) - \mathbb{E}_{G\sim G(n,1/2)} f(G) \Big| = \omega\Big( \max\big( \mathrm{Var}_{G\sim G(n,1/2)}[f(G)]^{1/2},\, \mathrm{Var}_{G\sim\mathbb{P}_n}[f(G)]^{1/2} \big) \Big). \]
Conversely, if a polynomial $f$ of degree at most $D$ satisfies
\[ \Big| \mathbb{E}_{G\sim\mathbb{P}_n} f(G) - \mathbb{E}_{G\sim G(n,1/2)} f(G) \Big| = o\Big( \max\big( \mathrm{Var}_{G\sim G(n,1/2)}[f(G)]^{1/2},\, \mathrm{Var}_{G\sim\mathbb{P}_n}[f(G)]^{1/2} \big) \Big), \]
we say that $f$ fails to distinguish the two distributions. If all polynomials of degree at most $D$ fail, we say that $G(n,1/2)$ and $\mathbb{P}_n$ are degree-$D$ indistinguishable.

Polynomial tests of degree $D=O(\log n)$ capture a wide range of computationally efficient tests, such as constant-sized (signed) subgraph counts, certain spectral methods [KWB19], statistical-query algorithms [BBH+21], and approximate message passing algorithms [MW22], among others. Due to the variety of methods captured by low-degree polynomials, hardness against degree-$D$ polynomials for some $D=\omega(\log n)$ is viewed as a strong (but, of course, still limited) heuristic for the computational hardness of a problem [HS17b, Hop18].

When testing against $G(n,1/2)$, it is convenient to express polynomials in terms of the Fourier characters $\prod_{(ji)\in E(H)}(2G_{ji}-1)$. The key simple fact from Boolean Fourier analysis is the following.

Theorem 2.1 (Folklore). The set of polynomials $\big\{\prod_{(ji)\in E(H)}(2G_{ji}-1)\big\}_H$, where $H$ ranges over all graphs without isolated vertices in $K_n$, forms an orthonormal basis of all edge-functions over $n$-vertex graphs with respect to the $G(n,1/2)$ distribution.

2.3 Simple Facts about Fourier Coefficients of Stochastic Block Models

Proposition 2.2. Consider any stochastic block model $\mathrm{SBM}(p,Q)$ and let $H,K$ be any two graphs. Then,
\[ \Phi_{\mathrm{SBM}(p,Q)}(H\sqcup K) = \Phi_{\mathrm{SBM}(p,Q)}(H) \times \Phi_{\mathrm{SBM}(p,Q)}(K). \]
Proof. Since there are no shared vertices between the copies of $H,K$ in $H\sqcup K$ and labels are independent,
\[ \Phi_{\mathrm{SBM}(p,Q)}(H\sqcup K) = \mathbb{E}\Big[ \prod_{(i,j)\in E(H\sqcup K)} Q_{x_ix_j} \Big] = \mathbb{E}\Big[ \prod_{(i,j)\in E(H)} Q_{x_ix_j} \times \prod_{(u,v)\in E(K)} Q_{x_ux_v} \Big] = \mathbb{E}\Big[ \prod_{(i,j)\in E(H)} Q_{x_ix_j} \Big] \times \mathbb{E}\Big[ \prod_{(u,v)\in E(K)} Q_{x_ux_v} \Big] = \Phi_{\mathrm{SBM}(p,Q)}(H)\times\Phi_{\mathrm{SBM}(p,Q)}(K). \]

Corollary 2.1. Consider any $\mathrm{SBM}(p,Q)$ distribution and let $H,K$ be any two graphs. Then,
\[ \Psi_{\mathrm{SBM}(p,Q)}(H\sqcup K) \le \max\big( \Psi_{\mathrm{SBM}(p,Q)}(H),\, \Psi_{\mathrm{SBM}(p,Q)}(K) \big). \]
Proof. Recalling the definition of $\Psi$,
\begin{align*}
|\Phi_{\mathrm{SBM}(p,Q)}(H\sqcup K)|^{\frac{1}{|V(H\sqcup K)|}} &= \big( |\Phi_{\mathrm{SBM}(p,Q)}(H)|\times|\Phi_{\mathrm{SBM}(p,Q)}(K)| \big)^{\frac{1}{|V(H)|+|V(K)|}} \\
&= \big( |\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}} \big)^{\frac{|V(H)|}{|V(H)|+|V(K)|}} \times \big( |\Phi_{\mathrm{SBM}(p,Q)}(K)|^{\frac{1}{|V(K)|}} \big)^{\frac{|V(K)|}{|V(H)|+|V(K)|}} \\
&\le \max\big( |\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}},\, |\Phi_{\mathrm{SBM}(p,Q)}(K)|^{\frac{1}{|V(K)|}} \big)^{\frac{|V(H)|}{|V(H)|+|V(K)|}} \times \max\big( |\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}},\, |\Phi_{\mathrm{SBM}(p,Q)}(K)|^{\frac{1}{|V(K)|}} \big)^{\frac{|V(K)|}{|V(H)|+|V(K)|}} \\
&= \max\big( |\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}},\, |\Phi_{\mathrm{SBM}(p,Q)}(K)|^{\frac{1}{|V(K)|}} \big).
\end{align*}

Corollary 2.2 (Star Counts). For any $\mathrm{SBM}(p,Q)$ model on $k$ communities,
\[ \Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_t) = \sum_{x\in[k]} p_x \times \Big( \sum_{y\in[k]} Q_{x,y}\, p_y \Big)^t. \]
Proof. Follows from Proposition 1.3 by first summing over the central vertex.

2.4 Meta Theorems on Signed Subgraph Counts

Theorem 2.3 (Beating Null Variance by A Low-Degree Polynomial). Suppose that $(\mathbb{P}_n)_{n\in\mathbb{N}}$ is a family of vertex-symmetric distributions over $n$-vertex graphs and there exists some polynomial $f$ of constant degree $D$ such that
\[ \big| \mathbb{E}_{\mathbb{P}_n}[f] - \mathbb{E}_{G(n,1/2)}[f] \big| = \omega\big( \mathrm{Var}_{G(n,1/2)}[f]^{1/2} \big). \]
Then, there exists some subgraph $H$ on at most $D$ vertices such that $\Psi_{\mathbb{P}_n}(H) = \omega(n^{-1/2})$.

Proof. Recall that $\mathrm{Graphs}_{\le D}$ is the set of graphs without isolated vertices on at most $D$ edges.
Consider the polynomial
\[
f(G)=\sum_{H\in\mathrm{Graphs}_{\le D}}\ \sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}\prod_{(ji)\in E(H_1)}(2G_{ji}-1).
\]
Then, by orthonormality under the $G(n,1/2)$ distribution (Theorem 2.1),
\[
\operatorname{Var}_{G(n,1/2)}[f(G)]=\sum_{H\in\mathrm{Graphs}_{\le D}}\ \sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}^2.\tag{16}
\]
Hence, as $D$ is constant and there are constantly many graphs without isolated vertices and at most $D$ edges, it follows that
\[
\operatorname{Var}_{G(n,1/2)}[f(G)]^{1/2}=\Big(\sum_{H\in\mathrm{Graphs}_{\le D}}\ \sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}^2\Big)^{1/2}\tag{17}
\]
\[
=\Theta\Big(\sum_{H\in\mathrm{Graphs}_{\le D}}\Big(\sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}^2\Big)^{1/2}\Big).\tag{18}
\]
At the same time,
\[
\begin{aligned}
\Big|\mathbb{E}_{P_n}[f]-\mathbb{E}_{G(n,1/2)}[f]\Big|
&=\Big|\sum_{H\in\mathrm{Graphs}_{\le D}}\ \sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}\,\mathbb{E}_{P_n}\prod_{(ji)\in E(H_1)}(2G_{ji}-1)\Big|\\
&=\Big|\sum_{H\in\mathrm{Graphs}_{\le D}}\ \sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}\,\Phi_{P_n}(H)\Big|\\
&\le\sum_{H\in\mathrm{Graphs}_{\le D}}\big|\Phi_{P_n}(H)\big|\times\sum_{H_1\subseteq K_n:\,H\sim H_1}|c_{H_1}|\\
&\le\sum_{H\in\mathrm{Graphs}_{\le D}}\big|\Phi_{P_n}(H)\big|\times\Big(\sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}^2\Big)^{1/2}\Big(\sum_{H_1\subseteq K_n:\,H\sim H_1}1\Big)^{1/2}\quad\text{(Cauchy--Schwarz)}\\
&=\Theta\Big(\sum_{H\in\mathrm{Graphs}_{\le D}}\big|\Phi_{P_n}(H)\big|\times\Big(\sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}^2\Big)^{1/2}\,n^{|V(H)|/2}\Big).
\end{aligned}
\]
Hence, $|\mathbb{E}_{P_n}[f]-\mathbb{E}_{G(n,1/2)}[f]|=\omega(\operatorname{Var}_{G(n,1/2)}[f]^{1/2})$ implies that
\[
\sum_{H\in\mathrm{Graphs}_{\le D}}\big|\Phi_{P_n}(H)\big|\times\Big(\sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}^2\Big)^{1/2}\,n^{|V(H)|/2}
=\omega\Big(\sum_{H\in\mathrm{Graphs}_{\le D}}\Big(\sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}^2\Big)^{1/2}\Big).
\]
As the sum is over constantly many terms, at most $2^{O(D\log D)}$ (which is an upper bound on the number
https://arxiv.org/abs/2504.17202v1
of graphs without isolated vertices and at most $D$ edges), there exists some $H$ such that $\sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}^2\neq 0$ and
\[
\big|\Phi_{P_n}(H)\big|\times\Big(\sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}^2\Big)^{1/2}\,n^{|V(H)|/2}=\omega\Big(\Big(\sum_{H_1\subseteq K_n:\,H\sim H_1}c_{H_1}^2\Big)^{1/2}\Big),
\]
that is, $|\Phi_{P_n}(H)|=\omega(n^{-|V(H)|/2})$. The claim follows.

Recall that the reason (5) is not sufficient to conclude that a given signed subgraph count is a good tester is that (5) does not take into account the variance under the planted distribution. Somewhat surprisingly, it turns out that the variance of the planted model is actually captured by this equation via the maximizer of $K\mapsto\Psi_{\mathrm{SBM}}(K)$.

Theorem 2.4 (Beating Variance of Null Implies Beating Variance of Planted). Suppose that $\mathrm{SBM}(n;p,Q)$ is any SBM model and let $D$ be any constant (independent of $n$). Suppose that the connected graph $H$ on at most $D$ vertices satisfies the following two properties:

1. $\Psi_{\mathrm{SBM}(p,Q)}(H)=\omega(n^{-1/2})$.
2. $H$ is an approximate maximizer of $K\mapsto\Psi_{\mathrm{SBM}(p,Q)}(K)$ in the following sense: $\Psi_{\mathrm{SBM}(p,Q)}(H)\gtrsim_D\Psi_{\mathrm{SBM}(p,Q)}(K)$ for any $K$ on at most $2D$ edges.

Then, one can test between $\mathrm{SBM}(n;p,Q)$ and $G(n,1/2)$ using the signed $H$ count, namely
\[
\Big|\mathbb{E}_{\mathrm{SBM}(n;p,Q)}[\mathrm{SC}_H]-\mathbb{E}_{G(n,1/2)}[\mathrm{SC}_H]\Big|
=\omega\Big(\max\big(\operatorname{Var}_{\mathrm{SBM}(n;p,Q)}[\mathrm{SC}_H]^{1/2},\operatorname{Var}_{G(n,1/2)}[\mathrm{SC}_H]^{1/2}\big)\Big).
\]

Proof. Due to the $\Theta(n^{|V(H)|})$ isomorphic copies of $H$,
\[
\Big|\mathbb{E}_{\mathrm{SBM}(n;p,Q)}[\mathrm{SC}_H]-\mathbb{E}_{G(n,1/2)}[\mathrm{SC}_H]\Big|=\Theta\big(n^{|V(H)|}\times|\Phi_{\mathrm{SBM}(p,Q)}(H)|\big).\tag{19}
\]
Hence, $|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}}=\omega(n^{-1/2})$ immediately implies that
\[
\Big|\mathbb{E}_{\mathrm{SBM}(n;p,Q)}[\mathrm{SC}_H]-\mathbb{E}_{G(n,1/2)}[\mathrm{SC}_H]\Big|=\omega(n^{|V(H)|/2})=\omega\big(\operatorname{Var}_{G(n,1/2)}[\mathrm{SC}_H]^{1/2}\big).
\]
Thus, all we need to show is that
\[
\Big|\mathbb{E}_{\mathrm{SBM}(n;p,Q)}[\mathrm{SC}_H]-\mathbb{E}_{G(n,1/2)}[\mathrm{SC}_H]\Big|=\omega\big(\operatorname{Var}_{\mathrm{SBM}(p,Q)}[\mathrm{SC}_H]^{1/2}\big).
\]
For the left-hand side, we use (19). The right-hand side we expand as follows:
\[
\begin{aligned}
\operatorname{Var}_{\mathrm{SBM}(p,Q)}[\mathrm{SC}_H]
&=\operatorname{Var}_{\mathrm{SBM}(p,Q)}\Big[\sum_{H_1\subseteq K_n:\,H_1\sim H}\ \prod_{(ji)\in E(H_1)}(2G_{ji}-1)\Big]\\
&=\sum_{\substack{H_1,H_2\subseteq K_n:\\ H_1\sim H,\,H_2\sim H}}\operatorname{Cov}_{\mathrm{SBM}(p,Q)}\Big[\prod_{(ji)\in E(H_1)}(2G_{ji}-1),\ \prod_{(ji)\in E(H_2)}(2G_{ji}-1)\Big].
\end{aligned}
\]
Note that if $H_1,H_2$ do not have a common vertex, the covariance is clearly zero. Hence, the above expression equals
\[
\begin{aligned}
&\sum_{\substack{H_1,H_2\subseteq K_n:\,H_1\sim H,\,H_2\sim H,\\ V(H_1)\cap V(H_2)\neq\emptyset}}\operatorname{Cov}_{\mathrm{SBM}(p,Q)}\Big[\prod_{(ji)\in E(H_1)}(2G_{ji}-1),\ \prod_{(ji)\in E(H_2)}(2G_{ji}-1)\Big]\\
&\le\sum_{\substack{H_1,H_2\subseteq K_n:\,H_1\sim H,\,H_2\sim H,\\ V(H_1)\cap V(H_2)\neq\emptyset}}\bigg|\mathbb{E}\Big[\prod_{(ji)\in E(H_1)}(2G_{ji}-1)\prod_{(ji)\in E(H_2)}(2G_{ji}-1)\Big]-\mathbb{E}\Big[\prod_{(ji)\in E(H_1)}(2G_{ji}-1)\Big]\mathbb{E}\Big[\prod_{(ji)\in E(H_2)}(2G_{ji}-1)\Big]\bigg|\\
&\le\sum_{\substack{H_1,H_2\subseteq K_n:\,H_1\sim H,\,H_2\sim H,\\ V(H_1)\cap V(H_2)\neq\emptyset}}\Bigg(\bigg|\mathbb{E}\Big[\prod_{(ji)\in E(H_1)}(2G_{ji}-1)\prod_{(ji)\in E(H_2)}(2G_{ji}-1)\Big]\bigg|+\mathbb{E}\Big[\prod_{(ji)\in E(H)}(2G_{ji}-1)\Big]^2\Bigg)\\
&=\sum_{\substack{H_1,H_2\subseteq K_n:\,H_1\sim H,\,H_2\sim H,\\ V(H_1)\cap V(H_2)\neq\emptyset}}\Bigg(\bigg|\mathbb{E}\Big[\prod_{(ji)\in E(H_1)\triangle E(H_2)}(2G_{ji}-1)\Big]\bigg|+\mathbb{E}\Big[\prod_{(ji)\in E(H)}(2G_{ji}-1)\Big]^2\Bigg),
\end{aligned}
\]
where in the last step we used that $(2G_{ji}-1)^2=1$ on the shared edges $E(H_1)\cap E(H_2)$.
First, we will bound the term
\[
\sum_{\substack{H_1,H_2\subseteq K_n:\,H_1\sim H,\,H_2\sim H,\\ V(H_1)\cap V(H_2)\neq\emptyset}}\mathbb{E}\Big[\prod_{(ji)\in E(H)}(2G_{ji}-1)\Big]^2.
\]
Note that there are $O(n^{2|V(H)|-1})$ terms in the sum. This is the case since $|V(H_1)\cup V(H_2)|\le 2|V(H)|-1$ whenever $V(H_1)\cap V(H_2)\neq\emptyset$. Furthermore, each of these terms is $|\Phi_{\mathrm{SBM}(p,Q)}(H)|^2$. Altogether,
\[
\sum_{\substack{H_1,H_2\subseteq K_n:\,H_1\sim H,\,H_2\sim H,\\ V(H_1)\cap V(H_2)\neq\emptyset}}\mathbb{E}\Big[\prod_{(ji)\in E(H)}(2G_{ji}-1)\Big]^2
=O\big(n^{2|V(H)|-1}|\Phi_{\mathrm{SBM}(p,Q)}(H)|^2\big)
=o\Big(\big|\mathbb{E}_{\mathrm{SBM}(n;p,Q)}[\mathrm{SC}_H]-\mathbb{E}_{G(n,1/2)}[\mathrm{SC}_H]\big|^2\Big)
\]
by (19). Next, consider
\[
\sum_{\substack{H_1,H_2\subseteq K_n:\,H_1\sim H,\,H_2\sim H,\\ V(H_1)\cap V(H_2)\neq\emptyset}}\bigg|\mathbb{E}\Big[\prod_{(ji)\in E(H_1)\triangle E(H_2)}(2G_{ji}-1)\Big]\bigg|.
\]
Note that the graph defined by the edges $E(H_1)\triangle E(H_2)$ is exactly $H_1\otimes H_2$. Hence, the above sum is
\[
\sum_{\substack{H_1,H_2\subseteq K_n:\,H_1\sim H,\,H_2\sim H,\\ V(H_1)\cap V(H_2)\neq\emptyset}}\big|\Phi_{\mathrm{SBM}(p,Q)}(H_1\otimes H_2)\big|.\tag{20}
\]
Denote the following cardinalities:
\[
\begin{aligned}
s_\emptyset&=|(V(H_1)\cup V(H_2))\setminus V(H_1\otimes H_2)|,\\
s_1&=|(V(H_1)\cap V(H_1\otimes H_2))\setminus V(H_2)|,\\
s_2&=|(V(H_2)\cap V(H_1\otimes H_2))\setminus V(H_1)|,\\
s_{1,2}&=|V(H_1)\cap V(H_2)\cap V(H_1\otimes H_2)|.
\end{aligned}\tag{21}
\]
A simple count shows that
\[
\begin{aligned}
s_1+s_2+s_{1,2}&=|V(H_1\otimes H_2)|,\\
s_\emptyset+s_1+s_{1,2}&=|V(H_1)|=|V(H_2)|=s_\emptyset+s_2+s_{1,2},\\
s_\emptyset+s_1+s_2+s_{1,2}&=|V(H_1)\cup V(H_2)|.
\end{aligned}\tag{22}
\]
The number of ways to choose $H_1,H_2\sim H$ so that $|V(H_1)\cup V(H_2)|=s_\emptyset+s_1+s_2+s_{1,2}$ is $O(n^{s_\emptyset+s_1+s_2+s_{1,2}})$. Hence, the number of terms in (20) with the specific overlap pattern of $H_1,H_2$ satisfying (21) is $O(n^{s_\emptyset+s_1+s_2+s_{1,2}})$. As there are constantly many choices of $s_\emptyset,s_1,s_2,s_{1,2}$ (as $H_1,H_2$ are two graphs on constantly many vertices), recalling (19), it is enough to show that for any choice of $s_\emptyset,s_1,s_2,s_{1,2}$,
\[
n^{2|V(H)|}\times|\Phi_{\mathrm{SBM}(p,Q)}(H)|^2=\omega\Big(n^{s_\emptyset+s_1+s_2+s_{1,2}}\,\big|\Phi_{\mathrm{SBM}(p,Q)}(H_1\otimes H_2)\big|\Big).
\]
Using the fact that $|E(H_1\otimes H_2)|=|E(H_1)\triangle E(H_2)|\le|E(H_1)|+|E(H_2)|\le 2D$ and condition 2., the approximate optimality of $H$, it follows that
\[
\big|\Phi_{\mathrm{SBM}(p,Q)}(H_1\otimes H_2)\big|^{\frac{1}{|V(H_1\otimes H_2)|}}=O\Big(|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}}\Big).
\]
Hence, it is enough to show that
\[
n^{2|V(H)|}\times|\Phi_{\mathrm{SBM}(p,Q)}(H)|^2=\omega\Big(n^{s_\emptyset+s_1+s_2+s_{1,2}}\,|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{|V(H_1\otimes H_2)|}{|V(H)|}}\Big).\tag{23}
\]
Recalling (22) and the fact that $|V(H)|=\frac{1}{2}(|V(H_1)|+|V(H_2)|)$, we need to show that
\[
n^{s_1+s_2+2s_\emptyset+2s_{1,2}}\times|\Phi_{\mathrm{SBM}(p,Q)}(H)|^2=\omega\Big(n^{s_\emptyset+s_1+s_2+s_{1,2}}\,|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{s_1+s_2+s_{1,2}}{(s_1+s_2+2s_\emptyset+2s_{1,2})/2}}\Big)
\]
\[
\iff
|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{2(s_1+s_2+2s_\emptyset+2s_{1,2})-2(s_1+s_2+s_{1,2})}{s_1+s_2+2s_\emptyset+2s_{1,2}}}=\omega(n^{-s_\emptyset-s_{1,2}})
\iff
|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{4s_\emptyset+2s_{1,2}}{2|V(H)|}}=\omega(n^{-s_\emptyset-s_{1,2}}).
\]
This last inequality follows from condition 1., that $|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}}=\omega(n^{-\frac{1}{2}})$. Concretely,
\[
|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{4s_\emptyset+2s_{1,2}}{2|V(H)|}}
\ge|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{4s_\emptyset+4s_{1,2}}{2|V(H)|}}
=\Big(|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}}\Big)^{2(s_\emptyset+s_{1,2})}
=\omega(n^{-s_\emptyset-s_{1,2}}).
\]
Thus, we have shown (23) for one possible choice of $s_\emptyset,s_1,s_2,s_{1,2}$ in (21). Enumerating over the constantly many possible choices for $s_\emptyset,s_1,s_2,s_{1,2}$ gives the result.
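In practice, the signed star counts that turn out to be optimal tests in the next section can be computed from the adjacency matrix in $O(n^2)$ time. A sketch of such a computation (the helper name `signed_star_counts` is ours):

```python
import numpy as np

def signed_star_counts(G):
    """Signed Star_1 (edge) and Star_2 (wedge) counts of a graph.

    G: symmetric 0/1 adjacency matrix with zero diagonal.
    Each copy of the star contributes the product of (2*G_ij - 1)
    over its edges, as in the definition of SC_H.
    """
    A = 2 * G.astype(int) - 1
    np.fill_diagonal(A, 0)
    n = A.shape[0]
    sc_edge = int(A[np.triu_indices(n, k=1)].sum())
    row = A.sum(axis=1)
    # For each center c, sum A_ca * A_cb over unordered leaf pairs {a, b}:
    # row_c^2 counts ordered pairs including a = b, and subtracting (A^2)'s
    # row sums removes the a = b terms before halving.
    sc_wedge = int((((row ** 2) - (A ** 2).sum(axis=1)) // 2).sum())
    return sc_edge, sc_wedge

# Two deterministic sanity checks: the complete graph K_3 and the empty graph.
e1, w1 = signed_star_counts(np.ones((3, 3)) - np.eye(3))  # -> (3, 3)
e2, w2 = signed_star_counts(np.zeros((3, 3)))             # -> (-3, 3)
assert (e1, w1) == (3, 3) and (e2, w2) == (-3, 3)
```

Under $G(n,1/2)$ both statistics have mean zero, so a test can threshold them at a multiple of their null standard deviation, in line with Theorem 2.4.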
3 Proofs of Main Results

3.1 Diagonal SBMs

In this section, we study the family of $\mathrm{SBM}(p,Q)$ models where all off-diagonal entries of $Q$ are zero. This is equivalent to saying that whenever two vertices have distinct labels, the probability they are adjacent is exactly $1/2$.

Theorem 3.1 (Maximizing Partition Functions in Diagonal SBMs). Suppose that $\mathrm{SBM}(p,Q)$ is such that $Q$ is diagonal. Then, for any connected graph $H$,
\[
\Psi_{\mathrm{SBM}(p,Q)}(H)\le\max\big(\Psi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_1),\Psi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_2)\big).
\]

Proof. Take any connected $H$. Recall Proposition 1.3, stating that
\[
\Phi_{\mathrm{SBM}(p,Q)}(H)=\sum_{x_1,x_2,\ldots,x_h\in[k]}\Big(\prod_{i=1}^{h}p_{x_i}\times\prod_{(i,j)\in E(H)}Q_{x_i,x_j}\Big).
\]
Observe that since $Q_{x_i,x_j}=0$ whenever $x_i\neq x_j$, the expression becomes
\[
\Phi_{\mathrm{SBM}(p,Q)}(H)=\sum_{x\in[k]}(p_x)^{|V(H)|}\times(Q_{x,x})^{|E(H)|}.
\]
In particular, for any connected graph $H$, it is clearly the case that
\[
|\Phi_{\mathrm{SBM}(p,Q)}(H)|=\Big|\sum_{x\in[k]}(p_x)^{|V(H)|}\times(Q_{x,x})^{|E(H)|}\Big|
\le\sum_{x\in[k]}(p_x)^{|V(H)|}\times|Q_{x,x}|^{|E(H)|}
\le\sum_{x\in[k]}(p_x)^{|V(H)|}\times|Q_{x,x}|^{|V(H)|-1},\tag{24}
\]
where in the last line we used the fact that any connected graph on $v$ vertices has at least $v-1$ edges (and $|Q_{x,x}|\le 1$). Now, to finish the proof, we will use the fact that
\[
v\mapsto\Big(\sum_{x\in[k]}(p_x)^{v}\times|Q_{x,x}|^{v-1}\Big)^{1/v}\tag{25}
\]
is decreasing on $[1,+\infty)$. This is a simple analytic inequality which we prove in Appendix B.1. (25) implies that for any connected graph on at least three vertices (and there exists a unique connected graph on two vertices, $\mathrm{Star}_1$), it is the case that
\[
|\Phi_{\mathrm{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}}\le\Big(\sum_{x\in[k]}(p_x)^{3}\times|Q_{x,x}|^{2}\Big)^{\frac{1}{3}}=|\Phi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_2)|^{\frac{1}{|V(\mathrm{Star}_2)|}}.
\]
Combining with Theorem 2.4, we obtain a result for the optimal constant-degree distinguisher.
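The monotonicity of (25) can also be checked numerically for any fixed diagonal model. The sketch below uses an arbitrary toy probability vector `p` and diagonal entries `q` of our own choosing, not a model from the paper:

```python
import numpy as np

# The map v -> (sum_x p_x^v * |Q_{x,x}|^(v-1))^(1/v) from (25) should be
# decreasing on [1, +infinity). The vectors p and q below are an arbitrary
# toy example: p is a probability vector and each |Q_{x,x}| is at most 1.
p = np.array([0.2, 0.3, 0.5])
q = np.array([0.9, 0.4, 0.7])

def g(v):
    return float(np.sum(p ** v * q ** (v - 1)) ** (1.0 / v))

vs = np.linspace(1.0, 20.0, 200)
vals = [g(v) for v in vs]
assert abs(vals[0] - 1.0) < 1e-12  # at v = 1 the sum is sum(p) = 1
assert all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))  # monotone decreasing
```

Such a check is no substitute for the proof in Appendix B.1, but it makes the role of (25) in the comparison against $\mathrm{Star}_2$ concrete: the exponentiated sum starts at $1$ for $v=1$ and only shrinks as $v$ grows.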
Theorem 3.2 (Testing in Diagonal SBMs). Suppose that $\mathrm{SBM}(p,Q)$ is a stochastic block model with a diagonal matrix $Q$. Then, if there exists a constant-degree test distinguishing $G(n,1/2)$ and $\mathrm{SBM}(n;p,Q)$ with high probability, one can also distinguish the two distributions with high probability using the signed $\mathrm{Star}_1$ (edge) or $\mathrm{Star}_2$ (wedge) count.

3.2 Non-Negative SBMs

The case of non-negative SBMs is covered by [YZZ24]. Let us explain how to relate a non-negative SBM to the more general Theorem 1.2 of [YZZ24]. This relation also appears in the "planted dense subgraph application" of the main result of [YZZ24]. Namely, let $\mathrm{SBM}(p,Q)$ be a non-negative SBM, that is, $Q_{i,j}\ge 0$ for all $i,j$. Then, the probability of an edge between vertices with labels $i$ and $j$ is $(Q_{i,j}+1)/2=\frac{1}{2}+\frac{Q_{i,j}}{2}$. The following procedure generates a sample from $\mathrm{SBM}(p,Q)$:

1. First, draw $n$ iid labels $x_1,\ldots,x_n\overset{\text{i.i.d.}}{\sim}p$.
2. Then, draw a graph $H'$ on vertex set $[n]$ by drawing an edge between $i,j$ with probability $Q_{x_i,x_j}$.
3. Let $H$ be a uniformly random vertex permutation of $H'$.
4. Now, draw $K\sim G(n,1/2)$ and output $G=K\cup H$.

Step 4 shows that the model can be captured via the planted subgraph result of Theorem 1.2. We obtain the following corollary, implicit in [YZZ24].

Corollary 3.1 (Implicit in [YZZ24]). Let $Q$ be a non-negative matrix. If there exists a degree-$D$ test distinguishing $G(n,1/2)$ and $\mathrm{SBM}(n;p,Q)$ with high probability for some absolute constant $D$, then one can also distinguish the two distributions with high probability using the signed count of some star on at most $D$ edges.

We still describe the corresponding partition-function maximization step, as it will be useful in the proof of Theorem 3.6 (our result for 2-SBMs).

Theorem 3.3 (Maximizing Partition Functions in Non-Negative SBMs). Consider any $\mathrm{SBM}(p,Q)$ model in which all entries of the matrix $Q$ are non-negative. Then, for any connected graph $H$ on at most $D$ edges,
\[
\Psi_{\mathrm{SBM}(p,Q)}(H)\le\max_{1\le t\le D}\Psi_{\mathrm{SBM}(p,Q)}(\mathrm{Star}_t).
\]

Proof. Let $H$ be any connected graph on at most $D$ edges.
Again, we start by recalling Proposition 1.3:
\[
\Phi_{\mathrm{SBM}(p,Q)}(H)=\sum_{x_1,x_2,\ldots,x_h\in[k]}\Big(\prod_{i=1}^{h}p_{x_i}\times\prod_{(i,j)\in E(H)}Q_{x_i,x_j}\Big).
\]