more difficult to establish the inequality directly for all x.

A.2 Some special functions

The modified Bessel functions $I_\nu(x)$ and $K_\nu(x)$ are independent solutions to the modified Bessel differential equation
$$x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} - (x^2 + \nu^2)\, y = 0,$$
where the order $\nu$ and the argument may be arbitrary complex numbers. Further, recall the series representation
$$I_\nu(x) = \sum_{k=0}^{\infty} \frac{1}{k!\,\Gamma(k+\nu+1)} \left(\frac{x}{2}\right)^{2k+\nu} \quad\text{and}\quad K_\nu(x) = \frac{\pi}{2}\,\frac{I_{-\nu}(x) - I_\nu(x)}{\sin(\nu\pi)}$$
for non-integer $\nu$. To define $K_n(x)$ for integer $n$, one takes the limit
$$K_n(x) = \lim_{\nu \to n} K_\nu(x) = \lim_{\nu \to n} \frac{\pi}{2}\,\frac{I_{-\nu}(x) - I_\nu(x)}{\sin(\nu\pi)}.$$
The Bessel function has a number of integral representations under certain conditions. We use the following:
$$I_\nu(x) = \frac{(x/2)^\nu}{\Gamma\!\left(\nu+\tfrac{1}{2}\right)\sqrt{\pi}} \int_{-1}^{1} e^{xt}\,\left(1 - t^2\right)^{\nu - \frac{1}{2}}\, dt.$$
Note that this expression has an interesting probabilistic interpretation. Let $U \sim \mathrm{Unif}(S^{n-1})$ and let $v \in \mathbb{R}^n$ be a fixed unit vector. Then for $n = 2\nu + 2$, the modified Bessel function of the first kind can be written as
$$I_\nu(x) = \frac{(x/2)^\nu}{\Gamma(\nu+1)}\, \mathbb{E}\!\left[e^{x\, v\cdot U}\right].$$
The generalized hypergeometric function is defined as
$${}_pF_q(a_1,\ldots,a_p;\, b_1,\ldots,b_q;\, z) = \sum_{k=0}^{\infty} \frac{(a_1)_k \cdots (a_p)_k}{(b_1)_k \cdots (b_q)_k}\, \frac{z^k}{k!},$$
where $(a)_k$ is the Pochhammer symbol (rising factorial) $(a)_k = a(a+1)(a+2)\cdots(a+k-1)$, with $(a)_0 = 1$. When $p = q = 1$, it can be shown that the generalized hypergeometric function ${}_1F_1(a;b;z)$ is a solution of the Kummer differential equation
$$z \frac{d^2 y}{dz^2} + (b - z) \frac{dy}{dz} - a\, y = 0.$$
A comprehensive study of the various Bessel and hypergeometric functions can be found in Magnus et al. [MOS66].

A.3 Integral Transformations

A definition of the Hankel transform can be found, for example, in Koh and Zemanian [KZ69] and is as follows. For any $\nu \in \mathbb{R}$ and $a > 0$, let $\mathcal{J}_{\nu,a}$ be the function space generated by the kernel $\sqrt{xy}\, J_\nu(xy)$, where $J_\nu(\cdot)$
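These identities are easy to check numerically. The sketch below (Python, standard library only; the function names are ours) implements the series for $I_\nu$, the non-integer-$\nu$ formula for $K_\nu$, and the ${}_1F_1$ series, and verifies them against the half-integer closed forms $I_{1/2}(x) = \sqrt{2/(\pi x)}\,\sinh x$, $K_{1/2}(x) = \sqrt{\pi/(2x)}\,e^{-x}$, and ${}_1F_1(a;a;z) = e^z$:

```python
import math

def iv_series(nu, x, terms=60):
    """Truncated defining series of the modified Bessel function:
    I_nu(x) = sum_k (x/2)^(2k+nu) / (k! * Gamma(k+nu+1))."""
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

def kv_noninteger(nu, x):
    """K_nu(x) = (pi/2) * (I_{-nu}(x) - I_nu(x)) / sin(nu*pi), non-integer nu."""
    return (math.pi / 2) * (iv_series(-nu, x) - iv_series(nu, x)) / math.sin(nu * math.pi)

def hyp1f1_series(a, b, z, terms=60):
    """Kummer's 1F1 via its defining series with Pochhammer symbols (a)_k/(b)_k."""
    total, poch_a, poch_b, zk, kfact = 0.0, 1.0, 1.0, 1.0, 1.0
    for k in range(terms):
        total += poch_a / poch_b * zk / kfact
        poch_a *= a + k
        poch_b *= b + k
        zk *= z
        kfact *= k + 1
    return total

x = 1.7
# Half-integer closed forms and the elementary 1F1(a; a; z) = e^z identity.
assert abs(iv_series(0.5, x) - math.sqrt(2 / (math.pi * x)) * math.sinh(x)) < 1e-12
assert abs(kv_noninteger(0.5, x) - math.sqrt(math.pi / (2 * x)) * math.exp(-x)) < 1e-12
assert abs(hyp1f1_series(0.7, 0.7, x) - math.exp(x)) < 1e-12
```

The same series underlies both $I_{-\nu}$ and $I_\nu$ in the $K_\nu$ formula, so the cancellation for half-integer orders reproduces $K_{1/2}$ exactly.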
is the Bessel function of the first kind, $x > 0$, and $y$ is a complex number in the strip $\Omega = \{y : |\operatorname{Im} y| < a,\ y \text{ not zero or a negative real number}\}$. The Hankel transform $H_\nu$ is defined on the dual space $\mathcal{J}^{*}_{\nu,a}$ as
$$H_\nu[f](y) = \int_0^\infty f(x)\,\sqrt{xy}\, J_\nu(xy)\, dx,$$
where $y \in \Omega$, $f$ is locally integrable, and
$$\int_0^\infty \left| f(x)\, e^{ax} x^{a+1/2} \right| dx < \infty.$$
Now, to every $f \in \mathcal{J}^{*}_{\nu,a}$ there exists a number $\sigma_f$ (possibly infinite) such that $f \in \mathcal{J}^{*}_{\nu,b}$ if $b \le \sigma_f$ and $f \notin \mathcal{J}^{*}_{\nu,b}$ if $b > \sigma_f$. Hence $f \in \mathcal{J}^{*}_{\nu,\sigma_f}$. Now, for any $f \in \mathcal{J}^{*}_{\nu,\sigma_f}$, define the $I$-transform as
$$F(y) = I_\nu[f](y) = \int_0^\infty f(x)\,\sqrt{xy}\, I_\nu(xy)\, dx \tag{A.4}$$
where $I_\nu(\cdot)$ is the modified Bessel function of the first kind. The real-valued $I$-transform is defined where $y$ is restricted to the strip $\chi_f = \{s : 0 < s < \sigma_f\}$. The inversion formula for the $I$-transform is given by
$$f(x) = I_\nu^{-1}[F](x) = \lim_{r \to \infty} \frac{1}{i\pi} \int_{\sigma - ir}^{\sigma + ir} F(y)\,\sqrt{xy}\, K_\nu(xy)\, dy \tag{A.5}$$
for $\nu \ge -1/2$, $\sigma \in \chi_f$, and where $K_\nu(\cdot)$ is the modified Bessel function of the second kind of order $\nu$. The statement of the inversion formula is given by Koh and Zemanian in [KZ69] and may be proved using ideas similar to the proof of Theorem 6.6 provided by Zemanian in [Zem68]. It is important to note that the inversion holds only for $\sigma \in \chi_f$; therefore $\sigma = 0$ is not an admissible value. If one is going to apply (A.5), it is necessary to use a contour integration technique, not just a simple change of variables onto the real axis. However, the inverse $I$-transform is also related to the Hankel transform, since the Bessel function $J_\nu$ and the modified Bessel function $I_\nu$ are connected by the identity
$$I_\nu(z) = \exp(-\pi\nu i/2)\, J_\nu(iz) \quad \text{for } -\pi < \arg z \le \pi/2$$
(see formula 8.406 of Gradshteyn and Ryzhik [GR94]). Then it follows that
$$I_\nu^{-1}[F](x) = \exp(-i\pi(\nu/2 - 3/4))\, H_\nu^{-1}[\hat{F}](x)$$
where $\hat{F}(y) = F(-iy)$. As the Hankel transform is self-
reciprocal, i.e., $H_\nu^{-1}[\,\cdot\,] = H_\nu[\,\cdot\,]$, the previous identity becomes
$$I_\nu^{-1}[F](x) = \exp(-i\pi(\nu/2 - 3/4))\, H_\nu[\hat{F}](x). \tag{A.6}$$
Both of these relationships are useful when using tables of Hankel transforms. The $I$-transform is also related to the better-known $K$-transform
$$K_\nu[g](y) = \int_0^\infty g(x)\,\sqrt{xy}\, K_\nu(xy)\, dx.$$
If $G(y) = K_\nu[g](y)$, the inversion formula is given by
$$K_\nu^{-1}[G](x) = \lim_{r \to \infty} \frac{1}{i\pi} \int_{\sigma - ir}^{\sigma + ir} G(y)\,\sqrt{xy}\, I_\nu(xy)\, dy,$$
where $\nu \ge -1/2$ and $\sigma \in \chi_g$. As
$$K_\nu(z) = \frac{\pi}{2\sin(\pi\nu)}\left[I_{-\nu}(z) - I_\nu(z)\right]$$
for non-integer $\nu$, it follows that
$$I_\nu^{-1}[F](x) = \frac{\pi}{2\sin(\pi\nu)}\left[K_{-\nu}^{-1}[F](x) - K_\nu^{-1}[F](x)\right];$$
for integer $\nu$, the right-hand sides of these relations are replaced by their limiting values. Fortunately, Oberhettinger [Obe72] has tabulated a rich collection of $K$-transforms and their inverses. For more details concerning integrals involving Bessel functions as integrands and their relationship to Laplace transformations, see the work by Luke [Luk62] and Erdélyi [Erd54].

References

[AB91] J. F. Angers and J. O. Berger. Robust hierarchical Bayes estimation of exchangeable means. Canadian Journal of Statistics, 19:39–56, 1991.
[ACD11] A. Armagan, M. Clyde, and D. Dunson. Generalized beta mixtures of Gaussians. Advances in Neural Information Processing Systems, 24, 2011.
[Bar70] A. J. Baranchik. A family of minimax estimators of the mean of a multivariate normal distribution. The Annals of Mathematical Statistics, pages 642–645, 1970.
[Ber85] J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, New York, 1985.
[BR90] J. O. Berger and C. Robert. Subjective hierarchical Bayes estimation of a multivariate normal mean: on the frequentist interface. Annals of Statistics, 18:617–651, 1990.
[DR88] A. DasGupta and H. Rubin. Bayesian estimation subject to minimaxity of the mean of a multivariate normal distribution in the case of a common unknown variance: a case for Bayesian robustness. In S. S. Gupta and J. O. Berger, editors, Statistical Decision Theory and Related Topics IV, volume 1, pages 325–345. Springer Verlag, 1988.
[Erd54] A. Erdélyi. Tables of Integral Transforms, volume 2. McGraw-Hill, New York, 1954.
[FSW98] D. Fourdrinier, W. E. Strawderman, and M. T. Wells. On the construction of Bayes minimax estimators. Annals of Statistics, pages 660–671, 1998.
[FSW18] D. Fourdrinier, W. E. Strawderman, and M. T. Wells. Shrinkage Estimation. Springer Nature, 2018.
[FW96] D. Fourdrinier and M. T. Wells. Bayes estimators for a linear subspace of a spherically symmetric normal law. In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics 5, pages 569–579. Oxford University Press, 1996.
[GR94] I. Gradshteyn and I. Ryzhik. Tables of Integrals, Series, and Products. Academic Press, New York, 5th edition, 1994.
[KKM81] M. Krasnov, A. Kissélev, and G. Makarenko. Recueil de problèmes sur les équations différentielles ordinaires. Mir, Moscow, 1981.
[KZ69] E. L. Koh and A. H. Zemanian. The complex Hankel and I-transformations of generalized functions. SIAM J. Appl. Math., 10:945–957, 1969.
[Luk62] Y. L. Luke. Integrals of Bessel Functions. McGraw-Hill, New York, 1962.
[MOS66] W. Magnus, F. Oberhettinger, and R. Soni. Formulas and Theorems for the Special Functions of Mathematical Physics. Springer, New York, 1966.
[Obe72] F. Oberhettinger. Tables of Bessel Transforms. Springer Verlag, New York, 1972.
[PS92] L. R. Pericchi and A. F. M. Smith. Exact and approximate posterior moments for the normal location parameter. J. Royal Statist. Soc., pages 793–804, 1992.
[Ste81] C. Stein. Estimation of the mean
https://arxiv.org/abs/2505.07649v1
Channel Fingerprint Construction for Massive MIMO: A Deep Conditional Generative Approach

Zhenzhou Jin, Graduate Student Member, IEEE, Li You, Senior Member, IEEE, Xudong Li, Zhen Gao, Senior Member, IEEE, Yuanwei Liu, Fellow, IEEE, Xiang-Gen Xia, Fellow, IEEE, and Xiqi Gao, Fellow, IEEE

Abstract—Accurate channel state information (CSI) acquisition for massive multiple-input multiple-output (MIMO) systems is essential for future mobile communication networks. Channel fingerprint (CF), also referred to as channel knowledge map, is a key enabler for intelligent environment-aware communication and can facilitate CSI acquisition. However, due to the cost limitations of practical sensing nodes and test vehicles, the resulting CF is typically coarse-grained, making it insufficient for wireless transceiver design. In this work, we introduce the concept of CF twins and design a conditional generative diffusion model (CGDM) with strong implicit prior learning capabilities as the computational core of the CF twin to establish the connection between coarse- and fine-grained CFs. Specifically, we employ a variational inference technique to derive the evidence lower bound (ELBO) for the log-marginal distribution of the observed fine-grained CF conditioned on the coarse-grained CF, enabling the CGDM to learn the complicated distribution of the target data. During the denoising neural network optimization, the coarse-grained CF is introduced as side information to accurately guide the conditioned generation of the CGDM. To make the proposed CGDM lightweight, we further leverage the additivity of network layers and introduce a one-shot pruning approach along with a multi-objective knowledge distillation technique. Experimental results show that the proposed approach exhibits significant improvement in reconstruction performance compared to the baselines.
Additionally, zero-shot testing on reconstruction tasks with different magnification factors further demonstrates the scalability and generalization ability of the proposed approach.

Index Terms—Massive MIMO, channel knowledge map, environment-aware wireless communication, conditional generative model.

I. INTRODUCTION

Part of this work has been accepted for presentation at the IEEE INFOCOM 2025 [1]. Zhenzhou Jin, Li You, Xudong Li, and Xiqi Gao are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China, and also with the Purple Mountain Laboratories, Nanjing 211100, China (e-mail: zzjin@seu.edu.cn; lyou@seu.edu.cn; xdli@seu.edu.cn; xqgao@seu.edu.cn). Zhen Gao is with the State Key Laboratory of CNS/ATM, Beijing Institute of Technology, Beijing 100081, China, also with the Beijing Institute of Technology, Zhuhai 519088, China, also with the MIIT Key Laboratory of Complex-Field Intelligent Sensing, Beijing Institute of Technology, Beijing 100081, China, also with the Advanced Technology Research Institute, Beijing Institute of Technology Jinan, Jinan 250307, China, and also with the Yangtze Delta Region Academy, Beijing Institute of Technology Jiaxing, Jiaxing 314019, China (e-mail: gaozhen16@bit.edu.cn). Yuanwei Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (e-mail: yuanwei@hku.hk). Xiang-Gen Xia is with the Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA (e-mail: xxia@ee.udel.edu).

The deep integration of wireless communications, artificial intelligence (AI), and environmental sensing is expected to enable the 6th generation (6G) mobile communication networks to "perceive" the physical world with capabilities surpassing human sensing, thereby facilitating the creation of digital twins (DTs) in the virtual realm [2], [3]. The vision of 6G is to propel society towards "intelligent internet of everything" and "ubiquitous connectivity", realizing the seamless integration and interaction between the physical and virtual worlds. To achieve this, 6G will need to possess more powerful end-to-end information processing capabilities to support emerging applications and domains, including autonomous vehicles, indoor localization, and the metaverse. Therefore, to achieve ultra-low latency and superior performance, while supporting scenarios that integrate AI, sensing, and communication, DT and environmental sensing are considered key enablers for the upcoming 6G era [4].

With the dramatic increase in antenna array dimensions in massive multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) communication systems, the rapid rise in the number and density of user devices, and the utilization of broader bandwidths, 6G networks will encounter the challenge of processing ultra-large-dimensional MIMO channels [3], [5]–[7]. Traditional pilot-based methods for acquiring and feeding back channel state information (CSI) may suffer from prohibitively high pilot signaling overhead. Furthermore, traditional wireless transceiver designs typically rely on channel modeling, which is based on specific assumptions and probability distributions of channel parameters. However, these stringent assumptions may not be feasible in high-dynamic, complicated wireless propagation environments [8]. As a key determinant, the wireless propagation environment significantly affects channel parameter variation and the performance of wireless communication systems. Consequently, there has been growing interest in environment-aware wireless communications from both the academic and industrial communities.
Channel fingerprint (CF), also referred to as channel knowledge map (CKM), is an emerging enabling technology for environment-aware communications, offering location-specific channel knowledge related to a potential base station (BS) for any BS-to-everything (B2X) pairs [7], [8]. Ideally, a fine-grained CF serves as a location-specific knowledge base, covering all precise locations within the target communication area. It includes the exact positions of transmitters and receivers, along with their corresponding channel knowledge. This database stores channel-related knowledge for specific locations, including channel power, angle of arrival/departure, and channel impulse responses, which can alleviate the challenges of CSI acquisition and empower the design of wireless transmission technologies.

arXiv:2505.07893v1 [cs.NI] 12 May 2025

[Fig. 1. Schematic diagram of the CF twin: CGDM functions as the core computational unit of the CF twin, reconstructing fine-grained CF to optimize wireless transmission technologies and network planning.]

By providing essential and accurate channel knowledge, CF has recently spurred extensive research for various applications in space-air-ground integrated networks, including beam alignment [8], [9], communication among non-cooperative nodes [8], physical environment sensing [10], [11], user localization [12], UAV communication [13] and path optimization [14], [15], and resource allocation in air-ground integrated mobility [16].

The aforementioned promising applications hinge on the effective construction of CF, which serves as the cornerstone of environment-aware wireless communications. Existing related works can primarily be categorized into model-driven and data-driven approaches. For the former, the authors of [17] employ an analytical channel model to represent the spatial variation of the propagation environment, where channel modeling parameters are estimated from measured data to reconstruct the CF. The authors of [18] combine prior assumptions of the wireless propagation environment model with partially observed channel measurements to infer channel knowledge at unmeasured locations. For the latter, the work [7] transforms the CF construction task into an image-to-image inpainting problem and designs a Laplacian pyramid-based model to learn the differences between frequency components, enabling efficient reconstruction of the CF for a target area. In [19], [20], U-Net is employed to learn geometry-based and physics-based features in urban or indoor environments, thereby constructing the corresponding CFs. In [8], fully connected networks are employed to predict channel knowledge at potential locations using simple 2D coordinate information. Most existing related works focus on constructing CFs using physical propagation environment features or prior assumptions of physical propagation models.
Evidently, a finer-grained CF can assist the BS in acquiring more precise, location-specific channel knowledge [8]. However, two key challenges must be addressed for CF to enable environment-aware wireless communications. First, the operation and maintenance costs associated with sensing nodes and test vehicles for measuring channel-related knowledge are inherently constrained. Second, storing a large amount of fine-grained CFs may incur unnecessary and prohibitively high storage overhead for the BS. Consequently, most available CFs are coarse-grained, lacking precise location-specific channel knowledge. Motivated by these practical challenges, we seek to develop a computational unit dedicated to enhancing CF granularity, particularly by reconstructing ultra-fine-grained CFs from coarse-grained counterparts.

In this context, fully leveraging measurable channel knowledge and location information from the physical world becomes essential. The concept of DT has emerged as a promising paradigm, widely recognized as a virtual modeling framework that digitally replicates and extends the physical world [4]. As illustrated in Fig. 1, from the DT perspective, a structured coarse-grained CF can be sensed and reorganized by sensing nodes and test vehicles deployed in the physical world. This coarse-grained CF represents the physical object, which is subsequently processed by the DT hub, serving as the core computing unit of the DT, to generate a fine-grained CF, commonly referred to as the virtual twin object. Leveraging the generated virtual twin facilitates optimized decision-making for subsequent wireless transceiver designs, such as precoding and resource scheduling [7], [8]. Accordingly, our goal is to establish the intrinsic mapping between physical objects and their corresponding virtual twins. Consistent with the DT paradigm, we refer to this concept as the CF twins. To better explore the relationship between the two, we reformulate the task of fine-grained CF construction as an image super-resolution (ISR) problem. However, the conditional distribution of SR outputs given coarse-grained inputs typically follows a complicated parametric distribution, potentially resulting in suboptimal performance for most feedforward neural network-based regression algorithms in ISR tasks [21], [22].

Recently, generative AI (GenAI) has emerged as a promising technique for high-fidelity DT construction, showcasing exceptional capabilities in modeling complicated relationships and distributions to effectively synthesize and reconstruct high-dimensional data [21]–[26]. Among various GenAI techniques, generative diffusion models (GDM) [23] are widely recognized as one of the most prominent classes of generative models. A GDM requires neither training additional discriminators, as in generative adversarial networks (GAN) [21], nor aligning posterior distributions, as in variational autoencoders (VAE) [26]. Therefore, GDMs have shown remarkable performance in high-dimensional data generation tasks [22], [27], underscoring their versatile application potential and efficacy. However, conventional GDMs are typically unconditional, which may lead to uncertainty in accurately generating the target fine-grained CF, potentially resulting in multiple possible solutions for the target CF. To this end, this paper proposes a conditional GDM (CGDM) with powerful implicit prior learning capabilities, serving as the CF twin hub, where the coarse-grained CF is introduced as side information to guide the iterative refinement process of the denoising neural network.
Moreover, to make the proposed CGDM lightweight, we further introduce an efficient pruning approach and a multi-objective knowledge distillation technique. Specifically, the main contributions of this paper are as follows:

• Building on the proposed concept of CF twin, we treat the coarse-grained CF as the physical object and the fine-grained CF as its corresponding virtual twin. To effectively capture the relationship between them, we reformulate this task as an ISR problem and solve it utilizing a Stein score-based iterative refinement approach.

• To enable the CF twin hub to more effectively learn the connections between physical entities and their virtual counterparts, we adopt a variational inference technique to derive the evidence lower bound (ELBO) of the log-conditional marginal distribution of the observed fine-grained CF, which serves as a surrogate training objective. Maximizing this objective allows the CGDM to learn the true target CF distribution, facilitating the transition from a standard normal distribution to the desired distribution. To better guide this transition, the coarse-grained CF is incorporated as side information during the optimization of the denoising network, thereby enhancing the controllability and fidelity of fine-grained CF generation.

• Considering that the proposed CGDM is a large AI model with numerous parameters, we develop an efficient pruning approach to enable its lightweight deployment. Specifically, we formulate the layer pruning task as a combinatorial optimization problem. By leveraging the inherent additivity property of network layers, we introduce a one-shot layer pruning strategy along with a multi-objective knowledge distillation technique, resulting in the lightweight CGDM (LiCGDM).
โ€ขExperimental results show that the proposed approach achieves significant performance improvements over the baselines. Additionally, we validate the generalization and knowledge transfer capabilities of the proposed model by conducting zero-shot performance testing on other SR CF tasks with unseen magnification factors (e.g., ร—16,ร—8). The rest of this paper is structured as follows. Section II introduces the system model and the formulation of the fine- grained CF construction problem. The overall design of the CGDM and its specific network architecture are introduced in Section III. The one-shot pruning approach and knowledge distillation technique are proposed in Section IV . Experimen- tal results are presented in Section V , with the conclusions provided in Section VI. II. S YSTEM MODEL ANDPROBLEM FORMULATION In this section, we first outline the communication sce- nario and present the massive MIMO-OFDM physical channel model for each potential user equipment (UE) location. Then, we model the specific CF, that is, the channel power, for each potential UE based on its location and environmental factors. Finally, we describe the fine-grained CF reconstruction problem from the perspective of ISR. A. Massive MIMO Physical Channel Model Consider a massive MIMO-OFDM communication scenario within a square area AโŠ‚R2, where the 2D coordinates of the UE locations are defined as {xm}M m=1=A. Specifically, the BS is outfitted with a uniform planar array (UPA) consisting of Nr=Nr,vร—Nr,hantenna elements, spaced at half-wavelength intervals, and serves Msingle-antenna users within a cell. Here, Nr,vandNr,hrepresent the numbers of antennas in each vertical column and horizontal row, respectively. It is assumed that the wireless propagation environment contains Lxmphysical paths from the BS to the UE located at xm. The system employs OFDM modulation with Ncsubcarri- ers, characterized by an adjacent subcarrier spacing of โˆ†f. 
Typically, $N_k$ active subcarriers are utilized at the center of the total $N_c$ subcarriers for signal transmission, while the remaining subcarriers serve as guard bands. Define the set of active subcarriers as $\mathcal{K} = \{-\frac{N_k}{2}, -\frac{N_k}{2}+1, \ldots, \frac{N_k}{2}-1\}$. The spatial-frequency domain response of the channel between the BS and the UE over the active subcarriers $\mathcal{K}$ is modeled as [28]
$$h^H(x_m) = \sum_{l=1}^{L_{x_m}} \alpha_{l,x_m}\, a_t^H(\tau_{l,x_m}) \otimes a_h^H(\psi_{l,x_m}) \otimes a_v^H(\phi_{l,x_m}), \tag{1}$$
where $\otimes$ denotes the Kronecker product, and $\alpha_{l,x_m}$ and $\tau_{l,x_m}$ represent the complex gain and the propagation delay, respectively, corresponding to the $l$-th path for the UE located at $x_m$. The normalized horizontal angle $\psi_{l,x_m} \in [-1,1]$ is related to the azimuth angle $\vartheta_{l,x_m}$, while the normalized vertical angle $\phi_{l,x_m} \in [-1,1]$ is associated with the elevation angle $\theta_{l,x_m} \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ through the relationship $\phi_{l,x_m} = \sin\theta_{l,x_m}$. The azimuth angle $\vartheta_{l,x_m} \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ enters through $\psi_{l,x_m} = \cos\theta_{l,x_m} \sin\vartheta_{l,x_m}$. The steering vectors $a_t(\tau_{l,x_m})$, $a_h(\psi_{l,x_m})$, and $a_v(\phi_{l,x_m})$ can be represented as
$$a_v(\phi_{l,x_m}) = \left[1,\ e^{-\jmath\pi\phi_{l,x_m}},\ \ldots,\ e^{-\jmath\pi(N_{r,v}-1)\phi_{l,x_m}}\right]^T, \tag{2}$$
$$a_h(\psi_{l,x_m}) = \left[1,\ e^{-\jmath\pi\psi_{l,x_m}},\ \ldots,\ e^{-\jmath\pi(N_{r,h}-1)\psi_{l,x_m}}\right]^T, \tag{3}$$
$$a_t(\tau_{l,x_m}) = \left[e^{-\jmath 2\pi\left(-\frac{N_k}{2}\right)\Delta f\, \tau_{l,x_m}},\ \cdots,\ e^{-\jmath 2\pi\left(\frac{N_k}{2}-1\right)\Delta f\, \tau_{l,x_m}}\right]^T, \tag{4}$$
where $a_h(\psi_{l,x_m}) \in \mathbb{C}^{N_{r,h}\times 1}$, $a_v(\phi_{l,x_m}) \in \mathbb{C}^{N_{r,v}\times 1}$, and $a_t(\tau_{l,x_m}) \in \mathbb{C}^{N_k\times 1}$.

B. Channel Fingerprint Model

Based on the physical channel model in Section II-A, the channel power at the UE located at $x_m$, in dB scale, is defined as
$$G(e, x_m) = \left[\mathbb{E}\left\{h^H(x_m)\, h(x_m)\right\}\right]_{\mathrm{dB}}, \tag{5}$$
where $\mathbb{E}\{\cdot\}$ denotes the expectation operation, and $e$ represents the propagation environment, which determines the channel characteristics in (1), including the complex gain and propagation delay. It is evident that the channel power attenuation experienced at the UE is influenced by various environmental factors, including propagation losses along different paths as well as reflections and diffractions caused by surrounding structures such as buildings [19], [29]. In this paper, we refer to the collection of channel power values at potential locations as the unstructured CF. Since the communication area under consideration is a 2D square area $\mathcal{A} \subset \mathbb{R}^2$, we can perform spatial discretization along both the X-axis and Y-axis. Specifically, given an area of interest of size $W \times W$, we define a resolution factor $\sigma$, such that the minimum spacing units in the spatial discretization process are $\Delta x = W/\sigma$ and $\Delta y = W/\sigma$. Each spatial grid is located at $\Lambda_{i,j}$, where $i = 1, 2, \ldots, W/\Delta x$ and $j = 1, 2, \ldots, W/\Delta y$, and the $(i,j)$-th spatial grid can be represented as
$$\Lambda_{i,j} := [i\Delta x,\ j\Delta y]^T. \tag{6}$$
Given the resolution factor $\sigma$, the unstructured CF corresponding to potential UE locations within the target area can be rearranged into a two-dimensional tensor, denoted as
$$[G]_{i,j} = G(e, \Lambda_{i,j}). \tag{7}$$
Furthermore, by incorporating additional dimensions such as time and frequency, the CF model can be naturally extended to a higher-order tensor representation.

Remark 1: When the target area has a size of $W \times W$ and the resolution factor is $\sigma$, the number of points requiring interpolation is $\lceil W/\sigma \rceil^2$. As the resolution factor $\sigma$ increases, the complexity of traditional interpolation algorithms also grows with $O(\lceil W/\sigma \rceil^3)$, posing significant challenges for constructing fine-grained CF in practical scenarios. Alternatively, the spatial resolution of the CF can be improved by increasing the density of measurement points, either through deploying more sensing nodes or by employing test vehicles to collect channel knowledge at finer geographical intervals.
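The channel model of Section II-A is simple to prototype. The NumPy sketch below assembles the steering vectors (2)–(4) and the Kronecker-product response (1) for a toy configuration; the path parameters and array sizes are made-up illustrative values, and conjugating the steering vectors stands in for the Hermitian transposes in (1):

```python
import numpy as np

def steer_v(phi, N_rv):
    """Vertical steering vector a_v(phi) as in (2)."""
    return np.exp(-1j * np.pi * phi * np.arange(N_rv))

def steer_h(psi, N_rh):
    """Horizontal steering vector a_h(psi) as in (3)."""
    return np.exp(-1j * np.pi * psi * np.arange(N_rh))

def steer_t(tau, N_k, delta_f):
    """Frequency steering vector a_t(tau) as in (4), over
    active subcarriers k = -N_k/2, ..., N_k/2 - 1."""
    k = np.arange(-N_k // 2, N_k // 2)
    return np.exp(-1j * 2 * np.pi * k * delta_f * tau)

def channel_response(paths, N_rv, N_rh, N_k, delta_f):
    """h^H(x_m) = sum_l alpha_l * a_t^H kron a_h^H kron a_v^H, per (1).
    `paths` is a list of (alpha, tau, psi, phi) tuples (toy values here)."""
    h = np.zeros(N_k * N_rh * N_rv, dtype=complex)
    for alpha, tau, psi, phi in paths:
        h += alpha * np.kron(steer_t(tau, N_k, delta_f).conj(),
                             np.kron(steer_h(psi, N_rh).conj(),
                                     steer_v(phi, N_rv).conj()))
    return h

# Toy example: 2 paths, a 4x4 UPA, 8 active subcarriers, 15 kHz spacing.
paths = [(1.0, 1e-7, 0.3, -0.2), (0.5 + 0.2j, 3e-7, -0.6, 0.1)]
h = channel_response(paths, N_rv=4, N_rh=4, N_k=8, delta_f=15e3)
print(h.shape)  # (128,) = N_k * N_rh * N_rv
```

For a single unit-gain path with zero delay and zero angles, every entry of the response is 1, which is a convenient sanity check on the Kronecker assembly.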
However, both approaches may be impractical due to high hardware and labor costs.

C. Problem Formulation

Define a coarse-grained factor $\sigma_{LR}$ and a fine-grained factor $\sigma_{HR}$, where $\sigma_{HR}$ is typically an integer multiple (e.g., $\times 4$, $\times 8$) of $\sigma_{LR}$. Accordingly, the coarse-grained CF and fine-grained CF are denoted by $G_{LR}$ and $G_{HR}$, which are collected by discretizing the target area into $\sigma_{LR} \times \sigma_{LR}$ and $\sigma_{HR} \times \sigma_{HR}$ grids, respectively. In light of Remark 1, the CF twin aims to reconstruct the fine-grained CF $G_{HR}$ from a given coarse-grained CF $G_{LR}$, particularly in scenarios constrained by measurement costs, privacy concerns, or security requirements.

In a typical ISR task, the goal is to reconstruct a high-resolution (HR) image from a given low-resolution (LR) counterpart, thereby enhancing the fine details and overall quality of the image. It can be observed that our problem aligns with the ISR task, which inspires us to analyze the fine-grained CF reconstruction problem from an ISR perspective. Specifically, we treat the elements of the $G_{LR}$ matrix as pixels, viewing the coarse-grained $G_{LR}$ as an LR image and the fine-grained $G_{HR}$ as an HR image. Then, our goal is to learn a specific mapping relationship that efficiently reconstructs the HR CF $G_{HR}$ from a given LR CF $G_{LR}$, i.e.,
$$\mathcal{M}_\Theta : G_{LR,u} \to G_{HR,u}, \quad \forall u \in \{1, 2, \ldots, U\}, \tag{8}$$
where $\Theta$ is the parameter set of this mapping $\mathcal{M}_\Theta$, and $U$ is the number of training samples. However, this task represents a classic and highly challenging inverse problem, requiring the effective reconstruction of fine details from a given LR CF. Since the conditional distribution of HR outputs corresponding to a given LR input typically does not adhere to a simple parametric distribution, many feedforward neural network-based regression methods for ISR tend to perform poorly at higher upscaling factors, struggling to recover fine details accurately [22]. In contrast, deep generative models have demonstrated success in learning complicated empirical distributions of target data. Specifically, if the implicit prior information of the HR CF distribution, such as the gradient of the data log-density, can be learned, one can transition to the target CF distribution through iterative sampling steps from a standard normal distribution, similar to Langevin dynamics [30]. Therefore, learning the target HR CF distribution can be solved by optimizing
$$\operatorname*{argmin}_{\Theta}\ \mathbb{E}_{p(G_{HR})}\!\left[\left\|\nabla \log p(G_{HR}) - \nabla \log p_\Theta(G_{HR})\right\|_2^2\right], \tag{9a}$$
$$\text{s.t.}\quad x_m \in \mathcal{A},\ u \in \{1, 2, \ldots, U\}, \tag{9b}$$
where $\nabla \log p(G_{HR})$ represents the gradient of the HR CF log-density, also referred to as the Stein score, and $p_\Theta$ denotes the learned density. It has been shown that the noise learned by a traditional GDM is equivalent to the Stein score [23], enabling the generation of HR CF samples aligned with the target data distribution. However, for traditional GDMs, accurately generating the target HR CF is an ill-posed task, meaning that the generation process may yield multiple possible solutions for the HR CF. Therefore, in contrast to traditional GDMs, which start from a pure Gaussian noise tensor, introducing an additional source signal as side information (also a guiding condition) is essential to achieve an optimal solution.
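As a toy illustration of the Langevin-type sampling idea invoked above (not the paper's CGDM), unadjusted Langevin dynamics can draw samples from a one-dimensional Gaussian once its true Stein score $-(x-\mu)/s^2$ is supplied:

```python
import math, random

def langevin_sample(score, x0, step=1e-2, n_steps=2000, rng=random):
    """Unadjusted Langevin dynamics: x <- x + (step/2)*score(x) + sqrt(step)*z,
    with z ~ N(0,1). Given the true score, iterates approach the target law."""
    x = x0
    for _ in range(n_steps):
        x = x + 0.5 * step * score(x) + math.sqrt(step) * rng.gauss(0.0, 1.0)
    return x

# Target: N(mu, s^2); its Stein score is d/dx log p(x) = -(x - mu)/s^2.
mu, s = 3.0, 0.5
score = lambda x: -(x - mu) / s**2
rng = random.Random(0)
samples = [langevin_sample(score, x0=0.0, rng=rng) for _ in range(200)]
mean = sum(samples) / len(samples)
print(round(mean, 2))
```

The empirical mean of the samples concentrates near $\mu$, mirroring how a learned score lets a diffusion model move from pure noise toward the target CF distribution.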
In this context, the objective (9) needs to be reformulated as
$$\operatorname*{argmin}_{\Theta}\ \mathbb{E}_{p(G_{HR}, G_{LR})}\!\left[\left\|\nabla \log p(G_{HR} \mid G_{LR}) - \nabla \log p_\Theta(G_{HR} \mid G_{LR})\right\|_2^2\right], \tag{10a}$$
$$\text{s.t.}\quad x_m \in \mathcal{A},\ u \in \{1, 2, \ldots, U\}. \tag{10b}$$
To this end, our approach leverages the learned prior information, with the LR CF $G_{LR}$ serving as the additional source signal to guide the iterative refinement process. Further implementation details are provided in Section III.

III. CGDM-ENABLED SR CF

In this section, we introduce the proposed CGDM as the core computational unit of the CF twin. First, under the variational inference framework, we derive a concrete proxy objective to ensure the effective operation of the proposed CGDM. Next, we introduce the LR CF as side information and design a conditional GDM to iteratively refine the transformation from a standard normal distribution to the target data distribution, akin to Langevin dynamics, for HR CF reconstruction. Finally, we present the detailed network architecture of the proposed CGDM. To simplify the notation, $G_{LR}$ and $G_{HR}$ are represented by $\dot{G}$ and $G$, respectively, in the subsequent sections.

A. CGDM Design for SR CF

We are given a dataset of LR CF inputs paired with HR CF outputs, defined as $\mathcal{D} = \{\dot{G}_u, G_u\}_{u=1}^{U}$, which represents samples drawn from an unknown distribution $p(\dot{G}, G)$. Such datasets are generally collected from sensing nodes and test vehicles, with different resolution factors (e.g., $\sigma_{LR}$ and $\sigma_{HR}$), depending on the specific scenario. In our task, the focus is on learning a parametric
approximation to $p(\mathbf{G}\mid\dot{\mathbf{G}})$ through a directed iterative refinement process guided by source information, which enables the mapping of $\dot{\mathbf{G}}$ to $\mathbf{G}$. Given the powerful implicit prior learning capability of GDMs, we design a conditional GDM to facilitate the generation of $\mathbf{G}$. As shown in Fig. 2, CGDM can generate the target HR CF, defined as $\mathbf{G}_0$, through $T$ refinement time steps. Beginning with a CF $\mathbf{G}_T \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ composed of pure Gaussian noise, CGDM iteratively refines this initial input based on the source signal and the prior information learned during the training process, namely the conditional distribution $p_\theta(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\dot{\mathbf{G}})$. As it progresses through each time step $t$, it produces a series of output CFs defined as $\{\mathbf{G}_{T-1},\mathbf{G}_{T-2},\ldots,\mathbf{G}_0\}$, ultimately resulting in the target HR CF $\mathbf{G}_0 \sim p(\mathbf{G}\mid\dot{\mathbf{G}})$.

Fig. 2. The mechanism of CGDM for generating HR CF consists of a Gaussian diffusion process (without learnable parameters) and an iterative refinement process based on LR CF. Specifically, the pink arrow indicates the direction of the Gaussian diffusion process, which progressively adds noise to the HR CF. The blue arrow indicates the direction of the iterative refinement process, which utilizes the implicit prior learned during training and is conditioned on the source information (LR CF) to generate the HR CF.

Specifically, the distribution of intermediate CFs in the iterative refinement chain is governed by the forward diffusion process, which gradually adds noise to the output CF through a fixed Markov chain, denoted as $q(\mathbf{G}_t\mid\mathbf{G}_{t-1})$. Our model seeks to reverse the Gaussian diffusion process by iteratively recovering the signal from noise through a reverse Markov chain conditioned on the source CF $\dot{\mathbf{G}}$.
To achieve this, we learn the reverse chain by leveraging a denoising neural network $\boldsymbol{\varepsilon}_\theta(\cdot)$, optimized using the objective function (32). The CGDM takes as input an LR CF and a noisy image to estimate the noise and, after $T$ refinement steps, generates the target HR CF.

B. Diffusion Process Starting with HR CF

Consider an HR CF sample drawn from a distribution of interest, denoted as $\mathbf{G}_0 = \mathbf{G} \sim q(\mathbf{G})$. The GDM utilizes a fixed diffusion process $q(\mathbf{G}_{1:T}\mid\mathbf{G}_0)$ for training, which involves relatively high-dimensional latent variables. This process defines a forward diffusion mechanism, a fixed Markov chain, in which Gaussian noise is gradually introduced to the sample over $T$ steps. The noise level at each step is determined by a variance schedule $\{\beta_t \in (0,1)\}_{t=1}^{T}$. Specifically, the forward diffusion process is defined as [23]
$$q(\mathbf{G}_{1:T}\mid\mathbf{G}_0) = \prod_{t=1}^{T} q(\mathbf{G}_t\mid\mathbf{G}_{t-1}). \tag{11}$$

Algorithm 1 Offline Training Strategy for the Denoising Neural Network Conditioned on LR CF
1: repeat
2:   Load CF training data pairs $(\dot{\mathbf{G}}, \mathbf{G}_0) \sim p(\dot{\mathbf{G}}, \mathbf{G}_0)$
3:   Sample a time step $t \sim \mathrm{Uniform}(1, \ldots, T)$
4:   Randomly generate a noise tensor with the same dimensions as $\mathbf{G}_0$: $\boldsymbol{\varepsilon}_t \sim \mathcal{N}(\mathbf{0},\mathbf{I})$
5:   Add noise incrementally to the HR CF $\mathbf{G}_0$ according to (14) to perform the diffusion process
6:   Input the corrupted HR CF $\mathbf{G}_t$, LR CF $\dot{\mathbf{G}}$, and time step
$t$ into the model $\boldsymbol{\varepsilon}_\theta(\cdot)$
7:   Perform a gradient descent step on the objective function (32) to update the model parameters $\theta$:
$$\nabla_\theta \left\|\boldsymbol{\varepsilon}_t - \boldsymbol{\varepsilon}_\theta\!\left(\dot{\mathbf{G}},\ \sqrt{\bar{\alpha}_t}\,\mathbf{G}_0 + \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\varepsilon}_t,\ t\right)\right\|_2^2$$
8: until the objective function (32) converges

Algorithm 2 Inferring the HR CF in the Reverse Process Conditioned on the LR CF through $T$ Iterative Refinement Steps
1: Load the pretrained model and its weights $\theta$
2: Obtain the completely corrupted HR CF $\mathbf{G}_T \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ and the LR CF $\dot{\mathbf{G}}$
3: for $t = T, \ldots, 1$ do
4:   $\boldsymbol{\varepsilon}^* \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ if $t > 1$, else $\boldsymbol{\varepsilon}^* = \mathbf{0}$
5:   Execute the refinement step according to (34):
$$\mathbf{G}_{t-1} \leftarrow \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{G}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\boldsymbol{\varepsilon}_\theta\!\left(\dot{\mathbf{G}}, \mathbf{G}_t, t\right)\right) + \sqrt{\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\,\beta_t}\;\boldsymbol{\varepsilon}^*$$
6: end for
7: return HR CF $\hat{\mathbf{G}}_0$

Therefore, the forward diffusion process does not involve trainable parameters. Instead, it is a fixed, predefined linear Gaussian model, which can be denoted as
$$q(\mathbf{G}_t\mid\mathbf{G}_{t-1}) = \mathcal{N}\!\left(\mathbf{G}_t;\ \sqrt{1-\beta_t}\,\mathbf{G}_{t-1},\ \beta_t\mathbf{I}\right), \tag{12}$$
$$\mathbf{G}_t = \sqrt{1-\beta_t}\,\mathbf{G}_{t-1} + \sqrt{\beta_t}\,\boldsymbol{\varepsilon}, \tag{13}$$
where $\boldsymbol{\varepsilon}$ denotes Gaussian noise with distribution $\mathcal{N}(\boldsymbol{\varepsilon};\mathbf{0},\mathbf{I})$. Define $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^{t}\alpha_i$. Under the linear Gaussian assumption on the transition density $q(\mathbf{G}_t\mid\mathbf{G}_{t-1})$, and by combining (12) and (13), we can use the reparameterization technique to sample $\mathbf{G}_t$ in closed form at any given time step $t$:
$$\mathbf{G}_t = \sqrt{\bar{\alpha}_t}\,\mathbf{G}_0 + \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\varepsilon}, \tag{14}$$
$$q(\mathbf{G}_t\mid\mathbf{G}_0) = \mathcal{N}\!\left(\mathbf{G}_t;\ \sqrt{\bar{\alpha}_t}\,\mathbf{G}_0,\ (1-\bar{\alpha}_t)\mathbf{I}\right). \tag{15}$$
The variance schedule is generally set so that $\beta_1 < \beta_2 < \ldots < \beta_T$. When $\beta_T$ is set infinitely close to 1, $\mathbf{G}_T$ converges to a standard Gaussian distribution for any initial state $\mathbf{G}_0$, i.e., $q(\mathbf{G}_T\mid\mathbf{G}_0) \approx \mathcal{N}(\mathbf{G}_T;\mathbf{0},\mathbf{I})$.

C. Reverse Diffusion Process Conditioned on LR CF

For the traditional GDM [23], the reverse process can be considered a decoding procedure, where at each time step $t$, $\mathbf{G}_t$ is denoised and restored to $\mathbf{G}_{t-1}$, with the transition probability of each step denoted as $p(\mathbf{G}_{t-1}\mid\mathbf{G}_t)$. Based on the Markov transition properties, the joint distribution of the reverse process is expressed as
$$p(\mathbf{G}_{0:T}) = p(\mathbf{G}_T)\prod_{t=1}^{T} p(\mathbf{G}_{t-1}\mid\mathbf{G}_t). \tag{16}$$
However, deriving the expression for $p(\mathbf{G}_{t-1}\mid\mathbf{G}_t)$ using Bayes' theorem reveals that its denominator contains an integral that lacks a closed-form solution. Consequently, a denoising neural network with parameters $\theta$ needs to be developed to approximate these conditional probabilities in order to execute the reverse process. Since the reverse process also functions as a Markov chain, it can be represented as
$$p(\mathbf{G}_{0:T}) = p(\mathbf{G}_T)\prod_{t=1}^{T} p_\theta(\mathbf{G}_{t-1}\mid\mathbf{G}_t), \tag{17}$$
$$p_\theta(\mathbf{G}_{t-1}\mid\mathbf{G}_t) = \mathcal{N}\!\left(\mathbf{G}_{t-1};\ \boldsymbol{\mu}_\theta(\mathbf{G}_t,t),\ \boldsymbol{\Sigma}_\theta(\mathbf{G}_t,t)\right). \tag{18}$$
Note that in our task, unlike the traditional GDM, the denoising model $\boldsymbol{\varepsilon}_\theta(\cdot)$ is conditioned on side information in the form of an LR CF $\dot{\mathbf{G}}$, guiding it to progressively denoise from a Gaussian-distributed $\mathbf{G}_T$ and generate the HR CF $\mathbf{G}_0$. Therefore, (17) needs to be rewritten as
$$p(\mathbf{G}_{0:T}\mid\dot{\mathbf{G}}) = p(\mathbf{G}_T)\prod_{t=1}^{T} p_\theta(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\dot{\mathbf{G}}), \tag{19}$$
and the denoising model $\boldsymbol{\varepsilon}_\theta(\cdot)$ is trained to approximate $p_\theta(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\dot{\mathbf{G}})$, which is defined as
$$p_\theta(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\dot{\mathbf{G}}) = \mathcal{N}\!\left(\mathbf{G}_{t-1};\ \boldsymbol{\mu}_\theta(\dot{\mathbf{G}},\mathbf{G}_t,t),\ \boldsymbol{\Sigma}_\theta(\dot{\mathbf{G}},\mathbf{G}_t,t)\right). \tag{20}$$

D. ELBO-Based Optimization of the CGDM

To enable the network model $\boldsymbol{\varepsilon}_\theta(\cdot)$ to effectively approximate the reverse process, the model parameters $\theta$ need to be optimized with a specific objective. Mathematically, the latent variables $\mathbf{G}_{1:T}$ and the observed sample $\mathbf{G}_0$ conditioned on $\dot{\mathbf{G}}$ can be represented using a conditional joint distribution $p(\mathbf{G}_{0:T}\mid\dot{\mathbf{G}})$. One common likelihood-based approach in generative modeling involves optimizing the model to maximize the conditional joint probability distribution $p(\mathbf{G}_{0:T}\mid\dot{\mathbf{G}})$ of all observed samples. However, we only
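The closed-form sampling in (14)-(15) can be checked numerically: composing the per-step updates (13) produces the same distribution as jumping directly to step $t$ with $\bar{\alpha}_t$. A minimal sketch, using the paper's linear schedule ($T=1000$, $\beta_1 = 10^{-6}$, $\beta_T = 10^{-2}$ from Table I); the CF is replaced by a random array for illustration.

```python
import numpy as np

def make_schedule(T=1000, beta1=1e-6, betaT=1e-2):
    """Linear variance schedule beta_1..beta_T with alpha_t and alpha_bar_t."""
    betas = np.linspace(beta1, betaT, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def q_sample(g0, t, alpha_bars, rng):
    """Closed-form sample G_t ~ q(G_t | G_0), eq. (14)."""
    abar = alpha_bars[t]
    eps = rng.normal(size=g0.shape)
    return np.sqrt(abar) * g0 + np.sqrt(1.0 - abar) * eps

betas, alphas, alpha_bars = make_schedule()
rng = np.random.default_rng(0)
g0 = rng.normal(size=(32, 32))          # stand-in for a (normalized) HR CF

# Direct jump to step t = 500 via eq. (14) ...
gt_direct = q_sample(g0, 500, alpha_bars, rng)

# ... matches iterating the one-step update (13) in distribution.
g = g0.copy()
for t in range(501):
    g = np.sqrt(1.0 - betas[t]) * g + np.sqrt(betas[t]) * rng.normal(size=g.shape)
```

In both cases the residual noise around $\sqrt{\bar{\alpha}_{500}}\,\mathbf{G}_0$ has standard deviation $\sqrt{1-\bar{\alpha}_{500}}$.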
have access to the observed sample $\mathbf{G}_0$; the latent variables $\mathbf{G}_{1:T}$ are unknown. Therefore, we seek to maximize the conditional marginal distribution $p(\mathbf{G}_0\mid\dot{\mathbf{G}})$, which is given by
$$p(\mathbf{G}_0\mid\dot{\mathbf{G}}) = \int p(\mathbf{G}_{0:T}\mid\dot{\mathbf{G}})\, d\mathbf{G}_{1:T}. \tag{21}$$
Within the framework of variational inference, the likelihood of the observed sample $\mathbf{G}_0$ conditioned on $\dot{\mathbf{G}}$, known as the evidence, allows us to derive the ELBO as a proxy objective function, which can be used to optimize CGDM:
$$\log p(\mathbf{G}_0\mid\dot{\mathbf{G}}) = \log \int p(\mathbf{G}_{0:T}\mid\dot{\mathbf{G}})\, d\mathbf{G}_{1:T} \tag{22a}$$
$$\stackrel{(a)}{\geqslant} \mathbb{E}_{q(\mathbf{G}_{1:T}\mid\mathbf{G}_0)}\!\left[\log \frac{p(\mathbf{G}_{0:T}\mid\dot{\mathbf{G}})}{q(\mathbf{G}_{1:T}\mid\mathbf{G}_0)}\right], \tag{22b}$$
where $\stackrel{(a)}{\geqslant}$ in (22b) follows from Jensen's inequality. To further derive the ELBO, (22b) can be rewritten as
$$\log p(\mathbf{G}_0\mid\dot{\mathbf{G}}) \geqslant \mathbb{E}_{q(\mathbf{G}_{1:T}\mid\mathbf{G}_0)}\!\left[\log \frac{p(\mathbf{G}_T)\,p_\theta(\mathbf{G}_0\mid\mathbf{G}_1,\dot{\mathbf{G}})}{q(\mathbf{G}_1\mid\mathbf{G}_0)} + \log \frac{q(\mathbf{G}_1\mid\mathbf{G}_0)}{q(\mathbf{G}_T\mid\mathbf{G}_0)} + \log \prod_{t=2}^{T}\frac{p_\theta(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\dot{\mathbf{G}})}{q(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\mathbf{G}_0)}\right] \tag{23a}$$
$$= \underbrace{\mathbb{E}_{q(\mathbf{G}_1\mid\mathbf{G}_0)}\!\left[\log p_\theta(\mathbf{G}_0\mid\mathbf{G}_1,\dot{\mathbf{G}})\right]}_{\mathcal{L}_a} - \underbrace{\sum_{t=2}^{T}\mathbb{E}_{q(\mathbf{G}_t\mid\mathbf{G}_0)}\!\left[D_{\mathrm{KL}}\!\left(q(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\mathbf{G}_0)\,\|\,p_\theta(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\dot{\mathbf{G}})\right)\right]}_{\mathcal{L}_b} - \underbrace{D_{\mathrm{KL}}\!\left(q(\mathbf{G}_T\mid\mathbf{G}_0)\,\|\,p(\mathbf{G}_T)\right)}_{\mathcal{L}_c} = \mathcal{L}_{\mathrm{ELBO}}(\theta). \tag{23b}$$
Then, the parameters $\theta$ of CGDM can be learned by maximizing the ELBO:
$$\arg\min_\theta \mathcal{L}(\theta) = \mathbb{E}\left(-\mathcal{L}_{\mathrm{ELBO}}(\theta)\right) = \mathbb{E}\left(\mathcal{L}_c + \mathcal{L}_b - \mathcal{L}_a\right), \tag{24}$$
where $\mathcal{L}(\theta)$ is the objective function for CGDM training, $\mathcal{L}_c$ is constant in (23b) and can be excluded from the optimization, and $\mathcal{L}_a$ can be approximated and optimized via a Monte Carlo estimate [23]. From the above analysis, it is evident that the training objective of CGDM is primarily determined by $\mathcal{L}_b$: the model is trained to approximate the transition density of the reverse process as closely as possible, thereby minimizing the Kullback-Leibler (KL) divergence between $p_\theta(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\dot{\mathbf{G}}) = \mathcal{N}(\mathbf{G}_{t-1};\boldsymbol{\mu}_\theta(\dot{\mathbf{G}},\mathbf{G}_t,t),\boldsymbol{\Sigma}_\theta(\dot{\mathbf{G}},\mathbf{G}_t,t))$ and $q(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\mathbf{G}_0) = \mathcal{N}(\mathbf{G}_{t-1};\tilde{\boldsymbol{\mu}}_t(\mathbf{G}_t,\mathbf{G}_0),\tilde{\beta}_t\mathbf{I})$.

Therefore, we need to obtain explicit expressions for the mean $\tilde{\boldsymbol{\mu}}_t(\mathbf{G}_t,\mathbf{G}_0)$ and variance $\tilde{\beta}_t\mathbf{I}$ of $q(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\mathbf{G}_0)$. Specifically, $q(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\mathbf{G}_0)$ can be expressed as
$$q(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\mathbf{G}_0) = \frac{q(\mathbf{G}_t\mid\mathbf{G}_{t-1},\mathbf{G}_0)\,q(\mathbf{G}_{t-1}\mid\mathbf{G}_0)}{q(\mathbf{G}_t\mid\mathbf{G}_0)}. \tag{25}$$
Combining (14) and (15), (25) can be further rewritten as
$$q(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\mathbf{G}_0) \propto \exp\!\left(-\frac{1}{2}\cdot\frac{1-\bar{\alpha}_t}{(1-\alpha_t)(1-\bar{\alpha}_{t-1})}\left(\mathbf{G}_{t-1}^2 - 2\,\frac{\sqrt{\alpha_t}(1-\bar{\alpha}_{t-1})\mathbf{G}_t + \sqrt{\bar{\alpha}_{t-1}}(1-\alpha_t)\mathbf{G}_0}{1-\bar{\alpha}_t}\,\mathbf{G}_{t-1}\right)\right). \tag{26}$$
Based on (26), the mean and variance of $q(\mathbf{G}_{t-1}\mid\mathbf{G}_t,\mathbf{G}_0)$ are explicitly expressed as
$$\tilde{\boldsymbol{\mu}}_t(\mathbf{G}_t,\mathbf{G}_0) = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{G}_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\boldsymbol{\varepsilon}_t\right), \tag{27}$$
$$\tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\,\beta_t. \tag{28}$$
Generally, the variance $\boldsymbol{\Sigma}_\theta(\dot{\mathbf{G}},\mathbf{G}_t,t)$ is set as a constant $\tilde{\beta}_t\mathbf{I}$ [23], [31]. Therefore, to ensure that the denoising transition density closely approximates the ground-truth denoising transition density, we can simplify the optimization of the KL divergence term to minimizing the difference between the means of the above two distributions. In this case, we only need to train CGDM to predict $\tilde{\boldsymbol{\mu}}_t(\mathbf{G}_t,\mathbf{G}_0)$:
$$\boldsymbol{\mu}_\theta(\dot{\mathbf{G}},\mathbf{G}_t,t) = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{G}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\boldsymbol{\varepsilon}_\theta(\dot{\mathbf{G}},\mathbf{G}_t,t)\right). \tag{29}$$
Recall that the KL divergence between two Gaussians is defined as
$$D_{\mathrm{KL}}\!\left(\mathcal{N}(\mathbf{x};\boldsymbol{\mu}_x,\boldsymbol{\Sigma}_x)\,\|\,\mathcal{N}(\mathbf{y};\boldsymbol{\mu}_y,\boldsymbol{\Sigma}_y)\right) = \frac{1}{2}\left[\log\frac{|\boldsymbol{\Sigma}_y|}{|\boldsymbol{\Sigma}_x|} - d + \mathrm{tr}\!\left(\boldsymbol{\Sigma}_y^{-1}\boldsymbol{\Sigma}_x\right) + (\boldsymbol{\mu}_y-\boldsymbol{\mu}_x)^{\mathsf{T}}\boldsymbol{\Sigma}_y^{-1}(\boldsymbol{\mu}_y-\boldsymbol{\mu}_x)\right], \tag{30}$$
where $d$ represents the dimension of $\mathbf{x}$. Substituting (27), (29), and (30) into $\mathcal{L}_b$ in (23b) yields
$$\mathcal{L}_b = \sum_{t=2}^{T}\mathbb{E}_{q(\mathbf{G}_t\mid\mathbf{G}_0)}\!\left[\frac{(1-\alpha_t)^2}{2\tilde{\beta}_t\,\alpha_t(1-\bar{\alpha}_t)}\left\|\boldsymbol{\varepsilon}_t - \boldsymbol{\varepsilon}_\theta(\dot{\mathbf{G}},\mathbf{G}_t,t)\right\|_2^2\right]. \tag{31}$$
Substituting (23b) and (31) into (24), the objective function $\mathcal{L}(\theta)$ for the CGDM can be further simplified to
$$\mathcal{L}(\theta) := \sum_{t=1}^{T}\mathbb{E}_{\mathbf{G}_0,\boldsymbol{\varepsilon}_t}\Bigl[\bigl\|\boldsymbol{\varepsilon}_t - \boldsymbol{\varepsilon}_\theta\bigl(\dot{\mathbf{G}},\ \underbrace{\sqrt{\bar{\alpha}_t}\,\mathbf{G}_0 + \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\varepsilon}_t}_{\mathbf{G}_t},\ t\bigr)\bigr\|_2^2\Bigr]. \tag{32}$$
Based on the trained CGDM, given any noise-contaminated CF $\mathbf{G}_t$, the model can leverage the side information in $\dot{\mathbf{G}}$ to predict the noise $\boldsymbol{\varepsilon}_t$ and
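The posterior statistics (27)-(28) can be sanity-checked numerically: the mean can be computed either in the $(\mathbf{G}_t,\mathbf{G}_0)$ form $\tilde{\boldsymbol{\mu}}_t = \bigl(\sqrt{\alpha_t}(1-\bar{\alpha}_{t-1})\mathbf{G}_t + \sqrt{\bar{\alpha}_{t-1}}\beta_t\mathbf{G}_0\bigr)/(1-\bar{\alpha}_t)$ implied by (26), or in the noise form (27) with $\boldsymbol{\varepsilon}_t$ recovered from (14); both agree. A sketch under the paper's linear schedule assumption:

```python
import numpy as np

# Linear schedule; T and the beta range follow the paper's Table I setup.
T = 1000
betas = np.linspace(1e-6, 1e-2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

rng = np.random.default_rng(1)
g0 = rng.normal(size=(8, 8))                 # stand-in HR CF
t = 400                                      # an arbitrary interior step
eps_t = rng.normal(size=g0.shape)
gt = np.sqrt(alpha_bars[t]) * g0 + np.sqrt(1 - alpha_bars[t]) * eps_t   # eq. (14)

abar_t, abar_prev = alpha_bars[t], alpha_bars[t - 1]

# Posterior mean in terms of (G_t, G_0), from eq. (26).
mu_from_g0 = (np.sqrt(alphas[t]) * (1 - abar_prev) * gt
              + np.sqrt(abar_prev) * betas[t] * g0) / (1 - abar_t)

# Posterior mean in terms of the noise, eq. (27).
mu_from_eps = (gt - betas[t] / np.sqrt(1 - abar_t) * eps_t) / np.sqrt(alphas[t])

# Posterior variance, eq. (28).
beta_tilde = (1 - abar_prev) / (1 - abar_t) * betas[t]
```

The two mean expressions coincide up to floating-point error, and $\tilde{\beta}_t$ is strictly smaller than $\beta_t$.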
subsequently obtain an approximation of the target CF $\hat{\mathbf{G}}_0$ through the transformation implied by (14), i.e.,
$$\hat{\mathbf{G}}_0 = \frac{1}{\sqrt{\bar{\alpha}_t}}\Bigl(\mathbf{G}_t - \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\varepsilon}_\theta\bigl(\dot{\mathbf{G}},\ \underbrace{\sqrt{\bar{\alpha}_t}\,\mathbf{G}_0 + \sqrt{1-\bar{\alpha}_t}\,\boldsymbol{\varepsilon}}_{\mathbf{G}_t},\ t\bigr)\Bigr). \tag{33}$$
Through reparameterization, (33) represents the result of iterative refinements, with each iteration of our proposed CGDM given by
$$\mathbf{G}_{t-1} \leftarrow \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{G}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\boldsymbol{\varepsilon}_\theta\bigl(\dot{\mathbf{G}},\mathbf{G}_t,t\bigr)\right) + \sqrt{\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\,\beta_t}\;\boldsymbol{\varepsilon}^*, \tag{34}$$
where $\boldsymbol{\varepsilon}^* \sim \mathcal{N}(\mathbf{0},\mathbf{I})$. Note that the noise estimation step in (34) is analogous to a step of Langevin dynamics in score-based generative models [30]: it is equivalent to estimating the first derivative of the log-likelihood of the observed samples, also known as the gradient or Stein score. For clarity, we summarize the training process and the iterative inference process of the proposed CGDM in Algorithm 1 and Algorithm 2, respectively.

E. Network Architecture of the Proposed CGDM

In this subsection, we present the architecture of CGDM, a variant of the U-Net model, as illustrated in Fig. 3. The network is composed of three primary stages: Dn, Mid, and Up. For clarity, we provide a concise overview of the key components in each stage, including the time embedding block, Res+ block, self-attention block, downsampling block, and upsampling block.

1) Time Embedding Block: To encode the time parameter $t \in [1, \ldots, T]$ of the diffusion process, sine and cosine functions with different frequencies are employed, akin to the sinusoidal positional encoding approach used in [32]. This approach results in the time embedding vector $\Gamma_t$, which effectively captures the temporal characteristics.
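The refinement loop of Algorithm 2 and eq. (34) can be sketched as follows. This is a minimal sketch: the trained network $\boldsymbol{\varepsilon}_\theta$ is replaced by a stand-in function (the real model is the U-Net described later), and the schedule follows the paper's linear setup.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-6, 1e-2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def reverse_refinement(eps_theta, g_lr, shape, rng):
    """Algorithm 2: iterate eq. (34) from G_T ~ N(0, I) down to G_0,
    conditioned on the LR CF g_lr."""
    g = rng.normal(size=shape)                      # G_T: pure Gaussian noise
    for t in range(T - 1, -1, -1):
        eps_star = rng.normal(size=shape) if t > 0 else np.zeros(shape)
        abar = alpha_bars[t]
        abar_prev = alpha_bars[t - 1] if t > 0 else 1.0
        mean = (g - (1 - alphas[t]) / np.sqrt(1 - abar)
                * eps_theta(g_lr, g, t)) / np.sqrt(alphas[t])
        sigma = np.sqrt((1 - abar_prev) / (1 - abar) * betas[t])
        g = mean + sigma * eps_star                 # eq. (34)
    return g

# Stand-in denoiser (illustrative only): always predicts zero noise.
dummy_eps = lambda g_lr, g, t: np.zeros_like(g)
rng = np.random.default_rng(0)
g_hat = reverse_refinement(dummy_eps, g_lr=None, shape=(16, 16), rng=rng)
```

With a trained $\boldsymbol{\varepsilon}_\theta$, the same loop would progressively denoise toward $\hat{\mathbf{G}}_0 \sim p(\mathbf{G}\mid\dot{\mathbf{G}})$; here it only demonstrates the control flow of the sampler.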
Let $\Gamma_t^{(j)} \in \mathbb{R}$ denote the $j$-th component of the time embedding vector at time step $t$.
Fig. 3. Diagram and key modules of the CGDM architecture. Specifically, the network architecture of CGDM consists of three primary stages: Dn (substages 1-5), Mid (substage 6), and Up (substages 7-12). The blocks included in each stage are illustrated in subfigures (a), (b), (c), (e), (f), and (g); (d) shows the network architecture of the proposed CGDM. Additionally, the red and purple arrows represent the embedding of the time constant $t$ and the skip connections, respectively. Taking the $\times 4$ HR CF reconstruction task as an example, the LR CF with a size of $32\times 32\times 3$ is upsampled to the target resolution (i.e., $128\times 128\times 3$) and concatenated with noise of the same resolution along the channel dimension to form the input, resulting in a size of $128\times 128\times 6$. The number $2c_{in}$ of input channels is expanded to $c_1$, the number of base channels in the latent space, after passing through substage 0.
Note that within the same substage, the height $h$ and width $w$ of the feature maps remain unchanged, while the number $c$ of channels at different substages is controlled by the channel number multiplier $\bar{\eta} = c_1:c_2:c_3:c_4:c_5$. The specific values of $c_1$, $\bar{\eta}$, and $N_{RA}$ will be discussed in Subsection V-B.

The component $\Gamma_t^{(j)}$ of the time embedding vector at time step $t$ is defined as
$$\Gamma_t^{(j)} = \begin{cases} \cos\!\left(\dfrac{t}{10000^{2j/c_{\mathrm{time}}}}\right), & \text{if } j \text{ is odd}, \\[2ex] \sin\!\left(\dfrac{t}{10000^{2j/c_{\mathrm{time}}}}\right), & \text{if } j \text{ is even}, \end{cases} \tag{35}$$
where $j = 0, 1, 2, \ldots, c_{\mathrm{time}}/2 - 1$ and $c_{\mathrm{time}}$ is the dimension of the time embedding vector. Then, the embedding vector for the time constant $t$ can be expressed as
$$\Gamma_t = \left[\sin(w_0 t), \cos(w_0 t), \sin(w_1 t), \cos(w_1 t), \ldots, \sin\!\left(w_{\frac{c_{\mathrm{time}}}{2}-1}\, t\right), \cos\!\left(w_{\frac{c_{\mathrm{time}}}{2}-1}\, t\right)\right], \tag{36}$$
where $w_j = \dfrac{1}{10000^{2j/c_{\mathrm{time}}}}$. Leveraging (36), the embedding vectors
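A minimal sketch of the sinusoidal time embedding in (36); the value of $c_{\mathrm{time}}$ here is an illustrative assumption.

```python
import numpy as np

def time_embedding(t, c_time=128):
    """Sinusoidal time embedding, eq. (36): interleaved sin/cos at
    frequencies w_j = 1 / 10000**(2j / c_time)."""
    j = np.arange(c_time // 2)
    w = 1.0 / 10000.0 ** (2.0 * j / c_time)
    emb = np.empty(c_time)
    emb[0::2] = np.sin(w * t)   # even slots: sin(w_j * t)
    emb[1::2] = np.cos(w * t)   # odd slots:  cos(w_j * t)
    return emb

gamma = time_embedding(t=10)
```

Each frequency contributes one $(\sin, \cos)$ pair, so consecutive pairs of the embedding lie on the unit circle.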
for any given time, $\Gamma_{t+\Delta t}$, can be obtained through a linear transformation:
$$\Gamma_{t+\Delta t}^{\mathsf{T}} = \begin{bmatrix} \sin\!\left(w_0(t+\Delta t)\right) \\ \cos\!\left(w_0(t+\Delta t)\right) \\ \vdots \\ \sin\!\left(w_{\frac{c_{\mathrm{time}}}{2}-1}(t+\Delta t)\right) \\ \cos\!\left(w_{\frac{c_{\mathrm{time}}}{2}-1}(t+\Delta t)\right) \end{bmatrix} = \mathbf{M}_{\Delta t}\cdot\Gamma_t^{\mathsf{T}}, \tag{37}$$
where $\mathbf{M}_{\Delta t}$ is a block-diagonal linear transform matrix defined as
$$\mathbf{M}_{\Delta t} = \begin{pmatrix} \mathbf{R}(w_0\Delta t) & \cdots & \mathbf{0} \\ \vdots & \ddots & \vdots \\ \mathbf{0} & \cdots & \mathbf{R}\!\left(w_{\frac{c_{\mathrm{time}}}{2}-1}\Delta t\right) \end{pmatrix}, \qquad \mathbf{R}(w_0\Delta t) = \begin{pmatrix} \cos(w_0\Delta t) & \sin(w_0\Delta t) \\ -\sin(w_0\Delta t) & \cos(w_0\Delta t) \end{pmatrix}. \tag{38}$$
To enable the model to capture more intricate temporal features, the time embedding is further enhanced through two fully connected layers and an activation layer, expressed as $\Gamma'_t = f_{\mathrm{Ful}}(f_{\mathrm{Swi}}(f_{\mathrm{Ful}}(\Gamma_t)))$, where $f_{\mathrm{Ful}}(\cdot)$ and $f_{\mathrm{Swi}}(\cdot)$ represent the fully connected layer and the swish activation function layer, respectively. The components of the time embedding block are summarized in Fig. 3 (f).

2) Res+ Block: We design deeper feature extraction networks aimed at learning higher-level semantic information from the CF. Nonetheless, deep neural networks frequently experience model degradation as a result of challenges such as vanishing and exploding gradients. To this end, we introduce residual connections as a means of alleviating these concerns. Specifically, let $\mathbf{X} \in \mathbb{R}^{h_{in}\times w_{in}\times c_{in}}$ and $\Gamma_t$ be the feature map and time embedding vector inputs, respectively. The output $\mathbf{X}' \in \mathbb{R}^{h_{in}\times w_{in}\times c_{out}}$ of the Res+ block is given by
$$\mathbf{X}' = \begin{cases} \boldsymbol{\Omega} + f_{\mathrm{Conv},1}(\mathbf{X}), & \text{if } c_{out} \neq c_{in}, \\ \boldsymbol{\Omega} + \mathbf{X}, & \text{else}, \end{cases} \tag{39}$$
where $f_{\mathrm{Conv},1}(\cdot)$ denotes the $1\times 1$ convolution operation, and $\boldsymbol{\Omega} \in \mathbb{R}^{h_{in}\times w_{in}\times c_{out}}$ is defined as
$$\boldsymbol{\Omega} = g\bigl(g(\mathbf{X}) + f_{\mathrm{Ful}}(f_{\mathrm{Swi}}(f_{\mathrm{Ful}}(\Gamma_t)))\bigr). \tag{40}$$
Note that $g(\mathbf{X}) = f_{\mathrm{Conv},3}(f_{\mathrm{Dro}}(f_{\mathrm{Swi}}(f_{\mathrm{Gn}}(\mathbf{X}))))$, where $f_{\mathrm{Conv},3}(\cdot)$ is the $3\times 3$ convolution operation, and $f_{\mathrm{Dro}}(\cdot)$, $f_{\mathrm{Gn}}(\cdot)$ are the dropout and group normalization layers, respectively. The components of the Res+ block are summarized in Fig. 3 (a).

3) Self-Attention Block: To capture global structural information, we introduce a self-attention mechanism to improve the interaction between global and local features.
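The shift property (37) can be verified numerically: applying the block-diagonal rotation $\mathbf{M}_{\Delta t}$ of (38) to $\Gamma_t$ reproduces $\Gamma_{t+\Delta t}$ exactly. A sketch; $c_{\mathrm{time}}$ is an illustrative choice, and `time_embedding` denotes the interleaved sin/cos embedding of (36).

```python
import numpy as np

c_time = 16
j = np.arange(c_time // 2)
w = 1.0 / 10000.0 ** (2.0 * j / c_time)

def time_embedding(t):
    """Interleaved [sin(w_j t), cos(w_j t)] embedding of eq. (36)."""
    emb = np.empty(c_time)
    emb[0::2] = np.sin(w * t)
    emb[1::2] = np.cos(w * t)
    return emb

def shift_matrix(dt):
    """Block-diagonal M_dt of eq. (38): one 2x2 rotation block per frequency."""
    m = np.zeros((c_time, c_time))
    for i, wi in enumerate(w):
        c, s = np.cos(wi * dt), np.sin(wi * dt)
        m[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[c, s], [-s, c]]
    return m

t, dt = 37.0, 5.0
shifted = shift_matrix(dt) @ time_embedding(t)   # Gamma_{t+dt} = M_dt @ Gamma_t
```

The identity follows from the angle-addition formulas: each block maps $(\sin(wt), \cos(wt))$ to $(\sin(w(t+\Delta t)), \cos(w(t+\Delta t)))$.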
Specifically, to implement the self-attention mechanism, a normalized attention matrix is introduced to represent varying degrees of attention to the input; greater weights are assigned to more significant input components. The final output is generated by weighting the input according to the attention weights in the attention matrix. Given the input $\mathbf{Z} = [\mathbf{z}_1, \ldots, \mathbf{z}_m]^{\mathsf{T}} \in \mathbb{R}^{d_m\times d_n}$, where $d_m$ denotes the number of image patches and $d_n$ is the feature dimension of each patch, three different linear transformations are applied to each $\mathbf{z}_j$ [32], [33]:
$$\mathbf{k}_j = \mathbf{z}_j\mathbf{W}_k, \quad j = 1, \ldots, d_m, \tag{41a}$$
$$\mathbf{q}_j = \mathbf{z}_j\mathbf{W}_q, \quad j = 1, \ldots, d_m, \tag{41b}$$
$$\mathbf{v}_j = \mathbf{z}_j\mathbf{W}_v, \quad j = 1, \ldots, d_m, \tag{41c}$$
where $\mathbf{k}_j \in \mathbb{R}^{1\times d_k}$, $\mathbf{q}_j \in \mathbb{R}^{1\times d_q}$, and $\mathbf{v}_j \in \mathbb{R}^{1\times d_n}$ are the key, query, and value vectors, respectively, and $\mathbf{W}_k \in \mathbb{R}^{d_n\times d_k}$, $\mathbf{W}_q \in \mathbb{R}^{d_n\times d_q}$, and $\mathbf{W}_v \in \mathbb{R}^{d_n\times d_n}$ represent the respective trainable transformation matrices, with $d_k = d_q$. In detail, the weight allocation is determined by $\mathbf{k}_j$ and $\mathbf{q}_{j'}$: a higher correlation $\mathbf{q}_{j'}\mathbf{k}_j^{\mathsf{T}}$ implies that the features of the $j$-th input patch $\mathbf{z}_j$ hold greater importance for the $j'$-th output patch. Generally, this correlation can be adaptively adjusted based on the input $\mathbf{Z}$ and the matrices $\mathbf{W}_k$ and $\mathbf{W}_q$. For clarity, the matrix forms of (41a)-(41c) are $\mathbf{K} = \mathbf{Z}\mathbf{W}_k$, $\mathbf{Q} = \mathbf{Z}\mathbf{W}_q$, $\mathbf{V} = \mathbf{Z}\mathbf{W}_v$, where $\mathbf{K} = [\mathbf{k}_1, \ldots, \mathbf{k}_m]^{\mathsf{T}} \in \mathbb{R}^{d_m\times d_k}$, $\mathbf{Q} = [\mathbf{q}_1, \ldots, \mathbf{q}_m]^{\mathsf{T}} \in \mathbb{R}^{d_m\times d_q}$, and $\mathbf{V} = [\mathbf{v}_1, \ldots, \mathbf{v}_m]^{\mathsf{T}} \in \mathbb{R}^{d_m\times d_n}$. Leveraging $\mathbf{K}$ and $\mathbf{Q}$, we can obtain the attention matrix $\mathbf{S}_{AM} \in \mathbb{R}^{d_m\times d_m}$:
$$\mathbf{S}_{AM} = \mathrm{Softmax}\!\left(\frac{\mathbf{Q}\mathbf{K}^{\mathsf{T}}}{\sqrt{\eta}}\right), \tag{42}$$
where $\mathrm{Softmax}(\mathbf{z})_j = \exp(z_j)/\sum_{j'}\exp(z_{j'})$ is applied row-wise and $\sqrt{\eta} > 0$ is a scaling factor. Each row of the attention matrix is a vector of attention scores, i.e., each score is a probability: all scores are non-negative and sum to 1. Note that when the key vector $\mathbf{K}[j,:]$ and the query $\mathbf{Q}[j',:]$ have
a better match, the corresponding attention score $\mathbf{S}_{AM}[j', j]$ is higher. Thus, the output of the attention mechanism corresponding to the $r$-th component can be represented by the weighted sum of all inputs, i.e., $\mathbf{z}'_r = \sum_j \mathbf{S}_{AM}[r, j]\,\mathbf{v}_j = \mathbf{S}_{AM}[r,:]\cdot\mathbf{V}$, where $\mathbf{z}'_r \in \mathbb{R}^{1\times d_n}$ represents the $r$-th output, computed by adaptively focusing on the inputs according to the attention scores $\mathbf{S}_{AM}[r, j]$. When the attention score $\mathbf{S}_{AM}[r, j]$ is higher, the associated value vector $\mathbf{v}_j$ has a more significant impact on the $r$-th output patch. Finally, the output $\mathbf{Z}'$ of the attention block is given by
$$\mathbf{Z}' = \mathbf{S}_{AM}\cdot\mathbf{V} = \mathrm{Softmax}\!\left(\frac{\mathbf{Z}\mathbf{W}_q\mathbf{W}_k^{\mathsf{T}}\mathbf{Z}^{\mathsf{T}}}{\sqrt{\eta}}\right)\mathbf{Z}\mathbf{W}_v, \tag{43}$$
where $\mathbf{Z}' = [\mathbf{z}'_1, \ldots, \mathbf{z}'_m]^{\mathsf{T}} \in \mathbb{R}^{d_m\times d_n}$. The components of the self-attention block are summarized in Fig. 3 (b). Notably, as illustrated in Fig. 3, the network framework of the proposed CGDM consists of three primary stages: Dn, Mid, and Up. Each substage within these stages comprises $N_{RA}$ Res+ blocks, along with their corresponding self-attention blocks.

IV. LIGHTWEIGHT CGDM FOR SR CF

Considering that the proposed CGDM trades significant memory consumption and latency for outstanding performance, its deployment on personal computers and even mobile devices is highly constrained. Specifically, CGDM utilizes a variant of U-Net as its backbone framework, with most of the memory consumption and latency arising from this architecture. To this end, in this section, we leverage the additive property of network layers and design a one-shot pruning approach along with a corresponding knowledge distillation technique to obtain a lightweight CGDM (LiCGDM).

A.
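The attention computation in (41)-(43) can be sketched directly. This is a minimal numpy sketch: the dimensions are illustrative, and $\eta$ is set to $d_k$ as is common practice, although the paper leaves the scaling factor generic.

```python
import numpy as np

def self_attention(Z, Wk, Wq, Wv, eta):
    """Single-head self-attention, eqs. (41)-(43):
    Z' = Softmax(Z Wq Wk^T Z^T / sqrt(eta)) Z Wv."""
    K, Q, V = Z @ Wk, Z @ Wq, Z @ Wv
    scores = Q @ K.T / np.sqrt(eta)
    scores = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    S = np.exp(scores)
    S = S / S.sum(axis=1, keepdims=True)                  # row-wise softmax, eq. (42)
    return S @ V, S

d_m, d_n, d_k = 6, 8, 4        # patches, feature dim, key/query dim (illustrative)
rng = np.random.default_rng(0)
Z = rng.normal(size=(d_m, d_n))
Wk = rng.normal(size=(d_n, d_k))
Wq = rng.normal(size=(d_n, d_k))
Wv = rng.normal(size=(d_n, d_n))

Z_out, S = self_attention(Z, Wk, Wq, Wv, eta=d_k)
```

Each row of `S` is a probability vector over the input patches, so each output patch is a convex combination of the value vectors.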
Efficient Layer Pruning for CGDM

Given a specific pruning ratio, the objective of layer pruning is to eliminate a subset of prunable layers from the U-Net architecture in CGDM while minimizing the degradation in the model's performance. We define the set of all $\tilde{n}$ prunable layers as $\tilde{\mathcal{L}}_{\tilde{n}} = \{\tilde{l}_1, \ldots, \tilde{l}_{\tilde{n}}\}$. Inspired by [34], we minimize the mean-squared-error (MSE) loss between the final output of the original network (referred to as the teacher) and that of the pruned network (referred to as the student) as the pruning objective.

Algorithm 3 Efficient Layer Pruning for LiCGDM
1: Input: A teacher (original) network with prunable layers $\tilde{\mathcal{L}}_{\tilde{n}} = \{\tilde{l}_1, \ldots, \tilde{l}_{\tilde{n}}\}$, a CF dataset $\mathcal{D}$, the number of parameters $\mathcal{P}$ to be pruned
2: Initialize: Arrays $values[\,] = 0$ and $weights[\,] = 0$, Knapsack capacity $C = 0$
3: for $p_i$ in $\tilde{\mathcal{L}}_{\tilde{n}}$ do
4:   Calculate the value of the objective function (48a) on $\mathcal{D}$: $values[i] = \mathbb{E}\|\boldsymbol{\varepsilon}_{\mathrm{tea}} - \boldsymbol{\varepsilon}(p_i)\|_2^2$
5:   Calculate the number of parameters of the $p_i$-th layer: $weights[i] = \mathrm{Params}(p_i)$
6: end for
7: $C = \mathcal{P}$
8: Obtain $\mathcal{P}_{\tilde{m}}$ by solving problem (48) using the dynamic programming algorithm, given $values$, $weights$, $C$
9: Return $\mathcal{P}_{\tilde{m}}$
10: Output: The set of layers to be pruned $\mathcal{P}_{\tilde{m}} \subset \tilde{\mathcal{L}}_{\tilde{n}}$

Specifically, let $\boldsymbol{\varepsilon}_{\mathrm{tea}}$ denote the output of the original CGDM, $p_i \in \tilde{\mathcal{L}}_{\tilde{n}}$ represent the $i$-th pruned layer, and $\boldsymbol{\varepsilon}(p_1, p_2, \ldots, p_{\tilde{m}})$ denote the output of the network with layers $p_1, p_2, \ldots, p_{\tilde{m}}$ pruned, where $\tilde{m}$ is not fixed in advance. This optimization problem can be formulated as
$$\min_{p_1, p_2, \ldots, p_{\tilde{m}}} \mathbb{E}\left\|\boldsymbol{\varepsilon}_{\mathrm{tea}} - \boldsymbol{\varepsilon}(p_1, p_2, \ldots, p_{\tilde{m}})\right\|_2^2, \tag{44a}$$
$$\text{s.t.}\quad \{p_1, p_2, \ldots, p_{\tilde{m}}\} \subset \tilde{\mathcal{L}}_{\tilde{n}}, \quad \sum_{i=1}^{\tilde{m}}\mathrm{Params}(p_i) \geq \mathcal{P}, \tag{44b}$$
where $\mathrm{Params}(p_i)$ represents the number of parameters in the $p_i$-th layer. Note that $\mathcal{P}$ denotes the number of parameters to be
pruned, calculated by multiplying the total number of parameters in the teacher network by the pruning ratio. Solving the optimization problem (44) is NP-hard; therefore, we need to find a surrogate objective. Capitalizing on the triangle inequality, we can obtain an upper bound of (44a):
$$\mathbb{E}\left\|\boldsymbol{\varepsilon}_{\mathrm{tea}} - \boldsymbol{\varepsilon}(p_1, p_2, \ldots, p_{\tilde{m}})\right\|_2^2 \leq \mathcal{L}_{\mathrm{upper}}, \tag{45}$$
where $\mathcal{L}_{\mathrm{upper}} = \mathbb{E}\|\boldsymbol{\varepsilon}_{\mathrm{tea}} - \boldsymbol{\varepsilon}(p_1)\|_2^2 + \mathbb{E}\|\boldsymbol{\varepsilon}(p_1) - \boldsymbol{\varepsilon}(p_1, p_2)\|_2^2 + \ldots + \mathbb{E}\|\boldsymbol{\varepsilon}(p_1, p_2, \ldots, p_{\tilde{m}-1}) - \boldsymbol{\varepsilon}(p_1, p_2, \ldots, p_{\tilde{m}})\|_2^2$. Then, (44) can be further transformed into
$$\min_{p_1, p_2, \ldots, p_{\tilde{m}}} \mathcal{L}_{\mathrm{upper}}, \tag{46a}$$
$$\text{s.t.}\quad \{p_1, p_2, \ldots, p_{\tilde{m}}\} \subset \tilde{\mathcal{L}}_{\tilde{n}}, \quad \sum_{i=1}^{\tilde{m}}\mathrm{Params}(p_i) \geq \mathcal{P}. \tag{46b}$$
However, the surrogate objective (46a) also remains NP-hard. Note that each term in (46a) represents the MSE between a pruned or unpruned network and the same network with one additional layer pruned. We leverage the additive property of network layers, whereby the output distortion caused by multiple perturbations can be approximated as the sum of the distortions caused by each individual perturbation. This additivity is formulated as [34], [35]
$$\mathbb{E}\|\boldsymbol{\varepsilon}(p_1, \ldots, p_{i-1}, p_i) - \boldsymbol{\varepsilon}(p_1, \ldots, p_{i-1})\|_2^2 \approx \mathbb{E}\|\boldsymbol{\varepsilon}(p_1, \ldots, p_{i-2}, p_i) - \boldsymbol{\varepsilon}(p_1, \ldots, p_{i-2})\|_2^2 \approx \ldots \approx \mathbb{E}\|\boldsymbol{\varepsilon}(p_1, p_i) - \boldsymbol{\varepsilon}(p_1)\|_2^2 \approx \mathbb{E}\|\boldsymbol{\varepsilon}(p_i) - \boldsymbol{\varepsilon}_{\mathrm{tea}}\|_2^2. \tag{47}$$
Leveraging (47), the surrogate objective in (46) can be further approximated as
$$\min_{p_1, p_2, \ldots, p_{\tilde{m}}} \sum_{i=1}^{\tilde{m}}\mathbb{E}\left\|\boldsymbol{\varepsilon}_{\mathrm{tea}} - \boldsymbol{\varepsilon}(p_i)\right\|_2^2, \tag{48a}$$
$$\text{s.t.}\quad \{p_1, p_2, \ldots, p_{\tilde{m}}\} \subset \tilde{\mathcal{L}}_{\tilde{n}}, \quad \sum_{i=1}^{\tilde{m}}\mathrm{Params}(p_i) \geq \mathcal{P}. \tag{48b}$$
Based on the approximate objective (48), the term $\mathbb{E}\|\boldsymbol{\varepsilon}_{\mathrm{tea}} - \boldsymbol{\varepsilon}(p_i)\|_2^2$ acts as the criterion for pruning. Therefore, we only need to compute $\mathbb{E}\|\boldsymbol{\varepsilon}_{\mathrm{tea}} - \boldsymbol{\varepsilon}(\tilde{l}_i)\|_2^2$, the output loss between the original network and the network with only the $\tilde{l}_i$-th layer pruned, for each layer $\tilde{l}_i \in \tilde{\mathcal{L}}_{\tilde{n}}$, which has a time complexity of $O(\tilde{n})$, thereby transforming the problem into a variant of the 0-1 Knapsack problem.
Such 0-1 Knapsack problems can be solved with the classical dynamic programming algorithm. Our designed one-shot layer pruning algorithm for the CGDM is summarized in Algorithm 3. The total time complexity of Algorithm 3 is $O(\tilde{n}U/\bar{s} + \tilde{n}C)$ and the storage complexity is $O(C)$, where $U$ is the number of training samples, $\bar{s}$ is the number of parallel processing processes, and $C$ is the Knapsack capacity.

B. Fine-Tuning LiCGDM with Multi-Objective Distillation

Typically, the performance of the CGDM may degrade after certain layers are removed from the teacher network. To address this, the pruned CGDM, referred to as LiCGDM, requires further weight readjustment to restore its performance. We therefore enhance the reconstruction performance of the LiCGDM by introducing a re-weighting strategy based on knowledge distillation. Specifically, the overall retraining of the LiCGDM combines one task objective and two knowledge distillation objectives:
$$\mathcal{L}_{\mathrm{KD}} = \mathcal{L}_{\mathrm{Task}} + \lambda_O\mathcal{L}_{\mathrm{OKD}} + \lambda_F\mathcal{L}_{\mathrm{FKD}}, \tag{49}$$
where
$$\mathcal{L}_{\mathrm{Task}} = \mathbb{E}_{\dot{\mathbf{G}},\mathbf{G}_t,\boldsymbol{\varepsilon}_t}\left\|\boldsymbol{\varepsilon}_t - \boldsymbol{\varepsilon}_S(\dot{\mathbf{G}},\mathbf{G}_t,t)\right\|_2^2, \tag{50}$$
$$\mathcal{L}_{\mathrm{OKD}} = \mathbb{E}_{\dot{\mathbf{G}},\mathbf{G}_t,t}\left\|\boldsymbol{\varepsilon}_T(\dot{\mathbf{G}},\mathbf{G}_t,t) - \boldsymbol{\varepsilon}_S(\dot{\mathbf{G}},\mathbf{G}_t,t)\right\|_2^2, \tag{51}$$
$$\mathcal{L}_{\mathrm{FKD}} = \sum_i\mathbb{E}_{\dot{\mathbf{G}},\mathbf{G}_t,t}\left\|\mathcal{F}_T^i(\dot{\mathbf{G}},\mathbf{G}_t,t) - \mathcal{F}_S^i(\dot{\mathbf{G}},\mathbf{G}_t,t)\right\|_2^2. \tag{52}$$
Here $\boldsymbol{\varepsilon}_T(\dot{\mathbf{G}},\mathbf{G}_t,t)$ and $\boldsymbol{\varepsilon}_S(\dot{\mathbf{G}},\mathbf{G}_t,t)$ represent the outputs of the teacher model (the frozen CGDM) and the student model (the LiCGDM), respectively, and $\mathcal{F}_T^i(\dot{\mathbf{G}},\mathbf{G}_t,t)$ and $\mathcal{F}_S^i(\dot{\mathbf{G}},\mathbf{G}_t,t)$ denote the feature maps at the end of the $i$-th stage of the CGDM and LiCGDM, respectively. Without any hyperparameter tuning, we set both $\lambda_O$ and $\lambda_F$ to 1.

V. NUMERICAL EXPERIMENT

In this
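The layer-selection step (line 8 of Algorithm 3) reduces to a 0-1 Knapsack variant: choose layers whose parameter counts ("weights") reach the pruning budget while keeping the total output distortion ("values") minimal. A minimal DP sketch; the per-layer numbers below are illustrative, not from the paper.

```python
def prune_selection(values, weights, budget):
    """Choose a subset of layers minimizing total distortion (values)
    subject to sum(weights) >= budget: a 0-1 covering-knapsack DP.
    dp[c] = minimal distortion with at least c parameters pruned (capped at budget)."""
    INF = float("inf")
    dp = [INF] * (budget + 1)
    dp[0] = 0.0
    choice = [[] for _ in range(budget + 1)]
    for i, (v, w) in enumerate(zip(values, weights)):
        # Iterate capacities downward so each layer is used at most once.
        for c in range(budget, -1, -1):
            if dp[c] == INF:
                continue
            nc = min(budget, c + w)
            if dp[c] + v < dp[nc]:
                dp[nc] = dp[c] + v
                choice[nc] = choice[c] + [i]
    return dp[budget], choice[budget]

# Illustrative per-layer distortions E||eps_tea - eps(p_i)||^2 and parameter counts.
values = [0.9, 0.1, 0.4, 0.2, 0.8]
weights = [5, 3, 4, 2, 6]
best_cost, pruned_layers = prune_selection(values, weights, budget=7)
```

For this toy instance the optimum prunes layers 1 and 2 (total weight 7, total distortion 0.5), matching an exhaustive check over all subsets.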
section, the implementation details are first introduced. Then, we analyze the convergence and complexity of the proposed model under different hyperparameter settings. Finally, we comprehensively evaluate the proposed approach in terms of reconstruction accuracy and knowledge transfer capability, and compare it against several state-of-the-art methods.

A. Implementation Details

i) Wireless Communication Scenario Setup: The layout of the communication scenario is illustrated in Fig. 4. Specifically, we consider a massive MIMO-OFDM system operating within a communication area $\mathcal{A}$ measuring 128 m $\times$ 128 m, where the BS is equipped with an $8\times 8$ UPA with half-wavelength spacing. The ground-truth channels are generated using the widely adopted QuaDRiGa generator (version 2.6.1) [36], which employs a geometry-based stochastic channel modeling approach to simulate realistic radio channel impulse responses for mobile radio networks. Meanwhile, we consider the 5G NR typical urban micro-cell scenario "3GPP 38.901 UMa NLOS", which encompasses both LOS and NLOS physical propagation environments. The relevant simulation parameters are summarized in Table I.

ii) CF Data Generation: In the aforementioned wireless communication scenario, we set $\sigma_{HR} = 128$ and utilize a sampling interval of $\Delta x = \Delta y = 1$ m to sample the target area $\mathcal{A}$ along with its corresponding channel power values, resulting in the HR CF, denoted as $\mathbf{G}_{HR,u}$. For the highly challenging $\times 4$ SR CF reconstruction task, $\sigma_{LR}$ is set to 32, meaning that the sampling interval for the LR CF is $\Delta x = \Delta y = 4$ m, thereby obtaining the LR CF defined as $\mathbf{G}_{LR,u}$. Similarly, we generate 6,000 pairs of CF samples, denoted as $\{\mathbf{G}_{HR,u},\mathbf{G}_{LR,u}\}_{u=1}^{6000}$, by running QuaDRiGa simulations with different BS locations $(x,y)$, where $x$ and $y$ are randomly chosen integers in the range $[1,128]$.
This data generation method not only preserves the diversity of the dataset, capturing the complexity and dynamic variability inherent in wireless communication environments, but also further validates the generalization performance of the proposed model. These raw samples are subsequently divided into training and testing sets in a 5:1 ratio. To enhance the efficiency of the training process, we apply min-max normalization to the raw CF, i.e.,
$$\mathbf{G}'(e,\Lambda_{i,j}) = \frac{\mathbf{G}(e,\Lambda_{i,j}) - \min(\mathbf{G}(e,\Lambda_{i,j}))}{\max(\mathbf{G}(e,\Lambda_{i,j})) - \min(\mathbf{G}(e,\Lambda_{i,j}))}. \tag{53}$$

iii) Training Strategy and Model Configuration: At the hardware level, the proposed CGDM is trained on 2 Nvidia RTX-4090 GPUs, each with 24 GB of memory, and tested on a single Nvidia RTX-4090 GPU with 24 GB of memory. At the algorithm level, we employ the Adam optimizer with a learning rate of $5\times 10^{-5}$ for model parameter updates over 500,000 iterations, and the batch size is set to 16. Starting from the 5,000th iteration, we introduce the exponential moving average algorithm [23], with the decay factor set to 0.9999. Additionally, we incorporate dropout, with the dropout rate configured at 0.1. The forward diffusion steps $T$ are set to 1000, and the diffusion noise level adheres to a linear variance schedule ranging from $\beta_1 = 10^{-6}$ to $\beta_T = 10^{-2}$. To ensure the model's generalization capability, we forgo checkpoint selection on the CGDM and utilize only the most recent checkpoint. More detailed hyperparameter settings for the model are presented in Table I.

TABLE I
WIRELESS SYSTEM AND MODEL SETUP PARAMETERS

| Parameter | Value |
| --- | --- |
| Size of the interested area $\mathcal{A}$ | 128 m $\times$ 128 m |
| Number of BS antennas | $N_{r,v}\times N_{r,h} = 8\times 8$ |
| Center frequency | 2.4 GHz |
| Subcarrier spacing | $\Delta f = 15$ kHz |
| Subcarriers | $N_c = 512$ |
| Active subcarriers | $N_k = 300$ |
| UE velocity | 0.3 m/s |
| BS height | 25 m |
| UE height | 1.5 m |
| Number of base channels | $c_1 = 64$ |
| Number of integrated Res+ and self-attention blocks | $N_{RA} = 2$ |
| Channel number multiplier | $\bar{\eta} = 1:2:4:8:16$ |
| EMA rate | $e_r = 0.9999$ |
| Learning rate | $lr = 5\times 10^{-5}$ |
| Batch size | 16 |

Fig. 4. The layout of the massive MIMO-OFDM system.

iv) Performance Evaluation Metrics: For a fair comparison, we employ four widely adopted metrics, namely normalized mean squared error (NMSE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM). These metrics are defined as follows:
$$\mathrm{NMSE} = \frac{\sum_{i=0}^{N_x}\sum_{j=0}^{N_y}\|\hat{\mathbf{G}}(e,\Lambda_{i,j}) - \mathbf{G}'(e,\Lambda_{i,j})\|_2^2}{\sum_{i=0}^{N_x}\sum_{j=0}^{N_y}\|\mathbf{G}'(e,\Lambda_{i,j})\|_2^2},$$
$$\mathrm{MSE} = \frac{1}{N_x\times N_y}\sum_{i=0}^{N_x}\sum_{j=0}^{N_y}\|\hat{\mathbf{G}}(e,\Lambda_{i,j}) - \mathbf{G}'(e,\Lambda_{i,j})\|_2^2,$$
$$\mathrm{PSNR} = 20\log_{10}\!\left(\frac{255}{\sqrt{\mathrm{MSE}}}\right), \qquad \mathrm{SSIM} = \frac{(2u_{\hat{\mathbf{G}}}u_{\mathbf{G}'} + C_1)(2\delta_{\hat{\mathbf{G}}\mathbf{G}'} + C_2)}{(u_{\hat{\mathbf{G}}}^2 + u_{\mathbf{G}'}^2 + C_1)(\delta_{\hat{\mathbf{G}}}^2 + \delta_{\mathbf{G}'}^2 + C_2)},$$
where $N_x = N_y = \sigma$, and $\hat{\mathbf{G}}(e,\Lambda_{i,j})$ and $\mathbf{G}'(e,\Lambda_{i,j})$ represent the predicted channel power and the ground-truth channel power, respectively. Here $\mathbf{G}'$ is the (normalized) ground-truth CF and $\hat{\mathbf{G}}$ is the reconstructed HR CF; $u_{\hat{\mathbf{G}}}$ and $u_{\mathbf{G}'}$ are the means and $\delta_{\hat{\mathbf{G}}}^2$ and $\delta_{\mathbf{G}'}^2$ are the variances of $\hat{\mathbf{G}}$ and $\mathbf{G}'$, respectively, $\delta_{\hat{\mathbf{G}}\mathbf{G}'}$ is the covariance of $\hat{\mathbf{G}}$ and $\mathbf{G}'$, and $C_1$ and $C_2$ are nonzero constants.

B. Experiment Results

i) Convergence and Complexity Analysis: According to the proposed CGDM architecture shown in Fig. 3, the performance of the CGDM relies on the number $c_1$ of base channels in the feature maps within the latent space, as well as the number $N_{RA}$ of integrated modules that combine the Res+ block and the self-attention block.
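The evaluation metrics can be sketched as follows. A minimal numpy sketch: the SSIM constants follow the common convention $C_1 = (0.01L)^2$, $C_2 = (0.03L)^2$ with dynamic range $L$, which is an assumption here since the paper only states that $C_1$, $C_2$ are nonzero constants, and the SSIM is computed over a single global window rather than sliding windows.

```python
import numpy as np

def nmse(pred, truth):
    return np.sum((pred - truth) ** 2) / np.sum(truth ** 2)

def mse(pred, truth):
    return np.mean((pred - truth) ** 2)

def psnr(pred, truth, peak=255.0):
    return 20.0 * np.log10(peak / np.sqrt(mse(pred, truth)))

def ssim_global(pred, truth, L=1.0):
    """Single-window SSIM over the whole map (no sliding window)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_p, mu_t = pred.mean(), truth.mean()
    var_p, var_t = pred.var(), truth.var()
    cov = ((pred - mu_p) * (truth - mu_t)).mean()
    return ((2 * mu_p * mu_t + c1) * (2 * cov + c2)
            / ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2)))

rng = np.random.default_rng(0)
truth = rng.uniform(size=(128, 128))            # a normalized CF, as in (53)
pred = truth + 0.01 * rng.normal(size=truth.shape)
```

A perfect reconstruction yields NMSE = 0 and SSIM = 1, while small additive noise keeps PSNR high and SSIM close to 1.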
Additionally, during the model training process, the depth of the feature maps, namely the number of channels, also affects the CGDM's ability to represent features. The channel number multiplier is defined as η̄ = c_1 : c_2 : ... : c_n̄, where n̄ is related to the number of down-sampling operations. Consequently, we analyze the convergence and complexity of the CGDM under different influencing factors. Note that the training data used for the analysis in this subsection are all derived from the 32×32 → 128×128 HR CF reconstruction task. The default parameter configuration for the CGDM is c_1 = 64, η̄ = 1 : 2 : 4 : 8 : 16, and N_RA = 2. Any parameters not explicitly mentioned in the following analysis are assumed to be set to these values.

Fig. 5. Convergence analysis of the CGDM under different hyperparameter settings: (a) represents different base channels c_1 in the feature maps within the latent space; (b) shows different channel number multipliers η̄; (c) illustrates varying numbers N_RA of integrated Res+ and self-attention blocks.

As shown in Fig. 5 (a), the variation of the loss function for the CGDM with different numbers c_1 of base channels is presented over 500 training epochs. It can be observed that the CGDM loss corresponding to c_1 ∈ {8, 16, 32, 64, 128} decreases to a relatively low value, specifically on the order of 10^-6, within the first 100 epochs. For better clarity, we further zoom in on the loss between epochs 490 and 500, where the loss for each c_1 fluctuates on the order of 10^-7, indicating very slight oscillation. Additionally, an interesting observation is that by appropriately increasing c_1, the overall amplitude of the loss oscillations during CGDM training decreases, while the loss itself also reduces. Therefore, setting c_1 = 64 is a suitable choice for our task.

As shown in Fig. 5 (b), the variation of the loss function for the CGDM with different channel number multipliers η̄ over 500 training epochs is presented. The CGDM, with different η̄ configurations, converges to a minimal value on the order of 10^-6 within the first 100 epochs. Additionally, between epochs 490 and 500, the variation remains within a narrow range, on the order of 10^-7, indicating minimal fluctuation. However, a shallower channel number multiplier, such as η̄ = 1 : 2 : 4, tends to cause relatively larger fluctuations in the loss value. While a deeper channel number multiplier can progressively extract more high-level features, it also introduces challenges in training and optimization, as seen with η̄ = 1 : 2 : 4 : 8 : 16 : 32. Therefore, setting η̄ = 1 : 2 : 4 : 8 : 16 is a reasonable choice for our task.

Fig. 5 (c) shows the variation of the CGDM loss function as the number of integrated Res+ and self-attention modules increases.
Overall, for the CGDM with N_RA set to {1, 2, 3, 4}, the loss value decreases to a small value within the first 500 epochs, with minimal fluctuation, and tends to converge. To maintain the model's ability to represent features and ensure effective interaction between local and global features, we choose N_RA = 2 as a suitable option. The complexity of a large AI model is primarily determined by two key factors: the number of model parameters and the number of floating point operations (FLOPs). For clarity, we visualize the impact of varying the hyperparameters N_RA, c_1, and η̄ on the complexity of the CGDM. Specifically, Fig. 6 (a) illustrates the variation in CGDM complexity under different settings of the number N_RA of integrated Res+ and attention modules, as well as channel number multipliers η̄, with c_1 fixed at 64. Fig. 6 (b) shows the variation in CGDM complexity under different settings of the base channel number c_1 and channel number multiplier η̄, with N_RA fixed at 2. Overall, appropriately increasing these parameters facilitates the extraction of higher-level features and multi-scale fusion in the CGDM, while also raising the hardware requirements for model deployment. Therefore, there is a trade-off between CGDM reconstruction performance and complexity. Based
on the convergence and complexity analysis, we set c_1 = 64, N_RA = 2, and η̄ = 1 : 2 : 4 : 8 : 16 as the default configuration for subsequent experiments, with the corresponding parameter count of 221.21 M and FLOPs of 54.58 G.

Fig. 6. Analysis of the computational complexity (FLOPs in Giga) and storage complexity (Parameters in Millions) of CGDM under different parameter settings. (a) shows the impact of different settings for N_RA and η̄ on the CGDM complexity when c_1 is set to 64. (b) illustrates the impact of different settings for c_1 and η̄ on the CGDM complexity when N_RA is set to 2.

Fig. 7. Qualitative comparison in the ×4 HR CF reconstruction task (panels: SRGAN-MSE, SRGAN, C-VAE, DRRN, CGDM, Ground Truth). The first row displays the reconstruction results of the proposed CGDM and baseline models. For clarity, the second row shows zoomed-in views of the regions highlighted by the blue boxes.
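To see why c_1 and η̄ dominate the storage complexity, consider a back-of-the-envelope parameter count for a toy encoder with one 3×3 convolution per scale. This simplified model (single-channel input, no attention or residual blocks) is ours for illustration only and is not the actual CGDM architecture.

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of a single k x k convolution layer."""
    return c_in * c_out * k * k + c_out

def stack_params(c1, multipliers, in_ch=1, k=3):
    """Parameter count of a toy encoder with one k x k convolution per
    down-sampling level, whose width at level i is c1 * multipliers[i]."""
    total, prev = 0, in_ch
    for m in multipliers:
        width = c1 * m
        total += conv_params(prev, width, k)
        prev = width
    return total

# Doubling the base channel number roughly quadruples the parameter
# count of the deep layers, mirroring the super-linear growth of the
# storage complexity reported in Fig. 6 (b).
small = stack_params(32, [1, 2, 4, 8, 16])
large = stack_params(64, [1, 2, 4, 8, 16])
```

With η̄ = 1 : 2 : 4 : 8 : 16, moving from c_1 = 32 to c_1 = 64 multiplies the toy count by roughly 4, consistent with treating c_1 = 64 as a compromise between capacity and deployment cost.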
ii) Quantitative and Qualitative Comparison with Baselines: To ensure a fair comparison, we apply the same training strategy as CGDM to the baselines, conducting weight learning and testing on the same ×4 CF dataset. To comprehensively evaluate the performance of the proposed CGDM and LiCGDM (with 35% pruning), we compare them with several state-of-the-art models, including SRGAN-MSE [21], SRGAN [21], C-VAE [26], and DRRN [37]. Table II provides a quantitative performance analysis of the proposed model compared to baseline models on the ×4 HR CF reconstruction task. The proposed CGDM demonstrates competitive performance across four metrics (NMSE, MSE, PSNR, and SSIM) and outperforms the baselines. Specifically, compared to SRGAN, CGDM reduces NMSE and MSE by 65.236×10^-5 and 9.542, respectively, while improving PSNR and SSIM by 11.747 and 0.006, respectively. LiCGDM is a student model derived from CGDM, obtained by pruning 35% of the parameters and fine-tuning the remaining ones. Compared to CGDM, LiCGDM exhibits a slight decrease in performance metrics, but the degradation remains within an acceptable range. This confirms that the proposed lightweight approach effectively compresses CGDM, enabling its practical deployment on personal computers and even mobile devices. For the qualitative analysis, we visualize the results of the proposed CGDM and baselines on the 32×32 → 128×128 HR CF reconstruction task, as shown in Fig. 7. As observed, both our model and DRRN produce clearer and more accurate pattern edges, while other baselines tend to generate blurrier results that deviate from the ground-truth target. iii) Evaluation of
Knowledge Transfer and Generalization Ability: Transferring a trained model to an unseen task is considered zero-shot generation. Namely, the neural network learns and updates its weights solely on the ×4 SR CF dataset, without exposure to the ×16, ×8, and ×2 SR CF datasets. It relies on the trained model to perform SR CF reconstruction tasks for magnification factors that have not been encountered before. This is an effective way to assess the model's knowledge transferability and generalization ability.

TABLE II
QUANTITATIVE EVALUATION OF THE PROPOSED APPROACH AND BASELINES ON NMSE, MSE, PSNR, AND SSIM FOR THE ×4 SR CF RECONSTRUCTION TASK

Task: ×4 (32² → 128²)
Method          | NMSE (×10^-5) | MSE     | PSNR    | SSIM
SRGAN-MSE [21]  | 62.049        | 9.060   | 38.610  | 0.987
SRGAN [21]      | 69.942        | 10.220  | 38.104  | 0.987
C-VAE [26]      | 59.833        | 8.701   | 39.022  | 0.989
DRRN [37]       | 65.978        | 8.656   | 41.009  | 0.988
CGDM (Ours)     | 4.706↓        | 0.678↓  | 49.851↑ | 0.993↑
LiCGDM (Ours)   | 6.653         | 0.964   | 49.726  | 0.993

Table III summarizes the zero-shot results of the proposed model and baselines on the ×16, ×8, and ×2 reconstruction tasks. It can be observed that across the three unseen SR CF reconstruction tasks, SRGAN and SRGAN-MSE perform slightly worse, while C-VAE achieves slightly better results. We think this is due to the fact that GAN-based models need to iteratively minimize the generator loss and maximize the discriminator loss, which may lead to mode collapse and prevent the models from fully capturing the diversity of the true distribution. In contrast, the VAE explicitly estimates the latent parameters by maximizing a lower bound on the log-likelihood. Its mathematical formulation ensures a tractable likelihood for evaluation and enables an explicit inference network. It is worth noting that both the proposed CGDM and LiCGDM outperform the baselines across multiple performance metrics, achieving impressive reconstruction results.
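The SSIM values reported in Tables II and III follow the formula given in the metrics section. A minimal global (single-window) version can be sketched as follows; practical SSIM implementations average over local windows, so this is a simplification, and the constants C1 and C2 below are the common defaults for data scaled to [0, 1], not values stated in the paper.

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM following the formula in the
    metrics section; c1, c2 are small stabilizing constants."""
    ux, uy = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - ux) * (y - uy)).mean()
    num = (2 * ux * uy + c1) * (2 * cov + c2)
    den = (ux ** 2 + uy ** 2 + c1) * (vx + vy + c2)
    return num / den
```

By construction the score is 1 for identical maps and decreases as the means, variances, or covariance of the reconstructed and ground-truth CFs diverge.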
We attribute this to CGDM's robust ability to learn implicit priors and its capacity to model complicated data distributions. By leveraging source information and implicit priors, CGDM achieves impressive HR CF reconstruction results.

TABLE III
ZERO-SHOT PERFORMANCE COMPARISON OF THE PROPOSED APPROACH AND BASELINES ON ×16, ×8, AND ×2 HR CF RECONSTRUCTION TASKS

Zero Shot        | Method          | NMSE (×10^-5) | MSE      | PSNR    | SSIM
×16 (8² → 128²)  | SRGAN-MSE [21]  | 3031.40       | 437.077  | 21.812  | 0.741
                 | SRGAN [21]      | 3440.20       | 497.045  | 21.270  | 0.728
                 | C-VAE [26]      | 660.69        | 96.170   | 28.505  | 0.860↑
                 | DRRN [37]       | 1127.20       | 157.350  | 26.612  | 0.844
                 | CGDM (Ours)     | 588.82↓       | 85.667↓  | 29.023↑ | 0.855
                 | LiCGDM (Ours)   | 601.78        | 87.572   | 28.192  | 0.855
×8 (16² → 128²)  | SRGAN-MSE [21]  | 560.88        | 80.357   | 29.132  | 0.867
                 | SRGAN [21]      | 695.55        | 99.956   | 28.220  | 0.858
                 | C-VAE [26]      | 103.68        | 15.038   | 36.492  | 0.986↑
                 | DRRN [37]       | 265.94        | 36.083   | 33.764  | 0.940
                 | CGDM (Ours)     | 67.84↓        | 9.866↓   | 38.393↑ | 0.957
                 | LiCGDM (Ours)   | 78.36         | 11.413   | 37.719  | 0.957
×2 (64² → 128²)  | SRGAN-MSE [21]  | 1381.70       | 203.691  | 25.544  | 0.917
                 | SRGAN [21]      | 1343.10       | 199.048  | 25.709  | 0.914
                 | C-VAE [26]      | 62.50         | 9.081    | 38.852  | 0.989
                 | DRRN [37]       | 66.83         | 8.727    | 41.271  | 0.990
                 | CGDM (Ours)     | 15.55↓        | 2.259↓   | 44.787↑ | 0.994↑
                 | LiCGDM (Ours)   | 21.36         | 3.113    | 44.415  | 0.994

VI. CONCLUSION
To facilitate the paradigm shift from environment-unaware to intelligent and environment-aware communication, this paper introduced the concept of CF twins. Particularly, we treated the coarse-grained and fine-grained CFs as physical and virtual twin objects, respectively, and designed a CGDM as the core computational unit of the CF twins to model their connection.
The trained CGDM, combining the learned prior distribution of the target data and side information, generates fine-grained CFs through a series of iterative refinement steps. Additionally, to facilitate the practical deployment of the CGDM, we introduced a one-shot pruning approach and employed multi-objective knowledge distillation techniques to minimize performance degradation. Experimental results showed that the proposed model achieved competitive reconstruction accuracy in fine-grained CF reconstruction tasks with magnification factors of ×2, ×4, ×8, and ×16, while also demonstrating exceptional knowledge transfer capabilities.

REFERENCES
[1] Z. Jin, L. You, X. Li, Z. Gao, Y. Liu, X.-G. Xia, and X. Gao, "Ultra-grained channel fingerprint construction via conditional generative diffusion models," in Proc. IEEE Conf. Comput. Commun. (INFOCOM), London, UK, May 2025, pp. 1–6.
[2] F. Liu, Y. Cui, C. Masouros, J. Xu, T. X. Han, Y. C. Eldar, and S. Buzzi, "Integrated sensing and communications: Toward dual-functional wireless networks for 6G and beyond," IEEE J. Sel. Areas Commun., vol. 40, no. 6, pp. 1728–1767, Jun. 2022.
[3] C.-X. Wang, X. You, X. Q. Gao, X. Zhu, Z. Li, C. Zhang, H. Wang, Y. Huang, Y. Chen, H. Haas, J. S. Thompson, E. G. Larsson, M. D. Renzo, W. Tong, P. Zhu, X. Shen, H. V. Poor, and L. Hanzo, "On the road to 6G: Visions, requirements, key technologies and testbeds," IEEE Commun. Surv. Tutor., vol. 25, no. 2, pp. 905–974, Feb. 2023.
[4] L. U. Khan, W. Saad, D. Niyato, Z. Han, and C. S. Hong, "Digital-twin-enabled 6G: Vision, architectural trends, and future directions," IEEE Commun. Mag., vol. 60, no. 1, pp. 74–80, Jan. 2022.
[5] Z. Jin, L. You, H. Zhou, Y. Wang, X. Liu, X. Gong, X. Gao, D. W. K. Ng, and X.-G. Xia, "GDM4MMIMO: Generative diffusion models for massive MIMO communications," arXiv preprint arXiv:2412.18281, 2024.
[6] R. Zhang, L. Cheng, S. Wang, Y. Lou, Y. Gao, W. Wu, and D. W. K.
Ng, "Integrated sensing and communication with massive MIMO: A unified tensor approach for channel and target parameter estimation," IEEE Trans. Wirel. Commun., vol. 23, no. 8, pp. 8571–8587, Aug. 2024.
[7] Z. Jin, L. You, J. Wang, X.-G. Xia, and X. Gao, "An I2I inpainting approach for efficient channel knowledge map construction," IEEE Trans. Wirel. Commun., vol. 24, no. 2, pp. 1415–1429, Feb. 2025.
[8] Y. Zeng, J. Chen, J. Xu, D. Wu, X. Xu, S. Jin, X. Q. Gao, D. Gesbert, S. Cui, and R. Zhang, "A tutorial on environment-aware communications via channel knowledge map for 6G," IEEE Commun. Surv. Tuts., vol. 26, no. 3, pp. 1478–1519, Feb. 2024.
[9] D. Wu, Y. Zeng, S. Jin, and R. Zhang, "Environment-aware hybrid beamforming by leveraging channel knowledge map," IEEE Trans. Wirel. Commun., vol. 23, no. 5, May 2024.
[10] B. Zhang and J. Chen, "Constructing radio maps for UAV communications via dynamic resolution virtual obstacle maps," in Proc. IEEE Int. Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Atlanta, GA, USA, May 2020, pp. 1–5.
[11] P. Zeng and J. Chen, "UAV-aided joint radio map and 3D environment reconstruction using deep learning approaches," in Proc. IEEE Int. Conf. Commun. (ICC), Seoul, Korea, Aug. 2022, pp. 5341–5346.
[12] Z. Yang, Z. Zhou, and Y. Liu, "From RSSI to CSI: Indoor localization via channel response," ACM Comput. Surveys, vol. 46, no. 2, pp. 1–32, Dec. 2013.
[13] C. Zhan, H. Hu, Z. Liu, J. Wang, N. Cheng, and S. Mao, "Aerial video streaming over 3D cellular networks: An environment and channel knowledge map approach," IEEE Trans. Wirel. Commun., vol. 23, no. 2, pp. 1432–1446, Feb. 2024.
[14] H. Li, P. Li, J. Xu, J. Chen, and Y. Zeng, "Derivative-free placement optimization for multi-UAV wireless networks with channel knowledge map," in Proc. IEEE Int. Conf. Commun. Workshops (ICC Wkshps), Seoul, South Korea, May 2022, pp. 1029–1034.
[15] S. Zhang and R. Zhang, "Radio map-based 3D path planning for cellular-connected UAV," IEEE Trans. Wirel. Commun., vol. 20, no. 3, pp. 1975–1989, Mar. 2021.
[16] W. Yue, J. Li, C. Li, N. Cheng, and J. Wu, "A channel knowledge map-aided personalized resource allocation strategy in air-ground integrated mobility," IEEE Trans. Intell. Transp. Syst., vol. 25, no. 11, pp. 18734–18747, Nov. 2024.
[17] X. Xu and Y. Zeng, "How much data is needed for channel knowledge map construction?" IEEE Trans. Wirel. Commun., vol. 23, no. 10, pp. 13011–13021, Oct. 2024.
[18] D. Lee, D. Berberidis, and G. B. Giannakis, "Adaptive Bayesian radio tomography," IEEE Trans. Signal Process., vol. 67, no. 8, pp. 1964–1977, Apr. 2019.
[19] R. Levie, Ç. Yapar, G. Kutyniok, and G. Caire, "RadioUNet: Fast radio map estimation with convolutional neural networks," IEEE Trans. Wirel. Commun., vol. 20, no. 6, pp. 4001–4015, Jun. 2021.
[20] S. Bakirtzis, J. Chen, K. Qiu, J. Zhang, and I.
Wassell, "EM DeepRay: An expedient, generalizable, and realistic data-driven indoor propagation model," IEEE Trans. Antennas Propag., vol. 70, no. 6, pp. 4140–4154, Jun. 2022.
[21] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 4681–4690.
[22] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-resolution image synthesis with latent diffusion models," in Proc. Conf. Comput. Vis. Pattern Recognit. (CVPR), New Orleans, LA, USA, Jun. 2022, pp. 10684–10695.
[23] J. Ho, A. Jain, and P. Abbeel, "Denoising diffusion probabilistic models," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), Dec. 2020, pp. 6840–6851.
[24] Z. Jin, L. You, D. W. K. Ng, X.-G. Xia, and X. Gao, "Near-field channel estimation for XL-MIMO: A deep generative model guided by side information," IEEE Trans. Cogn. Commun. Netw., early access, May 2025.
[25] H. Du, R. Zhang, Y. Liu, J. Wang, Y. Lin, Z. Li,
D. Niyato, J. Kang, Z. Xiong, S. Cui et al., "Enhancing deep reinforcement learning: A tutorial on generative diffusion models in network optimization," IEEE Commun. Surv. Tutor., May 2024.
[26] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," in Proc. Int. Conf. Learn. Represent. (ICLR), Banff, AB, Canada, Apr. 2014, pp. 1–14.
[27] J. Lovelace, V. Kishore, C. Wan, E. Shekhtman, and K. Q. Weinberger, "Latent diffusion for language generation," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), New Orleans, LA, USA, Dec. 2023, pp. 56998–57025.
[28] X. Liu, W. Wang, X. Gong, X. Fu, X. Gao, and X.-G. Xia, "Structured hybrid message passing based channel estimation for massive MIMO-OFDM systems," IEEE Trans. Veh. Technol., vol. 72, no. 6, pp. 7491–7507, Jun. 2023.
[29] Z. Jin, L. You, J. Wang, X.-G. Xia, and X. Q. Gao, "Channel knowledge map construction with Laplacian pyramid reconstruction network," in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Dubai, United Arab Emirates, Apr. 2024, pp. 1–6.
[30] Y. Song and S. Ermon, "Generative modeling by estimating gradients of the data distribution," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), Vancouver, BC, Canada, Dec. 2019, pp. 11918–11930.
[31] Z. Jin, L. You, D. W. K. Ng, X.-G. Xia, and X. Gao, "A generative denoising approach for near-field XL-MIMO channel estimation," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Cape Town, South Africa, Dec. 2024, pp. 1–6.
[32] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), Long Beach, CA, USA, Jun. 2017, pp. 6000–6010.
[33] Z. Jin, N. Xu, Y. Shang, and X. Yao, "Efficient capsule network with multi-subspace learning," in Proc. IEEE Int. Conf. Wireless Commun. Signal Process. (WCSP), Oct. 2021, pp. 1–5.
[34] D. Zhang, S. Li, C. Chen, Q. Xie, and H.
Lu, "Laptop-diff: Layer pruning and normalized distillation for compressing diffusion models," arXiv preprint arXiv:2404.11098, 2024.
[35] K. Xu, Z. Wang, X. Geng, M. Wu, X. Li, and W. Lin, "Efficient joint optimization of layer-adaptive weight pruning in deep neural networks," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Paris, Oct. 2023, pp. 17447–17457.
[36] S. Jaeckel, L. Raschkowski, K. Börner, and L. Thiele, "QuaDRiGa: A 3-D multi-cell channel model with time evolution for enabling virtual field trials," IEEE Trans. Antennas Propag., vol. 62, no. 6, pp. 3242–3256, Jun. 2014.
[37] Y. Tai, J. Yang, and X. Liu, "Image super-resolution via deep recursive residual network," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 3147–3155.
EnvCDiff: Joint Refinement of Environmental Information and Channel Fingerprints via Conditional Generative Diffusion Model

Zhenzhou Jin, Graduate Student Member, IEEE, Li You, Senior Member, IEEE, Xiang-Gen Xia, Fellow, IEEE, and Xiqi Gao, Fellow, IEEE

Abstract—The paradigm shift from environment-unaware communication to intelligent environment-aware communication is expected to facilitate the acquisition of channel state information for future wireless communications. Channel Fingerprint (CF), as an emerging enabling technology for environment-aware communication, provides channel-related knowledge for potential locations within the target communication area. However, due to the limited availability of practical devices for sensing environmental information and measuring channel-related knowledge, most of the acquired environmental information and CFs are coarse-grained, insufficient to guide the design of wireless transmissions. To address this, this paper proposes a deep conditional generative learning approach, namely a customized conditional generative diffusion model (CDiff). The proposed CDiff simultaneously refines environmental information and the CF, reconstructing a fine-grained CF that incorporates environmental information, referred to as EnvCF, from its coarse-grained counterpart. Experimental results show that the proposed approach significantly improves the performance of EnvCF construction compared to the baselines.

Index Terms—Environment-aware wireless communication, channel fingerprint, channel-related knowledge.

I. INTRODUCTION
Driven by the synergy of artificial intelligence (AI) and environmental sensing, 6G is poised to undergo a paradigm shift, evolving from environment-unaware communications to intelligent environment-aware ones [1].
Zhenzhou Jin, Li You, and Xiqi Gao are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China, and also with the Purple Mountain Laboratories, Nanjing 211100, China (e-mail: zzjin@seu.edu.cn; lyou@seu.edu.cn; xqgao@seu.edu.cn). Xiang-Gen Xia is with the Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA (e-mail: xxia@ee.udel.edu).

Channel fingerprint (CF) is an emerging enabling technology for environment-aware communications that provides location-specific channel knowledge for a potential base station (BS) in base-to-everything (B2X) pairs [1], [2]. Ideally, without considering the costs of sensing, computation, and storage, an ultra-fine-grained CF would encapsulate channel-related knowledge for all locations within the target communication area, such as channel gain and angle of arrival/departure, thereby alleviating the challenges of acquiring channel state information. By providing essential channel-related knowledge, CF has recently gained significant research attention for diverse applications, including object sensing, beamforming, and localization [1]–[3]. A fundamental challenge for the aforementioned CF-based emerging applications is the construction of a sufficiently fine-grained CF, which is essential to ensure the accurate acquisition of channel-related information at specific locations. Existing related works can generally be categorized into model-based and data-driven approaches. In terms of model-based approaches, the authors of [4] leverage prior assumptions about the physical propagation model along with partially measured channel data to estimate channel-related knowledge at potential locations. In [5], the authors utilize an analytical channel model to represent the spatial variability of the propagation environment, with channel modeling parameters estimated from measured data to reconstruct the CF.
In data-driven approaches, one straightforward method for CF construction is interpolation, with classic techniques including radial basis function (RBF) interpolation [6] and Kriging [7]. In addition, AI-based approaches for CF construction have recently emerged.
https://arxiv.org/abs/2505.07894v1
arXiv:2505.07894v1 [cs.NI] 12 May 2025

In [2], the authors transform the CF estimation task into an image-to-image inpainting problem and develop a Laplacian pyramid-based model to facilitate CF construction. In [8], [9], UNet is utilized to learn geometry-based and physics-based features in urban or indoor environments, enabling the construction of corresponding CFs. In [1], fully connected networks are used to predict channel knowledge based on 2D coordinate locations. It is evident that CF construction is primarily influenced by the wireless propagation environment, but the nodes and devices available for sensing environmental information and measuring location-specific channel knowledge are usually limited in practice. Most existing methods focus on constructing CFs by either leveraging partial channel knowledge or incorporating prior assumptions about propagation models and wireless environment characteristics. However, limited work has been dedicated to simultaneously refining environmental information and channel-related knowledge, that is, reconstructing a finer-grained CF that integrates environmental information, referred to as environmental CF (EnvCF), from a coarse-grained one.

This paper investigates the construction of a fine-grained EnvCF in scenarios where environmental information and channel knowledge are limited by the high cost and availability constraints of sensing and testing equipment. Specifically, we reformulate the task of fine-grained EnvCF construction as an image super-resolution (ISR) problem. The conditional distribution of SR outputs given low-resolution (LR) inputs typically follows a complicated parametric distribution, leading to suboptimal performance of most feedforward neural network-based regression algorithms in ISR tasks [10].
To this end, leveraging the powerful implicit prior learning capability of generative diffusion models (GDMs) [11], [12], we propose a conditional GDM (CDiff) to approximate the conditional distribution of the high-resolution (HR) EnvCF. Specifically, within the variational inference framework, we derive an evidence lower bound (ELBO) of the log-conditional marginal distribution of the observed HR EnvCF as the objective function. Furthermore, to make the transformation from a standard normal distribution to the target distribution more controllable, we incorporate the LR EnvCF as side information during the optimization of the denoising network. Simulation results show the superiority of the proposed approach.

II. ENVCF MODEL AND PROBLEM FORMULATION

In this section, we first present the channel gain and the EnvCF model. We then reformulate the fine-grained EnvCF reconstruction problem by aligning the objective function with the learning of the gradient of the log-conditional density.

A. EnvCF Model

Consider a wireless communication scenario within a target area of interest, $\mathcal{A} \subset \mathbb{R}^2$, where a base station (BS) serves $M$ user terminals (UTs), with their 2D position coordinates denoted as $\{\mathbf{x}_m\}_{m=1}^{M} \subset \mathcal{A}$. The attenuation of received signal power at a UT is widely attributed to the physical characteristics of the wireless propagation environment, such as the geometric contours of real-world streets and buildings in urban maps, represented by $E$. Key contributing factors include path loss along various propagation paths, reflections and diffractions caused by buildings, the ground, or other objects, waveguide effects in densely populated urban areas, and signal blockages from obstacles [1]. The relatively slow-varying components of these influencing factors collectively form the channel gain function, denoted as $g(E, \mathbf{x}_m)$, which reflects the large-scale
signal attenuation measured at the UTs located at $\{\mathbf{x}_m\}_{m=1}^{M} \subset \mathcal{A}$. Additionally, small-scale effects are typically modeled as a complex Gaussian random variable $h$ with unit variance. Without loss of generality, the baseband signal received by the UT at $\mathbf{x}_m$ can be represented as [2]
\[
y(E, \mathbf{x}_m) = \sqrt{g(E, \mathbf{x}_m)}\, h s + z(E, \mathbf{x}_m), \tag{1}
\]
where $s$ represents the transmitted signal with power $P_X$, and $z(E, \mathbf{x}_m)$ denotes the additive noise with a single-sided power spectral density of $N_0$. The average received energy per symbol can be expressed as
\[
P_Y = \frac{\mathbb{E}\big[|y(E, \mathbf{x}_m)|^2\big]}{B} = \frac{g(E, \mathbf{x}_m) P_X}{B} + N_0, \tag{2}
\]
(Fig. 1 residue: the figure panels show the conditional denoising network, built from Conv, Group Norm, Swish, Res Block, Self-Attention, Downsampling and Upsampling stages with time embeddings and skip connections, together with the Gaussian diffusion process, the training flow for the conditional denoising neural network, and the iterative refinement process conditioned on the LR EnvCF.)
Fig. 1. Schematic of the proposed CDiff workflow and the architecture of the conditional denoising neural network.

where $B$ denotes the signal bandwidth. The channel gain, in dB, for the UT located at $\mathbf{x}_m$ is defined as
\[
G(E, \mathbf{x}_m) = (P_Y)_{\mathrm{dB}} - (P_X)_{\mathrm{dB}}. \tag{3}
\]
The channel gain $G(E, \mathbf{x}_m)$ in (3) is primarily influenced by the propagation environment and the location of the UT. Assuming the target area of interest has a size of $D \times D$, we perform spatial discretization along both the X-axis and the Y-axis. Specifically, a resolution factor $\delta$ is defined, with the minimum spacing units for the spatial discretization process set as $\Delta x = D/\delta$ and $\Delta y = D/\delta$. Each spatial grid is denoted as $\Upsilon_{i,j}$, where $i = 1, 2, \dots, D/\Delta x$ and $j = 1, 2, \dots, D/\Delta y$, and the $(i,j)$-th spatial grid is given by
\[
\Upsilon_{i,j} := [i\Delta x, \; j\Delta y]^{\mathsf{T}}. \tag{4}
\]
Through the spatial discretization process, the physical propagation environment information $E$ of the target area $\mathcal{A}$ can be rearranged into a two-dimensional tensor, defined as $\mathbf{E}$, i.e., $[\mathbf{E}]_{i,j} = E(\Upsilon_{i,j})$. Similarly, the channel gains collected at potential UT locations within $\mathcal{A}$ are rearranged into a tensor with an image-like structure, referred to as the CF, defined as $[\mathbf{G}]_{i,j} = G([\mathbf{E}]_{i,j}, \Upsilon_{i,j})$. Then, the EnvCF, i.e., the CF integrated with the wireless propagation environment, is defined as
\[
[\mathbf{F}]_{i,j} = G([\mathbf{E}]_{i,j}, \Upsilon_{i,j}) + [\mathbf{E}]_{i,j}, \tag{5}
\]
where $\mathbf{E}$ represents the global propagation environment. Note that in our simulation process, $\mathbf{E}$ represents an urban map with the BS location, stored as a morphological 2D image. Binary pixel values of 0 and 1 are utilized to depict buildings and streets with various shapes and geometric layouts, as well as the location of the BS.
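As a toy illustration of the tensor construction in (5), a binary environment map and a gain map can be combined into an EnvCF tensor. The grid size, building placement, BS location, and the distance-based stand-in "gain" below are all made up for illustration; the paper computes the actual gain with WinProp.

```python
import numpy as np

# Toy sketch of the EnvCF construction in (5): [F]_{i,j} = G([E]_{i,j}, Y_{i,j}) + [E]_{i,j}.
delta = 8                          # resolution factor (the paper uses up to 256)
E = np.zeros((delta, delta))       # binary environment map: 1 marks a building pixel
E[2:4, 2:4] = 1.0                  # hypothetical building block
bs = (0, 0)                        # hypothetical BS grid location

ii, jj = np.meshgrid(np.arange(delta), np.arange(delta), indexing="ij")
dist = np.hypot(ii - bs[0], jj - bs[1]) + 1.0
G = 1.0 / dist                     # stand-in for the grayscale channel gain in [0, 1]

F = G + E                          # EnvCF tensor per (5)
assert F.shape == (delta, delta)
```

The resulting 2D tensor carries both the environment layout and the gain values, matching the image-like EnvCF structure described above.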
$E(\Upsilon_{i,j})$ represents the local propagation environment at each square meter (or each pixel). The channel gain $G([\mathbf{E}]_{i,j}, \Upsilon_{i,j})$ is computed using the professional channel simulation software WinProp [2], [8] and then converted into grayscale pixel values ranging from 0 to 1. Therefore, the EnvCF is modeled as a 2D environmental channel gain map, comprising both the propagation environment map and the channel gains at each UT location, as illustrated by one of the EnvCF samples presented in Fig. 1. It can be observed that when factors such as time and frequency are considered, the EnvCF model can be extended to a multi-dimensional tensor.

B. Problem Formulation

It is evident that a finer-grained EnvCF can provide more accurate information about the physical environment and channel gains, which is beneficial for wireless transmission design [13]. However, the EnvCF obtained by a practical BS is typically coarse-grained due to the limited availability of devices for collecting environmental information and channel knowledge at specific locations. Therefore, our task focuses on refining both environmental and channel gain information from a given coarse-grained EnvCF, particularly in scenarios constrained by sensing costs, implicit limitations, or security concerns.

Define a low-resolution (LR) factor $\delta_{LR}$ and a high-resolution (HR) factor $\delta_{HR}$. Correspondingly, the LR EnvCF and HR EnvCF are represented as $\mathbf{F}_{LR}$ and $\mathbf{F}_{HR}$, respectively, which are collected and rearranged by discretizing the target area into $\delta_{LR} \times \delta_{LR}$ and $\delta_{HR} \times \delta_{HR}$ grids, respectively. Then, our
task is to establish a mapping capable of reconstructing an HR EnvCF from a given LR EnvCF, expressed as
\[
\mathcal{M}_{\Theta} : \mathbf{F}_{LR,n} \to \mathbf{F}_{HR,n}, \quad \forall n \in \{1, 2, \dots, N\}, \tag{6}
\]
where $\Theta$ denotes the learnable parameters of the mapping $\mathcal{M}_{\Theta}$, while $N$ indicates the number of training samples. However, (6) is an underdetermined inverse problem. Given that the conditional distribution of HR outputs for a given LR input rarely adheres to a simple parametric distribution, most regression methods based on feedforward neural networks for (6) tend to struggle with high upscaling factors, often failing to reconstruct fine details accurately [10]. Fortunately, the GDM has proven effective in capturing the complex empirical distributions of target data. Specifically, if the implicit prior information of the HR EnvCF distribution, such as the gradient of the data log-density, can be effectively learned, it becomes possible to transition to the target EnvCF distribution through iterative refinement steps from a standard normal distribution, akin to Langevin dynamics. Meanwhile, the "noise" estimated in traditional GDM is equivalent to the gradient of the data log-density. Therefore, (6) can be further formulated as
\[
\arg\min_{\Theta} \; \mathbb{E}_{P(\mathbf{F}_{HR}, \mathbf{F}_{LR})} \Big[ \big\| \nabla \log P(\mathbf{F}_{HR} \mid \mathbf{F}_{LR}) - \nabla \log P_{\Theta}(\mathbf{F}_{HR} \mid \mathbf{F}_{LR}) \big\|_2^2 \Big], \tag{7a}
\]
\[
\text{s.t.} \quad \mathbf{x}_m \in \mathcal{A}, \; n \in \{1, 2, \dots, N\}, \tag{7b}
\]
where $\nabla \log P(\cdot)$ represents the gradient of the log-density, and $P_{\Theta}$ denotes the learned density. To this end, leveraging the powerful implicit prior learning capability of the GDM, we tailor the CDiff to solve (7), with the detailed implementation provided in Sec. III. For simplicity, $\mathbf{F}_{LR}$ and $\mathbf{F}_{HR}$ are represented by $\dot{\mathbf{F}}$ and $\mathbf{F}$, respectively, in the subsequent sections.

III. HR ENVCF RECONSTRUCTION VIA CDIFF

Depending on the resolution factors $\delta_{LR}$ and $\delta_{HR}$, the sensing nodes and measurement devices deployed in practice to collect channel knowledge and environmental information can acquire the corresponding LR EnvCF samples, $\dot{\mathbf{F}}_n$, and HR EnvCF samples, $\mathbf{F}_n$.
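A minimal sketch of one LR-HR pair for the mapping in (6), with $\delta_{HR} = 256$ and $\delta_{LR} = 64$ as in the later experiments. Obtaining the LR sample by plain grid decimation, and using a random array as a stand-in for the HR EnvCF, are our assumptions for illustration only; the paper only specifies the sampling intervals.

```python
import numpy as np

# Hypothetical LR/HR EnvCF pair for the mapping M_Theta in (6).
rng = np.random.default_rng(0)
delta_hr, delta_lr = 256, 64
F_hr = rng.random((delta_hr, delta_hr))   # stand-in HR EnvCF sample F_n

stride = delta_hr // delta_lr             # = 4, mirroring the 1 m vs. 4 m sampling intervals
F_lr = F_hr[::stride, ::stride]           # coarse-grained counterpart (the LR sample)

assert F_lr.shape == (delta_lr, delta_lr)
```

Each such pair plays the role of one training sample $(\dot{\mathbf{F}}_n, \mathbf{F}_n)$ for the super-resolution mapping.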
These samples form a paired LR-HR EnvCF dataset for training, denoted as $\mathcal{S} = \{(\dot{\mathbf{F}}_n, \mathbf{F}_n)\}_{n=1}^{N}$, which is generally sampled from an unknown distribution $p(\dot{\mathbf{F}}, \mathbf{F})$. In our task, the goal is to learn a parametric approximation of $p(\mathbf{F} \mid \dot{\mathbf{F}})$ through a directed iterative refinement process, guided by side information in the form of the LR EnvCF.

A. Initiation of Gaussian Diffusion Process with HR EnvCF

Denoting $\mathbf{F}_0 = \mathbf{F} \sim q(\mathbf{F})$, the GDM employs a fixed diffusion process $q(\mathbf{F}_{1:T} \mid \mathbf{F}_0)$, represented as a Markov chain where Gaussian noise is gradually introduced to the sample over $T$ steps [11]:
\[
q(\mathbf{F}_{1:T} \mid \mathbf{F}_0) = \prod_{t=1}^{T} q(\mathbf{F}_t \mid \mathbf{F}_{t-1}), \tag{8a}
\]
\[
q(\mathbf{F}_t \mid \mathbf{F}_{t-1}) = \mathcal{N}\big(\mathbf{F}_t; \sqrt{1-\beta_t}\, \mathbf{F}_{t-1}, \; \beta_t \mathbf{I}\big), \tag{8b}
\]
\[
\mathbf{F}_t = \sqrt{1-\beta_t}\, \mathbf{F}_{t-1} + \sqrt{\beta_t}\, \boldsymbol{\varepsilon}, \tag{8c}
\]
where $\{\beta_t \in (0,1)\}_{t=1}^{T}$ is a variance schedule that controls the noise level at each step, and $\boldsymbol{\varepsilon}$ denotes Gaussian noise following the distribution $\mathcal{N}(\boldsymbol{\varepsilon}; \mathbf{0}, \mathbf{I})$. Utilizing the reparameterization trick, $\mathbf{F}_t$ can be sampled in closed form as
\[
\mathbf{F}_t = \sqrt{\bar{\alpha}_t}\, \mathbf{F}_0 + \sqrt{1-\bar{\alpha}_t}\, \boldsymbol{\varepsilon}_t, \tag{9a}
\]
\[
q(\mathbf{F}_t \mid \mathbf{F}_0) = \mathcal{N}\big(\mathbf{F}_t; \sqrt{\bar{\alpha}_t}\, \mathbf{F}_0, \; (1-\bar{\alpha}_t)\mathbf{I}\big), \tag{9b}
\]
where $\boldsymbol{\varepsilon}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, $\alpha_t = 1-\beta_t$, and $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$. Typically, the variance schedule is set as $\beta_1 < \beta_2 < \dots < \beta_T$ [14]. As $\bar{\alpha}_T$ approaches 0, $\mathbf{F}_T$ approximates a standard Gaussian distribution regardless of the initial state $\mathbf{F}_0$, i.e., $q(\mathbf{F}_T \mid \mathbf{F}_0) \approx \mathcal{N}(\mathbf{F}_T; \mathbf{0}, \mathbf{I})$.

B. Inversion of Diffusion Process Conditioned on LR EnvCF

In the proposed CDiff, the inversion process is viewed as a conditional decoding procedure, where at
each time step $t$, $\mathbf{F}_t$, conditioned on $\dot{\mathbf{F}}$, is denoised and refined to $\mathbf{F}_{t-1}$, with the conditional transition probability for each step denoted as $p(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \dot{\mathbf{F}})$. Then, the conditional joint distribution of the inversion process is expressed as
\[
p(\mathbf{F}_{0:T} \mid \dot{\mathbf{F}}) = p(\mathbf{F}_T) \prod_{t=1}^{T} p(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \dot{\mathbf{F}}). \tag{10}
\]
To execute the conditional inversion process (10), a denoising neural network $\boldsymbol{\varepsilon}_{\theta}(\cdot)$ with a learnable parameter set $\theta$ needs to be designed to approximate the conditional transition densities:
\[
p(\mathbf{F}_{0:T} \mid \dot{\mathbf{F}}) = p(\mathbf{F}_T) \prod_{t=1}^{T} p_{\theta}(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \dot{\mathbf{F}}), \tag{11a}
\]
\[
p_{\theta}(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \dot{\mathbf{F}}) = \mathcal{N}\big(\mathbf{F}_{t-1}; \boldsymbol{\mu}_{\theta}(\dot{\mathbf{F}}, \mathbf{F}_t, t), \; \boldsymbol{\Sigma}_{\theta}(\dot{\mathbf{F}}, \mathbf{F}_t, t)\big). \tag{11b}
\]
Note that the proposed CDiff incorporates a denoising neural network $\boldsymbol{\varepsilon}_{\theta}(\cdot)$ conditioned on side information in the form of an LR EnvCF $\dot{\mathbf{F}}$, guiding it to progressively denoise from a Gaussian-distributed $\mathbf{F}_T$ and generate the HR EnvCF $\mathbf{F}_0$. To ensure the effective functioning of this denoising neural network $\boldsymbol{\varepsilon}_{\theta}(\cdot)$, a specific objective function needs to be derived. One common likelihood-based approach in generative modeling involves optimizing the model to maximize the conditional joint probability distribution $p(\mathbf{F}_{0:T} \mid \dot{\mathbf{F}})$ of all observed samples. However, only the observed sample $\mathbf{F}_0$ is accessible, while the latent variables $\mathbf{F}_{1:T}$ remain unknown. To this end, we seek to maximize the conditional marginal distribution $p(\mathbf{F}_0 \mid \dot{\mathbf{F}})$, expressed as
\[
p(\mathbf{F}_0 \mid \dot{\mathbf{F}}) = \int p(\mathbf{F}_{0:T} \mid \dot{\mathbf{F}})\, d\mathbf{F}_{1:T}. \tag{12}
\]

Algorithm 1: Training Strategy for the Conditional Denoising Neural Network
1: repeat
2: Load data pairs $(\dot{\mathbf{F}}, \mathbf{F}_0) \sim p(\dot{\mathbf{F}}, \mathbf{F}_0)$ from the EnvCF training dataset $\mathcal{S} = \{(\dot{\mathbf{F}}_n, \mathbf{F}_n)\}_{n=1}^{N}$
3: Sample time steps $t$ uniformly from $1, \dots$
, $T$, i.e., $t \sim \mathrm{Uniform}(1, \dots, T)$
4: Generate a noise tensor $\boldsymbol{\varepsilon}_t$ of the same dimensions as $\mathbf{F}_0$, $\boldsymbol{\varepsilon}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$
5: Perform the diffusion process on the HR EnvCF $\mathbf{F}_0$: $\mathbf{F}_t = \sqrt{\bar{\alpha}_t}\, \mathbf{F}_0 + \sqrt{1-\bar{\alpha}_t}\, \boldsymbol{\varepsilon}_t$
6: Feed $\mathbf{F}_t$, $\dot{\mathbf{F}}$, and $t$ into the network $\boldsymbol{\varepsilon}_{\theta}(\cdot)$
7: Perform a gradient descent step on the loss function (15) to optimize the model parameters $\theta$: $\nabla_{\theta} \big\| \boldsymbol{\varepsilon}_t - \boldsymbol{\varepsilon}_{\theta}\big(\dot{\mathbf{F}}, \sqrt{\bar{\alpha}_t}\, \mathbf{F}_0 + \sqrt{1-\bar{\alpha}_t}\, \boldsymbol{\varepsilon}, t\big) \big\|_2^2$
8: until the loss function (15) converges

Algorithm 2: HR EnvCF Generation via T-Step Conditional Inversion Diffusion Process
1: Load the trained model and its weight set $\theta$
2: Obtain $\mathbf{F}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ and $\dot{\mathbf{F}}$
3: for $t = T, \dots, 1$ do
4: $\boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ if $t > 1$, else $\boldsymbol{\varepsilon} = \mathbf{0}$
5: Execute the conditional iterative refinement step:
$\mathbf{F}_{t-1} \leftarrow \frac{1}{\sqrt{\alpha_t}} \Big( \mathbf{F}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \boldsymbol{\varepsilon}_{\theta}(\dot{\mathbf{F}}, \mathbf{F}_t, t) \Big) + \sqrt{\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\, \beta_t}\; \boldsymbol{\varepsilon}$
6: end for
7: return $\hat{\mathbf{F}}_0$

By leveraging variational inference techniques, we can derive the evidence lower bound (ELBO) as a surrogate objective to optimize the denoising neural network:
\[
\log p(\mathbf{F}_0 \mid \dot{\mathbf{F}}) \;\overset{(a)}{\geq}\; \mathbb{E}_{q(\mathbf{F}_{1:T} \mid \mathbf{F}_0)} \left[ \log \frac{p(\mathbf{F}_{0:T} \mid \dot{\mathbf{F}})}{q(\mathbf{F}_{1:T} \mid \mathbf{F}_0)} \right], \tag{13}
\]
where $\overset{(a)}{\geq}$ in (13) follows from Jensen's inequality. Then, (13) can be further expressed as (14), displayed at the top of the next page. Here, $D_{\mathrm{KL}}(\cdot)$ represents the Kullback-Leibler (KL) divergence. The components of the ELBO (14) for the log-conditional marginal distribution are similar to those in the surrogate objective presented in [11]. By applying Bayes' theorem, the final simplified objective can be derived as
\[
\mathcal{L}(\theta) := \sum_{t=1}^{T} \mathbb{E}_{\mathbf{F}_0, \boldsymbol{\varepsilon}_t} \Big\| \boldsymbol{\varepsilon}_t - \boldsymbol{\varepsilon}_{\theta}\big(\dot{\mathbf{F}}, \underbrace{\sqrt{\bar{\alpha}_t}\, \mathbf{F}_0 + \sqrt{1-\bar{\alpha}_t}\, \boldsymbol{\varepsilon}}_{\mathbf{F}_t}, t\big) \Big\|_2^2. \tag{15}
\]
Based on the trained CDiff, given the noise-contaminated EnvCF $\mathbf{F}_t$ and the side information $\dot{\mathbf{F}}$, we can approximate the target HR EnvCF $\mathbf{F}_0$ through (9a), i.e.,
\[
\hat{\mathbf{F}}_0 = \frac{1}{\sqrt{\bar{\alpha}_t}} \Big( \mathbf{F}_t - \sqrt{1-\bar{\alpha}_t}\; \boldsymbol{\varepsilon}_{\theta}\big(\dot{\mathbf{F}}, \mathbf{F}_t, t\big) \Big). \tag{16}
\]
\[
\log p(\mathbf{F}_0 \mid \dot{\mathbf{F}}) \geq \mathbb{E}_{q(\mathbf{F}_{1:T} \mid \mathbf{F}_0)} \left[ \log \frac{p(\mathbf{F}_T)\, p_{\theta}(\mathbf{F}_0 \mid \mathbf{F}_1, \dot{\mathbf{F}})}{q(\mathbf{F}_1 \mid \mathbf{F}_0)} + \log \frac{q(\mathbf{F}_1 \mid \mathbf{F}_0)}{q(\mathbf{F}_T \mid \mathbf{F}_0)} + \log \prod_{t=2}^{T} \frac{p_{\theta}(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \dot{\mathbf{F}})}{q(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \mathbf{F}_0)} \right] \tag{14a}
\]
\[
= \mathbb{E}_{q(\mathbf{F}_1 \mid \mathbf{F}_0)} \big[ \log p_{\theta}(\mathbf{F}_0 \mid \mathbf{F}_1, \dot{\mathbf{F}}) \big] - \sum_{t=2}^{T} \mathbb{E}_{q(\mathbf{F}_t \mid \mathbf{F}_0)} \big[ D_{\mathrm{KL}}\big( q(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \mathbf{F}_0) \,\|\, p_{\theta}(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \dot{\mathbf{F}}) \big) \big] - D_{\mathrm{KL}}\big( q(\mathbf{F}_T \mid \mathbf{F}_0) \,\|\, p(\mathbf{F}_T) \big) \tag{14b}
\]

Each iteration in the proposed CDiff is expressed as
\[
\mathbf{F}_{t-1} \leftarrow \frac{1}{\sqrt{\alpha_t}} \Big( \mathbf{F}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \boldsymbol{\varepsilon}_{\theta}\big(\dot{\mathbf{F}}, \mathbf{F}_t, t\big) \Big) + \sqrt{\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\, \beta_t}\; \boldsymbol{\varepsilon}, \tag{17}
\]
where $\boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. For clarity, the training process and the iterative inference process of the proposed CDiff are summarized in Algorithm 1 and Algorithm 2, respectively.

IV. NUMERICAL EXPERIMENT

In this section, we first present the generation of the EnvCF dataset and the parameter configuration of the proposed model. Then, we conduct both quantitative and qualitative comparisons with the baselines on the ×4 EnvCF reconstruction task (i.e., 64×64 → 256×256).

A. Datasets and Experiment Setup

The RadioMapSeer dataset [8], a widely adopted CF dataset that incorporates environmental information, is utilized for training and validating the proposed CDiff. Specifically, the RadioMapSeer dataset consists of 700 unique city maps, each measuring 256×256 m² and containing 80 distinct BS locations. For each possible combination of city maps and BS locations, the dataset provides the corresponding CFs, which are simulated using WinProp [2], [8]. These city maps describe the geometric contours of real-world streets and buildings, sourced from OpenStreetMap [8] for different cities. Considering a highly challenging ×4 EnvCF refinement task, we set $\delta_{HR} = 256$, utilizing a sampling interval of $\Delta x = \Delta y = 1$ m to sample the city map along with its associated channel gain values and environmental information, resulting in the HR EnvCF, denoted as $\mathbf{F}_{HR,n}$.
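The closed-form noising step (9a) and one conditional refinement step (17) can be sketched as follows, using the linear variance schedule from Table I. The zero "denoiser" below is purely a placeholder for the trained conditional network $\boldsymbol{\varepsilon}_{\theta}(\dot{\mathbf{F}}, \mathbf{F}_t, t)$, and the toy array shapes are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
beta = np.linspace(1e-6, 1e-2, T)    # linear variance schedule (Table I)
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)

F0 = rng.random((8, 8))              # toy HR EnvCF F_0
t = 500
eps = rng.standard_normal(F0.shape)
Ft = np.sqrt(alpha_bar[t]) * F0 + np.sqrt(1 - alpha_bar[t]) * eps      # (9a)

def eps_theta(F_lr, F_t, step):
    # placeholder for the trained conditional denoising network
    return np.zeros_like(F_t)

z = rng.standard_normal(F0.shape)    # fresh noise (the t > 1 branch of Algorithm 2)
F_prev = (Ft - (1 - alpha[t]) / np.sqrt(1 - alpha_bar[t])
          * eps_theta(None, Ft, t)) / np.sqrt(alpha[t]) \
         + np.sqrt((1 - alpha_bar[t - 1]) / (1 - alpha_bar[t]) * beta[t]) * z   # (17)

assert F_prev.shape == F0.shape
```

Iterating the last step from $t = T$ down to 1 is exactly the loop of Algorithm 2; with a trained network in place of the placeholder, the final iterate is the generated HR EnvCF.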
For the LR counterpart, $\delta_{LR}$ is set to 64, meaning the sampling interval for the LR EnvCF is $\Delta x = \Delta y = 4$ m, yielding the LR EnvCF denoted as $\mathbf{F}_{LR,n}$. Similarly, based on the RadioMapSeer dataset, we generate 56,000 pairs of EnvCF samples, denoted as $\{\mathbf{F}_{HR,n}, \mathbf{F}_{LR,n}\}_{n=1}^{56000}$, and split them into training and validation sets in a 4:1 ratio. The proposed CDiff is trained utilizing 2 Nvidia RTX-4090 GPUs, each with 24 GB of memory, and tested on a single Nvidia RTX-4090 GPU with 24 GB of memory. We employ the Adam optimizer with a learning rate of 5×10⁻⁵ for model parameter updates over 500,000 iterations, and the batch size is set to 16. Starting from the 5,000th iteration, we introduce the exponential moving average algorithm [11], with the decay factor set to 0.9999. More parameter configurations are summarized in Table I.

TABLE I
SYSTEM AND MODEL PARAMETERS
Parameter | Value
Size of the target area A | D × D = 256 × 256 m²
Sampling interval for HR EnvCF | ∆x = ∆y = 1 m
Sampling interval for LR EnvCF | ∆x = ∆y = 4 m
Carrier frequency | f = 5.9 GHz
Bandwidth | B = 10 MHz
Transmit power | 23 dBm
Noise power spectral density | -174 dBm/Hz
Variance schedule | Linear: β₁ = 10⁻⁶ → β_T = 10⁻²

B. Experiment Results

To comprehensively assess the effectiveness of the proposed approach, we conduct experiments on the ×4 EnvCF reconstruction task, comparing its performance against several baselines, including Bilinear, Nearest, Kriging [7], RBF [6], and SR-GAN [15]. Without loss of generality, three widely adopted metrics, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and normalized mean squared error (NMSE), are
employed to evaluate performance. Table II presents a quantitative analysis of the proposed CDiff and baselines on the ×4 EnvCF reconstruction task. It can be observed that the performance of the Kriging algorithm is relatively suboptimal. Notably, compared to the baselines, the proposed approach achieves competitive reconstruction performance, with PSNR, SSIM, and NMSE values of 31.15, 0.9280, and 0.0073, respectively. As shown in Fig. 2, to better illustrate the qualitative analysis, we randomly visualize the reconstruction results of EnvCF utilizing the proposed CDiff and baselines. Note that the proposed approach effectively refines both environmental information and CF, closely approximating the ground-truth EnvCF with minimal error.

TABLE II
PERFORMANCE COMPARISON WITH BASELINES
Method | PSNR | SSIM | NMSE
Bilinear | 27.24 | 0.8521 | 0.0172
Nearest | 26.25 | 0.8331 | 0.0215
Kriging | 19.88 | 0.6725 | 0.1166
RBF | 26.99 | 0.8613 | 0.0180
SR-GAN | 29.75 | 0.7517 | 0.0089
CDiff | 31.15 ↑ | 0.9280 ↑ | 0.0073 ↓

(Fig. 2 panels: Bilinear, Nearest, Kriging, RBF, SR-GAN, CDiff, Ground Truth.)
Fig. 2. Random visualizations of the HR EnvCF reconstruction results using the proposed CDiff and baselines.

V. CONCLUSION

This paper proposed a deep conditional generative learning-enabled EnvCF refinement approach that effectively refined both environmental information and CF, achieving a fourfold enhancement in granularity.
Specifically, we employed variational inference to derive a surrogate objective and proposed the CDiff framework, which effectively generates HR EnvCF conditioned on LR EnvCF. Experimental results showed that the proposed approach achieved significant improvements in enhancing the granularity of EnvCF compared to the baselines.

REFERENCES

[1] Y. Zeng, J. Chen, J. Xu, D. Wu, X. Xu, S. Jin, X. Gao, D. Gesbert, S. Cui, and R. Zhang, "A tutorial on environment-aware communications via channel knowledge map for 6G," IEEE Commun. Surv. Tuts., vol. 26, no. 3, pp. 1478-1519, Feb. 2024.
[2] Z. Jin, L. You, J. Wang, X.-G. Xia, and X. Gao, "An I2I inpainting approach for efficient channel knowledge map construction," IEEE Trans. Wirel. Commun., vol. 24, no. 2, pp. 1415-1429, Feb. 2025.
[3] Z. Yang, Z. Zhou, and Y. Liu, "From RSSI to CSI: Indoor localization via channel response," ACM Comput. Surveys, vol. 46, no. 2, pp. 1-32, Dec. 2013.
[4] D. Lee, D. Berberidis, and G. B. Giannakis, "Adaptive Bayesian radio tomography," IEEE Trans. Signal Process., vol. 67, no. 8, pp. 1964-1977, Apr. 2019.
[5] X. Xu and Y. Zeng, "How much data is needed for channel knowledge map construction?" IEEE Trans. Wirel. Commun., vol. 23, no. 10, pp. 13011-13021, Oct. 2024.
[6] S. Zhang, T. Yu, B. Choi, F. Ouyang, and Z. Ding, "Radiomap inpainting for restricted areas based on propagation priority and depth map," IEEE Trans. Wirel. Commun., vol. 23, no. 8, pp. 9330-9344, Feb. 2024.
[7] K. Sato and T. Fujii, "Kriging-based interference power constraint: Integrated design of the radio environment map and transmission power," IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 1, pp. 13-25, Mar. 2017.
[8] R. Levie, Ç. Yapar, G. Kutyniok, and G. Caire, "RadioUNet: Fast radio map estimation with convolutional neural networks," IEEE Trans. Wirel. Commun., vol. 20, no.
6, pp. 4001-4015, Jun. 2021.
[9] S. Bakirtzis, J. Chen, K. Qiu, J. Zhang, and I. Wassell, "EM DeepRay: An expedient, generalizable, and realistic data-driven indoor propagation model," IEEE Trans. Antennas Propag., vol. 70, no. 6, pp. 4140-4154, Jun. 2022.
[10] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-resolution image synthesis with latent diffusion models," in Proc. Conf. Comput. Vis. Pattern Recognit. (CVPR), New Orleans, LA, USA, Jun. 2022, pp. 10684-10695.
[11] J. Ho, A. Jain, and P. Abbeel, "Denoising diffusion probabilistic models," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), Dec. 2020, pp. 6840-6851.
[12] Z. Jin, L. You, H. Zhou, Y. Wang, X. Liu, X. Gong, X. Gao, D. W. K. Ng, and X.-G. Xia, "GDM4MMIMO: Generative diffusion models for massive MIMO communications," arXiv preprint arXiv:2412.18281, 2024.
[13] Z. Jin, L. You, J. Wang, X.-G. Xia, and X. Q. Gao, "Channel knowledge map construction with Laplacian pyramid reconstruction network," in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Dubai, United Arab Emirates, Apr. 2024, pp. 1-6.
[14] Z. Jin, L. You, D. W. K. Ng, X.-G. Xia, and X. Gao, "Near-field channel estimation for XL-MIMO: A deep generative model guided by side information," IEEE Trans. Cogn. Commun. Netw., early access, May 2025.
[15] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 4681-4690.
arXiv:2505.07958v1 [math.PR] 12 May 2025

Laws of Large Numbers for Information Resolution

Daniel Raban*
Department of Statistics, University of California, Berkeley
May 14, 2025

Abstract

Laws of large numbers establish asymptotic guarantees for recovering features of a probability distribution using independent samples. We introduce a framework for proving analogous results for recovery of the σ-field of a probability space, interpreted as information resolution: the granularity of measurable events given by comparison to our samples. Our main results show that, under iid sampling, the Borel σ-field in R^d and in more general metric spaces can be recovered in the strongest possible mode of convergence. We also derive finite-sample L¹ bounds for uniform convergence of σ-fields on [0,1]^d. We illustrate the use of our framework with two applications: constructing randomized solutions to the Skorokhod embedding problem, and analyzing the loss of variants of random forests for regression.

1 Introduction

Laws of large numbers generally assert that, in the context of iid sampling, we can asymptotically recover aspects of our probability space. For example, the strong and weak laws of large numbers assert that we can recover the mean of a measure µ, and, perhaps more ambitiously, the Glivenko-Cantelli theorem guarantees recovery of the entire measure via its cumulative distribution function. Inconspicuously absent from these theorems is the following consideration: Given a probability space (S, B, µ), we can recover the measure µ we were sampling from, but what about the σ-field B? Does the information of our samples X_i allow us to measure the same resolution of events as the unknown process associated to the samples?

The goal of this paper is to introduce a notion of laws of large numbers regarding recovery of the information resolution, as represented by the notion of σ-fields, associated to the target measure µ generating our iid samples.
Just as one approximates the underlying mean by a sample mean or the underlying CDF by an empirical CDF, we will approximate the underlying σ-field by empirical σ-fields, representing the granularity of the events we can measure by comparison to our samples.

*Email: danielraban@berkeley.edu

We will prove examples of these laws of large numbers in settings such as sampling in R^d and in more general metric spaces. We will also present two applications of our theory. The first gives a simple method for randomly generating solutions to the Skorokhod embedding problem by constructing stopping times for Brownian motion through sequences of hitting barriers, interpreted as increasingly resolving partitions of R. The second applies our theory to random forests, analyzing how regression tree loss depends on tree depth by viewing feature space splits as progressively finer resolutions.

Here is a basic example illustrating the notion of information resolution.

Example 1.1. Suppose we sample X_1, X_2, X_3 iid ∼ µ and get the values X_1 = 5, X_2 = -4, and X_3 = 1. What is the empirical resolution afforded by the knowledge of our three sample values? One choice is as follows: If we were to continue sampling Z_1, Z_2, ... iid ∼ µ, we would be able to compare the values of the Z_i with our original sample values X_1, X_2, X_3. We
https://arxiv.org/abs/2505.07958v1
would be able to determine events such as {X_2 < Z_i ≤ X_3}.

Figure 1: Comparing a new sample Z_i to the previous samples X_1, X_2, X_3.

From this perspective, the σ-field representing the resolution given by our first three samples is the σ-field generated by the partition
\[
\mathcal{F}_3 := \sigma\big( (-\infty, -4], \; (-4, 1], \; (1, 5], \; (5, \infty) \big).
\]
Alternatively, we can express this σ-field more directly in terms of our samples using the sets (-∞, X_i]:
\[
\mathcal{F}_3 = \sigma\big( (-\infty, 5], \; (-\infty, -4], \; (-\infty, 1] \big).
\]
Defining empirical σ-fields in this way, i.e.
\[
\mathcal{F}_n := \sigma\big( (-\infty, X_1], \dots, (-\infty, X_n] \big),
\]
we can measure more events as we obtain more samples, increasing the granularity of our information resolution. And as we let n → ∞, we might hope that we can measure any event.

Care must be taken, however, when defining a notion of empirical information resolution, as not every sequence of σ-fields will recover the maximal σ-field of the probability space. Here is a naive example illustrating this point.

Example 1.2. When sampling X_1, X_2, ... iid ∼ µ = Unif[0,1], we could define G_n := σ({X_1}, ..., {X_n}). This, at best, generates a sub-σ-field of C, the σ-field of countable and co-countable subsets of [0,1]. Moreover, all sets in C have Lebesgue measure 0 or 1, so from the perspective of Lebesgue measure on [0,1], we have not gained any resolution at all. The sequence G_n of empirical σ-fields would only be sufficient for recovering the resolution of our space if the probability measure µ were discrete.

In general, the setup for σ-field recovery is as follows: draw iid samples X_1, X_2, ... iid ∼ µ, taking values in a space S. To each x ∈ S, we associate a set A_x that reflects the resolution or information revealed by observing x. These sets encode our assumption about the underlying structure, with the goal of recovering the maximal σ-field F := σ({A_x : x ∈ S}). We define the empirical resolution σ-fields F_n := σ(A_{X_1}, ..., A_{X_n}), based on the first n samples.
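The empirical σ-fields of Example 1.1 can be computed directly: the σ-field generated by (-∞, X_1], ..., (-∞, X_n] is generated by the partition of R cut at the sorted sample values. A small sketch of this (our own illustration):

```python
import numpy as np

# The σ-field σ((-∞, X_1], ..., (-∞, X_n]) is generated by the partition of R
# whose cut points are the distinct sorted sample values.
def partition_cells(samples):
    """Return the partition cells of R (as interval endpoints) generated by the samples."""
    cuts = sorted(set(samples))
    endpoints = [-np.inf] + cuts + [np.inf]
    return list(zip(endpoints[:-1], endpoints[1:]))  # cells of the form (a, b]

cells = partition_cells([5, -4, 1])
assert cells == [(-np.inf, -4), (-4, 1), (1, 5), (5, np.inf)]   # matches F_3

# Each new (distinct) sample refines the partition by splitting exactly one cell.
assert len(partition_cells([5, -4, 1, 2])) == len(cells) + 1
```

Locating a fresh sample Z in this partition is exactly the comparison information that the first n samples afford, and the partition grows finer as n increases.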
The central question is whether Fn converges to F as n → ∞, under an appropriate notion of convergence for σ-fields. Convergence of σ-fields has been well-studied (see e.g. [Boy71, Nev72, Kud74, Rog74, VZ93, Vid18]), and there are a number of non-equivalent modes of convergence. Most of these modes of convergence involve comparing the σ-fields using a fixed measure µ, which we will usually assume to be the shared marginal distribution of our iid samples. We list some modes of convergence here; for a more in-depth study of how these notions relate to each other, see [Vid18], for example.

• Monotone convergence: Fn → F in the monotone sense (written Fn ↑ F) means that ⋁∞n=1 Fn = F. Here, ⋁∞n=1 Fn is the join of these σ-fields with respect to inclusion; that is, it is the smallest σ-field containing Fn for each n.

• Hausdorff convergence: Given a fixed probability measure µ, Fn → F in the Hausdorff sense means that dµ(Fn, F) := sup_{∥f∥L∞(µ)≤1} ∥E[f | Fn] − E[f | F]∥L1(µ) → 0. This is equivalent [Rog74] to d′µ(Fn, F) := max( sup_{A∈Fn} inf_{B∈F} µ(A △ B), sup_{B∈F} inf_{A∈Fn} µ(A △ B) ) → 0, which is convergence of the sets Fn to F in the Hausdorff topology induced by viewing these σ-fields as closed subsets of L1 (via indicator functions of sets).
• Set theoretic convergence: This means lim sup_{n→∞} Fn = lim inf_{n→∞} Fn = F, where lim sup_{n→∞} Fn := ⋂∞n=1 ⋁∞k=n Fk and lim inf_{n→∞} Fn := ⋁∞n=1 ⋂∞k=n Fk.

• Strong convergence: This means E[1A | Fn] → E[1A | F] in probability for all measurable A.

In general, monotone and Hausdorff convergence, which are not equivalent, are the strongest. Here is a diagram expressing the strength of various modes of convergence, including some not mentioned above; for a more complete picture, see [Vid18]. (Diagram nodes: Monotone, Hausdorff, Almost sure, Set theoretic, Strong, Weak.) Hausdorff convergence, which is given by a pseudometric, may at first seem the most natural to use for an analogue of the Glivenko–Cantelli theorem, due to its uniform nature. However, we will see in Section 3 that Hausdorff convergence fails for even simple examples in R. Monotone convergence, which appears regularly in probability theory (for example, in the context of martingale convergence), is another natural choice and will suffice in cases where Hausdorff convergence is not possible.

Outline. In what follows, we will prove laws of large numbers for two modes of convergence of σ-fields.

- In Section 2, we prove theorems for monotone convergence of σ-fields in Rd and in more general metric spaces. This gives the strongest convergence possible, as monotone convergence implies all studied modes of convergence for σ-fields which do not imply Hausdorff convergence.

- In Section 3, we prove a weakened version of Hausdorff convergence (and give quantitative rates) by restricting the class of test functions to Lipschitz functions, rather than all of L∞(µ); in other words, we give bounds on sup_{∥f∥Lip≤1} ∥E[f | Fn] − E[f | F]∥L1(µ).

- In Section 4, we apply our theorems to construct randomized solutions to the Skorokhod embedding problem and to analyze the loss of randomized regression trees. These applications use our theorems from Sections 2 and 3, respectively.
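For σ-fields generated by finite interval partitions of [0,1] under Lebesgue measure, the Hausdorff distance d′µ defined above can be computed by brute force (exponential in the number of atoms, so only for tiny examples). A sketch, not from the paper; the helper names are ours:

```python
from itertools import combinations

def refinement(cuts_a, cuts_b):
    """Cells of the common refinement of two interval partitions of [0, 1]."""
    pts = sorted(set([0.0, 1.0] + cuts_a + cuts_b))
    return list(zip(pts[:-1], pts[1:]))

def all_sets(cuts, cells):
    """Every set of the sigma-field generated by the partition with the given
    interior cut points, encoded as a frozenset of refinement-cell indices."""
    pts = [0.0] + sorted(cuts) + [1.0]
    atoms = [{i for i, (a, b) in enumerate(cells) if lo <= a and b <= hi}
             for lo, hi in zip(pts[:-1], pts[1:])]
    return [frozenset().union(*combo)
            for r in range(len(atoms) + 1)
            for combo in combinations(atoms, r)]

def d_prime(cuts_a, cuts_b):
    """max(sup_A inf_B mu(A sym-diff B), sup_B inf_A mu(A sym-diff B))
    for Lebesgue measure mu on [0, 1]."""
    cells = refinement(cuts_a, cuts_b)
    length = [b - a for a, b in cells]
    A, B = all_sets(cuts_a, cells), all_sets(cuts_b, cells)
    mu = lambda x, y: sum(length[i] for i in x ^ y)
    return max(max(min(mu(a, b) for b in B) for a in A),
               max(min(mu(a, b) for a in A) for b in B))

# The field cut at {1/2, 3/4} has the atom (1/2, 3/4], which is 1/4 away
# (in Lebesgue measure) from every set of the field cut only at {1/2}:
print(d_prime([0.5], [0.5, 0.75]))  # 0.25
```

This makes concrete the sense in which a refinement sits at positive Hausdorff distance from a coarser field.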
It is important to note that there are two layers of randomness at play: We want to study a probability space ( S,F, ยต), but we are generating the samples X1, X2, . . .iidโˆผยต via some background probability space (โ„ฆ ,G,P). Just as classical laws of large numbers concern P-a.s. convergence of numbers or random measures, our theorems will concern P-a.s. and L1(P) convergence of random ฯƒ-fields. 2 Monotone convergence of resolution When studying the convergence of ฯƒ-fields, we want to compare ฯƒ-fields by measuring the distance between sets with respect to a fixed measure ยต. The measure ยตcanโ€™t meaningfully distinguish between two sets A, B with ยต(Aโ–ณB) = 0, so we will need to be precise with our statements. However, the following definition and subsequent proposition tell us that this technicality poses no obstruction to our understanding. 4 Definition 2.1. Let ( S,F, ยต) be a measure space, and let A,B โІ F . We say that A andBdiffer only by ยต-null sets if (i)โˆ€Aโˆˆ A,โˆƒBโˆˆ Bs.t.ยต(Aโ–ณB) = 0, (ii)โˆ€Bโˆˆ B,โˆƒAโˆˆ As.t.ยต(Aโ–ณB) = 0. We will make judicious use of the generating construction for ฯƒ-fields: ฯƒ(A) denotes the smallest ฯƒ-field containing all the sets in A, and we say that Agenerates ฯƒ(A). Before proving any results, we must first make sure that
altering generating sets by null sets does not cause any issues when generating σ-fields.

Proposition 2.1. Let (S, F, µ) be a measure space, and let A, B ⊆ F differ only by µ-null sets. Then σ(A) and σ(B) differ only by µ-null sets.

Proof. Let F := {A ∈ σ(A) : ∃B ∈ σ(B) s.t. µ(A △ B) = 0} be the members of σ(A) which are represented in σ(B) up to null sets. Then F is a σ-field:

(i) Empty set: ∅ ∈ F because ∅ ∈ σ(A) and ∅ ∈ σ(B).

(ii) Complements: If A ∈ F, then letting B be such that µ(A △ B) = 0, we get µ(Ac △ Bc) = 0. As Bc ∈ σ(B), we get Ac ∈ F.

(iii) Countable unions: If A1, A2, · · · ∈ F, then let B1, B2, · · · ∈ σ(B) be such that µ(Ai △ Bi) = 0 for i ≥ 1. Then µ((⋃∞i=1 Ai) △ (⋃∞i=1 Bi)) ≤ µ(⋃∞i=1 (Ai △ Bi)) ≤ ∑∞i=1 µ(Ai △ Bi) = 0, so ⋃∞i=1 Ai ∈ F.

F is a σ-field that contains A, so F ⊇ σ(A). Hence, F = σ(A). The same argument shows that all members of σ(B) are represented in σ(A) up to null sets.

2.1 Monotone convergence of resolution in Rd

In this section, we prove a basic law of large numbers for recovering the Borel σ-field in Rd, using the left-infinite intervals/boxes which show up in the Glivenko–Cantelli theorem.

Theorem 2.1. Let (Rd, B, µ) be a probability space equipped with the Borel σ-field, and let X1, X2, . . . iid∼ µ. For x = (x1, . . . , xd) ∈ Rd, let Ax := (−∞, x1] × · · · × (−∞, xd], and define the empirical σ-fields Fn := σ(AX1, . . . , AXn). Then Fn ↑ B a.s.; that is, ⋁∞n=1 Fn and B differ only by µ-null sets.

This choice of Ax is, of course, not the only choice that works. The proof works essentially the same with finite-sized boxes, balls, etc. To recover a different σ-field, one would use a different choice of Ax sets; the choice of Ax = (−∞, x1] × · · · × (−∞, xd] necessarily implies that we are attempting to recover a sub-σ-field of the Borel σ-field because σ({Ax : x ∈ Rd}) = B.

Lemma 2.1. Let G ⊆ F be σ-fields, let µ be a probability measure defined on F, and let B ∈ F.
Then the infimum inf AโˆˆGยต(Aโ–ณB) is achieved. 5 Proof of Lemma 2.1. To construct the minimizing set, we round E[ 1B| G] to get the โ€œclosest indicator.โ€ Let Aโˆ—={xโˆˆS:E[ 1B| G]โ‰ฅ1/2}for some version of E[ 1B| G] as a member of L2(ยต). We can directly show that ยต(Aโˆ—โ–ณB)โ‰คยต(Aโ–ณB) for any Aโˆˆ G: ยต(Aโˆ—โ–ณB) =โˆฅ 1Aโˆ—โˆ’ 1BโˆฅL1(ยต) =โˆฅ 1Aโˆ—โˆ’ 1Bโˆฅ2 L2(ยต) By the Pythagorean theorem, =โˆฅ 1Aโˆ—โˆ’E[ 1B| G]โˆฅ2 L2(ยต)+โˆฅE[ 1B| G]โˆ’ 1Bโˆฅ2 L2(ยต) By definition, for ยต-a.e. xโˆˆS, 1Aโˆ—(x) is closer to E[ 1B| G](x) than any other G- measurable indicator is. โ‰ค โˆฅ 1Aโˆ’E[ 1B| G]โˆฅ2 L2(ยต)+โˆฅE[ 1B| G]โˆ’ 1Bโˆฅ2 L2(ยต) =โˆฅ 1Aโˆ’ 1Bโˆฅ2 L2(ยต) =โˆฅ 1Aโˆ’ 1BโˆฅL1(ยต) =ยต(Aโ–ณB). Taking the case where this infimum is zero gives the following topological interpre- tation of the above lemma. Corollary 2.1 (Lp(ยต) Closure of ฯƒ-fields) .LetG โІ F beฯƒ-fields, let ยตbe a probability measure defined on F, and let Bโˆˆ F. If there exists a sequence Bnโˆˆ G such that ยต(Bnโ–ณB)โ†’0asnโ†’ โˆž , then Bโˆˆ G. In other words, { 1A:Aโˆˆ G} is a closed subset ofLp(ยต)for all 1โ‰คp <โˆž. Proof of Theorem 2.1.
The idea is to reduce the problem to showing that our empirical σ-fields can approximate any box. Then we use the Glivenko–Cantelli theorem to approximate any box from the inside; see Figure 2 for a picture.

Step 1. (Reduce to recovering generating boxes): Since B is generated by the countable collection {Aq : q ∈ Qd}, Proposition 2.1 reduces the problem to showing that for each q ∈ Qd, with probability 1, there exists A ∈ ⋁∞n=1 Fn such that µ(A △ Aq) = 0.

Step 2. (Reduce to approximating non-null boxes): By Corollary 2.1, it suffices to show that inf_{A∈⋁∞n=1 Fn} µ(A △ Aq) = 0 almost surely. Fix q ∈ Qd and ε > 0. We will exhibit a set A ∈ ⋁∞n=1 Fn such that µ(A △ Aq) < ε. Moreover, we may assume that µ(Aq) ≠ 0; otherwise, we can just pick A = ∅ and be done.

Step 3. (Approximate boxes from inside): Consider the empirical measure µN := (1/N) ∑Nn=1 δXn. From the Glivenko–Cantelli theorem, we can choose N such that sup_{x∈Rd} |µN(Ax) − µ(Ax)| < ε/2. For non-null Aq, P(Xn ∉ Aq ∀n) = 0, so we may assume, increasing N if necessary, that Aq contains Xn for some n ≤ N. Using this value of N, define r = (r1, . . . , rd) ∈ Rd by ri := max{(Xj)i : Xj ∈ Aq, 1 ≤ j ≤ N}. Then µN(Ar) = µN(Aq), and Ar ⊆ Aq, so we can write µ(Ar △ Aq) = µ(Aq) − µ(Ar) = (µ(Aq) − µN(Aq)) + (µN(Aq) − µN(Ar)) + (µN(Ar) − µ(Ar)) < ε/2 + 0 + ε/2 = ε.

Figure 2: Approximating a box Aq from inside in the proof of Theorem 2.1.

2.2 Monotone convergence of resolution in metric spaces

Before extending our viewpoint to the more general setting of metric spaces, we must first review some technical notions regarding regularity of measures. The following definition is from [Rig21].

Definition 2.2. Let µ be a measure on a metric space (S, ρ). We say that µ is of Vitali type with respect to ρ if for every A ⊆ S and every family C of balls in (S, ρ) such that inf{r > 0 : B(x, r) ∈ C} = 0 for all x ∈ A, there exists a countable subfamily D ⊆ C of disjoint balls for which µ(A \ ⋃B∈D B)
= 0. [Rig21] provides a number of examples with this property. Here are a few classes of examples. Example 2.1. Any Radon measure on Rdis of Vitali type with respect to the Eu- clidean metric. Example 2.2. Every probability measure ยตon (S, ฯ) which is doubling is of Vitali type with respect to ฯ. Here, ยตis said to be doubling if there exists a constant Cโ‰ฅ1 such that ยต(B2r(x))โ‰คCยต(Br(x)) โˆ€xโˆˆS, r > 0. The reason we care about the Vitali type property is that it describes the regularity of the density of a set Awith respect to the measure ยต. In particular, it tells us that the measure ยตenjoys an analogue of the Lebesgue differentiation theorem. 7 Lemma 2.2 ([Rig21]) .Letยตbe a measure which is of Vitali type with respect to a metric space (S, ฯ). Then for every measurable set A, lim rโ†“0ยต(AโˆฉBr(x)) ยต(Br(x))= 1A(x) forยต-a.e. xโˆˆS. When generalizing the ideas of the previous section to metric spaces, we lose the helpful ordering of R. The natural candidate for a set Axin a general metric space is a ball Br(x) of radius r, centered at x.
However, the following simple example shows that balls of a fixed radius may not always suffice.

Example 2.3. Consider the metric space [0,1] with the Euclidean metric and the measure µ({1/k}) = 2−k for k = 1, 2, . . .. If we set Ax = Br(x) for any r > 0, then there are some points we cannot distinguish. However, we can still recover information resolution by sampling balls of varying radii. To make sure we can obtain a ball of any arbitrarily small radius, we introduce auxiliary randomness, which can be interpreted as a degree of noise determining the resolution given by the sample point Xn.

Theorem 2.2. Let (S, ρ, B, µ) be a separable metric space equipped with the Borel σ-field and a probability measure µ which is of Vitali type with respect to ρ. Let X1, X2, . . . iid∼ µ and R1, R2, . . . iid∼ ν be independent, where ν is a distribution on R≥0 with ν((0, ε)) > 0 for every ε > 0. For x ∈ S and r > 0, let Ax,r := Br(x) = {z ∈ S : ρ(z, x) < r}, and define the empirical σ-fields Fn := σ(AX1,R1, . . . , AXn,Rn). Then Fn ↑ B a.s.; that is, ⋁∞n=1 Fn and B differ only by µ-null sets.

Remark 2.1. The metric structure is not entirely essential in Theorem 2.2. We mainly restrict this theorem to metric spaces to express the regularity of µ via the notion of set density with respect to µ. This proof technique would work for any choice of sampling sets Ax,r with appropriate regularity for µ as the sets Ax,r more closely approximate x, e.g., a countable neighborhood base for a second countable topological space when µ is purely atomic. In fact, the σ-field need not be the Borel σ-field in general!

Proof. Let C be a countable dense subset of S. Balls of rational radius centered at points in C generate B, so it suffices to show that if c ∈ C and r ∈ Q>0, then ⋁∞n=1 Fn contains Br(c) a.s. As in the proof of Theorem 2.1, it suffices for us to show that inf_{B′∈⋁∞n=1 Fn} µ(Br(c) △ B′) = 0 a.s. We will show that the complement event has probability 0.
Suppose that inf Bโ€ฒโˆˆWโˆž n=1Fnยต(Br(c)โ–ณBโ€ฒ) =ฮด > 0. Then, by Lemma 2.1, there exists some Bโˆ—โˆˆWโˆž n=1Fnwith ยต(Br(c)โ–ณBโˆ—) =ฮด. Without loss of generality, we may assume that ยต(Br(c)\Bโˆ—)>0; the argument for Bโˆ—\Br(c) is analogous. Lemma 2.2 provides a set UโІBr(c)\Bโˆ—of positive measure which only contains points of positive density with respect to Br(c): lim tโ†“0ยต((Br(c)\Bโˆ—)โˆฉBt(x)) ยต(Bt(x))= 1 โˆ€xโˆˆU. Hence, for each xโˆˆU, there exists some radius rxsuch that for tโ‰คrx, ยต((Br(c)\Bโˆ—)โˆฉBt(x)) ยต(Bt(x))>1/2. 8 Rearranging gives ยต((Br(c)\Bโˆ—)โˆฉBt(x))> ยต(Bt(x)\(Br(c)\Bโˆ—)). On the other hand, disintegrating over Ugives P(XnโˆˆU, R nโ‰คrXn) =Z UP(Rnโ‰คrx)|{z} >0dยต(x) >0, so that P(โˆƒns.t.XnโˆˆU, R nโ‰คrXn) = 1. This event is inconsistent with the fact that ยต(Br(c)โ–ณBโˆ—) =ฮดbecause it implies that we could take Bโˆ—โˆ—:=Bโˆ—โˆชBRXn(Xn) for some nand get the improved approximation ยต(Br(c)โ–ณBโˆ—โˆ—)โ‰คยต(Br(c)โ–ณBโˆ—)|{z} =ฮด +ยต(BRXn(Xn))\(Br(c)\Bโˆ—))โˆ’ยต((Br(c)\Bโˆ—)โˆฉBRXn(Xn))| {z } <0 < ฮด, contradicting the optimality of Bโˆ—. So P(infBโ€ฒโˆˆWโˆž n=1Fnยต(Br(c)โ–ณBโ€ฒ)>0) = 0, as claimed. 3 Uniform convergence of resolution IfFnโ†‘ F, the martingale convergence theorem gives E[f| Fn]โ†’E[f| F] a.s. and inL1for all bounded f. Hausdorff convergence can be viewed as a uniform version of this convergence: dยต(Fn,F):= sup โˆฅfโˆฅLโˆž(ยต)โ‰ค1โˆฅE[f| Fn]โˆ’E[f| F]โˆฅL1(ยต)โ†’0. However, uniform convergence over the entire
unit ball in L∞(µ) is too strong of a condition for our purposes, as the following example shows.

Example 3.1. Consider the probability space ([0,1], B, λ), where B is the Borel σ-field and λ is Lebesgue measure. Given any realization Fn := σ([0, x1], . . . , [0, xn]) of an empirical σ-field, we adversarially construct a function fn as follows: Let 0 < x(1) < · · · < x(n) < 1 list the sample points in increasing order, and take the convention that x(0) = 0 and x(n+1) = 1. Define fn(x) = 1 if x(k) ≤ x < (x(k) + x(k+1))/2 for some 0 ≤ k ≤ n, and fn(x) = −1 if (x(k) + x(k+1))/2 ≤ x < x(k+1) for some 0 ≤ k ≤ n. See Figure 3 for an illustration. Then on each A ∈ Fn, E[fn | A] = 0, so E[fn | Fn] = 0 λ-a.s. Thus, since E[fn | B] = fn, we get dλ(Fn, B) ≥ ∥E[fn | Fn] − E[fn | B]∥L1(λ) = ∥fn∥L1(λ) = 1. So we cannot hope for uniform convergence over such a large class of functions.

Figure 3: An adversarially chosen function which maximizes the Hausdorff distance.

Instead of uniform convergence over all f with ∥f∥L∞(µ) ≤ 1, we consider uniform convergence over 1-Lipschitz f. We again use the coordinate-wise dominated boxes Ax := [0, x1] × · · · × [0, xd], but this choice is arbitrary, and one can prove uniform convergence with other choices for Ax (perhaps with different rates). Due to the asymmetrical nature of this partition, the sets containing points with coordinates near 1 will be larger than the sets containing coordinates near 0, leading to a slow rate of convergence of O((log n/n)1/d). After stating this slow rate, we will see that a symmetrizing adjustment to this partition leads to a much faster rate of O(1/n).

Theorem 3.1. Let ([0,1]d, B, µ) be a probability space equipped with the Borel σ-field, and let X1, X2, . . . iid∼ µ, where µ ≪ λ and γ−1 < dµ/dλ < γ for some γ ≥ 1. For x = (x1, . . . , xd) ∈ [0,1]d, let Ax := [0, x1] × · · · × [0, xd], and define the empirical σ-fields Fn := σ(AX1, . . . , AXn).
Then sup โˆฅfโˆฅLipโ‰ค1โˆฅE[f| Fn]โˆ’fโˆฅL1(ยต)P-a.s.,L1(P)โˆ’ โˆ’ โˆ’ โˆ’ โˆ’ โˆ’ โˆ’ โ†’ 0, where โˆฅfโˆฅLip:= sup {|f(x)โˆ’f(y)| |xโˆ’y|:xฬธ=y}is the Lipschitz norm. Moreover, E" sup โˆฅfโˆฅLipโ‰ค1โˆฅE[f| Fn]โˆ’fโˆฅL1(ยต)# โ‰ฒlogn n1/d โˆ€nโ‰ฅ3. The constant factor in the bound depends only on dandฮณ. The proof is in Appendix A. To improve the convergence, we use the more symmetric partition fFn:=ฯƒ({x: xiโ‰คXj,i}: 1โ‰คiโ‰คd,1โ‰คjโ‰คn), which splits the unit cube in two pieces along every coordinate of each sample point Xj. See Figure 4 for an illustration. Now, we get a much faster rate: Theorem 3.2 (Faster uniform convergence with symmetrized Ax).Let([0,1]d,B, ยต) be a probability space equipped with the Borel ฯƒ-field, and let X1, X2, . . .iidโˆผยต, where 10 Figure 4: The points in fFnsplitting the unit cube in every coordinate. ยตโ‰ชฮปandฮณโˆ’1<dยต dฮป< ฮณfor some ฮณโ‰ฅ1. Define the empirical ฯƒ-fields fFn:=ฯƒ({x: xiโ‰คXj,i}: 1โ‰คiโ‰คd,1โ‰คjโ‰คn). Then sup โˆฅfโˆฅLipโ‰ค1โˆฅE[f|fFn]โˆ’fโˆฅL1(ยต)P-a.s.,L1(P)โˆ’ โˆ’ โˆ’ โˆ’ โˆ’ โˆ’ โˆ’ โ†’ 0, where โˆฅfโˆฅLip:= sup {|f(x)โˆ’f(y)| |xโˆ’y|:xฬธ=y}is the Lipschitz norm. Moreover, E" sup โˆฅfโˆฅLipโ‰ค1โˆฅE[f|fFn]โˆ’fโˆฅL1(ยต)# โ‰ฒโˆš d nโˆ€nโ‰ฅ1. The constant factor in the bound depends only on ฮณ. The proof is in Appendix A. Remark 3.1. By scaling the sides of the box by
constants, the results in Theorem 3.1 and Theorem 3.2 apply to boxes in Rd which are not [0,1]d. We incur only an extra multiplicative factor of the volume of the box in our bound. Similarly, if we allow f to be L-Lipschitz, we incur only a factor of L.

Remark 3.2. The bound in Theorem 3.2 is tight. For a lower bound, consider the example f(x) = x1 and µ = λ. The partition is an axis-aligned grid, so the conditional expectation of f in any set of the partition is just the average of the maximal and minimal x1 values for that box. So the integral is independent of the latter d − 1 coordinates, and the problem reduces to a 1-dimensional problem. Denoting the first coordinate of each Xj as X1,1, X2,1, . . . , Xn,1 and denoting the order statistics of these values as 0 = Y0 < Y1 < · · · < Yn < Yn+1 = 1, we write ∥E[f | fFn] − f∥L1(µ) = ∑nk=0 ∫_{Yk}^{Yk+1} |(Yk + Yk+1)/2 − x1| dx1 = ∑nk=0 (Yk+1 − Yk)2/4. Taking expectations, we get E[∥E[f | Fn] − f∥L1(µ)] ≥ E[∥E[f | fFn] − f∥L1(µ)] = (1/4) ∑nk=0 E[(Yk+1 − Yk)2], where the Yk are the order statistics of n iid uniform random variables on [0,1]. The differences of these successive order statistics are Beta(1, n) distributed, so this equals (1/4)(n + 1) · 2/((n + 1)(n + 2)) = 1/(2(n + 2)). The σ-field fFn is a refinement of Fn, so the lower bound of order 1/n applies to Fn as well.

4 Applications

4.1 Randomized Skorokhod embeddings

Skorokhod ([Sko61], translated into English [Sko65]) posed and solved the problem of embedding distributions of real-valued random variables into Brownian motion by stopping the process at suitably constructed random times. Since then, many solutions to the Skorokhod embedding problem have been discovered, with varying properties of interest; see [Obł04] for a survey detailing the various constructions and their historical context and [BCH17] for a more recent work unifying many solutions to the problem. Of particular note for our purposes is Dubins' 1968 solution to the Skorokhod embedding problem [Dub68].
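Dubins' solution, recalled next, splits the line at conditional means. For the uniform distribution on [−0.5, 0.5] (the law embedded in Figure 7 below), every conditional mean E[X | a < X < b] is the midpoint (a + b)/2, so the barrier levels have a closed form; a sketch (ours, not from the paper):

```python
def dubins_barriers(levels, lo=-0.5, hi=0.5):
    """Barrier points of Dubins' binary splitting for X ~ Unif[lo, hi].

    For the uniform law, E[X | a < X < b] = (a + b) / 2, so the new
    barriers at each level are midpoints of the current intervals.
    """
    edges = [lo, (lo + hi) / 2, hi]  # start by splitting at E[X]
    barriers = []
    for _ in range(levels):
        mids = [(a + b) / 2 for a, b in zip(edges[:-1], edges[1:])]
        barriers.append(mids)
        edges = sorted(edges + mids)
    return barriers

print(dubins_barriers(2))
# [[-0.25, 0.25], [-0.375, -0.125, 0.125, 0.375]]
```

For a general µ the midpoints would be replaced by the conditional means E[X | a < X < b] computed under µ.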
By adjusting Dubins' solution, we will provide a method of randomly generating Skorokhod embeddings for a given distribution µ.

Dubins' construction proceeds via a binary splitting martingale. Suppose X ∼ µ (with E[X] = 0) and we want to generate the distribution of X via a stopping time T for Brownian motion (meaning BT =d X). We first create barriers for the Brownian motion at the points x1 := E[X | X < 0] and x2 := E[X | X > 0] and let T1 := inf{t > 0 : Bt ∈ {x1, x2}}. This divides the line into four intervals.

Figure 5: Top: first step of Dubins' binary splitting, with barriers x1 and x2. Bottom: refinement using y1, . . . , y4.

For the next step, we divide each of the intervals in two by adding more barriers. In particular, we add barriers y1 := E[X | X ≤ x1], y2 := E[X | x1 < X < 0], y3 := E[X | 0 < X ≤ x2], y4 := E[X | X > x2] and let T2 := inf{t > T1 : Bt ∈ {y1, y2, y3, y4}}. See Figure
5 for an illustration. Repeating this process, we end up with a sequence of stopping times (Tn)∞n=1 for Brownian motion such that BTn equals, with equal probability, any of the 2^n level-n barrier points. In fact, a more careful analysis of this process shows that BTn =d E[X | Bn], where Bn is the σ-field representing the partition of the interval by all barrier points up to level n. Taking T := limn→∞ Tn gives us the stopping time we desire. Figure 6 illustrates the first few steps of this process on a simulated Brownian motion.

Figure 6: The first 3 steps of stopping times in Dubins' construction.

The key insight for this application of our framework is that Dubins' meticulously constructed "dyadic" partitions of the line are not actually necessary. We will show that any (deterministic) sequence of partitions adding 1 point at a time suffices for the embedding, provided that the information resolution of the partitions asymptotically captures the degree of resolution associated to µ. Applying our framework in the context of generating random partitions from iid sampling, we obtain random Skorokhod embeddings.

The following theorem constructs a Skorokhod embedding for a (deterministic) sequence of partitions.

Theorem 4.1. Let µ be a distribution on R with mean zero and finite second moment, and let X ∼ µ. Let (xn)∞n=1 be a sequence of real numbers, and let Fn := σ((−∞, x1], . . . , (−∞, xn]) define a filtration such that Fn ↑ B, i.e. ⋁∞n=1 Fn and the Borel σ-field B differ only by µ-null sets. There exists a stopping time T(x1, x2, . . .) for Brownian motion such that, P-a.s., BT =d X and E[T] = E[X2].

Proof. Define a sequence of stopping times by T0 = 0 and Tn+1 := inf{t > Tn : Bt ∈ ran(E[X | Fn])}. Then T0 ≤ T1 ≤ T2 ≤ · · · , so there exists a (possibly infinite) stopping time T = limn→∞ Tn.
Moreover, BTnd=E[X| Fn] for each n, asBTn+1|BTn=xis equal to or supported on the same two points as E[X| Fn+1]|E[X| Fn] =x, and E[BTn+1|BTn=x] =x=E[E[X| Fn+1]|E[X| Fn] =x]. The latter equality is due to the fact that E[E[X| Fn+1]| Fn] =E[X| Fn]. This lets us bound the size of Tn, as E[Tn] =E[E[B2 Tn|Tn]] =E[B2 Tn] =E[(E[X| Fn])2]โ‰คE[X2], where we have used the tower property of conditional expectation and the conditional version of Jensenโ€™s inequality. By the monotone convergence theorem, E[T]โ‰คE[X2], from which we conclude that T <โˆža.s. Now, by Theorem 2.1 and the martingale convergence theorem, E[X| Fn] converges in distribution to X. By the continuity of Brownian motion paths, BTnconverges in distribution to BT. Thus, we may conclude thatBTd=X, from which we conclude E[T] =E[E[B2 T|T]] =E[B2 T] =E[X2]. Corollary 4.1 (Randomized Skorokhod embedding) .Letยตbe a distribution on R with mean zero and finite second moment, and let X, X 1, X2, . . .iidโˆผยต. There exists a randomized (depending on X1, X2, . . .) stopping time Tfor Brownian motion such that,P-a.s., BTd=XandE[T|X1, X2, . . .] =E[X2]. Proof. We apply Theorem 4.1 to the sequence of empirical ฯƒ-fields given by Fn:= ฯƒ((โˆ’โˆž, X1], . . . , (โˆ’โˆž, Xn]). Theorem 2.1 shows that Fnโ†‘ B. Remark 4.1. It is not
necessary for X1, X2, . . . to be sampled from the same measure as X. Theorem 4.1 still holds if we sample X1, X2, . . . iid∼ ν, provided that supp ν ⊇ supp µ. This has the interesting consequence that there exist universal generating measures for randomized Skorokhod embeddings. For example, if ν is the standard normal distribution (or any other measure with full support), then sampling X1, X2, . . . iid∼ ν generates a randomized Skorokhod embedding construction which is valid for any µ.

This construction yields different Skorokhod embeddings for each sequence of values X1, X2, . . .. See Figure 7 for a simulation comparing Dubins' classical Skorokhod embedding and two independent randomized Skorokhod embeddings on the same Brownian motion.

Figure 7: Stopping times for Dubins' embedding and two independent randomized embeddings on the same Brownian motion. Here, we are embedding the uniform distribution on [−0.5, 0.5].

4.2 Random splitting random forests

Our second example application of this framework is to obtain uniform risk bounds for randomized regression trees in a random forest. Random forest models [Bre01] are popular machine learning tools for tasks such as classification and regression. In the case of regression, the model constructs a number of regression trees, with splits determined by some optimal choice of splitting along a randomly selected subset of the feature coordinates; see Figure 8 for an illustration of splitting the feature space. Then, within each box of the feature space, the model reports the average of the values of the data points in that box. The key facet relating regression trees to our considerations is that a regression tree is essentially reporting the conditional expectation with respect to a partition of the feature space. From this perspective, we build our tree by refining the partition, i.e. by increasing the resolution of the associated σ-field.
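The identification of a regression tree with a conditional expectation is concrete: on each cell of the partition, the tree reports the empirical mean of the responses landing in that cell. A schematic sketch (ours, not the paper's estimator; `cell_of` is a hypothetical stand-in for the tree's split structure):

```python
import numpy as np

def tree_predict(X, Y, cell_of, x):
    """Predict at x by averaging the Y-values whose X falls in the same
    partition cell, i.e. the empirical conditional expectation E[Y | cell].
    `cell_of` maps a point to a hashable cell label."""
    cells = {}
    for xi, yi in zip(X, Y):
        cells.setdefault(cell_of(xi), []).append(yi)
    vals = cells[cell_of(x)]
    return sum(vals) / len(vals)

# 1-d illustration: cells are the quarters [k/4, (k+1)/4) of [0, 1).
rng = np.random.default_rng(0)
X = rng.uniform(size=1000)
Y = X + rng.normal(scale=0.1, size=1000)
pred = tree_predict(X, Y, lambda x: int(x * 4), 0.6)
print(pred)  # close to the cell average of f(x) = x on [0.5, 0.75), i.e. 0.625
```

Refining `cell_of` (more splits) increases the resolution of the associated σ-field, exactly as described above.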
So we can study the error incurred in building our tree via convergence of the σ-fields representing these partitions. For a regression tree, even with an infinite amount of data, performance is bottlenecked by the coarseness of the resolution. Here, we use the notion of information resolution to address the following question: given infinite data, how does the error decay as the resolution becomes finer? While we focus on the infinite-data setting for simplicity, similar ideas could be used to study the trade-off between sample size and resolution.

We can alter the standard random forest model by constructing regression trees using random splits, similarly to the Extra-Trees algorithm from [GEW06]. That is, we pick random points G1, . . . , Gm iid∼ ν and construct a partition from these points. For example, we could construct a grid using all axis-parallel lines passing through G1, . . . , Gm, or we could use an asymmetric partition such as the one in Theorem 3.1; Figure 9 illustrates this variant of random splits.

Figure 8: Axis-parallel splits of the feature space in a regression tree.

In this setting, our Theorem 3.1 essentially immediately provides a bound on the risk, where the parameter f can even
be chosen adversarially against our regression tree estimator. For simplicity, we will treat the case of the partitions from Theorem 3.1 and Theorem 3.2, but the same analysis could be carried out with other choices of randomized sets Ay. Theorem 4.2 (Random splitting regression tree loss) .Let(Xi, Yi)N i=1be drawn iid according to Y=f(X) +ฮต, where ฮตis independent of XwithE[ฮต] = 0 andVar(ฮต) = ฯƒ2. Draw (Gk)1โ‰คkโ‰คmiidโˆผฮฝwith ฮณโˆ’1<dฮฝ dฮป< ฮณ for some ฮณโ‰ฅ1, define Fm:= ฯƒ(AG1, . . . , A Gm)with Ay:={x:xiโ‰คyiโˆ€1โ‰คiโ‰คd}, and define the random splitting regression tree estimator bf(x):=1 |Rx|X i:XiโˆˆRxYi, where Rxis the set containing xin the finest partition given by Fm. Then lim sup Nโ†’โˆžsup โˆฅfโˆฅLipโ‰ค1Eh โˆฅbfโˆ’fโˆฅL1(ยต)i โ‰ฒlogm m1/d . If we use gFm:=ฯƒ({x:xiโ‰คXj,i}: 1โ‰คiโ‰คd,1โ‰คjโ‰คm)in place of Fm, then lim sup Nโ†’โˆžsup โˆฅfโˆฅLipโ‰ค1Eh โˆฅbfโˆ’fโˆฅL1(ยต)i โ‰ฒโˆš d m. 16 x1x2 Figure 9: Splitting the feature space using random points for an asymmetric partition. Remark 4.2. We only take the limit as Nโ†’ โˆž (infinitely many samples) to guarantee that every set in the partition of the feature space a.s. contains at least 1 data point (so that bfis well-defined). Depending on the choice of sets Ay(and their ensuing geometry), one may calculate the relationship between Nandmto ensure that with high probability, no partition set is empty. Proof. We will treat the case of Fm; the proof for gFmis similar. In taking the limit asNโ†’ โˆž , we may assume that all grid boxes contain at least one Xi, so that bfis well-defined. Then, using the triangle inequality, lim sup Nโ†’โˆžsup โˆฅfโˆฅLipโ‰ค1Eh โˆฅbfโˆ’fโˆฅL1(ยต)i โ‰คlim sup Nโ†’โˆžsup โˆฅfโˆฅLipโ‰ค1Eh โˆฅbfโˆ’E[f| Fm]โˆฅL1(ยต)i + lim sup Nโ†’โˆžsup โˆฅfโˆฅLipโ‰ค1E โˆฅE[f| Fm]โˆ’fโˆฅL1(ยต) Theorem 3.1 upper bounds the latter term by O logm m1/d . 
The former term can be controlled by noting that for any f with ∥f∥Lip ≤ 1, ∥bf − E[f | Fm]∥L1(µ) ≤ ∫ |(1/|Rx|) ∑_{i:Xi∈Rx} f(Xi) − (1/µ(Rx)) ∫_{Rx} f(y) dµ(y)| dµ(x) ≤ ∫ (1/|Rx|) ∑_{i:Xi∈Rx} (1/µ(Rx)) ∫_{Rx} |f(Xi) − f(y)| dµ(y) dµ(x) ≤ ∫ diam(Rx) dµ(x) = ∑_{R∈Pm} µ(R) diam(R), where Pm denotes the finest partition given by Fm. Bounding this quantity as in the proof of Theorem 3.1, we get that the second term is O((log m/m)^{1/d}), as claimed.

By averaging independently randomized regression trees, one may construct random forests without the need for bootstrap aggregation, optimizing the split points, or random selection of features. Figure 10 compares the performance of such random splitting random forests (with 10 trees, using asymmetric and symmetric partitions) on the California housing dataset, originally introduced in [KB97] and now available through the scikit-learn library, as the number of random splits increases. As predicted by Theorem 4.2, the symmetric partition requires vastly fewer random split points to make accurate predictions.

Figure 10: Performance of asymmetric and symmetric random splitting random forests for predicting California housing prices.

Acknowledgements. The author would like to thank Sinho Chewi, Steve Evans, Shirshendu Ganguly, Arvind Prasadan, and João Vitor Romano for many helpful comments and conversations. While writing this paper, the author was supported by a Two Sigma PhD Fellowship and a research contract with Sandia National Laboratories, a U.S. Department of Energy multimission laboratory.

References

[BCH17] Mathias Beiglböck, Alexander M. G. Cox,
and Martin Huesmann. Optimal transport and Skorokhod embedding. Inventiones Mathematicae, 208:327–400, 2017.

[Boy71] Edward S. Boylan. Equiconvergence of martingales. The Annals of Mathematical Statistics, 42(2):552–559, 1971.

[Bre01] Leo Breiman. Random forests. Machine Learning, 45:5–32, 2001.

[Dub68] Lester E. Dubins. On a theorem of Skorohod. The Annals of Mathematical Statistics, 39(6):2094–2097, 1968.

[GEW06] Pierre Geurts, Damien Ernst, and Louis Wehenkel. Extremely randomized trees. Machine Learning, 63(1):3–42, 2006.

[KB97] R. Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33(3):291–297, 1997.

[Kud74] Hirokichi Kudo. A note on the strong convergence of σ-algebras. The Annals of Probability, 2(1):76–83, 1974.

[MBNWW21] Tudor Manole, Sivaraman Balakrishnan, Jonathan Niles-Weed, and Larry Wasserman. Plugin estimation of smooth optimal transport maps. arXiv preprint arXiv:2107.12364, 2021.

[Nev72] Jacques Neveu. Note on the tightness of the metric on the set of complete sub σ-algebras of a probability space. The Annals of Mathematical Statistics, 43(4):1369–1371, 1972.

[Obł04] Jan Obłój. The Skorokhod embedding problem and its offspring. Probability Surveys, 1:321–392, 2004.

[Rig21] Séverine Rigot. Differentiation of measures in metric spaces. In New Trends on Analysis and Geometry in Metric Spaces: Levico Terme, Italy 2017, pages 93–116. Springer, 2021.

[Rog74] Lothar Rogge. Uniform inequalities for conditional expectations. The Annals of Probability, 2(3):486–489, 1974.

[Sko61] A. V. Skorohod. Issledovaniya po teorii sluchainykh protsessov (Stokhasticheskie differentsialnye uravneniya i predelnye teoremy dlya protsessov Markova). Izdat. Kiev. Univ., Kiev, 1961.

[Sko65] A. V. Skorokhod. Studies in the theory of random processes. Addison-Wesley Publishing Co., Inc., Reading, MA, 1965. Translated from the Russian by Scripta Technica, Inc.
[Vid18] Matija Vidmar. A couple of remarks on the convergence of σ-fields on probability spaces. Statistics & Probability Letters, 134:86–92, 2018.

[VZ93] Timothy Van Zandt. The Hausdorff metric of σ-fields and the value of information. The Annals of Probability, pages 161–167, 1993.

A Proofs of Theorems 3.1 and 3.2

Theorem 3.1. Let $([0,1]^d, \mathcal{B}, \mu)$ be a probability space equipped with the Borel $\sigma$-field, and let $X_1, X_2, \dots \stackrel{iid}{\sim} \mu$, where $\mu \ll \lambda$ and $\gamma^{-1} < \frac{d\mu}{d\lambda} < \gamma$ for some $\gamma \ge 1$. For $x = (x_1, \dots, x_d) \in [0,1]^d$, let $A_x := [0, x_1] \times \cdots \times [0, x_d]$, and define the empirical $\sigma$-fields $\mathcal{F}_n := \sigma(A_{X_1}, \dots, A_{X_n})$. Then
\[
\sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \mathcal{F}_n] - f \|_{L^1(\mu)} \longrightarrow 0 \quad \text{$P$-a.s. and in } L^1(P),
\]
where $\|f\|_{\mathrm{Lip}} := \sup\big\{ \frac{|f(x)-f(y)|}{|x-y|} : x \ne y \big\}$ is the Lipschitz norm. Moreover,
\[
\mathbb{E}\bigg[ \sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \mathcal{F}_n] - f \|_{L^1(\mu)} \bigg] \lesssim \bigg( \frac{\log n}{n} \bigg)^{1/d} \qquad \forall\, n \ge 3.
\]
The constant factor in the bound depends only on $d$ and $\gamma$.

We first reduce the problem to the geometric problem of constructing a fine mesh partition of the support of $\mu$.

Lemma A.1. Fix the values of $X_1, X_2, \dots$, and denote by $\mathcal{P}_n$ the finest partition given by the $\sigma$-field $\mathcal{F}_n$ (omitting any $\mu$-null sets). Then
\[
\sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \mathcal{F}_n] - f \|_{L^1(\mu)} \le \sum_{A \in \mathcal{P}_n} \mu(A)\, \mathrm{diam}(A),
\]
where $\mathrm{diam}(A) := \sup\{ |x - y| : x, y \in A \}$.

Proof of Lemma A.1.
\begin{align*}
\sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \mathcal{F}_n] - f \|_{L^1(\mu)}
&= \sup_{\|f\|_{\mathrm{Lip}} \le 1} \int_{[0,1]^d} \bigg| \sum_{A \in \mathcal{P}_n} \mathbb{E}[f \mid A]\, \mathbf{1}_A(x) - f(x) \bigg| \, d\mu(x) \\
&\le \sup_{\|f\|_{\mathrm{Lip}} \le 1} \sum_{A \in \mathcal{P}_n} \int_A | \mathbb{E}[f \mid A] - f(x) | \, d\mu(x) \\
&\le \sup_{\|f\|_{\mathrm{Lip}} \le 1} \sum_{A \in \mathcal{P}_n} \frac{1}{\mu(A)} \int_A \int_A | f(y) - f(x) | \, d\mu(y)\, d\mu(x) \\
&\le \sum_{A \in \mathcal{P}_n} \mu(A)\, \mathrm{diam}(A). \qquad \square
\end{align*}

To bound the diameter, we use
a slightly modified version of the approach taken for the proof of Lemma 40 in [MBNWW21], which essentially uses a covering argument phrased in terms of Vapnik–Chervonenkis dimension.

Proof of Theorem 3.1. We first prove the $L^1(P)$-convergence rate bound. Fix $0 < \delta < 1$, and consider a mesh dividing $[0,1]^d$ into cubes $C$ of side length $\varepsilon = \big( \frac{\gamma \log(n/\delta)}{n} \big)^{1/d}$. Then, with probability $\ge 1 - \delta$, each cube in the mesh contains some sample point $X_i$ with $1 \le i \le n$, because
\begin{align*}
P(\text{some cube has no samples}) &\le \sum_C P(C \text{ has no samples}) \\
&= \sum_C (1 - \mu(C))^n \\
&\le \sum_C (1 - \gamma^{-1} \varepsilon^d)^n \\
&= (1/\varepsilon)^d (1 - \gamma^{-1} \varepsilon^d)^n \\
&= \frac{n}{\gamma \log(n/\delta)} \bigg( 1 - \frac{\log(n/\delta)}{n} \bigg)^n \\
&\le \frac{n}{\gamma \log(n/\delta)} \exp(-\log(n/\delta)) = \frac{\delta}{\gamma \log(n/\delta)} \le \delta.
\end{align*}
To upper bound $\sum_{A \in \mathcal{P}_n} \mu(A)\,\mathrm{diam}(A)$, first note that this quantity is monotonically nonincreasing in $n$, as splitting a set $A$ into multiple pieces cannot increase the diameter of any piece. So it suffices to bound this quantity when we throw away all sample points $X_i$ except for one sample point in each mesh cube $C$. We do so on the aforementioned event of probability $\ge 1 - \delta$. Excepting the set $L \in \mathcal{P}_n$ containing the point $(1, \dots, 1)$, the diameter of any set $A \in \mathcal{P}_n \setminus \{L\}$ must be $\le \varepsilon \sqrt{d}$. The diameter of the corner set $L$ will be $\le \sqrt{d}$ (the diameter of $[0,1]^d$), but on this event, $\lambda(L) \le d\varepsilon$. Thus, we may bound
\begin{align*}
\sum_{A \in \mathcal{P}_n} \mu(A)\,\mathrm{diam}(A) &\le \gamma \sum_{A \in \mathcal{P}_n} \lambda(A)\,\mathrm{diam}(A) \\
&= \gamma\, \lambda(L)\,\mathrm{diam}(L) + \gamma \sum_{A \in \mathcal{P}_n \setminus \{L\}} \lambda(A)\,\mathrm{diam}(A) \\
&\le \gamma d^{3/2} \varepsilon + \gamma \varepsilon \sqrt{d} \sum_{A \in \mathcal{P}_n \setminus \{L\}} \lambda(A) \\
&\le 2 \gamma d^{3/2} \varepsilon.
\end{align*}
So, if we denote $K = 2 \gamma^{1 + 1/d} d^{3/2}$, using Lemma A.1 gives
\[
\sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \mathcal{F}_n] - f \|_{L^1(\mu)} \le K \bigg( \frac{\log(n/\delta)}{n} \bigg)^{1/d}
\]
with probability $\ge 1 - \delta$. Writing $\delta = n \exp\big( -\frac{u^d n}{K^d} \big)$ for $u > 0$ makes this right-hand side equal to $u$. Thus, applying the argument over all $u > 0$ (that is, varying $\delta$ throughout $(0,1)$), we may estimate
\[
\mathbb{E}\bigg[ \sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \mathcal{F}_n] - f \|_{L^1(\mu)} \bigg] = \int_0^\infty P\bigg( \sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \mathcal{F}_n] - f \|_{L^1(\mu)} > u \bigg)\, du.
\]
Picking a cutoff parameter $t_n = K \big( \frac{2 \log n}{d n} \big)^{1/d}$, this is
\[
\le t_n + n \int_{t_n}^\infty \exp\bigg( -\frac{u^d n}{K^d} \bigg)\, du.
\]
Making the change of variables $v = u^{d/2}$,
\begin{align*}
&= t_n + \frac{2n}{d} \int_{t_n^{d/2}}^\infty \exp\bigg( -\frac{v^2 n}{K^d} \bigg) v^{2/d - 1}\, dv \\
&\lesssim t_n + n \int_{t_n^{d/2}}^\infty \exp\bigg( -\frac{v^2 n}{2 K^d} \bigg) v\, dv \\
&= t_n + K^d \exp\bigg( -\frac{t_n^d n}{2 K^d} \bigg) \\
&= K \bigg( \frac{2 \log n}{d n} \bigg)^{1/d} + \frac{K^d}{n^{1/d}} \\
&\lesssim \bigg( \frac{\log n}{n} \bigg)^{1/d}.
\end{align*}
The $P$-a.s. convergence follows from the $L^1(P)$ convergence and the fact that $H_n := \sum_{A \in \mathcal{P}_n} \mu(A)\,\mathrm{diam}(A)$ is nonincreasing in $n$ for each $\omega \in \Omega$. Indeed, if we let $E = \{ \omega \in \Omega : \limsup_{n \to \infty} H_n(\omega) > 0 \}$, then
\[
\limsup_{n \to \infty} \mathbb{E}[H_n \mathbf{1}_E] \le \limsup_{n \to \infty} \mathbb{E}[H_n] = 0.
\]
But $\limsup_{n \to \infty} \mathbb{E}[H_n \mathbf{1}_E] = \mathbb{E}[(\limsup_{n \to \infty} H_n) \mathbf{1}_E] > 0$ if $P(E) > 0$, so we must have $P(E) = 0$. $\square$

Theorem 3.2 (Faster uniform convergence with symmetrized $A_x$). Let $([0,1]^d, \mathcal{B}, \mu)$ be a probability space equipped with the Borel $\sigma$-field, and let $X_1, X_2, \dots \stackrel{iid}{\sim} \mu$, where $\mu \ll \lambda$ and $\gamma^{-1} < \frac{d\mu}{d\lambda} < \gamma$ for some $\gamma \ge 1$. Define the empirical $\sigma$-fields $\widetilde{\mathcal{F}}_n := \sigma(\{ x : x_i \le X_{j,i} \} : 1 \le i \le d,\, 1 \le j \le n)$. Then
\[
\sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \widetilde{\mathcal{F}}_n] - f \|_{L^1(\mu)} \longrightarrow 0 \quad \text{$P$-a.s. and in } L^1(P),
\]
where $\|f\|_{\mathrm{Lip}}$ is the Lipschitz norm as above. Moreover,
\[
\mathbb{E}\bigg[ \sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \widetilde{\mathcal{F}}_n] - f \|_{L^1(\mu)} \bigg] \lesssim \frac{\sqrt{d}}{n} \qquad \forall\, n \ge 1.
\]
The constant factor in the bound depends only on $\gamma$.

Proof. As before, it suffices to prove the expectation bound. Let $\mathcal{P}_n$ denote the finest partition given by the $\sigma$-field $\widetilde{\mathcal{F}}_n$ (omitting any $\mu$-null sets). We first bound the quantity of interest by the average distance between a point in $[0,1]^d$ and the upper right corner of the partition box it lies in. Then
\begin{align*}
\sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \widetilde{\mathcal{F}}_n] - f \|_{L^1(\mu)}
&= \sup_{\|f\|_{\mathrm{Lip}} \le 1} \int_{[0,1]^d} \bigg| \sum_{A \in \mathcal{P}_n} \mathbb{E}[f \mid A]\, \mathbf{1}_A(x) - f(x) \bigg| \, d\mu(x) \\
&\le \sup_{\|f\|_{\mathrm{Lip}} \le 1} \sum_{A \in \mathcal{P}_n} \int_A | \mathbb{E}[f \mid A] - f(x) | \, d\mu(x) \\
&\le \sup_{\|f\|_{\mathrm{Lip}} \le 1} \sum_{A \in \mathcal{P}_n} \frac{1}{\mu(A)} \int_A \int_A | f(y) - f(x) | \, d\mu(y)\, d\mu(x) \\
&\le \gamma^3 \sum_{A \in \mathcal{P}_n} \frac{1}{\lambda(A)} \int_A \int_A \| y - x \|_2 \, dy\, dx \\
&\le \gamma^3 \sum_{A \in \mathcal{P}_n} \frac{1}{\lambda(A)} \int_A \int_A \| y - u^A \|_2 + \| u^A - x \|_2 \, dy\, dx,
\end{align*}
where $u^A$ is the upper corner of the set $A$: $u^A_i = \min\{ X_{j,i} : X_{j,i} \ge x_i\ \forall x \in A \}$ for $1 \le i \le d$ (and $u^A_i = 1$ if no such points exist). Since both summands contribute equally, this equals
\[
2 \gamma^3 \sum_{A \in \mathcal{P}_n} \int_A \| y - u^A \|_2 \, dy = 2 \gamma^3 \int_{[0,1]^d} \| y - u^{A_y} \|_2 \, dy,
\]
where $A_y$ denotes the $A \in \mathcal{P}_n$ containing $y$. Taking expectations and applying Cauchy–Schwarz, we get
\begin{align*}
\mathbb{E}\bigg[ \sup_{\|f\|_{\mathrm{Lip}} \le 1} \| \mathbb{E}[f \mid \widetilde{\mathcal{F}}_n] - f \|_{L^1(\mu)} \bigg]
&\le 2 \gamma^3\, \mathbb{E}\bigg[ \int_{[0,1]^d} \| y - u^{A_y} \|_2 \, dy \bigg] \\
&\le 2 \gamma^3 \sqrt{ \mathbb{E}\bigg[ \int_{[0,1]^d} \| y - u^{A_y} \|_2^2 \, dy \bigg] } \\
&= 2 \gamma^3 \sqrt{d} \sqrt{ \mathbb{E}\bigg[ \int_{[0,1]^d} (y_1 - u^{A_y}_1)^2 \, dy \bigg] }.
\end{align*}
Since every $A \in \mathcal{P}_n$ is an axis-parallel box, $u^{A_y}_1 = \min\{ X_{j,1} : X_{j,1} \ge y_1 \}$; so the integral depends only on the first coordinate. Denoting the order statistics of the values $X_{1,1}, \dots, X_{n,1}$ as $0 = Y_0 < Y_1 < \cdots < Y_n < Y_{n+1} = 1$, this is
\[
= 2 \gamma^3 \sqrt{d} \sqrt{ \sum_{k=0}^n \mathbb{E} \int_{Y_k}^{Y_{k+1}} (y - Y_{k+1})^2 \, dy }
= 2 \gamma^3 \sqrt{d} \sqrt{ \sum_{k=0}^n \mathbb{E}\, \frac{(Y_{k+1} - Y_k)^3}{3} }.
\]
The distances between successive order statistics of uniform random variables on $[0,1]$ have $\mathrm{Beta}(1, n)$ distribution. So this is
\[
\lesssim \sqrt{d} \sqrt{ (n+1) \frac{1}{(n+1)(n+2)(n+3)} } \le \frac{\sqrt{d}}{n}. \qquad \square
\]
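The Beta(1, n) spacing fact used in the last step can be checked numerically. The sketch below is an illustration only, not part of the proof; the value of $n$, the number of repetitions and the seed are arbitrary choices. It compares a Monte Carlo estimate of $\mathbb{E}[(Y_1 - Y_0)^3]$ (the cube of the minimum of $n$ iid uniforms) against the closed form $6/((n+1)(n+2)(n+3))$.

```python
import numpy as np

# Numerical illustration (not part of the proof): the first spacing
# Y_1 - Y_0 = min_i X_{i,1} of n iid Uniform(0,1) samples has a Beta(1, n)
# law, so its third moment is 6 / ((n+1)(n+2)(n+3)).
rng = np.random.default_rng(0)
n, reps = 20, 200_000
samples = rng.random((reps, n))
first_spacing = samples.min(axis=1)      # Y_1 - Y_0 with Y_0 = 0
mc = np.mean(first_spacing ** 3)         # Monte Carlo estimate of E[(Y_1 - Y_0)^3]
exact = 6 / ((n + 1) * (n + 2) * (n + 3))
print(mc, exact)
```

The same check applies to any of the other spacings, since they are exchangeable.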
arXiv:2505.08045v1 [math.ST] 12 May 2025

Measures of association for approximating copulas

Marcus Rockel

May 23, 2025

Department of Quantitative Finance, Institute for Economics, University of Freiburg, Rempartstr. 16, 79098 Freiburg, Germany, marcus.rockel@finance.uni-freiburg.de

Abstract

This paper studies closed-form expressions for multiple association measures of copulas commonly used for approximation purposes, including Bernstein, shuffle-of-min, checkerboard and check-min copulas. In particular, closed-form expressions are provided for the recently popularized Chatterjee's xi (also known as Chatterjee's rank correlation), which quantifies the dependence between two random variables. Given any bivariate copula $C$, we show that the closed-form formula for Chatterjee's xi of an approximating checkerboard copula serves as a lower bound that converges to the true value of $\xi(C)$ as one lets the grid size $n \to \infty$.

Keywords: Bernstein copula, checkerboard copula, check-min, check-w, Chatterjee's xi, Kendall's tau, Spearman's rho, shuffle-of-min, tail dependence coefficients

1 Introduction

Measures of association, most prominently Spearman's rho and Kendall's tau, are fundamental tools for studying statistical dependence. [6] and [5] popularized a dependence measure of one random variable on another, which we shall refer to as Chatterjee's xi. Closed-form expressions for these statistics exist, however, only for a handful of copula families (see [4, Table 6]); in general one must resort to numerical or sampling-based procedures. Near the boundaries of the unit square, such procedures often become unstable because they require evaluating (conditional) distribution functions where numerical precision can quickly become poor. This motivates the search for analytically convenient copula approximations that allow reliable and efficient computation of dependence measures.
We therefore study measures of association for several popular approximation families. In Section 2, we introduce the basic concepts and notation used in this paper, including the specific types of copulas that are of interest and the considered measures of association. Bernstein and checkerboard constructions, in particular, have a rich history and broad practical use, see, e.g., [15, 16, 11, 22, 7, 14, 24]. Closed-form formulas for Spearman's rho and Kendall's tau are already known for these families, the most elegant arguably appearing in [11]. In Section 3, we extend this catalogue by deriving explicit formulas for Chatterjee's xi not only for Bernstein and checkerboard copulas, but also for the check-min and check-$w$ variants, whose grid cells exhibit perfect dependence. We additionally collect complete closed-form expressions for Spearman's rho, Kendall's tau, and the tail-dependence coefficients, thereby unifying and extending earlier results. In Section 4, we focus on bounding Chatterjee's xi via checkerboard approximations. Combining Proposition 3.3 with Theorem 4.1, we establish the inequality
\[
\frac{6m}{n} \operatorname{tr}\!\big( \Delta^\top \Delta M_\xi \big) - 2 \le \xi(C), \tag{1}
\]
where $\Delta$ is the $m \times n$ matrix of copula masses on an equi-spaced grid and $M_\xi$ is defined by
\[
M_\xi = T T^\top + T^\top + \tfrac{1}{3} I_n, \qquad T_{i,j} = \mathbf{1}\{ i < j \}.
\]
Replacing $C$ by its associated checkerboard copula thus furnishes a practical estimator of $\xi$ from an analytical copula, but also from empirical data. In Theorem 4.3 we prove that the resulting sample-based estimator can be computed in $O(n \log n)$ time and converges almost surely to the true value of $\xi(C)$. Checkerboard estimators for dependence measures have been recently investigated in a broader setting in [3, Section 4], but the explicit formulas derived here allow a finer-grained analysis and faster finite-sample performance. We
conclude with an empirical comparison between our estimator and the classical one of Azadkia and Chatterjee in [5].

2 Preliminaries

In this section, we introduce the basic concepts and notation that are required to formulate the main results of this paper. First, we introduce the fundamental concept of a copula, before focusing on the specific types of copulas that are of interest in this paper. Finally, we introduce the studied measures of association.

2.1 Copulas

A bivariate copula is a function $C : [0,1]^2 \to [0,1]$ that is grounded (i.e., $C(u,0) = C(0,v) = 0$ for all $u, v \in [0,1]$), 2-increasing (meaning that for every $0 \le u_1 \le v_1 \le 1$ and $0 \le u_2 \le v_2 \le 1$ it holds that $C(v_1, v_2) - C(u_1, v_2) - C(v_1, u_2) + C(u_1, u_2) \ge 0$), and has uniform marginals (so that $C(u,1) = u$ and $C(1,v) = v$ for all $u, v \in [0,1]$). Sklar's theorem (see, e.g., [20, Theorem 2.3.3]) states that for any bivariate distribution function $F$ with univariate marginals $F_1$ and $F_2$, there exists a copula $C$ such that
\[
F(x_1, x_2) = C(F_1(x_1), F_2(x_2)) \quad \text{for all } (x_1, x_2) \in \mathbb{R}^2, \tag{2}
\]
and $C$ is uniquely determined on $\operatorname{Ran}(F_1) \times \operatorname{Ran}(F_2)$. Conversely, if $C$ is any copula and $F_1, F_2$ are univariate distribution functions, then the function defined by (2) is a bivariate distribution function.

Denote by $\mathcal{C}_2$ the collection of all bivariate copulas. Important examples include the independence copula $\Pi(u,v) = uv$, the upper Fréchet bound $M(u,v) = \min\{u,v\}$ and the lower Fréchet bound $W(u,v) = \max\{u+v-1, 0\}$. Classically, if $(X,Y) \sim C$, the upper and lower Fréchet bounds represent the extreme cases of dependence with perfect co- and countermonotonicity, respectively, whilst the independence copula represents the case of no dependence at all between $X$ and $Y$. Furthermore, for any $C \in \mathcal{C}_2$, it holds that $W \le C \le M$ pointwise on $[0,1]^2$, see standard references such as [20] or [10].

2.2 Bernstein copulas

Bernstein copulas were introduced by Sancetta and Satchell [22] as a flexible, computable tool for approximating dependence structures.
Let $C$ be a given bivariate copula and let $D$ be an $m \times n$-matrix defined by
\[
D_{i,j} = C\Big( \frac{i}{m}, \frac{j}{n} \Big) \tag{3}
\]
for $1 \le i \le m$ and $1 \le j \le n$. We refer to $D$ as the $m \times n$-grid copula matrix associated with $C$, and generally call $D$ a grid copula matrix if there exists a copula such that (3) holds for all entries of the matrix. Next, let $B_{i,m}(u)$ denote the Bernstein basis polynomial of degree $m$, defined as
\[
B_{i,m}(u) = \binom{m}{i} u^i (1-u)^{m-i}, \quad \text{for } 0 \le i \le m,\ u \in [0,1].
\]
Then, the Bernstein copula associated with the grid copula matrix $D$ is defined as
\[
C^D_B(u,v) = \sum_{i=1}^m \sum_{j=1}^n D_{i,j}\, B_{i,m}(u)\, B_{j,n}(v), \quad \text{for } (u,v) \in [0,1]^2. \tag{4}
\]
This function $C^D_B$ is indeed a copula, as shown in [22, Theorem 1] (see also [7, Theorem 2.2]). A key feature of the Bernstein copula $C^D_B$ is that it is a polynomial in both $u$ (of degree $m$) and $v$ (of degree $n$), which ensures the resulting copula is smooth. The parameters $m$ and $n$ determine the degree of the polynomial and thus control the trade-off between the smoothness of the approximation and its ability to capture fine details of the underlying dependence structure represented by $D$. If one considers a sequence of grid copula matrices $D_{m,n}$ associated with $C$ and lets $m \wedge n \to \infty$, the Bernstein copula $C^{D_{m,n}}_B$ converges uniformly to $C$, see [7, Corollary 3.1].

2.3 Shuffle-of-min copulas

The shuffle-of-min construction, introduced by Mikusiński, Sherwood and Taylor in [18] (see also [20]), produces a rich family of singular copulas that are dense in $\mathcal{C}_2$.
Fix an integer $n \ge 1$ and partition the unit interval into equal sub-intervals $I_k = [\frac{k-1}{n}, \frac{k}{n}]$ for $k = 1, \dots, n$. Denote by $S_n$ the set of all permutations of $\{1, \dots, n\}$ and let $\pi \in S_n$ be a permutation. The straight shuffle-of-min copula supported by $\pi$, denoted $C_\pi$, redistributes the probability mass of the comonotonic copula $M(u,v) = \min\{u,v\}$ equally along the $n$ diagonal line segments
\[
\Big\{ (u,v) \in I_k \times I_{\pi(k)} : v = u - \frac{k - \pi(k)}{n} \Big\}, \qquad k = 1, \dots, n,
\]
so that each segment carries mass $1/n$. Equivalently, $C_\pi$ is the distribution of $(U, V)$ where $U \sim \mathrm{U}(0,1)$ and, conditional on $U \in I_k$, one sets $V = U - \frac{k - \pi(k)}{n}$. We call $n$ the order of the shuffle and $\pi$ its shuffle permutation. More general shuffles allow unequal strip widths $p_1, \dots, p_n > 0$ with $\sum p_k = 1$ and/or segment reflections, but in this paper we restrict to equal-width straight shuffles, because they are already dense in $\mathcal{C}_2$, generate the entire attainable $(\tau, \rho)$-region and admit closed-form formulas for the concordance measures considered below.

2.4 Checkerboard, check-min and check-w copulas

Let $\Delta$ be an $m \times n$-matrix. We say that $\Delta$ is a checkerboard matrix if all entries are nonnegative and for all $1 \le i \le m$ and $1 \le j \le n$ it holds that
\[
\sum_{k=1}^m \Delta_{k,j} = \frac{1}{n}, \qquad \sum_{l=1}^n \Delta_{i,l} = \frac{1}{m}.
\]
Next, divide the interval $[0,1]$ into $m$ and $n$ equal parts, respectively, and let $I_{i,j} := \big[ \frac{i-1}{m}, \frac{i}{m} \big) \times \big[ \frac{j-1}{n}, \frac{j}{n} \big)$. If $C$ is a copula and
\[
P[(X,Y) \in I_{i,j}] = \Delta_{i,j} \tag{5}
\]
for a random vector $(X,Y) \sim C$, we say that $\Delta$ is a checkerboard matrix associated with $C$. If $C$ has a constant density within each rectangle $I_{i,j}$, then $C$ is called a checkerboard copula and we write $C = C^\Delta_\Pi$.
The copula is explicitly given by
\[
C^\Delta_\Pi(x,y) = mn \sum_{i=1}^m \sum_{j=1}^n \Delta_{i,j} \int_0^x \int_0^y \mathbf{1}_{I_{i,j}}(u,v) \, dv \, du, \tag{6}
\]
where $\mathbf{1}_{I_{i,j}}$ is the indicator function of the rectangle $I_{i,j}$. $C^\Delta_\Pi$ is indeed a copula for any checkerboard matrix $\Delta$, see [14, Section 2] or, in the square case, [16, Theorem 2.2]. Furthermore, as a simple consequence of the density being constant on each $I_{i,j}$, it holds that
\[
P[(X,Y) \le (x,y) \mid (X,Y) \in I_{i,j}] = P[X \le x \mid (X,Y) \in I_{i,j}] \, P[Y \le y \mid (X,Y) \in I_{i,j}]. \tag{7}
\]
From (7), one obtains the following expression for the copula $C^\Delta_\Pi$, which is also covered in [10, Theorem 4.1.3]: for $(u,v) \in I_{i,j}$,
\[
C^\Delta_\Pi(u,v) = \sum_{k=1}^{i-1} \sum_{l=1}^{j-1} \Delta_{k,l} + \sum_{k=1}^{i-1} \Delta_{k,j}\, (nv - j + 1) + \sum_{l=1}^{j-1} \Delta_{i,l}\, (mu - i + 1) + \Delta_{i,j}\, (nv - j + 1)(mu - i + 1). \tag{8}
\]
For a given copula $C$, considering associated $n \times n$-checkerboard matrices $\Delta_n$ yields desirable convergence properties $C^{\Delta_n}_\Pi \to C$ as $n \to \infty$, see, e.g., [16, Corollary 3.2].

Next to independence within rectangles as realized through $C^\Delta_\Pi$, it is also reasonable to consider for given $\Delta$ a perfect positive dependence within each rectangle, that is, for all $1 \le i \le m$, $1 \le j \le n$ it holds that, conditionally on $(X,Y) \in I_{i,j}$,
\[
X = \frac{nY - j + i}{m} \tag{9}
\]
almost surely. If there exist a checkerboard matrix $\Delta$ and a random vector $(X,Y) \sim C$ fulfilling (5) and (9), $C$ is called a check-min copula, and we write $C = C^\Delta_\nearrow$. Check-min approximations were considered in multiple applications, see, e.g., [19, 27, 9].
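Formula (8) translates directly into code. In the sketch below, the function name and the clamping of points with $u = 1$ or $v = 1$ into the last cell are our own conventions; the four summands are the fully covered cells, the two partial strips and the partial cell. For the uniform matrix with entries $1/(mn)$, which is the checkerboard matrix of $\Pi$, the formula collapses to $uv$.

```python
import numpy as np

def checkerboard_cdf(Delta, u, v):
    """Evaluate the checkerboard copula C^Delta_Pi at (u, v) via formula (8)."""
    m, n = Delta.shape
    # cell I_{i,j} containing (u, v); u = 1 or v = 1 is clamped into the last cell
    i = min(int(np.floor(m * u)) + 1, m)
    j = min(int(np.floor(n * v)) + 1, n)
    full = Delta[: i - 1, : j - 1].sum()                   # cells below and to the left
    col = Delta[: i - 1, j - 1].sum() * (n * v - j + 1)    # partial column strip
    row = Delta[i - 1, : j - 1].sum() * (m * u - i + 1)    # partial row strip
    cell = Delta[i - 1, j - 1] * (n * v - j + 1) * (m * u - i + 1)
    return full + col + row + cell

# uniform matrix: the checkerboard copula of the independence copula Pi
D = np.full((3, 4), 1 / 12)
print(checkerboard_cdf(D, 0.37, 0.81))   # equals 0.37 * 0.81 up to rounding
```

The same evaluation scheme extends to (11) and (13) by replacing the last summand with the min or max term.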
In analogy to (7), one can equivalently write (9) as
\[
P[(X,Y) \le (x,y) \mid (X,Y) \in I_{i,j}] = P\bigg[ Y \le \frac{mx - i + j}{n} \wedge y \,\bigg|\, (X,Y) \in I_{i,j} \bigg] \tag{10}
\]
for all $(x,y) \in [0,1]^2$, and a case separation shows that
\[
C^\Delta_\nearrow(u,v) = \sum_{k=1}^{i-1} \sum_{l=1}^{j-1} \Delta_{k,l} + \sum_{k=1}^{i-1} \Delta_{k,j}\, (nv - j + 1) + \sum_{l=1}^{j-1} \Delta_{i,l}\, (mu - i + 1) + \Delta_{i,j}\, \min\{ nv - j + 1,\, mu - i + 1 \}. \tag{11}
\]
Similar convergence properties as for the checkerboard copula hold for check-min copulas, see [19]. Lastly, consider also the check-w copula $C^\Delta_\searrow$, which represents
perfect negative dependence within squares, i.e., $\Delta$ is associated with $C^\Delta_\searrow$ and if $(X,Y) \sim C^\Delta_\searrow$ for some random vector $(X,Y)$, then for all $1 \le i \le m$, $1 \le j \le n$ it holds that, conditionally on $(X,Y) \in I_{i,j}$,
\[
X = \frac{i - 1 + j - nY}{m} \tag{12}
\]
almost surely. In particular, in analogy to (10), one can write (12) equivalently as
\[
P[X \le x, Y \le y \mid (X,Y) \in I_{i,j}] = P\bigg[ \frac{j - 1 + i - mx}{n} \le Y \le y \,\bigg|\, (X,Y) \in I_{i,j} \bigg]
\]
for all $(x,y) \in [0,1]^2$, and another case separation shows that
\[
C^\Delta_\searrow(u,v) = \sum_{k=1}^{i-1} \sum_{l=1}^{j-1} \Delta_{k,l} + \sum_{k=1}^{i-1} \Delta_{k,j}\, (nv - j + 1) + \sum_{l=1}^{j-1} \Delta_{i,l}\, (mu - i + 1) + \Delta_{i,j}\, \max\{ nv - j + mu - i + 1,\, 0 \}. \tag{13}
\]
One may also consider a generalization of the check-min and check-w copulas. For a copula $C$, we say that $C$ is an $m \times n$-perfect dependence copula if for some $(X,Y) \sim C$ it holds that, conditionally on $(X,Y) \in I_{i,j}$,
\[
Y = \frac{f_{i,j}(mX - i + 1) + j - 1}{n} \tag{14}
\]
almost surely for some Lebesgue measure preserving function $f_{i,j} : [0,1] \to [0,1]$ and all $1 \le i \le m$, $1 \le j \le n$. Here, being Lebesgue measure preserving means that
\[
\int_0^1 g(f_{i,j}(x)) \, dx = \int_0^1 g(y) \, dy
\]
for all bounded, measurable functions $g : [0,1] \to \mathbb{R}$. We let $\mathcal{C}^\Delta_{pd}$ denote the set of all $m \times n$-perfect dependence copulas associated with an $m \times n$-checkerboard matrix $\Delta$. When choosing $f_{i,j}(x) = x$ or $f_{i,j}(x) = 1 - x$ for all $1 \le i \le m$, $1 \le j \le n$, one obtains back the formulas (9) and (12), so that check-min and check-w copulas are special cases of perfect dependence copulas.

2.5 Measures of association

Two classical measures of association are Spearman's rho and Kendall's tau, which provide alternatives to the Pearson correlation coefficient that do not depend on the marginal distributions of the random variables. Both of them can be expressed as an integral over the unit square $[0,1]^2$.
That is, for a bivariate copula $C$, one can write Spearman's rho as
\[
\rho_S(C) = 12 \int_{[0,1]^2} C(u,v) \, d\lambda^2(u,v) - 3,
\]
and Kendall's tau as
\[
\tau(C) = 1 - 4 \int_{[0,1]^2} \partial_1 C(u,v)\, \partial_2 C(u,v) \, d\lambda^2(u,v),
\]
see, e.g., [10, Definitions 2.4.5 and 2.4.6]. An equivalent (and classical) interpretation of Kendall's $\tau$ is in terms of concordant and discordant pairs of observations: if $(U_1, V_1)$ and $(U_2, V_2)$ are two independent draws from the copula $C$, then
\[
\tau(C) = P[(U_1 - U_2)(V_1 - V_2) > 0] - P[(U_1 - U_2)(V_1 - V_2) < 0], \tag{15}
\]
i.e., the probability of concordance minus the probability of discordance, see, e.g., [20, Section 5.1.1]. This probabilistic view is particularly handy for copulas supported on discrete sets such as shuffle-of-min constructions (see Section 3.2).

Next to these two measures, which take values in $[-1,1]$, it is also interesting to measure the strength of dependence between two random variables $X$ and $Y$. Chatterjee's xi is one way to do this, yielding values in $[0,1]$, where the value 0 is consistent with independence between $X$ and $Y$, and 1 with perfect dependence, i.e., $Y = f(X)$ for some measurable function $f$, see [1, Theorem 2.1]. Like Spearman's rho and Kendall's tau, Chatterjee's xi can be expressed as an integral. For a bivariate copula $C$, it is
\[
\xi(C) = 6 \int_{[0,1]^2} (\partial_1 C(u,v))^2 \, d\lambda^2(u,v) - 2,
\]
compare [8] and [6]. For checkerboard, check-min and check-w copulas, the above integral formulas for Kendall's tau, Spearman's rho and Chatterjee's xi can be evaluated explicitly in terms of the underlying checkerboard matrix.

Further classical measures of association for bivariate copulas are the tail dependence coefficients, see, e.g., [13, 20]. For a given bivariate copula $C$, the lower tail dependence coefficient is defined by
\[
\lambda_L(C) = \lim_{t \to 0^+} \frac{C(t,t)}{t}, \tag{16}
\]
and the upper
tail dependence coefficient by
\[
\lambda_U(C) = 2 - \lim_{t \to 1^-} \frac{1 - C(t,t)}{1 - t}. \tag{17}
\]

3 Explicit measures of association for approximating copulas

In this section, we formulate the explicit expressions for Spearman's rho, Kendall's tau, Chatterjee's xi and the tail dependence coefficients for $m \times n$-checkerboard matrices associated with Bernstein, checkerboard, check-min and check-w copulas.

3.1 Explicit measures of association for Bernstein copulas

Let $\Gamma$ be the $m \times n$-matrix with constant entries
\[
\Gamma_{i,j} = \frac{1}{(m+1)(n+1)}, \qquad 1 \le i \le m,\ 1 \le j \le n.
\]
This matrix will appear in Spearman's rho for Bernstein copulas. Let $\Theta^{(m)}$ be the $m \times m$-matrix with entries
\[
\Theta^{(m)}_{i,j} = \frac{(i - j) \binom{m}{i} \binom{m}{j}}{(2m - i - j) \binom{2m-1}{i+j-1}}, \qquad 1 \le i, j \le m,
\]
with the convention that $0/0 = 1$. Define $\Theta^{(n)}$ analogously (of size $n \times n$). These matrices enter into Kendall's tau. For Chatterjee's xi, we introduce two more matrices to handle integrals of Bernstein polynomials and their derivatives. Let $\Omega$ be the $m \times m$-matrix whose $(i,r)$-entry is
\[
\Omega_{i,r} =
\begin{cases}
\dfrac{\binom{m}{i} \binom{m}{r}}{(2m-3) \binom{2m-4}{i+r-2}} \bigg[ ir - \dfrac{2m(m-1) \binom{i+r}{2}}{(2m-1)(2m-2)} \bigg], & \text{if } 1 \le i, r < m, \\[2ex]
\dfrac{m(m-1)(i-m) \binom{m}{i}}{(2m-1)(2m-2) \binom{2m-3}{m+i-2}}, & \text{if } 1 \le i < m,\ r = m, \\[2ex]
\dfrac{m(m-1)(r-m) \binom{m}{r}}{(2m-1)(2m-2) \binom{2m-3}{m+r-2}}, & \text{if } i = m,\ 1 \le r < m, \\[2ex]
\dfrac{m^2}{2m-1}, & \text{if } i = m,\ r = m,
\end{cases}
\]
and let $\Lambda$ be the $n \times n$-matrix whose $(j,s)$-entry is
\[
\Lambda_{j,s} := \frac{\binom{n}{j} \binom{n}{s}}{(2n+1) \binom{2n}{j+s}}.
\]
The above matrix definitions give exact formulas for Spearman's rho, Kendall's tau and Chatterjee's xi for arbitrary Bernstein copulas.
These formulas are the content of the following proposition.

Proposition 3.1 (Explicit measures of association for Bernstein copulas). Let $C = C^D_B$ be the Bernstein copula associated with the $m \times n$-grid copula matrix $D$. Then:
\[
\rho_S(C^D_B) = 12 \operatorname{tr}\big( \Gamma^\top D \big) - 3, \qquad
\tau(C^D_B) = 1 - \operatorname{tr}\big( \Theta^{(m)} D\, \Theta^{(n)} D^\top \big), \qquad
\xi(C^D_B) = 6 \operatorname{tr}\big( \Omega D \Lambda D^\top \big) - 2.
\]
Furthermore, the tail dependence coefficients are given by $\lambda_L(C^D_B) = \lambda_U(C^D_B) = 0$.

In the case of $m = n$, the above formulas for Spearman's rho and Kendall's tau can be found in [11, Theorem 9 and 10], and the rectangular case is a direct extension. The derivation of the formula for Chatterjee's xi in Proposition 3.1 is given in the appendix from page 11 onwards.

3.2 Explicit measures of association for shuffle-of-min copulas

Let $C_\pi$ be the order-$n$ straight shuffle-of-min copula determined by a permutation $\pi \in S_n$ (equal strip width $1/n$ and no reversals). Denote
\[
N_{\mathrm{inv}}(\pi) = \#\{ (i,j) : i < j,\ \pi(i) > \pi(j) \}, \qquad d_i = \pi(i) - i.
\]
The measures of association introduced in Section 2.5 admit closed algebraic forms for $C_\pi$ that depend only on these permutation statistics.

Proposition 3.2 (Measures of association for a straight shuffle-of-min copula). For the equal-width, straight shuffle-of-min copula $C_\pi$ of order $n$, we have
\[
\tau(C_\pi) = 1 - \frac{4 N_{\mathrm{inv}}(\pi)}{n^2}, \qquad
\rho_S(C_\pi) = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n^3}, \qquad
\xi(C_\pi) = 1.
\]
Furthermore, $\lambda_L(C_\pi) = \mathbf{1}\{\pi(1) = 1\}$ and $\lambda_U(C_\pi) = \mathbf{1}\{\pi(n) = n\}$.

A derivation of the formulas in Proposition 3.2 is given in the appendix from page 15 onwards. Note that similar formulas for Spearman's rho and Kendall's tau have been observed in [23, Lemma 1] and in the recent [25, Lemma 3.2], with the latter covering the Kendall's tau formula given in the proposition above in the case of symmetric permutations.
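Since the closed forms of Proposition 3.2 depend only on the permutation statistics $N_{\mathrm{inv}}(\pi)$ and $d_i$, they are immediate to compute. A brief sketch, with our own function name and a 1-based encoding of the permutation:

```python
import numpy as np

def shuffle_stats(pi):
    """tau, rho_S, xi and tail coefficients of the straight shuffle-of-min C_pi
    (Proposition 3.2), for a permutation pi of 1..n given as a list/array."""
    pi = np.asarray(pi)
    n = len(pi)
    n_inv = sum(1 for a in range(n) for b in range(a + 1, n) if pi[a] > pi[b])
    d = pi - (np.arange(n) + 1)              # displacements d_i = pi(i) - i
    tau = 1 - 4 * n_inv / n**2
    rho = 1 - 6 * np.sum(d**2) / n**3
    lam_l = 1.0 if pi[0] == 1 else 0.0
    lam_u = 1.0 if pi[-1] == n else 0.0
    return tau, rho, 1.0, lam_l, lam_u       # xi(C_pi) = 1 for every shuffle

print(shuffle_stats([1, 2, 3, 4]))   # identity permutation: C_pi = M, tau = rho = 1
print(shuffle_stats([2, 1]))         # order-2 swap: tau = 0, rho = -1/2
```

The $O(n^2)$ inversion count suffices here; a merge-sort count would bring it to $O(n \log n)$ for large orders.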
Furthermore, note that the identity for Chatterjee's xi is a direct consequence of the fact that shuffle-of-min copulas are perfect dependence copulas (compare, e.g., [2, Example 1.1]), and the tail dependence coefficients are elementary.

3.3 Explicit measures of association for
checkerboard-type copulas

To give concise expressions, we make use of the following matrices. First, let $\Delta = (\Delta_{i,j})_{1 \le i \le m,\, 1 \le j \le n}$ be an $m \times n$-checkerboard matrix and denote by $\Delta^\top$ its transpose. Next, define the $m \times n$-matrix $\Omega$ by
\[
\Omega_{i,j} := \frac{(2m - 2i + 1)(2n - 2j + 1)}{mn}
\]
for $1 \le i \le m$, $1 \le j \le n$. Also, let $\Xi^{(m)}$ be the $m \times m$-matrix with entries
\[
\Xi^{(m)}_{i,j} =
\begin{cases}
2, & \text{if } i > j, \\
1, & \text{if } i = j, \\
0, & \text{if } i < j,
\end{cases}
\qquad (1 \le i, j \le m),
\]
and let $\Xi^{(n)}$ be the analogous $n \times n$-matrix. Lastly, let $T$ be the strict upper-triangular $n \times n$-matrix
\[
T_{i,j} =
\begin{cases}
1, & \text{if } i < j, \\
0, & \text{otherwise},
\end{cases}
\qquad (1 \le i, j \le n),
\]
and let $M_\xi$ be the $n \times n$-matrix given by $M_\xi = T T^\top + T^\top + \tfrac{1}{3} I_n$.

Proposition 3.3 (Explicit measures of association for checkerboard copulas). Let $C_\Pi$, $C_\nearrow$ and $C_\searrow$ be bivariate checkerboard, check-min and check-w copulas associated with an $m \times n$-checkerboard matrix $\Delta$. Then, the measures of association can be expressed as follows:

(i) Spearman's rho:
\[
\rho_S(C_\Pi) = 3 \operatorname{tr}\big( \Omega^\top \Delta \big) - 3, \qquad
\rho_S(C_\nearrow) = \rho_S(C_\Pi) + \frac{1}{mn}, \qquad
\rho_S(C_\searrow) = \rho_S(C_\Pi) - \frac{1}{mn}.
\]

(ii) Kendall's tau:
\[
\tau(C_\Pi) = 1 - \operatorname{tr}\big( \Xi^{(m)} \Delta\, \Xi^{(n)} \Delta^\top \big), \qquad
\tau(C_\nearrow) = \tau(C_\Pi) + \operatorname{tr}(\Delta^\top \Delta), \qquad
\tau(C_\searrow) = \tau(C_\Pi) - \operatorname{tr}(\Delta^\top \Delta).
\]

(iii) Chatterjee's xi:
\[
\xi(C_\Pi) = \frac{6m}{n} \operatorname{tr}\big( \Delta^\top \Delta M_\xi \big) - 2, \qquad
\xi(C_{pd}) = \xi(C_\Pi) + \frac{m \operatorname{tr}(\Delta^\top \Delta)}{n}
\]
for all $C_{pd} \in \mathcal{C}^\Delta_{pd}$, and in particular for $C_\nearrow$ and $C_\searrow$.

Furthermore, $\lambda_L(C_\Pi) = \lambda_U(C_\Pi) = \lambda_L(C_\searrow) = \lambda_U(C_\searrow) = 0$, $\lambda_L(C_\nearrow) = \Delta_{1,1} (m \wedge n)$ and $\lambda_U(C_\nearrow) = \Delta_{m,n} (m \wedge n)$.

In the case of $n \times n$-checkerboard copulas, the above formulas for Spearman's rho and Kendall's tau can be found in [11, Theorem 15 and 16] (see also [24, Theorem 1 and 2] and [14, Formula (2)]). We are not aware of references for the other cases.

Corollary 3.4. Let $\Delta$ be an $m \times n$-checkerboard matrix and let $C^\Delta_{pd} \in \mathcal{C}^\Delta_{pd}$. Then, it holds that
\[
\big| \xi(C^\Delta_{pd}) - \xi(C^\Delta_\Pi) \big| \le
\begin{cases}
\dfrac{m}{n^2}, & \text{if } m \le n, \\[1ex]
\dfrac{1}{n}, & \text{if } m > n.
\end{cases}
\]
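The trace formulas of Proposition 3.3 can be assembled in a few lines. In the sketch below, the helper name is ours, and the Spearman weight matrix is written as $\Omega_{i,j} = (2m-2i+1)(2n-2j+1)/(mn)$, which reproduces $\rho_S = 0$ for the uniform (independence) matrix. The example $\Delta = \operatorname{diag}(1/2, 1/2)$ puts independent uniform mass on the two diagonal cells; its check-min copula is comonotone, so $\xi(C_{pd}) = 1$.

```python
import numpy as np

def checkerboard_measures(Delta):
    """rho_S, tau and Chatterjee's xi for the checkerboard copula of Delta,
    plus the perfect-dependence value, via the trace formulas of Prop. 3.3."""
    m, n = Delta.shape
    i = np.arange(1, m + 1)[:, None]
    j = np.arange(1, n + 1)[None, :]
    Omega = (2 * m - 2 * i + 1) * (2 * n - 2 * j + 1) / (m * n)
    Xi_m = np.tril(2 * np.ones((m, m)), -1) + np.eye(m)   # 2 below, 1 on diagonal
    Xi_n = np.tril(2 * np.ones((n, n)), -1) + np.eye(n)
    T = np.triu(np.ones((n, n)), 1)                       # strict upper triangle
    M_xi = T @ T.T + T.T + np.eye(n) / 3
    rho = 3 * np.trace(Omega.T @ Delta) - 3
    tau = 1 - np.trace(Xi_m @ Delta @ Xi_n @ Delta.T)
    xi_pi = 6 * m / n * np.trace(Delta.T @ Delta @ M_xi) - 2
    xi_pd = xi_pi + m * np.trace(Delta.T @ Delta) / n     # check-min / check-w value
    return rho, tau, xi_pi, xi_pd

print(checkerboard_measures(np.diag([0.5, 0.5])))
```

For this $\Delta$ the formulas give $\rho_S = 3/4$, $\tau = 1/2$, $\xi(C_\Pi) = 1/2$ and $\xi(C_{pd}) = 1$, consistent with a direct evaluation of the defining integrals.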
The proofs for Proposition 3.3 and Corollary 3.4 are given in the appendix from p. 16 onwards.

4 Checkerboard estimates for Chatterjee's xi

In this section, we first discuss in Subsection 4.1 how the checkerboard and check-min formulas relate to general Chatterjee's xi values, and then analyse their performance as estimates for Chatterjee's xi from sampled data in Subsection 4.2.

4.1 Checkerboard bound for Chatterjee's xi

The expressions of Proposition 3.3 can be used to calculate the measures of association for a given checkerboard copula in a straightforward and efficient way, without the need for numerical integration or estimates from sampled data. The next theorem shows that the checkerboard formula in Proposition 3.3 (iii) also serves as a lower bound of Chatterjee's xi for an arbitrarily given bivariate copula $C$, and hence establishes formula (1).

Theorem 4.1 (Checkerboard bound for $\xi$). If $C$ is a bivariate copula associated with a checkerboard matrix $\Delta$, then $\xi(C^\Delta_\Pi) \le \xi(C)$.

Proof. Consider a copula $C$ with associated $m \times n$-checkerboard matrix $\Delta$. Let $(X,Y) \sim C$, and let $U \sim \mathrm{Unif}[0,1]$ be independent of $(X,Y)$. Define
\[
\widetilde{X} := \frac{\lfloor mX \rfloor}{m} + \frac{U}{m}.
\]
Since $U$ is independent of $Y$ and $\widetilde{X}$ is a function of $X$ and $U$, it holds that
\[
\xi(C) = \xi(Y \mid X) = \xi(Y \mid (X, U)) \ge \xi\big( Y \mid \widetilde{X} \big), \tag{18}
\]
see, e.g., [1, Theorem 2.1]. Furthermore, $\big( \widetilde{X}, Y \big) \in I_{i,j}$ if and only if $(X,Y) \in I_{i,j}$.
It follows that
\begin{align*}
P\big[ \widetilde{X} \le u,\, Y \le v \,\big|\, (\widetilde{X}, Y) \in I_{i,j} \big]
&= P\bigg[ \frac{\lfloor mX \rfloor}{m} + \frac{U}{m} \le u,\, Y \le v \,\bigg|\, (X,Y) \in I_{i,j} \bigg] \\
&= P\bigg[ \frac{i-1}{m} + \frac{U}{m} \le u,\, Y \le v \,\bigg|\, (X,Y) \in I_{i,j} \bigg] \\
&= P\bigg[ \frac{i-1}{m} + \frac{U}{m} \le u \,\bigg|\, (X,Y) \in I_{i,j} \bigg] \, P[ Y \le v \mid (X,Y) \in I_{i,j} ] \\
&= P\bigg[ \frac{i-1}{m} + \frac{U}{m} \le u \,\bigg|\, (\widetilde{X}, Y) \in I_{i,j} \bigg] \, P\big[ Y \le v \,\big|\, (\widetilde{X}, Y) \in I_{i,j} \big],
\end{align*}
which shows that $\widetilde{X}$ and $Y$ are conditionally independent on $I_{i,j}$. Also, trivially,
\[
P\big[ (\widetilde{X}, Y) \in I_{i,j} \big] = P[(X,Y) \in I_{i,j}] = \Delta_{i,j},
\]
so it follows that $C_{(\widetilde{X}, Y)} = C^\Delta_\Pi$. Together with (18), this shows that $\xi(C^\Delta_\Pi) \le \xi(C)$. $\square$

Note that whilst $\xi(C^\Delta_\Pi) \le \xi(C)$ for a matrix $\Delta$ associated
with $C$, it is generally not true that $\xi(C) \le \xi(C^\Delta_\nearrow)$. A simple counterexample is given by the check-min copula $C = C^\Delta_\nearrow$ associated with the checkerboard matrix
\[
\Delta = \frac{1}{4}
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix},
\]
which exhibits perfect dependence and hence satisfies $\xi(C) = 1$, but this is not the case when transitioning to the associated $2 \times 2$-checkerboard matrix. Example 4.2 shows that even under stronger positive dependence constraints on the copula $C$, check-min copulas may yield lower values for $\xi$ than the copula itself.

Example 4.2. Consider the matrices
\[
\Delta_4 := \frac{1}{4}
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0.5 & 0.5 & 0 \\
0 & 0.5 & 0.5 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
\Delta_2 = \frac{1}{8}
\begin{pmatrix}
5 & 1 \\
1 & 5
\end{pmatrix},
\]
and let $C$ be the checkerboard copula associated with $\Delta_4$, i.e., $C = C^{\Delta_4}_\Pi$. This copula has a totally positive density, which implies multiple other classical dependence concepts, see [12, Figure 1]. Furthermore, using Proposition 3.3 (iii), it is
\[
\xi(C) = \xi\big( C^{\Delta_4}_\Pi \big) = \frac{5}{8} > \frac{7}{16} = \xi\big( C^{\Delta_2}_\nearrow \big).
\]

4.2 Checkerboard estimator for Chatterjee's xi

Let now $(X_1, Y_1), (X_2, Y_2), \dots$ be a random sample from $(X, Y)$ and assume that $(X, Y)$ has a continuous distribution function. Chatterjee's xi admits a strongly consistent and asymptotically normal estimator given by
\[
\xi_n(Y \mid X) = \frac{\sum_{k=1}^n \big( n \min\{ R_k, R_{N(k)} \} - L_k^2 \big)}{\sum_{k=1}^n L_k (n - L_k)}, \tag{19}
\]
where $R_k$ denotes the rank of $Y_k$ among $Y_1, \dots, Y_n$, i.e., the number of $j$ such that $Y_j \le Y_k$, and $L_k$ denotes the number of $j$ such that $Y_j \ge Y_k$. For each $k$, the number $N(k)$ denotes the index $j$ such that $X_j$ is the nearest neighbor of $X_k$, where ties are broken uniformly at random. All these appealing properties allow a fast, model-free variable selection method established in [5], noting that $\xi_n$ can be computed in $O(n \log n)$ time.
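Estimator (19) can be sketched directly. The quadratic nearest-neighbour search below is a simplification of the sort-based $O(n \log n)$ approach the text alludes to, and the inputs are assumed free of ties, so the random tie-breaking mentioned above is not implemented.

```python
import numpy as np

def xi_n(x, y):
    """A direct O(n^2) sketch of estimator (19); assumes no ties in x or y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    R = np.argsort(np.argsort(y)) + 1        # R_k = #{j : Y_j <= Y_k}
    L = n - R + 1                            # L_k = #{j : Y_j >= Y_k} (no ties)
    dist = np.abs(x[:, None] - x[None, :])
    np.fill_diagonal(dist, np.inf)           # exclude the point itself
    N = np.argmin(dist, axis=1)              # nearest-neighbour index N(k)
    num = np.sum(n * np.minimum(R, R[N]) - L**2)
    den = np.sum(L * (n - L))
    return num / den
```

For monotone data the estimate is close to, but below, 1 at finite sample sizes; e.g., $x_k = k^2$, $y_k = k$ for $k = 1, \dots, 100$ gives $\xi_n \approx 0.94$.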
The checkerboard copulas considered above provide an alternative way to estimate Chatterjee's xi: let $\kappa \in [0,1]$ and let $\Delta_n$ be derived by partitioning the unit square $[0,1]^2$ into $\lfloor n^\kappa\rfloor \times \lfloor n^\kappa\rfloor$ squares of equal size and counting the proportion of samples in each square, i.e.
$$(\Delta_n)_{i,j} = \frac{1}{n}\sum_{k=1}^n \mathbf{1}_{I_{i,j}}(X_k, Y_k) \quad\text{for } 1 \le i, j \le \lfloor n^\kappa\rfloor.$$
Then, set
$$\xi^\kappa_n\big(X^{(n)}, Y^{(n)}\big) = 6\,\mathrm{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top \Delta_{\lfloor n^\kappa\rfloor} M_\xi\big) + \frac{1}{2}\,\mathrm{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top \Delta_{\lfloor n^\kappa\rfloor}\big) - 2 \tag{20}$$
as the arithmetic average of the formulas for Chatterjee's xi in Proposition 3.3 (iii). Choosing a checkerboard matrix of size $\lfloor n^\kappa\rfloor \times \lfloor n^\kappa\rfloor$ with $\kappa < 1$ avoids overfitting; for $\kappa \le 1/3$ in particular, the number of squares grows only like $n^{2\kappa} \le n^{2/3}$, so the average number of samples per square grows with $n$. In [3], checkerboard copulas have already been used to estimate a whole family of dependence measures that in particular covers Chatterjee's $\xi$, though with a more implicit formula for the estimator.
Theorem 4.3 (Convergence of $\xi^\kappa_n$). If $\kappa \le 1/3$, then the estimator $\xi^\kappa_n$ can be computed in time $O(n\log n)$ and converges to $\xi$ almost surely as $n\to\infty$.
Proof. Matrix multiplication of $k\times k$ matrices is generally possible in $O(k^3)$ time, and in (20) we have $k = \lfloor n^\kappa\rfloor$, yielding a (sub-)linear evaluation time whenever $\kappa \le \tfrac{1}{3}$. As for the classical estimator from (19), the sample data needs to be transformed to ranks to obtain the doubly stochastic checkerboard matrix, which can be done in $O(n\log n)$ time and is the bottleneck of the algorithm. The almost sure convergence $\xi^\kappa_n \to \xi$ is obtained as in [3, Theorem 4.2],
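A minimal sketch of the estimator in (20) for the square case (helper names are ours, not from the paper's implementation): the sample is rank-transformed, the empirical checkerboard matrix is assembled from cell counts, and the two trace formulas are averaged, assuming $M_\xi = TT^\top + T^\top + \tfrac{1}{3}I$ with $T$ the strictly upper-triangular ones matrix from Section 3.1.

```python
import math

def ranks(v):
    # ranks 1..n; ties broken by original index, which is fine for continuous data
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def xi_kappa(xs, ys, kappa=1.0 / 3):
    # checkerboard estimator (20): average of the C_Pi and check-min formulas
    n = len(xs)
    k = max(1, math.floor(n ** kappa))
    rx, ry = ranks(xs), ranks(ys)
    # empirical k x k checkerboard matrix from the rank-transformed sample
    delta = [[0.0] * k for _ in range(k)]
    for a, b in zip(rx, ry):
        delta[(a - 1) * k // n][(b - 1) * k // n] += 1.0 / n
    # M_xi = T T^T + T^T + I/3 with T strictly upper-triangular ones (0-indexed)
    def m_xi(j, s):
        return (k - 1 - max(j, s)) + (1.0 if s < j else 0.0) + (1.0 / 3 if j == s else 0.0)
    # A = Delta^T Delta
    A = [[sum(delta[i][j] * delta[i][s] for i in range(k)) for s in range(k)]
         for j in range(k)]
    tr_am = sum(A[j][s] * m_xi(s, j) for j in range(k) for s in range(k))
    tr_aa = sum(A[j][j] for j in range(k))
    return 6.0 * tr_am + 0.5 * tr_aa - 2.0
```

For a perfectly monotone sample with $n = 100$ and $\kappa = 1/2$ (so $k = 10$), the cell matrix is diagonal and the estimate equals $1 - 1/(2k) = 0.95$, sitting halfway between the underestimating $C_\Pi$ formula and the overestimating check–min formula.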
using also that
$$\mathrm{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top \Delta_{\lfloor n^\kappa\rfloor}\big) = \sum_{i,j=1}^{\lfloor n^\kappa\rfloor} \Delta_{i,j}^2 \le \frac{1}{\lfloor n^\kappa\rfloor} \to 0 \quad\text{as } n\to\infty.$$
Next to $\xi^\kappa_n$, also consider the variants $\overline{\xi}^{\,\kappa}_n$ and $\underline{\xi}^{\,\kappa}_n$ of the estimator, tailored for $\xi\big(C^{\Delta_{\lfloor n^\kappa\rfloor}}_{\nearrow}\big)$ and $\xi\big(C^{\Delta_{\lfloor n^\kappa\rfloor}}_{\Pi}\big)$ respectively. These variants are given by
$$\overline{\xi}^{\,\kappa}_n\big(X^{(n)}, Y^{(n)}\big) = 6\,\mathrm{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top \Delta_{\lfloor n^\kappa\rfloor} M_\xi\big) + \mathrm{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top \Delta_{\lfloor n^\kappa\rfloor}\big) - 2,$$
$$\underline{\xi}^{\,\kappa}_n\big(X^{(n)}, Y^{(n)}\big) = 6\,\mathrm{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top \Delta_{\lfloor n^\kappa\rfloor} M_\xi\big) - 2,$$
respectively. Naturally, the question arises which $\kappa$ to choose, i.e. how large to make the checkerboard matrix for a given sample size $n$. An intuitive choice is $\kappa = 1/3$, as this is the largest choice for which the estimator can be computed in $O(n\log n)$ time. The next example illustrates that this choice also performs well in practice.
Example 4.4. Consider a single-factor model in $\mathbb{R}^2$ where
$$Z \sim N(0,1), \quad \varepsilon \sim N(0,1), \quad\text{and}\quad X = Z + \varepsilon, \tag{21}$$
with $Z$ and $\varepsilon$ independent standard Gaussian random variables. The resulting pair $(Z,X)$ is jointly Gaussian with correlation $1/\sqrt{2}$. In this model, it is known that
$$\xi(X\,|\,Z) = \frac{3}{\pi}\arcsin\Big(\frac{3}{4}\Big) - \frac{1}{2} \approx 0.3098,$$
see [1, Proposition 2.7]. Figure 1 evaluates $\overline{\xi}^{\,\kappa}_n$ and $\underline{\xi}^{\,\kappa}_n$ based on sample data for different values of $\kappa$. As seen in the figure, when $\kappa$ is too large, the estimator overfits the sampled data, whilst for small $\kappa$, the estimator is too coarse.
Figure 1: The estimator $\xi^\kappa_n$ for different values of $\kappa$. Each boxplot corresponds to an increasing sample size $n$. The estimates concentrate near the theoretical value of $\xi$ (red line), illustrating consistency.
Whilst the above example considered different values of $\kappa$, it is also interesting to compare the performance of the estimator $\xi^\kappa_n$ with the classical Chatterjee estimator $\xi_n$ defined in (19).
In Figure 3, we compare the performance of our implementations of $\xi^\kappa_n$, $\overline{\xi}^{\,\kappa}_n$ and $\underline{\xi}^{\,\kappa}_n$ for $\kappa = \tfrac{1}{3}$ with standard implementations of the $\xi_n$ estimator, on sample data from the model in (21). Figure 2 shows that also in terms of precision these estimators do not fall behind the standard implementations of $\xi_n$ in the xicorpy and scipy packages. In conclusion, the formulas in Proposition 3.3 are not only appealing for a given cumulative distribution function; in a practical setting with sample data they also yield a reasonable approximation of Chatterjee's xi.
Figure 2: Convergence of xi estimates to the true value as sample size increases. As suggested by Theorem 4.1, the checkerboard estimate $\underline{\xi}^{\,\kappa}_n$ (CheckPi) tends to underestimate the true value, while the check–min estimate $\overline{\xi}^{\,\kappa}_n$ (CheckMin) tends to overestimate it. $\xi^\kappa_n$ (CheckAvg) is the closest to the true value at smaller sample sizes in this setting.
Figure 3: Execution time scaling for different estimation methods with increasing sample size. Our implementation of $\xi^\kappa_n$ outperforms the implementation of $\xi_n$ in xicorpy by approximately a factor of three and the implementation in scipy by approximately 30%.
A Appendix
Proof of Proposition 3.1. Let $C = C^D_B$ be the Bernstein copula associated to the cumulated $m\times n$-checkerboard matrix $D$. The formulas for Spearman's rho and Kendall's tau can be obtained as straightforward extensions of the computations in [11, Theorems 9 and 10]. Upper and lower tail dependence coefficients for Bernstein copulas are always equal to zero due to the boundedness of the density, see, e.g., [21, Example 1] for
the $m=n$ case. The rectangular case is again analogous. The rest of the proof is dedicated to deriving the formula for Chatterjee's xi. Recall that $\xi(C)$ can be written as
$$\xi(C) = 6\int_{[0,1]^2} (\partial_1 C(u,v))^2 \, d\lambda^2(u,v) - 2,$$
hence we need to evaluate this integral for $C^D_B$.
Step 1: Derivative $\partial_1 B_{i,m}(u)$. Write
$$B_{i,m}(u) = \binom{m}{i} u^i (1-u)^{m-i}. \tag{22}$$
We distinguish whether $i<m$ or $i=m$.
Case 1: $1 \le i < m$. A direct product rule and factoring out yields
$$\partial_1 B_{i,m}(u) = \binom{m}{i}(i - mu)\, u^{i-1}(1-u)^{m-i-1}.$$
Case 2: $i = m$. Since $B_{m,m}(u) = u^m$, it is $\partial_1 B_{m,m}(u) = m\,u^{m-1}$.
Step 2: Derivative of the Bernstein copula. Recall from (4) that
$$C^D_B(u,v) = \sum_{i=1}^m \sum_{j=1}^n D_{i,j}\, B_{i,m}(u)\, B_{j,n}(v).$$
Thus,
$$\partial_1 C^D_B(u,v) = \sum_{i=1}^m \sum_{j=1}^n D_{i,j}\, \partial_1 B_{i,m}(u)\, B_{j,n}(v).$$
In the integral for Chatterjee's xi, we need to square this expression and get
$$\big(\partial_1 C^D_B(u,v)\big)^2 = \sum_{i,r=1}^m \sum_{j,s=1}^n D_{i,j} D_{r,s}\, \partial_1 B_{i,m}(u)\, \partial_1 B_{r,m}(u)\, B_{j,n}(v)\, B_{s,n}(v). \tag{23}$$
Step 3: Factorize the double integral. We must integrate (23) over $(u,v) \in [0,1]^2$. Note that $\partial_1 B_{i,m}(u)\,\partial_1 B_{r,m}(u)$ depends only on $u$, while $B_{j,n}(v)\,B_{s,n}(v)$ depends only on $v$. Hence,
$$\int_0^1\!\!\int_0^1 \big(\partial_1 C^D_B(u,v)\big)^2\, du\, dv = \sum_{i,r=1}^m \sum_{j,s=1}^n D_{i,j} D_{r,s}\, \underbrace{\Big[\int_0^1 \partial_1 B_{i,m}(u)\,\partial_1 B_{r,m}(u)\, du\Big]}_{=:\,\Omega_{i,r}}\, \underbrace{\Big[\int_0^1 B_{j,n}(v)\, B_{s,n}(v)\, dv\Big]}_{=:\,\Lambda_{j,s}}.$$
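The $\Omega$- and $\Lambda$-integrals above all reduce to Beta-type integrals, evaluated via the identity $\int_0^1 x^p(1-x)^q\,dx = p!\,q!/(p+q+1)!$ used in the next steps. A quick exact check of that identity and of the resulting closed form for $\Lambda$, by binomial expansion (our own verification, not part of the proof):

```python
from fractions import Fraction as F
from math import comb, factorial

def beta_integral(p, q):
    # integrate x^p (1-x)^q over [0,1] exactly by expanding (1-x)^q term by term
    return sum(F((-1) ** t * comb(q, t), p + t + 1) for t in range(q + 1))

def lam(n, j, s):
    # Lambda_{j,s} = binom(n,j) binom(n,s) * integral of v^{j+s} (1-v)^{2n-j-s}
    return comb(n, j) * comb(n, s) * beta_integral(j + s, 2 * n - j - s)

# identity (24), and the closed form of Lambda for a small case
for p in range(6):
    for q in range(6):
        assert beta_integral(p, q) == F(factorial(p) * factorial(q), factorial(p + q + 1))
n = 4
for j in range(1, n + 1):
    for s in range(1, n + 1):
        assert lam(n, j, s) == F(comb(n, j) * comb(n, s), (2 * n + 1) * comb(2 * n, j + s))
```

The second loop confirms that $\Lambda_{j,s} = \binom{n}{j}\binom{n}{s}\big/\big((2n+1)\binom{2n}{j+s}\big)$, as derived in Step 4 below.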
With the matrices $\Omega = (\Omega_{i,r})_{i,r=1}^m$ and $\Lambda = (\Lambda_{j,s})_{j,s=1}^n$, we can write the double sum as
$$\sum_{i,r=1}^m \sum_{j,s=1}^n D_{i,j} D_{r,s}\, \Omega_{i,r}\, \Lambda_{j,s} = \mathrm{tr}\big(\Omega D \Lambda D^\top\big),$$
so that
$$\xi\big(C^D_B\big) = 6\,\mathrm{tr}\big(\Omega D \Lambda D^\top\big) - 2.$$
Step 4: Explicit form of $\Lambda$. By (22), we have
$$\Lambda_{j,s} = \int_0^1 \binom{n}{j}\binom{n}{s}\, v^{j+s}(1-v)^{2n-(j+s)}\, dv.$$
A standard Beta-integral identity for nonnegative integers $p, q$ is
$$\int_0^1 x^p (1-x)^q\, dx = \frac{p!\, q!}{(p+q+1)!}, \tag{24}$$
see, e.g., [26]. Here, $p = j+s$ and $q = 2n-j-s$, so
$$\Lambda_{j,s} = \binom{n}{j}\binom{n}{s}\frac{(j+s)!\,(2n-j-s)!}{(2n+1)!} = \frac{\binom{n}{j}\binom{n}{s}}{(2n+1)\binom{2n}{j+s}},$$
as specified in Section 3.1.
Step 5: Explicit form of $\Omega$. Recall from Step 1 that in
$$\Omega_{i,r} = \int_0^1 \partial_1 B_{i,m}(u)\,\partial_1 B_{r,m}(u)\, du$$
it is
$$\partial_1 B_{i,m}(u) = \begin{cases}\binom{m}{i}(i-mu)\,u^{i-1}(1-u)^{m-i-1}, & 1 \le i < m,\\ m\,u^{m-1}, & i = m.\end{cases}$$
Hence, we must consider four cases for the pair $(i,r)$:
(a) $1 \le i < m$ and $1 \le r < m$. Then
$$\partial_1 B_{i,m}(u)\,\partial_1 B_{r,m}(u) = \binom{m}{i}\binom{m}{r}(i-mu)(r-mu)\, u^{i+r-2}(1-u)^{2m-i-r-2}.$$
Expanding $(i-mu)(r-mu)$ yields
$$\Omega_{i,r} = \binom{m}{i}\binom{m}{r}\int_0^1 \big[ir - m(i+r)u + m^2 u^2\big]\, u^{i+r-2}(1-u)^{2m-i-r-2}\, du.$$
Splitting into three Beta-type integrals and applying (24), the three integrals become
$$\frac{(i+r-2)!\,(2m-i-r-2)!}{(2m-3)!}, \qquad \frac{(i+r-1)!\,(2m-i-r-2)!}{(2m-2)!}, \qquad \frac{(i+r)!\,(2m-i-r-2)!}{(2m-1)!},$$
and we now have
$$\Omega_{i,r} = \binom{m}{i}\binom{m}{r}\Big[\, ir\,\frac{(i+r-2)!\,(2m-i-r-2)!}{(2m-3)!} - m(i+r)\frac{(i+r-1)!\,(2m-i-r-2)!}{(2m-2)!} + m^2\,\frac{(i+r)!\,(2m-i-r-2)!}{(2m-1)!}\,\Big].$$
This expression can be simplified by rewriting each fraction over the common denominator $(2m-1)!$, i.e. using
$$\frac{(i+r-2)!\,(2m-i-r-2)!}{(2m-3)!} = \frac{(i+r-2)!\,(2m-i-r-2)!\,(2m-1)(2m-2)}{(2m-1)!}, \qquad \frac{(i+r-1)!\,(2m-i-r-2)!}{(2m-2)!} = \frac{(i+r-1)!\,(2m-i-r-2)!\,(2m-1)}{(2m-1)!},$$
so that we obtain
$$\Omega_{i,r} = \binom{m}{i}\binom{m}{r}\frac{(i+r-2)!\,(2m-i-r-2)!}{(2m-1)!}\Big[(2m-1)(2m-2)\,ir - 2m(m-1)\binom{i+r}{2}\Big] = \frac{\binom{m}{i}\binom{m}{r}}{(2m-3)\binom{2m-4}{i+r-2}}\Big[\, ir - \frac{2m(m-1)\binom{i+r}{2}}{(2m-1)(2m-2)}\,\Big].$$
(b) $1 \le i < m$ and $r = m$. Now, by Step 1 it is
$$\partial_1 B_{i,m}(u)\,\partial_1 B_{m,m}(u) = m\binom{m}{i}(i-mu)\, u^{m+i-2}(1-u)^{m-i-1}$$
and
$$\Omega_{i,m} = m\binom{m}{i}\Big[\, i\int_0^1 u^{m+i-2}(1-u)^{m-i-1}\, du - m\int_0^1 u^{m+i-1}(1-u)^{m-i-1}\, du\,\Big].$$
Here, use again $p!\,q!/(p+q+1)!$ with $p = m+i-2$, $q = m-i-1$ for the first integral and $p = m+i-1$, $q = m-i-1$ for the second. Then, one gets
$$\int_0^1 u^{m+i-2}(1-u)^{m-i-1}\, du = \frac{(m+i-2)!\,(m-i-1)!}{(2m-2)!}, \qquad \int_0^1 u^{m+i-1}(1-u)^{m-i-1}\, du = \frac{(m+i-1)!\,(m-i-1)!}{(2m-1)!}.$$
Thus,
$$\Omega_{i,m} = m\binom{m}{i}\Big[\, i\,\frac{(m+i-2)!\,(m-i-1)!}{(2m-2)!} - m\,\frac{(m+i-1)!\,(m-i-1)!}{(2m-1)!}\,\Big] = m\binom{m}{i}\frac{(m+i-2)!\,(m-i-1)!}{(2m-1)!}(m-1)(i-m) = \frac{m(m-1)(i-m)\binom{m}{i}}{(2m-1)(2m-2)\binom{2m-3}{m+i-2}}.$$
(c) $i = m$ and $1 \le r < m$. By symmetry, or by the same direct calculation,
$$\Omega_{m,r} = \frac{m(m-1)(r-m)\binom{m}{r}}{(2m-1)(2m-2)\binom{2m-3}{m+r-2}}.$$
(d) $i = m$ and $r = m$. Here, Ω_{m,m} = ∫₀¹ [m u^{m−1}]² du = m² ∫₀¹ u^{2m−2} du = m² [u^{2m−1}/(
2m−1)]₀¹ = m²/(2m−1).
Putting these four sub-cases (a)–(d) together provides the complete piecewise formula for $\Omega_{i,r}$ that is specified in Section 3.1. This completes the proof.
Lemma A.1 (Permutation Sum Identities). Let $\pi \in S_n$ be a permutation of $\{1,2,\dots,n\}$ and let $d_i = \pi(i) - i$. Then the following identities hold:
(i) $\sum_{i=1}^n d_i = 0$;
(ii) $\sum_{i=1}^n d_i(2i-1) = -\sum_{i=1}^n d_i^2$.
Proof. We use the properties that for any permutation $\pi \in S_n$: (a) $\sum_{i=1}^n \pi(i) = \sum_{i=1}^n i$ and (b) $\sum_{i=1}^n \pi(i)^2 = \sum_{i=1}^n i^2$. The first identity is straightforward. For the second identity, first expand the left-hand side:
$$\sum_{i=1}^n d_i(2i-1) = 2\sum_{i=1}^n i\pi(i) - \sum_{i=1}^n \pi(i) - 2\sum_{i=1}^n i^2 + \sum_{i=1}^n i \overset{(a)}{=} 2\sum_{i=1}^n i\pi(i) - 2\sum_{i=1}^n i^2.$$
Next, expand the term $\sum_{i=1}^n d_i^2$ from the right-hand side:
$$\sum_{i=1}^n d_i^2 = \sum_{i=1}^n \pi(i)^2 - 2\sum_{i=1}^n i\pi(i) + \sum_{i=1}^n i^2 \overset{(b)}{=} 2\sum_{i=1}^n i^2 - 2\sum_{i=1}^n i\pi(i).$$
Comparing the two resulting terms, we see that
$$\sum_{i=1}^n d_i(2i-1) = -\Big(2\sum_{i=1}^n i^2 - 2\sum_{i=1}^n i\pi(i)\Big) = -\sum_{i=1}^n d_i^2.$$
This establishes the second identity.
Proof of Proposition 3.2. First, for Kendall's tau, let $(U_1,V_1), (U_2,V_2) \sim C_\pi$ be independent of each other and write $I = \lceil nU_1\rceil$, $J = \lceil nU_2\rceil$ for the indices of the segments on which the two points fall. The random variables $I$ and $J$ are i.i.d.
and uniform on $\{1,\dots,n\}$, so $P[I=i, J=j] = 1/n^2$ for $i,j = 1,\dots,n$. If $I \ne J$, the sign of $(U_1-U_2)(V_1-V_2)$ is completely determined by the permutation:
• $I < J,\ \pi(I) < \pi(J)$ or $I > J,\ \pi(I) > \pi(J)$ $\Longrightarrow$ concordance;
• $I < J,\ \pi(I) > \pi(J)$ or $I > J,\ \pi(I) < \pi(J)$ $\Longrightarrow$ discordance.
Because $I$ and $J$ are chosen with replacement, ties $I = J$ occur with probability $P[I=J] = 1/n$; in that case both points lie on the same increasing segment and are almost surely concordant as well. Since $N_{\mathrm{inv}}(\pi)$ denotes the number of inversions of $\pi$, the probabilities are
$$p_{\mathrm{disc}} = \frac{2N_{\mathrm{inv}}(\pi)}{n^2}, \qquad p_{\mathrm{conc}} = 1 - p_{\mathrm{disc}}.$$
Hence, by the concordant–discordant definition given in (15), it is
$$\tau(C_\pi) = p_{\mathrm{conc}} - p_{\mathrm{disc}} = 1 - \frac{4N_{\mathrm{inv}}(\pi)}{n^2}.$$
Regarding Spearman's rho, fix a segment index $i$ and write the rank displacement $d_i := \pi(i) - i$. The support of $C_\pi$ is the union of $n$ diagonal line segments
$$S_i := \Big\{\Big(\frac{i-1+t}{n}, \frac{\pi(i)-1+t}{n}\Big) \,\Big|\, t \in [0,1]\Big\}.$$
Each carries probability mass $1/n$. On $S_i$ the coordinates are related by $V = U + \tfrac{d_i}{n}$, so $UV = U^2 + \tfrac{d_i}{n}U$. With $t \sim \mathrm{Unif}[0,1]$ and $U = (i-1+t)/n$, the conditional expectation is given by
$$E[U \mid I=i] = \int_0^1 \frac{i-1+t}{n}\, dt = \frac{2i-1}{2n},$$
so that
$$E[U^2 \mid I=i] = \int_0^1 \frac{(i-1+t)^2}{n^2}\, dt = \frac{(2i-1)^2}{4n^2} + \frac{1}{12n^2}$$
and
$$E[UV \mid I=i] = E[U^2 \mid I=i] + \frac{d_i}{n}E[U \mid I=i] = \frac{(2i-1)^2}{4n^2} + \frac{1}{12n^2} + \frac{d_i(2i-1)}{2n^2}.$$
Averaging over $i$ one obtains
$$E[UV] = \frac{1}{n}\sum_{i=1}^n E[UV \mid I=i] = \underbrace{\frac{1}{n}\sum_{i=1}^n \Big(\frac{(2i-1)^2}{4n^2} + \frac{1}{12n^2}\Big)}_{=\,E[U^2]\,=\,1/3} + \frac{1}{2n^3}\sum_{i=1}^n d_i(2i-1).$$
The first sum is $E[U^2]$ for a $\mathrm{Unif}(0,1)$ variable, which equals $1/3$. For the second sum, we use the second identity from Lemma A.1, namely $\sum_{i=1}^n d_i(2i-1) = -\sum_{i=1}^n d_i^2$. Substituting yields
$$E[UV] = \frac{1}{3} + \frac{1}{2n^3}\sum_{i=1}^n d_i(2i-1) = \frac{1}{3} - \frac{1}{2n^3}\sum_{i=1}^n d_i^2.$$
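The displacement identities of Lemma A.1 and the $E[UV]$ formula just derived can be brute-force checked over all small permutations in exact arithmetic (our own sanity check, not part of the proof):

```python
from fractions import Fraction as F
from itertools import permutations

def euv_direct(pi):
    # E[UV] for the shuffle copula C_pi, integrating exactly over each segment:
    # E[UV | I=i] = ((i-1)(pi_i - 1) + ((i-1) + (pi_i - 1))/2 + 1/3) / n^2
    n = len(pi)
    total = F(0)
    for i, p in enumerate(pi, start=1):
        total += (i - 1) * (p - 1) + F(i - 1 + p - 1, 2) + F(1, 3)
    return total / n ** 3

for pi in permutations(range(1, 5)):
    n = len(pi)
    d = [p - i for i, p in enumerate(pi, start=1)]
    assert sum(d) == 0                                               # Lemma A.1 (i)
    assert sum(di * (2 * i - 1) for i, di in enumerate(d, 1)) == -sum(di * di for di in d)  # (ii)
    assert euv_direct(pi) == F(1, 3) - F(sum(di * di for di in d), 2 * n ** 3)
```

The last assertion confirms $E[UV] = \tfrac{1}{3} - \tfrac{1}{2n^3}\sum_i d_i^2$ for every permutation of $\{1,\dots,4\}$.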
Because $\rho_S(C) = 12\,E[UV] - 3$ for any copula $C$ with uniform margins,
$$\rho_S(C_\pi) = 12\Big(\frac{1}{3} - \frac{\sum_i d_i^2}{2n^3}\Big) - 3 = 1 - \frac{6\sum_{i=1}^n d_i^2}{n^3}.$$
Next, regarding Chatterjee's $\xi$, note that for $(X,Y) \sim C_\pi$ it is $Y = f(X)$ almost surely, see [18, Theorem 2.1]. Consequently, using [1, Theorem 2.1], it follows that $\xi(C_\pi) = 1$. Lastly, for the tail coefficients, note that for $t < 1/n$, $C_\pi(t,t) = t$ if and only if $\pi(1) = 1$ (the first segment lies on the main diagonal); otherwise $C_\pi(t,t) = 0$. Hence, (16) yields $\lambda_L = \mathbf{1}_{\{\pi(1)=1\}}$. A symmetric argument with $t > 1 - 1/n$ gives $\lambda_U = \mathbf{1}_{\{\pi(n)=n\}}$, establishing the tail dependence coefficients.
Proof of Proposition 3.3. (i) Recall that
$$\rho_S(C) = 12\int_{[0,1]^2} C(u,v)\, d\lambda^2(u,v) - 3,$$
where $\lambda^2$ denotes the Lebesgue measure on $[0,1]^2$, and recall also from (8) that the copula $C_\Pi$ is given by
$$C_\Pi(u,v) = \sum_{k,l=1}^{i-1,j-1}\Delta_{k,l} + \sum_{k=1}^{i-1}\Delta_{k,j}(nv-j+1) + \sum_{l=1}^{j-1}\Delta_{i,l}(mu-i+1) + \Delta_{i,j}(mu-i+1)(nv-j+1)$$
for $(u,v) \in I_{i,j}$. Hence, with a simple substitution, it is
$$\int_{\frac{i-1}{m}}^{\frac{i}{m}}\int_{\frac{j-1}{n}}^{\frac{j}{n}} C_\Pi(u,v)\, dv\, du = \frac{1}{mn}\Big(\sum_{k,l=1}^{i-1,j-1}\Delta_{k,l} + \frac{1}{2}\sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{4}\Delta_{i,j}\Big).$$
Considering the full integral, each $\Delta_{i,j}$ appears in the integral of the corresponding cell and of all cells with indices $i' \ge i$ and $j' \ge j$. In total, it appears $(m-i)(n-j)$ times with weight $1$, $(m-i) + (n-j)$ times with weight $\tfrac{1}{2}$, and once with weight $\tfrac{1}{4}$. Consequently,
$$\int_{[0,1]^2} C_\Pi(u,v)\, du\, dv = \sum_{i=1}^m \sum_{j=1}^n \frac{(2m-2i+1)(2n-2j+1)}{4mn}\,\Delta_{i,j},$$
and thus, using the double stochasticity of $\Delta$ to replace the weights $(2m-2i+1)(2n-2j+1)$ by $(2i-1)(2j-1)$,
$$\rho_S(C_\Pi) = 12\sum_{i,j=1}^{m,n} \frac{(2i-1)(2j-1)}{4mn}\,\Delta_{i,j} - 3 = 3\,\mathrm{tr}\big(\Omega^\top\Delta\big) - 3.$$
For the check–min copula $C_{\nearrow}$, recall its formula from (11). It follows that
$$\int_{\frac{i-1}{m}}^{\frac{i}{m}}\int_{\frac{j-1}{n}}^{\frac{j}{n}} C_{\nearrow}(u,v)\, dv\, du = \frac{1}{mn}\Big(\sum_{k,l=1}^{i-1,j-1}\Delta_{k,l} + \frac{1}{2}\sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{3}\Delta_{i,j}\Big),$$
and therefore
$$\int_{[0,1]^2} C_{\nearrow}(u,v)\, du\, dv = \sum_{i,j=1}^{m,n} \frac{(2m-2i+1)(2n-2j+1)}{4mn}\,\Delta_{i,j} + \frac{1}{12mn}\sum_{i,j=1}^{m,n}\Delta_{i,j}.$$
Hence,
$$\rho_S(C_{\nearrow}) = \rho_S(C_\Pi) + \frac{1}{mn}$$
as stated. Similarly, for $C_{\searrow}$, one obtains from (13) that
$$\int_{\frac{i-1}{m}}^{\frac{i}{m}}\int_{\frac{j-1}{n}}^{\frac{j}{n}} C_{\searrow}(u,v)\, dv\, du = \frac{1}{mn}\Big(\sum_{k,l=1}^{i-1,j-1}\Delta_{k,l} + \frac{1}{2}\sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{6}\Delta_{i,j}\Big),$$
leading to the stated result.
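The cell-weight computation for $\rho_S(C_\Pi)$ can be verified exactly: integrate $C_\Pi$ cell by cell via the displayed per-cell formula and compare with the $(2m-2i+1)(2n-2j+1)$-weights. The check below uses a small doubly stochastic $\Delta$ of our own choosing (rows summing to $1/m$, columns to $1/n$); function names are ours:

```python
from fractions import Fraction as F

def int_cpi(delta):
    # integral of C_Pi over [0,1]^2, summed cell by cell:
    # (1/mn) * (block sum + half the column sum + half the row sum + Delta_ij / 4)
    m, n = len(delta), len(delta[0])
    total = F(0)
    for i in range(m):
        for j in range(n):
            block = sum((delta[k][l] for k in range(i) for l in range(j)), F(0))
            col = sum((delta[k][j] for k in range(i)), F(0))
            row = sum((delta[i][l] for l in range(j)), F(0))
            total += block + col / 2 + row / 2 + delta[i][j] / 4
    return total / (m * n)

def int_weights(delta):
    # equivalent closed form: sum of Delta_ij (2m-2i+1)(2n-2j+1) / (4mn), 1-indexed
    m, n = len(delta), len(delta[0])
    s = sum(delta[i][j] * (2 * m - 2 * i - 1) * (2 * n - 2 * j - 1)
            for i in range(m) for j in range(n))
    return s / (4 * m * n)

delta = [[F(1, 3), F(1, 12), F(1, 12)],
         [F(0),    F(1, 4),  F(1, 4)]]   # rows sum to 1/2, columns to 1/3

assert int_cpi(delta) == int_weights(delta) == F(7, 24)
rho = 12 * int_cpi(delta) - 3
assert rho == 3 * sum(delta[i][j] * (2 * i + 1) * (2 * j + 1)
                      for i in range(2) for j in range(3)) / 6 - 3 == F(1, 2)
```

The last assertion also confirms that the $(2i-1)(2j-1)$-weighted form gives the same value of $\rho_S$ for a doubly stochastic matrix.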
(ii) Kendall's tau for $C_\Pi$ is given by
$$\tau(C_\Pi) = 1 - 4\int_{[0,1]^2} \partial_1 C_\Pi(u,v)\,\partial_2 C_\Pi(u,v)\, du\, dv,$$
and we compute
$$\partial_1 C_\Pi(u,v) = m\Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\,\frac{v - \frac{j-1}{n}}{\frac{j}{n} - \frac{j-1}{n}}\Big) = m\Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}(nv-j+1)\Big)$$
for $(u,v) \in I_{i,j}$. An analogous expression holds for $\partial_2 C_\Pi(u,v)$ on each cell. Integrating cell by cell, one obtains
$$\int_0^1\!\!\int_0^1 \partial_1 C_\Pi\,\partial_2 C_\Pi\, du\, dv = \sum_{i,j=1}^{m,n} \int_{\frac{i-1}{m}}^{\frac{i}{m}}\int_{\frac{j-1}{n}}^{\frac{j}{n}} \partial_1 C_\Pi\,\partial_2 C_\Pi\, du\, dv$$
$$= \sum_{i,j=1}^{m,n} \Big(m\int_{\frac{j-1}{n}}^{\frac{j}{n}} \Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}(nv-j+1)\Big) dv\Big)\Big(n\int_{\frac{i-1}{m}}^{\frac{i}{m}} \Big(\sum_{k=1}^{i-1}\Delta_{k,j} + \Delta_{i,j}(mu-i+1)\Big) du\Big)$$
$$= \sum_{i,j=1}^{m,n} \Big(\frac{m}{n}\int_0^1 \Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j} v\Big) dv\Big)\Big(\frac{n}{m}\int_0^1 \Big(\sum_{k=1}^{i-1}\Delta_{k,j} + \Delta_{i,j} u\Big) du\Big)$$
$$= \sum_{i,j=1}^{m,n} \Big(\sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\Delta_{i,j}\Big)\Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{2}\Delta_{i,j}\Big) = \frac{1}{4}\sum_{i,j=1}^{m,n} \big(\Xi^{(m)}\Delta\big)_{i,j}\big(\Xi^{(n)}\Delta^\top\big)_{j,i} = \frac{1}{4}\sum_{i=1}^{m} \big(\Xi^{(m)}\Delta\,\Xi^{(n)}\Delta^\top\big)_{ii} = \frac{1}{4}\,\mathrm{tr}\big(\Xi^{(m)}\Delta\,\Xi^{(n)}\Delta^\top\big).$$
Hence,
$$\tau(C_\Pi) = 1 - 4\int_{[0,1]^2} \partial_1 C_\Pi(u,v)\,\partial_2 C_\Pi(u,v)\, du\, dv = 1 - \mathrm{tr}\big(\Xi^{(m)}\Delta\,\Xi^{(n)}\Delta^\top\big).$$
In the case of $C_{\nearrow}$, note that now it is
$$\partial_1 C_{\nearrow}(u,v) = m\Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\,\mathbf{1}_{\{nv-j+1 \ge mu-i+1\}}\Big)$$
for $(u,v) \in I_{i,j}$ and similarly for $\partial_2 C_{\nearrow}(u,v)$, so that
$$\int_0^1\!\!\int_0^1 \partial_1 C_{\nearrow}\,\partial_2 C_{\nearrow}\, du\, dv = \sum_{i,j=1}^{m,n} \int_0^1\!\!\int_0^1 \Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\mathbf{1}_{\{v\ge u\}}\Big)\Big(\sum_{k=1}^{i-1}\Delta_{k,j} + \Delta_{i,j}\mathbf{1}_{\{u\ge v\}}\Big) dv\, du$$
$$= \sum_{i,j=1}^{m,n} \Big(\sum_{k=1}^{i-1}\Delta_{k,j}\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{2}\sum_{k=1}^{i-1}\Delta_{k,j}\Delta_{i,j} + \frac{1}{2}\sum_{l=1}^{j-1}\Delta_{i,l}\Delta_{i,j}\Big)$$
$$= \sum_{i,j=1}^{m,n} \Big(\sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\Delta_{i,j}\Big)\Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{2}\Delta_{i,j}\Big) - \frac{1}{4}\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 = \frac{1}{4}\Big(\mathrm{tr}\big(\Xi^{(m)}\Delta\,\Xi^{(n)}\Delta^\top\big) - \mathrm{tr}\big(\Delta^\top\Delta\big)\Big).$$
Consequently,
$$\tau(C_{\nearrow}) = 1 - 4\int_{[0,1]^2} \partial_1 C_{\nearrow}\,\partial_2 C_{\nearrow}\, du\, dv = \tau(C_\Pi) + \mathrm{tr}\big(\Delta^\top\Delta\big),$$
and a similar argument, using that $\int_0^1\!\int_0^1 \mathbf{1}_{\{u+v\ge 1\}}\, dv\, du = \tfrac{1}{2}$, yields
$$\int_0^1\!\!\int_0^1 \partial_1 C_{\searrow}\,\partial_2 C_{\searrow}\, du\, dv = \sum_{i,j=1}^{m,n} \int_0^1\!\!\int_0^1 \Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\mathbf{1}_{\{v\ge 1-u\}}\Big)\Big(\sum_{k=1}^{i-1}\Delta_{k,j} + \Delta_{i,j}\mathbf{1}_{\{u\ge 1-v\}}\Big) dv\, du$$
$$= \sum_{i,j=1}^{m,n} \Big(\sum_{k=1}^{i-1}\Delta_{k,j}\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{2}\sum_{k=1}^{i-1}\Delta_{k,j}\Delta_{i,j} + \frac{1}{2}\sum_{l=1}^{j-1}\Delta_{i,l}\Delta_{i,j} + \frac{1}{2}\Delta_{i,j}^2\Big)$$
$$= \sum_{i,j=1}^{m,n} \Big(\sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\Delta_{i,j}\Big)\Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{2}\Delta_{i,j}\Big) + \frac{1}{4}\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 = \frac{1}{4}\Big(\mathrm{tr}\big(\Xi^{(m)}\Delta\,\Xi^{(n)}\Delta^\top\big) + \mathrm{tr}\big(\Delta^\top\Delta\big)\Big),$$
which shows
$$\tau(C_{\searrow}) = 1 - 4\int_{[0,1]^2} \partial_1 C_{\searrow}\,\partial_2 C_{\searrow}\, du\, dv = \tau(C_\Pi) - \mathrm{tr}\big(\Delta^\top\Delta\big).$$
(iii) Recall that Chatterjee's $\xi$ for a copula $C$ can be expressed as
$$\xi(C) = 6\int_0^1\!\!\int_0^1 (\partial_1 C(u,v))^2\, du\, dv - 2.$$
For $(u,v) \in I_{i,j}$, we have
$$\partial_1 C_\Pi(u,v) = m\Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}(nv-j+1)\Big). \tag{25}$$
Hence, squaring and integrating in $v$, one finds
$$\int_{\frac{j-1}{n}}^{\frac{j}{n}} (\partial_1 C_\Pi(u,v))^2\, dv = \frac{m^2}{n}\int_0^1 \Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j} v\Big)^2 dv = \frac{m^2}{n}\Big((T\Delta)_{i,j}^2 + (T\Delta)_{i,j}\Delta_{i,j} + \frac{1}{3}\Delta_{i,j}^2\Big).$$
Summing over the cells yields the formula for $\xi(C_\Pi)$.
$$\xi(C_\Pi) = 6\sum_{i,j=1}^{m,n} \int_{\frac{i-1}{m}}^{\frac{i}{m}}\int_{\frac{j-1}{n}}^{\frac{j}{n}} (\partial_1 C_\Pi(u,v))^2\, dv\, du - 2 = \frac{6m}{n}\sum_{i,j=1}^{m,n} \Big((T\Delta)_{i,j}^2 + (T\Delta)_{i,j}\Delta_{i,j} + \frac{1}{3}\Delta_{i,j}^2\Big) - 2$$
$$= \frac{6m}{n}\,\mathrm{tr}\Big(\Delta^\top\Delta\big(TT^\top + T^\top + \tfrac{1}{3}I_n\big)\Big) - 2 = \frac{6m}{n}\,\mathrm{tr}\big(\Delta^\top\Delta\, M_\xi\big) - 2.$$
In the case of $C_{\mathrm{pd}} \in \mathcal{C}^\Delta_{\mathrm{pd}}$, note that with (14) it now is
$$\partial_1 C_{\mathrm{pd}}(u,v) = m\Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\,\mathbf{1}_{\{nv-j+1 \ge f_{i,j}(mu-i+1)\}}\Big)$$
for $(u,v) \in I_{i,j}$. Note further that, due to $f_{i,j}$ being Lebesgue measure preserving, it is
$$\int\!\!\int_{[0,1]^2} \mathbf{1}_{\{v \ge f_{i,j}(u)\}}\, dv\, du = \int_0^1 (1 - f_{i,j}(u))\, du = \frac{1}{2},$$
and hence
$$\int_{\frac{i-1}{m}}^{\frac{i}{m}}\int_{\frac{j-1}{n}}^{\frac{j}{n}} (\partial_1 C_{\mathrm{pd}}(u,v))^2\, dv\, du = \frac{m}{n}\int_0^1\!\!\int_0^1 \Big(\sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\mathbf{1}_{\{v \ge f_{i,j}(u)\}}\Big)^2 dv\, du = \frac{m}{n}\Big(\Big(\sum_{l=1}^{j-1}\Delta_{i,l}\Big)^2 + \Big(\sum_{l=1}^{j-1}\Delta_{i,l}\Big)\Delta_{i,j} + \frac{1}{2}\Delta_{i,j}^2\Big).$$
Thus, one gets an extra $\tfrac{1}{6}\Delta_{i,j}^2$ compared to the previous case, and concludes that
$$\xi(C_{\mathrm{pd}}) = 6\int_{[0,1]^2} (\partial_1 C_{\mathrm{pd}}(u,v))^2\, du\, dv - 2 = \xi(C_\Pi) + \frac{m}{n}\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 = \xi(C_\Pi) + \frac{m\,\mathrm{tr}(\Delta^\top\Delta)}{n}.$$
Since $C_{\nearrow}, C_{\searrow} \in \mathcal{C}^\Delta_{\mathrm{pd}}$, this result in particular also holds for them. Lastly, regarding tail dependence coefficients, it is a direct and classical observation that a copula with a bounded density has no tail dependence, compare, e.g., [17, below Remark 5.1]. In particular, $\lambda_L(C_\Pi) = \lambda_U(C_\Pi) = 0$. For $C_{\searrow}$, since $C_{\searrow} \le C_\Pi$ pointwise, it is clear from the definition of $\lambda_L$ and $\lambda_U$ in (16) and (17) that
$$0 \le \lambda_L(C_{\searrow}) \le \lambda_L(C_\Pi), \qquad 0 \le \lambda_U(C_{\searrow}) \le \lambda_U(C_\Pi).$$
Hence, also $\lambda_L(C_{\searrow}) = \lambda_U(C_{\searrow}) = 0$. Finally, for $C_{\nearrow}$, recall its form from (11).
For $t > 0$ sufficiently small, it is $(t,t) \in I_{1,1}$ and thus
$$\lambda_L(C_{\nearrow}) = \lim_{t \searrow 0} \frac{C_{\nearrow}(t,t)}{t} = \lim_{t \searrow 0} \frac{\Delta_{1,1}\min\{nt, mt\}}{t} = \Delta_{1,1}(m \wedge n).$$
Similarly, for $1-t > 0$ sufficiently small, it is $(t,t) \in I_{m,n}$ and thus
$$C_{\nearrow}(1,1) - C_{\nearrow}(t,t) = n(1-t)\sum_{k=1}^{m-1}\Delta_{k,n} + m(1-t)\sum_{l=1}^{n-1}\Delta_{m,l} + \Delta_{m,n}\max\{n(1-t), m(1-t)\}$$
$$= n(1-t)\Big(\frac{1}{n} - \Delta_{m,n}\Big) + m(1-t)\Big(\frac{1}{m} - \Delta_{m,n}\Big) + \Delta_{m,n}\max\{n(1-t), m(1-t)\} = (1-t)\big(2 - (m \wedge n)\Delta_{m,n}\big).$$
This yields
$$\lambda_U(C_{\nearrow}) = 2 - \lim_{t \nearrow 1} \frac{1 - C_{\nearrow}(t,t)}{1-t} = 2 - \lim_{t \nearrow 1} \frac{C_{\nearrow}(1,1) - C_{\nearrow}(t,t)}{1-t} = \Delta_{m,n}(m \wedge n),$$
finishing the proof.
Proof of Corollary 3.4. Note that
$$\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 \le \sum_{i=1}^m \Big(\sum_{j=1}^n \Delta_{i,j}\Big)^2 = \sum_{i=1}^m \frac{1}{m^2} = \frac{1}{m}$$
and in the same way $\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 \le \tfrac{1}{n}$. Hence, by Proposition 3.3 (iii), we have
$$\big|\xi\big(C^\Delta_{\nearrow}\big) - \xi\big(C^\Delta_\Pi\big)\big| = \frac{m\,\mathrm{tr}(\Delta^\top\Delta)}{n} = \frac{m}{n}\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 \le \frac{m}{n(m \vee n)} = \begin{cases}\frac{m}{n^2}, & m \le n,\\[2pt] \frac{1}{n}, & m > n,\end{cases}$$
as claimed.
References
[1] Jonathan Ansari and Sebastian Fuchs. A simple extension of Azadkia and Chatterjee's rank correlation to a vector of endogenous variables. arXiv preprint arXiv:2212.01621, 2022.
[2] Jonathan Ansari and Sebastian Fuchs. On continuity of Chatterjee's rank correlation and related dependence measures. arXiv preprint arXiv:2503.11390, 2025.
[3] Jonathan Ansari, Patrick B. Langthaler, Sebastian Fuchs, and Wolfgang Trutschnig. Quantifying and estimating dependence via sensitivity of conditional distributions. arXiv preprint arXiv:2308.06168, 2023.
[4] Jonathan Ansari and Marcus Rockel. Dependence properties of bivariate copula families. Dependence Modeling, 12(1):20240002, 2024.
[5] M. Azadkia and S. Chatterjee. A simple measure of conditional dependence. Ann. Stat., 49(6):3070–3102, 2021.
[6] S.
Chatterjee. A new coefficient of correlation. J. Amer. Statist. Ass., 116(536):2009–2022, 2020.
[7] Claudia Cottin and Dietmar Pfeifer. From Bernstein polynomials to Bernstein copulas. J. Appl. Funct. Anal., 9(3-4):277–288, 2014.
[8] Holger Dette, Karl F. Siburg, and Pavel A. Stoimenov. A copula-based non-parametric measure of regression dependence. Scandinavian Journal of Statistics, 40(1):21–41, 2013.
[9] Fabrizio Durante, Juan Fernandez-Sanchez, and Wolfgang Trutschnig. A typical copula is singular. Journal of Mathematical Analysis and Applications, 430(1):517–527, 2015.
[10] Fabrizio Durante and Carlo Sempi. Principles of Copula Theory. Boca Raton, FL: CRC Press, 2016.
[11] Valdo Durrleman, Ashkan Nikeghbali, and Thierry Roncalli. Copulas approximation and new families. Available at SSRN 1032547, 2000.
[12] Sebastian Fuchs and Marco Tschimpke. Total positivity of copulas from a Markov kernel perspective. Journal of Mathematical Analysis and Applications, 518(1):126629, 2023.
[13] Harry Joe. Multivariate Models and Dependence Concepts, volume 73 of Monogr. Stat. Appl. Probab. London: Chapman and Hall, 1997.
[14] Viktor Kuzmenko, Romel Salam, and Stan Uryasev. Checkerboard copula defined by sums of random variables. Dependence Modeling, 8(1):70–92, 2020.
[15] X. Li, P. Mikusiński, H. Sherwood, and M. D. Taylor. On approximation of copulas. Distributions with given marginals and moment problems, pages 107–116, 1997.
[16] Xin Li, P. Mikusiński, and Michael D. Taylor. Strong approximation of copulas. Journal of Mathematical Analysis and Applications, 225(2):608–623, 1998.
[17] Georg Mainik. Risk aggregation with empirical margins: Latin hypercubes, empirical copulas, and convergence of sum distributions. Journal of Multivariate Analysis, 141:197–216, 2015.
[18] Piotr Mikusiński, Howard Sherwood, and Michael D. Taylor. Shuffles of min. Stochastica, 13(1):61–74, 1992.
[19] Piotr Mikusiński and Michael D. Taylor. Some approximations of n-copulas. Metrika, 72(3):385–414, 2010.
[20] Roger B. Nelsen. An Introduction to Copulas. 2nd ed. New York, NY: Springer, 2006.
[21] Dietmar Pfeifer, Hervé Awoumlac Tsatedem, Andreas Mändle, and Côme Girschig. New copulas based on general partitions-of-unity and their applications to risk management. Dependence Modeling, 4(1):000010151520160006, 2016.
[22] Alessio Sancetta and Stephen Satchell. The Bernstein copula and its applications to modeling and approximations of multivariate distributions. Econometric Theory, 20(3):535–562, 2004.
[23] Manuela Schreyer, Roland Paulin, and Wolfgang Trutschnig. On the exact region determined by Kendall's τ and Spearman's ρ. Journal of the Royal Statistical Society Series B: Statistical Methodology, 79(2):613–633, 2017.
[24] Issey Sukeda and Tomonari Sei. On the minimum information checkerboard copulas under fixed Kendall's rank correlation. arXiv preprint arXiv:2306.01604, 2023.
[25] Marco Tschimpke, Manuela Schreyer, and Wolfgang Trutschnig. Revisiting the region determined by Spearman's ρ and Spearman's footrule φ. Journal of Computational and Applied Mathematics, 457:116259, 2025.
[26] Eric W. Weisstein. Beta function. https://mathworld.wolfram.com/, 2002.
[27] Yanting Zheng, Jingping Yang, and Jianhua Z. Huang. Approximation of bivariate copulas by patched bivariate Fréchet copulas. Insurance: Mathematics and Economics, 48(2):246–256, 2011.
Beyond Basic A/B testing: Improving Statistical Efficiency for Business Growth
Changshuai Wei* (LinkedIn Corporation, chawei@linkedin.com), Phuc Nguyen (LinkedIn Corporation, honnguyen@linkedin.com), Benjamin Zelditch (LinkedIn Corporation, bzelditch@linkedin.com), Joyce Chen (LinkedIn Corporation, joychen@linkedin.com)
Abstract
The standard A/B testing approaches in large-scale industry applications are mostly based on the t-test. These standard approaches, however, suffer from low statistical power in business settings, due to small sample sizes, non-Gaussian distributions, or return-on-investment (ROI) considerations. In this paper, we propose several approaches to address these challenges: (i) regression adjustment, generalized estimating equations, Mann-Whitney U and zero-trimmed U, which address each of these issues separately, and (ii) a novel doubly robust generalized U that handles ROI considerations, distributional robustness and small samples in one framework. We provide theoretical results on asymptotic normality and efficiency bounds, together with insights on the efficiency gain from theoretical analysis. We further conduct comprehensive simulation studies and apply the methods to multiple real A/B tests.
1 Introduction
Controlled experiments have been the gold standard of measuring the effect of a treatment/drug in biological and medical research for more than 100 years [11, 12]. In the last few decades, the rise of the internet and machine learning (ML) algorithms led to the development and revival of controlled experiments for online internet applications, i.e., A/B testing [22]. Most A/B testing in industry follows standard statistical approaches, e.g., the t-test, particularly in large-scale recommender systems (e.g., Feeds, Ads, Growth), which involve sample sizes on the order of millions to billions, and measure engagement metrics such as clicks or impressions.
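As a baseline for the discussion that follows, the standard two-sample procedure referenced above can be sketched in a few lines; this is a generic Welch-style t statistic with a large-sample normal approximation for the two-sided p-value (our own illustration, not the paper's implementation):

```python
import math

def welch_t(a, b):
    # Welch two-sample t statistic (unequal variances) with a
    # large-sample normal approximation for the two-sided p-value
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    t = (mb - ma) / math.sqrt(va / na + vb / nb)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p
```

With small samples the normal approximation understates tail probabilities, and with heavy-tailed or zero-inflated metrics the mean difference itself is noisy; these are precisely the regimes the methods in this paper target.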
In business settings, e.g., Marketing, Software-as-a-Service (SaaS), and Business-to-Business (B2B), there are unique challenges where standard approaches like the t-test can lead to either incorrect conclusions or insufficient statistical power: (i) Return-on-Investment (ROI) or Return-on-Ad-Spend (ROAS) measurements are almost always a key consideration in business settings, yet there has been little research on how to efficiently measure this type of metric in the A/B testing setting; (ii) small sample sizes are very common in business-setting A/B tests, since increasing the sample size typically incurs additional cost; (iii) revenue, as a core metric in business settings, is typically right-skewed with a heavy tail, and since revenue generation is typically a sparse event conditional on sales outreach or marketing touch-points, we also need to address zero-inflation.

∗Corresponding Author. Theoretical development was primarily carried out by C. Wei, who also led the design, execution, and writing of the manuscript. Co-authors contributed to the algorithm implementation, simulation studies, real-world applications, and co-writing the manuscript. Preprint. arXiv:2505.08128v1 [stat.ME] 13 May 2025

In this paper, we propose a series of statistical methods to address the above challenges, including regression adjustment, generalized estimating equations, and the Mann-Whitney U. We also develop a novel doubly robust generalized U statistic that combines the advantages of the above methods. As far as we know, this is the first comprehensive treatment of efficient statistical methods for A/B testing in the tech industry, particularly for business settings. The key contributions of the paper are: 1) Methodology innovations to improve testing efficiency in business settings, in particular using (i) regression adjustment for
ROI considerations, (ii) GEE for addressing small sample sizes with repeated measurements, and (iii) the Mann-Whitney U for non-Gaussian data, in particular the Zero-Trimmed U test for zero-inflated, heavy-tailed data. 2) Theoretical development on (i) a systematic analysis of the asymptotic efficiency of the proposed approaches and, more importantly, (ii) a novel doubly robust generalized U that attains the semi-parametric efficiency bound and can concurrently address ROI, longitudinal analysis, and ill-behaved distributions, as well as (iii) rigorous, efficient algorithms for large data for broader applicability. 3) Thorough simulation studies to evaluate the empirical efficiencies, together with applications of the methods to multiple real business applications. An in-depth discussion of the methodology innovations and theoretical contributions can be found in Section 7. Though these methods are proposed to address challenges in business settings, they are broadly applicable to general A/B tests in tech and to experiments in non-tech fields.

The rest of the paper is structured as follows. For the remainder of Section 1, we discuss related work and introduce the problem setup and preliminaries. In Sections 2 and 3, we discuss regression adjustment and GEE. In Section 4, we introduce the Mann-Whitney U and Zero-Trimmed U for non-parametric testing. In Section 5, we develop the methodology for the doubly robust generalized U test. We then conduct simulation studies and real data analysis in Section 6 and conclude the paper with a discussion in Sections 7 and 8. Details on algorithms, theoretical proofs, analytical derivations, and simulation set-up can be found in the Appendix.

1.1 Related Works

There have been multiple research efforts in the tech industry to address limitations of standard t-tests, particularly low sensitivity and small treatment effects [24].
Covariate adjustment [11] has been widely used as an improvement over the t-test or proportion test in biomedical research [16, 13, 18, 21]. An important relevant development in the tech field is Controlled-experiment Using Pre-Experiment Data (CUPED) [10], which leverages pre-experiment metrics in a simple linear adjustment to reduce variance. Later extensions of the method include leveraging in-experiment data [44, 9], non-linear predictive modeling [32], and individual-variance weighting [26] for further variance reduction. Meanwhile, there are increasing concerns about other challenges, such as repeated measurements [27, 46] and non-Gaussian heavy-tailed distributions [20, 2]. Semi-parametric approaches such as GEE have been well adopted in non-tech fields for repeated measurements [25, 39]. Nonparametric methods, such as the Wilcoxon rank-sum test and U-statistics, can provide robustness to ill-behaved distributions [29, 17, 3, 5, 19, 23]. In recent years, U statistics have emerged as an important class of statistical methods in biomedical research [14, 28, 30, 45] and the social sciences [6, 1, 31], with particular developments in genomics [42, 41, 40] and causal inference [43, 38, 7] for public health studies. The applications of U statistics in the tech industry are largely limited to ROC-AUC (equivalent to the Mann-Whitney U [15]) for evaluating ML models, and it is often used only as a point estimate. While there has been some development of metric learning and non-directional tests (e.g., goodness of fit, independence) using U statistics [8, 34, 33], they are not suitable for A/B testing.

1.2 Problem Setup and Preliminaries

Assume we perform an A/B test to compare two treatments,
$z = 0$ vs. $z = 1$, on a business metric $y$. Our goal is to evaluate the "improvement" of $y$ in the treatment over the control group (a directional test).

T Test: One common formulation of the "improvement" is $\delta = E(y_{i1} - y_{i0})$, and we can use the t-test for the corresponding null vs. alternative hypotheses: $H_0: \delta = 0$ vs. $H_1: \delta > 0$. The corresponding t-statistic is $t_n = (\bar{y}_1 - \bar{y}_0)/\sqrt{\hat{v}_{10}}$, where $\bar{y}_k$ is the sample mean for $z_i = k$, and $\hat{v}_{10}$ is the corresponding variance estimator, depending on the equal- or unequal-variance assumption. The normal approximation $t_n \to_d N(0, 1)$ can be leveraged to obtain p-values or confidence intervals.

Statistical Efficiency: We can measure the statistical efficiency of an estimation process by mean squared error (MSE), and define the relative efficiency as the inverse ratio of MSEs,
$$r_n(\hat\delta_1, \hat\delta_2) = \frac{E(\hat\delta_2 - \delta)^2}{E(\hat\delta_1 - \delta)^2} = \frac{Var(\hat\delta_2) + Bias^2(\hat\delta_2)}{Var(\hat\delta_1) + Bias^2(\hat\delta_1)},$$
where $\hat\delta_1$ and $\hat\delta_2$ are two different estimators of $\delta$. When both estimators are unbiased, the relative efficiency reduces to a ratio of variances. We define the asymptotic relative efficiency (ARE) as $r(\hat\delta_1, \hat\delta_2) = \lim_{n\to\infty} r_n(\hat\delta_1, \hat\delta_2)$.

For hypothesis testing, it can happen that two testing procedures do not correspond to the same parameter. In this case, we can use the Pitman efficiency, $r(t_1, t_2) = \lim_{n\to\infty} n_{t_2}/n_{t_1}$, where $n_{t_1}$ and $n_{t_2}$ are the sample sizes required to reach the same power $\beta$ for an $\alpha$-level test, with test statistics $t_1$ and $t_2$ respectively. Assuming a local alternative (e.g., a small location shift $\delta$) and asymptotic normality of the test statistics (i.e., $\sqrt{n}\, t_{n,i} \to_d N(\mu_i(\delta), \sigma_i^2(\delta))$), the Pitman efficiency is equivalent to the following alternative definition of efficiency:
$$r(t_1, t_2) = \frac{\lambda_1^2}{\lambda_2^2} = \left(\frac{\mu'_1(0)/\sigma_1(0)}{\mu'_2(0)/\sigma_2(0)}\right)^2,$$
where $\lambda_k = \mu'_k(0)/\sigma_k(0)$ is the slope of test $k$. The equivalence can be shown by observing the power function $\beta(\delta) = 1 - \Phi(z_\alpha - \sqrt{n}\,\delta\lambda)$, and thus $n \propto 1/\lambda^2$.
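The relative-efficiency definition above can be illustrated with a small Monte Carlo sketch. The function name and the mean-vs-median comparison are our own illustration, not from the paper; under normality the classical ARE of the median relative to the mean is $2/\pi$, so the MSE ratio should come out near $\pi/2 \approx 1.57$:

```python
import numpy as np

def relative_efficiency(est1, est2, sampler, true_delta, n=200, reps=5000, seed=0):
    """Empirical r_n(est1, est2) = MSE(est2) / MSE(est1); values > 1 favor est1."""
    rng = np.random.default_rng(seed)
    sq_err1, sq_err2 = [], []
    for _ in range(reps):
        x = sampler(rng, n)                    # one simulated sample of size n
        sq_err1.append((est1(x) - true_delta) ** 2)
        sq_err2.append((est2(x) - true_delta) ** 2)
    return np.mean(sq_err2) / np.mean(sq_err1)

# Sample mean vs. sample median as location estimators under N(0, 1)
r = relative_efficiency(np.mean, np.median,
                        lambda rng, n: rng.normal(0.0, 1.0, n),
                        true_delta=0.0)
print(round(r, 2))  # empirically near pi/2 ~ 1.57
```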
In this paper, we evaluate the statistical efficiency of a series of methodologies addressing challenges in business settings, by comparing them with the t-test and among themselves. The comparison is either asymptotic efficiency in analytic form or empirical efficiency in terms of simulation studies.

2 Regression Adjustment for ROI

Cost is a core guardrail (and risk) metric in the evaluation of algorithms or strategies in business settings. One common strategy is to perform t-tests on both primary metrics (e.g., revenue) and guardrail metrics (e.g., cost) separately. However, this strategy lacks a unified view of ROI and can lead to decision confusion when the conclusions on the two metrics go in opposite directions. Here, we propose to use the regression adjustment approach [11, 13] as a fundamental approach for measuring ROI, by forming the parametric model: $E(y_i \mid z_i, w_i) = g(\beta_0 + \beta_1 z_i + \gamma^T w_i)$, where $y_i$ denotes revenue or another primary metric, $z_i$ denotes the treatment assignment, $w_i$ is the vector of variables we want to control for (e.g., cost), $y_i \mid z_i, w_i$ follows a distribution from a certain parametric family with mean $E(y_i \mid z_i, w_i)$, and $g$ is the link function. To see how $\beta_1$ provides a unified view of "ROI", assume $y_i$ is revenue and $w_i$ is a scalar metric of cost (or investment); then $\beta_1$ can be interpreted as the "treatment effect on revenue assuming the same level of investment". We can then perform hypothesis testing (e.g., a Wald test) on $\beta_1$ for: $H_0: \beta_1 = 0$ vs. $H_1: \beta_1 > 0$. Besides the "ROI" consideration, regression adjustment has two other significant advantages over the t-test: (i) when there is confounding, regression adjustment is unbiased whereas the t-test or similar tests like
proportion tests are biased; (ii) when there is no confounding, regression adjustment has smaller variance and is thus more efficient. In fact, under parametric settings, regression adjustment based on maximum likelihood estimation reaches the Cramér-Rao lower bound [37] and is hence the most efficient among all unbiased estimators (Appendix B.1). To illustrate how and where the efficiency is gained over the t-test, assume a Gaussian distribution and identity link function: $y_i = \beta_0 + \beta_1 z_i + \gamma^T w_i + \epsilon_i$, $\epsilon_i \sim N(0, \sigma^2)$, where $\beta_1$ measures the treatment effect controlling for $w$, i.e., $\beta_1 = E(y \mid z=1, w) - E(y \mid z=0, w)$. Under confounding and the above parametric set-up, we can show $\hat\beta_1$ is unbiased, i.e., $E(y(1) - y(0)) = \beta_1$. Meanwhile, the t-test estimate ($\hat\tau = \bar{y}_1 - \bar{y}_0$) is biased by a constant term $\gamma^T[E(w \mid z=1) - E(w \mid z=0)]$ (Appendix B.2). In this case, the asymptotic relative efficiency is dominated by the bias term (for both, $var \propto 1/n$), and hence $r(\hat\beta_1, \hat\tau) \to \infty$ as $n \to \infty$. When there is no confounding, i.e., $z \perp w$, regression adjustment and the t-test are both unbiased; however, regression adjustment is more efficient: $r(\hat\beta_1, \hat\tau) = 1 + \sigma_w^2/\sigma^2 \geq 1$, where $\sigma_w^2 = \gamma^T Var(w)\gamma$ represents the variance of $y$ explained by $w$ (Appendix B.3). We can see that regression adjustment has at least the same efficiency as the t-test. As long as $w$ can explain some variance of $y$ (i.e., $\sigma_w^2 > 0$ or $\gamma \neq 0$), regression adjustment is strictly more efficient than the t-test. This is also the key reason behind the efficiency of all the CUPED-type methods: they include pre-experiment variables $w$ that can explain some variance of $y$ and satisfy $z \perp w$ by design.

3 GEE for Longitudinal Analysis

For almost all A/B testing in industry, we measure the metrics regularly over time. This is one unique characteristic of the tech industry: repeated measurements of metrics have negligible (additional) cost, whereas in other fields, such as the biomedical field, repeated measurements are often constrained by expense.
Therefore, it is essential to leverage the longitudinal repeated measurements in A/B testing to improve power, particularly in business settings where sample-size limitations are prevalent. Instead of the common practice of performing analysis on a snapshot of the data, we propose to perform longitudinal analysis on all collected data by leveraging GEE [25, 39]. Assume the following model: $E(y_{it} \mid z_i, w_{it}) = \mu_{it} = g(\beta_0 + \beta_1 z_i + \gamma^T w_{it})$, where $y_{it}$ is the repeated measurement of the primary metric, and $w_{it}$ is the set of repeated measurements of variables (e.g., cost) and time-invariant variables (e.g., metadata) that we want to control for. $\beta_1$ measures the treatment effect on $y$ controlling for $w_{it}$. Note that we can change the parametric form inside $g(\cdot)$ to measure more complex treatment effects, e.g., a growth-curve effect $\beta_1 + t\beta_2$ by setting $\mu_{it} = g(\beta_0 + \beta_1 z_i + \beta_2 z_i t + \gamma^T w_{it})$. We use the following GEE for estimation and inference: $\sum_i D_i^T V_i^{-1}(y_i - \mu_i) = 0$, where $y_i = [y_{i0}, \cdots, y_{it}, \cdots]^T$, $\mu_i = [\mu_{i0}, \cdots, \mu_{it}, \cdots]^T$, $\theta = [\beta^T, \gamma^T]^T$, and $D_i = \partial\mu_i/\partial\theta$, $V_i = A_i R(\alpha) A_i$, $A_i = diag\{\sqrt{Var(y_{it} \mid z_i, w_{it})}\}$. Here $R(\alpha)$ is a working correlation matrix representing the correlation structure of the repeated measurements, and $A_i$ is a diagonal matrix with the standard deviation of the $t$-th measurement on the $t$-th diagonal. Let $u_i = D_i^T V_i^{-1}(y_i - \mu_i)$ and $B = E(D^T V^{-1} D)$; we can estimate $\theta$ iteratively, $\theta^{(s+1)} = \theta^{(s)} + \hat{B}^-(\theta^{(s)}) \sum u_i(\theta^{(s)})$, where $\hat{B} = \frac{1}{n}\sum D_i^T V_i^{-1} D_i$ is the empirical estimate of $B$. The estimate $\hat\theta$ is known to be asymptotically normal under mild regularity conditions. For completeness (and its connection to Section 5.1), we state the results
in the following theorem, with a sketch of the proof provided in Appendix C.1.

Theorem 1 Let $\Sigma = Var(u_i)$. Then, under mild regularity conditions, we have consistency: $\hat\theta \to_p \theta$, and asymptotic normality: $\sqrt{n}(\hat\theta - \theta) \to_d N(0, B^{-T}\Sigma B^{-1})$. Here, the variance can be estimated via $\hat\Sigma = \frac{1}{n}\sum_i \hat{u}_i \hat{u}_i^T$ and $\hat{B} = \frac{1}{n}\sum D_i^T V_i^{-1} D_i$.

Since GEE uses all the data, it intuitively has higher efficiency for detecting the treatment effect than snapshot analysis. For deeper insight into where the efficiency comes from, assume a linear model with a Gaussian distribution, $y_{it} = \beta_0 + \beta_1 z_i + \gamma^T w_{it} + \epsilon_{it}$, where $\epsilon_{it} \sim N(0, \sigma^2)$, $Cov(\epsilon_i) = \sigma^2 R$, and $R \succ 0$. For ease of comparison with snapshot regression analysis, we further assume $w_{it}$ is constant over time, i.e., $w_{it} = w_i$. We can show the variance of the GEE estimate is $Var(\hat\theta_{gee}) = \frac{\sigma^2}{e^T R^{-1} e}(\sum_i x_i x_i^T)^{-1}$, where $x_i = [1, z_i, w_i^T]^T$, $e = [1, 1, \cdots, 1]^T$, and $X_i = e x_i^T$. For the snapshot regression analysis, assume it is done on the last time point; the corresponding estimate $\hat\theta$ has variance $Var(\hat\theta_{reg}) = \sigma^2(\sum_i x_i x_i^T)^{-1}$. The relative efficiency is then $r(\hat\beta_{1,gee}, \hat\beta_{1,reg}) = e^T R^{-1} e > 1$. We provide the derivation and a discussion of additional insights in Appendix C.2.

4 U Statistics for Non-Gaussian Distributed Metrics

In many common business scenarios, primary metrics such as revenue exhibit strong characteristics of non-Gaussian distributions, e.g., right-skewed, heavy-tailed distributions. Further, important business events such as conversions happen sparsely, making the primary metrics often zero-inflated. In these scenarios, standard parametric approaches such as the t-test can suffer from inflated type I error or power loss. More robust and efficient non-parametric tests are needed.

4.1 Mann-Whitney U Test

Given two independent samples $\{y_{1i}\}_{i=1}^{n_1}$ and $\{y_{0j}\}_{j=1}^{n_0}$, the Mann-Whitney U statistic [29, 17] is given by
$$U = \frac{1}{n_0 n_1}\sum_{i=1}^{n_1}\sum_{j=1}^{n_0} I_{y_{1i} \geq y_{0j}}, \qquad (1)$$
where $I$ is the indicator function.
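As a sanity check, definition (1) can be evaluated directly and compared against SciPy's implementation (a sketch; the lognormal example data are our own illustration). SciPy's statistic counts strictly-greater pairs plus half of the tied pairs, so with continuous data it agrees with (1) exactly:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
y1 = rng.lognormal(mean=0.2, size=300)   # treatment: small location shift on the log scale
y0 = rng.lognormal(mean=0.0, size=300)   # control

# Direct evaluation of (1): fraction of (i, j) pairs with y1_i >= y0_j
U_hat = np.mean(y1[:, None] >= y0[None, :])

# Cross-check against SciPy (one-sided test of H1: y1 stochastically larger)
res = mannwhitneyu(y1, y0, alternative='greater')
assert np.isclose(U_hat, res.statistic / (len(y1) * len(y0)))
print(f"U_hat={U_hat:.3f}  p={res.pvalue:.4f}")
```

In practice one would use the rank-based computation (via the Wilcoxon statistic $W$ below) rather than the $O(n_0 n_1)$ pairwise loop.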
Observe that $E[U] = E[I\{y_{1i} \geq y_{0j}\}] = P(y_{1i} \geq y_{0j})$, so $U$ is an unbiased estimator of $\delta = P(y_{1i} > y_{0j})$. We can then use the Mann-Whitney U to test whether $y_1$ is greater than $y_0$. Formally, the null hypothesis is $H_0: P(y_1 \geq y_0) = \frac{1}{2}$, and the alternative hypothesis is $H_1: P(y_1 \geq y_0) > \frac{1}{2}$. It can be shown that $\sqrt{n}(U - \delta) \to_d N(0, \sigma_u^2)$, where $\sigma_u^2 = \frac{n_0 + n_1}{12}(\frac{1}{n_0} + \frac{1}{n_1})$ under $H_0$. Leveraging this, one can perform a score-type hypothesis test of $H_0$ using asymptotic normality. Note that this is different from testing a difference in means.

Let $\kappa(y_{1i})$ denote the rank of $y_{1i}$ in the combined sample of $\{y_{1i}\}_{i=1}^{n_1}$ and $\{y_{0j}\}_{j=1}^{n_0}$ in descending order, i.e., $\kappa(y_{1i}) = 1 + \sum_{i' \neq i}^{n_1} I_{y_{1i} < y_{1i'}} + \sum_{j}^{n_0} I_{y_{1i} < y_{0j}}$. The Wilcoxon rank-sum test statistic is given by $W = \sum_{i=1}^{n_1}\kappa(y_{1i}) - \frac{n_1(n_1 + n_0 + 1)}{2} = -n_1 n_0 U + \frac{n_1 n_0}{2}$. This relationship between $W$ and $U$ allows us to compute $U$ efficiently for large sample sizes by leveraging fast ranking algorithms.

To compare the relative efficiency of the Mann-Whitney U and the t-test, we assume a local alternative of a small location shift $\delta$ from a distribution $F$ with density function $f$ and variance $\sigma^2$. The Pitman relative efficiency is $r(U, \tau) = \frac{\lambda_U^2}{\lambda_\tau^2} = 12\sigma^2\left(\int f^2(x)\,dx\right)^2$ (Appendix D.1). Using this result, we can show that for the normal distribution, $r(U, \tau) = 3/\pi$; for the Laplace distribution, $r(U, \tau) = 1.5$; for the log-normal, $r(U, \tau)$ increases exponentially with the variance parameter of the log-normal; and for the Cauchy distribution, $r(U, \tau) = \infty$, as the t-test breaks down (details in Appendix D.1.1). For these common heavy-tailed distributions, the Mann-Whitney U is more efficient. Even for perfectly normally distributed data, the Mann-Whitney
U's efficiency is very close to that of the t-test.

4.2 Zero-Trimmed U Test

The challenge of non-Gaussian distributions is often two-fold in business scenarios: the heavy-tail nature and the zero-inflation nature. We can exploit the zero-inflation characteristic to further improve efficiency. The idea is to trim off the excess zeros and focus on the continuously distributed part and the "residual" zero difference. Let $n_1^+ = \sum_{i=1}^{n_1} I_{y_{1i} > 0}$ and $n_0^+ = \sum_{j=1}^{n_0} I_{y_{0j} > 0}$. The proportions of positive values in the two samples are $\hat{p}_1 = n_1^+/n_1$ and $\hat{p}_0 = n_0^+/n_0$; define $\hat{p} = \max\{\hat{p}_1, \hat{p}_0\}$. Remove $n_1(1 - \hat{p})$ zeros from $\{y_{1i}\}_{i=1}^{n_1}$ and $n_0(1 - \hat{p})$ zeros from $\{y_{0j}\}_{j=1}^{n_0}$. Let $\{y'_{1i}\}_{i=1}^{n'_1}$ and $\{y'_{0j}\}_{j=1}^{n'_0}$ denote the residual samples containing $n'_1 = n_1\hat{p}$ and $n'_0 = n_0\hat{p}$ data points, respectively. Let $\kappa(y'_{1i})$ denote the rank of $y'_{1i}$ in the combined residual samples in descending order. The zero-trimmed Wilcoxon rank-sum test statistic is given by $W' = \sum_{i=1}^{n'_1}\kappa(y'_{1i}) - \frac{n'_1(n'_1 + n'_0 + 1)}{2}$. Conditioning on $\hat{p}_0$ and $\hat{p}_1$, we have $E(W' \mid \hat{p}_1, \hat{p}_0) = \frac{n'_1 n_0^+ - n_1^+ n'_0}{2}$ and $Var(W' \mid \hat{p}_1, \hat{p}_0) = \frac{n_0^+ n_1^+(n_0^+ + n_1^+ + 1)}{12}$. We can then show (details in Appendix D.2) that its variance is
$$\sigma_{W'}^2 = \frac{n_1^2 n_0^2}{4\hat{p}^2}\left(\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_0(1 - \hat{p}_0)}{n_0}\right) + \frac{n_1 n_0 \hat{p}_1\hat{p}_0}{12}(n_1\hat{p}_1 + n_0\hat{p}_0) + o_p(n^3).$$
We can estimate the variance empirically, $\hat\sigma_{W'}^2 = \frac{n_1^2 n_0^2}{4\hat{p}^2}(\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_0(1 - \hat{p}_0)}{n_0}) + \frac{n_0^+ n_1^+(n_0^+ + n_1^+)}{12}$, and perform statistical testing via $W'/\hat\sigma_{W'}$. To facilitate the comparison of efficiency, we assume $m = p_1 - p_0$ and $d = P(y_1^+ > y_0^+) - \frac{1}{2}$. The compound hypothesis is: $H_0: m = 0$ and $d = 0$; $H_1: (1 - I_{m>0})(1 - I_{d>0}) = 0$. We state the following theorem for the Pitman efficiency (proof in Appendix D.3).
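The trimming-and-ranking procedure above can be sketched directly from the formulas for $W'$ and $\hat\sigma_{W'}^2$. This is our own minimal implementation, not the authors' code; the sign convention for the z-score is our choice (with descending ranks, a stochastically larger treatment sample makes $W'$ negative), and the zero-inflated lognormal example data are illustrative:

```python
import numpy as np
from scipy.stats import rankdata, norm

def zero_trimmed_u(y1, y0):
    """Zero-trimmed Wilcoxon rank-sum statistic W' and a normal-approximation
    one-sided p-value (sketch of the formulas in Section 4.2)."""
    n1, n0 = len(y1), len(y0)
    np1, np0 = int((y1 > 0).sum()), int((y0 > 0).sum())
    p1, p0 = np1 / n1, np0 / n0
    p = max(p1, p0)
    # residual samples: all positives plus just enough zeros to reach n_k * p points
    y1p = np.concatenate([y1[y1 > 0], np.zeros(round(n1 * p) - np1)])
    y0p = np.concatenate([y0[y0 > 0], np.zeros(round(n0 * p) - np0)])
    ranks = rankdata(-np.concatenate([y1p, y0p]))      # descending ranks, ties averaged
    W = ranks[: len(y1p)].sum() - len(y1p) * (len(y1p) + len(y0p) + 1) / 2
    var = (n1**2 * n0**2) / (4 * p**2) * (p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0) \
        + np1 * np0 * (np1 + np0) / 12
    z = -W / np.sqrt(var)     # larger y1 (more positives or larger positives) -> larger z
    return z, norm.sf(z)

# Illustration: zero-inflated lognormal samples differing in both zero rate and location
rng = np.random.default_rng(3)
def zinf(rng, n, p_pos, shift):
    pos = rng.uniform(size=n) < p_pos
    return np.where(pos, rng.lognormal(mean=shift, size=n), 0.0)

y1, y0 = zinf(rng, 500, 0.6, 0.7), zinf(rng, 500, 0.4, 0.0)
z, pval = zero_trimmed_u(y1, y0)
print(f"z={z:.2f}  p={pval:.4f}")
```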
Theorem 2 Let $p$ denote the proportion of positive values under $H_0$, let $\phi$ be the direction of the compound $H_1$, and let $\nu$ be the effect size along direction $\phi$, i.e., $m(\nu) = \nu\cos\phi$ and $d(\nu) = \nu\sin\phi$. The compound hypothesis can be transformed into a simple hypothesis test in direction $\phi$, i.e., $H_0: \nu = 0$ vs. $H_1^\phi: \nu > 0$, and the corresponding Pitman efficiency is
$$r_\phi(W', W) = \frac{\sigma_W^2(0)}{\sigma_{W'}^2(0)}\left(\frac{\mu'_{W'}(0)}{\mu'_W(0)}\right)^2 = \frac{1 - p + \frac{p^2}{3}}{p^2 - p^3 + \frac{p^3}{3}}\left(\frac{p\cos\phi + 2p^2\sin\phi}{\cos\phi + 2p^2\sin\phi}\right)^2 \qquad (2)$$

We can then investigate the relative efficiency by varying $p \in (0, 1]$ and $\phi \in [0, \frac{\pi}{2}]$ (Appendix Figures 2 and 3). Note that $r_\phi(W', W)$ is with respect to the tie-adjusted variance $\hat\sigma_W^2 = \frac{n_1^2 n_0^2}{4}(\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_0(1 - \hat{p}_0)}{n_0}) + \frac{n_0^+ n_1^+(n_0^+ + n_1^+)}{12}$. We also provide results for $r_\phi(W', W_o)$, the efficiency over $W_o$ with the original unadjusted variance $\frac{n_1 n_2(n_1 + n_2 + 1)}{12}$, in Appendix eq. (35).

5 Advanced Distribution-Free Tests

In this section we develop a general and robust U-statistic-based methodology that can (i) measure various definitions of treatment effect, (ii) address both covariate adjustment and "ill-behaved" distributions in business settings, and (iii) utilize repeated measurements in A/B tests.

5.1 Doubly Robust Generalized U Test

Let $y_i$ denote the response variable that measures the business return, e.g., conversion or revenue, $z_i$ the treatment assignment, and $w_i$ the variables that need to be adjusted for, e.g., cost or impressions. We define the treatment effect as $\delta = E(\varphi(y_{i1} - y_{i0}))$, where $y_{i1}$ and $y_{i0}$ represent the response variables for $z_i = 1$ and $z_i = 0$, respectively. Obviously, we observe only one of $y_{i0}$ and $y_{i1}$. $\varphi(\cdot)$ is a monotonic function with finite second moment, i.e., $E(\varphi^2(y_{i1} - y_{i0})) < \infty$. For example, when $\varphi(y_{i1} - y_{j0}) = I_{y_{i1} > y_{j0}}$, we have $\delta = P(y_{i1} > y_{i0})$. We can also use other monotonic, finite
functions such as the logistic function, $\varphi(y_{i1} - y_{j0}) = [1 + \exp(-(y_{i1} - y_{j0}))]^{-1}$, the Probit function, $\varphi(y_{i1} - y_{j0}) = \Phi(y_{i1} - y_{j0})$, or a signed Laplacian kernel, $\varphi(y_{i1} - y_{j0}) = sign(y_{i1} - y_{j0})\exp(-\frac{|y_{i1} - y_{j0}|}{\sigma})$. Note that when $\varphi(\cdot)$ is the identity, we get $\delta = E(y_{i1} - y_{i0})$, the treatment effect corresponding to the t-test; however, this does not guarantee the finite-second-moment condition (e.g., the second moment is infinite under the Cauchy distribution).

Let $p = E(z)$. We can define a generalized U statistic: $U_n = \binom{n}{2}^{-1}\sum_{i,j \in C_2^n} h(y_i, y_j)$, where $h(y_i, y_j) = \varphi(y_{i1} - y_{j0})\xi_{ij} + \varphi(y_{j1} - y_{i0})\xi_{ji}$ and $\xi_{ij} = \frac{z_i(1 - z_j)}{2p(1 - p)}$. When there is no confounding, $E(U_n) = \delta$. In fact, when $\varphi(y_{i1} - y_{j0}) = I_{y_{i1} > y_{j0}}$, it is equivalent to (1). To address covariate adjustment, let $\pi_i = E(z_i \mid w_i)$ and $g_{ij} = E(\varphi(y_{i1} - y_{j0}) \mid w_i, w_j)$. We can form an efficient, doubly robust [36] version of the generalized U statistic (DRGU):
$$U_n^{DR} = \binom{n}{2}^{-1}\sum_{i,j \in C_2^n} h_{ij}^{DR}, \qquad (3)$$
where $h_{ij}^{DR} = \frac{z_i(1 - z_j)}{2\pi_i(1 - \pi_j)}(\varphi(y_{i1} - y_{j0}) - g_{ij}) + \frac{z_j(1 - z_i)}{2\pi_j(1 - \pi_i)}(\varphi(y_{j1} - y_{i0}) - g_{ji}) + \frac{g_{ij} + g_{ji}}{2}$. When $\pi$ and $g$ are known, we can show that $E(h_{ij}^{DR}) = \delta$, and thus $E(U_n^{DR}) = \delta$ (Appendix E.1). Further, the variance of $U_n^{DR}$ reaches the semi-parametric bound (Appendix E.2), i.e., the smallest variance among all unbiased estimators under the semi-parametric set-up. In most applications, we do not know $\pi$ and $g$ and need to estimate them via $\hat\pi$ and $\hat{g}$. As long as one of $\hat\pi$ and $\hat{g}$ is a consistent estimator, the corresponding U statistic, $\hat{U}_n^{DR}$, is also consistent, and hence doubly robust. We can estimate $\pi_i$ and $g_{ij}$ by imposing a linear structure: $\pi(w_i; \beta) = \phi([1, w_i^T]^T \cdot \beta)$, $g(w_i, w_j; \gamma) = \psi([1, w_i^T, w_j^T]^T \cdot \gamma)$, where $\phi(\cdot)$ and $\psi(\cdot)$ are link functions. Note that $g(\cdot)$ is a model on pairs of data points and can be considered a simplified graph neural network.
For estimation and inference of the parameters $\theta = (\delta, \beta, \gamma)$, one approach is sequential: first estimate $\hat\beta$ and $\hat\gamma$ with the regression models, then calculate $\hat{U}_n^{DR}(\hat\beta, \hat\gamma)$ and the corresponding asymptotic variance, accounting for the variance from $\hat\beta$ and $\hat\gamma$. Instead, we leverage U-statistics-based Generalized Estimating Equations (UGEE) [23] for joint estimation and inference:
$$U_n(\theta) = \sum_{i,j \in C_2^n} U_{n,ij} = \sum_{i,j \in C_2^n} G_{ij}(h_{ij} - f_{ij}) = 0, \qquad (4)$$
where $h_{ij} = [h_{ij1}, h_{ij2}, h_{ij3}]^T$, $f_{ij} = [f_{ij1}, f_{ij2}, f_{ij3}]^T$, $h_{ij1} = \frac{z_i(1 - z_j)}{2\pi_i(1 - \pi_j)}(\varphi(y_{i1} - y_{j0}) - g_{ij}) + \frac{z_j(1 - z_i)}{2\pi_j(1 - \pi_i)}(\varphi(y_{j1} - y_{i0}) - g_{ji}) + \frac{g_{ij} + g_{ji}}{2}$, $h_{ij2} = z_i + z_j$, $h_{ij3} = z_i(1 - z_j)\varphi(y_{i1} - y_{j0}) + z_j(1 - z_i)\varphi(y_{j1} - y_{i0})$, $f_{ij1} = \delta$, $f_{ij2} = \pi_i + \pi_j$, $f_{ij3} = \pi_i(1 - \pi_j)g_{ij} + \pi_j(1 - \pi_i)g_{ji}$, $\pi_i = \pi(w_i; \beta)$, $g_{ij} = g(w_i, w_j; \gamma)$, and $G_{ij} = D_{ij}^T V_{ij}^{-1}$, $D_{ij} = \partial f_{ij}/\partial\theta$, $V_{ij} = diag\{Var(h_{ijk} \mid w_i, w_j)\}$.

Theorem 3 Let $u_i = E(U_{n,ij} \mid y_{i0}, y_{i1}, z_i, w_i)$, $\Sigma = Var(u_i)$, $M_{ij} = \frac{\partial(f_{ij} - h_{ij})}{\partial\theta}$, and $B = E(GM)$. Let $\hat\delta$ be the first element of $\hat\theta$. Then, under mild conditions, we have consistency: $\hat\theta \to_p \theta$, and asymptotic normality:
$$\sqrt{n}(\hat\theta - \theta) \to_d N(0, 4B^{-T}\Sigma B^{-1}). \qquad (5)$$
Further, as long as one of $\pi$ and $g$ is correctly specified, $\hat\delta$ is consistent. When both are correctly specified, $\hat\delta$ attains the semi-parametric efficiency bound, i.e., no other regular estimator can have smaller asymptotic variance.

The proof is provided in Appendix E.3. We can estimate $\theta$ via either of the following iterative algorithms: $\theta^{(t+1)} = \theta^{(t)} - (\frac{\partial U_n(\theta)}{\partial\theta}\big|_{\theta^{(t)}})^- U_n(\theta^{(t)})$, or $\theta^{(t+1)} = \theta^{(t)} + (\hat{B}(\theta^{(t)}))^- U_n(\theta^{(t)})$, where $\hat{B} = \binom{n}{2}^{-1}\sum_{i,j \in C_2^n}\hat{G}_{ij}\hat{M}_{ij}$. $\Sigma$ can be estimated empirically from the outer product of $\hat{u}_i = \frac{1}{n-1}\sum_{j \neq i} U_{ij}(\hat\theta)$, i.e., $\hat\Sigma = \frac{1}{n}\sum_i \hat{u}_i\hat{u}_i^T$.

5.2 DR Generalized U for Longitudinal Data

Let $y_{it}$ denote the metrics measured over time, $z_i$ the treatment assignment, and $w_{it}$ the variables that need to be adjusted for.
We can measure the treatment effect over time: $\delta_t = E(\varphi(y_{it1} - y_{it0}))$, where $y_{it0}$ and $y_{it1}$ are the counterfactual responses for $z_i = 0$ and $z_i = 1$. We can construct a DR-type multivariate U statistic for the longitudinal data,
$$U_n^{DR} = \binom{n}{2}^{-1}\sum_{i,j \in C_2^n} h_{ij}^{DR}, \qquad (6)$$
where $h_{ij}^{DR} = [h_{ij1}, \cdots, h_{ijt}, \cdots, h_{ijT}]^T$, $h_{ijt} = \frac{z_i(1 - z_j)}{2\pi_i(1 - \pi_j)}(\varphi(y_{it1} - y_{jt0}) - g_{ijt}) + \frac{z_j(1 - z_i)}{2\pi_j(1 - \pi_i)}(\varphi(y_{jt1} - y_{it0}) - g_{jit}) + \frac{g_{ijt} + g_{jit}}{2}$. We can
estimate $\pi$ and $g$ by $E(z_i \mid \mathbf{w}_i) = \pi(\mathbf{w}_i; \beta) = \phi([1, \mathbf{w}_i^T]^T \cdot \beta)$ and $E(\varphi(y_{it1} - y_{jt0}) \mid w_{it}, w_{jt}) = g(w_{it}, w_{jt}; \gamma_t) = \psi([1, w_{it}^T, w_{jt}^T]^T \cdot \gamma_t)$, where $\mathbf{w} = [w_1^T, \cdots, w_t^T, \cdots, w_T^T]^T$. We can estimate the parameters and make inference jointly for $\theta = [\delta^T, \beta^T, \gamma^T]^T$ using UGEE:
$$U_n(\theta) = \sum_{i,j \in C_2^n} U_{n,ij} = \sum_{i,j \in C_2^n} G_{ij}(h_{ij} - f_{ij}) = 0, \qquad (7)$$
where $h_{ij} = [h_{ij1}^T, h_{ij2}, h_{ij3}^T]^T$, $f_{ij} = [f_{ij1}^T, f_{ij2}, f_{ij3}^T]^T$, $h_{ij1} = h_{ij}^{DR}$, $h_{ij2} = z_i + z_j$, $h_{ij3} = z_i(1 - z_j)\varphi_{ij} + z_j(1 - z_i)\varphi_{ji}$, $f_{ij1} = \delta$, $f_{ij2} = \pi_i + \pi_j$, $f_{ij3} = \pi_i(1 - \pi_j)g_{ij} + \pi_j(1 - \pi_i)g_{ji}$, $\pi_i = \pi(\mathbf{w}_i; \beta)$, $g_{ij} = g(\mathbf{w}_i, \mathbf{w}_j; \gamma)$, and $G_{ij} = D_{ij}^T V_{ij}^{-1}$, $D_{ij} = \partial f_{ij}/\partial\theta$, $V_{ij} = A R(\alpha) A$, $A = diag\{\sqrt{Var(h_{ijk t_k} \mid \mathbf{w}_i, \mathbf{w}_j)}\}$. Note that here $h_{ij2}$ is a scalar and $h_{ij}$ is a vector of length $2T + 1$.

Corollary 4 Let $u_i = E(U_{n,ij} \mid \mathbf{y}_{i0}, \mathbf{y}_{i1}, z_i, \mathbf{w}_i)$, $\Sigma = Var(u_i)$, $M_{ij} = \frac{\partial(h_{ij} - f_{ij})}{\partial\theta}$, and $B = E(GM)$. Then, under mild conditions, we have consistency: $\hat\theta \to_p \theta$, and asymptotic normality: $\sqrt{n}(\hat\theta - \theta) \to_d N(0, 4B^{-T}\Sigma B^{-1})$.

Estimation and computation of the asymptotic variance can be performed in the same manner as in Section 5.1 for small-to-medium sample sizes. For large sample sizes, the computational burden can grow significantly. We devise efficient algorithms for optimization and inference (Algorithm 1 and Algorithm 2) and provide theoretical support for them with Theorem 5 and Theorem 6 (see proofs in Appendix A.2). In most applications, we can reduce the number of parameters by imposing some structure on the trajectories ($\gamma_t$ and $\delta_t$), for example: (i) set the $g_t$ to the same functional form, i.e., $\gamma_t = \gamma$; (ii) set the $\delta_t$ to a simple linear form, e.g., $\delta_t = \delta$ or $\delta_t = \delta_1 + \delta_2 t$. Our simulations and real applications use these structures.

Algorithm 1 Mini-batch Fisher Scoring for $\hat\theta = (\hat\delta, \hat\beta, \hat\gamma)$
1: Input: Data $\{(y_i, z_i, w_i)\}_{i=1}^n$, initial parameter $\theta^{(0)}$, step size $\alpha$, batch size $m$, convergence threshold $\varepsilon = c\,n^{-1/2-\varsigma/2}$ for $\varsigma > 0$.
2: $t \leftarrow 0$
3: repeat
4:   Sample $m$ rows without replacement from the current epoch: $S_t = \{i_1, \ldots, i_m\}$
5:   Form all $\binom{m}{2}$ unordered pairs $\{(i, j) : i < j,\ i, j \in S_t\}$
6:   For each pair $(i, j)$, compute $U_{ij} = G_{ij}(h_{ij} - f_{ij})$ and $B_{ij} = G_{ij} M_{ij}$
7:   Estimate the score: $\tilde{U}_t = \frac{2}{m(m-1)}\sum_{i<j} U_{ij}$
8:   Estimate the Jacobian: $\tilde{B}_t = \frac{2}{m(m-1)}\sum_{i<j} B_{ij}$
9:   Update the parameter: $\theta^{(t+1)} = \theta^{(t)} + \alpha\,\tilde{B}_t^{-1}\tilde{U}_t$
10:  $t \leftarrow t + 1$
11: until $\|\tilde{U}_t\| < \varepsilon$
12: Output: $\hat\theta = \theta^{(t)}$; $\hat\delta$ is the first component

Algorithm 2 Monte Carlo Integration for Estimating $\widehat{Var}(\hat\theta)$
1: Input: Data $\{(y_i, z_i, w_i)\}_{i=1}^n$, parameter $\hat\theta$ from Fisher scoring, pair sample size $k = c' n^{1+\epsilon'}$ for $\epsilon' \in (0, 1)$
2: Sample $k$ unordered pairs $\{(i, j)\}$ uniformly without replacement from the $\binom{n}{2}$ pairs
3: for all pairs $(i, j)$ in the sample do
4:   Compute $u_{ij} = G_{ij}(h_{ij} - f_{ij})$
5:   Compute $B_{ij} = G_{ij} M_{ij}$
6: end for
7: Compute the mean: $\bar{u} = \frac{1}{k}\sum_{(i,j)} u_{ij}$
8: Estimate $\hat{B} = \frac{1}{k}\sum_{(i,j)} B_{ij}$
9: Estimate $\hat\Sigma = \frac{1}{k}\sum_{(i,j)} (u_{ij} - \bar{u})(u_{ij} - \bar{u})^\top$
10: Output: $\widehat{Var}(\hat\theta) = \frac{4(\hat{B}^{-1})^T\hat\Sigma\hat{B}^{-1}}{n}$

Theorem 5 (Decoupling of Optimization and Inference) Assume the estimating equation $\bar{U}_n(\theta) = \binom{n}{2}^{-1}\sum_{i,j \in C_2^n} U_{n,ij}(\theta) = 0$ is solved by a numerical algorithm producing $\hat\theta$ such that $\|\bar{U}_n(\hat\theta)\| = o_p(n^{-1/2})$. Then one has $\sqrt{n}(\hat\theta - \theta) \to_d N(0, 4(B^{-1})^T\Sigma B^{-1})$. In particular, the small algorithmic error does not affect the first-order asymptotic distribution.

Theorem 6 (Monte Carlo Error Bound) Let $U_n^v = \binom{n}{2}^{-1}\sum_{i<j} v(o_i, o_j)$, with a symmetric, sub-Gaussian kernel $v$ (proxy variance $\sigma^2$). Form the Monte Carlo estimator $\hat{U}_k = \frac{1}{k}\sum_{(i,j) \in C_k} v(o_i, o_j)$, where $k$ pairs are sampled uniformly without replacement from the $\binom{n}{2}$ possible pairs, and let the average overlap factor be $\Delta = O(k/n)$. Then for any $\epsilon > 0$ and $\eta \in (0, 1)$,
$$P\big(|\hat{U}_k - E[\hat{U}_k]| > \epsilon\big) \leq 2\exp\left(-\frac{k\epsilon^2}{2\sigma^2(1 + \Delta)}\right),$$
and hence, with effective sample size $\tilde{k} = k/(1 + \Delta)$,
$$|\hat{U}_k - E[U_n^v]| \leq \sqrt{\frac{2\sigma^2}{\tilde{k}}\log\frac{2}{\eta}} \quad \text{w.p. } 1 - \eta.$$
In particular, $\hat{U}_k - E[U_n^v] = O_p\big(\sqrt{\frac{1}{k} + \frac{1}{n}}\big)$
, so choosing $k = O(n^{1+\epsilon})$ makes the Monte Carlo error asymptotically negligible.

6 Experiments and Results

6.1 Simulation Studies

We perform comprehensive simulation studies to evaluate the performance of the proposed methods. Due to space limitations, we summarize and highlight the results here.

Regression Adjustment: We simulate a confounding effect and Poisson responses. When there is no confounding, both the t-test and RA control the type I error, while RA has higher power than the t-test. Under confounding, the t-test cannot control the type I error while RA can (Appendix F.1).

GEE: We simulate a confounding effect, Poisson responses, and repeated measurements. Both regression and GEE control the type I error under confounding, while GEE has higher power (Appendix F.2).

Mann-Whitney U: For heavy-tailed distributions with 50% zeros, the Zero-Trimmed U has higher power than the standard Mann-Whitney U most of the time, and the standard U has higher power than the t-test. All three methods control the type I error for zero-inflated, heavy-tailed data (Appendix F.3).

Table 1: Power Comparison for Heavy-Tailed Distributions with Equal Zero-Inflation (50%)

            Positive Cauchy (n=200)             LogNormal (n=200)
Effect Size Zero-trimmed U  Standard U  t-test  Zero-trimmed U  Standard U  t-test
0.25        0.079           0.065       0.011   0.044           0.044       0.009
0.50        0.165           0.094       0.026   0.067           0.059       0.004
0.75        0.339           0.166       0.031   0.090           0.067       0.007
1.00        0.555           0.262       0.048   0.138           0.082       0.011

Doubly Robust Generalized U: We simulate a confounding effect with heavy-tailed distributions. We compare the type I error rates and power of the correctly specified DRGU, a correctly specified linear regression (OLS), and the Wilcoxon rank-sum test U (which does not account for confounding covariates). To probe double robustness, we set up misDRGU by misspecifying the quadratic propensity score model with a linear model, while the outcome model in misDRGU is specified correctly.
(Appendix F.4.1)

Table 2: Power of DRGU Adjusting for a Confounding Effect

Distribution  Sample size  DRGU   misDRGU  OLS    U
Normal        200          0.750  0.585    0.940  0.299
              50           0.135  0.085    0.135  0.035
LogNormal     200          0.610  0.515    0.435  0.235
              50           0.260  0.210    0.190  0.110
Cauchy        200          0.660  0.580    0.435  0.310
              50           0.265  0.180    0.165  0.130

Longitudinal DRGU: We compare three models: longDRGU, DRGU using the last-timepoint data snapshot, and GEE. The time-varying covariates highlight the strength of the longitudinal method compared to snapshot analysis (Appendix F.4.2).

Table 3: Power of DRGU for Longitudinal Data

Distribution  Sample size  Long DRGU  DRGU  GEE
Normal        200          0.85       0.88  0.92
              50           0.52       0.39  0.75
LogNormal     200          0.85       0.78  0.68
              50           0.37       0.30  0.33
Cauchy        200          0.83       0.76  0.66
              50           0.38       0.32  0.29

6.2 Applications in Business Settings

Email Marketing: We conducted a user-level A/B test comparing our legacy email marketing system against a newer version based on a Neural Bandit. We measured the downstream impact on conversion value, a proprietary metric measuring the value of conversions. The conversion value exhibited extreme zero inflation (>95%) and heavy tails (among the converted). Using the Zero-Trimmed U test, we detected a statistically significant lift (+0.94%) in overall conversion value (p-value < 0.001). By contrast, the t-test was not able to detect a significant effect on the conversion value metric (p-value =
0.249) (Appendix G.1).

Targeting in Feed: We conducted a user-level A/B test to evaluate the impact of a new algorithm for marketing in a particular slot in Feed. We faced two challenges: (i) selection bias in ad impression allocation that favored the control system, so we needed to adjust for impressions as a cost and compare ROI between control and treatment; (ii) imbalance in baseline covariates due to limited campaign and participant selection (Appendix Table 14). We addressed both issues via Regression Adjustment to estimate the ROI lift while controlling for imbalanced covariates, detecting a 1.84% lift in conversions per impression (95% CI: [1.64%, 2.05%], p < 0.001). By contrast, a simple t-test found no significant difference in conversions (p = 0.154) (Appendix G.2).

Paid Search Campaigns: We ran a 28-day, campaign-level A/B test on 3rd-party paid-search campaigns (32 control vs. 32 treatment), measuring conversion value net of cost. To address the small-sample limitation, we fit a GEE model to take advantage of the repeated measurements over 28 days, yielding a near-significant effect on ROI (p = 0.051) vs. p = 0.184 from a last-day snapshot regression analysis. A 28-day pre-launch A/A validation using the same GEE showed no effect (p = 0.82), further validating the experiment and results.

Figure 1: Distribution of Conversion Values from the Validation & Test Period

Observing that the distribution of the conversion value exhibits heavy-tail characteristics, we further performed statistical testing using the longitudinal Doubly Robust U, assuming a compound-symmetric correlation structure for $R(\alpha)$. We were able to attain a statistically significant result with $\hat{P}(y_1 > y_0) = 0.54$ and p-value = 0.045 (Appendix G.3).

7 Discussion

We provide a discussion of general approaches for large sample sizes (e.g., member-level A/B tests at global scale), as well as various practical implementation considerations, in Appendix A.
We further highlight the key contributions to the theoretical development and discuss the comparison with existing approaches.

7.1 Methodology Innovation

Although RA, GEE, and the Mann–Whitney U test are established statistical methodologies, their applications to A/B testing in the tech field are rare. This is mainly due to four reasons: (i) A/B tests in the tech field generally involve large sample sizes, and efficiency is often not the primary concern; (ii) for large sample sizes, RA, GEE, and Mann–Whitney U lack computationally efficient algorithms; (iii) the primary metrics in A/B tests are typically binary or count data (e.g., impressions or conversions), so there is little perceived need for distribution-robust tests like the Mann–Whitney U; (iv) evaluation of multiple metrics is often conducted heuristically, e.g., requiring nonsignificance on guardrail metrics and significance on primary metrics, or making ad hoc trade-offs between them. In A/B tests for business scenarios, the above four reasons vanish: (i) sample sizes are limited because each A/B test incurs business cost, so using more powerful statistical tests (e.g., covariate adjustment) and increasing the effective sample size (e.g., repeated measurement) is very important; (ii) in many cases, sample sizes are moderate, so computational burden is less of a concern; (iii) the primary metrics are often revenue, which follows a non-Gaussian distribution, calling for nonparametric tests such as the Mann–Whitney test; (iv) a principled way
of performing ROI trade-offs is needed, and covariate adjustment can measure revenue net of cost. Moreover, when revenue- or value-based primary metrics are used, they are almost always associated with zero inflation and heavy-tail distributions. In this situation, we can use Zero-Trimmed U. In fact, we argue that these approaches can be applied generally to all A/B tests in the tech field. Primary metrics can be revenue-based even for engagement-related platforms (e.g., assigning a proxy long-term value to any impression or conversion). Also, there are implicit and explicit costs for any A/B test (e.g., latency can be modeled as a cost to the user). We'll then need robust statistics to address the irregular distribution of the proxy value and covariate adjustment for ROI considerations. For general applicability, we provide ways to efficiently perform the above tests for extremely large sample sizes. RA and GEE are based on estimating equations, and we can use mini-batch Fisher scoring to solve those equations and then calculate the variance from the full sample using asymptotic results. Mann–Whitney U and Zero-Trimmed U can be calculated efficiently using fast ranking algorithms, and the variance of the test statistic can be calculated easily from the asymptotic distribution.

7.2 Theoretical Development

We derive analytical results to provide insights into where efficiency gains arise for RA, GEE, and the Mann–Whitney U test:

• For RA, when there is confounding, relative efficiency over the t-test (measured by MSE) is dominated by the bias term, since the t-test yields a biased estimate of the treatment effect. When there is no confounding, RA's efficiency gain over the t-test arises from variance reduction due to covariate adjustment. The insight, then, is to find covariates that (i) satisfy non-confounding (i.e., are independent of treatment assignment) and (ii) explain variance in the response.
This also explains the efficiency gains of related CUPED-type methods.
• For GEE, we show that efficiency gains over snapshot regression come from using repeated measurements, and we derive the exact formula for relative efficiency under a Gaussian response, revealing its dependence on the correlation structure among repeated measurements. When repeated measurements are fully independent, relative efficiency is highest, T times that of snapshot regression. When they are perfectly correlated, GEE and snapshot regression share the same efficiency.
• For the Mann–Whitney U test, we compute relative efficiency over the t-test on several example distributions, illustrating near-1 efficiency for Gaussian data and higher efficiency for heavy-tailed distributions. We detail the asymptotics for Zero-Trimmed U, building on existing work from the biostatistics field [14, 40]. Moreover, we provide a rigorous treatment of Pitman efficiency under compound hypothesis testing in Theorem 2. Pitman efficiency is given for both (i) Zero-Trimmed U versus Mann–Whitney U with adjusted variance and (ii) Zero-Trimmed U versus Mann–Whitney U with standard (unadjusted) variance.
• As shown in Figures 2 and 3, the efficiency of Zero-Trimmed U versus Mann–Whitney U with adjusted variance is not always greater than one; it depends on both the direction ϕ and the zero proportion 1 − p. When the direction is more on the d component (a location shift among positive values), Zero-Trimmed U has
higher power (Figure 3). When the direction focuses on the m component (the zero-proportion difference), Mann–Whitney U with adjusted variance is more efficient, though still close to one (Figure 3). In fact, if sin ϕ = 1 (purely on d), Zero-Trimmed U always has higher power (Figure 2); if sin ϕ = 0 (purely on m), Mann–Whitney U with adjusted variance always has higher power (Figure 2).
• The efficiency of Zero-Trimmed U versus Mann–Whitney U with standard (unadjusted) variance, however, is mostly greater than one, as seen in Figures 4 and 5. The dominance of Zero-Trimmed U is particularly significant (i.e., r > 5) for high sparsity of positive values (p close to zero), as shown in Figure 4. And when there is a substantial proportion of zeros (e.g., p = 0.5), its advantage is robust to the direction (i.e., ϕ) of the compound hypothesis, as shown in Figure 5.

Building on existing work on causal inference with the Mann–Whitney U in the biostatistics field [43, 6, 38, 7, 45], we propose a novel doubly robust generalized U to address ROI, repeated measurement, and distribution robustness all in one framework. We provide the asymptotic results in Theorem 3 and Corollary 4, with detailed derivations in Appendix E.3. Besides the fact that the application of the doubly robust U is completely new for A/B tests in business settings (and generally in the tech field), we also highlight the key theoretical innovations of DRGU on top of existing approaches from the biostatistics field:

• The doubly robust generalized U can adopt any monotonic "kernel" φ to form a U statistic that measures a directional treatment effect E(φ) of a customized definition in an A/B test. When φ is the identity function, it reduces to the common doubly robust version of the "mean difference" treatment effect. When φ is the indicator function, it is equivalent to the doubly robust version of Mann–Whitney U. There are two key requirements for the kernel φ: (i) a finite second moment ensures distributional robustness, i.e.,
$E[\varphi^2] < \infty$, a condition the identity kernel (mean difference) cannot satisfy; (ii) monotonicity guarantees that φ preserves the test's directional nature, so that any directional (location) shift in outcomes yields a consistent change in the statistic.
• We provide a detailed UGEE formulation for the joint estimation of both the target parameter δ and the nuisance parameters (i.e., β, γ). UGEE is an extension of GEE to pairwise estimating equations, and readers can refer to [23] for a comprehensive treatment of UGEE. Our UGEE formulation is built on top of the formulations from [43, 7]. There are three important distinctions: (i) our UGEE is built on a generalized kernel φ; (ii) we treat $h_{ij3}$, the estimating equation for the "observed" treatment effect, by multiplying the pairwise "missing" probability $z_i(1-z_j)$ with the potential pairwise outcome $\varphi(y_{i1}-y_{j0})$, whereas the formulation in [7] omits the "missing" probability; (iii) we provide the UGEE formulation for longitudinal data, detailing the structure of the propensity model and the pairwise regression model for the doubly robust estimator, and the functional forms for different types of longitudinal effects.
• Besides the asymptotic normality result, we prove that when π and g are known, the corresponding estimator attains the semi-parametric efficiency bound, i.e., the proposed doubly robust generalized
U has the smallest variance (is most powerful) among all regular estimators of the corresponding treatment effect. We further prove that even when π and g are unknown, as long as they are correctly specified, the doubly robust generalized U from our UGEE still attains the semi-parametric efficiency bound. This result is stated in Theorem 3, which provides the theoretical foundation for its superior performance in simulations and real A/B analyses.
• We provide computationally efficient algorithms for the proposed doubly robust generalized U on extremely large datasets (e.g., on the order of $10^8$ rows). Basically, the algorithm decouples the optimization procedure, which performs the point estimation of θ, from the inference procedure, which estimates the asymptotic variance of $\hat\theta$. The optimization is driven by mini-batch Fisher scoring on paired data and can be implemented easily with existing automatic differentiation libraries (e.g., JAX, PyTorch, TensorFlow). The inference is driven by Monte Carlo integration for the expectation in the variance estimate (another U statistic), where we reduce the computational burden from $O(n^2)$ to $O(n)$ (a huge reduction when n is extremely large) without losing asymptotic efficiency. We provide rigorous theoretical support for the algorithm, on both the decoupling and the error bounds, in Appendix A. Basically: (i) as long as the mini-batch Fisher scoring algorithm attains error $o_p(n^{-1/2})$, this error is negligible (compared with "perfect" optimization) and thus we can decouple optimization and inference; (ii) as long as the Monte Carlo integration processes a sample of size $O(n^{1+\epsilon})$, the Monte Carlo errors are negligible and we attain the same asymptotic efficiency as using the full $O(n^2)$ pairs.

Besides the methodology innovation and theoretical development, we also share the JAX-based [35] implementation of UGEE for the doubly robust generalized U, as well as simulation code for all simulations, including RA, GEE, Zero-Trimmed U and DRGU.
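The decoupled recipe (mini-batch Fisher scoring for the point estimate, then a plug-in variance on the full sample) can be sketched on a toy model. This is a hypothetical illustration on a plain logistic score equation, not the shared UGEE implementation; all names and the model are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a logistic model stands in for a generic estimating equation.
n, p = 20_000, 3
X = rng.normal(size=(n, p))
theta_true = np.array([0.5, -1.0, 2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ theta_true)))

def minibatch_fisher_scoring(X, y, batch=1024, steps=400):
    """Drive the score (1/m) * X_b^T (y_b - mu_b) toward zero with mini-batch Fisher steps."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        idx = rng.choice(len(y), size=batch, replace=False)
        Xb, yb = X[idx], y[idx]
        mu = 1.0 / (1.0 + np.exp(-Xb @ theta))
        score = Xb.T @ (yb - mu) / batch                         # mini-batch estimating equation
        info = (Xb * (mu * (1.0 - mu))[:, None]).T @ Xb / batch  # expected information
        info += 1e-6 * np.eye(X.shape[1])                        # small ridge for stability
        theta = theta + np.linalg.solve(info, score)             # Fisher scoring step
    return theta

theta_hat = minibatch_fisher_scoring(X, y)

# Inference is decoupled: plug theta_hat into full-sample quantities.
mu_full = 1.0 / (1.0 + np.exp(-X @ theta_hat))
info_full = (X * (mu_full * (1.0 - mu_full))[:, None]).T @ X / n
var_hat = np.linalg.inv(info_full) / n   # model-based variance estimate of theta_hat
```

The same pattern carries over to pairwise estimating equations: only the optimization touches mini-batches, while the variance is assembled once from asymptotic results.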
Readers can thus examine the algorithm and replicate the simulation results if interested.

8 Conclusion

To conclude, we proposed a series of efficient statistical methods for A/B tests in this paper, with systematic theoretical development and comprehensive empirical evaluations. These methods, though proposed for A/B tests in business settings, are broadly useful for general experiments in both tech and non-tech fields.

References

[1] Chunrong Ai, Lukang Huang, and Zheng Zhang. 2020. A Mann–Whitney test of distributional effects in a multivalued treatment. Journal of Statistical Planning and Inference 209 (2020), 85–100.
[2] Eduardo M. Azevedo, Alex Deng, José Luis Montiel Olea, Justin Rao, and E. Glen Weyl. 2020. A/B Testing with Fat Tails. Journal of Political Economy 128, 12 (2020), 4319–4377.
[3] R Clifford Blair and James J Higgins. 1980. A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions. Journal of Educational Statistics 5, 4 (1980), 309–335.
[4] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. 2013. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, Oxford, UK.
[5] Patrick D Bridge and Shlomo S Sawilowsky. 1999. Increasing physicians' awareness of the impact of statistics on research outcomes: comparative power of the t-test and Wilcoxon rank-sum test in small samples applied research. Journal of clinical epidemiology 52, 3 (1999),
229–235.
[6] R Chen, T Chen, N Lu, Hui Zhang, P Wu, C Feng, and XM Tu. 2014. Extending the Mann–Whitney–Wilcoxon rank sum test to longitudinal regression analysis. Journal of Applied Statistics 41, 12 (2014), 2658–2675.
[7] Ruohui Chen, Tuo Lin, Lin Liu, Jinyuan Liu, Ruifeng Chen, Jingjing Zou, Chenyu Liu, Loki Natarajan, Wan Tang, Xinlian Zhang, et al. 2024. A doubly robust estimator for the Mann–Whitney–Wilcoxon rank sum test when applied for causal inference in observational studies. Journal of Applied Statistics 51, 16 (2024), 3267–3291.
[8] Stephan Clémençon, Igor Colin, and Aurélien Bellet. 2016. Scaling-up empirical risk minimization: optimization of incomplete U-statistics. Journal of Machine Learning Research 17, 76 (2016), 1–36.
[9] Alex Deng, Michelle Du, Anna Matlin, and Qing Zhang. 2023. Variance Reduction Using In-Experiment Data: Efficient and Targeted Online Measurement for Sparse and Delayed Outcomes. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 3937–3946.
[10] Alex Deng, Ya Xu, Ron Kohavi, and Toby Walker. 2013. Improving the Sensitivity of Online Controlled Experiments by Utilizing Pre-Experiment Data. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining (WSDM '13). ACM, Rome, Italy.
[11] Ronald Aylmer Fisher. 1928. Statistical methods for research workers. Oliver and Boyd.
[12] Ronald A. Fisher. 1935. The Design of Experiments. Oliver and Boyd, Edinburgh.
[13] David A Freedman. 2008. On regression adjustments to experimental data. Advances in Applied Mathematics 40, 2 (2008), 180–193.
[14] Alfred P Hallstrom. 2010. A modified Wilcoxon test for non-negative distributions with a clump of zeros. Statistics in Medicine 29, 3 (2010), 391–400.
[15] James A. Hanley and Barbara J. McNeil. 1982. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143, 1 (1982), 29–36.
[16] Adrián V Hernández, Ewout W Steyerberg, and J Dik F Habbema. 2004. Covariate adjustment in randomized controlled trials with dichotomous outcomes increases statistical power and reduces sample size requirements. Journal of clinical epidemiology 57, 5 (2004), 454–460.
[17] Wassily Hoeffding. 1948. A Class of Statistics with Asymptotically Normal Distribution. The Annals of Mathematical Statistics 19, 3 (1948), 293–325.
[18] Yan Hou, Victoria Ding, Kang Li, and Xiao-Hua Zhou. 2010. Two new covariate adjustment methods for non-inferiority assessment of binary clinical trials data. Journal of Biopharmaceutical Statistics 21, 1 (2010), 77–93.
[19] Svante Janson. 2004. Large deviations for sums of partly dependent random variables. Random Structures & Algorithms 24, 3 (2004), 234–248.
[20] Hao Jiang, Fan Yang, and Wutao Wei. 2020. Statistical Reasoning of Zero-Inflated Right-Skewed User-Generated Big Data A/B Testing. In 2020 IEEE International Conference on Big Data (Big Data). 1533–1544.
[21] Brennan C Kahan, Vipul Jairath, Caroline J Doré, and Tim P Morris. 2014. The risks and rewards of covariate adjustment in randomized trials: an assessment of 12 outcomes from 8 studies. Trials 15 (2014), 1–7.
[22] Ron Kohavi, Roger Longbotham, Dan Sommerfield, and Randal M Henne. 2009. Controlled experiments on the web: survey and practical guide. Data mining and knowledge discovery 18 (2009), 140–181.
[23] Jeanne Kowalski and Xin M
Tu. 2008. Modern applied U-statistics. John Wiley & Sons.
[24] Nicholas Larsen, Jonathan Stallrich, Srijan Sengupta, Alex Deng, Ron Kohavi, and Nathaniel T Stevens. 2024. Statistical challenges in online controlled experiments: A review of A/B testing methodology. The American Statistician 78, 2 (2024), 135–149.
[25] Kung-Yee Liang and Scott L Zeger. 1986. Longitudinal data analysis using generalized linear models. Biometrika 73, 1 (1986), 13–22.
[26] Kevin Liou and Sean J Taylor. 2020. Variance-weighted estimators to improve sensitivity in online experiments. In Proceedings of the 21st ACM Conference on Economics and Computation. 837–850.
[27] Kevin Liou, Wenjing Zheng, and Sathya Anand. 2022. Privacy-preserving methods for repeated measures designs. In Companion Proceedings of the Web Conference 2022. 105–109.
[28] Yan Ma. 2012. On inference for Kendall's τ within a longitudinal data setting. Journal of applied statistics 39, 11 (2012), 2441–2452.
[29] Henry B Mann and Donald R Whitney. 1947. On a test of whether one of two random variables is stochastically larger than the other. The annals of mathematical statistics (1947), 50–60.
[30] Lu Mao. 2018. On causal estimation using U-statistics. Biometrika 105, 1 (2018), 215–220.
[31] Lu Mao. 2024. Wilcoxon-Mann-Whitney statistics in randomized trials with non-compliance. Electronic Journal of Statistics 18, 1 (2024), 465–489.
[32] Alexey Poyarkov, Alexey Drutsa, Andrey Khalyavin, Gleb Gusev, and Pavel Serdyukov. 2016. Boosted decision tree regression adjustment for variance reduction in online controlled experiments. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 235–244.
[33] Antonin Schrab, Ilmun Kim, Mélisande Albert, Béatrice Laurent, Benjamin Guedj, and Arthur Gretton. 2023. MMD aggregated two-sample test. Journal of Machine Learning Research 24, 194 (2023), 1–81.
[34] Antonin Schrab, Ilmun Kim, Benjamin Guedj, and Arthur Gretton. 2022. Efficient Aggregated Kernel Tests using Incomplete U-statistics. Advances in Neural Information Processing Systems 35 (2022), 18793–18807.
[35] Chris Smith, Matthew R. Leary, and Dougal Maclaurin. 2020. JAX: composable transformations of Python+NumPy programs. In Proceedings of the 3rd Machine Learning and Systems Conference (MLSys 2020).
[36] Anastasios A Tsiatis. 2006. Semiparametric theory and missing data. Vol. 4. Springer.
[37] Aad W Van der Vaart. 2000. Asymptotic statistics. Vol. 3. Cambridge university press.
[38] Karel Vermeulen, Olivier Thas, and Stijn Vansteelandt. 2015. Increasing the power of the Mann-Whitney test in randomized experiments through flexible covariate adjustment. Statistics in medicine 34, 6 (2015), 1012–1030.
[39] Ming Wang. 2014. Generalized estimating equations in longitudinal data analysis: a review and recent developments. Advances in Statistics 2014, 1 (2014), 303728.
[40] Wanjie Wang, Eric Chen, and Hongzhe Li. 2023. Truncated rank-based tests for two-part models with excessive zeros and applications to microbiome data. The Annals of Applied Statistics 17, 2 (2023), 1663–1680.
[41] Changshuai Wei, Ming Li, Yalu Wen, Chengyin Ye, and Qing Lu. 2020. A multi-locus predictiveness curve and its summary assessment for genetic risk prediction. Statistical methods in medical research 29, 1 (2020), 44–56.
[42] Changshuai Wei and Qing Lu. 2017. A generalized association test based on U statistics. Bioinformatics 33, 13 (2017), 1963–1971.
[43] P Wu, Y Han, T Chen, and
XM Tu. 2014. Causal inference for Mann–Whitney–Wilcoxon rank sum and other nonparametric statistics. Statistics in medicine 33, 8 (2014), 1261–1271.
[44] Huizhi Xie and Juliette Aurisset. 2016. Improving the Sensitivity of Online Controlled Experiments: Case Studies at Netflix. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16) (San Francisco, CA, USA). ACM, 645–654.
[45] Anqi Yin, Ao Yuan, and Ming T Tan. 2024. Highly robust causal semiparametric U-statistic with applications in biomedical studies. The international journal of biostatistics 20, 1 (2024), 69–91.
[46] Jing Zhou, Jiannan Lu, and Anas Shallah. 2023. All about Sample-Size Calculations for A/B Testing: Novel Extensions & Practical Guide. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 3574–3583.

Appendix

A Algorithm for Large Sample Size

Although the proposed methods are mainly for business scenarios, where the sample size is often small to medium, there are business use cases where the sample size is large (e.g., large-scale marketing campaigns where user-level data is available). Moreover, for broader applicability of the methodologies, we need to consider general A/B tests in tech, where the sample size can be at the magnitude of millions to billions. For Mann–Whitney U and Zero-Trimmed U, we can leverage fast ranking algorithms to compute $W$ or $W'$. The variance calculation is straightforward using the equations in Section 4. Regression Adjustment, GEE and DRGU are all based on estimating equations. DRGU has an additional layer of complexity, as its computation is over pairs of observations. We provide an efficient algorithm for DRGU in this section; the algorithms for RA and GEE follow trivially.
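For instance, the rank-sum statistic and its asymptotic p-value can be obtained from a single $O(n\log n)$ ranking pass. A minimal sketch (names are illustrative; the variance uses the standard no-tie null formula, not the adjusted variants discussed in Section 4):

```python
import numpy as np
from scipy.stats import rankdata, norm

def mann_whitney_u(y1, y0):
    """Mann-Whitney U from one ranking pass over the pooled sample.

    Cost is O(n log n) for the sort; the p-value uses the normal
    approximation with null variance n1*n0*(n1+n0+1)/12 (no tie correction).
    """
    n1, n0 = len(y1), len(y0)
    ranks = rankdata(np.concatenate([y1, y0]))   # midranks for ties
    w = ranks[:n1].sum()                         # rank sum of sample 1
    u = w - n1 * (n1 + 1) / 2.0                  # pairs with y1 > y0 (ties count 1/2)
    z = (u - n1 * n0 / 2.0) / np.sqrt(n1 * n0 * (n1 + n0 + 1) / 12.0)
    return u, 2.0 * norm.sf(abs(z))              # two-sided p-value

rng = np.random.default_rng(1)
y1 = rng.normal(0.5, 1.0, 500)   # synthetic treatment arm with a 0.5 location shift
y0 = rng.normal(0.0, 1.0, 500)   # synthetic control arm
u, pval = mann_whitney_u(y1, y0)
# u / (n1 * n0) estimates P(y1 > y0).
```

The single sort is the only superlinear step, which is what makes the test practical at very large sample sizes.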
A.1 Large Data Estimation and Inference for DR Generalized U

The high-level idea is to decouple optimization (solving the UGEE) and inference (estimation of the variance), and to use an efficient algorithm for both steps:

1. Optimization: We obtain $\hat\theta$ by stochastic Fisher scoring with mini-batches until $\|\bar U_n\| < c\,n^{-\frac{1}{2}(1+\varsigma)}$ (i.e., $\|\bar U_n\| = o_p(n^{-1/2})$).
2. Inference: We estimate $B = E(GM)$ and $\Sigma = E(uu^T)$ with Monte Carlo integration over a subsample of pairs, and calculate $\widehat{\mathrm{Var}}(\hat\theta) = \frac{4(\hat B^{-1})^T\hat\Sigma\hat B^{-1}}{n}$.

Details are described in Algorithm 1 and Algorithm 2. Remarks:

• For Algorithm 1, we can sample by pairs instead of by rows. Both give a consistent estimate of the parameter, i.e., $\hat\theta \to_p \theta$. There are trade-offs on multiple aspects: (i) sampling by pair gives a clean guarantee of unbiasedness, while sampling by row can be biased (though consistent) due to missing intra-batch pairs; (ii) sampling by row is easier to implement and can use the GPU more efficiently, while sampling by pair requires generating all pairs beforehand or implementing reservoir sampling (or hashing tricks) for extremely large data. For both approaches, stratified sampling should be used for highly imbalanced data.
• For Algorithm 2, the choice of the pair sample size $k$ controls the Monte Carlo error. While there is no need to set $k$ at the order of $n^2$ (i.e., the full pairs calculation), a sufficiently large $k$ greater than the order of $n$ (e.g., $k = c'n\log n$) is needed to make the Monte Carlo error negligible. For example, for data of 100M rows ($10^8$), setting $k \in (10^7, 10^8)$ can give practical inference and setting $k \in (10^9, 10^{10})$ gives a high-confidence bound. Note that,
when "generalizing" to regular regression and GEE, we can simply estimate the variance on the full sample; there is no need for Monte Carlo integration.
• The working correlation matrix $R(\alpha)$ can be estimated in an outer loop around the $\theta$-updates, e.g., by alternating between updating $\theta$ using Fisher scoring and re-estimating $\alpha$ based on the current residuals: (i) a good initial value for $\alpha$ is typically $\alpha^{(0)} = 0$, corresponding to the independence working correlation, which ensures consistency of $\hat\theta$ even if $R(\alpha)$ is misspecified; (ii) $\alpha$ can be re-estimated every $K$ steps of the inner Fisher scoring loop, which avoids excessive overhead from updating $\alpha$ too frequently; (iii) re-estimation of $\alpha$ can stop once its updates become small or after a fixed number of outer iterations. Typically, only a few updates (e.g., 3 to 5) are sufficient in practice.

A.2 Algorithm Decoupling and Error Bounds

A.2.1 Algorithm Decoupling

To see why we can decouple the optimization and inference (i.e., Algorithm 1 and Algorithm 2), observe that
$\sqrt{n}\,\bar U_n(\hat\theta) = \sqrt{n}\,\bar U_n(\theta) + \frac{\partial\bar U_n}{\partial\theta}\sqrt{n}(\hat\theta-\theta) + o_p(1),$
$\sqrt{n}(\hat\theta-\theta) = -\Big(\frac{\partial\bar U_n}{\partial\theta}\Big)^{-}\sqrt{n}\,\bar U_n(\theta) + \Big(\frac{\partial\bar U_n}{\partial\theta}\Big)^{-}\sqrt{n}\,\bar U_n(\hat\theta) + o_p(1).$
The second term measures the "error" when the estimating equation is not exactly solved, i.e., the algorithm error. The first term measures the sampling variation. When the Fisher scoring algorithm error is small, $\|\bar U_n\| = o_p(n^{-1/2})$, we know
$\Big(\frac{\partial\bar U_n}{\partial\theta}\Big)^{-}\sqrt{n}\,\bar U_n(\hat\theta) = O_p(1)\,\sqrt{n}\,o_p(n^{-1/2}) = o_p(1),$
and thus
$\sqrt{n}(\hat\theta-\theta) = -\Big(\frac{\partial\bar U_n}{\partial\theta}\Big)^{-}\sqrt{n}\,\bar U_n(\theta) + o_p(1) \to_d N\big(0,\, 4(B^{-})^T\Sigma B^{-}\big).$
We state the above results in Theorem 5.

A.2.2 Error Bound

Observe that the estimates of $B$ and $\Sigma$ on the full data are both U-statistics of the form $U_n^v = \binom{n}{2}^{-1}\sum_{i<j}v(o_i, o_j)$. Let us assume the symmetric kernel $v(o_i, o_j) \in \mathbb{R}$ is sub-Gaussian with variance proxy $\sigma^2$. We compute a Monte Carlo approximation $\hat U_k = \frac{1}{k}\sum_{(i,j)\in C_k}v(o_i, o_j)$ by sampling $k$ unordered pairs from the full set of $\binom{n}{2}$ possible pairs.
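As a concrete toy instance of such a Monte Carlo approximation (the kernel here is a hypothetical stand-in, $v(a,b) = (a-b)^2/2$, whose complete U-statistic is exactly the unbiased sample variance, so the answer can be checked directly):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2000)

def incomplete_u(x, k):
    """Approximate U_n = binom(n,2)^{-1} sum_{i<j} v(x_i, x_j) from k random pairs.

    Pairs are drawn with replacement for simplicity; degenerate i == j
    draws are discarded. Cost is O(k) instead of O(n^2).
    """
    n = len(x)
    i = rng.integers(0, n, size=k)
    j = rng.integers(0, n, size=k)
    keep = i != j
    return 0.5 * np.mean((x[i[keep]] - x[j[keep]]) ** 2)

u_hat = incomplete_u(x, k=100_000)   # k is ~5% of the ~2e6 possible pairs
u_full = np.var(x, ddof=1)           # the complete U-statistic, for reference
```

With far fewer than $\binom{n}{2}$ kernel evaluations, the incomplete estimate already tracks the complete U-statistic closely, which is the behavior the error bound below quantifies.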
Due to overlapping indices among pairs, the kernel evaluations are not fully independent. Observe that, across all sampled pairs, the expected total number of overlapping pairs is $O(k^2/n)$. Then, for each sampled pair, the number of overlapping pairs is $\Delta = O(k/n)$, and hence
$\mathrm{Var}(\hat U_k) = \frac{1}{k^2}\sum_{l\in C_k}\mathrm{Var}(v_l) + \frac{1}{k(k-1)}\sum_{l\neq l'}\mathrm{Cov}(v_l, v_{l'}) = \frac{\sigma^2}{k} + O\Big(\frac{1}{n}\Big)C = \frac{\sigma^2}{k}(1+\Delta),$
provided that $\mathrm{Cov}(v_l, v_{l'}) \le C$. Using Bernstein-type inequalities [4] adapted for $\mathrm{Var}(\hat U_k) = \frac{\sigma^2}{k}(1+\Delta)$, the Monte Carlo average satisfies
$P\big(|\hat U_k - E[\hat U_k]| > \epsilon\big) \le 2\exp\Big(-\frac{k\epsilon^2}{2\sigma^2(1+\Delta)}\Big).$
This introduces an adjustment factor $1+\Delta$ into the denominator, reflecting the variance inflation due to overlap between sampled pairs. To achieve a target error $\epsilon$ with confidence level $1-\eta$, we can set $2\exp\big(-\frac{k\epsilon^2}{2\sigma^2(1+\Delta)}\big) \le \eta$. Solving this w.r.t. the "effective sample size" $\tilde k = k/(1+\Delta)$, we have $\tilde k \ge \frac{2\sigma^2}{\epsilon^2}\log\big(\frac{2}{\eta}\big)$. Equivalently, with high probability $1-\eta$, the finite-sample error bound is
$|\hat U_k - E[U_n]| \le \sqrt{\frac{2\sigma^2}{\tilde k}\log\Big(\frac{2}{\eta}\Big)}.$
The bound implies that the effective asymptotic convergence rate is
$\hat U_k - E[U_n] = O_p\Big(\sqrt{\tfrac{1}{\tilde k}}\Big) = O_p\Big(\sqrt{\tfrac{1}{k}+\tfrac{1}{n}}\Big) = O_p\Big(\sqrt{\tfrac{1}{n}\big(1+\tfrac{n}{k}\big)}\Big).$
Observing $\hat B - B = O_p(\tilde k^{-0.5})$ and $\hat\Sigma - \Sigma = O_p(\tilde k^{-0.5})$, we can show $\hat V_\theta - V_\theta = O_p(\tilde k^{-0.5})$, given $V_\theta = 4(B^{-1})^T\Sigma B^{-1}$. This leads to a more conservative test statistic, resulting in no inflation of Type I error but a minor loss of finite-sample efficiency. Choosing $k = O(n^{1+\epsilon})$ ensures the Monte Carlo error is asymptotically negligible, matching the asymptotic efficiency of the full $O(n^2)$ estimator
at significantly lower computational cost. We state the above results in Theorem 6.

B Efficiency of Regression Adjustment

In this section, we illustrate the efficiency of regression adjustment over the t-test under a parametric setup. We first show that regression adjustment is most efficient via the Cramér–Rao lower bound, and then illustrate where the efficiency comes from using linear regression as an example.

B.1 Cramér–Rao Lower Bound

This is well established in statistics. For completeness, we provide a sketch of the proof, so the reader can gain insight into later sections, e.g., Appendix B.3 and Appendix E.2. Maximum likelihood solves the following estimating equation:
$U_n(\theta) = \frac{1}{n}\sum_i \Big[\frac{\partial\log p(x_i;\theta)}{\partial\theta}\Big]^T = 0.$
Let $S_i(\theta) = \big(\frac{\partial\log p(x_i;\theta)}{\partial\theta}\big)^T$ and $\Sigma = E(SS^T)$; we know from the CLT that $\sqrt{n}\,U_n \to_d N(0,\Sigma)$. Observing
$0 = U_n(\hat\theta) = U_n(\theta_0) + \frac{\partial U_n(\theta_0)}{\partial\theta}(\hat\theta-\theta_0) + o_p(n^{-1/2}),$
we know $\sqrt{n}(\hat\theta-\theta_0) = -\big(\frac{\partial U_n(\theta_0)}{\partial\theta}\big)^{-}\sqrt{n}\,U_n(\theta_0) + o_p(1)$. Observing $-\frac{\partial U_n(\theta_0)}{\partial\theta} \to_p \Sigma$, we know
$\sqrt{n}(\hat\theta-\theta_0) \to_d N(0,\Sigma^{-1}).$
To see why $-\frac{\partial U}{\partial\theta} \to_p \Sigma$, observe that
$S^T = \frac{\partial\log p(x;\theta)}{\partial\theta} = \frac{1}{p(x;\theta)}\frac{\partial p(x;\theta)}{\partial\theta},$
$\frac{\partial S}{\partial\theta} = -\frac{1}{p^2}\Big(\frac{\partial p}{\partial\theta}\Big)^T\frac{\partial p}{\partial\theta} + \frac{1}{p}\frac{\partial}{\partial\theta}\Big(\Big[\frac{\partial p}{\partial\theta}\Big]^T\Big),$
$E\Big(\frac{\partial S}{\partial\theta}\Big) = -E\Big(\Big(\frac{\partial\log p}{\partial\theta}\Big)^T\frac{\partial\log p}{\partial\theta}\Big) + \frac{\partial^2}{\partial\theta\,\partial\theta^T}\int p(x;\theta)\,dx = -\Sigma,$
$-\frac{\partial U}{\partial\theta} \to_p -E\Big(\frac{\partial S}{\partial\theta}\Big) = \Sigma.$
Now, for any unbiased estimator $\theta'$ with $E(\theta'(x)) = \theta$, we can show $\mathrm{Var}(\theta') \succeq \Sigma^{-1}$, i.e., $\mathrm{Var}(\theta') - \Sigma^{-1}$ is a positive semi-definite matrix. Observing $\frac{\partial E(\theta')}{\partial\theta} = \frac{\partial\theta}{\partial\theta} = I$, and the fact that
$\frac{\partial E(\theta')}{\partial\theta} = \frac{\partial}{\partial\theta}\int\theta'(x)\,p(x;\theta)\,dx = \int\theta'(x)\frac{\partial\log p}{\partial\theta}p(x;\theta)\,dx = E\Big(\theta'\frac{\partial\log p}{\partial\theta}\Big),$
we have $\mathrm{Cov}(\theta', S) = E\big(\theta'\frac{\partial\log p}{\partial\theta}\big) = I$. Applying the matrix Cauchy–Schwarz inequality, we have
$\mathrm{Var}(\theta') \succeq \mathrm{Cov}(\theta',S)\,\mathrm{Var}(S)^{-1}\,\mathrm{Cov}(S,\theta') = \Sigma^{-1}.$

B.2 Relative Efficiency under Confounding

Let us assume the following model as in Section 2:
$y_i = \beta_0 + \beta_1 z_i + \gamma^T w_i + \epsilon_i, \quad \epsilon_i \sim N(0,\sigma^2).$
Let $\theta = [\beta_0, \beta_1, \gamma^T]^T$, $x_i = [1, z_i, w_i^T]^T$, $X = [x_0,\cdots,x_i,\cdots]^T$, and $Y = [y_0,\cdots,y_i,\cdots]^T$. Under confounding and the above parametric setup, we can show $\hat\beta_1$ is unbiased, observing that
$E(y(1)-y(0)) = E_w\big(E(y\mid z=1,w) - E(y\mid z=0,w)\big) = E_w(\beta_1) = \beta_1.$
Meanwhile, the t-test ($\hat\tau = \bar y_1 - \bar y_0$) is biased by a constant term $\gamma^T[E(w\mid z=1) - E(w\mid z=0)]$, as
$E(\bar y_1 - \bar y_0) = E(y\mid z=1) - E(y\mid z=0) = E_{w\mid z=1}E(y\mid z=1,w) - E_{w\mid z=0}E(y\mid z=0,w) = \beta_1 + \gamma^T\big(E(w\mid z=1) - E(w\mid z=0)\big).$
In this case, the asymptotic relative efficiency is dominated by the bias term (for both, $\mathrm{var} \propto \frac{1}{n}$), and hence $r(\hat\beta_1, \hat\tau) \to \infty$ as $n \to \infty$.

B.3 Relative Efficiency under No Confounding

For the relative efficiency when there is no confounding, the derivation boils down to a ratio of variances, as both estimators are unbiased. We can estimate $\hat\theta = (X^TX)^{-1}X^TY$ and its variance
$\mathrm{Var}(\hat\theta) = \sigma^2(X^TX)^{-1} = \sigma^2\Big(\sum_i x_i x_i^T\Big)^{-1}. \quad (8)$
Observing $\frac{1}{n}\sum_i x_i x_i^T \to_p E(xx^T)$, we know
$\mathrm{Var}(\hat\theta) = \frac{\sigma^2}{n}\Big(\frac{1}{n}\sum_i x_i x_i^T\Big)^{-1} = \frac{\sigma^2}{n}E(xx^T)^{-1} + o_p(n^{-1}).$
We need $\big[E(xx^T)^{-1}\big]_{2,2}$ for the variance of $\hat\beta_1$. Let $V_x = E(xx^T)$ and $p = E(z)$; we know $E(z^2) = p$. Without loss of generality, assume $E(w) = 0$. Since $z \perp w$, we know $E(zw) = 0$, and
$V_x = \begin{bmatrix} 1 & E(z) & E(w^T) \\ E(z) & E(z^2) & E(zw^T) \\ E(w) & E(zw) & E(ww^T) \end{bmatrix} = \begin{bmatrix} 1 & p & 0 \\ p & p & 0 \\ 0 & 0 & \mathrm{Var}(w) \end{bmatrix}.$
Since $V_x$ is a block-diagonal matrix, we can invert the upper $2\times 2$ block $\begin{bmatrix} 1 & p \\ p & p \end{bmatrix}$, which gives
$\begin{bmatrix} 1 & p \\ p & p \end{bmatrix}^{-1} = \frac{1}{p(1-p)}\begin{bmatrix} p & -p \\ -p & 1 \end{bmatrix}.$
Then, we know
$\mathrm{Var}(\hat\beta_1) = \frac{\sigma^2}{np(1-p)} + o_p(n^{-1}). \quad (9)$
Now we derive the variance of the t-test. Let $\hat\tau = \bar y_1 - \bar y_0$. We know $\mathrm{Var}(\hat\tau) = \mathrm{Var}(\bar y_1) + \mathrm{Var}(\bar y_0)$. Since $\mathrm{Var}(\bar y_k) = \frac{1}{n_k}\mathrm{Var}(y\mid z=k)$,
$\mathrm{Var}(y\mid z=k) = \mathrm{Var}(\gamma^T w + \epsilon) = \gamma^T\mathrm{Var}(w)\gamma + \sigma^2,$
$\frac{1}{n_1} + \frac{1}{n_0} = \frac{1}{np} + \frac{1}{n(1-p)} + o_p(n^{-1}) = \frac{1}{np(1-p)} + o_p(n^{-1}),$
this implies
$\mathrm{Var}(\hat\tau) = \frac{1}{n_0}\mathrm{Var}(y\mid z=0) + \frac{1}{n_1}\mathrm{Var}(y\mid z=$
$1) = \frac{1}{np(1-p)}(\sigma_w^2 + \sigma^2) + o_p(n^{-1}), \quad (10)$
where $\sigma_w^2 = \gamma^T\mathrm{Var}(w)\gamma$ represents the variance of $y$ explained by $w$. Combining equation (9) and equation (10), we have
$r(\hat\beta, \hat\tau) = 1 + \frac{\sigma_w^2}{\sigma^2}. \quad (11)$

C Asymptotics and Efficiency of GEE

C.1 Asymptotic Normality of GEE

In this section, we show the asymptotic normality of $\hat\theta$ for the GEE, which builds the foundation for the asymptotic normality of UGEE in Appendix E.3. Recall
$\sum_i D_i^T V_i^{-1}(y_i - \mu_i) = 0,$
where $y_i = [y_{i0},\cdots,y_{it},\cdots]^T$, $\mu_i = [\mu_{i0},\cdots,\mu_{it},\cdots]^T$, $\theta = [\beta_0,\beta_1,\gamma^T]^T$, and
$D_i = \frac{\partial\mu_i}{\partial\theta}, \quad V_i = A_i R(\alpha) A_i, \quad A_i = \mathrm{diag}\big\{\sqrt{\mathrm{Var}(y_{it}\mid z_i, w_{it})}\big\}.$
Let $u_i = D_i^T V_i^{-1}(y_i-\mu_i)$ and $U_n = \frac{1}{n}\sum_i u_i$; we know by the Central Limit Theorem (CLT) that $\sqrt{n}\,U_n \to_d N(0,\Sigma_u)$, where $\Sigma_u = E(uu^T)$. Let $\hat\alpha$ be the estimate of $\alpha$ for the working correlation $R(\alpha)$, and assume the mild regularity condition $\sqrt{n}(\hat\alpha-\alpha) = O_p(1)$. Let $\hat\theta$ be the estimate of $\theta$ for the GEE, i.e., $U_n(\hat\theta,\hat\alpha) = 0$. Observing the following Taylor expansion,
$0 = U_n(\hat\theta,\hat\alpha) = U_n(\theta,\alpha) + \frac{\partial U_n(\theta,\alpha)}{\partial\theta}(\hat\theta-\theta) + \frac{\partial U_n(\theta,\alpha)}{\partial\alpha}(\hat\alpha-\alpha) + o_p(n^{-0.5}),$
we know
$\sqrt{n}\,U_n(\theta,\alpha) = -\sqrt{n}\,\frac{\partial U_n}{\partial\theta}(\hat\theta-\theta) - \sqrt{n}\,\frac{\partial U_n}{\partial\alpha}(\hat\alpha-\alpha) + o_p(1). \quad (12)$
Since $E(y_i-\mu_i) = 0$, we know $E\big(\frac{\partial u_i}{\partial\alpha}\big) = 0$, and hence $\frac{\partial U_n}{\partial\alpha} = o_p(1)$. Combining with the regularity condition $\sqrt{n}(\hat\alpha-\alpha) = O_p(1)$, we have
$\sqrt{n}\,\frac{\partial U_n}{\partial\alpha}(\hat\alpha-\alpha) = o_p(1)\,O_p(1) = o_p(1). \quad (13)$
Then equation (12) reduces to
$\sqrt{n}(\hat\theta-\theta) = -\Big(\frac{\partial U_n}{\partial\theta}\Big)^{-}\sqrt{n}\,U_n(\theta,\alpha) + o_p(1), \quad (14)$
where $(\cdot)^{-}$ denotes a generalized inverse. Let $G_i = D_i^T V_i^{-1}$ and $S_i = y_i - \mu_i$. Given that $\frac{\partial U_n}{\partial\theta} = \frac{1}{n}\sum_i\frac{\partial G_i S_i}{\partial\theta}$, we have
$\frac{\partial U_n}{\partial\theta} \to_p E\Big(G\frac{\partial S}{\partial\theta}\Big) = -E(GD). \quad (15)$
To obtain equation (15), we observe that
$\frac{\partial U_n}{\partial\theta} = \frac{1}{n}\sum_i\frac{\partial G_i S_i}{\partial\theta} = \frac{1}{n}\sum_i\frac{\partial D_i^T}{\partial\theta}V_i^{-1}S_i + \frac{1}{n}\sum_i D_i^T V_i^{-1}\frac{\partial S_i}{\partial\theta}.$
Since $\frac{1}{n}\sum_i(y_i-\mu_i) \to_p 0$, the first term is negligible: $\frac{1}{n}\sum_i\frac{\partial D_i^T}{\partial\theta}V_i^{-1}S_i = o_p(1)$.
As a result,
$\frac{\partial U_n}{\partial\theta} = o_p(1) + \frac{1}{n}\sum_i D_i^T V_i^{-1}\frac{\partial S_i}{\partial\theta} \to_p -E(GD).$
Combining equation (14) and equation (15), we have
$\sqrt{n}(\hat\theta-\theta) = B^{-}\sqrt{n}\,U_n(\theta,\alpha) + o_p(1), \quad (16)$
where $B = E(GD)$. Since $\sqrt{n}\,U_n \to_d N(0,\Sigma_u)$, this establishes the asymptotic normality of $\hat\theta$:
$\sqrt{n}(\hat\theta-\theta) \to_d N\big(0, (B^{-})^T\Sigma_u B^{-}\big).$

C.2 Asymptotic Efficiency of GEE over Snapshot Regression

We derive the asymptotic efficiency of GEE over snapshot regression for the repeated-measurement linear model shown in Section 3. We can write the underlying linear model as
$y_i = X_i\theta + \epsilon_i, \quad \epsilon_i \sim N(0, \sigma^2 R),$
where $X_i = vx_i^T$, $v = [1,\cdots,1,\cdots,1]^T$. For the GEE $\sum_i D_i V_i^{-1}(y_i-\mu_i) = 0$ of the above model, we know
$D_i = \frac{\partial\mu_i}{\partial\theta} = X_i, \quad V_i^{-1} = \frac{1}{\sigma^2}R^{-1},$
$\hat\Sigma_u = \frac{1}{n}\sum_i D_i^T V_i^{-1}\mathrm{Var}(\epsilon_i)V_i^{-1}D_i = \frac{1}{n}\sum_i D_i^T V_i^{-1}D_i, \quad \hat B = \frac{1}{n}\sum_i D_i^T V_i^{-1}D_i,$
and hence
$\mathrm{Var}(\hat\theta_{gee}) = \frac{\hat B^{-T}\hat\Sigma_u\hat B^{-}}{n} = \frac{1}{n}\hat B^{-} = \frac{1}{n}\Big(\frac{1}{n}\sum_i D_i^T V_i^{-1}D_i\Big)^{-} = \sigma^2\Big(\sum_i X_i^T R^{-1}X_i\Big)^{-1}.$
Observing that
$X_i^T R^{-1}X_i = (vx_i^T)^T R^{-1}(vx_i^T) = x_i(v^T R^{-1}v)x_i^T = (v^T R^{-1}v)\,x_i x_i^T,$
we have
$\mathrm{Var}(\hat\theta_{gee}) = \frac{\sigma^2}{v^T R^{-1}v}\Big(\sum_i x_i x_i^T\Big)^{-1}. \quad (17)$
From (8), we know $\mathrm{Var}(\hat\theta_{reg}) = \sigma^2\big(\sum_i x_i x_i^T\big)^{-1}$, so we have
$r(\hat\theta_{gee}, \hat\theta_{reg}) = v^T R^{-1}v. \quad (18)$
Now we show $v^T R^{-1}v > 1$. Observe that $v^T R^{-1}v = \langle R^{-1/2}v, R^{-1/2}v\rangle$. Let $a = R^{1/2}v$ and $b = R^{-1/2}v$; by the Cauchy–Schwarz inequality $|\langle a,b\rangle|^2 \le \langle a,a\rangle\langle b,b\rangle$, we have $|v^Tv|^2 \le (v^T R v)(v^T R^{-1}v)$. Since $v^Tv = T$ and $v^T R v = \sum_i\sum_j R_{ij} < T^2$, we know
$v^T R^{-1}v \ge \frac{T^2}{v^T R v} > 1. \quad (19)$
To further illustrate the connection between efficiency and the correlation of repeated measurements, we can assume a simple compound symmetric matrix $R = (1-\rho)I_T + \rho vv^T$. By the Woodbury matrix identity, we know $R^{-1} = \frac{1}{1-\rho}\big(I_T - \frac{\rho}{1+(T-1)\rho}vv^T\big)$, hence
$v^T R^{-1}v = \frac{1}{1-\rho}\Big(T - \frac{\rho T^2}{1+(T-1)\rho}\Big) = \frac{T}{1+(T-1)\rho}.$
We can see that as $\rho \to 1$, $r(\hat\theta_{gee},\hat\theta_{reg}) \to 1$, and as $\rho \to 0$, $r(\hat\theta_{gee},\hat\theta_{reg}) \to T$. In fact, for a general $R$, we can define the average correlation among different time points as
https://arxiv.org/abs/2505.08128v1
ยฏฯ=1 T(Tโˆ’1)P iฬธ=jRij, then from equation (19), we know r(ห†ฮธgee,ห†ฮธreg)โ‰ฅT2 vTRv=T 1 + (Tโˆ’1)ยฏฯ D Asymptotics and Efficiency of U test D.1 Pitman Efficiency of U test over t test We will derive the pitman efficiency on local alternative of small shift ฮดof certain distribution Fwith variance ฯƒ2. Recall that from definition in (1), we have U=1 n0n1n1X i=1n0X j=1Iy1iโ‰ฅy0j. From standard results in U statistics [23], we know โˆšn(Uโˆ’ฮธ)โ†’dN(0, ฯƒ2 U=ฯ1ฯƒ2 1+ฯ2ฯƒ2 2) (20) where ฯk= lim nโ†’โˆžn nk, and ฯƒ2 k=V ar(E(h(y1, y0|yk)). Under H0, we know Fy1=Fy0, and hence ฯƒ2 1=E(E(Iy1i>y0j|y1i))2โˆ’1 4 =E(Fy(y1i))2โˆ’1 4= (Z1 0x2dx)โˆ’1 4 =1 3โˆ’1 4=1 12. 22 Similarly, we have ฯƒ2 2=1 12. And we know ฯƒ2 U(0) =ฯ1+ฯ2 12=1 12(n n0+n n1) +op(1). (21) Under local alternative, we have: E(U) =E(E(Iy1iโ‰ฅyj0|y1i)) =Z F(y+ฮด)f(y)dy, and accordingly ยตโ€ฒ U(0) =โˆ‚E(U) โˆ‚ฮด ฮด=0=Z f2(y)dy. (22) For t test, ฯ„= ยฏy1โˆ’ยฏy0, we have E(ฯ„) =E(ยฏy1โˆ’ยฏy0) =ฮด, V ar(ฯ„|H0) =V ar(ยฏy1โˆ’ยฏy0) = (1 n1+1 n0)ฯƒ2. and accordingly ยตโ€ฒ ฯ„(0) =โˆ‚E(ฯ„) โˆ‚ฮด ฮด=0= 1, (23) ฯƒ2 ฯ„(0) = lim nโ†’โˆžnV ar (U|H0) = lim nโ†’โˆž(n n0+n n1)ฯƒ2= (ฯ1+ฯ0)ฯƒ2(24) Combining above results (21), (22), (24), (23), we complete the derivation of pitman efficiency: r(U, ฯ„) =ยตโ€ฒ U(0)/ฯƒU(0) ยตโ€ฒฯ„(0)/ฯƒฯ„(0)2 =(R f2(y)dy)2 ฯ0+ฯ1 12 1 (ฯ0+ฯ1)ฯƒ2= 12ฯƒ2Z f2(y)dy2 . (25) D.1.1 Pitman efficiency under specific distributions Weโ€™ll further derive pitman efficiency for a few distributions. Fornormal distribution :N(0, ฯƒ2), and density f(y) =1โˆš 2ฯ€ฯƒexp(โˆ’y2 2ฯƒ2), we have Z f2(y)dy=1 2ฯ€ฯƒ2Z exp(โˆ’y2 ฯƒ2)dy= 2ฯ€ฯƒ2Z exp(โˆ’u2)d(ฯƒu) =1 2ฯ€ฯƒZ exp(โˆ’u2)du=1 2ฯ€ฯƒโˆšฯ€=1 2โˆšฯ€ฯƒ, whereR exp(โˆ’u2)du=โˆšฯ€, because (Z exp(โˆ’u2)du)2=Z Z exp(โˆ’u2โˆ’v2)dudv =Z2ฯ€ 0Zโˆž 0eโˆ’r2rdrdฮธ =Z2ฯ€ 0dฮธZโˆž 0eโˆ’r2rdr= 2ฯ€(โˆ’1 2eโˆ’r2 โˆž 0) =ฯ€. Then we have r(U, ฯ„) = 12 ฯƒ2[1 2โˆšฯ€ฯƒ]2=3 ฯ€โ‰ˆ0.955. 
(26) ForLaplace distribution :Lap(0, b), with density f(y) =1 2bexp(โˆ’|y|/b)and variance V ar(y) = 2b2, we have Zโˆž โˆ’โˆžf2(y)dy= 2Zโˆž 0f2(y)dy= 2Zโˆž 01 4b2exp(โˆ’2|y| b)dy= 2Zโˆž 01 4b2exp(โˆ’2y b)dy =1 2b2 โˆ’b 2eโˆ’2y b โˆž 0 =1 4b, 23 and hence, r(U, ฯ„) = 12(2 b2)[1 4b]2=3 2. (27) Forlognormal distribution :Log(0, b2), with density f(y) =1 ybโˆš 2ฯ€exp(โˆ’(logy)2 2b2)and variance V ar(y) = (eb2โˆ’1)eb2, we have f2(y) =1 y2b22ฯ€exp(โˆ’(logy)2 b2), Z f2(y)dy=Z1 2ฯ€b2eโˆ’2ueโˆ’u2/b2eudu=1 2ฯ€b2Z eโˆ’uโˆ’u2 b2du (letu= log y) =1 2ฯ€b2Z eb2 4eโˆ’(u b+b 2)2du=eb2 4 2ฯ€b2Z eโˆ’w2d(bw) (letw=u b+b 2) =1 2bโˆšฯ€eb2 4 and hence, r(U, ฯ„) = 12( eb2โˆ’1)eb2(1 2bโˆšฯ€eb2 4)2=3 ฯ€b2(e5 2b2โˆ’e3 2b2), (28) which increase exponentially with b2. ForCauchy distribution :Cau(0,1), with density f(y) =1 ฯ€(1+y2), we have Z f2(y)dy=1 ฯ€2Z1 (1 +y)2dy=1 ฯ€2Zฯ€ 2 0cos2ฮธdฮธ (lety= cos ฮธ) =1 ฯ€2ฯ€ 2=1 2ฯ€(observing cos2ฮธ=1 + cos(2 ฮธ) 2) andV ar(y) =โˆž, and hence, r(U, ฯ„) =โˆž. (29) D.2 Asymptotics of Zero Trimmed U Lets1be the sum of ranks of all positive value in the 1st sample, i.e., s1=nโ€ฒ 1X iฮบ(yโ€ฒ 1i)Iyโ€ฒ 1i>0. Note ฮบ(yโ€ฒ 1i) =ฮบ(y1i),โˆ€y1i>0. Define, Sโ€ฒ=nโ€ฒ 1X iฮบ(yโ€ฒ 1i) S=n1X iฮบ(y1i). Observing that nโ€ฒ 1โˆ’n+ 1representing number of zeros in {yโ€ฒ 1i}nโ€ฒ 1 i=0, and the average rank for those zeros arenโ€ฒ 0+nโ€ฒ 1+1+n+ 0+n+ 1 2, we have Sโ€ฒ=s1+ (nโ€ฒ 1โˆ’n+ 1)nโ€ฒ 0+nโ€ฒ 1+ 1 + n+ 0+n+ 1 2. 24 Similarly, S=s1+ (n1โˆ’n+ 1)n0+n1+ 1 +
n+ 0+n+ 1 2. And by definition, Wโ€ฒ=Sโ€ฒโˆ’nโ€ฒ 1(nโ€ฒ 0+nโ€ฒ 1+ 1) 2, W=Sโˆ’n1(n0+n1+ 1) 2. Now, define w1=s1โˆ’n+ 1(n+ 0+n+ 1+1) 2, we have Wโ€ฒ=s1+ (nโ€ฒ 1โˆ’n+ 1)nโ€ฒ 0+nโ€ฒ 1+ 1 + n+ 0+n+ 1 2โˆ’nโ€ฒ 1(nโ€ฒ 0+nโ€ฒ 1+ 1) 2 =s1โˆ’n+ 1(n+ 0+n+ 1+ 1) 2+nโ€ฒ 1n+ 0โˆ’n+ 1nโ€ฒ 0 2 =w1+nโ€ฒ 1n+ 0โˆ’n+ 1nโ€ฒ 0 2 Similarly, W=w1+n1n+ 0โˆ’n+ 1n0 2 Ifd= 0, i.e., P(y+ 1โ‰ฅy+ 0) =1 2, we have E(s1|p0, p1) =n+ 1(n+ 0+n+ 1+1) 2, i.e., E(w1|p0, p1) = 0 . Then, we have E(Wโ€ฒ|p0, p1) =nโ€ฒ 1n+ 0โˆ’n+ 1nโ€ฒ 0 2=n1n0 2(pp0โˆ’pp1), E(W|p0, p1) =n1n+ 0โˆ’n+ 1n0 2=n1n0 2(p0โˆ’p1). Given p0andp1are fixed, we know n+ 0,n+ 1,nโ€ฒ 0andnโ€ฒ 1are all fixed. So, V ar(W|p0, p1) =V ar(Wโ€ฒ|p0, p1) =V ar(s1|p0, p1) =n+ 0n+ 1(n+ 0+n+ 1+ 1) 12. Then we can compute V ar(Wโ€ฒ)under H0, from its conditional expectation and conditional variance, V ar(Wโ€ฒ) =V ar(E(Wโ€ฒ|p0, p1)) +E(V ar(Wโ€ฒ|p0, p1)) =n2 0n2 1 4p2p1(1โˆ’p1) n1+p0(1โˆ’p0) n0 +n1n0p1p0 12(n1p1+n0p0) +o(n3)(30) =n0n1(n0+n1) 4 p3โˆ’p4+p3 3 +o(n3). (under H0,p=p0=p1) (31) Similarly, we have V ar(W) =n2 0n2 1 4p1(1โˆ’p1) n1+p0(1โˆ’p0) n0 +n1n0p1p0 12(n1p1+n0p0) +o(n3) (32) =n0n1(n0+n1) 4 pโˆ’p2+p3 3 +o(n3). (33) D.3 Pitman Efficiency of Zero Trimmed U test over standard U test We have compound alternative hypothesis on two dimension, m=p1โˆ’p0andd=P(y+ 1> y+ 0)โˆ’1 2. However, Pitman efficiency is defined for simple hypothesis testing. To handle the compound 25 hypothesis, we specify a direction ฯ•, and on direction of ฯ•, the test would be simple hypothesis. Specifically, let m(ฮฝ) =ฮฝcosฯ•, d(ฮฝ) =ฮฝsinฯ•. On direction of ฯ•, we test H0:ฮฝ= 0,vsHฯ• 1:ฮฝ >0. Then we know, ยตโ€ฒ(0) =โˆ‚ยต โˆ‚ฮฝ ฮฝ=0=โˆ‚ยต โˆ‚mโˆ‚m โˆ‚ฮฝ+โˆ‚ยต โˆ‚dโˆ‚d โˆ‚ฮฝ ฮฝ=0 = cos ฯ•โˆ‚ยต โˆ‚m m=0,d=0 + sin ฯ•โˆ‚ยต โˆ‚d m=0,d=0 (34) So, we need to compute ยต(m, d)under local alternative to obtain above quantity. 
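The algebra above repeatedly converts between rank sums and pairwise comparison counts. The conversion rests on the classical identity s1 − n1(n1 + 1)/2 = #{(i, j) : y1i > y0j} in the no-ties case; a stdlib-only sketch with simulated data (helper names are ours):

```python
import random

def rank_sum(sample1, sample0):
    """Sum of the ranks of sample1 within the pooled sample (ranks start at 1).
    Assumes no ties, as in the continuous-distribution setting."""
    pooled = sorted(sample1 + sample0)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    return sum(rank[v] for v in sample1)

def mw_count(sample1, sample0):
    """Mann-Whitney count: number of pairs with y1 > y0."""
    return sum(1 for a in sample1 for b in sample0 if a > b)

random.seed(0)
y1 = [random.random() for _ in range(15)]
y0 = [random.random() for _ in range(10)]
n1 = len(y1)
# Classical identity: rank-sum minus its minimum value equals the pair count.
assert rank_sum(y1, y0) - n1 * (n1 + 1) // 2 == mw_count(y1, y0)
```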
Observe that, w1=s1โˆ’n+ 1(n+ 0+n+ 1+ 1) 2=n0n1(1 2โˆ’Un+ 0n+ 1) where, Un+ 0n+ 1is Mann-Whitney U on positive-only samples: Un+ 0n+ 1=1 n+ 0n+ 1n+ 1X in+ 0X jIyโ€ฒ 1i>yโ€ฒ 0j Knowing that E(Un+ 0n+ 1|p0, p1) =P(y+ 1โ‰ฅy+ 0), we have E(Wโ€ฒ|p0, p1) =โˆ’n+ 0n+ 1 P(y+ 1โ‰ฅy+ 0)โˆ’1 2 +nโ€ฒ 1n+ 0โˆ’n+ 1nโ€ฒ 0 2 =โˆ’n+ 0n+ 1dโˆ’n+ 1nโ€ฒ 0โˆ’nโ€ฒ 1n+ 0 2 Hence, ยตWโ€ฒ(m, d) =E(Wโ€ฒ) =E(E(Wโ€ฒ|p0, p1)) =โˆ’n1n0dp(p+m)โˆ’n1n0 2 (p+m)2โˆ’p(p+m) =โˆ’n1n0 2[2p(p+m)d+m(p+m)]. Similarly, ยตW(m, d) =E(W) =E(E(W|p0, p1)) =E โˆ’n+ 0n+ 1dโˆ’n+ 1n0โˆ’n1n+ 0 2 =โˆ’n1n0dp(p+m)โˆ’n1n0 2[p+mโˆ’p] =โˆ’n1n0 2[2p(p+m)d+m]. We can ignore term โˆ’n0n1 2for the ratioยตโ€ฒ Wโ€ฒ(0) ยตโ€ฒ W(0). Observe that โˆ‚ยตWโ€ฒ โˆ‚m 0= (2pd+p+ 2m) m=0,d=0=p, โˆ‚ยตWโ€ฒ โˆ‚d 0= (2p(p+m)) m=0,d=0= 2p2, โˆ‚ยตW โˆ‚m 0= (2pd+ 1) m=0,d=0= 1, โˆ‚ยตWโ€ฒ โˆ‚d 0= (2p(p+m)) m=0,d=0= 2p2. 26 Combining above with (31),(33) and(34), we complete the proof of the pitman efficiency for Zero Trimmed U: rฯ•(Wโ€ฒ, W) =ฯƒ2 W(0) ฯƒ2 Wโ€ฒ(0)ยตโ€ฒ Wโ€ฒ(0) ยตโ€ฒ W(0)2 =1โˆ’p+p2 3 p2โˆ’p3+p2 3pcosฯ•+ 2p2sinฯ• cosฯ•+ 2p2sinฯ•2 . Figure 2: Plot of rฯ•(Wโ€ฒ, W)versus p for multiple fixed ฯ•. Figure 3: Plot of rฯ•(Wโ€ฒ, W)versus ฯ•for multiple fixed p. Note that we actually used the adjusted variance for non-zero trimmed version Wto handles the ties on the zeros. If we
calculated the unadjusted variance from the original approach, i.e., V ar(Wo) = n1n2(n1+n2+1) 12, then we have pitman efficiency for Zero-Trimmed U over unadjusted W as: 27 rฯ•(Wโ€ฒ, Wo) =1 3 p3โˆ’p4+p3 3pcosฯ•+ 2p2sinฯ• cosฯ•+ 2p2sinฯ•2 , (35) observing that W=Wofor point estimate. Figure 4: Plot of rฯ•(Wโ€ฒ, Wo)versus p for multiple fixed ฯ•. Figure 5: Plot of rฯ•(Wโ€ฒ, Wo)versus ฯ•for multiple fixed p. 28 E Doubly Robust Generalized U E.1 The Robustness of DRGU When there are no confounding effects, i.e., yโŠฅz, we can show that E(h(yi, yj)) =ฮดby condition- ing on z: E(h(yi, yj)) =P(zi= 1)E(hij|zi= 1) + P(zi= 0)E(hij|zi= 0) =p(1โˆ’p 2p(1โˆ’p)ฮด+ 0) + (1 โˆ’p)(0 +p 2p(1โˆ’p)ฮด) =ฮด, and hence E(Un) =ฮด. We can further show asymtotic normality:โˆšn(Unโˆ’ฮด)โ†’dN(0,4ฯƒ2 h). When there are confounding effects, we can form a inverse probability weighted (IPW) U statistics: UIPW n =n 2โˆ’1X i,jโˆˆCn 2hIPW ij, where, hIPW ij =zi(1โˆ’zj) 2ฯ€i(1โˆ’ฯ€j)ฯ†(yi1โˆ’yj0) +zj(1โˆ’zi) 2ฯ€j(1โˆ’ฯ€i)ฯ†(yj1โˆ’yi0), andฯ€i=E(zi|wi). Assuming yโŠฅz|w, we can show, E(hIPW ij) =E(E(hIPW ij|wi, wj)) =E(E(zi(1โˆ’zj)ฯ†(yi1โˆ’yj0)|wi, wj) 2ฯ€i(1โˆ’ฯ€j)) +E(E(zj(1โˆ’zi)ฯ†(yj1โˆ’yi0)|wi, wj) 2ฯ€j(1โˆ’ฯ€i)) =E(E(zi(1โˆ’zj)|wi, wj)E(ฯ†(yi1โˆ’yj0)|wi, wj) 2ฯ€i(1โˆ’ฯ€j)) +E(E(zj(1โˆ’zi)|wi, wj)E(ฯ†(yj1โˆ’yi0)|wi, wj) 2ฯ€j(1โˆ’ฯ€i)) =ฯ€i(1โˆ’ฯ€j) 2ฯ€i(1โˆ’ฯ€j)E(ฯ†(yi1โˆ’yj0)) +ฯ€j(1โˆ’ฯ€i) 2ฯ€j(1โˆ’ฯ€i)E(ฯ†(yj1โˆ’yi0)) =ฮด, and hence the IPW adjusted U statistics is unbiased, i.e., E(UIPW n) =ฮด. By further introducing gij=E(ฯ†(yi1โˆ’yj0)|wi, wj), we form a Doubly Robust Generalized U statistics, UDR n, with kernel, hDR ij=zi(1โˆ’zj) 2ฯ€i(1โˆ’ฯ€j)(ฯ†(yi1โˆ’yj0)โˆ’gij) +zj(1โˆ’zi) 2ฯ€j(1โˆ’ฯ€i)(ฯ†(yj1โˆ’yi0)โˆ’gji) +gij+gji 2. Itโ€™s easy to show that E(hDR ij) =ฮดassuming we know ฯ€andg, observing E(hDR ij) =E(E(hDR ij|wi, wj)) =E(E(zi(1โˆ’zj)|wi, wj)(gijโˆ’gij) 2ฯ€i(1โˆ’ฯ€j)) +E(E(zj(1โˆ’zi)|wi, wj)(gjiโˆ’gji) 2ฯ€j(1โˆ’ฯ€i)) +E(gij+gji 2) = 0 + 0 + ฮด=ฮด. 
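The doubly robust kernel h^DR_ij can be written down directly from the display above. A sketch, taking φ(x) = 1{x > 0} for concreteness (the paper leaves φ generic) and with invented inputs:

```python
def h_dr(zi, zj, yi, yj, pi_i, pi_j, g_ij, g_ji, phi=lambda x: float(x > 0)):
    """Doubly robust generalized U kernel: the IPW comparison is augmented
    with the outcome model g, so the estimate stays unbiased if either the
    propensity pi or g is correctly specified."""
    t1 = zi * (1 - zj) / (2 * pi_i * (1 - pi_j)) * (phi(yi - yj) - g_ij)
    t2 = zj * (1 - zi) / (2 * pi_j * (1 - pi_i)) * (phi(yj - yi) - g_ji)
    return t1 + t2 + (g_ij + g_ji) / 2
```

Setting g ≡ 0 recovers the IPW kernel; replacing the indicator by any other kernel φ leaves the structure unchanged.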
E.2 Semi-parametric Efficiency of DRGU In this section, we sketch the proof for DRGU as most efficient estimator under semi-parametric set-up. At a high level, we need to show DRGU has influence function (IF) that correspond to efficient influence function (EIF) for parameter ฮด=ฯ†(y1โˆ’y0), so naturally there are two steps: (i) find EIF for ฮด=ฯ†(y1โˆ’y0), (ii) show DRUโ€™s IF is consistent with EIF. 29 Preliminary : For regular asymptotic linear estimator ห†ฮธ, we haveโˆšn(ห†ฮธโˆ’ฮธ) =1 nP iฯ‘i+op(1).ฯ‘is the IF for ห†ฮธ. EIF ฯ‘โ€ฒis defined as the unique IF with smallest variance, i.e., V ar(ฯ‘โ€ฒ)โ‰คV ar(ฯ‘),โˆ€ฯ‘. Sinceโˆšn(ห†ฮธโˆ’ฮธ)โ†’pN(0, V ar (ฯ‘)), we know estimator with EIF has smallest variance. For finding the EIF, we follow the standard recipe in semi-parametric theory (i.e., 13.5 of [36]). 1.Identify IF ฯ‘Ffor full data, OF={(y(1), y(0), x)}, where y(1)andy(0)represent response variable under treatment and control respectively. 2. Find all IFs ฯ‘for observation data Oo={(y, z, w )}, ฯ‘(y, z, x ) =ฯ‘o(y, z, x ) + ฮ› where E(ฯ‘o(y, z, x )|OF) =ฯ‘F(y(1), y(0)) andฮ› ={L:E(L(y, z, x )|OF) = 0}is the augmentation space. Note that here, y=zy(1) + (1 โˆ’z)y(0)with the stable unit treatment value assumption(SUTV A). 3. Identify the EIF through projection onto the augmentation space. ฯ‘โ€ฒ(y, z, x ) =ฯ‘o(y, z, x )โˆ’ฮ (ฯ‘o(y, z, x )|ฮ›) where ฮ (f|ฮ›)is a projection of a function fon space ฮ›, such that E[(fโˆ’ฮ (f|ฮ›))g] = 0,โˆ€gโˆˆฮ›. For full data OF={(y(1), y(0), x)}, we can construct U kernel hF ij= 0.5(ฯ†(yi(1)โˆ’yj(0) + ฯ†(yj(1)โˆ’yi(0)), and form a U statistic: UF=n 2โˆ’1X iฬธ=jhF ij for unbiased estimation of ฮด=ฯ†(y(1)โˆ’y(0)). From Hajek projection of
U statistics, we knowโˆšn(UFโˆ’ฮด) =2 nP iหœh(yi) +op(1), where หœh(yi) =E(hF ij|OF i)โˆ’ฮด. Now observe, E(hF ij|OF i) = 0 .5E(ฯ†(yi(1)โˆ’yj(0)|OF i) + 0.5E(ฯ†(yj(1)โˆ’yi(0)|OF i) = 0.5Z ฯ†(yi(1)โˆ’s)p0(s)ds+ 0.5Z ฯ†(tโˆ’yi(0))p1(t)dt = 0.5h1(yi(1)) + 0 .5h0(yi(0)) where h1(y) =R ฯ†(yโˆ’s)p0(s)ds,h0(y) =R ฯ†(tโˆ’y)p1(t)dt, andp1(ยท), p0(ยท)are marginal density ofyunder treatment and control respectively. We then haveโˆšn(UFโˆ’ฮด) =1 nP i[h1(yi(1)) + h0(yi(0))โˆ’2ฮด] +op(1), and as a result the corresponding IF under full data is ฯ‘F=h1+h0โˆ’2ฮด. Next step is to find an IF ฯ‘ofor observation data Oo={(y, z, x )}. Letฯ‘obe the inverse propensity weighting version of the ฯ‘F, i.e., ฯ‘o=z ฯ€h1+1โˆ’z 1โˆ’ฯ€h0โˆ’2ฮด where ฯ€=E(z|x). We can verify that E(ฯ‘o|OF) =ฯ‘F, observing E(z ฯ€h1|OF) =h1 ฯ€E(z|x) =h1 as similarly E(1โˆ’z 1โˆ’ฯ€h0|OF) =h0. 30 We then specify the augmentation space ฮ›. For any function L(y, z, x ), since zโˆˆ {0,1}, we can represent the function as L(y, z, w ) =zL1(y, w) + (1 โˆ’z)L0(y, w). Further by definition, E(L|OF) = 0 , we know E(L|OF) =ฯ€L1(y(1), w) + (1 โˆ’ฯ€)L0(y(0), w) = 0 ,โˆ€w, y(0), y(1) Since above equation applies to all values of w, y(0), y(1), we know L0(y(0), w) = L0(w), L1(y(1), w) =L1(w),L0(w) =โˆ’ฯ€ 1โˆ’ฯ€L1(w), and we can represent L(y, z, w )as L(y, z, w ) =zL1(w) + (1 โˆ’z)โˆ’ฯ€ 1โˆ’ฯ€L1(w) =zโˆ’ฯ€ 1โˆ’ฯ€L1(w) Thus, we can specify ฮ› ={L:L(y, z, w ) = (zโˆ’ฯ€)f(w)for arbitrary f}. We next find projection so that EIF ฯ‘โ€ฒ=ฯ‘oโˆ’ฮ (ฯ‘o|ฮ›). From specification of ฮ›, letฮ (zh1|ฮ›) = (zโˆ’ฯ€)f1, and ฮ ((1โˆ’z)h0|ฮ›) = ( zโˆ’ฯ€)f0. By definition, E([zh1โˆ’(zโˆ’ฯ€)f1][(zโˆ’ฯ€)f]) = 0 ,โˆ€f. Observing, E([zh1โˆ’(zโˆ’ฯ€)f1][(zโˆ’ฯ€)f]) =E(z(zโˆ’ฯ€)fhโˆ’(zโˆ’ฯ€)2f1f) =E(ฯ€(1โˆ’ฯ€)fE(h1|z= 1, x))โˆ’E(ฯ€(1โˆ’ฯ€)f1f) =E(ฯ€(1โˆ’ฯ€) [E(h1|z= 1, x)โˆ’f1]f) = 0 ,โˆ€f we have f1=E(h1|z= 1, x). Similarly, we have f0=โˆ’E(h0|z= 0, x). 
Substitute the two equation, we get ฮ (ฯ‘o|ฮ›) =zโˆ’ฯ€ ฯ€E(h1|z= 1, x)โˆ’zโˆ’ฯ€ 1โˆ’ฯ€E(h0|z= 0, x) and hence the EIF is ฯ‘โ€ฒ=z ฯ€h1+1โˆ’z 1โˆ’ฯ€h0โˆ’2ฮดโˆ’zโˆ’ฯ€ ฯ€E(h1|z= 1, x) +zโˆ’ฯ€ 1โˆ’ฯ€E(h0|z= 0, x) =z ฯ€(h1โˆ’E(h1|z= 1, w)) +1โˆ’z 1โˆ’ฯ€(h0โˆ’E(h0|z= 0, w)) +E(h1|z= 1, w) +E(h0|z= 0, w)โˆ’2ฮด (36) We then need to show the UDR nhas influence function that is consistent with ฯ‘โ€ฒ, i.e.,ฯ‘DR=ฯ‘+op(1). From Hajek projection, we can obtain UDR nโ€™s influence function, i.e., ฯ‘DR= 2E(hDR ij|Oo i)โˆ’2ฮด. Recall hDR ij=zi(1โˆ’zj) 2ฯ€i(1โˆ’ฯ€j)(ฯ†(yi1โˆ’yj0)โˆ’gij) +zj(1โˆ’zi) 2ฯ€j(1โˆ’ฯ€i)(ฯ†(yj1โˆ’yi0)โˆ’gji) +gij+gji 2. Letโ€™s calculate the E(hDR ij|Oo i)term by term. For the first term, we have E(zi1โˆ’zj 1โˆ’ฯ€jฯ†(yi1โˆ’yj0))|Oo i) =E((1โˆ’zi)ฯ†(yi1โˆ’yj0))|Oo i) =zih1(yi) and similarly E((1โˆ’zi)zj ฯ€jฯ†(yj1โˆ’yi0) = (1 โˆ’zi)h0(yi) By definition, gij=E[ฯ†(yiโˆ’yj)|wi, wj, zi= 1, zj= 0] andgji=E[ฯ†(yjโˆ’yi)|wj, wi, zj= 1, zi= 0]. we can show 31 E(gij|Oo i) =ZZ Z ฯ†(sโˆ’t)p1(s|wi)p0(t|wj)dsdt p(wj)dwj s=yi =Z Z ฯ†(sโˆ’t)p1(s|wi)Z p0(t|wj)p(wj)dwj dsdt s=yi =Z Z ฯ†(sโˆ’t)p1(s|wi)p0(t)dsdt s=yi =ZZ ฯ†(sโˆ’t)p0(t)dt p(s|wi, zi= 1)ds s=yi =E(h1(yi)|wi, zi= 1) and similarly, E(gji|Oo i) =E(h0(yi)|wi, zi= 0) . We also know E(1โˆ’zj 1โˆ’ฯ€jgij|Oo i) =E E(1โˆ’zj 1โˆ’ฯ€j|wj)gij|Oo i =E(gij|Oo i). and similarly E(zj ฯ€jgji|Oo i) =E(gji|Oo i). Substituting above equations, we have E(hDR ij|Oo i) =ฯ‘โ€ฒ 2+ฮด, and hence ฯ‘DR=ฯ‘โ€ฒexactly. E.3 Asymptotics of DRGU with UGEE Weโ€™ll first sketch the proof for the asymptotic normality of DRGU. The proof based on UGEE is very similar to GEE in Appendix C.1. Recall that, Un(ฮธ) =X i,jโˆˆCn 2Un,ij=X i,jโˆˆCn 2Gij(hijโˆ’fij) =0, where, hij= [hij1,
hij2, hij3]T fij= [fij1, fij2, fij3]T hij1=zi(1โˆ’zj) 2ฯ€i(1โˆ’ฯ€j)(ฯ†(yi1โˆ’yj0)โˆ’gij) +zj(1โˆ’zi) 2ฯ€j(1โˆ’ฯ€i)(ฯ†(yj1โˆ’yi0)โˆ’gji) +gij+gji 2 hij2=zi+zj hij3=zi(1โˆ’zj)ฯ†(yi1โˆ’yj0) +zj(1โˆ’zi)ฯ†(yj1โˆ’yi0) fij1=ฮด fij2=ฯ€i+ฯ€j fij3=ฯ€i(1โˆ’ฯ€j)gij+ฯ€j(1โˆ’ฯ€i)gji ฯ€i=ฯ€(wi;ฮฒ) gij=g(wi, wj;ฮณ) and Gij=DT ijVโˆ’1 ij Dij=โˆ‚fij โˆ‚ฮธ, Vij=๏ฃฎ ๏ฃฐฯƒ2 ij10 0 0ฯƒ2 ij20 0 0 ฯƒ2 ij3๏ฃน ๏ฃป ฯƒ2 ijk=V ar(hijk|wi, wj). 32 Recall ui=E(Un,ij|yi0, yi1, zi, wi),ฮฃ =V ar(ui),Mij=โˆ‚(fijโˆ’hij) โˆ‚ฮธ, andB=E(GM), and ห†ฮด be the 1st element in ห†ฮธ. LetยฏUn(ฮธ, ฮฑ) =1 (n 2)P i,jโˆˆCn 2Un,ij. We know ยฏUn(ฮธ, ฮฑ)is a U statistics with mean E(Un,ij) = 0 . From asymptotic theory of U statistics, we knowโˆšnยฏUn(ฮธ, ฮฑ)โ†’dN(0,4ฮฃ) Then similarly as (12) and (14), we know โˆšn(ห†ฮธโˆ’ฮธ) =โˆ’(โˆ‚ยฏUn โˆ‚ฮธ)โˆ’โˆšnยฏUn(ฮธ, ฮฑ) +op(1) (37) Similarly as (15), we have โˆ‚ยฏUn โˆ‚ฮธโ†’pE(Gโˆ‚S โˆ‚ฮธ) =โˆ’E(GM) (38) Combining the above two, we have โˆšn(ห†ฮธโˆ’ฮธ) =Bโˆ’โˆšnยฏUn(ฮธ, ฮฑ) +op(1) (39) where B=E(GM). Hence, we establish the following asymptotic normality: โˆšn(ห†ฮธโˆ’ฮธ)โ†’dN(0,4(Bโˆ’)TฮฃBโˆ’). We skip the proof for consistency when only one of ฯ€andgis correctly specified, as most of it has been discussed in Appendix E.1. As for semi-parametric bound ofห†ฮด, proof is straightforward building on results from E.2 and insights from B.3. Observing Dhas structure of block diagonal with following structure: D=๏ฃฎ ๏ฃฏ๏ฃฏ๏ฃฐ1 0 ยทยทยท 0 0d22ยทยทยท d2p ............ 0dT2ยทยทยท dTp๏ฃน ๏ฃบ๏ฃบ๏ฃป Recall EIF ฯ‘โ€ฒfrom E.2, we know E(ฯ‘โ€ฒSฯ€) = 0 andE(ฯ‘โ€ฒSg) = 0 , and thus E(M)has following structure: E(M) =๏ฃฎ ๏ฃฏ๏ฃฏ๏ฃฐ1 0 ยทยทยท 0 0m22ยทยทยท m2p ............ 0mT2ยทยทยท mTp๏ฃน ๏ฃบ๏ฃบ๏ฃป We then know B=E(GM)has the following structure: B=๏ฃฎ ๏ฃฏ๏ฃฏ๏ฃฐฯƒโˆ’2 1 0ยทยทยท 0 0b22ยทยทยท b2p ............ 0bp2ยทยทยท bpp๏ฃน ๏ฃบ๏ฃบ๏ฃป Since E(ฯ‘โ€ฒSฯ€) = 0 andE(ฯ‘โ€ฒSg) = 0 , we know ฮฃis block diagonal, ฮฃ =๏ฃฎ ๏ฃฏ๏ฃฏ๏ฃฐฯƒ2 10ยทยทยท 0 0s22ยทยทยท s2p ............ 
0 sp2 ··· spp ]

Observing that the asymptotic covariance matrix is 4(B^{-1})^T Σ B^{-1}, we know the asymptotic variance of δ̂ is the same as that of the EIF, i.e., σ²_δ = 4σ²_1 = Var(ϑ′). 33

F Details on Simulation Studies

F.1 Regression Adjustment

We compare the type I error rate of regression adjustment and the unadjusted t-test. We perform these simulations using data generated from a Poisson distribution with the following generation process:

wi ∼ N(0, 1), zi | wi ∼ Bernoulli(1 / (1 + e^{-γ wi})), yi | zi, wi ∼ Poisson(e^{2 + βz zi + βw wi}).

Here γ ≥ 0 is a hyperparameter which controls the degree of confounding; when γ = 0 there is no confounding. βz controls the treatment effect. We evaluate the type I error with βz = 0, and power with βz > 0.

Table 4: Type I Error Comparison: Unadjusted t-test vs. Regression Adjustment

γ    t-test  RA
0.0  0.0504  0.0504
0.2  0.0576  0.0482
0.4  0.0706  0.0522
0.6  0.0844  0.0496
0.8  0.1082  0.0570
1.0  0.1262  0.0524

We validate that regression adjustment controls the type I error, and that the unadjusted t-test leads to type I error rate inflation under confounding.

Table 5: Power Comparison: Unadjusted t-test vs. Regression Adjustment

Treatment  No Confounding (γ = 0.0)     With Confounding (γ = 0.1)
Effect     Power (t-test)  Power (RA)   Power (t-test)  Power (RA)
0.10       0.727           0.702        0.819           0.690
0.11       0.722           0.779        0.825           0.767
0.12       0.734           0.848        0.826           0.834
0.13       0.729           0.907        0.828           0.879
0.14       0.751           0.947        0.862           0.949
0.15       0.773           0.956        0.863           0.961
0.16       0.795           0.975        0.881           0.987
0.17       0.793           0.992       0.885           0.991
0.18       0.777           0.995        0.881           0.990
0.19       0.796           0.999        0.900           0.996
0.20       0.807           0.997        0.908           0.999

We demonstrate that regression adjustment improves
power over the t-test (γ = 0). When confounding is present, the power of the raw unadjusted t-test is not valid, as it cannot control the type I error.

F.2 GEE

We evaluate the type I error and power of two estimators in the presence of confounding under varying sample sizes and effect sizes.

(i) GLM adjustment at the final time point: at t = T, fit a Poisson regression YiT ∼ Poisson(exp(β0 + β1 zi + γ wi)) =⇒ β̂1^{GLM}.

(ii) GEE adjustment with longitudinal data: using all observations t = 1, …, T, obtain β̂1^{GEE} by solving the estimating equation

Un(β1) = Σ_{i=1}^N Σ_{t=1}^T Dit^T (Yit − exp(β0 + β1 zi + γ wi)) = 0, 34

where Dit = ∂E[Yit | zi, wi] / ∂β1 = zi exp(β0 + β1 zi + γ wi).

We generate a longitudinal panel of N subjects over T visits by first drawing a time-invariant confounder and treatment for each subject, then simulating a Poisson count at each visit:

wi ∼ N(0, 1), zi ∼ Bernoulli(σ(α0 + α1 wi)), Yit ∼ Poisson(exp(β0 + β1 zi + γ wi)),

for i = 1, …, N and t = 1, …, T.

Table 6: Empirical Type I error rates (β1 = 0) for GEE and GLM estimators under confounded assignment at nominal levels α.

Sample Size  α     GEE    GLM
50           0.05  0.068  0.057
50           0.01  0.021  0.013
200          0.05  0.048  0.044
200          0.01  0.009  0.005

We demonstrate that GEE controls the type I error adequately in large samples, with only modest inflation when sample sizes are small.

Table 7: Empirical power for Poisson GEE vs. GLM estimators across sample sizes N, effect sizes β1, and significance levels α.

Sample Size  β1    α     GEE    GLM
50           0.10  0.05  0.283  0.100
50           0.10  0.01  0.138  0.025
50           0.20  0.05  0.749  0.200
50           0.20  0.01  0.556  0.066
200          0.10  0.05  0.729  0.189
200          0.10  0.01  0.490  0.065
200          0.20  0.05  1.000  0.645
200          0.20  0.01  0.997  0.404

We demonstrate that by leveraging longitudinal repeated measurements, the GEE-adjusted estimator achieves higher statistical power than the Poisson GLM across both small and large samples.
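The panel-generation step described above can be sketched as follows; α0, α1 and the remaining defaults are placeholder values of ours, not the paper's settings:

```python
import numpy as np

def simulate_panel(N, T, beta0=2.0, beta1=0.0, gamma=0.5,
                   alpha0=0.0, alpha1=1.0, seed=0):
    """Longitudinal Poisson panel: a time-invariant confounder w and
    treatment z per subject, then Poisson counts at each of T visits."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=N)                           # w_i ~ N(0, 1)
    pz = 1.0 / (1.0 + np.exp(-(alpha0 + alpha1 * w)))
    z = rng.binomial(1, pz)                          # confounded assignment
    mu = np.exp(beta0 + beta1 * z + gamma * w)       # same mean at every visit
    Y = rng.poisson(mu[:, None], size=(N, T))        # Y_it ~ Poisson(mu_i)
    return w, z, Y
```

With beta1 = 0 this generates data for the type I error rows of Table 6; setting beta1 > 0 generates the power scenarios.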
Moreover, this power advantage is especially pronounced at medium effect sizes ( ฮฒ1= 0.1) compared to larger ones ( ฮฒ1= 0.2). F.3 Mann Whitney U We compare the zero-trimmed Mann-Whitney U-test to the standard Mann-Whitney U-test and two-sample t-test in type I error rate and power. We simulate the three tests using data generated from zero-inflated log-normal and positive Cauchy distributions and multiple effect sizes. Formally, we generate control data y0i= (1โˆ’Di)yโ€ฒ 0i, where DiโˆผBernoulli (p0)andyโ€ฒ 0iโˆผf(0, ฯƒ). We generate test data y1j= (1โˆ’Dj)yโ€ฒ 1jwhere DjโˆผBernoulli (p0+pโˆ†)andyโ€ฒ 1jโˆผf(ยต, ฯƒ)forpโˆ†, ยตโ‰ฅ0. Here fdenotes either the lognormal or positive Cauchy distribution. 35 Table 8: Type I Error Rates at ฮฑ= 0.05for Zero-Inflated Data Distribution Zero Prop. Sample SizeType I Error Rate Zero-trimmed U Standard U t-test LogNormal0.050 0.0540 0.0540 0.0015 200 0.0515 0.0515 0.0040 0.250 0.0435 0.0500 0.0025 200 0.0480 0.0545 0.0050 0.550 0.0315 0.0465 0.0020 200 0.0405 0.0490 0.0055 0.850 0.0230 0.0475 0.0005 200 0.0305 0.0455 0.0050 Positive Cauchy0.050 0.0535 0.0535 0.0220 200 0.0530 0.0525 0.0200 0.250 0.0465 0.0540 0.0205 200 0.0405 0.0480 0.0240 0.550 0.0335 0.0455 0.0215 200 0.0420 0.0500 0.0175 0.850 0.0290 0.0550 0.0230 200 0.0355 0.0500 0.0170 Table 9: Power
Comparison for Positive Cauchy and LogNormal Distributions with Equal Zero- Inflation (50%) Distribution Sample Size Effect SizePower at ฮฑ= 0.05 Zero-trimmed U Standard U t-test Positive Cauchy500.25 0.038 0.040 0.018 0.50 0.050 0.048 0.022 0.75 0.113 0.085 0.033 1.00 0.131 0.086 0.041 2000.25 0.079 0.065 0.011 0.50 0.165 0.094 0.026 0.75 0.339 0.166 0.031 1.00 0.555 0.262 0.048 LogNormal500.25 0.033 0.043 0.002 0.50 0.045 0.053 0.003 0.75 0.048 0.053 0.004 1.00 0.050 0.054 0.004 2000.25 0.044 0.044 0.009 0.50 0.067 0.059 0.004 0.75 0.090 0.067 0.007 1.00 0.138 0.082 0.011 We validate that the zero-trimmed Mann-Whitney U-test has more power than the other two tests on almost all scenarios of zero-inflated heavy-tailed data, while still controlling type I error. F.4 Doubly Robust Generalized U F.4.1 Snapshot DRGU We generate nโˆˆ {50,200}i.i.d. observations (yi, zi, wi)withp= 1baseline covariates for simplicity wiโˆผ N(0,1). The true propensity score is logistic, ฯ€(wi) =ฯƒ โˆ’0.2wi+ 0.6w2 i , z i|wiโˆผBernoulli ฯ€(wi) , 36 where ฯƒ(x) = 1 /(1 +eโˆ’x). The outcome mean model is: ยต0(wi, zi) =ฮฒzi+ 1.0wi, y i|(zi, wi)โˆผ P ยต0(wi, zi),1 where constant ATE ฮฒโˆˆ {0.0,0.5}andPis one of the normal, log-normal, and Cauchy distributions. We compare Type I error rates and power of correctly specified DRGU , correctly specified linear regression OLS, and Wilcoxon rank sum test U(which does not account for confounding covariates). To probe double robustness, we set up misDRGU as misspecifying the quadratic outcome propensity score model with a linear mean model, while the outcome model in misDRGU is specified correctly. 
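To illustrate the role of the weights in this setup, here is a small self-contained sketch of a self-normalized (Hajek-style) variant of the IPW U from Appendix E.1, run on data generated as above with no treatment effect and the true propensity plugged in; everything here is our own illustration with φ(x) = 1{x > 0}, not the simulation code behind the tables:

```python
import numpy as np

def ipw_u(y, z, pi):
    """Self-normalized IPW generalized U: weighted average of 1{y_i > y_j}
    over treatment-control pairs, targeting P(y(1) > y(0))."""
    num, den = 0.0, 0.0
    n = len(y)
    for i in range(n):
        for j in range(n):
            if z[i] == 1 and z[j] == 0:
                w = 1.0 / (pi[i] * (1 - pi[j]))   # inverse propensity weight
                num += w * float(y[i] > y[j])
                den += w
    return num / den

rng = np.random.default_rng(1)
n = 300
w = rng.normal(size=n)
pi = 1 / (1 + np.exp(-(-0.2 * w + 0.6 * w**2)))   # quadratic propensity, as above
z = rng.binomial(1, pi)
y = 0.0 * z + 1.0 * w + rng.normal(size=n)        # no treatment effect
est = ipw_u(y, z, pi)
```

Under the null with confounding, the weighted estimate stays near 1/2, whereas the unweighted rank comparison is biased by the covariate imbalance (the inflation visible in the U column of Table 10).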
Table 10: Type I Error Rate at sample size = 200 Distribution DRGU misDRGU OLS U Normal ฮฑ= 0.05 0.041 0.049 0.043 0.185 LogNormal ฮฑ= 0.05 0.054 0.070 0.054 0.150 Cauchy ฮฑ= 0.05 0.052 0.065 0.042 0.149 Normal ฮฑ= 0.01 0.014 0.005 0.012 0.045 LogNormal ฮฑ= 0.01 0.012 0.020 0.007 0.049 Cauchy ฮฑ= 0.01 0.012 0.025 0.008 0.02 Table 11: Power at ฮฑ= 0.05, ATE=0.5 Distribution Sample size DRGU misDRGU OLS U Normal200 0.750 0.585 0.940 0.299 50 0.135 0.085 0.135 0.035 LogNormal200 0.610 0.515 0.435 0.235 50 0.260 0.210 0.190 0.110 Cauchy200 0.660 0.580 0.435 0.310 50 0.265 0.180 0.165 0.130 F.4.2 Longitudinal DRGU For the longitudinal setting, we use the same simulation setup as above for observations (yit, zi, wit) fort= 1, ..., T = 2time points. The true propensity score is logistic of time-varying covariates, ฯ€(wi) =ฯƒ โˆ’0.3wi1โˆ’0.6wi2 , z i|wiโˆผBernoulli ฯ€(wi) , where ฯƒ(x) = 1 /(1 +eโˆ’x). The outcome mean model is: ยต0(wit, zi) =ฮฒzi+ 1.0wit, y it|(zi, wit)โˆผ P ยต0(wit, zi),1 We compare three models longDRGU ,DRGU using the last timepoint data snapshot, and GEE. The time-varying covariates highlight the strength of using longitudinal method compared to snapshot analysis. Table 12: Type I Error Rate at ฮฑ= 0.05, sample size = 200, T=2 Distribution longDRGU DRGU GEE Normal 0.03 0.04 0.04 LogNormal 0.04 0.05 0.02 Cauchy 0.05 0.05 0.05 Table 13: Power at ฮฑ= 0.05, ATE=0.5, sample size = 200, T=2 Distribution Sample size longDRGU DRGU GEE Normal200 0.85 0.88 0.92 50 0.52 0.39 0.75 LogNormal200 0.85 0.78 0.68 50 0.37 0.30 0.33 Cauchy200
0.83 0.76 0.66 50 0.38 0.32 0.29 37

G Details on A/B Testing

G.1 Email Marketing

We conducted an A/B test comparing our legacy email marketing recommender system against a newer version designed with improved campaign personalization using neural bandits. We randomly assigned audience members to receive recommendations from either system and measured the downstream impact on conversion value, a proprietary metric measuring the value of a conversion. The resulting conversion value presented challenging statistical properties: extreme zero inflation (>95% of members had no conversions in both test groups) and significant right-skew among the 1% who did convert. These characteristics violated the assumptions of conventional testing methods such as the standard t-test. The zero-trimmed Mann-Whitney U-test proved ideal for this scenario by balancing the proportion of zeros between test groups before performing rank comparisons. This approach maintained appropriate Type I error control while providing superior statistical power compared to both the t-test and the standard Mann-Whitney U-test. Using the zero-trimmed Mann-Whitney U-test, we detected a statistically significant +0.94% lift in overall conversion value, most of which was driven by a +0.11% lift in B2C product conversions among members experiencing the improved campaign personalization (p-value < 0.001). By contrast, the t-test was unable to detect a significant effect on the conversion value metric (p-value = 0.249).

G.2 Targeting in Feed

We conducted an online experiment to evaluate the impact of a new marketing algorithm versus the legacy algorithm for recommending ads in a particular slot in Feed. The primary interest of the study is the downstream conversion impact. Members eligible for a small number of pre-selected campaigns were the unit of randomization. We encountered two main challenges. First, the ad impression allocation mechanism showed a selection bias favoring recommendations from the control system.
As a result, we adjusted for impressions as a cost measure and compared return-on-investment (ROI) between the control and treatment groups. Second, the limited campaign and participant selection introduced potential imbalance in baseline covariates even under randomization. Specifically, we observed that a segment of members with a lower baseline conversion rate was more likely to be in the treatment group than in the control group. This introduced a classic case of Simpson's Paradox, where the conversion rate averaged over all segments is similar in both groups but higher in the treatment group when stratified by this confounding segment. We summarize these imbalanced features in Table 14. Figure 6 further shows the large distribution mismatch between impressions in the treatment and control groups. We addressed both of these issues by using regression adjustment to estimate the lift in ROI while accounting for confounders such as membership in the segment with a low baseline conversion rate. We found the new algorithm to have a statistically significant lift of 1.84% in conversion per impression, with p-value < 0.001 and 95% confidence interval (1.64% - 2.05%). This is in contrast to failing to reject the null hypothesis of no effect when using the two-sample t-test for the difference in means of conversion rate (p-value = 0.154).

Table 14: Characteristics by treatment variant of imbalanced data. Values are relative to mean values in the control group.

Control
Treatment
mean  mean
Conversions          1.0  +0.3%
Impressions          1.0  -37.7%
Low-baseline segment 1.0  +9.5%

38

Figure 6: Distributions of (normalized) impressions and conversions from the targeting in feed experiment.

G.3 Paid Search Campaigns

We illustrate leveraging longitudinal repeated measurements in A/B testing (via GEE) to improve power, using data collected in an online test run on paid ad campaigns over a 28-day period. We randomized 64 ad campaigns at the campaign level into test and control arms (32 campaigns each), a typical setup for tests run on third-party advertising platforms. We collected daily conversion values for each campaign throughout the experiment, yielding a time series of repeated measurements at the campaign-day level. Due to the limited sample size, a traditional two-sample comparison lacks the power to detect the treatment effects in this test. To address this small-sample limitation, we fit a Generalized Estimating Equation (GEE) model using campaign as the grouping variable and an exchangeable working-correlation structure to capture within-campaign serial dependence. During the 28-day test, by "borrowing strength" across daily measurements, the GEE framework substantially reduced residual variance and produced tighter confidence intervals around the treatment coefficient. In this phase, the GEE-estimated treatment effect was very close to the significance level (p-value = 0.051). In comparison, the snapshot regression analysis using the last snapshot attains a p-value of 0.184. We also reserved a 28-day validation period prior to the actual launch, during which no treatment was applied, so that the treatment and control groups should exhibit no true difference. We collected campaign-day conversion values in the same format and ran the identical GEE analysis, yielding an estimated effect indistinguishable from zero (p-value = 0.82).
This confirms that leveraging repeated measurements through GEE both enhances sensitivity to subtle treatment effects and maintains proper control of the type I error. Observing that the distributions of the response variable exhibit heavy-tail characteristics, we further performed statistical testing using the doubly robust U, assuming a compound symmetric correlation structure for R(α). We attained a statistically significant result with P̂(y1 > y0) = 0.54 and p-value = 0.045. 39
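The "borrowing strength" effect has a closed form under the exchangeable working correlation: Appendix C.2 shows the variance-reduction factor vᵀR⁻¹v equals T / (1 + (T − 1)ρ) for a compound symmetric R. A quick numerical check (T and ρ values arbitrary):

```python
import numpy as np

def efficiency_gain(T, rho):
    """v^T R^{-1} v for compound symmetric R = (1 - rho) I + rho 11^T.
    Appendix C.2 derives the closed form T / (1 + (T - 1) rho)."""
    R = (1 - rho) * np.eye(T) + rho * np.ones((T, T))
    v = np.ones(T)
    return float(v @ np.linalg.solve(R, v))

for T, rho in [(28, 0.3), (7, 0.0), (5, 0.9)]:
    closed_form = T / (1 + (T - 1) * rho)
    assert abs(efficiency_gain(T, rho) - closed_form) < 1e-8
```

With 28 daily measurements and moderate within-campaign correlation, the factor is well above 1, which is why pooling the panel tightens the confidence interval relative to a last-day snapshot.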
arXiv:2505.08210v1 [math.ST] 13 May 2025

Submitted to Bernoulli

On eigenvalues of a renormalized sample correlation matrix

QIANQIAN JIANG*1,a, JUNPENG ZHU*1,b and ZENG LI1,c

1Department of Statistics and Data Science, Southern University of Science and Technology*, a jqq172515@gmail.com, b zhujp@sustech.edu.cn, c liz9@sustech.edu.cn

This paper studies the asymptotic spectral properties of a renormalized sample correlation matrix, including the limiting spectral distribution, the properties of the largest eigenvalues, and the central limit theorem for linear spectral statistics. All asymptotic results are derived under a unified framework where the dimension-to-sample-size ratio p/n → c ∈ (0, ∞]. Based on our CLT result, we propose an independence test statistic capable of operating effectively in both high and ultrahigh dimensional scenarios. Simulation experiments demonstrate the accuracy of the theoretical results.

MSC2020 subject classifications: Primary 60B20; secondary 62H15

Keywords: Renormalized sample correlation matrix; Ultrahigh dimension; Linear spectral statistics; Central limit theorem

1. Introduction

Let us consider the widely used independent components (IC) model for the population, admitting the following stochastic representation

y = μ + Σ^{1/2} x,

where μ ∈ R^p denotes the population mean and x ∈ R^p is a random vector with independent and identically distributed (i.i.d.) components with zero mean and unit variance. Let y1, …, yn be n i.i.d. observations from this population and Y = (y1, …, yn) be the p × n data matrix. The sample correlation matrix Rn can be written as

Rn = Dn^{1/2} Sn Dn^{1/2},

where

Dn^{1/2} = Diag(1/√s11, 1/√s22, …, 1/√spp), Sn = (1/N) Y Φ Y^T, Φ = In − (1/n) 1n 1n^T, N = n − 1.

Here skk = ek^T Sn ek, k = 1, …, p, ei ∈ R^p denotes the vector with the i-th element being 1 and all others being 0, and 1n = (1, …, 1)^T in R^n.
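These definitions translate directly into code. A short sketch (our own, with an arbitrary simulated Y) that builds Sn, Dn^{1/2} and Rn and checks the unit diagonal:

```python
import numpy as np

def sample_correlation(Y):
    """R_n = D_n^{1/2} S_n D_n^{1/2} with S_n = Y Phi Y^T / (n - 1),
    Phi = I_n - (1/n) 1 1^T, and D_n^{1/2} = Diag(1/sqrt(s_kk))."""
    p, n = Y.shape
    Phi = np.eye(n) - np.ones((n, n)) / n    # centering projection
    S = Y @ Phi @ Y.T / (n - 1)              # sample covariance S_n
    d = 1.0 / np.sqrt(np.diag(S))            # D_n^{1/2} entries
    return d[:, None] * S * d[None, :]       # D^{1/2} S D^{1/2}

rng = np.random.default_rng(0)
R = sample_correlation(rng.normal(size=(50, 20)))   # p x n data, p > n allowed
assert np.allclose(np.diag(R), 1.0)
```

When p > n, the matrix Rn built this way has rank at most n − 1, which is the degeneracy motivating the renormalization studied below.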
The eigenvalues of Rn, λ1^{Rn} ≥ ··· ≥ λp^{Rn}, serve as important statistics and often play crucial roles in the inference on the population correlation matrix R, see Anderson (2003). Consider the following regime,

n → ∞, p = pn → ∞, p/n → c ∈ (0, ∞), (1)

referred to as the Marčenko-Pastur (MP) regime. For R = Ip, Jiang (2004) demonstrated that the empirical spectral distribution (ESD) of Rn, F^{Rn}(x) = (1/p) Σ_{i=1}^p 1{λi(Rn) ≤ x}, converges weakly to the Marchenko-Pastur (MP) law with probability one. The extreme eigenvalues of Rn were studied in Xiao and Zhou (2010) and Bao, Pan and Zhou (2012). Additionally, Gao et al. (2017) established the central limit theorem (CLT) for the linear spectral statistics (LSS) of Rn, i.e., ∫ f(x) dF^{Rn}(x) = Σ_{i=1}^p f(λi^{Rn}) / p, where f(·) is a smooth function. For a general R, the limiting spectral distribution (LSD) of Rn, the limit of the ESD, can be found in Karoui (2009) and the CLT for LSS was studied in Jiang (2019), Mestre and Vallet (2017), Yin, Zheng and Zou (2023), Yin et al. (2022). All these studies are conducted under the MP regime (1), i.e., p/n → c ∈ (0, ∞). However, in the ultrahigh dimensional case where p ≫ n, the eigenvalues of Rn exhibit behaviors markedly different from those in the MP regime. Properties of the eigenvalues of the sample correlation matrix when p ≫ n remain largely unknown in the current literature. Existing studies on the eigenvalue behavior of ultrahigh dimensional matrices focus on the sample covariance matrix, see Bai and Yin (1988), Bao (2015), Chen and Pan (2015), Qiu, Li and Yao (2023), Wang and Paul (2014).

*These authors contributed equally
These works heavily rely on the linear independent component structure and the zero-mean assumption $\boldsymbol{\mu} = \mathbf{0}$, which suggest that the renormalized sample covariance matrix
$$\tilde{\mathbf{S}}_n = \sqrt{\frac{p}{n}}\Big(\frac{1}{p}\mathbf{Y}_0^\top\mathbf{Y}_0 - \mathbf{I}_n\Big), \quad \mathbf{Y}_0 = \mathbf{Y} - \boldsymbol{\mu}\mathbf{1}_n^\top,$$
shares many spectral properties with the Wigner matrix. In contrast, due to the nonlinear dependence introduced by the normalization inherent in the sample correlation matrix and the presence of a nonzero population mean, the techniques and results developed for ultrahigh dimensional covariance matrices cannot be directly extended to the correlation matrix. To fill this gap, we consider the sample correlation matrix under a new regime where $p/n \to \infty$ as $n \to \infty$. In this scenario, unlike the MP regime, most eigenvalues of the matrix $\mathbf{R}_n$ are zero, and all non-zero eigenvalues diverge to infinity. To address this, we renormalize the sample correlation matrix as follows:
$$\mathbf{B}_n = \sqrt{\frac{p}{N}}\Big(\frac{1}{p}\boldsymbol{\Phi}\mathbf{Y}^\top\mathbf{D}_n\mathbf{Y}\boldsymbol{\Phi} - \boldsymbol{\Phi}\Big).$$
$\mathbf{B}_n$ is $n \times n$ and has $n - 1$ non-zero eigenvalues, which connect to the non-zero eigenvalues of $\mathbf{R}_n$ through the following identity:
$$\lambda^{\mathbf{B}_n} = \sqrt{\frac{N}{p}}\,\lambda^{\mathbf{R}_n} - \sqrt{\frac{p}{N}}.$$
This paper investigates the eigenvalues of the renormalized random matrix $\mathbf{B}_n$ when $\mathbf{R} = \mathbf{I}_p$, allowing the dimension $p$ to be comparable to or much larger than the sample size $n$, such that
$$n \to \infty, \quad p = p_n \to \infty, \quad p/n \to c \in (0, \infty].$$
Firstly, we propose a unified LSD of $\mathbf{B}_n$ for both $p/n \to c \in (0, \infty)$ and $p/n \to \infty$. Secondly, we study the properties of $\lambda_1^{\mathbf{B}_n}$, the largest eigenvalue of $\mathbf{B}_n$. Thirdly, we establish the CLT for LSS of $\mathbf{B}_n$ under the unified framework, which covers the results in Gao et al. (2017) as a special case. Last but not least, our theoretical findings are further applied to the independence test for both high and ultrahigh dimensional random vectors. Specifically, we propose a test statistic that remains effective when $p/n \to c \in (0, \infty]$. In this paper, our primary contribution is to establish the asymptotic theory for eigenvalues of the renormalized sample correlation matrix $\mathbf{B}_n$ when $p/n \to \infty$. In addition, we provide a unified representation of the limiting results that hold for both $p/n \to \infty$ and $p/n \to c \in (0, \infty)$.
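The definition of $\mathbf{B}_n$ and the eigenvalue identity relating it to $\mathbf{R}_n$ can be verified numerically. A minimal sketch (Gaussian data; all names are ours), which also shows the one extra zero eigenvalue of $\mathbf{B}_n$ along the $\mathbf{1}_n$ direction:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 300, 50
N = n - 1
Y = rng.standard_normal((p, n))

Phi = np.eye(n) - np.ones((n, n)) / n
S = Y @ Phi @ Y.T / N                          # sample covariance S_n
D = np.diag(1.0 / np.diag(S))                  # D_n = Diag(1/s_11, ..., 1/s_pp)
B = np.sqrt(p / N) * (Phi @ Y.T @ D @ Y @ Phi / p - Phi)   # B_n (n x n)

Dhalf = np.sqrt(D)
Rn = Dhalf @ S @ Dhalf                         # sample correlation matrix (p x p)
lam_R = np.sort(np.linalg.eigvalsh(Rn))[-(n - 1):]   # its n-1 non-zero eigenvalues

lam_B = np.sort(np.linalg.eigvalsh(B))
lam_pred = np.sort(np.concatenate(([0.0], np.sqrt(N / p) * lam_R - np.sqrt(p / N))))
err = np.max(np.abs(lam_B - lam_pred))         # B_n has an extra 0 along 1_n
print(err)
```

The maximum discrepancy is at the level of floating-point round-off, confirming that the spectrum of $\mathbf{B}_n$ is exactly the mapped non-zero spectrum of $\mathbf{R}_n$ together with one zero eigenvalue.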
Theoretical analysis of $\mathbf{B}_n$ in ultrahigh dimensional settings presents significant challenges due to the nonlinear dependence structure introduced by the normalization process, which makes the study of this random matrix more intricate, even when $\mathbf{R} = \mathbf{I}_p$. Under the MP regime (1), Gao et al. (2017), Heiny (2022), Jiang (2004) and Karoui (2009) showed that the correlation matrix $\mathbf{R}_n$ shares the same LSD and properties of the largest eigenvalue as the sample covariance matrix $\mathbf{S}_n$, by using $\|\mathbf{D}_n - \mathbf{I}_p\|\|\mathbf{S}_n\|$ to control the difference between the sample correlation matrix $\mathbf{R}_n$ and the sample covariance matrix $\mathbf{S}_n$. However, in the ultrahigh dimensional setting, since $\|\mathbf{S}_n\|$ tends to infinity, this approach becomes ineffective. Instead, we investigate the convergence of the Stieltjes transform of the ESD of $\mathbf{B}_n$ to obtain the LSD. In addition, we require a unified moment assumption to control the probability that the largest eigenvalue $\lambda_1^{\mathbf{B}_n}$ lies outside the support of the LSD. Moreover, when $p/n \to c \in (0, \infty)$, Pan and Zhou (2008) used $c_n = p/n$ to characterize the CLT for LSS while Yin, Zheng and Zou (2023), Yin et al. (2022) used $c_N = p/N$. In fact, they are equivalent because, in the high-dimensional setting (1), $c_n - c_N = O(1/n)$. However, when $p/n \to \infty$, $c_n - c_N = p/(nN)$ may diverge to infinity. Therefore, we must handle $c_n$ and $c_N$ with extra caution, and we derive a novel determinant equivalent form for the resolvent of the renormalized correlation matrix when $p/n \to \infty$.

The rest of the paper is organized as follows. Section 2 details our main results, including the unified LSD, properties of the largest eigenvalue and the CLT for LSS. Section 3 discusses the application of our CLT to the independence test. Section 4 presents simulations. Technical proofs are detailed in Section 5 and the Supplementary Material.

2. Main Results

2.1. Preliminaries

For any measure $G$ supported on the real line, the Stieltjes transform of $G$ is defined as
$$s_G(z) = \int \frac{1}{x - z}\,\mathrm{d}G(x), \quad z \in \mathbb{C}^+,$$
where $\mathbb{C}^+ = \{z \in \mathbb{C} : \Im(z) > 0\}$ denotes the upper complex plane. As for the LSD of $\mathbf{R}_n$ with $\mathbf{R} = \mathbf{I}_p$ when $p/n \to c \in (0, \infty)$, Jiang (2004) showed that the ESD of $\mathbf{R}_n$ converges with probability 1 to the Marčenko–Pastur law $F_{MP}(x)$, whose density function has the explicit expression
$$f_{MP}(x) = \begin{cases} \dfrac{1}{2\pi x c}\sqrt{(b - x)(x - a)}, & a \leq x \leq b; \\ 0, & \text{otherwise}, \end{cases}$$
and a point mass $1 - 1/c$ at the origin if $c > 1$, where $a = (1 - \sqrt{c})^2$ and $b = (1 + \sqrt{c})^2$. The Stieltjes transform of $F_{MP}(x)$ is
$$s_{MP}(z) = \frac{c - 1 - z + \sqrt{(z - c - 1)^2 - 4c}}{2cz} + \frac{1 - c}{cz}, \quad z \in \mathbb{C}^+. \tag{2}$$

2.2. LSD of $\mathbf{B}_n$

In this section, we provide a unified LSD of the renormalized sample correlation matrix $\mathbf{B}_n$ when $p/n \to c \in (0, \infty]$.

Assumption 2.1. Let $\mathbf{X} = (\mathbf{x}_1, \dots, \mathbf{x}_n)_{p \times n} = (x_{ij})$, which consists of $p \times n$ i.i.d. variables satisfying
$$\mathbb{E}x_{ij} = 0, \quad \mathbb{E}|x_{ij}|^2 = 1, \quad \mathbb{E}|x_{ij}|^4 = \kappa < \infty.$$

Assumption 2.2. The population covariance matrix $\boldsymbol{\Sigma}$ is diagonal.

Assumption 2.3. The dimension $p$ is a function of the sample size $n$ and both tend to infinity such that $p/n \to c \in (0, \infty]$, $p \asymp n^t$, $t \geq 1$.

Theorem 2.4. Under Assumptions 2.1–2.3, with probability one, the ESD of $\mathbf{B}_n$ converges weakly to a (non-random) probability measure $F_c(x)$, which has the density function
$$f_c(x) = \begin{cases} \dfrac{\sqrt{4 - x^2 - c^{-1} + 2xc^{-1/2}}}{2\pi(1 + xc^{-1/2})}, & \text{if } x \in \Big[\dfrac{1}{\sqrt{c}} - 2, \dfrac{1}{\sqrt{c}} + 2\Big], \\ 0, & \text{otherwise}, \end{cases}$$
and has a point mass $1 - c$ at the point $-\sqrt{c}$ if $0 < c \leq 1$.
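As a sanity check on Theorem 2.4, the density $f_c$ plus the point mass (when $0 < c \leq 1$) should integrate to one for every $c$. A short numerical sketch (the helper name `f_c` is ours; integration by the trapezoid rule):

```python
import numpy as np

def f_c(x, c):
    """Density of the LSD F_c on [1/sqrt(c) - 2, 1/sqrt(c) + 2] (Theorem 2.4)."""
    disc = 4.0 - x**2 - 1.0 / c + 2.0 * x / np.sqrt(c)   # = 4 - (x - c^{-1/2})^2
    with np.errstate(divide="ignore", invalid="ignore"):
        dens = np.sqrt(np.clip(disc, 0.0, None)) / (2 * np.pi * (1 + x / np.sqrt(c)))
    return np.where(disc > 0, dens, 0.0)

totals = []
for c in (0.5, 1.0, 4.0, 100.0):
    lo, hi = 1 / np.sqrt(c) - 2, 1 / np.sqrt(c) + 2
    x = np.linspace(lo, hi, 200_001)
    y = f_c(x, c)
    mass = np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))     # trapezoid rule
    atom = 1.0 - c if c <= 1 else 0.0                    # point mass at -sqrt(c)
    totals.append(mass + atom)
print(totals)   # each total is close to 1
```

For $c < 1$ the continuous part carries mass $c$ and the atom carries $1 - c$; for $c \geq 1$ the continuous part alone carries mass 1 (the case $c = 1$ has an integrable edge singularity, which the fine grid handles to a few decimal places).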
The Stieltjes transform of $F_c(x)$ is
$$s_c(z) = \frac{-(z + c^{-1/2}) + \sqrt{(z + 2 - c^{-1/2})(z - 2 - c^{-1/2})}}{2(1 + c^{-1/2}z)}, \quad z \in \mathbb{C}^+. \tag{3}$$
Moreover, the expressions for the moments are
$$\int_{-\infty}^{+\infty} x^k\,\mathrm{d}F_c(x) = \sum_{s=0}^{k}(-1)^s\binom{k}{s}c^{-k/2+s+1}\beta_{k-s} + (1 - c)(-\sqrt{c})^k, \quad k \geq 1,$$
where $\beta_0 = 1$ and $\beta_j = \sum_{r=0}^{j-1}\frac{1}{r+1}\binom{j}{r}\binom{j-1}{r}c^r$ for $j \geq 1$.

Remark 1. Theorem 2.4 provides a unified LSD of $\mathbf{B}_n$ when $p/n \to c \in (0, \infty]$. This result is consistent with the MP law of $\mathbf{R}_n$ when $p/n \to c \in (0, \infty)$ in (2). The following theorem shows the result when $p/n \to \infty$, which, to the best of our knowledge, is presented here for the first time.

Theorem 2.5. Under Assumptions 2.1–2.3 and $p \asymp n^t$, $t > 1$, with probability one, the ESD of $\mathbf{B}_n$ converges weakly to the semicircular law $F(x)$ with density function
$$f(x) = \begin{cases} \dfrac{1}{2\pi}\sqrt{4 - x^2}, & \text{if } x \in [-2, 2], \\ 0, & \text{otherwise}, \end{cases} \tag{4}$$
and Stieltjes transform
$$s(z) = \frac{-z + \sqrt{z^2 - 4}}{2}, \quad z \in \mathbb{C}^+.$$
Moreover, the expressions for the moments are
$$\int_{-\infty}^{\infty} x^k \cdot \frac{1}{2\pi}\sqrt{4 - x^2}\,\mathrm{d}x = \begin{cases} \dfrac{1}{k/2 + 1}\dbinom{k}{k/2}, & \text{if } k \text{ is even}, \\ 0, & \text{if } k \text{ is odd}. \end{cases}$$

2.3. The largest eigenvalue of $\mathbf{B}_n$

In this section, we study the properties of $\lambda_1^{\mathbf{B}_n}$, the largest eigenvalue of $\mathbf{B}_n$, when $p \asymp n^t$, $t \geq 1$.

Assumption 2.1*. Let $\mathbf{X} = (\mathbf{x}_1, \dots, \mathbf{x}_n)_{p \times n} = (x_{ij})$, which consists of $p \times n$ i.i.d. variables satisfying
$$\mathbb{E}x_{ij} = 0, \quad \mathbb{E}|x_{ij}|^2 = 1, \quad \mathbb{E}|x_{ij}|^4 = \kappa, \quad \mathbb{E}|x_{ij}|^{2(t+1)} < \infty.$$

Remark 2. Compared with the assumptions in the literature (Bao, Pan and Zhou, 2012; Gao et al., 2017; Jiang, 2004; Yin, Zheng and Zou, 2023; Yin et al., 2022) where $p/n \to c \in (0, \infty)$, Assumption 2.1* is not stronger.
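A quick simulation illustrates Theorem 2.5 (a sketch assuming Gaussian data; we work through the $n \times n$ Gram matrix of the standardized rows to avoid forming the $p \times p$ matrix). For $p = n^2$ the spectrum of $\mathbf{B}_n$ matches the semicircle moments, and its largest eigenvalue sits near the right edge:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 150, 22_500                 # ultrahigh-dimensional: p = n^2
N = n - 1

Y = rng.standard_normal((p, n))
Yc = Y - Y.mean(axis=1, keepdims=True)              # row-wise centering (Phi)
Z = Yc / np.linalg.norm(Yc, axis=1, keepdims=True)  # unit rows r_k

# the n-1 non-zero eigenvalues of R_n via the n x n Gram matrix Z^T Z,
# then the eigenvalue map to B_n
lam_R = np.sort(np.linalg.eigvalsh(Z.T @ Z))[1:]
lam_B = np.sqrt(N / p) * lam_R - np.sqrt(p / N)

m2, m4 = np.mean(lam_B**2), np.mean(lam_B**4)
edge = lam_B.max()
print(m2, m4, edge)   # near the semicircle moments 1 and 2; edge near 2
```

The second and fourth moments approach the Catalan numbers 1 and 2, and the largest eigenvalue is close to $2 + 1/\sqrt{c_N}$, which tends to 2 here.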
In fact, when $t = 1$, the moment condition $\mathbb{E}|x_{ij}|^{2(t+1)} < \infty$ reduces to a finite fourth moment, which coincides with the standard assumption in random matrix theory.

Theorem 2.6. Under Assumptions 2.1*, 2.2 and 2.3, we have
(i) $\lambda_1(\mathbf{B}_n) \to 2 + \frac{1}{\sqrt{c}}$ a.s.;
(ii) for any $\epsilon > 0$, $\ell > 0$, if $|x_{ij}| \leq \delta_n(np)^{1/(2t+2)}$, where $\delta_n \to 0$ and $\delta_n(np)^{1/(2t+2)} \to \infty$ as $n \to \infty$, then
$$\mathbb{P}\Big(\lambda_1(\mathbf{B}_n) \geq 2 + \frac{1}{\sqrt{c}} + \epsilon\Big) = o\big(n^{-\ell}\big).$$

Remark 3. Theorem 2.6 is consistent with the results on $\lambda_1^{\mathbf{R}_n}$ when $p/n \to c \in (0, \infty)$ in Theorem 1.1 of Jiang (2004) and Lemma 7 of Gao
et al. (2017).

2.4. CLT for LSS of $\mathbf{B}_n$

In this section, we focus on linear spectral statistics of $\mathbf{B}_n$, i.e. $\frac{1}{n}\sum_{i=1}^{n} f(\lambda_i)$, where $f$ is an analytic function on $[0, \infty)$. Since $F^{\mathbf{B}_n}$ converges to $F_c$ almost surely, we have
$$\frac{1}{n}\sum_{i=1}^{n} f(\lambda_i) \to \int f(x)\,\mathrm{d}F_c(x).$$
We explore the second order fluctuation of $\frac{1}{n}\sum_{i=1}^{n} f(\lambda_i)$, describing how such LSS converge to their first order limit. Consider a renormalized functional:
$$G_n(f) = n\int_{-\infty}^{+\infty} f(x)\,\mathrm{d}\big(F^{\mathbf{B}_n}(x) - F_{c_N}(x)\big) + \frac{1}{2\pi i}\oint_{\mathcal{C}} f(z)\,\Theta_n(s_{c_N}(z))\,\mathrm{d}z,$$
where $F_{c_N}(x)$ and $s_{c_N}(z)$ serve as finite-sample proxies for $F_c(x)$ and $s_c(z)$ in (3), obtained by substituting $c$ with $c_N = p/N$,
$$\Theta_n(s_{c_N}(z)) = 2c_N^{-\frac{1}{2}}g_{c_N}^2(z)h_{c_N}(z)s_{c_N}^2(z)d_{c_N}(z) - c_N^{-\frac{1}{2}}g_{c_N}^2(z)h_{c_N}(z)s'_{c_N}(z)d_{c_N}(z) + \frac{g_{c_N}(z)h_{c_N}(z)d_{c_N}(z)}{\sqrt{c_N} + z} - \frac{\sqrt{c_N}}{z(\sqrt{c_N} + z)} + \frac{n}{p}g_{c_N}(z)h_{c_N}(z)s_{c_N}(z)d_{c_N}(z) - c_N^{\frac{3}{2}} - c_N^{-\frac{1}{2}}s_{c_N}(z)l_{c_N}^{-1}(z)\big(c_N + \sqrt{c_N}z\big), \tag{5}$$
and
$$h_{c_N}(z) = \frac{1}{1 + \frac{1}{\sqrt{c_N}}s_{c_N}(z) + \frac{1 - c_n}{c_N + \sqrt{c_N}z}}, \quad g_{c_N}(z) = -\frac{c_N + \sqrt{c_N}z}{c_n\Big(\frac{s_{c_N}(z)}{\sqrt{c_N}} + \frac{1 - c_n}{c_N + \sqrt{c_N}z}\Big)},$$
$$d_{c_N}(z) = \frac{-c_n h_{c_N}^{-1}(z)s_{c_N}(z)}{\sqrt{c_N} - l_{c_N}^{-1}(z)s_{c_N}(z)\big(c_N + \sqrt{c_N}z\big)}, \quad l_{c_N}(z) = \frac{h_{c_N}(z)}{c_n}\Big(1 + \frac{\sqrt{c_N}\,s_{c_N}(z)}{c_n}\Big) + \frac{c_n(1 - c_n)}{c_N + \sqrt{c_N}z} + (c_n - 1)\frac{s_{c_N}(z)}{\sqrt{c_N}}.$$
Here the contour $\oint_{\mathcal{C}}$ is closed and taken in the positive direction in the complex plane, enclosing the support of $F_c(x)$.
The main result is stated in the following theorem.

Theorem 2.7. Under Assumptions 2.1*, 2.2 and 2.3, let $f_1, f_2, \dots, f_k$ be functions on $\mathbb{R}$, analytic on an open interval containing $\big[-2 + \frac{1}{\sqrt{c_N}}, 2 + \frac{1}{\sqrt{c_N}}\big]$, where $c_N = p/N$. Then the random vector $(G_n(f_1), \dots, G_n(f_k))$ forms a tight sequence in $n$ and converges weakly to a centered Gaussian vector $(X_{f_1}, \dots, X_{f_k})$ with the covariance function
$$\mathrm{Cov}(X_f, X_g) = -\frac{1}{4\pi^2}\oint_{\mathcal{C}_1}\oint_{\mathcal{C}_2} f(z_1)g(z_2)\,\mathrm{Cov}(M(z_1), M(z_2))\,\mathrm{d}z_1\mathrm{d}z_2,$$
where
$$\mathrm{Cov}(M(z_1), M(z_2)) = 2\bigg[\frac{s'_c(z_1)s'_c(z_2)}{\{s_c(z_1) - s_c(z_2)\}^2} - \frac{1}{(z_1 - z_2)^2}\bigg] - \frac{2s'_c(z_1)s'_c(z_2)}{\big(1 + s_c(z_1)/\sqrt{c}\big)^2\big(1 + s_c(z_2)/\sqrt{c}\big)^2},$$
and $s_c(z)$ is defined in (3).

Remark 4. Theorem 2.7 establishes a unified CLT for LSS of $\mathbf{B}_n$ when $p/n \to c \in (0, \infty]$. This result is consistent with the results for $\mathbf{R}_n$ when $p/n \to c \in (0, \infty)$ in Theorem 1 of Gao et al. (2017) and Theorem 3.2 of Yin, Zheng and Zou (2023). In particular, when $p/n \to \infty$,
$$G_n(f) = n\int_{-\infty}^{+\infty} f(x)\,\mathrm{d}\big(F^{\mathbf{B}_n}(x) - F_{c_N}(x)\big) + \frac{1}{2\pi i}\oint_{\mathcal{C}} f(z)\bigg[\frac{s^3(z) + s(z) - s'(z)s(z)}{s^2(z) - 1} - \frac{1}{z}\bigg]\mathrm{d}z,$$
where $s(z) = \frac{-z + \sqrt{z^2 - 4}}{2}$ is the Stieltjes transform of the semicircle law. Then we have the following result.

Theorem 2.8.
With the same notations and assumptions given in Theorem 2.7 and with $p \asymp n^t$, $t > 1$, the random vector $(G_n(f_1), \dots, G_n(f_k))$ forms a tight sequence in $n$ and converges weakly to a centered Gaussian vector $(X_{f_1}, \dots, X_{f_k})$ with the covariance function
$$\mathrm{Cov}(X_f, X_g) = -\frac{1}{4\pi^2}\oint_{\mathcal{C}_1}\oint_{\mathcal{C}_2} f(z_1)g(z_2)\,\mathrm{Cov}(M(z_1), M(z_2))\,\mathrm{d}z_1\mathrm{d}z_2,$$
where
$$\mathrm{Cov}(M(z_1), M(z_2)) = 2\bigg[\frac{s'(z_1)s'(z_2)}{\{s(z_1) - s(z_2)\}^2} - \frac{1}{(z_1 - z_2)^2}\bigg] - 2s'(z_1)s'(z_2). \tag{6}$$

Remark 5. Theorem 2.8 establishes a novel CLT for LSS of the renormalized sample correlation matrix $\mathbf{B}_n$ in the ultrahigh-dimensional regime where $p/n \to \infty$, which constitutes the main contribution of this paper. The proof technique is different from that of the classical case where $p/n \to c \in (0, \infty)$. In particular, we develop a novel determinant equivalent form for the resolvent of the renormalized correlation matrix in the ultrahigh dimensional context (see the proof of Lemma 5.7). Theorem 2.7 provides a unified formulation of the limiting results for both $p/n \to c \in (0, \infty)$ and $p/n \to \infty$.

Corollary 2.9. With the same notations and assumptions given in Theorem 2.7, consider the three analytic functions $f_2(x) = x^2$, $f_3(x) = x^3$, $f_4(x) = x^4$. We have
$$G_n(f_2) = \mathrm{tr}\big(\mathbf{B}_n^2\big) - n + 2 \xrightarrow{d} \mathcal{N}(0, 4),$$
$$G_n(f_3) = \mathrm{tr}\big(\mathbf{B}_n^3\big) - \frac{n - 4}{\sqrt{c_N}} \xrightarrow{d} \mathcal{N}\Big(0,\, 6 + \frac{36}{c}\Big),$$
$$G_n(f_4) = \mathrm{tr}\big(\mathbf{B}_n^4\big) - \Big(\frac{(n-1)^2}{p} + 2n - 5 - \frac{6}{c_N}\Big) \xrightarrow{d} \mathcal{N}\Big(0,\, 72 + \frac{288}{c} + \frac{144}{c^2}\Big).$$

3. Application of CLTs to hypothesis tests

In this section, we provide a statistical application of LSS of the renormalized sample correlation matrix $\mathbf{B}_n$: the independence test for high and ultrahigh dimensional random vectors, namely the hypothesis
$$H_0: \mathbf{R} = \mathbf{I}_p \quad \text{vs.} \quad H_1: \mathbf{R} \neq \mathbf{I}_p.$$
We aim to propose a test statistic that works when $p/n \to c \in (0, \infty]$.
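As a quick Monte Carlo illustration of the first statement in Corollary 2.9 (a sketch under Gaussian data, not the paper's code), the statistic $\mathrm{tr}(\mathbf{B}_n^2) - n + 2$ should be approximately $\mathcal{N}(0, 4)$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 100, 2000, 200
N = n - 1
G = np.empty(reps)
for r in range(reps):
    Y = rng.standard_normal((p, n))
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Z = Yc / np.linalg.norm(Yc, axis=1, keepdims=True)
    lam_R = np.sort(np.linalg.eigvalsh(Z.T @ Z))[1:]        # non-zero eigenvalues of R_n
    lam_B = np.sqrt(N / p) * lam_R - np.sqrt(p / N)         # eigenvalues of B_n
    G[r] = np.sum(lam_B**2) - n + 2                         # G_n(f_2) = tr(B_n^2) - n + 2
print(G.mean(), G.var())   # roughly 0 and 4
```

The empirical mean and variance settle around 0 and 4 as $n$ grows, in line with Table 1 of the simulation section.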
Motivated by the Frobenius norm of $\mathbf{R} - \mathbf{I}_p$ used in Schott (2005), Gao et al. (2017) and Yin, Zheng and Zou (2023), and the relationship
$$\mathrm{tr}(\mathbf{R}_n - \mathbf{I}_p)^2 = \frac{p}{N}\big(\mathrm{tr}\,\mathbf{B}_n^2 + p\big) - p,$$
we consider the following test statistic constructed from the renormalized correlation matrix $\mathbf{B}_n$,
$$T := \mathrm{tr}\,\mathbf{B}_n^2.$$
We reject $H_0$ when $T$ is too large. By taking $f(x) = x^2$ in Theorem 2.7, the limiting null distribution of $T$ is given in the following theorem.

Theorem 3.1. Suppose that Assumptions 2.1*, 2.2 and 2.3 hold. Under $H_0$, we have, as $n \to \infty$,
$$\frac{1}{2}(T - n + 2) \xrightarrow{D} \mathcal{N}(0, 1).$$
Theorem 3.1 establishes the unified CLT for $T$ under $H_0$ when $p/n \to c \in (0, \infty]$. Based on this, we employ the following procedure for testing the null hypothesis:
$$\text{Reject } H_0 \text{ if } \frac{1}{2}(T - n + 2) > z_\alpha,$$
where $z_\alpha$ is the upper-$\alpha$ quantile of the standard normal distribution at nominal level $\alpha$.

4. Simulations

In this section, we implement some simulation studies to examine (1) the LSD of the renormalized sample correlation matrix $\mathbf{B}_n$; (2) finite-sample properties of some LSS of $\mathbf{B}_n$, by comparing their empirical means and variances with the theoretical limiting values; (3) the finite-sample performance of the independence test.

4.1. Limiting spectral distribution

In this section, simulation experiments are conducted to verify the LSD of the renormalized sample correlation matrix $\mathbf{B}_n$, as stated in Theorem 2.4. We generate data $y_{ij}$ from three populations, drawing histograms of eigenvalues of $\mathbf{B}_n$ and comparing them with the theoretical densities. Specifically, three types of distributions for $y_{ij}$ are considered:
(1) $y_{ij}$ follows the standard normal distribution;
(2) $y_{ij}$ follows the exponential distribution with rate parameter 2;
(3) $y_{ij}$ follows the Poisson distribution with parameter 1.
The dimensional settings are $(p, n) = (10^4, 5000)$, $(200^2, 200)$, $(200^{2.5}, 200)$. We display histograms of eigenvalues of $\mathbf{B}_n$ generated by the three populations under various $(p, n)$ in Figure 1. All histograms align with their LSD, affirming the accuracy of our theoretical results.

[Figure 1: Histograms of sample eigenvalues of $\mathbf{B}_n$, fitted by the LSD (blue solid curves), for the three populations $\mathcal{N}(0,1)$, Exponential(2) and Poisson(1). First row: $(p, n) = (10^4, 5000)$; second row: $(p, n) = (200^2, 200)$; third row: $(p, n) = (200^{2.5}, 200)$.]

4.2. CLT for LSS

In this section, we implement some simulation studies to examine finite-sample properties of some LSS of $\mathbf{B}_n$, by comparing their empirical means and variances with the theoretical limiting values, as stated in Theorem 2.7. In the following, we present the numerical simulation of the CLT for LSS. Let
$$\overline{G}_n(f_r) = \frac{G_n(f_r)}{\sqrt{\mathrm{Var}(X_{f_r})}}.$$
First, we examine $\overline{G}_n(f_r) \xrightarrow{d} \mathcal{N}(0, 1)$, $f_r = x^r$ ($r = 2, 3, 4$), by Theorem 2.7. Two types of data distributions for $y_{ij}$ are considered:
(1) Gaussian data: $y_{ij} \sim \mathcal{N}(0, 1)$ i.i.d. for $1 \leq i \leq p$, $1 \leq j \leq n$;
(2) Non-Gaussian data: $y_{ij} \sim \chi^2(2)$ i.i.d. for $1 \leq i \leq p$, $1 \leq j \leq n$.
Empirical means and variances of $\overline{G}_n(f_r)$, $f_r = x^r$, $r = 2, 3, 4$, are calculated for various $c_n$. The dimensional settings are $(p, n) = (1000, 500)$, $(300^2, 300)$, $(500^2, 500)$, $(100^{2.5}, 100)$ with $c_n = 2, 300, 500, 1000$. As shown in Table 1, the empirical means and variances of $\overline{G}_n(f_r)$ closely match their theoretical limits 0 and 1 under all scenarios.

4.3. Hypothesis test

Numerical simulations are conducted to find the empirical size and power of our proposed test statistic.
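Before the full study, the proposed procedure itself can be sketched in a few lines (Gaussian data under $H_0$; the function names are ours, and the quantile $z_{0.05}$ is hardcoded):

```python
import numpy as np

Z_ALPHA_05 = 1.6449   # upper 5% quantile of N(0,1)

def independence_test_stat(Y):
    """T = tr(B_n^2) computed from a p x n data matrix Y (helper name is ours)."""
    p, n = Y.shape
    N = n - 1
    Yc = Y - Y.mean(axis=1, keepdims=True)              # centering (Phi)
    Z = Yc / np.linalg.norm(Yc, axis=1, keepdims=True)  # standardized rows
    lam_R = np.sort(np.linalg.eigvalsh(Z.T @ Z))[1:]    # non-zero eigenvalues of R_n
    lam_B = np.sqrt(N / p) * lam_R - np.sqrt(p / N)
    return np.sum(lam_B**2)

def reject_H0(Y, z_alpha=Z_ALPHA_05):
    p, n = Y.shape
    T = independence_test_stat(Y)
    return 0.5 * (T - n + 2) > z_alpha                  # rule from Theorem 3.1

# rough empirical size under H0 (independent components)
rng = np.random.default_rng(4)
rejections = [reject_H0(rng.standard_normal((1000, 80))) for _ in range(200)]
print(np.mean(rejections))    # near the nominal level 0.05
```

With independent rows the rejection frequency hovers around the nominal 5% level, consistent with the size columns of Table 2.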
The random variables $(x_{ij})$ are generated from:
(1) Gaussian data: $x_{ij} \sim \mathcal{N}(0, 1)$ i.i.d. for $1 \leq i \leq p$, $1 \leq j \leq n$;
(2) Non-Gaussian data: $x_{ij} \sim (\chi^2(2) - 2)/2$ i.i.d. for $1 \leq i \leq p$, $1 \leq j \leq n$.
And we consider the following two settings of $\boldsymbol{\Sigma}$:
• $\boldsymbol{\Sigma}_1 = (s_{i,j,\theta})_{p \times p}$, $s_{i,j,\theta} = \delta_{\{i=j\}} + \delta_{\{i \neq j\}}\theta^{|i-j|}$, $i, j = 1, \dots, p$;
• $\boldsymbol{\Sigma}_2 = (s_{i,j,\eta})_{p \times p}$, $s_{i,j,\eta} = \delta_{\{i=j\}} + \delta_{\{i \neq j\}}\eta$, $i, j = 1, \dots, p$,
where $\theta, \eta$ are two parameters satisfying $|\theta| < 1$, $0 < \eta < 1$. The parameter settings are as follows:
• $\theta = \eta = 0$ to evaluate the empirical size;
• $\theta = 0.20, 0.25$ to evaluate the empirical power under $\boldsymbol{\Sigma}_1$;
• $\eta = 0.007, 0.011$ to evaluate the empirical power under $\boldsymbol{\Sigma}_2$.

Table 1. Empirical mean and variance of $\overline{G}_n(f_i)$, $i = 2, 3, 4$, from 5000 replications with $c_n = 2, 300, 500, 1000$. Theoretical mean and variance are 0 and 1, respectively.

                         G_n(f2)             G_n(f3)             G_n(f4)
  c_n                mean      var       mean      var       mean      var
  Gaussian data
  2                0.0090    1.0079    -0.0103   0.9737     0.0040   0.9793
  300              0.0185    0.9974    -0.0919   0.9777     0.0037   0.9785
  500              0.0143    0.9837    -0.0821   0.9914    -0.0076   0.9639
  1000             0.0144    0.9889    -0.0465   0.9712    -0.0035   0.9896
  Non-Gaussian data
  2                0.0284    1.1201    -0.0122   1.0672     0.0005   1.0342
  300             -0.0357    1.0326    -0.0977   1.0390     0.0045   0.9974
  500             -0.0066    1.0240    -0.0694   1.0112    -0.0006   1.0179
  1000             0.0020    1.0840    -0.0582   0.9785     0.0026   1.0432

Table 2 reports the empirical size and power for different $c_n$. The dimensional settings are $(p, n) = (1200, 600)$, $(50^2, 50)$, $(100^2, 100)$, $(200^2, 200)$ with $c_n = 2, 50, 100, 200$, and the nominal significance level is fixed at $\alpha = 0.05$. This shows our test statistic is robust in both high and ultrahigh dimensional settings and performs stably for Gaussian and non-Gaussian data.

Table 2. Empirical size and power from 5000 replications for Gaussian and Non-Gaussian data with different $c_n$.

                    Size       Power of Σ1           Power of Σ2
  c_n             θ=η=0      θ=0.20   θ=0.25      η=0.007   η=0.011
  Gaussian data
  2               0.0528     0.9970   1           0.5954    0.9866
  50              0.044      0.608    0.902       0.7688    0.9878
  100             0.0456     0.9884   1           0.9999    1
  200             0.0512     1        1           1         1
  Non-Gaussian data
  2               0.0498     0.9964   1           0.5908    0.9814
  50              0.06       0.6278   0.922       0.7372    0.98
  100             0.0542     0.9878   1           0.9997    1
  200             0.055      1        1           1         1

5. Proofs

5.1. Notations

The following notations are used throughout the proofs.
Let $\mathbf{Y}^\top = (\tilde{\mathbf{y}}_1, \dots, \tilde{\mathbf{y}}_p)$; then $\mathbf{B}_n$ can be written as
$$\mathbf{B}_n = \sqrt{\frac{p}{n-1}}\Big(\frac{n-1}{p}\tilde{\mathbf{Y}}_n\tilde{\mathbf{Y}}_n^\top - \boldsymbol{\Phi}\Big), \quad \tilde{\mathbf{Y}}_n = \Big(\frac{\boldsymbol{\Phi}\tilde{\mathbf{y}}_1}{\|\boldsymbol{\Phi}\tilde{\mathbf{y}}_1\|}, \frac{\boldsymbol{\Phi}\tilde{\mathbf{y}}_2}{\|\boldsymbol{\Phi}\tilde{\mathbf{y}}_2\|}, \dots, \frac{\boldsymbol{\Phi}\tilde{\mathbf{y}}_p}{\|\boldsymbol{\Phi}\tilde{\mathbf{y}}_p\|}\Big).$$
Denote
$$\mathbf{A}_n = \sqrt{\frac{p}{N}}\Big(\frac{N}{p}\mathbf{R}_n - \mathbf{I}_n\Big), \quad \mathbf{R}_n = \tilde{\mathbf{Y}}_n\tilde{\mathbf{Y}}_n^\top, \quad N = n - 1, \quad c_n = p/n, \quad c_N = p/N,$$
$$s_n(z) = \frac{1}{n}\mathrm{tr}(\mathbf{A}_n - z\mathbf{I}_n)^{-1}, \quad s_n^{\mathbf{B}_n}(z) = \frac{1}{n}\mathrm{tr}(\mathbf{B}_n - z\mathbf{I}_n)^{-1}, \quad z \in \mathbb{C}^+,$$
$$\tilde{\mathbf{Y}}_n = (\mathbf{r}_1, \dots, \mathbf{r}_p), \quad \tilde{\mathbf{Y}}_k = (\mathbf{r}_1, \dots, \mathbf{r}_{k-1}, \mathbf{r}_{k+1}, \dots, \mathbf{r}_p),$$
$$\mathbf{R}_{nk} = \tilde{\mathbf{Y}}_k\tilde{\mathbf{Y}}_k^\top, \quad \mathbf{A}_{nk} = \sqrt{\frac{p}{N}}\Big(\frac{N}{p}\mathbf{R}_{nk} - \mathbf{I}_n\Big), \quad \mathbf{A}_{nkj} = \mathbf{A}_{nk} - \sqrt{\frac{N}{p}}\mathbf{r}_j\mathbf{r}_j^\top,$$
$$\mathbf{Q}(z) = \mathbf{A}_n - z\mathbf{I}_n, \quad \mathbf{Q}_k(z) = \mathbf{A}_{nk} - z\mathbf{I}_n, \quad \mathbf{Q}_{kj}(z) = \mathbf{A}_{nkj} - z\mathbf{I}_n,$$
$$\beta_k(z) = \frac{1}{\sqrt{c_N} + \mathbf{r}_k^\top\mathbf{Q}_k^{-1}(z)\mathbf{r}_k}, \quad \tilde{\beta}_k(z) = \frac{1}{\sqrt{c_N} + \mathrm{tr}\,\mathbf{Q}_k^{-1}(z)/n}, \quad b_n(z) = \frac{1}{\sqrt{c_N} + \mathbb{E}\,\mathrm{tr}\,\mathbf{Q}_k^{-1}(z)/n},$$
$$b_1(z) = \frac{1}{\sqrt{c_N} + \mathbb{E}\,\mathrm{tr}\,\mathbf{Q}_{12}^{-1}(z)/n}, \quad \gamma_k(z) = \mathbf{r}_k^\top\mathbf{Q}_k^{-1}(z)\mathbf{r}_k - \mathbb{E}\frac{1}{n}\mathrm{tr}\,\mathbf{Q}_k^{-1}(z), \quad \beta_{kj}(z) = \frac{1}{\sqrt{c_N} + \mathbf{r}_k^\top\mathbf{Q}_{kj}^{-1}(z)\mathbf{r}_k},$$
$$\varepsilon_k(z) = \mathbf{r}_k^\top\mathbf{Q}_k^{-1}(z)\mathbf{r}_k - \frac{1}{n}\mathrm{tr}\,\mathbf{Q}_k^{-1}(z), \quad \delta_k(z) = \mathbf{r}_k^\top\mathbf{Q}_k^{-2}(z)\mathbf{r}_k - \frac{1}{n}\mathrm{tr}\,\mathbf{Q}_k^{-2}(z).$$
We denote by $K$ some constant which may take different values at different appearances. By the results in Bai and Silverstein (2004), we have
$$\|\mathbf{Q}_k(z)^{-1}\| \leq K, \quad \big|\mathrm{tr}\big(\mathbf{Q}^{-1}(z) - \mathbf{Q}_k^{-1}(z)\big)\mathbf{M}\big| \leq \|\mathbf{M}\|c_N^{-1/2}, \quad |\beta_k(z)| \leq Kc_N^{-1/2}, \quad |\tilde{\beta}_k(z)| \leq Kc_N^{-1/2}, \quad |b_n(z)| \leq Kc_N^{-1/2}.$$
And straightforward calculation gives
$$\mathbf{Q}^{-1}(z) - \mathbf{Q}_k^{-1}(z) = -\mathbf{Q}_k^{-1}(z)\mathbf{r}_k\mathbf{r}_k^\top\mathbf{Q}_k^{-1}(z)\beta_k(z),$$
$$\beta_k(z) = b_n(z) - b_n(z)\gamma_k(z)\beta_k(z) = \tilde{\beta}_k(z) - \tilde{\beta}_k(z)\varepsilon_k(z)\beta_k(z). \tag{7}$$

5.2.
Proof of Theorem 2.5

Since
$$s_n^{\mathbf{B}_n}(z) = s_n(z) - \frac{1}{n}\cdot\frac{\sqrt{c_N}}{z(\sqrt{c_N} + z)} \tag{8}$$
for all $z \in \mathbb{C}^+$, the difference between $s_n^{\mathbf{B}_n}(z)$ and $s_n(z)$ is a deterministic term of order $O(1/n)$. Therefore, to show that $s_n^{\mathbf{B}_n}(z) \to s(z)$ almost surely, it suffices to prove that $s_n(z) \to s(z)$ almost surely. Here $s(z)$ is the Stieltjes transform of the semicircular law (4). We now proceed with the proof in the following four steps:

Step 1. Truncation, centralization, and rescaling.
Step 2. For any fixed $z \in \mathbb{C}^+ = \{z : \Im(z) > 0\}$, $s_n(z) - \mathbb{E}s_n(z) \to 0$ a.s..
Step 3. For any fixed $z \in \mathbb{C}^+$, $\mathbb{E}s_n(z) \to s(z)$.
Step 4. Outside a null set, $s_n(z) \to s(z)$ for every $z \in \mathbb{C}^+$.

It then follows that, except for this null set, $F^{\mathbf{B}_n} \to F$ weakly, where $F$ is the distribution function of the semicircular law in (4).

Step 1. Truncation, centralization, and rescaling. By the moment condition $\mathbb{E}|x_{11}|^4 < \infty$, one may choose a positive sequence $\{\Delta_n\}$ such that
$$\Delta_n^{-4}\mathbb{E}|x_{11}|^4 I\{|x_{11}| \geq \Delta_n\sqrt[4]{np}\} \to 0, \quad \Delta_n \to 0, \quad \Delta_n\sqrt[4]{np} \to \infty.$$
Recall $\mathbf{X} = (\mathbf{x}_1, \dots, \mathbf{x}_n)_{p \times n} = (x_{ij})$. Then we can write $\mathbf{B}_n = \mathbf{B}_n(x_{ij}) = \boldsymbol{\Phi}\mathbf{B}_{n0}\boldsymbol{\Phi}$, where
$$\mathbf{B}_{n0} = \sqrt{\frac{p}{N}}\Big(\frac{1}{p}\mathbf{X}^\top\mathbf{D}_n\mathbf{X} - \mathbf{I}_n\Big), \quad \mathbf{D}_n = \mathrm{Diag}\Big(\frac{1}{s_{11}}, \frac{1}{s_{22}}, \dots, \frac{1}{s_{pp}}\Big), \quad s_{kk} = \frac{1}{N}\mathbf{e}_k^\top\mathbf{X}\boldsymbol{\Phi}\mathbf{X}^\top\mathbf{e}_k, \quad k = 1, \dots, p.$$
Let $\hat{\mathbf{B}}_n = \hat{\mathbf{B}}_n(\hat{x}_{ij})$, $\check{\mathbf{B}}_n = \check{\mathbf{B}}_n(\check{x}_{ij})$ and $\tilde{\mathbf{B}}_n = \tilde{\mathbf{B}}_n(\tilde{x}_{ij})$ be defined similarly to $\mathbf{B}_n$ with $x_{ij}$ replaced by $\hat{x}_{ij}$, $\check{x}_{ij}$ and $\tilde{x}_{ij}$ respectively, where $\hat{x}_{ij} = x_{ij}I\{|x_{ij}| \leq \Delta_n\sqrt[4]{np}\}$, $\check{x}_{ij} = \hat{x}_{ij} - \mathbb{E}\hat{x}_{ij}$, and $\tilde{x}_{ij} = \check{x}_{ij}/\sigma_n$ with $\sigma_n^2 = \mathbb{E}|\hat{x}_{ij} - \mathbb{E}\hat{x}_{ij}|^2$. Similarly define $\hat{\mathbf{D}}_n, \check{\mathbf{D}}_n, \tilde{\mathbf{D}}_n$ and $\hat{\mathbf{B}}_{n0}, \check{\mathbf{B}}_{n0}, \tilde{\mathbf{B}}_{n0}$. Note that $\hat{\mathbf{D}}_n = \check{\mathbf{D}}_n$ and $\check{\mathbf{B}}_{n0} = \tilde{\mathbf{B}}_{n0}$. Then by Theorems A.43–A.44 in Bai and Silverstein (2010) and Bernstein's inequality, we have
$$\|F^{\mathbf{B}_n} - F^{\mathbf{B}_{n0}}\| \leq \frac{1}{n}\mathrm{rank}(\mathbf{B}_n - \mathbf{B}_{n0}) \leq \frac{K}{n},$$
$$\|F^{\mathbf{B}_{n0}} - F^{\hat{\mathbf{B}}_{n0}}\| \leq \frac{1}{n}\mathrm{rank}\big(\mathbf{X}^\top\mathbf{D}_n^{1/2} - \hat{\mathbf{X}}^\top\hat{\mathbf{D}}_n^{1/2}\big) \leq \frac{1}{n}\sum_{i=1}^{p}\sum_{j=1}^{n} I\{|x_{ij}| \geq \Delta_n\sqrt[4]{np}\} \to 0 \quad a.s.,$$
$$\|F^{\hat{\mathbf{B}}_{n0}} - F^{\tilde{\mathbf{B}}_{n0}}\| = \|F^{\hat{\mathbf{B}}_{n0}} - F^{\check{\mathbf{B}}_{n0}}\| \leq \frac{1}{n}\mathrm{rank}\big(\hat{\mathbf{X}}^\top\hat{\mathbf{D}}_n^{1/2} - \check{\mathbf{X}}^\top\check{\mathbf{D}}_n^{1/2}\big) = \frac{1}{n}\mathrm{rank}\big(\mathbb{E}\hat{\mathbf{X}}^\top\hat{\mathbf{D}}_n^{1/2}\big) = \frac{1}{n}.$$
Thus, in the rest of the proof of Theorem 2.4, we assume
$$|x_{ij}| \leq \Delta_n\sqrt[4]{np}, \quad \mathbb{E}x_{ij} = 0, \quad \mathbb{E}|x_{ij}|^2 = 1, \quad \mathbb{E}|x_{ij}|^4 = \kappa + o(1) < \infty.$$

Step 2. Almost sure convergence of the random part. Let $\mathbb{E}_0(\cdot)$ denote expectation and $\mathbb{E}_j(\cdot)$ denote conditional expectation with respect to the $\sigma$-field generated by $\mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_j$, $j = 1, 2, \dots, p$. By Lemma 2.7 in Bai and Silverstein (1998) and Lemma 5 in Gao et al. (2017), we can obtain, for $q > 2$,
$$\mathbb{E}|\varepsilon_k(z)|^q \leq K\big(n^{-q/2} + n^{-q/2}p^{q/2-1}\Delta_n^{2q-4}\big), \quad \mathbb{E}|\delta_k(z)|^q \leq K\big(n^{-q/2} + n^{-q/2}p^{q/2-1}\Delta_n^{2q-4}\big),$$
$$\mathbb{E}\big|\tilde{\beta}_k(z) - b_n(z)\big|^q = O\big(n^{q/2}p^{-q}\big), \quad |b_n(z) - b_1(z)| = O\big(n^{1/2}p^{-3/2}\big), \quad \mathbb{E}|b_n(z) - \mathbb{E}\beta_k(z)| = O\big(np^{-2}\big),$$
$$\mathbb{E}|\gamma_k(z) - \varepsilon_k(z)|^q = O\big(n^{-q/2}\big), \quad \mathbb{E}\Big|\frac{1}{n}\mathrm{tr}\big(\mathbf{Q}^{-1}(z)\mathbf{M}\big) - \mathbb{E}\frac{1}{n}\mathrm{tr}\big(\mathbf{Q}^{-1}(z)\mathbf{M}\big)\Big|^q = O\big(n^{-q/2}\big). \tag{9}$$
Write
$$s_n(z) - \mathbb{E}s_n(z) = -\frac{1}{n}\sum_{j=1}^{p}\big(\mathbb{E}_j - \mathbb{E}_{j-1}\big)\beta_j(z)\mathbf{r}_j^\top\mathbf{Q}_j^{-2}(z)\mathbf{r}_j.$$
By using Lemma 2.1 in Bai and Silverstein (2004), we have
$$\mathbb{E}|s_n(z) - \mathbb{E}s_n(z)|^4 \leq \frac{K}{n^4}\mathbb{E}\bigg(\sum_{j=1}^{p}\Big|\big(\mathbb{E}_j - \mathbb{E}_{j-1}\big)\beta_j(z)\mathbf{r}_j^\top\mathbf{Q}_j^{-2}(z)\mathbf{r}_j\Big|^2\bigg)^2 = O\big(n^{-2}\big),$$
where in the last step we use the fact that $|\beta_j(z)| \leq Kc_N^{-1/2}$ and $\mathbb{E}\big|\mathbf{r}_k^\top\mathbf{Q}_k^{-2}(z)\mathbf{r}_k\big|^2 \leq \mathbb{E}|\delta_k(z)|^2 + \mathbb{E}\big|\frac{1}{n}\mathrm{tr}\,\mathbf{Q}_k^{-2}(z)\big|^2 = O(1)$ by (9). Therefore, we obtain $s_n(z) - \mathbb{E}s_n(z) = o_{a.s.}(1)$.

Step 3. Convergence of the expected Stieltjes transform. Similarly to the proof of Lemma 5.5 in the Supplementary Material, and by applying the estimates in (9), we obtain $n\big(\mathbb{E}s_n(z) - s_{c_N}(z)\big) = O(1)$, which implies that $\mathbb{E}s_n(z) = s_{c_N}(z) + O(n^{-1})$. The details are omitted here. Moreover, since $s_{c_N}(z) = s(z) + o(1)$, it follows that $\mathbb{E}s_n(z) = s(z) + o(1)$.

Step 4. Completion of the proof of Theorem 2.4. By Steps 2 and 3, for any fixed $z \in \mathbb{C}^+$, we have $s_n(z) \to s(z)$ a.s..
That is, for each $z \in \mathbb{C}^+$, there exists a null set $N_z$ (i.e., $\mathbb{P}(N_z) = 0$) such that $s_n(z, \omega) \to s(z)$ for all $\omega \in N_z^c$. Now, let $\mathbb{C}_0^+ = \{z_m\}$ be a dense subset of $\mathbb{C}^+$ (e.g., all $z$ with rational real and imaginary parts) and let $N = \cup_m N_{z_m}$. Then $s_n(z, \omega) \to s(z)$ for all $\omega \in N^c$ and $z \in \mathbb{C}_0^+$. Let $\mathbb{C}_m^+ = \{z \in \mathbb{C}^+ : \Im z > 1/m, |z| \leq m\}$. When $z \in \mathbb{C}_m^+$, we have $|s_n(z)| \leq m$. By Vitali's convergence theorem, $s_n(z, \omega) \to s(z)$ for all $\omega \in N^c$ and $z \in \mathbb{C}_m^+$. Since this convergence holds for every $m$, we conclude that $s_n(z, \omega) \to s(z)$ for all $\omega \in N^c$ and $z \in \mathbb{C}^+$. Thus, for all $z \in \mathbb{C}^+$, $s_n^{\mathbf{B}_n}(z) \to s(z)$ almost surely. By Theorem B.9 in Bai and Silverstein (2010), we conclude that $F^{\mathbf{B}_n} \xrightarrow{w} F$ a.s.. Thus we complete the proof of Theorem 2.4.

5.3. Proof of Theorem 2.6

Since $\lambda_1^{\mathbf{B}_n} = \sqrt{\frac{N}{p}}\lambda_1^{\mathbf{R}_n} - \sqrt{\frac{p}{N}}$, Theorem 2.6 can be obtained directly from Lemmas 1 and 7 in Gao et al. (2017) when $p/n \to c \in (0, \infty)$. We therefore focus on the case where $p/n \to \infty$.

Proof of Theorem 2.6 (i): By Theorem 2.4, we have $\liminf_{n\to\infty}\lambda_1(\mathbf{B}_n) \geq 2$ a.s.. Thus, to prove conclusion (i) in Theorem 2.6, it suffices to show that $\limsup_{n\to\infty}\lambda_1(\mathbf{B}_n) \leq 2$ a.s.. Firstly, according to Assumption 2.1*, we truncate the underlying random variables. Here, we choose $\delta_n$ satisfying
$$\delta_n^{-2(t+1)}\mathbb{E}|x_{11}|^{2t+2}\mathbf{1}\{|x_{11}| \geq \delta_n(np)^{1/(2t+2)}\} \to 0, \quad \delta_n \to 0, \quad \delta_n(np)^{1/(2t+2)} \to \infty. \tag{10}$$
Similarly to the arguments in Section 5.2, let $\hat{\mathbf{B}}_n = \hat{\mathbf{B}}_n(\hat{x}_{ij})$ and $\tilde{\mathbf{B}}_n = \tilde{\mathbf{B}}_n(\tilde{x}_{ij})$ be defined similarly to $\mathbf{B}_n$ with $x_{ij}$ replaced by $\hat{x}_{ij}$ and $\tilde{x}_{ij}$ respectively, where $\hat{x}_{ij} = x_{ij}I\{|x_{ij}| \leq \delta_n(np)^{1/(2t+2)}\}$ and $\tilde{x}_{ij} = (\hat{x}_{ij} - \mathbb{E}\hat{x}_{ij})/\sigma_n$ with $\sigma_n^2 = \mathbb{E}|\hat{x}_{ij} - \mathbb{E}\hat{x}_{ij}|^2$. Following the proof of Theorem 1 in Chen and Pan (2012), we have
$$\mathbb{P}\big(\mathbf{B}_n \neq \hat{\mathbf{B}}_n, \text{ i.o.}\big) = 0,$$
from which we obtain $\lambda_1(\mathbf{B}_n) - \lambda_1(\hat{\mathbf{B}}_n) \to 0$ a.s. as $n \to \infty$. And note that $\hat{\mathbf{B}}_n = \tilde{\mathbf{B}}_n$. We have $\lambda_1(\mathbf{B}_n) - \lambda_1(\tilde{\mathbf{B}}_n) \to 0$ a.s.. By the above results, it is sufficient to show that $\limsup_{n\to\infty}\lambda_1(\tilde{\mathbf{B}}_n) \leq 2$ a.s.. To this end, note that $\tilde{\mathbf{B}}_n$ satisfies the truncation condition of Theorem 2.6 (ii). Therefore, we can obtain Theorem 2.6 (i) from conclusion (ii). Next we give the proof of conclusion (ii).

Proof of Theorem 2.6 (ii): To begin with, by (S11) in Yu, Xie and Zhou (2023), we have
$$\lambda_1^{\mathbf{B}_n} \leq \lambda_1^{\boldsymbol{\Phi}^2}\lambda_1^{\mathbf{B}_{n0}} \leq \max_{1\leq i\leq n}\big|\mathbf{e}_i^\top\mathbf{B}_{n0}\mathbf{e}_i\big| + \lambda_1^{\mathbf{C}_n},$$
where
$$\mathbf{B}_{n0} = \sqrt{\frac{p}{N}}\Big(\frac{1}{p}\mathbf{X}^\top\mathbf{D}_n\mathbf{X} - \mathbf{I}_n\Big), \quad \mathbf{C}_n = \mathbf{B}_{n0} - \mathrm{diag}(\mathbf{B}_{n0}).$$
Since
$$\mathbf{e}_i^\top\mathbf{B}_{n0}\mathbf{e}_i = \frac{1}{\sqrt{pN}}\sum_{k=1}^{p}\Big(\frac{1}{s_{kk}}X_{ki}^2 - 1\Big) = \frac{1}{\sqrt{pN}}\sum_{k=1}^{p}\frac{1}{s_{kk}}\big(X_{ki}^2 - 1\big) + \frac{1}{\sqrt{pN}}\sum_{k=1}^{p}\Big(\frac{1}{s_{kk}} - 1\Big),$$
to prove Theorem 2.6, it is sufficient to prove, for any $\epsilon > 0$, $\ell > 0$,
$$\mathbb{P}\bigg(\max_{1\leq i\leq n}\bigg|\frac{1}{\sqrt{pN}}\sum_{k=1}^{p}\frac{1}{s_{kk}}\big(X_{ki}^2 - 1\big)\bigg| > \epsilon\bigg) = o\big(n^{-\ell}\big), \tag{11}$$
$$\mathbb{P}\bigg(\bigg|\frac{1}{\sqrt{pN}}\sum_{k=1}^{p}\Big(\frac{1}{s_{kk}} - 1\Big)\bigg| > \epsilon\bigg) = o\big(n^{-\ell}\big), \tag{12}$$
$$\mathbb{P}\big(\lambda_1^{\mathbf{C}_n} > 2 + \epsilon\big) = o\big(n^{-\ell}\big). \tag{13}$$
The proofs of (11)–(13) rely on Lemma 5.1 below, whose proof is postponed to the supplementary material.

Lemma 5.1. Under the assumptions of Theorem 2.6 (ii), we have
$$\mathbb{P}\Big(\max_{1\leq k\leq p}\Big|\frac{1}{s_{kk}} - 1\Big| > \epsilon\Big) = o\big(n^{-\ell}\big).$$
By Lemma 5.1, $\max_{1\leq k\leq p} 1/s_{kk} < 2$ with high probability, so (11) follows directly from (9) in Chen and Pan (2012). For (12), by the Burkholder inequality (Lemma 2.13 in Bai and Silverstein (2010)), we have
$$\mathbb{P}\bigg(\bigg|\frac{1}{\sqrt{pN}}\sum_{k=1}^{p}\Big(\frac{1}{s_{kk}} - 1\Big)\bigg| > \epsilon\bigg) = \mathbb{P}\bigg(\bigg|\sum_{k=1}^{p}\Big(\frac{1}{s_{kk}} - 1\Big)\bigg| > \epsilon\sqrt{pN}\bigg) \leq \frac{K\,\mathbb{E}\big|\sum_{k=1}^{p}(s_{kk} - 1)\big|^{\ell}}{(\epsilon\sqrt{pN})^{\ell}} + o\big(n^{-\ell}\big)$$
$$\leq \frac{K\Big[\big(\sum_{k=1}^{p}\mathbb{E}|s_{kk} - 1|^2\big)^{\ell/2} + \sum_{k=1}^{p}\mathbb{E}|s_{kk} - 1|^{\ell}\Big]}{(\epsilon\sqrt{pN})^{\ell}} + o\big(n^{-\ell}\big) \leq \frac{K\big[(p/n)^{\ell/2} + pn^{-\ell/2} + pn^{-\ell+1}\nu_{2\ell}\big]}{(\epsilon\sqrt{pN})^{\ell}} + o\big(n^{-\ell}\big) = o\big(n^{-\ell}\big).$$
And for (13), by using Lemma 5.1 again, we have, for any $\epsilon, \epsilon' > 0$,
$$\mathbb{P}\big(\lambda_1^{\mathbf{C}_n} > 2 + \epsilon\big) = \mathbb{P}\Big(\lambda_1^{\mathbf{C}_n} > 2 + \epsilon,\ \max_{1\leq k\leq p}\Big|\frac{1}{s_{kk}} - 1\Big| < \epsilon'\Big) + \mathbb{P}\Big(\lambda_1^{\mathbf{C}_n} > 2 + \epsilon,\ \max_{1\leq k\leq p}\Big|\frac{1}{s_{kk}} - 1\Big| > \epsilon'\Big) = \mathbb{P}\Big(\lambda_1^{\mathbf{C}_n} > 2 + \epsilon,\ \max_{1\leq k\leq p}\Big|\frac{1}{s_{kk}} - 1\Big| < \epsilon'\Big) + o\big(n^{-\ell}\big) = o\big(n^{-\ell}\big),$$
where the last equality holds by (8) in Chen and Pan (2012) and (S12) in Yu, Xie and Zhou (2023). Together with (11) and (12), we obtain $\mathbb{P}(\lambda_1(\mathbf{B}_n) \geq 2 + \epsilon) = o(n^{-\ell})$. Therefore we complete the proof.

5.4. Proof of Theorem 2.8

Now we present the strategy for the proof of Theorem 2.8. By the Cauchy integral formula, we have $\int f(x)\,\mathrm{d}G(x) = -\frac{1}{2\pi i}\oint_{\mathcal{C}} f(z)m_G(z)\,\mathrm{d}z$, valid for any c.d.f. $G$ and any analytic function $f$ on an open set containing the support of $G$, where $\oint_{\mathcal{C}}$ is the contour integration in the anti-clockwise direction. In our case, $G(x) = n\big(F^{\mathbf{B}_n}(x) - F_{c_N}(x)\big)$. Therefore, the problem of finding the limiting distribution reduces to the study of
$$M_n^{\mathbf{B}_n}(z) = n\big(s_n^{\mathbf{B}_n}(z) - s_{c_N}(z)\big) - \Theta_n(s_{c_N}(z)).$$
By using (8), in the ultrahigh dimensional case,
$$\Theta_n(s_{c_N}(z)) = \frac{s^3(z) + s(z) - s'(z)s(z)}{s^2(z) - 1} - \frac{1}{z} + o(1),$$
and then
$$M_n^{\mathbf{B}_n}(z) = M_n(z) - \frac{s^3(z) + s(z) - s'(z)s(z)}{s^2(z) - 1} + o(1), \tag{14}$$
where $M_n(z) = n\big(s_n(z) - s_{c_N}(z)\big)$. Firstly, according to Assumption 2.1*, we truncate the underlying random variables. Here, we choose $\delta_n$ defined in (10).
By the arguments in Section 5.3, let $\hat{\mathbf{B}}_n = \hat{\mathbf{B}}_n(\hat{x}_{ij})$ and $\tilde{\mathbf{B}}_n = \tilde{\mathbf{B}}_n(\tilde{x}_{ij})$ be defined similarly to $\mathbf{B}_n$ with $x_{ij}$ replaced by $\hat{x}_{ij}$ and $\tilde{x}_{ij}$ respectively, where $\hat{x}_{ij} = x_{ij} I\{|x_{ij}| \le \delta_n (np)^{1/(2t+2)}\}$ and $\tilde{x}_{ij} = (\hat{x}_{ij} - E\hat{x}_{ij})/\sigma_n$ with $\sigma_n^2 = E|\hat{x}_{ij} - E\hat{x}_{ij}|^2$. We then conclude that
$$P\left(\mathbf{B}_n \neq \hat{\mathbf{B}}_n\right) \le np \cdot P\left(|x_{11}| \ge \delta_n (np)^{1/(2t+2)}\right) \le K \delta_n^{-2(t+1)} E\left[|x_{11}|^{2t+2} 1_{\{|x_{11}| \ge \delta_n (np)^{1/(2t+2)}\}}\right] = o(1).$$
Let $\hat{G}_n(f)$ and $\tilde{G}_n(f)$ be $G_n(f)$ with $\mathbf{B}_n$ replaced by $\hat{\mathbf{B}}_n$ and $\tilde{\mathbf{B}}_n$ respectively. Then for each $j = 1,2,\ldots,k$, since $\hat{\mathbf{B}}_n = \tilde{\mathbf{B}}_n$, we have $G_n(f_j) = \hat{G}_n(f_j) + o_p(1) = \tilde{G}_n(f_j) + o_p(1)$. Thus, we only need to find the limiting distribution of $\{\tilde{G}_n(f_j),\ j = 1,\ldots,k\}$. Hence, in what follows, we assume that the underlying variables are truncated at $\delta_n (np)^{\frac{1}{2t+2}}$, centralized, and renormalized. For convenience, we shall suppress the superscript on the variables, and assume that, for any $1 \le i \le p$ and $1 \le j \le n$,
$$|x_{ij}| \le \delta_n (np)^{\frac{1}{2t+2}}, \quad E x_{ij} = 0, \quad E|x_{ij}|^2 = 1, \quad E|x_{ij}|^4 = \kappa + o(1), \quad E|x_{ij}|^{2t+2} < \infty. \qquad (15)$$
For any $\varepsilon > 0$, define the event $F_n(\varepsilon) = \left\{\max_{j\le n} |\lambda_j(\mathbf{B}_n)| \ge 2+\varepsilon\right\}$, where $\mathbf{B}_n$ is defined by the truncated and normalized variables satisfying condition (15). By Theorem 2.6, for any $\ell > 0$,
$$P(F_n(\varepsilon)) = o\left(n^{-\ell}\right). \qquad (16)$$
Here we would point out that the result regarding the minimum eigenvalue of $\mathbf{B}_n$ can be obtained similarly by investigating the maximum eigenvalue of $-\mathbf{B}_n$. Note that the support of $F^{\mathbf{B}_n}$ is random. Fortunately, we have shown in (16) that the extreme eigenvalues of $\mathbf{B}_n$ are highly concentrated around the two edges of the support of the limiting semicircle law $F(x)$. Then the contour $\mathcal{C}$ can be appropriately chosen. Moreover, as in Bai and Silverstein (2004), by (16), we can replace the process $\{M_n(z), z \in \mathcal{C}\}$ by a slightly modified process $\{\widehat{M}_n(z), z \in \mathcal{C}\}$. Below we present the definitions of the contour $\mathcal{C}$ and the modified process $\widehat{M}_n(z)$. Let $x_r$ be any number greater than $2 + \frac{1}{\sqrt{np}}$, and let $x_l$ be any number less than $-2 + \frac{1}{\sqrt{np}}$. Now let $\mathcal{C}_u = \{x + iv_0 : x \in [x_l, x_r]\}$. Then we define
$$\mathcal{C}^+ := \{x_l + iv : v \in [0, v_0]\} \cup \mathcal{C}_u \cup \{x_r + iv : v \in [0, v_0]\},$$
and $\mathcal{C} = \mathcal{C}^+ \cup \overline{\mathcal{C}^+}$. Now we define the subsets $\mathcal{C}_n$ of $\mathcal{C}$ on which $M_n(\cdot)$ equals $\widehat{M}_n(\cdot)$. Choose a sequence $\{\varepsilon_n\}$ decreasing to zero and satisfying, for some $\alpha \in (0,1)$, $\varepsilon_n \ge n^{-\alpha}$. Let
$$\mathcal{C}_l = \{x_l + iv : v \in [n^{-1}\varepsilon_n, v_0]\} \quad\text{and}\quad \mathcal{C}_r = \{x_r + iv : v \in [n^{-1}\varepsilon_n, v_0]\}.$$
Then $\mathcal{C}_n = \mathcal{C}_l \cup \mathcal{C}_u \cup \mathcal{C}_r$.
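As an aside, the Cauchy-integral identity driving this strategy is easy to verify numerically on a toy signed measure. The sketch below is our own illustration, not part of the proof: the atoms, weights, and rectangular contour (mimicking $\mathcal{C}$) are all hypothetical choices.

```python
import numpy as np

# Check numerically that  ∫ f(x) dG(x) = -(1/(2πi)) ∮_C f(z) m_G(z) dz
# for G a signed measure with mass +1 at 1.0 and -1 at -0.5, f(z) = z^2,
# and C a rectangle enclosing both atoms, traversed anti-clockwise.
atoms = np.array([1.0, -0.5])
weights = np.array([1.0, -1.0])

def m_G(z):
    # Stieltjes transform of G: m_G(z) = sum_i w_i / (x_i - z)
    return np.sum(weights / (atoms - z))

def f(z):
    return z ** 2

# Rectangle with corners x_l ± i v0 and x_r ± i v0 (anti-clockwise order).
x_l, x_r, v0, N = -2.2, 2.2, 1.0, 4000
corners = [x_r - 1j * v0, x_r + 1j * v0, x_l + 1j * v0, x_l - 1j * v0, x_r - 1j * v0]
integral = 0.0 + 0.0j
for a, b in zip(corners[:-1], corners[1:]):
    z = a + np.linspace(0.0, 1.0, N) * (b - a)
    vals = np.array([f(zz) * m_G(zz) for zz in z])
    integral += np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(z))  # trapezoid rule

lhs = np.sum(weights * f(atoms))          # direct value of ∫ f dG = 0.75
rhs = (-1.0 / (2j * np.pi)) * integral    # contour-integral value
print(lhs, rhs.real)                      # both ≈ 0.75
```

The agreement follows from the residue theorem, since $f(z)m_G(z)$ has residue $-w_i f(x_i)$ at each atom $x_i$ inside the contour.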
For๐‘ง=๐‘ฅ+๐‘–๐‘ฃ, we define b๐‘€๐‘›(๐‘ง)=๏ฃฑ๏ฃด๏ฃด๏ฃด ๏ฃฒ ๏ฃด๏ฃด๏ฃด๏ฃณ๐‘€๐‘›(๐‘ง), for๐‘งโˆˆC๐‘›, ๐‘€๐‘›(๐‘ฅ๐‘Ÿ+๐‘–๐‘›โˆ’1๐œ€๐‘›), for๐‘ฅ=๐‘ฅ๐‘Ÿ,๐‘ฃโˆˆ[0,๐‘›โˆ’1๐œ€๐‘›], ๐‘€๐‘›(๐‘ฅ๐‘™+๐‘–๐‘›โˆ’1๐œ€๐‘›), for๐‘ฅ=๐‘ฅ๐‘™,๐‘ฃโˆˆ[0,๐‘›โˆ’1๐œ€๐‘›]. With the help of (16), one may thus find โˆฎ C๐‘“๐‘—(๐‘ง)๐‘€๐‘›(๐‘ง)๐‘‘๐‘ง=โˆฎ C๐‘“๐‘—(๐‘ง)b๐‘€๐‘›(๐‘ง)๐‘‘๐‘ง+๐‘œ๐‘(1), for every๐‘—โˆˆ{1,...,๐พ}. Hence according to (14), the proof of Theorem 2.8 can be completed by verifying the convergence of b๐‘€๐‘›(๐‘ง)onCas stated in the following lemma. Lemma 5.2. In addition to Assumptions 2.1*, 2.2,2.3, suppose condition (15) holds. We have b๐‘€๐‘›(๐‘ง)๐‘‘=๐‘€(๐‘ง)+๐‘œ๐‘(1), ๐‘งโˆˆC, where the random process ๐‘€(๐‘ง)is a two-dimensional Gaussian process. The mean function is E๐‘€(๐‘ง)=๐‘ 3(๐‘ง)+๐‘ (๐‘ง)โˆ’๐‘ โ€ฒ(๐‘ง)๐‘ (๐‘ง) ๐‘ 2(๐‘ง)โˆ’1, and the covariance function is ๐ถ๐‘œ๐‘ฃ(๐‘€(๐‘ง1),๐‘€(๐‘ง2))=2๐‘ โ€ฒ(๐‘ง1)๐‘ โ€ฒ(๐‘ง2) {๐‘ (๐‘ง1)โˆ’๐‘ (๐‘ง2)}2โˆ’1 (๐‘ง1โˆ’๐‘ง2)2 โˆ’2๐‘ โ€ฒ(๐‘ง1)๐‘ โ€ฒ(๐‘ง2). (17) To prove Lemma 5.2, we decompose b๐‘€๐‘›(๐‘ง)into a random part ๐‘€(1) ๐‘›(๐‘ง)and a deterministic part ๐‘€(2) ๐‘›(๐‘ง)for๐‘งโˆˆC๐‘›, that is,๐‘€๐‘›(๐‘ง)=๐‘€(1) ๐‘›(๐‘ง)+๐‘€(2) ๐‘›(๐‘ง), where ๐‘€(1) ๐‘›(๐‘ง)=๐‘› ๐‘ ๐‘›(๐‘ง)โˆ’E๐‘ ๐‘›(๐‘ง) and๐‘€(2) ๐‘›(๐‘ง)=๐‘› E๐‘ ๐‘›(๐‘ง)โˆ’๐‘ ๐‘๐‘(๐‘ง) . The random part contributes to the covariance function and the deterministic part contributes to the mean function. By Theorem 8.1 in Billingsley (1968), the proof of Lemma 5.2 is then complete if we can verify the following three steps: Step 1. Finite-dimensional convergence of ๐‘€(1) ๐‘›(๐‘ง)in distribution onC๐‘›to a centered multivariate Gaussian random vector with covariance function given by (17). Lemma 5.3. 
Under the assumptions of Theorem 2.8 and condition (15), as $n \to \infty$, for any set of $r$ points $\{z_1, z_2, \ldots, z_r\} \subseteq \mathcal{C}$, the random vector $\left(M_n^{(1)}(z_1), \ldots, M_n^{(1)}(z_r)\right)$ converges weakly to an $r$-dimensional centered Gaussian distribution with covariance function given by (17).

Step 2. Tightness of $M_n^{(1)}(z)$ for $z \in \mathcal{C}_n$. The tightness can be established by Theorem 12.3 of Billingsley (1968). It is sufficient to verify the moment condition given in the following lemma.

Lemma 5.4. Under the assumptions of Lemma 5.3,
$$\sup_{n;\ z_1, z_2 \in \mathcal{C}_n} \frac{E\left|M_n^{(1)}(z_1) - M_n^{(1)}(z_2)\right|^2}{|z_1 - z_2|^2} < \infty.$$

Step 3. Convergence of the non-random part $M_n^{(2)}(z)$.

Lemma 5.5. Under the assumptions of Lemma 5.3,
$$M_n^{(2)}(z) = \frac{s^3(z) + s(z) - s'(z)s(z)}{s^2(z)-1} + o(1) \quad\text{for } z \in \mathcal{C}_n.$$
Thus we complete the proof of Theorem 2.8. The proof of Lemma 5.3 is presented below, while the proofs of Lemmas 5.4-5.5 are delegated to the supplementary material due to the page limit.

5.5. Proof of Lemma 5.3

To prove Lemma 5.3, we first introduce the following supporting lemmas.

Lemma 5.6. Under the assumptions of Lemma 5.3, we have
$$\mathcal{Y}_1(z_1, z_2) \triangleq -\frac{\partial^2}{\partial z_1 \partial z_2} \sum_{j=1}^{p} \left[E_{j-1}\left(\tilde{\beta}_j(z_1)\varepsilon_j(z_1)\right)\right]\left[E_{j-1}\left(\tilde{\beta}_j(z_2)\varepsilon_j(z_2)\right)\right] = o_p(1),$$
$$\mathcal{Y}_2(z_1, z_2) \triangleq \frac{\partial^2}{\partial z_1 \partial z_2} \sum_{j=1}^{p} E_{j-1}\left[E_j\left(\tilde{\beta}_j(z_1)\varepsilon_j(z_1)\right) E_j\left(\tilde{\beta}_j(z_2)\varepsilon_j(z_2)\right)\right] = 2\frac{\partial^2}{\partial z_1 \partial z_2}\mathcal{J} - 2 s'(z_1) s'(z_2) + o_p(1),$$
where
$$\mathcal{J} = \frac{1}{n^2}\, b_n(z_1) b_n(z_2)\, E\sum_{j=1}^{p} \operatorname{tr}\left[E_j\left(\mathbf{Q}_j^{-1}(z_1)\right) E_j\left(\mathbf{Q}_j^{-1}(z_2)\right)\right].$$

Lemma 5.7. Under the assumptions of Lemma 5.3, we have
$$\frac{\partial^2}{\partial z_2 \partial z_1}\mathcal{J} = \frac{s^2(z_1)\, s^2(z_2)}{\left(s^2(z_1)-1\right)\left(s^2(z_2)-1\right)\left[s(z_1)s(z_2)-1\right]^2} + o_p(1).$$
The proof of Lemma 5.7 is presented in the next section, while the proof of Lemma 5.6 is delegated to the supplementary material due to the page limit. We now proceed to the proof of Lemma 5.3. By the fact that a random vector is multivariate normally distributed if and only if every linear combination of its components is normally distributed, we need only show that, for any positive integer $r$ and any complex sequence $\{\alpha_i\}$, the sum
$$\sum_{i=1}^{r} \alpha_i M_n^{(1)}(z_i)$$
converges weakly to a Gaussian random variable. To this end, we first decompose the random part $M_n^{(1)}(z)$ as a sum of martingale differences, which is given in (20). Then, we apply the martingale CLT (Proposition 5.8) to obtain the asymptotic distribution of $M_n^{(1)}(z)$.

Proposition 5.8 (Theorem 35.12 of Billingsley (1968)). Suppose that, for each $n$, $Y_{n,1}, Y_{n,2}, \ldots, Y_{n,r_n}$ is a real martingale difference sequence with respect to the increasing $\sigma$-fields $\{\mathcal{F}_{n,j}\}$ having second moments.
If, as $n \to \infty$, (i) $\sum_{j=1}^{r_n} E\left(Y_{n,j}^2 \mid \mathcal{F}_{n,j-1}\right) \stackrel{i.p.}{\longrightarrow} \sigma^2$, and (ii) $\sum_{j=1}^{r_n} E\left(Y_{n,j}^2\, I(|Y_{n,j}| \ge \varepsilon)\right) \to 0$, where $\sigma^2$ is a positive constant and $\varepsilon$ is an arbitrary positive number, then $\sum_{j=1}^{r_n} Y_{n,j} \stackrel{D}{\longrightarrow} N\left(0, \sigma^2\right)$.

To begin with, similarly to (9), we give some useful estimates below. For $q > 2$, we have
$$E|\varepsilon_k(z)|^q \le K\left(n^{-q/2} + n^{-q/2} p^{q/2-1} \delta_n^{2q-4}\right), \quad E|\delta_k(z)|^q \le K\left(n^{-q/2} + n^{-q/2} p^{q/2-1} \delta_n^{2q-4}\right),$$
$$E\left|\tilde{\beta}_k(z) - b_n(z)\right|^q = O\left(n^{q/2} p^{-q}\right), \quad |b_n(z) - b_1(z)| = O\left(n^{1/2} p^{-2/3}\right), \quad E|b_n(z) - E\beta_k(z)| = O\left(n p^{-2}\right),$$
$$E|\gamma_k(z) - \varepsilon_k(z)|^q = O\left(n^{-q/2}\right), \quad E\left|\frac{1}{n}\operatorname{tr}\left(\mathbf{Q}^{-1}(z)\mathbf{M}\right) - E\frac{1}{n}\operatorname{tr}\left(\mathbf{Q}^{-1}(z)\mathbf{M}\right)\right|^q = O\left(n^{-q/2}\right). \qquad (18)$$
We write $M_n^{(1)}(z)$ as a sum of martingale difference sequences (MDS), and then utilize the CLT for MDS to derive the asymptotic distribution of $M_n^{(1)}(z)$; it can be written as
$$M_n^{(1)}(z) = n\left[s_n(z) - E s_n(z)\right] = \sum_{j=1}^{p} (E_j - E_{j-1}) \operatorname{tr}\left[\mathbf{Q}^{-1}(z) - \mathbf{Q}_j^{-1}(z)\right] = -\sum_{j=1}^{p} (E_j - E_{j-1})\, \beta_j(z)\, \mathbf{r}_j^\top \mathbf{Q}_j^{-2}(z)\, \mathbf{r}_j. \qquad (19)$$
By using (7) and the fact that $(E_j - E_{j-1})\left[\tilde{\beta}_j(z)\,\frac{1}{n}\operatorname{tr}\mathbf{Q}_j^{-2}(z)\right] = 0$, we have
$$(E_j - E_{j-1})\,\beta_j(z)\, \mathbf{r}_j^\top \mathbf{Q}_j^{-2}(z)\, \mathbf{r}_j = E_j\left[\tilde{\beta}_j(z)\delta_j(z) - \tilde{\beta}_j^2(z)\varepsilon_j(z)\frac{1}{n}\operatorname{tr}\mathbf{Q}_j^{-2}(z)\right] + E_{j-1}\left[Y_j(z)\right] - (E_j - E_{j-1})\left[\tilde{\beta}_j^2(z)\left(\varepsilon_j(z)\delta_j(z) - \beta_j(z)\, \mathbf{r}_j^\top \mathbf{Q}_j^{-2}(z)\, \mathbf{r}_j\, \varepsilon_j^2(z)\right)\right],$$
where
$$Y_j(z) = -E_j\left[\tilde{\beta}_j(z)\delta_j(z) - \tilde{\beta}_j^2(z)\varepsilon_j(z)\frac{1}{n}\operatorname{tr}\mathbf{Q}_j^{-2}(z)\right].$$
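To see Proposition 5.8 in action on a toy case, one can simulate a bounded martingale difference array and check that the normalized sums are approximately Gaussian with the variance predicted by condition (i). The construction below is our own illustration, unrelated to the $Y_j(z)$ above.

```python
import numpy as np

# Toy MDS: Y_{n,j} = x_j (1 + 0.5 x_{j-1}) / sqrt(n), x_j i.i.d. Rademacher.
# E[Y_{n,j} | F_{n,j-1}] = 0, so this is a martingale difference sequence.
# Condition (i): sum_j E[Y_{n,j}^2 | F_{n,j-1}] = n^{-1} sum_j (1 + 0.5 x_{j-1})^2
#   -> E(1 + 0.5 x)^2 = 1.25 = sigma^2 by the law of large numbers.
# Condition (ii): trivial, since |Y_{n,j}| <= 1.5 / sqrt(n) -> 0.
rng = np.random.default_rng(0)
n, reps = 400, 10000
x = rng.choice([-1.0, 1.0], size=(reps, n + 1))
Y = x[:, 1:] * (1.0 + 0.5 * x[:, :-1]) / np.sqrt(n)
S = Y.sum(axis=1)                 # one CLT sum per replication
print(S.mean(), S.var())          # ≈ 0 and ≈ sigma^2 = 1.25
```

The empirical distribution of `S` is then close to $N(0, 1.25)$, as Proposition 5.8 predicts.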
With the help of (18), we have
$$E\left|\sum_{j=1}^{p} (E_j - E_{j-1})\,\tilde{\beta}_j^2(z)\varepsilon_j(z)\delta_j(z)\right|^2 = \sum_{j=1}^{p} E\left|(E_j - E_{j-1})\,\tilde{\beta}_j^2(z)\varepsilon_j(z)\delta_j(z)\right|^2 \le K\sum_{j=1}^{p} E\left|\tilde{\beta}_j^2(z)\varepsilon_j(z)\delta_j(z)\right|^2 \le K\left(p^{-1} + \delta_n^4\right) = o(1),$$
and similarly
$$E\left|\sum_{j=1}^{p} (E_j - E_{j-1})\,\tilde{\beta}_j^2(z)\beta_j(z)\, \mathbf{r}_j^\top \mathbf{Q}_j^{-2}(z)\, \mathbf{r}_j\, \varepsilon_j^2(z)\right|^2 = o(1).$$
By (19), we obtain
$$M_n^{(1)}(z) = \sum_{j=1}^{p} (E_j - E_{j-1})\, Y_j(z) + o_p(1). \qquad (20)$$
Then we need to consider the limit of the following term:
$$\sum_{i=1}^{r} \alpha_i \sum_{j=1}^{p} (E_j - E_{j-1})\, Y_j(z_i) = \sum_{j=1}^{p} \sum_{i=1}^{r} \alpha_i\, (E_j - E_{j-1})\, Y_j(z_i).$$
Using (18) we obtain
$$E|Y_j(z)|^4 \le K\left(b_n^{-2}\, E|\delta_j(z)|^4 + b_n^{-4}\, E|\varepsilon_j(z)|^4\right) \le K\left(p^{-2} + p^{-1}\delta_n^4\right),$$
from which we have
$$\sum_{j=1}^{p} E\left(\left|\sum_{i=1}^{r} \alpha_i (E_j - E_{j-1}) Y_j(z_i)\right|^2 I\left(\left|\sum_{i=1}^{r} \alpha_i (E_j - E_{j-1}) Y_j(z_i)\right| \ge \varepsilon\right)\right) \le \frac{1}{\varepsilon^2}\sum_{j=1}^{p} E\left|\sum_{i=1}^{r} \alpha_i (E_j - E_{j-1}) Y_j(z_i)\right|^4 \to 0.$$
Thus condition (ii) of Proposition 5.8 is satisfied. Next, it suffices to prove that
$$\sum_{j=1}^{p} E_{j-1}\left[\left(Y_j(z_1) - E_{j-1} Y_j(z_1)\right)\left(Y_j(z_2) - E_{j-1} Y_j(z_2)\right)\right] \qquad (21)$$
converges in probability to (17). Note that
$$(21) = \sum_{j=1}^{p} E_{j-1}\left[Y_j(z_1) Y_j(z_2)\right] - \sum_{j=1}^{p} \left[E_{j-1} Y_j(z_1)\right]\left[E_{j-1} Y_j(z_2)\right] = \mathcal{Y}_1(z_1, z_2) + \mathcal{Y}_2(z_1, z_2).$$
By Lemmas 5.6-5.7, we obtain that the limit of (21) is (17). Thus we complete the proof of Lemma 5.3.

5.6. Proof of Lemma 5.7

The proof of Lemma 5.7 differs substantially from the classical case. Unlike the high dimensional case where $p/n \to c \in (0, \infty)$ (Gao et al., 2017), our analysis is conducted in the ultrahigh dimensional regime with $p/n \to \infty$. In this setting, we carefully examine the influence of $c_n$ and $c_p$, and derive a novel deterministic equivalent form $\left(\frac{n-1}{p}\, c_1(z) - \sqrt{n/p} - z\right)^{-1}\mathbf{I}_n$ for $\mathbf{Q}_j^{-1}(z)$, the resolvent of the renormalized correlation matrix with the $j$th component information removed. Specifically, by using the identity $\mathbf{r}_i^\top \mathbf{Q}_j^{-1}(z) = \sqrt{n/p}\,\beta_{ij}(z)\, \mathbf{r}_i^\top \mathbf{Q}_{ij}^{-1}(z)$, we get
$$\mathbf{Q}_j^{-1}(z) = -\mathbf{H}_n(z) + c_1(z_1)\mathbf{A}(z_1) + \mathbf{B}(z_1) + \mathbf{C}(z_1) + \mathbf{F}(z_1),$$
where $\mathbf{H}_n(z_1) = \left(\sqrt{n/p} + z_1 - \frac{n-1}{p}\, c_1(z_1)\right)^{-1}\mathbf{I}_n$ and
$$\mathbf{A}(z_1) = \sum_{i \neq j}^{p} \mathbf{H}_n(z_1)\left(\mathbf{r}_i \mathbf{r}_i^\top - \frac{1}{n-1}\mathbf{\Phi}\right)\mathbf{Q}_{ij}^{-1}(z_1), \qquad \mathbf{B}(z_1) = \sum_{i \neq j}^{p} \left(\beta_{ij}(z_1) - c_1(z_1)\right)\mathbf{H}_n(z_1)\, \mathbf{r}_i \mathbf{r}_i^\top \mathbf{Q}_{ij}^{-1}(z_1),$$
$$\mathbf{C}(z_1) = -\frac{n-1}{p}\, c_1(z_1)\, \mathbf{H}_n(z_1)\, \mathbf{\Phi}\left(\mathbf{Q}_j^{-1}(z_1) - \mathbf{Q}_{ij}^{-1}(z_1)\right), \qquad \mathbf{F}(z_1) = -\frac{n-1}{np}\, c_1(z_1)\, \mathbf{H}_n(z_1)\, \mathbf{1}_n \mathbf{1}_n^\top \mathbf{Q}_j^{-1}(z_1).$$
We next employ $-\mathbf{H}_n(z)$ as a suitable approximation to the resolvent matrix $\mathbf{Q}_j^{-1}(z)$, extract the dominant terms contributing to the limiting behavior of $\mathcal{J}$, and demonstrate that the error terms are negligible. Note that $\|\mathbf{H}_n(z_1)\| \le K$ and, by Lemma 6 in Gao et al. (2017), we have $E\,\mathbf{r}_i \mathbf{r}_i^\top = \frac{1}{n-1}\mathbf{\Phi}$. For any nonrandom $\mathbf{M}$ with $\|\mathbf{M}\| \le K$, by using (18), we can obtain
$$n^{-1} E\left|\operatorname{tr}\mathbf{B}(z_1)\mathbf{M}\right| = O\left(n^{-1/2}\right), \qquad n^{-1} E\left|\operatorname{tr}\mathbf{C}(z_1)\mathbf{M}\right| = O\left(n^{-1}\right),$$
which implies
$$n^{-1} E\left|\operatorname{tr} E_j(\mathbf{B}(z_1))\,\mathbf{Q}_j^{-1}(z_2)\right| = o(1), \qquad n^{-1} E\left|\operatorname{tr} E_j(\mathbf{C}(z_1))\,\mathbf{Q}_j^{-1}(z_2)\right| = o(1).$$
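The leave-one-out identity for $\mathbf{r}_i^\top \mathbf{Q}_j^{-1}(z)$ used above is an instance of the Sherman–Morrison formula. A minimal numerical sanity check of the generic version follows; the matrices are random stand-ins (not the paper's $\mathbf{Q}_j$), and the scalar $c$ plays the role of the $\sqrt{p/n}$-type factor.

```python
import numpy as np

# Sherman–Morrison: if Q = Q0 + c r r^T, then
#   r^T Q^{-1} = r^T Q0^{-1} / (1 + c r^T Q0^{-1} r).
rng = np.random.default_rng(1)
m = 6
A = rng.standard_normal((m, m))
Q0 = A + A.T - 2.5j * np.eye(m)   # complex-shifted symmetric matrix (invertible)
r = rng.standard_normal(m)
c = 0.3
Q = Q0 + c * np.outer(r, r)

lhs = r @ np.linalg.inv(Q)
beta = 1.0 / (1.0 + c * (r @ np.linalg.inv(Q0) @ r))
rhs = beta * (r @ np.linalg.inv(Q0))
print(np.max(np.abs(lhs - rhs)))  # ≈ 0
```

The complex diagonal shift mimics a resolvent evaluated off the real axis, which guarantees invertibility of both `Q0` and `Q`.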
And since $\mathbf{1}_n^\top \mathbf{Q}_j^{-1}(z_1) = -\frac{1}{\sqrt{n/p} + z_1}\mathbf{1}_n^\top$, we have $\left|\operatorname{tr} E_j(\mathbf{F}(z_1))\,\mathbf{Q}_j^{-1}(z_2)\right| \le K/\sqrt{pn}$. In the end, consider the term $c_1(z_1)\operatorname{tr} E_j(\mathbf{A}(z_1))\,\mathbf{Q}_j^{-1}(z_2)$. By using
$$\mathbf{Q}^{-1}(z) - \mathbf{Q}_k^{-1}(z) = -\mathbf{Q}_k^{-1}(z)\,\mathbf{r}_k \mathbf{r}_k^\top \mathbf{Q}_k^{-1}(z)\,\beta_k(z),$$
we can write $\operatorname{tr}\left(E_j(\mathbf{A}(z_1))\,\mathbf{Q}_j^{-1}(z_2)\right) = A_1(z_1,z_2) + A_2(z_1,z_2) + A_3(z_1,z_2)$, where
$$A_1(z_1,z_2) = -\sum_{i<j}^{p} \beta_{ij}(z_2)\, \mathbf{r}_i^\top E_j\left(\mathbf{Q}_{ij}^{-1}(z_1)\right)\mathbf{Q}_{ij}^{-1}(z_2)\,\mathbf{r}_i \mathbf{r}_i^\top \mathbf{Q}_{ij}^{-1}(z_2)\,\mathbf{H}_n(z_1)\,\mathbf{r}_i,$$
$$A_2(z_1,z_2) = -\operatorname{tr}\sum_{i<j}^{p} \mathbf{H}_n(z_1)\, n^{-1}\mathbf{\Phi}\, E_j\left(\mathbf{Q}_{ij}^{-1}(z_1)\right)\left(\mathbf{Q}_j^{-1}(z_2) - \mathbf{Q}_{ij}^{-1}(z_2)\right),$$
$$A_3(z_1,z_2) = \operatorname{tr}\sum_{i<j}^{p} \mathbf{H}_n(z_1)\left(\mathbf{r}_i \mathbf{r}_i^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{ij}^{-1}(z_1)\right)\mathbf{Q}_{ij}^{-1}(z_2).$$
With (18), we obtain $|c_1(z_1) A_2(z_1,z_2)| \le K$. Our next aim is to show
$$n^{-1}\, c_1(z_1)\, E_j A_3(z_1,z_2) = o_p(1). \qquad (22)$$
Write
$$E\left|c_1(z_1)\, E_j A_3(z_1,z_2)\right|^2 = |c_1(z_1)|^2 \sum_{i_1, i_2<j} E\Big[\operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_1j}^{-1}(z_1)\right) E_j\left(\mathbf{Q}_{i_1j}^{-1}(z_2)\right) \times \operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_2j}^{-1}(z_1)\right) E_j\left(\mathbf{Q}_{i_2j}^{-1}(z_2)\right)\Big]$$
$$= |c_1(z_1)|^2 \sum_{i_1, i_2<j} E\Big[\operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_1j}^{-1}(z_1)\check{\mathbf{Q}}_{i_1j}^{-1}(z_2)\right) \times \operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_2j}^{-1}(z_1)\check{\mathbf{Q}}_{i_2j}^{-1}(z_2)\right)\Big],$$
where $\check{\mathbf{Q}}_{i_2j}$ is defined similarly to $\mathbf{Q}_{i_2j}$ by $(\mathbf{r}_1, \ldots, \mathbf{r}_{j-1}, \check{\mathbf{r}}_{j+1}, \ldots, \check{\mathbf{r}}_p)$, and where $\check{\mathbf{r}}_{j+1}, \ldots, \check{\mathbf{r}}_p$ are i.i.d. copies of $\mathbf{r}_{j+1}, \ldots, \mathbf{r}_p$. When $i_1 = i_2$, with Lemma 5 in Gao et al. (2017), the term in the above expression is bounded by
$$|c_1(z_1)|^2 \sum_{i_1<j} E\left|\operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_1j}^{-1}(z_1)\check{\mathbf{Q}}_{i_1j}^{-1}(z_2)\right)\right|^2 \le K\, p\, n^{-1}\, n^{-1} = O(1).$$
For $i_1 \neq i_2 < j$, define
$$\beta_{i_1 i_2 j}(z) = \frac{1}{\sqrt{n/p} + \mathbf{r}_{i_2}^\top \mathbf{Q}_{i_1 i_2 j}^{-1}(z)\,\mathbf{r}_{i_2}}, \qquad \mathbf{Q}_{i_1 i_2 j}(z) = \mathbf{Q}(z) - \sqrt{\frac{p}{n}}\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top + \mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top + \mathbf{r}_j\mathbf{r}_j^\top\right),$$
and similarly define $\check{\beta}_{i_1 i_2 j}$ and $\check{\mathbf{Q}}_{i_1 i_2 j}(z)$. Then we have
$$|c_1(z_1)|^2 \sum_{i_1 \neq i_2 < j} E\Big[\operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_1j}^{-1}(z_1)\check{\mathbf{Q}}_{i_1j}^{-1}(z_2)\right) \times \operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_2j}^{-1}(z_1)\check{\mathbf{Q}}_{i_2j}^{-1}(z_2)\right)\Big] = S_1 + S_2 + S_3,$$
where
$$S_1 = -\sum_{i_1 \neq i_2 < j} E\Big[\operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\beta_{i_1 i_2 j}(z_1)\,\mathbf{Q}_{i_1 i_2 j}^{-1}(z_1)\,\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top \mathbf{Q}_{i_1 i_2 j}^{-1}(z_1)\,\check{\mathbf{Q}}_{i_1j}^{-1}(z_2)\right) \times |c_1(z_1)|^2 \operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_2j}^{-1}(z_1)\check{\mathbf{Q}}_{i_2j}^{-1}(z_2)\right)\Big],$$
$$S_2 = -\sum_{i_1 \neq i_2 < j} E\Big[\operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_1 i_2 j}^{-1}(z_1)\,\check{\beta}_{i_1 i_2 j}(z_2)\,\check{\mathbf{Q}}_{i_1 i_2 j}^{-1}(z_2)\,\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top \check{\mathbf{Q}}_{i_1 i_2 j}^{-1}(z_2)\right) \times |c_1(z_1)|^2 \operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_2j}^{-1}(z_1)\check{\mathbf{Q}}_{i_2j}^{-1}(z_2)\right)\Big],$$
$$S_3 = -|c_1(z_1)|^2 \sum_{i_1 \neq i_2 < j} E\Big[\operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_1 i_2 j}^{-1}(z_1)\check{\mathbf{Q}}_{i_1 i_2 j}^{-1}(z_2)\right) \times \operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\beta_{i_2 i_1 j}(z_1)\,\mathbf{Q}_{i_1 i_2 j}^{-1}(z_1)\,\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top \mathbf{Q}_{i_1 i_2 j}^{-1}(z_1)\,\check{\mathbf{Q}}_{i_2j}^{-1}(z_2)\right)\Big].$$
Split $S_1 = S_1^{(1)} + S_1^{(2)}$, $S_1^{(2)} = S_1^{(21)} + S_1^{(22)}$, $S_1^{(22)} = S_1^{(221)} + S_1^{(222)}$, where
$$S_1^{(1)} = \sum_{i_1 \neq i_2 < j} E\Big[\operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top - n^{-1}\mathbf{\Phi}\right) \times E_j\left(\beta_{i_1 i_2 j}(z_1)\,\mathbf{Q}_{i_1 i_2 j}^{-1}(z_1)\,\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top \mathbf{Q}_{i_1 i_2 j}^{-1}(z_1)\,\check{\beta}_{i_1 i_2 j}(z_2)\,\check{\mathbf{Q}}_{i_1 i_2 j}^{-1}(z_2)\,\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top \check{\mathbf{Q}}_{i_1 i_2 j}^{-1}(z_2)\right) \times |c_1(z_1)|^2 \operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_2j}^{-1}(z_1)\check{\mathbf{Q}}_{i_2j}^{-1}(z_2)\right)\Big],$$
$$S_1^{(2)} = -\sum_{i_1 \neq i_2 < j} E\Big[\operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\beta_{i_1 i_2 j}(z_1)\,\mathbf{Q}_{i_1 i_2 j}^{-1}(z_1)\,\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top \mathbf{Q}_{i_1 i_2 j}^{-1}(z_1)\,\check{\mathbf{Q}}_{i_1 i_2 j}^{-1}(z_2)\right) \times |c_1(z_1)|^2 \operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_2}\mathbf{r}_{i_2}^\top - n^{-1}\mathbf{\Phi}\right) E_j\left(\mathbf{Q}_{i_2j}^{-1}(z_1)\check{\mathbf{Q}}_{i_2j}^{-1}(z_2)\right)\Big],$$
$$S_1^{(21)} = \sum_{i_1 \neq i_2 < j} E\,\operatorname{tr}\mathbf{H}_n(z_1)\left(\mathbf{r}_{i_1}\mathbf{r}_{i_1}^\top - n^{-1}\mathbf{\Phi}\right)