more difficult to establish the inequality directly for all $x$.

A.2 Some special functions

The modified Bessel functions $I_\nu(x)$ and $K_\nu(x)$ are independent solutions to the modified Bessel differential equation
$$x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} - (x^2 + \nu^2)\, y = 0,$$
where $\nu$ and the argument may be arbitrary complex valued. Further, recall their series representations
$$I_\nu(x) = \sum_{k=0}^{\infty} \frac{1}{k!\,\Gamma(k+\nu+1)} \left(\frac{x}{2}\right)^{2k+\nu} \quad\text{and}\quad K_\nu(x) = \frac{\pi}{2}\, \frac{I_{-\nu}(x) - I_\nu(x)}{\sin(\nu\pi)}$$
for non-integer $\nu$. To define $K_n(x)$ for integer $n$, one takes the limit
$$K_n(x) = \lim_{\nu \to n} K_\nu(x) = \lim_{\nu \to n} \frac{\pi}{2}\, \frac{I_{-\nu}(x) - I_\nu(x)}{\sin(\nu\pi)}.$$
The Bessel function has a number of integral representations under certain conditions. We use the following:
$$I_\nu(x) = \frac{(x/2)^\nu}{\Gamma\!\left(\nu + \frac{1}{2}\right)\sqrt{\pi}} \int_{-1}^{1} e^{xt}\, (1 - t^2)^{\nu - \frac{1}{2}}\, dt.$$
Note that this expression has an interesting probabilistic interpretation. Let $U \sim \mathrm{Unif}(S^{n-1})$ and let $v \in \mathbb{R}^n$ be a fixed unit vector. Then, for $n = 2(\nu + 1)$, the modified Bessel function of the first kind can be written as
$$I_\nu(x) = \frac{(x/2)^\nu}{\Gamma(\nu+1)}\, \mathbb{E}\!\left[e^{x\, v \cdot U}\right].$$
The generalized hypergeometric function is defined as
$${}_pF_q(a_1, \ldots, a_p;\, b_1, \ldots, b_q;\, z) = \sum_{k=0}^{\infty} \frac{(a_1)_k \cdots (a_p)_k}{(b_1)_k \cdots (b_q)_k}\, \frac{z^k}{k!},$$
where $(a)_k$ is the Pochhammer symbol (rising factorial), $(a)_k = a(a+1)(a+2)\cdots(a+k-1)$ with $(a)_0 = 1$. When $p = q = 1$, it can be shown that the generalized hypergeometric function ${}_1F_1(a; b; z)$ is a solution of the Kummer differential equation
$$z \frac{d^2 y}{dz^2} + (b - z)\frac{dy}{dz} - a y = 0.$$
A comprehensive study of the various Bessel and hypergeometric functions can be found in Magnus et al. [MOS66].

A.3 Integral Transformations

A definition of the Hankel transform can be found, for example, in Koh and Zemanian [KZ69] and is as follows. For any $\nu \in \mathbb{R}$ and $a > 0$, let $\mathcal{J}_{\nu,a}$ be the function space generated by the kernel $\sqrt{xy}\, J_\nu(xy)$, where $J_\nu(\cdot)$ is the Bessel function of the first kind, $x > 0$, and $y$ is a complex number in the strip $\Omega = \{y : |\operatorname{Im} y| < a,\ y \text{ not equal to } 0 \text{ or a negative number}\}$.
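As a numerical sanity check (added here, not part of the original text), both the integral representation of $I_\nu$ and the $K_\nu$ identity above can be verified against SciPy's built-in Bessel functions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, iv, kv

def iv_integral(nu, x):
    """I_nu(x) via the integral representation
    (x/2)^nu / (Gamma(nu+1/2) sqrt(pi)) * int_{-1}^{1} e^{xt} (1-t^2)^{nu-1/2} dt."""
    integral, _ = quad(lambda t: np.exp(x * t) * (1.0 - t**2) ** (nu - 0.5), -1.0, 1.0)
    return (x / 2.0) ** nu / (gamma(nu + 0.5) * np.sqrt(np.pi)) * integral

nu, x = 1.3, 2.0
# Integral representation agrees with the series-based scipy implementation.
assert abs(iv_integral(nu, x) - iv(nu, x)) < 1e-8

# K_nu(x) = (pi/2) * (I_{-nu}(x) - I_nu(x)) / sin(nu*pi) for non-integer nu.
k_from_i = 0.5 * np.pi * (iv(-nu, x) - iv(nu, x)) / np.sin(nu * np.pi)
assert abs(k_from_i - kv(nu, x)) < 1e-8
```

Both checks pass to quadrature accuracy, which is a useful guard when transcribing these representations into code.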
The Hankel transform $H_\nu$ is defined on the dual space $\mathcal{J}'_{\nu,a}$ as
$$H_\nu[f](y) = \int_0^\infty f(x)\, \sqrt{xy}\, J_\nu(xy)\, dx,$$
where $y \in \Omega$, $f$ is locally integrable, and
$$\int_0^\infty \left| f(x)\, e^{ax} x^{a+1/2} \right| dx < \infty.$$
Now, to every $f \in \mathcal{J}'_{\nu,a}$ there exists a number $\sigma_f$ (possibly infinite) such that $f \in \mathcal{J}'_{\nu,b}$ if $b \le \sigma_f$ and $f \notin \mathcal{J}'_{\nu,b}$ if $b > \sigma_f$. Hence $f \in \mathcal{J}'_{\nu,\sigma_f}$. Now, for any $f \in \mathcal{J}'_{\nu,\sigma_f}$, define the I-transform as
$$F(y) = I_\nu[f](y) = \int_0^\infty f(x)\, \sqrt{xy}\, I_\nu(xy)\, dx \qquad (A.4)$$
where $I_\nu(\cdot)$ is the modified Bessel function of the first kind. The real-valued I-transform is defined where $y$ is restricted to the strip $\Omega_f = \{s : 0 < s < \sigma_f\}$. The inversion formula for the I-transform is given by
$$f(x) = I_\nu^{-1}[F](x) = \lim_{r \to \infty} \frac{1}{i\pi} \int_{\sigma - ir}^{\sigma + ir} F(y)\, \sqrt{xy}\, K_\nu(xy)\, dy \qquad (A.5)$$
for $\nu \ge -1/2$, $\sigma \in \Omega_f$, and where $K_\nu(\cdot)$ is the modified Bessel function of the second kind of order $\nu$. The statement of the inverse formula is given by Koh and Zemanian in [KZ69] and may be proved using ideas similar to those in the proof of Theorem 6.6 provided by Zemanian in [Zem68]. It is important to note that the inversion holds only for $\sigma \in \Omega_f$; therefore $\sigma = 0$ is not an admissible value. If one is going to apply (A.5), it is necessary to use a contour integration technique, not just a simple change of variables onto the real axis. However, the inverse I-transform is also related to the Hankel transform, since the Bessel function $J_\nu$ and the modified Bessel function $I_\nu$ are connected by the identity
$$I_\nu(z) = \exp(-\pi\nu i/2)\, J_\nu(iz) \quad \text{for } -\pi < \arg z \le \pi/2$$
(see formula 8.406 of Gradshteyn and Ryzhik [GR94]). It then follows that
$$I_\nu^{-1}[F](x) = \exp(-i\pi(\nu/2 - 3/4))\, H_\nu^{-1}[\hat{F}](x),$$
where $\hat{F}(y) = F(-iy)$. As the Hankel transform is self-
reciprocal, i.e., $H_\nu^{-1}[\,\cdot\,] = H_\nu[\,\cdot\,]$, the previous identity becomes
$$I_\nu^{-1}[F](x) = \exp(-i\pi(\nu/2 - 3/4))\, H_\nu[\hat{F}](x). \qquad (A.6)$$
Both of these relationships are useful when using tables of Hankel transforms. The I-transform is also related to the better-known K-transform
$$K_\nu[g](y) = \int_0^\infty g(x)\, \sqrt{xy}\, K_\nu(xy)\, dx.$$
If $G(y) = K_\nu[g](y)$, the inversion formula is given by
$$K_\nu^{-1}[G](x) = \lim_{r \to \infty} \frac{1}{i\pi} \int_{\sigma - ir}^{\sigma + ir} G(y)\, \sqrt{xy}\, I_\nu(xy)\, dy,$$
where $\nu \ge -1/2$ and $\sigma \in \Omega_g$. As
$$K_\nu(z) = \frac{\pi}{2\sin(\pi\nu)}\left[I_{-\nu}(z) - I_\nu(z)\right]$$
for non-integer $\nu$, it follows that
$$I_\nu^{-1}[F](x) = \frac{\pi}{2\sin(\pi\nu)}\left[K_{-\nu}^{-1}[F](x) - K_{\nu}^{-1}[F](x)\right];$$
for integer $\nu$, the right-hand sides of these relations are replaced by their limiting values. Fortunately, Oberhettinger [Obe72] has tabulated a rich collection of K-transforms and their inverses. For more details concerning integrals involving Bessel functions as integrands and their relationship to Laplace transformations, see the work by Luke [Luk62] and Erdélyi [Erd54].

References

[AB91] J. F. Angers and J. O. Berger. Robust hierarchical Bayes estimation of exchangeable means. Canadian Journal of Statistics, 19:39–56, 1991.
[ACD11] A. Armagan, M. Clyde, and D. Dunson. Generalized beta mixtures of Gaussians. Advances in Neural Information Processing Systems, 24, 2011.
[Bar70] A. J. Baranchik. A family of minimax estimators of the mean of a multivariate normal distribution. The Annals of Mathematical Statistics, pages 642–645, 1970.
[Ber85] J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, New York, 1985.
[BR90] J. O. Berger and C. Robert. Subjective hierarchical Bayes estimation of a multivariate normal mean: on the frequentist interface. Annals of Statistics, 18:617–651, 1990.
[DR88] A. DasGupta and H. Rubin. Bayesian estimation subject to minimaxity of the mean of a multivariate normal distribution in the case of a common unknown variance: a case for Bayesian robustness. In S. S. Gupta and J. O. Berger, editors, Statistical Decision Theory and Related Topics IV, volume 1, pages 325–345.
Springer-Verlag, 1988.
[Erd54] A. Erdélyi. Tables of Integral Transforms, volume 2. McGraw-Hill, New York, 1954.
[FSW98] D. Fourdrinier, W. E. Strawderman, and M. T. Wells. On the construction of Bayes minimax estimators. Annals of Statistics, pages 660–671, 1998.
[FSW18] D. Fourdrinier, W. E. Strawderman, and M. T. Wells. Shrinkage Estimation. Springer Nature, 2018.
[FW96] D. Fourdrinier and M. T. Wells. Bayes estimators for a linear subspace of a spherically symmetric normal law. In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics 5, pages 569–579. Oxford University Press, 1996.
[GR94] I. Gradshteyn and I. Ryzhik. Tables of Integrals, Series and Products. Academic Press, New York, 5th edition, 1994.
[KKM81] M. Krasnov, A. Kissélev, and G. Makarenko. Recueil de problèmes sur les équations différentielles ordinaires. Mir, Moscow, 1981.
[KZ69] E. L. Koh and A. H. Zemanian. The complex Hankel and I-transformations of generalized functions. SIAM J. Appl. Math., 10:945–957, 1969.
[Luk62] Y. L. Luke. Integrals of Bessel Functions. McGraw-Hill, New York, 1962.
[MOS66] W. Magnus, F. Oberhettinger, and R. Soni. Formulas and Theorems for the Special Functions of Mathematical Physics. Springer, New York, 1966.
[Obe72] F. Oberhettinger. Tables of Bessel Transforms. Springer-Verlag, New York, 1972.
[PS92] L. R. Pericchi and A. F. M. Smith. Exact and approximate posterior moments for the normal location parameter. J. Royal Statist. Soc., pages 793–804, 1992.
[Ste81] C. Stein. Estimation of the mean
https://arxiv.org/abs/2505.07649v1
Channel Fingerprint Construction for Massive MIMO: A Deep Conditional Generative Approach

Zhenzhou Jin, Graduate Student Member, IEEE, Li You, Senior Member, IEEE, Xudong Li, Zhen Gao, Senior Member, IEEE, Yuanwei Liu, Fellow, IEEE, Xiang-Gen Xia, Fellow, IEEE, and Xiqi Gao, Fellow, IEEE

Abstract—Accurate channel state information (CSI) acquisition for massive multiple-input multiple-output (MIMO) systems is essential for future mobile communication networks. Channel fingerprint (CF), also referred to as channel knowledge map, is a key enabler for intelligent environment-aware communication and can facilitate CSI acquisition. However, due to the cost limitations of practical sensing nodes and test vehicles, the resulting CF is typically coarse-grained, making it insufficient for wireless transceiver design. In this work, we introduce the concept of CF twins and design a conditional generative diffusion model (CGDM) with strong implicit prior learning capabilities as the computational core of the CF twin to establish the connection between coarse- and fine-grained CFs. Specifically, we employ a variational inference technique to derive the evidence lower bound (ELBO) for the log-marginal distribution of the observed fine-grained CF conditioned on the coarse-grained CF, enabling the CGDM to learn the complicated distribution of the target data. During the denoising neural network optimization, the coarse-grained CF is introduced as side information to accurately guide the conditioned generation of the CGDM. To make the proposed CGDM lightweight, we further leverage the additivity of network layers and introduce a one-shot pruning approach along with a multi-objective knowledge distillation technique. Experimental results show that the proposed approach exhibits significant improvement in reconstruction performance compared to the baselines.
Additionally, zero-shot testing on reconstruction tasks with different magnification factors further demonstrates the scalability and generalization ability of the proposed approach.

Index Terms—Massive MIMO, channel knowledge map, environment-aware wireless communication, conditional generative model.

I. INTRODUCTION

Part of this work has been accepted for presentation at the IEEE INFOCOM 2025 [1].

Zhenzhou Jin, Li You, Xudong Li, and Xiqi Gao are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China, and also with the Purple Mountain Laboratories, Nanjing 211100, China (e-mail: zzjin@seu.edu.cn; lyou@seu.edu.cn; xdli@seu.edu.cn; xqgao@seu.edu.cn). Zhen Gao is with the State Key Laboratory of CNS/ATM, Beijing Institute of Technology, Beijing 100081, China, also with the Beijing Institute of Technology, Zhuhai 519088, China, also with the MIIT Key Laboratory of Complex-Field Intelligent Sensing, Beijing Institute of Technology, Beijing 100081, China, also with the Advanced Technology Research Institute, Beijing Institute of Technology Jinan, Jinan 250307, China, and also with the Yangtze Delta Region Academy, Beijing Institute of Technology Jiaxing, Jiaxing 314019, China (e-mail: gaozhen16@bit.edu.cn). Yuanwei Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (e-mail: yuanwei@hku.hk). Xiang-Gen Xia is with the Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA (e-mail: xxia@ee.udel.edu).

THE deep integration of wireless communications, artificial intelligence (AI), and environmental sensing is expected to enable the 6th generation (6G) mobile communication networks to "perceive" the physical world with capabilities surpassing human sensing, thereby facilitating the creation of digital twins (DTs) in
the virtual realm [2], [3]. The vision of 6G is to propel society towards "intelligent internet of everything" and "ubiquitous connectivity", realizing the seamless integration and interaction between the physical and virtual worlds. To achieve this, 6G will need to possess more powerful end-to-end information processing capabilities to support emerging applications and domains, including autonomous vehicles, indoor localization, and the metaverse. Therefore, to achieve ultra-low latency and superior performance, while supporting scenarios that integrate AI, sensing, and communication, DTs and environmental sensing are considered key enablers for the upcoming 6G era [4]. With the dramatic increase in antenna array dimensions in massive multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) communication systems, the rapid rise in the number and density of user devices, and the utilization of broader bandwidths, 6G networks will encounter the challenge of processing ultra-large-dimensional MIMO channels [3], [5]–[7]. Traditional pilot-based methods for acquiring and feeding back channel state information (CSI) may suffer from prohibitively high pilot signaling overhead. Furthermore, traditional wireless transceiver designs typically rely on channel modeling, which is based on specific assumptions and probability distributions of channel parameters. However, these stringent assumptions may not be feasible in high-dynamic, complicated wireless propagation environments [8]. As a key determinant, the wireless propagation environment significantly affects channel parameter variation and the performance of wireless communication systems. Consequently, there has been growing interest in environment-aware wireless communications from both the academic and industrial communities.
Channel fingerprint (CF), also referred to as channel knowledge map (CKM), is an emerging enabling technology for environment-aware communications, offering location-specific channel knowledge related to a potential base station (BS) for any BS-to-everything (B2X) pair [7], [8]. Ideally, a fine-grained CF serves as a location-specific knowledge base, covering all precise locations within the target communication area. It includes the exact positions of transmitters and receivers, along with their corresponding channel knowledge.

arXiv:2505.07893v1 [cs.NI] 12 May 2025

[Fig. 1. Schematic diagram of the CF twin: channel data collection is conducted from the sensing nodes and the test vehicle, the measured channel knowledge $G(\mathbf{e}, \Delta_{i,j})$ is arranged as a 2-dimensional tensor according to its location, and the CGDM functions as the core computational unit of the CF twin, reconstructing the fine-grained (HR) CF from the coarse-grained (LR) CF to optimize wireless transmission technologies and network planning.]

This database stores channel-related knowledge for specific locations, including channel power, angle of arrival/departure, and channel impulse responses,
which can alleviate the challenges of CSI acquisition and empower the design of wireless transmission technologies. By providing essential and accurate channel knowledge, CF has recently spurred extensive research for various applications in space-air-ground integrated networks, including beam alignment [8], [9], communication among non-cooperative nodes [8], physical environment sensing [10], [11] and user localization [12], UAV communication [13] and path optimization [14], [15], and resource allocation in air-ground integrated mobility [16]. The aforementioned promising applications hinge on the effective construction of CF, which serves as the cornerstone of environment-aware wireless communications. Existing related works can primarily be categorized into model-driven and data-driven approaches. For the former, the authors of [17] employ an analytical channel model to represent the spatial variation of the propagation environment, where channel modeling parameters are estimated from measured data to reconstruct the CF. The authors of [18] combine prior assumptions of the wireless propagation environment model with partially observed channel measurements to infer channel knowledge at unmeasured locations. For the latter, the work [7] transforms the CF construction task into an image-to-image inpainting problem and designs a Laplacian pyramid-based model to learn the differences between frequency components, enabling efficient reconstruction of the CF for a target area. In [19], [20], U-Net is employed to learn geometry-based and physics-based features in urban or indoor environments, thereby constructing the corresponding CFs. In [8], fully connected networks are employed to predict channel knowledge at potential locations using simple 2D coordinate information. Most existing related works focus on constructing CFs using physical propagation environment features or prior assumptions of physical propagation models.
Evidently, a finer-grained CF can assist the BS in acquiring more precise, location-specific channel knowledge [8]. However, two key challenges must be addressed for CF to enable environment-aware wireless communications. First, the operation and maintenance costs associated with sensing nodes and test vehicles for measuring channel-related knowledge are inherently constrained. Second, storing a large number of fine-grained CFs may incur unnecessary and prohibitively high storage overhead for the BS. Consequently, most available CFs are coarse-grained, lacking precise location-specific channel knowledge. Motivated by these practical challenges, we seek to develop a computational unit dedicated to enhancing CF granularity, particularly by reconstructing ultra-fine-grained CFs from coarse-grained counterparts. In this context, fully leveraging measurable channel knowledge and location information from the physical world becomes essential. The concept of the DT has emerged as a promising paradigm, widely recognized as a virtual modeling framework that digitally replicates and extends the physical world [4]. As illustrated in Fig. 1, from the DT perspective, a structured coarse-grained CF can be sensed and reorganized by sensing nodes and test vehicles deployed in the physical world. This coarse-grained CF represents the physical object, which is subsequently processed by the DT hub, serving as the core computing unit of the DT, to generate a fine-grained CF, commonly referred to as the virtual twin object. Leveraging the generated virtual twin facilitates optimized decision-making for subsequent wireless transceiver designs, such as precoding and resource scheduling [7], [8]. Accordingly, our goal is
to establish the intrinsic mapping between physical objects and their corresponding virtual twins. Consistent with the DT paradigm, we refer to this concept as the CF twins. To better explore the relationship between the two, we reformulate the task of fine-grained CF construction as an image super-resolution (ISR) problem. However, the conditional distribution of SR outputs given coarse-grained inputs typically follows a complicated parametric distribution, potentially resulting in suboptimal performance for most feedforward neural network-based regression algorithms in ISR tasks [21], [22]. Recently, generative AI (GenAI) has emerged as a promising technique for high-fidelity DT construction, showcasing exceptional capabilities in modeling complicated relationships and distributions to effectively synthesize and reconstruct high-dimensional data [21]–[26]. Among various GenAI techniques, generative diffusion models (GDMs) [23] are widely recognized as one of the most prominent classes of generative models. GDMs require neither training an additional discriminator, as in generative adversarial networks (GANs) [21], nor aligning posterior distributions, as in variational autoencoders (VAEs) [26]. Therefore, GDMs have shown remarkable performance in high-dimensional data generation tasks [22], [27], underscoring their versatile application potential and efficacy. However, conventional GDMs are typically unconditional, which may lead to uncertainty in accurately generating the target fine-grained CF, potentially resulting in multiple possible solutions for the target CF. To this end, this paper proposes a conditional GDM (CGDM) with powerful implicit prior learning capabilities, serving as the CF twin hub, where the coarse-grained CF is introduced as side information to guide the iterative refinement process of the denoising neural network.
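One common way to realize such conditioning, shown here as an illustrative sketch rather than the paper's actual architecture (SR3-style conditional diffusion models, for instance, condition this way), is to upsample the LR CF to the HR grid and concatenate it with the noisy HR CF so that the denoiser receives the side information at every refinement step:

```python
import numpy as np

def upsample_nearest(G_lr, factor):
    """Nearest-neighbor upsampling of the LR CF onto the HR grid."""
    return np.repeat(np.repeat(G_lr, factor, axis=0), factor, axis=1)

G_lr = np.arange(16, dtype=float).reshape(4, 4)            # toy 4x4 coarse-grained CF
G_t = np.random.default_rng(0).standard_normal((16, 16))   # noisy HR CF at some step t

# Channel-wise concatenation: the denoiser sees the noisy HR CF alongside the
# upsampled LR CF, so every refinement step is guided by the side information.
cond_input = np.stack([G_t, upsample_nearest(G_lr, 4)], axis=0)
assert cond_input.shape == (2, 16, 16)
```

The concatenated tensor `cond_input` would then be fed to the denoising network in place of the noisy sample alone.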
Moreover, to make the proposed CGDM lightweight, we further introduce an efficient pruning approach and a multi-objective knowledge distillation technique. Specifically, the main contributions of this paper are as follows:

• Building on the proposed concept of the CF twin, we treat the coarse-grained CF as the physical object and the fine-grained CF as its corresponding virtual twin. To effectively capture the relationship between them, we reformulate this task as an ISR problem and solve it utilizing a Stein score-based iterative refinement approach.

• To enable the CF twin hub to more effectively learn the connections between physical entities and their virtual counterparts, we adopt a variational inference technique to derive the evidence lower bound (ELBO) of the log-conditional marginal distribution of the observed fine-grained CF, which serves as a surrogate training objective. Maximizing this objective allows the CGDM to learn the true target CF distribution, facilitating the transition from a standard normal distribution to the desired distribution. To better guide this transition, the coarse-grained CF is incorporated as side information during the optimization of the denoising network, thereby enhancing the controllability and fidelity of fine-grained CF generation.

• Considering that the proposed CGDM is a large AI model with numerous parameters, we develop an efficient pruning approach to enable its lightweight deployment. Specifically, we formulate the layer pruning task as a combinatorial optimization problem. By leveraging the inherent additivity property of network layers, we introduce a one-shot layer pruning strategy along with a multi-objective knowledge distillation technique, resulting in the lightweight CGDM (LiCGDM).
• Experimental results show that the proposed approach achieves significant performance improvements over the baselines. Additionally, we validate the generalization and knowledge transfer capabilities of the proposed model by conducting zero-shot performance testing on other SR CF tasks with unseen magnification factors (e.g., ×16, ×8).

The rest of this paper is structured as follows. Section II introduces the system model and the formulation of the fine-grained CF construction problem. The overall design of the CGDM and its specific network architecture are introduced in Section III. The one-shot pruning approach and knowledge distillation technique are proposed in Section IV. Experimental results are presented in Section V, with the conclusions provided in Section VI.

II. SYSTEM MODEL AND PROBLEM FORMULATION

In this section, we first outline the communication scenario and present the massive MIMO-OFDM physical channel model for each potential user equipment (UE) location. Then, we model the specific CF, that is, the channel power, for each potential UE based on its location and environmental factors. Finally, we describe the fine-grained CF reconstruction problem from the perspective of ISR.

A. Massive MIMO Physical Channel Model

Consider a massive MIMO-OFDM communication scenario within a square area $\mathcal{A} \subset \mathbb{R}^2$, where the 2D coordinates of the UE locations are defined as $\{x_m\}_{m=1}^{M} \subset \mathcal{A}$. Specifically, the BS is outfitted with a uniform planar array (UPA) consisting of $N_r = N_{r,v} \times N_{r,h}$ antenna elements, spaced at half-wavelength intervals, and serves $M$ single-antenna users within a cell. Here, $N_{r,v}$ and $N_{r,h}$ represent the numbers of antennas in each vertical column and horizontal row, respectively. It is assumed that the wireless propagation environment contains $L_{x_m}$ physical paths from the BS to the UE located at $x_m$. The system employs OFDM modulation with $N_c$ subcarriers, characterized by an adjacent subcarrier spacing of $\Delta f$.
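For concreteness, the half-wavelength steering-vector structure of the UPA introduced above can be sketched in NumPy; the array dimensions and angles below are illustrative, not values from the paper:

```python
import numpy as np

def steering(n, angle):
    """Steering vector [1, e^{-j*pi*angle}, ..., e^{-j*pi*(n-1)*angle}]^T
    along one half-wavelength-spaced array axis, with a normalized angle in [-1, 1]."""
    return np.exp(-1j * np.pi * angle * np.arange(n))

N_rv, N_rh = 4, 8        # illustrative UPA dimensions (vertical x horizontal)
psi, phi = 0.3, -0.5     # illustrative normalized vertical / horizontal angles

a_v = steering(N_rv, psi)
a_h = steering(N_rh, phi)

# Full UPA response via the Kronecker product; N_r = N_rv * N_rh elements in total.
a_upa = np.kron(a_h, a_v)
assert a_upa.shape == (N_rv * N_rh,)
assert np.allclose(np.abs(a_upa), 1.0)   # unit-modulus phase-only entries
```

The Kronecker structure is what lets the per-path channel response factor into vertical, horizontal, and frequency components.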
Typically, $N_k$ active subcarriers at the center of the total $N_c$ subcarriers are utilized for signal transmission, while the remaining subcarriers serve as guard bands. Define the set of active subcarriers as $\mathcal{K} = \{-\frac{N_k}{2}, -\frac{N_k}{2}+1, \ldots, \frac{N_k}{2}-1\}$. The spatial-frequency domain response of the channel between the BS and the UE over the active subcarriers $\mathcal{K}$ is modeled as [28]
$$\mathbf{h}^{H}(x_m) = \sum_{l=1}^{L_{x_m}} \alpha_{l,x_m}\, \mathbf{a}_t^{H}(\tau_{l,x_m}) \otimes \mathbf{a}_h^{H}(\phi_{l,x_m}) \otimes \mathbf{a}_v^{H}(\psi_{l,x_m}), \qquad (1)$$
where $\otimes$ denotes the Kronecker product, and $\alpha_{l,x_m}$ and $\tau_{l,x_m}$ represent the complex gain and the propagation delay, respectively, corresponding to the $l$-th path for the UE located at $x_m$. The normalized vertical angle $\psi_{l,x_m} \in [-1,1]$ is related to the elevation angle $\theta_{l,x_m} \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$ through the relationship $\psi_{l,x_m} = \sin\theta_{l,x_m}$, while the normalized horizontal angle $\phi_{l,x_m} \in [-1,1]$ is associated with the elevation angle $\theta_{l,x_m}$ and the azimuth angle $\varphi_{l,x_m} \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$ through $\phi_{l,x_m} = \cos\theta_{l,x_m} \sin\varphi_{l,x_m}$. The steering vectors $\mathbf{a}_t(\tau_{l,x_m})$, $\mathbf{a}_h(\phi_{l,x_m})$, and $\mathbf{a}_v(\psi_{l,x_m})$ can be represented as
$$\mathbf{a}_v(\psi_{l,x_m}) = \left[1,\, e^{-\jmath\pi\psi_{l,x_m}},\, \ldots,\, e^{-\jmath\pi(N_{r,v}-1)\psi_{l,x_m}}\right]^T, \qquad (2)$$
$$\mathbf{a}_h(\phi_{l,x_m}) = \left[1,\, e^{-\jmath\pi\phi_{l,x_m}},\, \ldots,\, e^{-\jmath\pi(N_{r,h}-1)\phi_{l,x_m}}\right]^T, \qquad (3)$$
$$\mathbf{a}_t(\tau_{l,x_m}) = \left[e^{-\jmath 2\pi\left(-\frac{N_k}{2}\right)\Delta f\,\tau_{l,x_m}},\, \cdots,\, e^{-\jmath 2\pi\left(\frac{N_k}{2}-1\right)\Delta f\,\tau_{l,x_m}}\right]^T, \qquad (4)$$
where $\mathbf{a}_h(\phi_{l,x_m}) \in \mathbb{C}^{N_{r,h}\times 1}$, $\mathbf{a}_v(\psi_{l,x_m}) \in \mathbb{C}^{N_{r,v}\times 1}$, and $\mathbf{a}_t(\tau_{l,x_m}) \in \mathbb{C}^{N_k\times 1}$.

B. Channel Fingerprints Model

Based on the proposed physical channel model in Section II-A, the channel power at the UE located at $x_m$, in dB scale, is defined as
$$G(\mathbf{e}, x_m) = \left[\mathbb{E}\left\{\mathbf{h}^{H}(x_m)\,\mathbf{h}(x_m)\right\}\right]_{\mathrm{dB}}, \qquad (5)$$
where $\mathbb{E}\{\cdot\}$ denotes the expectation operation, and $\mathbf{e}$ represents the propagation
environment, which determines the channel characteristics in (1), including the complex gain and propagation delay. It is evident that the channel power attenuation experienced at the UE is influenced by various environmental factors, including propagation losses along different paths as well as reflections and diffractions caused by surrounding structures such as buildings [19], [29]. In this paper, we refer to the collection of channel power values at potential locations as the unstructured CF. Since the communication area under consideration is a 2D square area $\mathcal{A} \subset \mathbb{R}^2$, we can perform spatial discretization along both the X-axis and Y-axis. Specifically, given an area of interest with size $W \times W$, we define a resolution factor $\rho$ such that the minimum spacing units in the spatial discretization process are $\Delta x = W/\rho$ and $\Delta y = W/\rho$. Each spatial grid is located at $\Delta_{i,j}$, where $i = 1, 2, \ldots, W/\Delta x$ and $j = 1, 2, \ldots, W/\Delta y$, and the $(i,j)$-th spatial grid can be represented as
$$\Delta_{i,j} := \left[i\,\Delta x,\; j\,\Delta y\right]^T. \qquad (6)$$
Given the resolution factor $\rho$, the unstructured CF corresponding to potential UE locations within the target area can be rearranged into a two-dimensional tensor, denoted as
$$[\mathbf{G}]_{i,j} = G(\mathbf{e}, \Delta_{i,j}). \qquad (7)$$
Furthermore, by incorporating additional dimensions such as time and frequency, the CF model can be naturally extended to a higher-order tensor representation.

Remark 1: When the target area has a size of $W \times W$ and the resolution factor is $\rho$, the number of points requiring interpolation is $\lfloor W/\rho \rfloor^2$. As the resolution factor $\rho$ increases, the complexity of traditional interpolation algorithms also grows with $O(\lfloor W/\rho \rfloor^3)$, posing significant challenges for constructing fine-grained CFs in practical scenarios. Alternatively, the spatial resolution of the CF can be improved by increasing the density of measurement points, either through deploying more sensing nodes or by employing test vehicles to collect channel knowledge at finer geographical intervals.
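The discretization in (6)-(7) can be sketched as follows. The path-loss function here is a synthetic stand-in for the true channel power $G(\mathbf{e}, \Delta_{i,j})$, and the area size and resolution factor are illustrative:

```python
import numpy as np

W, rho = 100.0, 20          # illustrative area size (W x W) and resolution factor
dx = dy = W / rho           # minimum spacing units, Delta_x = Delta_y = W / rho

def channel_power_db(pos):
    """Synthetic stand-in for G(e, x): log-distance path loss from a BS at the origin."""
    d = max(float(np.linalg.norm(pos)), 1.0)
    return -30.0 - 20.0 * np.log10(d)

# Arrange the unstructured CF into a 2D tensor: [G]_{i,j} = G(e, Delta_{i,j}).
G = np.empty((rho, rho))
for i in range(rho):
    for j in range(rho):
        grid_point = np.array([(i + 1) * dx, (j + 1) * dy])  # Delta_{i,j} = [i*dx, j*dy]^T (1-indexed)
        G[i, j] = channel_power_db(grid_point)

assert G.shape == (rho, rho)
```

With this layout, increasing the grid density directly increases the tensor size, which is exactly the coarse- versus fine-grained distinction exploited below.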
However, both approaches may be impractical due to high hardware and labor costs.

C. Problem Formulation

Define a coarse-grained factor $\rho_{\mathrm{LR}}$ and a fine-grained factor $\rho_{\mathrm{HR}}$, where $\rho_{\mathrm{HR}}$ is typically an integer multiple (e.g., ×4, ×8) of $\rho_{\mathrm{LR}}$. Accordingly, the coarse-grained CF and fine-grained CF are denoted by $\mathbf{G}_{\mathrm{LR}}$ and $\mathbf{G}_{\mathrm{HR}}$, which are collected by discretizing the target area into $\rho_{\mathrm{LR}} \times \rho_{\mathrm{LR}}$ and $\rho_{\mathrm{HR}} \times \rho_{\mathrm{HR}}$ grids, respectively. In light of Remark 1, the CF twin aims to reconstruct the fine-grained CF $\mathbf{G}_{\mathrm{HR}}$ from a given coarse-grained CF $\mathbf{G}_{\mathrm{LR}}$, particularly in scenarios constrained by measurement costs, privacy concerns, or security requirements.

In a typical ISR task, the goal is to reconstruct a high-resolution (HR) image from a given low-resolution (LR) counterpart, thereby enhancing the fine details and overall quality of the image. It can be observed that our problem aligns with the ISR task, which inspires us to analyze the fine-grained CF reconstruction problem from an ISR perspective. Specifically, we treat the elements of the $\mathbf{G}_{\mathrm{LR}}$ matrix as pixels, viewing the coarse-grained $\mathbf{G}_{\mathrm{LR}}$ as an LR image and the fine-grained $\mathbf{G}_{\mathrm{HR}}$ as an HR image. Then, our goal is to learn a specific mapping relationship that efficiently reconstructs the HR CF $\mathbf{G}_{\mathrm{HR}}$ from a given LR CF $\mathbf{G}_{\mathrm{LR}}$, i.e.,
$$\mathcal{M}_{\Theta}: \mathbf{G}_{\mathrm{LR},u} \to \mathbf{G}_{\mathrm{HR},u}, \quad \forall u \in \{1, 2, \ldots, U\}, \qquad (8)$$
where $\Theta$ is the parameter set of this mapping $\mathcal{M}_{\Theta}$, and $U$ is
the number of training samples. However, this task represents a classic and highly challenging inverse problem, requiring the effective reconstruction of fine details from a given LR CF. Since the conditional distribution of HR outputs corresponding to a given LR input typically does not adhere to a simple parametric distribution, many feedforward neural network-based regression methods for ISR tend to perform poorly at higher upscaling factors, struggling to recover fine details accurately [22]. In contrast, deep generative models have demonstrated success in learning complicated empirical distributions of target data. Specifically, if the implicit prior information of the HR CF distribution, such as the gradient of the data log-density, can be learned, one can transition to the target CF distribution through iterative sampling steps from a standard normal distribution, similar to Langevin dynamics [30]. Therefore, learning the target HR CF distribution can be solved by optimizing
$$\operatorname*{argmin}_{\Theta}\; \mathbb{E}_{p(\mathbf{G}_{\mathrm{HR}})}\!\left[\left\|\nabla \log p(\mathbf{G}_{\mathrm{HR}}) - \nabla \log p_{\Theta}(\mathbf{G}_{\mathrm{HR}})\right\|_2^2\right], \qquad (9a)$$
$$\text{s.t.} \quad x_m \in \mathcal{A},\; u \in \{1, 2, \ldots, U\}, \qquad (9b)$$
where $\nabla \log p(\mathbf{G}_{\mathrm{HR}})$ represents the gradient of the HR CF log-density, also referred to as the Stein score, and $p_{\Theta}$ denotes the learned density. It has been shown that the noise learned by traditional GDMs is equivalent to the Stein score [23], enabling the generation of HR CF samples aligned with the target data distribution. However, for traditional GDMs, accurately generating the target HR CF is an ill-posed task, meaning that the generation process may yield multiple possible solutions for the HR CF. Therefore, in contrast to traditional GDMs, which start with a pure Gaussian noise tensor, introducing an additional source signal as side information (also a guiding condition) is essential to achieve an optimal solution. In this context, the objective (9) needs to be reformulated as
$$\operatorname*{argmin}_{\Theta}\; \mathbb{E}_{p(\mathbf{G}_{\mathrm{HR}}, \mathbf{G}_{\mathrm{LR}})}\!\left[\left\|\nabla \log p(\mathbf{G}_{\mathrm{HR}} \,|\, \mathbf{G}_{\mathrm{LR}}) - \nabla \log p_{\Theta}(\mathbf{G}_{\mathrm{HR}} \,|\, \mathbf{G}_{\mathrm{LR}})\right\|_2^2\right], \qquad (10a)$$
$$\text{s.t.} \quad x_m \in \mathcal{A},\; u \in \{1, 2, \ldots, U\}.$$
(10b)

To this end, our approach leverages the learned prior information, with the LR CF $\mathbf{G}_{\mathrm{LR}}$ serving as the additional source signal to guide the iterative refinement process. Further implementation details are provided in Section III.

III. CGDM-ENABLED SR CF

In this section, we introduce the proposed CGDM as the core computational unit of the CF twin. First, under the variational inference framework, we derive a concrete proxy objective to ensure the effective operation of the proposed CGDM. Next, we introduce the LR CF as side information and design a conditional GDM to iteratively refine the transformation from a standard normal distribution to the target data distribution, akin to Langevin dynamics, for HR CF reconstruction. Finally, we present the detailed network architecture of the proposed CGDM. To simplify the notation, $\mathbf{G}_{\mathrm{LR}}$ and $\mathbf{G}_{\mathrm{HR}}$ are represented by $\tilde{\mathbf{G}}$ and $\mathbf{G}$, respectively, in the subsequent sections.

A. CGDM Design for SR CF

Consider a dataset of LR CF inputs paired with HR CF outputs, defined as $\mathcal{D} = \{\tilde{\mathbf{G}}_u, \mathbf{G}_u\}_{u=1}^{U}$, representing samples drawn from an unknown distribution $p(\tilde{\mathbf{G}}, \mathbf{G})$. Such datasets are generally collected from sensing nodes and test vehicles with different resolution factors (e.g., $\rho_{\mathrm{LR}}$ and $\rho_{\mathrm{HR}}$), depending on the specific scenario. In our task, the focus is on learning a parametric
https://arxiv.org/abs/2505.07893v1
approximation to p(G \mid \tilde{G}) through a directed iterative refinement process guided by source information, which enables the mapping of \tilde{G} to G. Given the powerful implicit prior learning capability of GDMs, we design a conditional GDM to facilitate the generation of G. As shown in Fig. 2, the CGDM can generate the target HR CF, defined as G_0, through T refinement time steps. Beginning with a CF G_T \sim \mathcal{N}(0, I) composed of pure Gaussian noise, the CGDM iteratively refines this initial input based on the source signal and the prior information learned during training, namely the conditional distribution p_\theta(G_{t-1} \mid G_t, \tilde{G}). As it progresses through each time step t, it produces a series of output CFs defined as \{G_{T-1}, G_{T-2}, \ldots, G_0\}, ultimately resulting in the target HR CF G_0 \sim p(G \mid \tilde{G}).

Fig. 2. The mechanism of the CGDM for generating the HR CF consists of a Gaussian diffusion process q(G_t \mid G_{t-1}) (without learnable parameters) and an iterative refinement process p_\theta(G_{t-1} \mid G_t, \tilde{G}) based on the LR CF. The pink arrow indicates the direction of the Gaussian diffusion process, which progressively adds noise to the HR CF. The blue arrow indicates the direction of the iterative refinement process, which utilizes the implicit prior learned during training and is conditioned on the source information (LR CF) to generate the HR CF.

Specifically, the distribution of intermediate CFs in the iterative refinement chain is governed by the forward diffusion process, which gradually adds noise to the output CF through a fixed Markov chain, denoted as q(G_t \mid G_{t-1}). Our model seeks to reverse the Gaussian diffusion process by iteratively recovering the signal from noise through a reverse Markov chain conditioned on the source CF \tilde{G}.
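The refinement chain just described can be sketched end to end. The sketch below is a minimal NumPy illustration, assuming the standard DDPM ancestral update for p_\theta(G_{t-1} \mid G_t, \tilde{G}) (this update is made explicit later in (34)); `dummy_denoiser` is a hypothetical stand-in for the trained conditional network \varepsilon_\theta(\cdot):

```python
import numpy as np

def refine(eps_theta, G_lr, betas, rng):
    """Generate an HR CF by iterative refinement: start from G_T ~ N(0, I) and
    repeatedly sample p_theta(G_{t-1} | G_t, G_lr), conditioning on the LR CF."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    Gt = rng.standard_normal(G_lr.shape)          # G_T: pure Gaussian noise
    for t in range(len(betas) - 1, -1, -1):
        eps_hat = eps_theta(G_lr, Gt, t)          # predicted noise at step t
        mean = (Gt - (1 - alphas[t]) / np.sqrt(1 - alpha_bars[t]) * eps_hat) \
               / np.sqrt(alphas[t])
        if t > 0:                                 # inject noise except at the last step
            var = (1 - alpha_bars[t - 1]) / (1 - alpha_bars[t]) * betas[t]
            Gt = mean + np.sqrt(var) * rng.standard_normal(Gt.shape)
        else:
            Gt = mean
    return Gt                                     # approximately G_0 ~ p(G | G_lr)

# Hypothetical stand-in for the trained denoiser eps_theta(G_lr, G_t, t).
dummy_denoiser = lambda G_lr, Gt, t: np.zeros_like(Gt)
rng = np.random.default_rng(0)
G0 = refine(dummy_denoiser, np.zeros((32, 32)), np.linspace(1e-6, 1e-2, 50), rng)
print(G0.shape)
```

With a real trained denoiser in place of `dummy_denoiser`, the same loop implements the T-step inference procedure described in this section.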
To achieve this, we learn the reverse chain by leveraging a denoising neural network \varepsilon_\theta(\cdot), optimized with the objective function (32). The CGDM takes an LR CF and a noisy image as input to estimate the noise and, after T refinement steps, generates the target HR CF.

B. Diffusion Process Starting with the HR CF

Consider an HR CF sample drawn from a distribution of interest, denoted as G_0 = G \sim q(G). The GDM utilizes a fixed diffusion process q(G_{1:T} \mid G_0) for training, which involves relatively high-dimensional latent variables. This process defines a forward diffusion mechanism, a fixed (non-learned) Markov chain, where Gaussian noise is gradually introduced to the sample over T steps. The noise level at each step is determined by a variance schedule \{\beta_t \in (0, 1)\}_{t=1}^{T}. Specifically, the forward diffusion process is defined as [23]

q(G_{1:T} \mid G_0) = \prod_{t=1}^{T} q(G_t \mid G_{t-1}). \quad (11)

Algorithm 1 Offline Training Strategy for the Denoising Neural Network Conditioned on the LR CF
1: repeat
2:   Load CF training data pairs (\tilde{G}, G_0) \sim p(\tilde{G}, G_0)
3:   Sample a time step t \sim \mathrm{Uniform}(1, \ldots, T)
4:   Randomly generate a noise tensor with the same dimensions as G_0, \varepsilon_t \sim \mathcal{N}(0, I)
5:   Add noise to the HR CF G_0 according to (14) to perform the diffusion process
6:   Input the corrupted HR CF G_t, the LR CF \tilde{G}, and the time step
t into the model \varepsilon_\theta(\cdot)
7:   Perform a gradient descent step on the objective function (32) to update the model parameters \theta:
     \nabla_\theta \big\| \varepsilon_t - \varepsilon_\theta\big(\tilde{G}, \sqrt{\bar{\alpha}_t}\, G_0 + \sqrt{1-\bar{\alpha}_t}\, \varepsilon_t, t\big) \big\|_2^2
8: until the objective function (32) converges

Algorithm 2 Inferring the HR CF in the Reverse Process Conditioned on the LR CF through T Iterative Refinement Steps
1: Load the pretrained model and its weights \theta
2: Obtain the completely corrupted HR CF G_T \sim \mathcal{N}(0, I) and the LR CF \tilde{G}
3: for t = T, \ldots, 1 do
4:   \varepsilon^* \sim \mathcal{N}(0, I) if t > 1, else \varepsilon^* = 0
5:   Execute the refinement step according to (34):
     G_{t-1} \leftarrow \frac{1}{\sqrt{\alpha_t}} \left( G_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \varepsilon_\theta(\tilde{G}, G_t, t) \right) + \sqrt{\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\, \beta_t}\; \varepsilon^*
6: end for
7: return the HR CF \hat{G}_0

Therefore, the forward diffusion process does not involve trainable parameters. Instead, it is a fixed, predefined linear Gaussian model, which can be denoted as

q(G_t \mid G_{t-1}) = \mathcal{N}\big( G_t; \sqrt{1-\beta_t}\, G_{t-1}, \beta_t I \big), \quad (12)
G_t = \sqrt{1-\beta_t}\, G_{t-1} + \sqrt{\beta_t}\, \varepsilon, \quad (13)

where \varepsilon denotes Gaussian noise with distribution \mathcal{N}(\varepsilon; 0, I). Define \alpha_t = 1 - \beta_t and \bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i. Under the linear Gaussian assumption on the transition density q(G_t \mid G_{t-1}), and by combining (12) and (13), we can utilize the reparameterization technique to sample G_t in closed form at any given time step t:

G_t = \sqrt{\bar{\alpha}_t}\, G_0 + \sqrt{1-\bar{\alpha}_t}\, \varepsilon, \quad (14)
q(G_t \mid G_0) = \mathcal{N}\big( G_t; \sqrt{\bar{\alpha}_t}\, G_0, (1-\bar{\alpha}_t) I \big). \quad (15)

Generally, the variance schedule is set so that \beta_1 < \beta_2 < \ldots < \beta_T. When \bar{\alpha}_T becomes sufficiently close to 0, G_T converges to a standard Gaussian distribution for any initial state G_0, i.e., q(G_T \mid G_0) \approx \mathcal{N}(G_T; 0, I).

C. Reverse Diffusion Process Conditioned on the LR CF

For the traditional GDM [23], the reverse process can be considered a decoding procedure, where at each time step t, G_t is denoised and restored to G_{t-1}, with the transition probability of each step denoted as p(G_{t-1} \mid G_t). Based on the Markov transition properties, the joint distribution of the reverse process is expressed as

p(G_{0:T}) = p(G_T) \prod_{t=1}^{T} p(G_{t-1} \mid G_t).
(16)

However, deriving the expression for p(G_{t-1} \mid G_t) via Bayes' theorem reveals that its denominator contains an integral that lacks a closed-form solution. Consequently, a denoising neural network with parameters \theta needs to be developed to approximate these conditional probabilities in order to execute the reverse process. Since the reverse process is also a Markov chain, it can be represented as

p(G_{0:T}) = p(G_T) \prod_{t=1}^{T} p_\theta(G_{t-1} \mid G_t), \quad (17)
p_\theta(G_{t-1} \mid G_t) = \mathcal{N}\big( G_{t-1}; \mu_\theta(G_t, t), \Sigma_\theta(G_t, t) \big). \quad (18)

Note that in our task, unlike the traditional GDM, the denoising model \varepsilon_\theta(\cdot) is conditioned on side information in the form of the LR CF \tilde{G}, guiding it to progressively denoise from a Gaussian-distributed G_T and generate the HR CF G_0. Therefore, (17) needs to be rewritten as

p(G_{0:T} \mid \tilde{G}) = p(G_T) \prod_{t=1}^{T} p_\theta(G_{t-1} \mid G_t, \tilde{G}), \quad (19)

and the denoising model \varepsilon_\theta(\cdot) is trained to approximate p_\theta(G_{t-1} \mid G_t, \tilde{G}), which is defined as

p_\theta(G_{t-1} \mid G_t, \tilde{G}) = \mathcal{N}\big( G_{t-1}; \mu_\theta(\tilde{G}, G_t, t), \Sigma_\theta(\tilde{G}, G_t, t) \big). \quad (20)

D. ELBO-Based Optimization of the CGDM

To enable the network model \varepsilon_\theta(\cdot) to effectively approximate the reverse process, the model parameters \theta need to be optimized with a specific objective. Mathematically, the latent variables G_{1:T} and the observed sample G_0 conditioned on \tilde{G} can be represented using the conditional joint distribution p(G_{0:T} \mid \tilde{G}). A common likelihood-based approach in generative modeling is to optimize the model to maximize the conditional joint probability p(G_{0:T} \mid \tilde{G}) of all observed samples. However, we only
have access to the observed sample G_0, while the latent variables G_{1:T} are unknown. Therefore, we seek to maximize the conditional marginal distribution p(G_0 \mid \tilde{G}), given by

p(G_0 \mid \tilde{G}) = \int p(G_{0:T} \mid \tilde{G}) \, dG_{1:T}. \quad (21)

Within the framework of variational inference, the likelihood of the observed sample G_0 conditioned on \tilde{G}, known as the evidence, allows us to derive the ELBO as a proxy objective function for optimizing the CGDM:

\log p(G_0 \mid \tilde{G}) = \log \int p(G_{0:T} \mid \tilde{G}) \, dG_{1:T} \quad (22a)
\overset{(a)}{\geq} \mathbb{E}_{q(G_{1:T} \mid G_0)} \left[ \log \frac{p(G_{0:T} \mid \tilde{G})}{q(G_{1:T} \mid G_0)} \right], \quad (22b)

where the inequality (a) in (22b) follows from Jensen's inequality. To further derive the ELBO, (22b) can be rewritten as (23a)-(23b):

\log p(G_0 \mid \tilde{G}) \geq \mathbb{E}_{q(G_{1:T} \mid G_0)} \left[ \log \frac{p(G_T)\, p_\theta(G_0 \mid G_1, \tilde{G})}{q(G_1 \mid G_0)} + \log \frac{q(G_1 \mid G_0)}{q(G_T \mid G_0)} + \log \prod_{t=2}^{T} \frac{p_\theta(G_{t-1} \mid G_t, \tilde{G})}{q(G_{t-1} \mid G_t, G_0)} \right] \quad (23a)

= \underbrace{\mathbb{E}_{q(G_1 \mid G_0)} \big[ \log p_\theta(G_0 \mid G_1, \tilde{G}) \big]}_{\mathcal{L}_a} - \underbrace{\sum_{t=2}^{T} \mathbb{E}_{q(G_t \mid G_0)} \Big[ D_{\mathrm{KL}}\big( q(G_{t-1} \mid G_t, G_0) \,\|\, p_\theta(G_{t-1} \mid G_t, \tilde{G}) \big) \Big]}_{\mathcal{L}_b} - \underbrace{D_{\mathrm{KL}}\big( q(G_T \mid G_0) \,\|\, p(G_T) \big)}_{\mathcal{L}_c} = \mathcal{L}_{\mathrm{ELBO}}(\theta). \quad (23b)

Then, the parameters \theta of the CGDM can be learned by maximizing the ELBO:

\operatorname*{argmin}_{\theta} \mathcal{L}(\theta) = \mathbb{E}\big( -\mathcal{L}_{\mathrm{ELBO}}(\theta) \big) = \mathbb{E}\big( \mathcal{L}_c + \mathcal{L}_b - \mathcal{L}_a \big), \quad (24)

where \mathcal{L}(\theta) is the objective function for CGDM training, \mathcal{L}_c is a constant from (23b) and can be excluded from the optimization, and \mathcal{L}_a can be approximated and optimized via a Monte Carlo estimate [23]. From the above analysis, it is evident that the training objective of the CGDM is primarily determined by \mathcal{L}_b: the CGDM is trained to approximate the transition density of the reverse process as closely as possible, thereby minimizing the Kullback-Leibler (KL) divergence between p_\theta(G_{t-1} \mid G_t, \tilde{G}) = \mathcal{N}\big( G_{t-1}; \mu_\theta(\tilde{G}, G_t, t), \Sigma_\theta(\tilde{G}, G_t, t) \big) and q(G_{t-1} \mid G_t, G_0) = \mathcal{N}\big( G_{t-1}; \tilde{\mu}_t(G_t, G_0), \tilde{\beta}_t I \big), where

q(G_{t-1} \mid G_t, G_0) \propto \exp\left( -\frac{1}{2} \cdot \frac{1-\bar{\alpha}_t}{(1-\alpha_t)(1-\bar{\alpha}_{t-1})} \left( G_{t-1}^2 - 2\, \frac{\sqrt{\alpha_t}(1-\bar{\alpha}_{t-1})\, G_t + \sqrt{\bar{\alpha}_{t-1}}(1-\alpha_t)\, G_0}{1-\bar{\alpha}_t}\, G_{t-1} \right) \right). \quad (26)

Therefore, we need to obtain explicit expressions for the mean \tilde{\mu}_t(G_t, G_0) and variance \tilde{\beta}_t I of q(G_{t-1} \mid G_t, G_0).
Specifically, q(G_{t-1} \mid G_t, G_0) can be expressed as

q(G_{t-1} \mid G_t, G_0) = \frac{q(G_t \mid G_{t-1}, G_0)\, q(G_{t-1} \mid G_0)}{q(G_t \mid G_0)}. \quad (25)

Combining (14) and (15), (25) can be further rewritten as (26). Based on (26), the mean and variance of q(G_{t-1} \mid G_t, G_0) are explicitly expressed as

\tilde{\mu}_t(G_t, G_0) = \frac{1}{\sqrt{\alpha_t}} \left( G_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\, \varepsilon_t \right), \quad (27)
\tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\, \beta_t. \quad (28)

Generally, the variance \Sigma_\theta(\tilde{G}, G_t, t) is set to the constant \tilde{\beta}_t I [23], [31]. Therefore, to ensure that the learned denoising transition density closely approximates the ground-truth denoising transition density, we can simplify the optimization of the KL divergence term to minimizing the difference between the means of the two distributions. In this case, we only need to train the CGDM to predict \tilde{\mu}_t(G_t, G_0):

\mu_\theta(\tilde{G}, G_t, t) = \frac{1}{\sqrt{\alpha_t}} \left( G_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \varepsilon_\theta(\tilde{G}, G_t, t) \right). \quad (29)

Recall that the KL divergence between two Gaussians is defined as

D_{\mathrm{KL}}\big( \mathcal{N}(x; \mu_x, \Sigma_x) \,\|\, \mathcal{N}(y; \mu_y, \Sigma_y) \big) = \frac{1}{2} \left[ \log \frac{|\Sigma_y|}{|\Sigma_x|} - d + \operatorname{tr}\big( \Sigma_y^{-1} \Sigma_x \big) + (\mu_y - \mu_x)^T \Sigma_y^{-1} (\mu_y - \mu_x) \right], \quad (30)

where d represents the dimension of x. Substituting (27), (29), and (30) into \mathcal{L}_b in (23b) yields

\mathcal{L}_b = \sum_{t=2}^{T} \mathbb{E}_{q(G_t \mid G_0)} \left[ \frac{(1-\alpha_t)^2}{2\tilde{\beta}_t\, \alpha_t (1-\bar{\alpha}_t)} \big\| \varepsilon_t - \varepsilon_\theta(\tilde{G}, G_t, t) \big\|_2^2 \right]. \quad (31)

Substituting (23b) and (31) into (24), the objective function \mathcal{L}(\theta) for the CGDM can be further simplified to

\mathcal{L}(\theta) := \sum_{t=1}^{T} \mathbb{E}_{G_0, \varepsilon_t} \left[ \Big\| \varepsilon_t - \varepsilon_\theta\big(\tilde{G}, \underbrace{\sqrt{\bar{\alpha}_t}\, G_0 + \sqrt{1-\bar{\alpha}_t}\, \varepsilon_t}_{G_t}, t\big) \Big\|_2^2 \right]. \quad (32)

Based on the trained CGDM, given any noise-contaminated CF G_t, the model can leverage the side information in \tilde{G} to predict the noise \varepsilon_t and
subsequently obtain an approximation of the target CF, \hat{G}_0, through the transformation (14), i.e.,

\hat{G}_0 = \frac{1}{\sqrt{\bar{\alpha}_t}} \bigg( G_t - \sqrt{1-\bar{\alpha}_t}\; \varepsilon_\theta\big(\tilde{G}, \underbrace{\sqrt{\bar{\alpha}_t}\, G_0 + \sqrt{1-\bar{\alpha}_t}\, \varepsilon_t}_{G_t}, t\big) \bigg). \quad (33)

Through reparameterization, (33) represents the result of iterative refinements, with each iteration of our proposed CGDM given by

G_{t-1} \leftarrow \frac{1}{\sqrt{\alpha_t}} \left( G_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \varepsilon_\theta(\tilde{G}, G_t, t) \right) + \sqrt{\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\, \beta_t}\; \varepsilon^*, \quad (34)

where \varepsilon^* \sim \mathcal{N}(0, I). Note that the noise estimation step in (34) is analogous to a step of Langevin dynamics in score-based generative models [30]; it is equivalent to estimating the first derivative of the log-likelihood of the observed samples, also known as the gradient or Stein score. For clarity, we summarize the training process and the iterative inference process of the proposed CGDM in Algorithm 1 and Algorithm 2, respectively.

E. Network Architecture of the Proposed CGDM

In this subsection, we present the architecture of the CGDM, a variant of the U-Net model, as illustrated in Fig. 3. The network is composed of three primary stages: Dn, Mid, and Up. For clarity, we provide a concise overview of the key components of each stage, including the time embedding block, the Res+ block, the self-attention block, the downsampling block, and the upsampling block.

1) Time Embedding Block: To encode the time parameter [1, \ldots, t, \ldots, T] of the diffusion process, sine and cosine functions with different frequencies are employed, akin to the sinusoidal positional encoding approach used in [32]. This yields the time embedding vector \Lambda_t, which effectively captures the temporal characteristics.
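For concreteness, one iteration of the training procedure summarized in Algorithm 1 — sample a time step, corrupt G_0 in closed form via (14), and regress the injected noise as in (32) — can be sketched in NumPy. The `dummy_denoiser` below is a hypothetical stand-in for the network \varepsilon_\theta(\cdot):

```python
import numpy as np

def training_step(eps_theta, G_lr, G0, alpha_bars, rng):
    """One iteration of Algorithm 1: sample t and eps_t, form G_t via (14),
    and evaluate the simplified objective (32) for that (t, eps_t) draw."""
    t = rng.integers(1, len(alpha_bars))            # t ~ Uniform(1, ..., T)
    eps_t = rng.standard_normal(G0.shape)           # eps_t ~ N(0, I)
    Gt = np.sqrt(alpha_bars[t]) * G0 + np.sqrt(1 - alpha_bars[t]) * eps_t  # (14)
    residual = eps_t - eps_theta(G_lr, Gt, t)       # noise-prediction error
    return np.sum(residual ** 2)                    # ||eps_t - eps_theta(...)||_2^2

betas = np.linspace(1e-6, 1e-2, 1000)               # linear variance schedule
alpha_bars = np.cumprod(1.0 - betas)
rng = np.random.default_rng(0)
G0 = rng.standard_normal((32, 32))                  # a toy "HR CF" sample
dummy_denoiser = lambda G_lr, Gt, t: np.zeros_like(Gt)
loss = training_step(dummy_denoiser, np.zeros((32, 32)), G0, alpha_bars, rng)
print(loss)
```

In an actual training loop, `loss` would be averaged over a batch and backpropagated through \varepsilon_\theta via the optimizer, as in step 7 of Algorithm 1.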
Fig. 3. Diagram and key modules of the CGDM architecture. The network consists of three primary stages: Dn (substages 1-5), Mid (substage 6), and Up (substages 7-12). The blocks included in each stage are illustrated in subfigures (a) Res+ block, (b) self-attention block, (c) downsampling block, (e) upsampling block, (f) time embedding block, and (g) GSDC block (Group Norm, Swish, Dropout, 3x3 Conv); (d) shows the overall network architecture of the proposed CGDM. The red and purple arrows represent the embedding of the time constant t and the skip connections, respectively. Taking the x4 HR CF reconstruction task as an example, the LR CF with a size of 32x32x3 is upsampled to the target resolution (i.e., 128x128x3) and concatenated with noise of the same resolution along the channel dimension to form the input, resulting in a size of 128x128x6. The number 2c_in of input channels is expanded to c_1, the number of base channels in the latent space, after passing through substage 0. Within the same substage, the height h and width w of the feature maps remain unchanged, while the number c of channels at each substage is controlled by the channel number multiplier \bar{\eta} = c_1 : c_2 : c_3 : c_4 : c_5. The specific values of c_1, \bar{\eta}, and N_{RA} will be discussed in Subsection V-B.

Let \Lambda_t^{(j)} \in \mathbb{R} denote the j-th component of the time embedding vector at time step t, defined as

\Lambda_t^{(j)} = \begin{cases} \cos\big( t / 10000^{2j/c_{\mathrm{time}}} \big), & \text{if } j \text{ is odd}, \\ \sin\big( t / 10000^{2j/c_{\mathrm{time}}} \big), & \text{if } j \text{ is even}, \end{cases} \quad (35)

where j = 0, 1, 2, \ldots, c_{\mathrm{time}}/2 - 1, and c_{\mathrm{time}} is the dimension of the time embedding vector.
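The sinusoidal encoding of (35)-(36) can be transcribed directly; the sketch below interleaves the sin/cos components in the order of (36) and assumes c_time is even:

```python
import math

def time_embedding(t, c_time):
    """Sinusoidal time embedding Lambda_t of (36): frequency pair j uses
    w_j = 1 / 10000^(2j/c_time), contributing sin(w_j t) and cos(w_j t)."""
    emb = []
    for j in range(c_time // 2):
        w_j = 1.0 / 10000.0 ** (2.0 * j / c_time)
        emb += [math.sin(w_j * t), math.cos(w_j * t)]
    return emb

lam = time_embedding(t=10, c_time=128)
print(len(lam))  # -> 128
```

A useful property of this encoding, stated in (37)-(38) below, is that the embedding at a shifted time t + delta_t is a fixed block-rotation of the embedding at time t, since each (sin, cos) pair rotates by the angle w_j * delta_t.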
Then, the embedding vector for the time constant t can be expressed as

\Lambda_t = \big[ \sin(w_0 t), \cos(w_0 t), \sin(w_1 t), \cos(w_1 t), \ldots, \sin(w_{c_{\mathrm{time}}/2-1}\, t), \cos(w_{c_{\mathrm{time}}/2-1}\, t) \big], \quad (36)

where w_j = 1 / 10000^{2j/c_{\mathrm{time}}}. Leveraging (36), the embedding vector
for any given time, \Lambda_{t+\Delta t}, can be obtained through a linear transformation expressed as

\Lambda_{t+\Delta t}^T = \begin{bmatrix} \sin(w_0 (t+\Delta t)) \\ \cos(w_0 (t+\Delta t)) \\ \vdots \\ \sin(w_{c_{\mathrm{time}}/2-1} (t+\Delta t)) \\ \cos(w_{c_{\mathrm{time}}/2-1} (t+\Delta t)) \end{bmatrix} = M_{\Delta t} \cdot \Lambda_t^T. \quad (37)

M_{\Delta t} is a block-diagonal linear transformation matrix defined as

M_{\Delta t} = \begin{pmatrix} R(w_0 \Delta t) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & R(w_{c_{\mathrm{time}}/2-1} \Delta t) \end{pmatrix}, \quad (38)

where R(w_0 \Delta t) = \begin{pmatrix} \cos(w_0 \Delta t) & \sin(w_0 \Delta t) \\ -\sin(w_0 \Delta t) & \cos(w_0 \Delta t) \end{pmatrix}. To enable the model to capture more intricate temporal features, the time embedding is further enhanced through two fully connected layers and an activation layer, expressed as \Lambda'_t = f_{\mathrm{Ful}}(f_{\mathrm{Swi}}(f_{\mathrm{Ful}}(\Lambda_t))), where f_{\mathrm{Ful}}(\cdot) and f_{\mathrm{Swi}}(\cdot) represent the fully connected layer and the swish activation function, respectively. The components of the time embedding block are summarized in Fig. 3 (f).

2) Res+ Block: We design deeper feature extraction networks aimed at learning higher-level semantic information from the CF. Nonetheless, deep neural networks frequently suffer model degradation as a result of challenges such as vanishing and exploding gradients. To this end, we introduce residual connections to alleviate these concerns. Specifically, let X \in \mathbb{R}^{h_{\mathrm{in}} \times w_{\mathrm{in}} \times c_{\mathrm{in}}} and \Lambda_t be the feature map and time embedding vector inputs, respectively. The output X' \in \mathbb{R}^{h_{\mathrm{in}} \times w_{\mathrm{in}} \times c_{\mathrm{out}}} of the Res+ block is given by

X' = \begin{cases} \Omega + f_{\mathrm{Conv},1}(X), & \text{if } c_{\mathrm{out}} \neq c_{\mathrm{in}}, \\ \Omega + X, & \text{else}, \end{cases} \quad (39)

where f_{\mathrm{Conv},1}(\cdot) denotes the 1x1 convolution operation, and \Omega \in \mathbb{R}^{h_{\mathrm{in}} \times w_{\mathrm{in}} \times c_{\mathrm{out}}} is defined as

\Omega = g\big( g(X) + f_{\mathrm{Ful}}(f_{\mathrm{Swi}}(f_{\mathrm{Ful}}(\Lambda_t))) \big). \quad (40)

Note that g(X) = f_{\mathrm{Conv},3}(f_{\mathrm{Dro}}(f_{\mathrm{Swi}}(f_{\mathrm{Gn}}(X)))), where f_{\mathrm{Conv},3}(\cdot) is the 3x3 convolution operation, and f_{\mathrm{Dro}}(\cdot), f_{\mathrm{Gn}}(\cdot) are the dropout and group normalization layers, respectively. The components of the Res+ block are summarized in Fig. 3 (a).

3) Self-Attention Block: To capture global structural information, we introduce a self-attention mechanism to improve the interaction between global and local features.
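A shape-level sketch of the Res+ wiring of (39)-(40), for the c_out = c_in case, is given below. The GSDC sub-block g(·) is reduced to a normalization-plus-swish stand-in (the actual block uses GroupNorm, Swish, Dropout, and a 3x3 convolution), and the two fully connected layers acting on the time embedding are replaced by a single random linear map, so only the residual and time-injection structure is faithful:

```python
import numpy as np

rng = np.random.default_rng(0)

def gsdc(X):
    """Stand-in for the GSDC sub-block g(X): crude normalization + swish only,
    so the residual wiring of the Res+ block stays the focus."""
    Xn = (X - X.mean()) / (X.std() + 1e-5)
    return Xn / (1.0 + np.exp(-Xn))             # swish(x) = x * sigmoid(x)

def res_plus_block(X, lam_emb):
    """Res+ block for c_out == c_in, eq (39): X' = Omega + X, where
    Omega = g(g(X) + MLP(Lambda_t)) as in (40); the per-channel time features
    are broadcast over the spatial dimensions h x w."""
    c_in = X.shape[-1]
    W = rng.standard_normal((lam_emb.shape[0], c_in)) * 0.1   # MLP stand-in
    t_feat = np.tanh(lam_emb @ W)                             # shape (c_in,)
    omega = gsdc(gsdc(X) + t_feat)
    return omega + X                                          # identity skip

X = rng.standard_normal((16, 16, 8))            # h x w x c feature map
out = res_plus_block(X, rng.standard_normal(32))
print(out.shape)
```

When c_out differs from c_in, the skip path is replaced by a 1x1 convolution f_Conv,1(X), which acts as a per-pixel linear channel projection.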
Specifically, to implement the self-attention mechanism, a normalized attention matrix is introduced to represent varying degrees of attention to the input, with greater weights assigned to more significant input components. The final output is generated by weighting the input according to the attention weights in the attention matrix. Given the input Z = [z_1, \ldots, z_m]^T \in \mathbb{R}^{d_m \times d_n}, where d_m denotes the number of image patches and d_n is the feature dimension of each patch, three different linear transformations are applied to z_j [32], [33]:

k_j = z_j W_k, \quad j = 1, \ldots, d_m, \quad (41a)
q_j = z_j W_q, \quad j = 1, \ldots, d_m, \quad (41b)
v_j = z_j W_v, \quad j = 1, \ldots, d_m, \quad (41c)

where k_j \in \mathbb{R}^{1 \times d_k}, q_j \in \mathbb{R}^{1 \times d_q}, and v_j \in \mathbb{R}^{1 \times d_n} are the key, query, and value vectors, respectively, and W_k \in \mathbb{R}^{d_n \times d_k}, W_q \in \mathbb{R}^{d_n \times d_q}, and W_v \in \mathbb{R}^{d_n \times d_n} are the respective trainable transformation matrices, with d_k = d_q. The weight allocation is determined by k_j and q_{j'}: a higher correlation q_{j'} k_j^T implies that the features of the j-th input patch z_j hold greater importance for the j'-th output patch. This correlation is adaptively adjusted based on the input Z and the matrices W_k and W_q. For clarity, the matrix forms of (41a)-(41c) are K = Z W_k, Q = Z W_q, V = Z W_v, where K = [k_1, \ldots, k_m]^T \in \mathbb{R}^{d_m \times d_k}, Q = [q_1, \ldots, q_m]^T \in \mathbb{R}^{d_m \times d_q}, and V = [v_1, \ldots, v_m]^T \in \mathbb{R}^{d_m \times d_n}. Leveraging K and Q, we obtain the attention matrix S_{\mathrm{AM}} \in \mathbb{R}^{d_m \times d_m}, denoted as

S_{\mathrm{AM}} = \mathrm{Softmax}\left( \frac{Q K^T}{\sqrt{\eta}} \right), \quad (42)

where the softmax normalizes each row as \mathrm{Softmax}(z)_j = \exp(z_j) / \sum_{j'} \exp(z_{j'}), and \sqrt{\eta} > 0 is a scaling factor. Each row of the attention matrix is a vector of attention scores: each score is a probability, so all scores are non-negative and sum to 1. Note that when the key vector K[j, :] and the query Q[j', :] have
a better match, the corresponding attention score S_{\mathrm{AM}}[j', j] is higher. Thus, the output of the attention mechanism corresponding to the r-th component can be represented by the weighted sum of all inputs, i.e., z'_r = \sum_j S_{\mathrm{AM}}[r, j]\, v_j = S_{\mathrm{AM}}[r, :] \cdot V, where z'_r \in \mathbb{R}^{1 \times d_n} is the r-th output, computed by adaptively focusing on the inputs based on the attention scores S_{\mathrm{AM}}[r, j]. When the attention score S_{\mathrm{AM}}[r, j] is higher, the associated value vector v_j has a more significant impact on the r-th output patch. Finally, the output Z' of the attention block is given by

Z' = S_{\mathrm{AM}} \cdot V = \mathrm{Softmax}\left( \frac{Z W_q W_k^T Z^T}{\sqrt{\eta}} \right) Z W_v, \quad (43)

where Z' = [z'_1, \ldots, z'_m]^T \in \mathbb{R}^{d_m \times d_n}. The components of the self-attention block are summarized in Fig. 3 (b). Notably, as illustrated in Fig. 3, the network framework of the proposed CGDM consists of three primary stages: Dn, Mid, and Up. Each substage within these stages comprises N_{RA} Res+ blocks, along with their corresponding self-attention blocks.

IV. LIGHTWEIGHT CGDM FOR SR CF

Considering that the proposed CGDM trades significant memory consumption and latency for outstanding performance, its deployment on personal computers and even mobile devices is highly constrained. Specifically, the CGDM utilizes a variant of U-Net as its backbone, and most of the memory consumption and latency arise from this architecture. To this end, in this section, we leverage the additive property of network layers and design a one-shot pruning approach, along with a corresponding knowledge distillation technique, to obtain a lightweight CGDM (LiCGDM).

A.
Efficient Layer Pruning for the CGDM

Given a specific pruning ratio, the objective of layer pruning is to eliminate a subset of prunable layers from the U-Net architecture in the CGDM while minimizing the degradation in the model's performance.

Algorithm 3 Efficient Layer Pruning for LiCGDM
1: Input: A teacher (original) network with prunable layers \tilde{\mathcal{L}}_{\tilde{n}} = \{\tilde{l}_1, \ldots, \tilde{l}_{\tilde{n}}\}, a CF dataset \mathcal{D}, the number of parameters \mathcal{P} to be pruned
2: Initialize: Arrays values[ ] = 0 and weights[ ] = 0, Knapsack capacity C = 0
3: for p_i in \tilde{\mathcal{L}}_{\tilde{n}} do
4:   Calculate the value of the objective function (48a) on \mathcal{D}: values[i] = \mathbb{E}\|\varepsilon_{\mathrm{tea}} - \varepsilon(p_i)\|_2^2
5:   Calculate the number of parameters of the p_i-th layer: weights[i] = \mathrm{Params}(p_i)
6: end for
7: C = \mathcal{P}
8: Obtain \mathcal{P}_{\tilde{m}} by solving problem (48) with the dynamic programming algorithm, given values, weights, C
9: Return \mathcal{P}_{\tilde{m}}
10: Output: The set of layers to be pruned, \mathcal{P}_{\tilde{m}} \subseteq \tilde{\mathcal{L}}_{\tilde{n}}

We define the set of all \tilde{n} prunable layers as \tilde{\mathcal{L}}_{\tilde{n}} = \{\tilde{l}_1, \ldots, \tilde{l}_{\tilde{n}}\}. Inspired by [34], we minimize the mean-squared-error (MSE) loss between the final output of the original network (the teacher) and that of the pruned network (the student) as the pruning objective. Specifically, let \varepsilon_{\mathrm{tea}} denote the output of the original CGDM, p_i \in \tilde{\mathcal{L}}_{\tilde{n}} represent the i-th pruned layer, and \varepsilon(p_1, p_2, \ldots, p_{\tilde{m}}) denote the output of the network with layers p_1, p_2, \ldots, p_{\tilde{m}} pruned, where \tilde{m} is not fixed a priori. This optimization problem can be formulated as

\min_{p_1, p_2, \ldots, p_{\tilde{m}}} \; \mathbb{E}\big\| \varepsilon_{\mathrm{tea}} - \varepsilon(p_1, p_2, \ldots, p_{\tilde{m}}) \big\|_2^2, \quad (44a)
\text{s.t. } \{p_1, p_2, \ldots, p_{\tilde{m}}\} \subseteq \tilde{\mathcal{L}}_{\tilde{n}}, \quad \sum_{i=1}^{\tilde{m}} \mathrm{Params}(p_i) \geq \mathcal{P}, \quad (44b)

where \mathrm{Params}(p_i) represents the number of parameters in the p_i-th layer. Note that \mathcal{P} denotes the number of parameters to be
pruned, calculated by multiplying the total number of parameters in the teacher network by the pruning ratio. Solving the optimization problem (44) is NP-hard; therefore, we need to find a surrogate objective. Capitalizing on the triangle inequality, we can obtain an upper bound on (44a):

\mathbb{E}\big\| \varepsilon_{\mathrm{tea}} - \varepsilon(p_1, p_2, \ldots, p_{\tilde{m}}) \big\|_2^2 \leq \mathcal{L}_{\mathrm{upper}}, \quad (45)

where \mathcal{L}_{\mathrm{upper}} = \mathbb{E}\|\varepsilon_{\mathrm{tea}} - \varepsilon(p_1)\|_2^2 + \mathbb{E}\|\varepsilon(p_1) - \varepsilon(p_1, p_2)\|_2^2 + \ldots + \mathbb{E}\|\varepsilon(p_1, p_2, \ldots, p_{\tilde{m}-1}) - \varepsilon(p_1, p_2, \ldots, p_{\tilde{m}})\|_2^2. Then, (44) can be transformed into

\min_{p_1, p_2, \ldots, p_{\tilde{m}}} \; \mathcal{L}_{\mathrm{upper}}, \quad (46a)
\text{s.t. } \{p_1, p_2, \ldots, p_{\tilde{m}}\} \subseteq \tilde{\mathcal{L}}_{\tilde{n}}, \quad \sum_{i=1}^{\tilde{m}} \mathrm{Params}(p_i) \geq \mathcal{P}. \quad (46b)

However, solving the surrogate objective (46a) also remains NP-hard. Note that each term in (46a) is the MSE between a (pruned or unpruned) network and the same network with one additional layer pruned. By leveraging the additive property of network layers, whereby the output distortion caused by multiple perturbations can be approximated as the sum of the distortions caused by each individual perturbation, this additivity is formulated as [34], [35]

\mathbb{E}\big\| \varepsilon(p_1, \ldots, p_{i-1}, p_i) - \varepsilon(p_1, \ldots, p_{i-1}) \big\|_2^2 \approx \mathbb{E}\big\| \varepsilon(p_1, \ldots, p_{i-2}, p_i) - \varepsilon(p_1, \ldots, p_{i-2}) \big\|_2^2 \approx \ldots \approx \mathbb{E}\big\| \varepsilon(p_1, p_i) - \varepsilon(p_1) \big\|_2^2 \approx \mathbb{E}\big\| \varepsilon(p_i) - \varepsilon_{\mathrm{tea}} \big\|_2^2. \quad (47)

Leveraging (47), the surrogate objective in (46) can be further approximated as

\min_{p_1, p_2, \ldots, p_{\tilde{m}}} \; \sum_{i=1}^{\tilde{m}} \mathbb{E}\big\| \varepsilon_{\mathrm{tea}} - \varepsilon(p_i) \big\|_2^2, \quad (48a)
\text{s.t. } \{p_1, p_2, \ldots, p_{\tilde{m}}\} \subseteq \tilde{\mathcal{L}}_{\tilde{n}}, \quad \sum_{i=1}^{\tilde{m}} \mathrm{Params}(p_i) \geq \mathcal{P}. \quad (48b)

In (48), the term \mathbb{E}\|\varepsilon_{\mathrm{tea}} - \varepsilon(p_i)\|_2^2 acts as the criterion for pruning. Therefore, we only need to compute \mathbb{E}\|\varepsilon_{\mathrm{tea}} - \varepsilon(\tilde{l}_i)\|_2^2, the output loss between the original network and the network with only the \tilde{l}_i-th layer pruned, once per layer \tilde{l}_i \in \tilde{\mathcal{L}}_{\tilde{n}}, i.e., O(\tilde{n}) evaluations in total. This transforms the problem into a variant of the 0-1 Knapsack problem, which can be solved with the classical dynamic programming algorithm.
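The selection problem (48) — minimize the summed per-layer distortions subject to removing at least P parameters — can be sketched with a standard dynamic programming recursion over a clipped pruned-parameter budget. The per-layer values and weights below are toy numbers, not measurements from the paper:

```python
def select_layers_to_prune(values, weights, P):
    """Solve problem (48): choose a subset of prunable layers removing at least P
    parameters while minimizing the summed distortions E||eps_tea - eps(p_i)||^2.
    dp[w] holds the minimal total distortion with min(pruned params, P) == w."""
    INF = float("inf")
    dp = [0.0] + [INF] * P
    chosen = [set() for _ in range(P + 1)]        # layer indices per DP state
    for i, (v, wt) in enumerate(zip(values, weights)):
        for w in range(P, -1, -1):                # descending: 0-1 (use-once) items
            if dp[w] == INF:
                continue
            nw = min(P, w + wt)                   # clip accumulated weight at P
            if dp[w] + v < dp[nw]:
                dp[nw] = dp[w] + v
                chosen[nw] = chosen[w] | {i}
    return chosen[P], dp[P]

# Toy example: distortion "values" and parameter-count "weights" per layer,
# with at least P = 50 parameters to remove.
pruned, cost = select_layers_to_prune(
    values=[0.9, 0.1, 0.4, 0.2], weights=[40, 30, 25, 20], P=50)
print(pruned, cost)
```

The state space is the capacity C = P, matching the O(ñC) time and O(C) storage complexity discussed for Algorithm 3 below; in practice P would be the parameter budget implied by the pruning ratio.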
Finally, our designed one-shot layer pruning algorithm for the CGDM is summarized in Algorithm 3. The total time complexity of Algorithm 3 is O(\tilde{n}U/\bar{s} + \tilde{n}C) and the storage complexity is O(C), where U is the number of training samples, \bar{s} is the number of parallel processes, and C is the Knapsack capacity.

B. Fine-Tuning LiCGDM with Multi-Objective Distillation

Typically, the performance of the CGDM may degrade after certain layers are removed from the teacher network. To address this, the pruned CGDM, referred to as LiCGDM, requires further weight readjustment to restore its performance. Therefore, we enhance the reconstruction performance of the LiCGDM by introducing a re-weighting strategy based on knowledge distillation. Specifically, the overall retraining of the LiCGDM combines one task objective and two knowledge distillation objectives:

\mathcal{L}_{\mathrm{KD}} = \mathcal{L}_{\mathrm{Task}} + \lambda_O \mathcal{L}_{\mathrm{OKD}} + \lambda_F \mathcal{L}_{\mathrm{FKD}}, \quad (49)

where

\mathcal{L}_{\mathrm{Task}} = \mathbb{E}_{\tilde{G}, G_t, \varepsilon_t} \Big[ \big\| \varepsilon_t - \varepsilon_S(\tilde{G}, G_t, t) \big\|_2^2 \Big], \quad (50)
\mathcal{L}_{\mathrm{OKD}} = \mathbb{E}_{\tilde{G}, G_t, t} \Big[ \big\| \varepsilon_T(\tilde{G}, G_t, t) - \varepsilon_S(\tilde{G}, G_t, t) \big\|_2^2 \Big], \quad (51)
\mathcal{L}_{\mathrm{FKD}} = \sum_i \mathbb{E}_{\tilde{G}, G_t, t} \Big[ \big\| \mathcal{F}^i_T(\tilde{G}, G_t, t) - \mathcal{F}^i_S(\tilde{G}, G_t, t) \big\|_2^2 \Big]. \quad (52)

Here, \varepsilon_T(\tilde{G}, G_t, t) and \varepsilon_S(\tilde{G}, G_t, t) represent the outputs of the teacher model (the frozen CGDM) and the student model (the LiCGDM), respectively, while \mathcal{F}^i_T(\tilde{G}, G_t, t) and \mathcal{F}^i_S(\tilde{G}, G_t, t) denote the feature maps at the end of the i-th stage of the CGDM and the LiCGDM, respectively. Without any hyperparameter tuning, we set both \lambda_O and \lambda_F to 1.

V. NUMERICAL EXPERIMENT

In this
section, the implementation details are first introduced. Then, we analyze the convergence and complexity of the proposed model under different hyperparameter settings. Finally, we comprehensively evaluate the proposed approach in terms of reconstruction accuracy and knowledge transfer capability, and compare it against several state-of-the-art methods.

A. Implementation Details

i) Wireless Communication Scenario Setup: The layout of the communication scenario is illustrated in Fig. 4. Specifically, we consider a massive MIMO-OFDM system operating within a communication area \mathcal{A} measuring 128 m x 128 m, where the BS is equipped with an 8x8 UPA with half-wavelength spacing. The ground-truth channels are generated using the widely adopted QuaDRiGa generator (version 2.6.1) [36], which employs a geometry-based stochastic channel modeling approach to simulate realistic radio channel impulse responses for mobile radio networks. Meanwhile, we consider the 5G NR typical urban cell scenario "3GPP 38.901 UMa NLOS", which encompasses both LOS and NLOS physical propagation environments. The relevant simulation parameters are summarized in Table I.

ii) CF Data Generation: In the aforementioned wireless communication scenario, we set \rho_{\mathrm{HR}} = 128 and utilize a sampling interval of \Delta x = \Delta y = 1 m to sample the target area \mathcal{A} along with its corresponding channel power values, resulting in the HR CF, denoted as G_{\mathrm{HR},u}. For the highly challenging x4 SR CF reconstruction task, \rho_{\mathrm{LR}} is set to 32, meaning the sampling interval for the LR CF is \Delta x = \Delta y = 4 m, thereby obtaining the LR CF, defined as G_{\mathrm{LR},u}. In this manner, we generate 6,000 pairs of CF samples, denoted as \{G_{\mathrm{HR},u}, G_{\mathrm{LR},u}\}_{u=1}^{6000}, by running QuaDRiGa simulations with different BS locations (x, y), where x and y are randomly chosen integers in the range [1, 128].
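On a common 1 m grid, the LR/HR pairing described above (sampling the same area at 1 m and 4 m intervals) reduces to keeping every fourth sample along each spatial dimension. This is a simplifying assumption for illustration — the paper generates both resolutions from QuaDRiGa directly:

```python
import numpy as np

def subsample_cf(G_hr, factor=4):
    """Form an LR CF by taking every `factor`-th grid point of the HR CF,
    matching the Delta_x = Delta_y = 4 m interval of the x4 task."""
    return G_hr[::factor, ::factor, ...]

G_hr = np.random.default_rng(0).standard_normal((128, 128, 3))  # toy HR CF
G_lr = subsample_cf(G_hr)
print(G_lr.shape)  # -> (32, 32, 3)
```

The LR sample at grid index (i, j) then coincides with the HR sample at (4i, 4j), so the two resolutions describe the same physical locations.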
This data generation method not only preserves the diversity of the dataset, capturing the complexity and dynamic variability inherent in wireless communication environments, but also further validates the generalization performance of the proposed model. These raw samples are subsequently divided into training and testing sets at a 5:1 ratio. To enhance the efficiency of the training process, we apply min-max normalization to the raw CF, i.e.,

G'(e, \Delta_{i,j}) = \frac{G(e, \Delta_{i,j}) - \min(G(e, \Delta_{i,j}))}{\max(G(e, \Delta_{i,j})) - \min(G(e, \Delta_{i,j}))}. \quad (53)

iii) Training Strategy and Model Configuration: At the hardware level, the proposed CGDM is trained on 2 Nvidia RTX-4090 GPUs, each with 24 GB of memory, and tested on a single Nvidia RTX-4090 GPU with 24 GB of memory. At the algorithm level, we employ the Adam optimizer with a learning rate of 5 x 10^-5 for model parameter updates over 500,000 iterations, and the batch size is set to 16. Starting from the 5,000th iteration, we introduce the exponential moving

TABLE I
WIRELESS SYSTEM AND MODEL SETUP PARAMETERS

Parameter | Value
Size of the area of interest A | 128 m x 128 m
Number of BS antennas | N_{r,v} x N_{r,h} = 8 x 8
Center frequency | 2.4 GHz
Subcarrier spacing | \Delta f = 15 kHz
Subcarriers | N_c = 512
Active subcarriers | N_k = 300
UE velocity | 0.3 m/s
BS height | 25 m
UE height | 1.5 m
Number of base channels | c_1 = 64
Number of integrated Res+ and self-attention blocks | N_{RA} = 2
Channel number multiplier | \bar{\eta} = 1 : 2 : 4 : 8 : 16
EMA rate | e_r = 0.9999
Learning rate |
lr = 5 x 10^-5
Batch size | 16

Fig. 4. The layout of the massive MIMO-OFDM system over the area of interest, with corner coordinates (0, 0), (128, 0), (0, 128), and (128, 128).

average algorithm [23], with the decay factor set to 0.9999. Additionally, we incorporate dropout, with the dropout rate set to 0.1. The number of forward diffusion steps T is set to 1000, and the diffusion noise level adheres to a linear variance schedule ranging from \beta_1 = 10^{-6} to \beta_T = 10^{-2}. To ensure the model's generalization capability, we forgo checkpoint selection for the CGDM and utilize only the most recent checkpoint. More detailed hyperparameter settings for the model are presented in Table I.

iv) Performance Evaluation Metrics: For a fair comparison, we employ four widely adopted metrics, namely normalized mean squared error (NMSE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM). These metrics are defined as follows:

\mathrm{NMSE} = \frac{\sum_{i=0}^{N_x} \sum_{j=0}^{N_y} \| \hat{G}(e, \Delta_{i,j}) - G'(e, \Delta_{i,j}) \|_2^2}{\sum_{i=0}^{N_x} \sum_{j=0}^{N_y} \| G'(e, \Delta_{i,j}) \|_2^2},
\mathrm{MSE} = \frac{1}{N_x N_y} \sum_{i=0}^{N_x} \sum_{j=0}^{N_y} \| \hat{G}(e, \Delta_{i,j}) - G'(e, \Delta_{i,j}) \|_2^2,
\mathrm{PSNR} = 20 \log_{10}\!\left( \frac{255}{\sqrt{\mathrm{MSE}}} \right),
\mathrm{SSIM} = \frac{(2 u_{\hat{G}} u_{G'} + C_1)(2 \delta_{\hat{G}G'} + C_2)}{(u_{\hat{G}}^2 + u_{G'}^2 + C_1)(\delta_{\hat{G}}^2 + \delta_{G'}^2 + C_2)},

where N_x = N_y = \rho, and \hat{G}(e, \Delta_{i,j}) and G'(e, \Delta_{i,j}) represent the predicted channel power and the ground-truth channel power, respectively. Here, \hat{G} is the reconstructed HR CF and G' is the ground-truth (normalized) CF; u_{\hat{G}} and u_{G'} are the means and \delta^2_{\hat{G}} and \delta^2_{G'} are the variances of \hat{G} and G', respectively; \delta_{\hat{G}G'} is the covariance of \hat{G} and G'; and C_1 and C_2 are nonzero constants.

B. Experiment Results

i) Convergence and Complexity Analysis: According to the proposed CGDM architecture shown in Fig. 3, the performance of the CGDM depends on the number c_1 of base channels in the feature maps within the latent space, as well as the number N_{RA} of integrated modules that combine the Res+ block and the self-attention block. Additionally, during model training, the depth of the feature maps, namely the number of channels, also affects the CGDM's ability to represent features.
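The four evaluation metrics defined in the implementation details above can be transcribed directly. In this sketch, SSIM is computed globally over the whole map rather than with sliding windows, and the constants C1 = (0.01·255)^2 and C2 = (0.03·255)^2 are the common defaults — both are assumptions, since the paper does not state them:

```python
import numpy as np

def nmse(G_hat, G_true):
    """Normalized MSE between reconstructed and ground-truth CF power maps."""
    return np.sum((G_hat - G_true) ** 2) / np.sum(G_true ** 2)

def mse(G_hat, G_true):
    return np.mean((G_hat - G_true) ** 2)

def psnr(G_hat, G_true, peak=255.0):
    """PSNR = 20 log10(peak / sqrt(MSE))."""
    return 20.0 * np.log10(peak / np.sqrt(mse(G_hat, G_true)))

def ssim(G_hat, G_true, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    """Global (single-window) SSIM with assumed default constants C1, C2."""
    mu_x, mu_y = G_hat.mean(), G_true.mean()
    var_x, var_y = G_hat.var(), G_true.var()
    cov = ((G_hat - mu_x) * (G_true - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

rng = np.random.default_rng(0)
G_true = rng.random((128, 128))
G_hat = G_true + 0.01 * rng.standard_normal(G_true.shape)  # near-perfect output
print(nmse(G_hat, G_true), psnr(G_hat, G_true))
```

For a perfect reconstruction, NMSE and MSE are 0 and SSIM is 1, so higher PSNR/SSIM and lower NMSE/MSE indicate better reconstruction quality.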
The channel number multiplier is defined as η̄ = c1 : c2 : ... : c_n̄, where n̄ is related to the number of down-sampling operations. Consequently, we analyze the convergence and complexity of the CGDM under different influencing factors. Note that the training data used for the analysis in this subsection are all derived from the 32×32 → 128×128 HR CF reconstruction task. The default parameter configuration for the CGDM is c1 = 64, η̄ = 1:2:4:8:16, and N_RA = 2. Any parameters not explicitly mentioned in the following analysis are assumed to be set to these values.

[Fig. 5. Convergence analysis of the CGDM under different hyperparameter settings: (a) different base channel numbers c1 in the feature maps within the latent space; (b) different channel number multipliers η̄; (c) varying numbers N_RA of integrated Res+ and self-attention blocks.]

As shown in Fig. 5(a), the variation of the loss function for the CGDM with different numbers c1 of base channels is presented over 500 training epochs. It can be observed that the CGDM loss corresponding to c1 ∈ {8, 16, 32, 64, 128} decreases to a relatively low value, specifically on the order of 10⁻⁶, within the first 100 epochs. For better clarity, we further zoom in on the loss between epochs 490 and 500. It can be observed that the loss corresponding to c1 ∈ {8, 16, 32, 64, 128} fluctuates on the order of 10⁻⁷, indicating a very slight loss oscillation. Additionally, an interesting observation is that appropriately increasing c1 reduces both the overall amplitude of the loss oscillations during the CGDM training process and the loss itself. Therefore, setting c1 = 64 is a suitable choice for our task.

As shown in Fig. 5(b), the variation of the loss function for the CGDM with different channel number multipliers η̄ over 500 training epochs is presented. The CGDM, with different η̄ configurations, converges to a minimal value on the order of 10⁻⁶ within the first 100 epochs. Additionally, between epochs 490 and 500, the variation remains within a narrow range, on the order of 10⁻⁷, indicating minimal fluctuation. However, a shallower channel number multiplier, such as η̄ = 1:2:4, tends to cause relatively larger fluctuations in the loss value. While a deeper channel number multiplier can progressively extract more high-level features, it also introduces challenges in training and optimization, as seen with η̄ = 1:2:4:8:16:32. Therefore, setting η̄ = 1:2:4:8:16 is a reasonable choice for our task. Fig. 5(c) shows the variation of the CGDM loss function as the number of integrated Res+ and self-attention modules increases.
Overall, for the CGDM with N_RA set to {1, 2, 3, 4}, the loss value decreases to a small value within the first 500 epochs, with minimal fluctuation, and tends to converge. To maintain the model's ability to represent features and ensure effective interaction between local and global features, we choose N_RA = 2 as a suitable option. The complexity of a large AI model is primarily determined by two key factors: the number of model parameters and the number of floating point operations (FLOPs). For clarity, we visualize the impact of varying the hyperparameters N_RA, c1, and η̄ on the complexity of the CGDM. Specifically, Fig. 6(a) illustrates the variation in CGDM complexity under different settings of the number N_RA of integrated Res+ and attention modules, as well as the channel number multiplier η̄, with c1 fixed at 64. Fig. 6(b) shows the variation in CGDM complexity under different settings of the base channel number c1 and the channel number multiplier η̄, with N_RA fixed at 2. Overall, appropriately increasing these parameters facilitates the extraction of higher-level features and multi-scale fusion in the CGDM, while also raising the hardware requirements for model deployment. Therefore, there is a trade-off between CGDM reconstruction performance and complexity. Based
on the convergence and complexity analysis, we set c1 = 64, N_RA = 2, and η̄ = 1:2:4:8:16 as the default configuration for subsequent experiments, with the corresponding parameter count of 221.21 M and FLOPs of 54.58 G.

[Fig. 6. Analysis of the computational complexity (FLOPs in Giga) and storage complexity (parameters in millions) of the CGDM under different parameter settings. (a) shows the impact of different settings for N_RA and η̄ on the CGDM complexity when c1 is set to 64. (b) illustrates the impact of different settings for c1 and η̄ on the CGDM complexity when N_RA is set to 2.]

[Fig. 7. Qualitative comparison in the ×4 HR CF reconstruction task. The first row displays the reconstruction results of the proposed CGDM and baseline models (SRGAN-MSE, SRGAN, C-VAE, DRRN, CGDM) against the ground truth. For clarity, the second row shows zoomed-in views of the regions highlighted by the blue boxes.]
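To see why the parameter counts discussed in the complexity analysis grow with c1 and η̄, the following sketch counts 3×3-convolution parameters for an encoder whose stage widths follow the base channel number and the multiplier. This is an illustrative scaling model under assumed layer shapes (two convolutions per stage, single input channel), not the actual CGDM layout.

```python
# Rough parameter count of a U-Net-style encoder whose stage widths are
# c1 * m for each entry m of the channel number multiplier eta.
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of one k x k convolution."""
    return k * k * c_in * c_out + c_out

def encoder_params(c1, multipliers, in_ch=1, convs_per_stage=2):
    """Total parameters of the down-sampling path, stage by stage."""
    total, prev = 0, in_ch
    for m in multipliers:
        width = c1 * m
        for _ in range(convs_per_stage):
            total += conv_params(prev, width)
            prev = width
    return total

shallow = encoder_params(64, (1, 2, 4))            # eta = 1:2:4
deep = encoder_params(64, (1, 2, 4, 8, 16))        # eta = 1:2:4:8:16
wide = encoder_params(128, (1, 2, 4, 8, 16))       # doubled base channels

# The deepest stages dominate the total, and doubling c1 roughly
# quadruples the count (both c_in and c_out double), mirroring the
# qualitative trends reported for Fig. 6.
```

Under this toy model, the deepest stage alone contributes about three quarters of the total, which is why extending η̄ by one level is far more expensive than the earlier levels combined.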
ii) Quantitative and Qualitative Comparison with Baselines: To ensure a fair comparison, we apply the same training strategy as the CGDM to the baselines, conducting weight learning and testing on the same ×4 CF dataset. To comprehensively evaluate the performance of the proposed CGDM and LiCGDM (with 35% pruning), we compare them with several state-of-the-art models, including SRGAN-MSE [21], SRGAN [21], C-VAE [26], and DRRN [37]. Table II provides a quantitative performance analysis of the proposed model compared to the baseline models on the ×4 HR CF reconstruction task. The proposed CGDM demonstrates competitive performance across the four metrics (NMSE, MSE, PSNR, and SSIM) and outperforms the baselines. Specifically, compared to SRGAN, CGDM reduces NMSE and MSE by 65.236×10⁻⁵ and 9.542, respectively, while improving PSNR and SSIM by 11.747 and 0.006, respectively. LiCGDM is a student model derived from the CGDM, obtained by pruning 35% of the parameters and fine-tuning the remaining ones. Compared to the CGDM, LiCGDM exhibits a slight decrease in performance metrics, but the degradation remains within an acceptable range. This confirms that the proposed lightweight approach effectively compresses the CGDM, enabling its practical deployment on personal computers and even mobile devices. For the qualitative analysis, we visualize the results of the proposed CGDM and baselines on the 32×32 → 128×128 HR CF reconstruction task, as shown in Fig. 7. As observed, both our model and DRRN produce clearer and more accurate pattern edges, while the other baselines tend to generate blurrier results that deviate from the ground-truth target. iii) Evaluation of
Knowledge Transfer and Generalization Ability: Transferring a trained model to an unseen task is considered zero-shot generation. Namely, the neural network learns and updates its weights solely on the ×4 SR CF dataset, without exposure to the ×16, ×8, and ×2 SR CF datasets. It then relies on the trained model to perform SR CF reconstruction for magnification factors that have not been encountered before. This is an effective way to assess the model's knowledge transferability and generalization ability.

TABLE II
QUANTITATIVE EVALUATION OF THE PROPOSED APPROACH AND BASELINES ON NMSE, MSE, PSNR, AND SSIM FOR THE ×4 SR CF RECONSTRUCTION TASK

Task             Method            NMSE (×10⁻⁵)   MSE      PSNR     SSIM
×4 (32²→128²)    SRGAN-MSE [21]    62.049         9.060    38.610   0.987
                 SRGAN [21]        69.942         10.220   38.104   0.987
                 C-VAE [26]        59.833         8.701    39.022   0.989
                 DRRN [37]         65.978         8.656    41.009   0.988
                 CGDM (Our)        4.706*         0.678*   49.851*  0.993*
                 LiCGDM (Our)      6.653          0.9640   49.726   0.993

Table III summarizes the zero-shot results of the proposed model and baselines on the ×16, ×8, and ×2 reconstruction tasks. It can be observed that across the three unseen SR CF reconstruction tasks, SRGAN and SRGAN-MSE perform slightly worse, while C-VAE achieves slightly better results. We attribute this to the fact that GAN-based models need to iteratively minimize the generator loss and maximize the discriminator loss, which may lead to mode collapse and prevent the models from fully capturing the diversity of the true distribution. In contrast, the VAE explicitly estimates the latent parameters by maximizing a lower bound on the log-likelihood. Its mathematical formulation ensures a tractable likelihood for evaluation and enables an explicit inference network. It is worth noting that both the proposed CGDM and LiCGDM outperform the baselines across multiple performance metrics, achieving impressive reconstruction results. We attribute this to the CGDM's robust ability to learn implicit priors and its capacity to model complicated data distributions.
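The per-metric gains quoted in the comparison with SRGAN, and the PSNR-MSE relation defined in the metrics subsection, can be cross-checked directly against the Table II values:

```python
import math

# Table II entries for SRGAN and CGDM on the x4 reconstruction task.
srgan = {"nmse": 69.942e-5, "mse": 10.220, "psnr": 38.104, "ssim": 0.987}
cgdm = {"nmse": 4.706e-5, "mse": 0.678, "psnr": 49.851, "ssim": 0.993}

# Gains quoted in the text: NMSE down by 65.236e-5, MSE down by 9.542,
# PSNR up by 11.747 dB, SSIM up by 0.006.
assert abs((srgan["nmse"] - cgdm["nmse"]) - 65.236e-5) < 1e-9
assert abs((srgan["mse"] - cgdm["mse"]) - 9.542) < 1e-6
assert abs((cgdm["psnr"] - srgan["psnr"]) - 11.747) < 1e-6
assert abs((cgdm["ssim"] - srgan["ssim"]) - 0.006) < 1e-9

# Sanity check: PSNR = 20 * log10(255 / sqrt(MSE)) reproduces the tabulated
# CGDM PSNR to within ~0.05 dB (exact agreement is not expected if the
# paper averages the metric per sample rather than per dataset).
assert abs(20 * math.log10(255 / math.sqrt(cgdm["mse"])) - cgdm["psnr"]) < 0.05
```

The small residual in the PSNR check suggests the reported figures are averaged over test samples before rounding, which the table alone cannot resolve.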
By leveraging source information and implicit priors, the CGDM achieves impressive HR CF reconstruction results.

TABLE III
ZERO-SHOT PERFORMANCE COMPARISON OF THE PROPOSED APPROACH AND BASELINES ON ×16, ×8, AND ×2 HR CF RECONSTRUCTION TASKS

Task             Method            NMSE (×10⁻⁵)   MSE       PSNR     SSIM
×16 (8²→128²)    SRGAN-MSE [21]    3031.40        437.077   21.812   0.741
                 SRGAN [21]        3440.20        497.045   21.270   0.728
                 C-VAE [26]        660.69         96.170    28.505   0.860*
                 DRRN [37]         1127.20        157.350   26.612   0.844
                 CGDM (Our)        588.82*        85.667*   29.023*  0.855
                 LiCGDM (Our)      601.78         87.572    28.192   0.855
×8 (16²→128²)    SRGAN-MSE [21]    560.88         80.357    29.132   0.867
                 SRGAN [21]        695.55         99.956    28.220   0.858
                 C-VAE [26]        103.68         15.038    36.492   0.986*
                 DRRN [37]         265.94         36.083    33.764   0.940
                 CGDM (Our)        67.84*         9.866*    38.393*  0.957
                 LiCGDM (Our)      78.36          11.413    37.719   0.957
×2 (64²→128²)    SRGAN-MSE [21]    1381.70        203.691   25.544   0.917
                 SRGAN [21]        1343.10        199.048   25.709   0.914
                 C-VAE [26]        62.50          9.081     38.852   0.989
                 DRRN [37]         66.83          8.727     41.271   0.990
                 CGDM (Our)        15.55*         2.259*    44.787*  0.994*
                 LiCGDM (Our)      21.36          3.113     44.415   0.994

VI. CONCLUSION

To facilitate the paradigm shift from environment-unaware to intelligent and environment-aware communication, this paper introduced the concept of CF twins. Particularly, we treated the coarse-grained and fine-grained CFs as physical and virtual twin objects, respectively, and designed a CGDM as the core computational unit of the CF twins to model their connection.
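The LiCGDM student model evaluated above is obtained by one-shot pruning of 35% of the CGDM parameters followed by fine-tuning with distillation. The paper's exact pruning criterion is not detailed in this excerpt, so the following is an assumed baseline: global magnitude pruning, which zeroes the smallest-magnitude weights across all layers jointly.

```python
import numpy as np

def one_shot_magnitude_prune(weights, ratio=0.35):
    """Zero out the `ratio` fraction of smallest-magnitude weights,
    selected globally across all tensors (one-shot, no iteration)."""
    flat = np.concatenate([w.ravel() for w in weights])
    k = int(ratio * flat.size)
    threshold = np.partition(np.abs(flat), k)[k]  # k-th smallest magnitude
    return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

rng = np.random.default_rng(0)
# Illustrative stand-in layers, not the actual CGDM weight tensors.
layers = [rng.standard_normal((64, 64)), rng.standard_normal((128, 64))]
pruned = one_shot_magnitude_prune(layers, ratio=0.35)
sparsity = sum((w == 0).sum() for w in pruned) / sum(w.size for w in pruned)
# Roughly 35% of all weights are now zero; fine-tuning the survivors
# (the distillation step) would follow in the actual pipeline.
```

Global selection, as opposed to pruning 35% within each layer separately, lets well-conditioned layers retain more weights at the cost of sparser thin layers; which variant the paper uses is not stated here.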
The trained CGDM, combining the learned prior distribution of the target data and side information, generates fine-grained CFs through a series of iterative refinement steps. Additionally, to facilitate the practical deployment of the CGDM, we introduced a one-shot pruning approach and employed multi-objective knowledge distillation techniques to minimize performance degradation. Experimental results showed that the proposed model achieved competitive reconstruction accuracy in fine-grained CF reconstruction tasks with magnification factors of ×2, ×4, ×8, and ×16, while also demonstrating exceptional knowledge transfer capabilities.

REFERENCES

[1] Z. Jin, L. You, X. Li, Z. Gao, Y. Liu, X.-G. Xia, and X. Gao, "Ultra-grained channel fingerprint construction via conditional generative diffusion models," in Proc. IEEE Conf. Comput. Commun. (INFOCOM), London, UK, May 2025, pp. 1-6.
[2] F. Liu, Y. Cui, C. Masouros, J. Xu, T. X. Han, Y. C. Eldar, and S. Buzzi, "Integrated sensing and communications: Toward dual-functional wireless networks for 6G and beyond," IEEE J. Sel. Areas Commun., vol. 40, no. 6, pp. 1728-1767, Jun. 2022.
[3] C.-X. Wang, X. You, X. Q. Gao, X. Zhu, Z. Li, C. Zhang, H. Wang, Y. Huang, Y. Chen, H. Haas, J. S. Thompson, E. G. Larsson, M. D. Renzo, W. Tong, P. Zhu, X. Shen, H. V. Poor, and L. Hanzo, "On the road to 6G: Visions, requirements, key technologies and testbeds," IEEE Commun. Surv. Tutor., vol. 25, no. 2, pp. 905-974, Feb. 2023.
[4] L. U. Khan, W. Saad, D. Niyato, Z. Han, and C. S. Hong, "Digital-twin-enabled 6G: Vision, architectural trends, and future directions," IEEE Commun. Mag., vol. 60, no. 1, pp. 74-80, Jan. 2022.
[5] Z. Jin, L. You, H. Zhou, Y. Wang, X. Liu, X. Gong, X. Gao, D. W. K. Ng, and X.-G. Xia, "GDM4MMIMO: Generative diffusion models for massive MIMO communications," arXiv preprint arXiv:2412.18281, 2024.
[6] R. Zhang, L. Cheng, S. Wang, Y. Lou, Y. Gao, W. Wu, and D. W. K. Ng, "Integrated sensing and communication with massive MIMO: A unified tensor approach for channel and target parameter estimation," IEEE Trans. Wirel. Commun., vol. 23, no. 8, pp. 8571-8587, Aug. 2024.
[7] Z. Jin, L. You, J. Wang, X.-G. Xia, and X. Gao, "An I2I inpainting approach for efficient channel knowledge map construction," IEEE Trans. Wirel. Commun., vol. 24, no. 2, pp. 1415-1429, Feb. 2025.
[8] Y. Zeng, J. Chen, J. Xu, D. Wu, X. Xu, S. Jin, X. Q. Gao, D. Gesbert, S. Cui, and R. Zhang, "A tutorial on environment-aware communications via channel knowledge map for 6G," IEEE Commun. Surv. Tuts., vol. 26, no. 3, pp. 1478-1519, Feb. 2024.
[9] D. Wu, Y. Zeng, S. Jin, and R. Zhang, "Environment-aware hybrid beamforming by leveraging channel knowledge map," IEEE Trans. Wirel. Commun., vol. 23, no. 5, May 2024.
[10] B. Zhang and J. Chen, "Constructing radio maps for UAV communications via dynamic resolution virtual obstacle maps," in Proc. IEEE Int. Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Atlanta, GA, USA, May 2020, pp. 1-5.
[11] P. Zeng and J. Chen, "UAV-aided joint radio map and 3D environment reconstruction using deep learning approaches," in Proc. IEEE Int. Conf. Commun. (ICC), Seoul, Korea, Aug. 2022, pp. 5341-5346.
[12] Z. Yang, Z. Zhou, and Y. Liu, "From RSSI to CSI: Indoor localization via channel response," ACM Comput. Surveys, vol. 46, no. 2, pp. 1-32, Dec. 2013.
[13] C. Zhan, H. Hu, Z. Liu, J. Wang, N. Cheng, and S. Mao, "Aerial video streaming over 3D cellular networks: An environment and channel knowledge map approach," IEEE Trans. Wirel. Commun., vol. 23, no. 2, pp. 1432-1446, Feb. 2024.
[14] H. Li, P. Li, J. Xu, J. Chen, and Y. Zeng, "Derivative-free placement optimization for multi-UAV wireless networks with channel knowledge map," in Proc. IEEE Int. Conf. Commun. Workshops (ICC Wkshps), Seoul, South Korea, May 2022, pp. 1029-1034.
[15] S. Zhang and R. Zhang, "Radio map-based 3D path planning for cellular-connected UAV," IEEE Trans. Wirel. Commun., vol. 20, no. 3, pp. 1975-1989, Mar. 2021.
[16] W. Yue, J. Li, C. Li, N. Cheng, and J. Wu, "A channel knowledge map-aided personalized resource allocation strategy in air-ground integrated mobility," IEEE Trans. Intell. Transp. Syst., vol. 25, no. 11, pp. 18734-18747, Nov. 2024.
[17] X. Xu and Y. Zeng, "How much data is needed for channel knowledge map construction?" IEEE Trans. Wirel. Commun., vol. 23, no. 10, pp. 13011-13021, Oct. 2024.
[18] D. Lee, D. Berberidis, and G. B. Giannakis, "Adaptive Bayesian radio tomography," IEEE Trans. Signal Process., vol. 67, no. 8, pp. 1964-1977, Apr. 2019.
[19] R. Levie, Ç. Yapar, G. Kutyniok, and G. Caire, "RadioUnet: Fast radio map estimation with convolutional neural networks," IEEE Trans. Wirel. Commun., vol. 20, no. 6, pp. 4001-4015, Jun. 2021.
[20] S. Bakirtzis, J. Chen, K. Qiu, J. Zhang, and I. Wassell, "EM DeepRay: An expedient, generalizable, and realistic data-driven indoor propagation model," IEEE Trans. Antennas Propag., vol. 70, no. 6, pp. 4140-4154, Jun. 2022.
[21] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 4681-4690.
[22] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-resolution image synthesis with latent diffusion models," in Proc. Conf. Comput. Vis. Pattern Recognit. (CVPR), New Orleans, LA, USA, Jun. 2022, pp. 10684-10695.
[23] J. Ho, A. Jain, and P. Abbeel, "Denoising diffusion probabilistic models," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), Dec. 2020, pp. 6840-6851.
[24] Z. Jin, L. You, D. W. K. Ng, X.-G. Xia, and X. Gao, "Near-field channel estimation for XL-MIMO: A deep generative model guided by side information," IEEE Trans. Cogn. Commun. Netw., early access, May 2025.
[25] H. Du, R. Zhang, Y. Liu, J. Wang, Y. Lin, Z. Li,
D. Niyato, J. Kang, Z. Xiong, S. Cui et al., "Enhancing deep reinforcement learning: A tutorial on generative diffusion models in network optimization," IEEE Commun. Surv. Tutor., May 2024.
[26] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," in Proc. Int. Conf. Learn. Represent. (ICLR), Banff, AB, Canada, Apr. 2014, pp. 1-14.
[27] J. Lovelace, V. Kishore, C. Wan, E. Shekhtman, and K. Q. Weinberger, "Latent diffusion for language generation," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), New Orleans, LA, USA, Dec. 2023, pp. 56998-57025.
[28] X. Liu, W. Wang, X. Gong, X. Fu, X. Gao, and X.-G. Xia, "Structured hybrid message passing based channel estimation for massive MIMO-OFDM systems," IEEE Trans. Veh. Technol., vol. 72, no. 6, pp. 7491-7507, Jun. 2023.
[29] Z. Jin, L. You, J. Wang, X.-G. Xia, and X. Q. Gao, "Channel knowledge map construction with Laplacian pyramid reconstruction network," in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Dubai, United Arab Emirates, Apr. 2024, pp. 1-6.
[30] Y. Song and S. Ermon, "Generative modeling by estimating gradients of the data distribution," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), Vancouver, BC, Canada, Dec. 2019, pp. 11918-11930.
[31] Z. Jin, L. You, D. W. Kwan Ng, X.-G. Xia, and X. Gao, "A generative denoising approach for near-field XL-MIMO channel estimation," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Cape Town, South Africa, Dec. 2024, pp. 1-6.
[32] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), Long Beach, CA, USA, Jun. 2017, pp. 6000-6010.
[33] Z. Jin, N. Xu, Y. Shang, and X. Yao, "Efficient capsule network with multi-subspace learning," in Proc. IEEE Int. Conf. Wireless Commun. Signal Process. (WCSP), Oct. 2021, pp. 1-5.
[34] D. Zhang, S. Li, C. Chen, Q. Xie, and H. Lu, "Laptop-diff: Layer pruning and normalized distillation for compressing diffusion models," arXiv preprint arXiv:2404.11098, 2024.
[35] K. Xu, Z. Wang, X. Geng, M. Wu, X. Li, and W. Lin, "Efficient joint optimization of layer-adaptive weight pruning in deep neural networks," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Paris, Oct. 2023, pp. 17447-17457.
[36] S. Jaeckel, L. Raschkowski, K. Börner, and L. Thiele, "QuaDRiGa: A 3-D multi-cell channel model with time evolution for enabling virtual field trials," IEEE Trans. Antennas Propag., vol. 62, no. 6, pp. 3242-3256, Jun. 2014.
[37] Y. Tai, J. Yang, and X. Liu, "Image super-resolution via deep recursive residual network," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 3147-3155.
EnvCDiff: Joint Refinement of Environmental Information and Channel Fingerprints via Conditional Generative Diffusion Model

Zhenzhou Jin, Graduate Student Member, IEEE, Li You, Senior Member, IEEE, Xiang-Gen Xia, Fellow, IEEE, and Xiqi Gao, Fellow, IEEE

Abstract—The paradigm shift from environment-unaware communication to intelligent environment-aware communication is expected to facilitate the acquisition of channel state information for future wireless communications. The channel fingerprint (CF), as an emerging enabling technology for environment-aware communication, provides channel-related knowledge for potential locations within the target communication area. However, due to the limited availability of practical devices for sensing environmental information and measuring channel-related knowledge, most of the acquired environmental information and CFs are coarse-grained, insufficient to guide the design of wireless transmissions. To address this, this paper proposes a deep conditional generative learning approach, namely a customized conditional generative diffusion model (CDiff). The proposed CDiff simultaneously refines environmental information and the CF, reconstructing a fine-grained CF that incorporates environmental information, referred to as EnvCF, from its coarse-grained counterpart. Experimental results show that the proposed approach significantly improves the performance of EnvCF construction compared to the baselines.

Index Terms—Environment-aware wireless communication, channel fingerprint, channel-related knowledge.

I. INTRODUCTION

Driven by the synergy of artificial intelligence (AI) and environmental sensing, 6G is poised to undergo a paradigm shift, evolving from environment-unaware communications to intelligent environment-aware ones [1].
Channel fingerprint (CF) is an emerging enabling technology for environment-aware communications that provides location-specific channel knowledge for a potential base station (BS) in base-to-everything (B2X) pairs [1], [2]. Ideally, without considering the costs of sensing, computation, and storage, an ultra-fine-grained CF would encapsulate channel-related knowledge for all locations within the target communication area, such as channel gain and angle of arrival/departure, thereby alleviating the challenges of acquiring channel state information. By providing essential channel-related knowledge, CF has recently gained significant research attention for diverse applications, including object sensing, beamforming, and localization [1]-[3].

Zhenzhou Jin, Li You, and Xiqi Gao are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China, and also with the Purple Mountain Laboratories, Nanjing 211100, China (e-mail: zzjin@seu.edu.cn; lyou@seu.edu.cn; xqgao@seu.edu.cn). Xiang-Gen Xia is with the Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA (e-mail: xxia@ee.udel.edu).

A fundamental challenge for the aforementioned CF-based emerging applications is the construction of a sufficiently fine-grained CF, which is essential to ensure the accurate acquisition of channel-related information at specific locations. Existing related works can generally be categorized into model-based and data-driven approaches. In terms of model-based approaches, the authors of [4] leverage prior assumptions about the physical propagation model along with partially measured channel data to estimate channel-related knowledge at potential locations. In [5], the authors utilize an analytical channel model to represent the spatial variability of the propagation environment, with channel modeling parameters estimated from measured data to reconstruct the CF.
In data-driven approaches, one straightforward method for CF construction is interpolation-based, with classic techniques including radial basis function (RBF) interpolation [6] and Kriging [7]. Additionally, AI-based approaches for CF construction have recently been emerging.
https://arxiv.org/abs/2505.07894v1
In [2], the authors transform the CF estimation task into an image-to-image inpainting problem and develop a Laplacian pyramid-based model to facilitate CF construction. In [8], [9], UNet is utilized to learn geometry-based and physics-based features in urban or indoor environments, enabling the construction of corresponding CFs. In [1], fully connected networks are used to predict channel knowledge based on 2D coordinate locations. It is evident that CF construction is primarily influenced by the wireless propagation environment, but the nodes and devices available for sensing environmental information and measuring location-specific channel knowledge are usually limited in practice. Most existing methods focus on constructing CFs by either leveraging partial channel knowledge or incorporating prior assumptions about propagation models and wireless environment characteristics. However, limited work has been dedicated to simultaneously refining environmental information and channel-related knowledge, that is, reconstructing a finer-grained CF that integrates environmental information, referred to as environmental CF (EnvCF), from a coarse-grained one. This paper investigates the construction of a fine-grained EnvCF in scenarios where environmental information and channel knowledge are limited by the high cost and availability constraints of sensing and testing equipment. Specifically, we reformulate the task of fine-grained EnvCF construction as an image super-resolution (ISR) problem. The conditional distribution of SR outputs given low-resolution (LR) inputs typically follows a complicated parametric distribution, leading to suboptimal performance of most feed-forward neural network-based regression algorithms in ISR tasks [10].
To this end, leveraging the powerful implicit prior learning capability of generative diffusion models (GDMs) [11], [12], we propose a conditional GDM (CDiff) to approximate the conditional distribution of the HR EnvCF. Specifically, within the variational inference framework, we derive an evidence lower bound (ELBO) of the log-conditional marginal distribution of the observed high-resolution (HR) EnvCF as the objective function. Furthermore, to make the transformation from a standard normal distribution to the target distribution more controllable, we incorporate the LR EnvCF as side information during the optimization of the denoising network. Simulation results show the superiority of the proposed approach.

II. EnvCF Model and Problem Formulation

In this section, we first present the channel gain and the EnvCF model. We then reformulate the fine-grained EnvCF reconstruction problem by aligning the objective function with the learning of the gradient of the log-conditional density.

A. EnvCF Model

Consider a wireless communication scenario within a target area of interest, $\mathcal{A} \subset \mathbb{R}^2$, where a base station (BS) serves $M$ user terminals (UTs), with their 2D position coordinates denoted as $\{\mathbf{x}_m\}_{m=1}^{M} \subset \mathcal{A}$. The attenuation of received signal power at a UT is widely attributed to the physical characteristics of the wireless propagation environment, such as the geometric contours of real-world streets and buildings in urban maps, represented by $E$. Key contributing factors include path loss along various propagation paths, reflections and diffractions caused by buildings, the ground, or other objects, waveguide effects in densely populated urban areas, and signal blockages from obstacles [1]. The relatively slow-varying components of these influencing factors collectively form the channel gain function, denoted as $g(E, \mathbf{x}_m)$, which reflects the large-scale
signal attenuation measured at the UTs located at $\{\mathbf{x}_m\}_{m=1}^{M} \subset \mathcal{A}$. Additionally, small-scale effects are typically modeled as a complex Gaussian random variable $h$ with unit variance. Without loss of generality, the baseband signal received by the UT at $\mathbf{x}_m$ can be represented as [2]
$$y(E, \mathbf{x}_m) = \sqrt{g(E, \mathbf{x}_m)}\, h s + z(E, \mathbf{x}_m), \quad (1)$$
where $s$ represents the transmitted signal with power $P_X$, and $z(E, \mathbf{x}_m)$ denotes the additive noise with a single-sided power spectral density of $N_0$. The average received energy per symbol can be expressed as
$$P_Y = \frac{\mathbb{E}\left[|y(E, \mathbf{x}_m)|^2\right]}{B} = \frac{g(E, \mathbf{x}_m) P_X}{B} + N_0, \quad (2)$$
where $B$ denotes the signal bandwidth. The channel gain, in dB, for the UT located at $\mathbf{x}_m$ is defined as
$$G(E, \mathbf{x}_m) = (P_Y)_{\mathrm{dB}} - (P_X)_{\mathrm{dB}}. \quad (3)$$
The channel gain $G(E, \mathbf{x}_m)$ in (3) is primarily influenced by the propagation environment and the location of the UT. Assuming the target area of interest has a size of $D \times D$, we perform spatial discretization along both the X-axis and the Y-axis. Specifically, a resolution factor $\delta$ is defined, with the minimum spacing units for the spatial discretization process set as $\Delta x = D/\delta$ and $\Delta y = D/\delta$. Each spatial grid is denoted as $\Upsilon_{i,j}$, where $i = 1, 2, \ldots, D/\Delta x$ and $j = 1, 2, \ldots, D/\Delta y$, and the $(i,j)$-th spatial grid is given by
$$\Upsilon_{i,j} := [i\Delta x,\; j\Delta y]^T. \quad (4)$$
Through the spatial discretization process, the physical propagation environment information $E$ of the target area $\mathcal{A}$ can be rearranged into a two-dimensional tensor, defined as $\mathbf{E}$, i.e., $[\mathbf{E}]_{i,j} = E(\Upsilon_{i,j})$. Similarly, the channel gains collected at potential UT locations within $\mathcal{A}$ are rearranged into a tensor with an image-like structure, referred to as the CF, defined as $[\mathbf{G}]_{i,j} = G([\mathbf{E}]_{i,j}, \Upsilon_{i,j})$. Then, the EnvCF, i.e., the CF integrated with the wireless propagation environment, is defined as
$$[\mathbf{F}]_{i,j} = G([\mathbf{E}]_{i,j}, \Upsilon_{i,j}) + [\mathbf{E}]_{i,j}, \quad (5)$$
where $\mathbf{E}$ represents the global propagation environment. Note that in our simulation process, $\mathbf{E}$ represents an urban map with the BS location, stored as a morphological 2D image. Binary pixel values of 0 and 1 are utilized to depict buildings and streets with various shapes and geometric layouts, as well as the location of the BS. $E(\Upsilon_{i,j})$ represents the local propagation environment at each square meter (or each pixel).

Fig. 1. Schematic of the proposed CDiff workflow and the architecture of the conditional denoising neural network.
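As a hedged numerical sketch of Eqs. (2)-(3) on the discretized grid: the transmit power, bandwidth, and noise density below follow the values in Table I, while the linear-scale gain values themselves are random stand-ins rather than WinProp outputs.

```python
import numpy as np

def channel_gain_db(g_lin, p_x_dbm=23.0, bandwidth_hz=10e6, n0_dbm_hz=-174.0):
    """G = (P_Y)_dB - (P_X)_dB, with P_Y = g * P_X / B + N_0 as in Eq. (2)."""
    p_x = 10 ** ((p_x_dbm - 30) / 10)        # transmit power in watts
    n0 = 10 ** ((n0_dbm_hz - 30) / 10)       # noise PSD in W/Hz
    p_y = g_lin * p_x / bandwidth_hz + n0    # Eq. (2)
    return 10 * np.log10(p_y / p_x)          # Eq. (3), in dB

rng = np.random.default_rng(0)
g_map = 10 ** rng.uniform(-12, -6, size=(16, 16))  # stand-in linear gains per cell
G = channel_gain_db(g_map)                         # channel-gain map in dB
```

Each entry of `G` is one pixel $[\mathbf{G}]_{i,j}$ of the CF tensor before it is normalized to grayscale.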
The channel gain $G([\mathbf{E}]_{i,j}, \Upsilon_{i,j})$ is computed using the professional channel simulation software WinProp [2], [8] and then converted into grayscale pixel values ranging from 0 to 1. Therefore, the EnvCF is modeled as a 2D environmental channel gain map, comprising both the propagation environment map and the channel gains at each UT location, as illustrated by one of the EnvCF samples presented in Fig. 1. It can be observed that when factors such as time and frequency are considered, the EnvCF model can be extended to a multi-dimensional tensor.

B. Problem Formulation

It is evident that a finer-grained EnvCF can provide more accurate information about the physical environment and channel gains, which is beneficial for wireless transmission design [13]. However, the EnvCF obtained by a practical BS is typically coarse-grained due to the limited availability for collecting environmental information and channel knowledge at specific locations. Therefore, our task focuses on refining both environmental and channel gain information from a given coarse-grained EnvCF, particularly in scenarios constrained by sensing costs, implicit limitations, or security concerns. Define a low-resolution (LR) factor $\delta_{LR}$ and a high-resolution (HR) factor $\delta_{HR}$. Correspondingly, the LR EnvCF and HR EnvCF are represented as $\mathbf{F}_{LR}$ and $\mathbf{F}_{HR}$, respectively, which are collected and rearranged by discretizing the target area into $\delta_{LR} \times \delta_{LR}$ and $\delta_{HR} \times \delta_{HR}$ grids, respectively. Then, our
task is to establish a mapping capable of reconstructing an HR EnvCF from a given LR EnvCF, expressed as
$$\mathcal{M}_\Theta : \mathbf{F}_{LR,n} \to \mathbf{F}_{HR,n}, \quad \forall n \in \{1, 2, \ldots, N\}, \quad (6)$$
where $\Theta$ denotes the learnable parameters of the mapping $\mathcal{M}_\Theta$, while $N$ indicates the number of training samples. However, (6) is an underdetermined inverse problem. Given that the conditional distribution of HR outputs for a given LR input rarely adheres to a simple parametric distribution, most regression methods based on feedforward neural networks for (6) tend to struggle with high upscaling factors, often failing to reconstruct fine details accurately [10]. Fortunately, GDM has proven effective in capturing the complex empirical distributions of target data. Specifically, if the implicit prior information of the HR EnvCF distribution, such as the gradient of the data log-density, can be effectively learned, it becomes possible to transition to the target EnvCF distribution through iterative refinement steps from a standard normal distribution, akin to Langevin dynamics. Meanwhile, the "noise" estimated in traditional GDM is equivalent to the gradient of the data log-density. Therefore, (6) can be further formulated as
$$\arg\min_\Theta \; \mathbb{E}_{P(\mathbf{F}_{HR}, \mathbf{F}_{LR})}\left[ \left\| \nabla \log P(\mathbf{F}_{HR} \mid \mathbf{F}_{LR}) - \nabla \log P_\Theta(\mathbf{F}_{HR} \mid \mathbf{F}_{LR}) \right\|_2^2 \right], \quad (7a)$$
$$\text{s.t.} \quad \mathbf{x}_m \in \mathcal{A}, \; n \in \{1, 2, \ldots, N\}, \quad (7b)$$
where $\nabla \log P(\cdot)$ represents the gradient of the log-density, and $P_\Theta$ denotes the learned density. To this end, leveraging the powerful implicit prior learning capability of GDM, we tailor the CDiff to solve (7), with the detailed implementation provided in Sec. III. For simplicity, $\mathbf{F}_{LR}$ and $\mathbf{F}_{HR}$ are represented by $\tilde{\mathbf{F}}$ and $\mathbf{F}$, respectively, in the subsequent sections.

III. HR EnvCF Reconstruction via CDiff

Depending on the resolution factors $\delta_{LR}$ and $\delta_{HR}$, the sensing nodes and measurement devices deployed in practice to collect channel knowledge and environmental information can acquire the corresponding LR EnvCF samples, $\tilde{\mathbf{F}}_n$, and HR EnvCF samples, $\mathbf{F}_n$.
These samples form a paired LR-HR EnvCF dataset for training, denoted as $\mathcal{S} = \{(\tilde{\mathbf{F}}_n, \mathbf{F}_n)\}_{n=1}^{N}$, which is generally sampled from an unknown distribution $p(\tilde{\mathbf{F}}, \mathbf{F})$. In our task, the goal is to learn a parametric approximation of $p(\mathbf{F} \mid \tilde{\mathbf{F}})$ through a directed iterative refinement process, guided by side information in the form of the LR EnvCF.

A. Initiation of Gaussian Diffusion Process with HR EnvCF

Denoting $\mathbf{F}_0 = \mathbf{F} \sim q(\mathbf{F})$, the GDM employs a fixed diffusion process $q(\mathbf{F}_{1:T} \mid \mathbf{F}_0)$, represented as a deterministic Markovian chain where Gaussian noise is gradually introduced to the sample over $T$ steps [11]:
$$q(\mathbf{F}_{1:T} \mid \mathbf{F}_0) = \prod_{t=1}^{T} q(\mathbf{F}_t \mid \mathbf{F}_{t-1}), \quad (8a)$$
$$q(\mathbf{F}_t \mid \mathbf{F}_{t-1}) = \mathcal{N}\left(\mathbf{F}_t; \sqrt{1 - \beta_t}\, \mathbf{F}_{t-1}, \beta_t \mathbf{I}\right), \quad (8b)$$
$$\mathbf{F}_t = \sqrt{1 - \beta_t}\, \mathbf{F}_{t-1} + \sqrt{\beta_t}\, \boldsymbol{\varepsilon}, \quad (8c)$$
where $\{\beta_t \in (0,1)\}_{t=1}^{T}$ is a variance schedule that controls the noise level at each step, and $\boldsymbol{\varepsilon}$ denotes Gaussian noise following the distribution $\mathcal{N}(\boldsymbol{\varepsilon}; \mathbf{0}, \mathbf{I})$. Utilizing the reparameterization trick, $\mathbf{F}_t$ can be sampled in closed form as:
$$\mathbf{F}_t = \sqrt{\bar{\alpha}_t}\, \mathbf{F}_0 + \sqrt{1 - \bar{\alpha}_t}\, \boldsymbol{\varepsilon}_t, \quad (9a)$$
$$q(\mathbf{F}_t \mid \mathbf{F}_0) = \mathcal{N}\left(\mathbf{F}_t; \sqrt{\bar{\alpha}_t}\, \mathbf{F}_0, (1 - \bar{\alpha}_t)\mathbf{I}\right), \quad (9b)$$
where $\boldsymbol{\varepsilon}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, $\alpha_t = 1 - \beta_t$, and $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$. Typically, the variance schedule is set as $\beta_1 < \beta_2 < \cdots < \beta_T$ [14]. As $\beta_T$ approaches 1, $\mathbf{F}_T$ approximates a standard Gaussian distribution regardless of the initial state $\mathbf{F}_0$, i.e., $q(\mathbf{F}_T \mid \mathbf{F}_0) \approx \mathcal{N}(\mathbf{F}_T; \mathbf{0}, \mathbf{I})$.

B. Inversion of Diffusion Process Conditioned on LR EnvCF

In the proposed CDiff, the inversion process is viewed as a conditional decoding procedure, where at
each time step $t$, $\mathbf{F}_t$, conditioned on $\tilde{\mathbf{F}}$, is denoised and refined to $\mathbf{F}_{t-1}$, with the conditional transition probability for each step denoted as $p(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \tilde{\mathbf{F}})$. Then, the conditional joint distribution of the inversion process is expressed as
$$p(\mathbf{F}_{0:T} \mid \tilde{\mathbf{F}}) = p(\mathbf{F}_T) \prod_{t=1}^{T} p(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \tilde{\mathbf{F}}). \quad (10)$$
To execute the conditional inversion process (10), a denoising neural network $\boldsymbol{\varepsilon}_\theta(\cdot)$ with a learnable parameter set $\theta$ needs to be designed to approximate the conditional transition densities:
$$p(\mathbf{F}_{0:T} \mid \tilde{\mathbf{F}}) = p(\mathbf{F}_T) \prod_{t=1}^{T} p_\theta(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \tilde{\mathbf{F}}), \quad (11a)$$
$$p_\theta(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \tilde{\mathbf{F}}) = \mathcal{N}\left(\mathbf{F}_{t-1}; \boldsymbol{\mu}_\theta(\tilde{\mathbf{F}}, \mathbf{F}_t, t), \boldsymbol{\Sigma}_\theta(\tilde{\mathbf{F}}, \mathbf{F}_t, t)\right). \quad (11b)$$
Note that the proposed CDiff incorporates a denoising neural network $\boldsymbol{\varepsilon}_\theta(\cdot)$ conditioned on side information in the form of an LR EnvCF $\tilde{\mathbf{F}}$, guiding it to progressively denoise from a Gaussian-distributed $\mathbf{F}_T$ and generate the HR EnvCF $\mathbf{F}_0$. To ensure the effective functioning of this denoising neural network $\boldsymbol{\varepsilon}_\theta(\cdot)$, a specific objective function needs to be derived. One common likelihood-based approach in generative modeling involves optimizing the model to maximize the conditional joint probability distribution $p(\mathbf{F}_{0:T} \mid \tilde{\mathbf{F}})$ of all observed samples. However, only the observed sample $\mathbf{F}_0$ is accessible, while the latent variables $\mathbf{F}_{1:T}$ remain unknown. To this end, we seek to maximize the conditional marginal distribution $p(\mathbf{F}_0 \mid \tilde{\mathbf{F}})$, expressed as
$$p(\mathbf{F}_0 \mid \tilde{\mathbf{F}}) = \int p(\mathbf{F}_{0:T} \mid \tilde{\mathbf{F}})\, d\mathbf{F}_{1:T}. \quad (12)$$

Algorithm 1 Training Strategy for the Conditional Denoising Neural Network
1: repeat
2: Load data pairs $(\tilde{\mathbf{F}}, \mathbf{F}_0) \sim p(\tilde{\mathbf{F}}, \mathbf{F}_0)$ from the EnvCF training dataset $\mathcal{S} = \{(\tilde{\mathbf{F}}_n, \mathbf{F}_n)\}_{n=1}^{N}$
3: Sample a time step $t$ uniformly from 1, . . .
, T, i.e., $t \sim \mathrm{Uniform}(1, \ldots, T)$
4: Generate a noise tensor $\boldsymbol{\varepsilon}_t$ of the same dimensions as $\mathbf{F}_0$, $\boldsymbol{\varepsilon}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$
5: Perform the diffusion process on the HR EnvCF $\mathbf{F}_0$: $\mathbf{F}_t = \sqrt{\bar{\alpha}_t}\, \mathbf{F}_0 + \sqrt{1 - \bar{\alpha}_t}\, \boldsymbol{\varepsilon}_t$
6: Feed $\mathbf{F}_t$, $\tilde{\mathbf{F}}$, and $t$ into the network $\boldsymbol{\varepsilon}_\theta(\cdot)$
7: Perform a gradient descent step on the loss function (15) to optimize the model parameters $\theta$: $\nabla_\theta \big\| \boldsymbol{\varepsilon}_t - \boldsymbol{\varepsilon}_\theta\big(\tilde{\mathbf{F}}, \sqrt{\bar{\alpha}_t}\, \mathbf{F}_0 + \sqrt{1 - \bar{\alpha}_t}\, \boldsymbol{\varepsilon}, t\big) \big\|_2^2$
8: until the loss function (15) converges

Algorithm 2 HR EnvCF Generation via $T$-Step Conditional Inversion Diffusion Process
1: Load the trained model and its weight set $\theta$
2: Obtain $\mathbf{F}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ and $\tilde{\mathbf{F}}$
3: for $t = T, \ldots, 1$ do
4: $\boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ if $t > 1$, else $\boldsymbol{\varepsilon} = \mathbf{0}$
5: Execute the conditional iterative refinement step: $\mathbf{F}_{t-1} \leftarrow \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{F}_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\, \boldsymbol{\varepsilon}_\theta(\tilde{\mathbf{F}}, \mathbf{F}_t, t)\right) + \sqrt{\frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t}\, \beta_t}\, \boldsymbol{\varepsilon}$
6: end for
7: return $\hat{\mathbf{F}}_0$

By leveraging variational inference techniques, we can derive the evidence lower bound (ELBO) as a surrogate objective to optimize the denoising neural network:
$$\log p(\mathbf{F}_0 \mid \tilde{\mathbf{F}}) \overset{(a)}{\geq} \mathbb{E}_{q(\mathbf{F}_{1:T} \mid \mathbf{F}_0)}\left( \log \frac{p(\mathbf{F}_{0:T} \mid \tilde{\mathbf{F}})}{q(\mathbf{F}_{1:T} \mid \mathbf{F}_0)} \right), \quad (13)$$
where $\overset{(a)}{\geq}$ in (13) follows from Jensen's inequality. Then, (13) can be further expressed as (14), displayed at the top of the next page. Here, $D_{KL}(\cdot)$ represents the Kullback-Leibler (KL) divergence. The components of the ELBO (14) for the log-conditional marginal distribution are similar to those in the surrogate objective presented in [11]. By applying Bayes' theorem, the final simplified objective can be derived as
$$\mathcal{L}(\theta) := \sum_{t=1}^{T} \mathbb{E}_{\mathbf{F}_0, \boldsymbol{\varepsilon}_t}\left[ \Big\| \boldsymbol{\varepsilon}_t - \boldsymbol{\varepsilon}_\theta\big(\tilde{\mathbf{F}}, \underbrace{\sqrt{\bar{\alpha}_t}\, \mathbf{F}_0 + \sqrt{1 - \bar{\alpha}_t}\, \boldsymbol{\varepsilon}}_{\mathbf{F}_t}, t\big) \Big\|_2^2 \right]. \quad (15)$$
Based on the trained CDiff, given the noise-contaminated EnvCF $\mathbf{F}_t$ and the side information $\tilde{\mathbf{F}}$, we can approximate the target HR EnvCF $\mathbf{F}_0$ through (9a), i.e.,
$$\hat{\mathbf{F}}_0 = \frac{1}{\sqrt{\bar{\alpha}_t}}\left( \mathbf{F}_t - \sqrt{1 - \bar{\alpha}_t}\, \boldsymbol{\varepsilon}_\theta\big(\tilde{\mathbf{F}}, \mathbf{F}_t, t\big) \right). \quad (16)$$
The ELBO (14) expands as
$$\log p(\mathbf{F}_0 \mid \tilde{\mathbf{F}}) \geq \mathbb{E}_{q(\mathbf{F}_{1:T} \mid \mathbf{F}_0)}\left( \log \frac{p(\mathbf{F}_T)\, p_\theta(\mathbf{F}_0 \mid \mathbf{F}_1, \tilde{\mathbf{F}})}{q(\mathbf{F}_1 \mid \mathbf{F}_0)} + \log \frac{q(\mathbf{F}_1 \mid \mathbf{F}_0)}{q(\mathbf{F}_T \mid \mathbf{F}_0)} + \log \prod_{t=2}^{T} \frac{p_\theta(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \tilde{\mathbf{F}})}{q(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \mathbf{F}_0)} \right) \quad (14a)$$
$$= \mathbb{E}_{q(\mathbf{F}_1 \mid \mathbf{F}_0)}\left[ \log p_\theta(\mathbf{F}_0 \mid \mathbf{F}_1, \tilde{\mathbf{F}}) \right] - \sum_{t=2}^{T} \mathbb{E}_{q(\mathbf{F}_t \mid \mathbf{F}_0)}\left[ D_{KL}\big(q(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \mathbf{F}_0) \,\|\, p_\theta(\mathbf{F}_{t-1} \mid \mathbf{F}_t, \tilde{\mathbf{F}})\big) \right] - D_{KL}\big(q(\mathbf{F}_T \mid \mathbf{F}_0) \,\|\, p(\mathbf{F}_T)\big) \quad (14b)$$

Each iteration in the proposed CDiff is expressed as
$$\mathbf{F}_{t-1} \leftarrow \frac{1}{\sqrt{\alpha_t}}\left( \mathbf{F}_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\, \boldsymbol{\varepsilon}_\theta(\tilde{\mathbf{F}}, \mathbf{F}_t, t) \right) + \sqrt{\frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t}\, \beta_t}\, \boldsymbol{\varepsilon}, \quad (17)$$
where $\boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. For clarity, the training process and the iterative inference process of the proposed CDiff are summarized in Algorithm 1 and Algorithm 2, respectively.

IV. Numerical Experiment

In this section, we first present the generation of the EnvCF dataset and the parameter configuration of the proposed model. Then, we conduct both quantitative and qualitative comparisons with the baselines on the ×4 EnvCF reconstruction task (i.e., 64×64 → 256×256).

A. Datasets and Experiment Setup

The RadioMapSeer dataset [8], a widely adopted CF dataset that incorporates environmental information, is utilized for training and validating the proposed CDiff. Specifically, the RadioMapSeer dataset consists of 700 unique city maps, each measuring 256×256 m² and containing 80 distinct BS locations. For each possible combination of city maps and BS locations, the dataset provides the corresponding CFs, which are simulated using WinProp [2], [8]. These city maps describe the geometric contours of real-world streets and buildings, sourced from OpenStreetMap [8] for different cities. Considering a highly challenging ×4 EnvCF refinement task, we set $\delta_{HR} = 256$, utilizing a sampling interval of $\Delta x = \Delta y = 1$ m to sample the city map along with its associated channel gain values and environmental information, resulting in the HR EnvCF, denoted as $\mathbf{F}_{HR,n}$.
For the LR counterpart, $\delta_{LR}$ is set to 64, meaning the sampling interval for the LR EnvCF is $\Delta x = \Delta y = 4$ m, yielding the LR EnvCF denoted as $\mathbf{F}_{LR,n}$. Similarly, based on the RadioMapSeer dataset, we generate 56,000 pairs of EnvCF samples, denoted as $\{\mathbf{F}_{HR,n}, \mathbf{F}_{LR,n}\}_{n=1}^{56000}$, and split them into training and validation sets in a 4:1 ratio. The proposed CDiff is trained utilizing 2 Nvidia RTX-4090 GPUs, each with 24 GB of memory, and tested on a single Nvidia RTX-4090 GPU with 24 GB of memory. We employ the Adam optimizer with a learning rate of 5×10⁻⁵ for model parameter updates over 500,000 iterations, and the batch size is set to 16. Starting from the 5,000th iteration, we introduce the exponential moving average algorithm [11], with the decay factor set to 0.9999. More parameter configurations are summarized in Table I.

TABLE I
SYSTEM AND MODEL PARAMETERS

Parameter | Value
Size of the target area A | D × D = 256 × 256 m²
Sampling interval for HR EnvCF | Δx = Δy = 1 m
Sampling interval for LR EnvCF | Δx = Δy = 4 m
Carrier frequency | f = 5.9 GHz
Bandwidth | B = 10 MHz
Transmit power | 23 dBm
Noise power spectral density | -174 dBm/Hz
Variance schedule | Linear: $\beta_1 = 10^{-6}$ to $\beta_T = 10^{-2}$

B. Experiment Results

To comprehensively assess the effectiveness of the proposed approach, we conduct experiments on the ×4 EnvCF reconstruction task, comparing its performance against several baselines, including Bilinear, Nearest, Kriging [7], RBF [6], and SR-GAN [15]. Without loss of generality, three widely adopted metrics, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and normalized mean squared error (NMSE), are
employed to evaluate performance. Table II presents a quantitative analysis of the proposed CDiff and baselines on the ×4 EnvCF reconstruction task. It can be observed that the performance of the Kriging algorithm is relatively suboptimal. Notably, compared to the baselines, the proposed approach achieves competitive reconstruction performance, with PSNR, SSIM, and NMSE values of 31.15, 0.9280, and 0.0073, respectively. As shown in Fig. 2, to better illustrate the qualitative analysis, we randomly visualize the reconstruction results of EnvCF utilizing the proposed CDiff and baselines. Note that the proposed approach effectively refines both environmental information and CF, closely approximating the ground-truth EnvCF with minimal error.

TABLE II
PERFORMANCE COMPARISON WITH BASELINES

Method | PSNR | SSIM | NMSE
Bilinear | 27.24 | 0.8521 | 0.0172
Nearest | 26.25 | 0.8331 | 0.0215
Kriging | 19.88 | 0.6725 | 0.1166
RBF | 26.99 | 0.8613 | 0.0180
SR-GAN | 29.75 | 0.7517 | 0.0089
CDiff | 31.15 | 0.9280 | 0.0073

Fig. 2. Random visualizations of the HR EnvCF reconstruction results using the proposed CDiff and baselines (panels: Bilinear, Nearest, Kriging, RBF, SR-GAN, CDiff, Ground Truth).

V. Conclusion

This paper proposed a deep conditional generative learning-enabled EnvCF refinement approach that effectively refined both environmental information and CF, achieving a fourfold enhancement in granularity.
Specifically, we employed variational inference to derive a surrogate objective and proposed the CDiff framework, which effectively generates HR EnvCF conditioned on LR EnvCF. Experimental results showed that the proposed approach achieved significant improvements in enhancing the granularity of EnvCF compared to the baselines.

REFERENCES

[1] Y. Zeng, J. Chen, J. Xu, D. Wu, X. Xu, S. Jin, X. Gao, D. Gesbert, S. Cui, and R. Zhang, "A tutorial on environment-aware communications via channel knowledge map for 6G," IEEE Commun. Surv. Tuts., vol. 26, no. 3, pp. 1478–1519, Feb. 2024.
[2] Z. Jin, L. You, J. Wang, X.-G. Xia, and X. Gao, "An I2I inpainting approach for efficient channel knowledge map construction," IEEE Trans. Wirel. Commun., vol. 24, no. 2, pp. 1415–1429, Feb. 2025.
[3] Z. Yang, Z. Zhou, and Y. Liu, "From RSSI to CSI: Indoor localization via channel response," ACM Comput. Surveys, vol. 46, no. 2, pp. 1–32, Dec. 2013.
[4] D. Lee, D. Berberidis, and G. B. Giannakis, "Adaptive Bayesian radio tomography," IEEE Trans. Signal Process., vol. 67, no. 8, pp. 1964–1977, Apr. 2019.
[5] X. Xu and Y. Zeng, "How much data is needed for channel knowledge map construction?" IEEE Trans. Wirel. Commun., vol. 23, no. 10, pp. 13011–13021, Oct. 2024.
[6] S. Zhang, T. Yu, B. Choi, F. Ouyang, and Z. Ding, "Radiomap inpainting for restricted areas based on propagation priority and depth map," IEEE Trans. Wirel. Commun., vol. 23, no. 8, pp. 9330–9344, Feb. 2024.
[7] K. Sato and T. Fujii, "Kriging-based interference power constraint: Integrated design of the radio environment map and transmission power," IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 1, pp. 13–25, Mar. 2017.
[8] R. Levie, Ç. Yapar, G. Kutyniok, and G. Caire, "RadioUNet: Fast radio map estimation with convolutional neural networks," IEEE Trans. Wirel. Commun., vol. 20, no.
6, pp. 4001–4015, Jun. 2021.
[9] S. Bakirtzis, J. Chen, K. Qiu, J. Zhang, and I. Wassell, "EM DeepRay: An expedient, generalizable, and realistic data-driven indoor propagation model," IEEE Trans. Antennas Propag., vol. 70, no. 6, pp. 4140–4154, Jun. 2022.
[10] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-resolution image synthesis with latent diffusion models," in Proc. Conf. Comput. Vis. Pattern Recognit. (CVPR), New Orleans, LA, USA, Jun. 2022, pp. 10684–10695.
[11] J. Ho, A. Jain, and P. Abbeel, "Denoising diffusion probabilistic models," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), Dec. 2020, pp. 6840–6851.
[12] Z. Jin, L. You, H. Zhou, Y. Wang, X. Liu, X. Gong, X. Gao, D. W. K. Ng, and X.-G. Xia, "GDM4MMIMO: Generative diffusion models for massive MIMO communications," arXiv preprint arXiv:2412.18281, 2024.
[13] Z. Jin, L. You, J. Wang, X.-G. Xia, and X. Q. Gao, "Channel knowledge map construction with Laplacian pyramid reconstruction network," in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Dubai, United Arab Emirates, Apr. 2024, pp. 1–6.
[14] Z. Jin, L. You, D. W. K. Ng, X.-G. Xia, and X. Gao, "Near-field channel estimation for XL-MIMO: A deep generative model guided by side information," IEEE Trans. Cogn. Commun. Netw., early access, May 2025.
[15] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 4681–4690.
arXiv:2505.07958v1 [math.PR] 12 May 2025

Laws of Large Numbers for Information Resolution

Daniel Raban*
Department of Statistics, University of California, Berkeley

May 14, 2025

Abstract

Laws of large numbers establish asymptotic guarantees for recovering features of a probability distribution using independent samples. We introduce a framework for proving analogous results for recovery of the σ-field of a probability space, interpreted as information resolution: the granularity of measurable events given by comparison to our samples. Our main results show that, under iid sampling, the Borel σ-field in $\mathbb{R}^d$ and in more general metric spaces can be recovered in the strongest possible mode of convergence. We also derive finite-sample $L^1$ bounds for uniform convergence of σ-fields on $[0,1]^d$. We illustrate the use of our framework with two applications: constructing randomized solutions to the Skorokhod embedding problem, and analyzing the loss of variants of random forests for regression.

1 Introduction

Laws of large numbers generally assert that, in the context of iid sampling, we can asymptotically recover aspects of our probability space. For example, the strong and weak laws of large numbers assert that we can recover the mean of a measure µ, and, perhaps more ambitiously, the Glivenko–Cantelli theorem guarantees recovery of the entire measure via its cumulative distribution function. Inconspicuously absent from these theorems is the following consideration: Given a probability space $(S, \mathcal{B}, \mu)$, we can recover the measure µ we were sampling from, but what about the σ-field $\mathcal{B}$? Does the information of our samples $X_i$ allow us to measure the same resolution of events as the unknown process associated to the samples? The goal of this paper is to introduce a notion of laws of large numbers regarding recovery of the information resolution, as represented by the notion of σ-fields, associated to the target measure µ generating our iid samples.
Just as one approximates the underlying mean by a sample mean or the underlying CDF by an empirical CDF, we will approximate the underlying σ-field by empirical σ-fields, representing the granularity of the events we can measure by comparison to our samples.

*Email: danielraban@berkeley.edu

We will prove examples of these laws of large numbers in settings such as sampling in $\mathbb{R}^d$ and in more general metric spaces. We will also present two applications of our theory. The first gives a simple method for randomly generating solutions to the Skorokhod embedding problem by constructing stopping times for Brownian motion through sequences of hitting barriers, interpreted as increasingly resolving partitions of $\mathbb{R}$. The second applies our theory to random forests, analyzing how regression tree loss depends on tree depth by viewing feature space splits as progressively finer resolutions. Here is a basic example illustrating the notion of information resolution.

Example 1.1. Suppose we sample $X_1, X_2, X_3 \overset{\text{iid}}{\sim} \mu$ and get the values $X_1 = 5$, $X_2 = -4$, and $X_3 = 1$. What is the empirical resolution afforded by the knowledge of our three sample values? One choice is as follows: If we were to continue sampling $Z_1, Z_2, \ldots \overset{\text{iid}}{\sim} \mu$, we would be able to compare the values of the $Z_i$ with our original sample values $X_1, X_2, X_3$. We
https://arxiv.org/abs/2505.07958v1
would be able to determine events such as $\{X_2 < Z_i \leq X_3\}$.

Figure 1: Comparing a new sample $Z_i$ to the previous samples $X_1, X_2, X_3$.

From this perspective, the σ-field representing the resolution given by our first three samples is the σ-field generated by the partition
$$\mathcal{F}_3 := \sigma\big((-\infty, -4], (-4, 1], (1, 5], (5, \infty)\big).$$
Alternatively, we can express this σ-field more directly in terms of our samples using the sets $(-\infty, X_i]$:
$$\mathcal{F}_3 = \sigma\big((-\infty, 5], (-\infty, -4], (-\infty, 1]\big).$$
Defining empirical σ-fields in this way, i.e., $\mathcal{F}_n := \sigma\big((-\infty, X_1], \ldots, (-\infty, X_n]\big)$, we can measure more events as we obtain more samples, increasing the granularity of our information resolution. And as we let $n \to \infty$, we might hope that we can measure any event. Care must be taken, however, when defining a notion of empirical information resolution, as not every sequence of σ-fields will recover the maximal σ-field of the probability space. Here is a naive example illustrating this point.

Example 1.2. When sampling $X_1, X_2, X_3 \overset{\text{iid}}{\sim} \mu = \mathrm{Unif}[0,1]$, we could define $\mathcal{G}_n := \sigma(\{X_1\}, \ldots, \{X_n\})$. This, at best, generates a sub-σ-field of $\mathcal{C}$, the σ-field of countable and co-countable subsets of $[0,1]$. Moreover, all sets in $\mathcal{C}$ have Lebesgue measure 0 or 1, so from the perspective of Lebesgue measure on $[0,1]$, we have not gained any resolution at all. The sequence $\mathcal{G}_n$ of empirical σ-fields would only be sufficient for recovering the resolution of our space if the probability measure µ were discrete.

In general, the setup for σ-field recovery is as follows: draw iid samples $X_1, X_2, \ldots \overset{\text{iid}}{\sim} \mu$, taking values in a space $S$. To each $x \in S$, we associate a set $A_x$ that reflects the resolution or information revealed by observing $x$. These sets encode our assumption about the underlying structure, with the goal of recovering the maximal σ-field $\mathcal{F} := \sigma(\{A_x : x \in S\})$. We define the empirical resolution σ-fields $\mathcal{F}_n := \sigma(A_{X_1}, \ldots, A_{X_n})$, based on the first $n$ samples.
The central question is whether $\mathcal{F}_n$ converges to $\mathcal{F}$ as $n \to \infty$, under an appropriate notion of convergence for σ-fields. Convergence of σ-fields has been well-studied (see e.g. [Boy71, Nev72, Kud74, Rog74, VZ93, Vid18]), and there are a number of non-equivalent modes of convergence. Most of these modes of convergence involve comparing the σ-fields using a fixed measure µ, which we will usually assume to be the shared marginal distribution of our iid samples. We list some modes of convergence here; for a more in-depth study of how these notions relate to each other, see [Vid18], for example.

• Monotone convergence: $\mathcal{F}_n \to \mathcal{F}$ in the monotone sense (written $\mathcal{F}_n \uparrow \mathcal{F}$) means that $\bigvee_{n=1}^{\infty} \mathcal{F}_n = \mathcal{F}$. Here, $\bigvee_{n=1}^{\infty} \mathcal{F}_n$ is the join of these σ-fields with respect to inclusion; that is, it is the smallest σ-field containing $\mathcal{F}_n$ for each $n$.

• Hausdorff convergence: Given a fixed probability measure µ, $\mathcal{F}_n \to \mathcal{F}$ in the Hausdorff sense means that
$$d_\mu(\mathcal{F}_n, \mathcal{F}) := \sup_{\|f\|_{L^\infty(\mu)} \leq 1} \big\| \mathbb{E}[f \mid \mathcal{F}_n] - \mathbb{E}[f \mid \mathcal{F}] \big\|_{L^1(\mu)} \to 0.$$
This is equivalent [Rog74] to
$$d'_\mu(\mathcal{F}_n, \mathcal{F}) := \max\left( \sup_{A \in \mathcal{F}_n} \inf_{B \in \mathcal{F}} \mu(A \triangle B), \; \sup_{B \in \mathcal{F}} \inf_{A \in \mathcal{F}_n} \mu(A \triangle B) \right) \to 0,$$
which is convergence of the sets $\mathcal{F}_n$ to $\mathcal{F}$ in the Hausdorff topology induced by viewing these σ-fields as closed subsets of $L^1$ (via indicator functions of sets).
https://arxiv.org/abs/2505.07958v1
• Set-theoretic convergence: This means lim sup_{n→∞} F_n = lim inf_{n→∞} F_n = F, where

lim sup_{n→∞} F_n := ⋂_{n=1}^∞ ⋁_{k=n}^∞ F_k,   lim inf_{n→∞} F_n := ⋁_{n=1}^∞ ⋂_{k=n}^∞ F_k.

• Strong convergence: This means E[1_A | F_n] → E[1_A | F] in probability for all measurable A.

In general, monotone and Hausdorff convergence, which are not equivalent, are the strongest. Here is a diagram expressing the strength of various modes of convergence, including some not mentioned above; for a more complete picture, see [Vid18].

[Diagram of implications among modes of convergence: Monotone, Almost sure, Set-theoretic, Hausdorff, Strong, Weak.]

Hausdorff convergence, which is given by a pseudometric, may at first seem the most natural to use for an analogue of the Glivenko–Cantelli theorem, due to its uniform nature. However, we will see in Section 3 that Hausdorff convergence fails for even simple examples in ℝ. Monotone convergence, which appears regularly in probability theory (for example, in the context of martingale convergence), is another natural choice and will suffice in cases where Hausdorff convergence is not possible.

Outline. In what follows, we will prove laws of large numbers for two modes of convergence of σ-fields.

- In Section 2, we prove theorems for monotone convergence of σ-fields in ℝ^d and in more general metric spaces. This gives the strongest convergence possible, as monotone convergence implies all studied modes of convergence for σ-fields which do not imply Hausdorff convergence.

- In Section 3, we prove a weakened version of Hausdorff convergence (and give quantitative rates) by restricting the class of test functions to Lipschitz functions, rather than all of L∞(µ); in other words, we give bounds on

sup_{‖f‖_Lip ≤ 1} ‖E[f | F_n] − E[f | F]‖_{L1(µ)}.

- In Section 4, we apply our theorems to construct randomized solutions to the Skorokhod embedding problem and to analyze the loss of randomized regression trees. These applications use our theorems from Sections 2 and 3, respectively.
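For σ-fields generated by finite partitions, the pseudometric d′_µ above can be computed by brute force over all events. A toy sketch, exponential in the number of cells and intended only to make the definition concrete (the helper names are ours):

```python
from itertools import chain, combinations


def events(partition):
    """All events of the sigma-field generated by a finite partition:
    every union of cells, each event represented as a frozenset of atoms."""
    cells = list(partition)
    return [frozenset(chain.from_iterable(combo))
            for r in range(len(cells) + 1)
            for combo in combinations(cells, r)]


def d_prime(part_a, part_b, mu):
    """The Hausdorff-style pseudometric d'_mu between the sigma-fields
    generated by two finite partitions, comparing sets via mu(A symdiff B)."""
    A, B = events(part_a), events(part_b)
    m = lambda e: sum(mu[x] for x in e)
    half = lambda X, Y: max(min(m(x ^ y) for y in Y) for x in X)
    return max(half(A, B), half(B, A))


# Toy 4-point space with uniform measure (values purely illustrative).
mu = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
fine = [{0}, {1}, {2}, {3}]     # full resolution
coarse = [{0, 1}, {2, 3}]       # half the resolution
# The fine event {0, 2} is at distance 0.5 from every coarse event,
# so d'_mu between the two sigma-fields is 0.5.
dist = d_prime(coarse, fine, mu)
```

The asymmetry matters: every coarse event is exactly representable in the fine σ-field, so one half of the max is 0, and the distance is driven entirely by fine events like {0, 2} that the coarse partition cannot approximate.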
It is important to note that there are two layers of randomness at play: we want to study a probability space (S, F, µ), but we are generating the samples X_1, X_2, … iid ∼ µ via some background probability space (Ω, G, P). Just as classical laws of large numbers concern P-a.s. convergence of numbers or random measures, our theorems will concern P-a.s. and L1(P) convergence of random σ-fields.

2 Monotone convergence of resolution

When studying the convergence of σ-fields, we want to compare σ-fields by measuring the distance between sets with respect to a fixed measure µ. The measure µ cannot meaningfully distinguish between two sets A, B with µ(A △ B) = 0, so we will need to be precise with our statements. However, the following definition and subsequent proposition tell us that this technicality poses no obstruction to our understanding.

Definition 2.1. Let (S, F, µ) be a measure space, and let A, B ⊆ F. We say that A and B differ only by µ-null sets if

(i) ∀A ∈ A, ∃B ∈ B s.t. µ(A △ B) = 0,
(ii) ∀B ∈ B, ∃A ∈ A s.t. µ(A △ B) = 0.

We will make judicious use of the generating construction for σ-fields: σ(A) denotes the smallest σ-field containing all the sets in A, and we say that A generates σ(A). Before proving any results, we must first make sure that
altering generating sets by null sets does not cause any issues when generating σ-fields.

Proposition 2.1. Let (S, F, µ) be a measure space, and let A, B ⊆ F differ only by µ-null sets. Then σ(A) and σ(B) differ only by µ-null sets.

Proof. Let F := {A ∈ σ(A) : ∃B ∈ σ(B) s.t. µ(A △ B) = 0} be the members of σ(A) which are represented in σ(B) up to null sets. Then F is a σ-field:

(i) Empty set: ∅ ∈ F because ∅ ∈ σ(A) and σ(B).

(ii) Complements: If A ∈ F, then letting B be such that µ(A △ B) = 0, we get µ(A^c △ B^c) = 0. As B^c ∈ σ(B), we get A^c ∈ F.

(iii) Countable unions: If A_1, A_2, … ∈ F, then let B_1, B_2, … ∈ σ(B) be such that µ(A_i △ B_i) = 0 for i ≥ 1. Then

µ( (⋃_{i=1}^∞ A_i) △ (⋃_{i=1}^∞ B_i) ) ≤ µ( ⋃_{i=1}^∞ (A_i △ B_i) ) ≤ ∑_{i=1}^∞ µ(A_i △ B_i) = 0,

so ⋃_{i=1}^∞ A_i ∈ F.

F is a σ-field that contains A, so σ(A) ⊆ F. Hence, F = σ(A). The same argument shows that all members of σ(B) are represented in σ(A) up to null sets.

2.1 Monotone convergence of resolution in ℝ^d

In this section, we prove a basic law of large numbers for recovering the Borel σ-field in ℝ^d, using the left-infinite intervals/boxes which show up in the Glivenko–Cantelli theorem.

Theorem 2.1. Let (ℝ^d, B, µ) be a probability space equipped with the Borel σ-field, and let X_1, X_2, … iid ∼ µ. For x = (x_1, …, x_d) ∈ ℝ^d, let A_x := (−∞, x_1] × ⋯ × (−∞, x_d], and define the empirical σ-fields F_n := σ(A_{X_1}, …, A_{X_n}). Then F_n ↑ B a.s.; that is, ⋁_{n=1}^∞ F_n and B differ only by µ-null sets.

This choice of A_x is, of course, not the only choice that works. The proof works essentially the same with finite-sized boxes, balls, etc. To recover a different σ-field, one would use a different choice of A_x sets; the choice of A_x = (−∞, x_1] × ⋯ × (−∞, x_d] necessarily implies that we are attempting to recover a sub-σ-field of the Borel σ-field because σ({A_x : x ∈ ℝ^d}) = B.

Lemma 2.1. Let G ⊆ F be σ-fields, let µ be a probability measure defined on F, and let B ∈ F. Then the infimum inf_{A ∈ G} µ(A △ B) is achieved.

Proof of Lemma 2.1. To construct the minimizing set, we round E[1_B | G] to get the "closest indicator." Let A* = {x ∈ S : E[1_B | G] ≥ 1/2} for some version of E[1_B | G] as a member of L2(µ). We can directly show that µ(A* △ B) ≤ µ(A △ B) for any A ∈ G:

µ(A* △ B) = ‖1_{A*} − 1_B‖_{L1(µ)} = ‖1_{A*} − 1_B‖²_{L2(µ)}.

By the Pythagorean theorem,

= ‖1_{A*} − E[1_B | G]‖²_{L2(µ)} + ‖E[1_B | G] − 1_B‖²_{L2(µ)}.

By definition, for µ-a.e.
x ∈ S, 1_{A*}(x) is closer to E[1_B | G](x) than any other G-measurable indicator is, so

≤ ‖1_A − E[1_B | G]‖²_{L2(µ)} + ‖E[1_B | G] − 1_B‖²_{L2(µ)}
= ‖1_A − 1_B‖²_{L2(µ)}
= ‖1_A − 1_B‖_{L1(µ)}
= µ(A △ B).

Taking the case where this infimum is zero gives the following topological interpretation of the above lemma.

Corollary 2.1 (Lp(µ) closure of σ-fields). Let G ⊆ F be σ-fields, let µ be a probability measure defined on F, and let B ∈ F. If there exists a sequence B_n ∈ G such that µ(B_n △ B) → 0 as n → ∞, then B ∈ G. In other words, {1_A : A ∈ G} is a closed subset of Lp(µ) for all 1 ≤ p < ∞.

Proof of Theorem 2.1.
The idea is to reduce the problem to showing that our empirical σ-fields can approximate any box. Then we use the Glivenko–Cantelli theorem to approximate any box from the inside; see Figure 2 for a picture.

Step 1 (Reduce to recovering generating boxes): Since B is generated by the countable collection {A_q : q ∈ ℚ^d}, Proposition 2.1 reduces the problem to showing that for each q ∈ ℚ^d, with probability 1, there exists A ∈ ⋁_{n=1}^∞ F_n such that µ(A △ A_q) = 0.

Step 2 (Reduce to approximating non-null boxes): By Corollary 2.1, it suffices to show that inf_{A ∈ ⋁_{n=1}^∞ F_n} µ(A △ A_q) = 0 almost surely. Fix q ∈ ℚ^d and ε > 0. We will exhibit a set A ∈ ⋁_{n=1}^∞ F_n such that µ(A △ A_q) < ε. Moreover, we may assume that µ(A_q) ≠ 0; otherwise, we can just pick A = ∅ and be done.

Step 3 (Approximate boxes from inside): Consider the empirical measure µ_N := (1/N) ∑_{n=1}^N δ_{X_n}. From the Glivenko–Cantelli theorem, we can choose N such that sup_{x ∈ ℝ^d} |µ_N(A_x) − µ(A_x)| < ε/2. For non-null A_q, P(X_n ∉ A_q ∀n) = 0, so we may assume, increasing N if necessary, that A_q contains X_n for some n ≤ N. Using this value of N, define r = (r_1, …, r_d) ∈ ℝ^d by r_i := max{(X_j)_i : X_j ∈ A_q, 1 ≤ j ≤ N}. Then µ_N(A_r) = µ_N(A_q), and A_r ⊆ A_q, so we can write

µ(A_r △ A_q) = µ(A_q) − µ(A_r)
= [µ(A_q) − µ_N(A_q)] + [µ_N(A_q) − µ_N(A_r)] + [µ_N(A_r) − µ(A_r)]
< ε/2 + 0 + ε/2 = ε.

Figure 2: Approximating a box A_q from inside in the proof of Theorem 2.1.

2.2 Monotone convergence of resolution in metric spaces

Before extending our viewpoint to the more general setting of metric spaces, we must first review some technical notions regarding regularity of measures. The following definition is from [Rig21].

Definition 2.2. Let µ be a measure on a metric space (S, ρ). We say that µ is of Vitali type with respect to ρ if for every A ⊆ S and every family C of balls in (S, ρ) such that inf{r > 0 : B(x, r) ∈ C} = 0 for all x ∈ A, there exists a countable subfamily D ⊆ C of disjoint balls for which

µ( A \ ⋃_{B ∈ D} B ) = 0.

[Rig21] provides a number of examples with this property. Here are a few classes of examples.

Example 2.1. Any Radon measure on ℝ^d is of Vitali type with respect to the Euclidean metric.

Example 2.2. Every probability measure µ on (S, ρ) which is doubling is of Vitali type with respect to ρ. Here, µ is said to be doubling if there exists a constant C ≥ 1 such that

µ(B_{2r}(x)) ≤ C µ(B_r(x))   ∀x ∈ S, r > 0.

The reason we care about the Vitali type property is that it describes the regularity of the density of a set A with respect to the measure µ. In particular, it tells us that the measure µ enjoys an analogue of the Lebesgue differentiation theorem.

Lemma 2.2 ([Rig21]). Let µ be a measure which is of Vitali type with respect to a metric space (S, ρ). Then for every measurable set A,

lim_{r → 0} µ(A ∩ B_r(x)) / µ(B_r(x)) = 1_A(x)   for µ-a.e. x ∈ S.

When generalizing the ideas of the previous section to metric spaces, we lose the helpful ordering of ℝ. The natural candidate for a set A_x in a general metric space is a ball B_r(x) of radius r, centered at x.
However, the following simple example shows that balls of a fixed radius may not always suffice.

Example 2.3. Consider the metric space [0, 1] with the Euclidean metric and the measure µ({1/k}) = 2^{−k} for k = 1, 2, …. If we set A_x = B_r(x) for any fixed r > 0, then there are some points we cannot distinguish.

However, we can still recover information resolution by sampling balls of varying radii. To make sure we can obtain a ball of any arbitrarily small radius, we introduce auxiliary randomness, which can be interpreted as a degree of noise determining the resolution given by the sample point X_n.

Theorem 2.2. Let (S, ρ, B, µ) be a separable metric space equipped with the Borel σ-field and a probability measure µ which is of Vitali type with respect to ρ. Let X_1, X_2, … iid ∼ µ and R_1, R_2, … iid ∼ ν be independent, where ν is a distribution on ℝ_{≥0} with ν((0, ε)) > 0 for every ε > 0. For x ∈ S and r > 0, let A_{x,r} := B_r(x) = {z ∈ S : ρ(z, x) < r}, and define the empirical σ-fields F_n := σ(A_{X_1,R_1}, …, A_{X_n,R_n}). Then F_n ↑ B a.s.; that is, ⋁_{n=1}^∞ F_n and B differ only by µ-null sets.

Remark 2.1. The metric structure is not entirely essential in Theorem 2.2. We mainly restrict this theorem to metric spaces to express the regularity of µ via the notion of set density with respect to µ. This proof technique would work for any choice of sampling sets A_{x,r} with appropriate regularity for µ as the sets A_{x,r} more closely approximate x, e.g., a countable neighborhood base for a second countable topological space when µ is purely atomic. In fact, the σ-field need not be the Borel σ-field in general!

Proof. Let C be a countable dense subset of S. Balls of rational radius centered at points in C generate B, so it suffices to show that if c ∈ C and r ∈ ℚ_{>0}, then ⋁_{n=1}^∞ F_n contains B_r(c) a.s. As in the proof of Theorem 2.1, it suffices for us to show that inf_{B′ ∈ ⋁_{n=1}^∞ F_n} µ(B_r(c) △ B′) = 0 a.s. We will show that the complementary event has probability 0. Suppose that

inf_{B′ ∈ ⋁_{n=1}^∞ F_n} µ(B_r(c) △ B′) = δ > 0.

Then, by Lemma 2.1, there exists some B* ∈ ⋁_{n=1}^∞ F_n with µ(B_r(c) △ B*) = δ. Without loss of generality, we may assume that µ(B_r(c) \ B*) > 0; the argument for B* \ B_r(c) is analogous. Lemma 2.2 provides a set U ⊆ B_r(c) \ B* of positive measure which only contains points of positive density with respect to B_r(c):

lim_{t → 0} µ((B_r(c) \ B*) ∩ B_t(x)) / µ(B_t(x)) = 1   ∀x ∈ U.

Hence, for each x ∈ U, there exists some radius r_x such that for t ≤ r_x,

µ((B_r(c) \ B*) ∩ B_t(x)) / µ(B_t(x)) > 1/2.

Rearranging gives µ((B_r(c) \ B*) ∩ B_t(x)) > µ(B_t(x) \ (B_r(c) \ B*)). On the other hand, disintegrating over U gives

P(X_n ∈ U, R_n ≤ r_{X_n}) = ∫_U P(R_n ≤ r_x) dµ(x) > 0,

since P(R_n ≤ r_x) > 0 for each x, so that P(∃n s.t. X_n ∈ U, R_n ≤ r_{X_n}) = 1. This event is inconsistent with the fact that µ(B_r(c) △ B*) = δ because it implies that we could take B** := B* ∪ B_{R_n}(X_n) for some n and get the improved approximation

µ(B_r(c) △ B**) ≤ µ(B_r(c) △ B*) + [µ(B_{R_n}(X_n) \ (B_r(c) \ B*)) − µ((B_r(c) \ B*) ∩ B_{R_n}(X_n))] < δ,

since the first term equals δ and the bracketed difference is negative, contradicting the optimality of B*. So P(inf_{B′ ∈ ⋁_{n=1}^∞ F_n} µ(B_r(c) △ B′) > 0) = 0, as claimed.

3 Uniform convergence of resolution

If F_n ↑ F, the martingale convergence theorem gives E[f | F_n] → E[f | F] a.s. and in L1 for all bounded f. Hausdorff convergence can be viewed as a uniform version of this convergence:

d_µ(F_n, F) := sup_{‖f‖_{L∞(µ)} ≤ 1} ‖E[f | F_n] − E[f | F]‖_{L1(µ)} → 0.

However, uniform convergence over the entire
unit ball in L∞(µ) is too strong a condition for our purposes, as the following example shows.

Example 3.1. Consider the probability space ([0, 1], B, λ), where B is the Borel σ-field and λ is Lebesgue measure. Given any realization F_n := σ([0, x_1], …, [0, x_n]) of an empirical σ-field, we adversarially construct a function f_n as follows: let 0 < x_(1) < ⋯ < x_(n) < 1 list the sample points in increasing order, and take the convention that x_(0) = 0 and x_(n+1) = 1. Define

f_n(x) = 1 if x_(k) ≤ x < (x_(k) + x_(k+1))/2 for some 0 ≤ k ≤ n,
f_n(x) = −1 if (x_(k) + x_(k+1))/2 ≤ x < x_(k+1) for some 0 ≤ k ≤ n.

See Figure 3 for an illustration. Then on each A ∈ F_n, E[f_n | A] = 0, so E[f_n | F_n] = 0 λ-a.s. Thus, since E[f_n | B] = f_n,

d_λ(F_n, B) ≥ ‖E[f_n | F_n] − E[f_n | B]‖_{L1(λ)} = ‖f_n‖_{L1(λ)} = 1.

So we cannot hope for uniform convergence over such a large class of functions.

Figure 3: An adversarially chosen function f_2 which maximizes the Hausdorff distance.

Instead of uniform convergence over all f with ‖f‖_{L∞(µ)} ≤ 1, we consider uniform convergence over 1-Lipschitz f. We again use the coordinate-wise dominated boxes A_x := [0, x_1] × ⋯ × [0, x_d], but this choice is arbitrary, and one can prove uniform convergence with other choices for A_x (perhaps with different rates). Due to the asymmetrical nature of this partition, the sets containing points with coordinates near 1 will be larger than the sets containing coordinates near 0, leading to a slow rate of convergence of O((log n/n)^{1/d}). After stating this slow rate, we will see that a symmetrizing adjustment to this partition leads to a much faster rate of O(1/n).

Theorem 3.1. Let ([0, 1]^d, B, µ) be a probability space equipped with the Borel σ-field, and let X_1, X_2, … iid ∼ µ, where µ ≪ λ and γ^{−1} < dµ/dλ < γ for some γ ≥ 1. For x = (x_1, …, x_d) ∈ [0, 1]^d, let A_x := [0, x_1] × ⋯ × [0, x_d], and define the empirical σ-fields F_n := σ(A_{X_1}, …, A_{X_n}).
Then

sup_{‖f‖_Lip ≤ 1} ‖E[f | F_n] − f‖_{L1(µ)} → 0, P-a.s. and in L1(P),

where ‖f‖_Lip := sup{ |f(x) − f(y)| / |x − y| : x ≠ y } is the Lipschitz norm. Moreover,

E[ sup_{‖f‖_Lip ≤ 1} ‖E[f | F_n] − f‖_{L1(µ)} ] ≲ (log n / n)^{1/d}   ∀n ≥ 3.

The constant factor in the bound depends only on d and γ. The proof is in Appendix A.

To improve the convergence, we use the more symmetric partition F̃_n := σ({x : x_i ≤ X_{j,i}} : 1 ≤ i ≤ d, 1 ≤ j ≤ n), which splits the unit cube in two pieces along every coordinate of each sample point X_j. See Figure 4 for an illustration. Now, we get a much faster rate:

Theorem 3.2 (Faster uniform convergence with symmetrized A_x). Let ([0, 1]^d, B, µ) be a probability space equipped with the Borel σ-field, and let X_1, X_2, … iid ∼ µ, where µ ≪ λ and γ^{−1} < dµ/dλ < γ for some γ ≥ 1. Define the empirical σ-fields F̃_n := σ({x : x_i ≤ X_{j,i}} : 1 ≤ i ≤ d, 1 ≤ j ≤ n). Then

sup_{‖f‖_Lip ≤ 1} ‖E[f | F̃_n] − f‖_{L1(µ)} → 0, P-a.s. and in L1(P),

where ‖f‖_Lip is the Lipschitz norm as above. Moreover,

E[ sup_{‖f‖_Lip ≤ 1} ‖E[f | F̃_n] − f‖_{L1(µ)} ] ≲ √d / n   ∀n ≥ 1.

The constant factor in the bound depends only on γ. The proof is in Appendix A.

Figure 4: The points generating F̃_n, splitting the unit cube in every coordinate.

Remark 3.1. By scaling the sides of the box by
constants, the results in Theorem 3.1 and Theorem 3.2 apply to boxes in ℝ^d which are not [0, 1]^d. We incur only an extra multiplicative factor of the volume of the box in our bound. Similarly, if we allow f to be L-Lipschitz, we incur only a factor of L.

Remark 3.2. The bound in Theorem 3.2 is tight. For a lower bound, consider the example f(x) = x_1 and µ = λ. The partition is an axis-aligned grid, so the conditional expectation of f on any box of the partition is just the average of the maximal and minimal x_1 values for that box. So the integral is independent of the latter d − 1 coordinates, and the problem reduces to a 1-dimensional problem. Denoting the first coordinates of the samples as X_{1,1}, X_{2,1}, …, X_{n,1} and the order statistics of these values as 0 = Y_0 < Y_1 < ⋯ < Y_n < Y_{n+1} = 1, we write

‖E[f | F̃_n] − f‖_{L1(µ)} = ∑_{k=0}^n ∫_{Y_k}^{Y_{k+1}} | (Y_k + Y_{k+1})/2 − x_1 | dx_1 = ∑_{k=0}^n (Y_{k+1} − Y_k)² / 4.

Taking expectations, we get

E[‖E[f | F_n] − f‖_{L1(µ)}] ≥ E[‖E[f | F̃_n] − f‖_{L1(µ)}] = (1/4) ∑_{k=0}^n E[(Y_{k+1} − Y_k)²],

where the Y_k are the order statistics of n iid uniform random variables on [0, 1]. The differences of these successive order statistics are Beta(1, n) distributed, so this equals

(1/4) (n + 1) · 2/((n + 1)(n + 2)) = 1/(2(n + 2)).

The σ-field F̃_n is a refinement of F_n, so the lower bound of order 1/n applies to F_n as well.

4 Applications

4.1 Randomized Skorokhod embeddings

Skorokhod ([Sko61], translated into English [Sko65]) posed and solved the problem of embedding distributions of real-valued random variables into Brownian motion by stopping the process at suitably constructed random times. Since then, many solutions to the Skorokhod embedding problem have been discovered, with varying properties of interest; see [Obł04] for a survey detailing the various constructions and their historical context and [BCH17] for a more recent work unifying many solutions to the problem. Of particular note for our purposes is Dubins' 1968 solution to the Skorokhod embedding problem [Dub68].
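The recursive barrier construction described below (conditional means of X on each cell of the current partition) is easy to make concrete for a discrete distribution. A minimal sketch, assuming a mean-zero discrete µ; the function name is ours:

```python
def dubins_barriers(atoms, probs, levels):
    """Successive barrier levels in Dubins' binary splitting for a discrete,
    mean-zero distribution mu = sum_i probs[i] * delta_{atoms[i]}.

    At each stage, every cell (lo, hi] of the current partition of the line
    is split at its conditional mean E[X | X in cell]; the Brownian motion
    stopped at the level-n barriers has the law of E[X | B_n]."""
    cuts = [0.0]  # level 0: split at the (zero) overall mean
    levels_out = []
    for _ in range(levels):
        edges = [float("-inf")] + sorted(cuts) + [float("inf")]
        new = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            cell = [(x, p) for x, p in zip(atoms, probs) if lo < x <= hi]
            mass = sum(p for _, p in cell)
            if mass > 0:
                new.append(sum(x * p for x, p in cell) / mass)
        levels_out.append(new)
        cuts = sorted(set(cuts) | set(new))
    return levels_out


# Uniform distribution on {-3, -1, 1, 3}: level 1 gives the barriers
# x_1 = E[X | X < 0] = -2 and x_2 = E[X | X > 0] = 2, and level 2
# already reaches the atoms themselves, so B_{T_2} has the target law.
barriers = dubins_barriers([-3, -1, 1, 3], [0.25] * 4, 2)
```

For distributions with finitely many atoms the recursion terminates in finitely many levels, which is the discrete shadow of the limit T = lim T_n used below.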
By adjusting Dubins' solution, we will provide a method of randomly generating Skorokhod embeddings for a given distribution µ.

Dubins' construction proceeds via a binary splitting martingale. Suppose X ∼ µ (with E[X] = 0) and we want to generate the distribution of X via a stopping time T for Brownian motion (meaning B_T =_d X). We first create barriers for the Brownian motion at the points x_1 := E[X | X < 0] and x_2 := E[X | X > 0] and let T_1 := inf{t > 0 : B_t ∈ {x_1, x_2}}. This divides the line into four intervals: X < x_1, x_1 < X < 0, 0 < X < x_2, and X > x_2.

Figure 5: Top: first step of Dubins' binary splitting, with barriers x_1 and x_2. Bottom: refinement using y_1, …, y_4.

For the next step, we divide each of the intervals in two by adding more barriers. In particular, we add barriers

y_1 := E[X | X ≤ x_1],  y_2 := E[X | x_1 < X < 0],  y_3 := E[X | 0 < X ≤ x_2],  y_4 := E[X | X > x_2],

and let T_2 := inf{t > T_1 : B_t ∈ {y_1, y_2, y_3, y_4}}. See Figure
5 for an illustration. Repeating this process, we end up with a sequence of stopping times (T_n)_{n=1}^∞ for Brownian motion such that B_{T_n} equals, with equal probability, any of the 2^n level-n barrier points. In fact, a more careful analysis of this process shows that B_{T_n} =_d E[X | B_n], where B_n is the σ-field representing the partition of the interval by all barrier points up to level n. Taking T := lim_{n → ∞} T_n gives us the stopping time we desire. Figure 6 illustrates the first few steps of this process on a simulated Brownian motion.

Figure 6: The first 3 stopping times in Dubins' construction.

The key insight for this application of our framework is that Dubins' meticulously constructed "dyadic" partitions of the line are not actually necessary. We will show that any (deterministic) sequence of partitions adding one point at a time suffices for the embedding, provided that the information resolution of the partitions asymptotically captures the degree of resolution associated to µ. Applying our framework in the context of generating random partitions from iid sampling, we obtain random Skorokhod embeddings.

The following theorem constructs a Skorokhod embedding for a (deterministic) sequence of partitions.

Theorem 4.1. Let µ be a distribution on ℝ with mean zero and finite second moment, and let X ∼ µ. Let (x_n)_{n=1}^∞ be a sequence of real numbers, and let F_n := σ((−∞, x_1], …, (−∞, x_n]) define a filtration such that F_n ↑ B, i.e. ⋁_{n=1}^∞ F_n and the Borel σ-field B differ only by µ-null sets. There exists a stopping time T(x_1, x_2, …) for Brownian motion such that, P-a.s., B_T =_d X and E[T] = E[X²].

Proof. Define a sequence of stopping times by T_0 = 0 and T_{n+1} = inf{t > T_n : B_t ∈ ran(E[X | F_{n+1}])}. Then T_0 ≤ T_1 ≤ T_2 ≤ ⋯, so there exists a (possibly infinite) stopping time T = lim_{n → ∞} T_n. Moreover, B_{T_n} =_d E[X | F_n] for each n, as B_{T_{n+1}} | B_{T_n} = x is equal to or supported on the same two points as E[X | F_{n+1}] | E[X | F_n] = x, and

E[B_{T_{n+1}} | B_{T_n} = x] = x = E[ E[X | F_{n+1}] | E[X | F_n] = x ].
The latter equality is due to the fact that E[E[X | F_{n+1}] | F_n] = E[X | F_n]. This lets us bound the size of T_n, as

E[T_n] = E[E[B²_{T_n} | T_n]] = E[B²_{T_n}] = E[(E[X | F_n])²] ≤ E[X²],

where we have used the tower property of conditional expectation and the conditional version of Jensen's inequality. By the monotone convergence theorem, E[T] ≤ E[X²], from which we conclude that T < ∞ a.s. Now, by Theorem 2.1 and the martingale convergence theorem, E[X | F_n] converges in distribution to X. By the continuity of Brownian motion paths, B_{T_n} converges in distribution to B_T. Thus, we may conclude that B_T =_d X, from which we conclude E[T] = E[E[B²_T | T]] = E[B²_T] = E[X²].

Corollary 4.1 (Randomized Skorokhod embedding). Let µ be a distribution on ℝ with mean zero and finite second moment, and let X, X_1, X_2, … iid ∼ µ. There exists a randomized (depending on X_1, X_2, …) stopping time T for Brownian motion such that, P-a.s., B_T =_d X and E[T | X_1, X_2, …] = E[X²].

Proof. We apply Theorem 4.1 to the sequence of empirical σ-fields given by F_n := σ((−∞, X_1], …, (−∞, X_n]). Theorem 2.1 shows that F_n ↑ B.

Remark 4.1. It is not
necessary for X_1, X_2, … to be sampled from the same measure as X. Theorem 4.1 still holds if we sample X_1, X_2, … iid ∼ ν, provided that supp ν ⊇ supp µ. This has the interesting consequence that there exist universal generating measures for randomized Skorokhod embeddings. For example, if ν is the standard normal distribution (or any other measure with full support), then sampling X_1, X_2, … iid ∼ ν generates a randomized Skorokhod embedding construction which is valid for any µ.

This construction yields different Skorokhod embeddings for each sequence of values X_1, X_2, …. See Figure 7 for a simulation comparing Dubins' classical Skorokhod embedding and two independent randomized Skorokhod embeddings on the same Brownian motion.

Figure 7: Stopping times for Dubins' embedding and two independent randomized embeddings on the same Brownian motion, embedding the uniform distribution on [−0.5, 0.5].

4.2 Random splitting random forests

Our second example application of this framework is to obtain uniform risk bounds for randomized regression trees in a random forest. Random forest models [Bre01] are popular machine learning tools for tasks such as classification and regression. In the case of regression, the model constructs a number of regression trees, with splits determined by some optimal choice of splitting along a randomly selected subset of the feature coordinates; see Figure 8 for an illustration of splitting the feature space. Then, within each box of the feature space, the model reports the average of the values of the data points in that box.

The key facet relating regression trees to our considerations is that a regression tree is essentially reporting the conditional expectation with respect to a partition of the feature space. From this perspective, we build our tree by refining the partition, i.e. by increasing the resolution of the associated σ-field.
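The identification of a regression tree with a conditional expectation over a partition can be sketched in one dimension. A minimal illustration of the random-splitting variant discussed below (all names and parameter values are ours, not a real library API):

```python
import numpy as np


def tree_predict(x_train, y_train, cuts, x_query):
    """A 1-d regression tree as a conditional expectation: the prediction at x
    is the average of the training responses in x's cell of the partition
    induced by the split points `cuts` (drawn independently of the data,
    as in the random-splitting variant)."""
    edges = np.concatenate(([-np.inf], np.sort(cuts), [np.inf]))
    train_bin = np.digitize(x_train, edges)
    query_bin = np.digitize(x_query, edges)
    preds = np.empty(len(x_query))
    for b in np.unique(query_bin):
        in_cell = train_bin == b
        # Empty cells fall back to the global mean (cf. Remark 4.2 below).
        preds[query_bin == b] = y_train[in_cell].mean() if in_cell.any() else y_train.mean()
    return preds


rng = np.random.default_rng(0)
x = rng.uniform(size=5000)
y = x + 0.1 * rng.normal(size=5000)   # Y = f(X) + eps with f(x) = x
cuts = rng.uniform(size=50)           # random split points G_1, ..., G_m
xq = rng.uniform(size=1000)
l1_error = np.mean(np.abs(tree_predict(x, y, cuts, xq) - xq))
```

With many samples per cell, the estimator approaches E[f | F_m] and the remaining L1 error is governed by the cell diameters, which is exactly the quantity the theorems of Section 3 control.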
So we can study the error incurred in building our tree via convergence of the σ-fields representing these partitions.

For a regression tree, even with an infinite amount of data, performance is bottlenecked by the coarseness of the resolution. Here, we use the notion of information resolution to address the following question: given infinite data, how does the error decay as the resolution becomes finer? While we focus on the infinite-data setting for simplicity, similar ideas could be used to study the trade-off between sample size and resolution.

We can alter the standard random forest model by constructing regression trees using random splits, similarly to the Extra-Trees algorithm from [GEW06]. That is, we pick random points G_1, …, G_m iid ∼ ν and construct a partition from these points. For example, we could construct a grid using all axis-parallel lines passing through G_1, …, G_m, or we could use an asymmetric partition such as the one in Theorem 3.1; Figure 9 illustrates this variant of random splits.

Figure 8: Axis-parallel splits of the feature space in a regression tree.

In this setting, our Theorem 3.1 essentially immediately provides a bound on the risk, where the parameter f can even
be chosen adversarially against our regression tree estimator. For simplicity, we will treat the case of the partitions from Theorem 3.1 and Theorem 3.2, but the same analysis could be carried out with other choices of randomized sets A_y.

Theorem 4.2 (Random splitting regression tree loss). Let (X_i, Y_i)_{i=1}^N be drawn iid according to Y = f(X) + ε, where ε is independent of X with E[ε] = 0 and Var(ε) = σ². Draw (G_k)_{1≤k≤m} iid ∼ ν with γ^{−1} < dν/dλ < γ for some γ ≥ 1, define F_m := σ(A_{G_1}, …, A_{G_m}) with A_y := {x : x_i ≤ y_i ∀1 ≤ i ≤ d}, and define the random splitting regression tree estimator

f̂(x) := (1/|R_x|) ∑_{i : X_i ∈ R_x} Y_i,

where R_x is the set containing x in the finest partition given by F_m. Then

lim sup_{N → ∞} sup_{‖f‖_Lip ≤ 1} E[‖f̂ − f‖_{L1(µ)}] ≲ (log m / m)^{1/d}.

If we use F̃_m := σ({x : x_i ≤ G_{j,i}} : 1 ≤ i ≤ d, 1 ≤ j ≤ m) in place of F_m, then

lim sup_{N → ∞} sup_{‖f‖_Lip ≤ 1} E[‖f̂ − f‖_{L1(µ)}] ≲ √d / m.

Figure 9: Splitting the feature space using random points for an asymmetric partition.

Remark 4.2. We only take the limit as N → ∞ (infinitely many samples) to guarantee that every set in the partition of the feature space a.s. contains at least 1 data point (so that f̂ is well-defined). Depending on the choice of sets A_y (and their ensuing geometry), one may calculate the relationship between N and m to ensure that with high probability, no partition set is empty.

Proof. We will treat the case of F_m; the proof for F̃_m is similar. In taking the limit as N → ∞, we may assume that all grid boxes contain at least one X_i, so that f̂ is well-defined. Then, using the triangle inequality,

lim sup_{N → ∞} sup_{‖f‖_Lip ≤ 1} E[‖f̂ − f‖_{L1(µ)}]
≤ lim sup_{N → ∞} sup_{‖f‖_Lip ≤ 1} E[‖f̂ − E[f | F_m]‖_{L1(µ)}] + lim sup_{N → ∞} sup_{‖f‖_Lip ≤ 1} E[‖E[f | F_m] − f‖_{L1(µ)}].

Theorem 3.1 upper bounds the latter term by O((log m / m)^{1/d}).
The former term can be controlled by noting that for any f with ‖f‖_Lip ≤ 1,

‖f̂ − E[f | F_m]‖_{L1(µ)} ≤ ∫ | (1/|R_x|) ∑_{i : X_i ∈ R_x} f(X_i) − (1/µ(R_x)) ∫_{R_x} f(y) dµ(y) | dµ(x)
≤ ∫ (1/|R_x|) ∑_{i : X_i ∈ R_x} (1/µ(R_x)) ∫_{R_x} |f(X_i) − f(y)| dµ(y) dµ(x)
≤ ∫ diam(R_x) dµ(x)
= ∑_{R ∈ P_m} µ(R) diam(R),

where P_m denotes the finest partition given by F_m. Bounding this quantity as in the proof of Theorem 3.1, we get that the former term is also O((log m / m)^{1/d}), as claimed.

By averaging independently randomized regression trees, one may construct random forests without the need for bootstrap aggregation, optimizing the split points, or random selection of features. Figure 10 compares the performance of such random splitting random forests (with 10 trees, using asymmetric and symmetric partitions) on the California housing dataset, originally introduced in [KB97] and now available through the scikit-learn library, as the number of random splits increases. As predicted by Theorem 4.2, the symmetric partition requires vastly fewer random split points to make accurate predictions.

Figure 10: Performance of asymmetric and symmetric random splitting random forests for predicting California housing prices.

Acknowledgements. The author would like to thank Sinho Chewi, Steve Evans, Shirshendu Ganguly, Arvind Prasadan, and João Vitor Romano for many helpful comments and conversations. While writing this paper, the author was supported by a Two Sigma PhD Fellowship and a research contract with Sandia National Laboratories, a U.S. Department of Energy multimission laboratory.

References

[BCH17] Mathias Beiglböck, Alexander M. G. Cox,
and Martin Huesmann. Optimal transport and Skorokhod embedding. Inventiones Mathematicae, 208:327–400, 2017.

[Boy71] Edward S. Boylan. Equiconvergence of martingales. The Annals of Mathematical Statistics, 42(2):552–559, 1971.

[Bre01] Leo Breiman. Random forests. Machine Learning, 45:5–32, 2001.

[Dub68] Lester E. Dubins. On a theorem of Skorohod. The Annals of Mathematical Statistics, 39(6):2094–2097, 1968.

[GEW06] Pierre Geurts, Damien Ernst, and Louis Wehenkel. Extremely randomized trees. Machine Learning, 63(1):3–42, 2006.

[KB97] R. Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33(3):291–297, 1997.

[Kud74] Hirokichi Kudo. A note on the strong convergence of σ-algebras. The Annals of Probability, 2(1):76–83, 1974.

[MBNWW21] Tudor Manole, Sivaraman Balakrishnan, Jonathan Niles-Weed, and Larry Wasserman. Plugin estimation of smooth optimal transport maps. arXiv preprint arXiv:2107.12364, 2021.

[Nev72] Jacques Neveu. Note on the tightness of the metric on the set of complete sub-σ-algebras of a probability space. The Annals of Mathematical Statistics, 43(4):1369–1371, 1972.

[Obł04] Jan Obłój. The Skorokhod embedding problem and its offspring. Probability Surveys, 1:321–392, 2004.

[Rig21] Séverine Rigot. Differentiation of measures in metric spaces. In New Trends on Analysis and Geometry in Metric Spaces: Levico Terme, Italy 2017, pages 93–116. Springer, 2021.

[Rog74] Lothar Rogge. Uniform inequalities for conditional expectations. The Annals of Probability, 2(3):486–489, 1974.

[Sko61] A. V. Skorohod. Issledovaniya po teorii sluchainykh protsessov (Stokhasticheskie differentsialnye uravneniya i predelnye teoremy dlya protsessov Markova). Izdat. Kiev. Univ., Kiev, 1961.

[Sko65] A. V. Skorokhod. Studies in the Theory of Random Processes. Addison-Wesley Publishing Co., Inc., Reading, MA, 1965. Translated from the Russian by Scripta Technica, Inc.

[Vid18] Matija Vidmar.
A couple of remarks on the convergence of σ-fields on probability spaces. Statistics & Probability Letters, 134:86–92, 2018.

[VZ93] Timothy Van Zandt. The Hausdorff metric of σ-fields and the value of information. The Annals of Probability, pages 161–167, 1993.

A Proofs of Theorems 3.1 and 3.2

Theorem 3.1. Let ([0, 1]^d, B, µ) be a probability space equipped with the Borel σ-field, and let X_1, X_2, … iid ∼ µ, where µ ≪ λ and γ^{−1} < dµ/dλ < γ for some γ ≥ 1. For x = (x_1, …, x_d) ∈ [0, 1]^d, let A_x := [0, x_1] × ⋯ × [0, x_d], and define the empirical σ-fields F_n := σ(A_{X_1}, …, A_{X_n}). Then

sup_{‖f‖_Lip ≤ 1} ‖E[f | F_n] − f‖_{L1(µ)} → 0, P-a.s. and in L1(P),

where ‖f‖_Lip := sup{ |f(x) − f(y)| / |x − y| : x ≠ y } is the Lipschitz norm. Moreover,

E[ sup_{‖f‖_Lip ≤ 1} ‖E[f | F_n] − f‖_{L1(µ)} ] ≲ (log n / n)^{1/d}   ∀n ≥ 3.

The constant factor in the bound depends only on d and γ.

We first reduce the problem to the geometric problem of constructing a fine mesh partition of the support of µ.

Lemma A.1. Fix the values of X_1, X_2, …, and denote by P_n the finest partition given by the σ-field F_n (omitting any µ-null sets). Then

sup_{‖f‖_Lip ≤ 1} ‖E[f | F_n] − f‖_{L1(µ)} ≤ ∑_{A ∈ P_n} µ(A) diam(A),

where diam(A) := sup{ |x − y| : x, y ∈ A }.

Proof of Lemma A.1.

sup_{‖f‖_Lip ≤ 1} ‖E[f | F_n] − f‖_{L1(µ)}
= sup_{‖f‖_Lip ≤ 1} ∫_{[0,1]^d} | ∑_{A ∈ P_n} E[f | A] 1_A(x) − f(x) | dµ(x)
≤ sup_{‖f‖_Lip ≤ 1} ∑_{A ∈ P_n} ∫_A |E[f | A] − f(x)| dµ(x)
≤ sup_{‖f‖_Lip ≤ 1} ∑_{A ∈ P_n} (1/µ(A)) ∫_A ∫_A |f(y) − f(x)| dµ(y) dµ(x)
≤ ∑_{A ∈ P_n} µ(A) diam(A).

To bound the diameter, we use
a slightly modified version of the approach taken for the proof of Lemma 40 in [MBNWW21], which essentially uses a covering argument phrased in terms of Vapnik–Chervonenkis dimension.

Proof of Theorem 3.1. We first prove the L1(P)-convergence rate bound. Fix 0 < δ < 1, and consider a mesh dividing [0, 1]^d into cubes C of side length ε = (γ log(n/δ)/n)^{1/d}. Then, with probability ≥ 1 − δ, each cube in the mesh contains some sample point X_i with 1 ≤ i ≤ n, because

P(some cube has no samples) ≤ ∑_C P(C has no samples)
= ∑_C (1 − µ(C))^n
≤ ∑_C (1 − γ^{−1} ε^d)^n
= (1/ε)^d (1 − γ^{−1} ε^d)^n
= (n / (γ log(n/δ))) (1 − log(n/δ)/n)^n
≤ (n / (γ log(n/δ))) exp(−log(n/δ))
= δ / (γ log(n/δ))
≤ δ.

To upper bound ∑_{A ∈ P_n} µ(A) diam(A), first note that this quantity is monotonically nonincreasing in n, as splitting a set A into multiple pieces cannot increase the diameter of either piece. So it suffices to show a bound on this quantity when we throw away all sample points X_i except for one sample point in each mesh cube C. We will do so on the aforementioned probability ≥ 1 − δ event. Excepting the set L ∈ P_n containing the point (1, …, 1), the diameter of any set A ∈ P_n \ {L} must be ≤ ε√d. The diameter of the corner set L will be ≤ √d (the diameter of [0, 1]^d), but on this event, λ(L) ≤ dε. Thus, we may bound

∑_{A ∈ P_n} µ(A) diam(A) ≤ γ ∑_{A ∈ P_n} λ(A) diam(A)
= γ λ(L) diam(L) + γ ∑_{A ∈ P_n \ {L}} λ(A) diam(A)
≤ γ d^{3/2} ε + γ ε√d ∑_{A ∈ P_n \ {L}} λ(A)
≤ 2γ d^{3/2} ε.

So, if we denote K = 2γ^{1+1/d} d^{3/2}, using Lemma A.1 gives

sup_{‖f‖_Lip ≤ 1} ‖E[f | F_n] − f‖_{L1(µ)} ≤ K (log(n/δ)/n)^{1/d}

with probability ≥ 1 − δ. Writing δ = n exp(−u^d n / K^d) for u > 0, the right-hand side equals u. Thus, applying the argument over all u > 0 (that is, varying δ throughout (0, 1)), we may estimate

E[ sup_{‖f‖_Lip ≤ 1} ‖E[f | F_n] − f‖_{L1(µ)} ] = ∫_0^∞ P( sup_{‖f‖_Lip ≤ 1} ‖E[f | F_n] − f‖_{L1(µ)} > u )
Picking a cutoff parameter $t_n=K\big(\frac{2\log n}{dn}\big)^{1/d}$,
$$\le t_n+n\int_{t_n}^\infty\exp\Big(-\frac{u^dn}{K^d}\Big)\,du.$$
Making the change of variables $v=u^{d/2}$,
$$=t_n+\frac{2n}{d}\int_{t_n^{d/2}}^\infty\exp\Big(-\frac{v^2n}{K^d}\Big)v^{2/d-1}\,dv\lesssim t_n+n\int_{t_n^{d/2}}^\infty\exp\Big(-\frac{v^2n}{2K^d}\Big)v\,dv$$
$$=t_n+K^d\exp\Big(-\frac{t_n^dn}{2K^d}\Big)=K\Big(\frac{2\log n}{dn}\Big)^{1/d}+\frac{K}{n^{1/d}}\lesssim\Big(\frac{\log n}{n}\Big)^{1/d}.$$

The $P$-a.s. convergence follows from the $L^1(P)$ convergence and the fact that $H_n:=\sum_{A\in\mathcal{P}_n}\mu(A)\,\mathrm{diam}(A)$ is nonincreasing in $n$ for each $\omega\in\Omega$. Indeed, if we let $E=\{\omega\in\Omega:\limsup_{n\to\infty}H_n(\omega)>0\}$, then $\limsup_{n\to\infty}E[H_n1_E]\le\limsup_{n\to\infty}E[H_n]=0$. But $\limsup_{n\to\infty}E[H_n1_E]=E[(\limsup_{n\to\infty}H_n)1_E]>0$ if $P(E)>0$, so we must have $P(E)=0$.

Theorem 3.2 (Faster uniform convergence with symmetrized $A_x$). Let $([0,1]^d,\mathcal{B},\mu)$ be a probability space equipped with the Borel $\sigma$-field, and let $X_1,X_2,\dots\overset{\text{iid}}{\sim}\mu$, where $\mu\ll\lambda$ and $\gamma^{-1}<\frac{d\mu}{d\lambda}<\gamma$ for some $\gamma\ge 1$. Define the empirical $\sigma$-fields $\widetilde{\mathcal{F}}_n:=\sigma(\{x:x_i\le X_{j,i}\}:1\le i\le d,\,1\le j\le n)$. Then
$$\sup_{\|f\|_{\mathrm{Lip}}\le 1}\big\|E[f\mid\widetilde{\mathcal{F}}_n]-f\big\|_{L^1(\mu)}\xrightarrow{\;P\text{-a.s.},\,L^1(P)\;}0,$$
where $\|f\|_{\mathrm{Lip}}$ is the Lipschitz norm as above. Moreover,
$$E\Big[\sup_{\|f\|_{\mathrm{Lip}}\le 1}\big\|E[f\mid\widetilde{\mathcal{F}}_n]-f\big\|_{L^1(\mu)}\Big]\lesssim\frac{\sqrt{d}}{n}\qquad\forall\,n\ge 1.$$
The constant factor in the bound depends only on $\gamma$.

Proof. As before, it suffices to prove the expectation bound. Let $\mathcal{P}_n$ denote the finest partition given by the $\sigma$-field $\widetilde{\mathcal{F}}_n$ (omitting any $\mu$-null sets). We first bound this by the average distance between a point in $[0,1]^d$ and the upper right corner of the partition box it lies in.
Then
$$\sup_{\|f\|_{\mathrm{Lip}}\le 1}\big\|E[f\mid\widetilde{\mathcal{F}}_n]-f\big\|_{L^1(\mu)}=\sup_{\|f\|_{\mathrm{Lip}}\le 1}\int_{[0,1]^d}\Big|\sum_{A\in\mathcal{P}_n}E[f\mid A]\,1_A(x)-f(x)\Big|\,d\mu(x)$$
$$\le\sup_{\|f\|_{\mathrm{Lip}}\le 1}\sum_{A\in\mathcal{P}_n}\int_A\big|E[f\mid A]-f(x)\big|\,d\mu(x)\le\sup_{\|f\|_{\mathrm{Lip}}\le 1}\sum_{A\in\mathcal{P}_n}\frac{1}{\mu(A)}\int_A\int_A|f(y)-f(x)|\,d\mu(y)\,d\mu(x)$$
$$\le\gamma^3\sum_{A\in\mathcal{P}_n}\frac{1}{\lambda(A)}\int_A\int_A\|y-x\|_2\,dy\,dx\le\gamma^3\sum_{A\in\mathcal{P}_n}\frac{1}{\lambda(A)}\int_A\int_A\|y-u^A\|_2+\|u^A-x\|_2\,dy\,dx,$$
where $u^A$ is the upper corner of the set $A$: $u^A_i=\min\{X_{j,i}:X_{j,i}\ge x_i\ \forall x\in A\}$ for $1\le i\le d$ (and $u^A_i=1$ if no such points exist). Continuing,
$$=2\gamma^3\sum_{A\in\mathcal{P}_n}\int_A\|y-u^A\|_2\,dy=2\gamma^3\int_{[0,1]^d}\|y-u^{A_y}\|_2\,dy,$$
where $A_y$ denotes the set $A\in\mathcal{P}_n$ containing $y$. Taking expectations and applying Cauchy-Schwarz, we get
$$E\Big[\sup_{\|f\|_{\mathrm{Lip}}\le 1}\big\|E[f\mid\widetilde{\mathcal{F}}_n]-f\big\|_{L^1(\mu)}\Big]\le 2\gamma^3E\Big[\int_{[0,1]^d}\|y-u^{A_y}\|_2\,dy\Big]\le 2\gamma^3\sqrt{E\Big[\int_{[0,1]^d}\|y-u^{A_y}\|_2^2\,dy\Big]}=2\gamma^3\sqrt{d}\sqrt{E\Big[\int_{[0,1]^d}(y_1-u^{A_y}_1)^2\,dy\Big]}.$$
Since every $A\in\mathcal{P}_n$ is an axis-parallel box, $u^{A_y}_1=\min\{X_{j,1}:X_{j,1}\ge y_1\}$; so the integral depends only on the first coordinate. Denoting the order statistics of the values $X_{1,1},\dots,X_{n,1}$ as $0=Y_0<Y_1<\cdots<Y_n<Y_{n+1}=1$, this is
$$=2\gamma^3\sqrt{d}\sqrt{\sum_{k=0}^nE\int_{Y_k}^{Y_{k+1}}(y-Y_{k+1})^2\,dy}=2\gamma^3\sqrt{d}\sqrt{\sum_{k=0}^nE\frac{(Y_{k+1}-Y_k)^3}{3}}.$$
The distances between successive order statistics of uniform random variables on $[0,1]$ have $\mathrm{Beta}(1,n)$ distribution. So this is
$$\lesssim\sqrt{d}\sqrt{(n+1)\frac{1}{(n+1)(n+2)(n+3)}}\le\frac{\sqrt{d}}{n}.$$
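The Beta$(1,n)$ moment driving the final bound can be verified exactly: for $B\sim\mathrm{Beta}(1,n)$ one has $E[B^3]=\frac{6}{(n+1)(n+2)(n+3)}$. A short Python check with exact rational arithmetic (our sketch, not part of the paper):

```python
from fractions import Fraction
from math import comb

def third_moment_beta_1_n(n):
    # E[B^3] for B ~ Beta(1, n), whose density is n(1 - x)^(n - 1) on [0, 1]:
    # expand (1 - x)^(n - 1) binomially and integrate x^3 term by term.
    return sum(Fraction(n * comb(n - 1, k) * (-1) ** k, 4 + k)
               for k in range(n))

for n in range(1, 30):
    assert third_moment_beta_1_n(n) == Fraction(6, (n + 1) * (n + 2) * (n + 3))
```

Summing this moment over the $n+1$ spacings and taking the square root reproduces the $\sqrt{d}/n$ rate displayed above.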
arXiv:2505.08045v1 [math.ST] 12 May 2025

Measures of association for approximating copulas

Marcus Rockel

May 23, 2025

Department of Quantitative Finance, Institute for Economics, University of Freiburg, Rempartstr. 16, 79098 Freiburg, Germany, marcus.rockel@finance.uni-freiburg.de

Abstract

This paper studies closed-form expressions for multiple association measures of copulas commonly used for approximation purposes, including Bernstein, shuffle-of-min, checkerboard and check-min copulas. In particular, closed-form expressions are provided for the recently popularized Chatterjee's xi (also known as Chatterjee's rank correlation), which quantifies the dependence between two random variables. Given any bivariate copula $C$, we show that the closed-form formula for Chatterjee's xi of an approximating checkerboard copula serves as a lower bound that converges to the true value of $\xi(C)$ as one lets the grid size $n\to\infty$.

Keywords: Bernstein copula, checkerboard copula, check-min, check-w, Chatterjee's xi, Kendall's tau, Spearman's rho, shuffle-of-min, tail dependence coefficients

1 Introduction

Measures of association, most prominently Spearman's rho and Kendall's tau, are fundamental tools for studying statistical dependence. [6] and [5] popularized a dependence measure of one random variable on another, which we shall refer to as Chatterjee's xi. Closed-form expressions for these statistics exist, however, only for a handful of copula families (see [4, Table 6]); in general, one must resort to numerical or sampling-based procedures. Near the boundaries of the unit square, such procedures often become unstable because they require evaluating (conditional) distribution functions where numerical precision can quickly become poor. This motivates the search for analytically convenient copula approximations that allow reliable and efficient computation of dependence measures. We therefore study measures of association for several popular approximation families.
In Section 2, we introduce the basic concepts and notation used in this paper, including the specific types of copulas that are of interest and the considered measures of association. Bernstein and checkerboard constructions, in particular, have a rich history and broad practical use, see, e.g., [15, 16, 11, 22, 7, 14, 24]. Closed-form formulas for Spearman's rho and Kendall's tau are already known for these families, the most elegant arguably appearing in [11]. In Section 3, we extend this catalogue by deriving explicit formulas for Chatterjee's xi not only for Bernstein and checkerboard copulas, but also for the check-min and check-w variants, whose grid cells exhibit perfect dependence. We additionally collect complete closed-form expressions for Spearman's rho, Kendall's tau, and the tail-dependence coefficients, thereby unifying and extending earlier results. In Section 4, we focus on bounding Chatterjee's xi via checkerboard approximations. Combining Proposition 3.3 with Theorem 4.1, we establish the inequality
$$\frac{6m}{n}\operatorname{tr}\big(\Delta^\top\Delta M_\xi\big)-2\le\xi(C),\qquad(1)$$
where $\Delta$ is the $m\times n$ matrix of copula masses on an equi-spaced grid and $M_\xi$ is defined by $M_\xi=TT^\top+T^\top+\frac{1}{3}I_n$ with $T_{i,j}=1_{\{i<j\}}$. Replacing $C$ by its associated checkerboard copula thus furnishes a practical estimator of $\xi$ not only from an analytical copula, but also from empirical data. In Theorem 4.3 we prove that the resulting sample-based estimator can be computed in $O(n\log n)$ time and converges almost surely to the true value of $\xi(C)$. Checkerboard estimators for dependence measures have been recently investigated in a broader setting in [3, Section 4], but the explicit formulas derived here allow a finer-grained analysis and faster finite-sample performance. We
conclude with an empirical comparison between our estimator and the classical one of Azadkia and Chatterjee in [5].

2 Preliminaries

In this section, we introduce the basic concepts and notation required to formulate the main results of this paper. First, we introduce the fundamental concept of a copula, before focusing on the specific types of copulas that are of interest in this paper. Finally, we introduce the studied measures of association.

2.1 Copulas

A bivariate copula is a function $C:[0,1]^2\to[0,1]$ that is grounded (i.e., $C(u,0)=C(0,v)=0$ for all $u,v\in[0,1]$), 2-increasing (meaning that for every $0\le u_1\le v_1\le 1$ and $0\le u_2\le v_2\le 1$ it holds that $C(v_1,v_2)-C(u_1,v_2)-C(v_1,u_2)+C(u_1,u_2)\ge 0$), and has uniform marginals (so that $C(u,1)=u$ and $C(1,v)=v$ for all $u,v\in[0,1]$). Sklar's theorem (see, e.g., [20, Theorem 2.3.3]) states that for any bivariate distribution function $F$ with univariate marginals $F_1$ and $F_2$, there exists a copula $C$ such that
$$F(x_1,x_2)=C(F_1(x_1),F_2(x_2))\quad\text{for all }(x_1,x_2)\in\mathbb{R}^2,\qquad(2)$$
and $C$ is uniquely determined on $\operatorname{Ran}(F_1)\times\operatorname{Ran}(F_2)$. Conversely, if $C$ is any copula and $F_1,F_2$ are univariate distribution functions, then the function defined by (2) is a bivariate distribution function.

Denote by $\mathcal{C}_2$ the collection of all bivariate copulas. Important examples include the independence copula $\Pi(u,v)=uv$, the upper Fréchet bound $M(u,v)=\min\{u,v\}$ and the lower Fréchet bound $W(u,v)=\max\{u+v-1,0\}$. Classically, if $(X,Y)\sim C$, the upper and lower Fréchet bounds represent the extreme cases of dependence with perfect co- and countermonotonicity, respectively, whilst the independence copula represents the case of no dependence at all between $X$ and $Y$. Furthermore, for any $C\in\mathcal{C}_2$, it holds that $W\le C\le M$ pointwise on $[0,1]^2$, see standard references such as [20] or [10].

2.2 Bernstein copulas

Bernstein copulas were introduced by Sancetta and Satchell [22] as a flexible, computable tool for approximating dependence structures.
Let $C$ be a given bivariate copula and let $D$ be an $m\times n$-matrix defined by
$$D_{i,j}=C\Big(\frac{i}{m},\frac{j}{n}\Big)\qquad(3)$$
for $1\le i\le m$ and $1\le j\le n$. We refer to $D$ as the $m\times n$-grid copula matrix associated with $C$, and generally call $D$ a grid copula matrix if there exists a copula such that (3) holds for all entries of the matrix. Next, let $B_{i,m}(u)$ denote the Bernstein basis polynomial of degree $m$, defined as
$$B_{i,m}(u)=\binom{m}{i}u^i(1-u)^{m-i},\qquad 0\le i\le m,\;u\in[0,1].$$
Then, the Bernstein copula associated with the grid copula matrix $D$ is defined as
$$C^D_B(u,v)=\sum_{i=1}^m\sum_{j=1}^nD_{i,j}B_{i,m}(u)B_{j,n}(v),\qquad(u,v)\in[0,1]^2.\qquad(4)$$
This function $C^D_B$ is indeed a copula, as shown in [22, Theorem 1] (see also [7, Theorem 2.2]).

A key feature of the Bernstein copula $C^D_B$ is that it is a polynomial in both $u$ (of degree $m$) and $v$ (of degree $n$), which ensures that the resulting copula is smooth. The parameters $m$ and $n$ determine the degree of the polynomial and thus control the trade-off between the smoothness of the approximation and its ability to capture fine details of the underlying dependence structure represented by $D$. If one considers a sequence of grid copula matrices $D_{m,n}$ associated with $C$ and lets $m\wedge n\to\infty$, the Bernstein copula $C^{D_{m,n}}_B$ converges uniformly to $C$, see [7, Corollary 3.1].

2.3 Shuffle-of-min copulas

The shuffle-of-min construction, introduced by Mikusiński, Sherwood and Taylor in [18] (see also [20]), produces a rich family of singular copulas that are dense in $\mathcal{C}_2$.
Fix an integer $n\ge 1$ and partition the unit interval into equal sub-intervals $I_k=[\frac{k-1}{n},\frac{k}{n}]$ for $k=1,\dots,n$. Denote by $S_n$ the set of all permutations of $\{1,\dots,n\}$ and let $\pi\in S_n$ be a permutation. The straight shuffle-of-min copula supported by $\pi$, denoted $C_\pi$, redistributes the probability mass of the comonotonic copula $M(u,v)=\min\{u,v\}$ equally along the $n$ diagonal line segments
$$\Big\{(u,v)\in I_k\times I_{\pi(k)}:v=u-\frac{k-\pi(k)}{n}\Big\},\qquad k=1,\dots,n,$$
so that each segment carries mass $1/n$. Equivalently, $C_\pi$ is the distribution of $(U,V)$ where $U\sim U(0,1)$ and, conditional on $U\in I_k$, one sets $V=U-\frac{k-\pi(k)}{n}$. We call $n$ the order of the shuffle and $\pi$ its shuffle permutation. More general shuffles allow unequal strip widths $p_1,\dots,p_n>0$ with $\sum p_k=1$ and/or segment reflections, but in this paper we restrict to equal-width straight shuffles, because they are already dense in $\mathcal{C}_2$, generate the entire attainable $(\rho,\tau)$-region and admit closed-form formulas for the concordance measures considered below.

2.4 Checkerboard, check-min and check-w copulas

Let $\Delta$ be an $m\times n$-matrix. We say that $\Delta$ is a checkerboard matrix if all entries are nonnegative and for all $1\le i\le m$ and $1\le j\le n$ it holds that
$$\sum_{k=1}^m\Delta_{k,j}=\frac{1}{n},\qquad\sum_{l=1}^n\Delta_{i,l}=\frac{1}{m}.$$
Next, divide the interval $[0,1]$ into $m$ and $n$ equal parts, respectively, and let $I_{i,j}:=\big[\frac{i-1}{m},\frac{i}{m}\big)\times\big[\frac{j-1}{n},\frac{j}{n}\big)$. If $C$ is a copula and
$$P[(X,Y)\in I_{i,j}]=\Delta_{i,j}\qquad(5)$$
for a random vector $(X,Y)\sim C$, we say that $\Delta$ is a checkerboard matrix associated with $C$. If $C$ has a constant density within each rectangle $I_{i,j}$, then $C$ is called a checkerboard copula and we write $C=C^\Delta_\Pi$. The copula is explicitly given by
$$C^\Delta_\Pi(x,y)=mn\sum_{i=1}^m\sum_{j=1}^n\Delta_{i,j}\int_0^x\int_0^y1_{I_{i,j}}(u,v)\,dv\,du\qquad(6)$$
where $1_{I_{i,j}}$ is the indicator function of the rectangle $I_{i,j}$. $C^\Delta_\Pi$ is indeed a copula for any checkerboard matrix $\Delta$, see [14, Section 2] or, in the square case, [16, Theorem 2.2].
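As a quick illustration (our sketch, not part of the paper), the defining integral (6) can be evaluated rectangle by rectangle through overlap lengths, and the resulting function can be checked against the copula axioms of Section 2.1. The $2\times 2$ matrix below is a hypothetical example of a checkerboard matrix:

```python
def checkerboard_copula(delta):
    """Return C(x, y) for the checkerboard copula of an m-by-n
    checkerboard matrix `delta`, evaluating the integral in (6)
    cell by cell via overlap lengths."""
    m, n = len(delta), len(delta[0])

    def overlap(a, b, x):
        # length of [0, x] intersected with [a, b]
        return max(0.0, min(b, x) - a)

    def C(x, y):
        total = 0.0
        for i in range(m):
            for j in range(n):
                lx = overlap(i / m, (i + 1) / m, x)
                ly = overlap(j / n, (j + 1) / n, y)
                total += delta[i][j] * lx * ly
        return m * n * total

    return C

# hypothetical checkerboard matrix: rows and columns each sum to 1/2
delta = [[3 / 8, 1 / 8],
         [1 / 8, 3 / 8]]
C = checkerboard_copula(delta)

for k in range(11):
    t = k / 10
    assert abs(C(t, 1.0) - t) < 1e-12            # uniform marginals
    assert abs(C(1.0, t) - t) < 1e-12
    assert max(2 * t - 1, 0.0) - 1e-12 <= C(t, t) <= t + 1e-12  # W <= C <= M
```

The marginal checks succeed precisely because the rows and columns of the matrix sum to $1/m$ and $1/n$, which is the content of the checkerboard condition above.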
Furthermore, as a simple consequence of the density being constant on each $I_{i,j}$, it holds that
$$P[(X,Y)\le(x,y)\mid(X,Y)\in I_{i,j}]=P[X\le x\mid(X,Y)\in I_{i,j}]\,P[Y\le y\mid(X,Y)\in I_{i,j}].\qquad(7)$$
From (7), one obtains the following expression for the copula $C^\Delta_\Pi$, which is also covered in [10, Theorem 4.1.3]: for $(u,v)\in I_{i,j}$,
$$C^\Delta_\Pi(u,v)=\sum_{k=1}^{i-1}\sum_{l=1}^{j-1}\Delta_{k,l}+\sum_{k=1}^{i-1}\Delta_{k,j}(nv-j+1)+\sum_{l=1}^{j-1}\Delta_{i,l}(mu-i+1)+\Delta_{i,j}(nv-j+1)(mu-i+1).\qquad(8)$$
For a given copula $C$, considering associated $n\times n$-checkerboard matrices $\Delta_n$ yields desirable convergence properties $C^{\Delta_n}_\Pi\to C$ as $n\to\infty$, see, e.g., [16, Corollary 3.2].

Next to independence within rectangles as realized through $C^\Delta_\Pi$, it is also reasonable to consider for given $\Delta$ a perfect positive dependence within each rectangle, that is, for all $1\le i\le m$, $1\le j\le n$ it holds that, conditionally on $(X,Y)\in I_{i,j}$,
$$X=\frac{nY-j+i}{m}\qquad(9)$$
almost surely. If there exist a checkerboard matrix $\Delta$ and a random vector $(X,Y)\sim C$ fulfilling (5) and (9), $C$ is called a check-min copula, and we write $C=C^\Delta_M$. Check-min approximations were considered in multiple applications, see, e.g., [19, 27, 9]. In analogy to (7), one can equivalently write (9) as
$$P[(X,Y)\le(x,y)\mid(X,Y)\in I_{i,j}]=P\Big[Y\le\frac{mx-i+j}{n}\wedge y\,\Big|\,(X,Y)\in I_{i,j}\Big]\qquad(10)$$
for all $(x,y)\in[0,1]^2$, and a case separation shows that, for $(u,v)\in I_{i,j}$,
$$C^\Delta_M(u,v)=\sum_{k=1}^{i-1}\sum_{l=1}^{j-1}\Delta_{k,l}+\sum_{k=1}^{i-1}\Delta_{k,j}(nv-j+1)+\sum_{l=1}^{j-1}\Delta_{i,l}(mu-i+1)+\Delta_{i,j}\min\{nv-j+1,\,mu-i+1\}.\qquad(11)$$
Similar convergence properties as for the checkerboard copula hold for check-min copulas, see [19]. Lastly, consider also the check-w copula $C^\Delta_W$, which represents
perfect negative dependence within squares, i.e. $\Delta$ is associated with $C^\Delta_W$ and, if $(X,Y)\sim C^\Delta_W$ for some random vector $(X,Y)$, then for all $1\le i\le m$, $1\le j\le n$ it holds that, conditionally on $(X,Y)\in I_{i,j}$,
$$X=\frac{i-1+j-nY}{m}\qquad(12)$$
almost surely. In particular, in analogy to (10), one can write (12) equivalently as
$$P[X\le x,\,Y\le y\mid(X,Y)\in I_{i,j}]=P\Big[\frac{j-1+i-mx}{n}\le Y\le y\,\Big|\,(X,Y)\in I_{i,j}\Big]$$
for all $(x,y)\in[0,1]^2$, and another case separation shows that, for $(u,v)\in I_{i,j}$,
$$C^\Delta_W(u,v)=\sum_{k=1}^{i-1}\sum_{l=1}^{j-1}\Delta_{k,l}+\sum_{k=1}^{i-1}\Delta_{k,j}(nv-j+1)+\sum_{l=1}^{j-1}\Delta_{i,l}(mu-i+1)+\Delta_{i,j}\max\{nv-j+mu-i+1,\,0\}.\qquad(13)$$

One may also consider a generalization of the check-min and check-w copulas. For a copula $C$, we say that $C$ is an $m\times n$-perfect dependence copula if for some $(X,Y)\sim C$ it holds that, conditionally on $(X,Y)\in I_{i,j}$,
$$Y=\frac{f_{i,j}(mX-i+1)+j-1}{n}\qquad(14)$$
almost surely for some Lebesgue measure preserving function $f_{i,j}:[0,1]\to[0,1]$ and all $1\le i\le m$, $1\le j\le n$. Here, being Lebesgue measure preserving means that
$$\int_0^1g(f_{i,j}(x))\,dx=\int_0^1g(y)\,dy$$
for all bounded, measurable functions $g:[0,1]\to\mathbb{R}$. We let $\mathcal{C}^\Delta_{\mathrm{pd}}$ denote the set of all $m\times n$-perfect dependence copulas associated with an $m\times n$-checkerboard matrix $\Delta$. When choosing $f_{i,j}(x)=x$ or $f_{i,j}(x)=1-x$ for all $1\le i\le m$, $1\le j\le n$, one obtains back the formulas (9) and (12), so that check-min and check-w copulas are special cases of perfect dependence copulas.

2.5 Measures of association

Two classical measures of association are Spearman's rho and Kendall's tau, which provide alternatives to the Pearson correlation coefficient that do not depend on the marginal distributions of the random variables. Both of them can be expressed as an integral over the unit square $[0,1]^2$.
That is, for a bivariate copula $C$, one can write Spearman's rho as
$$\rho_S(C)=12\int_{[0,1]^2}C(u,v)\,d\lambda^2(u,v)-3,$$
and Kendall's tau as
$$\tau(C)=1-4\int_{[0,1]^2}\partial_1C(u,v)\,\partial_2C(u,v)\,d\lambda^2(u,v),$$
see, e.g., [10, Definitions 2.4.5 and 2.4.6]. An equivalent (and classical) interpretation of Kendall's $\tau$ is in terms of concordant and discordant pairs of observations: if $(U_1,V_1)$ and $(U_2,V_2)$ are two independent draws from the copula $C$, then
$$\tau(C)=P[(U_1-U_2)(V_1-V_2)>0]-P[(U_1-U_2)(V_1-V_2)<0],\qquad(15)$$
i.e. the probability of concordance minus the probability of discordance, see, e.g., [20, Section 5.1.1]. This probabilistic view is particularly handy for copulas supported on discrete sets such as shuffle-of-min constructions (see Section 3.2).

Next to these two measures, which take values in $[-1,1]$, it is also interesting to measure the strength of dependence between two random variables $X$ and $Y$. Chatterjee's xi is one way to do this, yielding values in $[0,1]$, where the value $0$ is consistent with independence between $X$ and $Y$, and $1$ with perfect dependence, i.e. $Y=f(X)$ for some measurable function $f$, see [1, Theorem 2.1]. Like Spearman's rho and Kendall's tau, Chatterjee's xi can be expressed as an integral. For a bivariate copula $C$, it is
$$\xi(C)=6\int_{[0,1]^2}(\partial_1C(u,v))^2\,d\lambda^2(u,v)-2,$$
compare [8] and [6]. For checkerboard, check-min and check-w copulas, the above integral formulas for Kendall's tau, Spearman's rho and Chatterjee's xi can be evaluated explicitly in terms of the underlying checkerboard matrix.

Further classical measures of association for bivariate copulas are the tail dependence coefficients, see, e.g., [13, 20]. For a given bivariate copula $C$, the lower tail dependence coefficient is defined by
$$\lambda_L(C)=\lim_{t\to 0^+}\frac{C(t,t)}{t},\qquad(16)$$
and the upper
tail dependence coefficient by
$$\lambda_U(C)=2-\lim_{t\to 1^-}\frac{1-C(t,t)}{1-t}.\qquad(17)$$

3 Explicit measures of association for approximating copulas

In this section, we formulate the explicit expressions for Spearman's rho, Kendall's tau, Chatterjee's xi and the tail dependence coefficients for $m\times n$-checkerboard matrices associated with Bernstein, checkerboard, check-min and check-w copulas.

3.1 Explicit measures of association for Bernstein copulas

Let $\Lambda$ be the $m\times n$-matrix with constant entries
$$\Lambda_{i,j}=\frac{1}{(m+1)(n+1)},\qquad 1\le i\le m,\;1\le j\le n.$$
This matrix will appear in Spearman's rho for Bernstein copulas. Let $\Gamma^{(m)}$ be the $m\times m$-matrix with entries
$$\Gamma^{(m)}_{i,j}=\frac{(i-j)\binom{m}{i}\binom{m}{j}}{(2m-i-j)\binom{2m-1}{i+j-1}},\qquad 1\le i,j\le m,$$
with the convention that $0/0=1$. Define $\Gamma^{(n)}$ analogously (of size $n\times n$). These matrices enter into Kendall's tau. For Chatterjee's xi, we introduce two more matrices to handle integrals of Bernstein polynomials and their derivatives. Let $\Omega$ be the $m\times m$-matrix whose $(i,r)$-entry is
$$\Omega_{i,r}=\begin{cases}\dfrac{\binom{m}{i}\binom{m}{r}}{(2m-3)\binom{2m-4}{i+r-2}}\bigg[ir-\dfrac{2m(m-1)\binom{i+r}{2}}{(2m-1)(2m-2)}\bigg],&\text{if }1\le i,r<m,\\[2ex]\dfrac{m(m-1)(i-m)\binom{m}{i}}{(2m-1)(2m-2)\binom{2m-3}{m+i-2}},&\text{if }1\le i<m,\;r=m,\\[2ex]\dfrac{m(m-1)(r-m)\binom{m}{r}}{(2m-1)(2m-2)\binom{2m-3}{m+r-2}},&\text{if }i=m,\;1\le r<m,\\[2ex]\dfrac{m^2}{2m-1},&\text{if }i=m,\;r=m,\end{cases}$$
and let $\Psi$ be the $n\times n$-matrix whose $(j,s)$-entry is
$$\Psi_{j,s}:=\frac{\binom{n}{j}\binom{n}{s}}{(2n+1)\binom{2n}{j+s}}.$$
The above matrices give exact formulas for Spearman's rho, Kendall's tau and Chatterjee's xi for arbitrary Bernstein copulas. These formulas are the content of the following proposition.
Proposition 3.1 (Explicit measures of association for Bernstein copulas). Let $C=C^D_B$ be the Bernstein copula associated with the cumulated $m\times n$-checkerboard matrix $D$. Then:
$$\rho_S(C^D_B)=12\operatorname{tr}\big(\Lambda^\top D\big)-3,\qquad\tau(C^D_B)=1-\operatorname{tr}\big(\Gamma^{(m)}D\,\Gamma^{(n)}D^\top\big),\qquad\xi(C^D_B)=6\operatorname{tr}\big(\Omega D\Psi D^\top\big)-2.$$
Furthermore, the tail dependence coefficients are given by $\lambda_L(C^D_B)=\lambda_U(C^D_B)=0$.

In the case of $m=n$, the above formulas for Spearman's rho and Kendall's tau can be found in [11, Theorems 9 and 10], and the rectangular case is a direct extension. The derivation of the formula for Chatterjee's xi in Proposition 3.1 is given in the appendix from page 11 onwards.

3.2 Explicit measures of association for shuffle-of-min copulas

Let $C_\pi$ be the order-$n$ straight shuffle-of-min copula determined by a permutation $\pi\in S_n$ (equal strip width $1/n$ and no reversals). Denote
$$N_{\mathrm{inv}}(\pi)=\#\{(i,j):i<j,\,\pi(i)>\pi(j)\},\qquad d_i=\pi(i)-i.$$
The measures of association introduced in Section 2.5 admit closed algebraic forms for $C_\pi$ that depend only on these permutation statistics.

Proposition 3.2 (Measures of association for a straight shuffle-of-min copula). For the equal-width, straight shuffle-of-min copula $C_\pi$ of order $n$, we have
$$\tau(C_\pi)=1-\frac{4N_{\mathrm{inv}}(\pi)}{n^2},\qquad\rho_S(C_\pi)=1-\frac{6\sum_{i=1}^nd_i^2}{n^3},\qquad\xi(C_\pi)=1.$$
Furthermore, $\lambda_L(C_\pi)=1_{\{\pi(1)=1\}}$ and $\lambda_U(C_\pi)=1_{\{\pi(n)=n\}}$.

A derivation of the formulas in Proposition 3.2 is given in the appendix from page 15 onwards. Note that similar formulas for Spearman's rho and Kendall's tau have been observed in [23, Lemma 1] and in the recent [25, Lemma 3.2], with the latter covering the Kendall's tau formula given in the proposition above in the case of symmetric permutations. Furthermore, note that the identity for Chatterjee's xi is a direct consequence of the fact that shuffle-of-min copulas are perfect dependence copulas (compare, e.g., [2, Example 1.1]), and the tail dependence coefficients are elementary.

3.3 Explicit measures of association for
checkerboard-type copulas

To give concise expressions, we make use of the following matrices. First, let $\Delta=(\Delta_{i,j})_{1\le i\le m,\,1\le j\le n}$ be an $m\times n$-checkerboard matrix and denote by $\Delta^\top$ its transpose. Next, define the $m\times n$-matrix $\Omega$ by
$$\Omega_{i,j}:=\frac{(2m-2i+1)(2n-2j+1)}{mn}$$
for $1\le i\le m$, $1\le j\le n$. Also, let $\Gamma^{(m)}$ be the $m\times m$-matrix with entries
$$\Gamma^{(m)}_{i,j}=\begin{cases}2,&\text{if }i>j,\\1,&\text{if }i=j,\\0,&\text{if }i<j,\end{cases}\qquad(1\le i,j\le m),$$
and let $\Gamma^{(n)}$ be the analogous $n\times n$-matrix. Lastly, let $T$ be the strict upper-triangular $n\times n$-matrix
$$T_{i,j}=\begin{cases}1,&\text{if }i<j,\\0,&\text{otherwise},\end{cases}\qquad(1\le i,j\le n),$$
and let $M_\xi$ be the $n\times n$-matrix given by $M_\xi=TT^\top+T^\top+\frac{1}{3}I_n$.

Proposition 3.3 (Explicit measures of association for checkerboard copulas). Let $C_\Pi$, $C_M$ and $C_W$ be bivariate checkerboard, check-min and check-w copulas associated with an $m\times n$-checkerboard matrix $\Delta$. Then, the measures of association can be expressed as follows:

(i) Spearman's rho:
$$\rho_S(C_\Pi)=3\operatorname{tr}\big(\Omega^\top\Delta\big)-3,\qquad\rho_S(C_M)=\rho_S(C_\Pi)+\frac{1}{mn},\qquad\rho_S(C_W)=\rho_S(C_\Pi)-\frac{1}{mn}.$$

(ii) Kendall's tau:
$$\tau(C_\Pi)=1-\operatorname{tr}\big(\Gamma^{(m)}\Delta\,\Gamma^{(n)}\Delta^\top\big),\qquad\tau(C_M)=\tau(C_\Pi)+\operatorname{tr}(\Delta^\top\Delta),\qquad\tau(C_W)=\tau(C_\Pi)-\operatorname{tr}(\Delta^\top\Delta).$$

(iii) Chatterjee's xi:
$$\xi(C_\Pi)=\frac{6m}{n}\operatorname{tr}\big(\Delta^\top\Delta M_\xi\big)-2,\qquad\xi(C_{\mathrm{pd}})=\xi(C_\Pi)+\frac{m\operatorname{tr}(\Delta^\top\Delta)}{n}$$
for all $C_{\mathrm{pd}}\in\mathcal{C}^\Delta_{\mathrm{pd}}$, and in particular for $C_M$ and $C_W$.

Furthermore, $\lambda_L(C_\Pi)=\lambda_U(C_\Pi)=\lambda_L(C_W)=\lambda_U(C_W)=0$, $\lambda_L(C_M)=\Delta_{1,1}(m\wedge n)$ and $\lambda_U(C_M)=\Delta_{m,n}(m\wedge n)$.

In the case of $n\times n$-checkerboard copulas, the above formulas for Spearman's rho and Kendall's tau can be found in [11, Theorems 15 and 16] (see also [24, Theorems 1 and 2] and [14, Formula (2)]). We are not aware of references for the other cases.

Corollary 3.4. Let $\Delta$ be an $m\times n$-checkerboard matrix and let $C^\Delta_{\mathrm{pd}}\in\mathcal{C}^\Delta_{\mathrm{pd}}$. Then, it holds that
$$\big|\xi(C^\Delta_{\mathrm{pd}})-\xi(C^\Delta_\Pi)\big|\le\begin{cases}\frac{m}{n^2},&\text{if }m\le n,\\[0.5ex]\frac{1}{n},&\text{if }m>n.\end{cases}$$

The proofs for Proposition 3.3 and Corollary 3.4 are given in the appendix from p. 16 onwards.
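The trace formulas of Proposition 3.3 translate directly into a few lines of linear algebra. Below is a small Python sketch (our illustration, not the author's code) computing $\rho_S$, $\tau$ and $\xi$ for the checkerboard copula of a checkerboard matrix $\Delta$; for the uniform matrix $\Delta_{i,j}=\frac{1}{mn}$, whose checkerboard copula is the independence copula, all three measures vanish:

```python
import numpy as np

def checkerboard_measures(delta):
    """rho_S, tau and xi of the checkerboard copula C_Pi of an
    m-by-n checkerboard matrix `delta`, via Proposition 3.3."""
    delta = np.asarray(delta, dtype=float)
    m, n = delta.shape
    i = np.arange(1, m + 1)[:, None]
    j = np.arange(1, n + 1)[None, :]
    omega = (2 * m - 2 * i + 1) * (2 * n - 2 * j + 1) / (m * n)
    gamma_m = 2 * np.tril(np.ones((m, m)), -1) + np.eye(m)
    gamma_n = 2 * np.tril(np.ones((n, n)), -1) + np.eye(n)
    T = np.triu(np.ones((n, n)), 1)
    M_xi = T @ T.T + T.T + np.eye(n) / 3
    rho = 3 * np.trace(omega.T @ delta) - 3
    tau = 1 - np.trace(gamma_m @ delta @ gamma_n @ delta.T)
    xi = 6 * m / n * np.trace(delta.T @ delta @ M_xi) - 2
    return rho, tau, xi

m, n = 4, 4
rho, tau, xi = checkerboard_measures(np.full((m, n), 1 / (m * n)))
assert max(abs(rho), abs(tau), abs(xi)) < 1e-12
```

For a perfect-dependence copula on the same matrix, part (iii) adds $\frac{m}{n}\operatorname{tr}(\Delta^\top\Delta)$ to the returned $\xi$.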
4 Checkerboard estimates for Chatterjee's xi

In this section, we first discuss in Subsection 4.1 how the checkerboard and check-min formulas relate to general Chatterjee's xi values, and then analyse their performance as estimates for Chatterjee's xi from sampled data in Subsection 4.2.

4.1 Checkerboard bound for Chatterjee's xi

The expressions of Proposition 3.3 can be used to calculate the measures of association for a given checkerboard copula in a straightforward and efficient way, without the need for numerical integration or estimates from sampled data. The next theorem shows that the checkerboard formula in Proposition 3.3 (iii) also serves as a lower bound of Chatterjee's xi for an arbitrary given bivariate copula $C$, and hence establishes formula (1).

Theorem 4.1 (Checkerboard bound for $\xi$). If $C$ is a bivariate copula associated with a checkerboard matrix $\Delta$, then $\xi(C^\Delta_\Pi)\le\xi(C)$.

Proof. Consider a copula $C$ with associated $m\times n$-checkerboard matrix $\Delta$. Let $(X,Y)\sim C$, and let $U\sim\mathrm{Unif}[0,1]$ be independent of $(X,Y)$. Define
$$\widetilde{X}:=\frac{\lfloor mX\rfloor+U}{m}.$$
Since $U$ is independent of $Y$ and $\widetilde{X}$ is a function of $X$ and $U$, it holds that
$$\xi(C)=\xi(Y\mid X)=\xi(Y\mid(X,U))\ge\xi\big(Y\mid\widetilde{X}\big),\qquad(18)$$
see, e.g., [1, Theorem 2.1]. Furthermore, $(\widetilde{X},Y)\in I_{i,j}$ if and only if $(X,Y)\in I_{i,j}$.
It follows that
$$P\big[\widetilde{X}\le u,\,Y\le v\,\big|\,(\widetilde{X},Y)\in I_{i,j}\big]=P\Big[\frac{\lfloor mX\rfloor+U}{m}\le u,\,Y\le v\,\Big|\,(X,Y)\in I_{i,j}\Big]$$
$$=P\Big[\frac{i-1}{m}+\frac{U}{m}\le u,\,Y\le v\,\Big|\,(X,Y)\in I_{i,j}\Big]=P\Big[\frac{i-1}{m}+\frac{U}{m}\le u\,\Big|\,(X,Y)\in I_{i,j}\Big]\,P[Y\le v\mid(X,Y)\in I_{i,j}]$$
$$=P\Big[\frac{i-1}{m}+\frac{U}{m}\le u\,\Big|\,(\widetilde{X},Y)\in I_{i,j}\Big]\,P\big[Y\le v\,\big|\,(\widetilde{X},Y)\in I_{i,j}\big],$$
which shows that $\widetilde{X}$ and $Y$ are conditionally independent on $I_{i,j}$. Also, trivially,
$$P\big[(\widetilde{X},Y)\in I_{i,j}\big]=P[(X,Y)\in I_{i,j}]=\Delta_{i,j},$$
so it follows that $C_{(\widetilde{X},Y)}=C^\Delta_\Pi$. Together with (18), this shows that $\xi(C^\Delta_\Pi)\le\xi(C)$.

Note that whilst $\xi(C^\Delta_\Pi)\le\xi(C)$ for a matrix $\Delta$ associated
with $C$, it is generally not true that $\xi(C)\le\xi(C^\Delta_M)$. A simple counterexample is given by the check-min copula $C=C^\Delta_M$ associated with the checkerboard matrix
$$\Delta=\frac{1}{4}\begin{pmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{pmatrix},$$
which exhibits perfect dependence and hence satisfies $\xi(C)=1$, but this is no longer the case when transitioning to the associated $2\times 2$-checkerboard matrix. Example 4.2 shows that even under stronger positive dependence constraints on the copula $C$, check-min copulas may yield lower values for $\xi$ than the copula itself.

Example 4.2. Consider the matrices
$$\Delta_4:=\frac{1}{4}\begin{pmatrix}1&0&0&0\\0&0.5&0.5&0\\0&0.5&0.5&0\\0&0&0&1\end{pmatrix},\qquad\Delta_2=\frac{1}{8}\begin{pmatrix}3&1\\1&3\end{pmatrix},$$
and let $C$ be the checkerboard copula associated with $\Delta_4$, i.e. $C=C^{\Delta_4}_\Pi$. This copula has a totally positive density, which implies multiple other classical dependence concepts, see [12, Figure 1]. Furthermore, using Proposition 3.3 (iii), it is
$$\xi(C)=\xi\big(C^{\Delta_4}_\Pi\big)=\frac{5}{8}>\frac{7}{16}=\xi\big(C^{\Delta_2}_M\big).$$

4.2 Checkerboard estimator for Chatterjee's xi

Let now $(X_1,Y_1),(X_2,Y_2),\dots$ be a random sample from $(X,Y)$ and assume that $(X,Y)$ has a continuous distribution function. Chatterjee's xi admits a strongly consistent and asymptotically normal estimator given by
$$\xi_n(Y\mid X)=\frac{\sum_{k=1}^n\big(n\min\{R_k,R_{N(k)}\}-L_k^2\big)}{\sum_{k=1}^nL_k(n-L_k)},\qquad(19)$$
where $R_k$ denotes the rank of $Y_k$ among $Y_1,\dots,Y_n$, i.e., the number of $j$ such that $Y_j\le Y_k$, and $L_k$ denotes the number of $j$ such that $Y_j\ge Y_k$. For each $k$, the number $N(k)$ denotes the index $j$ such that $X_j$ is the nearest neighbor of $X_k$, where ties are broken uniformly at random. All these appealing properties allow a fast, model-free variable selection method established in [5], noting that $\xi_n$ can be computed in $O(n\log n)$ time.
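A compact implementation of the estimator (19) for scalar $X$ might look as follows (our sketch, not the author's code; for simplicity, distance ties between the two candidate neighbors are resolved toward the right neighbor instead of uniformly at random):

```python
import numpy as np

def chatterjee_xi(x, y):
    """Estimator (19) of Chatterjee's xi for scalar samples x, y,
    assumed tie-free as for continuous distributions."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    R = np.argsort(np.argsort(y)) + 1        # R_k = #{j : Y_j <= Y_k}
    L = n - R + 1                            # L_k = #{j : Y_j >= Y_k}
    order = np.argsort(x)                    # find nearest neighbor in x
    N = np.empty(n, dtype=int)
    for pos, k in enumerate(order):
        left = order[pos - 1] if pos > 0 else None
        right = order[pos + 1] if pos < n - 1 else None
        if left is None:
            N[k] = right
        elif right is None:
            N[k] = left
        else:
            # distance ties resolved toward the right neighbor here
            N[k] = left if x[k] - x[left] < x[right] - x[k] else right
    num = np.sum(n * np.minimum(R, R[N]) - L.astype(float) ** 2)
    den = np.sum(L * (n - L).astype(float))
    return num / den

x = np.linspace(0.0, 1.0, 500)
assert 0.9 < chatterjee_xi(x, x ** 3) <= 1.0   # near-perfect dependence
```

The neighbor search above is quadratic-free: after the $O(n\log n)$ sort, each $N(k)$ is found in constant time from the two adjacent positions.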
The checkerboard copulas considered above provide an alternative way to estimate Chatterjee's xi: Let $\kappa\in[0,1]$ and let $\Delta_n$ be derived by partitioning the unit square $[0,1]^2$ into $\lfloor n^\kappa\rfloor\times\lfloor n^\kappa\rfloor$ squares of equal size and counting the number of samples in each square, i.e.
$$(\Delta_n)_{i,j}=\frac{1}{n}\sum_{k=1}^n1_{I_{i,j}}(X_k,Y_k)$$
for $1\le i\le\lfloor n^\kappa\rfloor$, $1\le j\le\lfloor n^\kappa\rfloor$. Then, set
$$\xi^\kappa_n(X^{(n)},Y^{(n)})=6\operatorname{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top\Delta_{\lfloor n^\kappa\rfloor}M_\xi\big)+\frac{1}{2}\operatorname{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top\Delta_{\lfloor n^\kappa\rfloor}\big)-2\qquad(20)$$
as an arithmetic average of the formulas for Chatterjee's xi in Proposition 3.3 (iii). Choosing a checkerboard matrix of size $\lfloor n^\kappa\rfloor\times\lfloor n^\kappa\rfloor$ with $\kappa<1$ avoids overfitting by ensuring that the number of squares grows more slowly than the sample size, so that the average number of samples per square grows with $n$. In [3], using checkerboard copulas for estimation has already been done for a whole set of dependence measures that in particular covers Chatterjee's $\xi$, though with a more implicit formula for the estimator.

Theorem 4.3 (Convergence of $\xi^\kappa_n$). If $\kappa\le 1/3$, then the estimator $\xi^\kappa_n$ can be computed in time $O(n\log n)$ and converges to $\xi$ almost surely as $n\to\infty$.

Proof. Matrix multiplication of $k\times k$-matrices is generally possible in $O(k^3)$ time, and in (20) we have $k=\lfloor n^\kappa\rfloor$, yielding a (sub-)linear evaluation time whenever $\kappa\le\frac{1}{3}$. As for the classical estimator from (19), the sample data needs to be transformed to ranks to obtain the doubly stochastic checkerboard matrix, which can be done in $O(n\log n)$ time and is the bottleneck of the algorithm. The almost sure convergence $\xi^\kappa_n\to\xi$ is obtained as in [3, Theorem 4.2],
using also that
$$\operatorname{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top\Delta_{\lfloor n^\kappa\rfloor}\big)=\sum_{i,j=1}^{\lfloor n^\kappa\rfloor}\Delta_{i,j}^2\le\frac{1}{\lfloor n^\kappa\rfloor}\to 0\qquad\text{as }n\to\infty.$$

Next to $\xi^\kappa_n$, also consider variants $\overline{\xi}^\kappa_n$ and $\underline{\xi}^\kappa_n$ of the estimator tailored for $\xi(C^{\Delta_{\lfloor n^\kappa\rfloor}}_M)$ and $\xi(C^{\Delta_{\lfloor n^\kappa\rfloor}}_\Pi)$, respectively. These variants are given by
$$\overline{\xi}^\kappa_n(X^{(n)},Y^{(n)})=6\operatorname{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top\Delta_{\lfloor n^\kappa\rfloor}M_\xi\big)+\operatorname{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top\Delta_{\lfloor n^\kappa\rfloor}\big)-2,$$
$$\underline{\xi}^\kappa_n(X^{(n)},Y^{(n)})=6\operatorname{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top\Delta_{\lfloor n^\kappa\rfloor}M_\xi\big)-2.$$

Naturally, the question arises which $\kappa$ to choose, i.e. how large to make the checkerboard matrix for a given sample size $n$. An intuitive choice is $\kappa=1/3$, as this is the largest choice for which the checkerboard estimate can be computed in $O(n\log n)$ time. The next example illustrates that this choice also appears to do well in practice.

Example 4.4. Consider a single-factor model in $\mathbb{R}^2$ where
$$Z\sim N(0,1),\qquad\varepsilon\sim N(0,1),\qquad X=Z+\varepsilon.\qquad(21)$$
Here, $Z$ and $\varepsilon$ are independent standard Gaussian random variables. The resulting pair $(Z,X)$ is jointly Gaussian with correlation $1/\sqrt{2}$. In this model, it is known that
$$\xi(X\mid Z)=\frac{3}{\pi}\arcsin\Big(\frac{3}{4}\Big)-\frac{1}{2}\approx 0.3098,$$
see [1, Proposition 2.7]. Figure 1 evaluates $\overline{\xi}^\kappa_n$ and $\underline{\xi}^\kappa_n$ based on sample data for different values of $\kappa$. As can be seen from the figure, when $\kappa$ is too large, the estimator overfits the sampled data, whilst for small $\kappa$, the estimator is too coarse.

Figure 1: The estimator $\xi^\kappa_n$ for different values of $\kappa$. Each boxplot corresponds to an increasing sample size $n$. The estimates concentrate near the theoretical value of $\xi$ (red line), illustrating consistency.

Whilst in the above example different values of $\kappa$ were considered, it is also interesting to compare the performance of the estimator $\xi^\kappa_n$ with the classical Chatterjee estimator $\xi_n$ defined above in (19).
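A sketch of the checkerboard estimator (20) in Python (our illustration, not the author's code): the sample is rank-transformed, binned into a $k\times k$ grid with $k=\lfloor n^\kappa\rfloor$ (passed directly below), and the trace formula applied. For perfectly dependent data $Y=X$ with $k$ dividing $n$, the binned matrix is $\frac{1}{k}I_k$, for which (20) evaluates exactly to $1-\frac{1}{2k}$:

```python
import numpy as np

def xi_checkerboard(x, y, k):
    """Checkerboard estimate (20) of Chatterjee's xi with a k-by-k grid
    (k = floor(n**kappa) in the text); samples assumed tie-free."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    rx = np.argsort(np.argsort(x))           # zero-based ranks 0..n-1
    ry = np.argsort(np.argsort(y))
    ix = (rx * k) // n                       # bin indices 0..k-1
    iy = (ry * k) // n
    delta = np.zeros((k, k))
    np.add.at(delta, (ix, iy), 1.0 / n)      # empirical checkerboard matrix
    T = np.triu(np.ones((k, k)), 1)
    M_xi = T @ T.T + T.T + np.eye(k) / 3
    A = delta.T @ delta
    return 6 * np.trace(A @ M_xi) + 0.5 * np.trace(A) - 2

x = np.linspace(0.0, 1.0, 1000)
# perfectly dependent data: the estimate equals 1 - 1/(2k) exactly
assert abs(xi_checkerboard(x, x, 10) - (1 - 1 / 20)) < 1e-12
```

Dropping the $\frac{1}{2}\operatorname{tr}(A)$ term gives the variant $\underline{\xi}^\kappa_n$, and doubling it gives $\overline{\xi}^\kappa_n$.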
In Figure 3, we compare the running time of our implementations of $\xi^\kappa_n$, $\overline{\xi}^\kappa_n$ and $\underline{\xi}^\kappa_n$ for $\kappa=\frac{1}{3}$ with standard implementations of the $\xi_n$ estimator, using sample data from the model in (21). Figure 2 shows that also in terms of precision these estimators do not fall behind the standard implementations of $\xi_n$ in the xicorpy and scipy packages. In conclusion, despite the formulas in Proposition 3.3 being particularly appealing for a given cumulative distribution function, they also yield a reasonable approximation of Chatterjee's xi in a practical setting given sample data.

Figure 2: Convergence of xi estimates to the true value as sample size increases. As suggested by Theorem 4.1, the checkerboard estimate $\underline{\xi}^\kappa_n$ (CheckPi) tends to underestimate the true value, while the check-min estimate $\overline{\xi}^\kappa_n$ (CheckMin) tends to overestimate it. $\xi^\kappa_n$ (CheckAvg) is the closest to the true value at smaller sample sizes in this setting.

Figure 3: Execution time scaling for different estimation methods with increasing sample size. Our implementation of $\xi^\kappa_n$ outperforms the implementation of $\xi_n$ in xicorpy approximately by a factor of three and the implementation in scipy by approximately 30%.

A Appendix

Proof of Proposition 3.1. Let $C=C^D_B$ be the Bernstein copula associated to the cumulated $m\times n$-checkerboard matrix $D$. The formulas for Spearman's rho and Kendall's tau can be obtained as straightforward extensions of the computations in [11, Theorems 9 and 10]. Upper and lower tail dependence coefficients for Bernstein copulas are always equal to zero due to the boundedness of the density, see, e.g., [21, Example 1] for
https://arxiv.org/abs/2505.08045v1
the $m = n$ case. The rectangular case is again analogous. The rest of the proof is dedicated to deriving the formula for Chatterjee's xi. Recall that $\xi(C)$ can be written as
$$\xi(C) = 6\int_{[0,1]^2} \big(\partial_1 C(u,v)\big)^2\, d\lambda^2(u,v) - 2.$$
Hence, we need to evaluate the integral for $C^D_B$.

Step 1: Derivative of $B_{i,m}(u)$. Write
$$B_{i,m}(u) = \binom{m}{i} u^i (1-u)^{m-i}. \quad (22)$$
We distinguish whether $i < m$ or $i = m$.
Case 1: $1 \le i < m$. A direct product rule and factoring out yields
$$\partial_1 B_{i,m}(u) = \binom{m}{i}(i - mu)\, u^{i-1}(1-u)^{m-i-1}.$$
Case 2: $i = m$. Since $B_{m,m}(u) = u^m$, it is $\partial_1 B_{m,m}(u) = m u^{m-1}$.

Step 2: Derivative of the Bernstein copula. Recall from (4) that
$$C^D_B(u,v) = \sum_{i=1}^m \sum_{j=1}^n D_{i,j}\, B_{i,m}(u)\, B_{j,n}(v).$$
Thus,
$$\partial_1 C^D_B(u,v) = \sum_{i=1}^m \sum_{j=1}^n D_{i,j}\, \partial_1 B_{i,m}(u)\, B_{j,n}(v).$$
In the integral for Chatterjee's xi, we need to square this expression and get
$$\big(\partial_1 C^D_B(u,v)\big)^2 = \sum_{i,r=1}^m \sum_{j,s=1}^n D_{i,j} D_{r,s}\, \partial_1 B_{i,m}(u)\, \partial_1 B_{r,m}(u)\, B_{j,n}(v)\, B_{s,n}(v). \quad (23)$$

Step 3: Factorize the double integral. We must integrate (23) over $(u,v) \in [0,1]^2$. Note that $\partial_1 B_{i,m}(u)\, \partial_1 B_{r,m}(u)$ depends only on $u$, while $B_{j,n}(v) B_{s,n}(v)$ depends only on $v$. Hence,
$$\int_0^1\!\!\int_0^1 \big(\partial_1 C^D_B(u,v)\big)^2\, du\, dv = \sum_{i,r=1}^m \sum_{j,s=1}^n D_{i,j} D_{r,s} \underbrace{\Big[\int_0^1 \partial_1 B_{i,m}(u)\, \partial_1 B_{r,m}(u)\, du\Big]}_{=:\,\Omega_{i,r}} \underbrace{\Big[\int_0^1 B_{j,n}(v)\, B_{s,n}(v)\, dv\Big]}_{=:\,\Lambda_{j,s}}.$$
With the matrices $\Omega = (\Omega_{i,r})_{i,r=1}^m$ and $\Lambda = (\Lambda_{j,s})_{j,s=1}^n$, we can write the double sum/integral as
$$\sum_{i,r=1}^m \sum_{j,s=1}^n D_{i,j} D_{r,s}\, \Omega_{i,r}\, \Lambda_{j,s} = \operatorname{tr}\big(\Omega D \Lambda D^\top\big),$$
so that $\xi\big(C^D_B\big) = 6 \operatorname{tr}\big(\Omega D \Lambda D^\top\big) - 2$.

Step 4: Explicit form of $\Lambda$.
By (22), we have
$$\Lambda_{j,s} = \int_0^1 \binom{n}{j}\binom{n}{s}\, v^{j+s}(1-v)^{2n-(j+s)}\, dv.$$
A standard Beta-integral identity for nonnegative integers $p,q$ is
$$\int_0^1 x^p (1-x)^q\, dx = \frac{p!\,q!}{(p+q+1)!}, \quad (24)$$
see, e.g., [26]. Here, $p = j+s$ and $q = 2n-j-s$, so
$$\Lambda_{j,s} = \binom{n}{j}\binom{n}{s}\frac{(j+s)!\,(2n-j-s)!}{(2n+1)!} = \frac{\binom{n}{j}\binom{n}{s}}{(2n+1)\binom{2n}{j+s}},$$
as specified in Section 3.1.

Step 5: Explicit form of $\Omega$. Recall from Step 1 that in $\Omega_{i,r} = \int_0^1 \partial_1 B_{i,m}(u)\, \partial_1 B_{r,m}(u)\, du$ it is
$$\partial_1 B_{i,m}(u) = \begin{cases} \binom{m}{i}(i-mu)\, u^{i-1}(1-u)^{m-i-1}, & 1 \le i < m, \\ m u^{m-1}, & i = m. \end{cases}$$
Hence, we must consider four cases for the pair $(i,r)$:

(a) $1 \le i < m$ and $1 \le r < m$. Then
$$\partial_1 B_{i,m}(u)\, \partial_1 B_{r,m}(u) = \binom{m}{i}\binom{m}{r}(i-mu)(r-mu)\, u^{i+r-2}(1-u)^{2m-i-r-2}.$$
Expanding $(i-mu)(r-mu)$ yields
$$\Omega_{i,r} = \binom{m}{i}\binom{m}{r}\int_0^1 \big[ir - m(i+r)u + m^2 u^2\big]\, u^{i+r-2}(1-u)^{2m-i-r-2}\, du.$$
Splitting into three Beta-type integrals:
$$\Omega_{i,r} = \binom{m}{i}\binom{m}{r}\Big[ ir\int_0^1 u^{i+r-2}(1-u)^{2m-i-r-2}\, du - m(i+r)\int_0^1 u^{i+r-1}(1-u)^{2m-i-r-2}\, du + m^2\int_0^1 u^{i+r}(1-u)^{2m-i-r-2}\, du \Big].$$
Using the Beta integrals from (24), the three integrals become
$$\frac{(i+r-2)!\,(2m-i-r-2)!}{(2m-3)!}, \qquad \frac{(i+r-1)!\,(2m-i-r-2)!}{(2m-2)!}, \qquad \frac{(i+r)!\,(2m-i-r-2)!}{(2m-1)!},$$
and we now have
$$\Omega_{i,r} = \binom{m}{i}\binom{m}{r}\Big[ ir\,\frac{(i+r-2)!\,(2m-i-r-2)!}{(2m-3)!} - m(i+r)\frac{(i+r-1)!\,(2m-i-r-2)!}{(2m-2)!} + m^2\frac{(i+r)!\,(2m-i-r-2)!}{(2m-1)!} \Big].$$
This expression can be simplified by rewriting each fraction so that all terms share the denominator $(2m-1)!$, i.e., using
$$\frac{(i+r-2)!\,(2m-i-r-2)!}{(2m-3)!} = \frac{(i+r-2)!\,(2m-i-r-2)!\,(2m-1)(2m-2)}{(2m-1)!}, \qquad \frac{(i+r-1)!\,(2m-i-r-2)!}{(2m-2)!} = \frac{(i+r-1)!\,(2m-i-r-2)!\,(2m-1)}{(2m-1)!},$$
we obtain
$$\Omega_{i,r} = \binom{m}{i}\binom{m}{r}\frac{1}{(2m-1)!}\Big[ ir(2m-1)(2m-2)\,(i+r-2)!\,(2m-i-r-2)! - m(i+r)(2m-1)\,(i+r-1)!\,(2m-i-r-2)! + m^2\,(i+r)!\,(2m-i-r-2)! \Big]$$
$$= \binom{m}{i}\binom{m}{r}\frac{(i+r-2)!\,(2m-i-r-2)!}{(2m-1)!}\Big[ (2m-1)(2m-2)\,ir - 2m(m-1)\binom{i+r}{2} \Big] = \frac{\binom{m}{i}\binom{m}{r}}{(2m-3)\binom{2m-4}{i+r-2}}\Bigg[ ir - \frac{2m(m-1)\binom{i+r}{2}}{(2m-1)(2m-2)} \Bigg].$$

(b) $1 \le i < m$ and $r = m$. Now, by Step 1 it is
$$\partial_1 B_{i,m}(u)\, \partial_1 B_{m,m}(u) = m\binom{m}{i}(i-mu)\, u^{m+i-2}(1-u)^{m-i-1}$$
and
$$\Omega_{i,m} = m\binom{m}{i}\int_0^1 (i-mu)\, u^{m+i-2}(1-u)^{m-i-1}\, du.$$
Splitting the factor $(i-mu)$:
$$\Omega_{i,m} = m\binom{m}{i}\Big[ i\int_0^1 u^{m+i-2}(1-u)^{m-i-1}\, du - m\int_0^1 u^{m+i-1}(1-u)^{m-i-1}\, du \Big].$$
Here, use again $p!\,q!/(p+q+1)!$ with appropriate exponents. For the first integral, choose $p = m+i-2$ and $q = m-i-1$, and for the second integral choose $p = m+i-1$ and $q = m-i-1$. Then, one gets
$$\int_0^1 u^{m+i-2}(1-u)^{m-i-1}\, du = \frac{(m+i-2)!\,(m-i-1)!}{(2m-2)!}, \qquad \int_0^1 u^{m+i-1}(1-u)^{m-i-1}\, du = \frac{(m+i-1)!\,(m-i-1)!}{(2m-1)!}.$$
Thus,
$$\Omega_{i,m} = m\binom{m}{i}\Big[ i\,\frac{(m+i-2)!\,(m-i-1)!}{(2m-2)!} - m\,\frac{(m+i-1)!\,(m-i-1)!}{(2m-1)!} \Big] = m\binom{m}{i}\frac{(m+i-2)!\,(m-i-1)!}{(2m-1)!}(m-1)(i-m) = \frac{m(m-1)(i-m)\binom{m}{i}}{(2m-1)(2m-2)\binom{2m-3}{m+i-2}}.$$

(c) $i = m$ and $1 \le r < m$. By symmetry, or by the same direct calculation,
$$\Omega_{m,r} = \frac{m(m-1)(r-m)\binom{m}{r}}{(2m-1)(2m-2)\binom{2m-3}{m+r-2}}.$$

(d) $i = m$ and $r = m$. Here,
$$\Omega_{m,m} = \int_0^1 \big[m u^{m-1}\big]^2\, du = m^2\int_0^1 u^{2m-2}\, du = m^2\Big[\frac{u^{2m-1}}{2m-1}\Big]_0^1 = \frac{m^2}{2m-1}.$$
Putting these four sub-cases (a)-(d) together provides the complete piecewise formula for $\Omega_{i,r}$ that is specified in Section 3.1. This completes the proof.

Lemma A.1 (Permutation Sum Identities). Let $\sigma \in S_n$ be a permutation of $\{1,2,\dots,n\}$ and let $d_i = \sigma(i) - i$. Then the following identities hold:
(i) $\sum_{i=1}^n d_i = 0$;
(ii) $\sum_{i=1}^n d_i(2i-1) = -\sum_{i=1}^n d_i^2$.

Proof. We use the properties that for any permutation $\sigma \in S_n$: (a) $\sum_{i=1}^n \sigma(i) = \sum_{i=1}^n i$ and (b) $\sum_{i=1}^n \sigma(i)^2 = \sum_{i=1}^n i^2$. The first identity is straightforward. For the second identity, first expand the left-hand side:
$$\sum_{i=1}^n d_i(2i-1) = 2\sum_{i=1}^n i\sigma(i) - \sum_{i=1}^n \sigma(i) - 2\sum_{i=1}^n i^2 + \sum_{i=1}^n i \overset{(a)}{=} 2\sum_{i=1}^n i\sigma(i) - 2\sum_{i=1}^n i^2.$$
Next, expand the term $\sum_{i=1}^n d_i^2$ from the right-hand side:
$$\sum_{i=1}^n d_i^2 = \sum_{i=1}^n \sigma(i)^2 - 2\sum_{i=1}^n i\sigma(i) + \sum_{i=1}^n i^2 \overset{(b)}{=} 2\sum_{i=1}^n i^2 - 2\sum_{i=1}^n i\sigma(i).$$
Comparing the two resulting terms, we see that
$$\sum_{i=1}^n d_i(2i-1) = -\Big( 2\sum_{i=1}^n i^2 - 2\sum_{i=1}^n i\sigma(i) \Big) = -\sum_{i=1}^n d_i^2.$$
This establishes the second identity.

Proof of Proposition 3.2. First, for Kendall's tau, let $(U_1,V_1), (U_2,V_2) \sim C_\sigma$ be independent of each other and write $I = \lceil nU_1 \rceil$, $J = \lceil nU_2 \rceil$ for the indices of the segments on which the two points fall. The random variables $I$ and $J$ are i.i.d. and uniform on $\{1,\dots,n\}$, so
$$P[I=i, J=j] = \frac{1}{n^2} \qquad (i,j = 1,\dots,n).$$
If $I \neq J$, the sign of $(U_1-U_2)(V_1-V_2)$ is completely determined by the permutation:
- $I < J$, $\sigma(I) < \sigma(J)$ or $I > J$, $\sigma(I) > \sigma(J)$ $\Longrightarrow$ concordance;
- $I < J$, $\sigma(I) > \sigma(J)$ or $I > J$, $\sigma(I) < \sigma(J)$ $\Longrightarrow$ discordance.
Because $I$ and $J$ are chosen with replacement, ties $I = J$ occur with probability $P[I=J] = 1/n$; on this event both points lie on the same increasing segment, so such pairs are concordant. Since $N_{\mathrm{inv}}(\sigma)$ denotes the number of inversions of $\sigma$, the probabilities are
$$p_{\mathrm{disc}} = \frac{2N_{\mathrm{inv}}(\sigma)}{n^2}, \qquad p_{\mathrm{conc}} = 1 - p_{\mathrm{disc}}.$$
Hence, by the concordant-discordant definition given in (15), it is
$$\tau(C_\sigma) = p_{\mathrm{conc}} - p_{\mathrm{disc}} = 1 - \frac{4N_{\mathrm{inv}}(\sigma)}{n^2}.$$
Regarding Spearman's rho, fix a segment index $i$ and write the rank displacement $d_i := \sigma(i) - i$. The support of $C_\sigma$ is the union of $n$ diagonal line segments
$$S_i := \Big\{ \Big( \frac{i-1+t}{n},\, \frac{\sigma(i)-1+t}{n} \Big) \,\Big|\, t \in [0,1] \Big\}.$$
Each carries probability mass $1/n$. On $S_i$ the coordinates are related by $V = U + \frac{d_i}{n}$, so $UV = U^2 + \frac{d_i}{n}U$. With $t \sim \mathrm{Unif}[0,1]$ and $U = (i-1+t)/n$, the conditional expectation is given by
$$E[U \mid I=i] = \int_0^1 \frac{i-1+t}{n}\, dt = \frac{2i-1}{2n},$$
so that
$$E[U^2 \mid I=i] = \int_0^1 \frac{(i-1+t)^2}{n^2}\, dt = \frac{(2i-1)^2}{4n^2} + \frac{1}{12n^2}$$
and
$$E[UV \mid I=i] = E[U^2 \mid I=i] + \frac{d_i}{n}E[U \mid I=i] = \frac{(2i-1)^2}{4n^2} + \frac{1}{12n^2} + \frac{d_i(2i-1)}{2n^2}.$$
Averaging over $i$ one obtains
$$E[UV] = \frac{1}{n}\sum_{i=1}^n E[UV \mid I=i] = \underbrace{\frac{1}{n}\sum_{i=1}^n \Big( \frac{(2i-1)^2}{4n^2} + \frac{1}{12n^2} \Big)}_{=\,E[U^2]\,=\,1/3} + \frac{1}{n}\sum_{i=1}^n \frac{d_i(2i-1)}{2n^2}.$$
The first sum is $E[U^2]$ for a $\mathrm{Unif}(0,1)$ variable, which equals $1/3$. For the second sum, we use the second identity from Lemma A.1, namely $\sum_{i=1}^n d_i(2i-1) = -\sum_{i=1}^n d_i^2$. Substituting yields
$$E[UV] = \frac{1}{3} + \frac{1}{2n^3}\sum_{i=1}^n d_i(2i-1) = \frac{1}{3} - \frac{1}{2n^3}\sum_{i=1}^n d_i^2.$$
Because $\rho_S(C) = 12E[UV] - 3$ for any copula $C$ with uniform margins,
$$\rho_S(C_\sigma) = 12\Big( \frac{1}{3} - \frac{\sum_i d_i^2}{2n^3} \Big) - 3 = 1 - \frac{6\sum_{i=1}^n d_i^2}{n^3}.$$
Next, regarding Chatterjee's $\xi$, note that for $(X,Y) \sim C_\sigma$, it is $Y = f(X)$ almost surely, see [18, Theorem 2.1].
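As a numerical sanity check of Lemma A.1 and of the closed forms just derived (a sketch with our own helper names, not part of the proof):

```python
import numpy as np

def lemma_a1_holds(sigma):
    """Check (i) sum_i d_i = 0 and (ii) sum_i d_i(2i-1) = -sum_i d_i^2
    for a permutation sigma of {1,...,n}, given as a 1-based array."""
    sigma = np.asarray(sigma)
    i = np.arange(1, len(sigma) + 1)
    d = sigma - i
    return d.sum() == 0 and (d * (2 * i - 1)).sum() == -(d * d).sum()

def tau_rho_of_shuffle(sigma):
    """tau(C_sigma) = 1 - 4 N_inv(sigma)/n^2 and
    rho_S(C_sigma) = 1 - 6 sum_i d_i^2 / n^3, as in Proposition 3.2."""
    sigma = np.asarray(sigma)
    n = len(sigma)
    n_inv = sum(sigma[a] > sigma[b] for a in range(n) for b in range(a + 1, n))
    d = sigma - np.arange(1, n + 1)
    return 1 - 4 * n_inv / n**2, 1 - 6 * (d * d).sum() / n**3
```

For the identity permutation both coefficients equal $1$, as they must for a copula supported on the main diagonal.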
Consequently, using [1, Theorem 2.1], it follows that $\xi(C_\sigma) = 1$. Lastly, for the tail coefficients, note that for $t < 1/n$, $C_\sigma(t,t) = t$ if and only if $\sigma(1) = 1$ (the first segment lies on the main diagonal); otherwise $C_\sigma(t,t) = 0$. Hence, (16) yields $\lambda_L = \mathbf{1}_{\{\sigma(1)=1\}}$. A symmetric argument with $t > 1 - 1/n$ gives $\lambda_U = \mathbf{1}_{\{\sigma(n)=n\}}$, establishing the tail dependence coefficients.

Proof of Proposition 3.3. (i) Recall that
$$\rho_S(C) = 12\int_{[0,1]^2} C(u,v)\, d\lambda^2(u,v) - 3,$$
where $\lambda^2$ denotes the Lebesgue measure on $[0,1]^2$, and recall also from (8) that the copula $C_\Pi$ is given by
$$C_\Pi(u,v) = \sum_{k,l=1}^{i-1,j-1}\Delta_{k,l} + \sum_{k=1}^{i-1}\Delta_{k,j}(nv-j+1) + \sum_{l=1}^{j-1}\Delta_{i,l}(mu-i+1) + \Delta_{i,j}(mu-i+1)(nv-j+1)$$
for $(u,v) \in I_{i,j}$. Hence, with a simple substitution, it is
$$\int_{(i-1)/m}^{i/m}\int_{(j-1)/n}^{j/n} C_\Pi(u,v)\, dv\, du = \frac{1}{mn}\Big( \sum_{k,l=1}^{i-1,j-1}\Delta_{k,l} + \frac{1}{2}\sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{4}\Delta_{i,j} \Big).$$
Considering the full integral, each $\Delta_{i,j}$ appears in the integral of the corresponding cell and in all cells $I_{i',j'}$ with $i' \ge i$ and $j' \ge j$. In total, it appears $(m-i)(n-j)$ times with a weight of $1$, $(m-i)+(n-j)$ times with a weight of $\frac{1}{2}$, and one time with a weight of $\frac{1}{4}$. Consequently,
$$\int_{[0,1]^2} C_\Pi(u,v)\, du\, dv = \sum_{i=1}^m\sum_{j=1}^n \frac{(2m-2i+1)(2n-2j+1)}{4mn}\Delta_{i,j},$$
and thus, using that $\int_{[0,1]^2}C\, d\lambda^2 = E[(1-U)(1-V)] = E[UV]$ for uniform margins,
$$\rho_S(C_\Pi) = 12\sum_{i,j=1}^{m,n}\frac{(2i-1)(2j-1)}{4mn}\Delta_{i,j} - 3 = 3\operatorname{tr}\big(\Omega^\top\Delta\big) - 3.$$
For the check-min copula $C_\nabla$, recall its formula from (11). It follows that
$$\int_{(i-1)/m}^{i/m}\int_{(j-1)/n}^{j/n} C_\nabla(u,v)\, dv\, du = \frac{1}{mn}\Big( \sum_{k,l=1}^{i-1,j-1}\Delta_{k,l} + \frac{1}{2}\sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{3}\Delta_{i,j} \Big),$$
and therefore
$$\int_{[0,1]^2} C_\nabla(u,v)\, du\, dv = \sum_{i,j=1}^{m,n}\frac{(2m-2i+1)(2n-2j+1)}{4mn}\Delta_{i,j} + \frac{1}{12mn}\sum_{i,j=1}^{m,n}\Delta_{i,j}.$$
Hence, $\rho_S(C_\nabla) = \rho_S(C_\Pi) + \frac{1}{mn}$, as stated. Similarly, for $C_\Delta$, one obtains from (13) that
$$\int_{(i-1)/m}^{i/m}\int_{(j-1)/n}^{j/n} C_\Delta(u,v)\, dv\, du = \frac{1}{mn}\Big( \sum_{k,l=1}^{i-1,j-1}\Delta_{k,l} + \frac{1}{2}\sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{6}\Delta_{i,j} \Big),$$
leading to the stated result.
(ii) Kendall's tau for $C_\Pi$ is given by
$$\tau(C_\Pi) = 1 - 4\int_{[0,1]^2} \partial_1 C_\Pi(u,v)\, \partial_2 C_\Pi(u,v)\, du\, dv,$$
and we compute
$$\partial_1 C_\Pi(u,v) = m\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\,\frac{v - \frac{j-1}{n}}{\frac{j}{n} - \frac{j-1}{n}} \Big) = m\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}(nv-j+1) \Big)$$
for $(u,v) \in I_{i,j}$. An analogous expression holds for $\partial_2 C_\Pi(u,v)$ on each cell. Integrating cell by cell, one obtains
$$\int_0^1\!\!\int_0^1 \partial_1 C_\Pi(u,v)\, \partial_2 C_\Pi(u,v)\, du\, dv = \sum_{i,j=1}^{m,n}\int_{(i-1)/m}^{i/m}\int_{(j-1)/n}^{j/n} \partial_1 C_\Pi(u,v)\, \partial_2 C_\Pi(u,v)\, dv\, du$$
$$= \sum_{i,j=1}^{m,n}\Big( m\int_{(j-1)/n}^{j/n}\Big[ \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}(nv-j+1) \Big] dv \Big)\Big( n\int_{(i-1)/m}^{i/m}\Big[ \sum_{k=1}^{i-1}\Delta_{k,j} + \Delta_{i,j}(mu-i+1) \Big] du \Big)$$
$$= \sum_{i,j=1}^{m,n}\Big( \frac{m}{n}\int_0^1\Big[ \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}v \Big] dv \Big)\Big( \frac{n}{m}\int_0^1\Big[ \sum_{k=1}^{i-1}\Delta_{k,j} + \Delta_{i,j}u \Big] du \Big)$$
$$= \sum_{i,j=1}^{m,n}\Big( \sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\Delta_{i,j} \Big)\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{2}\Delta_{i,j} \Big) = \frac{1}{4}\sum_{i,j=1}^{m,n}\big(\Lambda^{(m)}\Delta\big)_{i,j}\big(\Lambda^{(n)}\Delta^\top\big)_{j,i} = \frac{1}{4}\operatorname{tr}\big(\Lambda^{(m)}\Delta\,\Lambda^{(n)}\Delta^\top\big).$$
Hence,
$$\tau(C_\Pi) = 1 - 4\int_{[0,1]^2} \partial_1 C_\Pi(u,v)\, \partial_2 C_\Pi(u,v)\, du\, dv = 1 - \operatorname{tr}\big(\Lambda^{(m)}\Delta\,\Lambda^{(n)}\Delta^\top\big).$$
In the case of $C_\nabla$, note that now it is
$$\partial_1 C_\nabla(u,v) = m\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\,\mathbf{1}_{\{nv-j+1 \ge mu-i+1\}} \Big)$$
for $(u,v) \in I_{i,j}$, and similarly for $\partial_2 C_\nabla(u,v)$, so that
$$\int_0^1\!\!\int_0^1 \partial_1 C_\nabla(u,v)\, \partial_2 C_\nabla(u,v)\, du\, dv = \sum_{i,j=1}^{m,n}\int_0^1\!\!\int_0^1\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\mathbf{1}_{\{v \ge u\}} \Big)\Big( \sum_{k=1}^{i-1}\Delta_{k,j} + \Delta_{i,j}\mathbf{1}_{\{u \ge v\}} \Big) dv\, du$$
$$= \sum_{i,j=1}^{m,n}\Big( \sum_{k=1}^{i-1}\Delta_{k,j}\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{2}\sum_{k=1}^{i-1}\Delta_{k,j}\,\Delta_{i,j} + \frac{1}{2}\sum_{l=1}^{j-1}\Delta_{i,l}\,\Delta_{i,j} \Big)$$
$$= \sum_{i,j=1}^{m,n}\Big( \sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\Delta_{i,j} \Big)\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{2}\Delta_{i,j} \Big) - \frac{1}{4}\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 = \frac{1}{4}\Big( \operatorname{tr}\big(\Lambda^{(m)}\Delta\,\Lambda^{(n)}\Delta^\top\big) - \operatorname{tr}\big(\Delta^\top\Delta\big) \Big).$$
Consequently,
$$\tau(C_\nabla) = 1 - 4\int_{[0,1]^2} \partial_1 C_\nabla(u,v)\, \partial_2 C_\nabla(u,v)\, du\, dv = \tau(C_\Pi) + \operatorname{tr}\big(\Delta^\top\Delta\big),$$
and a similar argument yields
$$\int_0^1\!\!\int_0^1 \partial_1 C_\Delta(u,v)\, \partial_2 C_\Delta(u,v)\, du\, dv = \sum_{i,j=1}^{m,n}\int_0^1\!\!\int_0^1\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\mathbf{1}_{\{v \ge 1-u\}} \Big)\Big( \sum_{k=1}^{i-1}\Delta_{k,j} + \Delta_{i,j}\mathbf{1}_{\{u \ge 1-v\}} \Big) dv\, du$$
$$= \sum_{i,j=1}^{m,n}\Big( \sum_{k=1}^{i-1}\Delta_{k,j}\sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{2}\sum_{k=1}^{i-1}\Delta_{k,j}\,\Delta_{i,j} + \frac{1}{2}\sum_{l=1}^{j-1}\Delta_{i,l}\,\Delta_{i,j} + \Delta_{i,j}^2\int_0^1\!\!\int_0^1\mathbf{1}_{\{u+v \ge 1\}}\, dv\, du \Big)$$
$$= \sum_{i,j=1}^{m,n}\Big( \sum_{k=1}^{i-1}\Delta_{k,j} + \frac{1}{2}\Delta_{i,j} \Big)\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \frac{1}{2}\Delta_{i,j} \Big) + \frac{1}{4}\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 = \frac{1}{4}\Big( \operatorname{tr}\big(\Lambda^{(m)}\Delta\,\Lambda^{(n)}\Delta^\top\big) + \operatorname{tr}\big(\Delta^\top\Delta\big) \Big),$$
which shows
$$\tau(C_\Delta) = 1 - 4\int_{[0,1]^2} \partial_1 C_\Delta(u,v)\, \partial_2 C_\Delta(u,v)\, du\, dv = \tau(C_\Pi) - \operatorname{tr}\big(\Delta^\top\Delta\big).$$

(iii) Recall that Chatterjee's $\xi$ for a copula $C$ can be expressed as
$$\xi(C) = 6\int_0^1\!\!\int_0^1 \big(\partial_1 C(u,v)\big)^2\, du\, dv - 2.$$
For $(u,v) \in I_{i,j}$, we have
$$\partial_1 C_\Pi(u,v) = m\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}(nv-j+1) \Big). \quad (25)$$
Hence, squaring and integrating in $v$, one finds
$$\int_{(j-1)/n}^{j/n}\big(\partial_1 C_\Pi(u,v)\big)^2\, dv = \frac{m^2}{n}\int_0^1\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}v \Big)^2 dv = \frac{m^2}{n}\Big( \Big(\sum_{l=1}^{j-1}\Delta_{i,l}\Big)^2 + \Big(\sum_{l=1}^{j-1}\Delta_{i,l}\Big)\Delta_{i,j} + \frac{1}{3}\Delta_{i,j}^2 \Big) = \frac{m^2}{n}\Big( (\Delta T)_{i,j}^2 + (\Delta T)_{i,j}\Delta_{i,j} + \frac{1}{3}\Delta_{i,j}^2 \Big),$$
where $(\Delta T)_{i,j} = \sum_{l=1}^{j-1}\Delta_{i,l}$. Summing over the cells yields the formula for $\xi(C_\Pi)$:
$$\xi(C_\Pi) = 6\sum_{i,j=1}^{m,n}\int_{(i-1)/m}^{i/m}\int_{(j-1)/n}^{j/n}\big(\partial_1 C_\Pi(u,v)\big)^2\, dv\, du - 2 = \frac{6m}{n}\sum_{i,j=1}^{m,n}\Big( (\Delta T)_{i,j}^2 + (\Delta T)_{i,j}\Delta_{i,j} + \frac{1}{3}\Delta_{i,j}^2 \Big) - 2$$
$$= \frac{6m}{n}\operatorname{tr}\Big( \Delta^\top\Delta\big( TT^\top + T^\top + \tfrac{1}{3}I_n \big) \Big) - 2 = \frac{6m}{n}\operatorname{tr}\big( \Delta^\top\Delta M_\xi \big) - 2.$$
In the case of $C_{pd} \in \mathcal{C}^*_{pd}$, note that with (14) it now is
$$\partial_1 C_{pd}(u,v) = m\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\,\mathbf{1}_{\{nv-j+1 \ge f_{i,j}(mu-i+1)\}} \Big)$$
for $(u,v) \in I_{i,j}$. Note further that due to $f_{i,j}$ being Lebesgue measure preserving, it is
$$\int\!\!\int_{[0,1]^2}\mathbf{1}_{\{v \ge f_{i,j}(u)\}}\, dv\, du = \int_0^1\big(1 - f_{i,j}(u)\big)\, du = \frac{1}{2},$$
and hence
$$\int_{(i-1)/m}^{i/m}\int_{(j-1)/n}^{j/n}\big(\partial_1 C_{pd}(u,v)\big)^2\, dv\, du = \frac{m}{n}\int_0^1\!\!\int_0^1\Big( \sum_{l=1}^{j-1}\Delta_{i,l} + \Delta_{i,j}\mathbf{1}_{\{v \ge f_{i,j}(u)\}} \Big)^2 dv\, du = \frac{m}{n}\Big( \Big(\sum_{l=1}^{j-1}\Delta_{i,l}\Big)^2 + \Big(\sum_{l=1}^{j-1}\Delta_{i,l}\Big)\Delta_{i,j} + \frac{1}{2}\Delta_{i,j}^2 \Big).$$
Thus, one gets an extra $\frac{1}{6}\Delta_{i,j}^2$ compared to the previous case, and concludes that
$$\xi(C_{pd}) = 6\int_{[0,1]^2}\big(\partial_1 C_{pd}(u,v)\big)^2\, du\, dv - 2 = \xi(C_\Pi) + \frac{m}{n}\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 = \xi(C_\Pi) + \frac{m\operatorname{tr}(\Delta^\top\Delta)}{n}.$$
Since $C_\nabla, C_\Delta \in \mathcal{C}^*_{pd}$, this result in particular also holds for them. Lastly, regarding tail dependence coefficients, it is a direct and classical observation that a copula with a bounded density has no tail dependence, compare, e.g., [17, below Remark 5.1]. In particular, $\lambda_L(C_\Pi) = \lambda_U(C_\Pi) = 0$. For $C_\Delta$, since $C_\Delta \le C_\Pi$ pointwise, it is clear from the definition of $\lambda_L$ and $\lambda_U$ in (16) and (17) that
$$0 \le \lambda_L(C_\Delta) \le \lambda_L(C_\Pi), \qquad 0 \le \lambda_U(C_\Delta) \le \lambda_U(C_\Pi).$$
Hence, also $\lambda_L(C_\Delta) = \lambda_U(C_\Delta) = 0$. Finally, for $C_\nabla$, recall its form from (11). For $t > 0$ sufficiently small, it is $(t,t) \in I_{1,1}$ and thus
$$\lambda_L(C_\nabla) = \lim_{t \downarrow 0}\frac{C_\nabla(t,t)}{t} = \lim_{t \downarrow 0}\frac{\Delta_{1,1}\min\{nt, mt\}}{t} = \Delta_{1,1}(m \wedge n).$$
Similarly, for $1-t > 0$ sufficiently small, it is $(t,t) \in I_{m,n}$ and thus
$$C_\nabla(1,1) - C_\nabla(t,t) = n(1-t)\sum_{k=1}^{m-1}\Delta_{k,n} + m(1-t)\sum_{l=1}^{n-1}\Delta_{m,l} + \Delta_{m,n}\max\{n(1-t),\, m(1-t)\}$$
$$= n(1-t)\Big(\frac{1}{n} - \Delta_{m,n}\Big) + m(1-t)\Big(\frac{1}{m} - \Delta_{m,n}\Big) + \Delta_{m,n}\max\{n(1-t),\, m(1-t)\} = (1-t)\big(2 - (m \wedge n)\Delta_{m,n}\big).$$
This yields
$$\lambda_U(C_\nabla) = 2 - \lim_{t \uparrow 1}\frac{1 - C_\nabla(t,t)}{1-t} = 2 - \lim_{t \uparrow 1}\frac{C_\nabla(1,1) - C_\nabla(t,t)}{1-t} = \Delta_{m,n}(m \wedge n),$$
finishing the proof.

Proof of Corollary 3.4. Note that
$$\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 \le \sum_{i=1}^m\Big( \sum_{j=1}^n \Delta_{i,j} \Big)^2 = \sum_{i=1}^m \frac{1}{m^2} = \frac{1}{m}$$
and in the same way $\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 \le \frac{1}{n}$. Hence, by Proposition 3.3 (iii), we have
$$\big| \xi\big(C^\Delta_\nabla\big) - \xi\big(C^\Delta_\Pi\big) \big| = \frac{m\operatorname{tr}(\Delta^\top\Delta)}{n} = \frac{m}{n}\sum_{i,j=1}^{m,n}\Delta_{i,j}^2 \le \frac{m}{n(m \vee n)} = \begin{cases} \frac{m}{n^2}, & \text{if } m \le n, \\ \frac{1}{n}, & \text{if } m > n, \end{cases}$$
as claimed.

References

[1] Jonathan Ansari and Sebastian Fuchs. A simple extension of Azadkia and Chatterjee's rank correlation to a vector of endogenous variables. arXiv preprint arXiv:2212.01621, 2022.
[2] Jonathan Ansari and Sebastian Fuchs. On continuity of Chatterjee's rank correlation and related dependence measures. arXiv preprint arXiv:2503.11390, 2025.
[3] Jonathan Ansari, Patrick B. Langthaler, Sebastian Fuchs, and Wolfgang Trutschnig. Quantifying and estimating dependence via sensitivity of conditional distributions. arXiv preprint arXiv:2308.06168, 2023.
[4] Jonathan Ansari and Marcus Rockel. Dependence properties of bivariate copula families. Dependence Modeling, 12(1):20240002, 2024.
[5] M. Azadkia and S. Chatterjee. A simple measure of conditional dependence. Ann. Stat., 49(6):3070-3102, 2021.
[6] S. Chatterjee. A new coefficient of correlation. J. Amer. Statist. Ass., 116(536):2009-2022, 2020.
[7] Claudia Cottin and Dietmar Pfeifer. From Bernstein polynomials to Bernstein copulas. J. Appl. Funct. Anal., 9(3-4):277-288, 2014.
[8] Holger Dette, Karl F Siburg, and Pavel A Stoimenov. A copula-based non-parametric measure of regression dependence. Scandinavian Journal of Statistics , 40(1):21โ41, 2013. [9] Fabrizio Durante, Juan Fernandez-Sanchez, and Wolfgang Trutschnig. A typical copula is singular. Journal of Mathematical Analysis and Applications , 430(1):517โ527, 2015. [10] Fabrizio Durante and Carlo Sempi. Principles of Copula Theory . Boca Raton, FL: CRC Press, 2016. [11] Valdo Durrleman, Ashkan Nikeghbali, and Thierry Roncalli. Copulas approximation
and new families. Available at SSRN 1032547, 2000.
[12] Sebastian Fuchs and Marco Tschimpke. Total positivity of copulas from a Markov kernel perspective. Journal of Mathematical Analysis and Applications, 518(1):126629, 2023.
[13] Harry Joe. Multivariate Models and Dependence Concepts, volume 73 of Monogr. Stat. Appl. Probab. London: Chapman and Hall, 1997.
[14] Viktor Kuzmenko, Romel Salam, and Stan Uryasev. Checkerboard copula defined by sums of random variables. Dependence Modeling, 8(1):70-92, 2020.
[15] X. Li, P. Mikusiński, H. Sherwood, and M. D. Taylor. On approximation of copulas. Distributions with Given Marginals and Moment Problems, pages 107-116, 1997.
[16] Xin Li, P. Mikusiński, and Michael D. Taylor. Strong approximation of copulas. Journal of Mathematical Analysis and Applications, 225(2):608-623, 1998.
[17] Georg Mainik. Risk aggregation with empirical margins: Latin hypercubes, empirical copulas, and convergence of sum distributions. Journal of Multivariate Analysis, 141:197-216, 2015.
[18] Piotr Mikusiński, Howard Sherwood, and Michael D. Taylor. Shuffles of min. Stochastica, 13(1):61-74, 1992.
[19] Piotr Mikusiński and Michael D. Taylor. Some approximations of n-copulas. Metrika, 72(3):385-414, 2010.
[20] Roger B. Nelsen. An Introduction to Copulas. 2nd ed. New York, NY: Springer, 2006.
[21] Dietmar Pfeifer, Hervé Awoumlac Tsatedem, Andreas Mändle, and Côme Girschig. New copulas based on general partitions-of-unity and their applications to risk management. Dependence Modeling, 4(1):000010151520160006, 2016.
[22] Alessio Sancetta and Stephen Satchell. The Bernstein copula and its applications to modeling and approximations of multivariate distributions. Econometric Theory, 20(3):535-562, 2004.
[23] Manuela Schreyer, Roland Paulin, and Wolfgang Trutschnig. On the exact region determined by Kendall's τ and Spearman's ρ. Journal of the Royal Statistical Society Series B: Statistical Methodology, 79(2):613-633, 2017.
[24] Issey Sukeda and Tomonari Sei. On the minimum information checkerboard copulas under fixed Kendall's rank correlation. arXiv preprint arXiv:2306.01604, 2023.
[25] Marco Tschimpke, Manuela Schreyer, and Wolfgang Trutschnig. Revisiting the region determined by Spearman's ρ and Spearman's footrule φ. Journal of Computational and Applied Mathematics, 457:116259, 2025.
[26] Eric W. Weisstein. Beta function. https://mathworld.wolfram.com/, 2002.
[27] Yanting Zheng, Jingping Yang, and Jianhua Z. Huang. Approximation of bivariate copulas by patched bivariate Fréchet copulas. Insurance: Mathematics and Economics, 48(2):246-256, 2011.
Beyond Basic A/B Testing: Improving Statistical Efficiency for Business Growth

Changshuai Wei* (LinkedIn Corporation, chawei@linkedin.com), Phuc Nguyen (LinkedIn Corporation, honnguyen@linkedin.com), Benjamin Zelditch (LinkedIn Corporation, bzelditch@linkedin.com), Joyce Chen (LinkedIn Corporation, joychen@linkedin.com)

Abstract

Standard A/B testing approaches are mostly based on the t-test in large-scale industry applications. These standard approaches, however, suffer from low statistical power in business settings, due to small sample sizes, non-Gaussian distributions, or return-on-investment (ROI) considerations. In this paper, we propose several approaches to address these challenges: (i) regression adjustment, generalized estimating equations, Mann-Whitney U and zero-trimmed U, which address each of these issues separately, and (ii) a novel doubly robust generalized U that handles ROI considerations, distributional robustness and small samples in one framework. We provide theoretical results on asymptotic normality and efficiency bounds, together with insights on the efficiency gain from theoretical analysis. We further conduct comprehensive simulation studies and apply the methods to multiple real A/B tests.

1 Introduction

Controlled experiments have been the gold standard for measuring the effect of a treatment/drug in biological and medical research for more than 100 years [11, 12]. In the last few decades, the rise of the internet and machine learning (ML) algorithms led to the development and revival of controlled experiments for online internet applications, i.e., A/B testing [22]. Most A/B testing in industry follows standard statistical approaches, e.g., the t-test, particularly in large-scale recommender systems (e.g., Feeds, Ads, Growth), which involve sample sizes on the order of millions to billions, and measure engagement metrics such as clicks or impressions.
In business settings, e.g., Marketing, Software-as-a-Service (SaaS), and Business-to-Business (B2B), there are unique challenges where standard approaches like the t-test can lead to either incorrect conclusions or insufficient statistical power: (i) Return-on-Investment (ROI) or Return-on-Ad-Spend (ROAS) type measurements are almost always a key consideration in business settings. There has been little research on how to efficiently measure this type of metric in the A/B testing setting; (ii) Small sample sizes are very common in business-setting A/B tests, since increasing the sample size typically incurs additional cost; (iii) Revenue, as a core metric in business settings, is typically right-skewed with a heavy tail. Since revenue generation is typically a sparse event conditional on sales outreach or marketing touch-points, we also need to address zero-inflation.

*Corresponding Author. Theoretical development was primarily carried out by C. Wei, who also led the design, execution, and writing of the manuscript. Co-authors contributed to the algorithm implementation, simulation studies, real-world applications, and co-writing the manuscript.

Preprint. arXiv:2505.08128v1 [stat.ME] 13 May 2025

In this paper, we propose to use a series of statistical methods to address the above challenges, including regression adjustment, generalized estimating equations and the Mann-Whitney U. We also develop a novel doubly robust generalized U statistic that combines the advantages of the above methods. As far as we know, this is the first comprehensive treatment of efficient statistical methods for A/B testing in the tech industry, particularly for the business setting. The key contributions of the paper are: 1) Methodology innovations to improve testing efficiency in business settings, particularly, using (i) regression adjustment for
https://arxiv.org/abs/2505.08128v1
ROI considerations, (ii) GEE for addressing small sample sizes with repeated measurements, and (iii) Mann-Whitney U for non-Gaussian data, in particular, the zero-trimmed U test for zero-inflated heavy-tailed data. 2) Theoretical development on (i) systematic analysis of asymptotic efficiency for the proposed approaches, and more importantly (ii) a novel doubly robust generalized U that attains the semi-parametric efficiency bound and can concurrently address ROI, longitudinal analysis, and ill-behaved distributions, as well as (iii) rigorous efficient algorithms for large data for broader applicability. 3) We conducted thorough simulation studies to evaluate the empirical efficiencies and applied the methods to multiple real business applications. In-depth discussion of methodology innovations and theoretical contributions can be found in section 7. Though these methods are proposed to address challenges in the business setting, they are broadly applicable to general A/B tests in tech and to experiments in non-tech fields. The rest of the paper is structured as follows. For the remainder of section 1, we'll discuss related work and introduce the problem setup and preliminaries. In section 2 and section 3, we discuss regression adjustment and GEE. In section 4, we introduce the Mann-Whitney U and zero-trimmed U for non-parametric testing. In section 5, we develop methodology for the doubly robust generalized U test. Then, we conduct simulation studies and real data analysis in section 6 and conclude the paper with discussion in sections 7 and 8. Details on algorithms, theoretical proofs, analytical derivations, and simulation set-up can be found in the Appendix.

1.1 Related Works

There have been multiple research efforts in the tech industry to address limitations of standard t-tests, particularly for low sensitivity and small treatment effects [24]. Covariate adjustment [11] has been widely used as an improvement to the t-test or proportion test in biomedical research [16, 13, 18, 21].
An important relevant development in the tech field is Controlled-experiment Using Pre-Experiment Data (CUPED) [10], which leverages pre-experiment metrics in a simple linear adjustment to reduce variance. Later extensions of the method include leveraging in-experiment data [44, 9], non-linear predictive modeling [32], and individual-variance weighting [26] for further variance reduction. Meanwhile, there are increasing concerns about other challenges, such as repeated measurements [27, 46] and non-Gaussian heavy-tailed distributions [20, 2]. Semi-parametric approaches such as GEE have been well adopted in non-tech fields for repeated measurements [25, 39]. Nonparametric methods, such as the Wilcoxon rank-sum and U-statistics, can provide robustness to ill-behaved distributions [29, 17, 3, 5, 19, 23]. In recent years, U statistics have emerged as an important class of statistical methods in biomedical research [14, 28, 30, 45] and the social sciences [6, 1, 31], with particular developments in genomics [42, 41, 40] and causal inference [43, 38, 7] for public health studies. The applications of U statistics in the tech industry are largely limited to ROC-AUC (equivalent to the Mann-Whitney U [15]) for ML model evaluation, and it is often just used as a point estimate. While there is some development on metric learning and non-directional types of tests (e.g., goodness of fit, independence) using U statistics [8, 34, 33], they are not suitable for A/B testing.

1.2 Problem Setup and Preliminaries

Let's assume we perform an A/B test to compare two treatments
$z = 0$ vs $z = 1$ on a business metric $y$. Our goal is to evaluate "improvement" of $y$ from the treatment over the control group (directional test).

T Test: One common formulation of the "improvement" is $\delta = E(y_{i1} - y_{i0})$, and we can use the t-test for the corresponding null vs alternative hypotheses: $H_0: \delta = 0$ vs $H_1: \delta > 0$. The corresponding t-statistic is
$$t_n = \frac{\bar y_1 - \bar y_0}{\sqrt{\hat v_{10}}},$$
where $\bar y_k$ is the sample mean for $z_i = k$, and $\hat v_{10}$ is the corresponding variance estimator depending on the equal or unequal variance assumption. The normal approximation $t_n \to_d N(0,1)$ can be leveraged to get p-values or confidence intervals.

Statistical Efficiency: We can measure the statistical efficiency of an estimation process by mean squared error (MSE), and define the relative efficiency by the inverse ratio of MSE,
$$r_n(\hat\delta_1, \hat\delta_2) = \frac{E(\hat\delta_2 - \delta)^2}{E(\hat\delta_1 - \delta)^2} = \frac{Var(\hat\delta_2) + Bias^2(\hat\delta_2)}{Var(\hat\delta_1) + Bias^2(\hat\delta_1)},$$
where $\hat\delta_1$ and $\hat\delta_2$ are two different estimators of $\delta$. When both estimators are unbiased, the relative efficiency reduces to the ratio of variances. We can define the asymptotic relative efficiency (ARE) as $r(\hat\delta_1, \hat\delta_2) = \lim_{n\to\infty} r_n(\hat\delta_1, \hat\delta_2)$. For hypothesis testing, it can happen that two testing procedures do not correspond to the same parameter. In this case, we can use Pitman efficiency, $r(t_1, t_2) = \lim_{n\to\infty} \frac{n_{t_2}}{n_{t_1}}$, where $n_{t_1}$ and $n_{t_2}$ are the sample sizes required to reach the same power $\beta$ for an $\alpha$-level test, with test statistics $t_1$ and $t_2$ respectively. Assuming a local alternative (e.g., a small location shift $\delta$) and asymptotic normality of the test statistics (i.e., $\sqrt{n}\, t_{n,i} \to_d N(\mu_i(\delta), \sigma_i^2(\delta))$), Pitman efficiency is equivalent to the following alternative definition of efficiency:
$$r(t_1, t_2) = \frac{\lambda_1^2}{\lambda_2^2} = \Big( \frac{\mu_1'(0)/\sigma_1(0)}{\mu_2'(0)/\sigma_2(0)} \Big)^2,$$
where $\lambda_k = \frac{\mu_k'(0)}{\sigma_k(0)}$ is the slope of test $k$. The equivalence can be shown by observing the power function $\beta(\delta) = 1 - \Phi(z_\alpha - \sqrt{n}\,\delta\lambda)$, and thus $n \propto \frac{1}{\lambda^2}$. In this paper, we will evaluate the statistical efficiency of a series of methodologies addressing challenges in the business setting, by comparing them with the t-test and among themselves.
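A minimal implementation of this t statistic with the normal approximation might look as follows (our own sketch; the function name and the unequal-variance choice for $\hat v_{10}$ are our assumptions, not prescribed by the paper):

```python
import numpy as np
from math import erf, sqrt

def t_test_one_sided(y1, y0):
    """t_n = (ybar_1 - ybar_0) / sqrt(vhat_10) under the unequal-variance
    assumption, with a one-sided p-value for H1: delta > 0 taken from the
    normal approximation t_n ->_d N(0, 1)."""
    y1, y0 = np.asarray(y1, float), np.asarray(y0, float)
    vhat = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    t = (y1.mean() - y0.mean()) / sqrt(vhat)
    p = 0.5 * (1.0 - erf(t / sqrt(2.0)))  # P(N(0,1) > t)
    return t, p
```

For large samples this normal approximation is essentially indistinguishable from the exact t distribution.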
The comparison will be either asymptotic efficiency in analytic form or empirical efficiency via simulation studies.

2 Regression Adjustment for ROI

Cost is a core guardrail (and risk) metric in the evaluation of algorithms or strategies in a business setting. One common strategy is to perform t-tests on the primary metrics (e.g., revenue) and guardrail metrics (e.g., cost) separately. However, this type of strategy lacks a unified view of ROI and can lead to decision confusion when the conclusions on the two metrics go in opposite directions. Here, we propose regression adjustment [11, 13] as a fundamental approach for measuring ROI, by forming the parametric model: $E(y_i \mid z_i, w_i) = g(\beta_0 + \beta_1 z_i + \gamma^T w_i)$, where $y_i$ denotes the revenue or other primary metric, $z_i$ denotes the treatment assignment, $w_i$ is the vector of variables we want to control for (e.g., cost), $y_i \mid z_i, w_i$ follows a distribution in a parametric family with mean $E(y_i \mid z_i, w_i)$, and $g$ is the link function. To see how $\beta_1$ provides a unified view of "ROI", assume $y_i$ is revenue and $w_i$ is a scalar metric of cost (or investment); then $\beta_1$ can be interpreted as the "treatment effect on revenue assuming the same level of investment". We can then perform hypothesis testing (e.g., a Wald test) on $\beta_1$ for $H_0: \beta_1 = 0$ vs $H_1: \beta_1 > 0$. Besides the "ROI" consideration, regression adjustment has two other significant advantages over the t-test: (i) when there is confounding, regression adjustment is unbiased, while the t-test and similar tests like
https://arxiv.org/abs/2505.08128v1
proportion tests are biased. (ii) When there is no confounding, regression adjustment has smaller variance and is thus more efficient. In fact, under parametric settings, regression adjustment based on maximum likelihood estimation reaches the Cramér–Rao lower bound [37] and is hence the most efficient among all unbiased estimators (Appendix B.1).

To illustrate how and where efficiency is gained over the t-test, assume a Gaussian distribution and the identity link: $y_i = \beta_0 + \beta_1 z_i + \gamma^T w_i + \epsilon_i$, $\epsilon_i \sim N(0, \sigma^2)$, where $\beta_1$ measures the treatment effect controlling for $w$, i.e., $\beta_1 = E(y \mid z=1, w) - E(y \mid z=0, w)$. Under confounding and the above parametric setup, we can show the regression estimate is unbiased for the treatment effect, i.e., $E(y(1) - y(0)) = \beta_1$. Meanwhile, the t-test estimate ($\hat{\tau} = \bar{y}_1 - \bar{y}_0$) is biased by the constant term $\gamma^T[E(w \mid z=1) - E(w \mid z=0)]$ (Appendix B.2). In this case, the asymptotic relative efficiency is dominated by the bias term (the variances of both estimators shrink at rate $1/n$), and hence $r(\hat{\beta}_1, \hat{\tau}) \to \infty$ as $n \to \infty$. When there is no confounding, i.e., $z \perp w$, regression adjustment and the t-test are both unbiased, but regression adjustment is more efficient: $r(\hat{\beta}_1, \hat{\tau}) = 1 + \frac{\sigma_w^2}{\sigma^2} \ge 1$, where $\sigma_w^2 = \gamma^T \mathrm{Var}(w)\gamma$ represents the variance of $y$ explained by $w$ (Appendix B.3). Regression adjustment thus has at least the same efficiency as the t-test, and as long as $w$ explains some variance of $y$ (i.e., $\sigma_w^2 > 0$, or equivalently $\gamma \ne 0$), it is strictly more efficient. This is also the key reason behind the efficiency of all the CUPED-type methods: they include pre-experiment variables $w$ that explain some variance of $y$ and satisfy $z \perp w$ by design.

3 GEE for Longitudinal Analysis

In almost all A/B testing in industry, we measure metrics regularly over time. This is a unique characteristic of the tech industry: repeated measurements of metrics have negligible (additional) cost, whereas in other fields, such as biomedicine, repeated measurements are often constrained by expense.
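The bias and efficiency comparison for regression adjustment versus the t-test in Section 2 can be reproduced with a short simulation under the Gaussian identity-link model. This is an illustrative sketch with made-up parameters: the covariate $w$ confounds by driving both the assignment propensity and the response.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta1, gamma = 5000, 0.5, 0.8

# Confounded assignment: cost w shifts the propensity of treatment.
w = rng.normal(0.0, 1.0, n)
z = (rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * w))).astype(float)
y = 1.0 + beta1 * z + gamma * w + rng.normal(0.0, 1.0, n)

# Regression adjustment: y ~ 1 + z + w with the identity link (OLS).
X = np.column_stack([np.ones(n), z, w])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# The naive difference in means (t-test estimand) ignores the confounder;
# its bias is the sample analogue of gamma^T [E(w|z=1) - E(w|z=0)].
naive = y[z == 1].mean() - y[z == 0].mean()
bias_term = gamma * (w[z == 1].mean() - w[z == 0].mean())
```

Under this setup `beta_hat[1]` concentrates around the true effect 0.5, while `naive` overshoots by roughly `bias_term`, matching the Appendix B.2 decomposition.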
Therefore, it is essential to leverage longitudinal repeated measurements in A/B testing to improve power, particularly in business settings where sample-size limitations are prevalent. Instead of the common practice of analyzing a snapshot of the data, we propose to perform longitudinal analysis on all collected data, leveraging GEE [25, 39]. Assume the following model: $E(y_{it} \mid z_i, w_{it}) = \mu_{it} = g(\beta_0 + \beta_1 z_i + \gamma^T w_{it})$, where $y_{it}$ is the repeated measurement of the primary metric, and $w_{it}$ is the set of repeated measurements of variables (e.g., cost) and time-invariant variables (e.g., metadata) that we want to control for. $\beta_1$ measures the treatment effect on $y$ controlling for $w_{it}$. Note that we can change the parametric form inside $g(\cdot)$ to measure more complex treatment effects, e.g., a growth-curve effect $\beta_1 + t\beta_2$ by setting $\mu_{it} = g(\beta_0 + \beta_1 z_i + \beta_2 z_i t + \gamma^T w_{it})$.

We use the following GEE for estimation and inference: $\sum_i D_i^T V_i^{-1}(y_i - \mu_i) = 0$, where $y_i = [y_{i0}, \cdots, y_{it}, \cdots]^T$, $\mu_i = [\mu_{i0}, \cdots, \mu_{it}, \cdots]^T$, $\theta = [\beta^T, \gamma^T]^T$, and $D_i = \frac{\partial \mu_i}{\partial \theta}$, $V_i = A_i R(\alpha) A_i$, $A_i = \mathrm{diag}\{\sqrt{\mathrm{Var}(y_{it} \mid z_i, w_{it})}\}$. Here $R(\alpha)$ is a working correlation matrix representing the correlation structure of the repeated measurements, and $A_i$ is a diagonal matrix with the standard deviation of the $t$-th measurement on the $t$-th diagonal. Let $u_i = D_i^T V_i^{-1}(y_i - \mu_i)$ and $B = E(D^T V^{-1} D)$; we can estimate $\theta$ iteratively via $\theta^{(s+1)} = \theta^{(s)} + \left(\sum_i D_i^T V_i^{-1} D_i\right)^{-1} \sum_i u_i(\theta^{(s)})$, where $\hat{B} = \frac{1}{n}\sum_i D_i^T V_i^{-1} D_i$ is the empirical estimate of $B$. The estimate $\hat{\theta}$ is known to be asymptotically normal under mild regularity conditions. For completeness (and connection to Section 5.1), we state the results
in the following theorem, with a sketch of the proof in Appendix C.1.

Theorem 1. Let $\Sigma = \mathrm{Var}(u_i)$. Then, under mild regularity conditions, we have consistency, $\hat{\theta} \to_p \theta$, and asymptotic normality, $\sqrt{n}(\hat{\theta} - \theta) \to_d N(0, B^{-T}\Sigma B^{-1})$. Here the variance can be estimated via $\hat{\Sigma} = \frac{1}{n}\sum_i \hat{u}_i \hat{u}_i^T$ and $\hat{B} = \frac{1}{n}\sum_i D_i^T V_i^{-1} D_i$.

Since GEE uses all the data, it intuitively has higher efficiency for detecting the treatment effect than snapshot analysis. To see where the efficiency comes from, assume a linear model with Gaussian errors: $y_{it} = \beta_0 + \beta_1 z_i + \gamma^T w_{it} + \epsilon_{it}$, where $\epsilon_{it} \sim N(0, \sigma^2)$, $\mathrm{Cov}(\epsilon_i) = \sigma^2 R$, and $R \succ 0$. For ease of comparison with snapshot regression analysis, we further assume $w_{it}$ is constant over time, i.e., $w_{it} = w_i$. We can show the variance of the GEE estimate is $\mathrm{Var}(\hat{\theta}_{gee}) = \frac{\sigma^2}{e^T R^{-1} e}\left(\sum_i x_i x_i^T\right)^{-1}$, where $x_i = [1, z_i, w_i^T]^T$, $e = [1, 1, \cdots, 1]^T$, and $X_i = e x_i^T$. For the snapshot regression analysis, assume we use the last time point; the corresponding estimate $\hat{\theta}_{reg}$ has variance $\mathrm{Var}(\hat{\theta}_{reg}) = \sigma^2\left(\sum_i x_i x_i^T\right)^{-1}$. The relative efficiency is then $r(\hat{\beta}_{1,gee}, \hat{\beta}_{1,reg}) = e^T R^{-1} e > 1$. We provide the derivation and additional insights in Appendix C.2.

4 U Statistics for Non-Gaussian Distributed Metrics

In many common business scenarios, primary metrics such as revenue exhibit strongly non-Gaussian characteristics, e.g., right-skewed, heavy-tailed distributions. Further, important business events such as conversions happen sparsely, making the primary metrics often zero-inflated. In these scenarios, standard parametric approaches such as the t-test can suffer from inflated type I error or power loss; more robust and efficient nonparametric tests are needed.

4.1 Mann-Whitney U Test

Given two independent samples $\{y_{1i}\}_{i=1}^{n_1}$ and $\{y_{0j}\}_{j=1}^{n_0}$, the Mann-Whitney U statistic [29, 17] is given by

$U = \frac{1}{n_0 n_1}\sum_{i=1}^{n_1}\sum_{j=1}^{n_0} I_{y_{1i} \ge y_{0j}}$,   (1)

where $I$ is the indicator function. Observe that $E[U] = E[I_{\{y_{1i} \ge y_{0j}\}}] = P(y_{1i} \ge y_{0j})$, and so $U$ is an unbiased estimator of $\delta = P(y_{1i} > y_{0j})$ for continuous outcomes.
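The statistic in (1) can be computed directly from its definition; here is a brute-force sketch of the double sum, quadratic in the sample sizes and therefore only suitable for small samples (the toy inputs are illustrative).

```python
def mann_whitney_u(y1, y0):
    """U = (1 / (n1 * n0)) * sum_i sum_j I(y1i >= y0j), per eq. (1)."""
    n1, n0 = len(y1), len(y0)
    count = sum(1 for a in y1 for b in y0 if a >= b)
    return count / (n1 * n0)

# Every treated value beats every control value -> U = 1.
u_sep = mann_whitney_u([3, 4, 5], [1, 2])
# Exactly one of the four cross-pairs has y1 >= y0 -> U = 0.25.
u_mix = mann_whitney_u([1, 3], [2, 4])
```

For large samples the rank identity discussed in this section replaces the double loop with a single sort.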
We can then use the Mann-Whitney U statistic to test whether $y_1$ is stochastically greater than $y_0$. Formally, the null hypothesis is $H_0: P(y_1 \ge y_0) = \frac{1}{2}$ and the alternative is $H_1: P(y_1 \ge y_0) > \frac{1}{2}$. It can be shown that $\sqrt{n}(U - \delta) \to_d N(0, \sigma_u^2)$, where $\sigma_u^2 = \frac{n_0 + n_1}{12}\left(\frac{1}{n_0} + \frac{1}{n_1}\right)$ under $H_0$. Leveraging this, one can perform a score-type hypothesis test of $H_0$ using asymptotic normality. Note that this is different from testing a difference in means.

Let $\kappa(y_{1i})$ denote the rank of $y_{1i}$ in the combined sample of $\{y_{1i}\}_{i=1}^{n_1}$ and $\{y_{0j}\}_{j=1}^{n_0}$ in descending order, i.e., $\kappa(y_{1i}) = 1 + \sum_{i' \ne i}^{n_1} I_{y_{1i} < y_{1i'}} + \sum_{j}^{n_0} I_{y_{1i} < y_{0j}}$. The Wilcoxon rank-sum test statistic is given by $W = \sum_{i=1}^{n_1} \kappa(y_{1i}) - \frac{n_1(n_1 + n_0 + 1)}{2} = -n_1 n_0 U + \frac{n_1 n_0}{2}$. This relationship between $W$ and $U$ allows us to compute $U$ efficiently for large sample sizes by leveraging fast ranking algorithms.

To compare the relative efficiency of the Mann-Whitney U and t tests, we assume a local alternative of a small location shift $\delta$ from a distribution $F$ with density $f$ and variance $\sigma^2$. The Pitman relative efficiency is $r(U, \tau) = \frac{\lambda_U^2}{\lambda_\tau^2} = 12\sigma^2\left(\int f^2(x)\,dx\right)^2$ (Appendix D.1). Using this result, we can show that for the normal distribution $r(U, \tau) = \frac{3}{\pi}$; for the Laplace distribution $r(U, \tau) = 1.5$; for the log-normal, $r(U, \tau)$ increases exponentially with the variance parameter of the log-normal; and for the Cauchy distribution $r(U, \tau) = \infty$, as the t-test breaks down (details in Appendix D.1.1). For these common heavy-tailed distributions, the Mann-Whitney U test is more efficient. Even for perfectly normally distributed data, the Mann-Whitney
U test's efficiency is very close to that of the t-test.

4.2 Zero-Trimmed U Test

The challenge of non-Gaussian distributions in business scenarios is often twofold: heavy tails and zero inflation. We can exploit the zero-inflation characteristic to further improve efficiency. The idea is to trim off the excess zeros and focus on the continuously distributed part plus the "residual" zero difference. Let $n_1^+ = \sum_{i=1}^{n_1} I_{y_{1i} > 0}$ and $n_0^+ = \sum_{j=1}^{n_0} I_{y_{0j} > 0}$. The proportions of positive values in the two samples are $\hat{p}_1 = \frac{n_1^+}{n_1}$ and $\hat{p}_0 = \frac{n_0^+}{n_0}$; define $\hat{p} = \max\{\hat{p}_1, \hat{p}_0\}$. Remove $n_1(1 - \hat{p})$ zeros from $\{y_{1i}\}_{i=1}^{n_1}$ and $n_0(1 - \hat{p})$ zeros from $\{y_{0j}\}_{j=1}^{n_0}$. Let $\{y'_{1i}\}_{i=1}^{n'_1}$ and $\{y'_{0j}\}_{j=1}^{n'_0}$ denote the residual samples containing $n'_1 = n_1\hat{p}$ and $n'_0 = n_0\hat{p}$ data points, respectively. Let $\kappa(y'_{1i})$ denote the rank of $y'_{1i}$ in the combined residual samples in descending order. The zero-trimmed Wilcoxon rank-sum test statistic is given by $W' = \sum_{i=1}^{n'_1} \kappa(y'_{1i}) - \frac{n'_1(n'_1 + n'_0 + 1)}{2}$.

Conditioning on $\hat{p}_0$ and $\hat{p}_1$, we have $E(W' \mid \hat{p}_1, \hat{p}_0) = \frac{n'_1 n_0^+ - n_1^+ n'_0}{2}$ and $\mathrm{Var}(W' \mid \hat{p}_1, \hat{p}_0) = \frac{n_0^+ n_1^+(n_0^+ + n_1^+ + 1)}{12}$. We can then show (details in Appendix D.2) that its variance is $\sigma^2_{W'} = \frac{n_1^2 n_0^2}{4}\hat{p}^2\left(\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_0(1-\hat{p}_0)}{n_0}\right) + \frac{n_1 n_0 \hat{p}_1 \hat{p}_0}{12}(n_1\hat{p}_1 + n_0\hat{p}_0) + o_p(n^3)$. We can estimate the variance empirically as $\hat{\sigma}^2_{W'} = \frac{n_1^2 n_0^2}{4}\hat{p}^2\left(\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_0(1-\hat{p}_0)}{n_0}\right) + \frac{n_0^+ n_1^+(n_0^+ + n_1^+)}{12}$ and perform statistical testing via $\frac{W'}{\hat{\sigma}_{W'}}$.

To facilitate the comparison of efficiency, define $m = p_1 - p_0$ and $d = P(y_1^+ > y_0^+) - \frac{1}{2}$. The compound hypothesis is $H_0: m = 0$ and $d = 0$, vs $H_1: (1 - I_{m>0})(1 - I_{d>0}) = 0$. We state the following theorem for Pitman efficiency (proof in Appendix D.3).

Theorem 2. Let $p$ denote the proportion of positive values under $H_0$, $\varphi$ the direction of the compound $H_1$, and $\nu$ the effect size along direction $\varphi$, i.e., $m(\nu) = \nu\cos\varphi$ and $d(\nu) = \nu\sin\varphi$. The compound hypothesis can then be transformed into a simple hypothesis test along direction $\varphi$, i.e., $H_0: \nu = 0$ vs $H_1^\varphi: \nu > 0$.
The corresponding Pitman efficiency is

$r_\varphi(W', W) = \frac{\sigma_W^2(0)}{\sigma_{W'}^2(0)}\left(\frac{\mu'_{W'}(0)}{\mu'_W(0)}\right)^2 = \frac{1 - p + \frac{p^2}{3}}{p^2 - p^3 + \frac{p^2}{3}}\left(\frac{p\cos\varphi + 2p^2\sin\varphi}{\cos\varphi + 2p^2\sin\varphi}\right)^2$.   (2)

We can then investigate the relative efficiency by varying $p \in (0, 1]$ and $\varphi \in [0, \frac{\pi}{2}]$ (Appendix Figures 2 and 3). Note that $r_\varphi(W', W)$ is with respect to the variance adjusted for ties, $\hat{\sigma}_W^2 = \frac{n_1^2 n_0^2}{4}\left(\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_0(1-\hat{p}_0)}{n_0}\right) + \frac{n_0^+ n_1^+(n_0^+ + n_1^+)}{12}$. We also provide results for $r_\varphi(W', W^o)$, the efficiency over $W^o$ with the original unadjusted variance $\frac{n_1 n_0(n_1 + n_0 + 1)}{12}$, in Appendix eq. (35).

5 Advanced Distribution-Free Tests

In this section we develop a general and robust U-statistics-based methodology that can (i) measure various definitions of treatment effect, (ii) address both covariate adjustment and "ill-behaved" distributions in business settings, and (iii) utilize repeated measurements in A/B tests.

5.1 Doubly Robust Generalized U Test

Let $y_i$ denote the response variable that measures business return (e.g., conversion or revenue), $z_i$ the treatment assignment, and $w_i$ the variables that need to be adjusted for (e.g., cost or impressions). We define the treatment effect as $\delta = E(\phi(y_{i1} - y_{i0}))$, where $y_{i1}$ and $y_{i0}$ represent the response variables for $z_i = 1$ and $z_i = 0$, respectively. Obviously, we observe only one of $y_{i0}$ and $y_{i1}$. Here $\phi(\cdot)$ is a monotonic function with finite second moment, i.e., $E(\phi^2(y_{i1} - y_{i0})) < \infty$. For example, when $\phi(y_{i1} - y_{j0}) = I_{y_{i1} > y_{j0}}$, we have $\delta = P(y_{i1} > y_{i0})$. We can also use other monotonic finite
functions, such as the logistic function $\phi(y_{i1} - y_{j0}) = [1 + \exp(-(y_{i1} - y_{j0}))]^{-1}$, the probit function $\phi(y_{i1} - y_{j0}) = \Phi(y_{i1} - y_{j0})$, or the signed Laplacian kernel $\phi(y_{i1} - y_{j0}) = \mathrm{sign}(y_{i1} - y_{j0})\exp\left(-\frac{|y_{i1} - y_{j0}|}{\sigma}\right)$. Note that when $\phi(\cdot)$ is the identity, we get $\delta = E(y_{i1} - y_{i0})$, the treatment effect corresponding to the t-test; however, the identity does not guarantee the finite-second-moment condition (e.g., infinite second moment under the Cauchy distribution).

Let $p = E(z)$. We can define a generalized U statistic: $U_n = \binom{n}{2}^{-1}\sum_{(i,j) \in C_2^n} h(y_i, y_j)$, where $h(y_i, y_j) = \phi(y_{i1} - y_{j0})\xi_{ij} + \phi(y_{j1} - y_{i0})\xi_{ji}$ and $\xi_{ij} = \frac{z_i(1 - z_j)}{2p(1 - p)}$. When there is no confounding, $E(U_n) = \delta$; in fact, when $\phi(y_{i1} - y_{j0}) = I_{y_{i1} > y_{j0}}$, it is equivalent to (1). To address covariate adjustment, let $\pi_i = E(z_i \mid w_i)$ and $g_{ij} = E(\phi(y_{i1} - y_{j0}) \mid w_i, w_j)$. We can form an efficient doubly robust [36] version of the generalized U statistic (DRGU):

$U_n^{DR} = \binom{n}{2}^{-1}\sum_{(i,j) \in C_2^n} h_{ij}^{DR}$,   (3)

where $h_{ij}^{DR} = \frac{z_i(1-z_j)}{2\pi_i(1-\pi_j)}(\phi(y_{i1} - y_{j0}) - g_{ij}) + \frac{z_j(1-z_i)}{2\pi_j(1-\pi_i)}(\phi(y_{j1} - y_{i0}) - g_{ji}) + \frac{g_{ij} + g_{ji}}{2}$. When $\pi$ and $g$ are known, we can show that $E(h_{ij}^{DR}) = \delta$ and thus $E(U_n^{DR}) = \delta$ (Appendix E.1). Further, the variance of $U_n^{DR}$ reaches the semi-parametric bound (Appendix E.2), i.e., the smallest variance among all unbiased estimators under the semi-parametric setup. In most applications we do not know $\pi$ and $g$, and need to estimate them via $\hat{\pi}$ and $\hat{g}$. As long as one of $\hat{\pi}$ and $\hat{g}$ is a consistent estimator, the corresponding U statistic $\hat{U}_n^{DR}$ is also consistent, hence doubly robust. We can estimate $\pi_i$ and $g_{ij}$ by imposing a linear structure: $\pi(w_i; \beta) = \varphi([1, w_i^T]^T \cdot \beta)$ and $g(w_i, w_j; \gamma) = \psi([1, w_i^T, w_j^T]^T \cdot \gamma)$, where $\varphi(\cdot)$ and $\psi(\cdot)$ are link functions. Note that $g(\cdot)$ is a model on a pair of data points and can be considered a simplified graph neural network.

For estimation and inference of the parameters $\theta = (\delta, \beta, \gamma)$, one approach is sequential: first estimate $\hat{\beta}$ and $\hat{\gamma}$ with the regression models, then calculate $\hat{U}_n^{DR}(\hat{\beta}, \hat{\gamma})$ and the corresponding asymptotic variance, accounting for the variance contributed by $\hat{\beta}$ and $\hat{\gamma}$.
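To make the construction concrete, here is a sketch of the generalized U statistic $U_n$ with a known assignment probability $p$ and no covariate adjustment, using the indicator kernel by default; the toy data and kernel choices are illustrative only.

```python
from itertools import combinations

def generalized_u(y, z, p, phi=lambda d: float(d > 0)):
    """U_n = C(n,2)^{-1} sum_{i<j} h(y_i, y_j), with
    h = phi(y_i - y_j) * xi_ij + phi(y_j - y_i) * xi_ji and
    xi_ij = z_i (1 - z_j) / (2 p (1 - p)).
    y holds each unit's observed outcome, z its treatment indicator."""
    n, total = len(y), 0.0
    for i, j in combinations(range(n), 2):
        scale = 2 * p * (1 - p)
        xi_ij = z[i] * (1 - z[j]) / scale
        xi_ji = z[j] * (1 - z[i]) / scale
        total += phi(y[i] - y[j]) * xi_ij + phi(y[j] - y[i]) * xi_ji
    return total / (n * (n - 1) / 2)

# Treated outcomes (2.0) always exceed control outcomes (1.0); with p = 1/2
# each of the 4 cross-pairs contributes h = 2 and the 2 within-group pairs
# contribute 0, so U_n = 8/6.
u = generalized_u([2.0, 1.0, 2.0, 1.0], [1, 0, 1, 0], 0.5)
```

Note that $U_n$ is unbiased over the randomization distribution, but an individual realization need not lie in $[0, 1]$ even for the indicator kernel, as the $8/6$ here shows.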
We will leverage U-statistics-based Generalized Estimating Equations (UGEE) [23] for joint estimation and inference:

$U_n(\theta) = \sum_{(i,j) \in C_2^n} U_{n,ij} = \sum_{(i,j) \in C_2^n} G_{ij}(h_{ij} - f_{ij}) = 0$,   (4)

where $h_{ij} = [h_{ij1}, h_{ij2}, h_{ij3}]^T$, $f_{ij} = [f_{ij1}, f_{ij2}, f_{ij3}]^T$, $h_{ij1} = \frac{z_i(1-z_j)}{2\pi_i(1-\pi_j)}(\phi(y_{i1} - y_{j0}) - g_{ij}) + \frac{z_j(1-z_i)}{2\pi_j(1-\pi_i)}(\phi(y_{j1} - y_{i0}) - g_{ji}) + \frac{g_{ij} + g_{ji}}{2}$, $h_{ij2} = z_i + z_j$, $h_{ij3} = z_i(1-z_j)\phi(y_{i1} - y_{j0}) + z_j(1-z_i)\phi(y_{j1} - y_{i0})$, $f_{ij1} = \delta$, $f_{ij2} = \pi_i + \pi_j$, $f_{ij3} = \pi_i(1-\pi_j)g_{ij} + \pi_j(1-\pi_i)g_{ji}$, $\pi_i = \pi(w_i; \beta)$, $g_{ij} = g(w_i, w_j; \gamma)$, and $G_{ij} = D_{ij}^T V_{ij}^{-1}$, $D_{ij} = \frac{\partial f_{ij}}{\partial \theta}$, $V_{ij} = \mathrm{diag}\{\mathrm{Var}(h_{ijk} \mid w_i, w_j)\}$.

Theorem 3. Let $u_i = E(U_{n,ij} \mid y_{i0}, y_{i1}, z_i, w_i)$, $\Sigma = \mathrm{Var}(u_i)$, $M_{ij} = \frac{\partial(f_{ij} - h_{ij})}{\partial \theta}$, and $B = E(GM)$. Let $\hat{\delta}$ be the first element of $\hat{\theta}$. Then, under mild conditions, we have consistency, $\hat{\theta} \to_p \theta$, and asymptotic normality:

$\sqrt{n}(\hat{\theta} - \theta) \to_d N(0, 4B^{-T}\Sigma B^{-1})$.   (5)

Further, as long as one of $\pi$ and $g$ is correctly specified, $\hat{\delta}$ is consistent. When both are correctly specified, $\hat{\delta}$ attains the semi-parametric efficiency bound, i.e., no other regular estimator can have smaller asymptotic variance. The proof is provided in Appendix E.3.

We can estimate $\theta$ via either of the following iterative algorithms: $\theta^{(t+1)} = \theta^{(t)} - \left(\frac{\partial U_n(\theta)}{\partial \theta}\Big|_{\theta^{(t)}}\right)^{-1} U_n(\theta^{(t)})$, or $\theta^{(t+1)} = \theta^{(t)} + \left(\hat{B}(\theta^{(t)})\right)^{-1} U_n(\theta^{(t)})$, where $\hat{B} = \binom{n}{2}^{-1}\sum_{(i,j) \in C_2^n} \hat{G}_{ij}\hat{M}_{ij}$. $\Sigma$ can be estimated empirically from the outer product of $\hat{u}_i = \frac{1}{n-1}\sum_{j \ne i} U_{ij}(\hat{\theta})$, i.e., $\hat{\Sigma} = \frac{1}{n}\sum_i \hat{u}_i \hat{u}_i^T$.

5.2 DR Generalized U for Longitudinal Data

Let $y_{it}$ denote the metrics measured over time, $z_i$ the treatment assignment, and $w_{it}$ the variables to be adjusted for. We can measure the treatment effect over time: $\delta_t = E(\phi(y_{it1} - y_{it0}))$, where $y_{it0}$ and $y_{it1}$ are the counterfactual responses for $z_i = 0$ and $z_i = 1$. We can construct a DR-type multivariate U statistic for the longitudinal data,

$U_n^{DR} = \binom{n}{2}^{-1}\sum_{(i,j) \in C_2^n} h_{ij}^{DR}$,   (6)

where $h_{ij}^{DR} = [h_{ij1}, \cdots, h_{ijt}, \cdots, h_{ijT}]^T$ and $h_{ijt} = \frac{z_i(1-z_j)}{2\pi_i(1-\pi_j)}(\phi(y_{it1} - y_{jt0}) - g_{ijt}) + \frac{z_j(1-z_i)}{2\pi_j(1-\pi_i)}(\phi(y_{jt1} - y_{it0}) - g_{jit}) + \frac{g_{ijt} + g_{jit}}{2}$. We can
estimate $\pi$ and $g$ by $E(z_i \mid \boldsymbol{w}_i) = \pi(\boldsymbol{w}_i; \beta) = \varphi([1, \boldsymbol{w}_i^T]^T \cdot \beta)$ and $E(\phi(y_{it1} - y_{jt0}) \mid w_{it}, w_{jt}) = g(w_{it}, w_{jt}; \gamma_t) = \psi([1, w_{it}^T, w_{jt}^T]^T \cdot \gamma_t)$, where $\boldsymbol{w} = [w_1^T, \cdots, w_t^T, \cdots, w_T^T]^T$. We can estimate the parameters and make inference jointly for $\theta = [\delta^T, \beta^T, \gamma^T]^T$ using UGEE:

$U_n(\theta) = \sum_{(i,j) \in C_2^n} U_{n,ij} = \sum_{(i,j) \in C_2^n} G_{ij}(h_{ij} - f_{ij}) = 0$,   (7)

where $h_{ij} = [h_{ij1}^T, h_{ij2}, h_{ij3}^T]^T$, $f_{ij} = [f_{ij1}^T, f_{ij2}, f_{ij3}^T]^T$, $h_{ij1} = h_{ij}^{DR}$, $h_{ij2} = z_i + z_j$, $h_{ij3} = z_i(1-z_j)\phi_{ij} + z_j(1-z_i)\phi_{ji}$, $f_{ij1} = \delta$, $f_{ij2} = \pi_i + \pi_j$, $f_{ij3} = \pi_i(1-\pi_j)g_{ij} + \pi_j(1-\pi_i)g_{ji}$, $\pi_i = \pi(\boldsymbol{w}_i; \beta)$, $g_{ij} = g(\boldsymbol{w}_i, \boldsymbol{w}_j; \gamma)$, and $G_{ij} = D_{ij}^T V_{ij}^{-1}$, $D_{ij} = \frac{\partial f_{ij}}{\partial \theta}$, $V_{ij} = A R(\alpha) A$, $A = \mathrm{diag}\{\sqrt{\mathrm{Var}(h_{ijk} \mid \boldsymbol{w}_i, \boldsymbol{w}_j)}\}$. Note that here $h_{ij2}$ is a scalar and $h_{ij}$ is a vector of length $2T + 1$.

Corollary 4. Let $u_i = E(U_{n,ij} \mid \boldsymbol{y}_{i0}, \boldsymbol{y}_{i1}, z_i, \boldsymbol{w}_i)$, $\Sigma = \mathrm{Var}(u_i)$, $M_{ij} = \frac{\partial(h_{ij} - f_{ij})}{\partial \theta}$, and $B = E(GM)$. Then, under mild conditions, we have consistency, $\hat{\theta} \to_p \theta$, and asymptotic normality, $\sqrt{n}(\hat{\theta} - \theta) \to_d N(0, 4B^{-T}\Sigma B^{-1})$.

Estimation and computation of the asymptotic variance can be performed in the same manner as in Section 5.1 for small to medium sample sizes. For large sample sizes, the computational burden can grow significantly. We devise efficient algorithms for optimization and inference (Algorithm 1 and Algorithm 2) and provide theoretical support for them in Theorem 5 and Theorem 6 (proofs in Appendix A.2). In most applications, we can reduce the number of parameters by imposing structure on the trajectories ($\gamma_t$ and $\delta_t$), for example: (i) set the $g_t$ to the same functional form, i.e., $\gamma_t = \gamma$; (ii) set the $\delta_t$

Algorithm 1: Mini-batch Fisher Scoring for $\hat{\theta} = (\hat{\delta}, \hat{\beta}, \hat{\gamma})$
1: Input: Data $\{(y_i, z_i, w_i)\}_{i=1}^n$, initial parameter $\theta^{(0)}$, step size $\alpha$, batch size $m$, convergence threshold $\varepsilon = c\,n^{-1/2-\rho/2}$ for $\rho > 0$.
2: $t \leftarrow 0$
3: repeat
4:   Sample $m$ rows without replacement: $S_t = \{i_1, \ldots$
, i_m\}$ from the current epoch
5:   Form all $\binom{m}{2}$ unordered pairs $\{(i, j): i < j,\ i, j \in S_t\}$
6:   For each pair $(i, j)$, compute $U_{ij} = G_{ij}(h_{ij} - f_{ij})$ and $B_{ij} = G_{ij}M_{ij}$
7:   Estimate the score: $\hat{U}_t = \frac{2}{m(m-1)}\sum_{i<j} U_{ij}$
8:   Estimate the Jacobian: $\hat{B}_t = \frac{2}{m(m-1)}\sum_{i<j} B_{ij}$
9:   Update the parameter: $\theta^{(t+1)} = \theta^{(t)} + \alpha \hat{B}_t^{-1}\hat{U}_t$
10:  $t \leftarrow t + 1$
11: until $\|\hat{U}_t\| < \varepsilon$
12: Output: $\hat{\theta} = \theta^{(t)}$; $\hat{\delta}$ is its first component

Algorithm 2: Monte Carlo Integration for Estimation of $\widehat{\mathrm{Var}}(\hat{\theta})$
1: Input: Data $\{(y_i, z_i, w_i)\}_{i=1}^n$, parameter $\hat{\theta}$ from Fisher scoring, pair sample size $k = c' n^{1+\epsilon'}$ for $\epsilon' \in (0, 1)$
2: Sample $k$ unordered pairs $\{(i, j)\}$ uniformly without replacement from the $\binom{n}{2}$ possible pairs
3: for all pairs $(i, j)$ in the sample do
4:   Compute $u_{ij} = G_{ij}(h_{ij} - f_{ij})$
5:   Compute $B_{ij} = G_{ij}M_{ij}$
6: end for
7: Compute the mean: $\bar{u} = \frac{1}{k}\sum_{(i,j)} u_{ij}$
8: Estimate $\hat{B} = \frac{1}{k}\sum_{(i,j)} B_{ij}$
9: Estimate $\hat{\Sigma} = \frac{1}{k}\sum_{(i,j)} (u_{ij} - \bar{u})(u_{ij} - \bar{u})^T$
10: Output: $\widehat{\mathrm{Var}}(\hat{\theta}) = \frac{4(\hat{B}^{-1})^T\hat{\Sigma}\hat{B}^{-1}}{n}$

to be a simple linear form, e.g., $\delta_t = \delta$ or $\delta_t = \delta_1 + \delta_2 t$. Our simulations and real application use these structures.

Theorem 5 (Decoupling of Optimization and Inference). Assume the estimating equation $\bar{U}_n(\theta) = \binom{n}{2}^{-1}\sum_{(i,j) \in C_2^n} U_{n,ij}(\theta) = 0$ is solved by a numerical algorithm producing $\hat{\theta}$ such that $\|\bar{U}_n(\hat{\theta})\| = o_p(n^{-1/2})$. Then $\sqrt{n}(\hat{\theta} - \theta) \to_d N(0, 4(B^{-1})^T\Sigma B^{-1})$. In particular, the small algorithmic error does not affect the first-order asymptotic distribution.

Theorem 6 (Monte Carlo Error Bound). Let $U_n^v = \binom{n}{2}^{-1}\sum_{i<j} v(o_i, o_j)$ with a symmetric, sub-Gaussian kernel $v$ (proxy variance $\sigma^2$). Form the Monte Carlo estimator $\hat{U}_k = \frac{1}{k}\sum_{(i,j) \in C_k} v(o_i, o_j)$, where the $k$ pairs are sampled uniformly without replacement from the $\binom{n}{2}$ possible pairs, and let the average overlap factor be $\Delta = O(k/n)$. Then for any $\epsilon > 0$ and $\eta \in (0, 1)$, $P(|\hat{U}_k - E[\hat{U}_k]| > \epsilon) \le 2\exp\left(-\frac{k\epsilon^2}{2\sigma^2(1+\Delta)}\right)$, and hence, with effective sample size $\tilde{k} = k/(1+\Delta)$, $|\hat{U}_k - E[U_n^v]| \le \sqrt{\frac{2\sigma^2}{\tilde{k}}\log\frac{2}{\eta}}$ with probability $1 - \eta$. In particular, $\hat{U}_k - E[U_n^v] = O_p\left(\sqrt{\frac{1}{k} + \frac{1}{n}}\right)$
, so choosing $k = O(n^{1+\epsilon})$ makes the Monte Carlo error asymptotically negligible.

6 Experiments and Results

6.1 Simulation Studies

We perform comprehensive simulation studies to evaluate the performance of the proposed methods. Due to space limitations, we summarize and highlight the results here.

Regression Adjustment: We simulate a confounding effect and Poisson responses. When there is no confounding, both the t-test and RA control type I error, while RA has higher power than the t-test. Under confounding, the t-test cannot control type I error while RA can. (Appendix F.1)

GEE: We simulate a confounding effect, Poisson responses, and repeated measurements. Both regression and GEE control type I error under confounding, while GEE has higher power. (Appendix F.2)

Mann-Whitney U: For heavy-tailed distributions with 50% zeros, the Zero-trimmed U has higher power than the standard Mann-Whitney U most of the time, and the standard U has higher power than the t-test. All three methods control type I error for zero-inflated heavy-tailed data. (Appendix F.3)

Table 1: Power Comparison for Heavy-Tailed Distributions with Equal Zero-Inflation (50%)

                       Positive Cauchy (n=200)                 LogNormal (n=200)
Effect Size   Zero-trimmed U  Standard U  t-test     Zero-trimmed U  Standard U  t-test
0.25          0.079           0.065       0.011      0.044           0.044       0.009
0.50          0.165           0.094       0.026      0.067           0.059       0.004
0.75          0.339           0.166       0.031      0.090           0.067       0.007
1.00          0.555           0.262       0.048      0.138           0.082       0.011

Doubly Robust Generalized U: We simulate a confounding effect with heavy-tailed distributions. We compare the type I error rates and power of the correctly specified DRGU, a correctly specified linear regression (OLS), and the Wilcoxon rank-sum test U (which does not account for confounding covariates). To probe double robustness, we set up misDRGU to misspecify the quadratic propensity score model with a linear mean model, while the outcome model in misDRGU is specified correctly.
(Appendix F.4.1)

Table 2: Power of DRGU Adjusting for Confounding Effect

Distribution   Sample size   DRGU    misDRGU   OLS     U
Normal         200           0.750   0.585     0.940   0.299
               50            0.135   0.085     0.135   0.035
LogNormal      200           0.610   0.515     0.435   0.235
               50            0.260   0.210     0.190   0.110
Cauchy         200           0.660   0.580     0.435   0.310
               50            0.265   0.180     0.165   0.130

Longitudinal DRGU: We compare three models: longDRGU, DRGU using the last-timepoint data snapshot, and GEE. The time-varying covariates highlight the strength of the longitudinal method compared to snapshot analysis. (Appendix F.4.2)

Table 3: Power of DRGU for Longitudinal Data

Distribution   Sample size   Long DRGU   DRGU   GEE
Normal         200           0.85        0.88   0.92
               50            0.52        0.39   0.75
LogNormal      200           0.85        0.78   0.68
               50            0.37        0.30   0.33
Cauchy         200           0.83        0.76   0.66
               50            0.38        0.32   0.29

6.2 Applications in Business Setting

Email Marketing: We conducted a user-level A/B test comparing our legacy email marketing system against a newer version based on a neural bandit. We measured the downstream impact on conversion value, a proprietary metric measuring the value of conversions. The conversion value exhibited extreme zero inflation (>95%) and heavy tails (among the converted). Using the Zero-trimmed U test, we detected a statistically significant lift (+0.94%) in overall conversion value (p-value < 0.001). By contrast, the t-test was not able to detect a significant effect on the conversion value metric (p-value =
0.249). (Appendix G.1)

Targeting in Feed: We conducted a user-level A/B test to evaluate the impact of a new algorithm for marketing in a particular slot in Feed. We faced two challenges: (i) selection bias in ad impression allocation that favored the control system, so we needed to adjust for impressions as a cost and compare ROI between control and treatment; (ii) imbalance in baseline covariates due to limited campaign and participant selection (Appendix Table 14). We addressed both issues via Regression Adjustment to estimate the ROI lift while controlling for imbalanced covariates, detecting a 1.84% lift in conversions per impression (95% CI: [1.64%, 2.05%], p < 0.001). By contrast, a simple t-test found no significant difference in conversions (p = 0.154). (Appendix G.2)

Paid Search Campaigns: We ran a 28-day campaign-level A/B test on 3rd-party paid-search campaigns (32 control vs. 32 treatment), measuring conversion value net of cost. To address the small-sample limitation, we fit a GEE model to take advantage of the repeated measurements over 28 days, yielding a near-significant effect on ROI (p = 0.051) vs. p = 0.184 from a last-day snapshot regression analysis. A 28-day pre-launch AA validation using the same GEE showed no effect (p = 0.82), further validating the experiment and results.

Figure 1: Distribution of Conversion Values from the Validation & Test Period

Observing that the distribution of conversion value exhibits heavy-tail characteristics, we further performed statistical testing using the longitudinal Doubly Robust U, assuming a compound-symmetric correlation structure for $R(\alpha)$. We attained a statistically significant result with $\hat{P}(y_1 > y_0) = 0.54$ and p-value = 0.045. (Appendix G.3)

7 Discussion

We discuss general approaches for large sample sizes (e.g., member-level A/B tests at global scale), as well as various practical implementation considerations, in Appendix A.
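One of the large-sample devices discussed there, evaluating a U statistic over $k$ sampled pairs rather than all $\binom{n}{2}$ (cf. Algorithm 2 and Theorem 6), can be sketched as follows. The kernel and toy data are illustrative stand-ins.

```python
import numpy as np
from itertools import combinations

def u_stat_full(obs, kernel):
    """Exact U statistic over all C(n,2) pairs: O(n^2) kernel evaluations."""
    vals = [kernel(a, b) for a, b in combinations(obs, 2)]
    return sum(vals) / len(vals)

def u_stat_mc(obs, kernel, k, rng):
    """Monte Carlo estimate over k distinct pairs sampled without replacement."""
    n = len(obs)
    seen, vals = set(), []
    while len(vals) < k:
        i, j = rng.integers(0, n, size=2)
        if i == j or (min(i, j), max(i, j)) in seen:
            continue  # rejection-sample distinct, previously unseen pairs
        seen.add((min(i, j), max(i, j)))
        vals.append(kernel(obs[i], obs[j]))
    return sum(vals) / k

rng = np.random.default_rng(0)
obs = list(range(10))                # toy data: 45 pairs in total
kernel = lambda a, b: abs(a - b)     # illustrative symmetric kernel
exact = u_stat_full(obs, kernel)
# Sampling all 45 pairs recovers the exact value; smaller k trades O(n^2)
# work for O(k) with the error bound of Theorem 6.
approx = u_stat_mc(obs, kernel, k=45, rng=rng)
```

In practice $k = O(n^{1+\epsilon})$ keeps the Monte Carlo error below the statistical error, which is the point of Theorem 6.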
We further highlight the key theoretical contributions and discuss comparisons with existing approaches.

7.1 Methodology Innovation

Although RA, GEE, and the Mann-Whitney U test are established statistical methodologies, their applications to A/B testing in the tech field are rare. This is mainly due to four reasons: (i) A/B tests in the tech field generally involve large sample sizes, and efficiency is often not the primary concern; (ii) for large sample sizes, RA, GEE, and Mann-Whitney U lack computationally efficient algorithms; (iii) the primary metrics in A/B tests are typically binary or count data (e.g., impressions or conversions), so there is little perceived need for distribution-robust tests like the Mann-Whitney U; (iv) evaluation of multiple metrics is often conducted heuristically, e.g., requiring nonsignificance on guardrail metrics and significance on primary metrics, or making ad hoc trade-offs between them.

In A/B tests for business scenarios, the above four reasons vanish: (i) sample sizes are limited because each A/B test incurs business cost, so using more powerful statistical tests (e.g., covariate adjustment) and increasing effective sample size (e.g., repeated measurement) is very important; (ii) in many cases, sample sizes are moderate, so computational burden is less of a concern; (iii) the primary metrics are often revenue, which follows a non-Gaussian distribution, calling for nonparametric tests such as the Mann-Whitney test; (iv) a principled way
of performing ROI trade-offs is needed, and covariate adjustment can measure revenue net of cost. Moreover, when revenue- or value-based primary metrics are used, they are almost always associated with zero inflation and heavy-tailed distributions; in this situation we can use the Zero-Trimmed U.

In fact, we argue that these approaches can be applied generally to all A/B tests in the tech field. Primary metrics can be revenue-based even for engagement-related platforms (e.g., assigning a proxy long-term value to any impression or conversion). Also, there are implicit and explicit costs for any A/B test (e.g., latency can be modeled as a cost to the user). We then need robust statistics to address the irregular distribution of proxy values, and covariate adjustment for ROI considerations. For general applicability, we provide ways to efficiently perform the above tests at extremely large sample sizes. RA and GEE are based on estimating equations, and we can use mini-batch Fisher scoring to solve those equations and then calculate the variance from the full sample using asymptotic results. Mann-Whitney U and Zero-Trimmed U can be computed efficiently using fast ranking algorithms, and the variance of the test statistic follows easily from the asymptotic distribution.

7.2 Theoretical Development

We derive analytical results to provide insights into where the efficiency gains arise for RA, GEE, and the Mann-Whitney U test:

• For RA, when there is confounding, the relative efficiency over the t-test (measured by MSE) is dominated by the bias term, since the t-test yields a biased estimate of the treatment effect. When there is no confounding, RA's efficiency gain over the t-test arises from variance reduction due to covariate adjustment. The insight, then, is to find covariates that (i) satisfy non-confounding (i.e., are independent of treatment assignment) and (ii) explain variance in the response. This also explains the efficiency gains of related CUPED-type methods.
• For GEE, we show that efficiency gains over snapshot analysis come from using the repeated measurements, and we derive the exact formula for relative efficiency under a Gaussian response, revealing its dependence on the correlation structure among repeated measurements. When repeated measurements are fully independent, the relative efficiency is highest, T times that of snapshot regression. When they are perfectly correlated, GEE and snapshot regression share the same efficiency.

• For the Mann-Whitney U test, we compute the relative efficiency over the t-test on several example distributions, illustrating near-1 efficiency for Gaussian data and higher efficiency for heavy-tailed distributions. We detail the asymptotics for the Zero-Trimmed U, building on existing work from the biostatistics field [14, 40]. Moreover, we provide a rigorous treatment of Pitman efficiency under compound hypothesis testing in Theorem 2. Pitman efficiency is given both for (i) Zero-Trimmed U versus Mann-Whitney U with adjusted variance and (ii) Zero-Trimmed U versus Mann-Whitney U with standard (unadjusted) variance.

• As shown in Figures 2 and 3, the efficiency of Zero-Trimmed U versus Mann-Whitney U with adjusted variance is not always greater than one; it depends on both the direction $\varphi$ and the zero proportion $1 - p$. When the direction is more on the $d$ component (a location shift among positive values), Zero-Trimmed U has
higher power (Figure 3). When the direction focuses on the $m$ component (the zero-proportion difference), Mann-Whitney U with adjusted variance is more efficient, though the ratio is still close to one (Figure 3). In fact, if $\sin\varphi = 1$ (purely on $d$), Zero-Trimmed U always has higher power (Figure 2); if $\sin\varphi = 0$ (purely on $m$), Mann-Whitney U with adjusted variance always has higher power (Figure 2).

• The efficiency of Zero-Trimmed U versus Mann-Whitney U with standard (unadjusted) variance, however, is mostly greater than one, as seen in Figures 4 and 5. The dominance of Zero-Trimmed U is particularly significant (i.e., $r > 5$) for high sparsity of positive values ($p$ close to zero), as shown in Figure 4. And when there is a substantial proportion of zeros (e.g., $p = 0.5$), its advantage is robust to the direction $\varphi$ of the compound hypothesis, as shown in Figure 5.

Building on existing work on causal-inference Mann-Whitney U from the biostatistics field [43, 6, 38, 7, 45], we propose a novel doubly robust generalized U to address ROI, repeated measurement, and distributional robustness in one framework. We provide the asymptotic results in Theorem 3 and Corollary 4, with detailed derivations in Appendix E.3. Besides the fact that the application of doubly robust U is completely new for A/B tests in the business setting (and in the tech field generally), we also highlight the key theoretical innovations of DRGU on top of existing approaches from the biostatistics field:

• The doubly robust generalized U can adopt any monotonic "kernel" $\phi$ to form a U statistic measuring a directional treatment effect $E(\phi)$ of a customized definition in an A/B test. When $\phi$ is the identity function, it reduces to the common doubly robust version of the "mean difference" treatment effect. When $\phi$ is the indicator function, it is equivalent to the doubly robust version of Mann-Whitney U. There are two key requirements for the kernel $\phi$: (i) a finite second moment ensures distributional robustness, i.e.,
$E[\phi^2] < \infty$, a condition the identity kernel (mean difference) cannot guarantee; (ii) monotonicity guarantees that $\phi$ preserves the test's directional nature, so that any directional (location) shift in outcomes yields a consistent change in the statistic.

• We provide a detailed UGEE formulation for joint estimation of both the target parameter $\delta$ and the nuisance parameters ($\beta$, $\gamma$). UGEE is an extension of GEE to pairwise estimating equations; readers can refer to [23] for a comprehensive treatment. Our UGEE formulation builds on the formulations of [43, 7], with three important distinctions: (i) our UGEE is built on a generalized kernel $\phi$; (ii) we treat $h_{ij3}$, the estimating equation for the "observed" treatment effect, by multiplying the pairwise "missing" probability $z_i(1 - z_j)$ with the potential pairwise outcome $\phi(y_{i1} - y_{j0})$, whereas the formulation in [7] omits the "missing" probability; (iii) we provide the UGEE formulation for longitudinal data, detailing the structure of the propensity model and pairwise regression model for the doubly robust estimator, and the functional forms for different types of longitudinal effects.

• Besides the asymptotic normality result, we prove that when $\pi$ and $g$ are known, the corresponding estimator attains the semi-parametric efficiency bound, i.e., the proposed doubly robust generalized
|
https://arxiv.org/abs/2505.08128v1
|
U has the smallest variance (is most powerful) among all regular estimators of the corresponding treatment effect. We further prove that even when π and g are unknown, as long as they are correctly specified, the doubly robust generalized U from our UGEE still attains the semi-parametric efficiency bound. This result is stated in Theorem 3, which provides the theoretical foundation for its superior performance in simulations and real A/B analyses. • We provide computationally efficient algorithms for the proposed doubly robust generalized U on extremely large datasets (e.g., on the order of 10^8 rows). The algorithm decouples the optimization procedure, which performs the point estimation of θ, from the inference procedure, which estimates the asymptotic variance of θ̂. The optimization is driven by mini-batch Fisher scoring on paired data and can be implemented easily with existing automatic differentiation libraries (e.g., JAX, PyTorch, TensorFlow). The inference is driven by Monte Carlo integration for the expectation in the variance estimate (another U statistic), where we reduce the computational burden from O(n²) to O(n) (a huge reduction when n is extremely large) without losing asymptotic efficiency. We provide rigorous theoretical support for the algorithm, covering both the decoupling and the error bounds, in Appendix A: (i) as long as the mini-batch Fisher scoring algorithm attains error o_p(n^{−1/2}), this error is negligible (compared with "perfect" optimization) and thus we can decouple optimization and inference; (ii) as long as the Monte Carlo integration processes a sample of size O(n^{1+ϵ}), the Monte Carlo errors are negligible and we attain the same asymptotic efficiency as using the full O(n²) pairs. Besides the methodological innovation and theoretical development, we also share the JAX-based [35] implementation of UGEE for the doubly robust generalized U, as well as simulation code for all simulations, including RA, GEE, Zero-Trimmed U and DRGU.
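As a rough illustration of the decoupled scheme (a hedged sketch, not the shared JAX implementation: the toy kernel h_ij(θ) = 1{y1_i ≥ y0_j} − θ, whose root is the Mann–Whitney U, and all names here are made up for illustration):

```python
import random

def toy_ugee(y1, y0, batch=64, steps=2000, lr=0.5, k=5000, seed=0):
    """Sketch of the decoupled scheme on the toy pairwise kernel
    h_ij(theta) = 1{y1_i >= y0_j} - theta, whose root is the empirical
    Mann-Whitney U. All names are illustrative."""
    rng = random.Random(seed)
    theta = 0.5
    # Step 1 (optimization): mini-batch stochastic "Fisher scoring".
    # For this kernel dh/dtheta = -1, so the scoring step is a plain
    # average update with a decaying step size.
    for _ in range(steps):
        u_bar = sum(
            (1.0 if y1[rng.randrange(len(y1))] >= y0[rng.randrange(len(y0))] else 0.0)
            - theta
            for _ in range(batch)
        ) / batch
        theta += lr * u_bar
        lr *= 0.999
    # Step 2 (inference): Monte Carlo over k sampled pairs instead of the
    # full n1*n0 pair set, estimating the variance of the kernel values.
    vals = [
        1.0 if y1[rng.randrange(len(y1))] >= y0[rng.randrange(len(y0))] else 0.0
        for _ in range(k)
    ]
    m = sum(vals) / k
    var_kernel = sum((v - m) ** 2 for v in vals) / (k - 1)
    return theta, var_kernel
```

Step 1 is stochastic root-finding for the estimating equation; step 2 estimates the kernel variance from k sampled pairs rather than all n₁n₀ pairs, mirroring the O(n²) to O(n) reduction described above.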
Readers can thus dive deep into the algorithm and replicate the simulation results if interested.
8 Conclusion
To conclude, we proposed a series of efficient statistical methods for A/B tests in this paper, with systematic theoretical development and comprehensive empirical evaluations. Though proposed for A/B tests in business settings, these methods are broadly useful for general experiments in both tech and non-tech fields.
References
[1] Chunrong Ai, Lukang Huang, and Zheng Zhang. 2020. A Mann–Whitney test of distributional effects in a multivalued treatment. Journal of Statistical Planning and Inference 209 (2020), 85–100.
[2] Eduardo M. Azevedo, Alex Deng, José Luis Montiel Olea, Justin Rao, and E. Glen Weyl. 2020. A/B Testing with Fat Tails. Journal of Political Economy 128, 12 (2020), 4319–4377.
[3] R Clifford Blair and James J Higgins. 1980. A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions. Journal of Educational Statistics 5, 4 (1980), 309–335.
[4] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. 2013. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, Oxford, UK.
[5] Patrick D Bridge and Shlomo S Sawilowsky. 1999. Increasing physicians' awareness of the impact of statistics on research outcomes: comparative power of the t-test and Wilcoxon rank-sum test in small samples applied research. Journal of Clinical Epidemiology 52, 3 (1999),
229–235.
[6] R Chen, T Chen, N Lu, Hui Zhang, P Wu, C Feng, and XM Tu. 2014. Extending the Mann–Whitney–Wilcoxon rank sum test to longitudinal regression analysis. Journal of Applied Statistics 41, 12 (2014), 2658–2675.
[7] Ruohui Chen, Tuo Lin, Lin Liu, Jinyuan Liu, Ruifeng Chen, Jingjing Zou, Chenyu Liu, Loki Natarajan, Wan Tang, Xinlian Zhang, et al. 2024. A doubly robust estimator for the Mann–Whitney–Wilcoxon rank sum test when applied for causal inference in observational studies. Journal of Applied Statistics 51, 16 (2024), 3267–3291.
[8] Stephan Clémençon, Igor Colin, and Aurélien Bellet. 2016. Scaling-up empirical risk minimization: optimization of incomplete U-statistics. Journal of Machine Learning Research 17, 76 (2016), 1–36.
[9] Alex Deng, Michelle Du, Anna Matlin, and Qing Zhang. 2023. Variance Reduction Using In-Experiment Data: Efficient and Targeted Online Measurement for Sparse and Delayed Outcomes. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 3937–3946.
[10] Alex Deng, Ya Xu, Ron Kohavi, and Toby Walker. 2013. Improving the Sensitivity of Online Controlled Experiments by Utilizing Pre-Experiment Data. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining (WSDM '13). ACM, Rome, Italy.
[11] Ronald Aylmer Fisher. 1928. Statistical Methods for Research Workers. Oliver and Boyd.
[12] Ronald A. Fisher. 1935. The Design of Experiments. Oliver and Boyd, Edinburgh.
[13] David A Freedman. 2008. On regression adjustments to experimental data. Advances in Applied Mathematics 40, 2 (2008), 180–193.
[14] Alfred P Hallstrom. 2010. A modified Wilcoxon test for non-negative distributions with a clump of zeros. Statistics in Medicine 29, 3 (2010), 391–400.
[15] James A. Hanley and Barbara J. McNeil. 1982. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143, 1 (1982), 29–36.
[16] Adrián V Hernández, Ewout W Steyerberg, and J Dik F Habbema. 2004. Covariate adjustment in randomized controlled trials with dichotomous outcomes increases statistical power and reduces sample size requirements. Journal of Clinical Epidemiology 57, 5 (2004), 454–460.
[17] Wassily Hoeffding. 1948. A Class of Statistics with Asymptotically Normal Distribution. The Annals of Mathematical Statistics 19, 3 (1948), 293–325.
[18] Yan Hou, Victoria Ding, Kang Li, and Xiao-Hua Zhou. 2010. Two new covariate adjustment methods for non-inferiority assessment of binary clinical trials data. Journal of Biopharmaceutical Statistics 21, 1 (2010), 77–93.
[19] Svante Janson. 2004. Large deviations for sums of partly dependent random variables. Random Structures & Algorithms 24, 3 (2004), 234–248.
[20] Hao Jiang, Fan Yang, and Wutao Wei. 2020. Statistical Reasoning of Zero-Inflated Right-Skewed User-Generated Big Data A/B Testing. In 2020 IEEE International Conference on Big Data (Big Data). 1533–1544.
[21] Brennan C Kahan, Vipul Jairath, Caroline J Doré, and Tim P Morris. 2014. The risks and rewards of covariate adjustment in randomized trials: an assessment of 12 outcomes from 8 studies. Trials 15 (2014), 1–7.
[22] Ron Kohavi, Roger Longbotham, Dan Sommerfield, and Randal M Henne. 2009. Controlled experiments on the web: survey and practical guide. Data Mining and Knowledge Discovery 18 (2009), 140–181.
[23] Jeanne Kowalski and Xin M
Tu. 2008. Modern Applied U-statistics. John Wiley & Sons.
[24] Nicholas Larsen, Jonathan Stallrich, Srijan Sengupta, Alex Deng, Ron Kohavi, and Nathaniel T Stevens. 2024. Statistical challenges in online controlled experiments: A review of A/B testing methodology. The American Statistician 78, 2 (2024), 135–149.
[25] Kung-Yee Liang and Scott L Zeger. 1986. Longitudinal data analysis using generalized linear models. Biometrika 73, 1 (1986), 13–22.
[26] Kevin Liou and Sean J Taylor. 2020. Variance-weighted estimators to improve sensitivity in online experiments. In Proceedings of the 21st ACM Conference on Economics and Computation. 837–850.
[27] Kevin Liou, Wenjing Zheng, and Sathya Anand. 2022. Privacy-preserving methods for repeated measures designs. In Companion Proceedings of the Web Conference 2022. 105–109.
[28] Yan Ma. 2012. On inference for Kendall's τ within a longitudinal data setting. Journal of Applied Statistics 39, 11 (2012), 2441–2452.
[29] Henry B Mann and Donald R Whitney. 1947. On a test of whether one of two random variables is stochastically larger than the other. The Annals of Mathematical Statistics (1947), 50–60.
[30] Lu Mao. 2018. On causal estimation using U-statistics. Biometrika 105, 1 (2018), 215–220.
[31] Lu Mao. 2024. Wilcoxon–Mann–Whitney statistics in randomized trials with non-compliance. Electronic Journal of Statistics 18, 1 (2024), 465–489.
[32] Alexey Poyarkov, Alexey Drutsa, Andrey Khalyavin, Gleb Gusev, and Pavel Serdyukov. 2016. Boosted decision tree regression adjustment for variance reduction in online controlled experiments. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 235–244.
[33] Antonin Schrab, Ilmun Kim, Mélisande Albert, Béatrice Laurent, Benjamin Guedj, and Arthur Gretton. 2023. MMD aggregated two-sample test. Journal of Machine Learning Research 24, 194 (2023), 1–81.
[34] Antonin Schrab, Ilmun Kim, Benjamin Guedj, and Arthur Gretton. 2022.
Efficient Aggregated Kernel Tests using Incomplete U-statistics. Advances in Neural Information Processing Systems 35 (2022), 18793–18807.
[35] Chris Smith, Matthew R. Leary, and Dougal Maclaurin. 2020. JAX: composable transformations of Python+NumPy programs. In Proceedings of the 3rd Machine Learning and Systems Conference (MLSys 2020).
[36] Anastasios A Tsiatis. 2006. Semiparametric Theory and Missing Data. Vol. 4. Springer.
[37] Aad W Van der Vaart. 2000. Asymptotic Statistics. Vol. 3. Cambridge University Press.
[38] Karel Vermeulen, Olivier Thas, and Stijn Vansteelandt. 2015. Increasing the power of the Mann–Whitney test in randomized experiments through flexible covariate adjustment. Statistics in Medicine 34, 6 (2015), 1012–1030.
[39] Ming Wang. 2014. Generalized estimating equations in longitudinal data analysis: a review and recent developments. Advances in Statistics 2014, 1 (2014), 303728.
[40] Wanjie Wang, Eric Chen, and Hongzhe Li. 2023. Truncated rank-based tests for two-part models with excessive zeros and applications to microbiome data. The Annals of Applied Statistics 17, 2 (2023), 1663–1680.
[41] Changshuai Wei, Ming Li, Yalu Wen, Chengyin Ye, and Qing Lu. 2020. A multi-locus predictiveness curve and its summary assessment for genetic risk prediction. Statistical Methods in Medical Research 29, 1 (2020), 44–56.
[42] Changshuai Wei and Qing Lu. 2017. A generalized association test based on U statistics. Bioinformatics 33, 13 (2017), 1963–1971.
[43] P Wu, Y Han, T Chen, and
XM Tu. 2014. Causal inference for Mann–Whitney–Wilcoxon rank sum and other nonparametric statistics. Statistics in Medicine 33, 8 (2014), 1261–1271.
[44] Huizhi Xie and Juliette Aurisset. 2016. Improving the Sensitivity of Online Controlled Experiments: Case Studies at Netflix. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16) (San Francisco, CA, USA). ACM, 645–654.
[45] Anqi Yin, Ao Yuan, and Ming T Tan. 2024. Highly robust causal semiparametric U-statistic with applications in biomedical studies. The International Journal of Biostatistics 20, 1 (2024), 69–91.
[46] Jing Zhou, Jiannan Lu, and Anas Shallah. 2023. All about Sample-Size Calculations for A/B Testing: Novel Extensions & Practical Guide. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 3574–3583.
Appendix
A Algorithm for Large Sample Size
Although the proposed methods are mainly for business scenarios, where sample sizes are often small to medium, there are business use cases where the sample size is large (e.g., large-scale marketing campaigns where user-level data is available). Moreover, for broader applicability of the methodologies, we need to consider general A/B tests in tech, where sample sizes can be on the order of millions to billions. For Mann–Whitney U and Zero-Trimmed U, we can leverage a fast ranking algorithm to compute W or W′; the variance calculation is then straightforward using the equations in Section 4. Regression Adjustment, GEE and DRGU, by contrast, are all based on estimating equations, and DRGU has an additional layer of complexity, as its computation runs over pairs of observations. We provide an efficient algorithm for DRGU in this section; the algorithms for RA and GEE follow trivially.
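As a concrete illustration of the fast-ranking approach (a minimal sketch, not the shared implementation; it assumes the rank-sum definition W = S − n₁(n₀ + n₁ + 1)/2 used later in Appendix D, with midranks for ties):

```python
def rank_sum_w(y1, y0):
    """O(n log n) computation of the rank-sum statistic
    W = S - n1*(n0+n1+1)/2, where S is the rank sum of sample 1 in the
    pooled data, using midranks for ties. Illustrative sketch only."""
    pooled = sorted((v, g) for g, ys in ((1, y1), (0, y0)) for v in ys)
    n = len(pooled)
    s1 = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1  # find the block of tied values
        midrank = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        s1 += midrank * sum(g for _, g in pooled[i:j])  # sample-1 members
        i = j
    n1, n0 = len(y1), len(y0)
    return s1 - n1 * (n0 + n1 + 1) / 2.0
```

The single sort dominates the cost, so even samples with 10^8 rows are feasible; the same pooled sort can be reused for the trimmed statistic W′ on the zero-trimmed samples.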
A.1 Large Data Estimation and Inference for DR Generalized U
The high-level idea is to decouple optimization (solving the UGEE) from inference (estimation of the variance), and to use an efficient algorithm for each step:
1. Optimization: we obtain θ̂ by stochastic Fisher scoring with mini-batches until ‖Ū_n‖ < c n^{−(1+ρ)/2} (i.e., ‖Ū_n‖ = o_p(n^{−1/2})).
2. Inference: we estimate B = E(GM) and Σ = E(uuᵀ) with Monte Carlo integration from a subsample of pairs, and calculate Var̂(θ̂) = 4(B̂^{−1})ᵀ Σ̂ B̂^{−1}/n.
Details are described in Algorithm 1 and Algorithm 2.
Remarks:
• For Algorithm 1, we can sample by pairs instead of by rows. Both give a consistent estimate of the parameter, i.e., θ̂ →_p θ. There are trade-offs on multiple aspects: (i) sampling by pairs gives a clean guarantee of unbiasedness, while sampling by rows can be biased (though consistent) due to missing intra-batch pairs; (ii) sampling by rows is easier to implement and can use GPUs more efficiently, while sampling by pairs needs to generate all pairs beforehand or to implement reservoir sampling (or hashing tricks) for extremely large data. For both approaches, stratified sampling should be used for highly imbalanced data.
• For Algorithm 2, the choice of the pair sample size k controls the Monte Carlo error. While there is no need to set k at the order of n² (i.e., the full pair calculation), a sufficiently large k greater than the order of n (e.g., k = c′ n log n) is needed for a negligible Monte Carlo error. For example, for a data size of 100M rows (10^8), setting k ∈ (10^7, 10^8) gives practical inference, and setting k ∈ (10^9, 10^10) gives a high-confidence bound. Note that,
when we "generalize" this to regular regression and GEE, we can simply estimate the variance on the full sample; there is no need for Monte Carlo integration.
• The working correlation matrix R(α) can be estimated in an outer loop around the θ-updates, e.g., by alternating between updating θ with Fisher scoring and re-estimating α from the current residuals: (i) a good initial value is typically α(0) = 0, corresponding to the independence working correlation, which ensures consistency of θ̂ even if R(α) is misspecified; (ii) α can be re-estimated every K steps of the inner Fisher scoring loop, which avoids excessive overhead from updating α too frequently; (iii) re-estimation of α can stop once its updates become small or after a fixed number of outer iterations. Typically, only a few updates (e.g., 3–5) are sufficient in practice.
A.2 Algorithm Decoupling and Error Bounds
A.2.1 Algorithm Decoupling
To see why we can decouple the optimization and the inference (i.e., Algorithm 1 and Algorithm 2), observe that
√n Ū_n(θ̂) = √n Ū_n(θ) + (∂Ū_n/∂θ) √n (θ̂ − θ) + o_p(1),
√n (θ̂ − θ) = −(∂Ū_n/∂θ)⁻ √n Ū_n(θ) + (∂Ū_n/∂θ)⁻ √n Ū_n(θ̂) + o_p(1).
The second term measures the "error" incurred when the estimating equation is not solved exactly, i.e., the algorithm error; the first term measures the sampling variation. When the Fisher scoring algorithm error is small, ‖Ū_n‖ = o_p(n^{−1/2}), we know
(∂Ū_n/∂θ)⁻ √n Ū_n(θ̂) = O_p(1) · √n · o_p(n^{−1/2}) = o_p(1),
and thus
√n (θ̂ − θ) = −(∂Ū_n/∂θ)⁻ √n Ū_n(θ) + o_p(1) →_d N(0, 4(B⁻)ᵀ Σ B⁻).
We state the above results in Theorem 5.
A.2.2 Error Bound
Observe that the estimates of B and Σ on the full data are both U statistics of the form
U_n^v = (n choose 2)^{−1} Σ_{i<j} v(o_i, o_j).
Let us assume the symmetric kernel v(o_i, o_j) ∈ R is sub-Gaussian with variance proxy σ². We compute a Monte Carlo approximation
Û_k = (1/k) Σ_{(i,j)∈C_k} v(o_i, o_j)
by sampling k unordered pairs C_k from the full set of (n choose 2) possible pairs. Due to overlapping indices among pairs, the kernel evaluations are not fully independent.
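The pair subsampling behind Û_k can be sketched as follows (a toy illustration; the kernel v and all names are placeholders, not the paper's implementation):

```python
import random

def u_full(obs, v):
    """Full U statistic over all C(n,2) unordered pairs: O(n^2) kernel calls."""
    n = len(obs)
    total = sum(v(obs[i], obs[j]) for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

def u_mc(obs, v, k, seed=0):
    """Monte Carlo approximation U_hat_k over k uniformly sampled
    unordered pairs: O(k) kernel calls instead of O(n^2)."""
    rng = random.Random(seed)
    n = len(obs)
    total = 0.0
    for _ in range(k):
        i, j = rng.sample(range(n), 2)  # a distinct unordered pair
        total += v(obs[i], obs[j])
    return total / k
```

With k of order n log n the Monte Carlo error is already small in practice; the variance inflation from overlapping indices is exactly the (1 + ∆) factor quantified next.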
Observe that, across the sampled pairs, the expected total number of overlapping pairs is O(k²/n); hence, for each sampled pair, the expected number of overlapping pairs is ∆ = O(k/n). Therefore
Var(Û_k) = (1/k²) Σ_{l∈C_k} Var(v_l) + (1/k²) Σ_{l≠l′} Cov(v_l, v_{l′}) = σ²/k + O(1/n)·C = (σ²/k)(1 + ∆),
provided that Cov(v_l, v_{l′}) ≤ C. Using Bernstein-type inequalities [4] adapted to Var(Û_k) = (σ²/k)(1 + ∆), the Monte Carlo average satisfies
P(|Û_k − E[Û_k]| > ϵ) ≤ 2 exp(−k ϵ²/(2σ²(1 + ∆))).
This introduces an adjustment factor 1 + ∆ into the denominator, reflecting the variance inflation due to overlap between sampled pairs. To achieve a target error ϵ at confidence level 1 − η, we can set 2 exp(−k ϵ²/(2σ²(1 + ∆))) ≤ η. Solving this with respect to the "effective sample size" k̂ = k/(1 + ∆), we have
k̂ ≥ (2σ²/ϵ²) log(2/η).
Equivalently, with high probability 1 − η, the finite-sample error bound is
|Û_k − E[U_n]| ≤ √((2σ²/k̂) log(2/η)).
The bound implies that the effective asymptotic convergence rate is
Û_k − E[U_n] = O_p(√(1/k̂)) = O_p(√(1/k + 1/n)) = O_p(√((1/n)(1 + n/k))).
Observing B̂ − B = O_p(k̂^{−1/2}) and Σ̂ − Σ = O_p(k̂^{−1/2}), we can show V̂_θ − V_θ = O_p(k̂^{−1/2}), given V_θ = 4(B^{−1})ᵀ Σ B^{−1}. This leads to a more conservative test statistic, resulting in no inflation of type I error but a minor loss of finite-sample efficiency. Choosing k = O(n^{1+ϵ}) ensures the Monte Carlo error is asymptotically negligible, matching the asymptotic efficiency of the full O(n²) estimator
at a significantly lower computational cost. We state the above results in Theorem 6.
B Efficiency of Regression Adjustment
In this section, we illustrate the efficiency of regression adjustment over the t-test under a parametric set-up. We first show that regression adjustment is most efficient via the Cramér–Rao lower bound, and then illustrate where the efficiency comes from using linear regression as an example.
B.1 Cramér–Rao Lower Bound
This is well established in statistics. For completeness, we provide a sketch of the proof, so the reader can gain insight into later sections, e.g., Appendix B.3 and Appendix E.2. Maximum likelihood solves the following estimating equation:
U_n(θ) = (1/n) Σ_i [∂ log p(x_i; θ)/∂θ]ᵀ = 0.
Let S_i(θ) = (∂ log p(x_i; θ)/∂θ)ᵀ and Σ = E(SSᵀ); we know from the CLT that √n U_n →_d N(0, Σ). Observing
0 = U_n(θ̂) = U_n(θ₀) + (∂U_n(θ₀)/∂θ₀)(θ̂ − θ₀) + o_p(n^{−1/2}),
we know √n (θ̂ − θ₀) = −(∂U_n(θ₀)/∂θ₀)⁻ √n U_n(θ₀) + o_p(1). Observing −∂U_n(θ₀)/∂θ₀ →_p Σ, we conclude √n (θ̂ − θ₀) →_d N(0, Σ^{−1}).
To see why −∂U_n/∂θ →_p Σ, observe that:
Sᵀ = ∂ log p(x; θ)/∂θ = (1/p(x; θ)) ∂p(x; θ)/∂θ,
∂S/∂θ = −(1/p²)(∂p/∂θ)ᵀ(∂p/∂θ) + (1/p) ∂([∂p/∂θ]ᵀ)/∂θ,
E(∂S/∂θ) = −E((∂ log p/∂θ)ᵀ(∂ log p/∂θ)) + (∂²/∂θ∂θᵀ) ∫ p(x; θ) dx = −Σ,
where the second term vanishes because ∫ p(x; θ) dx = 1. Hence −∂U_n/∂θ →_p −E(∂S/∂θ) = Σ.
Now, for any unbiased estimator θ′ with E(θ′(x)) = θ, we can show Var(θ′) ⪰ Σ^{−1}, i.e., Var(θ′) − Σ^{−1} is a positive semi-definite matrix. Observing ∂E(θ′)/∂θ = ∂θ/∂θ = I, and the fact that
∂E(θ′)/∂θ = (∂/∂θ) ∫ θ′(x) p(x; θ) dx = ∫ θ′(x) (∂ log p/∂θ) p(x; θ) dx = E(θ′ ∂ log p/∂θ),
we have Cov(θ′, S) = E(θ′ ∂ log p/∂θ) = I. Applying the matrix Cauchy–Schwarz inequality, we have
Var(θ′) ⪰ Cov(θ′, S) Var(S)^{−1} Cov(θ′, S)ᵀ = Σ^{−1}.
B.2 Relative Efficiency under Confounding
Let us assume the following model, as in Section 2:
y_i = β₀ + β₁ z_i + γᵀ w_i + ϵ_i,  ϵ_i ∼ N(0, σ²).
Let θ = [β₀, β₁, γᵀ]ᵀ, x_i = [1, z_i, w_iᵀ]ᵀ, X = [x₀, ···, x_i, ···]ᵀ and Y = [y₀, ···, y_i, ···]ᵀ.
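As a quick numerical illustration of this model (a hedged sketch; the parameter values and helper names are made up), one can compare the covariate-adjusted OLS estimate of β₁ with the unadjusted difference in means under randomization (z ⊥ w), where both are unbiased:

```python
import random

def ols3(rows, y):
    """Least squares for design [1, z, w] via the 3x3 normal equations,
    solved by Gaussian elimination with partial pivoting."""
    a = [[0.0] * 4 for _ in range(3)]
    for (z, w), yi in zip(rows, y):
        x = (1.0, z, w)
        for r in range(3):
            for c in range(3):
                a[r][c] += x[r] * x[c]  # accumulate X'X
            a[r][3] += x[r] * yi        # accumulate X'y
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
    beta = [0.0] * 3
    for r in (2, 1, 0):  # back substitution
        beta[r] = (a[r][3] - sum(a[r][c] * beta[c] for c in range(r + 1, 3))) / a[r][r]
    return beta

def one_rep(rng, n=200, beta1=1.0, gamma=2.0, sigma=1.0):
    """One randomized experiment: z ~ Bern(1/2) independent of w."""
    zs = [float(rng.random() < 0.5) for _ in range(n)]
    ws = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ys = [0.5 + beta1 * z + gamma * w + rng.gauss(0.0, sigma)
          for z, w in zip(zs, ws)]
    b = ols3(list(zip(zs, ws)), ys)[1]  # adjusted estimate of beta1
    n1 = sum(zs)
    tau = (sum(y for y, z in zip(ys, zs) if z) / n1
           - sum(y for y, z in zip(ys, zs) if not z) / (n - n1))  # t-test estimate
    return b, tau
```

Across repeated experiments, the empirical variance ratio of τ̂ to β̂₁ should approach 1 + γᵀVar(w)γ/σ², consistent with the B.3 result below.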
Under confounding and the above parametric set-up, we can show that β̂₁ is unbiased, observing that
E(y(1) − y(0)) = E_w(E(y | z = 1, w) − E(y | z = 0, w)) = E_w(β₁) = β₁.
Meanwhile, the t-test (τ̂ = ȳ₁ − ȳ₀) is biased by the constant term γᵀ[E(w | z = 1) − E(w | z = 0)], as
E(ȳ₁ − ȳ₀) = E(y | z = 1) − E(y | z = 0) = E_{w|z=1} E(y | z = 1, w) − E_{w|z=0} E(y | z = 0, w) = β₁ + γᵀ(E(w | z = 1) − E(w | z = 0)).
In this case, the asymptotic relative efficiency is dominated by the bias term (for both estimators, the variance is of order 1/n), and hence r(β̂₁, τ̂) → ∞ as n → ∞.
B.3 Relative Efficiency under No Confounding
For the relative efficiency when there is no confounding, the derivation boils down to a ratio of variances, as both estimators are unbiased. We can estimate θ̂ = (XᵀX)^{−1}XᵀY with variance
Var(θ̂) = σ²(XᵀX)^{−1} = σ²(Σ_i x_i x_iᵀ)^{−1}.  (8)
Observing (1/n) Σ_i x_i x_iᵀ →_p E(xxᵀ), we know
Var(θ̂) = (σ²/n)((1/n) Σ_i x_i x_iᵀ)^{−1} = (σ²/n) E(xxᵀ)^{−1} + o_p(n^{−1}).
We need the (2, 2) entry of E(xxᵀ)^{−1} for the variance of β̂₁. Let V_x = E(xxᵀ) and p = E(z); we know E(z²) = p. Without loss of generality, assume E(w) = 0. Since z ⊥ w, we know E(zw) = 0, and
V_x = [ 1, E(z), E(wᵀ); E(z), E(z²), E(zwᵀ); E(w), E(zw), E(wwᵀ) ] = [ 1, p, 0; p, p, 0; 0, 0, Var(w) ].
Since V_x is block-diagonal, we only need to invert the upper-left 2 × 2 block [1, p; p, p], which is
[1, p; p, p]^{−1} = (1/(p(1 − p))) [ p, −p; −p, 1 ].
Then we know
Var(β̂₁) = σ²/(n p(1 − p)) + o_p(n^{−1}).  (9)
Now we derive the variance of the t-test. Let τ̂ = ȳ₁ − ȳ₀. We know Var(τ̂) = Var(ȳ₁) + Var(ȳ₀). Since Var(ȳ_k) = (1/n_k) Var(y | z = k),
Var(y | z = k) = Var(γᵀw + ϵ) = γᵀ Var(w) γ + σ²,
1/n₁ + 1/n₀ = 1/(np) + 1/(n(1 − p)) + o_p(n^{−1}) = 1/(n p(1 − p)) + o_p(n^{−1}),
this implies Var(τ̂) = (1/n₀) Var(y | z = 0) + (1/n₁) Var(y | z =
1) = (1/(n p(1 − p)))(σ_w² + σ²) + o_p(n^{−1}),  (10)
where σ_w² = γᵀ Var(w) γ represents the variance of y explained by w. Combining equation (9) and equation (10), we have
r(β̂₁, τ̂) = 1 + σ_w²/σ².  (11)
C Asymptotics and Efficiency of GEE
C.1 Asymptotic Normality of GEE
In this section, we show the asymptotic normality of θ̂ for the GEE, which builds the foundation for the asymptotic normality of the UGEE in Appendix E.3. Recall
Σ_i D_iᵀ V_i^{−1}(y_i − µ_i) = 0,
where y_i = [y_i0, ···, y_it, ···]ᵀ, µ_i = [µ_i0, ···, µ_it, ···]ᵀ, θ = [β₀, β₁, γᵀ]ᵀ, and
D_i = ∂µ_i/∂θ, V_i = A_i R(α) A_i, A_i = diag{√Var(y_it | z_i, w_it)}.
Let u_i = D_iᵀ V_i^{−1}(y_i − µ_i) and U_n = (1/n) Σ_i u_i; we know by the Central Limit Theorem (CLT) that √n U_n →_d N(0, Σ_u), where Σ_u = E(uuᵀ). Let α̂ be the estimate of α for the working correlation R(α), and assume the mild regularity condition √n (α̂ − α) = O_p(1). Let θ̂ be the estimate of θ for the GEE, i.e., U_n(θ̂, α̂) = 0. Observing the following Taylor expansion,
0 = U_n(θ̂, α̂) = U_n(θ, α) + (∂U_n(θ, α)/∂θ)(θ̂ − θ) + (∂U_n(θ, α)/∂α)(α̂ − α) + o_p(n^{−1/2}),
we know
√n U_n(θ, α) = −√n (∂U_n/∂θ)(θ̂ − θ) − √n (∂U_n/∂α)(α̂ − α) + o_p(1).  (12)
Since E(y_i − µ_i) = 0, we know E(∂u_i/∂α) = 0, and hence ∂U_n/∂α = o_p(1). Combining this with the regularity condition √n (α̂ − α) = O_p(1), we have
√n (∂U_n/∂α)(α̂ − α) = o_p(1) O_p(1) = o_p(1).  (13)
Then equation (12) reduces to
√n (θ̂ − θ) = −(∂U_n/∂θ)⁻ √n U_n(θ, α) + o_p(1),  (14)
where (·)⁻ denotes a generalized inverse. Let G_i = D_iᵀ V_i^{−1} and S_i = y_i − µ_i. Given that ∂U_n/∂θ = (1/n) Σ_i ∂(G_i S_i)/∂θ, we have
∂U_n/∂θ →_p E(G ∂S/∂θ) = −E(GD).  (15)
To obtain equation (15), we observe that
∂U_n/∂θ = (1/n) Σ_i (∂D_iᵀ/∂θ) V_i^{−1} S_i + (1/n) Σ_i D_iᵀ V_i^{−1} (∂S_i/∂θ).
Since (1/n) Σ_i (y_i − µ_i) →_p 0, the first term is negligible, i.e., (1/n) Σ_i (∂D_iᵀ/∂θ) V_i^{−1} S_i = o_p(1). As a result,
∂U_n/∂θ = o_p(1) + (1/n) Σ_i D_iᵀ V_i^{−1} (∂S_i/∂θ) →_p −E(GD),
since ∂S_i/∂θ = −D_i. Combining equation (14) and equation (15), we have
√n (θ̂ − θ) = B⁻ √n U_n(θ, α) + o_p(1),  (16)
where B = E(GD).
Since √n U_n →_d N(0, Σ_u), this establishes the asymptotic normality of θ̂:
√n (θ̂ − θ) →_d N(0, (B⁻)ᵀ Σ_u B⁻).
C.2 Asymptotic Efficiency of GEE over Snapshot Regression
We derive the asymptotic efficiency of GEE over snapshot regression for the repeated-measurement linear model shown in Section 3. We can write the underlying linear model as
y_i = X_i θ + ϵ_i,  ϵ_i ∼ N(0, σ²R),
where X_i = v x_iᵀ, v = [1, ···, 1, ···, 1]ᵀ. For the GEE Σ_i D_iᵀ V_i^{−1}(y_i − µ_i) = 0 of the above model, we know
D_i = ∂µ_i/∂θ = X_i, V_i^{−1} = (1/σ²) R^{−1},
Σ̂_u = (1/n) Σ_i D_iᵀ V_i^{−1} Var(ϵ_i) V_i^{−1} D_i = (1/n) Σ_i D_iᵀ V_i^{−1} D_i,
B̂ = (1/n) Σ_i D_iᵀ V_i^{−1} D_i,
and hence
Var(θ̂_gee) = (B̂⁻)ᵀ Σ̂_u B̂⁻ / n = (1/n) B̂⁻ = (1/n)((1/n) Σ_i D_iᵀ V_i^{−1} D_i)⁻ = σ²(Σ_i X_iᵀ R^{−1} X_i)^{−1}.
Observing that
X_iᵀ R^{−1} X_i = (v x_iᵀ)ᵀ R^{−1}(v x_iᵀ) = x_i (vᵀ R^{−1} v) x_iᵀ = (vᵀ R^{−1} v) x_i x_iᵀ,
we have
Var(θ̂_gee) = (σ²/(vᵀ R^{−1} v))(Σ_i x_i x_iᵀ)^{−1}.  (17)
From (8), we know Var(θ̂_reg) = σ²(Σ_i x_i x_iᵀ)^{−1}, so we have
r(θ̂_gee, θ̂_reg) = vᵀ R^{−1} v.  (18)
Now we show vᵀ R^{−1} v > 1. Let a = R^{1/2} v and b = R^{−1/2} v; by the Cauchy–Schwarz inequality |⟨a, b⟩|² ≤ ⟨a, a⟩⟨b, b⟩, we have
|vᵀ v|² ≤ (vᵀ R v)(vᵀ R^{−1} v).
Since vᵀ v = T and vᵀ R v = Σ_i Σ_j R_ij < T², we know
vᵀ R^{−1} v ≥ T²/(vᵀ R v) > 1.  (19)
To further illustrate the connection between efficiency and the correlation of the repeated measurements, we can assume a simple compound symmetric matrix R = (1 − ρ) I_T + ρ v vᵀ. By the Woodbury matrix identity, we know
R^{−1} = (1/(1 − ρ))(I_T − (ρ/(1 + (T − 1)ρ)) v vᵀ),
and hence
vᵀ R^{−1} v = (1/(1 − ρ))(T − ρT²/(1 + (T − 1)ρ)) = T/(1 + (T − 1)ρ).
We can see that as ρ → 1, r(θ̂_gee, θ̂_reg) → 1, and as ρ → 0, r(θ̂_gee, θ̂_reg) → T. In fact, for the general case of R, we can define the average correlation among different time points as
ยฏฯ=1 T(Tโ1)P iฬธ=jRij, then from equation (19), we know r(หฮธgee,หฮธreg)โฅT2 vTRv=T 1 + (Tโ1)ยฏฯ D Asymptotics and Efficiency of U test D.1 Pitman Efficiency of U test over t test We will derive the pitman efficiency on local alternative of small shift ฮดof certain distribution Fwith variance ฯ2. Recall that from definition in (1), we have U=1 n0n1n1X i=1n0X j=1Iy1iโฅy0j. From standard results in U statistics [23], we know โn(Uโฮธ)โdN(0, ฯ2 U=ฯ1ฯ2 1+ฯ2ฯ2 2) (20) where ฯk= lim nโโn nk, and ฯ2 k=V ar(E(h(y1, y0|yk)). Under H0, we know Fy1=Fy0, and hence ฯ2 1=E(E(Iy1i>y0j|y1i))2โ1 4 =E(Fy(y1i))2โ1 4= (Z1 0x2dx)โ1 4 =1 3โ1 4=1 12. 22 Similarly, we have ฯ2 2=1 12. And we know ฯ2 U(0) =ฯ1+ฯ2 12=1 12(n n0+n n1) +op(1). (21) Under local alternative, we have: E(U) =E(E(Iy1iโฅyj0|y1i)) =Z F(y+ฮด)f(y)dy, and accordingly ยตโฒ U(0) =โE(U) โฮด ฮด=0=Z f2(y)dy. (22) For t test, ฯ= ยฏy1โยฏy0, we have E(ฯ) =E(ยฏy1โยฏy0) =ฮด, V ar(ฯ|H0) =V ar(ยฏy1โยฏy0) = (1 n1+1 n0)ฯ2. and accordingly ยตโฒ ฯ(0) =โE(ฯ) โฮด ฮด=0= 1, (23) ฯ2 ฯ(0) = lim nโโnV ar (U|H0) = lim nโโ(n n0+n n1)ฯ2= (ฯ1+ฯ0)ฯ2(24) Combining above results (21), (22), (24), (23), we complete the derivation of pitman efficiency: r(U, ฯ) =ยตโฒ U(0)/ฯU(0) ยตโฒฯ(0)/ฯฯ(0)2 =(R f2(y)dy)2 ฯ0+ฯ1 12 1 (ฯ0+ฯ1)ฯ2= 12ฯ2Z f2(y)dy2 . (25) D.1.1 Pitman efficiency under specific distributions Weโll further derive pitman efficiency for a few distributions. Fornormal distribution :N(0, ฯ2), and density f(y) =1โ 2ฯฯexp(โy2 2ฯ2), we have Z f2(y)dy=1 2ฯฯ2Z exp(โy2 ฯ2)dy= 2ฯฯ2Z exp(โu2)d(ฯu) =1 2ฯฯZ exp(โu2)du=1 2ฯฯโฯ=1 2โฯฯ, whereR exp(โu2)du=โฯ, because (Z exp(โu2)du)2=Z Z exp(โu2โv2)dudv =Z2ฯ 0Zโ 0eโr2rdrdฮธ =Z2ฯ 0dฮธZโ 0eโr2rdr= 2ฯ(โ1 2eโr2 โ 0) =ฯ. Then we have r(U, ฯ) = 12 ฯ2[1 2โฯฯ]2=3 ฯโ0.955. 
(26) ForLaplace distribution :Lap(0, b), with density f(y) =1 2bexp(โ|y|/b)and variance V ar(y) = 2b2, we have Zโ โโf2(y)dy= 2Zโ 0f2(y)dy= 2Zโ 01 4b2exp(โ2|y| b)dy= 2Zโ 01 4b2exp(โ2y b)dy =1 2b2 โb 2eโ2y b โ 0 =1 4b, 23 and hence, r(U, ฯ) = 12(2 b2)[1 4b]2=3 2. (27) Forlognormal distribution :Log(0, b2), with density f(y) =1 ybโ 2ฯexp(โ(logy)2 2b2)and variance V ar(y) = (eb2โ1)eb2, we have f2(y) =1 y2b22ฯexp(โ(logy)2 b2), Z f2(y)dy=Z1 2ฯb2eโ2ueโu2/b2eudu=1 2ฯb2Z eโuโu2 b2du (letu= log y) =1 2ฯb2Z eb2 4eโ(u b+b 2)2du=eb2 4 2ฯb2Z eโw2d(bw) (letw=u b+b 2) =1 2bโฯeb2 4 and hence, r(U, ฯ) = 12( eb2โ1)eb2(1 2bโฯeb2 4)2=3 ฯb2(e5 2b2โe3 2b2), (28) which increase exponentially with b2. ForCauchy distribution :Cau(0,1), with density f(y) =1 ฯ(1+y2), we have Z f2(y)dy=1 ฯ2Z1 (1 +y)2dy=1 ฯ2Zฯ 2 0cos2ฮธdฮธ (lety= cos ฮธ) =1 ฯ2ฯ 2=1 2ฯ(observing cos2ฮธ=1 + cos(2 ฮธ) 2) andV ar(y) =โ, and hence, r(U, ฯ) =โ. (29) D.2 Asymptotics of Zero Trimmed U Lets1be the sum of ranks of all positive value in the 1st sample, i.e., s1=nโฒ 1X iฮบ(yโฒ 1i)Iyโฒ 1i>0. Note ฮบ(yโฒ 1i) =ฮบ(y1i),โy1i>0. Define, Sโฒ=nโฒ 1X iฮบ(yโฒ 1i) S=n1X iฮบ(y1i). Observing that nโฒ 1โn+ 1representing number of zeros in {yโฒ 1i}nโฒ 1 i=0, and the average rank for those zeros arenโฒ 0+nโฒ 1+1+n+ 0+n+ 1 2, we have Sโฒ=s1+ (nโฒ 1โn+ 1)nโฒ 0+nโฒ 1+ 1 + n+ 0+n+ 1 2. 24 Similarly, S=s1+ (n1โn+ 1)n0+n1+ 1 +
n₀⁺ + n₁⁺)/2.
By definition,
W′ = S′ − n₁′(n₀′ + n₁′ + 1)/2, W = S − n₁(n₀ + n₁ + 1)/2.
Now, defining w₁ = s₁ − n₁⁺(n₀⁺ + n₁⁺ + 1)/2, we have
W′ = s₁ + (n₁′ − n₁⁺)(n₀′ + n₁′ + 1 + n₀⁺ + n₁⁺)/2 − n₁′(n₀′ + n₁′ + 1)/2
= s₁ − n₁⁺(n₀⁺ + n₁⁺ + 1)/2 + (n₁′ n₀⁺ − n₁⁺ n₀′)/2
= w₁ + (n₁′ n₀⁺ − n₁⁺ n₀′)/2.
Similarly,
W = w₁ + (n₁ n₀⁺ − n₁⁺ n₀)/2.
If d = 0, i.e., P(y₁⁺ ≥ y₀⁺) = 1/2, we have E(s₁ | p₀, p₁) = n₁⁺(n₀⁺ + n₁⁺ + 1)/2, i.e., E(w₁ | p₀, p₁) = 0. Then we have
E(W′ | p₀, p₁) = (n₁′ n₀⁺ − n₁⁺ n₀′)/2 = (n₁n₀/2) p(p₀ − p₁),
E(W | p₀, p₁) = (n₁ n₀⁺ − n₁⁺ n₀)/2 = (n₁n₀/2)(p₀ − p₁).
Given that p₀ and p₁ are fixed, n₀⁺, n₁⁺, n₀′ and n₁′ are all fixed. So
Var(W | p₀, p₁) = Var(W′ | p₀, p₁) = Var(s₁ | p₀, p₁) = n₀⁺n₁⁺(n₀⁺ + n₁⁺ + 1)/12.
We can then compute Var(W′) under H₀ from its conditional expectation and conditional variance:
Var(W′) = Var(E(W′ | p₀, p₁)) + E(Var(W′ | p₀, p₁))
= (n₀²n₁²/4) p² [p₁(1 − p₁)/n₁ + p₀(1 − p₀)/n₀] + (n₁n₀p₁p₀/12)(n₁p₁ + n₀p₀) + o(n³)  (30)
= (n₀n₁(n₀ + n₁)/4)(p³ − p⁴ + p³/3) + o(n³)  (under H₀, p = p₀ = p₁).  (31)
Similarly, we have
Var(W) = (n₀²n₁²/4)[p₁(1 − p₁)/n₁ + p₀(1 − p₀)/n₀] + (n₁n₀p₁p₀/12)(n₁p₁ + n₀p₀) + o(n³)  (32)
= (n₀n₁(n₀ + n₁)/4)(p − p² + p³/3) + o(n³).  (33)
D.3 Pitman Efficiency of the Zero-Trimmed U Test over the Standard U Test
We have a compound alternative hypothesis in two dimensions, m = p₁ − p₀ and d = P(y₁⁺ > y₀⁺) − 1/2. However, Pitman efficiency is defined for simple hypothesis testing. To handle the compound hypothesis, we specify a direction φ; in the direction of φ, the test becomes a simple hypothesis. Specifically, let
m(ν) = ν cos φ, d(ν) = ν sin φ.
In the direction of φ, we test H₀: ν = 0 vs H₁^φ: ν > 0. Then we know
µ′(0) = ∂µ/∂ν |_{ν=0} = [(∂µ/∂m)(∂m/∂ν) + (∂µ/∂d)(∂d/∂ν)]_{ν=0} = cos φ (∂µ/∂m)|_{m=0,d=0} + sin φ (∂µ/∂d)|_{m=0,d=0}.  (34)
So, we need to compute µ(m, d) under the local alternative to obtain the above quantity.
Observe that
w₁ = s₁ − n₁⁺(n₀⁺ + n₁⁺ + 1)/2 = n₀⁺n₁⁺(1/2 − U_{n₀⁺n₁⁺}),
where U_{n₀⁺n₁⁺} is the Mann–Whitney U on the positive-only samples:
U_{n₀⁺n₁⁺} = (1/(n₀⁺n₁⁺)) Σ_{i=1}^{n₁⁺} Σ_{j=1}^{n₀⁺} I_{y₁i′ > y₀j′}.
Knowing that E(U_{n₀⁺n₁⁺} | p₀, p₁) = P(y₁⁺ ≥ y₀⁺), we have
E(W′ | p₀, p₁) = −n₀⁺n₁⁺ (P(y₁⁺ ≥ y₀⁺) − 1/2) + (n₁′ n₀⁺ − n₁⁺ n₀′)/2 = −n₀⁺n₁⁺ d − (n₁⁺ n₀′ − n₁′ n₀⁺)/2.
Hence,
µ_{W′}(m, d) = E(W′) = E(E(W′ | p₀, p₁)) = −n₁n₀ d p(p + m) − (n₁n₀/2)[(p + m)² − p(p + m)] = −(n₁n₀/2)[2p(p + m)d + m(p + m)].
Similarly,
µ_W(m, d) = E(W) = E(E(W | p₀, p₁)) = E(−n₀⁺n₁⁺ d − (n₁⁺ n₀ − n₁ n₀⁺)/2) = −n₁n₀ d p(p + m) − (n₁n₀/2)[p + m − p] = −(n₁n₀/2)[2p(p + m)d + m].
We can ignore the common factor −n₀n₁/2 in the ratio µ′_{W′}(0)/µ′_W(0). Observe that
∂µ_{W′}/∂m |₀ = (2pd + p + 2m)|_{m=0,d=0} = p, ∂µ_{W′}/∂d |₀ = 2p(p + m)|_{m=0,d=0} = 2p²,
∂µ_W/∂m |₀ = (2pd + 1)|_{m=0,d=0} = 1, ∂µ_W/∂d |₀ = 2p(p + m)|_{m=0,d=0} = 2p².
Combining the above with (31), (33) and (34), we complete the proof of the Pitman efficiency for the Zero-Trimmed U:
r_φ(W′, W) = (σ_W²(0)/σ_{W′}²(0)) (µ′_{W′}(0)/µ′_W(0))² = ((1 − p + p²/3)/(p² − p³ + p²/3)) ((p cos φ + 2p² sin φ)/(cos φ + 2p² sin φ))².
Figure 2: Plot of r_φ(W′, W) versus p for multiple fixed φ. Figure 3: Plot of r_φ(W′, W) versus φ for multiple fixed p.
Note that we actually used the adjusted variance for the non-zero-trimmed version W to handle the ties on the zeros. If we
calculated the unadjusted variance from the original approach, i.e., Var(W_o) = n₁n₀(n₁ + n₀ + 1)/12, then the Pitman efficiency of the Zero-Trimmed U over the unadjusted W is
r_φ(W′, W_o) = ((1/3)/(p³ − p⁴ + p³/3)) ((p cos φ + 2p² sin φ)/(cos φ + 2p² sin φ))²,  (35)
observing that W = W_o as a point estimate.
Figure 4: Plot of r_φ(W′, W_o) versus p for multiple fixed φ. Figure 5: Plot of r_φ(W′, W_o) versus φ for multiple fixed p.
E Doubly Robust Generalized U
E.1 The Robustness of DRGU
When there are no confounding effects, i.e., y ⊥ z, we can show that E(h(y_i, y_j)) = δ by conditioning on z:
E(h(y_i, y_j)) = P(z_i = 1) E(h_ij | z_i = 1) + P(z_i = 0) E(h_ij | z_i = 0) = p ((1 − p)/(2p(1 − p)) δ + 0) + (1 − p)(0 + p/(2p(1 − p)) δ) = δ,
and hence E(U_n) = δ. We can further show asymptotic normality: √n (U_n − δ) →_d N(0, 4σ_h²). When there are confounding effects, we can form an inverse probability weighted (IPW) U statistic:
U_n^{IPW} = (n choose 2)^{−1} Σ_{(i,j)∈C(n,2)} h_ij^{IPW},
where
h_ij^{IPW} = (z_i(1 − z_j)/(2π_i(1 − π_j))) φ(y_i1 − y_j0) + (z_j(1 − z_i)/(2π_j(1 − π_i))) φ(y_j1 − y_i0),
and π_i = E(z_i | w_i). Assuming y ⊥ z | w, we can show
E(h_ij^{IPW}) = E(E(h_ij^{IPW} | w_i, w_j))
= E(E(z_i(1 − z_j) φ(y_i1 − y_j0) | w_i, w_j)/(2π_i(1 − π_j))) + E(E(z_j(1 − z_i) φ(y_j1 − y_i0) | w_i, w_j)/(2π_j(1 − π_i)))
= E(E(z_i(1 − z_j) | w_i, w_j) E(φ(y_i1 − y_j0) | w_i, w_j)/(2π_i(1 − π_j))) + E(E(z_j(1 − z_i) | w_i, w_j) E(φ(y_j1 − y_i0) | w_i, w_j)/(2π_j(1 − π_i)))
= (π_i(1 − π_j)/(2π_i(1 − π_j))) E(φ(y_i1 − y_j0)) + (π_j(1 − π_i)/(2π_j(1 − π_i))) E(φ(y_j1 − y_i0)) = δ,
and hence the IPW-adjusted U statistic is unbiased, i.e., E(U_n^{IPW}) = δ. By further introducing g_ij = E(φ(y_i1 − y_j0) | w_i, w_j), we form a Doubly Robust Generalized U statistic, U_n^{DR}, with kernel
h_ij^{DR} = (z_i(1 − z_j)/(2π_i(1 − π_j)))(φ(y_i1 − y_j0) − g_ij) + (z_j(1 − z_i)/(2π_j(1 − π_i)))(φ(y_j1 − y_i0) − g_ji) + (g_ij + g_ji)/2.
It is easy to show that E(h_ij^{DR}) = δ assuming we know π and g, observing
E(h_ij^{DR}) = E(E(h_ij^{DR} | w_i, w_j)) = E(E(z_i(1 − z_j) | w_i, w_j)(g_ij − g_ij)/(2π_i(1 − π_j))) + E(E(z_j(1 − z_i) | w_i, w_j)(g_ji − g_ji)/(2π_j(1 − π_i))) + E((g_ij + g_ji)/2) = 0 + 0 + δ = δ.
E.2 Semi-parametric Efficiency of DRGU
In this section, we sketch the proof that DRGU is the most efficient estimator under the semi-parametric set-up.
At a high level, we need to show DRGU has influence function (IF) that correspond to efficient influence function (EIF) for parameter ฮด=ฯ(y1โy0), so naturally there are two steps: (i) find EIF for ฮด=ฯ(y1โy0), (ii) show DRUโs IF is consistent with EIF. 29 Preliminary : For regular asymptotic linear estimator หฮธ, we haveโn(หฮธโฮธ) =1 nP iฯi+op(1).ฯis the IF for หฮธ. EIF ฯโฒis defined as the unique IF with smallest variance, i.e., V ar(ฯโฒ)โคV ar(ฯ),โฯ. Sinceโn(หฮธโฮธ)โpN(0, V ar (ฯ)), we know estimator with EIF has smallest variance. For finding the EIF, we follow the standard recipe in semi-parametric theory (i.e., 13.5 of [36]). 1.Identify IF ฯFfor full data, OF={(y(1), y(0), x)}, where y(1)andy(0)represent response variable under treatment and control respectively. 2. Find all IFs ฯfor observation data Oo={(y, z, w )}, ฯ(y, z, x ) =ฯo(y, z, x ) + ฮ where E(ฯo(y, z, x )|OF) =ฯF(y(1), y(0)) andฮ ={L:E(L(y, z, x )|OF) = 0}is the augmentation space. Note that here, y=zy(1) + (1 โz)y(0)with the stable unit treatment value assumption(SUTV A). 3. Identify the EIF through projection onto the augmentation space. ฯโฒ(y, z, x ) =ฯo(y, z, x )โฮ (ฯo(y, z, x )|ฮ) where ฮ (f|ฮ)is a projection of a function fon space ฮ, such that E[(fโฮ (f|ฮ))g] = 0,โgโฮ. For full data OF={(y(1), y(0), x)}, we can construct U kernel hF ij= 0.5(ฯ(yi(1)โyj(0) + ฯ(yj(1)โyi(0)), and form a U statistic: UF=n 2โ1X iฬธ=jhF ij for unbiased estimation of ฮด=ฯ(y(1)โy(0)). From Hajek projection of
U statistics, we knowโn(UFโฮด) =2 nP iหh(yi) +op(1), where หh(yi) =E(hF ij|OF i)โฮด. Now observe, E(hF ij|OF i) = 0 .5E(ฯ(yi(1)โyj(0)|OF i) + 0.5E(ฯ(yj(1)โyi(0)|OF i) = 0.5Z ฯ(yi(1)โs)p0(s)ds+ 0.5Z ฯ(tโyi(0))p1(t)dt = 0.5h1(yi(1)) + 0 .5h0(yi(0)) where h1(y) =R ฯ(yโs)p0(s)ds,h0(y) =R ฯ(tโy)p1(t)dt, andp1(ยท), p0(ยท)are marginal density ofyunder treatment and control respectively. We then haveโn(UFโฮด) =1 nP i[h1(yi(1)) + h0(yi(0))โ2ฮด] +op(1), and as a result the corresponding IF under full data is ฯF=h1+h0โ2ฮด. Next step is to find an IF ฯofor observation data Oo={(y, z, x )}. Letฯobe the inverse propensity weighting version of the ฯF, i.e., ฯo=z ฯh1+1โz 1โฯh0โ2ฮด where ฯ=E(z|x). We can verify that E(ฯo|OF) =ฯF, observing E(z ฯh1|OF) =h1 ฯE(z|x) =h1 as similarly E(1โz 1โฯh0|OF) =h0. 30 We then specify the augmentation space ฮ. For any function L(y, z, x ), since zโ {0,1}, we can represent the function as L(y, z, w ) =zL1(y, w) + (1 โz)L0(y, w). Further by definition, E(L|OF) = 0 , we know E(L|OF) =ฯL1(y(1), w) + (1 โฯ)L0(y(0), w) = 0 ,โw, y(0), y(1) Since above equation applies to all values of w, y(0), y(1), we know L0(y(0), w) = L0(w), L1(y(1), w) =L1(w),L0(w) =โฯ 1โฯL1(w), and we can represent L(y, z, w )as L(y, z, w ) =zL1(w) + (1 โz)โฯ 1โฯL1(w) =zโฯ 1โฯL1(w) Thus, we can specify ฮ ={L:L(y, z, w ) = (zโฯ)f(w)for arbitrary f}. We next find projection so that EIF ฯโฒ=ฯoโฮ (ฯo|ฮ). From specification of ฮ, letฮ (zh1|ฮ) = (zโฯ)f1, and ฮ ((1โz)h0|ฮ) = ( zโฯ)f0. By definition, E([zh1โ(zโฯ)f1][(zโฯ)f]) = 0 ,โf. Observing, E([zh1โ(zโฯ)f1][(zโฯ)f]) =E(z(zโฯ)fhโ(zโฯ)2f1f) =E(ฯ(1โฯ)fE(h1|z= 1, x))โE(ฯ(1โฯ)f1f) =E(ฯ(1โฯ) [E(h1|z= 1, x)โf1]f) = 0 ,โf we have f1=E(h1|z= 1, x). Similarly, we have f0=โE(h0|z= 0, x). 
Substitute the two equation, we get ฮ (ฯo|ฮ) =zโฯ ฯE(h1|z= 1, x)โzโฯ 1โฯE(h0|z= 0, x) and hence the EIF is ฯโฒ=z ฯh1+1โz 1โฯh0โ2ฮดโzโฯ ฯE(h1|z= 1, x) +zโฯ 1โฯE(h0|z= 0, x) =z ฯ(h1โE(h1|z= 1, w)) +1โz 1โฯ(h0โE(h0|z= 0, w)) +E(h1|z= 1, w) +E(h0|z= 0, w)โ2ฮด (36) We then need to show the UDR nhas influence function that is consistent with ฯโฒ, i.e.,ฯDR=ฯ+op(1). From Hajek projection, we can obtain UDR nโs influence function, i.e., ฯDR= 2E(hDR ij|Oo i)โ2ฮด. Recall hDR ij=zi(1โzj) 2ฯi(1โฯj)(ฯ(yi1โyj0)โgij) +zj(1โzi) 2ฯj(1โฯi)(ฯ(yj1โyi0)โgji) +gij+gji 2. Letโs calculate the E(hDR ij|Oo i)term by term. For the first term, we have E(zi1โzj 1โฯjฯ(yi1โyj0))|Oo i) =E((1โzi)ฯ(yi1โyj0))|Oo i) =zih1(yi) and similarly E((1โzi)zj ฯjฯ(yj1โyi0) = (1 โzi)h0(yi) By definition, gij=E[ฯ(yiโyj)|wi, wj, zi= 1, zj= 0] andgji=E[ฯ(yjโyi)|wj, wi, zj= 1, zi= 0]. we can show 31 E(gij|Oo i) =ZZ Z ฯ(sโt)p1(s|wi)p0(t|wj)dsdt p(wj)dwj s=yi =Z Z ฯ(sโt)p1(s|wi)Z p0(t|wj)p(wj)dwj dsdt s=yi =Z Z ฯ(sโt)p1(s|wi)p0(t)dsdt s=yi =ZZ ฯ(sโt)p0(t)dt p(s|wi, zi= 1)ds s=yi =E(h1(yi)|wi, zi= 1) and similarly, E(gji|Oo i) =E(h0(yi)|wi, zi= 0) . We also know E(1โzj 1โฯjgij|Oo i) =E E(1โzj 1โฯj|wj)gij|Oo i =E(gij|Oo i). and similarly E(zj ฯjgji|Oo i) =E(gji|Oo i). Substituting above equations, we have E(hDR ij|Oo i) =ฯโฒ 2+ฮด, and hence ฯDR=ฯโฒexactly. E.3 Asymptotics of DRGU with UGEE Weโll first sketch the proof for the asymptotic normality of DRGU. The proof based on UGEE is very similar to GEE in Appendix C.1. Recall that, Un(ฮธ) =X i,jโCn 2Un,ij=X i,jโCn 2Gij(hijโfij) =0, where, hij= [hij1,
hij2, hij3]T fij= [fij1, fij2, fij3]T hij1=zi(1โzj) 2ฯi(1โฯj)(ฯ(yi1โyj0)โgij) +zj(1โzi) 2ฯj(1โฯi)(ฯ(yj1โyi0)โgji) +gij+gji 2 hij2=zi+zj hij3=zi(1โzj)ฯ(yi1โyj0) +zj(1โzi)ฯ(yj1โyi0) fij1=ฮด fij2=ฯi+ฯj fij3=ฯi(1โฯj)gij+ฯj(1โฯi)gji ฯi=ฯ(wi;ฮฒ) gij=g(wi, wj;ฮณ) and Gij=DT ijVโ1 ij Dij=โfij โฮธ, Vij=๏ฃฎ ๏ฃฐฯ2 ij10 0 0ฯ2 ij20 0 0 ฯ2 ij3๏ฃน ๏ฃป ฯ2 ijk=V ar(hijk|wi, wj). 32 Recall ui=E(Un,ij|yi0, yi1, zi, wi),ฮฃ =V ar(ui),Mij=โ(fijโhij) โฮธ, andB=E(GM), and หฮด be the 1st element in หฮธ. LetยฏUn(ฮธ, ฮฑ) =1 (n 2)P i,jโCn 2Un,ij. We know ยฏUn(ฮธ, ฮฑ)is a U statistics with mean E(Un,ij) = 0 . From asymptotic theory of U statistics, we knowโnยฏUn(ฮธ, ฮฑ)โdN(0,4ฮฃ) Then similarly as (12) and (14), we know โn(หฮธโฮธ) =โ(โยฏUn โฮธ)โโnยฏUn(ฮธ, ฮฑ) +op(1) (37) Similarly as (15), we have โยฏUn โฮธโpE(GโS โฮธ) =โE(GM) (38) Combining the above two, we have โn(หฮธโฮธ) =BโโnยฏUn(ฮธ, ฮฑ) +op(1) (39) where B=E(GM). Hence, we establish the following asymptotic normality: โn(หฮธโฮธ)โdN(0,4(Bโ)TฮฃBโ). We skip the proof for consistency when only one of ฯandgis correctly specified, as most of it has been discussed in Appendix E.1. As for semi-parametric bound ofหฮด, proof is straightforward building on results from E.2 and insights from B.3. Observing Dhas structure of block diagonal with following structure: D=๏ฃฎ ๏ฃฏ๏ฃฏ๏ฃฐ1 0 ยทยทยท 0 0d22ยทยทยท d2p ............ 0dT2ยทยทยท dTp๏ฃน ๏ฃบ๏ฃบ๏ฃป Recall EIF ฯโฒfrom E.2, we know E(ฯโฒSฯ) = 0 andE(ฯโฒSg) = 0 , and thus E(M)has following structure: E(M) =๏ฃฎ ๏ฃฏ๏ฃฏ๏ฃฐ1 0 ยทยทยท 0 0m22ยทยทยท m2p ............ 0mT2ยทยทยท mTp๏ฃน ๏ฃบ๏ฃบ๏ฃป We then know B=E(GM)has the following structure: B=๏ฃฎ ๏ฃฏ๏ฃฏ๏ฃฐฯโ2 1 0ยทยทยท 0 0b22ยทยทยท b2p ............ 0bp2ยทยทยท bpp๏ฃน ๏ฃบ๏ฃบ๏ฃป Since E(ฯโฒSฯ) = 0 andE(ฯโฒSg) = 0 , we know ฮฃis block diagonal, ฮฃ =๏ฃฎ ๏ฃฏ๏ฃฏ๏ฃฐฯ2 10ยทยทยท 0 0s22ยทยทยท s2p ............ 
0sp2ยทยทยท spp๏ฃน ๏ฃบ๏ฃบ๏ฃป Observing the asymptotic covariance matrix is 4(Bโ1)TฮฃBโ1, we know asymptotic variance of หฮดis same as that of EIF, i.e., ฯ2 ฮด= 4ฯ2 1=V ar(ฯโฒ). 33 F Details on Simulation Studies F.1 Regression Adjustment We compare the type I error rate of regression adjustment and the unadjusted t-test. We perform these simulations using data generated from a Poisson distribution with the following generation process: wiโผ N(0,1), zi|wiโผBernoulli 1 1+eโฮณwi , yi|zi, wiโผPoisson e2+ฮฒzzi+ฮฒwwi . Here ฮณโฅ0is a hyperparameter which controls the degree of confounding. When ฮณ= 0there is no confounding. ฮฒzcontrols the treatment effect. We evaluation type I error with ฮฒz= 0, and power withฮฒz>0. Table 4: Type I Error Comparison: Unadjusted t-test vs. Regression Adjustment ฮณ t-test RA 0.0 0.0504 0.0504 0.2 0.0576 0.0482 0.4 0.0706 0.0522 0.6 0.0844 0.0496 0.8 0.1082 0.0570 1.0 0.1262 0.0524 We validate that regression adjustment controls type I error, and the unadjusted t-test leads to type I error rate inflation under confounding. Table 5: Power Comparison: Unadjusted t-test vs. Regression Adjustment Treatment EffectNo Confounding ( ฮณ= 0.0)With Confounding ( ฮณ= 0.1) Power (t-test) Power (RA) Power (t-test) Power (RA) 0.10 0.727 0.702 0.819 0.690 0.11 0.722 0.779 0.825 0.767 0.12 0.734 0.848 0.826 0.834 0.13 0.729 0.907 0.828 0.879 0.14 0.751 0.947 0.862 0.949 0.15 0.773 0.956 0.863 0.961 0.16 0.795 0.975 0.881 0.987 0.17 0.793 0.992 0.885 0.991 0.18 0.777 0.995 0.881 0.990 0.19 0.796 0.999 0.900 0.996 0.20 0.807 0.997 0.908 0.999 We demonstrate that regression adjustment improves
power over the t-test ( γ= 0). When confounding is present, the power of the raw unadjusted t-test is not meaningful, as it cannot control the type I error.

F.2 GEE

We evaluate the type I error and power of two estimators in the presence of confounding under varying sample sizes and effect sizes.

(i) GLM adjustment at the final time point: at $t = T$, fit a Poisson regression $Y_{iT} \sim \mathrm{Poisson}\big(\exp(\beta_0 + \beta_1 z_i + \gamma w_i)\big)$ to obtain $\hat{\beta}_1^{GLM}$.

(ii) GEE adjustment with longitudinal data: using all observations $t = 1, \ldots, T$, obtain $\hat{\beta}_1^{GEE}$ by solving the estimating equation

$$U_n(\beta_1) = \sum_{i=1}^{N}\sum_{t=1}^{T} D_{it}^\top \big(Y_{it} - \exp(\beta_0 + \beta_1 z_i + \gamma w_i)\big) = 0, \quad \text{where } D_{it} = \frac{\partial E[Y_{it} \mid z_i, w_i]}{\partial \beta_1} = z_i \exp(\beta_0 + \beta_1 z_i + \gamma w_i).$$

We generate a longitudinal panel of N subjects over T visits by first drawing a time-invariant confounder and treatment for each subject, then simulating a Poisson count at each visit:

$$w_i \sim \mathcal{N}(0, 1), \quad z_i \sim \mathrm{Bernoulli}\big(\sigma(\alpha_0 + \alpha_1 w_i)\big), \quad Y_{it} \sim \mathrm{Poisson}\big(\exp(\beta_0 + \beta_1 z_i + \gamma w_i)\big),$$

for $i = 1, \ldots, N$ and $t = 1, \ldots, T$.

Table 6: Empirical type I error rates ( β1= 0) for GEE and GLM estimators under confounded assignment at nominal levels α.

Sample Size   α      GEE    GLM
50            0.05   0.068  0.057
50            0.01   0.021  0.013
200           0.05   0.048  0.044
200           0.01   0.009  0.005

We demonstrate that GEE controls type I error adequately in large samples, with only modest inflation when sample sizes are small.

Table 7: Empirical power for Poisson GEE vs. GLM estimators across sample sizes N, effect sizes β1, and significance levels α.

Sample Size   β1     α      GEE    GLM
50            0.10   0.05   0.283  0.100
50            0.10   0.01   0.138  0.025
50            0.20   0.05   0.749  0.200
50            0.20   0.01   0.556  0.066
200           0.10   0.05   0.729  0.189
200           0.10   0.01   0.490  0.065
200           0.20   0.05   1.000  0.645
200           0.20   0.01   0.997  0.404

We demonstrate that, by leveraging longitudinal repeated measurements, the GEE-adjusted estimator achieves higher statistical power than the Poisson GLM across both small and large samples.
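The data-generating process used for the GEE comparison above can be sketched directly. This is a minimal simulation of the longitudinal Poisson panel; the specific parameter values ($\alpha_0 = 0$, $\alpha_1 = 1$, etc.) are illustrative assumptions, not the ones used for the tables:

```python
import numpy as np

def simulate_panel(n_subjects, n_visits, alpha0=0.0, alpha1=1.0,
                   beta0=1.0, beta1=0.0, gamma=1.0, seed=0):
    """Draw a time-invariant confounder w_i and treatment z_i per subject,
    then a Poisson count at each of the T visits (mean constant over time)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_subjects)                          # confounder
    z = rng.binomial(1, 1.0 / (1.0 + np.exp(-(alpha0 + alpha1 * w))))
    mu = np.exp(beta0 + beta1 * z + gamma * w)                   # subject mean
    y = rng.poisson(mu[:, None], size=(n_subjects, n_visits))    # (N, T) counts
    return y, z, w

y, z, w = simulate_panel(n_subjects=200, n_visits=4)
```

A GEE fit on this panel (e.g., with an exchangeable working correlation) would then use subject as the grouping variable, as described in the text.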
Moreover, this power advantage is especially pronounced at medium effect sizes ( ฮฒ1= 0.1) compared to larger ones ( ฮฒ1= 0.2). F.3 Mann Whitney U We compare the zero-trimmed Mann-Whitney U-test to the standard Mann-Whitney U-test and two-sample t-test in type I error rate and power. We simulate the three tests using data generated from zero-inflated log-normal and positive Cauchy distributions and multiple effect sizes. Formally, we generate control data y0i= (1โDi)yโฒ 0i, where DiโผBernoulli (p0)andyโฒ 0iโผf(0, ฯ). We generate test data y1j= (1โDj)yโฒ 1jwhere DjโผBernoulli (p0+pโ)andyโฒ 1jโผf(ยต, ฯ)forpโ, ยตโฅ0. Here fdenotes either the lognormal or positive Cauchy distribution. 35 Table 8: Type I Error Rates at ฮฑ= 0.05for Zero-Inflated Data Distribution Zero Prop. Sample SizeType I Error Rate Zero-trimmed U Standard U t-test LogNormal0.050 0.0540 0.0540 0.0015 200 0.0515 0.0515 0.0040 0.250 0.0435 0.0500 0.0025 200 0.0480 0.0545 0.0050 0.550 0.0315 0.0465 0.0020 200 0.0405 0.0490 0.0055 0.850 0.0230 0.0475 0.0005 200 0.0305 0.0455 0.0050 Positive Cauchy0.050 0.0535 0.0535 0.0220 200 0.0530 0.0525 0.0200 0.250 0.0465 0.0540 0.0205 200 0.0405 0.0480 0.0240 0.550 0.0335 0.0455 0.0215 200 0.0420 0.0500 0.0175 0.850 0.0290 0.0550 0.0230 200 0.0355 0.0500 0.0170 Table 9: Power
Comparison for Positive Cauchy and LogNormal Distributions with Equal Zero- Inflation (50%) Distribution Sample Size Effect SizePower at ฮฑ= 0.05 Zero-trimmed U Standard U t-test Positive Cauchy500.25 0.038 0.040 0.018 0.50 0.050 0.048 0.022 0.75 0.113 0.085 0.033 1.00 0.131 0.086 0.041 2000.25 0.079 0.065 0.011 0.50 0.165 0.094 0.026 0.75 0.339 0.166 0.031 1.00 0.555 0.262 0.048 LogNormal500.25 0.033 0.043 0.002 0.50 0.045 0.053 0.003 0.75 0.048 0.053 0.004 1.00 0.050 0.054 0.004 2000.25 0.044 0.044 0.009 0.50 0.067 0.059 0.004 0.75 0.090 0.067 0.007 1.00 0.138 0.082 0.011 We validate that the zero-trimmed Mann-Whitney U-test has more power than the other two tests on almost all scenarios of zero-inflated heavy-tailed data, while still controlling type I error. F.4 Doubly Robust Generalized U F.4.1 Snapshot DRGU We generate nโ {50,200}i.i.d. observations (yi, zi, wi)withp= 1baseline covariates for simplicity wiโผ N(0,1). The true propensity score is logistic, ฯ(wi) =ฯ โ0.2wi+ 0.6w2 i , z i|wiโผBernoulli ฯ(wi) , 36 where ฯ(x) = 1 /(1 +eโx). The outcome mean model is: ยต0(wi, zi) =ฮฒzi+ 1.0wi, y i|(zi, wi)โผ P ยต0(wi, zi),1 where constant ATE ฮฒโ {0.0,0.5}andPis one of the normal, log-normal, and Cauchy distributions. We compare Type I error rates and power of correctly specified DRGU , correctly specified linear regression OLS, and Wilcoxon rank sum test U(which does not account for confounding covariates). To probe double robustness, we set up misDRGU as misspecifying the quadratic outcome propensity score model with a linear mean model, while the outcome model in misDRGU is specified correctly. 
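The snapshot data-generating process described above can be sketched in a few lines. We read $\mathcal{P}(\mu_0, 1)$ as a location family with scale 1; the log-normal and Cauchy shifts below are illustrative choices under that reading:

```python
import numpy as np

def simulate_snapshot(n, beta=0.0, dist="normal", seed=0):
    """(y_i, z_i, w_i) with the quadratic logistic propensity
    pi(w) = sigmoid(-0.2*w + 0.6*w**2) and outcome mean beta*z + 1.0*w.
    Noise laws are location families with scale 1 (an assumed reading of
    P(mu_0, 1))."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    pi = 1.0 / (1.0 + np.exp(-(-0.2 * w + 0.6 * w**2)))  # true propensity
    z = rng.binomial(1, pi)
    mu = beta * z + 1.0 * w
    noise = {"normal": rng.standard_normal(n),
             "lognormal": rng.lognormal(0.0, 1.0, n),
             "cauchy": rng.standard_cauchy(n)}[dist]
    return mu + noise, z, w

y, z, w = simulate_snapshot(500, beta=0.5, dist="lognormal")
```

A misspecified propensity model (misDRGU) corresponds to fitting a logistic regression that is linear in w, dropping the quadratic term used in the true `pi`.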
Table 10: Type I Error Rate at sample size = 200 Distribution DRGU misDRGU OLS U Normal ฮฑ= 0.05 0.041 0.049 0.043 0.185 LogNormal ฮฑ= 0.05 0.054 0.070 0.054 0.150 Cauchy ฮฑ= 0.05 0.052 0.065 0.042 0.149 Normal ฮฑ= 0.01 0.014 0.005 0.012 0.045 LogNormal ฮฑ= 0.01 0.012 0.020 0.007 0.049 Cauchy ฮฑ= 0.01 0.012 0.025 0.008 0.02 Table 11: Power at ฮฑ= 0.05, ATE=0.5 Distribution Sample size DRGU misDRGU OLS U Normal200 0.750 0.585 0.940 0.299 50 0.135 0.085 0.135 0.035 LogNormal200 0.610 0.515 0.435 0.235 50 0.260 0.210 0.190 0.110 Cauchy200 0.660 0.580 0.435 0.310 50 0.265 0.180 0.165 0.130 F.4.2 Longitudinal DRGU For the longitudinal setting, we use the same simulation setup as above for observations (yit, zi, wit) fort= 1, ..., T = 2time points. The true propensity score is logistic of time-varying covariates, ฯ(wi) =ฯ โ0.3wi1โ0.6wi2 , z i|wiโผBernoulli ฯ(wi) , where ฯ(x) = 1 /(1 +eโx). The outcome mean model is: ยต0(wit, zi) =ฮฒzi+ 1.0wit, y it|(zi, wit)โผ P ยต0(wit, zi),1 We compare three models longDRGU ,DRGU using the last timepoint data snapshot, and GEE. The time-varying covariates highlight the strength of using longitudinal method compared to snapshot analysis. Table 12: Type I Error Rate at ฮฑ= 0.05, sample size = 200, T=2 Distribution longDRGU DRGU GEE Normal 0.03 0.04 0.04 LogNormal 0.04 0.05 0.02 Cauchy 0.05 0.05 0.05 Table 13: Power at ฮฑ= 0.05, ATE=0.5, sample size = 200, T=2 Distribution Sample size longDRGU DRGU GEE Normal200 0.85 0.88 0.92 50 0.52 0.39 0.75 LogNormal200 0.85 0.78 0.68 50 0.37 0.30 0.33 Cauchy200
0.83 0.76 0.66
50 0.38 0.32 0.29

G Details on A/B Testing

G.1 Email Marketing

We conducted an A/B test comparing our legacy email marketing recommender system against a newer version designed with improved campaign personalization using neural bandits. We randomly assigned audience members to receive recommendations from either system and measured the downstream impact on conversion value, a proprietary metric measuring the value of conversions. The resulting conversion value data presented challenging statistical properties: extreme zero inflation (>95% of members had no conversions in both test groups) and significant right skew among the 1% who did convert. These characteristics violate the assumptions of conventional testing methods such as the standard t-test.

The zero-trimmed Mann-Whitney U-test proved ideal for this scenario, balancing the proportion of zeros between test groups before performing rank comparisons. This approach maintained appropriate type I error control while providing superior statistical power compared to both the t-test and the standard Mann-Whitney U-test. Using the zero-trimmed Mann-Whitney U-test, we detected a statistically significant +0.94% lift in overall conversion value, most of which was driven by a +0.11% lift in B2C product conversions among members experiencing the improved campaign personalization (p-value < 0.001). By contrast, the t-test was unable to detect a significant effect on the conversion value metric (p-value = 0.249).

G.2 Targeting in Feed

We conducted an online experiment to evaluate the impact of a new marketing algorithm versus the legacy algorithm for recommending ads in a particular slot in Feed. The primary interest of the study is the downstream conversion impact. Members eligible for a small number of pre-selected campaigns were the unit of randomization. We encountered two main challenges. First, the ad impression allocation mechanism showed a selection bias favoring recommendations from the control system.
As a result, we wanted to adjust for impressions as a cost and compare return on investment (ROI) between the control and treatment groups. Second, limited campaign and participant selection introduced potential imbalance in baseline covariates even under randomization. Specifically, we observed that a segment of members with a lower baseline conversion rate was more likely to be in the treatment group than in the control group. This introduced a classic case of Simpson's paradox: the conversion rate averaged over all segments is similar in both groups, but higher in the treatment group when stratified by this confounding segment. We summarize these imbalanced features in Table 14. Figure 6 further shows the large distribution mismatch between impressions in the treatment and control groups.

We addressed both of these issues by using regression adjustment to estimate the lift in ROI while accounting for confounders such as membership in the segment with a low baseline conversion rate. We found the new algorithm to have a statistically significant lift of 1.84% in conversions per impression, with p-value < 0.001 and 95% confidence interval (1.64%, 2.05%). This is in contrast to failing to reject the null hypothesis of no effect when using a two-sample t-test for the difference in mean conversion rates (p-value = 0.154).

Table 14: Characteristics by treatment variant of the imbalanced data. Values are relative to mean values in the control group.
                       Control mean   Treatment mean
Conversions            1.0            +0.3%
Impressions            1.0            -37.7%
Low-baseline segment   1.0            +9.5%

Figure 6: Distributions of (normalized) impressions and conversions from the targeting-in-feed experiment.

G.3 Paid Search Campaigns

We illustrate leveraging longitudinal repeated measurements in A/B testing (via GEE) to improve power, using data collected in an online test run on paid ad campaigns over a 28-day period. We randomized 64 ad campaigns at the campaign level into test and control arms (32 campaigns each), a typical setup for tests run on third-party advertising platforms. We collected daily conversion values for each campaign throughout the experiment, yielding a time series of repeated measurements at the campaign-day level. Due to the limited sample size, a traditional two-sample comparison lacks the power to detect the treatment effects in this test.

To address this small-sample limitation, we fit a Generalized Estimating Equation (GEE) model using campaign as the grouping variable and an exchangeable working-correlation structure to capture within-campaign serial dependence. During the 28-day test, by "borrowing strength" across daily measurements, the GEE framework substantially reduced residual variance and produced tighter confidence intervals around the treatment coefficient. In this phase, the GEE-estimated treatment effect was very close to the significance threshold (p-value = 0.051). In comparison, a snapshot regression analysis using the last snapshot attains a p-value of 0.184.

We also reserved a 28-day validation period prior to the actual launch, during which no treatment was applied, so that the treatment and control groups should exhibit no true difference. We collected campaign-day conversion values in the same format and ran the identical GEE analysis, yielding an estimated effect indistinguishable from zero (p-value = 0.82).
This confirms that leveraging repeated measurements through GEE both enhances sensitivity to subtle treatment effects and maintains proper control of the type I error. Observing that the distributions of the response variables exhibit heavy-tailed characteristics, we further performed statistical testing using the doubly robust U, assuming a compound-symmetric correlation structure for $R(\alpha)$. We attained a statistically significant result with $\hat{P}(y_1 > y_0) = 0.54$ and p-value = 0.045.
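The rank-based estimand $\hat{P}(y_1 > y_0)$ reported above is the Mann-Whitney U statistic rescaled by $n_1 n_0$; a minimal sketch, counting ties as half-wins:

```python
import numpy as np

def prob_greater(y1, y0):
    """Rank-based estimate of P(y1 > y0): the Mann-Whitney U statistic
    divided by n1 * n0, with ties counted as 1/2."""
    a = np.asarray(y1, dtype=float)[:, None]
    b = np.asarray(y0, dtype=float)[None, :]
    wins = (a > b).sum() + 0.5 * (a == b).sum()
    return wins / (a.size * b.size)

# With identical samples the estimate is exactly 1/2:
prob_greater([1, 2, 3], [1, 2, 3])  # 0.5
```

A value of 0.54, as in the paid-search test, indicates a treatment-arm observation beats a control-arm observation slightly more often than chance.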
arXiv:2505.08210v1 [math.ST] 13 May 2025

Submitted to Bernoulli

On eigenvalues of a renormalized sample correlation matrix

QIANQIAN JIANG*, JUNPENG ZHU*, and ZENG LI
Department of Statistics and Data Science, Southern University of Science and Technology
jqq172515@gmail.com, zhujp@sustech.edu.cn, liz9@sustech.edu.cn

This paper studies the asymptotic spectral properties of a renormalized sample correlation matrix, including the limiting spectral distribution, the properties of the largest eigenvalues, and the central limit theorem for linear spectral statistics. All asymptotic results are derived under a unified framework where the dimension-to-sample-size ratio $p/n \to c \in (0, \infty]$. Based on our CLT result, we propose an independence test statistic capable of operating effectively in both high and ultrahigh dimensional scenarios. Simulation experiments demonstrate the accuracy of the theoretical results.

MSC2020 subject classifications: Primary 60B20; secondary 62H15
Keywords: Renormalized sample correlation matrix; Ultrahigh dimension; Linear spectral statistics; Central limit theorem

1. Introduction

Let us consider the widely used independent components (IC) model for the population, admitting the stochastic representation

$$y = \mu + \Sigma^{1/2} x,$$

where $\mu \in \mathbb{R}^p$ denotes the population mean and $x \in \mathbb{R}^p$ is a random vector with independent and identically distributed (i.i.d.) components with zero mean and unit variance. Let $y_1, \ldots, y_n$ be $n$ i.i.d. observations from this population and $Y = (y_1, \ldots, y_n)$ be the $p \times n$ data matrix. The sample correlation matrix $R_n$ can be written as

$$R_n = D_n^{1/2} S_n D_n^{1/2}, \quad \text{where } D_n^{1/2} = \mathrm{Diag}\Big(\frac{1}{\sqrt{s_{11}}}, \frac{1}{\sqrt{s_{22}}}, \ldots, \frac{1}{\sqrt{s_{pp}}}\Big), \quad S_n = \frac{1}{N} Y J Y^\top, \quad J = I_n - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^\top, \quad N = n - 1.$$

Here $s_{ii} = e_i^\top S_n e_i$, $i = 1, \ldots, p$, $e_i \in \mathbb{R}^p$ denotes the vector with the $i$-th element being 1 and all others being 0, and $\mathbf{1}_n = (1, \ldots, 1)^\top \in \mathbb{R}^n$. The eigenvalues of $R_n$, $\lambda_1^{R_n} \ge \cdots \ge \lambda_p^{R_n}$, serve as important statistics and often play crucial roles in inference on the population correlation matrix $R$; see Anderson (2003).
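The construction of $R_n$ from the data matrix $Y$ can be written out directly; a minimal numpy sketch of $S_n = \frac{1}{N} Y J Y^\top$ and $R_n = D_n^{1/2} S_n D_n^{1/2}$ with $N = n - 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 50
Y = rng.standard_normal((p, n))            # p x n data matrix

J = np.eye(n) - np.ones((n, n)) / n        # centering projection J
S = Y @ J @ Y.T / (n - 1)                  # sample covariance S_n, N = n - 1
D_half = np.diag(1.0 / np.sqrt(np.diag(S)))
R = D_half @ S @ D_half                    # sample correlation R_n

# R_n has unit diagonal by construction, so tr(R_n) = p.
```

This unit-diagonal constraint is exactly the nonlinear dependence (each entry of $S_n$ is divided by random diagonal terms) that complicates the ultrahigh-dimensional analysis discussed below.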
Consider the following regime, ๐โโ,๐=๐๐โโ,๐/๐โ๐โ(0,โ), (1) *These authors contributed equally 1 2 referred to as the Mar หcenko-Pastur (MP) regime. For R=I๐, Jiang (2004) demonstrated that the empirical spectral distribution (ESD) of R๐,๐นR๐(๐ฅ)=1 ๐ร๐ ๐=11{๐๐(R๐)โค๐ฅ}, converges weakly to the Marchenko-Pastur (MP) law with probability one. The extreme eigenvalues of R๐were studied in Xiao and Zhou (2010) and Bao, Pan and Zhou (2012). Additionally, Gao et al. (2017) established the central limit theorem (CLT) for the linear spectral statistics (LSS) of R๐, i.e.,โซ ๐(๐ฅ)d๐นR๐(๐ฅ)=ร๐ ๐=1๐(๐R๐ ๐)/๐where๐(ยท)is a smooth function. For a general R, the limiting spectral distribution (LSD) of R๐, the limit of ESD, can be found in Karoui (2009) and the CLT for LSS was studied in Jiang (2019), Mestre and Vallet (2017), Yin, Zheng and Zou (2023), Yin et al. (2022). All these studies are conducted under the MP regime (1), i.e., ๐/๐โ๐โ(0,โ). However, in the ultrahigh dimensional case where ๐โซ๐, the eigenvalues of R๐exhibit behaviors markedly different from those in the MP regime. Properties of eigenvalues of sample correlation matrix when๐โซ๐remain largely unknown in current literature. Existing studies on eigenvalue behavior of ultrahigh dimensional matrix focus on sample covariance matrix, see Bai and Yin (1988), Bao (2015), Chen and Pan (2015), Qiu, Li and Yao (2023), Wang and Paul (2014). These works heavily rely on the linear independent component structure and zero mean assumption ๐=0 which suggest that the renormalized sample covariance matrix หS๐=โ๏ธ ๐ ๐ 1 ๐Yโค 0Y0โI๐ ,Y0=Yโ๐1โค ๐shares many spectral properties with Wigner matrix. In contrast, due to the nonlinear dependence introduced by
|
https://arxiv.org/abs/2505.08210v1
|
the nor- malization inherent in the sample correlation matrix and the presence of a nonzero population mean, the techniques and results developed for ultrahigh dimensional covariance matrices cannot be directly extended to the correlation matrix. To fill this gap, we consider the sample correlation matrix under a new regime where ๐/๐โโ as๐โโ . In this scenario, unlike the MP regime, most eigenvalues of the matrix R๐are zero, and all non-zero eigenvalues diverge to infinity. To address this, we renormalize the sample correlation matrix as follows: B๐=โ๏ธ๐ ๐1 ๐๐ฝYโคD๐Y๐ฝโ๐ฝ . B๐is๐ร๐and has๐โ1 non-zero eigenvalues, which connect to the non-zero eigenvalues of R๐ through the following identity: ๐B๐=โ๏ธ ๐ ๐๐R๐โโ๏ธ๐ ๐. This paper investigates the eigenvalues of the renormalized random matrix B๐whenR=I๐, allowing for the dimension ๐to be comparable to or much larger than the sample size ๐, such that ๐โโ,๐=๐๐โโ,๐/๐โ๐โ(0,โ]. Firstly, we propose a unified LSD of B๐in both๐/๐โ๐โ(0,โ)and๐/๐โโ . Secondly, we studied the properties of ๐B๐ 1, the largest eigenvalue of B๐. Thirdly, we establish CLT for LSS of B๐under the unified framework, which covers the results in Gao et al. (2017) as a special case. Last but not least, our theoretical findings are further applied to the independence test for both high and ultrahigh dimensional random vectors. Specifically, we propose a test statistic that remains effective when ๐/๐โ๐โ(0,โ]. In this paper, our primary contribution is to establish the asymptotic theory for eigenvalues of the renormalized sample correlation matrix B๐when๐/๐โโ . In addition, we provide a unified represen- tation of the limiting results that hold for both ๐/๐โโ and๐/๐โ๐โ(0,โ). 
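When $p/n \to \infty$ the unified limit specializes to the semicircle law (Theorem 2.5 below), whose even moments are the Catalan numbers $C(2k, k)/(k+1)$. A quick numerical check of that moment sequence, by trapezoid integration of the density on $[-2, 2]$:

```python
import numpy as np
from math import comb

# Even moments of the semicircle density rho(x) = sqrt(4 - x^2)/(2*pi) on
# [-2, 2] equal the Catalan numbers C(2k, k)/(k + 1); odd moments vanish.
x = np.linspace(-2.0, 2.0, 200001)
dx = x[1] - x[0]
rho = np.sqrt(4.0 - x**2) / (2.0 * np.pi)

def moment(m):
    f = x**m * rho
    return float(np.sum((f[1:] + f[:-1]) * 0.5 * dx))  # trapezoid rule

# moment(2), moment(4), moment(6) are close to 1, 2, 5 respectively.
```

The same Catalan sequence appears in the moment formula of Theorem 2.5, which is one way to recognize the semicircle limit from empirical eigenvalue moments of $B_n$.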
Theoretical analysis of B๐in ultrahigh dimensional settings presents significant challenges due to the nonlinear dependence structure introduced by the normalization process, which makes the study of this random matrix more On eigenvalues of a renormalized sample correlation matrix 3 intricate, even when R=I๐. Under the MP regime (1), Gao et al. (2017), Heiny (2022), Jiang (2004), Karoui (2009) showed that the correlation matrix R๐share the same LSD and properties of the largest eigenvalue as the sample covariance S๐, by usingโฅD๐โI๐โฅโฅS๐โฅto control the difference between the sample correlation matrix R๐and the sample covariance matrix S๐. However, in the ultrahigh dimen- sional setting, since โฅS๐โฅtends to infinity, this approach becomes ineffective. Instead, we investigate the convergence of Stieltjes transform of ESD of B๐to obtain the LSD. In addition, we require a unified moment assumption to control the probability that the largest eigenvalue ๐B๐ 1lies outside the support of LSD. Moreover, when ๐/๐โ๐โ(0,โ), Pan and Zhou (2008) used ๐๐=๐/๐to characterize the CLT for LSS while Yin, Zheng and Zou (2023), Yin et al. (2022) used ๐๐=๐/๐. In fact, they are equivalent because, in the high-dimensional setting (1), ๐๐โ๐๐=๐(1/๐). However, when ๐/๐โโ , ๐๐โ๐๐=๐/(๐๐)may diverge to infinity. Therefore, we must handle ๐๐and๐๐with extra caution and we derive a novel determinant equivalent form for the resolvent of the renormalized correlation matrix when๐/๐โโ . The rest of the paper is organized as follows. Section 2 details our main results, including unified LSD, properties of the largest eigenvalue and CLT for LSS. Section 3 discusses the application of our CLT
to independence test. Section 4 presents simulations. Technical proofs are detailed in Section 5 and the Supplementary Material. 2. Main Results 2.1. Preliminaries For any measure ๐บsupported on the real line, the Stieltjes transform of ๐บis defined as ๐ ๐บ(๐ง)=โซ1 ๐ฅโ๐งd๐บ(๐ฅ), ๐งโC+, where C+={๐งโC:โ(๐ง)>0}denotes the upper complex plane. As for the LSD of R๐withR=I๐when๐/๐โ๐โ(0,โ), Jiang (2004) showed the ESD of R๐con- verges with probability 1 to the Mar หcenko-Pastur law ๐น๐๐(๐ฅ), whose density function has an explicit expression ๐๐๐(๐ฅ)=๏ฃฑ๏ฃด๏ฃด ๏ฃฒ ๏ฃด๏ฃด๏ฃณ1 2๐๐ฅ๐โ๏ธ (๐โ๐ฅ)(๐ฅโ๐)๐โฉฝ๐ฅโฉฝ๐; 0 otherwise, and a point mass 1โ1/๐at the origin if ๐>1, where๐=(1โโ๐)2and๐=(1+โ๐)2. And the Stieltjes transform of ๐น๐๐(๐ฅ)is ๐ ๐๐(๐ง)=๐โ1โ๐ง+โ๏ธ (๐งโ๐โ1)2โ4๐ 2๐๐ง+1โ๐ ๐๐ง, ๐งโC+. (2) 2.2. LSD of B ๐ In this section, we provide a unified LSD of the renormalized sample correlation matrix B๐when ๐/๐โ๐โ(0,โ]. 4 Assumption 2.1. LetX=(x1,..., x๐)๐ร๐= ๐ฅ๐๐, which consists of ๐ร๐i.i.d. variables satisfying E ๐ฅ๐๐=0,E ๐ฅ๐๐ 2=1,E ๐ฅ๐๐ 4=๐
<โ. Assumption 2.2. The population covariance matrix ๐บis diagonal. Assumption 2.3. The dimension ๐is function of sample size ๐and both tend to infinity such that ๐/๐โ๐โ(0,โ], ๐โ๐๐ก, ๐กโฅ1. Theorem 2.4. Under Assumptions 2.1 - 2.3, with probability one, the ESD of B๐converges weakly to a (non-random) probability measure ๐น๐(๐ฅ), which has a density function ๐๐(๐ฅ)=๏ฃฑ๏ฃด๏ฃด๏ฃด ๏ฃฒ ๏ฃด๏ฃด๏ฃด๏ฃณโ 4โ๐ฅ2โ๐โ1+2๐ฅ๐โ1/2 2๐(1+๐ฅ๐โ1/2), if๐ฅโ1โ๐โ2,1โ๐+2 , 0, otherwise, and has a point mass 1โ๐at the pointโโ๐if0<๐โค1. The Stieltjes transform of ๐น๐(๐ฅ)is ๐ ๐(๐ง)=โ(๐ง+๐โ1/2)+โ๏ธ (๐ง+2โ๐โ1/2)(๐งโ2โ๐โ1/2) 2(1+๐โ1/2๐ง), ๐งโC+. (3) Moreover, the expression of the moments are โซ+โ โโ๐ฅ๐๐๐(๐ฅ)d๐ฅ=๐โ๏ธ ๐ =0(โ1)๐ ๐ ๐ ๐โ๐/2+๐ +1๐ฝ๐โ๐ +(1โ๐)(โโ๐)๐, ๐โฅ1, where๐ฝ0=1and๐ฝ๐=ร๐โ1 ๐=01 ๐+1๐ ๐ ๐โ1 ๐ ๐๐for๐โฅ1. Remark 1. Theorem 2.4 provides a unified LSD of B๐when๐/๐โ๐โ(0,โ]. This result is consis- tent with the MP law of R๐when๐/๐โ๐โ(0,โ)in (2). The following theorem shows the result when ๐/๐โโ , which, to the best of our knowledge, is presented here for the first time. Theorem 2.5. Under Assumptions 2.1 - 2.3 and ๐โ๐๐ก,๐ก > 1, with probability one, the ESD of B๐ converges weakly to the semicircular law ๐น(๐ฅ)with density function ๐(๐ฅ)=๏ฃฑ๏ฃด๏ฃด ๏ฃฒ ๏ฃด๏ฃด๏ฃณ1 2๐โ๏ธ 4โ๐ฅ2, if๐ฅโ[โ2,2], 0, otherwise,(4) and Stieltjes transform ๐ (๐ง)=โ๐ง+โ ๐ง2โ4 2, ๐งโC+. Moreover, the expression of the moments are โซโ โโ๐ฅ๐ยท1 2๐โ๏ธ 4โ๐ฅ2๐๐ฅ=๏ฃฑ๏ฃด๏ฃด๏ฃด ๏ฃฒ ๏ฃด๏ฃด๏ฃด๏ฃณ1 ๐/2+1 ๐ ๐/2! , if๐is even, 0, if๐is odd. On eigenvalues of a renormalized sample correlation matrix 5 2.3. The largest eigenvalue of B ๐ In this section, we study the properties of ๐B๐ 1, the largest eigenvalue of B๐, when๐โ๐๐ก,๐กโฅ1. Assumption 2.1*. LetX=(x1,..., x๐)๐ร๐= ๐ฅ๐๐, which consists of ๐ร๐i.i.d. variables satisfying E ๐ฅ๐๐=0,E ๐ฅ๐๐ 2=1,E ๐ฅ๐๐ 4=๐
,E ๐ฅ๐๐ 2(๐ก+1)<โ. Remark 2. Compared with the assumptions in the literature (Bao, Pan and Zhou, 2012, Gao et al., 2017, Jiang, 2004, Yin, Zheng and Zou, 2023, Yin et al., 2022) where ๐/๐โ๐โ(0,โ), Assumption 2.1* is not stronger. In fact, when ๐ก=1, the moment condition E ๐ฅ๐๐ 2(๐ก+1)<โreduces to a finite fourth moment, which coincides with the standard assumption in random matrix theory . Theorem 2.6. Under Assumptions 2.1*, 2.2 and 2.3, we have (i)๐1(B๐)โ2+1โ๐a.s.; (ii)for any๐ >0,โ> 0, if ๐ฅ๐๐ โค๐ฟ๐(๐๐)1/(2๐ก+2),where๐ฟ๐โ0, ๐ฟ๐(๐๐)1/(2๐ก+2)โโ , as๐โโ, then P ๐1(B๐)โฅ2+1โ๐+๐ =o ๐โโ . Remark 3. Theorem 2.6 is consistence with the results of ๐R๐ 1when๐/๐โ๐โ(0,โ)in Theorem 1.1 of Jiang (2004) and Lemma 7 of Gao
et al. (2017). 2.4. CLT for LSS of B ๐ In this section, we focus on linear spectral statistic of B๐, i.e.1 ๐ร๐ ๐=1๐(๐๐), where๐is an analytic function on[0,โ). Since๐นB๐converges to ๐น๐almost surely, we have 1 ๐๐โ๏ธ ๐=1๐(๐๐)โโซ ๐(๐ฅ)d๐น๐(๐ฅ). We explore second order fluctuation of1 ๐ร๐ ๐=1๐(๐๐)describing how such LSS converges to its first order limit. Consider a renormalized functional: ๐บ๐(๐)=๐โซ+โ โโ๐(๐ฅ)d ๐นB๐(๐ฅ)โ๐น๐๐(๐ฅ) +1 2๐๐โฎ C๐(๐ง)ฮ๐(๐ ๐๐(๐ง))d๐ง, where๐น๐๐(๐ฅ)and๐ ๐๐(๐ง)serve as finite-sample proxies for ๐น๐(๐ฅ)and๐ ๐(๐ง)in (3), by substituting ๐ with๐๐=๐/๐, ฮ๐(๐ ๐๐(๐ง))=2๐โ1 2 ๐๐2 ๐๐(๐ง)โ๐๐(๐ง)๐ 2 ๐๐(๐ง)๐๐๐(๐ง)โ๐โ1 2 ๐๐2 ๐๐(๐ง)โ๐๐(๐ง)๐ โฒ ๐๐(๐ง)๐๐๐(๐ง) +1โ๐๐+๐ง๐๐๐(๐ง)โ๐๐(๐ง)๐๐๐(๐ง)โโ๐๐ ๐ง(โ๐๐+๐ง) 6 +๐ ๐๐๐๐(๐ง)โ๐๐(๐ง)๐ ๐๐(๐ง)๐๐๐(๐ง)โ๐3 2 ๐ โ๐โ1 2 ๐๐ ๐๐(๐ง)๐โ1๐๐+ ๐๐+โ๐๐๐ง, (5) and โ๐๐(๐ง)=1 1+1โ๐๐๐ ๐๐(๐ง)+1โ๐๐ ๐๐+โ๐๐๐ง, ๐๐๐(๐ง)=โ๐๐+โ๐๐๐ง ๐๐๐ ๐๐(๐ง)โ๐๐+1โ๐๐ ๐๐+โ๐๐๐ง , ๐๐๐(๐ง)=โ๐๐โโ1 ๐๐(๐ง)๐ ๐๐(๐ง) โ๐๐โ๐โ1๐๐(๐ง)๐ ๐๐(๐ง)(๐๐+โ๐๐๐ง), ๐๐๐(๐ง)=โ๐๐(๐ง) ๐๐ 1+โ๐๐ ๐ ๐๐(๐ง) ๐๐+๐๐(1โ๐๐) ๐๐+โ๐๐๐ง+(๐๐โ1)๐ ๐๐(๐ง)โ๐๐ , Here the contourโฎ Cis closed and taken in the positive direction in the complex plane, enclosing the support of๐น๐(๐ฅ). The main result is stated in the following theorem. Theorem 2.7. Under Assumptions 2.1*, 2.2 and 2.3, let ๐1,๐2,...,๐๐be functions on Rand analytic on an open interval containingh โ2+1โ๐๐,2+1โ๐๐i . Then, the random vector (๐บ๐(๐1),...,๐บ๐(๐๐)) forms a tight sequence in ๐and converges weakly to a centered Gaussian vector (๐๐1,...,๐๐๐)with the covariance function ๐ถ๐๐ฃ(๐๐,๐๐)=โ1 4๐2โฎ C1โฎ C2๐(๐ง1)๐(๐ง2)๐ถ๐๐ฃ(๐(๐ง1),๐(๐ง2))d๐ง1d๐ง1 where ๐ถ๐๐ฃ(๐(๐ง1),๐(๐ง2))=2๐ โฒ ๐(๐ง1)๐ โฒ ๐(๐ง2) {๐ ๐(๐ง1)โ๐ ๐(๐ง2)}2โ1 (๐ง1โ๐ง2)2 โ2๐ โฒ ๐(๐ง1)๐ โฒ ๐(๐ง2) 1+๐ ๐(๐ง1)/โ๐ 2 1+๐ ๐(๐ง2)/โ๐ 2, and๐ ๐(๐ง)is defined in (3). Remark 4. Theorem 2.7 establishes a unified CLT for LSS of B๐when๐/๐โ๐โ(0,โ]. This result is consistent with the results of R๐when๐/๐โ๐โ(0,โ)in Theorem 1 of Gao et al. (2017) and Theorem 3.2 of Yin, Zheng and Zou (2023). 
In particular, when ๐/๐โโ,
๐บ๐(๐) = ๐ โซ+โโโ ๐(๐ฅ) d(๐นB๐(๐ฅ)โ๐น๐๐(๐ฅ)) + (1/2๐๐) โฎC ๐(๐ง)( (๐ 3(๐ง)+๐ (๐ง)โ๐ โฒ(๐ง)๐ (๐ง))/(๐ 2(๐ง)โ1) โ 1/๐ง ) d๐ง,
where ๐ (๐ง) = (โ๐ง+โ(๐ง2โ4))/2 is the Stieltjes transform of the semicircle law. Then we have the following result.

On eigenvalues of a renormalized sample correlation matrix

Theorem 2.8. With the same notation and assumptions as in Theorem 2.7, and with ๐โ๐๐ก, ๐ก>1, the random vector (๐บ๐(๐1),...,๐บ๐(๐๐)) forms a tight sequence in ๐ and converges weakly to a centered Gaussian vector (๐๐1,...,๐๐๐) with covariance function
๐ถ๐๐ฃ(๐๐๐,๐๐๐) = โ(1/4๐2) โฎC1 โฎC2 ๐๐(๐ง1)๐๐(๐ง2) ๐ถ๐๐ฃ(๐(๐ง1),๐(๐ง2)) d๐ง1 d๐ง2,
where
๐ถ๐๐ฃ(๐(๐ง1),๐(๐ง2)) = 2( ๐ โฒ(๐ง1)๐ โฒ(๐ง2)/{๐ (๐ง1)โ๐ (๐ง2)}2 โ 1/(๐ง1โ๐ง2)2 ) โ 2๐ โฒ(๐ง1)๐ โฒ(๐ง2). (6)

Remark 5. Theorem 2.8 establishes a novel CLT for LSS of the renormalized sample correlation matrix B๐ in the ultrahigh-dimensional regime where ๐/๐โโ, which constitutes the main contribution of this paper. The proof technique differs from that of the classical case where ๐/๐โ๐โ(0,โ). In particular, we develop a novel determinant equivalent form for the resolvent of the renormalized correlation matrix in the ultrahigh-dimensional context (see the proof of Lemma 5.7). Theorem 2.7 provides a unified formulation of the limiting results for both ๐/๐โ๐โ(0,โ) and ๐/๐โโ.

Corollary 2.9. With the same notation and assumptions as in Theorem 2.7, for the three analytic functions ๐2(๐ฅ)=๐ฅ2, ๐3(๐ฅ)=๐ฅ3, ๐4(๐ฅ)=๐ฅ4, we have
๐บ๐(๐2)=tr B2๐ โ๐+2๐โ โ N(0, 4),
๐บ๐(๐3)=tr B3๐ โ๐โ4โ๐๐๐โ โ N(0, 6+36/๐),
๐บ๐(๐4)=tr B4๐ โ(๐โ1)2 ๐+2๐โ5โ6 ๐๐ ๐โ โ N(0, 72+288/๐+144/๐2).

3. Application of CLTs to hypothesis testing

In this section, we provide a statistical application of LSS of the renormalized sample correlation matrix B๐: the independence test for high- and ultrahigh-dimensional random vectors, namely the hypothesis
๐ป0: R=I๐ vs. ๐ป1: Rโ I๐.
We aim to propose a test statistic that works when ๐/๐โ๐โ(0,โ]. Motivated by the
Frobenius norm of RโI๐ used in Schott (2005), Gao et al. (2017) and Yin, Zheng and Zou (2023), together with the relationship tr(R๐โI๐)2 = (๐/๐)(tr B2๐ + ๐) โ ๐, we consider the following test statistic constructed from the renormalized correlation matrix B๐:
๐ := tr B2๐.
We reject ๐ป0 when ๐ is too large. By taking ๐(๐ฅ)=๐ฅ2 in Theorem 2.7, the limiting null distribution of ๐ is given in the following theorem.

Theorem 3.1. Suppose that Assumptions 2.1*, 2.2 and 2.3 hold. Then under ๐ป0, as ๐โโ,
(1/2)(๐ โ ๐ + 2) โ N(0, 1).

Theorem 3.1 establishes a unified CLT for ๐ under ๐ป0 when ๐/๐โ๐โ(0,โ]. Based on this, we employ the following testing procedure: reject ๐ป0 if (1/2)(๐ โ ๐ + 2) > ๐ง๐ผ, where ๐ง๐ผ is the upper-๐ผ quantile of the standard normal distribution at nominal level ๐ผ.

4. Simulations

In this section, we implement simulation studies to examine (1) the LSD of the renormalized sample correlation matrix B๐; (2) finite-sample properties of some LSS of B๐, by comparing their empirical means and variances with the theoretical limiting values; (3) the finite-sample performance of the independence test.

4.1. Limiting spectral distribution

In this section, simulation experiments are conducted to verify the LSD of the renormalized sample correlation matrix B๐, as stated in Theorem 2.4. We generate data ๐ฆ๐๐ from three populations, draw histograms of the eigenvalues of B๐, and compare them with the theoretical densities. Specifically, three types of distributions for ๐ฆ๐๐ are considered: (1) ๐ฆ๐๐ follows the standard normal distribution; (2) ๐ฆ๐๐ follows the exponential distribution with rate parameter 2; (3) ๐ฆ๐๐ follows the Poisson distribution with parameter 1. The dimensional settings are (๐,๐)=(10^4,5000), (200^2,200), (200^2.5,200). We display histograms of the eigenvalues of B๐ generated by the three populations under the various (๐,๐) in Figure 1. All histograms align with their LSD, affirming the accuracy of our theoretical results.

4.2.
CLT for LSS

In this section, we implement simulation studies to examine finite-sample properties of some LSS of B๐, by comparing their empirical means and variances with the theoretical limiting values stated in Theorem 2.7. Let ๐บฬ๐(๐๐) = ๐บ๐(๐๐)/โVar(๐๐๐). First, we examine ๐บฬ๐(๐๐) โ๐ ๐(0,1), ๐๐=๐ฅ^๐ (๐=2,3,4), by Theorem 2.7. Two types of data distributions for ๐ฆ๐๐ are considered: (1) Gaussian data: ๐ฆ๐๐โผ๐(0,1) i.i.d. for 1โค๐โค๐, 1โค๐โค๐; (2) non-Gaussian data: ๐ฆ๐๐โผ๐2(2) i.i.d. for 1โค๐โค๐, 1โค๐โค๐.

Figure 1: Histograms of sample eigenvalues of B๐, fitted by the LSD (blue solid curves). Columns correspond to N(0,1), Exponential(2) and Poisson(1) data; rows correspond to (๐,๐)=(10^4,5000), (200^2,200) and (200^2.5,200).

The empirical mean and variance of {๐บฬ๐(๐๐)}, ๐๐=๐ฅ^๐, ๐=2,3,4, are calculated for various ๐๐. The dimensional settings are (๐,๐)=(1000,500), (300^2,300), (500^2,500), (100^2.5,100), with ๐๐=2,300,500,1000. As shown in Table 1, the empirical means and variances of {๐บฬ๐(๐๐)} closely match their theoretical limits 0 and 1 under all scenarios.

4.3. Hypothesis test

Numerical simulations are conducted to evaluate the empirical size and power of our proposed test statistic. The random variables (๐ฅ๐๐) are generated from: (1) Gaussian data: ๐ฅ๐๐โผ๐(0,1) i.i.d. for 1โค๐โค๐, 1โค๐โค๐; (2) non-Gaussian data: ๐ฅ๐๐โผ(๐2(2)โ2)/2 i.i.d. for 1โค๐โค๐, 1โค๐โค๐. And we consider the following two settings of ๐บ:
โข ๐บ1=(๐ ๐,๐,๐)๐ร๐, ๐ ๐,๐,๐=๐ฟ{๐=๐}+๐ฟ{๐โ ๐}๐^|๐โ๐|, ๐,๐=1,...,๐,
โข ๐บ2=(๐ ๐,๐,๐)๐ร๐, ๐ ๐,๐,๐=๐ฟ{๐=๐}+๐ฟ{๐โ ๐}๐, ๐,๐=1,...,๐,
where ๐ and ๐ are two parameters satisfying |๐|<1, 0<๐<1. The parameter settings are as follows:

Table 1. Empirical mean
and variance of ๐บฬ๐(๐๐), ๐=2,3,4, from 5000 replications with ๐๐=2,300,500,1000. Theoretical mean and variance are 0 and 1, respectively.

                 ๐บฬ๐(๐2)            ๐บฬ๐(๐3)            ๐บฬ๐(๐4)
๐๐           mean      var       mean      var       mean      var
Gaussian data
2            0.0090    1.0079    -0.0103   0.9737    0.0040    0.9793
300          0.0185    0.9974    -0.0919   0.9777    0.0037    0.9785
500          0.0143    0.9837    -0.0821   0.9914    -0.0076   0.9639
1000         0.0144    0.9889    -0.0465   0.9712    -0.0035   0.9896
Non-Gaussian data
2            0.0284    1.1201    -0.0122   1.0672    0.0005    1.0342
300          -0.0357   1.0326    -0.0977   1.0390    0.0045    0.9974
500          -0.0066   1.0240    -0.0694   1.0112    -0.0006   1.0179
1000         0.0020    1.0840    -0.0582   0.9785    0.0026    1.0432

โข ๐=๐=0 to evaluate empirical size;
โข ๐=0.20, 0.25 to evaluate empirical power under ๐บ1;
โข ๐=0.007, 0.011 to evaluate empirical power under ๐บ2.

Table 2 reports the empirical size and power for different ๐๐. The dimensional settings are (๐,๐)=(1200,600), (50^2,50), (100^2,100), (200^2,200), with ๐๐=2,50,100,200, and the nominal significance level is fixed at ๐ผ=0.05. The results show that our test statistic is robust in both high- and ultrahigh-dimensional settings and performs stably for Gaussian and non-Gaussian data.

Table 2. Empirical size and power from 5000 replications for Gaussian and non-Gaussian data with different ๐๐.

             Size        Power of ๐บ1           Power of ๐บ2
๐๐         ๐=๐=0    ๐=0.20   ๐=0.25    ๐=0.007   ๐=0.011
Gaussian data
2            0.0528      0.9970   1         0.5954    0.9866
50           0.044       0.608    0.902     0.7688    0.9878
100          0.0456      0.9884   1         0.9999    1
200          0.0512      1        1         1         1
Non-Gaussian data
2            0.0498      0.9964   1         0.5908    0.9814
50           0.06        0.6278   0.922     0.7372    0.98
100          0.0542      0.9878   1         0.9997    1
200          0.055       1        1         1         1

5. Proofs

5.1. Notations

The following notations are used throughout the proofs. Let Yโค=(หy1,..., หy๐); then B๐ can be written as B๐=โ๏ธ๐ ๐โ1๐โ1 ๐หY๐หYโค๐โ๐ฝ , หY๐=(๐ฝหy1/โฅ๐ฝหy1โฅ, ๐ฝหy2/โฅ๐ฝหy2โฅ, ..., ๐ฝหy๐/โฅ๐ฝหy๐โฅ). Denote A๐=โ๏ธ๐ ๐๐ ๐R๐โI๐ , R๐=หY๐หYโค๐, ๐=๐โ1, ๐๐=๐/๐, ๐๐=๐/๐, ๐ ๐(๐ง)=(1/๐)tr(A๐โ๐งI๐)โ1, ๐ B๐๐(๐ง)=(1/๐)tr(B๐โ๐งI๐)โ1, ๐งโC+, and หY๐=(๐ฝหy1/โฅ๐ฝหy1โฅ, ๐ฝหy2/โฅ๐ฝหy2โฅ, ..., ๐ฝหy๐/โฅ๐ฝหy๐โฅ)
= r1,..., r๐,หY๐= r1,ยทยทยท,r๐โ1,r๐+1,ยทยทยท,r๐, R๐๐=หY๐หYโค ๐,A๐๐=โ๏ธ๐ ๐๐ ๐R๐๐โI๐ ,A๐๐๐=A๐๐โโ๏ธ ๐ ๐r๐rโค ๐, Q(๐ง)=A๐โ๐งI๐,Q๐(๐ง)=A๐๐โ๐งI๐,Q๐๐(๐ง)=A๐๐๐โ๐งI๐, ๐ฝ๐(๐ง)=1 โ๐๐+rโค ๐Qโ1 ๐(๐ง)r๐,ห๐ฝ๐(๐ง)=1 โ๐๐+trQโ1 ๐(๐ง)/๐, ๐๐(๐ง)=1 โ๐๐+EtrQโ1 ๐(๐ง)/๐, ๐1(๐ง)=1 โ๐๐+EtrQโ1 12(๐ง)/๐, ๐พ๐(๐ง)=rโค ๐Qโ1 ๐(๐ง)r๐โE1 ๐trQโ1 ๐(๐ง),๐ฝ๐๐(๐ง)=1 โ๐๐+rโค ๐Qโ1 ๐๐(๐ง)r๐, ๐๐(๐ง)=rโค ๐Qโ1 ๐(๐ง)r๐โ1 ๐trQโ1 ๐(๐ง), ๐ฟ๐(๐ง)=rโค ๐Qโ2 ๐(๐ง)r๐โ1 ๐trQโ2 ๐(๐ง). We denote by ๐พsome constant which may take different values at different appearances. By the results in Bai and Silverstein (2004), we have โฅQ๐(๐ง)โ1โฅโค๐พ, tr Qโ1(๐ง)โQโ1 ๐(๐ง) M โค โฅMโฅ๐โ1 2๐,|๐ฝ๐(๐ง)|โค๐พ๐โ1 2๐,|ห๐ฝ๐(๐ง)|โค๐พ๐โ1 2๐,|๐๐(๐ง)|โค๐พ๐โ1 2๐. And straightforward calculation gives Qโ1(๐ง)โQโ1 ๐(๐ง)=โQโ1 ๐(๐ง)r๐rโค ๐Qโ1 ๐(๐ง)๐ฝ๐(๐ง), ๐ฝ๐(๐ง)=๐๐(๐ง)โ๐๐(๐ง)๐พ๐(๐ง)๐ฝ๐(๐ง)=ห๐ฝ๐(๐ง)โห๐ฝ๐(๐ง)๐๐(๐ง)๐ฝ๐(๐ง). (7) 5.2. Proof of Theorem 2.5 Since ๐ B๐๐(๐ง)=๐ ๐(๐ง)โ1 ๐โ๐๐ ๐ง(โ๐๐+๐ง), (8) 12 for all๐งโC+, the difference between ๐ B๐๐(๐ง)and๐ ๐(๐ง)is a deterministic term of order ๐(1/๐). There- fore, to show that ๐ B๐๐(๐ง)โ๐ (๐ง)almost surely, it suffices to prove that ๐ ๐(๐ง)โ๐ (๐ง)almost surely. Here๐ (๐ง)is the Stieltjes transform of semicircular law (4). We now proceed with the proof in the following four steps: Step 1. Truncation, centralization, and rescaling. Step 2. For any fixed ๐งโC+={๐ง,โ(๐ง)>0},๐ ๐(๐ง)โE๐ ๐(๐ง)โ0, a.s.. Step 3. For any fixed ๐งโC+,E๐ ๐(๐ง)โ๐ (๐ง). Step 4. Outside a null set, ๐ ๐(๐ง)โ๐ (๐ง)for every๐งโC+. Then, it follows that, except for this null set, ๐นB๐โ๐นweakly, where ๐นis the distribution function of semicircular law in (4). Step 1. Truncation, centralization, and rescaling. By the moment condition E|x11|4<โ, one may choose a positive sequence of {ฮ๐}such that ฮโ4 ๐E|๐ฅ11|4๐ผ{|๐ฅ11|โฉพฮ๐4โ๐๐}โ0,ฮ๐โ0,ฮ๐4โ๐๐โโ. Recall X=(x1,..., x๐)๐ร๐= ๐ฅ๐๐. Then we can write B๐=B๐(๐ฅ๐๐)=๐ฝB๐0๐ฝ, where B๐0=โ๏ธ๐ ๐1
๐XโคD๐XโI๐ ,D๐=Diag1 ๐ 11,1 ๐ 22,...,1 ๐ ๐๐ , ๐ ๐๐=1 ๐eโค ๐X๐ฝXโคe๐, ๐=1,...,๐. LetหB๐=หB๐(ห๐ฅ๐๐),หB๐=หB๐(ห๐ฅ๐๐)and หB๐=หB๐(ห๐ฅ๐๐)be defined similarly to B๐with๐ฅ๐๐replaced by ห๐ฅ๐๐, ห๐ฅ๐๐and ห๐ฅ๐๐respectively, where ห ๐ฅ๐๐=๐ฅ๐๐๐ผ{|๐ฅ๐ ๐|โคฮ๐4โ๐๐}, ห๐ฅ๐๐=ห๐ฅ๐๐โEห๐ฅ๐๐, and ห๐ฅ๐๐=ห๐ฅ๐๐/๐๐with ๐2 ๐=E|ห๐ฅ๐๐โEห๐ฅ๐๐|2. And similarly define หD๐,หD๐,หD๐and หB๐0,หB๐0,หB๐0. Note that หD๐=หD๐and หB๐0=หB๐0. Then by Theorems A.43-A.44 in Bai and Silverstein (2010) and Bernsteinโs inequality, we have โฅ๐นB๐โ๐นB๐0โฅโค1 ๐rank(B๐โB๐0)โค๐พ ๐, โฅ๐นB๐0โ๐นหB๐0โฅโค1 ๐rank XโคD1 2๐โหXโคหD1 2๐ โค1 ๐๐โ๏ธ ๐=1๐โ๏ธ ๐=1๐ผ{|๐ฅ๐ ๐|โฅฮ๐4โ๐๐}โ0๐.๐ ., โฅ๐นหB๐0โ๐นหB๐0โฅ=โฅ๐นหB๐0โ๐นหB๐0โฅโค1 ๐rank หXโคหD1 2๐โหXโคหD1 2๐ =1 ๐rank(EหXโคหD1 2๐)=1 ๐. Thus in the rest of the proof of Theorem 2.4, we assume ๐ฅ๐๐ โฉฝฮ๐4โ๐๐, E๐ฅ๐๐=0,E ๐ฅ๐๐ 2=1,E ๐ฅ๐๐ 4=๐
+๐(1)<โ. Step 2. Almost sure convergence of the random part. Let E0(ยท)denote expectation and E๐(ยท)denote conditional expectation with respect to the ๐-field generated by r1,r2,..., r๐, where๐=1,2,...,๐ . By Lemma 2.7 in Bai and Silverstein (1998) and Lemma 5 in Gao et al. (2017), we can obtain for ๐>2, E|๐๐(๐ง)|๐โค๐พ ๐โ๐/2+๐โ๐/2๐๐/2โ1ฮ2๐โ4 ๐ ,E|๐ฟ๐(๐ง)|๐โค๐พ ๐โ๐/2+๐โ๐/2๐๐/2โ1ฮ2๐โ4 ๐ , E ห๐ฝ๐(๐ง)โ๐๐(๐ง) ๐=๐(๐๐/2๐โ๐),|๐๐(๐ง)โ๐1(๐ง)|=๐(๐1/2๐โ2/3),E|๐๐(๐ง)โE๐ฝ๐(๐ง)|=๐(๐๐โ2), E|๐พ๐(๐ง)โ๐๐(๐ง)|๐=๐(๐โ๐/2),E 1 ๐tr Qโ1(๐ง)M โE1 ๐tr Qโ1(๐ง)M ๐ =๐(โ๐๐/2). (9) On eigenvalues of a renormalized sample correlation matrix 13 Write ๐ ๐(๐ง)โE๐ ๐(๐ง)=โ1 ๐๐โ๏ธ ๐=1 E๐โE๐โ1๐ฝ๐(๐ง)rโค ๐Qโ2 ๐(๐ง)r๐. By using Lemma 2.1 in Bai and Silverstein (2004), we have E|๐ ๐(๐ง)โE๐ ๐(๐ง)|4โค๐พ ๐4Eยฉยญ ยซ๐โ๏ธ ๐=1 E๐โE๐โ1๐ฝ๐(๐ง)rโค ๐Qโ2 ๐(๐ง)r๐ 2ยชยฎ ยฌ2 =๐(๐โ2), where in the last step, we use the fact that ๐ฝ๐(๐ง) โค๐พ๐โ1 2๐andE rโค ๐Qโ2 ๐(๐ง)r๐ 2โคE|๐ฟ๐(๐ง)|2+ E 1 ๐trQโ2 ๐(๐ง) 2=๐(1)by (9). Therefore, we obtain ๐ ๐(๐ง)โE๐ ๐(๐ง)=๐๐.๐ .(1). Step 3. Convergence of the expected Stieltjes transform. Similarly to the proof of Lemma 5.5 in the Supplementary Material, and by applying the estimates in (9), we obtain ๐ E๐ ๐(๐ง)โ๐ ๐๐(๐ง) =๐(1), which implies that E๐ ๐(๐ง)=๐ ๐๐(๐ง)+๐(๐โ1). The details are omitted here. Moreover, since ๐ ๐๐(๐ง)= ๐ (๐ง)+๐(1), it follows that E๐ ๐(๐ง)=๐ (๐ง)+๐(1). Step 4. Completion of the proof of Theorem 2.4. By Steps 2 and 3, for any fixed ๐งโC+, we have ๐ ๐(๐ง)โ๐ (๐ง), a.s.. That is, for each ๐งโC+, there exists a null set ๐๐ง(i.e.,๐(๐๐ง)=0 ) such that ๐ ๐(๐ง,๐)โ๐ (๐ง)for all๐โ๐๐ ๐ง.Now, let C+ 0={๐ง๐}be a dense subset of C+(e.g., all๐งof rational real and imaginary parts) and let ๐=โช๐๐ง๐. Then ๐ ๐(๐ง,๐)โ๐ (๐ง)for all๐โ๐๐and๐งโC+ 0. LetC+ ๐={๐งโC+,โ๐ง>1/๐,|๐ง|โค๐}. When๐งโC+ ๐, we have|๐ ๐(๐ง)|โค๐. By Vitaliโs convergence theorem, we have ๐ ๐(๐ง,๐)โ๐ (๐ง)for all๐โ๐๐and๐งโC+ ๐. Since the convergence above holds for every ๐, we conclude that ๐ ๐(๐ง,๐)โ๐ (๐ง)for all๐โ๐๐and๐งโC+. 
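Steps 2-4 above reduce the LSD statement to pointwise convergence of the Stieltjes transform. The following small numerical illustration is ours, not the paper's: it uses a Gaussian Wigner surrogate standing in for B๐ (an assumption, since both share the semicircular LSD in the ultrahigh-dimensional regime) and compares the empirical Stieltjes transform at one fixed ๐งโC+ with the semicircle limit ๐ (๐ง)=(โ๐ง+โ(๐ง2โ4))/2.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1500
A = rng.standard_normal((n, n)) / np.sqrt(n)
W = (A + A.T) / np.sqrt(2)               # surrogate matrix with semicircle LSD
lam = np.linalg.eigvalsh(W)

z = 0.3 + 1.0j                           # a fixed point in C+
s_emp = np.mean(1.0 / (lam - z))         # empirical Stieltjes transform s_n(z)
s_lim = (-z + np.sqrt(z * z - 4)) / 2    # principal branch keeps Im s(z) > 0
err = abs(s_emp - s_lim)
# s(z) is a root of the semicircle self-consistency equation s^2 + z s + 1 = 0
resid = abs(s_lim ** 2 + z * s_lim + 1)
print(err < 0.05, resid < 1e-10)
```

The residual check confirms the closed form solves the quadratic exactly, while the empirical transform agrees with the limit up to finite-๐ fluctuations.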
Thus, for all ๐งโC+, ๐ B๐๐(๐ง)โ๐ (๐ง) almost surely. By Theorem B.9 in Bai and Silverstein (2010), we conclude that ๐นB๐ โ ๐น weakly, a.s. This completes the proof of Theorem 2.4.

5.3. Proof of Theorem 2.6

Since ๐B๐1=โ๏ธ๐ ๐๐R๐1โโ๏ธ๐ ๐, Theorem 2.6 can be obtained directly from Lemmas 1 and 7 in Gao et al. (2017) when ๐/๐โ๐โ(0,โ). We therefore focus on the case where ๐/๐โโ.

Proof of Theorem 2.6 (i): By Theorem 2.4, we have lim inf๐โโ ๐1(B๐)โฅ1 a.s. Thus, to prove conclusion (i) of Theorem 2.6, it suffices to show that lim sup๐โโ ๐1(B๐)โค1 a.s. Firstly, according to Assumption 2.1*, we truncate the underlying random variables. Here, we choose ๐ฟ๐ satisfying
๐ฟ๐โ2(๐ก+1) E|๐ฅ11|2๐ก+2 1{|๐ฅ11|โฉพ๐ฟ๐(๐๐)1/(2๐ก+2)} โ 0, ๐ฟ๐โ0, ๐ฟ๐(๐๐)1/(2๐ก+2)โโ. (10)
Similarly to the arguments in Section 5.2, let หB๐=หB๐(ห๐ฅ๐๐) and หB๐=หB๐(ห๐ฅ๐๐) be defined in the same way as B๐ with ๐ฅ๐๐ replaced by ห๐ฅ๐๐ and ห๐ฅ๐๐ respectively, where ห๐ฅ๐๐=๐ฅ๐๐๐ผ{|๐ฅ๐๐|โค๐ฟ๐(๐๐)1/(2๐ก+2)} and ห๐ฅ๐๐=(ห๐ฅ๐๐โEห๐ฅ๐๐)/๐๐ with ๐2๐=E|ห๐ฅ๐๐โEห๐ฅ๐๐|2. Following the proof of Theorem 1 in Chen and Pan
(2012), we have P(B๐โ หB๐ i.o.) = 0, from which we obtain ๐1(B๐)โ๐1(หB๐)โ0 a.s. as ๐โโ. And note that หB๐=หB๐; hence ๐1(B๐)โ๐1(หB๐)โ0 a.s. By the above results, it is sufficient to show that lim sup๐โโ ๐1(หB๐)โค1 a.s. To this end, note that หB๐ satisfies the truncation condition of Theorem 2.6 (ii). Therefore, Theorem 2.6 (i) follows from conclusion (ii). Next we give the proof of conclusion (ii).

Proof of Theorem 2.6 (ii): To begin with, by (S11) in Yu, Xie and Zhou (2023), we have
๐1(B๐) โค ๐1(๐ฝ)2 ๐1(B๐0) โค max1โค๐กโค๐ |eโค๐กB๐0e๐ก| + ๐1(C๐),
where B๐0=โ๏ธ๐ ๐1 ๐XโคD๐XโI๐ , C๐=B๐0โdiag(B๐0). Since
eโค๐กB๐0e๐ก = 1โ๐๐ Σ๐๐=1(1 ๐ ๐๐๐2๐๐กโ1) = 1โ๐๐ Σ๐๐=1 1 ๐ ๐๐(๐2๐๐กโ1) + 1โ๐๐ Σ๐๐=1(1 ๐ ๐๐โ1),
to prove Theorem 2.6 it is sufficient to prove that, for any ๐>0 and โ>0,
P( max1โค๐กโค๐ |1โ๐๐ Σ๐๐=1 1 ๐ ๐๐(๐2๐๐กโ1)| > ๐ ) = ๐(๐โโ), (11)
P( |1โ๐๐ Σ๐๐=1(1 ๐ ๐๐โ1)| > ๐ ) = ๐(๐โโ), (12)
P( ๐1(C๐) > 2+๐ ) = ๐(๐โโ). (13)
The proofs of (11)-(13) rely on Lemma 5.1 below; the proof of Lemma 5.1 is postponed to the supplementary material.

Lemma 5.1. Under the assumptions of Theorem 2.6 (ii), we have P( max1โค๐โค๐ |1/๐ ๐๐โ1| > ๐ ) = ๐(๐โโ).

By Lemma 5.1, max1โค๐โค๐ 1/๐ ๐๐ < 2 with high probability, and then (11) follows directly from (9) in Chen and Pan (2012). For (12), by the Burkholder inequality (Lemma 2.13 in Bai and Silverstein (2010)), we have
P( |1โ๐๐ Σ๐๐=1(1 ๐ ๐๐โ1)| > ๐ ) = P( |Σ๐๐=1(1 ๐ ๐๐โ1)| > ๐โ๏ธ๐๐ ) โค ๐พ E|Σ๐๐=1(๐ ๐๐โ1)|โ / (๐โ๐๐)โ + ๐(๐โโ) โค ๐พ( (Σ๐๐=1E|๐ ๐๐โ1|2)โ/2 + Σ๐๐=1E|๐ ๐๐โ1|โ ) / (๐โ๐๐)โ + ๐(๐โโ) โค ๐พ( (๐/๐)โ/2 + ๐๐โโ/2 + ๐๐โโ+1๐ฃ2โ ) / (๐โ๐๐)โ + ๐(๐โโ) = ๐(๐โโ).
And for (13), by using Lemma 5.1 again, we have for any ๐, ๐โฒ>0,
P( ๐1(C๐) > 2+๐ ) = P( ๐1(C๐) > 2+๐, max1โค๐โค๐ |1/๐ ๐๐โ1| < ๐โฒ ) + P( ๐1(C๐) > 2+๐, max1โค๐โค๐ |1/๐ ๐๐โ1| > ๐โฒ ) = P( ๐1(C๐) > 2+๐, max1โค๐โค๐ |1/๐ ๐๐โ1| < ๐โฒ ) + ๐(๐โโ) = ๐(๐โโ),
where the last equality holds by (8) in Chen and Pan (2012) and (S12) in Yu, Xie and Zhou (2023). Together with (11) and (12), we obtain P(๐1(B๐)โฅ2+๐) = ๐(๐โโ). Therefore we complete the proof.

5.4.
Proof of Theorem 2.8

Now we present the strategy for the proof of Theorem 2.8. By the Cauchy integral formula, we have
โซ๐(๐ฅ)d๐บ(๐ฅ) = โ(1/2๐๐) โฎC ๐(๐ง)๐๐บ(๐ง)d๐ง,
valid for any c.d.f. ๐บ and any analytic function ๐ on an open set containing the support of ๐บ, where the contour integral over C is taken in the anti-clockwise direction. In our case, ๐บ(๐ฅ)=๐(๐นB๐(๐ฅ)โ๐น๐๐(๐ฅ)). Therefore, the problem of finding the limiting distribution reduces to the study of
๐B๐๐(๐ง) = ๐(๐ B๐๐(๐ง)โ๐ ๐๐(๐ง)) โ ฮ๐(๐ ๐๐(๐ง)).
By using (8), in the ultrahigh-dimensional case,
ฮ๐(๐ ๐๐(๐ง)) = (๐ 3(๐ง)+๐ (๐ง)โ๐ โฒ(๐ง)๐ (๐ง))/(๐ 2(๐ง)โ1) โ 1/๐ง + ๐(1),
and then we have
๐B๐๐(๐ง) = ๐๐(๐ง) โ (๐ 3(๐ง)+๐ (๐ง)โ๐ โฒ(๐ง)๐ (๐ง))/(๐ 2(๐ง)โ1) + ๐(1), (14)
where ๐๐(๐ง) = ๐(๐ ๐(๐ง)โ๐ ๐๐(๐ง)). Firstly, according to Assumption 2.1*, we truncate the underlying random variables, choosing ๐ฟ๐ as defined in (10). By the arguments in Section 5.3, let หB๐=หB๐(ห๐ฅ๐๐) and หB๐=หB๐(ห๐ฅ๐๐) be defined similarly to B๐ with ๐ฅ๐๐ replaced by ห๐ฅ๐๐ and ห๐ฅ๐๐ respectively, where ห๐ฅ๐๐=๐ฅ๐๐๐ผ{|๐ฅ๐๐|โค๐ฟ๐(๐๐)1/(2๐ก+2)} and ห๐ฅ๐๐=(ห๐ฅ๐๐โEห๐ฅ๐๐)/๐๐ with ๐2๐=E|ห๐ฅ๐๐โEห๐ฅ๐๐|2. We then conclude that
P(B๐โ หB๐) โค ๐๐ยทP(|๐ฅ11|โฅ๐ฟ๐(๐๐)1/(2๐ก+2)) โค ๐พ๐ฟ๐โ2(๐ก+1) E|๐ฅ11|2๐ก+2 1{|๐ฅ11|โฉพ๐ฟ๐(๐๐)1/(2๐ก+2)} = ๐(1).
Let ห๐บ๐(๐) and ห๐บ๐(๐) be ๐บ๐(๐) with B๐ replaced by หB๐ and หB๐ respectively. Then for each ๐=1,2,...,๐, since หB๐=หB๐, we have ๐บ๐(๐๐)=ห๐บ๐(๐๐)+๐๐(1)=ห๐บ๐(๐๐)+๐๐(1). Thus, we only need to find the limiting distribution of {ห๐บ๐(๐๐), ๐=1,...,๐}. Hence, in
what follows, we assume that the underlying variables are truncated at ๐ฟ๐(๐๐)1 2๐ก+2, centralized, and renormalized. For convenience, we shall suppress the superscript on the variables, and assume that, for any 1 โฉฝ๐โฉฝ๐and 1โฉฝ๐โฉฝ๐, ๐ฅ๐๐ โฉฝ๐ฟ๐(๐๐)1 2๐ก+2,E๐ฅ๐๐=0,E|๐ฅ๐๐|2=1,E|๐ฅ๐๐|4=๐
+๐(1),E|๐ฅ๐๐|2๐ก+2<โ. (15) For any๐ >0, define the event ๐น๐(๐)= max๐โค๐ ๐๐(B๐) โฅ2+๐ where B๐is defined by the truncated and normalized variables satisfying condition (15). By Theorem 2.6, for any โ>0 P(๐น๐(๐))=o ๐โโ . (16) Here we would point out that the result regarding the minimum eigenvalue of B๐can be obtained similarly by investigating the maximum eigenvalue of โB๐. Note that the support of ๐นB๐is random. Fortunately, we have shown that the extreme eigenvalues of B๐are highly concentrated around two edges of the support of the limiting semicircle law ๐น(๐ฅ)in (16). Then the contourCcan be appropriately chosen. Moreover, as in Bai and Silverstein (2004), by (16), we can replace the process {๐๐(๐ง),๐งโC} by a slightly modified process {b๐๐(๐ง),๐งโC} . Below we present the definitions of the contour Cand the modified process b๐๐(๐ง). Let๐ฅ๐be any number greater than 2+1โ๐๐. Let๐ฅ๐be any number less than โ2+1โ๐๐. Now letC๐ข={๐ฅ+๐๐ฃ0:๐ฅโ[๐ฅ๐,๐ฅ๐]}. Then we defineC+:={๐ฅ๐+๐๐ฃ:๐ฃโ[0,๐ฃ0]}โชC๐ขโช{๐ฅ๐+๐๐ฃ:๐ฃโ[0,๐ฃ0]}, andC=C+โชC+. Now we define the subsetsC๐ofCon which๐๐(ยท)equals to b๐๐(ยท). Choose sequence {๐๐}decreasing to zero satisfying for some๐ผโ(0,1),๐๐โฅ๐โ๐ผ. Let C๐={๐ฅ๐+๐๐ฃ:๐ฃโ[0,๐ฃ0]}, On eigenvalues of a renormalized sample correlation matrix 17 andC๐={๐ฅ๐+๐๐ฃ:๐ฃโ[๐โ1๐,๐ฃ0]}. ThenC๐=C๐โชC๐ขโชC๐. For๐ง=๐ฅ+๐๐ฃ, we define b๐๐(๐ง)=๏ฃฑ๏ฃด๏ฃด๏ฃด ๏ฃฒ ๏ฃด๏ฃด๏ฃด๏ฃณ๐๐(๐ง), for๐งโC๐, ๐๐(๐ฅ๐+๐๐โ1๐๐), for๐ฅ=๐ฅ๐,๐ฃโ[0,๐โ1๐๐], ๐๐(๐ฅ๐+๐๐โ1๐๐), for๐ฅ=๐ฅ๐,๐ฃโ[0,๐โ1๐๐]. With the help of (16), one may thus find โฎ C๐๐(๐ง)๐๐(๐ง)๐๐ง=โฎ C๐๐(๐ง)b๐๐(๐ง)๐๐ง+๐๐(1), for every๐โ{1,...,๐พ}. Hence according to (14), the proof of Theorem 2.8 can be completed by verifying the convergence of b๐๐(๐ง)onCas stated in the following lemma. Lemma 5.2. In addition to Assumptions 2.1*, 2.2,2.3, suppose condition (15) holds. We have b๐๐(๐ง)๐=๐(๐ง)+๐๐(1), ๐งโC, where the random process ๐(๐ง)is a two-dimensional Gaussian process. 
The mean function is E๐(๐ง)=๐ 3(๐ง)+๐ (๐ง)โ๐ โฒ(๐ง)๐ (๐ง) ๐ 2(๐ง)โ1, and the covariance function is ๐ถ๐๐ฃ(๐(๐ง1),๐(๐ง2))=2๐ โฒ(๐ง1)๐ โฒ(๐ง2) {๐ (๐ง1)โ๐ (๐ง2)}2โ1 (๐ง1โ๐ง2)2 โ2๐ โฒ(๐ง1)๐ โฒ(๐ง2). (17) To prove Lemma 5.2, we decompose b๐๐(๐ง)into a random part ๐(1) ๐(๐ง)and a deterministic part ๐(2) ๐(๐ง)for๐งโC๐, that is,๐๐(๐ง)=๐(1) ๐(๐ง)+๐(2) ๐(๐ง), where ๐(1) ๐(๐ง)=๐ ๐ ๐(๐ง)โE๐ ๐(๐ง) and๐(2) ๐(๐ง)=๐ E๐ ๐(๐ง)โ๐ ๐๐(๐ง) . The random part contributes to the covariance function and the deterministic part contributes to the mean function. By Theorem 8.1 in Billingsley (1968), the proof of Lemma 5.2 is then complete if we can verify the following three steps: Step 1. Finite-dimensional convergence of ๐(1) ๐(๐ง)in distribution onC๐to a centered multivariate Gaussian random vector with covariance function given by (17). Lemma 5.3. Under assumptions of Theorem 2.8 and condition (15), as๐โโ , for any set of ๐points{๐ง1,๐ง2,...,๐ง๐}โC , the random vector ๐(1) ๐(๐ง1),...,๐(1) ๐(๐ง๐)converges weakly to a ๐-dimensional centered Gaussian distribution with covariance function in (17). Step 2. Tightness of the ๐(1) ๐(๐ง)for๐งโC๐. The tightness can be established by Theorem 12.3 of Billingsley (1968). Itโs sufficient to verify the moment condition given in the following lemma. Lemma 5.4. Under assumptions of Lemma 5.3, sup๐;๐ง1,๐ง2โC๐E ๐(1) ๐(๐ง1)โ๐(1) ๐(๐ง2) 2 |๐ง1โ๐ง2|2<โ. 18 Step 3. Convergence of the non-random part ๐(2) ๐(๐ง). Lemma 5.5. Under assumptions of Lemma 5.3, ๐(2) ๐(๐ง)=๐ 3(๐ง)+๐ (๐ง)โ๐ โฒ(๐ง)๐ (๐ง) ๐ 2(๐ง)โ1+๐(1)for๐งโC๐. Thus we complete the proof of Theorem 2.8. The proof of Lemma 5.3 is presented below while the proofs of Lemma 5.4-5.5
are delegated to the supplement file due to page limit. 5.5. Proof of Lemma 5.3 To prove Lemma 5.3, we first introduce the following supporting lemmas. Lemma 5.6. Under assumptions of Lemma 5.3, we have Y1(๐ง1,๐ง2)โโ๐2 ๐๐ง1๐๐ง2๐โ๏ธ ๐=1 E๐โ1ห๐ฝ๐(๐ง1)๐๐(๐ง1)][E๐โ1ห๐ฝ๐(๐ง2)๐๐(๐ง2) =๐๐(1), Y2(๐ง1,๐ง2)โ๐2 ๐๐ง1๐๐ง2๐โ๏ธ ๐=1E๐โ1 E๐ ห๐ฝ๐(๐ง1)๐๐(๐ง1)E๐ ห๐ฝ๐(๐ง2)๐๐(๐ง2) =2๐2 ๐๐ง1๐๐ง2Jโ 2๐ โฒ(๐ง1)๐ โฒ(๐ง2)+๐๐(1), where J=1 ๐2๐๐(๐ง1)๐๐(๐ง2)๏ฃฎ๏ฃฏ๏ฃฏ๏ฃฏ๏ฃฏ๏ฃฐE๐โ๏ธ ๐=1trh E๐ Qโ1 ๐(๐ง1) E๐ Qโ1 ๐(๐ง2)i๏ฃน๏ฃบ๏ฃบ๏ฃบ๏ฃบ๏ฃป. Lemma 5.7. Under assumptions of Lemma 5.3, we have ๐2 ๐๐ง2๐๐ง1J=๐ 2(๐ง1)๐ 2(๐ง2) ๐ 2(๐ง1)โ1 ๐ 2(๐ง2)โ1 [๐ (๐ง1)๐ (๐ง2)โ1]2+๐๐(1). The proof of Lemma 5.7 is presented in next section while the proof of Lemma 5.6 is delegated to the supplement file due to page limit. We now proceed to the proof of Lemma 5.3. By the fact that a random vector is multivariate normally distributed if and only if every linear combination of its components is normally distributed, we need only show that, for any positive integer ๐and any complex sequence ๐๐, the sum ๐โ๏ธ ๐=1๐๐๐(1) ๐(๐ง๐) converges weakly to a Gaussian random variable. To this end, we first decompose the random part ๐(1) ๐(๐ง)as a sum of martingale difference, which is given in (20). Then, we apply the martingale CLT (Proposition 5.8) to obtain the asymptotic distribution of ๐(1) ๐(๐ง). On eigenvalues of a renormalized sample correlation matrix 19 Proposition 5.8. (Theorem 35.12 of Billingsley (1968)). Suppose for each ๐,๐๐,1,๐๐,2,...,๐๐,๐๐is a real martingale difference sequence with respect to the increasing ๐-field F๐,๐ having second mo- ments. If as๐โโ , (i)ร๐๐ ๐=1๐ธ ๐2 ๐,๐|F๐,๐โ1๐.๐.โโโโ๐2,and (ii)ร๐๐ ๐=1๐ธ ๐2 ๐,๐๐ผ(|๐๐, ๐|โฅ๐) โ0,where ๐2is a positive constant and ๐is an arbitrary positive number, thenร๐๐ ๐=1๐๐,๐๐ทโโ๐ 0,๐2 . To begin with, similar as (9), we give some useful estimate below. 
For ๐>2, we have E|๐๐(๐ง)|๐โค๐พ ๐โ๐/2+๐โ๐/2๐๐/2โ1๐ฟ2๐โ4 ๐ ,E|๐ฟ๐(๐ง)|๐โค๐พ ๐โ๐/2+๐โ๐/2๐๐/2โ1๐ฟ2๐โ4 ๐ , E ห๐ฝ๐(๐ง)โ๐๐(๐ง) ๐=๐(๐๐/2๐โ๐),|๐๐(๐ง)โ๐1(๐ง)|=๐(๐1/2๐โ2/3),E|๐๐(๐ง)โE๐ฝ๐(๐ง)|=๐(๐๐โ2), E|๐พ๐(๐ง)โ๐๐(๐ง)|๐=๐(๐โ๐/2),E 1 ๐tr Qโ1(๐ง)M โE1 ๐tr Qโ1(๐ง)M ๐ =๐(โ๐๐/2). (18) Write๐(1) ๐(๐ง)as a sum of martingale difference sequences (MDS), and then utilize the CLT of MDS to derive the asymptotic distribution of ๐(1) ๐(๐ง), which can be written as ๐(1) ๐(๐ง)=๐[๐ ๐(๐ง)โE๐ ๐(๐ง)]=๐โ๏ธ ๐=1(E๐โE๐โ1)trh Qโ1(๐ง)โQโ1 ๐(๐ง)i =โ๐โ๏ธ ๐=1(E๐โE๐โ1)๐ฝ๐(๐ง)rโค ๐Qโ2 ๐(๐ง)r๐. (19) By using (7) and the fact that E๐โE๐โ1ห๐ฝ๐(๐ง)1 ๐trQโ2 ๐(๐ง)=0, we have E๐โE๐โ1๐ฝ๐(๐ง)rโค ๐Qโ2 ๐(๐ง)r๐=E๐ ห๐ฝ๐(๐ง)๐ฟ๐(๐ง)โห๐ฝ2 ๐(๐ง)๐๐(๐ง)1 ๐trQโ2 ๐(๐ง) +E๐โ1[๐๐(๐ง)]โ E๐โE๐โ1h ห๐ฝ2 ๐(๐ง) ๐๐(๐ง)๐ฟ๐(๐ง)โ๐ฝ๐(๐ง)rโค ๐Qโ2 ๐(๐ง)r๐๐2 ๐(๐ง)i , where๐๐(๐ง)=โE๐h ห๐ฝ๐(๐ง)๐ฟ๐(๐ง)โห๐ฝ2 ๐(๐ง)๐๐(๐ง)1 ๐trQโ2 ๐(๐ง)i . With the help of (18), we have E ๐โ๏ธ ๐=1 E๐โE๐โ1e๐ฝ2 ๐(๐ง)๐๐(๐ง)๐ฟ๐(๐ง) 2 =๐โ๏ธ ๐=1E E๐โE๐โ1e๐ฝ2 ๐(๐ง)๐๐(๐ง)๐ฟ๐(๐ง) 2 โค๐พ๐โ๏ธ ๐=1E e๐ฝ2 ๐(๐ง)๐๐(๐ง)๐ฟ๐(๐ง) 2 โค๐พ(๐โ1+๐ฟ4 ๐)=๐(1), and similarly E ๐โ๏ธ ๐=1 E๐โE๐โ1e๐ฝ2 ๐(๐ง)๐ฝ๐(๐ง)rโค ๐Dโ2 ๐(๐ง)r๐๐2 ๐(๐ง) 2 =๐(1). 20 By (19), we obtain ๐(1) ๐(๐ง)=๐โ๏ธ ๐=1[E๐โE๐โ1]๐๐(๐ง)+๐๐(1). (20) Then we need to consider the limit of the following term: ๐โ๏ธ ๐=1๐ผ๐๐โ๏ธ ๐=1 E๐โE๐โ1 ๐๐(๐ง)=๐โ๏ธ ๐=1๐โ๏ธ ๐=1๐ผ๐ E๐โE๐โ1 ๐๐(๐ง). Using (18) we obtain E ๐๐(๐ง) 4โค๐พ ๐โ2 ๐E ๐ฟ๐(๐ง) 4+๐โ4 ๐E ๐๐(๐ง) 4 โค๐พ ๐โ2+๐โ1๐ฟ4 ๐ , from which we can have ๐โ๏ธ ๐=1Eยฉยญ ยซ ๐โ๏ธ ๐=1๐ผ๐ E๐โE๐โ1 ๐๐(๐ง๐) 2 ๐ผ(|ร๐ ๐=1๐ผ๐[E๐โE๐โ1]๐๐(๐ง๐)|โฅ๐)ยชยฎ ยฌ โค1 ๐2๐โ๏ธ ๐=1E ๐โ๏ธ ๐=1๐ผ๐ E๐โE๐โ1 ๐๐(๐ง๐) 4 โ0. Thus the condition (ii) of Proposition 5.8 is satisfied. Next, it suffices to prove that ๐โ๏ธ ๐=1E๐โ1 ๐๐(๐ง1)โ๐ธ๐โ1๐๐(๐ง1) ๐๐(๐ง2)โ๐ธ๐โ1๐๐(๐ง2) (21) converges in probability to (17). Note that (21)=๐โ๏ธ ๐=1E๐โ1[๐๐(๐ง1)๐๐(๐ง2)]โ๐โ๏ธ ๐=1[E๐โ1๐๐(๐ง1)][E๐โ1๐๐(๐ง2)]=Y1(๐ง1,๐ง2)+Y 2(๐ง1,๐ง2). By Lemmas 5.6-5.7, we obtain the limit of (21)
is (17). Thus we complete the proof of Lemma 5.3. 5.6. Proof of Lemma 5.7 The proof of Lemma 5.7 differs substantially from the classical case. Unlike the high dimensional case where๐/๐โ๐โ(0,โ)(Gao et al., 2017), our analysis is conducted in the ultrahigh dimen- sional regime with ๐/๐โโ . In this setting, we carefully examine the influence of ๐๐and๐๐, and derive a novel determinant equivalent form ๐โ1 ๐๐1(๐ง)โโ๐๐โ๐งโ1 I๐forQโ1 ๐(๐ง), the resolvent of the renormalized correlation matrix with the ๐th component information removed. Specifically, by using the identity rโค ๐Qโ1 ๐(๐ง)=โ๐๐๐ฝ๐๐(๐ง)rโค ๐Qโ1 ๐๐(๐ง), we get Qโ1 ๐(๐ง)=โH๐(๐ง)+๐1(๐ง1)A(๐ง1)+B(๐ง1)+C(๐ง1)+F(๐ง1), On eigenvalues of a renormalized sample correlation matrix 21 where H๐(๐ง1)=โ๐๐+๐ง1โ๐โ1 ๐๐1(๐ง1)โ1 I๐and A(๐ง1)=๐โ๏ธ ๐โ ๐H๐(๐ง1) r๐rโค ๐โ1 ๐โ1๐ฝ Qโ1 ๐๐(๐ง1), B(๐ง1)=๐โ๏ธ ๐โ ๐ ๐ฝ๐๐(๐ง1)โ๐1(๐ง1)H๐(๐ง1)r๐rโค ๐Qโ1 ๐๐(๐ง1), C(๐ง1)=โ๐โ1 ๐๐1(๐ง1)H๐(๐ง1)๐ฝ Qโ1 ๐(๐ง1)โQโ1 ๐๐(๐ง1) , F(๐ง1)=โ๐โ1 ๐๐๐1(๐ง1)H๐(๐ง1)1๐1โค ๐Qโ1 ๐(๐ง1). We next employโH๐(๐ง)as a suitable approximation to the resolvent matrix Qโ1 ๐(๐ง), extract the dom- inant terms contributing to the limiting behavior of J, and demonstrate that the error terms are negli- gible. Note thatโฅH๐(๐ง1)โฅโค๐พand by Lemma 6 in Gao et al. (2017), we have Er๐rโค ๐=1 ๐โ1๐ฝ. For any nonrandom MwithโฅMโฅโค๐พ, by using (18), we can obtain ๐โ1E|trB(๐ง1)M|=๐(๐โ1/2), ๐โ1E|trC(๐ง1)M|=๐(๐โ1), which implies ๐โ1E trE๐(B(๐ง1))Qโ1 ๐(๐ง2) =๐(1), ๐โ1E trE๐(C(๐ง1))Qโ1 ๐(๐ง2) =๐(1). 
And since 1โค ๐Qโ1 ๐(๐ง1)=โ1โ๐๐+๐ง1โค ๐, we have trE๐(F(๐ง1))Qโ1 ๐(๐ง2) โค๐พ/โ๐๐.In the end, consider the term๐1(๐ง1)trE๐(A(๐ง1))Qโ1 ๐(๐ง2).By using Qโ1(๐ง)โQโ1 ๐(๐ง)=โQโ1 ๐(๐ง)r๐rโค ๐Qโ1 ๐(๐ง)๐ฝ๐(๐ง), we can write tr E๐(A(๐ง1))Qโ1 ๐(๐ง2)=๐ด1(๐ง1,๐ง2)+๐ด2(๐ง1,๐ง2)+๐ด3(๐ง1,๐ง2),where ๐ด1(๐ง1,๐ง2)=โ๐โ๏ธ ๐<๐๐ฝ๐๐(๐ง2)rโค ๐E๐ Qโ1 ๐๐(๐ง1) Qโ1 ๐๐(๐ง2)r๐rโค ๐Qโ1 ๐๐(๐ง2)H๐(๐ง1)r๐, ๐ด2(๐ง1,๐ง2)=โtr๐โ๏ธ ๐<๐H๐(๐ง1)๐โ1๐ฝE๐ Qโ1 ๐๐(๐ง1) Qโ1 ๐(๐ง2)โQโ1 ๐๐(๐ง2) , ๐ด3(๐ง1,๐ง2)=tr๐โ๏ธ ๐<๐H๐(๐ง1) r๐rโค ๐โ๐โ1๐ฝ E๐ Qโ1 ๐๐(๐ง1) Qโ1 ๐๐(๐ง2). With (18), we obtain |๐1(๐ง1)๐ด2(๐ง1,๐ง2)|โค๐พ. Our next aim is to show ๐โ1๐1(๐ง1)E๐๐ด3(๐ง1,๐ง2)=๐๐(1). (22) Write E ๐1(๐ง1)E๐๐ด3(๐ง1,๐ง2) 2=|๐1(๐ง1)|2โ๏ธ ๐1,๐2<๐EtrH๐(๐ง1) r๐1rโค ๐1โ๐โ1๐ฝ E๐ Qโ1 ๐1๐(๐ง1) E๐ Qโ1 ๐1๐(๐ง2) รtrH๐(๐ง1) r๐2rโค ๐2โ๐โ1๐ฝ E๐ Qโ1 ๐2๐(๐ง1) E๐ Qโ1 ๐2๐(๐ง2) 22 =|๐1(๐ง1)|2โ๏ธ ๐1,๐2<๐EtrH๐(๐ง1) r๐1rโค ๐1โ๐โ1๐ฝ E๐ Qโ1 ๐1๐(๐ง1)หQโ1 ๐1๐(๐ง2) รtrH๐(๐ง1) r๐2rโค ๐2โ๐โ1๐ฝ E๐ Qโ1 ๐2๐(๐ง1)หQโ1 ๐2๐(๐ง2) , where หQ๐2๐is defined similarly as Q๐2๐by(r1,..., r๐โ1,หr๐+1,..., หr๐)and where หr๐+1,..., หr๐are i.i.d. copies of r๐+1,..., r๐. When๐1=๐2, with Lemma 5 in Gao et al. (2017), the term in the above expression is bounded by |๐1(๐ง1)|2โ๏ธ ๐1<๐E trH๐(๐ง1) r๐1rโค ๐1โ๐โ1๐ฝ E๐ Qโ1 ๐1๐(๐ง1)หQโ1 ๐1๐(๐ง2) 2 โค๐พ๐๐โ1 ๐๐โ1=๐(1). For๐1โ ๐2< ๐, define ๐ฝ๐1๐2๐(๐ง)=1 โ๐๐+rโค ๐2Qโ1 ๐1๐2๐(๐ง)r๐2,Q๐1๐2๐(๐ง)=Q(๐ง)โโ๏ธ ๐ ๐(r๐1rโค ๐1+r๐2rโค ๐2+r๐rโค ๐), and similarly define ห๐ฝ๐1๐2๐and หQ๐1๐2๐(๐ง). 
Then we have |๐1(๐ง1)|2โ๏ธ ๐1โ ๐2<๐EtrH๐(๐ง1) r๐1rโค ๐1โ๐โ1๐ฝ E๐ Qโ1 ๐1๐(๐ง1)หQโ1 ๐1๐(๐ง2) รtrH๐(๐ง1) r๐2rโค ๐2โ๐โ1๐ฝ E๐ Qโ1 ๐2๐(๐ง1)หQโ1 ๐2๐(๐ง2) =๐1+๐2+๐3, where ๐1=โโ๏ธ ๐1โ ๐2<๐EtrH๐(๐ง1) r๐1rโค ๐1โ๐โ1๐ฝ E๐ ๐ฝ๐1๐2๐(๐ง1)Qโ1 ๐1๐2๐(๐ง1)r๐2rโค ๐2Qโ1 ๐1๐2๐(๐ง1)หQโ1 ๐1๐(๐ง2) ร|๐1(๐ง1)|2trH๐(๐ง1) r๐2rโค ๐2โ๐โ1๐ฝ E๐ Qโ1 ๐2๐(๐ง1)หQโ1 ๐2๐(๐ง2) , ๐2=โโ๏ธ ๐1โ ๐2<๐EtrH๐(๐ง1) r๐1rโค ๐1โ๐โ1๐ฝ E๐ Qโ1 ๐1๐2๐(๐ง1)ห๐ฝ๐1๐2๐(๐ง2)หQโ1 ๐1๐2๐(๐ง2)r๐2rโค ๐2หQโ1 ๐1๐2๐(๐ง2) ร|๐1(๐ง1)|2trH๐(๐ง1) r๐2rโค ๐2โ๐โ1๐ฝ E๐ Qโ1 ๐2๐(๐ง1)หQโ1 ๐2๐(๐ง2) , ๐3=โ|๐1(๐ง1)|2โ๏ธ ๐1โ ๐2<๐EtrH๐(๐ง1) r๐1rโค ๐1โ๐โ1๐ฝ E๐ Qโ1 ๐1๐2๐(๐ง1)หQโ1 ๐1๐2๐(๐ง2) รtrH๐(๐ง1) r๐2rโค ๐2โ๐โ1๐ฝ E๐ ๐ฝ๐2๐1๐(๐ง1)Qโ1 ๐1๐2๐(๐ง1)r๐1rโค ๐1Qโ1 ๐1๐2๐(๐ง1)หQโ1 ๐2๐(๐ง2) . Spilt ๐1=๐(1) 1+๐(2) 1, ๐(2) 1=๐(21) 1+๐(22) 1, ๐(22) 1=๐(221) 1+๐(222) 1, where ๐(1) 1=โ๏ธ ๐1โ ๐2<๐EtrH๐(๐ง1) r๐1rโค ๐1โ๐โ1๐ฝ On eigenvalues of a renormalized sample correlation matrix 23 รE๐ ๐ฝ๐1๐2๐(๐ง1)Qโ1 ๐1๐2๐(๐ง1)r๐2rโค ๐2Qโ1 ๐1๐2๐(๐ง1)ห๐ฝ๐1๐2๐(๐ง2)หQโ1 ๐1๐2๐(๐ง2)r๐2rโค ๐2หQโ1 ๐1๐2๐(๐ง2) ร|๐1(๐ง1)|2trH๐(๐ง1) r๐2rโค ๐2โ๐โ1๐ฝ E๐ Qโ1 ๐2๐(๐ง1)หQโ1 ๐2๐(๐ง2) , ๐(2) 1=โโ๏ธ ๐1โ ๐2<๐EtrH๐(๐ง1) r๐1rโค ๐1โ๐โ1๐ฝ E๐ ๐ฝ๐1๐2๐(๐ง1)Qโ1 ๐1๐2๐(๐ง1)r๐2rโค ๐2Qโ1 ๐1๐2๐(๐ง1)หQโ1 ๐1๐2๐(๐ง2) ร|๐1(๐ง1)|2trH๐(๐ง1) r๐2rโค ๐2โ๐โ1๐ฝ E๐ Qโ1 ๐2๐(๐ง1)หQโ1 ๐2๐(๐ง2) , ๐(21) 1=โ๏ธ ๐1โ ๐2<๐EtrH๐(๐ง1) r๐1rโค ๐1โ๐โ1๐ฝ