more difficult to establish the inequality directly for all x. A.2 Some special functions The modified Bessel functions $I_\nu(x)$ and $K_\nu(x)$ are independent solutions to the modified Bessel differential equation $x^2\frac{d^2y}{dx^2}+x\frac{dy}{dx}-(x^2+\nu^2)y=0$, where $\nu$ and $z$ can be arbitrary complex-valued. Further, recall their series representation I...
https://arxiv.org/abs/2505.07649v1
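The modified Bessel equation quoted above can be checked numerically. A minimal sketch using SciPy's `iv` and `kv` (the finite-difference scheme and tolerances are my own choices, not from the paper):

```python
import numpy as np
from scipy.special import iv, kv

# Verify that I_nu and K_nu satisfy x^2 y'' + x y' - (x^2 + nu^2) y = 0
# by central finite differences (a numerical sanity check only).
def residual(f, nu, x, h=1e-4):
    d1 = (f(nu, x + h) - f(nu, x - h)) / (2 * h)
    d2 = (f(nu, x + h) - 2 * f(nu, x) + f(nu, x - h)) / h**2
    return x**2 * d2 + x * d1 - (x**2 + nu**2) * f(nu, x)

for f in (iv, kv):
    assert abs(residual(f, 1.5, 2.0)) < 1e-5  # residual is numerically zero
```

The same check works for any real order and positive argument, since both functions are smooth away from the origin.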
reciprocal, i.e. $H_\nu^{-1}[\cdot]=H_\nu[\cdot]$, the previous identity becomes $I_\nu^{-1}[F](x)=\exp(-i\pi(\nu/2-3/4))\,H_\nu[\hat F](x)$. (A.6) Both of those relationships are useful when using tables of Hankel transforms. The $I$-transform is also related to the more well-known $K$-transform $K_\nu[g](y)=\int_0^\infty g(x)\sqrt{xy}\,K_\nu(xy)\,dx$. If $G(y)=K_\nu[g]$...
https://arxiv.org/abs/2505.07649v1
1 Channel Fingerprint Construction for Massive MIMO: A Deep Conditional Generative Approach Zhenzhou Jin, Graduate Student Member, IEEE, Li You, Senior Member, IEEE, Xudong Li, Zhen Gao, Senior Member, IEEE, Yuanwei Liu, Fellow, IEEE, Xiang-Gen Xia, Fellow, IEEE, and Xiqi Gao, Fellow, IEEE Abstract —Accurate channel st...
https://arxiv.org/abs/2505.07893v1
the virtual realm [2], [3]. The vision of 6G is to propel society towards “intelligent internet of everything” and “ubiquitous connectivity”, realizing the seamless integration and interaction between the physical and virtual worlds. To achieve this, 6G will need to possess more powerful end-to-end information proces...
https://arxiv.org/abs/2505.07893v1
which can alleviate the challenges of CSI acquisition and empower the design of wireless transmission technologies. By providing essential and accurate channel knowledge, CF has recently spurred extensive research for various applications in space-air-ground integrated networks, including beam alignment [8], [9], c...
https://arxiv.org/abs/2505.07893v1
to establish the intrinsic mapping between physical objects and their corresponding virtual twins. Consistent with the DT paradigm, we refer to this concept as the CF twins. To better explore the relationship between the two, we reformulate the task of fine-grained CF construction as an image super-resolution (ISR) p...
https://arxiv.org/abs/2505.07893v1
•Experimental results show that the proposed approach achieves significant performance improvements over the baselines. Additionally, we validate the generalization and knowledge transfer capabilities of the proposed model by conducting zero-shot performance testing on other SR CF tasks with unseen magnification factor...
https://arxiv.org/abs/2505.07893v1
environment, which determines the channel characteristics in (1), including the complex gain and propagation delay. It is evident that the channel power attenuation experienced at the UE is influenced by various environmental factors, including propagation losses along different paths as well as reflections and diffr...
https://arxiv.org/abs/2505.07893v1
the number of training samples. However, this task represents a classic and highly challenging inverse problem, requiring the effective reconstruction of fine details from a given LR CF. Since the conditional distribution of HR outputs corresponding to a given LR input typically does not adhere to a simple parametric d...
https://arxiv.org/abs/2505.07893v1
approximation to $p(G\mid\dot G)$ through a directed iterative refinement process guided by source information, which enables the mapping of $\dot G$ to $G$. Given the powerful implicit prior learning capability of GDM, we design a conditional GDM to facilitate the generation of $G$. As shown in Fig. 2, CGDM can generate the target HR CF ...
https://arxiv.org/abs/2505.07893v1
$t$ into the model $\varepsilon_\theta(\cdot)$ 7: Perform a gradient descent step on the objective function (32) to update the model parameters $\theta$: $\nabla_\theta\big\|\varepsilon-\varepsilon_\theta\big(\dot G,\sqrt{\bar\alpha_t}G_0+\sqrt{1-\bar\alpha_t}\,\varepsilon,\,t\big)\big\|_2^2$ 8: until the objective function (32) converges. Algorithm 2 Inferring the HR CF in the reverse process conditioned on the LR CF through $T$ iterative refinement steps 1...
https://arxiv.org/abs/2505.07893v1
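The training step described above (noise the HR CF, then descend on the denoising objective) can be sketched in NumPy. The denoiser here is a toy two-parameter linear model standing in for the paper's conditional U-Net $\varepsilon_\theta$; shapes, schedule, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# One CGDM-style training step: sample t, diffuse G0 to G_t, and take a
# gradient step on || eps - eps_theta(LR condition, G_t, t) ||_2^2.
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)

def train_step(theta, G0, G_lr, lr=1e-2):
    t = rng.integers(T)
    eps = rng.standard_normal(G0.shape)
    Gt = np.sqrt(alpha_bar[t]) * G0 + np.sqrt(1 - alpha_bar[t]) * eps
    feats = np.stack([Gt, np.broadcast_to(G_lr, Gt.shape)])  # condition on LR CF
    pred = np.tensordot(theta, feats, axes=1)                # toy eps_theta
    grad = np.stack([-2 * np.mean((eps - pred) * f) for f in feats])
    return theta - lr * grad, np.mean((eps - pred) ** 2)

theta = np.zeros(2)
G0, G_lr = rng.standard_normal(16), rng.standard_normal(16)
for _ in range(200):
    theta, loss = train_step(theta, G0, G_lr)
```

The loop mirrors lines 6–8 of Algorithm 1: resample $(t,\varepsilon)$ every step and update $\theta$ until the objective stabilizes.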
have access to the observed sample $G_0$, while the latent variables $G_{1:T}$ are unknown. Therefore, we seek to maximize the conditional marginal distribution $p(G_0\mid\dot G)$, which is given by $p(G_0\mid\dot G)=\int p(G_{0:T}\mid\dot G)\,dG_{1:T}$. (21) Within the framework of variational inference, the likelihood of the observed sample $G_0$ conditioned on $\dot G$, kn...
https://arxiv.org/abs/2505.07893v1
subsequently obtain an approximation of the target CF $\hat G_0$ through transformation (14), i.e., $\hat G_0=\frac{1}{\sqrt{\bar\alpha_t}}\Big(G_t-\sqrt{1-\bar\alpha_t}\,\varepsilon_\theta\big(\dot G,\underbrace{\sqrt{\bar\alpha_t}G_0+\sqrt{1-\bar\alpha_t}\,\varepsilon}_{G_t},\,t\big)\Big)$. (33) Through reparameterization, (33) represents the result of iterative refinements, with each iteration in our proposed CGDM being represented by $G_{t-1}\leftarrow\frac{1}{\sqrt{\alpha_t}}\big(G_t-\ldots$
https://arxiv.org/abs/2505.07893v1
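The reverse refinement recursion can be sketched as a loop from $t=T$ down to $t=1$. The denoiser is a zero-returning placeholder (only the recursion structure is shown), and taking $\sigma_t=\sqrt{\beta_t}$ is a common DDPM choice assumed here, not stated in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(1)

# Reverse-process sketch: start from Gaussian noise and iterate
#   G_{t-1} = (G_t - (1-a_t)/sqrt(1-abar_t) * eps_theta(LR, G_t, t)) / sqrt(a_t)
#             + sigma_t * z.
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_theta(G_lr, Gt, t):
    return np.zeros_like(Gt)          # placeholder for the trained denoiser

def refine(G_lr, shape):
    Gt = rng.standard_normal(shape)   # G_T ~ N(0, I)
    for t in range(T - 1, -1, -1):
        z = rng.standard_normal(shape) if t > 0 else 0.0
        mean = (Gt - (1 - alphas[t]) / np.sqrt(1 - alpha_bar[t])
                * eps_theta(G_lr, Gt, t)) / np.sqrt(alphas[t])
        Gt = mean + np.sqrt(betas[t]) * z
    return Gt

G_hat = refine(np.zeros(8), (8,))
```

With a trained $\varepsilon_\theta$ in place of the placeholder, `refine` yields the HR CF conditioned on the LR CF.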
[Figure: (f) Time Embedding Block — Sinusoidal Time Encoding → Fully Connected Layer → Swish → Fully Connected Layer — and a self-attention block with Group Norm, three Linear projections, and a Softmax Attention Matrix ...]
https://arxiv.org/abs/2505.07893v1
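The time-embedding block in the figure (sinusoidal encoding, then FC → Swish → FC) can be sketched as follows; the channel count `c_time` and the geometric frequency schedule are assumptions, since the excerpt does not fix them:

```python
import numpy as np

# Sinusoidal time encoding feeding the time-embedding block.
def sinusoidal_encoding(t, c_time=16):
    k = np.arange(c_time // 2)
    w = 1.0 / (10000 ** (2 * k / c_time))   # assumed geometric frequencies w_k
    return np.concatenate([np.sin(w * t), np.cos(w * t)])

def swish(x):
    # Swish activation used between the two fully connected layers.
    return x / (1.0 + np.exp(-x))

gamma = sinusoidal_encoding(10.0)   # Gamma_t for diffusion step t = 10
emb = swish(gamma)                  # stand-in for FC -> Swish (weights omitted)
```

Each pair $(\sin(w_k t), \cos(w_k t))$ occupies two adjacent channels, which is what makes the linear-transformation property in (37) possible.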
for any given time $\Gamma_{t+\Delta t}$ can be obtained through a linear transformation expressed as $\Gamma^{\mathsf T}_{t+\Delta t}=\big[\sin(w_0(t+\Delta t)),\ \cos(w_0(t+\Delta t)),\ \ldots,\ \sin(w_{c_{\mathrm{time}}/2-1}(t+\Delta t)),\ \cos(w_{c_{\mathrm{time}}/2-1}(t+\Delta t))\big]=M_{\Delta t}\cdot\Gamma^{\mathsf T}_t$. (37) $M_{\Delta t}$ is a linear transform matrix defined as the block-diagonal matrix $M_{\Delta t}=\operatorname{diag}\big(R(w_0\Delta t),\ \ldots,\ R(w_{c_{\mathrm{time}}/2-1}\Delta t)\big)$, (...
https://arxiv.org/abs/2505.07893v1
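The block-rotation identity behind (37) follows from the angle-addition formulas, and can be verified numerically for one $(\sin,\cos)$ pair; the specific values of $w$, $t$, $\Delta t$ below are arbitrary:

```python
import numpy as np

# Check (37) for a single frequency: the encoding at t+dt is a 2x2
# rotation R(w*dt) applied to the (sin, cos) pair at t.
def pair_encoding(t, w):
    return np.array([np.sin(w * t), np.cos(w * t)])

def R(theta):
    # sin(a+b) = sin a cos b + cos a sin b;  cos(a+b) = cos a cos b - sin a sin b
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

w, t, dt = 0.3, 2.0, 0.7
lhs = pair_encoding(t + dt, w)
rhs = R(w * dt) @ pair_encoding(t, w)
assert np.allclose(lhs, rhs)
```

Stacking one such block per frequency gives the full block-diagonal $M_{\Delta t}$.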
a better match, the corresponding attention score $S_{AM}[j',j]$ is higher. Thus, the output of the attention mechanism corresponding to the $r$-th component can be represented by the weighted sum of all inputs, i.e., $z'_r=\sum_j S_{AM}[r,j]\,v_j=S_{AM}[r,:]\cdot V$, where $z'_r\in\mathbb R^{1\times d_n}$ represents the $r$-th output, which is computed by adaptively f...
https://arxiv.org/abs/2505.07893v1
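The weighted-sum step above is standard scaled dot-product self-attention; a minimal sketch (the dimensions and random projection matrices are illustrative assumptions):

```python
import numpy as np

# Self-attention: scores S_AM = softmax(Q K^T / sqrt(d)), output rows
# z'_r = S_AM[r, :] @ V, i.e. an adaptive weighted sum of the values.
def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    S = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # attention matrix S_AM
    return S @ V                                  # each row is z'_r

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))                   # 5 components, d_n = 8
W = [rng.standard_normal((8, 8)) for _ in range(3)]
Z = self_attention(X, *W)
```

Each row of `S` sums to one, so `Z[r]` is a convex combination of the value vectors, exactly the $z'_r=S_{AM}[r,:]\cdot V$ form in the text.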
pruned, calculated by multiplying the total number of parameters in the teacher network by the pruning ratio. Solving the optimization problem (44) is NP-hard; therefore, we need to find a surrogate objective. Capitalizing on the triangle inequality, we can obtain the upper bound of (44a): $\mathbb E\|\varepsilon_{\mathrm{tea}}-\varepsilon(p_1,p_2,\ldots,p_{\tilde m})\|$...
https://arxiv.org/abs/2505.07893v1
section, the implementation details are first introduced. Then, we analyze the convergence and complexity of the proposed model under different hyperparameter settings. Finally, we comprehensively evaluate the proposed approach in terms of reconstruction accuracy and knowledge transfer capability, and compare it ag...
https://arxiv.org/abs/2505.07893v1
lr = 5×10⁻⁵; batch size 16. [Fig. 4: The layout of the massive MIMO-OFDM system; corner coordinates (0,0), (128,0), (0,128), (128,128).] average algorithm [23], with the decay factor set to 0.9999. Additionally, we incorporate dropout, with the dropout rate configured at 0.1. The forward diffusion steps $T$ are set to 1000 and the diffusion noise...
https://arxiv.org/abs/2505.07893v1
$N_{RA}$ of integrated Res+ and self-attention blocks, $N_{RA}=2$. Any parameters not explicitly mentioned in the following analysis are assumed to be set to these values. As shown in Fig. 5(a), the variation of the loss function for the CGDM with different numbers, $c_1$, of base channels is presented over 500 training epochs. It...
https://arxiv.org/abs/2505.07893v1
on the convergence and complexity analysis, we set $c_1=64$, $N_{RA}=2$, and $\bar\eta=1:2:4:8:16$ as the default configuration for subsequent experiments, with the corresponding ...
https://arxiv.org/abs/2505.07893v1
Knowledge Transfer and Generalization Ability: Transferring a trained model to an unseen task is considered zero-shot generation. Namely, the neural network learns and updates its weights solely on the ×4 SR CF dataset, without exposure to the ×16, ×8, and ×2 SR CF datasets. It relies on the trained model to perform SR CF ...
https://arxiv.org/abs/2505.07893v1
The trained CGDM, combining the learned prior distribution of the target data and side information, generates fine-grained CFs through a series of iterative refinement steps. Additionally, to facilitate the practical deployment of the CGDM, we introduced a one-shot pruning approach and employed multi-objective knowle...
https://arxiv.org/abs/2505.07893v1
[11] P. Zeng and J. Chen, “UAV-aided joint radio map and 3D environment reconstruction using deep learning approaches,” in Proc. IEEE Int. Conf. Commun. (ICC), Seoul, Korea, Aug. 2022, pp. 5341–5346. [12] Z. Yang, Z. Zhou, and Y. Liu, “From RSSI to CSI: Indoor localization via channel response,” ACM Comput. Surveys ...
https://arxiv.org/abs/2505.07893v1
D. Niyato, J. Kang, Z. Xiong, S. Cui et al. , “Enhancing deep reinforcement learning: A tutorial on generative diffusion models in network optimization,” IEEE Commun. Surv. Tutor. , May 2024. [26] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” in Proc. Int. Conf. Learn. Represent. (ICLR) , Banff, AB, C...
https://arxiv.org/abs/2505.07893v1
1 EnvCDiff: Joint Refinement of Environmental Information and Channel Fingerprints via Conditional Generative Diffusion Model Zhenzhou Jin, Graduate Student Member, IEEE, Li You, Senior Member, IEEE, Xiang-Gen Xia, Fellow, IEEE, and Xiqi Gao, Fellow, IEEE Abstract —The paradigm shift from environment-unaware communicat...
https://arxiv.org/abs/2505.07894v1
In [2], the authors transform the CF estimation task into an image-to-image inpainting problem and develop a Laplacian pyramid-based model to facilitate CF construction. In [8], [9], UNet is utilized to learn geometry-based and physics-based features in urban or indoor environments, enabling the construction of corresp...
https://arxiv.org/abs/2505.07894v1
signal attenuation measured at the UTs located at $\{x_m\}_{m=1}^{M}=\mathcal A$. Additionally, small-scale effects are typically modeled as a complex Gaussian random variable $h$ with unit variance. Without loss of generality, the baseband signal received by the UT at $x_m$ can be represented as [2] $y(E,x_m)=\sqrt{g(E,x_m)}\,h\,s+z(E,x_m)$, (1) where $s$ re...
https://arxiv.org/abs/2505.07894v1
p− θF F FInversion of the Gaussian Diffusion Process . 1,tt p− θF F FSkipping Connection Training Frozen Parameters ()θε Framework of Conditional Denoising Neural Networks ()θεFramework of Conditional Denoising Neural Networks . 0 p  F F F F Fig. 1. Schematic of the proposed CDiff workflow and the ...
https://arxiv.org/abs/2505.07894v1
task is to establish a mapping capable of reconstructing an HR EnvCF from a given LR EnvCF, expressed as $M_\Theta: F_{\mathrm{LR},n}\to F_{\mathrm{HR},n},\ \forall n\in\{1,2,\ldots,N\}$, (6) where $\Theta$ denotes the learnable parameters of the mapping $M_\Theta$, while $N$ indicates the number of training samples. However, (6) is an underdetermined inverse problem. Given that the c...
https://arxiv.org/abs/2505.07894v1
each time step $t$, $F_t$, conditioned on $\dot F$, is denoised and refined to $F_{t-1}$, with the conditional transition probability for each step denoted as $p(F_{t-1}\mid F_t,\dot F)$. Then, the conditional joint distribution of the inversion process is expressed as $p(F_{0:T}\mid\dot F)=p(F_T)\prod_{t=1}^{T}p(F_{t-1}\mid F_t,\dot F)$. (10) To execute the conditional inver...
https://arxiv.org/abs/2505.07894v1
$\geq\mathbb E_{q(F_{1:T}\mid F_0)}\Big[\log\frac{p(F_T)\,p_\theta(F_0\mid F_1,\dot F)}{q(F_1\mid F_0)}+\log\frac{q(F_1\mid F_0)}{q(F_T\mid F_0)}+\log\prod_{t=2}^{T}\frac{p_\theta(F_{t-1}\mid F_t,\dot F)}{q(F_{t-1}\mid F_t,F_0)}\Big]$ (14a) $=\mathbb E_{q(F_1\mid F_0)}\big[\log p_\theta(F_0\mid F_1,\dot F)\big]-\sum_{t=2}^{T}\mathbb E_{q(F_t\mid F_0)}\Big[D_{KL}\big(q(F_{t-1}\mid F_t,F_0)\,\big\|\,p_\theta(F_{t-1}\mid F_t,\dot F)\big)\Big]-D_{KL}\big(q(F_T\mid F_0)\,\big\|\,p(F_T)\big)$ (14b) Each iteration in the proposed CDiff is expressed as $F_{t-1}\leftarrow\frac{1}{\sqrt{\alpha_t}}\Big(F_t-\frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\,\varepsilon_\theta\big(\dot F,F_t,\ldots$
https://arxiv.org/abs/2505.07894v1
employed to evaluate performance. Table II presents a quantitative analysis of the proposed CDiff and baselines on the ×4 EnvCF reconstruction task. It can be observed that the performance of the Kriging algorithm is relatively suboptimal. Notably, compared to the baselines, the proposed approach achieves competitive re...
https://arxiv.org/abs/2505.07894v1
6, pp. 4001–4015, Jun. 2021. [9] S. Bakirtzis, J. Chen, K. Qiu, J. Zhang, and I. Wassell, “EM DeepRay: An expedient, generalizable, and realistic data-driven indoor propagation model,” IEEE Trans. Antennas Propag. , vol. 70, no. 6, pp. 4140–4154, Jun. 2022. [10] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Omm...
https://arxiv.org/abs/2505.07894v1
arXiv:2505.07958v1 [math.PR] 12 May 2025Laws of Large Numbers for Information Resolution Daniel Raban∗ Department of Statistics, University of California, Berkeley May 14, 2025 Abstract Laws of large numbers establish asymptotic guarantees for recovering features of a probability distribution using independent samples....
https://arxiv.org/abs/2505.07958v1
would be able to determine events such as $\{X_2<Z_i\le X_3\}$. [Figure 1: Comparing a new sample $Z_i$ to the previous samples $X_1,X_2,X_3$.] From this perspective, the σ-field representing the resolution given by our first three samples is the σ-field generated by the partition, $\mathcal F_3:=\sigma\big((-\infty,-4],(-4,1],(1,5],(5,\infty)\big)$. Al...
https://arxiv.org/abs/2505.07958v1
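The "resolution" idea above — samples cut the line into intervals, and the best $\mathcal F_n$-measurable approximation of a function is its conditional average on each interval — can be sketched numerically. The uniform base measure, the target function, and the sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# E[f | F_n]: average f over each cell of the partition cut by the samples.
def conditional_expectation(f, samples, grid):
    cuts = np.sort(samples)
    idx = np.searchsorted(cuts, grid)        # which partition cell each point is in
    out = np.empty_like(grid)
    for k in np.unique(idx):
        out[idx == k] = f(grid[idx == k]).mean()
    return out

f = lambda x: np.sin(x)
grid = rng.uniform(0, 1, 10000)              # stand-in for integration w.r.t. mu
err = []
for n in (5, 50, 500):
    X = rng.uniform(0, 1, n)
    err.append(np.abs(conditional_expectation(f, X, grid) - f(grid)).mean())
```

As $n$ grows the partition refines and the $L^1$ error shrinks, which is the qualitative content of the convergence results that follow.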
•Set-theoretic convergence: This means $\limsup_{n\to\infty}\mathcal F_n=\liminf_{n\to\infty}\mathcal F_n=\mathcal F$, where $\limsup_{n\to\infty}\mathcal F_n:=\bigcap_{n=1}^{\infty}\bigvee_{k=n}^{\infty}\mathcal F_k$ and $\liminf_{n\to\infty}\mathcal F_n:=\bigvee_{n=1}^{\infty}\bigcap_{k=n}^{\infty}\mathcal F_k$. •Strong convergence: This means $\mathbb E[\mathbf 1_A\mid\mathcal F_n]\to\mathbb E[\mathbf 1_A\mid\mathcal F]$ in probability for all measurable $A$. In general, monotone and Hausdorff convergence, which are not equivalent, are the strongest....
https://arxiv.org/abs/2505.07958v1
altering generating sets by null sets does not cause any issues when generating σ-fields. Proposition 2.1. Let $(S,\mathcal F,\mu)$ be a measure space, and let $\mathcal A,\mathcal B\subseteq\mathcal F$ differ only by µ-null sets. Then $\sigma(\mathcal A)$ and $\sigma(\mathcal B)$ differ only by µ-null sets. Proof. Let $F:=\{A\in\sigma(\mathcal A):\exists B\in\sigma(\mathcal B)\text{ s.t. }\mu(A\triangle B)=0\}$ be the members of $\sigma(\mathcal A)$ which are represented in ...
https://arxiv.org/abs/2505.07958v1
The idea is to reduce the problem to showing that our empirical σ-fields can approximate any box. Then we use the Glivenko–Cantelli theorem to approximate any box from the inside; see Figure 2 for a picture. Step 1 (Reduce to recovering generating boxes): Since $\mathcal B$ is generated by the countable collection $\{A_q:q\in\mathbb Q^d\}$, Prop...
https://arxiv.org/abs/2505.07958v1
However, the following simple example shows that balls of a fixed radius may not always suffice. Example 2.3. Consider the metric space $[0,1]$ with the Euclidean metric and the measure $\mu(\{1/k\})=2^{-k}$ for $k=1,2,\ldots$. If we set $A_x=B_r(x)$ for any $r>0$, then there are some points we cannot distinguish. However, we can s...
https://arxiv.org/abs/2505.07958v1
unit ball in $L^\infty(\mu)$ is too strong of a condition for our purposes, as the following example shows. Example 3.1. Consider the probability space $([0,1],\mathcal B,\lambda)$, where $\mathcal B$ is the Borel σ-field and $\lambda$ is Lebesgue measure. Given any realization $\mathcal F_n:=\sigma([0,x_1],\ldots,[0,x_n])$ of an empirical σ-field, we adversarially construct a f...
https://arxiv.org/abs/2505.07958v1
constants, the results in Theorem 3.1 and Theorem 3.2 apply to boxes in $\mathbb R^d$ which are not $[0,1]^d$. We incur only an extra multiplicative factor of the volume of the box in our bound. Similarly, if we allow $f$ to be $L$-Lipschitz, we incur only a factor of $L$. Remark 3.2. The bound in Theorem 3.2 is tight. For a lower bound, co...
https://arxiv.org/abs/2505.07958v1
5 for an illustration. Repeating this process, we end up with a sequence of stopping times $(T_n)_{n=1}^{\infty}$ for Brownian motion such that $B_{T_n}$ equals, with equal probability, any of the $2^n$ level-$n$ barrier points. In fact, a more careful analysis of this process shows that $B_{T_n}\overset{d}{=}\mathbb E[X\mid\mathcal B_n]$, where $\mathcal B_n$ is the σ-field representing the pa...
https://arxiv.org/abs/2505.07958v1
necessary for $X_1,X_2,\ldots$ to be sampled from the same measure as $X$. Theorem 4.1 still holds if we sample $X_1,X_2,\ldots\overset{\mathrm{iid}}{\sim}\nu$, provided that $\operatorname{supp}\nu\supseteq\operatorname{supp}\mu$. This has the interesting consequence that there exist universal generating measures for randomized Skorokhod embeddings. For example, if $\nu$ is the standard normal...
https://arxiv.org/abs/2505.07958v1
be chosen adversarially against our regression tree estimator. For simplicity, we will treat the case of the partitions from Theorem 3.1 and Theorem 3.2, but the same analysis could be carried out with other choices of randomized sets $A_y$. Theorem 4.2 (Random splitting regression tree loss). Let $(X_i,Y_i)_{i=1}^{N}$ be drawn iid...
https://arxiv.org/abs/2505.07958v1
and Martin Huesmann. Optimal transport and Skorokhod embedding. Inventiones Mathematicae, 208:327–400, 2017. [Boy71] Edward S. Boylan. Equiconvergence of martingales. The Annals of Mathematical Statistics, 42(2):552–559, 1971. [Bre01] Leo Breiman. Random forests. Machine Learning, 45:5–32, 2001. [Dub68] Lester E. Du...
https://arxiv.org/abs/2505.07958v1
a slightly modified version of the approach taken for the proof of Lemma 40 in [MBNWW21], which essentially uses a covering argument phrased in terms of Vapnik–Chervonenkis dimension. Proof of Theorem 3.1. We first prove the $L^1(\mathbb P)$-convergence rate bound. Fix $0<\delta<1$, and consider a mesh dividing $[0,1]^d$ into cubes $C$ of...
https://arxiv.org/abs/2505.07958v1
$\int_{[0,1]^d}\Big|\sum_{A\in\mathcal P_n}\mathbb E[f\mid A]\,\mathbf 1_A(x)-f(x)\Big|\,d\mu(x)\le\sup_{\|f\|_{\mathrm{Lip}}\le1}\sum_{A\in\mathcal P_n}\int_A\big|\mathbb E[f\mid A]-f(x)\big|\,d\mu(x)\le\sup_{\|f\|_{\mathrm{Lip}}\le1}\sum_{A\in\mathcal P_n}\frac{1}{\mu(A)}\int_A\int_A|f(y)-f(x)|\,d\mu(y)\,d\mu(x)\le\gamma^3\sum_{A\in\mathcal P_n}\frac{1}{\lambda(A)}\int_A\int_A\|y-x\|_2\,dy\,dx\le\gamma^3\sum_{A\in\mathcal P_n}\frac{1}{\lambda(A)}\int_A\int_A\|y-u_A\|_2+\|u_A-x\|_2\,dy\,dx$, where $u_A$ is the upper corner of the set $A$: $u^A_i=\min\{X_{j,i}:X_{j,i}\ge x_i\ \forall x\in A\}$ for $1\le i\le d$ (and $u^A_i=1$ if no such points e...
https://arxiv.org/abs/2505.07958v1
arXiv:2505.08045v1 [math.ST] 12 May 2025Measures of association for approximating copulas Marcus Rockel May 23, 2025 Department of Quantitative Finance, Institute for Economics, University of Freiburg, Rempartstr. 16, 79098 Freiburg, Germany, marcus.rockel@finance.uni-freiburg.de Abstract This paper studies closed-form...
https://arxiv.org/abs/2505.08045v1
conclude with an empirical comparison between our estimator and the classical one of Azadkia and Chatterjee in [5]. 2 Preliminaries In this section, we introduce the basic concepts and notation required to formulate the main results of this paper. First, we introduce the fundamental concept of a copula,...
https://arxiv.org/abs/2505.08045v1
Fix an integer $n\ge1$ and partition the unit interval into equal sub-intervals $I_k=[\frac{k-1}{n},\frac{k}{n}]$ for $k=1,\ldots,n$. Denote by $S_n$ the set of all permutations of $\{1,\ldots,n\}$ and let $\pi\in S_n$ be a permutation. The straight shuffle-of-min copula supported by $\pi$, denoted $C_\pi$, redistributes the probability mass of the comonotonic copula $M(u,v)$ ...
https://arxiv.org/abs/2505.08045v1
perfect negative dependence within squares, i.e. $\Delta$ is associated with $C^\Delta_\searrow$, and if $(X,Y)\sim C^\Delta_\searrow$ for some random vector $(X,Y)$, then for all $1\le i\le m$, $1\le j\le n$ it holds that, conditionally on $(X,Y)\in I_{i,j}$, $X=\frac{i-1+j-nY}{m}$, (12) almost surely. In particular, in analogy to (10), one can write (12) equivalently as $\mathbb P[X\le x,Y\le y\mid(X,Y)\in$...
https://arxiv.org/abs/2505.08045v1
tail dependence coefficient by $\lambda_U(C)=2-\lim_{t\to1^-}\frac{1-C(t,t)}{1-t}$. (17) 3 Explicit measures of association for approximating copulas In this section, we formulate the explicit expressions for Spearman’s rho, Kendall’s tau, Chatterjee’s xi and the tail dependence coefficients for $m\times n$-checkerboard matrices associated with ...
https://arxiv.org/abs/2505.08045v1
checkerboard-type copulas To give concise expressions, we make use of the following matrices: First, let $\Delta=(\Delta_{i,j})_{1\le i\le m,\,1\le j\le n}$ be an $m\times n$-checkerboard matrix and denote by $\Delta^\top$ its transpose. Next, define the $m\times n$-matrix $\Omega$ by $\Omega_{i,j}:=\frac{(2m-2i-1)(2n-2j-1)}{mn}$ for $1\le i\le m$, $1\le j\le n$. Also, let $\Xi^{(m)}$ be the $m\times m$-matrix with entries $\Xi^{(m)}_{i,j}=$...
https://arxiv.org/abs/2505.08045v1
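The matrix $\Omega$ defined above is straightforward to materialize; a minimal sketch (the $3\times4$ size is just an example):

```python
import numpy as np

# Omega_{i,j} = (2m - 2i - 1)(2n - 2j - 1) / (m n) for 1 <= i <= m, 1 <= j <= n
# (1-based indices, as in the text).
def omega(m, n):
    i = np.arange(1, m + 1)[:, None]
    j = np.arange(1, n + 1)[None, :]
    return (2 * m - 2 * i - 1) * (2 * n - 2 * j - 1) / (m * n)

O = omega(3, 4)   # e.g. O[0, 0] = (6-2-1)(8-2-1)/12 = 15/12 = 1.25
```

Such precomputed weight matrices are what make the closed-form trace expressions for the association measures cheap to evaluate.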
with $C$, it is generally not true that $\xi(C)\le\xi(C^\Delta_\nearrow)$. A simple counterexample is given by the check-min copula $C=C^\Delta_\nearrow$ associated with the checkerboard matrix $\Delta=\frac{1}{4}\begin{pmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{pmatrix}$, which exhibits perfect dependence and hence satisfies $\xi(C)=1$, but this is not the case when transitioning to the as...
https://arxiv.org/abs/2505.08045v1
using also that $\operatorname{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top\Delta_{\lfloor n^\kappa\rfloor}\big)=\sum_{i,j=1}^{\lfloor n^\kappa\rfloor}\Delta_{i,j}^2\le\frac{1}{\lfloor n^\kappa\rfloor}\to0$ as $n\to\infty$. Next to $\xi^\kappa_n$, also consider the variants of the estimator tailored for $\xi(C^{\Delta_{\lfloor n^\kappa\rfloor}}_\Pi)$ and $\xi(C^{\Delta_{\lfloor n^\kappa\rfloor}}_\nearrow)$. These variants are given by $\xi^\kappa_n(X^{(n)},Y^{(n)})=6\operatorname{tr}\big(\Delta_{\lfloor n^\kappa\rfloor}^\top\Delta_{\lfloor n^\kappa\rfloor}M_\xi\big)+\operatorname{tr}\big($...
https://arxiv.org/abs/2505.08045v1
the $m=n$ case. The rectangular case is again analogous. The rest of the proof is dedicated to deriving the formula for Chatterjee’s xi. Recall that $\xi(C)$ can be written as $\xi(C)=6\int_{[0,1]^2}\big(\partial_1C(u,v)\big)^2\,d\lambda^2(u,v)-2$. Hence, we need to evaluate the integral for $C_{DB}$. Step 1: Derivative $\partial_1B_{i,m}(u)$. Write $B_{i,m}($...
https://arxiv.org/abs/2505.08045v1
$\cdots\Big]_0^1=\frac{m^2}{2m-1}$. Putting these four sub-cases (a)–(d) together provides the complete piecewise formula for $\Omega_{i,r}$ that is specified in Section 3.1. This completes the proof. Lemma A.1 (Permutation Sum Identities). Let $\pi\in S_n$ be a permutation of $\{1,2,\ldots,n\}$ and let $d_i=\pi(i)-i$. Then the following identities hold...
https://arxiv.org/abs/2505.08045v1
where $\lambda^2$ denotes the Lebesgue measure on $[0,1]^2$, and recall also from (8) that the copula $C_\Pi$ is given by $C_\Pi(u,v)=\sum_{k,l=1}^{i-1,j-1}\Delta_{k,l}+\sum_{k=1}^{i-1}\Delta_{k,j}(nv-j+1)+\sum_{l=1}^{j-1}\Delta_{i,l}(mu-i+1)+\Delta_{i,j}(mu-i+1)(nv-j+1)$ for $(u,v)\in I_{i,j}$. Hence, with a simple substitution, it is $\int$...
https://arxiv.org/abs/2505.08045v1
/parenleftiggj−1/summationdisplay l=1∆i,l/parenrightigg2 +/parenleftiggj−1/summationdisplay l=1∆i,l/parenrightigg ∆i,j+1 3∆2 i,j −2 =6m nm,n/summationdisplay i,j=1/parenleftig (T∆)2 i,j+ (T∆)i,j∆i,j+1 3∆2 i,j/parenrightig −2 =6m ntr/parenleftbig ∆⊤∆/parenleftbig TT⊤+T⊤+1 3In/parenrightbig/parenrightbig −2 =6m ...
https://arxiv.org/abs/2505.08045v1
and new families. Available at SSRN 1032547, 2000. [12] Sebastian Fuchs and Marco Tschimpke. Total positivity of copulas from a Markov kernel perspective. Journal of Mathematical Analysis and Applications, 518(1):126629, 2023. [13] Harry Joe. Multivariate Models and Dependence Concepts, volume 73 of Monogr. Stat. Ap...
https://arxiv.org/abs/2505.08045v1
Beyond Basic A/B Testing: Improving Statistical Efficiency for Business Growth. Changshuai Wei∗ (LinkedIn Corporation, chawei@linkedin.com), Phuc Nguyen (LinkedIn Corporation, honnguyen@linkedin.com), Benjamin Zelditch (LinkedIn Corporation, bzelditch@linkedin.com), Joyce Chen (LinkedIn Corporation, joychen@linkedin.com). Abstract The st...
https://arxiv.org/abs/2505.08128v1
ROI consideration, (ii) GEE for addressing small sample size with repeated measurement, and (iii) Mann-Whitney U for non-Gaussian data, in particular, the zero-trimmed U test for zero-inflated heavy-tailed data. 2) Theoretical development on (i) systematic analysis of asymptotic efficiency for the proposed approaches, and m...
https://arxiv.org/abs/2505.08128v1
$z=0$ vs $z=1$ on business metric $y$. Our goal is to evaluate the "improvement" of $y$ from the treatment over the control group (directional test). T Test: One common formulation of the "improvement" is $\delta=\mathbb E(y_{i1}-y_{i0})$, and we can use the t-test for the corresponding null vs alternative hypotheses: $H_0:\delta=0$ vs $H_1:\delta>0$. The corresponding t-st...
https://arxiv.org/abs/2505.08128v1
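The one-sided test of $H_0:\delta=0$ vs $H_1:\delta>0$ above maps directly onto SciPy; the simulated metric, effect size, and sample size below are illustrative, not from the paper:

```python
import numpy as np
from scipy import stats

# One-sided two-sample t-test for delta = E(y_treatment) - E(y_control) > 0.
rng = np.random.default_rng(0)
y0 = rng.normal(0.0, 1.0, 2000)   # control-arm metric (simulated)
y1 = rng.normal(0.3, 1.0, 2000)   # treatment-arm metric with a true lift

t_stat, p_value = stats.ttest_ind(y1, y0, alternative="greater")
```

With a genuine positive shift and this sample size, the one-sided p-value is far below conventional thresholds, so $H_0$ is rejected.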
proportion tests are biased. (ii) When there is no confounding, regression adjustment has smaller variance and thus is more efficient. In fact, under parametric settings, regression adjustment based on maximum likelihood estimation reaches the Cramér–Rao lower bound [37] and hence is most efficient among all unbiased estimators...
https://arxiv.org/abs/2505.08128v1
in the following theorem, with a sketch of the proof provided in Appendix C.1. Theorem 1. Let $\Sigma=\operatorname{Var}(u_i)$. Then, under mild regularity conditions, we have consistency, $\hat\theta\to_p\theta$, and asymptotic normality, $\sqrt n(\hat\theta-\theta)\to_d N(0,B^{-\top}\Sigma B^{-1})$. Here, the variance can be estimated via $\hat\Sigma=\frac1n\sum_i\hat u_i\hat u_i^\top$ and $\hat B=\frac1n\sum_i D_i^\top V_i^{-1}D_i$. Since GEE uses all the data,...
https://arxiv.org/abs/2505.08128v1
U’s efficiency is very close to the t-test. 4.2 Zero-Trimmed U Test The challenge of non-Gaussian distributions in business scenarios is often twofold: heavy tails and zero inflation. We can exploit the zero-inflation characteristic to further improve efficiency. The idea is to trim off the excessive z...
https://arxiv.org/abs/2505.08128v1
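The zero-trimming idea can be sketched as follows. This is a simplification under my own assumptions (trim an equal *proportion* of zeros from both arms, then run Mann-Whitney U on what remains), not the paper's exact procedure or its adjusted variance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative zero-trimmed U: remove the common zero share from both arms,
# then apply a one-sided Mann-Whitney U test to the trimmed samples.
def zero_trimmed_u(y0, y1):
    q = min(np.mean(y0 == 0), np.mean(y1 == 0))   # common zero proportion to trim
    def trim(y):
        zeros = np.flatnonzero(y == 0)
        keep = np.ones(len(y), dtype=bool)
        keep[zeros[: int(round(q * len(y)))]] = False
        return y[keep]
    return stats.mannwhitneyu(trim(y1), trim(y0), alternative="greater")

# Zero-inflated, heavy-tailed toy data (50% zeros, lognormal positive part).
y0 = np.where(rng.random(400) < 0.5, 0.0, rng.lognormal(0.0, 1.0, 400))
y1 = np.where(rng.random(400) < 0.5, 0.0, rng.lognormal(0.3, 1.0, 400))
res = zero_trimmed_u(y0, y1)
```

Trimming the shared mass of zeros concentrates the rank comparison on the informative, non-zero part of the distributions, which is where the efficiency gain comes from.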
function like the logistic function, $\varphi(y_{i1}-y_{i0})=[1+\exp(-(y_{i1}-y_{i0}))]^{-1}$, the Probit function, $\varphi(y_{i1}-y_{i0})=\Phi(y_{i1}-y_{i0})$, or the signed Laplacian kernel, $\varphi(y_{i1}-y_{i0})=\operatorname{sign}(y_{i1}-y_{i0})\exp\big(-\frac{|y_{i1}-y_{i0}|}{\sigma}\big)$. Note that when $\varphi(\cdot)$ is the identity, we get $\delta=\mathbb E(y_{i1}-y_{i0})$, which is the treatment effect corresponding to the t-test. However, it doesn’t guarantee finite...
https://arxiv.org/abs/2505.08128v1
estimate $\pi$ and $g$ by $\mathbb E(z_i\mid w_i)=\pi(w_i;\beta)=\phi([1,w_i^\top]^\top\cdot\beta)$, $\mathbb E(\varphi(y_{it1}-y_{jt0})\mid w_{it},w_{jt})=g(w_{it},w_{jt};\gamma_t)=\psi([1,w_{it}^\top,w_{jt}^\top]^\top\cdot\gamma_t)$, where $w=[w_1^\top,\cdots,w_t^\top,\cdots,w_T^\top]^\top$. We can estimate the parameters and make inference jointly for $\theta=[\delta^\top,\beta^\top,\gamma^\top]^\top$ using UGEE: $U_n(\theta)=\sum_{i,j\in C^n_2}U_{n,ij}=\sum_{i,j\in C^n_2}G_{ij}(h_{ij}-f_{ij})=\mathbf 0$, (7) where $h_{ij}=[h^\top_{ij1},$...
https://arxiv.org/abs/2505.08128v1
, so choosing $k=O(n^{1+\epsilon})$ makes the Monte Carlo error asymptotically negligible. 6 Experiments and Results 6.1 Simulation Studies We perform comprehensive simulation studies to evaluate the performance of the proposed methods. Due to space limitations, we summarize and highlight the results here. Regression Adjustment: We s...
https://arxiv.org/abs/2505.08128v1
0.249). (Appendix G.1) Targeting in Feed: We conducted a user-level A/B test to evaluate the impact of a new algorithm for marketing on a particular slot in Feed. We faced two challenges: (i) selection bias in ad impression allocation that favored the control system, so we needed to adjust for impressions as a cost and comp...
https://arxiv.org/abs/2505.08128v1
of performing ROI trade-offs is needed, and covariate adjustment can measure revenue net of cost. Moreover, when revenue- or value-based primary metrics are used, they are almost always associated with zero inflation and heavy-tail distributions. In this situation, we can use Zero-Trimmed U. In fact, we argue that thes...
https://arxiv.org/abs/2505.08128v1
higher power (Figure 3). When the direction focuses on the $m$ component (the zero-proportion difference), Mann–Whitney U with adjusted variance is more efficient, though still close to one (Figure 3). In fact, if $\sin\phi=1$ (purely on $d$), the Zero-Trimmed U always has higher power (Figure 2); if $\sin\phi=0$ (purely on $m$), Mann–Whitn...
https://arxiv.org/abs/2505.08128v1
U has the smallest variance (most powerful) among all regular estimators of the corresponding treatment effect. We further prove that even when $\pi$ and $g$ are unknown, as long as they are correctly specified, the doubly robust generalized U from our UGEE still attains the semi-parametric efficiency bound. This result is ...
https://arxiv.org/abs/2505.08128v1
229–235. [6]R Chen, T Chen, N Lu, Hui Zhang, P Wu, C Feng, and XM Tu. 2014. Extending the Mann– Whitney–Wilcoxon rank sum test to longitudinal regression analysis. Journal of Applied Statistics 41, 12 (2014), 2658–2675. [7]Ruohui Chen, Tuo Lin, Lin Liu, Jinyuan Liu, Ruifeng Chen, Jingjing Zou, Chenyu Liu, Loki Nataraja...
https://arxiv.org/abs/2505.08128v1
Tu. 2008. Modern applied U-statistics. John Wiley & Sons. [24] Nicholas Larsen, Jonathan Stallrich, Srijan Sengupta, Alex Deng, Ron Kohavi, and Nathaniel T. Stevens. 2024. Statistical challenges in online controlled experiments: A review of A/B testing methodology. The American Statistician 78, 2 (2024), 135–149. [25] ...
https://arxiv.org/abs/2505.08128v1
XM Tu. 2014. Causal inference for Mann–Whitney–Wilcoxon rank sum and other nonparametric statistics. Statistics in medicine 33, 8 (2014), 1261–1271. [44] Huizhi Xie and Juliette Aurisset. 2016. Improving the Sensitivity of Online Controlled Ex- periments: Case Studies at Netflix. In Proceedings of the 22nd ACM SIGKDD I...
https://arxiv.org/abs/2505.08128v1
When "generalizing" for regular regression and GEE, we can simply estimate the variance on the full sample; there is no need for Monte Carlo integration. •The working correlation matrix $R(\alpha)$ can be estimated in an outer loop around the $\theta$-updates, e.g., by alternating between updating $\theta$ using Fisher scoring and re-estimating $\alpha$ based...
https://arxiv.org/abs/2505.08128v1
at significantly lower computational cost. We state the above results in Theorem 6. B Efficiency of Regression Adjustment In this section, we illustrate the efficiency of regression adjustment over the t-test under a parametric set-up. We first show that regression adjustment is most efficient, attaining the Cramér–Rao lower bound ...
https://arxiv.org/abs/2505.08128v1
$\ldots=\frac{1}{np(1-p)}\big(\sigma_w^2+\sigma^2\big)+o_p(n^{-1})$, (10) where $\sigma_w^2=\gamma^\top\operatorname{Var}(w)\gamma$ represents the variance of $y$ explained by $w$. Combining equation (9) and equation (10), we have $r(\hat\beta,\hat\tau)=1+\frac{\sigma_w^2}{\sigma^2}$. (11) C Asymptotics and Efficiency of GEE C.1 Asymptotic Normality of GEE In this section, we show the asymptotic normality of $\hat\theta$ for the GEE, which w...
https://arxiv.org/abs/2505.08128v1
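The efficiency ratio $r=1+\sigma_w^2/\sigma^2$ can be checked by Monte Carlo: under a linear model with an explanatory covariate $w$, the variance ratio between the unadjusted difference-in-means and the covariate-adjusted estimator should land near $1+\gamma^2/\sigma^2$. The data-generating process below is my own toy setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Compare Var(difference-in-means) with Var(OLS-adjusted treatment coefficient).
def one_run(n=400, gamma=1.0, sigma=1.0):
    z = rng.integers(0, 2, n)            # randomized assignment
    w = rng.standard_normal(n)           # pre-treatment covariate
    y = 0.2 * z + gamma * w + sigma * rng.standard_normal(n)
    diff = y[z == 1].mean() - y[z == 0].mean()
    X = np.column_stack([np.ones(n), z, w])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return diff, beta[1]                 # unadjusted vs adjusted estimate

est = np.array([one_run() for _ in range(2000)])
r = est[:, 0].var() / est[:, 1].var()    # expected near 1 + gamma^2/sigma^2 = 2
```

With $\gamma=\sigma=1$ the theoretical ratio is 2, and the simulated ratio concentrates around that value.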
$\bar\rho=\frac{1}{T(T-1)}\sum_{i\ne j}R_{ij}$; then from equation (19), we know $r(\hat\theta_{\mathrm{gee}},\hat\theta_{\mathrm{reg}})\ge\frac{T^2}{v^\top Rv}=\frac{T}{1+(T-1)\bar\rho}$. D Asymptotics and Efficiency of U test D.1 Pitman Efficiency of U test over t test We derive the Pitman efficiency under a local alternative of small shift $\delta$ of a distribution $F$ with variance $\sigma^2$. Recall that from the definition ...
https://arxiv.org/abs/2505.08128v1
$\frac{n_0^++n_1^+}{2}$. And by definition, $W'=S'-\frac{n_1'(n_0'+n_1'+1)}{2}$, $W=S-\frac{n_1(n_0+n_1+1)}{2}$. Now, define $w_1=s_1-\frac{n_1^+(n_0^++n_1^++1)}{2}$; we have $W'=s_1+(n_1'-n_1^+)\frac{n_0'+n_1'+1+n_0^++n_1^+}{2}-\frac{n_1'(n_0'+n_1'+1)}{2}=s_1-\frac{n_1^+(n_0^++n_1^++1)}{2}+\frac{n_1'n_0^+-n_1^+n_0'}{2}=w_1+\frac{n_1'n_0^+-n_1^+n_0'}{2}$. Similarly, $W=w_1+\frac{n_1n_0^+-n_1^+n_0}{2}$. If $d=0$, i.e., $\mathbb P(y_1^+\ge y_0^+)$...
https://arxiv.org/abs/2505.08128v1
calculated the unadjusted variance from the original approach, i.e., $\operatorname{Var}(W_o)=\frac{n_1n_2(n_1+n_2+1)}{12}$; then we have the Pitman efficiency for the Zero-Trimmed U over the unadjusted $W$ as: $r_\phi(W',W_o)=\frac{1}{3\big(p_3-p_4+\frac{p_3}{3}\big)}\left(\frac{p\cos\phi+2p^2\sin\phi}{\cos\phi+2p^2\sin\phi}\right)^2$, (35) observing that $W=W_o$ for the point estimate. [Figure 4: Plot of $r_\phi(W',W_o)$ versus p for mu...]
https://arxiv.org/abs/2505.08128v1
U-statistics, we know $\sqrt n(U_F-\delta)=\frac{2}{\sqrt n}\sum_i\tilde h(y_i)+o_p(1)$, where $\tilde h(y_i)=\mathbb E(h^F_{ij}\mid O^F_i)-\delta$. Now observe, $\mathbb E(h^F_{ij}\mid O^F_i)=0.5\,\mathbb E(\varphi(y_i(1)-y_j(0))\mid O^F_i)+0.5\,\mathbb E(\varphi(y_j(1)-y_i(0))\mid O^F_i)=0.5\int\varphi(y_i(1)-s)p_0(s)\,ds+0.5\int\varphi(t-y_i(0))p_1(t)\,dt=0.5\,h_1(y_i(1))+0.5\,h_0(y_i(0))$, where $h_1(y)=\int\varphi(y-s)p_0(s)\,ds$, $h_0(y)=\int\varphi(t-y)p_1(t)\,dt$, and $p_1(\cdot),p_0(\cdot)$ are mar...
https://arxiv.org/abs/2505.08128v1
$h_{ij2},h_{ij3}]^\top$, $f_{ij}=[f_{ij1},f_{ij2},f_{ij3}]^\top$, $h_{ij1}=\frac{z_i(1-z_j)}{2\pi_i(1-\pi_j)}\big(\varphi(y_{i1}-y_{j0})-g_{ij}\big)+\frac{z_j(1-z_i)}{2\pi_j(1-\pi_i)}\big(\varphi(y_{j1}-y_{i0})-g_{ji}\big)+\frac{g_{ij}+g_{ji}}{2}$, $h_{ij2}=z_i+z_j$, $h_{ij3}=z_i(1-z_j)\varphi(y_{i1}-y_{j0})+z_j(1-z_i)\varphi(y_{j1}-y_{i0})$, $f_{ij1}=\delta$, $f_{ij2}=\pi_i+\pi_j$, $f_{ij3}=\pi_i(1-\pi_j)g_{ij}+\pi_j(1-\pi_i)g_{ji}$, $\pi_i=\pi(w_i;\beta)$, $g_{ij}=g(w_i,w_j;\gamma)$, and $G_{ij}=D_{ij}^\top V_{ij}^{-1}$, $D_{ij}=\frac{\partial f_{ij}}{\partial\theta}$, $V_{ij}=\operatorname{diag}\big(\sigma^2_{ij1},\sigma^2_{ij2},\sigma^2_{...}$
https://arxiv.org/abs/2505.08128v1
power over the t-test ($\gamma=0$). When confounding is present, the raw unadjusted t-test is not valid, as it cannot control the Type I error. F.2 GEE We evaluate the Type I error and power of two estimators in the presence of confounding under varying sample sizes and effect sizes. (i) GLM adjustment at the final time ...
https://arxiv.org/abs/2505.08128v1
Comparison for Positive Cauchy and LogNormal Distributions with Equal Zero-Inflation (50%). Power at α = 0.05 (Zero-trimmed U / Standard U / t-test): Positive Cauchy, sample size 50 — effect 0.25: 0.038 / 0.040 / 0.018; 0.50: 0.050 / 0.048 / 0.022; 0.75: 0.113 / 0.085 / 0.033; 1.00: 0.131 / 0.086 / 0.041. Sample size 200 — effect 0.25: 0.079 / 0.065 / 0.011; 0.50: 0....
https://arxiv.org/abs/2505.08128v1
0.83 / 0.76 / 0.66; sample size 50: 0.38 / 0.32 / 0.29. G Details on A/B Testing G.1 Email Marketing We conducted an A/B test comparing our legacy email marketing recommender system against a newer version designed with improved campaign personalization using neural bandits. We randomly assigned audience members to receive recommendations...
https://arxiv.org/abs/2505.08128v1
Relative to a control mean normalized to 1.0, the treatment showed: Conversions +0.3%; Impressions −37.7%; Low-baseline segment +9.5%. [Figure 6: Distributions of (normalized) impressions and conversions from the targeting in feed experiment.] G.3 Paid Search Campaigns We illustrate leveraging longitudinal repeated measurements in A/B testing (via GEE) to ...
https://arxiv.org/abs/2505.08128v1
arXiv:2505.08210v1 [math.ST] 13 May 2025. Submitted to Bernoulli. On eigenvalues of a renormalized sample correlation matrix. Qianqian Jiang, Junpeng Zhu and Zeng Li, Department of Statistics and Data Science, Southern University of Science and Technology; jqq172515@gmail.com, zhujp@sustech.edu.cn, liz9@sustec...
https://arxiv.org/abs/2505.08210v1
the normalization inherent in the sample correlation matrix and the presence of a nonzero population mean, the techniques and results developed for ultrahigh-dimensional covariance matrices cannot be directly extended to the correlation matrix. To fill this gap, we consider the sample correlation matrix under a new r...
https://arxiv.org/abs/2505.08210v1
to independence tests. Section 4 presents simulations. Technical proofs are detailed in Section 5 and the Supplementary Material. 2. Main Results 2.1. Preliminaries For any measure $G$ supported on the real line, the Stieltjes transform of $G$ is defined as $s_G(z)=\int\frac{1}{x-z}\,dG(x)$, $z\in\mathbb C^+$, where $\mathbb C^+=\{z\in\mathbb C:\Im(z)>0\}$ denotes the...
https://arxiv.org/abs/2505.08210v1
et al. (2017). 2.4. CLT for LSS of $B_n$ In this section, we focus on the linear spectral statistics of $B_n$, i.e. $\frac1n\sum_{i=1}^n f(\lambda_i)$, where $f$ is an analytic function on $[0,\infty)$. Since $F^{B_n}$ converges to $F_c$ almost surely, we have $\frac1n\sum_{i=1}^n f(\lambda_i)\to\int f(x)\,dF_c(x)$. We explore the second-order fluctuation of $\frac1n\sum_{i=1}^n f(\lambda_i)$ de...
https://arxiv.org/abs/2505.08210v1
Frobenius norm of $R-I_p$ used in Schott (2005), Gao et al. (2017) and Yin, Zheng and Zou (2023); with the relationship $\operatorname{tr}(R_n-I_p)^2=\frac{p}{N}\big(\operatorname{tr}B_n^2+p\big)-p$, we consider the following test statistic constructed from the renormalized correlation matrix $B_n$: $T:=\operatorname{tr}B_n^2$. We reject $H_0$ when $T$ is too large. By taking $f(x)=x^2$ i...
https://arxiv.org/abs/2505.08210v1
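The statistic $T=\operatorname{tr}B_n^2$ can be computed from the sample correlation matrix alone via the displayed identity, $T=\frac{N}{p}\big(\operatorname{tr}(R_n-I_p)^2+p\big)-p$. In the sketch below, taking $N$ as the number of observations is my reading of the excerpt, and the dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: T = tr(B_n^2) computed through tr(R_n - I_p)^2, in an
# ultrahigh-dimensional setting (p much larger than the sample size).
p, N = 200, 50
X = rng.standard_normal((N, p))           # iid data under H0 (independence)
R = np.corrcoef(X, rowvar=False)          # p x p sample correlation matrix
M = R - np.eye(p)
T = (N / p) * (np.trace(M @ M) + p) - p   # = tr(B_n^2) by the identity above
```

Since $B_n$ is symmetric, $\operatorname{tr}B_n^2\ge0$, and under the alternative (correlated coordinates) $T$ inflates, which is why large $T$ rejects $H_0$.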
and variance of $G_n(f_i)$, $i=2,3,4$, from 5000 replications with $c_n=2,300,500,1000$. Theoretical mean and variance are 0 and 1, respectively. Gaussian data (mean, var for $G_n(f_2)$; $G_n(f_3)$; $G_n(f_4)$) — $c_n=2$: 0.0090, 1.0079; −0.0103, 0.9737; 0.0040, 0.9793. $c_n=300$: 0.0185, 0.9974; −0.0919, 0.9777; 0.0037, 0.9785. $c_n=500$: 0.0143...
https://arxiv.org/abs/2505.08210v1
$\ldots X^\top D_nX-I_n$, $D_n=\operatorname{Diag}\big(\frac{1}{s_{11}},\frac{1}{s_{22}},\ldots,\frac{1}{s_{pp}}\big)$, $s_{kk}=\frac1N e_k^\top X\Phi X^\top e_k$, $k=1,\ldots,p$. Let $\hat B_n=\hat B_n(\hat x_{ij})$, $\check B_n=\check B_n(\check x_{ij})$ and $\tilde B_n=\tilde B_n(\tilde x_{ij})$ be defined similarly to $B_n$ with $x_{ij}$ replaced by $\hat x_{ij}$, $\check x_{ij}$ and $\tilde x_{ij}$ respectively, where $\hat x_{ij}=x_{ij}I\{|x_{ij}|\le\Delta_n\sqrt[4]{np}\}$, $\check x_{ij}=\hat x_{ij}-\mathbb E\hat x_{ij}$, and...
https://arxiv.org/abs/2505.08210v1
(2012), we have $\mathbb P\big(B_n\ne\hat B_n\ \text{i.o.}\big)=0$ a.s., from which we obtain $\lambda_1(B_n)-\lambda_1(\hat B_n)\to0$ a.s. as $n\to\infty$. And note that $\hat B_n=\tilde B_n$. We have $\lambda_1(B_n)-\lambda_1(\tilde B_n)\to0$ a.s. By the above results, it is sufficient to show that $\limsup_{n\to\infty}\lambda_1(\tilde B_n)\le1$ a.s. To this end, note that $\tilde B_n$ satisfies the truncation condition of Theorem 2.6 (...
https://arxiv.org/abs/2505.08210v1
what follows, we assume that the underlying variables are truncated at $\delta_n(np)^{\frac{1}{2t+2}}$, centralized, and renormalized. For convenience, we shall suppress the superscript on the variables, and assume that, for any $1\le i\le p$ and $1\le j\le n$, $|x_{ij}|\le\delta_n(np)^{\frac{1}{2t+2}}$, $\mathbb Ex_{ij}=0$, $\mathbb E|x_{ij}|^2=1$, $\mathbb E|x_{ij}|^4=\kappa+o(1)$, $\mathbb E|x_{ij}|^{2t+2}<$...
https://arxiv.org/abs/2505.08210v1
are delegated to the supplementary file due to the page limit. 5.5. Proof of Lemma 5.3 To prove Lemma 5.3, we first introduce the following supporting lemmas. Lemma 5.6. Under the assumptions of Lemma 5.3, we have $Y_1(z_1,z_2)\triangleq-\frac{\partial^2}{\partial z_1\partial z_2}\sum_{j=1}^{p}\big[\mathbb E_{j-1}\tilde\beta_j(z_1)\varepsilon_j(z_1)\big]\big[\mathbb E_{j-1}\tilde\beta_j(z_2)\varepsilon_j(z_2)\big]=o_p(1)$, $Y_2(z_1,z_2)\triangleq\frac{\partial^2}{\ldots}$
https://arxiv.org/abs/2505.08210v1
is (17). Thus we complete the proof of Lemma 5.3. 5.6. Proof of Lemma 5.7 The proof of Lemma 5.7 differs substantially from the classical case. Unlike the high-dimensional case where $p/n\to c\in(0,\infty)$ (Gao et al., 2017), our analysis is conducted in the ultrahigh-dimensional regime with $p/n\to\infty$. In this setting, we caref...
https://arxiv.org/abs/2505.08210v1