Experiments - additional material

This section provides additional context for the experiments described in Section 3.2. Table 3 provides a summary of the best description length obtained by optimizing with a given method in a given setting.

Table 3: Description length (train+test) in bytes for teacher-student models across different training set sizes (N) and noise levels. Values show the median description length with the IQR error in parentheses, where "k" indicates kilobytes. The teacher architecture has layer sizes [2, 5, 8, 1]. The test set size was restricted to match the train set size. UP stands for "unpruned".

    N     | PMMP (MSE) | PMMP (Gauß) | DRR (MSE) | DRR (Gauß) | R-ℓ1 (MSE) | R-ℓ1 (Gauß) | UP
    Low noise (σ² = 10⁻⁵)
    30    | 182(2)     | 212(5)      | 187(3)    | 195(2)     | 180(1)     | 220(11)     | 3111(9)
    300   | 1637(24)   | 1732(28)    | 1623(21)  | 1623(17)   | 1586(18)   | 1611(20)    | 4199(14)
    2000  | 9.4(2)k    | 9.0(1)k     | 8.6(1)k   | 8.8(2)k    | 8.86(8)k   | 8.79(9)k    | 10.86(5)k
    High noise (σ² = 0.08)
    30    | 182(2)     | 197(7)      | 187(4)    | 195(2)     | 180(1)     | 209(10)     | 3105(9)
    300   | 1692(4)    | 1722(12)    | 1675(5)   | 1678(5)    | 1666(8)    | 1668(7)     | 4468(6)
    2000  | 10.65(3)k  | 10.66(3)k   | 10.57(1)k | 10.61(2)k  | 10.58(2)k  | 10.57(1)k   | 13.27(2)k

Figure 6 shows additional plots that exhibit the dependence of test loss on model size (which is in turn determined by regularization strength). As in Appendix H, the error bars in those figures, as well as in Figure 1 in the main text, were computed using an adaptive binning approach that groups data points along the x-axis into bins containing at least 10 points each and spanning at least 0.1 on the log10 scale. For each bin, the median performance metric was calculated, with error tubes indicating the interquartile range (IQR). This method provides a more robust representation of performance trends compared to simple averaging, especially with the varied outcomes that result from different hyperparameter combinations and network initializations.
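The adaptive binning rule described above can be sketched in a few lines. This is a minimal illustration under our own naming and defaults (the paper's actual code is not shown here); leftover trailing points form a final, possibly smaller, bin.

```python
import numpy as np

def adaptive_bins(x, y, min_points=10, min_span=0.1):
    """Group (x, y) points into bins along log10(x), where each bin holds at
    least `min_points` points and spans at least `min_span` decades.
    Returns per-bin (median_x, median_y, q25, q75); any trailing points that
    cannot fill a bin form a final smaller bin."""
    order = np.argsort(x)
    lx, ly = np.log10(x[order]), y[order]
    bins, start = [], 0
    for end in range(1, len(lx) + 1):
        enough = end - start >= min_points and lx[end - 1] - lx[start] >= min_span
        if enough or end == len(lx):
            ys = ly[start:end]
            q25, q50, q75 = np.percentile(ys, [25, 50, 75])
            bins.append((10 ** np.median(lx[start:end]), q50, q25, q75))
            start = end
    return bins
```

The median plus IQR per bin is what produces the "error tubes" in the figures.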
One can again observe a U-curve in the center column of the figure, and better test accuracy for smaller model sizes for the small dataset in the more accurate Gauß loss scenario, while the MSE loss for the smallest model size is somewhat noisy. For large dataset sizes (right column), the trend is inverted, as expected.

Figure 6: 'Test Loss' vs 'Model Byte Size' for different models, with panels (a) (30, 1e-5, Gauß), (b) (300, 1e-5, Gauß), (c) (2000, 1e-5, Gauß), (d) (30, 0.08, MSE), (e) (300, 0.08, MSE), (f) (2000, 0.08, MSE). Description length (in bytes) isolines are color coded. Tuples like (2000, 0.08, MSE) denote the dataset size, the noise level (σ²) and whether the full Gauß loss (5) or the approximated MSE loss was used.

Figure 7 shows the dependence of description length on α. The black line indicates the verbatim coding of the data. One can see that the right amount of regularization depends on the dataset size and is always capable of reducing the description length of the data. As we progress from smaller to larger datasets, we again witness a U-shaped curve pattern emerging (which starts with a left half-U, morphs into a full U and is expected to eventually reverse into a right half-U), interestingly somewhat shifted toward larger dataset sizes compared to the pattern we observe in Figures
https://arxiv.org/abs/2505.17469v1
1 and 6.

Figure 7: 'Description Length' vs 'α' for different models, with panels (a) (30, 0.08, MSE), (b) (300, 0.08, MSE), (c) (2000, 0.08, MSE). Description length (in bytes) isolines are color coded. Each point represents the median result from bootstrap sampling across multiple runs for a given regularization strength (α), with error bars indicating the interquartile range (IQR) of the bootstrap distribution.

J Transformer studies

J.1 Compression of larger datasets

This subsection provides additional context for the experiments described in Section 3.3. In Figure 8, we show results from the experiments in which we compressed the full 9.31GB Wiki-40b dataset [22] with regularized transformers. The description length isolines in this figure are almost horizontal. This means that the dataset size is so large that the only effective way to decrease the description length is to decrease the loss term −log2(µ(x)) in Lµ in eq. (2), even if this means that ℓ(µ) has to be increased. This means that, in this regime, minimization of Lµ and unregularized optimization are both going to result in a model of full size. The model is so small in comparison to the dataset size that overfitting cannot occur, even at full model size. Thus, all parameters of the model are useful for lowering the loss and description length.
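The trade-off in eq. (2) can be illustrated with a toy two-part description length computation. The helper name and the numbers below are purely illustrative, not taken from the paper:

```python
def description_length_bits(nll_bits_per_token, n_tokens, model_bytes):
    """Two-part code: data cost -log2(mu(x)) summed over tokens,
    plus the model cost l(mu) in bits."""
    return nll_bits_per_token * n_tokens + 8 * model_bytes

# Toy regime comparison: for a huge dataset, a lower per-token loss wins even
# if the model grows; for a tiny dataset, the model term dominates instead.
big_data_prefers_big_model = (
    description_length_bits(3.0, 10**10, 10**8)
    < description_length_bits(3.2, 10**10, 10**6)
)
small_data_prefers_small_model = (
    description_length_bits(3.0, 10**4, 10**8)
    > description_length_bits(3.2, 10**4, 10**6)
)
```

This is the regime the almost-horizontal isolines in Figure 8 reflect: at 9.31GB of data, the data term dwarfs any realistic model term.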
Figure 8: Mean loss vs model size with description length (in GB) isolines for the transformer described in Section 3.3, applied to the full 9.31GB Wiki-40b dataset [22].
The solid lines correspond to the three different regularization methods PMMP, DRR and R-ℓ1 described in Section 2.2. The crosses correspond to transformers of the indicated model size trained without regularization. The shape of the isolines in such a plot therefore also gives useful information about whether overfitting can occur or not.

J.2 LLaMA Description Length Analysis

This appendix provides the methodology and calculations for the LLaMA description length analysis, comparing fixed-model and online learning approaches.

Setup and Methodology We analyze the 65B parameter LLaMA model from [66], trained on approximately 1400B tokens, comparing two approaches to model description length:

1. Fixed-model approach: Train the model to convergence, then encode data using the final model
2. Online learning approach: Encode data sequentially as training progresses [55, 4]

For our analysis, we extracted approximate loss values from the published learning curves. For the initial loss at one token, we used log(32000) ≈ 10.4, corresponding to the cross-entropy loss with LLaMA's 32k token vocabulary (assuming a uniform distribution). The loss curve was best approximated by the power-law function:

y = 2.7553 · (x − 1.0)^(−0.0793)    (42)

Figure 9: Loss curves for LLaMA models. Left: Original training curve from [66] showing cross-entropy loss vs. tokens processed, with manually penciled-in loss values (in blue, running from the initial loss log(32000) down through 1.85, 1.78, 1.72, 1.69, 1.66, 1.62, 1.58 and 1.54) extracted for analysis. Right: Power-law fit to these extracted 65B model data points.

Description Length Calculations

Online Learning Approach The total description length under the online
approach is given by the integral under the loss curve:

∫₀^1400 2.7553 · (x − 1.0)^(−0.0793) dx = 6349.41    (43)

Converting from base e to storage size (in bytes), this yields approximately 550 GB for the online approach.

Fixed-Model Approach For the fixed-model approach, we use the final loss value (approximately 1.54 at 1400B tokens):

• Data encoding: 1.54 · 1400 · 10⁹ = 2156 · 10⁹ in base e (converted to 187 GB in base 2).
• Model size: 131 GB (65B float16 parameters)
• Total: 318 GB

Results and Discussion Comparing the two approaches:

• Online learning approach: 550 GB
• Fixed-model approach: 318 GB (187 GB data + 131 GB model)

The fixed-model approach yields a description length approximately 42% smaller than the online learning approach. This suggests that for current state-of-the-art models, parameter count does impact the total description length meaningfully, supporting the potential value of parameter-reducing techniques.

K Societal impact of work

The work presented in this article is of a foundational nature and therefore does not lead to any direct positive or negative societal impacts. It facilitates the compression of data and potentially enables training models with higher sample-efficiency. This can be used in all applications where neural models are optimized and sample-efficiency is a concern. However, it is not the first work of this nature, and regularization techniques are generally common in deep learning.

L Computational resources

In this section, we briefly provide an overview of the computational resources needed and used for the simulations described in Section 3. For a single run of a classifier, as described in Section 3.1, a conventional personal computer with a decent GPU is sufficient. However, to execute the large number of runs that we conducted, we submitted many jobs in parallel to a cluster of Nvidia A100 GPUs. The MLP experiments described in Section 3.2 were executed on a cluster of Intel Xeon CPUs.
Since the models and datasets were small, CPU training was much more efficient than GPU training, especially when employing Julia's Lux library [47, 48]. For a single run, a conventional personal computer is again sufficient. The transformer experiments described in Section 3.3 were executed on a cluster of A100 GPUs, making use of PyTorch's Distributed Data Parallel (DDP) library. For a single run, data was distributed across up to 8 GPUs that jointly optimized a transformer. Many such jointly optimized runs were in turn run in parallel to obtain all datapoints. Not all runs were successful, hence the total number of runs exceeded the number of datapoints provided in this paper. Below we provide approximate runtime estimates:

1. Classifier: VGG on CIFAR takes about 500s for a run on an Nvidia A100 GPU. We usually performed 6 runs per alpha (different seeds).
2. Teacher-student experiments: With 20 CPU cores, about 45s per run. Again, we usually performed 6 runs per alpha (different seeds).
3. Transformer: About 18h for one run of about 1 million iterations. Each run corresponds to one datapoint. We performed many runs in parallel.
arXiv:2505.17800v1 [math.PR] 23 May 2025

L^q-maximal inequality for high dimensional means under dependence

Jonathan B. Hill*
Dept. of Economics, University of North Carolina, Chapel Hill, NC
This draft: May 26, 2025

Abstract

We derive an L^q-maximal inequality for zero mean dependent random variables {x_t}_{t=1}^n on R^p, where p >> n is allowed. The upper bound is a familiar multiple of ln(p) and an ℓ^∞ moment, as well as Kolmogorov distances based on Gaussian approximations (ρ_n, ρ̃_n), derived with and without negligible truncation and sub-sample blocking. The latter arise due to a departure from independence and therefore a departure from standard symmetrization arguments. Examples are provided demonstrating (ρ_n, ρ̃_n) → 0 under heterogeneous mixing and physical dependence conditions, where (ρ_n, ρ̃_n) are multiples of ln(p)/n^b for some b > 0 that depends on memory, tail decay, the truncation level and block size.

AMS classifications: 60F10, 60F25.

1 Introduction

In this paper we establish an L^q-maximal inequality for zero mean, high dimensional dependent random variables x_t = [x_{i,t}]_{i=1}^p on R^p, where p >> n and p = p_n → ∞ as n → ∞ are allowed. Let {x_t}_{t=1}^n be a sample of observations, n ∈ N, and assume max_{1≤i≤p} max_{1≤t≤n} |x_{i,t}| is L^q-integrable for some q ≥ 1. We show E max_{1≤i≤p} |1/n Σ_{t=1}^n x_{i,t}|^q is bounded by a multiple of (ln(p)/N_n)^{q/2} in a general dependence setting that uses blocking, where N_n → ∞ is the number of blocks satisfying N_n = o(n). We then prove the required Gaussian approximations and comparisons are asymptotically negligible under mixing and physical dependence properties. The maximal moment becomes o(1) after bounding the growth of p, depending on whether the x_{i,t} have sub-exponential tails. We use a negligible

*Department of Economics, University of North Carolina, Chapel Hill, North Carolina. E-mail: jbhill@email.unc.edu; https://tarheels.live/jbhill.
truncation approximation with a diverging level M_n, and a blocking (or M-dependent) approximation popular in the dependent multiplier bootstrap literature (e.g. Liu (1988); Hansen (1996); Shao (2010, 2011)). Thus an upper bound on p will naturally depend on the truncation level M_n and block size b_n, as well as dependence decay and heterogeneity. See below, and Section 2, for notation conventions.

Under high dimensionality, concentration inequalities can be combined with a log-exp or "smooth max" approximation to bound E max_{1≤i≤p} |1/n Σ_{t=1}^n x_{i,t}|. Thus, for example, for any λ > 0 we have by Jensen's inequality

\[
E\max_{1\le i\le p}\Big|\frac{1}{n}\sum_{t=1}^{n}x_{i,t}\Big| \;\le\; \frac{1}{\lambda}\ln\left(p\,\max_{1\le i\le p}\int_0^{\infty} P\left(\Big|\frac{1}{n}\sum_{t=1}^{n}x_{i,t}\Big| > \frac{1}{\lambda}\ln(u)\right)du\right). \tag{1.1}
\]

The integrated probability can be assessed under sub-exponential tails using probability concentration or large deviation bounds like the Bernstein and Fuk-Nagaev inequalities. The latter have been extended to weakly dependent data in many mixing and non-mixing contexts (see, e.g., Rio, 2017; Hill, 2024). If tail decay is slower than exponential then use Lyapunov's inequality and max_{1≤i≤p} |x_i| ≤ Σ_{i=1}^p |x_i| to deduce E max_{1≤i≤p} |1/n Σ_{t=1}^n x_{i,t}|^q ≤ (p^{q/s}/n^{q/2}) max_{1≤i≤p} ||1/√n Σ_{t=1}^n x_{i,t}||_s^q for s ≥ q. The final L^q-moment is boundable in many dependence and heterogeneity settings (e.g. Davydov, 1968; McLeish, 1975; Wu, 2005), yielding E max_{1≤i≤p} |1/n Σ_{t=1}^n x_{i,t}|^q → 0 under a polynomial bound on p. As far as we know, Nemirovski (2000)-type bounds for E max_{1≤i≤p} |1/n Σ_{t=1}^n x_{i,t}|^q that are a multiple of ln(p) without exploiting sub-exponential tails only exist for independent data. The latter hinges on a symmetrization
https://arxiv.org/abs/2505.17800v1
argument classically laid out in Pollard (1984, Chapt. II.2) and van der Vaart and Wellner (1996, Chapt.'s 2.2-2.3). See also Section 2 below.

Write max_i = max_{1≤i≤p} and max_{i,t} = max_{1≤i≤p} max_{1≤t≤n}, and define x̄_{i,n} := 1/n Σ_{t=1}^n x_{i,t}. In this setting Nemirovski (2000) shows for any q ≥ 1 and p ≥ e^{q−1} (see, e.g., Buhlmann and van de Geer, 2011, Theorem 14.24)

\[
E\max_i |\bar{x}_{i,n}|^q \le C_n(p,q)\, E\left(\max_i \frac{1}{n}\sum_{t=1}^n x_{i,t}^2\right)^{q/2}, \quad \text{where } C_n(p,q) = \left(\frac{8\ln(2p)}{n}\right)^{q/2}. \tag{1.2}
\]

The bound depends on p both through ln(p) and E(max_i 1/n Σ_{t=1}^n x_{i,t}^2)^{q/2}. If q = 2 then of course E max_i x̄_{i,n}^2 ≲ (ln(p)/n) Σ_{t=1}^n E max_i x_{i,t}^2 / n. If x_{i,t} is sub-Gaussian then E max_i x_{i,t}^2 = O(ln(n)); otherwise E max_i x_{i,t}^2 ≤ p^{2/s}(max_i ||x_{i,t}||_s)^2 under L^s-boundedness, s ≥ 2, by Lyapunov's inequality and max_i{a_i} ≤ Σ_{i=1}^p a_i. If x_{i,t} is sub-exponential then for q = 1 and L^1-boundedness, and the Cauchy-Schwartz inequality, E(max_i 1/n Σ_{t=1}^n x_{i,t}^2)^{1/2} ≤ {E max_{i,t}|x_{i,t}| × n^{-1} Σ_{t=1}^n E max_i |x_{i,t}|}^{1/2} = O(√(ln(pn) ln(p))). By restricting dependence and tail decay, other Orlicz norms besides the one above corresponding to x ↦ x^q can be used, including exponential under a sub-exponential assumption (van der Vaart and Wellner, 1996, Lemma 2.2.2).

Chernozhukov, Chetverikov and Kato (2015, Lemma 8) use a similar symmetrization approach to prove an L^1-maximal inequality

\[
E\max_i|\bar{x}_{i,n}| \;\lesssim\; \sqrt{\frac{\ln(p)}{n}\times \max_i \frac{1}{n}\sum_{t=1}^n E x_{i,t}^2} \;+\; \frac{E\max_{i,t} x_{i,t}^2\,\ln(p)}{n}.
\]

See also Buhlmann and van de Geer (2011, Lemma 14.18, Corollary 14.4) for a low dimensional result, and see Juditsky and Nemirovski (2008), Dümbgen, van de Geer, Veraar and Wellner (2010) and Massart and Rossignol (2013) for extensions for independent random variables in Banach space. Extant results rely on square integrability of the ℓ^∞ envelope max_{i,t}|x_{i,t}|, and generally exploit symmetrization. Often the goal is an optimal constant C_n(p, q) in different spatial contexts (e.g. Nemirovski, 2000; Dümbgen et al., 2010), typically in low dimensions (e.g. Ibragimov and Sharakhmetov, 1998).
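The √(ln(p)/n) rate in (1.2) can be eyeballed in a quick simulation. The iid Gaussian setup, sample sizes and seed below are our illustrative choices, not from the paper; for x̄_{i,n} ~ N(0, 1/n) the quantity E max_i |x̄_{i,n}| should scale roughly like √(ln p):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 2000

def mean_max_abs(p):
    # Draw xbar_{i,n} ~ N(0, 1/n) iid across i, and Monte-Carlo average
    # max_i |xbar_{i,n}| over `reps` repetitions.
    xbar = rng.normal(0.0, 1.0 / np.sqrt(n), size=(reps, p))
    return np.abs(xbar).max(axis=1).mean()

m_small, m_large = mean_max_abs(10), mean_max_abs(1000)
# If E max_i |xbar| ~ sqrt(ln(p)/n), this ratio should be close to 1.
ratio = (m_large / m_small) / np.sqrt(np.log(1000) / np.log(10))
```

This only illustrates the independent case; the point of the paper is that reaching the same ln(p) rate under dependence requires the blocking and Gaussian comparison machinery of Section 2.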
Extensions to dependent random variables have a long history in the low dimensional case; see, e.g., von Bahr and Esseen (1965), McLeish (1975), Yokoyama (1980), Hitczenko (1990), de la Pena, Ibragimov and Sharakhmetov (2003), Merlevede and Peligrad (2013) and Szewczak (2015). Cf. Marcinkiewicz and Zygmund (1937) and Rosenthal (1970).

The usefulness of (1.2) can be easily understood. Under L^s-boundedness a "least sharp" bound for E max_i |x̄_{i,n}|^q, modulo universal multiplicative constants, is p^{q/s}. Strengthening the bound to (ln(p))^{q/2} under general tail and memory conditions has great practical application for concentration inequalities in a variety of disciplines, and can promote an exponential bound on p yielding a max-law of large numbers max_i |x̄_{i,n}| →^p 0. Such LLN's appear in many places in the high dimensional literature, including regularized estimation theory like debiased Lasso, wavelet-like white noise tests and high dimensional regression model tests, to name just a few (see, e.g., Dezeure, Bühlmann and Zhang, 2017; Hill, 2024; Jin, Wang and Wang, 2015; Hill and Li, 2025).

Under dependence, however, a (classic) symmetrization argument is apparently unavailable. We therefore approach the problem akin to the high dimensional dependent bootstrap and Gaussian approximation literature (e.g. Mammen, 1993; Chernozhukov, Chetverikov and Kato, 2013, 2017, 2019; Zhang and Cheng, 2018; Dezeure et al., 2017; Zhang and
Cheng, 2014; Chang, Jiang and Shao, 2023; Chang, Chen and Wu, 2024; Hill, 2024; Hill and Li, 2025). We initially assume x_{i,t} is bounded in order to yield a version of (1.2) by using telescoping sub-sample blocks. We then derive the main result by approximating x̄_{i,n} both by negligible truncation and blocking. We show a comparison of Gaussian distributions with and without truncation or blocking delivers (1.2), modified to account for blocking.

Our general proof requires a new concentration bound for tail probability measures, akin to Nemirovski (2000)'s bound in L^q, levied in general and under sub-exponential and L^q-boundedness conditions. Set P̄_M := max_i P(|x̄_{i,n}| > M). In general for any ζ ∈ (0, 1] and irrespective of dependence, we show

\[
P\left(\max_i |\bar{x}_{i,n}| > M_n\right) \;\le\; \frac{\ln(p)}{\ln(\bar{P}_{M_n}^{-\zeta})} + \frac{\bar{P}_{M_n}^{1-\zeta}}{\ln(\bar{P}_{M_n}^{-\zeta})}.
\]

Under L^q-boundedness this becomes P(max_i |x̄_{i,n}| > M_n) ≲ 2 ln(p)/ln(M_n^q / max_i E|x̄_{i,n}|^q). In a general dependence setting this yields a significant improvement over using Markov's inequality P(max_i |x̄_{i,n}| > M_n) ≤ M_n^{-q} E max_i |x̄_{i,n}|^q and our main bound (2.10) on E max_i |x̄_{i,n}|^q, cf. Proposition 2.2. The latter entails multiple approximation errors that are bypassed when we directly treat P(max_i |x̄_{i,n}| > M_n) as a moment E I_{max_i |x̄_{i,n}| > M_n}.

The main results are presented in Section 2. Examples involving heterogeneous geometric mixing sequences with sub-exponential tails, and heterogeneous physical dependent sequences under L^q-boundedness, are presented in Section 3. Concluding remarks are left for Section 4, and omitted proofs are relegated to the appendix.

We assume all random variables exist on a complete measure space (Ω, F, P) in order to side-step any measurability issues concerning suprema (see Pollard, 1984, Appendix C). E is the expectations operator; E_A is expectations conditional on F-measurable A. L^q := {X, σ(X) ⊂ F : E|X|^q < ∞}. ||·||_q is the L^q-norm. a.s. is P-almost surely. K > 0 is a finite constant that may have different values in different places.
x ≲ y if x ≤ Ky for some K > 0 that is not a function of n, x, y. I_A is the indicator function: I_A = 1 if A is true. |z|_+ = 0 ∨ z. [·]_+ rounds to the next greater integer.

2 L^q-maximal inequality

It is useful to recall how symmetrization works under independence. Write X^{(n)} := {x_t}_{t=1}^n, and let {x*_t} be an independent copy of {x_t}. Then Ex*_{i,t} = Ex_{i,t} = 0, hence by the conditional Jensen's inequality and Fubini's theorem

\[
\Big\| \max_i |\bar{x}_{i,n}| \Big\|_q \le \Big\| E_{X^{(n)}} \max_i \Big| \frac{1}{n}\sum_{t=1}^n (x_{i,t} - x^*_{i,t}) \Big| \Big\|_q \le \Big\| \max_i \Big| \frac{1}{n}\sum_{t=1}^n (x_{i,t} - x^*_{i,t}) \Big| \Big\|_q.
\]

If {ε_t} are iid Rademacher (i.e. P(ε_t = −1) = P(ε_t = 1) = 1/2), independent of {x_t}, then the symmetrically distributed x_{i,t} − x*_{i,t} and ε_t(x_{i,t} − x*_{i,t}) have the same distribution. Indeed, under mutual and serial independence max_i |1/n Σ_{t=1}^n (x_{i,t} − x*_{i,t})| and max_i |1/n Σ_{t=1}^n ε_t(x_{i,t} − x*_{i,t})| have the same distribution. Thus ||max_i |1/n Σ_{t=1}^n (x_{i,t} − x*_{i,t})| ||_q = ||max_i |1/n Σ_{t=1}^n ε_t(x_{i,t} − x*_{i,t})| ||_q. The latter is easily bounded using triangle and Minkowski inequalities, yielding the "desymmetrized" ||max_i |1/n Σ_{t=1}^n ε_t(x_{i,t} − x*_{i,t})| ||_q ≤ 2 ||max_i |x̄_{i,n}| ||_q. Now note that ε_t(x_{i,t} − x*_{i,t}) is sub-Gaussian once we condition on {x_{i,t}, x*_{i,t}}_{t=1}^n. A classic log-exp bound and Hoeffding's inequality are then enough to yield (1.2), cf. Buhlmann and van de Geer (2011, Lemmas 14.10, 14.14, 14.24).

Under dependence we use telescoping blocks and cross-block independent multipliers. Let b_n ∈ {1, ..., n−1} be a pre-set block size, b_n → ∞, b_n = o(n). Define the number of blocks
N_n := [n/b_n], and index sets B_l := {(l−1)b_n + 1, ..., l b_n} with l = 1, ..., N_n. Assume N_n b_n = n throughout to reduce notation. Generate bounded iid random variables {ε_l}_{l=1}^{N_n} independent of {x_t}_{t=1}^n, where P(|ε_l| ≤ c) = 1, c ∈ (0, ∞), and define a sequence of multipliers {η_t}_{t=1}^n by η_t = ε_l if t ∈ B_l. Define partial sums and a variance

\[
X_n(i) := \frac{1}{\sqrt{n}}\sum_{t=1}^n x_{i,t}, \quad \sigma_n^2(i,j) := E X_n(i) X_n(j) \ \text{ and } \ \sigma_n^2(i) := E X_n^2(i) \tag{2.1}
\]
\[
X_n^*(i) := \frac{1}{\sqrt{n}}\sum_{t=1}^n \eta_t x_{i,t} = \frac{1}{\sqrt{n}}\sum_{l=1}^{N_n} \varepsilon_l S_{n,l}(i) \ \text{ where } \ S_{n,l}(i) := \sum_{t=(l-1)b_n+1}^{l b_n} x_{i,t}.
\]

Thus X*_n(i) is a multiplier (wild) bootstrap approximation of X_n(i). It is, very loosely speaking, a "symmetrized" version of X_n(i). As long as b_n → ∞ and b_n = o(n), then under a broad array of dependence settings X*_n(i) is a draw from the distribution governing X_n(i), asymptotically with probability approaching one (e.g. Liu, 1988; Giné and Zinn, 1990; Hill and Li, 2025). A Rademacher assumption for ε_l under dependence cannot precisely serve the purpose it does under independence. We still require boundedness, however, in order to invoke Hoeffding's inequality in applications.

Now let {X̃_n(i)}_{i=1}^p be a Gaussian process, X̃_n(i) ~ N(0, σ_n²(i)), and define

\[
\rho_n = \rho(n,p) := \sup_{z\ge 0}\Big| P\Big(\max_i |X_n(i)| \le z\Big) - P\Big(\max_i |\tilde X_n(i)| \le z\Big)\Big| \tag{2.2}
\]
\[
\rho_n^* = \rho^*(n,p) := \sup_{z\ge 0}\Big| P\Big(\max_i |X_n^*(i)| \le z\Big) - P\Big(\max_i |\tilde X_n(i)| \le z\Big)\Big|.
\]

Hence sup_{z≥0} |P(max_i |X*_n(i)| ≤ z) − P(max_i |X_n(i)| ≤ z)| ≤ ρ_n + ρ*_n by the triangle inequality: blocking has an asymptotically negligible (in probability) impact whenever the Kolmogorov distances {ρ_n, ρ*_n} → 0. The latter holds in many settings: see Examples 3.1-3.3, and see, e.g., Chernozhukov et al. (2013, 2017), Jin et al. (2015), Zhang and Wu (2017), Zhang and Cheng (2018), Chang et al. (2023), and Chang et al. (2024).

2.1 Bounded

We have our first Nemirovski (2000)-like moment bound under boundedness and arbitrary dependence. The result provides a foundation for using a negligible truncation approximation under unboundedness.

Proposition 2.1.
Let P(max_{i,t}|x_{i,t}| ≤ M_n) = 1 for some non-decreasing sequence {M_n}, M_n ∈ (0, ∞), where M_n → ∞ is possible. Then

\[
E\max_i|\bar{x}_{i,n}|^q \le \Big(\frac{2c^2\ln(2p)}{N_n}\Big)^{q/2} \sqrt{E\max_{i,l}\Big|\frac{1}{b_n}S_{n,l}(i)\Big|^q \; E\Big(\max_i \frac{1}{N_n}\sum_{l=1}^{N_n}|S_{n,l}(i)|\Big)^q} + \Big(\frac{M_n}{\sqrt{n}}\Big)^q\{\rho_n+\rho_n^*\}. \tag{2.3}
\]

Remark 2.1. Use Minkowski and Jensen inequalities to yield simple refinements

\[
E\max_i|\bar{x}_{i,n}|^q \le \Big(\frac{2c^2\ln(2p)}{N_n}\Big)^{q/2} \sqrt{E\max_{i,t}|x_{i,t}|^q \times E\max_{i,l}\Big(\frac{1}{b_n}\sum_{t=(l-1)b_n+1}^{l b_n}|x_{i,t}|\Big)^q} + \Big(\frac{M_n}{\sqrt{n}}\Big)^q\{\rho_n+\rho_n^*\}
\]
\[
\le \Big(\frac{2c^2\ln(2p)}{N_n}\Big)^{q/2} E\max_{i,t}|x_{i,t}|^q + \Big(\frac{M_n}{\sqrt{n}}\Big)^q\{\rho_n+\rho_n^*\}.
\]

If ε_l is Rademacher then c = 1, while in general applications we need c ≥ 1. Indeed, we are not free to choose c and thus ε_l. In order to verify {ρ_n, ρ*_n} → 0 we employ a Gaussian-to-Gaussian comparison that exploits Eε²_l = 1, a natural requirement in the multiplier bootstrap literature, hence c ≥ 1.

Remark 2.2. If q = 1 then we only need E max_{i,t}|x_{i,t}| < ∞, although with boundedness this is immaterial since all moments exist. It becomes important, however, when we relax boundedness, where a second moment need not exist.

Remark 2.3. Boundedness is used solely to yield a bounded integral under general dependence E max_i|x̄_{i,n}|^q = ∫₀^{M_n^q} P(max_i |x̄_{i,n}|^q > u) du. The Kolmogorov distances {ρ_n, ρ*_n} themselves, however, do not require nor exploit boundedness.

Proof. By the triangle inequality and definitions of {ρ_n, ρ*_n},

\[
\sup_{z\ge0}\Big| P\Big(\max_i \Big|\frac{1}{\sqrt n}\sum_{t=1}^n x_{i,t}\Big| \le z\Big) - P\Big(\max_i\Big|\frac{1}{\sqrt n}\sum_{l=1}^{N_n}\varepsilon_l S_{n,l}(i)\Big|\le z\Big)\Big| \le \rho_n+\rho_n^*.
\]

Now replace x_{i,t} with x_{i,t}/√n to yield for each x ≥ 0,

\[
\Big| P\big(\max_i|\bar{x}_{i,n}| \le x\big) - P\Big(\max_i\Big|\frac{1}{n}\sum_{l=1}^{N_n}\varepsilon_l S_{n,l}(i)\Big| \le x\Big)\Big| \tag{2.4}
\]
\[
= \Big| P\Big(\max_i\Big|\sum_{t=1}^n \frac{x_{i,t}}{\sqrt n}\Big| \le \sqrt n x\Big) - P\Big(\max_i\Big|\sum_{l=1}^{N_n}\frac{\varepsilon_l S_{n,l}(i)}{\sqrt n}\Big| \le \sqrt n x\Big)\Big| \le \rho_n + \rho_n^*.
\]

Recall X^{(n)} := {x_t}_{t=1}^n. Boundedness, twice a change of variables, and (2.4) imply

\[
E\max_i|\bar{x}_{i,n}|^q = \int_0^{M_n^q} P\big(\max_i|\bar{x}_{i,n}|^q > u\big)\,du = q\int_0^{M_n} v^{q-1} P\big(\max_i|\sqrt n\,\bar{x}_{i,n}| > \sqrt n v\big)\,dv
\]
\[
= \frac{q}{n^{q/2}}\int_0^{\sqrt n M_n} z^{q-1} P\big(\max_i|\sqrt n\,\bar{x}_{i,n}| > z\big)\,dz
\]
\[
\le \frac{q}{n^{q/2}}\int_0^{\sqrt n M_n} z^{q-1} P\Big(\max_i\Big|\frac{1}{\sqrt n}\sum_{l=1}^{N_n}\varepsilon_l S_{n,l}(i)\Big| > z\Big)\,dz + \frac{\rho_n \vee \rho_n^*}{n^{q/2}}\, q\int_0^{M_n} u^{q-1}\,du
\]
\[
= E\Big(E_{X^{(n)}}\max_i\Big|\frac{1}{n}\sum_{l=1}^{N_n}\varepsilon_l S_{n,l}(i)\Big|^q\Big) + \Big(\frac{M_n}{\sqrt n}\Big)^q\{\rho_n+\rho_n^*\}. \tag{2.5}
\]

Since ε_l is iid and P(|ε_l| ≤ c) = 1, c ∈ (0, ∞), Hoeffding's inequality applies to E_{X^{(n)}}(·) with a log-exp bound to yield (see Buhlmann and van de Geer, 2011, Lemma 14.14)

\[
E_{X^{(n)}}\max_i\Big|\frac{1}{n}\sum_{l=1}^{N_n}\varepsilon_l S_{n,l}(i)\Big|^q \le \Big(\frac{2c^2\ln(2p)}{n}\Big)^{q/2}\Big(\max_i \frac{1}{n}\sum_{l=1}^{N_n}S_{n,l}^2(i)\Big)^{q/2}.
\]

See, e.g., Bentkus (2004, 2008) for Hoeffding-related bounds allowing for unbounded ε_l. Hence by Fubini's theorem, the Cauchy-Schwartz inequality and n = N_n b_n,

\[
E\max_i\Big|\frac{1}{n}\sum_{l=1}^{N_n}\varepsilon_l S_{n,l}(i)\Big|^q \le \Big(\frac{2c^2\ln(2p)}{N_n}\Big)^{q/2} E\Big(\max_i\frac{1}{n}\sum_{l=1}^{N_n}S^2_{n,l}(i)\Big)^{q/2} \tag{2.6}
\]
\[
\le \Big(\frac{2c^2\ln(2p)}{N_n}\Big)^{q/2} E\Big[\Big(\max_{i,l}\Big|\frac{1}{b_n}S_{n,l}(i)\Big|\Big)^{q/2}\Big(\max_i\frac{1}{N_n}\sum_{l=1}^{N_n}|S_{n,l}(i)|\Big)^{q/2}\Big]
\]
\[
\le \Big(\frac{2c^2\ln(2p)}{N_n}\Big)^{q/2}\sqrt{E\max_{i,l}\Big|\frac{1}{b_n}S_{n,l}(i)\Big|^q\; E\Big(\max_i\frac{1}{N_n}\sum_{l=1}^{N_n}|S_{n,l}(i)|\Big)^q}.
\]

Combine (2.5) with (2.6) to conclude (2.3). QED.

2.2 Unbounded

Now assume x_t has support R^p, and let {M_n}_{n∈N} be a sequence of monotonically increasing positive reals, M_n → ∞. We approximate x_t with a centered, negligibly truncated version y^{(M)}_{i,t} := x^{(M)}_{i,t} − E x^{(M)}_{i,t}, where x^{(M)}_{i,t} := x_{i,t} I_{|x_{i,t}|≤M_n} + M_n I_{|x_{i,t}|>M_n}. Define partial sums under truncation with and without blocking, and a partial sum (co)variance

\[
X^{(M)}_n(i) := \frac{1}{\sqrt n}\sum_{t=1}^n y^{(M)}_{i,t}, \quad \sigma^{(M)2}_n(i,j) := E X^{(M)}_n(i)X^{(M)}_n(j) \ \text{ and } \ \sigma^{(M)2}_n(i) = \sigma^{(M)2}_n(i,i)
\]
\[
X^{(M)*}_n(i) := \frac{1}{\sqrt n}\sum_{t=1}^n \eta_t y^{(M)}_{i,t} = \frac{1}{\sqrt n}\sum_{l=1}^{N_n}\varepsilon_l S^{(M)}_{n,l}(i) \ \text{ where } \ S^{(M)}_{n,l}(i) := \sum_{t=(l-1)b_n+1}^{l b_n} y^{(M)}_{i,t}.
\]
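The truncated, blocked multiplier statistic X^{(M)*}_n(i) defined above can be sketched in code. This is a toy illustration under our own naming: the population mean E x^{(M)}_{i,t} is replaced by its empirical counterpart, and we assume N_n b_n = n as in the text.

```python
import numpy as np

def truncated_block_multiplier_stat(x, M, b, eps):
    """Toy sketch of X^{(M)*}_n(i): truncate x_{i,t} at level M, center to get
    y^{(M)}_{i,t} (empirical mean used as a stand-in for E x^{(M)}_{i,t}),
    form block sums S^{(M)}_{n,l}(i), then weight blocks by iid multipliers
    eps_l with E eps_l^2 = 1.  x: (n, p) array, b: block size with n = N*b,
    eps: (N,) multiplier draws."""
    n, p = x.shape
    xM = np.where(np.abs(x) <= M, x, M)     # x^{(M)}_{i,t}
    y = xM - xM.mean(axis=0)                # centered truncated series
    N = n // b
    S = y.reshape(N, b, p).sum(axis=1)      # S^{(M)}_{n,l}(i), shape (N, p)
    return (eps[:, None] * S).sum(axis=0) / np.sqrt(n)
```

With all multipliers equal to one and M larger than every observation, the statistic reduces to the (centered) full-sample partial sum, which is zero here because of the empirical centering; Rademacher draws give one bootstrap replicate.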
Let {X̃^{(M)}_n(i)}_{i∈N} be a Gaussian process, X̃^{(M)}_n(i) ~ N(0, σ^{(M)2}_n(i)); recall X̃_n(i) ~ N(0, EX²_n(i)), and define

\[
\rho^{*(M)}_n := \sup_{z\ge0}\Big| P\Big(\max_i\big|X^{(M)*}_n(i)\big| \le z\Big) - P\Big(\max_i\big|\tilde X^{(M)}_n(i)\big| \le z\Big)\Big|
\]
\[
\delta^{(M)}_n := \sup_{z\ge0}\Big| P\Big(\max_i\big|\tilde X^{(M)}_n(i)\big| \le z\Big) - P\Big(\max_i|\tilde X_n(i)| \le z\Big)\Big|.
\]

Thus ρ*^{(M)}_n measures how close the truncated and blocked max_i |X^{(M)*}_n(i)| is to the max-Gaussian law under truncation max_i |X̃^{(M)}_n(i)|, and δ^{(M)}_n captures the distance between Gaussian laws with and without truncation. Notice inequality (2.3) instantly holds for the bounded X^{(M)}_n(i) by Proposition 2.1.

Recall σ²_n(i,j) := E X_n(i) X_n(j). By Lemma C.5 in Chen and Kato (2019), cf. Theorem 2 in Chernozhukov et al. (2015),

\[
\delta^{(M)}_n \lesssim \Delta^{(M)1/3}_n \times \ln(p)^{2/3} \quad\text{where}\quad \Delta^{(M)}_n := \max_{i,j}\big|\sigma^2_n(i,j) - \sigma^{(M)2}_n(i,j)\big|.
\]

It is straightforward to show δ^{(M)}_n → 0 by manipulating the truncation points {M_n} and bounding ln(p), without any reference to underlying dependence or tail decay other than the existence of a higher moment. First, by construction and the triangle inequality

\[
\Delta^{(M)}_n \le \max_{i,j}\Big|\frac{1}{n}\sum_{s,t=1}^n E x_{i,s}x_{i,t} - \frac{1}{n}\sum_{s,t=1}^n E x_{i,s}I_{|x_{i,s}|\le M_n}\, x_{i,t}I_{|x_{i,t}|\le M_n}\Big|
\]
\[
+\; 2M_n \max_{i,j}\Big|\frac{1}{n}\sum_{s,t=1}^n E\, I_{|x_{i,s}|>M_n}\big(x_{i,t}I_{|x_{i,t}|\le M_n} + M_n I_{|x_{i,t}|>M_n}\big)\Big| \;=:\; A_n + B_n.
\]

Now suppose E|x_{i,t}|^{2r} < ∞ for some r > 1, and assume M_n → ∞ sufficiently fast,

\[
\frac{n^{1/(r-1)}\big(1 + \max_{i,t}\|x_{i,t}\|_{2r}\big)^{2r/(r-1)}}{M_n} \to 0. \tag{2.7}
\]
Thus under data homogeneity M_n/n^{1/(r−1)} → ∞. By multiple applications of the Cauchy-Schwartz, Hölder, Lyapunov and Markov inequalities, and (2.7) with r > 1,

\[
A_n \le \Big|\frac{1}{n}\sum_{s,t=1}^n E x_{i,t}x_{i,s}I_{|x_{i,s}|>M_n}\Big| + \Big|\frac{1}{n}\sum_{s,t=1}^n E x_{i,s}I_{|x_{i,s}|\le M_n}\,x_{i,t}I_{|x_{i,t}|>M_n}\Big|
\]
\[
\le \frac{1}{n}\sum_{s,t=1}^n \big(E x_{i,t}^2\big)^{1/2}\big(E x_{i,s}^2 I_{|x_{i,s}|>M_n}\big)^{1/2} + \frac{1}{n}\sum_{s,t=1}^n \big(E x_{i,s}^2 I_{|x_{i,s}|\le M_n}\big)^{1/2}\big(E x_{i,t}^2 I_{|x_{i,t}|>M_n}\big)^{1/2}
\]
\[
\le \frac{1}{n}\sum_{s,t=1}^n \big(E x_{i,t}^2\big)^{1/2}\big(E x_{i,s}^{2r}\big)^{1/(2r)} P(|x_{i,s}|>M_n)^{(r-1)/r^2} + \frac{1}{n}\sum_{s,t=1}^n \big(E x_{i,s}^2 I_{|x_{i,s}|\le M_n}\big)^{1/2}\big(E x_{i,s}^{2r}\big)^{1/(2r)} P(|x_{i,s}|>M_n)^{(r-1)/r^2}
\]
\[
\le n M_n^{-(r-1)} \max_{i,t}\big(E x_{i,t}^2\big)^{1/2}\max_{i,t}\big(E x_{i,t}^{2r}\big)^{1/(2r)}\max_{i,t}\big(E|x_{i,t}|^{2r}\big)^{(r-1)/r^2} \le \frac{n\,\max_{i,t}\|x_{i,t}\|_{2r}^{r+1}}{M_n^{r-1}} \to 0.
\]

Similarly, for some s := 2r > 2 use (2.7) to yield

\[
B_n \simeq M_n\Big|\frac{1}{n}\sum_{s,t=1}^n E\, I_{|x_{i,s}|>M_n}\big(x_{i,t}I_{|x_{i,t}|\le M_n} + M_n I_{|x_{i,t}|>M_n}\big)\Big| \le M_n\,\frac{1}{n}\sum_{s,t=1}^n \big(E|x_{i,t}|^s\big)^{1/s} P(|x_{i,s}|>M_n)^{(s-1)/s}
\]
\[
\le M_n\,\frac{1}{n}\sum_{s,t=1}^n \big(E|x_{i,t}|^s\big)^{1/s}\big(E|x_{i,s}|^s\big)^{(s-1)/s}\frac{1}{M_n^{s-1}} \le \frac{n}{M_n^{s-2}}\max_{i,t}\big(E|x_{i,s}|^s\big) = \frac{n}{M_n^{2(r-1)}}\max_{i,t} E|x_{i,s}|^{2r} \to 0.
\]

Thus ∆^{(M)}_n ≤ A_n + B_n → 0 under (2.7). Hence when ln(p) = o(M_n^{(r−1)/2}/[√n max_{i,t}||x_{i,t}||_{2r}^r]), for some r > 1,

\[
\delta^{(M)}_n \lesssim \Big(\frac{n^{1/2}}{M_n^{(r-1)/2}}\max_{i,t}\|x_{i,t}\|_{2r}^{r}\Big)\ln(p)^{2/3} \to 0. \tag{2.8}
\]

If marginal distribution tail information for x_{i,t} is available then P(|x_{i,t}| > M_n) can be sharpened. We could likewise exploit dependence decay, if available, to sharpen bounds on 1/n Σ_{s,t=1}^n E x_{i,t}x_{i,s}I_{|x_{i,s}|>M_n} and 1/n Σ_{s,t=1}^n E x_{i,s}I_{|x_{i,s}|≤M_n} x_{i,t}I_{|x_{i,t}|>M_n}.

Assumption 1. Let q ≥ 1. (a) E max_{i,t}|x_{i,t}|^{(q∨2)r} < ∞ for some r > 1 and each n. (b) Truncation points {M_n} satisfy (2.7).

Remark 2.4. We need E max_{i,t}|x_{i,t}|^{(q∨2)r} < ∞ for some r > 1 in order for the truncation error to vanish rapidly enough in high dimension.

Remark 2.5. (2.7) ensures a truncation and non-truncation Gaussian comparison max_{i,j}|σ²_n(i,j) − σ^{(M)2}_n(i,j)| → 0 holds. Since symmetrization cannot be used, it ensures remainder terms {B_n(p), D_n(p)} defined in (2.9) and (2.11)-(2.13) below are negligible when p is bounded. A third remainder C_n(p) := ρ_n + ρ*^{(M)}_n in (2.9) is negligible once a dependence setting is imposed, as in Examples 3.2 and 3.3.
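The mechanics behind ∆^{(M)}_n → 0 can be illustrated numerically: with the truncation x^{(M)} = x·I(|x| ≤ M) + M·I(|x| > M) used above, the second-moment gap shrinks (weakly) as the truncation level M grows. The t-distributed sample, sample size and seed below are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=100_000)   # heavy-ish tails, E|x|^2 finite

def second_moment_gap(x, M):
    # Gap between E x^2 and E (x^{(M)})^2, estimated on the same sample.
    xM = np.where(np.abs(x) <= M, x, M)  # x^{(M)} as defined in the text
    return abs(np.mean(x**2) - np.mean(xM**2))

gap_2, gap_10 = second_moment_gap(x, 2.0), second_moment_gap(x, 10.0)
```

Because (x^{(M)})² = x² wherever |x| ≤ M and (x^{(M)})² = M² ≤ x² elsewhere, the gap is nonnegative and monotonically non-increasing in M on any fixed sample, mirroring the role of the diverging level M_n in (2.7).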
Define P̄_M := max_i P(|x̄_{i,n}| > M) and for r > 1

\[
B_n(p) := \frac{\max_{i,t}\|x_{i,t}\|_{2r}^r}{M_n^{(r-1)}}\ln(p)^{2/3} \quad\text{and}\quad C_n(p) := \rho_n + \rho^{*(M)}_n. \tag{2.9}
\]

We now have the main result of this paper.

Proposition 2.2. For any random variables {x_t}_{t=1}^n satisfying Assumption 1,

\[
E\max_i|\bar{x}_{i,n}|^q \lesssim \Big(\frac{2\ln(2p)}{N_n}\Big)^{q/2}\sqrt{E\max_{i,l}\Big|\frac{1}{b_n}S^{(M)}_{n,l}(i)\Big|^q\; E\Big(\max_i\frac{1}{N_n}\sum_{l=1}^{N_n}\big|S^{(M)}_{n,l}(i)\big|\Big)^q} + \Big(\frac{M_n}{\sqrt n}\Big)^q\{B_n(p)+C_n(p)\} + D_n(p) \tag{2.10}
\]

where D_n(p) is defined as follows by case:

a. (L^{rs}-boundedness) In general for r > 1,

\[
D_n(p) = 2^{(r-1)/r}\, p^{q/(sr)}\max_i\|\bar{x}_{i,n}\|_{rs}^q\left(\frac{\ln(p)}{\ln \bar{P}_{M_n}^{-1}}\right)^{(r-1)/r}. \tag{2.11}
\]

If additionally max_i E|x̄_{i,n}|^{rs} = O(n^{−a}), a > 0, then D_n(p) ≲ 2^{(r−1)/r} p^{q/(sr)} n^{−aq/(rs)} × [ln(p)/ln P̄_{M_n}^{−1}]^{(r−1)/r}.

b. (L^{rs}-boundedness) In general for r > 1,

\[
D_n(p) = 2^{(r-1)/r}\, p^{q/(sr)}\max_i\|\bar{x}_{i,n}\|_{rs}^q\left(\frac{\ln(p)}{\ln(M_n^q/\max_i E|\bar{x}_{i,n}|^q)}\right)^{(r-1)/r}. \tag{2.12}
\]

If additionally max_i E|x̄_{i,n}|^{rs} = O(n^{−a}), a > 0, then D_n(p) ≲ 2^{(r−1)/r} p^{q/(sr)} n^{−aq/(rs)} [ln(p)/ln(M_n^q n^a)]^{(r−1)/r}.

c. (sub-exponential) If max_i P(|x̄_{i,n}| ≥ c) ≤ a exp{−b n^γ c^γ} for some finite constants (a, b) > 0, γ > q, and all c > 0, then for p > e and r > 1,

\[
D_n(p) := \sqrt{\frac{8}{n^{\gamma/q}}}\;\frac{a^{1/q}\ln(p)}{\exp\{\exp\{M_n^q\}\}}. \tag{2.13}
\]

Remark 2.6. Error B_n(p) arises from a Gaussian comparison with and without truncation. C_n(p) captures the total Gaussian approximation error from truncation and blocking. D_n(p) represents the truncation tail approximation error caused by replacing E max_i |x̄_{i,n}|^q with E max_i |x̄^{(M)}_{i,n}|^q.

Remark 2.7. E|x̄_{i,n}|^{rs} = O(n^{−a}) holds for many stochastic processes, including α, β, C, ϕ, τ, φ-mixing, mixingale, near epoch dependent, and physical dependent processes. See McLeish (1975), Hansen (1991), Wu (2005) and Hill (2024) to name a few.

Remark 2.8. The proof exploits the
decomposition

\[
E\max_i|\bar{x}_{i,n}|^q = E\max_i|\bar{x}_{i,n}|^q I_{\max_i|\bar{x}_{i,n}|\le M_n} + E\max_i|\bar{x}_{i,n}|^q I_{\max_i|\bar{x}_{i,n}|>M_n} =: I_{n,1} + I_{n,2}. \tag{2.14}
\]

Thus M_n imparts two opposing forces. First, a large M_n yields a better truncation approximation E max_i|x̄_{i,n}|^q I_{max_i|x̄_{i,n}|≤M_n} ≈ E max_i|x̄_{i,n}|^q, hence smaller {B_n(p), D_n(p)}. Second, under dependence we cannot use a symmetrization argument, thus we use Gaussian approximations {ρ_n, ρ*^{(M)}_n}. Hence E max_i|x̄_{i,n}|^q I_{max_i|x̄_{i,n}|≤M_n} incurs a cost via scale (M_n/√n)^q:

\[
E\max_i|\bar{x}_{i,n}|^q I_{\max_i|\bar{x}_{i,n}|\le M_n} \le E\max_i\Big|\frac{1}{n}\sum_{l=1}^{N_n}\varepsilon_l S^{(M)}_{n,l}(i)\Big|^q + \Big(\frac{M_n}{\sqrt n}\Big)^q\big(\rho_n\vee\rho^{*(M)}_n\big).
\]

A smaller M_n is then optimal. Cf. (2.5) in the proof of Proposition 2.1, and step 1 in the proof of Proposition 2.2. Arguments leading to (2.8) combined with the proof of Proposition 2.1 yield I_{n,1} ≤ A_n(p) + B_n(p) ∨ C_n(p).

In order to bound I_{n,2} in (2.14), under L^{rs}-boundedness, by the Hölder and Lyapunov inequalities, for some s > q and r > 1,

\[
I_{n,2} \le E\max_i|\bar{x}_{i,n}|^q I_{\max_i|\bar{x}_{i,n}|>M_n} \le \big(E\max_i|\bar{x}_{i,n}|^{rq}\big)^{1/r} \times P\Big(\max_i|\bar{x}_{i,n}|>M_n\Big)^{(r-1)/r}
\]
\[
\le p^{q/(sr)}\max_i\Big(\big(E|\bar{x}_{i,n}|^{rs}\big)^{1/s}\Big)^{q/r}\times P\Big(\max_i|\bar{x}_{i,n}|>M_n\Big)^{(r-1)/r} = p^{q/(sr)}\max_i\|\bar{x}_{i,n}\|_{rs}^q \times P\Big(\max_i|\bar{x}_{i,n}|>M_n\Big)^{(r-1)/r}. \tag{2.15}
\]

Proposition 2.2 uses the following concentration inequality to deal with P(max_i|x̄_{i,n}| > M_n) in (2.15). Write as before P̄_M := max_i P(|x̄_{i,n}| > M). The following result appears new, and is akin to a tail measure version of Nemirovski (2000)'s inequality, in most cases without need to define dependence.

Lemma 2.3. Let {x_t}, x_t = [x_{i,t}]_{i=1}^p, be random variables on a common probability space.

a. If p > e and p = o(P̄_{M_n}^{−1}) then

\[
P\Big(\max_i|\bar{x}_{i,n}|>M_n\Big) \le 2\,\frac{\ln(p)}{\ln \bar{P}_{M_n}^{-1}} \to 0. \tag{2.16}
\]

b. If the x_{i,t} are L^q-bounded, q ≥ 1, max_i||x̄_{i,n}||_q = o(M_n), and p = o(M_n^q/max_i E|x̄_{i,n}|^q), then

\[
P\Big(\max_i|\bar{x}_{i,n}|>M_n\Big) \le 2\,\frac{\ln(p)}{\ln(M_n^q/\max_i E|\bar{x}_{i,n}|^q)} \to 0. \tag{2.17}
\]

c. If max_i P(|x̄_{i,n}| ≥ c) ≤ a exp{−b n^γ c^γ} for some (a, b, γ) > 0 and all c > 0, then for any ϕ ∈ (0, γ), p > e and ln(p) = o(n^ϕ M_n^ϕ),

\[
P\Big(\max_i|\bar{x}_{i,n}| \ge M_n\Big) \le \frac{\ln(p)}{n^{\phi}M_n^{\phi}\ln(\ln p)} + \frac{a\,(\ln(p))^{n^{\phi}M_n^{\phi}}}{n^{\phi}M_n^{\phi}\exp\{b n^{\gamma}M_n^{\gamma}\}\ln(\ln p)} \to 0.
\]

Remark 2.9.
(a) optimizes a log-exp bound without using any tail information, while (b) applies the same logic with Markov's inequality and $L_q$-boundedness.

Remark 2.10. The sub-exponential tail condition for the sample mean, $\max_i P(|\bar x_{i,n}| \ge c) \le a \exp\{-b n^\gamma c^\gamma\}$ in (c), is essentially a Bernstein or Fuk-Nagaev-type inequality, and thus must be verified case by case. It holds when $x_{i,t}$ has sub-exponential tails and, e.g., satisfies physical dependence (Wu, 2005, Theorem 2(ii)), geometric $\tau$-mixing (Merlevede, Peligrad and Rio, 2011, Theorem 1), or an $\alpha$-mixing or mixingale property (Hill, 2024, 2025). See Rio (2017), and see, e.g., Hill (2024, 2025) for a comprehensive review.

Remark 2.11. Under (b) we still need the $L_q$ norm bound $\max_i \|\bar x_{i,n}\|_q = o(M_n)$. This is mild, or irrelevant, considering that under common regularity conditions $\max_i \|\bar x_{i,n}\|_q = O(1/\sqrt{n})$ holds automatically for possibly non-stationary mixingales and thus $\alpha$-mixing sequences (Hansen, 1991, 1992), as well as physical dependent and other mixing sequences (see, e.g., Hill, 2024). Thus $\max_i \|\bar x_{i,n}\|_q = o(M_n)$ allows for trend: e.g. $x_{i,t} = f_n + z_{i,t}$ where $z_{i,t}$ is stationary and uniformly $L_q$-bounded, and $\{f_n\}$ is a sequence of constants with $f_n = o(M_n)$. Then $\max_{i,t} \|x_{i,t}/M_n\|_q = o(1)$ when $M_n \to \infty$, and therefore $\max_i \|\bar x_{i,n}\|_q = o(M_n)$.

3 Examples

We now study mixing and physical dependence settings in which the Gaussian approximation Kolmogorov distances $\{\rho_n, \rho^*_n, \rho^{*(M)}_n\} \to 0$, yielding the error $\mathcal{C}_n(p) \to 0$ in (2.9) once $p$ is bounded. We begin with a bounded process. Throughout, the multiplier $\varepsilon_l$ is iid, bounded, and $E\varepsilon_l^2 = 1$.

3.1 Bounded, geometric mixing
Assume {xt}are zero mean [ −M n,Mn]p-valued random variables. Suppose xi,t= gi,t(ϵi,t, ϵi,t−1, . . .) for some measurable functions gi,t(·) and iid sequences {ϵi,t}. Define a functional ψζ(x) := exp {|x|ζ} −1,ζ >0, and an exponential Orlicz norm ||x||ψζ := inf{λ >0 :E[ψζ(x/λ)]≤1}. Now let ||xi,t||ψζ≤χnfor some ζ≥1, all t∈ {1, ..., n}and i∈ {1, ..., p}, and some sequence {χn}, inf n∈Nχn≥1. Thus Eexp{|xi,t|ζ/χζ n} ≤2 implying heterogeneous sub-exponential tails which holds automatically under boundedness. Indeed max i,tE|xi,t|q≤ Mq n. However, ignoring the upper bound Mnand for general use below, use the heterogeneous sub-exponential implication P(|xi,t|> u)≤2 exp{−uζχ−ζ n}with E|xi,t|q=R∞ 0P(|xi,t|> u1/q)duto deduce max i,tE|xi,t|q≤4q ζ +!χq n≲qqχq n∀q≥1. (3.1) Now define Ft i,n,s:=σ(xi,τ: 1≤s≤τ≤t≤n) and α-mixing coefficients, αi,j,n(m) := sup 1≤t≤nsup C∈Ft i,n,−∞,D∈F∞ j,n,t +m|P(C ∩ D )−P(C)P(D)|andαi,n(m) :=αi,i,n(m). Assume geometric decay lim supn→∞max i,jαi,j,n(m)≤aωmfor some a≥1 and ω∈[1/e,1). Lastly, assume non-degeneracy EX2 n(i),EX∗2 n(i)> K for each i, nand some K > 0. The latter variances are finite under mixing: max iEX2 n(i)≲max i,t{||xi,t||2 r}<∞forr > q follows from Davydov (1968, Corollary) and geometric decay. The same argument yields max i,lE(1/√bnPlbn t=(l−1)bn+1xi,t)2≲max i,t{||xi,t||2 r}, hence max iEX∗2 n(i)<∞by mutual independence and Eε2 l= 1. The use of telescoping blocks and therefore block-wise dependent {ηt}lowers the mix- ing decay rate, hampering high dimensional theory (see Chang et al., 2023, Appendix, Proposition 3). We can yield better bounds on {ρn, ρ∗ n}, however, by assuming xi,tis a measurable function of iid innovations and using a new link between mixing and physical dependence properties (Hill, 2024, 2025). Lemma 3.1. Assume block size bnand tail heterogeneity χnsatisfy χn=o(bn). Let ln(p) < n1/4/bn→ ∞ . 
Then ρn≲χn(ln(p))7/6 n1/9(3.2) ρ∗ n≲(χn bn1/3 × 1 + ln( p) + lnχn bn 2/3) ∨χnbβ∗ n(ln(p))β∗+2 nβ∗/(4+β∗)(3.3) 14 where β∗:=p 4 lnn/[lnbn+ ln ln( p)]−4>0∀n. We therefore achieve a Nemirovski (2000)-like moment bound under geometric mixing, Emax i|¯xi,n|q≲2 ln (2 p) Nnq/2vuutEmax i,l 1 bnSn,l(i) q E max i1 NnNnX l=1|Sn,l(i)|!q +o(1) =Nn+o(1), withNnimplicit, provided for some r >1 ln(p) = o n1/4 bn∧n3q/4 max i,t∥xi,t∥r 2rM3q/2−(r−1) n∧n3q/7+2/21 χ6/7 nM6q/7 n∧bn χn1/2n3q/4 M3q/2 n ∧nβ∗/[(4+β∗)(β∗+2)] χ1/(β∗+2) n bβ∗/(β∗+2) n∧nγ/qexp{exp{Mq n}}! . Remark 3.1. The upper bound component ( χn/bn)1/3in (3.3) arises from a Gaussian- to-Gaussian comparison, thus we enforce χn/bn→0, an implicit restriction on hetero- geneity. A larger/faster block size bn→ ∞ naturally improves the approximation between EXn(i)Xn(k) and its blocked version EX∗ n(i)X∗ n(j), and therefore between the limiting max- Gaussian processes for {max i|Xn(i)|,max i|X∗ n(i)|}(Chernozhukov et al., 2015, Theorem 2). Otherwise, bnadversely affects ρ∗ n→0, and slows down high dimension divergence p → ∞ , a penalty for having dependent data. The final term nγ/qexp{exp{Mq n}}never dominates when Mn→ ∞ . Hence as long as r <1 + 3 q/2 (which can always be assumed), larger/faster tail heterogeneity or upper bound ( χn,Mn)→ ∞ adversely affects ln( p)→ ∞, penalties for deviating from a homogeneous setting. Remark 3.2. Boundedness is not exploited when handling {ρn, ρ∗ n}; cf. Remark 2.3. Remark 3.3. Suppose block size bn≃nb, heterogeneity χn≃nχ, and bound Mn≃nm for finite (
b, m, χ )>0, with χ∈[0,(1/9)∧b) and b <1/4. Let max i,t||xi,t||2r=O(1). Then ln( p) =o(na) and β∗≃2/√ b−4 with a=1 4−b ∧3q 4−m3q 2−(r−1) ∧1 7 3q+2 3−6χ−6qm ∧1 23q 2+b−χ−3qm ∧√ b 2−2√ b 1−2√ b−χ−2√ bh 1−2√ bi . For example, if q= 1 and r= 1.5, with bound growth m= 1/2, homogeneity χ= 0, and block size b= 1/8 then ln( p) =o(n.02345). 15 3.2 Unbounded, geometric mixing, sub-exponential Reconsider the Example 3.1 setting, except assume xthas support Rp. Let max i,tE|xi,t|2r <∞for each nand some r >1, and assume χn=o(bn) as above. Under sub-exponentiality max iP(|xi,t|> u)≤2 exp{−uγχ−γ n}we can make rarbitrarily large, cf. (3.1), and by Proposition 2.2.c for p > e , Emax i|¯xi,n|q≲2 ln (2 p) Nnq/2vuutEmax i,l 1 bnS(M) n,l(i) q E max i1 NnNnX l=1 S(M) n,l(i) !q +Mq n nq/2( M−2(r−1) n max i,t∥xi,t∥2r 2r1/2 ln(p))2/3 +Mn√nq ρn+ρ∗(M) n +r 8 nγ/qa1/qp ln(p) exp{exp{Mq n}/2} =An(p) +Bn(p) +Cn(p) +Dn(p). Analyzing Cn(p) = (Mn/√n)q{ρn+ρ∗(M) n}proceeds exactly as in Lemma 3.1 since the latter does not exploit boundedness, thus (3.2) and (3.3) apply. Furthermore, after sim- plifying terms it can be shown all ( Bn(p),Cn(p),Dn(p))→0 sufficiently when ln(p) = o n3q/4 max i,t∥xi,t∥r 2rM3q/2−(r−1) n∧n3q/7+2/21 χ6/7 nM6q/7 n∧bn χn1/2n3q/4 M3q/2 n(3.4) ∧n[q/2+β∗/(4+β∗)]/(β∗+2) χ1/(β∗+2) n bβ∗/(β∗+2) n Mq/(β∗+2) n∧nγ/qexp{exp{Mq n}}! . Finally, recall Mnsatisfies (2.7), while we need Mq nn−q/2{(M−2(r−1) n max i,t∥xi,t∥2r 2r)1/2}2/3 →0 inBn(p). Bound heterogeneity by assuming max i,t||xi,t||r 2r=O(1). Then we need n1/(r−1)<Mn< nq/{2[q−2(r−1)/3]}, hence r >(4 + 9 q)/(4 + 3 q). Thus if q= 2 then r > 2.2, hence xi,tmust be L4.4+δ-bounded for some δ >0. 3.2.1 Example: sub-Gaussian In a sub-Gaussian setting γ= 2 thus ||max i,t|xi,t|||2=O(χ1/2 nln(pn)1/4). For example, if|xi,t|rqhas sub-exponential tails then P(|xi,t|> u)≤2 exp{−urqγχ−γ n},γ≥1. Thus if rq≥2/γthen xi,tis sub-Gaussian; e.g. q= 2, r= 1.5 and γ≥2/3. 
Assume bounded heterogeneity χn=O(1) for simplicity, set truncation level Mn= nm,m > 0, block size bn≃nb,b∈(0,1), and set q= 1 and r= 1.5. Thus β∗≃2/√ b− 16 4, and ln( p) =o(na) where a=3 4−m ∧3−6m 7+2 21 ∧b−3m 2+3 4 ∧1 β∗+ 21 2+β∗ 4 +β∗ −bβ∗+m β∗+ 2 . We therefore need b <1/4, and m <{b/3 + 1 /2} ∧ { 1/2 + (2 /√ b−4)(.5√ b−b)}. 3.3 Unbounded, physical dependent, Lq-bounded Now consider physical dependence as in Wu (2005) and Wu and Min (2005). We work in the high dimensional Gaussian approximation setting of Zhang and Cheng (2018) allowing for non-stationarity, but results in, e.g., Chang et al. (2024) also work. Let{ϵ′ i,t}be an independent copy of {ϵi,t}, and define the coupled process x′ i,n,t(m) := gi,n,t(ϵi,t, . . . , ϵ i,t−m+1, ϵ′ i,t−m, ϵi,t−m−1, . . .),m= 0,1,2, ...Define the Lq-physical dependence measure θ(q) i,n(m) and its partial accumulation θ(q) i,n(m) := max t xi,t−x′ i,t(m) qand Θ(q) i,n(m) :=∞X l=mθ(q) i,n(l). We say xtis (uniformly) Lq-physical dependent when max iΘ(q) i,n(0)< K for each n. We also need the truncated process x(M) i,t=xi,tI|xi,t|≤M n+MnI|xi,t|>Mnto satisfy physical dependence. It is easy to show the property carries over to affine combinations
and products (e.g. Wu, 2005); however, it generally need not carry over to general measurable transforms, including $I_{|x_{i,t}|\le M_n}$. In Assumption 2, below, we merely assume $x^{(M)}_{i,t}$ is physical dependent. Under mild conditions, however, this holds when $x_{i,t}$ is physical dependent. In order to see this, define $I_{i,n,t} := I_{|x_{i,t}|\le M_n}$, a coupled version $I'_{i,n,t}(m) := I_{|x'_{i,t}(m)|\le M_n}$, and coefficients
\[
\varpi^{(q)}_{i,n}(m) := \max_t \big\| I_{i,n,t} - I'_{i,n,t}(m) \big\|_q, \qquad
\theta^{M(q)}_{i,n}(m) := \max_t \big\| y^{(M)}_{i,t} - y^{(M)\prime}_{i,t}(m) \big\|_q
\quad \text{and} \quad
\Theta^{M(q)}_{i,n}(m) := \sum_{l=m}^{\infty} \theta^{M(q)}_{i,n}(l).
\]
By Minkowski, Cauchy-Schwarz and Lyapunov inequalities, for any $q \ge 2$,
\[
\theta^{M(q/2)}_{i,n}(m)
\le \max_t \big\| x_{i,t} I_{|x_{i,t}|\le M_n} - x'_{i,t}(m) I_{|x'_{i,t}(m)|\le M_n} \big\|_{q/2}
+ M_n \max_t \big\| I_{|x_{i,t}|>M_n} - I_{|x'_{i,t}(m)|>M_n} \big\|_{q/2}
\]
\[
\le \max_t \big\| \big( x_{i,t} - x'_{i,t}(m) \big) I_{|x_{i,t}|\le M_n} \big\|_{q/2}
+ \max_t \big\| x'_{i,t}(m) I_{|x_{i,t}|\le M_n} - x'_{i,t}(m) I_{|x'_{i,t}(m)|\le M_n} \big\|_{q/2}
+ M_n \max_t \big\| I_{|x_{i,t}|>M_n} - I_{|x'_{i,t}(m)|>M_n} \big\|_{q/2}
\]
\[
\le \theta^{(q/2)}_{i,n}(m) + \Big( \max_t \|x'_{i,t}(m)\|_q + M_n \Big) \times \varpi^{(q)}_{i,n}(m)
\le \theta^{(q/2)}_{i,n}(m) + \Big( \max_t \|x_{i,t}\|_q + M_n \Big) \times \varpi^{(q)}_{i,n}(m).
\]
Moreover, for any $q > 1$ and $m \ge 0$,
\[
\varpi^{(q)}_{i,n}(m) := \max_t \big\| I_{i,n,t} - I'_{i,n,t}(m) \big\|_q
\le \max_t P(|x_{i,t}| > M_n)
\le M_n^{-q} \max_t \|x_{i,t}\|_q^q.
\]
Thus
\[
\theta^{M(q/2)}_{i,n}(m)
\le \theta^{(q/2)}_{i,n}(m)
+ \left( \frac{\max_t \|x_{i,t}\|_q}{M_n^{q/(q+1)}} \right)^{q+1}
+ \left( \frac{\max_t \|x_{i,t}\|_q}{M_n^{1-1/q}} \right)^{q}. \qquad (3.5)
\]
Since $q/(q+1) > 1 - 1/q$, we conclude $x^{(M)}_{i,t}$ is $L_{q/2}$-physical dependent when $x_{i,t}$ is, as long as $\max_t \|x_{i,t}\|_q = o(M_n^{1-1/q})$. Thus physical dependence extends to $x^{(M)}_{i,t}$ as long as trend is not too severe, with a trade-off against the availability of higher moments $q$.

Write $l_n := l_n(p,\gamma) = \ln(pn/\gamma_n) \vee 1$ for any positive sequence $\{\gamma_n\}$.

Assumption 2.
a. Let $\max_{i,t} E x_{i,t}^4 \le K$ and $\max_t E h(\max_i |x_{i,t}|/\chi_n) \le K$ for some strictly increasing convex function $h : [0,\infty) \to [0,\infty)$ and some sequence $\{\chi_n\}$, $\chi_n \ge 1$.
b. There exist $E_n > 0$, $E_n \to \infty$, and $\gamma_n \in (0,1)$ such that $K\{\chi_n h^{-1}(n/\gamma_n) \vee l_n^{1/2}\} \le n^{3/8}/[E_n^{1/2} l_n^{5/8}]$.
c. $0 < \min_i E X_n^2(i) \le \max_i E X_n^2(i) < K$ for each $n$; $\{x_{i,t}, x^{(M)}_{i,t}\}$ are uniformly physical dependent with $\max_i \sum_{l=0}^{\infty} \theta^{M(4)}_{i,n}(l) < \infty$ and $\max_i \sum_{l=0}^{\infty} l\{\theta^{(3)}_{i,n}(l) \vee \theta^{M(3)}_{i,n}(l)\} \le \xi_n$ for some sequence $\xi_n \ge 1$.

Remark 3.4. An exponential map $h(z) = \exp\{z\}$ implies $x_{i,t}$ requires sub-exponential tails.
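The coupled-process construction above can be checked numerically for a concrete linear process. The following minimal Python sketch is illustrative only (the AR(1) example, coefficient, sample size and seed are our own choices, not from the paper): for a causal AR(1), replacing the single innovation at lag $m$ by an independent copy changes $x_t$ by exactly $a^m(\epsilon_{t-m} - \epsilon'_{t-m})$, which is the identity underlying the $L_q$ physical dependence coefficient $\theta^{(q)}_{i,n}(m)$.

```python
import numpy as np

# Illustrative check (not the paper's code): for a causal AR(1)
# x_t = sum_{j>=0} a^j eps_{t-j}, the coupled process x'_t(m) replaces
# eps_{t-m} by an independent copy, so x_t - x'_t(m) = a^m (eps_{t-m} - eps'_{t-m}).

def ar1_from_innovations(eps, a):
    """Build x_t = sum_{j=0}^{t} a^j eps_{t-j} from a finite innovation array."""
    n = len(eps)
    x = np.empty(n)
    for t in range(n):
        j = np.arange(t + 1)
        x[t] = np.sum((a ** j) * eps[t - j])
    return x

rng = np.random.default_rng(0)
n, a, m = 200, 0.7, 5
eps = rng.standard_normal(n)
eps_c = eps.copy()
eps_c[n - 1 - m] = rng.standard_normal()   # couple: swap the lag-m innovation seen from t = n-1

x = ar1_from_innovations(eps, a)
x_c = ar1_from_innovations(eps_c, a)

# The change at t = n-1 equals a^m times the innovation perturbation, exactly.
delta = eps[n - 1 - m] - eps_c[n - 1 - m]
print(x[n - 1] - x_c[n - 1], a ** m * delta)   # these two numbers coincide
```

Taking $L_q$ norms of this exact difference gives $\theta^{(q)}(m) = |a|^m \|\epsilon_0 - \epsilon'_0\|_q$, i.e. geometric decay of the dependence coefficients for this example.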
The $\ell_q$ map $h(z) = z^q$, $q \ge 1$, implies $h^{-1}(y) = y^{1/q}$, hence (b) reduces to
\[
\left\{ \chi_n (n/\gamma_n)^{1/q} \vee \big( \ln(pn/\gamma_n) \vee 1 \big)^{1/2} \right\}
\lesssim \frac{n^{3/8}}{E_n^{1/2} \big( \ln(pn/\gamma_n) \vee 1 \big)^{5/8}}.
\]
Remark 3.5. Assumption 2 yields Assumptions 2.1-2.3 in Zhang and Cheng (2018). The generalization $\max_i \sum_{l=1}^{\infty} l\{\theta^{(6)}_{i,n}(l) \vee \theta^{M(3)}_{i,n}(l)\} \le \xi_n$ allows for trending moments that may not be adequately captured in $\max_t E h(\max_i |x_{i,t}|/\chi_n) \le K$ with $\chi_n$ for a given $h$. If, however, $h(\cdot) = \exp\{\cdot\}$ or $|\cdot|^q$, then $\max_{i,t} E|x_{i,t}|^q \lesssim \chi_n^q$ by (3.1), hence we may take $\xi_n \simeq \chi_n$.

Remark 3.6. For the first part of (c), by arguments in Wu (2005, Theorems 1 and 2(i)) and the second part of (c), $\max_i E X_n^2(i) \lesssim \max_i \sum_{l=1}^{\infty} \theta^{(2)}_{i,n}(l) \le K$. Verification of the second part of (c) requires either additional dependence or heterogeneity restrictions, because $f(x_{i,t})$ need not be physical dependent for measurable $f$. (i) Suppose $x_{i,t}$ are $\alpha$-mixing with $\max_{i,j} \alpha_{i,j,n}(m) \lesssim m^{-\lambda}$ for some $\lambda > 2q^2/(q-1)$ and $q \ge 6$. Then $\{I_{|x_{i,t}|\le M_n}, x_{i,t} I_{|x_{i,t}|\le M_n}\}$ are $\alpha$-mixing with coefficients $\alpha_{i,n}(m)$, and hence $\{x_{i,t}, x^{(M)}_{i,t}\}$ are physical dependent with coefficients $\|x_{i,t}\|_q^{1/q} \alpha_{i,n}^{1/(qr)}(m) \lesssim \|x_{i,t}\|_q^{1/q} m^{-\lambda(q-1)/q^2}$, where $\lambda(q-1)/q^2 > 2$: see McLeish (1975, Lemma 1.6) and Hill (2025, Theorem 2.1), cf. Hill (2024, proof of Theorem 2.10). Hence $\max_i \sum_{l=1}^{\infty} l\{\theta^{(3)}_{i,n}(l) \vee \theta^{M(3)}_{i,n}(l)\} \le K$, where $q > 1$ must hold, a
typical “ moment- memory ” trade-off. Or: ( ii) Let max iP∞ l=0lθ(3) i,n(l)≤ξnand max t||xi,t||q=o(M1−1/q n). Then max iP∞ l=0lθM(3) i,n(l)≤ξnby the arguments leading to (3.5). Under Lq-boundedness and Proposition 2.2.b, Emax i|¯xi,n|q≲2 ln (2 p) Nnq/2vuutEmax i,l 1 bnS(M) n,l(i) q E max i1 nNnX l=1 S(M) n,l(i) !q (3.6) +Mn√nq( M−2(r−1) n max i,t∥xi,t∥2r 2r1/2 ln(p))2/3 +Mn√nq ρn+ρ∗(M) n + max i,t|xi,t| q rqln(p) ln (Mq n/max iE|¯xi,n|q)(r−1)/r =An(p) +Bn(p) +Cn(p) +Dn(p). It remains to bound {ρn, ρ∗(M) n}inCn(p). Recall ln:=ln(p, γ) = ln( pn/γ n)∨1 for some γn ∈(0,1), and define Θ(q) i,n(m) :=∞X l=mn θ(2q) i,n(l)∨θ(q) i,n(m)o and Ξ n(m) := max i∞X l=mln θ(4) i,n(l)∨θ(2) i,n(m)o ˇΘ(q) i,n(m) :=∞X l=mn θ(2q) i,n(l)∨θM(q) i,n(m)o and ˇΞn(m) := max i∞X l=mln θ(4) i,n(l)∨θM(2) i,n(m)o . Notice θM(·) i,n(m) under truncation in the latter {ˇΘ(q) i,n(m),ˇΞn(m)}. Recall En→ ∞ in Assumption 2.b. 19 Lemma 3.2. Under Assumption 2, ξn=o(bn)andq≥2, a. ρ n≲E1/2 nl7/8 n n1/8+γn+n1/8 E1/2 nl3/8 nq/(q+1) pX i=1Θ(q) i,n(En)!1/(q+1) (3.7) + Ξ1/3 n(En)×(1∨ln (p/Ξn(En)))2/3 b. ρ∗(M) n≲cn(p)∨ξn bn1/3 ln(p)2/3, (3.8) where cn(p) :=E1/2 nl7/8 n n1/8+γn+n1/8 E1/2 nl3/8 nq/(q+1) pX i=1ˇΘ(M)(q/2) i,n (En)!1/(q+1) +ˇΞ(M)1/3 n (En)× 1∨ln p/ˇΞ(M) n(En)2/3. 3.3.1 Example: lqmap and hyperbolic memory Consider h(x) =|x|q,q≥2. Assume max i,tEx4 i,t≤K, max tEmax i|xi,t/χn|q≤Kand ln(p)≲n1/q. Assume hyperbolic weak dependencePp i=1{θ(6) i,n(m)∨θM(3) i,n(m)} ≤ϕnm−λfor some λ >2 and some sequence of positive real numbers {ϕn},ϕn≥1. ThenPp i=1Θ(·) i,n(En) ≃ϕnE−λ nand Ξ n(En)≃ϕnE−(λ−1) n . Compare this to results in Zhang and Cheng (2018, Corollary 2.2, Sect. 2.3) involving geometric decay. SimilarlyPp i=1ˇΘ(M)(·) i,n (En)≃ϕnE−λ n andˇΞn(En)≃ϕnE−(λ−1) n . Then for any En=O(n3(1−1/q)/4) and γn∈(0,1) it is straight- forward to verify that Assumption 2.a,b hold. The second part of Assumption 2.c holds with ξn=ϕn. 
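The rate calculations for hyperbolic memory above rest on elementary polynomial tail-sum approximations: for $\lambda > 1$, $\sum_{l \ge m} l^{-\lambda} \approx m^{1-\lambda}/(\lambda - 1)$, and for $\lambda > 2$, $\sum_{l \ge m} l \cdot l^{-\lambda} \approx m^{2-\lambda}/(\lambda - 2)$. A small Python check confirms these rates numerically; the decay rate, cutoff and truncation horizon below are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative tail-sum rates behind the hyperbolic-memory example:
# sum_{l>=m} l^(-lam)     ~ m^(1-lam)/(lam-1)  for lam > 1,
# sum_{l>=m} l*l^(-lam)   ~ m^(2-lam)/(lam-2)  for lam > 2.
lam, m, L = 2.5, 200, 10_000_000   # L truncates the "infinite" sum
l = np.arange(m, L, dtype=float)

tail = np.sum(l ** (-lam))               # Theta-type tail sum
approx = m ** (1 - lam) / (lam - 1)

tailw = np.sum(l ** (1 - lam))           # Xi-type weighted tail sum
approxw = m ** (2 - lam) / (lam - 2)

print(tail / approx, tailw / approxw)    # both ratios close to 1
```

This is the calculus that turns a hyperbolic coefficient bound into rates of the form $\phi_n E_n^{-\lambda}$ and $\phi_n E_n^{-(\lambda-1)}$ in the example.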
Now assume q= 2, heterogeneity ϕn=nϕ,ϕ <(3c−1/4)∧(1/2), memory decay λ = 2, and set block size bn=nbforb∈(0, ϕ), and En=ncandγn=n−γfor some c∈ (1/12,1/4) and γ >0. Thus ln= ln( pn/γ n) = ln( pn1+γ) and 1 ∨ln(p/E−λ n)≲ln(pn) for p ≥e. Ifp/n→ ∞ then ρn≲n nc/2−1/8ln(p)7/8+nϕ/3−2c/3ln (p)2/3o +n−γ+1 n3c/2−1/8−ϕ/2ln(p)3/82/3 ρ∗(M) n≲nc/2−1/8ln(p)7/8+nϕ/3−2c/3ln (p)2/3+n−(ϕ−b)/3ln(p)2/3 +n−γ+1 n3c/2−1/8−ϕ/2ln(p)3/82/3 . 20 Hence from (3.6) Emax i¯x2 i,n≲2 ln (2 p) NnvuutEmax i,l1 bnS(M) n,l(i)2 E max i1 nNnX l=1 S(M) n,l(i) !2 +o(1) for any psatisfying ln(p) = o n3q/4 max i,t∥xi,t∥r 2rM3q/2−(r−1) n∧nq/2−c/2+1/8 Mq n∧n4q/7+4c/7−1/7 M8q/7 n(3.9) ∧ln (Mq n/max iE|¯xi,n|q) ∥max i,t|xi,t|∥qr/(r−1) rq! . Notice 3 q/2−(r−1)>0 for any q≥1 when r <2.5. In that case the truncation pointMnexacts two forces discussed in Remark 2.8: larger Mnreduces the truncation approximation error, allowing for larger ln( p); while smaller Mnreduces the tail remainder in the truncation decomposition. Furthermore, ceteris paribus ifc < q + 1/4,Mn≃nm, m > 0 and m <3q/4 3q/2 +r−1∧q/2−c/2 + 1 /8 q∧4q+ 4c−1 8q, then the fourth term in (3.9) dominates and larger Mnis optimal: ||max i,t|xi,t|||qr/(r−1) rq ×ln(p) =o(m×ln (n/max iE|¯xi,n|q)). 4 Conclusion We develop Lq-maximal moment bounds similar to Nemirovski (2000), in a general dependence setting. Classic arguments exploit symmetrization under independence. Un- der arbitrary weak dependence we exploit a standard multiplier blocking argument with a negligible truncation approximation, and use
Gaussian approximation and comparison theory to sidestep symmetrization. We also require a concentration bound for tail prob- abilities that appears new, and works roughly like a Nemirovski bound for tail measures. The latter arises from the truncation approximation error, leading to a higher moment requirement than in Nemirovski’s case with independence. Examples are provided in order to verify assumptions and to yield Gaussian approximations, working under either mixing or physical dependence conditions. We do not focus on sharpness, leaving that idea for future research, and instead seek broad conditions such that an Lq-maximal moment is bounded. 21 5 Appendix: technical proofs Recall ρn:= sup z≥0 P max i|Xn(i)| ≤z −P max i|Xn(i)| ≤z ρ∗(M) n:= sup z≥0 P max i X(M)∗ n(i) ≤z −P max i X(M) n(i) ≤z δ(M) n:= sup z≥0 P max i X(M) n(i) ≤z −P max i|Xn(i)| ≤z . Proof of Proposition 2.2. By a change of variables, Emax i|¯xi,n|q=E max i|¯xi,n|qImaxi|¯xi,n|≤M n +E max i|¯xi,n|qImaxi|¯xi,n|>Mn =qZMn 0uq−1P max i|¯xi,n|> u du+E max i|¯xi,n|qImaxi|¯xi,n|>Mn =In,1+In,2. Step 1 ( In,1). Add and subtract like terms, and invoke the triangle inequality and the proof of Proposition 2.1 to yield for some r >1 such that max i,t||xi,t||2r<∞, and any x ≥0, P√nmax i|¯xi,n| ≤x −P max i 1√nNnX l=1εlS(M) n,l(i) ≤x! (5.1) ≤δ(M) n+ ρn+ρ∗(M) n ≤(n Mr−1 nmax i,t∥xi,t∥2r 2r1/2 ln(p))2/3 + ρn+ρ∗(M) n :=Bn(p) +Cn(p). Therefore, by twice a change of variables In,1=qZMn 0uq−1P√nmax i|¯xi,n|>√nu du =q nq/2ZMn 0vq−1P√nmax i|¯xi,n|> v dv ≤q nq/2ZMn 0vq−1P max i 1√nNnX l=1εlS(M) n,l(i) ≤v! dv+q{Bn+Cn(p)} nq/2ZMn 0vq−1dv =Emax i 1 nNnX l=1εlS(M) n,l(i) q +Mq n nq/2{Bn(p) +Cn(p)}. 22 Replicate (2.5)-(2.6), and use n=Nnbnto complete the bounds: In,1≤2 ln (2 p) Nnq/2vuutEmax i,l 1 bnS(M) n,l(i) q E max i1 NnNnX l=1 S(M) n,l(i) !q +Mq n nq/2{Bn(p) +Cn(p)}. Step 2 ( In,2): Recall ¯PMn:= max iP(|¯xi,n|>Mn). H¨ older’s inequality yields for r >1 In,2=Emax i|¯xi,n|qImaxi|¯xi,n|>Mn≤ max i|¯xi,n| q rqׯP(r−1)/r Mn. 
(5.2) Claim (a). Combine Lemma 2.3.a with (5.2) and Minkowski’s inequality to deduce In,2 ≤2(r−1)/r||max i|¯xi,n|||q rq× {ln(p)/ln¯P−1 Mn}(r−1)/r. Now use Lyapunov’s inequality and max i|ai| ≤P|ai|to obtain for s > q , In,2≤2(r−1)/rpq/(sr)max i∥¯xi,n∥q rs ln(p) ln¯P−1 Mn!(r−1)/r :=Dn(p). Claim (b). By Lemma 2.3.b with (5.2), and Lyapunov’s inequality, for s > q andr >1 In,2≤2(r−1)/rpq/(sr)pq/(sr)max i∥¯xi,n∥q rsln(p) ln (Mq n/max iE|¯xi,n|q)(r−1)/r :=Dn(p). Moreover, if E|¯xi,n|s=O(n−a) for some a >0 and s >1 then by Lyapunov’s inequality and max i|zi| ≤Pp i=1|zi|we have for s > rq max i|¯xi,n| q rq≤pq/smax i∥¯xi,n∥q s≤Kpq/s naq/s. Hence, in view of (5.2) we may replace ||max i,t|xi,t|||q rqwithKpq/s/naq/syielding as claimed In,2≤2(r−1)/rpq/sn−aq/s(ln(p)/ln(Mq nna))(r−1)/r. Claim (c). Let max iP(|¯xi,n| ≥c)≤aexp{−bnγcγ},a, b > 0,γ > q . By Jensen’s inequality In,2≤1 λln pmax iE exp λ|¯xi,n|qImaxi|¯xi,n|>Mn  for any λ >0. Twice a change of variables, sub-exponential tails, and Mn→ ∞ yield for any c∈[q.γ), and any b∈(0, a] E exp λ|¯xi,n|qImaxi|¯xi,n|>Mn  23 = 1 +Z∞ exp{λMq n}P |¯xi,n|>1 λ1/qln(u)1/q du = 1 + qλZ∞ exp{λMq n}P(|¯xi,n|> v) exp{λvq}vq−1dv ≤1 + 2 qλZ∞ exp{λMq n}exp{λvq−bnγvγ}vq−1dv ≤1 + 2 qλZ∞ exp{λMq n}exp{−bnγvc}dv∀n≥¯nand some ¯
n∈N = 1 +2qλ cnγ/cb1/cZ∞ exp{λMq n}1 u1/c−1exp{−u}du ≤1 +2qλ cnγ/cb1/cZ∞ exp{λMq n}exp{−u}du= 1 +2qλ cnγ/cb1/cexp{−exp{λMq n}}. Now exploit ln(1 + z)≤zforz >0 and λ≥1 to yield In≤1 λln p 1 +2qλ cnγ/cb1/cexp{−exp{λMq n}} ≤1 λln(p) +2qλ cnγ/cb1/cexp{−exp{λMq n}} ≤1 λln(p) +2qλ cnγ/cb1/cexp{−exp{Mq n}}. Choose λ=p cnγ/cb1/c/(2q)×exp{.5 exp{Mq n}}p ln(p) to minimize the upper bound. Thus λ≥1 when p≥eandMn≥[ln(1∨ln(2q/{cb1/c}))]1/q, while the latter holds for alln≥n¯and some n¯∈N. Thus In,2≤2r 2q cnγ/cb1/cp ln(p) exp{exp{Mq n}/2}:=Dn(p). Now choose c=qandb=ato complete the proof.1QED . Proof of Lemma 2.3. Define PMn:= max iP(|¯xi,n| ≤ M n) and ¯PMn:= max iP(|¯xi,n|> Mn). We first prove for any λ >0 P max i|¯xi,n| ≥ M n ≤1 λln(p) +1 λexp{λ} ׯPMn. (5.3) Jensen’s inequality yields for any λ >0 P max i|¯xi,n|>Mn =1 λE ln exp λImaxi|¯xi,n|≥M n  1If, for example, Mn=nm,m > 0, then λ≥1 for all n≥1∨[ln(1∨ln(2/a1/q))]1/(qm):= n¯. 24 ≤1 λln E exp λImaxi|¯xi,n|≥M n  ≤1 λln pmax iE exp λI|¯xi,n|≥M n  . By construction max iE[exp{λI|¯xi,n|≥M n}]≤exp{λ}¯PMn+PMn≤exp{λ}¯PMn+ 1. Now ln(1 + x)≤x∀x≥0 achieves (5.3). Claim (a). The upper bound in (5.3) is minimized at λ= ln ¯P−1 Mn+ ln ln( p) asn→ ∞ Thus P max i|¯xi,n| ≥ M n ≤1 ln¯P−1 Mnln(p)ln(p) +1 ln¯P−1 Mnln(p)exp ln¯P−1 Mnln(p) ׯPMn = 2ln(p) ln¯P−1 Mn+ ln ln( p)≤2ln(p) ln¯P−1 Mn, proving the claim when p > e andp=o(¯P−1 Mn), given ¯P−1 Mn>1. Claim (b). Use¯PMn≤ M−q nmax iE|¯xi,n|qwith (5.3) and Lq-boundedness. Minimizing the upper bound with respect to λ, and setting p=o(Mq n/max iE|¯xi,n|q) yields the claim. Claim (c). Let¯PMn≤aexp{−bnγMγ n}. By (5.3) with λ=nϕMϕ nln(ln p) for any ϕ∈ (0, γ) we have P max i|¯xi,n| ≥ M n ≤1 λln(p) +1 λexp{λ}aexp{−bnγMγ n} =ln(p) nϕMϕ nln(ln p)+a(ln(p))nϕMϕ n nϕMϕ nexp{bnγMγ n}ln(ln p). The upper bound is o(1) if p > e and ln( p) =o(nϕMϕ n∧exp{bnγ−ϕMγ−ϕ n}) =o(nϕMϕ n). QED . Proof of Lemma 3.1. Conditions 1-3 in Chang et al. (2024) hold by construction. 
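The blocking and multiplier device used throughout the proofs ($n = N_n b_n$, block sums $S_{n,l}(i)$, and iid bounded multipliers $\varepsilon_l$ with $E\varepsilon_l^2 = 1$) can be sketched in a few lines. The sketch below is illustrative only (dimensions, seed and the Rademacher choice of multipliers are our own); it checks the algebraic identity that with $\varepsilon_l \equiv 1$ the blocked statistic reproduces $n^{-1/2}\sum_t x_{i,t}$.

```python
import numpy as np

# Illustrative sketch of the multiplier blocking device behind X*_n(i):
# split {x_{i,t}}_{t=1}^{n} into N_n blocks of length b_n, form block sums
# S_{n,l}(i), and attach iid multipliers eps_l. Since n = N_n * b_n,
# (1/sqrt(N_n)) sum_l eps_l S_{n,l}(i)/sqrt(b_n) = (1/sqrt(n)) sum_l eps_l S_{n,l}(i).

def blocked_multiplier_stat(x, b_n, eps):
    """x: (p, n) array; returns X*_n(i) = N_n^{-1/2} sum_l eps_l S_{n,l}(i)/sqrt(b_n)."""
    p, n = x.shape
    N_n = n // b_n
    S = x[:, : N_n * b_n].reshape(p, N_n, b_n).sum(axis=2)  # block sums S_{n,l}(i)
    return (S * eps).sum(axis=1) / np.sqrt(N_n * b_n)

rng = np.random.default_rng(1)
p, b_n, N_n = 5, 8, 25
n = b_n * N_n
x = rng.standard_normal((p, n))

X_full = x.sum(axis=1) / np.sqrt(n)                        # X_n(i)
X_star = blocked_multiplier_stat(x, b_n, np.ones(N_n))     # eps_l = 1 recovers X_n(i)
print(np.max(np.abs(X_full - X_star)))                     # ~ 0: identity holds

# Rademacher multipliers (bounded, mean zero, variance one) give a bootstrap draw:
eps = rng.choice([-1.0, 1.0], size=N_n)
X_boot = blocked_multiplier_stat(x, b_n, eps)
```

Randomizing the signs block by block, rather than observation by observation, is what preserves the within-block dependence that the Gaussian approximation arguments rely on.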
In particular, their Condition 2 αi,j,n(m)≤aexp{−blγ}holds for a≥1, some b >0 and γ = 1. Thus ρn≲χ2/3 nln(p)/n1/9+χn(ln(p))7/6/n1/9by their Theorem 1 and the mapping theorem, yielding (3.2). Now turn to X∗ n(i) = 1 /√nPn t=1ηtxi,t, and let {X∗ n(i)}be a Gaussian process with marginals X∗ n(i)∼N(0,EX∗2 n(i)). We will derive the following Gaussian approximation and Gaussian (without blocking)-to-Gaussian (with blocking) comparison bounds for some {cn(p), dn(p)}, ρ∗∗ n:= sup z≥0 P max i|X∗ n(i)| ≤z −P max i|X∗ n(i)| ≤z ≲cn(p) (5.4) 25 δ∗ n:= sup z≥0 P max i|Xn(i)| ≤z −P max i|X∗ n(i)| ≤z ≲dn(p). (5.5) Hence by the triangle inequality ρ∗ n= sup z≥0 P max i|X∗ n(i)| ≤z −P max i|Xn(i)| ≤z ≲cn(p)∨dn(p). Step 1: eq. (5.4). {ηtxi,t}n t=1isα-mixing with coefficients αi,j,n((m−bn)+) by bn- dependence of ηt, mutual independence and measurability. If the sub-exponential tail component χnis a fixed constant χn=Kthen we can use Proposition 3 in Chang et al. (2023). Otherwise, we use the fact that uniform geometric α-mixing implies geometric physical dependence in the sense of Wu (2005) and Wu and Min (2005). Recall xi,t=gi,t(ϵi,t, ϵi,t−1, . . .), where
ϵi,tare iid for each i. Let{ϵ′ i,t}be an independent copy of {ϵi,t}, and define the coupled process x′ i,t(m) :=gi,t(ϵi,t, . . . , ϵ i,t−m+1, ϵ′ i,t−m, ϵi,t−m−1, . . .),m= 0,1,2, ...Now define the Lp-physical dependence measure θ(p) i,t(m) :=||xi,t−x′ i,t(m)||p. We have θ(q) i,n,t(m) ≤21+2/q||xi,t||1/q qα1/(qr) i,j,n(m) for 1 /q+ 1/r= 1: see McLeish (1975, Lemma 1.6) and Hill (2025, Theorem 2.1), cf. Hill (2024, proof of Theorem 2.10). Hence by supposition and therefore moment bound (3.1), q≥2, and r=q/(q−1), we have for some ω∈(0,1) max i,tθ(q) i,n,t(m)≤21+2/qq1/qχ1/q nωm/(qr)≤4q1/qχ1/q nωm(q−1)/q2. Moreover, by mutual independence and additivity yi,t:=ηtxi,tis also geometrically Lq/2-physical dependent. Let {ε(j) l}Nn l=1be independent copies of {εl}Nn l=1,j= 2, ..., b n, and letε(1) l=εl. Let{˜εt}n t=1be iid random variables constructed block-wise ˜ εbn(l−1)+j=ε(j) lfor j= 1, ..., b n. Hence yi,t=h(˜ϵi,t,˜ϵi,t−1, . . .) where ˜ ϵi,t= [ϵi,t,˜εt]. Using the above notation define ˇθ(q) i,n,t(m) :=||ηt−η′ t(m)||q≤ I m≤bnand˜θ(r) i,n,t(m) :=||yi,t−y′ i,t(m)||r. Then by Minkowski and Cauchy-Schwartz inequalities, |ηt| ≤c a.s. for some c∈(0,∞),χn≥1 and (3.1), ˜θ(q/2) i,n,t(m)≤ ∥xi,t∥q∥ηt−η′ t(m)∥q+∥η′ t(m)∥q xi,t−x′ i,t(m) q =∥xi,t∥qˇθ(q) i,n,t(m) +∥η′ t(m)∥qθ(q) i,n,t(m) ≲qχnIm≤bn+ 4cq1/qχ1/q nωm(q−1)/q2≤4qχn Im≤bn+cωm(q−1)/q2 .(5.6) Moreover, by mutual independence and the definition of χn,||ηtxi,t||ψζ≤cχn. Finally, by supposition and the mixing assumption EX∗2 n(i)∈[K,∞), cf. Davydov (1968). Theorem 26 3(ii) in Chang et al. (2024) thus applies: for some β, ν∈(0,∞) ρ∗ n≲χn(ln(p))7/6+ Ψ(2) n,β1/3 Ψ(2) n,01/3 (ln(p))2/3 nβ/(12+6 β)+Φn,β,ν(ln(p))1+ν nβ/(4+β), where (Ψ(q) n,β,Φn,β,ν) are aggregated dependence adjusted norms (cf. Wu and Wu (2016); Chang et al. (2023)): Ψ(q) n,β= max i( sup m≥0( (m+ 1)β∞X i=m˜θ(q) i,n,t(m))) ≤4qχn sup m≥0( (m+ 1)β∞X i=mIm≤bn) + sup m≥0( (m+ 1)βθX i=mωl(q−1)/q2)! 
≲4qχn( bβ n+ sup m≥0(m+ 1)βωm(q−1)/q2 1−ω(q−1)/q2) ≲4qχn  bβ n+q2β (q−1)βωβ/ln(1/ω)−(q−1)/q2 1−ω(q−1)/q2  ≲χnbβ nqβ+1 and Φn,β,ν = sup q≥2( q−νsup m≥0( (m+ 1)β∞X l=m˜θ(q) i,n,t(l))) (5.7) ≲sup q≥2 q−νχnbβ nqβ+1 =χnbβ nforν=β+ 1, where ω∈(0,1),q≥2, and bn→ ∞ yield the upper bounds. Thus using ν=β+ 1 we have shown for some β >0 (which may be arbitrarily large under geometric mixing) ρ∗ n≲χn(ln(p))7/6+χ2/3 nb2β/3 n(ln(p))2/3 nβ/(12+6 β)+χnbβ n(ln(p))β+2 nβ/(4+β)(5.8) ≤χnbβ n(ln(p))β+2 nβ/(4+β):=cn(p). Finally, the upper bound is minimized with β∗=p 4 lnn/{lnbn+ ln ln( p)} −4, where β∗ >0 given ln( p)< n1/4/bn. Step 2: eq. (5.5). Define σ2 n(i, j) :=EXn(i)Xn(j),σ∗2 n(i, j) :=EX∗ n(i)X∗ n(j) and ∆ n:= max i,j|σ2 n(i, j)−σ∗2 n(i, j)|. By Theorem 2 in Chernozhukov et al. (2015), δ∗ n≲∆1/3 n× {1∨ln(p/∆n)}2/3≤∆1/3 n× {1 + ln( p) +|ln ∆ n|}2/3:=d(n, p). (5.9) 27 It suffices to prove ∆ n≲χn/bn. Subsequently (5.8) with (5.9) yields (3.3). We now prove ∆ n≲χn/bn. Using Davydov (1968, Corollary)’s bound under geometric mixing, mutual independence and Eε2 l= 1, and Lyapunov’s inequality, we deduce for some q >2 and ω∈(0,1),2 ∆n= max i,j 1 nNnX l=1lbnX s=(l−1)bn+1lbnX t/∈(l−1)bn+1Exi,sxj,t ≤max i,t∥xi,t∥q×max i 1 NnNnX l=11 bnlbnX s=(l−1)bn+1lbnX t/∈(l−1)bn+1αi,j,n(|s−t|)1−2/q ≤max i,t∥xi,t∥q×max 1≤l≤Nn 1 bnlbnX s=(l−1)bn+1lbnX t/∈(l−1)bn+1ω|s−t|(1−2/q) . It is straightforward to verifyPlbn s=(l−1)bn+1Plbn t/∈(l−1)bn+1ω|s−t|(1−2/q)=O(1) for each l∈ {1, ...,Nn}since the summand index sets s∈Blandt∈Bc lare mutually exclusive. This proves ∆ n≲max i,t||xi,t||q/bn. Thus the proof is complete
since max i,t||xi,t||q≲qχnfrom (3.1). QED . Proof of Lemma 3.2 . Claim ( a). Assumptions 2.1-2.3 in Zhang and Cheng (2018) [ZC] hold by Assumption 2. Hence (3.7) holds by Theorem 2.1 in ZC when q≥2. Claim ( b). Recall y(M) i,t:=x(M) i,t−Ex(M) i,twith x(M) i,t:=xi,tI|xi,t|≤M n+MnI|xi,t|>Mn, andS(M) n,l(i) :=Plbn t=(l−1)bn+1y(M) i,t. Also X(M) n(i) = 1 /√nPn t=1y(M) i,twith blocked version X(M)∗ n(i) = 1 /√NnPNn l=1εlS(M) n,l(i)/√bn. Let{X(M) n(i),X(M)∗ n(i)}be Gaussian processes with marginals X(M) n(i)∼N(0,EX(M) n(i)2) andX(M)∗ n(i)∼N(0,EX(M)∗ n(i)2). We will bound for some {cn(p), dn(p)}the following Gaussian approximation and Gaussian-to-Gaussian comparison, ρ(M)∗∗ n := sup z≥0 P max i X(M)∗ n(i) ≤z −P max i X(M)∗ n(i) ≤z ≲cn(p) (5.10) δ(M)∗ n := sup z≥0 P max i X(M) n(i) ≤z −P max i X(M)∗ n(i) ≤z ≲dn(p),(5.11) thus ρ∗(M) n≲cn(p)∨dn(p). Step 1: (5.10). It suffices to will prove {εlS(M) n,l(i)/√bn}Nn l=1satisfies Assumptions 2.1-2.3 2A sharp covariance bound, modulo constants, due to Rio (1993) will not improve the rate at which ∆n→0. 28 in ZC. Then by their Theorem 2.1 ρ(M)∗∗ n ≲˜E1/2 n˜l7/8 n n1/8+ ˜γn+n1/8 ˜E1/2 n˜l3/8 nq/(q+1) pX i=1ˇΘ(M)(q/2) i,n (m)!1/(q+1) +ˇΞ(M)1/3 n (m)× 1∨ln p/ˇΞn(m)2/3:=cn(p), where ˜ln:=ln(p, γ) = ln( pn/˜γn)∨1;˜En≥1 and ˜ γn∈(0,1) satisfy n3/8 ˜E1/2 n˜l5/8 n≥Kn Mnh−1(n/˜γn)∨˜l1/2 no ; andˇΘ(M)(q) i,n (m) :=P∞ l=mˇΘ(M)(q) i,t (l) and ˇΞ(M) n(m) := max iP∞ l=ml˜θ(M)(2) i,n (l) with ˜θ(M)(q) i,n (m) := max 1≤l≤Nn εl1√bnlbnX t=(l−1)bn+1x(M) i,t−ε′ l(m)1√bnlbnX t=(l−1)bn+1x(M)′ i,t(m) q. We show below that ˜En=En, ˜γn=γnand therefore ˜ln=lnsuffice. Assumption 2.1 in ZC : We need for some sequence {˜χn},˜χn≥1, (i) max i,l εlS(M) n,l(i)√bn 4≤Kand ( ii) max lEh  εlS(M) n,l(i) ˜χn√bn ≤K. (5.12) The first bound (5.12).( i) holds by mutual independence of ( εl,S(M) n,l(i)),|εl| ≤c a.s. 
, physical dependence Assumption 2.c, and thus Theorems 1 and 2( i) in Wu (2005): max i,l εlS(M) n,l(i)√bn 4≤cmax i,l 1√bnlbnX t=(l−1)bn+1y(M) i,t 4≲∞X m=0max iθM(4) i,n(m) =O(1). Next, set ˜ χn= 4 max {χn,Mn}√bn. By hconvex and strictly increasing, mutual inde- pendence, |εl| ≤c a.s. , and Jensen’s inequality, max tEh  εlS(M) n,l(i) ˜χn√bn ≤max l1 bnlbnX t=(l−1)bn+1Eh  εly(M) i,t 4 max{χn,Mn}  ≤1 2max l1 bnlbnX t=(l−1)bn+1Ehc|xi,t| χn 29 +1 2max l1 bnlbnX t=(l−1)bn+1Eh c MnI|xi,t|>Mn Mn! ≤1 2max tEhc|xi,t| χn +h(c) 2≤K. The final inequality follows from Assumption 2.a. Hence max tEh(|εlS(M) n,l(i)|/[˜χn√bn])≤ Kwhich is (5.12).( ii). Assumption 2.2 in ZC : We need to show there exist ˜En>0 and ˜ γn∈(0,1) such that n3/8/[˜E1/2 n˜l5/8 n]≥K{˜γnh−1(n/˜γn)∨˜l1/2 n}. Use Assumption 2.b with ˜En=Enand ˜γn =γn, and therefore ˜ln=ln. Assumption 2.3 in ZC : Define σ(M)∗2 n (i) :=EX(M)∗ n(i)2=E 1√NnNnX l=1εlS(M) n,l(i)√bn!2 . We need to show 0<min iσ(M)∗2 n (i)≤max iσ(M)∗2 n (i)≤Kfor each n (5.13) and that {εlS(M) n,l(i)/√bn}are uniformly L3-physical dependent with coefficients {θM∗(3) i,n (m)}, max i∞X l=1lθM∗(3) i,n (m)≤K. (5.14) Consider (5.13). Assumption 2.c gives the lower bound. The upper bound holds by (5.14), and Theorems 1 and 2( i) in Wu (2005). It remains to verify (5.14). By mutual independence of ( εl,S(M) n,l(i)), θM∗(3) i,n (m)≤2 max l 1√bnlbnX
t=(l−1)bn+1εl y(M) i,t−y(M)′ i,t(m) 3 = 2 max l  E E  1√bnlbnX t=(l−1)bn+1εl y(M) i,t−y(M)′ i,t(m) 3 |Yi,n    1/3 ≲max l y(M) i,t−y(M)′ i,t(m) 3≲θM(3) i,n(m), (5.15) 30 because by independence of εl,|εl| ≤c a.s. , and Theorems 1 and 2( i) in Wu (2005), max l 1√bnlbnX t=(l−1)bn+1εl y(M) i,t−y(M)′ i,t(m) |Yi,n 3 ≤∞X r=0max l εl y(M) i,t−y(M)′ i,t(m) −ε′ l(r) y(M) i,t−y(M)′ i,t(m) 3 = max l (εl−εl+1) y(M) i,t−y(M)′ i,t(m) 3≤2cmax l y(M) i,t−y(M)′ i,t(m) 3. Now exploit Assumption 2.c with (5.15) to yield (5.14). Step 2: (5.11). Define σ(M)2 n(i, j) :=EX(M) n(i)X(M) n(j),σ(M)∗2 n (i, j) :=EX(M)∗ n(i)X(M)∗ n(j), and ∆(M) n:= max i,j|σ(M)2 n(i, j)−σ(M)∗2 n (i, j)|. We will prove ∆(M) n=O(ξn/bn), hence δ(M)∗ n≲(ξn/bn)1/3ln(p)2/3by Lemma C.5 in Chen and Kato (2019) (cf. Chernozhukov et al., 2015, Theorem 2). By construction, mutual independence and Eε2 l= 1, ∆(M) n = max i,j σ(M)2 n(i, j)−σ(M)∗2 n (i, j) = max i,j 1 nNnX l=1lbnX s=(l−1)bn+1lbnX t/∈(l−1)bn+1Ey(M) i,sy(M) j,t . (5.16) Now define Ft:=σ(ϵt, ϵt−1, . . .) and a projection operator Pty(M) i,t:=EFty(M) i,t−EFt−1y(M) i,t. Then y(M) i,t=P∞ l=0Pt−ly(M) i,t. Therefore by the martingale difference property of Pt−ly(M) i,t, and triangle and Cauchy-Schwartz inequalities, if s≤t Ey(M) i,sy(M) j,t ≤ E∞X m1=0Ps−m1y(M) i,s∞X m2=0Pt−m2y(M) j,t ≤∞X m=0 E Ps−my(M) i,s Ps−my(M) j,t ≤∞X m=0 Ps−my(M) i,s 2 Ps−my(M) j,t 2, and if s > t then|Ey(M) i,sy(M) j,t| ≤P∞ m=0||Pt−my(M) i,s||2× ||Pt−my(M) j,t||2. Use Theorem 1 in Wu (2005) to yield ||Pt−my(M) i,t||2≤θM(2) i,n(m). Thus Ey(M) i,sy(M) j,t ≤∞X m=0θM(2) i,n(m)×θM(2) j,n(m+|t−s|). (5.17) 31 Combine (5.16) with (5.17) to deduce ∆(M) n≤max i,j 1 nNnX l=1lbnX s=(l−1)bn+1lbnX t/∈(l−1)bn+1(∞X m=0θM(2) i,n(m)θM(2) j,n(m+|t−s|)) = max i,j ∞X m=0θM(2) i,n(m)  1 nNnX l=1lbnX s=(l−1)bn+1lbnX t/∈(l−1)bn+1θM(2) j,n(m+|t−s|)   . 
Finally, it is straightforward to verify
\[
\frac{1}{n} \sum_{l=1}^{N_n} \sum_{s=(l-1)b_n+1}^{lb_n} \sum_{t \notin ((l-1)b_n, lb_n]} \theta^{M(2)}_{j,n}(m+|t-s|)
\le \frac{1}{N_n} \sum_{l=1}^{N_n} \left( \frac{1}{b_n} \sum_{s=(l-1)b_n+1}^{lb_n} \sum_{t=1}^{(l-1)b_n} \theta^{M(2)}_{j,n}(m+|t-s|) \right)
+ \frac{1}{N_n} \sum_{l=1}^{N_n} \left( \frac{1}{b_n} \sum_{s=(l-1)b_n+1}^{lb_n} \sum_{t=lb_n+1}^{n} \theta^{M(2)}_{j,n}(m+|t-s|) \right)
\]
\[
= \frac{1}{N_n} \sum_{l=1}^{N_n} \left( \frac{1}{b_n} \sum_{r=1}^{lb_n} r\, \theta^{M(2)}_{j,n}(m+r) \right)
+ \frac{1}{N_n} \sum_{l=1}^{N_n} \left( \frac{1}{b_n} \sum_{r=1}^{n-(l-1)b_n-1} r\, \theta^{M(2)}_{j,n}(m+r) \right).
\]
Under Assumption 2.c and Lyapunov's inequality, $\max_j \max_{m \ge 0} b_n^{-1} \sum_{r=1}^{\infty} r\, \theta^{M(2)}_{j,n}(m+r) = O(\xi_n/b_n)$. Hence
\[
\max_j \max_{m \ge 0} \frac{1}{n} \sum_{l=1}^{N_n} \sum_{s=(l-1)b_n+1}^{lb_n} \sum_{t \notin ((l-1)b_n, lb_n]} \theta^{M(2)}_{j,n}(m+|t-s|) = O(\xi_n/b_n),
\]
and therefore $\Delta^{(M)}_n \lesssim (\xi_n/b_n) \max_i \sum_{m=0}^{\infty} \theta^{M(2)}_{i,n}(m) = O(\xi_n/b_n)$ as claimed. QED.

References

von Bahr, B., Esseen, C.G., 1965. Inequalities for the rth absolute moment of a sum of random variables, 1 ≤ r ≤ 2. Ann. Inst. Statist. Math. 36, 299–303.
Bentkus, V., 2004. On Hoeffding's inequalities. Ann. Probab. 32, 1650–1673.
Bentkus, V., 2008. An extension of the Hoeffding inequality to unbounded random variables. Lith. Math. J. 48, 137–157.
Bühlmann, P., van de Geer, S., 2011. Statistics for High-Dimensional Data. Springer, Berlin.
Chang, J., Chen, X., Wu, M., 2024. Central limit theorems for high dimensional dependent data. Bernoulli 30, 712–742.
Chang, J., Jiang, Q., Shao, X., 2023. Testing the martingale difference hypothesis in high dimension. J. Econometrics 235, 972–1000.
Chen, X., Kato, K., 2019. Randomized incomplete U-statistics in high dimensions. Ann. Statist. 47, 3127–3156.
Chernozhukov, V., Chetverikov, D., Kato, K., 2013. Gaussian approximations and multiplier
bootstrap for maxima of sums of high-dimensional random vectors. Ann. Statist. 41, 2786–2819.
Chernozhukov, V., Chetverikov, D., Kato, K., 2015. Comparison and anti-concentration bounds for maxima of Gaussian random vectors. Probab. Theory Rel. 162, 47–70.
Chernozhukov, V., Chetverikov, D., Kato, K., 2017. Central limit theorems and bootstrap in high dimension. Ann. Probab. 45, 2309–2352.
Chernozhukov, V., Chetverikov, D., Kato, K., 2019. Inference on causal and structural parameters using many moment inequalities. Review of Economic Studies 84, 1867–1900.
Davydov, Y.A., 1968. Convergence of distributions generated by stationary stochastic processes. Theory Probab. Appl. 13, 691–696.
Dezeure, R., Bühlmann, P., Zhang, C.H., 2017. High-dimensional simultaneous inference with the bootstrap. Test 26, 685–719.
Dümbgen, L., van de Geer, S., Veraar, M.C., Wellner, J.A., 2010. Nemirovski's inequalities revisited. Amer. Math. Monthly 117, 138–160.
Giné, E., Zinn, J., 1990. Bootstrapping general empirical measures. Ann. Probab. 18, 851–869.
Hansen, B.E., 1991. Strong laws for dependent heterogeneous processes. Econometric Theory 7, 213–221.
Hansen, B.E., 1992. Erratum: Strong laws for dependent heterogeneous processes. Econometric Theory 8, 421–422.
Hansen, B.E., 1996. Inference when a nuisance parameter is not identified under the null hypothesis. Econometrica 64, 413–430.
Hill, J.B., 2024. Max-laws of large numbers for weakly dependent high dimensional arrays with applications. Technical Report, Dept. of Economics, University of North Carolina.
Hill, J.B., 2025. Mixingale and physical dependence equality with applications. Statistics and Probability Letters 221, in press.
Hill, J.B., Li, T., 2025. A bootstrapped test of covariance stationarity based on orthonormal transformations. Bernoulli 31, 1527–1551.
Hitczenko, P., 1990. Best constants in martingale version of Rosenthal's inequality. Ann. Probab. 18, 1656–1668.
Ibragimov, R., Sharakhmetov, S., 1998.
On an exact constant for the rosenthal inequality. Theory Probab. Appl. 42, 294–302. Jin, L., Wang, S., Wang, H., 2015. A new non-parametric stationarity test of time series in the time domain. J. Roy. Stat. Soc. Ser. B 77, 893–922. Juditsky, A., Nemirovski, A.S., 2008. Large Deviations of Vector-Valued Martingales in 2-Smooth Normed Spaces. Technical Report. Ga. Tech.. Atlanta, GA. Liu, R.Y., 1988. Bootstrap procedures under some non-i.i.d. models. Ann. Statist. 16, 1696–1708. Mammen, E., 1993. Bootstrap and wild bootstrap for high dimensional linear models. Ann. Statist. 21, 255–285. Marcinkiewicz, J., Zygmund, A., 1937. Sur les fonctions ind´ ependantes. Fund. Math. 28, 60–90. Massart, P., Rossignol, R., 2013. Around nemirovski’s inequality, in: Banerjee, M., Bunea, F., Huang, J., Koltchinskii, V., Maathuis, M.H. (Eds.), From Probability to Statistics and Back: High-Dimensional Models and Processes - A Festschrift in Honor of Jon A. Wellner. IMS. volume 9, pp. 254–165. McLeish, D.L., 1975. A maximal inequality and dependent strong laws. Ann. Probab. 3, 829–839. 34 Merlevede, F., Peligrad, M., 2013. Rosenthal-type inequalities for the maximum of partial sums of stationary processes and examples. Ann. Probab. 41, 914–960. Merlevede, F., Peligrad, M., Rio, E., 2011. Bernstein inequality and moderate deviations for weakly dependent sequences. Probab. Theory Rel. 151, 435–474. Volume 5. Nemirovski, A.S., 2000. Topics in nonparametric statistics, in: Emery, M., Nemirovski, A., Voiculescu, D., Bernard, P. (Eds.), Lectures on Probability Theory and Statistics: Ecole d’Ete de Probabilites de Saint-Flour XXVIII - 1998. Springer, New York. volume
https://arxiv.org/abs/2505.17800v1
1738, pp. 87–285. Lectures Notes on Mathematics, 1738. de la Pena, V., Ibragimov, R., Sharakhmetov, S., 2003. On extremal distributions and sharp lp-bounds for sums of multilinear forms. Ann. Probab. 31, 630–675. Pollard, D., 1984. Convergence of Stochastic Processes. Springer Verlag, New York. Rio, E., 1993. Covariance inequalities for strongly mixing processes. Ann. Inst. H. Poincar´ e, Sect. B 29, 587–597. Rio, E., 2017. Asymptotic Theory of Weakly Dependent Random Processes. Springer, Berlin. Rosenthal, H.P., 1970. On the subspaces of Lp(p >2) spanned by sequences of independent random variables. Isreal J. Math. 8, 273–303. Shao, X., 2010. The dependent wild bootstrap. J. Amer. Statist. Assoc. 105, 218–235. Shao, X., 2011. A bootstrap-assisted spectral test of white noise under unknown depen- dence. Journal of Econometrics 162, 213–224. Szewczak, Z.S., 2015. A moment maximal inequality for dependent random variables. Statist. Probab. Lett. 106, 129–133. van der Vaart, A., Wellner, J., 1996. Weak Convergence and Empirical Processes. Springer, New York. Wu, W.B., 2005. Nonlinear system theory: Another look at dependence. Proc. Natl. Acad. Sci. 102, 14150–14154. Wu, W.B., Min, M., 2005. On linear processes with dependent innovations. Stochastic Process. Appl. 115, 939–958. 35 Wu, W.B., Wu, Y.N., 2016. Performance bounds for parameter estimates of high- dimensional linear models with correlated errors. Electron. J. Statist. 10, 352–379. Yokoyama, R., 1980. Moment bounds for stationary mixing sequences. Z. Wahr. verw. Gebiete 52, 45–57. Zhang, D., Wu, W.B., 2017. Gaussian approximation for high dimensional time series. Ann. Statist. 45, 1895–1919. Zhang, X., Cheng, G., 2014. Bootstrapping high dimensional time series. Available at arXiv:1406.1037. Zhang, X., Cheng, G., 2018. Gaussian approximation for high dimensional vector under physical dependence. Bernoulli 24, 2640–2675. 36
https://arxiv.org/abs/2505.17800v1
arXiv:2505.17851v1 [math.ST] 23 May 2025

Optimal Decision Rules for Composite Binary Hypothesis Testing under Neyman-Pearson Framework

Yanglei Song, Berkan Dulek, and Sinan Gezici, Fellow, IEEE

Abstract

The composite binary hypothesis testing problem within the Neyman-Pearson framework is considered. The goal is to maximize the expectation of a nonlinear function of the detection probability, integrated with respect to a given probability measure, subject to a false-alarm constraint. It is shown that each power function can be realized by a generalized Bayes rule that maximizes an integrated rejection probability with respect to a finite signed measure. For a simple null hypothesis and a composite alternative, optimal single-threshold decision rules based on an appropriately weighted likelihood ratio are derived. The analysis is extended to composite null hypotheses, including both average and worst-case false-alarm constraints, resulting in modified optimal threshold rules. Special cases involving exponential family distributions and numerical examples are provided to illustrate the theoretical results.

Index Terms: Binary hypothesis testing, composite hypotheses, optimal detection, randomized decision rules.

Y. Song is with the Department of Mathematics and Statistics, Queen's University, Kingston, Ontario, Canada (e-mail: yanglei.song@queensu.ca). B. Dulek is with the Department of Electrical and Electronics Engineering, Hacettepe University, Beytepe Campus, Ankara 06800, Turkey (e-mail: berkan@ee.hacettepe.edu.tr). S. Gezici is with the Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey (e-mail: gezici@ee.bilkent.edu.tr). Y. Song acknowledges the support of NSERC Grant RGPIN-2020-04256. B. Dulek was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Grant Number 122E493. S. Gezici was supported by TUBITAK under the BIDEB 2219 Program. B. Dulek and S.
Gezici thank TUBITAK for its support.

I. INTRODUCTION

In binary hypothesis testing, the goal is to decide between two possible statistical scenarios, known as hypotheses, based on an observation [1]–[4]. This framework arises in various applications such as target detection, data communication, and classification [5]–[7]. Depending on whether the observation under each hypothesis corresponds to a single probability distribution or a family of distributions, hypothesis testing problems are categorized as simple or composite. In simple binary hypothesis testing problems, the observation follows a single distribution under each hypothesis. For these problems, optimal decision rules can readily be derived using classical optimality frameworks such as Bayesian, minimax, and Neyman-Pearson (NP) hypothesis testing [1]–[4]. In particular, likelihood ratio tests are shown to be optimal, where the ratio of the distributions under the two hypotheses is compared to a threshold. In contrast, composite binary hypothesis testing, where at least one hypothesis is composite, varies in complexity depending on the chosen optimality criterion. For example, in the Bayesian framework, these problems are as tractable as simple hypothesis testing by using averaged likelihood functions in a likelihood ratio test [2]. However, the problem can become complicated in the NP framework, which is the main focus of this paper. In the NP framework for distinguishing between two simple hypotheses, the goal is to maximize the detection probability while maintaining a constraint on the false-alarm probability [2], [6]. When the null hypothesis is composite, the common practice is to enforce the
false-alarm constraint for all possible distributions under the null hypothesis [2], [8], [9]. In contrast, when the alternative hypothesis is composite, various approaches are employed in the literature. One approach is to seek a uniformly most powerful (UMP) decision rule that maximizes the detection probability for all possible distributions under the alternative hypothesis, while satisfying the false-alarm constraint [2], [6]. However, such a decision rule exists only under specific conditions [2]. An alternative approach is to maximize the average detection probability under the false-alarm constraint [10]–[13]. In this case, the problem reduces to an NP problem for a simple alternative hypothesis by representing the alternative distribution as the integrated distribution over the prior on the parameter under the alternative hypothesis [14]. This approach, however, requires a prior distribution to compute the average detection probability. If such a prior distribution is unavailable (as is often the case in NP formulations), a maximin approach can be used. This method aims to maximize the minimum detection probability subject to the false-alarm constraint [8], [9]. The solution can be characterized by an NP decision rule corresponding to a least-favorable distribution under the alternative hypothesis in a dual problem formulation [9]. Notably, considering the least-favorable distribution is equivalent to assuming a worst-case scenario, which corresponds to a conservative approach. Modifications to this approach are studied in [15], [16] utilizing the interval probability concept. As a generalization, the restricted NP approach is proposed in [14], addressing uncertainty in the prior distribution of the alternative hypothesis.
A restricted NP decision rule aims to maximize the average detection probability calculated based on the uncertain prior distribution while ensuring that the minimum detection probability does not fall below a predefined threshold and that the false-alarm probability remains below a specified significance level. It is shown that the restricted NP decision rule can be specified as a classical NP decision rule corresponding to a certain least-favorable distribution [14]. The generalized likelihood ratio test (GLRT) is also a common and practical approach for composite binary hypothesis testing [2], [6]. In the GLRT, the maximum likelihood estimates of the unknown parameter are obtained under both hypotheses using the observation and substituted into the corresponding likelihood expressions in place of the unknown parameter value. In this way, a likelihood ratio test is formed based on the parameter values that best fit the observation under both hypotheses. While the GLRT is easy to design and implement, it lacks optimality properties in general for finitely many observations, though it has some asymptotic optimality properties [17]. As an alternative asymptotic approach, the studies in [18]–[21] adopt a generalized version of the NP criterion, which aims to maximize the misdetection exponent uniformly across all distributions under the alternative hypothesis subject to the false-alarm exponent constraint. Moreover, in [22], a simple null and composite alternative hypothesis testing problem is considered under the NP framework, and a decision rule is proposed to achieve the optimal error exponent tradeoff for each alternative distribution. Our work addresses non-asymptotic composite binary hypothesis testing problems within an NP detection-theoretic framework.
Our primary motivation stems from recent advances in behavioral utility-based detection problems [23]–[26]. Specifically, for decision-making tasks involving humans in the loop, their cognitive biases and subjective perception of probabilities, gains, and losses need to be taken into account. These effects can be incorporated into the system performance metric using prospect-theoretic probability weighting and cost valuation functions [27], which introduce a nonlinear relationship between actual and perceived performance metrics. Despite extensive prior research, deriving optimal decision rules under such nonlinearities remains a challenge. In this regard, the main contributions and novel aspects of this manuscript can be listed as follows.

1) In Section III, we derive structural results on the set of all power functions, where the power function of a randomized decision rule is defined as its rejection probabilities over the parameter space. Specifically, we show that each power function can be realized by a "generalized Bayes rule" that maximizes the integral of a power function with respect to (w.r.t.) some finite signed measure (see Theorem 3). Thus, for any hypothesis testing problem whose formulation solely depends on power functions, it suffices to consider the family of "generalized Bayes rules".

2) In Section IV, we consider the problem of testing a simple null hypothesis against a composite alternative. The objective is to maximize the expected value of a continuously differentiable but otherwise arbitrary function g(·) of the detection probability, subject to a constraint on the false-alarm probability. The expectation is taken w.r.t. a given probability measure over the unknown parameter under the alternative hypothesis. We show that optimal detection is achieved using a single-threshold test on the following statistic: the ratio of a probability density function (p.d.f.), averaged over the alternative hypothesis, to the p.d.f.
under the null hypothesis, where the averaging measure depends on the power function of the optimal detection rule (see Theorem 5). If g is strictly increasing and the likelihood ratio can be expressed as a convex function of a scalar-valued sufficient statistic for the family of distributions indexed by the unknown parameter, then the optimal test accepts the null hypothesis when the sufficient statistic falls within an interval (possibly semi-infinite) of the real line and rejects it otherwise.

3) In Sections V and VI, we extend our analysis to the case where the null hypothesis is also composite, while maintaining the same objective function as in Section IV. In Section V, we impose a constraint on the average false-alarm probability w.r.t. a given probability measure Λ0 for the unknown parameter under the null hypothesis. In contrast, in Section VI, the constraint is on the worst-case false-alarm probability under the null. For average false-alarm control, we show that if we replace the single null p.d.f. from the previous part with the Λ0-integrated p.d.f. under the composite null hypothesis, then the modified single-threshold rule is optimal (see Theorem 8). In addition, for the supremum false-alarm control, we establish that replacing the single null p.d.f. with the integrated p.d.f., taken w.r.t. some least-favorable distribution under the null, yields the optimal decision rule (see Theorem 10).

4) Throughout the paper, several theorems and
corollaries are derived for the special case of single-parameter exponential family distributions (see Corollary 7, Theorem 9 and Theorem 11). Numerical examples from behavioral utility-based detection theory are also provided to corroborate the theoretical results.

II. PROBLEM FORMULATION

Consider a compact parameter space Θ ⊂ R^{d1} for some integer d1 ≥ 1, and partition it into two disjoint, non-empty subsets, Θ0 and Θ1, such that Θ = Θ0 ∪ Θ1 and Θ0 ∩ Θ1 = ∅. Denote by Y an observation that takes values in the observation space Y ⊂ R^{d2} for some integer d2 ≥ 1. Let µ be a σ-finite measure on Y, and for each θ ∈ Θ, let f_θ be a p.d.f. w.r.t. µ. When θ ∈ Θ is true, the observation Y has density f_θ w.r.t. µ. The goal is to decide, based on the observation Y, whether θ belongs to Θ0 or Θ1, which correspond to hypothesis H0 and hypothesis H1, respectively. Specifically, a randomized decision rule δ is a measurable function from Y to [0,1] such that δ(Y) represents the probability of rejecting H0 (or, equivalently, accepting H1). Let ∆ denote the collection of all such randomized decision rules. For each rule δ ∈ ∆ and θ ∈ Θ, we define

p(θ; δ) := E_θ[δ(Y)] = ∫_Y δ(y) f_θ(y) µ(dy),

which is the probability of rejecting H0 when θ is the true parameter value. We refer to p(·; δ) as the power function of δ, and let

P = {p(·; δ) : δ ∈ ∆}

denote the set of all power functions that can be realized. In this work, we investigate various optimization problems with the aim of finding decision rules that optimize a certain objective function subject to constraints, where both the objective function and the constraints depend solely on the associated power functions.

III. STRUCTURAL RESULTS

In this section, we establish structural results for the set P of all power functions, and show that for any hypothesis testing problem, it is sufficient to focus on the family of "generalized Bayes rules".
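To make the definition of the power function concrete, p(θ; δ) = E_θ[δ(Y)] can be approximated numerically. The sketch below (a hypothetical illustration, not from the paper) estimates the power of a simple threshold rule under the N(θ, 1) family by Monte Carlo and compares it with the closed form.

```python
import math
import random

def power_mc(delta, sampler, n=200_000, seed=0):
    """Monte Carlo estimate of p(theta; delta) = E_theta[delta(Y)]."""
    rng = random.Random(seed)
    return sum(delta(sampler(rng)) for _ in range(n)) / n

# Hypothetical threshold rule delta(y) = 1{y > 1.5} under Y ~ N(theta, 1).
theta = 0.5
delta = lambda y: 1.0 if y > 1.5 else 0.0
est = power_mc(delta, lambda rng: rng.gauss(theta, 1.0))

# Closed form for comparison: p(theta; delta) = 1 - Phi(1.5 - theta) ≈ 0.159.
exact = 1.0 - 0.5 * (1.0 + math.erf((1.5 - theta) / math.sqrt(2.0)))
print(round(est, 3), round(exact, 3))
```

The same estimator applies verbatim to randomized rules, since delta may return any value in [0, 1].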
For each θ ∈ Θ, denote by µ_θ the distribution of the observation Y when θ is the true parameter value, that is, dµ_θ/dµ(y) = f_θ(y) for y ∈ Y, where dµ_θ/dµ denotes the Radon-Nikodym derivative of µ_θ w.r.t. µ.

Assumption 1. The function θ ↦ µ_θ is continuous in total variation distance, that is,

lim_{n→∞} ∫_Y |f_{θ_n}(y) − f_θ(y)| µ(dy) = 0,

for all θ_n, n ≥ 1, and θ ∈ Θ with lim_{n→∞} θ_n = θ. Further, for any θ, θ′ ∈ Θ, µ_θ and µ_{θ′} are mutually absolutely continuous.

The last condition of Assumption 1 requires that the measures {µ_θ : θ ∈ Θ} have the same support. Denote by C(Θ) the space of continuous functions on the compact set Θ, and we equip it, as well as its subsets, with the supremum norm ∥·∥_∞. For a vector v ∈ R^{d1}, denote its Euclidean norm by ∥v∥. We write "µ-a.e." to mean "µ-almost everywhere" for simplicity. We summarize some properties of P in the following lemma and provide its proof in Appendix A.

Lemma 1. Suppose Assumption 1 holds.
1) P ⊂ C(Θ).
2) P is convex and compact.
3) Assume that Θ ⊂ R^{d1} has a non-empty interior Θ°. Let U ⊂ Θ° be a non-empty open set. Assume that for µ-a.e. y, the function θ ∈ U ↦ f_θ(y) ∈ R is differentiable with gradient ḟ_θ(y). In addition, assume there exists a function F: Y → R such that sup_{θ∈U} ∥ḟ_θ(y)∥ ≤ F(y) for µ-a.e. y and ∫_Y F(y) µ(dy) < ∞. Then the interior of P is empty.
4) Let p ∈ P. If 0 < p(θ′) < 1 for some θ′ ∈ Θ, then 0 < inf_{θ∈Θ} p(θ) ≤ sup_{θ∈Θ} p(θ) < 1.

Remark 1. In part 3) of Lemma 1, we require the parameter space Θ to have a non-empty interior (i.e., to contain an open set), and thus exclude
the case of finite Θ for that part. The intuition is that, under the assumptions in part 3), any p ∈ P is a differentiable function on U. However, C(Θ) is sufficiently rich that in any neighborhood of p ∈ P, there exist continuous but non-differentiable functions on U.

A. Generalized Bayes Rules

Denote by M(Θ), M_+(Θ) and M_1(Θ) the space of finite signed measures, finite measures, and probability measures on Θ, respectively (see [28, Section 3.1] for definitions of signed measures). Further, recall the Hahn-Jordan decomposition of signed measures in [28, Theorem 3.4]. For two probability measures π_+, π_− ∈ M_1(Θ), a non-negative real number c ≥ 0, and a measurable function γ: Y → [0,1], we define the following rule: for y ∈ Y,

δ∗(y; π_+, π_−, c, γ) =
  1,    if ∫_Θ f_θ(y) π_+(dθ) > c ∫_Θ f_θ(y) π_−(dθ),
  γ(y), if ∫_Θ f_θ(y) π_+(dθ) = c ∫_Θ f_θ(y) π_−(dθ),
  0,    if ∫_Θ f_θ(y) π_+(dθ) < c ∫_Θ f_θ(y) π_−(dθ).   (1)

The next lemma shows that when the objective is to maximize the averaged power function with respect to some finite signed measure ν ∈ M(Θ), the optimal rule must be of the form in (1). For this reason, we refer to this family of decision rules as "generalized Bayes rules". (Conventional Bayes rules employ positive measures and do not focus on the equality case, as it does not affect the Bayes risk.)

Lemma 2. Let ν ∈ M(Θ) be a finite signed measure. Denote by ν_+ ∈ M_+(Θ) and ν_− ∈ M_+(Θ) the positive and negative parts in the Hahn decomposition of ν, respectively. Assume ν_+ and ν_− are non-zero. Consider the optimization problem:

max_{δ∈∆} ∫_Θ p(θ; δ) ν(dθ).   (2)

Let δ ∈ ∆. Then δ is an optimal rule for (2) if and only if δ is equal, up to µ-a.e. y, to δ∗(·; π_+, π_−, c, γ), where

π_+ := ν_+ / ∫_Θ ν_+(dθ),  π_− := ν_− / ∫_Θ ν_−(dθ),  c := ∫_Θ ν_−(dθ) / ∫_Θ ν_+(dθ),

and γ: Y → [0,1] is some measurable function.

Proof. See Appendix A.

Remark 2. If ν_− (resp. ν_+) is the zero measure, then the optimal rule above is given by ˜δ(y) = 0 (resp. ˜δ(y) = 1) for y ∈ Y. In either case, the optimal rule is in the form of (1).

Remark 3.
Rules in the form of (1) may also be viewed as generalizations of NP rules for simple-versus-simple hypothesis testing problems. In particular, π_+ and π_− are two probability measures on Θ with disjoint support, and we threshold the ratio of the two integrated likelihood functions.

B. General Structural Results

In this subsection, we focus on the case where P has an empty interior. As noted in Lemma 1, if the parameter space Θ has a non-empty interior (e.g., if Θ ⊂ R contains an interval), then under mild technical conditions, P ⊂ C(Θ) indeed has an empty interior. The following theorem shows that any power function can be realized by some "generalized Bayes rule" in (1).

Theorem 3. Let p ∈ P be an arbitrary power function. Suppose that Assumption 1 holds, and that P has an empty interior. There exist some π_+, π_− ∈ M_1(Θ), c ≥ 0, and γ: Y → [0,1] such that π_+ and π_− are mutually singular, and p is the power function associated with the decision rule δ∗(·; π_+, π_−, c, γ).

Proof. See Appendix B.

The above theorem establishes that for any hypothesis testing problem, if its formulation depends solely on the power function, it is sufficient to consider the family of generalized
Bayes rules in (1). However, at this generality, we cannot explicitly characterize the support of π_+ or π_− for each power function p ∈ P. In subsequent sections, we analyze specific optimization problems and derive additional results concerning the structure of optimal decision rules.

Remark 4. The key strategy for proving Theorem 3 is to establish that every p ∈ P is a support point of P, that is, it solves the optimization problem in (2) corresponding to some non-zero signed measure ν ∈ M(Θ). This enables us to apply Lemma 2. Note that for general Banach spaces, such as the space of absolutely summable sequences, not every point in a closed, convex subset with an empty interior is a support point; see Example 7.8 in [29].

For any π_+, π_− ∈ M_1(Θ) and c ≥ 0, we define B(π_+, π_−, c) to be the following set:

B(π_+, π_−, c) = { y ∈ Y : ∫_Θ f_θ(y) π_+(dθ) = c ∫_Θ f_θ(y) π_−(dθ) }.

If such sets have zero probability for mutually singular π_+ and π_−, then it suffices to consider the deterministic (i.e., without randomization) generalized Bayes rules in (1), which follows immediately from Theorem 3.

Corollary 4. Let p ∈ P be an arbitrary power function. Assume the conditions in Theorem 3 hold. Further, assume for all π_+, π_− ∈ M_1(Θ) that are mutually singular and any c ≥ 0 that

µ(B(π_+, π_−, c)) = 0.   (3)

Let γ̄_1(y) = 1 for y ∈ Y. There exist some π_+, π_− ∈ M_1(Θ) and c ≥ 0 such that π_+ and π_− are mutually singular, and p is the power function associated with the decision rule δ∗(·; π_+, π_−, c, γ̄_1).

Remark 5. When µ is the Lebesgue measure on R^{d2} and {f_θ : θ ∈ Θ} is the family of exponential family distributions in (10), the "zero probability" condition in (3) holds.

Remark 6. By part 3) of Lemma 1, for composite hypotheses, P typically has an empty interior. However, when P has a non-empty interior (for instance, when Θ has finitely many elements), we can apply the usual supporting hyperplane theorem [29, Lemma 7.7] to conclude that every boundary point of P is a support point.
Consequently, in this case, every power function p ∈ P can be realized as a convex combination of at most two generalized Bayes rules of the form in (1).

IV. SIMPLE NULL VERSUS COMPOSITE ALTERNATIVE

Let θ0 ∈ Θ. In this section, we aim to test

H0: θ = θ0, vs. H1: θ ∈ Θ1 := Θ \ {θ0}.

For instance, if Θ = [−K, K] for some K > 0 and θ0 = 0, then the test is two-sided. In general, however, there is no uniformly most powerful (UMP) test [3]. As an example, if f_θ represents the density of the normal distribution with mean θ and variance 1, no UMP test exists. Our goal is to solve the following problem:

max_{δ∈∆} ∫_{Θ1} g(p(θ; δ)) Λ1(dθ), subject to p(θ0; δ) ≤ α,   (4)

where α ∈ (0,1) is a user-specified level, g: [0,1] → R is a measurable function, and Λ1(·) is a probability measure on Θ1. Note that the weighting probability Λ1(·) is general; e.g., it can be finitely discrete.

Assumption 2. g is continuously differentiable on (0,1) and continuous at {0,1}.

If g(t) = t for t ∈ [0,1], the optimal test follows from the NP lemma. The problem becomes more interesting when g is nonlinear, such as g(t) = t^κ for t ∈ [0,1], where κ > 0 is given. Another example, inspired by the literature on prospect theory-based hypothesis testing [23], [27], is g(t) = ω_v(t) for t ∈ [0,1], where v > 0 is user-specified, and

ω_v(t) := t^v / (t^v + (1 − t)^v)^{1/v}.   (5)

To derive stronger results, we impose the following assumption in certain cases, which requires g to be strictly increasing. This assumption is satisfied by the examples mentioned above.

Assumption 3. g′(t) > 0 for 0 < t < 1.

Below, we denote by δ̄0 the decision rule that always accepts H0, that is, δ̄0(y) = 0 for y ∈ Y.

Theorem 5. Suppose that Assumptions 1 and 2 hold, and that δ̄0 is not an optimal solution to problem (4).
i) There exists an optimal rule δ∗ ∈ ∆ for problem (4) such that for some constant κ ∈ R and a measurable function γ: Y → [0,1],

δ∗(y) =
  1,    if H(y) > κ f_{θ0}(y),
  γ(y), if H(y) = κ f_{θ0}(y),
  0,    if H(y) < κ f_{θ0}(y),   (6)

for each y ∈ Y, where

H(y) := ∫_{Θ1} g′(p(θ; δ∗)) f_θ(y) Λ1(dθ).   (7)

ii) Any optimal rule δ∗ ∈ ∆ must take the form given in (6), with the function H(·) defined in (7), for µ-a.e. y.
iii) If, in addition to Assumptions 1 and 2, Assumption 3 holds, then δ̄0 is not an optimal solution to problem (4), and for any optimal rule δ∗ ∈ ∆, we have p(θ0; δ∗) = α.

Proof. See Appendix C.

Remark 7. The proof strategy for Theorem 5 is to establish that δ∗ is the optimal decision rule for a simple-versus-simple hypothesis testing problem by linearizing the objective function in (4) at p(·; δ∗) ∈ P.

Remark 8. In general, such as when g is a decreasing function, δ̄0 may be an optimal solution to problem (4). In Theorem 5, we exclude this case, which implies that for any optimal rule δ∗, p(θ; δ∗) ∈ (0,1) for all θ ∈ Θ, and thus g′(p(θ; δ∗)) is well defined.

Remark 9. Assumption 1 holds under mild conditions; for example, it holds if, for µ-a.e. y, θ ∈ Θ ↦ f_θ(y) ∈ [0,∞) is continuous and sup_{θ∈Θ} |f_θ(y)| ≤ F(y) with ∫ F(y) µ(dy) < ∞. Assumptions 2 and 3 concern the function g in (4).

The preceding theorem establishes that an optimal rule exists, which rejects (resp. accepts) H0 if the function H(·) is strictly above (resp. below) a multiple of f_{θ0}. Further, any optimal rule must be of this form.
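To illustrate how a threshold rule of this kind operates, the sketch below evaluates a generalized Bayes rule of the form (1) for the N(θ, 1) family with finitely supported measures; the particular choices of π_+, π_−, and c are hypothetical.

```python
import math

def normal_pdf(y, theta, sigma=1.0):
    """Density of N(theta, sigma^2)."""
    z = (y - theta) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def generalized_bayes(y, pi_plus, pi_minus, c, gamma=0.5):
    """Rule (1): threshold the ratio of two integrated likelihoods.

    pi_plus, pi_minus: finitely supported measures as {theta: weight} dicts.
    Returns the rejection probability delta*(y).
    """
    num = sum(w * normal_pdf(y, th) for th, w in pi_plus.items())
    den = sum(w * normal_pdf(y, th) for th, w in pi_minus.items())
    if num > c * den:
        return 1.0   # reject H0
    if num < c * den:
        return 0.0   # accept H0
    return gamma     # randomize on the boundary (typically a mu-null set)

# pi_+ uniform on {-1, +1} (alternative), pi_- a point mass at theta0 = 0.
rule = lambda y: generalized_bayes(y, {-1.0: 0.5, 1.0: 0.5}, {0.0: 1.0}, c=1.2)
print(rule(2.0), rule(0.0))  # → 1.0 0.0: extreme y rejects, central y accepts
```

With π_− a point mass at θ0, this is exactly the structure of the rule in (6): the integrated alternative density is compared against a multiple of f_{θ0}.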
It is clear that the optimal rule in (6) is a special case of (1), where π_− is supported on the singleton {θ0}, and, if g is strictly increasing, π_+ is supported on Θ1. The above structural results can be simplified significantly under additional assumptions.

Corollary 6. Suppose that Assumptions 1, 2, and 3 hold. Let T: Y → R be a measurable function, and assume that for each θ ∈ Θ1, there exists a strictly convex function ϕ_θ: R → R such that

f_θ(y) / f_{θ0}(y) = ϕ_θ(T(y)), for y ∈ Y.   (8)

Then, there exists an optimal rule δ∗ for problem (4) such that p(θ0; δ∗) = α, and for some constants −∞ ≤ ℓ < u ≤ ∞ and a measurable function γ: Y → [0,1], we have, for each y ∈ Y,

δ∗(y) =
  1,    if T(y) > u or T(y) < ℓ,
  γ(y), if T(y) ∈ {ℓ, u},
  0,    if ℓ < T(y) < u.   (9)

Further, assume µ({y ∈ Y : T(y) = c}) = 0 for all c ∈ R. Then, for some constants −∞ ≤ ℓ < u ≤ ∞,

δ∗(y) = 1{T(y) ≥ u or T(y) ≤ ℓ}.

Proof. See Appendix D.

Remark 10. The function T(y) in (8) is a sufficient statistic for the family {f_θ(·) : θ ∈ Θ}.

A. Exponential Family Distributions

In this subsection, we illustrate the preceding results with examples from the single-parameter exponential
family distributions [30, Chapter 4.2]. Specifically, let

f_θ(y) = c(θ) h(y) e^{θ T(y)},   (10)

where h: Y → [0,∞) and T: Y → R are given, and the domain is

D := { θ ∈ R : c(θ)^{−1} := ∫_Y h(y) e^{θ T(y)} µ(dy) < ∞ }.

Assume that D is open and that Θ ⊂ D is compact.

Corollary 7. For the exponential family distributions in (10), Assumption 1 and condition (8) hold.

Proof. See Appendix D.

If Assumptions 2 and 3 hold, which concern the function g in (4), then Corollary 6 can be applied to the above exponential family distributions. More concretely, let µ be the Lebesgue measure and Y = R. Corollary 6 applies to the following families {f_θ : θ ∈ Θ}:
• Normal (Gaussian) densities with a known variance σ² > 0: f_θ(y) = (1/(√(2π) σ)) exp(−(y − θ)²/(2σ²)) for y ∈ R.
• Exponential densities: f_θ(y) = θ exp(−θy) 1{y > 0} for y ∈ R.
• Beta densities with a single shape parameter: f_θ(y) = θ y^{θ−1} 1{0 < y < 1} for y ∈ R.

For the above examples, the condition that µ({y ∈ Y : T(y) = c}) = 0 for all c ∈ R also holds, and thus there exists a deterministic optimal rule, as shown in Corollary 6. In addition, Corollary 6 applies to discrete exponential family distributions, with µ being the counting measure on R. Examples include the Binomial, Geometric, and Poisson distributions. Finally, if the observation Y = (Y1, ..., Yn), where Y1, ..., Yn are independent and identically distributed according to one of the distributions mentioned above, then Y has a density of the form in (10) with respect to some product measure. Therefore, Corollary 6 applies.

TABLE I
OPTIMAL ℓ∗ CORRESPONDING TO VARIOUS β FOR THE FUNCTION IN (11).
β  | 1/3     | 1/2     | 2/3     | 1       | 1.5     | 2       | 3
ℓ∗ | -2.3198 | -2.0382 | -1.8587 | -1.6447 | -1.4874 | -1.4101 | -1.3419

B. Numerical Example - Normal Distributions

In this subsection, we consider a numerical study using the normal distribution family with a known variance. Specifically, let Θ = [−1, 1] and θ0 = 0. Let µ be the Lebesgue measure and f_θ be the density of the normal distribution with mean θ and variance 1.
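In this setting, a two-sided rule rejects when Y ≤ ℓ or Y ≥ Φ⁻¹(1 − (α − Φ(ℓ))), so that its size at θ0 = 0 is exactly α. A minimal sketch of the grid search over the left threshold ℓ, using only the standard library (Φ via math.erf, Φ⁻¹ by bisection; the quadrature and grid resolutions are arbitrary choices):

```python
import math

def phi(x):
    """Standard normal CDF via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p):
    """Standard normal quantile by bisection; ample accuracy for a sketch."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def omega(t, v=0.69):
    """Prospect-theoretic probability weighting, eq. (5)."""
    return t ** v / (t ** v + (1.0 - t) ** v) ** (1.0 / v)

def objective(ell, beta, alpha=0.10, v=0.69, n=200):
    """Midpoint-rule approximation of the weighted average power in (11)."""
    u = phi_inv(1.0 - (alpha - phi(ell)))   # right threshold fixes size alpha
    def power(theta):                        # p(theta; delta_ell)
        return phi(ell - theta) + 1.0 - phi(u - theta)
    h = 1.0 / n
    mids = [(i + 0.5) * h for i in range(n)]
    left = h * sum(omega(power(-t), v) for t in mids)   # integral over [-1, 0]
    right = h * sum(omega(power(t), v) for t in mids)   # integral over [0, 1]
    return beta * left + right

alpha = 0.10
# Grid search over ell in [-4, Phi^{-1}(alpha)] for beta = 2/3.
step = (phi_inv(alpha) + 4.0) / 200
ells = [-4.0 + k * step for k in range(201)]
best = max(ells, key=lambda l: objective(l, beta=2.0 / 3.0))
print(round(best, 2))  # close to the value -1.8587 reported in Table I
```

The same search, with ω_v replaced by a general g and the quadrature weights replaced by Λ1, applies to any rule family of the interval form (9).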
By Corollaries 6 and 7, it suffices to consider the following rules:

˜δ_ℓ(y) = 1{y ≥ Φ⁻¹(1 − (α − Φ(ℓ))) or y ≤ ℓ},

where −∞ ≤ ℓ ≤ Φ⁻¹(α), and Φ and Φ⁻¹ are the cumulative distribution function and the quantile function of the standard normal distribution, respectively. For −∞ ≤ ℓ ≤ Φ⁻¹(α), the power function of ˜δ_ℓ is

p(θ; ˜δ_ℓ) = ∫ ˜δ_ℓ(y) f_θ(y) dy = Φ(ℓ − θ) + 1 − Φ(Φ⁻¹(1 − (α − Φ(ℓ))) − θ).

Further, the value of the objective function is ∫_{Θ1} g(p(θ; ˜δ_ℓ)) Λ1(dθ). A grid search over ℓ ∈ [−∞, Φ⁻¹(α)] would yield an optimal rule for the problem in (4). Now, consider the following more specific optimization problem:

max_{δ∈∆} β ∫_{−1}^{0} ω_v(p(θ; δ)) dθ + ∫_{0}^{1} ω_v(p(θ; δ)) dθ, subject to p(0; δ) ≤ α,

where β > 0, v > 0, and ω_v is defined in (5). The optimal rule is given by ˜δ_{ℓ∗}, where ℓ∗ maximizes the function that maps ℓ ∈ [−∞, Φ⁻¹(α)] to

β ∫_{−1}^{0} ω_v(p(θ; ˜δ_ℓ)) dθ + ∫_{0}^{1} ω_v(p(θ; ˜δ_ℓ)) dθ.   (11)

We set α = 10% and v = 0.69. For different values of β, we report the optimal ℓ∗ in Table I using numerical integration. Note that for β = 1, due to symmetry, we have ℓ∗ = Φ⁻¹(α/2). Further, in Fig. 1, for β = 2/3, we plot the objective value as a function of ℓ from −4 to Φ⁻¹(0.1).

Fig. 1. The value of the objective in (11) as a function of ℓ for β = 2/3 and v = 0.69.

C. Numerical Example - Binomial Distributions

In this subsection, we consider a numerical study using the Binomial distribution with n trials and success probability θ. Specifically, let θ0 = 0.5 and Θ = [0.4, 0.6]. We aim to solve the following problem:

max_{δ∈∆} β ∫_{0.4}^{0.5} ω_v(p(θ; δ)) dθ + ∫_{0.5}^{0.6} ω_v(p(θ; δ)) dθ, subject to p(0.5; δ) ≤ α,   (12)

where we recall that ω_v(·) is defined in (5), β > 0, and α ∈ (0,1). By Corollaries 6 and 7, there exists an optimal rule of the following form: for 0 ≤ y ≤ n,

˜δ_{ℓ,u,p_ℓ,p_u}(y) =
  1,   if y > u or y < ℓ,
  p_u, if y = u,
  p_ℓ, if y = ℓ,
  0,   if ℓ < y < u,

where 0 ≤ ℓ, u ≤ n and 0 ≤ p_ℓ, p_u ≤ 1 are chosen such that

α = P_{θ0}(Y > u) + P_{θ0}(Y < ℓ) + p_ℓ · P_{θ0}(Y = ℓ) + p_u · P_{θ0}(Y = u).

Here, P_{θ0} means that Y has the Binomial distribution with n trials and success probability θ0. Once ℓ and p_ℓ are chosen, u and p_u can be solved from the above equation if they exist. Thus, a grid search over ℓ and p_ℓ would yield an optimal solution. For n = 10, α = 9%, and v = 0.69, we report in Table II the optimal values of (ℓ∗, u∗, p_{ℓ∗}, p_{u∗}) for various β. Note that, due to symmetry, for β = 1 we have ℓ∗ + u∗ = 10 and p_{ℓ∗} = p_{u∗}. Further, in Fig. 2, we plot the objective function value as a function of p_ℓ for β = 1.05 and ℓ = 2.

TABLE II
OPTIMAL ℓ∗, u∗, p_{ℓ∗}, p_{u∗} CORRESPONDING TO VARIOUS β FOR PROBLEM (12).
β      | 0.5    | 0.97   | 1      | 1.05   | 1.2
ℓ∗     | 1      | 2      | 2      | 2      | 2
u∗     | 7      | 8      | 8      | 8      | 8
p_{ℓ∗} | 1      | 0.7189 | 0.7799 | 0.8776 | 1
p_{u∗} | 0.2097 | 0.8402 | 0.7792 | 0.6814 | 0.5591

Fig. 2. The value of the objective in (12) as a function of p_ℓ for ℓ = 2, β = 1.05, and v = 0.69.

D. Discussions

As previously discussed, we focus on cases where no UMP test exists. However, Theorem 5 also applies when UMP tests do exist, recovering their existence and structural properties, which is well known in the literature [3]. Specifically, assume that g(t) = t for t ∈ [0,1] in this subsection. When Θ1 contains a single element, say θ1, the probability measure Λ1 is the Dirac measure at θ1. The problem (4) reduces to the following:

max_{δ∈∆} p(θ1; δ), subject to p(θ0; δ) ≤ α.
Then, Theorem 5 reduces to the well-known NP lemma [3, Theorem 3.2.1], since $H(\cdot)$ in (7) is $f_{\theta_1}(\cdot)$.

Further, consider the case that the parameter is a scalar, i.e., $\Theta \subset \mathbb{R}$. Assume the following monotone likelihood ratio (MLR) property [3] holds: there exists a measurable function $T: \mathcal{Y} \to \mathbb{R}$ such that $f_{\theta'}(y)/f_\theta(y)$ is a non-decreasing function of $T(y)$ for any $\theta, \theta' \in \Theta$ with $\theta < \theta'$. If $\theta_0 < \theta$ for $\theta \in \Theta_1$, then $H(y)$ is a non-decreasing function of $T(y)$. By Theorem 5, for any probability measure $\Lambda_1$ on $\Theta_1$, there exists an optimal rule for problem (4) of the following form: for $y \in \mathcal{Y}$,
$$\delta^*(y) = \begin{cases} 1, & \text{if } T(y) > \kappa \\ \gamma(y), & \text{if } T(y) = \kappa \\ 0, & \text{if } T(y) < \kappa. \end{cases}$$
In general, $\gamma(\cdot)$ depends on $\Lambda_1$, so Theorem 5 does not immediately imply the existence of UMP tests. However, if we assume that for any probability measure $\nu$ on $\Theta_1$ and $c \geq 0$,
$$\mu\Big(\Big\{y \in \mathcal{Y} : \int_{\Theta_1} f_\theta(y)\,\nu(d\theta) = c f_{\theta_0}(y)\Big\}\Big) = 0,$$
then Theorem 5 establishes that there exists a UMP test of the form $\delta^*(y) = 1\{T(y) \geq \kappa\}$ for $y \in \mathcal{Y}$, where $\kappa$ is selected so that $p(\theta_0;\delta^*) = \alpha$.

V. COMPOSITE NULL AND ALTERNATIVE: INTEGRATED FALSE-ALARM CONTROL

In this section, we consider the case where both $\Theta_0$ and $\Theta_1$ are composite, and aim to test:
$$H_0: \theta \in \Theta_0 \quad \text{vs.} \quad H_1: \theta \in \Theta_1 := \Theta \setminus \Theta_0. \tag{13}$$
Let $\Lambda_0$ and $\Lambda_1$ be two probability measures on $\Theta_0$ and $\Theta_1$ respectively, $\alpha \in (0,1)$ a user-specified level, and $g: [0,1] \to \mathbb{R}$ a measurable function. Our goal is to solve the following problem:
$$\max_{\delta \in \Delta} \int_{\Theta_1} g(p(\theta;\delta))\,\Lambda_1(d\theta), \quad \text{subject to } \int_{\Theta_0} p(\theta;\delta)\,\Lambda_0(d\theta) \leq \alpha. \tag{14}$$
This generalizes the case in (4). Define the following probability density function w.r.t. $\mu$: for $y \in \mathcal{Y}$,
$$\tilde{f}_{\Theta_0}(y) := \int_{\Theta_0} f_\theta(y)\,\Lambda_0(d\theta).$$
We have the following structural result, which parallels Theorem 5, with $f_{\theta_0}$ replaced by $\tilde{f}_{\Theta_0}$.

Theorem 8. Suppose that Assumptions 1 and 2 hold, and that $\bar{\delta}_0$ is not an optimal solution to problem (14).
i) There exist a rule $\delta^* \in \Delta$, a constant $\kappa \in \mathbb{R}$, and a measurable function $\gamma: \mathcal{Y} \to [0,1]$ such that $\delta^*$ solves the optimization problem in (14), and for each $y \in \mathcal{Y}$,
$$\delta^*(y) = \begin{cases} 1, & \text{if } H(y) > \kappa \tilde{f}_{\Theta_0}(y) \\ \gamma(y), & \text{if } H(y) = \kappa \tilde{f}_{\Theta_0}(y) \\ 0, & \text{if } H(y) < \kappa \tilde{f}_{\Theta_0}(y), \end{cases}$$
where we recall that $H(\cdot)$ is defined in (7).
ii) Any optimal rule $\delta^* \in \Delta$ must take the above form for $\mu$-a.e. $y$.
iii) If, in addition to Assumptions 1 and 2, Assumption 3 holds, then $\bar{\delta}_0$ is not an optimal solution to problem (14), and for any optimal rule $\delta^* \in \Delta$, we have $\int_{\Theta_0} p(\theta;\delta^*)\,\Lambda_0(d\theta) = \alpha$.

Proof. See Appendix E.

We now simplify the above results for the single-parameter exponential family distributions defined in (10), for "two-sided" hypotheses with a composite null.

Theorem 9. Consider the single-parameter exponential family distributions in (10). Suppose that Assumptions 2 and 3 hold, and that $\Theta_0 = [a,b]$ and $\Theta = [-K, K]$ for some $K > \max\{|a|, |b|\}$. There exists an optimal rule $\delta^*$ for the problem in (14) such that $\int_a^b p(\theta;\delta^*)\,\Lambda_0(d\theta) = \alpha$, and for some constants $-\infty \leq \ell < u \leq \infty$ and a measurable function $\gamma: \mathcal{Y} \to [0,1]$, $\delta^*$ is given by (9).

Proof. See Appendix E.

Remark 11. The proof strategy of Theorem 9 is as follows. Note that $\log(H(y)/\tilde{f}_{\Theta_0}(y))$ is a function of $T(y)$, denoted by $\psi(T(y))$.
Using a change-of-measure argument (see Lemma 15), we show that if $\psi'(x) = 0$ for some $x \in \mathbb{R}$, then $\psi''(x)$ must be positive. By Lemma 14, this implies that $\psi'$ has at most one root. Thus, $\psi$ is either monotone or has a single stationary point, which is a minimum.

A. Numerical Example

We use a numerical example to illustrate the results in Theorem 9. Let $\mu$ be the Lebesgue measure and $f_\theta$ the density of the normal distribution with mean $\theta$ and variance 1. Let $\Theta_0 = [-0.2, 0.4]$ and $\Theta = [-1, 1]$. Consider the following optimization problem:
$$\max_{\delta \in \Delta}\; \int_{-1}^{-0.2} \sqrt{p(\theta;\delta)}\,d\theta + \int_{0.4}^{1} \sqrt{p(\theta;\delta)}\,d\theta,$$
subject to the constraint that $\int_{-0.2}^{0.4} p(\theta;\delta)\,d\theta \leq 0.1$. By Theorem 9, there exists an optimal rule of the following form: for each $y \in \mathbb{R}$, $\tilde{\delta}_\ell(y) = 1\{y \leq \ell \text{ or } y \geq u_\ell\}$, where $\ell$ and $u_\ell$ satisfy the constraint $\int_{-0.2}^{0.4} p(\theta;\tilde{\delta}_\ell)\,d\theta = 0.1$. Then, a grid search over $\ell$ yields an optimal solution. By numerical integration, the optimal parameters are obtained as $\ell^* = -1.1673$ and $u_{\ell^*} = 1.6713$.

VI. COMPOSITE NULL AND ALTERNATIVE: SUPREMUM FALSE-ALARM CONTROL

In this section, we continue considering the hypotheses in (13). However, rather than controlling the integrated false-alarm risk, we focus on the supremum risk. Specifically, the objective is to solve the following optimization problem:
$$\max_{\delta \in \Delta} \int_{\Theta_1} g(p(\theta;\delta))\,\Lambda_1(d\theta), \quad \text{subject to } \sup_{\theta \in \Theta_0} p(\theta;\delta) \leq \alpha, \tag{15}$$
where $\Lambda_1$ is a probability measure
on $\Theta_1$, $\alpha \in (0,1)$ a user-specified level, and $g: [0,1] \to \mathbb{R}$ a measurable function. In this setup, a UMP test typically does not exist.

Theorem 10. Suppose that Assumptions 1, 2, and 3 hold, and that $\Theta_0$ is closed. There exists an optimal rule $\delta^* \in \Delta$ for problem (15) such that $\sup_{\theta \in \Theta_0} p(\theta;\delta^*) = \alpha$ and, for some constant $\kappa \geq 0$, a measurable function $\gamma: \mathcal{Y} \to [0,1]$, and a probability measure $\tilde{\Lambda}_0$ on $\Theta_0$, we have
$$\delta^*(y) = \begin{cases} 1, & \text{if } H(y) > \kappa \tilde{f}_{\tilde{\Lambda}_0}(y) \\ \gamma(y), & \text{if } H(y) = \kappa \tilde{f}_{\tilde{\Lambda}_0}(y) \\ 0, & \text{if } H(y) < \kappa \tilde{f}_{\tilde{\Lambda}_0}(y), \end{cases}$$
for each $y \in \mathcal{Y}$, where $H(\cdot)$ is defined in (7) and
$$\tilde{f}_{\tilde{\Lambda}_0}(y) := \int f_\theta(y)\,\tilde{\Lambda}_0(d\theta).$$

TABLE III
OPTIMAL PAIRS OF $(\ell^*, u_{\ell^*})$ AND THE REJECTION PROBABILITIES FOR THE SUPREMUM FALSE-ALARM CONTROL EXAMPLE.

beta   l*       u_l*    p(a; rule)   p(b; rule)
1      -1.677   1.677   10%          10%
5      -1.559   2.027   10%          7.31%
10     -1.505   2.444   10%          5.65%

Proof. See Appendix F.

Remark 12. From the proof of Theorem 10, $\tilde{\Lambda}_0$ is the least-favorable distribution (see [3, Theorem 3.8.1]) for the following hypothesis testing problem under the supremum false-alarm constraint: the null hypothesis $\{f_\theta : \theta \in \Theta_0\}$ versus the simple alternative given by (normalized) $H(\cdot)$.

We now focus on two-sided hypotheses and exponential family distributions to simplify the preceding results.

Theorem 11. Consider the single-parameter exponential family distributions in (10), and assume that $\Theta_0 = [a,b]$ and $\Theta = [-K,K]$ for some $K > \max\{|a|,|b|\}$. Suppose that Assumptions 2 and 3 hold, and that $T(y)$ takes more than two values, that is, $\mu(\{y \in \mathcal{Y} : T(y) \notin \{c_1, c_2\}\}) > 0$ for any $c_1, c_2 \in \mathbb{R}$. Then, there exists an optimal rule $\delta^*$ for problem (15) such that $\sup_{\theta \in [a,b]} p(\theta;\delta^*) = \alpha = \max\{p(a;\delta^*), p(b;\delta^*)\}$, and for some $-\infty \leq \ell < u \leq \infty$ and measurable function $\gamma: \mathcal{Y} \to [0,1]$, $\delta^*$ is given by (9).

Proof. See Appendix F.

Remark 13. In addition to the strategy discussed in Remark 11, the key to the proof of Theorem 11 is to show that, for a rule of the form (9), the worst-case false-alarm probability over $[a,b]$ is attained at either $a$ or $b$.

A.
Numerical Example

Let $\mu$ be the Lebesgue measure and $f_\theta$ the density of the normal distribution with mean $\theta$ and variance 1. Let $a = -0.2$, $b = 0.2$, and $K = 1$. Consider the following optimization problem:
$$\max_{\delta \in \Delta}\; \beta\int_{-1}^{-0.2} \sqrt{p(\theta;\delta)}\,d\theta + \int_{0.2}^{1} \sqrt{p(\theta;\delta)}\,d\theta,$$
subject to the constraint that $\sup_{\theta \in [-0.2, 0.2]} p(\theta;\delta) \leq \alpha$. By Theorem 11, there exists an optimal rule of the following form: for each $y \in \mathbb{R}$, $\tilde{\delta}_\ell(y) = 1\{y \leq \ell \text{ or } y \geq u_\ell\}$, where $\ell$ and $u_\ell$ satisfy the constraint $\max\{p(-0.2;\tilde{\delta}_\ell), p(0.2;\tilde{\delta}_\ell)\} = \alpha$. Then, a grid search over $\ell$ yields an optimal solution. By numerical integration, for $\alpha = 10\%$ and various $\beta$, we report in Table III the optimal pairs $(\ell^*, u_{\ell^*})$, as well as the corresponding rejection probabilities at $-0.2$ and $0.2$.

VII. CONCLUSION

In this paper, we investigated the problem of composite binary hypothesis testing within the NP framework. We demonstrated that for any hypothesis testing problem involving rejection probabilities, it is sufficient to consider the family of "generalized Bayes rules". By analyzing several special cases, we provided more concrete results for optimal decision rules. Unlike existing studies, we allow the objective function to incorporate a nonlinear component in the detection probabilities. As a result, our findings are applicable to a wide range of detection problems, including those involving behavioral utility-based hypothesis testing.

APPENDIX A
PROOFS OF LEMMA 1 AND LEMMA 2

Proof
of Lemma 1. Let $\theta_n, n \geq 1$, and $\theta$ be in $\Theta$ such that $\lim_{n\to\infty} \theta_n = \theta$. For any procedure $\delta \in \Delta$, due to Assumption 1, we have
$$p(\theta_n;\delta) = \int_{\mathcal{Y}} \delta(y) f_{\theta_n}(y)\,\mu(dy) \to \int_{\mathcal{Y}} \delta(y) f_\theta(y)\,\mu(dy) = p(\theta;\delta),$$
and thus $p(\cdot;\delta) \in C(\Theta)$, which completes the proof of the first statement.

Further, let $p_1, p_2 \in \mathcal{P}$. By definition, there exist $\delta_1, \delta_2 \in \Delta$ such that $p_k(\cdot) = p(\cdot;\delta_k)$ for $k = 1, 2$. For any $\alpha \in (0,1)$,
$$\alpha p_1(\cdot) + (1-\alpha) p_2(\cdot) = \alpha p(\cdot;\delta_1) + (1-\alpha) p(\cdot;\delta_2) = p(\cdot;\alpha\delta_1 + (1-\alpha)\delta_2).$$
That is, $\alpha p_1 + (1-\alpha)p_2$ is the power function of $\alpha\delta_1 + (1-\alpha)\delta_2$, and belongs to $\mathcal{P}$. Thus, $\mathcal{P}$ is convex.

Next, we show that $\mathcal{P}$ is compact. Note that
$$\sup_{\delta \in \Delta} |p(\theta_n;\delta) - p(\theta;\delta)| \leq \int_{\mathcal{Y}} |f_{\theta_n}(y) - f_\theta(y)|\,\mu(dy).$$
Thus, due to Assumption 1, $\mathcal{P} \subset C(\Theta)$ is equicontinuous at any $\theta \in \Theta$. Further, by definition, $\sup_{\theta \in \Theta, p \in \mathcal{P}} |p(\theta)| \leq 1$. Thus, by the Arzelà–Ascoli Theorem [28, Theorem 4.43], the closure of $\mathcal{P}$ is compact in $C(\Theta)$. It remains to show that $\mathcal{P}$ is closed. Let $p_n, n \geq 1$, be elements in $\mathcal{P}$ such that $\|p_n - p\|_\infty \to 0$ as $n \to \infty$ for some $p \in C(\Theta)$. By definition, there exist $\delta_n, n \geq 1$, in $\Delta$ such that $p_n(\cdot) = p(\cdot;\delta_n)$ for $n \geq 1$. By [3, Theorem A.5.1], there exist a subsequence $\delta_{n_j}, j \geq 1$, and $\delta \in \Delta$ such that
$$\lim_{j\to\infty} \int_{\mathcal{Y}} \delta_{n_j}(y)\,\eta(y)\,\mu(dy) = \int_{\mathcal{Y}} \delta(y)\,\eta(y)\,\mu(dy)$$
for any $\mu$-integrable function $\eta$. As a result, for any $\theta \in \Theta$,
$$p(\theta) = \lim_{j\to\infty} \int_{\mathcal{Y}} \delta_{n_j}(y) f_\theta(y)\,\mu(dy) = \int_{\mathcal{Y}} \delta(y) f_\theta(y)\,\mu(dy) = p(\theta;\delta),$$
which implies that $p \in \mathcal{P}$. Thus $\mathcal{P}$ is closed, and the proof of the second statement is complete.

Next, we consider the third statement. For any $\delta \in \Delta$, by the dominated convergence theorem [28, Theorem 2.27], the function $p(\cdot;\delta) \in C(\Theta)$ is differentiable on $U$. Let $\tilde{f} \in C(\Theta)$ be any function that is non-differentiable on $U$. Then $p(\cdot;\delta) + \epsilon\tilde{f}$ is not differentiable on $U$ for any $\epsilon > 0$ and thus does not belong to $\mathcal{P}$. This proves that $\mathcal{P}$ has no interior point.

Finally, we consider the last statement. Since $p \in \mathcal{P}$, it is the power function of some decision rule $\delta \in \Delta$, that is, $p(\theta) = \int \delta(y) f_\theta(y)\,\mu(dy)$ for each $\theta \in \Theta$. Recall that $\mu_\theta$ is the distribution of $Y$ when $\theta$ is the true parameter. Since $p(\theta') \in (0,1)$, we have
$$\mu_{\theta'}(\{y \in \mathcal{Y} : \delta(y) > 0\}) > 0, \qquad \mu_{\theta'}(\{y \in \mathcal{Y} : \delta(y) < 1\}) > 0.$$
Due to Assumption 1, we have $\mu_\theta(\{y \in \mathcal{Y} : \delta(y) > 0\}) > 0$ and $\mu_\theta(\{y \in \mathcal{Y} : \delta(y) < 1\}) > 0$ for all $\theta \in \Theta$, which implies that $0 < p(\theta) < 1$ for all $\theta \in \Theta$. Then the proof is complete since $p \in C(\Theta)$ and $\Theta$ is compact.

Proof of Lemma 2. By assumption, $\nu = \nu^+ - \nu^-$, where $\nu^+$ and $\nu^-$ are positive, finite, and mutually singular measures. Note that by Fubini's Theorem,
$$\int_\Theta p(\theta;\delta)\,\nu(d\theta) = \int_\Theta \Big(\int_{\mathcal{Y}} \delta(y) f_\theta(y)\,\mu(dy)\Big)\nu(d\theta) = \int_{\mathcal{Y}} \delta(y)\Big(\int_\Theta f_\theta(y)\,\nu^+(d\theta) - \int_\Theta f_\theta(y)\,\nu^-(d\theta)\Big)\mu(dy).$$
Then the result is immediate.

APPENDIX B
PROOF OF THEOREM 3

Recall that $\Theta$ is compact, that $C(\Theta)$ is the space of continuous functions on $\Theta$ equipped with the supremum norm $\|\cdot\|_\infty$, and that $M(\Theta)$ is the space of finite signed measures. For each $\nu \in M(\Theta)$, denote by $|\nu|$ the total variation of $\nu$. Note that the dual space (i.e., the space of bounded linear functionals) of $C(\Theta)$ is $M(\Theta)$ [28, Theorem 7.17]. We first define the support points of a subset $\mathcal{F} \subset C(\Theta)$, and show that if $\mathcal{F}$ is non-empty, convex, and closed with an empty interior, then all points of $\mathcal{F}$ are support points.

Definition 1. Let $\mathcal{F} \subset C(\Theta)$. We say $f \in \mathcal{F}$ is a support point of $\mathcal{F}$ if there exists a non-zero finite signed measure $\nu \in M(\Theta)$ such that
$$\int_\Theta f(\theta)\,\nu(d\theta) \geq \int_\Theta h(\theta)\,\nu(d\theta), \quad \text{for } h \in \mathcal{F}.$$

Lemma 12. Let $\mathcal{F} \subset C(\Theta)$ be non-empty, closed, and convex. Assume $\mathcal{F}$ has an empty interior. Then any
point $f \in \mathcal{F}$ is a support point of $\mathcal{F}$.

Proof. Let $f \in \mathcal{F}$ be arbitrary. Since $\mathcal{F}$ is convex and closed and has an empty interior, by the Bishop–Phelps Theorem [29, Theorem 7.43], the set of support points of $\mathcal{F} \subset C(\Theta)$ is dense in $\mathcal{F}$. Thus, there exists a sequence $\{f_n : n \geq 1\} \subset \mathcal{F}$ such that $f_n$ is a support point of $\mathcal{F}$ for $n \geq 1$, and $\|f_n - f\|_\infty \to 0$ as $n \to \infty$. By definition, for each $n \geq 1$, there exists a non-zero finite signed measure $\nu_n \in M(\Theta)$ such that
$$\int_\Theta f_n(\theta)\,\nu_n(d\theta) \geq \int_\Theta h(\theta)\,\nu_n(d\theta), \quad \text{for } h \in \mathcal{F}. \tag{16}$$
Without loss of generality, we may assume $|\nu_n| = 1$. Since $\Theta$ is compact, by applying Prohorov's Theorem [31, Theorem 13.29] to the positive and negative parts of $\nu_n, n \geq 1$, there exist $\nu_\infty \in M(\Theta)$ and a subsequence $\{\nu_{n_k} : k \geq 1\}$ of $\{\nu_n : n \geq 1\}$ such that $\nu_{n_k}$ converges to $\nu_\infty$ weakly, that is, for each $h \in C(\Theta)$,
$$\lim_{k\to\infty} \int_\Theta h(\theta)\,\nu_{n_k}(d\theta) = \int_\Theta h(\theta)\,\nu_\infty(d\theta).$$
Further, by the Portmanteau theorem [31, Theorem 13.16], we have $|\nu_\infty| = 1$; that is, $\nu_\infty$ is non-zero. Since $\|f_n - f\|_\infty \to 0$, $|\nu_{n_k}| = 1$ for $k \geq 1$, and $\nu_{n_k}$ converges to $\nu_\infty$ weakly, we have
$$\lim_{k\to\infty} \int_\Theta f_{n_k}(\theta)\,\nu_{n_k}(d\theta) = \lim_{k\to\infty} \Big(\int_\Theta (f_{n_k}(\theta) - f(\theta))\,\nu_{n_k}(d\theta) + \int_\Theta f(\theta)\,\nu_{n_k}(d\theta)\Big) = \int_\Theta f(\theta)\,\nu_\infty(d\theta).$$
Further, for any $h \in \mathcal{F}$, due to (16),
$$\int_\Theta h(\theta)\,\nu_\infty(d\theta) = \lim_{k\to\infty} \int_\Theta h(\theta)\,\nu_{n_k}(d\theta) \leq \lim_{k\to\infty} \int_\Theta f_{n_k}(\theta)\,\nu_{n_k}(d\theta).$$
Thus, for any $h \in \mathcal{F}$, we have $\int_\Theta f(\theta)\,\nu_\infty(d\theta) \geq \int_\Theta h(\theta)\,\nu_\infty(d\theta)$, which shows that $f$ is a support point of $\mathcal{F}$. The proof is complete.

Proof of Theorem 3. Let $p \in \mathcal{P}$. By definition, $p$ is the power function of some decision rule $\delta^* \in \Delta$, that is, $p(\cdot) = p(\cdot;\delta^*)$. By Lemma 1 and the assumption, $\mathcal{P}$ is non-empty, compact, and convex, with an empty interior. Then, by Lemma 12, $p$ is a support point of $\mathcal{P}$; that is, there exists some non-zero finite signed measure $\nu \in M(\Theta)$ such that
$$\int_\Theta p(\theta)\,\nu(d\theta) \geq \int_\Theta h(\theta)\,\nu(d\theta), \quad \text{for } h \in \mathcal{P}.$$
Since $\mathcal{P}$ is the set of all power functions, this implies that $\delta^*$ is an optimal solution to the following problem:
$$\max_{\delta \in \Delta} \int_\Theta p(\theta;\delta)\,\nu(d\theta).$$
Finally, by Lemma 2 and the remark following it, $\delta^*$ must be of the form in (1). The proof is complete.

APPENDIX C
PROOF OF THEOREM 5

Proof of Theorem 5.
By Lemma 1, $\mathcal{P}$ is compact, which implies that $\{p \in \mathcal{P} : p(\theta_0) \leq \alpha\}$ is also compact. Due to Assumption 2, the map $p \in \mathcal{P} \mapsto \int_{\Theta_1} g(p(\theta))\,\Lambda_1(d\theta) \in \mathbb{R}$ is continuous. Thus, there exists $p^* \in \mathcal{P}$ that solves the following problem:
$$\max_{p \in \mathcal{P}} \int_{\Theta_1} g(p(\theta))\,\Lambda_1(d\theta), \quad \text{subject to } p(\theta_0) \leq \alpha.$$
Since $\bar{\delta}_0$ is not an optimal solution to problem (4), we have $\Lambda_1(\{\theta \in \Theta_1 : p^*(\theta) > 0\}) > 0$. Further, since $p^*$ is feasible, we have $p^*(\theta_0) \leq \alpha$. By Lemma 1,
$$0 < \inf_{\theta \in \Theta} p^*(\theta) \leq \sup_{\theta \in \Theta} p^*(\theta) < 1. \tag{17}$$
Since $p^* \in \mathcal{P}$, there exists $\delta^* \in \Delta$ such that $p^*(\cdot) = p(\cdot;\delta^*)$.

Fix an arbitrary rule $\tilde{\delta} \in \Delta$ such that its associated power function $\tilde{p}$ satisfies $\tilde{p}(\theta_0) \leq \alpha$; that is, $\tilde{\delta}$ is a feasible solution for the problem in (4). Since $\mathcal{P}$ is convex by Lemma 1, $(1-\epsilon)p^* + \epsilon\tilde{p} \in \mathcal{P}$ and $(1-\epsilon)p^*(\theta_0) + \epsilon\tilde{p}(\theta_0) \leq \alpha$ for $\epsilon \in [0,1]$. Define the following function: for $\epsilon \in [0,1]$,
$$J(\epsilon) = \int_{\Theta_1} g\big((1-\epsilon)p^*(\theta) + \epsilon\tilde{p}(\theta)\big)\,\Lambda_1(d\theta).$$
Since $g(\cdot)$ is continuously differentiable on $(0,1)$ (see Assumption 2), due to (17) and by the dominated convergence theorem, $J'(\epsilon)$ equals
$$\int_{\Theta_1} g'\big((1-\epsilon)p^*(\theta) + \epsilon\tilde{p}(\theta)\big)\,\big(\tilde{p}(\theta) - p^*(\theta)\big)\,\Lambda_1(d\theta).$$
By the optimality of $p^*$, $J(\cdot)$ attains its maximal value at $\epsilon = 0$, which implies $J'(0) \leq 0$, that is,
$$\int_{\Theta_1} g'(p^*(\theta))\,\tilde{p}(\theta)\,\Lambda_1(d\theta) \leq \int_{\Theta_1} g'(p^*(\theta))\,p^*(\theta)\,\Lambda_1(d\theta).$$
Recall the definition of $H(\cdot)$ in (7). By definition and Fubini's theorem,
$$\int_{\Theta_1} g'(p^*(\theta))\,\tilde{p}(\theta)\,\Lambda_1(d\theta) = \int_{\Theta_1} g'(p^*(\theta))\Big(\int_{\mathcal{Y}} \tilde{\delta}(y) f_\theta(y)\,\mu(dy)\Big)\Lambda_1(d\theta) = \int_{\mathcal{Y}} \tilde{\delta}(y) H(y)\,\mu(dy).$$
Similarly,
$$\int_{\Theta_1} g'(p^*(\theta))\,p^*(\theta)\,\Lambda_1(d\theta) = \int_{\mathcal{Y}}
\delta^*(y) H(y)\,\mu(dy).$$
Since $\tilde{\delta} \in \Delta$ is arbitrary, $\delta^*$ must solve the following optimization problem:
$$\max_{\delta \in \Delta} \int_{\mathcal{Y}} \delta(y)H(y)\,\mu(dy), \quad \text{subject to } \int_{\mathcal{Y}} \delta(y) f_{\theta_0}(y)\,\mu(dy) \leq \alpha.$$
Define $c^* := \int_{\mathcal{Y}} \delta^*(y) f_{\theta_0}(y)\,\mu(dy)$. Then $\delta^*$ also solves the following optimization problem:
$$\max_{\delta \in \Delta} \int_{\mathcal{Y}} \delta(y)H(y)\,\mu(dy), \quad \text{subject to } \int_{\mathcal{Y}} \delta(y) f_{\theta_0}(y)\,\mu(dy) = c^*.$$
Since $\delta^*$ is feasible, and due to (17), $c^* \in (0,\alpha] \subset (0,1)$. Further, since $g'$ is continuous on $(0,1)$ (see Assumption 2), due to (17), $H(\cdot)$ is $\mu$-integrable. Thus, by the generalized NP lemma [3, Theorem 3.6.1 (iv)], for $\mu$-a.e. $y$, $\delta^*$ must take the following form:
$$\delta^*(y) = \begin{cases} 1, & \text{if } H(y) > \kappa f_{\theta_0}(y) \\ \gamma(y), & \text{if } H(y) = \kappa f_{\theta_0}(y) \\ 0, & \text{if } H(y) < \kappa f_{\theta_0}(y), \end{cases}$$
for some $\kappa \in \mathbb{R}$ and a measurable function $\gamma: \mathcal{Y} \to [0,1]$. The proof is complete for i) and ii).

Finally, we focus on iii). Define $\bar{\delta}_\alpha(y) = \alpha$ for $y \in \mathcal{Y}$; that is, the rule $\bar{\delta}_\alpha \in \Delta$ rejects $H_0$ with probability $\alpha$ regardless of $y$. Clearly, $\bar{\delta}_\alpha$ is a feasible solution to problem (4), and since $g(\cdot)$ is strictly increasing (Assumption 3), the objective value for $\bar{\delta}_\alpha$ is strictly larger than that for $\bar{\delta}_0$. Thus, $\bar{\delta}_0$ is not an optimal solution to problem (4).

Let $\delta^*$ be an optimal solution to problem (4), whose existence is guaranteed by i). Assume, to the contrary, that $c^* := p(\theta_0;\delta^*) < \alpha$. Define a new rule as follows:
$$\tilde{\delta}^*(y) = \delta^*(y) + \frac{\alpha - c^*}{1 - c^*}\,\big(1 - \delta^*(y)\big), \quad \text{for } y \in \mathcal{Y}.$$
Clearly, $\tilde{\delta}^* \in \Delta$ is feasible for problem (4), and since $g(\cdot)$ is strictly increasing, $\tilde{\delta}^*$ has a strictly larger objective value than $\delta^*$, which is a contradiction. The proof is complete.

APPENDIX D
PROOFS OF COROLLARY 6 AND 7

Proof of Corollary 6. Let $\delta^*$ be an optimal rule in part i) of Theorem 5 and $p^*$ the associated power function, that is, $p^*(\theta) = p(\theta;\delta^*)$ for $\theta \in \Theta$. Recall the form of $\delta^*$ in (6) and the definition of $H(\cdot)$ in (7). By iii) of Theorem 5, $p(\theta_0;\delta^*) = \alpha$. For each fixed $\theta \in \Theta$, the following function is strictly convex:
$$T(y) \in \mathbb{R} \mapsto \phi_\theta(T(y)) \in [0,\infty).$$
Since $g'(t) > 0$ for $t \in (0,1)$ (Assumption 3) and $\Lambda_1$ is a probability measure, the function
$$T(y) \in \mathbb{R} \mapsto \int_{\Theta_1} \phi_\theta(T(y))\,\nu(d\theta) \in [0,\infty)$$
is also strictly convex, where $\nu$ is a measure on $\Theta_1$ such that $d\nu/d\Lambda_1(\theta) = g'(p^*(\theta))$ for $\theta \in \Theta_1$. As a result, there exists a strictly convex function $\Psi: \mathbb{R} \to \mathbb{R}$ such that
$$H(y)/f_{\theta_0}(y) = \Psi(T(y)), \quad \text{for } y \in \mathcal{Y}.$$
Due to convexity, the two sets $\{T(y) \in \mathbb{R} : \Psi(T(y)) < \kappa\}$ and $\{T(y) \in \mathbb{R} : \Psi(T(y)) \leq \kappa\}$ are convex, which implies that they are intervals in $\mathbb{R}$. Further, since $\Psi(\cdot)$ is strictly convex, the set $\{T(y) \in \mathbb{R} : \Psi(T(y)) = \kappa\}$ contains at most two points. As a result, there exist $-\infty \leq \ell < u \leq \infty$ such that
$$(\ell, u) \subset \{T(y) \in \mathbb{R} : \Psi(T(y)) < \kappa\} \subset \{T(y) \in \mathbb{R} : \Psi(T(y)) \leq \kappa\} \subset [\ell, u].$$
Since $\gamma(\cdot)$ is allowed to take values in $\{0,1\}$, the first claim follows immediately from Theorem 5. The second claim holds because $\mu(\{y \in \mathcal{Y} : T(y) \in \{\ell, u\}\}) = 0$ due to the assumption.

Proof of Corollary 7. For a proof that Assumption 1 holds, see Remark 9. Note that
$$\frac{f_\theta(y)}{f_{\theta_0}(y)} = \phi_\theta(T(y)), \quad \text{for } y \in \mathcal{Y}, \qquad \text{where } \phi_\theta(z) := \frac{c(\theta)}{c(\theta_0)}\,e^{(\theta-\theta_0)z} \text{ for } z \in \mathbb{R}.$$
Thus for each $\theta \in \Theta_1$, $\phi_\theta$ is a strictly convex function on $\mathbb{R}$, and condition (8) holds.

APPENDIX E
PROOFS OF THEOREM 8 AND 9

Proof of Theorem 8. Recall the definition of $\tilde{f}_{\Theta_0}$ preceding Theorem 8. By Fubini's theorem,
$$\int_{\Theta_0} p(\theta;\delta)\,\Lambda_0(d\theta) = \int_{\Theta_0} \Big(\int_{\mathcal{Y}} \delta(y) f_\theta(y)\,\mu(dy)\Big)\Lambda_0(d\theta) = \int_{\mathcal{Y}} \delta(y)\,\tilde{f}_{\Theta_0}(y)\,\mu(dy).$$
Thus, the optimization problem in (14) is equivalent to the following:
$$\max_{\delta \in \Delta} \int_{\Theta_1} g(p(\theta;\delta))\,\Lambda_1(d\theta), \quad \text{subject to } \int_{\mathcal{Y}} \delta(y)\,\tilde{f}_{\Theta_0}(y)\,\mu(dy) \leq \alpha.$$
The proof then follows
by applying nearly identical arguments as in the proof of Theorem 5, replacing $f_{\theta_0}$ with $\tilde{f}_{\Theta_0}$.

Before proving Theorem 9, we start with supporting lemmas.

Lemma 13. Let $a < b$ be two real numbers. Let $U$ be a random variable taking values in $[a,b]$, and $V$ a random variable taking values in $(-\infty, a] \cup [b, \infty)$ such that $\mathbb{E}[V^2] < \infty$ and $P(V < a) + P(V > b) > 0$. If $\mathbb{E}[U] = \mathbb{E}[V] \in (a,b)$, then $\mathrm{Var}(U) < \mathrm{Var}(V)$.

Proof. Without loss of generality, assume $a = 0$ and $b = 1$. It suffices to show that $\mathbb{E}[U^2] < \mathbb{E}[V^2]$. Denote $t = \mathbb{E}[U] \in (0,1)$; since $U^2 \leq U$, we have $\mathbb{E}[U^2] \leq \mathbb{E}[U] = t$. Further, denote $q_1 = P(V \geq 1)$ and $q_0 = P(V \leq 0)$. Since $\mathbb{E}[V] \in (0,1)$, we have $q_0, q_1 > 0$. Define $t_1 = \mathbb{E}(V \mid V \geq 1)$ and $t_0 = \mathbb{E}(V \mid V \leq 0)$. By definition, $t_1 \geq 1$, $t_0 \leq 0$, and $q_0 t_0 + q_1 t_1 = t$. Further, since $P(V < 0) + P(V > 1) > 0$, at least one of the following holds: $t_1 > 1$ or $t_0 < 0$. By Jensen's inequality,
$$\mathbb{E}[V^2] = q_1\,\mathbb{E}[V^2 \mid V \geq 1] + q_0\,\mathbb{E}[V^2 \mid V \leq 0] \geq q_1 t_1^2 + q_0 t_0^2 > q_1 t_1 \geq t.$$
The proof is complete.

Lemma 14. Let $f: \mathbb{R} \to \mathbb{R}$ be an analytic function that is not identically zero. Assume that for any $x \in \mathbb{R}$, if $f(x) = 0$, then $f'(x) > 0$. Then $f$ has at most one root.

Proof. Without loss of generality, assume $f$ has a root at $a$. Suppose $f$ has another root on $(a, \infty)$. Since $f$ is a non-zero analytic function, there exists a smallest root on $(a, \infty)$, denoted by $b$. Since $f(a) = 0$, we have $f'(a) > 0$, and by the minimality of $b$, $f(x) > 0$ for $x \in (a,b)$. This contradicts the fact that $f(b) = 0$ but $f'(b) > 0$. By a similar argument, there can be no root on $(-\infty, a)$. The proof is complete.

The following change-of-measure result (also known as exponential tilting) is well known in the literature and is provided here for completeness.

Lemma 15. Let $U$ be a bounded random variable with distribution $F$. For each $x \in \mathbb{R}$, define a new distribution:
$$\frac{dF_x}{dF}(u) = \frac{e^{xu}}{\int e^{xu}\,F(du)}, \quad \text{for } u \in \mathbb{R}.$$
Let $U^{(x)}$ be a random variable with distribution $F_x$. Then $U^{(x)}$ has the same support as $U$, and
$$\mathbb{E}[U^{(x)}] = \big(\log \mathbb{E}[e^{xU}]\big)', \qquad \mathrm{Var}(U^{(x)}) = \big(\log \mathbb{E}[e^{xU}]\big)''.$$

Now we prove Theorem 9.

Proof of Theorem 9.
Recall $H(\cdot)$ in (7) and $\tilde{f}_{\Theta_0}(\cdot)$ preceding Theorem 8. Define
$$\tilde{H}(y) := \frac{H(y)}{\tilde{f}_{\Theta_0}(y)}, \quad \text{for } y \in \mathcal{Y}.$$
By the definition of $f_\theta$ in (10), we have
$$\tilde{H}(y) = \frac{\int_{\Theta_1} g'(p(\theta;\delta^*))\,c(\theta)\,e^{\theta T(y)}\,\Lambda_1(d\theta)}{\int_{\Theta_0} c(\theta)\,e^{\theta T(y)}\,\Lambda_0(d\theta)}.$$
By Theorem 8, any optimal rule $\delta^* \in \Delta$ rejects $H_0$ with probability $1$, $\gamma(y)$, $0$ if $\tilde{H}(y) >, =, < \kappa$, respectively, for some $\kappa \geq 0$ and $\gamma: \mathcal{Y} \to [0,1]$. Note that $g'(t) > 0$ for $t \in (0,1)$ (Assumption 3). Let $U$ and $V$ be two random variables with distributions $F_U$ and $F_V$, respectively, where
$$\frac{dF_U}{d\Lambda_1}(\theta) = \frac{1}{K_1}\,g'(p(\theta;\delta^*))\,c(\theta) \quad \text{for } \theta \in \Theta_1, \qquad \frac{dF_V}{d\Lambda_0}(\theta) = \frac{1}{K_2}\,c(\theta) \quad \text{for } \theta \in \Theta_0,$$
and the normalizing constants are defined as
$$K_1 := \int_{\Theta_1} g'(p(\tilde{\theta};\delta^*))\,c(\tilde{\theta})\,\Lambda_1(d\tilde{\theta}), \qquad K_2 := \int_{\Theta_0} c(\tilde{\theta})\,\Lambda_0(d\tilde{\theta}).$$
By definition, we have
$$\tilde{H}(y) = \frac{K_1}{K_2}\cdot\frac{\mathbb{E}[e^{U T(y)}]}{\mathbb{E}[e^{V T(y)}]}.$$
Thus $\tilde{H}(y) >, =, < \kappa$ is equivalent to, respectively,
$$\log\big(\mathbb{E}[e^{U T(y)}]\big) - \log\big(\mathbb{E}[e^{V T(y)}]\big) >, =, < \tilde{\kappa},$$
where $\tilde{\kappa} := \log(\kappa) - \log(K_1/K_2)$. Define, for $x \in \mathbb{R}$,
$$\psi(x) := \log\big(\mathbb{E}[e^{Ux}]\big) - \log\big(\mathbb{E}[e^{Vx}]\big) - \tilde{\kappa}.$$
Recall the definition of $U^{(x)}$ in Lemma 15, and define $V^{(x)}$ similarly. Then by Lemma 15,
$$\psi'(x) = \mathbb{E}\big[U^{(x)}\big] - \mathbb{E}\big[V^{(x)}\big], \qquad \psi''(x) = \mathrm{Var}\big(U^{(x)}\big) - \mathrm{Var}\big(V^{(x)}\big).$$
For each $x \in \mathbb{R}$, $U^{(x)}$ is supported on $\Theta_1 = [-K, a) \cup (b, K]$, and $V^{(x)}$ on $\Theta_0 = [a,b]$. By Lemma 13, if $\psi'(x)$
$= 0$ for some $x \in \mathbb{R}$, then $\psi''(x) > 0$. Then, by Lemma 14, $\psi'$ has at most one root on $\mathbb{R}$. If $\psi'$ has no root, then $\psi(T(y))$ is a monotone function of $T(y)$. If $\psi'$ has exactly one root, then $\psi(T(y))$ is either "first increasing, then decreasing" or "first decreasing, then increasing" in $T(y)$. Since $\Theta_0 = [a,b]$, as $|T(y)| \to \infty$, $\psi(T(y)) \to \infty$, which implies that $\delta^*$ must be of the form given in (9).

APPENDIX F
PROOFS OF THEOREM 10 AND THEOREM 11

Proof of Theorem 10. By arguments similar to those in Theorem 5, optimal decision rules exist for (15). Let $\delta^* \in \Delta$ be an optimal rule for (15) and $p^* \in \mathcal{P}$ the associated power function. Again, by arguments similar to those in Theorem 5, we have that $\sup_{\theta \in \Theta_0} p(\theta;\delta^*) = \alpha$, that (17) holds for $p^*$, and that $\delta^*$ is the optimal solution to the following optimization problem:
$$\max_{\delta \in \Delta} \int_{\mathcal{Y}} \delta(y)H(y)\,\mu(dy), \quad \text{subject to } \sup_{\theta \in \Theta_0} p(\theta;\delta) \leq \alpha,$$
where we recall that $H(\cdot)$ is defined in (7). Since $g'(t) > 0$ for $0 < t < 1$ (Assumption 3), due to (17), we have $H(y) \geq 0$ for $y \in \mathcal{Y}$ and $0 < \int_{\mathcal{Y}} H(y)\,\mu(dy) < 1$. Since $\Theta_0$ is closed and $\Theta$ is compact, by Theorem 3.8.1 of [3] (see also the remark on page 86 of [3] and [32]), there exists a probability measure $\tilde{\Lambda}_0$ on $\Theta_0$ such that $\delta^*$ is the optimal solution to the following optimization problem:
$$\max_{\delta \in \Delta} \int_{\mathcal{Y}} \delta(y)H(y)\,\mu(dy), \quad \text{subject to } \int_{\mathcal{Y}} \delta(y)\,\tilde{f}_{\tilde{\Lambda}_0}(y)\,\mu(dy) \leq \alpha,$$
where $\tilde{f}_{\tilde{\Lambda}_0}$ is defined in Theorem 10. Then the proof is complete by the NP lemma [3, Theorem 3.6.1].

Next, we prove Theorem 11.

Proof of Theorem 11. Let $\delta^*$ be the optimal rule in Theorem 10. Recall $H(\cdot)$ in (7), $\tilde{f}_{\tilde{\Lambda}_0}(\cdot)$ in Theorem 10, and $f_\theta$ in (10). Define, for $y \in \mathcal{Y}$,
$$\bar{H}(y) := \frac{H(y)}{\tilde{f}_{\tilde{\Lambda}_0}(y)} = \frac{\int_{\Theta_1} g'(p(\theta;\delta^*))\,c(\theta)\,e^{\theta T(y)}\,\Lambda_1(d\theta)}{\int_{\Theta_0} c(\theta)\,e^{\theta T(y)}\,\tilde{\Lambda}_0(d\theta)}.$$
Thus for $\mu$-a.e. $y$, $\delta^*(y) = 1$, $\gamma(y)$, $0$ if $\bar{H}(y) >, =, < 1$, respectively. As in the proof of Theorem 9, it follows that $\delta^*$ must be of the form given in (9).

It remains to show that $\max\{p(a;\delta^*), p(b;\delta^*)\} = \alpha$. Assume, to the contrary, that $\max\{p(a;\delta^*), p(b;\delta^*)\} = \alpha' < \alpha$. Denote by $\bar{\theta}$ a maximizer of the function $p(\cdot;\delta^*)$ on $[a,b]$. Since $\alpha' < \alpha = \sup_{\theta \in [a,b]} p(\theta;\delta^*)$, we must have $\bar{\theta} \in (a,b)$ and $p(\bar{\theta};\delta^*) > \alpha'$.
Further,
$$\frac{d}{d\theta}\log p(\theta;\delta^*)\Big|_{\theta=\bar{\theta}} = 0, \qquad \frac{d^2}{d\theta^2}\log p(\theta;\delta^*)\Big|_{\theta=\bar{\theta}} \leq 0. \tag{18}$$
For each $\theta \in [-K, K]$, recall the definition of $f_\theta$ in (10), and define two probability measures $\mu_{\theta,+}$ and $\mu_{\theta,-}$ on $\mathcal{Y}$ as follows: for each $y \in \mathcal{Y}$,
$$d\mu_{\theta,+}/d\mu(y) = (K_{\theta,+})^{-1}\,\delta^*(y)\,h(y)\,e^{\theta T(y)}, \qquad d\mu_{\theta,-}/d\mu(y) = (K_{\theta,-})^{-1}\,\bar{\delta}^*(y)\,h(y)\,e^{\theta T(y)},$$
where $\bar{\delta}^*(y) := 1 - \delta^*(y)$ and
$$K_{\theta,+} := \int_{\mathcal{Y}} \delta^*(\tilde{y})\,h(\tilde{y})\,e^{\theta T(\tilde{y})}\,\mu(d\tilde{y}), \qquad K_{\theta,-} := \int_{\mathcal{Y}} \bar{\delta}^*(\tilde{y})\,h(\tilde{y})\,e^{\theta T(\tilde{y})}\,\mu(d\tilde{y}).$$
By definition, $p(\theta;\delta^*) = K_{\theta,+}/(K_{\theta,+} + K_{\theta,-})$, and for $\iota \in \{+,-\}$,
$$\frac{d}{d\theta}K_{\theta,\iota} = K_{\theta,\iota}\int_{\mathcal{Y}} T(y)\,\mu_{\theta,\iota}(dy), \qquad \frac{d^2}{d\theta^2}K_{\theta,\iota} = K_{\theta,\iota}\int_{\mathcal{Y}} (T(y))^2\,\mu_{\theta,\iota}(dy).$$
By elementary calculation of the first two derivatives of $\log p(\theta;\delta^*)$ at $\bar{\theta}$, and due to (18), we have
$$\int_{\mathcal{Y}} T(y)\,\mu_{\bar{\theta},+}(dy) = \int_{\mathcal{Y}} T(y)\,\mu_{\bar{\theta},-}(dy), \qquad \int_{\mathcal{Y}} (T(y))^2\,\mu_{\bar{\theta},+}(dy) \leq \int_{\mathcal{Y}} (T(y))^2\,\mu_{\bar{\theta},-}(dy).$$
Let $Y_+$ and $Y_-$ be two random variables with distributions $\mu_{\bar{\theta},+}$ and $\mu_{\bar{\theta},-}$, respectively. Then the above-displayed equations imply that
$$\mathbb{E}[T(Y_+)] = \mathbb{E}[T(Y_-)], \qquad \mathrm{Var}(T(Y_+)) \leq \mathrm{Var}(T(Y_-)).$$
However, by the definition of $\delta^*$, $T(Y_+)$ is supported on $(-\infty, \ell] \cup [u, \infty)$, and $T(Y_-)$ on $[\ell, u]$. Further, since $T(Y_+)$ and $T(Y_-)$ each take at least two values, we have $P(T(Y_+) < \ell) + P(T(Y_+) > u) > 0$ and $P(\ell < T(Y_-) < u) > 0$. Then by Lemma 13, if $\mathbb{E}[T(Y_+)] = \mathbb{E}[T(Y_-)] \in (\ell, u)$, we must have $\mathrm{Var}(T(Y_+)) > \mathrm{Var}(T(Y_-))$, which is a contradiction. The proof is complete.

REFERENCES

[1] J. Neyman and E. S. Pearson, "On the problem of the most efficient tests of statistical hypotheses," Philosophical Transactions of the Royal Society A, vol. 231,
pp. 289–337, 1933.
[2] H. V. Poor, An Introduction to Signal Detection and Estimation, 2nd ed. New York: Springer-Verlag, 1994.
[3] E. L. Lehmann and J. P. Romano, Testing Statistical Hypotheses, 3rd ed. New York: Springer, 2005.
[4] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume II: Detection Theory. Prentice Hall, 1998.
[5] M. A. Richards, Fundamentals of Radar Signal Processing, ser. Electronics Engineering Series. USA: McGraw-Hill, 2005.
[6] H. L. Van Trees, Detection, Estimation, and Modulation Theory: Part I, 2nd ed. New York, NY: John Wiley & Sons, Inc., 2001.
[7] K. Yao, F. Lorenzelli, and C.-E. Chen, Detection and Estimation for Communication and Radar Systems. Cambridge University Press, 2013.
[8] J. Cvitanic and I. Karatzas, "Generalized Neyman-Pearson lemma via convex duality," Bernoulli, vol. 7, no. 1, pp. 79–97, 2001.
[9] B. Rudloff and I. Karatzas, "Testing composite hypotheses via convex duality," Bernoulli, vol. 16, no. 4, pp. 1224–1239, 2010.
[10] M. J. Bayarri and J. O. Berger, "The interplay of Bayesian and frequentist analysis," Statistical Science, vol. 19, pp. 58–80, 2004.
[11] E. L. Lehmann, "Some history of optimality," Lecture Notes-Monograph Series, vol. 57, pp. 11–17, 2009.
[12] N. Begum and M. L. King, "A new class of test for testing a composite null against a composite alternative hypothesis," in Proc. Australasian Meeting of the Econometric Society, 2007.
[13] J. O. Berger, B. Boukai, and Y. Wang, "Unified frequentist and Bayesian testing of a precise hypothesis," Statistical Science, vol. 12, no. 3, pp. 133–160, 1997.
[14] S. Bayram and S. Gezici, "On the restricted Neyman-Pearson approach for composite hypothesis-testing in the presence of prior distribution uncertainty," IEEE Transactions on Signal Processing, vol. 59, no. 10, pp. 5056–5065, Oct. 2011.
[15] P. J. Huber and V. Strassen, "Minimax tests and the Neyman-Pearson lemma for capacities," The Annals of Statistics, vol. 1, pp.
251–263, Mar. 1973.
[16] T. Augustin, "Neyman-Pearson testing under interval probability by globally least favorable pairs: A survey of Huber-Strassen theory and some results on its extension to general interval probability," Journal of Statistical Planning and Inference, vol. 105, pp. 149–173, June 2002.
[17] O. Zeitouni, J. Ziv, and N. Merhav, "When is the generalized likelihood ratio test optimal?" IEEE Transactions on Information Theory, vol. 38, no. 5, pp. 1597–1602, 2002.
[18] W. Hoeffding, "Asymptotically optimal tests for multinomial distributions," The Annals of Mathematical Statistics, vol. 36, pp. 369–400, 1965.
[19] S. Natarajan, "Large deviations, hypotheses testing, and source coding for finite Markov chains," IEEE Transactions on Information Theory, vol. 31, pp. 360–365, May 1985.
[20] O. Zeitouni and M. Gutman, "On universal hypotheses testing via large deviations," IEEE Transactions on Information Theory, vol. 37, pp. 285–290, Mar. 1991.
[21] E. Levitan and N. Merhav, "A competitive Neyman-Pearson approach to universal hypothesis testing with applications," IEEE Transactions on Information Theory, vol. 48, pp. 2215–2229, Aug. 2002.
[22] P. Boroumand and A. G. i. Fàbregas, "Composite Neyman-Pearson hypothesis testing with a
known hypothesis," in 2022 IEEE Information Theory Workshop (ITW), 2022, pp. 131–136.
[23] S. Gezici and P. K. Varshney, "On the optimality of likelihood ratio test for prospect theory based binary hypothesis testing," IEEE Signal Process. Lett., vol. 25, no. 12, pp. 1845–1849, Dec. 2018.
[24] B. Geng, S. Brahma, T. Wimalajeewa, P. K. Varshney, and M. Rangaswamy, "Prospect theoretic utility based human decision making in multi-agent systems," IEEE Trans. Signal Process., vol. 68, pp. 1091–1104, Jan. 2020.
[25] B. Geng, Q. Li, and P. K. Varshney, "Prospect theory based crowdsourcing for classification in the presence of spammers," IEEE Trans. Signal Process., vol. 68, pp. 4083–4093, July 2020.
[26] B. Dulek, E. Efendi, and P. K. Varshney, "Behavioral utility-based distributed detection with conditionally independent observations," IEEE Trans. Signal Process., vol. 72, pp. 3717–3730, Aug. 2024.
[27] A. Tversky and D. Kahneman, "Advances in prospect theory: Cumulative representation of uncertainty," Journal of Risk and Uncertainty, vol. 5, pp. 297–323, 1992.
[28] G. B. Folland, Real Analysis: Modern Techniques and Their Applications. John Wiley & Sons, 1999, vol. 40.
[29] C. D. Aliprantis and K. C. Border, Infinite Dimensional Analysis: A Hitchhiker's Guide. Springer Science & Business Media, 2006.
[30] A. W. Van der Vaart, Asymptotic Statistics. Cambridge University Press, 2000, vol. 3.
[31] A. Klenke, Probability Theory: A Comprehensive Course. Springer Science & Business Media, 2013.
[32] E. L. Lehmann, "On the existence of least favorable distributions," The Annals of Mathematical Statistics, pp. 408–416, 1952.
arXiv:2505.17961v1 [stat.ME] 23 May 2025

Federated Causal Inference from Multi-Site Observational Data via Propensity Score Aggregation

Rémi Khellaf∗
INRIA, Université de Montpellier, INSERM, France
remi.khellaf@inria.fr

Aurélien Bellet
INRIA, Université de Montpellier, INSERM, France
aurelien.bellet@inria.fr

Julie Josse
INRIA, Université de Montpellier, INSERM, France
julie.josse@inria.fr

Abstract

Causal inference typically assumes centralized access to individual-level data. Yet, in practice, data are often decentralized across multiple sites, making centralization infeasible due to privacy, logistical, or legal constraints. We address this by estimating the Average Treatment Effect (ATE) from decentralized observational data using federated learning, which enables inference through the exchange of aggregate statistics rather than individual-level data. We propose a novel method to estimate propensity scores in a (non-)parametric manner by computing a federated weighted average of local scores, using two theoretically grounded weighting schemes—Membership Weights (MW) and Density Ratio Weights (DW)—that balance communication efficiency and model flexibility. These federated scores are then used to construct two ATE estimators: the Federated Inverse Propensity Weighting estimator (Fed-IPW) and its augmented variant (Fed-AIPW). Unlike meta-analysis methods, which fail when any site violates positivity, our approach leverages heterogeneity in treatment assignment across sites to improve overlap. We show that Fed-IPW and Fed-AIPW perform well under site-level heterogeneity in sample sizes, treatment mechanisms, and covariate distributions, with theoretical analysis and experiments on simulated and real-world data highlighting their strengths and limitations relative to meta-analysis and related methods.
∗https://remikhellaf.github.io

Preprint. Under review.

1 Introduction

In Randomized Clinical Trials (RCTs), treatment assignment is randomized, which ensures that the observed association between treatment and outcome reflects a causal effect. Under this design, the Average Treatment Effect (ATE) can be consistently estimated using a simple Difference-in-Means (DM) estimator (Splawa-Neyman et al., 1990), which can be further refined through covariate adjustment to reduce variance (U.S. Food and Drug Administration, 2023; European Medicines Agency, 2024; Lei and Ding, 2021). However, RCTs are often expensive, time-consuming, or infeasible in practice. In such cases, estimating treatment effects from observational data becomes the only viable alternative (Hernán, 2018; Hernán and Robins, 2006). Although real-world data is abundant, drawing causal inferences from it remains challenging due to the presence of confounding covariates. As a result, the unadjusted DM estimator is biased when estimating the ATE (Grimes and Schulz, 2002). To mitigate this bias, it is essential to adjust for confounding variables (VanderWeele, 2019). This can be done by predicting the counterfactual outcomes of the individuals under the opposite treatment before averaging the differences between the observed and counterfactual outcomes, leading to the so-called G-Formula plug-in estimator (Robins, 1986). Another popular method for deconfounding is to weight individuals according to their probability of receiving the treatment, thereby emulating a randomized experiment. The Inverse Propensity Weighting (IPW) estimator (Rosenbaum and Rubin, 1983) relies on the estimation of the propensity score—the probability of treatment given the covariates. Doubly robust estimators, such as the Augmented IPW (AIPW) (Bang and Robins, 2005), enhance IPW by incorporating an outcome model, offering robustness against misspecification of either model.
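The standard IPW and AIPW point estimators described above can be written in a few lines. The sketch below is illustrative, not code from this paper: `e` denotes an already-fitted propensity score $\hat e(X_i) = \hat P(T=1 \mid X_i)$, and `m1`, `m0` denote fitted outcome-model predictions $\hat\mu_1(X_i)$, $\hat\mu_0(X_i)$; all function and variable names are ours.

```python
import numpy as np

def ipw_ate(y, t, e):
    """Inverse Propensity Weighting estimate of the ATE.
    y: outcomes; t: binary treatment indicators;
    e: estimated propensity scores P(T=1 | X) per individual."""
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

def aipw_ate(y, t, e, m1, m0):
    """Augmented IPW (doubly robust) estimate of the ATE.
    m1, m0: fitted predictions of E[Y | X, T=1] and E[Y | X, T=0].
    Consistent if either the propensity model or the outcome model
    is correctly specified."""
    aug1 = m1 + t * (y - m1) / e          # augmented treated-arm term
    aug0 = m0 + (1 - t) * (y - m0) / (1 - e)  # augmented control-arm term
    return np.mean(aug1 - aug0)
```

With a constant propensity score (as in a randomized experiment), `ipw_ate` reduces to the weighted difference in means, and `aipw_ate` with zero outcome models coincides with it.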
In causal inference, larger datasets enhance the precision and reliability of treatment effect estimates, particularly for underrepresented subgroups often overlooked in smaller studies. However, real-world data is typically decentralized—collected across several hospitals, companies, or countries—making it challenging to aggregate into a centralized dataset. This difficulty is particularly acute in domains like healthcare, where privacy concerns, regulatory barriers, data ownership and logistical issues (such as responsibility for data storage and governance) complicate data sharing. Federated Learning (FL) (Kairouz et al., 2021) offers a promising solution by enabling model training directly on decentralized data within a server-client architecture, without sharing individual-level data. By exchanging only aggregate statistics such as model updates or gradients, FL has the potential to support collaborative causal inference studies with performance approaching that of centralized analyses. However, realizing this potential requires tailored methodological developments.

Contributions. In this article, we build on ideas from Federated Learning to design (A)IPW estimators for decentralized observational data, moving beyond the simple aggregation of local ATE estimates used in traditional meta-analysis approaches. At the core of our approach is a flexible, nonparametric strategy for federating propensity scores. Unlike prior methods that aggregate parameters of a global parametric model via "one-shot" averaging (Xiong et al., 2023) or federated gradient descent (Khellaf et al., 2025), we construct a global propensity score model as a mixture of locally estimated models. This is done through a two-step process: (1) local estimation of propensity scores at each site, which naturally accommodates site-specific heterogeneity in treatment assignment mechanisms and offers flexibility in model choice, and (2) aggregation of these local models into a global one using carefully selected federated weights.
We introduce two theoretically grounded weighting schemes: Membership Weights (MW), which represent the probability of an individual belonging to each site given their covariates, and Density Ratio Weights (DW), which model the relative density of covariates at each site compared to the global population. We explain how to infer MW and DW weights in a federated manner and highlight the distinct trade-offs they offer between communication efficiency and modeling assumptions. Using these federated propensity scores, we construct two ATE estimators: the Federated Inverse Propensity Weighting estimator (Fed-IPW) and its augmented variant (Fed-AIPW). We derive asymptotic variance expressions for these estimators and compare them to the meta-analysis estimators—often overlooked in FL (Khellaf et al., 2025)—proving that our estimators achieve lower or equal variances in all scenarios considered.

Our approach is particularly beneficial in scenarios with poor or nonexistent overlap between treated and control groups within individual sites. In such cases, cross-site collaboration becomes crucial, as propensity score functions vary across sites. Indeed, when treatment assignment mechanisms differ substantially—such as when one site treats a subgroup that remains untreated elsewhere—the global dataset achieves markedly greater overlap, providing essential support for estimating treatment effects that would otherwise be poorly estimated within isolated sites. Additionally, our framework naturally accommodates a wide range of cross-site heterogeneities, including variations in sample sizes, treatment policies, covariate distributions, and violations of positivity. Numerical experiments on both simulated and real-world data validate our theoretical findings and highlight the practical value of our method.

Figure 1: Graphical model for multi-site observational data (nodes W, X, Y, H).
2 Preliminaries

2.1 ATE estimators from centralized multi-site observational data

In this section, we recall the key components of ATE estimation in a centralized multi-site setting. Following the potential outcomes framework (Rubin, 1974; Splawa-Neyman et al., 1990), we consider random variables (X, H, W, Y(1), Y(0)), where X ∈ ℝ^d represents patient covariates, H ∈ [K] indicates site membership, W ∈ {0,1} denotes the binary treatment, and Y(1) and Y(0) are the potential outcomes under treatment and control, respectively. We assume that the Stable Unit Treatment Values Assumption (SUTVA) holds, so that the observed outcome is Y = W·Y(1) + (1−W)·Y(0), and that potential outcomes are uniformly bounded. In the centralized setting, we observe n = Σ_{k=1}^K n_k observations of independently and identically distributed (i.i.d.) tuples (H_i, X_i, W_i, Y_i), i = 1,...,n, drawn from P, with n_k = Σ_{i=1}^n 1{H_i = k} the number of observations in site k. We aim to estimate the ATE defined as the risk difference τ = E[Y(1) − Y(0)] = E[E[Y(1) − Y(0) | H]], where the expectation is taken over the population P. To be able to identify the ATE, we assume unconfoundedness.

Assumption 1 (Unconfoundedness). (Y(0), Y(1)) ⊥⊥ W | X.

While Assumption 1 is a standard assumption in causal inference, we further consider Assumption 2, which is specific to the multi-site setting. Together with Assumption 1, it ensures that the variables X form a sufficient set of covariates to adjust for confounding.

Assumption 2 (Ignorability on sites). (Y(0), Y(1)) ⊥⊥ H | X.

Our setting is depicted in the graphical model in Figure 1, which highlights that we eliminate any direct effect of the site on the outcome.

We define µ_w(x) = E[Y | X = x, W = w] for w ∈ {0,1}. The oracle propensity score is denoted by e(x) = P(W = 1 | X = x), and we consider the weak (global) overlap assumption (Wager, 2024).

Assumption 3 (Global overlap). O_global = E[(e(X)(1 − e(X)))^{−1}] < ∞.

Assumption 3 is crucial for propensity score-based estimators, as it states that every region of the covariate space has a non-zero probability of receiving both treatments. A lower value of O_global indicates that these probabilities lie further away from 0 and 1, which corresponds to better overlap. For further insights on overlap, see Li et al.
(2018a,b), and for a "misoverlap" metric, refer to Clivio et al. (2024).

With Assumptions 1, 2 and 3, the ATE is identifiable as τ = E[WY/e(X) − (1−W)Y/(1−e(X))] (see Appendix A.1). We denote oracle ATE estimators, which assume knowledge of the nuisance components e, µ0, µ1, by a superscript *. We define the centralized, multi-site estimators as follows.

Definition 1 (Oracle multi-site centralized estimators).

τ̂*_IPW = (1/n) Σ_{i=1}^n τ_IPW(X_i; e),   τ̂*_AIPW = (1/n) Σ_{i=1}^n τ_AIPW(X_i; e, µ1, µ0),   (1)

where τ_IPW(X_i; e) = W_i Y_i / e(X_i) − (1−W_i) Y_i / (1 − e(X_i)) and τ_AIPW(X_i; e, µ1, µ0) = µ1(X_i) − µ0(X_i) + W_i (Y_i − µ1(X_i)) / e(X_i) − (1−W_i)(Y_i − µ0(X_i)) / (1 − e(X_i)).

These oracle estimators are unbiased and asymptotically normal.

Theorem 1 (Consistency of oracle multi-site centralized estimators). Under Assumptions 1, 2 and 3, we have the following asymptotic normality results: √n(τ̂ − τ) → N(0, V*) with

V[τ̂*_IPW] = E[Y(1)² / e(X)] + E[Y(0)² / (1 − e(X))] − τ²,
V[τ̂*_AIPW] = V[τ(X)] + E[(Y − µ1(X))² / e(X)] + E[(Y − µ0(X))² / (1 − e(X))],

where τ(x) = E[Y(1) − Y(0) | X = x] denotes the Conditional Average Treatment Effect (CATE).

These asymptotic variances align with those in the single-site setting (Hirano et al., 2003), as detailed in Appendix A.2. However, in practice, the propensity score and outcome models are typically unknown and must be estimated from data. This creates a challenge in the decentralized setting, where centralizing data to compute µ1, µ0, and e is not feasible. Therefore, the estimators in Definition 1 need to be adapted to this setting. Importantly, the AIPW estimator is doubly robust, meaning it remains consistent as long as either the outcome or the propensity score model is correctly specified (Chernozhukov et al., 2018).
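The double robustness of AIPW can be checked numerically with the per-sample transform of Definition 1: with the oracle propensity score but deliberately wrong (constant) outcome models, the estimator still recovers the true ATE. A minimal sketch on synthetic data of our own:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)
e = 1 / (1 + np.exp(-x))                  # oracle propensity score e(X)
w = rng.binomial(1, e)
y = 2.0 * w + x**2 + rng.normal(size=n)   # true ATE is 2.0

# Deliberately misspecified outcome models: constants instead of E[Y|X, W=w]
mu1 = np.full(n, y[w == 1].mean())
mu0 = np.full(n, y[w == 0].mean())

# Per-sample AIPW transform from Definition 1, averaged over the sample
tau_aipw = np.mean(
    mu1 - mu0
    + w * (y - mu1) / e
    - (1 - w) * (y - mu0) / (1 - e)
)
print(tau_aipw)  # close to 2.0 despite the wrong outcome models, since e is correct
```

The symmetric case (correct outcome models, wrong propensity score) would also remain consistent, which is exactly the double robustness property cited above.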
2.2 Meta-analysis estimators

We now turn to a decentralized setting in which the K sites cannot share individual-level data. A natural baseline for estimating the ATE across sites is a two-stage meta-analysis approach (Burke et al., 2017), wherein each site independently estimates the relevant nuisance parameters and communicates only the resulting ATE estimates for aggregation. We will need the following assumption.

Assumption 4 (Local overlap). ∀k ∈ [K], O_k = E[(e(X)(1 − e(X)))^{−1} | H = k] < ∞.

Assumption 4 is much stronger than its global counterpart (Assumption 3), as it requires the overlap to hold locally for each site. Denoting by e_k(x) = P(W = 1 | X = x, H = k) the oracle local propensity score at site k, we can define the oracle meta-analysis estimators as follows.

Definition 2 (Oracle meta-analysis estimators).

τ̂^meta*_IPW = Σ_{k=1}^K (n_k/n) τ̂^(k)_IPW,   τ̂^meta*_AIPW = Σ_{k=1}^K (n_k/n) τ̂^(k)_AIPW,   (2)

where τ̂^(k)_IPW = (1/n_k) Σ_{i=1}^{n_k} τ_IPW(X_i; e_k) and τ̂^(k)_AIPW = (1/n_k) Σ_{i=1}^{n_k} τ_AIPW(X_i; e_k, µ1, µ0) are the local estimators at site k.

While alternative aggregation weights—such as the inverse variance of local estimates—can be considered, they produce, in our setting, biased estimates of the global ATE τ, which decomposes as τ = Σ_{k=1}^K ρ_k τ_k, where ρ_k = P(H = k) and τ_k = E[Y(1) − Y(0) | H = k] denotes the local ATE at site k. This bias arises whenever the {τ_k}_k differ, which commonly occurs when covariate distributions vary across sites and treatment effects are heterogeneous (i.e., depend on covariates).
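Once each site has communicated its local estimate, the sample-size weighting of Definition 2 is a one-line aggregation. A sketch with illustrative numbers of our own (not from the paper):

```python
import numpy as np

# Hypothetical local ATE estimates tau_hat^(k) and site sizes n_k
local_tau = np.array([1.2, 0.8, 1.5])
n_k = np.array([500, 300, 200])

# Sample-size weighted meta-analysis aggregation (Definition 2)
tau_meta = np.sum(n_k / n_k.sum() * local_tau)
print(tau_meta)  # (500*1.2 + 300*0.8 + 200*1.5) / 1000 = 1.14
```

Weighting by n_k/n is what makes the aggregate target the population-level τ = Σ_k ρ_k τ_k; inverse-variance weights would instead tilt toward low-variance sites and bias the estimate when the τ_k differ.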
Theorem 2 (Consistency of oracle meta-analysis estimators). Under Assumptions 1, 2 and 4, the oracle meta-analysis estimators are unbiased for the ATE with asymptotic variances

V[τ̂^meta*_IPW] = (1/n) Σ_{k=1}^K ρ_k V^(k)_IPW + (1/n) V[τ_H],   V[τ̂^meta*_AIPW] = (1/n) Σ_{k=1}^K ρ_k V^(k)_AIPW + (1/n) V[τ_H],

with within-site variances

V^(k)_IPW = E[Y(1)² / e_k(X) | H = k] + E[Y(0)² / (1 − e_k(X)) | H = k] − τ_k²,
V^(k)_AIPW = V[τ(X) | H = k] + E[(Y − µ1(X))² / e_k(X) | H = k] + E[(Y − µ0(X))² / (1 − e_k(X)) | H = k],

and between-sites variance of the local ATEs V[τ_H] = V[E[Y(1) − Y(0) | H]].

This result is proved in Appendix A.3. A key limitation of meta-analysis estimators is their reliance on Assumption 4, which is fragile and often violated in practice—for example, when a site follows a deterministic treatment policy (e.g., treating all patients or only those with a specific condition). In such cases, these estimators become ill-defined and yield biased estimates of the global ATE. To overcome this, we introduce a federated approach that estimates the global propensity score e as a weighted combination of the local scores e_k, enabling valid inference even without local overlap.

3 Related work

Federated causal inference has only recently gained attention. Khellaf et al. (2025) propose a federated framework for estimating the ATE across multiple RCTs by fitting separate parametric linear outcome models for treated and control groups at each site using federated learning, followed by ATE estimation via the G-formula. Their method is motivated by the fact that, in RCTs, this estimator remains unbiased under model misspecification and achieves lower variance than the Difference-in-Means estimator. In observational studies, however, outcome model misspecification can bias ATE estimates. To address this, Vo et al.
(2022) introduce a Bayesian framework using Gaussian processes trained via federated gradient descent. Their method requires estimating a multi-site covariance kernel, which involves each site computing and sharing the first four moments of its data—a computationally intensive process that raises concerns regarding scalability and communication efficiency.

Alternative approaches, such as IPW, require estimating the propensity score. Guo et al. (2025) propose a federated method for learning a global propensity score model using a consensus voting scheme over parametric model parameters, retaining only sites that conform to a shared specification. In contrast, our approach does not assume a common propensity score function: each site can adopt its own, potentially distinct model.

To address site-level heterogeneity, Xiong et al. (2023) assume a parametric logistic propensity score model with shared and site-specific parameters, federating only the common ones in a single communication round. In a similar line of work, Yin et al. (2025) estimate a global parametric propensity score model that adjusts for both covariates and site membership, but their handling of heterogeneity is limited to a site-specific scalar, restricting the model's flexibility. In contrast, our method makes no structural assumptions about the form of the propensity score. We enable fully nonparametric global estimation while allowing flexible and heterogeneous local models, without requiring prior knowledge of shared or site-specific components. Crucially, unlike the above methods, which assume overlap holds locally at each site, our approach explicitly relaxes this strong assumption.

Although not always framed within a federated context, a related body of work focuses on the generalization of causal findings from source data to a target population. Assuming homogeneous outcome models and propensity scores across sites, Han et al. (2025) propose a density ratio weighting method to generalize ATE estimates from multiple source sites to a target domain. Their approach resembles meta-analysis, relying on aggregate statistics—such as sample sizes and local ATE variances—rather than individual-level data. While they estimate density ratios using parametric models, our Membership Weights approach leverages supervised learning methods (e.g., logistic regression) that have lower sample complexity and make less stringent assumptions on the sites' distributions. Similarly, Guo et al.
(2024) address generalization to a target population composed of multiple source sites and previously excluded individuals. Their method assumes the target propensity score is proportional to those of the sources and estimates local scores via density ratios between treated or control subgroups and the target population. However, nonparametric estimation of these ratios requires sufficient data in each treatment arm at every site—a condition often unmet in small or imbalanced settings. Furthermore, the authors do not address how this could be implemented in a federated setting.

4 Federated estimators via propensity score aggregation

4.1 Oracle federated estimators

As discussed in the previous section, existing federated causal inference methods often rely on restrictive assumptions—such as a common propensity score across sites (Guo et al., 2025), site differences limited to intercept shifts (Yin et al., 2025), or predefined shared structures (Xiong et al., 2023). In practice, treatment assignment often varies across sites due to differences in norms, resources, or clinical practices. To accommodate this heterogeneity, we express the global propensity score as a weighted combination of site-specific scores, capturing local treatment policies. Specifically, the global propensity score can be expressed as a weighted combination of local scores using two alternative weighting schemes (see Appendix A.4). The first relies on Membership Weights (MW), representing the probability of belonging to each site given the covariates:

e(X) = Σ_{k=1}^K P(H = k | X) e_k(X),   (3)

where the terms P(H = k | X) are the membership weights. The second uses Density ratio Weights (DW), based on the ratio between the covariate density at site k, denoted f_k, and the overall population density f:

e(X) = Σ_{k=1}^K (ρ_k f_k(X) / f(X)) e_k(X),   (4)

where the terms ρ_k f_k(X)/f(X) are the density ratio weights.

Building on Eq. 3 and 4, we define our oracle Federated (A)IPW estimators, denoted Fed-(A)IPW, where the corresponding weights will be estimated via federated learning (see Section 4.2).

Definition 3 (Oracle federated estimators).

τ̂^fed*_IPW = Σ_{k=1}^K (n_k/n) τ̂^fed(k)_IPW,   τ̂^fed*_AIPW = Σ_{k=1}^K (n_k/n) τ̂^fed(k)_AIPW,   (5)

where τ̂^fed(k)_IPW = (1/n_k) Σ_{i=1}^{n_k} τ_IPW(X_i; e) and τ̂^fed(k)_AIPW = (1/n_k) Σ_{i=1}^{n_k} τ_AIPW(X_i; e, µ1, µ0) rely on the global propensity score e(X) = Σ_{k=1}^K ω_k(X) e_k(X) with federated weights {ω_k(X)}_k that are either the membership (Eq. 3) or the density ratio (Eq. 4) weights.

Crucially, our decompositions of the global propensity score in Eq. 3 and 4 enable a flexible combination of locally estimated propensity scores e_k and globally learned federation weights ω_k(x). Theorem 3 (proved in Appendix A.5) establishes that this approach, in the oracle setting, matches the efficiency of centralized estimators.

Theorem 3 (Equality of centralized and federated (A)IPW estimators). Under Assumptions 1, 2 and 3, the oracle federated estimators (Def. 3) are equal to the oracle centralized estimators (Def. 1).

We further show in Appendix A.6 that even when local overlap (Assumption 4) holds, our federated estimators achieve lower variance than meta-analysis estimators.

Theorem 4 (Variance comparison between meta-analysis and federated estimators). Under Assumptions 1, 2 and 4, both federated and meta-analysis estimators can be computed, and

V[τ̂*_IPW] = V[τ̂^fed*_IPW] ≤ V[τ̂^meta*_IPW],   V[τ̂*_AIPW] = V[τ̂^fed*_AIPW] ≤ V[τ̂^meta*_AIPW],

with equality when the local propensity scores are equal.

We provide two main reasons for this improvement. First, our decomposition of e in Eq. 3 marginalizes over H | X, effectively removing the need to adjust for site membership H. This reduces variance by avoiding unnecessary conditioning.
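The mixture form e(X) = Σ_k ω_k(X) e_k(X), and the overlap gain it can produce, are easy to check numerically on a two-site configuration like Example 1 below. A minimal sketch with helper names of our own:

```python
import numpy as np

def federated_propensity(x, local_scores, weights):
    """Global score e(x) = sum_k w_k(x) * e_k(x), per Eq. (3)/(4).

    local_scores: list of callables e_k; weights: callable returning the
    K federated weights at x (assumed to sum to 1)."""
    w = weights(x)
    return sum(w[k] * ek(x) for k, ek in enumerate(local_scores))

# Two sites with opposite, near-deterministic treatment policies
e1, e2 = (lambda x: 0.99), (lambda x: 0.01)
mw = lambda x: [0.5, 0.5]                 # equal membership weights
e_global = federated_propensity(1.0, [e1, e2], mw)   # 0.5

overlap = lambda e: 1 / (e * (1 - e))     # pointwise version of the O metric
print(overlap(0.99), overlap(e_global))   # local ~101 vs global 4
```

Each local score sits near the overlap-violating extremes, yet their mixture is far from 0 and 1, so the global overlap metric is much smaller (better) than either local one.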
Second, our federated approach inherently yields better overlap than the siloed meta-analysis framework, as formalized in the following theorem.

Theorem 5 (Overlap improvement). 0 ≤ O_global ≤ Σ_{k=1}^K ρ_k O_k.

Theorem 5 (proved in Appendix A.7) establishes that global overlap is always at least as good as the ρ-weighted average of the local overlaps. This implies that even when local overlap holds, sites with poor overlap still benefit from the federated approach: the global propensity scores e(X) are more bounded away from 0 and 1 than the local scores {e_k(X)}_{k∈[K]}. Notably, sites with poor individual overlap can even improve the overall overlap of the federated population, as illustrated in the following example.

Example 1. Let K = 2 with X_i = 1 in both sites, n_1 = n_2, P(H_i | X_i) = 0.5, e_1(X_i) = 0.99 × X_i and e_2(X_i) = 0.01 × X_i, leading to e(X_i) = 0.5 × e_1(X_i) + 0.5 × e_2(X_i) = 0.5 × X_i. Then, O_1 = O_2 = (0.99 × 0.01)^{−1} ≈ 101, while O_global = (0.5 × 0.5)^{−1} = 4. This illustrates how heterogeneity among local propensity scores over a shared population can enhance global overlap.

4.2 Federated estimation

We now move beyond oracle estimators and describe how to implement the Fed-(A)IPW estimators in a practical federated learning setting. Our method follows a two-step procedure (which can be executed in parallel): each site k first estimates and shares a local propensity score function ê_k(x); then, global scores ê(x) are computed as weighted averages of the local scores, using weights {ω̂_k(x)}_{k∈[K]} estimated in a federated manner. For Fed-AIPW, a third step is needed to train the outcome models µ̂_0, µ̂_1 with federated learning. Details on how to estimate {ê_k(x), ω̂_k(x)}_k, along with µ̂_0, µ̂_1, are provided in the following subsections.

4.2.1 Estimation of the local propensity scores

Local propensity scores can be estimated using any probabilistic binary classification method. Common choices include logistic regression for parametric estimation and generalized random forests for non-parametric estimation (Lee et al., 2010). A key advantage of our approach is its flexibility: different estimation methods can be used across sites, tailored to local data characteristics or computational constraints.

As previously discussed, this procedure does not require the overlap assumption to hold locally. That is, local propensity scores can legitimately approach 0 or 1—as long as the global score remains sufficiently bounded away from these extremes. This property makes the approach suitable for scenarios involving external control arms (Center for Drug Evaluation and Research and Center for Biologics Evaluation and Research and Oncology Center of Excellence, 2023; European Medicines Agency, 2023) where some sites have e_k(X_i) = 0 for all i in the control group, yet still contribute to the global analysis.

4.2.2 Estimation of the federated weights

Membership weights. We begin with membership weights {ω_k(x) = P(H_i = k | X = x)}_k, which represent the probability of site membership given covariates. These weights can be estimated without access to individual-level data using any federated algorithm for multi-class classification models that can output probabilities. Federated versions of parametric models—such as logistic regression or deep neural networks—trained via gradient-based empirical risk minimization are well-suited for this task and readily supported in practice (Kairouz et al., 2021). Additionally, non-parametric models like random forests (Hauschild et al., 2022) and gradient-boosted trees (Li et al., 2020) have been adapted to the federated setting, offering more flexibility in capturing complex decision boundaries. In our experiments, we use a simple multinomial logistic regression model trained via the standard Federated Averaging (FedAvg) algorithm (McMahan et al., 2017). This approach is
Thisapproachis easy to deploy in a server–client architecture, remains interpretable, and requires exchanging onlyTKdfloats, where Tis the number of communication rounds. We refer to Appendix B for details. Densityratiosweights. Wenowturntodensityratioweights {ωk(x) =ρkfk(x) f(x)}k, where fkandfdenote the local and global covariate densities, respectively. Although various methods exist for estimating density ratios (Sugiyama et al., 2012), we are not aware of any that are directly compatible with federated settings. Standard non-parametric techniques, such as kernel density estimation, typically require sharing raw data, which violates the decentralized constraints we impose. We thus propose to estimate each local density fkusing a parametric model (e.g. via max- imum likelihood estimation). In our experiments, we assume that covariates are normally distributed within each site. Each site klocally estimates its mean ˆµk= 1/nk/summationtextnk i=1Xiand covariance matrix ˆΣk= 1/nk/summationtextnk i=1(Xi−ˆµk)(Xi−ˆµk)⊤, then transmits these to the central server. Noting that the global density fis a mixture of the Klocal densities weighted by site proportions ρk(which can be estimated by nk/n), we estimate the density ratios weights asˆωk(x) =nk nˆfk(x)/summationtextK k=1nk nˆfk(x), where each local density is given by ˆfk(x) =1 (2π)d/2|ˆΣk|1/2×exp/parenleftig −1 2(x−ˆµk)⊤ˆΣ−1 k(x−ˆµk)/parenrightig . In contrast to the estimation of membership weights, this approach requires only a single round of communication, but involves sending Kd+Kd2floats, which can be significant for high-dimensional data. More broadly, we recommend using a parametric approach to density ratio estimation when prior knowledge about the local distributions is available and well-justified. However, in the absence of reliable distributional assumptions, membership weights often offer a more robust and practical alternative.
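The FedAvg fit of the membership-weight model mentioned above can be sketched as a single-machine simulation (our own bare-bones code, not the paper's Algorithm 1): each site takes full-batch gradient steps on a multinomial logistic loss with its site index as the label, and the server averages parameters weighted by site sizes.

```python
import numpy as np

def fedavg_round(theta, site_data, lr=0.1, local_steps=1):
    """One FedAvg round for multinomial logistic membership weights.

    theta has shape (K, d); each site holds only its covariates X_k and
    uses its own index k as the class label."""
    K = len(site_data)
    updates, sizes = [], []
    for k, X in enumerate(site_data):
        th = theta.copy()
        y = np.full(len(X), k)                        # membership label = site id
        for _ in range(local_steps):
            logits = X @ th.T                         # (n_k, K)
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)         # softmax probabilities
            grad = (p - np.eye(K)[y]).T @ X / len(X)  # cross-entropy gradient
            th -= lr * grad
        updates.append(th)
        sizes.append(len(X))
    ws = np.asarray(sizes) / sum(sizes)
    return sum(wk * up for wk, up in zip(ws, updates))  # server-side average

rng = np.random.default_rng(2)
sites = [rng.normal(m, 1.0, size=(200, 2)) for m in (-1.0, 1.0)]  # two toy sites
theta = np.zeros((2, 2))
for _ in range(200):
    theta = fedavg_round(theta, sites)

# Membership weights at a point near site 2's cluster:
logits = theta @ np.array([1.0, 1.0])
p = np.exp(logits - logits.max()); p /= p.sum()
print(p)  # most of the mass goes to the second site
```

Only parameter vectors cross the network in each round, matching the T·K·d communication cost stated above.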
They allow for flexible parameterizations and can be estimated through classification models, which are generally easier to learn than densities—particularly in high-dimensional settings where density estimation suffers from high sample complexity and poor scalability (Sugiyama et al., 2012).

4.2.3 Estimation of Fed-AIPW

Fed-AIPW requires estimating the outcome models µ1 and µ0 in addition to the propensity components {ê_k(x), ω̂_k(x)}_k. The estimation procedure for {ê_k(x), ω̂_k(x)}_k follows the same steps outlined in Sections 4.2.1 and 4.2.2. For the outcome models, one can estimate µ0 and µ1 globally using federated gradient descent, as in Khellaf et al. (2025), with parametric models such as logistic regression for binary outcomes or linear regression for continuous outcomes. As with the estimation of membership weights, our framework also supports more flexible outcome modeling using a broad range of function classes, including neural networks, decision forests, and other gradient-based models adapted to the federated setting.

5 Experiments

Synthetic data. We consider K = 3 sites and d = 10 covariates. Two distinct data-generating processes (DGPs) are used in the simulations. In DGP A, each site k independently samples n_k = 500 individuals from a site-specific multivariate Gaussian distribution N(µ_k, Σ_k). In DGP B, a total of n = 1500 individuals are first drawn from a bimodal Gaussian mixture and then assigned to sites using a multinomial logistic model based on their covariates. We vary the degree of overlap within sites to reflect different practical scenarios: No local overlap (the second site has no treated individuals, hence O_2 = ∞), Poor local overlap (O_2 ≈ 10^7), and Good local overlap (O_2 ≈ 4.6). The outcome models µ1 and µ0 are shared across sites and specified as linear functions with interaction terms. For comparison, we also generate data consistent with the setting described in Xiong et al. (2023) (Figure 3). All reported results are aggregated over 1,500 simulation runs.
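A DGP A-style draw can be sketched as follows; the means, covariances, and propensity coefficients here are illustrative choices of our own (the paper's exact parameters are in its Appendix C):

```python
import numpy as np

rng = np.random.default_rng(4)
K, d, n_k = 3, 10, 500

# Site-specific Gaussian covariates (illustrative means, identity covariances)
mus = [np.full(d, m) for m in (-1.0, 0.0, 1.0)]
X_sites = [rng.multivariate_normal(mu, np.eye(d), size=n_k) for mu in mus]

# Heterogeneous site-specific logistic treatment assignment
betas = [rng.normal(scale=0.5, size=d) for _ in range(K)]
W_sites = [rng.binomial(1, 1 / (1 + np.exp(-X @ b)))
           for X, b in zip(X_sites, betas)]

print([w.mean() for w in W_sites])  # treated fraction per site
```

Varying the local coefficient vectors (e.g., forcing one site's treated fraction to zero) is how the No/Poor/Good local overlap regimes described above can be reproduced.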
Complete details of the simulation setup are provided in Appendix C. We evaluate several estimators: (i) our proposed Fed-IPW and Fed-AIPW, with either Membership Weights (MW) (Eq. 3) estimated via federated multinomial logistic regression (well-specified in DGP B), or Density Ratio Weights (DW) (Eq. 4) based on Gaussian models (well-specified in DGP A); (ii) the Centralized Oracle (Def. 1); (iii) meta-analysis IPW and AIPW estimators with sample size weighting (Meta-SW) (Def. 2); and (iv) the one-shot inverse variance weighted IPW estimator (1S-IVW) from Xiong et al. (2023), evaluated under a favorable setting for this method—specifically, we assume that certain propensity score parameters are shared across sites. For all estimators, propensity scores are estimated with logistic regression, and outcome models with linear regression.

The results for the three degrees of local overlap are shown in Figures 2a, 2b and 2c, with the top (respectively bottom) row showing DGP A (respectively DGP B). Before discussing each setting separately, we start with some general observations. First, Fed-IPW-DW is unbiased in DGP A but biased in DGP B, while our Fed-IPW-MW is unbiased in DGP B but biased in DGP A, as expected. Note that Fed-IPW-MW could be made unbiased by using more flexible models (e.g., neural networks) to estimate membership probabilities. Second, our Fed-AIPW estimators remain unbiased across all overlap levels thanks to their double robustness property: despite misspecified linear outcome models, well-estimated propensity scores ensure consistency. Finally, in most cases our Fed-IPW estimators exhibit lower variance than the oracle IPW estimator. This is because they leverage estimated global propensity scores, a strategy known to reduce variance in IPW estimation (Hirano et al., 2003).

In the No local overlap setting (Figure 2a), meta-analysis estimators are undefined due to the absence of treated individuals at one site. In contrast, our federated estimators remain unbiased under both DGPs, as Assumption 3 holds (with global overlap O_global ≈ 6.22). In the Poor local overlap setting (Figure 2b), site 2 lacks sufficient overlap, leading to bias and instability in the meta-analysis estimators, including Meta-AIPW, as both the propensity weights and local outcome models are inaccurate. However, this issue is mitigated in the global dataset (see Figure 5 in the Appendix), allowing our federated estimators to perform reliably. Finally, in the Good local overlap setting (Figure 2c), while all competitors are unbiased, our federated methods exhibit lower variance.

In Figure 3, we consider a specific setting in which all local propensity score functions share a common subset of 5 out of 10 logistic regression coefficients with data generated from DGP B. This setup aligns with the assumptions of the 1S-IVW method proposed by Xiong et al. (2023), which relies on prior knowledge of the shared coefficients—an assumption our method does not require. Nevertheless, Fed-MW is unbiased and exhibits the smallest variance.

Figure 2: Synthetic data: DGP A (top panel) and B (bottom). Legend: Fed-MW, Fed-DW, Meta-SW, 1S-IVW, Centralized IPW/AIPW, Centralized Oracle, True ATE.

Real data. We apply our methods to the multi-site observational Traumabase cohort, which contains data on patients with traumatic brain injury (Mayer et al., 2020; Colnet et al., 2024). We study the effect of tranexamic acid (TA), a drug used to reduce bleeding, on the binary outcome of mortality among trauma patients. For this analysis, we select the K = 4 centres with the highest number of treated patients (min. 47), totaling 472 treated and 5,531 control patients, each with 17 measured covariates. We standardize the covariates
to have zero mean and unit variance in a federated manner by computing and sharing the means and variances of the covariates across sites. We focus on AIPW estimators. For Fed-MW and Fed-DW, we estimate the e_k's with local logistic regression, while the outcome models µ1, µ0 and the membership weights are learned with a logistic regression model trained with FedAvg (Algorithm 1 in the Appendix) with T = 5000 rounds, E = 1 local step and learning rate η = 0.1. The density ratio weights are computed under a Gaussian model for the covariates. We compare our estimators to Meta-SW and centralized AIPW estimators using also linear and logistic models. To construct empirical confidence intervals, we perform 1000 bootstrap resamples. The results in Figure 4 demonstrate that our federated approach yields estimates comparable to those of the centralized method, while exhibiting lower variance than Meta-SW. Notably, prior analyses using the combined data from a larger set of sites have reported a non-significant average treatment effect (Mayer et al., 2020; Colnet et al., 2024), which is consistent with our findings on this data subset.

Figure 2 panels: (a) No local overlap; (b) Poor local overlap; (c) Good local overlap.

Figure 3: Comparison to Xiong et al. (2023).

Figure 4: Results of Traumabase experiment.

6 Conclusion, limitations and future work

We present a theoretically grounded framework for federated causal
inference from decentralized observational data. We introduce two methods for federating local propensity scores—via estimated membership probabilities or density ratios—both of which are designed to be robust to covariate shift and heterogeneity in propensity scores. Our methods improve covariate overlap, leading to more stable and accurate estimates of the ATE. The framework is flexible, allowing the integration of external control arms and supporting heterogeneous local propensity score estimation strategies.

One of the common benefits of federated learning is the ability to increase the overall sample size by leveraging data from multiple sites. While our approach does offer this advantage, it nonetheless requires a sufficiently large amount of data per site, especially when the number of variables is high. Indeed, accurate estimation of key quantities—local propensity scores, parameters of the distribution of the features per site, and membership probabilities—relies on sufficient local data. For the latter, the issue becomes more pronounced as the number of sites grows.

RCTs and observational data each have strengths and limitations. Combining them can harness their respective advantages while mitigating weaknesses—supporting goals like improving heterogeneous effect estimation, addressing unobserved confounding, and enhancing generalizability. Estimating treatment effects from diverse sources is thus a promising direction for future federated causal inference. Finally, estimating the CATE in a federated setting is crucial for advancing beyond average treatment effects, towards more personalization in treatment decision-making.

Acknowledgments

This work has been done in the frame of the PEPR SN SMATCH project and has benefited from a governmental grant managed by the Agence Nationale de la Recherche under the France 2030 programme, reference ANR-22-PESN-0003. Rémi Khellaf was supported by an AXIAUM fellowship.

References

Bang, H.
and Robins, J. M. (2005). Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962–973.

Burke, D. L., Ensor, J., and Riley, R. D. (2017). Meta-analysis using individual participant data: one-stage and two-stage approaches, and why they may differ. Statistics in Medicine, 36(5):855–875.

Center for Drug Evaluation and Research and Center for Biologics Evaluation and Research and Oncology Center of Excellence (2023). Considerations for the design and conduct of externally controlled trials for drug and biological products. https://www.fda.gov/media/164960/download. U.S. Food and Drug Administration.

Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters.

Clivio, O., Bruns-Smith, D., Feller, A., and Holmes, C. C. (2024). Towards principled representation learning to improve overlap in treatment effect estimation. In 9th Causal Inference Workshop at UAI 2024.

Colnet, B., Mayer, I., Chen, G., Dieng, A., Li, R., Varoquaux, G., Vert, J.-P., Josse, J., and Yang, S. (2024). Causal inference methods for combining randomized trials and observational studies: a review. Statistical Science, 39(1):165–191.

European Medicines Agency (2023). Reflection paper on establishing efficacy based on single arm trials submitted as pivotal evidence in a marketing authorisation. European Medicines Agency.

European Medicines Agency (2024). ICH E9 Statistical Principles for Clinical Trials: Scientific Guideline.

Grimes, D. A. and Schulz, K. F. (2002). Bias and causal associations in observational research. The Lancet,
359(9302):248–252.

Guo, T., Karimireddy, S. P., and Jordan, M. I. (2024). Collaborative heterogeneous causal inference beyond meta-analysis. arXiv preprint arXiv:2404.15746.

Guo, Z., Li, X., Han, L., and Cai, T. (2025). Robust inference for federated meta-learning. Journal of the American Statistical Association, pages 1–16.

Han, L., Hou, J., Cho, K., Duan, R., and Cai, T. (2025). Federated adaptive causal estimation (FACE) of target treatment effects. Journal of the American Statistical Association, (just-accepted):1–25.

Hauschild, A.-C., Lemanczyk, M., Matschinske, J., Frisch, T., Zolotareva, O., Holzinger, A., Baumbach, J., and Heider, D. (2022). Federated random forests can improve local performance of predictive models for various healthcare applications. Bioinformatics, 38(8):2278–2286.

Hernán, M. A. (2018). The c-word: scientific euphemisms do not improve causal inference from observational data. American Journal of Public Health, 108(5):616–619.

Hernán, M. A. and Robins, J. M. (2006). Estimating causal effects from epidemiological data. Journal of Epidemiology & Community Health, 60(7):578–586.

Hirano, K., Imbens, G. W., and Ridder, G. (2003). Efficient estimation of average treatment effects using the estimated propensity score. Econometrica, 71(4):1161–1189.

Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al. (2021). Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1–2):1–210.

Khaled, A., Mishchenko, K., and Richtárik, P. (2020). Tighter theory for local SGD on identical and heterogeneous data. In AISTATS.

Khellaf, R., Bellet, A., and Josse, J. (2025). Federated Causal Inference: Multi-Study ATE Estimation beyond Meta-Analysis. In AISTATS.

Lee, B. K., Lessler, J., and Stuart, E. A. (2010). Improving propensity score weighting using machine learning. Statistics in Medicine, 29(3):337–346.

Lei, L.
and Ding, P. (2021). Regression adjustment in completely randomized experiments with a diverging number of covariates. Biometrika, 108(4):815–828.

Li, F., Morgan, K. L., and Zaslavsky, A. M. (2018a). Balancing covariates via propensity score weighting. Journal of the American Statistical Association, 113(521):390–400.

Li, F., Thomas, L. E., and Li, F. (2018b). Addressing extreme propensity scores via the overlap weights. American Journal of Epidemiology, 188(1):250–257.

Li, Q., Wen, Z., and He, B. (2020). Practical federated gradient boosting decision trees. In AAAI, pages 4642–4649.

Li, X., Huang, K., Yang, W., Wang, S., and Zhang, Z. (2019). On the convergence of FedAvg on non-iid data. arXiv preprint arXiv:1907.02189.

Mayer, I., Sverdrup, E., Gauss, T., Moyer, J.-D., Wager, S., and Josse, J. (2020). Doubly robust treatment effect estimation with missing attributes. The Annals of Applied Statistics, 14(3):1409–1431.

McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273–1282. PMLR.

Robins, J. (1986). A new approach to causal inference in mortality studies with a sustained exposure period—application to control of the healthy worker survivor effect. Mathematical Modelling, 7(9-12):1393–1512.

Rosenbaum, P. R. and Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55.

Rubin, D. B. (1974). Estimating
causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):688–701.

Splawa-Neyman, J., Dabrowska, D. M., and Speed, T. P. (1990). On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Statistical Science, 5(4):465–472.

Stich, S. U. (2019). Local SGD converges fast and communicates little. In ICLR.

Sugiyama, M., Suzuki, T., and Kanamori, T. (2012). Density Ratio Estimation in Machine Learning. Cambridge University Press.

U.S. Food and Drug Administration (2023). Adjusting for covariates in randomized clinical trials for drugs and biological products.

VanderWeele, T. J. (2019). Principles of confounder selection. European Journal of Epidemiology, 34:211–219.

Vo, T. V., Lee, Y., Hoang, T. N., and Leong, T.-Y. (2022). Bayesian federated estimation of causal effects from observational data. In Cussens, J. and Zhang, K., editors, Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, volume 180 of Proceedings of Machine Learning Research, pages 2024–2034. PMLR.

Wager, S. (2024). Causal inference: A statistical learning approach.

Xiong, R., Koenecke, A., Powell, M., Shen, Z., Vogelstein, J. T., and Athey, S. (2023). Federated causal inference in heterogeneous observational data. Statistics in Medicine, 42(24):4418–4439.

Yin, C., Chen, H.-Y., Chao, W.-L., and Zhang, P. (2025). Federated inverse probability treatment weighting for individual treatment effect estimation. arXiv preprint arXiv:2503.04946.

A Proofs

A.1 ATE Identification and Sufficient Conditions

Proof. Under Assumptions 1 and 2, the variables in X_i are a sufficient adjustment set, so that the ATE is identifiable as
\begin{align*}
\tau = \mathbb{E}[Y(1)-Y(0)]
&= \mathbb{E}\big[\mathbb{E}[Y(1)\mid X]-\mathbb{E}[Y(0)\mid X]\big] \\
&= \mathbb{E}\big[\mathbb{E}[Y(1)\mid X, W=1]-\mathbb{E}[Y(0)\mid X, W=0]\big] && \text{Assumptions 1 and 2} \\
&= \mathbb{E}\big[\mathbb{E}[Y\mid X, W=1]-\mathbb{E}[Y\mid X, W=0]\big] && \text{SUTVA} \\
&= \mathbb{E}\left[\frac{WY}{e(X)}-\frac{(1-W)Y}{1-e(X)}\right] && \text{Assumption 3}
\end{align*}

A.2 Proof of Theorem 1

Proof. Unbiasedness.
We have
\begin{align*}
\mathbb{E}\big[\hat\tau^{IPW*}\big]
&= \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{W_iY_i}{e(X_i)}-\frac{(1-W_i)Y_i}{1-e(X_i)}\right)\right] \\
&= \frac{1}{n}\sum_{k=1}^{K}\mathbb{E}\left[\mathbb{E}\left[\sum_{i=1}^{n}\left(\frac{W_iY_i}{e(X_i)}-\frac{(1-W_i)Y_i}{1-e(X_i)}\right)\mathbb{1}[H_i=k]\,\middle|\,n_1,\dots,n_K\right]\right] \\
&= \frac{1}{n}\sum_{k=1}^{K}\mathbb{E}[n_k]\left(\mathbb{E}\left[\frac{W_iY_i}{e(X_i)}\,\middle|\,H_i=k\right]-\mathbb{E}\left[\frac{(1-W_i)Y_i}{1-e(X_i)}\,\middle|\,H_i=k\right]\right) \\
&= \sum_{k=1}^{K}\rho_k\left(\mathbb{E}\left[\frac{W_iY_i}{e(X_i)}\,\middle|\,H_i=k\right]-\mathbb{E}\left[\frac{(1-W_i)Y_i}{1-e(X_i)}\,\middle|\,H_i=k\right]\right) \tag{6}
\end{align*}
Let us focus on the first term of the sum:
\begin{align*}
\mathbb{E}\left[\frac{W_iY_i}{e(X_i)}\,\middle|\,H_i=k\right]
&= \mathbb{E}\left[\frac{W_iY_i(1)}{e(X_i)}\,\middle|\,H_i=k\right] && \text{SUTVA} \\
&= \mathbb{E}\left[\mathbb{E}\left[\frac{W_iY_i(1)}{e(X_i)}\,\middle|\,X_i,H_i=k\right]\right] \\
&= \mathbb{E}\left[\frac{\mathbb{E}[W_i\mid X_i,H_i=k]\,\mathbb{E}[Y_i(1)\mid X_i,H_i=k]}{e(X_i)}\right] && \text{Assumptions 1 and 2} \\
&= \mathbb{E}\left[\frac{e_k(X_i)\,\mathbb{E}[Y_i(1)\mid X_i,H_i=k]}{e(X_i)}\right] \\
&= \mathbb{E}\left[\frac{e_k(X_i)\,\mathbb{E}[Y_i(1)\mid X_i,H_i=k]}{\mathbb{E}[e_k(X_i)\mid X_i,H_i=k]}\right] && \text{definition of } e(X_i) \\
&= \mathbb{E}\left[\mathbb{E}[Y_i(1)\mid X_i,H_i=k]\,\mathbb{E}\left[\frac{e_k(X_i)}{\mathbb{E}[e_k(X_i)]}\,\middle|\,X_i\right]\right] \\
&= \mathbb{E}[Y_i(1)\mid H_i=k]
\end{align*}
Similarly, we have \(\mathbb{E}\big[\frac{(1-W_i)Y_i}{1-e(X_i)}\mid H_i=k\big]=\mathbb{E}[Y_i(0)\mid H_i=k]\), so that \(\mathbb{E}\big[\hat\tau^{IPW*}\big]=\sum_{k=1}^{K}\rho_k\tau_k\), which concludes the proof of unbiasedness of the oracle multi-site centralized IPW estimator.

For the AIPW estimator, following the same steps as in eq.
(6), we have
\begin{align*}
\mathbb{E}\big[\hat\tau^{AIPW*}\big]
&= \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left(\mu_1(X_i)-\mu_0(X_i)+\frac{W_i(Y_i-\mu_1(X_i))}{e(X_i)}-\frac{(1-W_i)(Y_i-\mu_0(X_i))}{1-e(X_i)}\right)\right] \\
&= \sum_{k=1}^{K}\rho_k\left(\mathbb{E}[\mu_1(X_i)-\mu_0(X_i)\mid H_i=k]+\mathbb{E}\left[\frac{W_i(Y_i-\mu_1(X_i))}{e(X_i)}\,\middle|\,H_i=k\right]-\mathbb{E}\left[\frac{(1-W_i)(Y_i-\mu_0(X_i))}{1-e(X_i)}\,\middle|\,H_i=k\right]\right) \\
&= \sum_{k=1}^{K}\rho_k\big(\mathbb{E}[\tau(X_i)\mid H_i=k]+0+0\big) = \tau
\end{align*}

Variance. As we consider uniformly bounded potential outcomes, that is, \(\forall w\in\{0,1\},\ \exists (L,U)\in\mathbb{R}^2,\ L<Y(w)<U\), together with Assumption 3, we have \(\mathbb{E}\big[\frac{Y_i(1)^2}{e(X_i)}\big]<\infty\) and \(\mathbb{E}\big[\frac{Y_i(0)^2}{1-e(X_i)}\big]<\infty\), so the quantities that follow are well defined:
\begin{align*}
\mathbb{V}\big[\hat\tau^{IPW*}\big]
&= \mathbb{V}\left[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{W_iY_i}{e(X_i)}-\frac{(1-W_i)Y_i}{1-e(X_i)}\right)\right] \\
&= \frac{1}{n^2}\sum_{i=1}^{n}\mathbb{V}\left[\frac{W_iY_i}{e(X_i)}-\frac{(1-W_i)Y_i}{1-e(X_i)}\right] && \text{i.i.d. observations} \\
&= \frac{1}{n^2}\sum_{i=1}^{n}\left(\mathbb{E}\left[\left(\frac{W_iY_i}{e(X_i)}-\frac{(1-W_i)Y_i}{1-e(X_i)}\right)^{2}\right]-\tau^2\right) && \text{unbiasedness} \\
&= \frac{1}{n^2}\sum_{i=1}^{n}\left(\mathbb{E}\left[\left(\frac{W_iY_i}{e(X_i)}\right)^{2}\right]+\mathbb{E}\left[\left(\frac{(1-W_i)Y_i}{1-e(X_i)}\right)^{2}\right]-2\,\mathbb{E}[0]-\tau^2\right) \\
&= \frac{1}{n^2}\sum_{i=1}^{n}\left(\mathbb{E}\left[\left(\frac{W_iY_i}{e(X_i)}\right)^{2}\right]+\mathbb{E}\left[\left(\frac{(1-W_i)Y_i}{1-e(X_i)}\right)^{2}\right]-\tau^2\right)
\end{align*}
For the first second moment,
\begin{align*}
\mathbb{E}\left[\left(\frac{W_iY_i}{e(X_i)}\right)^{2}\right]
&= \mathbb{E}\left[\frac{\mathbb{E}\big[(W_iY_i)^2\mid X_i\big]}{e(X_i)^2}\right] \\
&= \mathbb{E}\left[\frac{\mathbb{E}[W_i\mid X_i]\,\mathbb{E}\big[Y_i^2\mid X_i,W_i=1\big]}{e(X_i)^2}\right] && \text{Assumption 1} \\
&= \mathbb{E}\left[\frac{\mathbb{E}\big[Y_i(1)^2\mid X_i\big]}{e(X_i)}\right] && \text{SUTVA} \\
&= \mathbb{E}\left[\frac{Y_i(1)^2}{e(X_i)}\right]
\end{align*}
Similarly, \(\mathbb{E}\big[\big(\frac{(1-W_i)Y_i}{1-e(X_i)}\big)^{2}\big]=\mathbb{E}\big[\frac{Y_i(0)^2}{1-e(X_i)}\big]\). Then,
the variance of the oracle multi-site IPW estimator is
\[
\mathbb{V}\big[\hat\tau^{IPW*}\big]=\frac{1}{n}\left(\mathbb{E}\left[\frac{Y_i(1)^2}{e(X_i)}\right]+\mathbb{E}\left[\frac{Y_i(0)^2}{1-e(X_i)}\right]-\tau^2\right).
\]
For the variance of the oracle multi-site centralized AIPW, we first notice that since \(\mathbb{E}[Y_i(w)-\mu_w(X_i)\mid X_i]=0\) for treatment \(w\), we have
\begin{align*}
A &= \mathrm{Cov}\left(\tau(X_i),\,\frac{W_i(Y_i-\mu_1(X_i))}{e(X_i)}-\frac{(1-W_i)(Y_i-\mu_0(X_i))}{1-e(X_i)}\right) \\
&= \mathbb{E}\left[\tau(X_i)\left(\frac{W_i(Y_i-\mu_1(X_i))}{e(X_i)}-\frac{(1-W_i)(Y_i-\mu_0(X_i))}{1-e(X_i)}\right)\right] \\
&= \mathbb{E}\left[\tau(X_i)\,\mathbb{E}\left[\frac{W_i(Y_i-\mu_1(X_i))}{e(X_i)}\,\middle|\,X_i\right]\right]-\mathbb{E}\left[\tau(X_i)\,\mathbb{E}\left[\frac{(1-W_i)(Y_i-\mu_0(X_i))}{1-e(X_i)}\,\middle|\,X_i\right]\right] \\
&= \mathbb{E}\left[\tau(X_i)\,\frac{e(X_i)\,\mathbb{E}[Y_i(1)-\mu_1(X_i)\mid X_i]}{e(X_i)}\right]-\mathbb{E}\left[\tau(X_i)\,\frac{(1-e(X_i))\,\mathbb{E}[Y_i(0)-\mu_0(X_i)\mid X_i]}{1-e(X_i)}\right] \\
&= 0
\end{align*}
Then,
\begin{align*}
\mathbb{V}\big[\hat\tau^{AIPW*}\big]
&= \mathbb{V}\left[\frac{1}{n}\sum_{i=1}^{n}\left(\mu_1(X_i)-\mu_0(X_i)+\frac{W_i(Y_i-\mu_1(X_i))}{e(X_i)}-\frac{(1-W_i)(Y_i-\mu_0(X_i))}{1-e(X_i)}\right)\right] \\
&= \frac{1}{n^2}\sum_{i=1}^{n}\left(\mathbb{V}[\tau(X_i)]+\mathbb{V}\left[\frac{W_i(Y_i-\mu_1(X_i))}{e(X_i)}\right]+\mathbb{V}\left[\frac{(1-W_i)(Y_i-\mu_0(X_i))}{1-e(X_i)}\right]+2A\right) \\
&= \frac{1}{n}\left(\mathbb{V}[\tau(X_i)]+\mathbb{E}\left[\frac{(Y-\mu_1(X_i))^2}{e(X_i)}\right]+\mathbb{E}\left[\frac{(Y-\mu_0(X_i))^2}{1-e(X_i)}\right]\right)
\end{align*}
Because these variances are well defined from what precedes, the Central Limit Theorem can be applied to \(\hat\tau^{IPW*}\) and \(\hat\tau^{AIPW*}\), which gives the result in Theorem 1.

A.3 Proof of Theorem 2

Proof. Unbiasedness.
We first prove the unbiasedness of the local IPW estimators:
\begin{align*}
\mathbb{E}\big[\hat\tau_k^{IPW*}\mid H_i=k\big]
&= \mathbb{E}\left[\frac{W_iY_i}{e_k(X_i)}-\frac{(1-W_i)Y_i}{1-e_k(X_i)}\,\middle|\,H_i=k\right] && \text{i.i.d.} \\
&= \mathbb{E}\left[\frac{W_iY_i(1)}{e_k(X_i)}-\frac{(1-W_i)Y_i(0)}{1-e_k(X_i)}\,\middle|\,H_i=k\right] && \text{SUTVA} \\
&= \mathbb{E}\left[\mathbb{E}\left[\frac{W_iY_i(1)}{e_k(X_i)}\,\middle|\,H_i=k,X_i\right]-\mathbb{E}\left[\frac{(1-W_i)Y_i(0)}{1-e_k(X_i)}\,\middle|\,H_i=k,X_i\right]\right] \\
&= \mathbb{E}\left[\frac{\mathbb{E}[W_i\mid H_i=k,X_i]\,\mathbb{E}[Y_i(1)\mid H_i=k,X_i]}{e_k(X_i)}-\frac{\mathbb{E}[1-W_i\mid H_i=k,X_i]\,\mathbb{E}[Y_i(0)\mid H_i=k,X_i]}{1-e_k(X_i)}\right] && \text{Ass. 1} \\
&= \mathbb{E}\big[\mathbb{E}[Y_i(1)-Y_i(0)\mid H_i=k,X_i]\big] = \mathbb{E}[Y_i(1)-Y_i(0)\mid H_i=k] = \tau_k
\end{align*}
This yields
\[
\mathbb{E}\big[\hat\tau^{\text{meta-IPW}*}\big]
= \mathbb{E}\left[\sum_{k=1}^{K}\frac{n_k}{n}\hat\tau_k^{IPW*}\right]
= \sum_{k=1}^{K}\mathbb{E}\left[\frac{n_k}{n}\,\mathbb{E}\big[\hat\tau_k^{IPW*}\mid H_i=k\big]\right]
= \sum_{k=1}^{K}\mathbb{E}\left[\frac{n_k}{n}\right]\tau_k
= \sum_{k=1}^{K}\rho_k\tau_k
= \tau
\]

Variance. For the variance of the local ATEs, we follow the same steps as in the previous proof, which yields
\[
\mathbb{V}\big[\hat\tau_k^{IPW*}\mid H=k\big]
= \frac{1}{n_k}\left(\mathbb{E}\left[\frac{Y_i(1)^2}{e_k(X_i)}\,\middle|\,H_i=k\right]+\mathbb{E}\left[\frac{Y_i(0)^2}{1-e_k(X_i)}\,\middle|\,H_i=k\right]-\tau_k^2\right)
= \frac{1}{n_k}V_{k,i}
\]
with \(V_{k,i}=\mathbb{E}\big[\frac{Y_i(1)^2}{e_k(X_i)}\mid H_i=k\big]+\mathbb{E}\big[\frac{Y_i(0)^2}{1-e_k(X_i)}\mid H_i=k\big]-\tau_k^2\). Finally, by Lemma 2, we have
\[
\mathbb{V}\big[\hat\tau^{\text{meta-IPW}*}\big]
= \mathbb{E}\big[\mathbb{V}\big[\hat\tau^{\text{meta-IPW}*}\mid H=k\big]\big]+\mathbb{V}\big[\mathbb{E}\big[\hat\tau^{\text{meta-IPW}*}\mid H=k\big]\big]
= \frac{1}{n}\sum_{k=1}^{K}\rho_k V_{k,i}+\frac{1}{n}\mathbb{V}[\tau_H]
\]
The proof for the meta AIPW estimator follows the same steps.

A.4 Decomposition of the global propensity score

By the law of total probability,
\begin{align*}
e(x) = P(W_i=1\mid X_i=x)
&= \sum_{k=1}^{K}P(W_i=1\cap H_i=k\mid X_i=x) \\
&= \sum_{k=1}^{K}P(H_i=k\mid X_i=x)\,P(W_i=1\mid X_i=x,H_i=k) \\
&= \sum_{k=1}^{K}P(H_i=k\mid X_i=x)\,e_k(x) && \text{(Eq. 3)} \\
&= \sum_{k=1}^{K}\frac{P(H_i=k)\,P(X_i=x\mid H_i=k)}{P(X_i=x)}\,e_k(x) && \text{(Eq. 4)}
\end{align*}

A.5 Proof of Theorem 3

Proof.
\[
\hat\tau^{\text{fed}*}_{IPW}
= \sum_{k=1}^{K}\frac{n_k}{n}\hat\tau^{\text{fed}(k)}_{IPW}
= \sum_{k=1}^{K}\frac{n_k}{n}\left(\frac{1}{n_k}\sum_{i=1}^{n_k}\left(\frac{W_iY_i}{e(X_i)}-\frac{(1-W_i)Y_i}{1-e(X_i)}\right)\right)
= \frac{1}{n}\sum_{i=1}^{n}\left(\frac{W_iY_i}{e(X_i)}-\frac{(1-W_i)Y_i}{1-e(X_i)}\right)
= \hat\tau^{IPW*}
\]
We prove similarly that \(\hat\tau^{\text{fed}*}_{AIPW}=\hat\tau^{AIPW*}\).

A.6 Proof of Theorem 4

We start with two technical lemmas.

Lemma 1. Under Assumptions 1 and 2 we have
\[
\mathbb{V}[\tau(X_i)]=\sum_{k=1}^{K}\rho_k\,\mathbb{V}[\tau(X_i)\mid H_i=k]+\mathbb{V}[\tau_{H_i}]
\]
Proof.
\begin{align*}
\mathbb{V}[\tau(X_i)]
&= \mathbb{E}\big[\mathbb{V}[\tau(X_i)\mid H_i=k]\big]+\mathbb{V}\big[\mathbb{E}[\tau(X_i)\mid H_i=k]\big] \\
&= \sum_{k=1}^{K}\rho_k\,\mathbb{V}[\tau(X_i)\mid H_i=k]+\mathbb{V}\left[\sum_{k=1}^{K}\mathbb{1}[H_i=k]\,\tau_{H_i}\right] \\
&= \sum_{k=1}^{K}\rho_k\,\mathbb{V}[\tau(X_i)\mid H_i=k]+\mathbb{V}\left[\tau_{H_i}\sum_{k=1}^{K}\mathbb{1}[H_i=k]\right] \\
&= \sum_{k=1}^{K}\rho_k\,\mathbb{V}[\tau(X_i)\mid H_i=k]+\mathbb{V}[\tau_{H_i}]
\end{align*}

Lemma 2. For the general form of meta-analysis estimator \(\hat\tau^{\text{meta}}=\sum_{k=1}^{K}\frac{n_k}{n}\hat\tau_k\), we have
\[
\mathbb{V}\big[\hat\tau^{\text{meta}}\big]=\frac{1}{n}\sum_{k=1}^{K}\rho_k V_k+\frac{1}{n}\mathbb{V}[\tau_{H_i}]
\]
where \(V_k=\mathbb{V}[\hat\tau_k\mid H_i=k]\) is the within-site variance of the ATE estimator in site \(k\) and \(\mathbb{V}[\tau_{H_i}]=\mathbb{V}[\mathbb{E}[Y_i(1)-Y_i(0)\mid H_i]]\) is the between-sites variance of the local ATEs.

Proof.
\begin{align*}
\mathbb{V}\big[\hat\tau^{\text{meta}}\big]
&= \mathbb{V}\left[\sum_{k=1}^{K}\frac{n_k}{n}\hat\tau_k\right] \\
&= \mathbb{E}\left[\mathbb{V}\left[\sum_{k=1}^{K}\frac{n_k}{n}\hat\tau_k\,\middle|\,H_1,\dots,H_n\right]\right]+\mathbb{V}\left[\mathbb{E}\left[\sum_{k=1}^{K}\frac{n_k}{n}\hat\tau_k\,\middle|\,H_1,\dots,H_n\right]\right] \\
&= \mathbb{E}\left[\sum_{k=1}^{K}\frac{n_k}{n^2}V_k\right]+\mathbb{V}\left[\sum_{k=1}^{K}\frac{n_k}{n}\tau_k\right] \\
&= \frac{1}{n}\sum_{k=1}^{K}\rho_k V_k+\mathbb{V}\left[\frac{1}{n}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathbb{1}[H_i=k]\,\tau_{H_i}\right] \\
&= \frac{1}{n}\sum_{k=1}^{K}\rho_k V_k+\mathbb{V}\left[\frac{1}{n}\sum_{i=1}^{n}\tau_{H_i}\sum_{k=1}^{K}\mathbb{1}[H_i=k]\right] \\
&= \frac{1}{n}\sum_{k=1}^{K}\rho_k V_k+\mathbb{V}\left[\frac{1}{n}\sum_{i=1}^{n}\tau_{H_i}\right] \\
&= \frac{1}{n}\sum_{k=1}^{K}\rho_k V_k+\frac{1}{n}\mathbb{V}[\tau_H]
\end{align*}

We can now prove Theorem 4.

Proof. We begin with IPW estimators, and then move to AIPW.

IPW. First, applying Jensen's inequality with the strictly convex function \(t\mapsto\frac{1}{t}\) on \(]0,1[\) and summing-to-one weights \(\omega_k(X)=P(H_i=k\mid X_i=X)\), we have
\[
\mathbb{E}\left[\frac{Y_i(1)^2}{e(X_i)}\right]<\mathbb{E}\left[\sum_{k=1}^{K}\omega_k(X)\frac{Y_i(1)^2}{e_k(X_i)}\right]
\]
if \(\exists (k,k')\in[K]^2,\ e_k(X_i)\neq e_{k'}(X_i)\), and equality if \(\forall k\in[K],\ e_k(X_i)=e(X_i)\), i.e. if the local propensity scores are all equal to one another. Then, with
the same condition on strictness and equality,
\begin{align*}
\mathbb{E}\left[\frac{Y_i(1)^2}{e(X_i)}\right]
&\le \mathbb{E}\left[\sum_{k=1}^{K}\omega_k(X)\frac{Y_i(1)^2}{e_k(X_i)}\right] \\
&= \sum_{k=1}^{K}\mathbb{E}\left[\mathbb{E}\big[\mathbb{1}[H_i=k]\mid X_i\big]\frac{Y_i(1)^2}{e_k(X_i)}\right] \\
&= \sum_{k=1}^{K}\mathbb{E}\left[\mathbb{E}\left[\mathbb{1}[H_i=k]\frac{Y_i(1)^2}{e_k(X_i)}\,\middle|\,X_i\right]\right] \\
&= \sum_{k=1}^{K}\mathbb{E}\left[\mathbb{1}[H_i=k]\frac{Y_i(1)^2}{e_k(X_i)}\right] \\
&= \sum_{k=1}^{K}\rho_k\,\mathbb{E}\left[\frac{Y_i(1)^2}{e_k(X_i)}\,\middle|\,H_i=k\right]
\end{align*}
Similarly, \(\mathbb{E}\big[\frac{Y_i(0)^2}{1-e(X_i)}\big]\le\sum_{k=1}^{K}\rho_k\,\mathbb{E}\big[\frac{Y_i(0)^2}{1-e_k(X_i)}\mid H_i=k\big]\). Then,
\begin{align*}
\mathbb{V}\big[\hat\tau^{\text{fed-IPW}*}\big]
&= \frac{1}{n}\left(\mathbb{E}\left[\frac{Y_i(1)^2}{e(X_i)}+\frac{Y_i(0)^2}{1-e(X_i)}\right]-\tau^2\right) \\
&\le \frac{1}{n}\left(\sum_{k=1}^{K}\rho_k\,\mathbb{E}\left[\frac{Y_i(1)^2}{e_k(X_i)}+\frac{Y_i(0)^2}{1-e_k(X_i)}\,\middle|\,H_i=k\right]-\tau^2\right) \\
&= \frac{1}{n}\Bigg(\sum_{k=1}^{K}\rho_k\underbrace{\left(\mathbb{E}\left[\frac{Y_i(1)^2}{e_k(X_i)}+\frac{Y_i(0)^2}{1-e_k(X_i)}\,\middle|\,H_i=k\right]-\tau_k^2\right)}_{:=V_k}+\underbrace{\sum_{k=1}^{K}\rho_k\tau_k^2-\tau^2}_{\mathbb{V}(\tau_H)}\Bigg)
\end{align*}
On the other hand, by Lemma 2,
\[
\mathbb{V}\big[\hat\tau^{\text{meta-IPW}*}\big]=\frac{1}{n}\sum_{k=1}^{K}\rho_k V_k+\frac{1}{n}\mathbb{V}[\tau_H].
\]
Hence \(\mathbb{V}\big[\hat\tau^{\text{fed-IPW}*}\big]=\mathbb{V}\big[\hat\tau^{\text{meta-IPW}*}\big]\) if \(\forall k\in[K],\ e_k=e\), and \(\mathbb{V}\big[\hat\tau^{\text{fed-IPW}*}\big]<\mathbb{V}\big[\hat\tau^{\text{meta-IPW}*}\big]\) if \(\exists(k,k')\in[K]^2,\ e_k\neq e_{k'}\).

AIPW.
Similarly to the IPW case, we have
\begin{align*}
\mathbb{E}\left[\left(\frac{W_i(Y_i-\mu_1)}{e(X_i)}\right)^{2}\right]
&\le \sum_{k=1}^{K}\rho_k\,\mathbb{E}\left[\left(\frac{W_i(Y_i-\mu_1)}{e_k(X_i)}\right)^{2}\,\middle|\,H_i=k\right], \\
\mathbb{E}\left[\left(\frac{(1-W_i)(Y_i-\mu_0)}{1-e(X_i)}\right)^{2}\right]
&\le \sum_{k=1}^{K}\rho_k\,\mathbb{E}\left[\left(\frac{(1-W_i)(Y_i-\mu_0)}{1-e_k(X_i)}\right)^{2}\,\middle|\,H_i=k\right],
\end{align*}
and with Lemma 1:
\[
\mathbb{V}[\tau(X_i)]=\sum_{k=1}^{K}\rho_k\,\mathbb{V}[\tau(X_i)\mid H_i=k]+\mathbb{V}[\tau_{H_i}].
\]
Then, using Lemma 2, we have the desired result.

A.7 Proof of Theorem 5

Proof. Let \(f:t\mapsto\frac{1}{t(1-t)}\) with \(t\in]0,1[\); \(f\) is convex. Then by Jensen's inequality with summing-to-one weights \(\omega_k(X)=P(H_i=k\mid X_i=X)\),
\begin{align*}
O_{\text{global}}
&= \mathbb{E}\left[f\left(\sum_{k=1}^{K}\omega_k(X)\,e_k(X)\right)\right] \\
&\le \mathbb{E}\left[\sum_{k=1}^{K}\omega_k(X)\,f(e_k(X))\right] \\
&= \sum_{k=1}^{K}\mathbb{E}\big[\mathbb{E}[\mathbb{1}[H_i=k]\mid X_i]\,f(e_k(X))\big] \\
&= \sum_{k=1}^{K}\mathbb{E}\big[\mathbb{E}[\mathbb{1}[H_i=k]\,f(e_k(X))\mid X_i]\big] \\
&= \sum_{k=1}^{K}\mathbb{E}\big[\mathbb{1}[H_i=k]\,f(e_k(X))\big] \\
&= \sum_{k=1}^{K}P(H_i=k)\,\mathbb{E}[f(e_k(X))\mid H_i=k] \\
&= \sum_{k=1}^{K}\rho_k O_k
\end{align*}

B Federated learning of membership weights with logistic regression

We detail how to estimate membership weights using a multinomial logistic regression model trained via Federated Averaging (FedAvg) (McMahan et al., 2017), without requiring access to individual-level data.

Definition 4 (Multinomial Logistic Model). The multinomial logistic regression model is defined as
\[
\omega_k(X_i)=\frac{\exp(\theta_k^\top X_i)}{\sum_{k'=1}^{K}\exp(\theta_{k'}^\top X_i)}, \tag{7}
\]
where \(\theta_k\) are the parameters of the model for class \(k\).
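As a quick numerical illustration of Definition 4, the following is a minimal sketch of the softmax membership weights in Eq. (7). The dimensions and parameter values are invented for the demo; only the formula itself comes from the text.

```python
import numpy as np

def membership_weights(Theta, X):
    """Softmax membership probabilities of Eq. (7):
    omega_k(x) = exp(theta_k . x) / sum_k' exp(theta_k' . x)."""
    logits = X @ Theta                            # (n, K) scores
    logits -= logits.max(axis=1, keepdims=True)   # stabilize the exponentials
    W = np.exp(logits)
    return W / W.sum(axis=1, keepdims=True)       # rows sum to one

rng = np.random.default_rng(0)
d, K, n = 10, 3, 5                     # invented sizes for the example
Theta = rng.normal(size=(d, K))        # columns are the theta_k
X = rng.normal(size=(n, d))
omega = membership_weights(Theta, X)
print(omega.shape)                     # (5, 3): one probability vector per row
```

Subtracting the row-wise maximum before exponentiating leaves the probabilities unchanged but avoids overflow for large scores.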
We introduce the following notations: \(\Theta=(\theta_1,\dots,\theta_K)\in\mathbb{R}^{d\times K}\) is the matrix containing the parameters of the multinomial logistic regression, and \(\ell(\Theta;\mathcal{D})\) is the negative log-likelihood of the model on a dataset \(\mathcal{D}\), given by \(\ell(\Theta;\mathcal{D})=-\frac{1}{|\mathcal{D}|}\sum_{i=1}^{|\mathcal{D}|}\sum_{k=1}^{K}\mathbb{1}\{H_i=k\}\log(\omega_k(X_i))\), where the \(\omega_k(X_i)\) are the softmax probabilities. \(H_i^{\text{enc}}\) is the one-hot encoding of the variable \(H_i\).

Algorithm 1 presents the FedAvg procedure applied to training a multinomial logistic regression model for estimating membership weights.

Algorithm 1: Federated Multinomial Logistic Regression with FedAvg for Site Membership
1: Input: K sites, E local steps, η learning rate, T rounds of communication, B batch size
2: Initialize model parameters Θ_0 ∈ R^{d×K}
3: for t = 1 to T do
4:   for each client k ∈ [1, ..., K] in parallel do
5:     Θ^{(k)}_{t+1} ← LocalUpdate(k, Θ^{(k)}_t)
6:   end for
7:   Θ_{t+1} ← Σ_{k=1}^{K} (n_k / n) Θ^{(k)}_{t+1}   // Federated Averaging
8: end for
9: LocalUpdate(k, Θ^{(k)}, B):
10: for e = 1 to E do
11:   B_k ← a random batch of B samples from D_k
12:   ω_l(x_i) ← exp(θ^{(k)⊤}_l x_i) / Σ_{l'=1}^{K} exp(θ^{(k)⊤}_{l'} x_i)   // multinomial probabilities
13:   P_i ← (ω_1(X_i), ..., ω_K(X_i))
14:   ∇ℓ(Θ^{(k)}; B_k) ← (1/B) Σ_{i∈B_k} x_i (P_i − H_i^{enc})   // multinomial gradient
15:   Θ^{(k)} ← Θ^{(k)} − η ∇ℓ(Θ^{(k)}; B_k)
16: end for
17: return Θ^{(k)}

With a suitable choice of learning rate η and a small number of local steps E per round, Algorithm 1 produces parameter estimates for the multinomial logistic regression model that converge to their centralized counterparts as the number of communication rounds T goes to infinity (Stich, 2019; Khaled et al., 2020; Li et al., 2019). After T rounds, Algorithm 1 yields an estimate \(\widehat{\Theta}^{\text{fed}}=(\hat\theta^{\text{fed}}_1,\dots,\hat\theta^{\text{fed}}_K)\) of the multinomial logistic parameters, which can be used to estimate the membership weights as
\[
\hat\omega_k(X_i)=\frac{\exp(\hat\theta^{\text{fed}\top}_k x_i)}{\sum_{k'=1}^{K}\exp(\hat\theta^{\text{fed}\top}_{k'} x_i)}.
\]

C Simulation Details

The parameters common to all settings are shown in Table 1, where \(\gamma^{(\text{weak})}_2=[-2.5,-1,-0.15,-0.15,0,-0.15,-1,-0.15,-0.15,0]\) and \(\gamma^{(\text{good})}_2=[-.05,-.1,-.05,-.1,.05,-.1,-.05,-.1,.05,-.1]\).
Parameter | Center 1 | Center 2 | Center 3
d : 10 (all centers)
μ1(X) : Σ_{j=1}^{5} (j/10) X_j² + Σ_{j=6}^{10} (j/10) X_j + X_9·X_10 (all centers)
μ0(X) : Σ_{j=1}^{5} ((3j−10)/30) X_j² + Σ_{j=6}^{10} ((3j−10)/30) X_j + X_1·X_10 (all centers)
e_k : Logistic(x, γ_k) (all centers)
γ_1 : [−.25, .25, −.25, −.25, .25, −.25, −.25, .25, −.25, .25]
γ_2 : No overlap: not logistic (only controls); Weak overlap: γ^(weak)_2; Good overlap: γ^(good)_2
γ_3 : [.15, −.15, .15, −.15, .15, −.15, .15, −.15, .15, −.15]

Table 1: Common simulation parameters.

DGP A-specific settings are shown in Table 2, where J_d is the d×d matrix of ones, and I_d is the d×d identity matrix.

Parameter | Center 1 | Center 2 | Center 3
n_k : 2000 (all centers)
D_k : N(μ_k, Σ_k)
μ_k : (1, ..., 1) ∈ R^d | (1.5, ..., 1.5) ∈ R^d | (3, ..., 3) ∈ R^d
Σ_k : I_d + 0.5 J_d | 0.6 I_d + 0.4 J_d | 3 I_d + 0.3 J_d

Table 2: Simulation parameters specific to DGP A.

DGP B-specific settings are shown in Table 3.

Parameter | Value
n : 6000
D : (2/3) N(μ_1, Σ_1) + (1/3) N(μ_2, Σ_2)
μ_1 : (0, ..., 0) ∈ R^d
μ_2 : (1.5, ..., 1.5) ∈ R^d
Σ_1 : I_d
Σ_2 : I_d + 0.5 J_d
P(H_i=k|X) : Logistic(x, θ_k)
θ_1 : [−0.5, −0.5, 0.2, −0.5, −0.5, 0.2, −0.5, −0.5, 0.2, 0.2]
θ_2 : [0.5, 0.5, 0.2, 0.5, 0.5, 0.2, 0.5, 0.5, 0.2, 0.5]
θ_3 : [1, 1, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]

Table 3: Simulation parameters specific to DGP B.

D Overlap Improvement

Echoing Theorem 5, Figures 5 and 6 display the empirical distributions (on a log scale) of the local propensity score in site 2 (e_2) and of the global propensity score e in the Poor local overlap scenario (see Figure 2b for corresponding results). Overlap is good when the propensity score distributions of the treated and the controls overlap and stay away from 0. We see that the poor overlap at site 2 (Figure 5), with values of e_2 on the local data close to 0, is significantly improved at the global level (Figure 6).

[Figure 5: Local overlap in site 2 for the Poor local overlap scenario (DGP A). Histogram of the local propensity score (log scale) by treatment group: Control (W=0) vs. Treated (W=1).]

[Figure 6: Global overlap for the Poor local overlap scenario (DGP A). Histogram of the global propensity score (log scale) by treatment group. We see a clear improvement compared to the local overlap in site 2 (Figure 5).]
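The decomposition of Appendix A.4 and the overlap improvement described above can be illustrated with a small one-dimensional simulation. Everything below is invented for illustration (two equally sized sites, Gaussian covariates, hand-picked logistic propensity scores); it is not the DGP A design.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def npdf(v, mu):
    # Unit-variance normal density, enough for Bayes' rule in this toy setup.
    return np.exp(-0.5 * (v - mu) ** 2) / np.sqrt(2.0 * np.pi)

# Two equally sized sites with shifted covariate distributions (invented values).
n = 5000
x = np.concatenate([rng.normal(0.0, 1.0, n),   # site 1
                    rng.normal(2.0, 1.0, n)])  # site 2
h = np.repeat([1, 2], n)                       # site labels

# Invented local propensity scores; e2 collapses toward 0 for large x,
# mimicking a "poor local overlap" scenario.
e1 = sigmoid(0.5 * x)
e2 = sigmoid(-2.0 * x)
e_loc = np.where(h == 1, e1, e2)

# Membership probabilities P(H=k|x) by Bayes' rule (cf. Eq. 4); the
# mixture densities are known exactly here.
w2 = npdf(x, 2.0) / (npdf(x, 0.0) + npdf(x, 2.0))
w1 = 1.0 - w2

# Global propensity score e(x) = sum_k P(H=k|x) e_k(x)  (cf. Eq. 3).
e_glob = w1 * e1 + w2 * e2

site2 = h == 2
print(f"min local score in site 2:  {e_loc[site2].min():.2e}")
print(f"min global score in site 2: {e_glob[site2].min():.2e}")
```

In this run the smallest global score on site-2 points is roughly an order of magnitude larger than the smallest local one, and the pointwise Jensen inequality f(e(x)) ≤ Σ_k ω_k(x) f(e_k(x)) with f(t) = 1/(t(1−t)) holds at every draw, mirroring the mechanism behind Theorem 5.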
arXiv:2505.18130v1 [stat.ME] 23 May 2025

Loss Functions for Measuring the Accuracy of Nonnegative Cross-Sectional Predictions

Charles D. Coleman¹
ORCID: https://orcid.org/0000-0001-6940-8117
Timely Analytics, LLC
E-mail: info@timely-analytics.com

May 26, 2025

¹This paper reports the general results of research originally undertaken while the author was employed by the Census Bureau. The views expressed are attributable to the author and do not necessarily reflect those of the Census Bureau. I would like to thank Jay Siegel, Stan Smith, Bashiruddin Ahmed, Gregg Diffendal and Dave Word for comments and Reuben A. Coleman for research assistance. An earlier version of this paper, entitled "Metrics for Assessing the Accuracy of Cross-Sectional Estimates and Forecasts," was presented to the Southern Demographic Association meetings in San Antonio, TX, October 1999.

Abstract

Measuring the accuracy of cross-sectional predictions is a subjective problem. Generally, this problem is avoided. In contrast, this paper confronts subjectivity up front by eliciting an impartial decision-maker's preferences. These preferences are embedded into an axiomatically-derived loss function, the simplest version of which is described. The parameters of the loss function can be estimated by linear regression. Specification tests for this function are described. This framework is extended to weighted averages of estimates to find the optimal weightings. Rescalings to account for changes in control data or base year data are considered. A special case occurs when the predictions represent resource allocations: the apportionment literature is used to construct the Webster-Saint Laguë Rule, a particular parametrization of the loss function. These loss functions are compared to those existing in the literature. Finally, a bias measure is created that uses signed versions of these loss functions.
1 Introduction

The accuracy of cross-sectional predictions is of importance to a large number of their users: governments, investors, and so on. These data include, for example, population, employment or income per capita by geographic area. The outstanding characteristics of cross-sectional data are a great range of values and systematically varying error variances and coefficients of variation. Predictions encompass both forecasts and estimates; the only difference, generally irrelevant to this article, is the date being predicted relative to the date the predictions are made.

In the context of predictions, accuracy is generally a subjective concept.¹ Even if objective measures of scale can be used, they may not necessarily coincide with the user's concept of loss. This paper axiomatically develops loss functions, which measure the "badness" of errors as a function of the sizes of the error and the actual values, or, equivalently, the predicted and actual values. This approach is in contrast to the common use of measures based on time series.² These measures were developed to measure variations in the level of a single variable over time and are, thus, inapplicable to cross-sectional data.³ Initially, an impartial decision-maker is assumed to exist and asked to specify values of his loss function (disguised) for various combinations of errors and actual values. The decision-maker is assumed to be interested solely in the overall accuracy of the predictions and not to have an interest in the accuracy for a particular area or set of areas. The context of the
predictions is also assumed fixed.⁴ Linear regression is then used to estimate the parameters of the loss function.⁵ An upshot of this process is that, in general, no single "ideal" measure of accuracy exists, as the evaluation of accuracy depends on the evaluator.⁶ In the special case in which predictions represent resource allocations, the apportionment literature is used to establish the Webster-Saint Laguë Rule as the appropriate loss function.

The use of loss functions to measure the accuracy of cross-sectional predictions is hardly a new idea. Two commonly used measures, the mean absolute percentage error (MAPE) and the mean absolute error, use the absolute percentage error and the absolute error as loss functions, respectively. Armstrong (1985, ch. 15) discusses various accuracy measures and the "cost" (i.e., loss) functions underlying them. None of these measures has an axiomatic basis. Stanford Research Institute (1974) and Spencer (1980) used utility criteria to create the first axiomatically-based cross-sectional loss functions in the context of allocating general revenue sharing funds. NRC (1980, p. 87) developed loss functions similar to ours by generalizing from special cases. Spencer (1985) developed loss function representations of four rules that have been used to apportion the U.S. House of Representatives, one of which is the Method of Major Fractions, also known as the Webster-Saint Laguë Rule. Loss functions have been used by the U.S. Census Bureau to compare unadjusted and adjusted census counts to artificial target populations, beginning with the 1990 census. Mulry and Spencer (1993) is an example of this.

Section 2 axiomatically constructs the simple loss function that we will use. It further specifies assumptions and develops the simplest loss function that satisfies them; this function also has the property that the loss associated with a given absolute relative error⁷ increases in the actual value.
Section 3 discusses estimation of this loss function, along with specification tests. Section 4 argues for the use of the Webster-Saint Laguë Rule when predictions can be viewed as apportionments. Given a set of predicted and actual values,

¹Compare Lindley (1953, p. 46): "...I feel that loss is not easily separated in our minds from beliefs." Later we develop loss functions to reflect beliefs.
²Armstrong and Collopy (1992) and Fildes (1992), to give two examples, are quite clear about the time series basis of the error measures they use.
³NRC (1980) is one of the few examples to use cross-sectional criteria: "'To minimize errors in allocations' is a vague statement which allows several interpretations (NRC, 1980, p. 86)." The allocations are revenues directed to states and localities in proportion to their populations. I am indebted to Bashiruddin Ahmed for providing this example. Spencer (1980, 1985, 1986) also uses cross-sectional criteria.
⁴I'd like to thank Stan Smith for pointing this out.
⁵The assumption of an impartial decision-maker and the statistical construction of the loss function provide an answer to Spencer's (1986) criticisms about whose preferences are to be incorporated into a von Neumann-Morgenstern loss function and how. These constructions are very much along the lines of Wald (1950) and Lindley (1953, 1985).
⁶Tukey (1986, p. 408) writes in the
https://arxiv.org/abs/2505.18130v1
context of cross-sectional population estimates: “‘Optimum’ estimates, for example, are only optimum under narrow specifications that do not hold exactly in practice.” (Original underline) 7Equivalently, one can also refer to the absolute percentage error (APE), but this would complicate the arguments made later on. APE will appear, however, in the example in Section 6. the total loss function is the sum of the loss functions for the individual observations. Different sets of predictions may have systematic, offsetting biases. The decision-maker may wish to form weighted averages of these predictions to produce a set of new predictions. Subsection 3.4 provides a method to search for the optimal weightings which minimize the total loss function. Section 5 provides an example of the use of loss functions, comparing the Webster-Saint Lagüe Rule to the commonly used mean absolute percentage error (MAPE). Section 6 compares the metrics developed in this paper to existing metrics in the literature. Section 7 constructs a family of bias measures based on the loss functions. Section 8 concludes this paper. 2 The Loss Function This section describes the assumptions used to generate the loss function, L(P; A), where P is the predicted value and A is the actual value.8 After the assumptions are made, the simplest form of L which satisfies them is specified. Restrictions on the values of the parameters of L which make it increase in A for a given relative error are then specified. The total loss 𝓛 is the sum of the losses associated with each observation i: 𝓛 = Σ_{i=1}^{n} L(P_i; A_i), where n is the number of observations.9 While other forms of 𝓛 are possible, the summation corresponds to the concept of additively separable utility and obeys the von Neumann-Morgenstern expected utility axioms.10 The first assumption we make is that L is symmetric in the errors: Assumption 1: L(A + ϵ; A) = L(A − ϵ; A) for all A > 0.
This assumption is not as innocuous as it looks.11 It is quite possible, at least for some range of A, that the decision-maker is not indifferent between positive and negative errors. Consider, for example, William Tell and the apple. If Tell places the arrow one inch above the apple, he suffers one sort of loss. If he places the arrow the same distance below the apple, he suffers a much more severe loss.12 However, without Assumption 1, the resulting asymmetry complicates the definition of L. The symmetry of L allows us to use the equivalent notation ℓ(A, ϵ) ≡ L(A + ϵ; A), where ϵ ≥ 0. The next assumption makes L, or, equivalently, ℓ, increasing in the error ϵ: Assumption 2: ∂ℓ/∂ϵ > 0 for all ϵ > 0. Note that this assumption is stated in terms of ℓ, rather than L. This assumption is quite intuitive, as it states that more accurate predictions are preferred to less accurate ones.13 Finally, we want L, or, equivalently, ℓ, to decrease in A. This means that, for a given value of ϵ, the loss associated with it decreases as the actual value grows. This has two justifications. First, an error of 500 when the true value is 1,000 is a whopping 50%, a highly significant error. However, the same error,
when the true value is 1,000,000 is akin to a roundoff error. In short, error variance increases in the actual value. Second, when making predictions, the coefficient of variation of the errors, σ/µ, where σ is the standard deviation and µ is the expected value, generally decreases in A. We state this formally as: Assumption 3: ∂ℓ/∂A < 0 for all A > 0. 8Muhsam (1966) was apparently the first to use this form of loss in the context of predicting population. He was concerned with the loss associated with a point forecast of the population of a single geographic area. He assumed continuity of loss and a version of Assumption 2 below. I’d like to thank Jay Siegel for this point. 9In terms of Spencer (1986), L is a component loss function and 𝓛 is the loss function. The U.S. Census Bureau also uses this terminology for evaluating census adjustments. 10That is, loss is the negative of utility. Additive losses are equivalent to additive utility. The von Neumann-Morgenstern expected utility axioms imply that the expected utility of a gamble is equal to the utilities of the outcomes multiplied by their probabilities. In terms of loss, the expected loss when losses of values L_1 and L_2 occur with probabilities p_1 and p_2, respectively, is p_1L_1 + p_2L_2. This approach is similar to Lindley’s (1985, p. 59) commandment to “Go forth and maximise your expected utility.” (Original quotes) Lindley (1985) builds his book around the concept of making decisions that maximize a decision-maker’s expected utility. Since expectation requires specifying probability distributions, which we do not do, our commandment can be stated as “Go forth and maximize utility.” 11The measures of bias mentioned in footnote 1 violate this assumption by design. 12I would like to thank Dave Word for providing me with this example. 13It is easy to show that Assumption 2 implies Fisher-consistency (Spencer, 1986, p. 395): 𝓛 is uniquely minimized when P_i = A_i for all i.
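As a check not spelled out in the text, the sign conditions on the exponents of the Cobb-Douglas form ℓ(A, ϵ) = ϵ^p A^q developed next deliver Assumptions 2 and 3 directly:

```latex
\frac{\partial \ell}{\partial \epsilon} = p\,\epsilon^{p-1}A^{q} > 0
\quad (\text{Assumption 2, since } p > 0),
\qquad
\frac{\partial \ell}{\partial A} = q\,\epsilon^{p}A^{q-1} < 0
\quad (\text{Assumption 3, since } q < 0).
```

Symmetry (Assumption 1) holds because ℓ depends on the error only through ϵ = |P − A|.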
The simplest function which satisfies Assumptions 1–3 and admits Property 1 below is the Cobb-Douglas function

ϵ^p A^q    (1a)

or, equivalently,

|P − A|^p A^q    (1b)

where ϵ, p > 0 and q < 0.14 Note that ϵ = |P − A|. The loss function (1a) can be interpreted as an exponentially weighted product of the absolute value of the difference and the absolute relative difference:

|P − A|^{p+q} · |(P − A)/A|^{−q}.    (1c)

In order for the loss function to increase in the absolute value of the difference |P − A|, the sum p + q must be positive. When this is true, the loss function gains the desirable property that it rises in A for a given absolute relative error. We formalize this argument as follows. The absolute relative error is

|(P − A)/A|.    (2)

Note that, in this case, p = 1, q = −1 and p + q = 0. Choosing q > −1 makes the corresponding (p = 1) loss function increase in A. Therefore, p + q > 0 in the loss function when p = 1. We generalize the argument by raising equation (2) to any positive power r:

|(P − A)/A|^r.    (3)

Here p = r and q = −r and, again, p + q = 0. Choosing q > −r makes the corresponding (p = r) loss function increase in A. Again, p + q > 0. We summarize this as Property 1: The loss function defined
by equations (1a) and (1b) increases in A for any given absolute relative error. This is assured whenever q > −p, or, equivalently, p + q > 0. 3 Estimating the Loss Function This Section assumes the existence of an impartial decision-maker with well-formed preferences, such as an investor motivated by profits affected by prediction error. The decision-maker is queried about his preferences in order to estimate the parameters of his loss function. The steps in this process include ascertaining reasonable bounds on A and on ϵ given A; presenting pairs (ϵj, Aj) to the decision-maker for evaluation; estimating p and q; checking p and q for reasonableness; and testing the specification. This procedure is in keeping with Lindley’s (1953, p. 43) interpretation of Wald (1950): [T]he statistician’s answer to a problem cannot be given except in close co-operation with the questioner, who is in a position to make the decisions and share their consequences with the statistician. In this case, the problem is to compare the accuracy of different sets of predictions, as determined by the impartial decision-maker’s preferences. Some alternative methods are given in Subsubsection 3.2.1, but they are not recommended for use by a single decision-maker. 14NRC (1980, p. 87) obtains the same loss function by generalizing from special cases. 3.1 Reasonable Actual Values and Error Bounds This analysis requires that a reasonable range for A and error bounds be obtained from a knowledgeable source (often, the decision-maker) beforehand. This ensures that the analysis operates only on values which can reasonably be expected to appear. The error bounds need to be obtained to prevent situations in which a pair (ϵj, Aj) is completely unacceptable to the decision-maker. Such situations can create difficulties, as noted in the next Subsection.
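The loss family (1a)-(1b) and Property 1 can be illustrated numerically. A minimal sketch (the parameter values and actual values are illustrative, not from the paper):

```python
# Sketch: the Cobb-Douglas loss |P - A|**p * A**q of equations (1a)-(1b),
# checked against Property 1. Parameter values are illustrative.

def loss(P, A, p, q):
    """Component loss L(P; A) = |P - A|**p * A**q."""
    return abs(P - A) ** p * A ** q

# Hold the absolute relative error fixed at r = 0.10 and vary A.
r = 0.10
actuals = [1_000, 10_000, 100_000]

# p + q > 0: loss increases in A for a given relative error (Property 1).
inc = [loss(A * (1 + r), A, p=2, q=-1) for A in actuals]
assert inc[0] < inc[1] < inc[2]

# p + q = 0 (the MAPE case p = 1, q = -1): loss is flat in A,
# so Property 1 is violated.
flat = [loss(A * (1 + r), A, p=1, q=-1) for A in actuals]
assert max(flat) - min(flat) < 1e-9
```

The p + q = 0 case is the loss implicit in MAPE, which is why MAPE ignores the size of an area for a given percentage error.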
3.2 Determining the Loss Function at a Set of Points and Estimating it Globally The decision-maker is given a set of pairs (ϵj, Aj) and asked to express his satisfaction U(ϵj, Aj) with each pair (ϵj, Aj).15 The framing of “satisfaction” is important: a possible question is “What percentage acceptability does this pair have, where 0% is completely unacceptable and 100% is completely accurate?”16 The exact phrasing of this question is a matter of survey design, which is beyond the scope of the present paper. An answer of 0% indicates that the pair has an unreasonable error bound. Any pair with 0% acceptability has to be removed from the analysis and replaced, if desired, by another pair with reasonable values. The selection of the pairs (ϵj, Aj) can be done by many means. Perhaps the simplest method is to choose a set of values of A spanning its range, and then a set of errors for each value of A, all well within their reasonable ranges. The number of pairs has to be determined as well. A small number can make the regression procedure below unreliable, while a large number can overwhelm the decision-maker. Again, these points are all matters of survey design. Once the U(ϵj, Aj) are obtained, the associated values of the loss function L(ϵj, Aj) can be determined by the equation

L(ϵj, Aj) = 100 − U(ϵj, Aj)    (4)

where
L and U are expressed in terms of percentage points. We can now see why points with U = 0 are removed: at these points L = 100 for some ϵj > 0, and higher values of ϵj would produce the same value of L, in contradiction to Assumption 2. Then equation (1b) is estimated by the regression equation formed by taking its logarithm and evaluating it at each pair (ϵj, Aj):17

log Lj = a + p log ϵj + q log Aj + uj    (5)

where Lj > 0, a is the constant of integration, an ignorable scaling factor, and uj is the regression residual. The values of p and q found from this regression become the parameters of the loss function. One caveat is in order: if the decision-maker reports Uj = 100, then Lj = 0 and its logarithm becomes −∞. The fix for this is either to delete this observation from the analysis or to choose a small δ > 0 to bound the Lj from below. 3.2.1 Multiple Decision Makers Within an organization, one can expect individuals’ preferences to become homogeneous over time.18 Therefore, it is likely that their values of p and q estimated from regression equation (5) will be very similar. However, individuals from different organizations may have very different preferences. For example, one individual may be more willing to accept inaccuracy in small areas than another. It is thus possible that their loss functions will be very dissimilar. Aggregating these dissimilar preferences into a single metric becomes an exercise in group decision making. Zahedi (2000) discusses group decision making methods. It is also quite possible that these dissimilar preferences have no substantial effects on the analysis. 15The use of U to denote this function corresponds to the economic concept of utility, as discussed in footnote 10 above. The decision-maker is asked to reveal his utility, which is negated to produce loss. 16100% satisfaction is equivalent to L = 0. 17Throughout this paper, we understand logarithms and the exponential function to be natural. 18Coleman (1964) models processes by which preferences within groups become more homogeneous.
Two effects (Coleman, 1964, pp. 348-349) can account for this. “Contagion” simply involves individuals influencing each other, while “heterogeneity” implies self-selection among individuals comprising a group. One alternative to Zahedi’s (2000) procedures lies in constructing several loss functions.19 Each loss function can be made to obey Assumptions 1-3 and Property 1. Since estimation is not used, the loss functions are effectively chosen arbitrarily.20 The ensemble of loss functions can be used to compare different sets of predictions. If all of the loss functions agree about the rankings of the prediction sets’ total losses, then one can argue that the exact parametrization does not matter: each loss function produces the same result. However, if the loss functions disagree, then one is forced back onto the horns of subjectivity, this time in the form of a multiattribute utility decision problem.21 These problems require specifying weights to be assigned to each loss function or using some sort of tie-breaking function. The essence of this problem is stated by Lindley (1985, p. 180): [N]o coherent approach exists.22 The fundamental problem is that there is no way to specify a unique utility function with which to evaluate prediction errors. Arrow’s (1950) famous theorem disproving the
existence of a function that aggregates individuals’ preferences while obeying some weak conditions underlies the last claim. A final alternative is to adopt a loss function which works “well” in most cases as a convention in the manner of Keyfitz (1979). The Mean Absolute Percentage Error (MAPE), discussed in Section 6, is frequently used in this sense. Another alternative is the Webster-Saint Lagüe Rule, discussed in Section 5. Webster’s Rule is a particular parametrization of the loss functions developed in the present paper. That the chosen loss function need not work well in every case has been explicated by Lerner (1957, p. 441) in the context of legal rules: [W]ith all general rules, there are particular cases where...it would be better for the rule not to be applied. The poor performance of a particular loss function in a particular circumstance does not invalidate its use in general. 3.3 Specification Tests While equations (1a) and (1b) are the simplest form that the loss function can take, they are hardly the only forms. In fact, an infinite number of forms of loss functions are possible. One quick specification test is to look at the sum p + q. If it is nonpositive, then it violates Property 1. Either the equation is misspecified,23 or the decision-maker’s preferences violate Property 1.24 The form of the function may be misspecified, for which many tests are available. Heteroscedasticity may be present, which can be accounted for by weighted regression. 3.4 Optimal Averages of Predictions Given two or more sets of predictions, it may be possible to improve on their accuracy by finding an optimal weighted average of these predictions. Suppose there are two sets of estimates P1 and P2, with associated total losses 𝓛1 and 𝓛2. Let w1 and w2 be weights such that w1 + w2 = 1.
By varying the values of w1 and w2, one creates new sets of predictions of the form {P_i^{12} = w1 P_i^1 + w2 P_i^2} and computes the total loss associated with each set of weights.25 The total-loss-minimizing set of estimates and weights is then chosen. The optimal search method is unclear: this probably requires grid-searching. With k > 2 estimate sets, this becomes a possibly expensive mesh search in k − 1 dimensions. 19I’d like to thank Gregg Diffendal for pointing this out. 20This is a standard technique for evaluating census adjustments. 21See Sarin (2000) for a discussion of multiattribute utility. 22Lindley (1985, p. 59) defines coherence in terms of probabilities of events and utilities of their consequences. 23Remember that equations (1a) and (1b) represent only one of an infinite number of loss functions that satisfy Assumptions 1-3 and Property 1. 24In some contexts, this may be reasonable. 25The new estimates may have to be constrained to sum up to a predetermined overall total or several predetermined subtotals. See Subsection 4.1. This can be seen to be a sort of Bayesian enterprise. The usual practice is to choose w1 = w2 = 1/2. This accords with the Bayesian principle of indifference. By letting the weights vary and choosing the optimal set with regard to a given loss function, one effectively creates a Bayesian prior which places those weights on the different sets of
predictions. The practice of putting weights on sets of predictions is a form of model averaging.26 The weight put on a particular set of predictions is often used as the weight for the model which produced those predictions. The averaged model is then used to generate new sets of predictions. As above, given two sets of estimates P1 and P2 with weights satisfying w1 + w2 = 1,27 one varies the weights to create new predictions {P_i^{12} = w1 P_i^1 + w2 P_i^2}, computes the total loss associated with each set of weights,28 and chooses the total-loss-minimizing set of estimates and weights. The optimal search method is unclear: this probably requires grid-searching. With k > 2 prediction sets, this becomes a possibly expensive mesh search in k − 1 dimensions. 4 Predictions as Apportionments: The Webster-Saint Lagüe Rule Often predictions represent apportionments, that is, resource or legislative seat allocations. The U.S. decennial census is used to apportion the House of Representatives. Population estimates are used by some states to apportion tax revenues to localities. One would like the rule used for evaluating the predictions to satisfy various fairness criteria. Balinski and Young (1982) have shown that the Webster-Saint Lagüe Rule satisfies a large number of fairness criteria.29 Most of these criteria result from the Webster-Saint Lagüe Rule being a divisor method. The Webster-Saint Lagüe Rule is the only divisor method which has an admissible (in the present paper) loss function representation: p = 2 and q = −1 (Balinski and Young, 1985). See Subsection 6.5 below for other apportionment-based loss functions.
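A sketch combining the weight search of Subsection 3.4 with the Webster-Saint Lagüe loss (p = 2, q = −1) of this section. The data and the grid resolution are illustrative assumptions, not values from the paper:

```python
# Sketch: grid search for the optimal weight w1 (w2 = 1 - w1) combining two
# prediction sets, scored by the Webster-Saint Lague loss (p = 2, q = -1).
# All data values here are illustrative.

def total_webster_loss(preds, actuals):
    """Total loss with p = 2, q = -1: sum of (P - A)**2 / A."""
    return sum((P - A) ** 2 / A for P, A in zip(preds, actuals))

actuals = [100.0, 500.0, 2500.0]
preds_1 = [110.0, 520.0, 2400.0]   # one prediction set
preds_2 = [95.0, 470.0, 2600.0]    # a second set, with offsetting biases

best_w1, best_loss = None, float("inf")
for step in range(101):            # grid over w1 = 0.00, 0.01, ..., 1.00
    w1 = step / 100
    blended = [w1 * p1 + (1 - w1) * p2 for p1, p2 in zip(preds_1, preds_2)]
    current = total_webster_loss(blended, actuals)
    if current < best_loss:
        best_w1, best_loss = w1, current

# The grid contains w1 = 0 and w1 = 1, so the optimal blend can be no worse
# than either prediction set on its own.
assert best_loss <= total_webster_loss(preds_1, actuals)
assert best_loss <= total_webster_loss(preds_2, actuals)
```

With k > 2 sets, the same idea extends to a mesh over the (k − 1)-dimensional simplex of weights, which is why the paper notes the search can become expensive.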
An additional normative criterion of particular interest is unbiasedness: for any two disjoint groups of areas, with all members in one group larger than those in the other, the probability that the method favors the large areas equals the probability that it favors the small areas (Balinski and Young, 2001, Theorem 5.3). Since an optimal apportionment minimizes L_W, it is natural to use L_W as a measure of accuracy. The greater its value, the greater the departures from the normative criteria used to evaluate the predictions. This way of characterizing loss avoids the need for an impartial decision-maker. It also has an a priori probabilistic interpretation, as shown in Subsection 4.1. Fellegi (1980) effectively proposed a variant of the Webster-Saint Lagüe Rule. This is studied in Subsection 4.2. 4.1 An A Priori Probabilistic Interpretation of the Webster-Saint Lagüe Rule Let the sets of predictions be indexed by j. Let the expected mean squared error of the prediction from set j for area i be proportional to A_i: E(P_i^j − A_i)^2 = cA_i. If the predictions are unbiased, E(P_i^j − A_i) = 0, Webster’s rule actually estimates the underlying variances, thereby enabling a ranking based on the average estimated variance. This is proved below. 26For a fuller explanation of Bayesian model averaging see Hoeting, Madigan and Volinsky (1999). Many machine learning algorithms are model averaging algorithms in that they create ensembles of estimators, then average over all ensemble members to produce estimates or predictions. 27This argument generalizes to any number k of sets of estimates with Σ_{i=1}^{k} w_i = 1. 28The new estimates may have to be constrained to sum up to a predetermined overall total or several predetermined subtotals. See Subsection 4.1. 29However, no method can simultaneously satisfy all fairness criteria (Balinski and Young, 1979, th. 6). For example, Ernst (1992) proves that Hill’s method satisfies other fairness criteria. This result is similar to Arrow’s (1950) Theorem, mentioned above in Subsubsection 3.2.1. Still, the Webster-Saint Lagüe Rule is the only apportionment rule considered by Balinski and Young (1979, 1982) and Ernst (1992) that can be represented by minimizing the sum of loss functions satisfying Assumptions 1-3 and Property 1 (Spencer, 1985). I would like to thank Gregg Diffendal for informing me of Ernst (1992). Let the estimates be unbiased (i.e., Eϵ_i^j = 0 for all i and j) and let P_i^j be the sum of A_i independent variables η_k^j with common mean µ = 1 (so that EP_i^j = A_i) and variance σ_j^2. Then

E(P_i^j − A_i)^2 = E(A_i + ϵ_i^j − A_i)^2    (6a)
= E(ϵ_i^j)^2    (6b)
= E(Σ_{k=1}^{A_i} η_k^j − A_i)^2    (6c)
= Var(Σ_{k=1}^{A_i} η_k^j)    (6d)
= Σ_{k=1}^{A_i} Var(η_k^j)    (6e)
= A_i σ_j^2,    (6f)

where equation (6e) follows from the variance of a sum of independent random variables being equal to the sum of their variances, and equation (6f) follows from the definition of σ_j^2. Therefore,

E(P_i^j − A_i)^2 / A_i = A_i σ_j^2 / A_i    (7a)
= σ_j^2.    (7b)

Thus, the Webster-Saint Lagüe rule produces estimates of the variances of each observation, A_i σ̂_j^2. Using its minimization to select predictions chooses the estimate set with the lowest σ̂_j^2. Relaxing the identical-variance assumption so that σ_{ij}^2 is not constant causes the rule to estimate the average variance σ̂_j^2 = (1/n) Σ_{k=1}^{n} σ̂_{kj}^2. Now, the Webster-Saint Lagüe loss minimization selects the prediction set with the lowest average estimated variance. This is robust to small departures from the assumptions. 4.2 Fellegi’s Variant Fellegi (1980, eq. 8, p. 194) proposes an inequality measure based on differences between optimal and predicted shares.
He uses conventions to assign exponents. Hogan and Mulry (2014, eq. 94, p. 124) show that Fellegi’s measure is equivalent to the loss function30

(1/A_i) (P_i/Σ_i P_i − A_i/Σ_i A_i)^2.    (8)

It is easy to show that Assumption 3, ∂ℓ/∂A < 0 for all A > 0, is satisfied. However, Assumptions 1 and 2 have to be modified. Letting

ϵ′_i = |P_i/Σ_i P_i − A_i/Σ_i A_i|    (9)

and substituting into Assumptions 1 and 2 yields new assumptions. Suppressing subscripts, these become: Assumption 1′: L(A + ϵ′; A) = L(A − ϵ′; A) for all A > 0. and Assumption 2′: ∂ℓ/∂ϵ′ > 0 for all ϵ′ > 0. Fellegi’s loss function satisfies the two new assumptions. Noting that ℓ(ϵ′, A) = (ϵ′)^2 A^{−1}, we see that p = 2 and q = −1, thereby satisfying Property 1. The key difference between the Fellegi and Webster-Saint Lagüe loss functions is that the former defines errors as differences in shares while the latter uses differences in levels. 30This is after removing constants and the summation.

Table 1: Comparison of MAPE and Webster-Saint Lagüe Loss Functions for Three Hypothetical Scenarios

                    Scenario 1            Scenario 2            Scenario 3
Area     A_i     ϵ_i   APE_i    L_i    ϵ_i   APE_i    L_i    ϵ_i   APE_i    L_i
1     100,000   2000     2    40.00   1000     1    10.0    3000    3.0   90.00
2      50,000   1000     2    20.00    500     1     5.0     850    1.7   14.45
3      10,000    200     2     4.00    100     1     1.0     170    1.7    2.89
4       5,000    100     2     2.00     50     1     0.5      85    1.7    1.45
5       1,000     20     2     0.40     10     1     0.1      17    1.7    0.29
6         100      2     2     0.04     10    10     1.0       2    2.0    0.04
Means                    2    11.08           2.5    2.9           1.97   18.19

5 Example of Evaluating Predictions Using Loss Functions Loss functions can produce entirely different and more meaningful results than common error measures such as the mean absolute percentage error (MAPE).31 Table 1 shows the true values of six areas, A_i, i = 1, . . . , 6, and three sets (Scenarios) of absolute errors (ϵ_i), along with the corresponding absolute percentage errors (APE_i) and Webster-Saint Lagüe Rule loss function values L_i.32 The bottom row shows the means of the last two variables in each Scenario. These are simply MAPE and L_W/n, respectively. Three Scenarios are used to compare the results of an evaluation using a loss function to those obtained by using MAPE. Scenario 1 is the baseline scenario, with APE_i ≡ 2 and L_W/n = 11.08. In Scenario 2, APE_i is reduced to 1 for i ≤ 5, but APE_6 increases to 10. That is, all but the smallest area have their APEs halved, while the very smallest area’s APE increases by a factor of 5. MAPE increases to 2.5, but L_W/n falls to 2.9. Thus, MAPE ranks Scenario 2 as less accurate than Scenario 1, even though the individual errors are smaller except in the very smallest area. The loss function, on the other hand, takes into account the size of the smallest area, discounts its accuracy loss, and considers Scenario 2 to be more accurate. In Scenario 3, APE_i falls by 15% to 1.7 for 2 ≤ i ≤ 5, rises by 50% to 3 in the largest area, and is unchanged in the smallest area. MAPE falls slightly to 1.97, but L_W/n rises to 18.19. Thus, MAPE considers Scenario 3 to be superior to Scenario 1, as a result of the general reduction in the APE_i, in spite of the major loss in accuracy in the largest area.
The loss function, on the other hand, puts a large weight on the accuracy loss in area 1 and increases its error measure relative to Scenario 1. Thus, the loss function puts increasing weight on an error as the size of the area increases. Putting all of this together, we find that MAPE and the Webster-Saint Lagüe loss function produce exactly opposite rankings of the Scenarios. The reader may notice that the losses have no intuitive interpretation, unlike the absolute percentage errors that go into MAPE. This is hardly unknown in this enterprise. For example, Lindley (1953, p. 46) writes ...they have no direct interpretation in the real world... This quote refers to “weight” functions, Wald’s (1950) original term for loss functions.33 The upshot of this statement is that one should generally not expect loss functions to coincide with any easily interpreted measures. 31That is, the loss function is computed with p = 1 and q = −1, absolute values taken, then multiplied by 100 for reexpression in terms of percentage points. 32This example is from Coleman (2000). 33Wald (1950, p. 8) introduces weight functions in a subsection entitled “Losses Due to Possible Wrong Terminal Decisions and Cost of Experimentation.” 6 Comparison to Other Metrics This Section compares other, commonly used, metrics to the metric defined by loss functions (1a) and (1b) and Property 1. 6.1 Root Mean Squared Error The root mean squared error (RMSE) is defined as

sqrt( Σ_{i=1}^{n} (P_i − A_i)^2 / n ).

This is equivalent to p = 2 and q = 0, which violates Assumption 3. It is similar to the standard deviation and is only useful if the expected mean squared errors, Eϵ_i^2, are constant. This clearly is not true of most cross-sectional predictions, which often span several orders of magnitude. 6.2 Median Absolute Percentage Error The median absolute percentage error (MedAPE) is a close relative of MAPE and, for our purposes, can only be defined by its total loss function, 100 · median_i |(P_i − A_i)/A_i|. Not only does it have the disadvantages of MAPE, but its use of the median instead of the mean makes it insensitive to large errors.34 6.3 90th Percentile Absolute Percentage Error Smith and Sincich (1992) use the 90th Percentile Absolute Percentage Error (90PE) as one of their metrics for comparing cross-sectional forecasts. Like MedAPE, this can only be defined in our framework by the total loss function, 100 · |(P_90 − A_90)/A_90|, where the subscript 90 refers to the observation whose absolute percentage error is greater than that of 90% of all the observations. 90PE shares MedAPE’s weaknesses, while being sensitive to even fewer extreme errors. 6.4 Root Mean Squared Percentage Error The Root Mean Squared Percentage Error (RMSPE) is defined as

sqrt( Σ_{i=1}^{n} [(P_i − A_i)/A_i]^2 / n ).

This is equivalent to p = 2 and q = −2. Like MAPE, this violates Property 1. It is useful only when the a priori relative mean squared errors are constant. That is, E[(P_i − A_i)/A_i]^2 = c for some constant c > 0.
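As a check on Section 5, the ranking reversal between MAPE and the mean Webster-Saint Lagüe loss can be reproduced from the Table 1 data. A sketch using Scenarios 1 and 3:

```python
# Sketch: MAPE vs. the mean Webster-Saint Lague loss (p = 2, q = -1) on the
# data of Table 1, Scenarios 1 and 3 (values from Section 5).
actuals   = [100_000, 50_000, 10_000, 5_000, 1_000, 100]
errors_s1 = [  2_000,  1_000,    200,   100,    20,   2]   # Scenario 1
errors_s3 = [  3_000,    850,    170,    85,    17,   2]   # Scenario 3

def mape(errors, actuals):
    """Mean absolute percentage error, in percentage points."""
    return 100 * sum(e / a for e, a in zip(errors, actuals)) / len(actuals)

def mean_webster_loss(errors, actuals):
    """Mean Webster-Saint Lague loss: average of e**2 / a."""
    return sum(e ** 2 / a for e, a in zip(errors, actuals)) / len(actuals)

# MAPE prefers Scenario 3 (1.97 < 2.0) ...
assert mape(errors_s3, actuals) < mape(errors_s1, actuals)
# ... while the Webster-Saint Lague loss prefers Scenario 1 (11.07 < 18.19).
assert mean_webster_loss(errors_s1, actuals) < mean_webster_loss(errors_s3, actuals)
```

The reversal comes entirely from the q = −1 weighting: the large error in the largest area dominates the Webster-Saint Lagüe loss but barely moves MAPE.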
6.5 Apportionment-Based Loss Functions Spencer (1985) develops five apportionment-based loss functions. In addition to the Webster-Saint Lagüe Rule, these are (1) p = 1, q = 0, or more generally, p > 0, q = 0 (Hamilton’s method); (2) p = 2, q = −1, with P_i raised to the power q instead of A_i (Hill’s method/Method of Equal Proportions); (3) max_i P_i/A_i (Jefferson/d’Hondt method); and (4) max_i A_i/P_i (Adams’ method). Hamilton’s and Hill’s methods both violate Assumption 3. The Jefferson/d’Hondt method violates both Assumption 3 and the von Neumann-Morgenstern axioms. Adams’ method additionally violates Assumptions 1 and 2. Spencer (1986, p. 398) advocates incorporating expectations into these methods to measure accuracy. However, since we have assumed the distributions generating the errors to be unknown, this is impossible. 6.6 Some Examples which Obey Assumptions 1-3 and Property 1 Bryan (2000) proposes using p = 1 and q = log(range)/25 − 1 in the context of small-area population estimates evaluation, where range is the range of the A_i. This obeys Assumption 3 and Property 1 for range < exp(25) ≈ 72,000,000,000. Apparently, p = 1 and q = −1/2 was proposed for evaluating the U.S. Census Bureau’s Small Area Income and Poverty
Estimates (William R. Bell, personal correspondence). 34It should also be noted that additive separability and the von Neumann-Morgenstern expected utility axioms are violated as well. See footnote 10. Coleman (2002) further clarifies this point by showing that quantile total loss functions (e.g., MedAPE and 90PE of the next Subsection) in general have unbounded insensitivity to outliers. 7 Loss Function-Based Bias Measures Any of our loss functions can be converted to a bias measure simply by multiplying the loss by the sign of the error. Recall from Section 2 that ϵ = |P − A|; to carry the sign, let ϵ_i = P_i − A_i here. Equation (1b) then becomes

S_i(P, A) = sgn(ϵ_i) |P − A|^p A^q,    (10)

where S_i is interpreted as the signed loss function. Besides being a measure of bias, it can rank observations by how overprediction or underprediction contributes to the loss of accuracy. Using it with external data may enable identification of factors affecting accuracy. 8 Conclusion This paper has axiomatically developed loss functions for measuring the accuracy of cross-sectional predictions. The approach generalizes several metrics in the literature and avoids the pitfalls common to most of them when applied to cross-sectional data. When the predictions represent resource allocation levels, the Webster-Saint Lagüe Rule provides an exact parametrization. Other cases require a great deal of information from an impartial decision-maker and are not immune to manipulation. A forecaster can easily manipulate the parameters to make his forecasts look better than those of his competitors. Decision-makers with interests concentrated in particular size ranges can degrade performance with respect to other size ranges. Therefore, this approach should only be used by disinterested parties who wish to evaluate the overall performance of cross-sectional predictions. Finally, a bias measure has been derived that can be used to measure bias or possibly identify factors affecting accuracy when combined with other data.
References Armstrong, J.S. (1985). Long-Range Forecasting: From Crystal Ball to Computer, 2nd edition. New York: Wiley. Armstrong, J.S. and Collopy, F. (1992). Error Measures for Generalizing about Forecasting Methods: Empirical Comparisons. International Journal of Forecasting, 8, 69-80. Arrow, K.J. (1950). A Difficulty in the Concept of Social Welfare. The Journal of Political Economy, 58, 328-346. Balinski, M.L. and Young, H.P. (1979). Criteria for Proportional Representation. Operations Research, 27, 80-95. Balinski, M.L. and Young, H.P. (1982). Fair Representation. New Haven: Yale University Press. Beaumont, P.M. and Isserman, A.M. (1987). Tests of Accuracy and Bias for County Population Projections: Comment. Journal of the American Statistical Association, 82, 1004-1009. Bryan, T. (1999). U.S. Census Bureau Population Estimates and Evaluation with Loss Functions. Statistics in Transition, 4, 537-548. Coleman, C.D. (2000). Evaluation and Optimization of Population Projections Using Loss Functions. In Federal Forecasters Conference 2000: Papers and Proceedings, ed. Gerald, D.E., Washington: Department of Education, Office of Edu Coleman, C.D. (2002). Total Loss Functions for Assessing the Accuracy of Cross-Sectional Estimates and Forecasts. Manuscript, Alexandria, VA: Timely Analytics, LLC. Coleman, J.S. (1964). Introduction to Mathematical Sociology. New York: Free Press of Glencoe. Davis, S.T. (1994). Evaluation of Postcensal County Estimates for the 1980s. Population Division Working Paper No. 5, Washington, D.C.: U.S. Census Bureau.
https://arxiv.org/abs/2505.18130v1
Available at http://www.census.gov/population/www/documentation/twps0005/twps0005.html.
Ernst, L.R. (1992). Apportionment Methods for the House of Representatives and the Court Challenges. SRD Research Report 92/06, Washington, D.C.: U.S. Census Bureau. Available at http://www.census.gov/srd/papers/pdf/rr92-6.pdf.
Fellegi, I. (1980). Should the Census Count Be Adjusted for Allocation Purposes—Equity Considerations. Conference on Census Undercount: Proceedings of the 1980 Conference. U.S. Department of Commerce, Washington, DC, 193-203.
Fildes, R. (1992). The Evaluation of Extrapolative Forecasting Methods. International Journal of Forecasting, 8, 81-98.
Hastie, T., Tibshirani, R. and Friedman, J. (2001). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer.
Hoeting, J.A., Madigan, D., Raftery, A.E. and Volinsky, C.T. (1999). Bayesian Model Averaging: A Tutorial (with discussion). Statistical Science, 14, 382-417.
Keyfitz, N. (1979). Information and Allocation: Two Uses of the 1980 Census. The American Statistician, 33, 45-50.
Lerner, A.P. (1957). The Backward-Leaning Approach to Controls. The Journal of Political Economy, 65, 437-441.
Lindley, D.V. (1953). Statistical Inference. Journal of the Royal Statistical Society, Series B, 15, 30-76.
Lindley, D.V. (1985). Making Decisions, 2nd ed., New York: Wiley.
Muhsam, H.V. (1966). The Use of Cost Functions in Making Assumptions for Population Forecasts. In United Nations, World Population Conference, 1965, Volume III. New York: United Nations, 23-26.
Mulry, M.H. and Spencer, B.D. (1993). Accuracy of the 1990 Census and Undercount Adjustments. Journal of the American Statistical Association, 88, 1080-1091.
National Research Council (1980). Estimating Population and Income of Small Areas. Washington, D.C.: National Academy Press.
Sarin, R.K. (2000). Multi-Attribute Utility Theory.
In Encyclopedia of Operations Research and Management Science, 2nd ed., eds. Gass, S.I. and Harris, C.M., Boston: Kluwer Academic Publishers, 526-529.
Smith, S.K. (1987). Tests of Accuracy and Bias for County Population Projections. Journal of the American Statistical Association, 82, 991-1003.
Smith, S.K. and Sincich, T. (1992). Evaluating the Forecast Accuracy and Bias of Alternative Population Projections for States. International Journal of Forecasting, 8, 495-508.
Spencer, B.D. (1980). Benefit-Cost Analysis of Data Used to Allocate Funds. New York: Springer-Verlag.
Spencer, B.D. (1985). Statistical Aspects of Equitable Apportionment. Journal of the American Statistical Association, 80, 815-822.
Spencer, B.D. (1986). Conceptual Issues in Measuring Improvement in Population Estimates. In Bureau of the Census, Second Annual Research Conference: Proceedings, March 23-26, 1986, 393-407.
Stanford Research Institute (1974). General Revenue Sharing Data Study, Volume III. Menlo Park, CA: Stanford Research Institute.
Tayman, J., Schafer, E., and Carter, L. (1998). The Role of Population Size in the Determination and Prediction of Population Forecast Errors: An Evaluation using Confidence Intervals for Subcounty Areas. Population Research and Policy Review, 17, 1-20.
Wald, A. (1950). Statistical Decision Functions. New York: Wiley.
Zahedi, F. (2000). Group Decision Making. In Encyclopedia of Operations Research and Management Science, 2nd ed., eds. Gass, S.I. and Harris, C.M., Boston: Kluwer Academic Publishers, 343-350.
A NEW MEASURE OF DEPENDENCE: INTEGRATED R^2

MONA AZADKIA, POUYA ROUDAKI

Abstract. We propose a new measure of dependence that quantifies the degree to which a random variable Y depends on a random vector X. This measure is zero if and only if Y and X are independent, and equals one if and only if Y is a measurable function of X. We introduce a simple and interpretable estimator that is comparable in ease of computation to classical correlation coefficients such as Pearson's, Spearman's, or Chatterjee's. Building on this coefficient, we develop a model-free variable selection algorithm, Feature Ordering by Dependence (FORD), inspired by FOCI [4]. FORD requires no tuning parameters and is provably consistent under suitable sparsity assumptions. We demonstrate its effectiveness and improvements over FOCI through experiments on both synthetic and real datasets.

1. Introduction

Measuring the degree of dependence between two random variables is a longstanding problem in statistics, with numerous methods proposed over the years; for recent surveys, see [24, 60]. Among the most widely used classical measures of statistical association are Pearson's correlation coefficient, Spearman's ρ, and Kendall's τ. These coefficients are highly effective for identifying monotonic relationships, and their asymptotic behaviour is well established. However, a major limitation is that they perform poorly in detecting non-monotonic associations, even when there is no noise in the data. To address this deficiency, there have been many proposals, such as the maximal correlation coefficient [16, 44, 54, 80], various methods based on joint cumulative distribution functions and ranks [10, 12, 29, 31, 34, 42, 50, 56, 72, 78, 82, 83, 96–99, 106], kernel-based methods [46, 47, 77, 85, 105], information-theoretic coefficients [62, 67, 81], coefficients based on copulas [33, 48, 68, 84, 88, 101], and coefficients based on pairwise distances [40, 53, 69, 75, 90, 91].
2020 Mathematics Subject Classification. 62H20, 62H15.
Keywords and phrases. Independence, measure of association, correlation.
arXiv:2505.18146v1 [math.ST] 23 May 2025

Some of these coefficients are widely used in practice; however, they suffer from two common limitations. First, most are primarily designed to test for independence rather than to quantify the strength of the dependence between variables. Second, many of these coefficients lack simple asymptotic distributions under the null hypothesis of independence, which hampers the efficient computation of p-values, since they rely on permutation-based tests. Recently, Chatterjee introduced a new coefficient of correlation [26] that is as simple to compute as classical coefficients, yet it serves as a consistent estimator of a dependence measure ξ(X, Y) that equals 0 if and only if the variables are independent, and 1 if and only if one is a measurable function of the other. Moreover, like classical coefficients, it enjoys a simple asymptotic theory under the null hypothesis of independence. The limiting value ξ(X, Y) was previously introduced in [33] as the limit of a copula-based estimator in the case where X and Y are continuous. The simplicity, efficiency, and interpretability of Chatterjee's correlation have sparked significant interest, leading to a growing body of research on the behavior of the coefficient and its extensions to more complex settings [2–
4, 11, 18, 21, 30, 32, 41, 43, 48, 51, 59, 63, 65, 66, 86, 87, 89, 93, 103, 104]. Building on this line of work, the first contribution of this paper is a new coefficient of dependence with the following properties: (1) it has a simple expression; (2) it is fully non-parametric; (3) it has no tuning parameters; (4) there is no need for estimating densities or characteristic functions; (5) it can be estimated from data very quickly, in time O(n log n), where n is the sample size; (6) asymptotically, it converges to a limit in [0, 1], where the limit is 0 if and only if the random variable Y and the random vector X are independent, and is 1 if and only if Y is equal to a measurable function of X; (7) the limit has a natural interpretation as a generalisation of the familiar partial R^2 statistic for measuring the dependence of Y and X; and (8) all of the above hold under absolutely no assumptions on the laws of the random variables. The second contribution of this paper, which demonstrates the improved ability of our proposed measure to detect dependence compared to [4, 26], is a variable selection algorithm. Inspired by FOCI, introduced in [4], our algorithm is model-free, requires no tuning parameters, and is provably consistent under sparsity assumptions. Finally, we highlight that the newly introduced coefficient of dependence can be interpreted as a novel discrepancy measure on the space of permutations. The paper is organised as follows. Section 2 introduces the definition and key properties of our proposed coefficient. In Section 3, we interpret the coefficient as a generalisation of the classical R^2 measure. Section 4 presents a theorem on the rate of convergence of the estimator. Our variable selection method is introduced in Section 5, along with a consistency theorem.
In Section 6, we discuss a simplified estimator for the case where both variables are one-dimensional, and explore its connection to distance measures between permutations. Applications to both simulated and real datasets are provided in Section 7. Finally, Section 8 contains the technical proofs.

2. A New Measure of Dependence

Let Y be a random variable and X = (X_1, . . . , X_p) a random vector defined on the same probability space. For clarity, when p = 1, we denote the vector X simply by X. Let μ be the probability law of Y, and let S ⊆ R be the support of μ. If S attains a maximum s_max, let S̃ = S \ {s_max}; otherwise let S̃ = S. We define a probability measure μ̃ on S where, for any measurable set A ⊆ S, μ̃(A) = μ(A ∩ S̃)/μ(S̃). We propose the following quantity as a measure of dependence of Y on X:

\[ \nu = \nu(Y, \mathbf{X}) := \int \frac{\operatorname{Var}(\mathbb{E}[\mathbf{1}\{Y > t\} \mid \mathbf{X}])}{\operatorname{Var}(\mathbf{1}\{Y > t\})} \, d\tilde{\mu}(t), \tag{1} \]

where 1{Y > t} is the indicator of the event {Y > t}. Notice the difference of ν(Y, X) from

\[ T(Y, \mathbf{X}) = \frac{\int \operatorname{Var}(\mathbb{E}[\mathbf{1}\{Y \ge t\} \mid \mathbf{X}]) \, d\mu(t)}{\int \operatorname{Var}(\mathbf{1}\{Y \ge t\}) \, d\mu(t)}, \tag{2} \]

which was introduced in [33] and has recently been studied more extensively [4, 22]. To explain the rationale behind modifying μ to μ̃, first note that when μ possesses a continuous density, μ̃ = μ. The modification only kicks in when S attains a
maximum s_max and μ({s_max}) > 0; i.e., μ has a point mass at s_max. In this case, since 1{Y > s_max} = 0, we have Var(1{Y > s_max}) = 0. In addition, since 1{Y > s_max} is a deterministic constant, it can be considered independent of X or a measurable function of X. Hence, to ensure that ν is a reasonable measure of dependence, we need to address the point mass at the maximum of the support by focusing on the part of the support where the question of dependency is well-defined. In addition, since 1{Y > s_max} = 0, there is no information in its variation with respect to dependence on X. Before stating our theorem about ν, let us first see why ν is a reasonable measure of dependence. First, ν is a deterministic quantity that depends only on the joint law of (Y, X). Note that taking a conditional expectation does not increase the variance; therefore, for any t,

\[ \operatorname{Var}(\mathbb{E}[\mathbf{1}\{Y > t\} \mid \mathbf{X}]) \le \operatorname{Var}(\mathbf{1}\{Y > t\}), \]

ensuring that ν is between 0 and 1. Also note that if Y is a measurable function of X, for almost all t we have Var(E[1{Y > t} | X]) = Var(1{Y > t}) and hence ν = 1. On the other hand, when Y is independent of X, for almost all t we have Var(E[1{Y > t} | X]) = 0 and hence ν = 0. We will show that the converse of each statement is also true. The following theorem summarises the properties of ν.

Theorem 2.1. For random variables Y and X such that Y is not almost surely a constant, ν(Y, X) belongs to the interval [0, 1]; it is 0 if and only if Y and X are independent, and it is 1 if and only if there exists a measurable function f : R^p → R such that Y = f(X) almost surely.

The statistic ν follows the same idea as T in [4, 22, 33]. In this work, for the purpose of simplicity in our notation, we use the indicator function 1{Y > t} instead of 1{Y ≥ t}, which is used in the definition of T. Note that when Y has a continuous density, T is equal to T̃, defined as

\[ \tilde{T} := \frac{\int \operatorname{Var}(\mathbb{E}[\mathbf{1}\{Y > t\} \mid \mathbf{X}]) \, d\mu(t)}{\int \operatorname{Var}(\mathbf{1}\{Y > t\}) \, d\mu(t)}. \]
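For a finite discrete joint law, the integral in equation (1) is a finite sum, so the two extreme cases of Theorem 2.1 can be checked exactly. The sketch below is a hypothetical helper written for illustration (not code from the paper); it evaluates ν for a joint pmf given as a dictionary:

```python
def nu_discrete(pmf):
    # Exact evaluation of equation (1) for a finite joint pmf {(x, y): prob}.
    # Illustrative helper; not the paper's estimator.
    ys = sorted({y for _, y in pmf})
    xs = sorted({x for x, _ in pmf})
    px = {x: sum(p for (xx, _), p in pmf.items() if xx == x) for x in xs}
    py = {y: sum(p for (_, yy), p in pmf.items() if yy == y) for y in ys}
    s_max = ys[-1]
    support = [y for y in ys if y != s_max]   # mu-tilde drops the maximum of the support
    mass = sum(py[t] for t in support)
    nu = 0.0
    for t in support:
        p_gt = sum(py[y] for y in ys if y > t)          # P(Y > t)
        var_ind = p_gt * (1.0 - p_gt)                   # Var(1{Y > t})
        cond = {x: sum(p for (xx, y), p in pmf.items() if xx == x and y > t) / px[x]
                for x in xs}                            # E[1{Y > t} | X = x]
        var_cond = sum(px[x] * (cond[x] - p_gt) ** 2 for x in xs)
        nu += (py[t] / mass) * var_cond / var_ind       # integrate against mu-tilde
    return nu

# Y a measurable function of X  ->  nu = 1;  Y independent of X  ->  nu = 0
print(nu_discrete({(0, 0): 0.5, (1, 1): 0.5}))                      # 1.0
print(nu_discrete({(x, y): 0.25 for x in (0, 1) for y in (0, 1)}))  # 0.0
```

The two printed cases exercise exactly the "if" directions of Theorem 2.1 that the surrounding text verifies by hand.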
To highlight the difference between T and ν, we focus on the case where Y possesses a continuous density, meaning that μ has no point mass, and hence work with T̃ = T. The main difference is that T̃ is the ratio of the average of Var(E[1{Y > t} | X]) to the average of Var(1{Y > t}), whereas ν averages the ratio Var(E[1{Y > t} | X])/Var(1{Y > t}). Another way to look at this difference is to see that both T̃ and ν are weighted averages of Var(E[1{Y > t} | X]). The weight of Var(E[1{Y > t} | X]) in T̃(Y, X) is the same for all values of t, but this is not the case for ν(Y, X): Var(E[1{Y > t} | X]) gets a higher weight for those t further in the tails. This becomes more pronounced when comparing functional relationships that exhibit oscillatory behaviour. Having defined ν,
the main question is whether ν can be efficiently estimated from data. We propose ν_n(Y, X) to estimate ν(Y, X) and study its properties. Our data consist of n i.i.d. copies (Y_1, X_1), . . . , (Y_n, X_n) of the pair (Y, X), for n ≥ 3. For each i, let R_i be the rank of Y_i, i.e., R_i = Σ_{j=1}^n 1{Y_j ≤ Y_i}. For each pair i and j in 1, . . . , n with i ≠ j, define N_{−j}(i) as the index of the nearest neighbour of X_i with respect to the Euclidean metric on R^p among all X_k for k ≠ i, j (ties broken uniformly at random). Let

\[ \mathcal{R}^j_i := [\min\{R_i, R_{N_{-j}(i)}\}, \; \max\{R_i, R_{N_{-j}(i)}\}]. \]

Let n_max = |{i : Y_i = max_{j∈[n]} Y_j}|, and let c_min = 1 if |{i : Y_i = min_{j∈[n]} Y_j}| equals 1 and 0 otherwise. Let n_0 = n_max + c_min. In the case where there are no ties among the Y_j's, n_max = c_min = 1. When n_0 < n, we define ν_n as

\[ \nu_n(Y, \mathbf{X}) := 1 - \frac{1}{2}\,\frac{n-1}{n-n_0} \sum_{\substack{j \\ R_j \neq 1,\, n}} \; \sum_{i \neq j} \frac{\mathbf{1}\{R_j \in \mathcal{R}^j_i\}}{(R_j - 1)(n - R_j)}. \tag{3} \]

If n = n_0, we are unable to provide an estimator for ν. The following theorem proves that ν_n is a consistent estimator of ν.

Theorem 2.2. If Y is not almost surely a constant, then ν_n converges almost surely to ν as n → ∞.

Remark 2.3. (1) If p is fixed, the statistic ν_n can be computed in O(n log n) time, because nearest neighbours can be determined in O(n log n) [39], and the terms 1{R_j ∈ R^j_i} and the ranks R_j can be computed in O(n log n) time [61]. At first glance, equation (3) appears to require a double loop over each j and each interval i, suggesting O(n^2) complexity. However, the core operation reduces to counting how many integer intervals contain a given integer, which can be solved in O(n) time using a difference-array method. (2) No assumptions are needed on the joint distribution of (Y, X), except for the non-degeneracy condition that Y is not almost surely constant. This condition is essential: if Y were almost surely constant, it would be both independent of X and a measurable function of X, hence rendering any meaningful measure of dependence between Y and X impossible.
(3) Although Theorem 2.1 guarantees that ν is between 0 and 1, and Theorem 2.2 ensures the almost sure convergence of ν_n to ν, the actual value of ν_n may lie outside this interval. (4) The coefficient ν_n(Y, X) remains unchanged if we apply a strictly increasing transformation to Y, because it is based on ranks. (5) We have prepared an R package, called FORD, which will soon be available on CRAN. This package includes a function for computing ν_n and another for executing the variable selection algorithm FORD, as presented in Section 5. For now, the code is available on GitHub.¹ (6) Besides variable selection, another natural area of application of our coefficient is graphical models; similar ideas as in [6, 27] are being investigated. (7) The coefficients ν_n and ν resemble those defined in earlier works [4, 22, 33], but they appear to be genuinely novel. (8) If the X_i's contain ties, then ν_n(Y, X) becomes a randomized estimate of ν(Y, X) due to the randomness introduced by tie-breaking. While this effect diminishes as n grows large, a more robust estimate can be obtained by averaging ν_n over all possible tie-breaking configurations. (9) Note that ν_n is based on nearest
neighbour graphs and, as a result, generally lacks scale invariance; that is, changes in the scale of certain covariates can significantly alter the graph structure. To address this issue, a rank-based variant, similar to that proposed in [93], can be considered. (10) Note that ν(Y, X) is not symmetric in Y and X. We intentionally preserve this asymmetry, as our goal may be to assess whether Y is a function of X, rather than simply whether one variable is a function of the other. If a symmetric measure of dependence is desired, it can be readily obtained by taking the maximum of ν(Y, X) and ν(X, Y).

¹https://github.com/PouyaRoudaki/FORD

3. Interpreting the Measure of Dependence

To interpret ν(Y, X), we follow the approach in [22], beginning with the case where Y is a binary random variable. Specifically, suppose that Y takes values in {0, 1}, i.e., Y = 1{Y > 0}. By the law of total variance,

\[ \nu(Y, \mathbf{X}) = \frac{\operatorname{Var}(\mathbb{E}[Y \mid \mathbf{X}])}{\operatorname{Var}(Y)} = 1 - \frac{\mathbb{E}[\operatorname{Var}(Y \mid \mathbf{X})]}{\operatorname{Var}(Y)}. \]

Therefore, ν(Y, X) = R^2_{Y,X}, where R^2_{Y,X} is the proportion of variation in Y which is explained by X. For a general random variable Y taking values in R, for each t ∈ R let Y_t := 1{Y > t}. Then

\[ \nu(Y, \mathbf{X}) = \int \left( 1 - \frac{\mathbb{E}[\operatorname{Var}(Y_t \mid \mathbf{X})]}{\operatorname{Var}(Y_t)} \right) d\tilde{\mu}(t) = \int R^2_{Y_t, \mathbf{X}} \, d\tilde{\mu}(t). \]

Hence, ν(Y, X) is the average of R^2_{Y_t,X} over all t ∈ R with respect to the measure μ̃. Since Y can be viewed as a linear combination of {Y_t}_{t∈R}, ν(Y, X) can be viewed as a measure of the proportion of variation in Y that is explainable by X.

4. Rate of Convergence

To obtain a convergence rate for ν_n to ν, we must impose certain assumptions on the distribution of (Y, X). Without such assumptions, the convergence can, in principle, be arbitrarily slow. The primary challenge lies in controlling the sensitivity of the conditional distribution of Y given X with respect to variations in X, which is addressed by the first assumption below. The second assumption is introduced for technical convenience.
(A1) There are nonnegative real numbers β and C such that for any t ∈ R and x, x′ ∈ R^p,

\[ \left| P(Y \le t \mid \mathbf{X} = x) - P(Y \le t \mid \mathbf{X} = x') \right| \le C \left( 1 + \|x\|^{\beta} + \|x'\|^{\beta} \right) \|x - x'\| \min\{F(t), 1 - F(t)\}. \]

(A2) There exists a constant K > 0 such that P(∥X∥ ≤ K) = 1; that is, X has bounded support.

Assumption (A1) implies that the conditional distribution function t ↦ P(Y ≤ t | X = x) is locally Lipschitz in x, with a Lipschitz constant that may grow at most polynomially in ∥x∥ and ∥x′∥. Because the bound in (A1) is multiplied by min{F(t), 1 − F(t)}, the Lipschitz requirement becomes stricter for tail values of Y. Under Assumptions (A1) and (A2), the following theorem shows that ν_n converges to ν at the rate n^{−1/(p∨2)}, up to a logarithmic factor.

Theorem 4.1. Suppose that p ≥ 1, and that assumptions (A1) and (A2) hold for some C, β, and K. Then, as n → ∞,

\[ \nu_n - \nu = O_P\!\left( \frac{(\log n)^{1 + \mathbf{1}\{p = 1\}}}{n^{1/(p \vee 2)}} \right). \]

5. Variable Selection: Feature Ordering by Dependence

Many commonly used variable selection methods in the statistics literature are based on linear or additive models. This includes several classical approaches [14, 28, 35, 38, 45, 52, 71, 92] as well as modern ones [19, 36, 79, 100, 107, 108], which are both powerful and widely adopted in practice. However, these methods can struggle
when interaction effects or nonlinear relationships are present. Such problems can sometimes be overcome by model-free methods [1, 8, 13, 15, 17, 20, 37, 52, 55, 94]. These, too, are powerful and widely used techniques, and they perform better than model-based methods when interactions are present. On the flip side, their theoretical foundations are usually weaker than those of model-based methods. In this section, we propose a new variable selection algorithm for multivariate regression using a forward stepwise algorithm based on ν. Our algorithm follows precisely the idea of FOCI [4] for multivariate regression. We call our method Feature Ordering by Integrated R^2 Dependence (FORD). The method is as follows. Let Y be the response variable and let X = (X_1, . . . , X_p) be the set of predictors. The data consist of n i.i.d. copies of (Y, X). First, choose j_1 to be the index j that maximizes ν_n(Y, X_j). If ν_n(Y, X_{j_1}) ≤ 0, declare V̂ to be the empty set and terminate the process. Otherwise, having obtained j_1, . . . , j_k, choose j_{k+1} to be the index j ∉ {j_1, . . . , j_k} that maximizes ν_n(Y, (X_{j_1}, . . . , X_{j_k}, X_j)). Continue like this until arriving at the first k such that

\[ \nu_n(Y, (X_{j_1}, \ldots, X_{j_k}, X_{j_{k+1}})) \le \nu_n(Y, (X_{j_1}, \ldots, X_{j_k})), \]

and then declare the chosen subset to be V̂ := {j_1, . . . , j_k}. If there is no such k, define V̂ as the whole set of variables. Note that the algorithm closely follows the setup of FOCI [4], with the minor difference that FOCI uses a conditional measure of dependence and terminates when the conditional dependence between Y and the newly added variable, given all previously selected variables, becomes non-positive. Several extensions of FOCI have been proposed in the literature.
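The forward stepwise procedure just described can be sketched in a few lines. Here `nu` stands for any plug-in dependence estimator, such as ν_n from equation (3); both the function and the deterministic toy scorer in the usage lines are illustrative, not the authors' R implementation:

```python
import numpy as np

def ford(Y, X, nu):
    # Sketch of the FORD forward stepwise selection described above.
    # `nu(Y, X_sub)` is any dependence estimator, e.g. nu_n of equation (3).
    n, p = X.shape
    selected, remaining = [], list(range(p))
    current = 0.0
    while remaining:
        # score each candidate variable appended to the current subset
        scores = [(nu(Y, X[:, selected + [j]]), j) for j in remaining]
        best, j_star = max(scores)
        if not selected and best <= 0:
            return []              # nu_n(Y, X_{j1}) <= 0: declare V-hat the empty set
        if selected and best <= current:
            return selected        # no candidate increases the dependence: stop
        selected.append(j_star)
        remaining.remove(j_star)
        current = best
    return selected                # no such k: select all variables

# Deterministic toy scorer for illustration: column j of X carries the constant j + 1,
# and the "dependence" of each subset is read off a fixed table.
X = np.tile([1.0, 2.0, 3.0], (4, 1))
table = {frozenset({2.0}): 0.5, frozenset({1.0}): 0.2, frozenset({3.0}): 0.1,
         frozenset({1.0, 2.0}): 0.7, frozenset({2.0, 3.0}): 0.6,
         frozenset({1.0, 2.0, 3.0}): 0.65}
toy_nu = lambda Y, Xs: table[frozenset(Xs[0])]
print(ford(np.zeros(4), X, toy_nu))  # [1, 0]: adding the last column would not help
```

With the toy table, the trace matches the description: variable 1 wins the first round (0.5), adding variable 0 improves the score to 0.7, and the candidate full set scores 0.65 ≤ 0.7, so the procedure stops.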
For example, KFOCI [59] incorporates kernel-based methods to estimate conditional dependence, while [76] introduces a parametric, differentiable approximation of the same conditional dependence measure, which is used to evaluate feature importance in neural networks. Some other model-agnostic variable importance scores are [57, 95, 102].

5.1. Efficacy of FORD. Let (Y, X) be as defined in the previous section. For any subset of indices V ⊆ {1, . . . , p}, define X_V := (X_j)_{j∈V} and let V^c := {1, . . . , p} \ V. A subset V is said to be sufficient [94] if Y and X_{V^c} are conditionally independent given X_V. This definition allows for the possibility that V is the empty set, in which case it simply implies that Y and X are independent. We will prove later that ν(Y, X_{V′}) ≥ ν(Y, X_V) whenever V′ ⊇ V, with equality if and only if Y and X_{V′\V} are conditionally independent given X_V. Thus, if V′ ⊇ V, the difference ν(Y, X_{V′}) − ν(Y, X_V) is a measure of how much extra predictive power is added by appending X_{V′\V} to the set of predictors X_V. Let δ be the largest constant such that for every insufficient subset V ⊆ {1, . . . , p}, there exists some index j ∉ V satisfying

\[ \nu(Y, \mathbf{X}_{V \cup \{j\}}) \ge \nu(Y, \mathbf{X}_V) + \delta. \]

In other words, if V is insufficient, then appending at least one
variable X_j with j ∉ V to the set X_V increases the dependence with Y by at least δ. The main result of this section, stated below, shows that if δ is bounded away from zero, then under certain regularity conditions on the distribution of (Y, X), the subset selected by FORD is sufficient with high probability. It is worth noting that the assumption that δ is not too small implicitly encodes a sparsity condition: by definition, δ guarantees the existence of a sufficient subset of size at most 1/δ. To prove our result, we need the following two technical assumptions on the joint distribution of (Y, X). They are generalisations of assumptions (A1) and (A2) from Section 4.

(A1′) There are nonnegative real numbers β and C such that for any set V ⊆ {1, . . . , p} of size ≤ 1/δ + 2, any x, x′ ∈ R^V, and any t ∈ R,

\[ \left| P(Y \ge t \mid \mathbf{X}_V = x) - P(Y \ge t \mid \mathbf{X}_V = x') \right| \le C \left( 1 + \|x\|^{\beta} + \|x'\|^{\beta} \right) \|x - x'\| \min\{F(t), 1 - F(t)\}. \]

(A2′) There exists a constant K > 0 such that for any subset V ⊆ {1, . . . , p} with cardinality at most 1/δ + 2, we have P(∥X_V∥ ≤ K) = 1; that is, X_V has bounded support.

Theorem 5.1. Suppose that δ > 0, and that assumptions (A1′) and (A2′) hold. Let V̂ be the subset selected by FORD with a sample of size n. There are positive real numbers L_1, L_2, and L_3, depending only on C, β, K, and δ, such that

\[ P(\hat{V} \text{ is sufficient}) \ge 1 - L_1 p^{L_2} e^{-L_3 n}. \]

Theorem 5.1 demonstrates that FORD, like FOCI [4], differs from many traditional variable selection methods in that it is not only model-free but also incorporates a principled stopping rule and provides a theoretical guarantee that the selected subset is sufficient with high probability. A closely related approach in the literature is the mutual information-based method proposed by [8]; however, in contrast to FOCI and FORD, it does not include a well-defined stopping criterion.

6. A Simpler Estimator

Consider the case where p = 1, i.e., X is a one-dimensional random variable. To emphasise this, we denote the vector X simply by X throughout this section.
Following [22], we introduce a related estimator whose asymptotic behaviour is easier to analyse. Let (Y_1, X_1), . . . , (Y_n, X_n) be i.i.d. samples drawn from the same distribution as (Y, X), where n ≥ 2. Rearrange the data as (Y_(1), X_(1)), . . . , (Y_(n), X_(n)) such that X_(1) ≤ · · · ≤ X_(n). If there are no ties among the X_i's, there is a unique way of doing this. If there are ties among the X_i's, then choose an increasing rearrangement as above by breaking ties uniformly at random. Let r_i be the rank of Y_(i), that is, the number of j such that Y_(j) ≤ Y_(i). Let K_i := [min{r_i, r_{i+1}}, max{r_i, r_{i+1}}]. The estimator of ν(Y, X) is then defined as

\[ \nu^{1\text{-dim}}_n(Y, X) := 1 - \frac{1}{2} \sum_{\substack{j \\ r_j \neq 1,\, n}} \; \sum_{i \neq j,\, j-1,\, n} \frac{\mathbf{1}\{r_j \in \mathcal{K}_i\}}{(r_j - 1)(n - r_j)}. \tag{4} \]

In the following, we show that ν^{1-dim}_n is a consistent estimator of ν. Furthermore, under the assumption that X and Y are independent, we derive its asymptotic mean and variance.

Theorem 6.1. For random variables X and Y, if Y is not almost surely a constant, then, as n → ∞,
ν^{1-dim}_n converges almost surely to ν.

Proposition 6.2. Suppose that X and Y are independent and both have continuous distributions. Then

\[ \mathbb{E}[\nu^{1\text{-dim}}_n(Y, X)] = 2/n, \qquad \lim_{n \to \infty} n \operatorname{Var}(\nu^{1\text{-dim}}_n(Y, X)) = \pi^2/3 - 3. \]

In addition to Proposition 6.2, we conjecture that under the same assumptions √n ν^{1-dim}_n(Y, X) converges in distribution to N(0, π²/3 − 3) as n → ∞. Unfortunately, the methods introduced in [23, 25] do not yield asymptotic normality in our setting, as the dependency structure does not appear to be local. We believe that the moment method can prove the asymptotic normality under the null, but the lengthy calculations are beyond the scope of this work.

6.1. Comparison to Chatterjee's Correlation Coefficient. To explain the distinction between the measures ν and T, it is instructive to compare their respective estimators ν^{1-dim}_n and ξ_n. Suppose there are no ties among the sample observations X_i and Y_i. Under this assumption, the estimators can be expressed as

\[ \nu^{1\text{-dim}}_n(Y, X) = 1 - \sum_{i=1}^{n-1} \sum_{\substack{j \neq i,\, i+1 \\ r_j \neq 1,\, n}} w^{\nu}_{n,j} \, \mathbf{1}\{r_j \in \mathcal{K}_i\}, \qquad \xi_n(Y, X) = 1 - \sum_{i=1}^{n-1} \sum_{j \neq i} w^{\xi}_n \, \mathbf{1}\{r_j \in \mathcal{K}_i\}, \]

where the weights are given by w^{ν}_{n,j} = 1/(2(r_j − 1)(n − r_j)) and w^{ξ}_n = 3/(n² − 1). This formulation highlights the principal difference in how the two statistics weight rank oscillations. For n ≥ 5, the inequality w^{ξ}_n ≥ w^{ν}_{n,j} holds precisely when

\[ r_j \in L_n := \left[ \frac{n + 1 - \sqrt{(n-1)(n-5)/3}}{2}, \; \frac{n + 1 + \sqrt{(n-1)(n-5)/3}}{2} \right], \]

and w^{ξ}_n < w^{ν}_{n,j} otherwise. Thus, for any rank oscillation interval K_i containing an r_j ∈ L_n, the statistic ξ_n imposes a greater penalty (interpreted in terms of deviation from independence) than does ν^{1-dim}_n. In general, the weight ratio satisfies

\[ \frac{w^{\nu}_{n,j}}{w^{\xi}_n} \ge \frac{2}{3}. \]

However, this ratio does not admit a uniform upper bound; instead, its maximal value grows asymptotically as n/6. Consequently, when r_j ∉ L_n, the estimator ν^{1-dim}_n penalises the corresponding rank oscillation more heavily than ξ_n, with the disparity increasing with the sample size n.

6.2. A Metric on Permutations.
Without loss of generality, assume that {X_i} = {Y_i} = [n]. Let π, σ be the permutations such that X_{π(1)} < · · · < X_{π(n)} and Y_{σ(1)} < · · · < Y_{σ(n)}. Then, in this case, we have r_i = rank(Y_{π(i)}) = rank(σ^{−1}π(i)) = σ^{−1}π(i). Therefore, one can write ν^{1-dim}_n(Y, X) = 1 − d_ν(σ, π), where

\[ d_\nu(\sigma, \pi) := \frac{1}{2} \sum_{\ell=2}^{n-1} \sum_{i=1}^{n-1} \frac{\mathbf{1}\{\ell \text{ is between } \sigma^{-1}\pi(i) \text{ and } \sigma^{-1}\pi(i+1)\}}{(\ell - 1)(n - \ell)}. \tag{5} \]

Note that:
1. d_ν is left-invariant, i.e., d_ν(σ, π) = d_ν(τσ, τπ) for any permutation τ;
2. d_ν(σ, π) = 0 if and only if σ = π;
3. d_ν(σ, π) is not necessarily equal to d_ν(π, σ), but one can symmetrize it by d^{sym}_ν(σ, π) := (d_ν(σ, π) + d_ν(π, σ))/2.

Therefore, d_ν(σ, π) can be viewed as a measure of discrepancy between σ and π. There are many metrics in the literature for quantifying the distance between permutations. Some widely used metrics are:
• Spearman's footrule: d_s(σ, π) = Σ_{i=1}^n |σ(i) − π(i)|.
• Spearman's rho: d²_ρ(σ, π) = Σ_{i=1}^n (σ(i) − π(i))².
• Kendall's tau: d_τ(σ, π) = the minimum number of adjacent transpositions needed to bring π to σ.
• Cayley: d_C(σ, π) = the minimum number of transpositions needed to bring π to σ.
• Hamming: d_H(σ, π) = |{i : σ(i) ≠ π(i)}|.
• Ulam: d_U(σ, π) = n − the length of the longest increasing subsequence in σπ^{−1}.

We observe that d_ν(σ, π) closely resembles Spearman's footrule, or the related measure Osc(σ^{−1}π) [64], in the sense that
it quantifies how much σ^{−1}π oscillates as we move from i to i + 1. However, unlike Spearman's footrule, these oscillations are weighted: d_ν(I, σ^{−1}π) = d_ν(σ, π) considers not only the magnitude |σ^{−1}π(i) − σ^{−1}π(i + 1)|, but also the positions of σ^{−1}π(i) and σ^{−1}π(i + 1) within the range. Oscillations occurring near the top or bottom of the range are assigned greater weight, emphasising their contribution to the overall discrepancy. This is also evident from the fact that d_ν is left-invariant, in contrast to other metrics such as Spearman's footrule, which are right-invariant. Therefore, d_ν takes the positional information into account when evaluating the disorder.

7. Examples

This section presents applications of our methods to simulated and real datasets. In all cases, covariates were standardised before analysis.

7.1. Simulation Examples.

Example 7.1. (general behaviour) Figure 1 illustrates the general performance of ν_n as a measure of association. The figure consists of three rows, each beginning with a scatterplot in which Y is a noiseless function of X, where X is drawn from the uniform distribution on [−1, 1]. Moving to the right within each row, increasing levels of noise are added to Y. The sample size is fixed at n = 100 across all cases, demonstrating that ν_n performs well even with relatively small samples. In each row, we observe that ν_n is close to 1 in the leftmost plot and gradually decreases as more noise is introduced. In each column, we observe that the values of ν_n are comparable, meaning that ν_n satisfies the notion of equitability defined in [81]: "to assign similar scores to equally noisy relationships of different types".

Example 7.2. (asymptotic behaviour) We numerically examine the distribution of ν^{1-dim}_n(Y, X) under the assumption that Y and X are independent. Specifically, we let the X_i's and Y_i's be independent and identically distributed as Uniform[0, 1], and consider the case n = 20.
Figure 1. Values of ν_n(X, Y) for various kinds of scatterplots with n = 100. Noise increases from left to right. (Panel values, by row: 0.985, 0.772, 0.212; 0.973, 0.703, 0.284; 0.901, 0.667, 0.301.)

We generate 10000 replications of ν^{1-dim}_n(Y, X), and the resulting histogram is displayed in Figure 2a. For n = 20, the normal approximation already provides a reasonable fit. Figure 2b shows the histogram for n = 1000, where the agreement with the normal distribution is even more pronounced. We also examine a setting where X and Y are dependent. To this end, we consider the following simple model: let X and Z be independent random variables, each distributed as Uniform[0, 1], and define Y := XZ. We have

\[ \nu(Y, X) = \int_0^1 \frac{1 + 2t\log t - t^2 - (1 - t + t\log t)^2}{(1 - t + t\log t)(t - t\log t)} \, (-\log t) \, dt, \]

which is approximately equal to 0.3126. To study the asymptotic behavior of ν^{1-dim}_n(Y, X), we perform 10000 simulations with n = 1000.

Figure 2. Histograms of 10000 simulations of ν^{1-dim}_n(Y, X) with X and Y independently distributed as Uniform[0, 1], for (a) n = 20 and (b) n = 1000, overlaid with the asymptotic normal density N(μ_n, σ^2
_n), where μ_n = 2/n and σ^2_n = (π^2/3 − 3)/n.

The sample mean of ν^{1-dim}_n(Y, X) is approximately 0.314, with a standard deviation of about 0.02. The resulting histogram, shown in Figure 3, exhibits an excellent fit with a normal distribution having the same mean and standard deviation.

Figure 3. Histogram of 10000 simulations of ν^{1-dim}_n(Y, X) under dependence between X and Y, overlaid with the normal density curve with mean 0.314 and standard deviation 0.02.

Example 7.3. (power comparison) In this example, we assess the power of the independence test based on ν_n and its one-dimensional variant ν^{1-dim}_n, and compare their performance against several recently proposed, powerful tests. The test statistics included in our comparison are: the maximal information coefficient (MIC) [81], distance correlation [91], the Hilbert–Schmidt independence criterion (HSIC) [46, 47], the HHG statistic [53], Chatterjee's correlation coefficient ξ_n (xicor) [22], and the T_n statistic [4]. Throughout the comparisons, we assume that (X_1, Y_1), . . . , (X_n, Y_n) is an i.i.d. sample drawn from a distribution on R². We adopt the same experimental setup as described in Section 4.3 of [22]. Power comparisons were conducted with a sample size of n = 100, using 500 simulations to estimate the power in each scenario. The variable X was generated from the uniform distribution on [−1, 1], the noise parameter λ ranged from 0 to 1, and the noise variable ε ∼ N(0, 1) is independent of X. The following six alternatives were considered:
(1) Linear: Y = 0.5X + 3λε.
(2) Step function: Y = f(X) + 10λε, where f takes the values −3, 2, −4, and −3 in the intervals [−1, −0.5), [−0.5, 0), [0, 0.5), and [0.5, 1].
(3) W-shaped: Y = |X + 0.5| 1{X < 0} + |X − 0.5| 1{X ≥ 0} + 0.75λε.
(4) Sinusoid: Y = cos(8πX) + 3λε.
(5) Circular: Y = Z√(1 − X²) + 0.9λε, where Z is 1 or −1 with equal probability, independent of X.
(6) Heteroskedastic: Y = 3(σ(X)(1 − λ) + λ)ε, where σ(X) = 1 if |X| ≤ 0.5 and 0 otherwise.
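A direct O(n²) transcription of the one-dimensional estimator in equation (4) is enough to reproduce small experiments of this kind. The sketch below is an illustrative reimplementation (assuming no ties in the data), not the authors' package; the noiseless sinusoid mirrors alternative (4) with λ = 0:

```python
import numpy as np

def nu_1dim(Y, X):
    # Naive O(n^2) sketch of equation (4); assumes no ties among the X_i or Y_i.
    Y, X = np.asarray(Y, float), np.asarray(X, float)
    n = len(Y)
    Ys = Y[np.argsort(X)]                       # rearrange Y by increasing X
    r = np.argsort(np.argsort(Ys)) + 1          # r_i = rank of Y_(i)
    total = 0.0
    for j in range(n):
        if r[j] in (1, n):                      # sum only over r_j != 1, n
            continue
        w = (r[j] - 1) * (n - r[j])
        for i in range(n - 1):                  # intervals K_i = [min, max] of (r_i, r_{i+1})
            if i in (j, j - 1):                 # exclude i = j and i = j - 1
                continue
            lo, hi = min(r[i], r[i + 1]), max(r[i], r[i + 1])
            if lo <= r[j] <= hi:
                total += 1.0 / w
    return 1.0 - 0.5 * total

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
print(nu_1dim(np.cos(8 * np.pi * x), x))         # substantially positive: noiseless sinusoid
print(nu_1dim(rng.uniform(-1.0, 1.0, 200), x))   # near zero: independent data
```

Even for a heavily oscillating noiseless signal, the statistic stays well away from zero, while for independent draws it concentrates near 2/n, consistent with Proposition 6.2.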
The R packages energy, minerva, HHG, dHSIC, XICOR, and FOCI were employed to compute the distance correlation, MIC, HHG, HSIC, $\xi_n$ and $T_n$ statistics, respectively. The p-values were calculated using 1000 independent permutations, and the power is estimated at the 5% significance level.

The plots in Figure 4 illustrate that $\nu_n$ and $\nu_n^{1\text{-dim}}$ are competitive with $\xi_n$ and outperform the other tests in scenarios where the underlying dependency has an oscillatory structure, such as the W-shaped and sinusoidal settings. However, their power is relatively lower for smooth alternatives like the linear, circular, and heteroskedastic patterns.

A comparison between $\nu_n^{1\text{-dim}}$ and its counterpart $\xi_n$, as well as between $\nu_n$ and $T_n$, reveals consistently slightly higher power for the former in both pairs. Furthermore, across all alternatives, the simpler one-dimensional statistics, $\nu_n^{1\text{-dim}}$ and $\xi_n$, tend to outperform their more flexible counterparts, $\nu_n$ and $T_n$, respectively. This advantage is likely due to their reduced variance: the simpler methods use only the immediate next neighbour when ordering the predictor $X$, whereas the more complex versions can choose freely between preceding and succeeding neighbours. This added flexibility introduces higher variability in the estimation, reducing power. In addition, we consider
the following alternative:

(7) Heteroskedastic and Sinusoid: $Y = \cos(20\pi(1 + 10\lambda\varepsilon)X^2)$.

Figure 5 illustrates that in this case $\nu_n^{1\text{-dim}}$ and $\nu_n$ appear more powerful than the other tests, including $\xi_n$. This example demonstrates that the new coefficient is more effective at detecting sinusoidal relationships and less sensitive to heteroskedasticity compared to $\xi_n$.

Figure 4. Comparison of the power of several tests of independence ($\nu_n$, $\nu_n^{1\text{-dim}}$, $\xi_n$, $T_n$, dCor, HHG, HSIC, MIC) across the six alternatives: Linear, Step Function, W-shaped, Sinusoid, Circular, and Heteroskedastic. The level of the noise or homoskedasticity increases from left to right. In each case, the sample size is 100, and 500 simulations were used to estimate the power. The p-values were calculated using 1000 independent permutations.

Example 7.4. (time complexity) In this example, we compare the computational complexity of several dependence measures: $\xi_n$ from [22], $T_n$ from [4], $\hat\rho^2$ and $\tilde\rho^2$ from [58], alongside the proposed coefficients in this paper, $\nu_n$ and $\nu_n^{1\text{-dim}}$. We independently sample $X$ and $Y$ from the standard normal distribution and perform 100 replications. The average computation time in seconds for

Figure 5. Comparison of the power of several tests of independence on the Heteroskedastic-and-Sinusoid alternative.
The level of the noise or homoskedasticity increases from left to right. The sample size is 100, and 500 simulations were used to estimate the power. The p-values were calculated using 1000 independent permutations.

each method is reported in Table 1. The most efficient methods in terms of computational cost are $\nu_n^{1\text{-dim}}$ and $\xi_n$, both demonstrating $O(n \log n)$ behaviour. The statistics $\nu_n$ and $T_n$ also exhibit $O(n \log n)$ time complexity. In contrast, the kernel-based measures $\hat\rho^2$ and $\tilde\rho^2$ are substantially more computationally intensive, with a time complexity of $O(n^2)$.

Table 1. Average runtime (in seconds) of various dependence measures across different sample sizes. The lowest runtime in each row is highlighted in bold.

    n        ν_n^{1-dim}   ν_n        ξ_n        T_n        ρ̂²          ρ̃²
    10       0.00035       0.00098    0.00092    0.01092    0.01468     0.01036
    31       0.00039       0.00133    0.00059    0.00323    0.01046     0.00982
    100      0.00044       0.00311    0.00069    0.00417    0.01886     0.01558
    316      0.00049       0.00866    0.00079    0.00734    0.05568     0.11863
    1000     0.00076       0.02761    0.00114    0.01684    0.33250     3.03182
    3162     0.00176       0.11807    0.00247    0.04560    3.11779     88.56498
    10000    0.00485       0.68661    0.00731    0.14825    34.68341    2604.97461

Example 7.5. (variable selection) We evaluate the performance of FORD and compare it with FOCI [4] across a variety of settings. Both FORD and FOCI are model-free, do not require tuning parameters, and include built-in stopping rules. We consider the following models with sample size $n \in$
$\{100, 500, 1000\}$, covariates $X = (X_1, \dots, X_p) \sim N(0, I_p)$ where $p = 1000$, and independent noise variable $\varepsilon$:

• LM (linear model): $Y = 3X_1 + 2X_2 - X_3 + \varepsilon$, $\varepsilon \sim N(0,1)$
• Nonlin1 (nonlinear model): $Y = X_1 X_2 + \sin(X_1 X_3)$
• Nonlin2 (non-additive noise): $Y = |X_1 + \varepsilon| \sin(X_2 - X_3)$, $\varepsilon \sim \mathrm{Uniform}[0,1]$
• Osc1 (oscillatory): $Y = \sin(X_1)/\sqrt{|X_1|} + X_2 X_3$
• Osc2 (oscillatory with interaction): $Y = \sin(X_1)/X_2 + X_2 X_3$

For the implementation, we use the R packages FOCI [5] and FORD (https://github.com/PouyaRoudaki/FORD). In all the models considered, the true Markov blanket of $Y$ is $\{X_1, X_2, X_3\}$. Table 2 presents the results over 1000 iterations, summarising the following:

(1) the proportion of times $\{X_1, X_2, X_3\}$ is exactly recovered,
(2) the proportion of times $\{X_1, X_2, X_3\}$ has been selected, possibly along with additional variables,
(3) the average number of falsely selected variables.

The results in Table 2 show that FORD consistently outperforms FOCI across all linear and nonlinear models considered, both in terms of exact recovery and fewer falsely selected variables.

7.2. Real Data Examples.

Example 7.6. (variable selection) In this example, we evaluate the performance of FORD on three real-world datasets from the UCI Machine Learning Repository, comparing it with existing approaches such as FOCI [4] and KFOCI [58] (using the default exponential kernel with median bandwidth and 1-nearest neighbour). For each dataset, we describe the train–test split, explain the variables involved, and provide relevant contextual information.

(1) Superconductivity: The dataset is randomly split into 70% for training and 30% for testing. It comprises 81 features extracted from 21263 superconductors, with the critical temperature as the target variable (last column). The remaining covariates capture various chemical and thermodynamic properties of the superconductors, provided in both raw and weighted forms. The weighted features include the weighted mean, geometric mean, entropy, range, and standard deviation of the corresponding properties.
The primary objective is to predict the critical temperature based on these features. This dataset was introduced and analysed in [49] and is publicly available from the UCI Machine Learning Repository (https://archive.ics.uci.edu/dataset/464/superconductivty+data).

Table 2. Proportion of times the Markov boundary was exactly recovered, the proportion of times it was included in the selected set, and the average number of falsely selected variables across 1000 iterations. For each row, the better-performing method is highlighted in bold.

    Models     n       FORD: exact / inclusion / avg. false    FOCI: exact / inclusion / avg. false
    LM         100     0.030 / 0.303 / 1.609                   0.003 / 0.064 / 2.720
    LM         500     0.526 / 1.000 / 0.474                   0.103 / 0.974 / 0.932
    LM         1000    0.808 / 1.000 / 0.192                   0.253 / 1.000 / 0.748
    Nonlin1    100     0.015 / 0.063 / 3.281                   0.001 / 0.015 / 3.948
    Nonlin1    500     0.228 / 0.479 / 1.517                   0.061 / 0.158 / 2.445
    Nonlin1    1000    0.547 / 0.824 / 0.620                   0.172 / 0.347 / 1.751
    Nonlin2    100     0.000 / 0.002 / 3.205                   0.000 / 0.000 / 3.988
    Nonlin2    500     0.059 / 0.259 / 2.091                   0.004 / 0.073 / 2.826
    Nonlin2    1000    0.245 / 0.520 / 1.388                   0.042 / 0.162 / 2.280
    Osc1       100     0.028 / 0.116 / 3.071                   0.001 / 0.026 / 3.519
    Osc1       500     0.572 / 0.802 / 0.602                   0.243 / 0.382 / 1.319
    Osc1       1000    0.938 / 0.992 / 0.070                   0.574 / 0.752 / 0.569
    Osc2       100     0.004 / 0.026 / 3.004                   0.000 / 0.004 / 4.046
    Osc2       500     0.418 / 0.661 / 1.054                   0.038 / 0.098 / 2.754
    Osc2       1000    0.809 / 0.966 / 0.233                   0.117 / 0.229 / 2.108

(2) Wave Energy Converter: The dataset is randomly split into 70% for training and 30% for testing. It contains the positions and absorbed power outputs of wave energy converters (WECs) operating under real wave conditions off the southern coast of Australia, near Tasmania. The dataset consists of 72000 samples and includes 32 features
representing the positions of the WECs, denoted $X_1, X_2, \dots, X_{16}$ and $Y_1, Y_2, \dots, Y_{16}$, along with 16 features corresponding to the absorbed power outputs, denoted $P_1, P_2, \dots, P_{16}$. The target variable, Powerall, represents the total power output of the WEC farm. The goal is to predict the total power output based on the individual positions and power outputs of the converters. This dataset and its applications were discussed in [73] and are publicly available through the UCI Machine Learning Repository (https://archive.ics.uci.edu/dataset/494/wave+energy+converters).

(3) Lattice Physics: The dataset consists of a training set with 23999 observations and a test set with 359 observations. Each observation corresponds to a distinct fuel enrichment configuration for a NuScale US600 fuel assembly of type C-01 (NFAC-01). The dataset includes 39 features representing U-235 enrichment levels (ranging from 0.7 to 5.0 weight percent) for fuel rods located within a one-eighth symmetric segment of the assembly. The response variable of interest is the infinite multiplication factor (k-inf), calculated using the MCNP6 Monte Carlo simulation code. The objective is to predict k-inf based on the enrichment levels of the fuel rods. This dataset was generated and described in [74] and is publicly available through the UCI Machine Learning Repository (see footnote 5).

Table 3. Performance comparison of FORD, KFOCI, and FOCI on three datasets, using the MSPE of a random forest fitted with the variables selected by each method.

                     Superconductivity        Wave Energy Converter      Lattice Physics
                     Subset size   MSPE       Subset size   MSPE         Subset size   MSPE
    FOCI             8             106.27     31            1.76×10⁹     20            1.53×10⁻⁴
    KFOCI            11            106.53     28            5.18×10⁹     6             1.53×10⁻⁴
    FORD             15            97.92      28            1.75×10⁹     20            1.51×10⁻⁴
    Random Forest    –             92.72      –             2.02×10⁹     –             1.54×10⁻⁴
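The evaluation pipeline behind Table 3 (run a variable selector, then fit a regressor on the selected columns and compare test MSPE) can be sketched as follows. This is a hedged Python illustration, not the paper's R code: a marginal squared-correlation ranking stands in for the conditional FORD/FOCI criteria and their stopping rules, and a 1-nearest-neighbour regressor stands in for the random forest. All names are ours.

```python
import numpy as np

def marginal_rank_select(X, y, k):
    # Rank columns by a marginal dependence score and keep the top k.
    # Squared Pearson correlation is a crude stand-in for the FORD/FOCI
    # criteria, which are conditional and come with built-in stopping rules.
    scores = np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2 for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

def nn1_mspe(Xtr, ytr, Xte, yte):
    # 1-nearest-neighbour regressor: a lightweight stand-in for the random forest.
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=-1)
    return float(((ytr[d.argmin(axis=1)] - yte) ** 2).mean())

# Toy data in the spirit of the LM model of Example 7.5.
rng = np.random.default_rng(0)
n, p = 600, 50
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] + 2 * X[:, 1] - X[:, 2] + rng.normal(size=n)
Xtr, Xte, ytr, yte = X[:400], X[400:], y[:400], y[400:]
sel = marginal_rank_select(Xtr, ytr, 3)
print(sorted(int(j) for j in sel))
print(nn1_mspe(Xtr[:, sel], ytr, Xte[:, sel], yte), nn1_mspe(Xtr, ytr, Xte, yte))
```

Dropping the 47 irrelevant columns before fitting sharply reduces the test MSPE here, which mirrors the advantage of targeted selection reported in Table 3.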
For each dataset, we compared the performance of FORD with two competing methods: FOCI and KFOCI (the latter using the default exponential kernel with median bandwidth and 1-nearest neighbour). Following variable selection via each method's respective stopping rule, the selected subsets were used to train predictive models on the training data using random forests, implemented in the randomForest package in R. Mean squared prediction errors (MSPEs) were then estimated on the test set. Table 3 reports the sizes of the selected subsets along with their corresponding MSPEs. The final row of Table 3 shows the performance of a random forest model trained on the full set of variables. In all cases, FORD achieved prediction accuracy comparable to that of FOCI and KFOCI; only in the Superconductivity dataset did the full model yield a lower MSPE.

Since each of these variable selection methods can return sets of different sizes, we compare the ordered subsets by computing the MSPE of the fitted random forest on the first $k$ selected variables for $k \in \{1, \dots, 15\}$. Figure 6 shows the MSPEs for all these models.

Example 7.7. (yeast gene expression data) In this example, we follow the yeast gene
expression example in [22] and investigate the effectiveness of $\nu_n^{1\text{-dim}}(Y, X)$ in identifying genes with oscillating transcript levels over time. Specifically, we apply it to the curated Spellman dataset available in the R package minerva, which contains gene expression data for 4381 transcripts measured at 23 time points. In this context, $Y$ denotes the transcript level of a gene, while $X$ represents the time of recording.

Footnote 5: https://archive.ics.uci.edu/dataset/1091/lattice-physics+(pwr+fuel+assembly+neutronics+simulation+results)

Figure 6. Comparison of MSPE as a function of the number of selected variables on the Superconductivity dataset, using the variable selection methods FOCI, FORD, and KFOCI, each followed by a random forest trained on the selected variables. The Full RF curve represents a random forest model trained on the top-$k$ variables ($k \in \{1, \dots, 15\}$) ranked by variable importance from a random forest using all features. Dashed and dotted horizontal lines indicate the baseline MSPEs for the initial FORD model and the full random forest model (using all variables), respectively. The results illustrate the advantage of targeted variable selection in reducing model complexity while maintaining or improving predictive performance.

To identify the genes whose transcript levels exhibit oscillatory patterns, we conduct a permutation test on the dependence measures $\nu_n^{1\text{-dim}}$ and $\xi_n$ using 10000 replications. Genes with significantly large values of these dependence measures are identified as having time-dependent expression patterns, as determined by an independence-based permutation test. For both statistics, p-values are computed and the Benjamini–Hochberg procedure [9] is applied to control the false discovery rate (FDR) at the 0.05 level. We refer to the adjusted p-values obtained from the Benjamini–Hochberg procedure as q-values.
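The Benjamini–Hochberg adjustment that turns the permutation p-values into q-values can be sketched in a few lines. This is a minimal Python illustration of the standard procedure (the paper's workflow is in R; the function name is ours):

```python
import numpy as np

def bh_qvalues(pvals):
    # Benjamini-Hochberg adjusted p-values ("q-values"):
    # sort p, scale by m/rank, then enforce monotonicity from the largest p down.
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    q = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(q, 1.0)
    return out

pv = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
print(bh_qvalues(pv))  # only the two smallest q-values fall below 0.05
```

Declaring the hypotheses with q-value at most 0.05 significant controls the FDR at the 0.05 level, which is exactly how the significant genes are selected in this example.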
As a result, out of 4381 genes, 685 are found to be significant using $\nu_n^{1\text{-dim}}$. Among these, 78 genes are uniquely detected by $\nu_n^{1\text{-dim}}$ and not by $\xi_n$. Conversely, $\xi_n$ detects 679 significant genes, of which 72 are not detected by $\nu_n^{1\text{-dim}}$. This slight discrepancy suggests that $\nu_n^{1\text{-dim}}$ may have an edge in identifying certain types of dependence patterns.

Figure 7 illustrates four gene expression patterns exclusively detected by $\nu_n^{1\text{-dim}}$. Specifically, the first row of Figure 7 presents the two genes with

Figure 7. Plots of four genes detected by $\nu_n^{1\text{-dim}}$ but not by $\xi_n$: YDR148C ($q_{\xi_n} = 0.0554$, $q_{\nu_n^{1\text{-dim}}} = 0.0140$), YOR038C ($q_{\xi_n} = 0.0810$, $q_{\nu_n^{1\text{-dim}}} = 0.0178$), YHR216W ($q_{\xi_n} = 0.2106$, $q_{\nu_n^{1\text{-dim}}} = 0.0312$), and YKL144C ($q_{\xi_n} = 0.2251$, $q_{\nu_n^{1\text{-dim}}} = 0.0465$). The first-row figures are selected based on the smallest q-values under $\nu_n^{1\text{-dim}}$, and the second-row figures based on the largest q-values under $\xi_n$. The vertical axis shows the gene expression ranks, and the horizontal axis represents time. Ranks 1 and 23 are marked with red stars. The region between the two horizontal red dot-dashed lines indicates where $w_{\xi_n}$ exceeds $w_{\nu_n^{1\text{-dim}},j}$. A LOESS regression curve (black
dashed line) is overlaid using a smoothing parameter of 0.2.

the smallest q-values under $\nu_n^{1\text{-dim}}$ among those not identified by $\xi_n$, highlighting cases where $\nu_n^{1\text{-dim}}$ shows strong confidence in detection. The second row of Figure 7 displays two genes selected by $\nu_n^{1\text{-dim}}$ but not by $\xi_n$, which exhibit the largest q-values under $\xi_n$. Both figures support the observation that when oscillations occur around mid-range rank values (where $w_{\xi_n} \ge w_{\nu_n^{1\text{-dim}},j}$), $\nu_n^{1\text{-dim}}$ is more effective at capturing dependencies than $\xi_n$.

On the other hand, Figure 8 displays gene expression patterns detected by $\xi_n$ but not by $\nu_n^{1\text{-dim}}$. The first row of Figure 8 presents the two genes with the smallest q-values under $\xi_n$ among those not identified by $\nu_n^{1\text{-dim}}$, highlighting cases where $\xi_n$ showed strong confidence in selection. The second row of Figure 8 shows two genes selected by $\xi_n$ and not by $\nu_n^{1\text{-dim}}$ that have the largest q-values under $\nu_n^{1\text{-dim}}$.

Figure 8. Plots of four genes detected by $\xi_n$ but not by $\nu_n^{1\text{-dim}}$: YKL001C ($q_{\xi_n} = 0.0138$, $q_{\nu_n^{1\text{-dim}}} = 0.0598$), YGL063W ($q_{\xi_n} = 0.0152$, $q_{\nu_n^{1\text{-dim}}} = 0.0718$), YKL056C ($q_{\xi_n} = 0.0423$, $q_{\nu_n^{1\text{-dim}}} = 0.1105$), and YGR069W ($q_{\xi_n} = 0.0430$, $q_{\nu_n^{1\text{-dim}}} = 0.1105$). The first-row figures are selected based on the smallest q-values under $\xi_n$, and the second-row figures based on the largest q-values under $\nu_n^{1\text{-dim}}$. The vertical axis represents gene expression ranks, and the horizontal axis represents time. Ranks 1 and 23 are marked with red stars. The region between the two horizontal red dot-dashed lines indicates where $w_{\xi_n}$ exceeds $w_{\nu_n^{1\text{-dim}},j}$. A LOESS regression curve (black dashed line) is fitted using a smoothing parameter of 0.2.

In conclusion, it seems $\nu_n^{1\text{-dim}}$ excels at detecting smooth, mid-rank oscillatory patterns, whereas $\xi_n$ is more sensitive to sharp transitions at the extremes.
Independence testing using the respective asymptotic distributions, established for $\xi_n$ and conjectured for $\nu_n^{1\text{-dim}}$, further supports the advantage of $\nu_n^{1\text{-dim}}$, which identified 677 genes compared to 586 by $\xi_n$. Among these 586 genes, only 39 were not detected by $\nu_n^{1\text{-dim}}$.

Acknowledgement. We are grateful to Sourav Chatterjee and Rina Foygel Barber for helpful comments. Part of this work was conducted during M.A.'s visit to the Institute for Mathematical and Statistical Innovation (IMSI), which is supported by the National Science Foundation under Grant No. DMS-1929348.

8. Proofs

8.1. Proof of Theorem 2.1.

Proof. Recall that $S$ is the support of $\mu$, and define $\tilde\mu$, the modified version of $\mu$, in the following way: if $S$ attains a maximum $s_{\max}$, let $\tilde S = S \setminus \{s_{\max}\}$; otherwise let $\tilde S = S$. For any measurable set $A \subseteq S$, let $\tilde\mu(A) = \mu(A \cap \tilde S)/\mu(\tilde S)$. In addition, for simplicity of notation, since $\mathrm{Var}(E[\mathbb{1}\{Y > t\} \mid X]) = 0$ whenever $\mathrm{Var}(\mathbb{1}\{Y > t\}) = 0$, we define the ratio $\mathrm{Var}(E[\mathbb{1}\{Y > t\} \mid X])/\mathrm{Var}(\mathbb{1}\{Y > t\})$ to be equal to 1 in that case. Assuming that $Y$ is not almost surely a constant guarantees that
for almost all values of $t$ with respect to $\tilde\mu$, $\mathrm{Var}(\mathbb{1}\{Y > t\})$ is non-zero and hence $\nu(Y, X)$ is well-defined. Note that by the law of total variance and the non-negativity of variance, we have
$$0 \le \mathrm{Var}(E[\mathbb{1}\{Y > t\} \mid X]) \le \mathrm{Var}(\mathbb{1}\{Y > t\}),$$
which gives $\nu(Y, X) \in [0, 1]$.

When $Y$ is independent of $X$, for all $t \in \mathbb{R}$ we have $E[\mathbb{1}\{Y > t\} \mid X] = E[\mathbb{1}\{Y > t\}]$, therefore $\mathrm{Var}(E[\mathbb{1}\{Y > t\} \mid X]) = 0$, which gives $\nu(Y, X) = 0$.

For each $t$ let $G(t) := P(Y > t)$ and $G_X(t) := P(Y > t \mid X)$. Note that $\nu(Y, X) = 0$ implies that there exists a Borel set $A \subseteq \mathbb{R}$ such that $\tilde\mu(A) = 1$ and for any $t \in A$, $\mathrm{Var}(G_X(t)) = 0$. This implies that for $t \in A$, $G_X(t) = G(t)$ almost surely. We claim that we can take $A = \mathbb{R}$.

Take any $t \in \mathbb{R}$. If $\tilde\mu(\{t\}) > 0$, then $t \in A$. So w.l.o.g. assume that $\tilde\mu(\{t\}) = 0$. Note that this also implies $\mu(\{t\}) = 0$, unless $t = s_{\max}$. We also have $\mathrm{Var}(G(s_{\max})) = \mathrm{Var}(G_X(s_{\max})) = 0$, which implies $s_{\max} \in A$. Therefore, for any other such $t$, $\mu(\{t\}) = 0$. This implies that $G$ is right-continuous at $t$.

Suppose for all $s > t$ we have $G(s) < G(t)$. Then for each $s > t$, $\mu([t, s)) > 0$ and hence $A \cap [t, s) \ne \emptyset$. Therefore, there exists a sequence $r_n \in A$ such that $r_n \downarrow t$. Since $r_n \in A$, we have $G_X(r_n) = G(r_n)$ almost surely for all $n$. Therefore, with probability 1 we have
$$G_X(t) \ge \lim_{n\to\infty} G_X(r_n) = \lim_{n\to\infty} G(r_n) = G(t)$$
because of the right-continuity of $G$. Note that $E[G_X(t)] = G(t)$; hence this implies $G_X(t) = G(t)$ almost surely, and therefore $t \in A$.

Suppose there exists $s > t$ such that $G(s) = G(t)$. Take the largest such $s$, which exists because $G$ is left-continuous. If $s = \infty$, then $G(t) = G(s) = 0$. Since $E[G_X(t)] = G(t) = 0$, this implies $G_X(t) = G(t) = 0$ almost surely, which implies $t \in A$. So assume $s < \infty$. Either $\mu(\{s\}) > 0$, which implies $G_X(s) = G(s)$ almost surely, or $\mu(\{s\}) = 0$ and $G(r) < G(s)$ for all $r > s$, which again implies $G_X(s) = G(s)$ almost surely as in the previous paragraph. Therefore, in either case, with probability 1 we have $G_X(t) \ge G_X(s) = G(s) = G(t)$. Since $E[G_X(t)] = G(t)$, this implies $G_X(t) = G(t)$ almost surely. Therefore $t \in A$.
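The variance sandwich used above, $0 \le \mathrm{Var}(E[\mathbb{1}\{Y>t\} \mid X]) \le \mathrm{Var}(\mathbb{1}\{Y>t\})$, with the between-group term collapsing to zero under independence, can be checked numerically when $X$ is discrete, since the conditional expectation is then an exact group average. A small Python sanity check (not part of the proof; names ours):

```python
import numpy as np

def indicator_variances(x, y, t):
    # For discrete x: sample versions of Var(E[1{Y>t} | X]) and Var(1{Y>t}).
    ind = (y > t).astype(float)
    cond = np.array([ind[x == v].mean() for v in x])  # E[1{Y>t} | X] at each point
    return float(cond.var()), float(ind.var())

rng = np.random.default_rng(0)
n = 2000
x = rng.integers(0, 5, n)
y_dep = x + rng.normal(size=n)   # Y depends on X
y_ind = rng.normal(size=n)       # Y independent of X
for t in (-1.0, 0.5, 2.0):
    between, total = indicator_variances(x, y_dep, t)
    assert 0.0 <= between <= total  # law of total variance
print(indicator_variances(x, y_ind, 0.5))  # between-variance near 0 under independence
```

The inequality holds exactly in-sample here because the ANOVA decomposition (total variance = between-group + within-group) is an identity for group means computed on the same sample.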
This shows we can take $A$ to be all of $\mathbb{R}$. Now, for an arbitrary Borel set $B \subseteq \mathbb{R}$,
$$P(\{Y > t\} \cap \{X \in B\}) = E[E[\mathbb{1}\{Y > t\} \mid X]\, \mathbb{1}\{X \in B\}] = E[G_X(t)\, \mathbb{1}\{X \in B\}] = E[G(t)\, \mathbb{1}\{X \in B\}] = G(t)\, P(X \in B) = P(Y > t)\, P(X \in B).$$
This proves that $Y$ and $X$ are independent.

Assume there exists a measurable function $f: \mathbb{R}^p \to \mathbb{R}$ such that $Y = f(X)$. This implies that for all $t \in \mathbb{R}$, $E[\mathbb{1}\{Y > t\} \mid X] = \mathbb{1}\{Y > t\}$ and therefore $\mathrm{Var}(E[\mathbb{1}\{Y > t\} \mid X]) = \mathrm{Var}(\mathbb{1}\{Y > t\})$. This gives $\nu(Y, X) = 1$.

On the other hand, assume $\nu(Y, X) = 1$. This implies that for almost all $t \in \mathbb{R}$ w.r.t. $\tilde\mu$ we have $\mathrm{Var}(G_X(t)) = \mathrm{Var}(\mathbb{1}\{Y > t\})$. If $S$, the support of $\mu$, attains the maximum $s_{\max}$, then note that we also have $\mathrm{Var}(G_X(s_{\max})) = \mathrm{Var}(\mathbb{1}\{Y > s_{\max}\})$. This implies
$$E[\mathrm{Var}(\mathbb{1}\{Y > t\} \mid X)] = E[G_X(t)(1 - G_X(t))] = 0$$
for almost all $t$ with
respect to $\mu$. Therefore, for $\mu$-almost all $t$, $G_X(t)$ almost surely takes only the values 0 and 1. Let $E$ be the ($X$-measurable) event that $G_X(t) \in \{0, 1\}$ for almost all values of $t$, and note that $P(E) = 1$. Let $a_X$ be the largest value such that $G_X(a_X) = 1$ and $b_X$ the smallest value such that $G_X(b_X) = 0$. Note that $a_X \le b_X$. Suppose the event $\{a_X < b_X\} \cap E$ happens. This means that for all $t \in (a_X, b_X)$ we have $G_X(t) \in (0, 1)$, and therefore $\mu((a_X, b_X)) = 0$. Then $P(Y \in (a_X, b_X) \mid X) = 0$, which implies the event $\{a_X < b_X\} \cap E$ is of measure 0, and hence $a_X = b_X$ almost surely. This gives us $Y = a_X$ almost surely, which completes the proof. □

8.2. Proof of Theorem 2.2. For more clarity in the notation of our proof, we rewrite the estimator $\nu_n$ in terms of the empirical cumulative distribution function. Let $\mathcal{I}_i^j := [\min\{Y_i, Y_{N_{-j}(i)}\}, \max\{Y_i, Y_{N_{-j}(i)}\}]$. For each $j \in [n]$ and $t \in \mathbb{R}$, let
$$F_{n,j}(t) := (n-1)^{-1} \sum_{k \ne j} \mathbb{1}\{Y_k \le t\}, \qquad F_n(t) := n^{-1} \sum_{k=1}^{n} \mathbb{1}\{Y_k \le t\}.$$
Note that
$$R_j = n F_n(Y_j), \qquad F_{n,j}(Y_j) = \frac{n}{n-1} F_n(Y_j) - \frac{1}{n-1} = \frac{R_j - 1}{n-1}.$$
Using these, we can rewrite $\nu_n(Y, X)$ as
$$\nu_n(Y, X) = 1 - \frac{1}{2(n-1)(n-n_0)} \sum_{j=1}^{n} \sum_{i \ne j} \frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F_{n,j}(Y_j)\,(1 - F_{n,j}(Y_j))},$$
where $n_0 = n_{\max} + c_{\min}$, with $n_{\max}$ and $c_{\min}$ defined as before: $n_{\max}$ is the number of $Y_j$'s that are equal to the maximum of the $Y_i$'s, and $c_{\min} = 1$ if the minimum of the $Y_j$'s is unique and zero otherwise.

Proof. Let
$$Q_n := \frac{1}{2(n-1)(n-n_0)} \sum_{j=1}^{n} \sum_{i \ne j} \frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F_{n,j}(Y_j)\,(1 - F_{n,j}(Y_j))}, \tag{6}$$
$$Q'_n := \frac{1}{2(n-1)(n-n_0)} \sum_{j=1}^{n} \sum_{i \ne j} \frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F(Y_j)\,(1 - F(Y_j))}, \tag{7}$$
$$Q := \int \frac{E[F_X(t)(1 - F_X(t))]}{F(t)(1 - F(t))}\, d\tilde\mu(t). \tag{8}$$

Lemma 8.1. With $Q_n$ and $Q$ defined in (6) and (8),
$$\lim_{n \to \infty} E[Q_n] = Q.$$

Proof. To prove the convergence of $E[Q_n]$ to $Q$, we divide the argument into two steps: first, we show that $E[|Q_n - Q'_n|]$ converges to zero; second, we show that $E[Q'_n]$ converges to $Q$.

Step I. In this step we show that $E[|Q_n - Q'_n|]$ converges to zero. We have
$$E\Big[\frac{n - n_0}{n}\, |Q_n - Q'_n|\Big] \le \frac{1}{2}\, E\bigg[\frac{E[\mathbb{1}\{Y_j \in \mathcal{I}_i^j\} \mid F_n(Y_j), F(Y_j)]\; |F_{n,j}(Y_j) - F(Y_j)|\; \mathbb{1}\{F_n(Y_j) \ne 1, n^{-1}\}}{\max\{F_{n,j}(Y_j)(1 - F_{n,j}(Y_j)),\, \frac{n-1}{n^2}\}\; F(Y_j)(1 - F(Y_j))}\bigg] \le E\bigg[\frac{|F_{n,j}(Y_j) - F(Y_j)|}{F(Y_j)(1 - F(Y_j))}\bigg].$$
Note that
$$E\bigg[\frac{|F_{n,j}(Y_j) - F(Y_j)|}{F(Y_j)(1 - F(Y_j))}\bigg] = \int_{t \in \mathbb{R}} \frac{E[|F_{n-1}(t) - F(t)|]}{F(t)(1 - F(t))}\, d\mu(t).$$
Using Theorem 1.2 of [7], there exist absolute constants $c_0$ and $c_1$ such that for every $\Delta \ge c_0 \log\log m / m$, with probability at least $1 - \exp(-c_1 \Delta m)$, for every $t$ such that $\Delta \le F(t)(1 - F(t))$ we have
$$|F_m(t) - F(t)| \le \sqrt{F(t)(1 - F(t))\, \Delta}.$$
Let $\delta = (1 - \sqrt{1 - 4\Delta})/2$. Using this and symmetry, we have
$$\int_{t \in \mathbb{R}} \frac{E[|F_{n-1}(t) - F(t)|]}{F(t)(1 - F(t))}\, d\mu(t) \le 2 \int_{\delta \le F(t) \le 0.5} \frac{E[|F_{n-1}(t) - F(t)|]}{F(t)(1 - F(t))}\, d\mu(t) + 2 \int_{F(t) < \delta} \frac{E[|F_{n-1}(t) - F(t)|]}{F(t)(1 - F(t))}\, d\mu(t)$$
$$\le 2 \int_{\delta \le F(t) \le 0.5} \frac{\sqrt{\Delta\, F(t)(1 - F(t))}\,\big(1 - 2\exp(-c_1(n-1)\Delta)\big)}{F(t)(1 - F(t))}\, d\mu(t) + 2 \int_{\delta \le F(t) \le 0.5} \frac{2\exp(-c_1(n-1)\Delta)}{F(t)(1 - F(t))}\, d\mu(t) + 2 \int_{F(t) < \delta} \frac{E[F_{n-1}(t)] + F(t)}{F(t)(1 - F(t))}\, d\mu(t)$$
$$\le \pi\sqrt{\Delta} + 4\exp(-c_1(n-1)\Delta)\, \log\frac{1 + \sqrt{1 - 4\Delta}}{1 - \sqrt{1 - 4\Delta}}. \tag{9}$$
Now let $\Delta = c_1^{-1} \log(n)/(n-1)$. Then, as $n$ goes to infinity, (9) goes to zero. Hence $E[\frac{n-n_0}{n}|Q_n - Q'_n|]$ converges to zero. Since $Y$ is not almost surely a constant, as $n$ grows to $\infty$, $(n - n_0)/n$ converges to the constant $\mu(\tilde S) > 0$. For large enough $n$ we have $(n - n_0)/n > \mu(\tilde S)/2$, and therefore, for large enough $n$,
$$E[|Q_n - Q'_n|] \le \frac{2}{\mu(\tilde S)}\, E\Big[\frac{n - n_0}{n}\, |Q_n - Q'_n|\Big].$$
Since the right-hand side of the above inequality converges to zero, we conclude that $\lim_{n\to\infty} E[|Q_n - Q'_n|] = 0$.

Step II. In this step we show that $E[Q'_n]$ converges to $Q$. We have
$$E\Big[\frac{n - n_0}{n}\, Q'_n\Big] = \frac{1}{2}\, E\bigg[\frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F(Y_j)(1 - F(Y_j))}\bigg].$$
First, let us study the case when $\mu$ is continuous. In this case, by conditioning on the value of $F_n(Y_j)$, we have
$$E\bigg[\frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F(Y_j)(1 - F(Y_j))}\bigg] = \frac{1}{n} \sum_{r=1}^{n} E\bigg[\frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F(Y_j)(1 - F(Y_j))}\,\bigg|\, F_n(Y_j) = r/n\bigg]$$
$$= \frac{1}{n} \sum_{r=2}^{n-1} E\bigg[\frac{E[\mathbb{1}\{Y_j \in \mathcal{I}_i^j\} \mid Y_j]}{F(Y_j)(1 - F(Y_j))}\,\bigg|\, F_n(Y_j) = r/n\bigg] \le \frac{1}{n} \sum_{r=2}^{n-1} \frac{(r-1)(n-r)}{(n-1)(n-2)}\, E\bigg[\frac{1}{F(Y_j)(1 - F(Y_j))}\,\bigg|\, F_n(Y_j) = r/n\bigg].$$
Given $F_n(Y_j) = r/n$, $F(Y_j) \sim \mathrm{Beta}(r, n - r + 1)$; therefore this gives us
$$E\bigg[\frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F(Y_j)(1 - F(Y_j))}\bigg] \le 2,$$
which means $\frac{n - n_0}{n} Q'_n$ is uniformly integrable. If $\mu$ does not have a continuous density, showing uniform integrability of $\frac{n - n_0}{n} Q'_n$ requires extra work. We divide the argument into the following four cases:

(i) the support of $\mu$ attains a minimum $s_{\min}$ and a maximum $s_{\max}$, on both of which $\mu$ has point masses;
(ii) the support of $\mu$ attains a maximum $s_{\max}$ on which $\mu$ has a point mass, but the support either does not attain a minimum or does not have a point mass on its minimum;
(iii) the support of $\mu$ attains a minimum $s_{\min}$ on which $\mu$ has a point mass, but the support either does not attain a maximum or does not have a point mass on its maximum;
(iv) the support of $\mu$ does not have point masses on its minimum or maximum (whether or not they are attained).

Case (i). There exists $\delta > 0$ such that $\mu(\{s_{\max}\}), \mu(\{s_{\min}\}) \ge \delta$. Then
$$E\bigg[\frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F(Y_j)(1 - F(Y_j))}\bigg] = \int_{S \setminus \{s_{\max}\}} \frac{E[\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\} \mid Y_j = t]}{F(t)(1 - F(t))}\, d\mu(t) \le \frac{1 + \delta}{\delta(1 - \delta)}.$$

Case (ii). There exists $\delta > 0$ such that $\mu(\{s_{\max}\}) \ge \delta$, and $\mu(\{s_{\min}\}) = 0$ or $S$ does not have a minimum. Then
$$E\bigg[\frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F(Y_j)(1 - F(Y_j))}\bigg] \le \int_{F(t) < (n-1)^{-1}} \frac{E[\mathbb{1}\{F_n(Y_j) \ne 1, 1/n\} \mid Y_j = t]}{F(t)(1 - F(t))}\, d\mu(t) + \int_{(n-1)^{-1} \le F(t) < 1 - \delta} \frac{E[\mathbb{1}\{Y_j \in \mathcal{I}_i^j\} \mid Y_j = t]}{F(t)(1 - F(t))}\, d\mu(t)$$
$$\le \int_0^{(n-1)^{-1}} \frac{1 - (1-x)^{n-1}}{x(1-x)}\, dx + \int_{(n-1)^{-1}}^{1-\delta} \frac{2x(1-x)}{x(1-x)}\, dx.$$
For large $n$ we have
$$\int_0^{(n-1)^{-1}} \frac{1 - (1-x)^{n-1}}{x(1-x)}\, dx \lesssim \int_0^{(n-1)^{-1}} \frac{(n-1)x}{x(1-x)}\, dx \le \frac{n-1}{n-2} \le 2.$$
Therefore
$$E\bigg[\frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F(Y_j)(1 - F(Y_j))}\bigg] \le 4.$$

Case (iii).
There exists $\delta > 0$ such that $\mu(\{s_{\min}\}) \ge \delta$, and $\mu(\{s_{\max}\}) = 0$ or $S$ does not have a maximum. Note that by symmetry, this is equivalent to the previous case.

Case (iv). $\mu$ is not continuous but does not have point masses at its minimum or maximum. Note that this is similar to case (ii):
$$E\bigg[\frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}\, \mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}}{F(Y_j)(1 - F(Y_j))}\bigg] \le \int_{\min\{F(t), 1-F(t)\} \le (n-1)^{-1}} \frac{E[\mathbb{1}\{F_n(Y_j) \ne 1, 1/n\} \mid Y_j = t]}{F(t)(1 - F(t))}\, d\mu(t) + \int_{(n-1)^{-1} < F(t) < 1 - (n-1)^{-1}} \frac{E[\mathbb{1}\{Y_j \in \mathcal{I}_i^j\} \mid Y_j = t]}{F(t)(1 - F(t))}\, d\mu(t) \le 6.$$
Therefore $\frac{n - n_0}{n} Q'_n$ is uniformly integrable.

Note that by Lemma 11.3 in [6], $X_{N_{-j}(i)} \to X_i$ with probability one. Then, using Lemma 11.7 in [6], with probability one we have
$$E[\mathbb{1}\{Y_j \in \mathcal{I}_i^j\} \mid Y_j, X_i, X_{N_{-j}(i)}] - E[\mathbb{1}\{Y_j \in \mathcal{I}'_i\} \mid Y_j, X_i] \to 0,$$
where $\mathcal{I}'_i = [\min\{Y_i, Y'_i\}, \max\{Y_i, Y'_i\}]$, in which $Y_i$ and $Y'_i$ are i.i.d. given $X_i$. Also
$$E[\mathbb{1}\{Y_j \in \mathcal{I}'_i\} \mid Y_j] = E[E[\mathbb{1}\{Y_j \in \mathcal{I}'_i\} \mid Y_j, X_i] \mid Y_j] \to 2\, E[F_{X_i}(Y_j)(1 - F_{X_i}(Y_j)) \mid Y_j].$$
Since $\mathbb{1}\{F_n(Y_j) \ne 1, 1/n\}$ converges almost surely to $\mathbb{1}\{Y_j \in \tilde S\}$, by the dominated convergence theorem we have
$$E\Big[\frac{n - n_0}{n}\, Q'_n\Big] \to \int_{\tilde S} \frac{E[F_X(t)(1 - F_X(t))]}{F(t)(1 - F(t))}\, d\mu(t).$$
Considering that $1 - n_0/n$ converges almost surely to $\mu(\tilde S)$, which is bounded away from zero, $(1 - \frac{n_0}{n})^{-1} - \mu(\tilde S)^{-1}$ converges almost surely to zero. Finally, the uniform integrability of $\frac{n - n_0}{n} Q'_n$ gives us
$$E[Q'_n] = E\bigg[\Big(\frac{n}{n - n_0} - \frac{1}{\mu(\tilde S)}\Big)\, \frac{n - n_0}{n}\, Q'_n\bigg] + \frac{1}{\mu(\tilde S)}\, E\Big[\frac{n - n_0}{n}\, Q'_n\Big].$$
The first term on the right-hand side of the
above equality converges to zero by the Vitali convergence theorem. Therefore
$$\lim_{n\to\infty} E[Q'_n] = \frac{1}{\mu(\tilde S)} \int_{\tilde S} \frac{E[F_X(t)(1 - F_X(t))]}{F(t)(1 - F(t))}\, d\mu(t) = Q.$$
Putting Steps I and II together gives us $\lim_{n\to\infty} E[Q_n] = Q$. □

Lemma 8.2. For $Q_n$ defined in (6), there are constants $C_1$ and $C_2$ such that
$$P(|Q_n - E[Q_n]| \ge t) \le C_1 e^{-C_2 n t^2/\log^2 n}.$$

Proof. We apply the bounded difference inequality [70] to establish concentration. To do so, we first derive an upper bound on the maximum change in $Q_n$ resulting from replacing a single observation $(X_k, Y_k)$ with an alternative value $(X'_k, Y'_k)$ for any $k \in [n]$. We decompose this change into two steps: first, replacing $(Y_k, X_k)$ with $(Y'_k, X_k)$, and second, replacing $(Y'_k, X_k)$ with $(Y'_k, X'_k)$.

Take an arbitrary $k \in [n]$. Let $Q_n^{kY}$ be defined similarly to $Q_n$ but using the sample $\{(Y_i, X_i)\}_{i \ne k} \cup \{(Y'_k, X_k)\}$. We show that $|Q_n - Q_n^{kY}| \le C \log n / n$ for some constant $C$ that depends only on the dimension of $X$. First, observe that since $X_k$ remains unchanged, the nearest-neighbour indices are unaffected. We analyse the effect of modifying $Y_k$ under two distinct scenarios: (i) neither $Y_k$ nor $Y'_k$ is the minimum or maximum among $\{Y_i\}_{i \ne k}$; (ii) at least one of $Y_k$ or $Y'_k$ is the minimum or maximum relative to $\{Y_i\}_{i \ne k}$.

Case (i). Neither $Y_k$ nor $Y'_k$ attains the minimum or maximum value. Note that in this case, for all indices $j \in [n]$, the indicator $\mathbb{1}\{n^{-1} < F_n(Y_j) < 1\}$ remains unchanged, as replacing $Y_k$ with $Y'_k$ does not alter the minimum or maximum of the $\{Y_i\}$. Consequently, $n_0$ also remains unchanged. Without loss of generality, we assume $Y_k < Y'_k$. Then we have
$$2(n-1)(n-n_0)\, Q_n = \sum_{\substack{j:\, Y_j < Y_k \text{ or } Y_j > Y'_k \\ n^{-1} < F_n(Y_j) < 1}}\; \sum_{\substack{i \ne j \\ i \ne k,\; N_{-j}(i) \ne k}} \frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}}{F_{n,j}(Y_j)(1 - F_{n,j}(Y_j))} + \sum_{\substack{j:\, Y_j < Y_k \text{ or } Y_j > Y'_k \\ n^{-1} < F_n(Y_j) < 1}}\; \sum_{\substack{i \ne j \\ i = k \text{ or } N_{-j}(i) = k}} \frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}}{F_{n,j}(Y_j)(1 - F_{n,j}(Y_j))}$$
$$+ \sum_{\substack{j:\, Y_k \le Y_j \le Y'_k \\ n^{-1} < F_n(Y_j) < 1}}\; \sum_{\substack{i \ne j \\ i \ne k,\; N_{-j}(i) \ne k}} \frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}}{F_{n,j}(Y_j)(1 - F_{n,j}(Y_j))} + \sum_{\substack{j:\, Y_k \le Y_j \le Y'_k \\ n^{-1} < F_n(Y_j) < 1}}\; \sum_{\substack{i \ne j \\ i = k \text{ or } N_{-j}(i) = k}} \frac{\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}}{F_{n,j}(Y_j)(1 - F_{n,j}(Y_j))} + \sum_{i \ne k} \frac{\mathbb{1}\{Y_k \in \mathcal{I}_i^k\}}{F_{n,k}(Y_k)(1 - F_{n,k}(Y_k))}$$
$$=: A_1 + A_2 + A_3 + A_4 + A_5.$$
We denote the corresponding terms involving $Y'_k$ by $A_i^{kY}$ for $i = 1, \dots, 5$. Observe that for all $j$ such that $Y_j < Y_k$ or $Y_j > Y'_k$, the empirical distribution values remain unchanged, i.e., $F_{n,j}(Y_j) = F^k_{n,j}(Y_j)$, where $F^k_{n,j}(Y_j)$ denotes the empirical distribution after replacing $Y_k$ with $Y'_k$. Consequently, in the terms $A_1$ and $A_2$, all denominators remain unchanged after the modification. In contrast, for indices $j$ such that $Y_k \le Y_j \le Y'_k$, the value of $F_{n,j}(Y_j)$ changes by exactly $(n-1)^{-1}$.

We first focus on $A_1$ and $A_2$. Since changing $Y_k$ to $Y'_k$ does not affect the denominators, it suffices to analyse the numerator term $\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}$. In the case of $A_1$, the intervals $\mathcal{I}_i^j$ remain unchanged under the replacement of $Y_k$ with $Y'_k$, so $A_1$ is unaffected, i.e., $A_1 = A_1^{kY}$. For $A_2$, consider first the case where $Y_j < Y_k < Y'_k$. For any $i$ such that $N_{-j}(i) = k$, the indicator $\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}$ remains unchanged when $Y_k$ is replaced by $Y'_k$. A similar argument holds when $Y_k < Y'_k < Y_j$. Finally, consider the case where $i = k$. Even in this situation, the indicator $\mathbb{1}\{Y_j \in \mathcal{I}_k^j\}$ remains unchanged under the modification of $Y_k$, and thus $A_2 = A_2^{kY}$.

Now consider $A_3$. Note that all indicator terms $\mathbb{1}\{Y_j \in \mathcal{I}_i^j\}$ remain unchanged when $Y_k$ is replaced by $Y'_k$. Therefore, it suffices to bound the
difference
$$\bigg|\frac{1}{F_{n,j}(Y_j)(1 - F_{n,j}(Y_j))} - \frac{1}{F^k_{n,j}(Y_j)(1 - F^k_{n,j}(Y_j))}\bigg|$$
over those indices $i$ such that $\mathbb{1}\{Y_j \in \mathcal{I}_i^j\} = 1$.

We first consider the case where there are no ties among the $Y_i$'s. In this setting, for each $j$, Lemma 11.4 in [4] implies that there are at most $nC(p)\min\{F_n(Y_j) - n^{-1},\, 1 - F_n(Y_j)\}$ indices $i$ for which $Y_j \in \mathcal{I}_i^j$. This gives us
$$|A_3 - A_3^{kY}| \le \sum_{\substack{j:\, Y_k \le Y_j \le Y'_k \\ n^{-1} < F_n(Y_j) < 1}} nC(p)\min\Big\{F_n(Y_j) - \frac{1}{n},\, 1 - F_n(Y_j)\Big\} \bigg|\frac{1}{F_{n,j}(Y_j)(1 - F_{n,j}(Y_j))} - \frac{1}{F^k_{n,j}(Y_j)(1 - F^k_{n,j}(Y_j))}\bigg|$$
$$= nC(p) \sum_{j=3}^{n-1} \min\Big\{\frac{j-1}{n},\, 1 - \frac{j}{n}\Big\} \bigg|\frac{1}{\frac{j-1}{n-1}\big(1 - \frac{j-1}{n-1}\big)} - \frac{1}{\frac{j-2}{n-1}\big(1 - \frac{j-2}{n-1}\big)}\bigg| \le 2nC(p)\bigg(\sum_{j=1}^{n/2-1}\Big(\frac{1}{j} + \frac{1}{n-j}\Big) - n \sum_{i=n/2}^{n-2} \frac{1}{i(i+1)}\bigg) = O(n \log n).$$
The case where ties exist among the $Y_i$'s is similar but requires additional care. Let $r_1 < \cdots < r_m$ denote the ordered sequence of distinct values taken by the empirical ranks of the $Y_j$, $j \in [n]$. Define $\ell^*$ as the smallest index $i \in [m]$ such that every $j$ satisfying $Y_k \le Y_j \le Y'_k$ has $F_n(Y_j) \le r_i$, and define $\ell_*$ as the largest index $i \in [m]$ such that every such $j$ has $F_n(Y_j) \ge r_i$. Then
$$|A_3 - A_3^{kY}| \le C(p)(n-1)^2 \sum_{i=\ell_*}^{\ell^*} (r_i - r_{i-1})\, \min\{r_i - 1,\; n - r_i\}\, \bigg|\frac{1}{(r_i - 1)(n - r_i)} - \frac{1}{(r_i - 2)(n - r_i + 1)}\bigg|.$$
For all indices $i$ such that $r_i \le n/2$, replacing the corresponding $Y_j$ values with distinct (tie-free) values can only increase the difference $|A_3 - A_3^{kY}|$. Therefore, it suffices to bound this difference in the case where $r_{\ell_*} \ge n/2$, since for all $r_i \le n/2$ we can use the tie-free bound above. In this case, we have
$$|A_3 - A_3^{kY}| \le C(p) n^2 \sum_{i=\ell_*}^{\ell^*} (r_i - r_{i-1})\, \frac{|2r_i - n - 2|}{(r_i - 1)(r_i - 2)(n - r_i + 1)}.$$
Define
$$g(r) = \frac{|2r - n - 2|}{(r-1)(r-2)(n-r+1)}, \qquad r \in \{1, \dots, n-1\}.$$
For $r \ge n/2$, the function $g$ is U-shaped and attains its minimum at $\lceil n/2 + 1 \rceil$. Hence, for every index $i$ with $r_i \ge \lceil n/2 + 1 \rceil$ we bound $g(r_i)$ above by $g(r_{\ell^*})$. Because $r_{\ell^*} < n$, we have $n - r_{\ell^*} + 1 = O(n)$, which implies $|A_3 - A_3^{kY}| = O(n)$. Combining this bound with those obtained in the remaining cases yields the overall estimate $|A_3 - A_3^{kY}| = O(n \log n)$.
For $A_4$, observe that for any fixed $j$ there are at most $C(p)$ indices $i$ such that either $i=k$ or $N_{-j}(i)=k$. Consequently,
\[
|A_4-A^{kY}_4| \le \sum_{\substack{j:\,Y_k\le Y_j\le Y'_k\\ n^{-1}<F_n(Y_j)<1}} C(p)\Bigl|\frac{1}{F_{n,j}(Y_j)(1-F_{n,j}(Y_j))}-\frac{1}{F^{kY}_{n,j}(Y_j)(1-F^{kY}_{n,j}(Y_j))}\Bigr|.
\]
We first examine the case in which the $Y_j$ are all distinct. Then
\[
|A_4-A^{kY}_4| \le 2C(p)n^2\sum_{j=3}^{n/2}\Bigl|\frac{1}{(j-1)(n-j)}-\frac{1}{(j-2)(n-j+1)}\Bigr|=O(n).
\]
When ties are present among the $Y_j$ values, we have
\[
|A_4-A^{kY}_4| \le C(p)n^2\sum_{i=\ell_*}^{\ell^*}(r_i-r_{i-1})\Bigl|\frac{1}{(r_i-1)(n-r_i)}-\frac{1}{(r_i-2)(n-r_i+1)}\Bigr|=O(n).
\]
Finally, observe that $|A_5-A^{kY}_5|\le A_5+A^{kY}_5$. We therefore bound $A_5$ only, as the same argument applies verbatim to $A^{kY}_5$. Since there are at most $nC(p)\min\{F_n(Y_k),1-F_n(Y_k)\}$ indices $i$ for which $\mathbf{1}\{Y_k\in I^k_i\}=1$, we have
\[
A_5 \le \frac{nC(p)\min\{F_n(Y_k),\,1-F_n(Y_k)\}}{F_{n,k}(Y_k)\bigl(1-F_{n,k}(Y_k)\bigr)}=O(n).
\]
Consequently, provided that replacing $Y_k$ with $Y'_k$ leaves the sample minimum and maximum unchanged, we obtain $|Q_n-Q^{kY}_n|=O\bigl(\tfrac{\log n}{n}\bigr)$.

Case (ii): $Y_k$ or $Y'_k$ is a minimum or maximum. Without loss of generality, assume $Y_k<Y'_k$. Then one of the following scenarios arises:

(a) Replacing $Y_k$ with $Y'_k$ leaves both the sample minima and maxima unchanged: $Y_k<Y'_k\le Y_j$ for all $j\ne k$. Consequently, $Q_n=Q^{kY}_n$.

(b) Replacing $Y_k$ with $Y'_k$ alters the set of minima but leaves the set of maxima unchanged: we have $Y_k\le Y_j$ for every $j\ne k$, and there exists at least one index $j$ with $Y_j\ge Y'_k$. If, for some such $j$, we have $n^{-1}<F_n(Y_j)<1$ before the change and $F^{kY}_n(Y_j)=n^{-1}$ afterwards, then the contribution of that $j$ to $|Q_n-Q^{kY}_n|$ is bounded by
\[
\frac{2C(p)}{(n-2)(n-n_0-1)}=O(n^{-1}).
\]
For every other index $j$, the argument from case (i) applies.

(c) Replacing $Y_k$ with $Y'_k$ changes both the sample minima and maxima: indeed, $Y_k\le Y_j\le Y'_k$ for every $j\ne k$. Assume there exist indices $j_1$ and $j_2$ such that
\[
n^{-1}<F_n(Y_{j_1})<1,\quad F^{kY}_n(Y_{j_1})=n^{-1},\qquad F_n(Y_{j_2})=1,\quad n^{-1}<F^{kY}_n(Y_{j_2})<1.
\]
The combined contribution of these two indices to $|Q_n-Q^{kY}_n|$ is bounded by
\[
\frac{C(p)}{(n-2)(n-n_0-1)}=O(n^{-1}).
\]
For all remaining indices $j$, the reasoning from case (i) applies unchanged.

(d) Replacing $Y_k$ with $Y'_k$ leaves the set of sample minima unchanged but alters the set of sample maxima: we have $Y_j\le Y'_k$ for every $j\ne k$, and there exists at least one index $j$ with $Y_j\le Y_k$. If there is an index $j$ for which $F_n(Y_j)=1$ and $n^{-1}<F^{kY}_n(Y_j)<1$, the contribution of that $j$ to $|Q_n-Q^{kY}_n|$ is bounded by
\[
\frac{2C(p)}{(n-2)(n-n_0-1)}=O(n^{-1}).
\]
For every other index $j$, the argument from case (i) applies.

Combining cases (i) and (ii)(a)-(d), we obtain
\[
|Q_n-Q^{kY}_n|\le C(p)\,\frac{\log n}{n}
\]
whenever $(Y_k,\mathbf{X}_k)$ is replaced by $(Y'_k,\mathbf{X}_k)$.

We now analyse the change induced when replacing $(Y'_k,\mathbf{X}_k)$ with $(Y'_k,\mathbf{X}'_k)$. Because the $Y_i$ values remain unchanged, both the denominators $F_{n,j}(Y_j)\bigl(1-F_{n,j}(Y_j)\bigr)$ and the index set $\{j:\,n^{-1}<F_n(Y_j)<1\}$ are unaffected. For notational convenience, therefore, we study the effect of changing $(Y_k,\mathbf{X}_k)$ to $(Y_k,\mathbf{X}'_k)$. Let $Q^{kX}_n$ denote the analogue of $Q_n$ computed from the sample in which $\mathbf{X}_k$ is replaced by $\mathbf{X}'_k$.

For each fixed $j$, modifying $\mathbf{X}_k$ can alter at most $C(p)$ of the intervals $I^j_i$. Among those indices $i$ whose intervals change, only those for which $\mathbf{1}\{Y_j\in I^j_i\}$ flips value matter, namely the indices where $Y_j\in I^j_i$ under $\mathbf{X}_k$ but $Y_j\notin I^j_i$ under $\mathbf{X}'_k$, or vice versa. Finally, if $Y_j$ has rank $r_i$, then at most $\min\{r_i-1,\,n-r_i\}$ of the indicators $\mathbf{1}\{Y_j\in I^j_i\}$ equal 1 under either $\mathbf{X}_k$ or $\mathbf{X}'_k$. Therefore
\[
|Q_n-Q^{kX}_n| \le \frac{n-1}{n-n_0}\sum_{i=1}^{m}(r_i-r_{i-1})\frac{\min\{C(p),\,r_i-1,\,n-r_i\}}{(r_i-1)(n-r_i)}
\le C(p)\,\frac{n-1}{n-n_0}\sum_{i=1}^{m}\frac{r_i-r_{i-1}}{(r_i-1)(n-r_i)}
\le C(p)\,\frac{\log n}{n}.
\]
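The display above is precisely a bounded-difference statement: replacing one sample point moves the statistic by at most $C(p)\log n/n$, which is the hypothesis that McDiarmid's inequality needs. The sketch below illustrates this phenomenon empirically on a simpler rank-based statistic, Chatterjee's $\xi_n$ from [22]; it is a stand-in chosen for brevity, since $Q_n$ itself requires the nearest-neighbour construction.

```python
# Empirically measure how much a rank-based dependence statistic moves when a
# single sample point is replaced: the bounded-difference property needed by
# McDiarmid's inequality.  Chatterjee's xi_n [22] is used as a simple stand-in
# for Q_n, which would require the nearest-neighbour construction.
import random

def xi_n(xs, ys):
    """Chatterjee's rank correlation for samples without ties."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])      # indices in x-order
    ry = [0] * n                                       # ranks of the y's
    for pos, i in enumerate(sorted(range(n), key=lambda i: ys[i])):
        ry[i] = pos + 1
    num = sum(abs(ry[order[t + 1]] - ry[order[t]]) for t in range(n - 1))
    return 1 - 3 * num / (n * n - 1)

random.seed(0)
for n in (200, 800, 3200):
    xs = [random.random() for _ in range(n)]
    ys = [x + 0.5 * random.random() for x in xs]
    base = xi_n(xs, ys)
    deltas = []
    for _ in range(20):                      # replace one sample point
        k = random.randrange(n)
        xs2, ys2 = xs[:], ys[:]
        xs2[k], ys2[k] = random.random(), random.random()
        deltas.append(abs(xi_n(xs2, ys2) - base))
    print(n, max(deltas))                    # shrinks roughly like 1/n
```

The maximal observed change shrinks as $n$ grows, in line with a logarithm-over-$n$ type bounded-difference behaviour.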
Combining the bounds for $|Q_n-Q^{kY}_n|$ and $|Q_n-Q^{kX}_n|$, we obtain that replacing $(Y_k,\mathbf{X}_k)$ with $(Y'_k,\mathbf{X}'_k)$ yields
\[
|Q_n-Q^{k}_n|\le C(p)\,\frac{\log n}{n}.
\]
Applying McDiarmid's bounded-difference inequality [70] gives
\[
P\bigl(|Q_n-E[Q_n]|\ge t\bigr)\le 2\exp\bigl(-Cnt^2/\log^2 n\bigr). \qquad\square
\]

Using Lemma 8.2, set $t_n=\sqrt{2}(\log n)^{3/2}/\sqrt{Cn}$. Then note that
\[
\sum_{n=1}^{\infty}P\bigl(|Q_n-E[Q_n]|\ge t_n\bigr)\le \sum_{n=1}^{\infty}\frac{2}{n^2}<\infty.
\]
By the Borel-Cantelli lemma, it follows that $|Q_n-E[Q_n]|$ converges to zero almost surely. This, combined with Lemma 8.1, establishes the almost sure convergence of $Q_n$ to $Q$. $\square$

8.3. Proof of Theorem 4.1. Throughout this section, we will assume that the assumptions (A1) and (A2) from Section 4 hold. In the following, we restate Lemma 14.1 of [4] and its proof for convenience. Let $\mathbf{X}_{n,1}$ be the nearest neighbor of $\mathbf{X}_1$ among $\mathbf{X}_2,\dots,\mathbf{X}_n$ (with ties broken at random).

Lemma 8.3. Under assumption (A2), there is some $C$ depending only on $K$ and $p$ such that
\[
E\bigl(\|\mathbf{X}_1-\mathbf{X}_{n,1}\|\bigr)\le
\begin{cases}
Cn^{-1}(\log n)^2 & \text{if } p=1,\\
Cn^{-1/p}\log n & \text{if } p\ge 2.
\end{cases}
\]

Proof. Throughout this proof, $C$ will denote any constant that depends only on $K$ and $p$. Take $\varepsilon\in(n^{-1/p},1)$. Let $B$ be the ball of radius $K$ in $\mathbb{R}^p$ centred at the origin. Partition $B$ into at most $CK^p\varepsilon^{-p}$ small sets of diameter $\le\varepsilon$. Let $E$ be the small set containing $\mathbf{X}_1$. Then
\[
P\bigl(\|\mathbf{X}_1-\mathbf{X}_{n,1}\|\ge\varepsilon\bigr)=P(\mathbf{X}_2\notin E,\dots,\mathbf{X}_n\notin E).
\]
Now note that
\[
P(\mathbf{X}_2\notin E,\dots,\mathbf{X}_n\notin E\mid \mathbf{X}_1)=\bigl(1-P(\mathbf{X}_2\in E\mid \mathbf{X}_1)\bigr)^{n-1}=(1-\lambda(E))^{n-1},
\]
where $\lambda$ is the law of $\mathbf{X}$. Let $A$ be the collection of all small sets with $\lambda$-mass less than $\delta$. Since there are at most $CK^p\varepsilon^{-p}$ small sets, we get
\[
E\bigl[(1-\lambda(E))^{n-1}\bigr]\le(1-\delta)^{n-1}+P(\mathbf{X}_1\in A)\le(1-\delta)^{n-1}+CK^p\varepsilon^{-p}\delta.
\]
This gives
\[
P\bigl(\|\mathbf{X}_1-\mathbf{X}_{n,1}\|\ge\varepsilon\bigr)\le(1-\delta)^{n-1}+CK^p\varepsilon^{-p}\delta.
\]
Now choosing $\delta=n^{-1}\log n$, we get
\[
P\bigl(\|\mathbf{X}_1-\mathbf{X}_{n,1}\|\ge\varepsilon\bigr)\le\frac1n+\frac{CK^p\log n}{n\varepsilon^p}.
\]
Thus,
\[
E\bigl(\|\mathbf{X}_1-\mathbf{X}_{n,1}\|\bigr)\le n^{-1/p}+\int_{n^{-1/p}}^{2K}P\bigl(\|\mathbf{X}_1-\mathbf{X}_{n,1}\|\ge\varepsilon\bigr)\,d\varepsilon
\le n^{-1/p}+\frac{CK^p\log n}{n}\int_{n^{-1/p}}^{2K}\varepsilon^{-p}\,d\varepsilon.
\]
Finally, the last term is bounded by $CKn^{-1}(\log n)^2$ when $p=1$, and by $CK^p n^{-1/p}\log n$ when $p\ge2$. $\square$

Lemma 8.4. Let $C$ and $\beta$ be as in assumption (A1) and $K$ be as in assumption (A2). Then there are $K_1$, $K_2$ and $K_3$ depending only on $C$, $\beta$, $K$ and $p$ such that for any $t\ge0$,
\[
P\Bigl(|\nu_n-\nu|\ge K_1 n^{-1/(p\vee2)}(\log n)^{\mathbf{1}\{p=1\}+1}+t\Bigr)\le K_2 e^{-K_3 nt^2/\log n}.
\]

Proof. Recall $Q'_n$ defined in (7). Let $\mathcal{F}_X$ be the $\sigma$-algebra generated by $\mathbf{X}_1,\dots,\mathbf{X}_n$. Since $F_n(Y_j)=1/n$ implies $\mathbf{1}\{Y_j\in I^j_i\}=0$, we have
\[
\begin{aligned}
E\Bigl[\frac{n-n_0}{n}Q'_n\Bigr]
&=\frac12\,E\Bigl[\frac{\mathbf{1}\{Y_j\in I^j_i\}\,\mathbf{1}\{n^{-1}<F_n(Y_j)<1\}}{F(Y_j)(1-F(Y_j))}\Bigr]\\
&=\frac12\,E\Bigl[E\Bigl[\frac{\mathbf{1}\{Y_j\in I^j_i\}\,\mathbf{1}\{n^{-1}<F_n(Y_j)<1\}}{F(Y_j)(1-F(Y_j))}\,\Big|\,Y_j,\mathcal{F}_X\Bigr]\Bigr]\\
&=\frac12\,E\Bigl[\frac{E\bigl[\mathbf{1}\{Y_j\in I^j_i\}\,\mathbf{1}\{F_n(Y_j)<1\}\mid Y_j,\mathcal{F}_X\bigr]}{F(Y_j)(1-F(Y_j))}\Bigr].
\end{aligned}
\]
In addition, note that
\[
Q=\frac{1}{2\mu(\tilde S)}\,E\Bigl[\frac{E\bigl[\mathbf{1}\{Y_j\in I'_i\}\,\mathbf{1}\{F_n(Y_j)<1\}\mid Y_j,\mathcal{F}_X\bigr]}{F(Y_j)(1-F(Y_j))}\Bigr],
\]
where $I'_i=[\min\{Y_i,Y'_i\},\max\{Y_i,Y'_i\}]$ such that $Y_i$ and $Y'_i$ are i.i.d. given $\mathbf{X}_i$. Note that
\[
\begin{aligned}
E\bigl[\mathbf{1}\{Y_j\in I'_i\}\mid Y_j,\mathcal{F}_X\bigr]&=1-F^2_{\mathbf{X}_i}(Y_j)-\bigl(1-F_{\mathbf{X}_i}(Y_j)\bigr)^2,\\
E\bigl[\mathbf{1}\{Y_j\in I^j_i\}\mid Y_j,\mathcal{F}_X\bigr]&=1-F_{\mathbf{X}_i}(Y_j)F_{\mathbf{X}_{N_{-j}(i)}}(Y_j)-\bigl(1-F_{\mathbf{X}_i}(Y_j)\bigr)\bigl(1-F_{\mathbf{X}_{N_{-j}(i)}}(Y_j)\bigr).
\end{aligned}
\]
Assumption (A1) yields
\[
\bigl|F_{\mathbf{X}_i}(Y_j)-F_{\mathbf{X}_{N_{-j}(i)}}(Y_j)\bigr|\le C\bigl(1+\|\mathbf{X}_{N_{-j}(i)}\|^\beta+\|\mathbf{X}_i\|^\beta\bigr)\,\|\mathbf{X}_{N_{-j}(i)}-\mathbf{X}_i\|\,\min\{F(Y_j),\,1-F(Y_j)\}.
\]
By assumption (A2) there exists $K$ such that $\|\mathbf{X}_i\|,\|\mathbf{X}_{N_{-j}(i)}\|\le K$. This gives us
\[
\Bigl|E\Bigl[\frac{n-n_0}{n}Q'_n\Bigr]-\mu(\tilde S)Q\Bigr|
=\Bigl|E\Bigl[\frac{E\bigl[(2F_{\mathbf{X}_i}(Y_j)-1)\bigl(F_{\mathbf{X}_{N_{-j}(i)}}(Y_j)-F_{\mathbf{X}_i}(Y_j)\bigr)\mid\mathcal{F}_X,Y_j\bigr]}{F(Y_j)(1-F(Y_j))}\Bigr]\Bigr|
\le CK^\beta\,E\bigl[\|\mathbf{X}_i-\mathbf{X}_{N_{-j}(i)}\|\bigr].
\]
Therefore, by Lemma 8.3,
\[
\Bigl|E\Bigl[\frac{1-n_0/n}{\mu(\tilde S)}Q'_n\Bigr]-Q\Bigr|\le
\begin{cases}
Cn^{-1}(\log n)^2 & \text{if } p=1,\\
Cn^{-1/p}\log n & \text{if } p\ge2,
\end{cases}
\]
and
\[
\bigl|E[Q'_n]-Q\bigr|\le E\Bigl[\Bigl|1-\frac{1-n_0/n}{\mu(\tilde S)}\Bigr|\,\frac{n-n_0}{n}Q'_n\Bigr]+\Bigl|E\Bigl[\frac{1-n_0/n}{\mu(\tilde S)}Q'_n\Bigr]-Q\Bigr|.
\]
Note that the first term on the right-hand side is $O(n^{-1/2})$, since $\frac{n-n_0}{n}Q'_n$ is uniformly integrable and $n_0/n$ converges at the rate $1/\sqrt n$ to $\mu(\tilde S)$. Following the proof of Theorem 2.2, with the choice of $\Delta=c_1^{-1}\log(n)/(n-1)$ in (9) we have
\[
E\bigl[|Q_n-Q'_n|\bigr]\le C\sqrt{\frac{\log n}{n}}.
\]
Finally, using Lemma 8.2 and noting that $\nu_n=1-Q_n$ and $\nu=1-Q$ finishes the proof. $\square$

Lemma 8.4 implies
\[
|\nu_n-\nu|=O\Bigl(\frac{(\log n)^{1+\mathbf{1}\{p=1\}}}{n^{1/(p\vee2)}}\Bigr),
\]
which gives the proof of Theorem 4.1.

8.4. Proof of Theorem 6.1.

Proof. Using Lemma 9.3 in [22], the proof closely mirrors that of Theorem 2.2, hence we omit it here. The only difference is that the constant $C(p)$ can be bounded above by 3 throughout the argument. $\square$

8.5. Proof of Theorem 5.1. Let $j_1,j_2,\dots,j_p$ be the complete ordering of all variables by FORD. Let $V_0=\emptyset$, and for each $1\le k\le p$, let $V_k:=\{j_1,\dots,j_k\}$. For $k>p$, let $V_k:=V_p$. Note that for each $k$, $j_k$ is the index $j\notin V_{k-1}$ that maximizes $\nu_n(Y,\mathbf{X}_{V_{k-1}\cup\{j\}})$. Let $K=\lfloor 4/\delta+2\rfloor$. Let $E'$ be the event that $|\nu_n(Y,\mathbf{X}_{V_k})-\nu(Y,\mathbf{X}_{V_k})|\le\delta/8$ for all $1\le k\le K$, and let $E$ be the event that $V_K$ is sufficient.

Lemma 8.5. Suppose that $E'$ has happened, and for some $1\le k\le K$,
\[
\nu_n(Y,\mathbf{X}_{V_k})-\nu_n(Y,\mathbf{X}_{V_{k-1}})\le\delta/2. \tag{10}
\]
Then $V_{k-1}$ is sufficient.

Proof. Take any $k\le K$ such that (10) holds. If $k>p$ there is nothing to prove. So let us assume that $k\le p$. Since $E'$ has happened, this implies that for any $j\notin V_{k-1}$,
\[
\nu(Y,\mathbf{X}_{V_{k-1}\cup\{j\}})-\nu(Y,\mathbf{X}_{V_{k-1}})\le\nu_n(Y,\mathbf{X}_{V_k})-\nu_n(Y,\mathbf{X}_{V_{k-1}})+\frac{\delta}{4}\le\frac{3\delta}{4}.
\]
Then note that by the definition of $\delta$, $V_{k-1}$ must be a sufficient set. $\square$

Lemma 8.6. The event $E'$ implies $E$.

Proof. Suppose $E'$ has happened but there is no $k$ such that (10) is valid. Therefore for all $1\le k\le K$ we have $\nu_n(Y,\mathbf{X}_{V_k})-\nu_n(Y,\mathbf{X}_{V_{k-1}})>\delta/2$. This implies that
\[
\nu(Y,\mathbf{X}_{V_k})-\nu(Y,\mathbf{X}_{V_{k-1}})\ge\nu_n(Y,\mathbf{X}_{V_k})-\nu_n(Y,\mathbf{X}_{V_{k-1}})-\frac{\delta}{4}\ge\frac{\delta}{4}.
\]
This gives
\[
\nu(Y,\mathbf{X}_{V_K})=\sum_{k=1}^{K}\bigl(\nu(Y,\mathbf{X}_{V_k})-\nu(Y,\mathbf{X}_{V_{k-1}})\bigr)\ge\frac{K\delta}{4}\ge\Bigl(\frac4\delta+2\Bigr)\frac{\delta}{4}>1.
\]
Note that this contradicts the fact that $\nu(Y,\mathbf{X}_{V_k})\in[0,1]$. Therefore, (10) must hold for some $k\le K$, and Lemma 8.5 then implies that $V_K$ is sufficient. $\square$

Lemma 8.7. There are positive constants $L_1$, $L_2$ and $L_3$ depending only on $C$, $\beta$, $K$ and $\delta$ such that
\[
P(E')\ge 1-L_1 p^{L_2}\exp(-L_3 n/\log n).
\]

Proof. By assumptions (A1′) and (A2′), and Lemma 8.4, there exist $L_1$, $L_2$ and $L_3$ such that for any $V$ of size at most $K$ and any $t\ge0$,
\[
P\bigl(|\nu_n(Y,\mathbf{X}_V)-\nu(Y,\mathbf{X}_V)|\ge L_1 n^{-1/(K\vee2)}(\log n)^2+t\bigr)\le L_2\exp(-L_3 nt^2/\log n).
\]
Let the event on the left be $A_{V,t}$ and $A_t:=\bigcup_{|V|\le K}A_{V,t}$. By the union bound we have
\[
P(A_t)\le L_2 p^K\exp(-L_3 nt^2/\log n).
\]
Choose $t=\delta/16$. If $n$ is large enough so that
\[
L_1 n^{-1/(K\vee2)}(\log n)^2\le\frac{\delta}{16}, \tag{11}
\]
then the above bound implies that
\[
P(E')\ge 1-L_2 p^K\exp(-L_4 n/\log n). \tag{12}
\]
Equivalently, one can write (11) as $n\ge L_5$ for some large $L_5$. Then we choose $L_6\ge L_2$ such that for any $n<L_5$, $L_6 p^K\exp(-L_3 n/\log n)\ge1$. Therefore for $n<L_5$, we have the trivial bound $P(E')\ge 1-L_6 p^K\exp(-L_3 n)$. Combining this with (12) finishes the proof. $\square$

Lemma 8.8. Event $E'$ implies that $\hat V$ is sufficient.

Proof. Suppose that $E'$ has happened. First, suppose that FORD has stopped at step $K$ or later. Then $V_K\subseteq\hat V$ and, therefore, Lemma 8.6 implies that $E$ has also happened, and therefore $\hat V$ is sufficient. Next, suppose that FORD has stopped at step $k-1<K$. Then, by definition of the stopping rule, we have
\[
\nu_n(Y,\mathbf{X}_{V_k})\le\nu_n(Y,\mathbf{X}_{V_{k-1}}),
\]
which implies (10). Since $E'$ has happened, Lemma 8.5 implies that $\hat V=V_{k-1}$ is sufficient.
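The objects analysed in Lemmas 8.5-8.8, the greedy FORD ordering $V_0\subseteq V_1\subseteq\dots$ and the stop-at-first-non-improvement rule defining $\hat V$, fit in a few lines of code. In the sketch below, `nu_n` is a placeholder for any plug-in estimate of $\nu(Y,\mathbf{X}_V)$; the toy stand-in used in the demo is a squared correlation, not the paper's estimator.

```python
# Forward selection in the style of FORD: greedily grow V by the coordinate
# that maximises the estimated dependence nu_n(Y, X_V), and stop as soon as
# adding a coordinate does not increase the estimate (so V_hat = V_{k-1}).
# `nu_n` is a placeholder plug-in estimator, not the paper's exact construction.

def ford(X, y, nu_n):
    """X: list of n sample rows, y: list of n responses, nu_n: callable."""
    p = len(X[0])
    selected, best = [], 0.0          # nu_n with no covariates taken as 0
    while len(selected) < p:
        remaining = [j for j in range(p) if j not in selected]
        # pick the j maximising nu_n(Y, X_{V union {j}})
        scores = {j: nu_n(X, y, selected + [j]) for j in remaining}
        j_star = max(scores, key=scores.get)
        if scores[j_star] <= best:    # stopping rule: no improvement
            break
        selected.append(j_star)
        best = scores[j_star]
    return selected

# toy run with a crude squared-correlation stand-in for nu_n
def toy_nu(X, y, V):
    n = len(y)
    preds = [sum(row[j] for j in V) for row in X]
    my, mp = sum(y) / n, sum(preds) / n
    cov = sum((a - mp) * (b - my) for a, b in zip(preds, y))
    vp = sum((a - mp) ** 2 for a in preds) or 1.0
    vy = sum((b - my) ** 2 for b in y) or 1.0
    return cov * cov / (vp * vy)      # squared correlation, lies in [0, 1]

X = [[i % 3, (i * 7) % 5, 0.0] for i in range(30)]
y = [row[0] + 2 * row[1] for row in X]
print(ford(X, y, toy_nu))            # → [1, 0]: informative coordinates only
```

The uninformative third coordinate is never selected, mirroring how the stopping rule is meant to halt once no remaining variable improves the fit.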
$\square$

Theorem 5.1 is an immediate consequence of Lemmas 8.7 and 8.8.

8.6. Proof of Proposition 6.2.

Lemma 8.9. Suppose that $X$ and $Y$ are independent and $Y$ is continuous. Then $E[\nu^{\text{1-dim}}_n(Y,X)]=\frac2n$, and
\[
\operatorname{Var}\bigl(\nu^{\text{1-dim}}_n(Y,X)\bigr)=\frac2n\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{(k-1)(n-\ell)}-\frac1n-\frac{2}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{k-1}+o\Bigl(\frac1n\Bigr).
\]

Proof. When $Y\perp X$, $(r_1,\dots,r_n)$ is a uniformly random permutation of $1,\dots,n$. In this case $\nu^{\text{1-dim}}_n(Y,X)$ can be written as
\[
\nu^{\text{1-dim}}_n(Y,X)=1-\frac12\sum_{i=1}^{n-1}\sum_{j=2}^{n-1}\frac{\mathbf{1}\{j\in\mathcal{K}_i\}}{(j-1)(n-j)}.
\]
Let us focus on
\[
A:=\sum_{\ell=2}^{n-1}\sum_{i=1}^{n-1}\frac{\mathbf{1}\{\ell\in\mathcal{K}_i\}}{(\ell-1)(n-\ell)}.
\]
We first work out the mean and variance of $A$:
\[
E[A]=(n-1)\sum_{\ell=2}^{n-1}\frac{2(\ell-1)(n-\ell)}{n(n-1)(\ell-1)(n-\ell)}=2-\frac4n.
\]
For the variance, we first look at the second moment of $A$:
\[
\begin{aligned}
A^2&=\sum_{\ell=2}^{n-1}\sum_{k=2}^{n-1}\sum_{i=1}^{n-1}\sum_{j=1}^{n-1}\frac{\mathbf{1}\{\ell\in\mathcal{K}_i\}\,\mathbf{1}\{k\in\mathcal{K}_j\}}{(\ell-1)(n-\ell)(k-1)(n-k)}\\
&=\sum_{\ell=2}^{n-1}\sum_{i=1}^{n-1}\frac{\mathbf{1}\{\ell\in\mathcal{K}_i\}}{(\ell-1)^2(n-\ell)^2}
+2\sum_{\ell=2}^{n-1}\sum_{i=1}^{n-2}\frac{\mathbf{1}\{\ell\in\mathcal{K}_i\}\,\mathbf{1}\{\ell\in\mathcal{K}_{i+1}\}}{(\ell-1)^2(n-\ell)^2}
+2\sum_{\ell=2}^{n-1}\sum_{i=1}^{n-3}\sum_{j=i+2}^{n-1}\frac{\mathbf{1}\{\ell\in\mathcal{K}_i\}\,\mathbf{1}\{\ell\in\mathcal{K}_j\}}{(\ell-1)^2(n-\ell)^2}\\
&\quad+2\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\sum_{i=1}^{n-1}\frac{\mathbf{1}\{\ell\in\mathcal{K}_i\}\,\mathbf{1}\{k\in\mathcal{K}_i\}}{(\ell-1)(n-\ell)(k-1)(n-k)}
+4\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\sum_{i=1}^{n-2}\frac{\mathbf{1}\{\ell\in\mathcal{K}_i\}\,\mathbf{1}\{k\in\mathcal{K}_{i+1}\}}{(\ell-1)(n-\ell)(k-1)(n-k)}\\
&\quad+4\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\sum_{i=1}^{n-3}\sum_{j=i+2}^{n-1}\frac{\mathbf{1}\{\ell\in\mathcal{K}_i\}\,\mathbf{1}\{k\in\mathcal{K}_j\}}{(\ell-1)(n-\ell)(k-1)(n-k)}\\
&=A_1+A_2+A_3+A_4+A_5+A_6.
\end{aligned}
\]
Let $H_m=\sum_{j=1}^m 1/j$. Then
\[
E[A_1]=\frac2n\sum_{\ell=2}^{n-1}\frac{1}{(\ell-1)(n-\ell)}=\frac{4H_{n-2}}{n(n-1)},
\]
\[
E[A_2]=\frac{2}{n(n-1)}\sum_{\ell=2}^{n-1}\frac{(n-\ell-1)+(\ell-2)}{(\ell-1)(n-\ell)}=\frac{4(n-3)}{n(n-1)^2}H_{n-2},
\]
\[
E[A_3]=\frac{4}{n(n-1)}\sum_{\ell=2}^{n-1}\frac{(\ell-2)(n-\ell-1)}{(\ell-1)(n-\ell)}
=\frac{4(n-2)}{n(n-1)}\Bigl(1-\frac{2H_{n-2}}{n-2}+\frac{2H_{n-2}}{(n-1)(n-2)}\Bigr),
\]
\[
E[A_4]=\frac4n\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{(n-\ell)(k-1)},
\]
\[
\begin{aligned}
E[A_5]&=\frac{4}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{(n-\ell)+(k-\ell-3)+(k-1)}{(n-\ell)(k-1)}
=\frac{4}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\Bigl(\frac{1}{k-1}+\frac{2}{n-\ell}-\frac{\ell+2}{(n-\ell)(k-1)}\Bigr)\\
&=\frac{4}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\Bigl(\frac{2}{k-1}+\frac{2}{n-\ell}-\frac{n+2}{(n-\ell)(k-1)}\Bigr)\\
&=\frac8n+\frac{8}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{k-1}-\frac{16}{n(n-1)}-\frac{8H_{n-2}}{n(n-1)^2}
-\frac4n\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{(k-1)(n-\ell)}-\frac{16}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{(k-1)(n-\ell)},
\end{aligned}
\]
\[
\begin{aligned}
E[A_6]&=\frac{8}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{(n-\ell)(k-1)-3(k-1)-(n-\ell)+\ell+2+(k-2)}{(n-\ell)(k-1)}\\
&=\frac{8}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\Bigl(1-\frac{2}{n-\ell}-\frac{2}{k-1}+\frac{n+1}{(n-\ell)(k-1)}\Bigr)\\
&=4-\frac{32}{n}+\frac{16H_{n-2}}{n(n-1)}+\frac{24}{n(n-1)}-\frac{16}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{k-1}+\frac{8(n+1)}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{(k-1)(n-\ell)}.
\end{aligned}
\]
Putting these together, we have
\[
\operatorname{Var}(A)=\frac8n\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{(k-1)(n-\ell)}-\frac{8}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{k-1}-\frac4n+o\Bigl(\frac{1}{n^2}\Bigr).
\]
Then note that $E[\nu^{\text{1-dim}}_n(Y,X)]=1-E[A]/2=2/n$, and $\operatorname{Var}(\nu^{\text{1-dim}}_n(Y,X))=\operatorname{Var}(A)/4$. This finishes the proof. $\square$

References

[1] Amit, Y. and Geman, D. "Shape quantization and recognition with randomized trees". In: Neural Computation 9.7 (1997), pp. 1545-1588.
[2] Ansari, J. and Fuchs, S. "A simple extension of Azadkia & Chatterjee's rank correlation to a vector of endogenous variables". In: Preprint (2022).
[3] Auddy, A., Deb, N., and Nandy, S. "Exact detection thresholds and minimax optimality of Chatterjee's correlation coefficient". In: Bernoulli 30.2 (2024), pp. 1640-1668.
[4] Azadkia, M. and Chatterjee, S. "A simple measure of conditional dependence". In: Ann. Statist. 49.6 (2021), pp. 3070-3102.
[5] Azadkia, M., Chatterjee, S., and Matloff, N. "FOCI: Feature Ordering by Conditional Independence, 2020". In: R package version 0.1 2 ().
[6] Azadkia, M., Taeb, A., and Bühlmann, P. "A fast non-parametric approach for causal structure learning in polytrees". In: (2021). arXiv: 2111.14969.
[7] Bartl, D. and Mendelson, S.
"On a variance dependent Dvoretzky-Kiefer-Wolfowitz inequality". In: (2023). arXiv: 2308.04757.
[8] Battiti, R. "Using mutual information for selecting features in supervised neural net learning". In: IEEE Transactions on Neural Networks 5.4 (1994), pp. 537-550.
[9] Benjamini, Y. and Hochberg, Y. "Controlling the false discovery rate: a practical and powerful approach to multiple testing". In: J. R. Stat. Soc., Ser. B 57.1 (1995), pp. 289-300.
[10] Bergsma, W. and Dassios, A. "A consistent test of independence based on a sign covariance related to Kendall's tau". In: Bernoulli 20.2 (2014), pp. 1006-1028.
[11] Bickel, P. J. "Measures of independence and functional dependence". In: arXiv preprint arXiv:2206.13663 (2022).
[12] Blum, J., Kiefer, J., and Rosenblatt, M. "Distribution free tests of independence based on the sample distribution function". In: Annals of Mathematical Statistics 32 (1961), pp. 485-498.
[13] Breiman, L. "Bagging predictors". In: Mach. Learn. 24.2 (1996), pp. 123-140.
[14] Breiman, L. "Better subset regression using the nonnegative garrote". In: Technometrics 37.4 (1995), pp. 373-384.
[15] Breiman, L. "Random forests". In: Mach. Learn. 45.1 (2001), pp. 5-32.
[16] Breiman, L. and Friedman, J. H. "Estimating optimal transformations for multiple regression and correlation". In: J. Am. Stat. Assoc. 80 (1985), pp. 580-619.
[17] Breiman, L. et al. Classification and Regression Trees. 1984.
[18] Bücher, A. and Dette, H. "On the lack of weak continuity of Chatterjee's correlation coefficient". In: (2024). url: https://arxiv.org/abs/2410.11418.
[19] Candès, E. and Tao, T. "The Dantzig selector: statistical estimation when p is much larger than n. (With discussions and rejoinder)". In: Ann. Stat. 35.6 (2007), pp. 2313-2404.
[20] Candès, E. et al. "Panning for gold: 'model-X' knockoffs for high dimensional controlled variable selection". In: J. R. Stat. Soc., Ser. B, Stat. Methodol. 80.3 (2018), pp. 551-577.
[21] Cao, S. and Bickel, P. J. "Correlations with tailored extremal properties". In: arXiv preprint arXiv:2008.10177 (2020).
[22] Chatterjee, S. "A new coefficient of correlation". In: J. Amer. Statist. Assoc. 116.536 (2021), pp. 2009-2022.
[23] Chatterjee, S. "A new method of normal approximation". In: Ann. Probab. 36.4 (2008), pp. 1584-1610.
[24] Chatterjee, S. "A survey of some recent developments in measures of association". In: Probability and Stochastic Processes. A Volume in Honour of Rajeeva L. Karandikar. Singapore: Springer, 2024, pp. 109-128.
[25] Chatterjee, S. and Diaconis, P. "A central limit theorem for a new statistic on permutations". In: Indian J. Pure Appl. Math. 48.4 (2017), pp. 561-573.
[26] Chatterjee, S. and Holmes, S. XICOR: Association measurement through cross rank increments. https://CRAN.R-project.org/package=XICOR. R package. 2020.
[27] Chatterjee, S. and Vidyasagar, M. "Estimating large causal polytrees from small samples". In: arXiv preprint arXiv:2209.07028 (2022).
[28] Chen, S. and Donoho, D. "Basis pursuit". In: Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers. Vol. 1. IEEE, 1994, pp. 41-44.
[29] Csörgő, S. "Testing for independence by the empirical characteristic function". In: J. Multivariate Anal. 16 (1985), pp. 290-299.
[30] Deb, N., Ghosal, P., and Sen, B. "Measuring Association on Topological Spaces Using Kernels and Geometric Graphs". In: (2020). arXiv: 2010.01768.
[31] Deb, N. and Sen, B.
"Multivariate rank-based distribution-free nonparametric testing using measure transportation". In: J. Am. Stat. Assoc. 118.541 (2023), pp. 192-207.
[32] Dette, H. and Kroll, M. "A simple bootstrap for Chatterjee's rank correlation". In: Biometrika 112.1 (2025), asae045.
[33] Dette, H., Siburg, K. F., and Stoimenov, P. A. "A Copula-Based Nonparametric Measure of Regression Dependence". In: Scand. J. Stat. 40.1 (2013), pp. 21-41.
[34] Drton, M., Han, F., and Shi, H. "High-dimensional consistent independence testing with maxima of rank correlations". In: Ann. Stat. 48.6 (2020), pp. 3206-3227.
[35] Efron, B. et al. "Least angle regression. (With discussion)". In: Ann. Stat. 32.2 (2004), pp. 407-499.
[36] Fan, J. and Li, R. "Variable selection via nonconcave penalized likelihood and its oracle properties". In: J. Am. Stat. Assoc. 96.456 (2001), pp. 1348-1360.
[37] Freund, Y., Schapire, R. E., et al. "Experiments with a new boosting algorithm". In: ICML. Vol. 96. 1996, pp. 148-156.
[38] Friedman, J. H. "Multivariate adaptive regression splines". In: Ann. Stat. 19.1 (1991), pp. 1-67.
[39] Friedman, J. H., Bentley, J. L., and Finkel, R. A. "An algorithm for finding best matches in logarithmic expected time". In: 3.3 (1977), pp. 209-226.
[40] Friedman, J. H. and Rafsky, L. C. "Graph-theoretic measures of multivariate association and prediction". In: Ann. Stat. 11 (1983), pp. 377-391.
[41] Fuchs, S. "Quantifying directed dependence via dimension reduction". In: J. Multivariate Anal. 201 (2024), p. 21.
[42] Gamboa, F., Klein, T., and Lagnoux, A. "Sensitivity analysis based on Cramér-von Mises distance". In: SIAM/ASA J. Uncertain. Quantif. 6 (2018), pp. 522-548.
[43] Gamboa, F. et al. "Global sensitivity analysis: a novel generation of mighty estimators based on rank statistics". In: Bernoulli 28.4 (2022), pp. 2345-2374.
[44] Gebelein, H. "Das statistische Problem der Korrelation als Variations- und Eigenwertproblem und sein Zusammenhang mit der Ausgleichsrechnung". In: Z. Angew. Math. Mech. 21 (1941), pp. 364-379.
[45] George, E. I. and McCulloch, R. E. "Variable selection via Gibbs sampling". In: J. Amer. Statist. Assoc. 88.423 (1993), pp. 881-889.
[46] Gretton, A. et al. "A kernel statistical test of independence". In: Advances in Neural Information Processing Systems. 2008, pp. 585-592.
[47] Gretton, A. et al. "Measuring statistical dependence with Hilbert-Schmidt norms". In: Algorithmic Learning Theory. 16th International Conference, ALT 2005, Singapore, October 8-11, 2005. Proceedings. 2005, pp. 63-77. isbn: 3-540-29242-X.
[48] Griessenberger, F., Junker, R. R., and Trutschnig, W. "On a multivariate copula-based dependence measure and its estimation". In: Electron. J. Stat. 16.1 (2022), pp. 2206-2251.
[49] Hamidieh, K. "A data-driven statistical model for predicting the critical temperature of a superconductor". In: Computational Materials Science (2018). url: https://api.semanticscholar.org/CorpusID:55069173.
[50] Han, F., Chen, S., and Liu, H. "Distribution-free tests of independence in high dimensions". In: Biometrika 104.4 (2017), pp. 813-828.
[51] Han, F. and Huang, Z. "Azadkia-Chatterjee's correlation coefficient adapts to manifold data". In: Ann. Appl. Probab. 34.6 (2024), pp. 5172-5210.
[52] Hastie, T., Tibshirani, R., and Friedman, J. The Elements of Statistical Learning. Data Mining, Inference, and Prediction.
2nd ed. Springer Ser. Stat. New York, NY: Springer, 2009.
[53] Heller, R., Heller, Y., and Gorfine, M. "A consistent multivariate test of association based on ranks of distances". In: Biometrika 100.2 (2013), pp. 503-510.
[54] Hirschfeld, H. O. "A connection between correlation and contingency". In: Proc. Camb. Philos. Soc. 31 (1935), pp. 520-524.
[55] Ho, T. K. "The random subspace method for constructing decision forests". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 20.8 (1998), pp. 832-844.
[56] Hoeffding, W. "A non-parametric test of independence". In: Ann. Math. Stat. 19 (1948), pp. 546-557.
[57] Huang, C. and Joseph, V. R. "Factor Importance Ranking and Selection using Total Indices". In: Technometrics just-accepted (2025), pp. 1-29.
[58] Huang, Z., Deb, N., and Sen, B. "Kernel Partial Correlation Coefficient - a Measure of Conditional Dependence". In: J. Mach. Learn. Res. 23.216 (2022), pp. 1-58.
[59] Huang, Z., Deb, N., and Sen, B. "Kernel Partial Correlation Coefficient - a Measure of Conditional Dependence". In: J. Mach. Learn. Res. 23.216 (2022), pp. 1-58.
[60] Josse, J. and Holmes, S. "Measuring multivariate association and beyond". In: Stat. Surv. 10 (2016), pp. 132-167.
[61] Knuth, D. E. The Art of Computer Programming. Vol. 3. Pearson Education, 1997.
[62] Kraskov, A., Stögbauer, H., and Grassberger, P. "Estimating mutual information". In: Physical Review E 69 (2004), p. 066138.
[63] Kroll, M. "Asymptotic Normality of Chatterjee's Rank Correlation". In: arXiv preprint arXiv:2408.11547 (2024).
[64] Levcopoulos, C. and Petersson, O. "Heapsort - adapted for presorted files". In: Algorithms and Data Structures: Workshop WADS '89, Ottawa, Canada, August 17-19, 1989, Proceedings 1. Springer, 1989, pp. 499-509.
[65] Lin, Z. and Han, F. "On boosting the power of Chatterjee's rank correlation". In: Biometrika 110.2 (2023), pp. 283-299.
[66] Lin, Z. and Han, F. "On the failure of the bootstrap for Chatterjee's rank correlation". In: Biometrika 111.3 (2024), pp. 1063-1070.
[67] Linfoot, E. H. "An informational measure of correlation". In: Inf. Control 1 (1957), pp. 85-89.
[68] Lopez-Paz, D., Hennig, P., and Schölkopf, B. "The randomized dependence coefficient". In: Advances in Neural Information Processing Systems. 2013, pp. 1-9.
[69] Lyons, R. "Distance covariance in metric spaces". In: Ann. Probab. 41.5 (2013), pp. 3284-3305.
[70] McDiarmid, C. "On the method of bounded differences". In: Surveys in Combinatorics. Cambridge University Press, 1989, pp. 148-188.
[71] Miller, A. Subset Selection in Regression. Chapman and Hall/CRC, 2002.
[72] Nandy, P., Weihs, L., and Drton, M. "Large-sample theory for the Bergsma-Dassios sign covariance". In: Electron. J. Stat. 10.2 (2016), pp. 2287-2311.
[73] Neshat, M. et al. "A new insight into the Position Optimization of Wave Energy Converters by a Hybrid Local Search". In: arXiv preprint arXiv:1904.09599 (2019). url: https://arxiv.org/abs/1904.09599.
[74] Nguyen Huu, T. et al. "A new hyperparameter tuning framework for regression tasks in deep neural network: combined-sampling algorithm to search the optimized hyperparameters". In: Mathematics 12.24 (2024). url: https://www.mdpi.com/2227-7390/12/24/3892.
[75] Pan, W. et al. "Ball covariance: a generic measure of dependence in Banach space". In: J. Am. Stat. Assoc. 115.529 (2020), pp. 307-317.
[76] Pavasovic, K. L. et al.
"A Differentiable Rank-Based Objective For Better Feature Learning". In: arXiv preprint arXiv:2502.09445 (2025).
[77] Pfister, N. et al. "Kernel-based tests for joint independence". In: J. R. Stat. Soc., Ser. B, Stat. Methodol. 80.1 (2018), pp. 5-31.
[78] Puri, M. L. and Sen, P. K. Nonparametric Methods in Multivariate Analysis. Wiley Ser. Probab. Math. Stat. John Wiley & Sons, Hoboken, NJ, 1971.
[79] Ravikumar, P. et al. "Sparse additive models". In: J. R. Stat. Soc., Ser. B, Stat. Methodol. 71.5 (2009), pp. 1009-1030.
[80] Rényi, A. "On measures of dependence". In: Acta Math. Acad. Sci. Hung. 10 (1959), pp. 441-451.
[81] Reshef, D. N. et al. "Detecting novel associations in large data sets". In: Science 334.6062 (2011), pp. 1518-1524.
[82] Romano, J. P. "A bootstrap revival of some nonparametric distance tests". In: J. Am. Stat. Assoc. 83.403 (1988), pp. 698-708.
[83] Rosenblatt, M. "A quadratic measure of deviation of two-dimensional density estimates and a test of independence". In: Ann. Stat. 3 (1975), pp. 1-14.
[84] Schweizer, B. and Wolff, E. F. "On nonparametric measures of dependence for random variables". In: Ann. Stat. 9 (1981), pp. 879-885.
[85] Sen, A. and Sen, B. "Testing independence and goodness-of-fit in linear models". In: Biometrika 101.4 (2014), pp. 927-942.
[86] Shi, H., Drton, M., and Han, F. "On the power of Chatterjee's rank correlation". In: Biometrika 109.2 (2022), pp. 317-333.
[87] Shi, H., Drton, M., and Han, F. "On Azadkia-Chatterjee's conditional dependence coefficient". In: Bernoulli 30.2 (2024), pp. 851-877.
[88] Sklar, M. "Fonctions de répartition à n dimensions et leurs marges". In: Annales de l'ISUP. Vol. 8. 3. 1959, pp. 229-231.
[89] Strothmann, C., Dette, H., and Siburg, K. F. "Rearranged dependence measures". In: Bernoulli 30.2 (2024), pp. 1055-1078.
[90] Székely, G. J. and Rizzo, M. L. "Brownian distance covariance". In: Ann. Appl. Stat. 3.4 (2009), pp. 1236-1265.
[91] Székely, G. J., Rizzo, M. L., and Bakirov, N. K. "Measuring and testing dependence by correlation of distances". In: Ann. Stat. 35.6 (2007), pp. 2769-2794.
[92] Tibshirani, R. "Regression shrinkage and selection via the lasso". In: J. R. Stat. Soc., Ser. B 58.1 (1996), pp. 267-288.
[93] Tran, L. and Han, F. "On a rank-based Azadkia-Chatterjee correlation coefficient". In: arXiv preprint arXiv:2412.02668 (2024).
[94] Vergara, J. R. and Estévez, P. A. "A review of feature selection methods based on mutual information". In: 24 (2014), pp. 175-186.
[95] Wang, W. et al. "Total Variation Floodgate for Variable Importance Inference in Classification". In: Proceedings of Machine Learning Research 235 (2024), pp. 50711-50725.
[96] Wang, X., Jiang, B., and Liu, J. S. "Generalized R-squared for detecting dependence". In: Biometrika 104.1 (2017), pp. 129-139.
[97] Weihs, L., Drton, M., and Meinshausen, N. "Symmetric rank covariances: a generalized framework for nonparametric measures of dependence". In: Biometrika 105.3 (2018), pp. 547-562.
[98] Weihs, L., Drton, M., and Leung, D. "Efficient computation of the Bergsma-Dassios sign covariance". In: Comput. Stat. 31.1 (2016), pp. 315-328.
[99] Yanagimoto, T. "On measures of association and a related problem". In: Ann. Inst. Stat. Math. 22 (1970), pp. 57-63.
[100] Yuan, M. and Lin, Y.
"Model selection and estimation in regression with grouped variables". In: J. R. Stat. Soc., Ser. B, Stat. Methodol. 68.1 (2006), pp. 49-67.
[101] Zhang, K. "BET on independence". In: J. Am. Stat. Assoc. 114.528 (2019), pp. 1620-1637.
[102] Zhang, L. and Janson, L. "Floodgate: inference for model-free variable importance". In: arXiv preprint arXiv:2007.01283 (2020).
[103] Zhang, Q. "On relationships between Chatterjee's and Spearman's correlation coefficients". In: Commun. Stat., Theory Methods 54.1 (2025), pp. 259-279.
[104] Zhang, Q. "On the asymptotic null distribution of the symmetrized Chatterjee's correlation coefficient". In: Stat. Probab. Lett. 194 (2023), p. 7.
[105] Zhang, Q. et al. "Large-scale kernel methods for independence testing". In: Stat. Comput. 28.1 (2018), pp. 113-130.
[106] Zhou, H. and Müller, H.-G. "Association and Independence Test for Random Objects". In: arXiv preprint arXiv:2505.01983 (2025).
[107] Zou, H. "The adaptive lasso and its oracle properties". In: J. Am. Stat. Assoc. 101.476 (2006), pp. 1418-1429.
[108] Zou, H. and Hastie, T. "Regularization and variable selection via the elastic net". In: J. R. Stat. Soc., Ser. B, Stat. Methodol. 67.2 (2005), pp. 301-320.

Department of Statistics, London School of Economics & Political Science
Email address: m.azadkia@lse.ac.uk
Email address: s.mirrezaeiroudaki@lse.ac.uk
arXiv:2505.18769v1 [stat.ME] 24 May 2025

Regularisation of CART trees by summation of p-values

Nils Engler*, Mathias Lindholm†, Filip Lindskog‡ and Taariq Nazar§

May 27, 2025

Abstract

The standard procedure to decide on the complexity of a CART regression tree is to use cross-validation with the aim of obtaining a predictor that generalises well to unseen data. The randomness in the selection of folds implies that the selected CART tree is not a deterministic function of the data. We propose a deterministic in-sample method that can be used for stopping the growing of a CART tree based on node-wise statistical tests. This testing procedure is derived using a connection to change point detection, where the null hypothesis corresponds to there being no signal. The suggested p-value based procedure allows us to consider covariate vectors of arbitrary dimension and allows us to bound the p-value of an entire tree from above. Further, we show that the test detects a not-too-weak signal with a high probability, given a not-too-small sample size. We illustrate our methodology and the asymptotic results on both simulated and real world data. Additionally, we illustrate how our p-value based method can be used as an automatic deterministic early stopping procedure for tree-based boosting. The boosting iterations stop when the tree to be added consists only of a root node.
Keywords: Regression trees, CART, p-value, stopping criterion, multiple testing, max statistics

*nils.engler@math.su.se, Department of Mathematics, Stockholm University, Sweden
†lindholm@math.su.se, Department of Mathematics, Stockholm University, Sweden
‡lindskog@math.su.se, Department of Mathematics, Stockholm University, Sweden
§taariq.nazar@math.su.se, Department of Mathematics, Stockholm University, Sweden

1 Introduction

When using binary-split regression trees in practice, an important question is how to decide on the complexity of the constructed tree expressed in terms of, e.g., the number of binary splits in the tree, given data. Many applications focus on predictive modeling, where the objective is to construct a tree that generalises well to unseen data. The standard approach to decide on the tree complexity is then to use hold-out data and apply cross-validation techniques, see e.g. [Hastie et al., 2009]. When constructing a tree by sequentially deciding on continuing to split, adding new leaves to the tree in each step, cross-validation corresponds to a method for so-called "early stopping". When using a cross-validation-based early stopping rule, the constructed tree obviously depends on the hold-out data for the different steps of the procedure. In particular, a randomised selection of hold-out data will inevitably result in the constructed tree being a random function of the data. This is not always desirable. In the present paper a deterministic in-sample early stopping rule is introduced, which is based on p-values for whether to accept a binary split or not.

In order to explain the suggested tree-growing method, let $T_m$ denote a greedily grown optimal $L_2$ CART regression tree ($L_2$ refers to using a squared-error loss function) with $m$ leaves (suppressing the dependence on covariates), see e.g. [Breiman et al., 1984]. Input to the tree-growing method is a given sequence of nested regression trees $T_{m_1}, T_{m_2}, \dots$, where $1 =: m_1 < m_2 < \dots$, i.e.
the first tree is simply a root node, each tree is a subtree of the next tree
in the sequence, and no tree appears more than once. Note that $T_{m_j}$ and $T_{m_{j+1}}$ may differ by more than one leaf, i.e. $m_{j+1}-m_j\ge1$. The tree-growing process starts from the root node $T_{m_1}$ by testing whether increasing the tree-complexity from $T_{m_1}$ to $T_{m_2}$ corresponds to a significant improvement in terms of the $L_2$ loss. If this is the case, the tree-growing process continues to test whether the tree-complexity should be increased from $T_{m_2}$ to $T_{m_3}$; otherwise the tree-growing process stops. If $m_{j+1}-m_j>1$, all added splits are tested.

The tree-growing process is (i) based on p-values, so hypotheses and significance levels need to be specified, and (ii) an iterative procedure, possibly resulting in a large number of tests.

Concerning (i): The null hypothesis, $H_0$, is that there is no signal in data. The alternative hypothesis, $H_A$, is that there is a sufficiently strong signal making a binary split appropriate. The significance level of the test can be seen as a subjectively chosen hyper-parameter, depending on the modeler's view on the Type I error.

Concerning (ii): We cannot perfectly adjust for multiple testing, but it is possible to use Bonferroni arguments to bound the Type I error from above. By doing so, the tree-growing process is stopped once the sum of the p-values is greater than the subjectively chosen overall significance level for testing the significance of the entire tree. If $m_{j+1}-m_j>1$, then more than one p-value is added to the sum. Since the p-value based stopping rule relies on a Bonferroni bound, this tree-growing procedure will be conservative, tending to avoid fitting too large trees to the data.

Relating to the previous paragraph, it is important to recall that the tree-growing process is based on a given sequence of nested greedily-grown $L_2$ CART regression trees, and it is whether these binary splits provide significant loss improvements or not that is being tested.
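The Bonferroni summation rule just described, accept growth steps while the running sum of per-split p-values stays at or below the overall level, has a very small core. The sketch below uses hypothetical p-values; the change-point-based approximation that actually produces them is developed later in the paper.

```python
# Sketch of the p-value summation stopping rule: accept splits of a nested
# tree sequence while the Bonferroni sum of per-split p-values stays at or
# below the overall significance level alpha.  p_values_for_step[j] holds the
# p-values of all splits added when going from T_{m_j} to T_{m_{j+1}}
# (possibly several, since m_{j+1} - m_j may exceed 1).

def stop_index(p_values_for_step, alpha):
    """Return the number of accepted growth steps."""
    total = 0.0
    for step, p_vals in enumerate(p_values_for_step):
        total += sum(p_vals)          # Bonferroni: sum the p-values
        if total > alpha:
            return step               # stop before accepting this step
    return len(p_values_for_step)

# toy example: strong signal in the first two steps, then noise
steps = [[0.001], [0.004, 0.002], [0.3], [0.5]]
print(stop_index(steps, alpha=0.05))  # → 2
```

Because the rule sums p-values rather than testing each split at level alpha, it is conservative in exactly the sense discussed above: the sum can only grow, so once noise-level splits appear the process halts quickly.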
In order to compute a p-value for such a split it is crucial to account for the fact that the split was found to be optimal in a step of the greedy recursive partitioning process that generated the tree. This is done by representing the tree-growing process as a certain change-point detection problem, building on results and constructions from [Yao and Davis, 1986]. The usefulness of these results for change-point detection when analysing regression trees was noted in [Shih and Tsai, 2004]. It is important to stress that the p-values used are defined with respect to loss improvements and not with respect to potential errors in the estimators for the mean values within a leaf. In the latter problem one needs to adjust for selective inference, and this is discussed in a CART-tree context in [Neufeld et al., 2022]. By focusing on the loss improvement and properly taking into account that the tested splits are locally optimal (as described above), selective inference will not be an issue here. Moreover, since the tree-growing process is based on a given sequence of nested CART trees, we do not address variable selection issues. For more on CART trees and variable selection, see [Shih and Tsai, 2004]. The p-values for loss improvements for a single locally optimally chosen
binary split can be calculated exactly for small sample sizes $n$, but in practice large sample sizes require approximations. In the current paper an asymptotic approximation is used, based on results from [Yao and Davis, 1986] for a single covariate. A contribution of the current paper is to show that for covariate vectors of arbitrary dimension, the accuracy of the p-value approximation for a single binary split does not deteriorate substantially if we increase the dimension of the covariate vector. The p-value approximation for an entire tree, accounting for multiple testing issues, results in

(a) a conservative stopping rule, given that the null hypothesis $H_0$ of no signal is true, i.e. the tree-growing process will not be stopped too late, since we are using a Bonferroni upper bound;

(b) detection of a not-too-weak signal with high probability, given a sufficient sample size, i.e. given that the alternative hypothesis $H_A$ is true, the signal will be detected as the sample size tends to infinity.

So far we have focused on deterministic p-value based early stopping when constructing a single greedily grown optimal $L_2$ CART tree. In practice, however, trees are commonly used as so-called "weak learners" in boosting. The use of p-value based early stopping in tree-based $L_2$ boosting is considered in Section 4. This is similar to the so-called ABT-machine introduced in [Huyghe et al., 2024], which uses another deterministic stopping rule (not based on e.g. cross-validation) derived from a sequence of nested trees obtained from so-called cost-complexity pruning, see [Breiman et al., 1984]. Although we focus only on CART trees, one may, of course, consider other types of regression trees and inference-based procedures to construct trees. For more on this, see e.g. [Hothorn et al., 2006]. Our main contribution.
Given an arbitrary sequence of nested $L_2$ CART trees, grown by greedy optimal recursive partitioning, we provide an easy-to-use deterministic stopping rule for deciding on the regression tree with suitable complexity. We allow for covariate vectors of arbitrary dimension, and the stopping rule is formulated in terms of an easily computable upper bound on the p-value corresponding to testing the hypothesis of no signal. Because of the upper bound, the stopping rule is conservative. However, we provide a theoretical guarantee that if there is signal, then we will detect its existence if the sample size is sufficiently large. In particular, it is unlikely that we will stop the tree-growing process too early. The asymptotic theoretical guarantee is confirmed by numerical experiments. Organisation of the paper. The remainder of the paper is structured as follows. Section 2 introduces $L_2$ CART trees and sequences of nested such trees. Section 2.1 presents and motivates the suggested stopping rule. Section 2.2 describes how the stopping rule naturally leads to a change-point-detection problem and presents theoretical results that guarantee statistical soundness of our approach for large sample sizes. Section 3 compares, for a single split, our approach to well-established regularisation techniques. Section 4 provides a range of numerical illustrations, both to clarify the finite-sample performance of our approach and to illustrate useful applications for tree-based boosting without cross-validation.
The proofs of the main results are found in the appendix.

2 Regression trees

The Classification and Regression Tree (CART) method was introduced in the 1980s and uses a greedy approach to build a piecewise constant predictor based on binary splits of the covariate space, one covariate at a time, see e.g. [Breiman et al., 1984]. If we let $x$ be a $d$-dimensional covariate vector with $x \in \mathcal{X} \subseteq \mathbb{R}^d$, a regression tree with $m$ leaves can be expressed as

$$x \mapsto T_m(x) := \sum_{k=1}^{m} \zeta_k \, 1\{x \in A_k\}, \qquad (1)$$

where $\zeta_k \in \mathbb{R}$, where $A_k \subset \mathcal{X}$, $\cup_{k=1}^{m} A_k = \mathcal{X}$, and where $1\{x \in A_k\}$ is the indicator such that $1\{x \in A_k\} = 1$ if $x \in A_k$, and $0$ otherwise. For binary split regression trees, having $m$ leaves corresponds to having made $m-1$ binary splits. The construction of a CART tree is based on recursive greedy binary splitting. A split is decided by, for each covariate dimension $j$, considering the best threshold value $\xi$ for the given covariate dimension, and finally choosing to split based on the best covariate dimension and the associated best threshold value. Splitting the covariate space $\mathcal{X}$ based on the $j$th covariate dimension and threshold value $\xi$ corresponds to the two regions

$$R_{\mathrm{left}}(j,\xi) = \{x \in \mathcal{X} : x_j \le \xi\}, \qquad R_{\mathrm{right}}(j,\xi) = \{x \in \mathcal{X} : x_j > \xi\}.$$

The CART algorithm estimates a regression tree by recursively minimising the empirical risk based on the observed data $(Y^{(1)}, X^{(1)}), \ldots, (Y^{(n)}, X^{(n)})$ that are independent copies of $(Y, X)$, where $Y$ is a real-valued response variable and $X$ is an $\mathcal{X}$-valued covariate vector. When using the $L_2$ loss and considering a split w.r.t. covariate $j$, this means that we want to minimise

$$\sum_{i : X^{(i)} \in R_{\mathrm{left}}(j,\xi)} \big(Y^{(i)} - \overline{Y}_{\mathrm{left}}(j,\xi)\big)^2 + \sum_{i : X^{(i)} \in R_{\mathrm{right}}(j,\xi)} \big(Y^{(i)} - \overline{Y}_{\mathrm{right}}(j,\xi)\big)^2, \qquad (2)$$

where $\overline{Y}_{\mathrm{left}}(j,\xi)$ is the average of all $Y^{(i)}$ for which $X^{(i)} \in R_{\mathrm{left}}(j,\xi)$, and similarly for $\overline{Y}_{\mathrm{right}}(j,\xi)$. A regression tree with a single binary split w.r.t. covariate $j$ and threshold value $\xi$ is therefore

$$T_2(x) = \overline{Y}_{\mathrm{left}}(j,\xi) \, 1\{x \in R_{\mathrm{left}}(j,\xi)\} + \overline{Y}_{\mathrm{right}}(j,\xi) \, 1\{x \in R_{\mathrm{right}}(j,\xi)\}.$$
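As a concrete illustration of the split search in (2), a minimal single-covariate version can be written as follows; the function name and the use of NumPy are our own choices, not part of the paper.

```python
import numpy as np

def best_split_sse(x, y):
    """For a single covariate, find the threshold xi minimising the L2 loss
    in (2): the sum of squared deviations from the left and right means.
    Returns (best_threshold, best_sse)."""
    order = np.argsort(x)
    x_s, y_s = x[order], y[order]         # order responses by covariate value
    best_sse, best_xi = np.inf, None
    for r in range(1, len(y_s)):          # split between positions r-1 and r
        left, right = y_s[:r], y_s[r:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_xi = sse, x_s[r - 1]
    return best_xi, best_sse
```

For example, for responses (0, 0, 10, 10) ordered by covariate values (0, 1, 2, 3), the middle split is perfect: the selected threshold is 1 and the remaining loss is 0.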
In order to ease notation, it is convenient to fix a covariate dimension index $j$ and consider the ordered pairs $(Y^{(1)}, X^{(1)}), \ldots, (Y^{(n)}, X^{(n)})$ of $(Y, X)$, where we assume ordered covariate values $X^{(1)}_j \le \cdots \le X^{(n)}_j$ and that the response variables appear in the order corresponding to the size of the covariate values. Hence, $(Y^{(1)}, X^{(1)})$ satisfies $X^{(1)}_j = \min_i X^{(i)}_j$, etc. A different choice of index $j$ would therefore imply a particular permutation of the $n$ response-covariate pairs. By suppressing the dependence on $j$, this allows us to introduce

$$S_{\le r} := \sum_{i=1}^{r} \big(Y^{(i)} - \overline{Y}_{\le r}\big)^2, \qquad S_{>r} := \sum_{i=r+1}^{n} \big(Y^{(i)} - \overline{Y}_{>r}\big)^2, \qquad S := S_{\le n}, \qquad (3)$$

where

$$\overline{Y}_{\le r} := \frac{1}{r} \sum_{i=1}^{r} Y^{(i)}, \qquad \overline{Y}_{>r} := \frac{1}{n-r} \sum_{i=r+1}^{n} Y^{(i)}.$$

That is, minimisation of (2) is equivalent to minimising $S_{\le r} + S_{>r}$ with respect to $r$, or alternatively we can consider maximising the relative $L_2$ loss improvement, given by

$$\frac{S - (S_{\le r} + S_{>r})}{S}. \qquad (4)$$

Further, note that unless we build balanced trees with a pre-specified number of splits we need to add a stopping criterion to the tree-growing process. Perhaps the most natural choice is to consider a threshold value, $\vartheta$, say, such that the recursive splitting only continues if the optimal $r$, denoted $r^*$, for the optimally chosen covariate dimension $j^* \in \{1, \ldots, d\}$ satisfies

$$\frac{S - (S_{\le r^*} + S_{>r^*})}{S} > \vartheta. \qquad (5)$$

This means that the threshold parameter $\vartheta$ functions as a hyper-parameter. In particular, if we let $T_m$ denote a recursively grown $L_2$ optimal CART tree with $m$ leaves created using the threshold parameter $\vartheta$, then for any
subtree $T_m$ of $T_{m'}$, $m < m'$, the corresponding threshold parameters satisfy $\vartheta > \vartheta'$. Threshold parameters $\vartheta_1 > \vartheta_2 > \ldots > \vartheta_\tau$ generate a sequence of nested trees $T_{m_1}, T_{m_2}, \ldots, T_{m_\tau}$ with $m_1 \le m_2 \le \ldots \le m_\tau$. In applications we will consider sequences $\vartheta_1 > \vartheta_2 > \ldots$ such that $1 = m_1 < m_2 < \ldots$. Note that such a decreasing sequence of threshold parameters will not necessarily result in a sequence of nested trees that only increases by one split at a time. One procedure to construct a sequence of nested trees is to first pick $\vartheta = 0$ and build a maximal CART tree, which is then pruned from the leaves to the root. One such procedure is the cost-complexity pruning introduced in [Breiman et al., 1984], which will likely lead to a sequence of nested trees where more than one leaf is added in each iteration. For more on this, see Section 3.1. The threshold parameter $\vartheta$ controls the complexity of the tree that is constructed using recursive binary splitting, but it is not clear how to choose $\vartheta$. One option is to base the choice of $\vartheta$ on out-of-sample validation techniques, such as cross-validation. The drawback of this is that the tree construction then becomes random: given a fixed dataset, repeated application of the procedure may generate different regression trees. We do not want a procedure for constructing regression trees to have this feature. The focus of the current paper is to start from a sequence of nested greedy binary split regression trees, from shallow to deep, and use a particular stopping criterion to decide when to stop the greedy binary splitting in the tree-growing process. The stopping criterion is based entirely on the data used for building the regression trees and is a deterministic mapping from the data to the elements in the sequence of regression trees.

2.1 The stopping rule

Our approach relies on all binary splits in the sequence of nested regression trees having been chosen in a greedy optimal manner.
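Taking the best relative loss improvement (4) over all split positions and all covariate dimensions gives the greedily optimal split statistic on which the stopping rule is built. A brute-force sketch (the naming is ours; the p-value upper bound derived from this statistic via the results of [Yao and Davis, 1986] is not reproduced here):

```python
import numpy as np

def u_max(X, y):
    """Maximal relative L2 loss improvement over all covariate dimensions j
    and split positions r, assuming the response y is not constant.
    Brute-force O(d * n^2) illustration."""
    n, d = X.shape
    S = ((y - y.mean()) ** 2).sum()       # total sum of squares
    best = 0.0                            # a split never increases the loss
    for j in range(d):                    # search every covariate dimension
        ys = y[np.argsort(X[:, j])]       # order responses by covariate j
        for r in range(1, n):             # every admissible split position
            left, right = ys[:r], ys[r:]
            s = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            best = max(best, (S - s) / S)
    return best
```

When some ordering of the responses by a covariate separates the data into two constant halves, the statistic attains its maximal value 1.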
That is, if we consider an arbitrary binary split in the sequence of nested trees, the relative reduction in squared error loss is given by the statistic

$$U_{\max} := \max_{1 \le j \le d} U_j, \qquad U_j := \max_{1 \le r \le n-1} \frac{S - (S_{\le r} + S_{>r})}{S}, \qquad (6)$$

where the sums $S_{\le r}$ and $S_{>r}$ depend on $j$ because of the implicit ordering of the terms as outlined above, see (3). Given any sample size $n$ and any observed value $u_{\mathrm{obs}}$ of the test statistic $U_{\max}$, we easily compute, under the null hypothesis of no signal, an upper bound $p_{\mathrm{obs}} \ge P_N(U_{\max} > u_{\mathrm{obs}})$, where the subscript $N$ emphasises the null hypothesis. Therefore, for a regression tree $T_m$ resulting from $m-1$ binary splits, it holds that

$$P_N\Big( \bigcup_{k=1}^{m-1} \{U_{\max,k} > u_{\mathrm{obs},k}\} \Big) \le \sum_{k=1}^{m-1} P_N(U_{\max,k} > u_{\mathrm{obs},k}) \le \sum_{k=1}^{m-1} p_{\mathrm{obs},k}.$$

Note that the summation is over all $m-1$ splits (or internal nodes) of the tree with $m$ leaves. We emphasise that, for every binary split $k$, $u_{\mathrm{obs},k}$ is observed and $p_{\mathrm{obs},k}$ is easily computed from $u_{\mathrm{obs},k}$. If, for a pre-chosen tolerance $\delta \in (0,1)$ close to zero,

$$\sum_{k=1}^{m-1} p_{\mathrm{obs},k} \le \delta, \qquad (7)$$

then we conclude that the event $\cup_{k=1}^{m-1} \{U_{\max,k} > u_{\mathrm{obs},k}\}$ is very unlikely and we reject the null