text string | source string |
|---|---|
Experiments - additional material. This section provides additional context for the experiments described in Section 3.2. Table 3 provides a summary of the best description length obtained by optimizing with a given method in a given setting. Table 3: Description length (train+test) in bytes for teacher-student models a... | https://arxiv.org/abs/2505.17469v1 |
1 and 6. Figure 7: ‘Description Length’ vs ‘α’ for different models; panels (a) (30, 0.08, MSE), (b) (300, 0.08, MSE), (c) (2000, 0.08, MSE). Description length (in bytes) isolines are color coded. Each point represents the median result from bootstrap sampling across multiple runs for a given regularization strength (α), with... | https://arxiv.org/abs/2505.17469v1 |
approach is given by the integral under the loss curve: ∫_0^1400 2.7553·(x−1.0)^(−0.0793) dx = 6349.41 (43). Converting from base e to storage size (in bytes), this yields approximately 550 GB for the online approach. Fixed-Model Approach. For the fixed-model approach, we use the final loss value (approximately 1.54 at 1400B to... | https://arxiv.org/abs/2505.17469v1 |
arXiv:2505.17800v1 [math.PR] 23 May 2025. Lq-maximal inequality for high dimensional means under dependence. Jonathan B. Hill∗, Dept. of Economics, University of North Carolina, Chapel Hill, NC. This draft: May 26, 2025. Abstract: We derive an Lq-maximal inequality for zero mean dependent random variables {x_t}_{t=1}^n on R^p, where... | https://arxiv.org/abs/2505.17800v1 |
argument classically laid out in Pollard (1984, Chapt. II.2) and van der Vaart and Wellner (1996, Chapts. 2.2–2.3). See also Section 2 below. Write max_i = max_{1≤i≤p} and max_{i,t} = max_{1≤i≤p} max_{1≤t≤n}, and define x̄_{i,n} := (1/n) Σ_{t=1}^n x_{i,t}. In this setting Nemirovski (2000) shows for any q ≥ 1 and p ≥ e^(q−1) (see, e.g., Buhlmann and... | https://arxiv.org/abs/2505.17800v1 |
Cheng, 2014; Chang, Jiang and Shao, 2023; Chang, Chen and Wu, 2024; Hill, 2024; Hill and Li, 2025). We initially assume x_{i,t} is bounded in order to yield a version of (1.2) by using telescoping sub-sample blocks. We then derive the main result by approximating x̄_{i,n} both by negligible truncation and blocking. We show a... | https://arxiv.org/abs/2505.17800v1 |
N_n := [n/b_n], and index sets B_l := {(l−1)b_n + 1, …, l·b_n} with l = 1, …, N_n. Assume N_n b_n = n throughout to reduce notation. Generate bounded iid random variables {ε_l}_{l=1}^{N_n} independent of {x_t}_{t=1}^n, where P(∣ε_l∣ ≤ c) = 1, c ∈ (0,∞), and define a sequence of multipliers {η_t}_{t=1}^n by η_t = ε_l if t ∈ B_l. Define partial sums and a vari... | https://arxiv.org/abs/2505.17800v1 |
≤ z) − P(max_i ∣(1/√n) Σ_{l=1}^{N_n} ε_l S_{n,l}(i)∣ ≤ z) ≤ ρ_n + ρ*_n. Now replace x_{i,t} with x_{i,t}/√n to yield for each x ≥ 0: P(max_i ∣x̄_{i,n}∣ ≤ x) − P(max_i ∣(1/n) Σ_{l=1}^{N_n} ε_l S_{n,l}(i)∣ ≤ x) (2.4) = P(max_i ∣Σ_{t=1}^n x_{i,t}/√n∣ ≤ √n·x) − P(max_i ∣Σ_{l=1}^{N_n} ε_l S_{n,l}(i)/√n∣ ≤ √n·x) ≤ ρ_n + ρ*_n. Recall X^(n) := {x_t}_{t=1}^n. Boundedness, twice a change of variables, and (2.4) imply E max_i ∣x̄... | https://arxiv.org/abs/2505.17800v1 |
Thus under data homogeneity M_n/n^{1/(r−1)} → ∞. By multiple applications of the Cauchy–Schwarz, Hölder, Lyapunov and Markov inequalities, and (2.7) with r > 1: A_n ≤ (1/n) Σ_{s,t=1}^n E[x_{i,t} x_{i,s} 1{∣x_{i,s}∣ > M_n}] + (1/n) Σ_{s,t=1}^n E[x_{i,s} 1{∣x_{i,s}∣ ≤ M_n} x_{i,t} 1{∣x_{i,t}∣ > M_n}] ≤ (1/n) Σ_{s,t=1}^n (E x²_{i,t})^{1/2} (E x²_{i,s} 1{∣x_{i,s}∣ > M_n})^{1/2} + (1/n) Σ_{s,t=1}^n (E x²_{i,s} 1{∣x_{i,s}∣ ≤ M_n})^{1/2} ... | https://arxiv.org/abs/2505.17800v1 |
decomposition E max_i ∣x̄_{i,n}∣^q = E[max_i ∣x̄_{i,n}∣^q 1{max_i ∣x̄_{i,n}∣ ≤ M_n}] + E[max_i ∣x̄_{i,n}∣^q 1{max_i ∣x̄_{i,n}∣ > M_n}] (2.14) =: I_{n,1} + I_{n,2}. Thus M_n imparts two opposing forces. First, large M_n yields a better truncation approximation E[max_i ∣x̄_{i,n}∣^q 1{max_i ∣x̄_{i,n}∣ ≤ M_n}] ≈ E max_i ∣x̄_{i,n}∣^q, hence smaller {B_n(p), D_n(p)}. Second, under dependence we cannot use a... | https://arxiv.org/abs/2505.17800v1 |
Assume {x_t} are zero mean [−M_n, M_n]^p-valued random variables. Suppose x_{i,t} = g_{i,t}(ϵ_{i,t}, ϵ_{i,t−1}, …) for some measurable functions g_{i,t}(·) and iid sequences {ϵ_{i,t}}. Define a functional ψ_ζ(x) := exp{∣x∣^ζ} − 1, ζ > 0, and an exponential Orlicz norm ‖x‖_{ψ_ζ} := inf{λ > 0 : E[ψ_ζ(x/λ)] ≤ 1}. Now let ‖x_{i,t}‖_{ψ_ζ} ≤ χ_n for some ζ ≥ 1, all... | https://arxiv.org/abs/2505.17800v1 |
b, m, χ) > 0, with χ ∈ [0, (1/9) ∧ b) and b < 1/4. Let max_{i,t} ‖x_{i,t}‖_{2r} = O(1). Then ln(p) = o(n^a) and β* ≃ 2/√b − 4 with a = (1/4 − b) ∧ (3q/4 − m(3q/2 − (r−1))) ∧ (1/7)(3q + 2/3 − 6χ − 6qm) ∧ (1/2)(3q/2 + b − χ − 3qm) ∧ (√b/(2 − 2√b))(1 − 2√b − χ − 2√b[1 − 2√b]). For example, if q = 1 and r = 1.5, with bound growth m = 1/2, homogeneity χ = 0, and block size b = 1/8... | https://arxiv.org/abs/2505.17800v1 |
and products (e.g. Wu, 2005); however, it generally need not carry over to general measurable transforms, including 1{∣x_{i,t}∣ ≤ M_n}. In Assumption 2, below, we merely assume x^(M)_{i,t} is physical dependent. However, under mild conditions it holds when x_{i,t} is physical dependent. In order to see this, define I_{i,n,t} := 1{∣x_{i,t}∣ ≤ M_n... | https://arxiv.org/abs/2505.17800v1 |
typical “moment-memory” trade-off. Or: (ii) Let max_i Σ_{l=0}^∞ l·θ^(3)_{i,n}(l) ≤ ξ_n and max_t ‖x_{i,t}‖_q = o(M_n^{1−1/q}). Then max_i Σ_{l=0}^∞ l·θ^{M(3)}_{i,n}(l) ≤ ξ_n by the arguments leading to (3.5). Under Lq-boundedness and Proposition 2.2.b, E max_i ∣x̄_{i,n}∣^q ≲ (2 ln(2p)/N_n)^{q/2} √( E max_{i,l} ∣(1/b_n) S^(M)_{n,l}(i)∣^q · E( max_i ∣(1/n) Σ_{l=1}^{N_n} S^(M)_{n,l}(i)∣ )^q ) (3... | https://arxiv.org/abs/2505.17800v1 |
Gaussian approximation and comparison theory to sidestep symmetrization. We also require a concentration bound for tail probabilities that appears new, and works roughly like a Nemirovski bound for tail measures. The latter arises from the truncation approximation error, leading to a higher moment requirement than in... | https://arxiv.org/abs/2505.17800v1 |
n∈N = 1 + (2qλ/(c n^{γ/c} b^{1/c})) ∫_{exp{λM_n^q}}^∞ u^{1/c−1} exp{−u} du ≤ 1 + (2qλ/(c n^{γ/c} b^{1/c})) ∫_{exp{λM_n^q}}^∞ exp{−u} du = 1 + (2qλ/(c n^{γ/c} b^{1/c})) exp{−exp{λM_n^q}}. Now exploit ln(1+z) ≤ z for z > 0 and λ ≥ 1 to yield I_n ≤ (1/λ) ln(p(1 + (2qλ/(c n^{γ/c} b^{1/c})) exp{−exp{λM_n^q}})) ≤ (1/λ) ln(p) + (2qλ/(c n^{γ/c} b^{1/c})) exp{−exp{λM_n^q}} ≤ (1/λ) ln(p) + (2qλ/(c n^{γ/c} b^{1/c})) exp{−exp{M_n^q}}. Choose λ = √(c n^γ... | https://arxiv.org/abs/2505.17800v1 |
ϵ_{i,t} are iid for each i. Let {ϵ′_{i,t}} be an independent copy of {ϵ_{i,t}}, and define the coupled process x′_{i,t}(m) := g_{i,t}(ϵ_{i,t}, …, ϵ_{i,t−m+1}, ϵ′_{i,t−m}, ϵ_{i,t−m−1}, …), m = 0, 1, 2, …. Now define the Lp-physical dependence measure θ^(p)_{i,t}(m) := ‖x_{i,t} − x′_{i,t}(m)‖_p. We have θ^(q)_{i,n,t}(m) ≤ 2^{1+2/q} ‖x_{i,t}‖_q^{1/q} α^{1/(qr)}_{i,j,n}(m... | https://arxiv.org/abs/2505.17800v1 |
since max_{i,t} ‖x_{i,t}‖_q ≲ qχ_n from (3.1). QED. Proof of Lemma 3.2. Claim (a). Assumptions 2.1–2.3 in Zhang and Cheng (2018) [ZC] hold by Assumption 2. Hence (3.7) holds by Theorem 2.1 in ZC when q ≥ 2. Claim (b). Recall y^(M)_{i,t} := x^(M)_{i,t} − E x^(M)_{i,t} with x^(M)_{i,t} := x_{i,t} 1{∣x_{i,t}∣ ≤ M_n} + M_n 1{∣x_{i,t}∣ > M_n}, and S^(M)_{n,l}(i) := Σ_{t=(l−1)... | https://arxiv.org/abs/2505.17800v1 |
…_{t=(l−1)b_n+1}^{l·b_n} ε_l (y^(M)_{i,t} − y^(M)′_{i,t}(m))‖_3 = 2 max_l (E[ E[ ∣(1/√b_n) Σ_{t=(l−1)b_n+1}^{l·b_n} ε_l (y^(M)_{i,t} − y^(M)′_{i,t}(m))∣³ ∣ Y_{i,n} ] ])^{1/3} ≲ max_l ‖y^(M)_{i,t} − y^(M)′_{i,t}(m)‖_3 ≲ θ^{M(3)}_{i,n}(m), (5.15) because by independence of ε_l, ∣ε_l∣ ≤ c a.s., and Theorems 1 and 2(i) in Wu (2005), max_l ‖(1/√b_n) Σ_{t=(l−1)b_n+1}^{l·b_n} ε_l (y^(M)_{i,t} − y^(M)′_{i,t}(... | https://arxiv.org/abs/2505.17800v1 |
bootstrap for maxima of sums of high-dimensional random vectors. Ann. Statist. 41, 2786–2819. Chernozhukov, V., Chetverikov, D., Kato, K., 2015. Comparison and anti-concentration bounds for maxima of Gaussian random vectors. Probab. Theory Rel. 162, 47–70. Chernozhukov, V., Chetverikov, D., Kato, K., 2017. Central limi... | https://arxiv.org/abs/2505.17800v1 |
1738, pp. 87–285. Lecture Notes in Mathematics 1738. de la Peña, V., Ibragimov, R., Sharakhmetov, S., 2003. On extremal distributions and sharp Lp-bounds for sums of multilinear forms. Ann. Probab. 31, 630–675. Pollard, D., 1984. Convergence of Stochastic Processes. Springer-Verlag, New York. Rio, E., 1993. Covarianc... | https://arxiv.org/abs/2505.17800v1 |
arXiv:2505.17851v1 [math.ST] 23 May 2025. Optimal Decision Rules for Composite Binary Hypothesis Testing under Neyman-Pearson Framework. Yanglei Song, Berkan Dulek, and Sinan Gezici, Fellow, IEEE. Abstract: The composite binary hypothesis testing problem within the Neyman-Pearson framework is considered. The goal is to ma... | https://arxiv.org/abs/2505.17851v1 |
false-alarm constraint for all possible distributions under the null hypothesis [2], [8], [9]. In contrast, when the alternative hypothesis is composite, various approaches are employed in the literature. One approach is to seek a uniformly most powerful (UMP) decision rule that maximizes the detection probability fo... | https://arxiv.org/abs/2505.17851v1 |
Our primary motivation stems from recent advances in behavioral utility-based detection problems [23]–[26]. Specifically, for decision-making tasks involving humans in the loop, their cognitive biases and subjective perception of probabilities, gains, and losses need to be taken into account. These effects can be incor... | https://arxiv.org/abs/2505.17851v1 |
corollaries are derived for the special case of single-parameter exponential family distributions (see Corollary 7, Theorem 9 and Theorem 11). Numerical examples from behavioral utility-based detection theory are also provided to corroborate the theoretical results. II. PROBLEM FORMULATION. Consider a compact parameter... | https://arxiv.org/abs/2505.17851v1 |
the case of finite Θ for that part. The intuition is that, under the assumptions in part 3), any p ∈ P is a differentiable function on U. However, C(Θ) is sufficiently rich that in any neighborhood of p ∈ P, there exist continuous but non-differentiable functions on U. A. Generalized Bayes Rules. Denote by M(Θ), M_+(Θ) and M_1(... | https://arxiv.org/abs/2505.17851v1 |
Bayes rules in (1). However, at this generality, we cannot explicitly characterize the support of π_+ or π_− for each power function p ∈ P. In subsequent sections, we analyze specific optimization problems and derive additional results concerning the structure of optimal decision rules. Remark 4. The key strategy for proving... | https://arxiv.org/abs/2505.17851v1 |
= ω_v(t) for t ∈ [0,1], where v > 0 is user-specified, and ω_v(t) := t^v / (t^v + (1−t)^v)^{1/v}. (5) To derive stronger results, we impose the following assumption in certain cases, which requires g to be strictly increasing. This assumption is satisfied by the examples mentioned above. Assumption 3. g′(t) > 0 for 0 < t < 1. Below, we deno... | https://arxiv.org/abs/2505.17851v1 |
family distributions [30, Chapter 4.2]. Specifically, let f_θ(y) = c(θ) h(y) e^{θT(y)}, (10) where h: Y → [0,∞) and T: Y → R are given, and the domain is D := {θ ∈ R : c(θ)^{−1} := ∫_Y h(y) e^{θT(y)} µ(dy) < ∞}. Assume that D is open and that Θ ⊂ D is compact. Corollary 7. For the exponential family distributions in (10), Assumption 1 and condition (8... | https://arxiv.org/abs/2505.17851v1 |
^{0.4} ω_v(p(θ;δ)) dθ + ∫_{0.5}^{0.6} ω_v(p(θ;δ)) dθ, subject to p(0.5;δ) ≤ α, (12) where we recall that ω_v(·) is defined in (5), β > 0, and α ∈ (0,1). Fig. 1: The value of the objective in (11) as a function of ℓ for β = 2/3 and v = 0.69. By Corollaries 6 and 7, there exists an... | https://arxiv.org/abs/2505.17851v1 |
-ALARM CONTROL. In this section, we consider the case where both Θ_0 and Θ_1 are composite, and aim to test: H_0: θ ∈ Θ_0 vs. H_1: θ ∈ Θ_1 := Θ∖Θ_0. (13) Let Λ_0 and Λ_1 be two probability measures on Θ_0 and Θ_1 respectively, α ∈ (0,1) a user-specified level, and g: [0,1] → R a measurable function. Our goal is to solve the following problem: max_{δ∈∆} ∫_{Θ_1}... | https://arxiv.org/abs/2505.17851v1 |
on Θ_1, α ∈ (0,1) a user-specified level, and g: [0,1] → R a measurable function. In this setup, a UMP test typically does not exist. Theorem 10. Suppose that Assumptions 1, 2, and 3 hold, and that Θ_0 is closed. There exists an optimal rule δ* ∈ ∆ for problem (15) such that sup_{θ∈Θ_0} p(θ;δ*) = α and that for some constant κ ≥ 0, a measu... | https://arxiv.org/abs/2505.17851v1 |
of Lemma 1. Let θ_n, n ≥ 1, and θ be in Θ such that lim_{n→∞} θ_n = θ. For any procedure δ ∈ ∆, due to Assumption 1, we have p(θ_n;δ) = ∫_Y δ(y) f_{θ_n}(y) µ(dy) → ∫_Y δ(y) f_θ(y) µ(dy) = p(θ;δ), and thus p(·;δ) ∈ C(Θ), which completes the proof of the first statement. Further, let p_1, p_2 ∈ P. By definition, there exist δ_1, δ_2 ∈ ∆ such that p_k(·) = p(·;δ_k) fo... | https://arxiv.org/abs/2505.17851v1 |
point f ∈ F is a support point of F. Proof. Let f ∈ F be arbitrary. Since F is convex and closed and has an empty interior, by the Bishop–Phelps Theorem [29, Theorem 7.43], the set of support points of F ⊂ C(Θ) is dense in F. Thus, there exists a sequence {f_n : n ≥ 1} ⊂ F such that f_n is a support point of F for n ≥ 1, and ‖f_n − f‖_∞ → 0 as n → ∞... | https://arxiv.org/abs/2505.17851v1 |
Y δ*(y) H(y) µ(dy). Since δ̃ ∈ ∆ is arbitrary, we must have that δ* solves the following optimization problem: max_{δ∈∆} ∫_Y δ(y) H(y) µ(dy), subject to ∫_Y δ(y) f_{θ_0}(y) µ(dy) ≤ α. Define c* := ∫_Y δ*(y) f_{θ_0}(y) µ(dy). Then, δ* also solves the following optimization problem: max_{δ∈∆} ∫_Y δ(y) H(y) µ(dy), subject to ∫_Y δ(y) f_{θ_0}(y) µ(dy) = c*. Since δ* is f... | https://arxiv.org/abs/2505.17851v1 |
by applying nearly identical arguments as in the proof of Theorem 5, replacing f_{θ_0} with f̃_{Θ_0}. Before proving Theorem 9, we start with supporting lemmas. Lemma 13. Let a < b be two real numbers. Let U be a random variable taking values in [a,b], and V be a random variable taking values in (−∞,a] ∪ [b,∞) such that E[V²] < ∞ and... | https://arxiv.org/abs/2505.17851v1 |
= 0 for some x ∈ R, then ψ″(x) > 0. Then, by Lemma 14, ψ′ has at most one root on R. If ψ′ has no root, then ψ(T(y)) is a monotone function of T(y). If ψ′ has exactly one root, then ψ(T(y)) is either “first increasing, then decreasing” or “first decreasing, then increasing” with T(y). Since Θ_0 = [a,b], as ∣T(y)∣ → ∞, ψ(T(y)) → ∞... | https://arxiv.org/abs/2505.17851v1 |
pp. 289–337, 1933. [2] H. V. Poor, An Introduction to Signal Detection and Estimation, 2nd ed. New York: Springer-Verlag, 1994. [3] E. L. Lehmann and J. P. Romano, Testing Statistical Hypotheses, 3rd ed. New York: Springer, 2005. [4] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume II: Detecti... | https://arxiv.org/abs/2505.17851v1 |
known hypothesis,” in 2022 IEEE Information Theory Workshop (ITW) , 2022, pp. 131–136. [23] S. Gezici and P. K. Varshney, “On the optimality of likelihood ratio test for prospect theory based binary hypothesis testing,” IEEE Signal Process. Lett. , vol. 25, no. 12, pp. 1845–1849, Dec. 2018. [24] B. Geng, S. Brahma, T. ... | https://arxiv.org/abs/2505.17851v1 |
arXiv:2505.17961v1 [stat.ME] 23 May 2025. Federated Causal Inference from Multi-Site Observational Data via Propensity Score Aggregation. Rémi Khellaf∗, INRIA, Université de Montpellier, INSERM, France, remi.khellaf@inria.fr; Aurélien Bellet, INRIA, Université de Montpellier, INSERM, France, aurelien.bellet@inria.fr; Julie Joss... | https://arxiv.org/abs/2505.17961v1 |
countries—making it challenging to aggregate into a centralized dataset. This difficulty is particularly acute in domains like healthcare, where privacy concerns, regulatory barriers, data ownership and logistical issues (such as responsibility for data storage and governance) complicate data sharing. Federated Learnin... | https://arxiv.org/abs/2505.17961v1 |
1990), we consider random variables (X, H, W, Y(1), Y(0)), where X ∈ R^d represents patient covariates, H ∈ [K] indicates site membership, W ∈ {0,1} denotes the binary treatment, and Y(1) and Y(0) are the potential outcomes under treatment and control, respectively. We assume that the Stable Unit Treatment Value Assumption (SUTVA) hold... | https://arxiv.org/abs/2505.17961v1 |
natural baseline for estimating the ATE across sites is a two-stage meta-analysis approach (Burke et al., 2017), wherein each site independently estimates the relevant nuisance parameters and communicates only the resulting ATE estimates for aggregation. We will need the following assumption. Assumption 4 (Local over... | https://arxiv.org/abs/2505.17961v1 |
over parametric model parameters, retaining only sites that conform to a shared specification. In contrast, our approach does not assume a common propensity score function: each site can adopt its own, potentially distinct model. To address site-level heterogeneity, Xiong et al. (2023) assume a parametric logistic prop... | https://arxiv.org/abs/2505.17961v1 |
weights e_k(X). (4) Building on Eq. 3 and 4, we define our oracle Federated (A)IPW estimators, denoted Fed-(A)IPW, where the corresponding weights will be estimated via federated learning (see Section 4.2). Definition 3 (Oracle federated estimators). τ̂^{fed*}_IPW = Σ_{k=1}^K (n_k/n) τ̂^{fed(k)}_IPW, τ̂^{fed*}_AIPW = Σ... | https://arxiv.org/abs/2505.17961v1 |
and generalized random forests for non-parametric estimation (Lee et al., 2010). A key advantage of our approach is its flexibility: different estimation methods can be used across sites, tailored to local data characteristics or computational constraints. As previously discussed, this procedure does not require the ... | https://arxiv.org/abs/2505.17961v1 |
They allow for flexible parameterizations and can be estimated through classification models, which are generally easier to learn than densities—particularly in high-dimensional settings where density estimation suffers from high sample complexity and poor scalability (Sugiyama et al., 2012). 4.2.3 Estimation of Fed-AIPW. Fed-AI... | https://arxiv.org/abs/2505.17961v1 |
al., 2003). In the No local overlap setting (Figure 2a), meta-analysis estimators are undefined due to the absence of treated individuals at one site. In contrast, our federated estimators remain unbiased under both DGPs, as Assumption 3 holds (with global overlap O_global ≈ 6.22). In the Poor local overlap setting (Figure ... | https://arxiv.org/abs/2505.17961v1 |
inference from decentralized observational data. We introduce two methods for federating local propensity scores—via estimated membership probabilities or density ratios—both of which are designed to be robust to covariate shift and heterogeneity in propensity scores. Our methods improve covariate overlap, leading ... | https://arxiv.org/abs/2505.17961v1 |
359(9302):248–252. Guo, T., Karimireddy, S. P., and Jordan, M. I. (2024). Collaborative heterogeneous causal inference beyond meta-analysis. arXiv preprint arXiv:2404.15746 . Guo, Z., Li, X., Han, L., and Cai, T. (2025). Robust inference for federated meta-learning. Journal of the American Statistical Association , pag... | https://arxiv.org/abs/2505.17961v1 |
causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):688–701. Splawa-Neyman, J., Dabrowska, D. M., and Speed, T. P. (1990). On the Application of Probability Theory to Agricultural Experiments. Essay on Principles. Section 9. Statistical Science, 5(4):465–472. Stich... | https://arxiv.org/abs/2505.17961v1 |
the variance of the oracle multi-site IPW estimator is V[τ̂*_IPW] = (1/n)( E[Y_i(1)²/e(X_i)] + E[Y_i(0)²/(1−e(X_i))] − τ² ). For the variance of the oracle multi-site centralized AIPW, we first notice that sinc... | https://arxiv.org/abs/2505.17961v1 |
the same condition on strictness and equality, E[Y_i(1)²/e(X_i)] ≤ E[Σ_{k=1}^K ω_k(X) Y_i(1)²/e_k(X_i)] = Σ_{k=1}^K E[ E[1[H_i = k] ∣ X_i] Y_i(1)²/e_k(X_i) ] = Σ_{k=1}^K E[... | https://arxiv.org/abs/2505.17961v1 |
γ^(good)_2 = [.15, −.15, .15, −.15, .15, −.15, .15, −.15, .15, −.15]. Table 1: Common simulation parameters. DGP A-specific settings are shown in Table 2, where J_d is the d×d matrix of ones, and I_d is the d×d identity matrix. Parameters (Center 1 / Center 2 / Center 3): n_k = 2000; D_k = N(µ_k, Σ_k); µ_k = (1,…,1) ∈ R^d / (1.5,…,1.5) ∈ R^d / (3,…,3) ∈ R^d; Σ_k = I... | https://arxiv.org/abs/2505.17961v1 |
arXiv:2505.18130v1 [stat.ME] 23 May 2025. Loss Functions for Measuring the Accuracy of Nonnegative Cross-Sectional Predictions. Charles D. Coleman, ORCID: https://orcid.org/0000-0001-6940-8117, Timely Analytics, LLC. E-mail: info@timely-analytics.com. May 26, 2025. This paper reports the general results of research originall... | https://arxiv.org/abs/2505.18130v1 |
predictions is also assumed fixed.4 Linear regression is then used to estimate the parameters of the loss function.5 An upshot of this process is that, in general, no single “ideal” measure of accuracy exists, as the evaluation of accuracy depends on the evaluator.6 In the special case in which predictions represent reso... | https://arxiv.org/abs/2505.18130v1 |
context of cross-sectional population estimates: “‘Optimum’ estimates, for example, are only optimum under narrow specifications that do not hold exactly in practice.” (Original underline.) 7 Equivalently, one can also refer to the absolute percentage error (APE), but this will complicate the arguments made later on. APE... | https://arxiv.org/abs/2505.18130v1 |
when the true value is 1,000,000 is akin to a roundoff error. In short, error variance increases in the actual value. Second, when making predictions, the coefficient of variation of the errors, σ/µ, where σ is the standard deviation and µ is the expected value, generally decreases in A.7 We state this formally as: Assumption 3: ∂... | https://arxiv.org/abs/2505.18130v1 |
by equations (1a) and (1b) increases in A for any given absolute relative error. This is assured whenever q > −p, or, equivalently, p + q > 0. 3 Estimating the Loss Function. This Section assumes the existence of an impartial decision-maker with well-formed preferences, such as an investor motivated by profits affected by predi... | https://arxiv.org/abs/2505.18130v1 |
L and U are expressed in terms of percentage points. We can see why points with U = 0 are removed: at these points U = 100 for some ϵ_j > 0. Higher values of ϵ_j produce the same value of L, in contradiction to Assumption 2. Then equation (1b) is estimated by the regression equation formed by taking its logarithm and evaluating ... | https://arxiv.org/abs/2505.18130v1 |
existence of a function that aggregates individuals’ preferences and obeys some weak conditions underlies the last claim. A final alternative is to adopt a loss function which works “well” in most cases as a convention in the manner of Keyfitz (1979). The Mean Absolute Percentage Error (MAPE), discussed in Section 6, i... | https://arxiv.org/abs/2505.18130v1 |
predictions. The practice of putting weights on sets of predictions is a form of model averaging.26 The weight put on a particular set of predictions is often used as the weight for the model which produced those predictions. The averaged model is then used to generate new sets of predictions. Suppose there are two sets... | https://arxiv.org/abs/2505.18130v1 |
Σ_{i=1} w_i = 1. 28 The new estimates may have to be constrained to sum up to a predetermined overall total or several predetermined subtotals. See Subsection 4.1. 29 However, no method can simultaneously satisfy all fairness criteria (Balinski and Young, 1979, th. 6). For example, Ernst (1992) proves that Hill’s method satisfi... | https://arxiv.org/abs/2505.18130v1 |
3.0 90.00 2 50000 1000 220.00 500 1 5.0 850 1.7 14.45 3 10000 200 2 4.00 100 1 1.0 170 1.7 2.89 4 5000 100 2 2.00 50 1 0.5 85 1.7 1.45 5 1000 20 2 0.40 10 1 0.1 17 1.7 0.29 6 100 2 2 0.04 10 10 1.0 2 2.0 0.04 Means 211.08 2.5 2.9 1.97 18.19. 5 Example of Evaluating Predictions Using Loss Functions. Loss functions can pro... | https://arxiv.org/abs/2505.18130v1 |
taken, then multiplied by 100 for reexpression in terms of percentage points. 32 This example is from Coleman (2000). 33 Wald (1950, p. 8) introduces weight functions in a subsection entitled “Losses Due to Possible Wrong Terminal Decisions and Cost of Experimentation.” 6 Comparison to Other Metrics. This Section compar... | https://arxiv.org/abs/2505.18130v1 |
Estimates (William R. Bell, personal correspondence). 34 It should also be noted that additive separability and the von Neumann-Morgenstern expected utility axioms are violated as well. See footnote 10. Coleman (2002) further clarifies this point by showing that quantile total loss functions (e.g., MedAPE and 90PE of th... | https://arxiv.org/abs/2505.18130v1 |
Available at http://www.census.gov/population/www/documentation/twps0005/twps0005.html. Ernst, L.R. (1992). Apportionment Methods for the House of Representatives and the Court Challenges. SRD Research Report 92/06, Washington, D.C.: U.S. Census Bureau. Available at http://www.census.gov/srd/papers/pdf/rr92-6.pdf. 11 F... | https://arxiv.org/abs/2505.18130v1 |
A NEW MEASURE OF DEPENDENCE: INTEGRATED R2. MONA AZADKIA, POUYA ROUDAKI. Abstract. We propose a new measure of dependence that quantifies the degree to which a random variable Y depends on a random vector X. This measure is zero if and only if Y and X are independent, and equals one if and only if Y is a measurable function o... | https://arxiv.org/abs/2505.18146v1 |
4, 11, 18, 21, 30, 32, 41, 43, 48, 51, 59, 63, 65, 66, 86, 87, 89, 93, 103, 104]. Building on this line of work, the first contribution of this paper is a new coefficient of dependence with the following properties (1) it has a simple expression, (2) it is fully non-parametric, (3) it has no tuning parameters, (4) ther... | https://arxiv.org/abs/2505.18146v1 |
maximum s_max and µ(s_max) > 0; i.e., µ has a point mass at s_max. In this case, since 1{Y > s_max} = 0 we have Var(1{Y > s_max}) = 0. In addition, since 1{Y > s_max} is a deterministic constant, it can be considered independent of X or a measurable function of X. Hence, to ensure that ν is a reasonable measure of dependence, we n... | https://arxiv.org/abs/2505.18146v1 |
the main question is whether ν can be efficiently estimated from data. We propose ν_n(Y,X) to estimate ν(Y,X) and study its properties. Our data consists of n i.i.d. copies (Y_1,X_1), …, (Y_n,X_n) of the pair (Y,X) for n ≥ 3. For each i, let R_i be the rank of Y_i, i.e. R_i = Σ_{j=1}^n 1{Y_j ≤ Y_i}. For each pair i and j in 1, …... | https://arxiv.org/abs/2505.18146v1 |
neighbour graphs and, as a result, generally lacks scale invariance; that is, changes in the scale of certain covariates can significantly alter the graph structure. To address this issue, a rank-based variant, similar to that proposed in [93], can be considered. (10) Note that ν(Y,X) is not symmetric in Y and X. We i... | https://arxiv.org/abs/2505.18146v1 |
when interaction effects or nonlinear relationships are present. Such problems can sometimes be overcome by model-free methods [1, 8, 13, 15, 17, 20, 37, 52, 55, 94]. These, too, are powerful and widely used techniques, and they perform better than model-based methods if interactions are present. On the flip side, th... | https://arxiv.org/abs/2505.18146v1 |
variable X_j with j ∉ V to the set X_V increases the dependence with Y by at least δ. The main result of this section, stated below, shows that if δ is bounded away from zero, then under certain regularity conditions on the distribution of (Y,X), the subset selected by FORD is sufficient with high probability. It is worth no... | https://arxiv.org/abs/2505.18146v1 |
n converges almost surely to ν. Proposition 6.2. Suppose that X and Y are independent and both have continuous distributions. Then E[ν^{1-dim}_n(Y,X)] = 2/n and lim_{n→∞} n·Var(ν^{1-dim}_n(Y,X)) = π²/3 − 3. In addition to Proposition 6.2, we conjecture that under the same assumptions √n·ν^{1-dim}_n(Y,X) converges in distribution to N(0, π²/3... | https://arxiv.org/abs/2505.18146v1 |
it quantifies how much σ⁻¹π oscillates as we move from i to i+1. However, unlike Spearman’s footrule, these oscillations are weighted: d_ν(I, σ⁻¹π) = d_ν(σ, π) considers not only the magnitude ∣σ⁻¹π(i) − σ⁻¹π(i+1)∣, but also the positions of σ⁻¹π(i) and σ⁻¹π(i+1) within the range. Oscillations occurring near the top or bott... | https://arxiv.org/abs/2505.18146v1 |
n), where µ_n = 2/n and σ²_n = (π²/3 − 3)/n. The mean of ν^{1-dim}_n(Y,X) is approximately 0.314, with a standard deviation of about 0.02. The resulting histogram, shown in Figure 3, exhibits an excellent fit with a normal distribution having the same mean and standard deviation. Figure 3: Histogram of ... | https://arxiv.org/abs/2505.18146v1 |
the following alternative (7) Heteroskedastic and Sinusoid: Y = cos(20π(1 + 10λε)X²). Figure 5 illustrates that in this case ν^{1-dim}_n and ν_n appear more powerful than other tests, including ξ_n. This example demonstrates that the new... | https://arxiv.org/abs/2505.18146v1 |
{100, 500, 1000}, covariates X = (X_1, …, X_p) ∼ N(0, I_p) where p = 1000, and independent noise variable ε: • LM (linear model): Y = 3X_1 + 2X_2 − X_3 + ε, ε ∼ N(0,1) • Nonlin1 (nonlinear model): Y = X_1X_2 + sin(X_1X_3) • Nonlin2 (non-additive noise): Y = ∣X_1 + ε∣·sin(X_2 − X_3), ε ∼ Uniform[0,1] • Osc1 (oscillatory): Y = sin(X_1)/√∣X_1∣ + X_2X_3 • Osc2 (oscill... | https://arxiv.org/abs/2505.18146v1 |
representing the positions of the WECs, denoted as X_1, X_2, …, X_16 and Y_1, Y_2, …, Y_16, along with 16 features corresponding to the absorbed power outputs, denoted as P_1, P_2, …, P_16. The target variable, Powerall, represents the total power output of the WEC farm. The goal is to predict the total po... | https://arxiv.org/abs/2505.18146v1 |
expression example in [22] and investigate the effectiveness of ν1-dim n(Y, X) in identifying genes with oscillating transcript levels over time. Specifically, we apply it to the curated Spellman dataset available in the R package minerva , which contains gene expression data for 4381 transcripts measured at 23 time po... | https://arxiv.org/abs/2505.18146v1 |
dashed line) is overlaid using a smoothing parameter of 0.2. the smallest q-values under ν^{1-dim}_n among those not identified by ξ_n, highlighting cases where ν^{1-dim}_n shows strong confidence in detection. The second row of Figure 7 displays two genes selected by ν^{1-dim}_n but not by ξ_n, which exhibit the largest q-... | https://arxiv.org/abs/2505.18146v1 |
for almost all values of t with respect to µ̃, Var(1{Y > t}) is non-zero and hence ν(Y,X) is well-defined. Note that by the law of total variance and non-negativity of variance, we have 0 ≤ Var(E[1{Y > t} ∣ X]) ≤ Var(1{Y > t}), which gives ν(Y,X) ∈ [0,1]. When Y is independent of X, for all t ∈ R we have E[1{Y > t} ∣ X] = E[1{Y ... | https://arxiv.org/abs/2505.18146v1 |
respect to µ. Therefore, G_X(t) almost surely takes only the values 0 and 1 with respect to µ. Let E be the (X-measurable) event that G_X(t) ∈ {0,1} for almost all values of t, and note that P(E) = 1. Let a_X be the largest value such that G_X(a_X) = 1 and b_X the smallest value such that G_X(b_X) = 0. Note that a_X ≤ b_X. Suppose {a... | https://arxiv.org/abs/2505.18146v1 |
First, let’s study the case when µ is continuous. In this case, by conditioning on the value of F_n(Y_j), we have E[1{Y_j ∈ I^j_i} 1{F_n(Y_j) ∉ {1, 1/n}} / (F(Y_j)(1−F(Y_j)))] = (1/n) Σ_{r=1}^n E[1{Y_j ∈ I^j_i} 1{F_n(Y_j) ∉ {1, 1/n}} / (F(Y_j)(1−F(Y_j))) ∣ F_n(Y_j) = r/n] = (1/n) Σ_{r=2}^{n−1} E[E[1{Y_j ∈ I^j_i} ∣ Y_j] / (F(Y_j)(1−F(Y_j)))... | https://arxiv.org/abs/2505.18146v1 |
above equality converges to zero by the Vitali convergence theorem. Therefore lim_{n→∞} E[Q′_n] = (1/µ(S̃)) ∫_{S̃} E[F_X(t)(1−F_X(t))] / (F(t)(1−F(t))) dµ(t) = Q. Putting steps I and II together gives lim_{n→∞} E[Q_n] = Q. □ Lemma 8.2. For Q_n defined in (6), there are constants C_1 and C_2 such that P(∣Q_n − E[Q_n]∣ ≥ t) ≤ C_1 e^{−C_2 n t² / log² n}. Proof. We ap... | https://arxiv.org/abs/2505.18146v1 |
difference 1/(F_{n,j}(Y_j)(1−F_{n,j}(Y_j))) − 1/(F^k_{n,j}(Y_j)(1−F^k_{n,j}(Y_j))) for those indices i such that 1{Y_j ∈ I^j_i} = 1. We first consider the case where there are no ties among the Y_i’s. In this setting, for each j, Lemma 11.4 in [4] implies that there are at most n·C(p)·min{F_n(Y_j) − n⁻¹, 1 − F_n(Y_j)} such in... | https://arxiv.org/abs/2505.18146v1 |
change and F^k_n(Y_j) = n⁻¹ afterwards, then the contribution of that j to ∣Q_n − Q^k_n∣ is bounded by C(p)·2/((n−2)(n−n_0−1)) = O(n⁻¹). For every other index j, the argument from case (i) applies. (c) Replacing Y_k with Y′_k changes both the sample minima and maxima: indeed, Y_k ≤ Y_j ≤ Y′_k for every j ≠ k. Assume there exist indices j_1 and j_2 ... | https://arxiv.org/abs/2505.18146v1 |
…, X_n ∉ E). Now note that P(X_2 ∉ E, …, X_n ∉ E ∣ X_1) = (1 − P(X_2 ∈ E ∣ X_1))^{n−1} = (1 − λ(E))^{n−1}, where λ is the law of X. Let A be the collection of all small sets with λ-mass less than δ. Since there are at most C·K^p·ε⁻ᵖ small sets, we get E[(1 − λ(E))^{n−1}] ≤ (1−δ)^{n−1} + P(X_1 ∈ A) ≤ (1−δ)^{n−1} + C... | https://arxiv.org/abs/2505.18146v1 |
nothing to prove. So let us assume that $k\le p$. Since $E'$ has happened, this implies that for any $j\notin V_{k-1}$,
\[
\nu(Y, X_{V_{k-1}\cup\{j\}}) - \nu(Y, X_{V_{k-1}}) \le \nu_n(Y, X_{V_k}) - \nu_n(Y, X_{V_{k-1}}) + \frac{\delta}{4} \le \frac{3\delta}{4}.
\]
Then note that by the definition of $\delta$, $V_{k-1}$ must be a sufficient set. $\square$

Lemma 8.6. The event $E'$ implies $E$. Proof. Suppose $E'$ has happened but there is no $k$ such that 10...
...$\frac{(n-\ell)(\ell-1)(\ell-2)}{(\ell-1)(n-\ell)} = \frac{4(n-3)}{n(n-1)}\,2H_{n-2}$.

MONA AZADKIA, POUYA ROUDAKI

\[
E[A_3] = \frac{4}{n(n-1)}\sum_{\ell=2}^{n-1}\frac{(\ell-2)(n-\ell-1)}{(\ell-1)(n-\ell)}
= \frac{4(n-2)}{n(n-1)}\left(1 - \frac{2H_{n-2}}{n-2} + \frac{2H_{n-2}}{(n-1)(n-2)}\right).
\]
\[
E[A_4] = \frac{4}{n}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{1}{(n-\ell)(k-1)}.
\]
\[
E[A_5] = \frac{4}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\frac{(n-\ell)+(k-\ell-3)+(k-1)}{(n-\ell)(k-1)}
= \frac{4}{n(n-1)}\sum_{\ell=2}^{n-2}\sum_{k=\ell+1}^{n-1}\left(\frac{1}{k-1}+\frac{2}{n-\ell}-\ldots\right.
\]
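The closed form stated for $E[A_3]$ can be checked against its defining sum in exact rational arithmetic; the sketch below assumes the sum for $E[A_3]$ as stated above and verifies the harmonic-number identity for small $n$:

```python
from fractions import Fraction

def harmonic(m):
    """H_m = sum_{i=1}^m 1/i, computed exactly as a Fraction."""
    return sum(Fraction(1, i) for i in range(1, m + 1))

def EA3_sum(n):
    """E[A_3] via the defining sum over l = 2, ..., n-1."""
    s = sum(Fraction((l - 2) * (n - l - 1), (l - 1) * (n - l))
            for l in range(2, n))
    return Fraction(4, n * (n - 1)) * s

def EA3_closed(n):
    """E[A_3] via the closed form involving H_{n-2}."""
    H = harmonic(n - 2)
    return (Fraction(4 * (n - 2), n * (n - 1))
            * (1 - 2 * H / (n - 2) + 2 * H / ((n - 1) * (n - 2))))

# The two expressions agree exactly for every small n checked.
for n in range(4, 30):
    assert EA3_sum(n) == EA3_closed(n), n
print("closed form for E[A3] verified for n = 4..29")
```

Using `Fraction` avoids floating-point round-off, so agreement is verified exactly rather than to a tolerance.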
...jee’s correlation coefficient”. In: (2024). URL: https://arxiv.org/abs/2410.11418.
[19] Candès, E. and Tao, T. “The Dantzig selector: statistical estimation when p is much larger than n (with discussion and rejoinder)”. In: Ann. Stat. 35.6 (2007), pp. 2313–2404.
[20] Candès, E. et al. “Panning for gold: ‘model-X’...
S. “Quantifying directed dependence via dimension reduction”. In: J. Multivariate Anal. 201 (2024), p. 21.
[42] Gamboa, F., Klein, T., and Lagnoux, A. “Sensitivity analysis based on Cramér–von Mises distance”. In: SIAM/ASA J. Uncertain. Quantif. 6 (2018), pp. 522–548.
[43] Gamboa, F. et al. “Global sensitivity ana...
M. “Asymptotic Normality of Chatterjee’s Rank Correlation”. In: arXiv preprint arXiv:2408.11547 (2024).
[64] Levcopoulos, C. and Petersson, O. “Heapsort—adapted for presorted files”. In: Algorithms and Data Structures: Workshop WADS ’89, Ottawa, Canada, August 17–19, 1989, Proceedings 1. Springer, 1989, pp. 499–509.
[65...
of Chatterjee’s rank correlation”. In: Biometrika 109.2 (2022), pp. 317–333.
[87] Shi, H., Drton, M., and Han, F. “On Azadkia–Chatterjee’s conditional dependence coefficient”. In: Bernoulli 30.2 (2024), pp. 851–877.
[88] Sklar, M. “Fonctions de répartition à n dimensions et leurs marges”. In: Annales de l’ISUP. Vol...
arXiv:2505.18769v1 [stat.ME] 24 May 2025

Regularisation of CART trees by summation of p-values

Nils Engler, Mathias Lindholm, Filip Lindskog and Taariq Nazar

May 27, 2025

Abstract
The standard procedure to decide on the complexity of a CART regression tree is to use cross-validation with the aim of obtaining a predict...
in the sequence, and no tree appears more than once. Note that $T_{m_j}$ and $T_{m_{j+1}}$ may differ by more than one leaf, i.e. $m_{j+1}-m_j\ge 1$. The tree-growing process starts from the root node $T_{m_1}$ by testing whether increasing the tree complexity from $T_{m_1}$ to $T_{m_2}$ corresponds to a significant improvement in terms of the $L_2$ loss. If this is t...
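One such growth step can be sketched as follows: find the binary split that most reduces the $L_2$ loss, then accept it only if the improvement is large enough. The hard threshold below is only a stand-in for the paper's p-value-based significance test, and the data are made up for illustration:

```python
def sse(ys):
    """Sum of squared errors around the mean: the L2 loss of one leaf."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    """Greedy CART step for a single covariate: the binary split point
    that minimises the combined L2 loss of the two children."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best = (float("inf"), None)
    for cut in range(1, len(xs)):
        left = [ys[order[i]] for i in range(cut)]
        right = [ys[order[i]] for i in range(cut, len(xs))]
        loss = sse(left) + sse(right)
        if loss < best[0]:
            best = (loss, (xs[order[cut - 1]] + xs[order[cut]]) / 2)
    return best

# Toy data with an obvious jump between x = 0.3 and x = 0.6.
xs = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
ys = [1.0, 1.1, 0.9, 3.0, 3.2, 2.9]

loss_before = sse(ys)
loss_after, threshold_x = best_split(xs, ys)
improvement = loss_before - loss_after
accept = improvement > 1.0  # stand-in threshold; the paper tests significance instead
print(loss_before, loss_after, threshold_x, accept)
```

The split lands at $x = 0.45$, between the two clusters, and the large loss reduction makes it accepted under any reasonable threshold.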
binary split can be calculated exactly for small sample sizes $n$, but in practice large sample sizes require approximations. In the current paper an asymptotic approximation is used, based on results from [Yao and Davis, 1986] for a single covariate. A contribution of the current paper is to s...
The proofs of the main results are found in the appendix.

2 Regression trees

The Classification and Regression Tree (CART) method was introduced in the 1980s and uses a greedy approach to build a piecewise constant predictor based on binary splits of the covariate space, one covariate at a time, see e.g. [Breiman et ...
subtree $T_m$ of $T_{m'}$, $m<m'$, the corresponding threshold parameters satisfy $\vartheta > \vartheta'$. Threshold parameters $\vartheta_1 > \vartheta_2 > \ldots > \vartheta_\tau$ generate a sequence of nested trees $T_{m_1}, T_{m_2}, \ldots, T_{m_\tau}$ with $m_1 \le m_2 \le \ldots \le m_\tau$. In applications we will consider sequences $\vartheta_1 > \vartheta_2 > \ldots$ such that $1 = m_1 < m_2 < \ldots$. Note that such a decreasing sequenc...
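The mechanism behind the nesting is simple: lowering the threshold can only enlarge the set of accepted splits, hence the tree sizes are non-decreasing. A toy illustration with hypothetical improvement values (not the paper's algorithm):

```python
# A split is "accepted" when its loss improvement exceeds the threshold theta.
# Lowering theta can only enlarge the accepted set, which is what makes the
# trees generated by theta_1 > theta_2 > ... nested, with m_1 <= m_2 <= ...
improvements = [5.0, 3.2, 3.2, 1.1, 0.4, 0.1]   # hypothetical split gains
thresholds = [4.0, 2.0, 1.0, 0.05]              # decreasing theta sequence

accepted = [{i for i, g in enumerate(improvements) if g > t} for t in thresholds]
sizes = [len(a) for a in accepted]

# Nestedness: each accepted set contains the previous one.
assert all(a <= b for a, b in zip(accepted, accepted[1:]))
print(sizes)  # non-decreasing sizes
```

Note the equal gains 3.2 and 3.2 enter together, mirroring the remark that consecutive trees may differ by more than one leaf.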