\[
\mathbb{E}_j\big[\beta_{i_1i_2j}(z_1)Q^{-1}_{i_1i_2j}(z_1)r_{i_2}r_{i_2}^{\top}Q^{-1}_{i_1i_2j}(z_1)\check{Q}^{-1}_{i_1i_2j}(z_2)\big]
\times|b_1(z_1)|^2\operatorname{tr}H_n(z_1)\big(r_{i_2}r_{i_2}^{\top}-N^{-1}\Phi\big)\,\mathbb{E}_j\big[\beta_{i_2i_1j}(z_1)Q^{-1}_{i_2i_1j}(z_1)r_{i_1}r_{i_1}^{\top}Q^{-1}_{i_2i_1j}(z_1)\check{Q}^{-1}_{i_2j}(z_2)\big],
\]
\[
S^{(22)}_1=-\sum_{i_1\neq i_2<j}\mathbb{E}\operatorname{tr}H_n(z_1)\big(r_{i_1}r_{i_1}^{\top}-N^{-1}\Phi\big)\,\mathbb{E}_j\big[\beta_{i_1i_2j}(z_1)Q^{-1}_{i_1i_2j}(z_1)r_{i_2}r_{i_2}^{\top}Q^{-1}_{i_1i_2j}(z_1)\check{Q}^{-1}_{i_1i_2j}(z_2)\big]
\times|b_1(z_1)|^2\operatorname{tr}H_n(z_1)\big(r_{i_2}r_{i_2}^{\top}-N^{-1}\Phi\big)\,\mathbb{E}_j\big[Q^{-1}_{i_2i_1j}(z_1)\check{Q}^{-1}_{i_2j}(z_2)\big],
\]
\[
S^{(221)}_1=\sum_{i_1\neq i_2<j}\mathbb{E}\operatorname{tr}H_n(z_1)\big(r_{i_1}r_{i_1}^{\top}-N^{-1}\Phi\big)\,\mathbb{E}_j\big[\beta_{i_1i_2j}(z_1)Q^{-1}_{i_1i_2j}(z_1)r_{i_2}r_{i_2}^{\top}Q^{-1}_{i_1i_2j}(z_1)\check{Q}^{-1}_{i_1i_2j}(z_2)\big]
\times|b_1(z_1)|^2\operatorname{tr}H_n(z_1)\big(r_{i_2}r_{i_2}^{\top}-N^{-1}\Phi\big)\,\mathbb{E}_j\big[Q^{-1}_{i_2i_1j}(z_1)\check{\beta}_{i_2i_1j}(z_2)\check{Q}^{-1}_{i_2i_1j}(z_2)r_{i_1}r_{i_1}^{\top}\check{Q}^{-1}_{i_2i_1j}(z_2)\big],
\]
\[
S^{(222)}_1=-\sum_{i_1\neq i_2<j}\mathbb{E}\operatorname{tr}H_n(z_1)\big(r_{i_1}r_{i_1}^{\top}-N^{-1}\Phi\big)\,\mathbb{E}_j\big[\beta_{i_1i_2j}(z_1)Q^{-1}_{i_1i_2j}(z_1)r_{i_2}r_{i_2}^{\top}Q^{-1}_{i_1i_2j}(z_1)\check{Q}^{-1}_{i_1i_2j}(z_2)\big]
\times|b_1(z_1)|^2\operatorname{tr}H_n(z_1)\big(r_{i_2}r_{i_2}^{\top}-N^{-1}\Phi\big)\,\mathbb{E}_j\big[Q^{-1}_{i_2i_1j}(z_1)\check{Q}^{-1}_{i_2i_1j}(z_2)\big].
\]
With (18), we have $S^{(1)}_1=O(n)$, $S^{(21)}_1=O(n)$, $S^{(221)}_1=O(n)$, $S^{(222)}_1=0$, which gives us $S_1=O(n)$. Similarly, we can show $S_2=O(n)$ and $S_3=O(n)$. Hence, we obtain (22).

For $A_1(z_1,z_2)$, we have
\[
\mathbb{E}\Big|A_1(z_1,z_2)+\frac{j-1}{n^2}\,b_1(z_2)\operatorname{tr}\big(\mathbb{E}_j\big[Q^{-1}_j(z_1)\big]Q^{-1}_j(z_2)\big)\operatorname{tr}\big(Q^{-1}_j(z_2)H_n(z_1)\big)\Big|\le Kp^{1/2},
\]
from which we obtain
\[
\operatorname{tr}\big(\mathbb{E}_j\big[Q^{-1}_j(z_1)\big]Q^{-1}_j(z_2)\big)=-\operatorname{tr}\big(H_n(z_1)Q^{-1}_j(z_2)\big)-\frac{j-1}{n^2}\,b_1(z_1)b_1(z_2)\operatorname{tr}\big(\mathbb{E}_j\big[Q^{-1}_j(z_1)\big]Q^{-1}_j(z_2)\big)\operatorname{tr}\big(Q^{-1}_j(z_2)H_n(z_1)\big)+A_4(z_1,z_2),
\]
where $\mathbb{E}|A_4(z_1,z_2)|\le Kn^{1/2}$. By a similar strategy to the proof of (22), we have
\[
\mathbb{E}\big|\mathbb{E}_j\operatorname{tr}b_1(z_2)A(z_2)H_n(z_1)\big|\le\sqrt{\mathbb{E}\big|\mathbb{E}_j\operatorname{tr}b_1(z_2)A(z_2)H_n(z_1)\big|^2}\le Kn^{-1/2},
\]
from which we obtain
\[
\operatorname{tr}\big(\mathbb{E}_j\big[Q^{-1}_j(z_1)\big]Q^{-1}_j(z_2)\big)=\operatorname{tr}\big(H_n(z_1)H_n(z_2)\big)+\frac{j-1}{n^2}\,b_1(z_1)b_1(z_2)\operatorname{tr}\big(\mathbb{E}_j\big[Q^{-1}_j(z_1)\big]Q^{-1}_j(z_2)\big)\operatorname{tr}\big(H_n(z_2)H_n(z_1)\big)+A_5(z_1,z_2),
\]
where $\mathbb{E}|A_5(z_1,z_2)|\le Kn^{1/2}+Knp^{-1/2}$.

Define $g_n(z)=\sqrt{c_N}\,\mathbb{E}\beta_1(z)$. Similarly to (B.40)-(B.41) in Gao et al. (2017), we then have
\[
g_n(z)=\sqrt{c_N}\,\mathbb{E}\beta_1(z)=-\frac{c_N+\sqrt{c_N}\,z}{\frac{1}{c_n}\,\mathbb{E}\frac{1}{\sqrt{c_N}\,s_n(z)+1}-\frac{c_n}{c_N+\sqrt{c_N}\,z}},
\]
and with (18), we have
\[
\big|\sqrt{c_N}\,b_n(z)-g_n(z)\big|=\sqrt{c_N}\,\big|b_n(z)-\mathbb{E}\beta_1(z)\big|\le Kp^{-1/2},\qquad |b_n(z)-b_1(z)|\le Kn^{1/2}p^{-3/2}, \qquad (23)
\]
which implies
\[
\operatorname{tr}\big(\mathbb{E}_j\big[Q^{-1}_j(z_1)\big]Q^{-1}_j(z_2)\big)=\operatorname{tr}\big(H_n(z_1)H_n(z_2)\big)+\frac{j-1}{n^2}\,\frac{g_n(z_1)g_n(z_2)}{c_N}\operatorname{tr}\big(\mathbb{E}_j\big[Q^{-1}_j(z_1)\big]Q^{-1}_j(z_2)\big)\operatorname{tr}\big(H_n(z_2)H_n(z_1)\big)+A_6(z_1,z_2),
\]
where $\mathbb{E}|A_6(z_1,z_2)|\le Kn^{1/2}+Knp^{-1/2}$.

Recall $H_n(z)=\big(\sqrt{c_N}+z-p^{-1}Nb_1(z)\big)^{-1}I_n$ and let
\[
d_n(z_1,z_2)=\frac{1}{n}\operatorname{tr}H_n(z_1)H_n(z_2),\qquad a_n(z_1,z_2)=g_n(z_1)g_n(z_2)\,d_n(z_1,z_2).
\]
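The relation above determines $\operatorname{tr}\big(\mathbb{E}_j[Q^{-1}_j(z_1)]Q^{-1}_j(z_2)\big)$ only implicitly, since this trace appears on both sides. Solving the linear relation makes the next step explicit; the rearrangement below is our own reading of the argument, not a display stated in this form in the text:

```latex
% Write T_j := tr( E_j[Q_j^{-1}(z_1)] Q_j^{-1}(z_2) ).  The fixed-point relation
% T_j = tr(H_n(z_1)H_n(z_2)) + ((j-1)/n^2)(g_n(z_1)g_n(z_2)/c_N) T_j tr(H_n(z_1)H_n(z_2)) + A_6
% is linear in T_j, hence
\[
T_j=\frac{\operatorname{tr}\big(H_n(z_1)H_n(z_2)\big)+A_6(z_1,z_2)}
{1-\dfrac{j-1}{n^2}\,\dfrac{g_n(z_1)g_n(z_2)}{c_N}\operatorname{tr}\big(H_n(z_1)H_n(z_2)\big)}.
\]
% With d_n and a_n as just defined, the coefficient in the denominator is
% ((j-1)/(n c_N)) a_n(z_1,z_2); if c_N is (up to normalization) the ratio p/N,
% this is of order ((j-1)/p) a_n(z_1,z_2), which is exactly the form of the
% denominator entering the sum that defines J in the sequel.
```

The identification of $c_N$ with a dimension-to-sample ratio in the comment is an assumption made for illustration; the text defines $c_N$ elsewhere.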
Since $c_nb_n(z_1)b_n(z_2)d_n(z_1,z_2)/a_n(z_1,z_2)\to 1$, $J$ can be written as
\[
J=\frac{1}{p}\,a_n(z_1,z_2)\sum_{j=1}^{p}\frac{1}{1-\frac{j-1}{p}a_n(z_1,z_2)}+A_7(z_1,z_2),
\]
where $\mathbb{E}|A_7(z_1,z_2)|\le Kn^{-1/2}$. Note that the limit of $a_n(z_1,z_2)$ is
\[
a(z_1,z_2)=\frac{1}{(s(z_1)+z_1)(s(z_2)+z_2)}.
\]
Thus the limit of $\frac{\partial^2}{\partial z_2\partial z_1}J$ in probability is
\[
\frac{\partial^2}{\partial z_2\partial z_1}\int_0^{a(z_1,z_2)}\frac{1}{1-z}\,dz
=\frac{\partial}{\partial z_2}\,\frac{\partial a(z_1,z_2)/\partial z_1}{1-a(z_1,z_2)}
=\frac{s'(z_1)s'(z_2)}{[s(z_1)-s(z_2)]^2}-\frac{1}{(z_1-z_2)^2}
=\frac{s^2(z_1)s^2(z_2)}{\big(s^2(z_1)-1\big)\big(s^2(z_2)-1\big)\big[s(z_1)s(z_2)-1\big]^2}.
\]
Thus we complete the proof of Lemma 5.7.

Supplementary Material

Supplementary Material of "On eigenvalues of a renormalized sample correlation matrix"
This supplementary document contains the proofs of Lemmas 5.1, 5.4, 5.5 and 5.6.

References

Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis, 3rd ed. Wiley Series in Probability and Statistics. Wiley-Interscience [John Wiley & Sons], Hoboken, NJ.
Bai, Z. and Silverstein, J. W. (1998). No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices. The Annals of Probability 26. https://doi.org/10.1214/aop/1022855421
Bai, Z. and Silverstein, J. W. (2004). CLT for linear spectral statistics of large-dimensional sample covariance matrices. The Annals of Probability 32 553-605. https://doi.org/10.1214/aop/1078415845
Bai, Z. and Silverstein, J. W. (2010). Spectral Analysis of Large Dimensional Random Matrices, 2nd ed. Springer.
Bai, Z. D. and Yin, Y. Q. (1988). Convergence to the semicircle law. The Annals of Probability 16 863-875.
Bao, Z. (2015). On asymptotic expansion and central limit theorem of linear eigenvalue statistics for sample covariance matrices when N/M → 0. Theory of Probability & Its Applications 59 185-207. https://doi.org/10.1137/S0040585X97T987089
Bao, Z., Pan, G. and Zhou, W. (2012). Tracy-Widom law for the extreme eigenvalues of sample correlation matrices. Electronic Journal of Probability 17, no. 88, 32 pp. https://doi.org/10.1214/EJP.v17-1962
Billingsley, P. (1968). Convergence of Probability Measures. Wiley, New York.
Chen, B. B. and Pan, G. M. (2012). Convergence of the largest eigenvalue of normalized sample
https://arxiv.org/abs/2505.08210v1
covariance matrices when p and n both tend to infinity with their ratio converging to zero. Bernoulli 18 1405-1420. https://doi.org/10.3150/11-BEJ381
Chen, B. and Pan, G. (2015). CLT for linear spectral statistics of normalized sample covariance matrices with the dimension much larger than the sample size. Bernoulli 21 1089-1133. https://doi.org/10.3150/14-BEJ599
Gao, J., Han, X., Pan, G. and Yang, Y. (2017). High dimensional correlation matrices: the central limit theorem and its applications. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 79 677-693. https://doi.org/10.1111/rssb.12189
Heiny, J. (2022). Large sample correlation matrices: a comparison theorem and its applications. Electronic Journal of Probability 27 1-20. https://doi.org/10.1214/22-EJP817
Jiang, T. (2004). The limiting distributions of eigenvalues of sample correlation matrices. Sankhyā: The Indian Journal of Statistics 66 35-48.
Jiang, T. (2019). Determinant of sample correlation matrix with application. Annals of Applied Probability 29 1356-1397. https://doi.org/10.1214/17-AAP1362
El Karoui, N. (2009). Concentration of measure and spectra of random matrices: with applications to correlation matrices, elliptical distributions and beyond. Annals of Applied Probability 19 2362-2405.
Mestre, X. and Vallet, P. (2017). Correlation tests and linear spectral statistics of the sample correlation matrix. IEEE Transactions on Information Theory 63 4585-4618. https://doi.org/10.1109/TIT.2017.2689780
Pan, G. M. and Zhou, W. (2008). Central limit theorem for signal-to-interference ratio of reduced rank linear receiver. The Annals of Applied Probability 18. https://doi.org/10.1214/07-aap477
Qiu, J., Li, Z. and Yao, J. (2023). Asymptotic normality for eigenvalue statistics of a general sample covariance matrix when p/n → ∞ and applications. Annals of Statistics 51 1427-1451. https://doi.org/10.1214/23-aos2300
Schott, J. R. (2005). Testing for complete independence in high dimensions. Biometrika 92 951-956. https://doi.org/10.1093/biomet/92.4.951
Wang, L. and Paul, D. (2014). Limiting spectral distribution of renormalized separable sample covariance matrices when p/n → 0. Journal of Multivariate Analysis 126 25-52. https://doi.org/10.1016/j.jmva.2013.12.015
Xiao, H. and Zhou, W. (2010). Almost sure limit of the smallest eigenvalue of some sample correlation matrices. Journal of Theoretical Probability 23 1-20. https://doi.org/10.1007/s10959-009-0270-2
Yin, Y., Zheng, S. and Zou, T. (2023). Central limit theorem of linear spectral statistics of high-dimensional sample correlation matrices. Bernoulli 29 984-1006. https://doi.org/10.3150/22-BEJ1487
Yin, Y., Li, C., Tian, G.-L. and Zheng, S. (2022). Spectral properties of rescaled sample correlation matrix. Statistica Sinica 32 2007-2022.
Yu, L., Xie, J. and Zhou, W. (2023). Testing Kronecker product covariance matrices for high-dimensional matrix-variate data. Biometrika 110 799-814. https://doi.org/10.1093/biomet/asac063
arXiv:2505.08262v1 [cs.LG] 13 May 2025

Super-fast Rates of Convergence for Neural Networks Classifiers under the Hard Margin Condition

Nathanael Tepakbong∗1, Ding-Xuan Zhou†2, and Xiang Zhou‡3

1 Department of Data Science, City University of Hong Kong, Hong Kong SAR
2 School of Mathematics and Statistics, The University of Sydney, Sydney, NSW 2006, Australia
3 Department of Mathematics, City University of Hong Kong, Hong Kong SAR

May 24, 2025

Abstract

We study the classical binary classification problem for hypothesis spaces of Deep Neural Networks (DNNs) with ReLU activation under Tsybakov's low-noise condition with exponent $q>0$, and its limit case $q\to\infty$, which we refer to as the hard margin condition. We show that DNNs which minimize the empirical risk with square loss surrogate and $\ell_p$ penalty can achieve finite-sample excess risk bounds of order $O(n^{-\alpha})$ for arbitrarily large $\alpha>0$ under the hard-margin condition, provided that the regression function $\eta$ is sufficiently smooth. The proof relies on a novel decomposition of the excess risk which might be of independent interest.

Keywords: neural network; approximation theory; convergence rate; hard margin; excess risk

1 Introduction

In this article, we study the problem of classifying high-dimensional data points with binary labels. It is common knowledge that, without any additional regularity assumptions on the problem structure, any classifier trained to solve this task will have arbitrarily slow rates of convergence as the dimensionality increases; this is commonly referred to as the curse of dimensionality (CoD). It has, however, been observed that many models used in practice, especially Deep Neural Networks in recent years, are able to solve extremely high-dimensional classification tasks efficiently, at a rate which seemingly does not suffer from the CoD (Goodfellow et al., 2016; Krizhevsky et al., 2012).

∗ ntepakbo-c@my.cityu.edu.hk. † dingxuan.zhou@sydney.edu.au. ‡ xizhou@cityu.edu.hk.
These gaps between theory and practical observation can be explained by adding suitable regularity assumptions on the problem at hand. In the framework of supervised binary classification, such regularity assumptions often take the form of margin conditions: first introduced in the seminal work (Mammen and Tsybakov, 1999), they typically characterize the behaviour of the data distribution near the decision boundary, which is the region where classification is hardest. Over the years, many CoD-free rates of convergence for classifiers induced by various hypothesis spaces have been established thanks to these margin conditions (Tsybakov, 2004; Audibert and Tsybakov, 2007). Another remarkable fact highlighted by these results is that margin conditions do not only lead to CoD-free rates, but also to "fast" rates, faster than $O(n^{-1/2})$, and sometimes to "super-fast" rates, faster than $O(n^{-1})$, when the strongest version of these margin conditions is assumed.

Notable examples of hypothesis spaces for which these super-fast (sometimes even exponential) rates of convergence have been observed include local polynomial estimators (Audibert and Tsybakov, 2007), support vector machines (Steinwart and Scovel, 2005; Steinwart and Christmann, 2008; Cabannes and Vigogna, 2023) and Reproducing Kernel Hilbert Spaces (RKHS) (Koltchinskii and Beznosova, 2005; Smale and Zhou, 2007; Vigogna et al., 2022). More recently, it has even been shown that for data coming from an infinite-dimensional Hilbert space, the Delaigle-Hall condition (Delaigle and
Hall, 2012), which can be thought of as an infinite-dimensional analogue of the classical margin conditions, can lead to super-fast rates of convergence for RKHS classifiers (Wakayama and Imaizumi, 2024).

Perhaps surprisingly, however, for Deep Neural Network (DNN) hypothesis spaces, no such "super-fast" rates of convergence have been shown to hold, even under the strongest margin and regularity assumptions. This fact seemingly contradicts the observation that DNNs outperform all other traditional methods by far when it comes to high-dimensional classification. Could neural networks truly be inferior to traditional methods in the "hard-margin" regime? In this work, we answer this question in the negative by showing arbitrarily fast rates of convergence for DNN classifiers under the hard-margin condition. Before presenting our setup and results in greater detail, we briefly review related literature in the next section.

1.1 Related works

When considering a binary classification problem on $[0,1]^d$ with labels $\{1,-1\}$, there are different possible objects which can be used to characterize the regularity of the problem:

• the Bayes regression function $\eta: x\in[0,1]^d\mapsto\mathbb{E}[Y\mid X=x]$, which, when normalized, represents the probability of each class;

• the Bayes classifier $c$ induced by the Bayes regression function, $c: x\mapsto\mathrm{sign}(\eta(x))$. It is the optimal classifier in the sense that it minimizes the expected 0-1 loss over all admissible classifiers, and it is thus what we are implicitly trying to learn;

• the decision region $\Omega := c^{-1}(\{1\})$ and the induced decision boundary $\partial\Omega$.

The margin condition which we refer to in this work, originally introduced in (Mammen and Tsybakov, 1999), assumes that for all $t>0$, $P(|\eta(X)|\le t)\lesssim t^q$, where $q>0$ is a constant called the margin exponent (note that, depending on the source, this is also referred to as a low-noise condition).
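As a concrete toy illustration (our own, not an example from the paper), the margin condition can be checked numerically for the one-dimensional regression function $\eta(x)=2x-1$ with $X$ uniform on $[0,1]$, for which $P(|\eta(X)|\le t)=t$ exactly, so the condition holds with $q=1$ and $C=1$; flattening $\eta$ away from zero gives the hard-margin case:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200_000)
eta = 2.0 * x - 1.0                      # Bayes regression function of the toy model

# Tsybakov: P(|eta(X)| <= t) <= C * t^q.  Here P(|eta(X)| <= t) = t, so q = 1, C = 1.
probs = {t: np.mean(np.abs(eta) <= t) for t in (0.1, 0.3, 0.5)}

# Hard-margin variant: push eta away from 0, so P(|eta(X)| > delta) = 1 for delta < 0.5.
eta_hard = np.where(eta >= 0.0, 0.5, -0.5)
hard_ok = float(np.mean(np.abs(eta_hard) > 0.4))   # equals 1.0
```

The Monte Carlo estimates of $P(|\eta(X)|\le t)$ match $t$ up to sampling error, while the flattened regression function satisfies the hard-margin condition with margin $\delta=0.5$.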
In (Kim et al., 2021), it has been shown that such a margin condition, coupled with additional assumptions on respectively the regression function $\eta$, the decision boundary $\partial\Omega$, or the probability for data points to be near the decision boundary $\partial\Omega$, leads to minimax optimal fast rates of convergence for sparse DNN classifiers obtained by hinge-loss empirical risk minimization. For instance, if $\eta$ is assumed to be Hölder continuous, they prove the excess risk bound
\[
\mathcal{E}(\hat f_{\mathrm{DNN}})\lesssim\Big(\frac{\log^3 n}{n}\Big)^{\frac{\beta(q+1)}{\beta(q+2)+d}},
\]
where $\beta$ is the Hölder exponent of $\eta$. As we can see, when the margin exponent $q\to\infty$, their result leads to the "fast rate" of $O(n^{-1})$.

In a similar vein, by assuming different kinds of regularity on these objects, and leveraging recent advances on the approximation rates and complexity measures of DNN hypothesis spaces, various minimax optimal rates of convergence of this kind have been obtained for DNNs under different settings. A non-exhaustive list of such works includes (Feng et al., 2021; Meyer, 2023; Petersen and Voigtlaender, 2021; Bos and Schmidt-Hieber, 2022; Hu et al., 2022a; Ko et al., 2023). As mentioned earlier, while these results for DNNs clearly highlight their ability to generalize with CoD-free rates of convergence, none of them obtain a rate faster than $O(n^{-1})$, even under the most idealized regularity assumptions, unlike the more traditional methods. To the best
of the authors' knowledge, it has only been shown in (Hu et al., 2022b) that the hard-margin condition, which is the limit as $q\to\infty$ of Tsybakov's noise condition, can lead to exponential rates of convergence of the excess risk for neural networks: they prove the result for shallow networks in the Neural Tangent Kernel (NTK) regime (Jacot et al., 2018) which minimize the empirical risk with square loss surrogate. (Nitanda and Suzuki, 2020) similarly show how, in the NTK regime, the hard-margin condition leads to exponential convergence of the averaged stochastic gradient descent (SGD) algorithm with respect to the number of epochs. However, these results are not fully satisfactory, as it is known that the NTK regime does not accurately represent the expressive power of deeper networks (Bietti and Bach, 2021). This work is thus, to the best of our knowledge, the first to prove super-fast rates of convergence for DNN hypothesis spaces under the hard-margin condition.

1.2 Our Contributions

We study the binary classification problem over a hypothesis space of fully connected deep neural networks with ReLU activation. The classifiers are learned in a standard supervised learning fashion, by minimizing an empirical risk with the square loss as a surrogate and an $\ell_p$ penalty on the network's weights, where $0<p<\infty$. For a real-valued, measurable function $f$, denote the excess risk $\mathcal{E}(f)$ of the classifier induced by $f$ as
\[
\mathcal{E}(f) := P_{(X,Y)\sim\rho}\big(\mathrm{sign}\,f(X)\neq Y\big)-P_{(X,Y)\sim\rho}\big(c^*(X)\neq Y\big),
\]
where $c^*$ is the so-called Bayes classifier, whose definition will be given later. Our main contributions can be stated as follows:

• In our Theorem 1, we provide a novel error decomposition for the excess risk of DNN classifiers under both "weak" ($q>0$) and "hard" ($q=\infty$) margin conditions.
• As a direct application of Theorem 1, we show that when the regression function $\eta$ is assumed to be $C^s$ smooth, the excess risk $\mathcal{E}(\hat f_{\mathrm{DNN}})$ of DNN classifiers converges with rate up to $O(n^{-\alpha})$, where $\alpha\sim 1-C/\sqrt{s}$ under the weak-margin condition, and $\alpha\sim C\sqrt{s}$ under the hard-margin condition.

• Lastly, we apply Theorem 1 again to a simplified version of the teacher-student setting: we show that if the teacher network is realizable by the student network, then the excess risk $\mathcal{E}(\hat f_{\mathrm{DNN}})$ converges exponentially fast to zero.

In all of our results, the excess risk bounds are non-asymptotic: they hold for every finite sample size $n$, as long as $n$ is greater than a lower bound which has an explicit expression in terms of the problem's parameters.

1.3 Notations

Function spaces: for a closed subset $\mathcal{X}\subseteq\mathbb{R}^d$, we will denote by

• $\mathcal{M}(\mathcal{X},\mathbb{R})$ the space of Borel measurable functions from $\mathcal{X}$ to $\mathbb{R}$,

• $\mathcal{C}(\mathcal{X},\mathbb{R})$ the space of real-valued, continuous functions on $\mathcal{X}$,

• $L^p(\mathcal{X},\mu)$ the space of Borel measurable functions on $\mathcal{X}$ whose absolute $p$-th power is $\mu$-integrable, where $\mu$ is a measure on $\mathcal{X}$ and $p\in[1,\infty]$. Whenever $\mu$ is the Lebesgue measure, we will omit it from the notation and simply write $L^p(\mathcal{X})$.

For any of these function spaces, we might drop the domain $\mathcal{X}$ and/or the co-domain $\mathbb{R}$ from the notation if the context already makes it clear.

Norms: for any $0<p<\infty$, $x=(x_1,\dots,x_d)^T\in\mathbb{R}^d$, $A=(a_{i,j})\in\mathbb{R}^{u\times v}$ and $f\in\mathcal{M}(\mathcal{X},\mathbb{R})$, we will
denote by

• respectively $|x|_p := \big(|x_1|^p+\dots+|x_d|^p\big)^{1/p}$, $|x|_0 := |x_1|^0+\dots+|x_d|^0$ (with the convention $0^0:=0$) and $|x|_\infty := \max_{1\le i\le d}|x_i|$, the $\ell_p$, $\ell_0$ and $\ell_\infty$ (quasi-)norms of $x$,

• $|A|_p := \big(\sum_{i,j}|a_{i,j}|^p\big)^{1/p}$ the $\ell_{p,p}$ norm of $A$,

• respectively $\|f\|_{\mathcal{C}(\mathcal{X})}$ and $\|f\|_{L^p(\mu)}$ the supremum norm and $L^p(\mathcal{X},\mu)$ norm of $f$, which are defined in the standard way.

Other symbols: we will also denote by

• $\mathbb{N}:=\{1,2,\dots\}$ the set of all natural numbers (excluding 0),

• $\nabla f := \big[\frac{\partial f}{\partial x_1}\,\dots\,\frac{\partial f}{\partial x_d}\big]^T$ the gradient of a differentiable function $f\in\mathcal{C}(\mathcal{X},\mathbb{R})$, where $\mathcal{X}\subseteq\mathbb{R}^d$. In case $\mathcal{X}=A\times B$, we might write $\nabla_a f$ or $\nabla_b f$ to emphasize the parameters with respect to which the derivatives are taken,

• $\mathbb{1}_A$ the indicator function of a set $A$, which equals 1 on $A$ and 0 everywhere else,

• $\mathrm{sign}(x) := \mathbb{1}_{(0,\infty)}(x)-\mathbb{1}_{(-\infty,0)}(x)$ the sign of a real number $x$. We will also denote by $\mathrm{sign}\,f := \mathrm{sign}\circ f$ the composition of a real-valued function $f$ with $\mathrm{sign}$,

• $\mathbb{E}[Z]$ the expectation of a random variable $Z$. If $Z=f(X,Y)$, we may write $\mathbb{E}_X[Z]$ or $\mathbb{E}_Y[Z]$ to indicate with respect to which variable the expectation is taken, or equivalently $\mathbb{E}_\mu[Z]$ to indicate with respect to which distribution the expectation is taken.

2 Problem Setting

We are given a sample of $n$ observations $(x_i,y_i)\in\mathcal{X}\times\mathcal{Y}$, where $\mathcal{X}\equiv[0,1]^d$ is the $d$-dimensional unit cube and $\mathcal{Y}\equiv\{-1,1\}$ is the set of possible labels. The samples are assumed to be i.i.d. data points generated from a distribution $\rho$ on the probability space $(\Omega,\mathcal{A},P)$. We will call any measurable map $c:\mathcal{X}\to\mathcal{Y}$ a classifier, and for any such function $c$ we define its misclassification risk by
\[
\mathcal{R}(c) := P_{(X,Y)\sim\rho}\big(c(X)\neq Y\big). \qquad (1)
\]
For any function $f:\mathcal{X}\to\mathbb{R}$, we thus see that $\mathrm{sign}\,f$ is always a classifier, and we will call $\mathrm{sign}\,f$ the classifier induced by $f$. It is well known that the misclassification risk is minimized by the Bayes classifier $c^* := \mathrm{sign}\,\eta$ (Devroye et al., 2013), where $\eta(x) := \mathbb{E}_{(X,Y)\sim\rho}[Y\mid X=x]$ is the so-called Bayes regression function. We will denote by $\mathcal{R}^* := \mathcal{R}(c^*)$ the optimal risk.
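To make these definitions concrete, here is a small simulation (our own toy example, not from the paper) with $X$ uniform on $[0,1]$ and $P(Y=1\mid X=x)=x$, so that $\eta(x)=2x-1$, $c^*(x)=\mathrm{sign}(2x-1)$, and the optimal risk is $\mathcal{R}^*=\mathbb{E}[\min(x,1-x)]=1/4$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.uniform(0.0, 1.0, n)
y = np.where(rng.uniform(0.0, 1.0, n) < x, 1, -1)  # P(Y = 1 | X = x) = x, so eta(x) = 2x - 1

bayes = np.where(2.0 * x - 1.0 >= 0.0, 1, -1)      # Bayes classifier c*(x) = sign(eta(x))
risk_star = np.mean(bayes != y)                    # Monte Carlo estimate of R* = 1/4
```

The Monte Carlo risk of the Bayes classifier concentrates around $1/4$, the known optimal risk for this toy distribution; any other classifier built from $x$ alone would have a larger misclassification risk.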
As $c^*$ depends on the unknown distribution $\rho$, it is a priori not possible to achieve the optimal risk $\mathcal{R}^*$; hence we instead aim to learn from the observations $(x_1,y_1),\dots,(x_n,y_n)$ a classifier $\hat c_n$ such that the excess risk $\mathcal{R}(\hat c_n)-\mathcal{R}^*$ converges to zero as fast as possible when $n$ goes to infinity.

2.1 Empirical Risk Minimization

The misclassification risk (1) being a function of $\rho$, it cannot be explicitly computed, and hence cannot be directly minimized. We instead minimize the following empirical risk with square surrogate loss:
\[
\hat{\mathcal{R}}_\ell(f) := \frac{1}{n}\sum_{i=1}^{n}\big(f(x_i)-y_i\big)^2. \qquad (2)
\]
Our choice of the square loss $\ell(f(x),y) := (f(x)-y)^2$ as a surrogate is motivated by at least three reasons:

• Empirical evidence suggests that the square loss may perform just as well as, if not better than, cross-entropy for classification tasks (Hui and Belkin, 2020). Our result thus provides some theoretical backing for this observation.

• (Hu et al., 2022b) prove rates of convergence under the hard-margin condition for neural network classifiers in the NTK regime learned with square loss. Our work shows that their results extend outside of the NTK regime, as they correctly conjectured.

• Most convergence rate results for
kernel-based classifiers under margin conditions also consider the square loss as a surrogate (Steinwart and Scovel, 2005; Steinwart and Christmann, 2008). We thus have an analogous setting for DNNs and can meaningfully compare the two approaches.

To match what is often done in practice, we also introduce a penalty function $\mathcal{P}:\mathcal{H}\to\mathbb{R}_{\ge 0}$ and a regularization parameter $\lambda\ge 0$. This leads to the following $\lambda$-regularized empirical risk minimization ($\lambda$-ERM) problem:
\[
\hat f_\lambda := \operatorname*{argmin}_{f\in\mathcal{H}}\Big\{\hat{\mathcal{R}}_\ell(f)+\frac{\lambda}{2}\mathcal{P}(f)\Big\}. \qquad (3)
\]
As stated earlier, we will take the hypothesis space $\mathcal{H}$ to be a family of deep neural networks, and the penalty $\mathcal{P}$ to be the $\ell_p$ norm. We aim to give fast rates of convergence for the excess risk of $\mathrm{sign}\,\hat f_\lambda$, i.e. the classifier induced by $\hat f_\lambda$.

2.2 The Hypothesis Space of Deep ReLU Networks

2.2.1 The Feedforward ReLU Network parametrization

We now introduce notations for the hypothesis space of neural networks we study in this paper. Given integers $L,a_0,a_1,\dots,a_L\in\mathbb{N}$, call a neural network parametrization, and denote by $\theta := ((W_1,B_1),\dots,(W_L,B_L))$, a tuple of matrix-vector pairs, where $W_l\in\mathbb{R}^{a_l\times a_{l-1}}$ and $B_l\in\mathbb{R}^{a_l}$ are respectively referred to as weight matrices and bias vectors. We also let $a_0\equiv d$ and $a_L\equiv 1$ in the following. Each pair $(W_l,B_l)$ induces an affine map $T_l:\mathbb{R}^{a_{l-1}}\to\mathbb{R}^{a_l}$; hence, given $\theta$ and an activation function $\sigma:\mathbb{R}\to\mathbb{R}$, we can define the neural network function realized by $\theta$ as
\[
f(\cdot;\theta):\mathbb{R}^{a_0}\to\mathbb{R}^{a_L},\qquad x\mapsto T_L\circ\sigma\circ T_{L-1}\circ\sigma\circ\cdots\circ\sigma\circ T_1(x),
\]
where $\sigma$ is given by $\sigma(x)\equiv\max\{0,x\}$ and acts on vectors elementwise. For an architecture vector $a=(a_0,a_1,\dots,a_L)\in\mathbb{N}^{L+1}$, which describes the shape of an FNN (and which we may abusively refer to as a neural network itself), we define the sets of all, respectively, bounded and unbounded neural network parametrizations as
\[
\mathcal{P}_{a,R} := \prod_{l=1}^{L}\big([-R,R]^{a_l\times a_{l-1}}\times[-R,R]^{a_l}\big),\qquad
\mathcal{P}_{a,\infty} := \prod_{l=1}^{L}\big(\mathbb{R}^{a_l\times a_{l-1}}\times\mathbb{R}^{a_l}\big), \qquad (4)
\]
where $R>0$ is a fixed parameter bound.
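The realization $f(\cdot;\theta)$ is just an alternation of affine maps and the elementwise ReLU, with no activation after the last layer. A minimal sketch (our own illustrative code; the helper name `realize` is not from the paper):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def realize(theta, x):
    """Forward pass of f(.;theta): affine maps T_l with ReLU in between,
    and no activation after the final affine map T_L."""
    h = x
    for l, (W, B) in enumerate(theta):
        h = W @ h + B
        if l < len(theta) - 1:
            h = relu(h)
    return h

# Architecture a = (2, 3, 1): input dim a0 = d = 2, one hidden layer a1 = 3, output aL = 1.
R = 0.5  # parameter bound: all entries drawn from [-R, R], as in P_{a,R}
rng = np.random.default_rng(0)
theta = [(rng.uniform(-R, R, (3, 2)), rng.uniform(-R, R, 3)),
         (rng.uniform(-R, R, (1, 3)), rng.uniform(-R, R, 1))]
out = realize(theta, np.array([0.5, 0.25]))   # a point of the unit cube [0,1]^2
```

Note that the tuple `theta` is exactly the parametrization $((W_1,B_1),(W_2,B_2))$ of the text, and the output is a vector of length $a_L=1$.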
Note that a parametrization $\theta\in\mathcal{P}_{a,R}$ can naturally be identified with a vector $\vec\theta\in[-R,R]^{P(a)}$, where $P(a)\equiv\sum_{l=1}^{L}(a_la_{l-1}+a_l)$ is the number of free parameters in the family of neural networks described by $a$. As usual, we respectively call $L\equiv L(a)$ and $W\equiv W(a):=|a|_\infty$ the depth and width of a given neural network in $\mathcal{P}_{a,R}$. Lastly, for a neural network architecture $a$ which satisfies $a_0=d$ and $a_L=1$, we call
\[
\mathcal{F}_\sigma:\mathcal{P}_{a,R}\to\mathcal{C}(\mathbb{R}^d,\mathbb{R}),\qquad \theta\mapsto f(\cdot;\theta) \qquad (5)
\]
the realization mapping, and define the induced hypothesis space of neural networks as
\[
\mathrm{NN}(a,W,L,R) := \{\mathcal{F}_\sigma(\theta)\mid\theta\in\mathcal{P}_{a,R}\}, \qquad (6)
\]
where we identify $f\in\mathrm{NN}(a,W,L,R)$, which is defined on all of $\mathbb{R}^d$, with its restriction to the unit cube $\mathcal{X}$.

2.2.2 Clipping the Neural Network outputs

To study the generalization error of our neural network-induced hypothesis space, it is necessary to ensure that all functions within it have a uniformly bounded supremum norm, as the complexity may grow unboundedly otherwise. We achieve this as follows: given a clipping constant $D>0$, we compose all the functions in $\mathrm{NN}(a,W,L,R)$ with $\mathrm{clip}_D:\mathbb{R}\to\mathbb{R}$ defined by
\[
\mathrm{clip}_D(x) = \begin{cases} D & \text{if } x\ge D,\\ x & \text{if } -D\le x\le D,\\ -D & \text{if } x\le -D.\end{cases} \qquad (7)
\]
It has been shown in, e.g., (Zhou et al., 2024) that the clipping function (also known
in the literature as a truncation function) $\mathrm{clip}_D:\mathbb{R}\to\mathbb{R}$ can be implemented by a shallow ReLU neural network. Indeed, we have
\[
\mathrm{clip}_D(x) = \sigma(x)-\sigma(-x)-\sigma(x-D)+\sigma(-x-D) = \big(\mathcal{F}_\sigma(\theta_D)\big)(x)
\]
for all $x\in\mathbb{R}$, where $\sigma(x)\equiv\mathrm{ReLU}(x)$ and
\[
\theta_D := \left(\left(\begin{bmatrix}1\\-1\\1\\-1\end{bmatrix},\begin{bmatrix}0\\0\\-D\\-D\end{bmatrix}\right),\Big(\begin{bmatrix}1&-1&-1&1\end{bmatrix},\,0\Big)\right).
\]
This shows that the hypothesis space of clipped neural networks is realized by appending an additional shallow neural network with fixed parameters at the end of each architecture. Furthermore, the following lemma guarantees that as long as the clipping constant $D$ is chosen larger than $\|\eta\|_{L^\infty}$, the approximation error of the clipped neural network hypothesis space does not get larger than that of its unclipped counterpart.

Lemma 1. Let $f^*\in L^\infty(\mathcal{X},\mathbb{R})$ and $D\ge\|f^*\|_{L^\infty(\mathcal{X},\mathbb{R})}$. For any $f\in L^\infty(\mathcal{X},\mathbb{R})$, we have
\[
\|\mathrm{clip}_D\circ f-f^*\|_{L^\infty(\mathcal{X},\mathbb{R})}\le\|f-f^*\|_{L^\infty(\mathcal{X},\mathbb{R})},
\]
where $\mathrm{clip}_D$ is as defined in (7).

Proof. By the assumption on $D$, we have $f^*(x)=\mathrm{clip}_D\circ f^*(x)$ for almost all $x\in\mathcal{X}$. Hence, by the 1-Lipschitz continuity of $\mathrm{clip}_D$,
\[
|\mathrm{clip}_D\circ f(x)-f^*(x)| = |\mathrm{clip}_D\circ f(x)-\mathrm{clip}_D\circ f^*(x)|\le|f(x)-f^*(x)|
\]
holds for almost all $x$, and the conclusion follows by the definition of the essential supremum. □

Thanks to Lemma 1, since composition with $\mathrm{clip}_D$ does not affect the number of free parameters, and $\|\eta\|_{L^\infty(\mathcal{X})}\le 1$, we will fix $D=1$ and assume in the following that all neural networks we consider have been composed with $\mathrm{clip}_D$, without making this explicit in the notation.

2.2.3 $\ell_p$ Regularization

Lastly, we fix $0<p<\infty$ and regularize the objective (2) with an $\ell_p$ penalty term. We thus define the regularized empirical risk as
\[
\hat{\mathcal{R}}_{\ell,\lambda}(\theta) := \frac{1}{n}\sum_{i=1}^{n}\big(f(x_i;\theta)-y_i\big)^2+\frac{\lambda}{2}|\theta|_p^p, \qquad (8)
\]
where, for a parametrization $\theta=((W_l,B_l))_{l=1}^{L}\in\mathcal{P}_{a,R}$,
\[
|\theta|_p^p := \sum_{l=1}^{L}|W_l|_p^p+|B_l|_p^p.
\]
$\ell_p$ regularization is very popular in practical applications. For $p=2$, in which case it is often referred to as weight decay, it is known to help training and improve generalization (Krogh and Hertz, 1991).
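The four-ReLU representation of $\mathrm{clip}_D$ given above can be verified numerically (an illustrative check of the identity, not code from the paper):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def clip_via_relu(x, D):
    # clip_D(x) = sigma(x) - sigma(-x) - sigma(x - D) + sigma(-x - D),
    # i.e. the shallow ReLU network realized by theta_D.
    return relu(x) - relu(-x) - relu(x - D) + relu(-x - D)

xs = np.linspace(-3.0, 3.0, 601)
ok = bool(np.allclose(clip_via_relu(xs, 1.0), np.clip(xs, -1.0, 1.0)))
```

On a grid covering all three regimes ($x\le -D$, $|x|\le D$, $x\ge D$), the ReLU combination agrees exactly with the piecewise definition (7).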
Similarly, $p=1$ is a popular choice for DNNs, as it tends to promote sparse solutions, which are less expensive to store and more efficient to compute with (Candes et al., 2008). Although not as common, taking $0<p<1$ also has its merits, as it can be used as a differentiable approximation of the $\ell_0$ penalty, which induces very sparse models but is not compatible with standard gradient-based optimization algorithms (Louizos et al., 2017).

For fixed $R>0$ and $\lambda>0$, the $\lambda$-ERM problem (3) thus consists in finding $\hat\theta_\lambda$ satisfying
\[
\hat\theta_\lambda\in\operatorname*{argmin}_{\theta\in\mathcal{P}_{a,R}}\hat{\mathcal{R}}_{\ell,\lambda}(\theta). \qquad (9)
\]
Note that the objective (9) is highly non-convex, implying that the set of minimizers need not reduce to a singleton. Therefore, we will only consider the minimum-norm solutions throughout this paper, i.e. we only consider
\[
\hat\theta_\lambda\in\operatorname{argmin}\Big\{|\theta|\,:\,\theta\in\operatorname*{argmin}_{\theta\in\mathcal{P}_{a,R}}\hat{\mathcal{R}}_{\ell,\lambda}(\theta)\Big\}. \qquad (10)
\]

2.3 Technical Assumptions

In this section, we present the technical assumptions with which we will be working to establish our main results.

(A1) The Bayes regression function $\eta: x\mapsto\mathbb{E}[Y\mid X=x]$ satisfies Tsybakov's low-noise condition: there exist a noise exponent $q>0$ and a positive constant $C>0$ such that
\[
P\big(|\eta(X)|\le\delta\big)\le C\delta^q \quad\text{for all }\delta>0.
\]
In the limit $q\to\infty$, we get the so-called hard-margin condition:

(A2) The Bayes regression function $\eta: x\mapsto\mathbb{E}[Y\mid X=x]$ satisfies the hard-margin condition:
there exists $\delta>0$ such that $P\big(|\eta(X)|>\delta\big)=1$.

Assumption (A2) was originally introduced in (Mammen and Tsybakov, 1999) as a characterization of classification problems for which the two classes are in some sense "separable", and it has repeatedly been shown in the literature to lead to faster rates of convergence for various hypothesis classes.

Consider the regularized population risk $\mathcal{R}_{\ell,\lambda}$, which is given for all $\lambda\ge 0$ and $\theta\in\mathcal{P}_{a,R}$ by
\[
\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta)) := \mathbb{E}_{(x,y)\sim\rho}\big[(f(x)-y)^2\big]+\frac{\lambda}{2}|\theta|_p^p. \qquad (11)
\]
Naturally, for any $\theta_\lambda\in\operatorname{argmin}_{\theta\in\mathcal{P}_{a,R}}\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta))$ and any other $\theta\in\mathcal{P}_{a,R}$, we have by optimality
\[
\mathrm{dist}(\theta,\operatorname{argmin}\mathcal{R}_{\ell,\lambda})>0\;\Longrightarrow\;\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta_\lambda))<\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta)),
\]
where $\mathrm{dist}(a,A)$ denotes the $\ell_\infty$-distance of a vector $a\in\mathbb{R}^k$ to a set $A\subseteq\mathbb{R}^k$. The following assumption gives a quantitative estimate on the growth of the loss function $\mathcal{R}_{\ell,\lambda}$ away from its minimizers:

(A3) There exist two constants $K>0$, $r>1$ such that for all $\lambda\ge 0$, $t>0$, and $\theta_\lambda\in\operatorname{argmin}_{\theta\in\mathcal{P}_{a,R}}\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta))$,
\[
\inf_{\theta\in\mathcal{P}_{a,R}:\,\mathrm{dist}(\theta,\operatorname{argmin}\mathcal{R}_{\ell,\lambda})\ge t}\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta))-\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta_\lambda))\ge Kt^r. \qquad (12)
\]
Assumption (A3) is motivated by the following observation: by compactness of $\mathcal{P}_{a,R}$ and continuity of $\mathcal{R}_{\ell,\lambda}$, there must exist a $\theta_0\in\mathcal{P}_{a,R}$ such that
\[
\inf_{\theta\in\mathcal{P}_{a,R}:\,\mathrm{dist}(\theta,\operatorname{argmin}\mathcal{R}_{\ell,\lambda})\ge t}\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta))=\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta_0)),
\]
and furthermore, by continuity of the map $\theta\mapsto|\theta_0-\theta|_\infty$ on $\mathcal{P}_{a,R}$, there must be one $\theta_\lambda\in\operatorname{argmin}_{\mathcal{P}_{a,R}}\mathcal{R}_{\ell,\lambda}$ such that
\[
\mathrm{dist}(\theta_0,\operatorname{argmin}\mathcal{R}_{\ell,\lambda})=|\theta_0-\theta_\lambda|_\infty\ge t.
\]
Now, we can apply a first-order Taylor expansion with exact remainder around this global minimizer $\theta_\lambda$, where the gradient is zero, to get
\[
\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta_0))=\mathcal{R}_{\ell,\lambda}(f(\cdot;\theta_\lambda))+h(\theta_0)\,|\theta_0-\theta_\lambda|_\infty,
\]
where $h(\theta_0)\to 0$ as $\theta_0\to\theta_\lambda$. Assumption (A3) thus requires $h$ to behave like $\theta_0\mapsto|\theta_0-\theta_\lambda|^\kappa$ for some $\kappa>0$. Conditions similar to (A3) can be found in the empirical process theory literature, where they are referred to as well-separation assumptions and are used to prove consistency of M-estimators (Van der Vaart, 2000; Sen, 2018). In those works, the $Kt^r$ term in equation (12) is replaced by $\psi(t)$, for an unknown function $\psi$ which is merely assumed to be positive for all $t>0$.
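For concreteness, the regularized empirical risk (8), whose population version (11) is the object controlled by assumption (A3), can be sketched for a one-hidden-layer network as follows (our own illustrative code; the helper name `reg_risk` is not from the paper):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def reg_risk(theta, X, y, lam, p):
    """Empirical square loss plus (lam/2) * |theta|_p^p, cf. equation (8)."""
    (W1, B1), (W2, B2) = theta                     # one hidden layer for simplicity
    preds = (relu(X @ W1.T + B1) @ W2.T + B2).ravel()
    loss = np.mean((preds - y) ** 2)
    pen = sum(np.sum(np.abs(M) ** p) for M in (W1, B1, W2, B2))
    return loss + 0.5 * lam * pen

rng = np.random.default_rng(0)
theta = [(rng.normal(size=(4, 2)), rng.normal(size=4)),
         (rng.normal(size=(1, 4)), rng.normal(size=1))]
X = rng.uniform(0.0, 1.0, size=(8, 2))
y = rng.choice([-1.0, 1.0], size=8)
r0 = reg_risk(theta, X, y, lam=0.0, p=1)   # unregularized empirical risk
r1 = reg_risk(theta, X, y, lam=0.1, p=1)   # lambda-regularized empirical risk
```

Since the $\ell_p$ penalty is nonnegative, the regularized value can only exceed the unregularized one at the same parametrization; minimizing over $\theta$ then trades loss against parameter norm, which is what drives the minimum-norm comparison in Lemma 2 below.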
Having explicit information on the growth of this lower bound will be necessary for us to explicitly bound the excess risk of our estimators.¹

Denote respectively by $\hat{\mathcal{R}}_{\ell,n}$ and $\mathcal{R}_\ell$ the unregularized ($\lambda=0$) versions of $\hat{\mathcal{R}}_{\ell,\lambda}$ and $\mathcal{R}_{\ell,\lambda}$ defined in (8) and (11), where the dependence on $n$ is made explicit in the notation. Denote by
\[
\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,n} := \Big\{\operatorname{argmin}|\theta|_\infty\,:\,\theta\in\operatorname*{argmin}_{\theta\in\mathcal{P}_{a,\infty}}\hat{\mathcal{R}}_{\ell,n}\Big\} \qquad (13)
\]
the minimum-norm minimizers of $\hat{\mathcal{R}}_{\ell,n}$, taken as a function defined on the unrestricted parameter space $\mathcal{P}_{a,\infty}$ (4). We will assume the following:

(A4) The argmins of $\mathcal{R}_\ell$ and $\hat{\mathcal{R}}_{\ell,n}$ are not empty, and almost surely over all possible i.i.d. draws $(x_i,y_i)_{i\ge 1}$ with distribution $\rho$, we have
\[
\sup_{n\ge 1}\big\{|\theta|_\infty\,:\,\theta\in\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,n}\big\}<\infty.
\]
Besides the requirement that minimizers exist, which is standard and often implicitly assumed when studying empirical risk minimization, assumption (A4) states that the minimum-norm solutions of the unregularized ERM problem (10) almost surely do not run off to infinity as the sample size $n$ increases. Although it is intuitively expected that minimum-norm solutions do not diverge, in practice such an event could have a small, positive probability. Assumption (A4) thus requires $\rho$ to give zero measure to "pathological" datasets where such a thing happens. Under assumption (A4), we have a uniform bound on the norms of regularized
solutions:

Lemma 2. Assume that (A4) holds and denote by
\[
R^* := \sup_{n\ge 1}\big\{|\theta|_\infty\,:\,\theta\in\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,n}\big\}
\]
the supremum. For all $n\ge 1$ and $\lambda>0$, the argmins of $\hat{\mathcal{R}}_{\ell,\lambda}$ are not empty, and we have the inequality
\[
\sup_{\lambda\ge 0,\,n\ge 1}\big\{|\theta|_\infty\,:\,\theta\in\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,\lambda}\big\}\le R^*\cdot P(a)^{1/p},
\]
where $P(a)$ denotes the number of parameters in the architecture $\mathcal{P}_{a,\infty}$ and $\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,\lambda}$ is defined as in (13).

¹ In reality, we only need (A3) to hold for all $t$ in a neighborhood of 0, but we omit this detail for the sake of exposition.

Proof of Lemma 2. If we denote by $\mathbf{0}\in\mathcal{P}_{a,\infty}$ the parametrization whose entries are all zeros, it is readily checked that $\hat{\mathcal{R}}_{\ell,\lambda}(f(\cdot;\mathbf{0}))=1$ and $\hat{\mathcal{R}}_{\ell,\lambda}(f(\cdot;\theta))>1$ for all $|\theta|_p^p>2/\lambda$. Hence $\hat{\mathcal{R}}_{\ell,\lambda}$ is minimized somewhere in $\{\theta:|\theta|_p^p\le 2/\lambda\}$, and the minimum is attained by compactness and continuity.

Now let $0\le\lambda\le\lambda'$ and $\theta,\theta'$ respectively in $\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,\lambda}$ and $\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,\lambda'}$. By optimality we have
\[
\hat{\mathcal{R}}_\ell(\theta)+\frac{\lambda}{2}|\theta|_p^p
\le\hat{\mathcal{R}}_\ell(\theta')+\frac{\lambda}{2}|\theta'|_p^p
=\hat{\mathcal{R}}_\ell(\theta')+\frac{\lambda'}{2}|\theta'|_p^p+\frac{\lambda-\lambda'}{2}|\theta'|_p^p
\le\hat{\mathcal{R}}_\ell(\theta)+\frac{\lambda'}{2}|\theta|_p^p+\frac{\lambda-\lambda'}{2}|\theta'|_p^p.
\]
Hence we have shown
\[
0\le(\lambda-\lambda')\big(|\theta'|_p^p-|\theta|_p^p\big),
\]
which implies that $|\theta'|_p\le|\theta|_p$ whenever $\lambda\le\lambda'$. Finally, by basic properties of $\ell_p$ norms, we have
\[
\sup_{\lambda\ge 0,\,n\ge 1}\{|\theta|_\infty:\theta\in\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,\lambda}\}
\le\sup_{\lambda\ge 0,\,n\ge 1}\{|\theta|_p:\theta\in\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,\lambda}\}
\le\sup_{n\ge 1}\{|\theta|_p:\theta\in\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,n}\}
\le\sup_{n\ge 1}\{P(a)^{1/p}|\theta|_\infty:\theta\in\operatorname{argmin}^*\hat{\mathcal{R}}_{\ell,n}\}
=R^*\cdot P(a)^{1/p}. \qquad\square
\]
Lemma 2 guarantees that whenever the parameter bound $R$ is chosen larger than $R^*\cdot P(a)^{1/p}$, our hypothesis space contains neural networks which are global minimizers of the objective (10), which is desirable for practical purposes and necessary to prove our main results in the next section.

3 Main results

3.1 An abstract upper bound for the excess risk of Deep Neural Network Classifiers

Before stating our main results, we introduce a few useful definitions, the first being that of an $\varepsilon$-covering:

Definition 1 ($\varepsilon$-cover). Let $\varepsilon>0$ and $\mathcal{G}\subseteq L^\infty(\mathcal{X},\mathbb{R})$ be a family of functions. Any finite collection of functions $g_1,\dots
,g_N\in L^\infty(\mathcal{X},\mathbb{R})$ with the property that for any $g$ in $\mathcal{G}$ there is an index $j\equiv j(g)$ such that
\[
\|g-g_j\|_{L^\infty}\le\varepsilon
\]
is called an $\varepsilon$-covering (or cover) of $\mathcal{G}$ with respect to $\|\cdot\|_{L^\infty}$.

For a given $\varepsilon$, we can think of the cardinality of an $\varepsilon$-cover as a measure of complexity for the family $\mathcal{G}$. This motivates the definition of a covering number:

Definition 2 ($\varepsilon$-covering number). Let $\varepsilon>0$ and $\mathcal{G}\subseteq L^\infty(\mathcal{X},\mathbb{R})$. We denote by $\mathrm{Cov}(\mathcal{G},\|\cdot\|_{L^\infty},\varepsilon)$ the size of the smallest $\varepsilon$-cover of $\mathcal{G}$ with respect to $\|\cdot\|_{L^\infty}$, with the convention $\mathrm{Cov}(\mathcal{G},\|\cdot\|_{L^\infty},\varepsilon):=\infty$ when no finite cover exists. $\mathrm{Cov}(\mathcal{G},\|\cdot\|_{L^\infty},\varepsilon)$ will be called the $\varepsilon$-covering number of $\mathcal{G}$ with respect to $\|\cdot\|_{L^\infty}$. If $\mathcal{G}=\mathrm{NN}(a,W,L,R)$ is given by our neural network hypothesis space, we will abbreviate and denote $\mathrm{Cov}(\mathrm{NN}(a,W,L,R),\|\cdot\|_{L^\infty},\varepsilon)=:\mathrm{Cov}_\infty(\mathrm{NN},\varepsilon)$.

The above two quantities, whose definitions are adapted from (Györfi et al., 2002), are ubiquitous in the learning theory literature, as they give a lot of information on the statistical properties of our estimators. Another related, though less common, measure of complexity for hypothesis spaces of neural networks is the Lipschitz constant of the realization map (5):

Definition 3. Given a parametrization space $\mathcal{P}_{a,R}$, recall the definition (5) of the realization mapping $\mathcal{F}_\sigma:\mathcal{P}_{a,R}\to\mathcal{C}(\mathcal{X},\mathbb{R})$. We will denote by
\[
\mathrm{Lip}(\mathcal{F}_\sigma) := \sup_{\substack{\theta,\theta'\in\mathcal{P}_{a,R}\\ \theta\neq\theta'}}\frac{\|\mathcal{F}_\sigma(\theta)-\mathcal{F}_\sigma(\theta')\|_{\mathcal{C}(\mathcal{X})}}
|
https://arxiv.org/abs/2505.08262v1
|
|θ−θ′|∞ its Lipschitz constant. Intuitively, the Lipschitz constant of the realization map estimates the complexity of the Neural Network hypothesis space in the sense that it controls how different two realizations can be given that their parametrizations are close. For this reason, the problem of estimating a Neural Network’s Lipschitz constant has garnered a lot of interest over recent years (Fazlyab et al., 2019; Virmaux and Scaman, 2018). We are now ready to state our main result, which gives an upper bound on the excess risk of DNN classifiers under our setting. Theorem 1. Assume that assumptions (A3) and(A4) hold. Fix an architecture awith parameter bound R≥R∗·P(a)1/p, and denote εapprox := inf f∈NN (a,W,L,R )∥f−η∥L∞(X) the approximation error of the corresponding Neural Network hypothesis space. We have the following excess risk bounds : 12 •If the low-noise condition (A1) holds, then for all δ > ε approx and0< ν < δ , any minimum-norm solution pθλof the λ-ERM problem (10) with 0≤λ < 2p−1(δ− εapprox )2(P(a)Rp)−1satisfies for all n≥1 : R(sign f(·;pθλ))− R∗≤εapprox +b 21−pλP(a)Rp+Cδq + (δ−ν)−2ˆ εapprox +b 21−pλP(a)Rp˙2 (14) + 4Cov∞ˆ NN,K(21−Lν)r 24 Lip( Fσ)1+r˙ expˆ−nK2(21−Lν)2r 288 Lip( Fσ)2r˙ •If the hard-margin condition (A2) holds with margin δ > 0, and εapprox < δ, then for all 0< ν < δ , any minimum-norm solution pθλof the λ-ERM problem (10) with 0≤λ <2p−1(δ−εapprox )2(P(a)Rp)−1satisfies for all n≥1 : R(sign f(·;pθλ))− R∗≤εapprox +b 21−pλP(a)Rp + (δ−ν)−2ˆ εapprox +b 21−pλP(a)Rp˙2 (15) + 4Cov∞ˆ NN,K(21−Lν)r 24 Lip( Fσ)1+r˙ expˆ−nK2(21−Lν)2r 288 Lip( Fσ)2r˙ The bounds given by Theorem 1 are quite different from those usually seen in the literature for similar problems. 
While the approximation error term is standard, the remaining sum- mands are not: we get three terms which relate the noise condition, approximation error, and regularization constant, and one last exponential term which can be thought of as the statistical error in classical learning theory. At first glance, it is not clear that Theorem 1 can improve known rates of convergence for DNN hypothesis spaces. Indeed, while the non-exponential summands in (14) and (15) can be controlled satisfactorily when the regression function ηlies in a space suitable for Neural Network approximation, the exponential term is hard to control, and may in fact not converge to zero as the Neural Network dimensions grow with the sample size n. More concretely, we have the following estimate on FCNN complexity measures: Lemma 3 (Theorem 2.6 in (Berner et al., 2020)) .For any architecture vector awith depth L∈Nand width W∈N, and any parameter bound R > 0, we have the upper bound on Lip(Fσ) sup θ,θ′∈Pa,R θ̸=θ′∥Fσ(θ)− F σ(θ′)∥C(X) |θ−θ′|∞≤2L2RL−1WL. The above inequality, which is tight, shows that the Lipschitz constant of Fσgrows expo- nentially with depth. As one could expect, the covering number behaves similarly: 13 Lemma 4. Letε >0. For any architecture vector awith depth L∈Nand width W∈N, and any parameter bound R > 0, we have the upper bound Cov∞(NN, ε)≤ˆ 1 +2RLip(Fσ) ε˙P(a) Proof of Lemma 4. Letθ,θ′∈ Pa,R. Because of the inequality ∥f(·;θ)−f(·;θ′)∥L∞(X)≤Lip(Fσ)|θ−θ′|∞, we get that Cov∞(NN, ε) is bounded by
the number of ℓ∞balls of radius ε/Lip(Fσ) needed to cover the hypercube [ −R, R]P(a). It is straightforward to check that the collection of such balls centered at the points −R⃗1+ε⃗k,where ⃗k= [k1, k2, . . . k P(a)]T,andki∈ 0,1, . . . ,2RLip(Fσ) ε , where ⃗1is the vector whose entries are all ones, covers [ −R, R]P(a)and has ⌈2RLip(Fσ)/ε⌉P(a) elements, hence the proof is complete. Although these estimates suggest that, in general, the bounds provided by Theorem 1 are likely to be vacuous, we will see in the following section that, when the regression function ηlies in a suitably regular function space, Theorem 1 can in fact lead to super fast rates of convergence. 3.2 Super fast rates of convergence for smooth regression func- tions After the seminal work of (Yarotsky, 2017), many approximation rates of functions in Sobolev, Besov, Korobov and various other smoothness spaces by deep ReLU networks have been discovered in recent years (Suzuki, 2018; Petersen and Voigtlaender, 2018; Mao and Zhou, 2022). In particular, we will make use of the following result due to (Lu et al., 2021), which provides exact approximation bounds of stimes continuously differentiable functions by Deep ReLU FCNNs : Theorem 2 (Theorem 1.1 from (Lu et al., 2021)) .Leth∈ Cs(X). For any W0, L0∈Nthere exists a neural network f(·;θ)∈ NN (a, W, L, R )with width W(a) =C1(W0+ 2) log2(8W0) and depth L(a) =C2(L0+ 2) log2(4L0) + 2dsuch that ∥f(·;θ)−h∥L∞(X)≤C3∥h∥Cs(X)W−2s/d 0 L−2s/d 0, where C1= 17sd+13dd,C2= 18s2andC3= 85( s+ 1)d8s. We will work under the following assumption (A5) , which is a little stronger than simply assuming that η∈ Cs(X). 14 (A5) Letη:x7→E(X,Y)∼ρ[Y|X=x]. For any W0, L0∈N≥2there exists a neural network f(·;θ)∈ NN (a, W, L, R ) with width W(a) = C1W0log2(W0) and depth L(a) =C2(L0) log2(L0) + 2dsuch that ∥f(·;θ)−η∥L∞(X)≤C3W−2s/d 0 L−2s/d 0, where C1= (3s)dd,C2=?s,C3=sd8s∥η∥Cs(X). 
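Assumption (A5) can be inverted to size an architecture for a target accuracy. The sketch below is a minimal illustration of that bookkeeping; the helper name and the default constant C3 are placeholder assumptions, while C1, C2 and the scalings W(a) = C1·W0·log2(W0), L(a) = C2·L0·log2(L0) + 2d with error bound C3·(W0·L0)^{−2s/d} follow (A5).

```python
import math

def architecture_for_error(eps, s, d, L0=2, C3=1.0):
    """Size (width, depth) so that the (A5)-style error bound
    C3 * (W0 * L0)^{-2s/d} falls below eps, with L0 >= 2 held fixed.

    C3 is a placeholder; C1 and C2 follow the expressions given in (A5).
    """
    C1 = (3 * s) ** d * d                     # C1 = (3s)^d * d
    C2 = math.sqrt(s)                         # C2 = sqrt(s)
    # need W0 * L0 >= (C3 / eps)^{d / 2s}
    W0 = max(2, math.ceil((C3 / eps) ** (d / (2 * s)) / L0))
    width = math.ceil(C1 * W0 * math.log2(W0))
    depth = math.ceil(C2 * L0 * math.log2(L0)) + 2 * d
    return width, depth
```

Tightening the target error inflates the width only polynomially (at rate d/2s) while the depth stays fixed, which is the sizing strategy used later in the proof of Theorem 3.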
Our Assumption (A5) is similar to the conclusion of Theorem 2, with the exception that the multiplicative constants C1, C2, C3, as well as some additive terms in the expressions of W(a) and L(a) have been modified. While most of these changes are cosmetic and do not modify the orders of magnitude for each of these terms, the value of C2, which relates the depth Lof the architecture awith the smoothness sof the target function has been changed non-trivially : while the original theorem of (Lu et al., 2021) requires a depth scaling like O(s2) to guarantee a given approximation error, Assumption (A5) asserts that ηcan be approximated to similar accuracy with a depth scaling like O(?s) only. Our assumption (A5) can thus be thought of as requiring that ηis a “nice” Csfunction, in the sense that its rate of approximation is faster than the worst possible case. The approximation error bound provided by Assumption (A5) gives us an exact quantifi- cation of the width and depth required to reach a desired approximation error. Given the Lipschitz constant bound from Proposition 3, we can thus pick the pair ( W0, L0)∈N2 optimally so as to minimize the approximation error εnwhile ensuring that the Lipschitz
constant grows slower than n−1. Applying this strategy leads to the following excess risk bound for deep FCNN classifiers: Theorem 3. Assume that assumptions (A3) ,(A4) and(A5) hold, and let α > 0be a desired order of convergence. There exists a FCNN architecture anwith parameter bound Rn, width Wnand depth Lngiven by Wn=C1` L−1 0(n−α r/C3)−d/2s˘ log2` L−1 0(n−α r/C3)−d/2s˘ Ln=C2L0log2L0+ 2d Rn=R∗rLn(W2 n+Wn)s1/p=˜O(nαd/ps), where C1, C2, C3are given in (A5) ,L0≥2is fixed, and ˜Ohides logarithmic factors, such that the following excess risk bounds hold: •If the low-noise condition (A1) holds and α <´ 1 +B1?s+B2 s¯−1 , then any minimum- norm solution pθλof the λ-ERM problem (10)with0≤λ≤¯λn−2α/rR−2p n=˜O´ n−2α(s+d) rs¯ , where ¯λ >0is a constant which depends only on p, satisfies for all n≥1 : R´ signf(·;pθλ)¯ − R∗≤6n−α r+Cn−αq 2r+ 4 exp` −A1n1−αA2+nαd/slog(γnακ)˘ (16) where γ, κare constants which depend on d, p, s, andronly, while A1=K222r(1−Ln)/288, A2= 1 +B1?s+B2 s, and B1, B2>0are two constants which depend on s, dandponly. Furthermore, for these values of α, we always have 1−αA2> αd/s . 15 •If the hard-margin condition (A2) holds with margin δ >0andα <s?s sB1+?sB2, then any minimum-norm solution pθλof the λ-ERM problem (10) with 0≤λ≤¯λn−2α/rR−2p n= ˜O´ n−2α(s+d) rs¯ , where ¯λ > 0is a constant which depends only on p, satisfies for all n >(δ/2)−α/r: R´ signf(·;pθλ)¯ −R∗≤18n−α r+ 4 exp´ −A1n1−αA2+nαd/slog(γnακ′(δ/2)−r)¯ (17) where γ, κ′are constants which depend on d, p, s, andronly, while A1=K2(δ2−Ln)2r 288, A2=B1?s+B2 s, and B1, B2>0are two constants which depend on dandponly. Furthermore, for these values of α, we always have 1−αA2> αd/s . The upper bounds on the exponent αin equations (16) and (17) respectively ensure that the exponential term converges to zero, leading to an effective convergence rate of ˜O(n−α/r): as the smoothness sincreases to infinity, we thus get a convergence rate of n−min{1,q/2} r under the low-noise condition (A1) . 
Since r > 1, the bound we obtain is thus slightly worse than the O(n^{−1}) rate that (Kim et al., 2021) were able to obtain under assumption (A1) when q → ∞. On the other hand, under the hard-margin assumption (A2), we find that the exponent α/r grows without bound as the smoothness s of the regression function η goes to ∞. Theorem 3 thus shows how deep FCNNs can leverage the hard-margin condition (A2) together with the smoothness of the regression function η to achieve arbitrarily fast rates of convergence for the excess risk, a result which, to the best of our knowledge, is the first of its kind for this hypothesis space.

3.3 A case of exponential convergence rate: well-specified teacher-student learning

The takeaway message from Theorem 3 is that whenever the regression function η lies in a suitable space, such that it can be approximated by FCNNs whose size grows slowly, the margin conditions (A1) and (A2) will lead to fast rates for the excess risk. Taking this idea a step further, we look in this subsection at what happens when the regression function η is exactly representable by our hypothesis space of FCNNs. Our starting point is
the following Lemma: Lemma 5. LetR∗>0, L∗∈Nbe fixed, and a∗∈NL+1be any FCNN architecture. For any parametrization θ∗∈ Pa∗,R∗, there exists a distribution ρθ∗onX × Y such that E(X,Y)∼ρθ∗[Y|X=x] =f(x;θ∗),forρX-a.e. x∈ X. where f(·;θ∗) :X → [−1,1]is the function realized by θ∗. Proof. LetX∼ρXandU∼Uniform([ −1,1]) be two independent random variables on the same probability space, and define: Y:= 1[U≤f(X;θ∗)]− 1[U > f (X;θ∗)] =( 1, ifU≤f(X;θ∗), −1,ifU > f (X;θ∗). 16 Now let ρθ∗be the joint distribution of ( X, Y ): we then have that for ρX-almost every x∈ X, E[Y|X=x] =E[ 1[U≤f(x;θ∗)]− 1[U > f (x;θ∗)]] =P[U≤f(x;θ∗)]−P[U > f (x;θ∗)] =1 2(1 +f(x;θ∗))−1 2(1−f(x;θ∗)) =f(x;θ∗). From Lemma 5, we see that any “target” Neural Network classifier corresponds to the Bayes regression function of a distribution which can be explicitly computed and sampled from. This observation can be thought of as a formalization of the knowledge distillation frame- work, which consists in training Neural Networks of small size to solve problems at which bigger Neural Networks are very successful with comparable performance. This approach, also known as the teacher-student setting , is typically implemented by training a smaller (“student”) network to predict the outputs of a larger (“teacher”) network, and has shown to be very successful in practice (Hinton et al., 2015; Xu et al., 2023). Recent works on the expressivity of deep ReLU FCNNs have shown that a neural network architecture awith input dimension d, width Wand depth L, could induce piecewise linear functions with a number of linear regions ranging anywhere between O(1) and O` (WL)d˘ (Montufar et al., 2014; Serra et al., 2018). 
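The label-flipping construction in the proof of Lemma 5 is straightforward to simulate. Below is a minimal sketch; `f` and `sample_x` are user-supplied stand-ins for the teacher network's realization and the input distribution ρ_X, and the helper name is our own.

```python
import random

def sample_from_rho(f, sample_x, rng=random):
    """Draw one (X, Y) pair from the distribution rho_theta* of Lemma 5.

    f : X -> [-1, 1] plays the role of the teacher realization f(.; theta*).
    """
    x = sample_x()
    u = rng.uniform(-1.0, 1.0)      # U ~ Uniform([-1, 1]), independent of X
    y = 1 if u <= f(x) else -1      # Y = 1[U <= f(X)] - 1[U > f(X)]
    return x, y
```

At any fixed input x, the Monte Carlo average of Y concentrates around f(x), since P(U ≤ f(x)) = (1 + f(x))/2, matching the computation in the proof.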
This exponential gap suggests that it could be possible for a ReLU FCNN of large width W and depth L to be represented by another one with much smaller dimensions W′ ≪ W, L′ ≪ L, which partially explains the numerous successes of knowledge distillation in practical applications, and lends credence to the so-called "lottery ticket hypothesis", according to which large networks contain small subnetworks able to generalize comparably well (Frankle and Carbin, 2019). In light of this discussion, we are compelled to consider the following assumption:

(A6) The regression function η is given by the realization of a FCNN with architecture a∗ ∈ N^{L∗+1} of depth L∗ and width W∗. Furthermore, there exists an architecture a ∈ N^{L+1} with width W ≤ W∗ and depth L ≤ L∗, and a parameter bound R > 0 such that f(x) = η(x) for all x ∈ X, for some f ∈ NN(a, W, L, R).

As we can expect, this well-specification assumption leads to a remarkable improvement in the convergence rate of our DNN classifiers: since ε_approx = 0, and R, L, P(a), Lip(F_σ) are now all independent of n, it suffices to apply Theorem 1 with ν ≡ δ/2 to get an exponential upper bound on the excess risk under the hard-margin condition.

Theorem 4. Assume that assumptions (A3), (A4) and (A6) hold. If the hard-margin condition (A2) also holds with margin δ > 0, then any minimum-norm solution θ̂_λ of the λ-ERM problem (10) with 0 ≤ λ ≤ 2^{p−1}(P(a)R^p)^{−1} exp(−2nβ_1) satisfies for all n ≥ 1 the excess risk bound:

R(sign f(·; θ̂_λ)) − R∗ ≤ β_2 exp(−nβ_1) + (4/δ²) exp(−2nβ_1),

where

β_1 = K²(2^{−L}δ)^{2r} / (288 Lip(F_σ)^{2r}),  β_2 = 1 + 4 Cov_∞( NN, K(2^{−L}δ)^r / (24 Lip(F_σ)^{1+r}) )

are constants which do not depend on n.

4 Conclusion and discussion

We have established in this work a general upper bound on the excess risk of ReLU Deep Neural Network classifiers under the hard-margin condition, and have shown how it can be used to deduce "super-fast" rates of convergence under some suitable regularity conditions on the regression function η. We briefly discuss in this section some possible extensions and generalizations of our results.

Possible extensions: We believe that our Theorem 1 and its consequence Theorem 3 could be generalized to the following setups by a direct adaptation of our arguments:

• General Lipschitz activation functions. We have only considered ReLU in this work to simplify the exposition, but the only properties of ReLU we use are its Lipschitzness and its rate of approximation for smooth functions. It is however known that many other popular activations can achieve approximation rates similar to ReLU (Ohn and Kim, 2019; Zhang et al., 2024), hence all such activation functions, as long as they are Lipschitz, should lead to results analogous to Theorems 1 and 3.

• Other measures of regularity for η. We have shown how C^s-smoothness of η can lead to fast rates of convergence, as we believe it is the most important example, but in fact our proof shows that the same is true as long as η can be approximated at a rate similar to the one given by Theorem 2. This suggests that similar super-fast rates can be established for η belonging to a variety of other smoothness spaces (Elbrächter et al., 2021).

• Multi-class classification. Similarly, we believe that if we define an appropriate notion of Bayes regression function and margin conditions in the multiclass setting, such as what was done in (Vigogna et al., 2022), the results should extend naturally.

Open questions and future work: We highlight some interesting questions which we think would be worth investigating further:

• Other loss functions.
Our theoretical analysis crucially relies on the properties of the square loss surrogate, and so our results do not extend to popular loss functions 18 used in practice, such as hinge loss and cross-entropy. Establishing the same results for more general losses — perhaps under a different type of margin condition — would be an interesting avenue of future research. •Sparse architectures. We have only considered in this work deep FCNNs. However, state-of-the-art classification results are often obtained by sparse architectures, such as deep Convolutional Neural Networks. It would be interesting to establish similar “super-fast” rates for those types of architectures as well, for which the approximation theory is increasingly well understood (Zhou, 2020). 5 Proofs 5.1 Some Useful Results We start by collecting a number of useful lemmas which will be needed to prove the main results. Throughout the following, recall the definition of the misclassification risk R(sign f) (1) for a real-valued function f: R(sign f) :=P(X,Y)∼ρ(sign f(X)̸=Y) Our first lemma is a bound on the difference of the misclassification risks of classifiers induced by measurable functions f, g∈L∞(ρX) : Lemma 6. For any two f, g∈L∞(ρX),
we have |R(sign f)− R(sign g)|≤Px∼ρX` ∥f−g∥L∞(ρX)≥ |f(x)|˘ Proof of Lemma 6. We have |R(sign f)− R(sign g)|=|Er 1{signf(X)̸=Y} − 1{signg(X)̸=Y}s| ≤Er| 1{signf(X)̸=Y} − 1{signg(X)̸=Y}|s ≤Er 1{signf(X)̸= sign g(X)}s=Ppsignf(X)̸= sign g(X)q But now observe that for any x∈ X, sign f(x)̸= sign g(x) =⇒ | f(x)−g(x)|≥ |f(x)|. Hence the inclusion of events {signf(X)̸= sign g(X)} ⊆ ∥f−g∥L∞(ρX)≥ |f(X)| , which implies the claimed inequality. We next have an upper bound on the excess misclassification risk of a classifier sign fin terms of the L2(ρX) distance between fand the regression function η. Lemma 7. For any f∈L2(ρX), we have the inequality R(sign f)− R(sign η)≤ ∥f−η∥L2(ρX) 19 Proof. Note that we have η(X) =E[Y|X] =P(Y= 1|X)−P(Y=−1|X), hence by the law of total expectation : R(sign f)− R(sign η) =EXrEY[ 1{signf(X)̸=Y} − 1{signη(X)̸=Y} |X]s =EXr( 1{signf(X)̸= 1} − 1{signη(X)̸= 1})·P(Y= 1|X) + ( 1{signf(X)̸=−1} − 1{signη(X)̸=−1})·P(Y=−1|X)s ≤EX[|η(X)| 1{signf(X)̸= sign η(X)}] ≤EX[|η(X)−f(X)| 1{signf(X)̸= sign η(X)}] ≤ ∥f−η∥L2(ρX) The following result states that, whenever ηsatisfies either the low-noise assumption (A1) or the hard margin condition (A2) , any sufficiently good L2(ρX) approximation of ηwill satisfy the same assumption with high probability. Lemma 8. Letf∈L2(ρX)be such that ∥f−η∥L2(ρX)≤εfor some ε >0. The following is true : •Ifηsatisfies the low-noise assumption (A1) , we have for all δ > ε and0< ν < δ : P(|f(X)|≤ν)≤ε2 (δ−ν)2+Cδq •Ifηsatisfies the hard-margin assumption (A2) with margin δ >0andε < δ , we have for all ν < δ : P(|f(X)|≤ν)≤ε2 (δ−ν)2 Proof. •Assume that assumption (A1) holds. 
Observe that for any δ >0 P(|f(X)|≤ν) =P(|f(X)|≤ν;|η(X)|> δ) +P(|f(X)|≤ν;|η(X)|≤δ) ≤P(|f(X)|≤ν;|η(X)|> δ) +Cδq Now note that on the event |η(X)|> δ, we have by triangle inequality |f(X)−η(X)|+|f(X)|≥ |η(X)|> δ =⇒ |f(X)−η(X)|≥δ− |f(X)| Finally, Chebyshev’s inequality yields P(|f(X)|≤ν;|η(X)|> δ)≤P(|f(X)−η(X)|≥δ−ν) ≤∥f−η∥2 L2(ρX) (δ−ν)2 ≤ε2 (δ−ν)2, this yields the claimed inequality. 20 •If we now assume that ηsatisfies the hard-margin condition (A2) , we proceed similarly as in the previous case, with the only difference being that the term P(|f(X)|≤ν;|η(X)|≤δ) is now equal to zero. The rest of the argument carries through. The following lemma quantifies the approximation error of minimizers θλof the regularized population risk Rℓ,λoverNN(a, W, L, R ) in terms of the approximation error of the function classNN(a, W, L, R ). Lemma 9. Leta∈NL+1be a neural network architecture with width Wand depth L, and R > 0a parameter bound such that inf f∈NN (a,W,L,R )∥f−η∥L∞(ρX)≤ε for some constant ε≥0. Then, for any λ≥0, we have that any minimizer θλof the regularized population risk Rℓ,λoverNN(a, W, L, R )satisfies ∥f(·;θλ)−η∥L2(ρX)≤ε+c λ 2P(a)Rp, where P(a)denotes the number of parameters in the architecture a. Proof. First note that for any g∈L2(ρX), we have Rℓ(g) :=E(x,y)∼ρ“ (g(x)−y)2‰ =E(x,y)∼ρ“ (g(x)−η(x))2‰ +E(x,y)∼ρ“ (η(x)−y)2‰ + 2E(x,y)∼ρr(g(x)−y)(η(x)−y)s =∥g−η∥2 L2(ρX)+C+ 2E(x,y)∼ρEr(g(x)−η(x))(η(x)−y)|xs =∥g−η∥2 L2(ρX)+C+ 2E(x,y)∼ρr(g(x)−η(x))(η(x)−E[y|x])s =∥g−η∥2 L2(ρX)+C+ 0, where C≡E(x,y)∼ρr(η(x)−y)2s≥0 is a constant which does not depend on g. This shows that minimizing Rℓis equivalent to minimizing the L2(ρX) distance to η, and in particular for two square-integrable functions f, g∈L2(ρX), we have the identity Rℓ(f)− R ℓ(g) =∥f−η∥2 L2(ρX)−∥g−η∥2 L2(ρX). (18) Now denote by θ∗any minimizer of ∥f(·;θ)−η∥2 L2(ρX)overPa,R. For any
positive λ, we have Rℓ(f(·;θλ)) =Rℓ,λ(f(·;θλ))−λ 2|θλ|p p ≤ R ℓ,λ(f(·;θλ)) ≤ R ℓ,λ(f(·;θ∗)) =Rℓ(f(·;θ∗)) +λ 2|θ∗|p p ≤ R ℓ(f(·;θ∗)) +λ 2P(a)Rp 21 Where P(a) is the number of parameters in the architecture a. From the identity (18) above, we deduce that ∥f(·;θλ)−η∥2 L2(ρX)differs from ∥f(·;θ∗)−η∥2 L2(ρX)by at most λP(a)Rp/2. Because ∥·∥L2(ρX)is dominated by ∥·∥L∞(ρX), we find that ∥f(·;θλ)−η∥2 L2(ρX)≤ ∥f(·;θ∗)−η∥2 L2(ρX)+λ 2P(a)Rp ≤ ∥f(·;θ∗)−η∥2 L∞(ρX)+λ 2P(a)Rp ≤ε2+λ 2P(a)Rp we conclude the proof by using the subadditivity of x7→?x. The last result we will need is a large deviation type estimate on the probability that a minimizer pθλof the empirical risk pRℓ,λis far away from the argmin of Rℓ,λ. Such estimate can be readily obtained by applying covering number based concentration bounds, which are a standard tool in Learning Theory literature (Gy¨ orfi et al., 2002). Lemma 10. For any λ≥0, letpθλ∈ P a,Rbe a minimum-norm solution of the λ-ERM problem (10), and denote by Rℓ,λthe regularized population risk (11). If the well-separation assumption (A3) holds, then for all t >0, we have the estimate P(dist(pθλ,argmin Rℓ,λ)≥t)≤4Cov∞ˆ NN,Ktr 24 Lip( Fσ)˙ expˆ−nK2t2r 288˙ Proof. Observe the inclusion of events dist(pθλ,argmin Rℓ,λ)≥t=⇒ R ℓ,λ(f(·,pθλ))≥ inf θ∈Pa,R:dist( θ,argmin Rℓ,λ)≥tRℓ,λ(f(·,θ)) =⇒ R ℓ,λ(f(·,pθλ))− R ℓ,λ(f(·,θλ))≥Ktr =⇒ R ℓ,λ(f(·,pθλ))−pRℓ,λ(f(·,pθλ)) +pRℓ,λ(f(·,θλ))− R ℓ,λ(f(·,θλ))≥Ktr =⇒pRℓ,λ(f(·,θλ))− R ℓ,λ(f(·,θλ))≥Ktr/2 ORRℓ,λ(f(·,pθλ))−pRℓ,λ(f(·,pθλ))≥Ktr/2, where we used assumption (A3) in the second line. Now set ε:=Ktr/2 and let {f(·;θε) :θε∈Θε} be a minimal size ε/(12 Lip( Fσ))-cover of NN(a, W, L, R ). 
By observing that the map φ:Pa,R→R,θ7→(f(x;θ)−y)2 is 4 Lip( Fσ)-Lipschitz continuous uniformly over ( x, y)∈ X × {− 1,1}, we get that for any 22 θ∈ Pa,R, and θε∈Θεsuch that |θ−θε|∞≤ε/(12 Lip( Fσ)) : |pRℓ,λ(f(·,θ))− R ℓ,λ(f(·,θ))|=ˇˇˇˇˇ1 nnX i=1(f(xi;θ)−yi)2−E“ (f(x;θ)−y)2‰ˇˇˇˇˇ ≤ˇˇˇˇˇ1 nnX i=1(f(xi;θε)−yi)2−1 nnX i=1(f(xi;θ)−yi)2ˇˇˇˇˇ +ˇˇE“ (f(x;θε)−y)2‰ −E“ (f(x;θ)−y)2‰ˇˇ +ˇˇˇˇˇ1 nnX i=1(f(xi;θε)−yi)2−E“ (f(x;θε)−y)2‰ˇˇˇˇˇ ≤2 Lip( φ)|θ−θε|∞+ˇˇˇˇˇ1 nnX i=1(f(xi;θε)−yi)2−E“ (f(x;θε)−y)2‰ˇˇˇˇˇ ≤2ε 3+ˇˇˇˇˇ1 nnX i=1(f(xi;θε)−yi)2−E“ (f(x;θε)−y)2‰ˇˇˇˇˇ After taking the supremum over θ∈ Pa,Rin the above inequality, and observing that the Zi:= (f(xi;θε)−yi)2are i.i.d. and taking value in [0 ,4] almost surely, we apply the union bound together with Hoeffding’s inequality to find: P´ dist(pθλ,argmin Rℓ,λ)≥t¯ ≤P´ pRℓ,λ(f(·,θλ))− R ℓ,λ(f(·,θλ))≥Ktr/2¯ +P´ Rℓ,λ(f(·,pθλ))−pRℓ,λ(f(·,pθλ))≥Ktr/2¯ ≤2P˜ sup θ∈Pa,R|Rℓ,λ(f(·,θ))−pRℓ,λ(f(·,θ))|≥Ktr/2¸ = 2P˜ sup θ∈Pa,R|Rℓ,λ(f(·,θ))−pRℓ,λ(f(·,θ))|≥ε¸ ≤2Pˆ sup θε∈Θε|Rℓ,λ(f(·,θε))−pRℓ,λ(f(·,θε))|≥ε/3˙ ≤4Cov∞ˆ NN,ε 12 Lip( Fσ)˙ expˆ−nε2 72˙ . Finally, after substituting εbyKtr/2, we find P´ dist(pθλ,argmin Rℓ,λ)≥t¯ ≤4Cov∞ˆ NN,Ktr 24 Lip( Fσ)˙ expˆ−nK2t2r 288˙ , as desired. 5.2 Proof of Theorem 1 We prove Theorem 1 under the low-noise assumption (A1) only, the case (A2) can be shown using the exact same argument. 23 To begin, we decompose the excess risk in two parts : R(sign f(·;pθλ))− R∗:=R(sign f(·;pθλ))− R(sign η) =R(sign f(·;pθλ))− R(sign f(·;θλ)) +R(sign f(·;θλ))− R(sign η), wherepθλ∈ Pa,Randθλ∈ Pa,2Rare respectively minimum-norm minimizers of the empirical and population risk (10), such that |pθλ−θλ|∞= dist(pθλ,argmin Pa,RRℓ,λ). Note that by assumption (A4) and closedness of argminPa,RRℓ,λ, the above is always possible as long as the parameter bound Rhas been chosen larger than R∗·P(a)1/p, but the ℓ∞norm ofθλcan only be bounded by 2 Rinstead of R. Combining Lemma 7 and Lemma 9, we immediately get the bound on the first summand
: R(sign f(·;θλ))− R(sign η)≤ ∥f(·;θλ)−η∥L2(ρX)≤εapprox +a 2p−1λP(a)Rp. (19) It only remains to bound the second summand. To that end, we apply Lemma 6, which yields : R(sign f(·;pθλ))− R(sign f(·;θλ))≤Pn ∥f(·;pθλ)−f(·;θλ)∥L∞(ρX)≥ |f(X;θλ)|o . Now note that thanks to inequality (19), we can apply the “high-probability” margin prop- erty from Lemma 8 to get for all δ > ε approx ,λ <2p−1(δ−εapprox )2(P(a)Rp)−1, and 0 < ν < δ : Pn ∥f(·;pθλ)−f(·;θλ)∥L∞(ρX)≥ |f(X;θλ)|o =Pn ∥f(·;pθλ)−f(·;θλ)∥L∞(ρX)≥ |f(X;θλ)|;|f(X;θλ)|> νo +Pn ∥f(·;pθλ)−f(·;θλ)∥L∞(ρX)≥ |f(X;θλ)|;|f(X;θλ)|≤νo ≤Pn ∥f(·;pθλ)−f(·;θλ)∥L∞(ρX)≥νo + (δ−ν)−2´ εapprox +a 2p−1λP(a)Rp¯2 +Cδq We are now left with estimating the probability that ∥f(·;pθλ)−f(·;θλ)∥L∞≥ν. By Lips- chitzness of Fσ, we have Pn ∥f(·;pθλ)−f(·;θλ)∥L∞(ρX)≥νo ≤P´ |pθλ−θλ|∞≥21−Lν/Lip(Fσ)¯ =P´ dist(pθλ,argmin Rℓ,λ)≥21−Lν/Lip(Fσ)¯ ≤4Cov∞ˆ NN,K(21−Lν)r 24 Lip( Fσ)1+r˙ expˆ−nK2(21−Lν)2r 288 Lip( Fσ)2r˙ , 24 where the 21−Lfactor is due to the substitution R←2Rin the Lipschitz bound from Lemma 3, and the exponential inequality is due to Lemma 10. Combining all of these inequalities, we have thus shown that for all δ > ε approx ,λ <2p−1(δ−εapprox )2(P(a)Rp)−1, and 0 < ν < δ : R(sign f(·;pθλ))− R∗≤εapprox +b 21−pλP(a)Rp+Cδq + (δ−ν)−2ˆ εapprox +b 21−pλP(a)Rp˙2 + 4Cov∞ˆ NN,K(21−Lν)r 24 Lip( Fσ)1+r˙ expˆ−nK2(21−Lν)2r 288 Lip( Fσ)2r˙ which concludes the proof of Theorem 1 under assumption (A1) . As was mentioned in the beginning, the proof under (A2) can be done with the exact same argument : the only difference is that the Cδqterm will disappear when applying Lemma 8. 5.3 Proof of Theorem 3 Start by fixing α >0, and recall the approximation error bound given by Assumption (A5) , according to which inf f∈NN (a,W,L,R )∥f−η∥L∞(ρX)≤C3W−2s/d 0 L−2s/d 0 for some architecture asuch that W(a) =C1W0log2(W0),L(a) =C2L0log2(L0) + 2d, where W0, L0∈N≥2are arbitrary, C1= (3s)dd,C2=?sandC3=∥η∥Cs(X)sd8s. 
By fixing L0≥2 as a constant independent of nand letting W0=L−1 0(n−α/C3)−d/2s, we deduce that there is a Neural Network architecture anwith depth Ln=C2L0log2(L0) + 2d and width Wn=C1` L−1 0(n−α/C3)−d/2s˘ log2` L−1 0(n−α/C3)−d/2s˘ =C1Cd/2s 3˜O` nαd/2s˘ =˜O` nαd/2s˘ , where ˜Ohides logarithmic factors, such that inf f∈NN (an,Wn,Ln,R)∥f−η∥L∞(ρX)≤n−α Furthermore, the number of parameters in anis bounded as P(an) =LnX l=1a(l) na(l−1) n+a(l) n≤Ln(W2 n+Wn) =˜O` nαd/s˘ . 25 Similarly, recall the Lipschitz constant bound given by Lemma 3 : sup θ,θ′∈Pan,R θ̸=θ′∥Fσ(θ)− F σ(θ′)∥C(X) |θ−θ′|∞≤2L2 nRLn−1WLn n, and note that with R≡R∗P(an)1/p, we have R=˜O` nαd/ps˘ Putting these together we get sup θ,θ′∈Pan,R θ̸=θ′∥Fσ(θ)− F σ(θ′)∥C(X) |θ−θ′|∞≤2L2 n˜O´` nαd/ps˘Ln−1` nαd/s˘Ln¯ ≤˜O´ nαd s·rLn−1 p+Lns¯ =˜O´ nαd ps·rLn(1+p)−1s¯ =˜O´ nαd ps·rp?sL0log2L0+2dq(1+p)−1s¯ , where all the logarithmic factors and terms which do not depend on nare hidden in the ˜O. We are now left with bounding the quantity Cov∞ˆ NN,K(21−Lnν)r 24 Lip( Fσ)1+r˙ , which by Lemma 4, we know is bounded by ˆ 1 +48RLip(Fσ)2+r K(21−Lnν)r˙P(an) ≤ˆ49RLip(Fσ)2+r K(21−Lnν)r˙P(an) . Using the bounds on Rand Lip( Fσ) above, we find that 49RLip(Fσ)2+r K·2r(1−Ln)≤˜O´ nαd ps·n(2+r)αd ps·rp?sL0log2L0+2dq(1+p)−1s¯ The above quantity being polynomial in n, we thus find that the covering number grows as the exponential of P(an), up to a multiplicative logarithmic factor: log„ Cov∞ˆ NN,K(21−Lν)r 24 Lip( Fσ)1+r˙ȷ =O´ P(an) log( nβ·ν−r)¯ , where β≡αd ps´ 1 + (2 +
r)·“ p?sL0log2L0+ 2dq(1 +p)−1‰¯ To conclude the proof for the case (A1) , we let εapprox≡n−α r,δ≡2n−α 2randν≡n−α 2r: observe that by picking λsuch that 0≤λ≤2p−1ε2 approx ((R∗)pP(an)2)−1=O´ n−2α(s+d) rs¯ , 26 we have λ <2p−1(δ−εapprox )2(RP(an)p)−1and εapprox +b 21−pλP(an)Rp≤2εapprox . We are thus allowed to apply Theorem 1 with these values of λ, which yields the excess risk bound: R´ signf(·;pθλ)¯ − R∗≤2n−α r+ 2Cn−αq 2r+ 4n−α r + 4 exp´ −A1n1−A2+nαd slog(γn(α+2β)/2)¯ , where A1≡K222r(1−Ln) 288, A 2≡αˆ 1 +d/p s·´ r?sL0log2(L0) + 2ds·(2 + 2 p)−2¯˙ , andγ > 0 is a quantity which does not depend on n. Hence we see that the exponential term converges to zero as n→ ∞ if 1−A2>0 and 1 −A2> αd/s , or equivalently if 1−A2> αd/s , which after some algebra is equivalent to the following inequality for α: α <ˆ 1 +d/p s·´ r?sL0log2(L0) + 2ds·(2 + 2 p) +p−2¯˙−1 . The proof under the assumption that (A2) holds with margin δ >0 is very similar: we now pickεapprox≡n−α r,ν≡δ/2, and 0≤λ≤2p−1ε2 approx ((R∗)pP(an)2)−1=O´ n−2α(s+d) rs¯ , such that Theorem 1 can be applied, to yield for all n≥ (δ/2)−r/α : R´ signf(·;pθλ)¯ − R∗≤2n−α r+ 16n−α r+ 4 exp` −A1n1−A2+nαd/slog(γnβ(δ/2)−r)˘ , where A1≡K2(δ2−Ln)2r 288, A 2≡αd sp` [?sL0log2L0+ 2d]·(2 + 2 p)−2˘ . Hence, we see as before that in this case the term 4 exp` −A1n1−A2+nαd/slog(γnβ(δ/2)−r)˘ vanishes exponentially fast as n→ ∞ if 1−A2> αd/s , which equivalently means that α needs to satisfy the following inequality α <sp dp[?sL0log2L0+ 2d]·(2 + 2 p) +p−2q. 6 Acknowledgement Tepakbong acknowledges the support of Hong Kong PhD Fellowship. Xiang Zhou acknowl- edges the support from Hong Kong General Research Funds (11308121, 11318522, 11308323), and the NSFC/RGC Joint Research Scheme [RGC Project No. N-CityU102/20 and NSFC Project No. 12061160462. 
The work of Ding-Xuan Zhou is partially supported by the Aus- tralian Research Council under project DP240101919 and partially supported by InnoHK initiative, the Government of the HKSAR, China, and the Laboratory for AI-Powered Fi- nancial Technologies. 27 References Jean-Yves Audibert and Alexandre B Tsybakov. Fast learning rates for plug-in classifiers. The Annals of statistics , 35(2):608–633, 2007. Julius Berner, Philipp Grohs, and Arnulf Jentzen. Analysis of the generalization error: Empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of black–scholes partial differential equations. SIAM Journal on Mathematics of Data Science , 2(3):631–657, 2020. Alberto Bietti and Francis Bach. Deep equals shallow for relu networks in kernel regimes, 2021. Thijs Bos and Johannes Schmidt-Hieber. Convergence rates of deep relu networks for multiclass classification. Electronic Journal of Statistics , 16(1):2724–2773, 2022. Vivien Cabannnes and Stefano Vigogna. A case of exponential convergence rates for svm. In International Conference on Artificial Intelligence and Statistics , pages 359–374. PMLR, 2023. Emmanuel J Candes, Michael B Wakin, and Stephen P Boyd. Enhancing sparsity by reweighted ℓ1minimization. Journal of Fourier analysis and applications , 14:877–905, 2008. Aurore Delaigle and Peter Hall. Achieving near perfect classification for functional data. Journal of the Royal Statistical Society: Series B (Statistical Methodology) , 74(2):267–286,
2012. Luc Devroye, L´ aszl´ o Gy¨ orfi, and G´ abor Lugosi. A probabilistic theory of pattern recognition , vol- ume 31. Springer Science & Business Media, 2013. Dennis Elbr¨ achter, Dmytro Perekrestenko, Philipp Grohs, and Helmut B¨ olcskei. Deep neural net- work approximation theory. IEEE Transactions on Information Theory , 67(5):2581–2623, 2021. Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, and George Pappas. Efficient and accurate estimation of lipschitz constants for deep neural networks. Advances in Neural Information Processing Systems , 32, 2019. Han Feng, Shuo Huang, and Ding-Xuan Zhou. Generalization analysis of cnns for classification on spheres. IEEE Transactions on Neural Networks and Learning Systems , 2021. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In ICLR . OpenReview.net, 2019. URL http://dblp.uni-trier.de/db/ conf/iclr/iclr2019.html#FrankleC19 . Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning . MIT Press, 2016. http: //www.deeplearningbook.org . L´ aszl´ o Gy¨ orfi, Michael K¨ ohler, Adam Krzy˙ zak, and Harro Walk. A distribution-free theory of nonparametric regression , volume 1. Springer, 2002. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. CoRR , abs/1503.02531, 2015. URL http://dblp.uni-trier.de/db/journals/corr/ corr1503.html#HintonVD15 . 28 Tianyang Hu, Ruiqi Liu, Zuofeng Shang, and Guang Cheng. Minimax optimal deep neural network classifiers under smooth decision boundary. arXiv preprint arXiv:2207.01602 , 2022a. Tianyang Hu, Jun Wang, Wenjia Wang, and Zhenguo Li. Understanding square loss in training overparametrized neural network classifiers. Advances in Neural Information Processing Systems , 35:16495–16508, 2022b. Like Hui and Mikhail Belkin. Evaluation of neural architectures trained with square loss vs cross- entropy in classification tasks. 
arXiv preprint arXiv:2006.07322 , 2020. Arthur Jacot, Franck Gabriel, and Cl´ ement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in neural information processing systems , 31, 2018. Yongdai Kim, Ilsang Ohn, and Dongha Kim. Fast convergence rates of deep neural networks for classification. Neural Networks , 138:179–197, 2021. Hyunouk Ko, Namjoon Suh, and Xiaoming Huo. On excess risk convergence rates of neural network classifiers. arXiv preprint arXiv:2309.15075 , 2023. Vladimir Koltchinskii and Olexandra Beznosova. Exponential convergence rates in classification. InLearning Theory: 18th Annual Conference on Learning Theory, COLT 2005, Bertinoro, Italy, June 27-30, 2005. Proceedings 18 , pages 295–307. Springer, 2005. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo- lutional neural networks. Advances in neural information processing systems , 25, 2012. Anders Krogh and John Hertz. A simple weight decay can improve generalization. Advances in neural information processing systems , 4, 1991. Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through l0 regularization. arXiv preprint arXiv:1712.01312 , 2017. Jianfeng Lu, Zuowei Shen, Haizhao Yang, and Shijun Zhang. Deep network approximation for smooth functions. SIAM Journal on Mathematical Analysis , 53(5):5465–5506, 2021. Enno Mammen and Alexandre B Tsybakov. Smooth discrimination analysis. The Annals of Statis- tics, 27(6):1808–1829, 1999. Tong Mao and Ding-Xuan Zhou. Approximation of functions from korobov spaces by deep convo- lutional neural networks. Advances in Computational Mathematics , 48(6):84, 2022. Joseph T Meyer. Optimal convergence rates of deep neural networks in
|
https://arxiv.org/abs/2505.08262v1
|
a classification setting. Electronic Journal of Statistics , 17(2):3613–3659, 2023. Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. Advances in neural information processing systems , 27, 2014. Atsushi Nitanda and Taiji Suzuki. Optimal rates for averaged stochastic gradient descent under neural tangent kernel regime. arXiv preprint arXiv:2006.12297 , 2020. Ilsang Ohn and Yongdai Kim. Smooth function approximation by deep neural networks with general activation functions. Entropy , 21(7):627, 2019. 29 Philipp Petersen and Felix Voigtlaender. Optimal approximation of piecewise smooth functions using deep relu neural networks. Neural Networks , 108:296–330, 2018. Philipp Petersen and Felix Voigtlaender. Optimal learning of high-dimensional classification prob- lems using deep neural networks. arXiv preprint arXiv:2112.12555 , 2021. Bodhisattva Sen. A gentle introduction to empirical process theory and applications. Lecture Notes, Columbia University , 11:28–29, 2018. Thiago Serra, Christian Tjandraatmadja, and Srikumar Ramalingam. Bounding and counting linear regions of deep neural networks. In International conference on machine learning , pages 4558–4566. PMLR, 2018. Steve Smale and Ding-Xuan Zhou. Learning theory estimates via integral operators and their approximations. Constructive approximation , 26(2):153–172, 2007. Ingo Steinwart and Andreas Christmann. Support vector machines . Springer Science & Business Media, 2008. Ingo Steinwart and Clint Scovel. Fast rates for support vector machines. In Learning Theory: 18th Annual Conference on Learning Theory, COLT 2005, Bertinoro, Italy, June 27-30, 2005. Proceedings 18 , pages 279–294. Springer, 2005. Taiji Suzuki. Adaptivity of deep relu network for learning in besov and mixed smooth besov spaces: optimal rate and curse of dimensionality. arXiv preprint arXiv:1810.08033 , 2018. Alexander B Tsybakov. 
Optimal aggregation of classifiers in statistical learning. The Annals of Statistics , 32(1):135–166, 2004. Aad W Van der Vaart. Asymptotic statistics , volume 3. Cambridge university press, 2000. Stefano Vigogna, Giacomo Meanti, Ernesto De Vito, and Lorenzo Rosasco. Multiclass learning with margin: exponential rates with no bias-variance trade-off. arXiv preprint arXiv:2202.01773 , 2022. Aladin Virmaux and Kevin Scaman. Lipschitz regularity of deep neural networks: analysis and efficient estimation. Advances in Neural Information Processing Systems , 31, 2018. Tomoya Wakayama and Masaaki Imaizumi. Fast convergence on perfect classification for functional data. Statistica Sinica , 34:1801–1819, 2024. Chuanyun Xu, Wenjian Gao, Tian Li, Nanlan Bai, Gang Li, and Yang Zhang. Teacher-student collaborative knowledge distillation for image classification. Applied Intelligence , 53(2):1997– 2009, 2023. Dmitry Yarotsky. Error bounds for approximations with deep relu networks. Neural Networks , 94: 103–114, 2017. Shijun Zhang, Jianfeng Lu, and Hongkai Zhao. Deep network approximation: Beyond relu to diverse activation functions. Journal of Machine Learning Research , 25(35):1–39, 2024. URL http://jmlr.org/papers/v25/23-0912.html . Ding-Xuan Zhou. Universality of deep convolutional neural networks. Applied and computational harmonic analysis , 48(2):787–794, 2020. 30 Tian-Yi Zhou, Matthew Lau, Jizhou Chen, Wenke Lee, and Xiaoming Huo. Optimal classification- based anomaly detection with neural networks: Theory and practice. arXiv preprint arXiv:2409.08521 , 2024. 31
|
https://arxiv.org/abs/2505.08262v1
|
arXiv:2505.08565v1 [math.ST] 13 May 2025

On testing the class of symmetry using entropy characterization and empirical likelihood approach

Ganesh Vishnu Avhad^a, Ananya Lahiri^a, Sudheesh K. Kattumannil^b

^a Department of Mathematics and Statistics, Indian Institute of Technology, Tirupati, Andhra Pradesh, India
^b Statistical Sciences Division, Indian Statistical Institute, Chennai, Tamil Nadu, India

Abstract

In this paper, we obtain a new characterization result for symmetric distributions based on an entropy measure. Using the characterization, we propose a nonparametric test for the symmetry of a distribution. We also develop the jackknife empirical likelihood and the adjusted jackknife empirical likelihood ratio tests. The asymptotic properties of the proposed test statistics are studied. We conduct extensive Monte Carlo simulation studies to assess the finite-sample performance of the proposed tests. The simulation results indicate that the jackknife empirical likelihood and adjusted jackknife empirical likelihood ratio tests perform better than the existing tests. Finally, two real data sets are analysed to illustrate the applicability of the proposed tests.

Keywords: Continuous symmetric distribution · Cumulative entropy · U-statistics · Jackknife empirical likelihood · Wilks' theorem.

1. Introduction

In probability and statistics, the assessment of distributional symmetry is a fundamental problem. Many classical statistical procedures, such as the sign test and the Wilcoxon signed-rank test, depend on the assumption of symmetry to ensure optimal performance. Methods such as trimmed means and M-estimators are constructed particularly under the assumption of symmetry in the data. Symmetry also improves the effectiveness of resampling methods such as the bootstrap by accelerating the convergence of confidence intervals via the pivotal quantity.
Symmetrically distributed pivotal quantities often lead to more accurate inferential outcomes, making the assumption of symmetry advantageous in a range of applications. [Email address: skkattu@isichennai.res.in (Sudheesh K. Kattumannil).] In finance, models such as the Sharpe–Lintner capital asset pricing model and the Black–Scholes option pricing model rely heavily on the assumption of symmetry about zero (Davison and Mamba (2017)). The literature extensively examines various aspects of the symmetry of probability distributions, with numerous authors investigating the properties of symmetric distributions (see, e.g., Ushakov (2011), Ahmadi (2021), Husseiny et al. (2024) and the references therein). Symmetric distributions are utilized by traders in the financial sector to forecast the value of assets over time; the mean reversion hypothesis posits that asset prices will eventually revert to their long-term mean or average values, based on the assumption of a symmetric distribution of prices over time; see Ahmed et al. (2018).

The broad applicability of symmetry assumptions in statistical modelling has been well documented; however, Partlett and Patil (2017) reported potential issues with the indiscriminate fitting of symmetric distributions to data, emphasizing the risks associated with such practices. Therefore, the first step in any robust statistical inference involving symmetric distributions is to evaluate whether the data align sufficiently with the symmetry assumption. As a result of its wide range of practical uses, several goodness-of-fit tests for symmetric distributions, using different approaches, are available in the literature. The most famous are the classical sign and Wilcoxon signed-rank tests (see, e.g., Feuerverger and Mureika (1977) and Maesono (1987)); modified versions can be found in Vexler et al. (2023). Symmetry holds significant relevance in nonparametric statistics and structured models, where a common assumption is that errors are distributed symmetrically. In the regression setup, Allison and Pretorius (2017) conducted an extensive Monte Carlo simulation study to test whether the error distribution in the linear regression model is symmetric. Additionally, several papers offer comparative analyses of symmetry tests; see Maesono (1987), Farrell and Rogers-Stewart (2006), Milošević and Obradović (2019) and Ivanović et al. (2020a). Characterization theorems play a crucial role, particularly in the development of goodness-of-fit tests; one may refer to Marchetti and Mudholkar (2002) and Nikitin (2017). Tests based on characterization provide a robust framework for assessing symmetry and have been applied to various families of distributions. Recently, Milošević and Obradović (2016), Ahmadi and Fashandi (2019) and Božin et al. (2020) proposed characterization-based tests for the univariate symmetry of a distribution. Entropy measures serve as a compelling alternative to traditional symmetry measures (Fashandi and Ahmadi (2012)). Entropy, a notion built upon information theory, quantifies the uncertainty associated with a probability distribution. It captures the inherent randomness and disorder within the data, providing an in-depth perspective on distributional characteristics. Several forms of entropy, such as Shannon entropy, Rényi entropy, and Tsallis entropy, have been developed to address different aspects of uncertainty. Recently, several papers have been published on the characterization of distribution functions using the entropies of order statistics and record values. Ahmadi and Fashandi (2019) introduced several characterization results for symmetric continuous distributions, established via the properties of order statistics and various information measures.
Ahmadi (2020) obtained a characterization of the symmetric continuous distributions based on the properties of k-records and spacings. Ahmadi (2021), Jose and Abdul Sathar (2022) and Gupta and Chaudhary (2024) developed new tests for distributional symmetry based on different information measures.

Thomas and Grunkemeier (1975) were the first to obtain empirical likelihood-based confidence intervals, in the context of survival data analysis. The empirical likelihood method is a powerful nonparametric approach that does not depend on specific parametric assumptions, and inference based on it does not require variance estimation. Jing et al. (2009) introduced the jackknife empirical likelihood (JEL) approach, integrating two nonparametric techniques: the jackknife and empirical likelihood. Since then, various tests based on JEL have been developed and explored in the literature. Some recent works in this area can be found in Liu et al. (2023), Huang et al. (2024), Chen et al. (2024) and Suresh and Kattumannil (2025). For a comprehensive overview of the JEL method, including recent advancements, the reader is referred to the review articles by Lazar (2021) and Liu and Zhao (2023). The JEL method can, however, face challenges due to the empty set problem, as noted by Chen et al. (2008) and Jing et al. (2017). To enhance the accuracy of JEL, several techniques have been introduced, including bootstrap calibration (Owen (1988)) and Bartlett correction (Chen and Cui (2007)). To address both the empty set issue and low coverage probability, the adjusted jackknife empirical likelihood (AJEL) method was proposed by Chen et al. (2008) and Wang et al. (2015). In what follows, we introduce a novel entropy-based characterization for symmetric distributions and propose a class of goodness-of-fit tests for symmetric distributions.

The rest of the article is organized as follows. In Section 2, we provide characterization results for symmetric distributions based on an entropy measure and demonstrate the results with a few examples. In Section 3, based on a random sample $X_1, X_2, \ldots, X_n$ from an unknown distribution $F$, a nonparametric goodness-of-fit test procedure is developed to test $H_0: F\in\mathcal{F}_S$ against $H_1: F\notin\mathcal{F}_S$, where $\mathcal{F}_S=\{F: F \text{ is a symmetric distribution}\}$. We also propose JEL and AJEL ratio tests for distributional symmetry based on the characterization introduced in Section 2. In Section 4, the finite-sample performance of the proposed tests is evaluated and compared with other symmetry tests through a Monte Carlo simulation study. Section 5 presents and analyses two real examples. Finally, Section 6 concludes with some closing remarks.

2. Characterization result based on entropy measure

Since the test is based on an entropy measure, we begin by modifying the generalized entropy measure obtained by Kattumannil et al. (2022).

Definition 1. Let $X$ be an absolutely continuous real-valued random variable with distribution function $F$ and finite mean. Furthermore, let $\phi(\cdot)$ be a function of $X$, and let $w(\cdot)$ be a positive weight function. The generalized cumulative residual entropy (GCRE) of $X$ is defined as
$$\mathrm{GCRE}(X)=\int_{-\infty}^{\infty} w(u)\,E\big[\phi(X)-\phi(u)\mid X\ge u\big]\,dF(u), \qquad (1)$$
and the generalized cumulative entropy (GCE) of $X$ is defined as
$$\mathrm{GCE}(X)=\int_{-\infty}^{\infty} w(u)\,E\big[\phi(u)-\phi(X)\mid X\le u\big]\,dF(u), \qquad (2)$$
where $w(\cdot)$ and $\phi(\cdot)$ can be chosen arbitrarily, subject to the existence of the above integrals, such that $\mathrm{GCRE}(X)$ and $\mathrm{GCE}(X)$ become concave.
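To make the definition concrete, both measures can be estimated from data by plugging in the empirical distribution function. The Python sketch below is our own illustration (not the authors' code): it uses midpoint ranks $(i+0.5)/n$ for $F$ together with the choices $w(u)=F(u)(1-F(u))$ and $\phi(u)=u$ that are adopted later in the paper.

```python
# Plug-in estimates of GCRE(X) and GCE(X) from a sample, using midpoint
# ranks (i + 0.5)/n as the empirical cdf.  Illustrative sketch only; the
# weight and phi choices match w(u) = F(u)(1 - F(u)) and phi(u) = u.

def gcre_gce(sample):
    xs = sorted(sample)
    n = len(xs)
    gcre = gce = 0.0
    for i, u in enumerate(xs):
        F = (i + 0.5) / n                       # empirical cdf at u
        w = F * (1.0 - F)                       # weight w(u)
        upper = [x - u for x in xs if x >= u]   # terms of E[X - u | X >= u]
        lower = [u - x for x in xs if x <= u]   # terms of E[u - X | X <= u]
        gcre += w * (sum(upper) / len(upper)) / n
        gce += w * (sum(lower) / len(lower)) / n
    return gcre, gce
```

For a sample that is exactly symmetric about its median, the two estimates coincide; on a fine uniform grid over $[-1,1]$, both approach $1/12$, the value obtained analytically in the uniform example of Section 2.1.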
We now present the characterization theorem based on the above definition; we will use it to construct our tests. The following theorem gives the relationship between a symmetric distribution and $\mathrm{GCRE}(X)$ and $\mathrm{GCE}(X)$.

Theorem 1. Suppose that $w(\cdot)$ is a positive weight function and that $\phi(\cdot)$ satisfies $\phi(x)\ge\phi(u)$ for $x<u$ and $\phi(x)\le\phi(u)$ for $x>u$. Then the following two statements are equivalent:
(i) $X$ is a symmetric random variable;
(ii) $\mathrm{GCRE}(X)=\mathrm{GCE}(X)$.

Proof. First, to show (i) $\Longrightarrow$ (ii), let $X$ be a symmetric random variable, so that the distribution function $F$ is symmetric about a point $u\in\mathbb{R}$, i.e.,
$$F(u+x)=1-F(u-x) \quad\text{and}\quad \bar F(u+x)=F(u-x), \qquad \forall x\in\mathbb{R}. \qquad (3)$$
Since $X$ is symmetric about $u$, it follows that
$$E[\phi(X)-\phi(u)\mid X\ge u]=E[\phi(u)-\phi(X)\mid X\le u]. \qquad (4)$$
To show (4), write
$$E[\phi(X)-\phi(u)\mid X\ge u]=\int_{u}^{\infty}\frac{\phi(x)-\phi(u)}{\bar F(u)}\,dF(x).$$
Applying the change of variable $x=y+u$, and noting that identity (3) evaluated at $x=0$ gives $F(u)=1-F(u)$, so that $F(u)=1/2$ and hence $F(u)=\bar F(u)$, one has
$$E[\phi(X)-\phi(u)\mid X\ge u]=\int_{0}^{\infty}\frac{\phi(u+y)-\phi(u)}{\bar F(u)}\,dF(u+y).$$
Next, applying the substitution $y=u-x$ (so that $u+y=2u-x$) together with (3), we have
$$E[\phi(X)-\phi(u)\mid X\ge u]=\int_{u}^{\infty}\frac{\phi(2u-x)-\phi(u)}{F(u)}\,dF(2u-x).$$
Letting $z=2u-x$ (so $x=2u-z$), we obtain
$$E[\phi(X)-\phi(u)\mid X\ge u]=\int_{-\infty}^{u}\frac{\phi(u)-\phi(z)}{F(u)}\,dF(z)=E[\phi(u)-\phi(X)\mid X\le u].$$
Therefore,
$$\mathrm{GCRE}(X)=\int_{-\infty}^{\infty}w(u)E[\phi(X)-\phi(u)\mid X\ge u]\,dF(u)=\int_{-\infty}^{\infty}w(u)E[\phi(u)-\phi(X)\mid X\le u]\,dF(u)=\mathrm{GCE}(X)$$
when $X$ is a symmetric random variable. Hence (i) $\Longrightarrow$ (ii).

Next, to show (ii) $\Longrightarrow$ (i), we have
$$0=\mathrm{GCRE}(X)-\mathrm{GCE}(X)=\int_{-\infty}^{\infty}\frac{w(u)}{\bar F(u)}\int_{u}^{\infty}\big(\phi(x)-\phi(u)\big)\,dF(x)\,dF(u)-\int_{-\infty}^{\infty}\frac{w(u)}{F(u)}\int_{-\infty}^{u}\big(\phi(u)-\phi(x)\big)\,dF(x)\,dF(u)$$
$$=\int_{-\infty}^{\infty}\bigg[\frac{1}{\bar F(u)}\int_{u}^{\infty}\big(\phi(x)-\phi(u)\big)\,dF(x)-\frac{1}{F(u)}\int_{-\infty}^{u}\big(\phi(u)-\phi(x)\big)\,dF(x)\bigg]\,w(u)\,dF(u) \qquad (5)$$
$$=\int_{-\infty}^{\infty}g(u)\,w(u)\,dF(u) \quad\text{(say)}.$$
We know that if $g(u)\ge 0$ for all $u$ in some domain $D$ and $\int_D g(u)\,du=0$, then $g(u)=0$ almost everywhere (a.e.) on $D$. Under the conditions specified in the theorem, the function $g(u)$ satisfies $g(u)\ge 0$ for all $u$; hence, if (5) holds, it follows from the completeness property for the distribution of $U$ that
$$\frac{1}{\bar F(u)}\int_{u}^{\infty}\big(\phi(x)-\phi(u)\big)\,dF(x)=\frac{1}{F(u)}\int_{-\infty}^{u}\big(\phi(u)-\phi(x)\big)\,dF(x),$$
that is,
$$\frac{1}{\bar F(u)}\int_{u}^{\infty}\phi(x)\,dF(x)-\phi(u)=\phi(u)-\frac{1}{F(u)}\int_{-\infty}^{u}\phi(x)\,dF(x),$$
and therefore
$$\frac{1}{\bar F(u)}\int_{u}^{\infty}\phi(x)\,dF(x)+\frac{1}{F(u)}\int_{-\infty}^{u}\phi(x)\,dF(x)=2\phi(u). \qquad (6)$$
Recalling that
$$E[\phi(X)\mid X\ge u]=\frac{1}{\bar F(u)}\int_{u}^{\infty}\phi(x)\,dF(x) \quad\text{and}\quad E[\phi(X)\mid X\le u]=\frac{1}{F(u)}\int_{-\infty}^{u}\phi(x)\,dF(x),$$
(6) simplifies to
$$E[\phi(X)\mid X\ge u]+E[\phi(X)\mid X\le u]=2\phi(u),$$
that is,
$$E[\phi(X)\mid X\ge u]-\phi(u)=\phi(u)-E[\phi(X)\mid X\le u]. \qquad (7)$$
Now assume, without loss of generality, that $\phi(u)=u$. The mean residual life and the mean past life at $u$ are
$$E[X-u\mid X\ge u]=\frac{1}{\bar F(u)}\int_{u}^{\infty}\bar F(x)\,dx \quad\text{and}\quad E[u-X\mid X\le u]=\frac{1}{F(u)}\int_{-\infty}^{u}F(x)\,dx,$$
respectively. From (7), we have
$$\frac{1}{1-F(u)}\int_{u}^{\infty}\big(1-F(x)\big)\,dx=\frac{1}{F(u)}\int_{-\infty}^{u}F(x)\,dx. \qquad (8)$$
Consider the transformation $x=u+y$ in the first integral, so that
$$\int_{u}^{\infty}\big(1-F(x)\big)\,dx=\int_{0}^{\infty}\big(1-F(u+y)\big)\,dy, \qquad (9)$$
and $x=u-y$ in the second integral, giving
$$\int_{-\infty}^{u}F(x)\,dx=\int_{0}^{\infty}F(u-y)\,dy. \qquad (10)$$
From (9) and (10), we get
$$\frac{1}{1-F(u)}\int_{0}^{\infty}\big(1-F(u+y)\big)\,dy=\frac{1}{F(u)}\int_{0}^{\infty}F(u-y)\,dy.$$
Multiplying both sides by $F(u)(1-F(u))$ yields
$$F(u)\int_{0}^{\infty}\big(1-F(u+y)\big)\,dy=\big(1-F(u)\big)\int_{0}^{\infty}F(u-y)\,dy,$$
that is,
$$\int_{0}^{\infty}\Big[F(u)\big(1-F(u+y)\big)-\big(1-F(u)\big)F(u-y)\Big]\,dy=0, \quad\text{i.e.,}\quad \int_{0}^{\infty}G(y)\,dy=0 \quad\text{(say)}.$$
If $G(y)\ge 0$ (or $G(y)\le 0$) for all $y\ge 0$ and the integral is zero, then $G(y)=0$ for all $y\ge 0$. Hence
$$F(u)\big(1-F(u+y)\big)=\big(1-F(u)\big)F(u-y), \qquad u>0,\ y>0, \qquad (11)$$
for almost all $F(u)\in(0,1)$.
Following the technique of Theorem 2 of Ahmadi (2020), we choose $u=F^{-1}(1/2)$ (i.e., the point where $F(u)=1/2$); then (11) becomes
$$1-F(u+y)=F(u-y), \qquad y>0.$$
This completes the proof of (ii) $\Longrightarrow$ (i).

We can define several entropy measures using (1) and (2) with different choices of $w(\cdot)$ and $\phi(\cdot)$; for a detailed study, one can see Kattumannil et al. (2022) and the references therein. To construct the test for symmetry, we consider the weight function $w(u)=\bar F(u)F(u)$ and $\phi(u)=u$. Then, by Theorem 1, $X$ has a symmetric distribution if and only if
$$\int_{-\infty}^{\infty}\bar F(u)F(u)\,E[X-u\mid X\ge u]\,dF(u)=\int_{-\infty}^{\infty}\bar F(u)F(u)\,E[u-X\mid X\le u]\,dF(u). \qquad (12)$$

2.1. Examples

To illustrate the results obtained above, we now explore a few examples.

1. Uniform distribution. Let $X$ follow a uniform distribution on the interval $[-1,1]$, with probability density function (pdf) and cumulative distribution function (cdf)
$$f(x)=\frac{1}{2} \quad\text{and}\quad F(x)=\frac{x+1}{2}, \qquad -1\le x\le 1.$$
Hence
$$\int_{-1}^{1}\bar F(u)F(u)E[X-u\mid X\ge u]\,dF(u)=\int_{-1}^{1}F(u)\int_{u}^{1}(x-u)\,dF(x)\,dF(u)=\int_{-1}^{1}\frac{u+1}{2}\int_{u}^{1}\frac{x-u}{4}\,dx\,du=\frac{1}{12},$$
and
$$\int_{-1}^{1}\bar F(u)F(u)E[u-X\mid X\le u]\,dF(u)=\int_{-1}^{1}\bar F(u)\int_{-1}^{u}(u-x)\,dF(x)\,dF(u)=\int_{-1}^{1}\frac{1-u}{2}\int_{-1}^{u}\frac{u-x}{4}\,dx\,du=\frac{1}{12}.$$

2. Logistic distribution. The pdf and cdf of the logistic distribution are
$$f(x)=\frac{e^{-x}}{(1+e^{-x})^{2}} \quad\text{and}\quad F(x)=\frac{1}{1+e^{-x}}, \qquad -\infty<x<\infty.$$
One has
$$\int_{-\infty}^{\infty}\bar F(u)F(u)E[X-u\mid X\ge u]\,dF(u)=\int_{-\infty}^{\infty}\int_{u}^{\infty}(x-u)\,\frac{e^{-(x+u)}}{(1+e^{-x})^{3}(1+e^{-u})^{2}}\,dx\,du=\frac{3}{4},$$
and
$$\int_{-\infty}^{\infty}\bar F(u)F(u)E[u-X\mid X\le u]\,dF(u)=\int_{-\infty}^{\infty}\int_{-\infty}^{u}(u-x)\,\frac{e^{-(2x+u)}}{(1+e^{-x})^{3}(1+e^{-u})^{2}}\,dx\,du=\frac{3}{4}.$$

3. Exponential distribution. Let $X$ follow an exponential distribution with parameter $\lambda=1$. Then
$$f(x)=e^{-x} \quad\text{and}\quad F(x)=1-e^{-x}, \qquad x>0.$$
Hence
$$\int_{0}^{\infty}\bar F(u)F(u)E[X-u\mid X\ge u]\,dF(u)=\int_{0}^{\infty}\int_{u}^{\infty}(x-u)(1-e^{-x})\,e^{-(x+u)}\,dx\,du=\frac{5}{12},$$
and
$$\int_{0}^{\infty}\bar F(u)F(u)E[u-X\mid X\le u]\,dF(u)=\int_{0}^{\infty}\int_{0}^{u}(u-x)\,e^{-(2x+u)}\,dx\,du=\frac{1}{3}.$$
We note that the two integrals in (12) are identical for the uniform and logistic distributions. However, for the exponential distribution they are not equal, reflecting the inherent asymmetry of the distribution.

3. Test for symmetry

In this section, we propose a nonparametric test to assess whether the underlying distribution of $X$ is symmetric. Let $\mathcal{F}_S=\{F: F \text{ is a symmetric distribution}\}$. Based on a random sample $X_1, X_2, \ldots, X_n$ from an unknown distribution function $F$, we are interested in testing the null hypothesis $H_0: F\in\mathcal{F}_S$ against the alternative $H_1: F\notin\mathcal{F}_S$. Using (12), we now introduce the departure measure
$$\Delta=\int_{-\infty}^{\infty}\bar F(u)F(u)E[X-u\mid X\ge u]\,dF(u)-\int_{-\infty}^{\infty}\bar F(u)F(u)E[u-X\mid X\le u]\,dF(u)=\Delta_1-\Delta_2 \ \text{(say)}. \qquad (13)$$
In view of Theorem 1, the departure measure $\Delta$ is zero under the null hypothesis and non-zero under the alternative hypothesis.
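The uniform computation above can be double-checked numerically. The sketch below (ours, not from the paper) approximates both sides of (12) for the Uniform$[-1,1]$ law by a midpoint Riemann sum; both sides should be close to the exact value $1/12$.

```python
# Midpoint-rule approximation of the two sides of (12) for Uniform[-1, 1],
# where f(x) = 1/2 and F(u) = (u + 1)/2.  Illustrative sketch only.

def uniform_sides(N=800):
    h = 2.0 / N
    us = [-1.0 + (i + 0.5) * h for i in range(N)]
    lhs = rhs = 0.0
    for i, u in enumerate(us):
        F = (u + 1.0) / 2.0
        Fbar = (1.0 - u) / 2.0
        up = sum((x - u) * 0.5 * h for x in us[i:])       # inner integral over x >= u
        lo = sum((u - x) * 0.5 * h for x in us[: i + 1])  # inner integral over x <= u
        lhs += F * up * 0.5 * h      # F(u) * (inner integral) * dF(u)
        rhs += Fbar * lo * 0.5 * h   # (1 - F(u)) * (inner integral) * dF(u)
    return lhs, rhs
```

By the mirror symmetry of the grid, the two sums agree to rounding error, matching the equality asserted by Theorem 1 for a symmetric law.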
For finding a test statistic, we simplify the departure measure $\Delta$. First,
$$\Delta_1=\int_{-\infty}^{\infty}F(u)\bar F(u)E[X-u\mid X\ge u]\,dF(u)=\int_{-\infty}^{\infty}\int_{u}^{\infty}(x-u)F(u)\,dF(x)\,dF(u)$$
$$=\int_{-\infty}^{\infty}\int_{u}^{\infty}xF(u)\,dF(x)\,dF(u)-\int_{-\infty}^{\infty}uF(u)\bar F(u)\,dF(u)$$
$$=\int_{-\infty}^{\infty}x\,\frac{F^{2}(x)}{2}\,dF(x)-\int_{-\infty}^{\infty}uF(u)\big(1-F(u)\big)\,dF(u)$$
$$=\frac{3}{2}\int_{-\infty}^{\infty}xF^{2}(x)\,dF(x)-\int_{-\infty}^{\infty}uF(u)\,dF(u)$$
$$=\frac{1}{2}E\big[\max(X_1,X_2,X_3)-\max(X_1,X_2)\big], \qquad (14)$$
where the third equality follows by interchanging the order of integration. Similarly,
$$\Delta_2=\int_{-\infty}^{\infty}F(u)\bar F(u)E[u-X\mid X\le u]\,dF(u)=\int_{-\infty}^{\infty}\int_{-\infty}^{u}(u-x)\bar F(u)\,dF(x)\,dF(u)$$
$$=\int_{-\infty}^{\infty}u\bar F(u)\big(1-\bar F(u)\big)\,dF(u)-\int_{-\infty}^{\infty}x\,\frac{\bar F^{2}(x)}{2}\,dF(x)$$
$$=\int_{-\infty}^{\infty}u\bar F(u)\,dF(u)-\frac{3}{2}\int_{-\infty}^{\infty}x\bar F^{2}(x)\,dF(x)$$
$$=\frac{1}{2}E\big[\min(X_1,X_2)-\min(X_1,X_2,X_3)\big]. \qquad (15)$$
Substituting (14) and (15) into (13), we obtain
$$\Delta=\frac{1}{2}E\big[\max(X_1,X_2,X_3)+\min(X_1,X_2,X_3)-\max(X_1,X_2)-\min(X_1,X_2)\big]. \qquad (16)$$
Since
$$E\big[\max(X_1,X_2)+\min(X_1,X_2)\big]=\int_{-\infty}^{\infty}2xF(x)\,dF(x)+\int_{-\infty}^{\infty}2x\bar F(x)\,dF(x)=\int_{-\infty}^{\infty}2x\,dF(x)=2E(X_1), \qquad (17)$$
(16) becomes
$$\Delta=\frac{1}{2}E\big[\max(X_1,X_2,X_3)+\min(X_1,X_2,X_3)-2X_1\big].$$
Consider the kernel
$$h^{*}(X_1,X_2,X_3)=\frac{1}{2}\big[\max(X_1,X_2,X_3)+\min(X_1,X_2,X_3)\big]-X_1,$$
so that $E(h^{*}(X_1,X_2,X_3))=\Delta$. Hence, the U-statistics based test for symmetry is given by
$$\widehat\Delta_n=\binom{n}{3}^{-1}\sum_{i=1}^{n}\sum_{j<i}\sum_{k<j}h(X_i,X_j,X_k), \qquad (18)$$
where
$$h(X_1,X_2,X_3)=\frac{1}{3}\bigg[\frac{3\big(\max(X_1,X_2,X_3)+\min(X_1,X_2,X_3)\big)}{2}-(X_1+X_2+X_3)\bigg] \qquad (19)$$
is the associated symmetric kernel of $h^{*}(X_1,X_2,X_3)$.

Next, we express the proposed test statistic $\widehat\Delta_n$ in a simple form. Let $X_{(i)}$, $i=1,2,\ldots,n$, be the $i$-th order statistic of the sample of size $n$ from $F$. Using $X_{(i)}$, we have the expressions
$$\sum_{i=1}^{n}\sum_{j<i}\sum_{k<j}\max(X_i,X_j,X_k)=\frac{1}{2}\sum_{i=1}^{n}(i-1)(i-2)X_{(i)}, \qquad \sum_{i=1}^{n}\sum_{j<i}\sum_{k<j}\min(X_i,X_j,X_k)=\frac{1}{2}\sum_{i=1}^{n}(n-i-1)(n-i)X_{(i)}.$$
Hence, in terms of order statistics, the test statistic can be written as
$$\widehat\Delta_n=\frac{3}{2n(n-1)(n-2)}\sum_{i=1}^{n}\big[(i-1)(i-2)+(n-i)(n-i-1)\big]X_{(i)}-\frac{1}{n}\sum_{i=1}^{n}X_{(i)}$$
$$=\frac{3}{2n(n-1)(n-2)}\sum_{i=1}^{n}\big[n(n-1)-2\big(i(n+1-i)-1\big)\big]X_{(i)}-\frac{1}{n}\sum_{i=1}^{n}X_{(i)},$$
which simplifies to
$$\widehat\Delta_n=\frac{n+4}{2n(n-2)}\sum_{i=1}^{n}X_{(i)}-\frac{3}{n(n-1)(n-2)}\sum_{i=1}^{n}\big(i(n+1-i)-1\big)X_{(i)}.$$
Next, by applying the asymptotic theory of U-statistics (refer to Lee (2019), Theorem 1, page 76), we derive the asymptotic distribution of $\widehat\Delta_n$.

Theorem 2. As $n\to\infty$, $\sqrt{n}(\widehat\Delta_n-\Delta)$ converges in distribution to a normal random variable with mean zero and variance $\sigma^{2}=\mathrm{Var}(K(X))$, where
$$K(x)=xF(x)\big(F(x)-1\big)-\frac{x}{2}-\int_{-\infty}^{x}y\big(1-2F(y)\big)\,dF(y).$$

Proof. Since $\widehat\Delta_n$ is a U-statistic, the central limit theorem for U-statistics gives the asymptotic normality of $\sqrt{n}(\widehat\Delta_n-\Delta)$, with asymptotic variance $9\sigma_1^{2}$, where
$$\sigma_1^{2}=\mathrm{Var}\big(E(h(X_1,X_2,X_3)\mid X_1)\big). \qquad (20)$$
Now consider
$$E\big(h(X_1,X_2,X_3)\mid X_1=x\big)=\frac{1}{2}E\big[\max(x,X_2,X_3)+\min(x,X_2,X_3)\big]-\frac{1}{3}E(x+X_2+X_3).$$
Let $Z=\max(X_2,X_3)$ and $W=\min(X_2,X_3)$. Then
$$E[\max(x,X_2,X_3)]=E\big[xI(Z<x)+ZI(Z>x)\big]=xF^{2}(x)+2\int_{x}^{\infty}yF(y)\,dF(y),$$
$$E[\min(x,X_2,X_3)]=E\big[xI(W>x)+WI(W<x)\big]=x\big(1-F(x)\big)^{2}+2\int_{-\infty}^{x}y\bar F(y)\,dF(y).$$
Hence
$$E\big[\max(x,X_2,X_3)+\min(x,X_2,X_3)\big]=2xF^{2}(x)+x-2xF(x)+2\int_{-\infty}^{\infty}yF(y)\,dF(y)-4\int_{-\infty}^{x}yF(y)\,dF(y)+2\int_{-\infty}^{x}y\,dF(y),$$
and
$$E(x+X_2+X_3)=x+2\int_{-\infty}^{\infty}y\,dF(y).$$
Substituting these expressions into (20), we obtain the variance expression given in the theorem.

Note that $\Delta=0$ under the null hypothesis $H_0$; hence, we obtain the asymptotic null distribution of the test statistic.
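The closed-form expression can be checked against the defining U-statistic (18) directly. The Python sketch below is our own illustration: it implements both forms and also shows the behaviour of $\widehat\Delta_n$ on asymmetric data, since for the Exp(1) law the population value is $\Delta=5/12-1/3=1/12$, consistent with the exponential example of Section 2.1.

```python
import math
import random
from itertools import combinations

def h_kernel(x1, x2, x3):
    """Symmetric kernel (19): (max + min)/2 minus the mean of the triple."""
    return 0.5 * (max(x1, x2, x3) + min(x1, x2, x3)) - (x1 + x2 + x3) / 3.0

def delta_ustat(xs):
    """Test statistic as the U-statistic (18): average of h over all triples."""
    return sum(h_kernel(*t) for t in combinations(xs, 3)) / math.comb(len(xs), 3)

def delta_closed(xs):
    """Closed form of the statistic in the order statistics X_(1) <= ... <= X_(n)."""
    x = sorted(xs)
    n = len(x)
    t = sum((i * (n + 1 - i) - 1) * xi for i, xi in enumerate(x, start=1))
    return (n + 4) * sum(x) / (2 * n * (n - 2)) - 3 * t / (n * (n - 1) * (n - 2))
```

The two functions agree to rounding error on any sample with $n\ge 3$; the closed form costs $O(n\log n)$ versus $O(n^{3})$ for the triple sum, which is what makes the order-statistics representation useful in practice.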
Corollary 1. Under $H_0$, as $n\to\infty$, $\sqrt{n}\,\widehat\Delta_n$ converges in distribution to a normal random variable with mean zero and variance $\sigma_0^{2}=\mathrm{Var}(K_0(X))$, where $\sigma_0^{2}$ is the value of $\sigma^{2}$ evaluated under the null hypothesis.

Let $\widehat\sigma_0^{2}$ be a consistent estimator of the null variance $\sigma_0^{2}$. Using the asymptotic distribution, we can establish a testing criterion based on the normal distribution: the null hypothesis $H_0$ is rejected in favour of the alternative hypothesis $H_1$ at significance level $\alpha$ if
$$\frac{\sqrt{n}\,|\widehat\Delta_n|}{\widehat\sigma_0}>Z_{\alpha/2},$$
where $Z_{\alpha}$ represents the upper $\alpha$-percentile point of the standard normal distribution. Implementing the normal-based test is challenging due to the difficulty of finding $\widehat\sigma_0$. Hence, we find the critical region of the proposed test using a Monte Carlo simulation. We use the simulated critical region (SCR) technique to obtain critical points and perform the test, avoiding reliance on asymptotic critical values. The lower quantile $C_1$ and the upper quantile $C_2$ are determined using the exact distribution in such a way that
$$P(\widehat\Delta_n<C_1)=P(\widehat\Delta_n>C_2)=\frac{\alpha}{2}.$$
The procedure used to find the critical points in this study is outlined in Algorithm 1.

Algorithm 1: Nonparametric bootstrap algorithm to find C1 and C2

x <- sample generated from the null distribution
n <- length(x)
delta_obs <- statistic(x)                 # observed value of the test statistic
B <- 1000                                 # number of bootstrap replicates
deltas <- numeric(B)
for (b in 1:B) {
  idx <- sample(1:n, size = n, replace = TRUE)
  deltas[b] <- statistic(x[idx])          # statistic recomputed on the resample
}
deltas <- sort(deltas)
C1 <- quantile(deltas, alpha / 2)         # lower critical point
C2 <- quantile(deltas, 1 - alpha / 2)     # upper critical point
if (delta_obs < C1 || delta_obs > C2) print("Reject H0") else print("Accept H0")

We also develop a distribution-free test, the JEL ratio test, to assess distributional symmetry. The JEL transforms the test statistic of interest into a sample mean using jackknife pseudo-values, and it is particularly effective for tests based on U-statistics (Peng and Tan (2018) and Garg et al. (2024)). This approach is most suitable for testing problems associated with a class of distributions in which the null hypothesis is not completely specified. For recent developments in this area, one can see Anjana and Kattumannil (2024) and Avhad and Lahiri (2025). Motivated by this, we formulate the JEL and AJEL ratio tests in the following subsections.

3.1. Jackknife empirical likelihood ratio test

To construct the JEL ratio test, we first define the jackknife pseudo-values used in the empirical likelihood. The jackknife pseudo-values, denoted by $\widehat V_i$, $i=1,\ldots,n$, are defined as
$$\widehat V_i=n\widehat\Delta_n-(n-1)\widehat\Delta_{n-1}^{(-i)}, \qquad i=1,\ldots,n, \qquad (21)$$
where $\widehat\Delta_{n-1}^{(-i)}$ is obtained from (18) by excluding the $i$-th observation from the sample $X_1,\ldots,X_n$, and the jackknife estimator is $\widehat\Delta_{jack}=\frac{1}{n}\sum_{i=1}^{n}\widehat V_i$. Let $p_i$ be the probability associated with $\widehat V_i$, such that $\sum_{i=1}^{n}p_i=1$ and $p_i\ge 0$ for $1\le i\le n$. Then the JEL based on $\Delta$ is defined as
$$L=\max\Big\{\prod_{i=1}^{n}p_i:\ \sum_{i=1}^{n}p_i=1,\ p_i\ge 0,\ \sum_{i=1}^{n}p_i\widehat V_i=0\Big\}. \qquad (22)$$
We know that $\prod_{i=1}^{n}p_i$, constrained by $\sum_{i=1}^{n}p_i=1$, reaches its maximum value of $1/n^{n}$ when each $p_i=1/n$.
Thus, the JEL ratio for $\Delta$ is given by
$$R=\frac{L}{n^{-n}}=\max\Big\{\prod_{i=1}^{n}np_i:\ \sum_{i=1}^{n}p_i=1,\ p_i\ge 0,\ \sum_{i=1}^{n}p_i\widehat V_i=0\Big\}. \qquad (23)$$
Using Lagrange multipliers, whenever
$$\min_{1\le k\le n}\widehat V_k<0<\max_{1\le k\le n}\widehat V_k, \qquad (24)$$
we have
$$p_i=\frac{1}{n}\cdot\frac{1}{1+\lambda_1\widehat V_i},$$
where $\lambda_1$ satisfies
$$\frac{1}{n}\sum_{i=1}^{n}\frac{\widehat V_i}{1+\lambda_1\widehat V_i}=0.$$
Substituting the value of $p_i$ into (23) and taking the logarithm of $R$, we obtain the nonparametric jackknife empirical log-likelihood ratio
$$\log R=-\sum_{i=1}^{n}\log\big\{1+\lambda_1\widehat V_i\big\}.$$
We reject the null hypothesis $H_0$ in favour of the alternative hypothesis $H_1$ for large values of $-2\log R$. To determine the critical region of the JEL-based test, we ascertain the limiting distribution of the jackknife empirical log-likelihood ratio. The following regularity conditions, as outlined by Gastwirth (1974), are required for Wilks' theorem:

(A1) the random variable $X$ possesses a finite mean $\mu$ and variance $\sigma^{2}$; and
(A2) the probability density function of $X$ is continuous in a neighbourhood of $\mu$.

Using Theorem 1 of Jing et al. (2009), we establish the asymptotic null distribution of the jackknife empirical log-likelihood ratio as an analogue of Wilks' theorem.

Theorem 3. Assume that (A1) and (A2) hold. Then, as $n\to\infty$, $-2\log R$ converges to $\chi^{2}_{1}$ in distribution.

Proof. The proof can be established by following the argument of Zhao et al. (2015). First, we have $\frac{1}{n}\sum_{i=1}^{n}\widehat V_i^{2}=\widehat\sigma^{2}+o_p(n^{-1/2})$ almost surely. Thus
$$\lambda_1=\Big(\frac{1}{n}\sum_{i=1}^{n}\widehat V_i\Big)\Big/\Big(\frac{1}{n}\sum_{i=1}^{n}\widehat V_i^{2}\Big)+o_p(n^{-1/2})=\frac{\widehat\Delta_{jack}}{\widehat\sigma^{2}}+o_p(n^{-1/2}).$$
By Lemma A1 of Jing et al. (2009), condition (24) holds, and $E(\widehat V_1^{2})<\infty$ and $\sigma^{2}>0$ are also satisfied. Wilks' theorem for the JEL can then be established as
$$-2\log R=2\sum_{i=1}^{n}\Big(\lambda_1\widehat V_i-\frac{1}{2}\lambda_1^{2}\widehat V_i^{2}\Big)+o_p(1)=2n\lambda_1\widehat\Delta_{jack}-n\lambda_1^{2}\widehat\sigma^{2}+o_p(1)=\frac{n\,\widehat\Delta_{jack}^{2}}{\widehat\sigma^{2}}+o_p(1)\ \xrightarrow{d}\ \chi^{2}_{1}.$$
Hence the theorem.

We therefore reject the null hypothesis $H_0$ in favour of the alternative hypothesis $H_1$ at significance level $\alpha$ if $-2\log R>\chi^{2}_{1,\alpha}$, where $\chi^{2}_{1,\alpha}$ represents the upper $\alpha$-percentile point of the $\chi^{2}$ distribution with one degree of freedom.

3.2. Adjusted jackknife empirical likelihood ratio test

To overcome the convex hull restriction inherent in the traditional JEL approach, Chen et al. (2008) introduced the AJEL ratio test. Generally, the AJEL method improves upon the original JEL in terms of empirical power and coverage probability. From the $\widehat V_i$'s generated in (21), an extra data point is obtained using the convention $\widehat W_{n+1}=-k_n\widehat\Delta_{jack}$, where $k_n$ is a positive constant. The AJEL is then defined as
$$L_{ad}=\max\Big\{\prod_{i=1}^{n+1}p_i:\ \sum_{i=1}^{n+1}p_i=1,\ p_i\ge 0,\ \sum_{i=1}^{n+1}p_i\widehat W_i=0\Big\},$$
where
$$\widehat W_i=\begin{cases}\widehat V_i, & i=1,2,\ldots,n,\\[4pt] -\max\Big(1,\dfrac{\log n}{2}\Big)\,\dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\widehat V_i, & i=n+1.\end{cases} \qquad (25)$$
Therefore, we define the AJEL ratio as
$$R_{ad}=\frac{L_{ad}}{(n+1)^{-(n+1)}}=\max\Big\{\prod_{i=1}^{n+1}(n+1)p_i:\ \sum_{i=1}^{n+1}p_i=1,\ p_i\ge 0,\ \sum_{i=1}^{n+1}p_i\widehat W_i=0\Big\}.$$
Then, we obtain the AJEL log ratio
$$-2\log R_{ad}=2\sum_{i=1}^{n+1}\log\big\{1+\lambda_2\widehat W_i\big\}, \qquad (26)$$
where $\lambda_2$ satisfies the equation
$$f(\lambda_2)=\frac{1}{n+1}\sum_{i=1}^{n+1}\frac{\widehat W_i}{1+\lambda_2\widehat W_i}=0.$$
We can derive the following Wilks' theorem for the AJEL, which establishes the asymptotic behaviour of the test statistic.

Theorem 4. Under the regularity conditions of Theorem 3 and under $H_0$, $-2\log R_{ad}$ converges in distribution to $\chi^{2}_{1}$ as $n\to\infty$.

Proof. By following the arguments of Chen et al. (2008) and Zhao et al. (2015), with the adjusted jackknife pseudo-values defined in (25), we have $|\lambda_2|=O_p(1/\sqrt{n})$ when $k_n=o_p(n)$.
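Numerically, the Lagrange multiplier $\lambda_1$ can be found by bisection, since $f(\lambda)=\sum_i \widehat V_i/(1+\lambda\widehat V_i)$ is strictly decreasing on the interval where every $1+\lambda\widehat V_i>0$. The sketch below is our own illustration, and the input pseudo-values in the usage are hypothetical; at the solution, the implied weights $p_i$ satisfy both constraints in (22) and $-2\log R$ is nonnegative.

```python
import math

def jel_log_ratio(vs, iters=200):
    """Return (-2 log R, weights p_i) for pseudo-values vs; needs condition (24),
    i.e. min(vs) < 0 < max(vs), so that a feasible lambda exists."""
    vmin, vmax = min(vs), max(vs)
    assert vmin < 0.0 < vmax, "zero must lie inside the range of the pseudo-values"
    eps = 1e-12
    lo = -1.0 / vmax + eps            # left end: keeps every 1 + lam * v_i > 0
    hi = -1.0 / vmin - eps            # right end
    f = lambda lam: sum(v / (1.0 + lam * v) for v in vs)
    for _ in range(iters):            # f is strictly decreasing on (lo, hi)
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    n = len(vs)
    p = [1.0 / (n * (1.0 + lam * v)) for v in vs]
    neg2logR = 2.0 * sum(math.log(1.0 + lam * v) for v in vs)
    return neg2logR, p
```

The identity $\sum_i 1/(1+\lambda\widehat V_i)=n$ at the solution guarantees that the weights sum to one, so both empirical likelihood constraints hold automatically once $\lambda$ is solved for.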
19 The Wilks’ theorem of AJEL can be derived as −2 logRad=n+1X i=12 log(1 + λ2cWi) = 2n+1X i=1 λ2cWi−(λ2cW2 i 2 +op(1) = 2nλ2b∆jack−nλ2 2bσ2+op(1) =nb∆jack 2 bσ2+op(1). Hence by Slutsky’s theorem, as n→ ∞ , −2 logRad=nb∆jack 2 bσ2d− →χ2 1. Hence, the theorem. From Theorem 4, we can then reject the null hypothesis H0at aαlevel of significance if −2 logRad> χ2 1,α. 4. Simulation study In this section, we conducted an extensive Monte Carlo simulation study to examine the finite sample performance of the newly proposed tests against fixed alternatives; the computations were exclusively conducted utilizing the Rprogramming language. In our simulation study, we employed the Rpackage symmetry (a version of which is accessible on CRAN, see Ivanovi´ c et al. (2020b)). The main objectives were to assess the empirical type-I error and empirical power of the proposed tests and compare their performance with existing tests for symmetry. Sample sizes of 25 ,50,100 and 200 were considered to evaluate how the sample size affects the performance of the tests. The simulation is repeated 10000 times. To calculate the empirical type-I error of
https://arxiv.org/abs/2505.08565v1
the tests, we considered various symmetric distributions: the standard normal (N(0,1)), the standard Laplace (Lap(0,1)), the standard logistic (Log(0,1)), and a normal mixture (MxN(0.5)). The critical values of the SCR approach are calculated using the samples generated from the considered symmetric distributions. The empirical type-I errors calculated for the SCR, JEL and AJEL ratio tests are reported in Table 1. From Table 1, we observe that as the sample size increases, the empirical type-I error converges to the given significance level.

Table 1: Empirical type-I rate at the 0.05 level of significance
n    SCR   JEL   AJEL  MW    MS    BHI   CM    MGG   MOI   NAI   B1    SGN
N(0,1)
25   0.047 0.065 0.067 0.021 0.040 0.057 0.042 0.045 0.041 0.053 0.031 0.043
50   0.048 0.062 0.051 0.038 0.038 0.050 0.041 0.043 0.049 0.051 0.040 0.048
100  0.050 0.053 0.054 0.044 0.053 0.051 0.045 0.047 0.051 0.049 0.055 0.052
200  0.052 0.051 0.049 0.049 0.051 0.050 0.047 0.050 0.049 0.051 0.054 0.053
Lap(0,1)
25   0.041 0.056 0.054 0.006 0.051 0.057 0.032 0.053 0.039 0.054 0.107 0.043
50   0.046 0.061 0.058 0.038 0.058 0.053 0.038 0.056 0.047 0.053 0.087 0.058
100  0.050 0.054 0.052 0.060 0.067 0.052 0.035 0.058 0.045 0.050 0.079 0.057
200  0.051 0.052 0.049 0.048 0.053 0.050 0.041 0.051 0.051 0.049 0.063 0.052
Log(0,1)
25   0.037 0.063 0.062 0.004 0.029 0.057 0.036 0.046 0.031 0.038 0.053 0.050
50   0.042 0.066 0.052 0.020 0.025 0.054 0.038 0.044 0.049 0.049 0.047 0.052
100  0.051 0.054 0.048 0.038 0.048 0.055 0.040 0.048 0.048 0.051 0.079 0.049
200  0.052 0.050 0.051 0.048 0.050 0.050 0.041 0.049 0.049 0.050 0.063 0.050
MxN(0.5)
25   0.065 0.062 0.060 0.004 0.035 0.060 0.042 0.048 0.042 0.055 0.027 0.047
50   0.049 0.056 0.053 0.010 0.039 0.055 0.041 0.044 0.049 0.056 0.043 0.060
100  0.050 0.054 0.050 0.025 0.044 0.048 0.043 0.042 0.050 0.054 0.049 0.055
200  0.052 0.051 0.049 0.036 0.049 0.050 0.043 0.050 0.049 0.052 0.046 0.053

Next, to calculate the empirical power of the proposed test, we used the different forms of
skewed distributions with θ = 0.5 and 1, as follows:
•Fernández–Steel type distributions (see Fernández and Steel (1998)) with density function
$$g(x;\theta)=f\Big(\frac{x}{1+\theta}\Big)I(x<0)+f\big((1+\theta)x\big)I(x\ge 0);$$
•Azzalini type distributions (see Azzalini (2013)) with density function
$$g(x;\theta)=2f(x)F(\theta x);$$
•Contamination (with shift) alternatives with distribution function
$$G(x;\theta,\beta)=(1-\theta)F(x)+\theta F(x-\beta),\qquad \beta>1,\ \theta\in[0,1];$$
where f and g are density functions, and F and G are the corresponding distribution functions. For the empirical power calculation, the critical values for the SCR test are calculated using the standard normal distribution. As these values are derived under the null hypothesis, the particular choice of symmetric distribution does not significantly affect the outcomes. Several tests are available in the literature for testing the symmetry of random variables based on various characterizations, and well-known classical tests exist. The following tests are considered for the comparative study:
•MW: modified signed rank Wilcoxon test (Vexler et al. (2023)),
•MS: modified sign test (Vexler et al. (2023)),
•BHI: statistic of the Litvinova test (Litvinova (2001)),
•CM: Cabilio–Masaro test statistic (Cabilio and Masaro (1996)),
•MGG: Miao, Gel and Gastwirth test statistic (Miao et al. (2006)),
•MOI: Milošević and Obradović test statistic (Milošević and Obradović (2016)),
•NAI: statistic of the Nikitin and Ahsanullah test (Nikitin and Ahsanullah (2015)),
•B1: statistic of the test $\sqrt{b_1}$ (Milošević and Obradović (2019)),
•SGN: sign test statistic (Milošević and Obradović (2019)).

Table 2: Empirical powers of the tests under the Fernández–Steel type distribution
n    SCR   JEL   AJEL  MW    MS    BHI   CM    MGG   MOI   NAI   B1    SGN
N(0.5)
25   0.581 0.488 0.436 0.035 0.257 0.322 0.338 0.390 0.181 0.313 0.298 0.431
50   0.842 0.797 0.779 0.251 0.463 0.056 0.681 0.702 0.313 0.557 0.689 0.644
100  0.998 0.984 0.979 0.768 0.790 0.851 0.923 0.927 0.554 0.859 0.968 0.780
200  1.000 1.000 0.999 0.992 0.997 0.990 0.995 0.994 0.841 0.990 0.999 0.868
Lap(0.5)
25   0.632 0.590 0.423 0.059 0.303 0.217 0.270 0.422 0.131 0.218 0.377 0.436
50   0.798 0.785 0.656 0.331 0.626 0.393 0.605 0.724 0.222 0.390 0.668 0.612
100  0.990 0.985 0.897 0.798 0.924 0.667 0.931 0.962 0.416 0.677 0.843 0.841
200  1.000 1.000 0.993 0.992 0.998 0.927 0.999 0.999 0.694 0.926 0.958 0.899
Log(0.5)
25   0.587 0.464 0.431 0.043 0.239 0.280 0.307 0.388 0.159 0.282 0.351 0.428
50   0.782 0.720 0.731 0.269 0.492 0.506 0.660 0.703 0.290 0.506 0.650 0.614
100  0.980 0.945 0.946 0.779 0.823 0.803 0.930 0.938 0.514 0.808 0.902 0.792
200  1.000 1.000 0.999 0.992 0.986 0.978 0.996 0.998 0.809 0.979 0.994 0.849
N(1)
25   0.738 0.820 0.781 0.047 0.409 0.598 0.689 0.731 0.369 0.598 0.620 0.406
50   0.887 0.985 0.983 0.270 0.699 0.889 0.927 0.926 0.615 0.882 0.946 0.662
100  0.994 1.000 0.999 0.721 0.955 0.994 0.994 0.993 0.882 0.993 1.000 0.781
200  1.000 1.000 1.000 0.982 0.999 1.000 1.000 1.000 0.993 1.000 1.000 0.868
Lap(1)
25   0.788 0.844 0.779 0.071 0.524 0.467 0.639 0.780 0.280 0.466 0.701 0.462
50   0.846 0.975 0.969 0.369 0.895 0.762 0.956 0.974 0.496 0.766 0.955 0.688
100  0.990 1.000 0.999 0.872 0.997 0.969 0.999 0.999 0.784 0.968 0.993 0.710
200  1.000 1.000 1.000 0.999
1.000 0.999 1.000 1.000 0.971 0.999 1.000 0.738
Log(1)
25   0.749 0.821 0.778 0.051 0.408 0.556 0.669 0.739 0.337 0.558 0.665 0.544
50   0.898 0.982 0.980 0.295 0.741 0.853 0.936 0.939 0.590 0.856 0.874 0.569
100  1.000 1.000 0.999 0.787 0.974 0.989 0.995 0.995 0.858 0.990 0.991 0.654
200  1.000 1.000 1.000 0.994 1.000 1.000 1.000 1.000 0.991 0.999 1.000 0.757

Table 3: Empirical powers of the tests under the Azzalini type distribution
n    SCR   JEL   AJEL  MW    MS    BHI   CM    MGG   MOI   NAI   B1    SGN
N(0.5)
25   0.784 0.545 0.483 0.014 0.296 0.381 0.452 0.354 0.440 0.381 0.311 0.541
50   0.898 0.819 0.802 0.089 0.441 0.577 0.682 0.391 0.565 0.592 0.531 0.621
100  1.000 0.979 0.980 0.350 0.681 0.843 0.895 0.451 0.932 0.784 0.803 0.874
200  1.000 1.000 0.999 0.654 0.897 1.000 1.000 0.637 0.990 0.942 0.984 1.000
Lap(0.5)
25   0.571 0.418 0.379 0.012 0.038 0.510 0.147 0.528 0.328 0.285 0.298 0.175
50   0.668 0.603 0.574 0.078 0.191 0.632 0.151 0.687 0.490 0.538 0.548 0.226
100  0.891 0.979 0.881 0.156 0.276
0.871 0.169 0.873 0.622 0.628 0.827 0.362
200  1.000 1.000 0.999 0.206 0.359 1.000 0.192 1.000 0.879 0.820 0.985 0.490
Log(0.5)
25   0.482 0.406 0.373 0.240 0.324 0.305 0.385 0.373 0.421 0.398 0.272 0.221
50   0.692 0.713 0.654 0.416 0.492 0.638 0.585 0.551 0.783 0.734 0.545 0.401
100  0.801 0.910 0.917 0.610 0.660 0.873 0.814 0.783 0.869 0.894 0.867 0.570
200  1.000 1.000 0.998 0.791 0.897 0.990 0.965 0.956 0.991 0.988 0.994 0.682
N(1)
25   0.428 0.356 0.334 0.004 0.418 0.481 0.308 0.373 0.798 0.742 0.505 0.304
50   0.783 0.569 0.559 0.022 0.497 0.610 0.414 0.554 0.849 0.877 0.774 0.549
100  0.909 0.791 0.847 0.062 0.612 0.804 0.594 0.788 0.914 0.942 0.971 0.671
200  1.000 1.000 0.968 0.141 0.773 1.000 0.800 1.000 1.000 1.000 0.990 0.715
Lap(1)
25   0.792 0.633 0.594 0.271 0.470 0.512 0.205 0.373 0.781 0.801 0.495 0.441
50   0.847 0.854 0.780 0.369 0.681 0.708 0.203 0.464 0.846 0.891 0.771 0.613
100  0.978 0.990 0.891 0.572 0.798 0.871 0.221 0.636 0.961 0.948 0.971 0.749
200  1.000 1.000 0.989 0.714 1.000 0.968 0.236 0.832 1.000 1.000 1.000 0.929
Log(1)
25   0.656 0.766 0.671 0.291 0.384 0.309 0.278 0.253 0.761 0.812 0.124 0.412
50   0.835 0.852 0.716 0.390 0.469 0.422 0.359 0.316 0.810 0.890 0.196 0.481
100  1.000 0.917 0.809 0.455 0.562 0.510 0.497 0.446 0.931 0.948 0.397 0.531
200  1.000 1.000 0.971 0.608 0.639 0.794 0.681 0.630 1.000 1.000 0.634 0.611

Table 4: Empirical powers of the tests under the contamination alternative type distribution
n    SCR   JEL   AJEL  MW    MS    BHI   CM    MGG   MOI   NAI   B1    SGN
N(0.5)
25   0.586 0.411 0.346 0.335 0.496 0.567 0.481 0.412 0.392 0.375 0.531 0.481
50   0.865 0.621 0.592 0.523 0.581 0.642 0.597 0.598 0.493 0.578 0.701 0.646
100  1.000 0.908 0.898 0.780 0.761 0.873 0.694 0.740 0.680 0.841 0.907 0.941
200  1.000 1.000 0.996 0.991 0.959 1.000 0.898 0.908 0.867 0.923 0.998 1.000
Lap(0.5)
25   0.662 0.517 0.461 0.431 0.514 0.407 0.328 0.527 0.481 0.557 0.597 0.420
50   0.841 0.810 0.775 0.684 0.669 0.597 0.490 0.619 0.671 0.709 0.724 0.580
100  0.999 0.981 0.977 0.806 0.816 0.829
0.789 0.791 0.806 0.890 0.894 0.891
200  1.000 1.000 1.000 0.898 0.994 0.968 0.958 0.893 0.934 0.968 0.991 0.977
Log(0.5)
25   0.592 0.446 0.357 0.384 0.481 0.517 0.429 0.488 0.397 0.418 0.518 0.499
50   0.941 0.835 0.703 0.509 0.633 0.760 0.629 0.607 0.559 0.634 0.626 0.597
100  1.000 0.986 0.963 0.697 0.894 0.919 0.816 0.790 0.873 0.872 0.903 0.833
200  1.000 1.000 0.999 0.994 0.989 1.000 1.000 0.973 0.955 1.000 0.996 0.990
N(1)
25   0.610 0.919 0.896 0.519 0.594 0.684 0.604 0.580 0.448 0.525 0.696 0.570
50   0.941 0.998 0.989 0.809 0.784 0.843 0.790 0.729 0.697 0.785 0.738 0.690
100  1.000 1.000 1.000 0.991 0.968 0.992 0.938 0.988 0.893 0.832 0.956 0.968
200  1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
Lap(1)
25   0.718 0.861 0.831 0.492 0.643 0.589 0.493 0.679 0.609 0.678 0.621 0.588
50   0.891 0.981 0.946 0.746 0.797 0.894 0.609 0.891 0.867 0.894 0.840 0.798
100  0.984 1.000 1.000 0.869 0.918 0.993 0.972 0.994 0.969 0.984 0.992 0.980
200  1.000
1.000 1.000 0.948 1.000 1.000 1.000 0.987 1.000 1.000 1.000 1.000
Log(1)
25   0.748 0.861 0.833 0.571 0.509 0.664 0.571 0.623 0.527 0.671 0.631 0.609
50   1.000 0.893 0.864 0.791 0.871 0.890 0.790 0.809 0.738 0.809 0.907 0.799
100  1.000 0.999 0.978 0.949 1.000 0.992 0.937 0.964 0.966 0.985 1.000 0.938
200  1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000

The results of the empirical power comparison are given in Tables 2-4. From Tables 2-4, we observe that our entropy-based tests showed significantly higher power than the classical tests when detecting asymmetry, especially in cases of moderate to severe skewness. The empirical power of the proposed tests increased with larger sample sizes, and the proposed tests generally perform better than traditional methods across the various alternatives. For the Fernández–Steel and Azzalini type alternatives with samples of size 100, the power reached above 0.85 for most asymmetric distributions, while traditional tests, such as the SGN test, often fail to reach 0.70. Given the overall performance shown in Tables 1–4, it is evident that, in most cases, our proposed tests outperform the existing ones.

5. Data Analysis

In this section, we illustrate the proposed methodology using two real datasets.

Example 1. The dataset consists of 50 observations detailing the tensile strength (MPa) of fibre (X), as reported in Sathar and Jose (2023); this dataset follows a normal distribution with mean µ = 3076.88 and σ = 344.362. Figure 1 illustrates the fit of a normal distribution to the tensile strength data. To find the critical values of the SCR test, we generated a random sample from the standard normal distribution; the values are −0.4746 and 0.4750.

Example 2. The dataset is sourced from Qiu and Jia (2018) and is reported in Table 5, demonstrating a good fit with an asymmetric inverse Gaussian distribution.
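The size and power estimates in Tables 1–4 come from Monte Carlo loops of the kind sketched below: repeatedly generate a sample from a null or alternative distribution, apply a test, and record the rejection rate. The sketch uses the Azzalini-type alternative $2f(z)F(\theta z)$ with a simple moment-based placeholder statistic (scaled sample skewness, not one of the paper's tests); all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(12345)

def azzalini_sample(n, theta, rng):
    """Draw from the Azzalini density 2 f(z) F(theta z), with f = F' the
    standard normal, via the selection representation: draw z ~ f and w ~ f,
    keep z when w <= theta*z (acceptance probability is 1/2 by symmetry)."""
    out = np.empty(n)
    k = 0
    while k < n:
        z, w = rng.standard_normal(2)
        if w <= theta * z:
            out[k] = z
            k += 1
    return out

def skew_stat(x):
    """Placeholder statistic: sqrt(n/6) times the sample skewness, which is
    asymptotically N(0, 1) under a normal null."""
    xc = x - x.mean()
    return np.sqrt(x.size / 6.0) * np.mean(xc**3) / np.mean(xc**2) ** 1.5

def rejection_rate(n, theta, reps=1000, crit=1.96):
    """Empirical size (theta = 0) or power (theta != 0) at the 5% level."""
    hits = sum(abs(skew_stat(azzalini_sample(n, theta, rng))) > crit
               for _ in range(reps))
    return hits / reps
```

Setting theta = 0 recovers the symmetric null (empirical type-I error), while larger theta gives increasing skewness and hence increasing power, mirroring the pattern in Table 3.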
The estimated parameters of the inverse Gaussian distribution are µ̂ = 3.61 and λ̂ = 1.588. This dataset is used to demonstrate the applicability of the proposed test statistic. Figure 2 presents the corresponding boxplot. The proposed tests are then applied to evaluate whether the underlying distribution exhibits symmetry. The exact critical points of the SCR method are C1 = 6.37 and C2 = 7.99 at the 5% significance level.

Table 5: Active repair times (in hours) for an airborne communication transceiver
0.2 0.3 0.5 0.5 0.5 0.5 0.6 0.6 0.7 0.7 0.7 0.8 0.8 1.0 1.0
1.0 1.0 1.1 1.3 1.5 1.5 1.5 1.5 2.0 2.0 2.2 2.5 3.0 3.0 3.3
3.3 4.0 4.0 4.5 4.7 5.0 5.4 5.4 7.0 7.5 8.8 9.0 10.3 22.0 24.5

Figure 1: Modeling the tensile strength data using a normal distribution

Table 6: Test statistics along with p-values for the real datasets
Example 1: SCR 0.0307 (p = 1.000 > 0.05, accept H0); JEL 0.1278 (p = 0.900 > 0.05, accept H0); AJEL 0.1176 (p = 0.916 > 0.05, accept H0).
Example 2: SCR 0.4494 (p = 0.000 < 0.05, reject H0); JEL 51.7850 (p = 0.002 < 0.05, reject H0); AJEL 19.3561 (p = 0.001 < 0.05, reject H0).

The critical values
were obtained by applying the tests to the data, while bootstrap p-values were obtained by generating 10000 samples from the normal distribution with the estimated parameters. Each test was applied to the sample data, and the frequency with which the test statistic exceeded the corresponding critical value was noted; the bootstrap p-value was then calculated as the proportion of such occurrences out of the total number of samples. From Table 6, we observe that the null hypothesis of symmetry is not rejected at the 5% significance level for the tensile strength data, suggesting that a symmetric distribution can appropriately model the tensile strength (in MPa) of the fibre. In contrast, the symmetry hypothesis is rejected for the airborne communication transceiver dataset, indicating that the active repair times (in hours) follow an asymmetric distribution.

Figure 2: Boxplot

6. Conclusion

In this paper, we have developed two nonparametric tests for assessing the symmetry of a distribution. Using the definitions of generalized cumulative residual entropy and generalized cumulative entropy, we explored and illustrated a novel characterization of continuous symmetric distributions with examples. We have thoroughly examined the properties of these tests. Given that calculating the null variance of our proposed test can be challenging in practical settings, we have introduced two additional nonparametric tests based on the jackknife empirical likelihood (JEL) and adjusted jackknife empirical likelihood (AJEL) ratios. Through Monte Carlo simulations, we have demonstrated the effectiveness of the proposed tests against various alternative hypotheses; the findings suggest that all the proposed tests perform strongly across a range of alternative scenarios. Finally, we have illustrated the applicability of the proposed tests through real-world examples involving the tensile strength and airborne communication transceiver datasets.
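Returning to the calibration used for Table 6: the parametric-bootstrap p-value procedure described in Section 5 can be sketched generically as follows (function names are ours; `statistic` stands in for any of the proposed test statistics).

```python
import numpy as np

def bootstrap_p_value(x, statistic, b=10000, seed=0):
    """Parametric-bootstrap p-value: fit a normal null to the data, draw b
    samples of the same size from the fitted null, and report the proportion
    of bootstrap statistics at least as large as the observed one.
    `statistic` maps a sample to a scalar (larger = more evidence of asymmetry)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    t_obs = statistic(x)
    mu, sd = x.mean(), x.std(ddof=1)
    t_null = np.array([statistic(rng.normal(mu, sd, x.size)) for _ in range(b)])
    return float(np.mean(t_null >= t_obs))
```

For a clearly skewed sample (e.g., the repair-time data of Table 5), the observed statistic lands far in the tail of the bootstrap null distribution and the p-value is near zero.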
We conclude this paper by highlighting the potential for future research in exploring new testing problems and developing similar characterization-based tests using alternative measures, such as extropy.

References

Ahmadi, J. (2020). Characterization results for symmetric continuous distributions based on the properties of k-records and spacings. Statistics & Probability Letters, 162:108764.
Ahmadi, J. (2021). Characterization of continuous symmetric distributions using information measures of records. Statistical Papers, 62:2603–2626.
Ahmadi, J. and Fashandi, M. (2019). Characterization of symmetric distributions based on some information measures properties of order statistics. Physica A: Statistical Mechanics and its Applications, 517:141–152.
Ahmed, R. R., Vveinhardt, J., Štreimikienė, D., Ghauri, S. P., and Ashraf, M. (2018). Stock returns, volatility and mean reversion in emerging and developed financial markets. Technological and Economic Development of Economy, 24:1149–1177.
Allison, J. S. and Pretorius, C. (2017). A Monte Carlo evaluation of the performance of two new tests for symmetry. Computational Statistics, 32:1323–1338.
Anjana, S. and Kattumannil, S. (2024). Jackknife empirical likelihood ratio test for log symmetric distribution using probability weighted moments. arXiv preprint arXiv:2410.04082.
Avhad, G. V. and Lahiri, A. (2025). A jackknife empirical likelihood ratio test for log-symmetric distributions. Statistics & Probability Letters, 222:110394.
Azzalini, A. (2013). The Skew-Normal and Related Families, volume 3. Cambridge University Press.
Božin, V., Milošević, B., Nikitin, Y. Y., and Obradović, M. (2020). New characterization-based symmetry tests. Bulletin of the Malaysian Mathematical Sciences Society, 43:297–320.
Cabilio, P. and Masaro, J. (1996). A simple test of symmetry about an unknown median. The Canadian Journal of Statistics/La Revue Canadienne de Statistique, pages 349–361.
Chen, D., Dai, L., and Zhao, Y. (2024). Jackknife empirical likelihood for the correlation coefficient with additive distortion measurement errors. TEST, pages 1–31.
Chen, J., Variyath, A. M., and Abraham, B. (2008). Adjusted empirical likelihood and its properties. Journal of Computational and Graphical Statistics, 17:426–443.
Chen, S. X. and Cui, H. (2007). On the second-order properties of empirical likelihood with moment restrictions. Journal of Econometrics, 141:492–516.
Davison, A. and Mamba, S. (2017). Symmetry methods for option pricing. Communications in Nonlinear Science and Numerical Simulation, 47:421–425.
Farrell, P. J. and Rogers-Stewart, K. (2006). Comprehensive study of tests for normality and symmetry: extending the Spiegelhalter test. Journal of Statistical Computation and Simulation, 76:803–816.
Fashandi, M. and Ahmadi, J. (2012). Characterizations of symmetric distributions based on Rényi entropy. Statistics & Probability Letters, 82:798–804.
Fernández, C. and Steel, M. F. (1998). On Bayesian modeling of fat tails and skewness. Journal of the American Statistical Association, 93:359–371.
Feuerverger, A. and Mureika, R. A. (1977). The empirical characteristic function and its applications. The Annals of Statistics, pages 88–97.
Garg, N., Mathew, L., Dewan, I., and Kattumannil, S. K. (2024). Jackknife empirical likelihood method for U-statistics based on multivariate samples and its applications. arXiv preprint arXiv:2408.14038.
Gastwirth, J. L. (1974). Large sample theory of some measures of income inequality. Econometrica: Journal of the Econometric Society, pages 191–196.
Gupta, N. and Chaudhary, S. K. (2024).
Some characterizations of continuous symmetric distributions based on extropy of record values. Statistical Papers, 65:291–308.
Huang, L., Zhang, L., and Zhao, Y. (2024). Jackknife empirical likelihood for the lower-mean ratio. Journal of Nonparametric Statistics, 36:287–312.
Husseiny, I., Barakat, H., Nagy, M., and Mansi, A. (2024). Analyzing symmetric distributions by utilizing extropy measures based on order statistics. Journal of Radiation Research and Applied Sciences, 17:101100.
Ivanović, B., Milošević, B., and Obradović, M. (2020a). Comparison of symmetry tests against some skew-symmetric alternatives in iid and non-iid setting. Computational Statistics & Data Analysis, 151:106991.
Ivanović, B., Milošević, B., and Obradović, M. (2020b). Package ‘symmetry’: Testing for symmetry of data and model residuals. R package version 0.2.1.
Jing, B.-Y., Tsao, M., and Zhou, W. (2017). Transforming the empirical likelihood towards better accuracy. Canadian Journal of Statistics, 45:340–352.
Jing, B.-Y., Yuan, J., and Zhou, W. (2009). Jackknife empirical likelihood. Journal of the American Statistical Association, 104:1224–1232.
Jose, J. and Abdul Sathar, E. (2022). Symmetry being tested through simultaneous application of upper and lower k-records in extropy. Journal of Statistical Computation and Simulation, 92:830–846.
Kattumannil, S. K., Sreedevi, E., and Balakrishnan, N. (2022). A generalized measure of cumulative residual entropy. Entropy, 24:444.
Lazar, N. A. (2021). A review of empirical likelihood. Annual Review of
Statistics and its Application, 8:329–344.
Lee, A. J. (2019). U-Statistics: Theory and Practice. Routledge.
Litvinova, V. (2001). New nonparametric test for symmetry and its asymptotic efficiency. Vestnik St. Petersburg University: Mathematics.
Liu, P. and Zhao, Y. (2023). A review of recent advances in empirical likelihood. Wiley Interdisciplinary Reviews: Computational Statistics, 15:e1599.
Liu, Y., Wang, S., and Zhou, W. (2023). General jackknife empirical likelihood and its applications. Statistics and Computing, 33:74.
Maesono, Y. (1987). Competitors of the Wilcoxon signed rank test. Annals of the Institute of Statistical Mathematics, 39:363–375.
Marchetti, C. E. and Mudholkar, G. S. (2002). Characterization theorems and goodness-of-fit tests. Goodness-of-Fit Tests and Model Validity, pages 125–142.
Miao, W., Gel, Y. R., and Gastwirth, J. L. (2006). A new test of symmetry about an unknown median. In Random Walk, Sequential Analysis and Related Topics: A Festschrift in Honor of Yuan-Shih Chow, pages 199–214. World Scientific.
Milošević, B. and Obradović, M. (2016). Characterization based symmetry tests and their asymptotic efficiencies. Statistics & Probability Letters, 119:155–162.
Milošević, B. and Obradović, M. (2019). Comparison of efficiencies of some symmetry tests around an unknown centre. Statistics, 53:43–57.
Nikitin, Y. Y. (2017). Tests based on characterizations, and their efficiencies: a survey. arXiv preprint arXiv:1707.01522.
Nikitin, Y. Y. and Ahsanullah, M. (2015). New U-empirical tests of symmetry based on extremal order statistics, and their efficiencies. In Mathematical Statistics and Limit Theorems: Festschrift in Honour of Paul Deheuvels, pages 231–248. Springer.
Owen, A. B. (1988). Empirical likelihood ratio confidence intervals for a single functional. Biometrika, 75:237–249.
Partlett, C. and Patil, P. (2017).
Measuring asymmetry and testing symmetry. Annals of the Institute of Statistical Mathematics, 69:429–460.
Peng, H. and Tan, F. (2018). Jackknife empirical likelihood goodness-of-fit tests for U-statistics based general estimating equations. Bernoulli, 24:449–464.
Qiu, G. and Jia, K. (2018). Extropy estimators with applications in testing uniformity. Journal of Nonparametric Statistics, 30:182–196.
Sathar, E. I. A. and Jose, J. (2023). Extropy based on records for random variables representing residual life. Communications in Statistics - Simulation and Computation, 52:196–206.
Suresh, S. and Kattumannil, S. K. (2025). Jackknife empirical likelihood ratio test for testing the equality of semivariance. Statistical Papers, 66:1–17.
Thomas, D. R. and Grunkemeier, G. L. (1975). Confidence interval estimation of survival probabilities for censored data. Journal of the American Statistical Association, 70:865–871.
Ushakov, N. (2011). One characterization of symmetry. Statistics & Probability Letters, 81:614–617.
Vexler, A., Gao, X., and Zhou, J. (2023). How to implement signed-rank wilcox.test() type procedures when a center of symmetry is unknown. Computational Statistics & Data Analysis, 184:107746.
Wang, L., Chen, J., and Pu, X. (2015). Resampling calibrated adjusted empirical likelihood. Canadian Journal of Statistics, 43:42–59.
Zhao, Y., Meng, X., and Yang, H. (2015). Jackknife empirical likelihood inference for the mean absolute deviation. Computational Statistics & Data Analysis, 91:92–101.
arXiv:2505.08908v1 [math.ST] 13 May 2025

Statistical Decision Theory with Counterfactual Loss∗

Benedikt Koch† Kosuke Imai‡

May 22, 2025

Abstract

Classical statistical decision theory evaluates treatment choices based solely on observed outcomes. However, by ignoring counterfactual outcomes, it cannot assess the quality of decisions relative to feasible alternatives. For example, the quality of a physician’s decision may depend not only on patient survival, but also on whether a less invasive treatment could have produced a similar result. To address this limitation, we extend standard decision theory to incorporate counterfactual losses – criteria that evaluate decisions using all potential outcomes. The central challenge in this generalization is identification: because only one potential outcome is observed for each unit, the associated risk under a counterfactual loss is generally not identifiable. We show that under the assumption of strong ignorability, a counterfactual risk is identifiable if and only if the counterfactual loss function is additive in the potential outcomes. Moreover, we demonstrate that additive counterfactual losses can yield treatment recommendations that differ from those based on standard loss functions, provided that the decision problem involves more than two treatment options.

Keywords: causal inference, expected utility, identification, policy learning, potential outcomes, risk

∗We thank Iav Bojinov, D. James Greiner, Sooahn Shin, and Davide Viviano for insightful discussions.
†Ph.D. candidate, Department of Statistics, Harvard University. 1 Oxford Street, Cambridge MA 02138. Email: benedikt koch@g.harvard.edu URL: https://benediktjkoch.github.io
‡Professor, Department of Government and Department of Statistics, Harvard University. 1737 Cambridge Street, Institute for Quantitative Social Science, Cambridge MA 02138, U.S.A.
Email: imai@harvard.edu URL: https://imai.fas.harvard.edu

1 Introduction

Statistical decision theory, pioneered by Wald [1950], formulates decision-making as a game against nature. A decision-maker selects an action D = d in the face of an unknown state of nature θ, and incurs a prespecified loss, or negative utility, ℓ(d; θ), which quantifies the consequence of the action when the true state of nature is θ. Better decisions correspond to lower losses. Given covariates X, we can define a decision rule D = π(X) that maps observed data to actions. The performance of a decision rule is commonly evaluated by its risk, that is, the expected loss:
$$R(\pi;\theta,\ell)=\mathbb{E}[\ell(\pi(X);\theta)],$$
where the expectation is taken over the distribution of X. Since θ is unknown, the true risk is typically unidentifiable. This motivates the use of optimality criteria such as admissibility, minimax, minimax-regret, and Bayes risk [Wald, 1950, Savage, 1972, Berger, 1985]. Furthermore, statistical decision theory also provides a foundational framework for classical estimation theory [e.g., Lehmann and Casella, 2006] and hypothesis testing [e.g., Lehmann et al., 1986].

In a series of influential papers, Manski [2000, 2004, 2011] applies this framework to the problem of treatment choice. In its simplest form, the goal is to choose a treatment D to minimize the expected loss associated with an outcome of interest Y. Because the treatment may influence the outcome, the loss ℓ(d; Y(d)) depends on the potential outcome under the chosen treatment d, denoted by Y(d) [Neyman, 1923, Rubin, 1974], and possibly the treatment itself. As before, we can define a treatment rule D = π(X) based on observed covariates, and evaluate its performance via risk:
$$R(\pi;\ell)=\mathbb{E}[\ell(\pi(X);Y(\pi(X)))],$$
where the
|
https://arxiv.org/abs/2505.08908v1
|
expectation is taken over the joint distribution of X and Y(π(X)). Since only one potential outcome is observed per unit, the risk is generally unidentifiable unless the treatment rule coincides with the one used to generate the data. Consequently, additional assumptions — most notably, strong ignorability of treatment assignment [Rosenbaum and Rubin, 1983] — are required to identify the risk and determine optimal treatment rules [e.g., Manski, 2000, Kitagawa and Tetenov, 2018, Athey and Wager, 2021].

In this paper, we generalize the standard framework of statistical decision theory for treatment choice by introducing a broader class of loss functions. Specifically, we consider counterfactual losses, which depend on the full set of potential outcomes rather than the outcome under the selected treatment alone. That is, the loss takes the form $\ell(d;\{Y(d')\}_{d'\in\mathcal{D}})$, where $\mathcal{D}$ denotes the support of the treatment variable D. We refer to this as a counterfactual loss because it evaluates each decision in light of the alternative outcomes that could have occurred had a different treatment been chosen. Given this loss, we define the counterfactual risk for measuring the performance of a treatment rule π as:
$$R(\pi;\ell)=\mathbb{E}\big[\ell(\pi(X);\{Y(d)\}_{d\in\mathcal{D}})\big],$$
where the expectation is taken over the distribution of the covariates X and all potential outcomes $\{Y(d)\}_{d\in\mathcal{D}}$. The idea of counterfactual loss includes familiar examples such as regret [Savage, 1972] as special cases. A regret-based loss can be defined as
$$\ell(d;\{Y(d')\}_{d'\in\mathcal{D}})=\max_{d'\in\mathcal{D}}Y(d')-Y(d).$$
However, our definition of counterfactual risk differs from another notion of regret commonly used in the reinforcement learning literature, where maximization occurs outside the expectation [e.g., Lattimore and Szepesvári, 2020]. Specifically, that version of regret, given by $\max_{\pi'}\mathbb{E}[Y(\pi'(X))]-\mathbb{E}[Y(\pi(X))]$, is not equivalent to $R(\pi;\ell)=\mathbb{E}[\max_{\pi'}Y(\pi'(X))]-\mathbb{E}[Y(\pi(X))]$.
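The gap between these two regret notions is just the failure of max and expectation to commute; a toy numeric check (the joint distribution of potential outcomes is an illustrative choice of ours):

```python
import numpy as np

# Four equally likely units with binary potential outcomes (Y(0), Y(1)).
# E[max_d Y(d)] (maximization inside the expectation) dominates
# max_d E[Y(d)] (maximization outside), so the two regrets generally differ.
y0 = np.array([0, 0, 1, 1])
y1 = np.array([1, 0, 0, 1])

e_of_max = np.maximum(y0, y1).mean()   # E[max(Y(0), Y(1))] = 0.75
max_of_e = max(y0.mean(), y1.mean())   # max(E[Y(0)], E[Y(1)]) = 0.5
print(e_of_max, max_of_e)              # 0.75 0.5
```

The inequality $\mathbb{E}[\max_d Y(d)]\ge \max_d \mathbb{E}[Y(d)]$ is strict whenever no single treatment is best for every unit, which is exactly the case where the counterfactual-risk formulation carries extra information.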
A key challenge introduced by counterfactual losses is that their associated risks are generally not identifiable, even under the standard strong ignorability assumption. Although this assumption enables identification of the standard risk in treatment choice settings, it is insufficient for identifying counterfactual risk when the loss depends on unobserved potential outcomes. The main contribution of this paper is to propose a structural condition on the counterfactual loss that enables identification of counterfactual risk. Specifically, we show that under strong ignorability, the additivity of the counterfactual loss in potential outcomes is a necessary and sufficient condition for identifying the difference in counterfactual risk between any two decision rules. This result, presented in Section 3, implies that when the counterfactual loss is additive, one can not only identify the optimal treatment rule but also evaluate risk differences between competing rules. Moreover, we also show that counterfactual risks give treatment recommendations different from those based on standard risks, provided that the decision problem involves more than two treatment options.

2 Illustrative Examples

To clarify the ideas introduced above, we now present a series of illustrative examples in the context of medical decision-making.

2.1 Binary decision

Consider a stylized setting in which a physician must choose whether to provide a patient with a standard care treatment (D = 1) or do nothing (D = 0). The outcome of interest is patient survival, denoted by Y,
|
https://arxiv.org/abs/2505.08908v1
|
where Y = 1 indicates survival and Y = 0 otherwise. For notational simplicity, we assume no covariates are observed. Suppose we evaluate the physician’s decision using the standard loss ℓ(D; Y(D)), which assigns a value to each pair of decision and realized outcome (D, Y(D)). Let c1 > 0 = c0 denote the cost of providing treatment relative to doing nothing. We also use ℓy to denote the loss associated with the observed outcome Y(D) = y. For example, if the physician administers treatment and the patient does not survive, the loss is ℓ(1; 0) = ℓ0 + c1. If the physician does nothing and the patient survives, the loss is ℓ(0; 1) = ℓ1. Since patient survival is a desired outcome, we assume ℓ1 < ℓ0. This is an example of a standard loss without counterfactuals, which takes the form
$$\ell(D;Y(D))=\ell_{Y(D)}+c_D.\qquad(1)$$

Table 1: Confusion matrix for each combination of potential outcome Y(d) = y_d and decision D = d. Each cell is assigned a loss ℓ(d; y0, y1) for d, y0, y1 ∈ {0, 1}. (In the original figure, darker colors indicate realized outcomes and lighter colors indicate counterfactual outcomes.)

                         D = 0                    D = 1
Y(0) = 0 (negative)   false negative: ℓ0       true positive: ˜ℓ0 + c1
Y(0) = 1 (positive)   true negative: ℓ1        false positive: ˜ℓ1 + c1
Y(1) = 0 (negative)   true negative: ˜ℓ0       false positive: ℓ0 + c1
Y(1) = 1 (positive)   false negative: ˜ℓ1      true positive: ℓ1 + c1

Coston et al. [2020] and Ben-Michael et al. [2024a] consider a loss function that depends on the baseline potential outcome rather than the realized outcome. This is an example of the counterfactual loss ℓ(D; Y(0)). Under this formulation, the decision is incorrect (i.e., a false negative) if the physician does nothing and the patient dies, (D, Y(0)) = (0, 0). Since Y(0) is realized in this case, the loss is ℓ0, as defined earlier. Conversely, a false positive occurs when the physician provides treatment even though the patient would have survived without it, (D, Y(0)) = (1, 1).
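Loss (1) and the counterfactual classification loss that follows it are simple enough to transcribe directly; a sketch with illustrative numeric values (ℓ0 = 1, ℓ1 = 0, ˜ℓ1 = 0.5 and c1 = 0.2 are our choices, not the paper's):

```python
def standard_loss(d, y_d, l=(1.0, 0.0), c=(0.0, 0.2)):
    """Eq. (1): loss of the realized outcome plus the treatment cost.
    l = (l0, l1) with l1 < l0, and c = (c0, c1) with c1 > 0 = c0."""
    return l[y_d] + c[d]

def cf_classification_loss(d, y0, l0=1.0, lt1=0.5, c=(0.0, 0.2)):
    """Eq. (2) of Ben-Michael et al. [2024a]: penalize the false negative
    (d, Y(0)) = (0, 0) by l0 and the false positive (d, Y(0)) = (1, 1) by
    the counterfactual loss lt1, plus the treatment cost."""
    return (1 - d) * (1 - y0) * l0 + d * y0 * lt1 + c[d]
```

For instance, treating a patient who would have survived anyway incurs lt1 + c1 under (2), whereas under (1) the same (survival) outcome incurs only l1 + c1: the counterfactual loss distinguishes decisions that the standard loss cannot.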
In this case, Y(0) is counterfactual, and the loss is ˜ℓ1. Under this setup, Ben-Michael et al. [2024a] propose the following counterfactual classification loss:

ℓ(D; Y(0)) = (1 − D)(1 − Y(0)) ℓ0 + D · Y(0) ˜ℓ1 + cD.    (2)

As illustrated in Table 1, this counterfactual loss can be further generalized. First, we may also incorporate loss terms for incorrect decisions based on the potential outcome under treatment: specifically, cases where (D, Y(1)) = (0, 1) (false negative) or (D, Y(1)) = (1, 0) (false positive). Second, we can include losses associated with correct decisions: (D, Y(0)) = (1, 0) and (D, Y(1)) = (1, 1) are true positives, while (D, Y(0)) = (0, 1) and (D, Y(1)) = (0, 0) are true negatives. Letting ˜ℓ0 denote the loss incurred when a patient would have died under an alternative (unchosen) decision, we can write a general counterfactual loss as:

ℓ(D; Y(0), Y(1)) = ℓY(D) + ˜ℓY(1−D) + cD.    (3)

These examples illustrate that the counterfactual loss framework allows us to formalize aspects of decision-making that are difficult to capture using standard loss functions.

2.2 Asymmetric counterfactual loss

The counterfactual classification loss introduced above considers only one potential outcome at
a time, Y(d) for d = 0, 1. In contrast, a fully general counterfactual loss may depend on both potential outcomes at the same time. This leads naturally to the concept of principal strata, which classifies individuals according to their joint potential outcomes [Frangakis and Rubin, 2002]:

• Never survivors, (Y(0), Y(1)) = (0, 0): individuals who would not survive regardless of treatment.
• Responders, (Y(0), Y(1)) = (0, 1): individuals who would survive only if given the treatment.
• Harmed, (Y(0), Y(1)) = (1, 0): individuals who would survive only if not treated.
• Always survivors, (Y(0), Y(1)) = (1, 1): individuals who would survive regardless of treatment.

For any treatment decision D and observed outcome Y(D), the counterfactual loss can vary depending on the principal stratum to which the individual belongs. Building on this framework, Ben-Michael et al. [2024b] propose a counterfactual loss function designed to encode the principle of the Hippocratic Oath, "Do no harm," a longstanding ethical guideline in medicine [Wiens et al., 2019]. According to this principle, causing harm with a treatment is considered more serious than failing to provide a potentially life-saving intervention. This ethical asymmetry motivates the following loss function:

ℓ(D; Y(0), Y(1)) = (1 − Y(0)) Y(1) ℓ^R_D + Y(0)(1 − Y(1)) ℓ^H_{1−D} + Y(0) Y(1) ℓ1 + (1 − Y(0))(1 − Y(1)) ℓ0 + cD,    (4)

where ℓ^R_0 > ℓ^R_1 ≥ 0 and ℓ^H_0 > ℓ^H_1 ≥ 0 represent the losses for incorrect and correct decisions within the responder and harmed strata, respectively. The following inequality encodes the Hippocratic Oath:

∆R = ℓ^R_0 − ℓ^R_1 (failure to treat a responder)  <  ∆H = ℓ^H_0 − ℓ^H_1 (harming a patient).

That is, failing to provide life-saving treatment (a false negative among responders) is considered less costly than harming a patient by administering an unnecessary or detrimental treatment (a false positive among the harmed).
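The asymmetric loss of Equation (4) can be sketched directly, evaluating each principal stratum. The numeric values below are illustrative assumptions chosen so that ∆R < ∆H:

```python
# Sketch of the asymmetric counterfactual loss in Equation (4).
# All numeric values are illustrative assumptions, chosen so that
# Delta_R < Delta_H, encoding the "do no harm" asymmetry.
l0, l1 = 1.0, 0.0          # losses for never survivors / always survivors
lR = {0: 2.0, 1: 0.5}      # responder stratum: weight lR[d] for decision d
lH = {0: 3.0, 1: 0.5}      # harmed stratum: weight lH[1 - d] for decision d
c = {0: 0.0, 1: 0.2}       # treatment costs

def asymmetric_loss(d, y0, y1):
    return ((1 - y0) * y1 * lR[d]          # responders
            + y0 * (1 - y1) * lH[1 - d]    # harmed
            + y0 * y1 * l1                 # always survivors
            + (1 - y0) * (1 - y1) * l0     # never survivors
            + c[d])

delta_R = lR[0] - lR[1]    # cost of failing to treat a responder
delta_H = lH[0] - lH[1]    # cost of harming a patient
assert delta_R < delta_H   # the Hippocratic-Oath asymmetry
print(asymmetric_loss(0, 0, 1))   # untreated responder: lR[0] = 2.0
```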
This counterfactual loss framework enables ethical principles to be explicitly incorporated into the treatment decision-making process. Moreover, the structure of principal strata and asymmetric loss can be extended to address other concerns, such as fairness in decisions [Imai and Jiang, 2023] or prioritizing treatment equity across subpopulations.

2.3 Trichotomous decision

Finally, we consider a slightly more general treatment choice setting by introducing an experimental treatment as a third option, denoted D = 2. A physician may now choose among no treatment (D = 0), standard treatment (D = 1), and experimental treatment (D = 2). Let c2 represent the cost of the experimental treatment (relative to the cost of no treatment, c0 = 0), which we assume to be greater than the cost of standard treatment c1. Let ℓy denote the loss associated with the observed outcome Y(D) = y under decision D. Then, the standard loss function from Equation (1) again takes the form:

ℓStd(D, Y(D)) = ℓY(D) + cD.

Suppose further that the experimental treatment is more invasive than the standard one, which itself is more invasive than giving no treatment. From an ethical or clinical perspective, a physician may prefer to avoid overtreatment by prescribing the least invasive option that ensures patient survival. A counterfactual loss function that encodes this preference is:

ℓCof(D; Y(0), Y(1), Y(2)) = ℓY(D) + cD + Σ_{k<D} rk Y(k),    (5)

where rk represents the "regret" or penalty for administering a more invasive treatment when a
less invasive one would have sufficed (i.e., when Y(k) = 1 for some k < D). When does the counterfactual loss recommend a different treatment than the standard loss? Let µd = E[Y(d)] denote the average potential outcome under treatment d. Then, the difference between the counterfactual and standard risks for a given treatment D = d can be written as:

R(d, ℓCof) − R(d, ℓStd) = Σ_{k<d} rk Pr(Y(k) = 1 | D = d) Pr(D = d).

To gain further intuition, we normalize the loss values such that ℓ0 = 1 and ℓ1 = 0. Under this normalization, the counterfactual risk may favor a less invasive treatment k < d, even when the standard risk prefers d, if the regret from overtreatment is sufficiently large. Specifically, the counterfactual risk favors k over d whenever:

0 < [(µd − cd) − (µk − ck)] / µk < rk.

This expression highlights the trade-off between improved survival probabilities, treatment costs, and the ethical penalty for overtreatment. If the gain in survival probability net of the additional cost from k to d is small relative to the regret for overtreatment, the counterfactual risk will recommend the less invasive option. Thus, the counterfactual loss formally incorporates treatment conservatism and ethical preferences into decision-making.

3 Identification Analysis under Strong Ignorability

Despite its potential, the counterfactual loss framework faces a fundamental challenge: it is difficult to identify the counterfactual risk, which depends on the joint distribution of multiple potential outcomes, quantities that are never jointly observed. This raises the central question we address in the remainder of the paper: under what conditions is the counterfactual risk identifiable? We derive a necessary and sufficient condition that a counterfactual loss function must satisfy for its associated counterfactual risk to be identifiable under the standard assumption of strong ignorability, which posits that treatment decisions are unconfounded given observed variables.
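Returning briefly to the trichotomous example of Section 2.3, the divergence between the two risks can be sketched numerically for a constant rule D∗ ≡ d, using Equation (5) with the normalization ℓ0 = 1, ℓ1 = 0. All numeric values (survival probabilities, costs, regrets) are illustrative assumptions:

```python
# Standard vs counterfactual risk of Equation (5) for a constant rule
# D* = d, with losses normalized to l0 = 1, l1 = 0.  All numeric values
# are illustrative assumptions.
mu = {0: 0.50, 1: 0.60, 2: 0.70}   # mu_d = E[Y(d)], survival probabilities
c = {0: 0.00, 1: 0.02, 2: 0.05}    # treatment costs, c2 > c1 > c0 = 0
r = {0: 0.30, 1: 0.30}             # regret for overtreating when Y(k) = 1

def standard_risk(d):
    return (1 - mu[d]) + c[d]

def counterfactual_risk(d):
    # adds the expected overtreatment regret sum_{k<d} r_k * mu_k
    return standard_risk(d) + sum(r[k] * mu[k] for k in range(d))

best_std = min(mu, key=standard_risk)
best_cof = min(mu, key=counterfactual_risk)
print(best_std, best_cof)   # -> 2 0: the counterfactual risk is more conservative
```

With these values the standard risk recommends the experimental treatment, while the regret terms push the counterfactual risk toward the least invasive option.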
3.1 Setup

Consider an independently and identically distributed (IID) sample of size n drawn from a population. For each unit i = 1, . . . , n, we observe a vector of pre-treatment covariates Xi ∈ X, a discrete decision Di ∈ D := {0, 1, . . . , K − 1} with K ≥ 2, and an outcome Yi ∈ Y := {0, 1, . . . , M − 1} with M ≥ 2. Let Yi(d) ∈ Y denote the potential outcome of unit i under decision d ∈ D. Our goal is to evaluate the quality of a generic decision rule D∗i ∈ D, which may differ from the observed decision Di. We assume that this target decision D∗i is observed for each unit and, when it coincides with the observed decision, yields the same outcome. Formally, we adopt the following assumptions:

Assumption 1 (IID Sampling) The tuple {Yi, Di, D∗i, Xi} is independently and identically distributed across i = 1, . . . , n.

Assumption 2 (Consistency) For all i, the observed outcome satisfies Yi = Yi(Di). Moreover, if D∗i = Di, then Yi(D∗i) = Yi(Di).

In this setting, we do not require knowledge of how D∗i is generated, only that it is observed for each unit. For example, D∗i may correspond to an individualized treatment rule that depends on both observed and unobserved covariates: D∗i = π(Xi, Ui) for some possibly unknown function π : X × U → D, where Ui ∈
U denotes unobserved covariates. Alternatively, D∗i may arise from a stochastic policy, where Pr(D∗i = d) = πd(Xi, Ui) for d ∈ D and Σ_{d=0}^{K−1} πd(Xi, Ui) = 1. Importantly, we do not observe the corresponding potential outcome Yi(D∗i). We study identifiability of counterfactual risk under standard assumptions from the causal inference literature:

Assumption 3 (Strong Ignorability) For i = 1, . . . , n, the following conditions hold:
(a) Unconfoundedness: Di ⊥⊥ (D∗i, {Yi(d)}d∈D) | Xi.
(b) Overlap: there exists η > 0 such that, for all d ∈ D, η < Pr(Di = d | Xi) < 1 − η.

Assumption 3(a) implies that, conditional on covariates, the observed decision Di is independent of both the potential outcomes and the decision rule D∗i. Notably, D∗i may still depend on unobserved covariates, and thus on potential outcomes, in ways that do not violate this assumption. Assumption 3(b) requires that every treatment level in D occurs with nonzero probability given covariates, but only for the observed decision Di, not for the hypothetical decision of interest D∗i.

3.2 Counterfactual Loss and Risk

We define the loss (or negative utility) associated with a decision as a function of the covariates and all potential outcomes. A lower value of the loss indicates a better decision. We refer to this as the counterfactual loss because it depends on the full vector of potential outcomes rather than just the realized one.

Definition 1 (Counterfactual Loss) Given potential outcomes (Yi(0), . . . , Yi(K − 1)) = (y0, . . . , yK−1) and covariates Xi = x, the counterfactual loss of choosing decision D∗ = d is defined as a function ℓ : D × Y^K × X → R.

We now define the counterfactual risk, which is the expected counterfactual loss and serves as the primary estimand of interest. We also define the conditional counterfactual risk, which conditions on the observed covariates.
Definition 2 (Counterfactual Risk and Conditional Counterfactual Risk) Given the counterfactual loss ℓ, the counterfactual risk R(D∗; ℓ) of decision D∗ is given by

R(D∗; ℓ) := E[ℓ(D∗; Y(0), . . . , Y(K − 1), X)] = E[RX(D∗; ℓ)],

where Rx(D∗; ℓ) is the conditional counterfactual risk given X = x,

Rx(D∗; ℓ) := Σ_{d∈D} Σ_{(y0, . . . , yK−1)∈Y^K} ℓ(d; y0, . . . , yK−1, x) · Pr(D∗ = d, Y(0) = y0, . . . , Y(K − 1) = yK−1 | X = x).

When clear from context, we abbreviate the counterfactual risk as R(D∗) = R(D∗; ℓ) and the conditional counterfactual risk as Rx(D∗) = Rx(D∗; ℓ).

The central goal of this paper is to study the identifiability of the counterfactual risk. Intuitively, the challenge arises from the fact that, for each unit, we observe only a single realized outcome out of K possible potential outcomes. Although the counterfactual loss depends on the entire set of potential outcomes, we will show that the expected loss, i.e., the counterfactual risk, can be identified under a set of relatively mild structural conditions on the loss function.

3.3 Additivity

To overcome the identification problem, we consider the following restricted class of counterfactual losses that are additive in the potential outcomes.

Definition 3 (Additive Counterfactual Loss) Let y = (y0, . . . , yK−1). Then, the additive counterfactual loss is defined as

ℓAdd(d; y, x) := Σ_{k∈D} ωk(d, yk, x) + ϖ(y, x),

where ωk : D × Y × X → R is the weight function and
ϖ : Y^K × X → R is the intercept function.

The additive counterfactual loss is the sum of two terms; the first term depends on the decision D∗ = d, while the second does not. Moreover, the first term is the sum of weight functions ωk, each depending only on the corresponding potential outcome Yi(k) given the covariates. The second term does not depend on the decision and includes all possible interactions of potential outcomes. We illustrate Definition 3 by revisiting the examples shown in Section 2.

Example 1 (Counterfactual Classification Loss) The counterfactual loss function given in Equation (2) is additive in the potential outcomes with ω0(0, 0) = ℓ0, ω0(1, 1) = ˜ℓ1 + c, ω1(d, y1) = 0 for d, y1 ∈ {0, 1}, and ϖ(y) = 0. The extension of this counterfactual loss given in Equation (3) is also additive with ωd(d, yd) = ℓyd + cd, ω1−d(d, y1−d) = ˜ℓy1−d for d, y ∈ {0, 1}, and ϖ(y) = 0.

Example 2 (Asymmetric Counterfactual Loss) The asymmetric counterfactual loss given in Equation (4) is an example of a non-additive loss whenever ∆R ≠ ∆H (see Section 3.6 for details). When ∆R = ∆H = ∆, the loss is additive. The weights for this loss are given by ωd(d, yd) = ℓyd + cd, ω1−d(d, y1−d) = {∆ − (ℓ0 − ℓ1)} y1−d, and ϖ(y) = (ℓ^H_1 − ℓ1) y0 (1 − y1) + (ℓ^R_1 − ℓ1)(1 − y0) y1 − {∆ − (ℓ0 − ℓ1)} y0 y1. Note that ∆ − (ℓ0 − ℓ1) is the difference (of the loss differences) between units for whom the treatment matters and those for whom it does not.

Example 3 (Trichotomous decision) The counterfactual loss given in Equation (5) is additive in the potential outcomes with ωd(d, yd) = ℓyd + cd, ωk(d, yk) = rk yk 1{k < d} for d, k ∈ {0, 1, 2}, yd, yk ∈ {0, 1}, and ϖ(y) = 0.

3.4 Nonparametric Identification

We now investigate conditions under which the counterfactual risk is identifiable, focusing on the class of additive counterfactual loss functions. Identification has been extensively studied in the causal inference literature [e.g., Matzkin, 2007, Pearl, 2009].
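For a binary decision and outcome, the additivity of Definition 3 can be checked mechanically: a loss ℓ(d; y0, y1) admits the form ω0(d, y0) + ω1(d, y1) + ϖ(y0, y1) if and only if its outcome interaction is the same for every decision d (this is the restriction derived as Equation (9) in Section 3.6). A sketch with illustrative values, contrasting Example 1 with a non-additive instance of Example 2:

```python
# Checking additivity (Definition 3) numerically for binary decision and
# outcome.  All numeric values below are illustrative assumptions.
l0, lt1, c1 = 1.0, 0.8, 0.2   # lt1 plays the role of the tilded loss

def classification_loss(d, y0, y1):
    # Equation (2): depends on Y(0) only, plus the treatment cost.
    return (1 - d) * (1 - y0) * l0 + d * y0 * lt1 + c1 * d

def interaction(loss_fn, d):
    return (loss_fn(d, 0, 0) + loss_fn(d, 1, 1)
            - loss_fn(d, 0, 1) - loss_fn(d, 1, 0))

def is_additive(loss_fn):
    # Definition 3 allows an intercept free of d, so additivity only
    # requires the interaction to be constant across decisions.
    return abs(interaction(loss_fn, 0) - interaction(loss_fn, 1)) < 1e-12

def asymmetric_loss(d, y0, y1):
    # Equation (4) with Delta_R != Delta_H (illustrative values), which
    # Example 2 states is non-additive.
    lR, lH = {0: 2.0, 1: 0.5}, {0: 3.0, 1: 0.5}
    return ((1 - y0) * y1 * lR[d] + y0 * (1 - y1) * lH[1 - d]
            + (1 - y0) * (1 - y1) * 1.0 + 0.2 * d)

print(is_additive(classification_loss), is_additive(asymmetric_loss))
# -> True False
```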
We adopt the following general definition of identification from Basse and Bojinov [2020]:

Definition 4 (Identifiability) Let P denote the set of all joint probability distributions under consideration, and let Q be the set of all observable distributions. Let Q : P → Q denote a surjective mapping that associates each joint distribution with an observable distribution. Let θ(P) denote the (causal) estimand of interest under distribution P ∈ P. For a fixed observable distribution Q0 ∈ Q, define S0 = {P ∈ P : Q(P) = Q0}. We say that θ is identifiable at Q0 with respect to Q if there exists a constant θ0 such that θ(P) = θ0 for all P ∈ S0. We say that θ is identifiable with respect to Q if it is identifiable at every Q0 ∈ Q.

According to this definition, a causal estimand is identifiable if it takes the same value across all joint probability measures that map to any given observable distribution. Surjectivity of Q ensures that this property holds for every possible observable distribution.

In our setting, P denotes the set of all joint probability distributions P(D∗, D, Y(0), . . . , Y(K − 1), X) that satisfy Assumptions 1–3. The causal estimand of interest is the counterfactual risk R(D∗; ℓ), which, according to Definition 2, is fully determined by P ∈ P. Let Q(P) = P(D∗, D, Y, X) denote the surjective mapping that yields this observable distribution from each P ∈ P. Then, according
to Definition 4, the counterfactual risk R(D∗; ℓ) is identifiable with respect to Q if it can be expressed as a function of the observable distribution P(D∗, D, Y, X).

As formally shown in Lemma 2 (see Appendix G.2), since P(X) is observable, the counterfactual risk R(D∗; ℓ) is identifiable with respect to P(D∗, D, Y, X) if and only if the conditional counterfactual risk RX(D∗; ℓ) is identifiable with respect to P(D∗, D, Y | X). Since RX(D∗; ℓ) depends on the conditional distribution P(D∗, Y(0), . . . , Y(K − 1) | X), we need to express this distribution of potential outcomes in terms of observable distributions. Under Assumption 3, we can only identify the joint distribution of the decision D∗ and a single potential outcome given covariates,

Pr(D∗ = d, Y(k) = yk | X) = Pr(D∗ = d, Y(k) = yk | D = k, X) = Pr(D∗ = d, Y = yk | D = k, X),

for any k ∈ D. In particular, without additional assumptions, we cannot identify a joint distribution that involves multiple potential outcomes. Thus, it follows that Rx(D∗; ℓ) is identifiable if and only if it can be expressed solely in terms of the distribution Pr(D∗ = d, Y(k) = y | X = x) for any d, k ∈ D, y ∈ Y. This observation suggests that the additivity assumption given in Definition 3 can facilitate identification.

We first show that under an additive loss, we can identify the difference in conditional counterfactual risk between any two decision-making systems, even though each conditional counterfactual risk is identifiable only up to an unknown additive constant. That is, the conditional counterfactual risk of decision-making system D∗ can be decomposed into an identifiable part and an unidentifiable part, where the latter is a function of covariates and does not depend on D∗.

Theorem 1 (The Conditional Counterfactual Risk under the Additive Loss) Suppose that a counterfactual loss function takes the additive form given in Definition 3.
Then, under Assumptions 1–3, the conditional counterfactual risk can be decomposed as

Rx(D∗; ℓAdd) = Σ_{d∈D} Σ_{k∈D} Σ_{y∈Y} ωk(d, y, x) Pr(D∗ = d, Y(k) = y | X = x) + C(x),

where

C(x) = Σ_{y∈Y^K} ϖ(y, x) Pr(Y(D) = y | X = x),

and Y(D) = (Y(0), . . . , Y(K − 1)). Thus, the difference in conditional counterfactual risk between any pair of decision-making systems can be identified.

The proof is given in Appendix A.

Taking the expectation of the expression in Theorem 1 over the distribution of covariates, we find that the counterfactual risk is identified up to an unknown additive constant E[C(X)]. We state this result as the following corollary without proof.

Corollary 1 (The Counterfactual Risk under the Additive Loss) Suppose that a counterfactual loss function takes the additive form given in Definition 3. Then, under Assumptions 1–3, the counterfactual risk can be decomposed as

R(D∗; ℓAdd) = Σ_{d∈D} Σ_{k∈D} Σ_{y∈Y} E[ωk(d, y, X) Pr(D∗ = d, Y = y | D = k, X)] + E[C(X)].

Thus, the difference in counterfactual risk between any pair of decision-making systems is identified.

Theorem 1 immediately implies that one can find an optimal decision rule, e.g., π : X → D, by minimizing the identifiable term alone, i.e.,

πopt ∈ argmin_{π∈Π} R(π(X); ℓAdd) = argmin_{π∈Π} Σ_{d∈D} Σ_{k∈D} Σ_{y∈Y} E[ωk(d, y, X) Pr(π(X) = d, Y = y | D = k, X)],

where Π is a class of decision rules to be considered. Moreover, if the constant term is zero, then we can exactly identify the counterfactual
risk. This happens when the second term in the additive counterfactual loss (Definition 3) is zero, i.e., ϖ(y, x) = 0 for all y ∈ Y^K and x ∈ X. Lastly, by placing assumptions on ϖ, we can control the magnitude of E[C(X)], which leads to partial identification results.

A natural question arises as to whether the above identification results can be obtained under a different form of counterfactual loss function. The next theorem shows that the additivity of the loss function given in Definition 3 is a necessary and sufficient condition for the identification of the difference in counterfactual risk. In other words, all counterfactual risks that are identifiable up to an additive constant must have an additive loss.

Theorem 2 (Additivity as Necessary and Sufficient Condition for Identification) The difference in counterfactual risk between any pair of decision-making systems can be identified under Assumptions 1–3 if and only if the corresponding loss takes the additive form given in Definition 3.

The proof is given in Appendix B.

Finally, as a corollary of this identification result, we show that exact identification of the counterfactual risk is possible if and only if the interaction term ϖ is zero in the additive counterfactual loss (Definition 3).

Corollary 2 (Necessary and Sufficient Condition for Exact Identification) The counterfactual risk given in Definition 2 can be exactly identified under Assumptions 1–3 if and only if the corresponding loss takes the following form,

˜ℓAdd(d; y, x) = Σ_{k∈D} ωk(d, yk, x),

for weight functions ωk : D × Y × X → R. The identification formula is given by

R(D∗, ˜ℓAdd) = Σ_{d∈D} Σ_{k∈D} Σ_{y∈Y} E[ωk(d, y, X) Pr(D∗ = d, Y = y | D = k, X)].

The proof is given in Appendix C.

3.5 Binary Outcome Case

To further build intuition for the above results, we study the special case of a binary outcome, i.e., Y = {0, 1}. For notational simplicity, we ignore covariates.
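In this covariate-free setting, the identification formula of Corollary 2 can be illustrated with a small simulation: the oracle risk computed from both potential outcomes agrees, up to sampling error, with the plug-in estimate built from the observable distribution Pr(D∗ = d, Y = y | D = k). The data-generating process and weights below are illustrative assumptions, not from the paper:

```python
# Plug-in sketch of Corollary 2 (no covariates, binary decision/outcome):
# R = sum_{d,k,y} omega_k(d, y) * Pr(D* = d, Y = y | D = k).
# The DGP and the weights are illustrative assumptions.
import random

random.seed(0)
omega = {0: {(0, 0): 1.0, (0, 1): -0.5, (1, 0): 0.3, (1, 1): 0.1},
         1: {(0, 0): 0.2, (0, 1): 0.4, (1, 0): 0.8, (1, 1): -1.0}}

n = 200_000
data = []
for _ in range(n):
    y0 = int(random.random() < 0.4)                    # potential outcome Y(0)
    y1 = int(random.random() < (0.9 if y0 else 0.6))   # Y(1), correlated with Y(0)
    d_star = int(random.random() < 0.3 + 0.4 * y0)     # target rule, depends on Y(0)
    d = int(random.random() < 0.5)                     # observed D randomized -> ignorable
    data.append((d_star, d, (y0, y1)[d], (y0, y1)))

# Infeasible "oracle" risk, using both potential outcomes directly.
true_risk = sum(omega[k][(ds, ys[k])]
                for ds, _, _, ys in data for k in (0, 1)) / n

# Identified risk: plug in the observable Pr(D* = d, Y = y | D = k).
est = 0.0
for k in (0, 1):
    obs = [(ds, y) for ds, d, y, _ in data if d == k]
    for d_ in (0, 1):
        for y_ in (0, 1):
            pr = sum(1 for ds, y in obs if (ds, y) == (d_, y_)) / len(obs)
            est += omega[k][(d_, y_)] * pr

print(round(true_risk, 2), round(est, 2))   # the two agree up to sampling error
```

Note that the target rule here deliberately depends on a potential outcome; unconfoundedness only constrains the observed decision D, which is randomized.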
Without loss of generality, assume that Y = 1 is a desirable outcome while Y = 0 is undesirable. Under this setting, it is natural to consider the following ordering of weight functions in the additive counterfactual loss for any given decision d:

ωd(d, 1) ≤ {ωd′(d, 0)}d′≠d ≤ 0 ≤ {ωd′(d, 1)}d′≠d ≤ ωd(d, 0),    (6)

where the terms ωd(d, yd) for yd = 0, 1 account for the accuracy of the decision, and the other weights {ωd′(d, yd′)}d′≠d for yd′ = 0, 1 capture its difficulty. The counterfactual loss is smaller when fewer alternative decisions would yield a desirable outcome (i.e., making a correct decision is difficult). The loss is greater when many alternatives would result in a desirable outcome (i.e., making a correct decision is easy). While accuracy is an essential part of standard decision theory, difficulty is unique to the counterfactual loss. In this way, the counterfactual loss rewards a consequential decision that changes the outcome.

We first discuss the accuracy component. The correct "true positive" decision, i.e., (D∗, Y(d)) = (d, 1), decreases the counterfactual loss by the largest amount, which is represented by the largest negative weight ωd(d, 1). In contrast, the incorrect "false negative" decision, i.e., (D∗, Y(d)) = (d, 0), increases the loss by the greatest amount, which corresponds to the positive weight ωd(d, 0). While accuracy
is an essential part of standard decision theory, difficulty is unique to the counterfactual loss. Avoidance of an incorrect decision ("true negative"), i.e., (D∗, Y(d)) = (d′, 0) in the top right cell, results in a decrease of the loss, while failure to make a correct "true positive" decision, i.e., (D∗, Y(d)) = (d′, 1), increases the loss. Because these counterfactual outcomes do not occur, their weights are smaller in magnitude than those of the realized outcome. In this way, the counterfactual loss rewards a consequential decision that changes the outcome.

Lastly, since each weight ωk(d, y) depends on both the decision and the outcome, we can decompose it into a baseline decision term and an interaction component:

ωk(d, y) = ωk(d, 0) + 1{y = 1}(ωk(d, 1) − ωk(d, 0)).

The first term corresponds to the baseline cost of making decision D∗ = d on the potential outcome Y(k), while the second term accounts for the interaction of the decision and the potential outcome.

Principal Strata                            Decision D∗ = 0    Decision D∗ = 1
Never Survivor, (Y(0), Y(1)) = (0, 0)       ℓ(0; 0, 0)         ℓ(1; 0, 0)
Harmed, (Y(0), Y(1)) = (1, 0)               ℓ(0; 1, 0)         ℓ(1; 1, 0)
Responsive, (Y(0), Y(1)) = (0, 1)           ℓ(0; 0, 1)         ℓ(1; 0, 1)
Always Survivor, (Y(0), Y(1)) = (1, 1)      ℓ(0; 1, 1)         ℓ(1; 1, 1)

Table 2: Counterfactual loss for principal strata in the binary case, ℓ(D∗; Y(0), Y(1)).

We formalize the above discussion as the following corollary of Theorem 1. Specifically, we show that the conditional counterfactual risk can be decomposed into four components: accuracy based on the realized outcome, Pr(D∗ = d, Y(d) = 1 | X = x); difficulty based on counterfactual outcomes, Pr(D∗ = d, Y(k) = 1 | X = x) for k ≠ d; the baseline decision probability, Pr(D∗ = d | X = x); and the constant C(x), with each term weighted according to the specified additive loss function.

Corollary 3 (Accuracy and Difficulty for the Binary Outcome Case) Assume Y ∈ {0, 1}. Suppose that the counterfactual loss function takes the additive form given in Definition 3.
Then, under Assumptions 1–3, the conditional counterfactual risk can be written as

Rx(D∗; ℓAdd) = Σ_{d∈D} ζd(d, x) Pr(D∗ = d, Y(d) = 1 | X = x) + Σ_{d∈D} Σ_{k∈D, k≠d} ζk(d, x) Pr(D∗ = d, Y(k) = 1 | X = x) + Σ_{d∈D} ξ(d, x) Pr(D∗ = d | X = x) + C(x),

where ζk(d, x) = ωk(d, 1, x) − ωk(d, 0, x), ξ(d, x) = Σ_{k∈D} ωk(d, 0, x), and

C(x) = Σ_{y∈{0,1}^K} ϖ(y, x) Pr(Y(D) = y | X = x).

The proof is given in Appendix F.

If we choose the weights such that Equation (6) holds, then we have ζd(d, x) ≤ 0 ≤ ζk(d, x) for all k ≠ d. This implies that the risk decreases with greater accuracy, i.e., when the realized outcome is desirable, and increases with counterfactual regret, i.e., when other decisions would also yield a desirable outcome, reflecting the difficulty of the decision. The baseline decision term is a regularizer that penalizes decisions with higher baseline losses.

3.6 Connection to Principal Strata

Next, we consider the relationship between the principal strata and the weights of an additive counterfactual loss. For simplicity, we further restrict our attention to the case of a binary decision and a binary outcome, though the results can be generalized. In this setting, as shown in Table 2, there are a total of eight different values the loss function can take, i.e., ℓ(d; y0, y1) for d, y0, y1 ∈ {0, 1}, corresponding to two decisions for each of the four principal strata. We
can place different restrictions on the loss function to satisfy additivity. Standard statistical decision theory assumes that the loss depends only on the choice and its realized outcome. This implies that when D∗ = 0, never survivors and responsive units, for whom Y(0) = 0, incur the same loss. Similarly, always survivors and harmed units, for whom Y(0) = 1, yield the same loss. When D∗ = 1, never survivors and the harmed are assumed to have the same loss, as do always survivors and responsive units. Together, standard statistical decision theory imposes a total of four equality constraints:

ℓ(0; 0, 0) = ℓ(0; 0, 1), ℓ(0; 1, 0) = ℓ(0; 1, 1), ℓ(1; 0, 0) = ℓ(1; 1, 0), ℓ(1; 0, 1) = ℓ(1; 1, 1).    (7)

In Table 2, these constraints imply that within each column, the light orange and purple cells equal one another, while the dark orange and purple cells are equal to each other. In this case, four out of the twelve possible weights are non-zero and take the form

ω0(0, 0) = ℓ(0; 0, y1), ω0(0, 1) = ℓ(0; 1, y1), ω1(1, 0) = ℓ(1; y0, 0), ω1(1, 1) = ℓ(1; y0, 1),

for y0, y1 ∈ {0, 1}. While the above set of constraints leads to exact identification of the risk, we can further reduce the number of constraints without losing exact identification. For example, we can assume that for each decision d ∈ {0, 1}, the sum of losses for never survivors and always survivors equals the sum of losses for the harmed and responsive:

ℓ(d; 0, 0) + ℓ(d; 1, 1) = ℓ(d; 0, 1) + ℓ(d; 1, 0).    (8)

This represents a necessary and sufficient condition for exact identification of the counterfactual risk under strong ignorability. In Table 2, this implies that the sum of orange cells equals the sum of purple cells within each column.
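The four-constraint case of Equation (7) can be verified mechanically: under these constraints the loss depends only on the realized outcome, and the four non-zero weights listed above reconstruct it exactly. The numeric values below are illustrative assumptions:

```python
# Under the constraints of Equation (7), the loss depends only on the
# realized outcome y_d, and the four non-zero weights omega_0(0, y0)
# and omega_1(1, y1) reconstruct it exactly.  Values are illustrative.
lo, hi, c1 = 0.0, 1.0, 0.2

def ell(d, y0, y1):
    # depends only on the realized outcome y_d, as Equation (7) requires
    return (hi if (y0, y1)[d] == 0 else lo) + c1 * d

# Non-zero weights stored as omega[(k, d, y)]; all other weights and the
# intercept are zero.
omega = {(0, 0, 0): ell(0, 0, 0), (0, 0, 1): ell(0, 1, 0),
         (1, 1, 0): ell(1, 0, 0), (1, 1, 1): ell(1, 0, 1)}

def additive(d, y0, y1):
    return omega.get((0, d, y0), 0.0) + omega.get((1, d, y1), 0.0)

exact = all(abs(additive(d, y0, y1) - ell(d, y0, y1)) < 1e-12
            for d in (0, 1) for y0 in (0, 1) for y1 in (0, 1))
print(exact)   # -> True
```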
Given these two equality restrictions, if we let ω0(1, 1) = a and ω1(0, 1) = b, we can express the remaining six weights as follows:

ω0(0, 0) = ℓ(0; 0, 1) − b, ω0(0, 1) = ℓ(0; 1, 1) − b, ω1(1, 0) = ℓ(1; 1, 0) − a, ω1(1, 1) = ℓ(1; 1, 1) − a,
ω0(1, 0) = ℓ(1; 0, 1) − ℓ(1; 1, 1) + a, ω1(0, 0) = ℓ(0; 1, 0) − ℓ(0; 1, 1) + b.

We can further relax the equality constraints and achieve identification up to an unknown constant. We may assume that the sum of loss differences between the two decisions for never survivors and always survivors is equal to the corresponding sum for the responsive and harmed units. This restriction represents a necessary and sufficient condition for identification up to an unknown constant and can be written as

[ℓ(1; 0, 0) − ℓ(0; 0, 0)] + [ℓ(1; 1, 1) − ℓ(0; 1, 1)] = [ℓ(1; 1, 0) − ℓ(0; 1, 0)] + [ℓ(1; 0, 1) − ℓ(0; 0, 1)].    (9)

Restriction on loss functions                                                Degrees of freedom    Strong ignorability
light (dark) orange = light (dark) purple within each column (Equation (7))  4                     Exact
sum of orange = sum of purple within each column (Equation (8))              6                     Exact*
sum of orange = sum of purple across the two columns (Equation (9))          7                     Constant*
No restriction                                                               8                     Unidentifiable

Table 3: The table summarizes the identifiability restrictions under which the risk is either exactly identifiable ("Exact" in green), identifiable up to a constant ("Constant" in
yellow), or unidentifiable (in red), under strong ignorability. An asterisk (*) signifies a necessary and sufficient condition. The second column shows the number of free parameters in the loss specification.

In Table 2, this equality constraint implies that the sum of orange cells is equal to the sum of purple cells across the two columns.

To examine identifiability and obtain weights under this and more general settings with multi-valued decisions and outcomes, we consider the following linear system,

A w = ℓ,    (10)

where ℓ = {ℓ(d; y0, . . . , yK−1)}_{d∈D, y∈Y^K} is the vector of all possible loss values, w = {ωk(d, y), ϖ(y)}_{d∈D, k∈D, y∈Y, y∈Y^K} is the vector of all weights and intercept terms, and A is a |ℓ| × |w| matrix of zeros and ones defined by the additive structure given in Definition 3. Then, the image of A, denoted by im(A) = ker(A⊤)⊥ [Lax, 2007], characterizes the space of counterfactual loss functions that are additive. If ℓ ∈ im(A), then the loss is identifiable and there exists a solution w satisfying Equation (10). Otherwise, ℓ ∉ im(A) and the loss is not identifiable.

For exact identification, we consider a restricted version of Equation (10), Ã w̃ = ℓ̃, where w̃ = {ωk(d, y)}_{d∈D, k∈D, y∈Y} contains only the weights, and the matrix Ã is restricted accordingly. An explicit example of this construction is provided in Appendix H. Solving this linear system under the restriction given in Equation (9) gives the following weights:

ω0(0, 0) = ℓ(0; 0, 1) − b − c, ω0(0, 1) = ℓ(0; 1, 1) − b − e,
ω1(1, 0) = ℓ(1; 1, 0) − a − d, ω1(1, 1) = ℓ(1; 1, 1) − a − e,
ω0(1, 0) = ℓ(1; 0, 1) − ℓ(1; 1, 1) + a − c + e, ω1(0, 0) = ℓ(0; 1, 0) − ℓ(0; 1, 1) + b − d + e,
ϖ0,1(0, 0) = ℓ(1; 0, 0) − ℓ(1; 1, 0) − ℓ(1; 0, 1) + ℓ(1; 1, 1) + c + d − e,

where ω0(1, 1) = a, ω1(0, 1) = b, ϖ0,1(0, 1) = c, ϖ0,1(1, 0) = d, and ϖ0,1(1, 1) = e are free parameters. The third column of Table 3 summarizes the identification results discussed above.

3.7 When is the Additive Counterfactual Loss Different from the Standard Loss?
Although the additive counterfactual loss given in Definition 3 generalizes the additive standard loss, one may wonder when they yield different treatment recommendations. We first show that when the decision is binary, i.e., K = 2, for any given additive counterfactual loss, we can always find a standard loss that gives the same treatment recommendations. When the decision is non-binary, however, the additive counterfactual loss no longer generates the same treatment recommendations as those based on the standard loss.

Proposition 1 (Additive Counterfactual Risk with Binary Decision) Suppose that the decision is binary, i.e., D = {0, 1}.

1. For any additive counterfactual loss ℓAdd(d; y) defined in Definition 3, we can construct a standard loss ℓStd(d, yd) such that the risk difference R(D∗; ℓAdd) − R(D∗; ℓStd) does not depend on D∗. Moreover, this standard loss is unique up to a constant depending only on covariates and takes the following form:

ℓStd(d, yd, x) = ωd(d, yd, x) − ωd(1 − d, yd, x) + c(x).

2. While the mapping from the space of counterfactual losses to standard losses that yield the same treatment recommendations is surjective, it is not injective. That is, for any standard loss, there exists an infinite number of counterfactual losses that give the same treatment recommendations.

The proof is given in Appendix D. This proposition implies that in
the binary decision setting, any additive counterfactual risk admits a standard risk that yields identical treatment recommendations. As a consequence, both risks lead to the same optimal policy. Perhaps surprisingly, this result is not limited to binary outcomes. In addition, the corresponding standard loss ℓStd(d, yd) is given (up to an additive constant depending only on covariates) by the difference between the weight for the realized outcome Y(D) = yd under the chosen decision D∗ = d and the weight for the same outcome under the alternative decision D∗ = 1 − d.

Although every additive counterfactual loss has a unique corresponding standard loss (up to an additive constant) with the same treatment recommendations, the reverse does not hold. That is, any given standard loss has infinitely many additive counterfactual losses with the same treatment recommendations. Each such counterfactual loss assigns different values to the principal strata. This asymmetry explains why, even in the binary case, the additive counterfactual loss has a straightforward interpretation based on principal strata whereas the standard loss does not. We now revisit our binary decision examples given in Section 3.3.

Example 1 (Counterfactual Classification Loss) The counterfactual loss given in Equation (3) can be written as the following standard loss:

ℓStd(d, yd) = ℓyd + cd − ˜ℓyd = ℓyd − ˜ℓyd + cd,

which corresponds to the difference between the realized and counterfactual losses associated with outcome yd, plus the cost term.

Example 2 (Asymmetric Counterfactual Loss) When ∆R = ∆H = ∆, the loss in Equation (4) can be rewritten as a standard loss with

ℓStd(d, yd) = ℓyd + cd − {∆ − (ℓ0 − ℓ1)} yd = (ℓ^H_1 − ℓ^H_0) yd + cd + ℓ0 = (ℓ^R_1 − ℓ^R_0) yd + cd + ℓ0.

This is the loss difference between a positive and a negative outcome for units for whom the treatment matters when the treatment is applied. Note that the loss ℓy associated with never- and always-survivors does not impact the treatment choice.
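Proposition 1 can be checked numerically: the risk of the constructed standard loss differs from the additive counterfactual risk by a quantity that is the same for every decision rule. The weights and the joint distribution of (Y(0), Y(1)) below are illustrative assumptions:

```python
# Numerical check of Proposition 1 for a binary decision: with
# l_std(d, y_d) = omega_d(d, y_d) - omega_d(1 - d, y_d), the risk
# difference R_add - R_std is the same for the rules D* = 0 and D* = 1.
# Weights and the joint law of (Y(0), Y(1)) are illustrative.
omega = {(0, 0, 0): 1.0, (0, 0, 1): -0.5, (0, 1, 0): 0.3, (0, 1, 1): 0.1,
         (1, 0, 0): 0.2, (1, 0, 1): 0.4, (1, 1, 0): 0.8, (1, 1, 1): -1.0}
# omega[(k, d, y)] = omega_k(d, y); joint distribution of (Y(0), Y(1)):
p = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

def risk_add(d):
    # risk of the constant rule D* = d under the additive loss
    return sum(pr * (omega[(0, d, y0)] + omega[(1, d, y1)])
               for (y0, y1), pr in p.items())

def risk_std(d):
    # standard loss evaluated at the realized outcome y_d
    return sum(pr * (omega[(d, d, (y0, y1)[d])] - omega[(d, 1 - d, (y0, y1)[d])])
               for (y0, y1), pr in p.items())

diffs = [risk_add(d) - risk_std(d) for d in (0, 1)]
print(abs(diffs[0] - diffs[1]) < 1e-12)   # -> True: the difference is rule-free
```

The identity holds for any weights and any joint distribution, since both differences reduce to E[ω0(1, Y(0)) + ω1(0, Y(1))].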
Finally, we show that, in general, when the decision is non-binary, no standard loss corresponds directly to the additive counterfactual loss.

Proposition 2 (Additive Counterfactual Risk with Non-binary Decision) Assume that the decision is non-binary, i.e., K = |D| ≥ 3. Then, for any additive counterfactual loss defined in Definition 3 with at least one counterfactual weight ω_k(d, y_k, x) depending on the decision d ∈ D and the potential outcome y_k ∈ Y for d ≠ k, there exists no standard loss ℓStd(d; y_d) such that the risk difference R(D∗; ℓAdd) − R(D∗; ℓStd) does not depend on D∗.

The proof is given in Appendix E. Proposition 2 shows that in the non-binary setting, no standard loss can replicate the decision behavior induced by a genuinely counterfactual additive loss. This demonstrates that standard losses are inadequate when decision–counterfactual interactions matter.

4 Concluding Remarks

In this paper, we generalize classical statistical decision theory for treatment choice by introducing counterfactual losses that depend on all potential outcomes. While the standard loss evaluates decisions based solely on observed outcomes, the counterfactual loss also considers what would have happened under alternative actions. This enables us to assess not only the accuracy of a decision but also its difficulty, that is, whether the chosen action truly mattered or whether other choices would have led to similar results. A central challenge in this framework is identification: since only one potential outcome
is observed per unit, counterfactual risks are generally unidentifiable. We show that under the standard assumption of strong ignorability, a counterfactual loss leads to an identifiable risk if and only if it is additive in the potential outcomes. Moreover, we establish that such additive counterfactual losses have no equivalent standard-loss representation as long as decisions are non-binary. Counterfactual loss expands the scope of statistical decision theory by allowing analysts to incorporate ethical and practical considerations that are especially relevant in domains such as medicine and public policy. Promising directions for future work include extending the framework to continuous outcomes and relaxing the strong ignorability assumption in the identification analysis [see Ben-Michael et al., 2024a].

References

Susan Athey and Stefan Wager. Policy learning with observational data. Econometrica, 89(1):133–161, 2021.

Guillaume Basse and Iavor Bojinov. A general theory of identification. arXiv preprint arXiv:2002.06041, 2020.

Eli Ben-Michael, D. James Greiner, Melody Huang, Kosuke Imai, Zhichao Jiang, and Sooahn Shin. Does AI help humans make better decisions? A methodological framework for experimental evaluation, March 2024a. URL http://arxiv.org/abs/2403.12108. arXiv:2403.12108 [cs, econ, q-fin, stat].

Eli Ben-Michael, Kosuke Imai, and Zhichao Jiang. Policy learning with asymmetric counterfactual utilities. Journal of the American Statistical Association, 0(0):1–14, 2024b. ISSN 0162-1459. doi: 10.1080/01621459.2023.2300507. URL https://doi.org/10.1080/01621459.2023.2300507.

James O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer Science & Business Media, 1985.

Amanda Coston, Alan Mishler, Edward H. Kennedy, and Alexandra Chouldechova. Counterfactual risk assessments, evaluation, and fairness.
In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 582–593, 2020.

Constantine E. Frangakis and Donald B. Rubin. Principal stratification in causal inference. Biometrics, 58(1):21–29, 2002.

Kosuke Imai and Zhichao Jiang. Principal fairness for human and algorithmic decision-making. Statistical Science, 38(2):317–328, 2023.

Toru Kitagawa and Aleksey Tetenov. Who should be treated? Empirical welfare maximization methods for treatment choice. Econometrica, 86(2):591–616, 2018.

Tor Lattimore and Csaba Szepesvári. Bandit Algorithms. Cambridge University Press, 2020.

Peter D. Lax. Linear Algebra and Its Applications. Wiley-Interscience, Hoboken, NJ, 2nd edition, 2007. ISBN 978-0-471-75156-4. URL https://matematicas.unex.es/~navarro/algebralineal/lax.pdf.

Erich L. Lehmann and George Casella. Theory of Point Estimation. Springer Science & Business Media, 2006.

Erich Leo Lehmann, Joseph P. Romano, et al. Testing Statistical Hypotheses, volume 3. Springer, 1986.

Charles F. Manski. Identification problems and decisions under ambiguity: Empirical analysis of treatment response and normative analysis of treatment choice. Journal of Econometrics, 95(2):415–442, 2000.

Charles F. Manski. Statistical treatment rules for heterogeneous populations. Econometrica, 72(4):1221–1246, 2004.

Charles F. Manski. Choosing treatment policies under ambiguity. Annual Review of Economics, 3(1):25–49, 2011.

Rosa L. Matzkin. Nonparametric identification. Handbook of Econometrics, 6:5307–5368, 2007.

Jerzy Neyman. On the application of probability theory to agricultural experiments. Essay on principles. Annals of Agricultural Sciences, pages 1–51, 1923.

Judea Pearl. Causal inference in statistics: An overview. Statistics Surveys, 3:96–146, 2009.

Paul R. Rosenbaum and Donald B. Rubin. The central role of the propensity score in observational studies for causal effects.
Biometrika, 70(1):41–55, 1983.

Donald B. Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):688, 1974.

Leonard J. Savage. The Foundations of Statistics. Courier Corporation, 1972.

Abraham Wald. Statistical Decision Functions. Wiley, 1950.

Jenna Wiens, Suchi Saria, Mark Sendak, Marzyeh Ghassemi, Vincent X. Liu, Finale Doshi-Velez, Kenneth Jung, Katherine Heller, David Kale, Mohammed Saeed, et al. Do no harm: a roadmap for responsible machine learning for health care. Nature Medicine, 25(9):1337–1340, 2019.

Supplementary Appendix

A Proof of Theorem 1

By the definition of the counterfactual conditional risk (Definition 2), we have
\[
\begin{aligned}
R_{\mathbf{x}}(D^*;\ell^{\mathrm{Add}})
&=\sum_{d\in\mathcal{D}}\sum_{\mathbf{y}\in\mathcal{Y}^K}\Big(\sum_{k\in\mathcal{D}}\omega_k(d,y_k,\mathbf{x})+\varpi(\mathbf{y},\mathbf{x})\Big)\Pr(D^*=d,\,Y(0)=y_0,\ldots,Y(K-1)=y_{K-1}\mid \mathbf{X}=\mathbf{x})\\
&=\sum_{d\in\mathcal{D}}\sum_{\mathbf{y}\in\mathcal{Y}^K}\sum_{k\in\mathcal{D}}\omega_k(d,y_k,\mathbf{x})\Pr(D^*=d,\,Y(0)=y_0,\ldots,Y(K-1)=y_{K-1}\mid \mathbf{X}=\mathbf{x})\\
&\quad+\sum_{d\in\mathcal{D}}\sum_{\mathbf{y}\in\mathcal{Y}^K}\varpi(\mathbf{y},\mathbf{x})\Pr(D^*=d,\,Y(0)=y_0,\ldots,Y(K-1)=y_{K-1}\mid \mathbf{X}=\mathbf{x}),
\end{aligned}
\tag{A.1}
\]
where the first equality follows from Definition 3. We now consider each of the two summation terms in Equation (A.1). For notational convenience, we write y₋k = (y_0, …, y_{k−1}, y_{k+1}, …, y_{K−1}) for any k ∈ D. We can rewrite the first term as
\[
\begin{aligned}
&\sum_{d\in\mathcal{D}}\sum_{k\in\mathcal{D}}\sum_{y_k\in\mathcal{Y}}\omega_k(d,y_k,\mathbf{x})\sum_{\mathbf{y}_{-k}\in\mathcal{Y}^{K-1}}\Pr(D^*=d,\,Y(0)=y_0,\ldots,Y(K-1)=y_{K-1}\mid \mathbf{X}=\mathbf{x})\\
&=\sum_{d\in\mathcal{D}}\sum_{k\in\mathcal{D}}\sum_{y\in\mathcal{Y}}\omega_k(d,y,\mathbf{x})\Pr(D^*=d,\,Y(k)=y\mid \mathbf{X}=\mathbf{x})\\
&=\sum_{d\in\mathcal{D}}\sum_{k\in\mathcal{D}}\sum_{y\in\mathcal{Y}}\omega_k(d,y,\mathbf{x})\Pr(D^*=d,\,Y=y\mid D=k,\mathbf{X}=\mathbf{x}),
\end{aligned}
\]
where the last equality follows from Assumptions 2 and 3. We next consider the second summation term in Equation (A.1):
\[
\sum_{d\in\mathcal{D}}\sum_{\mathbf{y}\in\mathcal{Y}^K}\varpi(\mathbf{y},\mathbf{x})\Pr(D^*=d,\,Y(0)=y_0,\ldots,Y(K-1)=y_{K-1}\mid \mathbf{X}=\mathbf{x})
=\sum_{\mathbf{y}\in\mathcal{Y}^K}\varpi(\mathbf{y},\mathbf{x})\Pr(Y(0)=y_0,\ldots,Y(K-1)=y_{K-1}\mid \mathbf{X}=\mathbf{x}).
\]
This completes the proof. □
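The key marginalisation step of the proof — summing the joint over y₋k to obtain Pr(D∗ = d, Y(k) = y_k) — can be verified numerically. A sketch with an arbitrary joint law (K = 3 decisions, binary outcomes; all numeric values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 3, 2                      # decisions D = {0, 1, 2}, outcomes Y = {0, 1}

# Joint law p[d, y0, ..., y_{K-1}] = Pr(D* = d, Y(0) = y0, ..., Y(K-1) = y_{K-1}).
p = rng.dirichlet(np.ones(K * M ** K)).reshape((K,) + (M,) * K)

# First summation term of (A.1), computed from the full joint ...
omega = rng.normal(size=(K, K, M))   # omega[k, d, y] = omega_k(d, y)
term_joint = 0.0
for d in range(K):
    for y in np.ndindex(*(M,) * K):
        term_joint += sum(omega[k, d, y[k]] for k in range(K)) * p[(d,) + y]

# ... equals the same sum written with the marginals Pr(D* = d, Y(k) = y_k).
term_marg = 0.0
for k in range(K):
    axes = tuple(1 + j for j in range(K) if j != k)   # sum out y_{-k}
    marg = p.sum(axis=axes)                           # marg[d, y_k]
    term_marg += sum(omega[k, d, y] * marg[d, y]
                     for d in range(K) for y in range(M))

print(np.isclose(term_joint, term_marg))   # True
```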
B Proof of Theorem 2

Theorem 1 immediately establishes sufficiency by Definition 4, as the counterfactual risk is a function only of the observable marginals (up to an unknown additive constant). We now prove the necessity; that is, we show that if the counterfactual risk is identifiable up to an unknown additive constant, then the corresponding loss function must take the additive form given in Definition 3.

By Lemma 2, the counterfactual risk is identifiable if and only if the conditional counterfactual risk is identifiable. It therefore suffices to show that if the conditional counterfactual risk is identifiable up to an unknown additive constant depending only on covariates, then the corresponding loss must be additive. We begin by fixing X = x. Let p(x) := (p_1(x), …, p_N(x))⊤ ∈ ∆_{N−1} denote the (conditional) probability measure over {D∗, Y(0), …, Y(K−1)} given X = x, where N = K × M^K and ∆_{N−1} is the probability simplex defined as ∆_{N−1} = {p ∈ ℝ^N : ∑_{i=1}^N p_i = 1 and p_i ≥ 0 for
i = 1, …, N}. Thus, each element of p(x) is indexed by i = i(d, y) and equals p_i = Pr(D∗ = d, Y(0) = y_0, …, Y(K−1) = y_{K−1} | X = x). Then, by Definition 2, we can write the conditional counterfactual risk as the following linear function of p(x):

R_x(D∗; ℓ) = ℓ⊤ p(x),

where ℓ = (ℓ_1, …, ℓ_N)⊤ with ℓ_i = ℓ(d, y, x) for i = i(d, y). By Definition 4 and the subsequent discussion in Section 3.4, an identifiable risk can be written entirely as a function of the observable marginals Pr(D∗ = d, Y(k) = y_k | X = x). Since we allow the risk to be identifiable up to a constant, it can also depend on the joint distribution of potential outcomes that does not involve the decision D∗, i.e., Pr(Y(D) = y | X = x). Let Q : ∆_{N−1} → [0, 1]^L be the observational mapping from Definition 4, whose output Q(p(x)) = q(x) is the L-dimensional stacked vector of the observed distributions {Pr(D∗ = d, Y(k) = y_k | X = x)}_{k∈D, d∈D, y_k∈Y} and the unobserved one {Pr(Y(D) = y | X = x)}_{y∈Y^K}, with L = K² × M + M^K. By Lemma 3(b), we know that Q is surjective onto the space of marginals and

Q(p(x)) = C p(x) = q(p(x)),

for every p(x) ∈ ∆_{N−1} and corresponding marginal q(p(x)), where C is an L × N matrix of zeros and ones. Its kernel satisfies ker(C) = {v ∈ ℝ^N : Cv = 0_L, 1_N⊤ v = 0}, where 0_L and 1_N are an L-dimensional vector of zeros and an N-dimensional vector of ones, respectively. Because the conditional risk R_x(D∗; ℓ) is identifiable up to an additive constant, Definition 4 combined with Lemma 1 guarantees the existence of a map f such that

R_x(D∗; ℓ) = f(q(p(x))),   (B.1)

for all p(x) ∈ ∆_{N−1}. Equation (B.1) differs from our discussion in Section 3.4, in which identifiability is defined with respect to the conditional probability measures over {D, D∗, Y(0), …, Y(K−1)} given X = x that satisfy Assumptions 1–3. In contrast, Equation (B.1) considers all conditional probability measures over {D∗, Y(0), …, Y(K−1)} given X = x. This difference does not affect our conclusion for two reasons. First, the latter distribution fully characterizes the conditional counterfactual risk R_x(D∗; ℓ).
Second, Assumptions 1–3 do not place any restriction on the distribution of {D∗, Y(0), …, Y(K−1)} given X = x. Thus, identifiability must hold uniformly over every possible distribution of {D∗, Y(0), …, Y(K−1)} given X = x, without any restriction, and Equation (B.1) holds for every p(x) ∈ ∆_{N−1}. Finally, Lemma 4 implies that there exists a vector w ∈ ℝ^L such that

R_x(D∗; ℓ) = w⊤ q(p(x)) = ∑_{d∈D} ∑_{k∈D} ∑_{y∈Y} w_k(d, y, x) Pr(D∗ = d, Y(k) = y | X = x) + ∑_{y∈Y^K} ϖ(y, x) Pr(Y(D) = y | X = x),

where w_k(d, y, x) and ϖ(y, x) correspond to different entries of w. We must show that the additive counterfactual loss given in Definition 3 is the only representation that leads to this conditional counterfactual risk. Assume that there exists another loss-function representation ℓ̃ such that R_x(D∗; ℓ) = ℓ⊤ p(x) = ℓ̃⊤ p(x). Since this relationship must hold for any p(x) ∈ ∆_{N−1}, we can choose p(x) = e(i) ∈ ℝ^N, the standard basis vector whose entries are all zero except for the ith entry, which equals one. This yields ℓ_i = ℓ̃_i for all i ∈ {1, …, N} and thus ℓ = ℓ̃. This completes the proof. □

C Proof of Corollary 2

The result follows immediately from the proof of Theorem 2 by restricting q(p(x))
to the probabilities {Pr(D∗ = d, Y(k) = y_k | X = x)}_{d∈D, k∈D, y_k∈Y}, while omitting the other terms involving multiple potential outcomes. Then, instead of Lemma 3(b), we apply Lemma 3(a). □

D Proof of Proposition 1

We first show the existence of such a standard loss. When D = {0, 1}, Theorem 1 implies
\[
\begin{aligned}
R_{\mathbf{x}}(D^*;\ell^{\mathrm{Add}})
&=\sum_{d\in\{0,1\}}\sum_{k\in\{0,1\}}\sum_{y\in\mathcal{Y}}\omega_k(d,y,\mathbf{x})\Pr(D^*=d,\,Y(k)=y\mid \mathbf{X}=\mathbf{x})+C(\mathbf{x})\\
&=\sum_{y\in\mathcal{Y}}\big\{\omega_0(0,y,\mathbf{x})\Pr(D^*=0,\,Y(0)=y\mid \mathbf{X}=\mathbf{x})+\omega_1(1,y,\mathbf{x})\Pr(D^*=1,\,Y(1)=y\mid \mathbf{X}=\mathbf{x})\\
&\qquad+\omega_0(1,y,\mathbf{x})\Pr(D^*=1,\,Y(0)=y\mid \mathbf{X}=\mathbf{x})+\omega_1(0,y,\mathbf{x})\Pr(D^*=0,\,Y(1)=y\mid \mathbf{X}=\mathbf{x})\big\}+C(\mathbf{x}),
\end{aligned}
\]
where C(x) = ∑_{y∈Y^K} ϖ(y, x) Pr(Y(D) = y | X = x). Observe that, for d = 0, 1,
\[
\Pr(D^*=d,\,Y(1-d)=y\mid \mathbf{X}=\mathbf{x})=\Pr(Y(1-d)=y\mid \mathbf{X}=\mathbf{x})-\Pr(D^*=1-d,\,Y(1-d)=y\mid \mathbf{X}=\mathbf{x}).
\]
Substituting this into the above equation yields
\[
\begin{aligned}
R_{\mathbf{x}}(D^*;\ell^{\mathrm{Add}})
&=\sum_{y\in\mathcal{Y}}\big\{\omega_0(0,y,\mathbf{x})\Pr(D^*=0,\,Y(0)=y\mid \mathbf{X}=\mathbf{x})+\omega_1(1,y,\mathbf{x})\Pr(D^*=1,\,Y(1)=y\mid \mathbf{X}=\mathbf{x})\\
&\qquad+\omega_0(1,y,\mathbf{x})\big(\Pr(Y(0)=y\mid \mathbf{X}=\mathbf{x})-\Pr(D^*=0,\,Y(0)=y\mid \mathbf{X}=\mathbf{x})\big)\\
&\qquad+\omega_1(0,y,\mathbf{x})\big(\Pr(Y(1)=y\mid \mathbf{X}=\mathbf{x})-\Pr(D^*=1,\,Y(1)=y\mid \mathbf{X}=\mathbf{x})\big)\big\}+C(\mathbf{x})\\
&=\sum_{y\in\mathcal{Y}}\big(\omega_0(0,y,\mathbf{x})-\omega_0(1,y,\mathbf{x})\big)\Pr(D^*=0,\,Y(0)=y\mid \mathbf{X}=\mathbf{x})\\
&\qquad+\big(\omega_1(1,y,\mathbf{x})-\omega_1(0,y,\mathbf{x})\big)\Pr(D^*=1,\,Y(1)=y\mid \mathbf{X}=\mathbf{x})+C'(\mathbf{x})\\
&=\sum_{d\in\mathcal{D}}\sum_{y\in\mathcal{Y}}\ell^{\mathrm{Std}}(d,y,\mathbf{x})\Pr(D^*=d,\,Y(d)=y\mid \mathbf{X}=\mathbf{x})+C'(\mathbf{x})
=R_{\mathbf{x}}(D^*;\ell^{\mathrm{Std}})+C'(\mathbf{x}),
\end{aligned}
\]
where ℓStd(d, y, x) = ω_d(d, y, x) − ω_d(1−d, y, x) and
\[
C'(\mathbf{x})=\sum_{y\in\mathcal{Y}}\omega_0(1,y,\mathbf{x})\Pr(Y(0)=y\mid \mathbf{X}=\mathbf{x})+\omega_1(0,y,\mathbf{x})\Pr(Y(1)=y\mid \mathbf{X}=\mathbf{x})+\sum_{\mathbf{y}\in\mathcal{Y}^K}\varpi(\mathbf{y},\mathbf{x})\Pr(Y(D)=\mathbf{y}\mid \mathbf{X}=\mathbf{x}).
\]
Thus, R_x(D∗; ℓAdd) − R_x(D∗; ℓStd) = C'(x), which does not depend on D∗. Taking the expectation of both sides yields the existence of the desired standard loss. This completes the proof of the first part of the proposition.

Next, we prove uniqueness. Assume there exists another standard loss ℓ̃Std such that R(D∗; ℓAdd) − R(D∗; ℓ̃Std) = c̃, where c̃ is a constant. Then this assumption implies that the conditional risk difference is given by

R_x(D∗; ℓStd) − R_x(D∗; ℓ̃Std) = C̃(x),   (D.1)

where C̃(x) is a function satisfying c̃ = E[C̃(X)] (otherwise, for any D∗ independent of X, the resulting risk difference would depend on D∗, contradicting our assumption).
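The invariance of the gap Rx(D∗; ℓAdd) − Rx(D∗; ℓStd) derived above can be spot-checked numerically: the gap should not change no matter how D∗ is generated. A sketch with arbitrary illustrative weights and an arbitrary joint law of (Y(0), Y(1)) (the rule below is even allowed to depend on both potential outcomes, which only strengthens the check):

```python
import numpy as np

rng = np.random.default_rng(2)

omega = {k: rng.normal(size=(2, 2)) for k in (0, 1)}      # omega[k][d, y]
l_std = np.array([[omega[d][d, y] - omega[d][1 - d, y] for y in (0, 1)]
                  for d in (0, 1)])
py = rng.dirichlet(np.ones(4)).reshape(2, 2)              # Pr(Y(0)=y0, Y(1)=y1)

def gap(rule):
    """Risk difference R(D*; lAdd) - R(D*; lStd) for a stochastic rule
    rule[y0, y1] = Pr(D* = 1 | Y(0) = y0, Y(1) = y1)."""
    r_add = r_std = 0.0
    for y0 in (0, 1):
        for y1 in (0, 1):
            for d in (0, 1):
                pr = py[y0, y1] * (rule[y0, y1] if d else 1 - rule[y0, y1])
                r_add += (omega[0][d, y0] + omega[1][d, y1]) * pr  # varpi = 0
                r_std += l_std[d, (y0, y1)[d]] * pr                # realized Y(d)
    return r_add - r_std

gaps = [gap(rng.uniform(size=(2, 2))) for _ in range(5)]
print(np.allclose(gaps, gaps[0]))   # True: the gap never depends on the rule
```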
By choosing D∗ ≡ d, we obtain
\[
R_{\mathbf{x}}(d;\ell^{\mathrm{Std}})-R_{\mathbf{x}}(d;\tilde{\ell}^{\mathrm{Std}})
=\sum_{y\in\mathcal{Y}}\big(\ell^{\mathrm{Std}}(d,y,\mathbf{x})-\tilde{\ell}^{\mathrm{Std}}(d,y,\mathbf{x})\big)\Pr(Y(d)=y\mid \mathbf{X}=\mathbf{x})
=\mathbb{E}\big[\ell^{\mathrm{Std}}(d,Y(d),\mathbf{X})-\tilde{\ell}^{\mathrm{Std}}(d,Y(d),\mathbf{X})\mid \mathbf{X}=\mathbf{x}\big].
\]
Since this holds for d = 0, 1, and since the risk difference is invariant to the decision, we obtain the equality
\[
\mathbb{E}\big[\ell^{\mathrm{Std}}(0,Y(0),\mathbf{X})-\tilde{\ell}^{\mathrm{Std}}(0,Y(0),\mathbf{X})\mid \mathbf{X}=\mathbf{x}\big]
=\mathbb{E}\big[\ell^{\mathrm{Std}}(1,Y(1),\mathbf{X})-\tilde{\ell}^{\mathrm{Std}}(1,Y(1),\mathbf{X})\mid \mathbf{X}=\mathbf{x}\big]
\]
for all x. The left-hand side depends on the distribution of Y(0) given X = x, while the right-hand side depends on the distribution of Y(1) given X = x. Since there is no restriction on these distributions, the above equality must hold for every choice of the two conditional distributions. This is only possible if the following equality holds:

ℓStd(d, Y(d), x) − ℓ̃Std(d, Y(d), x) = C(d, x).

By Equation (D.1), we must have C(d, x) ≡ C̃(x) for d = 0, 1; that is, the difference in losses does not depend on the decision. Hence, we have shown ℓStd(d, y, x) = ℓ̃Std(d, y, x) + C̃(x), implying the uniqueness of ℓStd up to an additive constant that depends only on covariates.

Finally, the second part of the proposition, concerning the surjectivity of the mapping, follows immediately because any standard loss is a special case of an additive counterfactual loss. For any standard loss ℓStd(d, y_d, x) and λ ∈ ℝ, define the weights

ω_d(d, y, x) = (1 + λ) ℓStd(d, y, x),  ω_d(1−d, y, x) = λ ℓStd(d, y, x),

for d = 0, 1 and
y ∈ Y. Then, by the first part of this proposition, these weights correspond to an additive counterfactual loss with the same treatment recommendations. Since λ is arbitrary, there exist infinitely many additive counterfactual losses that map to the same standard loss. □

E Proof of Proposition 2

Let ℓAdd(d; y) denote an additive counterfactual loss defined in Definition 3. Without loss of generality, let ϖ ≡ 0, as this term does not depend on the decision. Suppose that there exists a standard loss ℓStd(d; y_d) such that the risk difference R(D∗; ℓAdd) − R(D∗; ℓStd) does not depend on D∗. This implies that, for fixed X = x, there exists C(x) such that the following conditional risk difference does not depend on D∗:

∆_x(D∗) = R_x(D∗; ℓAdd) − R_x(D∗; ℓStd) = C(x).   (E.1)

Otherwise, we could choose D∗ to be independent of X, making the expectation of Equation (E.1) dependent on D∗ and leading to a contradiction. Because this conditional risk difference must remain fixed no matter how we choose D∗, it must stay identical across all deterministic decision rules. Consider the constant decision rules π_j = π_j(X) = j for j ∈ D, and let
\[
\Delta_{\mathbf{x}}(\pi_j)=\sum_{k\in\mathcal{D}}\sum_{y_k\in\mathcal{Y}}\omega_k(j,y_k,\mathbf{x})\Pr(Y(k)=y_k\mid \mathbf{X}=\mathbf{x})-\sum_{y_j\in\mathcal{Y}}\tilde{\omega}(j,y_j,\mathbf{x})\Pr(Y(j)=y_j\mid \mathbf{X}=\mathbf{x})
=\sum_{k\in\mathcal{D}}\mathbb{E}\big[\omega_k(j,Y(k),\mathbf{X})-\tilde{\omega}(j,Y(j),\mathbf{X})\mid \mathbf{X}=\mathbf{x}\big],
\]
where ω̃(j, y_j, x) is the weight function for the standard loss. Note that ∆_x(π_j) = C(x) by Equation (E.1) for every j ∈ D. Let j, j′ ∈ D with j ≠ j′. Then ∆_x(π_j) − ∆_x(π_{j′}) = C(x) − C(x) = 0, or equivalently,
\[
\sum_{k\in\mathcal{D}}\mathbb{E}\big[\omega_k(j,Y(k),\mathbf{X})-\omega_k(j',Y(k),\mathbf{X})\mid \mathbf{X}=\mathbf{x}\big]
=\mathbb{E}\big[\tilde{\omega}(j,Y(j),\mathbf{X})-\tilde{\omega}(j',Y(j'),\mathbf{X})\mid \mathbf{X}=\mathbf{x}\big].
\]
Since the potential outcomes {Y(0), …, Y(K−1)} can have an arbitrary conditional distribution given X = x, the above equality must hold for every choice of this distribution. Since the right-hand side depends at most on the conditional distribution of {Y(j), Y(j′)}, so must the left-hand side. In particular, for every k ∉ {j, j′} we must have

ω_k(j, y_k, x) = ω_k(j′, y_k, x), for all y_k ∈ Y and x ∈ X.

Note that such a k exists as long as K ≥ 3.
Thus, ω_k(d, y_k, x) cannot depend on the decision d whenever d ≠ k, implying that there exist functions c_k such that ω_k(d, y, x) = c_k(y, x) for d ≠ k. However, this contradicts the assumption that ℓAdd is an additive counterfactual loss with at least one counterfactual weight depending on the decision and the potential outcome. □

F Proof of Corollary 3

By Theorem 2, the conditional counterfactual risk can be written as
\[
R_{\mathbf{x}}(D^*)=\sum_{d\in\mathcal{D}}\sum_{k\in\mathcal{D}}\sum_{y=0}^{1}\omega_k(d,y,\mathbf{x})\Pr(D^*=d,\,Y(k)=y\mid \mathbf{X}=\mathbf{x})+C(\mathbf{x}),
\tag{F.1}
\]
where C(x) = ∑_{y∈{0,1}^K} ϖ(y, x) Pr(Y(D) = y | X = x). First note that

Pr(D∗ = d, Y(k) = 0 | X = x) = Pr(D∗ = d | X = x) − Pr(D∗ = d, Y(k) = 1 | X = x).

Substituting this expression into the first term of Equation (F.1) yields
\[
\begin{aligned}
&\sum_{d\in\mathcal{D}}\sum_{k\in\mathcal{D}}\sum_{y=0}^{1}\omega_k(d,y,\mathbf{x})\Pr(D^*=d,\,Y(k)=y\mid \mathbf{X}=\mathbf{x})\\
&=\sum_{d\in\mathcal{D}}\sum_{k\in\mathcal{D}}\omega_k(d,0,\mathbf{x})\big\{\Pr(D^*=d\mid \mathbf{X}=\mathbf{x})-\Pr(D^*=d,\,Y(k)=1\mid \mathbf{X}=\mathbf{x})\big\}+\omega_k(d,1,\mathbf{x})\Pr(D^*=d,\,Y(k)=1\mid \mathbf{X}=\mathbf{x})\\
&=\sum_{d\in\mathcal{D}}\xi(d,\mathbf{x})\Pr(D^*=d\mid \mathbf{X}=\mathbf{x})+\sum_{d\in\mathcal{D}}\sum_{k\in\mathcal{D}}\zeta_k(d,\mathbf{x})\Pr(D^*=d,\,Y(k)=1\mid \mathbf{X}=\mathbf{x})\\
&=\sum_{d\in\mathcal{D}}\xi(d,\mathbf{x})\Pr(D^*=d\mid \mathbf{X}=\mathbf{x})+\sum_{d\in\mathcal{D}}\sum_{k\neq d}\zeta_k(d,\mathbf{x})\Pr(D^*=d,\,Y(k)=1\mid \mathbf{X}=\mathbf{x})+\sum_{d\in\mathcal{D}}\zeta_d(d,\mathbf{x})\Pr(D^*=d,\,Y(d)=1\mid \mathbf{X}=\mathbf{x}),
\end{aligned}
\]
where ζ_k(d, x) = ω_k(d, 1, x) − ω_k(d, 0, x) and ξ(d, x) = ∑_{k∈D} ω_k(d, 0, x). □

G Lemmas and their Proofs

G.1 Lemma 1

Lemma 1 Let P denote the set of all joint probability distributions under consideration, and let Q be the set of all observable distributions. Let Q : P → Q denote a
surjective mapping that associates each joint distribution with an observable distribution. Let θ(P) denote the (causal) estimand of interest under distribution P ∈ P. For a fixed observable distribution Q_0 ∈ Q, define S_0 = {P ∈ P : Q(P) = Q_0}. Then, the following statements are equivalent:

(a) For every Q_0 ∈ Q, there exists a constant θ_0 such that θ(P) = θ_0 for all P ∈ S_0.

(b) There exists a function f such that θ(P) = f(Q(P)) for every P ∈ P.

Proof: We first prove that (b) implies (a). Fix any Q_0 ∈ Q and let P ∈ S_0, so that Q(P) = Q_0. Note that S_0 is non-empty because Q is surjective. Then, by (b), there exists f such that θ(P) = f(Q(P)) = f(Q_0) =: θ_0 for every P ∈ S_0. Because Q_0 was arbitrary, the claim follows.

We next prove that (a) implies (b). Let Q_0 ∈ Q. Surjectivity of Q ensures that S_0 is non-empty. By (a), there is a constant θ_0 such that θ(P) = θ_0 for all P ∈ S_0. Define f : Q → ℝ by setting f(Q_0) := θ_0 for every such Q_0 ∈ Q. Now, let P ∈ P be an arbitrary probability measure and let Q_0 = Q(P). Then, θ(P) = θ_0 = f(Q_0) = f(Q(P)). This proves the claim. □

G.2 Lemma 2

Lemma 2 Given a counterfactual loss ℓ defined in Definition 1, the counterfactual risk R(D∗, ℓ) defined in Definition 2 is identifiable with respect to P(D∗, D, Y, X) if and only if the conditional counterfactual risk R_X(D∗, ℓ) (also defined in Definition 2) is identifiable with respect to P(D∗, D, Y | X).

Proof: Let Q : P → Q be the mapping from P ∈ P to the distribution Q(P) = P(D∗, D, Y, X). Further, let Q_X : P → Q_X be the mapping from P ∈ P to the conditional distribution Q_X(P) = P(D∗, D, Y | X). We use P_X to denote the marginal probability distribution of X. Note that for any P ∈ P we can write

P(D∗, D, Y(D), X) = P(D∗, D, Y, Y(D), X) = P(Y(D) | D∗, D, Y, X) Q(P) = P(Y(D) | D∗, D, Y, X) Q_X(P) P_X(X),

where the first equality follows from Assumption 2. To ease notation, we write R = R(D∗, ℓ) and R_x = R_x(D∗, ℓ).

We first assume that R is identifiable and use proof by contradiction. Suppose that R_x is not identifiable.
Then, by Definition 4, there exist a conditional observable distribution Q_0(X) ∈ Q_X and distributions P_1, P_2 ∈ P such that Q_X(P_1) = Q_X(P_2) = Q_0(X) but ∆(x) = R_x(P_1) − R_x(P_2) ≠ 0 on some measurable set X̃ ⊆ X. Let P̃_X be a probability measure such that

∫_{x∈X} ∆(x) dP̃_X(x) ≠ 0.

We can always construct such a probability measure, for example, by concentrating the probability mass on a set X̃ where ∆(x) > 0 for all x ∈ X̃. Next, define another probability measure P̃_i for i = 1, 2:

P̃_i(D∗, D, Y(D), X) = P_i(Y(D) | D∗, D, Y, X) Q_0(X) P̃_X(X),

where each P̃_i ∈ P satisfies Assumptions 1–3, as these assumptions do not restrict the marginal distribution of X. Since the marginal distribution of X does not affect the conditional risk, we have R_x(P̃_i) = R_x(P_i) for i = 1, 2, and

R_x(P̃_1) − R_x(P̃_2) = R_x(P_1) − R_x(P_2) = ∆(x).

Further, by construction, Q(P̃_1) = Q_0(X) P̃_X(X) = Q(P̃_2) ∈ Q. Since R is identifiable, Definition 4 implies that R(P̃_1) = R(P̃_2). However,

R(P̃_1) − R(P̃_2) = ∫_{x∈X} R_x(P̃_1) − R_x(P̃_2) dP̃_X(x) = ∫_{x∈X} ∆(x) dP̃_X(x) ≠ 0,

which is a contradiction. Thus, R_x must be identifiable.

Assume next that R_X is identifiable. Then, by Lemma 1, there exists a function g such that R_X(P) = g(Q_X(P)).
Define a function f : Q → ℝ such that

f(Q(P)) = ∫_{x∈X} g(Q_X(P)) dP_X(x).

Note that we can define f in this way because both Q_X(P) and P_X are obtainable from Q(P). Hence R(P) = f(Q(P)), and so R is identifiable by Lemma 1. □

G.3 Lemma 3

Lemma 3 Let D, Y_0, …, Y_{K−1} be discrete random variables such that D ∈ D = {0, 1, …, K−1} and Y_0, …, Y_{K−1} ∈ Y = {0, 1, …, M−1}. Let ∆_{N−1} = {p ∈ ℝ^N | ∑_{i=1}^N p_i = 1 and p_i ≥ 0 for i = 1, …, N} denote the probability simplex in ℝ^N, where N = K × M^K. Then, for every probability distribution P(D, Y_0, …, Y_{K−1}), there exists p ∈ ∆_{N−1} with p_i = Pr(D = d, Y_0 = y_0, …, Y_{K−1} = y_{K−1}) for each i = i(d, y) ∈ {1, …, N}. Consider the following two cases, which yield the same conclusion.

(a) Let q = q(p) ∈ [0, 1]^L be a vector of the probabilities {Pr(D = d, Y_k = y_k)}_{d∈D, k∈D, y_k∈Y}, where q_j = Pr(D = d, Y_k = y_k) for j = j(d, k, y_k) ∈ {1, …, L} and L = K² × M.

(b) Let q = q(p) ∈ [0, 1]^L be the extended vector of probabilities that also includes the joint distribution of Y_0, …, Y_{K−1}, i.e., the probabilities {Pr(D = d, Y_k = y_k)}_{d∈D, k∈D, y_k∈Y} and {Pr(Y_0 = y_0, …, Y_{K−1} = y_{K−1})}_{y∈Y^K}, with L = K² × M + M^K.

Then, in each case, there exists an L × N matrix C of zeros and ones such that q = Cp for all p ∈ ∆_{N−1} and
\[
\ker(C)=\Big\{\mathbf{v}\in\mathbb{R}^N:\sum_{\mathbf{y}_{-k}\in\mathcal{Y}^{K-1}}v_{d,\mathbf{y}}=0\ \text{for all}\ d\in\mathcal{D},\,k\in\mathcal{D},\,y_k\in\mathcal{Y};\ \mathbf{1}_N^\top\mathbf{v}=0\ \text{and}\ C\mathbf{v}=\mathbf{0}_L\Big\},
\]
where y₋k = (y_0, …, y_{k−1}, y_{k+1}, …, y_{K−1}) and 1_N represents an N-dimensional vector of ones. Furthermore, C is a surjective map from the space of joint distributions P(D, Y_0, …, Y_{K−1}) onto the space of the respective marginal distributions.

We now prove each case in turn.

Proof: [Case (a)] For every joint probability distribution P(D, Y_0, …, Y_{K−1}), we can express the marginal distribution P(D, Y_k) as
\[
\Pr(D=d,\,Y_k=y_k)=\sum_{d'\in\mathcal{D}}\sum_{\mathbf{y}'\in\mathcal{Y}^K}\mathbf{1}\{d'=d,\,y'_k=y_k\}\Pr(D=d',\,Y_0=y'_0,\ldots,Y_{K-1}=y'_{K-1}).
\tag{G.1}
\]
Define a matrix C ∈ {0,1}^{L×N} whose (j, i) entry is given by

C_{j(d,k,y_k), i(d′,y′)} = 1{d′ = d, y′_k = y_k}.

Then, Equation (G.1) can be written as

q_{j(d,k,y_k)} = ∑_{d′∈D} ∑_{y′∈Y^K} C_{j(d,k,y_k), i(d′,y′)} p_{i(d′,y′)},

implying q = Cp.
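The construction of C and the identity q = Cp can be reproduced for the smallest case K = M = 2. The index orderings below are one concrete choice of the maps i(d, y) and j(d, k, y_k), not necessarily the paper's:

```python
import numpy as np
from itertools import product

K, M = 2, 2
N, L = K * M ** K, K * K * M                         # N = 8, L = 8 here

rows = list(product(range(K), range(K), range(M)))   # j indexed by (d, k, y_k)
cols = list(product(range(K), *[range(M)] * K))      # i indexed by (d, y_0, y_1)

# C[j, i] = 1{d' = d, y'_k = y_k}, as in Equation (G.1).
C = np.zeros((L, N))
for j, (d, k, yk) in enumerate(rows):
    for i, cell in enumerate(cols):
        dprime, y = cell[0], cell[1:]
        C[j, i] = float(dprime == d and y[k] == yk)

rng = np.random.default_rng(4)
p = rng.dirichlet(np.ones(N))      # a joint law over (D, Y_0, Y_1)
q = C @ p                          # all marginals Pr(D = d, Y_k = y_k) at once

# Cross-check one entry against direct marginalisation of the joint.
pj = p.reshape(K, M, M)            # pj[d, y0, y1], same (C-order) indexing
print(np.isclose(q[rows.index((0, 1, 1))], pj[0, :, 1].sum()))   # True
```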
Since every marginal distribution has a corresponding joint probability distribution, C is a surjective map from the space of joint probability distributions onto the space of marginal distributions.

We proceed to examine the kernel of the matrix C. Let v = (v_{d,y}) ∈ ℝ^N. The definition of C implies that for all j = j(d, k, y_k) ∈ {1, …, L}, we have

(Cv)_{j(d,k,y_k)} = ∑_{d′∈D} ∑_{y′∈Y^K} 1{d′ = d, y′_k = y_k} v_{d′,y′} = ∑_{y₋k∈Y^{K−1}} v_{d,y}.

Therefore, Cv = 0_L if and only if ∑_{y₋k∈Y^{K−1}} v_{d,y} = 0 for all d ∈ D, k ∈ D, y_k ∈ Y. This proves the first part of the kernel property. The second part follows immediately by observing that, for any fixed k ∈ D,

1_N⊤ v = ∑_{d∈D} ∑_{y∈Y^K} v_{d,y} = ∑_{d∈D} ∑_{y_k∈Y} ∑_{y₋k∈Y^{K−1}} v_{d,y} = 0. □

Proof: [Case (b)] The existence and surjectivity of C follow from the same arguments as in the proof of Case (a). Thus, we only need
to verify the kernel property of C. We start by partitioning
\[
C=\begin{bmatrix} C_1 \\ C_2 \end{bmatrix},
\]
where C_1 is a K²M × N matrix of zeros and ones containing all the rows corresponding to the marginal distributions {Pr(D = d, Y_k = y_k)}_{d∈D, k∈D, y_k∈Y}, which are included in C by assumption. Let v ∈ ker(C). Then Cv = (C_1 v; C_2 v) = 0, implying C_1 v = 0. Thus v ∈ ker(C_1), and consequently ker(C) ⊆ ker(C_1). Since C_1 is the matrix described in Case (a), this completes the proof. □

G.4 Lemma 4

Lemma 4 Let ∆_{N−1} = {p ∈ ℝ^N | ∑_{i=1}^N p_i = 1 and p_i ≥ 0 for i = 1, …, N} denote the probability simplex in ℝ^N. Let C ∈ ℝ^{L×N} be a matrix such that

ker(C) = {v ∈ ℝ^N : 1_N⊤ v = 0 and Cv = 0_L}.

Set C(∆_{N−1}) = {q ∈ ℝ^L | ∃ p ∈ ∆_{N−1} : q = Cp}, the image of ∆_{N−1} under C. Let f_ℓ : C(∆_{N−1}) → ℝ be a function for which there exists ℓ ∈ ℝ^N such that for all q(p) ∈ C(∆_{N−1}) we have f_ℓ(q) = f_ℓ(Cp) = ℓ⊤p. Then f_ℓ is well defined and linear; that is, ℓ ∈ im(C⊤), and there exists w ∈ ℝ^L, depending only on C and ℓ, such that f_ℓ(q) = w⊤q.

Proof: Observe that if ℓ ∈ im(C⊤), then there exists w ∈ ℝ^L such that ℓ = C⊤w. Then, for any q = Cp, we have

f_ℓ(q) = ℓ⊤p = (C⊤w)⊤p = w⊤Cp = w⊤q.

Hence f_ℓ(q) = w⊤q.

Assume now that ℓ ∉ im(C⊤); we will show that this yields a contradiction. By standard linear algebra [Lax, 2007, Theorem 5, Chapter 3], im(C⊤) = ker(C)⊥. This implies that there exists v ∈ ker(C) \ {0} with ℓ⊤v ≠ 0. Let p_0 ∈ ∆̊_{N−1} be in the interior of the simplex, with strictly positive entries, and set q_0 = Cp_0. Define p = p_0 + αv with

0 < α < min_{i∈[N]} p_{0i} / max_{i∈[N]} |v_i|.

Such an α exists: by assumption, p_{0i} > 0 for all i, and v has at least one non-zero entry since v ≠ 0. Observe that

min_{i∈[N]} p_i ≥ min_{i∈[N]} p_{0i} − α max_{i∈[N]} |v_i| > 0,

and

∑_{i=1}^N p_i = ∑_{i=1}^N p_{0i} + α ∑_{i=1}^N v_i = ∑_{i=1}^N p_{0i} = 1,

where the second equality follows from ∑_{i=1}^N v_i = 0, since v ∈ ker(C). Thus p ∈ ∆_{N−1}. Because v ∈ ker(C),

Cp = C(p_0 + αv) = Cp_0 = q_0.

This implies f_ℓ(q_0) = f_ℓ(Cp) = f_ℓ(Cp_0) and

0 = f_ℓ(Cp) − f_ℓ(Cp_0) = ℓ⊤p − ℓ⊤p_0 = ℓ⊤(p − p_0) = α ℓ⊤v ≠ 0,

since ℓ⊤v ≠ 0 by assumption. This yields a contradiction.
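The first half of the proof — if ℓ = C⊤w then the linear risk ℓ⊤p is a function of q = Cp alone — can be illustrated with a generic 0/1 matrix C (a random matrix here, not the specific marginalisation matrix of Lemma 3):

```python
import numpy as np

rng = np.random.default_rng(5)
L, N = 6, 10
C = rng.integers(0, 2, size=(L, N)).astype(float)

# Put l in the row space of C: l = C^T w0 for some w0.
w0 = rng.normal(size=L)
l = C.T @ w0

# Recover a valid w by least squares; any solution of C^T w = l works.
w, *_ = np.linalg.lstsq(C.T, l, rcond=None)

# For every p on the simplex, l^T p is recoverable from q = C p alone.
for _ in range(3):
    p = rng.dirichlet(np.ones(N))
    assert np.isclose(l @ p, w @ (C @ p))
print("l^T p agrees with w^T q on every sampled p")
```

The design choice here is deliberate: least squares is just one way to pick a w with C⊤w = ℓ; the lemma only asserts existence, not uniqueness.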
□

H Derivation of Weights

We illustrate the linear system described in Equation 10 by considering the binary decision and binary outcome case, Y ∈ {0, 1}. This example corresponds to the identifiability conditions discussed in Section 3.6. Recall that we are trying to solve a set of linear equations

A_{df} w_{df} = ℓ,

where df ∈ {4, 6, 7} and
\[
A=\begin{pmatrix}
1&0&0&0&0&0&1&0&1&0&0&0\\
0&0&1&0&1&0&0&0&1&0&0&0\\
0&1&0&0&0&0&1&0&0&0&1&0\\
0&0&1&0&0&1&0&0&0&0&1&0\\
1&0&0&0&0&0&0&1&0&1&0&0\\
0&0&0&1&1&0&0&0&0&1&0&0\\
0&1&0&0&0&0&0&1&0&0&0&1\\
0&0&0&1&0&1&0&0&0&0&0&1
\end{pmatrix},
\quad
\mathbf{w}=\begin{pmatrix}
\omega_0(0,0)\\ \omega_0(0,1)\\ \omega_1(1,0)\\ \omega_1(1,1)\\ \omega_0(1,0)\\ \omega_0(1,1)\\ \omega_1(0,0)\\ \omega_1(0,1)\\ \varpi_{0,1}(0,0)\\ \varpi_{0,1}(0,1)\\ \varpi_{0,1}(1,0)\\ \varpi_{0,1}(1,1)
\end{pmatrix},
\quad
\boldsymbol{\ell}=\begin{pmatrix}
\ell(0;0,0)\\ \ell(1;0,0)\\ \ell(0;1,0)\\ \ell(1;1,0)\\ \ell(0;0,1)\\ \ell(1;0,1)\\ \ell(0;1,1)\\ \ell(1;1,1)
\end{pmatrix},
\]
with A_4 = A_{1:4}, w_4 = w_{1:4}, A_6 = A_{1:8}, w_6 = w_{1:8}, and A_7 = A, w_7 = w. Here, A_{1:k} denotes the first k columns of the matrix A, and w_{1:k} refers to the first k entries of the weight vector w. Note that the rank of A is seven. These three systems correspond to the
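The system above can be rebuilt programmatically from its defining relation ℓ(d; y₀, y₁) = ω₀(d, y₀) + ω₁(d, y₁) + ϖ₀,₁(y₀, y₁); the sketch below checks the ranks of the three nested systems (the column bookkeeping mirrors the ordering of w printed above and is an assumption of this sketch):

```python
import numpy as np
from itertools import product

# Column order of w: omega_0(0,.), omega_1(1,.), omega_0(1,.), omega_1(0,.),
# then varpi_{0,1}(y0, y1); row order of l: (d; y0, y1) with d varying fastest.
w_cols = ([('w0', 0, y) for y in (0, 1)] + [('w1', 1, y) for y in (0, 1)]
          + [('w0', 1, y) for y in (0, 1)] + [('w1', 0, y) for y in (0, 1)]
          + [('vp', y0, y1) for y0, y1 in product((0, 1), (0, 1))])
col = {key: i for i, key in enumerate(w_cols)}

A = np.zeros((8, 12))
for r, (y1, y0, d) in enumerate(product((0, 1), (0, 1), (0, 1))):
    # l(d; y0, y1) = omega_0(d, y0) + omega_1(d, y1) + varpi_{0,1}(y0, y1)
    for key in (('w0', d, y0), ('w1', d, y1), ('vp', y0, y1)):
        A[r, col[key]] = 1

# Ranks of the three nested systems A4, A6, A7 (first 4, 8, 12 columns).
print([int(np.linalg.matrix_rank(A[:, :k])) for k in (4, 8, 12)])   # [4, 6, 7]
```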
arXiv:2505.09043v1 [stat.ME] 14 May 2025

Exploratory Hierarchical Factor Analysis with an Application to Psychological Measurement

Jiawei Qiao, Yunxiao Chen and Zhiliang Ying

Abstract

Hierarchical factor models, which include the bifactor model as a special case, are useful in social and behavioural sciences for measuring hierarchically structured constructs. Specifying a hierarchical factor model involves imposing hierarchically structured zero constraints on a factor loading matrix, which is a demanding task that can result in misspecification. Therefore, an exploratory analysis is often needed to learn the hierarchical factor structure from data. Unfortunately, we lack an identifiability theory for the learnability of this hierarchical structure and a computationally efficient method with provable performance. The method of Schmid–Leiman transformation, which is often regarded as the default method for exploratory hierarchical factor analysis, is flawed and likely to fail. The contribution of this paper is three-fold. First, an identifiability result is established for general hierarchical factor models, which shows that the hierarchical factor structure is learnable under mild regularity conditions. Second, a computationally efficient divide-and-conquer approach is proposed for learning the hierarchical factor structure. This approach has two building blocks – (1) a constraint-based continuous optimisation algorithm and (2) a search algorithm based on an information criterion – that together explore the structure of factors nested within a given factor. Finally, asymptotic theory is established for the proposed method, showing that it can consistently recover the true hierarchical factor structure as the sample size grows to infinity. The power of the proposed method is shown via simulation studies and a real data application to a personality test.
The computation code for the proposed method is publicly available at https://anonymous.4open.science/r/Exact-Exploratory-Hierarchical-Factor-Analysis-F850.

Keywords: Hierarchical factor model, augmented Lagrangian method, exploratory bi-factor model, exploratory hierarchical factor analysis, Schmid–Leiman transformation

1 Introduction

Many constructs in social and behavioural sciences are conceptualised to be hierarchically structured, such as psychological traits (e.g., Carroll, 1993; DeYoung, 2006), economic factors (e.g., Kose et al., 2008; Moench et al., 2013), health outcome measures (e.g., Chen et al., 2006; Reise et al., 2007), and constructs in marketing research (e.g., Sharma et al., 2022). Hierarchical factor models (Brunner et al., 2012; Schmid and Leiman, 1957; Thomson, 1939; Yung et al., 1999), which include the bi-factor model (Holzinger and Swineford, 1937) as a special case with two factor layers, are commonly used to measure hierarchically structured constructs. In these models, hierarchically structured zero constraints are imposed on factor loadings to define the hierarchical factors. When the hierarchical factor structure is known or hypothesised a priori, the statistical inference of a hierarchical factor model only requires standard confirmatory factor analysis techniques (Brunner et al., 2012). However, in many real-world scenarios, little prior information about the hierarchical factor structure is available, so we need to learn this structure from data. This analysis is referred to as exploratory hierarchical factor analysis.

Exploratory hierarchical factor analysis faces theoretical and computational challenges. First, we lack a theoretical understanding of its identifiability, i.e., the conditions under which the hierarchical factor structure is uniquely determined by the distribution of manifest variables.
https://arxiv.org/abs/2505.09043v1
This is an important question, as learning a hierarchical factor structure is only sensible when it is identifiable. Although identifiability theory has been established for exploratory bi-factor analysis in Qiao et al. (2025), to our knowledge, no results are available under the general hierarchical factor model. Second, learning the hierarchical factor structure is a model selection problem, which is computationally challenging due to its combinatorial nature. For a moderately large J, it is computationally infeasible to compare all the possible hierarchical factor structures using relative fit measures. However, it is worth noting that a computationally efficient method is available and commonly used for this problem, known as the Schmid–Leiman transformation (Schmid and Leiman, 1957). This method involves constructing a constrained higher-order factor model by iteratively applying an exploratory factor analysis method with oblique rotation and, further, performing orthogonal transformations to turn the higher-order factor model solution into a hierarchical factor model solution. However, as shown in Yung et al. (1999), the Schmid–Leiman transformation imposes unnecessary proportionality constraints on the factor loadings. As a result, it may not work well for more general hierarchical factor models. Jennrich and Bentler (2011) gave an example in which the Schmid–Leiman transformation fails to recover a bi-factor loading structure. Not only is it theoretically flawed, the implementation of the Schmid–Leiman transformation can also be a challenge for practitioners due to several decisions one needs to make, including the choice of oblique rotation method for the exploratory factor analysis and how the number of factors is determined in each iteration. This paper fills these gaps.
Specifically, we establish an identifiability result for exploratory hierarchical factor analysis, showing that the hierarchical factor structure is learnable under mild regularity conditions. We also propose a computationally efficient divide-and-conquer approach for learning the hierarchical factor structure. This approach divides the learning problem into many subtasks of learning the factors nested within a factor, also known as the child factors of this factor. It conquers these subtasks layer by layer, starting from the one consisting only of the general factor. Our method for solving each subtask has two building blocks: (1) a constraint-based continuous optimisation algorithm and (2) a search algorithm based on an information criterion. The former is used to explore the number and loading structure of the child factors, and the latter serves as a refinement step that ensures the true structure of the child factors is selected with high probability. Finally, asymptotic theory is established for the proposed method, showing that it can consistently recover the true hierarchical factor structure as the sample size grows to infinity. The proposed method is closely related to the method proposed in Qiao et al. (2025) for exploratory bi-factor analysis, which can be seen as a special case of the current method when the hierarchical factor structure is known to have only two layers. However, we note that the current problem is substantially more challenging, as the complexity of a hierarchical factor structure grows quickly as the
number of factor layers increases. Nevertheless, the constraint-based continuous optimization algorithm that serves as a building block of the proposed method is similar to the algorithm used for exploratory bi-factor analysis in Qiao et al. (2025). This algorithm turns a computationally challenging combinatorial model selection problem into a relatively easier-to-solve continuous optimization problem, enabling a more efficient global search of the factor structure.

The rest of the paper is organized as follows. In Section 2, we establish the identifiability of the general hierarchical factor model and, further, propose a divide-and-conquer approach for exploratory hierarchical factor analysis and establish its consistency. In Section 3, the computation of the divide-and-conquer approach is discussed. Simulation studies and a real data example are presented in Sections 4 and 5, respectively, to evaluate the performance of the proposed method. We conclude with discussions in Section 6.

2 Exploratory Hierarchical Factor Analysis

2.1 Constraints of hierarchical factor model

Consider a factor model for J observed variables, with K orthogonal factors. The population covariance matrix can be decomposed as

Σ = ΛΛ^T + Ψ,

where Λ = (λ_{jk})_{J×K} is the loading matrix and Ψ is a J×J diagonal matrix with diagonal entries ψ_1, ..., ψ_J > 0 that record the unique variances. We say this factor model is a hierarchical factor model if the loading matrix Λ satisfies certain zero constraints that encode a factor hierarchy. More specifically, let v_k = {j : λ_{jk} ≠ 0} be the set of variables loading on the kth factor. The factor model becomes a hierarchical factor model if v_1, ..., v_K satisfy the following constraints:

C1. v_1 = {1, ..., J} corresponds to a general factor that is loaded on by all the items.

C2. For any k < l, it holds that either v_l ⊊ v_k or v_l ⊆ {1, ..., J} \ v_k. That is, the variables that load on factor l are either a proper subset of those that load on factor k or do not overlap with them.
When v_l ⊊ v_k, we say factor l is a descendant factor of factor k. If, further, there does not exist k' such that k < k' < l and v_l ⊊ v_{k'} ⊊ v_k, we say factor l is a child factor of factor k, and factor k is a parent factor of factor l.

C3. For a given factor k, we denote the set of its child factors by Ch_k. Its cardinality satisfies |Ch_k| = 0 or |Ch_k| ≥ 2. That is, a factor either has no child factor or has at least two child factors. Moreover, when a factor k has two or more child factors, these child factors satisfy

v_l ∩ v_{l'} = ∅ for any distinct l, l' ∈ Ch_k, and ∪_{l ∈ Ch_k} v_l = v_k.

That is, the sets of variables that load on the child factors of a factor form a partition of the variables that load on this factor. We note that one child node is not allowed due to identification issues.

To avoid ambiguity in the labelling of the factors, we further require that

(a) k < l if factors k and l are the child factors of the same factor and min{v_k} < min{v_l}. That is, we label the child factors of the same factor based on the labels of the variables that load on each factor.

(b) k < l if factors k and l do not have the same parent factor, and the parent factor of k has a smaller label than the parent factor of l.

We remark that the requirement |Ch_k| = 0 or |Ch_k| ≥ 2 in constraint C3 is necessary for the hierarchical factor model
to be identifiable. When a factor k has a unique child factor (i.e., |Ch_k| = 1), it is easy to show that the two columns of the loading matrix that correspond to factor k and its single child factor are only determined up to an orthogonal rotation.

We note that when the above constraints hold, the hierarchical factor structure can be visualized as a tree, where each internal node represents a factor, and each leaf node represents an observed variable. In this tree, factor l being a child factor of factor k is represented by node l being a child node of node k. The variables that load on each factor are indicated by its descendant leaf nodes.

When the factors follow a hierarchical structure, we can classify the factors into layers. The first factor layer only includes the general factor, denoted by L_1 = {1}. The rest of the layers can be defined recursively. That is, if a factor k is in the tth layer, then its child factors are in the (t+1)th layer. Let T be the total number of layers and L_1, ..., L_T be the sets of factors for the T layers. It is worth noting that the way the layers are labelled here is opposite to how they are labelled in the literature. That is, we label the layers from the top to the bottom of the hierarchy of the factors. In contrast, they are labelled from the bottom to the top in the literature (see, e.g., Yung et al., 1999). We adopt the current labelling system because the proposed method in Section 2.2 learns the factor hierarchy from top to bottom, which makes this labelling more convenient for describing the method.

An illustrative example of a three-layer hierarchical factor model is given in Figure 1, where Panel (a) shows the variables that load on each factor from the top layer to the bottom layer, and Panel (b) shows the corresponding path diagram. In this example, J = 16, K = 6, v_1 = {1, 2, ..., 16}, v_2 = {1, ..., 8}, v_3 = {9, ..., 12}, v_4 = {13, ..., 16}, v_5 = {1, ..., 4} and v_6 = {5, ..., 8}.
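Since constraints C1-C3 are purely set-theoretic, they can be checked mechanically for any candidate collection of loading sets. The following Python sketch (a hypothetical helper, not part of the paper's code; factors are keyed 1, ..., K and variable sets are plain Python sets) verifies C1-C3 on the Figure 1 example:

```python
def check_hierarchy(v, J):
    """Check constraints C1-C3 for loading sets v = {factor label: set of variables}."""
    full = set(range(1, J + 1))
    # C1: factor 1 is a general factor loaded on by all variables.
    if v[1] != full:
        return False
    ks = sorted(v)
    # C2: for k < l, v_l is a proper subset of v_k or disjoint from it.
    for i, k in enumerate(ks):
        for l in ks[i + 1:]:
            if not (v[l] < v[k] or v[l].isdisjoint(v[k])):
                return False
    def children(k):
        # l is a child of k if v_l is a proper subset of v_k with no factor strictly between.
        desc = [l for l in ks if v[l] < v[k]]
        return [l for l in desc if not any(v[l] < v[m] < v[k] for m in desc)]
    # C3: zero or >= 2 children; children's variable sets partition the parent's set.
    for k in ks:
        ch = children(k)
        if len(ch) == 1:
            return False
        if ch:
            if set().union(*(v[l] for l in ch)) != v[k]:
                return False
            for a in ch:
                for b in ch:
                    if a < b and not v[a].isdisjoint(v[b]):
                        return False
    return True

# Figure 1 example: J = 16, K = 6.
v = {1: set(range(1, 17)), 2: set(range(1, 9)), 3: set(range(9, 13)),
     4: set(range(13, 17)), 5: set(range(1, 5)), 6: set(range(5, 9))}
```

A structure with a single child factor, such as v = {1: {1,...,6}, 2: {1,2,3}}, fails the |Ch_k| ≠ 1 requirement of C3.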
The factors are labeled following the constraints C3(a) and C3(b). Based on this hierarchical structure, we have T = 3, L_1 = {1}, L_2 = {2, 3, 4} and L_3 = {5, 6}. The loading matrix Λ under the hierarchical structure takes the form (written transposed, one row per factor)

Λ^T = ( λ_{1,1}  λ_{2,1}  ...  λ_{16,1}                          (factor 1: variables 1-16)
        λ_{1,2}  ...  λ_{8,2}   0  ...  0                        (factor 2: variables 1-8)
        0  ...  0   λ_{9,3}  ...  λ_{12,3}   0  ...  0           (factor 3: variables 9-12)
        0  ...  0   λ_{13,4}  ...  λ_{16,4}                      (factor 4: variables 13-16)
        λ_{1,5}  ...  λ_{4,5}   0  ...  0                        (factor 5: variables 1-4)
        0  ...  0   λ_{5,6}  ...  λ_{8,6}   0  ...  0 ).         (factor 6: variables 5-8)   (1)

Under a confirmatory setting, the number of factors K and the variables associated with each factor,

(a) The hierarchical factor structure of a three-layer hierarchical factor model.

(b) The path diagram
corresponding to the hierarchical factor model in Panel (a).

Figure 1: The illustrative example of a three-layer hierarchical factor model.

v_1, v_2, ..., v_K, are known. In that case, estimating the hierarchical factor model is a relatively simple problem, which involves solving an optimisation problem with suitable zero constraints on the loading parameters. However, in many real-world applications, we do not have prior knowledge about the hierarchical structure of the loading matrix. In these cases, we are interested in exploratory hierarchical factor analysis, i.e., simultaneously learning the hierarchical structure from data and estimating the corresponding parameters.

Before presenting a method for exploratory hierarchical factor analysis, we first show that the true factor hierarchy is unique under mild conditions, which is essential for the true structure to be learnable. Suppose that we are given a true covariance matrix Σ* = Λ*(Λ*)^T + Ψ*, where the true loading matrix Λ* satisfies the constraints of a hierarchical factor model. Theorem 1 below shows that the true loading matrix Λ* is unique up to column sign-flips and thus yields the same hierarchical structure.

The following notation is needed in the rest of the paper. Given a hierarchical factor structure with loading sets v_i, let D_i = {j : v_j ⊊ v_i} be the set of all descendant factors of factor i. For example, in the hierarchical structure shown in Figure 1, D_2 = {5, 6}. For any matrix A = (a_{i,j})_{m×n} and sets S_1 ⊆ {1, ..., m} and S_2 ⊆ {1, ..., n}, let A[S_1, S_2] = (a_{i,j})_{i∈S_1, j∈S_2} be the submatrix of A consisting of elements that lie in rows belonging to set S_1 and columns belonging to set S_2, where the rows and columns are arranged in ascending order based on their labels in S_1 and S_2, respectively. For example, consider the loading matrix in (1), where v_2 = {1, 2, ..., 8}. Then Λ[v_2, {1, 2}] takes the form

Λ[v_2, {1, 2}] = ( λ_{1,1}  λ_{2,1}  ...  λ_{8,1} ; λ_{1,2}  λ_{2,2}  ...  λ_{8,2} )^T.
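To make the A[S_1, S_2] notation concrete, here is a small numpy sketch (the function name `submatrix` is ours; labels are 1-based as in the text, and rows/columns are arranged in ascending label order):

```python
import numpy as np

def submatrix(A, S1, S2):
    """A[S1, S2]: rows/columns with 1-based labels in S1/S2, in ascending order."""
    rows = sorted(i - 1 for i in S1)
    cols = sorted(j - 1 for j in S2)
    return A[np.ix_(rows, cols)]

A = np.arange(1, 13).reshape(3, 4)   # 3 x 4 matrix with entries 1..12
B = submatrix(A, {3, 1}, {2, 4})     # rows {1, 3}, columns {2, 4}
```

Note that passing sets makes the ascending-order convention explicit: the output never depends on the order in which labels are listed.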
For any vector a = (a_1, ..., a_n)^T and set S ⊆ {1, ..., n}, we similarly define a[S] = (a_i)_{i∈S}^T to be the subvector of a consisting of the elements belonging to S, where the elements in a[S] are arranged in ascending order based on their labels in S. For any set S_1 ⊆ {1, 2, ..., n}, let vec(S_1) be a mapping that maps the set S_1 to a vector whose elements are the same as those of S_1 and arranged in ascending order. For two sets S_1 ⊆ {1, 2, ..., n} and S_2 ⊆ {1, 2, ..., |S_1|}, we denote by S_1[S_2] the subset of S_1 consisting of the elements in vec(S_1)[S_2].

Condition 1. The population covariance matrix can be expressed in the form Σ* = Λ*(Λ*)^T + Ψ*, where the true loading matrix Λ* is of rank K and the loading sets v*_k and child factors Ch*_k defined by Λ* satisfy the constraints C1-C3 of a hierarchical factor model.

Condition 2. Given another J×K matrix Λ and J×J diagonal matrix Ψ such that Σ* = Λ*(Λ*)^T + Ψ* = ΛΛ^T + Ψ, we have ΛΛ^T = Λ*(Λ*)^T and Ψ = Ψ*.

Condition 3. Let D*_k be the corresponding true set of descendant factors of factor k. For any factor i with Ch*_i ≠ ∅ and any j ∈ Ch*_i, it satisfies that (1) any two rows of Λ*[v*_j, {i, j}] are linearly independent, (2) for any k ∈ v*_j, Λ*[v*_j \ {k}, {i, j} ∪ D*_j] has full column rank, and (3) if |Ch*_j| ≥ 2, then, for any s_1, s_2 ∈ Ch*_j, k_1, k_2 ∈ v*_{s_1}, and k_3, k_4 ∈ v*_{s_2}, Λ*[{k_1, ..., k_4}, {i, j, s_1, s_2}] is of full rank.

Theorem 1. Suppose that Conditions 1-3 hold. There does not exist another hierarchical factor structure with K factors such that its loading matrix Λ and unique variance matrix Ψ satisfy Σ* = ΛΛ^T + Ψ and Λ ≠ Λ*Q for all
sign-flip matrices Q ∈ Q, where Q consists of all K×K diagonal matrices Q whose diagonal entries take values 1 or -1.

Remark 1. Condition 2 ensures the separation between the low-rank matrix Λ*(Λ*)^T and the diagonal matrix Ψ*, which is necessary for the true hierarchical factor model to be identifiable. This condition can be guaranteed by a mild requirement on the true loading matrix, such as Condition 4 in Section 2.2.3. On the other hand, as Proposition 1 below implies, the true loading matrix needs to satisfy certain minimum requirements for Condition 2 to hold. The proof of Proposition 1 directly follows Theorem 1 of Fang et al. (2021).

Proposition 1. There exists another J×K matrix Λ following the same hierarchical factor structure as the true model and a J×J diagonal matrix Ψ such that Σ* = Λ*(Λ*)^T + Ψ* = ΛΛ^T + Ψ, if there exists a factor k such that (1) |v*_k| ≤ 2 or (2) |Ch*_k| ≥ 2 and |v*_k| ≤ 6.

When performing exploratory hierarchical factor analysis, we only search among the identifiable hierarchical factor models. Thus, we require the considered models to satisfy constraint C4 in addition to constraints C1-C3 introduced previously.

C4. For all factors k = 1, ..., K, |v_k| ≥ 3, and |v_k| ≥ 7 if factor k has two or more child factors.

Remark 2. Condition 3 imposes three requirements. First, it requires that there do not exist two variables loading on factor j such that their loadings on any factor i and its child factor j are linearly dependent. This is a mild assumption satisfied by almost all the models in the full parameter space of hierarchical factor models. Second, it requires that the submatrix Λ*[v*_j, {i, j} ∪ D*_j], which corresponds to the variables in v*_j and the factors i, j, and j's descendants, is still of full column rank after deleting any row. This condition mainly imposes a restriction on the number of descendant factors each factor can have. That is, the full-column-rank requirement implies that |v*_j| ≥ 3 + |D*_j|.
Other than that, the full-column-rank requirement is easily satisfied by most hierarchical factor models. These two requirements can be seen as an extension of Condition 2 of Qiao et al. (2025) to hierarchical factor models, where Qiao et al. (2025) consider a bi-factor model with possibly correlated bi-factors. Third, we require that when factor j has child factors s_1 and s_2, for any two variables k_1, k_2 loading on factor s_1 and any two variables k_3, k_4 loading on factor s_2, the sub-loading matrix corresponding to variables k_1, ..., k_4 and factors i, j, s_1, s_2 is of full rank.

2.2 Proposed Method

In what follows, we propose a divide-and-conquer method for exploratory hierarchical factor analysis.

2.2.1 An Overview of Proposed Method

As the proposed method is quite sophisticated, we start with an overview to help readers understand it. Consider a dataset with N observation units from a certain population and J observed variables. Let S be the sample covariance matrix of the observed data. The proposed method takes S as the input and outputs estimators:

1. T̂ and K̂ for the number of layers T and the number of factors K.

2. L̂_1, ..., L̂_T̂ for the factor layers L_1, ..., L_T, and v̂_1, v̂_2, ..., v̂_K̂ for the sets of variables loading on the K factors, v_1, ..., v_K.

3. Λ̂ and Ψ̂ for the loading matrix Λ and unique variance matrix Ψ.

As shown in Theorem 2 below, with the
sample size N going to infinity, these estimates will converge to their true values.

The proposed method learns the hierarchical factor structure from the top to the bottom of the factor hierarchy. It divides the learning problem into many subproblems and conquers them layer by layer, starting from the first layer L̂_1 = {1} with v̂_1 = {1, ..., J}. For each step t, t = 2, 3, ..., suppose the first to the (t-1)th layers have been learned. These layers are denoted by L̂_i = {k_{i-1}+1, ..., k_i}, i = 1, ..., t-1, where k_0 = 0 and k_1 = 1, and the associated sets of variables are denoted by v̂_1, ..., v̂_{k_{t-1}}. We make the following decisions in the tth step:

1. For each factor k ∈ L̂_{t-1}, we learn its child factors under the constraints C3 and C4. This is achieved by an Information-Criterion-Based (ICB) method described in Section 2.2.2 below. The labels of the child factors are denoted by Ĉh_k. When Ĉh_k ≠ ∅, we denote the associated sets of variables as v̂_l, l ∈ Ĉh_k.

2. If Ĉh_k = ∅ for all k ∈ L̂_{t-1}, we stop the learning algorithm and conclude that the factor hierarchy has T̂ = t-1 layers.

3. Otherwise, we let L̂_t = {k_{t-1}+1, ..., k_t} = ∪_{k∈L̂_{t-1}} Ĉh_k and proceed to the (t+1)th step.

We iteratively learn the structure of each layer until the preceding stopping criterion is met. Then we obtain the estimates Λ̂ and Ψ̂ by maximum likelihood estimation given K̂ = k_T̂ and v̂_1, ..., v̂_K̂:

(Λ̂, Ψ̂) = argmin_{Λ,Ψ} l(ΛΛ^T + Ψ; S),
s.t. λ_{ij} = 0 for i ∉ v̂_j, i = 1, ..., J, j = 1, ..., K̂,
Ψ[{i},{i}] ≥ 0, Ψ[{i},{j}] = 0, i = 1, ..., J, j ≠ i,   (2)

where

l(ΛΛ^T + Ψ; S) = N ( log(det(ΛΛ^T + Ψ)) + tr(S(ΛΛ^T + Ψ)^{-1}) - log(det(S)) - J )

equals two times the negative log-likelihood of the observed data up to a constant. We output T̂, K̂, L̂_1, ..., L̂_T̂, v̂_1, ..., v̂_K̂, Λ̂ and Ψ̂ as our final estimate of the learned hierarchical factor model.

To illustrate, consider the example in Figure 1. In the first step, we start with L̂_1 = {1} and v̂_1 = {1, ..., 16}. In the second step, we learn the child factors of Factor 1.
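The fit function l(ΛΛ^T + Ψ; S) in (2) can be written down directly from its definition. A minimal numpy sketch (our own naming; Ψ is passed as the vector of unique variances):

```python
import numpy as np

def discrepancy(Lam, Psi, S, N):
    """l(Lam Lam^T + Psi; S) = N * (log det Sigma + tr(S Sigma^{-1}) - log det S - J),
    i.e. twice the Gaussian negative log-likelihood up to a constant."""
    J = S.shape[0]
    Sigma = Lam @ Lam.T + np.diag(Psi)
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return N * (logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - J)
```

A quick sanity check: when S equals the model-implied covariance ΛΛ^T + Ψ exactly, the discrepancy is zero, consistent with l being minimised at a perfect fit.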
If they are correctly learned, then we obtain Ĉh_1 = {2, 3, 4} with v̂_2 = {1, ..., 8}, v̂_3 = {9, ..., 12} and v̂_4 = {13, ..., 16}. This leads to L̂_2 = {2, 3, 4}. In the third step, we learn the child factors of factors 2, 3 and 4, one by one. If correctly learned, we have Ĉh_2 = {5, 6}, Ĉh_3 = ∅, Ĉh_4 = ∅, L̂_3 = {5, 6}, v̂_5 = {1, ..., 4} and v̂_6 = {5, ..., 8}. In the fourth step, if correctly learned, we have Ĉh_5 = Ĉh_6 = ∅, and the learning algorithm stops. We then have T̂ = 3, K̂ = 6, L̂_1, ..., L̂_3, v̂_1, ..., v̂_6, and further obtain Λ̂ and Ψ̂ using (2) given K̂ and v̂_1, ..., v̂_6. We summarise the steps of the proposed method in Algorithm 1 below.

Algorithm 1: A Divide-and-Conquer method for learning the hierarchical factor structure

Input: Sample covariance matrix S ∈ R^{J×J}.
1: Set L̂_1 = {1} with v̂_1 = {1, ..., J}.
2: Determine Ĉh_1, the child factors of factor 1, and v̂_i for all i ∈ Ĉh_1, the sets of variables loading on these child factors, by the ICB method in Algorithm 2.
3: Set L̂_2 = Ĉh_1 and t = 2.
4: while L̂_t ≠ ∅ do
5:   for k ∈ L̂_t do
6:     Determine Ĉh_k and v̂_i for all i ∈ Ĉh_k by the ICB method in Algorithm 2.
7:   end for
8:   Set L̂_{t+1} = ∪_{k∈L̂_t} Ĉh_k.
9:   t = t + 1.
10: end while
11: Set T̂ = t - 1, K̂ = Σ_{l=1}^{T̂} |L̂_l|.
12: Obtain Λ̂ and Ψ̂ using (2) given K̂ and v̂_1, ..., v̂_K̂.
Output: T̂, K̂, L̂_1, ..., L̂_T̂, v̂_1, ..., v̂_K̂, Λ̂ and Ψ̂.

2.2.2 ICB Method for Learning Child Factors

From the overview of the proposed method described above, we see that the proposed method solves the learning problem by iteratively applying an ICB method
to learn the child factors of each given factor. We now give the details of this method. We start with the ICB method for learning the child factors of Factor 1, i.e., the general factor. In this case, the main questions the ICB method answers are: (1) how many child factors does Factor 1 have? and (2) which variables load on each child factor? It is worth noting that when learning these from data, we need to account for the fact that each child factor can have an unknown number of descendant factors. However, in a divide-and-conquer spirit, we do not learn the structure of the descendant factors (i.e., the hierarchical structure of these descendant factors and the variables loading on them) of each child factor in this step, because this structure is too complex to learn at once.

The ICB method answers the two questions above by learning a loading matrix Λ̃_1 with zero patterns that encode the number and loading structure of the child factors of Factor 1. More specifically, Λ̃_1 is searched among the space of loading matrices that satisfy certain zero constraints that encode a hierarchical factor model. This space is defined as

A_1 = ∪_{c∈{0,2,...,c_max}, d_1,...,d_c∈{1,...,d_max}} A_1(c, d_1, ..., d_c),

where, if c ≥ 2, for a pre-specified constant τ > 0,

A_1(c, d_1, ..., d_c) = { A = (a_{ij})_{J×(1+d_1+···+d_c)} : there exists a partition of {1, ..., J}, denoted by v^1_1, ..., v^1_c, satisfying min{v^1_1} < min{v^1_2} < ··· < min{v^1_c}, such that A[v^1_s, {j}] = 0 for all s = 1, ..., c and j ∉ {1, 2+Σ_{s'<s} d_{s'}, 3+Σ_{s'<s} d_{s'}, ..., 1+Σ_{s'≤s} d_{s'}}, and |a_{ij}| ≤ τ for all i = 1, ..., J and j = 1, ..., 1+Σ_{s=1}^c d_s },

and, if c = 0, A_1(0) = { A = (a_{ij})_{J×1} : |a_{ij}| ≤ τ }. Here, c_max and d_max are pre-specified constants, typically decided by domain knowledge, and τ is a universal upper bound for the loading parameters, which is needed for technical reasons in our theory. The space A_1(c, d_1, ..., d_c) includes all possible loading matrices for a hierarchical factor structure where Factor 1 has c child factors and each child factor has d_s - 1 descendant factors.
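The column index set {1, 2+Σ_{s'<s} d_{s'}, ..., 1+Σ_{s'≤s} d_{s'}}, on which the variables of the s-th block may load, is easy to tabulate. A small sketch (hypothetical helper; 1-based column labels, with column 1 being the general factor):

```python
def column_blocks(ds):
    """For child-factor dimensions d_1, ..., d_c, return for each block s the list
    of columns on which its variables may load: column 1 (the general factor)
    plus the d_s consecutive columns reserved for child s and its descendants."""
    blocks, offset = [], 0
    for d in ds:
        blocks.append([1] + list(range(2 + offset, 2 + offset + d)))
        offset += d
    return blocks
```

For (d_1, d_2, d_3) = (3, 1, 1), this gives blocks [1, 2, 3, 4], [1, 5] and [1, 6], matching the zero pattern of the example matrix in A_1(3, 3, 1, 1) shown in the text.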
The space A_1 is the union of all the possible A_1(c, d_1, ..., d_c) over the different combinations of the numbers of child factors and their descendant factors.

For example, consider the hierarchical factor model example in Figure 1, for which v̂_1 = {1, ..., 16}. Then the matrix (written transposed, one row per column of Λ_1)

Λ_1^T = ( λ_{1,1}  λ_{2,1}  ...  λ_{16,1}                         (all 16 variables)
          λ_{1,2}  ...  λ_{8,2}   0  ...  0                       (variables 1-8)
          λ_{1,3}  ...  λ_{8,3}   0  ...  0                       (variables 1-8)
          λ_{1,4}  ...  λ_{8,4}   0  ...  0                       (variables 1-8)
          0  ...  0   λ_{9,5}  ...  λ_{12,5}   0  ...  0          (variables 9-12)
          0  ...  0   λ_{13,6}  ...  λ_{16,6} )                   (variables 13-16)

lies in the space A_1(3, 3, 1, 1). This loading matrix is the one that the ICB method aims to find, as it has the same blockwise zero pattern (ignoring the zero constraints implied by the lower-layer factors) as the true loading pattern in (1) after reordering the columns of Λ in (1).

We search for the best possible loading matrix in A_1 using the information criterion defined as

IC_1(c, d_1, ..., d_c) = min_{Λ_1, Ψ_1} l(Λ_1Λ_1^T + Ψ_1; S) + p_1(Λ_1) log N,
s.t. Λ_1 ∈ A_1(c, d_1, ..., d_c), (Ψ_1)[{i},{i}] ≥ 0, (Ψ_1)[{i},{j}] = 0, i = 1, ..., |v̂_1|, j ≠ i,

where

p_1(Λ_1) = Σ_{s=1}^c ( |v^1_s| d_s - d_s(d_s-1)/2 )  if d_s ≤ |v^1_s| for all s = 1, ..., c, and p_1(Λ_1) = ∞ otherwise,

is a penalty on the number of free parameters¹ for a matrix Λ_1 in A_1(c, d_1, ..., d_c). The penalty ensures that in the selected factor
loadings, one plus the number of descendant factors of each child factor of Factor 1 will not exceed the number of items loading on the corresponding child factor. Ideally, we hope to find the loading matrix in A_1 that minimises IC_1(c, d_1, ..., d_c) among all c ∈ {0, 2, ..., c_max} and d_1, ..., d_c ∈ {1, ..., d_max}. More specifically, we define

(c̄_1, d̄^1_1, ..., d̄^1_{c̄_1}) = argmin_{c∈{0,2,...,c_max}, 1≤d_s≤d_max, s=1,...,c} IC_1(c, d_1, ..., d_c)   (3)

and further

(Λ̄_1, Ψ̄_1) = argmin_{Λ_1, Ψ_1} l(Λ_1Λ_1^T + Ψ_1; S)
s.t. Λ_1 ∈ A_1(c̄_1, d̄^1_1, ..., d̄^1_{c̄_1}), (Ψ_1)[{i},{i}] ≥ 0, (Ψ_1)[{i},{j}] = 0, i = 1, ..., |v̂_1|, j ≠ i.   (4)

We can further determine the variables loading on each child factor of Factor 1 based on the zero pattern of Λ̄_1. However, we note that A_1 is highly complex, and thus enumerating all the possible loading matrices in A_1 is computationally infeasible. In other words, while the quantities in (3) and (4) are well-defined mathematically, they cannot be computed within a reasonable time. In this regard, we develop a greedy search method, presented in Algorithm 2, for searching over the space A_1. This greedy search method outputs ĉ_1 and v̂^1_1, ..., v̂^1_{ĉ_1}. As shown in Theorem 2, with probability tending to 1, they are consistent estimates of the corresponding true quantities for the factors in this layer. In other words, this greedy search is theoretically guaranteed to learn the correct hierarchical factor structure. Moreover, Algorithm 2 also solves a similar optimisation as (4) for loading matrices in A_1(ĉ_1, d̂_1, ..., d̂_{ĉ_1}), from which we obtain a consistent estimate of the first column of the loading matrix, denoted by λ̃_1. So far, we have learned the factors in the second layer of the factor hierarchy.

¹ |v^1_s| d_s is the number of nonzero parameters in columns 2+Σ_{s'<s} d_{s'}, ..., 1+Σ_{s'≤s} d_{s'}, and the subtracted term accounts for the reduced degrees of freedom due to rotational indeterminacy.
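The penalty p_1(Λ_1) depends only on the block sizes |v^1_s| and the dimensions d_s, so it can be computed without touching the loading values. A minimal sketch (our own naming):

```python
def penalty(v_sizes, ds):
    """p_1(Lambda_1) = sum_s (|v_s| * d_s - d_s * (d_s - 1) / 2), or infinity if
    some d_s exceeds the number of variables |v_s| in its block."""
    if any(d > m for m, d in zip(v_sizes, ds)):
        return float("inf")
    return sum(m * d - d * (d - 1) // 2 for m, d in zip(v_sizes, ds))
```

For the Figure 1 configuration with block sizes (8, 4, 4) and dimensions (3, 1, 1), the penalty is 8*3 - 3 + 4 + 4 = 29; an infeasible choice such as d_1 = 3 for a block of 2 variables is penalised to infinity.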
For t ≥ 3, suppose that the first to the (t-1)th layers have been successfully learned, and we now need to learn the factors in the tth layer. This problem can be decomposed into learning the child factors of each factor k ∈ L̂_{t-1} = {k_{t-2}+1, ..., k_{t-1}}. At this moment, we have the estimated set of variables loading on Factor k, denoted by v̂_k, and consistent estimates of the loading parameters for the factors in the first to the (t-2)th layers, denoted by λ̃_i, i = 1, ..., k_{t-2}, which are obtained as a by-product of the ICB method in the previous steps². We define Σ̃_{k,0} := Σ_{i=1}^{k_{t-2}} (λ̃_i)[v̂_k] (λ̃_i)[v̂_k]^T and S_k := S[v̂_k, v̂_k]. Similar to the learning of the child factors of Factor 1, we define the possible space for the loading submatrix associated with the descendant factors of Factor k as

A_k = ∪_{c∈{0,2,...,c_max}, d_1,...,d_c∈{1,...,d_max}} A_k(c, d_1, ..., d_c),

where, if c ≥ 2, for the same constant τ > 0 as in A_1,

A_k(c, d_1, ..., d_c) = { A = (a_{ij})_{|v̂_k|×(1+d_1+···+d_c)} : there exists a partition of {1, ..., |v̂_k|}, denoted by v^k_1, ..., v^k_c, satisfying min{v^k_1} < min{v^k_2} < ··· < min{v^k_c}, such that A[v^k_s, {j}] = 0 for all s = 1, ..., c and j ∉ {1, 2+Σ_{s'<s} d_{s'}, 3+Σ_{s'<s} d_{s'}, ..., 1+Σ_{s'≤s} d_{s'}}, and |a_{ij}| ≤ τ for all i = 1, ..., |v̂_k| and j = 1, ..., 1+Σ_{s=1}^c d_s },

and, if c = 0, A_k(0) = { A = (a_{ij})_{|v̂_k|×1} : |a_{ij}| ≤ τ }. Here, c and d_1, ..., d_c have similar meanings as in A_1(c, d_1, ..., d_c). That is, A_k(c, d_1, ..., d_c) includes the corresponding loading submatrices when Factor k has c child factors and each child factor has d_s - 1 descendant factors. It should be noted, however, that each matrix in A_k(c, d_1, ..., d_
c) has only |v̂_k| rows, while those in A_1(c, d_1, ..., d_c) have J rows. This is because, given the results from the previous steps, we have already estimated that factor k and its descendant factors are only loaded by the variables in v̂_k. Therefore, we only focus on learning the rows of the loading matrix that correspond to the variables in v̂_k in the current task. Similar to IC_1(c, d_1, ..., d_c), we define

IC_k(c, d_1, ..., d_c) = min_{Λ_k, Ψ_k} l(Σ̃_{k,0}Σ̃_{k,0}^T + Λ_kΛ_k^T + Ψ_k; S_k) + p_k(Λ_k) log N,
s.t. Λ_k ∈ A_k(c, d_1, ..., d_c), (Ψ_k)[{i},{i}] ≥ 0, (Ψ_k)[{i},{j}] = 0, i = 1, ..., |v̂_k|, j ≠ i,

where

p_k(Λ_k) = Σ_{s=1}^c ( |v^k_s| d_s - d_s(d_s-1)/2 )  if d_s ≤ |v^k_s| for all s = 1, ..., c, and p_k(Λ_k) = ∞ otherwise,

is a penalty term. Again, we use the greedy search algorithm, Algorithm 2, to search for the best possible Λ_k in A_k. It outputs ĉ_k and v̂^k_1, ..., v̂^k_{ĉ_k}, and an estimate of the kth column of the loading matrix, λ̃_k. Under some regularity conditions, Theorem 2 shows that ĉ_k, v̂^k_1, ..., v̂^k_{ĉ_k}, and λ̃_k are consistent estimates of the corresponding true quantities.

² When t = 3, we have λ̃_1 obtained from learning the factors in the second layer, as mentioned previously. When learning the factors in the third layer, we further obtain λ̃_2, ..., λ̃_{k_2}. When t = 4, we use λ̃_1, ..., λ̃_{k_2} to learn the factors in the fourth layer. This process continues as t increases until Algorithm 1 stops.

Figure 2: A correctly specified model with a redundant factor corresponding to v_2.

Remark 3. The penalty term in the proposed information criterion is essential for learning the correct hierarchical factor structure that satisfies the constraints in C1-C4. It avoids asymptotically rank-degenerate solutions for the loading matrix and thus avoids selecting an over-specified hierarchical factor model with more factors than the true number of factors. Consider the example in Figure 1.
Without the penalty in the proposed information criterion, we may select the structure in Figure 2, which is still a correctly specified model but has a redundant factor (corresponding to v_2).

We now present the proposed greedy search algorithm for efficiently searching over the space A_k for each k. Recall that Σ̃_{k,0} := Σ_{i=1}^{k_{t-2}} (λ̃_i)[v̂_k] (λ̃_i)[v̂_k]^T when k ∈ L̂_{t-1} for t ≥ 3. We further define Σ̃_{k,0} as a J×J zero matrix to cover the case when t = 2 and k = 1. We divide the search into two cases.

1. For c = 0, we simply compute

ĨC_{k,0} = min_{Λ_k, Ψ_k} l(Σ̃_{k,0}Σ̃_{k,0}^T + Λ_kΛ_k^T + Ψ_k; S_k),
s.t. Λ_k ∈ A_k(0), (Ψ_k)[{i},{i}] ≥ 0, (Ψ_k)[{i},{j}] = 0, i = 1, ..., |v̂_k|, j ≠ i,   (5)

and use (Λ̃_{k,0}, Ψ̃_{k,0}) to denote the solution to (5). This is a relatively simple continuous optimisation problem that a standard numerical solver can solve.

2. Set d = d_max + 2 - t. For each c ∈ {2, ..., c_max}, we perform the following steps:

(a) Solve the optimisation in IC_k(c, d, ..., d). It is easy to check that the penalty term in IC_k(c, d, ..., d) equals |v̂_k| d - c d(d-1)/2, which does not depend on the loading matrix Λ_k as long as the number of items within each of the corresponding partition blocks is no less than d. Therefore, the optimisation problem becomes

min_{Λ_k, Ψ_k} l(Σ̃_{k,0}Σ̃_{k,0}^T + Λ_kΛ_k^T + Ψ_k; S_k),
s.t. Λ_k ∈ A_k(c, d, ..., d), (Ψ_k)[{i},{i}] ≥ 0, (Ψ_k)[{i},{j}] = 0, i = 1, ..., |v̂_k|, j ≠ i.   (6)

Let v^{k,c}_1, ..., v^{k,c}_c be the partition of {1, ..., |v̂_k|} given by the solution to (6). We note that (6) is a discrete optimisation problem, due to the combinatorial nature of the space A_k(c, d, ..., d). The theoretical properties in Theorem 2 are established
under the ideal scenario that this optimisation is solved exactly for all k. In reality, however, exactly solving (6) is computationally infeasible when J and c are large. To search for the solution to (6), we cast it into a continuous optimisation problem with nonlinear zero constraints, solved by an augmented Lagrangian method; see Section 3 for the relevant details.

(b) Given the partition v^{k,c}_1, ..., v^{k,c}_c from the previous step, we define, for all d_1, ..., d_c ∈ {1, ..., d_max}, the space

Ã_k(c, d_1, ..., d_c) = { A = (a_{ij})_{J×(1+d_1+···+d_c)} : A[v^{k,c}_s, {j}] = 0 for all s = 1, ..., c and j ∉ {1, 2+Σ_{s'<s} d_{s'}, 3+Σ_{s'<s} d_{s'}, ..., 1+Σ_{s'≤s} d_{s'}}, and |a_{ij}| ≤ τ for all i = 1, ..., J and j = 1, ..., 1+Σ_{s=1}^c d_s },

for the same constant τ as in A_k(c, d_1, ..., d_c). We note that the space Ã_k(c, d_1, ..., d_c) is substantially smaller than A_k(c, d_1, ..., d_c), as the partition of the variables is fixed. Based on Ã_k(c, d_1, ..., d_c), we define the information criterion

ĨC_k(c, d_1, ..., d_c) = min_{Λ_k, Ψ_k} l(Σ̃_{k,0}Σ̃_{k,0}^T + Λ_kΛ_k^T + Ψ_k; S_k) + p_k(Λ_k) log N,
s.t. Λ_k ∈ Ã_k(c, d_1, ..., d_c), (Ψ_k)[{i},{i}] ≥ 0, (Ψ_k)[{i},{j}] = 0, i = 1, ..., |v̂_k|, j ≠ i.   (7)

As the space Ã_k(c, d_1, ..., d_c) is relatively simple, the optimisation in (7) is a relatively simple continuous optimisation problem that a standard numerical solver can solve.

(c) We then search for the best values of d_1, ..., d_c for the given c. They are determined sequentially, one after another. More specifically, we first determine d_1 by

d̃^c_1 = argmin_{1≤d_1≤min(|v^{k,c}_1|, d)} ĨC_k(c, d_1, min(|v^{k,c}_2|, d), ..., min(|v^{k,c}_c|, d)),   (8)

where we fix the values of d_2, ..., d_c at min(|v^{k,c}_2|, d), ..., min(|v^{k,c}_c|, d) and only vary the value of d_1. Solving (8) involves solving min(|v^{k,c}_1|, d) relatively simple continuous optimisation problems. Then we proceed to d_2 and so on.
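This sequential determination of d_1, ..., d_c is a one-pass coordinate search: each coordinate is optimised in turn, with earlier coordinates fixed at their selected values and later ones at their upper caps. A generic sketch, with the information criterion abstracted as a callable (our own naming; not the paper's implementation):

```python
def sequential_search(ic, caps):
    """One-pass coordinate search over (d_1, ..., d_c).
    ic:   callable taking a tuple (d_1, ..., d_c), returning the criterion value.
    caps: caps[s] = min(|v_s|, d), the largest admissible value of d_{s+1}."""
    chosen = []
    for s, cap in enumerate(caps):
        # Earlier coordinates fixed at chosen values, later ones at their caps.
        best = min(
            range(1, cap + 1),
            key=lambda d: ic(tuple(chosen) + (d,) + tuple(caps[s + 1:])),
        )
        chosen.append(best)
    return tuple(chosen)
```

When the criterion is separable across coordinates, this one-pass search recovers the exact minimiser; in general it is a heuristic whose consistency is what Theorem 2 addresses for the actual criterion ĨC_k.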
For s ≥ 2, suppose that we have learned d̃^c_1, ..., d̃^c_{s-1}; then d_s is determined by

d̃^c_s = argmin_{1≤d_s≤min(|v^{k,c}_s|, d)} ĨC_k(c, d̃^c_1, ..., d̃^c_{s-1}, d_s, min(|v^{k,c}_{s+1}|, d), ..., min(|v^{k,c}_c|, d)),

where we fix d_1, ..., d_{s-1} at their learned values and fix d_{s+1}, ..., d_c at min(|v^{k,c}_{s+1}|, d), ..., min(|v^{k,c}_c|, d).

(d) Given d̃^c_1, ..., d̃^c_c, we define

ĨC_{k,c} = ĨC_k(c, d̃^c_1, ..., d̃^c_c)   (9)

and (Λ̃_{k,c}, Ψ̃_{k,c}) as the solution to (9).

The above steps yield ĨC_{k,c}, c ∈ {0, 2, ..., c_max}. Then, we estimate the number of child factors of Factor k by the value of c that minimises the modified information criterion ĨC_{k,c}. That is, we let ĉ_k = argmin_{c∈{0,2,...,c_max}} ĨC_{k,c}. Moreover, we define v̂^k_s = v̂_k[v^{k,ĉ_k}_s], s = 1, ..., ĉ_k, where v^{k,ĉ_k}_s, s = 1, ..., ĉ_k, is the partition of {1, ..., |v̂_k|} learned above for c = ĉ_k. Then v̂^k_s, s = 1, ..., ĉ_k, give a partition of v̂_k, and we estimate that the sth child factor of Factor k is loaded by the variables in v̂^k_s. As a by-product, we obtain an estimate of the kth column of the loading matrix, denoted by λ̃_k, satisfying that (λ̃_k)[v̂_k] equals the first column of Λ̃_{k,ĉ_k} and (λ̃_k)[{1,...,J}\v̂_k] is a zero vector. We summarise the steps described previously in Algorithm 2.

Algorithm 2: Information-Criterion-Based method

Input: v̂_k, c_max, d_max ∈ N_+, Σ̃_{k,0}, S_k and layer t.
1: Set d = min(|v̂_k|, d_max + 2 - t).
2: Solve ĨC_{k,0} defined in (5). Let (Λ̃_{k,0}, Ψ̃_{k,0}) be the solution to ĨC_{k,0}.
3: for c = 2, 3, ..., c_max do
4:   Solve the optimisation problem (6). Set v^{k,c}_1, ..., v^{k,c}_c as the partition of {1, ..., |v̂_k|} given by the solution to (6).
5:   for s = 1, ..., c do
6:     Compute d̃^c_s = argmin_{1≤d_s≤min(|v^{k,c}_s|, d)} ĨC_k(c, d̃^c_1, ..., d̃^c_{s-1}, d_s, min(|v^{k,c}_{s+1}|, d), ..., min(|v^{k,c}_c|, d)), where ĨC_k is defined in (7).
7:   end for
8:   Define ĨC_{k,c} = ĨC_k(c, d̃^c_1, ..., d̃^c_c) and (Λ̃_{k,c}, Ψ̃_{k,c}) as the solution to ĨC_{k,c}.
9: end for
10: Define ĉ_k = argmin_{c∈{0,2,3,...,c_max}} ĨC_{k,c}.
11: Set ṽ^k_1, ..., ṽ^k_{ĉ_k} to be the partition of {1, ..., |v̂_k|} associated with Λ̃_{k,ĉ_k}. Define the partition of v̂_k by v̂^k_1 = v̂_k[ṽ^k_
1], ..., v̂^k_{ĉ_k} = v̂_k[ṽ^k_{ĉ_k}].
12: Define λ̃_k such that (λ̃_k)[v̂_k] equals the first column of Λ̃_{k,ĉ_k} and (λ̃_k)[{1,...,J}\v̂_k] is a zero vector.
Output: ĉ_k, v̂^k_1, ..., v̂^k_{ĉ_k} and λ̃_k.

Remark 4. c ∈ {0, 2, ..., c_max} represents the number of child factors of Factor k. In other words, c_max is an upper bound on the possible number of child factors of Factor k. On the one hand, we need to ensure that c_max is not too small, so that Condition 8 is satisfied. On the other hand, we want to avoid c_max being too large, to reduce the computational cost. Since the true value of c should satisfy constraints C3 and C4 in Section 2.1, c_max should be no more than ⌊|v̂_k|/3⌋ when |v̂_k| ≥ 7, and c_max = 0 when |v̂_k| ≤ 6, where ⌊·⌋ is the floor function that returns the greatest integer less than or equal to the input. In the simulation study in Section 4, we set c_max = min(4, ⌊|v̂_k|/3⌋) when |v̂_k| ≥ 7 and c_max = 0 when |v̂_k| ≤ 6, which, according to the data generation model, is an upper bound for the value of c. For the real data analysis in Section 5, since the true structure is unknown, we set c_max = min(6, ⌊|v̂_k|/3⌋) when |v̂_k| ≥ 7 and c_max = 0 when |v̂_k| ≤ 6, a more conservative choice than that for the simulation study. In practice, we may adjust our choice based on prior knowledge about the hierarchical factor structure.

Remark 5. The input hyperparameter d_max is an upper bound of one plus the number of descendant factors of the factors in the second layer. When learning the factors in the tth layer for t ≥ 3, it is reasonable to assume that d_max + 2 - t is an upper bound of one plus the number of descendant factors of the factors in the (t+1)th layer, indicating that this upper bound should decline by one each time we move down one layer. Similar to the choice of c_max, we want to choose a d_max that is neither too large nor too small.
In the simulation study in Section 4, we start with $d_{\max} = 6$ when learning the factors in the second layer and reduce this value by one every time we proceed to the next layer. In the real data analysis in Section 5, we start with $d_{\max} = 10$. In practice, we may adjust this choice based on the problem size (e.g., the number of variables) and prior knowledge of the hierarchical factor structure.

Remark 6. Regarding Step 4 of Algorithm 2, we point out that there may exist some $s \in \{1, \ldots, c\}$ such that $|v^{k,c}_s| < d$, which leads to the penalty term in $IC_k(c, d, \ldots, d)$ being $\infty$ instead of a constant. However, Step 4 only aims to produce a reasonable partition of $\hat v^k$ as an input for the following steps of Algorithm 2. As shown in the proof of Theorem 2, as long as $d$ is sufficiently large, Step 4 returns the partition of $\hat v^k$ that we need. In Step 6 of Algorithm 2, the upper bounds $\min(|v^{k,c}_s|, d)$, $s = 1, \ldots, c$, ensure that the penalty on the loading matrix is finite.

2.2.3 Theoretical Results

We now provide theoretical guarantees for the proposed method
based on Algorithms 1 and 2. We start by introducing some notation. We use $\|\cdot\|_F$ to denote the Frobenius norm of a matrix and $\|\cdot\|$ the Euclidean norm of a vector. We also use the notation $a_N = O_P(b_N)$ to denote that $a_N / b_N$ is bounded in probability. In addition to the conditions required for the identifiability of the true hierarchical factor model, we require Conditions 4–8 to ensure that the proposed method is consistent.

Condition 4. For any factor $i$ with $Ch^*_i \neq \emptyset$ and $j \in v^*_i$, there exist $E_1, E_2 \subset v^*_i \setminus \{j\}$ with $|E_1| = |E_2| = 1 + |D^*_i|$ and $E_1 \cap E_2 = \emptyset$, such that $\Lambda^*[E_1, \{i\} \cup D^*_i]$ and $\Lambda^*[E_2, \{i\} \cup D^*_i]$ are of full rank.

Condition 5. For any factor $i$ with $Ch^*_i \neq \emptyset$ and any $k \in Ch^*_i$, there exist $E_1, E_2 \subset v^*_k$ with $|E_1| = 2 + |D^*_k|$, $|E_2| = 1 + |D^*_k|$ and $E_1 \cap E_2 = \emptyset$, such that $\Lambda^*[E_1, \{i,k\} \cup D^*_k]$ and $\Lambda^*[E_2, \{k\} \cup D^*_k]$ are of full rank.

Condition 6. $\|S - \Sigma^*\|_F = O_P(1/\sqrt{N})$.

Condition 7. The true loading parameters satisfy $|\lambda^*_{ij}| \le \tau$ for all $i, j$, where $\tau$ is defined in the parameter spaces $A_k(c, d_1, \ldots, d_c)$ and $\tilde A_k(c, d_1, \ldots, d_c)$ in the ICB method.

Condition 8. When learning the child factors of each true factor $k$, the constants $c_{\max}$ and $d_{\max}$ are chosen such that $c_{\max} \ge |Ch^*_k|$ and $d_{\max} \ge \max_{s \in Ch^*_k} |D^*_s| + 1$.

Theorem 2. Suppose that Conditions 1 and 3–8 hold. Then, the outputs from Algorithm 1 are consistent. That is, as $N$ goes to infinity, the probability of $\hat T = T$, $\hat K = K$, $\hat L_t = L_t$, $t = 1, \ldots, T$, and $\hat v_i = v^*_i$, $i = 1, \ldots, K$, goes to 1, and $\|\hat\Lambda - \Lambda^* \hat Q\|_F = O_P(1/\sqrt{N})$ and $\|\hat\Psi - \Psi^*\|_F = O_P(1/\sqrt{N})$, where $\hat Q \in \mathcal{Q}$ is the diagonal matrix with diagonal entries consisting of the signs of the corresponding entries of $\hat\Lambda^\top \Lambda^*$.

Theorem 2 guarantees that the true hierarchical factor structure can be consistently learned from data and that its parameters can be consistently estimated after adjusting the sign of each column of the loading matrix by $\hat Q$.

Remark 7. It should be noted that in Theorem 2, Algorithm 1 applies Algorithm 2, which involves some nontrivial optimisation problems, including the discrete optimisation problem (6).
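Conditions 4 and 5 are full-rank requirements on submatrices of the true loading matrix. A minimal numerical sketch of how such a requirement can be checked for a candidate loading matrix (the matrix, the row sets, and the function name here are illustrative, not from the paper):

```python
import numpy as np

def submatrix_full_rank(Lambda: np.ndarray, rows, cols) -> bool:
    """Check whether the submatrix Lambda[rows, cols] has full rank."""
    sub = Lambda[np.ix_(rows, cols)]
    return np.linalg.matrix_rank(sub) == min(sub.shape)

# Illustrative block for a factor i with one descendant (|D_i| = 1):
# columns correspond to {i} ∪ D_i, rows to variables in v_i. Values made up.
Lam = np.array([[0.9, 0.5],
                [0.8, -0.4],
                [0.7, 0.0],
                [0.6, 0.3]])

# Condition 4 asks for two disjoint row sets E1, E2 of size 1 + |D_i| = 2
# whose submatrices are both full rank.
E1, E2 = [0, 1], [2, 3]
condition4_holds = (submatrix_full_rank(Lam, E1, [0, 1])
                    and submatrix_full_rank(Lam, E2, [0, 1]))
```

As the paper notes in Remark 8, such full-rank requirements are easily satisfied by most hierarchical factor models; generic loading values make every square submatrix nonsingular.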
The theorem is established under the oracle scenario that these optimisations are always solved exactly. However, we should note that this cannot in general be achieved by polynomial-time algorithms, due to the complexity of these optimisations.

Remark 8. Theorem 2 does not explicitly require Condition 2, because Condition 4 is a stronger condition that implies Condition 2, as will be shown in Lemma 3 in the Appendix. Similar to Condition 3, this condition imposes further requirements on the numbers of child and descendant factors a factor can have. More specifically, for such a partition to exist, we need $|v^*_i| \ge 2|D^*_i| + 3$. Other than that, the full-rank requirement is easily satisfied by most hierarchical factor models. Similar to Condition 4, Condition 5 also requires $|v^*_i| \ge 2|D^*_i| + 3$. This condition plays a central role in ensuring that Step 6 in Algorithm 2 is valid. Condition 6 is very mild: it is automatically satisfied when the sample covariance matrix is constructed using independent and identically distributed observations from the true model. Condition 7 requires the absolute values of the true loading parameters to satisfy the same bound as the one used in the ICB method in Algorithm
2. Condition 8 requires that $c_{\max}$ and $d_{\max}$ are chosen sufficiently large so that the search space covers the true model.

3 Computation

As mentioned previously, the optimisation problem in $IC_k(c, d, \ldots, d)$ in Algorithm 2 can be cast into a continuous optimisation problem and solved by an augmented Lagrangian method (ALM). In what follows, we provide the details of the formulation and the ALM algorithm. With slight abuse of notation, we reparameterize the unique variance matrix as $\Psi_k = \mathrm{diag}(\psi^2_k)$, where $\mathrm{diag}(\cdot)$ converts a vector to a diagonal matrix whose diagonal entries are filled by the vector. Here, $\psi^2_k = \{\psi^2_{k1}, \ldots, \psi^2_{k,|\hat v^k|}\}$ is a $|\hat v^k|$-dimensional vector with $\psi_{k1}, \ldots, \psi_{k,|\hat v^k|} \in \mathbb{R}$. We further let $B_s = \{2 + (s-1)d, \ldots, 1 + sd\}$ for $s = 1, \ldots, c$. We note that, up to a relabelling of the partition sets or, equivalently, dropping the label ordering constraint $\min\{v^k_1\} < \min\{v^k_2\} < \cdots < \min\{v^k_c\}$, the set $A_k(c, d, \ldots, d)$ can be rewritten as
$$\{A = (a_{ij})_{|\hat v^k| \times (1+cd)} : a_{ij} a_{ij'} = 0 \text{ for } i = 1, \ldots, |\hat v^k|,\ j \in B_s,\ j' \in B_{s'},\ s \neq s'\}.$$
Therefore, we can solve $IC_k(c, d, \ldots, d)$ by solving the following continuous optimisation problem with nonlinear zero constraints:
$$\bar\Lambda_{k,c}, \bar\psi_{k,c} = \operatorname*{argmin}_{\Lambda_k, \psi_k} l\big(\tilde\Sigma_{k,0} + \Lambda_k (\Lambda_k)^\top + \mathrm{diag}(\psi^2_k),\, S_k\big) \quad \text{s.t. } \lambda_{k,ij} \lambda_{k,ij'} = 0 \text{ for } i = 1, \ldots, |\hat v^k|,\ j \in B_s,\ j' \in B_{s'},\ s \neq s'. \tag{10}$$
Once this optimisation is solved, for each $i$ there is one and only one $B_s$ such that $\bar\Lambda_{k,c}[\{i\}, B_s] \neq 0$. Therefore, we obtain a partition of $\{1, \ldots, |\hat v^k|\}$ by the sets $\{i : (\bar\Lambda_{k,c})[\{i\}, B_s] \neq 0\}$, $s = 1, \ldots, c$. We obtain $v^{k,c}_1, \ldots, v^{k,c}_c$ by reordering these sets to satisfy the constraint on their labels. We solve (10) by the ALM (see, e.g., Bertsekas, 2014), which is a standard approach to such problems. This method finds a solution to (10) by solving a sequence of unconstrained optimisation problems. More specifically, in the $t$th iteration, $t = 1, 2, \ldots$, the ALM minimises an augmented Lagrangian function that is constructed based on the result of the previous iteration.
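The zero constraints in (10) require each row of $\Lambda_k$ to load on at most one column block $B_s$. The sketch below (function names are ours; a scalar `S_residual` stands in for the loss $l(\cdot, S_k)$) spells out the constraint residuals and the augmented Lagrangian objective that the ALM minimises at each iteration:

```python
import numpy as np

def blocks(c: int, d: int):
    """0-based column blocks after the shared first column (column 0 is the
    parent factor): block s covers d consecutive columns."""
    return [list(range(1 + (s - 1) * d, 1 + s * d)) for s in range(1, c + 1)]

def constraint_terms(Lam: np.ndarray, c: int, d: int) -> np.ndarray:
    """All products lambda_{ij} * lambda_{ij'} with j, j' in different blocks;
    the constraints of (10) require every such product to be zero."""
    B = blocks(c, d)
    terms = []
    for i in range(Lam.shape[0]):
        for s in range(c):
            for s2 in range(s + 1, c):
                for j in B[s]:
                    for j2 in B[s2]:
                        terms.append(Lam[i, j] * Lam[i, j2])
    return np.array(terms)

def augmented_lagrangian(Lam, S_residual, beta, c_pen, c_blocks, d):
    """Loss + linear Lagrangian term + quadratic penalty, mirroring the
    objective minimised in Step 2 of Algorithm 3."""
    g = constraint_terms(Lam, c_blocks, d)
    return S_residual + beta @ g + 0.5 * c_pen * np.sum(g ** 2)

# A 3-variable example with c = 2 blocks of size d = 2 (columns 1-2 and 3-4):
# rows respecting the block structure give zero constraint residual.
demo = np.zeros((3, 5))
demo[0, 1], demo[1, 3] = 0.8, 0.5
residual_ok = constraint_terms(demo, 2, 2)
```

When a row loads on two different blocks, the corresponding product is nonzero, and the quadratic penalty drives the smaller of the two loadings toward zero as the penalty coefficient grows.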
Details of the ALM are given in Algorithm 3 below, where the function $h(\cdot)$ returns the second largest value of a vector. The updating rules of $\beta^{(t)}_{ijj'}$ and $c^{(t)}$ follow equations (1) and (47) in Chapter 2.2 of Bertsekas (2014).

Algorithm 3 An augmented Lagrangian method for solving $IC_k(c, d, \ldots, d)$
Input: Initial values $\Lambda^{(0)}$ and $\psi^{(0)}$, initial Lagrangian parameters $\beta^{(0)}_{ijj'}$ for $i = 1, \ldots, |\hat v^k|$, $j \in B_s$, $j' \in B_{s'}$ and $s \neq s'$, initial penalty coefficient $c^{(0)} > 0$, constants $c_\theta \in (0,1)$ and $c_\sigma > 1$, tolerances $\delta_1, \delta_2 > 0$, maximal iteration number $M_{\max}$.
1: for $t = 1, 2, \ldots, M_{\max}$ do
2:   Solve the following problem:
$$\Lambda^{(t)}_k, \psi^{(t)}_k = \operatorname*{argmin}_{\Lambda_k, \psi_k} l\big(\tilde\Sigma_{k,0} + \Lambda_k (\Lambda_k)^\top + \mathrm{diag}(\psi_k),\, S_k\big) + \Big(\sum_{i=1}^{|\hat v^k|} \sum_{j \in B_s, j' \in B_{s'}, s \neq s'} \beta^{(t)}_{ijj'} \lambda_{k,ij} \lambda_{k,ij'}\Big) + \frac{1}{2} c^{(t)} \Big(\sum_{i=1}^{|\hat v^k|} \sum_{j \in B_s, j' \in B_{s'}, s \neq s'} (\lambda_{k,ij} \lambda_{k,ij'})^2\Big).$$
3:   Update $\beta^{(t)}_{ijj'}$ and $c^{(t)}$ according to equations (11) and (12):
$$\beta^{(t)}_{ijj'} = \beta^{(t-1)}_{ijj'} + c^{(t-1)} \lambda^{(t)}_{k,ij} \lambda^{(t)}_{k,ij'}, \tag{11}$$
$$c^{(t)} = \begin{cases} c_\sigma c^{(t-1)} & \text{if } \Big(\sum_{i=1}^{|\hat v^k|} \sum_{j \in B_s, j' \in B_{s'}, s \neq s'} (\lambda^{(t)}_{k,ij} \lambda^{(t)}_{k,ij'})^2\Big)^{1/2} > c_\theta \Big(\sum_{i=1}^{|\hat v^k|} \sum_{j \in B_s, j' \in B_{s'}, s \neq s'} (\lambda^{(t-1)}_{k,ij} \lambda^{(t-1)}_{k,ij'})^2\Big)^{1/2}; \\ c^{(t-1)} & \text{otherwise.} \end{cases} \tag{12}$$
4:   if $\big(\|\Lambda^{(t)}_k - \Lambda^{(t-1)}_k\|_F^2 + \|\psi^{(t)}_k - \psi^{(t-1)}_k\|^2\big)^{1/2} / \sqrt{|\hat v^k|(2+d)} < \delta_1$ and $\max_{i \in \{1,\ldots,|\hat v^k|\}} h\big(\max_{j \in B_1} |\lambda^{(t)}_{k,ij}|, \max_{j \in B_2} |\lambda^{(t)}_{k,ij}|, \ldots, \max_{j \in B_c} |\lambda^{(t)}_{k,ij}|\big) < \delta_2$, then
5:     return $\Lambda^{(t)}_k, \psi^{(t)}_k$.
6:     Break
7:   end if
8: end for
Output: $\Lambda^{(t)}_k, \psi^{(t)}_k$.

The convergence of Algorithm 3 to a stationary point of (10) is guaranteed by Proposition 2.7 of Bertsekas (2014). We follow the recommended choices of $c_\theta = 0.25$ and $c_\sigma = 10$ in Bertsekas (2014) for the tuning parameters in Algorithm 3. We remark on the stopping criterion in the implementation of Algorithm 3. We monitor the convergence of the algorithm based on two criteria: (1) the change in parameter values in two consecutive steps, measured by $\big(\|\Lambda^{(t)}_k - \Lambda^{(t-1)}_k\|_F^2 + \|\psi^{(t)}_k - \psi^{(t-1)}_k\|^2\big)^{1/2} / \sqrt{|\hat v^k|(2+d)}$, and
(2) the distance between the estimate and the space $A_k(c, d, \ldots, d)$, measured by $\max_{i \in \{1,\ldots,|\hat v^k|\}} h\big(\max_{j \in B_1} |\lambda^{(t)}_{k,ij}|, \max_{j \in B_2} |\lambda^{(t)}_{k,ij}|, \ldots, \max_{j \in B_c} |\lambda^{(t)}_{k,ij}|\big)$. When both criteria are below their pre-specified thresholds, $\delta_1$ and $\delta_2$, respectively, we stop the algorithm. Let $M$ be the last iteration number. Then the selected partition of $\{1, \ldots, |\hat v^k|\}$, denoted by $v^{k,c}_1, \ldots, v^{k,c}_c$, is given by $v^{k,c}_s = \{i : |\lambda^{(M)}_{k,ij}| < \delta_2 \text{ for all } j \notin B_s\}$. For the simulation study in Section 4 and the real data analysis in Section 5, we choose $\delta_1 = \delta_2 = 0.01$.

Algorithm 3 can suffer from slow convergence when the penalty terms become large, resulting in an ill-conditioned optimisation problem. When the algorithm does not converge within $M_{\max}$ iterations, we suggest restarting the algorithm, using the current parameter value as a warm start. We set $M_{\max} = 100$ in the simulation study in Section 4 and the real data analysis in Section 5, and keep the maximum number of restarts at five. In addition, since the optimisation problem (10) is non-convex, Algorithm 3 may only converge to a local optimum, and this local solution may not satisfy condition C4. Therefore, we recommend running it with multiple random starting points and then choosing the best solution that satisfies condition C4. In our implementation, each time we solve (10), we start by running Algorithm 3 100 times, each with a random starting point. If more than 50 of the solutions satisfy C4, then we stop and choose the best solution (i.e., the one with the smallest objective function value) among these. Otherwise, we rerun Algorithm 3 100 times with random starting points, until 50 solutions satisfy C4 or we have restarted the algorithm five times.

4 Simulation Study

In this section, we examine the recovery of the hierarchical structure as well as the accuracy in estimating the loading matrix of the proposed method.
Suppose that $\hat v_1, \ldots, \hat v_{\hat K}$ are the estimated sets of variables loading on each factor, where $\hat K$ is the estimated number of factors, and $\hat\Lambda$ is the estimated loading matrix. To examine the recovery of the hierarchical factor structure, we measure the match between the true sets of variables loading on each factor and the estimated sets of variables. More specifically, the following evaluation criteria are considered:

1. Exact Match Criterion (EMC): $1(\hat K = K) \prod_{k=1}^{\min(\hat K, K)} 1(\hat v_k = v^*_k)$, which equals 1 when the hierarchical structure is fully recovered and 0 otherwise.

2. Layer Match Criterion (LMC): $1(\{\hat v_k\}_{k \in \hat L_t} = \{v^*_k\}_{k \in L_t})$, which is defined for each layer $t$. It equals 1 if the sets of variables loading on the factors in the $t$th layer are correctly learned and 0 otherwise, for $t = 1, \ldots, T$.

We then examine the accuracy of the estimated loading matrix. We calculate the mean square error (MSE) for $\hat\Lambda$ and $\hat\Psi$, after adjusting for the sign indeterminacy shown in Theorem 1. More specifically, recall that $\mathcal{Q}$ is the set of sign-flip matrices defined in Theorem 1. When the proposed method outputs a correct estimate of the hierarchical structure (i.e., EMC $= 1$), we define the MSEs for $\hat\Lambda$ and $\hat\Psi$
as
$$\mathrm{MSE}_\Lambda = \min_{Q \in \mathcal{Q}} \|\hat\Lambda - \Lambda^* Q\|_F^2 / (JK), \qquad \mathrm{MSE}_\Psi = \|\hat\Psi - \Psi^*\|_F^2 / J.$$
We consider the hierarchical factor structure shown in Figure 3, with the number of variables $J \in \{36, 54\}$, the number of layers $T = 4$, the number of factors $K = 10$, $L_1 = \{1\}$, $L_2 = \{2,3\}$, $L_3 = \{4,\ldots,8\}$, $L_4 = \{9,10\}$, and $v^*_1 = \{1,\ldots,J\}$, $v^*_2 = \{1,\ldots,J/3\}$, $v^*_3 = \{1+J/3,\ldots,J\}$, $v^*_4 = \{1,\ldots,J/6\}$, $v^*_5 = \{1+J/6,\ldots,J/3\}$, $v^*_6 = \{1+J/3,\ldots,5J/9\}$, $v^*_7 = \{1+5J/9,\ldots,7J/9\}$, $v^*_8 = \{1+7J/9,\ldots,J\}$, $v^*_9 = \{1+J/3,\ldots,4J/9\}$, $v^*_{10} = \{1+4J/9,\ldots,5J/9\}$. In the data generation model, $\Psi^*$ is a $J \times J$ identity matrix, and $\Lambda^*$ is generated by
$$\lambda^*_{jk} = \begin{cases} u_{jk} & \text{if } k = 1; \\ 0 & \text{if } k > 1,\ j \notin v^*_k; \\ (1 - 2x_{jk}) u_{jk} & \text{if } k > 1,\ j \in v^*_k, \end{cases}$$
for $j = 1, \ldots, J$ and $k = 1, \ldots, K$. Here, the $u_{jk}$ are i.i.d., following a Uniform$(0.5, 2)$ distribution, and the $x_{jk}$ are i.i.d., following a Bernoulli$(0.5)$ distribution. For each value of $J$, we generate the true loading matrix $\Lambda^*$ once and use it for all its simulations. We consider four simulation settings, given by the combinations of $J = 36, 54$ and two sample sizes, $N = 500, 2000$. For each setting, 100 independent simulations are generated. The results on the learning of the hierarchical factor structure and the estimation of the model parameters are shown in Tables 1 and 2.

[Figure 3: The hierarchical factor structure in the simulation study, with layers $L_1$–$L_4$ and variable sets $v^*_1, \ldots, v^*_{10}$.]

Table 1: The accuracy of the overall estimates of the hierarchical structure and parameters.

J    N      K̄      T̄     EMC   MSE_Λ       MSE_Ψ
36   500    10.11  4.00  0.83  2.90×10⁻³  1.58×10⁻²
36   2000   10.12  4.00  0.88  0.74×10⁻³  3.76×10⁻³
54   500    10.07  4.00  0.95  2.66×10⁻³  6.39×10⁻³
54   2000   10.03  4.00  0.98  0.65×10⁻³  1.64×10⁻³

In these tables, $\bar K$ and $\bar T$ report the average values of $\hat K$ and $\hat T$, respectively, and $|\hat L_2|$, $|\hat L_3|$ and $|\hat L_4|$ report the average numbers of factors in $\hat L_2$, $\hat L_3$ and $\hat L_4$, respectively.
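Because $Q$ in the definition of $\mathrm{MSE}_\Lambda$ ranges over diagonal $\pm 1$ matrices, the minimisation decouples across columns of the loading matrix, so no search over all $2^K$ sign patterns is needed. A small sketch (the function name is ours):

```python
import numpy as np

def mse_loading(Lam_hat: np.ndarray, Lam_star: np.ndarray) -> float:
    """min over sign-flip matrices Q of ||Lam_hat - Lam_star Q||_F^2 / (J K).
    Q is diagonal with +-1 entries, so the optimal sign is chosen
    independently for each column."""
    J, K = Lam_star.shape
    total = 0.0
    for k in range(K):
        plus = np.sum((Lam_hat[:, k] - Lam_star[:, k]) ** 2)
        minus = np.sum((Lam_hat[:, k] + Lam_star[:, k]) ** 2)
        total += min(plus, minus)
    return total / (J * K)
```

For instance, an estimate that equals the true loading matrix up to flipped column signs yields an MSE of exactly zero under this adjustment.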
As shown in Table 1, the proposed method perfectly recovers the true hierarchical factor structure more than 80% of the time under all the settings, with the highest accuracy of 98% achieved under the setting with $J = 54$ and $N = 2000$. We note that the learning accuracy increases with the sample size $N$ when $J$ is fixed, and also increases with the number of items $J$ when $N$ is fixed. This is likely because, when the number of factors is fixed, the signal about the hierarchical factor structure becomes stronger as the sample size $N$ and item size $J$ increase. The MSEs of $\hat\Lambda$ and $\hat\Psi$ show that the loading matrix and the unique variance matrix are accurately estimated when the hierarchical structure is correctly learned. Table 2 further shows that the bottleneck of the current learning problem is in learning the third layer of the factor hierarchy. That is, under all the settings, the factor structure of the second layer of the hierarchy is always correctly learned, and the overall accuracy of learning the factor hierarchy is completely determined by the accuracy in learning the structure of the third layer.

Table 2: The accuracy of the estimated hierarchical structure on each layer.

J    N      |L̂2|   LMC2   |L̂3|   LMC3   |L̂4|   LMC4
36   500    2.00   1.00   4.90   0.83   2.21   0.90
36   2000   2.00   1.00   4.88   0.88   2.24   0.88
54   500    2.00   1.00   4.95   0.95   2.12   0.95
54   2000   2.00   1.00   4.98   0.98   2.05   0.98

5 Real Data Analysis

In this section, we apply exploratory hierarchical factor analysis to a personality assessment dataset based on the International Personality Item Pool (IPIP) NEO 120 personality inventory (Johnson, 2014). We investigate the structure of the Agreeableness scale based on a sample of 1655 UK participants aged between 30 and 50 years. This scale consists of 24 items, which are designed to measure six facets of Agreeableness: Trust (A1), Morality (A2), Altruism (A3), Cooperation (A4), Modesty (A5) and Sympathy (A6). All the items are on a 1–5 Likert scale, and the reversely worded items have been reversely scored so that a larger score always means a higher level of agreeableness. There is no missing data. Detailed descriptions of the items can be found in Appendix A.

The proposed method returns a hierarchical factor structure with four layers and ten factors, as shown in Figure 4, and an estimated loading matrix $\hat\Lambda$, as shown in Table 3.

[Figure 4: The hierarchical factor structure from the real data analysis, with layers $L_1$–$L_4$ and variable sets $\hat v_1, \ldots, \hat v_{10}$.]

We now examine the learned model. We notice that the loadings on factor 1 are all positive, except for item 18, which has a small negative loading. Thus, factor 1 may be interpreted as a general Agreeableness factor. Factor 2 is loaded positively by all items designed to measure the Trust, Altruism, and Sympathy facets. Therefore, it may be interpreted as a higher-order factor of these facets. Factors 4–6 are child factors of factor 2, and based on the loading patterns, they may be interpreted as the Trust, Altruism, and Sympathy factors, respectively.
It is worth noting that items 11 ("Am indifferent to the feelings of others") and 23 ("Am not interested in other people's problems"), which are designed to measure the Altruism and Sympathy facets, now load weakly and negatively on factor 4 rather than on their corresponding factors. (Footnote 3: The data are downloaded from https://osf.io/tbmh5/)

Table 3: The estimated loading matrix $\hat\Lambda$ with four layers and ten factors.

Item  Facet  F1     F2    F3    F4     F5    F6    F7    F8     F9     F10
1     A1     0.48   0.11  0     0.69   0     0     0     0      0      0
2     A1     0.37   0.22  0     0.60   0     0     0     0      0      0
3     A1     0.33   0.21  0     0.69   0     0     0     0      0      0
4     A1     0.62   0.06  0     0.63   0     0     0     0      0      0
5     A2     0.40   0     0.58  0      0     0     0.59  0      0      0
6     A2     0.42   0     0.34  0      0     0     0.32  0      0      0
7     A2     0.53   0     0.47  0      0     0     0.61  0      0      0
8     A2     0.43   0     0.25  0      0     0     0     −0.07  −0.07  0
9     A3     0.28   0.35  0     0      0.46  0     0     0      0      0
10    A3     0.28   0.53  0     0      0.17  0     0     0      0      0
11    A3     0.50   0.48  0     −0.12  0     0     0     0      0      0
12    A3     0.46   0.30  0     0      0.22  0     0     0      0      0
13    A4     0.20   0     0.46  0      0     0     0     0.04   0      0.41
14    A4     0.43   0     0.18  0      0     0     0     −0.15  0      0.69
15    A4     0.58   0     0.29  0      0     0     0     −0.05  0      0.46
16    A4     0.52   0     0.41  0      0     0     0     −0.19  0      0.21
17    A5     0.34   0     0.43  0      0     0     0     0.59   0.46   0
18    A5     −0.07  0     0.38  0      0     0     0     0.87   −0.09  0
19    A5     0.08   0     0.41  0      0     0     0     1.00   0      0.03
20    A5     0.27   0     0.43  0      0     0     0     0.16   0      0.13
21    A6     0.25   0.44  0     0      0     0.74  0     0      0      0
22    A6     0.25   0.56  0     0      0     0.41  0     0      0      0
23    A6     0.46   0.53  0     −0.05  0     0     0     0      0      0
24    A6     0.37   0.37  0     0      0     0.39  0     0      0      0

Factor 3 is another child factor of factor 1. It is loaded by items designed to measure the facets of Morality, Cooperation, and Modesty. As all the nonzero loadings on factor 3 are positive, it can be interpreted as a higher-order factor of morality, cooperation, and modesty. Factor 7 is a child factor of factor 3. It is positively loaded by three items designed to measure the Morality facet, and can be interpreted accordingly. Factor 8 is another child factor of factor 3. It is loaded positively by all the items designed to measure the Modesty facet and by item 13 ("Love a good fight"), which is designed to measure the Cooperation facet, and negatively, although relatively weakly, by items 14–16, which are designed to measure the Cooperation facet, and by item 8 ("Obstruct others' plans"), which is designed to measure the Morality facet but is closely related in concept to cooperation. Since the positive loading on item 13 is very small, we can interpret factor 8 as a higher-order factor of modesty and weak aggression (the opposite of cooperation). Finally, factors 9 and 10 are child factors of factor 8. Based on the loading pattern, factor 10 may be interpreted as a cooperation factor, while factor 9 seems to be weak and is not very interpretable.

Table 4: The BICs of the hierarchical factor model and the competing models.

Model  HF          CFA         CBF         EBF
BIC    102,988.72  103,804.43  103,200.42  103,048.00
Finally, we compare the learned hierarchical factor model with several alternative models based on the Bayesian Information Criterion (BIC; Schwarz, 1978). The competing models include:

1. (CFA) A six-factor confirmatory factor analysis model with correlated factors. Each factor corresponds to a facet of Agreeableness, loaded by the four items designed to measure this facet.

2. (CBF) A confirmatory bi-factor model with one general factor and six group factors, where the group factors are allowed to be correlated. Each group factor corresponds to a facet of Agreeableness, loaded by the four items designed to measure this facet.

3. (EBF) An exploratory bi-factor model with one general factor and six group
factors, where the factors are allowed to be correlated. The bi-factor structure is learned using the method proposed in Qiao et al. (2025). Specifically, exploratory bi-factor models with $2, 3, \ldots, 12$ group factors are considered, among which the one with six group factors is selected based on the BIC.

Table 4 presents the BIC values of all the models, where the model labeled HF is the learned hierarchical factor model. According to the BIC, the proposed hierarchical factor model fits the data best. Detailed results on the estimated loading matrices and the estimated correlation matrices of the competing models are shown in Appendix B.

6 Discussions

This paper proposes a divide-and-conquer method with theoretical guarantees for exploring the underlying hierarchical factor structure of the observed data. The method divides the problem into learning the factor structure from the general factor down to finer-grained factors. It is computationally efficient, achieved through a greedy search algorithm and an augmented Lagrangian method. To our knowledge, this is the first statistically consistent method for exploratory hierarchical factor analysis that goes beyond the bifactor setting. Our simulation study shows that our method can accurately recover models with up to four factor layers, ten factors, and 36 items under practically reasonable sample sizes, suggesting that it may be suitable for various applications in psychology, education, and related fields. The proposed method is further applied to data from an Agreeableness personality scale, which yields a sensible model with four layers and ten factors that are all psychologically interpretable.

The current method assumes that a general factor exists and includes it in the first factor layer. However, this may not always be the case. For example, in psychology, there is still a debate about whether a general factor of personality exists (see, e.g., Revelle and Wilt, 2013).
In cases where we are unsure about the presence of a general factor, the current method can be easily modified to estimate a hierarchical factor model without a general factor, which can be achieved by modifying the first step of Algorithm 1.

The current method and asymptotic theory consider a relatively low-dimensional setting where the number of variables $J$ is treated as a constant that does not grow with the sample size. However, in some large-scale settings, $J$ can be on the scale of hundreds or even larger, so it may be better to treat it as a diverging term rather than a fixed constant. In that case, a larger penalty term may be needed in the information criterion to account for the larger parameter space, and the asymptotic analysis needs to be modified accordingly.

Finally, the current work focuses on linear hierarchical factor models, which are suitable for continuous variables. In many applications of hierarchical factor models, we have categorical data (e.g., binary, ordinal, and nominal data) that may be better analysed with non-linear factor models (see, e.g., Chen et al., 2020). We believe it is possible to extend the current framework to the exploratory analysis of non-linear hierarchical factor models. This extension, however, requires further work, as under non-linear factor models,
we can no longer use a sample covariance matrix as a summary statistic for the factor structure.

Appendix

A Real Data Analysis: Agreeableness Scale Item Key

Table A.1: Agreeableness Item Key

Item  Sign  Facet             Item detail
1     +     Trust (A1)        Trust others.
2     +     Trust (A1)        Believe that others have good intentions.
3     +     Trust (A1)        Trust what people say.
4     −     Trust (A1)        Distrust people.
5     −     Morality (A2)     Use others for my own ends.
6     −     Morality (A2)     Cheat to get ahead.
7     −     Morality (A2)     Take advantage of others.
8     −     Morality (A2)     Obstruct others' plans.
9     +     Altruism (A3)     Love to help others.
10    +     Altruism (A3)     Am concerned about others.
11    −     Altruism (A3)     Am indifferent to the feelings of others.
12    −     Altruism (A3)     Take no time for others.
13    −     Cooperation (A4)  Love a good fight.
14    −     Cooperation (A4)  Yell at people.
15    −     Cooperation (A4)  Insult people.
16    −     Cooperation (A4)  Get back at others.
17    −     Modesty (A5)      Believe that I am better than others.
18    −     Modesty (A5)      Think highly of myself.
19    −     Modesty (A5)      Have a high opinion of myself.
20    −     Modesty (A5)      Boast about my virtues.
21    +     Sympathy (A6)     Sympathize with the homeless.
22    +     Sympathy (A6)     Feel sympathy for those who are worse off than myself.
23    −     Sympathy (A6)     Am not interested in other people's problems.
24    −     Sympathy (A6)     Try not to think about the needy.

B Real Data Analysis: Additional Results

In this section, we present the estimated loading matrices and correlation matrices of the three competing models. The estimated loading matrices of the three models, denoted by $\hat\Lambda_{\mathrm{CFA}}$, $\hat\Lambda_{\mathrm{CBF}}$, $\hat\Lambda_{\mathrm{EBF}}$, are shown in (B.1), (B.2), and (B.3). The estimated correlation matrices of the three models, denoted by $\hat\Phi_{\mathrm{CFA}}$, $\hat\Phi_{\mathrm{CBF}}$, $\hat\Phi_{\mathrm{EBF}}$, are shown in (B.4), (B.5), and (B.6).
$\hat\Lambda_{\mathrm{CFA}}$ (B.1), rows = items 1–24, columns = facet factors A1–A6:

0.85 0    0    0    0    0
0.73 0    0    0    0    0
0.76 0    0    0    0    0
0.87 0    0    0    0    0
0    0.89 0    0    0    0
0    0.64 0    0    0    0
0    0.92 0    0    0    0
0    0.39 0    0    0    0
0    0    0.51 0    0    0
0    0    0.61 0    0    0
0    0    0.67 0    0    0
0    0    0.57 0    0    0
0    0    0    0.55 0    0
0    0    0    0.71 0    0
0    0    0    0.81 0    0
0    0    0    0.71 0    0
0    0    0    0    0.71 0
0    0    0    0    0.90 0
0    0    0    0    1.12 0
0    0    0    0    0.33 0
0    0    0    0    0    0.70
0    0    0    0    0    0.71
0    0    0    0    0    0.65
0    0    0    0    0    0.65

$\hat\Lambda_{\mathrm{CBF}}$ (B.2), rows = items 1–24, column 1 = general factor, columns 2–7 = group factors A1–A6:

0.42  0.73 0    0    0    0    0
0.35  0.64 0    0    0    0    0
0.30  0.72 0    0    0    0    0
0.53  0.69 0    0    0    0    0
0.46  0    0.83 0    0    0    0
0.49  0    0.41 0    0    0    0
0.57  0    0.71 0    0    0    0
0.47  0    0.11 0    0    0    0
0.25  0    0    0.45 0    0    0
0.24  0    0    0.60 0    0    0
0.43  0    0    0.51 0    0    0
0.41  0    0    0.41 0    0    0
0.28  0    0    0    0.70 0    0
0.54  0    0    0    0.43 0    0
0.68  0    0    0    0.41 0    0
0.66  0    0    0    0.27 0    0
0.30  0    0    0    0    0.73 0
−0.14 0    0    0    0    0.93 0
−0.03 0    0    0    0    1.09 0
0.38  0    0    0    0    0.35 0
0.15  0    0    0    0    0    0.73
0.16  0    0    0    0    0    0.75
0.38  0    0    0    0    0    0.52
0.27  0    0    0    0    0    0.58

$\hat\Lambda_{\mathrm{EBF}}$ (B.3), rows = items 1–24, column 1 = general factor, columns 2–7 = group factors:

0.35 0     0    0    0    0.77 0
0.29 0     0    0    0    0.67 0
0.27 0     0    0    0    0.73 0
0.45 0     0    0    0    0.74 0
0.65 0.64  0    0    0    0    0
0.55 0.31  0    0    0    0    0
0.70 0.61  0    0    0    0    0
0.45 0     0    0    0.11 0    0
0.18 0     0    0    0    0    0.48
0.19 0     0    0    0    0    0.58
0.36 0     0    0    0    0    0.56
0.32 0     0    0    0.69 0    0
0.50 0     0.18 0    0    0    0
0.68 −0.27 0    0    0    0    0
0.80 −0.21 0    0    0    0    0
0.69 0     0    0    0    0.10 0
0.38 0     0.67 0    0    0    0
0.02 0     0.93 0    0    0    0
0.13 0     1.09 0    0    0    0
0.44 0     0.28 0    0    0    0
0.20 0     0    0.74 0    0    0
0.18 0     0    0.75 0    0    0
0.29 0     0    0    0    0    0.63
0.27 0     0    0.59 0    0    0

$\hat\Phi_{\mathrm{CFA}}$ (B.4):

1     0.33  0.44  0.43  −0.06 0.37
0.33  1     0.42  0.62  0.25  0.37
0.44  0.42  1     0.39  0.15  0.80
0.43  0.62  0.39  1     0.11  0.30
−0.06 0.25  0.15  0.11  1     0.16
0.37  0.37  0.80  0.30  0.16  1

$\hat\Phi_{\mathrm{CBF}}$ (B.5):

1  0     0     0     0     0     0
0  1     0.01  0.24  0.03  −0.07 0.25
0  0.01  1     0.12  0.27  0.34  0.22
0  0.24  0.12  1     −0.08 0.18  0.74
0  0.03  0.27  −0.08 1     0.25  0.05
0  −0.07 0.34  0.18  0.25  1     0.17
0  0.25  0.22  0.74  0.05  0.17  1

$\hat\Phi_{\mathrm{EBF}}$ (B.6):

1  0     0     0     0     0     0
0  1     0.24  0.18  0.07  −0.03 0.12
0  0.24  1     0.13  0.11  −0.14 0.09
0  0.18  0.13  1     0.35  0.24  0.71
0  0.07  0.11  0.35  1     0.24  0.72
0  −0.03 −0.14 0.24  0.24  1     0.32
0  0.12  0.09  0.71  0.72  0.32  1

C Proof of Theorem 1

In this section, we give the proof of Theorem 1.
For simplicity of notation, for any matrix $A \in \mathbb{R}^{m \times n}$, $S_1 \subset \{1, \ldots, m\}$ and $S_2 \subset \{1, \ldots, n\}$, we denote $A[S_1, :] = A[S_1, \{1, \ldots, n\}]$ and $A[:, S_2] = A[\{1, \ldots, m\}, S_2]$.

Proof. Suppose that there exists a hierarchical factor model satisfying constraints C1–C4 and
the corresponding loading matrix $\Lambda$ and unique variance matrix $\Psi$ satisfy $\Sigma = \Lambda\Lambda^\top + \Psi$ and $\Sigma = \Sigma^*$. We present the proof of Theorem 1 recursively. We first prove that $Ch_1 = Ch^*_1$, $v_k = v^*_k$ for all $k \in Ch^*_1$, and $\lambda_1 = \lambda^*_1$ or $\lambda_1 = -\lambda^*_1$ hold, where $v_1, \ldots, v_K$ are the sets of variables for each factor according to $\Lambda$, $Ch_1, \ldots, Ch_K$ are the child factors of each factor according to the hierarchical factor model given $\Lambda$, and $\lambda_1$ and $\lambda^*_1$ are the first columns of $\Lambda$ and $\Lambda^*$, respectively. According to Condition 2, $\Lambda\Lambda^\top = \Lambda^*(\Lambda^*)^\top$ and $\Psi = \Psi^*$ hold.

If $Ch^*_1 = \emptyset$, the proof is trivial. If $Ch^*_1 \neq \emptyset$, for any $k \in Ch^*_1$, we denote $B_{k,i} = v^*_k \cap v_i$, $i \in Ch_1$. If $Ch^*_k = \emptyset$, consider the following cases:

1. $|\{i \in Ch_1 : |B_{k,i}| \ge 1\}| \ge 4$, which indicates that there exist four factors, denoted by $i_1, i_2, i_3, i_4$, such that $v_{i_j} \cap v^*_k \neq \emptyset$ for $j = 1, \ldots, 4$. In this case, choose $j_1 \in B_{k,i_1}, \ldots, j_4 \in B_{k,i_4}$. Consider $\Sigma[\{j_1,j_2\}, \{j_3,j_4\}] = \Sigma^*[\{j_1,j_2\}, \{j_3,j_4\}]$, which is equivalent to
$$\Lambda[\{j_1,j_2\}, \{1\}] (\Lambda[\{j_3,j_4\}, \{1\}])^\top = \Lambda^*[\{j_1,j_2\}, \{1,k\}] (\Lambda^*[\{j_3,j_4\}, \{1,k\}])^\top. \tag{C.1}$$
Notice that the rank of the matrix on the left side of (C.1) is 1, while according to Condition 3, the rank of the matrix on the right side is 2, which contradicts (C.1). Thus, such a case does not hold.

2. There exist $i_1$ such that $|B_{k,i_1}| \ge 2$ and $i_2 \neq i_1$ such that $|B_{k,i_2}| \ge 1$. In this case, for $j_1, j_2 \in B_{k,i_1}$ and $j_3 \in B_{k,i_2}$, consider $\Sigma[\{j_1,j_2,j_3\}, \{j_1,j_2,j_3\}] = \Sigma^*[\{j_1,j_2,j_3\}, \{j_1,j_2,j_3\}]$, which is equivalent to
$$\Lambda[\{j_1,j_2,j_3\}, :] (\Lambda[\{j_1,j_2,j_3\}, :])^\top = \Lambda^*[\{j_1,j_2,j_3\}, \{1,k\}] (\Lambda^*[\{j_1,j_2,j_3\}, \{1,k\}])^\top. \tag{C.2}$$
According to Condition 3, the rank of the matrix on the right side of (C.2) is 2. Moreover, according to Condition 3, the rank of $\Sigma^*[\{j_1,j_2\}, \{j_1,j_2\}]$ is 2; thus, the rank of $\Lambda[\{j_1,j_2\}, :]$ is 2. However, since $\lambda_{j_1,s} = 0$ and $\lambda_{j_2,s} = 0$ for any $s \in \{i_2\} \cup D_{i_2}$ and $\Lambda[\{j_3\}, \{i_2\} \cup D_{i_2}] \neq 0$, the rank of $\Lambda[\{j_1,j_2,j_3\}, :]$ is 3. Then, the rank of the matrix on the left side of (C.2) is 3, which contradicts (C.2). Thus, such a case does not hold.

3. $|v^*_k| = 3$, and there exist $i_1, i_2, i_3$ such that $|B_{k,i_1}| = |B_{k,i_2}| = |B_{k,i_3}| = 1$.
We denote by $\{j_1\} = B_{k,i_1}$, $\{j_2\} = B_{k,i_2}$ and $\{j_3\} = B_{k,i_3}$. Consider $\Sigma[\{j_1,j_2,j_3\}, \{j_1,j_2,j_3\}] = \Sigma^*[\{j_1,j_2,j_3\}, \{j_1,j_2,j_3\}]$, which is equivalent to (C.2). In this case, the rank of the matrix on the left side of (C.2) is 3, while according to Condition 3, the rank of the matrix on the right side is 2, which contradicts (C.2). Thus, such a case does not hold.

4. There exists a unique $i \in Ch_1$ such that $B_{k,i} = v^*_k$, which indicates that $v^*_k \subset v_i$. Only this case is allowed.

When $Ch^*_k \neq \emptyset$, consider the following cases:

1. There exist $s \in Ch^*_k$ and $i \in Ch_1$ such that $|B_{k,i} \cap v^*_s| \ge 2$. In this case, we assert that
$$\big| \big( \textstyle\cup_{i' \neq i, i' \in Ch_1} B_{k,i'} \big) \cap \big( \cup_{s' \neq s, s' \in Ch^*_k} v^*_{s'} \big) \big| \le 1. \tag{C.3}$$
Otherwise, choose $j_1, j_2 \in B_{k,i} \cap v^*_s$ and $j_3, j_4 \in ( \cup_{i' \neq i, i' \in Ch_1} B_{k,i'} ) \cap ( \cup_{s' \neq s, s' \in Ch^*_k} v^*_{s'} )$. Consider $\Sigma[\{j_1,j_2\}, \{j_3,j_4\}] = \Sigma^*[\{j_1,j_2\}, \{j_3,j_4\}]$, which is equivalent to (C.1). Notice that the rank of the matrix on the left side of (C.1) is 1, while according to Condition 3, the rank of the matrix on the right side is 2, which contradicts (C.1). Thus, the assertion in (C.3) holds. Noticing that $|v^*_{s'}| \ge 3$ for all $s' \neq s$, $s' \in Ch^*_k$, according to (C.3), $|B_{k,i} \cap v^*_{s'}| \ge 2$ for all $s' \in Ch^*_k$. Similar to (C.3),
$$\big| \big( \textstyle\cup_{i' \neq i, i' \in Ch_1} B_{k,i'} \big) \cap v^*_s \big| \le 1 \tag{C.4}$$
holds. Combining (C.3) with (C.4), we have $| ( \cup_{i' \neq i, i' \in Ch_1} B_{k,i'} ) \cap ( \cup_{s' \in Ch^*_k} v^*_{s'} ) | \le 2$.

If $| ( \cup_{i' \neq i, i' \in Ch_1} B_{k,i'} ) \cap ( \cup_{s' \in Ch^*_k} v^*_{s'} ) | = 2$, we denote by $s' \neq s$ the factor for which (C.3) holds. We choose $j_1, j_2 \in B_{k,i} \cap v^*_s$, $j_3, j_4 \in B_{k,i} \cap v^*_{s'}$, $j_5 \in ( \cup_{i' \neq i, i' \in Ch_1} B_{k,i'} ) \cap v^*_s$ and $j_6 \in ( \cup_{i' \neq i, i' \in Ch_1} B_{k,i'} ) \cap v^*_{s'}$. We further require that, when $Ch^*_s \neq \emptyset$, $j_1$, $j_2$ and $j_5$ belong to different child factors of factor $s$, and, when $Ch^*_{s'} \neq \emptyset$, $j_3$, $j_4$ and $j_6$ belong to different child factors of factor $s'$. Such requirements can always be met. Then, consider $\Sigma[\{j_1,j_2,j_3,j_4\}, \{j_5,j_6\}] = \Sigma^*[\{j_1,j_2,j_3,j_4\}, \{j_5,j_6\}]$, which, according to the choice of $j_1, \ldots, j_6$, is equivalent to
$$\Lambda[\{j_1,j_2,j_3,j_4\}, \{1\}] (\Lambda[\{j_5,j_6\}, \{1\}])^\top = \Lambda^*[\{j_1,j_2,j_3,j_4\}, \{1,k,s,s'\}] (\Lambda^*[\{j_5,j_6\}, \{1,k,s,s'\}])^\top. \tag{C.5}$$
Notice that the rank of the matrix on the left side of (C.5) is 1. For the matrix on the right side of (C.5), according to Condition 3, the rank of $\Lambda^*[\{j_1,j_2,j_3,j_4\}, \{1,k,s,s'\}]$ is 4, and the rank of $\Lambda^*[\{j_5,j_6\}, \{1,k,s,s'\}]$ is 2. According to Sylvester's rank inequality (see, e.g., Horn and Johnson, 2012),
$$\mathrm{rank}\big( \Lambda^*[\{j_1,j_2,j_3,j_4\}, \{1,k,s,s'\}] (\Lambda^*[\{j_5,j_6\}, \{1,k,s,s'\}])^\top \big) \ge \mathrm{rank}\big( \Lambda^*[\{j_1,j_2,j_3,j_4\}, \{1,k,s,s'\}] \big) + \mathrm{rank}\big( \Lambda^*[\{j_5,j_6\}, \{1,k,s,s'\}] \big) - 4 = 2,$$
which contradicts (C.5).

If $| ( \cup_{i' \neq i, i' \in Ch_1} B_{k,i'} ) \cap ( \cup_{s' \in Ch^*_k} v^*_{s'} ) | = 1$, without loss of generality, we denote $( \cup_{i' \neq i, i' \in Ch_1} B_{k,i'} ) \cap ( \cup_{s' \in Ch^*_k} v^*_{s'} ) = B_{k,i_1} \cap v^*_{s_1} = \{j\}$. Here $i_1 \in Ch_1$, $i_1 \neq i$, and $s_1 \in Ch^*_k$, $s_1 \neq s$. Consider $\Sigma[v^*_k, v^*_k] = \Sigma^*[v^*_k, v^*_k]$, which is equivalent to
$$\Lambda[v^*_k, :] (\Lambda[v^*_k, :])^\top = \Lambda^*[v^*_k, \{1,k\} \cup D^*_k] (\Lambda^*[v^*_k, \{1,k\} \cup D^*_k])^\top. \tag{C.6}$$
According to Condition 3, the rank of $\Lambda^*[v^*_k \setminus \{j\}, \{1,k\} \cup D^*_k]$ is $2 + |D^*_k|$. Thus, the rank of $\Lambda[v^*_k \setminus \{j\}, :]$ is $2 + |D^*_k|$. Since $\Lambda[\{j\}, \{i_1\}] \neq 0$ and $\Lambda[v^*_k \setminus \{j\}, \{i_1\}] = 0$, the rank of $\Lambda[v^*_k, :]$ is $3 + |D^*_k|$, which contradicts (C.6).

If $| ( \cup_{i' \neq i, i' \in Ch_1} B_{k,i'} ) \cap ( \cup_{s' \in Ch^*_k} v^*_{s'} ) | = 0$, then there exists a unique $i \in Ch_1$ such that $B_{k,i} = v^*_k$; that is, $v^*_k \subset v_i$. Only this case is allowed.

2. $|B_{k,i} \cap v^*_s| \le 1$ for all $i \in Ch_1$ and $s \in Ch^*_k$. If there exist some $i \in Ch_1$ and $s \in Ch^*_k$ such that $|B_{k,i} \cap v^*_s| = 1$ and $|B_{k,i} \cap v^*_{s'}| = 0$ for all $s' \in Ch^*_k$, $s' \neq s$, we denote by $\{j\} = B_{k,i} \cap v^*_s$.
Similar to the proof in (C.6), the ranks of the matrices on the two sides are unequal. Thus, the assumption does not hold. Now assume that there exist iPCh1,s1PCh˚ kands2PCh˚ k,s2‰s1such that |Bk,iXv˚ s1| “1 and |Bk,iXv˚ s2| “1. If there further exists s3PCh˚ k,s3‰s1,s2such that |Bk,iXv˚ s3| “0, we denote by tj1u “Bk,iXv˚ s1and tj2u “Bk,iXv˚ s2. Consider Σrv˚ s3,tj1,j2us“Σ˚ rv˚ s3,tj1,j2us, which is equivalent to Λrv˚ s3,t1uspΛrtj1,j2u,t1usqJ“Λ˚ rv˚ s3,t1,kuspΛ˚ rtj1,j2u,t1,kusqJ. Noticing that the rank of the matrix on the left side is 1 while, according to Condition 3, the rank of the matrix on the right side is 2, the assumption does not hold. Thus, for any iPCh1, if there exists some sPCh˚ ksuch that |Bk,iXv˚ s| “1, then |Bk,iXv˚ s| “1 for all sPCh˚ k, which indicates that the |v˚ s|are the same for sPCh˚ k. If|Ch˚ k| ě3, we denote by s1,s2,s3PCh˚ k andi1,i2,i3PCh1such that tj1u “Bk,i1Xv˚ s1,tj2u “Bk,i2Xv˚ s1,tj3u “Bk,i3Xv˚ s2,tj4u “Bk,i3Xv˚ s3. Consider Σ rtj1,j2u,tj3,j4us“Σ˚ rtj1,j2u,tj3,j4us, which is equivalent to Λrtj1,j2u,t1uspΛrtj3,j4u,t1usqJ“Λ˚ rtj1,j2u,t1,kuspΛ˚ rtj3,j4u,t1,kusqJ. Since the rank of the matrix on the left side is 1 while, according to Condition 3, the rank of the matrix on the right side is 2, the assumption does not hold. Finally, if |Ch˚ k| “2, we denote by tj1u “Bk,i1Xv˚ s1,tj2u “Bk,i1Xv˚ s2,j3,j4Pv˚ s1,j3,j4‰j1and j5,j6Pv˚ s2,j5,j6‰j2that satisfy: if |Ch˚ s1| ‰0,j1andj3,j4belong
|
https://arxiv.org/abs/2505.09043v1
|
to different child factors of factor s1and if |Ch˚ s2| ‰0,j2andj5,j6belong to different child factors of factor s2. This requirement can always be met. Consider Σ rtj1,j2u,tj3,j4,j5,j6us“Σ˚ rtj1,j2u,tj3,j4,j5,j6us, which is equivalent to Λrtj1,j2u,t1uspΛrtj3,j4,j5,j6u,t1usqJ “Λ˚ rtj1,j2u,t1,k,s1,s2uspΛ˚ rtj3,j4,j5,j6u,t1,k,s1,s2usqJ.(C.7) The rank of the matrix on the first row of (C.7) is 1. According to Con dition 3, the rank of Λ˚ rtj3,j4,j5,j6u,t1,k,s1,s2usis 4 and the rank of Λ˚ rtj1,j2u,t1,k,s1,s2usis 2. According to Sylvester’s rank inequality, rank` Λ˚ rtj1,j2u,t1,k,s1,s2uspΛ˚ rtj3,j4,j5,j6u,t1,k,s1,s2usqJ˘ ěrank` Λ˚ rtj1,j2u,t1,k,s1,s2us˘ `rank` Λ˚ rtj3,j4,j5,j6u,t1,k,s1,s2us˘ ´4 “2, which contradicts (C.7). Thus, the assumption does not hold. From the previous proof, for any kPCh˚ 1, there exists iPCh1such thatv˚ kĂvi. For anyiPCh1, we denoteCi“ tkPCh˚ 1:v˚ kĂviu. Consider Σ rvi,vis“Σ˚ rvi,vis, which is equivalent to Λrvi,t1,iuYDispΛrvi,t1,iuYDisqJ“Λ˚ rvi,t1uYCiYpYkPCiD˚ kqspΛ˚ rvi,t1uYCiYpYkPCiD˚ kqsqJ. According to Condition 3, the rank of Λ˚ rvi,t1uYCiYpYkPCiD˚ kqsis 1 ` |Ci| `ř kPCi|D˚ k|. Thus, 1 ` |Di| ě 37 |Ci| `ř kPCi|D˚ k|. Taking summation over ion both sides of the inequality, we have K´1“ÿ iPCh1p1` |Di|q ěÿ iPCh1˜ |Ci| `ÿ kPCi|D˚ k|¸ “ÿ kPCh˚ 1p1` |D˚ k|q “K´1. Thus, we have 1` |Di| “ |Ci| `ÿ kPCi|D˚ k|, (C.8) for anyiPCh1. According to Lemma 5.1 of Anderson and Rubin (1956), there exist s an orthogonal matrix Risuch that Λrvi,t1,iuYDis“Λ˚ rvi,t1uYCiYpYkPCiD˚ kqsRi. (C.9) On the other hand, for i,i1PCh1, consider Σ rvi,vi1s“Σ˚ rvi,vi1s, which is equivalent to Λrvi,t1uspΛrvi1,t1usqJ“Λ˚ rvi,t1uspΛ˚ rvi1,t1usqJ. (C.10) Combining (C.9) with (C.10), either Λ rvi,t1us“Λ˚ rvi,t1usor Λ rvi,t1us“ ´Λ˚ rvi,t1usholds. Without loss of generality, we assume Λ rvi,t1us“Λ˚ rvi,t1us. Then, we further have λ1“λ˚ 1. We also have to show that |Ci| “1 for alliPCh1. Otherwise, assume that there exists some iPCh1 such that |Ci| ě2. 
Noticing that |Di| ě2, fors1,s2PCh1, there exist k1,k2PCisuch thatvs1Xv˚ k1‰ H andvs2Xv˚ k2‰ H. Consider Σrvs1Xv˚ k1,vs2Xv˚ k2s“Σ˚ rvs1Xv˚ k1,vs2Xv˚ k2s. Combined with Λrvs1Xv˚ k1,t1us“ Λ˚ rvs1Xv˚ k1,t1usand Λrvs2Xv˚ k2,t1us“Λ˚ rvs2Xv˚ k2,t1us, we have Λrvs1Xv˚ k1,tiuspΛrvs2Xv˚ k2,tiusqJ“0, which indicates Λrvs1Xv˚ k1,tius“0or Λrvs2Xv˚ k2,tius“0, contradicting the definition of vs1andvs2. Thus, |Ci| “1 for alliPCh1. We have thus proved Ch 1“Ch˚ 1andvk“v˚ kfor allkPCh˚ 1. Combined with λ1“λ˚ 1, Σ“Σ˚can be separated into |Ch˚ 1|equations Λrv˚ k,tkuYDkspΛrv˚ k,tkuYDksqJ“Λ˚ rv˚ k,tkuYD˚ kspΛ˚ rv˚ k,tkuYD˚ ksqJ, kPCh˚ 1, and according to (C.8) we have |Dk| “ |D˚ k|for allkPCh˚ 1. Thus, for factor ion thetth layer, t“2,...,T, we apply the same argument as for factor 1 and finally obtain Λ “Λ˚Qand Ψ “Ψ˚for some sign-flip matrix Q. D Proof of Theorem 2 We first introduce some notations and lemmas needed for the proof of Theorem 2. Suppose that A,ε PRmˆn. We denote by σ1pAq ě...ěσmin pm,n qpAq ě0 the singular values of A, and by U1,...,U min pm,n qthe corresponding right (left) singular vectors. Similarly, we denote by σ1pA`εq ě...ěσmin pm,n qpA`εq ě0 the singular values of A`εand by U1 1,...,U1 min pm,n qthe corresponding right (left) singular vectors. Lemma 1 (Weyl's bound, Weyl (1912)) . max 1ďiďmin pm,n q|σi´σ1 i| ď }ε}F. We further assume that the rank of Aisr. We denote by U“ pU1,...,U jqandU1“ pU1 1,...,U1 jq, 1ďjďr. Lemma 2 (Wedin's Theorem, Wedin (1972)) . There exists some orthogonal matrix Rsuch that
}UR´U1}Fď2}ε}F{δ, whereδ“σjpAq ´σj`1pAq. We refer the proof of Lemma 2 to Theorems 4 and 19 of O'Rourke et al. (2018). Lemma 3. Given aJˆKdimensional matrix Λfollowing a hierarchical structure that satisfies constraints C1-C4 and a JˆJdimensional diagonal matrix Ψ“diagpψ1,...,ψ Jqwithψją0,j“1,...,J, where Λsatisfies Condition 4 and Condition 7. If there exists a sequence of JˆKdimensional random matrices tpΛNu8 N“1and a sequence of JˆJdimensional diagonal random matrices tpΨNu8 N“1, where pΨN“diagppψN,1,..., pψN,Jqwith pψN,jě0,j“1,...,J, such that tpΛNu8 N“1satisfies Condition 7 and }pΛNpΛJ N`pΨN´ΛΛJ´Ψ}F“OPp1{? Nq, (D.1) then we have }pΛNpΛJ N´ΛΛJ}F“OPp1{? Nqand }pΨN´Ψ}F“OPp1{? Nq. Lemma 3 is a generalization of Theorem 5.1 of Anderson and Rubin (1956), and its proof follows the proof of that theorem. Proof. Forj“1,...,J, according to Condition 4, there exist E1,E2Ă t1,...,J uztjuwith |E1| “ |E2| “K andE1XE2“ Hsuch that Λ rE1,:sand Λ rE2,:sare full-rank matrices. Without loss of generality, we assume that Λ and pΛNcan be expressed as Λ“¨ ˚˚˚˚˚˚˚˚˚˚˝Λ1 λj Λ2 Λ3˛ ‹‹‹‹‹‹‹‹‹‹‚,pΛN“¨ ˚˚˚˚˚˚˚˚˚˚˝pΛN,1 pλN,j pΛN,2 pΛN,3˛ ‹‹‹‹‹‹‹‹‹‹‚, where we denote by Λ 1“ΛrE1,:s, Λ2“ΛrE2,:s,λj“Λrtju,:sthejth row of Λ, and Λ 3consists of the rest of the rows in Λ. pΛN,1,pΛN,2,pλN,jand pΛN,3are the corresponding sub-matrices of pΛN. Thus, we have ΛrE1YE2Ytju,:sΛJ rE1YE2Ytju,:s“¨ ˚˚˚˚˚˚˝Λ1ΛJ 1Λ1λJ jΛ1ΛJ 2 λjΛJ 1λjλJ jλjΛJ 2 Λ2ΛJ 1Λ2λJ jΛ2ΛJ 2˛ ‹‹‹‹‹‹‚, and ppΛNqrE1YE2Ytju,:sppΛNqJ rE1YE2Ytju,:s“¨ ˚˚˚˚˚˚˝pΛN,1pΛJ N,1pΛN,1pλJ N,jpΛN,1pΛJ N,2 pλN,jpΛJ N,1pλN,jpλJ N,jpλN,jpΛJ N,2 pΛN,2pΛJ N,1pΛN,2pλJ N,jpΛN,2pΛJ N,2˛ ‹‹‹‹‹‹‚. According to (D.1), we have }Λ1λJ j´pΛN,1pλJ N,j} “OPp1{? Nq, }Λ2λJ j´pΛN,2pλJ N,j} “OPp1{? Nq, }Λ1ΛJ 2´pΛN,1pΛJ N,2}F“OPp1{?
Nq.(D.2) Noticing that the rank of the pK`1q ˆ pK`1qdimensional matrix ¨ ˚˚˝Λ1λJ jΛ1ΛJ 2 λjλJ jλjΛJ 2˛ ‹‹‚and¨ ˚˚˝pΛN,1pλJ N,jpΛN,1pΛJ N,2 pλN,jpλJ N,jpλN,jpΛJ N,2˛ ‹‹‚, is at mostK, thus we have det¨ ˚˚˝Λ1λJ jΛ1ΛJ 2 λjλJ jλjΛJ 2˛ ‹‹‚“det¨ ˚˚˝pΛN,1pλJ N,jpΛN,1pΛJ N,2 pλN,jpλJ N,jpλN,jpΛJ N,2˛ ‹‹‚“0. 40 Then, we have p´1qKλjλJ jdetpΛ1ΛJ 2q `fpΛ1λJ j,λjΛJ 2q “p´1qKpλN,jpλJ N,jdetppΛN,1pΛJ N,2q `fppΛN,1pλJ N,j,pλN,jpΛJ N,2q “0,(D.3) wherefp¨qis a matrix function and is Lipschitz continuous with respect to the ele ments of the matrix. Moreover, the Lipschitz constant of fp¨qonly depends on Kandτ. det p¨qis also a Lipschitz continuous matrix function, the Lipschitz constant of which only depends on Kandτ. Combined with (D.2) and (D.3), we have |λjλJ j´pλN,jpλJ N,j| ¨ |detpΛ1ΛJ 2q| ď|fpΛ1λJ j,λjΛJ 2q ´fppΛN,1pλJ N,j,pλN,jpΛJ N,2q| ` |pλN,jpλJ N,j| ¨ |detpΛ1ΛJ 2q ´detppΛN,1pΛJ N,2q| “OPp1{? Nq. Noticing that |detpΛ1ΛJ 2q| ą0, we have |λjλJ j´pλN,jpλJ N,j| “OPp1{? Nq. Combined with |λjλJ j`ψj´pλN,jpλJ N,j´pψN,j| “OPp1{? Nq, we have |ψj´pψN,j| “OPp1{? Nqforj“1,...,J. Thus, we have }pΨN´Ψ}F“OPp1{? Nqand further more we have }pΛNpΛJ N´ΛΛJ}F“OPp1{? Nq. We now give the proof of Theorem 2. Proof.The proof also follows a recursive manner. We first prove that with p robability approaching 1 as Ngrows to infinity, xCh1“Ch˚ 1,pvi“v˚ ifor alliPCh˚ 1and as a by-product, we further have min p}rλ1´ λ˚ 1},}rλ1`λ˚ 1}q “OPp1{? Nq. Then, we apply the argument to the factors in the tth layer,t“2,...,T. First, we show that when c˚“ |Ch˚ 1|, the ĂIC1,c˚given by Algorithm 2 takes the form ĂIC1,c˚“ÿ kPCh˚ 1` |v˚ k|p|D˚ k| `1q ´ |D˚ k|p|D˚ k| `1q{2˘ logN`OPp1q. (D.4) Whenc˚“0, combined
with Condition 6, we have }rΛ1,0rΛJ 1,0`rΨ1,0´Σ˚}F“OPp1{? Nqaccording to the M-estimation theory (see, e.g., van der Vaart, 2000). By Taylor’s e xpansion, we further have lprΛ1,0rΛJ 1,0`rΨ1,0;Sq “OpN}rΛ1,0rΛJ 1,0`rΨ1,0´Σ˚}2 Fq `OpN}S´Σ˚}2 Fq. Thus ĂIC1,c˚“OPp1q, which satisfies (D.4). 41 Whenc˚‰0, we first assume that the v1,c˚ 1,...,v1,c˚ c˚derived by Step 4 of Algorithm 2 equal to v˚ 2,...,v˚ 1`c˚. Then we will show that Step 6 of Algorithm 2 outputs rdc˚ s“1` |D˚ 1`s|fors“1,...,c˚ with probability approaching 1 as Ngrows to infinity. When s“1, ford1ě1` |D˚ 2|, the parametric space rA1pc˚,d1,minp|v˚ 3|,dq,...,minp|v˚ 1`c˚|,dqqdefined in ĂIC1pc,d1,minp|v˚ 3|,dq,...,minp|v˚ 1`c˚|,dqq, (D.5) includes the true parameters Λ˚and Ψ˚. Thus, the solution to (D.5), denoted by Λd1and Ψd1, satisfies ››Λd1ΛJ d1`Ψd1´Σ˚›› F“OPp1{? Nq, and we further have ĂIC1pc,d1,minp|v˚ 3|,dq,...,minp|v˚ c˚|,dqq “l` Λd1ΛJ d1`Ψd1;S˘ `p1` Λd1˘ logN “OPp1q `` |v˚ 2|d1´d1pd1´1q{2˘ logN`ÿ 2ďsďc˚` |v˚ s`1|ds´dspds´1q{2˘ logN,(D.6) where we denote by ds“minp|v˚ s|,dq,s“2,...,c˚for simplicity. Noticing that the third term of (D.6) is independent of the choice of d1and the second term is strictly increasing with respect to d1for 1 ` |D˚ 2| ď d1ďminp|v˚ 2|,dq, we then have 1` |D˚ 2| “ argmin 1`|D˚ 2|ďd1ďmin p|v˚ 2|,dqĂIC1pc,d1,minp|v˚ 3|,dq,...,minp|v˚ c˚|,dqq, (D.7) with probability approaching 1 as Nincreases to infinity. Whend1ă1` |D˚ 2|, for any Λ,ΨPrA1pc˚,d1,minp|v˚ 3|,dq,...,minp|v˚ 1`c˚|,dqq, we denote by Σ “ΛΛJ` Ψ. According to Condition 5, there exist E1,E2Ăv˚ 2with |E1| “2`|D˚ 2|,|E2| “1`|D˚ 2|andE1XE2“ H such that Λ˚ rE1,t1,2uYD˚ 2sand Λ˚ rE2,t2uYD˚ 2sare of full rank. We further denote by B1“ t2,...,1`d1u. First we have }Σ´Σ˚}Fě1? 2´››Σrv˚ 2,v˚ is´Σ˚ rv˚ 2,v˚ is›› F`››ΣrE1,E2s´Σ˚ rE1,E2s›› F¯ , (D.8) for anyi“3,...,1`c˚.We denote by δ“››Σrv˚ 2,v˚ is´Σ˚ rv˚ 2,v˚ is›› F. Notice that Σrv˚ 2,v˚ is“Λrv˚ 2,t1uspΛrv˚ i,t1usqJ, and Σ˚ rv˚ 2,v˚ is“Λ˚ rv˚ 2,t1uspΛ˚ rv˚ i,t1usqJ. 
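As a sanity check on this step, the conclusion of Lemma 2 (Wedin's theorem) can be verified numerically: a small Frobenius perturbation of a rank-one block moves its normalized leading singular vector by at most the order of the perturbation, up to sign. The sketch below uses illustrative dimensions and stand-in vectors, not the paper's actual loading matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the unit-direction loading columns that
# generate a rank-one covariance block Sigma* = u v^T.
u = rng.normal(size=8)
u /= np.linalg.norm(u)
v = rng.normal(size=6)
v /= np.linalg.norm(v)
sigma_star = np.outer(u, v)

# Perturbed block with Frobenius error exactly delta.
delta = 1e-3
noise = rng.normal(size=sigma_star.shape)
sigma = sigma_star + delta * noise / np.linalg.norm(noise)

# Leading left singular vector of the perturbed block.
u_hat = np.linalg.svd(sigma)[0][:, 0]

# Wedin-type conclusion: u_hat matches u up to sign with error O(delta),
# since the spectral gap sigma_1 - sigma_2 of Sigma* equals 1 here.
err = min(np.linalg.norm(u_hat - u), np.linalg.norm(u_hat + u))
assert err <= 10 * delta
```

The sign ambiguity taken care of by the `min` mirrors the "either ... or ..." alternatives that appear each time Lemma 2 is applied in the proof.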
42 According to Lemma 2, either ›››››Λrv˚ 2,t1us››Λrv˚ 2,t1us››´Λ˚ rv˚ 2,t1us››Λ˚ rv˚ 2,t1us›››››››ď2δ››Λ˚ rv˚ 2,t1us››¨››Λ˚ rv˚ i,t1us››, (D.9) or ›››››Λrv˚ 2,t1us››Λrv˚ 2,t1us››`Λ˚ rv˚ 2,t1us››Λ˚ rv˚ 2,t1us›››››››ď2δ››Λ˚ rv˚ 2,t1us››¨››Λ˚ rv˚ i,t1us››, holds. Without loss of generality, we assume that (D.9) holds. On the other hand, notice that ΣrE1,E2s´Σ˚ rE1,E2s “ΛrE1,t1uspΛrE2,t1usqJ`ΛrE1,B1spΛrE2,B1sqJ´Λ˚ rE1,t1uspΛ˚ rE2,t1usqJ ´Λ˚ rE1,t2uYD˚ 2spΛ˚ rE2,t2uYD˚ 2sqJ “ΛrE1,t1uspΛrE2,t1usqJ´››Λrv˚ 2,t1us››2 ››Λ˚ rv˚ 2,t1us››2Λ˚ rE1,t1uspΛ˚ rE2,t1usqJ `ΛrE1,B1spΛrE2,B1sqJ´¨ ˝1´››Λrv˚ 2,t1us››2 ››Λ˚ rv˚ 2,t1us››2˛ ‚Λ˚ rE1,t1uspΛ˚ rE2,t1usqJ ´Λ˚ rE1,t2uYD˚ 2spΛ˚ rE2,t2uYD˚ 2sqJ.(D.10) Combined with (D.9), we have ››››››ΛrE1,t1uspΛrE2,t1usqJ´››Λrv˚ 2,t1us››2 ››Λ˚ rv˚ 2,t1us››2Λ˚ rE1,t1uspΛ˚ rE2,t1usqJ›››››› F ď›››››˜ ΛrE1,t1us´››Λrv˚ 2,t1us›› ››Λ˚ rv˚ 2,t1us››Λ˚ rE1,t1us¸ pΛrE2,t1usqJ››››› F `››Λrv˚ 2,t1us›› ››Λ˚ rv˚ 2,t1us››¨››››››Λ˚ rE1,t1us˜ ΛrE2,t1us´››Λrv˚ 2,t1us›› ››Λ˚ rv˚ 2,t1us››Λ˚ rE2,t1us¸J›››››› F ď2δ››Λrv˚ 2,t1us›› ››Λ˚ rv˚ 2,t1us››¨››Λ˚ rv˚ i,t1us››¨˜ ››ΛrE2,t1us››`››Λrv˚ 2,t1us›› ››Λ˚ rv˚ 2,t1us››¨››Λ˚ rE1,t1us››¸ ď4τ2|v˚ 2|δ››Λ˚ rv˚ 2,t1us››¨››Λ˚ rv˚ i,t1us››.(D.11) We denote by ΛJ E“` Λ˚ rE1,t1,2uYD˚ 2s˘´1ΛrE1,B1spΛrE2,B1sqJ, 43 is a matrix whose rank is no more than d1ă1` |D˚ 2|and Λ˚ E“¨ ˝¨ ˝1´››Λrv˚ 2,t1us››2 ››Λ˚ rv˚ 2,t1us››2˛ ‚Λ˚ rE2,t1us,Λ˚ rE2,t2uYD˚ 2s˛ ‚. Notice that according to Condition 5, the rank of Λ˚ rE2,t2uYD˚ 2sis 1` |D˚ 2|. Thus, by Lemma 1 }ΛE´Λ˚ E}F ě›››` ΛE˘ r:,B1s´` Λ˚ E˘ r:,B1s››› F ěσ1`|D˚ 2|` Λ˚ rE2,t2uYD˚ 2s˘ .(D.12) Combined with (D.10), (D.11) and (D.12), we have ›››ΣrE1,E2s´Σ˚ rE1,E2s››› F ěσ2`|D˚ 2|` Λ˚ rE1,t1,2uYD˚ 2s˘ ¨σ1`|D˚ 2|` Λ˚ rE2,t2uYD˚ 2s˘ ´4τ2|v˚ 2|δ››Λ˚ rv˚ 2,t1us››¨››Λ˚ rv˚
i,t1us››.(D.13) Combined with (D.8) we further have }Σ´Σ˚}F ěmin˜? 2 4,››Λ˚ rv˚ 2,t1us››¨››Λ˚ rv˚ i,t1us›› 8? 2τ2|v˚ 2|¸ σ2`|D˚ 2|` Λ˚ rE1,t1,2uYD˚ 2s˘ σ1`|D˚ 2|` Λ˚ rE2,t2uYD˚ 2s˘ . Thus, for sufficiently large Nand any Λ,Ψ defined in rA1pc˚,d1,minp|v˚ 3|,dq,...,minp|v˚ 1`c˚|,dqq, we have inf Λ,ΨlpΛΛJ`Ψ;Sq “OpN}Σ´Σ˚}2 Fq `OPp1q “OPpNq ą0. (D.14) Noticing that p1pΛqis uniformly upper bounded, with probability approaching 1 as Ngrows to infinity, we have 1` |D˚ 2| “argmin 1ďd1ď1`|D˚ 2|ĂIC1pc,d1,minp|v˚ 3|,dq,...,minp|v˚ c˚|,dqq. (D.15) Combining (D.7) with (D.15), we have rdc˚ 1“1`|D˚ 2|. Similarly, we have rdc˚ s“1`|D˚ 1`s|, fors“1,...,c˚. Then we have ĂIC1pc˚,1` |D˚ 2|,...,1` |D˚ 1`c˚|q “ÿ kPCh˚ 1p|v˚ k|p|D˚ k| `1q ´ |D˚ k|p|D˚ k| `1q{2qlogN`OPp1q. 44 If thev1,c˚ 1,...,v1,c˚ c˚derived by Step 4 of Algorithm 2 are not equal to v˚ 2,...,v˚ c˚, we denote by Bi,s“ v˚ iXv1,c˚ sfori“2,...,1`c˚ands“1,...,c˚. Fori“2,...,1`c˚, if Ch˚ i“ H, consider the following cases: 1.|ts:|Bi,s| ě1u| ě4. We denote by s1,...,s 4satisfying |Bi,s1| ě1, ... |Bi,s4| ě1. For any d1,...,d c˚, consider the parametric space rA1pc˚,d1,...,d c˚qdefined in ĂIC1pc,d1,...,d c˚q. For any Λ,ΨPrA1pc˚,d1,...,d c˚q, we denote by Σ “ΛΛJ`Ψ and choose j1PBi,s1,...,j 4PBi,s4. It is easy to check that Σrtj1,j2u,tj3,j4us“Λrtj1,j2u,t1uspΛrtj1,j2u,t1usqJ, is a rank-1 matrix, while according to Condition 3, Σ˚ rtj1,j2u,tj3,j4us“Λ˚ rtj1,j2u,t1,iuspΛ˚ rtj1,j2u,t1,iusqJ, is a rank-2 matrix. By Lemma 1, we have }Σ´Σ˚}F ě››Σrtj1,j2u,tj3,j4us´Σ˚ rtj1,j2u,tj3,j4us›› F ěσ2` Λ˚ rtj1,j2u,t1,iuspΛ˚ rtj1,j2u,t1,iusqJ˘ ą0.(D.16) Thus, for sufficiently large N, (D.14) still holds in rA1pc˚,d1,...,d c˚q, which indicates that the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. 2. There exists some 1 ďsďc˚such that |Bi,s| ě2 and |v˚ izBi,s| ě2. 
In this case, choose j1,j2PBi,s andj3,j4Pv˚ izBi,s, (D.16) also holds, which indicates that the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. 3. Thereexistssome1 ďsďc˚such |Bi,s| “1. Wedenoteby tju “Bi,s. Foranyd1,...,d c˚, considerthe parametric space rA1pc˚,d1,...,d c˚qdefined in ĂIC1pc,d1,...,d c˚q. For any Λ ,ΨPrA1pc˚,d1,...,d c˚q, we denote by Σ “ΛΛJ`Ψ. It is obvious that }Σ´Σ˚}F ě1? 2ˆ››Σrv˚ iztju,tjus´Σ˚ rv˚ iztju,tjus››`››Σrv˚ iztju,v1,c˚ s ztjus´Σ˚ rv˚ iztju,v1,c˚ s ztjus›› F˙ .(D.17) 45 Notice that Σrv˚ iztju,v1,c˚ s ztjus“Λrv˚ iztju,t1uspΛrv1,c˚ s ztju,t1usqJ, and Σ˚ rv˚ iztju,v1,c˚ s ztjus“Λ˚ rv˚ iztju,t1uspΛ˚ rv1,c˚ s ztju,t1usqJ. We denote by δ“››Σrv˚ iztju,v1,c˚ s ztjus´Σ˚ rv˚ iztju,v1,c˚ s ztjus›› F. According to Lemma 2, either ›››››Λrv˚ iztju,t1us››Λrv˚ iztju,t1us››´Λ˚ rv˚ iztju,t1us››Λ˚ rv˚ iztju,t1us›››››››ď2δ››Λ˚ rv˚ iztju,t1us››¨››Λ˚ rv1,c˚ s ztju,t1us››, (D.18) or ›››››Λrv˚ iztju,t1us››Λrv˚ iztju,t1us››`Λ˚ rv˚ iztju,t1us››Λ˚ rv˚ iztju,t1us›››››››ď2δ››Λ˚ rv˚ iztju,t1us››¨››Λ˚ rv1,c˚ s ztju,t1us››. holds. Without loss of generality, we assume that (D.18) holds. On th e other hand, notice that Σrv˚ iztju,tjus“λj,1Λrv˚ iztju,t1us, and Σ˚ rv˚ iztju,tjus“λ˚ j,1Λ˚ rv˚ iztju,t1us`λ˚ j,iΛ˚ rv˚ iztju,tius. According to Condition 3, there exist constant αand vectorµ‰0, which are only related with Λ˚, such that Λ˚ rv˚ iztju,tius“αΛ˚ rv˚ iztju,t1us`µ, andµJΛ˚ rv˚ iztju,t1us“0. According to Condition 7, we have ››Σrv˚ iztju,tjus´Σ˚ rv˚ iztju,tjus›› “››λj,1Λrv˚ iztju,t1us´λ˚ j,1Λ˚ rv˚ iztju,t1us´λ˚ j,iΛ˚ rv˚ iztju,tius›› ě|λ˚ j,i| ¨ }µ} ´ |λj,1| ¨›››››Λrv˚ iztju,t1us´››Λrv˚ iztju,t1us›› ››Λ˚ rv˚ iztju,t1us››Λ˚ rv˚ iztju,t1us››››› ě|λ˚ j,i| ¨ }µ} ´2τ2δp|v˚ i| ´1q1{2 ››Λ˚ rv˚ iztju,t1us››¨››Λ˚ rv1,c˚ s
ztju,t1us››.(D.19) 46 Combining (D.17) and (D.19), we have }Σ´Σ˚}Fěmin¨ ˝? 2 4,››Λ˚ rv˚ iztju,t1us››¨››Λ˚ rv1,c˚ s ztju,t1us›› 4? 2τ2p|v˚ i| ´1q1{2˛ ‚|λ˚ j,i| ¨ }µ} ą0. Thus, for sufficiently large N, (D.14) still holds in rA1pc˚,d1,...,d c˚q, which indicates that the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. Thus, when Ch˚ i“ H, we only need to consider the case that there exists some s“1,...,c˚such that v˚ iĂv1,c˚ s. When Ch˚ i‰ H, consider the following cases: 1. There exist kPCh˚ iands“1,...,c˚such that |Bi,sXv˚ k| ě2. If we further have |pY1ďs1ďc˚,s1‰sBi,s1q X pYk1‰k,k1PCh˚ iv˚ k1q| ě2, choosej1,j2PBi,sXv˚ kandj3,j4P pY1ďs1ďc˚,s1‰sBi,s1q X pYk1‰k,k1PCh˚ 1v˚ k1q. For anyd1,...,d c˚, con- sidertheparametricspace rA1pc˚,d1,...,d c˚qdefinedin ĂIC1pc,d1,...,d c˚q. ForanyΛ,ΨPrA1pc˚,d1,... ,dc˚q, we denote Σ “ΛΛJ`Ψ. It is obvious that Σrtj1,j2u,tj3,j4us“Λrtj1,j2u,t1uspΛrtj3,j4u,t1usqJ, is a rank-1 matrix, while according to Condition 3 Σ˚ rtj1,j2u,tj3,j4us“Λ˚ rtj1,j2u,t1,iuspΛ˚ rtj3,j4u,t1,iusqJ, is a rank-2 matrix. By Lemma 1, we also have }Σ´Σ˚}F ě››Σrtj1,j2u,tj3,j4us´Σ˚ rtj1,j2u,tj3,j4us›› F ěσ2` Λ˚ rtj1,j2u,t1,iuspΛ˚ rtj1,j2u,t1,iusqJ˘ ą0. Thus, for sufficiently large N, (D.14) still holds in rA1pc˚,d1,...,d c˚q, which indicates that the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability 47 approaching 1 as Nincreases to infinity. If |pY1ďs1ďc˚,s1‰sBi,s1q X pYk1‰k,k1PCh˚ iv˚ k1q| ď1, (D.20) noticing that |v˚ k1| ě3 fork1‰k,k1PCh˚ i, by (D.20) we also have |Bi,sXv˚ k1| ě2 for allk1PCh˚ i. Similar to (D.20), we have |pY1ďs1ďc˚,s1‰sBi,s1q Xv˚ k| ď1. (D.21) Combining (D.20) and (D.21), we have |pY1ďs1ďc˚,s1‰sBi,s1q X pYk1PCh˚ iv˚ k1q| ď2. If|pY1ďs1ďc˚,s1‰sBi,s1q X pYk1PCh˚ iv˚ k1q| “2, we denote by k1‰ksuch that (D.20) holds. 
Choose j1,j2PBi,sXv˚ k,j3,j4PBi,sXv˚ k1,j5P pY1ďs1ďc˚,s1‰sBi,s1q Xv˚ kandj6P pY1ďs1ďc˚,s1‰sBi,s1q Xv˚ k1 such that when Ch˚ k‰ H,j1,j2andj5belong to different child factors of factor kand when Ch˚ k1‰ H,j3,j4andj6belong to different child factors of factor k1, which can always be met. For any d1,...,d c˚, consider the parametric space rA1pc˚,d1,...,d c˚qdefined in ĂIC1pc,d1,...,d c˚q. For any Λ,ΨPrA1pc˚,d1,...,d c˚q, we denote Σ “ΛΛJ`Ψ. It is easy to check that Σrtj1,j2,j3,j4u,tj5,j6us“Λrtj1,j2,j3,j4u,t1uspΛrtj5,j6u,t1usqJ, is a rank-1 matrix. On the other hand, Σ˚ rtj1,j2,j3,j4u,tj5,j6us“Λ˚ rtj1,j2,j3,j4u,t1,i,k,k1uspΛ˚ rtj5,j6u,t1,i,k,k1usqJ. According to Condition 3, the rank of Λ˚ rtj1,j2,j3,j4u,t1,i,k,k1usis 4 and the rank of Λ˚ rtj5,j6u,t1,i,k,k1usis 2. By Sylvester’s rank inequality, rank` Λ˚ rtj1,j2,j3,j4u,t1,i,k,k1uspΛ˚ rtj5,j6u,t1,i,k,k1usqJ˘ ěrank` Λ˚ rtj1,j2,j3,j4u,t1,i,k,k1us˘ `rank` Λ˚ rtj5,j6u,t1,i,k,k1us˘ ´4 “2. 48 Thus, by Lemma 1, }Σ´Σ˚}F ě››Σrtj1,j2,j3,j4u,tj5,j6us´Σ˚ rtj1,j2,j3,j4u,tj5,j6us›› F ěσ2` Λ˚ rtj1,j2,j3,j4u,t1,i,k,k1uspΛ˚ rtj5,j6u,t1,i,k,k1usqJ˘ ą0. Thus, for sufficiently large N, (D.14) still holds in rA1pc˚,d1,...,d c˚q, which indicates that the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. If|pY1ďs1ďc˚,s1‰sBi,s1qXpYk1PCh˚ iv˚ k1q| “1. Withoutlossofgenerality,wedenoteby pY1ďs1ďc˚,s1‰sBi,s1qX pYk1PCh˚ iv˚ k1q “Bi,s1Xv˚ k1“ tju. For anyd1,...,d c˚, consider the parametric space rA1pc˚,d1,...,d c˚q defined in ĂIC1pc,d1,...,d c˚q. For any Λ ,ΨPrA1pc˚,d1,...,d c˚q, we denote by Σ “ΛΛJ`Ψ. It is obvious that }Σ´Σ˚}F ě1? 2ˆ››Σrv˚ iztju,tjus´Σ˚ rv˚ iztju,tjus››`››Σrv˚ iztju,v1,c˚ s1ztjus´Σ˚ rv˚ iztju,v1,c˚ s1ztjus›› F˙ .(D.22) Notice that Σrv˚ iztju,v1,c˚ s1ztjus“Λrrv˚ iztju,t1uspΛrrv1,c˚ s1ztju,t1usqJ, while Σ˚ rv˚ iztju,v1,c˚ s1ztjus“Λ˚ rv˚ iztju,t1uspΛ˚ rv1,c˚ s1ztju,t1usqJ. 
We denote by δ“››Σrv˚ iztju,v1,c˚ s1ztjus´Σ˚ rv˚ iztju,v1,c˚ s1ztjus›› F. By Lemma 2, either ›››››Λrv˚ iztju,t1us››Λrv˚ iztju,t1us››´Λ˚ rv˚ iztju,t1us››Λ˚ rv˚ iztju,t1us›››››››ď2δ››Λ˚
rv˚ iztju,t1us››¨››Λ˚ rv1,c˚ s1ztju,t1us››, (D.23) or ›››››Λrv˚ iztju,t1us››Λrv˚ iztju,t1us››`Λ˚ rv˚ iztju,t1us››Λ˚ rv˚ iztju,t1us›››››››ď2δ››Λ˚ rv˚ iztju,t1us››¨››Λ˚ rv1,c˚ s1ztju,t1us››. holds. Without loss of generality, we assume that (D.23) holds. On th e other hand, notice that Σrv˚ iztju,tjus“λj,1Λrv˚ iztju,t1us, 49 and Σ˚ rv˚ iztju,tjus“Λ˚ rv˚ iztju,t1,iuYD˚ ispΛ˚ rtju,t1,iuYD˚ isqJ. By Condition 3, there exist constant αmand vectorµm‰0, which are only related with Λ˚, such that Λ˚ rv˚ iztju,tmus“αmΛ˚ rv˚ iztju,t1us`µm, andµJ mΛ˚ rv˚ iztju,t1us“0 formP tiu YD˚ i. Moreover, σ1`|D˚ i|` Λ˚ Proj,i˘ ą0, where Λ˚ Proj,i, whose columns consist of µm,mP tiu YD˚ i, is a p|v˚ i| ´1q ˆ p1` |D˚ i|qdimensional matrix. We further denote by α1“1. By Condition 7, ››Σrv˚ iztju,tjus´Σ˚ rv˚ iztju,tjus›› “››››››λj,1Λrv˚ iztju,t1us´¨ ˝ÿ mPt1,iuYD˚ iαmλ˚ j,m˛ ‚Λ˚ rv˚ iztju,t1us´Λ˚ Proj,ipΛ˚ rtju,tiuYD˚ isqJ›››››› ě ´ |λj,1| ¨›››››Λrv˚ iztju,t1us´››Λrv˚ iztju,t1us›› ››Λ˚ rv˚ iztju,t1us››Λ˚ rv˚ iztju,t1us››››› `››Λ˚ rtju,tiuYD˚ is››σ1`|D˚ i|` Λ˚ Proj,i˘ ě››Λ˚ rtju,tiuYD˚ is››σ1`|D˚ i|` Λ˚ Proj,i˘ ´2τ2δp|v˚ i| ´1q1{2 ››Λ˚ rv˚ iztju,t1us››¨››Λ˚ rv1,c˚ s1ztju,t1us››.(D.24) Combining (D.22) and (D.24), we have }Σ´Σ˚}F ěmin¨ ˝? 2 4,››Λ˚ rv˚ iztju,t1us››¨››Λ˚ rv1,c˚ s ztju,t1us›› 4? 2τ2p|v˚ i| ´1q1{2˛ ‚››Λ˚ rtju,tiuYD˚ is››σ1`|D˚ i|` Λ˚ Proj,i˘ ą0.(D.25) Thus, for sufficiently large N, (D.14) still holds in rA1pc˚,d1,...,d c˚q, which indicates that the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. Finally, when |pY1ďs1ďc˚,s1‰sBi,s1q X pYk1PCh˚ iv˚ k1q| “0 , there exists a unique 1 ďsďc˚such that Bi,s“v˚ i, which indicates v˚ iĂv1,c˚ s. Only the situation is allowed. 50 2.|Bi,sXv˚ k| ď1 for all 1 ďsďc˚andkPCh˚ i. If there exist some 1 ďsďc˚andkPCh˚ isuch that |Bi,sXv˚ k| “1 and |Bi,sXv˚ k1| “0 for allk1PCh˚ i,k1‰k. We denote by tju “Bi,sXv˚ k. 
Similar to the proof in (D.25), for sufficiently large N, (D.14) still holds in rA1pc˚,d1,...,d c˚q, which indicates that the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. Thus, forany 1 ďsďc˚andk1PCh˚ i, if|Bi,sXv˚ k1| “1, there exist k2PCh˚ isuchthat |Bi,sXv˚ k2| “1. If there exists another k3PCh˚ i,k3‰k1,k2such that |Bi,sXv˚ k3| “0. We denote by tj1u “Bi,sXv˚ k1, tj2u “Bi,sXv˚ k2. For anyd1,...,d c˚, consider the parametric space rA1pc˚,d1,...,d c˚qdefined in ĂIC1pc,d1,...,d c˚q. For any Λ ,ΨPrA1pc˚,d1,...,d c˚q, we denote Σ “ΛΛJ`Ψ. It is obvious that Σrv˚ k3,tj1,j2us“Λrv˚ k3,t1uspΛrtj1,j2u,t1usqJ, is a rank-1 matrix, and according to Condition 3 Σ˚ rv˚ k3,tj1,j2us“Λ˚ rv˚ k3,t1,iuspΛ˚ rtj1,j2u,t1,iusqJ is a rank-2 matrix. By Lemma 1, }Σ´Σ˚}F ě››Σrv˚ k3,tj1,j2us´Σ˚ rv˚ k3,tj1,j2us›› F ěσ2` Λ˚ rv˚ k3,t1,iuspΛ˚ rtj1,j2u,t1,iusqJ˘ ą0. Thus, for sufficiently large N, (D.14) still holds in rA1pc˚,d1,...,d c˚q, which indicates that the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. We then assume that for any 1 ďsďc˚, if there exists some kPCh˚ isuch that |Bi,sXv˚ k| “1, then |Bi,sXv˚ k| “1for allkPCh˚ i, which indicates that forall kPCh˚ i,|v˚ k|isthe same. If |Ch˚ i| ě3, choose k1,k2,k3PCh˚ iand 1 ďs1,s2,s3ďc˚such that tj1u “Bi,s1Xv˚ k1,tj2u “Bi,s2Xv˚ k1,tj3u “Bi,s3Xv˚ k2, tj4u “Bi,s3Xv˚ k3. For anyd1,...,d c˚, consider the parametric space rA1pc˚,d1,...,d c˚qdefined in ĂIC1pc,d1,...,d
c˚q. 51 For any Λ,ΨPrA1pc˚,d1,...,d c˚q, we denote by Σ “ΛΛJ`Ψ. We have Σrtj1,j2u,tj3,j4us“Λrtj1,j2u,t1uspΛrtj3,j4u,t1usqJ, is a rank-1 matrix, while according to Condition 3 Σ˚ rtj1,j2u,tj3,j4us“Λ˚ rtj1,j2u,t1,iuspΛ˚ rtj3,j4u,t1,iusqJ, is a rank-2 matrix. By Lemma 1, }Σ´Σ˚}F ě››Σrtj1,j2u,tj3,j4us´Σ˚ rtj1,j2u,tj3,j4us›› F ěσ2` Λ˚ rtj1,j2u,t1,iuspΛ˚ rtj3,j4u,t1,iusqJ˘ ą0. Thus, for sufficiently large Nand any Λ,Ψ defined in rA1pc˚,d1,...,d c˚q, (D.14) still holds, which indicates that the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. Finally, if |Ch˚ i| “2, we choose tj1u “Bi,s1Xv˚ k1,tj2u “Bi,s1Xv˚ k2,j3,j4Pv˚ k1,j3,j4‰j1and j5,j6Pv˚ k2,j5,j6‰j2satisfying that if |Ch˚ k1| ‰0,j1andj3,j4belong to different child factors of factork1and if |Ch˚ k2| ‰0,j2andj5,j6belong to different child factor of factor k2, which can be always satisfied. For any d1,...,d c˚, consider the parametric space rA1pc˚,d1,...,d c˚qdefined in ĂIC1pc,d1,...,d c˚q. For any Λ ,ΨPrA1pc˚,d1,...,d c˚q, we denote Σ “ΛΛJ`Ψ. It is obvious that Σrtj1,j2u,tj3,j4,j5,j6us“Λrtj1,j2u,t1uspΛrtj3,j4,j5,j6u,t1usqJ, is a rank-1 matrix. By Condition 3, we further have the rank of Λ˚ rtj3,j4,j5,j6u,t1,i,k1,k2usis 4 and the rank of Λ˚ rtj1,j2u,t1,i,k1,k2usis 2. By Sylvester’s rank inequality, rank` Λ˚ rtj1,j2u,t1,i,k1,k2uspΛ˚ rtj3,j4,j5,j6u,t1,i,k1,k2usqJ˘ ěrank` Λ˚ rtj1,j2u,t1,i,k1,k2us˘ `rank` Λ˚ rtj3,j4,j5,j6u,t1,i,k1,k2us˘ ´4 “2. 52 By Lemma 1, }Σ´Σ˚}F ě››Σrtj1,j2u,tj3,j4,j5,j6us´Σ˚ rtj1,j2u,tj3,j4,j5,j6us›› F ěσ2` Λ˚ rtj1,j2u,t1,i,k1,k2uspΛ˚ rtj3,j4,j5,j6u,t1,i,k1,k2usqJ˘ ą0. Thus, for sufficiently large N, (D.14) still holds in rA1pc˚,d1,...,d c˚q, which indicates that the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. 
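The contradiction arguments above lean repeatedly on Sylvester's rank inequality. A minimal numerical check (with illustrative shapes standing in for the loading sub-matrices, whose ranks are what Condition 3 guarantees in the paper) confirms the rank bookkeeping rank(AB^T) >= rank(A) + rank(B) - n:

```python
import numpy as np

rng = np.random.default_rng(1)
rank = np.linalg.matrix_rank

# Illustrative shapes mirroring the proof: a 4 x 4 full-rank block and a
# 2 x 4 full-row-rank block sharing the inner dimension n = 4.
A = rng.normal(size=(4, 4))
B = rng.normal(size=(2, 4))
n = A.shape[1]

prod_rank = rank(A @ B.T)

# Sylvester's rank inequality: rank(A B^T) >= rank(A) + rank(B) - n,
# here 4 + 2 - 4 = 2, so the product cannot be a rank-one matrix.
assert prod_rank >= rank(A) + rank(B) - n
assert prod_rank == 2
```

This is exactly how the proof rules out the rank-one factorizations on the left-hand sides: the corresponding true blocks must have rank at least 2.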
From the proof above, for any v˚ i,iPCh˚ 1, if there does not exist some v1,c˚ s,sP t1,...,c˚u, such that v˚ iĂv1,c˚ s, the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. Thus, when c“c˚, only in the case v1,c˚ s “v˚ 1`s, s“1,...,c˚, the information criterion reaches (D.4). We then consider the case where c‰c˚. Ifcąc˚, there exists some v˚ i,iPCh˚ 1such thatv˚ iĆv1,c s for alls“1,...,c. Similar to the proof above, the derived ĂIC1pc˚,d1,...,d c˚qin the parametric space rA1pc˚,d1,...,d c˚qis larger than (D.4) with probability approaching 1 as Nincreases to infinity. When 2ďcăc˚, we also only need to consider the case that for any v˚ i,iPCh˚ 1, there exists some v1,c s, s“1,...,c, such that v˚ iĂv1,c s. We will show that the rdc sgiven by Step 6 of Algorithm 2 satisfies rdc s“ř v˚ iĂv1,c s1` |D˚ i|fors“1,...,cwith probability approaching 1 as Ngrows to infinity. Fors“1, whend1ěř v˚ iĂv1,c s1` |D˚ i|, the parametric space rA1pc,d1,minp|v1,c 2|,dq,...,minp|v1,c c|,dqq defined in ĂIC1pc,d1,minp|v1,c 2|,dq,...,minp|v1,c c|,dqq, (D.26) includes the true parameters Λ˚and Ψ˚. Thus, the solution to (D.26), denoted by Λd1and Ψd1, satisfies ››Λd1ΛJ d1`Ψd1´Σ˚›› F“OPp1{? Nq. Moreover, we have ĂIC1pc,d1,minp|v1,c 2|,dq,...,minp|v1,c c|,dqq “l` Λd1ΛJ d1`Ψd1;S˘ `p1` Λd1˘ logN “OPp1q `` |v1,c 1|d1´d1pd1´1q{2˘ logN`ÿ 2ďsďc` |v1,c s|ds´dspds´1q˘ logN,(D.27) where we denoted by ds“minp|v1,c s|,dq,s“2,...,cfor simplicity. Notice that the third term in (D.27) is independent ofthechoiceof d1andthe secondtermisstrictlyincreasingwith respectto d1whenř v˚ iĂv1,c 11` 53 |D˚ i| ďd1ďminp|v1,c 1|,dq. Thus, with probability approaching 1, as Ngrows to infinity, we have ÿ v˚ iĂv1,c 11` |D˚ i| “ argminř v˚ iĂv1,c 11`|D˚ i|ďd1ďmin p|v1,c 1|,dqĂIC1pc,d1,minp|v1,c 2|,dq,...,minp|v1,c c|,dqq.(D.28) Whend1ăř v˚
iĂv1,c 11` |D˚ i|, for any Λ ,ΨPrA1pc,d1,minp|v1,c 2|,dq,...,minp|v1,c c|,dqq, we denote Σ “ ΛΛJ`Ψ. Similar to the proof in (D.8)-(D.13), for sufficiently large N, we have inf Λ,ΨlpΛΛJ`Ψ;Sq “OpN}Σ´Σ˚}2 Fq `OPp1q “OPpNq ą0. Noticing that p1pΛqis uniformly upper bounded, with probability approaching to 1 as Ngrows to infinity, ÿ v˚ iĂv1,c 11` |D˚ i| “ argmin 1ďd1ďř v˚ iĂv1,c 11`|D˚ i|ĂIC1pc,d1,minp|v1,c 2|,dq,...,minp|v1,c c|,dqq.(D.29) Combining (D.28) with (D.29), we have rdc 1“ř v˚ iĂv1,c 1p1` |D˚ i|q. Similarly, we also have rdc s“ř v˚ iĂv1,c sp1` |D˚ i|q,s“1,...,c. However, it is obvious that cÿ s“1` |v1,c s|rdc s´rdc sprdc s´1q{2˘ ąÿ iPCh˚ i` |v˚ s|p|D˚ s| `1q ´ |D˚ s|p|D˚ s| `1q{2˘ , when rdc s“ř v˚ iĂv1,c sp1` |D˚ i|q,s“1,...,c. Thus, with probability approaching 1 as Ngrows to infinity, the derived ĂIC1pc,rdc 1,..., rdc cqis larger than (D.4). From the proof above, we have shown that xCh1“Ch˚ 1and pvi“v˚ ifor alliPCh˚ 1with probability approaching 1 as Ngrows to infinity. In the rest of the proof, we show that, as a by-p roduct, min p}rλ1´ λ˚ 1},}rλ1`λ˚ 1}q “OPp1{? Nq. We denote by rΛ1,c˚,rΨ1,c˚as the solution to ĂIC1pc˚,1`|D˚ 2|,...,1`|D˚ c˚`1|q. For the simplicity of notations, we denote rΛ1,c˚,rΨ1,c˚byrΛ,rΨ, and rΣ“rΛrΛJ`rΨ. It is easy to check that }rΣ´Σ˚}F“OPp1{? Nq. By Lemma 3, we have }rΛrΛJ´Λ˚pΛ˚qJ}F“OPp1{? Nqand }rΨ´Ψ˚}F“OPp1{? Nq. By Lemma 1 and Lemma 2, there exists an orthogonal matrix Rsuch that }rΛ´Λ˚R}F“OPp1{? Nq. (D.30) 54 Fori1,i2PCh˚ 1,i1‰i2, by Lemma 2 and ››Λ˚ rv˚ i1,t1uspΛ˚ rv˚ i2,t1usqJ´rΛrv˚ i1,t1usrΛJ rv˚ i2,t1us›› F“OPp1{? Nq, we have min¨ ˝››››››rΛrv˚ i1,t1us }rΛrv˚ i1,t1us}´Λ˚ rv˚ i1,t1us }Λ˚ rv˚ i1,t1us}››››››,››››››rΛrv˚ i1,t1us }rΛrv˚ i1,t1us}`Λ˚ rv˚ i1,t1us }Λ˚ rv˚ i1,t1us}››››››˛ ‚“OPp1{? Nq. Without loss of generality, we assume that ››››››rΛrv˚ i1,t1us››rΛrv˚ i1,t1us››´Λ˚ rv˚ i1,t1us››Λ˚ rv˚ i1,t1us››››››››“OPp1{? Nq, holds. Then, we further have ›››››rΛrv˚ i,t1us››rΛrv˚ i,t1us››´Λ˚ rv˚ i,t1us››Λ˚ rv˚ i,t1us›››››››“OPp1{? 
Nq, (D.31) foriPCh˚ 1. According to (D.30), for any iPCh˚ 1, we have ››rΛrv˚ i,:s´Λ˚ rv˚ i,t1,iuYD˚ isRrt1,iuYD˚ i,:s›› F“OPp1{? Nq. Thus, ››rΛrv˚ i,t1us´Λ˚ rv˚ i,t1,iuYD˚ isRrt1,iuYD˚ i,t1us››“OPp1{? Nq. (D.32) By Condition 3, for sP tiu YD˚ i, there exist a constant αsą0 and a vectorµs‰0, which depend only on Λ˚, such that Λ˚ rv˚ i,tsus“αsΛ˚ rv˚ i,t1us`µs, andµJ sΛ˚ rv˚ i,t1us“0 forsP tiu YD˚ i. Moreover, σ1`|D˚ i|` Λ˚ Proj,i˘ ą0, (D.33) where Λ˚ Proj,i, whose columns consist of µs,sP tiu YD˚ i, is a |v˚ i| ˆ p1` |D˚ i|qdimensional matrix. We further denote by α1“1. By (D.32), we have ››rΛrv˚ i,t1us´Λ˚ rv˚ i,t1,iuYD˚ isRrt1,iuYD˚ i,t1us›› ě››››››››rΛrv˚ i,t1us›› ››Λ˚ rv˚ i,t1us››Λ˚ rv˚ i,t1us´¨ ˝ÿ sPt1,iuYD˚ iαsRrtsu,t1us˛ ‚Λ˚ rv˚ i,t1us´Λ˚ Proj,iRrtiuYD˚ i,t1us›››››› ´››rΛrv˚ i,t1us››¨›››››rΛrv˚ i,t1us››rΛrv˚ i,t1us››´Λ˚ rv˚ i,t1us››Λ˚ rv˚ i,t1us››››››› ě››RrtiuYD˚ i,t1us››σ1`|D˚ i|` Λ˚ Proj,i˘ `OPp1{? Nq. Combining (D.31) and (D.33), we have ››RrtiuYD˚ i,t1us››“OPp1{? Nq, for alliPCh˚ 1. Thus, we have ››Rrt2,...,K u,t1us››“OPp1{? Nq and min`ˇˇRrt1u,t1us´1ˇˇ,ˇˇRrt1u,t1us`1ˇˇ˘ “OPp1{? Nq. Without loss of generality, we assume |Rrt1u,t1us´1| “OPp1{? Nq. Combined with (D.30), we have ››rΛr:,t1us´Λ˚ r:,t1us›› ď››rΛr:,t1us´Λ˚Rr:,t1us››`ˇˇRrt1u,t1us´1ˇˇ¨››Λ˚ r:,t1us›› `››Rrt2,...,K u,t1us››¨››Λ˚ r:,t2,...,K us›› F “OPp1{? Nq.(D.34) We have now finished the main part of the proof. We can then split the learning problem into |Ch˚ 1|sub-problems involving the variables belonging to
v˚ k,kPCh˚ 1. By a recursive manner, with probability approaching 1 as Ngrows to infinity, we have pT“T,pK“K,pLt“Lt,t“1,...,T, and pvi“v˚ i,i“1,...,K. The proofof }pΛ´Λ˚pQ}F“OPp1{? Nqand }pΨ´Ψ˚}F“OPp1{? Nqcan be obtained by recursively applying similar arguments in (D.30) to (D.34). 56 References Anderson, T. and Rubin, H. (1956). Statistical inference in facto r analysis. In Proceedings of the Berkeley Symposium on Mathematical Statistics and Probability , pages 111 – 150. University of California Press. Bertsekas, D. P. (2014). Constrained optimization and Lagrange multiplier methods . Academic Press. Brunner, M., Nagy, G., and Wilhelm, O. (2012). A tutorial on hierarch ically structured constructs. Journal of Personality , 80(4):796–846. Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic stu dies. Cambridge University Press. Chen, F. F., West, S. G., and Sousa, K. H. (2006). A comparison of b ifactor and second-order models of quality of life. Multivariate behavioral research , 41(2):189–225. Chen, Y., Li, X., and Zhang, S. (2020). Structured latent factor a nalysis for large-scale data: Identifiability, estimability, and their implications. Journal of the American Statistical Association , 115(532):1756–1770. DeYoung, C. G. (2006). Higher-order factors of the big five in a mu lti-informant sample. Journal of Personality and Social Psychology , 91(6):1138. Fang, G., Guo, J., Xu, X., Ying, Z., and Zhang, S. (2021). Identifiabilit y of bifactor models. Statistica Sinica, 31:2309–2330. Holzinger, K. J. and Swineford, F. (1937). The bi-factor method. Psychometrika , 2(1):41–54. Horn, R. A. and Johnson, C. R. (2012). Matrix analysis . Cambridge University Press. Jennrich, R. I. and Bentler, P. M. (2011). Exploratory bi-factor analysis. Psychometrika , 76:537–549. Johnson, J. A. (2014). Measuring thirty facets of the five facto r model with a 120-item public domain inventory: Development of the ipip-neo-120. 
|
https://arxiv.org/abs/2505.09043v1
|
arXiv:2505.09090v1 [math.ST] 14 May 2025. Submitted. SEQUENTIAL SCORING RULE EVALUATION FOR FORECAST METHOD SELECTION. By D. T. Frazier and D. S. Poskitt, Department of Econometrics and Business Statistics, Monash University (david.frazier@monash.edu, Donald.Poskitt@monash.edu). This paper shows that sequential statistical analysis techniques can be generalised to the problem of selecting between alternative forecasting methods using scoring rules. A return to basic principles is necessary in order to show that ideas and concepts from sequential statistical methods can be adapted and applied to sequential scoring rule evaluation (SSRE). One key technical contribution of this paper is the development of a large deviations type result for SSRE schemes using a change of measure that parallels a traditional exponential tilting form. Further, we also show that SSRE will terminate in finite time with probability one, and that the moments of the SSRE stopping time exist. A second key contribution is to show that the exponential tilting form underlying our large deviations result allows us to cast SSRE within the framework of generalised e-values. Relying on this formulation, we devise sequential testing approaches that are both powerful and maintain control on the error probabilities underlying the analysis. Through several simulated examples, we demonstrate that our e-values based SSRE approach delivers reliable results that are more powerful than more commonly applied testing methods precisely in the situations where these commonly applied methods can be expected to fail. 1. Introduction. 1.1. Motivation. In time-series forecasting a basic task that befalls the practitioner is to select from a set of competing alternatives a forecasting system that is likely to deliver the best forecasting performance.
This is commonly done by examining so-called out-of-sample loss, defined as the expected value of a measure of the discrepancy between the predictions and the observed values (the mean squared error, for example). The empirical forecaster is therefore concerned with assessing expected performance on as yet unseen data. Researchers also use out-of-sample loss to assess whether a proposed forecasting system outperforms an already established benchmark. Out-of-sample loss is by definition unknown, however, and therefore needs to be estimated. This is typically done by excluding parts of the observed time series from the estimation and mimicking the actual process of out-of-sample forecasting by performing a sequence of predictions for these observations instead. An estimate of the out-of-sample loss is then obtained by averaging the loss incurred across the individual predictions, à la cross-validation. Suppose, for example, that the forecaster is asked to provide a probability forecast p for an event E associated with a future value of a random variable of interest Y. Key requirements for such a forecast are calibration, meaning that events with a predicted probability of p should occur with a relative frequency of p, and sharpness, which requires the forecast probabilities to be informative, i.e. close to 0 or 1. These properties can be assessed using a proper scoring rule (
|
https://arxiv.org/abs/2505.09090v1
|
Gneiting, Balabdaoui and Raftery, 2007), which in the case of a probability forecast coincides with a consistent scoring function for the mean (Gneiting, 2011). (MSC2020 subject classifications: Primary 62M10, 62M15; secondary 62G09. Keywords and phrases: forecasting method, error probability.) A scoring rule is a function S(p; y) that maps a forecast probability p and an observation y to a numerical score that measures the discrepancy of the forecast, with smaller scores indicating a superior forecast. To compare two forecasting procedures, p_t(τ) and q_t(τ) say, at horizon τ ≥ 1, the forecaster might collect a sample of T + τ values of Y and partition the sample into segments containing training data (in-sample observations used for fitting p_t(τ) and q_t(τ)) and test data (P pseudo out-of-sample observations used for forecast evaluation). Given p_t(τ) and q_t(τ), and the associated observations y_{t+τ} for t = R + 1, ..., T, where R = T + τ − P, the average score difference can be computed as

Δ_P(p_t(τ), q_t(τ)) = (1/P) Σ_{t=R+1}^{T} [S(p_t(τ); y_{t+τ}) − S(q_t(τ); y_{t+τ})],

and the standardized statistic √P Δ_P(p_t(τ), q_t(τ))/σ̂, where σ̂² is a consistent estimator of the asymptotic variance of Δ_P(p_t(τ), q_t(τ)), is often used to assess the relative accuracy of the forecasts by testing whether the average score difference differs significantly from zero. A number of tests of the type briefly outlined above are available in the literature, prominent examples being Diebold and Mariano (1995), West (1996), White (2000) and Hansen (2005); see also the review in Tashman (2000). Further examples are the martingale-based approaches in Giacomini and White (2006), Lai, Gross and Shen (2011), and Yen and Yen (2021). A common difficulty with such pseudo out-of-sample evaluation schemes is the resolution of the inevitable trade-off between the size of the data sets designated as in-sample training data and pseudo out-of-sample test data.
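The average score difference Δ_P and its standardized Diebold-Mariano-type statistic can be sketched as follows. The function name is an illustrative assumption, and the naive i.i.d. variance estimator stands in for the consistent estimator σ̂² of the text; in practice a HAC estimator would be used for serially dependent score differences.

```python
import math

def avg_score_diff(scores_p, scores_q):
    """Average score difference Delta_P and its standardized statistic.

    scores_p[i] and scores_q[i] are S(p_t(tau); y_{t+tau}) and
    S(q_t(tau); y_{t+tau}) over the P pseudo out-of-sample points."""
    P = len(scores_p)
    d = [sp - sq for sp, sq in zip(scores_p, scores_q)]
    delta = sum(d) / P
    # naive i.i.d. variance of the score differences; a HAC estimator
    # would normally replace this when the scores are serially dependent
    var = sum((x - delta) ** 2 for x in d) / P
    stat = math.sqrt(P) * delta / math.sqrt(var) if var > 0 else float("nan")
    return delta, stat

delta, stat = avg_score_diff([1.0, 1.2, 0.9, 1.1], [1.3, 1.4, 1.2, 1.5])
```

Here every score difference is negative, so Δ_P < 0 and the standardized statistic points towards the first method.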
The larger the former, the more accurate the estimation; but the loss is evaluated only on the observations reserved as test data, and the smaller the latter, the less precise the evaluation of the loss. Moreover, the tests mentioned above are all only asymptotically valid. Scarcity of pseudo out-of-sample observations will therefore call into question the accuracy of any asymptotic approximations. Differences in practical implementation can also have a dramatic impact on test performance (see for example Lazarus et al. (2018) and Frazier et al. (2023)). Furthermore, the tests mentioned above are couched in terms of testing a null hypothesis of no inferior forecast performance, so they are more suited to making a comparison of a new method to an established benchmark or state-of-the-art method than to choosing between two competing specifications. See Diebold (2015) for further discussion of the pros and cons of using pseudo out-of-sample tests. Sequential methods address these issues and (obviously) are ideally suited to the common scenario where data arrive sequentially and the forecaster is required to evaluate forecasts based on a small recently observed set of data. The contribution of this paper is to show that the apparatus of sequential statistical methods can be adapted to the analysis of scoring rules, thereby
|
https://arxiv.org/abs/2505.09090v1
|
offering the forecaster an alternative methodological approach to forecast method selection. To the best of the authors' knowledge, the extant literature does not contain a discussion that pursues the particular avenue of research that we consider here. 1.2. Background. Sequential statistical methods gained prominence in the 1940s during the Second World War, leading to the development of a body of theory around the 'sequential probability ratio test' (SPRT) due to Wald (1947). A summary of related work is also presented in Barnard (1946). Whereas Wald's theory was mainly devoted to the specification of sampling schemes satisfying given requirements, Barnard's work addressed the converse problem of examining the statistical properties of a given sequential scheme. In the ensuing decades considerable research has been conducted on sequential methods; in particular, the martingale theory in Doob (1953) was used to extend Wald's results, including Wald's fundamental identity, to stochastic processes in continuous and discrete time (Dvoretzky, Kiefer and Wolfowitz, 1953; Robbins and Samuel, 1966; Brown, 1969), and stationary processes with independent increments (Hall, 1970). Textbook accounts from a decision-theoretic viewpoint can be found in Ferguson (1967) and DeGroot (1970). For a review of the theory and application of sequential methods see Wetherill (1975). The scenario envisaged in this literature is one where a practitioner has available a data series and has to decide which of two hypothesised data generating processes (DGPs) has produced the observed realization. The decision is based on a boundary crossing rule applied to the 'probability ratio' (i.e. the likelihood ratio) given by the probability distributions specified for the two DGPs.
The sequential scoring rule evaluation (SSRE) scheme developed in this paper takes a similar perspective to that just described, both in terms of the background problem formulation and the use of a boundary crossing stopping rule. However, a fundamental difference between SSRE and the SPRT is that SSRE employs a stopping rule that involves the use of a ratio of scoring rules and not a likelihood ratio. Consequently, standard analysis techniques, such as large deviations theory for independent and identically distributed random variables, or martingale theory, can no longer be relied upon. The main challenges in analysing SSRE lie in the fact that it is not defined in terms of a likelihood ratio, and a return to basic principles is therefore necessary in order to show that ideas and concepts from sequential methods can be adapted and applied to SSRE. The technical contribution of this paper is the establishment of probabilistic results for SSRE schemes using a change of measure that parallels a traditional exponential tilting form. This allows us to provide descriptions of error probabilities in terms of the levels set for the stopping rule. The inferential characteristics of SSRE can thus be specified in advance, and as information from additional time points is
|
https://arxiv.org/abs/2505.09090v1
|
accrued, an assessment of relative forecast performance can be made that is valid in finite samples. We also show that SSRE will terminate in finite time with probability one, and that the moments of the SSRE stopping time exist. Thus, SSRE provides the forecaster with an assessment tool that allows the forecaster to stop (or continue) forecast evaluation upon seeing sufficient (or not enough) evidence and, more significantly, SSRE is geared towards the task of forecast method selection rather than model testing. To implement the proposed SSRE testing approach, we show a novel link between SSRE schemes and e-variables. In particular, we show that our proposed SSRE scheme is a universal generalized e-value (GUe-value) under our null hypothesis; GUe-values were proposed and studied in Dey, Martin and Williams (2024a) for parameter inference and by Dey, Martin and Williams (2024b) for multiple testing problems. As the name suggests, GUe-values are related to both e-values and universal inference. E-variables have recently emerged as a method for conducting multiple hypothesis testing that does not suffer from the same issues as related p-value-based methods; see Vovk and Wang (2021), Grünwald, de Heide and Koolen (2024), Shafer (2021) (who refers to e-variables as betting scores), and Wang and Ramdas (2022) for full definitions. On the other hand, universal inference methods were proposed by Wasserman, Ramdas and Balakrishnan (2020) as a method to derive calibrated confidence intervals in well-specified models without any of the usual regularity conditions. While exceedingly useful, the method loses correct calibration in misspecified models. One can show that the correct calibration of the confidence sets in universal inference is a consequence of the fact that the criterion that is inverted to form the confidence set in universal inference is an e-process under the assumed model.
Following Giacomini and White (2006) we define a forecasting method as being comprised of a forecasting model together with associated techniques that must be specified by the forecaster at the time the forecast is made, such as what estimation procedure to employ and what data to use. In what follows we allow for the use of different estimation procedures in the construction of the forecasts, including parametric and nonparametric methods, as well as limited memory estimators, such as recursive estimators of the exponential smoothing type, fixed-width rolling window estimators, or geometrically weighted expanding window estimators that discount less recent observations. These choices can have a significant effect on future forecast performance over and above those due to the specification of the forecasting model. In the derivation of our results we impose minimal conditions on the DGP and the forecast model and method. This accommodates various types of misspecification and uncertainty resulting from the choices made by the forecaster. 1.3. Organization of the Paper. Let Y_1, ..., Y_T, ... denote a sequence generated by a stochastic process defined on a probability space (Y, B, P). We suppose that the goal of the
|
https://arxiv.org/abs/2505.09090v1
|
forecaster is to predict some feature of F_{Y_{T+h}}, the distribution of Y_{T+h} at a forecast horizon of h ≥ 1. We are interested therefore in predictions of a functional φ[F_{Y_{T+h}}] made at time T. We assume that competing forecasts and the observations Y_t are random processes adapted to a filtration B_t, t ∈ N, and the measure P describes the joint dynamics of the forecasts and the observations through its characterisation of the distribution of every finite set {Y_1, ..., Y_n}, which we denote by P_n, n = 1, 2, ... (see Halmos, 1950, §48 & 49). Since in general the true measure P, and hence P_n, is unknown, there is no 'correct' way to predict features of interest. We assume therefore that the forecaster considers a class of forecast methods Q that are identified by their distribution functions Q ∈ Q, and are constructed using data from realizations y_1, ..., y_n, n = 1, 2, ..., of the stochastic process. In the following section of the paper we outline the class of models and methods under consideration, and provide a brief outline of the scoring rules and sequential estimation techniques that underlie SSRE. In Section 3 we present some basic concepts and methods of SSRE and relate these to Wald's SPRT. Section 4 establishes the relationship of SSRE to e-variables and generalized e-variables. Section 5 discusses implementation of the proposed SSRE, and compares this approach against the most common approach for testing forecast accuracy across three classes of examples. Section 6 summarises the paper, and proofs are assembled in the Appendix. 2. Preliminaries. Denote by F_γ : Y × Γ ↦ C[0,1] a probability distribution on (Y, B) indexed by a parameter γ ∈ Γ ⊆ R^p and lying in a model family M(Γ) := {F_γ : γ ∈ Γ}. We envisage forecasting settings where the practitioner entertains a collection of different possible statistical models that can describe, with varying degrees of accuracy, the evolution of the stochastic process.
We suppose that each model is specified using a (semi-)parametric family that depends on unknown parameters. Each of the models is combined with transformations T_υ that depend on tuning parameters υ ∈ Υ ⊆ R^q. The transformations characterise data manipulations and other techniques used in the construction of the forecasts (the choice of window width in a rolling window, for example). Given a family of technical transformations, T(Υ) := {T_υ : υ ∈ Υ}, and a model family M(Γ), the class of probability distributions used for prediction is the composition Q_θ : Y × Θ ↦ C[0,1] where (with a slight abuse of notation) M_γ ∘ T_υ : (Y_1, ..., Y_T) ↦ Q_θ. We denote by Q(Θ) the class of forecast methods {Q_θ : θ ∈ Θ = Γ × Υ}. Generally, the model parameters of Q_θ are unknown and must be estimated, and the tuning parameters are assigned by the practitioner using data-directed selection criteria or "rules of thumb". For any 1 ≤ n ≤ T, let Ω_n ⊇ B_n denote the information available to the forecaster at time n. The forecaster's prediction is then given by the functional φ[Q_θ(Y_{T+h} | Ω_T)], where Q_θ(Y_{T+h} | Ω_T) denotes the distribution of the random variable Y_{T+h} derived from the forecasting method conditional on the information Ω_T. Throughout, we consider that the forecaster wishes to obtain 'optimal' forecasts in the spirit of Gneiting and Raftery (2007), Gneiting and Ranjan (2011), and Martin et al. (2022), meaning that the forecasting method the forecaster chooses to employ out-of-sample is produced by targeting a scoring
|
https://arxiv.org/abs/2505.09090v1
|
rule that measures precisely the feature or features of Y_{T+h} that are of interest. 2.1. Scoring Rules. Recall that Q is a class of distributions on Y, and a scoring rule is a measurable function S(f; y) that maps a forecast f and an observation y to a numerical value in [0, ∞) that measures the discrepancy of the forecast, with smaller scores indicating a superior forecast. The functional φ : Q ↦ Φ maps a distribution Q ∈ Q to a subset φ[Q] of the feature space Φ, Q ↦ φ[Q] ⊆ Φ. A scoring function S(·,·) is said to be Q-consistent, or proper, for a functional φ[·] if

E_Q{S(φ[Q], Y)} ≤ E_Q{S(φ[P], Y)}, for all P, Q ∈ Q,

where Y is a random variable with distribution Q. Here and throughout, E_P[X] denotes the expectation of the random variable X under P, and we assume that any stated expectation exists and is finite. We say that S(·,·) is strictly Q-consistent, or proper, for φ[·] if E_Q{S(φ[Q], Y)} = E_Q{S(φ[P], Y)} implies that φ[P] = φ[Q]. If φ[·] admits a strictly consistent or proper scoring function, then it is called elicitable. Here and throughout, we will assume that the functional of interest is elicitable. Scoring rules can be used to measure the accuracy of a forecast distribution itself (i.e. φ[f] = f) and they can also be used to measure the accuracy of features of the distribution such as quantiles, or predictive intervals (see Gneiting and Raftery, 2007; Gneiting, Balabdaoui and Raftery, 2007; Gneiting, 2011; Gneiting and Ranjan, 2011). Some functionals are not elicitable, however, at least not on their own; see Brehmer and Gneiting (2021) and Fissler and Ziegel (2016). See also Patton (2020) for a discussion of the sensitivity of consistent scoring rules to model misspecification, parameter estimation error, and nonnested information sets. 2.2. Parameter Estimation. Given a realisation of n observations {y_t : 1 ≤ t ≤ n}, n ≤ T − h, the optimal functional predictive can generally be produced in two possible ways: i.
by searching for the most accurate predictive distribution Q_θ ∈ Q under a given loss function L : Q × Y ↦ R and substituting into φ to give φ[Q_{θ̂_n}(Y_{T+h} | Ω_T)], where

θ̂_n = argmin_{θ ∈ Θ} Σ_{t=1}^{n} w_t L[Q_θ(Y_{t+h} | Ω_t), y_{t+h}],

or, similarly, ii. by searching for the most accurate functional predictive φ[Q_θ], Q_θ ∈ Q, under a given scoring rule and evaluating φ[Q_{ϑ̃_n}(Y_{T+h} | Ω_T)] directly, where

ϑ̃_n = argmin_{ϑ ∈ Θ} Σ_{t=1}^{n} w_t S[φ[Q_ϑ(Y_{t+h} | Ω_t)], y_{t+h}].

Here w_t, t ∈ N, is a B_t-measurable weighting sequence that is known at the time of forecasting. For example, if w_t = 1 for t = n − ω, ..., n and w_t = 0 for t < n − ω, ω < n, then the estimate is based on a data window of width ω + 1, whereas w_t = λ^{n−t}, t = 1, ..., n, 0 < λ < 1, corresponds to a geometrically weighted expanding window. Practitioners might also be interested in whether a particular forecast method is more accurate than another given that certain conditions apply. Choosing weights such that w_t = ρ_t if the conditions hold and w_t = 1 − ρ_t otherwise, 0 < ρ_t < 1, might address this question. The requirement that the weights be B_t-measurable reflects that knowledge that one method is preferable to another under given conditions is only useful if the conditions are known ex-ante. Given a particular forecast method class {Q_θ : θ ∈ Θ = Γ × Υ}, the result of either i. or ii. is an optimal forecast distribution, where optimality is assessed with respect to the chosen loss function or scoring rule. We have used the notation θ̂_n and ϑ̃_n for the parameter estimators in the two cases since the
|
https://arxiv.org/abs/2505.09090v1
|
estimates and the optimal forecasts will not coincide in general, even if the forecast model is the same. Any differences between the two will be due to differences in the estimation technique and the forecasting method as opposed to the forecasting model; hence the focus in our analysis on the former rather than the latter. In the sequential statistical analysis that follows, the information used for estimation is increased from Ω_n to Ω_{n+1}, and according to the value obtained by a scoring rule, θ̂_n (respectively ϑ̃_n), which are based on n observations, will need to be updated to θ̂_{n+1} (respectively ϑ̃_{n+1}).¹ This can be done recursively using a Newton-Raphson type algorithm. Consider the case where θ̂_n is obtained using a geometrically weighted expanding window with tuning parameter λ. Let ℓ_t(θ) = L[Q_θ(Y_{t+h} | Ω_t), y_{t+h}], or ℓ_t(θ) = S[φ[Q_θ(Y_{t+h} | Ω_t)], y_{t+h}], and suppose that ℓ_t(θ) is twice continuously differentiable, t = 1, ..., n + 1. Set L_n(θ) = Σ_{t=1}^{n} λ^{n−t} ℓ_t(θ). Since by definition of θ̂_n we have ∂L_n(θ̂_n)/∂θ = 0, it follows from the Taylor-Young formula that

∂L_{n+1}(θ̂_{n+1})/∂θ = ∂L_{n+1}(θ̂_n)/∂θ + [∂²L_{n+1}(θ̂_n)/∂θ∂θ'](θ̂_{n+1} − θ̂_n) + o(‖θ̂_{n+1} − θ̂_n‖)
  = ∂ℓ_{n+1}(θ̂_n)/∂θ + H_{n+1}(θ̂_n)(θ̂_{n+1} − θ̂_n) + o(‖θ̂_{n+1} − θ̂_n‖) = 0,

where H_{n+1}(θ) = ∂²L_{n+1}(θ)/∂θ∂θ'. If we assume that for n sufficiently large Pr(‖θ̂_{n+1} − θ̂_n‖ < ε) ≥ 1 − ε, ε > 0, and H_{n+1}(θ) is positive definite at θ̂_n, which will be the case if L_{n+1}(θ) is strictly convex in a neighbourhood of θ̂_{n+1}, we obtain the stochastic approximation

θ̂_{n+1} ≈ θ̂_n − [H_{n+1}(θ̂_n)]^{−1} ∂ℓ_{n+1}(θ̂_n)/∂θ.

Starting at an initial value θ̂_{n₀} = argmin_{θ ∈ Θ} L_{n₀}(θ), the previous approximation can be used to obtain θ̂_{n+1} recursively for n ≥ n₀, together with H_{n+1}(θ̂_n) = λ H_n(θ̂_n) + ∂²ℓ_{n+1}(θ̂_n)/∂θ∂θ'.
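The recursive update above can be sketched in the scalar case. For a quadratic loss the Newton step is exact, which makes the recursion easy to check; the function and variable names here are illustrative assumptions, not the paper's code.

```python
def newton_update(theta, H_prev, grad_l, hess_l, lam):
    """One recursive Newton-Raphson step for the geometrically weighted
    objective L_{n+1}(theta) = sum_t lam**(n+1-t) * l_t(theta).

    grad_l and hess_l are the first and second derivatives of the newest
    term l_{n+1}, evaluated at the current estimate theta (scalar case)."""
    H_new = lam * H_prev + hess_l        # H_{n+1} = lam * H_n + d2 l_{n+1}
    theta_new = theta - grad_l / H_new   # theta_{n+1} ~ theta_n - H^{-1} d l_{n+1}
    return theta_new, H_new

# Example: squared-error loss l_t(theta) = (y_t - theta)**2 with lam = 0.5.
# Start from theta_1 = 1 (the minimiser for y_1 = 1), H_1 = 2, absorb y_2 = 3:
# grad of l_2 at theta_1 is -2*(3 - 1), hess is 2.
theta, H = newton_update(1.0, 2.0, -2.0 * (3.0 - 1.0), 2.0, 0.5)
```

Because the weighted objective 0.5(1 − θ)² + (3 − θ)² is quadratic, the single Newton step lands exactly on its minimiser θ = 7/3.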
If H_{n+1}(θ) is not positive definite it can be replaced with H_{n+1}(θ) + ρ_n I, where ρ_n is chosen at each update, giving an interpolation of Newton-Raphson with the steepest descent method, or by replacing H_{n+1}(θ) with a matrix that satisfies the so-called Newton condition, to yield a version akin to a Broyden-Fletcher-Goldfarb-Shanno algorithm. (We refer the reader interested in numerical implementation to Nocedal and Wright, 2006, for more detailed particulars.) 3. Sequential Scoring Rule Evaluation and Termination. 3.1. SSRE Evaluation. Let φ[Q_{θ̂_n}(Y_{n+τ} | Ω_n)], Q ∈ Q, and φ[P_{ϑ̃_n}(Y_{n+τ} | Ω_n)], P ∈ P, denote two alternative forecast methods used to forecast the functional φ, which for ease of reference we designate as method Q and method P. Set

R_n(Q, P) = exp{S(φ[Q_{θ̂_n}(Y_{n+τ} | Ω_n)])} / exp{S(φ[P_{ϑ̃_n}(Y_{n+τ} | Ω_n)])},

(¹The use of parameter updates corresponds to the approach adopted in West (1996).) where, without loss of generality, we will often consider τ = 1. Since the scoring rule is negatively orientated, i.e., lower is better, method Q is deemed to be at least as good as method P if R_n(Q, P) ≤ 1, and vice versa. Define C_n(Q, P) = ∏_{m=1}^{n} R_m(Q, P), and note that C_n(Q, P) can be used to gauge the accumulation of evidence as more observations accumulate. For given methods Q and P the performance of the SSRE scheme will be governed by the values of two boundary constants k_l and k_u, and the true DGP. The influence of the DGP on the behaviour of SSRE will be formulated here in terms of the impact of the true unknown distribution P on the ratio of scoring rules, as measured by R_n(Q, P). Denote by P the set of probability measures on (Y, B), and consider a sequence of natural filtrations B_1 ⊆ ··· ⊆ B_n,
|
https://arxiv.org/abs/2505.09090v1
|
with Y_n adapted to B_n for all n ≥ 1. We will analyse the SSRE under the following hypotheses,

H_Q = [P ∈ P : E_P[R_n(Q, P) | B_n] ≤ 1, (n ≥ 1)]  and  H_P = [P ∈ P : E_P[R_n(Q, P) | B_n] ≥ 1, (n ≥ 1)].

The hypotheses state that, given the information available at the time of forecasting, forecast method Q is expected to be at least as good as method P under scoring rule S, or vice versa.² Indeed, for each n, the hypothesis H_Q implies that

1 ≥ E_P[R_n(Q, P) | B_n] ≥ exp{E_P[log(R_n(Q, P)) | B_n]},

where the lower bound follows by Jensen's inequality, so that taking logarithms yields the usual null hypothesis of mean forecast dominance commonly employed in the literature

(1)  0 ≥ E_P[log(R_n(Q, P)) | B_n] = E_P[S(φ[Q]) − S(φ[P]) | B_n].

The hypotheses H_Q and H_P imply (slightly more than) mean dominance at all time points. According to whether H_Q or H_P does or does not hold, obviously, evidence in favour of or against H_Q or H_P will accumulate in C_n(Q, P) as n increases, and thus a selection of method Q or method P should eventually be made. Ultimately, our goal is to gauge which method has more empirical support given our observed data. Given a sequence of observations {Y_n : n ≥ 1}, arriving sequentially through time, we measure the evidence for or against H_Q using

C_n(Q, P) = ∏_{m=1}^{n} R_m(Q, P) = exp{ Σ_{m=1}^{n} [S(φ[Q], Y_{m+τ}) − S(φ[P], Y_{m+τ})] } = exp{ Σ_{m=1}^{n} D_m[Q, P] }.

If H_Q is satisfied at each n ≥ 1, then E_P[C_n(Q, P) | B_n] ≤ 1, and vice versa under H_P. To gauge the empirical support for H_Q (or H_P), we must consider when C_n(Q, P) is small enough (or large enough) to support H_Q (or H_P). To this end, define the events E_n^Q, E_n^P and E_n^{Q∩P}, for each n, by the inequalities

C_n(Q, P) ≤ k_l,  C_n(Q, P) ≥ k_u  and  k_l < C_n(Q, P) < k_u,  0 < k_l < 1 < k_u.

The formulation of C_n(Q, P) ensures that, for any n ≥ 1, C_n(Q, P) ∈ R_+, so that we can view the constants k_l and k_u as defining regions of plausibility for H_Q or H_P. In particular, ²Zhu and Timmermann (2020) have suggested that the null hypothesis of no inferior forecast performance (H_Q ∩ H_P) used in the forecast tests referenced op. cit.
is unlikely to ever be satisfied in realistic settings. The formulation of C_n(Q, P) and the specification of H_Q and H_P are such that the boundary constants k_l and k_u determine regions where C_n(Q, P) favours either H_Q or H_P (Fig 1: Regions of favorability for H_Q and H_P in terms of the SSRE boundary constants k_l and k_u; values of C_n(Q, P) in (0, k_l] favour H_Q, values in [k_u, ∞) favour H_P, and the interval (k_l, k_u) is the SSRE indifference region). Define an SSRE decision rule by reference to the preference-indifference regions:

(y_1, ..., y_n) ∈ E_n^Q: at time point n select method Q and cease SSRE;
(y_1, ..., y_n) ∈ E_n^P: at time point n select method P and cease SSRE;
(y_1, ..., y_n) ∈ E_n^{Q∩P}: at time point n add data point y_{n+1} and continue SSRE.

The 2n + 1 events E_1^Q, ..., E_n^Q, E_1^P, ..., E_n^P and E_n^{Q∩P} form a disjoint partition of the sample space of Y_1, ..., Y_n, which, together with the previous preference-indifference decision rule, defines an SSRE scheme for selecting a preferred forecasting method. In a typical testing setting, one would accept or reject H_Q (or H_P) according to some measure of statistical evidence. However, the nature of H_Q and H_P, and the sequential nature of data arrival, makes the specification of such criteria cumbersome. In particular, the regions that define the SSRE scheme, which are represented in terms of k_l and k_u, govern the
|
https://arxiv.org/abs/2505.09090v1
|
performance of the scheme. Before examining the relationship between SSRE and the related literature on e-variables, we will first analyse more closely how the choice of boundary values k_l and k_u influences features of SSRE. 3.2. Termination of SSRE. The SSRE process terminates at the smallest integer n for which the inequality k_l < C_n(Q, P) < k_u fails to hold. Thus E_n^{Q∩P} is the event resulting in the outcome N > n, where N is the random variable denoting the SSRE stopping time. The probability that the SSRE process terminates equals

Pr(N < ∞) = lim_{n→∞} Pr(N < n) = 1 − lim_{n→∞} Pr(N ≥ n).

Thus, if lim_{n→∞} Pr(N ≥ n) > 0 the SSRE process will have a positive probability of continuing indefinitely. The following result shows that, in fact, the SSRE stopping time N is finite with probability one, and the tail of its distribution declines towards zero at an exponential rate, under very weak conditions.

PROPOSITION 1. If under H_Q or H_P the distribution of C_n(Q, P) = ∏_{m=1}^{n} R_m(Q, P) is nondegenerate and Pr(E_n^Q) ≠ 0 or Pr(E_n^P) ≠ 0, then for all n ≥ n_ε + 1 there exists a ϱ_ε with 0 < ϱ_ε < 1 such that Pr(N > n) < exp{n_ε log(ϱ_ε)}, and the SSRE process will terminate with probability one.

If the SSRE process terminates with probability one, then the expected value of N, the average number of time points required before either forecast method Q or forecast method P is selected, will be of interest. The following result indicates that all moments of N will in fact exist under the same conditions as for Proposition 1.

COROLLARY 1. Assume that the conditions of Proposition 1 hold. Then the moment generating function ψ_N(s) = E[exp(Ns)] exists for all s in a neighbourhood of zero.

Since the moment generating function of N is convergent in a region containing the origin, it follows that ψ_N(s) is continuously differentiable at s = 0 and hence that E[N^r] exists for r = 1, 2, ....
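Proposition 1 and Corollary 1 can be illustrated with a toy Monte Carlo in which the per-period log score differences D_m are i.i.d. Gaussian; the Gaussian random-walk DGP and all names here are assumptions made purely for illustration, not conditions required by the proposition.

```python
import math
import random

def simulate_stopping_times(mu, sigma, k_l, k_u, reps=1000, max_n=100000, seed=0):
    """Draw SSRE stopping times N when log C_n(Q,P) is a Gaussian random
    walk with drift mu and innovation s.d. sigma (toy DGP for illustration).
    The walk stops when it leaves (log k_l, log k_u)."""
    rng = random.Random(seed)
    log_kl, log_ku = math.log(k_l), math.log(k_u)
    times = []
    for _ in range(reps):
        log_c, n = 0.0, 0
        while log_kl < log_c < log_ku and n < max_n:
            log_c += rng.gauss(mu, sigma)
            n += 1
        times.append(n)
    return times

times = simulate_stopping_times(-0.2, 1.0, 0.05, 20.0)
```

With a negative drift (method Q genuinely better), every replication terminates after a modest number of periods, consistent with Pr(N < ∞) = 1 and the existence of E[N].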
Proposition 1 and its corollary generalise to SSRE results known to hold for the standard SPRT (a test of a simple null hypothesis against a simple alternative using the likelihood ratio statistic from independent and identically distributed data; Wetherill, 1975). Here, however, the hypotheses H_Q and H_P are highly composite, and simplicity of the hypotheses is replaced by the condition that the statistic C_n(Q, P) that replaces the likelihood ratio is based on a consistent (proper) scoring rule for the feature of interest. More importantly, H_Q and H_P do not specify the structure of the stochastic process giving rise to the observed series, which is usually not well enough understood to formulate a suitable stochastic process model for the DGP. The forecaster is therefore at liberty to apply SSRE to any forecasting method and data that they choose. 3.3. Boundary Values k_l and k_u. The probability of selecting method Q at time point n, Pr(E_n^Q | H_a), or selecting method P, Pr(E_n^P | H_a), or of observing an additional value of the series, Pr(E_n^{Q∩P} | H_a), obviously sum to one under each of H_a, a = Q, P. Furthermore, we have already noted that the 2n + 1 events E_1^Q, ..., E_n^Q, E_1^P, ..., E_n^P and E_n^{Q∩P} also form a disjoint partition of the sample space of Y_1, ..., Y_n, so

Pr(∪_{i=1}^{n} E_i^Q | H_a) = Σ_{i=1}^{n} Pr(E_i^Q | H_a),  Pr(∪_{i=1}^{n} E_i^P | H_a) = Σ_{i=1}^{n} Pr(E_i^P | H_a)  and  Pr(E_n^{Q∩P} | H_a) = Pr(N > n | H_a),

are non-negative and also sum to one.
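The selection events whose probabilities are tracked above are generated by the boundary-crossing rule of Section 3.1: accumulate the evidence C_n(Q, P) in log space and stop the first time it leaves (k_l, k_u). A minimal sketch with illustrative names (not the paper's code):

```python
import math

def ssre_run(scores_q, scores_p, k_l, k_u):
    """Run the SSRE decision rule on paired per-period scores.

    C_n(Q,P) = exp(sum_{m<=n} [S_Q,m - S_P,m]); stop with 'Q' once
    C_n <= k_l, with 'P' once C_n >= k_u (0 < k_l < 1 < k_u), and
    report 'continue' if the data are exhausted first."""
    assert 0.0 < k_l < 1.0 < k_u
    log_c = 0.0
    for n, (s_q, s_p) in enumerate(zip(scores_q, scores_p), start=1):
        log_c += s_q - s_p                  # log R_n(Q,P)
        if log_c <= math.log(k_l):
            return "Q", n                   # event T_n^Q
        if log_c >= math.log(k_u):
            return "P", n                   # event T_n^P
    return "continue", len(scores_q)        # event C_n^{Q-and-P}
```

For example, with constant scores S_Q,m = 1 and S_P,m = 2 the log evidence drifts down by one per period, and method Q is selected as soon as log C_n ≤ log k_l.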
|
https://arxiv.org/abs/2505.09090v1
|
Since Pr(∪_{i=1}^{n} E_i^Q | H_a) and Pr(∪_{i=1}^{n} E_i^P | H_a) are nondecreasing and fall in the interval [0, 1], it follows that under H_Q or H_P both

Pr(E_∞^Q | H_a) = lim_{n→∞} Pr(∪_{i=1}^{n} E_i^Q | H_a)  and  Pr(E_∞^P | H_a) = lim_{n→∞} Pr(∪_{i=1}^{n} E_i^P | H_a)

exist. These limits give the probability of selecting method Q and the probability of selecting method P, respectively. Although these may be of theoretical interest, the forecaster will presumably be far less concerned with probabilities that stretch out into the infinite future than with properties of SSRE schemes over finite time horizons. Moreover, Proposition 1 establishes that the SSRE process will terminate with probability one under H_Q or H_P, i.e. Pr(E_∞^{Q∩P} | H_a) = lim_{n→∞} Pr(N > n | H_a) = 0, a = Q, P. Consider then the mutually exclusive events

T_n^Q = {∩_{i=1}^{n−1} E_i^{Q∩P}} ∩ E_n^Q,  T_n^P = {∩_{i=1}^{n−1} E_i^{Q∩P}} ∩ E_n^P  and  C_n^{Q∩P} = ∩_{i=1}^{n} E_i^{Q∩P}.

These events represent the time paths over the time interval t = 1, ..., n leading to the outcomes:

(y_1, ..., y_n) ∈ T_n^Q: SSRE terminates by selecting method Q;
(y_1, ..., y_n) ∈ T_n^P: SSRE terminates by selecting method P;
(y_1, ..., y_n) ∈ C_n^{Q∩P}: SSRE continues.

It follows that β_Q = Pr(T_n^Q | H_P) represents the probability of terminating SSRE at time point n by selecting method Q when in fact the expected score is minimised by method P, and that β_P = Pr(T_n^P | H_Q) represents the probability of terminating SSRE at time point n by selecting method P when in fact the expected score is minimised by method Q. An obviously desirable property of any SSRE scheme is that k_l and k_u should be set so that the risks of making an incorrect decision on termination are suitably bounded.³ In order to determine values of k_l and k_u that will ensure that β_Q and β_P achieve preassigned levels, we will use the following lemma. This result generalises a moment generating function (MGF) property due to Wald that he used to approximate the operating characteristic of the standard SPRT (Wald, 1947, §A 2.3).
In the following lemma, and in subsequent integrals with respect to $P$, the integral is taken over $T^Q_n \cup T^P_n \cup C^{Q\cap P}_n$.

LEMMA 1. Assume that $E_P[\log(C_n(Q,P))] \neq 0$ and that
$$E_P\bigl(\exp[h \log\{C_n(Q,P)\}]\bigr) = \int \exp[h \log\{C_n(Q,P)\}] \, dP < \infty$$
for all $h \in \mathbb{R}$. Suppose, in addition, there exists an $\epsilon > 0$ such that $\Pr(C_n(Q,P) > 1+\epsilon) > 0$ and $\Pr(C_n(Q,P) < 1-\epsilon) > 0$. Then there exists a value of $h \neq 0$, $h = h_P$ say, where $h_P$ and $E_P[\log(C_n(Q,P))]$ are of opposite sign and $h_P$ is such that $E_P[\exp(h_P \log(C_n(Q,P)))] = 1$.

PROOF. Set $m(h) = E_P[\exp(hZ)]$ where $Z = \log(C_n(Q,P))$. For $h > 0$ we have
$$m(h) > \exp\{h \log(1+\epsilon)\} \Pr(Z > \log(1+\epsilon)) = (1+\epsilon)^h \Pr(C_n(Q,P) > 1+\epsilon),$$
and for $h < 0$
$$m(h) > \exp\{h \log(1-\epsilon)\} \Pr(Z < \log(1-\epsilon)) = (1-\epsilon)^h \Pr(C_n(Q,P) < 1-\epsilon),$$
from which it is evident that $m(h) \to \infty$ as $|h| \to \infty$, since $0 < 1-\epsilon < 1$ implies $(1-\epsilon)^h \to \infty$ as $h \to -\infty$. Denoting the first and second derivatives of $m(h)$ by $m'(h)$ and $m''(h)$, we have $m'(0) = E_P[Z] = \mu \neq 0$ and $m''(h) = E_P[Z^2 \exp(hZ)] > 0$. It follows that $m(h)$ is a strictly convex function that increases to infinity as $|h| \to \infty$, and since $m(0) = 1$ there exists a value $h_P \neq 0$ such that $m(h_P) = 1$. Rolle's theorem now implies that $m(h)$ has a unique minimum at a value $h_0$, $|h_0| < |h_P|$, with $m(h_0) < 1$. If $\mu < 0$ then $m(h)$ is decreasing at $h = 0$, which implies that $h_P$ is positive. Conversely, $m(h)$ is increasing at $h = 0$ if $\mu > 0$ and $h_P$ must be negative.

The proof of Lemma 1 is modelled on that of Wald (Lemma A.1, Wald, 1947, § A 2.1) and is presented here for the
sake of completeness and to facilitate translation to SSRE schemes.

Now, consider the evaluation of $\Pr(T^Q_n \mid H_Q)$, and let $\mathcal{Q}$ denote the measure with Radon–Nikodym derivative with respect to $P$ given by
$$\frac{d\mathcal{Q}}{dP} = \exp[h_P \log\{C_n(Q,P)\}] = \exp\Bigl[h_P \Bigl\{\sum_{i=1}^{n} D_i(Q,P)\Bigr\}\Bigr]. \tag{2}$$
By definition $\mathcal{Q}$ defines a probability distribution on $(\times_{m=1}^{n} \mathcal{Y}_m, \mathcal{B}^n)$ that is absolutely continuous with respect to $P$ (Halmos, 1950, § 31 & 32). From the definition of the SSRE scheme we have that method Q will be selected at time point $n$ if $C_n(Q,P) \leq k_l$. Hence
$$\mathbf{1}\{T^Q_n\} = \Bigl(\prod_{i=1}^{n-1} \mathbf{1}\{C^{Q\cap P}_i\}\Bigr) \mathbf{1}\{C_n(Q,P) \leq k_l\},$$
where $\mathbf{1}\{E\}$ denotes the indicator function for the event $E$, and from (1) and Lemma 1 it follows that
$$\mathcal{Q}(T^Q_n) = \int \mathbf{1}\{T^Q_n\} \, d\mathcal{Q} = \int \mathbf{1}\{T^Q_n\} \exp\{h_P \log(C_n(Q,P))\} \, dP \leq \int k_l^{h_P} \mathbf{1}\{T^Q_n\} \, dP = k_l^{h_P} \Pr(T^Q_n \mid H_Q),$$
and $\mathcal{Q}(T^Q_n) = k_l^{h_P+\epsilon_Q} \Pr(T^Q_n \mid H_Q)$ where
$$\epsilon_Q = \frac{\log \mathcal{Q}(T^Q_n) - \log\bigl[k_l^{h_P} \Pr(T^Q_n \mid H_Q)\bigr]}{\log k_l} \geq 0.$$
A parallel argument shows that
$$\mathcal{Q}(T^P_n) \geq k_u^{h_P} \Pr(T^P_n \mid H_P) = k_u^{h_P+\epsilon_P} \Pr(T^P_n \mid H_P), \qquad \epsilon_P = \frac{\log \mathcal{Q}(T^P_n) - \log\bigl[k_u^{h_P} \Pr(T^P_n \mid H_P)\bigr]}{\log k_u} \geq 0.$$
We have already seen that $\Pr(C^{Q\cap P}_n \mid H_a) = \int \mathbf{1}\{C^{Q\cap P}_n\} \, dP \to 0$ as $n \to \infty$, from which it follows that $\lim_{n\to\infty} \mathcal{Q}(C^{Q\cap P}_n) = 0$. Similarly, if termination occurs at time point $n$ then $P(C^{Q\cap P}_n) = P(\emptyset) = 0$, so $\mathcal{Q}(C^{Q\cap P}_n) = 0$, and
$$\mathcal{Q}(T^Q_n) + \mathcal{Q}(T^P_n) = 1 = \Pr(T^Q_n \mid H_a) + \Pr(T^P_n \mid H_a). \tag{3}$$
By definition, $\beta_Q = \Pr(T^Q_n \mid H_P)$, and hence from (3) we have $1 - \beta_Q = \Pr(T^P_n \mid H_P)$, with corresponding definitions for $\beta_P$ and $1 - \beta_P$ obtained by interchanging the roles of Q and P.

³The risk associated with an incorrect SSRE decision is analogous to a hypothesis-test type II error, but $H_Q$ and $H_P$ do not play the conventional role of null and alternative hypotheses here.
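The existence of the root $h_P$ in Lemma 1, and the fact that the exponential tilting in (2) yields a genuine probability measure precisely because $m(h_P) = 1$, can be checked numerically. The two-point distribution for $Z = \log C_n(Q,P)$ below is an assumed toy example, and plain bisection stands in for whatever root-finder one prefers:

```python
import math

# Toy two-point distribution for Z = log C_n(Q,P); E_P[Z] = -0.55 < 0.
support = [-1.0, 0.5]
probs = [0.7, 0.3]

def m(h):
    """MGF m(h) = E_P[exp(h Z)]; m(0) = 1 and m is strictly convex."""
    return sum(p * math.exp(h * z) for p, z in zip(probs, support))

# Since E_P[Z] < 0, Lemma 1 says the nonzero root h_P of m(h) = 1 is positive.
lo, hi = 1.0, 5.0               # bracket chosen so that m(lo) < 1 < m(hi)
for _ in range(200):            # plain bisection, no external libraries
    mid = 0.5 * (lo + hi)
    if m(mid) < 1.0:
        lo = mid
    else:
        hi = mid
h_P = 0.5 * (lo + hi)

# Exponential tilting as in (2): dQ/dP = exp(h_P Z) defines a probability
# distribution exactly because m(h_P) = 1.
tilted = [p * math.exp(h_P * z) for p, z in zip(probs, support)]
print(h_P, sum(tilted))   # sum(tilted) equals 1 up to rounding
```

The tilted weights sum to one, which is the discrete analogue of $\mathcal{Q}$ being a probability measure absolutely continuous with respect to $P$.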
Applying the equality in (3) in conjunction with the relationships between $\Pr(T^Q_n \mid H_Q)$ and $\Pr(T^P_n \mid H_P)$, and $\mathcal{Q}(T^Q_n)$ and $\mathcal{Q}(T^P_n)$, on one hand, and $\beta_Q$ and $\beta_P$ on the other, we find, after some somewhat tedious algebra, that
$$\beta_Q = \frac{k_l^{h_P+\epsilon_Q}\bigl(k_u^{h_P+\epsilon_P} - 1\bigr)}{k_u^{h_P+\epsilon_P} - k_l^{h_P+\epsilon_Q}} \quad\text{and}\quad \beta_P = \frac{1 - k_l^{h_P+\epsilon_Q}}{k_u^{h_P+\epsilon_P} - k_l^{h_P+\epsilon_Q}}, \tag{4}$$
from which the following results follow:
$$\frac{\beta_Q}{1-\beta_P} = k_l^{h_P+\epsilon_Q} < 1 < k_u^{h_P+\epsilon_P} = \frac{1-\beta_Q}{\beta_P}$$
under $H_Q$, and under $H_P$
$$\frac{\beta_P}{1-\beta_Q} = k_l^{h_P+\epsilon_P} < 1 < k_u^{h_P+\epsilon_Q} = \frac{1-\beta_P}{\beta_Q}.$$
This leads us to the following boundary value result.

PROPOSITION 2. In order to achieve preassigned levels of risk $\beta_Q$ and $\beta_P$ on termination, the boundary constants $k_l$ and $k_u$ should be set equal to
$$k_l = \Bigl(\frac{\beta_Q}{1-\beta_P}\Bigr)^{1/(h_P+\epsilon_Q)} \quad\text{and}\quad k_u = \Bigl(\frac{1-\beta_Q}{\beta_P}\Bigr)^{1/(h_P+\epsilon_P)}.$$

The boundary values obtained in Proposition 2 correspond to the boundary inequalities of the standard SPRT (cf. Wetherill, 1975, Section 2.3). Although they are relevant, the use of Proposition 2 to implement SSRE is infeasible as it stands because neither $h_P$ nor the perturbations $\epsilon_Q$ and $\epsilon_P$ will be known. We will delay consideration of how the parameters $k_l$ and $k_u$ can be determined in practice until after we have outlined the consequences of not terminating SSRE within a given time frame.

3.4. Implementation and Truncation. Suppose that a forecaster has two forecast methods $\varphi[Q_{\hat\theta_n}(Y_{n+\tau} \mid \Omega_n)]$ and $\varphi[P_{\tilde\vartheta_n}(Y_{n+\tau} \mid \Omega_n)]$ that they contemplate using to forecast the functional $\varphi$, and that they wish to select method Q or method P before making a prediction of the functional $\varphi[Y_{T+\tau}]$ at time $T$. Assume that they collect a sample of $T$ values of $Y$, $y_1, \ldots, y_T$, and partition the sample into the first $R \ll T$ values of ‘training data’ used to evaluate $\hat\theta_0$ and $\tilde\vartheta_0$, and out-of-sample ‘test data’