text string | source string |
|---|---|
$\mathbb{E}_j\big[\beta_{i_1i_2j}(z_1)\mathbf{Q}^{-1}_{i_1i_2j}(z_1)\mathbf{r}_{i_2}\mathbf{r}^{\top}_{i_2}\mathbf{Q}^{-1}_{i_1i_2j}(z_1)\check{\mathbf{Q}}^{-1}_{i_1i_2j}(z_2)\big]\times|b_1(z_1)|^2\,\mathrm{tr}\,\mathbf{H}_n(z_1)\big(\mathbf{r}_{i_2}\mathbf{r}^{\top}_{i_2}-N^{-1}\boldsymbol{\Phi}\big)\,\mathbb{E}_j\big[\beta_{i_2i_1j}(z_1)\mathbf{Q}^{-1}_{i_2i_1j}(z_1)\mathbf{r}_{i_1}\mathbf{r}^{\top}_{i_1}\mathbf{Q}^{-1}_{i_2i_1j}(z_1)\check{\mathbf{Q}}^{-1}_{i_2j}(z_2)\big]$, $S^{(22)}_1=-\sum_{i_1\neq i_2<j}\mathbb{E}\,\mathrm{tr}\,\mathbf{H}_n(z_1)\big(\mathbf{r}_{i_1}\mathbf{r}^{\top}_{i_1}-N^{-1}\boldsymbol{\Phi}\big)\,\mathbb{E}_j\big[\beta_{i_1i_2j}(z_1)\mathbf{Q}^{-1}_{i_1i_2j}(z_1)\mathbf{r}_{i_2}\mathbf{r}^{\top}_{i_2}\mathbf{Q}^{-1}_{i_1}\ldots$ | https://arxiv.org/abs/2505.08210v1 |
covariance matrices when $p$ and $n$ both tend to infinity with their ratio converging to zero. Bernoulli 18, 1405–1420. https://doi.org/10.3150/11-BEJ381 CHEN, B. and PAN, G. (2015). CLT for linear spectral statistics of normalized sample covariance matrices with the dimension much larger than the sample size. Bernoulli 21, 1... | https://arxiv.org/abs/2505.08210v1 |
arXiv:2505.08262v1 [cs.LG] 13 May 2025. Super-fast Rates of Convergence for Neural Networks Classifiers under the Hard Margin Condition. Nathanael Tepakbong∗1, Ding-Xuan Zhou†2, and Xiang Zhou‡3. 1Department of Data Science, City University of Hong Kong, Hong Kong SAR. 3Department of Mathematics, City University of Hong Kon... | https://arxiv.org/abs/2505.08262v1 |
Hall, 2012), which can be thought of as an infinite-dimensional analogue of the classical margin conditions, can lead to super-fast rates of convergence for RKHS classifiers (Wakayama and Imaizumi, 2024). Perhaps surprisingly, however, for Deep Neural Network (DNN) hypothesis spaces, no such “super-fast” rates of conv... | https://arxiv.org/abs/2505.08262v1 |
of the authors’ knowledge, it has only been shown in (Hu et al., 2022b) that the hard-margin condition, which is the limit as $q\to\infty$ of Tsybakov’s noise condition, can lead to exponential rates of convergence for the excess risk for Neural Networks: they prove the result for shallow networks in the Neural Tangent Kernel ... | https://arxiv.org/abs/2505.08262v1 |
denote by • respectively $|x|_p:=(|x_1|^p+\dots+|x_d|^p)^{1/p}$, $|x|_0:=|x_1|^0+\dots+|x_d|^0$ (with the convention $0^0:=0$) and $|x|_\infty:=\max_{1\le i\le d}|x_i|$, the $\ell_p$, $\ell_0$ and $\ell_\infty$ (quasi-)norms of $x$; • $|A|_p:=\big(\sum_{i,j}|a_{i,j}|^p\big)^{1/p}$ the $\ell_{p,p}$ norm of $A$; • respectively $\|f\|_{C(\mathcal{X})}$ and $\|f\|_{L^p(\mu)}$ the supremum norm and $L^p(\mathcal{X},\mu)$ norm of $f$, which are defined in the standard way. ... | https://arxiv.org/abs/2505.08262v1 |
kernel-based classifiers under margin conditions also consider the square loss as a surrogate (Steinwart and Scovel, 2005; Steinwart and Christmann, 2008). We thus have an analogous setting for DNNs and can meaningfully compare the two approaches. To match what is often done in practice, we also introduce a penalty fun... | https://arxiv.org/abs/2505.08262v1 |
in the literature as a truncation function) $\mathrm{clip}_D:\mathbb{R}\to\mathbb{R}$ can be implemented by a shallow ReLU neural network. Indeed, we have $\mathrm{clip}_D(x)=\sigma(x)-\sigma(-x)-\sigma(x-D)+\sigma(-x-D)=\big(F_\sigma(\theta_D)\big)(x)$ for all $x\in\mathbb{R}$, where $\sigma(x)\equiv\mathrm{ReLU}(x)$ and $\theta_D:=\Big(\big(\begin{bmatrix}1&-1&1&-1\end{bmatrix}^{\top},\begin{bmatrix}0&0&-D&-D\end{bmatrix}^{\top}\big),\big(\begin{bmatrix}1&-1&-1&1\end{bmatrix},0\big)\Big)$ (a numerical check of this identity appears after the table). This shows that... | https://arxiv.org/abs/2505.08262v1 |
there exists $\delta>0$ such that $P(|\eta(X)|>\delta)=1$. Assumption (A2) was originally introduced in (Mammen and Tsybakov, 1999) as a characterization of classification problems for which the two classes are in some sense “separable”, and has been repeatedly shown in the literature to lead to faster rates of convergence for var... | https://arxiv.org/abs/2505.08262v1 |
solutions: Lemma 2. Assume that (A4) holds and denote by $R^*:=\sup_{n\ge1}\{|\theta|_\infty:\theta\in\operatorname{argmin}\widehat{R}_{\ell,n}\}$ the supremum. For all $n\ge1$, $\lambda>0$ the argmins of $\widehat{R}_{\ell,\lambda}$ are not empty, and we have the inequality $\sup_{\lambda\ge0,\,n\ge1}\{|\theta|_\infty:\theta\in\operatorname{argmin}\widehat{R}_{\ell,\lambda}\}\le R^*\cdot P(a)^{1/p}$, ¹In reality, we only need (A3) to be true for all $t$ in a neighborhood of 0, but we omi... | https://arxiv.org/abs/2505.08262v1 |
|θ−θ′|∞ its Lipschitz constant. Intuitively, the Lipschitz constant of the realization map estimates the complexity of the Neural Network hypothesis space in the sense that it controls how different two realizations can be given that their parametrizations are close. For this reason, the problem of estimating a Neural ... | https://arxiv.org/abs/2505.08262v1 |
the number of $\ell_\infty$ balls of radius $\varepsilon/\mathrm{Lip}(F_\sigma)$ needed to cover the hypercube $[-R,R]^{P(a)}$. It is straightforward to check that the collection of such balls centered at the points $-R\vec{1}+\varepsilon\vec{k}$, where $\vec{k}=[k_1,k_2,\dots,k_{P(a)}]^{T}$ and $k_i\in\{0,1,\dots,\lceil 2R\,\mathrm{Lip}(F_\sigma)/\varepsilon\rceil\}$ (a small counting sketch follows the table), where $\vec{1}$ is the vector whose entries are all ones, covers $[-R,R]^{P(a)}$... | https://arxiv.org/abs/2505.08262v1 |
constant grows slower than $n^{-1}$. Applying this strategy leads to the following excess risk bound for deep FCNN classifiers: Theorem 3. Assume that assumptions (A3), (A4) and (A5) hold, and let $\alpha>0$ be a desired order of convergence. There exists a FCNN architecture $a_n$ with parameter bound $R_n$, width $W_n$ and depth $L_n$ given by ... | https://arxiv.org/abs/2505.08262v1 |
the following Lemma: Lemma 5. Let $R^*>0$, $L^*\in\mathbb{N}$ be fixed, and $a^*\in\mathbb{N}^{L+1}$ be any FCNN architecture. For any parametrization $\theta^*\in\mathcal{P}_{a^*,R^*}$, there exists a distribution $\rho_{\theta^*}$ on $\mathcal{X}\times\mathcal{Y}$ such that $E_{(X,Y)\sim\rho_{\theta^*}}[Y|X=x]=f(x;\theta^*)$ for $\rho_X$-a.e. $x\in\mathcal{X}$, where $f(\cdot;\theta^*):\mathcal{X}\to[-1,1]$ is the function realized by $\theta^*$. Proof. Let $X\sim\rho_X$ and $U\sim\mathrm{Uniform}([-1,1])$ be two i... | https://arxiv.org/abs/2505.08262v1 |
$\mathrm{NN}, K(2^{-L}\delta)^{r}\,24\,\mathrm{Lip}(F_\sigma)^{1+r}$ are constants which do not depend on $n$. 4 Conclusion and discussion. We have established in this work a general upper bound on the excess risk of ReLU Deep Neural Network classifiers under the hard-margin condition, and have shown how it can be used to deduce “super-fast” rates of convergenc... | https://arxiv.org/abs/2505.08262v1 |
we have $|\mathcal{R}(\operatorname{sign}f)-\mathcal{R}(\operatorname{sign}g)|\le P_{x\sim\rho_X}\big(\|f-g\|_{L^\infty(\rho_X)}\ge|f(x)|\big)$ (a Monte Carlo check of the intermediate sign-disagreement bound follows the table). Proof of Lemma 6. We have $|\mathcal{R}(\operatorname{sign}f)-\mathcal{R}(\operatorname{sign}g)|=|E[\mathbf{1}\{\operatorname{sign}f(X)\neq Y\}-\mathbf{1}\{\operatorname{sign}g(X)\neq Y\}]|\le E[|\mathbf{1}\{\operatorname{sign}f(X)\neq Y\}-\mathbf{1}\{\operatorname{sign}g(X)\neq Y\}|]\le E[\mathbf{1}\{\operatorname{sign}f(X)\neq\operatorname{sign}g(X)\}]=P(\operatorname{sign}f(X)\neq\operatorname{sign}g(X))$. But now observe that for any $x\in\mathcal{X}$, $\operatorname{sign}f(x)\neq\operatorname{sign}g(x)\implies|f(x)-g(x)|\ge|f(x)|$. ... | https://arxiv.org/abs/2505.08262v1 |
positive $\lambda$, we have $R_\ell(f(\cdot;\theta_\lambda))=R_{\ell,\lambda}(f(\cdot;\theta_\lambda))-\frac{\lambda}{2}|\theta_\lambda|_p^p\le R_{\ell,\lambda}(f(\cdot;\theta_\lambda))\le R_{\ell,\lambda}(f(\cdot;\theta^*))=R_\ell(f(\cdot;\theta^*))+\frac{\lambda}{2}|\theta^*|_p^p\le R_\ell(f(\cdot;\theta^*))+\frac{\lambda}{2}P(a)R^p$, where $P(a)$ is the number of parameters in the architecture $a$. From the identity (18) above, we deduce that $\|f(\cdot;\theta_\lambda)-\eta\|^2_{L^2(\rho_X)}$ differs from $\|f(\cdot;\theta^*)-\eta\|^2_{L^2(\rho_X)}$ by at most $\lambda P(a)$... | https://arxiv.org/abs/2505.08262v1 |
: $\mathcal{R}(\operatorname{sign}f(\cdot;\theta_\lambda))-\mathcal{R}(\operatorname{sign}\eta)\le\|f(\cdot;\theta_\lambda)-\eta\|_{L^2(\rho_X)}\le\varepsilon_{\mathrm{approx}}+\sqrt{2^{p-1}\lambda P(a)R^p}$. (19) It only remains to bound the second summand. To that end, we apply Lemma 6, which yields: $\mathcal{R}(\operatorname{sign}f(\cdot;\hat\theta_\lambda))-\mathcal{R}(\operatorname{sign}f(\cdot;\theta_\lambda))\le P\big\{\|f(\cdot;\hat\theta_\lambda)-f(\cdot;\theta_\lambda)\|_{L^\infty(\rho_X)}\ge|f(X;\theta_\lambda)|\big\}$. Now note that thanks to inequality (19), we can apply the “high-probability”... | https://arxiv.org/abs/2505.08262v1 |
$r)\cdot\big(\sqrt{s}\,L_0\log^2L_0+2d\big)(1+p)^{-1}$. To conclude the proof for the case (A1), we let $\varepsilon_{\mathrm{approx}}\equiv n^{-\alpha/r}$, $\delta\equiv2n^{-\alpha/(2r)}$ and $\nu\equiv n^{-\alpha/(2r)}$: observe that by picking $\lambda$ such that $0\le\lambda\le2^{p-1}\varepsilon^2_{\mathrm{approx}}\big((R^*)^pP(a_n)^2\big)^{-1}=O\big(n^{-2\alpha(s+d)/(rs)}\big)$, we have $\lambda<2^{p-1}(\delta-\varepsilon_{\mathrm{approx}})^2\big(R^pP(a_n)\big)^{-1}$ and $\varepsilon_{\mathrm{approx}}+\sqrt{2^{1-p}\lambda P(a_n)R^p}\le2\varepsilon_{\mathrm{approx}}$. We are thus allowed to apply Theor... | https://arxiv.org/abs/2505.08262v1 |
2012. Luc Devroye, László Győrfi, and Gábor Lugosi. A probabilistic theory of pattern recognition, volume 31. Springer Science & Business Media, 2013. Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, and Helmut Bölcskei. Deep neural network approximation theory. IEEE Transactions on Information ... | https://arxiv.org/abs/2505.08262v1 |
a classification setting. Electronic Journal of Statistics, 17(2):3613–3659, 2023. Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. Advances in Neural Information Processing Systems, 27, 2014. Atsushi Nitanda and Taiji Suzuki. Optimal rates f... | https://arxiv.org/abs/2505.08262v1 |
arXiv:2505.08565v1 [math.ST] 13 May 2025. On testing the class of symmetry using entropy characterization and empirical likelihood approach. Ganesh Vishnu Avhad^a, Ananya Lahiri^a, Sudheesh K. Kattumannil^b. ^aDepartment of Mathematics and Statistics, Indian Institute of Technology, Tirupati, Andhra Pradesh, India. ^bStatistical... | https://arxiv.org/abs/2505.08565v1 |
be found in Vexler et al. (2023). Symmetry holds significant relevance in non-parametric statistics and structured models, where a common assumption is that errors are distributed symmetrically. In the regression setup, Allison and Pretorius (2017) conducted an extensive Monte Carlo simulation study to test whether the... | https://arxiv.org/abs/2505.08565v1 |
the empty set issue and low coverage probability, the adjusted jackknife empirical likelihood (AJEL) method was proposed by Chen et al. (2008) and Wang et al. (2015). In what follows, we introduce a novel entropy-based characterization for the symmetric distributions and propose a class of goodness-of-fit tests for t... | https://arxiv.org/abs/2505.08565v1 |
$\frac{w(u)}{\bar F(u)}\big(\phi(x)-\phi(u)\big)\,dF(x)\,dF(u)-\int_{-\infty}^{\infty}\int_{-\infty}^{u}\frac{w(u)}{F(u)}\big(\phi(u)-\phi(x)\big)\,dF(x)\,dF(u)=\int_{-\infty}^{\infty}\frac{w(u)}{\bar F(u)}\int_{u}^{\infty}\big(\phi(x)-\phi(u)\big)\,dF(x)\,dF(u)-\int_{-\infty}^{\infty}\frac{w(u)}{F(u)}\int_{-\infty}^{u}\big(\phi(u)-\phi(x)\big)\,dF(x)\,dF(u)=\int_{-\infty}^{\infty}\Big[\frac{1}{\bar F(u)}\int_{u}^{\infty}\big(\phi(x)-\phi(u)\big)\,dF(x)-\frac{1}{F(u)}\int_{-\infty}^{u}\big(\phi(u)-\phi(x)\big)\,dF(x)\Big]w(u)\,dF(u)\quad(5)\;=\int_{-\infty}^{\infty}g(u)w(u)\,dF(u)$ (say). We know that if $g(u)\ge0$ for all $u$ in some domain $D$, an... | https://arxiv.org/abs/2505.08565v1 |
$+e^{-x})$, $-\infty\le x\le\infty$. One has $\int_{-\infty}^{\infty}\bar F(u)F(u)\,E[X-u\mid X\ge u]\,dF(u)=\int_{-\infty}^{\infty}F(u)\int_{u}^{\infty}(x-u)\,dF(x)\,dF(u)=\int_{-\infty}^{\infty}\int_{u}^{\infty}(x-u)\frac{e^{-(x+u)}}{(1+e^{-x})^3(1+e^{-u})^2}\,dx\,du=\frac{3}{4}$, and $\int_{-\infty}^{\infty}\bar F(u)F(u)\,E[u-X\mid X\le u]\,dF(u)=\int_{-\infty}^{\infty}\bar F(u)\int_{-\infty}^{u}(u-x)\,dF(x)\,dF(u)=\int_{-\infty}^{\infty}\int_{-\infty}^{u}(u-x)\frac{e^{-(2x+u)}}{(1+e^{-x})^3(1+e^{-u})^2}\,dx\,du=\frac{3}{4}$. 3. Exponential distribution. Let $X$ follow an exponential di... | https://arxiv.org/abs/2505.08565v1 |
written as $\hat\Delta_n=\frac{6}{n(n-1)(n-2)}\Big(\frac{3}{6\times2}\sum_{i=1}^{n}(i-1)(i-2)X_{(i)}+\frac{3}{6\times2}\sum_{i=1}^{n}(n-i-1)(n-i)X_{(i)}\Big)-\frac{1}{n}\sum_{i=1}^{n}X_{(i)}=\frac{3}{2n(n-1)(n-2)}\Big(\sum_{i=1}^{n}(i^2-3i+2)X_{(i)}+\sum_{i=1}^{n}(n^2-2ni+i^2-n+i)X_{(i)}\Big)-\frac{1}{n}\sum_{i=1}^{n}X_{(i)}=\frac{3}{2n(n-1)(n-2)}\Big(\sum_{i=1}^{n}(n^2-2ni-n+2i^2-2i+2)X_{(i)}\Big)-\frac{1}{n}\sum_{i=1}^{n}X_{(i)}=\frac{3}{2n(n-1)(n-2)}\Big(\sum_{i=1}^{n}\big(n(n-1)-2(i(n+1-i)-1)\big)X_{(i)}\Big)-\frac{1}{n}\sum_{i=1}^{n}X_{(i)}$ = 3/(2n(n−2... (a runnable sketch of this statistic follows the table) | https://arxiv.org/abs/2505.08565v1 |
outlined in Algorithm 1. Algorithm 1: Nonparametric bootstrap algorithm to find C1 and C2 (a runnable version follows the table). x ← generate from null distribution; n ← length(x); Δ̂n ← calculate test statistic Δ̂n(x); B ← 1000 ▷ number of bootstrap replicates; for (b from 1 to B) { i ← sample(1:n, size = n, replace = TRUE); y ← x[i]; deltas[b] ← Δ̂n(y) }; deltas ← sort(deltas); C1 ← qua... | https://arxiv.org/abs/2505.08565v1 |
we have $\frac{1}{n}\sum_{i=1}^{n}\hat V_i^2=\hat\sigma^2+o_p(n^{-1/2})$ almost surely. Thus $\lambda_1=\Big(\frac{1}{n}\sum_{i=1}^{n}\hat V_i\Big)\Big/\Big(\frac{1}{n}\sum_{i=1}^{n}\hat V_i^2\Big)+o_p(n^{-1/2})=\hat\Delta_{\mathrm{jack}}/\hat\sigma^2+o_p(n^{-1/2})$. Using Lemma A1 of Jing et al. (2009), the condition (24) holds, and $E(Y^2)<\infty$ and $\sigma^2>0$ are also satisfied. The Wilks’ theorem of JEL can be established as $-2\log R=2\sum_{i=1}^{n}\big(\lambda_1\hat V_i-\frac{1}{2}\lambda_1^2\ldots$ | https://arxiv.org/abs/2505.08565v1 |
the tests, we considered various symmetric distributions such as standard normal (N(0,1)), standard Laplace (Lap(0,1)), standard lognormal (Log(0,1)), and a mixture of normal (MxN(0.5)). The critical values of the SCR approach are calculated using the generated samples from the considered symmetric distributions.... | https://arxiv.org/abs/2505.08565v1 |
•MGG: Miao, Gel and Gastwirth test statistic (Miao et al. (2006)), •MOI: Milošević and Obradović test statistic (Milošević and Obradović (2016)), •NAI: Statistic of the Nikitin and Ahsanullah test (Nikitin and Ahsanullah (2015)), •B1: Statistic of the test $\sqrt{b_1}$ (Milošević and Obradović (2019)), •SGN: Sign test... | https://arxiv.org/abs/2505.08565v1 |
0.871 0.169 0.873 0.622 0.628 0.827 0.362; n=200: 1.000 1.000 0.999 0.206 0.359 1.000 0.192 1.000 0.879 0.820 0.985 0.490; Log(0.5), n=25: 0.482 0.406 0.373 0.240 0.324 0.305 0.385 0.373 0.421 0.398 0.272 0.221; n=50: 0.692 0.713 0.654 0.416 0.492 0.638 0.585 0.551 0.783 0.734 0.545 0.401; n=100: 0.801 0.910 0.917 0.610 0.660 0.873 0.814... | https://arxiv.org/abs/2505.08565v1 |
1.000 1.000 0.948 1.000 1.000 1.000 0.987 1.000 1.000 1.000 1.000; Log(1), n=25: 0.748 0.861 0.833 0.571 0.509 0.664 0.571 0.623 0.527 0.671 0.631 0.609; n=50: 1.000 0.893 0.864 0.791 0.871 0.890 0.790 0.809 0.738 0.809 0.907 0.799; n=100: 1.000 0.999 0.978 0.949 1.000 0.992 0.937 0.964 0.966 0.985 1.000 0.938; n=200: 1.000 1.000 1.000... | https://arxiv.org/abs/2505.08565v1 |
were obtained by applying the tests to the data, while bootstrap p-values were obtained by generating 10000 samples from the normal distribution using the estimated parameter. Each test was applied to the sample data, and the frequency with which the test statistic exceeded the corresponding critical value was noted. T... | https://arxiv.org/abs/2505.08565v1 |
ć, M. (2020). New characterization-based symmetry tests. Bulletin of the Malaysian Mathematical Sciences Society, 43:297–320. Cabilio, P. and Masaro, J. (1996). A simple test of symmetry about an unknown median. The Canadian Journal of Statistics/La Revue Canadienne de Statistique, pages 349–361. Chen, D., Dai, L... | https://arxiv.org/abs/2505.08565v1 |
Statistics and its Application, 8:329–344. Lee, A. J. (2019). U-statistics: Theory and Practice. Routledge. Litvinova, V. (2001). New nonparametric test for symmetry and its asymptotic efficiency. Master's thesis, Vestnik St. Petersburg University: Mathematics. ... | https://arxiv.org/abs/2505.08565v1 |
arXiv:2505.08908v1 [math.ST] 13 May 2025. Statistical Decision Theory with Counterfactual Loss∗. Benedikt Koch†, Kosuke Imai‡. May 22, 2025. Abstract: Classical statistical decision theory evaluates treatment choices based solely on observed outcomes. However, by ignoring counterfactual outcomes, it cannot assess the quality... | https://arxiv.org/abs/2505.08908v1 |
expectation is taken over the joint distribution of $X$ and $Y(\pi(X))$. Since only one potential outcome is observed per unit, the risk is generally unidentifiable unless the treatment rule coincides with the one used to generate the data. Consequently, additional assumptions, most notably strong ignorability of treatment a... | https://arxiv.org/abs/2505.08908v1 |
where $Y=1$ indicates survival and $Y=0$ otherwise. For notational simplicity, we assume no covariates are observed. Suppose we evaluate the physician's decision using the standard loss $\ell(D;Y(D))$, which assigns a value to each pair of decision and realized outcome $(D,Y(D))$. Let $c_1>0=c_0$ denote the cost of providing trea... | https://arxiv.org/abs/2505.08908v1 |
a time, $Y(d)$ for $d=0,1$. In contrast, a fully general counterfactual loss may depend on both potential outcomes at the same time. This leads naturally to the concept of principal strata, which classifies individuals according to their joint potential outcomes [Frangakis and Rubin, 2002]: •Never survivors $(Y(0),Y(1))=$... | https://arxiv.org/abs/2505.08908v1 |
less invasive one would have sufficed (i.e., when $Y(k)=1$ for some $k<D$). When does the counterfactual loss recommend a different treatment than the standard loss? Let $\mu_d=E[Y(d)]$ denote the average potential outcome under treatment $d$. Then, the difference between the counterfactual and standard risks for a given tre... | https://arxiv.org/abs/2505.08908v1 |
$U$ denotes unobserved covariates. Alternatively, $D^*_i$ may arise from a stochastic policy, where $\Pr(D^*_i=d)=\pi_d(X_i,U_i)$ for $d\in\mathcal{D}$ and $\sum_{d=0}^{K-1}\pi_d(X_i,U_i)=1$. Importantly, we do not observe the corresponding potential outcome $Y_i(D^*_i)$. We study identifiability of counterfactual risk under standard assumptions from the causal inf... | https://arxiv.org/abs/2505.08908v1 |
$\varpi:\mathcal{Y}^K\times\mathcal{X}\to\mathbb{R}$ is the intercept function. The additive counterfactual loss equals the sum of two terms; the first term depends on the decision $D^*=d$, while the second term does not. Moreover, the first term is the sum of weight functions $\omega_k$, each depending only on the corresponding potential outcome $Y_i(k)$ given the cov... | https://arxiv.org/abs/2505.08908v1 |
to Definition 4, the counterfactual risk $R(D^*;\ell)$ is identifiable with respect to $Q$ if it can be expressed as a function of the observable distribution $P(D^*,D,Y,X)$. As formally shown in Lemma 2 (see Appendix G.2), since $P(X)$ is observable, the counterfactual risk $R(D^*;\ell)$ is identifiable with respect to $P(D^*,D,Y,X)$ i... | https://arxiv.org/abs/2505.08908v1 |
risk. This happens when the second term in the additive counterfactual loss (Definition 3) is zero, i.e., $\varpi(\mathbf{y},\mathbf{x})=0$ for all $\mathbf{y}\in\mathcal{Y}^K$ and $\mathbf{x}\in\mathcal{X}$. Lastly, by placing assumptions on $\varpi$, we can control the magnitude of $E[C(X)]$, which leads to partial identification results. A natural question arises as to whether or not the above... | https://arxiv.org/abs/2505.08908v1 |
is an essential part of standard decision theory, this difficulty is unique to the counterfactual loss. Avoidance of an incorrect decision (“true negative”), i.e., $(D^*,Y(d))=(d',0)$ in the top right cell, results in a decrease of the loss, while failure to make a correct “true positive” decision, i.e., $(D^*,Y(d))=($... | https://arxiv.org/abs/2505.08908v1 |
can place different restrictions on the loss function to satisfy additivity. Standard statistical decision theory assumes that the loss depends only on choice and its realized outcome. This implies that when D∗= 0, never survivors and responsive units, for whom we have Y(0) = 0, incur the same loss. Similarly, always s... | https://arxiv.org/abs/2505.08908v1 |
yellow), or unidentifiable (in red), under strong ignorability. An asterisk (∗) signifies a necessary and sufficient condition. The second column shows the number of free parameters in the loss specification. In Table 2, this equality constraint implies that the sum of orange cells is equal to the sum of purple cells acro... | https://arxiv.org/abs/2505.08908v1 |
the binary decision setting, any additive counterfactual risk admits a standard risk that yields identical treatment recommendations. As a consequence, both of these risks lead to the same optimal policy. Perhaps surprisingly, this result is not limited to binary outcomes. In addition, the corresponding standard loss ℓ... | https://arxiv.org/abs/2505.08908v1 |
is observed per unit, counterfactual risks are generally unidentifiable. We show that under the standard assumption of strong ignorability, a counterfactual loss leads to an identifiable risk if and only if it is additive in the potential outcomes. Moreover, we establish that such additive counterfactual losses do no... | https://arxiv.org/abs/2505.08908v1 |
Biometrika, 70(1):41–55, 1983. Donald B. Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):688, 1974. Leonard J. Savage. The foundations of statistics. Courier Corporation, 1972. Abraham Wald. Statistical decision functions. Wiley, 1950. J... | https://arxiv.org/abs/2505.08908v1 |
$i=1,\dots,N$). Thus, each element of $p(\mathbf{x})$ is indexed by $i=i(d,\mathbf{y})$ and is equal to $p_i=\Pr(D^*=d,Y(0)=y_0,\dots,Y(K-1)=y_{K-1}\mid X=\mathbf{x})$. Then, by Definition 2, we can write the conditional counterfactual risk as the following linear function of $p(\mathbf{x})$: $R_{\mathbf{x}}(D^*;\ell)=\ell^{\top}p(\mathbf{x})$, where $\ell=(\ell_1,\dots,\ell_N)^{\top}$ with $\ell_i=\ell(d,\mathbf{y},\mathbf{x})$ for... (a tiny numerical illustration follows the table) | https://arxiv.org/abs/2505.08908v1 |
to the probabilities $\{\Pr(D^*=d,Y(k)=y_k\mid X=\mathbf{x})\}_{d\in\mathcal{D},k\in\mathcal{D},y_k\in\mathcal{Y}}$, while omitting the other terms involving multiple potential outcomes. Then, instead of Lemma 3(b), we apply Lemma 3(a). □ D Proof of Proposition 1. We first show the existence of such a standard loss. When $\mathcal{D}=\{0,1\}$, Theorem 1 implies $R_{\mathbf{x}}(D^*;\ell_{\mathrm{Add}})=\sum_{d\in\{0,1\}}\sum_{k\in\{0\ldots}$ | https://arxiv.org/abs/2505.08908v1 |
$y\in\mathcal{Y}$. Then, by the first part of this proposition, this corresponds to an additive counterfactual loss with the same treatment recommendations. Since $\lambda$ is arbitrary, there exist infinitely many additive counterfactual losses that map to the same standard loss. □ E Proof of Proposition 2. Let $\ell_{\mathrm{Add}}(d;\mathbf{y})$ denote an additive c... | https://arxiv.org/abs/2505.08908v1 |
surjective mapping that associates each joint distribution with an observable distribution. Let $\theta(P)$ denote the (causal) estimand of interest under distribution $P\in\mathcal{P}$. For a fixed observable distribution $Q_0\in\mathcal{Q}$ define $S_0=\{P\in\mathcal{P}:Q(P)=Q_0\}$. Then, the following statements are equivalent: (a) For every $Q_0\in\mathcal{Q}$ there exists a cons... | https://arxiv.org/abs/2505.08908v1 |
Define a function $f:\mathcal{Q}\to\mathbb{R}$ such that $f(Q(P))=\int_{x\in\mathcal{X}}g(Q_X(P))\,dP_X(x)$. Note that we can define $f$ in such a way as both $Q_X(P)$ and $P_X$ are obtainable from $Q$. Hence $R(P)=f(Q(P))$ and so $R$ is identifiable by Lemma 1. □ G.3 Lemma 3. Let $D,Y_0,\dots,Y_{K-1}$ be discrete random variables such that $D\in\mathcal{D}=\{0,1,\dots,K-1\}$ and $Y$... | https://arxiv.org/abs/2505.08908v1 |
to verify the kernel property of $C$. We start by partitioning $C=\begin{bmatrix}C_1\\ C_2\end{bmatrix}$, where $C_1$ is a $K^2M\times N$ matrix of zeros and ones, containing all the rows corresponding to the marginal distributions $\{\Pr(D=d,Y_k=y_k)\}_{d\in\mathcal{D},k\in\mathcal{D},y_k\in\mathcal{Y}}$, which are included in $C$ by assumption. Let $v\in\operatorname{Ker}(C)$; then $Cv=\begin{bmatrix}C_1v\\ C_2v\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}$, implying $C_1v=0$. Thus,... | https://arxiv.org/abs/2505.08908v1 |
arXiv:2505.09043v1 [stat.ME] 14 May 2025. Exploratory Hierarchical Factor Analysis with an Application to Psychological Measurement. Jiawei Qiao, Yunxiao Chen and Zhiliang Ying. Abstract: Hierarchical factor models, which include the bifactor model as a special case, are useful in social and behavioural sciences for measu... | https://arxiv.org/abs/2505.09043v1 |
This is an important question, as learning a hierarchical factor structure is only sensible when it is identifiable. Although identifiability theory has been established for exploratory bi-factor analysis in Qiao et al. (2025), to our knowledge, no results are available under the general hierarchical factor model. Secon... | https://arxiv.org/abs/2505.09043v1 |
number of factor layers increases. Nevertheless, the constraint-based continuous optimization algorithm that serves as a building block of the proposed method is similar to the algorithm used for exploratory bi-factor analysis in Qiao et al. (2025). This algorithm turns a computationally challenging combinatorial mod... | https://arxiv.org/abs/2505.09043v1 |
to be identifiable. When a factor $k$ has a unique child factor (i.e. $|Ch_k|=1$), it is easy to show that the two columns of the loading matrix that correspond to factor $k$ and its single child factor are not determined up to an orthogonal rotation. We note that when the above constraints hold, the hierarchical factor struct... | https://arxiv.org/abs/2505.09043v1 |
corresponding to the hierarchical factor model in Panel (a). Figure 1: The illustrative example of a three-layer hierarchical factor model. $v_1,v_2,\dots,v_K$ are known. In that case, estimating the hierarchical factor model is a relatively simple problem, which involves solving an optimisation problem with suitable zero constraint... | https://arxiv.org/abs/2505.09043v1 |
sign flip matrix $Q\in\mathcal{Q}$, where $\mathcal{Q}$ consists of all $K\times K$ diagonal matrices $Q$ whose diagonal entries take values 1 or −1. Remark 1. Condition 2 ensures the separation between the low-rank matrix $\Lambda^*(\Lambda^*)^{\top}$ and the diagonal matrix $\Psi^*$, which is necessary for the true hierarchical factor model to be identifiable. This condition can be guaran... | https://arxiv.org/abs/2505.09043v1 |
sample size $N$ going to infinity, these estimates will converge to their true values. The proposed method learns the hierarchical factor structure from the top to the bottom of the factor hierarchy. It divides the learning problem into many subproblems and conquers them layer by layer, starting from the first layer $\hat L_1=\{$... | https://arxiv.org/abs/2505.09043v1 |
to learn the child factors of each given factor. We now give the details of this method. We start with the ICB method for learning the child factors of Factor 1, i.e., the general factor. In this case, the main questions the ICB method answers are: (1) how many child factors does Factor 1 have? and (2) what variables ... | https://arxiv.org/abs/2505.09043v1 |
loadings, one plus the number of descendant factors of each child factor of factor 1 will not exceed the number of items loading on the corresponding child factor. Ideally, we hope to find the loading matrix in $\mathcal{A}_1$ that minimises $\mathrm{IC}_1(c,d_1,\dots,d_c)$ among all $c\in\{0,2,\dots,c_{\max}\}$ and $d_1,\dots,d_c\in\{1,\dots,d_{\max}\}$. More specificall... | https://arxiv.org/abs/2505.09043v1 |
$\ldots,d_c)$ has only $|\hat v_k|$ rows, while those in $\mathcal{A}_1(c,d_1,\dots,d_c)$ have $J$ rows. This is because, given the results from the previous steps, we have already estimated that factor $k$ and its descendant factors are only loaded by the variables in $\hat v_k$. Therefore, we only focus on learning the rows of the loading matrix that correspond to th... | https://arxiv.org/abs/2505.09043v1 |
under the ideal scenario that this optimisation is solved exactly for all $k$. In reality, however, exactly solving (6) is computationally infeasible when $J$ and $c$ are large. To search for the solution to (6), we cast it into a continuous optimisation problem with nonlinear zero constraints and solve it by an augmented Lagran... | https://arxiv.org/abs/2505.09043v1 |
$1\},\dots,$ $\hat v^k_{\hat c_k}=\hat v_k[\tilde v^k_{\hat c_k}]$. 12: Define $\tilde\lambda_k$ such that $(\tilde\lambda_k)_{[\hat v_k]}$ equals the first column of $\tilde\Lambda_{k,\hat c_k}$ and $(\tilde\lambda_k)_{[\{1,\dots,J\}\setminus\hat v_k]}$ is a zero vector. Output: $\hat c_k$, $\hat v^k_1,\dots,\hat v^k_{\hat c_k}$ and $\tilde\lambda_k$. Remark 4. $c\in\{0,2,\dots,c_{\max}\}$ represents the number of child factors of Factor $k$. In other words, $c_{\max}$ is an upper bound on the possibl... | https://arxiv.org/abs/2505.09043v1 |
based on Algorithms 1 and 2. We start with introducing some notation. We use $\|\cdot\|_F$ to denote the Frobenius norm of any matrix and $\|\cdot\|$ the Euclidean norm of any vector. We also use the notation $a_N=O_P(b_N)$ to denote that $a_N/b_N$ is bounded in probability. In addition to the conditions required for the identifiability of t... | https://arxiv.org/abs/2505.09043v1 |
2. Condition 8 requires that $c_{\max}$ and $d_{\max}$ are chosen sufficiently large so that the search space covers the true model. 3 Computation. As mentioned previously, the optimisation problem in $\mathrm{IC}_k(c,d,\dots,d)$ in Algorithm 2 can be cast into a continuous optimisation problem and solved by an augmented Lagrangian method (ALM). In ... | https://arxiv.org/abs/2505.09043v1 |
(2) the distance between the estimate and the space $\mathcal{A}_k(c,d,\dots,d)$, measured by $\max_{i\in\{1,\dots,|\hat v_k|\}}h\big(\max_{j\in B_1}|\lambda^{(t)}_{k,ij}|,\max_{j\in B_2}|\lambda^{(t)}_{k,ij}|,\dots,\max_{j\in B_c}|\lambda^{(t)}_{k,ij}|\big)$. When both criteria are below their pre-specified thresholds, $\delta_1$ and $\delta_2$, respectively, we stop the algorithm. Let $M$ be the last iteration number. Then the selecte... | https://arxiv.org/abs/2505.09043v1 |
as $\mathrm{MSE}_\Lambda=\min_{Q\in\mathcal{Q}}\|\hat\Lambda-\Lambda^*Q\|_F^2/(JK)$, $\mathrm{MSE}_\Psi=\|\hat\Psi-\Psi^*\|_F^2/J$ (a short computational sketch follows the table). We consider the following hierarchical factor structure shown in Figure 3 with the number of variables $J\in\{36,54\}$, the number of layers $T=4$, the number of factors $K=10$, $L_1=\{1\}$, $L_2=\{2,3\}$, $L_3=\{4,\dots,8\}$, $L_4=\{9,10\}$ and $v^*_1=\{1,\dots,J\}$, $v^*_2=\{1,\dots,J/3\}$, $v^*_3=\{1+J/3,\dots$... | https://arxiv.org/abs/2505.09043v1 |
1.00 4.90 0.83 2.21 0.90; 2000: 2.00 1.00 4.88 0.88 2.24 0.88; 54, 500: 2.00 1.00 4.95 0.95 2.12 0.95; 2000: 2.00 1.00 4.98 0.98 2.05 0.98. 5 Real Data Analysis. In this section, we apply the exploratory hierarchical factor analysis to a personality assessment dataset based on the International Personality Item Pool (IPIP) NEO... | https://arxiv.org/abs/2505.09043v1 |
0 0.22 0 0 0 0 0; 13 A4: 0.20 0 0.46 0 0 0 0 0.04 0 0.41; 14 A4: 0.43 0 0.18 0 0 0 0 −0.15 0 0.69; 15 A4: 0.58 0 0.29 0 0 0 0 −0.05 0 0.46; 16 A4: 0.52 0 0.41 0 0 0 0 −0.19 0 0.21; 17 A5: 0.34 0 0.43 0 0 0 0 0.59 0.46 0; 18 A5: −0.07 0 0.38 0 0 0 0 0.87 −0.09 0; 19 A5: 0.08 0 0.41 0 0 0 0 1.00 0 0.03; 20 A5: 0.27 0 0.43 0 0 0 0 0.16 0... | https://arxiv.org/abs/2505.09043v1 |
factors, where the factors are allowed to be correlated. The bi-factor structure is learned using the method proposed in Qiao et al. (2025). Specifically, exploratory bi-factor models with 2, 3, ..., 12 group factors are considered, among which the one with six group factors is selected based on the BIC. Table 4 presents the BIC val... | https://arxiv.org/abs/2505.09043v1 |
we can no longer use a sample covariance matrix as a summary statistic for the factor structure. Appendix A Real Data Analysis: Agreeableness Scale Item Key. Table A.1: Agreeableness Item Key. Item, Sign, Facet, Item detail: 1, +, Trust(A1), Trust others. 2, +, Trust(A1), Believe that others have good intentions. 3, +, Trust(A1), ... | https://arxiv.org/abs/2505.09043v1 |
$\big(\,0.43\ 0\ 0\ 0.51\ 0\ 0\ 0;\ 0.41\ 0\ 0\ 0.41\ 0\ 0\ 0;\ 0.28\ 0\ 0\ 0\ 0.70\ 0\ 0;\ 0.54\ 0\ 0\ 0\ 0.43\ 0\ 0;\ 0.68\ 0\ 0\ 0\ 0.41\ 0\ 0;\ 0.66\ 0\ 0\ 0\ 0.27\ 0\ 0;\ 0.30\ 0\ 0\ 0\ 0\ 0.73\ 0;\ -0.14\ 0\ 0\ 0\ 0\ 0.93\ 0;\ -0.03\ 0\ 0\ 0\ 0\ 1.09\ 0;\ 0.38\ 0\ 0\ 0\ 0\ 0.35\ 0;\ 0.15\ 0\ 0\ 0\ 0\ 0\ 0.73;\ 0.16\ 0\ 0\ 0\ 0\ 0\ 0.75;\ 0.38\ 0\ 0\ 0\ 0\ 0\ 0.52;\ 0.27\ 0\ 0\ 0\ 0\ 0\ 0.58\,\big)$... | https://arxiv.org/abs/2505.09043v1 |
the corresponding loading matrix $\Lambda$ and the unique variance matrix $\Psi$ satisfy $\Sigma=\Lambda\Lambda^{\top}+\Psi$ and $\Sigma=\Sigma^*$. We present the proof of Theorem 1 recursively. We first prove that $Ch_1=Ch^*_1$, $v_k=v^*_k$ for all $k\in Ch^*_1$ and $\lambda_1=\lambda^*_1$ or $\lambda_1=-\lambda^*_1$ hold, where $v_1,\dots,v_K$ are the corresponding sets of variables for each factor according to $\Lambda$, $Ch_1,\ldots$ | https://arxiv.org/abs/2505.09043v1 |
$\cup_{s_1\in Ch^*_k}v^*_{s_1})|\le2$. If $|(\cup_{i_1\neq i,\,i_1\in Ch_1}B_{k,i_1})\cap(\cup_{s_1\in Ch^*_k}v^*_{s_1})|=2$, we denote by $s_1\neq s$ such that (C.3) holds. We choose $j_1,j_2\in B_{k,i}\cap v^*_s$, $j_3,j_4\in B_{k,i}\cap v^*_{s_1}$, $j_5\in(\cup_{i_1\neq i,\,i_1\in Ch_1}B_{k,i_1})\cap v^*_s$ and $j_6\in(\cup_{i_1\neq i,\,i_1\in Ch_1}B_{k,i_1})\cap v^*_{s_1}$. We further require that when $Ch^*_s\neq\emptyset$, $j_1,j_2$ and $j_5$ belong to different child factors of factor $s$, and when $C\ldots$ | https://arxiv.org/abs/2505.09043v1 |
to different child factors of factor $s_1$, and if $|Ch^*_{s_2}|\neq0$, $j_2$ and $j_5,j_6$ belong to different child factors of factor $s_2$. This requirement can always be met. Consider $\Sigma_{[\{j_1,j_2\},\{j_3,j_4,j_5,j_6\}]}=\Sigma^*_{[\{j_1,j_2\},\{j_3,j_4,j_5,j_6\}]}$, which is equivalent to $\Lambda_{[\{j_1,j_2\},\{1\}]}(\Lambda_{[\{j_3,j_4,j_5,j_6\},\{1\}]})^{\top}=\Lambda^*_{[\{j_1,j_2\},\{1,k,s_1,s_2\}]}(\Lambda^*_{[\{j_3,j_4,j_5,j_6\},\{1\ldots}$ | https://arxiv.org/abs/2505.09043v1 |
$\|UR-U_1\|_F\le2\|\varepsilon\|_F/\delta$, where $\delta=\sigma_j(A)-\sigma_{j+1}(A)$. We refer the proof of Lemma 2 to Theorems 4 and 19 of O'Rourke et al. (2018). Lemma 3. Given a $J\times K$ dimensional matrix $\Lambda$ following a hierarchical structure that satisfies constraints C1-C4 and a $J\times J$ dimensional diagonal matrix $\Psi=\mathrm{diag}(\psi_1,\dots,\psi_J)$ with $\psi_j>0$, $j=1,\dots,J$, $\Lambda$ satisfies Condition... | https://arxiv.org/abs/2505.09043v1 |
with Condition 6, we have $\|\tilde\Lambda_{1,0}\tilde\Lambda^{\top}_{1,0}+\tilde\Psi_{1,0}-\Sigma^*\|_F=O_P(1/\sqrt{N})$ according to the M-estimation theory (see, e.g., van der Vaart, 2000). By Taylor's expansion, we further have $l(\tilde\Lambda_{1,0}\tilde\Lambda^{\top}_{1,0}+\tilde\Psi_{1,0};S)=O(N\|\tilde\Lambda_{1,0}\tilde\Lambda^{\top}_{1,0}+\tilde\Psi_{1,0}-\Sigma^*\|^2_F)+O(N\|S-\Sigma^*\|^2_F)$. Thus $\widetilde{\mathrm{IC}}_{1,c^*}=O_P(1)$, which satisfies (D.4). When $c^*\neq0$, we first assume that th... | https://arxiv.org/abs/2505.09043v1 |
$\ldots_{i},\{1\}]}\|$. (D.13) Combined with (D.8) we further have $\|\Sigma-\Sigma^*\|_F\ge\min\Big(\frac{\sqrt2}{4},\frac{\|\Lambda^*_{[v^*_2,\{1\}]}\|\cdot\|\Lambda^*_{[v^*_i,\{1\}]}\|}{8\sqrt2\,\tau^2|v^*_2|}\Big)\,\sigma_{2+|D^*_2|}\big(\Lambda^*_{[E_1,\{1,2\}\cup D^*_2]}\big)\,\sigma_{1+|D^*_2|}\big(\Lambda^*_{[E_2,\{2\}\cup D^*_2]}\big)$. Thus, for sufficiently large $N$ and any $\Lambda,\Psi$ defined in $\tilde{\mathcal{A}}_1(c^*,d_1,\min(|v^*_3|,d),\dots,\min(|v^*_{1+c^*}|,d))$, we have $\inf_{\Lambda,\Psi}l(\Lambda\Lambda^{\top}+\Psi;S)=O(N\|\Sigma-\Sigma^*\|^2_F)\ldots$ | https://arxiv.org/abs/2505.09043v1 |
$\ldots\setminus\{j\},\{1\}]}\|$. (D.19) Combining (D.17) and (D.19), we have $\|\Sigma-\Sigma^*\|_F\ge\min\Big(\frac{\sqrt2}{4},\frac{\|\Lambda^*_{[v^*_i\setminus\{j\},\{1\}]}\|\cdot\|\Lambda^*_{[v^{1,c^*}_{s}\setminus\{j\},\{1\}]}\|}{4\sqrt2\,\tau^2(|v^*_i|-1)^{1/2}}\Big)\,|\lambda^*_{j,i}|\cdot\|\mu\|>0$. Thus, for sufficiently large $N$, (D.14) still holds in $\tilde{\mathcal{A}}_1(c^*,d_1,\dots,d_{c^*})$, which indicates that the derived $\widetilde{\mathrm{IC}}_1(c^*,d_1,\dots,d_{c^*})$ in the parametric space ... | https://arxiv.org/abs/2505.09043v1 |
$\|\Lambda^*_{[v^*_i\setminus\{j\},\{1\}]}\|\cdot\|\Lambda^*_{[v^{1,c^*}_{s_1}\setminus\{j\},\{1\}]}\|$, (D.23) or $\Big|\|\Lambda_{[v^*_i\setminus\{j\},\{1\}]}\|\,\|\Lambda_{[v^*_i\setminus\{j\},\{1\}]}\|+\|\Lambda^*_{[v^*_i\setminus\{j\},\{1\}]}\|\,\|\Lambda^*_{[v^*_i\setminus\{j\},\{1\}]}\|\Big|\le2\delta\|\Lambda^*_{[v^*_i\setminus\{j\},\{1\}]}\|\cdot\|\Lambda^*_{[v^{1,c^*}_{s_1}\setminus\{j\},\{1\}]}\|$ holds. Without loss of generality, we assume that (D.23) holds. On the other hand, notice that $\Sigma_{[v^*_i\setminus\{j\},\{j\}]}=\lambda_{j,1}\Lambda_{[v^*_i\setminus\{j\},\{1\}]}\ldots$ | https://arxiv.org/abs/2505.09043v1 |
$\ldots,d_{c^*})$. For any $\Lambda,\Psi\in\tilde{\mathcal{A}}_1(c^*,d_1,\dots,d_{c^*})$, we denote by $\Sigma=\Lambda\Lambda^{\top}+\Psi$. We have $\Sigma_{[\{j_1,j_2\},\{j_3,j_4\}]}=\Lambda_{[\{j_1,j_2\},\{1\}]}(\Lambda_{[\{j_3,j_4\},\{1\}]})^{\top}$, which is a rank-1 matrix, while according to Condition 3, $\Sigma^*_{[\{j_1,j_2\},\{j_3,j_4\}]}=\Lambda^*_{[\{j_1,j_2\},\{1,i\}]}(\Lambda^*_{[\{j_3,j_4\},\{1,i\}]})^{\top}$ is a rank-2 matrix. By Lemma 1, $\|\Sigma-\Sigma^*\|_F\ge\|\Sigma_{[\{j_1,j_2\},\{j_3,j_4\}]}-\Sigma^*_{[\{j_1,j_2\},\{j_3,j_4\}]}\|\ldots$ | https://arxiv.org/abs/2505.09043v1 |
$\ldots_i\subset v^{1,c}_1$, $1+|D^*_i|$, for any $\Lambda,\Psi\in\tilde{\mathcal{A}}_1(c,d_1,\min(|v^{1,c}_2|,d),\dots,\min(|v^{1,c}_c|,d))$, we denote $\Sigma=\Lambda\Lambda^{\top}+\Psi$. Similar to the proof in (D.8)-(D.13), for sufficiently large $N$, we have $\inf_{\Lambda,\Psi}l(\Lambda\Lambda^{\top}+\Psi;S)=O(N\|\Sigma-\Sigma^*\|^2_F)+O_P(1)=O_P(N)>0$. Noticing that $p_1(\Lambda)$ is uniformly upper bounded, with probability approaching 1 as $N$ grows to infini... | https://arxiv.org/abs/2505.09043v1 |
$v^*_k$, $k\in Ch^*_1$. In a recursive manner, with probability approaching 1 as $N$ grows to infinity, we have $\hat T=T$, $\hat K=K$, $\hat L_t=L_t$, $t=1,\dots,T$, and $\hat v_i=v^*_i$, $i=1,\dots,K$. The proof of $\|\hat\Lambda-\Lambda^*\hat Q\|_F=O_P(1/\sqrt{N})$ and $\|\hat\Psi-\Psi^*\|_F=O_P(1/\sqrt{N})$ can be obtained by recursively applying similar arguments in (D.30) to (D.34). References: Anderson, T. and Rubin,... | https://arxiv.org/abs/2505.09043v1 |
arXiv:2505.09090v1 [math.ST] 14 May 2025. Submitted. SEQUENTIAL SCORING RULE EVALUATION FOR FORECAST METHOD SELECTION. By D. T. Frazier¹ and D. S. Poskitt². ¹Department of Econometrics and Business Statistics, Monash University, david.frazier@monash.edu. ²Department of Econometrics and Business Statistics, Monash University,... | https://arxiv.org/abs/2505.09090v1 |
Gneiting, Balabdaoui and Raftery, 2007), which in the case of a probability forecast coincides with a consistent scoring function for the mean (Gneiting, 2011). MSC2020 subject classifications: Primary 62M10, 62M15; secondary 62G09. Keywords and phrases: forecasting method, error probability. A scoring rule is... | https://arxiv.org/abs/2505.09090v1 |
offering the forecaster an alternative methodological approach to forecast method selection. To the best of the authors' knowledge, the extant literature does not contain a discussion that pursues the particular avenue of research that we consider here. 1.2. Background. Sequential statistical methods gained prominence... | https://arxiv.org/abs/2505.09090v1 |
accrued, an assessment of relative forecast performance can be made that is valid in finite samples. We also show that SSRE will terminate in finite time with probability one, and that the moments of the SSRE stopping time exist. Thus, SSRE provides the forecaster with an assessment tool that allows the forecaster to s... | https://arxiv.org/abs/2505.09090v1 |
forecaster is to predict some feature of $F_{Y_{T+h}}$, the distribution of $Y_{T+h}$ at a forecast horizon of $h\ge1$. We are interested therefore in predictions of a functional $\varphi[F_{Y_{T+h}}]$ made at time $T$. We assume that competing forecasts and the observations $Y_t$ are random processes adapted to a filtration $\mathcal{B}_t$, $t\in\mathbb{N}$, and the measure $P$ describes... | https://arxiv.org/abs/2505.09090v1 |
rule that measures precisely the feature or features of $Y_{T+h}$ that are of interest. 2.1. Scoring Rules. Recall that $\mathcal{Q}$ is a class of distributions on $\mathcal{Y}$, and a scoring rule is a measurable function $S(f;y)$ that maps a forecast $f$ and an observation $y$ to a numerical value in $[0,\infty)$ that measur... | https://arxiv.org/abs/2505.09090v1 |
estimates and the optimal forecasts will not coincide in general even if the forecast model is the same. Any differences between the two will be due to differences in the estimation technique and the forecasting method as opposed to the forecasting model, hence the focus in our analysis on the former rather than the... | https://arxiv.org/abs/2505.09090v1 |
with $Y_n$ adapted to $\mathcal{B}_n$ for all $n\ge1$. We will analyse the SSRE under the following hypotheses, $H_Q=[P\in\mathcal{P}:E_P[R_n(Q,P)\mid\mathcal{B}_n]\le1,\;(n\ge1)]$ and $H_P=[P\in\mathcal{P}:E_P[R_n(Q,P)\mid\mathcal{B}_n]\ge1,\;(n\ge1)]$. The hypotheses state that given the information available at the time of forecasting, forecast method Q is expected to be at least as good as method P unde... | https://arxiv.org/abs/2505.09090v1 |
performance of the scheme. Before examining the relationship between SSRE and the related literature on E-variables, we will first analyse more closely how the choice of boundary values $k_l$ and $k_u$ influences features of SSRE. 3.2. Termination of SSRE. The SSRE process terminates at the smallest integer $n$ for which the ineq... | https://arxiv.org/abs/2505.09090v1 |
Since $\Pr(\bigcup_{i=1}^{n}E^Q_i\mid H_a)$ and $\Pr(\bigcup_{i=1}^{n}E^P_i\mid H_a)$ are nondecreasing and fall in the interval $[0,1]$, it follows that under $H_Q$ or $H_P$ both $\Pr(E^Q_\infty\mid H_a)=\lim_{n\to\infty}\Pr(\bigcup_{i=1}^{n}E^Q_i\mid H_a)$ and $\Pr(E^P_\infty\mid H_a)=\lim_{n\to\infty}\Pr(\bigcup_{i=1}^{n}E^P_i\mid H_a)$ exist. These limits give the probability of selecting method Q and the probability of selecting method P, respectivel... (a simulation sketch of such a stopping scheme follows the table) | https://arxiv.org/abs/2505.09090v1 |
sake of completeness and to facilitate translation to SSRE schemes. Now, consider the evaluation of $\Pr(T^Q_n\mid H_Q)$, and let $Q$ denote the measure with Radon-Nikodym derivative with respect to $P$ given by (2) $\frac{dQ}{dP}=\exp\big[h_P\log\{C_n(Q,P)\}\big]=\exp\Big[h_P\sum_{i=1}^{n}D_i(Q,P)\Big]$. ³The risk associated with a SSRE incorrect decision is analogou... | https://arxiv.org/abs/2505.09090v1 |
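
The row above from arXiv:2505.08262 states that the truncation function $\mathrm{clip}_D$ is exactly realized by a one-hidden-layer ReLU network with four units. A minimal numerical check of that identity (NumPy; the function names are illustrative, not from the paper):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def clip_via_relu(x, D):
    # clip_D(x) = sigma(x) - sigma(-x) - sigma(x - D) + sigma(-x - D),
    # i.e. hidden weights [1, -1, 1, -1], biases [0, 0, -D, -D],
    # output weights [1, -1, -1, 1], output bias 0.
    return relu(x) - relu(-x) - relu(x - D) + relu(-x - D)

D = 1.5
xs = np.linspace(-4.0, 4.0, 1001)
assert np.allclose(clip_via_relu(xs, D), np.clip(xs, -D, D))
```

The first two units reproduce the identity $x=\sigma(x)-\sigma(-x)$; the last two subtract the overshoot beyond $\pm D$.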
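The covering-number row from the same paper counts $\ell_\infty$ balls centered on the grid $-R\vec{1}+\varepsilon\vec{k}$. A small counting sketch, under my assumption (the brackets were lost in extraction) that the per-coordinate index runs to $\lceil 2R\,\mathrm{Lip}(F_\sigma)/\varepsilon\rceil$:

```python
import math

def linf_cover_count(R, eps, lip, P):
    # Number of grid centers with k_i in {0, 1, ..., ceil(2*R*lip/eps)}:
    # (ceil(2*R*lip/eps) + 1)**P points indexed over the P coordinates.
    per_axis = math.ceil(2 * R * lip / eps) + 1
    return per_axis ** P

print(linf_cover_count(R=1.0, eps=0.1, lip=5.0, P=3))  # 101**3 = 1_030_301
```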
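Lemma 6 in the row above (arXiv:2505.08262) bounds the risk difference of two plug-in classifiers by the probability that their signs disagree. A Monte Carlo check of that intermediate bound, with hypothetical $f$, $g$ and regression function $\eta$ of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=200_000)
eta = np.tanh(3.0 * X)                              # hypothetical E[Y | X]
Y = np.where(rng.uniform(size=X.size) < (1.0 + eta) / 2.0, 1, -1)

f = lambda x: x                                     # two hypothetical score functions
g = lambda x: x - 0.1
risk = lambda h: np.mean(np.sign(h(X)) != Y)        # empirical 0-1 risk of sign(h)

lhs = abs(risk(f) - risk(g))
rhs = np.mean(np.sign(f(X)) != np.sign(g(X)))       # P(sign f(X) != sign g(X))
assert lhs <= rhs + 1e-9                            # holds pointwise, so also empirically
print(lhs, rhs)
```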
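The order-statistic form of $\hat\Delta_n$ and the bootstrap of Algorithm 1 (rows above, arXiv:2505.08565) translate directly into code. A sketch, assuming the row's last complete expression for $\hat\Delta_n$ and assuming the truncated quantile step uses levels $\alpha/2$ and $1-\alpha/2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_n(x):
    # 3/(2 n (n-1) (n-2)) * sum_i [n(n-1) - 2 (i (n+1-i) - 1)] X_(i)  -  mean(X)
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    w = n * (n - 1) - 2 * (i * (n + 1 - i) - 1)
    return 3.0 * np.sum(w * x) / (2.0 * n * (n - 1) * (n - 2)) - x.mean()

def bootstrap_critical_values(x, B=1000, alpha=0.05):
    # Algorithm 1: resample with replacement, recompute the statistic on each
    # resample, and take empirical quantiles as the critical values C1, C2.
    deltas = np.array([delta_n(rng.choice(x, size=len(x), replace=True))
                       for _ in range(B)])
    return np.quantile(deltas, [alpha / 2.0, 1.0 - alpha / 2.0])

x = rng.standard_normal(100)          # sample from a symmetric null
C1, C2 = bootstrap_critical_values(x)
print(delta_n(x), C1, C2)
```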
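The identity $R_{\mathbf{x}}(D^*;\ell)=\ell^{\top}p(\mathbf{x})$ from the row above (arXiv:2505.08908) is a plain inner product once the cells $i=i(d,\mathbf{y})$ are enumerated. A tiny illustration with two treatments and binary outcomes; all probabilities and losses below are hypothetical:

```python
import numpy as np

# Cells i = i(d, y0, y1) in lexicographic order over (d, y0, y1): N = 2*2*2 = 8.
p_x = np.array([0.10, 0.05, 0.20, 0.15, 0.05, 0.10, 0.15, 0.20])  # Pr(D*=d, Y(0)=y0, Y(1)=y1 | X=x)
ell = np.array([0.0, 1.0, 1.0, 2.0, 1.0, 0.0, 2.0, 1.0])          # loss l(d, y, x), one value per cell
assert np.isclose(p_x.sum(), 1.0)

risk_x = ell @ p_x   # conditional counterfactual risk R_x(D*; l) = l^T p(x)
print(risk_x)
```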
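The simulation metric $\mathrm{MSE}_\Lambda=\min_{Q\in\mathcal{Q}}\|\hat\Lambda-\Lambda^*Q\|_F^2/(JK)$ from the row above (arXiv:2505.09043) minimizes over diagonal sign-flip matrices. A short sketch; brute force over the $2^K$ sign patterns is fine for the $K=10$ used in the paper, and the minimum in fact decouples column by column:

```python
import itertools
import numpy as np

def mse_loading(L_hat, L_star):
    # MSE_Lambda = min over diagonal +/-1 matrices Q of ||L_hat - L_star Q||_F^2 / (J K).
    J, K = L_star.shape
    best = min(np.sum((L_hat - L_star * np.array(s)) ** 2)
               for s in itertools.product([1.0, -1.0], repeat=K))
    return best / (J * K)

rng = np.random.default_rng(1)
L = rng.normal(size=(6, 3))
print(mse_loading(L * np.array([1.0, -1.0, 1.0]), L))  # 0.0: equal up to sign flips
```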
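The SSRE termination discussion in the rows above (arXiv:2505.09090) is easy to visualize with a toy sequential scheme. The sketch below is my own generic sequential-ratio mock-up, not the paper's exact statistic: it accumulates score differences $D_i(Q,P)$ and stops when the cumulative sum first exits $(\log k_l,\log k_u)$:

```python
import numpy as np

rng = np.random.default_rng(2)

def ssre_stopping_time(d, k_l, k_u):
    # Stop at the first n with C_n = sum_{i<=n} D_i(Q, P) outside (log k_l, log k_u);
    # the exit side determines which forecast method is selected.
    c = np.cumsum(d)
    hit = np.nonzero((c <= np.log(k_l)) | (c >= np.log(k_u)))[0]
    if hit.size == 0:
        return None, None                      # no decision within the sample
    n = hit[0]
    return n + 1, "Q" if c[n] >= np.log(k_u) else "P"

d = rng.normal(loc=0.05, scale=1.0, size=10_000)   # hypothetical score differences favouring Q
print(ssre_stopping_time(d, k_l=1 / 20, k_u=20))
```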