$\max_{h\in[H]}\|\xi_h\|_2\,\tilde R_{\mathcal K}\big/\big(\sqrt H(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}\big)$. This implies that
$E_2 \subset \big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{\mathcal K\subsetneq[K]}\big(\bar R_x|\varepsilon_0|+\max_{h\in[H]}\|\xi_h\|_2\,\tilde R_{\mathcal K}/\sqrt H\big)\big/(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}\ge z_{1-\alpha/2}\big\}$
$\subset \big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{\mathcal K\subsetneq[K]}\big(\bar R_x z_{1-\alpha/2}+\max_{h\in[H]}\|\xi_h\|_2\,\tilde R_{\mathcal K}/\sqrt H\big)\big/(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}\ge z_{1-\alpha/2}\big\}$
$= \big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{h\in[H]}\|\xi_h\|_2\ge z_{1-\alpha/2}\min_{\mathcal K\subsetneq[K]}\big((\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}-\bar R_x\big)\sqrt H\big/\tilde R_{\mathcal K}\big\}$
$= \big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{h}...$
https://arxiv.org/abs/2505.01137v1
$h\in[H]\} \supseteq \{|\varepsilon_0|=t,\ s_{h,[-1]}=1,\ h\in[H],\ \min_{h\in[H]}\|\xi_h\|_2>\psi(t),\ \max_{h\in[H]}\|\xi_h\|_2^2<a\}$. Obviously, $E_3$ is nonempty. The conclusion follows.

E.3 Proof of Theorem S2 (ii)

Proof of Theorem S2 (ii). The proof of the "if" part: Let $A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t)=\{\max_{h\in[H]}\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\le z_{1-\alpha_t/2}\}$. Recall the definition of $E_2$ in the proof of Lemma S1...
$_h\sum_{k=1}^K V_{h,\tau\tau}^{1/2}R_{h,1}\big/V_{\tau\tau}^{1/2} = \bar R_x|\varepsilon_0| + \min_{h,k}|\varepsilon_{h,k}/V_{h,kk}|\,R_x(HK)^{1/2}$, where the last equality holds because, by our construction, $\sum_{h=1}^H\pi_h^{1/2}\sum_{k=1}^K V_{h,\tau\tau}^{1/2}R_{h,1}=\sum_{h=1}^H\pi_h^{1/2}K^{1/2}V_{h,\tau\tau}^{1/2}R_{h,x}=V_{\tau\tau}^{1/2}R_x(HK)^{1/2}$. Therefore, we have
$E_2\cap\{s_{h,k}=1,\ h\in[H],\ k\in[K]\} \supseteq \{|\varepsilon_0|<z_{1-\alpha/2},\ s_{h,k}=1,\ h\in[H],\ k\in[K],\ \bar R_x|\varepsilon_0|+\min_{h,k}|\varepsilon_{h,k}/V_{h,k}...$
arXiv:2505.01297v1 [math.ST] 2 May 2025

Model-free identification in ill-posed regression
Gianluca Finocchio* and Tatyana Krivobokova*
May 15, 2025

Abstract. The problem of parsimonious parameter identification in possibly high-dimensional linear regression with highly correlated features is addressed. This problem is f...
https://arxiv.org/abs/2505.01297v1
realizations $(x_i, y_i)$ of the population random pair $(x, y)\in\mathbb R^p\times\mathbb R$. The dependence between the features $x$ and the response $y$ is arbitrary (not necessarily linear), whereas the features might be highly correlated and heavy-tailed. We characterize the insightful parameters and solve, in full generality, the problem of estimati...
low-rank parameter $\theta_B$ should not use more than $r_B$ degrees-of-freedom, any population algorithm in our class computes a compatible parameter $\theta_A=(B_A, U_A, r_B, \beta_A)$, where $B_A$ is some linear subspace, $U_A$ its orthogonal projection, $r_B$ its dimension and $\beta_A$ the population least-squares solution of regressing $y$ on $U_A x$. We quantify the ...
However, we can still measure the excess risk in terms of the low-rank parameter $\beta_B$. That is to say, with $R(\hat\beta_A)=E(\{y-x^t\hat\beta_A\}^2\mid X, y)$ the least-squares risk of the linear approximation using $\hat\beta_A$ and $R(\beta_B)$ the oracle risk for the low-rank parameter, we show that the excess risk can be bounded from above as $R^{(ex)}(\hat\beta_A)=R(\hat\beta...$
the work by Afreixo et al. [2024] on Alzheimer's disease, we claim that sparse methods are unable to provide parsimonious representations of GWAS datasets described by Uffelmann et al. [2021], where the response seems to be correlated with many features, each having a very small effect.

1.1.2 Unsupervised Reduction

In this...
Marion et al. [2020] and VC PCR by Marion et al. [2024]. A few variants of Partial Least Squares incorporating different penalizations are Sparse PLS by Chun and Keleş [2010] and Regularized PLS by Allen et al. [2012]. Many more are possible and their performance is model-dependent. Our findings suggest that addition...
$\ldots, p\}$, we denote $I_S\in\mathbb R^{p\times p}$ the matrix obtained by setting to zero the diagonal entries of $I_p$ in positions $i\notin S$. We denote $\mathbb R^{p\times p}_{\succeq 0}$ the space of $p$-dimensional square matrices that are symmetric and positive-semidefinite. We denote $A^\dagger$ the unique generalized inverse of a matrix $A$, with the convention that $A^\dagger=A^{-1}$ when the ma...
$\tilde B$. The latter induces the following characterization of the relevant subspace
$B_y=\arg\min\big\{\dim(\tilde B) : R(\Sigma_x)=\tilde B\oplus\tilde B^\perp,\ E(x_{\tilde B^\perp}\,y)=0_p,\ E(x_{\tilde B^\perp}x^t_{\tilde B})=0_{p\times p}\big\}. \quad (4)$
We denote $U_y$ and $U_{y^\perp}$ the orthogonal projections onto $B_y$ and $B_y^\perp$, respectively. We call relevant features the projection $x_y:=U_y x$ and irrelevant features the projection $x_{y^\perp}:=U_{y^\perp}...$
that depend only on the moments of the population pair $(x, y)$. In particular, we assume that $A(x, y)$ computes parameters
$\tilde\theta_{A,s}:=\big(\tilde B_{A,s}, \tilde U_{A,s}, \tilde r_{A,s}, \tilde\beta_{A,s}\big),\ 0\le s\le p, \quad (8)$
where $\{0_p\}=\tilde B_{A,0}\subseteq\tilde B_{A,1}\subseteq\cdots\subseteq\tilde B_{A,p}\subseteq\mathbb R^p$ are linear subspaces, $\tilde U_{A,s}$ is the orthogonal projection onto $\tilde B_{A,s}$, $\tilde r_{A,s}=\dim(\tilde B_{A,s})$ is its dimension and $\tilde\beta_{A,s}$ is t...
appearing in Equations (8)–(9) are consistent with our Definition A.3 in Section A.3, whereas the constants of stability are taken from Section A.4.

Assumption 2.3 (Population Algorithm). Let $B_y\subseteq R(\Sigma_x)$ be the relevant subspace and $(x_y, y)$ the relevant pair. Let $B\subseteq B_y$ be the low-rank subspace and $(x_B, y)$ the low-rank...
highly correlated and the class of functions for which $E(xy)=0_p$ is negligible. Assumption 2.3 provides conditions on the population reduction algorithms. Condition (i) means that it is harmless to replace, in the statement of Theorem 2.4, the low-rank parameter $\theta_B$ with the parameter $\theta_A$ computed by the oracle population...
best $s$-sparse active subset $J_{FSS,s}(x, y)$ of $\beta_{LS}$. The active sets are computed iteratively, in the sense that $j_{FSS,s+1}\in J_{LS}\setminus J_{FSS,s}$. When $s=p$, we find $B_{FSS,p}=E_{J_{LS}}(x, y)$ and $\beta_{FSS,p}=\beta_{LS}$. One can define a similar procedure for the LASSO($\cdot$) by Tibshirani [1996], where the active set $J_{LASSO,s}(x, y)$ is now selected through the...
$(x, y):=\mathrm{PLS}(\Sigma_x, \sigma_{x,y})$ with the following properties. For all $0\le s\le p$, the linear subspace $B_{PLS,s}$ is spanned by the first $s$ vectors of the Krylov basis $K_s(x, y):=\mathrm{span}\{\sigma_{x,y}, \Sigma_x\sigma_{x,y}, \ldots, \Sigma_x^{s-1}\sigma_{x,y}\}$ generated by $\Sigma_x$ and $\sigma_{x,y}$. The subspace $B_{PLS,s}$ is one-dimensional when $\sigma_{x,y}$ is an eigenvector of $\Sigma_x$. When $s=p$, we find that $B_{PLS,...}$
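The Krylov construction above is easy to reproduce numerically. The following sketch is ours, not the authors'; it stacks the iterates $\sigma_{x,y}, \Sigma_x\sigma_{x,y}, \ldots$ and orthonormalizes them via QR, and checks that the space collapses to one dimension when $\sigma_{x,y}$ is an eigenvector of $\Sigma_x$.

```python
import numpy as np

def krylov_basis(Sigma_x, sigma_xy, s):
    """Orthonormal basis of K_s = span{sigma_xy, Sigma_x sigma_xy, ...,
    Sigma_x^{s-1} sigma_xy}: stack the iterates, then take a QR factorization."""
    cols = [sigma_xy]
    for _ in range(s - 1):
        cols.append(Sigma_x @ cols[-1])
    K = np.column_stack(cols)
    Q, _ = np.linalg.qr(K)  # columns of Q span K_s
    return Q

# When sigma_xy is an eigenvector of Sigma_x, the iterates are collinear
# and the Krylov space stays one-dimensional:
Sigma = np.diag([3.0, 2.0, 1.0])
eigvec = np.array([1.0, 0.0, 0.0])
K2 = np.column_stack([eigvec, Sigma @ eigvec])
print(np.linalg.matrix_rank(K2))  # 1
```

For a generic $\sigma_{x,y}$ the stacked iterates form a Vandermonde-type matrix of full rank, so the subspaces grow by one dimension per step.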
$\max_{1\le i\le n}\|x_{\tilde B,i}\|_2^2\big/\|\Sigma_{x_{\tilde B}}\|_{op}. \quad (15)$
The rank $r_{\tilde B}$ is the dimension of the span of the support of $x_{\tilde B}$. The effective rank $\rho_{\tilde B}\le r_{\tilde B}$ can be rewritten as the weighted average $\mathrm{Tr}(\Sigma_{x_{\tilde B}})/\|\Sigma_{x_{\tilde B}}\|_{op}$ and measures the interplay between dimension and variation. The uniform effective rank $\rho_{\tilde B,n}$ accounts for the variability of a sample of ...
$\le D_A\varepsilon_B+\tilde D_A\varepsilon_{\tilde B^*,n}$, whereas, for any $r\in\{\tilde r_{A,s}: 0\le s\le p\}$ such that $r\le r_A$, the early-stopping estimation error on the least-squares solution is
$\|\hat\beta^{(r)}_A-\beta_A\|_2\big/\|\beta_A\|_2 \le \sqrt{r_A-r}+\tfrac52 M_A\varepsilon_B+\tfrac52\big(1+\tfrac52 M_A\varepsilon_B\big)\tilde M_A\varepsilon_{\tilde B^*,n}.$

Remark 2.15 (Our Assumptions). We address here all our assumptions leading to the sample error bounds in Theorem 2.12.
$/\|\Sigma_x\|_{op}\ \vee\ \|\hat\sigma_{x,y}-\sigma_{x,y}\|_2/\|\sigma_{x,y}\|_2 \lesssim$
$\sqrt{\rho_{\tilde B^*}\log r_x/(n\nu_n^2)}$, if $x_{\tilde B^*}$ as in Lemma B.3 (i);
$\sqrt{(\rho_{\tilde B^*}\vee\log n)\log r_x/(n\nu_n^2)}$, if $x_{\tilde B^*}$ as in Lemma B.3 (ii);
$\sqrt{\rho_{\tilde B^*}\log r_x/(n^{1-1/k}\nu_n^2)}$, if $x_{\tilde B^*}$ as in Lemma B.3 (iii);
with probability at least $1-2\nu_n$. The effective rank $\rho_{\tilde B^*}$ appearing in the above display corresponds to the linear subspac...
$=\|\hat\beta^{(r)}_A-\beta_A\|^2_{\Sigma_x}-2\big\langle\hat\beta^{(r)}_A-\beta_A,\ \beta_{LS}-\beta_A\big\rangle_{\Sigma_x}$, where $\beta_{LS}=\Sigma_x^\dagger\sigma_{x,y}$ is the population least-squares solution.

Remark 2.19 (Optimal Linear Prediction Risk). Under Assumption 2.10 alone, the dependence between the features $x$ and the response $y$ is arbitrary and the linear predictor $x^t_{n+1}\hat\beta^{(r)}_A$ considered here might be far from t...
integers $d, p\ge 1$, any matrix $A\in\mathbb R^{d\times p}$, any vector $b\in\mathbb R^d$ and any linear subspace $C\subseteq\mathbb R^p$, we denote $\mathrm{LS}(A, b, C):=\arg\min_{\zeta\in C}\|A\zeta-b\|_2^2$ the set of solutions to the reduced least-squares problem. It is a classical result, see Theorem 4 by Price [1964], that $\mathrm{LS}(A, b, \mathbb R^p)=\{A^\dagger b+(I_p-A^\dagger A)\zeta : \zeta\in\mathbb R^p\}$ and the minimum-$L_2$-norm solution $\zeta_{LS}:=A...$
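The classical description of the solution set can be checked numerically. A minimal numpy sketch (our illustration, with a rank-deficient $A$ and $b\in R(A)$ chosen for the example): every member of $\mathrm{LS}(A,b,\mathbb R^p)$ has the form $A^\dagger b+(I_p-A^\dagger A)\zeta$, and $\zeta_{LS}=A^\dagger b$ has minimal norm among them.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank-deficient 5x4
b = A @ rng.standard_normal(4)                                  # b in the range of A

A_pinv = np.linalg.pinv(A)
zeta_ls = A_pinv @ b            # minimum-L2-norm least-squares solution

# Any member of LS(A, b, R^p) has the form A^+ b + (I - A^+ A) zeta:
zeta = rng.standard_normal(4)
other = zeta_ls + (np.eye(4) - A_pinv @ A) @ zeta
assert np.allclose(A @ other, A @ zeta_ls)                       # same fitted values
assert np.linalg.norm(zeta_ls) <= np.linalg.norm(other) + 1e-12  # zeta_ls has minimal norm
```

The second assertion reflects the fact that $\zeta_{LS}$ lies in the row space of $A$ while the added term $(I_p-A^\dagger A)\zeta$ lies in its orthogonal complement.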
the unique solution of the equivalent least-squares problem $U_{A,s}\cdot\mathrm{LS}(A, b)$.

Remark A.5 (Degrees-of-Freedom). Notice that the parameter $\theta_{A,s}$ defined in Equation (20) is redundant in the sense that it is uniquely determined by $A$, $b$ and $C_{A,s}$. This means that two parameters $\theta_{A,s}$ and $\theta_{A,s'}$ are identical if and only if they have...
$\cdot$) selects $C_{SPR,s}=\mathrm{span}\{e_{j_1},\ldots,e_{j_s}\}$ for all $0\le s\le p$. This means that $C_{SPR,s}$ is never a subspace of $R(A)$ and, for $s=p$, one finds $C_{SPR,p}=\mathbb R^p$ so that $\zeta_{SPR,p}=\zeta_{LS}$. However, the algorithm uses $p=\dim(C_{SPR,p})$ degrees-of-freedom instead of 1.

A.4 Stability of Regularization Algorithms

Let $A\in\mathbb R^{p\times p}_{\succeq 0}$ and $b\in R(A)$ be given and consid...
if $M_{A,r}\cdot\varepsilon<1$ with the constant from Equation (25), then
$\big\|\tilde\zeta^{(r)}_A-\zeta^{(r)}_A\big\|_2\big/\big\|\zeta^{(r)}_A\big\|_2\le\tfrac52\cdot M_{A,r}\cdot\varepsilon.$

Proof of Theorem A.8. Equation (24) defines $C_{A,r}$ as the smallest constant $C$ such that $\|\tilde U^{(r)}_A-U^{(r)}_A\|_{op}\le C\varepsilon(\tilde A,\tilde b)$ and $D_{A,r}$ as the smallest constant $D$ such that $\phi_1(\tilde C^{(r)}_A, C^{(r)}_A)\le D\varepsilon(\tilde A,\tilde b)$ for all perturbations $(\tilde A,\tilde b)\in\Delta_A(A, b)$.
$\mathbb R^{p\times p}$ and $\tilde U_s\in\mathbb R^{p\times p}$ to be, respectively, the orthogonal projections of $\mathbb R^p$ onto the Krylov spaces $K_s(A, b)$ and $K_s(\tilde A, \tilde b)$. Theorem 3.3 by Kuznetsov [1997], see our Theorem B.11 and Appendix B.3, shows that $\|\tilde U_s-U_s\|_{op}\lesssim\varepsilon\,\kappa_s(A, b)$ and implies that the stability constant $C_{PLS,s}(A, b)$ in Equation (24) is proportional to the condition num...
B.3. The first inequality follows from
$1\le\rho_\xi=\mathrm{Tr}(\Sigma_\xi)/\lambda_{\max}(\Sigma_\xi)\le\lambda_{\max}(\Sigma_\xi)\,\mathrm{rk}(\Sigma_\xi)/\lambda_{\max}(\Sigma_\xi)=r_\xi\le p.$
We prove (i) by noting that $E(\|\xi\|_2^2)=\|\Sigma_\xi\|_{op}\rho_\xi$ implies $\|\Sigma_\xi\|_{op}\rho_\xi\le R^2$, so that
$\rho_\xi\le\rho_{\xi,n}=E\big(\max_{1\le i\le n}\|\xi_i\|_2^2\big)\big/\|\Sigma_\xi\|_{op}\le R^2/\|\Sigma_\xi\|_{op}.$
We prove (ii) by direct computation via Jensen's inequality and the change of variable $s=t\cdot\|\Sigma_\xi\|_{op}^{-1}$, ...
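The chain $1\le\rho_\xi\le r_\xi\le p$ is easy to see on a concrete covariance. A small numpy sketch (ours): a matrix with one dominant direction has effective rank close to 1 even though its ordinary rank equals $p$.

```python
import numpy as np

def effective_rank(Sigma):
    """rho = Tr(Sigma) / lambda_max(Sigma), the weighted average of the eigenvalues."""
    eigvals = np.linalg.eigvalsh(Sigma)
    return eigvals.sum() / eigvals.max()

p = 50
Sigma = np.diag([100.0] + [0.1] * (p - 1))   # one dominant direction
rho = effective_rank(Sigma)
r = np.linalg.matrix_rank(Sigma)
assert 1 <= rho <= r <= p                     # the chain 1 <= rho_xi <= r_xi <= p
print(rho)  # ≈ 1.049, while the ordinary rank is 50
```

When the spectrum is flat instead, $\rho_\xi$ approaches $r_\xi$, so the effective rank interpolates between the two extremes mentioned in the text.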
$/n$ $+\ 2C\cdot E\big(\|\hat\Sigma_\zeta\|_{op}\big)^{1/2}\|\Sigma_\xi\|_{op}^{1/2}\sqrt{\rho_{\xi,n}\log(r_\xi\vee r_\zeta)/n}$
$\le 2C\cdot\Big(E\big(\|\hat\Sigma_\xi\|_{op}\big)^{1/2}\|\Sigma_\zeta\|_{op}^{1/2}+E\big(\|\hat\Sigma_\zeta\|_{op}\big)^{1/2}\|\Sigma_\xi\|_{op}^{1/2}\Big)\cdot\sqrt{(\rho_{\xi,n}\vee\rho_{\zeta,n})\log(r_\xi\vee r_\zeta)/n}.$
Now using Lemma B.4 we get both $E(\|\hat\Sigma_\xi\|_{op})\le\|\Sigma_\xi\|_{op}(1+\delta_{\xi,n})\le 2\|\Sigma_\xi\|_{op}$ and $E(\|\hat\Sigma_\zeta\|_{op})\le\|\Sigma_\zeta\|_{op}(1+\delta_{\zeta,n})\le 2\|\Sigma_\zeta\|_{op}$. The latter display can be bounded as
$E\big(\|\hat\Sigma_{\xi,\zeta}-\Sigma_{\xi,\zeta}\|_{op}\big)\le 8C\cdot\|\Sigma_\xi\|_{op}^{1/2}\|\Sigma_\zeta\|_{op}^{1/2}\cdot...$
$|\omega_j| : e^{i\omega_j}\in\mathrm{Sp}(U),\ 0<\omega_j<\pi$, where the restriction to the interval $(0,\pi)$ is justified by the fact that the eigenvalues of $U$ consist of complex-conjugate pairs. As in the original definition, we remove the real eigenvalues $\{\pm1\}$. Let $B$ and $\tilde B$ be two $m$-dimensional subspaces of $\mathbb R^p$, and let $B\in\mathbb R^{p\times m}$ and $\tilde B\in\mathbb R^{p\times m}$ be some orthonormal bases of ...
$\sup_{\varepsilon>0}\ \sup_{(\tilde A,\tilde b):\,\Delta(\tilde A,\tilde b)\le\varepsilon}\ d(K_m,\tilde K_m)\big/\Delta(\tilde A,\tilde b),\qquad \kappa_2(K_m(A,b)):=\min_{K_m}\kappa_b(K_m). \quad (27)$
Since Krylov spaces are invariant under orthonormal transformations, there exist $V_m=(K_m\,|\,K_{m\perp})\in\mathbb R^{p\times p}$ and $\tilde V_m=(\tilde K_m\,|\,\tilde K_{m\perp})\in\mathbb R^{p\times p}$, both orthonormal bases of $\mathbb R^p$, such that $G_m=V^t_m K_m$ and $\tilde G_m=\tilde V^t_m\tilde K_m$ are the natural orthonormal bases of $K_m(H_m, e_1)$ and $K_m(...$
provide all the proofs for the results in the main sections.

C.1 Proofs for Section 2

Proof of Lemma 2.2. For the first statement, we notice that the conditions determining the relevant subspace $B_y$ in Equation (4) are equivalent to $\Sigma_x=U_y\Sigma_x U_y+U_{y^\perp}\Sigma_x U_{y^\perp}$, $\sigma_{x,y}=U_y\sigma_{x,y}$. This implies that $B_y$ is the unique $\Sigma_x$-envelope of $\mathrm{span}\{\sigma_x...$
the population features $x=x_y+x_{y^\perp}$ with $x_y\in B_y$ and $x_{y^\perp}\in B_y^\perp$. Similarly, the observations $x_{y,i}=x_{\tilde B,i}+x_{\tilde B^\perp,i}$ are i.i.d. copies of the population features $x_y=x_{\tilde B}+x_{\tilde B^\perp}$ with $x_{\tilde B}\in\tilde B$ and $x_{\tilde B^\perp}\in\tilde B^\perp$. Thus, we bound $\|\hat\Delta_b\|_2\le I_{\tilde B}(\hat\Delta_b)+I_{\tilde B^\perp}(\hat\Delta_b)+I_{y^\perp}(\hat\Delta_b)$ with
$I_{\tilde B}(\hat\Delta_b):=\Big\|\tfrac1n\sum_{i=1}^n x_{\tilde B,i}y_i-E(x_{\tilde B}y)\Big\|_2,\quad I_{\tilde B^\perp}(\hat\Delta_b):=\Big\|\tfrac1n\sum_{i=1}^n x_{\tilde B^\perp,i}y_i-E(x_{\tilde B^\perp}y)...$
yields $\|\tilde\beta^{(r)}_A-\tilde\beta_A\|_2\le\|\tilde\beta_A\|_2\sqrt{r_A-r}$. An inspection of the proof of Theorem 2.12 shows that its induced bounds hold for all sample parameters $\hat\theta^{(r)}_A\in\hat A(x, y)$ and population parameters $\theta^{(r)}_A\in A(x, y)$ with any $r\in\mathrm{DoF}(A(x, y))$ and $r\le r_A$, since the quantities appearing in the assumptions are largest when $r=r_A$. That is to say...
the features, the oracle number of degrees-of-freedom is one. This means that a sample algorithm achieves parsimonious identification if it can estimate the oracle direction using one degree-of-freedom. Here we compare the best sparse direction $\hat u_{FSS,1}$ estimated by sample Forward Subset Selection, the principal di...
Associated with Alzheimer's Disease. Applied Sciences, 14(6):2572, Mar 2024. ISSN 2076-3417. doi: 10.3390/app14062572. URL http://dx.doi.org/10.3390/app14062572.

Seung C. Ahn and Alex R. Horenstein. Eigenvalue Ratio Test for the Number of Factors. Econometrica, 81(3):1203–1227, 2013. doi: 10.3982/ecta8968. URL http...
The Annals of Applied Statistics, 9(3), Sep 2015. ISSN 1932-6157. doi: 10.1214/15-aoas842. URL http://dx.doi.org/10.1214/15-AOAS842.

Artur Buchholz. Optimal Constants in Khintchine Type Inequalities for Fermions, Rademachers and q-Gaussian Operators. Bulletin of the Polish Academy of Sciences. Mathematics, 53(3):...
Association, pages 1–13, Feb 2023. doi: 10.1080/01621459.2023.2169700. URL https://doi.org/10.1080/01621459.2023.2169700.

Gianluca Finocchio and Tatyana Krivobokova. An Extended Latent Factor Framework for Ill-Posed Linear Regression, 2023. URL https://arxiv.org/abs/2307.08377.

Sophie M. Fosson, Vito Cerone, and Di...
10.1109/jsait.2020.2984716. URL http://dx.doi.org/10.1109/JSAIT.2020.2984716.

Roberto I. Oliveira and Zoraida F. Rico. Improved Covariance Estimation: Optimal Robustness and Sub-Gaussian Guarantees under Heavy Tails. The Annals of Statistics, 52(5), Oct 2024. ISSN 0090-5364. doi: 10.1214/24-aos2407. URL http://dx...
arXiv:2505.01324v5 [stat.ME] 21 May 2025

Design-Based Inference under Random Potential Outcomes via Riesz Representation
Yukai Yang
Department of Statistics, Uppsala University
yukai.yang@statistik.uu.se
May 22, 2025

Abstract. We introduce a design-based framework for causal inference that accommodates random potential ou...
https://arxiv.org/abs/2505.01324v5
that potential outcomes should be treated as random rather than fixed. Our framework retains the design-based identification logic through randomised treatment assignment, assuming the treatment vector is randomly drawn from a known distribution $\mu$. By making outcome-level randomness explicit, the approach extends the c...
and conservative set-based alternatives that yield valid inference when uncorrelated unit pairs can be reasonably approximated. This version is computationally simpler and circumvents the need to estimate weak or negligible cross-terms, making it especially practical in settings with limited structural information. N...
for unit $i$ depends on the entire treatment vector $z$, so that interference (or spillover effects) is permitted. Notably, one may also consider the special case of no spillover effects, whereby
$y_i(z, x_i, \epsilon_i)=y_i(z_i, x_i, \epsilon_i), \quad (2.2)$
which corresponds to a version of the Stable Unit Treatment Value Assumption (SUTVA) incorporati...
from the randomised assignment of treatments. This approach preserves the fundamental idea that each unit has a well-defined potential outcome under every treatment assignment, capturing the core principle of SUTVA (or, more generally, the notion of well-defined potential outcomes), even as we permit these outcomes...
2.1) becomes a deterministic function of $z$, recovering the setting considered in their work. We equip $M_i$ with the inner product
$\langle u, v\rangle := E_\omega\big[E_z[u(z,\omega)v(z,\omega)]\big] = E\big[u(z,\omega)v(z,\omega)\big], \quad (2.5)$
where the second equa...
which in turn guarantees that $\theta$ can be uniquely represented by an element of the Hilbert space. Researchers need not be concerned about such technicalities in practice, as they are always free to define or adjust their treatment effect functional to comply with the requirements in Assumption 4. For instance, one might ...
only one realisation of the outcome $\tilde y_i(z,\omega)$ for certain $z$ and $\omega$. It follows immediately that the Riesz estimator in (2.14) for each unit is unbiased for its treatment effect given in (2.13).

¹ $L^2(\Omega; H^1(Z))$ denotes the Bochner space, and $H^1(Z)$ denotes the Sobolev space $W^{1,2}(Z)$.
corresponds to the integrability of the weight function. Given the unit-level Riesz estimators (2.14) and the weights, we define the corresponding aggregate Riesz estimator for the finite-sample estimand.

Definition 2 (Aggregate Riesz Estimator).
$\hat\tau_n(z,\omega)=\sum_{i=1}^n\nu_{ni}\hat\theta_i(z,\omega). \quad (2.18)$
Thus we have the unbias...
$\max_{1\le i\le n}|N_i|$ and $d_n=n^{-1}\sum_{i=1}^n|N_i|$ denote, respectively, the maximum and average neighbourhood sizes across the sample. These quantities will play a central role in the asymptotic analysis to follow. In practice, the specification of neighbourhoods $N_i$ depends on substantive knowledge of the experimental context...
Assumption 8 (Boundedness of Norms). For some $r\ge 1$, there exist real numbers $p, q\in[r,\infty]$ satisfying $1/p+1/q=1/r$ such that
$\sup_{i\in\mathbb N}\|\tilde y_i\|_p<\infty \quad\text{and}\quad \sup_{i\in\mathbb N}\|\psi_i\|_q<\infty. \quad (3.5)$
This assumption is formulated to allow flexibility in the choice of $r$. Intuitively, the condition imposes uniform boundedness on the $p$th mo...
for statistical inference using the aggregate Riesz estimator under outcome-level randomness. In particular, it justifies asymptotic testing and confidence interval construction for the null hypothesis $H_0:\tau_n=0$, based on the standardised statistic $T_n=\hat\tau_n(z,\omega)/\sigma_n$, assuming that both $\hat\tau_n(z,\omega)$ and $\sigma_n$ are consistently estima...
knowledge of the dependency neighbourhood structure: not only the maximal neighbourhood size $D_n$, but also which specific units are likely to be dependent. This may seem restrictive, but it is consistent with the broader literature, where the variance is generally considered non-identifiable from a single realisation...
to exhibit non-negligible second-order dependence, resulting in a conservative correlation-based estimator. Specifically, consider the set
$E^c_n=\{(i,j)\in\{1,\ldots,n\}^2 : E[\zeta_i\zeta_j]\neq 0\}, \quad (4.6)$
and define the corresponding correlation-based variance estimator
$\hat\sigma^2_{cn}=\sum_{(i,j)\in E^c_n}\nu_{ni}\nu_{nj}\zeta_i\zeta_j. \quad (4.7)$
Index Pairs). $E^c_n\subset\tilde E_n\subset E_n$. Based on this, we define a sparse matrix $\hat\Sigma^d_n\in\mathbb R^{n\times n}$ that retains only the entries corresponding to the index set $\tilde E_n$, representing the estimated dependency structure:
$(\hat\Sigma^d_n)_{ij}=\zeta_i\zeta_j$ if $(i,j)\in\tilde E_n$, and $0$ otherwise. $\quad (4.9)$
The corresponding variance estimator is then given by $\tilde\sigma^2_n=\nu'_n\hat\Sigma...$
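The construction above is a few lines of code. The sketch below is our illustration with hypothetical values of $\zeta$, $\nu$ and the dependency pairs (not values from the paper): build the sparse matrix on the retained index set, then form the quadratic form $\nu'\hat\Sigma\nu$.

```python
import numpy as np

def sparse_variance_estimator(zeta, nu, pairs):
    """Variance estimator retaining only entries (i, j) in the estimated
    dependency set: Sigma_ij = zeta_i * zeta_j on `pairs`, zero elsewhere,
    then sigma2 = nu' Sigma nu."""
    n = len(zeta)
    Sigma = np.zeros((n, n))
    for i, j in pairs:
        Sigma[i, j] = zeta[i] * zeta[j]
    return nu @ Sigma @ nu

# Toy example: units 0-3, dependence only between each unit and its neighbour,
# so the retained pairs are (i, i), (i, i+1) and (i+1, i).
zeta = np.array([0.5, -0.2, 0.1, 0.4])
nu = np.full(4, 0.25)
pairs = [(i, i) for i in range(4)] \
      + [(i, i + 1) for i in range(3)] + [(i + 1, i) for i in range(3)]
print(sparse_variance_estimator(zeta, nu, pairs))  # 0.01875
```

In large samples one would store only the retained entries (a sparse format) rather than the dense $n\times n$ matrix used here for clarity.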
and outcome distribution contains sufficient information for reliable estimation and inference, provided that the underlying conditions hold. In this way, the results provide a rigorous justification for making population-level claims based on one observed experiment.

5 Riesz Representer Estimation in Practice

The theo...
the conditional distribution of $z\mid\omega_0$ could differ from $\mu$, and the estimated representer would no longer correspond to the true Riesz representation defined under the design. It is at this stage, during the offline construction of $\hat\psi_i(z,\omega_0)$, that Assumption 2 is invoked. All subsequent inference proceeds without requiring in...
aggregate estimator and the construction of consistent variance estimators under local dependence. For completeness, we note that under standard conditions, such as an orthonormal basis, eigenvalues of the Gram matrix bounded away from zero, and sufficient smoothness of the representer, the estimation error satisfies...
excluded without loss of accuracy. We consider sample sizes $n\in\{100, 200, 500, 1000\}$ and dependency growth parameters $d\in\{0.0, 0.1, 0.2, 0.25, 0.3\}$. For each $(n, d)$ pair, we run 2000 independent replications. For each replication, we compute the estimator $\hat\tau_n$, the standard error $\hat\sigma_n$, and test statistics for Wald-type inference. W...
generalising the framework to address the stochastic nature of modern experimental data. In doing so, it contributes toward a broader synthesis between randomised experimental design and inferential generalisation. We develop a general asymptotic theory for the aggregate Riesz estimator under local dependence in...
without restrictive asymptotic assumptions. It is particularly well suited to modern experiments involving sensors, adaptive interventions, high-dimensional treatments, or biological systems, where random variation is intrinsic rather than incidental. While the correlation-based variance estimator $\hat\sigma^2_{cn}$ offers a more ...
L. and M. G. Hudgens (2014). Large sample randomization inference with applications to cluster-randomized and panel experiments. Biometrika 101(2), 457–471.

Neyman, J. (1990). On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Statistical Science 5(4), 465–472. Origi...
of HWS

Inputs:
• Basis functions $\{g_{i,1},\ldots,g_{i,m}\}$ for $M_i$
• Known linear functional $\theta_i$
• Randomisation distribution for $z$

Step 1: Compute Gram Matrix $\hat G_i$: $[\hat G_i]_{\ell,k}=E_z\big[g_{i,\ell}(z,\omega_0)g_{i,k}(z,\omega_0)\big]$
Step 2: Compute Target Vector $\hat T_i$: $[\hat T_i]_\ell=\theta_i(g_{i,\ell})$
Step 3: Solve Linear System $\hat G_i\beta_i=\hat T_i\ \Rightarrow\ \beta_i=\hat G_i^{-1}\hat T_i$
Output: ...
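The three steps can be sketched for the simplest possible design: a single binary treatment $z\sim\mathrm{Bernoulli}(1/2)$, basis $\{1, z\}$, and the difference functional $\theta(g)=g(1)-g(0)$. This is our toy example, not one from the paper.

```python
import numpy as np

# Design: z ~ Bernoulli(1/2) on support {0, 1}; basis {1, z}.
z_support = np.array([0.0, 1.0])
z_probs = np.array([0.5, 0.5])
basis = [lambda z: np.ones_like(z), lambda z: z]

# Step 1: Gram matrix [G]_{l,k} = E_z[g_l(z) g_k(z)]
G = np.array([[np.sum(z_probs * gl(z_support) * gk(z_support))
               for gk in basis] for gl in basis])

# Step 2: target vector [T]_l = theta(g_l) = g_l(1) - g_l(0)
T = np.array([g(np.array([1.0]))[0] - g(np.array([0.0]))[0] for g in basis])

# Step 3: solve G beta = T; the representer is psi(z) = sum_l beta_l g_l(z)
beta = np.linalg.solve(G, T)
psi = lambda z: sum(b * g(z) for b, g in zip(beta, basis))

# Sanity check: E_z[psi(z) g(z)] reproduces theta(g) for each basis function
for g, t in zip(basis, T):
    assert np.isclose(np.sum(z_probs * psi(z_support) * g(z_support)), t)
print(psi(z_support))  # psi(0) = -2, psi(1) = 2
```

The recovered representer $\psi(0)=-2$, $\psi(1)=2$ is exactly the familiar inverse-propensity weighting $\pm1/P(z)$ for a balanced binary design, which is a useful consistency check on the Gram-matrix route.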
Proof of Theorem 2. By definition of the Riesz estimator in (2.14) and the linearity of expectation, we have $E\big[\hat\theta_i(z,\omega)\big]=\theta_i(\tilde y_i)$, which further implies
$E\big[\hat\tau_n(z,\omega)\big]=\sum_{i=1}^n\nu_{ni}E\big[\hat\theta_i(z,\omega)\big]=\sum_{i=1}^n\nu_{ni}\theta_i(...$
Therefore, $|E[f_\varepsilon(W_n)]-E[f_\varepsilon(Z)]|\le L_\varepsilon\cdot d_W(W_n, Z)$. Since $d_W(W_n, Z)\to 0$, there exists $N(\varepsilon)\in\mathbb N$ such that $|E[f_\varepsilon(W_n)]-E[f_\varepsilon(Z)]|<\varepsilon/3$ for all $n>N(\varepsilon)$. Using the triangle inequality, we have
$|E[f(W_n)]-E[f(Z)]| \le |E[f(W_n)]-E[f_\varepsilon(W_n)]| + |E[f_\varepsilon(W_n)]-E[f_\varepsilon(Z)]| + |E[f_\varepsilon(Z)]-E[f(Z)]| < \varepsilon/3+\varepsilon/3+\varepsilon/3=\varepsilon.$
pairs. Hence every $X_{ij}$ is dependent on at most $C_1D_n^2$ other summands for some constant $C_1>0$. There are at most $nD_n$ ordered pairs in $E_n$, so
$V[\Delta_n]\le C_1D_n^2\cdot nD_n\cdot\bar\nu^4C/n^2=C_2\bar\nu^4D_n^3/n, \quad (D.11)$
for a constant $C_2>0$. By Chebyshev's inequality, for any $\varepsilon>0$,
$P\big(|\Delta_n|>\varepsilon\big)\le\mathrm{Var}(\Delta_n)/\varepsilon^2\le\big(C_2\bar\nu^4/\varepsilon^2\big)\,D_n^3/n.$
Since the ri...
Weight-calibrated estimation for factor models of high-dimensional time series
Xinghao Qiao¹, Zihan Wang², Qiwei Yao³, and Bo Zhang⁴
¹Faculty of Business and Economics, The University of Hong Kong, Hong Kong
²Department of Statistics and Data Science, Tsinghua University, Beijing, China
³Department of Statistics, Londo...
https://arxiv.org/abs/2505.01357v2
models¹ (Yu et al., 2022; Chen and Fan, 2023) and tensor factor models (Chen and Lam, 2024; Chen et al., 2024). The second type of models assumes that the dynamic information in $y_t$ is entirely captured by the common ...

¹ We refer to lag-0 and nonzero-lagged autocovariances as covariance and autocovariance, respectively.
weight matrix induces a rescaling of the eigenvalues of $\hat\Omega_y(k)\hat\Omega_y(k)^{\mathrm T}$, often resulting in an enhanced separation between the common and idiosyncratic components, and, more precisely, the relative decrease from the $r_0$-th to the $(r_0+1)$-th largest eigenvalues of $\hat\Omega_y(k)\hat W\hat\Omega_y(k)^{\mathrm T}$ is of higher order than that of $\hat\Omega_y(k)\hat\Omega_y(k)^{\mathrm T}$. Add...
specify the weight matrix in (3), we cast the autocovariance-based method in a reduced-rank autoregression framework. This is motivated by the equivalence between the standard PCA and a reduced-rank regression (Reinsel et al., 2022), which also involves a weight matrix implicitly. Specifically, consider regressing a...
5.1 below. Detailed intuitive illustrations are presented in Section C.2 of the supplementary material. (ii) The weight matrix can also enhance the capability to distinguish strong factors from weak factors. Consider the model $y_t=Ax_t+Bz_t+e_t$, where the strong factors $x_t\in\mathbb R^{r_0}$ and the weak factors $z_t\in\mathbb R^{r_1}$ are asymptotically ...
However, it can be relaxed to allow weak correlations between $\{x_t\}$ and $\{e_t\}$, which do not affect the asymptotic results. For simplicity, we assume uncorrelatedness to facilitate the presentation. Conditions 2(ii) and (iii) are also standard in the literature; see, e.g., Lam and Yao (2012) and Zhang et al. (2024). The par...
Theorem 1(ii), we can further establish that $D\{C(\Phi_{k,A}), C(A)\}=O_p(n^{-1/2}p^{(1-\delta_0)/2})$ for each $k\in[m]$, achieving the same rate as in Theorem 1(ii). Similar arguments apply to our proposed estimator $\hat A$, as shown in Proposition 2 below.

Proposition 1. Let the conditions for Theorem 1 hold. Assume that $\vartheta_np^{-\delta_0}\to 0$ and $\vartheta_n\gtrsim n^{-1}p$, wher...
strictly stationary with finite fourth moments; (iii) $\{y_t\}_{t\in\mathbb Z}$ is $\psi$-mixing with the mixing coefficients satisfying $\sum_{t\ge1}t\,\psi(t)^{1/2}<\infty$.

Condition 8. (i) There exist some constants $\delta_0$ and $\delta_1$ with $0<\delta_1\le\delta_0\le1$ such that $\|\Omega_x\|\asymp\|\Omega_x\|_{\min}\asymp p^{\delta_0}$ and $\|\Omega_z\|\asymp\|\Omega_z\|_{\min}\asymp p^{\delta_1}$; (ii) $n=O(p)$ and $p^{1-\delta_1}=o(n)$.

Condition 9. For model (10), let $E=(e_1,\ldots,e_n...$
of $\Psi_{k,A}$ in Corollary 1(ii) is consistent with that in Theorem 1(ii) for model (1).

Corollary 1. Let Conditions 6–10 hold and $\|\Omega_z(k)\|=O(n^{-1/2}p^{(1+\delta_1)/2})$ for each $k\in[m]$.
(i) $\mu_{k1}\asymp\mu_{kr_0}\asymp p^{2\delta_0}$ with probability tending to 1, and $\mu_{k,r_0+1}=O_p(n^{-1}p^{1+\delta_1})$;
(ii) $\|\Psi_{k,A}\Psi^{\mathrm T}_{k,A}-AA^{\mathrm T}\|=O_p(n^{-1/2}p^{(1-\delta_0)/2})$.

4.3 Weight-calibrated autocovariance...
of simulations to evaluate the finite-sample performance of the proposed method for factor models (1) and (10) in Sections 5.1 and 5.2, respectively.

5.1 The model with uniform factor strength

Consider the factor model
$y_t=\tilde A\tilde x_t+e_t,\ t\in[n], \quad (11)$
where the entries of $\tilde A\in\mathbb R^{p\times r_0}$ are sampled independently from the uniform di...
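A minimal version of such a simulation can be written in a few lines. The sketch below is ours (the AR(1) factor dynamics and noise scale are illustrative choices, not the paper's exact design): it generates $y_t=Ax_t+e_t$ with uniform loadings and recovers the loading space from the leading eigenvectors of $\hat\Omega_y(1)\hat\Omega_y(1)^{\mathrm T}$.

```python
import numpy as np

rng = np.random.default_rng(1)
p, r0, n = 20, 2, 2000

A = rng.uniform(-1, 1, size=(p, r0))           # loadings with uniform entries
x = np.zeros((n, r0))
for t in range(1, n):                          # AR(1) factors carry the dynamics
    x[t] = 0.7 * x[t - 1] + rng.standard_normal(r0)
y = x @ A.T + 0.5 * rng.standard_normal((n, p))

# Lag-1 autocovariance and the symmetrised matrix used by autocovariance methods
yc = y - y.mean(0)
Omega1 = yc[1:].T @ yc[:-1] / (n - 1)
M = Omega1 @ Omega1.T
eigvals, eigvecs = np.linalg.eigh(M)
Psi = eigvecs[:, -r0:]                         # r0 leading eigenvectors

# Distance between estimated and true loading spaces (0 means identical)
P_true = A @ np.linalg.pinv(A)                 # projection onto C(A)
P_hat = Psi @ Psi.T
dist = np.linalg.norm(P_hat - P_true, 2)
print(dist)
```

With a large $n$ and moderate noise the projection distance is small, which is the qualitative behaviour the section's simulations quantify.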
common factors is weak relative to the idiosyncratics, corresponding to smaller values of $\delta_0$ and $p$, the idiosyncratic errors exhibit cross-sectional correlations comparable to those of the factors in finite dimensions. This diminishes the capability of Cov to separate $x_t$ from $e_t$, leading to inferior performance compared ...
misidentifying 6 strong factors. This aligns with Theorem 2(i). Additionally, Auto1 also fails to distinguish $x_t$ from $z_t$ effectively, whereas Auto2 performs very well, as guaranteed by Theorem 3(i). This distinct performance arises from $\delta_{11}=1$ and $\delta_{12}=0$ for $\{z_t\}$, which follows a vector MA(1) process. Secondly, when $\delta_0=\delta_1=1$, ...
0.275 0.370 0.308 0.335 0.295 (0.253) (0.208) (0.134) (0.224) (0.170) (0.177) (0.114)
p = 300: 0.687 0.312 0.252 0.384 0.281 0.288 0.266 (0.147) (0.200) (0.112) (0.247) (0.162) (0.142) (0.084)
p = 500: 0.712 0.290 0.246 0.457 0.270 0.275 0.260 (0.077) (0.174) (0.095) (0.263) (0.146) (0.120) (0.064)
δ0 = 1, δ1 = 0.85
p = 50: 0.438 ...
for WAuto; $\tilde R_j$, $R^*_j$ and $\tilde R^*_j$ can be defined analogously using $\{\mu_{kj}\}$, $\{\lambda^*_{kj}\}$ and $\{\mu^*_{kj}\}$, respectively, where $\{\lambda^*_{kj}\}$ and $\{\mu^*_{kj}\}$ represent the corresponding eigenvalues obtained in the second step when applying WAuto or Auto to $\{y^*_t\}_{t=1}^n$. ... variances of the squared loadings. The varimax rotation enhances the color distincti...
original series as $\hat y_{T+h}=\hat A\hat x_{T+h}$, where $\hat x_{T+h}=(\hat x_{T+h,1},\ldots,\hat x_{T+h,\hat r_0})^{\mathrm T}$. To evaluate the forecasting performance, we employ the expanding-window approach. The data is divided into a training set and a test set, consisting of the first $n_1$ and the last $n_2=n-n_1$ observations, respectively. For a positive integer $h$, we appl...
$_j=Q_{1j}\big\{Q^{\mathrm T}_{1j}\hat\Omega^{(1)}_{y,jj}(0)Q_{1j}\big\}^{-1}Q^{\mathrm T}_{1j}$, and $\hat W^{(2)}_{j_1}=Q_{2j_1}\big\{Q^{\mathrm T}_{2j_1}\hat\Omega^{(2)}_{y,j_1j_1}(0)Q_{2j_1}\big\}^{-1}Q^{\mathrm T}_{2j_1}$. The expressions of $\hat M^{(1)}$ and $\hat M^{(2)}$ are derived in a reduced-rank autoregression formulation for matrix-valued time series; see Section C.4 of the supplementary material. The row (or column) factor loading space $C(R)$ (or $C(C)$) can the...
Royal Statistical Society Series B: Statistical Methodology 75(3): 531–552.

Forni, M., Hallin, M., Lippi, M. and Reichlin, L. (2000). The generalized dynamic-factor model: Identification and estimation, The Review of Economics and Statistics 82(4): 540–554.

Gao, Z. and Tsay, R. S. (2022). Modeling high-dimensional tim...
and are used multiple times in the proofs of this paper. Let $\{\lambda_j\}_{j\in[p]}$ be the eigenvalues of $\Sigma\in\mathbb R^{p\times p}$ in descending order and $\{\xi_j\}_{j\in[p]}$ be the corresponding eigenvectors. Similarly, $\{\tilde\lambda_j\}_{j\in[p]}$ and $\{\tilde\xi_j\}_{j\in[p]}$ are the corresponding eigenvalues and eigenvectors of $\tilde\Sigma\in\mathbb R^{p\times p}$, respectively.

Lemma S.1 (Weyl's theorem; Weyl (1912)). $|...$
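Weyl's theorem, which bounds each $|\tilde\lambda_j-\lambda_j|$ by the spectral norm of the perturbation $\tilde\Sigma-\Sigma$, is easy to verify numerically. A sketch of such a check (ours, with an arbitrary random perturbation):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 8
B = rng.standard_normal((p, p))
Sigma = B @ B.T                      # symmetric PSD matrix
E = rng.standard_normal((p, p))
E = 0.05 * (E + E.T)                 # small symmetric perturbation
Sigma_tilde = Sigma + E

lam = np.sort(np.linalg.eigvalsh(Sigma))[::-1]         # descending eigenvalues
lam_tilde = np.sort(np.linalg.eigvalsh(Sigma_tilde))[::-1]

# Weyl: each ordered eigenvalue moves by at most ||E||_op
bound = np.linalg.norm(E, 2)
assert np.all(np.abs(lam_tilde - lam) <= bound + 1e-10)
print(np.abs(lam_tilde - lam).max(), "<=", bound)
```

This is exactly the mechanism by which the supplementary proofs transfer perturbation bounds on $\hat\Omega_y(k)\hat\Omega_y(k)^{\mathrm T}$ into bounds on its ordered eigenvalues.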
than the corresponding ordered eigenvalues of $N^{1/2}_{k,A}\tilde\Psi^{\mathrm T}_{k,A}\big(\sum_{j=1}^{r_0}\theta_j^{-1}\xi_j\xi_j^{\mathrm T}\big)\tilde\Psi_{k,A}N^{1/2}_{k,A}$. Then, by combining (S.3) and (S.5) in Lemmas S.3 and S.4, it follows that
$\Big\|\tilde\Psi^{\mathrm T}_{k,A}\Big(\sum_{j=1}^{r_0}\theta_j^{-1}\xi_j\xi_j^{\mathrm T}\Big)\tilde\Psi_{k,A}\Big\|\asymp\Big\|\tilde\Psi^{\mathrm T}_{k,A}\Big(\sum_{j=1}^{r_0}\theta_j^{-1}\xi_j\xi_j^{\mathrm T}\Big)\tilde\Psi_{k,A}\Big\|_{\min}\asymp p^{-\delta_0}$
with probability tending to 1, and $\Big\|\tilde\Psi^{\mathrm T}_{k,A}\Big(\sum_{j=r_0+1}^{q}\theta_j^{-1}\xi_j...$
spectral decomposition written as
$\hat\Omega_y=\sum_{i=1}^{p}\theta_i\xi_i\xi_i^{\mathrm T}=\Xi_A\Theta_A\Xi_A^{\mathrm T}+\Xi_B\Theta_B\Xi_B^{\mathrm T}+\Xi_E\Theta_E\Xi_E^{\mathrm T},$
where $\Theta_A=\mathrm{diag}(\theta_1,\ldots,\theta_{r_0})$, $\Theta_B=\mathrm{diag}(\theta_{r_0+1},\ldots,\theta_r)$, $\Theta_E=\mathrm{diag}(\theta_{r+1},\ldots,\theta_p)$, $\Xi_A=(\xi_1,\ldots,\xi_{r_0})$, $\Xi_B=(\xi_{r_0+1},\ldots,\xi_r)$, and $\Xi_E=(\xi_{r+1},\ldots,\xi_p)$. (i) Firstly, $\theta_1^{1/2}\ge\cdots\ge\theta_{\lfloor c_1(p\wedge n)\rfloor}^{1/2}\ge 0$ are the $\lfloor c_1(p\wedge n)\rfloor$ largest singular valu...
be used to prove the other three terms of (S.11). Thus, we have
$\big\|(I_p-\Psi_{k,A}\Psi^{\mathrm T}_{k,A})\hat\Omega_y(k)\Psi_{k,A}\Psi^{\mathrm T}_{k,A}\hat\Omega_y(k)^{\mathrm T}(I_p-\Psi_{k,A}\Psi^{\mathrm T}_{k,A})\big\|=o_p(p^{2\delta_{1k}}),$
which concludes (S.10). The proof is complete. Now we are ready to prove Theorem 3. The proofs of $\mu_{k1}\asymp\mu_{kr_0}\asymp p^{2\delta_0}$ in Theorem 3(i) and $\|\Psi_{k,A}\Psi^{\mathrm T}_{k,A}-AA^{\mathrm T}\|=O_p(n^{-1/2}p^{(1-\delta_0)/2}+p^{\delta_{1k}-\delta_0})$ in Theorem 3(i...
proof of (S.18) is similar to that of our Theorem 2(ii), or Theorem 1 in Zhang et al. (2024), and thus is omitted. It suffices to prove (S.19). In the following, we consider two cases, i.e., $\delta_0=\delta_1$ and $\delta_0>\delta_1$, separately. On the one hand, when $\delta_0=\delta_1$, we can rewrite $Y$ as $Y=WC^{\mathrm T}+E$ with $CC^{\mathrm T}=P_{A,B}$ and $\|n^{-1}W^{\mathrm T}W\|\asymp\|n^{-1}W^{\mathrm T}W\|_{\min}\asymp p^{\delta_0}$ with pro...
and the entries of them are all $O_p(n^{1/2}p^{\delta_0/2})$ and $O_p(n^{1/2}p^{\delta_1/2})$, respectively. Thus,
$\|(n-k)^{-1}(AX^{\mathrm T}+BZ^{\mathrm T}+E^{\mathrm T})D_kE(I_p-P_{A,B})\Xi_q\| \le \|(n-k)^{-1}AX^{\mathrm T}D_kE(I_p-P_{A,B})\Xi_q\| + \|(n-k)^{-1}BZ^{\mathrm T}D_kE(I_p-P_{A,B})\Xi_q\| + \|(n-k)^{-1}E^{\mathrm T}D_kE(I_p-P_{A,B})\Xi_q\| = O_p(n^{-1/2}p^{\delta_0/2})+O_p(n^{-1/2}p^{\delta_1/2})+O_p(n^{-1}p)=O_p(n^{-1}p),$
which, together with the fact $\|\Theta_q^{-1/2}\|=O_p(n^{1/2}p^{-1/2})$, which can be ...
the eigenvalues of $P_{A,B}\hat\Omega_y(k)^{\mathrm T}\hat\Omega_y(k)P_{A,B}$, implies that $\|\hat\Omega_y(k)P_{A,B}\Xi_q\Theta_q^{-1/2}\|\asymp\|\hat\Omega_y(k)P_{A,B}\Xi_q\Theta_q^{-1/2}\|_{\min}\asymp p^{\delta_0/2}$ with probability tending to 1. Then, combined with (S.22) and Theorem 3(i), we can show that $\lambda_{k1}\asymp\lambda_{kr_0}\asymp p^{\delta_0}$ and $\lambda_{k,r_0+1}\asymp\lambda_{kr}\asymp p^{2\delta_{1k}-\delta_1}$ with probability tending to 1. On the other hand, when $\delta_0>\delta_1$, we have $\hat\Omega_y(k)P_{A,B}\Xi_q\Theta_q^{-1/2}...$
some constant $C_6>0$ such that
$\|(n-k)^{-1}(I_p-\Phi_{k,A}\Phi^{\mathrm T}_{k,A})(AX^{\mathrm T}+BZ^{\mathrm T}+E^{\mathrm T})D_kZB^{\mathrm T}\Xi_B\|_{\min} \ge \|(n-k)^{-1}(I_p-\Phi_{k,A}\Phi^{\mathrm T}_{k,A})BZ^{\mathrm T}D_kZB^{\mathrm T}\Xi_B\|_{\min} - \|(n-k)^{-1}E^{\mathrm T}D_kZB^{\mathrm T}\| - \|AA^{\mathrm T}-\Phi_{k,A}\Phi^{\mathrm T}_{k,A}\|\,\|(n-k)^{-1}AX^{\mathrm T}D_kZB^{\mathrm T}\| \ge C_6p^{\delta_{1k}} \quad (S.33)$
with probability tending to 1. Combining (S.31), (S.32) and (S.33), we have $\|(I_p-\Phi_{k,A}\Phi^{\mathrm T}_{k,A})\hat\Omega_y(k)\Xi_B\Theta_B^{-1/2}\|_{\min}\ge C_5C_6p^{\delta_{1k}-\delta_1/2}$ with probabilit...
a relatively faster order of decrease from the $r_0$-th to the $(r_0+1)$-th largest eigenvalues of $\hat\Omega_y(k)\hat W\hat\Omega_y(k)^{\mathrm T}$ than that of $\hat\Omega_y(k)\hat\Omega_y(k)^{\mathrm T}$. Based on the properties of the $c_{kj}$'s discussed above, we present a reasonable choice of $Q$. Note that, for each $k\in[m]$, $\{\psi_{kj}\}_{j=1}^{r_0}$ and $\{\tilde\psi_{kj}\}_{j=1}^{r_0}$ are the $r_0$ leading eigenvectors of $\hat\Omega_y(k)\hat\Omega_y(k)^{\mathrm T}$ ...
$_{j\cdot}$. Then, we have the following $p_2$ vector factor models:
$y_{t,\cdot j}=RX_tc^{\mathrm T}_{j\cdot}+e_{t,\cdot j}=Rz_{t,j}+e_{t,\cdot j},\ j\in[p_2]. \quad (S.41)$
Without loss of generality, we assume that $E(y_{t,\cdot j})=0$. Letting $Q_{1j}$ consist of the $q_1$ leading eigenvectors of $\hat\Omega^{(1)}_{y,jj}(0)$ with $d_1<q_1\le p_1\wedge n$, we project $y_{t,\cdot j}$ to $\tilde y_{t,\cdot j}=Q^{\mathrm T}_{1j}y_{t,\cdot j}\in\mathbb R^{q_1}$. By assuming the latent factor is of ...
arXiv:2505.01388v1 [eess.IV] 2 May 2025

Potential Contrast: Properties, Equivalences, and Generalization to Multiple Classes
Wallace Peaslee, DAMTP, University of Cambridge, Cambridge, UK (https://orcid.org/0000-0002-5274-9035)
Anna Breger, DAMTP, University of Cambridge, Cambridge, UK; CMPBE, Medical University of Vienna, Vie...
https://arxiv.org/abs/2505.01388v1
distance [8] between the distributions of sampled foreground and background pixels, enabling generalization to continuous contexts. We also introduce new equalities for NPC that can make computation simpler and provide alternate interpretations of its value. One of these equalities allows us to define multi-class NPC...
$h(B))$ (3)
Following an argument analogous to Proposition 1 of [1], an optimal $h^{\mathrm{opt}}_{A,B}\in\arg\max_{h\in H(A,B)}\mathrm{CMI}(h(A), h(B))$ is the following binarization:
$h^{\mathrm{opt}}_{A,B}(y)=1$ if $P_A(y)\ge P_B(y)$, and $0$ if $P_A(y)<P_B(y)$. (4)
The manner in which PC relies on the range of values of an image format is formalized by the following lemma.
Lemma 2. For ...
the appropriate scaling according to Corollary 4. Next, we prove some equalities for NPC, including its equivalence to the total variation distance [8] between probability measures, which we denote by $\delta_{tv}(P_A, P_B)$, thinking of $P_A$ and $P_B$ as probability mass functions. In discrete cases like ours, $\delta_{tv}(P_A, P_B)$ can be defined ...
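For empirical pmfs over a discrete grey-level range, $\delta_{tv}$ takes a single pass over the two histograms, $\delta_{tv}(P_A,P_B)=\tfrac12\sum_x|P_A(x)-P_B(x)|$. A short sketch (ours, with toy pixel sets; the function name is illustrative):

```python
import numpy as np

def tv_distance(a_pixels, b_pixels, levels):
    """Total variation distance between the empirical value distributions of two
    labelled pixel sets: delta_tv = (1/2) * sum_x |P_A(x) - P_B(x)|."""
    P_A = np.bincount(a_pixels, minlength=levels) / len(a_pixels)
    P_B = np.bincount(b_pixels, minlength=levels) / len(b_pixels)
    return 0.5 * np.abs(P_A - P_B).sum()

# Toy example with 4 grey levels: disjoint supports give the maximal value 1,
# identical distributions give 0 (no contrast between the classes).
fg = np.array([0, 0, 1, 1])
bg = np.array([2, 3, 3, 3])
print(tv_distance(fg, bg, 4))   # 1.0 — supports are disjoint
print(tv_distance(fg, fg, 4))   # 0.0
```

Computing the distance directly on histograms is precisely what lets one avoid a second pass over the pixel sets, as noted later in the section.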
(Figure caption: ... labeled. Labels for one ink are in yellow, for a second ink in orange, and for the background in blue. Top Right: a multi-class NPC result, where the colors reflect the class segmentation. The multi-class NPC has a value of 0.885.)

avoid iterating over $A$ and $B$ a second time and can reduce computation. The equivalen...
$x$, as the difference between the distribution with the largest portion of pixels having value $x$ and the average of the remaining distributions. We can also interpret multi-class NPC as an error rate, analogous to Section III. In particular, if $e_i$ gives the proportion of pixels from $A_i$ that are misclassified as belonging to ...
is an image contrast and quality measure that relies on task-dependent labels. We introduce a scaled version that is commensurate across image formats and applications, which retains many of the desirable properties of potential contrast. We prove equalities that support different interpretations of normalized potent...
arXiv:2505.01825v1 [math.ST] 3 May 2025

Asymptotic representations for Spearman's footrule correlation coefficient
Liqi Xia, Sami Ullah and Li Guan*
School of Mathematics, Statistics and Mechanics, Beijing University of Technology, Beijing 100124, China
E-mail address (correspondence): guanli@bjut.edu.cn (Li Guan)

Abstrac...
https://arxiv.org/abs/2505.01825v1
$\{(X_1, Y_1),\ldots,(X_n, Y_n)\}$, is obtained independently and identically distributed (i.i.d.) from $(X, Y)$. Let $R_i=\sum_{k=1}^n I(X_k\le X_i)$ be the rank of $X_i$ with indicator function $I(\cdot)$, $i=1,\ldots,n$. Similarly, $S_i=\sum_{k=1}^n I(Y_k\le Y_i)$ is the rank of $Y_i$. Then, Spearman's footrule rank correla...
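The chunk truncates before the displayed definition, so the sketch below assumes the standard footrule normalisation $\phi_n=1-\tfrac{3}{n^2-1}\sum_{i=1}^n|R_i-S_i|$ (an assumption on our part, not taken from this excerpt), with ranks computed via the indicator sums above.

```python
import numpy as np

def footrule(x, y):
    """Spearman's footrule coefficient, assuming the standard normalisation
    phi_n = 1 - 3 * sum|R_i - S_i| / (n^2 - 1), with ranks
    R_i = #{k : x_k <= x_i} as defined via the indicator sums in the text."""
    n = len(x)
    R = np.array([(x <= xi).sum() for xi in x])   # rank of x_i
    S = np.array([(y <= yi).sum() for yi in y])   # rank of y_i
    return 1.0 - 3.0 * np.abs(R - S).sum() / (n * n - 1)

x = np.array([0.1, 0.9, 0.4, 0.7])
print(footrule(x, x))     # 1.0 — perfect concordance
print(footrule(x, -x))    # reversed ordering gives the minimal value here
```

Unlike Spearman's rho, the footrule aggregates absolute rather than squared rank differences, which is what drives the distinct asymptotics studied in the paper.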
https://arxiv.org/abs/2505.01825v1
investigations into Spearman's footrule.

3 Simulation study

A simple simulation is conducted in this section to demonstrate the asymptotic behavior of $\sqrt n\,\phi_n$, $\sqrt n\,\tilde\phi_n$ and $\sqrt n\,\hat\phi_n$. Specifically, let $X$ and $Y$ be drawn from the standard normal distribution and the standard uniform distribution, respectively, with sample si...