$$\frac{\max_{h\in[H]}\|\xi_h\|_2\,\tilde R_{\mathcal K}\sqrt{H}}{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}}.$$
This implies that
$$\mathcal E_2\subset\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{\mathcal K\subsetneq[K]}\frac{\bar R_x|\varepsilon_0|+\max_{h\in[H]}\|\xi_h\|_2\,\tilde R_{\mathcal K}\sqrt{H}}{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}}\ge z_{1-\alpha/2}\Big\}$$
$$\subset\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{\mathcal K\subsetneq[K]}\frac{\bar R_xz_{1-\alpha/2}+\max_{h\in[H]}\|\xi_h\|_2\,\tilde R_{\mathcal K}\sqrt{H}}{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}}\ge z_{1-\alpha/2}\Big\}$$
$$=\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{h\in[H]}\|\xi_h\|_2\ge z_{1-\alpha/2}\min_{\mathcal K\subsetneq[K]}\frac{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}-\bar R_x}{\sqrt{H}\,\tilde R_{\mathcal K}}\Big\}$$
$$=\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{h\in[H]}\|\xi_h\|_2\ge z_{1-\alpha/2}\frac{\big(\bar R_x^2+\min_{k\in[K]}\tilde R_{[-k]}^2\big)^{1/2}-\bar R_x}{\sqrt{H}\,\min_{k\in[K]}\tilde R_{[-k]}}\Big\},$$
where the last equality holds because $\{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}-\bar R_x\}/(\sqrt{H}\,\tilde R_{\mathcal K})$ is an increasing function of $\tilde R_{\mathcal K}^2$ and $\min_{\mathcal K\subsetneq[K]}\tilde R_{\mathcal K}^2=\min_{k\in[K]}\tilde R_{[-k]}^2$. Since
$$z_{1-\alpha/2}\frac{\big(\bar R_x^2+\min_{k\in[K]}\tilde R_{[-k]}^2\big)^{1/2}-\bar R_x}{\sqrt{H}\,\min_{k\in[K]}\tilde R_{[-k]}}=\frac{z_{1-\alpha/2}\,\min_{k\in[K]}\tilde R_{[-k]}/\sqrt{H}}{\big(\bar R_x^2+\min_{k\in[K]}\tilde R_{[-k]}^2\big)^{1/2}+\bar R_x}=\Big\{\frac{\bar a_{\mathrm{rem}}(\alpha,R_x^2,\Delta)}{H}\Big\}^{1/2},$$
we have
$$\mathcal E_2\subset\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{h\in[H]}\|\xi_h\|_2^2\ge\frac{\bar a_{\mathrm{rem}}(\alpha,R_x^2,\Delta)}{H}\Big\}.$$
This completes the proof.

Proof of Theorem S2 (i). The proof of the "if" part: We use the same notation as in the proof of Lemma S19. If Condition 2 holds for $h\in[H]$, $k\in[K]$, we have $\tilde R^2_{h,[-k]}=\frac{1}{K}R^2_{h,x}$, which yields that
$$\tilde R^2_{[-k]}=\sum_{h=1}^H\pi_h\tilde R^2_{h,[-k]}V_{h,\tau\tau}/V_{\tau\tau}=\sum_{h=1}^H\pi_h\frac{1}{K}R^2_{h,x}V_{h,\tau\tau}/V_{\tau\tau}=\frac{1}{K}R_x^2.$$
Therefore $\Delta=\min_{k\in[K]}\tilde R^2_{[-k]}=K^{-1}R_x^2$. The "if" part follows from Lemma S19.

The proof of the "only if" part: We then prove the "only if" part, i.e., for any $R_x^2\in(0,1)$, there exists $(Y_i(1),Y_i(0))_{i=1}^n\in\mathcal M_{\mathrm{ss}}(R_x^2)$ satisfying Condition 2 such that for any $a>\bar a_{\mathrm{rem}}(\alpha,R_x^2,R_x^2/K)/H$, we have $P_\infty(p^h_L\le\alpha\mid\mathcal A_{\mathrm{ss\text{-}rem}}(a))>0$.

We construct $(Y_i(1),Y_i(0))_{i=1}^n\in\mathcal M_{\mathrm{ss}}(R_x^2)$ as follows, which satisfies Condition 2: (i) $Y_i(1)=Y_i(0)$ for $i\in[n]$ and (ii) for $h\in[H]$, $V_{h,xx}=I_K$, $\pi_h^{1/2}V_{h,x\tau}=\mathbf 1_K$, $V_{h,\tau\tau}=(\pi_hR_x^2)^{-1}K$. Then for any $\mathcal K\subseteq[K]$, $h\in[H]$, $\pi_hV_{h,\tau\mathcal K}V_{h,\mathcal K\mathcal K}^{-1}V_{h,\mathcal K\tau}=|\mathcal K|$, where $|\mathcal K|$ is the size of the set $\mathcal K$. We can verify that for any $\mathcal K\subseteq[K]$, $h\in[H]$,
$$R^2_{h,\mathcal K}=R_x^2|\mathcal K|/K,\quad \pi_h\tilde R^2_{h,\mathcal K}V_{h,\tau\tau}=K-|\mathcal K|,\quad V_{\tau\tau}=\sum_{h\in[H]}\pi_hV_{h,\tau\tau}=HK/R_x^2,\quad \tilde R^2_{\mathcal K}=\frac{K-|\mathcal K|}{K}R_x^2.\tag{21}$$
Recall the notation $\mathcal E$, $\mathcal E_1$ and $\mathcal E_2$ in the proof of Lemma S19. Since $Y_i(1)=Y_i(0)$ for $i\in[n]$, we have $N_{L,\mathcal K}=\tilde N_{L,\mathcal K}$, and we have
$$P_\infty(p^h_L\le\alpha\mid\mathcal A_{\mathrm{ss\text{-}rem}}(a))=P(\mathcal E\mid\mathcal A^\infty_{\mathrm{ss\text{-}rem}}(a))=P(\mathcal E_1\mid\mathcal A^\infty_{\mathrm{ss\text{-}rem}}(a))+P(\mathcal E_2\mid\mathcal A^\infty_{\mathrm{ss\text{-}rem}}(a))=\alpha+P(\mathcal E_2\mid\mathcal A^\infty_{\mathrm{ss\text{-}rem}}(a)).$$
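Display (21) can be checked numerically. The sketch below (our own illustration, with an arbitrary choice of stratum proportions $\pi_h$, $R_x^2$, $H$ and $K$) verifies the four equalities directly from the construction $V_{h,xx}=I_K$, $\pi_h^{1/2}V_{h,x\tau}=\mathbf 1_K$, $V_{h,\tau\tau}=(\pi_hR_x^2)^{-1}K$:

```python
import numpy as np

# Numeric sanity check of display (21). The stratum proportions pi and the
# values of H, K, Rx2 below are arbitrary illustrative choices.
H, K, Rx2 = 3, 4, 0.5
pi = np.array([0.2, 0.3, 0.5])                 # stratum proportions, sum to 1

Vtt = sum(pi[h] * K / (pi[h] * Rx2) for h in range(H))
assert np.isclose(Vtt, H * K / Rx2)            # V_{tau tau} = HK / Rx^2

for h in range(H):
    V_xt = np.ones(K) / np.sqrt(pi[h])         # V_{h, x tau}
    V_tt = K / (pi[h] * Rx2)                   # V_{h, tau tau}
    for m in range(K):                         # subset K = first m columns, |K| = m
        quad = V_xt[:m] @ np.linalg.inv(np.eye(m)) @ V_xt[:m] if m else 0.0
        assert np.isclose(pi[h] * quad, m)     # pi_h V_{tau K} V_{KK}^{-1} V_{K tau} = |K|
        assert np.isclose(quad / V_tt, Rx2 * m / K)            # R^2_{h,K} = Rx^2 |K| / K
        tilde_R2_hK = Rx2 * (K - m) / K                        # residual R^2 per stratum
        assert np.isclose(pi[h] * tilde_R2_hK * V_tt, K - m)   # pi_h tilde_R^2 V_{h,tt} = K - |K|

# tilde_R2_K aggregates across strata exactly as stated in (21):
m = 1
tilde_R2_K = sum(pi[h] * (Rx2 * (K - m) / K) * (K / (pi[h] * Rx2)) for h in range(H)) / Vtt
assert np.isclose(tilde_R2_K, (K - m) / K * Rx2)
print("display (21) verified numerically")
```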
It remains to show that $P(\mathcal E_2\cap\mathcal A^\infty_{\mathrm{ss\text{-}rem}}(a))>0$. We treat $\mathcal E_2\cap\mathcal A^\infty_{\mathrm{ss\text{-}rem}}(a)$ as a measurable set of $(\varepsilon_0,(\xi_h)_{h\in[H]})$. Since a nonempty open set has measure greater than 0, it suffices to show that the following open subset $\mathcal E_3\subseteq\mathcal E_2\cap\mathcal A^\infty_{\mathrm{ss\text{-}rem}}(a)$ is nonempty:
$$\mathcal E_3=\Big\{|N_{L,[K]}|<z_{1-\alpha/2},\ \max_{\mathcal K\subsetneq[K]}|N_{L,\mathcal K}|>z_{1-\alpha/2}\Big\}\cap\Big\{\max_{h\in[H]}\|\xi_h\|_2^2<a\Big\}.$$
We see that
$$|N_{L,\mathcal K}|=\frac{\big|\bar R_xV_{\tau\tau}^{1/2}\varepsilon_0+\sum_{h=1}^H\pi_h^{1/2}\varepsilon_{h,x}^\top(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})\big|}{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}V_{\tau\tau}^{1/2}}=\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\sum_{h=1}^H\pi_h^{1/2}s_{h,\mathcal K}\|\xi_h\|_2\tilde R_{h,\mathcal K}V_{h,\tau\tau}^{1/2}}{V_{\tau\tau}^{1/2}(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}},$$
where $s_{h,\mathcal K}=\mathrm{sign}(\varepsilon_0)\,\varepsilon_{h,x}^\top(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})/\{\|\xi_h\|_2\tilde R_{h,\mathcal K}V_{h,\tau\tau}^{1/2}\}$. Define
$$\psi(|\varepsilon_0|)=\frac{(\bar R_x^2+K^{-1}R_x^2)^{1/2}z_{1-\alpha/2}-\bar R_x|\varepsilon_0|}{\sqrt{HK^{-1}R_x^2}}.$$
Under the event $\{s_{h,[-1]}=1,\ h\in[H]\}$, we have
$$\max_{\mathcal K\subsetneq[K]}|N_{L,\mathcal K}|\ge|N_{L,[-1]}|=\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\sum_{h=1}^H\pi_h^{1/2}\|\xi_h\|_2\tilde R_{h,[-1]}V_{h,\tau\tau}^{1/2}}{V_{\tau\tau}^{1/2}(\bar R_x^2+\tilde R_{[-1]}^2)^{1/2}}\ge\frac{\bar R_x|\varepsilon_0|+\min_{h\in[H]}\|\xi_h\|_2\,(H/K)^{1/2}R_x}{(\bar R_x^2+K^{-1}R_x^2)^{1/2}},$$
where the last inequality is due to the second and third equalities in (21). Therefore,
$$\mathcal E_3\cap\{s_{h,[-1]}=1,\ h\in[H]\}\supseteq\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ s_{h,[-1]}=1,\ h\in[H],\ \frac{\bar R_x|\varepsilon_0|+\min_{h\in[H]}\|\xi_h\|_2(H/K)^{1/2}R_x}{(\bar R_x^2+K^{-1}R_x^2)^{1/2}}>z_{1-\alpha/2},\ \max_{h\in[H]}\|\xi_h\|_2^2<a\Big\}$$
$$=\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ s_{h,[-1]}=1,\ h\in[H],\ \min_{h\in[H]}\|\xi_h\|_2>\psi(|\varepsilon_0|),\ \max_{h\in[H]}\|\xi_h\|_2^2<a\Big\}.$$
Since $\psi(z_{1-\alpha/2})=\bar a_{\mathrm{rem}}(\alpha,R_x^2,R_x^2/K)^{1/2}/H^{1/2}<a^{1/2}$, and $\psi$ is a continuous function, there exists $0<t<z_{1-\alpha/2}$ such that $\psi(t)<a^{1/2}$. Therefore,
$$\mathcal E_3\cap\{s_{h,[-1]}=1,\ h\in[H]\}\supseteq\Big\{|\varepsilon_0|=t,\ s_{h,[-1]}=1,\ h\in[H],\ \min_{h\in[H]}\|\xi_h\|_2>\psi(t),\ \max_{h\in[H]}\|\xi_h\|_2^2<a\Big\}.$$
Obviously, $\mathcal E_3$ is nonempty. The conclusion follows.

E.3 Proof of Theorem S2 (ii)

Proof of Theorem S2 (ii). The proof of the "if" part: Let $\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t)=\{\max_{h\in[H]}\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\le z_{1-\alpha_t/2}\}$. Recall the definition of $\mathcal E_2$ in the proof of Lemma S19. We have
$$P_\infty(p^h_L\le\alpha\mid\mathcal A_{\mathrm{ss\text{-}rep}}(\alpha_t))-\alpha\le P(\mathcal E_2\mid\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t)).$$
We next prove
$$\mathcal E_2\subset\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_h\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\ge\frac{c_{\mathrm{rep}}(R_x^2)}{H^{1/2}}g(\alpha)\Big\}.\tag{22}$$
By (22), if $z_{1-\alpha_t/2}\le\frac{c_{\mathrm{rep}}(R_x^2)}{H^{1/2}}g(\alpha)$, i.e., $\alpha_t\ge g^{-1}\big(\frac{c_{\mathrm{rep}}(R_x^2)}{H^{1/2}}g(\alpha)\big)$, we have $P(\mathcal E_2\mid\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t))=0$. As a consequence, we will have $P_\infty(p^h_L\le\alpha\mid\mathcal A_{\mathrm{ss\text{-}rep}}(\alpha_t))\le\alpha$. Using that
$$|\tilde N_{L,\mathcal K}|=\frac{\big|\bar R_xV_{\tau\tau}^{1/2}\varepsilon_0+\sum_{h=1}^H\pi_h^{1/2}\varepsilon_{h,x}^\top(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})\big|}{(V_{\tau\tau}\bar R_x^2+V_{\tau\tau}\tilde R_{\mathcal K}^2)^{1/2}}\le\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\sum_h\pi_h^{1/2}\|\sigma(V_{h,xx})(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})\|_1\max_h\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty}{V_{\tau\tau}^{1/2}(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}},$$
we have
$$\mathcal E_2\subseteq\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{\mathcal K\subsetneq[K]}\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\max_h\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\sum_h\pi_h^{1/2}\|\sigma(V_{h,xx})(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})\|_1}{V_{\tau\tau}^{1/2}(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}}\ge z_{1-\alpha/2}\Big\}$$
$$\subseteq\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{\mathcal K\subsetneq[K]}\frac{\bar R_xV_{\tau\tau}^{1/2}z_{1-\alpha/2}+\max_h\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\sum_h\pi_h^{1/2}\|\sigma(V_{h,xx})(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})\|_1}{V_{\tau\tau}^{1/2}(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}}\ge z_{1-\alpha/2}\Big\}$$
$$=\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_h\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\ge z_{1-\alpha/2}\min_{\mathcal K\subsetneq[K]}\frac{V_{\tau\tau}^{1/2}\{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}-\bar R_x\}}{\sum_h\pi_h^{1/2}\|\sigma(V_{h,xx})(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})\|_1}\Big\}.$$
Next, we calculate the term $\sum_h\pi_h^{1/2}\|\sigma(V_{h,xx})(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})\|_1$. If Condition 2 holds for every stratum, we have
$$\beta_{h,L,x}=V_{h,xx}^{-1}V_{h,x\tau}=\Big(\frac{V_{h,k\tau}}{V_{h,kk}}\Big)_{k\in[K]}.$$
By the definition of $R^2_{h,k}$, we have $V_{h,\tau\tau}R^2_{h,k}=V_{h,kk}^{-1}V_{h,k\tau}^2$. Therefore, we have
$$\sigma(V_{h,xx})\beta_{h,L,x}=\Big(\frac{V_{h,k\tau}V_{h,kk}^{1/2}}{V_{h,kk}}\Big)_{k\in[K]}=V_{h,\tau\tau}^{1/2}R_{h,1}\big(\mathrm{sign}(V_{h,k\tau})\big)_{k\in[K]}.$$
Similarly, we have $\sigma(V_{h,xx})\tilde\beta_{h,L,\mathcal K}=V_{h,\tau\tau}^{1/2}R_{h,1}\big(\mathrm{sign}(V_{h,k\tau})I(k\in\mathcal K)\big)_{k\in[K]}$. Thus
$$\sigma(V_{h,xx})(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})=V_{h,\tau\tau}^{1/2}R_{h,1}\big(\mathrm{sign}(V_{h,k\tau})I(k\notin\mathcal K)\big)_{k\in[K]}.\tag{23}$$
On the other hand, since Condition 2 holds for every stratum, we have
$$R^2_{h,x}=KR^2_{h,1},\qquad R^2_{h,\mathcal K}=|\mathcal K|R^2_{h,1},\qquad \tilde R^2_{h,\mathcal K}=(K-|\mathcal K|)R^2_{h,1}.$$
Therefore, we have
$$\sum_h\pi_h^{1/2}\|\sigma(V_{h,xx})(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})\|_1=\sum_h\pi_h^{1/2}(K-|\mathcal K|)V_{h,\tau\tau}^{1/2}R_{h,1}=(K-|\mathcal K|)^{1/2}\sum_h\pi_h^{1/2}V_{h,\tau\tau}^{1/2}\tilde R_{h,\mathcal K}$$
$$\le(K-|\mathcal K|)^{1/2}H^{1/2}\Big(\sum_h\pi_hV_{h,\tau\tau}\tilde R^2_{h,\mathcal K}\Big)^{1/2}=(K-|\mathcal K|)^{1/2}H^{1/2}V_{\tau\tau}^{1/2}\tilde R_{\mathcal K}.$$
It follows that
$$\min_{\mathcal K\subsetneq[K]}\frac{V_{\tau\tau}^{1/2}\{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}-\bar R_x\}}{\sum_h\pi_h^{1/2}\|\sigma(V_{h,xx})(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})\|_1}\ge\min_{\mathcal K\subsetneq[K]}\frac{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}-\bar R_x}{\sqrt{(K-|\mathcal K|)H}\,\tilde R_{\mathcal K}}=\min_{\mathcal K\subsetneq[K]}\frac{(KH)^{-1/2}R_x}{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}+\bar R_x}=\frac{(KH)^{-1/2}R_x}{1+\bar R_x},$$
where the second to last equality holds because $(K-|\mathcal K|)^{-1}\tilde R^2_{\mathcal K}=K^{-1}R_x^2$ and the last equality holds because $\min_{\mathcal K\subsetneq[K]}\tilde R^2_{\mathcal K}=\tilde R^2_\varnothing=R_x^2$. Thus,
$$\mathcal E_2\subset\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_h\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\ge\frac{z_{1-\alpha/2}(HK)^{-1/2}R_x}{1+\bar R_x}\Big\}=\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_h\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\ge\frac{c_{\mathrm{rep}}(R_x^2)}{H^{1/2}}g(\alpha)\Big\}.$$
We complete the proof of the "if" part.

The proof of the "only if" part: To prove the "only if" part, we show that if $\alpha_t<g^{-1}\big(H^{-1/2}c_{\mathrm{rep}}(R_x^2)g(\alpha)\big)$, there exists $(Y_i(1),Y_i(0))_{i=1}^n\in\mathcal M_{\mathrm{ss}}(R_x^2)$ such that $P_\infty(p^h_L\le\alpha\mid\mathcal A_{\mathrm{ss\text{-}rep}}(\alpha_t))>0$. Consider the same construction as in the proof of the "only if" part of Theorem S2 (i), i.e., (i) $Y_i(1)=Y_i(0)$ for $i\in[n]$ such that $\tilde N_{L,\mathcal K}=N_{L,\mathcal K}$; (ii) for $h\in[H]$, $V_{h,xx}=I_K$, $\pi_h^{1/2}V_{h,x\tau}=\mathbf 1_K$ and $V_{h,\tau\tau}=(\pi_hR_x^2)^{-1}K$. Since
$$P_\infty(p^h_L\le\alpha\mid\mathcal A_{\mathrm{ss\text{-}rep}}(\alpha_t))-\alpha=P(\mathcal E_2\mid\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t)),$$
it remains to show that if $\alpha_t<g^{-1}\big(H^{-1/2}c_{\mathrm{rep}}(R_x^2)g(\alpha)\big)$, then $P(\mathcal E_2\mid\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t))>0$. Since $Y_i(1)=Y_i(0)$ for $i\in[n]$, we have $N_{L,\mathcal K}=\tilde N_{L,\mathcal K}$. By (23), we have
$$N_{L,\mathcal K}=\frac{\bar R_xV_{\tau\tau}^{1/2}\varepsilon_0+\sum_{h=1}^H\pi_h^{1/2}(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})^\top\varepsilon_{h,x}}{V_{\tau\tau}^{1/2}(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}}=\frac{\bar R_xV_{\tau\tau}^{1/2}\varepsilon_0+\sum_{h=1}^H\pi_h^{1/2}\sum_{k\notin\mathcal K}V_{h,\tau\tau}^{1/2}R_{h,1}\,\mathrm{sign}(V_{h,k\tau})\,\varepsilon_{h,k}/V_{h,kk}^{1/2}}{V_{\tau\tau}^{1/2}(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}}.$$
Letting $s_{h,k}=\mathrm{sign}(\varepsilon_0)\,\mathrm{sign}(\varepsilon_{h,k})\,\mathrm{sign}(V_{h,k\tau})$, we have
$$|N_{L,\mathcal K}|=\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\sum_{h=1}^H\pi_h^{1/2}\sum_{k\notin\mathcal K}V_{h,\tau\tau}^{1/2}R_{h,1}s_{h,k}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|}{V_{\tau\tau}^{1/2}(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}}.$$
Since the strata are independent and the $V_{h,xx}$ are diagonal, $\varepsilon_0$ and $\varepsilon_{h,k}$, $h\in[H]$, $k\in[K]$, are independent. Therefore, the $s_{h,k}$ are independent and take values in $\{-1,1\}$ with equal probability. Under $\{s_{h,k}=1,\ h\in[H],\ k\in[K]\}$, we have
$$\max_{\mathcal K\subseteq[K]}|N_{L,\mathcal K}|\ge|N_{L,\varnothing}|=\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\sum_{h=1}^H\pi_h^{1/2}\sum_{k=1}^KV_{h,\tau\tau}^{1/2}R_{h,1}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|}{V_{\tau\tau}^{1/2}}\ge\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\min_{h,k}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|\sum_{h=1}^H\pi_h^{1/2}\sum_{k=1}^KV_{h,\tau\tau}^{1/2}R_{h,1}}{V_{\tau\tau}^{1/2}}$$
$$=\bar R_x|\varepsilon_0|+\min_{h,k}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|\,R_x(HK)^{1/2},$$
where the last equality holds because, by our construction, $\sum_{h=1}^H\pi_h^{1/2}\sum_{k=1}^KV_{h,\tau\tau}^{1/2}R_{h,1}=\sum_{h=1}^H\pi_h^{1/2}K^{1/2}V_{h,\tau\tau}^{1/2}R_{h,x}=V_{\tau\tau}^{1/2}R_x(HK)^{1/2}$. Therefore, we have
$$\mathcal E_2\cap\{s_{h,k}=1,\ h\in[H],\ k\in[K]\}\supseteq\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ s_{h,k}=1,\ h\in[H],\ k\in[K],\ \bar R_x|\varepsilon_0|+\min_{h,k}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|\,R_x(HK)^{1/2}\ge z_{1-\alpha/2}\Big\}$$
$$=\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ s_{h,k}=1,\ h\in[H],\ k\in[K],\ \min_{h,k}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|\ge\frac{z_{1-\alpha/2}-\bar R_x|\varepsilon_0|}{R_x(HK)^{1/2}}\Big\}=:\mathcal E_3.$$
Let
$$\psi(|\varepsilon_0|)=\frac{z_{1-\alpha/2}-\bar R_x|\varepsilon_0|}{R_x(HK)^{1/2}},\qquad \mathcal E_4=\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \min_{h,k}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|\ge\psi(|\varepsilon_0|)\Big\},$$
with
$$\mathcal E_4\cap\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t)=\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \min_{h,k}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|\ge\psi(|\varepsilon_0|),\ \max_{h,k}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|\le z_{1-\alpha_t/2}\Big\}.$$
Since $\alpha_t<g^{-1}\big(H^{-1/2}c_{\mathrm{rep}}(R_x^2)g(\alpha)\big)$, we have
$$z_{1-\alpha_t/2}>\frac{z_{1-\alpha/2}(1-\bar R_x)}{R_x(HK)^{1/2}}=\psi(z_{1-\alpha/2}).$$
Since $\psi(x)$ is a continuous function of $x$, there exists $0<x_0<z_{1-\alpha/2}$ such that $\psi(x_0)<z_{1-\alpha_t/2}$. Therefore, we have
$$\mathcal E_4\cap\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t)\supseteq\Big\{|\varepsilon_0|=x_0,\ \min_{h,k}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|\ge\psi(x_0),\ \max_{h,k}|\varepsilon_{h,k}/V_{h,kk}^{1/2}|\le z_{1-\alpha_t/2}\Big\},$$
which is nonempty. It follows that $P(\mathcal E_4\cap\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t))>0$. Noticing that $s_{h,k}$, $|\varepsilon_0|$ and $|\varepsilon_{h,k}|$ are independent and $\mathcal E_3=\mathcal E_4\cap\{s_{h,k}=1,\ h\in[H],\ k\in[K]\}$, it follows that
$$P(\mathcal E_2\cap\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t))\ge P(\mathcal E_3\cap\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t))=P(\mathcal E_4\cap\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t))\,P(s_{h,k}=1,\ h\in[H],\ k\in[K])>0.$$
Therefore $P(\mathcal E_2\mid\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t))>0$. The conclusion follows.

E.4 Proof of Theorem S3

Proof of Theorem S3. Proof of Theorem S3 (i) and (ii): By Lemma S16, we have
$$P_\infty(p^h_L\le\alpha\mid\mathcal A_{\mathrm{ss\text{-}rem}}(a))=P(\mathcal E\mid\mathcal A^\infty_{\mathrm{ss\text{-}rem}}(a)),\qquad P_\infty(p^h_L\le\alpha\mid\mathcal A_{\mathrm{ss\text{-}rep}}(\alpha_t))=P(\mathcal E\mid\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t)),$$
where $\mathcal E=\{\max_{\mathcal K\subseteq[K]}|N_{L,\mathcal K}|\ge z_{1-\alpha/2}\}$. We have shown in the proof of Lemma S19 that
$$|N_{L,\mathcal K}|\le|\tilde N_{L,\mathcal K}|\le\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\sum_{h=1}^H\pi_h^{1/2}\|\xi_h\|_2\tilde R_{h,\mathcal K}V_{h,\tau\tau}^{1/2}}{(\bar R_x^2+\tilde R_{\mathcal K}^2)^{1/2}V_{\tau\tau}^{1/2}}.$$
Noticing that $V_{\tau\tau}\bar R_x^2+V_{\tau\tau}\tilde R_{\mathcal K}^2=V_{\tau\tau}\bar R_x^2+\sum_{h=1}^H\pi_h\tilde R^2_{h,\mathcal K}V_{h,\tau\tau}$ and by the Cauchy–Schwarz inequality, we have
$$\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\sum_{h=1}^H\pi_h^{1/2}\|\xi_h\|_2\tilde R_{h,\mathcal K}V_{h,\tau\tau}^{1/2}}{(V_{\tau\tau}\bar R_x^2+V_{\tau\tau}\tilde R_{\mathcal K}^2)^{1/2}}\le\Big(\varepsilon_0^2+\sum_h\|\xi_h\|_2^2\Big)^{1/2}.$$
It follows that
$$\mathcal E\subseteq\Big\{\Big(\varepsilon_0^2+\sum_h\|\xi_h\|_2^2\Big)^{1/2}\ge z_{1-\alpha/2}\Big\}.\tag{24}$$
Combining this with $\mathcal A^\infty_{\mathrm{ss\text{-}rem}}(a)=\{\max_h\|\xi_h\|_2^2\le a\}$, we prove the first bound.
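The Cauchy–Schwarz step used for (24) can be checked numerically. The sketch below (our own illustration) draws arbitrary positive weights, standing in for $\bar R_xV_{\tau\tau}^{1/2}$ and $\pi_h^{1/2}\tilde R_{h,\mathcal K}V_{h,\tau\tau}^{1/2}$, and verifies that the weighted numerator divided by the root of the summed squared weights never exceeds the Euclidean norm of $(|\varepsilon_0|,\|\xi_1\|_2,\dots,\|\xi_H\|_2)$:

```python
import numpy as np

# Monte Carlo check of the Cauchy-Schwarz bound behind (24); the draws below
# are arbitrary positive values, only the inequality itself is being tested.
rng = np.random.default_rng(0)
for _ in range(1000):
    H = int(rng.integers(1, 6))
    w0 = rng.uniform(0.1, 2.0)               # stands in for Rbar_x * V_tt^{1/2}
    w = rng.uniform(0.1, 2.0, size=H)        # stands in for pi_h^{1/2} tilde_R_{h,K} V_{h,tt}^{1/2}
    e0 = abs(rng.normal())
    xi = np.abs(rng.normal(size=H))          # stands in for ||xi_h||_2
    lhs = (w0 * e0 + w @ xi) / np.sqrt(w0**2 + w @ w)
    rhs = np.sqrt(e0**2 + xi @ xi)
    assert lhs <= rhs + 1e-12
print("Cauchy-Schwarz bound holds on all draws")
```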
For the second bound, we see that $\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t)=\{\max_{h\in[H]}\|\sigma(V_{h,xx})^{-1}V_{h,xx}^{1/2}\xi_h\|_\infty\le z_{1-\alpha_t/2}\}$. Let $\xi'_h=D(V_{h,xx})^{-1/2}\sigma(V_{h,xx})^{-1}V_{h,xx}^{1/2}\xi_h$. Then,
$$\mathcal A^\infty_{\mathrm{ss\text{-}rep}}(\alpha_t)=\Big\{\max_{h\in[H]}\|D(V_{h,xx})^{1/2}\xi'_h\|_\infty\le z_{1-\alpha_t/2}\Big\}.\tag{25}$$
Noticing that $V_{h,xx}^{1/2}\sigma(V_{h,xx})^{-1}D(V_{h,xx})^{-1}\sigma(V_{h,xx})^{-1}V_{h,xx}^{1/2}=I_K$, the $\xi'_h$, $h\in[H]$, are also standard Gaussian random vectors and $\|\xi'_h\|_2^2=\|\xi_h\|_2^2$. Therefore, by (24) and (25), we have
$$P_\infty(p^h_L\le\alpha\mid\mathcal A_{\mathrm{ss\text{-}rep}}(\alpha_t))\le P\bigg(\Big(\varepsilon_0^2+\sum_{h=1}^H\|\xi'_h\|_2^2\Big)^{1/2}\ge z_{1-\alpha/2}\ \bigg|\ \max_{h\in[H]}\|D(V_{h,xx})^{1/2}\xi'_h\|_\infty\le z_{1-\alpha_t/2}\bigg).$$
Thus, we prove the second bound.

Proof of Theorem S3 (iii): Let $\mathcal A^\infty_{\mathrm{fe\text{-}rep}}(\alpha_t)=\{\|\sigma(V_{\omega,xx})^{-1}\varepsilon_{\omega,x}\|_\infty\le z_{1-\alpha_t/2}\}$. Note that by Lemma S16, we have
$$P_\infty(p^h_{\mathrm{fe}}\le\alpha\mid\mathcal A_{\mathrm{fe\text{-}rep}}(\alpha_t))=P\Big(\max_{\mathcal K\subseteq[K]}|N_{\mathrm{fe},\mathcal K}|\ge z_{1-\alpha/2}\ \Big|\ \mathcal A^\infty_{\mathrm{fe\text{-}rep}}(\alpha_t)\Big),\qquad\text{where}\qquad N_{\mathrm{fe},\mathcal K}=\frac{\varepsilon_{\omega,\tau}-\beta_{\mathrm{fe},\mathcal K}^\top\varepsilon_{\omega,\mathcal K}}{n^{1/2}\,\widetilde{\mathrm{se}}_{\mathrm{fe},\mathcal K}}.$$
We define
$$\tilde N_{\mathrm{fe},\mathcal K}=\frac{\varepsilon_{\omega,\tau}-\beta_{\mathrm{fe},\mathcal K}^\top\varepsilon_{\omega,\mathcal K}}{\mathrm{var}(\varepsilon_{\omega,\tau}-\beta_{\mathrm{fe},\mathcal K}^\top\varepsilon_{\omega,\mathcal K})^{1/2}}.$$
By Lemma S11, $n\,\widetilde{\mathrm{se}}^2_{\mathrm{fe}}\ge\mathrm{var}(\varepsilon_{\omega,\tau}-\beta_{\mathrm{fe},\mathcal K}^\top\varepsilon_{\omega,\mathcal K})$, and therefore $|\tilde N_{\mathrm{fe},\mathcal K}|\ge|N_{\mathrm{fe},\mathcal K}|$. Therefore, we have
$$P_\infty(p^h_{\mathrm{fe}}\le\alpha\mid\mathcal A_{\mathrm{fe\text{-}rep}}(\alpha_t))\le P\Big(\max_{\mathcal K\subseteq[K]}|\tilde N_{\mathrm{fe},\mathcal K}|\ge z_{1-\alpha/2}\ \Big|\ \mathcal A^\infty_{\mathrm{fe\text{-}rep}}(\alpha_t)\Big).$$
Let $\varepsilon_0=(\varepsilon_{\omega,\tau}-\beta_{\omega,x}^\top\varepsilon_{\omega,x})/\mathrm{var}(\varepsilon_{\omega,\tau}-\beta_{\omega,x}^\top\varepsilon_{\omega,x})^{1/2}$ and $\xi_0=V_{\omega,xx}^{-1/2}\varepsilon_{\omega,x}$. Define $\tilde\beta_{\mathrm{fe},\mathcal K}$ such that $(\tilde\beta_{\mathrm{fe},\mathcal K})_{\mathcal K}=\beta_{\mathrm{fe},\mathcal K}$ and all other entries are 0. We have
$$|\tilde N_{\mathrm{fe},\mathcal K}|=\frac{\big|\varepsilon_{\omega,\tau}-\beta_{\omega,x}^\top\varepsilon_{\omega,x}+\beta_{\omega,x}^\top\varepsilon_{\omega,x}-\beta_{\mathrm{fe},\mathcal K}^\top\varepsilon_{\omega,\mathcal K}\big|}{\mathrm{var}(\varepsilon_{\omega,\tau}-\beta_{\mathrm{fe},\mathcal K}^\top\varepsilon_{\omega,\mathcal K})^{1/2}}\le\frac{\mathrm{var}(\varepsilon_{\omega,\tau}-\beta_{\omega,x}^\top\varepsilon_{\omega,x})^{1/2}|\varepsilon_0|+\|(\beta_{\omega,x}-\tilde\beta_{\mathrm{fe},\mathcal K})^\top V_{\omega,xx}^{1/2}\|_2\|\xi_0\|_2}{\mathrm{var}(\varepsilon_{\omega,\tau}-\beta_{\mathrm{fe},\mathcal K}^\top\varepsilon_{\omega,\mathcal K})^{1/2}}.$$
Noticing that
$$\mathrm{var}(\varepsilon_{\omega,\tau}-\beta_{\mathrm{fe},\mathcal K}^\top\varepsilon_{\omega,\mathcal K})=\mathrm{var}(\varepsilon_{\omega,\tau}-\beta_{\omega,x}^\top\varepsilon_{\omega,x})+\mathrm{var}(\beta_{\omega,x}^\top\varepsilon_{\omega,x}-\beta_{\mathrm{fe},\mathcal K}^\top\varepsilon_{\omega,\mathcal K})=\mathrm{var}(\varepsilon_{\omega,\tau}-\beta_{\omega,x}^\top\varepsilon_{\omega,x})+(\tilde\beta_{\mathrm{fe},\mathcal K}-\beta_{\omega,x})^\top V_{\omega,xx}(\tilde\beta_{\mathrm{fe},\mathcal K}-\beta_{\omega,x}),$$
and using the Cauchy–Schwarz inequality, we have $|\tilde N_{\mathrm{fe},\mathcal K}|\le(\varepsilon_0^2+\|\xi_0\|_2^2)^{1/2}$. Since $\mathcal A^\infty_{\mathrm{fe\text{-}rep}}(\alpha_t)=\{\|\sigma(V_{\omega,xx})^{-1}V_{\omega,xx}^{1/2}\xi_0\|_\infty\le z_{1-\alpha_t/2}\}$, by letting $\xi'_0=D(V_{\omega,xx})^{-1/2}\sigma(V_{\omega,xx})^{-1}V_{\omega,xx}^{1/2}\xi_0$, we have
$$P_\infty(p^h_{\mathrm{fe}}\le\alpha\mid\mathcal A_{\mathrm{fe\text{-}rep}}(\alpha_t))\le P\big((\varepsilon_0^2+\|\xi_0\|_2^2)^{1/2}\ge z_{1-\alpha/2}\mid\mathcal A^\infty_{\mathrm{fe\text{-}rep}}(\alpha_t)\big)=P\big((\varepsilon_0^2+\|\xi'_0\|_2^2)^{1/2}\ge z_{1-\alpha/2}\mid\mathcal A^\infty_{\mathrm{fe\text{-}rep}}(\alpha_t)\big)=P\big((\varepsilon_0^2+\|\xi'_0\|_2^2)^{1/2}\ge z_{1-\alpha/2}\mid\|D(V_{\omega,xx})^{1/2}\xi'_0\|_\infty\le z_{1-\alpha_t/2}\big).$$
Since $\xi'_0$ is a standard Gaussian random vector independent of $\varepsilon_0$, the conclusion follows.
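The whitening step used in parts (ii) and (iii) can also be verified numerically. Assuming, consistently with the identity $V^{1/2}\sigma(V)^{-1}D(V)^{-1}\sigma(V)^{-1}V^{1/2}=I_K$ stated above, that $\sigma(V)$ denotes the diagonal matrix of standard deviations and $D(V)$ the correlation matrix of $V$ (our reading of the notation), the map $\xi\mapsto D(V)^{-1/2}\sigma(V)^{-1}V^{1/2}\xi$ is orthogonal, hence it preserves Euclidean norms and standard Gaussianity:

```python
import numpy as np

def sqrtm_psd(A):
    """Symmetric square root of a symmetric PSD matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
V = B @ B.T + 0.5 * np.eye(4)                # a generic positive definite covariance
sigma = np.diag(np.sqrt(np.diag(V)))          # sigma(V): diagonal standard deviations (assumption)
D = np.linalg.inv(sigma) @ V @ np.linalg.inv(sigma)  # D(V): correlation matrix (assumption)

M = np.linalg.inv(sqrtm_psd(D)) @ np.linalg.inv(sigma) @ sqrtm_psd(V)
assert np.allclose(M.T @ M, np.eye(4))        # V^{1/2} s^{-1} D^{-1} s^{-1} V^{1/2} = I_K
xi = rng.normal(size=4)
assert np.isclose(np.linalg.norm(M @ xi), np.linalg.norm(xi))  # norms are preserved
```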
https://arxiv.org/abs/2505.01137v1
arXiv:2505.01297v1 [math.ST] 2 May 2025

Model-free identification in ill-posed regression

Gianluca Finocchio* and Tatyana Krivobokova*

May 15, 2025

Abstract

The problem of parsimonious parameter identification in possibly high-dimensional linear regression with highly correlated features is addressed. This problem is formalized as the estimation of the best, in a certain sense, linear combinations of the features that are relevant to the response variable. Importantly, the dependence between the features and the response is allowed to be arbitrary. Necessary and sufficient conditions for such parsimonious identification, referred to as statistical interpretability, are established for a broad class of linear dimensionality reduction algorithms. Sharp bounds on their estimation errors, with high probability, are derived. To our knowledge, this is the first formal framework that enables the definition and assessment of the interpretability of a broad class of algorithms. The results are specifically applied to methods based on sparse regression, unsupervised projection and sufficient reduction. The implications of employing such methods for prediction problems are discussed in the context of the prolific literature on overparametrized methods in the regime of benign overfitting.

Keywords: linear dimension reduction, parsimonious identification, perturbation bounds, statistical interpretability

MSC: Primary: 65F22, 65F10; Secondary: 62B05, 65F20.

1 Introduction

Experimental scientists embark on ambitious collective efforts to gather and investigate complex datasets via high-throughput platforms. However, many modern applications impose strict interpretability requirements, meaning that their primary goal is to provide insights on the dependence between a design matrix of features $X$ and a response vector $y$.
*Department of Statistics and Operations Research, Universität Wien, Oskar-Morgenstern-Platz 1, 1090 Wien, Austria

In such situations, practitioners often find it desirable to sacrifice optimality for the sake of simplicity and estimate a vector of effects $\hat\beta$ such that $X\hat\beta$ is as close as possible to $y$ in the least-squares sense, under some regularization constraint. Since the design matrix is almost always ill-posed, practitioners forcibly reduce the dimensionality of the problem by guessing or estimating features of interest.

Noteworthy examples of such situations are genome-wide association studies (GWAS), for which Uffelmann et al. [2021] provide an extensive review. The authors claim that the interpretation of GWAS is one of the biggest challenges in modern biology. The authors also detail how hundreds of thousands of genetic variants are tested in order to find those statistically associated with diseases such as coronary artery disease, type 2 diabetes and breast cancer. Insights gained from GWAS may be crucial in making clinical risk predictions, informing drug development and relating risk factors to health outcomes. Among the many tools in the established methodology, one finds linear regressions where $y$ is a vector of phenotype values (disease traits) and $X$ a matrix of genotype values. The authors identify two fundamental difficulties hindering direct biological insights: i) due to physical reasons, different genetic variants can be highly correlated, making the problem ill-posed; ii) most disease traits are correlated with many relevant genetic variants all having very small individual effect.

Motivated by the above applications, we introduce a model-free framework to formalize the identifiability problem for ill-posed regression. We work with $(X,y)\in\mathbb R^{n\times p}\times\mathbb R^n$ a sample of $n\ge1$ i.i.d.
realizations $(x_i,y_i)$ of the population random pair $(x,y)\in\mathbb R^p\times\mathbb R$. The dependence between the features $x$ and the response $y$ is arbitrary (not necessarily linear), whereas the features might be highly correlated and heavy-tailed. We characterize the insightful parameters and solve in its generality the problem of estimating the best parsimonious span of linear combinations of features being relevant for the response. We determine necessary and sufficient conditions leading to sharp finite-sample guarantees for a broad class of linear dimensionality reduction algorithms. We provide below an overview of our contributions.

As long as the population pair $(x,y)$ is centered and admits positive-semidefinite covariance matrix $\Sigma_x=E(xx^t)$ and covariance vector $\sigma_{x,y}=E(xy)$, the features belong almost surely to the range $\mathcal R(\Sigma_x)$, which is a linear subspace of $\mathbb R^p$ of dimension $r_x=\mathrm{rk}(\Sigma_x)$ for some $1\le r_x\le p$. We find it crucial to characterize the dependence between the features and the response in terms of two complementary linear subspaces $\mathcal B_y,\mathcal B_y^\perp\subseteq\mathcal R(\Sigma_x)$, where the projection of the features along $\mathcal B_y^\perp$ provides no information on the response. We make this precise by showing that the latter is uniquely defined as the largest linear subspace of the range $\mathcal R(\Sigma_x)$ such that the orthogonal projection $x_{y^\perp}=U_{y^\perp}x$ of the features onto $\mathcal B_y^\perp$ is uncorrelated with both the response $y$ and the orthogonal projection $x_y=U_yx$ of the features onto $\mathcal B_y$. We call $\mathcal B_y$ the relevant subspace and $x_y$ the relevant features. We show that, by construction, the relevant subspace preserves the population least-squares problem, in the sense that the sets of solutions $\arg\min_{\beta\in\mathcal R(\Sigma_x)}E(y-x^t\beta)^2=\arg\min_{\beta\in\mathcal B_y}E(y-x^t\beta)^2$ are the same. Although the relevant subspace $\mathcal B_y$ preserves the population least-squares problem, the condition number of the covariance matrix $\Sigma_{x_y}$ of the relevant features $x_y$ might still be large, making the problem still ill-posed.
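The relevant/irrelevant split can be illustrated with a small constructed example (our own toy moments, not from the paper): take $p=3$ features where $x_3$ is uncorrelated with $y$ and with $(x_1,x_2)$, so that $\mathcal B_y^\perp=\mathrm{span}\{e_3\}$; the population least-squares solution is then unchanged when the irrelevant direction is projected out:

```python
import numpy as np

# Toy population moments: x3 is uncorrelated with (x1, x2) and with y,
# so B_y = span{e1, e2} and B_y_perp = span{e3} (illustrative values).
Sigma_x = np.array([[2.0, 0.8, 0.0],
                    [0.8, 1.0, 0.0],
                    [0.0, 0.0, 3.0]])
sigma_xy = np.array([1.0, 0.4, 0.0])

beta_full = np.linalg.pinv(Sigma_x) @ sigma_xy           # LS(x, y)
Uy = np.diag([1.0, 1.0, 0.0])                            # projection onto B_y
beta_rel = np.linalg.pinv(Uy @ Sigma_x @ Uy) @ (Uy @ sigma_xy)  # LS(x_y, y)

assert np.allclose(beta_full, beta_rel)                  # LS(x, y) = LS(x_y, y)
assert np.isclose(beta_full[2], 0.0)                     # no weight on the irrelevant direction
```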
Therefore, we introduce the set of parsimonious representations of the relevant features, which is parametrized by arbitrary linear subspaces $\mathcal B\subseteq\mathcal B_y$. For any of them, let $U_{\mathcal B}\in\mathbb R^{p\times p}$ be the orthogonal projection of $\mathbb R^p$ onto $\mathcal B$, $r_{\mathcal B}=\dim(\mathcal B)$ its dimension and $\beta_{\mathcal B}\in\mathcal B$ the population least-squares solution in regressing $y$ on $x_{\mathcal B}$. In particular, any low-rank parameter $\theta_{\mathcal B}:=(\mathcal B,U_{\mathcal B},r_{\mathcal B},\beta_{\mathcal B})$ combining all the above is uniquely defined and provides the following insights: $\mathcal B$ determines the linear span of the support of $x_{\mathcal B}$; $U_{\mathcal B}$ determines which linear combination of the features $x$ yields the projection $x_{\mathcal B}$; $r_{\mathcal B}$ measures the degrees-of-freedom of the problem; $\beta_{\mathcal B}$ quantifies the linear dependence between the projected features $x_{\mathcal B}$ and the response $y$. Informally, we call the best parsimonious representation of the features the largest linear subspace $\mathcal B\subseteq\mathcal B_y$ for which the low-rank covariance matrix $\Sigma_{x_{\mathcal B}}$ of the low-rank features $x_{\mathcal B}$ is well-posed. Since such a notion relies on heuristic numerical considerations and not on intrinsic statistical properties, we develop our theory for arbitrary low-rank parameters instead of formally defining the best in any sense. Similarly, we call statistically interpretable any procedure for linear dimensionality reduction that can estimate the best parsimonious parameter $\theta_{\mathcal B}=(\mathcal B,U_{\mathcal B},r_{\mathcal B},\beta_{\mathcal B})$ by using no more than $r_{\mathcal B}$ degrees-of-freedom, in a sense that will be clear below. We establish a broad class of population dimensionality reduction algorithms $A(x,y)=A(\Sigma_x,\sigma_{x,y})$ for estimating $\theta_{\mathcal B}$ that only depend on the population moments $\Sigma_x$ and $\sigma_{x,y}$. Since the estimation of the
low-rank parameter $\theta_{\mathcal B}$ should not use more than $r_{\mathcal B}$ degrees-of-freedom, any population algorithm in our class computes a compatible parameter $\theta_A=(\mathcal B_A,U_A,r_{\mathcal B},\beta_A)$ where $\mathcal B_A$ is some linear subspace, $U_A$ its orthogonal projection, $r_{\mathcal B}$ its dimension and $\beta_A$ the population least-squares solution of regressing $y$ on $U_Ax$. We quantify the performance of any population algorithm in terms of the principal angle $\phi_1(\cdot,\cdot)$ between subspaces, the operator norm $\|\cdot\|_{\mathrm{op}}$ for orthogonal projections and the Euclidean norm $\|\cdot\|_2$ for least-squares solutions. In short, the population error is
$$\varepsilon(\theta_A,\theta_{\mathcal B}):=\phi_1(\mathcal B_A,\mathcal B)\vee\|U_A-U_{\mathcal B}\|_{\mathrm{op}}\vee\frac{\|\beta_A-\beta_{\mathcal B}\|_2}{\|\beta_{\mathcal B}\|_2}$$
and should be as small as possible. Since the relevant subspace $\mathcal B_y$ preserves population least-squares, it is natural to require $A(x,y)=A(x_y,y)$, meaning that the population algorithm is adaptive and only depends on the information contained in the relevant subspace. This condition is sufficient for obtaining sharp bounds on the population error and is almost necessary, since we can provide examples where non-adaptive algorithms have population error lower bounded away from zero. We show that all adaptive algorithms have nearly the same population error
$$\varepsilon(\theta_A,\theta_{\mathcal B})\le M_{\mathcal B}\cdot\Big(\frac{\|\Sigma_{x_y}-\Sigma_{x_{\mathcal B}}\|_{\mathrm{op}}}{\|\Sigma_{x_{\mathcal B}}\|_{\mathrm{op}}}\vee\frac{\|\sigma_{x_y,y}-\sigma_{x_{\mathcal B},y}\|_2}{\|\sigma_{x_{\mathcal B},y}\|_2}\Big)$$
for some constant $M_{\mathcal B}\ge1$ proportional to the condition number $\kappa_2(U_{\mathcal B}\Sigma_xU_{\mathcal B})$ of the projected covariance matrix of the features. The latter display holds as long as the perturbation error between the moments of the relevant pair $(x_y,y)$ and the moments of the low-rank pair $(x_{\mathcal B},y)$ is sufficiently small. When $\mathcal B=\mathcal B_y$, the population error for adaptive algorithms is identically zero.

In practice, one implements sample dimensionality reduction algorithms $\hat A(x,y)=A(\hat\Sigma_x,\hat\sigma_{x,y})$ that only depend on the sample moments $\hat\Sigma_x=n^{-1}X^tX$ and $\hat\sigma_{x,y}=n^{-1}X^ty$, together with the compatible parameters $\hat\theta_A=(\hat{\mathcal B}_A,\hat U_A,r_{\mathcal B},\hat\beta_A)$ where $\hat{\mathcal B}_A$ is some linear subspace, $\hat U_A$ its orthogonal projection, $r_{\mathcal B}$ its dimension and $\hat\beta_A$ the sample least-squares solution computed from the projected dataset $(X\hat U_A,y)$.
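The error metric $\varepsilon(\theta_A,\theta_{\mathcal B})$ can be sketched as code. The function below is an illustrative implementation of our own (the name `error_metric` and its inputs are not from the paper), with the principal angle computed from the singular values of $Q_A^tQ_B$ for orthonormal bases $Q_A,Q_B$:

```python
import numpy as np

def error_metric(BA, BB, betaA, betaB):
    """Sketch of eps(theta_A, theta_B): largest principal angle between the two
    subspaces (spanned by the columns of BA and BB), operator-norm gap of the
    orthogonal projections, and relative Euclidean error of the solutions."""
    QA, _ = np.linalg.qr(BA)
    QB, _ = np.linalg.qr(BB)
    s = np.linalg.svd(QA.T @ QB, compute_uv=False)
    phi1 = np.arccos(np.clip(s.min(), -1.0, 1.0))    # largest principal angle
    gap = np.linalg.norm(QA @ QA.T - QB @ QB.T, 2)   # ||U_A - U_B||_op
    rel = np.linalg.norm(betaA - betaB) / np.linalg.norm(betaB)
    return max(phi1, gap, rel)

# Identical inputs give (numerically) zero error; a subspace sharing only one
# direction gives a principal angle of pi/2.
B1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # span{e1, e2}
B2 = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])   # span{e1, e3}
beta = np.array([1.0, 2.0, 0.0])
assert error_metric(B1, B1, beta, beta) < 1e-8
assert error_metric(B2, B1, beta, beta) > 1.0
```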
The sample error
$$\varepsilon(\hat\theta_A,\theta_A):=\phi_1(\hat{\mathcal B}_A,\mathcal B_A)\vee\|\hat U_A-U_A\|_{\mathrm{op}}\vee\frac{\|\hat\beta_A-\beta_A\|_2}{\|\beta_A\|_2}$$
is independent of the low-rank parameter $\theta_{\mathcal B}$. As long as the sample size $n\ge1$ is sufficiently large, we prove that any reduction algorithm $A$ in our class satisfies, with high probability,
$$\varepsilon(\hat\theta_A,\theta_A)\le M_A\cdot\Big(\frac{\|\hat\Sigma_x-\Sigma_x\|_{\mathrm{op}}}{\|\Sigma_x\|_{\mathrm{op}}}\vee\frac{\|\hat\sigma_{x,y}-\sigma_{x,y}\|_2}{\|\sigma_{x,y}\|_2}\Big)$$
for some constant $M_A\ge1$ proportional to the condition number $\kappa_2(U_A\Sigma_xU_A)$ of the projected covariance matrix of the features. This means that finding sharp bounds on the perturbation error between the sample moments $(\hat\Sigma_x,\hat\sigma_{x,y})$ and the population moments $(\Sigma_x,\sigma_{x,y})$ is sufficient to guarantee optimal sample error. In particular, we show that all sample algorithms are consistent with their population counterpart, in the sense that $\hat\theta_A\xrightarrow{P}\theta_A$ when $n\to\infty$. We can assess whether any algorithm $A$ is well-suited for parsimonious identification, and herewith is statistically interpretable, by bounding the estimation error as $\varepsilon(\hat\theta_A,\theta_{\mathcal B})\le\varepsilon(\theta_A,\theta_{\mathcal B})+\varepsilon(\hat\theta_A,\theta_A)$ and recalling that $\varepsilon(\hat\theta_A,\theta_{\mathcal B})=\varepsilon(\theta_A,\theta_{\mathcal B})+o_P(1)$ under minimal assumptions. Since non-adaptive algorithms can have population error $\varepsilon(\theta_A,\theta_{\mathcal B})$ bounded away from zero, we find that only adaptive algorithms can achieve optimal rates.

Our results have implications for the prediction problem, where a newly observed $x_{n+1}$ is available and the goal is to predict an unobserved $y_{n+1}$ following the same population distribution. Since the dependence between the features and the response is arbitrary, the predictor $X\hat\beta_A$ computed from a sample reduction algorithm that only depends on $(X,y)$ is far from optimal.
However, we can still measure the excess risk in terms of the low-rank parameter $\beta_{\mathcal B}$. That is to say, with $R(\hat\beta_A)=E(\{y-x^t\hat\beta_A\}^2\mid X,y)$ the least-squares risk of the linear approximation using $\hat\beta_A$ and $R(\beta_{\mathcal B})$ the oracle risk for the low-rank parameter, we show that the excess risk can be bounded from above as
$$R^{(\mathrm{ex})}(\hat\beta_A)=R(\hat\beta_A)-R(\beta_{\mathcal B})\lesssim\|\hat\beta_A-\beta_{\mathcal B}\|_2^2,$$
and a similar bound from below holds. This means that the excess risk of any sample parameter $\hat\beta_A$ is proportional to the square of the estimation error between $\hat\beta_A$ and the low-rank parameter $\beta_{\mathcal B}$. As mentioned above, under minimal assumptions this is $R^{(\mathrm{ex})}(\hat\beta_A)\lesssim\|\beta_A-\beta_{\mathcal B}\|_2^2+o_P(1)$, so that adaptivity is necessary to recover optimal risk bounds.

1.1 Related Literature

Our framework is the first to provide the tools to assess whether arbitrary algorithms for linear dimensionality reduction are interpretable, specifically, whether they can estimate the best parsimonious projection of the relevant features. Most works in the available literature deal with identification by assuming a well-specified underlying model and only focus on specific regularization algorithms that are tailored for the problem. In contrast, our framework requires the minimal assumptions for which the identifiable parameters exist and reflect the properties of datasets in most applications. Furthermore, our results offer guidance for designing new, interpretable procedures. The most prominent strategies that fall within our framework are sparsity, unsupervised reduction and sufficient reduction.

1.1.1 Sparsity

In this paper, the class of sparse regression (SPR) methods contains all algorithms $A$ that estimate: an active set $J_A\subseteq\{1,\dots,p\}$ of size $r_A$; the linear subspace $\mathcal B_A=\mathrm{span}\{e_j:j\in J_A\}$ spanned by the corresponding vectors of the canonical basis of $\mathbb R^p$; the orthogonal projection $U_A=I_{J_A}$ obtained by putting to zero the diagonal entries of the identity matrix $I_p$ corresponding to $J_A^c$; the population least-squares solution $\beta_A\in\mathcal B_A$.
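A minimal sketch of the SPR parameter (our own illustration; the moments and the active set `JA` below are arbitrary): the projection $U_A=I_{J_A}$ zeroes the coordinates outside the active set, and $\beta_A$ is the least-squares solution on the projected features, so its support is confined to $J_A$:

```python
import numpy as np

p, JA = 4, [0, 2]                                # active set J_A (arbitrary choice)
I_JA = np.zeros((p, p))
I_JA[JA, JA] = 1.0                               # U_A = I_{J_A}: zero diagonal outside J_A

# Illustrative population moments (not from the paper):
Sigma_x = np.array([[2.0, 0.5, 0.3, 0.0],
                    [0.5, 1.0, 0.2, 0.1],
                    [0.3, 0.2, 3.0, 0.4],
                    [0.0, 0.1, 0.4, 0.5]])
sigma_xy = np.array([1.0, 0.5, -1.5, 0.2])

# Population least-squares solution restricted to B_A = span{e_j : j in J_A}:
beta_A = np.linalg.pinv(I_JA @ Sigma_x @ I_JA) @ (I_JA @ sigma_xy)

assert np.allclose(beta_A[[1, 3]], 0)            # support confined to J_A
assert np.allclose((I_JA @ Sigma_x @ I_JA) @ beta_A, I_JA @ sigma_xy)  # restricted normal equations
```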
Sparse methods are built with the belief that the response can be fully explained by the projected features $U_Ax$ obtained by putting to zero all the entries of the features $x$ corresponding to $J_A^c$. Since only the active set $J_A$ matters for identification, we allow for arbitrary model selection rules. This includes all the algorithms reviewed by Freijeiro-González et al. [2021], such as Best Subset Selection by Beale et al. [1967], the LASSO by Tibshirani [1996] and all its variations and alternatives, for example, the Elastic Net by Zou and Hastie [2005], the Adaptive LASSO by Zou [2006], the Group LASSO by Yuan and Lin [2005], the Dantzig Selector by Candes and Tao [2007], the Square-Root LASSO by Belloni et al. [2011] and the SLOPE by Bogdan et al. [2015]. We refer to Table 6 by Freijeiro-González et al. [2021] for a more exhaustive list. The LASSO estimator achieves consistent model selection under the regularity conditions discussed by Zhao and Yu [2006], Bickel et al. [2009] and van de Geer and Bühlmann [2009], as well as oracle estimation rates as shown by Bellec et al. [2018]. However, it has become apparent that these conditions fail dramatically when dealing with ill-posed datasets from genomics, as demonstrated by Wang et al. [2018]. Despite their widespread application, see for example
the work by Afreixo et al. [2024] on Alzheimer's disease, we claim that sparse methods are unable to provide parsimonious representations of the GWAS datasets described by Uffelmann et al. [2021], where the response seems to be correlated with many features all having very small effect.

1.1.2 Unsupervised Reduction

In this paper, the class of unsupervised reduction (UDR) methods contains all algorithms $A$ that estimate: the orthonormal matrix of eigenvectors $V_x=[v_1|\cdots|v_{r_x}]$ of the covariance matrix $\Sigma_x$ corresponding to the positive eigenvalues; independently of the response $y$, any orthonormal matrix $V_A$ of linear combinations of eigenvectors having dimension $r_A=\mathrm{rk}(V_A)$; the linear subspace $\mathcal B_A=\mathcal R(V_A)$ spanned by the columns; the orthogonal projection $U_A=V_AV_A^t$; the population least-squares solution $\beta_A\in\mathcal B_A$. Unsupervised methods are built with the belief that the response can be fully explained by projecting the features $x$ along the important directions of variation determined by $U_A$. This includes Principal Component Regression devised by Hotelling [1933] and all the projection algorithms discussed by Bing et al. [2021] that only use knowledge of the sample covariance matrix. These methods achieve consistent model selection for the latent factor models introduced by Stock and Watson [2002] and Bai and Ng [2002] under the regularity conditions discussed by Fan et al. [2023], which essentially guarantee consistency of the sample eigenvalue ratio method by Lam and Yao [2012] and Ahn and Horenstein [2013]. Under the same conditions, finite-sample guarantees for the prediction risk of such methods have been obtained by Bing et al. [2021], Teresa et al. [2022] and Hucker and Wahl [2023]. Despite their ability to identify well-posed clusters of the features, see the work by Gedon et al.
[2024], the sentiment that these assumptions are too restrictive is quite old, see Cox [1968], Section 3 (ii) (c), and other authors suggest that there is no logical reason for the principal components to contain any information at all on the response.

1.1.3 Sufficient Reduction

In this paper, the class of sufficient reduction (SDR) methods contains all algorithms $A$ that estimate any linear subspace $\mathcal B_A$ such that, with $U_A$ its orthogonal projection, the distributions of $y\mid x$ and $y\mid U_Ax$ are the same. Sufficient methods exploit the dependence between the features and the response and are fully adaptive. This includes Sliced Inverse Regression devised by Li [1991] and Partial Least Squares proposed by Wold [1966]. In well-specified models, the performance of Partial Least Squares as a tool for dimensionality reduction has been investigated by Cook et al. [2013] and Cook and Forzani [2021]. Under the same assumptions, its finite-sample estimation rates have been obtained by Singer et al. [2016], whereas its asymptotic prediction risk has been studied by Cook and Forzani [2019]. Although these methods find the sufficient projection of the features, this might still yield ill-posed estimators.

1.1.4 Hybrid Reduction

It is possible to mix and match principles from the previous categories to create hybrid algorithms $A$ that still fall within our framework. These methods combine more than one type of regularization and are meant to outperform their competitors in well-specified models. A few variants of Principal Components incorporating sparse representations for clustering are AdaCLV by
Marion et al. [2020] and VC PCR by Marion et al. [2024]. A few variants of Partial Least Squares incorporating different penalizations are Sparse PLS by Chun and Keleş [2010] and Regularized PLS by Allen et al. [2012]. Many more are possible and their performance is model-dependent. Our findings suggest that additional layers of regularization are often redundant or counterproductive. If the original algorithm $A$ is adaptive, such as Partial Least Squares, under most modifications the new algorithm $\tilde A$ is no longer adaptive despite having a smaller sample error. This might make the population error arbitrarily high and result in a worse performance overall.

1.1.5 Optimal Prediction

We show that underparametrized methods achieve interpretable identification for ill-posed datasets by sacrificing optimality. Our work is complementary to the prolific literature on optimal prediction, where the primary focus is to produce an estimator $\hat f$ computed from the data $(X,y)$ such that, given a new observation $x_{n+1}$, the predictor $\hat y_{n+1}=\hat f(x_{n+1})$ has optimal least-squares risk for the unobserved response value $y_{n+1}$. For such a problem, model complexity is not necessarily a burden, since overparametrized models in the regime of benign overfitting achieve small prediction error despite interpolating the observed response. Most methods achieving benign overfitting are quite sophisticated and hard to interpret, which makes them unsuitable for identifying any combination of features that might be relevant for the response. To name a few: Belkin et al. [2019] first observed that deep learning methods exhibit the double-descent pattern typical of benign overfitting; Arnould et al. [2023] showed that interpolation can be benign for random forests; Chhor et al. [2024] provided an adaptive kernel estimator that achieves minimax optimality over Hölder classes while naturally interpolating the data; Bartlett et al. [2020], Muthukumar et al.
[2020] and Bartlett and Long [2021] have developed the theoretical framework for linear regression, later adapted to $\ell_2$-penalized regression by Chinot and Lerasle [2020], Lecué and Shang [2022] and Tsigler and Bartlett [2023], or to latent factor regression by Bunea et al. [2022].

1.2 Structure of the Paper

The paper is organized as follows. In Section 2 we formalize our model-free setting and the insightful parameters of interest. In Section 2.1 we investigate population reduction algorithms together with necessary and sufficient conditions for bounding the population error in Theorem 2.4. In Section 2.2 we investigate sample reduction algorithms and obtain sharp bounds for the sample error in Theorem 2.12 and the estimation error in Theorem 2.14. In Section 2.3 we extend our findings to the prediction problem. A brief discussion follows in Section 3. Appendix A is a self-contained section that formalizes the perturbation theory of reduced least-squares problems. All the auxiliary results are given in Appendix B. The proofs for the main sections are given in Appendix C. In Appendix D we provide additional visualizations.

1.3 Notation

We denote by $\mathbf 0_p\in\mathbb R^p$ the zero vector, by $\mathbf 1_p\in\mathbb R^p$ the vector of all ones, and by $I_p\in\mathbb R^{p\times p}$ the identity matrix. For any integers $1\le d\le p$, we denote by $I_{d,p}\in\mathbb R^{p\times p}$ the matrix obtained by setting to zero the diagonal entries of $I_p$ in positions $i=d+1,\dots,p$. For any subset $S\subseteq\{1,\dots,p\}$, we denote by $I_S\in\mathbb R^{p\times p}$ the matrix obtained by setting to zero the diagonal entries of $I_p$ in positions $i\notin S$. We denote by $\mathbb R^{p\times p}_{\succeq0}$ the space of $p$-dimensional square matrices that are symmetric and positive-semidefinite. We denote by $A^\dagger$ the unique generalized inverse of a matrix $A$, with the convention that $A^\dagger=A^{-1}$ when the matrix is invertible. We denote by $\mathcal R(A)$ the range of any matrix $A$ (the span of its columns); this is a linear subspace of dimension equal to the rank $\mathrm{rk}(A)$. We denote by $\mathrm{Tr}(A)$ the trace of any square matrix $A$; this is the sum of its diagonal entries. We denote by $\deg(A)$ the degree of any matrix $A$; this is the number of its unique non-zero eigenvalues. For any symmetric and positive-semidefinite matrix $A$, we denote its condition number as $\kappa_2(A):=\|A\|_{\mathrm{op}}\|A^\dagger\|_{\mathrm{op}}$, where $\|\cdot\|_{\mathrm{op}}$ is the operator norm for matrices. For any symmetric and positive-semidefinite matrix $A$, we denote the matrix-induced norm $\|v\|_A:=\|A^{1/2}v\|_2$, where $A^{1/2}$ is the matrix square root of $A$ and $\|\cdot\|_2$ is the Euclidean norm for vectors. For any two sequences of real numbers $(a_n)_{n\ge1}$, $(b_n)_{n\ge1}$, we write $a_n\lesssim b_n$ when there exists a constant $C>0$ such that $a_n\le Cb_n$ for all $n\ge1$. We write $a_n\gtrsim b_n$ when there exists a constant $C>0$ such that $a_n\ge Cb_n$ for all $n\ge1$. We write $a_n\asymp b_n$ when both $a_n\lesssim b_n$ and $a_n\gtrsim b_n$ hold.

2 Misspecified Ill-Posed Regression

We begin by stating our main assumption and establishing the necessary notation.

Assumption 2.1 (Model-Free, 2nd moments). The features $x\in\mathbb R^p$ are a random vector and the response $y\in\mathbb R$ is a random variable; they are both centered and have finite second moments $\Sigma_x=E(xx^t)\in\mathbb R^{p\times p}_{\succeq0}$, $\sigma_{x,y}=E(xy)\in\mathbb R^p\setminus\{\mathbf 0_p\}$ and $\sigma_y^2=E(y^2)>0$. The features are possibly degenerate with $1\le r_x=\mathrm{rk}(\Sigma_x)\le p$.
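The notation of Section 1.3 can be made concrete with a small sketch (our own illustration, with arbitrary values): $I_S$ is an orthogonal projection, $\kappa_2$ uses the pseudoinverse so that zero eigenvalues are ignored, and $\|v\|_A$ is the norm induced by the matrix square root:

```python
import numpy as np

p, S = 4, [0, 2]
I_S = np.zeros((p, p))
I_S[S, S] = 1.0                                   # zero out diagonal entries outside S
assert np.allclose(I_S @ I_S, I_S)                # idempotent: an orthogonal projection

A = np.diag([4.0, 1.0, 0.0, 2.0])                 # symmetric PSD, rank-deficient
kappa2 = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.pinv(A), 2)
assert np.isclose(kappa2, 4.0)                    # ||A||_op / smallest nonzero eigenvalue

v = np.array([1.0, 2.0, 3.0, 4.0])
A_half = np.diag([2.0, 1.0, 0.0, np.sqrt(2.0)])   # A^{1/2} for this diagonal A
assert np.isclose(np.linalg.norm(A_half @ v), np.sqrt(v @ A @ v))  # ||v||_A
```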
We show in Lemma B.2 that, regardless of the true dependence between the features and the response, the population least-squares problem

LS(x, y) := arg min β∈Rp E(xtβ − y)2 = arg min β∈Rp { βtΣxβ − 2βtσx,y + σ2y } =: LS(Σx, σx,y) (1)

admits the minimum-L2-norm solution βLS := Σ†x σx,y ∈ Rp and only depends on the second moments of the common joint distribution P(x,y). Without additional assumptions, our first goal is to characterize the largest span of linear combinations of features that is irrelevant for the response. We check in Lemma B.1 that, almost surely, the features x belong to the range R(Σx). With Ux ∈ Rp×p the orthogonal projection of Rp onto R(Σx), the latter is equivalent to x = Ux x almost surely. Projections of the features along directions in the orthogonal complement R(Σx)⊥ = ker(Σx) are always irrelevant. Therefore, we are interested in projections of the features along complementary linear subspaces eB, eB⊥ ⊆ R(Σx) that partition the range R(Σx) = eB ⊕ eB⊥ in such a way that the information in eB⊥ is to be discarded and the information in eB is to be preserved. We denote UeB ∈ Rp×p the orthogonal projection of Rp onto eB, which means that the orthogonal projection onto eB⊥ is UeB⊥ = Ux − UeB ∈ Rp×p. With the above notation, we define the projected features

xeB := UeB x ∈ eB,   xeB⊥ := UeB⊥ x ∈ eB⊥, (2)

so that x = xeB + xeB⊥ ∈ R(Σx). We formally define the irrelevant subspace

B⊥y := arg max { dim(eB⊥) : R(Σx) = eB ⊕ eB⊥, E(xeB⊥ y) = 0p, E(xeB⊥ xteB) = 0p×p }. (3)

This is the largest subspace eB⊥ of R(Σx) for which the projected features xeB⊥ are uncorrelated with both the response y and the projection xeB of the features along the complement
eB. The latter induces the following characterization of the relevant subspace

By = arg min { dim(eB) : R(Σx) = eB ⊕ eB⊥, E(xeB⊥ y) = 0p, E(xeB⊥ xteB) = 0p×p }. (4)

We denote Uy and Uy⊥ the orthogonal projections onto By and B⊥y, respectively. We call relevant features the projection xy := Uy x and irrelevant features the projection xy⊥ := Uy⊥ x. This induces the orthogonal decompositions

Rp = R(Σx) ⊕ R(Σx)⊥,   R(Σx) = By ⊕ B⊥y,   x = xy + xy⊥ ∈ R(Σx). (5)

We do not impose any assumption on the joint distribution P(x,y), and the relevant subspace By is allowed to be the whole range R(Σx). We show that the subspace By is well-defined and preserves all the information on the dependence between the features and the response.

Lemma 2.2. Let (x, y) ∈ Rp × R satisfy Assumption 2.1. The relevant subspace By in Equation (4) is unique. Furthermore, with xy the relevant features in Equation (5), it holds LS(x, y) = LS(xy, y) for the population least-squares problem in Equation (1).

The least-squares problem LS(xy, y) only depends on the moments Σxy ∈ Rp×p⪰0 and σxy,y ∈ R(Σxy) of the relevant pair (xy, y); thus the relevant subspace By = R(Σxy) has dimension ry := rk(Σxy). Although the relevant subspace is sufficient for solving the population least-squares problem, it might still be unnecessarily large when the main goal is parsimonious identification. In ill-posed settings, the condition number κ2(Σxy) might be large due to the presence of a small perturbation of some underlying low-dimensional signal. In our model-free setting, for any linear subspace B ⊆ By of dimension rB ≤ ry, we denote

εB(xy, y) := ∥Σxy − ΣxB∥op / ∥ΣxB∥op ∨ ∥σxy,y − σxB,y∥2 / ∥σxB,y∥2 (6)

the size of the perturbation between the moments of the relevant pair (xy, y) and the low-rank pair (xB, y). The corresponding low-rank least-squares problem is LS(xB, y) and its aspects of interest are: the linear subspace B = R(ΣxB); the orthogonal projection UB = Σ†xB ΣxB of Rp onto B; the dimension rB = dim(B) of the linear subspace; the unique solution βB = Σ†xB σxB,y.
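All four aspects can be computed directly from the low-rank moments; a minimal numpy sketch with hypothetical moments (a ΣxB of rank 2 inside R^3, with σxB,y in its range):

```python
import numpy as np

# Hypothetical low-rank moments; sigma_xB_y must lie in the range of Sigma_xB.
Sigma_xB   = np.diag([2.0, 1.0, 0.0])
sigma_xB_y = np.array([1.0, 0.5, 0.0])

pinv = np.linalg.pinv(Sigma_xB)

U_B    = pinv @ Sigma_xB                  # orthogonal projection onto B = R(Sigma_xB)
r_B    = np.linalg.matrix_rank(Sigma_xB)  # dimension r_B = dim(B)
beta_B = pinv @ sigma_xB_y                # unique solution of LS(x_B, y)
```

The pseudo-inverse automatically restricts the solution to the subspace B, which is why βB is unique even though ΣxB is singular.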
This means that the low-rank parameter

θB := (B, UB, rB, βB) (7)

combining all the above is identifiable (it is uniquely defined). Informally, we are interested in the best parsimonious parameter where the linear subspace B ⊆ By is the largest for which ΣxB is well-posed. Since such a notion hinges on heuristic numerical considerations rather than intrinsic statistical properties, we derive necessary and sufficient conditions for the estimation of arbitrary low-rank parameters. Again informally, we call statistically interpretable any procedure for linear dimensionality reduction that can estimate the best parsimonious parameter using at most rB degrees-of-freedom, in a sense to be defined below. To fix the ideas, we visualize in Figure 1 the simplest non-trivial realization of our framework.

Figure 1: A toy simulation of our framework with n = 1000 and p = 3. A sample of i.i.d. observations (black) in the three-dimensional space; the features have full rank rx = 3; the subspace of irrelevant features (light green) has dimension ry⊥ = 1 and is the first direction of variation; the subspace of relevant features (dark blue) has dimension ry = 2.

2.1 Population Reduction Algorithms

In this section we investigate the performance of population reduction algorithms A(x, y) := A(Σx, σx,y) that estimate any low-rank parameter θB = (B, UB, rB, βB) by making choices
that depend only on the moments of the population pair (x, y). In particular, we assume that A(x, y) computes parameters

eθA,s := (eBA,s, eUA,s, erA,s, eβA,s),   0 ≤ s ≤ p, (8)

where {0p} = eBA,0 ⊆ eBA,1 ⊆ ··· ⊆ eBA,p ⊆ Rp are linear subspaces, eUA,s is the orthogonal projection onto eBA,s, erA,s = dim(eBA,s) is its dimension and eβA,s is the minimum-L2-norm solution of LS(eUA,s x, y). Furthermore, when s = p, we assume that eβA,p = βLS recovers the solution of the least-squares problem LS(x, y). Similarly, we denote A(xB, y) := A(ΣxB, σxB,y) the population reduction algorithm that makes use of the oracle knowledge of the low-rank subspace B and only depends on the moments of the low-rank pair (xB, y). This computes parameters

θA,s := (BA,s, UA,s, rA,s, βA,s),   0 ≤ s ≤ p, (9)

where {0p} = BA,0 ⊆ BA,1 ⊆ ··· ⊆ BA,p ⊆ Rp are linear subspaces, UA,s is the orthogonal projection onto BA,s, rA,s = dim(BA,s) is its dimension and βA,s is the minimum-L2-norm solution of LS(UA,s x, y). Furthermore, when s = p, we assume that βA,p = βB recovers the solution of the least-squares problem LS(xB, y). For example, when the policy for dimensionality reduction is principal components, A(x, y) computes s-dimensional subspaces eBA,s spanned by eigenvectors of Σx, whereas A(xB, y) computes s-dimensional subspaces BA,s spanned by eigenvectors of ΣxB. For more details and examples, see Remarks below. For the special choice of s = p in Equation (9), we rewrite the parameter θA,p computed by the oracle population algorithm A(xB, y) as

θA := (BA, UA, rA, βA), (10)

where we know that βA = βB. Therefore, there is no loss of information in taking θA as the parameter of interest instead of the low-rank parameter θB. The identification is parsimonious as long as rA ≤ rB, that is to say, the oracle population algorithm A(xB, y) uses no more than rB degrees-of-freedom to solve the low-rank least-squares problem LS(xB, y).
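For the principal-components policy mentioned above, the step-s parameters in Equation (9) can be sketched as follows; the helper `pcr_population` and the diagonal moments are our own illustrative choices, not notation from the paper:

```python
import numpy as np

def pcr_population(Sigma_x, sigma_xy, s):
    """Step-s parameters of a population PCR-style algorithm: project onto
    the top-s eigenspace of Sigma_x, then take the minimum-norm solution
    of the reduced least-squares problem."""
    w, V = np.linalg.eigh(Sigma_x)
    Vs = V[:, np.argsort(w)[::-1][:s]]   # eigenvectors of the s largest eigenvalues
    U = Vs @ Vs.T                        # orthogonal projection onto B_{A,s}
    beta = np.linalg.pinv(U @ Sigma_x @ U) @ (U @ sigma_xy)
    return U, s, beta

# Hypothetical population moments.
Sigma_x  = np.diag([3.0, 2.0, 1.0])
sigma_xy = np.array([3.0, 1.0, 1.0])

# At s = p the full least-squares solution beta_LS is recovered.
U_full, _, beta_full = pcr_population(Sigma_x, sigma_xy, s=3)
beta_LS = np.linalg.pinv(Sigma_x) @ sigma_xy
```

Running the same helper with s = 1 keeps only the leading eigendirection, which is exactly the nesting eBA,1 ⊆ ··· ⊆ eBA,p described in Equation (8).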
As long as the population algorithms A(x, y) and A(xB, y) are compatible, in the sense that the set of degrees-of-freedom {rA,s : 0 ≤ s ≤ p} is a subset of {erA,s : 0 ≤ s ≤ p}, there exists a parameter in Equation (8) computed by the population algorithm A(x, y) where

eθA := (eBA, eUA, rA, eβA) (11)

has the same degrees-of-freedom as the parameter θA computed by the oracle population algorithm A(xB, y). We have shown in Lemma 2.2 that the population least-squares problem LS(x, y) = LS(xy, y) only depends on the moments of the relevant pair (xy, y). Since discarding the irrelevant directions of the features is paramount for parsimonious identification, we call adaptive those population algorithms such that A(x, y) = A(xy, y), where A(xy, y) := A(Σxy, σxy,y) is the population algorithm that only depends on the moments of (xy, y). In order to state our main result, we need to guarantee that the choices made by the dimensionality reduction policy A(·) are continuous in some sense. Specifically, we will need to assume that, as long as the moments (Σxy, σxy,y) are a small perturbation of the moments (ΣxB, σxB,y), the errors in operator-norm ∥eUA − UA∥op and principal-angle ϕ1(eBA, BA) are proportional to the size of the perturbation. We call constants of stability the smallest such proportionality constants. For the sake of exposition, we omit here the technical details, which we establish in Appendix A later. We stress here that the parameters
appearing in Equations (8)-(9) are consistent with our Definition A.3 in Section A.3, whereas the constants of stability are taken from Section A.4.

Assumption 2.3 (Population Algorithm). Let By ⊆ R(Σx) be the relevant subspace and (xy, y) the relevant pair. Let B ⊆ By be the low-rank subspace and (xB, y) the low-rank pair. Let A(·) be a dimensionality reduction policy in the sense of Definition A.3 in Section A.3. We assume that:

(i) the oracle population algorithm A(xB, y) is parsimonious, in the sense that rA ≤ rB;

(ii) the oracle population algorithm A(xB, y) is stable with constants CA ≥ 1, DA ≥ 1 as in Equation (24) in Section A.4, and let

MA := 2 · κ2(UA ΣxB UA) · {4CA + 1} · ( ∥ΣxB∥op / ∥UA ΣxB UA∥op ∨ ∥σxB,y∥2 / ∥UA σxB,y∥2 ),

corresponding to Equation (25) in Section A.4;

(iii) the population algorithm is compatible with the oracle population algorithm, in the sense that {rA,s : 0 ≤ s ≤ p} ⊆ {erA,s : 0 ≤ s ≤ p};

(iv) the population algorithm is adaptive, in the sense that A(x, y) = A(xy, y);

(v) the size of the perturbation εB = εB(xy, y) in Equation (6) satisfies MA · εB < 1.

Theorem 2.4. Let (x, y) ∈ Rp × R satisfy Assumption 2.1 and let A(·) be any dimensionality reduction policy for which Assumption 2.3 holds. Let

θA = (BA, UA, rA, βA),   eθA = (eBA, eUA, rA, eβA)

be the compatible parameters θA ∈ A(xB, y) in Equation (10) and eθA ∈ A(x, y) in Equation (11). Then, the population errors on the orthogonal projection and linear subspace are

∥eUA − UA∥op ≤ CA εB,   ϕ1(eBA, BA) ≤ DA εB,

whereas the population error on the least-squares solution is

∥eβA − βA∥2 / ∥βA∥2 ≤ (5/2) MA εB.

Since we advocate for parsimonious methods, we extend the perturbation bound in the previous result to situations where only r ≤ rA degrees-of-freedom are used by the population algorithm A(x, y). We quantify the price to pay for early-stopping.

Corollary 2.5. Under the assumptions of Theorem 2.4, for any r ∈ {rA,s : 0 ≤ s ≤ p}, consider the parameter eθ(r)A ∈ A(x, y) using r degrees-of-freedom.
The early-stopping population error on the least-squares solution is

∥eβ(r)A − βA∥2 / ∥βA∥2 ≤ √(rA − r) + (5/2) MA εB.

Remark 2.6 (Our Assumptions). We address here all our assumptions leading to the population error bounds in Theorem 2.4. This result is a special case of a general perturbation bound which we establish with Theorem A.8 in Section A.4.

Assumption 2.1 is the weakest model-free condition under which the set of non-trivial solutions to the population least-squares problem in Equation (1) exists. We assume the covariance matrix Σx to have positive rank rx ≥ 1 and the covariance vector σx,y to be non-zero, since we often divide by ∥σx,y∥2 and ∥Σx∥op. If this is not the case, the relevant subspace in Equation (4) is trivially By = {0p} as well. This is not only a technicality, since one can come up with examples where the response y is determined by some nonlinear function of the features such that E(xy) = 0p. To see this, consider the case when x ∼ N(0p, σ2x Ip) for some σ2x > 0. It is easy to check that E(x Hk(x)) = 0p when Hk(·) is any Hermite polynomial of degree k ≥ 2. We claim that such adversarial situations are very unlikely in the ill-posed setting we are considering here. When the covariance matrix of the features Σx is ill-posed, the entries of x = (x1, . . . , xp)t are
highly correlated and the class of functions for which E(xy) = 0p is negligible.

Assumption 2.3 provides conditions on the population reduction algorithms. Condition (i) means that it is harmless to replace, in the statement of Theorem 2.4, the low-rank parameter θB with the parameter θA computed by the oracle population algorithm. This is a necessary condition for parsimonious identification. We address in the following Remarks the implications of using non-parsimonious methods. Condition (ii) means that the policy A(·) is stable under perturbations. Although we refer to Section A.4 for more details, we mention here that the stability of classical algorithms has been investigated in the literature. Following Davis and Kahan [1970] and Godunov et al. [1993] one finds that the stability constant CPCR of the population PCR algorithm PCR(x, y) inspired by Hotelling [1933] is proportional to δ(x)−1, where δ(x) is the minimum gap between the unique eigenvalues of Σx. Following Carpraux et al. [1994] and Kuznetsov [1997] one finds that the stability constant CPLS of the population PLS algorithm PLS(x, y) inspired by Wold [1966] is proportional to the condition number κ(x, y) of the Krylov space K(Σx, σx,y). Following Cerone et al. [2019] and Fosson et al. [2020] one finds that some penalized sparse methods have stability constant CSPR = 1. The constant MA appearing in the bound for the least-squares solutions is proportional to the condition number κ2(UA ΣxB UA) of the reduced low-rank covariance matrix. For this reason, we are implicitly interested in the largest linear subspace B ⊆ By for which the low-rank parameter θB is well-posed. Condition (iii) means that for any parameter θA computed by the oracle population algorithm A(xB, y) there exists a parameter eθA computed by the population algorithm A(x, y) such that both have the same degrees-of-freedom.
This is true without loss of generality for all those algorithms where A(·) selects exactly s-dimensional linear subspaces BA,s and eBA,s for all 0 ≤ s ≤ rB. This is often true for unsupervised or sparse methods. As for the PLS method, Lemma 3.1 by Kuznetsov [1997], see Lemma B.13 in Section B.3, guarantees the compatibility of perturbed Krylov spaces. Condition (iv) means that the population algorithm A(x, y) implicitly discards all the information contained in the irrelevant subspace B⊥y. We have shown in Lemma 2.2 that this is always true for the population least-squares LS(x, y). This is a sufficient condition that we exploit to remove the contribution of the irrelevant features from the perturbation size in Equation (6). We address in the following Remarks the implications of using non-adaptive methods. Condition (v) establishes the threshold M−1A as the largest perturbation size for which the perturbation bounds are possible.

Remark 2.7 (Misleading Sparse Reduction). Among the sparse methods in Section 1.1.1, we consider here a variation of Best Subset Selection by Beale et al. [1967]. Forward Subset Selection is a dimensionality reduction policy FSS(·) leading to a population reduction algorithm FSS(x, y) := FSS(Σx, σx,y) with the following properties. For all 0 ≤ s ≤ p, the linear subspace BFSS,s is spanned by the vectors of the canonical basis EJFSS,s(x, y) := span{ej : j ∈ JFSS,s(x, y)} corresponding to the
best s-sparse active subset JFSS,s(x, y) of βLS. The active sets are computed iteratively, in the sense that jFSS,s+1 ∈ JLS \ JFSS,s. When s = p, we find BFSS,p = EJLS(x, y) and βFSS,p = βLS. One can define a similar procedure for the LASSO(·) by Tibshirani [1996], where the active set JLASSO,s(x, y) is now selected through the λ(s)-penalized least-squares solution βλ(s) = arg min β∈Rp { E(y − xtβ)2 + λ(s)∥β∥1 } such that ∥βλ(s)∥0 = s. Then, the least-squares solution βLASSO,s is the minimum-L2-norm solution of LS(xJLASSO,s, y) on the linear subspace BLASSO,s = EJLASSO,s(x, y). Other variations are possible.

One can specify many situations where all sparse reduction methods we mentioned in Section 1.1.1 cannot achieve parsimonious identification. Therefore, they are not statistically interpretable in general. Let SPR(·) be any policy for sparse reduction and consider the oracle population algorithm SPR(xB, y) when the low-rank subspace is B = span{e1 + ··· + ep} and the relevant subspace is the whole range By = R(Σx) with rx ≥ 2. In this case, the parameter in Equation (10) computed by the oracle population algorithm SPR(xB, y) is θSPR = (Rp, Ip, p, βB) and it is clear that Assumption 2.3 (i) fails because rSPR = p is much larger than rB = 1. In such situations, we are not allowed to replace θB by θSPR in the statement of Theorem 2.4 and the performance of the population algorithm must be measured in terms of the parameter eθSPR = (span{ej}, Ij, 1, λj ej) using rB = 1 degrees-of-freedom. In particular, this implies ∥eUSPR − UB∥op ≥ (p − 1)/p, ϕ1(eBSPR, B) ≥ (π/2)(√p − 1)/√p and ∥eβSPR − βSPR∥2 / ∥βSPR∥2 ≥ √(p − 1)/√p.

Remark 2.8 (Misleading Unsupervised Reduction). Among the unsupervised methods in Section 1.1.2, we consider here Principal Components Regression by Hotelling [1933]. This is a dimensionality reduction policy PCR(·) leading to a population reduction algorithm PCR(x, y) := PCR(Σx, σx,y) with the following properties. For all 0 ≤ s ≤ p, the linear subspaces BPCR,s depend only on the covariance matrix Σx.
In particular, with λx,1 ≥ ··· ≥ λx,rx > 0 the sorted positive eigenvalues of Σx, the linear subspace BPCR,s is spanned by the eigenvectors Vs(x) := span{vx,1, . . . , vx,s} of Σx corresponding to the largest s eigenvalues. When s = p, we find that BPCR,p = Vp(x) = R(Σx) is exactly the range of Σx, so that βPCR,p = βLS.

All unsupervised methods for dimensionality reduction we mentioned in Section 1.1.2 are not adaptive. Therefore, they are not statistically interpretable in general. We show this in particular for the population PCR algorithm PCR(x, y), but the argument is easily adapted. Consider the case where the low-rank subspace B ⊆ By and the irrelevant subspace B⊥y ⊆ R(Σx) have the same dimension rB = ry⊥. Furthermore, assume that the eigenvalues of the covariance Σxy⊥ are larger than the eigenvalues of the low-rank covariance ΣxB. This implies that the parameter in Equation (10) is θPCR = (B, UB, rB, βB) whereas the parameter in Equation (11) is eθPCR = (B⊥y, eUy⊥, rB, 0p), since the principal rB-dimensional eigenspace of Σx is orthogonal to the relevant subspace. In such situations Assumption 2.3 (iv) fails and one recovers ∥eUPCR − UPCR∥op = 1, ϕ1(eBPCR, BPCR) = π/2 and ∥eβPCR − βPCR∥2 / ∥βPCR∥2 = 1.

Remark 2.9 (Reliable Sufficient Reduction). Among the sufficient methods in Section 1.1.3, we consider here Partial Least Squares by Wold [1966]. This is a dimensionality reduction policy PLS(·) leading to a population reduction algorithm PLS(
x, y) := PLS(Σx, σx,y) with the following properties. For all 0 ≤ s ≤ p, the linear subspace BPLS,s is spanned by the first s vectors of the Krylov basis Ks(x, y) := span{σx,y, Σx σx,y, . . . , Σs−1x σx,y} generated by Σx and σx,y. The subspace BPLS,s is one-dimensional when σx,y is an eigenvector of Σx. When s = p, we find that BPLS,p = Kp(Σx, σx,y) ⊆ R(Σx), whereas Lemma B.7 implies βPLS,p = βLS.

Among the sufficient reduction methods we mentioned in Section 1.1.3, the population PLS algorithm PLS(x, y) is in general adaptive and parsimonious. This makes it statistically interpretable. Adaptivity is a consequence of Lemma B.7, which shows that, as long as the population least-squares problem is preserved by the relevant subspace, in the sense that LS(x, y) = LS(xy, y), then the corresponding Krylov spaces are preserved. Parsimony is a consequence of the fact that Krylov spaces are themselves linear subspaces of the range of the matrix used to generate them; therefore the parameter θPLS in Equation (10) satisfies BPLS ⊆ R(ΣxB) = B.

2.2 Sample Reduction Algorithms

Consider a dataset (X, y) ∈ Rn×p × Rn consisting of n ≥ 1 i.i.d. realizations (xi, yi) of the same population pair (x, y) ∈ Rp × R under Assumption 2.1. In this section we investigate
For example, when the policy for dimensionality reduction is principal components, bA(x, y) computes s-dimensional subspaces bBA,sspanned by eigenvectors of bΣx. As long as the sample algorithm bA(x, y) is compatible with the population algorithm A(x, y), in the sense that set of degrees-of-freedom {erA,s: 0≤s≤p}is a subset of {brA,s: 0≤s≤p}, there exists a parameter in Equation (12) computed by the sample algorithm bA(x, y) where bθA:= bBA,bUA, rA,bβA (13) has the same degrees-of-freedom as the parameter eθA= (eBA,eUA, rA,eβA) in Equation (11) computed by the population algorithm A(x, y). We are interested in finite-samples guarantees on the sample error between the compat- ible parameters bθA∈bA(x, y) andeθA∈ A(x, y). In particular, we want to recover sharp bounds in high probability for the size of the sample perturbation bε(x, y) :=∥bΣx−Σx∥op ∥Σx∥op∨∥bσx,y−σx,y∥2 ∥σx,y∥2. (14) For this, we impose finite fourth-moments. Assumption 2.10 (Model-Free, 4th moments) .Let (x, y)∈Rp×Rsatisfy Assumption 2.1. The response yand the projected features xeB=UeBx, for any linear subspace eB ⊆ R (Σx), have finite moment-ratios Ly:=E(y4)1 4 E(y2)1 2, L eB:=E(∥xeB∥4 2)1 4 E(∥xeB∥2 2)1 2, with the convention that LeBis set to one if the denominator is zero. Let (x, y)∈Rp×Rsatisfy Assumption 2.10. From now on, we are interested in the geometrical properties of the projected features xeBwhere the linear subspace eBis either 17 eBA,eB⊥ AorB⊥ y. For any such choice of eB, we denote reB:= rk( ΣxeB), ρ eB:=E(∥xeB∥2 2) ∥ΣxeB∥op, ρ eB,n:=E max
1≤i≤n ∥xeB,i∥22 / ∥ΣxeB∥op. (15)

The rank reB is the dimension of the span of the support of xeB. The effective rank ρeB ≤ reB can be rewritten as the weighted average Tr(ΣxeB)/∥ΣxeB∥op and measures the interplay between dimension and variation. The uniform effective rank ρeB,n accounts for the variability of a sample of i.i.d. realizations of xeB. We select the linear subspace among eBA, eB⊥A or B⊥y corresponding to the largest variation

eB∗ := arg max { LeB ∥ΣxeB∥op ρeB,n : eB ∈ {eBA, eB⊥A, B⊥y} } (16)

and define the sequence

δeB∗,n := √( ρeB∗,n log rx / n ), (17)

summarizing the intrinsic geometrical complexity. As we did in the previous section, we need to assume that the choices made by the dimensionality reduction policy A(·) are stable under small perturbations. Specifically, we will need to assume that, as long as the sample moments (bΣx, bσx,y) are a small perturbation of the population moments (Σx, σx,y), the errors in operator-norm ∥bUA − eUA∥op and principal-angle ϕ1(bBA, eBA) are proportional to the size of the perturbation. We call constants of stability the smallest such proportionality constants. Once more, we omit the technical details and stress here that the parameters appearing in Equation (12) are consistent with our Definition A.3 in Section A.3, whereas the constants of stability are taken from Section A.4.

Assumption 2.11 (Sample Algorithm). Let A(·) be a dimensionality reduction policy in the sense of Definition A.3 in Section A.3.
We assume that:

(i) the population algorithm A(x, y) is stable with constants eCA ≥ 1, eDA ≥ 1 as in Equation (24) in Section A.4, and let

fMA := 2 · κ2(eUA Σx eUA) · {4eCA + 1} · ( ∥Σx∥op / ∥eUA Σx eUA∥op ∨ ∥σx,y∥2 / ∥eUA σx,y∥2 ),

corresponding to Equation (25) in Section A.4;

(ii) the sample algorithm is compatible with the population algorithm, in the sense that {erA,s : 0 ≤ s ≤ p} ⊆ {brA,s : 0 ≤ s ≤ p};

(iii) with eB∗ the leading linear subspace among eBA, eB⊥A, B⊥y in the sense of Equation (16), δeB∗,n the corresponding complexity in Equation (17), and some absolute constant C ≥ 1,

eKeB∗ := 99 C Ly LeB∗ ( σy ∥ΣxeB∗∥1/2op / ∥σx,y∥2 ∨ ∥ΣxeB∗∥op / ∥Σx∥op ),   fMeB∗ := fMA eKeB∗,

it holds that δeB∗,n → 0 as n → ∞ and νeB∗,n := fMeB∗ δeB∗,n < 1/2.

Theorem 2.12. Let (x, y) ∈ Rp × R satisfy Assumption 2.10. Let (X, y) ∈ Rn×p × Rn be a dataset of i.i.d. realizations of (x, y). Let A(·) be any dimensionality reduction policy for which Assumption 2.11 holds. Let

eθA = (eBA, eUA, rA, eβA),   bθA = (bBA, bUA, rA, bβA)

be the compatible parameters eθA ∈ A(x, y) in Equation (11) and bθA ∈ bA(x, y) in Equation (13). Then, for any νeB∗,n < νn < 1/2, the size of the perturbation bε = bε(x, y) in Equation (14) satisfies bε ≤ εeB∗,n := eKeB∗ ν−1n δeB∗,n with probability at least 1 − 2νn. On this event, the sample errors on the orthogonal projection and linear subspace are

∥bUA − eUA∥op ≤ eCA εeB∗,n,   ϕ1(bBA, eBA) ≤ eDA εeB∗,n,

whereas the sample error on the least-squares solution is

∥bβA − eβA∥2 / ∥eβA∥2 ≤ (5/2) fMA εeB∗,n.

Corollary 2.13. Under the assumptions of Theorem 2.12, for any r ∈ {erA,s : 0 ≤ s ≤ p} such that r ≤ rA, consider the parameter bθ(r)A ∈ bA(x, y) using r degrees-of-freedom. On the same event of probability at least 1 − 2νn, the early-stopping sample error on the least-squares solution is

∥bβ(r)A − eβA∥2 / ∥eβA∥2 ≤ √(rA − r) + (5/2) fMA εeB∗,n.

Theorem 2.14. Under the assumptions of Theorem 2.4 and Theorem 2.12, on the same event of probability at least 1 − 2νn, one has the following. The estimation errors for the orthogonal projection and linear subspace are

∥bUA − UA∥op ≤ CA εB + eCA εeB∗,n,   ϕ1(bBA, BA)
≤ DA εB + eDA εeB∗,n,

whereas, for any r ∈ {erA,s : 0 ≤ s ≤ p} such that r ≤ rA, the early-stopping estimation error on the least-squares solution is

∥bβ(r)A − βA∥2 / ∥βA∥2 ≤ √(rA − r) + (5/2) MA εB + (5/2)(1 + (5/2) MA εB) fMA εeB∗,n.

Remark 2.15 (Our Assumptions). We address here all our assumptions leading to the sample error bounds in Theorem 2.12. This result is a special case of a general perturbation bound which we establish with Theorem A.8 in Section A.4. Assumption 2.10 is a stronger version of Assumption 2.1 in the sense that it requires finiteness of four moments instead of two.

Assumption 2.11 provides conditions on the population algorithm and the sample algorithm. Condition (i) means that the policy A(·) is stable under perturbations. This is the same as Assumption 2.3 (ii), which we have discussed already in Remark 2.6. The constant fMA is proportional to the condition number κ2(eUA Σx eUA) of the reduced covariance matrix. Condition (ii) means that for any parameter eθA computed by the population algorithm A(x, y) there exists a parameter bθA computed by the sample algorithm bA(x, y) having the same degrees-of-freedom. This is true without loss of generality for all those algorithms where A(·) selects exactly s-dimensional linear subspaces eBA,s and bBA,s for all 0 ≤ s ≤ rx. This is often the case for unsupervised or sparse methods as long as the sample is large enough that rk(bΣx) = rk(Σx). As for the PLS method, Lemma 3.1 by Kuznetsov [1997], see Lemma B.13 in Section B.3, guarantees the compatibility of perturbed Krylov spaces. Condition (iii) means that our theoretical results work under the regime ρeB∗,n/n → 0, where ρeB∗,n is the uniform effective rank in Equation (15). This is a dimension-free necessary condition, in the sense that nothing is assumed on the ratio p/n, which can be arbitrarily large. This makes our result relevant to practitioners working with applications where the dimension p is much larger than the sample size n.
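To make the quantities of this section concrete, a small Monte Carlo sketch computes the sample moments, the perturbation size bε of Equation (14), and the effective rank Tr(Σx)/∥Σx∥op; the Gaussian moments and the linear signal are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 5

# Hypothetical population second moments with a known linear part.
Sigma_x  = np.diag([4.0, 2.0, 1.0, 0.5, 0.25])
beta     = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
sigma_xy = Sigma_x @ beta                    # population E(x y); the noise is independent

X = rng.standard_normal((n, p)) * np.sqrt(np.diag(Sigma_x))
y = X @ beta + rng.standard_normal(n)

# Sample moments and the perturbation size of Equation (14).
Sigma_hat = X.T @ X / n
sigma_hat = X.T @ y / n
op_norm = lambda M: np.linalg.norm(M, 2)     # operator norm (largest singular value)
eps_hat = max(op_norm(Sigma_hat - Sigma_x) / op_norm(Sigma_x),
              np.linalg.norm(sigma_hat - sigma_xy) / np.linalg.norm(sigma_xy))

# Effective rank Tr(Sigma_x)/||Sigma_x||_op entering the rate of Theorem 2.12.
eff_rank = np.trace(Sigma_x) / op_norm(Sigma_x)
```

With n well above the effective rank, eps_hat is small, in line with the regime ρeB∗,n/n → 0 required by Assumption 2.11 (iii).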
Remark 2.16 (Consistent Sample Algorithms). Differently from Theorem 2.4 for the population errors, there is no adaptivity nor parsimony requirement to bound the sample errors in Theorem 2.12. Under the assumptions of the latter, it is always possible to find a sequence νeB∗,n < νn < 1/2 such that νn → 0 arbitrarily slowly and ν−1n δeB∗,n → 0 when n → +∞. This means that stable policies A(·) for dimensionality reduction yield the convergence in probability bθA →P eθA as n → ∞ or, in other words, all sample reduction algorithms are consistent for their population counterpart, as long as the procedure is stable. It is not advisable to discriminate between different sample algorithms by only comparing their convergence rates. In fact, as we show with Theorem 2.14, the overall estimation error of a sample algorithm is controlled by the sum of the population error and the sample error. Only parsimonious and adaptive algorithms can achieve optimal estimation errors.

Remark 2.17 (Optimal Sample Error). The sample errors in Theorem 2.12 are obtained for a sufficiently large sample of i.i.d. realizations (xi, yi). Together with our Lemma B.3, one infers the following characterization (up to constants), depending on whether the leading projection xeB∗ is bounded almost surely, has infinitely-many moments, or is heavy-tailed: ∥bΣx − Σx∥op
/ ∥Σx∥op ∨ ∥bσx,y − σx,y∥2 / ∥σx,y∥2 ≲

√( ρeB∗ log rx / (n ν2n) ),  if xeB∗ is as in Lemma B.3 (i);

√( (ρeB∗ ∨ log n) log rx / (n ν2n) ),  if xeB∗ is as in Lemma B.3 (ii);

√( ρeB∗ log rx / (n1−1/k ν2n) ),  if xeB∗ is as in Lemma B.3 (iii);

with probability at least 1 − 2νn. The effective rank ρeB∗ appearing in the above display corresponds to the linear subspace eB∗ of largest variation among eBA, eB⊥A and B⊥y and thus depends on the underlying algorithm A(·). We achieve this by bounding separately all possible projections of (bΣx − Σx) and (bσx,y − σx,y) along these subspaces. The term log rx is the logarithm of the rank of the covariance Σx and is an artifact of the proof relying on Rademacher complexities and the Khintchine-type inequalities established by Buchholz [2005], Vershynin [2012] and Moeller and Ullrich [2021], given in Section B.2. In the worst-case scenario where xeB∗ has exactly finite fourth moments, the rate is of order √(n−1+1/2) = n−1/4 and is worse than the rate n−1/2 obtained in the best-case scenario by a polynomial factor in n.

Since our proof strategy only relies on sharp bounds for the above display, one can exploit recent breakthroughs in the theory of empirical processes due to Abdalla and Zhivotovskiy [2024] and Oliveira and Rico [2024] to achieve optimal sample errors for arbitrary algorithms. With the formulation of Theorem 5.4 by Bartl and Mendelson [2025], we conjecture that for a sufficiently large sample (xi, yi) of realizations corrupted by a ratio of outliers 0 ≤ η ≤ 1, there exist sample procedures bΣx,νn ∈ Rp×p⪰0 and bσx,y,νn ∈ Rp such that

∥bΣx,νn − Σx∥op / ∥Σx∥op ∨ ∥bσx,y,νn − σx,y∥2 / ∥σx,y∥2 ≲ √(ρeB∗ / n) + √(log ν−1n / n) + √η,

with probability at least 1 − νn. This recovers the optimal sample rate in the hardest setting where the common random pair (x, y) satisfies Assumption 2.10 and the additional L4−L2 equivalence condition by Bartl and Mendelson [2025].

2.3 Prediction Risk

In this section we briefly discuss the implications of our main results for the prediction problem.
Consider a dataset (X, y) ∈ Rn×p × Rn consisting of n ≥ 1 i.i.d. realizations (xi, yi) of the same population pair (x, y) ∈ Rp × R under Assumption 2.10. We consider the problem of predicting a new response value yn+1 ∈ R in terms of a new observed feature vector xn+1 ∈ Rp following the same population distribution. With the low-rank parameter θB = (B, UB, rB, βB) in Equation (7), we consider reduction policies A(·) for which the assumptions of both Theorem 2.4 and Theorem 2.12 hold. Let θA = (BA, UA, rA, βA) be the population parameter in Equation (10) with rA ≤ rB and βA = βB. Let bθ(r)A = (bB(r)A, bU(r)A, r, bβ(r)A) be as in Equation (13), using r ≤ rA degrees-of-freedom. The sample parameter bθ(r)A is computed from the data (X, y) and is independent of the new pair (xn+1, yn+1). We define the risk and excess risk of such estimators as

Rx,y(bβ(r)A) := E(x,y)[ {y − xt bβ(r)A}2 | X, y ],   R(ex)x,y(bβ(r)A) := Rx,y(bβ(r)A) − Rx,y(βA),

where the expectation is taken with respect to the population pair (x, y) and conditionally on the data (X, y). We can prove the following identity.

Theorem 2.18. Under the assumptions of Theorem 2.4 and Theorem 2.12, on the same event of probability at least 1 − 2νn, one has the following. For any r ∈ {erA,s : 0 ≤ s ≤ p} such that r ≤ rA, the early-stopping excess risk on the least-squares solution is

R(ex)x,y(bβ(r)A)
|
https://arxiv.org/abs/2505.01297v1
|
= bβ(r) A−βA 2 Σx−2 bβ(r) A−βA,βLS−βA Σx, where βLS=Σ† xσx,yis the population least-squares solution. Remark 2.19 (Optimal Linear Prediction Risk) .Under Assumption 2.10 alone, the de- pendence between the features xand the response yis arbitrary and the linear predictor xt n+1bβ(r) Aconsidered here might be far from the optimal predictor bf(xn+1) ofyn+1. Most of the overparametrized methods for computing bfwe mentioned in Section 1.1.5 are hard to interpret. On the other hand, Theorem 2.18 characterizes the excess risk, with respect to some interpretable low-rank parameter, of sample reduction algorithms that are parsi- monious and adaptive. For such methods, the excess risk is proportional to ∥bβ(r) A−βA∥2 Σx, which is the square of the Σx-weighted estimation error from Theorem 2.14. With eβ(r) Athe population parameter, this means R(ex) x,y bβ(r) A =R(ex) x,y eβ(r) A +OP ρeB∗,nlogrx nν2n! = eβ(r) A−βA 2 Σx−2 eβ(r) A−βA,βLS−βA Σx+oP(1) and the optimality of such rate is discussed in Remark 2.17. Only algorithms that are both parsimonious and adaptive can achieve optimal linear prediction risk. 22 3 Dicussion We introduced a comprehensive framework for addressing the identification problem in its full generality, allowing for arbitrary dependence between features and the response. We characterized the largest projection of irrelevant features and the parsimonious projections of relevant ones. We examined a broad class of linear dimensionality reduction algorithms and demonstrated that only methods that are both parsimonious and adaptive are statis- tically interpretable, and hence can achieve optimal estimation error rates. In particular, we discussed how sparse and unsupervised methods may be misleading in general. Our framework encourages the study of general, not necessarily linear, dimensionality reduction algorithms, thereby extending the notion of statistical interpretability to broader classes of machine learning methods. 
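The algebra behind the excess-risk identity of Theorem 2.18 reduces to a finite-dimensional computation that can be checked directly. The sketch below is illustrative NumPy (not the authors' code), with generic vectors standing in for $\widehat{\beta}^{(r)}_A$ and $\beta_A$: it verifies that $R^{(ex)}$ equals $\|\widehat{\beta}-\beta_A\|^2_{\Sigma_x}-2\langle\widehat{\beta}-\beta_A,\beta_{LS}-\beta_A\rangle_{\Sigma_x}$.

```python
import numpy as np

def excess_risk(Sigma, sigma_xy, beta_hat, beta_A):
    """Population excess risk R(beta_hat) - R(beta_A), where
    R(beta) = E(y - x^t beta)^2 = E y^2 - 2 beta^t sigma_xy + beta^t Sigma beta;
    the E y^2 term cancels in the difference."""
    def risk_part(b):  # the part of R(beta) depending on beta
        return b @ Sigma @ b - 2 * b @ sigma_xy
    return risk_part(beta_hat) - risk_part(beta_A)

rng = np.random.default_rng(2)
p = 6
M = rng.standard_normal((p, p))
Sigma = M @ M.T                   # population covariance (full rank a.s.)
beta_ls = rng.standard_normal(p)
sigma_xy = Sigma @ beta_ls        # so that beta_LS = pinv(Sigma) @ sigma_xy
beta_A = rng.standard_normal(p)   # generic stand-in for the low-rank target
beta_hat = rng.standard_normal(p) # generic stand-in for the estimator

d = beta_hat - beta_A
rhs = d @ Sigma @ d - 2 * d @ Sigma @ (beta_ls - beta_A)
assert np.isclose(excess_risk(Sigma, sigma_xy, beta_hat, beta_A), rhs)
```

The identity holds for any $\widehat{\beta}$ and $\beta_A$ once $\sigma_{x,y}=\Sigma_x\beta_{LS}$; the content of Theorem 2.18 is that it holds on the stated high-probability event for the early-stopped estimators.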
A Reduction Algorithms for Least-Squares Problems

A.1 Principal-Angle for Compatible Linear Subspaces

Two linear subspaces $\mathcal{E}\subseteq\mathbb{R}^p$, $\widetilde{\mathcal{E}}\subseteq\mathbb{R}^p$ are orthogonal, denoted by $\mathcal{E}\perp\widetilde{\mathcal{E}}$, if $e^t\widetilde{e}=0$ for all $e\in\mathcal{E}$, $\widetilde{e}\in\widetilde{\mathcal{E}}$. We say that two linear subspaces $\mathcal{E}$ and $\widetilde{\mathcal{E}}$ of $\mathbb{R}^p$ are compatible if they have the same dimension $1\le d\le p$. If $E=[e_1|\cdots|e_d]\in\mathbb{R}^{p\times d}$ is any orthonormal basis of $\mathcal{E}$ and $\widetilde{E}=[\widetilde{e}_1|\cdots|\widetilde{e}_d]\in\mathbb{R}^{p\times d}$ is any orthonormal basis of $\widetilde{\mathcal{E}}$, the singular values of $E^t\widetilde{E}\in\mathbb{R}^{d\times d}$ do not depend on the choice of orthonormal bases and can be written as $\sigma(E,\widetilde{E})=(\sigma_1(E,\widetilde{E}),\dots,\sigma_d(E,\widetilde{E}))^t\in\mathbb{R}^d$. Since the vectors are orthonormal, one finds $0\le\sigma_i(E,\widetilde{E})\le1$ for all $1\le i\le d$ and can uniquely define the angle $0\le\phi_i(E,\widetilde{E})\le\pi/2$ between $e_i$ and $\widetilde{e}_i$ by $\phi_i(E,\widetilde{E}):=\arccos(\sigma_i(E,\widetilde{E}))$. We denote the collection of angles between compatible linear subspaces as

$$\frac{\pi}{2} \ge \phi_1(\mathcal{E},\widetilde{\mathcal{E}}) \ge \dots \ge \phi_d(\mathcal{E},\widetilde{\mathcal{E}}) \ge 0, \qquad (18)$$

which are independent of the choice of orthonormal basis. A detailed reference on angles is given by Davis and Kahan [1970]. With $U_{\mathcal{E}}$ and $U_{\widetilde{\mathcal{E}}}$ respectively the orthogonal projections of $\mathbb{R}^p$ onto $\mathcal{E}$ and $\widetilde{\mathcal{E}}$, it is a classical result (for a proof, see Theorem 5.1 by Godunov et al. [1993]) that

$$\phi_1(\widetilde{\mathcal{E}},\mathcal{E}) = \arcsin\big(\|U_{\widetilde{\mathcal{E}}}-U_{\mathcal{E}}\|_{\mathrm{op}}\big), \qquad (19)$$

which we refer to as the principal-angle. Following Section 8.6 by Berger [1987], one can prove the following properties of the principal-angle.

Lemma A.1. The principal-angle $\phi_1(\cdot,\cdot)$ in Equation (19) satisfies the following:

(i) $0\le\phi_1(\widetilde{\mathcal{E}},\mathcal{E})=\phi_1(\mathcal{E},\widetilde{\mathcal{E}})\le\frac{\pi}{2}$;

(ii) $\phi_1(\widetilde{\mathcal{E}},\mathcal{E})=0 \iff \widetilde{\mathcal{E}}=\mathcal{E}$, whereas $\widetilde{\mathcal{E}}\perp\mathcal{E} \implies \phi_1(\widetilde{\mathcal{E}},\mathcal{E})=\frac{\pi}{2}$;

(iii) $\phi_1(\widehat{\mathcal{E}},\widetilde{\mathcal{E}})+\phi_1(\widetilde{\mathcal{E}},\mathcal{E})\le\frac{\pi}{2} \implies \phi_1(\widehat{\mathcal{E}},\mathcal{E})\le\phi_1(\widehat{\mathcal{E}},\widetilde{\mathcal{E}})+\phi_1(\widetilde{\mathcal{E}},\mathcal{E})$.

A.2 Reduced Least-Squares Problems

For any integers $d,p\ge1$, any matrix $A\in\mathbb{R}^{d\times p}$, any vector $b\in\mathbb{R}^d$ and any linear subspace $\mathcal{C}\subseteq\mathbb{R}^p$, we denote by $\mathrm{LS}(A,b,\mathcal{C}):=\arg\min_{\zeta\in\mathcal{C}}\|A\zeta-b\|_2^2$ the set of solutions to the reduced least-squares problem. It is a classical result, see Theorem 4 by Price [1964], that $\mathrm{LS}(A,b,\mathbb{R}^p)=\{A^\dagger b+(I_p-A^\dagger A)\zeta : \zeta\in\mathbb{R}^p\}$ and the minimum-$L_2$-norm solution $\zeta_{LS}:=A^\dagger b\in\mathbb{R}^p$ belongs to the range $\mathcal{R}(A^\dagger)\subseteq\mathbb{R}^p$. This means that restricting the least-squares problem to the range of its inverse operator always admits a unique solution, that is, $\mathrm{LS}(A,b):=\mathrm{LS}(A,b,\mathcal{R}(A^\dagger))=\{A^\dagger b\}$. Furthermore, one can always replace the vector $b$ with its projection $AA^\dagger b$ onto the range $\mathcal{R}(A)$, since $\mathrm{LS}(A,AA^\dagger b)=\{A^\dagger b\}$ and $AA^\dagger b=b$ if and only if $b\in\mathcal{R}(A)$. Lemma 1 by Penrose [1955] provides the equivalent formulation $A^\dagger=(A^tA)^\dagger A^t$, so that $\zeta_{LS}=(A^tA)^\dagger A^tb$ is the unique solution of the equivalent least-squares problem $\mathrm{LS}(A^tA,A^tb)$ restricted to $\mathcal{R}((A^tA)^\dagger)=\mathcal{R}(A^tA)$, since $A^tA\in\mathbb{R}^{p\times p}$ is symmetric and positive semi-definite; see Theorem 20.5.1 by Harville [1997].

Lemma A.2. For any integers $d,p\ge1$, any matrix $A\in\mathbb{R}^{d\times p}$, any vector $b\in\mathbb{R}^d$ and any linear subspace $\mathcal{C}\subseteq\mathcal{R}(A^\dagger)$, the reduced least-squares problem $\mathrm{LS}(A,b,\mathcal{C})$ admits a unique solution $\zeta_{\mathcal{C}}:=U_{\mathcal{C}}\zeta_{LS}$, where $U_{\mathcal{C}}\in\mathbb{R}^{p\times p}$ is the orthogonal projection of $\mathbb{R}^p$ onto $\mathcal{C}$ and $\zeta_{LS}=A^\dagger b$ is the minimum-$L_2$-norm solution of the unreduced least-squares problem. The reduced least-squares problem is equivalent to $\mathrm{LS}(U_{\mathcal{C}}A^tAU_{\mathcal{C}},U_{\mathcal{C}}A^tb)$ and $\zeta_{\mathcal{C}}=(U_{\mathcal{C}}A^tAU_{\mathcal{C}})^\dagger U_{\mathcal{C}}A^tb$.

Proof of Lemma A.2. Recall that $A^\dagger A$ is the orthogonal projection of $\mathbb{R}^p$ onto $\mathcal{R}(A^\dagger)$, therefore $U_{\mathcal{C}}A^\dagger A=A^\dagger AU_{\mathcal{C}}=U_{\mathcal{C}}$, since $\mathcal{R}(U_{\mathcal{C}})\subseteq\mathcal{R}(A^\dagger)$. This implies

$$AU_{\mathcal{C}}(U_{\mathcal{C}}A^\dagger)AU_{\mathcal{C}} = AU_{\mathcal{C}}U_{\mathcal{C}}A^\dagger AU_{\mathcal{C}} = AU_{\mathcal{C}},$$

so that $(AU_{\mathcal{C}})^\dagger=U_{\mathcal{C}}A^\dagger$. By definition, $U_{\mathcal{C}}\zeta=\zeta$ if and only if $\zeta\in\mathcal{R}(U_{\mathcal{C}})$, and $U_{\mathcal{C}}\zeta\in\mathcal{R}(U_{\mathcal{C}})$ for all $\zeta\in\mathbb{R}^p$. Thus, we can write $\mathcal{R}(U_{\mathcal{C}})=\{\zeta\in\mathbb{R}^p : U_{\mathcal{C}}\zeta=\zeta\}=\{U_{\mathcal{C}}\zeta : \zeta\in\mathbb{R}^p\}$. With the above, we can now infer

\begin{align*}
\mathrm{LS}(A,b,\mathcal{C}) &= \arg\min_{\zeta\in\mathcal{R}(U_{\mathcal{C}})}\|A\zeta-b\|_2^2 \\
&= \arg\min_{\zeta\in\mathcal{R}(U_{\mathcal{C}})}\|AU_{\mathcal{C}}\zeta-b\|_2^2 \\
&= U_{\mathcal{C}}\cdot\arg\min_{\zeta\in\mathbb{R}^p}\|AU_{\mathcal{C}}\zeta-b\|_2^2 \\
&= U_{\mathcal{C}}\cdot\{(AU_{\mathcal{C}})^\dagger b+(I_p-(AU_{\mathcal{C}})^\dagger(AU_{\mathcal{C}}))\zeta : \zeta\in\mathbb{R}^p\} \\
&= U_{\mathcal{C}}\cdot\{U_{\mathcal{C}}A^\dagger b+(I_p-U_{\mathcal{C}}A^\dagger AU_{\mathcal{C}})\zeta : \zeta\in\mathbb{R}^p\} \\
&= U_{\mathcal{C}}\cdot\{U_{\mathcal{C}}A^\dagger b+(I_p-U_{\mathcal{C}})\zeta : \zeta\in\mathbb{R}^p\} \\
&= \{U_{\mathcal{C}}A^\dagger b\}.
\end{align*}

This shows that $\zeta_{\mathcal{C}}=U_{\mathcal{C}}\zeta_{LS}$ with $\zeta_{LS}=A^\dagger b$.
To conclude the proof, we notice that the equivalent formulation

$$\zeta_{\mathcal{C}} = (AU_{\mathcal{C}})^\dagger b = (U_{\mathcal{C}}A^tAU_{\mathcal{C}})^\dagger U_{\mathcal{C}}A^tb$$

is the unique solution of the equivalent least-squares problem $\mathrm{LS}(U_{\mathcal{C}}A^tAU_{\mathcal{C}},U_{\mathcal{C}}A^tb)$.

A.3 Reduction Algorithms

Definition A.3. Let $A\in\mathbb{R}^{p\times p}_{\succeq0}$ and $b\in\mathcal{R}(A)$, and let $\mathcal{A}(\cdot)$ be any choice function that selects:

(i) a non-decreasing sequence $\mathcal{C}_{A,0}\subseteq\cdots\subseteq\mathcal{C}_{A,p}\subseteq\mathbb{R}^p$ of linear subspaces having dimensions $r_{A,s}:=\dim(\mathcal{C}_{A,s})\le s$ for all $0\le s\le p$;

(ii) the orthogonal projections $U_{A,s}\in\mathbb{R}^{p\times p}$ of $\mathbb{R}^p$ onto $\mathcal{C}_{A,s}$, the projected matrices $A_{A,s}:=U_{A,s}AU_{A,s}\in\mathbb{R}^{p\times p}_{\succeq0}$ and the projected vectors $b_{A,s}:=U_{A,s}b\in\mathcal{R}(A_{A,s})$;

(iii) the minimum-$L_2$-norm solutions $\zeta_{A,s}:=A_{A,s}^\dagger b_{A,s}$ of the reduced least-squares problems $\mathrm{LS}(A_{A,s},b_{A,s})$;

(iv) when $s=p$, one recovers $\zeta_{A,p}=\zeta_{LS}$, the solution of the unreduced least-squares problem $\mathrm{LS}(A,b)$.

We call reduction algorithm the collection

$$\mathcal{A}(A,b) := \big\{\theta_{A,s} := (\mathcal{C}_{A,s},U_{A,s},r_{A,s},\zeta_{A,s}) : 0\le s\le p\big\}, \qquad (20)$$

determined by the above choices.

Lemma A.4. Let $A\in\mathbb{R}^{p\times p}_{\succeq0}$ and $b\in\mathcal{R}(A)$. In Definition A.3 one can replace (iii) with the following equivalent definitions (with $A^{\dagger/2}:=(A^{1/2})^\dagger$):

$$\zeta_{A,s} := A_{A,s}^\dagger b_{A,s} \in \mathrm{LS}(A_{A,s},b_{A,s}), \qquad \zeta_{A,s} := (A^{1/2}U_{A,s})^\dagger A^{\dagger/2}b \in \mathrm{LS}\big(A^{1/2},A^{\dagger/2}b,\mathcal{R}(U_{A,s})\big), \qquad \zeta_{A,s} := U_{A,s}\zeta_{LS} \in U_{A,s}\cdot\mathrm{LS}(A,b).$$

Proof of Lemma A.4. A reduction algorithm $\mathcal{A}(A,b)$ satisfies (iii) in Definition A.3 if and only if, for all $1\le s\le p$, one has $\zeta_{A,s}=A_{A,s}^\dagger b_{A,s}\in\mathrm{LS}(A_{A,s},b_{A,s})$. With the equivalent formulation of the generalized inverse,

$$\zeta_{A,s} = (U_{A,s}AU_{A,s})^\dagger U_{A,s}b = (U_{A,s}A^{1/2}A^{1/2}U_{A,s})^\dagger U_{A,s}A^{1/2}A^{\dagger/2}b = (A^{1/2}U_{A,s})^\dagger A^{\dagger/2}b$$

is the unique solution of the equivalent least-squares problem

$$\mathrm{LS}\big(A^{1/2}U_{A,s},A^{\dagger/2}b,\mathcal{R}(U_{A,s})\big) = \arg\min_{\zeta\in\mathcal{R}(U_{A,s})}\big\|A^{1/2}U_{A,s}\zeta-A^{\dagger/2}b\big\|_2^2 = \mathrm{LS}\big(A^{1/2},A^{\dagger/2}b,\mathcal{R}(U_{A,s})\big).$$

We now invoke Lemma A.2 to infer that $\zeta_{A,s}=U_{A,s}(A^{1/2})^\dagger A^{\dagger/2}b=U_{A,s}\zeta_{LS}$ is the unique solution of the equivalent least-squares problem $U_{A,s}\cdot\mathrm{LS}(A,b)$.

Remark A.5 (Degrees-of-Freedom). Notice that the parameter $\theta_{A,s}$ defined in Equation (20) is redundant, in the sense that it is uniquely determined by $A$, $b$ and $\mathcal{C}_{A,s}$. This means that two parameters $\theta_{A,s}$ and $\theta_{A,s'}$ are identical if and only if they have the same dimension $r_{A,s}=r_{A,s'}$. For any parameter $\theta_A=(\mathcal{C}_A,U_A,r_A,\zeta_A)$ computed from a reduction algorithm $\mathcal{A}(A,b)$, we define its degrees-of-freedom to be the dimension of its subspace, namely $\mathrm{DoF}(\theta_A):=r_A$, whereas the set $\mathrm{DoF}(\mathcal{A}(A,b)):=\{\mathrm{DoF}(\theta_A) : \theta_A\in\mathcal{A}(A,b)\}$ contains all the degrees-of-freedom without repetition. By construction, for each $r\in\mathrm{DoF}(\mathcal{A}(A,b))$ there exists a unique parameter $\theta^{(r)}_A\in\mathcal{A}(A,b)$ such that $\mathrm{DoF}(\theta^{(r)}_A)=r$. We say that a reduction algorithm $\mathcal{A}(A,b)$ preserves the degrees-of-freedom of the least-squares problem $\mathrm{LS}(A,b)$ if the parameter $\theta_{A,p}$ consists of a linear subspace $\mathcal{C}_{A,p}\subseteq\mathcal{R}(A)$. This implies that the non-decreasing sequence of linear subspaces in Definition A.3 (i) is contained as a whole in the range $\mathcal{R}(A)$ and that no more than $\mathrm{DoF}(\theta_{A,p})=r_{A,p}\le\mathrm{rk}(A)$ degrees-of-freedom are used.

Remark A.6 (Examples of Reduction Algorithms). In view of Remark A.5, in order for $\mathcal{A}(A,b)$ to be a reduction algorithm, it is sufficient to provide a choice function $\mathcal{A}(\cdot)$ that satisfies Condition (i) and Condition (iv) in Definition A.3.

Principal Components Regression by Hotelling [1933] can be translated into a choice function $\mathrm{PCR}(\cdot)$ and reduction algorithm $\mathrm{PCR}(A,b)$ with the following properties. For all $0\le s\le p$, the choice of the linear subspaces $\mathcal{C}_{\mathrm{PCR},s}$ depends only on the matrix $A$ and not on the vector $b$. In particular, with $1\le r_A=\mathrm{rk}(A)\le p$ and $\lambda_{A,1}\ge\cdots\ge\lambda_{A,r_A}>0$ the sorted positive eigenvalues of $A$, $\mathcal{C}_{\mathrm{PCR},s}:=\mathcal{V}_s(A)$ is the span of the eigenvectors $\{v_1(A),\dots,v_s(A)\}$ of $A$ corresponding to the largest $s$ eigenvalues. When $s=p$, we find that $\mathcal{C}_{\mathrm{PCR},p}=\mathcal{V}_p(A)=\mathcal{R}(A)$ recovers exactly the range of $A$, so that $\zeta_{\mathrm{PCR},p}=\zeta_{LS}$.
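As a concrete rendering of Definition A.3, the following sketch implements the PCR choice just described (a minimal NumPy illustration, not the authors' code): it builds the top-$s$ eigenspace projection $U_{A,s}$ and the reduced solution $\zeta_{A,s}=(U_{A,s}AU_{A,s})^\dagger U_{A,s}b$, recovering $\zeta_{LS}=A^\dagger b$ at $s=p$.

```python
import numpy as np

def pcr_reduction(A, b, s):
    """Reduced least-squares solution for the PCR choice of subspace.

    C_{PCR,s} is spanned by the eigenvectors of A with the s largest
    eigenvalues; zeta_{A,s} = (U A U)^+ (U b) as in Definition A.3 (iii).
    """
    w, V = np.linalg.eigh(A)                # eigenvalues in ascending order
    Vs = V[:, np.argsort(w)[::-1][:s]]      # top-s eigenvectors
    U = Vs @ Vs.T                           # orthogonal projection onto C_{PCR,s}
    return np.linalg.pinv(U @ A @ U) @ (U @ b)

rng = np.random.default_rng(0)
p = 5
M = rng.standard_normal((p, p))
A = M @ M.T                                 # symmetric PSD, full rank a.s.
b = A @ rng.standard_normal(p)              # ensures b in R(A)
zeta_ls = np.linalg.pinv(A) @ b
assert np.allclose(pcr_reduction(A, b, p), zeta_ls)  # s = p recovers LS
```

The same scaffold covers any choice function: only the construction of `U` changes between PCR, PLS and subset-selection policies.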
Partial Least Squares by Wold [1966] can be translated into a choice function $\mathrm{PLS}(\cdot)$ and reduction algorithm $\mathrm{PLS}(A,b)$ with the following properties. For all $0\le s\le p$, the linear subspaces $\mathcal{C}_{\mathrm{PLS},s}:=\mathcal{K}_s(A,b)$ are spanned by the first $s$ vectors of the Krylov basis $\{b,Ab,\dots,A^{s-1}b\}$ generated by $A$ and $b$. The subspace $\mathcal{C}_{\mathrm{PLS},s}$ is one-dimensional when $b$ is an eigenvector of $A$. When $s=p$, we find that $\mathcal{C}_{\mathrm{PLS},p}=\mathcal{K}_p(A,b)\subseteq\mathcal{R}(A)$, and we check in Lemma B.7 that $\zeta_{\mathrm{PLS},p}=\zeta_{LS}$.

Only a variation of Best Subset Selection by Beale et al. [1967] falls within our framework. Forward Subset Selection can be translated into a choice function $\mathrm{FSS}(\cdot)$ and reduction algorithm $\mathrm{FSS}(A,b)$ with the following properties. For all $0\le s\le p$, the linear subspaces $\mathcal{C}_{\mathrm{FSS},s}:=\mathcal{E}_{J_{\mathrm{FSS},s}(A,b)}$ are spanned by the vectors of the canonical basis $\{e_j : j\in J_{\mathrm{FSS},s}(A,b)\}$ corresponding to an active set $J_{\mathrm{FSS},s}(A,b)\subseteq J_{LS}(A,b)$, where $J_{LS}$ is the active set of $\zeta_{LS}$. When $s=p$, we find $\mathcal{C}_{\mathrm{FSS},p}=\mathcal{E}_{J_{LS}(A,b)}$ and $\zeta_{\mathrm{FSS},p}=\zeta_{LS}$.

Notice that no sparse method preserves, in general, the degrees-of-freedom of the unperturbed least-squares problem in the sense of Remark A.5. An example is any rank-one matrix $A\in\mathbb{R}^{p\times p}_{\succeq0}$ such that $\mathcal{R}(A)=\mathrm{span}\{e_1+\cdots+e_p\}$ and $b\in\mathcal{R}(A)$. This implies that $\zeta_{LS}\in\mathcal{R}(A)$. The degrees-of-freedom of the least-squares problem $\mathrm{LS}(A,b)$ are $\dim(\mathcal{R}(A))=\mathrm{rk}(A)=1$. However, any sparse policy $\mathrm{SPR}(\cdot)$ selects $\mathcal{C}_{\mathrm{SPR},s}=\mathrm{span}\{e_{j_1},\dots,e_{j_s}\}$ for all $0\le s\le p$. This means that $\mathcal{C}_{\mathrm{SPR},s}$ is never a subspace of $\mathcal{R}(A)$ and, for $s=p$, one finds $\mathcal{C}_{\mathrm{SPR},p}=\mathbb{R}^p$, so that $\zeta_{\mathrm{SPR},p}=\zeta_{LS}$. However, the algorithm uses $p=\dim(\mathcal{C}_{\mathrm{SPR},p})$ degrees-of-freedom instead of $1$.

A.4 Stability of Regularization Algorithms

Let $A\in\mathbb{R}^{p\times p}_{\succeq0}$ and $b\in\mathcal{R}(A)$ be given, and consider any reduction algorithm $\mathcal{A}(A,b)$ with parameters $\theta_{A,s}=(\mathcal{C}_{A,s},U_{A,s},r_{A,s},\zeta_{A,s})$ for all $0\le s\le p$. We are interested in the stability of the algorithm with respect to arbitrary perturbations $\widetilde{A}\in\mathbb{R}^{p\times p}_{\succeq0}$ and $\widetilde{b}\in\mathcal{R}(\widetilde{A})$ for which $\mathcal{A}(\widetilde{A},\widetilde{b})$ is also a reduction algorithm, with parameters $\widetilde{\theta}_{A,s}=(\widetilde{\mathcal{C}}_{A,s},\widetilde{U}_{A,s},\widetilde{r}_{A,s},\widetilde{\zeta}_{A,s})$ for all $0\le s\le p$. We are only interested in perturbations preserving the degrees-of-freedom, in the sense that $\mathrm{DoF}(\mathcal{A}(A,b))\subseteq\mathrm{DoF}(\mathcal{A}(\widetilde{A},\widetilde{b}))$. The latter inclusion means that for any parameter $\theta_A\in\mathcal{A}(A,b)$ there exists a compatible parameter $\widetilde{\theta}_A\in\mathcal{A}(\widetilde{A},\widetilde{b})$ having the same degrees-of-freedom $\mathrm{DoF}(\theta_A)=\mathrm{DoF}(\widetilde{\theta}_A)$. In particular, such a parameter $\widetilde{\theta}_A$ must be unique, due to the fact that $\widetilde{r}_{A,s}=\widetilde{r}_{A,s'} \iff \widetilde{\theta}_{A,s}=\widetilde{\theta}_{A,s'}$. Formally, the set of all such perturbations is

$$\Delta_{\mathcal{A}}(A,b) := \big\{(\widetilde{A},\widetilde{b})\in\mathbb{R}^{p\times p}_{\succeq0}\times\mathcal{R}(\widetilde{A}) : \mathrm{DoF}(\mathcal{A}(A,b))\subseteq\mathrm{DoF}(\mathcal{A}(\widetilde{A},\widetilde{b}))\big\} \qquad (21)$$

and the size of a perturbation $(\widetilde{A},\widetilde{b})\in\Delta_{\mathcal{A}}(A,b)$ is

$$\varepsilon(\widetilde{A},\widetilde{b}) := \frac{\|\widetilde{A}-A\|_{\mathrm{op}}}{\|A\|_{\mathrm{op}}} \vee \frac{\|\widetilde{b}-b\|_2}{\|b\|_2}. \qquad (22)$$

The perturbation set defined above is non-empty, because the trivial perturbation $(A,b)$ belongs to $\Delta_{\mathcal{A}}(A,b)$ and has size $\varepsilon(A,b)=0$. For each $(\widetilde{A},\widetilde{b})\in\Delta_{\mathcal{A}}(A,b)$ and each $r\in\mathrm{DoF}(\mathcal{A}(A,b))$, there exists a unique pair of compatible parameters

$$\theta^{(r)}_A := \big(\mathcal{C}^{(r)}_A,U^{(r)}_A,r,\zeta^{(r)}_A\big), \qquad \widetilde{\theta}^{(r)}_A := \big(\widetilde{\mathcal{C}}^{(r)}_A,\widetilde{U}^{(r)}_A,r,\widetilde{\zeta}^{(r)}_A\big), \qquad (23)$$

such that $\theta^{(r)}_A\in\mathcal{A}(A,b)$ and $\widetilde{\theta}^{(r)}_A\in\mathcal{A}(\widetilde{A},\widetilde{b})$. We quantify the effect of perturbations in terms of both the operator norm $\|\cdot\|_{\mathrm{op}}$ and the principal-angle $\phi_1(\cdot,\cdot)$.
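The perturbation size in Equation (22) is straightforward to compute. The helper below is an illustrative sketch (not from the paper) that measures $\varepsilon(\widetilde{A},\widetilde{b})$ for a given perturbed pair.

```python
import numpy as np

def perturbation_size(A, b, A_tilde, b_tilde):
    """Relative perturbation size from Equation (22):
    eps = ||A~ - A||_op / ||A||_op  v  ||b~ - b||_2 / ||b||_2."""
    eps_A = np.linalg.norm(A_tilde - A, ord=2) / np.linalg.norm(A, ord=2)
    eps_b = np.linalg.norm(b_tilde - b) / np.linalg.norm(b)
    return max(eps_A, eps_b)

A = np.diag([2.0, 1.0, 0.5])
b = np.array([1.0, 1.0, 1.0])
eps = perturbation_size(A, b, A + 0.02 * np.eye(3), b)
assert abs(eps - 0.01) < 1e-12  # ||0.02 I||_op / ||A||_op = 0.02 / 2
```

Note that `ord=2` on a matrix argument gives the operator (spectral) norm, while on a vector it gives the Euclidean norm, matching the two terms of Equation (22).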
We say that $\mathcal{A}(A,b)$ is stable if, for all $r\in\mathrm{DoF}(\mathcal{A}(A,b))$,

$$C_{\mathcal{A},r}(A,b) := 1 \vee \sup_{\substack{(\widetilde{A},\widetilde{b})\in\Delta_{\mathcal{A}}(A,b)\\ \varepsilon(\widetilde{A},\widetilde{b})>0}} \frac{\big\|\widetilde{U}^{(r)}_A-U^{(r)}_A\big\|_{\mathrm{op}}}{\varepsilon(\widetilde{A},\widetilde{b})} < +\infty, \qquad D_{\mathcal{A},r}(A,b) := 1 \vee \sup_{\substack{(\widetilde{A},\widetilde{b})\in\Delta_{\mathcal{A}}(A,b)\\ \varepsilon(\widetilde{A},\widetilde{b})>0}} \frac{\phi_1\big(\widetilde{\mathcal{C}}^{(r)}_A,\mathcal{C}^{(r)}_A\big)}{\varepsilon(\widetilde{A},\widetilde{b})} < +\infty. \qquad (24)$$

Notice that, using $t\le\arcsin(t)\le\frac{\pi}{2}t$ for all $0\le t\le1$ and the definition of the principal-angle $\phi_1(\cdot,\cdot)$ in Equation (19), it follows immediately that $C_{\mathcal{A},r}\le D_{\mathcal{A},r}\le\frac{\pi}{2}C_{\mathcal{A},r}$. Therefore, an algorithm is stable as long as either one of the quantities in the above display is finite. For a stable algorithm we denote

$$M_{\mathcal{A},r}(A,b) := 2\cdot\kappa_2\big(A^{(r)}_A\big)\cdot\{4C_{\mathcal{A},r}(A,b)+1\}\cdot\left\{\frac{\|A\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} \vee \frac{\|b\|_2}{\|b^{(r)}_A\|_2}\right\}, \qquad (25)$$

for all $r\in\mathrm{DoF}(\mathcal{A}(A,b))$. We recall here a classical result established by Wei [1989].

Theorem A.7 (Theorem 1.1 by Wei [1989]). Let $\zeta_{LS}:=\mathrm{LS}(A,b)$ be the minimum-$L_2$-norm solution of a least-squares problem with $A\in\mathbb{R}^{p\times p}$ some symmetric and positive semi-definite matrix and $b\in\mathcal{R}(A)$ some vector. Let $\widetilde{\zeta}_{LS}:=\mathrm{LS}(\widetilde{A},\widetilde{b})$ be the minimum-$L_2$-norm solution of a perturbed least-squares problem with $\widetilde{A}=A+\widetilde{\Delta}_A\in\mathbb{R}^{p\times p}$ some symmetric and positive semi-definite matrix and $\widetilde{b}=b+\widetilde{\Delta}_b\in\mathcal{R}(\widetilde{A})$ some vector. Assume that $\mathrm{rk}(\widetilde{A})=\mathrm{rk}(A)$ and

$$\frac{\|\widetilde{\Delta}_b\|_2}{\|b\|_2}\le\varepsilon, \qquad \frac{\|\widetilde{\Delta}_A\|_{\mathrm{op}}}{\|A\|_{\mathrm{op}}}\le\varepsilon, \qquad 0\le\varepsilon\le\frac{1}{2\cdot\kappa_2(A)}.$$

Then,

$$\frac{\|\widetilde{\zeta}_{LS}-\zeta_{LS}\|_2}{\|\zeta_{LS}\|_2} \le 5\cdot\kappa_2(A)\cdot\varepsilon.$$

The following theorem is our main contribution to the theory of reduced least-squares problems.

Theorem A.8. Let $\mathcal{A}(A,b)$ be a reduction algorithm in the sense of Definition A.3. Let $(\widetilde{A},\widetilde{b})\in\Delta_{\mathcal{A}}(A,b)$ be a perturbation of size $\varepsilon=\varepsilon(\widetilde{A},\widetilde{b})$. For all $r\in\mathrm{DoF}(\mathcal{A}(A,b))$, let $\theta^{(r)}_A\in\mathcal{A}(A,b)$ and $\widetilde{\theta}^{(r)}_A\in\mathcal{A}(\widetilde{A},\widetilde{b})$ be the compatible parameters in Equation (23). If the reduction algorithm is stable with constants $C_{\mathcal{A},r}$, $D_{\mathcal{A},r}$ from Equation (24), then

$$\big\|\widetilde{U}^{(r)}_A-U^{(r)}_A\big\|_{\mathrm{op}} \le C_{\mathcal{A},r}\cdot\varepsilon, \qquad \phi_1\big(\widetilde{\mathcal{C}}^{(r)}_A,\mathcal{C}^{(r)}_A\big) \le D_{\mathcal{A},r}\cdot\varepsilon.$$

Furthermore, if $M_{\mathcal{A},r}\cdot\varepsilon<1$ with the constant from Equation (25), then

$$\frac{\big\|\widetilde{\zeta}^{(r)}_A-\zeta^{(r)}_A\big\|_2}{\big\|\zeta^{(r)}_A\big\|_2} \le \frac{5}{2}\cdot M_{\mathcal{A},r}\cdot\varepsilon.$$

Proof of Theorem A.8. Equation (24) defines $C_{\mathcal{A},r}$ as the smallest constant $C$ such that $\|\widetilde{U}^{(r)}_A-U^{(r)}_A\|_{\mathrm{op}}\le C\varepsilon(\widetilde{A},\widetilde{b})$, and $D_{\mathcal{A},r}$ as the smallest constant $D$ such that $\phi_1(\widetilde{\mathcal{C}}^{(r)}_A,\mathcal{C}^{(r)}_A)\le D\varepsilon(\widetilde{A},\widetilde{b})$, for all perturbations $(\widetilde{A},\widetilde{b})\in\Delta_{\mathcal{A}}(A,b)$. It remains to prove the perturbation bound on the least-squares solutions. By definition of orthogonal projection, we have $U^{(r)}_A=U^{(r)t}_A=U^{(r)}_AU^{(r)t}_A$, and the same is true for $\widetilde{U}^{(r)}_A$. By Definition A.3,

$$\zeta^{(r)}_A := A^{(r)\dagger}_A b^{(r)}_A \in \mathrm{LS}\big(A^{(r)}_A,b^{(r)}_A\big), \qquad \widetilde{\zeta}^{(r)}_A := \widetilde{A}^{(r)\dagger}_A \widetilde{b}^{(r)}_A \in \mathrm{LS}\big(\widetilde{A}^{(r)}_A,\widetilde{b}^{(r)}_A\big).$$

We check that $(\widetilde{A}^{(r)}_A,\widetilde{b}^{(r)}_A)$ is a sufficiently small perturbation of $(A^{(r)}_A,b^{(r)}_A)$ and apply Theorem A.7 to the above display. Notice that $\mathrm{rk}(A^{(r)}_A)=r=\mathrm{rk}(\widetilde{A}^{(r)}_A)$, since the parameters $\theta^{(r)}_A\in\mathcal{A}(A,b)$ and $\widetilde{\theta}^{(r)}_A\in\mathcal{A}(\widetilde{A},\widetilde{b})$ have the same degrees-of-freedom. First, we bound

\begin{align*}
\frac{\|\widetilde{b}^{(r)}_A-b^{(r)}_A\|_2}{\|b^{(r)}_A\|_2} &= \frac{\|\widetilde{U}^{(r)}_A\widetilde{b}-U^{(r)}_Ab\|_2}{\|b^{(r)}_A\|_2} \\
&\le \frac{\|(\widetilde{U}^{(r)}_A-U^{(r)}_A)\widetilde{b}\|_2}{\|b^{(r)}_A\|_2} + \frac{\|U^{(r)}_A(\widetilde{b}-b)\|_2}{\|b^{(r)}_A\|_2} \\
&\le \|\widetilde{U}^{(r)}_A-U^{(r)}_A\|_{\mathrm{op}}\cdot\frac{\|b\|_2+\|b\|_2\cdot\varepsilon}{\|b^{(r)}_A\|_2} + \frac{\|b\|_2\cdot\varepsilon}{\|b^{(r)}_A\|_2} \\
&\le C_{\mathcal{A},r}\cdot\varepsilon\cdot\frac{\|b\|_2+\|b\|_2\cdot\varepsilon}{\|b^{(r)}_A\|_2} + \frac{\|b\|_2\cdot\varepsilon}{\|b^{(r)}_A\|_2} \\
&\le (2C_{\mathcal{A},r}+1)\cdot\frac{\|b\|_2}{\|b^{(r)}_A\|_2}\cdot\varepsilon.
\end{align*}

Second, we bound

\begin{align*}
\frac{\|\widetilde{A}^{(r)}_A-A^{(r)}_A\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} &= \frac{\|\widetilde{U}^{(r)}_A\widetilde{A}\widetilde{U}^{(r)}_A-U^{(r)}_AAU^{(r)}_A\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} \\
&\le \frac{\|(\widetilde{U}^{(r)}_A-U^{(r)}_A)\widetilde{A}\widetilde{U}^{(r)}_A\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} + \frac{\|U^{(r)}_A(\widetilde{A}\widetilde{U}^{(r)}_A-AU^{(r)}_A)\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} \\
&\le \frac{\|\widetilde{U}^{(r)}_A-U^{(r)}_A\|_{\mathrm{op}}\cdot\|\widetilde{A}\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} + \frac{\|\widetilde{A}(\widetilde{U}^{(r)}_A-U^{(r)}_A)\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} + \frac{\|(\widetilde{A}-A)U^{(r)}_A\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} \\
&\le 2\cdot\|\widetilde{U}^{(r)}_A-U^{(r)}_A\|_{\mathrm{op}}\cdot\frac{\|\widetilde{A}\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} + \frac{\|\widetilde{A}-A\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} \\
&\le 2\cdot C_{\mathcal{A},r}\cdot\varepsilon\cdot\frac{\|A\|_{\mathrm{op}}+\|A\|_{\mathrm{op}}\cdot\varepsilon}{\|A^{(r)}_A\|_{\mathrm{op}}} + \frac{\|A\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}}\cdot\varepsilon \\
&\le (4C_{\mathcal{A},r}+1)\cdot\frac{\|A\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}}\cdot\varepsilon.
\end{align*}

Putting the above displays together yields

$$\frac{\|\widetilde{A}^{(r)}_A-A^{(r)}_A\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} \vee \frac{\|\widetilde{b}^{(r)}_A-b^{(r)}_A\|_2}{\|b^{(r)}_A\|_2} \le \varepsilon_r := (4C_{\mathcal{A},r}+1)\left(\frac{\|A\|_{\mathrm{op}}}{\|A^{(r)}_A\|_{\mathrm{op}}} \vee \frac{\|b\|_2}{\|b^{(r)}_A\|_2}\right)\varepsilon.$$

By assumption, $2\cdot\kappa_2(A^{(r)}_A)\cdot\varepsilon_r = M_{\mathcal{A},r}\cdot\varepsilon < 1$, thus the assumptions of Theorem A.7 are satisfied and one can bound

$$\frac{\|\widetilde{\zeta}^{(r)}_A-\zeta^{(r)}_A\|_2}{\|\zeta^{(r)}_A\|_2} \le 5\cdot\kappa_2(A^{(r)}_A)\cdot\varepsilon_r,$$

which gives the claim.
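Wei's bound in Theorem A.7 is easy to probe numerically. The sketch below is illustrative only, using a hand-picked well-conditioned diagonal example: it perturbs a full-rank symmetric PSD system and checks that the relative change of the minimum-norm solution stays below $5\,\kappa_2(A)\,\varepsilon$.

```python
import numpy as np

def rel_ls_error(A, b, A_tilde, b_tilde):
    """Relative change of the minimum-L2-norm least-squares solution."""
    zeta = np.linalg.pinv(A) @ b
    zeta_tilde = np.linalg.pinv(A_tilde) @ b_tilde
    return np.linalg.norm(zeta_tilde - zeta) / np.linalg.norm(zeta)

A = np.diag([4.0, 2.0, 1.0])           # symmetric PSD, kappa_2(A) = 4
b = np.array([1.0, 1.0, 1.0])          # b in R(A)
kappa, eps = 4.0, 0.01                 # eps well below 1 / (2 kappa)
A_tilde = A + eps * 4.0 * np.eye(3)    # ||A~ - A||_op = eps * ||A||_op, rank preserved
b_tilde = (1.0 + eps) * b              # ||b~ - b||_2 = eps * ||b||_2
assert rel_ls_error(A, b, A_tilde, b_tilde) <= 5 * kappa * eps  # Wei's bound
```

A single numerical instance is of course no substitute for the proof; it only illustrates that the constant $5\kappa_2(A)$ leaves ample slack for small, structure-preserving perturbations.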
Remark A.9 (Examples of Stable Algorithms). It is beyond the scope of this paper to provide a comprehensive study of the stability of classical regularization algorithms, but we provide below some references dealing with the examples mentioned in Remark A.6.

Following Davis and Kahan [1970] and Godunov et al. [1993], one finds that $\mathrm{PCR}(A,b)$ inspired by Hotelling [1933] is a stable algorithm. In fact, consider an arbitrary matrix $A\in\mathbb{R}^{p\times p}_{\succeq0}$ with rank $1\le r_A=\mathrm{rk}(A)\le p$ and degree $1\le d_A=\deg(A)\le r_A$, so that the unique positive eigenvalues of $A$ are $\lambda_{A,1}>\cdots>\lambda_{A,d_A}>0$. We denote by $U_{A,i}\in\mathbb{R}^{p\times p}$ the orthogonal projection of $\mathbb{R}^p$ onto the $\lambda_{A,i}$-eigenspace of $A$. The dimension $m_{A,i}=\mathrm{rk}(U_{A,i})$ of each eigenspace is the multiplicity of the eigenvalue $\lambda_{A,i}$. We also denote by $\delta_{A,i}=\min\{\lambda_{A,i-1}-\lambda_{A,i},\lambda_{A,i}-\lambda_{A,i+1}\}>0$ the minimum gap between $\lambda_{A,i}$ and the other eigenvalues. Now consider any perturbation $\widetilde{A}\in\mathbb{R}^{p\times p}_{\succeq0}$ such that $\|\widetilde{A}-A\|_{\mathrm{op}}\le\varepsilon\|A\|_{\mathrm{op}}$. Theorem 5.3 by Godunov et al. [1993] shows that, for a corresponding orthogonal projection $\widetilde{U}_{A,i}\in\mathbb{R}^{p\times p}$ with the same dimension $m_{A,i}=\mathrm{rk}(\widetilde{U}_{A,i})$, it holds that $\|\widetilde{U}_{A,i}-U_{A,i}\|_{\mathrm{op}}\lesssim\varepsilon/\delta_{A,i}$. That is to say, the stability constant $C_{\mathrm{PCR},m_{A,i}}$ in Equation (24) is proportional to the inverse eigengap $\delta_{A,i}^{-1}<+\infty$.

Following Carpraux et al. [1994] and Kuznetsov [1997], one finds that $\mathrm{PLS}(A,b)$ inspired by Wold [1966] is a stable algorithm. In fact, consider an arbitrary matrix $A\in\mathbb{R}^{p\times p}_{\succeq0}$ and vector $b\in\mathcal{R}(A)$, together with perturbations $\widetilde{A}\in\mathbb{R}^{p\times p}_{\succeq0}$ and $\widetilde{b}\in\mathcal{R}(\widetilde{A})$ such that $\|\widetilde{A}-A\|_{\mathrm{op}}\le\varepsilon\|A\|_{\mathrm{op}}$ and $\|\widetilde{b}-b\|_2\le\varepsilon\|b\|_2$. For any $1\le s\le r_A=\mathrm{rk}(A)$, consider $U_s\in\mathbb{R}^{p\times p}$ and $\widetilde{U}_s\in\mathbb{R}^{p\times p}$ to be respectively the orthogonal projections of $\mathbb{R}^p$ onto the Krylov spaces $\mathcal{K}_s(A,b)$ and $\mathcal{K}_s(\widetilde{A},\widetilde{b})$. Theorem 3.3 by Kuznetsov [1997] (see our Theorem B.11 and Appendix B.3) shows that $\|\widetilde{U}_s-U_s\|_{\mathrm{op}}\lesssim\varepsilon\,\kappa_s(A,b)$, and implies that the stability constant $C_{\mathrm{PLS},s}(A,b)$ in Equation (24) is proportional to the condition number $\kappa_s(A,b)<+\infty$ of the Krylov space $\mathcal{K}_s(A,b)$.

Following Cerone et al. [2019] and Fosson et al. [2020], one finds that penalized variations of $\mathrm{FSS}(A,b)$ inspired by Beale et al. [1967] are stable. In fact, consider an arbitrary matrix $A\in\mathbb{R}^{p\times p}_{\succeq0}$ and vector $b\in\mathcal{R}(A)$, together with perturbations $\widetilde{A}\in\mathbb{R}^{p\times p}_{\succeq0}$ and $\widetilde{b}\in\mathcal{R}(\widetilde{A})$ such that $\|\widetilde{A}-A\|_{\mathrm{op}}\le\varepsilon\|A\|_{\mathrm{op}}$ and $\|\widetilde{b}-b\|_2\le\varepsilon\|b\|_2$. For any $1\le s\le p$, let $J_s=J_s(A,b)$ and $\widetilde{J}_s=J_s(\widetilde{A},\widetilde{b})$ be respectively the best $s$-sparse active sets, leading to orthogonal projections $U_{J_s}=I_{J_s}\in\mathbb{R}^{p\times p}$ and $\widetilde{U}_s=I_{\widetilde{J}_s}\in\mathbb{R}^{p\times p}$. Theorem 2 by Fosson et al. [2020] guarantees exact recovery $\widetilde{J}_s=J_s$, so that $\|\widetilde{U}_s-U_s\|_{\mathrm{op}}=0$ and the stability constant $C_{\mathrm{FSS},s}(A,b)$ in Equation (24) is $1<+\infty$.

B Auxiliary Results

Here we gather all the relevant auxiliary results and provide proofs when necessary.

B.1 Random Vectors

Lemma B.1. Let $x\in\mathbb{R}^p$ be a possibly degenerate random vector and $y\in\mathbb{R}$ a random variable, both centered and with finite second moments. Then, $x\in\mathcal{R}(\Sigma_x)$ almost surely and $\sigma_{x,y}\in\mathcal{R}(\Sigma_x)$.

Proof of Lemma B.1. For the first statement, let $U_x\in\mathbb{R}^{p\times p}$ be the orthogonal projection onto $\mathcal{R}(\Sigma_x)$, so that $U_x\Sigma_x=\Sigma_x=\Sigma_xU_x$ and $\mathcal{R}(\Sigma_x)=\mathcal{R}(U_x)$. Then, with $U_x^\perp=I_p-U_x$ and the random vector $\widetilde{x}=U_x^\perp x\in\mathcal{R}(\Sigma_x)^\perp$, we prove that $\mathbb{P}(\widetilde{x}=0_p)=1$. For this, compute the covariance matrix $\Sigma_{\widetilde{x}}=\mathbb{E}(\widetilde{x}\widetilde{x}^t)=\mathbb{E}(U_x^\perp xx^tU_x^\perp)=U_x^\perp\Sigma_xU_x^\perp=0_{p\times p}$; thus the vector $\widetilde{x}$ is almost surely equal to its expectation $\mathbb{E}(\widetilde{x})=\mathbb{E}(U_x^\perp x)=0_p$. The second statement is a consequence of the first, since $x=U_xx$ almost surely implies $\sigma_{x,y}=\mathbb{E}(xy)=\mathbb{E}(U_xxy)=U_x\sigma_{x,y}\in\mathcal{R}(\Sigma_x)$.

Lemma B.2. Let $(x,y)\in\mathbb{R}^p\times\mathbb{R}$ be a centered random pair for which the squared loss $\ell_{x,y}(\beta):=\mathbb{E}(y-x^t\beta)^2$ is well-defined for all $\beta\in\mathbb{R}^p$.
The set of least-squares solutions $\mathrm{LS}(x,y,\mathbb{R}^p):=\arg\min_{\beta\in\mathbb{R}^p}\ell_{x,y}(\beta)$ is $\{\beta\in\mathbb{R}^p : \Sigma_x\beta=\sigma_{x,y}\}$ and the minimum-$L_2$-norm solution is $\beta_{LS}:=\Sigma_x^\dagger\sigma_{x,y}$.

Proof of Lemma B.2. The squared-loss function is $\beta\mapsto\ell_{x,y}(\beta):=\mathbb{E}(y-x^t\beta)^2\in\mathbb{R}$ over all $\beta\in\mathbb{R}^p$. Its gradient is

$$\beta\mapsto\nabla_\beta\ell_{x,y}(\beta) := 2\mathbb{E}(xx^t\beta)-2\mathbb{E}(xy) = 2\cdot\{\Sigma_x\beta-\sigma_{x,y}\}\in\mathbb{R}^p,$$

and its Hessian

$$\beta\mapsto\nabla_\beta\nabla_\beta^t\ell_{x,y}(\beta) := 2\mathbb{E}(xx^t) = 2\cdot\Sigma_x\in\mathbb{R}^{p\times p}_{\succeq0}$$

is a positive semi-definite matrix, so the squared-loss function $\beta\mapsto\ell_{x,y}(\beta)$ is convex everywhere, although possibly not strictly convex. As a consequence, the set of least-squares solutions $\mathrm{LS}(x,y,\mathbb{R}^p)=\arg\min_{\beta\in\mathbb{R}^p}\ell_{x,y}(\beta)$ coincides with the set $\{\beta\in\mathbb{R}^p : \nabla_\beta\ell_{x,y}(\beta)=0_p\}$ of critical points. By the above displays, any critical point $\beta\in\mathbb{R}^p$ satisfies the normal equation $\Sigma_x\beta=\sigma_{x,y}$, which admits at least one solution (the set of critical points is non-empty), since Lemma B.1 shows that the covariance vector $\sigma_{x,y}$ always belongs to the range of the covariance matrix $\Sigma_x$. In particular, the minimum-$L_2$-norm solution is $\beta_{LS}:=\Sigma_x^\dagger\sigma_{x,y}$.

B.2 Empirical Processes

Lemma B.3. For some integer $p\ge1$, let $\xi\in\mathbb{R}^p$ be a possibly degenerate random vector and $(\xi_i)_{i=1,\dots,n}$ be i.i.d. copies of $\xi$ with finite

$$r_\xi := \mathrm{rk}(\Sigma_\xi), \qquad \rho_\xi := \frac{\mathbb{E}(\|\xi\|_2^2)}{\|\Sigma_\xi\|_{\mathrm{op}}}, \qquad \rho_{\xi,n} := \frac{\mathbb{E}(\max_{1\le i\le n}\|\xi_i\|_2^2)}{\|\Sigma_\xi\|_{\mathrm{op}}}.$$

Then, it follows that $1\le\rho_\xi\le r_\xi\le p$ and:

(i) if $\|\xi\|_2^2\le R^2$ almost surely for some $R>0$, then $\rho_{\xi,n}=\rho_\xi$;

(ii) if $\|\xi\|_2^2$ admits MGF $M_\xi(\cdot)$ and $\log(M_\xi(t))\le t\cdot L_\xi\mathbb{E}(\|\xi\|_2^2)$ for all $0<t<t_\xi$, then $\rho_{\xi,n}\le L_\xi\rho_\xi+t_\xi^{-1}\|\Sigma_\xi\|_{\mathrm{op}}^{-1}\log(n)$;

(iii) if $\mathbb{E}(\|\xi\|_2^{2k})^{\frac{1}{2k}}\le L_\xi\,\mathbb{E}(\|\xi\|_2^2)^{\frac12}$ for some $k\ge2$, then $\rho_{\xi,n}\le L_\xi^2\rho_\xi\cdot n^{\frac1k}$.

Proof of Lemma B.3. The first inequality follows from

$$1 \le \rho_\xi = \frac{\mathrm{Tr}(\Sigma_\xi)}{\lambda_{\max}(\Sigma_\xi)} \le \frac{\lambda_{\max}(\Sigma_\xi)\,\mathrm{rk}(\Sigma_\xi)}{\lambda_{\max}(\Sigma_\xi)} = r_\xi \le p.$$

We prove (i) by noting that $\mathbb{E}(\|\xi\|_2^2)=\|\Sigma_\xi\|_{\mathrm{op}}\rho_\xi$ implies $R^2\le\|\Sigma_\xi\|_{\mathrm{op}}\rho_\xi$, so that

$$\rho_\xi \le \rho_{\xi,n} = \frac{\mathbb{E}(\max_{1\le i\le n}\|\xi_i\|_2^2)}{\|\Sigma_\xi\|_{\mathrm{op}}} \le \frac{R^2}{\|\Sigma_\xi\|_{\mathrm{op}}} \le \rho_\xi.$$

We prove (ii) by direct computation via Jensen's inequality: for any $0<s<t_\xi$,

$$\rho_{\xi,n} = \frac{\mathbb{E}(\max_{1\le i\le n}\|\xi_i\|_2^2)}{\|\Sigma_\xi\|_{\mathrm{op}}} \le \frac{\log\mathbb{E}\big(\exp(s\cdot\max_{1\le i\le n}\|\xi_i\|_2^2)\big)}{s\,\|\Sigma_\xi\|_{\mathrm{op}}} \le \frac{\log\big(\sum_{i=1}^n\mathbb{E}\exp(s\|\xi_i\|_2^2)\big)}{s\,\|\Sigma_\xi\|_{\mathrm{op}}} = \frac{\log(n)+\log M_\xi(s)}{s\,\|\Sigma_\xi\|_{\mathrm{op}}} \le \frac{\log(n)}{s\,\|\Sigma_\xi\|_{\mathrm{op}}} + \frac{L_\xi\,\mathbb{E}(\|\xi\|_2^2)}{\|\Sigma_\xi\|_{\mathrm{op}}},$$

and letting $s\uparrow t_\xi$ yields the claim. We prove (iii) by direct computation via Jensen's inequality:

$$\rho_{\xi,n} = \frac{\mathbb{E}(\max_{1\le i\le n}\|\xi_i\|_2^2)}{\|\Sigma_\xi\|_{\mathrm{op}}} \le \frac{\mathbb{E}\big(\max_{1\le i\le n}\|\xi_i\|_2^{2k}\big)^{\frac1k}}{\|\Sigma_\xi\|_{\mathrm{op}}} \le \frac{\mathbb{E}\big(\sum_{i=1}^n\|\xi_i\|_2^{2k}\big)^{\frac1k}}{\|\Sigma_\xi\|_{\mathrm{op}}} = n^{\frac1k}\cdot\frac{\mathbb{E}(\|\xi\|_2^{2k})^{\frac1k}}{\|\Sigma_\xi\|_{\mathrm{op}}} \le n^{\frac1k}\cdot\frac{L_\xi^2\,\mathbb{E}(\|\xi\|_2^2)}{\|\Sigma_\xi\|_{\mathrm{op}}}.$$

Lemma B.4 (Theorem 5.48 by Vershynin [2012]). For some integer $p\ge1$, let $\xi\in\mathbb{R}^p$ be a possibly degenerate random vector and $(\xi_i)_{i=1,\dots,n}$ be i.i.d. copies of $\xi$ with finite

$$r_\xi := \mathrm{rk}(\Sigma_\xi), \qquad \rho_{\xi,n} := \frac{\mathbb{E}(\max_{1\le i\le n}\|\xi_i\|_2^2)}{\|\Sigma_\xi\|_{\mathrm{op}}}.$$

Then, for some absolute constant $C>0$ and $n>C^2\rho_{\xi,n}\log(r_\xi\wedge n)$,

$$\mathbb{E}\big(\|\widehat{\Sigma}_\xi-\Sigma_\xi\|_{\mathrm{op}}\big) \le \|\Sigma_\xi\|_{\mathrm{op}}\,\delta_{\xi,n}, \qquad \delta_{\xi,n} := C\sqrt{\frac{\rho_{\xi,n}\log(r_\xi\wedge n)}{n}}.$$

Lemma B.5. For some integer $p\ge1$, let $\xi\in\mathbb{R}^p$ and $\zeta\in\mathbb{R}^p$ be possibly degenerate random vectors and $(\xi_i)_{i=1,\dots,n}$, $(\zeta_i)_{i=1,\dots,n}$ i.i.d. copies of $\xi$, $\zeta$ with finite

$$r_\xi := \mathrm{rk}(\Sigma_\xi), \quad \rho_{\xi,n} := \frac{\mathbb{E}(\max_{1\le i\le n}\|\xi_i\|_2^2)}{\|\Sigma_\xi\|_{\mathrm{op}}}, \quad r_\zeta := \mathrm{rk}(\Sigma_\zeta), \quad \rho_{\zeta,n} := \frac{\mathbb{E}(\max_{1\le i\le n}\|\zeta_i\|_2^2)}{\|\Sigma_\zeta\|_{\mathrm{op}}}.$$

Then, for some absolute constant $C\ge1$ and $n>C^2\max\{\rho_{\xi,n}\log(r_\xi\wedge n),\rho_{\zeta,n}\log(r_\zeta\wedge n)\}$,

$$\mathbb{E}\big(\|\widehat{\Sigma}_{\xi,\zeta}+\widehat{\Sigma}_{\zeta,\xi}-\Sigma_{\xi,\zeta}-\Sigma_{\zeta,\xi}\|_{\mathrm{op}}\big) \le \|\Sigma_\xi\|_{\mathrm{op}}^{\frac12}\|\Sigma_\zeta\|_{\mathrm{op}}^{\frac12}\,\delta_{\xi,\zeta,n}, \qquad \delta_{\xi,\zeta,n} := 16C\sqrt{\frac{(\rho_{\xi,n}\vee\rho_{\zeta,n})\log(r_\xi\vee r_\zeta)}{n}}.$$

Proof of Lemma B.5. We start with the triangle inequality

$$\mathbb{E}\big(\|\widehat{\Sigma}_{\xi,\zeta}+\widehat{\Sigma}_{\zeta,\xi}-\Sigma_{\xi,\zeta}-\Sigma_{\zeta,\xi}\|_{\mathrm{op}}\big) \le \mathbb{E}\big(\|\widehat{\Sigma}_{\xi,\zeta}-\Sigma_{\xi,\zeta}\|_{\mathrm{op}}\big) + \mathbb{E}\big(\|\widehat{\Sigma}_{\zeta,\xi}-\Sigma_{\zeta,\xi}\|_{\mathrm{op}}\big)$$

and work on the first term only, since the computations for the second are identical. The symmetrization argument in Lemma 5.46 by Vershynin [2012] gives

$$\mathbb{E}\big(\|\widehat{\Sigma}_{\xi,\zeta}-\Sigma_{\xi,\zeta}\|_{\mathrm{op}}\big) = \mathbb{E}\left(\Big\|\frac1n\sum_{i=1}^n\xi_i\zeta_i^t-\mathbb{E}[\xi\zeta^t]\Big\|_{\mathrm{op}}\right) \le \frac2n\,\mathbb{E}\left(\Big\|\sum_{i=1}^n\varepsilon_i\cdot\xi_i\zeta_i^t\Big\|_{\mathrm{op}}\right),$$

where the $\varepsilon_i$ are i.i.d. Rademacher variables.
With Theorem 3.4 by Moeller and Ullrich [2021] (see also Theorem 5 by Buchholz [2005]) and the equivalence between the Schatten norm $\|\cdot\|_{S_{2\ell}}$ and the operator norm $\|\cdot\|_{\mathrm{op}}$ discussed in Remark 5.27 by Vershynin [2012], we further bound

\begin{align*}
\mathbb{E}\left(\Big\|\sum_{i=1}^n\varepsilon_i\,\xi_i\zeta_i^t\Big\|_{\mathrm{op}}\right) &= \mathbb{E}\left[\mathbb{E}\left(\Big\|\sum_{i=1}^n\varepsilon_i\,\xi_i\zeta_i^t\Big\|_{\mathrm{op}}\,\Big|\,\{\xi_i\zeta_i^t\}_{i=1}^n\right)\right] \\
&\le \mathbb{E}\left[\mathbb{E}\left(\Big\|\sum_{i=1}^n\varepsilon_i\,\xi_i\zeta_i^t\Big\|_{S_{2\ell}}\,\Big|\,\{\xi_i\zeta_i^t\}_{i=1}^n\right)\right] \\
&\le \left(\frac{(2\ell)!}{2^\ell\,\ell!}\right)^{\frac{1}{2\ell}}\cdot\mathbb{E}\left[\max\left\{\Big\|\Big(\sum_{i=1}^n\xi_i\zeta_i^t\zeta_i\xi_i^t\Big)^{\frac12}\Big\|_{S_{2\ell}},\ \Big\|\Big(\sum_{i=1}^n\zeta_i\xi_i^t\xi_i\zeta_i^t\Big)^{\frac12}\Big\|_{S_{2\ell}}\right\}\right] \\
&\le C\sqrt{\log(r_\xi\vee r_\zeta)}\cdot\mathbb{E}\left[\max\left\{\Big\|\Big(\sum_{i=1}^n\|\zeta_i\|_2^2\,\xi_i\xi_i^t\Big)^{\frac12}\Big\|_{\mathrm{op}},\ \Big\|\Big(\sum_{i=1}^n\|\xi_i\|_2^2\,\zeta_i\zeta_i^t\Big)^{\frac12}\Big\|_{\mathrm{op}}\right\}\right] \\
&\le C\sqrt{\log(r_\xi\vee r_\zeta)}\cdot\left[\mathbb{E}\left(\Big\|\sum_{i=1}^n\|\zeta_i\|_2^2\,\xi_i\xi_i^t\Big\|_{\mathrm{op}}^{\frac12}\right) + \mathbb{E}\left(\Big\|\sum_{i=1}^n\|\xi_i\|_2^2\,\zeta_i\zeta_i^t\Big\|_{\mathrm{op}}^{\frac12}\right)\right] \\
&\le C\sqrt{\log(r_\xi\vee r_\zeta)}\cdot\left[\mathbb{E}\Big(\max_{1\le i\le n}\|\zeta_i\|_2^2\Big)^{\frac12}\mathbb{E}\left(\Big\|\sum_{i=1}^n\xi_i\xi_i^t\Big\|_{\mathrm{op}}\right)^{\frac12} + \mathbb{E}\Big(\max_{1\le i\le n}\|\xi_i\|_2^2\Big)^{\frac12}\mathbb{E}\left(\Big\|\sum_{i=1}^n\zeta_i\zeta_i^t\Big\|_{\mathrm{op}}\right)^{\frac12}\right] \\
&\le C\sqrt{\log(r_\xi\vee r_\zeta)}\cdot\left[\rho_{\zeta,n}^{1/2}\|\Sigma_\zeta\|_{\mathrm{op}}^{1/2}\,\mathbb{E}\left(\Big\|\sum_{i=1}^n\xi_i\xi_i^t\Big\|_{\mathrm{op}}\right)^{\frac12} + \rho_{\xi,n}^{1/2}\|\Sigma_\xi\|_{\mathrm{op}}^{1/2}\,\mathbb{E}\left(\Big\|\sum_{i=1}^n\zeta_i\zeta_i^t\Big\|_{\mathrm{op}}\right)^{\frac12}\right].
\end{align*}

Putting everything together, we have

\begin{align*}
\mathbb{E}\big(\|\widehat{\Sigma}_{\xi,\zeta}-\Sigma_{\xi,\zeta}\|_{\mathrm{op}}\big) &\le 2C\cdot\mathbb{E}\big(\|\widehat{\Sigma}_\xi\|_{\mathrm{op}}\big)^{\frac12}\|\Sigma_\zeta\|_{\mathrm{op}}^{\frac12}\sqrt{\frac{\rho_{\zeta,n}\log(r_\xi\vee r_\zeta)}{n}} + 2C\cdot\mathbb{E}\big(\|\widehat{\Sigma}_\zeta\|_{\mathrm{op}}\big)^{\frac12}\|\Sigma_\xi\|_{\mathrm{op}}^{\frac12}\sqrt{\frac{\rho_{\xi,n}\log(r_\xi\vee r_\zeta)}{n}} \\
&\le 2C\cdot\left[\mathbb{E}\big(\|\widehat{\Sigma}_\xi\|_{\mathrm{op}}\big)^{\frac12}\|\Sigma_\zeta\|_{\mathrm{op}}^{\frac12} + \mathbb{E}\big(\|\widehat{\Sigma}_\zeta\|_{\mathrm{op}}\big)^{\frac12}\|\Sigma_\xi\|_{\mathrm{op}}^{\frac12}\right]\cdot\sqrt{\frac{(\rho_{\xi,n}\vee\rho_{\zeta,n})\log(r_\xi\vee r_\zeta)}{n}}.
\end{align*}

Now using Lemma B.4 we get both $\mathbb{E}(\|\widehat{\Sigma}_\xi\|_{\mathrm{op}})\le\|\Sigma_\xi\|_{\mathrm{op}}(1+\delta_{\xi,n})\le2\|\Sigma_\xi\|_{\mathrm{op}}$ and $\mathbb{E}(\|\widehat{\Sigma}_\zeta\|_{\mathrm{op}})\le\|\Sigma_\zeta\|_{\mathrm{op}}(1+\delta_{\zeta,n})\le2\|\Sigma_\zeta\|_{\mathrm{op}}$. The latter display can thus be bounded as

$$\mathbb{E}\big(\|\widehat{\Sigma}_{\xi,\zeta}-\Sigma_{\xi,\zeta}\|_{\mathrm{op}}\big) \le 8C\cdot\|\Sigma_\xi\|_{\mathrm{op}}^{\frac12}\|\Sigma_\zeta\|_{\mathrm{op}}^{\frac12}\cdot\sqrt{\frac{(\rho_{\xi,n}\vee\rho_{\zeta,n})\log(r_\xi\vee r_\zeta)}{n}},$$

which gives the claim.

Lemma B.6. For some integer $p\ge1$, let $\xi\in\mathbb{R}^p$ be a possibly degenerate random vector, $\zeta\in\mathbb{R}$ a random variable and $(\xi_i)_{i=1,\dots,n}$, $(\zeta_i)_{i=1,\dots,n}$ i.i.d. copies of $\xi$, $\zeta$ with finite

$$r_\xi := \mathrm{rk}(\Sigma_\xi), \qquad \rho_\xi := \frac{\mathbb{E}(\|\xi\|_2^2)}{\|\Sigma_\xi\|_{\mathrm{op}}}, \qquad L_\xi := \frac{\mathbb{E}(\|\xi\|_2^4)^{\frac14}}{\mathbb{E}(\|\xi\|_2^2)^{\frac12}}, \qquad L_\zeta := \frac{\mathbb{E}(\zeta^4)^{\frac14}}{\sigma_\zeta}.$$

Then,

$$\mathbb{E}\big(\|\widehat{\Sigma}_{\xi,\zeta}-\Sigma_{\xi,\zeta}\|_2\big) \le \|\Sigma_\xi\|_{\mathrm{op}}^{\frac12}\sigma_\zeta\,\delta_{\xi,\zeta}, \qquad \delta_{\xi,\zeta} := L_\xi L_\zeta\sqrt{\frac{\rho_\xi}{n}}.$$

Proof of Lemma B.6. We start with Jensen's inequality $\mathbb{E}(\|\widehat{\Sigma}_{\xi,\zeta}-\Sigma_{\xi,\zeta}\|_2)\le\mathbb{E}(\|\widehat{\Sigma}_{\xi,\zeta}-\Sigma_{\xi,\zeta}\|_2^2)^{\frac12}$ and compute

\begin{align*}
\mathbb{E}\big(\|\widehat{\Sigma}_{\xi,\zeta}-\Sigma_{\xi,\zeta}\|_2^2\big) &= \mathbb{E}\left(\Big\|\frac1n\sum_{i=1}^n\xi_i\zeta_i-\mathbb{E}(\xi\zeta)\Big\|_2^2\right) = \mathbb{E}\left(\sum_{j=1}^p\Big\{\frac1n\sum_{i=1}^n\xi_{i,j}\zeta_i-\mathbb{E}(\xi_j\zeta)\Big\}^2\right) \\
&= \frac{1}{n^2}\sum_{j=1}^p\sum_{i=1}^n\sum_{i'=1}^n\mathbb{E}\big(\{\xi_{i,j}\zeta_i-\mathbb{E}(\xi_j\zeta)\}\cdot\{\xi_{i',j}\zeta_{i'}-\mathbb{E}(\xi_j\zeta)\}\big) \\
&= \frac{1}{n^2}\sum_{j=1}^p\sum_{i=1}^n\mathbb{E}\big(\{\xi_{i,j}\zeta_i-\mathbb{E}(\xi_j\zeta)\}^2\big) = \frac1n\sum_{j=1}^p\mathbb{E}\big(\{\xi_j\zeta-\mathbb{E}(\xi_j\zeta)\}^2\big).
\end{align*}

We bound the variances with the corresponding second moments and get

$$\mathbb{E}\big(\|\widehat{\Sigma}_{\xi,\zeta}-\Sigma_{\xi,\zeta}\|_2^2\big) \le \frac1n\sum_{j=1}^p\mathbb{E}(\xi_j^2\zeta^2) = \frac1n\,\mathbb{E}\big(\zeta^2\|\xi\|_2^2\big) \le \frac1n\,\mathbb{E}(\zeta^4)^{\frac12}\,\mathbb{E}(\|\xi\|_2^4)^{\frac12} = \frac{L_\zeta^2\sigma_\zeta^2\,L_\xi^2\rho_\xi\,\|\Sigma_\xi\|_{\mathrm{op}}}{n}.$$

B.3 Numerical Perturbation Theory

In this section we provide classical and novel results that are relevant to the theory of deterministic perturbations of least-squares problems.

Lemma B.7. Let $A\in\mathbb{R}^{p\times p}_{\succeq0}$ and $b\in\mathcal{R}(A)$ be arbitrary. With $\zeta_{LS}:=A^\dagger b\in\mathrm{LS}(A,b)$, we have $\zeta_{LS}\in\mathcal{K}_{d_A}(A,b)$ with $d_A:=\deg(A)$.

Proof of Lemma B.7. The Cayley–Hamilton theorem (see Theorem 8.1 in Zhang [1997]) guarantees that $p_A(A)=0_{p\times p}$ for $p_A$ the minimum polynomial of $A$, having degree $d_A:=\deg(A)\le p$. Since $A^t=A$, Theorem 3 in Decell [1965] guarantees that the generalized inverse can be represented as $A^\dagger=(A^t)^\delta\sum_{k=1}^{d_A}c_{k-1}A^{k-1}=\big(\sum_{k=1}^{d_A}c_{k-1}A^{k-1}\big)(A^t)^\delta$, with $\delta=0$ if $A^\dagger=A^{-1}$ and $\delta=1$ otherwise. Noticing that $\mathcal{R}(A^t)=\mathcal{R}(AA^\dagger)$ and that $b\in\mathcal{R}(A)$ implies $AA^\dagger b=b$, we find

$$\zeta_{LS}=A^\dagger b\in\mathrm{span}\{AA^\dagger b,\dots,A^{d_A-1}AA^\dagger b\}=\mathrm{span}\{b,\dots,A^{d_A-1}b\},$$

as required.
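Lemma B.7 can be probed numerically: the sketch below (illustrative NumPy) builds an orthonormal basis of the Krylov space $\mathcal{K}_d(A,b)$ by QR factorization and checks that $\zeta_{LS}=A^\dagger b$ is left fixed by the orthogonal projection onto that space.

```python
import numpy as np

def krylov_projector(A, b, d):
    """Orthogonal projection onto K_d(A, b) = span{b, Ab, ..., A^{d-1} b}."""
    K = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(d)])
    Q, _ = np.linalg.qr(K)
    return Q @ Q.T

rng = np.random.default_rng(1)
p = 4
M = rng.standard_normal((p, p))
A = M @ M.T                        # symmetric PSD; deg(A) = p a.s.
b = A @ rng.standard_normal(p)     # b in R(A)
zeta = np.linalg.pinv(A) @ b
P = krylov_projector(A, b, p)      # d_A = p for a generic A
assert np.allclose(P @ zeta, zeta) # zeta_LS lies in K_{d_A}(A, b)
```

For larger $p$ the raw power basis $\{b,Ab,\dots\}$ becomes numerically ill-conditioned; a practical implementation would use the Arnoldi/Lanczos iteration mentioned below instead of explicit matrix powers.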
We recall and adapt the main results from the numerical perturbation theory of Krylov spaces initiated by Carpraux et al. [1994] and later developed by Kuznetsov [1997]. It is worth noticing that the whole theory has been developed in terms of perturbation bounds with respect to the Frobenius norm $\|\cdot\|_F$ instead of the operator norm $\|\cdot\|_{\mathrm{op}}$. However, the choice of metric on the space of square matrices is arbitrary, as long as it is unitarily invariant. To see this, notice that all the proofs by Kuznetsov [1997] exploit the properties of orthogonal matrices presented in their Section 4, which are already expressed in terms of operator norms. We provide below the immediate generalization of the main objects from the classical theory to the case of perturbations in operator norm. Let $U\in\mathbb{R}^{p\times p}$ be some matrix such that $U^tU=I_p$; its spectrum $\mathrm{Sp}(U)=\{\lambda_1(U),\dots,\lambda_p(U)\}$ is a subset of the unit circle. That is, one can write $\lambda_j(U)=e^{i\omega_j(U)}$ for some $\omega_j(U)\in\mathbb{R}$ and order these values as $-\pi\le\omega_1(U)\le\dots\le\omega_p(U)<\pi$. The following is an adaptation of Definition 2.1 by Kuznetsov [1997] to suit our needs; we denote

$$\rho(U) := \max\big\{|\omega_j| : e^{i\omega_j}\in\mathrm{Sp}(U),\ 0<\omega_j<\pi\big\},$$

where the restriction to the interval $(0,\pi)$ is justified by the fact that the eigenvalues of $U$ consist of complex-conjugate pairs. As in the original definition, we remove the real eigenvalues $\{\pm1\}$. Let $\mathcal{B}$ and $\widetilde{\mathcal{B}}$ be two $m$-dimensional subspaces of $\mathbb{R}^p$, and let $B\in\mathbb{R}^{p\times m}$ and $\widetilde{B}\in\mathbb{R}^{p\times m}$ be some orthonormal bases of $\mathcal{B}$ and $\widetilde{\mathcal{B}}$ respectively. Then, there exists some matrix $U\in\mathbb{R}^{p\times p}$ with $U^tU=I_p$ such that $\widetilde{B}=UB$. With the above, we define

$$d(B,\widetilde{B}) := \inf_U\rho(U), \qquad d(\mathcal{B},\widetilde{\mathcal{B}}) := \inf_{B,\widetilde{B}}d(B,\widetilde{B}), \qquad (26)$$

where the first infimum is taken over all possible orthonormal matrices such that $\widetilde{B}=UB$ and the second infimum is taken over all possible orthonormal bases $B$, $\widetilde{B}$ of $\mathcal{B}$, $\widetilde{\mathcal{B}}$. This is the same as Definition 2.2 by Kuznetsov [1997], the only difference being the definition of the spectral radius $\rho(U)$ in the previous display. It is important to notice that this distance is equivalent to the principal-angle defined in Equation (19).

Lemma B.8. For some $1\le m\le p$, let $\mathcal{B}\subseteq\mathbb{R}^p$ and $\widetilde{\mathcal{B}}\subseteq\mathbb{R}^p$ be two orthogonal $m$-dimensional subspaces. Then, $d(\mathcal{B},\widetilde{\mathcal{B}})=\pi/2$.

Proof of Lemma B.8. Let $B=(v_1|\cdots|v_m)\in\mathbb{R}^{p\times m}$ and $\widetilde{B}=(\widetilde{v}_1|\cdots|\widetilde{v}_m)\in\mathbb{R}^{p\times m}$ be any two orthonormal bases of $\mathcal{B}$ and $\widetilde{\mathcal{B}}$, respectively. By orthogonality, the vectors $v_i$ and $\widetilde{v}_j$ are linearly independent for all $i,j=1,\dots,m$. Thus, there exist vectors $w_1,\dots,w_{p-2m}$ such that $(v_1|\cdots|v_m|\widetilde{v}_1|\cdots|\widetilde{v}_m|w_1|\cdots|w_{p-2m})\in\mathbb{R}^{p\times p}$ is an orthonormal basis of $\mathbb{R}^p$. Now, among all possible orthogonal transformations $U\in\mathbb{R}^{p\times p}$ such that $\widetilde{B}=UB$, the ones achieving the smallest $\rho(U)$ in Equation (26) are those such that $Uw_i=\pm w_i$ for all $i=1,\dots,p-2m$. Any such matrix is then a pairwise permutation matrix that swaps the positions of $v_i$ and $\widetilde{v}_i$ in the original basis of $\mathbb{R}^p$. The spectrum $\mathrm{Sp}(U)$ of such a matrix consists of the eigenvalues $\pm1$, corresponding to the fixed points, and the complex unit root $i=e^{i\pi/2}$. From Equation (26), we get

$$d(\mathcal{B},\widetilde{\mathcal{B}}) = \min_{B,\widetilde{B}}\ \min_{U:\widetilde{B}=UB}\rho(U) = \min_{B,\widetilde{B}}\ \min_{U:\widetilde{B}=UB}\max\Big\{\frac{\pi}{2} : e^{i\pi/2}\in\mathrm{Sp}(U)\Big\} = \frac{\pi}{2}.$$

Lemma B.9 (Lemma 1 by Carpraux et al.
[1994]). Let $B$, $\widetilde{B}$ be orthonormal bases of $\mathcal{B}$, $\widetilde{\mathcal{B}}$ such that $\widetilde{B}=(I_p+\Delta+O(\|\Delta\|_{\mathrm{op}}^2))B$ for some $\|\Delta\|_{\mathrm{op}}\ll1$. Then, at first order in $\|\Delta\|_{\mathrm{op}}\ll1$, one has

$$d(\mathcal{B},\widetilde{\mathcal{B}}) = \inf_\Delta\|\Delta\|_{\mathrm{op}},$$

where the infimum is taken over all matrices $\Delta\in\mathbb{R}^{p\times p}$ such that $\|\Delta\|_{\mathrm{op}}\ll1$, $\Delta^t=-\Delta$ and $\widetilde{B}=(I_p+\Delta+O(\|\Delta\|_{\mathrm{op}}^2))B$.

We always consider a symmetric positive semi-definite matrix $A\in\mathbb{R}^{p\times p}$ and a vector $b\in\mathcal{R}(A)$, together with a Krylov space $\mathcal{K}_m=\mathcal{K}_m(A,b)$ of full dimension, that is, $m=\dim(\mathcal{K}_m(A,b))$ for some $1\le m\le p$. We are interested in all perturbations $\widetilde{A}=A+\widetilde{\Delta}_A\in\mathbb{R}^{p\times p}$, some symmetric positive semi-definite matrix, and $\widetilde{b}=b+\widetilde{\Delta}_b\in\mathcal{R}(\widetilde{A})$, some vector, such that the perturbed Krylov space $\widetilde{\mathcal{K}}_m=\mathcal{K}_m(\widetilde{A},\widetilde{b})$ has full dimension, that is, $m=\dim(\mathcal{K}_m(\widetilde{A},\widetilde{b}))$. We denote by $K_m\in\mathbb{R}^{p\times m}$ and $\widetilde{K}_m\in\mathbb{R}^{p\times m}$ any natural orthonormal bases of $\mathcal{K}_m(A,b)$ and $\mathcal{K}_m(\widetilde{A},\widetilde{b})$, in the sense of Definition 2 by Carpraux et al. [1994]. Notice that these bases are unique up to their signs and can be computed, for example, with the Arnoldi iteration devised by Arnoldi [1951]. When $\widetilde{A}=A$ and $\widetilde{b}=b$, one can always find some orthonormal matrix $U\in\mathbb{R}^{p\times p}$ having spectrum $\mathrm{Sp}(U)=\{\pm1\}$ and such that $\widetilde{K}_m=UK_m$, so that $d(\mathcal{K}_m,\widetilde{\mathcal{K}}_m)=0$ even though $K_m\ne\widetilde{K}_m$ (the identity holds up to the signs of the columns). For arbitrary $\widetilde{A}$ and $\widetilde{b}$ we denote the size of the perturbation by

$$\Delta(\widetilde{A},\widetilde{b}) := \frac{\|\widetilde{A}-A\|_{\mathrm{op}}}{\|A\|_{\mathrm{op}}} \vee \frac{\|\widetilde{b}-b\|_2}{\|b\|_2}$$

and define

$$\kappa_b(K_m) := \inf_{\varepsilon>0}\ \sup_{(\widetilde{A},\widetilde{b}):\,\Delta(\widetilde{A},\widetilde{b})\le\varepsilon}\frac{d(\mathcal{K}_m,\widetilde{\mathcal{K}}_m)}{\Delta(\widetilde{A},\widetilde{b})}, \qquad \kappa_2(\mathcal{K}_m(A,b)) := \min_{K_m}\kappa_b(K_m). \qquad (27)$$

Since Krylov spaces are invariant under orthonormal transformations, there exist $V_m=(K_m|K_m^\perp)\in\mathbb{R}^{p\times p}$ and $\widetilde{V}_m=(\widetilde{K}_m|\widetilde{K}_m^\perp)\in\mathbb{R}^{p\times p}$, both orthonormal bases of $\mathbb{R}^p$, such that $G_m=V_m^tK_m$ and $\widetilde{G}_m=\widetilde{V}_m^t\widetilde{K}_m$ are the natural orthonormal bases of $\mathcal{K}_m(H_m,e_1)$ and $\mathcal{K}_m(\widetilde{H}_m,e_1)$, with $e_1\in\mathbb{R}^p$ the first vector of the canonical basis and both $H_m=V_m^tAV_m\in\mathbb{R}^{p\times p}$, $\widetilde{H}_m=\widetilde{V}_m^t\widetilde{A}\widetilde{V}_m\in\mathbb{R}^{p\times p}$ tridiagonal symmetric (symmetric Hessenberg) matrices. In particular, one can always reduce the problem to perturbations of Krylov spaces having the same vector $\widetilde{b}=b$ and tridiagonal symmetric matrices. This gives the equivalent definition

$$\kappa_b(G_m) := \inf_{\varepsilon>0}\ \sup_{\widetilde{H}:\,\Delta(\widetilde{H})\le\varepsilon}\frac{d(\mathcal{G}_m,\widetilde{\mathcal{G}}_m)}{\Delta(\widetilde{H})}, \qquad \kappa_2(\mathcal{K}_m(H,e_1)) := \min_{G_m}\kappa_b(G_m),$$

corresponding to Definition 3 by Carpraux et al. [1994], for which the reduction to the Hessenberg case (tridiagonal symmetric for us) is given in their Theorem 1.

Theorem B.10 (Theorem 3.1 by Kuznetsov [1997]). Let $H\in\mathbb{R}^{p\times p}$ be a symmetric tridiagonal matrix and $m=\dim(\mathcal{K}_m(H,e_1))$ for some $1\le m\le p$. Assume $H(t)\in\mathbb{R}^{p\times p}$ is a continuously differentiable matrix function such that $H(0)=H$ and

$$\Big\|\frac{dH(t)}{dt}\Big\|_{\mathrm{op}} \le \nu\,\|H(t)\|_{\mathrm{op}},$$

for some $0<\nu<1$. Let $V_m(t)\in\mathbb{R}^{p\times p}$ be the orthogonal matrix defined as the solution of the Cauchy problem in Equations (3.1)–(3.3) by Kuznetsov [1997]. For all $G_m\in\mathbb{R}^{p\times m}$ natural orthonormal bases of $\mathcal{K}_m(H,e_1)$ and

$$0\le t\le\frac{1}{16\,\nu\,\kappa_b(G_m)(\kappa_b(G_m)+1)},$$

one has $\|V_m^t(t)G_m-G_m\|_{\mathrm{op}}\le2\,\kappa_b(G_m)\,\nu t$.

Proof of Theorem B.10. We slightly adapt the proof of Theorem 3.1 by Kuznetsov [1997]. Their Equation (3.7) becomes, for us,

$$\delta_m(t) = 2\sin\left(\frac{\int_0^t\|X_m(\xi)\|_{\mathrm{op}}\,d\xi}{2}\right), \qquad \rho_m(t) = \{2\delta_m(t)+t\nu[1+\delta_m(t)]\}\|H\|_{\mathrm{op}}.$$

Using that $\|H(t)-H\|_{\mathrm{op}}=\|\int_0^t\frac{d}{d\xi}X(\xi)\,d\xi\|_{\mathrm{op}}\le t\nu\|H\|_{\mathrm{op}}$, one recovers their bound $\|Z_m(t)-H\|_{\mathrm{op}}\le\rho_m(t)$. The remainder of the proof proceeds as in the original reference.

Theorem B.11 (Theorem 3.3 by Kuznetsov [1997]). Let $A\in\mathbb{R}^{p\times p}$ be some symmetric and positive semi-definite matrix and $b\in\mathcal{R}(A)$ some vector.
Let $\widetilde A=A+\widetilde{\Delta A}\in\mathbb R^{p\times p}$ be some symmetric and positive semi-definite matrix and $\widetilde b=b+\widetilde{\Delta b}\in\mathcal R(\widetilde A)$ some vector. Let $K_m\in\mathbb R^{p\times m}$ be any natural orthonormal basis of $\mathcal K_m=\mathcal K_m(A,b)$ for some $1\le m\le p$. Assume that $m=\dim(\mathcal K_m(A,b))=\dim(\mathcal K_m(\widetilde A,\widetilde b))$ and
$$\frac{\|\widetilde{\Delta A}\|_{op}}{\|A\|_{op}}\le\varepsilon,\qquad\frac{\|\widetilde{\Delta b}\|_2}{\|b\|_2}\le\varepsilon,\qquad 0\le\varepsilon\le\frac{1}{64\kappa_b(K_m)(\kappa_b(K_m)+1)}.$$
Then, there exists a natural orthonormal basis $\widetilde K_m\in\mathbb R^{p\times m}$ of $\widetilde{\mathcal K}_m=\mathcal K_m(\widetilde A,\widetilde b)$ such that $\|\widetilde K_m-K_m\|_{op}\le 11\kappa_b(K_m)\varepsilon$.

Proof of Theorem B.11. We slightly improve the proof of Theorem 3.3 by Kuznetsov [1997]. Define the continuously differentiable matrix function $A(t):=A+(\widetilde A-A)t\in\mathbb R^{p\times p}$, so that $A(0)=A$, $A(1)=\widetilde A$ and $\|\frac{dA(t)}{dt}\|_{op}=\|\widetilde{\Delta A}\|_{op}\le\varepsilon\|A\|_{op}$. One can find $V\in\mathbb R^{p\times p}$ and $\widetilde V\in\mathbb R^{p\times p}$ orthonormal matrices such that $\|V-\widetilde V\|_2\le\sqrt2\,\varepsilon$, and define Hessenberg matrices $H=V^tA(0)V$ and $\widetilde H=\widetilde V^tA(1)\widetilde V$. One can check that $\|\widetilde H-H\|_{op}\le(2\sqrt2+1)\|H\|_{op}\varepsilon\le 4\|H\|_{op}\varepsilon$. The assumptions of Theorem B.10 hold for $t=1$ and $\nu=4\varepsilon$; thus, with suitable bases $G_m$ and $\widetilde G_m$ of $\mathcal K_m(H,e_1)$ and $\mathcal K_m(\widetilde H,e_1)$, one has $\|\widetilde G_m-G_m\|_{op}\le 8\kappa_b(G_m)\varepsilon$. One concludes the proof for the bases $K_m$ and $\widetilde K_m$ by writing
$$\|\widetilde K_m-K_m\|_{op}=\|\widetilde V\widetilde G_m-VG_m\|_{op}\le\|V-\widetilde V\|_2+\|\widetilde G_m-G_m\|_{op}\le 11\kappa_b(K_m)\varepsilon.$$

The next result is a combination of the proof of Theorem 3.3 by Kuznetsov [1997], together with one of its corollaries.

Corollary B.12 (Corollary 2 by Kuznetsov [1997]). Under the assumptions of Theorem B.11, let $A_m=K_m^tAK_m\in\mathbb R^{m\times m}$, $b_m=K_m^tb\in\mathbb R^m$ be the projected matrix and vector relative to the orthonormal basis $K_m\in\mathbb R^{p\times m}$, and let $\widetilde A_m=\widetilde K_m^t\widetilde A\widetilde K_m\in\mathbb R^{m\times m}$, $\widetilde b_m=\widetilde K_m^t\widetilde b\in\mathbb R^m$ be the projected matrix and vector relative to the orthonormal basis $\widetilde K_m\in\mathbb R^{p\times m}$. Then,
$$\frac{\|\widetilde b_m-b_m\|_2}{\|b_m\|_2}\le 2\varepsilon,\qquad\frac{\|\widetilde A_m-A_m\|_{op}}{\|A_m\|_{op}}\le 24\,\kappa_b(K_m)\|A\|_{op}\|A_m\|_{op}^{-1}\varepsilon.$$

Lemma B.13 (Lemma 3.1 by Kuznetsov [1997]). Under the assumptions of Theorem B.11, $m=\dim(\mathcal K(A,b))$ implies $\widetilde m=\dim(\mathcal K(\widetilde A,\widetilde b))\ge m$ and, for all $m<s\le p$,
$$\kappa_2(\mathcal K_s(A,b))=+\infty,\qquad \kappa_2(\mathcal K_s(\widetilde A,\widetilde b))\ge\frac{1}{\varepsilon\{14+56\,\kappa_2(\mathcal K_m(A,b))\}}.$$

C Proofs

Here we
provide all the proofs for the results in the main sections.

C.1 Proofs for Section 2

Proof of Lemma 2.2. For the first statement, we notice that the conditions determining the relevant subspace $\mathcal B_y$ in Equation (4) are equivalent to
$$\Sigma_x=U_y\Sigma_xU_y+U_{y\perp}\Sigma_xU_{y\perp},\qquad \sigma_{x,y}=U_y\sigma_{x,y}.$$
This implies that $\mathcal B_y$ is the unique $\Sigma_x$-envelope of $\mathrm{span}\{\sigma_{x,y}\}\subseteq\mathcal R(\Sigma_x)$ in the sense of Definition 2.1 by Cook et al. [2010]. That is to say, $\mathcal B_y$ is the intersection of all reducing subspaces for $\Sigma_x$ that contain $\mathrm{span}\{\sigma_{x,y}\}$. For the second statement, we recall that by definition $\mathrm{LS}(x,y)=\mathrm{LS}(\Sigma_x,\sigma_{x,y})$ admits the unique solution $\beta_{LS}:=\Sigma_x^\dagger\sigma_{x,y}$ and $\mathrm{LS}(x_y,y)=\mathrm{LS}(\Sigma_{x_y},\sigma_{x_y,y})$ admits the unique solution $\beta_y:=\Sigma_{x_y}^\dagger\sigma_{x_y,y}$. Therefore, it is sufficient to show that $\beta_{LS}=\beta_y$. From the definition of relevant subspace in Equation (4) and the orthogonal factorization in Equation (5), it follows that
$$\sigma_{x,y}=E(xy)=E(x_yy)+E(x_{y\perp}y)=E(x_yy)=\sigma_{x_y,y},$$
$$\Sigma_x=E(xx^t)=E\big((x_y+x_{y\perp})(x_y+x_{y\perp})^t\big)=E(x_yx_y^t)+E(x_{y\perp}x_{y\perp}^t)=\Sigma_{x_y}+\Sigma_{x_{y\perp}}.$$
The range of the matrix $\Sigma_{x_y}$ is the relevant subspace $\mathcal R(\Sigma_{x_y})=\mathcal B_y$, whereas the range of the matrix $\Sigma_{x_{y\perp}}$ is the irrelevant subspace $\mathcal R(\Sigma_{x_{y\perp}})=\mathcal B_y^\perp$. Thus, the same holds for the generalized inverse $\Sigma_x^\dagger=(\Sigma_{x_y}+\Sigma_{x_{y\perp}})^\dagger=\Sigma_{x_y}^\dagger+\Sigma_{x_{y\perp}}^\dagger$. One last computation yields
$$\beta_{LS}=\Sigma_x^\dagger\sigma_{x,y}=\Sigma_{x_y}^\dagger\sigma_{x_y,y}+\Sigma_{x_{y\perp}}^\dagger\sigma_{x_y,y}=\Sigma_{x_y}^\dagger\sigma_{x_y,y}=\beta_y,$$
which is the claim.

Proof of Theorem 2.4. Assumption 2.3 (iv) implies $\mathcal A(x_B,y)=\mathcal A(\Sigma_{x_B},\sigma_{x_B,y})$ and $\mathcal A(x,y)=\mathcal A(x_y,y)=\mathcal A(\Sigma_{x_y},\sigma_{x_y,y})$. We want to invoke Theorem A.8 with $A=\Sigma_{x_B}$, $b=\sigma_{x_B,y}$, $\widetilde A=\Sigma_{x_y}$, $\widetilde b=\sigma_{x_y,y}$. We now check the assumptions. First, the compatibility of the perturbation required by Equation (21) holds by Assumption 2.3 (iii). This implies $(\widetilde A,\widetilde b)\in\Delta_{\mathcal A}(A,b)$. Second, the oracle population algorithm is stable with constants $C_{\mathcal A}$ and $D_{\mathcal A}$ by Assumption 2.3 (ii). Third, the size of the perturbation is sufficiently small by Assumption 2.3 (v). We can thus apply Theorem A.8 and obtain the required bounds.

Proof of Corollary 2.5. For any $r\in\mathrm{DoF}(\mathcal A(x_B,y))$, we have
$$\frac{\|\widetilde\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\|_2}{\|\beta_{\mathcal A}\|_2}\le\frac{\|\widetilde\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}^{(r)}\|_2}{\|\beta_{\mathcal A}\|_2}+\frac{\|\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\|_2}{\|\beta_{\mathcal A}\|_2}.$$
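The pseudoinverse identity at the heart of the proof of Lemma 2.2, $\Sigma_x^\dagger=(\Sigma_{x_y}+\Sigma_{x_{y\perp}})^\dagger=\Sigma_{x_y}^\dagger+\Sigma_{x_{y\perp}}^\dagger$, holds because the two summands have orthogonal ranges. A minimal numpy sketch (dimensions and eigenvalues are illustrative choices, not from the paper) checks this splitting and the resulting equality $\beta_{LS}=\beta_y$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, r = 6, 2

# Orthonormal basis of R^p; the first r columns span the "relevant" subspace B_y.
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
B, Bperp = Q[:, :r], Q[:, r:]

# Two PSD pieces with orthogonal ranges: R(Sigma_rel) = B_y, R(Sigma_irr) = B_y^perp.
Sigma_rel = B @ np.diag([2.0, 1.0]) @ B.T
Sigma_irr = Bperp @ np.diag([3.0, 0.5, 0.2, 0.1]) @ Bperp.T
Sigma = Sigma_rel + Sigma_irr

# A cross-covariance vector lying in the relevant subspace.
sigma = Sigma_rel @ rng.standard_normal(p)

# The pseudoinverse splits across the orthogonal decomposition ...
lhs = np.linalg.pinv(Sigma)
rhs = np.linalg.pinv(Sigma_rel) + np.linalg.pinv(Sigma_irr)
assert np.allclose(lhs, rhs)

# ... hence the full and reduced least-squares solutions agree: beta_LS = beta_y.
beta_ls = np.linalg.pinv(Sigma) @ sigma
beta_y = np.linalg.pinv(Sigma_rel) @ sigma
assert np.allclose(beta_ls, beta_y)
```

The orthogonality of the two ranges is essential here; for a generic sum of PSD matrices the pseudoinverse is not additive.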
We now bound the two terms in the above display. By Definition A.3 and the equivalent representations in Lemma A.4, the population solution satisfies $\beta_{\mathcal A}^{(r)}=U_{\mathcal A}^{(r)}\beta_{\mathcal A}$. That is to say, for some orthonormal basis $\{u_{\mathcal A,1},\dots,u_{\mathcal A,r_{\mathcal A}}\}$ one has $\beta_{\mathcal A}=\sum_{\ell=1}^{r_{\mathcal A}}c_\ell u_{\mathcal A,\ell}$ and $\beta_{\mathcal A}^{(r)}=\sum_{\ell=1}^{r}c_\ell u_{\mathcal A,\ell}$ with the same coefficients $c_\ell$, $\ell=1,\dots,r$. Since $\|\beta_{\mathcal A}\|_2^2=\sum_{\ell=1}^{r_{\mathcal A}}c_\ell^2$, we can bound
$$\|\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\|_2=\left(\sum_{\ell=r+1}^{r_{\mathcal A}}c_\ell^2\right)^{1/2}\le\left(\max_{\ell=r+1,\dots,r_{\mathcal A}}c_\ell^2\right)^{1/2}\sqrt{r_{\mathcal A}-r}\le\|\beta_{\mathcal A}\|_2\sqrt{r_{\mathcal A}-r}.$$
An inspection of the proof of Theorem 2.4 shows that its induced bounds hold for all population solutions computed from $\theta_{\mathcal A}^{(r)}\in\mathcal A(x_B,y)$ and $\widetilde\theta_{\mathcal A}^{(r)}\in\mathcal A(x,y)$ for all compatible choices of $r\le r_{\mathcal A}$, since the quantities appearing in the assumptions are largest when $r=r_{\mathcal A}$. That is to say, we can bound
$$\|\widetilde\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}^{(r)}\|_2\le\tfrac52\|\beta_{\mathcal A}^{(r)}\|_2M_{\mathcal A}\varepsilon_B\le\tfrac52\|\beta_{\mathcal A}\|_2M_{\mathcal A}\varepsilon_B.$$
The claim follows by combining the above displays.

Lemma C.1. Under the assumptions of Theorem 2.12, for any $\nu_{\widetilde B^*,n}<\nu_n<\tfrac12$, the event
$$\Omega_{x,y}(\nu_n):=\left\{\frac{\|\widehat\Sigma_x-\Sigma_x\|_{op}}{\|\Sigma_x\|_{op}}\vee\frac{\|\widehat\sigma_{x,y}-\sigma_{x,y}\|_2}{\|\sigma_{x,y}\|_2}\le\widetilde K_{\widetilde B^*}\nu_n^{-1}\delta_{\widetilde B^*,n}\right\}$$
has probability at least $1-2\nu_n$.

Proof of Lemma C.1. We denote $A=\Sigma_x$, $b=\sigma_{x,y}$, $\widehat A=\widehat\Sigma_x$, $\widehat b=\widehat\sigma_{x,y}$, and split the proof into three parts.

Step 1. Bound on the covariance vector. The size of the perturbation on the covariance vector is
$$\|\widehat{\Delta b}\|_2=\|\widehat\sigma_{x,y}-\sigma_{x,y}\|_2=\left\|\frac1n\sum_{i=1}^nx_iy_i-E(xy)\right\|_2.$$
The observations $x_i=x_{y,i}+x_{y\perp,i}$ are i.i.d. copies of
the population features $x=x_y+x_{y\perp}$ with $x_y\in\mathcal B_y$ and $x_{y\perp}\in\mathcal B_y^\perp$. Similarly, the observations $x_{y,i}=x_{\widetilde B,i}+x_{\widetilde B\perp,i}$ are i.i.d. copies of the population features $x_y=x_{\widetilde B}+x_{\widetilde B\perp}$ with $x_{\widetilde B}\in\widetilde B$ and $x_{\widetilde B\perp}\in\widetilde B^\perp$. Thus, we bound $\|\widehat{\Delta b}\|_2\le I_{\widetilde B}(\widehat{\Delta b})+I_{\widetilde B\perp}(\widehat{\Delta b})+I_{y\perp}(\widehat{\Delta b})$ with
$$I_{\widetilde B}(\widehat{\Delta b}):=\left\|\frac1n\sum_{i=1}^nx_{\widetilde B,i}y_i-E(x_{\widetilde B}y)\right\|_2,\qquad I_{\widetilde B\perp}(\widehat{\Delta b}):=\left\|\frac1n\sum_{i=1}^nx_{\widetilde B\perp,i}y_i-E(x_{\widetilde B\perp}y)\right\|_2,$$
$$I_{y\perp}(\widehat{\Delta b}):=\left\|\frac1n\sum_{i=1}^nx_{y\perp,i}y_i-E(x_{y\perp}y)\right\|_2.$$
We can apply Lemma B.6 separately to each of the above, since the required moments are bounded by Assumption 2.10. With $\sigma_y=E(y^2)^{1/2}$, our definitions in Equations (16)-(17) and the constants in Assumption 2.11, we find
$$E\!\left(\frac{\|\widehat{\Delta b}\|_2}{\|b\|_2}\right)=\frac{E(\|\widehat\sigma_{x,y}-\sigma_{x,y}\|_2)}{\|\sigma_{x,y}\|_2}\le\frac{3L_yL_{\widetilde B^*}\sigma_y\|\Sigma_{x_{\widetilde B^*}}\|_{op}^{1/2}\delta_{\widetilde B^*,n}}{\|\sigma_{x,y}\|_2}\le\widetilde K_{\widetilde B^*}\delta_{\widetilde B^*,n},$$
and an application of Markov's inequality yields
$$P\!\left(\frac{\|\widehat{\Delta b}\|_2}{\|b\|_2}>\widetilde K_{\widetilde B^*}\nu_n^{-1}\delta_{\widetilde B^*,n}\right)<\nu_n.$$

Step 2. Bound on the covariance matrix. The size of the perturbation on the covariance matrix can be written as
$$\|\widehat{\Delta A}\|_{op}=\|\widehat\Sigma_x-\Sigma_x\|_{op}=\left\|\frac1n\sum_{i=1}^nx_ix_i^t-E(xx^t)\right\|_{op}.$$
With the orthogonal projections we discussed in Step 1, we can now bound $\|\widehat{\Delta A}\|_{op}\le I_{\widetilde B}(\widehat{\Delta A})+I_{\widetilde B\perp}(\widehat{\Delta A})+I_{y\perp}(\widehat{\Delta A})+I_{\widetilde B,\widetilde B\perp}(\widehat{\Delta A})+I_{\widetilde B,y\perp}(\widehat{\Delta A})+I_{\widetilde B\perp,y\perp}(\widehat{\Delta A})$ with
$$I_{\widetilde B}(\widehat{\Delta A}):=\left\|\frac1n\sum_{i=1}^nx_{\widetilde B,i}x_{\widetilde B,i}^t-E(x_{\widetilde B}x_{\widetilde B}^t)\right\|_{op},\qquad I_{\widetilde B\perp}(\widehat{\Delta A}):=\left\|\frac1n\sum_{i=1}^nx_{\widetilde B\perp,i}x_{\widetilde B\perp,i}^t-E(x_{\widetilde B\perp}x_{\widetilde B\perp}^t)\right\|_{op},$$
$$I_{y\perp}(\widehat{\Delta A}):=\left\|\frac1n\sum_{i=1}^nx_{y\perp,i}x_{y\perp,i}^t-E(x_{y\perp}x_{y\perp}^t)\right\|_{op},\qquad I_{\widetilde B,\widetilde B\perp}(\widehat{\Delta A}):=\left\|\frac1n\sum_{i=1}^n(x_{\widetilde B,i}x_{\widetilde B\perp,i}^t+x_{\widetilde B\perp,i}x_{\widetilde B,i}^t)-E(x_{\widetilde B}x_{\widetilde B\perp}^t+x_{\widetilde B\perp}x_{\widetilde B}^t)\right\|_{op},$$
$$I_{\widetilde B,y\perp}(\widehat{\Delta A}):=\left\|\frac1n\sum_{i=1}^n(x_{\widetilde B,i}x_{y\perp,i}^t+x_{y\perp,i}x_{\widetilde B,i}^t)-E(x_{\widetilde B}x_{y\perp}^t+x_{y\perp}x_{\widetilde B}^t)\right\|_{op},\qquad I_{\widetilde B\perp,y\perp}(\widehat{\Delta A}):=\left\|\frac1n\sum_{i=1}^n(x_{\widetilde B\perp,i}x_{y\perp,i}^t+x_{y\perp,i}x_{\widetilde B\perp,i}^t)-E(x_{\widetilde B\perp}x_{y\perp}^t+x_{y\perp}x_{\widetilde B\perp}^t)\right\|_{op}.$$
We can apply Lemma B.4 separately to each of the first three and Lemma B.5 separately to each of the last three; the required moments are bounded by Assumption 2.10 and the sample size is sufficiently large by Assumption 2.11. Let $A,A'$ be two elements (possibly the same) of the set $\{\widetilde B,\widetilde B^\perp,\mathcal B_y^\perp\}$ and $x_A,x_{A'}$ the corresponding projected features; thus
$$E\!\left(\frac{\|\widehat{\Delta A}\|_{op}}{\|A\|_{op}}\right)=\frac{E(\|\widehat\Sigma_x-\Sigma_x\|_{op})}{\|\Sigma_x\|_{op}}\le\sum_{A,A'\in\{\widetilde B,\widetilde B^\perp,\mathcal B_y^\perp\}}\frac{E(I_{A,A'}(\widehat{\Delta A}))}{\|\Sigma_x\|_{op}}.$$
For $A=A'$, we have $E(I_A(\widehat{\Delta A}))\le CL_{\widetilde B^*}\|\Sigma_{x_{\widetilde B^*}}\|_{op}\delta_{\widetilde B^*,n}$.
For $A\ne A'$, we have
$$E(I_{A,A'}(\widehat{\Delta A}))\le 16\,CL_A^{1/2}L_{A'}^{1/2}\|\Sigma_{x_A}\|_{op}^{1/2}\|\Sigma_{x_{A'}}\|_{op}^{1/2}\sqrt{\frac{(\rho_{A,n}\vee\rho_{A',n})\log(r_A\vee r_{A'})}{n}}\le 16\,CL_A^{1/2}L_{A'}^{1/2}\|\Sigma_{x_A}\|_{op}^{1/2}\|\Sigma_{x_{A'}}\|_{op}^{1/2}\left(\sqrt{\frac{\rho_{A,n}}{n}}+\sqrt{\frac{\rho_{A',n}}{n}}\right)\sqrt{\log r_x}\le 32\,CL_{\widetilde B^*}\|\Sigma_{x_{\widetilde B^*}}\|_{op}\delta_{\widetilde B^*,n}.$$
Putting all the above together gives
$$E\!\left(\frac{\|\widehat{\Delta A}\|_{op}}{\|A\|_{op}}\right)=\frac{E(\|\widehat\Sigma_x-\Sigma_x\|_{op})}{\|\Sigma_x\|_{op}}\le\frac{99\,CL_{\widetilde B^*}\|\Sigma_{x_{\widetilde B^*}}\|_{op}\delta_{\widetilde B^*,n}}{\|\Sigma_x\|_{op}}\le\widetilde K_{\widetilde B^*}\delta_{\widetilde B^*,n},$$
and an application of Markov's inequality yields
$$P\!\left(\frac{\|\widehat{\Delta A}\|_{op}}{\|A\|_{op}}>\widetilde K_{\widetilde B^*}\nu_n^{-1}\delta_{\widetilde B^*,n}\right)<\nu_n.$$

Step 3. Bound on the intersection. The intersection of the events in Step 1 and Step 2 has probability at least $1-2\nu_n$ and, conditionally on this event,
$$\frac{\|\widehat{\Delta A}\|_{op}}{\|A\|_{op}}\vee\frac{\|\widehat{\Delta b}\|_2}{\|b\|_2}\le\widetilde K_{\widetilde B^*}\nu_n^{-1}\delta_{\widetilde B^*,n},$$
which gives the claim.

Proof of Theorem 2.12. The assumptions of Lemma C.1 hold and, with $\widehat\varepsilon=\widehat\varepsilon(x,y)$ the size of the perturbation in Equation (14) and $\varepsilon_{\widetilde B^*,n}:=\widetilde K_{\widetilde B^*}\nu_n^{-1}\delta_{\widetilde B^*,n}$, the event $\Omega_{x,y}(\nu_n)=\{\widehat\varepsilon\le\varepsilon_{\widetilde B^*,n}\}$ has probability at least $1-2\nu_n$. On this event, we check that we can invoke Theorem A.8 with $A=\Sigma_x$, $b=\sigma_{x,y}$ and $\widetilde A=\widehat\Sigma_x$, $\widetilde b=\widehat\sigma_{x,y}$. First, the compatibility of the perturbation required by Equation (21) holds by Assumption 2.11 (ii). This implies $(\widetilde A,\widetilde b)\in\Delta_{\mathcal A}(A,b)$. Second, the population algorithm is stable with constants $\widetilde C_{\mathcal A}$, $\widetilde D_{\mathcal A}$ and $\widetilde M_{\mathcal A}$ by Assumption 2.11 (i). Third, the size of the perturbation is sufficiently small with
$$\widetilde M_{\mathcal A}\cdot\widehat\varepsilon\le\widetilde M_{\mathcal A}\cdot\varepsilon_{\widetilde B^*,n}<\widetilde M_{\mathcal A}\widetilde K_{\widetilde B^*}\nu_{\widetilde B^*,n}^{-1}\delta_{\widetilde B^*,n}=\frac{\widetilde M_{\mathcal A}\widetilde K_{\widetilde B^*}}{\widetilde M_{\widetilde B^*}}=1.$$
Thus, on the event $\Omega_{x,y}(\nu_n)$, it holds that
$$\|\widehat U_{\mathcal A}-\widetilde U_{\mathcal A}\|_{op}\le\widetilde C_{\mathcal A}\varepsilon_{\widetilde B^*,n},\qquad \phi_1\big(\widehat{\mathcal B}_{\mathcal A},\widetilde{\mathcal B}_{\mathcal A}\big)\le\widetilde D_{\mathcal A}\varepsilon_{\widetilde B^*,n},$$
and also
$$\frac{\|\widehat\beta_{\mathcal A}-\widetilde\beta_{\mathcal A}\|_2}{\|\widetilde\beta_{\mathcal A}\|_2}\le\frac52\widetilde M_{\mathcal A}\varepsilon_{\widetilde B^*,n}.$$

Proof of Corollary 2.13. For all $r\le r_{\mathcal A}$, we find
$$\frac{\|\widehat\beta_{\mathcal A}^{(r)}-\widetilde\beta_{\mathcal A}\|_2}{\|\widetilde\beta_{\mathcal A}\|_2}\le\frac{\|\widehat\beta_{\mathcal A}^{(r)}-\widetilde\beta_{\mathcal A}^{(r)}\|_2}{\|\widetilde\beta_{\mathcal A}\|_2}+\frac{\|\widetilde\beta_{\mathcal A}^{(r)}-\widetilde\beta_{\mathcal A}\|_2}{\|\widetilde\beta_{\mathcal A}\|_2}.$$
With Definition A.3 and the equivalent representations in Lemma A.4, the same argument as in the proof of Corollary 2.5
yields $\|\widetilde\beta_{\mathcal A}^{(r)}-\widetilde\beta_{\mathcal A}\|_2\le\|\widetilde\beta_{\mathcal A}\|_2\sqrt{r_{\mathcal A}-r}$. An inspection of the proof of Theorem 2.12 shows that its induced bounds hold for all sample parameters $\widehat\theta_{\mathcal A}^{(r)}\in\widehat{\mathcal A}(x,y)$ and population parameters $\theta_{\mathcal A}^{(r)}\in\mathcal A(x,y)$ with any $r\in\mathrm{DoF}(\mathcal A(x,y))$ and $r\le r_{\mathcal A}$, since the quantities appearing in the assumptions are largest when $r=r_{\mathcal A}$. That is to say, we can bound (on the same event)
$$\|\widehat\beta_{\mathcal A}^{(r)}-\widetilde\beta_{\mathcal A}^{(r)}\|_2\le\frac52\|\widetilde\beta_{\mathcal A}^{(r)}\|_2\widetilde M_{\mathcal A}\varepsilon_{\widetilde B^*,n}.$$
Putting all the above displays together we get
$$\frac{\|\widehat\beta_{\mathcal A}^{(r)}-\widetilde\beta_{\mathcal A}\|_2}{\|\widetilde\beta_{\mathcal A}\|_2}\le\sqrt{r_{\mathcal A}-r}+\frac52\widetilde M_{\mathcal A}\varepsilon_{\widetilde B^*,n},$$
which is the claim.

Proof of Theorem 2.14. Both the operator norm $\|\cdot\|_{op}$ and the principal angle $\phi_1(\cdot,\cdot)$ satisfy the triangle inequality, therefore
$$\|\widehat U_{\mathcal A}-U_{\mathcal A}\|_{op}\le\|\widehat U_{\mathcal A}-\widetilde U_{\mathcal A}\|_{op}+\|\widetilde U_{\mathcal A}-U_{\mathcal A}\|_{op},\qquad \phi_1\big(\widehat{\mathcal B}_{\mathcal A},\mathcal B_{\mathcal A}\big)\le\phi_1\big(\widehat{\mathcal B}_{\mathcal A},\widetilde{\mathcal B}_{\mathcal A}\big)+\phi_1\big(\widetilde{\mathcal B}_{\mathcal A},\mathcal B_{\mathcal A}\big),$$
and we can apply Theorem 2.4 and Theorem 2.12 to bound the above display. For the least-squares solutions we find, for $r\in\mathrm{DoF}(\mathcal A(x,y))$ such that $r\le r_{\mathcal A}$, that
$$\frac{\|\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\|_2}{\|\beta_{\mathcal A}\|_2}\le\frac{\|\widehat\beta_{\mathcal A}^{(r)}-\widetilde\beta_{\mathcal A}^{(r)}\|_2}{\|\beta_{\mathcal A}\|_2}+\frac{\|\widetilde\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}^{(r)}\|_2}{\|\beta_{\mathcal A}\|_2}+\frac{\|\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\|_2}{\|\beta_{\mathcal A}\|_2}.$$
With Definition A.3 and the equivalent representations in Lemma A.4, the same argument as in the proof of Corollary 2.5 yields both
$$\|\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\|_2\le\|\beta_{\mathcal A}\|_2\sqrt{r_{\mathcal A}-r},\qquad \|\widetilde\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}^{(r)}\|_2\le\frac52\|\beta_{\mathcal A}\|_2M_{\mathcal A}\varepsilon_B.$$
An inspection of the proof of Theorem 2.12 shows that its induced bounds hold for all sample parameters $\widehat\theta_{\mathcal A}^{(r)}\in\widehat{\mathcal A}(x,y)$ and population parameters $\theta_{\mathcal A}^{(r)}\in\mathcal A(x,y)$ with any $r\in\mathrm{DoF}(\mathcal A(x,y))$ and $r\le r_{\mathcal A}$, since the quantities appearing in the assumptions are largest when $r=r_{\mathcal A}$. That is to say, we can bound (on the same event)
$$\|\widehat\beta_{\mathcal A}^{(r)}-\widetilde\beta_{\mathcal A}^{(r)}\|_2\le\frac52\|\widetilde\beta_{\mathcal A}^{(r)}\|_2\widetilde M_{\mathcal A}\varepsilon_{\widetilde B^*,n}\le\frac52\left\{\|\beta_{\mathcal A}^{(r)}\|_2+\|\widetilde\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}^{(r)}\|_2\right\}\widetilde M_{\mathcal A}\varepsilon_{\widetilde B^*,n}\le\frac52\|\beta_{\mathcal A}\|_2\left(1+\frac52M_{\mathcal A}\varepsilon_B\right)\widetilde M_{\mathcal A}\varepsilon_{\widetilde B^*,n}.$$
Putting all the above displays together we get
$$\frac{\|\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\|_2}{\|\beta_{\mathcal A}\|_2}\le\sqrt{r_{\mathcal A}-r}+\frac52M_{\mathcal A}\varepsilon_B+\frac52\left(1+\frac52M_{\mathcal A}\varepsilon_B\right)\widetilde M_{\mathcal A}\varepsilon_{\widetilde B^*,n},$$
which is the claim.

Proof of Theorem 2.18. We write
$$\begin{aligned} R_{x,y}(\widehat\beta_{\mathcal A}^{(r)})&=E_{(x,y)}\big((y-x^t\widehat\beta_{\mathcal A}^{(r)})^2\,\big|\,X,y\big)\\ &=E_{(x,y)}\big((y-x^t\beta_{\mathcal A}-x^t\{\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\})^2\,\big|\,X,y\big)\\ &=E_{(x,y)}\big((y-x^t\beta_{\mathcal A})^2\big)+E_{(x,y)}\big((x^t\{\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\})^2\,\big|\,X,y\big)-2E_{(x,y)}\big((y-x^t\beta_{\mathcal A})(x^t\{\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\})\,\big|\,X,y\big). \end{aligned}$$
We further expand, with $\sigma_{x,y}=\Sigma_x\beta_{LS}$,
$$\begin{aligned} E_{(x,y)}\big((y-x^t\beta_{\mathcal A})(x^t\{\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\})\,\big|\,X,y\big)&=E_{(x,y)}\big((y-x^t\beta_{\mathcal A})x^t\big)\{\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\}\\ &=\{\sigma_{x,y}^t-\beta_{\mathcal A}^t\Sigma_x\}\{\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\}\\ &=\{\beta_{LS}-\beta_{\mathcal A}\}^t\Sigma_x\{\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\}\\ &=\langle\beta_{LS}-\beta_{\mathcal A},\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\rangle_{\Sigma_x}. \end{aligned}$$
This gives
$$R_{x,y}(\widehat\beta_{\mathcal A}^{(r)})=R_{x,y}(\beta_{\mathcal A})+\|\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A}\|_{\Sigma_x}^2-2\langle\widehat\beta_{\mathcal A}^{(r)}-\beta_{\mathcal A},\beta_{LS}-\beta_{\mathcal A}\rangle_{\Sigma_x},$$
as required.

D Visualizations

We provide here additional visualizations inspired by the toy simulation in Figure 1. In Figure 2 below we consider $n=1000$ and $p=3$, so the dataset $(X,y)\in\mathbb R^{1000\times3}\times\mathbb R$ consists of i.i.d. observations with features in the Euclidean space $\mathbb R^3$. The following is true but unknown to the statistician: (i) the response vector $y$ depends linearly on the design matrix $X$; (ii) the sample covariance matrix $\widehat\Sigma_x$ has full rank; (iii) the main direction of variation of $\widehat\Sigma_x$ is irrelevant for the response; (iv) the second direction of variation of $\widehat\Sigma_x$ fully explains the response; (v) the third direction of variation of $\widehat\Sigma_x$ comes from a small independent noise. We consider two scenarios: in the first, the dataset is observed aligned with the (x,y,z)-axes of the Euclidean space; in the second, the dataset is only observed after it has been rotated by 30° around the z-axis. Although the statistician does not know whether the observed dataset was arbitrarily rotated, all the properties (i)-(v) that we have listed above are preserved. However, the problem is only sparse in the first scenario. Since the response fully depends on a one-dimensional projection of
the features, the oracle number of degrees-of-freedom is one. This means that a sample algorithm achieves parsimonious identification if it can estimate the oracle direction using one degree-of-freedom. Here we compare the best sparse direction $\widehat u_{FSS,1}$ estimated by sample Forward Subset Selection, the principal direction $\widehat u_{PCR,1}$ estimated by sample Principal Component Regression, and the covariance direction $\widehat u_{PLS,1}$ estimated by sample Partial Least Squares. With $u_0$ the oracle direction of the features (again, this is the second direction of variation of the sample covariance matrix), we measure the angles $\widehat\phi_{FSS,1}=\phi(\widehat u_{FSS,1},u_0)$, $\widehat\phi_{PCR,1}=\phi(\widehat u_{PCR,1},u_0)$ and $\widehat\phi_{PLS,1}=\phi(\widehat u_{PLS,1},u_0)$. In view of our main results, we expect $\widehat\phi_{FSS,1}$ to be small only when the data is non-rotated, we expect $\widehat\phi_{PCR,1}$ to always be 90° since the oracle direction $u_0$ is an orthogonal direction of variation, and we expect $\widehat\phi_{PLS,1}$ to always be small. We show that this is indeed the case in Figure 3, where we can see that the PLS method is comparable to FSS in the sparse setting and outperforms its competitors in the non-sparse setting. We conclude this section by referring the reader to a previous work by Finocchio and Krivobokova [2023] and the simulation study in their Section 3. An extensive simulation was performed for a well-posed regression setting, and it was investigated whether LASSO, PCR and PLS are able to discard the irrelevant information in the dataset and estimate the true vector of coefficients. It was found that PLS is either comparable to or outperforms its competitors, regardless of the problem being sparse or not.

Figure 2: A toy simulation of our framework with $n=1000$ and $p=3$. PLOTS: the observations (black) have full rank; the main direction of variation (light green) is irrelevant for the response; the second direction of variation (light red) determines the response; the third direction of variation (light blue) is a small noise.
TOP: the dataset is aligned with the (x,y,z)-axes and the problem is sparse. BOTTOM: the dataset is rotated by 30° around the z-axis and the problem is not sparse.

Figure 3: A toy simulation of our framework with $n=1000$ and $p=3$ as in Figure 2. PLOTS: the oracle direction of the features (light red); the sparse direction of the features estimated by Forward Subset Selection (dark blue); the principal direction of the features estimated by Principal Component Regression (dark green); the covariance direction of the features estimated by Partial Least Squares (dark red). TABLES: the angles (in degrees) between the oracle direction of the features and the directions estimated by the FSS, PCR and PLS methods. LEFT (sparse setting, dataset aligned with the (x,y,z)-axes): $\widehat\phi_{FSS,1}=1°$, $\widehat\phi_{PCR,1}=90°$, $\widehat\phi_{PLS,1}=7°$. RIGHT (non-sparse setting, dataset rotated by 30° around the z-axis): $\widehat\phi_{FSS,1}=29°$, $\widehat\phi_{PCR,1}=90°$, $\widehat\phi_{PLS,1}=3°$.

References

Pedro Abdalla and Nikita Zhivotovskiy. Covariance Estimation: Optimal Dimension-Free Guarantees for Adversarial Corruption and Heavy Tails. Journal of the European Mathematical Society, Aug 2024. ISSN 1435-9863. doi: 10.4171/jems/1505. URL http://dx.doi.org/10.4171/JEMS/1505.

Vera Afreixo, Ana Helena Tavares, Vera Enes, Miguel Pinheiro, Leonor Rodrigues, and Gabriela Moura. Stable Variable Selection Method with Shrinkage Regression Applied to the Selection of Genetic Variants
Associated with Alzheimer’s Disease. Applied Sciences , 14(6):2572, Mar 2024. ISSN 2076-3417. doi: 10.3390/app14062572. URL http://dx. doi.org/10.3390/app14062572 . Seung C. Ahn and Alex R. Horenstein. Eigenvalue Ratio Test for the Number of Factors. Econometrica , 81(3):1203–1227, 2013. doi: 10.3982/ecta8968. URL https://doi.org/ 10.3982/ecta8968 . Genevera I. Allen, Christine Peterson, Marina Vannucci, and Mirjana Maleti´ c-Savati´ c. Reg- ularized Partial Least Squares with an Application to NMR Spectroscopy. Statistical Analysis and Data Mining: The ASA Data Science Journal , 6(4):302–314, Nov 2012. ISSN 1932-1872. doi: 10.1002/sam.11169. URL http://dx.doi.org/10.1002/sam. 11169 . W. E. Arnoldi. The Principle of Minimized Iterations in the Solution of the Ma- trix Eigenvalue Problem. Quarterly of Applied Mathematics , 9(1):17–29, 1951. doi: 10.1090/qam/42792. URL https://doi.org/10.1090/qam/42792 . Ludovic Arnould, Claire Boyer, and Erwan Scornet. Is Interpolation Benign for Random Forest Regression? In Francisco Ruiz, Jennifer Dy, and Jan-Willem van de Meent, editors, Proceedings of The 26th International Conference on Artificial Intelligence and Statistics , volume 206 of Proceedings of Machine Learning Research , pages 5493–5548. PMLR, 25–27 Apr 2023. doi: https://proceedings.mlr.press/v206/arnould23a.html. URL https://proceedings.mlr.press/v206/arnould23a.html . Jushan Bai and Serena Ng. Determining the Number of Factors in Approximate Factor Models. Econometrica , 70(1):191–221, Jan 2002. doi: 10.1111/1468-0262.00273. URL https://doi.org/10.1111/1468-0262.00273 . Daniel Bartl and Shahar Mendelson. Uniform Mean Estimation via Generic Chaining, 2025. URL https://arxiv.org/abs/2502.15116 . 52 Peter L. Bartlett and Philip M. Long. Failures of Model-dependent Generalization Bounds for Least-norm Interpolation. Journal of Machine Learning Research , 22(204):1–15, 2021. doi: http://jmlr.org/papers/v22/20-1164.html. URL http://jmlr.org/papers/v22/ 20-1164.html . 
Peter L. Bartlett, Philip M. Long, G´ abor Lugosi, and Alexander Tsigler. Benign Over- fitting in Linear Regression. Proceedings of the National Academy of Sciences , 117 (48):30063–30070, Apr 2020. ISSN 1091-6490. doi: 10.1073/pnas.1907378117. URL http://dx.doi.org/10.1073/pnas.1907378117 . E. M. L. Beale, M. G. Kendall, and D. W. Mann. The Discarding of Variables in Multivariate Analysis. Biometrika , 54(3/4):357, Dec 1967. ISSN 0006-3444. doi: 10.2307/2335028. URL http://dx.doi.org/10.2307/2335028 . Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling Modern Machine- Learning Practice and the Classical Bias–Variance Trade-Off. Proceedings of the National Academy of Sciences , 116(32):15849–15854, Jul 2019. ISSN 1091-6490. doi: 10.1073/pnas. 1903070116. URL http://dx.doi.org/10.1073/pnas.1903070116 . Pierre C. Bellec, Guillaume Lecu´ e, and Alexandre B. Tsybakov. Slope meets Lasso: Im- proved Oracle Bounds and Optimality. The Annals of Statistics , 46(6B), Dec 2018. ISSN 0090-5364. doi: 10.1214/17-aos1670. URL http://dx.doi.org/10.1214/17-AOS1670 . A. Belloni, V. Chernozhukov, and L Wang. Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming. Biometrika , 98(4):791–806, 2011. ISSN 00063444, 14643510. doi: http://www.jstor.org/stable/23076172. URL http://www.jstor.org/ stable/23076172 . Marcel Berger. Geometry I . Springer International Publishing, 1987. doi: 10.1007/ 978-3-540-93815-6. URL http://dx.doi.org/10.1007/978-3-540-93815-6 . Peter J. Bickel, Ya’acov Ritov, and Alexandre B. Tsybakov. Simultaneous Analysis of Lasso and Dantzig Selector. The Annals of Statistics , 37(4), Aug 2009. doi: 10.1214/08-aos620. URL https://doi.org/10.1214/08-aos620 . Xin Bing, Florentina Bunea, Seth Strimas-Mackey, and Marten Wegkamp. Prediction Under Latent Factor Regression: Adaptive PCR, Interpolating Predictors and Beyond. Journal of Machine Learning Research , 22(177):1–50, 2021. doi: http://jmlr.org/papers/ v22/20-768.html. 
URL http://jmlr.org/papers/v22/20-768.html . Ma lgorzata Bogdan, Ewout van den Berg, Chiara Sabatti, Weijie Su, and Emmanuel J. Cand` es. SLOPE—Adaptive Variable Selection via Convex Optimization.
The Annals 53 of Applied Statistics , 9(3), Sep 2015. ISSN 1932-6157. doi: 10.1214/15-aoas842. URL http://dx.doi.org/10.1214/15-AOAS842 . Artur Buchholz. Optimal Constants in Khintchine Type Inequalities for Fermions, Rademachers and q-Gaussian Operators. Bulletin of the Polish Academy of Sciences. Mathematics , 53(3):315–321, 2005. doi: http://eudml.org/doc/280682. URL http: //eudml.org/doc/280682 . Florentina Bunea, Seth Strimas-Mackey, and Marten Wegkamp. Interpolating Predic- tors in High-Dimensional Factor Regression. Journal of Machine Learning Research , 23(10):1–60, 2022. doi: http://jmlr.org/papers/v23/20-112.html. URL http://jmlr. org/papers/v23/20-112.html . Emmanuel Candes and Terence Tao. The Dantzig Selector: Statistical Estimation when p is much larger than n. The Annals of Statistics , 35(6), Dec 2007. ISSN 0090-5364. doi: 10. 1214/009053606000001523. URL http://dx.doi.org/10.1214/009053606000001523 . Jean-Fran¸ cois Carpraux, Sergei Godunov, and S. Kuznetsov. Stability of the Krylov Bases and Subspaces. Advances in Numerical Methods and Applications: O (h3): Proceedings of the Third International Conference , 1994. doi: https://inria.hal.science/inria-00074377. URL https://inria.hal.science/inria-00074377 . V. Cerone, S. M. Fosson, and D. Regruto. A Linear Programming Approach to Sparse Linear Regression with Quantized Data. In 2019 American Control Conference (ACC) , page 2990–2995. IEEE, Jul 2019. doi: 10.23919/acc.2019.8815117. URL http://dx. doi.org/10.23919/ACC.2019.8815117 . Julien Chhor, Suzanne Sigalla, and Alexandre B. Tsybakov. Benign Overfitting and Adaptive Nonparametric Regression. Probability Theory and Related Fields , 189(3–4): 949–980, Jun 2024. ISSN 1432-2064. doi: 10.1007/s00440-024-01278-0. URL http: //dx.doi.org/10.1007/s00440-024-01278-0 . Geoffrey Chinot and Matthieu Lerasle. On the Robustness of the Minimum ℓ2Interpolator. arXiv e-prints , art. arXiv:2003.05838, Mar 2020. doi: 10.48550/arXiv.2003.05838. 
URL https://arxiv.org/abs/2003.05838 . Hyonho Chun and S¨ und¨ uz Kele¸ s. Sparse Partial Least Squares Regression for Simultaneous Dimension Reduction and Variable Selection. Journal of the Royal Statistical Society Series B: Statistical Methodology , 72(1):3–25, Jan 2010. doi: 10.1111/j.1467-9868.2009. 00723.x. URL https://doi.org/10.1111/j.1467-9868.2009.00723.x . 54 R. D. Cook, I. S. Helland, and Z. Su. Envelopes and Partial Least Squares Regression. Journal of the Royal Statistical Society Series B: Statistical Methodology , 75(5):851–877, Jul 2013. ISSN 1467-9868. doi: 10.1111/rssb.12018. URL http://dx.doi.org/10.1111/ rssb.12018 . R. Dennis Cook and Liliana Forzani. Partial Least Squares Prediction in High-Dimensional Regression. The Annals of Statistics , 47(2), Apr 2019. ISSN 0090-5364. doi: 10.1214/ 18-aos1681. URL http://dx.doi.org/10.1214/18-AOS1681 . R. Dennis Cook and Liliana Forzani. PLS Regression Algorithms in the Presence of Non- linearity. Chemometrics and Intelligent Laboratory Systems , 213:104307, Jun 2021. ISSN 0169-7439. doi: 10.1016/j.chemolab.2021.104307. URL http://dx.doi.org/10.1016/ j.chemolab.2021.104307 . R. Dennis Cook, Bing Li, and Francesca Chiaromonte. Envelope Models for Parsimonious and Efficient Multivariate Linear Regression. Statistica Sinica , 20(3):927–960, 2010. ISSN 10170405, 19968507. doi: http://www.jstor.org/stable/24309466. URL http://www. jstor.org/stable/24309466 . D. R. Cox. Notes on Some Aspects of Regression Analysis. Journal of the Royal Statistical Society. Series A (General) , 131(3):265, 1968. doi: 10.2307/2343523. URL https: //doi.org/10.2307/2343523 . Chandler Davis and W. M. Kahan. The Rotation of Eigenvectors by a Perturbation. III. SIAM Journal on Numerical Analysis , 7(1):1–46, Mar 1970. ISSN 1095-7170. doi: 10. 1137/0707001. URL http://dx.doi.org/10.1137/0707001 . Henry P. Decell. An Application of the Cayley-Hamilton Theorem to Generalized Matrix Inversion. SIAM Review , 7(4):526–528, Oct 1965. 
doi: 10.1137/1007108. URL https: //doi.org/10.1137/1007108 . Jianqing Fan, Zhipeng Lou, and Mengxin Yu. Are Latent Factor Regression and Sparse Regression Adequate? Journal of the American Statistical
Association , pages 1–13, Feb 2023. doi: 10.1080/01621459.2023.2169700. URL https://doi.org/10.1080/ 01621459.2023.2169700 . Gianluca Finocchio and Tatyana Krivobokova. An Extended Latent Factor Framework for Ill-Posed Linear Regression, 2023. URL https://arxiv.org/abs/2307.08377 . Sophie M. Fosson, Vito Cerone, and Diego Regruto. Sparse Linear Regression from Per- turbed Data. Automatica , 122:109284, Dec 2020. ISSN 0005-1098. doi: 10.1016/ 55 j.automatica.2020.109284. URL http://dx.doi.org/10.1016/j.automatica.2020. 109284 . Laura Freijeiro-Gonz´ alez, Manuel Febrero-Bande, and Wenceslao Gonz´ alez-Manteiga. A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates. International Statistical Review , 90(1):118–145, Aug 2021. ISSN 1751-5823. doi: 10.1111/insr.12469. URL http://dx.doi.org/10.1111/insr.12469 . Daniel Gedon, Antonio H. Ribeiro, and Thomas B. Sch¨ on. No Double Descent in Prin- cipal Component Regression: A High-Dimensional Analysis. In Proceedings of Ma- chine Learning Research , volume 235, pages 15271–15293. PMLR, 21–27 Jul 2024. doi: https://proceedings.mlr.press/v235/gedon24a.html. URL https://proceedings.mlr. press/v235/gedon24a.html . S. K. Godunov, A. G. Antonov, O. P. Kiriljuk, and V. I. Kostin. Guaranteed Accuracy in Numerical Linear Algebra . Springer Netherlands, 1993. ISBN 9789401119528. doi: 10.1007/978-94-011-1952-8. URL http://dx.doi.org/10.1007/978-94-011-1952-8 . David A. Harville. Matrix Algebra From a Statistician’s Perspective . Springer New York, 1997. doi: 10.1007/b98818. URL https://doi.org/10.1007/b98818 . H. Hotelling. Analysis of a Complex of Statistical Variables into Principal Components. Journal of Educational Psychology , 24(6):417–441, Sep 1933. doi: 10.1037/h0071325. URL https://doi.org/10.1037/h0071325 . Laura Hucker and Martin Wahl. A Note on the Prediction Error of Principal Component Regression in High Dimensions. 
Theory of Probability and Mathematical Statistics , 109 (0):37–53, Oct 2023. ISSN 1547-7363. doi: 10.1090/tpms/1196. URL http://dx.doi. org/10.1090/tpms/1196 . S.V. Kuznetsov. Perturbation Bounds of the Krylov Bases and Associated Hessenberg Forms. Linear Algebra and its Applications , 265(1-3):1–28, Nov 1997. doi: 10.1016/ s0024-3795(96)00299-6. URL https://doi.org/10.1016/s0024-3795(96)00299-6 . Clifford Lam and Qiwei Yao. Factor Modeling for High-Dimensional Time Series: Inference for the Number of Factors. The Annals of Statistics , 40(2), Apr 2012. doi: 10.1214/ 12-aos970. URL https://doi.org/10.1214/12-aos970 . Guillaume Lecu´ e and Zong Shang. A Geometrical Viewpoint on the Benign Overfitting Property of the Minimum l2-Norm Interpolant Estimator and its Universality. arXiv e- prints , art. arXiv:2203.05873, Mar 2022. doi: 10.48550/arXiv.2203.05873. URL https: //arxiv.org/abs/2203.05873 . 56 Ker-Chau Li. Sliced Inverse Regression for Dimension Reduction. Journal of the American Statistical Association , 86(414):316–327, Jun 1991. ISSN 1537-274X. doi: 10.1080/01621459.1991.10475035. URL http://dx.doi.org/10.1080/01621459.1991. 10475035 . Rebecca Marion, Bernadette Govaerts, and Rainer von Sachs. Adaclv for interpretable variable clustering and dimensionality reduction of spectroscopic data. Chemometrics and Intelligent Laboratory Systems , 206:104169, Nov 2020. ISSN 0169-7439. doi: 10.1016/j. chemolab.2020.104169. URL http://dx.doi.org/10.1016/j.chemolab.2020.104169 . Rebecca Marion, Johannes Lederer, Bernadette Goevarts, and Rainer von Sachs. VC-PCR: A Prediction Method based on Variable Selection and Clustering. Statistica Neerlandica , 79(1), Aug 2024. ISSN 1467-9574. doi: 10.1111/stan.12358. URL http://dx.doi.org/ 10.1111/stan.12358 . Moritz Moeller and Tino Ullrich. L2-Norm Sampling Discretization and Recovery of Func- tions from RKHS with Finite Trace. Sampling Theory, Signal Processing, and Data Analysis , 19(2), Jul 2021. ISSN 2730-5724. 
doi: 10.1007/s43670-021-00013-3. URL http://dx.doi.org/10.1007/s43670-021-00013-3 . Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, and Anant Sahai. Harmless Interpolation of Noisy Data in Regression. IEEE Journal on Selected Areas in Informa- tion Theory , 1(1):67–83, May 2020. ISSN 2641-8770. doi:
10.1109/jsait.2020.2984716. URL http://dx.doi.org/10.1109/JSAIT.2020.2984716 . Roberto I. Oliveira and Zoraida F. Rico. Improved Covariance Estimation: Optimal Ro- bustness and Sub-Gaussian Guarantees under Heavy Tails. The Annals of Statistics , 52 (5), Oct 2024. ISSN 0090-5364. doi: 10.1214/24-aos2407. URL http://dx.doi.org/ 10.1214/24-aos2407 . R. Penrose. A Generalized Inverse for Matrices. Mathematical Proceedings of the Cambridge Philosophical Society , 51(3):406–413, 1955. doi: 10.1017/S0305004100030401. Charles M. Price. The Matrix Pseudoinverse and Minimal Variance Estimates. SIAM Review , 6(2):115–120, 1964. ISSN 00361445. doi: http://www.jstor.org/stable/2028075. URL http://www.jstor.org/stable/2028075 . Marco Singer, Tatyana Krivobokova, Axel Munk, and Bert de Groot. Partial Least Squares for Dependent Data. Biometrika , 103(2):351–362, 04 2016. ISSN 0006-3444. doi: 10. 1093/biomet/asw010. URL https://doi.org/10.1093/biomet/asw010 . 57 James H. Stock and Mark W. Watson. Forecasting Using Principal Components from a Large Number of Predictors. Journal of the American Statistical Association , 97(460): 1167–1179, 2002. ISSN 01621459. doi: http://www.jstor.org/stable/3085839. URL http://www.jstor.org/stable/3085839 . Ningyuan Teresa, David W. Hogg, and Soledad Villar. Dimensionality Reduction, Reg- ularization and Generalization in Overparameterized Regressions. SIAM Journal on Mathematics of Data Science , 4(1):126–152, Feb 2022. ISSN 2577-0187. doi: 10.1137/ 20m1387821. URL http://dx.doi.org/10.1137/20M1387821 . Robert Tibshirani. Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society. Series B, Methodological , 58(1):267–288, 1996. ISSN 0035-9246. doi: https://www.jstor.org/stable/2346178. URL https://www.jstor.org/stable/ 2346178 . Alexander Tsigler and Peter L. Bartlett. Benign Overfitting in Ridge Regression. Journal of Machine Learning Research , 24(123):1–76, 2023. doi: http://jmlr.org/papers/v24/ 22-1398.html. 
URL http://jmlr.org/papers/v24/22-1398.html . Emil Uffelmann, Qin Qin Huang, Nchangwi Syntia Munung, Jantina de Vries, Yukinori Okada, Alicia R. Martin, Hilary C. Martin, Tuuli Lappalainen, and Danielle Posthuma. Genome-Wide Association Studies. Nature Reviews Methods Primers , 1(1), Aug 2021. ISSN 2662-8449. doi: 10.1038/s43586-021-00056-9. URL http://dx.doi.org/10.1038/ s43586-021-00056-9 . Sara A. van de Geer and Peter B¨ uhlmann. On the Conditions Used to Prove Oracle Results for the Lasso. Electronic Journal of Statistics , 3(none), Jan 2009. doi: 10.1214/09-ejs506. URL https://doi.org/10.1214/09-ejs506 . Roman Vershynin. Introduction to the Non-Asymptotic Analysis of Random Matrices , page 210–268. Cambridge University Press, May 2012. doi: 10.1017/cbo9780511794308.006. URL http://dx.doi.org/10.1017/CBO9780511794308.006 . Haohan Wang, Benjamin J Lengerich, Bryon Aragam, and Eric P Xing. Precision Lasso: Accounting for Correlations and Linear Dependencies in High-Dimensional Genomic Data. Bioinformatics , 35(7):1181–1187, Sep 2018. ISSN 1367-4811. doi: 10.1093/ bioinformatics/bty750. URL http://dx.doi.org/10.1093/bioinformatics/bty750 . Musheng Wei. The Perturbation of Consistent Least Squares Problems. Linear Algebra and its Applications , 112:231–245, Jan 1989. doi: 10.1016/0024-3795(89)90598-3. URL https://doi.org/10.1016/0024-3795(89)90598-3 . 58 H. Wold. Nonlinear Estimation by Iterative Least Squares Procedure. In F. N. David, editor, Research papers in statistics: Festschrift for J. Neyman , pages 411–444. Wiley, 1966. Ming Yuan and Yi Lin. Model Selection and Estimation in Regression with Grouped Variables. Journal of the Royal Statistical Society Series B: Statistical Methodology , 68 (1):49–67, Dec 2005. ISSN 1467-9868. doi: 10.1111/j.1467-9868.2005.00532.x. URL http://dx.doi.org/10.1111/j.1467-9868.2005.00532.x . Fuzhen Zhang. Quaternions and Matrices of Quaternions. Linear Algebra and its Applications , 251:21–57, 1997. ISSN 0024-3795. 
doi: 10.1016/0024-3795(95)00543-9. URL https://www.sciencedirect.com/science/article/pii/0024379595005439.
Peng Zhao and Bin Yu. On Model Selection Consistency of Lasso. Journal of Machine Learning Research, 7(90):2541–2563, 2006. URL http://jmlr.org/papers/v7/zhao06a.html.
Hui Zou. The Adaptive Lasso and
|
https://arxiv.org/abs/2505.01297v1
|
arXiv:2505.01324v5 [stat.ME] 21 May 2025

Design-Based Inference under Random Potential Outcomes via Riesz Representation

Yukai Yang, Department of Statistics, Uppsala University. yukai.yang@statistik.uu.se

May 22, 2025

Abstract. We introduce a design-based framework for causal inference that accommodates random potential outcomes, thereby extending the classical Neyman-Rubin model in which outcomes are treated as fixed. Each unit's potential outcome is modelled as a structural mapping $\tilde y_i(z,\omega)$, where $z$ denotes the treatment assignment and $\omega$ represents latent outcome-level randomness. Inspired by recent connections between design-based inference and the Riesz representation theorem, we embed potential outcomes in a Hilbert space and define treatment effects as linear functionals, yielding estimators constructed via their Riesz representers. This approach preserves the core identification logic of randomised assignment while enabling valid inference under stochastic outcome variation. We establish large-sample properties under local dependence and develop consistent variance estimators that remain valid under weaker structural assumptions, including partially known dependence. A simulation study illustrates the robustness and finite-sample behaviour of the estimators. Overall, the framework unifies design-based reasoning with stochastic outcome modelling, broadening the scope of causal inference in complex experimental settings.

Keywords: Design-based causal inference; Random potential outcomes; Riesz representation; Local dependence; Variance estimation; Simulation study.

1 Introduction

Randomised experiments remain central to the identification of causal effects. In the classical design-based perspective, pioneered by Neyman (1990) and Rubin (1974), potential outcomes are treated as fixed quantities, with all randomness attributed to the treatment assignment mechanism.
This approach has given rise to influential estimators such as the Horvitz-Thompson estimator, which rely solely on the experimental design for identification. Building on this foundation, several strands of recent literature have extended design-based inference to more complex settings. In particular, a substantial body of work addresses interference and dependence among units, including network and spatial designs (Aronow and Samii, 2017; Sävje et al., 2021; Athey et al., 2021). These methods typically retain the fixed-potential-outcomes assumption and model dependence through exposure mappings or stratified interference structures. Other related contributions focus on inference under model misspecification or robust estimation in observational studies (Abadie et al., 2020; Imbens, 2004), but rely on outcome modelling rather than randomisation for identification. A limitation of the classical design-based framework is its assumption that potential outcomes are deterministic. However, in many applied settings, outcomes may vary randomly even when treatment assignments are fixed. This variability can arise from factors such as measurement noise or random physiological responses. We propose a new design-based framework in which potential outcomes are modelled as random functions of the treatment assignment $z$ and a latent variable $\omega$, written as $\tilde y_i(z,\omega)$. This formulation is motivated by both practical and conceptual considerations. Many modern experiments, such as in biology or sensor-based fieldwork, exhibit intrinsic outcome-level variability even under identical treatments. More fundamentally, the very notion of an average treatment effect presumes a distribution over units or outcomes, suggesting
|
https://arxiv.org/abs/2505.01324v5
|
that potential outcomes should be treated as random rather than fixed. Our framework retains the design-based identification logic through randomised treatment assignment, assuming the treatment vector is randomly drawn from a known distribution $\mu$. By making outcome-level randomness explicit, the approach extends the classical Neyman-Rubin model and bridges design-based reasoning with super-population perspectives, offering a unified structure for causal inference in complex experimental environments. Our analysis is confined to experimental settings with known randomisation schemes. We do not consider observational designs, where unconfoundedness must be justified via covariate adjustment. The framework allows for spillover effects and continuous treatments. Recent contributions, notably by Harshaw, Wang, and Sävje (2022, hereafter HWS), have reframed the classical design-based approach through the Riesz representation theorem. By modelling potential outcomes as elements of a Hilbert space induced by the design, and defining treatment effects as linear functionals on that space, they construct a unified framework that encompasses standard weighting estimators and extends naturally to settings with interference or continuous treatments. Their asymptotic results on consistency and normality draw upon foundational ideas from Stein's method (Stein, 1972; Ross, 2011) and the analysis of dependency graphs as formalised by Ross (2011), which they adapt to the design-based context. This paper draws methodological inspiration from the Riesz-based framework introduced by HWS. While our approach broadly follows the logic of their estimator, namely, representing treatment effect functionals via projection in a Hilbert space, we work within a fundamentally different modelling context. In particular, we consider random potential outcomes defined over a latent probability space, in contrast to the fixed-outcome formulation adopted by HWS.
As a result, all assumptions, estimands, and inferential results are formulated independently to suit the stochastic framework introduced here. The Riesz representation theorem remains central: under minimal regularity conditions, it ensures that treatment effects can still be expressed as inner products with a unique representer function, even when potential outcomes are random. The resulting Riesz estimator remains unbiased and consistent under local dependence. In addition, it satisfies asymptotic normality, enabling valid inference from a single experimental realisation. These asymptotic results draw upon ideas from Stein's method and dependency graph theory, as also employed in HWS, but are adapted here to a random potential outcome setting. To implement valid hypothesis testing based on the Riesz estimator, one must input the true variance for the corresponding test statistic. Previous work has focused on deriving upper bounds on the variance that are sharp under strong conditions, but these bounds are not directly usable for inference and tend to be overly conservative in practice. In contrast, we develop a local-dependence variance estimator that does not require assumptions stronger than those already needed for asymptotic normality. The only additional requirement is some structural knowledge about local dependence, which is often realistic in networked, spatial, or blocked experimental designs. In addition to the local-dependence variance estimator, we propose correlation-based
|
https://arxiv.org/abs/2505.01324v5
|
and conservative set-based alternatives that yield valid inference when uncorrelated unit pairs can be reasonably approximated. This version is computationally simpler and circumvents the need to estimate weak or negligible cross-terms, making it especially practical in settings with limited structural information. Notably, these variance estimators are also applicable in fixed-outcome settings, thereby strengthening previous frameworks by providing a practical means of conducting inference based on a single realisation of the experimental design. A central insight of our framework is that it formalises a form of sample-based ergodicity under local dependence: averaging across units within a single experimental realisation can substitute for averaging across multiple hypothetical repetitions of the stochastic environment. Despite the presence of latent randomness in potential outcomes, our results show that one need not observe multiple parallel worlds to conduct valid inference. Instead, under appropriate moment and dependence assumptions, a single realisation of the experiment suffices for consistent estimation and asymptotically valid uncertainty quantification. We further discuss implementation via basis expansions, and conduct a simulation study to evaluate the finite-sample performance of the estimator. The results confirm key theoretical properties, such as unbiasedness, consistency, and valid coverage, and demonstrate that the method remains robust across a range of dependency structures. Throughout, we use "random" to refer to outcome-level variation due to latent variables, and "stochastic" to describe modelling frameworks that explicitly incorporate such randomness. The paper is organised as follows. Section 2 introduces the model and basic assumptions. Section 3 establishes large-sample properties of the estimator under local dependence.
Section 4 presents feasible variance estimators and discusses their consistency. Section 5 covers computational aspects of estimating the Riesz representer in both finite- and infinite-dimensional settings. Section 6 provides a simulation study. Section 7 concludes with a discussion of the broader implications and possible extensions.

2 Model Setup

2.1 The Potential Outcomes

We begin by considering a setting with $n$ units, where

Assumption 1 (The Stochastic Setting). Each unit's potential outcome is modelled as a general measurable function $y_i(z, x_i, \epsilon_i)$ with the following arguments: $z = (z_1, \ldots, z_n)$ denoting the vector of treatment assignments; $x_i = x_i(\omega)$ representing the possibly observed covariates for unit $i$; $\epsilon_i = \epsilon_i(\omega)$ denoting the idiosyncratic error or unobserved heterogeneity for unit $i$. Here, $\omega \in \Omega$ is a latent random element defined on a common probability space $(\Omega, \mathcal{F}_\omega, P)$ shared by all units.

Furthermore, we define the structural mapping from the latent space $\Omega$ to the variables $(x_i, \epsilon_i)$ and represent the potential outcome via the composition

$$\tilde y_i(z,\omega) = y_i(z, x_i(\omega), \epsilon_i(\omega)), \qquad (2.1)$$

since $x_i$ and $\epsilon_i$ are measurable functions of $\omega$. We refer to this as the stochastic setting because the potential outcomes $\tilde y_i(z,\omega)$ are explicitly modelled as random functions of a latent variable $\omega$, introducing outcome-level randomness into the design-based framework. This formulation enables the analysis of causal effects under a stochastic data-generating process and bridges the gap between classical design-based inference and super-population perspectives. Although $\omega$ is not unit-specific, its structure permits different subcomponents to influence different units, thereby being able to preserve unit-specific variability and independence and accommodating heterogeneous potential outcomes. The potential outcome
|
https://arxiv.org/abs/2505.01324v5
|
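The structural mapping (2.1) can be made concrete in a few lines. Everything in the sketch below is an illustrative assumption rather than part of the paper's framework: the linear form of the hypothetical structural function `y`, the spillover term, and the Gaussian covariates and errors bundled into `omega`.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def y(z, x_i, eps_i, i):
    # hypothetical structural function: intercept, own-treatment effect,
    # and a spillover from the average treatment of the other units
    spill = (z.sum() - z[i]) / (n - 1)
    return 1.0 + 2.0 * z[i] + 0.5 * spill + x_i + eps_i

def draw_omega():
    # omega bundles all outcome-level randomness: covariates x and errors eps
    return rng.normal(size=n), rng.normal(scale=0.1, size=n)

def y_tilde(i, z, omega):
    # the composition (2.1): y_i(z, x_i(omega), eps_i(omega))
    x, eps = omega
    return y(z, x[i], eps[i], i)

omega = draw_omega()
z = np.array([1, 0, 1, 0, 1])
outcomes = [y_tilde(i, z, omega) for i in range(n)]
```

Once a realisation of `omega` is drawn, each `y_tilde(i, z, omega)` is a deterministic function of `z`, mirroring the discussion below Assumption 2; the spillover term illustrates that interference is permitted.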
for unit $i$ depends on the entire treatment vector $z$, so that interference (or spillover effects) is permitted. Notably, one may also consider the special case of no spillover effects, whereby

$$y_i(z, x_i, \epsilon_i) = y_i(z_i, x_i, \epsilon_i), \qquad (2.2)$$

which corresponds to a version of the Stable Unit Treatment Value Assumption (SUTVA) incorporating consistency, as introduced by Rubin (1980). In the present work we do not impose the no-spillover condition a priori; rather, we allow for general interference and propose to test for its presence. In practice, it typically suffices to allow heterogeneity across units only through $x_i$ and $\epsilon_i$, making it unnecessary to distinguish $y_i$ explicitly. In such cases, one may work with a common structural function $y$, while still obtaining heterogeneous mappings $\tilde y_i(z,\omega) = y(z, x_i, \epsilon_i)$ across units. The structural mapping (2.1) is the primary object of interest in what follows. Denoting $\mathcal{Z}$ as the treatment assignment space, equipped with its Borel sigma-algebra $\mathcal{F}_z$, we presume that the structural mapping $\tilde y_i : \mathcal{Z} \times \Omega \to \mathbb{R}$ is measurable with respect to the sigma-algebra $\mathcal{F}_z \otimes \mathcal{F}_\omega$ on the domain and the Borel sigma-algebra $\mathcal{B}(\mathbb{R})$ on the codomain. This ensures that the relevant random variables and their expectations are well-defined, and it also serves as a prerequisite for defining inner products and applying Hilbert space tools in the sequel. Note that the measurability of the covariates is sometimes required to ensure that conditional expectations and probabilities given $x_i$ are well-defined within the underlying probability space, although we do not explicitly rely on this property in the present paper. Following randomisation principles, we assume that

Assumption 2 (Randomisation). The treatment assignment vector $z$ is drawn from a known randomisation distribution that is independent of the outcome-generating process given the latent variable $\omega$.
Assumption 2 is not required for the unbiasedness or asymptotic validity of the Riesz estimator. Its purpose is operational rather than inferential: it ensures that the design distribution $\mu$ remains valid when conditioning on $\omega$, which is essential for defining the Riesz representer via the inner product induced by $\mu$. Once the representer $\psi_i$ is constructed, all subsequent inference, including estimation and asymptotic analysis, proceeds without requiring independence between $z$ and $\omega$. This assumption will be invoked again in Section 5, where the Gram matrix is computed in order to estimate the Riesz representer of the potential outcomes. This assumption also justifies the use of the product measure $\mu(dz)P(d\omega)$ in the corresponding integrals. It allows expectations of the form $E_{z,\omega}[\cdot]$ to be written as iterated expectations $E_z E_\omega[\cdot]$. Throughout, when writing integrals or expectations in this way, we implicitly invoke this assumption. It is worth noting that this assumption corresponds to the assumption of ignorability or unconfoundedness, as articulated in Rubin (1978) and further developed in Rosenbaum and Rubin (1983) for observational studies. Under this formulation, $\omega$ governs all outcome-related randomness across units, while the randomisation mechanism for $z$ remains externally controlled and separate (see Neyman (1990) and Fisher (1935)). Conceptually, once a particular realisation of $\omega$ is fixed, each unit's potential outcome function $\tilde y_i(z,\omega)$ becomes deterministic in $z$; hence, the only source of randomness in the observed outcomes stems
|
https://arxiv.org/abs/2505.01324v5
|
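The factorisation $E_{z,\omega}[\cdot] = E_z E_\omega[\cdot]$ licensed by Assumption 2 can be checked exactly on a toy discrete space. The two-unit Bernoulli design, the two-state latent variable, and the integrand `u` below are all hypothetical, chosen only so that both sides can be enumerated:

```python
import itertools

# hypothetical design: two units, independent Bernoulli(0.5) assignments
p = 0.5
z_support = list(itertools.product([0, 1], repeat=2))
mu = {z: p ** sum(z) * (1 - p) ** (2 - sum(z)) for z in z_support}

# hypothetical latent variable with two equally likely states
omega_support = {0: 0.5, 1: 0.5}

def u(z, w):
    # an arbitrary integrand u(z, omega)
    return (1 + z[0] + 2 * z[1]) * (1.5 if w else 0.5)

# joint expectation under the product measure mu(dz) P(domega)
joint = sum(mu[z] * pw * u(z, w)
            for z in z_support for w, pw in omega_support.items())

# iterated expectation E_z[ E_omega[ u(z, omega) ] ]
iterated = sum(mu[z] * sum(pw * u(z, w) for w, pw in omega_support.items())
               for z in z_support)
```

Under independence of `z` and `omega` the two quantities coincide; with a dependent joint law, only the joint-distribution version would be well-defined, as the text notes.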
from the randomised assignment of treatments. This approach preserves the fundamental idea that each unit has a well-defined potential outcome under every treatment assignment, capturing the core principle of SUTVA (or, more generally, the notion of well-defined potential outcomes), even as we permit these outcomes to vary randomly across realisations of $\omega$. This formulation is flexible and expressive in that it captures both unconfoundedness and confounding, depending on the relationship between $z$ and the latent variable $\omega$. If $z$ and $\omega$ are independent, then $\tilde y_i(z_0,\omega)$ and $\tilde y_i(z_1,\omega)$ are independent of $z$. More generally, if $z$ and $\omega$ are independent given $x \in X_0$, then $y_i(z_0, x, \epsilon_i)$ and $y_i(z_1, x, \epsilon_i)$ are independent of $z \mid x \in X_0$, for any measurable $X_0 \subset \mathcal{F}_x := \{A \subseteq x_i(\Omega) : x_i^{-1}(A) \in \mathcal{F}_\omega\}$. In randomised experiments, the design ensures that $z$ is independent of $(x_i, \epsilon_i)$, mitigating confounding. In observational studies, however, it is imperative to adjust for $x_i$ to control for confounding. In either case, this framework models potential outcomes as random rather than fixed.

2.2 Model Space and Inner Product

Let $L^p(\mathcal{Z} \times \Omega)$ denote the space of all measurable functions $u : \mathcal{Z} \times \Omega \to \mathbb{R}$ satisfying $E[|u(z,\omega)|^p] < \infty$, for some integer $p \ge 1$. For notational convenience, we henceforth write $L^p$ in place of $L^p(\mathcal{Z} \times \Omega)$ whenever the meaning is clear from context. Furthermore, we denote the $p$-norm of $u \in L^p$ by

$$\|u\|_p := \left( \int_{\mathcal{Z} \times \Omega} |u(z,\omega)|^p \, \mu(dz)\, P(d\omega) \right)^{1/p} = \left( E|u(z,\omega)|^p \right)^{1/p}, \qquad (2.3)$$

where $\mu$ denotes the probability measure for randomisation. The equality in (2.3) relies on Assumption 2. If $z$ and $\omega$ are not independent, the intermediate expression involving the product measure should be omitted, and the $L^p$ norm should be defined directly in terms of the joint distribution of $(z,\omega)$. We write $\|u\| := \|u\|_2$ for brevity if $u \in L^2$. Note that the $p$-norm characterises the $p$-th moment of the random variable $u$.
We model each unit's potential outcome function, via the structural mapping (2.1), as an element of a model space $\mathcal{M}_i$, which is a subspace of $L^2$. Formally, we write

Assumption 3 (Model Space).
$$\tilde y_i(z,\omega) \in \mathcal{M}_i \subset L^2(\mathcal{Z} \times \Omega). \qquad (2.4)$$

Our construction is formulated in a distinct functional setting that explicitly separates treatment assignments from latent variation induced by unobserved random elements. The model space incorporates both an observable assignment vector $z$ and a latent random element $\omega$, capturing two sources of variation. This separation underpins our interpretation of potential outcomes as random functions and motivates a stochastic modelling framework. In particular, the space $\mathcal{M}_i$ is defined over random potential outcomes, and the associated inner product reflects variation due to both assignment and latent structure. Importantly, the spaces from which $z$ and $\omega$ are drawn may differ in structure: while $\omega$ is modelled as a general element of a latent probability space, $z$ is often a finite-dimensional or even discrete vector representing treatment assignments. This structural heterogeneity is explicitly accommodated in our formulation of the product space $\mathcal{Z} \times \Omega$ and the associated function space $\mathcal{M}_i$. The framework of HWS can be viewed as a special case of our model, corresponding to the degenerate setting where $\Omega = \{\omega_0\}$ for some $\omega_0$. In this case, the latent variable $\omega_0$ is effectively fixed, and the realised data serve as the entire outcome-generating mechanism. Thus, the potential outcome function (
|
https://arxiv.org/abs/2505.01324v5
|
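The $p$-norm (2.3) is just a $p$-th moment under the product measure, and can be computed exactly on a toy discrete space. The one-unit design, the two-state latent variable, and the function `u` below are arbitrary illustrations:

```python
# toy product measure mu(dz) x P(domega): one binary treatment,
# two equally likely latent states (all choices here are illustrative)
z_support = {(0,): 0.5, (1,): 0.5}
omega_support = {0: 0.5, 1: 0.5}

def u(z, w):
    # an arbitrary element of L^p on this toy space
    return z[0] + (1.0 if w else -1.0)

def p_norm(f, p):
    # ||f||_p = (E |f(z, omega)|^p)^(1/p), as in equation (2.3)
    m = sum(pz * pw * abs(f(z, w)) ** p
            for z, pz in z_support.items()
            for w, pw in omega_support.items())
    return m ** (1.0 / p)
```

On this space `u` takes the values -1, 1, 0, 2 with equal weight, so $\|u\|_1 = 1$ and $\|u\|_2^2 = 3/2$, which the enumeration reproduces.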
2.1) becomes a deterministic function of $z$, recovering the setting considered in their work. We equip $\mathcal{M}_i$ with the inner product

$$\langle u, v \rangle := E_\omega\big[ E_z[ u(z,\omega) v(z,\omega) ] \big] = E\big[ u(z,\omega) v(z,\omega) \big], \qquad (2.5)$$

where the second equality follows from the law of iterated expectations. The associated norm is given by $\|u\| = \sqrt{\langle u, u \rangle}$, which coincides with the standard $L^2$-norm. In the degenerate case where $\Omega = \{\omega_0\}$, the inner product reduces to

$$\langle u, v \rangle_0 = E_z\big[ u(z,\omega_0) v(z,\omega_0) \big], \qquad (2.6)$$

which coincides with the inner product employed in HWS. This structure enables us to apply the Riesz representation theorem to construct estimators for treatment effects, even in the presence of interference.

2.3 Treatment Effect Functionals and Riesz Representation

While our model space $\mathcal{M}_i \subset L^2(\mathcal{Z} \times \Omega)$ may not be complete under the inner product defined in (2.5), this poses no issue in what follows. Any inner product space admits a Hilbert space completion, and the Riesz representation theorem applies in that completed space. Hence, throughout this paper, whenever necessary, we implicitly work with the closure of $\mathcal{M}_i$ in $L^2(\mathcal{Z} \times \Omega)$. Let $\mathcal{M}_i'$ denote the dual space of $\mathcal{M}_i$.

Assumption 4 (Dual Representability of the Treatment Effect). The treatment effect $\theta_i$ for each unit $i$ can be represented as a continuous linear functional on the model space of potential outcomes, viz., $\theta_i : \mathcal{M}_i \to \mathbb{R}$ with $\theta_i \in \mathcal{M}_i'$.

This assumption reflects a standard construction in functional analysis, where a treatment effect is represented as a continuous linear functional on a function space of potential outcomes. Such a formulation enables the use of the Riesz representation theorem, which yields an estimator expressed as an inner product with a representer function.
This approach applies to both classical fixed-outcome frameworks and to stochastic settings where potential outcomes depend on latent variables. Note that functions in an $L^2$ space are identified through equivalence classes, meaning two functions differing only on a set of measure zero are considered identical. Consequently, a well-defined linear functional must assign the same value to all functions within the same equivalence class. A potential pitfall arises when attempting to use evaluation functionals, such as $\theta(u) = u(x_0)$, that assign values based on a single point. Such evaluation maps generally are not well-defined in an $L^2$ context because they may yield different values for functions identical almost everywhere, violating linearity and the identification of elements in the same equivalence class. In our analysis, we explicitly avoid these issues by requiring that the treatment effect functional $\theta_i$ be defined consistently on equivalence classes. More specifically, if two functions differ by a function of zero $L^2$-norm (i.e., distance zero), the functional $\theta_i$ shall assign them the same value. From Assumption 4, we also require that the linear functional $\theta_i$ on a normed space is continuous or, equivalently, bounded. A heuristic justification is that requiring the treatment effect functional to be continuous ensures that small perturbations in the potential outcome function (as measured by the $L^2$ norm) result in small changes in the treatment effect. This continuity is essential for invoking the Riesz representation theorem,
|
https://arxiv.org/abs/2505.01324v5
|
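The inner product (2.5) and its degenerate special case (2.6) can be evaluated exactly on a small discrete space. The design, latent states, and the functions `u`, `v` below are all toy assumptions:

```python
# toy product space: Bernoulli(0.5) treatment, two equally likely latent states
z_support = {(0,): 0.5, (1,): 0.5}
omega_support = {0: 0.5, 1: 0.5}

def inner(u, v):
    # <u, v> = E[ u(z, omega) v(z, omega) ], equation (2.5)
    return sum(pz * pw * u(z, w) * v(z, w)
               for z, pz in z_support.items()
               for w, pw in omega_support.items())

def inner0(u, v, w0=1):
    # degenerate case (2.6): latent state fixed at omega_0 = w0
    return sum(pz * u(z, w0) * v(z, w0) for z, pz in z_support.items())

u = lambda z, w: float(z[0])          # depends only on the assignment
v = lambda z, w: 1.0 if w else -1.0   # depends only on the latent state
```

Here $\langle u, u\rangle = E[z^2] = 1/2$, and $\langle u, v\rangle = 0$ because `v` is mean-zero in `omega` and independent of `z`; fixing the latent state as in (2.6) instead gives $\langle u, v\rangle_0 = 1/2$, illustrating how the two inner products differ.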
which in turn guarantees that $\theta$ can be uniquely represented by an element of the Hilbert space. Researchers need not be concerned about such technicalities in practice, as they are always free to define or adjust their treatment effect functional to comply with the requirements in Assumption 4. For instance, one might consider the following examples of continuous linear functionals.

Expected treatment effect for binary treatment:
$$\theta_i(u) = E_\omega\big[ u(z^{(1)},\omega) - u(z^{(0)},\omega) \big], \qquad (2.7)$$
representing the expected outcome difference when shifting the treatment assignment from $z^{(0)}$ to $z^{(1)}$.

Expected treatment effect for treatment intervals or regions:
$$\theta_i(u) = E_\omega\big[ E_{z \in A}[ u(z,\omega) ] \big] - E_\omega\big[ E_{z \in B}[ u(z,\omega) ] \big], \qquad (2.8)$$
where $A, B \subset \mathcal{Z}$ and $A \cap B = \emptyset$, representing the expected outcome difference between treatment groups $A$ and $B$.

Expected partial derivative:
$$\theta_i(u) = E_\omega\left[ \frac{\partial u}{\partial z}(z_0,\omega) \right], \qquad (2.9)$$
at some point $z_0 \in \mathcal{Z}$, representing the expected marginal effect at $z_0$.

The next two propositions show that these functionals, under certain conditions, can be linear and continuous.

Proposition 1. Suppose that $\mu(A) > 0$ and $\mu(B) > 0$. The functional (2.8) is linear and continuous on $\mathcal{M}_i$.

Note that Example (2.7) is a special case of Example (2.8), obtained by setting $A = \{z^{(1)}\}$ and $B = \{z^{(0)}\}$. The condition $\mu(A) > 0$ and $\mu(B) > 0$ is commonly referred to as the positivity or overlap assumption in the causal inference literature. It ensures that both treatment groups are represented in the randomisation distribution, which guarantees that the corresponding estimand is well-defined and estimable. In our case, the positivity assumption plays a crucial role in verifying that the functional is both linear and continuous, properties that are essential for the treatment effect to be well-defined and for the Riesz representation to hold.
It is more subtle to establish the linearity and continuity of the functional (2.9), as it requires additional structure from the model space.

Proposition 2. Let $\mathcal{Z}$ be open and bounded, fix $z_0 \in \mathcal{Z}$, and let $u \in L^2(\Omega; H^1(\mathcal{Z}))$ (see footnote 1), i.e., $u \in L^2$ and $\partial u/\partial z \in L^2$. Then the functional (2.9) is linear and continuous on $L^2(\Omega; H^1(\mathcal{Z}))$.

This result implies that, for the functional to satisfy the necessary continuity condition, the model space must be further restricted to $\mathcal{M}_i \subset L^2(\Omega; H^1(\mathcal{Z})) \subset L^2$. Within this space, weak derivatives are well-defined and continuous, thereby ensuring the applicability of the Riesz representation theorem.

Theorem 1 (Riesz Representation Theorem in $\mathcal{M}_i$). Under Assumptions 1, 3, and 4, for every $\theta_i \in \mathcal{M}_i'$, there exists a unique element $\psi_i \in \mathcal{M}_i$ such that
$$\theta_i(u) = \langle u, \psi_i \rangle \quad \text{for all } u \in \mathcal{M}_i, \qquad (2.10)$$
and
$$\|\theta_i\|_{\mathcal{M}_i'} = \|\psi_i\|, \qquad (2.11)$$
where the dual norm is defined by
$$\|\theta_i\|_{\mathcal{M}_i'} := \sup_{\|u\| \le 1} |\theta_i(u)|. \qquad (2.12)$$

The resulting Riesz representer $\psi_i$ plays a central role in our estimation framework. This approach retains the mathematical elegance of Riesz-based identification while extending it to accommodate outcome-level stochasticity. It lays the foundation for valid estimation in experiments where heterogeneity arises not only from treatment variation but also from latent random effects. By constructing the Riesz representer $\psi_i$ corresponding to the treatment effect functional $\theta_i$, we only consider those treatment effects that are representable in the dual space $\mathcal{M}_i'$. Consequently, we estimate the treatment effect
$$\theta_i(\tilde y_i) = \langle \tilde y_i, \psi_i \rangle = E\big[ \tilde y_i(z,\omega) \psi_i(z,\omega) \big], \qquad (2.13)$$
by using the corresponding Riesz estimator
$$\hat\theta_i(z,\omega) = \tilde y_i(z,\omega) \psi_i(z,\omega), \qquad (2.14)$$
as, in practice, we observe
|
https://arxiv.org/abs/2505.01324v5
|
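The representation (2.10) can be verified numerically in a simple special case. Under a Bernoulli($p$) assignment with a single binary treatment, the Horvitz-Thompson-style weight function $\psi(z) = z/p - (1-z)/(1-p)$ is a standard candidate representer for the binary-contrast functional (2.7); this is an illustrative special case, not the paper's general construction. The outcome function `u` below is arbitrary:

```python
# toy check that the HT-style weight acts as a Riesz representer for
# theta(u) = E_omega[ u(1, omega) - u(0, omega) ] under a Bernoulli(p) design
p = 0.3
omega_support = {0: 0.5, 1: 0.5}

def psi(z, w):
    # candidate representer; constant in omega
    return z / p - (1 - z) / (1 - p)

def u(z, w):
    # an arbitrary outcome function on the toy space
    return 2.0 + 3.0 * z + (0.7 if w else -0.7) + 0.5 * z * w

def theta(f):
    # the target functional (2.7)
    return sum(pw * (f(1, w) - f(0, w)) for w, pw in omega_support.items())

def inner(a, b):
    # <a, b> under the product measure mu x P, equation (2.5)
    return sum(pz * pw * a(z, w) * b(z, w)
               for z, pz in {1: p, 0: 1 - p}.items()
               for w, pw in omega_support.items())
```

For each fixed latent state, $E_z[u(z,\omega)\psi(z)] = u(1,\omega) - u(0,\omega)$ exactly, so $\langle u, \psi\rangle = \theta(u)$, matching (2.10) and (2.13) in this special case.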
only one realisation of the outcome $\tilde y_i(z,\omega)$ for certain $z$ and $\omega$. It follows immediately that the Riesz estimator in (2.14) for each unit is unbiased for its treatment effect given in (2.13).

[Footnote 1: $L^2(\Omega; H^1(\mathcal{Z}))$ denotes the Bochner space, and $H^1(\mathcal{Z})$ denotes the Sobolev space $W^{1,2}(\mathcal{Z})$.]

Definition 1 (Aggregate Estimand). The finite-sample estimand we consider in this paper is defined as a weighted average of individual treatment effects
$$\tau_n = \sum_{i=1}^n \nu_{ni}\, \theta_i(\tilde y_i), \qquad (2.15)$$
where the weights $\nu_{ni} \in \mathbb{R}$ satisfy $\nu_{ni} \ge 0$.

This quantity corresponds to an average treatment effect under a general weighting scheme, aligning with the broader framework of causal inference, where estimands are typically defined as weighted averages of unit-level causal effects across different designs and sampling schemes. Our setup allows for general fixed weighting schemes $\nu_i$, providing greater flexibility in defining estimands. More importantly, $\tau_n$ reflects an integration over latent randomness $\omega$, and thus captures variation across a distribution of potential outcomes, effectively averaging over "parallel worlds" indexed by $\omega$. In contrast to the classical Neyman-Rubin framework, where potential outcomes are fixed and comparisons are defined across units within a single universe, our formulation treats outcomes as random functions and estimands as expectations over both treatment assignments and latent heterogeneity. This shift from fixed to random potential outcomes leads to a richer class of estimands, grounded in the observed design but expressive enough to reflect underlying stochastic complexity. It also conceptually aligns with super-population inference while maintaining the rigour of design-based identification. The estimand $\tau_n$ does not rely on Assumption 2; it is mathematically well-defined regardless of whether $z$ and $\omega$ are independent.
However, the dependence structure between $z$ and $\omega$ influences the potential outcomes $\tilde y_i$ and thus affects the value of the estimand. In the causal inference literature, selecting the right estimand involves specifying a target quantity that captures the scientific question of interest under the assumed data-generating process. Different assumptions about the relationship between $z$ and $\omega$ lead to different estimands, each carrying its own causal or associational interpretation. The estimand considered by HWS corresponds to the special case of our formulation,
$$\tau_n^0 = \frac{1}{n} \sum_{i=1}^n \langle \tilde y_i, \psi_i \rangle_0 = \frac{1}{n} \sum_{i=1}^n E_z\big[ \tilde y_i(z,\omega_0) \psi_i(z,\omega_0) \big], \qquad (2.16)$$
where $\Omega = \{\omega_0\}$ is a singleton probability space. In this special case, the treatment effect is deterministic and the average is equally weighted. A common choice for the weights is the uniform weighting $\nu_{ni} = 1/n$, which yields the sample average treatment effect (SATE). More generally, this formulation accommodates subgroup-specific or covariate-adjusted averages, depending on the structure of the weights. We impose the following assumption to control the magnitude of the weights $\nu_{ni}$.

Assumption 5 (Uniform Weight Upper Bound). There exists a constant $\bar\nu > 0$ such that
$$n \nu_{ni} \le \bar\nu \quad \text{for all } i \text{ and all } n \in \mathbb{N}. \qquad (2.17)$$

This guarantees that each weight is bounded by a constant multiple of $1/n$, uniformly in $i$, without requiring the individual sequences $\{n\nu_{ni}\}_{n \ge i}$ to converge. Consequently, no finite subset of indices can receive an asymptotically dominant share of the total weight. Note that Assumption 5 implies $\sup_n \sum_{i=1}^n \nu_{ni} < \infty$, but the limit need not equal one, which is quite general. In the continuous analogue this
|
https://arxiv.org/abs/2505.01324v5
|
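The unbiasedness claimed in Theorem 2 can be checked exactly by enumerating a small design. Everything below is a toy assumption: independent Bernoulli($p$) assignments, no interference, HT-style unit-level representers, and a linear hypothetical outcome model:

```python
import itertools

# exact unbiasedness check for the aggregate Riesz estimator in a toy setting
p, n = 0.4, 3
omega_support = {0: 0.5, 1: 0.5}                 # two-state latent variable
nu = [1.0 / n] * n                               # uniform weights (SATE)
z_support = {z: p ** sum(z) * (1 - p) ** (n - sum(z))
             for z in itertools.product([0, 1], repeat=n)}

def y_tilde(i, z, w):
    # hypothetical random potential outcomes: unit effect i+1, latent shift
    return 1.0 + (i + 1) * z[i] + 0.3 * w

def psi(i, z, w):
    # HT-style representer for unit i's own-treatment contrast (toy case)
    return z[i] / p - (1 - z[i]) / (1 - p)

def tau_hat(z, w):
    # aggregate Riesz estimator, as in (2.18)
    return sum(nu[i] * y_tilde(i, z, w) * psi(i, z, w) for i in range(n))

# E[tau_hat] computed exactly over the product measure mu x P
expected = sum(pz * pw * tau_hat(z, w)
               for z, pz in z_support.items()
               for w, pw in omega_support.items())

# the estimand: here theta_i = E_omega[y_i(z_i=1) - y_i(z_i=0)] = i + 1
tau_n = sum(nu[i] * (i + 1) for i in range(n))
```

The latent shift `0.3 * w` cancels in each treatment contrast, so $\tau_n = (1+2+3)/3 = 2$, and the enumerated $E[\hat\tau_n]$ matches it.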
corresponds to the integrability of the weight function. Given the unit-level Riesz estimators (2.14) and the weights,

Definition 2 (Aggregate Riesz Estimator). We define the corresponding aggregate Riesz estimator for the finite-sample estimand as
$$\hat\tau_n(z,\omega) = \sum_{i=1}^n \nu_{ni}\, \hat\theta_i(z,\omega). \qquad (2.18)$$

Thus we have the unbiasedness of the Riesz estimator in the following.

Theorem 2 (Unbiasedness of the Riesz Estimator). The Riesz estimator defined in (2.14) is unbiased for the individual treatment effect, i.e., $E[\hat\theta_i(z,\omega)] = \theta_i(\tilde y_i)$. Consequently, the aggregate Riesz estimator in (2.18) is unbiased for the finite-sample estimand (2.15), i.e., $E[\hat\tau_n(z,\omega)] = \tau_n$.

This is an immediate consequence of the fact that a single realisation of a random variable is, by definition, unbiased for its expectation.

3 Large Sample Properties of the Aggregate Riesz Estimator

In this section, we clarify the notion of local independence required in our framework, which is a concept that has been widely adopted in recent literature on causal inference under interference. Informally, local dependence refers to the idea that subsets of random variables are conditionally independent of those outside their respective neighbourhoods. This structure is often represented using a dependency graph, in which vertices correspond to units and edges encode potential statistical dependence. For further information about the dependence structure of the variables, see for example Chen and Shao (2004).

Definition 3 (Dependency Neighbourhoods). For each unit $i$, let $\{M_{ik}\}_{k \in I_i}$ be the collection of all subsets $M_{ik} \subset \{1, 2, \ldots, n\}$ such that the pair of random variables $(\tilde y_i, \psi_i)$ is independent of the set $\{(\tilde y_j, \psi_j) : j \notin M_{ik}\}$. We define the dependency neighbourhood of unit $i$ as the intersection over all such sets:
$$N_i := \bigcap_{k \in I_i} M_{ik}. \qquad (3.1)$$

This definition ensures that $N_i$ is the smallest subset of units for which independence holds.
Formally, for all such $j \notin N_i$, and for all measurable sets $A_i \in \mathcal{F}_{\tilde y_i}$, $B_i \in \mathcal{F}_{\psi_i}$, $A_j \in \mathcal{F}_{\tilde y_j}$, and $B_j \in \mathcal{F}_{\psi_j}$, we have
$$P\big((A_i \cap B_i) \cap (A_j \cap B_j)\big) = P(A_i \cap B_i)\, P(A_j \cap B_j).$$
This definition corresponds to Definition 3.1 in Ross (2011) and is a weaker version of Definition 5.1 in HWS, as it only requires independence of the potential outcome $\tilde y_i$ and the Riesz representer $\psi_i$ from those outside $N_i$, rather than full functional independence across the spaces $\mathcal{M}_i$ and $\mathcal{M}_j$. In contrast to HWS, who assume independence of the full model spaces across units, our framework only requires independence between the observable random pairs $(\tilde y_i, \psi_i)$. This relaxation is justified by the introduction of a latent random variable $\omega$, under which potential outcomes are modelled as conditionally random functions. Importantly, our formulation permits cases where the model spaces coincide, i.e., $\mathcal{M}_i = \mathcal{M}_j$, so long as the associated outcome-representer pairs remain independent. Consequently, we do not assume functional independence at the space level, but merely require that the realised outcomes and their associated Riesz representers be independent across sufficiently separated units. This distinction both broadens the scope of our analysis and clarifies the assumptions necessary for variance control and asymptotic normality in a design-based setting. We denote by $|N_i|$ the number of units in the dependency neighbourhood of unit $i$, which we refer to as the neighbourhood size for unit $i$. Let
|
https://arxiv.org/abs/2505.01324v5
|
1≤i≤n|Ni|anddn=n−1/summationtextn i=1|Ni| denote, respectively, the maximum and average neighbourho od sizes across the sample. These quantities will play a central role in the asymptotic a nalysis to follow. In practice, the specification of neighbourhoods Nidepends on substantive knowledge of the experimental context. In network experiments, for exam ple,Nimight include unit i and its observed social or physical connections. In spatial designs, it is common to define Nivia fixed-radius rules or k-nearest-neighbour relationships based on geographic dis tance. When no such structure is explicitly available, one can cons ervatively define Nito include all units within the same covariate stratum or treatment block a s uniti. These strategies allow researchers to encode plausible channels of dependence wit hout imposing strong model-based assumptions. The following two assumptions restrict the growth rate of th e dependency neighbourhoods. Assumption 6 (Maximum Dependency Neighbourhood Size) .The maximum dependency neighbourhood size DnsatisfiesDn=o(nd)for somed>0. Assumption 7 (Average Dependency Neighbourhood Size) .The average dependency neigh- bourhood size dnsatisfies (a)dn=o(n); (b)supn∈Ndn<∞. Assumptions 6and7(a) regulate the growth of dependency neighbourhoods and pl ay a role analogous to sparsity constraints commonly used in the analysis of dependent data. Assumption 7(b) additionally imposes uniform boundedness of the averag e neighbourhood size. These conditions are standard in the local dependence literature and are stated here explicitly to clarify their role and facilitate reference i n our self-contained derivations. In addition, for a sequence of functions u1,...,un∈Lp, wherep >1, we define the max- p 12 Yang Design-Based Inference with Random Outcomes norm as /⌊ard⌊lu/⌊ard⌊ln max,p:= max 1≤i≤n/⌊ard⌊lui/⌊ard⌊lp. (3.2) The sequence /⌊ard⌊lu/⌊ard⌊ln max,pis clearly non-decreasing in n, and, as such, is bounded if and only if it converges as n→ ∞. 
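As a small illustration of the quantities $D_n$ and $d_n$, the following sketch computes them from a collection of neighbourhood sets; the block structure in the example is an illustrative assumption (the conservative stratum-based choice described above), not part of the paper's formal development.

```python
# Summary sizes D_n (maximum) and d_n (average) of dependency neighbourhoods.
def neighbourhood_sizes(neighbourhoods):
    sizes = [len(N) for N in neighbourhoods]
    return max(sizes), sum(sizes) / len(sizes)   # D_n, d_n

# Example: 6 units in two blocks {0,1,2} and {3,4,5}; each unit's
# neighbourhood N_i is taken to be its own block.
blocks = [[0, 1, 2], [3, 4, 5]]
Ni = [set(b) for b in blocks for _ in b]
D_n, d_n = neighbourhood_sizes(Ni)
print(D_n, d_n)   # 3 3.0
```

Under sparsity conditions such as Assumption 6, these sizes grow slowly relative to $n$.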
We now state a proposition that provides an upper bound on the variance, denoted by $\sigma_n^2:=V[\hat\tau_n(z,\omega)]$, of the aggregate Riesz estimator $\hat\tau_n(z,\omega)$ of the estimand $\tau_n$. This result follows from the setup and notation introduced above, and it serves to suggest the consistency of the estimator under appropriate conditions.

Proposition 3 (Variance Upper Bound). Under Assumptions 1 and 3–5, the following inequality holds:
$$\sigma_n^2\le\frac{\bar\nu^2}{n^2}\sum_{i=1}^n|N_i|\,\|\tilde y_i\|_p^2\,\|\psi_i\|_q^2, \tag{3.3}$$
for any $p,q\in[1,\infty]$ satisfying $1/p+1/q=1/2$. The inequality holds trivially if either $\|\tilde y_i\|_p$ or $\|\psi_i\|_q$ has no finite upper bound. Furthermore, defining
$$S_2:=\Big\{(p,q)\in[1,\infty]^2:\tfrac1p+\tfrac1q=\tfrac12\Big\},$$
we have
$$\sigma_n^2\le\frac{\bar\nu^2 d_n}{n}\Big(\inf_{(p,q)\in S_2}\|\tilde y\|^n_{\max,p}\,\|\psi\|^n_{\max,q}\Big)^2. \tag{3.4}$$
The inequality holds trivially if $\inf_{(p,q)\in S_2}\|\tilde y\|^n_{\max,p}\|\psi\|^n_{\max,q}$ has no finite upper bound.

Proposition 3 provides an upper bound on the variance of the aggregate estimator. It shows that the variance decreases with sample size under boundedness of the unit-level norms and local sparsity, highlighting the interaction between dependence complexity and unit-level variability in determining inferential precision. In particular, the variance decays with the sample size provided that there exists a pair $(p,q)\in S_2$ for which the norms $\|\tilde y\|^n_{\max,p}$ and $\|\psi\|^n_{\max,q}$ remain bounded, and that the neighbourhood sizes $|N_i|$ are uniformly small. The bound thus underscores the importance of controlling both the complexity of dependence and the magnitude of unit-level variation in order to obtain valid inference.
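The bound (3.3) can be motivated by the following heuristic sketch (ours, not the paper's proof), assuming the product form $\hat\theta_i=\tilde y_i\,\psi_i$ of the unit-level estimator and weights bounded as $|\nu_{ni}|\le\bar\nu/n$:

```latex
\sigma_n^2
  = \sum_{i=1}^n \sum_{j \in N_i} \nu_{ni}\nu_{nj}\,
      \operatorname{Cov}\!\big(\hat\theta_i,\hat\theta_j\big)
  \;\le\; \frac{\bar\nu^2}{n^2} \sum_{i=1}^n \sum_{j \in N_i}
      \big\|\hat\theta_i\big\|_2 \big\|\hat\theta_j\big\|_2
  \;\le\; \frac{\bar\nu^2}{n^2} \sum_{i=1}^n |N_i|\,
      \big\|\hat\theta_i\big\|_2^2 ,
```

where the covariance vanishes for $j\notin N_i$ by Definition 3, the first inequality is Cauchy–Schwarz, and the last step uses $ab\le(a^2+b^2)/2$ together with the symmetry of the dependency graph. Hölder's inequality with $1/p+1/q=1/2$ then gives $\|\hat\theta_i\|_2=\|\tilde y_i\psi_i\|_2\le\|\tilde y_i\|_p\|\psi_i\|_q$, which yields (3.3).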
Assumption 8 (Boundedness of Norms). For some $r\ge1$, there exist real numbers $p,q\in[r,\infty]$ satisfying $1/p+1/q=1/r$ such that
$$\sup_{i\in\mathbb N}\|\tilde y_i\|_p<\infty\quad\text{and}\quad\sup_{i\in\mathbb N}\|\psi_i\|_q<\infty. \tag{3.5}$$

This assumption is formulated to allow flexibility in the choice of $r$. Intuitively, the condition imposes uniform boundedness on the $p$th moments of $\tilde y_i$ and the $q$th moments of $\psi_i$, thereby excluding heavy-tailed behaviour.

Note that the max-$p$ norm satisfies $\|u\|^n_{\max,p}\le\sup_{i\in\mathbb N}\|u_i\|_p$. Therefore, Assumption 8 is equivalent to requiring that the four quantities $\|\tilde y\|^n_{\max,p_1}$, $\|\psi\|^n_{\max,q_1}$, $\|\tilde y\|^n_{\max,p_2}$, and $\|\psi\|^n_{\max,q_2}$ remain bounded and converge as $n\to\infty$.

The following result follows naturally from our variance control framework and the assumptions stated above.

Theorem 3 (Consistency). Under Assumptions 1, 3–5, 7(a), and 8 ($r=2$), the aggregate Riesz estimator in (2.18) is consistent in mean square. Moreover, if Assumption 7(b) holds, then the aggregate Riesz estimator satisfies $\hat\tau_n-\tau_n=O_p(n^{-1/2})$.

The notation $O_p(n^{-1/2})$ should be interpreted with care. It means that the sequence $\sqrt n(\hat\tau_n-\tau_n)$ is bounded in probability, but it does not imply convergence in distribution to a single non-degenerate law. In fact, the sequence may vary between different limiting distributions with bounded variances.

This result has an important practical implication. Although $\hat\tau_n$ depends on a single treatment assignment $z$ and latent realisation $\omega$, Theorem 3 guarantees that, under mild conditions, averaging over units suffices for consistent estimation. As $n$ grows, $\hat\tau_n$ approximates the estimand involving expectations over $z$ and $\omega$, reflecting the aforementioned sample-based ergodicity: averaging across units substitutes for repeated sampling.

So far, we have shown that, under certain conditions, the aggregate Riesz estimator is consistent.
However, these conditions alone are not sufficient for establishing asymptotic normality, an observation that aligns with the fact that consistency, in general, does not imply asymptotic normality.

The following non-degeneracy assumption imposes a lower bound on the variance of the aggregate Riesz estimator and plays a critical role in ensuring asymptotic normality. When combined with the previous assumptions, it provides sufficient conditions for the aggregate Riesz estimator to satisfy a central limit theorem.

Assumption 9 (Variance Lower Bound). There exists $\sigma_0>0$ such that $\inf_{n\in\mathbb N}\sqrt n\,\sigma_n\ge\sigma_0$.

This condition imposes a non-degeneracy requirement on the estimator. It permits the variance $\sigma_n^2$ to decay with $n$, but not too rapidly, thus ensuring that the root-$n$ scaled estimator $\sqrt n(\hat\tau_n-\tau_n)$ retains a non-degenerate variance in the limit.

We further require the following assumption, which concerns the existence of finite fourth moments, to establish the asymptotic normality result.

Assumption 10 (Finite Fourth Moment). The individual treatment effect estimator satisfies: (a) $E[\hat\theta_i(z,\omega)^4]<\infty$; (b) $\sup_{i\in\mathbb N}E[\hat\theta_i(z,\omega)^4]<\infty$.

Note that condition (b) imposes a stronger requirement than (a), as it demands uniform boundedness of the fourth moments across all units.

We now present the theorem establishing asymptotic normality of the Riesz estimator in the stochastic setting.

Theorem 4 (Asymptotic Normality). Under Assumptions 1, 3–5, 6 ($d=1/4$), 8 ($r=3$), 8 ($r=4$), 9, and 10(a), the aggregate Riesz estimator defined in (2.18) satisfies the following asymptotic normality:
$$\sigma_n^{-1}\big(\hat\tau_n(z,\omega)-\tau_n\big)\xrightarrow{d}\mathcal N(0,1). \tag{3.6}$$

Both Assumptions 8 ($r=3$) and 8 ($r=4$) are required here, as boundedness at $r=4$ does not imply the same at $r=3$ over $(\mathcal Z,\Omega)$, due to the potential unboundedness of the joint space. Theorem 4 provides the basis
for statistical inference using the aggregate Riesz estimator under outcome-level randomness. In particular, it justifies asymptotic testing and confidence interval construction for the null hypothesis $H_0:\tau_n=0$, based on the standardised statistic $T_n=\hat\tau_n(z,\omega)/\sigma_n$, assuming that both $\hat\tau_n(z,\omega)$ and $\sigma_n$ are consistently estimable from a single realisation of $z$ and $\omega$.

The required rate condition on the maximum dependency neighbourhood size, $D_n=o(n^{1/4})$, is in line with that of HWS. However, the proof here proceeds by bounding the Wasserstein distance under mixed expectations over both the treatment assignments $z$ and the latent heterogeneity $\omega$. This highlights a key observation: the dependence-degree requirement does not tighten despite the added randomness in the potential outcomes.

This result is particularly relevant in empirical settings where potential outcomes are inherently random. For instance, in field experiments involving sensor-based measurements, observed outcomes may be contaminated by random noise due to hardware limitations or environmental fluctuations. Similarly, in panel experiments with repeated units, treatment effects may be estimated via machine learning algorithms that introduce randomness in the form of first-stage fitted values. In both cases, the random component of the outcome arises naturally and cannot be ignored. The asymptotic normality result ensures that inference remains valid under such randomness.

4 Variance Estimation under Local Dependence

The test statistic supported by Theorem 4 requires an estimate of the variance of the aggregate Riesz estimator $\hat\tau_n$, which is generally unknown in finite samples. In the presence of local dependence, such as that arising from network interference, exposure mappings, or latent factors, standard i.i.d. variance estimators are invalid, and specialised methods must be employed.

Several approaches have been proposed. Some yield consistent estimators, but only under strong structural assumptions.
For example, Aronow and Samii (2017) rely on known exposure mappings and joint inclusion probabilities; Yu et al. (2022) assume bounded-degree network structures; and Liu and Hudgens (2014) consider two-stage designs with bounded group sizes. Other methods construct variance bounds rather than consistent estimators: Chin (2019) derives conservative expressions using Stein's method, while Chen and Shao (2004) propose block bootstrap procedures that are valid under fixed neighbourhood size and finite moment assumptions.

Of particular relevance to our setting, HWS construct an unbiased estimator for an upper bound on the variance using a tensor-product Riesz representation. More recently, Harshaw et al. (2024) develop a constrained optimisation framework to tighten such conservative bounds across a wide range of experimental designs. However, neither method formally proves that their bound converges to the true variance, except in special cases. Without sharpness, these bounds may overestimate the variance and lead to overly conservative inference.

In this paper, we take a different approach. Rather than estimating a variance upper bound, we propose a direct estimator for the true asymptotic variance of the aggregate Riesz estimator under local dependence. This estimator is shown to be consistent under conditions that essentially match those required for asymptotic normality. The key requirement is
knowledge of the dependency neighbourhood structure: not only the maximal neighbourhood size $D_n$, but also which specific units are likely to be dependent. This may seem restrictive, but it is consistent with the broader literature, where the variance is generally considered non-identifiable from a single realisation without further assumptions.

Our recommendation is practical: given $\zeta_i=\hat\theta_i(z,\omega)-\theta_i(\tilde y_i)$, we define the empirical variance estimator by retaining only the terms that correspond to known (or assumed) dependent pairs. The rest are set to zero. This enables consistent estimation of the variance and allows us to construct feasible and asymptotically valid test statistics.

Let $\vec\zeta_n=(\zeta_1,\dots,\zeta_n)'$ denote a vector of centred random variables with covariance matrix $\Sigma_n=E[\vec\zeta_n\vec\zeta_n']$. From a single realisation, we observe only the outer product $\hat\Sigma_n=\vec\zeta_n\vec\zeta_n'$, and cannot estimate $\Sigma_n$ consistently without further structure. Even under independence, each entry of $\vec\zeta_n\vec\zeta_n'$ typically has variance of order one, so
$$E\big[\|\hat\Sigma_n-\Sigma_n\|_F^2\big]\ge c\,n^2 \tag{4.1}$$
for some constant $c>0$, where $\|\cdot\|_F$ stands for the Frobenius norm of a matrix, indicating that the Frobenius error diverges with $n$.

Consistency can be recovered by imposing stronger assumptions, for example: observing multiple independent copies of $\vec\zeta_n$, enabling standard sample covariance estimators; time ordering and weak stationarity, allowing the use of banded estimators or kernel-smoothed autocovariances; structural models such as factor structures (low-rank plus sparse decomposition), enabling regularised estimation (e.g., POET); or a vanishing signal, assuming that $V[\zeta_n]\to0$ as $n\to\infty$, so that the norm $\|\Sigma_n\|_F$ itself converges to zero. In the absence of such assumptions, the full matrix $\Sigma_n$ is not consistently estimable.
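The divergence in (4.1) is easy to see numerically. A minimal sketch, assuming i.i.d. standard normal $\zeta_i$ so that $\Sigma_n=I_n$ (in which case $E\|\hat\Sigma_n-\Sigma_n\|_F^2=n^2+n$ exactly):

```python
import numpy as np

rng = np.random.default_rng(0)

def frobenius_error(n, reps=500):
    """Monte Carlo estimate of E || zeta zeta' - I ||_F^2 for iid N(0,1) zeta."""
    errs = []
    for _ in range(reps):
        z = rng.standard_normal(n)
        err = np.outer(z, z) - np.eye(n)   # \hat\Sigma_n - \Sigma_n
        errs.append(np.sum(err ** 2))      # squared Frobenius norm
    return float(np.mean(errs))

for n in (10, 20, 40):
    print(n, frobenius_error(n) / n**2)    # ratios stay near 1: error ~ c n^2
```

The ratio to $n^2$ does not vanish as $n$ grows, consistent with the lower bound (4.1).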
Our proposed approach sidesteps this by targeting only those components of the variance that correspond to known dependent pairs.

Suppose the dependency structure is given by
$$E_n=\big\{(i,j)\in\{1,\dots,n\}^2:\zeta_i\text{ and }\zeta_j\text{ are dependent}\big\}. \tag{4.2}$$
We define the local-dependence variance estimator
$$\hat\sigma_n^2=\sum_{(i,j)\in E_n}\nu_{ni}\nu_{nj}\,\zeta_i\zeta_j, \tag{4.3}$$
and the population variance
$$\sigma_n^2=\sum_{(i,j)\in E_n}\nu_{ni}\nu_{nj}\,E[\zeta_i\zeta_j]. \tag{4.4}$$
The following theorem shows that, under mild regularity conditions, $\hat\sigma_n^2$ consistently estimates $\sigma_n^2$.

Theorem 5 (Consistency of the Variance Estimator). Under Assumptions 1, 3–5, and 10(b),
$$n\hat\sigma_n^2-n\sigma_n^2=O_p\big(n^{-1/2}D_n^{3/2}\big).$$
In particular, under Assumption 6 ($d=1/3$), $n\hat\sigma_n^2-n\sigma_n^2$ converges to zero in mean square.

Corollary 1 (Asymptotic Normality). Under Assumptions 1, 3–5, 6 ($d=1/4$), 8 ($r=3$), 8 ($r=4$), 9, and 10(b), the aggregate Riesz estimator defined in (2.18) satisfies the following asymptotic normality:
$$\hat\sigma_n^{-1}\big(\hat\tau_n(z,\omega)-\tau_n\big)\xrightarrow{d}\mathcal N(0,1). \tag{4.5}$$

Note that Assumption 6 ($d=1/4$) is stronger than Assumption 6 ($d=1/3$) and therefore suffices to imply both consistency and asymptotic normality. Corollary 1 establishes that the aggregate Riesz estimator is asymptotically normal under local dependence, provided that the dependency neighbourhoods are sufficiently sparse and the variance estimator $\hat\sigma_n^2$ is properly constructed.

A natural question is whether one can use a reduced dependency structure based solely on correlation. In many experimental settings, it is difficult to explicitly define or justify a full dependency neighbourhood structure $E_n$ based on latent interference or unobserved design constraints. However, domain knowledge or structural assumptions may suggest which unit-level estimators $\zeta_i$ and $\zeta_j$ are likely to be uncorrelated, even if their full dependence is unknown. This motivates constructing a variance estimator by summing only over pairs believed
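A minimal sketch of (4.3) for the case where $E_n$ is generated by a block structure (all pairs, including $(i,i)$, within the same block) and uniform weights $\nu_{ni}=1/n$; the block encoding is an illustrative assumption:

```python
import numpy as np

def local_dependence_variance(zeta, blocks, nu=None):
    """Estimator (4.3): sum of nu_i * nu_j * zeta_i * zeta_j over all
    pairs (i, j) in the same block, i.e. the assumed dependency set E_n."""
    n = len(zeta)
    nu = np.full(n, 1.0 / n) if nu is None else nu
    w = nu * zeta
    sigma2 = 0.0
    for b in np.unique(blocks):
        s = w[blocks == b].sum()
        sigma2 += s * s     # equals the double sum over pairs within block b
    return sigma2

# toy example: 6 units in 3 blocks of 2
zeta = np.array([1.0, -2.0, 0.5, 0.5, 3.0, -1.0])
blocks = np.array([0, 0, 1, 1, 2, 2])
print(local_dependence_variance(zeta, blocks))
# = (1/36) * ((1-2)^2 + (0.5+0.5)^2 + (3-1)^2) = 1/6
```

Squaring the within-block sums is equivalent to the pairwise double sum because $E_n$ contains the diagonal pairs $(i,i)$.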
to exhibit non-negligible second-order dependence, resulting in a conservative correlation-based estimator. Specifically, consider the set
$$E_n^c=\big\{(i,j)\in\{1,\dots,n\}^2:E[\zeta_i\zeta_j]\ne0\big\}, \tag{4.6}$$
and define the corresponding correlation-based variance estimator
$$\hat\sigma_{cn}^2=\sum_{(i,j)\in E_n^c}\nu_{ni}\nu_{nj}\,\zeta_i\zeta_j. \tag{4.7}$$
The answer is affirmative: using this reduced set still yields a consistent variance estimator under the same local dependence framework.

Theorem 6 (Consistency of the Variance Estimator). Under Assumptions 1, 3–5, and 10(b),
$$n\hat\sigma_{cn}^2-n\sigma_n^2=O_p\big(n^{-1/2}D_n^{3/2}\big).$$
In particular, under Assumption 6 ($d=1/3$), $n\hat\sigma_{cn}^2-n\sigma_n^2$ converges to zero in mean square.

Corollary 2 (Asymptotic Normality). Under Assumptions 1, 3–5, 6 ($d=1/4$), 8 ($r=3$), 8 ($r=4$), 9, and 10(b), the aggregate Riesz estimator defined in (2.18) satisfies
$$\hat\sigma_{cn}^{-1}\big(\hat\tau_n(z,\omega)-\tau_n\big)\xrightarrow{d}\mathcal N(0,1). \tag{4.8}$$

Although the correlation-based variance estimator omits many cross-terms in the full variance formula, it yields valid inference when the omitted pairs correspond to approximately uncorrelated units. Importantly, the assumption that certain unit-level estimators $\zeta_i$ and $\zeta_j$ are uncorrelated cannot be directly verified from a single dataset. Nonetheless, this assumption often aligns with realistic structural knowledge available to practitioners, such as spatial adjacency, blocking in experimental design, or known network structure, which can be used to construct a conservative approximation of the correlation graph. In this way, the correlation-based approach offers a practically viable and interpretable alternative to full dependency modelling.

Although these variance estimators are developed within our stochastic setting, they are equally applicable to the fixed-outcome setting as a special case. Indeed, the classical framework can be recovered by taking the latent space $\Omega$ to be a singleton, in which case the only source of randomness is the treatment assignment.
Under this simplification, the local dependence structure reduces to that of the assignment mechanism alone, typically known by design (see Assumption 2). Consequently, the variance estimators proposed here can be directly applied to previous frameworks such as HWS, and are in fact easier to implement in such settings due to the absence of outcome-level variation. This extends the practical utility of our estimators beyond the stochastic setting, providing consistent inference tools for a broad class of design-based estimators.

In the remainder of the paper, we adopt the correlation-based variance estimator $\hat\sigma_{cn}^2$, while emphasising that the overall methodology still relies on the local dependency assumption, even though it is not explicitly invoked in the variance formula. Crucially, while uncorrelatedness justifies omitting second-order cross terms in the estimator, it does not imply the vanishing of higher-order joint moments across units. Therefore, a corresponding correlation-based neighbourhood structure cannot be meaningfully defined for the full dependency graph. In practice, this means we continue to assume a local dependency structure satisfying Assumption 6 with $d=1/4$, while approximating $\sigma_n^2$ by summing only over pairs $(i,j)$ for which $\zeta_i$ and $\zeta_j$ are correlated.

Suppose that, in practice, the correlation structure is approximated by a set of index pairs $\tilde E_n\subset\{1,\dots,n\}^2$, serving as a practical substitute for (4.6). To ensure valid inference, the construction of $\tilde E_n$ should be conservative, satisfying

Assumption 11 (Conservative Set of
Index Pairs). $E_n^c\subset\tilde E_n\subset E_n$.

Based on this, we define a sparse matrix $\hat\Sigma_n^d\in\mathbb R^{n\times n}$ that retains only the entries corresponding to the index set $\tilde E_n$, representing the estimated dependency structure:
$$(\hat\Sigma_n^d)_{ij}=\begin{cases}\zeta_i\zeta_j&\text{if }(i,j)\in\tilde E_n,\\0&\text{otherwise.}\end{cases} \tag{4.9}$$
The corresponding variance estimator is then given by
$$\tilde\sigma_n^2=\nu_n'\hat\Sigma_n^d\nu_n, \tag{4.10}$$
where $\nu_n=(\nu_{n1},\dots,\nu_{nn})'$ is the vector of aggregation weights.

By appropriately selecting the conservative set of index pairs, the corresponding variance estimator remains consistent.

Theorem 7 (Consistency of the Variance Estimator). Under Assumptions 1, 3–5, 10(b), and 11, $n\tilde\sigma_n^2-n\sigma_n^2=O_p\big(n^{-1/2}D_n^{3/2}\big)$. In particular, under Assumption 6 ($d=1/3$), $n\tilde\sigma_n^2-n\sigma_n^2$ converges to zero in mean square.

Corollary 3 (Asymptotic Normality). Under Assumptions 1, 3–5, 6 ($d=1/4$), 8 ($r=3$), 8 ($r=4$), 9, 10(b), and 11, the aggregate Riesz estimator defined in (2.18) satisfies
$$\tilde\sigma_n^{-1}\big(\hat\tau_n(z,\omega)-\tau_n\big)\xrightarrow{d}\mathcal N(0,1). \tag{4.11}$$

Note that consistency is preserved even if some independent pairs are mistakenly included in $\tilde E_n$, provided that the total number of dependent pairs, including both genuinely dependent and erroneously included independent pairs, remains within the sparsity condition $D_n=o(n^{1/4})$.

In the special case where all units are believed to be mutually independent, i.e., each $\zeta_i$ is independent of every other $\zeta_j$, $\tilde E_n$ consists only of the diagonal pairs $(i,i)$. Then $\hat\Sigma_n^d$ reduces to a diagonal matrix, and the variance estimator and the true variance simplify to
$$\tilde\sigma_n^2=\sum_{i=1}^n\nu_{ni}^2\zeta_i^2\quad\text{and}\quad\sigma_n^2=\sum_{i=1}^n\nu_{ni}^2E[\zeta_i^2], \tag{4.12}$$
respectively. This corresponds to the classical form used in standard inference under independence. Corollaries 1 and 2 thus recover the familiar asymptotic normality result as a special case of the more general locally dependent setting treated here.
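A sketch of the masked construction (4.9)–(4.10); the pair set and weights here are illustrative assumptions, and the diagonal-only case reproduces the classical formula (4.12):

```python
import numpy as np

def masked_variance(zeta, nu, pairs):
    """Estimator (4.10): nu' Sigma_hat^d nu, where Sigma_hat^d keeps the
    entry zeta_i * zeta_j only for (i, j) in the assumed pair set E_tilde."""
    n = len(zeta)
    mask = np.zeros((n, n), dtype=bool)
    rows, cols = zip(*pairs)
    mask[list(rows), list(cols)] = True
    sigma_d = np.where(mask, np.outer(zeta, zeta), 0.0)   # sparse-style (4.9)
    return float(nu @ sigma_d @ nu)

rng = np.random.default_rng(1)
n = 8
zeta = rng.standard_normal(n)
nu = np.full(n, 1.0 / n)

# diagonal-only pair set: reduces to the classical estimator (4.12)
diag_pairs = [(i, i) for i in range(n)]
print(np.isclose(masked_variance(zeta, nu, diag_pairs),
                 np.sum(nu**2 * zeta**2)))   # True
```

With the full pair set $\{1,\dots,n\}^2$, the same function returns $(\nu_n'\vec\zeta_n)^2$, the naive outer-product quadratic form.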
From a functional perspective, the target variance $\sigma_n^2$ can be regarded as a continuous linear functional of a collection of second-order moments, specifically $\Sigma_n$ or the covariances $\mathrm{Cov}(\zeta_i,\zeta_j)$ for pairs $(i,j)$, with norm $\sum_{i=1}^n\nu_{ni}^2$. Since these covariances are not directly observable from a single experimental realisation, our estimator proceeds by substituting the population moments with the observed products $\zeta_i\zeta_j$, yielding the expression in (4.3). In this sense, the estimator operates analogously to a plug-in estimator: it replaces the unobservable moments with empirical counterparts derived from the data. However, unlike classical plug-in estimators that substitute a distributional parameter with an empirical measure, see for example van der Vaart (2000, Section 25.8), this construction relies on direct moment substitution guided by a known dependency graph. This perspective clarifies that the variance estimator approximates a functional over unknown moments, and its consistency is guaranteed by structural assumptions on the dependence neighbourhood and boundedness of higher-order terms.

Taken together, Theorems 3–7 and Corollaries 1–3 convey a central message: although the estimand $\tau_n$ is defined as an average of individual treatment effects evaluated over the joint probability space $\mathcal Z\times\Omega$, the entire inference procedure, from estimation to variance approximation, can be carried out using just a single realisation of $z\in\mathcal Z$ and $\omega\in\Omega$. This is made possible by the structure of the Riesz representation framework and the use of carefully constructed estimators that remain valid under local dependence. As the sample size $n$ grows, one draw from the treatment
and outcome distribution contains sufficient information for reliable estimation and inference, provided that the underlying conditions hold. In this way, the results provide a rigorous justification for making population-level claims based on one observed experiment.

5 Riesz Representer Estimation in Practice

The theoretical results in earlier sections show that the aggregate Riesz estimator in the stochastic setting can be constructed using only a single realisation of the underlying data-generating process. Specifically, the Riesz estimates are computed over the randomisation distribution of $z$, conditional on a fixed (but unobserved) $\omega_0$. Crucially, this suffices to identify the estimand, which is defined as an expectation over both $z$ and $\omega$.

This structure has concrete implications for how we estimate the individual Riesz representers $\psi_i(z,\omega)$. By Theorem 1, for each unit $i$, there exists a unique representer satisfying (2.10) with (2.6). Hence, the representer $\psi_i(z,\omega_0)$ is identified conditionally and can be estimated directly from the realised function $\tilde y_i(z,\omega_0)$.

The setting considered by HWS arises as a special case of our framework when $\Omega=\{\omega_0\}$, so that potential outcomes reduce to deterministic functions of the treatment assignment.

5.1 Computation in Finite-Dimensional Model Spaces

In finite-dimensional model spaces $\mathcal M_i$, we can estimate $\psi_i(z,\omega_0)$ via basis expansion and moment matching. Let $\{g_{i,1},\dots,g_{i,m}\}$ be a basis for $\mathcal M_i$ with $\dim(\mathcal M_i)=m$. We solve for coefficients $\beta_i\in\mathbb R^m$ such that
$$\hat\psi_i(z,\omega_0)=\sum_{k=1}^m\beta_{i,k}\,g_{i,k}(z,\omega_0) \tag{5.1}$$
satisfies the identity $\theta_i(\tilde y_i)=\langle\tilde y_i,\hat\psi_i\rangle_0$. Algorithm 1 summarises this procedure. Note that it is in line with the approach described by HWS (preprint version), where Riesz representers are computed via basis expansion and moment matching under the randomisation distribution.
We provide here a few concrete examples of basis choices to clarify practical implementation. When the treatment space $\mathcal Z$ is finite with $K$ distinct levels, a simple and interpretable basis consists of treatment indicators. In this case, one may define each basis function $g_{i,k}(z,\omega)$ to equal 1 if $z$ corresponds to the $k$-th treatment condition and 0 otherwise. This yields a saturated model with a separate coefficient for each treatment arm and does not rely on any structural assumptions about the response surface.

When $z$ is continuous or high-dimensional, one can instead use low-order basis functions such as polynomials or splines in $z$, possibly interacted with pre-specified functions of $\omega$. For instance, if $z\in\mathbb R$, a natural choice is the span of $\{1,z,z^2,\omega,z\cdot\omega\}$, which captures basic nonlinearities and interactions between treatment and latent outcome variation. These basis sets offer a balance between expressiveness and computational tractability and can be tailored to the structure of the experiment.

This procedure is fully offline in the sense that it depends only on the design distribution for $z$, the basis functions, and the known functional $\theta_i$. It does not require access to the observed outcomes $Y_i=\tilde y_i(z_0,\omega_0)$.

This construction relies on Assumption 2, which ensures that the treatment assignment $z$ is independent of the latent variable $\omega$. In particular, this allows expectations over $z$ conditional on a fixed realisation $\omega_0$ to be taken with respect to the known design distribution $\mu$. Without this assumption,
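The paper's Algorithm 1 is not reproduced in this chunk; the following is a minimal sketch of the basis-expansion and moment-matching idea for an assumed Bernoulli($p$) design with the indicator basis and the treatment-contrast functional $\theta(y)=y(1)-y(0)$. The function name is illustrative.

```python
import numpy as np

# Moment matching for the Riesz representer in a two-point treatment space.
# Design: z ~ Bernoulli(p). Basis: g1(z) = 1{z=1}, g2(z) = 1{z=0}.
# Functional: theta(y) = y(1) - y(0) (treatment contrast).
def riesz_coefficients(p):
    # Gram matrix G_kl = E[g_k(z) g_l(z)] under the design distribution
    G = np.array([[p, 0.0],
                  [0.0, 1.0 - p]])
    # Target vector T_k = theta(g_k)
    T = np.array([1.0, -1.0])     # theta(g1) = 1 - 0, theta(g2) = 0 - 1
    return np.linalg.solve(G, T)  # beta such that psi = sum_k beta_k g_k

beta = riesz_coefficients(0.5)
psi = lambda z: beta[0] * (z == 1) + beta[1] * (z == 0)
print(beta, psi(1), psi(0))   # [2. -2.] 2.0 -2.0
```

Solving the Gram system recovers $\psi(z)=z/p-(1-z)/(1-p)$, the familiar Horvitz–Thompson representer; indeed $E[\tilde y(z)\psi(z)]=\tilde y(1)-\tilde y(0)$ under the design.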
the conditional distribution of $z\mid\omega_0$ could differ from $\mu$, and the estimated representer would no longer correspond to the true Riesz representation defined under the design. It is at this stage, during the offline construction of $\hat\psi_i(z,\omega_0)$, that Assumption 2 is invoked. All subsequent inference proceeds without requiring independence between treatment assignment and outcome-level randomness.

In symmetric settings with common $\mathcal M_i$ and $\theta_i$, the same representer $\psi$ can be reused across units.

5.2 Approximation in Infinite-Dimensional Model Spaces

In many practical applications, the model space $\mathcal M_i$ is infinite-dimensional. This arises naturally when either the treatment assignment $z$ or the latent variable $\omega$ lies in a continuous domain, or when potential outcomes depend on complex functional relationships. To compute the Riesz representer in such settings, we project the problem onto a sequence of growing finite-dimensional subspaces.

Our setting explicitly accounts for a random latent variable $\omega$, and the Riesz representation is defined with respect to an inner product $\langle\cdot,\cdot\rangle_0$ taken over the treatment assignment $z$, conditional on the fixed value $\omega_0\in\Omega$. Although our estimation routine shares a computational structure with existing methods, constructing a Gram matrix and solving a linear system, our framework generalises the representer computation to a richer class of problems involving random potential outcomes.

Assumption 12 (Separable Model Space). The model space $\mathcal M_i\subset L^2(\mathcal Z\times\Omega)$, equipped with the inner product $\langle\cdot,\cdot\rangle_0$ defined in (2.6) for some $\omega_0\in\Omega$, is a separable Hilbert space.

By Assumption 12, the space $\mathcal M_i$ admits a countable orthonormal basis $\{e_k\}_{k\ge1}$. For each dimension $m\in\mathbb N$, define the subspace
$$\mathcal M_i^{(m)}=\mathrm{span}\{e_1,\dots,e_m\}, \tag{5.2}$$
and let $P_m:\mathcal M_i\to\mathcal M_i^{(m)}$ denote the orthogonal projection.
Then the projected representer
$$\psi_i^{(m)}:=P_m\psi_i=\sum_{k=1}^m\langle\psi_i,e_k\rangle_0\,e_k \tag{5.3}$$
converges to $\psi_i$ in norm as $m\to\infty$, by the completeness of the Hilbert space $\mathcal M_i$, since $\{e_k\}_{k\ge1}$ forms a total orthonormal system and $P_m$ is the orthogonal projection onto the span of the first $m$ basis elements.

The estimation of $\psi_i^{(m)}$ proceeds by solving the finite-dimensional moment equation system that results from projecting the Riesz representation identity onto the span of the first $m$ basis functions. This yields a linear system involving a Gram matrix and a target vector, as shown in Algorithm 2.

This procedure is still fully offline in the sense that both $G_i^{(m)}$ and $T_i^{(m)}$ can be computed from the design distribution and the known linear functional $\theta_i$. It does not require access to the observed outcomes $Y_i=\tilde y_i(z_0,\omega_0)$. In symmetric settings with common $\mathcal M_i$ and $\theta_i$, the same representer $\psi$ can be reused across units.

The procedure for estimating $\psi_i(z,\omega_0)$ described above relies on projecting onto a finite-dimensional subspace spanned by a pre-specified basis and solving empirical moment equations derived from the known randomisation design and linear functional $\theta_i$. This approach follows the classical sieve estimation framework and is theoretically well understood; see, e.g., Chen (2007); Chen and Pouzo (2012); van der Vaart (2000).

While consistency of $\hat\psi_i^{(m)}\to\psi_i$ under appropriate regularity conditions can be rigorously established using a standard bias-variance decomposition, this is not the focus of the present paper. Our main contribution lies in the theory developed in earlier sections, namely, the design-based asymptotic normality of the
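Algorithm 2 itself is not reproduced in this chunk; as an illustration of the Gram-system step, consider an assumed continuous design $z\sim\mathcal N(0,1)$, the average-derivative functional $\theta(y)=E[y'(z)]$, and the (non-orthonormal) polynomial basis $\{1,z,z^2\}$. Using exact Gaussian moments for the Gram matrix:

```python
import numpy as np

# Projected Riesz representer via a Gram system (one sieve step).
# Assumed design: z ~ N(0,1). Functional: theta(y) = E[y'(z)].
# Basis (not orthonormal): g = (1, z, z^2).
# Exact Gaussian moments give the Gram matrix G_kl = E[g_k(z) g_l(z)]:
G = np.array([[1.0, 0.0, 1.0],    # E[1], E[z], E[z^2]
              [0.0, 1.0, 0.0],    # E[z], E[z^2], E[z^3]
              [1.0, 0.0, 3.0]])   # E[z^2], E[z^3], E[z^4]
# Target vector T_k = theta(g_k) = E[g_k'(z)]
T = np.array([0.0, 1.0, 0.0])     # derivatives: 0, 1, 2z (mean zero)

beta = np.linalg.solve(G, T)
print(beta)   # [0. 1. 0.]  ->  psi(z) = z
```

The solution $\psi(z)=z$ matches the well-known score representer for the average-derivative functional under a standard normal design, a useful sanity check for the projection machinery.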
aggregate estimator and the construction of consistent variance estimators under local dependence.

For completeness, we note that under standard conditions, such as an orthonormal basis, eigenvalues of the Gram matrix bounded away from zero, and sufficient smoothness of the representer, the estimation error satisfies
$$\big\|\hat\psi_i^{(m)}-\psi_i\big\|=O_p\big(m^{-s}+\sqrt{m/n}\big) \tag{5.4}$$
for some $s>0$. This rate is minimised by setting the truncation level $m$ proportional to $n^{1/(2s+1)}$, which balances the bias and variance contributions. We omit the full proof, as it follows standard arguments from the sieve estimation literature.

The orthonormal system $\{e_k\}$ can either be specified a priori, using classical bases such as Fourier series, wavelets, or splines, or derived from the data through methods like principal components or kernel eigenfunctions. Additional practical examples of basis construction in deterministic settings are discussed in the preprint by HWS, to which we refer the interested reader for further details.

6 Simulation Study

We carry out a simulation study to evaluate the finite-sample performance of the Riesz estimator and the variance estimators. The simulation is designed to reflect a typical experimental setting with local dependence induced by latent block-level effects. All potential outcomes, covariates, and errors are modelled as functions of a common latent variable, consistent with our framework.

For each unit $i\in\{1,\dots,n\}$, we simulate treatment assignment $z_i\sim\mathrm{Bernoulli}(0.5)$ independently. Covariates $x_i(\omega)$ and outcome-level parameters $(\alpha_i,\beta_i,\delta_i)$ are drawn i.i.d. from $\mathcal N(0,1)$. The potential outcome is generated via
$$\tilde y_i(z,\omega)=\alpha_i+\beta_i z+\delta_i z x_i+\varepsilon_i(\omega), \tag{6.1}$$
where $\varepsilon_i(\omega)$ is a locally dependent noise term described below. The observed outcome is $Y_i=\tilde y_i(z_i,\omega)$.
Local dependence is introduced by partitioning the sample into blocks, where units in the same block share a common latent shock. Specifically, the number of blocks is $B_n=\lfloor n^{1-d}\rfloor$ for a fixed parameter $d\in[0,0.3]$. The error for unit $i$ is
$$\varepsilon_i(\omega)=\gamma_i\eta_{b(i)}+\nu_i,$$
where $b(i)$ is the block to which unit $i$ is assigned, $\eta_b\sim\mathcal N(0,\sigma^2)$ is the shared block-level shock (with $\sigma^2=1$), $\nu_i\sim\mathcal N(0,1)$ is an idiosyncratic noise term, and $\gamma_i\in\{-1,1\}$ is a random sign (independent Rademacher) allowing both positive and negative correlations between units in the same block. This construction yields a dependency graph $E_n$ with within-block dependence and across-block independence.

We estimate the average treatment effect using the aggregate Riesz estimator. The representer $\psi_i$ is computed as
$$\psi_i(z,\omega_0)=\frac{z}{\mu_1}-\frac{1-z}{\mu_0},\quad\text{with }\mu_1=E[z_i]=0.5,$$
which corresponds to the efficient estimator under the known design. The estimator is
$$\hat\tau_n=\frac1n\sum_{i=1}^nY_i\,\psi_i(z_i,\omega_0),$$
and the variance estimator follows (4.3):
$$\hat\sigma_n^2=\sum_{(i,j)\in E_n}\nu_{ni}\nu_{nj}\,\zeta_i\zeta_j,\quad\text{where }\zeta_i=Y_i\psi_i(z_i)-\theta_i(\tilde y_i)\text{ and }\nu_{ni}=1/n.$$
That is, we sum pairwise products of residuals over all unit pairs within the same block.

In this simulation setting, the variance estimators $\hat\sigma_{cn}^2$ and $\tilde\sigma_n^2$ coincide with the local-dependence estimator $\hat\sigma_n^2$. This is because the dependency structure is fully captured by shared block-level shocks, and all non-zero cross-moments correspond to block-sharing units. Therefore, all simulation results reflect the performance of both estimators. In general, however, the correlation-based estimator yields improved efficiency in settings where weakly dependent pairs can be
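The design above can be sketched as follows; this is our own reimplementation of (6.1) for illustration and may differ in details from the public replication scripts.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_tau_hat(n, d=0.2):
    """One replication: DGP (6.1) with block shocks, Bernoulli(0.5)
    assignment, and the Horvitz-Thompson representer psi(z) = z/0.5 - (1-z)/0.5."""
    B = max(1, int(n ** (1 - d)))            # number of blocks B_n
    blocks = rng.integers(0, B, size=n)
    alpha, beta, delta, x = rng.standard_normal((4, n))
    gamma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
    eps = gamma * rng.standard_normal(B)[blocks] + rng.standard_normal(n)
    z = rng.binomial(1, 0.5, size=n)
    y = alpha + beta * z + delta * z * x + eps       # observed outcomes
    psi = z / 0.5 - (1 - z) / 0.5                    # representer values
    tau_hat = np.mean(y * psi)                       # aggregate Riesz estimator
    tau_n = np.mean(beta + delta * x)                # mean of y_i(1) - y_i(0)
    return tau_hat, tau_n

reps = np.array([simulate_tau_hat(500) for _ in range(200)])
print(np.mean(reps[:, 0] - reps[:, 1]))   # empirical bias, close to zero
```

Averaging $\hat\tau_n-\tau_n$ over replications gives a bias near zero, in line with Theorem 2.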
excluded without loss of accuracy.

We consider sample sizes $n \in \{100, 200, 500, 1000\}$ and dependency growth parameters $d \in \{0.0, 0.1, 0.2, 0.25, 0.3\}$. For each $(n, d)$ pair, we run 2000 independent replications. For each replication, we compute the estimator $\hat\tau_n$, the standard error $\hat\sigma_n$, and test statistics for Wald-type inference. We record the following metrics:
• Empirical bias: $E[\hat\tau_n - \tau_n]$;
• Root mean squared error (RMSE): $\sqrt{E[(\hat\tau_n - \tau_n)^2]}$;
• Coverage of 95% confidence intervals: $P(|\hat\tau_n - \tau_n| \le 1.96\,\hat\sigma_n)$;
• Rejection rates at 1%, 5%, and 10% significance levels.
The complete codebase, including replication scripts and plotting routines, is publicly available at github.com/yukai-yang/RieszRE_Experiments under the MIT license.

Figure 1 depicts the empirical bias and root mean squared error (RMSE) of the Riesz estimator as functions of the dependency growth rate $d$, stratified by sample size. Across all settings, the estimator exhibits negligible bias: empirical values remain close to zero, typically below 0.01 in absolute magnitude. The RMSE decreases steadily as the sample size increases, from approximately 0.37 at $n = 100$ to around 0.10 at $n = 1000$, indicating improved precision in larger samples. These results offer strong empirical support for the consistency of $\hat\tau_n$ under local dependence, as established in Theorem 3.

Figure 2 shows the empirical coverage rates of nominal 95% confidence intervals across varying levels of dependency growth $d$ and sample sizes $n$. Coverage rates are at or near the nominal level in all scenarios. For $n = 100$, coverage ranges from 94.9% to 95.6%, while for $n = 500$ and $n = 1000$ it stabilises in the range of 95.5% to 96.8%. These results indicate that the variance estimator is slightly conservative but well calibrated, yielding confidence intervals with reliable coverage even in the presence of moderate local dependence.
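One replication of this design, including the signed block shocks, the Riesz point estimate, and the local-dependence variance estimator, can be sketched as follows. Several details are assumptions of this sketch rather than the paper's specification: units are assigned to the $B_n$ blocks uniformly at random, the unit-level effect $\theta_i(\tilde y_i)$ is taken at its oracle value $\beta_i + \delta_i x_i$ implied by (6.1), and $\nu_{ni} = 1/n$:

```python
import numpy as np

def one_replication(n, d, rng):
    """One replication: signed block shocks, Riesz estimate, variance.

    Sketch-level assumptions: uniform random block assignment, oracle
    theta_i = beta_i + delta_i * x_i, and nu_ni = 1/n.
    """
    z = rng.binomial(1, 0.5, size=n)
    x = rng.standard_normal(n)
    alpha, beta, delta = rng.standard_normal((3, n))

    # Block structure: B_n = floor(n^(1-d)) blocks, one shared shock each.
    B = int(np.floor(n ** (1.0 - d)))
    block = rng.integers(0, B, size=n)
    eta = rng.standard_normal(B)              # block shocks, sigma^2 = 1
    gamma = rng.choice([-1.0, 1.0], size=n)   # Rademacher signs
    eps = gamma * eta[block] + rng.standard_normal(n)

    y = alpha + beta * z + delta * z * x + eps   # observed outcomes, (6.1)
    psi = z / 0.5 - (1 - z) / 0.5                # representer, mu_1 = mu_0 = 0.5
    tau_hat = np.mean(y * psi)                   # aggregate Riesz estimator

    theta = beta + delta * x                     # oracle unit-level effects
    tau_n = np.mean(theta)                       # realised estimand
    zeta = y * psi - theta                       # residuals zeta_i
    # sigma_hat^2 sums nu_ni nu_nj zeta_i zeta_j over same-block pairs (i, j).
    same_block = block[:, None] == block[None, :]
    sigma2_hat = (same_block * np.outer(zeta, zeta)).sum() / n ** 2
    return tau_hat, tau_n, sigma2_hat

tau_hat, tau_n, sigma2_hat = one_replication(500, 0.2, np.random.default_rng(1))
```

Averaging `tau_hat - tau_n` over many replications reproduces the bias metric above; coverage and rejection rates follow from comparing `tau_hat - tau_n` against `1.96 * sqrt(sigma2_hat)`.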
Table 2 reports the empirical rejection rates of two-sided Wald tests at the 1%, 5%, and 10% significance levels across all combinations of $n$ and $d$. At the 5% level, rejection rates generally lie within the 4%–5% range. For instance, when $n = 1000$, the empirical size ranges from 3.4% to 4.5%, depending on the degree of local dependence. Similar patterns hold at the 1% and 10% levels, with empirical rejection rates closely aligned with nominal values across the board. These findings indicate that the standardised test statistic
$T_n = \frac{\hat\tau_n - \tau_n}{\hat\sigma_n}$
is approximately standard normal under the null, consistent with the asymptotic normality established in Corollary 1.

The simulation confirms the unbiasedness, consistency, and inferential validity of the Riesz estimator in finite samples. The variance estimator successfully accounts for local dependence induced by block-level shocks, even when correlations are signed and heterogeneous. The method performs well across all sample sizes and dependency regimes considered, providing empirical support for its practical application.

7 Conclusion

This paper extends the classical design-based framework for causal inference to accommodate random potential outcomes. By introducing latent randomness through a variable $\omega$, we allow for outcome-level variability while preserving the core identification principle of randomised treatment assignment. This formulation retains the key strength of design-based methods, namely minimal reliance on outcome modelling, while
generalising the framework to address the stochastic nature of modern experimental data. In doing so, it contributes toward a broader synthesis between randomised experimental design and inferential generalisation.

We develop a general asymptotic theory for the aggregate Riesz estimator under local dependence in the stochastic setting. A central strength of the results lies in their generality: neither the estimator $\hat\tau_n(z,\omega)$ nor the variance $\sigma_n^2$ is required to converge to a fixed distribution or limiting value. Instead, the theory accommodates growing network complexity, heterogeneous variance behaviour, and non-stationary dependence structures.

Our work departs from the fixed-outcome framework by modelling treatment effect functionals in a random outcome space. This yields a distinct theoretical foundation that supports valid inference under local dependence from a single observed dataset. In addition, we develop consistent variance estimators tailored to this setting. The contribution lies not in additional technical abstraction, but in providing a rigorous, design-respecting methodology that accommodates the outcome-level randomness inherent in many modern experiments.

Although our variance estimators are developed within the stochastic setting, they remain valid in the classical fixed-outcome setting as a special case. In particular, when outcome-level randomness is absent, the local dependence structure reduces to that of the assignment mechanism alone. This structure is typically known by design. As a result, our variance estimators can be readily applied in earlier frameworks such as HWS, where they provide a consistent and implementable alternative to the conservative variance bounds previously available.

Theorems 3–7, along with Corollaries 1–3, establish a robust framework for inference using the Riesz estimator in the stochastic setting.
For instance, Theorem 4 shows that the standardised statistic $\sigma_n^{-1}(\hat\tau_n(z,\omega) - \tau_n)$ is asymptotically standard normal without requiring convergence of the variance sequence. Similarly, Theorems 5–7 show that the variance estimators $n\hat\sigma_n^2$, $n\hat\sigma_{cn}^2$, and $\tilde\sigma_n^2$ consistently estimate $n\sigma_n^2$, even when the latter does not stabilise. These results enable valid inference in experiments with increasing dependence, network size, or covariate complexity.

Theorem 3 establishes mean-square consistency under mild moment and neighbourhood growth assumptions. When the average neighbourhood size is uniformly bounded, the estimator achieves an $O_p(n^{-1/2})$ convergence rate without relying on classical limiting distributions. Corollaries 1–3 provide feasible, fully data-driven test statistics that converge to the standard normal under regularity conditions and consistent variance estimation.

To evaluate finite-sample behaviour, we conduct a simulation study using locally dependent data generated via signed block-level shocks. The results confirm the theoretical properties: the Riesz estimator is unbiased and consistent, with RMSE declining as sample size increases. Confidence intervals based on the variance estimator exhibit accurate or slightly conservative coverage across dependency settings. Empirical rejection rates for two-sided Wald tests closely match nominal levels, even in moderate samples, supporting the method's practical applicability.

In summary, the Riesz representation approach remains valid and powerful in the presence of outcome-level randomness and structured dependence. Our framework is robust and flexible, enabling valid inference
without restrictive asymptotic assumptions. It is particularly well suited to modern experiments involving sensors, adaptive interventions, high-dimensional treatments, or biological systems, where random variation is intrinsic rather than incidental.

While the correlation-based variance estimator $\hat\sigma_{cn}^2$ offers a more parsimonious alternative to the full local-dependence estimator, it rests on untestable assumptions about which unit-level estimators are uncorrelated. Its finite-sample performance depends on how well the assumed correlation structure approximates the true dependency graph. Nonetheless, simulations suggest that under reasonable constructions, this approach may yield tighter intervals and improved efficiency while maintaining valid coverage.

Future research may extend the framework in several directions. Natural next steps include incorporating non-linear or non-smooth functionals, adapting the methodology to observational studies (e.g., through covariate balancing or doubly robust adjustment), and generalising to high-dimensional or non-parametric spaces. While this paper focuses on latent randomness under local dependence, extensions to dynamic treatments, time-varying interference, or structured measurement error remain open and promising. We believe these tools lay the foundation for a broader class of estimators that remain faithful to randomisation principles while accommodating the stochastic complexity of modern data. We hope that this framework will serve as a useful addition to the design-based toolkit, especially in experiments where outcome-level randomness cannot be ignored.

Acknowledgements

I am grateful to Fredrik Sävje for his valuable discussions and support.

References

Abadie, A., M. M. Chingos, and M. R. West (2020). Sampling-based vs design-based uncertainty in regression analysis. Econometrica 88(1), 265–296.

Adams, R. A. and J. J. F. Fournier (2003).
Sobolev Spaces (2nd ed.). Academic Press.

Aronow, P. M. and C. Samii (2017). Estimating average causal effects under general interference. Annals of Applied Statistics 11(4), 1912–1947.

Athey, S., G. W. Imbens, and S. Wager (2021). Design-based analysis in difference-in-differences settings with staggered adoption. Journal of Econometrics 225(2), 105–116.

Billingsley, P. (1999). Convergence of Probability Measures (2nd ed.). Wiley Series in Probability and Statistics. New York: John Wiley & Sons.

Chen, L. H. Y. and Q.-M. Shao (2004). Normal approximation under local dependence. The Annals of Probability 32(3A), 1985–2028.

Chen, X. (2007). Large sample sieve estimation of semi-nonparametric models. Handbook of Econometrics 6(B), 5549–5632.

Chen, X. and D. Pouzo (2012). Estimation of nonparametric conditional moment models with possibly nonsmooth generalized residuals. Econometrica 80(1), 277–321.

Chin, A. (2019). Central limit theorems via Stein's method for randomized experiments under interference.

Fisher, R. A. (1935). The Design of Experiments. Oliver & Boyd.

Harshaw, C., J. A. Middleton, and F. Sävje (2024). Optimized variance estimation under interference and complex experimental designs.

Harshaw, C., Y. Wang, and F. Sävje (2022). A design-based Riesz representation framework for randomized experiments. In NeurIPS 2022 Workshop on Causal Machine Learning for Real-World Impact. Workshop paper.

Imbens, G. W. (2004). Nonparametric estimation of average treatment effects under exogeneity: A review. Review of Economics and Statistics 86(1), 4–29.

Liu,
L. and M. G. Hudgens (2014). Large sample randomization inference with applications to cluster-randomized and panel experiments. Biometrika 101(2), 457–471.

Neyman, J. (1990). On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Statistical Science 5(4), 465–472. Originally published in 1923.

Riesz, F. (1907). Sur une espèce de géométrie analytique des systèmes de fonctions sommables. Comptes rendus de l'Académie des Sciences 144, 1409–1411.

Rosenbaum, P. R. and D. B. Rubin (1983). The central role of the propensity score in observational studies for causal effects. Biometrika 70(1), 41–55.

Ross, N. (2011). Fundamentals of Stein's method. Probability Surveys 8, 210–293.

Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66(5), 688–701.

Rubin, D. B. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics 6(1), 34–58.

Rubin, D. B. (1980). Comment on "Randomization analysis of experimental data" by E. Korn. Journal of the American Statistical Association 75(371), 591–593.

Stein, C. (1972). A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory, pp. 583–602. University of California Press.

Sävje, F., P. M. Aronow, and M. G. Hudgens (2021). Average treatment effects in the presence of unknown interference. Annals of Statistics 49(2), 673–701.

van der Vaart, A. W. (2000). Asymptotic Statistics. Cambridge University Press.

Yu, C. L., E. M. Airoldi, C. Borgs, and J. T. Chayes (2022). Estimating the total treatment effect in randomized experiments with unknown network structure. Proceedings of the National Academy of Sciences 119(44), e2208975119.
A Notation Summary

Table 1: Summary of commonly used notation throughout the paper.

$n$: Number of units in the experiment
$z \in \mathcal{Z}$: Treatment assignment vector
$\mu$: Probability measure for randomisation
$\omega \in \Omega$: Latent variable representing outcome-level randomness
$P$: Probability measure for the latent random variable $\omega$
$\mathcal{F}_x$: Sigma-algebra generated by the random variable $x$
$\tilde y_i(z,\omega)$: Potential outcome of unit $i$ under treatment $z$ and latent variable $\omega$
$\theta_i(\tilde y_i)$: Linear functional defining the treatment effect for unit $i$
$\hat\theta_i(z,\omega)$: Riesz estimator of the treatment effect for unit $i$
$\zeta_i$: The difference $\hat\theta_i(z,\omega) - \theta_i(\tilde y_i)$
$\tau_n$: Aggregate estimand (e.g., average treatment effect)
$\hat\tau_n$: Aggregate Riesz estimator for $\tau_n$
$\psi_i(z,\omega)$: Riesz representer for unit $i$
$\hat\psi_i(z,\omega)$: Estimated Riesz representer (e.g., via basis expansion)
$\mathcal{M}_i$: Model space for unit $i$'s outcome function
$N_i$: Dependency neighbourhood of unit $i$
$D_n$: Maximum neighbourhood size across units
$d_n$: Average neighbourhood size across units
$\sigma_n^2$: Variance of $\hat\tau_n$
$E_n$: Set of dependent pairs $(i,j)$ for $\zeta_i$ and $\zeta_j$
$\hat\sigma_n^2$: Variance estimator of $\hat\tau_n$ using $E_n$
$E_n^c$: Set of correlated pairs $(i,j)$ for $\zeta_i$ and $\zeta_j$
$\hat\sigma_{cn}^2$: Variance estimator of $\hat\tau_n$ using $E_n^c$
$\tilde E_n$: Conservative set of index pairs
$\tilde\sigma_n^2$: Variance estimator of $\hat\tau_n$ using $\tilde E_n$

B Algorithms

Algorithm 1: Computing the Riesz representer $\psi_i(z,\omega_0)$, following the procedure
of HWS.

Inputs:
• Basis functions $\{g_{i,1},\dots,g_{i,m}\}$ for $\mathcal{M}_i$
• Known linear functional $\theta_i$
• Randomisation distribution for $z$
Step 1: Compute the Gram matrix $\hat G_i$, with $[\hat G_i]_{\ell,k} = E_z[g_{i,\ell}(z,\omega_0)\, g_{i,k}(z,\omega_0)]$.
Step 2: Compute the target vector $\hat T_i$, with $[\hat T_i]_\ell = \theta_i(g_{i,\ell})$.
Step 3: Solve the linear system $\hat G_i \beta_i = \hat T_i$, giving $\beta_i = \hat G_i^{-1} \hat T_i$.
Output: Estimated Riesz representer $\hat\psi_i(z,\omega_0)$.

Algorithm 2: Approximating the Riesz representer $\psi_i(z,\omega_0)$ in infinite-dimensional settings.

Inputs:
• Truncated orthonormal basis $\{e_1,\dots,e_m\}$ of $\mathcal{M}_i$
• Known linear functional $\theta_i$
• Randomisation distribution for $z$
Step 1: Compute the Gram matrix $\hat G_i^{(m)}$, with $[\hat G_i^{(m)}]_{\ell,k} = E_z[e_\ell(z,\omega_0)\, e_k(z,\omega_0)]$.
Step 2: Compute the target vector $\hat T_i^{(m)}$, with $[\hat T_i^{(m)}]_\ell = \theta_i(e_\ell)$.
Step 3: Solve the regularised linear system $\hat G_i^{(m)} \beta_i = \hat T_i^{(m)}$, giving $\beta_i = (\hat G_i^{(m)})^{-1} \hat T_i^{(m)}$.
Output: Estimated Riesz representer $\hat\psi_i^{(m)}(z,\omega_0) = \sum_{k=1}^m \beta_{i,k}\, e_k(z,\omega_0)$.

C Figures and Table

Figure 1: Bias and RMSE of estimators across varying neighbourhood growth rates $d$ (x-axis: dependency growth rate $d$; curves for $n = 100, 200, 500, 1000$).

Figure 2: Coverage of 95% confidence intervals under varying dependency growth rates $d$ (x-axis: dependency growth rate $d$; curves for $n = 100, 200, 500, 1000$).

Table 2: Empirical rejection rates at 1%, 5%, and 10% significance levels across sample sizes $n$ for each dependency growth rate $d$. Based on 2000 replications using the variance estimator.

Sig. level   d      n = 100   n = 200   n = 500   n = 1000
1%           0.00   0.0075    0.0060    0.0060    0.0050
             0.10   0.0100    0.0050    0.0050    0.0070
             0.20   0.0100    0.0065    0.0045    0.0085
             0.25   0.0115    0.0100    0.0075    0.0055
             0.30   0.0155    0.0100    0.0100    0.0060
5%           0.00   0.0470    0.0345    0.0325    0.0380
             0.10   0.0440    0.0380    0.0410    0.0335
             0.20   0.0450    0.0410    0.0390    0.0450
             0.25   0.0455    0.0480    0.0425    0.0345
             0.30   0.0510    0.0480    0.0415    0.0435
10%          0.00   0.0930    0.0820    0.0785    0.0825
             0.10   0.0870    0.0810    0.0830    0.0795
             0.20   0.0865    0.0855    0.0855    0.0850
             0.25   0.0865    0.0950    0.0930    0.0760
             0.30   0.0955    0.0905    0.0940    0.0920

D Lemmas and Proofs

Proof of Proposition 1. The linearity in both examples follows directly from the facts that integrals are linear and that we are taking differences of linear functionals. For continuity, we rewrite (2.8) explicitly:
$\theta_i(u) = \iint_{\mathcal{Z}\times\Omega} \left( \frac{I_A(z)}{\mu(A)} - \frac{I_B(z)}{\mu(B)} \right) u(z,\omega)\, \mu(dz)\, P(d\omega) = \langle u, \phi \rangle$,
where $I$ stands for the indicator function and
$\phi(z,\omega) = \frac{I_A(z)}{\mu(A)} - \frac{I_B(z)}{\mu(B)}$.
We can see that $\phi \in L^2(\mathcal{Z}\times\Omega)$, since
$\|\phi\|_{L^2}^2 = \int_\Omega \int_{\mathcal{Z}} \left| \frac{I_A(z)}{\mu(A)} - \frac{I_B(z)}{\mu(B)} \right|^2 \mu(dz)\, P(d\omega) = \int_{\mathcal{Z}} \left| \frac{I_A(z)}{\mu(A)} - \frac{I_B(z)}{\mu(B)} \right|^2 \mu(dz) = \frac{1}{\mu(A)} + \frac{1}{\mu(B)} < \infty$.
Continuity is then immediate from the Cauchy-Schwarz inequality:
$|\theta_i(u)| = |\langle u, \phi \rangle| \le \|u\|\, \|\phi\|$.

Proof of Proposition 2. Derivatives and expectations are linear, which establishes linearity. Since $u \in L^2(\Omega; H^1(\mathcal{Z}))$, it follows that for almost every $\omega \in \Omega$ the function $z \mapsto u(z,\omega)$ lies in $H^1(\mathcal{Z})$, and hence the weak derivative $\partial u/\partial z$ exists and is continuous almost everywhere in $z$.
Moreover, for any interior point $z_0 \in \mathcal{Z}$, the mapping $\omega \mapsto \partial u/\partial z\,(z_0,\omega)$ belongs to $L^2(\Omega)$, with
$\left\| \frac{\partial u}{\partial z}(z_0,\cdot) \right\|_{L^2(\Omega)} \le C \left\| \frac{\partial u}{\partial z} \right\|$
for some constant $C > 0$, by the Sobolev embedding theorem and the continuity of the point-evaluation map on $H^1(\mathcal{Z})$. See Adams and Fournier (2003, Theorem 5.36) and the related discussion of pointwise evaluation in $H^1$ spaces. Applying the Cauchy-Schwarz inequality gives
$|\theta_i(u)| = \left| \int_\Omega \frac{\partial u}{\partial z}(z_0,\omega)\, P(d\omega) \right| \le \left\| \frac{\partial u}{\partial z}(z_0,\cdot) \right\|_{L^2(\Omega)}$,
and this completes the proof.

Proof of Theorem 1. See Riesz (1907).
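The Gram-matrix solve in Algorithms 1 and 2 amounts to a small linear system. The following sketch works it out for an illustrative two-arm Bernoulli(0.5) design with the basis $\{1, z\}$ and target functional $\theta(u) = u(1) - u(0)$; these concrete choices are ours for illustration, not prescribed by the paper:

```python
import numpy as np

# Illustrative design: z ~ Bernoulli(p), basis {1, z}, theta(u) = u(1) - u(0).
p = 0.5
z_support = np.array([0.0, 1.0])
weights = np.array([1 - p, p])            # randomisation distribution of z
basis = [lambda z: np.ones_like(z), lambda z: z]

# Step 1: Gram matrix [G]_{l,k} = E_z[g_l(z) g_k(z)]
G = np.array([[np.sum(weights * gl(z_support) * gk(z_support))
               for gk in basis] for gl in basis])

# Step 2: target vector [T]_l = theta(g_l) = g_l(1) - g_l(0)
T = np.array([g(np.array([1.0]))[0] - g(np.array([0.0]))[0] for g in basis])

# Step 3: solve G beta = T
beta = np.linalg.solve(G, T)

def psi(z):
    """Riesz representer psi(z) = sum_k beta_k g_k(z)."""
    return sum(b * g(z) for b, g in zip(beta, basis))
```

For this design the solve yields $\beta = (-2, 4)$, so $\psi(z) = 4z - 2$, which coincides with the representer $z/\mu_1 - (1-z)/\mu_0$ used in the simulation study.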
Proof of Theorem 2. By the definition of the Riesz estimator in (2.14) and the linearity of expectation, we have $E[\hat\theta_i(z,\omega)] = \theta_i(\tilde y_i)$, which further implies
$E[\hat\tau_n(z,\omega)] = \sum_{i=1}^n \nu_{ni} E[\hat\theta_i(z,\omega)] = \sum_{i=1}^n \nu_{ni}\, \theta_i(\tilde y_i) = \tau_n$.
This equality follows from the construction of the estimator and does not require any integrability assumptions. In particular, it holds even if $\tilde y_i(z,\omega)\psi_i(z,\omega) \notin L^1$.

Lemma 1. Suppose the weights $\nu_{ni}$ satisfy Assumption 5. Then for any integer $r \ge 1$, the following bound holds:
$\sum_{i=1}^n \nu_{ni}^r = O(n^{1-r})$. (D.1)

Proof. By Assumption 5, we have
$\sum_{i=1}^n \nu_{ni}^r \le \sum_{i=1}^n \left( \frac{\bar\nu}{n} \right)^r = n \cdot \left( \frac{\bar\nu}{n} \right)^r = \bar\nu^r \, n^{1-r}$,
which implies the claimed order bound.

Lemma 2. Under Assumptions 1, 3, and 4, the $r$-th central moment of the treatment effect Riesz estimator, for some integer $r \ge 1$, if it exists, satisfies
$E\left| \hat\theta_i(z,\omega) - \theta_i(\tilde y_i) \right|^r \le \left( 2\|\tilde y_i\|_p \|\psi_i\|_q \right)^r$, (D.2)
for any $p, q \in [1,\infty]$ satisfying $1/p + 1/q = 1/r$. In particular, when $r = 2$,
$E\left[ \hat\theta_i(z,\omega) - \theta_i(\tilde y_i) \right]^2 \le \|\tilde y_i\|_p^2 \|\psi_i\|_q^2$. (D.3)
The inequalities (D.2) and (D.3) hold trivially if either $\|\tilde y_i\|_p$ or $\|\psi_i\|_q$ has no finite upper bound.

Proof. Inequality (D.2) is a direct extension of Lemma C.3 in HWS, which handles the case without the additional randomness $\omega$. The proof carries through immediately by applying Hölder's inequality on the product space $\mathcal{Z}\times\Omega$, since the additional integration over $\omega$ preserves the inequality structure.
Regarding inequality (D.3),
$E\left[ \hat\theta_i(z,\omega) - \theta_i(\tilde y_i) \right]^2 = E\big[\hat\theta_i(z,\omega)^2\big] - (\theta_i(\tilde y_i))^2 \le E\big[(\tilde y_i(z,\omega)\psi_i(z,\omega))^2\big] = \|\tilde y_i(z,\omega)\psi_i(z,\omega)\|^2 \le \left( \|\tilde y_i(z,\omega)\|_p \|\psi_i(z,\omega)\|_q \right)^2$ (Hölder).

Proof of Proposition 3. Let $\zeta_i = \hat\theta_i(z,\omega) - \theta_i(\tilde y_i)$ and observe that $E[\zeta_i] = 0$ by Theorem 2. Then
$\sigma_n^2 = V[\hat\tau_n(z,\omega) - \tau_n] = V\left[ \sum_{i=1}^n \nu_{ni}\zeta_i \right] = E\left[ \sum_{i=1}^n \nu_{ni}\zeta_i \right]^2 = \sum_{i=1}^n \sum_{j=1}^n \nu_{ni}\nu_{nj}\, I(i \in N_j)\, I(j \in N_i)\, E[\zeta_i\zeta_j]$
$\le \sum_{i=1}^n \sum_{j=1}^n \nu_{ni}\nu_{nj}\, I(i \in N_j)\, I(j \in N_i)\, \|\zeta_i\| \|\zeta_j\|$ (Cauchy-Schwarz)
$\le \sum_{i=1}^n \nu_{ni} \sum_{j=1}^n \frac{1}{2}\nu_{nj}\left( I(j \in N_i)\|\zeta_i\|^2 + I(i \in N_j)\|\zeta_j\|^2 \right)$ (AM-GM)
$= \sum_{i=1}^n \nu_{ni}\left( \sum_{j \in N_i} \nu_{nj} \right) \|\zeta_i\|^2 \le \frac{\bar\nu^2}{n^2} \sum_{i=1}^n |N_i|\, \|\zeta_i\|^2$ (Lemma 1, (2.17))
$\le \frac{\bar\nu^2}{n^2} \sum_{i=1}^n |N_i|\, \|\tilde y_i\|_p^2 \|\psi_i\|_q^2$ (Lemma 2, (D.3)).
Note that AM-GM stands for the arithmetic-geometric mean inequality. To obtain the uniform bound, we apply the max-$p$ norm, as defined in (3.2), and note that $d_n = n^{-1}\sum_{i=1}^n |N_i|$, yielding
$\sigma_n^2 \le \frac{\bar\nu^2}{n^2} \left( \|\tilde y_i\|_{\max,p}^n \|\psi_i\|_{\max,q}^n \right)^2 \sum_{i=1}^n |N_i| = \frac{\bar\nu^2 d_n}{n} \left( \|\tilde y_i\|_{\max,p}^n \|\psi_i\|_{\max,q}^n \right)^2$.
Since this inequality holds for any $p, q \in [1,\infty]$ satisfying $1/p + 1/q = 1/2$, we have the more conservative upper bound
$\sigma_n^2 \le \frac{\bar\nu^2 d_n}{n} \left( \inf_{p,q \in S_2} \|\tilde y_i\|_{\max,p}^n \|\psi_i\|_{\max,q}^n \right)^2$.

Proof of Theorem 3. The result follows directly from Proposition 3. Specifically, the bound in (3.4), together with Assumption 8 with $r = 2$, implies that
$E[\hat\tau_n(z,\omega) - \tau_n]^2 \to 0$ as $n \to \infty$,
establishing mean-square consistency.
Moreover, suppose $\sup_{n\in\mathbb{N}} d_n < \infty$; that is, there exists some $d_0 \in (0,\infty)$ such that $d_n \le d_0$ for all $n$. Since $\hat\tau_n - \tau_n$ follows some distribution with zero mean and variance $\sigma_n^2$, it follows that $\sqrt{n}(\hat\tau_n - \tau_n)$ has variance $n\sigma_n^2$. By the bound in Proposition 3, we obtain
$n\sigma_n^2 \le \bar\nu^2 d_0 \left( \inf_{p,q\in S_2} \|\tilde y_i\|_{\max,p}^n \|\psi_i\|_{\max,q}^n \right)^2 < \infty$
under the stated assumptions. Hence the aggregate Riesz estimator satisfies $\hat\tau_n - \tau_n = O_p(n^{-1/2})$.

Lemma 3. Let $(X,d)$ be a metric space. Suppose $W_n$ and $Z$ are random variables taking values in $X$, and $d_W(W_n, Z) \to 0$ as $n \to \infty$, where $d_W$ denotes the Wasserstein distance. Then $W_n \xrightarrow{d} Z$, i.e., $W_n$ converges in distribution to $Z$.

Proof. We will show that $E[f(W_n)] \to E[f(Z)]$ for all bounded uniformly continuous functions $f: X \to \mathbb{R}$, which characterises weak convergence; see Billingsley (1999, Theorem 2.1). Let $f \in \mathrm{BUC}(X)$, the space of all bounded uniformly continuous functions. Since Lipschitz functions are dense in $\mathrm{BUC}(X)$ under the supremum norm, for any $\varepsilon > 0$ there exists a Lipschitz function $f_\varepsilon: X \to \mathbb{R}$ such that $\|f - f_\varepsilon\|_\infty < \varepsilon/3$. The function $f_\varepsilon$ is Lipschitz with some finite constant $L_\varepsilon$.
Therefore,
$|E[f_\varepsilon(W_n)] - E[f_\varepsilon(Z)]| \le L_\varepsilon \cdot d_W(W_n, Z)$.
Since $d_W(W_n, Z) \to 0$, there exists $N(\varepsilon) \in \mathbb{N}$ such that
$|E[f_\varepsilon(W_n)] - E[f_\varepsilon(Z)]| < \varepsilon/3$ for all $n > N(\varepsilon)$.
Using the triangle inequality, we have
$|E[f(W_n)] - E[f(Z)]| \le |E[f(W_n)] - E[f_\varepsilon(W_n)]| + |E[f_\varepsilon(W_n)] - E[f_\varepsilon(Z)]| + |E[f_\varepsilon(Z)] - E[f(Z)]| < \varepsilon/3 + \varepsilon/3 + \varepsilon/3 = \varepsilon$.
Since $\varepsilon > 0$ and $f \in \mathrm{BUC}(X)$ were arbitrary, we conclude that $E[f(W_n)] \to E[f(Z)]$, i.e., $W_n \xrightarrow{d} Z$.

Proof of Theorem 4. The proof proceeds in two steps. First, we establish Wasserstein convergence for a sequence of probability measures. The key argument in this step is an application of Stein's method (Stein, 1972), specifically Theorem 3.5 of Ross (2011) for a dependency-graph-based variant, following the strategy also employed in HWS. We then adapt the norm-bounding technique of HWS to our stochastic setting, where both the model space and the structure of dependence differ. Second, we show that Wasserstein convergence implies weak convergence of the same sequence.

Clearly $E[\hat\theta_i(z,\omega)^4] < \infty$ is equivalent to $E[(\hat\theta_i(z,\omega) - \theta_i(\tilde y_i))^4] < \infty$. Define $X_i = \nu_{ni}(\hat\theta_i(z,\omega) - \theta_i(\tilde y_i))$; then $E[X_i^4] < \infty$, and by Theorem 2 we also have $E[X_i] = 0$. Note that $\sum_{i=1}^n X_i = \hat\tau_n(z,\omega) - \tau_n$. Define $W_n = \sum_{i=1}^n X_i / \sigma_n$. The collection $\{X_1, X_2, \dots, X_n\}$ has dependency neighbourhoods $N_i$, $i = 1, \dots, n$, with $D_n = \max_{1\le i\le n} |N_i|$. We can now apply Theorem 3.5 in Ross (2011) to obtain
$d_W(W_n, Z) \le \frac{D_n^2}{\sigma_n^3} \sum_{i=1}^n E|X_i|^3 + \frac{\sqrt{26}\, D_n^{3/2}}{\sqrt{\pi}\, \sigma_n^2} \sqrt{\sum_{i=1}^n E[X_i^4]}$, (D.4)
where $d_W(W_n, Z)$ denotes the Wasserstein distance between $W_n$ and a standard normal variable $Z$. Observe that (D.4) gives an upper bound on the distance.
First, we examine the first term in (D.4):
$\sum_{i=1}^n E|X_i|^3 = \sum_{i=1}^n E\left| \nu_{ni}\big(\hat\theta_i(z,\omega) - \theta_i(\tilde y_i)\big) \right|^3 = \sum_{i=1}^n \nu_{ni}^3\, E\left| \hat\theta_i(z,\omega) - \theta_i(\tilde y_i) \right|^3$
$\le \sqrt{\sum_{i=1}^n \nu_{ni}^6} \cdot \sqrt{\sum_{i=1}^n \left( E\left| \hat\theta_i(z,\omega) - \theta_i(\tilde y_i) \right|^3 \right)^2}$ (Cauchy-Schwarz)
$\le n^{-5/2}\bar\nu^3 \cdot \sqrt{\sum_{i=1}^n \left( 2\|\tilde y_i\|_{p_1}\|\psi_i\|_{q_1} \right)^6}$ (Lemmas 1 and 2)
$\le n^{-2}\bar\nu^3 \left( 2\|\tilde y\|_{\max,p_1}^n \|\psi\|_{\max,q_1}^n \right)^3$.

Next, we examine the second term in (D.4):
$\sum_{i=1}^n E[X_i^4] = \sum_{i=1}^n E\left[ \nu_{ni}\big(\hat\theta_i(z,\omega) - \theta_i(\tilde y_i)\big) \right]^4 = \sum_{i=1}^n \nu_{ni}^4\, E\left[ \hat\theta_i(z,\omega) - \theta_i(\tilde y_i) \right]^4$
$\le \sqrt{\sum_{i=1}^n \nu_{ni}^8} \cdot \sqrt{\sum_{i=1}^n \left( E\left[ \hat\theta_i(z,\omega) - \theta_i(\tilde y_i) \right]^4 \right)^2}$ (Cauchy-Schwarz)
$\le n^{-7/2}\bar\nu^4 \cdot \sqrt{\sum_{i=1}^n \left( 2\|\tilde y_i\|_{p_2}\|\psi_i\|_{q_2} \right)^8}$ (Lemmas 1 and 2)
$\le n^{-3}\bar\nu^4 \left( 2\|\tilde y\|_{\max,p_2}^n \|\psi\|_{\max,q_2}^n \right)^4$.
Thus, we obtain
$d_W(W_n, Z) \le \frac{\bar\nu^3 D_n^2}{n^2 \sigma_n^3} \left( 2\|\tilde y\|_{\max,p_1}^n \|\psi\|_{\max,q_1}^n \right)^3 + \frac{\sqrt{26}\,\bar\nu^2 D_n^{3/2}}{\sqrt{\pi}\, n^{3/2} \sigma_n^2} \left( 2\|\tilde y\|_{\max,p_2}^n \|\psi\|_{\max,q_2}^n \right)^2 = O\left( \frac{D_n^2}{n^2\sigma_n^3} \right) + O\left( \frac{D_n^{3/2}}{n^{3/2}\sigma_n^2} \right)$, (D.5)
under Assumption 8, which states that $\|\tilde y\|_{\max,p_1}^n$, $\|\psi\|_{\max,q_1}^n$, $\|\tilde y\|_{\max,p_2}^n$, and $\|\psi\|_{\max,q_2}^n$ converge for some $p_1, q_1, p_2, q_2 \in [1,\infty]$ satisfying $1/p_1 + 1/q_1 = 1/3$ and $1/p_2 + 1/q_2 = 1/4$. Given Assumption 9, (D.5) can be further bounded by
$d_W(W_n, Z) \le O\left( \frac{D_n^2}{n^{1/2}} \right) + O\left( \frac{D_n^{3/2}}{n^{1/2}} \right)$, (D.6)
which implies that if $D_n = o(n^{1/4})$, then $d_W(W_n, Z) \to 0$. Lemma 3 shows that Wasserstein convergence implies weak convergence, which completes the proof.

Lemma 4. Let $\{\zeta_i\}_{i=1}^n$ be random variables such that $E[\zeta_i] = 0$ and $\sup_{i\in\mathbb{N}} E[\zeta_i^4] < \infty$. For deterministic weights $\nu_{ni}$ satisfying Assumption 5, define
$\Delta_n = n \sum_{(i,j)\in E_n} \left( \nu_{ni}\nu_{nj}\zeta_i\zeta_j - \nu_{ni}\nu_{nj} E[\zeta_i\zeta_j] \right)$. (D.7)
Then
$\Delta_n = O_p\big( n^{-1/2} D_n^{3/2} \big)$. (D.8)

Proof. Set
$X_{ij} := n\nu_{ni}\nu_{nj}\big( \zeta_i\zeta_j - E[\zeta_i\zeta_j] \big)$, $(i,j) \in E_n$,
so that $\Delta_n = \sum_{(i,j)\in E_n} X_{ij}$ and
$V[\Delta_n] = \sum_{(i,j),(l,k)\in E_n} \mathrm{Cov}[X_{ij}, X_{lk}]$. (D.9)
Note that $\mathrm{Cov}[X_{ij}, X_{lk}] = 0$ whenever $X_{ij}$ and $X_{lk}$ are independent, so the sum involves only a restricted number of dependent pairs. Since $\nu_{ni} \le \bar\nu/n$, we have $n\nu_{ni}\nu_{nj} \le \bar\nu^2/n$. Then, for any dependent $(i,j), (l,k) \in E_n$, we apply Cauchy-Schwarz and the uniform fourth-moment bound $\sup_i E[\zeta_i^4] < \infty$ to obtain
$\mathrm{Cov}[X_{ij}, X_{lk}] \le \left( \frac{\bar\nu^2}{n} \right)^2 \cdot E\left| \big(\zeta_i\zeta_j - E[\zeta_i\zeta_j]\big)\big(\zeta_l\zeta_k - E[\zeta_l\zeta_k]\big) \right| \le \frac{\bar\nu^4 C}{n^2}$ (D.10)
for some constant $C > 0$. This bound also applies when $(i,j) = (l,k)$. Fix $(i,j) \in E_n$.
A second pair $(k,\ell)$ can be dependent with $(i,j)$ only if it lies within the corresponding neighbourhoods, that is, $(k,\ell) \cap (N_i \cup N_j) \neq \emptyset$. Each unit has at most $D_n$ neighbours, so there are at most $D_n^2 - 1 = O(D_n^2)$ such
pairs. Hence every $X_{ij}$ is dependent with at most $C_1 D_n^2$ other summands for some constant $C_1 > 0$. There are at most $nD_n$ ordered pairs in $E_n$, so
$V[\Delta_n] \le C_1 D_n^2 \cdot nD_n \cdot \frac{\bar\nu^4 C}{n^2} = \frac{C_2 \bar\nu^4 D_n^3}{n}$, (D.11)
for a constant $C_2 > 0$. By Chebyshev's inequality, for any $\varepsilon > 0$,
$P\big( |\Delta_n| > \varepsilon \big) \le \frac{\mathrm{Var}(\Delta_n)}{\varepsilon^2} \le \frac{C_2 \bar\nu^4}{\varepsilon^2} \cdot \frac{D_n^3}{n}$.
Since the right-hand side is of order $n^{-1} D_n^3$, we have $\Delta_n = O_p\big( n^{-1/2} D_n^{3/2} \big)$, which completes the proof.

Proof of Theorem 5. By Theorem 2, we have $E[\zeta_i] = 0$, where $\zeta_i := \hat\theta_i(z,\omega) - \theta_i(\tilde y_i)$. Assumption 10(b) ensures that $\sup_{i\in\mathbb{N}} E[\hat\theta_i(z,\omega)^4] < \infty$, which is equivalent to $\sup_{i\in\mathbb{N}} E[\zeta_i^4] < \infty$, since $\theta_i(\tilde y_i)$ is constant.

Lemma 4, together with Assumption 5, implies that
$n\hat\sigma_n^2 - n\sigma_n^2 = n \sum_{(i,j)\in E_n} \nu_{ni}\nu_{nj}\big(\zeta_i\zeta_j - E[\zeta_i\zeta_j]\big) = O_p(n^{-1/2} D_n^{3/2})$.
If Assumption 6 holds with $D_n = o(n^{1/3})$, (D.11) implies that this expression converges to zero in mean square.

Proof of Corollary 1. The result follows directly from Theorems 4 and 5. We write
$\hat\sigma_n^{-1}\big(\hat\tau_n(z,\omega) - \tau_n\big) = \frac{\sqrt{n}\big(\hat\tau_n(z,\omega) - \tau_n\big)}{\sqrt{n}\sigma_n} \cdot \frac{\sqrt{n}\sigma_n}{\sqrt{n}\hat\sigma_n}$.
The first factor converges in distribution to $N(0,1)$ by Theorem 4, and the second factor converges to 1 in probability by Theorem 5, together with Assumption 9, which guarantees that $\sqrt{n}\sigma_n$ is bounded away from zero. Since the product of a sequence converging in distribution and another converging in probability (to a constant) also converges in distribution, Slutsky's theorem yields the desired result.

Lemma 5. $\Delta_n^c = O_p\big( n^{-1/2} D_n^{3/2} \big)$.

Proof of Lemma 5. Define
$\Delta_n^c = n \sum_{(i,j)\in E_n^c} \big( \nu_{ni}\nu_{nj}\zeta_i\zeta_j - \nu_{ni}\nu_{nj} E[\zeta_i\zeta_j] \big)$, (D.12)
so that $n\hat\sigma_{cn}^2 - n\sigma_n^2 = \Delta_n^c$. To bound the variance of $\Delta_n^c$, observe that
$V[\Delta_n^c] = \sum_{(i,j),(l,k)\in E_n^c} \mathrm{Cov}[X_{ij}, X_{lk}]$. (D.13)
Since $E_n^c \subseteq E_n$,
$V[\Delta_n] - V[\Delta_n^c] = \sum_{(i,j),(l,k)\in E_n \setminus E_n^c} \mathrm{Cov}[X_{ij}, X_{lk}]$.
This difference can be positive when $\sum_{(i,j),(l,k)\in E_n\setminus E_n^c} \mathrm{Cov}[X_{ij}, X_{lk}] \ge 0$, but this cannot be guaranteed in general, as some of the covariance terms may be negative.
Nonetheless, we can assert that the number of terms in the upper bound of $V[\Delta_n^c]$ is no greater than $C_1 D_n^2$, and that the bound in (D.11) applies as well, since it is derived by summing the absolute values of the covariances. Therefore, $\Delta_n^c = O_p\big( n^{-1/2} D_n^{3/2} \big)$, as claimed.

Proof of Theorem 6. By Lemma 5, $\Delta_n^c = O_p\big( n^{-1/2} D_n^{3/2} \big)$. In particular, if Assumption 6 holds with $D_n = o(n^{1/3})$, then $\hat\sigma_{cn}^2 - \sigma_n^2$ goes to zero in mean square.

Proof of Corollary 2. The proof follows the same argument as in Corollary 1, replacing $\hat\sigma_n^2$ with $\hat\sigma_{cn}^2$.

Proof of Theorem 7. Define
$\tilde\Delta_n = n \sum_{(i,j)\in\tilde E_n} \big( \nu_{ni}\nu_{nj}\zeta_i\zeta_j - \nu_{ni}\nu_{nj} E[\zeta_i\zeta_j] \big)$. (D.14)
By Assumption 11, the upper bound of $V[\tilde\Delta_n]$ lies between those of $V[\Delta_n^c]$ and $V[\Delta_n]$, and the result follows directly.

Proof of Corollary 3. The proof follows the same argument as in Corollary 1, replacing $\hat\sigma_n^2$ with $\tilde\sigma_n^2$.
|
https://arxiv.org/abs/2505.01324v5
|
Weight-calibrated estimation for factor models of high-dimensional time series

Xinghao Qiao¹, Zihan Wang², Qiwei Yao³, and Bo Zhang⁴
¹Faculty of Business and Economics, The University of Hong Kong, Hong Kong
²Department of Statistics and Data Science, Tsinghua University, Beijing, China
³Department of Statistics, London School of Economics, London, U.K.
⁴School of Management, University of Science and Technology of China, Hefei, China

Abstract

Factor modeling for high-dimensional time series is powerful in discovering latent common components for dimension reduction and information extraction. Most available estimation methods fall into two categories: the covariance-based, under an asymptotic-identifiability assumption, and the autocovariance-based, with white idiosyncratic noise. This paper follows the autocovariance-based framework and develops a novel weight-calibrated method to improve the estimation performance. It adopts a linear projection to tackle high-dimensionality, and employs a reduced-rank autoregression formulation. The asymptotic theory of the proposed method is established, relaxing the assumption of white noise. Additionally, we make the first attempt in the literature to provide a systematic theoretical comparison among the covariance-based, the standard autocovariance-based, and our proposed weight-calibrated autocovariance-based methods in the presence of factors with different strengths. Extensive simulations are conducted to showcase the superior finite-sample performance of our proposed method, as well as to validate the newly established theory. The superiority of our proposal is further illustrated through the analysis of one financial and one macroeconomic data set.

Keywords: Autocovariance; Covariance; Eigenanalysis; Factor strength; Reduced-rank autoregression; Weight matrix.
arXiv:2505.01357v2 [stat.ME] 5 May 2025

1 Introduction

High-dimensional time series have become indispensable in various fields such as economics, finance, climatology, medical research and others. However, conventional methods such as joint multivariate or componentwise univariate time series modeling often suffer from over-parametrization or omission of cross-sectional correlations. To overcome these difficulties, high-dimensional factor models have emerged as one of the most powerful approaches for dimension reduction and information extraction. Consider the factor model for a stationary p-vector time series {y_t}_{t∈Z}:

y_t = Ax_t + e_t,  t = 1, ..., n,  (1)

where x_t is a latent stationary r₀-vector of factors, A is a p×r₀ full-rank factor loading matrix, and e_t is a stationary p-vector idiosyncratic component. Denote by Ω_y(k) = Cov(y_t, y_{t−k}) for k ∈ Z the (auto)covariance matrix of y_t,¹ with the corresponding sample estimate Ω̂_y(k). For simplicity, we use Ω_y to denote the covariance matrix Ω_y(0). Similar definitions also apply to Ω_x(k) and Ω_e(k).

The literature has mainly focused on two types of model assumptions on (1). The first type assumes that the leading r₀ eigenvalues of AΩ_xAᵀ diverge at rate O(p), whereas all the eigenvalues of Ω_e are bounded as p → ∞. This asymptotic identifiability assumption holds when the common factors can influence a non-vanishing proportion of the components of y_t and the idiosyncratic components exhibit weak cross-sectional correlations. The seminal work of Bai and Ng (2002) proposes to estimate factors and loadings by solving a constrained least squares minimization problem, whose solution is equivalent to principal component analysis (PCA) based on the sample covariance estimator Ω̂_y, thereby introducing a covariance-based approach. Other relevant literature includes, e.g., Stock and Watson (2002); Bai (2003); Bai and Li (2012); Fan et al. (2013); Fan et al. (2018), and extensions to matrix factor
|
https://arxiv.org/abs/2505.01357v2
|
models (Yu et al., 2022; Chen and Fan, 2023) and tensor factor models (Chen and Lam, 2024; Chen et al., 2024).

¹We refer to lag-0 and nonzero-lagged autocovariances as covariance and autocovariance, respectively.

The second type of models assumes that the dynamic information in y_t is entirely captured by the common factors, while the idiosyncratic component e_t is a white noise sequence with E(e_t) = 0 and Ω_e(k) = 0 for any k ≠ 0, and is allowed to exhibit any strength of cross-sectional correlations. Given that the autocovariance of y_t can filter out the impact of e_t automatically, Lam et al. (2011) proposed an autocovariance-based method to estimate the factor loading space and the number of factors r₀ through eigenanalysis of Σ_{k=1}^m Ω̂_y(k)Ω̂_y(k)ᵀ, where m is some prescribed positive integer. For further developments, see, e.g., Lam and Yao (2012); Gao and Tsay (2022), and extensions to matrix factor models (Wang et al., 2019; Chen et al., 2020; Chang et al., 2023) and tensor factor models (Chen et al., 2022; Han, Yang, Zhang and Chen, 2024; Han, Chen, Yang and Zhang, 2024). In this paper, we follow the autocovariance-based estimation in our methodological development, but systematically compare it with the covariance-based method in theory.

The static factor model (1) considered in this paper differs from the dynamic factor model (Forni et al., 2000), which allows y_t to depend on x_t and its lagged values. Their approach adopts frequency-domain analysis based on PCA for spectral density matrices. See also Barigozzi and Hallin (2024) and the references therein.

Without loss of generality, we assume that the columns of A are orthonormal, i.e. AᵀA = I_{r₀}, as (A, x_t) in (1) can be replaced by (Ā, Vx_t), where, e.g., A = ĀV is the QR decomposition of A. Even with this orthogonality constraint, A can still be replaced by AU in (1) for any r₀×r₀ orthogonal matrix U.
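The QR-based orthonormalization just described can be checked numerically; the sizes below (p = 8, r₀ = 3) are illustrative only:

```python
import numpy as np

# Replace (A, x_t) by (A_bar, V x_t), where A = A_bar V is the reduced QR
# decomposition of A, so that A_bar has orthonormal columns.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))      # a generic full-rank p x r0 loading matrix
A_bar, V = np.linalg.qr(A)       # A_bar: 8x3 column-orthonormal, V: 3x3 upper-triangular
```

The factorization leaves the model fit unchanged, since A x_t = A_bar (V x_t); only the loading space C(A) = C(A_bar) is identified.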
However, the linear space spanned by the columns of A, denoted by C(A), can be uniquely determined by (1) when e_t is white noise and none of the linear combinations of x_t is white noise. The standard autocovariance-based method then rests on the fact that C(A) is spanned by the r₀ leading eigenvectors, i.e., those corresponding to the r₀ largest eigenvalues, of Ω_y(k)Ω_y(k)ᵀ for any k ≠ 0 (Lam and Yao, 2012). Hence the columns of an estimator for A can be taken as the r₀ leading orthonormal eigenvectors of Σ_{k=1}^m Ω̂_y(k)Ω̂_y(k)ᵀ. However, such a method often suffers from underestimating the number of factors in practice, due to its limited effectiveness in separating the common and idiosyncratic components in finite samples, not to mention the existence of common factors with different strengths in practice.

In this paper, we propose a novel weight-calibrated method by introducing a p×p non-negative definite weight matrix Ŵ to enhance the separation, thereby improving the performance of the standard autocovariance-based method. Specifically, we take the r₀ leading orthonormal eigenvectors of the matrix

M̂ = Σ_{k=1}^m Ω̂_y(k)ŴΩ̂_y(k)ᵀ  (2)

as columns of the estimated loading matrix. By casting this weight-calibrated autocovariance-based estimation in a reduced-rank autoregression framework, the weight matrix turns out to be of the form

Ŵ = Q(QᵀΩ̂_yQ)⁻¹Qᵀ,  (3)

where Q is a p×q projection matrix whose columns are the q leading orthonormal eigenvectors of Ω̂_y, and r₀ < q ≤ min(p, n). This
|
https://arxiv.org/abs/2505.01357v2
|
weight matrix induces a rescaling of the eigenvalues of Ω̂_y(k)Ω̂_y(k)ᵀ, often resulting in an enhanced separation between the common and idiosyncratic components; more precisely, the relative decrease from the r₀-th to the (r₀+1)-th largest eigenvalue of Ω̂_y(k)ŴΩ̂_y(k)ᵀ is of higher order than that of Ω̂_y(k)Ω̂_y(k)ᵀ. Additionally, we introduce a data-driven criterion for selecting the tuning parameter q.

The main contribution of our paper is four-fold. Firstly, this paper represents the first effort in the literature to establish the connection between autocovariance-based eigenanalysis and constrained least squares in a reduced-rank autoregression formulation. Our proposal involves a novel weight matrix to enhance the separation of common and idiosyncratic components, leading to the improvement in estimation. The proposed weight-calibrated estimation method is of independent interest, as it can also be applied to other factor models, such as those for matrix or tensor time series; see Section 7. Secondly, under model (1), we investigate the asymptotic properties of the relevant estimated quantities using our weight-calibrated autocovariance-based method. In particular, we relax the commonly imposed white noise assumption in the autocovariance-based literature to allow for weak serial correlations in the idiosyncratic components. Thirdly, this paper makes the first attempt in the literature to systematically compare the covariance-based and autocovariance-based methods in the presence of factors with different strengths, highlighting their respective applicable scenarios through both theoretical analysis and simulation studies.
Last but not least, by presenting asymptotic properties of the corresponding eigenvalues and eigenvectors that arise from the covariance-based, the standard autocovariance-based and our newly proposed weight-calibrated autocovariance-based methods, we demonstrate the theoretical superiority of our approach in effectively distinguishing strong factors from weak factors and idiosyncratic components, even when the two competing methods fail.

The remainder of the paper is organized as follows. In Section 2, we specify the weight matrix from a reduced-rank autoregression formulation. Section 3 presents the asymptotic properties of the proposed estimators. Section 4 conducts the theoretical analysis of the covariance-based, the standard autocovariance-based, and the weight-calibrated autocovariance-based methods in the presence of factors with different strengths. In Sections 5 and 6, we demonstrate the superior finite-sample performance of our proposed method over the competitors through extensive simulations and the analysis of two real datasets, respectively. Section 7 discusses several future extensions. All technical proofs are relegated to the supplementary material.

Notation. For any matrix B = (B_ij)_{p×q}, we let ‖B‖_min = λ_min^{1/2}(BᵀB), ‖B‖ = λ_max^{1/2}(BᵀB), and denote its Frobenius norm by ‖B‖_F = (Σ_{i,j} B_ij²)^{1/2}. Let λ_i(B) and σ_i(B) be the i-th largest eigenvalue (if it exists) and singular value of B, respectively. Let rank(B) be the rank of B. Let C(B) = C({b_i}_{i=1}^q) be the space spanned by the columns of B = (b_1, ..., b_q). For a positive integer m, write [m] = {1, ..., m} and denote by I_m the identity matrix of size m×m. For x, y ∈ R, we use x∧y = min(x, y), x∨y = max(x, y), and ⌊x⌋ as the floor function of x. For two positive sequences {a_n} and {b_n}, we write a_n ≲ b_n or a_n = O(b_n) or b_n ≳ a_n if there exists a positive constant c such that a_n/b_n ≤ c, and a_n ≪ b_n or a_n = o(b_n) if a_n/b_n → 0. We write a_n ≍ b_n if and only if a_n ≲ b_n and a_n ≳ b_n hold simultaneously.

2 Methodology

2.1 Reduced-rank autoregression formulation

To
|
https://arxiv.org/abs/2505.01357v2
|
specify the weight matrix in (3), we cast the autocovariance-based method in a reduced-rank autoregression framework. This is motivated by the equivalence between standard PCA and reduced-rank regression (Reinsel et al., 2022), which also involves a weight matrix implicitly. Specifically, consider regressing the p-vector y_t on itself with a rank-reduced coefficient matrix: y_t = Ly_t + e_t for t ∈ [n], where L is p×p with rank r₀ < p. Let p ≤ n. Denote by L̂ the constrained least squares estimator for this reduced-rank regression. Then C(L̂) is spanned by the r₀ leading eigenvectors of YᵀY(YᵀY)⁻¹YᵀY := nΩ̂_yW̃Ω̂_yᵀ = nΩ̂_y, where Y = (y_1, ..., y_n)ᵀ ∈ R^{n×p}. Note the sandwiched form with the weight matrix W̃ = Ω̂_y⁻¹. This effectively amounts to regressing y_t on its r₀ leading principal components.

Inspired by this observation, we aim to implement the autocovariance-based eigenanalysis based on (3) by regressing y_t on its lagged values y_{t−k}, for k ≥ 1, using the latent factor x_t as an intermediary. To tackle high-dimensionality, including the case p > n, we project y_t to ỹ_t = Qᵀy_t, which confines our analysis to a subspace that retains the most information of the common components. Here Q is a p×q projection matrix whose columns are the q leading orthonormal eigenvectors of Ω̂_y, and q, satisfying r₀ < q ≤ p∧n, is a tuning parameter. Now, by assuming the latent factor is of the form x_t = H_kᵀỹ_{t−k} + ẽ_{tk}, where H_k is q×r₀ and full-rank, model (1) becomes a reduced-rank autoregression

y_t = AH_kᵀỹ_{t−k} + e_{tk},  t = k+1, ..., n,  (4)

where e_{tk} = Aẽ_{tk} + e_t. The estimators for both A and H_k are then the solution of the following constrained least squares problem:

min_{H_k, AᵀA=I_{r₀}} Σ_{t=k+1}^n ‖y_t − AH_kᵀỹ_{t−k}‖².  (5)

According to Section C.1 of the supplementary material, the columns of the resulting estimator for A can be taken as the r₀ leading orthonormal eigenvectors of the matrix

Ω̂_y(k)Q(QᵀΩ̂_yQ)⁻¹QᵀΩ̂_y(k)ᵀ = Ω̂_y(k)ŴΩ̂_y(k)ᵀ,

where the weight matrix Ŵ is defined in (3).
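Putting the pieces together, a compact numerical sketch of the estimation pipeline — the weight matrix (3), the aggregated matrix M̂ of (2), and the ratio criterion (6) — follows; all function and variable names and all defaults are our own illustrative choices, not the paper's code:

```python
import numpy as np

def weight_calibrated_factors(Y, m=2, q=10, r0=None, theta_n=None):
    """Sketch of the weight-calibrated autocovariance-based estimator.
    Y: (n, p) array with rows y_1, ..., y_n.  Returns (r0_hat, A_hat)."""
    n, p = Y.shape
    Yc = Y - Y.mean(axis=0)                    # center the series
    Sigma0 = Yc.T @ Yc / n                     # sample covariance Omega_y(0)
    evals, evecs = np.linalg.eigh(Sigma0)      # eigh returns ascending order
    Q = evecs[:, ::-1][:, :q]                  # q leading eigenvectors of Omega_y(0)
    W = Q @ np.linalg.inv(Q.T @ Sigma0 @ Q) @ Q.T   # weight matrix (3)
    if theta_n is None:
        theta_n = 0.1 * p / n                  # lower-bound correction 0.1 n^{-1} p
    M = np.zeros((p, p))
    num = np.zeros(q - 1)
    den = np.zeros(q - 1)
    for k in range(1, m + 1):
        Sk = Yc[k:].T @ Yc[:-k] / n            # sample autocovariance Omega_y(k)
        Ak = Sk @ W @ Sk.T                     # Omega_y(k) W Omega_y(k)^T
        M += Ak                                # accumulate M-hat of (2)
        lam = np.sort(np.linalg.eigvalsh(Ak))[::-1][:q]
        w = 1.0 - k / n                        # heavier weight for smaller lags
        num += w * lam[:-1]
        den += w * lam[1:]
    if r0 is None:                             # ratio criterion (6)
        r0 = int(np.argmax((num + theta_n) / (den + theta_n))) + 1
    ev_M, vec_M = np.linalg.eigh(M)
    A_hat = vec_M[:, ::-1][:, :r0]             # r0 leading eigenvectors of M-hat
    return r0, A_hat
```

On data with r₀ strongly autocorrelated factors, the ratio R_j in (6) peaks at j = r₀; supplying r0 explicitly skips the selection step.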
To accumulate information across different lags, we propose to estimate the columns of A by the r₀ leading orthonormal eigenvectors of the matrix M̂ defined in (2).

The estimation procedure developed for model (1) assumes that the number of factors r₀ is known or can be correctly identified. In practice, r₀ is unknown and needs to be estimated. Let λ_{k1} ≥ ··· ≥ λ_{kq} be the eigenvalues of Ω̂_y(k)ŴΩ̂_y(k)ᵀ for k ∈ [m]. Motivated by Zhang et al. (2024), we then estimate r₀ by maximizing the ratios of adjacent cumulative weighted eigenvalues:

r̂₀ = argmax_{j∈[q−1]} R_j,  with R_j = {Σ_{k=1}^m (1−k/n)λ_{kj} + ϑ_n} / {Σ_{k=1}^m (1−k/n)λ_{k,j+1} + ϑ_n},  (6)

where heavier weights are placed on eigenvalues with smaller k. Here ϑ_n > 0 provides a lower-bound correction to λ_{kj} for j = r₀+1, ..., q, and satisfies the conditions in Proposition 1 below. In practice, we may set ϑ_n = 0.1n⁻¹p. It is worth noting that the standard method based on the ratios of adjacent eigenvalues of M̂ (Lam and Yao, 2012) may suffer from inconsistent estimation, as discussed in Zhang et al. (2024). We finally take the r̂₀ leading eigenvectors of M̂ as the columns of our loading matrix estimator Â.

Remark 1. (i) We provide an explanation for why the weight matrix Ŵ can improve the estimation performance for model (1). Since Ŵ = Q(QᵀΩ̂_yQ)⁻¹Qᵀ acts as a rank-q pseudo-inverse of Ω̂_y, it induces a rescaling of the eigenvalues of Ω̂_y(k)Ω̂_y(k)ᵀ. This often enhances the separation between the common and idiosyncratic components, which in turn improves the performance of the ratio-based method in (6), as empirically evidenced by simulation results in Section
|
https://arxiv.org/abs/2505.01357v2
|
5.1 below. Detailed intuitive illustrations are presented in Section C.2 of the supplementary material. (ii) The weight matrix can also enhance the capability to distinguish strong factors from weak factors. Consider the model y_t = Ax_t + Bz_t + e_t, where the strong factors x_t ∈ R^{r₀} and the weak factors z_t ∈ R^{r₁} are asymptotically identifiable through the difference in their respective strengths. Let μ_{k1} ≥ ··· ≥ μ_{kp} be the eigenvalues of Ω̂_y(k)Ω̂_y(k)ᵀ, and λ_{k1} ≥ ··· ≥ λ_{kq} be the eigenvalues of Ω̂_y(k)ŴΩ̂_y(k)ᵀ. When the strong and weak factors exhibit comparable cross-sectional correlations, it can be shown that

(μ_{kr₀}/μ_{k,r₀+1}) / (μ_{k,r₀+r₁}/μ_{k,r₀+r₁+1}) = o_p{(λ_{kr₀}/λ_{k,r₀+1}) / (λ_{k,r₀+r₁}/λ_{k,r₀+r₁+1})}

under mild conditions. This implies that our proposed method can distinguish x_t from z_t more effectively. A rigorous theoretical justification is provided in Section 4 below.

2.2 Determining the tuning parameter

The practical implementation requires choosing a suitable value for the tuning parameter q (i.e., the number of columns in Q). Intuitively, if q is too small, the bias increases because the q leading eigenvectors of Ω̂_y may fail to preserve the most information of the common components. Conversely, if q is too large, the variance increases due to a higher proportion of invalid eigenvectors being included. This bias-variance tradeoff underscores the necessity of selecting q optimally. Inspired by Wang and Zhu (2011) and Fan and Tang (2013), we propose a generalized Bayesian information criterion (BIC):

BIC_k(q) = pn log L_k(q) + C d_k(q) log(pn),  k ∈ [m],  (7)

where L_k(q) = (pn)⁻¹ Σ_{t=k+1}^n ‖y_t − Â_kĤ_kᵀỹ_{t−k}‖² is the sum of squared residuals (divided by pn) from the regression in (4), (Ĥ_k, Â_k) is the solution of the constrained minimization problem in (5) using r̂₀, r̂₀ is obtained by our proposed ratio-based estimator in (6), and C > 0 is some constant.
Additionally, d_k(q) = (p+q)r̂₀ − r̂₀(r̂₀+1)/2 represents the corresponding degrees of freedom, calculated as the total number of parameters minus the number of constraints in (5). Then the optimal value of q can be determined as

q̂ = argmin_{r̄₀ < q ≤ q₀} Σ_{k=1}^m BIC_k(q),  (8)

where q₀ is a prespecified positive integer that does not need to be very large, for computational efficiency, and r̄₀ is obtained by (6) using q₀. In our empirical analysis in Sections 5 and 6, we set C = 0.2 and q₀ = 15, which consistently yield good finite-sample performance.

3 Theoretical results with uniform factor strength

In this section, we investigate the asymptotic properties of the relevant estimated quantities under the condition that all the factors in model (1) are of the same strength. Before presenting the theoretical results, we impose some regularity conditions.

Condition 1. AᵀA = I_{r₀} and r₀ is fixed.

Condition 2. (i) {x_t}_{t∈Z} and {e_t}_{t∈Z} are uncorrelated; (ii) {y_t}_{t∈Z}, {x_t}_{t∈Z} and {e_t}_{t∈Z} are strictly stationary with finite fourth moments; (iii) {y_t}_{t∈Z} is ψ-mixing with the mixing coefficients satisfying Σ_{t≥1} t ψ(t)^{1/2} < ∞.

Condition 3. (i) There exists some constant δ₀ ∈ (0,1] such that ‖Ω_x‖ ≍ ‖Ω_x‖_min ≍ p^{δ₀}; (ii) n = O(p) and p^{1−δ₀} = o(n).

Condition 4. For model (1), let E = (e_1, ..., e_n)ᵀ = GΓR, where there exists some constant c ∈ (0,1] such that G ∈ R^{n×n} with 0 < σ_{⌊cn⌋}(G) ≤ ··· ≤ σ_1(G) < ∞, and R ∈ R^{p×p} with 0 < σ_{⌊cp⌋}(R) ≤ ··· ≤ σ_1(R) < ∞. Moreover, Γ = (Γ_{tj})_{n×p}, whose entries are i.i.d. across t ∈ [n] and j ∈ [p] with zero mean, unit variance and finite fourth moment.

Condition 5. ‖Ω_x(k)‖ ≍ ‖Ω_x(k)‖_min ≍ p^{δ₀} for k ∈ [m], where δ₀ is specified in Condition 3.

Remark 2. Condition 1 can always be satisfied, as discussed in Section 1. Condition 2(i) is conventional in the factor modeling literature (Fan et al., 2013; Wang et al., 2019; Chen et al., 2022).
|
https://arxiv.org/abs/2505.01357v2
|
However, it can be relaxed to allow weak correlations between {x_t} and {e_t}, which do not affect the asymptotic results. For simplicity, we assume uncorrelatedness to facilitate the presentation. Conditions 2(ii) and (iii) are also standard in the literature; see, e.g., Lam and Yao (2012) and Zhang et al. (2024). The parameter δ₀ in Conditions 3 and 5 can be regarded as the strength of cross-sectional contemporaneous and serial correlations in model (1), with larger values yielding stronger factors. When δ₀ = 1, Condition 3 corresponds to the pervasiveness assumption in Fan et al. (2013). Condition 4 implies that tr(EEᵀ) ≍ np, and that the singular values of E satisfy σ_1(E) ≍ σ_{⌊c(p∧n)⌋}(E) ≍ n^{1/2} + p^{1/2} with probability tending to 1. These restrictions on the idiosyncratic errors {e_t} are weaker than the commonly imposed white noise assumption in the autocovariance-based factor modeling literature (Lam and Yao, 2012; Wang et al., 2019; Chen et al., 2022). Condition 5 indicates comparable serial correlations in {x_t}, allowing the autocovariance matrices of y_t to still retain the useful information of C(A); see Zhang et al. (2024).

Let φ_{k1}, ..., φ_{kq} be the eigenvectors of Ω̂_y(k)ŴΩ̂_y(k)ᵀ corresponding to the eigenvalues λ_{k1} ≥ ··· ≥ λ_{kq} ≥ 0, and Φ_{k,A} = (φ_{k1}, ..., φ_{kr₀}) ∈ R^{p×r₀}. Note that C(Φ_{k,A}) serves as the k-th individual estimate of C(A) for k ∈ [m]. The following theorem presents the asymptotic properties of the eigenvalues and eigenvectors of Ω̂_y(k)ŴΩ̂_y(k)ᵀ.

Theorem 1. Let Conditions 1–5 hold. For each k ∈ [m], the following assertions hold: (i) λ_{k1} ≍ λ_{kr₀} ≍ p^{δ₀} with probability tending to 1, and λ_{k,r₀+1} = O_p(n⁻¹p); (ii) ‖Φ_{k,A}Φ_{k,A}ᵀ − AAᵀ‖ = O_p(n^{−1/2}p^{(1−δ₀)/2}).

Remark 3. (i) Theorem 1(i) is essential in establishing the consistency of our ratio-based estimator r̂₀ for r₀ proposed in (6); see Proposition 1 below. Specifically, combined with the lower-bound correction ϑ_n ≍ n⁻¹p, it yields that, with probability tending to 1, (λ_{r₀} + ϑ_n)/(λ_{r₀+1} + ϑ_n) ≍ np^{δ₀−1}, which goes to infinity by Condition 3(ii).
This demonstrates that our method can distinguish x_t from e_t with high probability. (ii) Theorem 1(ii) establishes the convergence rate of C(Φ_{k,A}) for each k ∈ [m]. It, together with Proposition 1, facilitates the asymptotic analysis of the estimate Â of A, whose columns are the r̂₀ leading eigenvectors of M̂ in (2), aggregating dynamic information across different lags. As demonstrated in Proposition 2 below, C(Â) achieves the same rate as C(Φ_{k,A}). (iii) Similar arguments to Remarks 3(i) and (ii) apply to the theoretical results for model (10) with different factor strengths in Section 4. Therefore, we will only present the asymptotic properties of the eigenvalues and eigenvectors of Ω̂_y, Ω̂_y(k)Ω̂_y(k)ᵀ and Ω̂_y(k)ŴΩ̂_y(k)ᵀ for each k ∈ [m] in Sections 4.1, 4.2 and 4.3, respectively.

Remark 4. Since only the loading space C(A) is uniquely determined, we measure the estimation error in terms of its uniquely defined projection matrix AAᵀ under the operator norm (Chen et al., 2022; Zhang et al., 2024), as presented in Theorem 1(ii). Furthermore, it can be shown that for each k ∈ [m], there exists an orthogonal matrix U_k ∈ R^{r₀×r₀} such that ‖Φ_{k,A} − AU_k‖ = O_p(n^{−1/2}p^{(1−δ₀)/2}), which is consistent with Theorem 2(i) of Lam et al. (2011). To quantify the accuracy in estimating C(A) directly, we can use a measure of the distance between two column spaces (Wang et al., 2019). Specifically, for two column-orthogonal matrices K₁ ∈ R^{p×q₁} and K₂ ∈ R^{p×q₂}, define

D{C(K₁), C(K₂)} = {1 − tr(K₁K₁ᵀK₂K₂ᵀ)/(q₁∨q₂)}^{1/2},  (9)

ranging from 0 to 1. The distance is 0 if and only if C(K₁) = C(K₂), and 1 if and only if K₁ and K₂ are orthogonal. By
|
https://arxiv.org/abs/2505.01357v2
|
Theorem 1(ii), we can further establish that D{C(Φ_{k,A}), C(A)} = O_p(n^{−1/2}p^{(1−δ₀)/2}) for each k ∈ [m], achieving the same rate as in Theorem 1(ii). Similar arguments apply to our proposed estimator Â, as shown in Proposition 2 below.

Proposition 1. Let the conditions for Theorem 1 hold. Assume that ϑ_n p^{−δ₀} → 0 and ϑ_n ≳ n⁻¹p, where ϑ_n is specified in (6). Then P(r̂₀ = r₀) → 1 as p, n → ∞.

Proposition 2. Under the conditions of Proposition 1, ‖ÂÂᵀ − AAᵀ‖ = O_p(n^{−1/2}p^{(1−δ₀)/2}) and D{C(Â), C(A)} = O_p(n^{−1/2}p^{(1−δ₀)/2}).

4 Theoretical results with different factor strengths

In this section, we consider a scenario more common in practice: the factors are of different strengths. To highlight the added complication in comparison to the case of uniform factor strength presented in Section 3, we consider the case that the components of x_t in model (1) exhibit two different levels of strength. More specifically, we rewrite (1) as follows:

y_t = Ax_t + Bz_t + e_t,  t ∈ [n],  (10)

where x_t is an r₀-vector of strong factors, z_t is an r₁-vector of weak factors, A ∈ R^{p×r₀} and B ∈ R^{p×r₁} are the corresponding full-rank factor loading matrices, and e_t is a p-vector of idiosyncratic errors. The strengths of x_t and z_t are reflected by the constants δ₀ and δ₁, δ_{1k}, respectively, which are specified in Conditions 8, 10 and 11 below. Model (10) and its variants have been extensively studied in the literature; see, e.g., Peña and Poncela (2006); Chudik et al. (2011); Lam and Yao (2012); Ando and Bai (2017); Anatolyev and Mikusheva (2022) and Zhang et al. (2024). Note that Zhang et al. (2024) studied (10) under a special assumption on the strengths of x_t and z_t. In this section, we relax their factor strength assumption; see details in Remark 6 below.

In model (10), our main focus is on the estimation of the number of strong factors r₀ and the corresponding loading matrix A, for two main reasons.
Firstly, the strong factors {x_t}, which exhibit strong serial correlations, play a crucial role in forecasting the time series {y_t}. Once the strong factors are correctly identified, the weak factors and their loadings can then be estimated using a two-step method (Lam and Yao, 2012). Secondly, treating u_t = Bz_t + e_t as the idiosyncratic errors, which may exhibit moderate cross-sectional and serial correlations due to an unobserved factor structure (Anatolyev and Mikusheva, 2022), can degenerate model (10) to model (1), where r₀ and A are the primary quantities to be estimated.

To identify the scenarios in which x_t can be distinguished from z_t and e_t, we investigate the theoretical properties of the covariance-based, the standard autocovariance-based, and our weight-calibrated autocovariance-based methods in Sections 4.1, 4.2 and 4.3, respectively. Specifically, we present the corresponding asymptotic results for the eigenvalues and eigenvectors of Ω̂_y, Ω̂_y(k)Ω̂_y(k)ᵀ, and Ω̂_y(k)ŴΩ̂_y(k)ᵀ for each k ∈ [m]. The number of strong factors r₀ can be determined using the ratios of adjacent eigenvalues for the covariance-based method (Ahn and Horenstein, 2013) and, analogously to (6), the ratios of adjacent cumulative weighted eigenvalues for the autocovariance-based methods. The asymptotic properties of the corresponding estimators for r₀ and A will be presented in a similar fashion to Theorem 1(i) and (ii).

4.1 Covariance-based method

We impose the following regularity conditions for model (10), which correspondingly generalize Conditions 1–4 for model (1) to incorporate the additional weak factors {z_t}.

Condition 6. AᵀA = I_{r₀}, BᵀB = I_{r₁}, ‖AᵀB‖ < 1 and r = r₀ + r₁ is fixed.

Condition 7. (i) {x_t}_{t∈Z}, {z_t}_{t∈Z}, and {e_t}_{t∈Z} are mutually uncorrelated; (ii) {y_t}_{t∈Z}, {x_t}_{t∈Z}, {z_t}_{t∈Z} and {e_t}_{t∈Z} are
|
https://arxiv.org/abs/2505.01357v2
|
strictly stationary with finite fourth moments; (iii) {y_t}_{t∈Z} is ψ-mixing with the mixing coefficients satisfying Σ_{t≥1} t ψ(t)^{1/2} < ∞.

Condition 8. (i) There exist some constants δ₀ and δ₁ with 0 < δ₁ ≤ δ₀ ≤ 1 such that ‖Ω_x‖ ≍ ‖Ω_x‖_min ≍ p^{δ₀} and ‖Ω_z‖ ≍ ‖Ω_z‖_min ≍ p^{δ₁}; (ii) n = O(p) and p^{1−δ₁} = o(n).

Condition 9. For model (10), let E = (e_1, ..., e_n)ᵀ = GΓR, where there exists some constant c₁ ∈ (0,1] such that G ∈ R^{n×n} with 0 < σ_{⌊c₁n⌋}(G) ≤ ··· ≤ σ_1(G) < ∞, and R ∈ R^{p×p} with 0 < σ_{⌊c₁p⌋}(R) ≤ ··· ≤ σ_1(R) < ∞. Moreover, Γ = (Γ_{tj})_{n×p}, whose entries are i.i.d. across t ∈ [n] and j ∈ [p] with zero mean, unit variance and finite fourth moment.

Let θ₁ ≥ ··· ≥ θ_p ≥ 0 be the eigenvalues of Ω̂_y with the corresponding eigenvectors ξ₁, ..., ξ_p, and Ξ_A = (ξ₁, ..., ξ_{r₀}) ∈ R^{p×r₀}. The following theorem establishes the asymptotic properties of the eigenvalues and eigenvectors of Ω̂_y.

Theorem 2. Let Conditions 6–9 hold, and δ₀ > δ₁. The following assertions hold: (i) θ₁ ≍ θ_{r₀} ≍ p^{δ₀}, θ_{r₀+1} ≍ θ_r ≍ p^{δ₁}, and θ_{r+1} ≍ θ_{⌊c₁(p∧n)⌋} ≍ n⁻¹p with probability tending to 1; (ii) ‖Ξ_AΞ_Aᵀ − AAᵀ‖ = O_p(n^{−1/2}p^{(1−δ₀)/2} + p^{δ₁−δ₀}).

Remark 5. By Theorem 2(i), we obtain that θ_{r₀}/θ_{r₀+1} ≍ p^{δ₀−δ₁} and θ_r/θ_{r+1} ≍ np^{δ₁−1} with probability tending to 1, which together demonstrate that the covariance-based method can distinguish x_t from z_t with high probability as long as δ₀ > δ₁ and n = o(p^{1+δ₀−2δ₁}). The condition δ₀ > δ₁ can be relaxed to δ₀ ≥ δ₁ for the two autocovariance-based methods, as presented in Theorems 3 and 4 below.

4.2 Standard autocovariance-based method

In addition to Conditions 6–9, we give the following regularity conditions for model (10), which characterize the strengths of serial correlations for the strong and weak factors.

Condition 10. ‖Ω_x(k)‖ ≍ ‖Ω_x(k)‖_min ≍ p^{δ₀} for k ∈ [m], where δ₀ is specified in Condition 8.

Condition 11. (i) There exists some constant δ_{1k} with 0 < δ_{1k} < δ₀ such that ‖Ω_z(k)‖ ≍ ‖Ω_z(k)‖_min ≍ p^{δ_{1k}} for k ∈ [m], where δ₀ is specified in Condition 8; (ii) p^{1+δ₁−2δ_{1k}} = o(n).

Remark 6. Condition 10 serves as the counterpart of Condition 5 for the strong factors in model (10); see also Remark 2.
Condition 11(i) implies that {z_t} has weaker serial correlations compared to {x_t}, where δ_{1k} < δ₀ is satisfied if δ_{1k} < δ₁ or δ₁ < δ₀. A similar assumption is imposed in Zhang et al. (2024) as δ_{1k} = δ₁ < δ₀ = 1, which is a special case of Condition 11(i). Condition 11(ii) is required to establish the asymptotics of the standard and weight-calibrated autocovariance-based methods, and is stronger than Condition 8(ii) if δ_{1k} < δ₁.

Let ψ_{k1}, ..., ψ_{kp} be the eigenvectors of Ω̂_y(k)Ω̂_y(k)ᵀ corresponding to the eigenvalues μ_{k1} ≥ ··· ≥ μ_{kp} ≥ 0, and Ψ_{k,A} = (ψ_{k1}, ..., ψ_{kr₀}) ∈ R^{p×r₀}. The following theorem gives the asymptotic properties of the eigenvalues and eigenvectors of Ω̂_y(k)Ω̂_y(k)ᵀ.

Theorem 3. Let Conditions 6–11 hold. For each k ∈ [m], the following assertions hold: (i) μ_{k1} ≍ μ_{kr₀} ≍ p^{2δ₀}, μ_{k,r₀+1} ≍ μ_{kr} ≍ p^{2δ_{1k}}, and μ_{k,r+1} ≍ μ_{k,⌊c₁(p∧n)⌋} ≍ n⁻²p² with probability tending to 1; (ii) ‖Ψ_{k,A}Ψ_{k,A}ᵀ − AAᵀ‖ = O_p(n^{−1/2}p^{(1−δ₀)/2} + p^{δ_{1k}−δ₀}).

Remark 7. Theorem 3(i) entails that μ_{kr₀}/μ_{k,r₀+1} ≍ p^{2δ₀−2δ_{1k}} and μ_{kr}/μ_{k,r+1} ≍ n²p^{2δ_{1k}−2} with probability tending to 1. These results show that the standard autocovariance-based method can distinguish x_t from z_t with high probability when δ₀ > δ_{1k} and n = o(p^{1+δ₀−2δ_{1k}}).

Remark 8. Condition 11(ii) may not always hold, especially when there are very weak serial correlations in {z_t} (i.e., very small δ_{1k}). For example, when {z_t} is from a vector MA(l) model, then for each k > l, Ω_z(k) = 0, resulting in δ_{1k} = 0, in which case Conditions 8(ii) and 11(ii) cannot be satisfied simultaneously. To address this issue, we can replace Condition 11 with the new condition ‖Ω_z(k)‖ = O(n^{−1/2}p^{(1+δ₁)/2}), which includes the case that {z_t} follows a vector MA(l) process with k > l, and establish the asymptotic results for Ω̂_y(k)Ω̂_y(k)ᵀ in Corollary 1 below. With a lower-bound correction ϑ_n ≍ n⁻¹p^{1+δ₁}, Corollary 1(i) implies that (μ_{kr₀} + ϑ_n)/(μ_{k,r₀+1} + ϑ_n) goes to infinity with probability tending to 1, which ensures the consistency of our ratio-based estimator for r₀. Moreover, the convergence rate
|
https://arxiv.org/abs/2505.01357v2
|
of Ψ_{k,A} in Corollary 1(ii) is consistent with that in Theorem 1(ii) for model (1).

Corollary 1. Let Conditions 6–10 hold and ‖Ω_z(k)‖ = O(n^{−1/2}p^{(1+δ₁)/2}) for each k ∈ [m]. (i) μ_{k1} ≍ μ_{kr₀} ≍ p^{2δ₀} with probability tending to 1, and μ_{k,r₀+1} = O_p(n⁻¹p^{1+δ₁}); (ii) ‖Ψ_{k,A}Ψ_{k,A}ᵀ − AAᵀ‖ = O_p(n^{−1/2}p^{(1−δ₀)/2}).

4.3 Weight-calibrated autocovariance-based method

The following theorem presents the asymptotic theory for the eigenvalues and eigenvectors of Ω̂_y(k)ŴΩ̂_y(k)ᵀ for model (10), compared to Theorem 1 for model (1).

Theorem 4. Let Conditions 6–11 hold. For each k ∈ [m], the following assertions hold: (i) λ_{k1} ≍ λ_{kr₀} ≍ p^{δ₀} and λ_{k,r₀+1} ≍ λ_{kr} ≍ p^{2δ_{1k}−δ₁} with probability tending to 1, and λ_{k,r+1} = O_p(n⁻¹p); (ii) ‖Φ_{k,A}Φ_{k,A}ᵀ − AAᵀ‖ = O_p(n^{−1/2}p^{(1−δ₀)/2} + p^{δ_{1k}−(δ₀+δ₁)/2}).

Similar to the arguments in Remark 3, Theorems 3(i) and 4(i) can guarantee the consistency of the corresponding ratio-based estimators for r₀, which, combined with Theorems 3(ii) and 4(ii), ensures that the respective loading space estimates achieve the same convergence rates. Therefore, we rely on the results in Theorems 3 and 4 to compare the theoretical properties of the standard and weight-calibrated autocovariance-based methods.

Remark 9. Comparing the asymptotic properties of the eigenvalues in Theorems 3(i) and 4(i) yields that λ_{kj} ≍ μ_{kj}^{1/2} for j ∈ [r₀] with probability tending to 1, and λ_{kj} = O_p(μ_{kj}^{1/2}) for j > r. Moreover, λ_{kj} = o_p(μ_{kj}^{1/2}) for j = r₀+1, ..., r if δ₁ > δ_{1k}. Therefore, compared to Ω̂_y(k)Ω̂_y(k)ᵀ, our weight-calibrated version Ω̂_y(k)ŴΩ̂_y(k)ᵀ results in a higher-order relative decrease from the r₀-th to the (r₀+1)-th eigenvalue and a smaller-order relative decrease from the r-th to the (r+1)-th eigenvalue, which enhances its capability to separate x_t from z_t. Specifically, by Theorem 4(i) with a lower-bound correction ϑ_n ≍ n⁻¹p, we have (λ_{kr₀} + ϑ_n)/(λ_{k,r₀+1} + ϑ_n) ≍ p^{δ₀+δ₁−2δ_{1k}} and (λ_{kr} + ϑ_n)/(λ_{k,r+1} + ϑ_n) ≍ np^{2δ_{1k}−δ₁−1} with probability tending to 1.
If n = o(p^{δ₀+2δ₁−4δ_{1k}+1}), then (λ_{kr} + ϑ_n)/(λ_{k,r+1} + ϑ_n) = o_p{(λ_{kr₀} + ϑ_n)/(λ_{k,r₀+1} + ϑ_n)}, which indicates that the proposed method can distinguish x_t from z_t with high probability. Combined with Remark 7, there exists an overlap, p^{δ₀−2δ_{1k}+1} ≪ n ≪ p^{δ₀+2δ₁−4δ_{1k}+1} when δ₁ > δ_{1k}, such that our weight-calibrated autocovariance-based method can distinguish x_t from z_t with high probability, whereas the standard autocovariance-based method cannot.

Remark 10. For the loading space estimation of A, despite the rate of Φ_{k,A} being no faster than that of Ψ_{k,A} according to Theorems 3(ii) and 4(ii), when δ₀ = δ₁ or Condition 11 is replaced by ‖Ω_z(k)‖ = O(n^{−1/2}p^{(1+δ₁)/2}) (see Corollaries 1 and 2), the standard and weight-calibrated autocovariance-based methods achieve the same rate. Our simulations provide empirical evidence of improved estimation performance for A using the proposed method. Following an analogy to Remark 8, we present the following corollary for our method.

Corollary 2. Let Conditions 6–10 hold, and ‖Ω_z(k)‖ = O(n^{−1/2}p^{(1+δ₁)/2}) for each k ∈ [m]. (i) λ_{k1} ≍ λ_{kr₀} ≍ p^{δ₀} with probability tending to 1, and λ_{k,r₀+1} = O_p(n⁻¹p); (ii) ‖Φ_{k,A}Φ_{k,A}ᵀ − AAᵀ‖ = O_p(n^{−1/2}p^{(1−δ₀)/2}).

4.4 Summary of theoretical results

We compare the effectiveness of the three competing methods in distinguishing x_t from z_t and e_t, based on the results in Theorems 2(i), 3(i), and 4(i), as follows:

• Theorem 2(i) reveals that when δ₀ = δ₁, the covariance-based method cannot separate x_t from z_t with high probability. In contrast, both the standard and weight-calibrated autocovariance-based methods can achieve this separation even when δ₀ = δ₁, as demonstrated in Theorems 3(i) and 4(i).

• Theorems 3(i) and 4(i) respectively show that, with probability tending to 1,

(μ_{kr₀}/μ_{k,r₀+1}) / (μ_{kr}/μ_{k,r+1}) ≍ n⁻²p^{2+2δ₀−4δ_{1k}},  {(λ_{kr₀}+ϑ_n)/(λ_{k,r₀+1}+ϑ_n)} / {(λ_{kr}+ϑ_n)/(λ_{k,r+1}+ϑ_n)} ≍ n⁻¹p^{1+δ₀+2δ₁−4δ_{1k}},

where ϑ_n ≍ n⁻¹p provides a lower-bound correction to λ_{kj} for j > r.
When p1`δ0´2δ1“ opnq, we have µkr0µk,r`1{µk,r0`1µkr“optpλkr0`ϑnqpλk,r`1`ϑnq{pλk,r0`1`ϑnqpλkr` ϑnqu.This demonstrates that our proposed method can distinguish xtfromztmore effectively, highlighting its theoretical superiority over the standard autocovariance- based method. It is worth noting that, when δ0“δ1, the requirement p1`δ0´2δ1“opnq is automatically satisfied under Condition 8(ii). 17 5 Simulations We conduct a series
|
https://arxiv.org/abs/2505.01357v2
|
of simulations to evaluate the finite-sample performance of the pro- posed method for factor models (1) and (10) in Sections 5.1 and 5.2, respectively. 5.1 The model with uniform factor strength Consider the factor model yt“rArxt`et, tPrns, (11) where the entries of rAPRpˆr0are sampled independently from the uniform distribution onp´p´p1´δ0q{2, p´p1´δ0q{2q, following a similar sampling procedure as in Wang et al. (2019). Each component series of rxtfollows an AR(1) process with independent Np0,1qinnovations, and the autoregressive coefficient drawn from the uniform distribution on p´0.95,´0.7qY p0.7,0.95q.Thep-vector of the idiosyncratic error etis generated by a vector MA(1) model et“εt`Πεt´1,where the coefficient matrix Π“ p0.6|i´j|qpˆp,and each component se- ries of the p-vector εtfollows a MA(1) process with independent Np0,1qinnovations, and the moving-average coefficient drawn from the uniform distribution on tp´0.15,´0.05qY p0.05,0.15qu. To incorporate model (11) into our framework, we let the columns of AP Rpˆr0consist of the r0leading left singular vectors of rA, which ensures CpAq“CprAqand ATA“Ir0,and results in the decomposition rA“AC.By letting xt“Crxt, it is easy to check that txtufulfills the factor strength requirements in Conditions 3(i) and 5. Then, model (11) can be rewritten as yt“Axt`et,aligning with model (1) with uniform factor strength. We set in model (11) n“300, p“50,100,200 and 400 ,andr0“3 with factor strength ranging from relatively weak to strong, specifically δ0“0.75 and 1 . We compare the proposed weight-calibrated autocovariance-based method, referred to as WAuto for simplicity, with two competing methods: the covariance-based method (denoted as Cov) and the standard autocovariance-based method (denoted as Auto). 
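A minimal sketch of this data-generating process, following the description above: uniform loading entries on p´p´p1´δ0q{2, p´p1´δ0q{2q, AR(1) factors with coefficient magnitudes in p0.7,0.95q, and a vector MA(1) idiosyncratic error driven by component-wise MA(1) series. We simulate yt“rArxt`et directly, omitting the rescaling to the orthonormal-loading form; the function name, burn-in length, and random seed are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_model11(n=300, p=50, r0=3, delta0=0.75, burn=100):
    # Loading matrix: i.i.d. Uniform(-p^{-(1-delta0)/2}, p^{-(1-delta0)/2}) entries.
    b = p ** (-(1 - delta0) / 2)
    A_tilde = rng.uniform(-b, b, size=(p, r0))

    # AR(1) factors with N(0,1) innovations; |phi| in (0.7, 0.95), random sign.
    phi = rng.uniform(0.7, 0.95, r0) * rng.choice([-1, 1], r0)
    x = np.zeros((n + burn, r0))
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + rng.standard_normal(r0)
    x = x[burn:]

    # Idiosyncratic error: vector MA(1) with Pi = (0.6^{|i-j|}), driven by
    # component-wise MA(1) series eps_t with coefficients in (0.05, 0.15).
    Pi = 0.6 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    theta = rng.uniform(0.05, 0.15, p) * rng.choice([-1, 1], p)
    z = rng.standard_normal((n + 1 + burn, p))
    eps = z[1:] + theta * z[:-1]           # component-wise MA(1)
    e = eps[1:] + eps[:-1] @ Pi.T          # vector MA(1): e_t = eps_t + Pi eps_{t-1}
    e = e[burn - 1:][:n]

    return x @ A_tilde.T + e               # y_t = A~ x~_t + e_t, shape (n, p)

Y = simulate_model11()
print(Y.shape)   # -> (300, 50)
```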
We evaluate 18 the sample performance of these three methods in estimating the number of factors r0and the factor loading space CpAq,which are respectively measured by the relative frequency estimate for Ppˆr0“r0qand the estimation error using the distance D␣ CppAq,CpAq( ,defined according to (9). Since our experimental results are insensitive to the choice of mwhen implementing Auto and WAuto, we set m“2.We ran each simulation 1000 times. Table 1: The relative frequency estimate of Ppˆr0“r0qand the average of ˆ r0(in parentheses) for model (11) over 1000 simulation runs. δ0“0.75 δ0“1 Cov Auto WAuto Cov Auto WAuto p“50 0.098 0.647 0.899 0.651 0.961 0.998 (1.635) (2.430) (2.855) (2.432) (2.937) (2.997) p“100 0.282 0.730 0.942 0.948 0.991 1.000 (1.812) (2.549) (2.922) (2.910) (2.984) (3.000) p“200 0.605 0.791 0.961 0.991 0.991 0.999 (2.341) (2.649) (2.940) (2.983) (2.983) (2.999) p“400 0.851 0.873 0.965 1.000 0.999 1.000 (2.749) (2.792) (2.939) (3.000) (2.998) (3.000) Table 1 presents the average relative frequencies of ˆ r0“r0and the average ˆ r0.Table 2 re- ports the numerical summaries of the corresponding estimation errors for CpAq.A few trends are observable from Tables 1 and 2. Firstly, our proposed WAuto consistently outperforms Auto across all settings, significantly improving the estimation accuracy for both r0and CpAq,and thus demonstrating the effectiveness of calibrating weight in the autocovariance- based estimation. For example, when δ0“0.75 and p“50, the average relative frequency of ˆr0“r0increases from 0.647 to 0.899, while the average D␣ CppAq,CpAq( decreases from 0.37 to 0.23. Such good performance of WAuto provides empirical evidence to validate the established asymptotic results in Propositions 1 and 2. Secondly, when the signal of
common 19 factors is weak relative to the idiosyncratics, corresponding to smaller values of δ0andp,the idiosyncratic errors exhibit cross-sectional correlations comparable to those of the factors in finite dimensions. This diminishes the capability of Cov to separate xtfromet, leading to inferior performance compared to Auto and WAuto, which achieve more effective separation. In contrast, under cases of relatively strong signals with larger values of δ0andp,Cov can often successfully identify the factors and result in the best performance, such as when δ0“1 andp“200,400.Thirdly, we observe the phenomenon of the “blessing of dimensionality” when estimating r0in the sense of improved accuracy as pincreases, which is due to the increased information from the added components on the factors. Table 2: The mean and standard deviation (in parentheses) of D␣ CppAq,CpAq( for model (11) over 1000 simulation runs. δ0“0.75 δ0“1 Cov Auto WAuto Cov Auto WAuto p“50 0.724 0.370 0.230 0.345 0.133 0.109 (0.152) (0.276) (0.165) (0.292) (0.124) (0.037) p“100 0.606 0.341 0.223 0.137 0.117 0.109 (0.244) (0.254) (0.121) (0.148) (0.066) (0.022) p“200 0.427 0.321 0.233 0.097 0.118 0.111 (0.267) (0.227) (0.107) (0.068) (0.068) (0.026) p“400 0.281 0.285 0.243 0.085 0.110 0.108 (0.201) (0.180) (0.108) (0.011) (0.030) (0.020) 5.2 The model with different factor strengths Consider the factor model yt“rArxt`rBrzt`et, tPrns, (12) 20 where the entries of rAPRpˆr0andrBPRpˆr1are sampled independently from the uni- form distributions on p´p´p1´δ0q{2, p´p1´δ0q{2qandp´p´p1´δ1q{2, p´p1´δ1q{2q, respectively. The component series of xtandztrespectively follow AR(1) and MA(1) models with indepen- dentNp0,0.22qinnovations, and the corresponding autoregressive and moving-average co- efficients drawn from the uniform distribution on p´0.95,´0.85qYp0.85,0.95q. Each com- ponent series of etis generated independently from standard normal. 
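The distance D between the column spaces of pA and A, reported in Tables 2 and 4, satisfies D² = 1 − tr(A Aᵀ pA pAᵀ)/r0 when both matrices have orthonormal columns (this identity is used in the proof of Proposition 2 in the supplementary material). A sketch, with an illustrative helper name of our own:

```python
import numpy as np

def subspace_distance(A_hat, A):
    """D{C(A_hat), C(A)} = sqrt(1 - tr(A A^T A_hat A_hat^T) / r0).

    Both inputs must have orthonormal columns; the value is 0 when the
    column spaces coincide and near 1 when they are nearly orthogonal.
    """
    r0 = A.shape[1]
    overlap = np.sum((A.T @ A_hat) ** 2)   # = tr(A A^T A_hat A_hat^T)
    return float(np.sqrt(max(0.0, 1.0 - overlap / r0)))

# Sanity checks: identical spaces give 0, orthogonal spaces give 1.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((10, 6)))
A, B = Q[:, :3], Q[:, 3:]
print(round(subspace_distance(A, A), 6))   # -> 0.0
print(round(subspace_distance(B, A), 6))   # -> 1.0
```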
Similar to the formulation in Section 5.1, we can decompose rA and rB as rA“AC 1 and rB“BC 2 such that CpAq“CprAq, ATA“Ir0, CpBq“CprBq and BTB“Ir1. By letting xt“C1rxt and zt“C2rzt, we can rewrite model (12) as yt“Axt`Bzt`et, which takes the same form as factor model (10). Under our simulation setup, the strong and weak factor series txtu and tztu can be easily verified to satisfy the corresponding factor strength requirements in Conditions 8(i), 10, and 11(i). We generate n“400 serially dependent observations of p“50,100,300 and 500 variables based on r0“3 strong factors and r1“3 weak factors with the respective strengths pδ0, δ1q“p1,1q and pδ0, δ1q“p1,0.85q. Notably, under the first scenario δ0“δ1, Cov fails to separate xt from zt according to Remark 5. Thus, the relaxed second scenario δ0ąδ1 is also considered, where Cov is able to achieve such separation. To better validate the theoretical results established in Section 4, in addition to the three competing methods, we also compare the following methods: the standard autocovariance-based method using lag 1 only (denoted as Auto1) and its weight-calibrated version (denoted as WAuto1), which estimate A and r0 by the corresponding eigenanalysis of pΩyp1qpΩyp1qT and pΩyp1qxWpΩyp1qT, and, similarly, two autocovariance-based methods using lag 2 only (respectively denoted as Auto2 and WAuto2). For a fair comparison, we set m“2 for Auto and WAuto. All simulation results are based on 1000 replications. Table 3 reports the relative frequencies of ˆr0“r0 and the average ˆr0. We observe several apparent patterns. Firstly, when strong and weak factors exhibit the same strength of cross-sectional correlations (i.e., δ0“δ1“1), Cov completely fails to estimate r0, especially for large p, often
misidentifying 6 strong factors. This aligns with Theorem 2(i). Additionally, Auto1 also fails to distinguish xtfromzteffectively, whereas Auto2 performs very well, as guaranteed by Theorem 3(i). This distinct performance arises from δ11“1 and δ12“0 fortztu, which follows a vector MA(1) process. Secondly, when δ0“δ1“1, our WAuto methods substantially improve the accuracy of estimating r0compared to Auto methods, demonstrating their undeniable advantage in distinguishing factors with the same strength of cross-sectional correlations but different strengths of serial correlations. Specifically, com- pared to Auto1, WAuto1 effectively corrects over-identified ˆ r0to three strong factors with a high proportion. This confirms the result in Theorem 4(i) and the discussion in Remark 9. Moreover, our weight-calibrated strategy also enhances the performance of Auto2. For ex- ample, when p“500,our method increases the relative frequency estimates of Ppˆr0“r0q from 0.876 to 0.966 for Auto, from 0.532 to 0.913 for Auto1, and from 0.945 to 0.987 for Auto2. Meanwhile, the average ˆ r0decreases from 3.914 to 2.972 for Auto1, and increases from 2.800 to 2.948 for Auto, and from 2.910 to 2.981 for Auto2. Thirdly, under the relaxed scenario where δ0“1 and δ1“0.85,although Cov is able to distinguish xtfromztespe- cially for large p, the proposed WAuto still outperforms the competitors across all settings, highlighting its broad empirical superiority. Table 4 presents numerical summaries for D␣ CppAq,CpAq( , revealing several notable trends. Firstly, when δ0“δ1“1, WAuto consistently achieves the most accurate recovery of the factor loading space for all settings we consider. Secondly, when δ0“1, δ1“0.85 andp“50,100,300,WAuto continues to perform the best, while for p“500,Cov leads to the highest accuracy. This aligns with the findings in Section 5.1 that Cov achieves the best performance when the signal is relatively strong. 
Thirdly, WAuto, which aggregates the dynamic information across different lags, is superior to both WAuto1 and WAuto2 in most cases, highlighting the benefit of aggregating more autocovariance information. 22 Table 3: The relative frequency estimate of Ppˆr0“r0qand the average of ˆ r0(in parentheses) for model (12) over 1000 simulation runs. Cov Auto WAuto Auto1 WAuto1 Auto2 WAuto2 δ0“1, δ 1“1 p“50 0.322 0.664 0.839 0.634 0.793 0.700 0.823 (1.983) (2.474) (2.772) (2.438) (2.771) (2.532) (2.758) p“100 0.345 0.773 0.915 0.721 0.853 0.834 0.934 (2.703) (2.664) (2.881) (2.580) (2.797) (2.752) (2.910) p“300 0.073 0.830 0.952 0.687 0.892 0.918 0.974 (5.359) (2.724) (2.926) (3.117) (2.916) (2.867) (2.961) p“500 0.019 0.876 0.966 0.532 0.913 0.945 0.987 (5.844) (2.800) (2.948) (3.914) (2.972) (2.910) (2.981) δ0“1, δ 1“0.85 p“50 0.580 0.795 0.924 0.821 0.913 0.752 0.835 (2.338) (2.681) (2.893) (2.723) (2.890) (2.617) (2.774) p“100 0.793 0.920 0.969 0.925 0.973 0.908 0.936 (2.669) (2.881) (2.958) (2.890) (2.964) (2.864) (2.910) p“300 0.945 0.977 0.990 0.976 0.992 0.980 0.984 (2.906) (2.962) (2.982) (2.960) (2.985) (2.966) (2.976) p“500 0.979 0.989 0.995 0.989 0.995 0.990 0.993 (2.964) (2.982) (2.992) (2.982) (2.992) (2.984) (2.990) 23 Table 4: The mean and standard deviation (in parentheses) of D␣ CppAq,CpAq( for model (12) over 1000 simulation runs. Cov Auto WAuto Auto1 WAuto1 Auto2 WAuto2 δ0“1, δ 1“1 p“50 0.583 0.402 0.319 0.425 0.351 0.409 0.357 (0.239) (0.241) (0.177) (0.238) (0.195) (0.220) (0.173) p“100 0.556 0.340
0.275 0.370 0.308 0.335 0.295 (0.253) (0.208) (0.134) (0.224) (0.170) (0.177) (0.114) p“300 0.687 0.312 0.252 0.384 0.281 0.288 0.266 (0.147) (0.200) (0.112) (0.247) (0.162) (0.142) (0.084) p“500 0.712 0.290 0.246 0.457 0.270 0.275 0.260 (0.077) (0.174) (0.095) (0.263) (0.146) (0.120) (0.064) δ0“1, δ 1“0.85 p“50 0.438 0.335 0.275 0.332 0.291 0.382 0.349 (0.257) (0.206) (0.128) (0.191) (0.135) (0.208) (0.170) p“100 0.320 0.267 0.245 0.267 0.248 0.298 0.290 (0.217) (0.136) (0.085) (0.131) (0.080) (0.139) (0.115) p“300 0.230 0.234 0.230 0.233 0.227 0.255 0.257 (0.131) (0.084) (0.063) (0.086) (0.059) (0.080) (0.070) p“500 0.210 0.228 0.227 0.225 0.224 0.249 0.252 (0.084) (0.061) (0.046) (0.061) (0.046) (0.059) (0.052) 24 6 Real data analysis 6.1 Daily returns for S&P 500 stocks The first dataset consists of the daily returns of S&P 500 component stocks from January 2, 2002 to December 31, 2007 encompassing n“1510 trading days. We removed stocks that were not traded on every trading day during the period, resulting in a total of p“ 375 stocks in our analysis. A similar dataset was previously analyzed using the standard autocovariance-based method in Lam and Yao (2012). Since the returns exhibit very small lag-kautocorrelations beyond k“2,we use m“2 in our estimation. With qselected by the generalized BIC in (7), we employ the ratio-based method for both Auto and WAuto, yielding the estimated number of strong factors as ˜ r0“2 and ˆ r0“1,respectively. Figures 1(a) and (b) plot the ratios of cumulative weighted eigenvalues defined in the form of (6) for Auto and WAuto, respectively. It can be observed from Figure 1(b) that there exist two or three weak factors when implementing WAuto, as the 3rd and 4th ratios are significantly larger than the 2nd and 5th ratios. To further estimate the number of weak factors and the corresponding loading matrix, we apply the two-step estimation procedure introduced by Lam and Yao (2012). 
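Before detailing the two-step procedure, note that each step's estimation amounts to an eigenanalysis of a sum of outer products of lag-k sample autocovariances. A sketch of the standard (Auto) version, in the spirit of Lam and Yao (2012); the weight matrix that WAuto inserts between the two autocovariance factors is omitted, and the function name and toy series are our own:

```python
import numpy as np

def auto_loading_estimate(Y, r0, m=2):
    """Standard autocovariance-based (Auto) estimation step.

    Forms M = sum_{k=1}^m Omega_y(k) Omega_y(k)^T from the lag-k sample
    autocovariances of the (n x p) series Y, and returns the r0 leading
    eigenvectors of M as the estimated loading matrix.
    """
    n, p = Y.shape
    Yc = Y - Y.mean(axis=0)
    M = np.zeros((p, p))
    for k in range(1, m + 1):
        Omega_k = Yc[k:].T @ Yc[:-k] / (n - k)   # lag-k sample autocovariance
        M += Omega_k @ Omega_k.T                 # non-negative definite summand
    _, eigvecs = np.linalg.eigh(M)               # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :r0]              # r0 leading eigenvectors

# Toy usage: one persistent AR(1) factor loaded on a 20-dimensional series.
rng = np.random.default_rng(2)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
a = rng.standard_normal(20)
Y = np.outer(x, a) + 0.1 * rng.standard_normal((500, 20))
A_hat = auto_loading_estimate(Y, r0=1)
print(A_hat.shape)   # -> (20, 1)
```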
Specifically, for the two-step WAuto, let pAPRpˆˆr0 be the estimate of A in (10) and pxt“pATyt be the estimated strong factors based on the original time series tytutPrns in the first step. Define the remaining time series as y˚t“yt´pApxt“yt´pApATyt for tPrns. In the second step, we apply the same estimation procedure to ty˚tunt“1, thus obtaining the estimated number of weak factors as ˆr1 and the corresponding estimated loading matrix as pBPRpˆˆr1. The resulting ratios of cumulative weighted eigenvalues in the second step for Auto and WAuto are displayed in Figures 1(c) and (d), respectively, indicating the corresponding estimated numbers of weak factors as ˜r1“1 and ˆr1“3. Figure 2 displays the heatmaps of the loading matrices for the strong and weak factors identified by Auto and WAuto, using the varimax procedure to maximize the sum of the [Figure 1 appears here: four panels plotting the eigenvalue ratios against j, (a) first step for Auto, (b) first step for WAuto, (c) second step for Auto, and (d) second step for WAuto.] Figure 1: Plots of ratios of adjacent cumulative weighted eigenvalues in the first and second steps for Auto and WAuto. With such a ratio proposed in (6)
for WAuto, rRj, R˚j and rR˚j can be defined analogously using tµkju, tλ˚kju and tµ˚kju, respectively, where tλ˚kju and tµ˚kju represent the corresponding eigenvalues obtained in the second step when applying WAuto or Auto to ty˚tunt“1. variances of the squared loadings. The varimax rotation enhances the color distinction in the heatmaps for better interpretation. The stocks are sorted according to their respective industrial sectors. It is apparent that the four factors identified by WAuto serve as the main driving force for the dynamics of certain sectors. Specifically, the 1st factor mainly influences the dynamics of Consumer Discretionary, Energy, Industrials, and Materials, as well as some stocks in Health Care. The 2nd factor has a significant impact on the dynamics of Information Technology, while Utilities is predominantly loaded on the 3rd factor. Additionally, the dynamics of Financials are driven by both the 2nd and 4th factors, whereas the dynamics of Consumer Staples are influenced by both the 3rd and 4th factors. [Figure 2 appears here: two heatmap panels, (A) Auto and (B) WAuto, with stocks grouped by industrial sector.] Figure 2: The left and right heatmaps display the varimax-rotated loadings of the first p˜r0`˜r1q and pˆr0`ˆr1q identified factors for Auto and WAuto, respectively. For Auto, although its three identified factors can also be viewed as the main driving force for the dynamics of specific sectors, its inferior interpretability is evident from the less clear sector-specific clustering structure of the factor loadings compared to WAuto, especially for the 1st factor.
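The two-step procedure described above (remove the estimated strong-factor component, then re-estimate from the residual series) can be sketched as follows; the function name and toy data are our own:

```python
import numpy as np

def two_step_residuals(Y, A_hat):
    """Second-step input series for the two-step procedure.

    Given a first-step loading estimate A_hat (p x r0_hat, orthonormal
    columns), computes the estimated strong factors x_hat_t = A_hat^T y_t
    and the residual series y*_t = y_t - A_hat A_hat^T y_t, from which
    the weak factors can be re-estimated.
    """
    X_hat = Y @ A_hat              # n x r0_hat estimated strong factors
    Y_star = Y - X_hat @ A_hat.T   # n x p residual series
    return X_hat, Y_star

# Toy check: the residuals are orthogonal to the estimated loading space.
rng = np.random.default_rng(3)
Y = rng.standard_normal((200, 15))
A_hat, _ = np.linalg.qr(rng.standard_normal((15, 2)))
X_hat, Y_star = two_step_residuals(Y, A_hat)
print(round(float(np.abs(Y_star @ A_hat).max()), 6))   # -> 0.0
```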
Moreover, the 2nd and 3rd factors have opposite impacts on the dynamics of Energy, Information Technology and Real Estate, and could potentially be merged into a single factor. In summary, WAuto provides more interpretable results for the model in the presence of factors with different strengths. 6.2 U.S. monthly macroeconomic data Our second example considers incorporating the factor model framework into macroeconomic forecasting. The dataset, provided by McCracken and Ng (2016), contains p“119 U.S. monthly macroeconomic variables covering broad economic categories from January 1960 to September 2024 (n“777). Before further analysis, each macro variable is transformed as recommended in McCracken and Ng (2016) to ensure stationarity and then standardized to have zero mean and unit variance. Following Fan et al. (2013) and Wang et al. (2019), we try ˆr0“1, 3 and 5 to check the effect of ˆr0 on the forecasting performance of WAuto, and compare it with Cov and Auto. Denote the p-vector of macroeconomic variable series as tytunt“1. For each method, the h-step-ahead forecasting procedure consists of the following four steps:
1. For a given ˆr0, apply the corresponding method to estimate model (1) based on past observations tytuT t“1, thus obtaining the estimated factor loading matrix pA.
2. Compute the ˆr0 estimated factors by pxt“pATyt for tPrTs.
3. For each jPrˆr0s, select the best ARMA model that fits tpxtjuT t“1 using the AIC criterion, based on which the h-step-ahead forecast pxT`h,j for the factor series is obtained.
4. Convert back to obtain the h-step-ahead forecast for the
original series as pyT`h“pApxT`h, where pxT`h“pˆxT`h,1, . . . , ˆxT`h,ˆr0qT. To evaluate the forecasting performance, we employ the expanding-window approach. The data is divided into a training set and a test set, consisting of the first n1 and the last n2“n´n1 observations, respectively. For a positive integer h, we apply each fitting method to the training set, obtain an h-step-ahead forecast on the test data according to the fitted model, increase the training size by one, and repeat this procedure n2´h`1 times. The h-step-ahead mean absolute forecasting error (MAFE) and mean squared forecasting error (MSFE) are given by MAFEphq“tpˆpn2´h`1qu´1 řn t“n1`h řp j“1 |pytj´ytj| and MSFEphq“tpˆpn2´h`1qu´1 řn t“n1`h řp j“1 ppytj´ytjq2, respectively. For comparison, we consider n1“727, n2“50, h“1,2,3, and ˆr0“1,3,5. Auto and WAuto are implemented using m“3. For WAuto, we select the optimal q according to the generalized BIC (8). The resulting MAFE and MSFE values are summarized in Table 5. It is evident that WAuto consistently provides the highest predictive accuracy among the three methods across all settings. Moreover, Auto outperforms Cov in many cases, particularly when a relatively small ˆr0 or large h is considered. Overall, our proposed WAuto effectively captures the dynamical information, thereby achieving the best forecasting performance.

Table 5: The MAFEs and MSFEs of the proposed WAuto and the two competing methods. The lowest values are in bold font.

                      MAFE                       MSFE
               Cov    Auto   WAuto        Cov    Auto   WAuto
h“1  ˆr0“1    0.715  0.689  0.685       1.303  1.177  1.167
     ˆr0“3    0.723  0.716  0.693       1.318  1.325  1.204
     ˆr0“5    0.712  0.717  0.683       1.297  1.342  1.206
h“2  ˆr0“1    0.693  0.678  0.674       1.172  1.139  1.132
     ˆr0“3    0.701  0.697  0.679       1.184  1.236  1.148
     ˆr0“5    0.699  0.700  0.671       1.179  1.247  1.152
h“3  ˆr0“1    0.696  0.679  0.677       1.186  1.143  1.140
     ˆr0“3    0.701  0.678  0.672       1.192  1.138  1.130
     ˆr0“5    0.699  0.680  0.666       1.189  1.143  1.138

7 Discussion

We identify several important directions for future research.
The first topic involves developing inferential theory for our proposed weight-calibrated autocovariance-based es- timation. The existing literature on the limiting distributions of relevant estimators for the factor models primarily focuses on the covariance-based method (Bai, 2003). However, even for the standard autocovariance-based method, the corresponding inferential theory 29 is largely untouched. Motivated by the formulation of our proposal from a reduced-rank autoregression perspective, we may potentially leverage the existing inferential results in reduced-rank regression to investigate the limiting distributions of relevant estimated quan- tities. For some heuristic distributional analysis under strong assumptions, see Section C.3 of the supplementary material. The second topic considers developing the weight-calibrated autocovariance-based method to improve the estimation for high-dimensional matrix factor models. In the autocovariance- based estimation (e.g., Wang et al., 2019), non-negative definite matrices can be constructed using respective weight matrices, and their eigenpairs can be used to estimate the numbers of factors and the factor loading spaces. Specifically, consider a stationary matrix-valued time seriestYtutPZof size p1ˆp2satisfying the factor model Yt“RX tCT`EtfortPrns,where the latent matrix-valued factor time series tXtuis of size d1ˆd2. For lag kPrms,define Ωp1q y,ijpkq“Covpyt,¨i,yt´k,¨jqfori, jPrp2s, and Ωp2q y,i1j1pkq“Covpyt,i1¨,yt´k,j1¨qfori1, j1Prp1s, where yt,¨jandyt,j1¨are the j-th column and the j1-th row of Yt,respectively. Let pΩp1q y,ijpkq andpΩp2q y,i1j1pkqbe the corresponding sample estimates. Define xMp1q“mÿ k“1p2ÿ i“1p2ÿ j“1pΩp1q y,ijpkqxWp1q jpΩp1q y,ijpkqT,andxMp2q“mÿ k“1p1ÿ i1“1p1ÿ j1“1pΩp2q y,i1j1pkqxWp2q j1pΩp2q y,i1j1pkqT, wherexWp1q
j“Q1j␣ QT 1jpΩp1q y,jjp0qQ1j(´1QT 1j, andxWp2q j1“Q2j1␣ QT 2j1pΩp2q y,j1j1p0qQ2j1(´1QT 2j1. The expressions of xMp1qandxMp2qare derived in a reduced-rank autoregression formulation for matrix-valued time series; see Section C.4 of the supplementary material. The row (or column) factor loading space CpRq(orCpCq) can then be estimated using the d1(ord2) leading eigenvectors of xMp1q(orxMp2q). Analogous to the formulation in Section 2.1, we can specify Q1j(orQ2j1) as the q1(orq2) leading eigenvectors of pΩp1q y,jjp0q(orpΩp2q y,j1j1p0q). Last but not least, we expect that our weight-calibrated idea can also be applied to improve the autocovariance-based estimation of factor models for high-dimensional tensor time series (e.g., Chen et al., 2022). The above topics are beyond the scope of the current paper and will be pursued elsewhere. 30 References Ahn, S. C. and Horenstein, A. R. (2013). Eigenvalue ratio test for the number of factors, Econometrica 81(3): 1203–1227. Anatolyev, S. and Mikusheva, A. (2022). Factor models with many assets: Strong factors, weak factors, and the two-pass procedure, Journal of Econometrics 229(1): 103–126. Ando, T. and Bai, J. (2017). Clustering huge number of financial time series: A panel data approach with high-dimensional predictors and factor structures, Journal of the American Statistical Association 112(519): 1182–1198. Bai, J. (2003). Inferential theory for factor models of large dimensions, Econometrica 71(1): 135–171. Bai, J. and Li, K. (2012). Statistical analysis of factor models of high dimension, The Annals of Statistics 40(1): 436–465. Bai, J. and Ng, S. (2002). Determining the number of factors in approximate factor models, Econometrica 70(1): 191–221. Barigozzi, M. and Hallin, M. (2024). Dynamic factor models: A genealogy, Partial Identifi- cation in Econometrics and Related Topics , Springer, pp. 3–24. Chang, J., He, J., Yang, L. and Yao, Q. (2023). 
Modelling matrix time series via a tensor CP- decomposition, Journal of the Royal Statistical Society Series B: Statistical Methodology 85(1): 127–148. Chen, B., Han, Y. and Yu, Q. (2024). Estimation and inference for CP tensor factor models, arXiv preprint arXiv:2406.17278 . Chen, E. Y. and Fan, J. (2023). Statistical inference for high-dimensional matrix-variate factor models, Journal of the American Statistical Association 118(542): 1038–1055. Chen, E. Y., Tsay, R. S. and Chen, R. (2020). Constrained factor models for high-dimensional matrix-variate time series, Journal of the American Statistical Association 115(530): 775– 793. 31 Chen, R., Yang, D. and Zhang, C.-H. (2022). Factor models for high-dimensional tensor time series, Journal of the American Statistical Association 117(537): 94–116. Chen, W. and Lam, C. (2024). Rank and factor loadings estimation in time series tensor factor model by pre-averaging, The Annals of Statistics 52(1): 364–391. Chudik, A., Pesaran, M. H. and Tosetti, E. (2011). Weak and strong cross-section dependence and estimation of large panels, The Econometrics Journal 14(1): C45–C90. Fan, J., Liao, Y. and Mincheva, M. (2013). Large covariance estimation by thresholding prin- cipal orthogonal complements, Journal of the Royal Statistical Society Series B: Statistical Methodology 75(4): 603–680. Fan, J., Liu, H. and Wang, W. (2018). Large covariance estimation through elliptical factor models, The Annals of Statistics 46(4): 1383–1414. Fan, Y. and Tang, C. Y. (2013). Tuning parameter selection in high dimensional penal- ized likelihood, Journal of the
Royal Statistical Society Series B: Statistical Methodology 75(3): 531–552. Forni, M., Hallin, M., Lippi, M. and Reichlin, L. (2000). The generalized dynamic-factor model: Identification and estimation, The Review of Economics and Statistics 82(4): 540– 554. Gao, Z. and Tsay, R. S. (2022). Modeling high-dimensional time series: A factor model with dynamically dependent factors and diverging eigenvalues, Journal of the American Statistical Association 117(539): 1398–1414. Han, Y., Chen, R., Yang, D. and Zhang, C.-H. (2024). Tensor factor model estimation by iterative projection, The Annals of Statistics 52(6): 2641–2667. Han, Y., Yang, D., Zhang, C.-H. and Chen, R. (2024). CP factor model for dynamic tensors, Journal of the Royal Statistical Society Series B: Statistical Methodology 86(5): 1383–1413. Lam, C. and Yao, Q. (2012). Factor modeling for high-dimensional time series: Inference for the number of factors, The Annals of Statistics 40(2): 694–726. 32 Lam, C., Yao, Q. and Bathia, N. (2011). Estimation of latent factors for high-dimensional time series, Biometrika 98(4): 901–918. McCracken, M. W. and Ng, S. (2016). FRED-MD: A monthly database for macroeconomic research, Journal of Business & Economic Statistics 34(4): 574–589. Pe˜ na, D. and Poncela, P. (2006). Nonstationary dynamic factor analysis, Journal of Statis- tical Planning and Inference 136(4): 1237–1257. Reinsel, G. C., Velu, R. P. and Chen, K. (2022). Multivariate Reduced-Rank Regression: Theory, Methods and Applications , Vol. 225, Springer Nature. Stock, J. H. and Watson, M. W. (2002). Forecasting using principal components from a large number of predictors, Journal of the American Statistical Association 97(460): 1167–1179. Wang, D., Liu, X. and Chen, R. (2019). Factor models for matrix-valued high-dimensional time series, Journal of Econometrics 208(1): 231–248. Wang, T. and Zhu, L. (2011). 
Consistent tuning parameter selection in high dimensional sparse linear regression, Journal of Multivariate Analysis 102(7): 1141–1151. Yu, L., He, Y., Kong, X. and Zhang, X. (2022). Projected estimation for large-dimensional matrix factor models, Journal of Econometrics 229(1): 201–217. Zhang, B., Pan, G., Yao, Q. and Zhou, W. (2024). Factor modeling for clustering high- dimensional time series, Journal of the American Statistical Association 119(546): 1252– 1263. 33 Supplementary Material to “Weight-calibrated estimation for factor models of high-dimensional time series” Xinghao Qiao, Zihan Wang, Qiwei Yao, and Bo Zhang This supplementary material contains technical proofs of theoretical results in Sections A and B, and further methodology and derivations in Section C. Throughout, let n´1{2Y“n´1{2py1, . . . ,ynqT“rΞΘ1{2ΞT“řp j“1θ1{2 jrξjξT jbe the nˆp matrix, where θ1{2 1ě ¨¨¨ ě θ1{2 pě0 are the singular values of n´1{2Y.Without loss of generality, we assume that tytutPZ,txtutPZandtztutPZhave been centered to have mean zero. Then,pΩy“n´1YTY“řp j“1θjξjξT j,andn´1YYT“řp j“1θjrξjrξT j.Moreover, we define a nˆnmatrix Dk“ tDksjunˆnwith Dksj“Ips“k`jqfors, jP rns, where Ip¨qis the indicator function. Then, it follows that pΩypkq“pn´kq´1YTDkY,and pΩypkqxWpΩypkqT“pΩypkqΞqΘ´1 qΞT qpΩypkqT, (S.1) where Ξq“pξ1, . . . ,ξqqis apˆqmatrix, and Θq“diagpθ1, . . . , θ qq. A Proofs of theoretical results in Section 3 LetX“px1, . . . ,xnqTandE“pe1, . . . ,enqT. Then, model (1) can be rewritten in the matrix form as Y“XAT`E.We first introduce two useful lemmas that provide upper bounds on the perturbation of matrix eigenvalues and eigenvectors,
and are used multiple times in the proofs of this paper. Let tλjujPrpsbe the eigenvalues of ΣPRpˆpin a descending order andtξjujPrpsbe the corresponding eigenvectors. Similarly, trλjujPrpsandtrξjujPrpsare the corresponding eigenvalues and eigenvectors of rΣPRpˆp,respectively. Lemma S.1. (Weyl’s theorem; Wely (1912)). |rλj´λj|ď}rΣ´Σ}forjPrps. 1 Lemma S.2. (A useful variant of sinpθqtheorem; Yu et al. (2015)). If rξT jξjě0forjPrps, then we have }rξj´ξj}ď}rΣ´Σ}{? 2 |rλj´1´λj|^|λj´rλj`1|. A.1 Proof of Theorem 1 Letλk1쨨¨ě λkqě0 be the eigenvalues of pΩypkqxWpΩypkqT, with the corresponding spectral decomposition pΩypkqxWpΩypkqT“qÿ i“1λkiϕkiϕT ki“Φk,AΛk,AΦT k,A`Φk,EΛk,EΦT k,E, (S.2) where Λk,A“diagpλk1, . . . , λ kr0q,Λk,E“diagpλk,r0`1, . . . , λ kqq,Φk,A“pϕk1, . . . ,ϕkr0q, and Φk,E“pϕk,r0`1, . . . ,ϕkqq.To show Theorem 1, we first introduce two technical lemmas. Lemma S.3. ForqPpr0, p^ns, recall that n´1{2Y“řp j“1θ1{2 jrξjξT j, and θ1쨨¨ě θqě0 are the eigenvalues of pΩy. Let ΞA“pξ1, . . . ,ξr0q. Then, under Conditions 1–4, θ1—θr0— pδ0with probability tending to 1, θr0`1—θq—n´1pwith probability tending to 1, and ››ΞAΞT A´AAT››“Oppn´1{2pp1´δ0q{2q. (S.3) Proof. Note that θ1{2 1쨨¨ě θ1{2 qare the qlargest singular values of n´1{2Y“n´1{2XAT` n´1{2E. By Conditions 2–3, n´1{2XAThasr0singular values with the order pδ0{2with probability tending to 1. By Condition 4 and Remark 2, the singular values of Esatisfy that σ1pEq—σtcpp^nqupEq—n1{2`p1{2with probability tending to 1. Since rank pY´Eq“r0, we have θ1—θr0—pδ0,andθr0`1—θq—n´1pwith probability tending to 1. By Condition 2(i), txtuandtetuare uncorrelated, then }n´1AXTE}“Oppn´1{2p1{2`δ0{2q by Conditions 1–4. Considering that the r0largest eigenvalues of n´1AXTXATare of order pδ0with probability tending to 1. Thus, we conclude (S.3), which completes the proof. Lemma S.4. Letµk1쨨¨ě µkpě0be the eigenvalues of pΩypkqpΩypkqT. Rewrite pΩypkqas pΩypkq“řp i“1µ1{2 kiψkirψT ki“Ψk,AN1{2 k,ArΨT k,A`Ψk,EN1{2 k,ErΨT k,E,where Nk,A“diagpµk1, . . 
. , µ kr0q, 2 andNk,E“diagpµk,r0`1, . . . , µ kpq. Then, under Conditions 1–5, for qP pr0, p^nsand kPrms,we have µk1—µkr0—p2δ0, µk,r0`1—µkq—n´2p2with probability tending to 1, and ››Ψk,AΨT k,A´AAT››“Oppn´1{2pp1´δ0q{2q, (S.4) ››rΨk,ArΨT k,A´AAT››“Oppn´1{2pp1´δ0q{2q. (S.5) Proof. We only prove the results µk,r0`1—µkq—n´2p2with probability tending to 1, as the others can be obtained by similar procedures to those in the proof of Lemma S.3. Since }E}“Oppn1{2`p1{2q, it can be shown that µk,r0`1“Oppn´2p2q. Recall that Dk“tDksjunˆn andpΩypkq“pn´kq´1YTDkY.Then, we have pÿ i“r0`1µki“trtpIp´Ψk,AΨT k,AqpΩypkqpΩypkqTpIp´Ψk,AΨT k,Aqu “pn´kq´2trtpIp´Ψk,AΨT k,AqYTDkYYTDT kYpIp´Ψk,AΨT k,Aqu “pn´kq´2trtpIp´Ψk,AΨT k,AqpAXT`ETqDkpXAT`Eq ˆpAXT`ETqDT kpXAT`EqpIp´Ψk,AΨT k,Aqu ěpn´kq´2}pIp´Ψk,AΨT k,AqpAXT`ETqDkEpIp´AATq}2 F with probability tending to 1. Note that Dkcan be rewritten as Dk“řn´k i“1sk`isT i, where siis then-dimensional vector with the j-th element Iti“ju. Then, the singular values of Rand GTDkGin Condition 4 imply that pn´kq´2}pIp´Ψk,AΨT k,AqETDkEpIp´AATq}2 F—n´1p2 with probability tending to 1, and pn´kq´2}pIp´Ψk,AΨT k,AqAXTDkEpIp´AATq}2 Fď r0pn´kq´2}pIp´Ψk,AΨT k,AqA}2 F}XTDkE}2 F“oppn´1p2q.Thus, there exists some constant C0ą0 such thatřp i“r0`1µkiěC0n´1p2with probability tending to 1, which, together with n“Oppqby Condition 3(ii) and µk,r0`1“Oppn´2p2q, completes the proofs. Now we are ready to prove Theorem 1. (i) To study the asymptotic properties of λk1ěλk2ě ¨¨¨ ě λkqě0, we consider the singular values of pΩypkqxW1{2. Note that pΩypkqxW1{2“pΨk,AN1{2 k,ArΨT k,A`Ψk,EN1{2 k,ErΨT k,Eq´qÿ j“1θ´1{2 jξjξT j¯ , 3 where θj,ξj,Ψk,A,N1{2 k,A, andrΨk,Aare defined in Lemmas S.3 and S.4. It can be shown that rank␣ Ψk,AN1{2 k,ArΨT k,Apřq j“1θ´1{2 jξjξT jq( “r0with probability tending to 1, and the eigenval- ues of N1{2 k,ArΨT k,Apřq j“1θ´1{ jξjξT jqrΨk,AN1{2 k,Aare not smaller
https://arxiv.org/abs/2505.01357v2
than the corresponding ordered eigenvalues of N1{2 k,ArΨT k,Apřr0 j“1θ´1 jξjξT jqrΨk,AN1{2 k,A.Then, by combining (S.3) and (S.5) in Lemmas S.3 and S.4, it follows that ›››rΨT k,A´r0ÿ j“1θ´1 jξjξT j¯ rΨk,A›››—›››rΨT k,A´r0ÿ j“1θ´1 jξjξT j¯ rΨk,A››› min—p´δ0 with probability tending to 1, and ›››rΨT k,A´ qÿ j“r0`1θ´1 jξjξT j¯ rΨk,A›››“Oppn´1p1´δ0¨np´1q“Oppp´δ0q. Thus, we have ›››N1{2 k,ArΨT k,A´r0ÿ j“1θ´1 jξjξT j¯ rΨk,AN1{2 k,A›››—›››N1{2 k,ArΨT k,A´r0ÿ j“1θ´1 jξjξT jqrΨk,AN1{2 k,A››› min—pδ0 with probability tending to 1, and ›››N1{2 k,ArΨT k,A´ qÿ j“r0`1θ´1 jξjξT j¯ rΨk,AN1{2 k,A›››“Oppp2δ0¨p´δ0q“Opppδ0q, which together shows that ›››N1{2 k,ArΨT k,A´qÿ j“1θ´1 jξjξT j¯ rΨk,AN1{2 k,A›››—›››N1{2 k,ArΨT k,A´qÿ j“1θ´1 jξjξT j¯ rΨk,AN1{2 k,A››› min—pδ0(S.6) with probability tending to 1. For λkjwith jąr0,by Lemmas S.3 and S.4, we have ›››N1{2 k,ErΨT k,E´qÿ j“1θ´1 jξjξT j¯ rΨk,EN1{2 k,E›››“Oppθk,r0`1θ´1 qq“Oppn´1pq. (S.7) Hence, it follows that λk1—λkr0—pδ0with probability tending to 1 by combining (S.6) and (S.7), and λk,r0`1“Oppn´1pq“opppδ0qsince p1´δ0“opnqby Condition 3(ii). (ii) Notice that }pΩypkqxWpΩypkqT´Ψk,AN1{2 k,ArΨT k,AxWrΨk,AN1{2 k,AΨT k,A} ď}Ψk,EN1{2 k,ErΨT k,ExWrΨk,EN1{2 k,EΨT k,E}`2}Ψk,AN1{2 k,ArΨT k,AxWrΨk,EN1{2 k,EΨT k,E} “Oppn´1p`n´1{2pp1`δ0q{2q“Oppn´1{2pp1`δ0q{2q 4 by using (S.6) and (S.7). Then by Lemma S.2, we have }Φk,AΦT k,A´Ψk,AΨT k,A}“Oppn´1{2pp1´δ0q{2q, which, together with (S.4) in Lemma S.4, obtains that››Φk,AΦT k,A´AAT››“Oppn´1{2pp1´δ0q{2q and completes the proof of Theorem 1. A.2 Proof of Proposition 1 By Theorem 1(i), and the conditions ϑnp´δ0Ñ0 and ϑnÁn´1pas imposed in Proposi- tion 1, we have Rj—1 with probability tending to 1 for jďr0´1 and r0`1ďjďq´1, where Rjis defined in (6). Moreover, Rr0—pδ0ϑ´1 nÑ8 with probability tending to 1. Thus, Rr0ąmax 1ďj‰r0ďq´1Rjwith probability tending to 1, which implies that Ppˆr0“r0qÑ1 as p, nÑ8. 
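Proposition 1 selects the number of factors as the maximizer of the eigenvalue ratio Rj from (6). As a rough numerical illustration, here is a minimal numpy sketch of such an eigenvalue-ratio selector; the function name, the truncation level `qmax`, and the synthetic spectrum are our own illustrative choices, not objects defined in the paper.

```python
import numpy as np

def ratio_estimator(eigvals, qmax):
    """Pick the index maximizing the ratio of successive ordered
    eigenvalues, lam_j / lam_{j+1}, over j = 1, ..., qmax."""
    lam = np.sort(np.asarray(eigvals))[::-1][:qmax + 1]
    ratios = lam[:-1] / lam[1:]
    return int(np.argmax(ratios)) + 1  # 1-based index of the largest ratio

# Toy spectrum: three "factor" eigenvalues well separated from a bulk of
# small "idiosyncratic" ones, mimicking the spiked structure in Theorem 1(i).
rng = np.random.default_rng(0)
lam = np.concatenate([[50.0, 40.0, 30.0], rng.uniform(0.5, 1.0, size=20)])
print(ratio_estimator(lam, qmax=8))  # -> 3
```

In the paper's setting, the ratio at j = r0 diverges (at rate pδ0ϑn⁻¹ in the notation of Proposition 1) while the other ratios stay bounded, which is what makes the maximizer consistent.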
A.3 Proof of Proposition 2 Letλ1ě ¨¨¨ ě λqě0 be the eigenvalues of xM“řm k“1pΩypkqxWpΩypkqTwith the corresponding eigenvectors ϕ1, . . . ,ϕq,andpA“pϕ1, . . . ,ϕˆr0q. Denote rA“pϕ1, . . . ,ϕr0qas the estimator of Awith the known value of r0.Note that xM“řm k“1pΩypkqxWpΩypkqT,where each term on the right hand is non-negative. Thus, by Theorem 1(i), we immediately obtain that λ1—λr0—pδ0with probability tending to 1, and λr0`1“Oppn´1pq. By Lemma S.2 and similar procedures to the proof of Theorem 1(ii), it can be shown that there exists an orthogonal matrix UPRr0ˆr0such that}rA´AU}“Oppn´1{2pp1´δ0q{2q, and thus ››rArAT´AAT››“Oppn´1{2pp1´δ0q{2q. (S.8) With the aid of weak consistency of ˆ r0in Proposition 1, the convergence rate of pAwith the estimated ˆ r0is the same as rAwith the known r0. To see this, let ϱn“n´1{2pp1´δ0q{2. It 5 holds that for any constant ˜ cą0 that P` ϱ´1 n}pApAT´AAT}ą˜c˘ ďP` ϱ´1 n}pApAT´AAT}ą˜c|ˆr0“r0˘ `Ppˆr0‰r0q ďP` ϱ´1 n}rArAT´AAT}ą˜c˘ `op1q,(S.9) which, together with (S.8), shows that››pApAT´AAT››“Oppϱnq“Oppn´1{2pp1´δ0q{2q. For the measure of the distance between two columns spaces, it follows from the orthog- onality of columns of rAandAthat D2␣ CprAq,CpAq( “1´r´1 0trpAATrArATq“r´1 0trpATA´AATrArATq “r´1 0trtUTATpIp´rArATqAUu “r´1 0” trtUTATpIp´rArATqAUu´trtUTATpIp´AATqAUuı “r´1 0trtUTATpAAT´rArATqAUu ď}UTATpAAT´rArATqAU} “}pAU´rAqTpAU´rAq´UTATpAU´rAqpAU´rAqTAU} ď2}rA´AU}2“Oppn´1p1´δ0q, which implies that D␣ CprAq,CpAq( “Oppn´1{2pp1´δ0q{2q. By using the similar arguments to (S.9), we have D␣ CppAq,CpAq( “Oppn´1{2pp1´δ0q{2q. The proof is complete. B Proofs of theoretical results in Section 4 Define X“px1, . . . ,xnqT,Z“pz1, . . . ,znqTandE“pe1, . . . ,enqT. Then model (10) can be expressed as Y“XAT`ZBT`E. B.1 Proof of Theorem 2 Letθ1쨨¨ě θpě0 be the eigenvalues of pΩy, with the corresponding
spectral decom- position written as pΩy“pÿ i“1θiξiξT i“ΞAΘAΞT A`ΞBΘBΞT B`ΞEΘEΞT E, 6 where ΘA“diagpθ1, . . . , θ r0q,ΘB“diagpθr0`1, . . . , θ rq,ΘE“diagpθr`1, . . . , θ pq,ΞA“ pξ1, . . . ,ξr0q,ΞB“pξr0`1, . . . ,ξrq,andΞE“pξr`1, . . . ,ξpq. (i) Firstly, θ1{2 1쨨¨ě θ1{2 tc1pp^nquě0 are the tc1pp^nqulargest singular values of n´1{2Y“ n´1{2XAT`n´1{2ZBT`n´1{2E. By Conditions 7–9, n´1{2XAThasr0singular values of order pδ0{2with probability tending to 1, }n´1{2ZBT}“Opppδ1{2q“opppδ0{2q,and}n´1{2E}“ Oppn´1{2p1{2q“opppδ0{2q,which together derive θ1—θr0—pδ0with probability tending to 1. Secondly, consider that n´1{2YpIp´ΞAΞT Aq “n´1{2pXAT`ZBT`EqpIp´ΞAΞT Aq “ n´1{2pZBT`EqpIp´AATq`n´1{2pXAT`ZBT`EqpAAT´ΞAΞT Aq.Condition 6 and Lemma S.1 together ensure that }BTpIp´AATq}mině}B}min´}AATB}ą0, which further implies that }n´1{2pZBT`EqpIp´AATq}—} n´1{2pZBT`EqpIp´AATq}min—pδ1{2 with probability tending to 1. By Theorem 2(ii), which will be proved later, and Condi- tions 6–9, we have }n´1{2pXAT`ZBT`EqpAAT´ΞAΞT Aq} “ Oppn´1{2p1{2`pδ1´δ0{2q “ opppδ1q,which implies that }n´1{2YpIp´ΞAΞT Aq´n´1{2pZBT`EqpIp´AATq}“ opppδ1q. Note that θ1{2 r0`1쨨¨ě θ1{2 qě0 are the q´r0largest singular values of n´1{2YpIp´ΞAΞT Aq. Thus, we have θr0`1—θr—pδ1with probability tending to 1. Thirdly, by Condition 9 and Remark 2, the singular values of Esatisfy that and σ1pEq— σtc1pp^nqupEq—n1{2`p1{2with probability tending to 1. Since rank pY´Eq“r, we conclude thatθr`1—θq—n´1pwith probability tending to 1. Then, the proof is complete. (ii) By Condition 7(i), txtutPZ,tztutPZandtetutPZare mutually uncorrelated. Then, by Conditions 6–9, we have }n´1AXTZBT}ď}n´1XTZ}“Oppn´1{2pδ1{2`δ0{2q“Oppn´1{2p1{2`δ0{2q, }n´1AXTE}ď}n´1XTE}“Oppn´1{2p1{2`δ0{2q, }n´1BZTZBT}“Opppδ1q,}n´1ETE}“Oppn´1pq, which together imply that }n´1YTY´n´1AXTXAT}“Oppn´1{2p1{2`δ0{2`pδ1q.Consider 7 that the r0largest eigenvalues of n´1AXTXATare of order pδ0with probability tending to 1. 
Thus, by using Lemma S.2, we conclude››ΞAΞT A´AAT››“Oppn´1{2pp1´δ0q{2`pδ1´δ0q. B.2 Proof of Theorem 3 Letµk1ě ¨¨¨ ě µkpě0 be the eigenvalues of pΩypkqpΩypkqT, with the corresponding spectral decomposition pΩypkqpΩypkqT“pÿ i“1µkiψkiψT ki“Ψk,ANk,AΨT k,A`Ψk,BNk,BΨT k,B`Ψk,ENk,EΨT k,E, where Nk,A“diagpµk1, . . . , µ kr0q,Nk,B“diagpµk,r0`1, . . . , µ krq,Nk,E“diagpµk,r`1, . . . , µ kpq, Ψk,A“pψk1, . . . ,ψkr0q,Ψk,B“pψk,r0`1, . . . ,ψkrq, and Ψk,E“pψk,r`1, . . . ,ψkpq. To show Theorem 3, we first introduce a technical lemma. Lemma S.5. Suppose that Conditions 6–11 hold and µk1—µkr0—p2δ0with probability tending to 1. Then, for kPrms, }pIp´Ψk,AΨT k,AqpΩypkqΨk,AΨT k,ApΩypkqTpIp´Ψk,AΨT k,Aq}“oppp2δ1kq. (S.10) Proof. Consider that pIp´Ψk,AΨT k,AqpΩypkqΨk,AΨT k,ApΩypkqTpIp´Ψk,AΨT k,Aq “Ψk,BΨT k,BpΩypkqΨk,AΨT k,ApΩypkqTΨk,BΨT k,B`Ψk,BΨT k,BpΩypkqΨk,AΨT k,ApΩypkqTΨk,EΨT k,E `Ψk,EΨT k,EpΩypkqΨk,AΨT k,ApΩypkqTΨk,BΨT k,B`Ψk,EΨT k,EpΩypkqΨk,AΨT k,ApΩypkqTΨk,EΨT k,E. (S.11) We first derive the order of the first term in (S.11), i.e., the order of ΨT k,BpΩypkqΨk,A. To this end, by the properties of eigenvectors, we have 0“ΨT k,BpΩypkqpΩypkqTΨk,A “ΨT k,BpΩypkqΨk,AΨT k,ApΩypkqTΨk,A`ΨT k,BpΩypkqΨk,BΨT k,BpΩypkqTΨk,A `ΨT k,BpΩypkqΨk,EΨT k,EpΩypkqTΨk,A. The orders of each term are calculated as follows: 8 (i)}ΨT k,ApΩypkqTΨk,A} — } ΨT k,ApΩypkqTΨk,A}min—pδ0with probability tending to 1, by noticing that µk1—µkr0—p2δ0, rank` ΨT k,ApΩypkqTΨk,A˘ “r0, and ΨT k,ApΩypkqTΨk,A“ ΨT k,ArΨk,AN1{2 k,A“ΨT k,AN1{2 k,ArΨk,A. (ii)}ΨT k,BpΩypkqΨk,B}—pδ1kwith probability tending to 1, by noticing that ΨT k,BpΩypkqΨk,B“ΨT k,BpAXT`BZT`ETqDkpXAT`ZBT`EqΨk,B “ΨT k,BpIp´Ψk,AΨT k,AqpAXT`BZT`ETqDkpXAT`ZBT`EqpIp´Ψk,AΨT k,AqΨk,B “ΨT k,BpBZT`ETqDkpZBT`EqΨk,B `ΨT k,BpAAT´Ψk,AΨT k,AqAXTDkXATpAAT´Ψk,AΨT k,AqΨk,B `ΨT k,BpAAT´Ψk,AΨT k,AqAXTDkpZBT`EqΨk,B `ΨT k,BpBZT`ETqDkXATpAAT´Ψk,AΨT k,AqΨk,B. 
(iii)}ΨT k,BpΩypkqTΨk,A}“Opppδ1k`n´1{2pp1`δ0q{2q, by noticing that ΨT k,ApΩypkqΨk,B“ΨT ApAXT`BZT`ETqDkpXAT`ZBT`EqΨk,B “ΨT k,ApAXT`BZT`ETqDkpXAT`ZBT`EqpIp´Ψk,AΨT k,AqΨk,B “ΨT k,ApAXT`BZT`ETqDkpZBT`EqΨk,B `ΨT k,ApAXT`BZT`ETqDkXATpAAT´Ψk,AΨT k,AqΨk,B. (iv)}ΨT k,BpΩypkqΨk,E}“Opppδ1kq,by noticing that ΨT k,BpΩypkqΨk,E“ΨT k,BpAXT`BZT`ETqDkpXAT`ZBT`EqΨk,E “ΨT k,BpIp´Ψk,AΨT k,AqpAXT`BZT`ETqDkpXAT`ZBT`EqpIp´Ψk,AΨT k,AqΨk,E “ΨT k,BpBZT`ETqDkpZBT`EqΨk,E `Ψk,BBTpAAT´Ψk,AΨT k,AqAXTDkXATpAAT´Ψk,AΨT k,AqΨk,E `ΨT k,BpAAT´Ψk,AΨT k,AqAXTDkpZBT`EqΨk,E `ΨT k,BpBZT`ETqDkXATpAAT´Ψk,AΨT k,AqΨk,E. 9 (v)}ΨT k,EpΩypkqTΨk,A}“Opppδ1k`n´1{2pp1`δ0q{2q,by noticing that ΨT k,ApΩypkqΨk,E“ΨT k,ApAXT`BZT`ETqDkpXAT`ZBT`EqΨk,E “ΨT k,ApAXT`BZT`ETqDkpXAT`ZBT`EqpIp´Ψk,AΨT k,AqΨk,E “ΨT k,ApAXT`BZT`ETqDkpZBT`EqΨk,B `ΨT k,ApAXT`BZT`ETqDkXATpAAT´Ψk,AΨT k,AqΨk,E. Combing above, we have }ΨT k,BpΩypkqΨk,A}“Oppp2δ1k´δ0`n´1{2pp1´δ0`2δ1kq{2q“opppδ1kq. (S.12) Similar procedures can
be used to prove the other three terms of (S.11). Thus, we have }pIp´Ψk,AΨT k,AqpΩypkqΨk,AΨT k,ApΩypkqTpIp´Ψk,AΨT k,Aq}“oppp2δ1kq, which concludes (S.10). The proof is complete. Now we are ready to prove Theorem 3. The proofs of µk1—µkr0—p2δ0in Theorem 3(i) and››Ψk,AΨT k,A´AAT››“Oppn´1{2pp1´δ0q{2`pδ1k´δ0qin Theorem 3(ii) are similar to those of Theorem 1 in Zhang et al. (2024), and thus are omitted. It suffices to show that µk,r0`1— µkr—p2δ1k, and µk,r`1—µk,tc1pp^nqu—n´2p2with probability tending to 1. By definitions, we have Ψk,BNk,BΨT k,B`Ψk,ENk,EΨT k,E“pIp´Ψk,AΨT k,AqpΩypkqpΩypkqTpIp´Ψk,AΨT k,Aq “ pIp´Ψk,AΨT k,AqpΩypkqpIp´Ψk,AΨT k,AqpΩypkqTpIp´Ψk,AΨT k,Aq (S.13) `pIp´Ψk,AΨT k,AqpΩypkqΨk,AΨT k,ApΩypkqTpIp´Ψk,AΨT k,Aq. (S.14) Since both (S.13) and (S.14) are non-negative definite, the rank of each term is not larger than p´r0. Recall that pΩypkq“pn´kq´1YTDkY. Then we rewrite (S.13) as pIp´Ψk,AΨT k,AqpΩypkqpIp´Ψk,AΨT k,AqpΩypkqTpIp´Ψk,AΨT k,Aq “pn´kq´2pIp´Ψk,AΨT k,AqYTDkYpIp´Ψk,AΨT k,AqpYTDkYqTpIp´Ψk,AΨT k,Aq :“pÿ i“r0`1sµkisψkisψT ki“sΨk,BsNk,BsΨT k,B`sΨk,EsNk,EsΨT k,E(S.15) 10 withsNk,B“diagpsµk,r0`1, . . . ,sµkrq,sNk,E“diagpsµk,r`1, . . . ,sµkpq,sΨk,B“psψk,r0`1, . . . ,sψkrq, andsΨk,E“ psψk,r`1, . . . ,sψkpq, wheresµ1{2 k,r0`1ě ¨¨¨ ě sµ1{2 kpare the singular values of pn´ kq´1pIp´Ψk,AΨT k,AqYTDkYpIp´Ψk,AΨT k,Aq.Define PA,B“pA,BqtpA,BqTpA,Bqu´1pA,BqT, and rewrite it as PA,B´AAT“B1BT 1withBT 1B1“Ir1(see Lemma S.6 below for the well- definedness of B1). Note that pIp´Ψk,AΨT k,AqYT“pIp´Ψk,AΨT k,AqpAXT`BZT`ETq “pAAT´Ψk,AΨT k,AqpAXT`BZTq`B1BT 1BZT`pIp´Ψk,AΨT k,AqET. The orders of each term are calculated as follows: (i)pn´kq´1}E}2“Oppn´1pq“opppδ1kq. (ii)}pn´kq´1B1BT 1BZTDkE}“Oppn´1{2pp1`δ1q{2q“opppδ1kq. (iii)}pn´kq´1B1BT 1BZTDkZBTB1BT 1} — }p n´kq´1B1BT 1BZTDkZBTB1BT 1}min—pδ1k with probability tending to 1. (iv)}pn´kq´1pAAT´Ψk,AΨT k,AqpAXT`BZTqDkpXAT`ZBTqpAAT´Ψk,AΨT k,Aq} “ Oppn´1p`p2δ1k´δ0q“opppδ1kqby using Theorem 3(ii). 
(v)}pn´kq´1B1BT 1BZTDkpXAT`ZBTqpAAT´Ψk,AΨT k,Aq} “ Oppn´1{2pp1´δ0`2δ1kq{2` p2δ1k´δ0q“opppδ1kqby using Theorem 3(ii). (vi)}pn´kq´1pAAT´Ψk,AΨT k,AqpAXT`BZTqDkE}“Oppn´1p`n´1{2pp1´δ0`2δ1kq{2q“ opppδ1kqby using Theorem 3(ii). Combing above, we have }sΨk,BsΨT k,B´B1BT 1}“Oppn´1{2pp1`δ1´2δ1kq{2`pδ1k´δ0q,and sµk,r0`1—sµkr—p2δ1k (S.16) with probability tending to 1, which together with Lemmas S.1 and S.5, imply µk,r0`1— µkr—p2δ1kwith probability tending to 1. The assertion µk,r`1—µk,tc1pp^nqu—n´2p2with probability tending to 1 can be proved by using similar procedures. The proof is complete. 11 B.3 Proof of Corollary 1 (i) Note that pΩypkq“pIp´AATqpΩypkq`AATpΩypkqwith}ATpΩypkq}—} ATpΩypkq}min— pδ0with probability tending to 1. Firstly, by }Ωzpkq}“Opn´1{2pp1`δ1q{2q, we have }pIp´AATqpΩypkq}“}p n´kq´1pIp´AATqpBZT`ETqDkpXAT`ZBT`Eq} ď}pn´kq´1pBZT`ETqDkpXAT`ZBT`Eq} ď}pn´kq´1pBZT`ETqDkpZBT`Eq}`}p n´kq´1pBZT`ETqDkXATq} “Oppn´1{2pp1`δ0q{2`n´1{2pp1`δ1q{2`n´1{2ppδ0`δ1q{2`n´1pq “Oppn´1{2pp1`δ0q{2q“opppδ0q,(S.17) which implies µk1—µkr0—p2δ0with probability tending to 1. Note that µ1{2 k,r0`1is the largest singular value of pIp´Ψk,AΨT k,AqpΩypkq. It follows that pIp´Ψk,AΨT k,AqpΩypkqpIp´Ψk,AΨT k,Aq “pn´kq´1pIp´Ψk,AΨT k,AqpAXT`BZT`ETqDkpXAT`ZBT`EqpIp´Ψk,AΨT k,Aq “pn´kq´1pIp´Ψk,AΨT k,AqpBZT`ETqDkpZBT`EqpIp´Ψk,AΨT k,Aq `pn´kq´1pIp´Ψk,AΨT k,AqpBZT`ETqDkXATpAAT´Ψk,AΨT k,Aq `pn´kq´1pAAT´Ψk,AΨT k,AqAXTDkpZBT`EqpIp´Ψk,AΨT k,Aq `pn´kq´1pAAT´Ψk,AΨT k,AqAXTDkXATpAAT´Ψk,AΨT k,Aq “Oppn´1{2pp1`δ1q{2`n´1pq“Oppn´1{2pp1`δ1q{2q. Similarly, we can show that }pIp´Ψk,AΨT k,AqpΩypkqΨk,AΨT k,A}“Oppn´1{2pp1`δ1q{2q.Thus, we have µk,r0`1“Oppn´1p1`δ1q. (ii) Noticing that }pIp´AATqpΩypkqpΩypkqT}ď}p n´kq´1pIp´AATqpBZT`ETqDkpXAT` ZBT`Eq}}pΩypkq}“Oppn´1{2pp1`3δ0q{2qby (S.17), which, together with the fact µk1—µkr0— p2δ0with probability tending to 1, derives that››Ψk,AΨT k,A´AAT››“Oppn´1{2pp1´δ0q{2q. The proof is complete. 
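The perturbation bounds of Lemmas S.1 and S.2 drive most of the rate calculations in these proofs, and they are easy to check numerically. The sketch below uses synthetic matrices of our own choosing; for the eigenvector bound we use the constant 2^{3/2} from Yu et al. (2015) as a safe upper bound rather than the exact constant displayed in Lemma S.2.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 8
spec = np.array([10.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.5, 0.2])
V, _ = np.linalg.qr(rng.standard_normal((p, p)))
Sigma = V @ np.diag(spec) @ V.T          # population matrix with known spectrum

D = rng.standard_normal((p, p))
D = (D + D.T) / 2
D *= 0.01 / np.linalg.norm(D, 2)         # symmetric perturbation, spectral norm 0.01
Sigma_t = Sigma + D
err = np.linalg.norm(Sigma_t - Sigma, 2)

lam = np.linalg.eigvalsh(Sigma)[::-1]    # descending eigenvalues
lam_t = np.linalg.eigvalsh(Sigma_t)[::-1]
# Lemma S.1 (Weyl): each eigenvalue moves by at most ||Sigma_t - Sigma||.
assert np.all(np.abs(lam_t - lam) <= err + 1e-12)

# Lemma S.2 (sin-theta variant): the leading-eigenvector error is of
# order ||Sigma_t - Sigma|| divided by the eigengap lam_1 - lam_2.
xi = np.linalg.eigh(Sigma)[1][:, -1]
xi_t = np.linalg.eigh(Sigma_t)[1][:, -1]
xi_t = xi_t * np.sign(xi_t @ xi)         # align signs so xi_t^T xi >= 0
assert np.linalg.norm(xi_t - xi) <= 2**1.5 * err / (lam[0] - lam[1])
print("Weyl and sin-theta bounds hold")
```

This is exactly the mechanism used above: a spectral-norm bound on the estimation error of a matrix translates into eigenvalue bounds via Weyl and into eigenspace bounds via the eigengap.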
12 B.4 Proof of Theorem 4 Letλk1쨨¨ě λkqě0 be the eigenvalues of pΩypkqxWpΩypkqT, with the corresponding spectral decomposition pΩypkqxWpΩypkqT“qÿ i“1λkiϕkiϕT ki“Φk,AΛk,AΦT k,A`Φk,BΛk,BΦT k,B`Φk,EΛk,EΦT k,E, where Λk,A“diagpλk1, . . . , λ kr0q,Λk,B“diagpλk,r0`1, . . . , λ krq,Λk,E“diagpλk,r`1, . . . , λ kqq, Φk,A“ pϕk1, . . . ,ϕkr0q,Φk,B“ pϕk,r0`1, . . . ,ϕkrq,andΦk,E“ pϕk,r`1, . . . ,ϕkqq. To show Theorem 4, we first introduce the following technical lemmas. Lemma S.6. Define PA,B“pA,BqtpA,BqTpA,Bqu´1pA,BqT. Under Conditions 6–9, }ΞAΞT A`ΞBΞT B´PA,B}“Oppn´1{2pp1´δ1q{2q. (S.18) Moreover, we rewrite PA,B´AAT“B1BT 1withBT 1B1“Ir1. Then, for any iPpr, qs, }ξT iA}2“Oppn´1p1´δ0q,}ξT iB1}2“Oppn´1p1´δ1q. (S.19) Proof. Note that Condition 6 and Lemma S.1 together ensure that }BTpIp´AATq}mině }B}min´}AATB} ą 0,which implies that B1is well defined with rank pB1q “ r1and BT 1B1“Ir1.The
proof of (S.18) is similar to that of our Theorem 2(ii), or Theorem 1 in Zhang et al. (2024), and thus is omitted. It suffices to prove (S.19). In the following, we consider two cases, i.e., δ0“δ1andδ0ąδ1, separately. On the one hand, when δ0“δ1, we can rewrite YasY“WCT`EwithCCT“PA,Band}n´1WTW}—}n´1WTW}min“pδ0 with probability tending to 1. For each iPpr, qs, with probability tending to 1, n´1p—θi“n´1ξT ipCWT`ETqpWCT`Eqξi“n´1pξT iCqpWTWqpCTξiq`Oppn´1pq, which implies that }ξT iC}2“Oppn´1p1´δ0q, and thus (S.19) holds. On the other hand, when δ0ąδ1, define 9XT“XT`ATBZTand9ZT“BT 1BZT. Then 13 Y“9XAT`9ZBT 1`E. For each iPpr, qs, with probability tending to 1, n´1p—θi“n´1ξT ipA9XT`B19ZT`ETqp9XAT`9ZBT 1`Eqξi “n´1pξT iAqp9XT9XqpATξiq`2n´1pξT iAqp9XT9ZqpBT 1ξiq`n´1pξT iB1qp9ZT9ZqpBT 1ξiq `2n´1ξT ipA9XT`B19ZTqEξi`Oppn´1pq, which implies that }n´1ξT ipA9XT`B19ZTqEξi}“Oppn´1pq. SincetxtutPZ,tztutPZandtetutPZ are mutually uncorrelated, by Condition 8, it follows that }n´1XTZ}“Oppn´1{2pδ1{2`δ0{2q“ opppδ1q,and thus }n´19XT9Z}“}n´1XTZBTB1`n´1ATBZTZBTB1}“Opppδ1q. Then, with probability tending to 1, we have n´1pξT iAqp9XT9XqpATξiq—pδ0}ξT iA}2, n´1pξT iAqp9XT9ZqpBT 1ξiq“Opppδ1}ξT iA}}BT 1ξi}q, n´1pξT iB1qp9ZT9ZqpBT 1ξiq—pδ1}ξT iB1}2, n´1ξT ipA9XT`B19ZTqEξi“Oppn´1pq. For analysis, we consider the following two cases separately: (i) If}ξT iA}“opp}ξT iB1}q, we have n´1pξT iAqp9XT9ZqpBT 1ξiq“opppδ1}ξT iB1}2q. Then, pδ0}ξT iA}2` pδ1}ξT iB1}2“Oppn´1pq, which implies that (S.19). (ii) If}ξT iB1}“Opp}ξT iA}qandδ0ąδ1, we have n´1pξT iAqp9XT9ZqpBT 1ξiq“opppδ0}ξT iA}2q, andn´1pξT iB1qp9ZT9ZqpBT 1ξiq“opppδ0}ξT iA}2q.Then, we have pδ0}ξT iA}2“Oppn´1pq, which implies that (S.19). The proof is complete. Lemma S.7. Under Conditions 6–9, we have }ATxWA}“Oppp´δ0q,}BT 1xWA}“Oppp´pδ0`δ1q{2q,}BT 1xWB 1}“Oppp´δ1q,(S.20) and }ATxWpIp´PA,Bq}“Oppn1{2p´p1`δ0q{2q,}BT 1xWpIp´PA,Bq}“Oppn1{2p´p1`δ1q{2q,(S.21) where B1andPA,Bare defined in Lemma S.6. 14 Proof. Recall that ΘA“diagpθ1, . . . 
, θ r0q,ΘB“diagpθr0`1, . . . , θ rq,Θq´r“diagpθr`1, . . . , θ qq, ΞA“ pξ1, . . . ,ξr0q,ΞB“ pξr0`1, . . . ,ξrq, and Ξq´r“ pξr`1, . . . ,ξqq. In the following, we consider two cases, i.e., δ0“δ1andδ0ąδ1, separately. On the one hand, when δ0ąδ1, we have}ΞT BA}“Oppn´1{2pp1´δ0q{2`pδ1´δ0qby Theo- rem 1(ii). Then, there exist some constants C1, C2ą0 such that, with probability tending to 1, }ATxWA}ď}ATΞAΘ´1 AΞT AA}`}ATΞBΘ´1 BΞT BA}`}ATΞq´rΘ´1 q´rΞT q´rA} ďC1` p´δ0}ΞT AA}2`p´δ1}ΞT BA}2`np´1}ATΞq´r}2˘ ďC2p´δ0, }ATxWB 1}ď}ATΞAΘ´1 AΞT AB1}`}ATΞBΘ´1 BΞT BB1}`}ATΞq´rΘ´1 q´rΞT q´rB1} ďC1` p´δ0}ΞT AB1}`p´δ1}ΞT BA}`np´1}BT 1Ξq´r}}ΞT q´rA}˘ ďC2p´pδ0`δ1q{2, }BT 1xWB 1}ď}BT 1ΞAΘ´1 AΞT AB1}`}BT 1ΞBΘ´1 BΞT BB1}`}BT 1Ξq´rΘ´1 q´rΞT q´rB1} ďC1` p´δ0}ΞT AB1}2`p´δ1}ΞT BB1}2`np´1}BT 1Ξq´r}2˘ ďC2p´δ1, which together imply (S.20). Consequently, }PA,BxWP A,B} ď } ATxWA}`}BT 1xWB 1}` 2}ATxWB 1}“Oppp´δ1q.Moreover, by (S.19), there exist constants C3, C4ą0 such that }ATxWpIp´PA,Bq}ď}ATΞAΘ´1 AΞT ApIp´PA,Bq}`}ATΞBΘ´1 BΞT BpIp´PA,Bq} `}ATΞq´rΘ´1 q´rΞT q´rpIp´PA,Bq} ďC3` p´δ0}ΞT ApIp´PA,Bq}`p´δ1}ATΞB}`np´1}ATΞq´r}˘ ďC4n1{2p´p1`δ0q{2, }BT 1xWpIp´PA,Bq}ď}BT 1ΞAΘ´1 AΞT ApIp´PA,Bq}`}BT 1ΞBΘ´1 BΞT BpIp´PA,Bq} `}BT 1Ξq´rΘ´1 q´rΞT q´rpIp´PA,Bq} ďC3` p´δ0}ΞT ApIp´PA,Bq}`p´δ1}ΞT BpIp´PA,Bq}`np´1}BT 1Ξq´r}˘ ďC4n1{2p´p1`δ1q{2, which together imply (S.21). On the other hand, when δ0“δ1,we can rewrite YasY“WCT`EwithCCT“PA,B and}n´1WTW}—}n´1WTW}min—pδ0with probability tending to 1. Since B1is proved to 15 be full-ranked in Lemma (S.6), some standard arguments similar to the proof of Theorem 2(i) can show that CPRpˆr,WPRnˆr, θ1{2 1쨨¨ě θ1{2 rě0 are the rlargest singular values of n´1WCTwith θ1—θr—pδ0, and also θr`1—θq—n´1pwith probability tending to 1. Thus, }ATxWA}ďC1pp´δ0}ΞT AA}2`p´δ0}ΞT BA}2`np´1}ATΞq´r}2qďC2p´δ0,}ATxWpIp´PA,Bq}ď C3pp´δ0}ΞT ApIp´PA,Bq}`p´δ0}ATΞB}`np´1}ATΞq´r}qď C4n1{2p´p1`δ0q{2, and the other terms can be similarly proved. The proof is complete. Lemma S.8. 
Under the conditions of Theorem 4, we have }pΩypkqpIp´PA,BqΞqΘ´1{2 q}“Oppn´1{2p1{2q“opppδ1k´δ1{2q, (S.22) where PA,Bis defined in Lemma S.6, and ΞqandΘqare defined in (S.1) . Proof. Since qis fixed, the dimensions of XTDkEpIp´PA,BqΞqandZTDkEpIp´PA,BqΞq are fixed,
and the entries of them are all Oppn1{2pδ0{2qandOppn1{2pδ1{2q, respectively. Thus, }pn´kq´1pAXT`BZT`ETqDkEpIp´PA,BqΞq} ď}pn´kq´1AXTDkEpIp´PA,BqΞq}`}p n´kq´1BZTDkEpIp´PA,BqΞq} `}pn´kq´1ETDkEpIp´PA,BqΞq} “Oppn´1{2pδ0{2q`Oppn´1{2pδ1{2q`Oppn´1pq“Oppn´1pq, which, together with the fact }Θ´1{2 q}“Oppn1{2p´1{2q, which can be shown to hold for both cases, i.e., δ0“δ1andδ0ąδ1in a similar fashion to the proof of Theorem 2(i), implies that }pΩypkqpIp´PA,BqΞqΘ´1{2 q}“}p n´kq´1pAXT`BZT`ETqDkEpIp´PA,BqΞqΘ´1{2 q} ď}Θ´1{2 q}}pn´kq´1pAXT`BZT`ETqDkEpIp´PA,BqΞq} “Oppn1{2p´1{2qˆOppn´1pq“Oppn´1{2p1{2q“opppδ1k´δ1{2q, which concludes (S.22). The proof is complete. Lemma S.9. Under the conditions of Theorem 4, for kPrms, }Ψk,BΨT k,B´B1BT 1}“Oppn´1{2pp1`δ1´2δ1kq{2`pδ1k´δ0q“opp1q, (S.23) where B1is defined in Lemma S.6. 16 Proof. By (S.12), it can be shown that }Ψk,BΨT k,BpΩypkqΨk,AΨT k,ApΩypkqTΨk,EΨT k,E} “Oppp2δ1k´δ0`n´1{2pp1´δ0`2δ1kq{2qˆOpppδ1k`n´1{2pp1`δ0q{2q “Oppn´1{2pp1´δ0`4δ1kq{2`n´1p1`δ1k`p3δ1k´δ0q, which, together with Lemma S.2, the second assertion in Theorem 3(i) and the orders calcu- lated in Lemma S.5, implies that }Ψk,BΨT k,B´sΨk,BsΨT k,B}“Oppn´1{2pp1´δ0q{2`n´1p1´δ1k` pδ1k´δ0q“Oppn´1{2pp1`δ1´2δ1kq{2`pδ1k´δ0q,wheresΨk,Bis defined in (S.15). By the proof of Theorem 3, we have }sΨk,BsΨT k,B´B1BT 1}“Oppn´1{2pp1`δ1´2δ1kq{2`pδ1k´δ0q,which concludes (S.23). The proof is complete. Lemma S.10. Under the conditions of Theorem 4, for kPrms, }pIp´AATqpΩypkqxWpΩypkqT}“Op` n´1{2pp1`δ0q{2`pδ1k`pδ0´δ1q{2q“opppδ0˘ . (S.24) Proof. Consider the following decomposition: pIp´AATqpΩypkqxWpΩypkqT “pIp´AATqpΩypkqAATxWAATpΩypkqT`pIp´AATqpΩypkqB1BT 1xWB 1BT 1pΩypkqT `pIp´AATqpΩypkqAATxWB 1BT 1pΩypkqT`pIp´AATqpΩypkqB1BT 1xWAATpΩypkqT `pIp´AATqpΩypkqAATxWpIp´PA,BqpΩypkqT`pIp´AATqpΩypkqB1BT 1xWpIp´PA,BqpΩypkqT `pIp´AATqpΩypkqpIp´PA,BqxWpIp´PA,BqpΩypkqT`pIp´AATqpΩypkqpIp´PA,BqxWB 1BT 1pΩypkqT `pIp´AATqpΩypkqpIp´PA,BqxWAATpΩypkqT. 
17 The orders of each term are calculated as follows: }pIp´AATqpΩypkqA}“}p n´kq´1pIp´AATqpBZT`ETqDkpXAT`ZBT`EqA}“Opppδ1kq, }ATpΩypkqT}“}p n´kq´1pAXT`BZT`ETqDkpXAT`ZBT`EqA}“Opppδ0q, }pIp´AATqpΩypkqB1}“}p n´kq´1pIp´AATqpBZT`ETqDkpZBT`EqB1}“Opppδ1kq, }BT 1pΩypkqT}“}p n´kq´1pAXT`BZT`ETqDkpZBT`EqB1}“Opppδ1kq, }pIp´PA,BqpΩypkqT}“}p n´kq´1pAXT`BZT`ETqDkEpIp´PA,Bq}“Oppn´1{2pp1`δ0q{2q, }pIp´AATqpΩypkqpIp´PA,Bq}“}p n´kq´1pIp´AATqpBZT`ETqDkE}“Oppn´1{2pp1`δ1q{2q, }pΩypkqpIp´PA,BqxWpIp´PA,BqpΩypkqT}“Oppn´1pqby (S.22) , which, together with (S.20) and (S.21) in Lemma S.7, imply that }pIp´AATqpΩypkqxWpΩypkqT} “}pIp´AATqpΩypkqpIp´PA,BqxWAATpΩypkqT}`Opppδ1kq`Opppδ1k`pδ0´δ1q{2q`Oppn´1{2pp1`δ0q{2q ď}pIp´AATqpΩypkqpIp´PA,BqxWA}}pΩypkqT}`Opppδ1k`pδ0´δ1q{2`n´1{2pp1`δ0q{2q. For the first term above, we have }pIp´AATqpΩypkqpIp´PA,BqxWA} ď}pn´kq´1ZTDkEpIp´PA,BqxWA}`}p n´kq´1ETDkEpIp´PA,BqxWA} “}pn´kq´1ZTDkEpIp´PA,BqxWA}`Oppn´1{2pp1´δ0q{2q, ď}pn´kq´1ZTDkEpIp´PA,BqΞAΘ´1 AΞT AA}`}p n´kq´1ZTDkEpIp´PA,BqΞBΘ´1 BΞT BA} `}pn´kq´1ZTDkEpIp´PA,BqΞq´rΘ´1 q´rΞT q´rA}`Oppn´1{2pp1´δ0q{2q “}pn´kq´1ZTDkEpIp´PA,BqΞq´rΘ´1 q´rΞT q´rA}`Oppn´1{2pp1´δ0q{2q, where Ξq´r:“pξr`1, . . . ,ξqqandΘq´r:“diagpθr`1, . . . , θ qq. Note that Ξq´ris apˆpq´rq matrix, and}pn´kq´1ZTDkEpIp´PA,BqΞq´r}“Oppn´1{2pδ1{2q.Then, by (S.19), }pn´kq´1ZTDkEpIp´PA,BqxWA}“Oppppδ1´δ0´1q{2`n´1{2pp1´δ0q{2q. Since pδ1“opp2δ1kqby combining Conditions 8(ii) and 11(ii), we have }pIp´AATqpΩypkqpIp´ PA,BqxWA}}pΩypkqT} “Oppn´1{2pp1`δ0q{2`ppδ1`δ0´1q{2q “Oppn´1{2pp1`δ0q{2`pδ1k`pδ0´δ1q{2q. 18 Combing above, it follows that }pIp´AATqpΩypkqxWpΩypkqT}“Op` n´1{2pp1`δ0q{2`pδ1k`pδ0´δ1q{2q“opppδ0˘ , which concludes (S.24). The proof is complete. Now we are ready to prove Theorem 4. Step 1 . Recall that in (S.1) we have pΩypkqxWpΩypkqT“pΩypkqΞqΘ´1 qΞT qpΩypkqT, so we focus on the singular values of pΩypkqΞqΘ´1{2 q. Note that pΩypkqΞqΘ´1{2 q“pΩypkqPA,BΞqΘ´1{2 q`pΩypkqpIp´PA,BqΞqΘ´1{2 q, where rank␣pΩypkqPA,BΞqΘ´1{2 q( ďr. Then by Lemma S.8, λk,r`1“Oppn´1pqholds. 
Step 2 . Next we prove λk1—λkr0—pδ0with probability tending to 1, and λk,r0`1— λkr—p2δ1k´δ1with probability tending to 1 when δ0“δ1. Consider the singular values ofpΩypkqPA,BΞqΘ´1{2 q“pΩypkqPA,BPA,BΞqΘ´1{2 q, which can be divided into two parts, pΩypkqPA,BandPA,BΞqΘ´1{2 q. Firstly, by combining Theorem 3 and (S.23) in Lemma S.9, it can be shown that PA,BpΩypkqTpΩypkqPA,Bhasrnon-negative eigenvalues, where the r0 largest eigenvalues are of order p2δ0with probability tending to 1, and the other r1“r´r0 non-negative eigenvalues are of order p2δ1kwith probability tending to 1. Then, note that PA,BΞqΘ´1 qΞT qPA,B“PA,BΞAΘ´1 AΞT APA,B`PA,BΞBΘ´1 BΞT BPA,B`PA,BΞq´rΘ´1 q´rΞT q´rPA,B, where Ξq´r“pξr`1, . . . ,ξqqandΘq´r“diagpθr`1, . . . , θ qq, and the three terms in the right hand are non-negative. By using (S.18) in Lemma S.6, we have }PA,BΞAΘ´1 AΞT APA,B}—}PA,BΞAΘ´1 AΞT APA,B}min—p´δ0, }PA,BΞBΘ´1 BΞT BPA,B}—}PA,BΞBΘ´1 BΞT BPA,B}min—p´δ1 with probability tending to 1, and by (S.19) in Lemma S.6, }PA,BΞq´rΘ´1 q´rΞT q´rPA,B} “ Oppp´δ1q. In the following, we consider two cases, i.e., δ0“δ1andδ0ąδ1, separately. On the one hand, when δ0“δ1, we can find that }PA,BΞqΘ´1 qΞT qPA,B}—}PA,BΞqΘ´1 qΞT qPA,B}min—p´δ0 19 with probability tending to 1, which, together with
the eigenvalues of PA,BpΩypkqTpΩypkqPA,B, implies that }pΩypkqPA,BΞqΘ´1{2 q}—}pΩypkqPA,BΞqΘ´1{2 q}min—pδ0{2 with probability tending to 1. Then combined with (S.22) and Theorem 3(i), we can show that λk1—λkr0—pδ0,andλk,r0`1—λkr—p2δ1k´δ1with probability tending to 1. On the other hand, when δ0ąδ1, we have pΩypkqPA,BΞqΘ´1{2 q“pΩypkqAATΞqΘ´1{2 q`pΩypkqB1BT 1ΞqΘ´1{2 q. (S.25) For the first term in (S.25), note that ATΞqΘ´1 qΞT qA“ATΞAΘ´1 AΞT AA`ATΞBΘ´1 BΞT BA`ATΞq´rΘ´1 q´rΞT q´rA, where the three terms in the right hand are non-negative. By combining (S.18) and The- orem 2(ii),}ATΞBΘ´1 BΞT BA} “oppp´δ0q, and}ATΞAΘ´1 AΞT AA} — }ATΞAΘ´1 AΞT AA}min— p´δ0with probability tending to 1. By combing (S.19) and the fact θr`1—θq—n´1p with probability tending to 1 by Theorem 2(i), we have }ATΞq´rΘ´1 q´rΞT q´rA} “oppp´δ0q. Then,}ATΞqΘ´1 qΞT qA}—}ATΞqΘ´1 qΞT qA}min—p´δ0with probability tending to 1. Thus, it follows that }pΩypkqAATΞqΘ´1{2 q}—}pΩypkqAATΞqΘ´1{2 q}min—pδ0{2(S.26) with probability tending to 1. For the second term in (S.25), note that BT 1ΞqΘ´1 qΞT qB1“BT 1ΞAΘ´1 AΞT AB1`BT 1ΞBΘ´1 BΞT BB1`BT 1Ξq´rΘ´1 q´rΞT q´rB1, where the three terms in the right hand are non-negative. By Theorem 2(ii), it follows thatΞT AB1“opp1q, which implies that }BT 1ΞAΘ´1 AΞT AB1}“oppp´δ1q. By (S.18), we have }BT 1ΞBΘ´1 BΞT BB1}—}BT 1ΞBΘ´1 BΞT BB1}min—p´δ1with probability tending to 1. By comb- ing (S.19) and the fact θr`1—θq—n´1pwith probability tending to 1 by Theorem 2(i), we have}BT 1Ξq´rΘ´1 q´rΞT q´rB1}“oppp´δ1q. Then,}BT 1ΞqΘ´1 qΞT qB1}—}BT 1ΞqΘ´1 qΞT qB1}min— 20 p´δ1with probability tending to 1. Thus, considering the orthogonality of the columns of A andB1,we have }pΩypkqB1BT 1ΞqΘ´1{2 q}—}pΩypkqB1BT 1ΞqΘ´1{2 q}min—pδ1k´δ1{2(S.27) with probability tending to 1. Note that the rank of pΩypkqPA,BΞqΘ´1 qΞT qPA,BpΩypkqTis not larger than r. By combing (S.22), (S.25), (S.26) and (S.27), we conclude λk1—λkr0—pδ0 with probability tending to 1, and λk,r0`1“Oppp2δ1k´δ1q. 
(S.28) It is noteworthy that there is a gap between (S.28) and the assertion λk,r0`1—λkr—p2δ1k´δ1 with probability tending to 1. We will prove Theorem 4(ii) first and then conclude this assertion by using (S.28) and Theorem 4(ii). Step 3 . Now we prove Theorem 4(ii). Note that the fact λk1—λkr0—pδ0with probability tending to 1 and (S.24) imply }ATpΩypkqxWpΩypkqTA}min—pδ0with probability tending to 1. To show Theorem 4(ii), by (S.24) in Lemma S.10, we have }pΩypkqxWpΩypkqT´AATpΩypkqxWpΩypkqTAAT} ď}pIp´AATqpΩypkqxWpΩypkqT}`}AATpΩypkqxWpΩypkqTpIp´AATq} ď2}pIp´AATqpΩypkqxWpΩypkqT}“Op` n´1{2pp1`δ0q{2`pδ1k`pδ0´δ1q{2q,(S.29) which, together with the fact λk1—λkr0—pδ0with probability tending to 1 and Lemma S.2, concludes››Φk,AΦT k,A´AAT››“Oppn´1{2pp1´δ0q{2`pδ1k´pδ0`δ1q{2q. Step 4 . Finally, we prove the assertion λk,r0`1—λkr—p2δ1k´δ1with probability tending to 1 when δ0ąδ1. By the definitions, we have that λk,r0`1쨨¨ě λkqare the q´r0largest eigenvalues of the matrix pIp´Φk,AΦT k,AqpΩypkqxWpΩypkqTpIp´Φk,AΦT k,Aq “pIp´Φk,AΦT k,AqpΩypkqΞAΘ´1 AΞT ApΩypkqTpIp´Φk,AΦT k,Aq `pIp´Φk,AΦT k,AqpΩypkqΞBΘ´1 BΞT BpΩypkqTpIp´Φk,AΦT k,Aq `pIp´Φk,AΦT k,AqpΩypkqΞq´rΘ´1 q´rΞT q´rpΩypkqTpIp´Φk,AΦT k,Aq, 21 where the three terms in the right hand are non-negative. We focus on the singular values ofpIp´Φk,AΦT k,AqpΩypkqΞBΘ´1{2 B.By Theorem 2(ii) under δ0ąδ1, we have }ATΞB}“Oppn´1{2pp1´δ0q{2`pδ1´δ0q“opp1q. (S.30) There exists some constant C5ą0 such that }pIp´Φk,AΦT k,AqpΩypkqΞBΘ´1{2 B}mině }pIp´Φk,AΦT k,AqpΩypkqΞB}min}Θ´1{2 B}min ěC5}pIp´Φk,AΦT k,AqpΩypkqΞB}minp´δ1{2(S.31) with probability tending to 1. 
For (S.31), consider the decomposition pIp´Φk,AΦT k,AqpΩypkqΞB “pn´kq´1pIp´Φk,AΦT k,AqpAXT`BZT`ETqDkXATΞB `pn´kq´1pIp´Φk,AΦT k,AqpAXT`BZT`ETqDkEΞB `pn´kq´1pIp´Φk,AΦT k,AqpAXT`BZT`ETqDkZBTΞB,(S.32) where the first and second terms are bounded by (S.30) and Theorem 4(ii), following that }pn´kq´1pIp´Φk,AΦT k,AqpAXT`BZT`ETqDkXATΞB} ď}ATΞB}}AAT´Φk,AΦT k,A}}pn´kq´1AXTDkX} `}ATΞB}}pn´kq´1pBZT`ETqDkX}“opppδ1kq, }pn´kq´1pIp´Φk,AΦT k,AqpAXT`BZT`ETqDkEΞB} ď}AAT´Φk,AΦT k,A}}pn´kq´1AXTDkE}`}p n´kq´1pBZT`ETqDkE}“opppδ1kq, and for the third term of (S.32), there exists
some constant C6ą0 such that }pn´kq´1pIp´Φk,AΦT k,AqpAXT`BZT`ETqDkZBTΞB}min ě}pn´kq´1pIp´Φk,AΦT k,AqBZTDkZBTΞB}min´}pn´kq´1ETDkZBT} ´}AAT´Φk,AΦT k,A}}pn´kq´1AXTDkZBT}ěC6pδ1k(S.33) with probability tending to 1. Combing (S.31), (S.32) and (S.33), we have }pIp´Φk,AΦT k,AqpΩypkqΞBΘ´1{2 B}miněC5C6pδ1k´δ1{2 22 with probability tending to 1, which implies that λkrěC2 5C2 6p2δ1k´δ1with probability tending to 1. Then, by combing (S.28), we conclude that λk,r0`1—λkr—p2δ1k´δ1with probability tending to 1 and complete the proof. B.5 Proof of Corollary 2 If Condition 11 is replaced by }Ωzpkq} “ Opn´1{2pp1`δ1q{2qforkP rms, then we have }pIp´AATqpΩypkqA}“Oppn´1{2pp1`δ1q{2q,}pIp´AATqpΩypkqB1}“Oppn´1{2pp1`δ1q{2q,and }BT 1pΩypkqT}“Oppn´1{2pp1`δ1q{2q.By similar procedures to Lemma S.10, we can show that }pIp´AATqpΩypkqxWpΩypkqT}“Oppn´1{2pp1`δ1q{2q“opppδ0q, which implies that λk1—λkr0—pδ0with probability tending to 1, and }Φk,AΦT k,A´AAT}“ Oppn´1{2pp1´δ0q{2q. Note that λ1{2 k,r0`1is the largest singular value of pIp´AATqpΩypkqxW1{2. By using similar procedures on }pIp´AATqpΩypkqxW1{2}to the proof of Theorem 4(i), we obtain that λk,r0`1“Oppn´1pq. The proof is complete. C Further methodology and derivations C.1 The solution of (5) This section derives the solution of (5) in Section 2.1. We rewrite (4) as yt“Lkryt´k`etk“AHT kryt´k`etk, t“k`1, . . . , n, (S.34) where the rank-reduced coefficient matrix Lk“AHT kPRpˆqsatisfies rankpLkq “r0and CpLkq“CpAq. Let Yk“pyk`1, . . . ,ynqTPRpn´kqˆpandrYk“pry1, . . . ,ryn´kqTPRpn´kqˆq. The ordinary least squares estimator for LkispLk,ols“YT krYkprYT krYkq´1,which, however, does not satisfy the rank-reduced constraint. To fit model (S.34), we consider solving the following constrained minimization problem, which is equivalent to (5): min Hk,ATA“Ir0trtpYk´rYkHkATqpYk´rYkHkATqTu. 
(S.35) The solution of (S.35) is pHk“prYT krYkq´1rYT kYkΦk,AandpAk“Φk,A,where the columns of Φk,Aare the r0leading eigenvectors of pYT k,olspYk,olswithpYk,ols“rYkpLT k,ols.By noticing that pYT k,olspYk,ols“pLk,olsrYT krYkpLT k,ols“YT krYkprYT krYkq´1rYT kYk “´ nÿ t“k`1ytyT t´k¯ Q´ QTn´kÿ t“1ytyT tQ¯´1 QT´ nÿ t“k`1yt´kyT t¯ “pn´kqpΩypkqQ` QTpΩyQ˘´1QTpΩypkqT, we obtain the solution of (5). C.2 Justification for the weight matrix This section presents explanations for why xWcan improve the estimation performance and why Qis constructed as the leading eigenvectors of pΩy. Recall that pΩy“řp j“1θjξjξT j andpΩypkq“řp j“1µ1{2 kjψkjrψT kjforkPrms,with eigenvalues θ1쨨¨ě θpě0 and µk1ě ¨¨¨ě µkpě0,respectively. Suppose that the weight matrix has the spectral decomposition xW“řq i“1τiνiνT iwith eigenvalues τ1쨨¨ě τqě0. Notice that pΩypkqpΩypkqT“pÿ j“1µkjψkjψT kj,and pΩypkqxWpΩypkqT“´pÿ j“1µ1{2 kjψkjrψT kj¯´ qÿ i“1τiνiνT i¯´ pÿ j1“1µ1{2 kj1rψkj1ψT kj1¯ “pÿ j“1ckjµkjψkjψT kj`Sk,(S.36) where Sk“ř 1ďj‰j1ďpµ1{2 kjµ1{2 kj1`rψT kjxWrψkj1˘ ψkjψT kj1is the cross term and ckj“řq i“1τi`rψT kjνi˘2 is regarded as the calibrating coefficient. The cross term Skhas a small impact on the r0 leading eigenpairs of (S.36) asymptotically, since Skψkj“ř j1‰jµ1{2 kj1µ1{2 kj`rψT kj1xWrψkj˘ ψkj1, which is orthogonal to ψkjfor each jP rr0s, thus asymptotically orthogonal to CpAq. Hence, ckj’s are expected to improve the eigenstructure of pΩypkqxWpΩypkqTcompared to pΩypkqpΩypkqTby adjusting the corresponding coefficients in the leading termřp j“1ckjµkjψkjψT kj. To this end, we aim for ckj’s to be large (or small) for those corresponding Cpψkjq’s that are close to (or far from) CpAq. Hereafter, we say CpK1qis close to CpK2q, ifDtCpK1q,CpK2quis small and vice versa. This ensures that the eigenvectors of pΩypkqxWpΩypkqTthat are closer to CpAqare associated with larger eigenvalues, relative to the eigenpairs of pΩypkqpΩypkqT.
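The closed form derived for (S.35) in Section C.1 (an unconstrained OLS fit followed by the r0 leading eigenvectors of the fitted values' Gram matrix) can be sanity-checked numerically. The sketch below is a minimal illustration on noiseless i.i.d. toy data rather than the paper's time-series setting; all variable names are ours.

```python
import numpy as np

def reduced_rank_fit(Y, Ytil, r0):
    """Solve min_{H, A^T A = I} ||Y - Ytil H A^T||_F^2: take A as the
    r0 leading eigenvectors of Yhat_ols^T Yhat_ols, where Yhat_ols is
    the unconstrained OLS fit of Y on Ytil, then recover H."""
    G = Ytil.T @ Ytil
    L_ols = Y.T @ Ytil @ np.linalg.inv(G)    # OLS coefficient, no rank constraint
    Yhat = Ytil @ L_ols.T                    # OLS fitted values
    _, v = np.linalg.eigh(Yhat.T @ Yhat)     # eigh returns ascending order
    A = v[:, ::-1][:, :r0]                   # r0 leading eigenvectors
    H = np.linalg.inv(G) @ Ytil.T @ Y @ A
    return H, A

rng = np.random.default_rng(2)
n, p, q, r0 = 200, 6, 4, 2
A0, _ = np.linalg.qr(rng.standard_normal((p, r0)))
H0 = rng.standard_normal((q, r0))
Ytil = rng.standard_normal((n, q))
Y = Ytil @ H0 @ A0.T                         # exact rank-r0 regression, no noise
H, A = reduced_rank_fit(Y, Ytil, r0)
print(np.allclose(Ytil @ H @ A.T, Y))        # -> True: fit is exact in this toy case
```

In the noiseless case the OLS coefficient is already of rank r0, so projecting onto the leading eigenvectors loses nothing; with noise, the projection is what enforces the rank constraint of (S.34).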
Consequently, these eigenvectors are more likely to be selected when identifying the r0leading eigenvectors to estimate CpAq.Additionally, this often results in an enhanced separation between common and idiosyncratic components, and, more precisely,
a relatively faster order of decrease from ther0-th to thepr0`1q-th largest eigenvalues of pΩypkqxWpΩypkqTthan that of pΩypkqpΩypkqT. Based on the properties of ckj’s discussed above, we present a reasonable choice of Q.Note that, for each kPrms,tψkjur0 j“1andtrψkjur0 j“1are the r0leading eigenvectors of pΩypkqpΩypkqT andpΩypkqTpΩypkq, respectively. According to the standard autocovariance-based method, bothC` tψkjur0 j“1˘ andC` trψkjur0 j“1˘ are close to CpAq. For kPrmsandjPrps,we expand rψkj“řp i“1gkjiνi,wheretνiup i“1forms an orthonormal basis of Rpand the basis coefficients satisfyřp i“1g2 kji“1. Given that ckj“řq i“1τig2 kji,we aim fortgkjiuiPrqs’s to be large for those corresponding Cpψkjq’s that are close to CpAq,which suggests that DtCptνiuq i“1q,CpAqu should be small. By the covariance-based estimation, a feasible choice is to guarantee tνiuq i“1“tξiuq i“1. (S.37) To achieve this, we use Q“pξ1, . . . ,ξqq. Then, QTpΩyQ“diagpθ1, . . . , θqq, and qÿ i“1τiνiνT i“xW“Q` QTpΩyQ˘´1QT“qÿ i“1θ´1 iξiξT i, (S.38) which implies that the non-degenerate eigenvectors of xWcorrespond to the qleading eigenvectors of pΩy,and thus (S.37) holds. An alternative method is to sample the entries of Qindependently from some random distribution with zero mean and, e.g., unit variance, such as standard normal. This random calibration ensures that xWwill still contain some information about tξiuq i“1.However, it is expected to suffer from reduced statistical efficiency and empirical performance compared to (S.38). C.3 Heuristic distributional analysis Recall that ryt“QTytPRq,xt“HT kryt´k`retk, and yt“AHT kryt´k`etk.Let ppHk,pAkq“arg min Hk,ATA“Ir0trtpYk´rYkHkATqpYk´rYkHkATqTu, wherepAkis equivalent to Φk,Ain (S.2). For each kP rms, we assume that etk’s are i.i.d. and follow a multivariate normal distribution Np0,Ipq.Then, it is shown that pAk“ ppAk1, . . . ,pAkr0qis the maximum likelihood estimator of A“ pA1, . . .
,Ar0q.Note that Covpyt,ryt´kq“AHT kΩry,where Ωry“Covprytqis positive definite. Then, for each kPrms, ΩypkqWΩ ypkq“Covpyt,ryt´kqtCovpryt´kqu´1Covpyt,ryt´kqT“AHT kΩryHkAT has rank r0.Given that HT kΩryHk“HT kQTΩyQH kPRr0ˆr0is full-ranked, the space spanned by the r0leading eigenvectors of ΩypkqWΩ ypkqis exactly CpAq. LettAjujPrpsbe an or- thonormal basis. Without loss of generality, assume pAT kjAjě0.By Theorem 2.4 of Reinsel et al. (2022), for jPrr0s, EtnppAkj´AjqppAkj´AjqTuÑGjj:“ÿ 1ďi‰jďprλkj`rλki prλkj´rλkiq2AiAT i, (S.39) asnÑ8 , and for 1ďj‰lďr0, EtnppAkj´AjqppAkl´AlqTuÑGjl:“´rλkj`rλkl prλkj´rλklq2AlAT j, (S.40) asnÑ 8 , whererλkj’s are the eigenvalues of ΩypkqWΩ ypkqforjP rr0s, andrλkj“0 for j“r0`1, . . . , p. Thus, Gjjcan be rewritten as Gjj“ÿ 1ďi‰jďr0rλkj`rλki prλkj´rλkiq2AiAT i`1 rλkjpIp´AATq. LetG“ tGjlu PRpr0ˆpr0be the asymptotic covariance matrix of the vectorization of pAk,where Gjlis thepj, lq-th block for j, lP rr0s. Then, by (S.39) and (S.40), we have ?nvecppAk´AqÑNp0,GqasnÑ8 ,which can be used to conduct hypothesis tests on A. Additionally, similar to Theorem 1, we can show that rλkj—pδ0forjP rr0s.Thus, }Gjj}FÀp1´δ0and}Gjl}FÀp´δ0forj‰l.Hence,}G}ď`řr0 j“1řr0 l“1}Gjl}2 F˘1{2Àp1´δ0, and thus}pAkpAT k´AAT}“Oppn´1{2pp1´δ0q{2q, aligning with the rate in Theorem 1(ii). 26 C.4 Reduced-rank autoregression formulation for matrix-valued time series This section provides an explanation for the suggested forms of weight matrices for matrix factor models as discussed in Section 7. Recall that tYtutPZis a stationary matrix-valued time series of size p1ˆp2satisfying the factor model Yt“RX tCT`Et. The matrices pΩp1q y,ijpkq andpΩp2q y,ijpkqdenote the sample estimates of Ωp1q y,ijpkqandΩp2q y,ijpkq, respectively. Furthermore, let the j-th row of RandCberj¨andcj¨, respectively, the j-th column of Etbeet,¨j, and zt,j“XtcT
|
https://arxiv.org/abs/2505.01357v2
|
j¨. Then, we have the following p2vector factor models: yt,¨j“RX tcT j¨`et,¨j“Rzt,j`et,¨j, jPrp2s. (S.41) Without loss of generality, we assume that Epyt,¨jq“0. Letting Q1jconsists of the q1leading eigenvectors of pΩp1q y,jjp0qwith d1ăq1ďp1^n, we project yt,¨jtoryt,¨j“QT 1jyt,¨jPRq1.By assuming the latent factor is of the form zt,i“HT kijryt´k,¨j`retkij, where Hkijisq1ˆd1and full-ranked, model (S.41) becomes a reduced-rank autoregression yt,¨i“Lkijryt´k,¨j`etkij“RHT kijryt´k,¨j`etkij, t“k`1, . . . , n, (S.42) where Lkij“RHT kijPRp1ˆq1satisfies rankpLkijq “d1andCpLkijq “CpRq, and etkij“ Rretkij`et,¨i. Let Yk,¨j“pyk`1,¨j, . . . ,yn,¨jqTPRpn´kqˆp1, andrYk,¨j“pry1,¨j, . . . ,ryn´k,¨jqTP Rpn´kqˆq1.To fit (S.42), we consider solving the constrained minimization problem: ppHkij,pRkijq“arg min Hkij,RTR“Id1trtpYk,¨i´rYk,¨jHkijRTqpYk,¨i´rYk,¨jHkijRTqTu.(S.43) Following similar procedures to those in Section C.1, the solution of (S.43) is pHkij“ prYT k,¨jrYk,¨jq´1rYT k,¨jYk,¨iΦkij,R andpRkij“Φkij,R,where the columns of Φkij,R are the d1 leading eigenvectors of pYT k,¨i,olspYk,¨i,ols“pn´kqpΩp1q y,ijpkqQ1j␣ QT 1jpΩp1q y,jjp0qQ1j(´1QT 1jpΩp1q y,ijpkqT. 27 This implies that the weight matrix to construct xMp1qisxWp1q j“Q1jtQT 1jpΩp1q y,jjp0qQ1ju´1QT 1j forjP rp2s, which is non-negative and independent of lag k. The weight matrix xWp2q jto improve the estimation of d2andCcan be similarly obtained. References Reinsel, G. C., Velu, R. P. and Chen, K. (2022). Multivariate Reduced-Rank Regression: Theory, Methods and Applications , Vol. 225, Springer Nature. Weyl, H. (1912). Das asymptotische verteilungsgesetz der eigenwerte linearer partieller differentialgleichungen (mit einer anwendung auf die theorie der hohlraumstrahlung), Mathematische Annalen 71(4): 441-479. Yu, Y., Wang, T. and Samworth, R. J. (2015). A useful variant of the Davis–Kahan theorem for statisticians, Biometrika 102(2): 315-323. Zhang, B., Pan, G., Yao, Q. and Zhou, W. (2024). 
Factor modeling for clustering high- dimensional time series, Journal of the American Statistical Association 119(546): 1252-1263. 28
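The middle identity in (S.38) — that $Q(Q^{\mathrm T}\hat\Omega_y Q)^{-1}Q^{\mathrm T}$ built from the $q$ leading eigenvectors $\xi_i$ of $\hat\Omega_y$ equals $\sum_i \theta_i^{-1}\xi_i\xi_i^{\mathrm T}$ — can be sanity-checked numerically. A minimal NumPy sketch, using a synthetic positive definite matrix in place of $\hat\Omega_y$ and assuming $\theta_i$ denotes the eigenvalue associated with $\xi_i$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 8, 3

# A symmetric positive definite stand-in for the estimate Omega_y.
B = rng.normal(size=(p, p))
omega = B @ B.T + p * np.eye(p)

# q leading eigenvectors xi_i with eigenvalues theta_i (decreasing order).
theta, xi = np.linalg.eigh(omega)          # eigh returns ascending order
theta, xi = theta[::-1][:q], xi[:, ::-1][:, :q]

Q = xi                                     # Q = (xi_1, ..., xi_q)
W = Q @ np.linalg.inv(Q.T @ omega @ Q) @ Q.T
W_sum = sum(xi[:, [i]] @ xi[:, [i]].T / theta[i] for i in range(q))
assert np.allclose(W, W_sum)               # the identity in (S.38)
```

The equality holds because the orthonormal eigenvector columns make $Q^{\mathrm T}\hat\Omega_y Q$ exactly diagonal, so its inverse is $\operatorname{diag}(\theta_i^{-1})$.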
arXiv:2505.01388v1 [eess.IV] 2 May 2025

Potential Contrast: Properties, Equivalences, and Generalization to Multiple Classes

Wallace Peaslee, DAMTP, University of Cambridge, Cambridge, UK (https://orcid.org/0000-0002-5274-9035); Anna Breger, DAMTP, University of Cambridge, Cambridge, UK, and CMPBE, Medical University of Vienna, Vienna, Austria (https://orcid.org/0000-0001-8878-5743); Carola-Bibiane Schönlieb, DAMTP, University of Cambridge, Cambridge, UK (https://orcid.org/0000-0003-0099-6306)

Abstract—Potential contrast is typically used as an image quality measure and quantifies the maximal possible contrast between samples from two classes of pixels in an image after an arbitrary grayscale transformation. It has been valuable in cultural heritage applications, identifying and visualizing relevant information in multispectral images while requiring a small number of pixels to be manually sampled. In this work, we introduce a normalized version of potential contrast that removes dependence on image format and also prove equalities that enable generalization to more than two classes and to continuous settings. Finally, we exemplify the utility of multi-class normalized potential contrast through an application to a medieval music manuscript with visible bleedthrough from the back page. We share our implementations, based on both original algorithms and our new equalities, including generalization to multiple classes, at https://github.com/wallacepeaslee/Multiple-Class-Normalized-Potential-Contrast.

Index Terms—Potential Contrast, Contrast Measure, Image Quality, Cultural Heritage, Image Analysis, Multi-Class Segmentation

I. INTRODUCTION

Potential contrast (PC) is a task-dependent image contrast and quality measure. It involves binarizing an image based on labeled pixels from two classes, typically called the foreground and background and manually selected for a particular task.
PC then measures the maximal contrast possible between the labeled pixels from each class after an arbitrary grayscale transformation [1].

The most prominent successes of PC so far have been in cultural heritage applications, especially to multispectral images of degraded writing. In particular, areas where ink is present (foreground) and absent (background) are labeled. Then PC is computed for each band in a multispectral image using pixels from labeled regions to determine which band(s) may have the most relevant information. This process is described in more detail in [2] and [3], where PC was applied to ostraca (potsherds with writing), and further explored in [4]–[6].

A primary property of PC is its invariance under invertible grayscale transformations, which can account for limitations in human perception like the relative difficulty of evaluating the quality of bright images according to the Weber-Fechner Law [1], [7]. For example, an image I may appear to show writing more poorly than another image J, when in reality I contains valuable information and shows writing much more clearly after remapping the grayscale values, e.g., adjusting brightness or contrast. In that sense, PC does not necessarily correspond to visual perception quality, but rather measures the quality of underlying information.

(WP is supported by an Engineering and Physical Sciences Research Council Doctoral Training Partnership grant EP/W524633/1.)

In this paper, we introduce the notion of normalized potential contrast (NPC), which is based directly on PC, but is commensurate across applications as it does not vary with image format and is interpretable as an error rate of a binary classifier. We prove that NPC is equivalent to the total variation
distance [8] between the distributions of sampled foreground and background pixels, enabling generalization to continuous contexts. We also introduce new equalities for NPC that can make computation simpler and provide alternate interpretations of its value. One of these equalities allows us to define multi-class NPC, generalizing the notion of NPC to more than two classes.

Instead of binarizing an image, multi-class NPC segments an image by class. Examples from cultural heritage with more than two classes include palimpsest, bleed-through, the presence of multiple pigments, and similar problems commonly encountered in historical manuscripts.

Lastly, we provide code for PC, NPC, and their multiple-class generalizations at https://github.com/wallacepeaslee/Multiple-Class-Normalized-Potential-Contrast. We include implementations following both the algorithms described in [2] and [3] and implementations using the equalities from Section III.

Fig. 1. Left: an example image, where labeled background pixels are shown in blue and labeled foreground pixels are in yellow. Right: the binarization resulting from potential contrast, with a value of 254.983 for an 8-bit image, and a normalized potential contrast value of 0.996.

This paper is structured as follows. In Section II, we define NPC and demonstrate some of its advantages. In Section III, we introduce equalities that enable generalizations of NPC. In Section IV, we define multiple-class NPC. Finally, an example application to bleedthrough in historical manuscripts is explored in Section V.

II. POTENTIAL CONTRAST & NORMALIZATION

We define potential contrast (PC) loosely following [1]. Let $A=(a_1,\dots,a_n)$ and $B=(b_1,\dots,b_m)$ be labeled foreground and background pixel values, taken from an image with values from a set $X\subset\mathbb{R}$. We assume throughout this work that $X$ is finite and $|X|>1$, i.e., the image is not constant.
A measure of contrast, 'Clayness Minus Inkiness' (CMI), introduced and used to analyze historical documents in [9], is given by $\mathrm{CMI}(A,B)=\mu[A]-\mu[B]$, where $\mu[A]$ denotes the (discrete) mean of sampled foreground pixels and similarly $\mu[B]$ denotes the mean of sampled background pixels. Additionally, denote the set of functions from a set $X$ to itself by $G(X)=\{g:X\to X\}$. Then, we define $g(A)$ and $g(B)$ as the sampled foreground and background pixels after applying $g$. Therefore, if $A=(a_1,\dots,a_n)\in X^n$, then $g(A)=(g(a_1),\dots,g(a_n))\in X^n$. Following [1], potential contrast can be defined as
$$\mathrm{PC}_X(A,B):=\max_{g\in G(X)}\mathrm{CMI}(g(A),g(B)). \qquad (1)$$
Throughout this work, we use $P_A$ to denote the discrete distribution, i.e., the relative histogram of values, in $A$, and likewise for $P_B$. So, $\sum_{x\in X}P_A(x)=\sum_{x\in X}P_B(x)=1$ and $0\le P_A(x),P_B(x)\le 1$ for any $x\in X$. A solution for Eq. 1, namely an optimal grayscale transformation, is the binarization given in Proposition 1 of [1]:
$$g^{\mathrm{opt}}_{A,B}(x)=\begin{cases}\max(X) & \text{if } P_A(x)\ge P_B(x)\\ \min(X) & \text{if } P_A(x)<P_B(x).\end{cases} \qquad (2)$$
The definition of PC (and its solution) relies on the set $X$, which corresponds to an image's data type. To remove this dependence, we introduce a normalized version of PC.

Definition 1. Let $Y$ be the set of values that occur in either $A$ or $B$. Let $H(A,B)=\{h:Y\to\{0,1\}\}$. Then, the normalized potential contrast is
$$\mathrm{NPC}(A,B):=\max_{h\in H(A,B)}\mathrm{CMI}(h(A),h(B)). \qquad (3)$$
Following an argument analogous to Proposition 1 of [1], an optimal $h^{\mathrm{opt}}_{A,B}\in\arg\max_{h\in H(A,B)}\mathrm{CMI}(h(A),h(B))$ is the following binarization:
$$h^{\mathrm{opt}}_{A,B}(y)=\begin{cases}1 & \text{if } P_A(y)\ge P_B(y)\\ 0 & \text{if } P_A(y)<P_B(y).\end{cases} \qquad (4)$$
The manner in which PC relies on the range of values of an image format is formalized by the following lemma.

Lemma 2. For any injective $f:X\to T\subset\mathbb{R}$,
$$\mathrm{PC}_T(f(A),f(B))=\frac{\max(T)-\min(T)}{\max(X)-\min(X)}\,\mathrm{PC}_X(A,B).$$

Proof. For brevity, let $\bar t=\max(T)$, $\underline t=\min(T)$, $\bar x=\max(X)$, and $\underline x=\min(X)$. Additionally, let $g^{\mathrm{opt}}_T\in\arg\max_{g\in G(T)}\mathrm{CMI}((g\circ f)(A),(g\circ f)(B))$ as constructed in Eq. 2, and define a rescaling function $r:T\to X$ such that $r(\bar t)=\bar x$ and $r(\underline t)=\underline x$. Finally, define $h:=(r\circ g^{\mathrm{opt}}_T\circ f)\in G(X)$. Then,
$$(\bar t-\underline t)\,\mathrm{PC}_X(A,B)\ge(\bar t-\underline t)\big(\mu[h(A)]-\mu[h(B)]\big)=(\bar x-\underline x)\big(\mu[(g^{\mathrm{opt}}_T\circ f)(A)]-\mu[(g^{\mathrm{opt}}_T\circ f)(B)]\big)=(\bar x-\underline x)\,\mathrm{PC}_T(f(A),f(B)).$$
Since $f$ is injective, we can define a left inverse $f^\dagger:T\to X$ such that $(f^\dagger\circ f)(x)=x$ for all $x\in X$. Reusing the inequality above,
$$\mathrm{PC}_X((f^\dagger\circ f)(A),(f^\dagger\circ f)(B))\le\frac{\bar x-\underline x}{\bar t-\underline t}\,\mathrm{PC}_T(f(A),f(B)).$$
Since $A$ and $B$ have values in $X$, $(f^\dagger\circ f)(A)=A$ and $(f^\dagger\circ f)(B)=B$, so we have shown equality.

In particular, this lemma shows that PC scales linearly with the range of values in $X$, which is typically defined by image format. For example, an 8-bit image usually consists of integer values $\{0,1,\dots,255\}$, which is the setting for the original definition of PC. A 16-bit image might have values from $\{0,\dots,2^{16}-1\}$, in which case conversion to this format would change the PC by a factor of $\frac{2^{16}-1}{2^{8}-1}$. Or, if the 16-bit image is restricted to have values between 0 and 1, conversion would change PC by a factor of $1/255$.

III. PROPERTIES & EQUALITIES FOR NPC

Lemma 2 shows how PC scales with the range of values given by an image format. An analogous statement for NPC shows it is invariant to this range of possible values.

Lemma 3. For any injective $f:X\to T\subset\mathbb{R}$, $\mathrm{NPC}(f(A),f(B))=\mathrm{NPC}(A,B)$.

Proof. Let $Y$ be the set of values in $A$ or $B$.
If $h^{\mathrm{opt}}_{A,B}\in\arg\max_{h\in H(A,B)}\mathrm{CMI}(h(A),h(B))$ and $h^{\mathrm{opt}}_{f(A),f(B)}\in\arg\max_{h\in H(f(A),f(B))}\mathrm{CMI}((h\circ f)(A),(h\circ f)(B))$ are as defined by Eq. 4, then $h^{\mathrm{opt}}_{A,B}(y)=(h^{\mathrm{opt}}_{f(A),f(B)}\circ f)(y)$ for all $y\in Y$. So, $\mu[h^{\mathrm{opt}}_{A,B}(A)]=\mu[(h^{\mathrm{opt}}_{f(A),f(B)}\circ f)(A)]$ and $\mu[h^{\mathrm{opt}}_{A,B}(B)]=\mu[(h^{\mathrm{opt}}_{f(A),f(B)}\circ f)(B)]$, which proves the lemma.

A corollary of Lemma 2 gives an equivalent formulation of NPC in terms of PC with the appropriate rescaling.

Corollary 4. Given an image with values from $X$,
$$\mathrm{NPC}(A,B)=\frac{\mathrm{PC}_X(A,B)}{\max(X)-\min(X)}.$$

Proof. Apply Lemma 2 with $f(x)=\frac{x-\min(X)}{\max(X)-\min(X)}$. Because $f$ is injective, $\mathrm{PC}_{f(X)}(f(A),f(B))=\frac{\mathrm{PC}_X(A,B)}{\max(X)-\min(X)}$. Since $\min(f(X))=0$ and $\max(f(X))=1$, we see that $\mathrm{PC}_{f(X)}(f(A),f(B))=\mathrm{NPC}(A,B)$.

This proof shows that NPC is directly equivalent to normalizing an image (with $f(x)=\frac{x-\min(X)}{\max(X)-\min(X)}$) and then computing PC. Because of the direct relationship between PC and NPC, many of the observations in [1] also hold for NPC (e.g., symmetry in the arguments $A$ and $B$, or the fact that NPC can be considered an equivalence relation among images). While PC always takes values in $[0,\max(X)-\min(X)]$, NPC has values in $[0,1]$. Because NPC and PC are so similar, we only use NPC for the remainder of this paper. However, analogous results for PC can be obtained with the appropriate scaling according to Corollary 4.

Next, we prove some equalities for NPC, including its equivalence to the total variation distance [8] between probability measures, which we denote by $\delta_{tv}(P_A,P_B)$, thinking of $P_A$ and $P_B$ as probability mass functions. In discrete cases like ours, $\delta_{tv}(P_A,P_B)$ can be defined as a kind of $\ell_1$ distance between $P_A$ and $P_B$, i.e.,
$$\delta_{tv}(P_A,P_B):=\frac{1}{2}\|P_A-P_B\|_1:=\frac{1}{2}\sum_{x\in X}|P_A(x)-P_B(x)|.$$
This equality and our other equalities characterizing NPC are summarized in the following theorem.

Theorem 5. Given $A$ and $B$, with associated distributions $P_A$ and $P_B$ defined on some set $X$,
$$\mathrm{NPC}(A,B)=1-\sum_{x\in X}\min(P_A(x),P_B(x)) \qquad (5)$$
$$=\sum_{x\in X}\max(P_A(x),P_B(x))-1 \qquad (6)$$
$$=\frac{1}{2}\|P_A-P_B\|_1 \qquad (7)$$
$$=\delta_{tv}(P_A,P_B). \qquad (8)$$

Proof. From Eq. 4, we can write $h^{\mathrm{opt}}_{A,B}(x)=\mathbb{1}[P_A(x)\ge P_B(x)]$, where $\mathbb{1}$ is the indicator function. With this notation,
$$\mu[h^{\mathrm{opt}}(A)]=\sum_{x\in X}P_A(x)\,\mathbb{1}[P_A(x)\ge P_B(x)],$$
and likewise for $\mu[h^{\mathrm{opt}}(B)]$. For brevity, we will use the notation $M_{A,B}(x)=\max(P_A(x),P_B(x))$ and $m_{A,B}(x)=\min(P_A(x),P_B(x))$. For any given $x\in X$, whether $P_A(x)\ge P_B(x)$ or $P_A(x)<P_B(x)$, it holds that
$$(P_A(x)-P_B(x))\,\mathbb{1}[P_A(x)\ge P_B(x)]=P_A(x)-m_{A,B}(x).$$
Applying this to the definition of NPC yields Eq. 5:
$$\mathrm{NPC}(A,B)=\sum_{x\in X}(P_A(x)-P_B(x))\,\mathbb{1}[P_A(x)\ge P_B(x)]=1-\sum_{x\in X}m_{A,B}(x).$$
Similarly, since $\sum_{x\in X}\big(M_{A,B}(x)+m_{A,B}(x)\big)=\sum_{x\in X}\big(P_A(x)+P_B(x)\big)=2$,
$$\mathrm{NPC}(A,B)=1-\sum_{x\in X}m_{A,B}(x)=1-\Big(2-\sum_{x\in X}M_{A,B}(x)\Big)=\sum_{x\in X}M_{A,B}(x)-1.$$
Using the fact that
$$M_{A,B}(x)-m_{A,B}(x)=|P_A(x)-P_B(x)|, \qquad (9)$$
we can show
$$\mathrm{NPC}(A,B)=1-\sum_{x\in X}m_{A,B}(x)=1-\sum_{x\in X}\big(M_{A,B}(x)-|P_A(x)-P_B(x)|\big)=1+\frac{1}{2}\|P_A-P_B\|_1-\sum_{x\in X}\Big(M_{A,B}(x)-\frac{1}{2}|P_A(x)-P_B(x)|\Big).$$
For Eq. 7, we check that the last sum above is equal to 1 by again applying Eq. 9, i.e.,
$$1=\sum_{x\in X}\Big(\frac{1}{2}M_{A,B}(x)+\frac{1}{2}m_{A,B}(x)\Big)=\sum_{x\in X}\Big(M_{A,B}(x)-\frac{1}{2}|P_A(x)-P_B(x)|\Big).$$
Alternatively, Eq. 7 also follows from Eq. 5 using Scheffé's Theorem [8], [10]. The final equality given by Eq. 8, equivalence to the total variation distance, follows from its definition, taking $P_A$ and $P_B$ to be appropriate probability mass functions. Eq.
5 reflects an interpretation of NPC as an accuracy rate. In particular, we can consider $h^{\mathrm{opt}}_{A,B}$ from Eq. 4 (or $g^{\mathrm{opt}}_{A,B}$ from Eq. 2) as binary classification functions. Then, the portion of pixels incorrectly classified as belonging to class $B$, but which should belong to class $A$, is $e_A:=\sum_{x\in X}P_A(x)\,\mathbb{1}(P_A(x)<P_B(x))$. With a similar expression for $e_B$, the portion of pixels from $B$ that are misclassified as belonging to $A$, we see that $e_A+e_B=\sum_{x\in X}\min(P_A(x),P_B(x))$. So, the accuracy of our binary classification is $\mathrm{NPC}(A,B)=1-e_A-e_B$. The conception of PC in this way first appeared in [1], where a similar argument described PC as a scaled error rate. This scaling directly corresponds to the scaling between PC and NPC.

Eq. 6 is used in Section IV to generalize NPC to multiple classes. Eq. 7 gives a potentially faster way of computing NPC than the 4-step algorithms of [1], [2]. We can directly compute (and normalize) the histograms of values for $A$ and $B$ before taking their difference, giving a time complexity of $O(|A|+|B|+|X|)$. This is similar to the time complexity in [1] (where $|X|$ is considered constant). However, by not computing the means $\mu[g^{\mathrm{opt}}_{A,B}(A)]$ and $\mu[g^{\mathrm{opt}}_{A,B}(B)]$, we avoid iterating over $A$ and $B$ a second time and can reduce computation.

Fig. 2. Left: The example image from Figure 1, now with three different classes labeled. Labels for one ink are in yellow, for a second ink are in orange, and for the background are in blue. Top Right: a multi-class NPC result, where the colors reflect the class segmentation. The multi-class NPC has a value of 0.885.

The equivalence to total variation in Eq. 8 is useful beyond proving that NPC (and PC) is a metric. Most importantly, NPC can instead be defined as a total variation distance, which allows us to extend NPC to the continuous case by setting $\mathrm{NPC}(A,B)=\delta_{tv}(P_A,P_B)$ when $P_A$ and $P_B$ are defined on continuous domains. Additionally, the total variation distance is a special kind of integral probability metric, which has the more general structure
$$D_F(P_A,P_B)=\sup_{f\in F}|E_{Y\sim P_A}f(Y)-E_{Z\sim P_B}f(Z)|.$$
This parallels the difference of discrete means $\mu[h(A)]-\mu[h(B)]$ we see in NPC. The total variation distance is the integral probability metric with $F=\{f:X\to\{0,1\}\}$, mirroring the functions contained in $H(A,B)$.

IV. MULTIPLE CLASS POTENTIAL CONTRAST

A generalized multi-class NPC allows us to compute NPC when more than two classes are present. This could arise in various contexts, e.g., manuscript bleedthrough, palimpsest, or different inks. Given $n$ classes, NPC could be applied to all $\binom{n}{2}$ pairs, yielding $\binom{n}{2}$ values and binarizations. Alternatively, for cases where interpreting all of these relationships simultaneously is of interest, we extend NPC to multiple classes by generalizing Eq. 6 from Theorem 5. Given $n$ classes, we denote $A_i=(a_1,\dots,a_{m_i})$ as the labeled samples for a class $i\in\{1,\dots,n\}$, each with a discrete distribution $P_{A_i}$.

Definition 6. We define the multi-class NPC of $A_1,\dots,A_n$ as
$$\mathrm{NPC}(A_1,\dots,A_n):=\frac{-1+\sum_{x\in X}\max_{1\le i\le n}P_{A_i}(x)}{n-1}. \qquad (10)$$
The scaling ensures that $\mathrm{NPC}(A_1,\dots,A_n)\in[0,1]$. For pixel-wise segmentation, we assign a pixel with value $x$ to the class $A_i$ when $i\in\arg\max_{1\le i\le n}P_{A_i}(x)$, i.e., to the class that has the highest portion of its sampled pixels at that value $x$.
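Definition 6 and the pixel-wise segmentation rule translate directly into code. A minimal NumPy sketch (the function names `multiclass_npc` and `segment` are ours, not taken from the paper's repository):

```python
import numpy as np

def multiclass_npc(samples, values):
    """Multi-class NPC of Definition 6 (Eq. 10).

    samples: list of 1-D arrays of labeled pixel values, one per class.
    values:  1-D array of all possible pixel values (the set X).
    """
    n = len(samples)
    # Relative histogram P_{A_i} of each class over X.
    P = np.stack([
        np.array([np.mean(a == x) for x in values]) for a in samples
    ])
    return (P.max(axis=0).sum() - 1.0) / (n - 1)

def segment(pixel_values, samples, values):
    """Assign each pixel to the class with the largest P_{A_i} at its value."""
    P = np.stack([
        np.array([np.mean(a == x) for x in values]) for a in samples
    ])
    col = {x: j for j, x in enumerate(values)}
    return np.array([P[:, col[x]].argmax() for x in pixel_values])

# Two classes with disjoint supports are perfectly separable: NPC = 1.
a1, a2 = np.array([0, 0, 1]), np.array([2, 2, 3])
vals = np.array([0, 1, 2, 3])
npc = multiclass_npc([a1, a2], vals)   # 1.0
```

With $n=2$ the definition reduces to $\sum_x\max(P_A(x),P_B(x))-1$, i.e., Eq. 6, and identical class distributions give the minimum value 0.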
An example of such a segmentation is given in Figure 2.

Remark 7. Multi-class PC is defined by scaling multi-class NPC, as in the binary case:
$$\mathrm{PC}(A_1,\dots,A_n):=(\max(X)-\min(X))\cdot\mathrm{NPC}(A_1,\dots,A_n).$$

We can now state an analog of Theorem 5 for multi-class NPC.

Theorem 8. Suppose we have $n$ classes with sampled pixels $A_i$ and distributions $P_{A_i}$ defined on $X$. For a given $x\in X$, let $P_{A^{(i)}_x}$ be a reordering of the $P_{A_i}$ from smallest to largest, i.e., $\min_{1\le i\le n}P_{A_i}(x)=P_{A^{(1)}_x}(x)\le P_{A^{(2)}_x}(x)\le\dots\le P_{A^{(n)}_x}(x)=\max_{1\le i\le n}P_{A_i}(x)$. Then,
$$\mathrm{NPC}(A_1,\dots,A_n)=1-\frac{1}{n-1}\sum_{x\in X}\sum_{i=1}^{n-1}P_{A^{(i)}_x}(x) \qquad (11)$$
$$=\frac{1}{n(n-1)}\sum_{x\in X}\sum_{i=1}^{n-1}\big(P_{A^{(n)}_x}(x)-P_{A^{(i)}_x}(x)\big). \qquad (12)$$
The proof is similar to Theorem 5. We note that Eqs. 10, 11, and 12 are analogs of Eqs. 6, 5, and 7, respectively. Additionally, we can rewrite Eq. 12 as
$$\mathrm{NPC}(A_1,\dots,A_n)=\frac{1}{n}\sum_{x\in X}\Big(P_{A^{(n)}_x}(x)-\frac{1}{n-1}\sum_{i=1}^{n-1}P_{A^{(i)}_x}(x)\Big).$$
We can interpret this as a mean, across each possible value $x$, of the difference between the distribution with the largest portion of pixels at value $x$ and the average of the remaining distributions.

We can also interpret multi-class NPC as an error rate, analogous to Section III. In particular, if $e_i$ gives the proportion of pixels from $A_i$ that are misclassified as belonging to a different class $j\ne i$, then
$$\mathrm{NPC}(A_1,\dots,A_n)=1-\frac{1}{n-1}\sum_{i=1}^n e_i.$$
This follows from Eq. 11: the sum $\sum_{i=1}^{n-1}P_{A^{(i)}_x}(x)$ gives the total error rate across all classes for a pixel value $x$, because our pixel-wise classification assigns $x$ to class $i$ when $P_{A_i}$ is maximal among all distributions evaluated at $x$.

V. APPLICATION

To illustrate the utility of multi-class NPC, we apply it to a photographed page from the Caius Choirbook (GB-CGC 667/760), a music manuscript dated to the late 1520s and currently at Gonville and Caius College, Cambridge [11]. Like many historical manuscripts, this choirbook exhibits substantial bleedthrough, showing ink from the back of the page as well as the front. We apply NPC with three classes: written notation on the open page, bleedthrough, and background. The result, shown in Figure 3, is a strong multi-class NPC value, suggesting that the labeled pixels are easily separated using only their intensity values. This is reflected in the segmentation, where foreground text is easily isolated, allowing us to remove the bleedthrough from the page and visualize the music without undesired noise from the background.

Fig. 3. From top to bottom. 1. A grayscale image showing an excerpt from GB-Cgc 667/760, page 19, with foreground notation (red), bleedthrough (green), and background (blue) labeled. 2. A three-class NPC result with foreground in black, bleedthrough in gray, and background in white. The three-class NPC value is 0.965. The two-class NPC between foreground notation and background is nearly 1, the two-class NPC between foreground and bleedthrough is 0.996, and the two-class NPC between bleedthrough and background is 0.934. 3.
The three-class NPC segmentation showing only foreground notation. Original imaging by DIAMM (diamm.ac.uk). Image use by kind permission of the Master and Fellows of Gonville and Caius College, Cambridge.

The two-class NPC values also reflect what we expect: the foreground is easy to separate from the background and from bleedthrough, but separating background from bleedthrough is more difficult. However, for many applications a single value is more useful, accounting for all classes rather than only pairwise comparisons.

The imperfections in Figure 3 also exemplify some of the challenges of using NPC. For example, because NPC is a global measure and relies only on the histogram of grayscale values (discarding spatial information), changes in brightness across a page, such as those from uneven lighting or dirt, can result in a lower multi-class NPC when that may not be desired or expected. Finally, the NPC value also heavily depends on the labeled pixels. While labeling the pixels can take time, usually only a small number of pixels must be labeled in each class. This labeling also allows a user or domain expert to adapt the measure to a particular task.

VI. CONCLUSION

Potential contrast is an image contrast and quality measure that relies on task-dependent labels. We introduce a scaled version that is commensurate across image formats and applications, which retains many of the desirable properties of potential contrast. We prove equalities that support different interpretations of normalized potential contrast and show it is equivalent to the total variation distance on the probability distributions of sampled pixels, allowing generalization to continuous domains. We use another equality to define normalized potential contrast when there are more than two classes. Such generalizations enable applications to a much broader array of contexts, including where potential contrast has already found success. Multi-class NPC can provide a valuable, interpretable, task-adaptive image quality and contrast measure in cultural heritage and beyond.

REFERENCES

[1] A. Shaus, S. Faigenbaum-Golovin, B. Sober, and E. Turkel, "Potential contrast–a new image quality measure," Electronic Imaging, vol. 29, pp. 52–58, 2017.
[2] S. Faigenbaum, B. Sober, A. Shaus, M. Moinester, E. Piasetzky, G. Bearman, M. Cordonsky, and I. Finkelstein, "Multispectral images of ostraca: acquisition and analysis," Journal of Archaeological Science, vol. 39, no. 12, pp. 3581–3590, 2012.
[3] B. Sober, S. Faigenbaum, I. Beit-Arieh, I. Finkelstein, M. Moinester, E. Piasetzky, and A. Shaus, "Multispectral imaging as a tool for enhancing the reading of ostraca," Palestine Exploration Quarterly, vol. 146, no. 3, pp. 185–197, 2014.
[4] S. Faigenbaum-Golovin, A. Shaus, and B. Sober, "Computational handwriting analysis of ancient hebrew inscriptions—a survey," IEEE BITS the Information Theory Magazine, vol. 2, no. 1, pp. 90–101, 2022.
[5] S. Faigenbaum-Golovin, A. Mendel-Geberovich, A. Shaus, B. Sober, M. Cordonsky, D. Levin, M. Moinester, B. Sass, E. Turkel, E. Piasetzky, et al., "Multispectral imaging reveals biblical-period inscription unnoticed for half a century," PLoS One, vol. 12, no. 6, p. e0178400, 2017.
[6] S. Faigenbaum, B. Sober, I. Finkelstein, M. Moinester, E. Piasetzky, A. Shaus, and M. Cordonsky, "Multispectral imaging of two hieratic inscriptions from qubur el-walaydah," Ägypten und Levante/Egypt and the Levant, pp. 349–353, 2014.
[7] G. T. Fechner, "Elements of psychophysics, 1860," Readings in the history of psychology, pp. 206–213, East Norwalk, CT, US: Appleton-Century-Crofts, 1948.
[8] A. B. Tsybakov, Nonparametric estimators, pp. 1–76, New York, NY: Springer New York, 2009.
[9] A. Shaus, E. Turkel, and E. Piasetzky, "Quality evaluation of facsimiles of hebrew first temple period inscriptions," in 2012 10th IAPR International Workshop on Document Analysis Systems, pp. 170–174, 2012.
[10] H. Scheffé, "A useful convergence theorem for probability distributions," The Annals of Mathematical Statistics, vol. 18, no. 3, pp. 434–438, 1947.
[11] Digital Image Archive of Medieval Music, "GB-Cgc MS 667/760 (Caius Choirbook)." https://www.diamm.ac.uk/sources/225/. Accessed: 2025-03-13.
arXiv:2505.01825v1 [math.ST] 3 May 2025

Asymptotic representations for Spearman's footrule correlation coefficient

Liqi Xia, Sami Ullah and Li Guan*
School of Mathematics, Statistics and Mechanics, Beijing University of Technology, Beijing 100124, China
E-mail address (correspondence): guanli@bjut.edu.cn (Li Guan)

Abstract. In order to address the theoretical challenges arising from the dependence structure of ranks in Spearman's footrule correlation coefficient, we propose two asymptotic representations under the null hypothesis of independence. The first representation simplifies the dependence structure by replacing empirical distribution functions with their population counterparts. The second representation leverages the Hájek projection technique to decompose the initial form into a sum of independent components, thereby rigorously justifying asymptotic normality. A simulation study demonstrates the appropriateness of these asymptotic representations and their potential as a foundation for extending nonparametric inference techniques, such as large-sample hypothesis testing and confidence intervals.

Keywords: Asymptotic representation; Spearman's footrule; Rank correlation.

1 Introduction

Nonparametric measures of association play a pivotal role in statistical inference, particularly when data violate parametric assumptions or exhibit complex dependencies. Among these measures, Spearman's footrule rank correlation coefficient (Spearman, 1906), as a rank-based metric, has garnered renewed interest due to its robustness and interpretability (Bukovšek and Mojškerc, 2022; Chen et al., 2023; Pérez et al., 2023). Adding up the absolute differences between two sets of ranks, Spearman's footrule quantifies disarray between permutations, offering a natural alternative to Euclidean-based metrics like Spearman's rho. Furthermore, it also possesses an intuitive population version.
For continuous random variables $X$ and $Y$ with an underlying copula $C$, the population version of Spearman's footrule is defined as
$$\phi_C = 1 - 3\int_{[0,1]^2} |u-v| \, dC(u,v),$$
where $u$ and $v$ represent the marginal distribution functions of $X$ and $Y$, respectively. Under independence, $\phi_C = 0$, while perfect agreement or disagreement yields $\phi_C = 1$ or $\phi_C = -\frac{1}{2}$ in the bivariate case (Nelsen, 2006).

Early work by Diaconis and Graham (1977) established the asymptotic normality of Spearman's footrule correlation coefficient under independence using combinatorial arguments. Subsequent studies, such as Sen and Salama (1983), leveraged Markov chain properties and martingale theory to derive similar results, emphasizing its significance in permutation-based frameworks. Despite these advances, critical gaps persist. While the rank-based robustness of Spearman's footrule enhances its applicability in statistical inference, the inherent dependence structure of ranks has historically complicated its theoretical advancement. Addressing this problem, we derive two distinct asymptotic representations under the null hypothesis of independence. By replacing the empirical distribution functions with their population counterparts, an initial asymptotic representation for Spearman's footrule is established. This approach circumvents the complexities introduced by rank dependencies, directly linking the statistic to its limiting behavior. Building on the first result, the Hájek projection technique is further employed to decompose Spearman's footrule into a linear combination of independent components. This decomposition not only reinforces the asymptotic normality conclusion but also elucidates the role of rank transformations in the statistical variance structure.
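The three reference values of $\phi_C$ quoted above (0 under independence, 1 under perfect agreement, $-\tfrac12$ under perfect disagreement) can be checked by Monte Carlo, since under the independence copula $dC(u,v)=du\,dv$ and $E|U-V|=\tfrac13$ for independent uniforms. A small sketch of ours:

```python
import numpy as np

# phi_C = 1 - 3 * E|U - V| for (U, V) distributed according to C.
rng = np.random.default_rng(1)
u = rng.uniform(size=1_000_000)
v = rng.uniform(size=1_000_000)

phi_indep = 1 - 3 * np.abs(u - v).mean()            # independence: ~ 0
phi_agree = 1 - 3 * np.abs(u - u).mean()            # comonotone V = U: exactly 1
phi_disagree = 1 - 3 * np.abs(u - (1 - u)).mean()   # countermonotone V = 1 - U: ~ -1/2
```

The countermonotone case uses $E|2U-1|=\tfrac12$, giving $1-\tfrac32=-\tfrac12$, matching the stated lower bound in the bivariate case.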
2 Asymptotic representations

In this context, the joint distribution function of the bivariate continuous random variable $(X,Y)$ is denoted by $P(x,y)$, and their respective marginal distribution functions are represented by $F(x)$ and $G(y)$. A finite sample of size $n$, comprising $\{(X_1,Y_1),\dots,(X_n,Y_n)\}$, is obtained independently and identically distributed (i.i.d.) from $(X,Y)$. Let $R_i=\sum_{k=1}^n I(X_k\le X_i)$ be the rank of $X_i$ with indicator function $I(\cdot)$, $i=1,\dots,n$. Similarly, $S_i=\sum_{k=1}^n I(Y_k\le Y_i)$ is the rank of $Y_i$. Then, Spearman's footrule rank correlation coefficient is given by
$$\phi_n := \phi\big(\{(X_i,Y_i)\}_{i=1}^n\big) = 1 - \frac{3}{n^2-1}\sum_{i=1}^n |R_i-S_i|. \qquad (2.1)$$
Under the assumption of independence between $X$ and $Y$, its expectation and variance are as follows:
$$E\phi_n = 0, \qquad \mathrm{Var}(\phi_n) = \frac{2n^2+7}{5(n+1)(n-1)^2}.$$
Although the existence of ranks makes the tests based on $\phi_n$ fully distribution-free, i.e., not reliant on the underlying distribution of the data, the dependence among ranks in practical applications complicates the derivation of certain asymptotic theories under independence between $X$ and $Y$. Below, we introduce two asymptotic representations of $\phi_n$ to address this issue. Through intuitive and straightforward calculation, $\phi_n$ can be rewritten as
$$\phi_n = \frac{3n^2}{n^2-1}\left(\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n |F_n(X_i)-G_n(Y_j)| - \frac{1}{n}\sum_{i=1}^n |F_n(X_i)-G_n(Y_i)|\right),$$
which is composed of components that involve the empirical distribution functions $F_n(x)=\frac{1}{n}\sum_{k=1}^n I(X_k\le x)$ and $G_n(y)=\frac{1}{n}\sum_{k=1}^n I(Y_k\le y)$ of $X$ and $Y$ for any $x\in\mathbb{R}$ and $y\in\mathbb{R}$. A natural inclination is to replace these two empirical functions with their population counterparts $F$ and $G$, but there are still remaining terms that need to be addressed. Notably, $F(X)$ and $G(Y)$ follow a uniform distribution over the interval $[0,1]$. This ultimately induces the following theorem. The specific proof, involving empirical processes, is presented in Appendix A.
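The equivalence between (2.1) and the rewriting in terms of $F_n$ and $G_n$ can be verified numerically (it rests on the identity $\sum_{i,j}|R_i-S_j|=n(n^2-1)/3$ for any two permutations of ranks). A minimal sketch with synthetic continuous data; the variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = rng.normal(size=n)

# Ranks R_i, S_i (no ties almost surely for continuous data).
R = np.argsort(np.argsort(x)) + 1
S = np.argsort(np.argsort(y)) + 1

# Spearman's footrule, Eq. (2.1).
phi_n = 1 - 3.0 / (n**2 - 1) * np.abs(R - S).sum()

# Equivalent form via empirical distribution functions:
# F_n(X_i) = R_i / n and G_n(Y_i) = S_i / n.
Fn = R / n
Gn = S / n
phi_alt = 3 * n**2 / (n**2 - 1) * (
    np.abs(Fn[:, None] - Gn[None, :]).mean()   # (1/n^2) double sum over i, j
    - np.abs(Fn - Gn).mean()                   # (1/n) diagonal sum
)
assert np.isclose(phi_n, phi_alt)
```

The double sum contributes exactly 1, since $\sum_{i,j}|i-j|/n^3=(n^2-1)/(3n^2)$, leaving the diagonal term to reproduce (2.1).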
Theorem 2.1 (The first asymptotic representation). Under the assumption of independence between $X$ and $Y$, $\phi_n$ is asymptotically identically distributed with the following form,
$$\widetilde{\phi}_n=\frac{3n^2}{n^2-1}\left(\frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}|U_i-V_j|-\frac{1}{n}\sum_{i=1}^{n}|U_i-V_i|\right), \tag{2.2}$$
where $U_1,\dots,U_n$ and $V_1,\dots,V_n$ are i.i.d. random variables from the uniform distribution $U(0,1)$, and $U_i$ and $V_i$ are also independent for $i=1,\dots,n$. Additionally,
$$\mathrm{E}\,\widetilde{\phi}_n=0,\qquad \mathrm{Var}(\widetilde{\phi}_n)=\frac{2n^2}{5(n+1)^2(n-1)}.$$

To obtain an even simpler form, we now apply the H\'ajek projection to $\widetilde{\phi}_n$, resulting in the following theorem.

Theorem 2.2 (The second asymptotic representation). Under the assumption of independence between $X$ and $Y$, the H\'ajek asymptotic representation of $\widetilde{\phi}_n$ is
$$\widehat{\phi}_n=\frac{3}{n+1}\sum_{i=1}^{n}\left(\frac{2}{3}-|U_i-V_i|-U_i(1-U_i)-V_i(1-V_i)\right), \tag{2.3}$$
with expectation and variance
$$\mathrm{E}\,\widehat{\phi}_n=0,\qquad \mathrm{Var}(\widehat{\phi}_n)=\frac{2n}{5(n+1)^2}.$$

By utilizing Theorem 2.2 in conjunction with Theorem 2.1, the limiting null distribution of $\phi_n$ can be readily obtained.

Theorem 2.3 (The limiting null distribution). Under the assumption of independence between $X$ and $Y$, $\sqrt{n}\phi_n$, $\sqrt{n}\widetilde{\phi}_n$ and $\sqrt{n}\widehat{\phi}_n$ converge weakly to the same normal distribution with mean $0$ and variance $\frac{2}{5}$.

Remark 1. In the existing literature, there are various approaches for deriving the limiting null distribution of $\sqrt{n}\phi_n$. Diaconis and Graham (1977) established its normality by utilizing the combinatorial central limit theorem developed by Hoeffding (1951). In Sen and Salama (1983), martingale techniques are incorporated into the study of the asymptotic normality of Spearman's footrule. Shi et al. (2023) derived the rate of convergence of the standardized Spearman's footrule to the standard normal distribution based on the combinatorial central limit theorem and a Cram\'er-type moderate deviation result (Section 4.1 of Chen et al., 2013). Shi et al. (2024) obtained an alternative form of the convergence rate using the Edgeworth expansion (Small, 2010), and these two results also naturally lead to the limiting null distribution of Spearman's footrule. It is evident that our approach differs significantly from these methods and serves as the basis for further theoretical
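The variance formulas in Theorems 2.1 and 2.2 are easy to probe by Monte Carlo. The hedged sketch below (our own illustration; sample size, seed, and replication count are arbitrary choices) simulates $\widetilde{\phi}_n$ and $\widehat{\phi}_n$ directly from uniform draws and checks that the variances of the $\sqrt{n}$-scaled statistics are near the common limit $2/5$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 4000
tilde = np.empty(reps)
hat = np.empty(reps)

for r in range(reps):
    U = rng.uniform(size=n)
    V = rng.uniform(size=n)
    # Eq. (2.2): first asymptotic representation (double sum over all pairs).
    tilde[r] = 3.0 * n**2 / (n**2 - 1) * (
        np.abs(U[:, None] - V[None, :]).mean() - np.abs(U - V).mean()
    )
    # Eq. (2.3): Hajek projection of (2.2), a sum of independent terms.
    hat[r] = 3.0 / (n + 1) * (
        2.0 / 3.0 - np.abs(U - V) - U * (1 - U) - V * (1 - V)
    ).sum()

var_tilde = (np.sqrt(n) * tilde).var()  # theory: 2n^3/(5(n+1)^2(n-1)), close to 2/5
var_hat = (np.sqrt(n) * hat).var()      # theory: 2n^2/(5(n+1)^2), close to 2/5
```

For $n=100$ the exact scaled variances are about $0.396$ and $0.392$, so both Monte Carlo estimates should land near $0.4$, consistent with Theorem 2.3.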
investigations into Spearman's footrule.

3 Simulation study

A simple simulation is conducted in this section to demonstrate the asymptotic behavior of $\sqrt{n}\phi_n$, $\sqrt{n}\widetilde{\phi}_n$ and $\sqrt{n}\widehat{\phi}_n$. Specifically, let $X$ and $Y$ be drawn from the standard normal distribution and the standard uniform distribution, respectively, with sample sizes set to 10, 20, 30, and 100. The simulation was run 100,000 times, and the results are shown in Figure 1. As can be seen from Figure 1, even with a sample size of $n=30$, the distributions of $\sqrt{n}\widetilde{\phi}_n$ and $\sqrt{n}\widehat{\phi}_n$, as well as $\sqrt{n}\phi_n$, are very close to the theoretical limiting distribution (the normal distribution with mean 0 and variance $\frac{2}{5}$). However, the best approximation is provided by $\sqrt{n}\widehat{\phi}_n$, followed by $\sqrt{n}\widetilde{\phi}_n$, and the worst by $\sqrt{n}\phi_n$. This also reflects, to some extent, the rate at which the distributions of these three representations converge to the limiting distribution. It is worth noting that when the sample size is extremely small ($n=10$), the performance of $\sqrt{n}\phi_n$ is very poor, as shown in the first subfigure of Figure 1. This is because $\phi_n$ is built from permutations of ranks: although the permutations are numerous ($10!$ of them), the calculated values of $\sqrt{n}\phi_n$ exhibit a large number of repetitions. Even with a very large number of simulation repetitions (100,000), there are relatively few distinct values (only a few dozen) available for kernel density estimation.

4 Discussions

Spearman's footrule, despite its robustness, faces theoretical complexities due to rank dependencies. The two asymptotic representations address this issue under independence. The first representation simplifies the statistic by using the population distribution functions. The subsequent use of the H\'ajek projection decomposes the footrule into independent components, establishing its asymptotic normality and thus enhancing its theoretical understanding.
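A scaled-down version of this simulation study can be sketched as follows (our own illustration: fewer replications than the paper's 100,000, a single sample size, and the kernel density plots omitted). It checks that the sample mean and variance of $\sqrt{n}\phi_n$ under independence are close to the limiting values $0$ and $2/5$ for $n=100$.

```python
import numpy as np

def footrule(X, Y):
    """Spearman's footrule phi_n from definition (2.1)."""
    n = len(X)
    R = np.argsort(np.argsort(X)) + 1  # ranks of X (no ties for continuous data)
    S = np.argsort(np.argsort(Y)) + 1  # ranks of Y
    return 1.0 - 3.0 / (n**2 - 1) * np.abs(R - S).sum()

rng = np.random.default_rng(2)
n, reps = 100, 5000

# X standard normal, Y standard uniform, as in the simulation study.
stats = np.array([
    np.sqrt(n) * footrule(rng.normal(size=n), rng.uniform(size=n))
    for _ in range(reps)
])

# Exact null variance of sqrt(n)*phi_n is n*(2n^2+7)/(5(n+1)(n-1)^2), close to 2/5.
print(stats.mean(), stats.var())
```

Plotting a Gaussian-kernel density estimate of `stats` against the $N(0, 0.4)$ density reproduces the qualitative comparison shown in Figure 1.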
Acknowledgements

This work was supported by the National Social Science Fund of China (No. 21BTJ041).

Declarations

The authors declare that they have no conflict of interest.

A Appendix

Lemma A.1. Given that $U_1$, $V_1$, and $V_2$ are independently and identically distributed from the uniform distribution $U(0,1)$, through simple integral calculations the following facts can be easily deduced:
$$\mathrm{E}|U_1-V_1|=\frac{1}{3},\quad \mathrm{E}(|U_1-V_1|\mid U_1)=\frac{1}{2}-U_1(1-U_1),\quad \mathrm{E}(U_1(1-U_1))=\frac{1}{6},$$
$$\mathrm{Var}(|U_1-V_1|)=\frac{1}{18},\quad \mathrm{Var}(U_1(1-U_1))=\frac{1}{180},$$
$$\mathrm{Cov}(|U_1-V_1|,U_1(1-U_1))=-\frac{1}{180},\quad \mathrm{Cov}(|U_1-V_1|,|U_1-V_2|)=\frac{1}{180}.$$

Proof of Theorem 2.1. Let $(X_1,Y_1),\dots,(X_n,Y_n)\in\mathcal{Z}=\mathbb{R}\times\mathbb{R}$ be a random sample from a probability distribution $P$ defined on a measurable space $(\mathcal{Z},\mathcal{A})$. We denote two empirical distributions as $P_n=n^{-1}\sum_{i=1}^{n}\delta_{(X_i,Y_i)}$ and $P'_n=n^{-2}\sum_{i=1}^{n}\sum_{j=1}^{n}\delta_{(X_i,Y_j)}$, where $\delta_{(x,y)}$ represents the probability distribution degenerate at the point $(x,y)$.

[Figure 1: Density curves of $\sqrt{n}\phi_n$, $\sqrt{n}\widetilde{\phi}_n$ and $\sqrt{n}\widehat{\phi}_n$ for $n=10,20,30,100$, estimated using kernel density estimation with a Gaussian kernel, where the solid line represents a normal curve with a mean of 0 and a variance of 0.4.]

For a given measurable function $f:\mathcal{Z}\mapsto\mathbb{R}$, we use $P_n f$ and $P'_n f$ to denote the expectations of $f$ under the empirical measures