to consider a far broader class of low coordinate degree (LCD) algorithms. In fact we will consider a slightly broader class of $\mathbf{R}^N$-valued LCD algorithms, supplemented by an additional rounding scheme into $\Sigma_N$. The randomized rounding $\tilde{\mathcal{A}}$ of $\mathcal{A}$ is defined by $\tilde{\mathcal{A}} \coloneqq \mathsf{round}_{\vec{U}}(\mathcal{A})$. Here
$$\mathsf{round}_{\vec{U}}(x_1, \dots, x_N) = (\mathsf{round}_{U_1}(x_1), \dots, \mathsf{round}_{U_N}(x_N)) = (\mathsf{sign}(x_1 - U_1), \dots, \mathsf{sign}(x_N - U_N))$$
for i.i.d. uniform $U_i \in [-1, 1]$ and
$$\mathsf{sign}(z) \coloneqq \begin{cases} +1 & z > 0, \\ -1 & z \le 0. \end{cases}$$

Theorem 1.4 (Hardness for LCD Algorithms). Let $g \sim \mathcal{N}(0, I_N)$ be a standard Normal random vector. Let $\mathcal{A}$ be any coordinate degree $D$ algorithm with $\mathbf{E}\|\mathcal{A}(g)\|^2 \le CN$, and let $\tilde{\mathcal{A}}$ be its randomized rounding as described above. Assume that
(a) if $E = \delta N$ for $\delta \in (0, 1)$, then $D \le o(N)$;
(b) if $\omega(\log^2 N) \le E \le o(N)$, then $D \le o(E / \log^2(N/E))$.
Then $\mathbf{P}(\tilde{\mathcal{A}} \in S(E; g)) = o(1)$.

Remark 1.5. Although Theorem 1.4 is stated for Gaussian disorder for simplicity and consistency with the low degree polynomial case, the proof extends verbatim to the much more general situation in which $(g_1, \dots, g_N)$ are independent random variables with uniformly bounded density (with respect to Lebesgue measure). Note that if each $g_i$ has density uniformly at most some constant $L$ independent of $N$, then the same holds (with the same value of $L$) for any independent sum $\sum_{s \in S} g_s$ for deterministic $S \subseteq [N]$. This is the only property of the Gaussian density that is needed to prove Theorem 1.4, appearing in Lemma 2.1 (which is used to show Lemma 2.4) and Lemma 2.12.

1.2 Heuristic Optimality of Theorem 1.4
https://arxiv.org/abs/2505.20607v1
Theorem 1.4 is essentially best-possible under the low degree heuristic. In particular, this heuristic suggests from the energy–degree tradeoff of $D \le \tilde{o}(E)$ that finding solutions with energy $E$ requires time $e^{\tilde{\Omega}(E)}$. This tradeoff is attainable along the full range $1 \ll E \le N$ via the following restricted brute-force search:
(a) Choose a subset $J \subseteq [N]$ of $E$ coordinates (say, the first $E$).
(b) Run an existing NPP algorithm on the restricted instance $g_{\bar{J}}$ to find $x_{\bar{J}}$ with $\langle g_{\bar{J}}, x_{\bar{J}} \rangle \le 1$.
(c) Fixing $x_{\bar{J}}$, the NPP given by $g$ turns into finding $x_J$ minimizing $|\langle g, x \rangle| = |\langle g_{\bar{J}}, x_{\bar{J}} \rangle + \langle g_J, x_J \rangle|$.
(d) Fixing $j \in J$ and the coordinate $x_j = 1$, the conditional law of $\langle g_{\bar{J}}, x_{\bar{J}} \rangle + g_j$ is contiguous with a standard Gaussian. If $\langle g_{\bar{J}}, x_{\bar{J}} \rangle + g_j$ were exactly standard Gaussian, then by the previously mentioned results on statistically optimal solutions, with high probability there would exist $x$ attaining energy $\Theta(E)$. By contiguity, the same holds even with the shift $\langle g_{\bar{J}}, x_{\bar{J}} \rangle$.
(e) Given the previous step, we can brute-force search over the remaining $E$ coordinates in $J$, yielding a solution with energy $\Theta(E)$ with high probability in $e^{O(E)}$ time.

It is reasonable to ask whether the low (coordinate) degree heuristic is truly appropriate in our setting. In previous problems where it has been applied, the objective under consideration is stable to small input perturbations. By contrast, solutions to the NPP are quite brittle, a fact which guides our proof. This brittleness also underlies the failure of low degree polynomials in Theorem 1.3, which stems from the fact that polynomials can depend on the entries $g_i$ only in a somewhat coarse way. Nevertheless the sharpness of Theorem 1.4 is notable and highly suggestive. See also [LS24] for related discussion. We also mention that conjecturally optimal trade-offs of a similar flavor between running time and signal-to-noise ratio have been established for sparse PCA in [AWZ23], [Din+24] and for tensor PCA in [KWB19], [WEM19].

Finally, we note that algorithms with coordinate degree $\Omega(N)$ by definition involve nonlinear interactions between a constant fraction of the $N$ coordinates. Thus Theorem 1.4 essentially states that good NPP algorithms must be "truly global", which is echoed by recent heuristic algorithms for the NPP [KKS14], [COY19], [SBD21].

1.3 Notations and Preliminaries

We use the standard Bachmann–Landau notations $o(\cdot), O(\cdot), \omega(\cdot), \Omega(\cdot), \Theta(\cdot)$, in the limit $N \to \infty$. We abbreviate $f(N) \ll g(N)$ or $f(N) \gg g(N)$ when $f(N) = o(g(N))$ or $f(N) = \omega(g(N))$. In addition, we write $f(N) \lesssim g(N)$ or $f(N) \gtrsim g(N)$ when there exists an $N$-independent constant $C$ such that $f(N) \le C g(N)$ or $f(N) \ge C g(N)$ for all $N$. We write $[N] \coloneqq \{1, \dots, N\}$. If $S \subseteq [N]$, then we write $\bar{S} \coloneqq [N] \setminus S$ for the complementary set of indices. If $x \in \mathbf{R}^N$ and $S \subseteq [N]$, then the restriction of $x$ to the coordinates in $S$ is the vector $x_S$ with
$$(x_S)_i \coloneqq \begin{cases} x_i & i \in S, \\ 0 & \text{else.} \end{cases}$$
In particular, for $x, y \in \mathbf{R}^N$, $\langle x_S, y \rangle = \langle x, y_S \rangle = \langle x_S, y_S \rangle$. On $\mathbf{R}^N$, we write $\|\cdot\|$ for the Euclidean norm, and $B(x, r) \coloneqq \{ y \in \mathbf{R}^N : \|y - x\| < r \}$ for the Euclidean ball of radius $r$ around $x$.
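The restriction vector $x_S$ and the identity $\langle x_S, y \rangle = \langle x, y_S \rangle = \langle x_S, y_S \rangle$ above can be checked directly; here is a minimal Python sketch (the helper names are ours, not the paper's):

```python
def restrict(x, S):
    # (x_S)_i = x_i for i in S, and 0 otherwise
    return [xi if i in S else 0.0 for i, xi in enumerate(x)]

def inner(x, y):
    # Euclidean inner product <x, y>
    return sum(a * b for a, b in zip(x, y))
```

Since zeroing coordinates of either argument outside $S$ kills the same terms of the sum, all three inner products agree.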
In addition, we write $B_\Sigma(x, r) \coloneqq B(x, r) \cap \Sigma_N = \{ y \in \Sigma_N : \|y - x\| < r \}$ to denote the points of $\Sigma_N$ within distance $r$ of $x$. We use $\mathcal{N}(\mu, \sigma^2)$ to denote the scalar Normal distribution with given mean and variance, and the abbreviation "r.v." for random variable (or random vector, if it is clear from context).

$p$-Correlated and $p$-Resampled Pairs. For $p \in [0, 1]$ and a pair $(g, g')$ of $N$-dimensional standard Normal random vectors, we say $(g, g')$ are $p$-correlated if $g'$ is distributed (conditionally on $g$) as $g' = p g + \sqrt{1 - p^2}\, \tilde{g}$, where $\tilde{g}$ is an independent copy of $g$. We say $(g, g')$ are $p$-resampled if $g$ is a standard Normal random vector and $g'$ is drawn as follows: for each $i \in [N]$ independently,
$$g'_i = \begin{cases} g_i & \text{with probability } p, \\ \text{drawn from } \mathcal{N}(0, 1) & \text{with probability } 1 - p. \end{cases}$$
We denote such a pair by $g' \sim \mathcal{L}_p(g)$. In both cases, $g$ and $g'$ are marginally multivariate standard Normal and have entrywise correlation $p$. However, while $\mathbf{P}_{g' \sim_p g}(g = g') = 0$, when $(g, g')$ are $(1 - \varepsilon)$-resampled we have
$$\mathbf{P}_{g' \sim \mathcal{L}_{1-\varepsilon}(g)}(g = g') = \prod_{i=1}^N \mathbf{P}(g_i = g'_i) = (1 - \varepsilon)^N, \tag{1.3}$$
which is non-negligible for $\varepsilon \le O(1/N)$.

Coordinate Degree. Let $\gamma_N$ be the $N$-dimensional standard Normal measure on $\mathbf{R}^N$, and consider the space $L^2(\gamma_N)$; this is the space of $L^2$ functions of $N$ i.i.d. standard Normal r.v.s. For $g \in \mathbf{R}^N$ and $S \subseteq [N]$, we can define subspaces of $L^2(\gamma_N)$:
$$V_S \coloneqq \{ f \in L^2(\gamma_N) : f(g) \text{ depends only on } g_S \}, \qquad V_{\le D} \coloneqq \operatorname{span}(\{ V_J : J \subseteq [N], |J| \le D \}).$$
These subspaces describe functions which depend only on some subset of coordinates, or on some bounded number of coordinates. Note that $V_{[N]} = V_{\le N} = L^2(\gamma_N)$. The coordinate degree of a function $f \in L^2(\gamma_N)$ is defined as $\min\{ D : f \in V_{\le D} \}$. Note that if $f$ is a degree $D$ polynomial, then it has coordinate degree at most $D$. See [Kun24, §1.3] or [O'D14, §8.3] for further discussion.

1.4 Stability of Low Degree Algorithms

The key property of LDP and LCD algorithms for us is $L^2$ stability under input perturbations.

Proposition 1.6 (Low Degree Stability). Suppose $\mathcal{A} : \mathbf{R}^N \to \mathbf{R}^N$ is a deterministic algorithm with polynomial degree (resp. coordinate degree) $D$ and norm $\mathbf{E}\|\mathcal{A}(g)\|^2 \le CN$. Then, for standard Normal r.v.s $g$ and $g'$ which are $(1 - \varepsilon)$-correlated (resp. $(1 - \varepsilon)$-resampled),
$$\mathbf{E}\|\mathcal{A}(g) - \mathcal{A}(g')\|^2 \le 2 C D \varepsilon N, \tag{1.4}$$
and thus for any $\eta > 0$,
$$\mathbf{P}\big(\|\mathcal{A}(g) - \mathcal{A}(g')\| \ge 2\sqrt{\eta N}\big) \le \frac{C D \varepsilon}{2 \eta}. \tag{1.5}$$

Proof: We show (1.4) when $\mathcal{A}$ has coordinate degree $D$ and $(g, g')$ are $(1 - \varepsilon)$-resampled. See e.g. [HS25b, Prop. 1.7] for the case where $\mathcal{A}$ is polynomial. In both cases, Markov's inequality gives (1.5). We follow [Gam+22, Lem. 3.4]. Assume without loss of generality that $\mathbf{E}\|\mathcal{A}(g)\|^2 = 1$. Writing $\mathcal{A} = (\mathcal{A}_1, \dots, \mathcal{A}_N)$, observe that for $g' \sim \mathcal{L}_{1-\varepsilon}(g)$,
$$\mathbf{E}\|\mathcal{A}(g) - \mathcal{A}(g')\|^2 = \mathbf{E}\|\mathcal{A}(g)\|^2 + \mathbf{E}\|\mathcal{A}(g')\|^2 - 2\,\mathbf{E}\langle \mathcal{A}(g), \mathcal{A}(g') \rangle = 2 - 2\,\mathbf{E}\langle \mathcal{A}(g), \mathcal{A}(g') \rangle.$$
By [O'D14, Exer. 8.18]
, we know that for each $\mathcal{A}_i \in V_{\le D}$ we have
$$(1 - \varepsilon)^D\, \mathbf{E}|\mathcal{A}_i(g)|^2 \le \mathbf{E}[\mathcal{A}_i(g)\, \mathcal{A}_i(g')] \le \mathbf{E}|\mathcal{A}_i(g)|^2.$$
Summing this over $i$ gives $(1 - \varepsilon)^D \le \mathbf{E}\langle \mathcal{A}(g), \mathcal{A}(g') \rangle \le 1$. Combining this with the above, and using $1 - (1 - \varepsilon)^D \le \varepsilon D$, yields (1.4). β–‘

Remark 1.7. Proposition 1.6 also holds for randomized algorithms, with exactly the same proof.

Next we introduce a family of locally improving algorithms, which will be useful later in showing the failure of randomized rounding. Below, fix a distance $r = O(1)$. Given an $\mathbf{R}^N$-valued $\mathcal{A}$, we can obtain a $\Sigma_N$-valued algorithm by first rounding $\mathcal{A}(g)$ into the solid hypercube $[-1, 1]^N$ and then picking the best corner of $\Sigma_N$ within constant distance of this output. Such a modification requires calculating the energy of at most $N^r$ additional points on $\Sigma_N$, and thus preserves e.g. any polynomial runtime bound. Since $r = O(1)$, this will also preserve stability. We formalize this construction as follows. Let $\mathsf{clip} : \mathbf{R}^N \to [-1, 1]^N$ be the function which rounds $x \in \mathbf{R}^N$ into the cube $[-1, 1]^N$:
$$\mathsf{clip}(x)_i \coloneqq \begin{cases} -1 & x_i \le -1, \\ x_i & -1 < x_i < 1, \\ 1 & x_i \ge 1. \end{cases}$$
Note that $\mathsf{clip}$ is $1$-Lipschitz with respect to the Euclidean norm.

Definition 1.8. Let $r > 0$ and $\mathcal{A}$ be an algorithm. Define the $[-1, 1]^N$-valued algorithm $\hat{\mathcal{A}}_r$ by
$$\hat{\mathcal{A}}_r(g) \coloneqq \begin{cases} \operatorname{argmin}_{x' \in B_\Sigma(\mathsf{clip}(\mathcal{A}(g)), r)} |\langle g, x' \rangle| & \text{if } B_\Sigma(\mathsf{clip}(\mathcal{A}(g)), r) \ne \varnothing, \\ \mathsf{clip}(\mathcal{A}(g)) & \text{else.} \end{cases} \tag{1.6}$$

The next simple lemma shows that if $\mathcal{A}$ is stable, then $\hat{\mathcal{A}}_r$ is also stable.

Lemma 1.9. Suppose $\mathcal{A} : \mathbf{R}^N \to \mathbf{R}^N$ is a deterministic algorithm with polynomial degree (resp. coordinate degree) $D$ and norm $\mathbf{E}\|\mathcal{A}(g)\|^2 \le CN$. Then, for $r = O(1)$ and standard Normal r.v.s $g$ and $g'$ which are $(1 - \varepsilon)$-correlated (resp. $(1 - \varepsilon)$-resampled), $\hat{\mathcal{A}}_r$ as in Definition 1.8 satisfies
$$\mathbf{E}\|\hat{\mathcal{A}}_r(g) - \hat{\mathcal{A}}_r(g')\|^2 \le 4 C D \varepsilon N + 8 r^2. \tag{1.7}$$
Thus for any $\eta > 0$,
$$\mathbf{P}\big(\|\hat{\mathcal{A}}_r(g) - \hat{\mathcal{A}}_r(g')\| \ge 2\sqrt{\eta N}\big) \le \frac{C D \varepsilon}{\eta} + \frac{2 r^2}{\eta N}. \tag{1.8}$$

Proof: Observe that by the triangle inequality, $\|\hat{\mathcal{A}}_r(g) - \hat{\mathcal{A}}_r(g')\|$ is bounded by
$$\|\hat{\mathcal{A}}_r(g) - \mathsf{clip}(\mathcal{A}(g))\| + \|\mathsf{clip}(\mathcal{A}(g)) - \mathsf{clip}(\mathcal{A}(g'))\| + \|\mathsf{clip}(\mathcal{A}(g')) - \hat{\mathcal{A}}_r(g')\| \le 2r + \|\mathcal{A}(g) - \mathcal{A}(g')\|.$$
This follows as $\mathsf{clip}$ is $1$-Lipschitz and the corner-picking step in (1.6) only moves $\hat{\mathcal{A}}_r(g)$ from $\mathsf{clip}(\mathcal{A}(g))$ by at most $r$. By Jensen's inequality, squaring this gives
$$\|\hat{\mathcal{A}}_r(g) - \hat{\mathcal{A}}_r(g')\|^2 \le 2\big(4 r^2 + \|\mathcal{A}(g) - \mathcal{A}(g')\|^2\big).$$
Combining this with Proposition 1.6 gives (1.7), and (1.8) follows from Markov's inequality. β–‘

Of course, our construction of $\hat{\mathcal{A}}_r$ is certainly never polynomial and does not preserve coordinate degree in a controllable way. However, because the rounding does not drastically alter the stability analysis, we are still able to show that for any $\mathbf{R}^N$-valued low coordinate degree algorithm $\mathcal{A}$ and $r = O(1)$, strong low degree hardness holds for $\hat{\mathcal{A}}_r$. Establishing the failure of $\hat{\mathcal{A}}_r$ will be a useful intermediate step towards the full proof of Theorem 1.4. (The same argument proves hardness when $\mathcal{A}$ is a low degree polynomial algorithm; this is omitted for brevity.)

2 Hardness for Low Degree Algorithms

In this section, we prove Theorem 1.3 and Theorem 1.4; that is, we exhibit strong low degree hardness for both low polynomial degree and low coordinate degree algorithms. Our argument utilizes what can be thought of as a "conditional" OGP. Previously, most OGP proofs identify a global obstruction: with high probability, there do not exist any tuples of good solutions to a family of correlated instances which are e.g. roughly the same distance apart. Here, however, we show a local obstruction: we condition on being able to solve a single instance and show that after a small change to the instance, it is unlikely that any solutions will exist close to the first one. This is an instance of the "brittleness" that makes the NPP challenging to solve; even small changes in the instance break the landscape geometry, so that even if solutions exist, there is no way to know where they will end up. Related strategies have been used recently in [AG24], [Ala24], [HS25b]. The first of these studied the Ising perceptron (in our terminology, the VBP with $m = \Theta(N)$), and deduced hardness of sampling from the fact that a typical solution to a given instance will disappear in a slightly correlated instance. For our result, it is crucial that every solution to a given NPP instance is likely to disappear when the input is slightly perturbed. On the other hand, this property is weaker than requiring all solutions to simultaneously disappear under such perturbation, as in a more standard "global" OGP argument. We note that Gamarnik and KΔ±zΔ±ldağ showed in [GK23, Thm. 2.5] that sublinear energies do not exhibit a certain multi-OGP
, suggesting that a sharp analysis via global OGP arguments may be challenging to implement.

Let us give a more detailed outline of our strategy, in the case of low coordinate degree. Let $E$ be an energy level, and assume $\mathcal{A}$ is a $\Sigma_N$-valued algorithm with coordinate degree at most $D \le \tilde{o}(E)$. We choose suitable parameters $\eta \approx \frac{E}{N \log N}$ and $\varepsilon \approx \frac{\log(N/D)}{N}$, and aim to show that $\mathbf{P}(\mathcal{A}(g) \in S(E; g)) \to 0$ as $N \to \infty$. To do so, we consider a $(1 - \varepsilon)$-resampled pair $(g, g')$ of NPP instances and proceed according to the following steps.
(a) For $\varepsilon$ small, $g$ and $g'$ have correlation close to $1$. By Proposition 1.6, this implies that the outputs of an LCD algorithm $\mathcal{A}$ will be within distance $2\sqrt{\eta N}$ of each other with high probability.
(b) Since $\varepsilon \gg \frac{1}{N}$, we will have $g \ne g'$ with high probability, and we assume below that this holds. (This is the central difference between the LCD and LDP cases; in the latter there is no issue in taking $\varepsilon$ much smaller, which turns out to drastically affect the resulting hardness bound.)
(c) For $\eta$ small and fixed $\mathcal{A}(g)$, Lemma 2.4 shows using the Markov inequality that conditional on $g$ and the event $g' \ne g$, the perturbed instance $g'$ typically has no solutions within distance $2\sqrt{\eta N}$ of $\mathcal{A}(g)$. This is the conditional landscape obstruction we described above.
(d) Put together, these steps imply that it is unlikely for $\mathcal{A}$ to find solutions to both $g$ and $g'$ such that the stability guarantee of Proposition 1.6 holds. Using a positive correlation property (see Lemma 2.5), we conclude that $\mathcal{A}(g) \notin S(E; g)$ with high probability.

The above argument handles low coordinate degree algorithms without the randomized rounding step. To handle the latter, we observe that solutions to a given NPP instance are isolated with high probability. This implies that randomized rounding either changes only $O(1)$ coordinates (which preserves stability), or else injects too much randomness to preserve any chance of finding a good solution. For convenience, we pursue this extension only in the LCD case (as the main message of the LDP case is that polynomial degree is a poor proxy for complexity in the NPP).

2.1 Preliminary Estimates

We begin with some general estimates that will be utilized repeatedly throughout the proof. First, we bound the probability of a Normal r.v. being exponentially close to zero. We denote $2^x$ by $\exp_2(x)$.

Lemma 2.1. Let $E, \sigma^2 > 0$, and suppose that conditionally on $\mu$, we have $Z \sim \mathcal{N}(\mu, \sigma^2)$. Then
$$\mathbf{P}(|Z| \le 2^{-E} \mid \mu) \le \exp_2\Big({-E} - \frac{1}{2}\log_2(\sigma^2) + O(1)\Big), \quad \forall \mu \in \mathbf{R}. \tag{2.1}$$

Proof: Observe that conditional on $\mu$, the density of $Z$ is bounded as
$$\varphi_{Z \mid \mu}(z) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(z - \mu)^2}{2\sigma^2}} \le (2\pi\sigma^2)^{-1/2}.$$
Integrating over $|z| \le 2^{-E}$ then gives (2.1), via
$$\mathbf{P}(|Z| \le 2^{-E}) \le \int_{|z| \le 2^{-E}} (2\pi\sigma^2)^{-1/2}\, \mathrm{d}z \le 2^{-E - \frac{1}{2}\log_2(2\pi\sigma^2) + 1}. \;\square$$

Note that the bound (2.1) is a decreasing function of $\sigma^2$. Thus, if there exists $\gamma$ with $\sigma^2 \ge \gamma > 0$, then (2.1) is bounded by $\exp_2(-E - \log_2(\gamma)/2 + O(1))$.

Next, we recall the following bound on the partial sums of binomial coefficients; this will be used for a first moment computation in Section 2.2.

Lemma 2.2 (Chernoff–Hoeffding). Suppose that $K \le N/2$, and let $h(x) = -x\log_2(x) - (1 - x)\log_2(1 - x)$ be the binary entropy function. Then, for $p \coloneqq K/N$,
$$\sum_{k \le K} \binom{N}{k} \le \exp_2(N h(p)) \le \exp_2\Big(2 N p \log_2\Big(\frac{1}{p}\Big)\Big).$$

Proof: For a $\mathrm{Bin}(N, p)$ random variable $S$, we have
$$1 \ge \mathbf{P}(S \le K) = \sum_{k \le K} \binom{N}{k} p^k (1 - p)^{N - k} \ge \sum_{k \le K} \binom{N}{k} p^K (1 - p)^{N - K}.$$
The last inequality follows by multiplying each term by $(p/(1 - p))^{K - k} \le 1$. Rearranging gives
$$\sum_{k \le K} \binom{N}{k} \le p^{-K}(1 - p)^{-(N - K)} = \exp_2\big({-K}\log_2(p) - (N - K)\log_2(1 - p)\big) = \exp_2\Big(N \cdot \Big({-\frac{K}{N}}\log_2(p) - \Big(\frac{N - K}{N}\Big)\log_2(1 - p)\Big)\Big) = \exp_2(N h(p)).$$
The final inequality of the statement then follows from the bound $h(p) \le 2p\log_2(1/p)$ for $p \le 1/2$. β–‘

Finally, we show that for any energy $E \le O(N)$, there exists a choice of "distance" $\eta$ such that the term in the previous lemma is controlled.

Lemma 2.3. For all $E \le N$, there exist universal constants $C, C' > 0$ such that
$$\eta \coloneqq \frac{E}{C' N \log_2(C N / E)} \tag{2.2}$$
satisfies $\eta \in (0, 1/2)$ and
$$2\eta \log_2\Big(\frac{1}{\eta}\Big) < \frac{E}{4N}. \tag{2.3}$$
In addition, (a) if $E = \Theta(N)$, this $\eta$ is $\Theta(1)$; (b) if $E \le o(N)$, this $\eta$ satisfies $\eta \gtrsim E / (N \log_2(C N / E))$.

Proof: It is not hard to show that if $0 < \eta, E/N \ll 1$ and $-\eta \log \eta \sim E/N$, then we have $\eta \sim E / (N \log(N/E))$. Picking suitable constants (e.g. $C = 8$ and $C' = 16$ suffice) in (2.2), it is easy to see that $\eta \in (0, 1/2)$ and (2.3) holds for all $E \le N$. The resulting asymptotics follow immediately. β–‘
2.2 Conditional Landscape Obstruction

We turn now to establishing the central conditional landscape obstruction of our argument. The idea is that for an $x \in \Sigma_N$ depending on $g$ and a related instance $g'$, the probability that any fixed $x' \in \Sigma_N$ solves $g'$ is much smaller than the reciprocal of the number of points within a neighborhood of $x$. Thus, even small changes to the instance destroy any solutions.

Lemma 2.4. Let $(g, g')$ be a pair of either $(1 - \varepsilon)$-correlated or $(1 - \varepsilon)$-resampled instances. Let $x \in \Sigma_N$ be conditionally independent of $g'$ given $g$. Then for any $\eta \in (0, 1/2)$,
$$\mathbf{P}_{g' \sim_{1-\varepsilon} g}\big(\exists\, x' \in S(E; g') \text{ s.t. } \|x - x'\| \le 2\sqrt{\eta N}\big) \le \exp_2\Big({-E} - \frac{1}{2}\log_2(\varepsilon) + 2\eta\log_2\Big(\frac{1}{\eta}\Big) N + O(\log N)\Big), \tag{2.4}$$
and
$$\mathbf{P}_{g' \sim \mathcal{L}_{1-\varepsilon}(g)}\Big(\exists\, x' \in S(E; g') \text{ s.t. } \|x - x'\| \le 2\sqrt{\eta N} \,\Big|\, g \ne g'\Big) \le \exp_2\Big({-E} + 2\eta\log_2\Big(\frac{1}{\eta}\Big) N + O(1)\Big). \tag{2.5}$$

Proof: Throughout, abbreviate $\eta_N \coloneqq 2\sqrt{\eta N}$. We first show (2.4), by bounding the probability that
$$|B(x, \eta_N) \cap S(E; g')| = \sum_{x' \in B_\Sigma(x, \eta_N)} I\{x' \in S(E; g')\}$$
is nonzero. By Markov's inequality, this is upper bounded by
$$\mathbf{E}\Big[\sum_{x' \in B_\Sigma(x, \eta_N)} I\{x' \in S(E; g')\}\Big] = \mathbf{E}\Big[\sum_{x' \in B_\Sigma(x, \eta_N)} \mathbf{E}\big[I\{x' \in S(E; g')\} \mid g\big]\Big] = \mathbf{E}\Big[\sum_{x' \in B_\Sigma(x, \eta_N)} \mathbf{P}\big(|\langle g', x' \rangle| \le 2^{-E} \mid g\big)\Big]. \tag{2.6}$$
Note in particular that the range of this sum is independent of the inner probability, as $g'$ and $x$ are conditionally independent given $g$.

To bound the number of terms in (2.6), let $k$ be the number of coordinates which differ between $x$ and $x'$, so that $\|x - x'\|^2 = 4k$. Thus $\|x - x'\| \le 2\sqrt{\eta N}$ if and only if $k \le N\eta < N/2$, so by Lemma 2.2,
$$|B_\Sigma(x, \eta_N)| = \sum_{k \le N\eta} \binom{N}{k} \le \exp_2\big(2\eta\log_2(1/\eta)\, N\big). \tag{2.7}$$
To bound the inner probability under $g' \sim_{1-\varepsilon} g$, fix any $x' \in \Sigma_N$ and write $g' = p g + \sqrt{1 - p^2}\, \tilde{g}$ for $p \coloneqq 1 - \varepsilon$ and $\tilde{g}$ an independent copy of $g$. We know $\langle \tilde{g}, x' \rangle \sim \mathcal{N}(0, N)$, so this gives
$$\langle g', x' \rangle \mid g \sim \mathcal{N}\big(p\langle g, x' \rangle, (1 - p^2) N\big).$$
This is nondegenerate for $(1 - p^2) N \ge \varepsilon N > 0$; by Lemma 2.1, we get
$$\mathbf{P}_{g' \sim_{1-\varepsilon} g}\big(|\langle g', x' \rangle| \le 2^{-E} \mid g\big) \le \exp_2\Big({-E} - \frac{1}{2}\log_2(\varepsilon) + O(\log N)\Big).$$
Using (2.7) to upper bound the number of terms in (2.6), summing this bound gives (2.4).

Alternatively, for (2.5), we know that if $g = g'$, then $B(x, \eta_N) \cap S(E; g')$ will be nonempty if $x$ is chosen to be a solution to $g$; we thus condition on $g \ne g'$ throughout (2.6). To bound the corresponding inner term, again fix any $x' \in \Sigma_N$ and let $\tilde{g}$ be an independent copy of $g$. Let $J \subseteq [N]$ be a random subset where each $i \in [N]$ lies in $J$ independently with probability $1 - \varepsilon$, so $g'$ can be represented as $g' = g_J + \tilde{g}_{\bar{J}}$. Conditional on $(g, J)$, we know that $\langle \tilde{g}_{\bar{J}}, x' \rangle$ is $\mathcal{N}(0, N - |J|)$ and $\langle g_J, x' \rangle$ is deterministic, so that
$$\langle g', x' \rangle \mid (g, J) \sim \mathcal{N}\big(\langle g_J, x' \rangle, N - |J|\big).$$
As $\{g \ne g'\} = \{|J| < N\}$, we have $N - |J| \ge 1$ conditional on $g \ne g'$. Thus, Lemma 2.1 gives
$$\mathbf{P}_{g' \sim \mathcal{L}_{1-\varepsilon}(g)}\big(|\langle g', x' \rangle| \le 2^{-E} \mid g, g \ne g'\big) \le \exp_2\big({-E} + O(1)\big),$$
and we conclude (2.5) as in the previous case. β–‘

The following lemma shows a positive correlation property that enables us to avoid union-bounding over $\mathcal{A}$ finding solutions to correlated or resampled pairs of instances. Below, the set $S$ plays the role of the event $\mathcal{A}(g, \omega) \in S(E; g)$.

Lemma 2.5 (Adapted from [HS25b, Lem. 2.7]). Let $(g, g')$ be a pair of either $p$-correlated or $p$-resampled instances. Then for any set $S \subseteq \mathbf{R}^N$ and $p > 0$, with $q \coloneqq \mathbf{P}(g \in S)$,
$$\mathbf{P}_{g' \sim_p g}(g \in S, g' \in S) \ge q^2 \quad \text{and} \quad \mathbf{P}_{g' \sim \mathcal{L}_p(g)}(g \in S, g' \in S) \ge q^2.$$

Proof: In the first case, let $\tilde{g}, g^{(0)}, g^{(1)}$ be three i.i.d. copies of $g$, and observe that $g' \sim_p g$ are jointly representable as
$$g = \sqrt{p}\, \tilde{g} + \sqrt{1 - p}\, g^{(0)}, \qquad g' = \sqrt{p}\, \tilde{g} + \sqrt{1 - p}\, g^{(1)}.$$
Since $g, g'$ are conditionally i.i.d. given $\tilde{g}$, we have by Jensen's inequality that
$$\mathbf{P}_{g' \sim_p g}(g \in S, g' \in S) = \mathbf{E}\big[\mathbf{P}(g \in S, g' \in S \mid \tilde{g})\big] = \mathbf{E}\big[\mathbf{P}(g \in S \mid \tilde{g})^2\big] \ge \mathbf{E}\big[\mathbf{P}(g \in S \mid \tilde{g})\big]^2 = q^2.$$
Likewise, when $g' \sim \mathcal{L}_p(g)$, let $J$ be a random subset of $[N]$ where each $i \in [N]$ lies in $J$ independently with probability $p$, so that $(g, g')$ are jointly representable as
$$g = \tilde{g}_J + g^{(0)}_{\bar{J}}, \qquad g' = \tilde{g}_J + g^{(1)}_{\bar{J}}.$$
Thus $g$ and $g'$ are conditionally i.i.d. given $(\tilde{g}, J)$, and we conclude in the same way. β–‘

Remark 2.6. Note that Lemma 2.5 also holds when there is an auxiliary random seed $\omega$ shared across the instances. In this case the success event is a set $S \subseteq \mathbf{R}^N \times \Omega_N$, and we write
$$q = \mathbf{P}((g, \omega) \in S), \qquad Q = \mathbf{P}\big((g, \omega) \in S, (g', \omega) \in S\big), \qquad q(\omega) = \mathbf{P}((g, \omega) \in S \mid \omega), \qquad Q(\omega) = \mathbf{P}\big((g, \omega) \in S, (g', \omega) \in S \mid \omega\big).$$
Lemma 2.5 shows that for any $\omega \in \Omega_N$, $Q(\omega) \ge q(\omega)^2$. Then, by Jensen's inequality,
$$Q = \mathbf{E}[Q(\omega)] \ge \mathbf{E}[q(\omega)^2] \ge \mathbf{E}[q(\omega)]^2 = q^2.$$
Thus, in combination with Remark 1.7, the proofs of Theorem 1.3 and Theorem 1.4 hold without modification when $\mathcal{A}$ depends on an independent random seed $\omega$.

2.3 Hardness for LDP Algorithms

We first prove Theorem 1.3. Let $\mathcal{A}$ be a $\Sigma_N$-valued algorithm with polynomial degree $D$ satisfying (1.5); by Remark 2.6, assume without loss of generality that $\mathcal{A} = \mathcal{A}(g)$ is deterministic. Consider a pair of $(1 - \varepsilon)$-correlated instances $(g, g')$. Let $x \coloneqq \mathcal{A}(g)$ and define the events
$$S_{\mathrm{solve}} \coloneqq \{\mathcal{A}(g) \in S(E; g),\ \mathcal{A}(g') \in S(E; g')\}, \qquad S_{\mathrm{stable}} \coloneqq \{\|\mathcal{A}(g) - \mathcal{A}(g')\| \le 2\sqrt{\eta N}\}, \qquad S_{\mathrm{cond}}(x) \coloneqq \{\nexists\, x' \in S(E; g') \text{ such that } \|x - x'\| \le 2\sqrt{\eta N}\}. \tag{2.8}$$
Intuitively, the first two events ask that the algorithm solves both instances and is stable, respectively. The last event, which depends on $x$, corresponds to the conditional landscape obstruction: for an $x$ depending only on $g$, there is no solution to $g'$ which is close to $x$.

Lemma 2.7. For $x \coloneqq \mathcal{A}(g)$, we have $S_{\mathrm{solve}} \cap S_{\mathrm{stable}} \cap S_{\mathrm{cond}}(x) = \varnothing$.

Proof: Suppose that $S_{\mathrm{solve}}$ and $S_{\mathrm{stable}}$ both occur. Letting $x \coloneqq \mathcal{A}(g)$ (which only depends on $g$) and $x' \coloneqq \mathcal{A}(g')$, we know $x' \in S(E; g')$ and is within distance $2\sqrt{\eta N}$ of $x$, contradicting $S_{\mathrm{cond}}(x)$.
β–‘

Proof of Theorem 1.3: Let $p_{\mathrm{solve}} \coloneqq \mathbf{P}(\mathcal{A}(g) \in S(E; g))$ be the probability that $\mathcal{A}$ solves one instance. By Lemma 2.5, $\mathbf{P}_{g' \sim_{1-\varepsilon} g}(S_{\mathrm{solve}}) \ge p_{\mathrm{solve}}^2$. In addition, let
$$p_{\mathrm{unstable}} \coloneqq 1 - \mathbf{P}_{g' \sim_{1-\varepsilon} g}(S_{\mathrm{stable}}), \qquad p_{\mathrm{cond}} \coloneqq \max_{x \in \Sigma_N}\Big(1 - \mathbf{P}_{g' \sim_{1-\varepsilon} g}\big(S_{\mathrm{cond}}(x)\big)\Big).$$
Set $\varepsilon \coloneqq 2^{-E/2}$ and set $\eta$ as in Lemma 2.3. By Lemma 2.4, for $N$ sufficiently large,
$$p_{\mathrm{cond}} \le \exp_2\Big({-E} - \frac{1}{2}\log_2(\varepsilon) + 2\eta\log_2\Big(\frac{1}{\eta}\Big) N + O(\log N)\Big) \le \exp_2\Big({-E} + \frac{E}{4} + \frac{E}{4} + O(\log N)\Big) = \exp_2\Big({-\frac{E}{2}} + O(\log N)\Big) = o(1).$$
Next, for $\log N \ll E \le N$ and $D = o(\exp_2(E/4))$, (1.5) gives
$$p_{\mathrm{unstable}} \lesssim \frac{D\varepsilon}{\eta} \lesssim \frac{D \exp_2(-E/2)\, N \log_2(C N / E)}{E} \le D \exp_2\Big({-\frac{E}{2}} + O(\log N)\Big) \le o(1) \cdot \exp_2\Big({-\frac{E}{4}} + O(\log N)\Big) = o(1).$$
That is, both $p_{\mathrm{cond}}$ and $p_{\mathrm{unstable}}$ vanish for large $N$. To conclude, Lemma 2.7 gives (for $\mathbf{P} = \mathbf{P}_{g' \sim_{1-\varepsilon} g}$)
$$\mathbf{P}(S_{\mathrm{solve}}) + \mathbf{P}(S_{\mathrm{stable}}) + \mathbf{P}(S_{\mathrm{cond}}(x)) \le 2,$$
and rearranging yields
$$p_{\mathrm{solve}}^2 \le p_{\mathrm{unstable}} + \big(1 - \mathbf{P}(S_{\mathrm{cond}}(x))\big) \le p_{\mathrm{unstable}} + p_{\mathrm{cond}} = o(1). \;\square$$

2.4 Hardness for Non-Rounded LCD Algorithms

We now build towards the main result, Theorem 1.4. As a stepping stone, we first show strong low degree hardness for algorithms $\mathcal{A}$ with coordinate degree $D$ which are $\Sigma_N$-valued. We then extend to $O(1)$-close and rounded algorithms. Recalling Remark 2.6, we assume $\mathcal{A}$ is deterministic for convenience. Consider a pair of $(1 - \varepsilon)$-resampled instances $(g, g')$. Let $x \coloneqq \mathcal{A}(g)$ and keep the definitions of $S_{\mathrm{solve}}$, $S_{\mathrm{stable}}$, $S_{\mathrm{cond}}$ from (2.8).
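The two pair constructions from Section 1.3, and the positive correlation property of Lemma 2.5 invoked just above to lower bound $\mathbf{P}(S_{\mathrm{solve}})$, can be sanity-checked by simulation. A minimal Python sketch (sampler and helper names are ours), using the simple event $S = \{g : g_1 > 0\}$, for which $q = 1/2$ and Lemma 2.5 predicts a joint success probability of at least $q^2 = 1/4$:

```python
import math, random

def correlated_pair(N, p, rng):
    # (g, g') p-correlated: g' = p*g + sqrt(1 - p^2) * (independent copy)
    g = [rng.gauss(0, 1) for _ in range(N)]
    gp = [p * a + math.sqrt(1 - p * p) * rng.gauss(0, 1) for a in g]
    return g, gp

def resampled_pair(N, p, rng):
    # (g, g') p-resampled: each coordinate is kept w.p. p, else redrawn from N(0,1)
    g = [rng.gauss(0, 1) for _ in range(N)]
    gp = [a if rng.random() < p else rng.gauss(0, 1) for a in g]
    return g, gp

def joint_freq(sampler, p, trials, rng):
    # empirical P(g_1 > 0, g'_1 > 0); Lemma 2.5 guarantees >= q^2 = 1/4
    hits = 0
    for _ in range(trials):
        g, gp = sampler(1, p, rng)
        if g[0] > 0 and gp[0] > 0:
            hits += 1
    return hits / trials
```

For $p$ close to $1$ both joint frequencies land well above $1/4$, and a $p$-resampled pair keeps roughly a $p$ fraction of coordinates exactly equal, as in (1.3).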
addition, define 𝑆diff ≔ { 𝑔 β‰  𝑔′} . Lemma 2.8 . F or π‘₯ ≔ π’œ ( 𝑔 ) , we have 𝑆diff ∩ 𝑆solve ∩ 𝑆stable ∩ 𝑆cond ( π‘₯ ) = βˆ… . Pr oof : This f ollo w s fr om Lemma 2. 7 , noting that the pr oof did not use that 𝑔 β‰  𝑔′ almost sur ely . β–‘ As bef or e , our pr oof f ollo w s fr om sho w ing that f or appr opriat e c hoic es of πœ€ and πœ‚ ( depending on 𝐷 , 𝐸 , and 𝑁 ), the e v ent s 𝑆stable and 𝑆cond ( π‘₯ ) hold w ith high pr obabilit y . Ho w e v er , when 𝑆diff fails , 𝑆cond ( π‘₯ ) alw a y s fails . Thus , t o ensur e the appr opriat e pr obabilities v anish, w e ar e r equir ed t o c hoose πœ€ ≫ 1 / 𝑁 , whic h b y ( 1.3 ) ensur es 𝑔 β‰  𝑔′ w ith high pr obabilit y . C ontr ast this w ith Section 2.3 , wher e πœ€ c ould be e xponentially small in 𝐸 . This r estriction on πœ€ pr e v ent s us fr om sho w ing har dness f or alg orithms w ith degr ee lar g er than the best possible le v el, as discussed in Section 1.2 . Pr oof of Theor em 1. 4 , f or Σ𝑁 -valued alg orithms without r andomized r ounding : A g ain let 𝑝solve ≔ 𝐏 ( π’œ ( 𝑔 ) ∈ 𝑆 ( 𝐸 ; 𝑔 ) ) be the pr obabilit y that π’œ solv es one inst anc e . By Lemma 2.5 , 𝐏𝑔′ ∼ β„’1 βˆ’ πœ€ ( 𝑔 ) ( 𝑆solve ) β‰₯ 𝑝2 solve . W e no w r edefine 𝑝cond and 𝑝unstable v ia 𝑝cond ≔ max π‘₯ ∈ Σ𝑁1 βˆ’ 𝐏𝑔′ ∼ β„’1 βˆ’ πœ€ ( 𝑔 ) ( 𝑆cond ( π‘₯ ) | 𝑆diff ) , 𝑝unstable ≔ 1 βˆ’ 𝐏𝑔′ ∼ β„’1 βˆ’ πœ€ ( 𝑔 ) ( 𝑆stable | 𝑆diff ) . (2.9) Setting πœ‚ as in Lemma 2.3 , w e ha v e b y Lemma 2. 4 that 𝑝cond ≀ exp2 ( βˆ’ 𝐸 + 2 πœ‚ log2 (1 πœ‚) 𝑁 + 𝑂 ( 1 ) ) ≀ exp2 ( βˆ’3 𝐸 4+ 𝑂 ( 1 ) ) = π‘œ ( 1 ) . N e x t, set πœ€ ≔log2 ( 𝑁 / 𝐷 ) 𝑁. This c lear ly has 𝑁 πœ€ ≫ 1 , so 𝐏 ( 𝑆diff ) = 1 βˆ’ ( 1 βˆ’ πœ€ )𝑁β‰₯ 1 βˆ’ π‘’βˆ’ πœ€ 𝑁→ 1 . (2.10) By Pr oposition 1. 6 , w e ha v e f or (a) 𝐸 = 𝛿 𝑁 and 𝐷 = π‘œ ( 𝑁 ) that 𝑝unstable ≲
https://arxiv.org/abs/2505.20607v1
𝐷 πœ€ =𝐷 𝑁log2 (𝑁 𝐷) = π‘œ ( 1 ) . Lik e w ise , f or (b ) log2𝑁 β‰ͺ 𝐸 β‰ͺ 𝑁 and 𝐷 = π‘œ ( 𝐸 / log2( 𝐢 𝑁 / 𝐸 ) ) , w e g et 16 𝑝unstable ≲𝐷 log2 ( 𝑁 / 𝐷 ) log2 ( 𝐢 𝑁 / 𝐸 ) 𝐸 As 𝐷 is an upper bound on the maximum possible alg orithm degr ee , w e ma y incr ease 𝐷 w ithout loss of g ener alit y in the analy sis so that 𝐷 gr o w s only slightly slo w er than 𝐸 . Thus w e assume henc ef orth that 𝐷 β‰₯ 𝐸 / log3 2 ( 𝑁 / 𝐸 ) , so that 𝑁 / 𝐷 ≀ 𝑁 log3 2 ( 𝑁 / 𝐸 ) / 𝐸 . This let s us bound log2 ( 𝑁 / 𝐷 ) ≀ log2 ( 𝑁 / 𝐸 ) + 3 log2 log2 ( 𝑁 / 𝐸 ) ≲ log2 ( 𝐢 𝑁 / 𝐸 ) , whic h giv es 𝑝unstable ≲𝐷 log2 2 ( 𝐢 𝑁 / 𝐸 ) 𝐸= π‘œ ( 1 ) . As bef or e , these c hoic es of πœ€ and πœ‚ ensur e both 𝑝cond and 𝑝unstable v anish f or lar g e 𝑁 and ar bitr ar y ener gy log2𝑁 β‰ͺ 𝐸 ≀ 𝑁 . T o c onc lude , f or π‘₯ = π’œ ( 𝑔 ) , Lemma 2. 8 implies 𝐏𝑔′ ∼ β„’1 βˆ’ πœ€ ( 𝑔 ) ( 𝑆solve , 𝑆stable , 𝑆cond ( π‘₯ ) | 𝑆diff ) = 0 , so 𝐏 ( 𝑆solve | 𝑆diff ) + 𝐏 ( 𝑆stable | 𝑆diff ) + 𝐏 ( 𝑆cond ( π‘₯ ) | 𝑆diff ) ≀ 2 . Thus , r earr anging and multiply ing b y 𝐏 ( 𝑆diff ) giv es 𝐏 ( 𝑆solve and 𝑆diff ) ≀ 𝐏 ( 𝑆diff ) β‹… ( 𝑝unstable + 𝑝cond ) ≀ 𝑝unstable + 𝑝cond . F inally , adding 𝐏 ( 𝑆solve , 𝑆¬ diff ) ≀ 1 βˆ’ 𝑃 ( 𝑆diff ) , whic h is π‘œ ( 1 ) b y our c hoic e of πœ€ , t o both sides ( so as t o apply Lemma 2.5 ) let s us c onc lude 𝑝2 solve ≀ 𝐏 ( 𝑆solve ) ≀ 𝑝unstable + 𝑝cond + ( 1 βˆ’ 𝐏 ( 𝑆diff ) ) = π‘œ ( 1 ) . β–‘ 2.5 Locally Impr o v ing Alg orithms So far , w e ha v e est ablished str ong lo w degr ee har dness f or both lo w degr ee poly nomial and lo w c oor dinat e degr ee alg orithms whic h t ak e v alues in Σ𝑁 . N e x t w e sho w that the last c ondition is not in fact as r estrictiv e as it might
appear, focusing on the LCD case. In the next step towards Theorem 1.4, we show that our preceding analysis extends to 𝑂(1)-distance perturbations of an LCD algorithm, thanks to the preservation of stability. Throughout the below, fix a distance π‘Ÿ = 𝑂(1). We consider the event that the 𝐑𝑁-valued algorithm π’œ outputs a point close to a solution for an instance 𝑔:

𝑆close(π‘Ÿ) = {βˆƒ π‘₯Μ‚ ∈ 𝑆(𝐸; 𝑔) s.t. 𝖼𝗅𝗂𝗉(π’œ(𝑔)) ∈ 𝐡(π‘₯Μ‚, π‘Ÿ)} = {𝐡Σ(𝖼𝗅𝗂𝗉(π’œ(𝑔)), π‘Ÿ) ∩ 𝑆(𝐸; 𝑔) β‰  βˆ…}.

Equivalently, this means that the rounded algorithm π’œΜ‚π‘Ÿ defined in (1.6) solves the NPP.

Proposition 2.9 (Hardness for Locally Improved LCD Algorithms). Let 𝑔 ∼ 𝒩(0, 𝐼𝑁) be a standard Normal random vector. Let π‘Ÿ > 0 be an 𝑁-independent constant and π’œ be any coordinate degree 𝐷 algorithm with 𝐄 β€–π’œ(𝑔)β€–Β² ≲ 𝑁. Assume that

(a) if 𝐸 = 𝛿𝑁 for 𝛿 ∈ (0, 1), then 𝐷 ≀ π‘œ(𝑁);
(b) if πœ”(logΒ² 𝑁) ≀ 𝐸 ≀ π‘œ(𝑁), then 𝐷 ≀ π‘œ(𝐸/logΒ²(𝐢𝑁/𝐸)).

Then 𝐏(𝑆close(π‘Ÿ)) = 𝐏(π’œΜ‚π‘Ÿ(𝑔) ∈ 𝑆(𝐸; 𝑔)) = π‘œ(1).

Proof of Proposition 2.9: We maintain the setup of the proof of Theorem 1.4. Let π‘₯ ≔ π’œΜ‚π‘Ÿ(𝑔) and define the events 𝑆diff, 𝑆solve, 𝑆stable, and 𝑆cond(π‘₯) as in Section 2.4, replacing π’œ with π’œΜ‚π‘Ÿ. Let 𝑝solve ≔ 𝐏(π’œΜ‚π‘Ÿ(𝑔) ∈ 𝑆(𝐸; 𝑔)); letting 𝑆solve be the event

{π’œΜ‚π‘Ÿ(𝑔) ∈ 𝑆(𝐸; 𝑔)} = {𝖼𝗅𝗂𝗉(π’œ(𝑔)) ∈ 𝑆(𝐸; 𝑔) + 𝐡(0, π‘Ÿ)},

Lemma 2.5 ensures 𝐏_{𝑔′ ∼ β„’_{1βˆ’πœ€}(𝑔)}(𝑆solve) β‰₯ 𝑝solveΒ². We keep the same definitions
of 𝑝unstable and 𝑝cond as in (2.9), again replacing π’œ with π’œΜ‚π‘Ÿ. Thus, we choose πœ€ ≔ logβ‚‚(𝑁/𝐷)/𝑁, so that 𝐏(𝑆diff) β†’ 1. Additionally, choose πœ‚ as in Lemma 2.3, so that 𝑝cond = π‘œ(1). It then suffices to show that 𝑝unstable = π‘œ(1). To see this, recall that when 𝐸 = 𝛿𝑁, our choice of πœ‚ is Θ(1), so π‘ŸΒ²/(πœ‚π‘) = π‘œ(1). In the sublinear case πœ”(logΒ² 𝑁) ≀ 𝐸 ≀ π‘œ(𝑁), we instead get

πœ‚π‘ ≳ [𝐸/(𝑁 logβ‚‚(𝐢𝑁/𝐸))] Β· 𝑁 = 𝐸/logβ‚‚(𝐢𝑁/𝐸) = πœ”(1),

since 𝐸 ≫ logΒ² 𝑁. Applying the properly modified Lemma 1.9 – knowing that the first term bounding 𝑝unstable is π‘œ(1) with these choices of πœ€ and πœ‚ – we see that 𝑝unstable = π‘œ(1), as desired. β–‘

2.6 Truly Random Rounding

To complete the proof of Theorem 1.4, we need to show that randomized rounding cannot help solve the NPP. For this, we show in Lemma 2.12 below that if one has a subcube of Σ𝑁 with dimension growing slowly with 𝑁, then at most one of those points will be a solution. Then we show that randomized rounding fails with high probability whenever it changes a diverging number of coordinates. Together with the previous subsection, which addresses 𝑂(1) coordinate changes, this completes the proof.

Below, let π’œ be an 𝐑𝑁-valued algorithm and πœ”(logΒ² 𝑁) ≀ 𝐸 ≀ 𝑁. We then define the deterministically rounded algorithm π’œ*(𝑔) ≔ π—Œπ—‚π—€π—‡(π’œ(𝑔)), which is the most likely single outcome of randomized rounding on π’œ(𝑔). Define 𝑝1(π‘₯), …, 𝑝𝑁(π‘₯) ∈ [0, 1/2] by

𝑝𝑖(π‘₯) ≔ 𝐏(π—‹π—ˆπ—Žπ—‡π–½(π‘₯)𝑖 β‰  π—Œπ—‚π—€π—‡(π‘₯)𝑖) = |π‘₯𝑖 βˆ’ π—Œπ—‚π—€π—‡(π‘₯𝑖)| / 2. (2.11)

(Here we suppress the dependence on βƒ—π‘ˆ = (π‘ˆ1, …, π‘ˆπ‘) from just
above Theorem 1.4.)

Lemma 2.10. Fix π‘₯ ∈ 𝐑𝑁. Draw 𝑁 coin flips 𝐼π‘₯,𝑖 ∼ Bern(2𝑝𝑖(π‘₯)) as well as 𝑁 signs 𝑆𝑖 ∼ Unif{±1}, all mutually independent; define the random variable π‘₯Μƒ ∈ Σ𝑁 by

π‘₯̃𝑖 ≔ 𝑆𝑖 𝐼π‘₯,𝑖 + (1 βˆ’ 𝐼π‘₯,𝑖) π—Œπ—‚π—€π—‡(π‘₯)𝑖.

Then π‘₯Μƒ ∼ π—‹π—ˆπ—Žπ—‡π–½(π‘₯).

Proof: Let π‘₯* ≔ π—Œπ—‚π—€π—‡(π‘₯). Conditioning on 𝐼π‘₯,𝑖, we can check that

𝐏(π‘₯̃𝑖 β‰  π‘₯*𝑖) = 2𝑝𝑖(π‘₯) Β· 𝐏(π‘₯̃𝑖 β‰  π‘₯*𝑖 | 𝐼π‘₯,𝑖 = 1) + (1 βˆ’ 2𝑝𝑖(π‘₯)) Β· 𝐏(π‘₯̃𝑖 β‰  π‘₯*𝑖 | 𝐼π‘₯,𝑖 = 0) = 𝑝𝑖(π‘₯).

Thus, 𝐏(π‘₯̃𝑖 = π‘₯*𝑖) = 𝐏(π—‹π—ˆπ—Žπ—‡π–½(π‘₯)𝑖 = π‘₯*𝑖). β–‘

By Lemma 2.10, we can redefine π—‹π—ˆπ—Žπ—‡π–½(π‘₯) to be π‘₯Μƒ as constructed above without loss of generality. It thus makes sense to define π’œΜƒ(𝑔) ≔ π—‹π—ˆπ—Žπ—‡π–½(π’œ(𝑔)), which is now always Σ𝑁-valued. We have seen in Proposition 2.9 that when β€–π’œΜƒ βˆ’ π’œ*β€– ≀ 𝑂(1), low degree hardness will still apply. On the other hand, when β€–π’œΜƒ βˆ’ π’œ*‖₁ β‰₯ πœ”(1), any rounding scheme will introduce so much randomness that π‘₯Μƒ will effectively be a random point, which has a vanishing probability of being a solution due to the local sparsity of the solution set. (These two cases suffice, as β€–π’œΜƒ βˆ’ π’œ*β€– ≀ β€–π’œΜƒ βˆ’ π’œ*‖₁.)

To see this, we first show that any randomized rounding scheme as in Lemma 2.10 which differs with high probability from π’œ* must resample a diverging number of coordinates.

Lemma 2.11. Fix π‘₯ ∈ 𝐑𝑁, and let 𝑝1(π‘₯), …, 𝑝𝑁(π‘₯) be defined as in (2.11). Then π‘₯Μƒ β‰  π‘₯* holds with probability 1 βˆ’ π‘œ(1) if and only if βˆ‘π‘– 𝑝𝑖(π‘₯) = πœ”(1). Moreover, if βˆ‘π‘– 𝑝𝑖(π‘₯) = πœ”(1)
, the number of coordinates in which π‘₯Μƒ is resampled diverges in probability (i.e., exceeds any fixed π‘š < ∞ with probability 1 βˆ’ π‘œ(1)).

Proof: Recall that for 𝑑 ∈ [0, 1/2], we have βˆ’logβ‚‚(1 βˆ’ 𝑑) = Θ(𝑑). Thus, as each coordinate of π‘₯ is rounded independently, we can compute

𝐏(π‘₯Μƒ = π‘₯*) = βˆπ‘– (1 βˆ’ 𝑝𝑖(π‘₯)) = exp2(βˆ‘π‘– logβ‚‚(1 βˆ’ 𝑝𝑖(π‘₯))) ≀ exp2(βˆ’Ξ˜(βˆ‘π‘– 𝑝𝑖(π‘₯))).

Thus, 𝐏(π‘₯Μƒ = π‘₯*) = π‘œ(1) if and only if βˆ‘π‘– 𝑝𝑖(π‘₯) = πœ”(1). Next, following the construction of π‘₯Μƒ in Lemma 2.10, let 𝐸𝑖 = {𝐼π‘₯,𝑖 = 1} be the event that π‘₯̃𝑖 is resampled from Unif{±1}, independently of π‘₯*𝑖. The 𝐸𝑖 are independent Bernoulli variables, so βˆ‘π‘– 𝐸𝑖 has variance at most its mean. Hence the second claim follows by Chebyshev's inequality. β–‘

The following Lemma 2.12 takes advantage of this by showing that NPP solutions are isolated.

Lemma 2.12 (Solutions Repel). Consider any distance π‘˜ = Ξ©(1) and energy level 𝐸 ≫ π‘˜ log 𝑁. With high probability, there are no pairs of distinct solutions π‘₯, π‘₯β€² ∈ 𝑆(𝐸; 𝑔) to an instance 𝑔 with β€–π‘₯ βˆ’ π‘₯β€²β€– ≀ 2βˆšπ‘˜ (i.e., within π‘˜ sign flips of each other):

𝐏(βˆƒ (π‘₯, π‘₯β€²) ∈ 𝑆(𝐸; 𝑔) s.t. β€–π‘₯ βˆ’ π‘₯β€²β€– ≀ 2βˆšπ‘˜) ≀ exp2(βˆ’πΈ + 𝑂(π‘˜ log 𝑁)) = π‘œ(1). (2.12)

Proof: Consider any π‘₯ β‰  π‘₯β€², and let 𝐽 βŠ† [𝑁] denote the coordinates in which π‘₯, π‘₯β€² differ. Then π‘₯ = π‘₯_{𝐽ᢜ} + π‘₯_𝐽 and π‘₯β€² = π‘₯_{𝐽ᢜ} βˆ’ π‘₯_𝐽. Assuming both π‘₯, π‘₯β€² ∈ 𝑆(𝐸; 𝑔), we can expand the inequalities βˆ’2^{βˆ’πΈ} ≀ ⟨𝑔, π‘₯⟩, ⟨𝑔, π‘₯β€²βŸ© ≀ 2^{βˆ’πΈ} into

βˆ’2^{βˆ’πΈ} ≀ ⟨𝑔, π‘₯_{𝐽ᢜ}⟩ + ⟨𝑔, π‘₯_𝐽⟩
≀ 2^{βˆ’πΈ},  βˆ’2^{βˆ’πΈ} ≀ ⟨𝑔, π‘₯_{𝐽ᢜ}⟩ βˆ’ ⟨𝑔, π‘₯_𝐽⟩ ≀ 2^{βˆ’πΈ}.

Multiplying the lower inequality by βˆ’1 and adding the resulting inequalities gives |⟨𝑔, π‘₯_𝐽⟩| ≀ 2^{βˆ’πΈ}. Thus, finding a pair of distinct solutions within distance 2βˆšπ‘˜ implies finding a subset 𝐽 βŠ† [𝑁] of at most π‘˜ coordinates and |𝐽| signs π‘₯_𝐽 such that |βŸ¨π‘”_𝐽, π‘₯_𝐽⟩| ≀ 2^{βˆ’πΈ}. By [Ver18, Exer. 0.0.5], there are

βˆ‘_{1 ≀ π‘˜β€² ≀ π‘˜} (𝑁 choose π‘˜β€²) ≀ (𝑒𝑁/π‘˜)^π‘˜ ≀ (𝑒𝑁)^π‘˜ = 2^{𝑂(π‘˜ log 𝑁)}

choices of such subsets, and at most 2^π‘˜ choices of signs. Now, βŸ¨π‘”_𝐽, π‘₯_𝐽⟩ ∼ 𝒩(0, |𝐽|), and as |𝐽| β‰₯ 1, Lemma 2.1 and the following remark imply

𝐏(|βŸ¨π‘”_𝐽, π‘₯_𝐽⟩| ≀ 2^{βˆ’πΈ}) ≀ exp2(βˆ’πΈ + 𝑂(1)).

Union bounding this over the 2^{𝑂(π‘˜ log 𝑁)} possibilities gives (2.12). β–‘

Next we deduce strong hardness for β€œtruly randomized” algorithms which resample a diverging number of coordinates. Roughly, this holds because if enough coordinates are resampled, the resulting point is conditionally random within a subcube of Σ𝑁 whose dimension grows slowly with 𝑁, which by Lemma 2.12 can contain only a single solution.

Theorem 2.13. Let π‘₯ = π’œ(𝑔), and define π‘₯*, π‘₯Μƒ, etc., as previously. Moreover, assume that for any π‘₯ in the possible outputs of π’œ, we have βˆ‘π‘– 𝑝𝑖(π‘₯) = πœ”(1). Then, for any 𝐸 β‰₯ πœ”(logΒ² 𝑁), we have

𝐏(π’œΜƒ(𝑔) ∈ 𝑆(𝐸; 𝑔)) = 𝐏(π‘₯Μƒ ∈ 𝑆(𝐸; 𝑔)) ≀ π‘œ(1).

Proof: Following the characterization of π‘₯Μƒ in Lemma 2.10, let 𝐾 ≔ min(logβ‚‚ 𝑁, βˆ‘π‘– 𝐼π‘₯,𝑖). By the assumption on βˆ‘π‘– 𝑝𝑖(π‘₯) and Lemma 2.11, we know that 𝐾, which is at most the number of coordinates which are resampled, is bounded as 1 β‰ͺ 𝐾 ≀ logβ‚‚ 𝑁,
for any possible π‘₯ = π’œ(𝑔). Now, let 𝐽 βŠ† [𝑁] denote the set of the first 𝐾 coordinates to be resampled, so that 𝐾 = |𝐽|, and consider 𝐏(π‘₯Μƒ ∈ 𝑆(𝐸; 𝑔) | π‘₯Μƒ_{𝐽ᢜ}), where we fix the coordinates outside of 𝐽 and let π‘₯Μƒ be uniformly sampled from a 𝐾-dimensional subcube of Σ𝑁. All such π‘₯Μƒ are within distance 2√𝐾 of each other, so by Lemma 2.12, the probability that there is more than one such π‘₯Μƒ ∈ 𝑆(𝐸; 𝑔) is bounded by

exp2(βˆ’πΈ + 𝑂(𝐾 log 𝑁)) ≀ exp2(βˆ’πΈ + 𝑂(logΒ² 𝑁)) = π‘œ(1),

by the assumption on 𝐸. Thus, outside this event, the probability that any of the π‘₯Μƒ is in 𝑆(𝐸; 𝑔) is bounded by 2^{βˆ’πΎ}, whence

𝐏(π‘₯Μƒ ∈ 𝑆(𝐸; 𝑔)) = 𝐄[𝐏(π‘₯Μƒ ∈ 𝑆(𝐸; 𝑔) | π‘₯Μƒ_{𝐽ᢜ})] ≀ 𝐄[2^{βˆ’πΎ}] + π‘œ(1) = π‘œ(1). β–‘

Finally, we combine the preceding results to deduce Theorem 1.4.

Proof of Theorem 1.4: Fix π‘Ÿ > 0, an arbitrarily large integer (but fixed as 𝑁 grows). First, we know from Proposition 2.9 that 𝐏(π’œΜ‚π‘Ÿ(𝑔) ∈ 𝑆(𝐸; 𝑔)) = π‘œ_{π‘β†’βˆž}(1) for any fixed π‘Ÿ. Therefore, by taking π‘Ÿ large slowly with 𝑁, it suffices to show that

𝐏(π’œΜƒ(𝑔) ∈ 𝑆(𝐸; 𝑔), π’œΜ‚π‘Ÿ(𝑔) βˆ‰ 𝑆(𝐸; 𝑔)) ≀ π‘œ_{π‘Ÿβ†’βˆž}(1).

Indeed, since β€–π‘₯ βˆ’ π‘₯*β€–β‚‚ ≀ β€–π‘₯ βˆ’ π‘₯*‖₁, the left-hand probability tends to 0 uniformly as π‘Ÿ grows by Theorem 2.13. This completes the proof. β–‘

Acknowledgement. Thanks to Brice Huang and Subhabrata Sen for comments and discussions.

References

[AC08] D. Achlioptas and A. Coja-Oghlan, β€œAlgorithmic Barriers
from Phase Transitions,” in 2008 49th Annual IEEE Symposium on Foundations of Computer Science, Oct. 2008, pp. 793–802, doi: 10.1109/FOCS.2008.11.

[ACR11] D. Achlioptas, A. Coja-Oghlan, and F. Ricci-Tersenghi, β€œOn the solution-space geometry of random constraint satisfaction problems,” Random Structures & Algorithms, vol. 38, no. 3, pp. 251–268, 2011.

[AFG96] M. F. ArgΓΌello, T. A. Feo, and O. Goldschmidt, β€œRandomized Methods for the Number Partitioning Problem,” Computers & Operations Research, vol. 23, no. 2, pp. 103–111, Feb. 1996, doi: 10.1016/0305-0548(95)E0020-L.

[AG24] A. E. Alaoui and D. Gamarnik, β€œHardness of sampling solutions from the Symmetric Binary Perceptron,” arXiv preprint arXiv:2407.16627, 2024.

[AGJ20] G. B. Arous, R. Gheissari, and A. Jagannath, β€œAlgorithmic Thresholds for Tensor PCA,” The Annals of Probability, vol. 48, no. 4, pp. 2052–2087, Jul. 2020, doi: 10.1214/19-AOP1415.

[Ala24] A. E. Alaoui, β€œNear-optimal shattering in the Ising pure p-spin and rarity of solutions returned by stable algorithms,” arXiv preprint arXiv:2412.03511, 2024.

[Ali+05] B. Alidaee, F. Glover, G. A. Kochenberger, and C. Rego, β€œA New Modeling and Solution Approach for the Number Partitioning Problem,” Journal of Applied Mathematics and Decision Sciences, vol. 2005, no. 2, pp. 113–121, Jan. 2005, doi: 10.1155/JAMDS.2005.113.

[AMS25] A. E. Alaoui, A. Montanari, and M. Sellke, β€œShattering in pure spherical spin glasses,” Communications in Mathematical Physics, vol. 406, no. 5, pp. 1–36, 2025.

[AWZ23] G. B. Arous, A. S. Wein, and I. Zadik, β€œFree energy wells and overlap gap property in sparse PCA,” Communications on Pure and Applied Mathematics, vol. 76, no. 10, pp. 2410–2473, 2023.

[Bar+19] B. Barak, S. Hopkins, J. Kelner, P. K. Kothari, A. Moitra, and A. Potechin, β€œA nearly tight sum-of-squares lower bound for the planted clique problem,” SIAM Journal on Computing, vol. 48, no. 2, pp. 687–735, 2019.

[BCP01] C. Borgs, J. Chayes, and B. Pittel, β€œPhase Transition and Finite-size Scaling for the Integer Partitioning Problem,” Random Structures & Algorithms, vol. 19, no. 3–4, pp. 247–288, Oct. 2001, doi: 10.1002/rsa.10004.

[BFM04] H. Bauke, S. Franz, and S. Mertens, β€œNumber Partitioning as a Random Energy Model,” Journal of Statistical Mechanics: Theory and Experiment, vol. 2004, no. 4, p. P04003, Apr. 2004, doi: 10.1088/1742-5468/2004/04/P04003.

[BH21] G. Bresler and B. Huang, β€œThe Algorithmic Phase Transition of Random k-SAT for Low Degree Polynomials,” in 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), 2021, pp. 298–309, doi: 10.1109/FOCS52979.2021.00038.

[BM04] H. Bauke and S. Mertens, β€œUniversality in the Level Statistics of Disordered Systems,” Physical Review E, vol. 70, no. 2, p. 025102, Aug. 2004, doi: 10.1103/PhysRevE.70.025102.

[BM08] S. Boettcher and S. Mertens, β€œAnalysis of the Karmarkar-Karp Differencing Algorithm,” The European Physical Journal B, vol. 65, no. 1, pp. 131–140, Sep. 2008, doi: 10.1140/epjb/e2008-00320-9.

[Bor+09a] C. Borgs, J. Chayes, S. Mertens, and C. Nair, β€œProof of the Local REM Conjecture for Number Partitioning. I: Constant Energy Scales,” Random Structures & Algorithms, vol. 34, no. 2, pp. 217–240, 2009, doi: 10.1002/rsa.20255.

[Bor+09b] C. Borgs, J. Chayes, S. Mertens, and C. Nair, β€œProof of the Local REM Conjecture for Number Partitioning. II: Growing Energy Scales,” Random Structures & Algorithms, vol. 34, no. 2, pp. 241–284, 2009, doi: 10.1002/rsa.20256.

[BR13] Q. Berthet and P. Rigollet, β€œComplexity theoretic lower bounds for sparse principal component detection,” Journal of Machine Learning Research, 2013.

[CE15] A. Coja-Oghlan and C. Efthymiou, β€œOn Independent Sets in Random Graphs,” Random Structures & Algorithms, vol. 47, no. 3, pp. 436–486, Oct. 2015, doi: 10.1002/rsa.20550.

[CGJ78] E. G. Coffman Jr., M. R. Garey, and D. S. Johnson, β€œAn Application of Bin-Packing to Multiprocessor Scheduling,” SIAM Journal on Computing, vol. 7, no. 1, pp. 1–17, Feb. 1978, doi: 10.1137/0207001.

[Che+19] W.-K. Chen, D. Gamarnik, D. Panchenko, and M. Rahman, β€œSuboptimality of Local Algorithms for a Class of Max-Cut Problems,” The Annals of Probability, vol. 47, no. 3, May 2019, doi: 10.1214/18-AOP1291.

[CL91] E. G. Coffman and G. S. Lueker, Probabilistic Analysis of Packing and Partitioning Algorithms, in Wiley-Interscience Series in Discrete Mathematics and Optimization. New York: J. Wiley & Sons, 1991.

[COY19] D. Corus, P. S. Oliveto, and D. Yazdani, β€œArtificial Immune Systems Can Find Arbitrarily Good Approximations for the NP-hard Number Partitioning Problem,” Artificial Intelligence, vol. 274, pp. 180–196, Sep. 2019, doi: 10.1016/j.artint.2019.03.001.

[Der80] B. Derrida, β€œRandom-Energy Model: Limit of a Family of Disordered Models,” Physical Review Letters, vol. 45, no. 2, pp. 79–82, Jul. 1980, doi: 10.1103/PhysRevLett.45.79.

[Der81] B. Derrida, β€œRandom-Energy Model: An Exactly Solvable Model of Disordered Systems,” Physical Review B, vol. 24, no. 5, pp. 2613–2626, Sep. 1981, doi: 10.1103/PhysRevB.24.2613.

[DGH25] H. Du, S. Gong, and R. Huang, β€œThe algorithmic phase transition of random graph alignment problem,” Probability Theory and Related Fields, pp. 1–56, 2025.

[Din+24] Y. Ding, D. Kunisky, A. S. Wein, and A. S. Bandeira, β€œSubexponential-time algorithms for sparse PCA,” Foundations of Computational Mathematics, vol. 24, no. 3, pp. 865–914, 2024.

[DM15] Y. Deshpande and A. Montanari, β€œImproved sum-of-squares lower bounds for hidden clique and hidden submatrix problems,” in Conference on Learning Theory, 2015, pp. 523–562.

[Gam+22] D. Gamarnik, E. C. Kızıldağ, W. Perkins, and C. Xu, β€œAlgorithms and barriers in the symmetric binary perceptron model,” in 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), 2022, pp. 576–587.

[Gam+23] D. Gamarnik, E. C. Kizildağ, W. Perkins, and C. Xu, β€œGeometric barriers for stable and online algorithms for discrepancy minimization,” in Conference on Learning Theory, 2023, pp. 3231–3263.

[Gam21] D. Gamarnik, β€œThe Overlap Gap Property: A Geometric Barrier to Optimizing over Random Structures,” Proceedings of the National Academy of Sciences, vol. 118, no. 41, p. e2108492118, Oct. 2021, doi: 10.1073/pnas.2108492118.

[GJ21] D. Gamarnik and A. Jagannath, β€œThe overlap gap property and approximate message passing algorithms for p-spin models,” The Annals of Probability, vol. 49, no. 1, pp. 180–205, 2021.

[GJ79] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, in A Series of Books in the Mathematical Sciences. New York: W. H. Freeman, 1979.

[GJK23] D. Gamarnik, A. Jagannath, and E. C. Kızıldağ, β€œShattering in the Ising pure 𝑝-spin model,” arXiv preprint arXiv:2307.07461, 2023.

[GJS21] D. Gamarnik, A. Jagannath, and S. Sen, β€œThe Overlap Gap Property in Principal Submatrix Recovery,” Probability Theory and Related Fields, vol. 181, no. 4, pp. 757–814, Dec. 2021, doi: 10.1007/s00440-021-01089-7.

[GJW24] D. Gamarnik, A. Jagannath, and A. S. Wein, β€œHardness of Random Optimization Problems for Boolean Circuits, Low-Degree Polynomials, and Langevin Dynamics,” SIAM Journal on Computing, vol. 53, no. 1, pp. 1–46, 2024.

[GK23] D. Gamarnik and E. C. Kızıldağ, β€œAlgorithmic obstructions in the random number partitioning problem,” The Annals of Applied Probability, vol. 33, no. 6B, pp. 5497–5563, 2023.

[GL18] D. Gamarnik and Q. Li, β€œFinding a Large Submatrix of a Gaussian Random Matrix,” Ann. Stat., vol. 46, no. 6A, pp. 2511–2561, 2018.

[Gra69] R. L. Graham, β€œBounds on Multiprocessing Timing Anomalies,” SIAM Journal on Applied Mathematics, vol. 17, no. 2, pp. 416–429, Mar. 1969, doi: 10.1137/0117039.

[GS14] D. Gamarnik and M. Sudan, β€œLimits of Local Algorithms over Sparse Random Graphs,” in Proceedings of the 5th Conference on Innovations in Theoretical Computer Science (ITCS '14), New York, NY, USA: Association for Computing Machinery, Jan. 2014, pp. 369–376, doi: 10.1145/2554797.2554831.

[GS17] D. Gamarnik and M. Sudan, β€œPerformance of sequential local algorithms for the random NAE-𝐾-SAT problem,” SIAM Journal on Computing, vol. 46, no. 2, pp. 590–619, 2017.

[GW00] I. Gent and T. Walsh, β€œPhase Transitions and Annealed Theories: Number Partitioning as a Case Study,” Instituto Cultura, Jun. 2000.

[GW98] I. P. Gent and T. Walsh, β€œAnalysis of Heuristics for Number Partitioning,” Computational Intelligence, vol. 14, no. 3, pp. 430–451, 1998, doi: 10.1111/0824-7935.00069.

[GZ22] D. Gamarnik and I. Zadik, β€œSparse high-dimensional linear regression. Estimating squared error and a phase transition,” Ann. Stat., vol. 50, no. 2, pp. 880–903, 2022.

[GZ24] D. Gamarnik and I. Zadik, β€œThe landscape of the planted clique problem: Dense subgraphs and the overlap gap property,” The Annals of Applied Probability, vol. 34, no. 4, pp. 3375–3434, 2024.

[Har+24] C. Harshaw, F. SΓ€vje, D. A. Spielman, and P. Zhang, β€œBalancing covariates in randomized experiments with the Gram–Schmidt walk design,” Journal of the American Statistical Association, vol. 119, no. 548, pp. 2934–2946, 2024.

[Hay02] B. Hayes, β€œThe Easiest Hard Problem,” American Scientist, vol. 90, no. 2, pp. 113–117, Apr. 2002.

[Hob+17] R. Hoberg, H. Ramadas, T. Rothvoss, and X. Yang, β€œNumber balancing is as hard as Minkowski's theorem and shortest vector,” in International Conference on Integer Programming and Combinatorial Optimization, 2017, pp. 254–266.

[Hop+17] S. B. Hopkins, P. K. Kothari, A. Potechin, P. Raghavendra, T. Schramm, and D. Steurer, β€œThe power of sum-of-squares for detecting hidden structures,” in 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), 2017, pp. 720–731.

[Hop18] S. Hopkins, β€œStatistical Inference and the Sum of Squares Method,” 2018. [Online]. Available: https://www.samuelbhopkins.com/thesis.pdf

[HS23] B. Huang and M. Sellke, β€œAlgorithmic Threshold for Multi-Species Spherical Spin Glasses,” arXiv:2303.12172, 2023.

[HS25a] B. Huang and M. Sellke, β€œTight Lipschitz Hardness for Optimizing Mean Field Spin Glasses,” Communications on Pure and Applied Mathematics, vol. 78, no. 1, pp. 60–119, 2025.

[HS25b] B. Huang and M. Sellke, β€œStrong Low Degree Hardness for Stable Local Optima in Spin Glasses.” [Online]. Available: http://arxiv.org/abs/2501.06427

[HSS15] S. B. Hopkins, J. Shi, and D. Steurer, β€œTensor principal component analysis via sum-of-square proofs,” in Conference on Learning Theory, 2015, pp. 956–1006.

[Jer92] M. Jerrum, β€œLarge Cliques Elude the Metropolis Process,” Random Structures & Algorithms, vol. 3, no. 4, pp. 347–359, Jan. 1992, doi: 10.1002/rsa.3240030402.

[Joh+89] D. S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon, β€œOptimization by Simulated Annealing: An Experimental Evaluation; Part I, Graph Partitioning,” Operations Research, vol. 37, no. 6, pp. 865–892, 1989. [Online]. Available: http://www.jstor.org/stable/171470

[Joh+91] D. S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon, β€œOptimization by Simulated Annealing: An Experimental Evaluation; Part II, Graph Coloring and Number Partitioning,” Operations Research, vol. 39, no. 3, pp. 378–406, 1991. [Online]. Available: http://www.jstor.org/stable/171393

[Jon+23] C. Jones, K. Marwaha, J. S. Sandhu, and J. Shi, β€œRandom Max-CSPs Inherit Algorithmic Hardness from Spin Glasses,” in 14th Innovations in Theoretical Computer Science Conference (ITCS 2023), 2023, p. 77.

[KAK19] A. M. Krieger, D. Azriel, and A. Kapelner, β€œNearly Random Designs with Greatly Improved Balance,” Biometrika, vol. 106, no. 3, pp. 695–701, Sep. 2019, doi: 10.1093/biomet/asz026.

[Kar+86] N. Karmarkar, R. M. Karp, G. S. Lueker, and A. M. Odlyzko, β€œProbabilistic Analysis of Optimum Partitioning,” Journal of Applied Probability, vol. 23, no. 3, pp. 626–645, 1986, doi: 10.2307/3214002.

[Kis15] N. Kistler, β€œDerrida's random energy models: From spin glasses to the extremes of correlated random fields,” Correlated Random Systems: Five Different Methods: CIRM Jean-Morlet Chair, Spring 2013, Springer, pp. 71–120, 2015.

[KK83] N. Karmarkar and R. M. Karp, β€œThe Differencing Method of Set Partitioning,” 1983. [Online]. Available: https://www2.eecs.berkeley.edu/Pubs/TechRpts/1983/6353.html

[KKS14] J. Kratica, J. KojiΔ‡, and A. SaviΔ‡, β€œTwo Metaheuristic Approaches for Solving Multidimensional Two-Way Number Partitioning Problem,” Computers & Operations Research, vol. 46, pp. 59–68, Jun. 2014, doi: 10.1016/j.cor.2014.01.003.

[Kot+17] P. K. Kothari, R. Mori, R. O'Donnell, and D. Witmer, β€œSum of squares lower bounds for refuting any CSP,” in Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017, pp. 132–145.

[Kun24] D. Kunisky, β€œLow Coordinate Degree Algorithms I: Universality of Computational Thresholds for Hypothesis Testing.” [Online]. Available: http://arxiv.org/abs/2403.07862

[KWB19] D. Kunisky, A. S. Wein, and A. S. Bandeira, β€œNotes on computational hardness of hypothesis testing: Predictions using the low-degree likelihood ratio,” in ISAAC Congress (International Society for Analysis, its Applications and Computation), 2019, pp. 1–50.

[LKZ15a] T. Lesieur, F. Krzakala, and L. ZdeborovΓ‘, β€œMMSE of Probabilistic Low-Rank Matrix Estimation: Universality with Respect to the Output Channel,” in 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Sep. 2015, pp. 680–687, doi: 10.1109/ALLERTON.2015.7447070.

[LKZ15b] T. Lesieur, F. Krzakala, and L. Zdeborova, β€œPhase Transitions in Sparse PCA,” in 2015 IEEE International Symposium on Information Theory (ISIT), Jun. 2015, pp. 1635–1639, doi: 10.1109/ISIT.2015.7282733.

[LS24] S. Li and T. Schramm, β€œSome Easy Optimization Problems Have the Overlap-Gap Property.” [Online]. Available: http://arxiv.org/abs/2411.01836

[LSZ24] S. Li, T. Schramm, and K. Zhou, β€œDiscrepancy algorithms for the binary perceptron,” arXiv:2408.00796, 2024.

[Lue87] G. S. Lueker, β€œA Note on the Average-Case Behavior of a Simple Differencing Method for Partitioning,” Operations Research Letters, vol. 6, no. 6, pp. 285–287, Dec. 1987, doi: 10.1016/0167-6377(87)90044-7.

[Mer01] S. Mertens, β€œA Physicist's Approach to Number Partitioning,” Theoretical Computer Science, vol. 265, no. 1, pp. 79–108, Aug. 2001, doi: 10.1016/S0304-3975(01)00153-0.

[Mer06] S. Mertens, β€œThe Easiest Hard Problem: Number Partitioning,” Computational Complexity and Statistical Physics, Chapter 5; Editors A. Percus, G. Istrate, and C. Moore, pp. 125–160, 2006.

[MH78] R. Merkle and M. Hellman, β€œHiding Information and Signatures in Trapdoor Knapsacks,” IEEE Transactions on Information Theory, vol. 24, no. 5, pp. 525–530, Sep. 1978, doi: 10.1109/TIT.1978.1055927.

[MMZ05] M. MΓ©zard, T. Mora, and R. Zecchina, β€œClustering of Solutions in the Random Satisfiability Problem,” Physical Review Letters, vol. 94, no. 19, p. 197205, May 2005, doi: 10.1103/PhysRevLett.94.197205.

[MPW15] R. Meka, A. Potechin, and A. Wigderson, β€œSum-of-Squares Lower Bounds for Planted Clique,” in Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, Portland, Oregon, USA: ACM, Jun. 2015, pp. 87–96, doi: 10.1145/2746539.2746600.

[O'D14] R. O'Donnell, Analysis of Boolean Functions. Cambridge University Press, 2014.

[RV17] M. Rahman and B. Virag, β€œLocal Algorithms for Independent Sets Are Half-Optimal,” The Annals of Probability, vol. 45, no. 3, May 2017, doi: 10.1214/16-AOP1094.

[SBD21] V. Santucci, M. Baioletti, and G. Di Bari, β€œAn Improved Memetic Algebraic Differential Evolution for Solving the Multidimensional Two-Way Number Partitioning Problem,” Expert Systems with Applications, vol. 178, p. 114938, Sep. 2021, doi: 10.1016/j.eswa.2021.114938.

[SFD96] R. H. Storer, S. W. Flanders, and S. David Wu, β€œProblem Space Local Search for Number Partitioning,” Annals of Operations Research, vol. 63, no. 4, pp. 463–487, Aug. 1996, doi: 10.1007/BF02156630.

[Sha82] A. Shamir, β€œA Polynomial Time Algorithm for Breaking the Basic Merkle-Hellman Cryptosystem,” in 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982), Nov. 1982, pp. 145–152, doi: 10.1109/SFCS.1982.5.

[TMR20] P. Turner, R. Meka, and P. Rigollet, β€œBalancing Gaussian vectors in high dimension,” in Conference on Learning Theory, 2020, pp. 3455–3486.

[Tsa92] L.-H. Tsai, β€œAsymptotic Analysis of an Algorithm for Balanced Parallel Processor Scheduling,” SIAM Journal on Computing, vol. 21, no. 1, pp. 59–64, Feb. 1992, doi: 10.1137/0221007.

[Ver18] R. Vershynin, High-Dimensional Probability: An Introduction with Applications in Data Science, 1st ed., in Cambridge Series in Statistical and Probabilistic Mathematics. New York, NY: Cambridge University Press, 2018.

[VV25] N. Vafa and V. Vaikuntanathan, β€œSymmetric Perceptrons, Number Partitioning and Lattices.” [Online]. Available: http://arxiv.org/abs/2501.16517

[Wei22] A. S. Wein, β€œOptimal low-degree hardness of maximum independent set,” Mathematical Statistics and Learning, vol. 4, no. 3, pp. 221–251, 2022.

[WEM19] A. S. Wein, A. El Alaoui, and C. Moore, β€œThe Kikuchi hierarchy and tensor PCA,” in 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), 2019, pp. 1446–1468.

[Yak96] B. Yakir, β€œThe Differencing Algorithm LDM for Partitioning: A Proof of a Conjecture of Karmarkar and Karp,” Mathematics of Operations Research, vol. 21, no. 1, pp. 85–99, Feb. 1996, doi: 10.1287/moor.21.1.85.
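As a brief numerical aside to the proofs of Section 2.6 above (not part of the paper; parameter values are illustrative), two of the elementary ingredients can be checked directly in code: the subset-counting bound βˆ‘_{1 ≀ k' ≀ k} (N choose k') ≀ (eN/k)^k used in Lemma 2.12, and the per-coordinate flip probability pα΅’(x) = |xα΅’ βˆ’ sign(xα΅’)|/2 from (2.11).

```python
import math
import random

# Check the counting bound of Lemma 2.12 for a few illustrative (N, k):
# sum_{k'=1}^{k} C(N, k') <= (e*N/k)^k <= (e*N)^k.
for N, k in [(50, 3), (200, 5), (1000, 8)]:
    total = sum(math.comb(N, kp) for kp in range(1, k + 1))
    assert total <= (math.e * N / k) ** k <= (math.e * N) ** k

# Empirically check the flip probability (2.11): rounding x_i = 0.4 via
# sign(x_i - U) with U ~ Unif[-1, 1] disagrees with sign(x_i) = +1
# exactly when U >= 0.4, i.e. with probability |0.4 - 1| / 2 = 0.3.
random.seed(0)
n = 200_000
flips = sum(0.4 - random.uniform(-1, 1) <= 0 for _ in range(n)) / n
print(round(flips, 3))  # empirical frequency, close to 0.3
```

The Monte Carlo estimate is only accurate to about ±0.003 at this sample size, but that is enough to see agreement with the exact value 0.3.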
arXiv:2505.20668v1 [math.ST] 27 May 2025

Eigenstructure inference for high-dimensional covariance with generalized shrinkage inverse-Wishart prior

Seongmin Kim1, Kwangmin Lee2, Sewon Park3, and Jaeyong Lee1

1Department of Statistics, Seoul National University
2Department of Big Data Convergence, Chonnam National University
3Security Algorithm Lab, Samsung SDS

May 28, 2025

Abstract

In multivariate statistics, estimating the covariance matrix is essential for understanding the interdependence among variables. In high-dimensional settings, where the number of covariates increases with the sample size, it is well known that the eigenstructure of the sample covariance matrix is inconsistent. The inverse-Wishart prior, a standard choice for covariance estimation in Bayesian inference, also suffers from posterior inconsistency.

To address the issue of eigenvalue dispersion in high-dimensional settings, the shrinkage inverse-Wishart (SIW) prior has recently been proposed. Despite its conceptual appeal and empirical success, the asymptotic justification for the SIW prior has remained limited.

In this paper, we propose a generalized shrinkage inverse-Wishart (gSIW) prior for high-dimensional covariance modeling. By extending the SIW framework, the gSIW prior accommodates a broader class of prior distributions and facilitates the derivation of theoretical properties under specific parameter choices. In particular, under the spiked covariance assumption, we establish the asymptotic behavior of the posterior distribution for both eigenvalues and eigenvectors by directly evaluating the posterior expectations for two sets of parameter choices. This direct evaluation provides insights into the large-sample behavior of the posterior that cannot be obtained through general posterior asymptotic theorems.
Finally, simulation studies illustrate that the proposed prior provides accurate estimation of the eigenstructure, particularly for spiked eigenvalues, achieving narrower credible intervals and higher coverage probabilities compared to existing methods. For spiked eigenvectors, the performance is generally comparable to that of competing approaches, including the sample covariance.

1 Introduction

Suppose X_1, ..., X_n are p-dimensional independent samples from N(0, Ξ£), where Ξ£ is a pΓ—p covariance matrix. In the study of multivariate statistics, estimating the covariance matrix is crucial for understanding the interdependence among variables. Extensive research has been conducted on covariance estimation, including Efron and Morris (1976), Dey and Srinivasan (1985), and Daniels and Kass (2001).

In high-dimensional settings, where the number of covariates increases with the sample size, Johnstone (2001) and Johnstone and Lu (2009) demonstrated the inconsistency of the largest eigenvalue and eigenvector of the sample covariance matrix.

To address the challenges in high-dimensional settings, several shrinkage estimators have been proposed. Stein (1975), Haff (1980), Ledoit and Wolf (2004) and Bodnar et al. (2014) suggested shrinkage estimators for large covariances by minimizing specific loss functions. Additionally, Karoui (2008), Ledoit and Wolf (2012), and Lam (2016) proposed orthogonally invariant shrinkage estimators based on random matrix theory. These methods require the ratio p/n to be bounded, which limits their applicability in ultra-high dimensional settings. In contrast, we consider the regime where p/n β†’ ∞.

Some structural assumptions are necessary to address the problem of estimating high-dimensional covariance where p/n β†’ ∞. The first structural assumption is sparsity, either in the covariance or the precision matrix. In the Frequentist literature, Bickel and Levina (2008a), Cai and Zhou (2012), and Cai et al. (2016)
https://arxiv.org/abs/2505.20668v1
suggested sparse covariance estimation using thresholding methods. In the Bayesian literature, Banerjee and Ghosal (2015) suggested sparse precision estimation using the Gaussian graphical model. Lee et al. (2022) introduced a beta-mixture shrinkage prior for sparse covariance and showed that the posterior has a nearly optimal contraction rate. Recently, Lee and Lee (2023) suggested a post-processed posterior for the sparse covariance model. Another structural assumption is bandedness. In the Frequentist literature, Bickel and Levina (2008b), Cai et al. (2010) and Bien et al. (2016) proposed banded covariance estimation using tapering or banding methods. In the Bayesian literature, Banerjee and Ghosal (2014) introduced banded precision estimation by using the Gaussian graphical model. Recently, Lee et al. (2023) suggested a post-processed posterior for the banded covariance model. These assumptions of sparsity or bandedness require prior knowledge of the covariance matrix structure, which may not always be available.

Without requiring sparsity or bandedness assumptions on the covariance, the spiked covariance assumption was initially proposed by Johnstone (2001). The spiked covariance Ξ£ is composed of k large eigenvalues, while the remaining eigenvalues are relatively small. Paul (2007), Shen et al. (2016), and Wang and Fan (2017) investigated the asymptotic properties of the eigenstructure of the sample covariance under the assumption that the true covariance is a spiked covariance. Further, Wang and Fan (2017) suggested an estimator of covariance based on the asymptotic properties of the sample covariance. In the Bayesian literature, Ning and Ning (2021) and Xie et al. (2022) proposed a prior for the spiked covariance and verified the posterior convergence rate of the covariance.

In Bayesian inference, the inverse-Wishart prior is commonly used for covariance estimation. Surprisingly, there has been little prior research in Bayesian statistics on the eigenstructures of unconstrained covariance matrices.
Recently, Lee et al. (2024) explored the asymptotic properties of the eigenstructure of covariance matrices under the inverse-Wishart prior, marking the first time such properties have been verified in a Bayesian context. Additionally, the shrinkage inverse-Wishart (SIW) prior was proposed by Berger et al. (2020) to address some of the limitations of the inverse-Wishart prior. However, despite its conceptual appeal and empirical success, the asymptotic justification for the SIW prior has remained limited.

In this paper, we propose a generalized shrinkage inverse-Wishart (gSIW) prior that accommodates a broader class of prior distributions and facilitates theoretical analysis of the eigenstructure of high-dimensional covariance matrices under specific parameter choices. It is known that the eigenvalues of the sample covariance matrix are biased (Wang and Fan 2017), and that the inverse-Wishart posterior is also biased for the eigenvalues (Lee et al. 2024). When parameters are chosen appropriately, the proposed gSIW prior offers an asymptotically unbiased estimator for the eigenstructure and enables rapid posterior computation.

There has been extensive research on posterior convergence rates for the covariance matrix in high-dimensional settings, including Banerjee and Ghosal (2014), Xiang et al. (2015), Lee et al. (2022), and Lee and Lee (2023). These studies primarily focus on the covariance or precision matrix itself, with little attention given to the eigenstructures of unconstrained covariance matrices in the Bayesian framework. Unlike these studies, which rely on the general theorems of Ghosal et al. (2000) and Ghosal and van der Vaart (2007) to establish posterior convergence rates,
we directly compute the posterior expectation. Commonly, to investigate the asymptotic properties of the posterior, researchers resort to general theorems on posterior convergence rates (e.g., Ghosal et al. (2000); Ghosal and van der Vaart (2007)). However, in this paper, we directly evaluate the posterior expectations of eigenvalues and eigenvectors. Our approach offers a couple of advantages. In particular, we derive an explicit expression for the bounds of the posterior expectations, which provides deeper statistical insights into the behavior of the posterior. Moreover, our method is applicable to general functionals of the covariance matrix.

The rest of the paper is structured as follows. In Section 2, we propose a generalization of the shrinkage inverse-Wishart prior introduced by Berger et al. (2020). We also suggest a sampling method for the posterior. Section 3 presents the posterior convergence rates for the eigenvalues and eigenvectors of the covariance matrix. Simulation studies are discussed in Section 4, followed by real data analysis in Section 5. A conclusion is provided in Section 6. The proofs of the theorems and related lemmas are provided in the Appendix.

2 Generalized shrinkage inverse-Wishart prior and its posterior

2.1 Notation

For any positive sequences a_n and b_n, we write a_n = o(b_n), or equivalently a_n β‰Ί b_n, if a_n/b_n β†’ 0 as n β†’ ∞. We write a_n = O(b_n), or equivalently a_n β‰Ό b_n, if |a_n/b_n| ≀ c for some constant c > 0. For real constants a and b, we use a ∧ b and a ∨ b to denote the minimum and maximum values between a and b, respectively. Let ||A||_F = √(tr(Aα΅€A)) denote the Frobenius norm, and ||A|| denote the spectral norm, i.e., the largest singular value of A. For a square matrix A, we use etr(A) to represent exp(tr(A)). We define the set of p-dimensional orthogonal matrices as O(p) := {U ∈ R^{pΓ—p} : Uα΅€U = I_p} and the set of pΓ—p positive definite matrices as C_p := {A ∈ R^{pΓ—p} : A is positive definite}.
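As a quick, illustrative sanity check on this notation (our own example, not part of the paper; numpy is assumed), the two matrix norms and the etr operator can be computed directly:

```python
import numpy as np

# Small illustration of the notation above (not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Frobenius norm: ||A||_F = sqrt(tr(A^T A))
fro = np.sqrt(np.trace(A.T @ A))
assert np.isclose(fro, np.linalg.norm(A, "fro"))

# Spectral norm: the largest singular value of A
spec = np.linalg.svd(A, compute_uv=False)[0]
assert np.isclose(spec, np.linalg.norm(A, 2))
assert spec <= fro  # the spectral norm never exceeds the Frobenius norm

# etr(A) = exp(tr(A))
etr = np.exp(np.trace(A))
print(fro, spec, etr)
```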
We write X ∼ InvGam(Ξ±, Ξ²) if X follows the inverse-gamma distribution with shape parameter Ξ± > 0 and scale parameter Ξ² > 0, having density proportional to x^{βˆ’Ξ±βˆ’1} exp(βˆ’Ξ²/x) for x > 0. The notation X_i ~iid P indicates that the random variables X_i are independent and identically distributed according to distribution P, while X_i ~ind P_i denotes that the X_i's are independent but not necessarily identically distributed.

Suppose Ξ£ = (Οƒ_ij, 1 ≀ i, j ≀ p) is a pΓ—p symmetric matrix, Ξ› = diag(Ξ»_1, ..., Ξ»_p) is a pΓ—p diagonal matrix, and U = [u_1, ..., u_p] ∈ O(p), where u_i is the i-th column of U. We define the matrix differential forms of these matrices as follows. Let

    (dΞ£) := β‹€_{i ≀ j} dΟƒ_ij

denote the exterior product of the distinct elements of Ξ£,

    (dΞ›) := β‹€_{i=1}^p dΞ»_i, and (dU) := β‹€_{1 ≀ i < j ≀ p} u_jα΅€ du_i,

which represents the differential form corresponding to the invariant measure on O(p). Note that the volume of a set in O(p) can be obtained by integrating (dU) over the set, i.e.,

    volume(D) = ∫_D (dU), D βŠ† O(p).

For a detailed discussion on differential forms related to matrices, see Muirhead (2009).

2.2 Generalized shrinkage inverse-Wishart prior

Suppose X_1, ..., X_n are independent random vectors following a p-dimensional multivariate normal distribution with covariance matrix Ξ£ ∈ C_p:

    X_i ~iid N(0, Ξ£), for all i = 1, ..., n.    (1)

Berger et al. (2020) proposed the shrinkage inverse-Wishart (SIW) prior, a new class of priors for Ξ£ whose densities are given by:

    Ο€_SIW(Ξ£)(dΞ£) = Ο€_SIW(Ξ£ | a, b, H)(dΞ£) ∝ etr(βˆ’Β½ Ξ£β»ΒΉH) / ( |Ξ£|^a [∏_{i<j} (Ξ»_i βˆ’ Ξ»_j)]^b ) (dΞ£),

where a β‰₯ 0, b ∈ [0, 1], H ∈ C_p, and Ξ»_1, ..., Ξ»_p are the ordered eigenvalues of Ξ£. This prior is called the shrinkage inverse-Wishart prior, as it induces shrinkage of the eigenvalues by the term ∏_{i<j} (Ξ»_i βˆ’ Ξ»_j) compared to the inverse-Wishart prior. When b = 0, the SIW prior reduces to the inverse-Wishart prior.
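For concreteness, here is a minimal sketch (our illustration, not the authors' code) that evaluates the unnormalized log-density of the SIW prior at a given Ξ£, assuming numpy and the density displayed above:

```python
import numpy as np

def siw_logdensity(Sigma, a, b, H):
    """Unnormalized SIW log-density (illustrative sketch, not the paper's code).

    log pi(Sigma) = -0.5 * tr(Sigma^{-1} H) - a * log|Sigma|
                    - b * sum_{i<j} log(lam_i - lam_j),
    with lam_1 > ... > lam_p the ordered eigenvalues of Sigma.
    """
    lam = np.sort(np.linalg.eigvalsh(Sigma))[::-1]         # ordered eigenvalues
    etr_term = -0.5 * np.trace(np.linalg.solve(Sigma, H))  # etr(-1/2 Sigma^{-1} H)
    det_term = -a * np.sum(np.log(lam))                    # |Sigma|^{-a}
    gaps = lam[:, None] - lam[None, :]
    gap_term = -b * np.sum(np.log(gaps[np.triu_indices_from(gaps, k=1)]))
    return etr_term + det_term + gap_term

# Setting b = 0 recovers the unnormalized inverse-Wishart log-density.
Sigma = np.diag([3.0, 2.0, 1.0])
print(siw_logdensity(Sigma, a=2.0, b=1.0, H=np.eye(3)))
```

With b = 1 the eigenvalue-gap term penalizes widely spread eigenvalues, which is the shrinkage effect described in the text.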
When b < 0, the eigenvalue spreading problem, which is already problematic under the inverse-Wishart prior
in high-dimensional settings, becomes even more severe. On the other hand, when b > 1, the prior forces the eigenvalues to become nearly identical. Accordingly, as proposed in Berger et al. (2020), we restrict b to the range [0, 1] to avoid these undesirable behaviors.

Consider the following spectral decomposition of Ξ£ = UΞ›Uα΅€, where Ξ› = diag(Ξ»_1, ..., Ξ»_p) and U = [u_1, ..., u_p] is a pΓ—p matrix whose i-th column u_i is the eigenvector corresponding to the i-th eigenvalue Ξ»_i. We can rewrite (dΞ£) in terms of (dU) and (dΞ›):

    (dΞ£) = ∏_{i<j} (Ξ»_i βˆ’ Ξ»_j) (dΞ›)(dU).    (2)

See Farrell (2012) and Muirhead (2009). Using equation (2), the density of the shrinkage inverse-Wishart prior can be written as

    Ο€_SIW(Ξ›, U | a, b, H)(dΞ›)(dU) ∝ etr(βˆ’Β½ Ξ£β»ΒΉH) / ( |Ξ£|^a [∏_{i<j} (Ξ»_i βˆ’ Ξ»_j)]^{bβˆ’1} ) (dΞ›)(dU),

where Ξ»_1, ..., Ξ»_p are the ordered eigenvalues of Ξ£. Since the SIW prior is symmetric with respect to the eigenvalues, the ordering constraint can be removed without affecting the distribution. The SIW prior has demonstrated empirically successful performance but is not supported by comprehensive theoretical analysis. In this paper, we propose a generalized shrinkage inverse-Wishart (gSIW) prior whose density is defined as

    Ο€_gSIW(Ξ›, U | a_1, ..., a_p, b, H)(dΞ›)(dU) ∝ etr(βˆ’Β½ UΞ›β»ΒΉUα΅€H) / ( ∏_{i=1}^p Ξ»_i^{a_i} ∏_{i<j} |Ξ»_i βˆ’ Ξ»_j|^{bβˆ’1} ) (dΞ›)(dU),    (3)

where a_1, ..., a_p > 0, b ∈ [0, 1], and Ξ»_1, ..., Ξ»_p are unordered eigenvalues of Ξ£. By allowing each a_i to be defined differently for each Ξ»_i, we can consider a prior distribution that is more general than the SIW prior. In particular, when a_1 = Β·Β·Β· = a_p = a, the gSIW prior reduces to the SIW prior. When b = 1, the term |Ξ»_i βˆ’ Ξ»_j|^{1βˆ’b} disappears from the prior distribution of (Ξ›, U), which facilitates faster sampling and simplifies theoretical analysis. Therefore, we assume b = 1 in the following sections to establish the theoretical properties.

The gSIW prior is conjugate to the multivariate normal distribution.
The posterior density of (Ξ›, U) is given by:

    Ο€(Ξ›, U | X_n)(dΞ›)(dU) ∝ ∏_{i=1}^p Ξ»_i^{βˆ’a_i βˆ’ n/2} etr(βˆ’Β½ Ξ›β»ΒΉUα΅€(H + nS)U) (dΞ›)(dU),

where X_n = (X_1, ..., X_n) and S is the sample covariance. Consider the following spectral decomposition of nS = QWQα΅€, where W = diag(nΞ»Μ‚_1, ..., nΞ»Μ‚_{n∧p}, 0, ..., 0) and Q is a pΓ—p matrix whose i-th column is the eigenvector corresponding to the i-th eigenvalue. If we denote Ξ“ = Qα΅€U and H = hI_p for some positive h > 0, then (dU) = (dΞ“) and we get

    Ο€(Ξ›, Ξ“ | X_n)(dΞ›)(dΞ“) ∝ ∏_{i=1}^p Ξ»_i^{βˆ’a_i βˆ’ n/2} etr(βˆ’Β½ Ξ›β»ΒΉΞ“α΅€(hI_p + W)Ξ“) (dΞ›)(dΞ“).

To obtain the posterior convergence rate, we need the following expression:

    Ο€(Ξ“ | X_n)(dΞ“) ∝ ( ∫ Ο€(Ξ›, Ξ“ | X_n)(dΞ›) ) (dΞ“)
                  ∝ ∏_{i=1}^p ( 2^{a_i + n/2 βˆ’ 1} Ξ“(a_i + n/2 βˆ’ 1) ) Β· ∏_{i=1}^p c_i^{βˆ’a_i βˆ’ n/2 + 1} (dΞ“),

where c_i is the (i, i)-th element of Ξ“α΅€(hI_p + W)Ξ“.

2.3 Sampling Method

We use the Gibbs sampling method as proposed by Berger et al. (2020). The posterior density is given by:

    Ο€(Ξ›, Ξ“ | X_n) ∝ ∏_{i=1}^p Ξ»_i^{βˆ’a_i βˆ’ n/2} etr(βˆ’Β½ Ξ›β»ΒΉΞ“α΅€(hI_p + W)Ξ“),

where Ξ»_1, ..., Ξ»_p are the unordered diagonal elements of Ξ›.

1. Sampling Ξ› given (Ξ“, X_n):

    Ο€(Ξ› | Ξ“, X_n) ∝ ∏_{i=1}^p Ξ»_i^{βˆ’a_i βˆ’ n/2} exp( βˆ’c_i / (2Ξ»_i) ),

where c_i is the (i, i)-th element of Ξ“α΅€(hI_p + W)Ξ“. Each Ξ»_i can be sampled independently from Ξ»_i ∼ InvGam(a_i + n/2 βˆ’ 1, c_i/2).

2. Sampling Ξ“ given (Ξ›, X_n):

    Ο€(Ξ“ | Ξ›, X_n) ∝ etr(βˆ’Β½ Ξ›β»ΒΉΞ“α΅€(hI_p + W)Ξ“).

We use the Gibbs sampling scheme proposed in Berger et al. (2020), which iteratively updates pairs of rows in Ξ“. The procedure for updating the first two rows is as follows:

Step I. Rotation transformation. To update the first two rows of Ξ“, we construct the updated matrix as

    Ξ“_new = diag(R, I_{pβˆ’2}) [ Ξ“_12 ; Ξ“_{βˆ’12} ],

where Ξ“_12 and Ξ“_{βˆ’12} denote the first two and the remaining pβˆ’2 rows of Ξ“, respectively.
The matrix R is a signed rotation matrix defined as

    R = [ Ο΅_1 0 ; 0 Ο΅_2 ] R_ΞΈ,    R_ΞΈ = [ cos ΞΈ  βˆ’sin ΞΈ ; sin ΞΈ  cos ΞΈ ],

where Ο΅_1, Ο΅_2 ∈ {βˆ’1, 1} and ΞΈ ∈ (βˆ’Ο€/2, Ο€/2].

Step II. Conditional distribution of ΞΈ. Since H_0 = hI_p + W is diagonal, it can be partitioned as

    H_0 = [ H_1 0 ; 0 H_2 ],

where H_1 ∈ R^{2Γ—2}. The conditional distribution of ΞΈ becomes
    Ο€(ΞΈ | Ξ“_12, Ξ“_{βˆ’12}, Ξ›; H_0) ∝ etr(βˆ’Β½ H_1 R_ΞΈ Ξ“_12 Ξ›β»ΒΉ Ξ“_12α΅€ R_ΞΈα΅€).

Let the spectral decomposition of Ξ“_12 Ξ›β»ΒΉ Ξ“_12α΅€ be

    Ξ“_12 Ξ›β»ΒΉ Ξ“_12α΅€ = [ cos Ο‰  βˆ’sin Ο‰ ; sin Ο‰  cos Ο‰ ] [ s_1 0 ; 0 s_2 ] [ cos Ο‰  sin Ο‰ ; βˆ’sin Ο‰  cos Ο‰ ],

with s_1 > s_2 and Ο‰ ∈ (βˆ’Ο€/2, Ο€/2]. Then, the conditional density of ΞΈ simplifies to

    Ο€(ΞΈ | Ξ“_12, Ξ“_{βˆ’12}, Ξ›; H_0) ∝ exp( c cosΒ²(ΞΈ + Ο‰) ),

where c = βˆ’Β½ |(s_1 βˆ’ s_2)(h_1 βˆ’ h_2)|.

Step III. Sampling via transformation. Let Ξ± = cosΒ²(ΞΈ + Ο‰); then we obtain

    Ο€(Ξ± | Ξ“_12, Ξ“_{βˆ’12}, Ξ›; H_0) ∝ exp(cΞ±) Ξ±^{βˆ’1/2} (1 βˆ’ Ξ±)^{βˆ’1/2},    Ξ± ∈ (0, 1),

which corresponds to a tilted Beta(1/2, 1/2) distribution. We sample Ξ± using rejection sampling with a Beta proposal. Once Ξ± is drawn, the corresponding pair of rows in Ξ“ is updated accordingly, and this procedure is repeated for all row pairs. Finally, since Ξ“ = Qα΅€U, the eigenvector matrix of Ξ£ is recovered as U = QΞ“.

2.4 Choice of the number of spiked eigenvalues k

To implement the proposed gSIW prior, the number of spiked eigenvalues k must be specified in advance. To select k, we consider three methods. First, we adopt the Watanabe-Akaike Information Criterion (WAIC; Watanabe and Opper 2010), defined for each candidate value k as

    WAIC(k) = βˆ’2 βˆ‘_{i=1}^n log E_{Ξ£|X_n^{(k)}}[p(X_i | Ξ£)] + 2 βˆ‘_{i=1}^n Var_{Ξ£|X_n^{(k)}}[log p(X_i | Ξ£)],

where the posterior distribution is derived under the gSIW prior assuming k spiked eigenvalues. In computing the WAIC, we follow the second approach proposed by Gelman et al. (2014), which estimates the effective number of parameters using the variance of the log predictive density. The value of k is then selected by minimizing WAIC over k = 1, ..., k_max, where k_max is a suitably chosen upper bound.

Second, we consider the Growth Ratio (GR) method proposed by Ahn and Horenstein (2013), which is based on the eigenvalues of the sample covariance.
For each k, the GR is defined as

    GR(k) = log( 1 + Ξ»Μ‚_k / V(k) ) / log( 1 + Ξ»Μ‚_{k+1} / V(k+1) ),    where V(k) = βˆ‘_{j=k+1}^{n∧p} Ξ»Μ‚_j,

and Ξ»Μ‚_j denotes the j-th largest eigenvalue of the sample covariance matrix. The value of k is selected by maximizing GR over k = 1, ..., k_max.

Finally, we consider the information criterion IC_p3 proposed by Bai and Ng (2002), among several criteria introduced in their work, as it showed relatively strong empirical performance under our model setting. It is defined as

    IC_p3(k) = log( (1/(np)) ||X βˆ’ XU_1(U_1α΅€U_1)β»ΒΉU_1α΅€||Β²_F ) + k log(n∧p) / (n∧p),

where X ∈ R^{nΓ—p} denotes the observed data matrix, and U_1 ∈ R^{pΓ—k} denotes the matrix of the estimated leading k eigenvectors. The value of k is selected by minimizing IC_p3 over k = 1, ..., k_max.

In practice, all three methods provide good estimates when the spiked eigenvalues are large, and WAIC generally performs better even when the spiked eigenvalues are relatively small. Overall, WAIC exhibits more robust performance across a range of settings. Therefore, we recommend using WAIC for selecting k. The code for the Gibbs sampling procedure and the choice of k is publicly available at https://github.com/swpark0413/besiw .

3 Main Results

Consider n independent samples X_1, ..., X_n from N(0, Ξ£), where Ξ£ is a pΓ—p positive definite matrix. We assume the gSIW prior as defined in (3) with b = 1. Under the gSIW prior, the posterior is given by

    Ο€(Ξ›, Ξ“ | X_n)(dΞ›)(dΞ“) ∝ ∏_{i=1}^p Ξ»_i^{βˆ’a_i βˆ’ n/2} etr(βˆ’Β½ Ξ›β»ΒΉΞ“α΅€(hI_p + W)Ξ“) (dΞ›)(dΞ“),

where Ξ»_1, ..., Ξ»_p are the unordered eigenvalues of Ξ£. Suppose that Ξ£ is a spiked covariance with k spiked eigenvalues. Let Ξ»_{0,1}, ..., Ξ»_{0,p} be the ordered eigenvalues of the true covariance Ξ£_0. To obtain the
posterior convergence rates for the eigenstructure, we assume the following conditions:

A1. We consider high-dimensional settings where n/p β†’ 0.

A2. There exist positive constants c_0 and C_0 such that, for all n, C_0 > Ξ»_{0,k+1} > Β·Β·Β· > Ξ»_{0,p} > c_0.

A3. The k largest eigenvalues are sufficiently separated by a constant value Ξ΄_0 > 0:

    (Ξ»_{0,j} βˆ’ Ξ»_{0,j+1}) / Ξ»_{0,j} β‰₯ Ξ΄_0, for all j = 1, ..., k.

A4. For the k spiked eigenvalues, the quantities d_j = p / (n Ξ»_{0,j}) are bounded.

A5. We set the hyperparameters of the prior as follows:

    a_1 ≀ Β·Β·Β· ≀ a_k ≀ a_{k+1} = Β·Β·Β· = a_n ≀ a_{n+1} = Β·Β·Β· = a_p,    a_k β‰Ί n,    H = hI_p, with h < n.

The conditions A1-A4 are used to establish the asymptotic properties of the sample covariance as in Wang and Fan (2017). The condition A2 implies that the non-spiked eigenvalues are bounded away from zero and from above. The condition A4 represents the requirement that Ξ»_{0,j} is at least of order p/n. The conditions we impose differ slightly from those considered in Lee et al. (2024), which assume that kΒ³/n β†’ 0 and Ξ»_{0,k+1} p / (Ξ»_{0,k} n) β†’ 0. In their setting, the number of spiked eigenvalues k is allowed to diverge, and the non-spiked eigenvalues are not necessarily bounded by a constant. In contrast, we assume that k is fixed and impose a boundedness condition on the non-spiked eigenvalues. The condition A5 specifies the hyperparameters of the proposed prior and serves as a sufficient condition for verifying posterior convergence.

We demonstrate the posterior convergence rates for eigenvalues and eigenvectors when using the gSIW prior. The proof relies on the asymptotic properties of the sample covariance of Wang and Fan (2017) and will be provided in the Appendix. We begin by presenting the shrinkage rates of the posterior distribution itself.

Lemma 3.1. Under model (1) and prior (3), assume that conditions A1-A5 hold. Let Ο΅ > 0, and define

    ΞΊ = min_{l<k} (a_{l+1} βˆ’ a_l) Β· min_{l<k} log( Ξ»_{0,l} / Ξ»_{0,l+1} ).

Suppose that np/a_n β‰Ί (Ξ»_{0,k} / (n Ξ»_{0,1})) Β· ΞΊΟ΅/2 and Ο΅ ≻ √(np/a_n) ∨ ΞΊβ»ΒΉ.
Then,

    ∫_{D_Ο΅^c} Ο€(Ξ“ | X_n)(dΞ“) = O( exp(βˆ’ΞΊΟ΅/2) ) + O( ((nΞ»_{0,k} + p)/p)^{βˆ’Ο΅Β²a_n} ),    (4)

where the subset D_Ο΅ of O(p) is defined as:

    D_Ο΅ = { Ξ“ ∈ O(p) : inf_{Q_2 ∈ O(pβˆ’k)} || diag(I_k, Q_2) βˆ’ Ξ“ ||_F < Ο΅ }.

In this paper, we consider two sets of prior parameter choices for the asymptotic analysis of the posterior. First, we examine the case where a_1, ..., a_k are approximately equal. In this case, we need to reparametrize Ξ› and Ξ“ to investigate the posterior convergence rate. For Ξ“ ∈ O(p), we determine the kΓ—k permutation matrix P_l by solving the following
For each Ξ“ ∈O(p), the optimal permutation Plcan be selected, allowing us to define the subset DΟ΅,lΒ’O(p) as DΟ΅,l=/braceleftbigg Ξ“βˆˆO(p) : inf Q2∈Opβˆ’k/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle ο£­Pl0 0Q2ο£Ά ο£Έβˆ’Ξ“/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle F< Ο΅/bracerightbigg , whereP1,...,P Ldenotes all kΓ—kpermutation matrices. It follows/integraldisplay (/uniontextL l=1DΟ΅,l)cΓƒ(Ξ“|Xn)(dΞ“)β‰ˆ/integraldisplay DϡÃ(ΛœΞ“|Xn)(dΛœΞ“), andthus,when a1,...,a kareapproximatelyequal,theposteriorprobabilityover(L/uniondisplay l=1DΟ΅,l)c can be analyzed analogously to Lemma 3.1. 14 Lemma 3.2. Under model (1)and prior (3), assume that conditions A1βˆ’A5hold, and max ifkaiβˆ’min ifkaifqfor some positive q, which
may depend on n and p. Let Ο΅ > 0, and define Ο„ = min_{l<k} log( Ξ»_{0,l} / Ξ»_{0,l+1} ). Suppose that np/a_n β‰Ί (Ξ»_{0,k} / (n Ξ»_{0,1})) Β· τnϡ²/2 and Ο΅ ≻ √(np/a_n) ∨ (nΟ„)^{βˆ’1/2}. Then,

    ∫_{(⋃_{l=1}^L D_{Ο΅,l})^c} Ο€(Ξ“ | X_n)(dΞ“) = O( exp(βˆ’Ο„nϡ²/2) Β· (2Ξ»_{0,1}/Ξ»_{0,k})^{kq} ) + O( ((nΞ»_{0,k} + p)/p)^{βˆ’Ο΅Β²a_n} ).    (6)

Lemmas 3.1 and 3.2 describe the degree of posterior shrinkage on the sets of orthogonal matrices D_Ο΅ and ⋃_{l=1}^L D_{Ο΅,l}, respectively. These lemmas provide the foundation for analyzing posterior expectations. Consider the posterior expectation of a general function f(Ξ“, Ξ›), defined as

    E[ f(Ξ“, Ξ›) | X_n ] = ∫ fβ€²(c, Ξ“) ∏ c_i^{βˆ’a_i βˆ’ n/2 + 1} (dΞ“) / ∫ ∏ c_i^{βˆ’a_i βˆ’ n/2 + 1} (dΞ“),

where fβ€²(c, Ξ“) = E_Ξ›[f(Ξ“, Ξ›)] and Ξ»_i ~ind InvGam(a_i + n/2 βˆ’ 1, c_i/2). Suppose the function fβ€² is sufficiently bounded such that

    sup |fβ€²| β‰Ί exp(τnϡ²/2) Β· (2Ξ»_{0,1}/Ξ»_{0,k})^{βˆ’kq} ∧ ((nΞ»_{0,k} + p)/p)^{ϡ²a_n},    if a_1 β‰ˆ Β·Β·Β· β‰ˆ a_k,
    sup |fβ€²| β‰Ί exp(ΞΊΟ΅/2) ∧ ((nΞ»_{0,k} + p)/p)^{ϡ²a_n},    if a_1 < Β·Β·Β· < a_k.

Under this condition, Lemma 3.1 or 3.2 can be applied to derive the posterior expectation, thereby establishing the posterior convergence rate. The following theorems demonstrate specific cases where the posterior expectation can be explicitly derived.

Theorem 3.3. Under the assumptions of Lemma 3.2, let Ξ»_{(i)} denote the i-th largest eigenvalue of Ξ£. Suppose that the ratio Ξ»_{0,1}/Ξ»_{0,k} is bounded by a positive constant, Ο΅ ≍ n^{βˆ’1/4}, a_1, ..., a_k β‰Ό n^{1/2}, and a_n ≻ n^{3/2} p. Then,

    E[ (Ξ»_{(i)} βˆ’ Ξ»_{0,i}) / Ξ»_{0,i} | X_n ] = O( p/(nΞ»_{0,i}) ) + O( n^{βˆ’1/2+Ξ΄} ),

for i = 1, ..., k and for all small Ξ΄ > 0.
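The O(p/(nΞ»_{0,i})) term is the familiar upward bias of sample eigenvalues when p/n is large. The following Monte Carlo sketch (our illustration, not from the paper; numpy assumed) makes it visible for the spiked covariance Ξ£_0 = diag(50, 20, 10, 1, ..., 1), for which p/(nΞ»_{0,i}) = 0.2, 0.5, 1.0:

```python
import numpy as np

# Illustrative check of the p/(n*lambda_0i) bias of sample eigenvalues
# under a spiked covariance (not the paper's code).
rng = np.random.default_rng(1)
n, p, reps = 50, 500, 20
spikes = np.array([50.0, 20.0, 10.0])
sigma0 = np.concatenate([spikes, np.ones(p - 3)])  # diagonal of Sigma_0

rel_bias = np.zeros(3)
for _ in range(reps):
    X = rng.standard_normal((n, p)) * np.sqrt(sigma0)   # rows ~ N(0, Sigma_0)
    lam_hat = np.linalg.eigvalsh(X.T @ X / n)[::-1][:3]  # top sample eigenvalues
    rel_bias += (lam_hat - spikes) / spikes / reps

print(rel_bias)  # roughly (0.2, 0.5, 1.0), i.e. p/(n*lambda_0i) for each spike
```

The averaged relative errors track the predicted bias term, and the smallest spike (where p/(nΞ»_{0,i}) is largest) is the most severely biased.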
The convergence rate stated in Theorem 3.3 is equivalent to the rate achieved by the inverse-Wishart prior, as established by Lee et al. (2024). To achieve faster rates than those given in Theorem 3.3, we set the hyperparameter a_i depending on the eigenvalues of the sample covariance.

Theorem 3.4. Under the assumptions of Lemmas 3.1 and 3.2, let Ξ»_{(i)} denote the i-th largest eigenvalue of Ξ£. Suppose that the ratio Ξ»_{0,1}/Ξ»_{0,k} is bounded by a positive constant, Ο΅ ≍ n^{βˆ’1/4}, and the parameters satisfy

    2a_i βˆ’ 4 = nt / (Ξ»Μ‚_i βˆ’ t), for i = 1, ..., k, with t ∈ [Ξ»Μ‚_{k+1}, Ξ»Μ‚_n],

where Ξ»Μ‚_i denotes the i-th eigenvalue of the sample covariance matrix, and a_n ≻ n^{3/2} p. Then,

    E[ (Ξ»_{(i)} βˆ’ Ξ»_{0,i}) / Ξ»_{0,i} | X_n ] = O( (1/Ξ»_{0,i}) √(p/n) ) + O( n^{βˆ’1/2+Ξ΄} ),

for i = 1, ..., k and for all small Ξ΄ > 0.

Theorem 3.4 provides a posterior convergence rate that is better than, or at least equivalent to, that described in Theorem 3.3. If the condition Ξ»_{0,i} β‰Ί p/√n holds, then the convergence rate given in Theorem 3.4 is faster than that of the prior described in Theorem 3.3. The rate coincides with that of the shrinkage eigenvalues proposed by Wang and Fan (2017), given by:

    Ξ»Μ‚^S_j = Ξ»Μ‚_j βˆ’ p/(np βˆ’ nk βˆ’ pk) ( tr(S) βˆ’ βˆ‘_{i=1}^k Ξ»Μ‚_i ).

Let Ξ³_j be the eigenvector corresponding to Ξ»_j, the j-th eigenvalue of the covariance Ξ£. The eigenvector of the true covariance corresponding to Ξ»_{0,j} is denoted by Ξ³_{0,j}. Theorem 3.5 presents the posterior convergence rate for the first k eigenvectors.

Theorem 3.5. Under the assumptions of Theorem 3.4, let Ξ³_{(j)} denote the eigenvector corresponding to the j-th largest eigenvalue Ξ»_{(j)} of Ξ£. Then,

    E[ 1 βˆ’ |Ξ³_{0,j}α΅€ Ξ³_{(j)}|Β² | X_n ] = O( p/(nΞ»_{0,j}) ) + O_p( Ξ·_j ),

for j = 1, ..., k, where Ξ·_j = (1/Ξ»_{0,j}) √(p/n) + p/(n^{3/2} Ξ»_{0,j}) + 1/n.

This convergence rate is equivalent to the rate of the sample eigenvector as an estimator of the first k eigenvectors.
According to the study of Cai et al. (2013), this rate is proven to be optimal. Corollary 3.6. Under the assumptions of Theorem 3.4, E/bracketleftBig||Ξ£βˆ’Ξ£0||2 F ||Ξ£0||2 F/vextendsingle/vextendsingleXn/bracketrightBig =O/parenleftBigp ΒΌ2 0,1+p+nβˆ’1+2ΒΆΒΌ2 0,1 ΒΌ2 0,1+p/parenrightBig +O/parenleftBigp nΒΌ0,kΒΌ2 0,1 ΒΌ2 0,1+p/parenrightBig +Op(ΒΌ2 0,1 ΒΌ2 0,1+pΒ·k), (7) for all small ΒΆ >0, whereΒ·k=1 ΒΌ0,k/radicalbiggp n+p n3/2ΒΌ0,k+1 n. Furthermore, for the sample covariance, we obtain the following rate ||Sβˆ’Ξ£0||2 F ||Ξ£0||2 F=O/parenleftBigp2 n(ΒΌ2 0,1+p)+nβˆ’1+2ΒΆΒΌ2 0,1 ΒΌ2 0,1+p/parenrightBig +O/parenleftBigp nΒΌ0,kΒΌ2 0,1 ΒΌ2 0,1+p/parenrightBig +Op/parenleftBigΒΌ2 0,1 ΒΌ2 0,1+pΒ·k/parenrightBig . (8) The first term on the right-hand side of ( 7) corresponds to
the convergence rate of the eigenvalues, while the second and third terms are attribute d to the convergence of the eigenvectors. Comparing the rate in ( 7) with the sample covariance rate in ( 8), we observe that the first term in the sample covariance rate is larger. 17 4 Simulation Studies To evaluate the performance of the proposed gSIW prior, we co nsider observations sam- pled from a multivariate normal distribution N(0,Ξ£0). We design two types of simulation settings and compare six different methods. Throughout the s imulations, we assume the true number of spiked eigenvalues is known and focus on estim ating the spiked eigenvalues and their corresponding eigenvectors. Each simulation is r epeated 100 times. Method I. Sample covariance (Sample). Method II. Generalized shrinkage inverse-Wishart prior (gS IW) : Γƒ(Ξ£|a,H)∝etr(βˆ’1 2Ξ“Ξ›βˆ’1Ξ“TH) p/producttext i=1ΒΌai i/producttext i<j|ΒΌiβˆ’ΒΌj|. Method III. Generalized inverse-Wishart prior (gIW) : Γƒ(Ξ£|a,H)∝etr(βˆ’1 2Ξ“Ξ›βˆ’1Ξ“TH) p/producttext i=1ΒΌai i, where the hyperparameters aandHare set identically for both gSIW and gIW. In particular, we use ai=nt 2(Λ†ΒΌiβˆ’t)+ 2 fori= 1,...,k, wheret=1 nβˆ’knβˆ’k/summationdisplay i=1Λ†ΒΌi, an=p/2,ap= 2p, andH= 4Ip. Method IV. Shrinkage inverse-Wishart prior proposed by Berger et al. (2020) (SIW) : Γƒ(Ξ£|a,H)∝etr(βˆ’1 2Ξ“Ξ›βˆ’1Ξ“TH) p/producttext i=1ΒΌa i/producttext i<j|ΒΌiβˆ’ΒΌj|, wherea= 4 and H= 4Ip. 18 Method V. Inverse-Wishart prior (IW) : Γƒ(Ξ£|a,H)∝etr(βˆ’1 2Ξ“Ξ›βˆ’1Ξ“TH) p/producttext i=1ΒΌa i, wherea=p+1 andH=Ip, as specified by Lee et al. (2024). Method VI. Shrinkage POET estimator proposed by Wang and Fan (2017) (S-POET). To evaluate performance, we use the following errors for spi ked eigenvalues and eigen- vectors: ErrΒΌ=|ΒΌ0,iβˆ’ΒΌi| ΒΌ0,i,ErrΓ€= 1βˆ’(Γ€T iΓ€i,0)2. For the Bayesian methods, the posterior mean is used as the po int estimator. Eigenvector estimatesareobtainedbyaligningsigns,takingtheEuclid eanaverage,and thennormaliz- ing. 
We also report the 95% credible (or confidence) interval length (IL) and the coverage probability (CP) for each spiked eigenvalue. The coverage probability is defined as the proportion of repetitions in which the true value lies within the credible (or confidence) interval. For the S-POET method, the confidence intervals are derived from the asymptotic distribution
\[
\sqrt{n}\left(\frac{\hat\lambda_i^{S}}{\lambda_{0,i}} - 1\right) \xrightarrow{d} N(0, 2), \quad \text{if } \sqrt{p} = o(\lambda_{0,i}),
\]
where $\hat\lambda_i^{S}$ is the shrinkage eigenvalue, as established by Wang and Fan (2017).

4.1 Case 1

In this simulation, we set $(n,p) = (50,500)$ with $\Sigma_0 = \mathrm{diag}(50, 20, 10, 1, \dots, 1)$. For this $\Sigma_0$, we obtain $d_1 = 0.2$, $d_2 = 0.5$, and $d_3 = 1$, where $d_j = \frac{p}{n\lambda_{0,j}}$. This setting satisfies assumption A4, as the values of $d_j$ fall within the specified range. The goal of this setting is to assess the performance of the methods when the dimension $p$ is large and the spiked eigenvalues are significantly larger than the others.

            Sample   gSIW              gIW              SIW              IW               S-POET
Eigenvalue  Err      Err   CP   IL     Err  CP   IL     Err   CP   IL    Err   CP   IL    Err   CP   IL
lambda_1    0.258    0.164 0.88 36.1   5.28 0.01 608    0.173 0.85 35.3  0.589 0.06 56.2  0.170 0.92 48.3
lambda_2    0.517    0.149 0.95 13.7   2.23 0.01 77.7   0.165 0.94 14.6  1.59  0    26.7  0.128 0.98 19.5
lambda_3    1.05     0.141 0.91 6.14   1.28 0.05 19.5   0.289 0.65 6.93  3.13  0    17.2  0.174 0.94 10.7

Table 1: Average errors ($\mathrm{Err}_\lambda$), coverage probabilities (CP), and interval lengths (IL) for the estimated spiked eigenvalues.

Eigenvector  Sample  gSIW   gIW    SIW    IW     S-POET
gamma_1      0.202   0.227  0.223  0.203  0.221  0.206
gamma_2      0.419   0.443  0.466  0.427  0.673  0.428
gamma_3      0.639   0.642  0.658  0.679  0.921  0.620

Table 2: Average errors ($\mathrm{Err}_\gamma$) for estimated eigenvectors.

https://arxiv.org/abs/2505.20668v1

Table 1 presents the average errors ($\mathrm{Err}_\lambda$), coverage probabilities (CP), and credible (or confidence) interval lengths (IL) for the estimated spiked eigenvalues across different methods. The IW and gIW methods perform poorly in both estimation accuracy and coverage. In contrast, SIW, S-POET, and gSIW outperform the other methods. Among these, gSIW achieves the lowest errors for the first and third eigenvalues, while S-POET performs best for the second eigenvalue. The gSIW, SIW, and S-POET methods exhibit comparable coverage probabilities. Although SIW performs slightly worse and S-POET slightly better than gSIW, the credible intervals of gSIW are narrower than those of S-POET. Furthermore, gSIW achieves lower estimation errors than SIW for all spiked eigenvalues.

Table 2 presents the average errors ($\mathrm{Err}_\gamma$) for the estimated eigenvectors. The sample covariance achieves the lowest errors for the first and second spiked eigenvectors, while the S-POET method demonstrates the best performance for the third spiked eigenvector. Overall, the differences among the methods are relatively small.
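The CP and IL summaries defined above can be computed per spiked eigenvalue from the repetition-level intervals; a minimal sketch with illustrative names:

```python
import numpy as np

def coverage_and_length(intervals, truth):
    # intervals: (R, 2) array of [lo, hi] credible/confidence intervals,
    # one row per repetition; truth: the true spiked eigenvalue.
    lo, hi = intervals[:, 0], intervals[:, 1]
    cp = np.mean((lo <= truth) & (truth <= hi))  # coverage probability
    il = np.mean(hi - lo)                        # average interval length
    return cp, il
```

For the Bayesian methods the per-repetition intervals would come from posterior draws, e.g. `np.percentile(draws, [2.5, 97.5])`.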
4.2 Case 2

In this simulation, we vary the sample size $n$ as $(n,p) = (20,100), (40,100), \dots, (80,100)$, while fixing the covariance as $\Sigma_0 = \mathrm{diag}(5, 4, 3, 1, \dots, 1)$. We assume that the true number of spiked eigenvalues is 3. As $n$ increases, the quantity $d_j = \frac{p}{n\lambda_{0,j}}$ decreases, in accordance with assumption A4, which holds more strongly for larger $n$. The goal of this setting is to evaluate the performance of the methods when the spiked eigenvalues are relatively weak.

                  Sample   gSIW                gIW             SIW                IW              S-POET
Eigenvalue   n    Err      Err    CP   IL      Err  CP  IL     Err    CP   IL     Err  CP  IL     Err   CP   IL
lambda_1     20   1.45     0.422  0.69 6.27    2.80 0   29.3   0.110  1    5.37   4.23 0   27.4   0.670 0.42 16.8
             40   0.732    0.152  0.96 3.81    3.00 0   27.4   0.154  0.96 3.90   2.16 0   10.8   0.402 0.55 7.60
             60   0.499    0.102  0.98 3.11    3.64 0   30.9   0.132  0.96 3.50   1.41 0   6.66   0.307 0.64 5.36
             80   0.387    0.0958 0.97 2.75    4.40 0   35.4   0.113  0.96 3.09   1.03 0   4.89   0.254 0.70 4.30
lambda_2     20   1.54     0.292  0.81 3.55    1.80 0   11.6   0.0793 1    2.72   3.23 0   12.7   0.555 0.64 12.5
             40   0.783    0.105  0.97 2.43    2.10 0   12.0   0.165  0.88 2.08   1.95 0   6.08   0.390 0.65 6.03
             60   0.525    0.0791 0.98 2.04    2.54 0   13.7   0.184  0.8  2.06   1.36 0   3.92   0.314 0.66 4.31
             80   0.395    0.0895 0.98 1.86    3.07 0   15.6   0.163  0.78 2.10   1.03 0   2.98   0.262 0.72 3.46
lambda_3     20   1.98     0.322  0.84 2.83    1.54 0   7.41   0.0913 1    1.93   3.16 0   7.76   0.655 0.47 10.0
             40   1.05     0.123  0.99 2.02    1.80 0   7.88   0.0544 1    1.43   2.18 0   4.16   0.533 0.17 4.99
             60   0.700    0.0758 0.99 1.70    2.11 0   8.83   0.101  0.98 1.31   1.65 0   2.84   0.453 0.14 3.58
             80   0.528    0.0812 0.97 1.54    2.49 0   9.92   0.145  0.88 1.30   1.31 0   2.18   0.404 0.09 2.89

Table 3: Average errors ($\mathrm{Err}_\lambda$), coverage probabilities (CP), and credible (or confidence) interval lengths (IL) for estimated eigenvalues under varying $n$.

Table 3 presents the average errors ($\mathrm{Err}_\lambda$), coverage probabilities (CP), and credible (or confidence) interval lengths (IL) for the estimated spiked eigenvalues across different methods and values of $n$. The gSIW, SIW, and S-POET methods consistently outperform the sample covariance. In contrast, the IW and gIW methods exhibit significantly poorer performance across all settings. For the small sample size $n = 20$, the SIW method shows superior performance in terms of both error and coverage probability, while for larger sample sizes, gSIW achieves the best overall performance. In particular, the coverage probability of gSIW increases with $n$, indicating that the method becomes more reliable as the sample size grows. Furthermore, across all values of $n$, the credible intervals produced by gSIW are shorter than those of S-POET, highlighting the efficiency of gSIW in quantifying uncertainty. These results demonstrate the advantages of the gSIW prior, particularly for moderate to large sample sizes.

Table 4 presents the average errors ($\mathrm{Err}_\gamma$) for the estimated spiked eigenvectors across different methods and varying sample sizes $n$. Since the gIW and IW methods fail to accurately estimate the eigenvalues, as shown in Table 3, we exclude them from further consideration.

Eigenvector  n    Sample  gSIW   gIW    SIW    IW     S-POET
gamma_1      20   0.747   0.788  0.784  0.791  0.828  0.737
             40   0.614   0.643  0.671  0.671  0.726  0.634
             60   0.543   0.560  0.607  0.565  0.638  0.578
             80   0.492   0.521  0.528  0.490  0.543  0.536
gamma_2      20   0.897   0.893  0.913  0.930  0.938  0.885
             40   0.794   0.837  0.835  0.886  0.896  0.798
             60   0.709   0.732  0.752  0.788  0.811  0.738
             80   0.650   0.680  0.703  0.657  0.786  0.698
gamma_3      20   0.960   0.961  0.948  0.958  0.961  0.952
             40   0.889   0.908  0.905  0.945  0.938  0.879
             60   0.802   0.813  0.831  0.917  0.902  0.806
             80   0.740   0.745  0.792  0.841  0.915  0.756

Table 4: Average errors ($\mathrm{Err}_\gamma$) for estimated eigenvectors under varying $n$.

At $n = 20$, S-POET shows the best performance, whereas the sample covariance performs the best in all other cases.
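The Case 2 data-generating process is easy to sketch because $\Sigma_0$ is diagonal, so observations can be drawn by scaling i.i.d. standard normals by the square roots of the eigenvalues. The seed and variable names below are ours, and only one $(n,p)$ setting is shown:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 80, 100, 3
# Sigma_0 = diag(5, 4, 3, 1, ..., 1) as in Case 2
lam0 = np.concatenate([[5.0, 4.0, 3.0], np.ones(p - k)])

X = rng.normal(size=(n, p)) * np.sqrt(lam0)  # rows ~ N(0, Sigma_0)
S = X.T @ X / n                              # sample covariance
lam_hat = np.linalg.eigvalsh(S)[::-1]        # sample eigenvalues, descending

d = p / (n * lam0[:k])                       # d_j = p / (n * lambda_{0,j})
```

Here $d_j$ is the quantity governing assumption A4; with $(n,p) = (80,100)$ it equals $0.25$, $0.3125$, and about $0.417$ for the three spikes.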
Overall, sample covariance, gSIW, and S-POET exhibit comparable performance in estimating eigenvectors.

Combining the results from Case 1 and Case 2, we observe that IW and gIW generally exhibit poor performance across all settings. In contrast, gSIW shows improved performance in estimating spiked eigenvalues when the quantity $\frac{p}{n\lambda_i}$ is small. For spiked eigenvectors, the sample covariance consistently demonstrates strong performance, and both gSIW and S-POET perform comparably to the sample covariance.

5 Real Data

In this section, we apply the proposed gSIW prior to the MNIST dataset. To estimate the number of spiked eigenvalues in the covariance matrix, we select 50 images from the MNIST dataset, all labeled as the digit 7. Based on the selected number of spiked eigenvalues $k$, we perform dimensionality reduction.

We assume that the MNIST dataset follows a multivariate normal distribution with a gSIW prior on the covariance matrix. The number of spiked eigenvalues, denoted as $k$, is estimated using two model selection criteria: the Watanabe-Akaike Information Criterion (WAIC) and $IC_{p3}$. Dimensionality reduction is then performed based on the selected $k$. For comparison, we also include results based on the Growth Ratio (GR) method.

Figure 1: The images of the selected 50 MNIST samples labeled as 7.

Figure 1 displays the 50 selected MNIST samples labeled as 7. We flatten the 28Γ—28 images into 784-dimensional vectors. For each candidate $k$, the gSIW prior is applied, and the WAIC and $IC_{p3}$ are computed. WAIC and $IC_{p3}$ select $k = 7$ and $k = 9$, respectively, while the GR method selects $k = 1$ as the optimal number of spiked eigenvalues.

(a) Dimensionality reduction with $k = 7$ (WAIC). (b) Dimensionality reduction with $k = 9$ ($IC_{p3}$).
Figure 2: The images after dimensionality reduction of the selected 50 MNIST samples labeled as 7 using different model selection criteria.

Figure 2 shows the images reconstructed using dimensionality reduction with $k = 7$ and $k = 9$, as determined by WAIC and $IC_{p3}$, respectively. The reconstructed images successfully retain the shape of the digit 7, indicating that both WAIC and $IC_{p3}$ effectively capture the relevant subspace structure.

Figure 3: The first 50 eigenvalues of the sample covariance matrix.

In this case, where $(n,p) = (50,784)$, the sample covariance eigenvalues are known to be heavily distorted due to high dimensionality. To illustrate this distortion, Figure 3 shows the first 50 eigenvalues of the sample covariance matrix. A few leading eigenvalues dominate the eigenvalue structure, while the remaining eigenvalues are much smaller. As a result, the GR method, which is based directly on sample eigenvalues, selects $k = 1$. By contrast, both WAIC and $IC_{p3}$ select larger values of $k$ that lead to substantially better reconstruction.

To further assess the quality of the reduced representation, we compute the normalized mean squared reconstruction error (NMSE), defined as the mean squared error divided by the square of the range of the true values. The NMSE for $k = 7$ (WAIC) is 0.030, and for $k = 9$ ($IC_{p3}$) it is 0.024, both substantially lower than the NMSE for $k = 1$ (GR), which is 0.069. This suggests that both WAIC and $IC_{p3}$ identify subspaces that preserve the original variation well, while the GR-based choice performs poorly.
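The rank-$k$ reconstruction and the NMSE computation described above can be sketched with a generic PCA-based projection; this is an illustrative stand-in, not the gSIW posterior procedure itself, and the names are ours:

```python
import numpy as np

def reconstruct_top_k(X, k):
    # Project centered data onto the top-k sample eigenvectors and map back
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    V = vecs[:, ::-1][:, :k]                 # top-k eigenvectors
    return X.mean(axis=0) + (Xc @ V) @ V.T

def nmse(X, X_hat):
    # Mean squared error divided by the squared range of the true values
    range_sq = (X.max() - X.min()) ** 2
    return np.mean((X - X_hat) ** 2) / range_sq

# Small demonstration on synthetic data (seed and sizes are ours)
X = np.random.default_rng(3).normal(size=(20, 5))
nmse_full = nmse(X, reconstruct_top_k(X, 5))  # full rank: exact up to fp error
nmse_k1 = nmse(X, reconstruct_top_k(X, 1))    # rank 1: lossy
```

Because the projections are nested, the reconstruction error can only decrease as $k$ grows, which is why an underestimated $k$ (as with GR here) inflates the NMSE.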
We also evaluate the cumulative explained variance ratio (CVE), defined as the proportion of total variance captured by the top $k$ eigenvalues of the sample covariance matrix. The CVE for $k = 7$ is 74.3%, and for $k = 9$ it is 79.0%, both substantially higher than the value for $k = 1$, which is 40.0%.

These results demonstrate that, in this experiment, both WAIC and $IC_{p3}$ provide reasonable estimates of $k$ that lead to effective dimensionality reduction and data reconstruction, while the GR method continues to underestimate $k$ due to the distortion of sample eigenvalues in high-dimensional settings.

6 Conclusion

We proposed the generalized shrinkage inverse-Wishart (gSIW) prior, which generalizes the shrinkage inverse-Wishart (SIW) prior of Berger et al. (2020) by allowing componentwise shrinkage and distinguishing between spiked and non-spiked eigenvalue structures. We established posterior expectations for both eigenvalues and eigenvectors under the gSIW prior, and showed that it outperforms existing priors both theoretically and empirically when Assumptions A1 through A5 are satisfied.

Our theoretical analysis builds on the asymptotic behavior of sample covariance matrices developed in Wang and Fan (2017), particularly under the spiked covariance model. Future work may extend our results beyond the current assumptions, such as relaxing the spiked structure or improving asymptotic bounds for sample eigenstructures in more general settings.

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2023R1A2C1003050).

References

Ahn, S. C. and Horenstein, A. R. (2013). Eigenvalue ratio test for the number of factors, Econometrica 81(3): 1203–1227.

Bai, J. and Ng, S. (2002). Determining the number of factors in approximate factor models, Econometrica 70(1): 191–221.

Banerjee, S. and Ghosal, S. (2014). Posterior convergence rates for estimating large precision matrices using graphical models, Electronic Journal of Statistics 8: 2111–2137.

Banerjee, S. and Ghosal, S. (2015). Bayesian structure learning in graphical models, Journal of Multivariate Analysis 136: 147–162.

Berger, J. O., Sun, D. and Song, C. (2020). Bayesian analysis of the covariance matrix of a multivariate normal distribution with a new class of priors, The Annals of Statistics 48(4): 2381–2403.

Bickel, P. J. and Levina, E. (2008a). Covariance regularization by thresholding, The Annals of Statistics pp. 2577–2604.

Bickel, P. J. and Levina, E. (2008b). Regularized estimation of large covariance matrices, The Annals of Statistics pp. 199–227.

Bien, J., Bunea, F. and Xiao, L. (2016). Convex banding of the covariance matrix, Journal of the American Statistical Association 111(514): 834–845.

Bodnar, T., Gupta, A. K. and Parolya, N. (2014). On the strong convergence of the optimal linear shrinkage estimator for large dimensional covariance matrix, Journal of Multivariate Analysis 132: 215–228.

Cai, T. T., Ma, Z. and Wu, Y. (2013). Sparse PCA: Optimal rates and adaptive estimation, Annals of Statistics 41(6): 3074–3110.

Cai, T. T., Ren, Z. and Zhou, H. H. (2016). Estimating structured high-dimensional covariance and precision matrices: Optimal rates and adaptive estimation, Electronic Journal of Statistics 10: 1–59.

Cai, T. T., Zhang, C. H. and Zhou, H. H. (2010).
Optimal rates of convergence for covariance matrix estimation, Annals of Statistics 38(4): 2118–2144.

Cai, T. T. and Zhou, H. H. (2012). Optimal rates of convergence for sparse covariance matrix estimation, The Annals of Statistics 40(271): 2389–2420.

Daniels, M. J. and Kass, R. E. (2001). Shrinkage estimators for covariance matrices, Biometrics 57(4): 1173–1184.

Dey, D. K. and Srinivasan, C. (1985). Estimation of a covariance matrix under Stein's loss, The Annals of Statistics pp. 1581–1591.

Efron, B. and Morris, C. (1976). Multivariate empirical Bayes and estimation of covariance matrices, The Annals of Statistics pp. 22–32.

Farrell, R. H. (2012). Multivariate Calculation: Use of the Continuous Groups, Springer Science & Business Media.

Gelman, A., Hwang, J. and Vehtari, A. (2014). Understanding predictive information criteria for Bayesian models, Statistics and Computing 24: 997–1016.

Ghosal, S., Ghosh, J. K. and Van Der Vaart, A. W. (2000). Convergence rates of posterior distributions, Annals of Statistics pp. 500–531.

Ghosal, S. and van der Vaart, A. (2007). Convergence rates of posterior distributions for non-iid observations, Annals of Statistics 35(1): 192–223.

Haff, L. (1980). Empirical Bayes estimation of the multivariate normal covariance matrix, The Annals of Statistics 8(3): 586–597.

Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal components analysis, The Annals of Statistics 29(2): 295–327.

Johnstone, I. M. and Lu, A. Y. (2009). On consistency and sparsity for principal components analysis in high dimensions, Journal of the American Statistical Association 104(486): 682–693.

Karoui, N. E. (2008). Spectrum estimation for large dimensional covariance matrices using random matrix theory, The Annals of Statistics pp. 2757–2790.

Lam, C. (2016). Nonparametric eigenvalue-regularized precision or covariance matrix estimator, Annals of Statistics 44(3): 928–953.

Ledoit, O. and Wolf, M. (2004). A well-conditioned estimator for large-dimensional covariance matrices, Journal of Multivariate Analysis 88(2): 365–411.

Ledoit, O. and Wolf, M. (2012). Nonlinear shrinkage estimation of large-dimensional covariance matrices, The Annals of Statistics 40(2): 1024–1060.

Lee, K., Jo, S. and Lee, J. (2022). The beta-mixture shrinkage prior for sparse covariances with near-minimax posterior convergence rate, Journal of Multivariate Analysis 192: 105067.

Lee, K. and Lee, J. (2023). Post-processed posteriors for sparse covariances, Journal of Econometrics 236(1): 105475.

Lee, K., Lee, K. and Lee, J. (2023). Post-processed posteriors for banded covariances, Bayesian Analysis 18(3): 1017–1040.

Lee, K., Park, S., Kim, S. and Lee, J. (2024). Posterior asymptotics of high-dimensional spiked covariance model with inverse-Wishart prior, arXiv preprint arXiv:2412.10753.

Muirhead, R. J. (2009). Aspects of Multivariate Statistical Theory, John Wiley & Sons.

Ning, B. Y.-C. and Ning, N. (2021). Spike and slab Bayesian sparse principal component analysis, arXiv preprint arXiv:2102.00305.

Paul, D. (2007). Asymptotics of sample eigenstructure for a large dimensional spiked covariance model, Statistica Sinica pp. 1617–1642.

Shen, D., Shen, H., Zhu, H. and Marron, J. (2016). The statistics and mathematics of high dimension low sample size asymptotics, Statistica Sinica 26(4): 1747.

Stein, C. (1975). Estimation of a covariance matrix, 39th Annual Meeting IMS, Atlanta, GA, 1975.

Wang, W. and Fan, J. (2017). Asymptotics of empirical eigenstructure for high dimensional spiked covariance, Annals of Statistics 45(3): 1342.

Watanabe, S. and Opper, M. (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory, Journal of Machine Learning Research 11(12).

Xiang, R., Khare, K.
and Ghosh, M. (2015). High dimensional posterior convergence rates for decomposable graphical models, Electronic Journal of Statistics 9: 2828–2854.

Xie, F., Cape, J., Priebe, C. E. and Xu, Y. (2022). Bayesian sparse spiked covariance model with a continuous matrix shrinkage prior, Bayesian Analysis 17(4): 1193–1217.

Supplementary material of "Eigenstructure inference for high-dimensional covariance with generalized shrinkage inverse-Wishart prior"

Seongmin Kimβˆ—1, Kwangmin Lee†2, Sewon Park‑3, and Jaeyong LeeΒ§1
1 Department of Statistics, Seoul National University
2 Department of Big Data Convergence, Chonnam National University
3 Security Algorithm Lab, Samsung SDS

May 28, 2025

βˆ— zlatjdals@snu.ac.kr
† klee564@jnu.ac.kr
‑ swpark0413@gmail.com
Β§ leejyc@gmail.com

S1 Appendix

Consider the transformed data $Y_i = \Gamma^T X_i$, which follows $N(0,\Lambda)$, where $\Lambda = \mathrm{diag}(\lambda_{0,1},\dots,\lambda_{0,p})$ and $\lambda_{0,i}$ is the $i$th eigenvalue of the true covariance. Since the eigenvalues of the sample covariance are invariant under orthogonal transformations, we consider the sample covariance $S = \frac{1}{n} Y^T Y$, where $Y = (Y_1,\dots,Y_n)^T$. Define $\tilde S = \frac{1}{n} Y Y^T$, which preserves the first $n$ eigenvalues of $S$. Let $\tilde Y_i$ be the $i$th column of $Y$, so that $\tilde Y_i \overset{ind}{\sim} N(0, \lambda_{0,i} I_n)$. Therefore,
\[
\tilde S = \frac{1}{n}\sum_{i=1}^{p} \tilde Y_i \tilde Y_i^T = \frac{1}{n}\sum_{i=1}^{p} \lambda_{0,i} Z_i Z_i^T,
\]
where $Z_1,\dots,Z_p$ are independent $n$-dimensional vectors, and each element of $Z_i$ follows an independent standard normal distribution.

Asymptotic properties of the eigenstructure of the sample covariance

Lemma S1.1 (Lemma A.1 of Wang and Fan, 2017). Suppose that $A_1,\dots,A_p$ are independent $n$-dimensional Gaussian random vectors with mean $0$ and variance $I_n$.
For all $t \ge 0$, the following inequality holds with probability at least $1 - 2\exp(-ct^2)$ for some positive constant $c$:
\[
\bar w - \max(\delta, \delta^2) \le \lambda_n\Big(\frac{1}{p}\sum_{i=1}^p w_i A_i A_i^T\Big) \le \lambda_1\Big(\frac{1}{p}\sum_{i=1}^p w_i A_i A_i^T\Big) \le \bar w + \max(\delta, \delta^2),
\]
where $\delta = C\sqrt{n/p} + t/\sqrt{p}$, $C$ is a positive constant, $|w_i|$ is bounded for all $i$, and $\bar w = p^{-1}\sum_{i=1}^p w_i$.

The first $n$ eigenvalues of the sample covariance coincide with those of $\tilde S$. The matrix $\tilde S$ can be decomposed into the sum of two matrices $A$ and $B$, where
\[
A = \frac{1}{n}\sum_{i=1}^{k} \lambda_{0,i} Z_i Z_i^T \quad \text{and} \quad B = \frac{1}{n}\sum_{i=k+1}^{p} \lambda_{0,i} Z_i Z_i^T.
\]

Lemma S1.2 (Asymptotic properties of eigenvalues of the sample covariance). Under model (1), the eigenvalues of the sample covariance satisfy the following for all sufficiently large $n$:
\[
\frac{\hat\lambda_j}{\lambda_{0,j}} =
\begin{cases}
1 + \bar d\, d_j + \alpha_j \lambda_{0,j}^{-1}\sqrt{\dfrac{p}{n}} + \beta_j, & j = 1,\dots,k,\\[6pt]
\bar d\, d_j + \alpha_j \lambda_{0,j}^{-1}\sqrt{\dfrac{p}{n}}, & j = k+1,\dots,n,
\end{cases}
\]
where $\alpha_j$ is a constant in the interval $[-C,C]$ for some positive constant $C$, and $\beta_j \lesssim n^{-1/2+\delta}$ for all small $\delta > 0$.

Proof. According to Lemma S1.1, for $t = \sqrt n$, the following inequality holds with probability at least $1 - 2\exp(-cn)$:
\[
\frac{1}{p-k}\sum_{i=k+1}^{p}\lambda_{0,i} - C\sqrt{\frac{n}{p}} \le \lambda_\ell\Big(\frac{n}{p-k} B\Big) \le \frac{1}{p-k}\sum_{i=k+1}^{p}\lambda_{0,i} + C\sqrt{\frac{n}{p}}, \tag{S1}
\]
for some positive constant $C$ and for all sufficiently large $n$. The following inequality follows from Weyl's theorem:
\[
\frac{\lambda_j(A)}{\lambda_{0,j}} + \frac{\lambda_n(B)}{\lambda_{0,j}} \le \frac{\hat\lambda_j}{\lambda_{0,j}} = \frac{\lambda_j(A+B)}{\lambda_{0,j}} \le \frac{\lambda_j(A)}{\lambda_{0,j}} + \frac{\lambda_1(B)}{\lambda_{0,j}},
\]
for $j = 1,\dots,n$. Given the bound in (S1), we obtain the inequality
\[
\frac{\lambda_j(A)}{\lambda_{0,j}} + \bar d\, d_j - C\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}} \le \frac{\hat\lambda_j}{\lambda_{0,j}} \le \frac{\lambda_j(A)}{\lambda_{0,j}} + \bar d\, d_j + C\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}},
\]
where $\bar d = \frac{1}{p-k}\sum_{i=k+1}^{p}\lambda_{0,i}$ and $d_j = \frac{p}{n\lambda_{0,j}}$. Therefore, we obtain the equality
\[
\frac{\hat\lambda_j}{\lambda_{0,j}} = \frac{\lambda_j(A)}{\lambda_{0,j}} + \bar d\, d_j + \alpha_j \lambda_{0,j}^{-1}\sqrt{\frac{p}{n}}, \quad j = 1,\dots,n,
\]
for some constant $\alpha_j \in [-C,C]$. According to Lemma A.2 of Wang and Fan (2017), the following asymptotic normality and independence hold:
\[
\sqrt n\left(\frac{\lambda_j(A)}{\lambda_{0,j}} - 1\right) \xrightarrow{d} N(0,2), \quad \forall\, j = 1,\dots,k.
\]
We thus obtain the asymptotic normality
\[
\sqrt n\left(\frac{\hat\lambda_j}{\lambda_{0,j}} - \Big(1 + \bar d\, d_j + \alpha_j\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}}\Big)\right) \xrightarrow{d} N(0,2), \quad j = 1,\dots,k.
\]
Since $k$ is constant, for $\delta_n \ll \sqrt n$ the following inequality holds for sufficiently large $n$ and $p$:
\[
\delta_n \left|\frac{\hat\lambda_j}{\lambda_{0,j}} - \Big(1 + \bar d\, d_j + \alpha_j\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}}\Big)\right| < \epsilon, \quad \forall\, j = 1,\dots,k.
\]
Therefore, we obtain the equality
\[
\frac{\hat\lambda_j}{\lambda_{0,j}} = 1 + \beta_j + \bar d\, d_j + \alpha_j\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}}, \quad j = 1,\dots,k,
\]
for $\beta_j \lesssim n^{-1/2+\delta}$ for all small $\delta > 0$.

For $j > k$, $\lambda_j(A) = 0$, which implies
\[
\frac{\hat\lambda_j}{\lambda_{0,j}} = \bar d\, d_j + \alpha_j\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}}. \qquad \blacksquare
\]

Let $(S,d)$ be a metric space and $K \subseteq S$. For $\epsilon > 0$, the $\epsilon$-covering number of $K$ with respect to the metric $d$, denoted by $N(K,d,\epsilon)$, is the minimum number of closed balls of radius $\epsilon$ (with respect to $d$) required to cover the set $K$. Similarly, the $\epsilon$-packing number of $K$ with respect to the metric $d$, denoted by $M(K,d,\epsilon)$, is the maximum number of points in $K$ such that any two distinct points are at least distance $\epsilon$ apart.

Lemma S1.3 (Covering number of the orthogonal group; Proposition 7 of Szarek 1982). Let $\tau$ be a unitary ideal norm, for which $\tau(PAQ) = \tau(A)$ holds for any unitary matrices $P$ and $Q$ and any matrix $A$. Then there exist universal positive constants $c$ and $C$ such that the following inequality holds for $\epsilon \in (0, 2\tau(I)]$:
\[
(c\,\tau(I)/\epsilon)^d \le N(O(m), \tau, \epsilon) \le (C\,\tau(I)/\epsilon)^d,
\]
where $d = m(m-1)/2$ and $N(O(m),\tau,\epsilon)$ is the $\epsilon$-covering number of $O(m)$ with respect to the metric $\rho(x,y) = \tau(x-y)$, for $x, y \in O(m)$.

The Grassmann manifold $G_{n,p}$ denotes the set of $n$-dimensional subspaces of $\mathbb{R}^p$. It can be expressed as the quotient space $G_{n,p} = O(p)/(O(n)\times O(p-n))$. Edelman et al. (1998) state that a point in the Grassmann manifold can be represented as an equivalence class
\[
[Q] = \left\{ Q \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} : Q_1 \in O(n),\ Q_2 \in O(p-n) \right\},
\]
which consists of all orthogonal matrices that span the same subspace as the first $n$ columns of $Q$. Szarek (1982) defines the quotient metric on $G_{n,p}$ induced by a unitary ideal norm $\tau$ on $O(p)$ as
\[
\rho_\tau(H_1,H_2) = \inf\{\tau(I-V) : V \in O(p),\ VH_1 = H_2\},
\]
for $H_1, H_2 \in G_{n,p}$.
Therefore, $H_1$ and $H_2$ can be represented as
\[
H_1 = \left\{ P_1 \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} : Q_1 \in O(n),\ Q_2 \in O(p-n) \right\}, \qquad
H_2 = \left\{ P_2 \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} : Q_1 \in O(n),\ Q_2 \in O(p-n) \right\},
\]
for some $P_1, P_2 \in O(p)$. The quotient metric is then given by
\[
\begin{aligned}
\rho_\tau(H_1,H_2)
&= \inf\left\{ \tau(I-V) : V = P_2 \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix}\begin{pmatrix} Q_3 & 0 \\ 0 & Q_4 \end{pmatrix} P_1^T,\ Q_1,Q_3 \in O(n),\ Q_2,Q_4 \in O(p-n)\right\}\\
&= \inf_{Q_1 \in O(n),\, Q_2 \in O(p-n)} \tau\Big(I - P_2 \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} P_1^T\Big)
= \inf_{Q_1 \in O(n),\, Q_2 \in O(p-n)} \tau\Big(P_1 - P_2 \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix}\Big).
\end{aligned}
\]

Lemma S1.4 (Covering number of the Grassmann manifold; Proposition 8 of Szarek 1982). For every unitary ideal norm $\tau$, there exist universal positive constants $m$ and $M$ such that the following inequality holds for $\epsilon \in (0, D_\tau]$:
\[
(m D_\tau/\epsilon)^d \le N(G_{n,p}, \rho_\tau, \epsilon) \le (M D_\tau/\epsilon)^d,
\]
where $d = n(p-n) = \dim G_{n,p}$ and $D_\tau$ is the diameter of $G_{n,p}$ with respect to the metric $\rho_\tau$. Specifically, for the Frobenius norm, the diameter is given by $D_\tau = 2\sqrt{\min(n, p-n)}$.

Lemma S1.5 (Probability of a subset of the orthogonal group). Consider the following subset of the orthogonal group $O(p)$:
\[
B_\epsilon = \left\{ \Gamma \in O(p) : \inf_{Q_1 \in O(n),\, Q_2 \in O(p-n)} \left\| \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F < \epsilon \right\}.
\]
Then the probability measure of $B_\epsilon$ satisfies the lower bound
\[
P(B_\epsilon) := \int_{B_\epsilon} [d\Gamma] \ge \left( \frac{2M\sqrt{\min(n,p-n)}}{\epsilon} \right)^{-n(p-n)},
\]
for some positive constant $M$.

Proof. Let $S_1,\dots,S_{N(G_{n,p},\rho_{\|\cdot\|_F},\epsilon)}$ form an $\epsilon$-covering of $G_{n,p}$ with respect to the metric $\rho_{\|\cdot\|_F}$. Then the following inequality holds:
\[
\begin{aligned}
1 &= P\Big( \bigcup_{i=1}^{N(G_{n,p},\rho_{\|\cdot\|_F},\epsilon)} \{\Gamma \in O(p) : \rho_{\|\cdot\|_F}([\Gamma], S_i) < \epsilon\} \Big)
\le \sum_{i=1}^{N(G_{n,p},\rho_{\|\cdot\|_F},\epsilon)} P(\{\Gamma \in O(p) : \rho_{\|\cdot\|_F}([\Gamma], S_i) < \epsilon\})\\
&= N(G_{n,p},\rho_{\|\cdot\|_F},\epsilon) \cdot P(\{\Gamma \in O(p) : \rho_{\|\cdot\|_F}([\Gamma], S_i) < \epsilon\})
= N(G_{n,p},\rho_{\|\cdot\|_F},\epsilon) \cdot P(\{\Gamma \in O(p) : \rho_{\|\cdot\|_F}([\Gamma], [I_p]) < \epsilon\}),
\end{aligned}
\]
where $[\Gamma] := \left\{ \Gamma \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} : Q_1 \in O(n),\ Q_2 \in O(p-n) \right\}$ for $\Gamma \in O(p)$.
We obtain the following lower bound on the probability measure of $B_\epsilon$:
\[
\begin{aligned}
P(B_\epsilon)
&= P\left( \left\{ \Gamma \in O(p) : \inf_{Q_1 \in O(n),\, Q_2 \in O(p-n)} \left\| \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F < \epsilon \right\} \right)
= P(\{\Gamma \in O(p) : \rho_{\|\cdot\|_F}([\Gamma],[I_p]) < \epsilon\})\\
&\ge \frac{1}{N(G_{n,p},\rho_{\|\cdot\|_F},\epsilon)}
\ge \left( \frac{2M\sqrt{\min(n,p-n)}}{\epsilon} \right)^{-n(p-n)},
\end{aligned}
\]
where $M$ is a positive constant. The last inequality follows from Lemma S1.4. $\blacksquare$

Lemma S1.6 (Probability of a subset of the orthogonal group). Consider the following subset of the orthogonal group $O(p)$:
\[
C_\epsilon = \left\{ \Gamma \in O(p) : \inf_{Q_2 \in O(p-n)} \left\| \begin{pmatrix} I_n & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F < \epsilon \right\}.
\]
Then the probability measure of $C_\epsilon$ satisfies the lower bound
\[
P(C_\epsilon) := \int_{C_\epsilon} [d\Gamma] \ge \left( \frac{c\sqrt n}{\epsilon} \right)^{-n(p-\frac{n}{2}-\frac{1}{2})}, \qquad \epsilon \le 2\sqrt n,
\]
for some positive constant $c$.

Proof. Let $S_1,\dots,S_{N(O(n),\|\cdot\|_F,\epsilon_1)}$ form an $\epsilon_1$-covering of $O(n)$ with respect to the distance $\|\cdot\|_F$, for $\epsilon_1 \in (0,\epsilon)$. For every $Q_1 \in O(n)$, there exists some $S_i$ such that $\|S_i - Q_1\|_F < \epsilon_1$.
By the triangle inequality, we obtain
\[
\left\| \begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F \le \left\| \begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} \right\|_F + \left\| \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F. \tag{S2}
\]
Taking the infimum over $Q_2 \in O(p-n)$ on both sides of (S2), the following inequality holds:
\[
\inf_{Q_2 \in O(p-n)} \left\| \begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F \le \inf_{Q_2 \in O(p-n)} \left\| \begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} \right\|_F + \inf_{Q_2 \in O(p-n)} \left\| \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F. \tag{S3}
\]
We obtain the following lower bound on the probability measure of $C_\epsilon$:
\[
\begin{aligned}
P(C_\epsilon)
&= P\left( \left\{ \Gamma \in O(p) : \inf_{Q_2 \in O(p-n)} \left\| \begin{pmatrix} I_n & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F < \epsilon \right\} \right)\\
&\ge \frac{1}{N(O(n),\|\cdot\|_F,\epsilon_1)}\, P\left( \bigcup_{i=1}^{N(O(n),\|\cdot\|_F,\epsilon_1)} \left\{ \Gamma \in O(p) : \inf_{Q_2 \in O(p-n)} \left\| \begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F < \epsilon \right\} \right)\\
&\ge \frac{1}{N(O(n),\|\cdot\|_F,\epsilon_1)}\, P\left( \left\{ \Gamma \in O(p) : \inf_{Q_1 \in O(n),\, Q_2 \in O(p-n)} \left\| \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F < \epsilon - \epsilon_1 \right\} \right)
= \frac{P(B_{\epsilon-\epsilon_1})}{N(O(n),\|\cdot\|_F,\epsilon_1)}.
\end{aligned}
\]
The first inequality holds by the fact that
\[
P(C_\epsilon) = P\left( \left\{ \Gamma \in O(p) : \inf_{Q_2 \in O(p-n)} \left\| \begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F < \epsilon \right\} \right), \quad i = 1,\dots,N(O(n),\|\cdot\|_F,\epsilon_1).
\]
The second inequality holds by (S3). Therefore, by applying Lemma S1.3 and Lemma S1.4, we obtain the inequality
\[
\begin{aligned}
P(C_\epsilon)
&\ge \left( \frac{2M\sqrt{\min(n,p-n)}}{\epsilon-\epsilon_1} \right)^{-n(p-n)} \cdot \left( \frac{C\sqrt n}{\epsilon_1} \right)^{-n(n-1)/2}
\ge \left( \frac{2M\sqrt n}{\epsilon-\epsilon_1} \right)^{-n(p-n)} \cdot \left( \frac{C\sqrt n}{\epsilon_1} \right)^{-n(n-1)/2}\\
&= \left( \frac{4M\sqrt n}{\epsilon} \right)^{-n(p-n)} \cdot \left( \frac{2C\sqrt n}{\epsilon} \right)^{-n(n-1)/2}
\ge \left( \frac{\max(4M,2C)\sqrt n}{\epsilon} \right)^{-n(p-\frac{n}{2}-\frac{1}{2})},
\end{aligned}
\]
where we set $\epsilon_1 = \epsilon/2$, and $M$ and $C$ are positive constants. As a result, for all sufficiently large $n$, we obtain the lower bound
\[
P(C_\epsilon) \ge \left( \frac{c\sqrt n}{\epsilon} \right)^{-n(p-\frac{n}{2}-\frac{1}{2})},
\]
for some positive constant $c$. $\blacksquare$

We define $c_i$ as the $(i,i)$ element of $\Gamma^T(hI_p + W)\Gamma$, where
\[
W = \mathrm{diag}(n\hat\lambda_1,\dots,n\hat\lambda_n,0,\dots,0),
\]
and $\hat\lambda_i$ is the $i$th eigenvalue of the sample covariance $S$. By Lemma S1.2, we can express $c_i$ as
\[
\frac{c_i}{n} = \frac{h}{n} + \sum_{j=1}^{n} \Gamma_{ji}^2 \hat\lambda_j
= \frac{h}{n} + \sum_{j=1}^{k} \Gamma_{ji}^2 \left\{ (1+\beta_j)\lambda_{0,j} + \Big(\bar d\,\frac{p}{n} + \alpha_j\sqrt{\frac{p}{n}}\Big) \right\} + \sum_{j=k+1}^{n} \Gamma_{ji}^2 \Big( \bar d\,\frac{p}{n} + \alpha_j\sqrt{\frac{p}{n}} \Big).
\]
Next, we establish two lemmas that provide upper and lower bounds for $c_i$.
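The first equality in the display above, $c_i/n = h/n + \sum_{j=1}^{n}\Gamma_{ji}^2\hat\lambda_j$, follows from $\Gamma^T\Gamma = I_p$ and can be checked numerically; the toy dimensions, seed, and names below are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, h = 6, 3, 2.0
lam_hat = np.array([5.0, 3.0, 1.5])                # first n sample eigenvalues
W = np.diag(np.concatenate([n * lam_hat, np.zeros(p - n)]))

Gamma, _ = np.linalg.qr(rng.normal(size=(p, p)))   # a random orthogonal matrix
c = np.diag(Gamma.T @ (h * np.eye(p) + W) @ Gamma)  # c_i as defined in the text

# Identity: c_i / n = h/n + sum_{j<=n} Gamma_{ji}^2 * lam_hat_j
c_alt = h / n + (Gamma[:n, :] ** 2 * lam_hat[:, None]).sum(axis=0)
```

The $hI_p$ term contributes exactly $h$ to every diagonal entry because conjugation by an orthogonal matrix leaves it unchanged; only the $W$ term depends on $\Gamma$.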
Lemma S1.7 (Upper bound for $c_i$). Under conditions A1–A4 in Section 3, the quantity $c_i$ satisfies the following upper bound on $C_\epsilon$:
\[
\frac{c_i}{n} \le
\begin{cases}
\left( (1+\beta_i)\lambda_{0,i} + \Big(\bar d\,\dfrac{p}{n} + \alpha_i\sqrt{\dfrac{p}{n}}\Big) + \dfrac{h}{n} \right)\left(1 + 4\epsilon^2\dfrac{\lambda_{0,1}}{\lambda_{0,k}}\right), & i = 1,\dots,k,\\[10pt]
\Big( \bar d\,\dfrac{p}{n} + \alpha_i\sqrt{\dfrac{p}{n}} + \dfrac{h}{n} \Big)\Big( 1 + 4\epsilon^2\dfrac{n\lambda_{0,1}}{\bar d\, p} \Big), & i = k+1,\dots,n,
\end{cases}
\]
for all sufficiently large $n$. Furthermore, the following inequality also holds on $C_\epsilon$:
\[
\prod_{i=n+1}^{p} \frac{c_i}{n} \le \Big(\frac{h}{n}\Big)^{p-n}\left[ 1 + \frac{2\epsilon^2 n}{(p-n)h}\,\lambda_{0,1} \right]^{p-n},
\]
where $\beta_{\max} = \max_{1\le j\le k}\beta_j$, for all sufficiently large $n$.

Proof. For $\Gamma \in C_\epsilon$, the following inequality holds:
\[
(\Gamma_{ii}-1)^2 + \sum_{\substack{j=1\\ j\ne i}}^{p} \Gamma_{ji}^2 < \epsilon^2, \quad i = 1,\dots,n.
\]
Since $\sum_{j=1}^{p} \Gamma_{ji}^2 = 1$, it follows that $1-\Gamma_{ii}^2 < \epsilon^2$, which implies $\Gamma_{ii}^2 > 1-\epsilon^2$ for $i = 1,\dots,n$.

For $i = 1,\dots,k$, we have the bound
\[
\begin{aligned}
\frac{c_i}{n}
&= \frac{h}{n} + \sum_{j=1}^{k}(1+\beta_j)\Gamma_{ji}^2\lambda_{0,j} + \sum_{j=1}^{n}\Big(\bar d\,\frac{p}{n}+\alpha_j\sqrt{\frac{p}{n}}\Big)\Gamma_{ji}^2\\
&\le \frac{h}{n} + (1+\beta_i)\lambda_{0,i}\Gamma_{ii}^2 + (1+\beta_{\max})\lambda_{0,1}\sum_{j\ne i}\Gamma_{ji}^2 + \Big(\bar d\,\frac{p}{n}+\alpha_i\sqrt{\frac{p}{n}}\Big)\Gamma_{ii}^2 + \Big(\bar d\,\frac{p}{n}+C\sqrt{\frac{p}{n}}\Big)\sum_{j\ne i}\Gamma_{ji}^2\\
&\le \frac{h}{n} + (1+\beta_i)\lambda_{0,i} + \Big(\bar d\,\frac{p}{n}+\alpha_i\sqrt{\frac{p}{n}}\Big) + \Big((1+\beta_{\max})\lambda_{0,1}+\bar d\,\frac{p}{n}+C\sqrt{\frac{p}{n}}\Big)(1-\Gamma_{ii}^2)\\
&\le \frac{h}{n} + (1+\beta_i)\lambda_{0,i} + \Big(\bar d\,\frac{p}{n}+\alpha_i\sqrt{\frac{p}{n}}\Big) + 2\epsilon^2\lambda_{0,1}\\
&\le \left( (1+\beta_i)\lambda_{0,i} + \Big(\bar d\,\frac{p}{n}+\alpha_i\sqrt{\frac{p}{n}}\Big) + \frac{h}{n} \right)\left(1+4\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,k}}\right),
\end{aligned}
\]
for all sufficiently large $n$. The third inequality follows from the fact that
\[
\lambda_{0,1} \ge \beta_{\max}\lambda_{0,1} + \bar d\,\frac{p}{n} + C\sqrt{\frac{p}{n}}.
\]
The fourth inequality follows from
\[
(1+\beta_i)\lambda_{0,i} + \Big(\bar d\,\frac{p}{n}+\alpha_i\sqrt{\frac{p}{n}}\Big) + \frac{h}{n} \ge \frac{\lambda_{0,k}}{2},
\]
for all sufficiently large $n$.
For $i = k+1,\dots,n$, we have the following bound:
\[
\begin{aligned}
\frac{c_i}{n} &= \frac hn + \sum_{j=1}^k (1+\delta_j)\Gamma_{ji}^2\lambda_{0,j} + \sum_{j=1}^n \left( \frac{\bar dp}{n}+\alpha_j\sqrt{\frac pn} \right)\Gamma_{ji}^2 \\
&\le \frac hn + (1+\delta_{\max})\lambda_{0,1}\sum_{j\ne i}\Gamma_{ji}^2 + \left( \frac{\bar dp}{n}+\alpha_i\sqrt{\frac pn} \right)\Gamma_{ii}^2 + \left( \frac{\bar dp}{n}+C\sqrt{\frac pn} \right)\sum_{j\ne i}\Gamma_{ji}^2 \\
&\le \frac hn + \left( \frac{\bar dp}{n}+\alpha_i\sqrt{\frac pn} \right) + \left( (1+\delta_{\max})\lambda_{0,1} + \frac{\bar dp}{n}+C\sqrt{\frac pn} \right)(1-\Gamma_{ii}^2) \\
&\le \frac hn + \left( \frac{\bar dp}{n}+\alpha_i\sqrt{\frac pn} \right) + 2\epsilon^2\lambda_{0,1} \\
&\le \left( \frac{\bar dp}{n}+\alpha_i\sqrt{\frac pn}+\frac hn \right)\left( 1+\frac{4\epsilon^2 n\lambda_{0,1}}{\bar dp} \right),
\end{aligned}
\]
for all sufficiently large $n$. The third inequality follows from the fact that $\lambda_{0,1} \gg \delta_{\max}\lambda_{0,1} + \bar dp/n + C\sqrt{p/n}$. The fourth inequality follows from
\[
\frac{\bar dp}{n}+\alpha_i\sqrt{\frac pn}+\frac hn \ge \frac12\,\frac{\bar dp}{n},
\]
for all sufficiently large $n$.

For $\Gamma \in C_\epsilon$, the following bound holds for $j = 1,\dots,n$:
\[
\sum_{i=n+1}^p \Gamma_{ji}^2 < \epsilon^2.
\]
Therefore, for $i = n+1,\dots,p$, we obtain the following inequality:
\[
\begin{aligned}
\sum_{i=n+1}^p \frac{c_i}{n} &= \frac{h(p-n)}{n} + \sum_{i=n+1}^p\sum_{j=1}^k (1+\delta_j)\Gamma_{ji}^2\lambda_{0,j} + \sum_{i=n+1}^p\sum_{j=1}^n \left( \frac{\bar dp}{n}+\alpha_j\sqrt{\frac pn} \right)\Gamma_{ji}^2 \\
&\le \frac{h(p-n)}{n} + (1+\delta_{\max})\lambda_{0,1}\sum_{i=n+1}^p\sum_{j=1}^k\Gamma_{ji}^2 + \left( \frac{\bar dp}{n}+C\sqrt{\frac pn} \right)\sum_{i=n+1}^p\sum_{j=1}^n\Gamma_{ji}^2 \\
&\le \frac{h(p-n)}{n} + \epsilon^2(1+\delta_{\max})\lambda_{0,1} + \epsilon^2\left( \frac{\bar dp}{n}+C\sqrt{\frac pn} \right) \\
&\le \frac{h(p-n)}{n} + 2\epsilon^2\lambda_{0,1},
\end{aligned}
\]
where $\delta_{\max} = \max_{1\le j\le k}\delta_j$, and for all sufficiently large $n$. The last inequality follows from $\lambda_{0,1} \gg \delta_{\max}\lambda_{0,1} + \bar dp/n + C\sqrt{p/n}$.
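The mechanism behind these bounds is that a column of $\Gamma \in C_\epsilon$ concentrates its mass on one coordinate, so the leakage $\sum_{j\ne i}\Gamma_{ji}^2 < \epsilon^2$ is priced at the largest eigenvalue. A small illustration of this step (ours, with illustrative names; the square case $p = n$ is used for simplicity), building a near-identity orthogonal matrix via a Cayley transform:

```python
# Illustration (ours): if 1 - Gamma_{ii}^2 < eps^2, then
#   c_i/n = h/n + sum_j Gamma_{ji}^2 lhat_j <= h/n + lhat_i + eps^2 * lhat_1,
# the pattern of bound used in the proof of Lemma S1.7.
import numpy as np

rng = np.random.default_rng(1)
n, h, eps = 6, 0.5, 0.3
lhat = np.sort(rng.uniform(1.0, 5.0, size=n))[::-1]          # lhat_1 >= ... >= lhat_n
B = 0.02 * rng.standard_normal((n, n))
A = B - B.T                                                  # small skew-symmetric matrix
Gamma = np.linalg.solve(np.eye(n) + A, np.eye(n) - A)        # Cayley transform: orthogonal
in_C_eps = bool(np.all(1 - np.diag(Gamma) ** 2 < eps ** 2))  # Gamma lies in C_eps
c_over_n = h / n + (Gamma ** 2).T @ lhat
bound = h / n + lhat + eps ** 2 * lhat[0]
```

Given `in_C_eps`, the bound holds deterministically: the diagonal term is at most $\hat\lambda_i$ and everything leaking off the diagonal is at most $(1-\Gamma_{ii}^2)\hat\lambda_1 < \epsilon^2\hat\lambda_1$.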
By the arithmetic-geometric mean inequality, we obtain the following inequality:
\[
\prod_{i=n+1}^p \frac{c_i}{n} \le \left( \frac1{p-n}\sum_{i=n+1}^p \frac{c_i}n \right)^{p-n} \le \left[ \frac hn + \frac{2\epsilon^2}{p-n}\lambda_{0,1} \right]^{p-n} = \left( \frac hn \right)^{p-n}\left[ 1 + \frac{2\epsilon^2 n}{(p-n)h}\lambda_{0,1} \right]^{p-n}. \;β– 
\]

Lemma S1.8 (Lower bound of $c_i$). Under the conditions A1-A4 in Section 3, $c_i$ has the following lower bound:
\[
\frac{c_i}{n} \ge \prod_{j=1}^k \left( (1+\delta_j)\lambda_{0,j} + \left(\frac{\bar dp}n + \alpha_j\sqrt{\frac pn}\right) + \frac hn \right)^{\Gamma_{ji}^2} \cdot \prod_{j=k+1}^n \left( \frac{\bar dp}n + \alpha_j\sqrt{\frac pn} + \frac hn \right)^{\Gamma_{ji}^2} \cdot \prod_{j=n+1}^p \left( \frac hn \right)^{\Gamma_{ji}^2}.
\]
Proof. By the (weighted) arithmetic-geometric mean inequality, we obtain the following inequality:
\[
\begin{aligned}
\frac{c_i}{n} &= \sum_{j=1}^k \Gamma_{ji}^2\left\{ (1+\delta_j)\lambda_{0,j} + \left(\frac{\bar dp}n+\alpha_j\sqrt{\frac pn}\right) + \frac hn \right\} + \sum_{j=k+1}^n \Gamma_{ji}^2\left( \frac{\bar dp}n+\alpha_j\sqrt{\frac pn}+\frac hn \right) + \sum_{j=n+1}^p \Gamma_{ji}^2\,\frac hn \\
&\ge \prod_{j=1}^k \left( (1+\delta_j)\lambda_{0,j} + \left(\frac{\bar dp}n+\alpha_j\sqrt{\frac pn}\right) + \frac hn \right)^{\Gamma_{ji}^2} \cdot \prod_{j=k+1}^n \left( \frac{\bar dp}n+\alpha_j\sqrt{\frac pn}+\frac hn \right)^{\Gamma_{ji}^2} \cdot \prod_{j=n+1}^p \left( \frac hn \right)^{\Gamma_{ji}^2}. \;β– 
\end{aligned}
\]

Lemma S1.9 (Hoffman-Wielandt theorem; Stewart and Sun 1990). For $n\times n$ normal matrices $A, \tilde A$, the following inequality holds:
\[
\min_{\pi} \sum_i \left| \tilde\lambda_{\pi(i)} - \lambda_i \right| \le \| \tilde A - A \|_F,
\]
where $\pi$ is a permutation of the set $[n] = \{1,2,\dots,n\}$, and $\min_\pi$ denotes the minimum among all permutations of the set $[n]$.
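The weighted arithmetic-geometric mean inequality used in Lemma S1.8 can be checked directly; here the weights $\Gamma_{ji}^2$ come from a column of an orthogonal matrix, so they are non-negative and sum to one. A minimal check (ours, names illustrative):

```python
# Weighted AM-GM check behind Lemma S1.8: for weights w_j >= 0 with sum 1
# (here w_j = Gamma_{j1}^2 for one column of an orthogonal matrix) and
# positive terms x_j, we have  sum_j w_j x_j >= prod_j x_j ** w_j.
import numpy as np

rng = np.random.default_rng(2)
p = 8
Gamma, _ = np.linalg.qr(rng.standard_normal((p, p)))
w = Gamma[:, 0] ** 2                  # squared column entries: sum to 1
x = rng.uniform(0.1, 5.0, size=p)     # positive terms (playing the roles of K_j, L_j, M)
arith = float(w @ x)                  # weighted arithmetic mean
geom = float(np.prod(x ** w))         # weighted geometric mean
```

This is exactly the step that turns the linear expression for $c_i/n$ into the product lower bound with exponents $\Gamma_{ji}^2$.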
Lemma S1.10 (Block matrix approximation). For all $\Gamma \in O(p)$, if $\|\Gamma_{1:n,\,n+1:p}\|_F < \eta$, then it follows that
\[
\inf_{Q_1 \in O(n),\, Q_2 \in O(p-n)} \left\| \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma \right\|_F < 2\eta,
\]
for all $\eta \in (0,1)$.

Proof. Consider the following block partition of $\Gamma$:
\[
\Gamma = \begin{pmatrix} \Gamma_{11} & \Gamma_{12} \\ \Gamma_{21} & \Gamma_{22} \end{pmatrix},
\]
where $\Gamma_{11} \in \mathbb{R}^{n\times n}$ and $\Gamma_{22} \in \mathbb{R}^{(p-n)\times(p-n)}$. Let $A = I_{p-n}$, $\tilde A = I_{p-n} - \Gamma_{12}^T\Gamma_{12}$, and $B_\eta = \{\Gamma \in O(p) : \|\Gamma_{12}\|_F < \eta\}$. For all $\Gamma \in B_\eta$, the following inequality holds by Lemma S1.9:
\[
\sum_{i=1}^{p-n} \left|\tilde\lambda_i - 1\right| \le \left\|\Gamma_{12}^T\Gamma_{12}\right\|_F < \eta^2,
\]
where $\tilde\lambda_i$ is the $i$-th eigenvalue of $\tilde A$. Consider the singular value decomposition $\Gamma_{22} = UDV^T$, where $U, V \in O(p-n)$ and $D = \mathrm{diag}(\sqrt{\tilde\lambda_1},\dots,\sqrt{\tilde\lambda_{p-n}})$. Then the following holds for all $\Gamma \in B_\eta$:
\[
\begin{aligned}
\inf_{Q \in O(p-n)} \|\Gamma_{22} - Q\|_F^2 &= \inf_{Q \in O(p-n)} \|D - Q\|_F^2
= \inf_{Q \in O(p-n)} \sum_{i=1}^{p-n} \left[ \left(\sqrt{\tilde\lambda_i} - q_i\right)^2 + \left(1 - q_i^2\right) \right] \\
&= \sum_{i=1}^{p-n} \left(\sqrt{\tilde\lambda_i} - 1\right)^2
\le \max_{i \le p-n} \frac{\left|\tilde\lambda_i - 1\right|}{\left(\sqrt{\tilde\lambda_i}+1\right)^2} \cdot \sum_{i=1}^{p-n} \left|\tilde\lambda_i - 1\right|
\le \eta^2,
\end{aligned}
\]
where $q_i$ is the $(i,i)$ element of $Q$. Likewise we obtain $\inf_{Q \in O(n)} \|\Gamma_{11} - Q\|_F^2 \le \eta^2$. For $\Gamma \in B_\eta$, we obtain the inequality:
\[
\inf_{Q_1\in O(n),\,Q_2\in O(p-n)} \left\| \Gamma - \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} \right\|_F^2 = \inf_{Q_1\in O(n)} \|Q_1-\Gamma_{11}\|_F^2 + \inf_{Q_2\in O(p-n)}\|Q_2-\Gamma_{22}\|_F^2 + \|\Gamma_{12}\|_F^2 + \|\Gamma_{21}\|_F^2 \le 4\eta^2. \;β– 
\]

Lemma S1.11. Consider the following subset of $O(p)$:
\[
A_\epsilon = \left\{ \Gamma \in O(p) : \inf_{Q_1\in O(k),\,Q_2\in O(p-k)} \left\| \Gamma - \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} \right\|_F < \epsilon \right\}.
\]
Under the conditions A1-A5, suppose that $\epsilon^2 a_n \gg np$. Then, the following inequality holds:
\[
\frac{\int_{A_\epsilon^c} \prod_{i=1}^p c_i^{-a_i - n/2 + 1}\,(d\Gamma)}{\int \prod_{i=1}^p c_i^{-a_i-n/2+1}\,(d\Gamma)} \lesssim \left( \frac{n\lambda_{0,k} + p}{p} \right)^{-\epsilon^2 a_n}.
\]
Proof. Consider the following subset of the orthogonal group $O(p)$:
\[
C_\epsilon = \left\{ \Gamma \in O(p) : \inf_{Q_2 \in O(p-n)} \left\| \Gamma - \begin{pmatrix} I_n & 0 \\ 0 & Q_2 \end{pmatrix} \right\|_F < \epsilon \right\}.
\]
Define the following quantities:
\[
\begin{aligned}
K_i &= (1+\delta_i)\lambda_{0,i} + \left( \frac{\bar dp}{n} + \alpha_i\sqrt{\frac pn} \right) + \frac hn, \qquad i = 1,\dots,k, \\
L_i &= \frac{\bar dp}{n} + \alpha_i\sqrt{\frac pn} + \frac hn, \qquad i = k+1,\dots,n, \\
M &= \frac hn, \qquad r_n = 4\epsilon_2^2\,\frac{\lambda_{0,1}}{\lambda_{0,k}}, \qquad s_n = 4\epsilon_2^2\,\frac{n\lambda_{0,1}}{\bar dp}, \qquad t_n = \frac{2\epsilon_2^2 n}{(p-n)h}\,\lambda_{0,1}.
\end{aligned}
\]
Using Lemma S1.7 and Lemma S1.8, we obtain the following inequality:
\[
\begin{aligned}
\frac{\int_{A_\epsilon^c} \prod c_i^{-a_i-n/2+1}\,(d\Gamma)}{\int_{C_{\epsilon_2}} \prod c_i^{-a_i-n/2+1}\,(d\Gamma)}
&\le \frac{P(A_\epsilon^c)\cdot\sup_{A_\epsilon^c}\prod c_i^{-a_i-n/2+1}}{P(C_{\epsilon_2})\cdot\inf_{C_{\epsilon_2}}\prod c_i^{-a_i-n/2+1}}
\le \frac{1}{P(C_{\epsilon_2})}\cdot\frac{\sup_{C_{\epsilon_2}}\prod c_i^{a_i+n/2-1}}{\inf_{A_\epsilon^c}\prod c_i^{a_i+n/2-1}} \\
&\le \frac{1}{P(C_{\epsilon_2})}\cdot\frac{\prod_{j=1}^k\big[K_j(1+r_n)\big]^{a_j+n/2-1}\cdot\prod_{j=k+1}^n\big[L_j(1+s_n)\big]^{a_n+n/2-1}\cdot\big[M(1+t_n)\big]^{(p-n)(a_p+n/2-1)}}{\inf_{A_\epsilon^c}\prod_{i=1}^p\Big[\prod_{j=1}^kK_j^{\Gamma_{ji}^2}\cdot\prod_{j=k+1}^nL_j^{\Gamma_{ji}^2}\cdot\prod_{j=n+1}^pM^{\Gamma_{ji}^2}\Big]^{a_i+n/2-1}} \\
&= \frac{(1+r_n)^{\sum_{j=1}^k(a_j+n/2-1)}(1+s_n)^{(n-k)(a_n+n/2-1)}(1+t_n)^{(p-n)(a_p+n/2-1)}}{P(C_{\epsilon_2})} \\
&\qquad\times \sup_{A_\epsilon^c}\frac{\prod_{j=1}^kK_j^{a_j+n/2-1}\cdot\prod_{j=k+1}^nL_j^{a_n+n/2-1}\cdot M^{(p-n)(a_p+n/2-1)}}{\prod_{i=1}^p\Big[\prod_{j=1}^kK_j^{\Gamma_{ji}^2}\cdot\prod_{j=k+1}^nL_j^{\Gamma_{ji}^2}\cdot\prod_{j=n+1}^pM^{\Gamma_{ji}^2}\Big]^{a_i+n/2-1}}.
\end{aligned}
\]
Now, we focus on bounding the term
\[
\sup_{A_\epsilon^c} \frac{\prod_{j=1}^k K_j^{a_j+n/2-1} \cdot \prod_{j=k+1}^n L_j^{a_n+n/2-1} \cdot M^{(p-n)(a_p+n/2-1)}}{\prod_{i=1}^p \Big[ \prod_{j=1}^k K_j^{\Gamma_{ji}^2} \cdot \prod_{j=k+1}^n L_j^{\Gamma_{ji}^2} \cdot \prod_{j=n+1}^p M^{\Gamma_{ji}^2} \Big]^{a_i+n/2-1}}.
\]
Collecting, for each factor in the denominator, the total exponent contributed by the columns of $\Gamma$, we obtain the following upper bound:
\[
\begin{aligned}
&= \sup_{A_\epsilon^c} \Bigg[ \frac{\prod_{j=1}^k K_j^{a_j+n/2-1}}{\prod_{j=1}^k K_j^{\sum_{i=1}^k \Gamma_{ji}^2(a_i+\frac n2-1) + \sum_{i=k+1}^n \Gamma_{ji}^2(a_n+\frac n2-1) + \sum_{i=n+1}^p \Gamma_{ji}^2(a_p+\frac n2-1)}} \\
&\qquad\times \frac{\prod_{j=k+1}^n L_j^{a_n+n/2-1}}{\prod_{j=k+1}^n L_j^{\sum_{i=1}^k \Gamma_{ji}^2(a_i+\frac n2-1) + \sum_{i=k+1}^n \Gamma_{ji}^2(a_n+\frac n2-1) + \sum_{i=n+1}^p \Gamma_{ji}^2(a_p+\frac n2-1)}}
\times \frac{M^{(p-n)(a_p+n/2-1)}}{M^{\sum_{i=1}^k\sum_{j=n+1}^p \Gamma_{ji}^2(a_i+\frac n2-1) + \sum_{i=k+1}^n\sum_{j=n+1}^p \Gamma_{ji}^2(a_n+\frac n2-1) + \sum_{i=n+1}^p\sum_{j=n+1}^p \Gamma_{ji}^2(a_p+\frac n2-1)}} \Bigg] \\
&= \sup_{A_\epsilon^c} \Bigg[ \frac{\prod_{j=1}^k K_j^{a_j+n/2-1}}{\prod_{j=1}^k K_j^{\sum_{i=1}^k \Gamma_{ji}^2(a_i+\frac n2-1) + \sum_{i=k+1}^p \Gamma_{ji}^2(a_k+\frac n2-1)}}
\cdot \frac{1}{\prod_{j=1}^k K_j^{\sum_{i=k+1}^n \Gamma_{ji}^2(a_n-a_k) + \sum_{i=n+1}^p \Gamma_{ji}^2(a_p-a_k)}} \\
&\qquad\times \frac{\prod_{j=k+1}^n L_j^{(1-\sum_{i=k+1}^n \Gamma_{ji}^2)(a_n+\frac n2-1)}}{\prod_{j=k+1}^n L_j^{\sum_{i=1}^k \Gamma_{ji}^2(a_i+\frac n2-1) + \sum_{i=n+1}^p \Gamma_{ji}^2(a_p+\frac n2-1)}}
\times \frac{M^{(p-n-\sum_{i=n+1}^p\sum_{j=n+1}^p \Gamma_{ji}^2)(a_p+\frac n2-1)}}{M^{\sum_{i=1}^k\sum_{j=n+1}^p \Gamma_{ji}^2(a_i+\frac n2-1) + \sum_{i=k+1}^n\sum_{j=n+1}^p \Gamma_{ji}^2(a_n+\frac n2-1)}} \Bigg] \\
&\le \sup_{O(p)} \Bigg[ \frac{\prod_{j=1}^k K_j^{a_j+n/2-1}}{\prod_{j=1}^k K_j^{\sum_{i=1}^k \Gamma_{ji}^2(a_i+\frac n2-1) + \sum_{i=k+1}^p \Gamma_{ji}^2(a_k+\frac n2-1)}} \Bigg]
\times \sup_{A_\epsilon^c} \Bigg[ \frac{1}{\prod_{j=1}^k K_j^{\sum_{i=k+1}^n \Gamma_{ji}^2(a_n-a_k)+\sum_{i=n+1}^p\Gamma_{ji}^2(a_p-a_k)}} \\
&\qquad\times \frac{\prod_{j=k+1}^n L_j^{(1-\sum_{i=k+1}^n\Gamma_{ji}^2)(a_n+\frac n2-1)}}{\prod_{j=k+1}^n L_j^{\sum_{i=1}^k\Gamma_{ji}^2(a_i+\frac n2-1)+\sum_{i=n+1}^p\Gamma_{ji}^2(a_p+\frac n2-1)}}
\times \frac{M^{(p-n-\sum_{i=n+1}^p\sum_{j=n+1}^p\Gamma_{ji}^2)(a_p+\frac n2-1)}}{M^{\sum_{i=1}^k\sum_{j=n+1}^p\Gamma_{ji}^2(a_i+\frac n2-1)+\sum_{i=k+1}^n\sum_{j=n+1}^p\Gamma_{ji}^2(a_n+\frac n2-1)}} \Bigg] \\
&\le \sup_{A_\epsilon^c} \Bigg[ \frac{1}{\prod_{j=1}^k K_j^{\sum_{i=k+1}^n\Gamma_{ji}^2(a_n-a_k)+\sum_{i=n+1}^p\Gamma_{ji}^2(a_p-a_k)}}
\times \frac{\prod_{j=k+1}^n L_j^{(1-\sum_{i=k+1}^p\Gamma_{ji}^2)(a_n+\frac n2-1)}}{\prod_{j=k+1}^n L_j^{\sum_{i=1}^k\Gamma_{ji}^2(a_i+\frac n2-1)+\sum_{i=n+1}^p\Gamma_{ji}^2(a_p-a_n)}} \\
&\qquad\times \frac{M^{(p-n-\sum_{i=n+1}^p\sum_{j=n+1}^p\Gamma_{ji}^2)(a_p+\frac n2-1)}}{M^{\sum_{i=1}^k\sum_{j=n+1}^p\Gamma_{ji}^2(a_i+\frac n2-1)+\sum_{i=k+1}^n\sum_{j=n+1}^p\Gamma_{ji}^2(a_n+\frac n2-1)}} \Bigg] \\
&\le \sup_{A_\epsilon^c} \Bigg[ \frac{1}{K_k^{\sum_{j=1}^k\sum_{i=k+1}^n\Gamma_{ji}^2(a_n-a_k)+\sum_{j=1}^k\sum_{i=n+1}^p\Gamma_{ji}^2(a_p-a_k)}}
\times \frac{L_n^{(n-k-\sum_{j=k+1}^n\sum_{i=k+1}^p\Gamma_{ji}^2)(a_n+\frac n2-1)}}{L_{k+1}^{\sum_{i=n+1}^p\sum_{j=k+1}^n\Gamma_{ji}^2(a_p-a_n)}}
\times \frac{M^{(p-n-\sum_{i=n+1}^p\sum_{j=n+1}^p\Gamma_{ji}^2)(a_p+\frac n2-1)}}{M^{k(a_1+\frac n2-1)+\sum_{i=k+1}^n\sum_{j=n+1}^p\Gamma_{ji}^2(a_n+\frac n2-1)}} \Bigg] \\
&= \sup_{A_\epsilon^c} \Bigg[ \frac{1}{K_k^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_n-a_k)+\sum_{j=1}^k\sum_{i=n+1}^p\Gamma_{ji}^2(a_p-a_k)}}
\times \frac{L_n^{\sum_{j=k+1}^n\sum_{i=1}^k\Gamma_{ji}^2(a_n+\frac n2-1)}}{L_{k+1}^{\sum_{i=n+1}^p\sum_{j=k+1}^n\Gamma_{ji}^2(a_p-a_n)}}
\times \frac{M^{\sum_{i=1}^n\sum_{j=n+1}^p\Gamma_{ji}^2(a_p+\frac n2-1)}}{M^{k(a_1+\frac n2-1)+\sum_{i=k+1}^n\sum_{j=n+1}^p\Gamma_{ji}^2(a_n+\frac n2-1)}} \Bigg],
\end{aligned}
\]
where the second inequality follows from the fact that the term
\[
\prod_{j=1}^k K_j^{\sum_{i=1}^k\Gamma_{ji}^2(a_i+\frac n2-1)+\sum_{i=k+1}^p\Gamma_{ji}^2(a_k+\frac n2-1)}
\]
is minimized when $\Gamma_{jj}^2 = 1$ for $j = 1,\dots,k$. The third inequality is satisfied due to the ordering conditions $K_1 \ge \cdots \ge K_k$ and $L_{k+1} \ge \cdots \ge L_n$. Since Lemma S1.10 implies that $\|\Gamma_{1:n,\,n+1:p}\|_F > \epsilon/2$ on $A_\epsilon^c$, we obtain the inequality:
\[
\begin{aligned}
&\frac{1}{K_k^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_n-a_k)+\sum_{j=1}^k\sum_{i=n+1}^p\Gamma_{ji}^2(a_p-a_k)}}
\times \frac{L_n^{\sum_{j=k+1}^n\sum_{i=1}^k\Gamma_{ji}^2(a_n+\frac n2-1)}}{L_{k+1}^{\sum_{i=n+1}^p\sum_{j=k+1}^n\Gamma_{ji}^2(a_p-a_n)}}
\times \frac{M^{\sum_{i=1}^n\sum_{j=n+1}^p\Gamma_{ji}^2(a_p+\frac n2-1)}}{M^{k(a_1+\frac n2-1)+\sum_{i=k+1}^n\sum_{j=n+1}^p\Gamma_{ji}^2(a_n+\frac n2-1)}} \\
&\qquad\le \frac{1}{K_k^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_n-a_k)}}
\times L_n^{\sum_{j=k+1}^n\sum_{i=1}^k\Gamma_{ji}^2(a_n+\frac n2-1)}
\times \frac{1}{M^{k(a_1+\frac n2-1)}} \\
&\qquad= \left(\frac{L_n}{K_k}\right)^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_n-a_k)}
\times \frac{L_n^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_k+\frac n2-1)}}{M^{k(a_1+\frac n2-1)}},
\end{aligned}
\]
where the first inequality is derived from $K_k, L_{k+1} > 1$, $M < 1$, and $a_p \ge a_n$.
Next, we derive an upper bound for the remaining term:
\[
(1+r_n)^{\sum_{i=1}^k(a_i+\frac n2-1)}\,(1+s_n)^{(n-k)(a_n+\frac n2-1)}\,(1+t_n)^{(p-k)(a_p+\frac n2-1)}
\lesssim \exp\left( \epsilon_2^2\,\frac{\lambda_{0,1}}{\lambda_{0,k}}\,a_k \right)\cdot\exp\left( 4\epsilon_2^2\,\frac{n\lambda_{0,1}}{\bar dp}\,na_n \right)\cdot\exp\left( \frac{2\epsilon_2^2 n}{(p-n)h}\,\lambda_{0,1}\,pa_p \right),
\]
using the bound $\log(1+x) \le x$ for all small $x > 0$. Therefore, we obtain the final upper bound
\[
\begin{aligned}
\frac{\int_{A_\epsilon^c} \prod c_i^{-a_i-n/2+1}\,(d\Gamma)}{\int_{C_{\epsilon_2}} \prod c_i^{-a_i-n/2+1}\,(d\Gamma)}
&\le \frac{1}{P(C_{\epsilon_2})}\cdot\exp\left(\epsilon_2^2\frac{\lambda_{0,1}}{\lambda_{0,k}}a_k\right)\cdot\exp\left(4\epsilon_2^2\frac{n\lambda_{0,1}}{\bar dp}na_n\right)\cdot\exp\left(\frac{2\epsilon_2^2 n}{(p-n)h}\lambda_{0,1}\,pa_p\right) \\
&\qquad\times \sup_{A_\epsilon^c}\left[ \left(\frac{L_n}{K_k}\right)^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_n-a_k)} \times \frac{L_n^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_k+\frac n2-1)}}{M^{k(a_1+\frac n2-1)}} \right] \\
&\lesssim \left(\frac{c\sqrt n}{\epsilon_2}\right)^{np-n^2}\cdot\exp\left(\frac{n^2\epsilon_2^2\lambda_{0,1}a_n}{p}\right)\cdot\exp\left(\frac{n\epsilon_2^2\lambda_{0,1}a_p}{h}\right)\cdot\left(\frac{L_n}{K_k}\right)^{\sum_{i=1}^k\sum_{j=k+1}^n\Gamma_{ji}^2\,a_n} \\
&\lesssim \left(\frac{K_k}{L_n}\right)^{-\epsilon^2 a_n/4} \asymp \left(\frac{n\lambda_{0,k}+p}{p}\right)^{-\epsilon^2 a_n},
\end{aligned}
\]
where $\epsilon_2^2 \ll \frac{hp}{a_p\lambda_{0,1}} \wedge \frac{p^2}{n\lambda_{0,1}a_n}$ and $\epsilon^2 a_n \gg np$. β– 

Lemma S1.12. Suppose that $a_1 < \cdots < a_n$ and $b_1 > \cdots > b_n$. Consider a permutation $\pi$ of the set $[n] = \{1,\dots,n\}$. If $\pi$ is not the identity permutation, then there exists an index $i \in \{1,\dots,n\}$ such that $\pi(i) \ne i$, and the following inequality holds:
\[
\sum_{l=1}^n a_l b_{\pi(l)} \ge \sum_{l=1}^n a_l b_l + \min_{l<n}(a_{l+1}-a_l) \cdot \min_{l<n}(b_l - b_{l+1}).
\]
Proof. Let $i$ be the smallest integer which satisfies $\pi(i) \ne i$; then $\pi(i) = j > i$. Furthermore, there exists $k > i$ such that $\pi(k) = i$. Now, we define a new permutation $\pi'$ as follows:
\[
\pi'(l) = \begin{cases} \pi(l) & \text{for } l \ne i, k, \\ i & \text{for } l = i, \\ j & \text{for } l = k. \end{cases}
\]
Using the newly defined permutation $\pi'$, we have
\[
\sum_{l=1}^n a_l b_{\pi(l)} = \sum_{l=1}^n a_l b_{\pi'(l)} + (a_ib_j + a_kb_i) - (a_ib_i + a_kb_j) = \sum_{l=1}^n a_l b_{\pi'(l)} + (a_k-a_i)(b_i-b_j).
\]
Since the rearrangement inequality guarantees that
\[
\sum_{l=1}^n a_l b_{\pi'(l)} \ge \sum_{l=1}^n a_l b_l,
\]
it follows that
\[
\sum_{l=1}^n a_l b_{\pi(l)} \ge \sum_{l=1}^n a_l b_l + (a_k - a_i)(b_i - b_j).
\]
For any non-identity permutation $\pi$, we therefore obtain the inequality
\[
\sum_{l=1}^n a_l b_{\pi(l)} \ge \sum_{l=1}^n a_l b_l + \inf_{i,j,k\,:\,i<j,\,i<k}(a_k-a_i)(b_i-b_j) \ge \sum_{l=1}^n a_l b_l + \min_{l<n}(a_{l+1}-a_l)\cdot\min_{l<n}(b_l-b_{l+1}). \;β– 
\]

Lemma S1.13. Assume that conditions A1-A5 hold. Define the sets as follows:
\[
D_\eta = \left\{ \Gamma\in O(p) : \inf_{Q_2\in O(p-k)} \left\| \begin{pmatrix} I_k & 0 \\ 0 & Q_2\end{pmatrix} - \Gamma \right\|_F < \eta \right\}, \qquad
E_\eta = \left\{ \Gamma\in O(p) : \inf_{Q_2\in O(p-k)} \left\| \begin{pmatrix} S & 0 \\ 0 & Q_2\end{pmatrix} - \Gamma \right\|_F < \eta \right\},
\]
where $S \in O(k)$ has non-negative diagonal elements, and satisfies $\|I_k - S\|_F \ge \epsilon$.
If $\eta \ll \lambda_{0,k}/\lambda_{0,1}$, then the following inequality holds:
\[
\frac{\sup_{D_\eta} \prod_{i=1}^k (c_i/n)^{a_i+n/2-1}}{\inf_{E_\eta} \prod_{i=1}^k (c_i/n)^{a_i+n/2-1}} \lesssim \frac{\exp\left( n\eta^2\,\hat\lambda_1/\hat\lambda_k \right)}{\exp\left( \frac{\epsilon}{2\sqrt k}\,\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\left(\hat\lambda_l/\hat\lambda_{l+1}\right) \right)}.
\]
Proof. We have
\[
\frac{\sup_{D_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}} = \frac{\sup_{D_\eta}\prod_{i=1}^k\left[h/n + \sum_{j=1}^n\Gamma_{ji}^2\hat\lambda_j\right]^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k\left[h/n + \sum_{j=1}^n\Gamma_{ji}^2\hat\lambda_j\right]^{a_i+n/2-1}}.
\]
At first, we focus on the following term:
\[
\sup_{D_\eta}\prod_{i=1}^k\left[\frac hn + \sum_{j=1}^n\Gamma_{ji}^2\hat\lambda_j\right]^{a_i+n/2-1}. \tag{S4}
\]
For $\Gamma \in D_\eta$, $(1-\Gamma_{ii})^2 + \sum_{j\ne i}\Gamma_{ji}^2 < \eta^2$ holds, and it implies $\Gamma_{ii}^2 \ge 1-\eta^2$. Therefore, we obtain the upper bound of (S4) as follows:
\[
\begin{aligned}
\text{(S4)} &\le \sup_{D_\eta}\prod_{i=1}^k\left[\frac hn + \Gamma_{ii}^2\hat\lambda_i + \sum_{j\ne i}\Gamma_{ji}^2\hat\lambda_j\right]^{a_i+n/2-1}
\le \sup_{D_\eta}\prod_{i=1}^k\left[\frac hn + \Gamma_{ii}^2\hat\lambda_i + (1-\Gamma_{ii}^2)\hat\lambda_1\right]^{a_i+n/2-1} \\
&\le \sup_{D_\eta}\prod_{i=1}^k\left[\frac hn + \hat\lambda_i + \eta^2\hat\lambda_1\right]^{a_i+n/2-1}
\le \prod_{i=1}^k\left[\left(\frac hn+\hat\lambda_i\right)\left(1+\eta^2\frac{\hat\lambda_1}{\hat\lambda_i}\right)\right]^{a_i+n/2-1} \\
&\le \prod_{i=1}^k\left[\frac hn+\hat\lambda_i\right]^{a_i+n/2-1}\cdot\left(1+\eta^2\frac{\hat\lambda_1}{\hat\lambda_k}\right)^{\sum_{i=1}^k(a_i+n/2-1)}.
\end{aligned}
\]
Now, we focus on the following term:
\[
\inf_{E_\eta}\prod_{i=1}^k\left[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\hat\lambda_j\right]^{a_i+n/2-1}. \tag{S5}
\]
For $\Gamma\in E_\eta$, $\sum_{j=1}^k(\Gamma_{ij}^2 - S_{ij}^2) \ge -\eta^2$ for $i=1,\dots,k$. Therefore, we obtain the lower bound of (S5) as follows:
\[
\begin{aligned}
\text{(S5)} &\ge \inf_{E_\eta}\prod_{i=1}^k\left[\sum_{j=1}^k\Gamma_{ji}^2\left(\hat\lambda_j+\frac hn\right)\right]^{a_i+n/2-1}
= \inf_{E_\eta}\prod_{i=1}^k\left[\sum_{j=1}^kS_{ji}^2\left(\hat\lambda_j+\frac hn\right)+\sum_{j=1}^k(\Gamma_{ij}^2-S_{ij}^2)\left(\hat\lambda_j+\frac hn\right)\right]^{a_i+n/2-1} \\
&\ge \inf_{E_\eta}\prod_{i=1}^k\left[\sum_{j=1}^kS_{ji}^2\left(\hat\lambda_j+\frac hn\right)-\eta^2\sum_{j=1}^k\left(\hat\lambda_j+\frac hn\right)\right]^{a_i+n/2-1}
\ge \prod_{i=1}^k\left[\sum_{j=1}^kS_{ji}^2\left(\hat\lambda_j+\frac hn\right)-k\eta^2\left(\hat\lambda_1+\frac hn\right)\right]^{a_i+n/2-1} \\
&\ge \prod_{i=1}^k\left[\sum_{j=1}^kS_{ji}^2\left(\hat\lambda_j+\frac hn\right)\left(1-k\eta^2\frac{\hat\lambda_1+h/n}{\hat\lambda_k+h/n}\right)\right]^{a_i+n/2-1}
\ge \prod_{i=1}^k\left[\sum_{j=1}^kS_{ji}^2\left(\hat\lambda_j+\frac hn\right)\right]^{a_i+n/2-1}\cdot\left(1-2k\eta^2\frac{\hat\lambda_1}{\hat\lambda_k}\right)^{\sum_{i=1}^k(a_i+n/2-1)}.
\end{aligned}
\]
Consider the $k\times k$ matrix $M$ whose $(j,i)$-th entry is given by $S_{ji}^2$. Since $S$ is an orthonormal matrix, the matrix $M$ becomes a doubly stochastic matrix, meaning that the sum of each row and the sum of each column both equal 1. By the Birkhoff-von Neumann theorem, any doubly stochastic matrix can be expressed as a convex combination of permutation matrices. Therefore, there exist nonnegative weights $w_0, w_1, \dots, w_l \ge 0$ satisfying $\sum_i w_i = 1$ such that
\[
M = w_0 I_k + \sum_{i=1}^l w_i P_i,
\]
where $I_k, P_1, \dots, P_l$ are all $k\times k$ permutation matrices. Since $\|S - I_k\|_F \ge \epsilon$, we obtain the following inequality:
\[
\|S-I_k\|_F^2 = \sum_{j=1}^k\left((S_{jj}-1)^2+\sum_{i\ne j}S_{ji}^2\right) = \sum_{j=1}^k(2-2S_{jj}) \le \sum_{j=1}^k(2-2S_{jj}^2) = \sum_{j=1}^k\left((S_{jj}^2-1)^2+\sum_{i\ne j}(S_{ji}^2)^2\right) = \|M-I_k\|_F^2,
\]
which implies $\|M-I_k\|_F^2 \ge \epsilon^2$. Therefore, the inequality holds:
\[
\|M-I_k\|_F = \left\|\sum_{i=1}^l w_iP_i - (1-w_0)I_k\right\|_F = \left\|\sum_{i=1}^l w_i(P_i-I_k)\right\|_F \le \sum_{i=1}^l w_i\|P_i-I_k\|_F \le \sum_{i=1}^l w_i\left(\|P_i\|_F+\|I_k\|_F\right) = 2\sqrt k\,(1-w_0),
\]
which implies $w_0 < 1 - \dfrac{\epsilon}{2\sqrt k}$.
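Two facts used in this Birkhoff-von Neumann step are easy to verify numerically: the matrix $M$ with entries $S_{ji}^2$ built from an orthogonal $S$ is doubly stochastic, and $\|S-I_k\|_F^2 = \sum_j(2-2S_{jj})$ when the rows of $S$ have unit norm. A quick check (ours, names illustrative):

```python
# Check (ours): for orthogonal S with non-negative diagonal,
#  (1) M = S∘S (entrywise square) is doubly stochastic;
#  (2) ||S - I||_F^2 = sum_j (2 - 2*S_jj).
import numpy as np

rng = np.random.default_rng(3)
k = 5
S, _ = np.linalg.qr(rng.standard_normal((k, k)))
S = S @ np.diag(np.sign(np.diag(S)))     # flip column signs: non-negative diagonal
M = S ** 2
row_sums, col_sums = M.sum(axis=1), M.sum(axis=0)
lhs = np.linalg.norm(S - np.eye(k)) ** 2
rhs = float(np.sum(2 - 2 * np.diag(S)))
```

Both row and column sums of $M$ equal 1 because both the rows and the columns of an orthogonal matrix have unit Euclidean norm.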
Let
\[
g(A) = \sum_{i=1}^k\left(a_i+\frac n2-1\right)\log\left(\sum_{j=1}^kA_{ji}\left(\hat\lambda_j+\frac hn\right)\right)
\]
for a doubly stochastic matrix $A\in\mathbb R^{k\times k}$. Then the following inequality holds:
\[
\begin{aligned}
g(M) &= \sum_{i=1}^k\left(a_i+\frac n2-1\right)\log\left(\sum_{j=1}^kM_{ji}\left(\hat\lambda_j+\frac hn\right)\right) \\
&= \sum_{i=1}^k\left(a_i+\frac n2-1\right)\log\left(\sum_{j=1}^k\left(w_0[I_k]_{ji}+\sum_l w_l[P_l]_{ji}\right)\left(\hat\lambda_j+\frac hn\right)\right) \\
&= \sum_{i=1}^k\left(a_i+\frac n2-1\right)\log\left(w_0\sum_{j=1}^k[I_k]_{ji}\left(\hat\lambda_j+\frac hn\right)+\sum_l w_l\sum_{j=1}^k[P_l]_{ji}\left(\hat\lambda_j+\frac hn\right)\right) \\
&\ge \sum_{i=1}^k\left(a_i+\frac n2-1\right)\left[w_0\log\left(\sum_{j=1}^k[I_k]_{ji}\left(\hat\lambda_j+\frac hn\right)\right)+\sum_l w_l\log\left(\sum_{j=1}^k[P_l]_{ji}\left(\hat\lambda_j+\frac hn\right)\right)\right] \\
&= w_0\,g(I_k)+\sum_l w_l\,g(P_l),
\end{aligned}
\]
where the inequality holds by Jensen's inequality, since $\log$ is concave. Since $a_1<\cdots<a_k$ and $\hat\lambda_1>\cdots>\hat\lambda_k$, by applying Lemma S1.12 we obtain the following inequality:
\[
g(P_l) \ge g(I_k)+\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\left(\log\hat\lambda_l-\log\hat\lambda_{l+1}\right) = g(I_k)+\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\left(\frac{\hat\lambda_l}{\hat\lambda_{l+1}}\right).
\]
Therefore, we obtain the lower bound of $g(M)$ as follows:
\[
\begin{aligned}
g(M) &\ge w_0\,g(I_k)+\sum_l w_l\,g(P_l)
\ge w_0\,g(I_k)+\sum_l w_l\left(g(I_k)+\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\frac{\hat\lambda_l}{\hat\lambda_{l+1}}\right) \\
&= g(I_k)+\sum_l w_l\cdot\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\frac{\hat\lambda_l}{\hat\lambda_{l+1}}
\ge g(I_k)+\frac{\epsilon}{2\sqrt k}\,\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\left(\frac{\hat\lambda_l}{\hat\lambda_{l+1}}\right).
\end{aligned}
\]
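Lemma S1.12, applied just above to bound $g(P_l)$, can be checked by brute force on a small instance (ours; the numbers are arbitrary test values):

```python
# Brute-force check (ours) of Lemma S1.12: for increasing a and decreasing b,
# every non-identity permutation pi satisfies
#   sum_l a_l * b_{pi(l)} >= sum_l a_l * b_l + min_gap(a) * min_gap(b).
from itertools import permutations

a = [1.0, 2.5, 3.0, 4.7]       # a_1 < ... < a_n
b = [9.0, 6.2, 5.0, 1.3]       # b_1 > ... > b_n
n = len(a)
base = sum(x * y for x, y in zip(a, b))
slack = min(a[l + 1] - a[l] for l in range(n - 1)) * \
        min(b[l] - b[l + 1] for l in range(n - 1))
violations = [pi for pi in permutations(range(n))
              if pi != tuple(range(n))
              and sum(a[l] * b[pi[l]] for l in range(n)) < base + slack - 1e-9]
```

An empty `violations` list confirms the lemma on this instance; the bound is tight for an adjacent transposition at the narrowest gaps.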
The lower bound of (S5) is given by
\[
\text{(S5)} \ge \prod_{i=1}^k\left[\hat\lambda_i+\frac hn\right]^{a_i+n/2-1}\exp\left(\frac{\epsilon}{2\sqrt k}\,\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\frac{\hat\lambda_l}{\hat\lambda_{l+1}}\right)\cdot\left(1-2k\eta^2\frac{\hat\lambda_1}{\hat\lambda_k}\right)^{\sum_{i=1}^k(a_i+n/2-1)}.
\]
By combining the upper bound of (S4) and the lower bound of (S5), the following inequality holds:
\[
\begin{aligned}
\frac{\sup_{D_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}
&\le \frac{\prod_{i=1}^k\left[h/n+\hat\lambda_i\right]^{a_i+n/2-1}\cdot\left(1+\eta^2\,\hat\lambda_1/\hat\lambda_k\right)^{\sum_{i=1}^k(a_i+n/2-1)}}{\prod_{i=1}^k\left[\hat\lambda_i+h/n\right]^{a_i+n/2-1}\exp\left(\frac{\epsilon}{2\sqrt k}\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\frac{\hat\lambda_l}{\hat\lambda_{l+1}}\right)\cdot\left(1-2k\eta^2\,\hat\lambda_1/\hat\lambda_k\right)^{\sum_{i=1}^k(a_i+n/2-1)}} \\
&\asymp \frac{\exp\left(n\eta^2\,\hat\lambda_1/\hat\lambda_k\right)}{\exp\left(\frac{\epsilon}{2\sqrt k}\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\left(\hat\lambda_l/\hat\lambda_{l+1}\right)\right)}. \;β– 
\end{aligned}
\]
Lemma S1.14. Assume that conditions A1-A5 hold.
Define the sets as follows:
\[
D_\eta = \left\{ \Gamma\in O(p) : \inf_{Q_2\in O(p-k)} \left\| \begin{pmatrix} I_k & 0 \\ 0 & Q_2\end{pmatrix} - \Gamma \right\|_F < \eta \right\}, \qquad
E_\eta = \left\{ \Gamma\in O(p) : \inf_{Q_2\in O(p-k)} \left\| \begin{pmatrix} S & 0 \\ 0 & Q_2\end{pmatrix} - \Gamma \right\|_F < \eta \right\},
\]
where $S\in O(k)$ has non-negative diagonal elements, and $\|P-S\|_F\ge\epsilon$ for every permutation matrix $P$. If $a_1=\cdots=a_k$, then the following inequality holds:
\[
\frac{\sup_{D_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}} \lesssim \frac{\exp\left(n\eta^2\,\hat\lambda_1/\hat\lambda_k\right)}{\exp\left(n\epsilon^2\min_{i<k}\log\left(\hat\lambda_i/\hat\lambda_{i+1}\right)\right)}.
\]
Proof. By the proof of Lemma S1.13, we obtain the following:
\[
\frac{\sup_{D_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}} = \frac{\sup_{D_\eta}\prod_{i=1}^k\left[h/n+\sum_{j=1}^n\Gamma_{ji}^2\hat\lambda_j\right]^{a_1+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k\left[h/n+\sum_{j=1}^n\Gamma_{ji}^2\hat\lambda_j\right]^{a_1+n/2-1}}. \tag{S6}
\]
At first, the upper bound of the numerator of (S6) is given by
\[
\sup_{D_\eta}\prod_{i=1}^k\left[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\hat\lambda_j\right]^{a_1+n/2-1} \le \prod_{i=1}^k\left[\frac hn+\hat\lambda_i\right]^{a_1+n/2-1}\cdot\left(1+\eta^2\frac{\hat\lambda_1}{\hat\lambda_k}\right)^{k(a_1+n/2-1)}.
\]
Next, the lower bound of the denominator of (S6) is given by
\[
\inf_{E_\eta}\prod_{i=1}^k\left[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\hat\lambda_j\right]^{a_1+n/2-1} \ge \prod_{i=1}^k\left[\sum_{j=1}^kS_{ji}^2\left(\hat\lambda_j+\frac hn\right)\right]^{a_1+n/2-1}\cdot\left(1-2k\eta^2\frac{\hat\lambda_1}{\hat\lambda_k}\right)^{k(a_1+n/2-1)}.
\]
Since $\|P-S\|_F\ge\epsilon$ for every permutation matrix $P$, there exists an entry $S_{uv}$ which satisfies $S_{uv}^2\in\left[\frac{\epsilon^2}{k},\,1-\frac{\epsilon^2}{k}\right]$. Then, the following inequality holds:
\[
\begin{aligned}
\sum_{j=1}^kS_{jv}^2\left(\hat\lambda_j+\frac hn\right) &= \sum_{j<u}S_{jv}^2\left(\hat\lambda_j+\frac hn\right)+S_{uv}^2\left(\hat\lambda_u+\frac hn\right)+\sum_{j>u}S_{jv}^2\left(\hat\lambda_j+\frac hn\right) \\
&\ge \left(\sum_{j<u}S_{jv}^2\right)\prod_{j<u}\left(\hat\lambda_j+\frac hn\right)^{S_{jv}^2/\sum_{j<u}S_{jv}^2}+S_{uv}^2\left(\hat\lambda_u+\frac hn\right)+\left(\sum_{j>u}S_{jv}^2\right)\prod_{j>u}\left(\hat\lambda_j+\frac hn\right)^{S_{jv}^2/\sum_{j>u}S_{jv}^2},
\end{aligned} \tag{S7}
\]
where the last inequality holds by the arithmetic-geometric mean inequality.
Since $S_{uv}^2\in\left[\frac{\epsilon^2}{k},\,1-\frac{\epsilon^2}{k}\right]$, at least one of $\sum_{j<u}S_{jv}^2$ and $\sum_{j>u}S_{jv}^2$ is larger than $\frac{\epsilon^2}{2k}$. Without loss of generality, we assume that $\sum_{j<u}S_{jv}^2$ is larger than $\frac{\epsilon^2}{2k}$. Then, the lower bound of (S7) is given by
\[
\begin{aligned}
\text{(S7)} &\ge \left(\sum_{j<u}S_{jv}^2\right)\prod_{j<u}\left(\hat\lambda_j+\frac hn\right)^{S_{jv}^2/\sum_{j<u}S_{jv}^2}+\left(\sum_{j\ge u}S_{jv}^2\right)\prod_{j\ge u}\left(\hat\lambda_j+\frac hn\right)^{S_{jv}^2/\sum_{j\ge u}S_{jv}^2} \\
&= y_1^{t_1}y_2^{1-t_1}\cdot\left[t_1\left(\frac{y_1}{y_2}\right)^{1-t_1}+(1-t_1)\left(\frac{y_2}{y_1}\right)^{t_1}\right]
= \prod_{j=1}^k\left(\hat\lambda_j+\frac hn\right)^{S_{jv}^2}\cdot\left[t_1\left(\frac{y_1}{y_2}\right)^{1-t_1}+(1-t_1)\left(\frac{y_2}{y_1}\right)^{t_1}\right],
\end{aligned} \tag{S8}
\]
where $t_1=\sum_{j<u}S_{jv}^2$, $y_1=\prod_{j<u}\left(\hat\lambda_j+\frac hn\right)^{S_{jv}^2/\sum_{j<u}S_{jv}^2}$, and $y_2=\prod_{j\ge u}\left(\hat\lambda_j+\frac hn\right)^{S_{jv}^2/\sum_{j\ge u}S_{jv}^2}$.

Let $f(t) = ta^{t-1}+(1-t)a^t$ with $a>1$. Then
\[
f'(t) = a^{t-1}\left((\log a - a\log a)\,t + a\log a - a + 1\right)
\]
holds. For $t\in(s,1-s)$ with $s>0$, the inequality $f(t)\ge f(s)\wedge f(1-s)$ holds. We apply a Taylor expansion to $f(s)$ and $f(1-s)$, and obtain the following equalities:
\[
\begin{aligned}
f(s) &= a^s\left(sa^{-1}+1-s\right) = \left(1+(\log a)s+\tfrac12(\log a)^2s^2+O(s^3)\right)\left(sa^{-1}+1-s\right) = 1+s\left(a^{-1}+\log a-1\right)+O(s^2), \\
f(1-s) &= a^{-s}\left(1-s+as\right) = \left(1-(\log a)s+\tfrac12(\log a)^2s^2+O(s^3)\right)\left(1-s+as\right) = 1+s\left(a-1-\log a\right)+O(s^2),
\end{aligned}
\]
for sufficiently small $s$. It implies $f(s)\wedge f(1-s) = f(s)$. Therefore, the lower bound of (S8) is given by
\[
\begin{aligned}
\text{(S8)} &\ge \prod_{j=1}^k\left(\hat\lambda_j+\frac hn\right)^{S_{jv}^2}\cdot\left[1+\frac{\epsilon^2}{2k}\left(\left(\frac{y_1}{y_2}\right)^{-1}+\log\left(\frac{y_1}{y_2}\right)-1\right)+O\left(\left(\frac{\epsilon^2}{2k}\right)^2\right)\right] \\
&\ge \prod_{j=1}^k\left(\hat\lambda_j+\frac hn\right)^{S_{jv}^2}\cdot\left[1+c\,\frac{\epsilon^2}{2k}\log\left(\frac{y_1}{y_2}\right)+O(\epsilon^4)\right]
\ge \prod_{j=1}^k\left(\hat\lambda_j+\frac hn\right)^{S_{jv}^2}\cdot\left[1+c\,\frac{\epsilon^2}{4k}\min_{i<k}\log\left(\frac{\hat\lambda_i}{\hat\lambda_{i+1}}\right)\right],
\end{aligned}
\]
for some positive constant $c$.
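The Taylor expansions of $f$ used in this step can be verified numerically (our check, with an arbitrary value of $a$):

```python
# Check (ours) of the expansions used above: f(t) = t*a**(t-1) + (1-t)*a**t
# satisfies  f(s)   = 1 + s*(1/a + log(a) - 1) + O(s^2)  and
#            f(1-s) = 1 + s*(a - 1 - log(a))   + O(s^2),
# with f(s) <= f(1-s) for a > 1, so f(s) ∧ f(1-s) = f(s).
import math

a = 3.0
def f(t):
    return t * a ** (t - 1) + (1 - t) * a ** t

s = 1e-4
lin_s = 1 + s * (1 / a + math.log(a) - 1)     # linearization of f(s)
lin_1ms = 1 + s * (a - 1 - math.log(a))       # linearization of f(1-s)
err_s = abs(f(s) - lin_s)                     # should be O(s^2)
err_1ms = abs(f(1 - s) - lin_1ms)
```

Both linearization errors are of order $s^2 \approx 10^{-8}$ here, and the comparison of the two slopes ($a^{-1}+\log a - 1 \le a - 1 - \log a$ for $a>1$) gives $f(s)\le f(1-s)$.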
Finally, the upper bound of ( S6) is given by (S6)fk/producttext i=1/bracketleftbiggh n+Λ†ΒΌi/bracketrightbigga1+n/2βˆ’1 Β·/parenleftBig 1+ΒΈ2Λ†ΒΌ1 Λ†ΒΌk/parenrightBigk(a1+n/2βˆ’1) k/producttext i=1/bracketleftbiggk/summationtext j=1S2 ji/parenleftBig Λ†ΒΌj+h n/parenrightBig/bracketrightbigga1+n/2βˆ’1 Β·/parenleftBig 1βˆ’2kΒΈ2Λ†ΒΌ1 Λ†ΒΌk/parenrightBigk(a1+n/2βˆ’1) fk/producttext i=1/bracketleftbiggh n+Λ†ΒΌi/bracketrightbigga1+n/2βˆ’1 Β·/parenleftBig 1+ΒΈ2Λ†ΒΌ1 Λ†ΒΌk/parenrightBigk(a1+n/2βˆ’1) k/producttext i=1/bracketleftbiggk/producttext j=1/parenleftBig Λ†ΒΌj+h n/parenrightBigS2 ji/bracketrightbigga1+n/2βˆ’1 Β·/bracketleftBig 1+cΟ΅2 4kmin i<klog/parenleftBiggΛ†ΒΌi Λ†ΒΌi+1/parenrightBigg/bracketrightBiga1+n/2βˆ’1 Β·/parenleftBig 1βˆ’2kΒΈ2Λ†ΒΌ1 Λ†ΒΌk/parenrightBigk(a1+n/2βˆ’1) fexp/parenleftBigg nΒΈ2Λ†ΒΌ1 Λ†ΒΌk/parenrightBigg exp/parenleftBigg nΟ΅2min i<klog/parenleftBiggΛ†ΒΌi Λ†ΒΌi+1/parenrightBigg/parenrightBigg. β–  23 Theorem S1.15. Under the assumptions of Lemma 3.1, fori= 1,...,k, we have E/bracketleftBigΒΌiβˆ’ΒΌ0,i ΒΌ0,i/vextendsingle/vextendsingleXn/bracketrightBig =n n+2aiβˆ’41 ΒΌ0,i/bracketleftBig (1+Β΄i)ΒΌ0,i+Β―dp n+Β³i/radicalbiggp n+h n/bracketrightBig βˆ’1 +O/parenleftBig Ο΅2ΒΌ0,1 ΒΌ0,i+ΒΌ0,1 ΒΌ0,iexp/parenleftBig βˆ’Β» 2Ο΅/parenrightBig +ΒΌ0,1 ΒΌ0,i/parenleftbignΒΌ0,k+p p/parenrightbigβˆ’Ο΅2an/parenrightBig , whereΒ³i∈[βˆ’C,C]for some positive constant C,Β΄i≲nβˆ’1/2+ΒΆfor all small ΒΆ >0,Β―d=1 pβˆ’kp/summationdisplay j=k+1ΒΌ0,j, andΒ»= min l<k(al+1βˆ’al)Β·min l<klog/parenleftbiggΒΌ0,l ΒΌ0,l+1/parenrightbigg . Lemma S1.16. 
Under the assumptions of Lemma 3.1, for $i=1,\dots,k$, we have
\[
\frac{1}{\lambda_{0,i}}\bigl|\mathrm{E}[\lambda_i|X_n]-\mathrm{E}[\lambda_{(i)}|X_n]\bigr| = \frac{\lambda_{0,1}}{\lambda_{0,i}}\cdot\Bigl(O\Bigl(\frac{1}{n}\Bigr)+O\Bigl(\exp\Bigl(-\frac{\eta}{2}\epsilon\Bigr)\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr)\Bigr),
\]
where $\eta=\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\bigl(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\bigr)$.

https://arxiv.org/abs/2505.20668v1

Proof. Let $D_\epsilon=\bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F<\epsilon\bigr\}$. By Lemma 3.1, we have
\[
\int_{D_\epsilon}\int \pi(\Lambda,\Gamma|X_n)\,(d\Lambda)(d\Gamma) = 1+O\Bigl(\exp\Bigl(-\frac{\eta}{2}\epsilon\Bigr)\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr),
\]
where $\eta=\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\bigl(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\bigr)$. On $D_\epsilon$, since $(\Gamma_{ii}-1)^2+(1-\Gamma_{ii}^2)<\epsilon^2$, it follows that $\Gamma_{ii}^2>1-\epsilon^2$. Given $\frac{c_i}{n}=\frac{h}{n}+\sum_{j=1}^{n}\Gamma_{ji}^2\hat\lambda_j$, we obtain the bounds
\[
\frac{c_i}{n}\in\Bigl[(1-\epsilon^2)\hat\lambda_i+\frac{h}{n},\ (1-\epsilon^2)\hat\lambda_i+\epsilon^2\hat\lambda_1+\frac{h}{n}\Bigr]\quad\text{for }i=1,\dots,n,\qquad
\frac{c_i}{n}\in\Bigl[\frac{h}{n},\ \epsilon^2\hat\lambda_1+\frac{h}{n}\Bigr]\quad\text{for }i=n+1,\dots,p.
\]
Under $\Gamma\in D_\epsilon$, each $\lambda_i$ follows an inverse gamma distribution,
\[
\lambda_i \mid \Gamma, X_n \overset{\mathrm{ind}}{\sim} \mathrm{InvGam}\Bigl(a_i+\frac{n}{2}-1,\ \frac{c_i}{2}\Bigr).
\]
By Chebyshev's inequality,
\[
\pi\bigl(|\lambda_i-\mathrm{E}[\lambda_i|\Gamma,X_n]|>\alpha_i \,\big|\, \Gamma,X_n\bigr) \le \frac{\mathrm{Var}(\lambda_i)}{\alpha_i^2} = \frac{2}{\alpha_i^2}\Bigl(\frac{c_i}{n}\Bigr)^2\cdot\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}.
\]
Let $p_i$ be an upper bound for each of these probabilities:
\[
p_i=\begin{cases}\dfrac{2}{\alpha_i^2}\Bigl((1-\epsilon^2)\hat\lambda_i+\epsilon^2\hat\lambda_1+\dfrac{h}{n}\Bigr)^2\cdot\dfrac{n^2}{(2a_i+n-4)^2(2a_i+n-6)} & \text{for }i=1,\dots,n,\\[2ex] \dfrac{2}{\alpha_i^2}\Bigl(\epsilon^2\hat\lambda_1+\dfrac{h}{n}\Bigr)^2\cdot\dfrac{n^2}{(2a_i+n-4)^2(2a_i+n-6)} & \text{for }i=n+1,\dots,p.\end{cases}
\]
Choose $\alpha_i=\frac{1}{4}\delta_0\hat\lambda_i$ for $i=1,\dots,n$, and $\alpha_i=\frac{1}{4}\delta_0\hat\lambda_n$ for $i=n+1,\dots,p$. Then with probability at least $\prod_{i=1}^{p}(1-p_i)$, the inequalities
\[
\lambda_i\in\bigl[\mathrm{E}[\lambda_i|\Gamma,X_n]-\alpha_i,\ \mathrm{E}[\lambda_i|\Gamma,X_n]+\alpha_i\bigr]
\]
hold for $i=1,\dots,p$. This implies $\lambda_1>\cdots>\lambda_k$ and $\lambda_k>\lambda_i$ for $i=k+1,\dots,p$, and therefore $\lambda_{(i)}=\lambda_i$ for all $i=1,\dots,k$. We now bound the difference of expectations:
\[
\bigl|\mathrm{E}[\lambda_{(i)}-\lambda_i|X_n]\bigr| = \bigl|\mathrm{E}[(\lambda_{(i)}-\lambda_i)\cdot I(\lambda_{(i)}\neq\lambda_i)|X_n]\bigr| \le 2\,\mathrm{E}\Bigl[\sum_{j=1}^{p}\lambda_j\cdot I(\lambda_{(i)}\neq\lambda_i)\,\Big|\,X_n\Bigr]
\]
\[
= 2\,\mathrm{E}\Bigl[\sum_{j=1}^{p}\mathrm{E}[\lambda_j\cdot I(\lambda_{(i)}\neq\lambda_i)|\Gamma,X_n]\cdot\bigl(I(\Gamma\in D_\epsilon)+I(\Gamma\notin D_\epsilon)\bigr)\,\Big|\,X_n\Bigr]
\]
\[
\le 2\,\mathrm{E}\Bigl[\sum_{j=1}^{p}\mathrm{E}[\lambda_j\cdot I(\lambda_{(i)}\neq\lambda_i)|\Gamma,X_n]\cdot I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Bigr] + 2\,\mathrm{E}\Bigl[\sum_{j=1}^{p}\mathrm{E}[\lambda_j|\Gamma,X_n]\cdot I(\Gamma\notin D_\epsilon)\,\Big|\,X_n\Bigr]. \tag{S9}
\]
There exists $\beta_j(\Gamma)>0$ such that $\pi(\lambda_{(i)}\neq\lambda_i|\Gamma,X_n)=\pi(\lambda_j\ge\beta_j(\Gamma)|\Gamma,X_n)$ for each $j=1,\dots,p$. Therefore, the following inequality holds:
\[
\mathrm{E}\Bigl[\sum_{j=1}^{p}\mathrm{E}[\lambda_j\cdot I(\lambda_{(i)}\neq\lambda_i)|\Gamma,X_n]\cdot I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Bigr] \le \mathrm{E}\Bigl[\sum_{j=1}^{p}\mathrm{E}[\lambda_j\cdot I(\lambda_j\ge\beta_j(\Gamma))|\Gamma,X_n]\cdot I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Bigr].
\]
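The conditional variance that feeds the Chebyshev bound is the standard inverse-gamma variance. The following is a minimal Monte Carlo sanity check of those moment formulas, using only the Python standard library; the values of `a`, `n`, `c` are illustrative toys, not taken from the paper.

```python
import random

# Sanity check of the inverse-gamma moments used in the Chebyshev bound:
# if lambda | Gamma, X_n ~ InvGamma(alpha, beta) with alpha = a + n/2 - 1
# and beta = c/2, then
#   E[lambda]   = c / (n + 2a - 4),
#   Var(lambda) = 2 c^2 / ((2a + n - 4)^2 (2a + n - 6)).
random.seed(0)
a, n, c = 3.0, 200.0, 150.0        # toy values (assumption, not from the paper)
alpha, beta = a + n / 2 - 1, c / 2

mean_formula = c / (n + 2 * a - 4)
var_formula = 2 * c ** 2 / ((2 * a + n - 4) ** 2 * (2 * a + n - 6))

# If G ~ Gamma(alpha, scale=1), then beta / G ~ InvGamma(alpha, beta).
draws = [beta / random.gammavariate(alpha, 1.0) for _ in range(200_000)]
m = sum(draws) / len(draws)
v = sum((x - m) ** 2 for x in draws) / len(draws)

assert abs(m - mean_formula) / mean_formula < 0.02
assert abs(v - var_formula) / var_formula < 0.10
```

Chebyshev's inequality then bounds $\pi(|\lambda_i-\mathrm{E}[\lambda_i]|>\alpha_i)$ by `var_formula` divided by $\alpha_i^2$, which is exactly the shape of the $p_i$ above.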
By applying HΓΆlder's inequality, we obtain the following bound on the first term on the right-hand side of (S9):
\[
\mathrm{E}\Bigl[\sum_{j=1}^{p}\mathrm{E}[\lambda_j\cdot I(\lambda_j\ge\beta_j(\Gamma))|\Gamma,X_n]\cdot I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Bigr]
\le \mathrm{E}\Bigl[\sum_{j=1}^{p}\sqrt{\mathrm{E}[\lambda_j^2|\Gamma]}\cdot\sqrt{\mathrm{E}[I(\lambda_j>\beta_j(\Gamma))|\Gamma,X_n]}\ I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Bigr]
\]
\[
\le \mathrm{E}\Bigl[\sum_{j=1}^{p}\mathrm{E}[\lambda_j|\Gamma]\cdot\sqrt{\pi(\lambda_j>\beta_j(\Gamma)|\Gamma,X_n)}\ I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Bigr]
\le \sup_{\Gamma\in D_\epsilon}\Bigl(\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}\Bigr)\cdot\sqrt{1-\prod_{i=1}^{p}(1-p_i)}
\le \sup_{\Gamma\in D_\epsilon}\Bigl(\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}\Bigr)\cdot\sqrt{\sum_{i=1}^{p}p_i}.
\]
We obtain the following bound on the second term on the right-hand side of (S9):
\[
\mathrm{E}\Bigl[\sum_{j=1}^{p}\mathrm{E}[\lambda_j|\Gamma,X_n]\cdot I(\Gamma\notin D_\epsilon)\,\Big|\,X_n\Bigr] = \mathrm{E}\Bigl[\Bigl(\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}\Bigr)\cdot I(\Gamma\notin D_\epsilon)\,\Big|\,X_n\Bigr]
\le \sup_{\Gamma\notin D_\epsilon}\Bigl(\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}\Bigr)\cdot\Bigl(O\Bigl(\exp\Bigl(-\frac{\eta}{2}\epsilon\Bigr)\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr)\Bigr).
\]
Therefore, we obtain
\[
\bigl|\mathrm{E}[\lambda_i|X_n]-\mathrm{E}[\lambda_{(i)}|X_n]\bigr| \le \sup_{\Gamma\in D_\epsilon}\Bigl(\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}\Bigr)\cdot\sum_{i=1}^{p}p_i + \sup_{\Gamma\notin D_\epsilon}\Bigl(\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}\Bigr)\cdot\Bigl(O\Bigl(\exp\Bigl(-\frac{\eta}{2}\epsilon\Bigr)\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr)\Bigr)
\]
\[
\le \sup_{\Gamma}\Bigl(\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}\Bigr)\cdot\sum_{i=1}^{p}p_i + \sup_{\Gamma}\Bigl(\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}\Bigr)\cdot\Bigl(O\Bigl(\exp\Bigl(-\frac{\eta}{2}\epsilon\Bigr)\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr)\Bigr).
\]
Since $a_1\le\cdots\le a_p$, the term $\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}$ is maximized when $\Gamma=I_p$, and hence
\[
\sup_{\Gamma}\Bigl(\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}\Bigr) \le \sum_{j=1}^{n}\frac{n\hat\lambda_j+h}{n+2a_j-4}+\sum_{j=n+1}^{p}\frac{h}{n+2a_j-4} = O\Bigl(\sum_{j=1}^{k}\lambda_{0,j}+\frac{np}{a_n}+\frac{hp}{a_p}\Bigr) = O\Bigl(\sum_{j=1}^{k}\lambda_{0,j}\Bigr) = O(\lambda_{0,1}),
\]
where we used the condition $a_n\gg n^{3/2}p$. Next, we evaluate the upper bound of $\sum_{i=1}^{p}p_i$:
\[
\sum_{i=1}^{p}p_i = \sum_{i=1}^{n}\frac{2}{\alpha_i^2}\Bigl((1-\epsilon^2)\hat\lambda_i+\epsilon^2\hat\lambda_1+\frac{h}{n}\Bigr)^2\cdot\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)} + \sum_{i=n+1}^{p}\frac{2}{\alpha_i^2}\Bigl(\epsilon^2\hat\lambda_1+\frac{h}{n}\Bigr)^2\cdot\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}
\]
\[
\asymp \sum_{i=1}^{k}\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)} + \sum_{i=k+1}^{n}\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)} + \frac{h^2}{n^2\hat\lambda_n^2}\sum_{i=n+1}^{p}\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}
\]
\[
= O\Bigl(\frac{k}{n}\Bigr)+O\Bigl(\frac{n^2(n-k)}{a_n^3}\Bigr)+O\Bigl(\frac{h^2}{p^2}\cdot\frac{n^2(p-n)}{a_p^3}\Bigr) = O\Bigl(\frac{1}{n}\Bigr),
\]
again using $a_n\gg n^{3/2}p$. Combining these, we conclude
\[
\frac{1}{\lambda_{0,i}}\bigl|\mathrm{E}[\lambda_i|X_n]-\mathrm{E}[\lambda_{(i)}|X_n]\bigr| \le \frac{1}{\lambda_{0,i}}\cdot\sup_{\Gamma}\Bigl(\sum_{j=1}^{p}\frac{c_j}{n+2a_j-4}\Bigr)\cdot\Bigl(\sum_{i=1}^{p}p_i+O\Bigl(\exp\Bigl(-\frac{\eta}{2}\epsilon\Bigr)\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr)\Bigr)
\]
\[
= \frac{\lambda_{0,1}}{\lambda_{0,i}}\cdot\Bigl(O\Bigl(\frac{1}{n}\Bigr)+O\Bigl(\exp\Bigl(-\frac{\eta}{2}\epsilon\Bigr)\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr)\Bigr). \qquad\blacksquare
\]

Theorem S1.17.
Under the assumptions of Lemma 3.2, for $i=1,\dots,k$, we have
\[
\mathrm{E}\Bigl[\frac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X_n\Bigr] = \frac{n}{n+2a_i-4}\,\frac{1}{\lambda_{0,i}}\Bigl[(1+\beta_i)\lambda_{0,i}+\bar d\,\frac{p}{n}+\alpha_i\sqrt{\frac{p}{n}}+\frac{h}{n}\Bigr]-1 + O\Bigl(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Bigl(-\frac{\tau}{2}n\epsilon^2\Bigr)\cdot\Bigl(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr),
\]
where $\alpha_i\in[-C,C]$ for some positive constant $C$, $\beta_i\lesssim n^{-1/2+\delta}$ for all small $\delta>0$, $\bar d=\frac{1}{p-k}\sum_{j=k+1}^{p}\lambda_{0,j}$, and $\tau=\min_{l<k}\log\bigl(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\bigr)$.

Lemma S1.18. Under the assumptions of Lemma 3.2, for $i=1,\dots,k$, we have
\[
\frac{1}{\lambda_{0,i}}\bigl|\mathrm{E}[\lambda_i|X_n]-\mathrm{E}[\lambda_{(i)}|X_n]\bigr| = \frac{\lambda_{0,1}}{\lambda_{0,i}}\cdot\Bigl(O\Bigl(\frac{1}{n}\Bigr)+O\Bigl(\exp\Bigl(-\frac{\tau}{2}n\epsilon^2\Bigr)\cdot\Bigl(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr)\Bigr),
\]
where $\tau=\min_{l<k}\log\bigl(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\bigr)$.

Proof. The subset $D_{\epsilon,l}$ of the orthogonal group $O(p)$ is defined as
\[
D_{\epsilon,l}=\Bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(P_l,Q_2)-\Gamma\|_F<\epsilon\Bigr\},
\]
where $P_1,\dots,P_L$ denote all $k\times k$ permutation matrices. The proof proceeds by applying the same argument as in Lemma S1.16, with $D_\epsilon$ replaced by the union $\bigcup_{l=1}^{L}D_{\epsilon,l}$. By Lemma 3.1, we have
\[
\int_{\bigcup_{l=1}^{L}D_{\epsilon,l}}\int \pi(\Lambda,\Gamma|X_n)\,(d\Lambda)(d\Gamma) = 1+O\Bigl(\exp\Bigl(-\frac{\tau}{2}n\epsilon^2\Bigr)\cdot\Bigl(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr),
\]
where $\tau=\min_{l<k}\log\bigl(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\bigr)$. Following the same bounding steps as in Lemma S1.16, we obtain
\[
\frac{1}{\lambda_{0,i}}\bigl|\mathrm{E}[\lambda_i|X_n]-\mathrm{E}[\lambda_{(i)}|X_n]\bigr| = \frac{\lambda_{0,1}}{\lambda_{0,i}}\cdot\Bigl(O\Bigl(\frac{1}{n}\Bigr)+O\Bigl(\exp\Bigl(-\frac{\tau}{2}n\epsilon^2\Bigr)\cdot\Bigl(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}\Bigr)\Bigr). \qquad\blacksquare
\]

Proof of Lemma 3.1

Proof. We consider the following inequality:
\[
\pi\Bigl(\Bigl\{\Gamma\in O(p): \inf_{Q_1\in O(k),\,Q_2\in O(p-k)}\|\operatorname{diag}(Q_1,Q_2)-\Gamma\|_F<\epsilon\Bigr\}\,\Big|\,X_n\Bigr) \tag{S10}
\]
\[
\le \pi\Bigl(\Bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F<\epsilon_4\Bigr\}\,\Big|\,X_n\Bigr) \tag{S11}
\]
\[
+ \pi\Bigl(\Bigl\{\Gamma\in O(p): \inf_{Q_1\in O(k),\,Q_2\in O(p-k)}\|\operatorname{diag}(Q_1,Q_2)-\Gamma\|_F<\epsilon,\ \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F\ge\epsilon_4\Bigr\}\,\Big|\,X_n\Bigr), \tag{S12}
\]
for some positive $\epsilon_4$. By Lemma S1.11, we have the following lower bound on the posterior probability:
\[
\text{(S10)} \ge 1-\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2 a_n}.
\]
Thus, in order to establish the convergence of (S11) to 1, it suffices to show that (S12) vanishes as $n\to\infty$. Now, we focus on the term
\[
\pi\Bigl(\Bigl\{\Gamma\in O(p): \inf_{Q_1\in O(k),\,Q_2\in O(p-k)}\|\operatorname{diag}(Q_1,Q_2)-\Gamma\|_F<\epsilon,\ \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F\ge\epsilon_4\Bigr\}\,\Big|\,X_n\Bigr).
\]
Suppose that $\{R_1,\dots,R_{M(O(k),\|\cdot\|_F,\epsilon_5)}\}$ forms a maximal $\epsilon_5$-packing of $O(k)$, where $R_{M(O(k),\|\cdot\|_F,\epsilon_5)}=I_k$, which also serves as an $\epsilon_5$-covering. For all $Q_1\in O(k)$, there exists some $R_i$ such that $\|R_i-Q_1\|_F<\epsilon_5$.
By the triangle inequality, we obtain
\[
\|\operatorname{diag}(R_i,Q_2)-\Gamma\|_F \le \|\operatorname{diag}(R_i,Q_2)-\operatorname{diag}(Q_1,Q_2)\|_F + \|\operatorname{diag}(Q_1,Q_2)-\Gamma\|_F. \tag{S13}
\]
Taking the infimum over $Q_2\in O(p-k)$ on both sides of (S13), the following inequality holds:
\[
\inf_{Q_2\in O(p-k)}\|\operatorname{diag}(R_i,Q_2)-\Gamma\|_F \le \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(R_i,Q_2)-\operatorname{diag}(Q_1,Q_2)\|_F + \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(Q_1,Q_2)-\Gamma\|_F
\]
\[
\le \epsilon_5 + \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(Q_1,Q_2)-\Gamma\|_F. \tag{S14}
\]
Let $\{S_1,\dots,S_m\}$ be a subset of $\{R_1,\dots,R_{M(O(k),\|\cdot\|_F,\epsilon_5)}\}$ which satisfies the following condition:
\[
\Bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(S_i,Q_2)-\Gamma\|_F<\epsilon+\epsilon_5,\ \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F\ge\epsilon_4\Bigr\}\neq\emptyset. \tag{S15}
\]
Moreover, it follows that $I_k\notin\{S_1,\dots,S_m\}$, where $\epsilon_4>\epsilon+\epsilon_5$.
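The mechanism behind (S13)–(S14) is that two block-diagonal matrices sharing the same lower block differ only in the upper block, so their Frobenius distance is exactly $\|R_i-Q_1\|_F$. The following is a small numerical sketch of this fact and of the resulting triangle inequality, using toy random matrices (the matrices here are not constrained to be orthogonal; the identity holds regardless).

```python
import math
import random

# Block-diagonal matrices sharing the lower block satisfy
#   || diag(R, Q2) - diag(Q1, Q2) ||_F = || R - Q1 ||_F,
# so the triangle inequality gives, as in (S13),
#   || diag(R, Q2) - G ||_F <= || R - Q1 ||_F + || diag(Q1, Q2) - G ||_F.
random.seed(1)
k, pk = 2, 3          # upper block is k x k, lower block is (p-k) x (p-k)
p = k + pk

def frob(A, B):
    return math.sqrt(sum((A[i][j] - B[i][j]) ** 2
                         for i in range(len(A)) for j in range(len(A))))

def blockdiag(top, bottom):
    M = [[0.0] * p for _ in range(p)]
    for i in range(k):
        for j in range(k):
            M[i][j] = top[i][j]
    for i in range(pk):
        for j in range(pk):
            M[k + i][k + j] = bottom[i][j]
    return M

def rand_mat(d):
    return [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(d)]

R, Q1, Q2, G = rand_mat(k), rand_mat(k), rand_mat(pk), rand_mat(p)

assert abs(frob(blockdiag(R, Q2), blockdiag(Q1, Q2)) - frob(R, Q1)) < 1e-9
lhs = frob(blockdiag(R, Q2), G)
rhs = frob(R, Q1) + frob(blockdiag(Q1, Q2), G)
assert lhs <= rhs + 1e-9
```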
Then, the following inequality holds:
\[
\pi\Bigl(\Bigl\{\Gamma\in O(p): \inf_{Q_1\in O(k),\,Q_2\in O(p-k)}\|\operatorname{diag}(Q_1,Q_2)-\Gamma\|_F<\epsilon,\ \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F\ge\epsilon_4\Bigr\}\,\Big|\,X_n\Bigr)
\]
\[
\le \pi\Bigl(\bigcup_{i=1}^{M(O(k),\|\cdot\|_F,\epsilon_5)}\Bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(R_i,Q_2)-\Gamma\|_F<\epsilon+\epsilon_5,\ \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F\ge\epsilon_4\Bigr\}\,\Big|\,X_n\Bigr)
\]
\[
\le \pi\Bigl(\bigcup_{i=1}^{m}\Bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(S_i,Q_2)-\Gamma\|_F<\epsilon+\epsilon_5\Bigr\}\,\Big|\,X_n\Bigr),
\]
for some positive $\epsilon_5$ which satisfies $\epsilon_4>\epsilon+\epsilon_5$. The first inequality follows from (S14), and the second inequality follows from (S15). By the triangle inequality, the following inequality holds:
\[
\|\operatorname{diag}(S_i,Q_2)-\operatorname{diag}(I_k,Q_2)\|_F + \|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F \ge \|\operatorname{diag}(S_i,Q_2)-\Gamma\|_F,
\]
for $i=1,\dots,m$.
Taking the infimum over $Q_2\in O(p-k)$ on both sides yields
\[
\|S_i-I_k\|_F + \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F \ge \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(S_i,Q_2)-\Gamma\|_F,
\]
for $i=1,\dots,m$. By the definition of $S_i$, there exists $\Gamma_i\in O(p)$ satisfying condition (S15). Therefore, we obtain the inequality
\[
\|S_i-I_k\|_F > \epsilon_4-(\epsilon+\epsilon_5), \quad\text{for }i=1,\dots,m.
\]
Next, we define the sets
\[
D_\zeta=\Bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F<\zeta\Bigr\},\qquad
E_{\zeta,l}=\Bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(S_l,Q_2)-\Gamma\|_F<\zeta\Bigr\},
\]
where $S_l\in O(k)$ and $\|S_l-I_k\|_F>\epsilon_4-(\epsilon+\epsilon_5)$.
By considering the variable transformation $\Gamma^*=\Gamma\operatorname{diag}(S_l^{T},I_{p-k})$, it follows that $(d\Gamma)=(d\Gamma^*)$, since $(d\Gamma)$ is an invariant measure. Therefore, we obtain the following equality:
\[
\int_{E_{\zeta,l}}\prod_{i=k+1}^{p}\Bigl(\frac{c_i}{n}\Bigr)^{-a_i-n/2+1}(d\Gamma) = \int_{E_{\zeta,l}}\prod_{i=k+1}^{p}\Bigl(\frac{h}{n}+\sum_{j=1}^{n}\Gamma_{ji}^2\hat\lambda_j\Bigr)^{-a_i-n/2+1}(d\Gamma)
= \int_{D_\zeta}\prod_{i=k+1}^{p}\Bigl(\frac{h}{n}+\sum_{j=1}^{n}(\Gamma^*_{ji})^2\hat\lambda_j\Bigr)^{-a_i-n/2+1}(d\Gamma^*) = \int_{D_\zeta}\prod_{i=k+1}^{p}\Bigl(\frac{c_i}{n}\Bigr)^{-a_i-n/2+1}(d\Gamma).
\]
The last equality holds due to
\[
\|\operatorname{diag}(S_l,Q_2)-\Gamma\|_F=\|\operatorname{diag}(I_k,Q_2)-\Gamma^*\|_F,
\]
and $\Gamma_{ji}^2=(\Gamma^*_{ji})^2$ for $i>k$. Therefore, we obtain the following inequality:
\[
\frac{\int_{E_{\epsilon+\epsilon_5,l}}\prod_{i=1}^{p}c_i^{-a_i-n/2+1}(d\Gamma)}{\int_{D_{\epsilon+\epsilon_5}}\prod_{i=1}^{p}c_i^{-a_i-n/2+1}(d\Gamma)}
\le \frac{\sup_{E_{\epsilon+\epsilon_5,l}}\prod_{i=1}^{k}c_i^{-a_i-n/2+1}\cdot\int_{E_{\epsilon+\epsilon_5,l}}\prod_{i=k+1}^{p}c_i^{-a_i-n/2+1}(d\Gamma)}{\inf_{D_{\epsilon+\epsilon_5}}\prod_{i=1}^{k}c_i^{-a_i-n/2+1}\cdot\int_{D_{\epsilon+\epsilon_5}}\prod_{i=k+1}^{p}c_i^{-a_i-n/2+1}(d\Gamma)}
= \frac{\sup_{D_{\epsilon+\epsilon_5}}\prod_{i=1}^{k}\bigl(\frac{c_i}{n}\bigr)^{a_i+n/2-1}}{\inf_{E_{\epsilon+\epsilon_5,l}}\prod_{i=1}^{k}\bigl(\frac{c_i}{n}\bigr)^{a_i+n/2-1}}. \tag{S16}
\]
By applying Lemma S1.13, the upper bound of (S16) is given by
\[
\text{(S16)} \preccurlyeq \exp\Bigl(\frac{n(\epsilon+\epsilon_5)^2\hat\lambda_1}{\hat\lambda_k}\Bigr)\exp\Bigl(-\frac{\epsilon_4-(\epsilon+\epsilon_5)}{2\sqrt{k}}\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\Bigl(\frac{\hat\lambda_l}{\hat\lambda_{l+1}}\Bigr)\Bigr)
\asymp \exp\Bigl(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)\exp\Bigl(-\epsilon_4\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\Bigl(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\Bigr)\Bigr),
\]
where $\epsilon_4\gg\epsilon+\epsilon_5$ and $\epsilon\asymp\epsilon_5$. Let $\eta=\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\bigl(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\bigr)$.
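The change of variables above leaves every column of index $i>k$ untouched, which is why the $c_i$ for $i>k$ are unchanged. A toy numerical check of this column-preservation property (assumed small dimensions; the rotation `S` stands in for $S_l$):

```python
import math
import random

# For G* = G . diag(S^T, I_{p-k}) with S in O(k), the columns of G* with
# index i > k coincide with those of G, so sum_j (G*_{ji})^2 is unchanged
# and hence so is c_i for i > k.
random.seed(2)
k, p = 2, 4
theta = 0.7
S = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta), math.cos(theta)]]   # a rotation, S in O(2)
G = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(p)]

# B = diag(S^T, I_{p-k})
B = [[0.0] * p for _ in range(p)]
for i in range(k):
    for j in range(k):
        B[i][j] = S[j][i]
for i in range(k, p):
    B[i][i] = 1.0

Gstar = [[sum(G[i][t] * B[t][j] for t in range(p)) for j in range(p)]
         for i in range(p)]

for i in range(k, p):
    col = sum(G[j][i] ** 2 for j in range(p))
    col_star = sum(Gstar[j][i] ** 2 for j in range(p))
    assert abs(col - col_star) < 1e-9
```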
Then, we get the following inequality:
\[
\pi\Bigl(\Bigl\{\Gamma\in O(p): \inf_{Q_1\in O(k),\,Q_2\in O(p-k)}\|\operatorname{diag}(Q_1,Q_2)-\Gamma\|_F<\epsilon\Bigr\}\,\Big|\,X_n\Bigr)
\le \pi(D_{\epsilon_4}|X_n)+m\,\pi(E_{\epsilon+\epsilon_5,1}|X_n)
\]
\[
\le \pi(D_{\epsilon_4}|X_n)+M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Bigl(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)\exp(-\eta\epsilon_4)\cdot\pi(D_{\epsilon+\epsilon_5}|X_n)
\le \Bigl(1+M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Bigl(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)\exp(-\eta\epsilon_4)\Bigr)\,\pi(D_{\epsilon_4}|X_n).
\]
We obtain the following lower bound on the posterior probability:
\[
\pi(D_{\epsilon_4}|X_n) \ge \Bigl(1+M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Bigl(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)\exp(-\eta\epsilon_4)\Bigr)^{-1}\Bigl(1-\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2 a_n}\Bigr)
\]
\[
\ge \Bigl(1+\exp\Bigl(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}-\eta\epsilon_4\Bigr)\Bigr)^{-1}\Bigl(1-\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2 a_n}\Bigr)
\ge \Bigl(1+\exp\Bigl(-\frac{\eta}{2}\epsilon_4\Bigr)\Bigr)^{-1}\Bigl(1-\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2 a_n}\Bigr),
\]
where $\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\le\frac{\eta}{2}\epsilon_4$. Therefore, we obtain
\[
\pi(D^c_{\epsilon_4}|X_n) \le 1-\Bigl(1+\exp\Bigl(-\frac{\eta}{2}\epsilon_4\Bigr)\Bigr)^{-1}\Bigl(1-\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2 a_n}\Bigr)
= \frac{\exp\bigl(-\frac{\eta}{2}\epsilon_4\bigr)+\bigl(\frac{n\lambda_{0,k}+p}{p}\bigr)^{-\epsilon_4^2 a_n}}{1+\exp\bigl(-\frac{\eta}{2}\epsilon_4\bigr)}
= O\Bigl(\exp\Bigl(-\frac{\eta}{2}\epsilon_4\Bigr)\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2 a_n}\Bigr),
\]
where $\epsilon_4\gg\eta^{-1}$.
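The step converting the lower bound on $\pi(D_{\epsilon_4}|X_n)$ into the bound on its complement is the exact algebraic identity $1-(1+x)^{-1}(1-r)=(x+r)/(1+x)$. A quick exact check with rational toy values (the specific fractions are arbitrary):

```python
from fractions import Fraction

# Exact check of the identity behind the complement bound:
#   1 - (1 + x)^(-1) * (1 - r) = (x + r) / (1 + x),
# applied in the text with x = exp(-eta*eps_4/2) and
# r = ((n*lambda_{0,k} + p) / p)^(-eps_4^2 * a_n).
for x in (Fraction(1, 7), Fraction(3, 5)):
    for r in (Fraction(1, 9), Fraction(2, 3)):
        lhs = 1 - (1 - r) / (1 + x)
        rhs = (x + r) / (1 + x)
        assert lhs == rhs
```

Since both $x$ and $r$ are $o(1)$, the right-hand side is $O(x)+O(r)$, which is the stated order bound.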
In summary, if there exists $\epsilon>0$ satisfying $\frac{np}{a_n}\ll\epsilon^2\le\frac{\lambda_{0,k}}{n\lambda_{0,1}}\cdot\frac{\eta}{2}\epsilon_4$ and $\epsilon_4\gg\epsilon\vee\eta^{-1}$, then the following holds:
\[
\pi(D^c_{\epsilon_4}|X_n) = O\Bigl(\exp\Bigl(-\frac{\eta}{2}\epsilon_4\Bigr)\Bigr)+O\Bigl(\Bigl(\frac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2 a_n}\Bigr).
\]
In particular, the same posterior bound holds under the simplified condition $\frac{np}{a_n}\ll\frac{\lambda_{0,k}}{n\lambda_{0,1}}\cdot\frac{\eta}{2}\epsilon_4$ and $\epsilon_4\gg\sqrt{\frac{np}{a_n}}\vee\eta^{-1}$. $\blacksquare$

Proof of Lemma 3.2

Proof. The proof follows the same procedure as the proof of Lemma 3.1, replacing
\[
\inf_{Q_2\in O(p-k)}\|\operatorname{diag}(I_k,Q_2)-\Gamma\|_F\ge\epsilon_4
\quad\text{with}\quad
\inf_{Q_2\in O(p-k)}\|\operatorname{diag}(P,Q_2)-\Gamma\|_F\ge\epsilon_4\ \text{for all permutation matrices }P.
\]
We obtain the following. Suppose that $\{R_1,\dots,R_{M(O(k),\|\cdot\|_F,\epsilon_5)}\}$ forms a maximal $\epsilon_5$-packing of $O(k)$, where $R_{M(O(k),\|\cdot\|_F,\epsilon_5)}=I_k$, which also serves as an $\epsilon_5$-covering.
Let $\{S_1,\dots,S_m\}$ be a subset of $\{R_1,\dots,R_{M(O(k),\|\cdot\|_F,\epsilon_5)}\}$ which satisfies the following condition:
\[
\Bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(S_i,Q_2)-\Gamma\|_F<\epsilon+\epsilon_5,\ \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(P_l,Q_2)-\Gamma\|_F\ge\epsilon_4,\ l=1,\dots,L\Bigr\}\neq\emptyset. \tag{S17}
\]
Moreover, it follows that $P_l\notin\{S_1,\dots,S_m\}$ for $l=1,\dots,L$, where $\epsilon_4>\epsilon+\epsilon_5$. Next, we define the sets
\[
D_{\zeta,l}=\Bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(P_l,Q_2)-\Gamma\|_F<\zeta\Bigr\},\qquad
E_{\zeta,r}=\Bigl\{\Gamma\in O(p): \inf_{Q_2\in O(p-k)}\|\operatorname{diag}(S_r,Q_2)-\Gamma\|_F<\zeta\Bigr\},
\]
where $S_r\in O(k)$ and $\|S_r-P_l\|_F>\epsilon_4-(\epsilon+\epsilon_5)$.
By considering the variable transformation $\Gamma^*=\Gamma\operatorname{diag}(S_r^{T}P_l,I_{p-k})$, it follows that $(d\Gamma)=(d\Gamma^*)$, since $(d\Gamma)$ is an invariant measure. Therefore, we obtain the following equality:
\[
\int_{E_{\zeta,r}}\prod_{i=k+1}^{p}\Bigl(\frac{c_i}{n}\Bigr)^{-a_i-n/2+1}(d\Gamma) = \int_{E_{\zeta,r}}\prod_{i=k+1}^{p}\Bigl(\frac{h}{n}+\sum_{j=1}^{n}\Gamma_{ji}^2\hat\lambda_j\Bigr)^{-a_i-n/2+1}(d\Gamma)
= \int_{D_{\zeta,l}}\prod_{i=k+1}^{p}\Bigl(\frac{h}{n}+\sum_{j=1}^{n}(\Gamma^*_{ji})^2\hat\lambda_j\Bigr)^{-a_i-n/2+1}(d\Gamma^*) = \int_{D_{\zeta,l}}\prod_{i=k+1}^{p}\Bigl(\frac{c_i}{n}\Bigr)^{-a_i-n/2+1}(d\Gamma).
\]
The last equality holds due to
\[
\|\operatorname{diag}(S_r,Q_2)-\Gamma\|_F=\|\operatorname{diag}(P_l,Q_2)-\Gamma^*\|_F,
\]
and $\Gamma_{ji}^2=(\Gamma^*_{ji})^2$ for $i>k$.
Therefore, we obtain the following inequality:
\[
\frac{\int_{E_{\epsilon+\epsilon_5,r}}\prod_{i=1}^{p}c_i^{-a_i-n/2+1}(d\Gamma)}{\int_{D_{\epsilon+\epsilon_5,l}}\prod_{i=1}^{p}c_i^{-a_i-n/2+1}(d\Gamma)}
\le \frac{\sup_{E_{\epsilon+\epsilon_5,r}}\prod_{i=1}^{k}c_i^{-a_i-n/2+1}\cdot\int_{E_{\epsilon+\epsilon_5,r}}\prod_{i=k+1}^{p}c_i^{-a_i-n/2+1}(d\Gamma)}{\inf_{D_{\epsilon+\epsilon_5,l}}\prod_{i=1}^{k}c_i^{-a_i-n/2+1}\cdot\int_{D_{\epsilon+\epsilon_5,l}}\prod_{i=k+1}^{p}c_i^{-a_i-n/2+1}(d\Gamma)}
= \frac{\sup_{D_{\epsilon+\epsilon_5,l}}\prod_{i=1}^{k}\bigl(\frac{c_i}{n}\bigr)^{a_i+n/2-1}}{\inf_{E_{\epsilon+\epsilon_5,r}}\prod_{i=1}^{k}\bigl(\frac{c_i}{n}\bigr)^{a_i+n/2-1}}
\]
\[
\le \Biggl[\frac{\sup_{D_{\epsilon+\epsilon_5,l}}\prod_{i=1}^{k}\bigl(\frac{c_i}{n}\bigr)}{\inf_{E_{\epsilon+\epsilon_5,r}}\prod_{i=1}^{k}\bigl(\frac{c_i}{n}\bigr)}\Biggr]^{a_1+n/2-1}\times\frac{\sup_{D_{\epsilon+\epsilon_5,l}}\prod_{i=1}^{k}\bigl(\frac{c_i}{n}\bigr)^{a_i-a_1}}{\inf_{E_{\epsilon+\epsilon_5,r}}\prod_{i=1}^{k}\bigl(\frac{c_i}{n}\bigr)^{a_i-a_1}}
\le \Biggl[\frac{\sup_{D_{\epsilon+\epsilon_5,l}}\prod_{i=1}^{k}\bigl(\frac{c_i}{n}\bigr)}{\inf_{E_{\epsilon+\epsilon_5,r}}\prod_{i=1}^{k}\bigl(\frac{c_i}{n}\bigr)}\Biggr]^{a_1+n/2-1}\times\Bigl(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}. \tag{S18}
\]
By applying Lemma S1.14, the upper bound of (S18) is given by
\[
\text{(S18)} \preccurlyeq \exp\Bigl(\frac{n(\epsilon+\epsilon_5)^2\hat\lambda_1}{\hat\lambda_k}\Bigr)\exp\Bigl(-n(\epsilon_4-(\epsilon+\epsilon_5))^2\min_{i<k}\log\Bigl(\frac{\hat\lambda_i}{\hat\lambda_{i+1}}\Bigr)\Bigr)\times\Bigl(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}
\preccurlyeq \exp\Bigl(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)\exp\Bigl(-n\epsilon_4^2\min_{i<k}\log\Bigl(\frac{\lambda_{0,i}}{\lambda_{0,i+1}}\Bigr)\Bigr)\times\Bigl(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq},
\]
where $\epsilon_4\gg\epsilon+\epsilon_5$ and $\epsilon\asymp\epsilon_5$. Let $\tau=\min_{i<k}\log\bigl(\frac{\lambda_{0,i}}{\lambda_{0,i+1}}\bigr)$. Then, we get the following inequality:
\[
\pi\Bigl(\Bigl\{\Gamma\in O(p): \inf_{Q_1\in O(k),\,Q_2\in O(p-k)}\|\operatorname{diag}(Q_1,Q_2)-\Gamma\|_F<\epsilon\Bigr\}\,\Big|\,X_n\Bigr)
\le \pi\Bigl(\bigcup_{l=1}^{L}D_{\epsilon_4,l}\,\Big|\,X_n\Bigr)+m\,\pi(E_{\epsilon+\epsilon_5,1}|X_n)
\]
\[
\le \pi\Bigl(\bigcup_{l=1}^{L}D_{\epsilon_4,l}\,\Big|\,X_n\Bigr)+M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Bigl(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)\cdot\Bigl(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\exp(-\tau n\epsilon_4^2)\cdot\pi(D_{\epsilon+\epsilon_5,1}|X_n)
\]
\[
\le \Bigl(1+M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Bigl(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)\cdot\Bigl(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\exp(-\tau n\epsilon_4^2)\Bigr)\,\pi\Bigl(\bigcup_{l=1}^{L}D_{\epsilon_4,l}\,\Big|\,X_n\Bigr).
\]
We obtain the following probability bound:
\[
\Pi\Bigl(\bigcup_{l=1}^{L}D_{\epsilon_4,l}\,\Big|\,X^n\Bigr)
\ge\Biggl(1+M\bigl(O(k),\|\cdot\|_F,\epsilon_5\bigr)\cdot\frac{\exp\bigl(n\epsilon^2\lambda_{0,1}/\lambda_{0,k}\bigr)\cdot\bigl(2\lambda_{0,1}/\lambda_{0,k}\bigr)^{kq}}{\exp\bigl(\Delta n\epsilon_4^2\bigr)}\Biggr)^{-1}\Bigl(1-\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2a_n}\Bigr)
\]
\[
\ge\Bigl(1+\exp\Bigl(n\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,k}}-\Delta n\epsilon_4^2\Bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)^{-1}\Bigl(1-\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2a_n}\Bigr)
\ge\Bigl(1+\exp\bigl(-\tfrac{\Delta}{2}n\epsilon_4^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)^{-1}\Bigl(1-\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2a_n}\Bigr),
\]
where $n\epsilon^2\lambda_{0,1}/\lambda_{0,k}\le\tfrac{\Delta}{2}n\epsilon_4^2$. Therefore, we obtain:
\[
\Pi\Bigl(\Bigl(\bigcup_{l=1}^{L}D_{\epsilon_4,l}\Bigr)^c\,\Big|\,X^n\Bigr)
\le1-\Bigl(1+\exp\bigl(-\tfrac{\Delta}{2}n\epsilon_4^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)^{-1}\Bigl(1-\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2a_n}\Bigr)
=\frac{\exp\bigl(-\tfrac{\Delta}{2}n\epsilon_4^2\bigr)\cdot\bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\bigr)^{kq}+\bigl(\tfrac{n\lambda_{0,k}+p}{p}\bigr)^{-\epsilon_4^2a_n}}{1+\exp\bigl(-\tfrac{\Delta}{2}n\epsilon_4^2\bigr)}
=O\Bigl(\exp\bigl(-\tfrac{\Delta}{2}n\epsilon_4^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)+O\Bigl(\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2a_n}\Bigr),
\]
where $\epsilon_4\gtrsim(n\Delta)^{-1/2}$. In summary, if there exists $\epsilon>0$ satisfying
\[
\frac{np}{a_n}\lesssim\epsilon^2\le\frac{\lambda_{0,k}}{n\lambda_{0,1}}\,\frac{\Delta}{2}\,n\epsilon_4^2,
\qquad\text{and}\qquad\epsilon_4\gtrsim\epsilon\gtrsim(n\Delta)^{-1/2},
\]
then the following holds:
\[
\Pi\Bigl(\Bigl(\bigcup_{l=1}^{L}D_{\epsilon_4,l}\Bigr)^c\,\Big|\,X^n\Bigr)=O\Bigl(\exp\bigl(-\tfrac{\Delta}{2}n\epsilon_4^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)+O\Bigl(\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon_4^2a_n}\Bigr).
\]
In particular, the same posterior bound holds under the simplified condition $\frac{np}{a_n}\lesssim\frac{\lambda_{0,k}}{n\lambda_{0,1}}\frac{\Delta}{2}n\epsilon_4^2$ and $\epsilon_4\gtrsim\sqrt{np/a_n}\gtrsim(n\Delta)^{-1/2}$. β– 

Proof of Theorem S1.17

Proof. The proof follows the same procedure as the proof of Theorem S1.15.
By using the reparametrization mentioned at (5), we obtain the following inequality:
\[
\mathbf{E}\Bigl[\tfrac{\tilde\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=\frac{\int\tfrac{\tilde\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\prod\tilde\lambda_i^{-a_i-\frac n2}\exp\bigl(-\tfrac{\tilde c_i}{2\tilde\lambda_i}\bigr)(d\tilde\Lambda)(d\tilde\Gamma)}{\int\prod\tilde\lambda_i^{-a_i-\frac n2}\exp\bigl(-\tfrac{\tilde c_i}{2\tilde\lambda_i}\bigr)(d\tilde\Lambda)(d\tilde\Gamma)}
=\frac{\int f(\tilde c)\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}{\int\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}
\]
\[
\le\frac{\int_{D_\epsilon}f(\tilde c)\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}{\int_{D_\epsilon}\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}+\frac{\int_{D_\epsilon^c}f(\tilde c)\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}{\int\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}
\le\sup_{D_\epsilon}f(\tilde c)+\sup_{D_\epsilon^c}f(\tilde c)\cdot\frac{\int_{D_\epsilon^c}\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}{\int\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}
=\sup_{D_\epsilon}f(\tilde c)+\sup_{D_\epsilon^c}f(\tilde c)\cdot\frac{\int_{(\bigcup_{l=1}^{L}D_{\epsilon,l})^c}\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)}{\int\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)},
\]
where
\[
f(\tilde c)=\mathbf{E}_{\lambda_i}\Bigl[\frac{\lambda_i}{\lambda_{0,i}}-1\Bigr]=\frac{\Gamma(a_i+n/2-2)}{2\,\Gamma(a_i+n/2-1)}\,\frac{\tilde c_i}{\lambda_{0,i}}-1,
\qquad\text{with }\lambda_i\overset{iid}{\sim}\mathrm{InvGam}\Bigl(a_i+\frac n2-1,\,\frac{\tilde c_i}{2}\Bigr).
\]
Since $\sup_{D_\epsilon^c}f(\tilde c)\lesssim\lambda_{0,1}/\lambda_{0,i}$, applying Lemma 3.2, we obtain:
\[
\mathbf{E}\Bigl[\tfrac{\tilde\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
\le\frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\cdot\sup_{D_\epsilon}\Bigl(\frac{\tilde c_i}{n}\Bigr)-1
+O\Bigl(\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)+O\Bigl(\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr)
=\frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\cdot\sup_{D_\epsilon}\Bigl(\frac{c_i}{n}\Bigr)-1
+O\Bigl(\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)+O\Bigl(\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr).
\]
The last equality follows from $c_i=\tilde c_i$ on $D_\epsilon$. Since $c_i$ is given by
\[
\frac{c_i}{n}=\sum_{j=1}^{k}\Gamma_{ji}^2\Bigl\{(1+\beta_j)\lambda_{0,j}+\Bigl(\bar d\frac pn+\alpha_j\sqrt{\frac pn}\Bigr)+\frac hn\Bigr\}
+\sum_{j=k+1}^{n}\Gamma_{ji}^2\Bigl(\bar d\frac pn+\alpha_j\sqrt{\frac pn}+\frac hn\Bigr)
+\sum_{j=n+1}^{p}\Gamma_{ji}^2\,\frac hn,
\]
we obtain the upper bound of $\sup_{D_\epsilon}(c_i/n)$ as follows:
\[
\sup_{D_\epsilon}\Bigl(\frac{c_i}{n}\Bigr)\le(1-\epsilon^2)\Bigl[(1+\beta_i)\lambda_{0,i}+\Bigl(\bar d\frac pn+\alpha_i\sqrt{\frac pn}\Bigr)+\frac hn\Bigr]+\epsilon^2\Bigl[(1+\beta_1)\lambda_{0,1}+\Bigl(\bar d\frac pn+\alpha_1\sqrt{\frac pn}\Bigr)+\frac hn\Bigr]
\le\Bigl[(1+\beta_i)\lambda_{0,i}+\Bigl(\bar d\frac pn+\alpha_i\sqrt{\frac pn}\Bigr)+\frac hn\Bigr]+2\lambda_{0,1}\epsilon^2.
\]
Therefore, the posterior expectation satisfies
\[
\mathbf{E}\Bigl[\tfrac{\tilde\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
\le\frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\Bigl[(1+\beta_i)\lambda_{0,i}+\bar d\frac pn+\alpha_i\sqrt{\frac pn}+\frac hn\Bigr]-1
+O\Bigl(\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr).
\]
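The identity behind $f(\tilde c)$ above, namely $\mathbf{E}[\lambda_i]=\tilde c_i/(n+2a_i-4)$ for $\lambda_i\sim\mathrm{InvGam}(a_i+n/2-1,\tilde c_i/2)$, reduces to the Gamma-function ratio $\Gamma(\alpha-1)/\Gamma(\alpha)=1/(\alpha-1)$. A standard-library sanity check, with illustrative values of $a_i$, $n$, $\tilde c_i$ (not taken from the paper):

```python
import math

# Illustrative values (not from the paper): prior shape a_i, sample size n, scale c~_i.
a_i, n, c_i = 3.0, 20, 45.0

alpha = a_i + n / 2 - 1                 # lambda_i ~ InvGam(alpha, c_i / 2)
# E[lambda_i] = (c_i/2) * Gamma(alpha - 1) / Gamma(alpha), the ratio appearing in f(c~)...
mean_via_gamma = (c_i / 2) * math.gamma(alpha - 1) / math.gamma(alpha)
# ...which collapses to the closed form used in the proof:
closed_form = c_i / (n + 2 * a_i - 4)

assert abs(mean_via_gamma - closed_form) < 1e-9
```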
Next, we derive the lower bound of the posterior expectation:
\[
\mathbf{E}\Bigl[\tfrac{\tilde\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=\frac{\int f(\tilde c)\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}{\int\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}
\ge\frac{\int_{D_\epsilon}f(\tilde c)\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}{\int_{D_\epsilon}\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}\cdot\frac{\int_{D_\epsilon}\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}{\int\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}
=\frac{\int_{D_\epsilon}f(\tilde c)\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}{\int_{D_\epsilon}\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}\cdot\Biggl(1-\frac{\int_{D_\epsilon^c}\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}{\int\bigl(\prod\tilde c_i\bigr)^{-a_i-\frac n2+1}(d\tilde\Gamma)}\Biggr)
\]
\[
\ge\inf_{D_\epsilon}f(\tilde c)\times\Biggl(1-\frac{\int_{(\bigcup_{l=1}^{L}D_{\epsilon,l})^c}\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)}{\int\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)}\Biggr)
\ge\inf_{D_\epsilon}f(c)\times\Bigl(1+O\Bigl(\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)+O\Bigl(\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr)\Bigr).
\]
The last inequality holds by Lemma 3.2 and $c_i=\tilde c_i$ on $D_\epsilon$. We obtain the lower bound of $\inf_{D_\epsilon}(c_i/n)$ as follows:
\[
\inf_{D_\epsilon}\Bigl(\frac{c_i}{n}\Bigr)\ge(1-\epsilon^2)\Bigl[(1+\beta_i)\lambda_{0,i}+\Bigl(\bar d\frac pn+\alpha_i\sqrt{\frac pn}\Bigr)+\frac hn\Bigr]+\epsilon^2\,\frac hn
\ge\Bigl[(1+\beta_i)\lambda_{0,i}+\Bigl(\bar d\frac pn+\alpha_i\sqrt{\frac pn}\Bigr)+\frac hn\Bigr]-2\epsilon^2\lambda_{0,i}.
\]
Therefore, the lower bound of the posterior expectation is given by
\[
\mathbf{E}\Bigl[\tfrac{\tilde\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
\ge\Biggl[\frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\Bigl(\Bigl[(1+\beta_i)\lambda_{0,i}+\Bigl(\bar d\frac pn+\alpha_i\sqrt{\frac pn}\Bigr)+\frac hn\Bigr]-2\epsilon^2\lambda_{0,i}\Bigr)-1\Biggr]
\times\Bigl(1+O\Bigl(\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)+O\Bigl(\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr)\Bigr)
\]
\[
=\frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\Bigl[(1+\beta_i)\lambda_{0,i}+\bar d\frac pn+\alpha_i\sqrt{\frac pn}+\frac hn\Bigr]-1
+O\Bigl(\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}+\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr).
\]
Thus, we conclude that the posterior expectation satisfies:
\[
\mathbf{E}\Bigl[\tfrac{\tilde\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=\frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\Bigl[(1+\beta_i)\lambda_{0,i}+\bar d\frac pn+\alpha_i\sqrt{\frac pn}+\frac hn\Bigr]-1
+O\Bigl(\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr). \;\blacksquare
\]

Proof of Theorem 3.3

Proof. By Theorem S1.17, we obtain the following posterior expectation:
\[
\mathbf{E}\Bigl[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=\frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\Bigl[(1+\beta_i)\lambda_{0,i}+\bar d\frac pn+\alpha_i\sqrt{\frac pn}+\frac hn\Bigr]-1+O(\cdots)
=\Bigl(\frac{(1+\beta_i)n}{n+2a_i-4}-1\Bigr)+\frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\Bigl[\bar d\frac pn+\alpha_i\sqrt{\frac pn}+\frac hn\Bigr]+O(\cdots)
=\frac{\beta_in-2a_i+4}{n+2a_i-4}+O\Bigl(\frac{p}{n\lambda_{0,i}}\Bigr)
+O\Bigl(\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr).
\]
If $a_i\lesssim n^{1/2}$, then the following inequality holds:
\[
\frac{\beta_in-2a_i+4}{n+2a_i-4}\lesssim\beta_i=O(n^{-1/2+\delta}).
\]
Therefore, the posterior expectation is given by
\[
\mathbf{E}\Bigl[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=O(n^{-1/2+\delta})+O\Bigl(\frac{p}{n\lambda_{0,i}}\Bigr)
+O\Bigl(\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr).
\]
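The step from $(1+\beta_i)n/(n+2a_i-4)-1$ to the displayed ratio $(\beta_in-2a_i+4)/(n+2a_i-4)$ is elementary algebra over a common denominator; a Fraction-exact check (the numbers are illustrative, not from the paper):

```python
from fractions import Fraction

# Illustrative values (not from the paper): relative perturbation beta, shape a, sample size n.
beta, a, n = Fraction(3, 100), Fraction(5), Fraction(400)

# (1 + beta) n / (n + 2a - 4) - 1  ==  (beta n - 2a + 4) / (n + 2a - 4)
lhs = (1 + beta) * n / (n + 2 * a - 4) - 1
rhs = (beta * n - 2 * a + 4) / (n + 2 * a - 4)

assert lhs == rhs
```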
By the assumptions of the theorem, in particular $\lambda_{0,1}/\lambda_{0,k}=O(1)$ and $\epsilon=n^{-1/4}$, we apply Lemma S1.18 to obtain
\[
\mathbf{E}\Bigl[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=\mathbf{E}\Bigl[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]+\mathbf{E}\Bigl[\tfrac{\lambda_{(i)}-\lambda_i}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=O(n^{-1/2+\delta})+O\Bigl(\frac{p}{n\lambda_{0,i}}\Bigr)
+O\Bigl(\Bigl(\epsilon^2+\frac1n\Bigr)\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr),
\]
where $\lambda_{(i)}$ is the $i$-th largest eigenvalue of $\Sigma$. Since $\lambda_{0,1}/\lambda_{0,k}$ is bounded by a positive constant, $a_1,\dots,a_k\lesssim n^{1/2}$, and $\epsilon\asymp n^{-1/4}$, it follows that
\[
\epsilon^2+\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}+\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\lesssim n^{-1/2+\delta}+\frac{p}{n\lambda_{0,i}}.
\]
Therefore, we conclude
\[
\mathbf{E}\Bigl[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]=O\Bigl(\frac{p}{n\lambda_{0,i}}\Bigr)+O(n^{-1/2+\delta}).
\]
β– 

Proof of Theorem S1.15

Proof. Using Lemma 3.1, we obtain the following inequality:
\[
\mathbf{E}\Bigl[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=\frac{\int\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\prod\lambda_i^{-a_i-\frac n2}\exp\bigl(-\tfrac{c_i}{2\lambda_i}\bigr)(d\Lambda)(d\Gamma)}{\int\prod\lambda_i^{-a_i-\frac n2}\exp\bigl(-\tfrac{c_i}{2\lambda_i}\bigr)(d\Lambda)(d\Gamma)}
=\frac{\int f(c)\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)}{\int\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)}
\le\frac{\int_{D_\epsilon}f(c)\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)}{\int_{D_\epsilon}\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)}+\frac{\int_{D_\epsilon^c}f(c)\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)}{\int\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)}
\le\sup_{D_\epsilon}f(c)+\sup_{D_\epsilon^c}f(c)\cdot\frac{\int_{D_\epsilon^c}\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)}{\int\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}(d\Gamma)},
\]
where
\[
f(c)=\mathbf{E}_{\lambda_i}\Bigl[\frac{\lambda_i}{\lambda_{0,i}}-1\Bigr]=\frac{\Gamma(a_i+n/2-2)}{2\,\Gamma(a_i+n/2-1)}\,\frac{c_i}{\lambda_{0,i}}-1,
\qquad\text{with }\lambda_i\overset{iid}{\sim}\mathrm{InvGam}\Bigl(a_i+\frac n2-1,\,\frac{c_i}{2}\Bigr).
\]
By the same procedure as in the proof of Theorem S1.17, we conclude that the posterior expectation satisfies
\[
\mathbf{E}\Bigl[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=\frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\Bigl[(1+\beta_i)\lambda_{0,i}+\bar d\frac pn+\alpha_i\sqrt{\frac pn}+\frac hn\Bigr]-1
+O\Bigl(\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\kappa}{2\epsilon}\bigr)+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr). \;\blacksquare
\]

Proof of Theorem 3.4

Proof. Let $2a_i-4=\dfrac{nt}{\hat\lambda_i-t}$.
Then, by Theorem S1.15, the posterior expectation can be written as:
\[
\mathbf{E}\Bigl[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=\frac{1}{\lambda_{0,i}}\Bigl[\frac{n\hat\lambda_i}{n+2a_i-4}-\lambda_{0,i}\Bigr]
+O\Bigl(\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\kappa}{2\epsilon}\bigr)+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr)
=\frac{1}{\lambda_{0,i}}\bigl[\hat\lambda_i-t-\lambda_{0,i}\bigr]+O(\cdots)
=\frac{1}{\lambda_{0,i}}\Bigl[\frac hn+\beta_i\lambda_{0,i}+\bar d\frac pn+\alpha_i\sqrt{\frac pn}-t\Bigr]
+O\Bigl(\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\kappa}{2\epsilon}\bigr)+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr).
\]
If $t\in[\hat\lambda_{k+1},\hat\lambda_n]$, then
for some constant $\alpha_0\in[-C,C]$, we have $t=\bar d\frac pn+\alpha_0\sqrt{\frac pn}$. Therefore, the posterior expectation simplifies to:
\[
\mathbf{E}\Bigl[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=\frac{1}{\lambda_{0,i}}(\alpha_i-\alpha_0)\sqrt{\frac pn}+\beta_i+\frac{1}{\lambda_{0,i}}\frac hn
+O\Bigl(\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\kappa}{2\epsilon}\bigr)+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr)
=O\Bigl(\frac{1}{\lambda_{0,i}}\sqrt{\frac pn}\Bigr)+O(\beta_i)
+O\Bigl(\epsilon^2\tfrac{\lambda_{0,1}}{\lambda_{0,i}}+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\exp\bigl(-\tfrac{\kappa}{2\epsilon}\bigr)+\tfrac{\lambda_{0,1}}{\lambda_{0,i}}\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr).
\]
By the assumptions of the theorem, in particular $\lambda_{0,1}/\lambda_{0,k}=O(1)$ and $\epsilon=n^{-1/4}$, we apply Lemma S1.18 to obtain
\[
\mathbf{E}\Bigl[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=\mathbf{E}\Bigl[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]+\mathbf{E}\Bigl[\tfrac{\lambda_{(i)}-\lambda_i}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]
=O\Bigl(\frac{1}{\lambda_{0,i}}\sqrt{\frac pn}\Bigr)+O(n^{-\frac12+\delta})
+O\Bigl(\frac1n+\exp\bigl(-\tfrac{\kappa}{2\epsilon}\bigr)+\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr),
\]
where $\lambda_{(i)}$ is the $i$-th largest eigenvalue of $\Sigma$. Since $2a_i-4=nt/(\hat\lambda_i-t)$ for $i=1,\dots,k$, $\epsilon\asymp n^{-1/4}$, and $\lambda_{0,k}\lesssim p/n^{1/4}$, it follows that
\[
\epsilon^2+\exp\bigl(-\tfrac{\kappa}{2\epsilon}\bigr)+\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\lesssim n^{-1/2+\delta}+\frac{1}{\lambda_{0,i}}\sqrt{\frac pn}.
\]
Therefore, we conclude
\[
\mathbf{E}\Bigl[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]=O\Bigl(\frac{1}{\lambda_{0,i}}\sqrt{\frac pn}\Bigr)+O(n^{-1/2+\delta}),
\]
where $\lambda_{0,k}\lesssim p/n^{1/4}$.
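The prior calibration $2a_i-4=nt/(\hat\lambda_i-t)$ makes the shrunken posterior mean $n\hat\lambda_i/(n+2a_i-4)$ collapse exactly to $\hat\lambda_i-t$, which is the step used at the start of this proof. A Fraction-exact check of that identity (values are illustrative, not from the paper):

```python
from fractions import Fraction

# Illustrative values (hypothetical): sample eigenvalue lam_hat, threshold t, sample size n.
lam_hat, t, n = Fraction(7, 2), Fraction(1, 2), Fraction(100)

# The prior shape is calibrated so that 2a - 4 = n t / (lam_hat - t) ...
two_a_minus_4 = n * t / (lam_hat - t)
# ... which recenters the posterior-mean shrinkage exactly at lam_hat - t:
assert n * lam_hat / (n + two_a_minus_4) == lam_hat - t
```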
On the other hand, if $\lambda_{0,k}\gtrsim p/n^{1/4}$, then it follows that
\[
\max_{i\le k}a_i-\min_{i\le k}a_i\lesssim\frac{p}{\lambda_{0,k}}\lesssim n^{1/2}.
\]
In this case, Theorem 3.3 applies and yields
\[
\mathbf{E}\Bigl[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]=O\Bigl(\frac{p}{n\lambda_{0,i}}\Bigr)+O(n^{-1/2+\delta}).
\]
Since $\lambda_{0,i}\gtrsim p/n^{1/4}$ in this regime, it follows that
\[
O\Bigl(\frac{p}{n\lambda_{0,i}}\Bigr)+O\bigl(n^{-1/2+\delta}\bigr)=O\Bigl(\frac{1}{\lambda_{0,i}}\sqrt{\frac pn}\Bigr)+O\bigl(n^{-1/2+\delta}\bigr)=O(n^{-1/2+\delta})
\]
holds. Therefore, in both regimes, we obtain the unified posterior bound:
\[
\mathbf{E}\Bigl[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Bigr]=O\Bigl(\frac{1}{\lambda_{0,i}}\sqrt{\frac pn}\Bigr)+O(n^{-1/2+\delta}). \;\blacksquare
\]

Proof of Corollary 3.5

Proof. Consider the spectral decomposition $nS=QWQ^T$. Then, on $D_\epsilon$, we have the following inequality:
\[
(\gamma_j^T\hat\gamma_j)^2=\bigl([Q\Gamma]_{\cdot j}^T\,Q_{\cdot j}\bigr)^2=\bigl[(Q\Gamma e_j)^T(Qe_j)\bigr]^2=\Gamma_{jj}^2\ge1-\epsilon^2,
\]
for $j=1,\dots,k$. By Theorem 3.2 of Wang and Fan (2017), the following limit holds:
\[
\bigl|\gamma_{0,j}^T\hat\gamma_j\bigr|-(1+\bar dd_j)^{-\frac12}=O_p(\eta_j),
\qquad\text{where }\eta_j=\frac{1}{\lambda_{0,j}}\sqrt{\frac pn}+\frac{p}{n^{3/2}\lambda_{0,j}}+\frac1n.
\]
Applying the triangle inequality of angles from Castano et al.
(2016), where $\theta(u,v)=\arccos\dfrac{|u^Tv|}{\|u\|\,\|v\|}$, we obtain:
\[
\bigl|\gamma_{0,j}^T\gamma_j\bigr|
\le\cos\Bigl[\arccos\bigl(|\hat\gamma_j^T\gamma_j|\bigr)+\arccos\bigl(|\gamma_{0,j}^T\hat\gamma_j|\bigr)\Bigr]
\le\cos\Bigl[\arccos\bigl(\sqrt{1-\epsilon^2}\bigr)+\arccos\Bigl(\tfrac{1}{\sqrt{1+\bar dd_j}}+O_p(\eta_j)\Bigr)\Bigr]
=\cos\Bigl[\arccos\Bigl(\sqrt{1-\epsilon^2}\Bigl(\tfrac{1}{\sqrt{1+\bar dd_j}}+O_p(\eta_j)\Bigr)-\epsilon\sqrt{1-\Bigl(\tfrac{1}{\sqrt{1+\bar dd_j}}+O_p(\eta_j)\Bigr)^2}\Bigr)\Bigr]
=\cos\Bigl[\arccos\Bigl(\sqrt{\tfrac{1-\epsilon^2}{1+\bar dd_j}}-\sqrt{\tfrac{\bar dd_j}{1+\bar dd_j}\epsilon^2}+O_p(\eta_j)\Bigr)\Bigr]
\le(1+\bar dd_j)^{-1/2}+O_p(\eta_j).
\]
Similarly, applying the lower bound using $\theta(\gamma_{0,j},\gamma_j)\ge\theta(\gamma_{0,j},\hat\gamma_j)-\theta(\hat\gamma_j,\gamma_j)$:
\[
\bigl|\gamma_{0,j}^T\gamma_j\bigr|
\ge\cos\Bigl[\arccos\bigl(|\gamma_{0,j}^T\hat\gamma_j|\bigr)-\arccos\bigl(|\hat\gamma_j^T\gamma_j|\bigr)\Bigr]
\ge\cos\Bigl[\arccos\Bigl(\tfrac{1}{\sqrt{1+\bar dd_j}}\Bigr)-\arccos\bigl(\sqrt{1-\epsilon^2}\bigr)\Bigr]
=\cos\Bigl[\arccos\Bigl(\sqrt{1-\epsilon^2}\Bigl(\tfrac{1}{\sqrt{1+\bar dd_j}}+O_p(\eta_j)\Bigr)+\epsilon\sqrt{1-\Bigl(\tfrac{1}{\sqrt{1+\bar dd_j}}+O_p(\eta_j)\Bigr)^2}\Bigr)\Bigr]
=\cos\Bigl[\arccos\Bigl(\sqrt{\tfrac{1-\epsilon^2}{1+\bar dd_j}}+\sqrt{\tfrac{\bar dd_j}{1+\bar dd_j}\epsilon^2}+O_p(\eta_j)\Bigr)\Bigr]
=\sqrt{\frac{1-\epsilon^2}{1+\bar dd_j}}+\epsilon\sqrt{\frac{\bar dd_j}{1+\bar dd_j}}+O_p(\eta_j).
\]
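The bound above rests on the fact that $\theta(u,v)=\arccos(|u^Tv|/(\|u\|\|v\|))$ satisfies the triangle inequality (the metric property from Castano et al., 2016). A standard-library numerical spot-check on random vectors:

```python
import math, random

# theta(u, v) = arccos(|u.v| / (|u||v|)) is a metric on lines (Castano et al., 2016);
# we spot-check the triangle inequality theta(u, w) <= theta(u, v) + theta(v, w).
def theta(u, v):
    dot = abs(sum(a * b for a, b in zip(u, v)))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(min(1.0, dot / (nu * nv)))

random.seed(0)
for _ in range(1000):
    u, v, w = ([random.gauss(0, 1) for _ in range(5)] for _ in range(3))
    assert theta(u, w) <= theta(u, v) + theta(v, w) + 1e-12
```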
Therefore, we obtain the expectation:
\[
\mathbf{E}\bigl[1-(\gamma_{0,j}^T\gamma_j)^2\,\big|\,X^n\bigr]=\sup_{D_\epsilon}\bigl(1-|\gamma_{0,j}^T\gamma_j|^2\bigr)
=1-\Bigl(\frac{1-\epsilon^2}{1+\bar dd_j}+O_p(\eta_j)\Bigr)-\Bigl(\frac{\bar dd_j\epsilon^2}{1+\bar dd_j}\Bigr)-2\sqrt{\Bigl(\frac{1-\epsilon^2}{1+\bar dd_j}\Bigr)\Bigl(\frac{\bar dd_j\epsilon^2}{1+\bar dd_j}\Bigr)}+O_p(\eta_j)
=\frac{\bar dd_j}{1+\bar dd_j}+O\Bigl(\frac{p}{n\lambda_{0,j}}\epsilon^2\Bigr)+O\Bigl(\epsilon\sqrt{\frac{p}{n\lambda_{0,j}}}\Bigr)+O_p(\eta_j)
=\frac{p\bar d}{n\lambda_{0,j}+p\bar d}+O\Bigl(\frac{p}{n\lambda_{0,j}}\epsilon^2\Bigr)+O\Bigl(\epsilon\sqrt{\frac{p}{n\lambda_{0,j}}}\Bigr)+O_p(\eta_j).
\]
If $\lambda_{0,1}\lesssim p/n^{1/4}$, then by the proof of Lemma S1.16, we obtain
\[
\Bigl|\mathbf{E}\bigl[1-(\gamma_{0,j}^T\gamma_j)^2\,\big|\,X^n\bigr]-\mathbf{E}\bigl[1-(\gamma_{0,j}^T\gamma_{(j)})^2\,\big|\,X^n\bigr]\Bigr|
=\Bigl|\mathbf{E}\bigl[\bigl((\gamma_{0,j}^T\gamma_{(j)})^2-(\gamma_{0,j}^T\gamma_j)^2\bigr)\cdot I(\lambda_{(i)}\neq\lambda_i)\,\big|\,X^n\bigr]\Bigr|
\le2\,\mathbf{E}\bigl[I(\lambda_{(i)}\neq\lambda_i)\,\big|\,X^n\bigr]
=2\,\mathbf{E}\bigl[\mathbf{E}[I(\lambda_{(i)}\neq\lambda_i)\,|\,\Gamma]\cdot\bigl(I(\Gamma\in D_\epsilon)+I(\Gamma\notin D_\epsilon)\bigr)\,\big|\,X^n\bigr]
\]
\[
\le2\,\mathbf{E}\bigl[\mathbf{E}[I(\lambda_{(i)}\neq\lambda_i)\,|\,\Gamma]\cdot I(\Gamma\in D_\epsilon)\,\big|\,X^n\bigr]+2\,\mathbf{E}\bigl[\mathbf{E}[1\,|\,\Gamma]\cdot I(\Gamma\notin D_\epsilon)\,\big|\,X^n\bigr]
\le2\Bigl(1-\prod_{i=1}^{p}(1-p_i)\Bigr)+\Pi\bigl(\Gamma\notin D_\epsilon\,\big|\,X^n\bigr)
\le2\sum_{i=1}^{p}p_i+O\Bigl(\exp\bigl(-\tfrac{\kappa}{2\epsilon}\bigr)\Bigr)+O\Bigl(\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr).
\]
Similarly, if $\lambda_{0,1}\gtrsim p/n^{1/4}$, then by the proof of Lemma S1.18, we obtain
\[
\Bigl|\mathbf{E}\bigl[1-(\gamma_{0,j}^T\gamma_j)^2\,\big|\,X^n\bigr]-\mathbf{E}\bigl[1-(\gamma_{0,j}^T\gamma_{(j)})^2\,\big|\,X^n\bigr]\Bigr|
\le2\sum_{i=1}^{p}p_i+O\Bigl(\exp\bigl(-\tfrac{\Delta}{2}n\epsilon^2\bigr)\cdot\Bigl(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\Bigr)^{kq}\Bigr)+O\Bigl(\Bigl(\tfrac{n\lambda_{0,k}+p}{p}\Bigr)^{-\epsilon^2a_n}\Bigr).
\]
By the assumptions of the theorem, in particular $\lambda_{0,1}/\lambda_{0,k}=O(1)$ and $\epsilon\asymp n^{-1/4}$, we have $\sum_{i=1}^{p}p_i=O(1/n)$. Therefore,
\[
\Bigl|\mathbf{E}\bigl[1-(\gamma_{0,j}^T\gamma_j)^2\,\big|\,X^n\bigr]-\mathbf{E}\bigl[1-(\gamma_{0,j}^T\gamma_{(j)})^2\,\big|\,X^n\bigr]\Bigr|\lesssim n^{-1/2+\delta}+\frac{1}{\lambda_{0,i}}\sqrt{\frac pn}.
\]
Hence, we obtain the final posterior bound
\[
\mathbf{E}\bigl[1-|\gamma_{0,j}^T\gamma_{(j)}|^2\,\big|\,X^n\bigr]=O\Bigl(\frac{p}{n\lambda_{0,j}}\Bigr)+O_p(\eta_j),\qquad j=1,\dots,k,
\]
where $\eta_j=\frac{1}{\lambda_{0,j}}\sqrt{\frac pn}+\frac{p}{n^{3/2}\lambda_{0,j}}+\frac1n$. β– 

We consider the spectral decomposition $\Sigma=\Gamma\Lambda\Gamma^T$, where
\[
\Lambda=\begin{pmatrix}\Lambda_1&0\\0&\Lambda_2\end{pmatrix},
\]
with $\Lambda_1$ being a $k\times k$ matrix and $\Lambda_2$ a $(p-k)\times(p-k)$ matrix. Furthermore, we write $\Gamma=(\Gamma_1,\Gamma_2)$, where $\Gamma_1$ is a $p\times k$ matrix and $\Gamma_2$ is a $p\times(p-k)$ matrix.

Lemma S1.19 (Spectral
difference of eigenvectors). For the spiked eigenvector matrix $\Gamma_1$, the following inequality holds on the set $D_\epsilon$:
\[
\|\Gamma_{0,1}-\Gamma_1\|^2=O(d_k)+O_p(\eta_k),\qquad\text{where }\eta_k=\frac{1}{\lambda_{0,k}}\sqrt{\frac pn}+\frac{p}{n^{3/2}\lambda_{0,k}}+\frac1n.
\]
Proof.
\[
\|\Gamma_{0,1}-\Gamma_1\|^2\le\|\Gamma_{0,1}-\Gamma_1\|_F^2\le\sum_{i=1}^{k}\|\gamma_{0,i}-\gamma_i\|_2^2=2\sum_{i=1}^{k}\bigl(1-|\gamma_{0,i}^T\gamma_i|\bigr)
\le2\sum_{i=1}^{k}\Biggl(1-\Bigl(\sqrt{\frac{1-\epsilon^2}{1+d_i}}+\epsilon\sqrt{\frac{d_i}{1+d_i}}+O_p(\eta_i)\Bigr)\Biggr),
\]
where $\eta_i=\frac{1}{\lambda_{0,i}}\sqrt{\frac pn}+\frac{p}{n^{3/2}\lambda_{0,i}}+\frac1n$. The last inequality follows from the proof of Corollary 3.5. Moreover,
\[
2\sum_{i=1}^{k}\Biggl(1-\Bigl(\sqrt{\frac{1-\epsilon^2}{1+d_i}}+\epsilon\sqrt{\frac{d_i}{1+d_i}}\Bigr)\Biggr)
\le2\sum_{i=1}^{k}\Bigl(1-\sqrt{\frac{1-\epsilon^2}{1+d_i}}\Bigr)
\le2k\Bigl(1-\sqrt{\frac{1-\epsilon^2}{1+d_k}}\Bigr)
\le2k\bigl(\sqrt{1+d_k}-\sqrt{1-\epsilon^2}\bigr)
=2k\,\frac{d_k+\epsilon^2}{\sqrt{1+d_k}+\sqrt{1-\epsilon^2}}
=O(d_k).
\]
Since $\sum_{j=1}^{k}\eta_j\le k\eta_k$, it follows that $\|\Gamma_{0,1}-\Gamma_1\|^2=O(d_k)+O_p(\eta_k)$. β– 

Proof of Corollary 3.6

Proof.
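The second step of the proof uses the identity $\|u-v\|^2=2(1-u^Tv)$ for unit vectors (the proof aligns the eigenvector signs so that $\gamma_{0,i}^T\gamma_i=|\gamma_{0,i}^T\gamma_i|\ge0$). A quick standard-library check on random unit vectors:

```python
import math, random

# For unit vectors u, v: ||u - v||^2 = 2(1 - u.v), the identity behind the
# bound ||Gamma_{0,1} - Gamma_1||_F^2 = 2 * sum_i (1 - gamma_{0,i}.gamma_i).
random.seed(1)
for _ in range(100):
    u = [random.gauss(0, 1) for _ in range(6)]
    v = [random.gauss(0, 1) for _ in range(6)]
    nu = math.sqrt(sum(x * x for x in u)); u = [x / nu for x in u]
    nv = math.sqrt(sum(x * x for x in v)); v = [x / nv for x in v]
    lhs = sum((a - b) ** 2 for a, b in zip(u, v))
    rhs = 2 * (1 - sum(a * b for a, b in zip(u, v)))
    assert abs(lhs - rhs) < 1e-12
```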
\[
\|\Sigma-\Sigma_0\|_F^2\le2\sum_{i=1}^{2}\bigl\|\Gamma_i\Lambda_i\Gamma_i^T-\Gamma_{0,i}\Lambda_{0,i}\Gamma_{0,i}^T\bigr\|_F^2
\le2\sum_{i=1}^{2}\Bigl[\bigl\|(\Gamma_i-\Gamma_{0,i})\Lambda_{0,i}\Gamma_i^T\bigr\|_F+\bigl\|\Gamma_i(\Lambda_i-\Lambda_{0,i})\Gamma_i^T\bigr\|_F+\bigl\|\Gamma_{0,i}\Lambda_{0,i}(\Gamma_i-\Gamma_{0,i})^T\bigr\|_F\Bigr]^2
\]
\[
\le6\sum_{i=1}^{2}\Bigl[\bigl\|(\Gamma_i-\Gamma_{0,i})\Lambda_{0,i}\Gamma_i^T\bigr\|_F^2+\bigl\|\Gamma_i(\Lambda_i-\Lambda_{0,i})\Gamma_i^T\bigr\|_F^2+\bigl\|\Gamma_{0,i}\Lambda_{0,i}(\Gamma_i-\Gamma_{0,i})^T\bigr\|_F^2\Bigr]
\tag{S19}
\]
\[
\le6\sum_{i=1}^{2}\Bigl[2\|\Gamma_{0,i}-\Gamma_i\|^2\|\Lambda_{0,i}\|_F^2+\|\Lambda_{0,i}-\Lambda_i\|_F^2\Bigr]
\le6\Bigl[8\Bigl(\sum_{i=1}^{k}\lambda_{0,i}^2\Bigr)\cdot\bigl(O(d_k)+O_p(\eta_k)\bigr)+\sum_{i=1}^{k}(\lambda_{0,i}-\lambda_i)^2\Bigr]+6\Bigl[8\sum_{i>k}\lambda_{0,i}^2+\sum_{i>k}(\lambda_{0,i}-\lambda_i)^2\Bigr]
\tag{S20}
\]
\[
\le6\Bigl[8\sum_{i=1}^{k}\lambda_{0,i}^2\cdot O(d_k)+\sum_{i=1}^{k}(\lambda_{0,i}-\lambda_i)^2\Bigr]+6\Bigl[8\sum_{i>k}\lambda_{0,i}^2+\sum_{i>k}(\lambda_{0,i}-\lambda_i)^2\Bigr]+O_p\Bigl(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\eta_k\Bigr).
\]
The inequality (S19) follows from the Cauchy–Schwarz inequality, and (S20) follows from Lemma S1.19.
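The three-term triangle-inequality step rests on the telescoping identity $\Gamma\Lambda\Gamma^T-\Gamma_0\Lambda_0\Gamma_0^T=(\Gamma-\Gamma_0)\Lambda_0\Gamma^T+\Gamma(\Lambda-\Lambda_0)\Gamma^T+\Gamma_0\Lambda_0(\Gamma-\Gamma_0)^T$. A standard-library numerical check on random $3\times3$ matrices:

```python
import random

def mat(rows, cols, rng):
    return [[rng.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def mm(A, B):   # matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def T(A):       # transpose
    return [list(r) for r in zip(*A)]

rng = random.Random(0)
G, G0 = mat(3, 3, rng), mat(3, 3, rng)
L, L0 = mat(3, 3, rng), mat(3, 3, rng)

# Telescoping identity behind the triangle-inequality step (S19):
# G L G^T - G0 L0 G0^T = (G-G0) L0 G^T + G (L-L0) G^T + G0 L0 (G-G0)^T
lhs = sub(mm(mm(G, L), T(G)), mm(mm(G0, L0), T(G0)))
rhs = add(add(mm(mm(sub(G, G0), L0), T(G)),
              mm(mm(G, sub(L, L0)), T(G))),
          mm(mm(G0, L0), T(sub(G, G0))))
assert all(abs(a - b) < 1e-9 for ra, rb in zip(lhs, rhs) for a, b in zip(ra, rb))
```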
The posterior expectation is given by:
\[
\mathbf{E}\bigl[(\lambda_i-\lambda_{0,i})^2\,\big|\,X^n\bigr]
=\frac{\int(\lambda_i-\lambda_{0,i})^2\prod\lambda_i^{-a_i-\frac n2}\exp\bigl(-\tfrac{c_i}{2\lambda_i}\bigr)\,d\Lambda\,d\Gamma}{\int\prod\lambda_i^{-a_i-\frac n2}\exp\bigl(-\tfrac{c_i}{2\lambda_i}\bigr)\,d\Lambda\,d\Gamma}
=\frac{\int f(c)\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}\,d\Gamma}{\int\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}\,d\Gamma}
\le\sup_{D_\epsilon}f(c)+\frac{\int_{D_\epsilon^c}f(c)\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}\,d\Gamma}{\int\bigl(\prod c_i\bigr)^{-a_i-\frac n2+1}\,d\Gamma},
\]
where
\[
f(c)=\mathbf{E}_{\lambda_i}\bigl[(\lambda_i-\lambda_{0,i})^2\bigr]=\Bigl(\lambda_{0,i}-\frac{c_i}{n+2a_i-4}\Bigr)^2+\frac{2c_i^2}{(n+2a_i-4)^2(n+2a_i-6)},
\qquad\text{with }\lambda_i\overset{iid}{\sim}\mathrm{InvGam}\Bigl(a_i+\frac n2-1,\,\frac{c_i}{2}\Bigr).
\]
Therefore, we obtain for $i=1,\dots,k$:
\[
\sup_{D_\epsilon}f(c)=\Bigl(O\Bigl(\sqrt{\frac pn}+\lambda_{0,i}n^{-\frac12+\delta}\Bigr)\Bigr)^2+O\Bigl(\frac{\lambda_{0,i}^2}{n}\Bigr)=O\Bigl(\frac pn+\frac{\lambda_{0,i}^2}{n^{1-2\delta}}\Bigr),
\]
for all small $\delta>0$. The posterior expectation is given by:
\[
\mathbf{E}\bigl[(\lambda_i-\lambda_{0,i})^2\,\big|\,X^n\bigr]=O\Bigl(\frac pn+\frac{\lambda_{0,i}^2}{n^{1-2\delta}}\Bigr),\qquad i=1,\dots,k.
\]
Similarly, we obtain the upper bound for $f(c)$ over $D_\epsilon$ for the remaining coordinates. Using the assumption that $a_i\gtrsim p$ for $i>k$, we obtain:
\[
\sup_{D_\epsilon}f(c)=\Bigl(\lambda_{0,i}-\frac{c_i}{n+2a_i-4}\Bigr)^2+\frac{2}{n+2a_i-6}\Bigl(\frac{c_i}{n+2a_i-4}\Bigr)^2
\le\lambda_{0,i}^2+\frac{1}{a_i}\lambda_{0,i}^2\le2\lambda_{0,i}^2.
\]
The posterior expectation is given by:
\[
\mathbf{E}\bigl[(\lambda_i-\lambda_{0,i})^2\,\big|\,X^n\bigr]=O(\lambda_{0,i}^2),\qquad i>k.
\]
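The bias-variance decomposition used for $f(c)$ relies on the inverse-gamma moments $\mathbf{E}[\lambda_i]=c_i/(n+2a_i-4)$ and $\mathrm{Var}[\lambda_i]=2c_i^2/((n+2a_i-4)^2(n+2a_i-6))$; note the variance carries $(n+2a_i-6)$ to the first power in this parametrization. A standard-library check via the Gamma-function moment formula $\mathbf{E}[\lambda^m]=b^m\,\Gamma(\alpha-m)/\Gamma(\alpha)$ (the values of $a$, $n$, $c$ are illustrative, not from the paper):

```python
import math

# lambda ~ InvGam(alpha, b) with alpha = a + n/2 - 1 and b = c/2.
a, n, c = 4.0, 30, 80.0
alpha, b = a + n / 2 - 1, c / 2

# Raw moments E[lambda^m] = b^m * Gamma(alpha - m) / Gamma(alpha):
m1 = b * math.gamma(alpha - 1) / math.gamma(alpha)
m2 = b ** 2 * math.gamma(alpha - 2) / math.gamma(alpha)

assert abs(m1 - c / (n + 2 * a - 4)) < 1e-9
assert abs((m2 - m1 ** 2) - 2 * c ** 2 / ((n + 2 * a - 4) ** 2 * (n + 2 * a - 6))) < 1e-9
```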
Therefore, we obtain the following inequality for the posterior expectation:
\[
\mathbf{E}\bigl[\|\Sigma-\Sigma_0\|_F^2\,\big|\,X^n\bigr]
=O\Bigl(d_k\sum_{i=1}^{k}\lambda_{0,i}^2+\frac pn+n^{-1+2\delta}\sum_{i=1}^{k}\lambda_{0,i}^2\Bigr)+O\Bigl(\sum_{i>k}\lambda_{0,i}^2\Bigr)+O_p\Bigl(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\eta_k\Bigr)
=O\Bigl(\Bigl(n^{-1+2\delta}+\frac{p}{n\lambda_{0,k}}\Bigr)\sum_{i=1}^{k}\lambda_{0,i}^2+\frac pn+\sum_{i=k+1}^{p}\lambda_{0,i}^2\Bigr)+O_p\Bigl(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\eta_k\Bigr)
=O\Bigl(\Bigl(n^{-1+2\delta}+\frac{p}{n\lambda_{0,k}}\Bigr)\sum_{i=1}^{k}\lambda_{0,i}^2+p\Bigr)+O_p\Bigl(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\eta_k\Bigr).
\]
Since $\lambda_{0,1}/\lambda_{0,k}$ is bounded by a positive constant, the posterior expectation of the relative covariance error is given by:
\[
\mathbf{E}\Bigl[\frac{\|\Sigma-\Sigma_0\|_F^2}{\|\Sigma_0\|_F^2}\,\Big|\,X^n\Bigr]
=O\Bigl(\frac{p+n^{-1+2\delta}\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}\Bigr)+O\Bigl(\frac{p}{n\lambda_{0,k}}\frac{\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}\Bigr)+O_p\Bigl(\frac{\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}\,\eta_k\Bigr)
=O\Bigl(\frac{p+n^{-1+2\delta}\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\Bigr)+O\Bigl(\frac{p}{n\lambda_{0,k}}\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\Bigr)+O_p\Bigl(\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\,\eta_k\Bigr).
\]
We consider the spectral decomposition $S=\hat\Gamma\hat\Lambda\hat\Gamma^T$, where
\[
\hat\Lambda=\begin{pmatrix}\hat\Lambda_1&0\\0&\hat\Lambda_2\end{pmatrix},
\]
with $\hat\Lambda_1$ being a $k\times k$ matrix and $\hat\Lambda_2$ a $(p-k)\times(p-k)$ matrix. Furthermore, we write $\hat\Gamma=(\hat\Gamma_1,\hat\Gamma_2)$, where $\hat\Gamma_1$ is a $p\times k$ matrix and $\hat\Gamma_2$ is a $p\times(p-k)$ matrix. By Theorem 3.2 of Wang and Fan (2017), the following holds for each $j=1,\dots,k$:
\[
\bigl|\gamma_{0,j}^T\hat\gamma_j\bigr|=(1+\bar dd_j)^{-\frac12}+O_p(\eta_j).
\]
Using this, we bound the spectral norm between the true and sample eigenvectors as
\[
\bigl\|\Gamma_{0,1}-\hat\Gamma_1\bigr\|^2\le\bigl\|\Gamma_{0,1}-\hat\Gamma_1\bigr\|_F^2\le\sum_{j=1}^{k}\bigl\|\gamma_{0,j}-\hat\gamma_j\bigr\|_2^2
=2\sum_{j=1}^{k}\bigl(1-|\gamma_{0,j}^T\hat\gamma_j|\bigr)
=2\sum_{j=1}^{k}\Bigl(1-\bigl((1+\bar dd_j)^{-\frac12}+O_p(\eta_j)\bigr)\Bigr)
=2\sum_{j=1}^{k}\frac{\bar dd_j}{\sqrt{1+\bar dd_j}\bigl(\sqrt{1+\bar dd_j}+1\bigr)}+O_p(\eta_k)
=2\sum_{j=1}^{k}O(d_j)+O_p(\eta_k)=O(d_k)+O_p(\eta_k).
\]
By the same procedure, we have
\[
\|S-\Sigma_0\|_F^2\le6\sum_{i=1}^{2}\Bigl[2\bigl\|\Gamma_{0,i}-\hat\Gamma_i\bigr\|^2\|\Lambda_{0,i}\|_F^2+\bigl\|\Lambda_{0,i}-\hat\Lambda_i\bigr\|_F^2\Bigr]
\le6\Bigl[8\Bigl(\sum_{i=1}^{k}\lambda_{0,i}^2\Bigr)\cdot\bigl(O(d_k)+O_p(\eta_k)\bigr)+\sum_{i=1}^{k}(\lambda_{0,i}-\hat\lambda_i)^2\Bigr]+6\Bigl[8\sum_{i>k}\lambda_{0,i}^2+\sum_{i>k}(\lambda_{0,i}-\hat\lambda_i)^2\Bigr].
\tag{S21}
\]
By Lemma S1.2, we obtain
\[
(\lambda_{0,i}-\hat\lambda_i)^2=
\begin{cases}
O\bigl(\tfrac{p^2}{n^2}+\tfrac{\lambda_{0,i}^2}{n^{1-2\delta}}\bigr)&\text{for }i=1,\dots,k,\\[2pt]
O\bigl(\tfrac{p^2}{n^2}\bigr)&\text{for }i=k+1,\dots,n,\\[2pt]
O(1)&\text{for }i=n+1,\dots,p.
\end{cases}
\]
Substituting into (S21), we obtain
\[
\|S-\Sigma_0\|_F^2
\le6\Bigl[8\Bigl(\sum_{i=1}^{k}\lambda_{0,i}^2\Bigr)\cdot\bigl(O(d_k)+O_p(\eta_k)\bigr)+\sum_{i=1}^{k}O\Bigl(\frac{p^2}{n^2}+\frac{\lambda_{0,i}^2}{n^{1-2\delta}}\Bigr)\Bigr]+6\Bigl[8\sum_{i>k}O(1)+\sum_{i=k+1}^{n}O\Bigl(\frac{p^2}{n^2}\Bigr)+\sum_{i=n+1}^{p}O(1)\Bigr]
=O\Bigl(d_k\sum_{i=1}^{k}\lambda_{0,i}^2+\frac{p^2}{n}+n^{-1+2\delta}\sum_{i=1}^{k}\lambda_{0,i}^2+p\Bigr)+O_p\Bigl(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\eta_k\Bigr)
=O\Bigl(\frac{p}{n\lambda_{0,k}}\sum_{i=1}^{k}\lambda_{0,i}^2+\frac{p^2}{n}+n^{-1+2\delta}\sum_{i=1}^{k}\lambda_{0,i}^2\Bigr)+O_p\Bigl(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\eta_k\Bigr).
\]
Since $\lambda_{0,1}/\lambda_{0,k}$ is bounded by a positive constant, it follows that:
\[
\frac{\|S-\Sigma_0\|_F^2}{\|\Sigma_0\|_F^2}
=O\Bigl(\frac{p}{n\lambda_{0,k}}\frac{\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}+\frac{p^2}{n\sum_{i=1}^{p}\lambda_{0,i}^2}+\frac{n^{-1+2\delta}\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}\Bigr)+O_p\Bigl(\frac{\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}\,\eta_k\Bigr)
=O\Bigl(\frac{p^2}{n(\lambda_{0,1}^2+p)}+\frac{n^{-1+2\delta}\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\Bigr)+O\Bigl(\frac{p}{n\lambda_{0,k}}\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\Bigr)+O_p\Bigl(\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\,\eta_k\Bigr). \;\blacksquare
\]

References

Castano, D., Paksoy, V. E. and Zhang, F. (2016).
Angles, triangle inequalities, correlation matrices and metric-preserving and subadditive functions, LinearAlgebra anditsApplications 491: 15–29. 50 Edelman, A., Arias, T. A. and Smith, S. T. (1998). The geometry of algorithms wit h orthogonality constraints, SIAMjournalonMatrixAnalysis andApplications 20(2): 303–353. Stewart,G.andSun,J.(1990). MatrixPerturbation Theory,ComputerScienceandScientificComputing, Elsevier Science. Szarek, S. J. (1982). Nets of grassmann manifold and orthogonal group, Proceedings ofresearch workshop onBanachspacetheory(IowaCity,Iowa,1981), Vol. 169, University of Iowa Iowa City, IA, p. 185. Wang, W. and Fan, J. (2017). Asymptotics of empirical eigenstructure for high dimensional spiked covariance, Annalsofstatistics 45(3): 1342. 51
https://arxiv.org/abs/2505.20668v1
arXiv:2505.20681v1 [stat.ME] 27 May 2025
HYBRID BAYESIAN ESTIMATION IN THE ADDITIVE HAZARDS MODEL
Álvarez Enrique Ernesto(1), Riddick Maximiliano Luis(2)
(1) Instituto de Cálculo, Universidad de Buenos Aires - CONICET; Universidad Nacional de Luján, Argentina; mail: enriqueealvarez@fibertel.com.ar
(2) NUCOMPA and Departamento de Matemática, Facultad de Ciencias Exactas, Universidad Nacional del Centro de la Provincia de Buenos Aires, Tandil (7000), Argentina; mail: mriddick@nucompa.exa.unicen.edu.ar
Abstract: We propose a Bayesian method of estimation for the semiparametric Additive Hazards Model (AHM) from Survival Analysis under right-censoring. With this aim, we review the AHM, revisiting the likelihood function so as to comment on the challenges posed by Bayesian estimation from the full likelihood. Through an algorithmic reformulation of that likelihood, we present an alternative method based on a hybrid Bayesian treatment that exploits the Lin and Ying (1994) estimating-equation approach and chooses tractable priors for the parameters. We obtain the estimators from the posterior distributions in closed form, we perform a small simulation experiment, and lastly we illustrate our method with the classical Welsh Nickel Miners dataset.
Keywords: Additive Hazards Model, Survival Analysis, Bayesian Inference.
2000 AMS Subject Classification: 62N02 - 62F15
1 Introduction
Suppose we have a sample of $n$ individuals who may experience a certain event of interest, such as death, definite illness remission or permanent retirement from the labor force, over an observation window $[0,\tau]$. We denote by $T^*_i$ the true, possibly unobserved, time to occurrence for the $i$-th individual. Because some individuals experience censoring at times $C_i$, their duration until the event is observed only when $C_i \ge T^*_i$.
In classical Survival Analysis, it is of interest to study these variables as related to observed individual-level characteristics, as measured by some independent variables or, as they are frequently called, covariates. Notice that we only consider settings in which at most one event time can be observed for each individual. Even though this model could be extended to repeated events, we leave that as a matter for future research. For example, in Medicine, it could be of interest how remission may depend on the choice among different drugs, or on other individual-specific characteristics. In the literature, models for survival data are typically defined through the so-called hazard rate,
$$\lambda_i(t,\theta) := \lim_{\varepsilon \downarrow 0}\varepsilon^{-1}\,P_\theta\big(T^*_i \le t+\varepsilon \mid T^*_i > t\big),$$
which is a heuristic measure of the instantaneous risk at any given time. Approaches for parametric models, where $\theta \in \Theta \subset \mathbb{R}^k$, are plentiful. Some common choices are the exponential, Weibull or Gamma distributions (e.g., Lawless 2003). Alternatively, useful models have been proposed within the semi-parametric framework, in which the hazard function is decomposed into a nonparametric baseline hazard function $\lambda_0(\cdot)$ and a Euclidean parameter $\beta \in \mathbb{R}^k$. In this setting the three most common models are: (i) Cox's Proportional Hazards Model (CPM), which decomposes the hazard multiplicatively, proposing
$$\lambda(t,\beta) = \lambda_0(t)\exp(z'\beta); \tag{1}$$
(ii) the Additive Hazards Model (AHM), which proposes instead an additive decomposition
$$\lambda(t,\beta) = \lambda_0(t) + z'\beta; \tag{2}$$
and (iii) the Accelerated Failure Time Model (AFTM), which takes a slightly different approach than
the previous ones by proposing a rescaling of the duration times themselves, i.e., $T^* = T^*_0\exp(Z'\beta)$, where $T^*_0 \sim F_0(\cdot)$ is the baseline cumulative distribution function. Interestingly, for the corresponding hazard functions this entails that
$$\lambda(t,\beta) = \lambda_0\big(t\exp(Z'\beta)\big)\exp(Z'\beta), \tag{3}$$
which is neither multiplicative nor additive. Developments in Classical Estimation for the above three models have been fairly vast. An extensive account of them can be found in classical textbooks such as Kalbfleisch and Prentice (1980), Klein and Melvin (2006) or Lawless (2003), among others. As for Bayesian estimation, most of the statistical focus in the literature has been put on the Multiplicative Hazards Model, an extensive account of which can be found in the textbook of Ibrahim, Chen and Sinha (2001). Comparatively, Bayesian developments for the Additive Hazards Model and for the Accelerated Failure Time Model have been fewer. For the AHM, a recent literature review is provided by Alvarez and Riddick (2019), and for the AFT model the relevant references are discussed in Zhang and Lawson (2011). It is our goal in this study to propose a Bayesian method of estimation for the semi-parametric Additive Hazards Model under right-censoring. This paper is organized as follows. In Section 2 we review the Additive Hazards Model, we introduce the likelihood function and we discuss the challenges posed by either Classical or Bayesian estimation from the full likelihood. We further discuss alternative approaches based on Cox's Partial Likelihood and on the seminal work of Lin and Ying (1994), who proposed an approach based on an estimating-equation method developed from Counting Process theory. Section 3 is our main contribution in this manuscript, in which, based on a convenient expression of the likelihood, we present a new hybrid Bayesian method of estimation.
This method combines the classical Lin and Ying estimating equation with Bayesian priors for the Euclidean parameter and for the baseline hazard function. We obtain the posterior distributions and provide formulas for the estimators in closed form. The availability of those closed-form expressions is one of the main advantages of our method, which avoids the need to rely on approximate sampling from the posterior distributions or on other types of numerical approximation. Section 4 contains a small simulation experiment that illustrates some of the properties of our hybrid Bayesian estimators. In Section 5 we carry out a comparative Bayesian analysis of the Welsh Nickel Refiners dataset, first introduced by Doll, Morgan and Speizer (1970) and subsequently analyzed from a Classical perspective by Breslow and Day (1987), Lin and Ying (1994), and Alvarez and Ferrario (2016), among others. Lastly, in Section 6, we highlight the main contributions of this article, which is part of Maximiliano Riddick's PhD thesis (2020).
2 Additive Hazards Model
The parameter vector $\beta$ is non-negative and $k$-dimensional, and $\lambda_0(\cdot) \in \mathcal{L}$ is a baseline hazard function. Denoting
$$\mathcal{L} := \Big\{\, h : \mathbb{R}^+ \to \mathbb{R}^+ \ \Big|\ \int_0^\infty h(s)\,ds = \infty \,\Big\},$$
the parameter space for $(\beta, \lambda_0)$ is $\Theta = \mathbb{R}^{k+} \times \mathcal{L}$. It is in that sense that the model is semi-parametric. It is noteworthy that in the general AHM formulation what needs to be positive is the
hazard function itself, but not necessarily all the coefficients. In our context, there are two ways to accomplish that goal: (i) by adding the specific constraint that the estimated $\hat\lambda(t) = \hat\lambda_0(t) + \beta'z$ be nonnegative and performing constrained inference; or (ii) by using only positive covariates $z_k$, either as given by nature or constructed purposely to achieve positivity, and forcing all the coefficients $\beta_k$ also to be positive. Although it seems restrictive, this option could be extended to allow for negative parameters through the model proposed by Dunson & Herring (2005). Our choice, which is sufficient yet not necessary to obtain nonnegative hazards, is the alternative that we adopted in this manuscript. The main advantage of being willing to live with this limitation is the availability of estimators in closed form. According to the hazard function of Model (2), and under right censoring, our interest is to estimate the parameters of the model. Under right censoring, with censoring indicator $\delta_i = \mathbf{1}(T^*_i \le C_i)$, the likelihood function in survival models is, calling $\Theta$ the model parameters,
$$L_n(\Theta) = \prod_{i=1}^n \lambda(t_i \mid \Theta)^{\delta_i}\, S(t_i \mid \Theta).$$
In the remainder of this article, for practicality, we will omit the conditioning on the model parameters from the expressions. So, in the AHM, the likelihood when $n$ i.i.d. triplets $(t_i, z_i, \delta_i)$ are observed results in
$$L_n(\beta,\lambda_0) = \prod_{i=1}^n [f(t_i)]^{\delta_i}[S(t_i)]^{1-\delta_i} = \prod_{i=1}^n [\lambda(t_i)]^{\delta_i}\, S(t_i) = \prod_{i=1}^n \big(\lambda_0(t_i) + \beta'z_i\big)^{\delta_i}\exp\{-\Lambda_0(t_i) - t_i\beta'z_i\}, \tag{4}$$
since $S(t) = \exp\{-\Lambda(t)\} = \exp\{-\int_0^t \lambda(u)\,du\}$, where $\Lambda_0(t) = \int_0^t \lambda_0(u)\,du$ denotes the cumulative baseline hazard function. We will expand on this in the explanation of the algorithm.
Piecewise constant model. For simplicity, in this Section we propose to model the baseline hazard function as independent of the Euclidean parameter and as a piecewise constant function. E.g., we fix $m \in \mathbb{N}$ and choose a partition $0 =: s_0 < s_1 < \ldots < s_{m-1} < \infty =: s_m$.
Define further a random nonnegative stepwise function
$$L_0(t) := \begin{cases} A_1 & 0 \le t < s_1, \\ A_2 & s_1 \le t < s_2, \\ \ \vdots & \\ A_{m-1} & s_{m-2} \le t < s_{m-1}, \\ A_m & t \ge s_{m-1}, \end{cases} \tag{5}$$
and let us call $A$ the $m$-dimensional random vector with entries $A_1, \ldots, A_{m-1} \ge 0$ and $A_m > 0$ (where the last entry is strictly positive in order to guarantee that any realization of the hazard function integrates to infinity). Strictly speaking, in a Bayesian setting, the random function $L_0$ has distribution $Q$, a probability measure on the space $\mathcal{L}$. Several choices are possible for $Q$; for simplicity, we assume throughout this manuscript that the grid $s_0, \ldots, s_m$ is fixed and was chosen before collecting any data.
Polynomial expansion of the likelihood under the piecewise constant hazard model. The next construction is crucial, since it makes available the Bayesian treatment developed in the next Section. First, we reconsider the likelihood, including the piecewise model proposed for the baseline hazard:
$$L_n(\beta,\lambda_0) = \exp\Big(-\beta'\sum_{i=1}^n t_i z_i\Big)\exp\Big(-\sum_{i=1}^n \Lambda_0(t_i)\Big)\prod_{i=1}^n \big(\lambda_0(t_i) + \beta'z_i\big)^{\delta_i}.$$
Now, the trick is to rewrite the expression $\prod_{i=1}^n (\lambda_0(t_i) + \beta'z_i)^{\delta_i}$ according to the grid and the values defined in the piecewise model. Let $n_j$ be the number of
observations that belong to each grid interval and, calling $(z_{k_j}, \delta_{k_j})$ their corresponding covariates and censoring indicators, this yields
$$L_n(\beta,\lambda_0) = \exp\Big(-\beta'\sum_{i=1}^n t_i z_i\Big)\exp\Big(-\sum_{i=1}^n \Lambda_0(t_i)\Big)\prod_{j=1}^m\prod_{k_j=1}^{n_j}\big(a_j + \beta'z_{k_j}\big)^{\delta_{k_j}}. \tag{6}$$
Notice that the formula in Equation (6) above does not correspond to any standard density for which moment or quantile formulae, nor simulation algorithms, are readily available. Because of that, previous statistical inference approaches to this model were based on numerical methods. We now endeavor to find approximations to its first two moments, from a tractable re-expression of this formula. Let us expand the product
$$\prod_{k_j=1}^{n_j}\big(a_j + \beta'z_{k_j}\big)^{\delta_{k_j}} = d_0 + d_1 a_j + \ldots + d_{N_j} a_j^{N_j}, \tag{8}$$
which is a polynomial of order $N_j = \sum_{k_j=1}^{n_j}\delta_{k_j}$, with coefficients $d_i$ that depend on the sample $(t_i, z_i, \delta_i)$, $i = 1, \ldots, n$, and on the parameters. To estimate the coefficients of Equation (8), Chernoukhov (2013, 2018) proposed a method which essentially consisted of two steps: (i) evaluating the polynomial at $(N_j + 1)$ different points; and (ii) solving a system of linear equations. The drawback of that method is its numerical instability. Instead, in this manuscript we propose an alternative approach which is more efficient numerically and which allows for estimators in closed form. With that aim, we exploit a recursive relationship obtained for the polynomial coefficients, which we explain as follows.
Recursion. Let us propose the following recursive method, based on how the polynomial coefficients change when a degree-$n$ polynomial is multiplied by a monomial $(x + b)$. If the number of uncensored times is $n$ (i.e., we have a degree-$n$ polynomial in a generic variable $\lambda$), we denote the polynomial $P_n(\lambda) = \sum_{j=0}^n d_j\lambda^j$. When the sample is augmented by one uncensored observation, this is equivalent to multiplying $P_n$ by $(\lambda + \beta'z_{n+1})$, i.e.,
$$P_{n+1}(\lambda) = \Big(\sum_{j=0}^n d_j\lambda^j\Big)(\lambda + \beta'z_{n+1}) = \sum_{j=0}^n d_j\lambda^{j+1} + \sum_{j=0}^n d_j\,\beta'z_{n+1}\,\lambda^j.$$
Then, the coefficients of $P_{n+1}$ are updated as (i) $d_0^{(n+1)} = d_0^{(n)}\beta'z_{n+1}$, (ii) $d_j^{(n+1)} = d_{j-1}^{(n)} + d_j^{(n)}\beta'z_{n+1}$ for $j \in \{1, \ldots, n\}$, and lastly (iii) $d_{n+1}^{(n+1)} = d_n^{(n)} = 1$. With this recursive relationship between the coefficients, we can calculate the $d_i$'s in a fairly easy and numerically efficient way. In line with this modification, we propose to estimate $\beta$ first, and then estimate $\lambda_0$. In order to achieve that goal, we deal with the likelihood expression $L_n(\lambda_0 \mid \beta)$, which becomes
$$L_n(\lambda_0 \mid \beta) \propto \prod_{j=1}^m \Big(\sum_{k=0}^{N_j} d_k a_j^k\Big)\exp\Big(-\sum_{i=1}^n \Lambda_0(t_i)\Big). \tag{9}$$
We endeavour now to obtain a tractable expression for $\exp\{-\sum_{i=1}^n \Lambda_0(t_i)\}$ in Equation (9), for each interval in the grid, expressing each $\Lambda_0(t_i)$ as a function of each $a_j$. For that, notice that, for each $a_j$, $\Lambda_0(t)$ (i) does not depend on $a_j$ if $t \le s_{j-1}$, (ii) depends on $a_j$ and $t$ if $s_{j-1} < t \le s_j$; that is,
$$\Lambda_0(t_i) := \begin{cases} \Lambda_0(t_i) & t_i \le s_{j-1}, \\ \Lambda_0(s_{j-1}) + a_j(t_i - s_{j-1}) & s_{j-1} < t_i \le s_j, \\ \Lambda_0(s_{j-1}) + a_j(s_j - s_{j-1}) + \int_{s_j}^{t_i}\lambda_0(u)\,du & s_j < t_i. \end{cases}$$
Then, calling $m_j = \#\{t_i > s_j\}$, after standard calculations we obtain
$$\exp\Big(-\sum_{i=1}^n \Lambda_0(t_i)\Big) \propto \exp\Big\{-a_j\Big(\sum_{k_j=1}^{n_j}(t_{k_j} - s_{j-1}) + m_j(s_j - s_{j-1})\Big)\Big\}. \tag{10}$$
Therefore, combining Equations (9) and (10), we see that $L_n(\lambda_0 \mid \beta)$ is proportional to a mixture of Gamma distributions, i.e.,
$$L_n(\lambda_0 \mid \beta) \propto \prod_{j=1}^m\Big(\sum_{k=0}^{N_j} d_k a_j^k \exp\Big\{-a_j\Big(\sum_{k_j=1}^{n_j}(t_{k_j} - s_{j-1}) + m_j(s_j - s_{j-1})\Big)\Big\}\Big). \tag{11}$$
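The coefficient recursion above is straightforward to implement. The following sketch (function name and interface are ours, not the paper's) builds the coefficients $d_0,\dots,d_{N_j}$ by starting from the constant polynomial $P_0 \equiv 1$ and multiplying in one factor $(\lambda + \beta'z_{k_j})$ per uncensored observation, applying update rules (i)-(iii) at each step:

```python
def expand_poly_coeffs(bz):
    """Coefficients d_0, ..., d_N (ascending degree) of prod_k (lam + bz[k]),
    where bz[k] stands for beta'z of the k-th uncensored observation."""
    d = [1.0]                      # P_0 = 1
    for b in bz:
        new = [0.0] * (len(d) + 1)
        for j, dj in enumerate(d):
            new[j] += dj * b       # rules (i)-(ii): carry d_j * beta'z_{n+1}
            new[j + 1] += dj       # rules (ii)-(iii): degree-raising term d_{j-1}
        d = new
    return d

# (lam + 2)(lam + 3) = lam^2 + 5*lam + 6:
coeffs = expand_poly_coeffs([2.0, 3.0])   # -> [6.0, 5.0, 1.0]
```

Each multiplication costs $O(n)$ coefficient updates, so the full expansion is $O(n_j^2)$ per interval with no linear system to solve, which is the numerical advantage over the evaluation-and-interpolation scheme attributed to Chernoukhov.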
Being a mixture of Gammas, the display above provides a tractable expression which we extensively exploit in the sequel. Without this convenient expression, several algorithms have been presented in the literature with innovative adaptations of acceptance-rejection sampling, Metropolis-Hastings and Metropolis-within-Gibbs (e.g., Turkmann, Paulino & Müller 2020). Instead, because our formulation leads to a mixture of Gamma distributions, it avoids the Gibbs sampler and provides estimators in closed form.
2.1 Uninformative Priors
According to the covariates and censoring indicators, under the piecewise constant model for the baseline hazard we arrive at a mixture-of-Gammas expression for the likelihood function of the AHM. We leave the analysis under noninformative prior formulations, such as the Laplace prior (equal to 1 over the parameter space) or MAXENT priors, for future research. In the present article the objective is to develop Bayesian inference with informative priors.
2.2 Informative Priors
As mentioned above, in a Bayesian treatment we view the parameters as realizations of a random vector $B = \beta$ and a random curve $L_0 = \lambda_0(\cdot)$. In this Subsection, we specify informative prior distributions in the following manner:
Prior for the Euclidean parameter. Consider
$$B \sim N_k(\mu_\beta, \mathbf{C}_\beta)\big|_{\mathbb{R}^{k+}}, \tag{12}$$
i.e., the truncation to the positive orthant $\mathbb{R}^{k+}$ of a multivariate normal distribution with mean vector $\mu_\beta$ and positive definite covariance matrix $\mathbf{C}_\beta$. This entails that the density function of $B$ is given by
$$f_B(\beta) = \frac{N_k(\beta)\prod_{j=1}^k I(\beta_j \ge 0)}{\int_0^\infty\!\cdots\int_0^\infty N_k(\beta)\,d\beta}, \tag{13}$$
where $N_k(\beta) = (2\pi)^{-k/2}\,|\det(\mathbf{C}_\beta)|^{-1/2}\exp\big(-\tfrac12(\beta - \mu_\beta)'\mathbf{C}_\beta^{-1}(\beta - \mu_\beta)\big)$. The denominator is the probability $P(\beta_1 \ge 0; \ldots; \beta_k \ge 0)$ under the nontruncated normal distribution. It is a constant, which we denote by $\Omega(\mu_\beta, \mathbf{C}_\beta)$, and which depends only on $\mu_\beta$ and $\mathbf{C}_\beta$.
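For intuition about the truncated prior in (12)-(13), one can simulate it by plain rejection: draw from the untruncated $N_k(\mu_\beta,\mathbf{C}_\beta)$ and keep only the draws landing in the positive orthant. This is a sketch under our own naming, not code from the paper:

```python
import numpy as np

def sample_prior(mu_beta, C_beta, size, seed=0):
    """Rejection sampling from N_k(mu_beta, C_beta) truncated to the positive orthant."""
    rng = np.random.default_rng(seed)
    draws = []
    while len(draws) < size:
        b = rng.multivariate_normal(mu_beta, C_beta)
        if np.all(b >= 0):          # accept only draws in R^{k+}
            draws.append(b)
    return np.asarray(draws)

# Prior concentrated near (1, 1): almost every draw is accepted.
samples = sample_prior(mu_beta=[1.0, 1.0], C_beta=0.01 * np.eye(2), size=500)
```

The acceptance rate of this scheme is exactly the normalizing constant $\Omega(\mu_\beta, \mathbf{C}_\beta)$ of Equation (13), so rejection is only practical when that probability is not too small.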
Apart from its mathematical tractability, an additional advantage of choosing a prior based on a normal distribution is that it may simplify the process of prior elicitation: field experts are often reasonably acquainted with normal distributions, and can specify the mean, the mode or some of the quantiles.
Prior for the cumulative baseline hazard function. In order to allow for dependence among the baseline hazard parameters, we opted to model it as a Gamma process, with a pre-specified increasing, left-continuous function $\alpha(t)$ and a scale parameter $c$. So, calling $\eta_i := \alpha(s_i) - \alpha(s_{i-1})$ the increment of the function $\alpha(t)$ over the interval $[s_{i-1}, s_i)$, the joint prior density for the cumulative baseline parameters $\Lambda^+_{0j}$ according to the selected time grid is
$$f_{\Lambda^+_0}(a^+_{01}, \ldots, a^+_{0m}) = \prod_{i=1}^m \frac{c^{\eta_i}}{\Gamma(\eta_i)}\,(a^+_{0i})^{\eta_i - 1}\exp\big(-c\,a^+_{0i}\big), \tag{14}$$
where the newly introduced parameters $(a^+_{01}, \ldots, a^+_{0m})$ are a realization of the random vector $\Lambda^+_0 = (\Lambda^+_{01}, \Lambda^+_{02}, \ldots, \Lambda^+_{0m})$, constructed from the baseline hazard increments. We provide algebraic details of these calculations below, in Section 3.2. We let $\mu_\beta = \mathbf{1}_k$ be a vector of ones, and $\mathbf{C}_\beta = \omega I_k$, i.e., a constant $\omega > 0$ times the identity matrix of order $k$. Alternative methods, which rely on choosing the hyperparameters for both $B$ and $L_0$ by prior elicitation from expert knowledge,
are currently under research.
3 The Hybrid Bayesian Method
We attempt to develop a Bayesian method that achieves two goals: (i) it disentangles estimation of $\beta$ from the baseline hazard function $\lambda_0(\cdot)$, as the latter is often treated as a nuisance in many applications; and (ii) it generates estimators in closed form. These are two important goals because Survival Analysis typically has two objectives, namely: (i) prediction of the survival of an individual with given covariate values, and/or (ii) assessing how survival, or the risk of the event occurring at any given time, may depend on some covariates. While for the first objective (i.e., prediction) estimation of both the baseline hazard function and the Euclidean parameters is necessary, for the second objective only the Euclidean parameters are needed. As already mentioned, straight application of the maximum likelihood approach to any of the three models mentioned in this article, namely the PHM, AHM or AFTM, estimates the baseline hazard function and the Euclidean parameter jointly (e.g., Andersen et al., 1993); this has presented computational difficulties that encouraged researchers to propose alternative estimation methods in the literature. For instance, for the PHM, Cox developed the approach of the so-called Partial Likelihood, which made it possible to estimate $\beta$ disentangled from $\lambda_0(\cdot)$. First, we propose an approach to estimate $\beta$; then we develop a method to estimate the baseline hazard given the estimate of $\beta$.
3.1 Estimation of $\beta$
In the AHM context, a classical way to estimate $\beta$ uses an estimating-equation approach instead of the score function from either maximum likelihood or partial maximum likelihood. This is because neither of them allows the estimation of $\beta$ to be disentangled from estimation of $\lambda_0$ and, furthermore, they do not allow estimators in closed form.
The pioneering work of Lin and Ying (1994) proposed an estimating equation for $\beta$:
$$U(\beta) := \Big[\sum_{i=1}^n \Delta_i\big[z_i - \tilde z_n(t_i)\big]\Big] - \Big[\sum_{i=1}^n \int_0^{t_i}\big[z_i - \tilde z_n(u)\big]^{\otimes 2}\,du\Big]\beta = 0, \tag{15}$$
where for a given column vector $a$ we denote the matrix $a^{\otimes 2} = a\,a'$, and
$$\tilde z_n(u) := \frac{\sum_{i=1}^n z_i\,\mathbf{1}(t_i \ge u)}{\sum_{i=1}^n \mathbf{1}(t_i \ge u)} \tag{16}$$
is the vector function which averages all the $z$'s corresponding to time values greater than or equal to $u$. In order to put the estimating function $U(\beta)$ of Equation (15) into a Bayesian context, let us now denote the statistics
$$V_1 := \frac1n\sum_{i=1}^n \Delta_i\big[z_i - \tilde z_n(t_i)\big], \qquad V_2 := \frac1n\sum_{i=1}^n \int_0^{t_i}\big[z_i - \tilde z_n(u)\big]^{\otimes 2}\,du, \qquad V_3 := \frac1n\sum_{i=1}^n \big[z_i - \tilde z_n(t_i)\big]^{\otimes 2},$$
where $V_1$ is a vector, while $V_2$ and $V_3$ are symmetric matrices. With this notation, the LY estimating equation is $U(\beta) = V_1 - V_2\beta = 0$. Consider further the function
$$g(\beta) = \exp\Big\{-\frac12\Big[(\beta - V_2^{-1}V_1)'\,\big(n^{-1}V_2^{-1}V_3V_2^{-1}\big)^{-1}\,(\beta - V_2^{-1}V_1)\Big]\Big\}. \tag{17}$$
Taking logarithms and derivatives with respect to $\beta$, we see that
$$\log g(\beta) = -\frac12\Big[\beta'\big(n^{-1}V_2^{-1}V_3V_2^{-1}\big)^{-1}\beta - 2\beta'\big(n^{-1}V_2^{-1}V_3V_2^{-1}\big)^{-1}(V_2^{-1}V_1) + (V_2^{-1}V_1)'\big(n^{-1}V_2^{-1}V_3V_2^{-1}\big)^{-1}(V_2^{-1}V_1)\Big],$$
$$\frac{\partial}{\partial\beta}\log g(\beta) = -\big(n^{-1}V_2^{-1}V_3V_2^{-1}\big)^{-1}\beta + \big(n^{-1}V_2^{-1}V_3V_2^{-1}\big)^{-1}(V_2^{-1}V_1).$$
From the display above, we observe that $U(\beta) = 0$ whenever $(\partial/\partial\beta)\log g(\beta) = 0$. As a consequence, it is remarkable that Lin and Ying's point estimator of $\beta$ corresponds to
the mean (or the mode) of a multivariate normal distribution with mean vector $m = V_2^{-1}V_1$ and covariance matrix $D = n^{-1}\big(V_2^{-1}V_3V_2^{-1}\big)$. This entails that, in a Bayesian context, we could regard the LY estimator as arising from a posterior normal distribution for the Euclidean parameter under a flat (improper) prior. In this manuscript, in contrast, we have opted for an informative conjugate multivariate normal prior truncated to the positive orthant, given by Equation (12). From there, we propose the pseudo-marginal posterior distribution for $[B \mid X, L_0]$ as
$$f_{B\mid X,L_0}(\beta) \propto \exp\Big\{-\frac12\Big[(\beta - m)'D^{-1}(\beta - m) + (\beta - \mu_\beta)'\mathbf{C}_\beta^{-1}(\beta - \mu_\beta)\Big]\Big\}\prod_{j=1}^k I(\beta_j \ge 0), \tag{18}$$
which from standard calculations entails
$$[B \mid X, L_0] \sim N_k\Big[(D^{-1} + \mathbf{C}_\beta^{-1})^{-1}(D^{-1}m + \mathbf{C}_\beta^{-1}\mu_\beta);\ (D^{-1} + \mathbf{C}_\beta^{-1})^{-1}\Big]\Big|_{\mathbb{R}^{k+}}. \tag{19}$$
It is worth remarking, at this point, why we call the distribution in the Equation above a "pseudo" posterior, instead of a (plain) posterior distribution. That is because the Bayesian calculations that led to the density $f_{B\mid X,L_0}$ in Equation (18) started from $g(\beta)$, which is not the conditional distribution of the sample given the parameters, but a convenient transformation inspired by an estimating equation. That is why we call our method "hybrid". Within a Bayesian framework, our method shares the spirit of Cox's partial likelihood and of Lin and Ying's estimating equation, an alternative frequentist approach to inference: all these methods rely on estimating equations that depart from the traditional likelihood function. Furthermore, it turns out that, conveniently, with our formulation $[B \mid X, L_0] = [B \mid X]$; i.e., this marginal posterior distribution is independent of the baseline hazard function. This enables estimation of $\beta$ to be disentangled from that of $\lambda_0(\cdot)$, which in many contexts is treated as a nuisance parameter.
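The closed-form combination in (19) is just a precision-weighted average of the Lin-Ying quantities and the prior, so it can be computed in a few lines. This sketch (function names are ours) takes $m = V_2^{-1}V_1$ and $D = n^{-1}V_2^{-1}V_3V_2^{-1}$ as inputs and, anticipating the point estimator discussed next, reads "whenever positive, or zero otherwise" as componentwise clipping, which is our interpretation:

```python
import numpy as np

def pseudo_posterior(m, D, mu_beta, C_beta):
    """Mean and covariance of the untruncated pseudo-posterior, Eq. (19)."""
    D_inv = np.linalg.inv(np.atleast_2d(D))
    C_inv = np.linalg.inv(np.atleast_2d(C_beta))
    cov = np.linalg.inv(D_inv + C_inv)
    mean = cov @ (D_inv @ np.atleast_1d(m) + C_inv @ np.atleast_1d(mu_beta))
    return mean, cov

def beta_hat(m, D, mu_beta, C_beta):
    """Posterior-mode point estimate; negative components are set to zero."""
    mean, _ = pseudo_posterior(m, D, mu_beta, C_beta)
    return np.clip(mean, 0.0, None)

# Nearly flat prior (huge C_beta): the estimate reduces to the LY estimate m.
est_flat = beta_hat(m=[0.5], D=[[0.01]], mu_beta=[1.0], C_beta=[[1e8]])
# Prior as precise as the data: the estimate is pulled halfway toward mu_beta.
est_info = beta_hat(m=[0.5], D=[[0.01]], mu_beta=[1.0], C_beta=[[0.01]])
```

The two limiting cases mirror the discussion in the text: a diffuse prior recovers the LY estimator, while the clipping guarantees a nonnegative estimate for every sample.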
Secondly, we will see that our formulation enables point estimation of $\beta$ in closed form, a feature shared by the LY development but not by Cox's. As for the choice of the Bayesian point estimator of $\beta$, we opt in this article for the mode of the posterior distribution. This is,
$$\hat\beta^B = (D^{-1} + \mathbf{C}_\beta^{-1})^{-1}(D^{-1}m + \mathbf{C}_\beta^{-1}\mu_\beta) \tag{20}$$
whenever positive, or zero otherwise. It is noteworthy that under a noninformative prior, i.e., when $\|\mathbf{C}_\beta\| \to \infty$, the point estimator $\hat\beta^B$ becomes asymptotically equivalent to $m = V_2^{-1}V_1$, which coincides, in the positive case, with the LY classical estimator. In contrast with the LY estimator, however, ours has the advantage that it can never turn out negative, for any sample. Notice also that an alternative Bayesian estimator, based on the expectation of the pseudo-posterior distribution, would have a serious problem: it would diverge to infinity as $\|\mathbf{C}_\beta\| \to \infty$, due to the effect of truncating the prior multivariate normal distribution to the positive orthant. For the same reason, we propose not to estimate the variance of $\hat\beta^B$ by the variance of the (truncated) pseudo-posterior distribution, but with a different method, which we explain as follows.
Estimation of the variance. We proceed in the following steps:
1. Estimate $\hat\beta^B$ according to Equation (20) whenever positive, or zero otherwise.
2. To estimate the precision of each component $\hat\beta^B_k$, construct the highest posterior density (HPD) credible interval
of $(1-\alpha)$ coverage. The interval is of the form $[b_l, b_u]$, where $0 \le b_l < b_u < \infty$. It is noteworthy that in Lin and Ying's formulation, since the confidence intervals are based on the approximate normal distribution, they take the form $\hat\beta^{LY} \pm z_{1-\alpha/2}\,\hat\sigma(\hat\beta^{LY})$, which allows for a possibly negative lower endpoint. Here, for the sake of comparability of the standard deviation estimators, we propose
$$\hat\sigma(\tilde\beta^B) := \frac{b_u - b_l}{2\,z_{1-\alpha/2}}. \tag{21}$$
Notice also that, because we work with truncated normal distributions, the point estimates need not be at the center of the intervals; that will happen only when $b_l > 0$.
This method for estimating $\hat\sigma(\tilde\beta^B)$ also has an important consequence for testing the significance of covariates. Based on the duality between confidence intervals and testing, we will consider that when zero belongs to an interval, it is an indication of nonsignificance of the particular covariate. In this regard, constructing credible intervals of highest pseudo-posterior density is essential for testing purposes as well as for model selection.
3.2 Estimation of the Baseline Hazard
We recall first the so-called Gamma Process. Let $G(a, b)$ denote the gamma distribution with shape parameter $a > 0$ and scale parameter $b > 0$. Let $\alpha(t)$ be an increasing, left-continuous function on $t \ge 0$ such that $\alpha(0) = 0$. Further, let $Z(t)$ be a stochastic process on $t \ge 0$ with the properties: (i) $Z(0) = 0$; (ii) $Z(t)$ has independent increments over disjoint intervals; and (iii) for any $t > s$, $Z(t) - Z(s) \sim G\big(c(\alpha(t) - \alpha(s)), c\big)$. Then the process $\{Z(t) : t > 0\}$ is called a Gamma Process and is denoted by $Z(t) \sim GP(c\,\alpha(t), c)$. We opt in this manuscript to place a Gamma Process prior on the cumulative baseline hazard function. With that goal, we first need to find an alternative expression for the likelihood of the AHM, expressed in terms of the cumulative hazard increments $\Lambda^+_0 = (\Lambda^+_{01}, \Lambda^+_{02}, \ldots, \Lambda^+_{0m} := \infty)$ associated with each interval of the grid.
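Under this prior, the increments over the grid are just independent Gamma draws, so the prior is easy to simulate. The sketch below uses our own naming, and it reads the parameter $c$ as a rate, consistent with the $e^{-c\,a^+_{0i}}$ factor in density (14); the prior mean of each increment is then exactly $\alpha_j = \alpha(s_j)-\alpha(s_{j-1})$:

```python
import numpy as np

def sample_prior_increments(alpha_vals, c, size=1, seed=0):
    """Independent draws of Lambda^+_{0j} ~ G(shape=c*alpha_j, rate=c), one column
    per grid interval, where alpha_j = alpha(s_j) - alpha(s_{j-1})."""
    rng = np.random.default_rng(seed)
    eta = np.diff(np.asarray(alpha_vals, dtype=float))  # the increments alpha_j
    # numpy parameterizes gamma by shape/scale, so rate c becomes scale 1/c.
    return rng.gamma(shape=c * eta, scale=1.0 / c, size=(size, len(eta)))

# Illustrative grid values of alpha at s_0, ..., s_m (taken from the simulation Section).
draws = sample_prior_increments([0.0, 5.0, 6.0, 6.3, 6.31], c=10.0, size=5000)
```

Larger $c$ shrinks the prior variance $\alpha_j/c$ of each increment toward zero while keeping the mean at $\alpha_j$, which is the behavior exploited in the Remarks of Section 3.2.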
Since there is no sense in trying to estimate the cumulative increment over the last (unbounded) interval of the grid, we truncate it at a fixed time $t_F$ (i.e., $s_m = t_F$); this guarantees that $\Lambda^+_{0m} < \infty$. This does not represent a limitation in practice, because every empirical study necessarily has a finite follow-up time. Considering expression (5), which introduced the $A_j$'s, we deduce $\Lambda^+_{0j} = A_j(s_j - s_{j-1})$. Then, replacing for each $j$, we obtain
$$f_{\Lambda^+_{0j}\mid X,\beta}(a^+_{0j}) \propto \sum_{k=0}^{N_j} d_k\Big(\frac{a^+_{0j}}{s_j - s_{j-1}}\Big)^k\exp\Big\{-\Big(\frac{a^+_{0j}}{s_j - s_{j-1}}\Big)\Big[\sum_{k_j=1}^{n_j}(t_{k_j} - s_{j-1}) + m_j(s_j - s_{j-1})\Big]\Big\}. \tag{22}$$
For a fixed, increasing, left-continuous function $\alpha$ and a scale parameter $c$, the prior for each $\Lambda^+_{0j}$ is $G\big(c(\alpha(s_j) - \alpha(s_{j-1})), c\big)$. By construction, different grid intervals are disjoint and, following the Gamma process structure, independent. Accordingly, the prior for $\Lambda^+_0$ is
$$\pi(\Lambda^+_0) = \prod_{j=1}^m f_{G(c(\alpha(s_j) - \alpha(s_{j-1})),\, c)}(a^+_{0j}) = \prod_{j=1}^m f_{G(c\,\alpha_j,\, c)}(a^+_{0j}),$$
where $\alpha_j = \alpha(s_j) - \alpha(s_{j-1})$, and $f_{G(a,b)}(t)$ denotes the density function of a Gamma distribution with parameters $a$ and $b$ evaluated at $t$. Using that factor, we obtain the posterior
$$f_{\Lambda^+_0\mid X,\beta}(a^+_0) \propto \prod_{j=1}^m\bigg[\sum_{k=0}^{N_j} d_k\Big(\frac{a^+_{0j}}{s_j - s_{j-1}}\Big)^k e^{-\big(\frac{a^+_{0j}}{s_j - s_{j-1}}\big)\big[\sum_{k_j=1}^{n_j}(t_{k_j} - s_{j-1}) + m_j(s_j - s_{j-1})\big]}\bigg]\prod_{j=1}^m (a^+_{0j})^{c\alpha_j - 1}\,e^{-c\,a^+_{0j}}\,\frac{c^{c\alpha_j}}{\Gamma(c\,\alpha_j)}.$$
In
order now to simplify notation, let us call
$$d^{(j)}_k := \frac{d_k}{(s_j - s_{j-1})^k \times \Gamma(c\,\alpha_j)}, \qquad c_j := \frac{\sum_{k_j=1}^{n_j}(t_{k_j} - s_{j-1}) + m_j(s_j - s_{j-1})}{s_j - s_{j-1}} + c.$$
With that notation, we express the posterior as
$$f_{\Lambda^+_{0j}\mid X,\beta}(a^+_{0j}) = \frac{\sum_{k=0}^{N_j} d^{(j)}_k (a^+_{0j})^{k + c\alpha_j - 1}\,e^{-a^+_{0j} c_j}}{\int_0^\infty \sum_{k=0}^{N_j} d^{(j)}_k (a^+_{0j})^{k + c\alpha_j - 1}\,e^{-a^+_{0j} c_j}\,da^+_{0j}} = \frac{\sum_{k=0}^{N_j}\frac{d^{(j)}_k}{c_j^k}\,\Gamma(k + c\,\alpha_j)\,f_{G(k + c\alpha_j,\, c_j)}(a^+_{0j})}{\sum_{k=0}^{N_j}\frac{d^{(j)}_k}{c_j^k}\,\Gamma(k + c\,\alpha_j)}.$$
Therefore, the resulting posterior distribution is proportional to a mixture of Gamma distributions, which enables point estimation in closed form under squared error loss. This is one of the most important advantages of our choice of prior specification in this article. Naturally, other choices of prior distributions for the baseline hazard are possible, for example by placing a prior on the plain hazard function itself, or on the baseline survival function. Those options will be presented in a separate article, a first version of which can be obtained from the authors upon request. In order now to provide formulae for the point estimators and their variances, let us call $e^{(j)}_k := \big(d^{(j)}_k / c_j^k\big)\,\Gamma(k + c\,\alpha_j)$, so that
$$\hat a^{+B}_{0j} = E\big[\Lambda^+_{0j}\mid X,\beta\big] = \int_0^\infty a^+_0\,\frac{\sum_{k=0}^{N_j} e^{(j)}_k\,f_{G(k + c\alpha_j,\, c_j)}(a^+_0)}{\sum_{k=0}^{N_j} e^{(j)}_k}\,da^+_0 = \frac{\sum_{k=0}^{N_j} e^{(j)}_k\,(k + c\,\alpha_j)}{\sum_{k=0}^{N_j} e^{(j)}_k\,c_j}.$$
To estimate their variances, we take
$$V\big[\Lambda^+_{0j}\mid X,\beta\big] = \bigg(\int_0^\infty (a^+_0)^2\,\frac{\sum_{k=0}^{N_j} e^{(j)}_k\,f_{G(k + c\alpha_j,\, c_j)}(a^+_0)}{\sum_{k=0}^{N_j} e^{(j)}_k}\,da^+_0\bigg) - E\big[\Lambda^+_{0j}\mid X,\beta\big]^2 = \frac{\sum_{k=0}^{N_j} e^{(j)}_k\big[(k + c\,\alpha_j)\,c_j^{-2} + \big((k + c\,\alpha_j)\,c_j^{-1}\big)^2\big]}{\sum_{k=0}^{N_j} e^{(j)}_k} - \big(\hat a^{+B}_{0j}\big)^2.$$
As with our proposed estimator for $\beta$, it is worth noting the properties of $\hat a^{+B}_{0j}$ as we relax or tighten the prior.
Remark 1: When $c$ diverges to $\infty$, each $\hat a^{+B}_{0j}$ converges to $\alpha_j$ (i.e., its prior mean). To see that, we express
$$\hat a^{+B}_{0j} = \frac{\sum_{k=0}^{N_j} e^{(j)}_k (k + c\,\alpha_j)}{\sum_{k=0}^{N_j} e^{(j)}_k\,c_j} = \frac{\sum_{k=0}^{N_j} e^{(j)}_k\,k}{\sum_{k=0}^{N_j} e^{(j)}_k\,c_j} + \frac{\sum_{k=0}^{N_j} e^{(j)}_k\,c\,\alpha_j}{\sum_{k=0}^{N_j} e^{(j)}_k\,c_j} = \frac{\sum_{k=1}^{N_j} e^{(j)}_k\,k}{e^{(j)}_0 c_j + \sum_{k=1}^{N_j} e^{(j)}_k\,c_j} + \frac{c\,\alpha_j}{c_j}.$$
Since $e^{(j)}_k > 0$ for all $k \in \{0, 1, \ldots, N_j\}$,
$$0 \le \frac{\sum_{k=1}^{N_j} e^{(j)}_k\,k}{e^{(j)}_0 c_j + \sum_{k=1}^{N_j} e^{(j)}_k\,c_j} \le \frac{\sum_{k=1}^{N_j} e^{(j)}_k\,N_j}{\sum_{k=1}^{N_j} e^{(j)}_k\,c_j} = \frac{N_j}{c_j}.$$
Since $N_j$ is bounded in probability, $\hat a^{+B}_{0j}$ converges in probability to $\alpha_j$ as $c \to \infty$.
Remark 2: In the opposite case to the first Remark, i.e., when $c$ converges to zero, the Bayesian estimator $\hat a^{+B}_{0j}$ does not depend on the prior function $\alpha_j$. This is clear, since every time $\alpha_j$ appears in the expression it is multiplied by $c$.
It is worth mentioning again that our choice of loss function is different for the Euclidean coefficients than for the baseline hazard function: while we used the mode of the posterior distribution for estimation of the Euclidean parameter $\beta$, we have opted for the mean of the posterior distribution for estimation of the baseline hazard. This is a further reason why we call our method hybrid. Again, we notice that one of the main advantages of this approach is that it enables estimators in closed form.
4 Simulation
In this Section, we present a small simulation experiment according to the following choices: (i) hazard function $\lambda(t,\beta) = 1 + 0.5z$, i.e., $\lambda_0(t) = 1$ on $\mathbb{R}^+$, and
a single covariate with regression parameter $\beta = 0.5$; (ii) censoring variable $C$ with an Exponential distribution with mean 2; (iii) covariate $Z \sim \chi^2_1$; (iv) sample sizes $n = 100$ or $500$; and (v) $R = 1000$ replicates.
Simulations for the regression parameter $\beta$. The results are presented in Tables 1 and 2. We observe that in all cases $\hat\beta^B \ge 0$ and, further, if either $\|\mathbf{C}_\beta\| \to \infty$ (i.e., the prior becomes less informative) or $n \to \infty$ (the sample size increases), the point estimator of $\beta$ and its standard deviation converge to those of Lin and Ying, i.e., $\hat\beta^B \approx \hat\beta^{LY}$ and $\hat\sigma^B(\hat\beta^B) \approx \hat\sigma^{LY}(\hat\beta^{LY})$. The simulation tables also show the effects of varying the position parameter $\mu_\beta$ and the scale parameter $w$ of the $\beta$-prior. Naturally, the estimates of $\beta$ get closer to the true value (0.5) as the prior position parameter $\mu_\beta$ gets nearer to the truth.

mu_b \ w     0.1        0.5        1          2          5          10         1000
0            0.3241     0.4813     0.5155     0.5352     0.5479     0.5524     0.5569
             (0.1625)   (0.2122)   (0.2211)   (0.2259)   (0.2291)   (0.2301)   (0.2312)
0.25         0.4157     0.5093     0.5306     0.5430     0.5512     0.5540     0.5569
             (0.1784)   (0.2154)   (0.2227)   (0.2268)   (0.2294)   (0.2303)   (0.2312)
0.5          0.5073     0.5372     0.5457     0.5509     0.5544     0.5556     0.5569
             (0.1851)   (0.2182)   (0.2242)   (0.2275)   (0.2297)   (0.2304)   (0.2312)
0.75         0.5989     0.5652     0.5608     0.5587     0.5576     0.5573     0.5569
             (0.1872)   (0.2203)   (0.2256)   (0.2283)   (0.2300)   (0.2306)   (0.2312)
1            0.6905     0.5932     0.5759     0.5666     0.5608     0.5589     0.5569
             (0.1879)   (0.2221)   (0.2268)   (0.2290)   (0.2303)   (0.2307)   (0.2312)
2            1.0570     0.7051     0.6362     0.5981     0.5737     0.5654     0.5570
             (0.1884)   (0.2262)   (0.2304)   (0.2314)   (0.2315)   (0.2313)   (0.2312)
10           3.9886     1.6002     1.1189     0.8496     0.6770     0.6175     0.5575
             (0.1884)   (0.2290)   (0.2365)   (0.2394)   (0.2378)   (0.2354)   (0.2312)

Table 1: Simulations for the regression parameter $\beta$: $n = 100$, LY = 0.5569 (0.2457). Rows are indexed by $\mu_\beta$, columns by $w$; standard deviations in parentheses.

Simulation for the baseline hazard parameters.
In this second part of the simulation experiment, we choose a grid according to the approximate quantiles 0.2, 0.4, 0.6, and 0.8 of the observed event times. The decreasing number of "at risk" times along the grid makes the estimates in earlier intervals of the time axis more accurate, as is seen in the estimates of the standard deviations. According to the chosen grid, the true values are Λ⁺_0 = (0.125, 0.3, 0.6, 1.15). Also, for this part of the simulation, the prior selected is a left-continuous function Ξ±(t), which takes the values Ξ±(0) = 0, Ξ±(0.125) = 5, Ξ±(0.3) = 6, Ξ±(0.6) = 6.3, and Ξ±(1.15) = 6.31. Thus, the corresponding true values of the vector are (Ξ±_1, Ξ±_2, Ξ±_3, Ξ±_4) = (5, 1, 0.3, 0.01). The prior weight is regulated by the hyperparameter c. We present the simulation with c = 10, c = 1, and c = 0.1. Also, for this simulation, we have previously estimated Λ†Ξ² with the hybrid Bayesian estimator, with prior parameters ΞΌ_Ξ² = 0.5 and Οƒ_Ξ² = 10000.

Table 2: Simulations for the regression parameter Ξ²: n = 500, LY = 0.5106 (0.1036). Standard deviations in parentheses.

 ΞΌ_Ξ² \ w    0.1       0.5       1         2         5         10        1000
 0        0.4585    0.4992    0.5048    0.5077    0.5094    0.5100    0.5106
         (0.0982)  (0.1024)  (0.1030)  (0.1033)  (0.1035)  (0.1035)  (0.1036)
 0.25     0.4829    0.5045    0.5075    0.5090    0.5100    0.5103    0.5106
         (0.0982)  (0.1024)  (0.1030)  (0.1033)  (0.1035)  (0.1035)  (0.1036)
 0.5      0.5074    0.5099    0.5102    0.5104    0.5105    0.5106    0.5106
         (0.0982)  (0.1024)  (0.1030)  (0.1033)  (0.1035)  (0.1035)  (0.1036)
 0.75     0.5319    0.5152    0.5129    0.5118    0.5111    0.5108    0.5106
         (0.0982)  (0.1024)  (0.1030)  (0.1033)  (0.1035)  (0.1035)  (0.1036)
 1        0.5564    0.5205    0.5156    0.5131    0.5116    0.5111    0.5106
         (0.0982)  (0.1024)  (0.1030)  (0.1033)  (0.1035)  (0.1035)  (0.1036)
 2        0.6543    0.5419    0.5264    0.5186    0.5138    0.5122    0.5106
         (0.0982)  (0.1024)  (0.1030)  (0.1033)  (0.1035)  (0.1035)  (0.1036)
 10       1.4375    0.7128    0.6128    0.5620    0.5312    0.5209    0.5107
         (0.0982)  (0.1024)  (0.1030)  (0.1033)  (0.1035)  (0.1035)  (0.1036)

The results are presented in Tables (3) and (4). As expected, we see that (i) the value of c has an impact on the estimator, giving more or less weight to the prior; (ii) the standard deviation increases as the intervals shift to the right; and (iii) as the sample size increases, the prior loses impact on the estimates.

5 Analysis of a real dataset

In this last Section we compute the hybrid Bayesian estimates of Ξ² from the Welsh nickel miners dataset and compare them with those of Lin and Ying (1994). The "Welsh nickel refinery" dataset was originally introduced by Doll et al. (1970), and subsequently analyzed by Breslow & Day (1987), Lin & Ying (1994) and Γlvarez & Ferrario (2016), among others. It contains information about 679 workers from a nickel refinery in the south of Wales, who were presumably exposed to cancer-triggering substances.
Data were collected from payroll registers from 1934 to 1981. The covariates are AFE := age at first employment, YFE := year at first employment, and EXP := exposure level (an index of the degree of contamination). We fit the same model as in Lin & Ying (1994). We observed that the hybrid Bayesian estimate approaches the classical LY estimator as the prior gets flatter.

Table 3: Simulation for the baseline hazard parameters: n = 100, Λ†Ξ² = 0.5569. Standard deviations in parentheses.

 c     Λ⁺_01 = 0.125   Λ⁺_02 = 0.175   Λ⁺_03 = 0.3   Λ⁺_04 = 0.55
 10    0.6527          0.2967          0.2991        0.3354
      (0.0824)        (0.0657)        (0.0825)      (0.1127)
 1     0.1880          0.1845          0.2967        0.5167
      (0.0495)        (0.0587)        (0.0928)      (0.1762)
 0.1   0.1265          0.1688          0.2951        0.5384
      (0.0431)        (0.0577)        (0.0943)      (0.1850)

Table 4: Simulation for the baseline hazard parameters: n = 500, Λ†Ξ² = 0.5106. Standard deviations in parentheses.

 c     Λ⁺_01 = 0.125   Λ⁺_02 = 0.175   Λ⁺_03 = 0.3   Λ⁺_04 = 0.55
 10    0.2510          0.2041          0.2991        0.4919
      (0.0247)        (0.0269)        (0.0406)      (0.0732)
 1     0.1384          0.1771          0.2990        0.5437
      (0.0198)        (0.0260)        (0.0416)      (0.0799)
 0.1   0.1257          0.1742          0.2990        0.5494
      (0.0192)        (0.0259)        (0.0417)      (0.0807)

Table 5: Hybrid Bayesian and LY estimates for the Welsh nickel refinery data (results are multiplied by 10Β³).

                    Λ†Ξ²_LY    ΞΌ_Ξ² = 0, Οƒ_Ξ² = 0.01   ΞΌ_Ξ² = 0, Οƒ_Ξ² = 1000
 log(AFE-10)        2.5126   2.4183                2.5126
 (YFE-1915)/10      0.2179   0.1868                0.2179
 -(YFE-1915)Β²/100   2.6005   2.3829                2.6005
 log(EXP+1)         1.7172   1.7081                1.7172

The results are presented in Table (5), where we visualize the impact of the choice of the hyperparameters on the final estimates. These results show the versatility of our proposed hybrid Bayesian method, which can be calibrated to give the desired weight to the prior information. As discussed, this makes the hybrid Bayesian method a potentially valuable tool for the statistical practitioner in data analysis.

6 Conclusions

In this manuscript, we presented an alternative way to express the likelihood function of the AHM as a mixture of Gamma distributions. This enabled estimation methods in two steps: (i) estimation of the Euclidean coefficients in closed form; (ii) estimation of the baseline hazard function parameters. Our Bayesian proposal is based on constant regression coefficients, as in the work of LY. A flexible extension of these methods is to consider the general Aalen (1980) model, where Ξ² = Ξ²(t), as in Silva & Amaral-Turkman (2005), who additionally included frailty terms in their approach.

Acknowledgements

Maximiliano L. Riddick wishes to thank CONICET for his doctoral fellowship, and the Departamento de MatemΓ‘tica, Facultad de Ciencias Exactas, Universidad Nacional de La Plata. Both authors thank Universidad Nacional de La Plata (PPID UNLP I231).

References

Aalen, O. (1980). A Model for Nonparametric Regression Analysis of Counting Processes. Mathematical Statistics and Probability Theory, 1-25.

Γlvarez, E. E. and Ferrario, J. (2016). Robust estimation in the additive hazards model. Communications in Statistics - Theory and Methods 45(4), 906-921.

Γlvarez, E. E. and Riddick, M. L. (2019). Review of Bayesian Analysis in Additive Hazards Model. Asian Journal of Probability and Statistics, 1-14.

Γlvarez, E. E. and Ferrario, J. (2012). RevisiΓ³n de la EstimaciΓ³n Robusta en Modelos SemiparamΓ©tricos de Supervivencia.
IASI (Journal of the Interamerican Statistical Institute) 64(182-183), 85-106.

Andersen, P. K., Borgan, O., Gill, R. D. and Keiding, N. (1993). Statistical Models Based on Counting Processes. Springer.

Breslow, N. E. and Day, N. E. (1987). Statistical Methods in Cancer Research: The Design and Analysis of Cohort Studies. International Agency for Research on Cancer.

Cox, D. R. (1972). Regression Models and Life-Tables. Journal of the Royal Statistical Society, Series B (Methodological) 34(2), 187-220.

Chernoukhov, A. (2013). Bayesian Spatial Additive Hazard Model. Electronic Theses and Dissertations, University of Windsor.

Chernoukhov, A., Hussein, A., Nkurunziza, S. and Bandyopadhyay, D. (2018). Bayesian inference in time-varying additive hazards models with applications to disease mapping. Environmetrics 29(5-6), e2478.

Doll, R., Morgan, L. G. and Speizer, F. E. (1970). Cancers of the lung and nasal sinuses in nickel workers. British Journal of Cancer 24(4), 623-632.

Dunson, D. B. and Herring, A. H. (2005). Bayesian model selection and averaging in additive and proportional hazards models. Lifetime Data Analysis 11, 213-232.

Ibrahim, J. G., Chen, M. and Sinha, D. (2001). Bayesian Survival Analysis. Springer Science & Business Media.

Kalbfleisch, J. D. and Prentice, R. L. (1990). The Statistical Analysis of Failure Time Data. Hoboken: John Wiley & Sons.

Klein, J. P. and Moeschberger, M. L. (2006). Survival Analysis: Techniques for Censored and Truncated Data. Springer Science & Business Media.

Lawless, J. F. (2003). Event history analysis and longitudinal surveys. Analysis of Survey Data, 221-243.

Lin, D. Y. and Ying, Z. (1994). Semiparametric analysis of the additive risk model. Biometrika 81(1), 61-71.

Riddick, M. L. (2020). EstimaciΓ³n Bayesiana en el modelo de riesgos aditivos. Doctoral dissertation, Universidad Nacional de La Plata, Argentina (https://doi.org/10.35537/10915/138368).

Silva, G. L. and Amaral-Turkman, M. A. (2005). Bayesian analysis of an additive survival model with frailty. Communications in Statistics - Theory and Methods 33(10), 2517-2533.

Turkman, M. A. A., Paulino, C. D. and MΓΌller, P. (2019). Computational Bayesian Statistics: An Introduction (Vol. 11). Cambridge University Press.

Zhang, J. and Lawson, A. B. (2011). Bayesian parametric accelerated failure time spatial model and its application to prostate cancer. Journal of Applied Statistics 38(3), 591-603.
arXiv:2505.20708v1 [econ.TH] 27 May 2025

Berk-Nash Rationalizability*

Ignacio Esponda (UC Santa Barbara)    Demian Pouzo (UC Berkeley)

May 28, 2025

Abstract

We introduce Berk-Nash rationalizability, a new solution concept for misspecified learning environments. It parallels rationalizability in games and captures all actions that are optimal given beliefs formed using the model that best fits the data in the agent's misspecified model class. Our main result shows that, with probability one, every limit actionβ€”any action played or approached infinitely oftenβ€”is Berk-Nash rationalizable. This holds regardless of whether behavior converges. We apply the concept to known examples and identify classes of environments where it is easily characterized. The framework provides a general tool for bounding long-run behavior without assuming convergence to a Berk-Nash equilibrium.

*Esponda: iesponda@ucsb.edu. Pouzo: dpouzo@berkeley.edu

Contents

1 Introduction
2 Model and solution concept
3 Justification of solution concept
4 Characterization of rationalizability
A Appendix
References

1 Introduction

A growing literature studies learning in environments where agents operate under misspecified modelsβ€”that is, models that exclude the true data-generating process. This framework captures both systematic biases, such as overconfidence, selection neglect, and mistaken causal reasoning, and the idea that agents may be unable to represent the world accurately due to complexity or informational constraints.[1]

Much of the existing analysis of misspecified learning focuses on long-run behavior characterized by Berk-Nash equilibrium (Esponda and Pouzo (2016); henceforth EP), where agents best respond to beliefs that minimize the Kullback-Leibler (KL) divergence relative to the data they generate. EP show that if behavior converges, it must converge to such an equilibrium.
More recent work has studied whether and when convergence occurs, using techniques from stochastic approximation and martingale theory (e.g., Fudenberg, Lanzani and Strack (2021); Esponda, Pouzo and Yamamoto (2021); Murooka and Yamamoto (2021); Frick, Iijima and Ishii (2023)).[2]

This paper takes a complementary approach. Rather than impose conditions that guarantee convergence, we ask what can be said about asymptotic behavior even when convergence fails. To that end, we introduce a solution conceptβ€”Berk-Nash rationalizabilityβ€”that is inspired by rationalizability in game theory (Bernheim (1984); Pearce (1984)), but tailored to the structure of misspecified learning problems. Our main result shows that, with probability one, every limit action[3] is Berk-Nash rationalizable. Equivalently, if an action is not rationalizable, then there exists an open neighborhood around it that is visited only finitely many times (in the finite case, the action is played at most finitely often).

The result is useful for two reasons. First, it holds without requiring convergence of actions or beliefs, and avoids the need to verify technical conditions that often appear in convergence analysesβ€”such as characterizing the asymptotic behavior of solutions to differential inclusions. Also, some existing results establish convergence to equilibrium only under specific initial conditions or with positive probability, and may not describe what happens when actions do not converge.

Second, when the rationalizable set is a singleton, our result pins down long-run behavior: the agent almost surely converges to that action. More generally, when the rationalizable set is not a point, it provides bounds on the actions the agent may take in the limit (though in such cases, existing convergence results may still help identify which actions are selected within the rationalizable set).

The definition of Berk-Nash rationalizability is based on a best response operator that maps sets of actions into new sets. Given a candidate set of actions, the operator considers all probability distributions over that set, identifies the models that minimize Kullback-Leibler (KL) divergence relative to the outcome distribution those actions generate, and then constructs beliefs supported on those models. It then maps the original set to the set of actions that are optimal under such beliefs. A set is closed under this operator if it includes all the actions it justifies in this way. An action is Berk-Nash rationalizable if it belongs to some closed set. This operator admits a standard characterization: starting from the full action space, we apply the operator iteratively.

[1] Examples include Jehiel (2005), Eyster and Rabin (2005), Esponda (2008), Schwartzstein (2014), Bohren (2016), Spiegler (2016), Heidhues, KΕ‘szegi and Strack (2018), Eliaz and Spiegler (2020), Fudenberg, Lanzani and Strack (2024), Frick, Iijima and Ishii (2024), and He and Libgober (2025), among many others.

[2] Several papers examine the convergence of misspecified learning dynamics in single-agent settings that are either tailored to specific environments or rely on parametric assumptions such as Gaussian signals or binary states. Examples include Nyarko (1991), Fudenberg, Romanyuk and Strack (2017), Heidhues, KΕ‘szegi and Strack (2018), Heidhues, KΕ‘szegi and Strack (2021), Bohren and Hauser (2021), and He (2022), among many others.

[3] Limit actions are defined in the natural way: in discrete action spaces, they are actions played infinitely often; in continuous spaces, they are actions that are approached arbitrarily closely and infinitely often.
The largest closed set is the intersection of this sequence and coincides with the set of rationalizable actions.

We illustrate this construction and show how it can be applied in well-known environments. In particular, we revisit an example of Heidhues, KΕ‘szegi and Strack (2018), in which overconfidence distorts learning. In their overconfident case, convergence to a unique equilibrium is established under assumptions that guarantee uniqueness. We relax these assumptions and show that the limit set of actions lies between the smallest and largest equilibrium. In the underconfident case, where their convergence results do not apply, we show that the rationalizable set is still informativeβ€”it may be a singleton or a well-structured intervalβ€”thus delivering bounds on long-run behavior.

We conclude by identifying two general classes of environmentsβ€”supermodular and single-peakedβ€”in which the rationalizable set can be characterized in a tractable way.

Overall, the paper contributes a method for bounding asymptotic behavior under misspecified learning, without relying on convergence, and shifts focus from equilibrium outcomes to the broader class of rationalizable actions. For concreteness, we focus on single-agent decision problems, though the solution concept extends naturally to other environments such as games (EP) and Markov decision processes (Esponda and Pouzo 2021).

2 Model and solution concept

Primitives. We consider a single-agent decision problem with an action
space A, a space of observable consequences Y, and a consequence function that maps actions to probability distributions over consequences, denoted by Q : A β†’ Ξ”Y. The agent does not necessarily know Q but possesses a parametric model of it, represented as Q_ΞΈ : A β†’ Ξ”Y, where the model parameter ΞΈ belongs to a parameter space Θ. The agent's behavior is described by an action correspondence F : Ξ”Ξ˜ β‡’ A that is non-empty valued and upper hemi-continuous. This correspondence specifies the set of actions the agent might choose given a belief over the set of models Θ. For concreteness, we consider the standard case in which the agent chooses actions that maximize a payoff function Ο€ : A Γ— Y β†’ R, i.e.,

F(ΞΌ) = argmax_{a ∈ A} ∫_Θ U(a, ΞΈ) ΞΌ(dΞΈ),

where U(a, ΞΈ) := ∫_Y Ο€(a, y) Q_ΞΈ(dy | a). We maintain the following assumptions throughout the paper.[4]

Assumption. (i) A is a compact subset and Y is a Borel subset of Euclidean space; (ii) for all a ∈ A, there exists a density q(Β· | a) in L1(Y, R, Ξ½) such that ∫_{Y'} q(y | a) Ξ½(dy) = Q(Y' | a) for any Borel Y' βŠ† Y; (iii) for all a ∈ A and ΞΈ ∈ Θ, there exists a density q_ΞΈ(Β· | a) ∈ L1(Y, R, Ξ½) such that ∫_{Y'} q_ΞΈ(y | a) Ξ½(dy) = Q_ΞΈ(Y' | a) for any Borel Y' βŠ† Y; (iv) Continuity: Θ is a compact subset of Euclidean space and, for all a ∈ A, (ΞΈ, a) ↦ q_ΞΈ(Β· | a) and a ↦ q(Β· | a) are continuous almost surely with respect to Q(Β· | a); (v) Uniform integrability: there exists a measurable function G : Y β†’ R+ such that, for every a ∈ A, G ∈ L2(Y, R, Q(Β· | a)), and for every ΞΈ ∈ Θ, |log(q(Β· | a)/q_ΞΈ(Β· | a))| ≀ G(Β·) almost surely with respect to Q(Β· | a); (vi) Ο€ : A Γ— Y β†’ R is jointly continuous.

Assumptions (i)-(iii) allow A and Y to be continuous or discrete, with density functions defined accordingly. The agent employs a parametric model, with a compact parameter space and continuous densities. Assumption (v) is instrumental in establishing a uniform law of large numbers.
Moreover, this condition guarantees that, for every ΞΈ and a, the support of Q_ΞΈ(Β· | a) contains the support of Q(Β· | a); that is, every observation can be generated by the agent's model. Finally, assumption (vi), along with the preceding conditions, ensures that F is upper hemi-continuous, so that behavior varies continuously with beliefs.

[4] As usual, Lp(Y, R, Ξ½) denotes the space of all functions f : Y β†’ R such that ∫ |f(y)|^p Ξ½(dy) < ∞. All Euclidean spaces, including A and Θ, are endowed with their Borel Οƒ-algebra. The spaces Ξ”A and Ξ”Ξ˜ denote the sets of Borel probability measures on A and Θ, respectively, endowed with the topology of weak convergence.

Throughout the paper, we use the following example from Heidhues, KΕ‘szegi and Strack (2018) to illustrate definitions, results, and their application.

Example (Returns to effort). An agent takes effort a ∈ A = [0, ∞), leading to an observable outcome given by

y = (Ξ±* + a)ΞΈ* + Ξ΅,

where Ξ±* β‰₯ 0 represents the agent's true ability, ΞΈ* > 0 is the true return to effort, and Ξ΅ ∼ N(0, 1) is Gaussian noise with known mean and variance (normalized, without loss of generality, to 1). The agent is uncertain about the return to effort and learns over ΞΈ ∈ Θ = [θ̲, ΞΈΜ„] βŠ‚ [0, ∞), but holds a fixed belief about ability: they assume the outcome is generated by

y = (Ξ± + a)ΞΈ + Ξ΅,

where Ξ± ∈ (0, ∞) is perceived ability and may differ from Ξ±*. The agent is overconfident if Ξ± > Ξ±* and underconfident if Ξ± < Ξ±*. For each ΞΈ ∈ Θ, the agent's model Q_ΞΈ(Β· | a) corresponds to a normal distribution with mean (Ξ± + a)ΞΈ and known variance, and the true
https://arxiv.org/abs/2505.20708v1
distribution Q(Β· | a) corresponds to a normal distribution with mean (Ξ±* + a)ΞΈ* and the same variance. These distributions admit densities with respect to the Lebesgue measure that vary continuously in a and ΞΈ, and they satisfy the uniform integrability condition with a quadratic envelope. The agent's payoff function is Ο€(a, y) = y βˆ’ c(a), where c : [0, ∞) β†’ R is increasing, strictly convex, and continuously differentiable, with c(0) = cβ€²(0) = 0 and lim_{aβ†’βˆž} cβ€²(a) = ∞. Since the marginal return to effort is bounded above by ΞΈΜ„ and the marginal cost diverges, it follows that the agent's optimal actions lie in a compact subset [0, aΜ„], where aΜ„ = (cβ€²)⁻¹(ΞΈΜ„). Thus, the example satisfies all parts of the Assumption above. β—†

Kullback-Leibler divergence. The KL divergence is a function K : Θ Γ— A β†’ R, defined for any parameter value ΞΈ ∈ Θ and action a ∈ A as

K(ΞΈ, a) := ∫_Y ln( q(y | a) / q_ΞΈ(y | a) ) q(y | a) Ξ½(dy).    (1)

Furthermore, for any Οƒ ∈ Ξ”A, we define

Θ_m(Οƒ) := argmin_{ΞΈ ∈ Θ} ∫ K(ΞΈ, a) Οƒ(da).

The KL divergence measures the "distance" between the true model Q and the parametric model Q_ΞΈ. The set Θ_m(Οƒ) consists of the parameter values ΞΈ ∈ Θ whose associated model Q_ΞΈ provides the best fit to the true model, in the sense of minimizing the expected Kullback-Leibler divergence given data generated by Q when actions are drawn according to Οƒ.

By continuity and uniform integrability (assumptions (iv) and (v)), K is jointly continuous. Therefore, Θ_m(Β·) is upper hemi-continuous, nonempty-, and compact-valued.

Example (continued). The KL divergence is

K(ΞΈ, a) = (1/2) ((Ξ± + a)ΞΈ βˆ’ (Ξ±* + a)ΞΈ*)Β².

There is a unique minimizer of the expected KL divergence for any distribution Οƒ ∈ Ξ”A, given by

ΞΈ_m(Οƒ) = ΞΈ* Β· [∫(Ξ± + a)(Ξ±* + a) Οƒ(da) / ∫(Ξ± + a)Β² Οƒ(da)] = ΞΈ* + ΞΈ* Β· (Ξ±* βˆ’ Ξ±) [∫(Ξ± + a) Οƒ(da) / ∫(Ξ± + a)Β² Οƒ(da)].
This expression shows that the estimated ΞΈ is a biased version of the true return ΞΈ*, with the direction and magnitude of the bias depending on the relationship between perceived and true ability:

β€’ If Ξ± > Ξ±* (overconfidence), then ΞΈ_m(Οƒ) < ΞΈ*: the agent underestimates the return to effort.

β€’ If Ξ± < Ξ±* (underconfidence), then ΞΈ_m(Οƒ) > ΞΈ*: the agent overestimates the return to effort.

In the special case where Οƒ is degenerate at some action a ∈ A, the minimizer simplifies to

ΞΈ_m(Ξ΄_a) = ΞΈ* + ΞΈ* Β· (Ξ±* βˆ’ Ξ±) / (Ξ± + a).

The size of the bias depends on the level of effort a. As a increases, the difference between Ξ± and Ξ±* becomes less important relative to a, so the bias shrinks. In particular, lim_{aβ†’βˆž} ΞΈ_m(Ξ΄_a) = ΞΈ*. This reflects that at high effort levels, ability is swamped by effort in the signal, so the agent's misspecification about ability becomes less consequential for inference about ΞΈ*. β—†

Best response operator. For each Borel set of actions A βŠ† A, we define the best response set as

Ξ“(A) := F( βˆͺ_{Οƒ ∈ Ξ”A} Ξ”Ξ˜_m(Οƒ) ).    (2)

Note that Ξ“ is nonempty-valued and upper hemi-continuous by the previous observation about Θ_m(Β·) and the fact that F is nonempty-valued and upper hemi-continuous.

Equivalently,

Ξ“(A) = {a ∈ A : βˆƒ Οƒ ∈ Ξ”A, ΞΌ ∈ Ξ”Ξ˜_m(Οƒ) such that a ∈ F(ΞΌ)}.

In other words, the set Ξ“(A) consists of all actions that the agent might choose (according to F) when she assigns probability one to the set of models that provide the best fit under some action distribution with support in A. This concept serves as a natural counterpart to the best response correspondence in game theory and aligns with our interpretation of KL divergence. Specifically, if feedback
about consequences arises from actions drawn from the set A, then the models that provide the best fit are those that minimize KL divergence for some action distribution supported on A. Consequently, the agent will follow actions that are optimal for beliefs that assign probability one to these best-fit models.

Example (continued). The optimal action corresponding to a degenerate belief Ξ΄_ΞΈ is (cβ€²)⁻¹(ΞΈ), since the agent chooses a to solve ΞΈ = cβ€²(a). For any Borel set A βŠ† A, the best response operator is

Ξ“(A) = { (cβ€²)⁻¹(ΞΈ_m(Οƒ)) : Οƒ ∈ Ξ”A }.    (3)    β—†

Solution concept. We are now ready to define our solution concept for this environment.

Definition 1. An action a is Berk-Nash rationalizable if there exists a set A βŠ† A such that a ∈ A and A βŠ† Ξ“(A).

A Berk-Nash rationalizable action belongs to a set that is closed under the operator Ξ“. That is, if a set A βŠ† A satisfies A βŠ† Ξ“(A), then every action in A is rationalizable. Each action in such a set is optimal for a belief supported on model parameters that best fit data generated by some distribution over A. Different actions may be justified by different such distributions. A formal justification of this solution concept is provided in the next section.[5]

The solution concept typically used in the literature is that of a Berk-Nash equilibrium (EP). In terms of our best response operator, an action a ∈ A is a Berk-Nash equilibrium if a ∈ Ξ“({a}). An immediate implication is that a Berk-Nash equilibrium action is rationalizable, but the converse is not necessarily true.

[5] This is not equivalent to rationalizability in a game where player 1 chooses actions and player 2 chooses model parameters. That interpretation would correspond to a larger operator F(Ξ” βˆͺ_{Οƒ ∈ Ξ”A} Θ_m(Οƒ)), which allows arbitrary mixtures over model parameters fit to different action distributions.
In contrast, our definition restricts attention to beliefs supported on Θ_m(Οƒ) for some fixed Οƒ ∈ Ξ”A.

In particular, the set of rationalizable actions may be larger than the set of equilibrium actions, as we now illustrate.[6]

Example (continued). Figure 1a illustrates an example in which the agent is overconfident. In this case, there are three Berk-Nash equilibrium actions, denoted a_S, a_M, and a_L, each satisfying the fixed point condition a_j = (cβ€²)⁻¹(ΞΈ_m(Ξ΄_{a_j})) for j = S, M, L. Note also that the image of the function a ↦ (cβ€²)⁻¹(ΞΈ_m(Ξ΄_a)) over a ∈ [a_S, a_L] is the interval [a_S, a_L]. Since this image is contained in Ξ“([a_S, a_L]) (by expression (3)), the interval [a_S, a_L] is closed under the best response operator Ξ“. It follows that every action in this interval is rationalizable. In Section 4, we develop results that help characterize and compute the set of rationalizable actions, and show that, in this example, the interval [a_S, a_L] coincides exactly with that set. β—†

3 Justification of solution concept

This section justifies Berk-Nash rationalizability in a dynamic environment. We consider an agent who learns by Bayesian updating and chooses myopically optimal actions over time.[7]

The agent begins with a full-support prior ΞΌ_0 ∈ Ξ”Ξ˜ over a space of models Θ. At each discrete time t β‰₯ 1:

β€’ the agent holds a belief ΞΌ_t ∈ Ξ”Ξ˜,
β€’ selects an action a_t ∈ F(ΞΌ_t), where F : Ξ”Ξ˜ β‡’ A is the agent's action correspondence,
β€’ observes a consequence

[6] EP also considered mixed actions. Every action in the support of an equilibrium mixed action is rationalizable.

[7] EP consider forward-looking agents under a conditionβ€”weak identificationβ€”that guarantees incentives to experiment vanish in the long run. The same condition and argument can be used to extend Theorem 1 below to forward-looking agents.
y_t drawn from the distribution Q(Β· | a_t),
β€’ and updates beliefs via Bayes' rule: ΞΌ_{t+1} = Bay(a_t, y_t, ΞΌ_t),

where, for any Borel set S βŠ† Θ,

Bay(a, y, ΞΌ)(S) = ∫_S q_ΞΈ(y | a) ΞΌ(dΞΈ) / ∫_Θ q_ΞΈ(y | a) ΞΌ(dΞΈ)   for Q(Β· | a)-almost all y.[8]

3.1 Asymptotic beliefs

Before analyzing endogenous action selection, we first study Bayesian learning given a fixed sequence of actions. Let a^∞ = (a_1, a_2, ...) denote a deterministic or stochastic sequence of actions. Let A^∞ denote the space of infinite action sequences, endowed with the product topology. In each period t β‰₯ 1, given a_t, an outcome y_t is drawn independently from the distribution Q(Β· | a_t). Let Y^∞ denote the space of infinite outcome sequences, also endowed with the product topology. Define P(Β· | a^∞) as the probability measure over Y^∞ induced by the product of the conditional distributions, P(Β· | a^∞) = ⨂_{t=1}^∞ Q(Β· | a_t). Thus, P(Β· | a^∞) governs the law of outcomes conditional on the given sequence of actions.

The empirical action distribution up to time t is denoted by Οƒ_t ∈ Ξ”A and defined, for any Borel set A βŠ† A, by

Οƒ_t(A) = (1/t) Ξ£_{Ο„=1}^{t} 1_A(a_Ο„).

When A is finite, Οƒ_t coincides with the usual notion of action frequencies. The following result characterizes asymptotic beliefs.

Lemma 1. Fix any a^∞. Almost surely with respect to P(Β· | a^∞), the following holds: suppose that there exists a subsequence (t_k)_k such that Οƒ_{t_k} β†’ Οƒ. Then, for every closed set E βŠ† Θ satisfying E ∩ Θ_m(Οƒ) = βˆ…, there exist constants C > 0, ρ > 0, and an integer K such that, for all k β‰₯ K,

ΞΌ_{t_k}(E) ≀ C exp{βˆ’Ο t_k}.    (4)
In particular, if the subsequence (ΞΌ_{t_k})_k converges to some ΞΌ, then ΞΌ ∈ Ξ”Ξ˜_m(Οƒ).

Lemma 1 says that along any subsequence where the empirical action distribution converges, the posterior probability assigned to any set of models incompatible with the limiting action distribution converges to zero.[9] Consequently, any limit belief must be supported on the set of models that best explain the observed limiting behavior.

The idea originates in Berk (1966)'s analysis of misspecified models under i.i.d. data and has since been extended to dynamic learning settings, including by EP, who use it to characterize long-run behavior under misspecified learning. Lemma 1 generalizes these results by allowing for a continuum of actions and by working with subsequences rather than requiring convergence of the empirical action distribution. Except for minor modifications, the argument for the lemma closely follows existing proofs and is therefore included in Appendix A.1.

[8] Assumption (v) implies that q_ΞΈ(Β· | a) > 0 for all ΞΈ ∈ Θ, Q(Β· | a)-almost surely, so Bayes' rule is well defined for Q(Β· | a)-a.e. y.

[9] The convergence is exponentially fast but, as shown in the proof, the speed of convergence is not relevant for characterizing the limiting belief.

3.2 Asymptotic behavior

We now return to the full environment where actions are selected endogenously. The infinite sequence of actions and consequences (a^∞, y^∞) belongs to the space Ω = A^∞ × Y^∞. Let P denote the probability measure over Ω induced by the agent's choice rule and the conditional distributions Q(·
| a). The marginal distribution of P over A^∞ is denoted by P_{A^∞}.

An action a ∈ A is called a limit action of the sequence a^∞ = (a_1, a_2, ...) if there exists a subsequence (a_{t_k})_k such that a_{t_k} β†’ a as k β†’ ∞. Equivalently, a is a limit action if, for every open neighborhood U βŠ† A of a, there are infinitely many times t ∈ N such that a_t ∈ U. When A is finite, an action is a limit action if and only if it is played infinitely often.

Theorem 1. Almost surely with respect to P_{A^∞}, every limit action of a Bayesian agent is Berk-Nash rationalizable.

Proof. For each a^∞, let Y_{a^∞} denote the probability-one set of outcome sequences for which the statement of Lemma 1 holds. Define

A* := {a^∞ ∈ A^∞ : βˆƒ y^∞ ∈ Y_{a^∞}, a_t ∈ F(ΞΌ_t(a^∞, y^∞)) for all t}.

In Appendix A.3, we show that A* has probability one. Fix any a^∞ ∈ A* and let A(a^∞) denote the set of its limit actions. To establish the result, we show that A(a^∞) βŠ† Ξ“(A(a^∞)).

By definition of A*, there exists some y^∞ ∈ Y_{a^∞} such that a_t ∈ F(ΞΌ_t(a^∞, y^∞)) for all t. We fix one such y^∞ and proceed with the proof. Let a ∈ A(a^∞). Consider the subsequences (a_{t_k})_k, (Οƒ_{t_k}(a^∞))_k, and (ΞΌ_{t_k}(a^∞, y^∞))_k such that: (i) lim a_{t_k} = a (this is possible because a is a limit action); (ii) lim ΞΌ_{t_k}(a^∞, y^∞) = ΞΌ and lim Οƒ_{t_k}(a^∞) = Οƒ (this is possible because these sequences lie in compact spaces, so convergent subsequences always exist); (iii) Οƒ ∈ Ξ”A(a^∞) (this follows because A(a^∞) is the set of limit actions[10]). The fact that a_{t_k} ∈ F(ΞΌ_{t_k}(a^∞, y^∞)) for all k and the upper hemi-continuity of F imply that a ∈ F(ΞΌ). Additionally, since y^∞ ∈ Y_{a^∞}, Lemma 1 implies that ΞΌ ∈ Ξ”Ξ˜_m(Οƒ). Therefore, a ∈ Ξ“(A(a^∞)).

An equivalent formulation of Theorem 1 is that for any action that is not rationalizable, there exists an open set containing it that is visited only finitely many times. When the action space is finite, this reduces to the statement that the action is chosen at most a finite number of times. However, the converse does not hold: a rationalizable action is not necessarily a limit action of the action sequence.
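As a quick numerical illustration of Theorem 1 (not part of the paper's formal analysis), the sketch below simulates the Bayesian dynamic for the returns-to-effort example on a grid over Θ. It assumes a quadratic cost c(a) = aΒ²/2, so that the myopic best reply is the posterior mean of ΞΈ, and uses hypothetical parameter values Ξ±* = ΞΈ* = 1 and Ξ± = 2 (overconfidence):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: true ability/return (alpha_star, theta_star),
# perceived ability alpha > alpha_star (overconfidence).
alpha_star, theta_star, alpha = 1.0, 1.0, 2.0
thetas = np.linspace(0.1, 2.0, 401)      # grid over Theta = [0.1, 2]
log_post = np.zeros_like(thetas)         # flat prior, kept in log scale

a_t = 1.0                                # arbitrary initial action
for t in range(5000):
    y = (alpha_star + a_t) * theta_star + rng.normal()   # true outcome y = (a* + a)theta* + eps
    means = (alpha + a_t) * thetas                       # means under the agent's models Q_theta
    log_post += -0.5 * (y - means) ** 2                  # Gaussian log-likelihood update
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    a_t = float(post @ thetas)           # myopic best reply: c'(a) = E[theta] => a = E[theta]

# With these numbers, the unique Berk-Nash equilibrium solves a = theta_m(delta_a),
# i.e. a^2 + a - 1 = 0, so a is roughly 0.618; the simulated action path settles nearby.
```

The posterior concentrates on the KL minimizer rather than on ΞΈ*, and the limit action lands in the rationalizable set, consistent with the theorem.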
These findings parallel existing results on equilibrium notions. For example, EP show that if the action sequence converges to a*, then a* must be a Berk-Nash equilibrium, but not every Berk-Nash equilibrium action is necessarily a limit action.

[10] Proof: if x βˆ‰ A(a^∞), there exists an open neighborhood U βˆ‹ x such that a_{t_k} ∈ U only finitely often. Hence, Οƒ_{t_k}(U) β†’ 0, and by the Portmanteau lemma, Οƒ(U) = 0. As this holds for every such x, the support of Οƒ is contained in A(a^∞), so Οƒ ∈ Ξ”A(a^∞).

4 Characterization of rationalizability

The following result provides a characterization of the set of Berk-Nash rationalizable actions, leveraging the monotonicity of the best response operator.

Proposition 1. The set of Berk-Nash rationalizable actions, B βŠ† A, is nonempty, compact, and is the largest fixed point of Ξ“, meaning B = Ξ“(B) and B βŠ‡ A for any A such that A = Ξ“(A). It can be obtained iteratively, as

B = β‹‚_{k=0}^{∞} B_k   with B_0 = A and B_{k+1} = Ξ“(B_k).

Proof. Let P(A) denote the power set of A, and consider the partially ordered set (P(A), βŠ†). Define S := {A βŠ† A : A βŠ† Ξ“(A)}. Then the set of Berk-Nash rationalizable actions is given by

Bβ€² := βˆͺ_{A ∈ S} A,

which is the supremum of S. Since Ξ“ is monotone, Tarski's fixed point theorem implies that Bβ€² is the largest fixed point of Ξ“. The iterative characterization follows from Kleene's fixed point theorem, i.e., B = Bβ€². Monotonicity of Ξ“ ensures that the sequence {B_k} is nested. Since A is compact
https://arxiv.org/abs/2505.20708v1
and $\Gamma$ is nonempty-valued and upper hemicontinuous, each set $B_k$ is nonempty and compact. The infinite intersection of nested, nonempty compact sets is nonempty, so $\mathcal{B}$ is nonempty.

This type of characterization is standard when rationalizability is defined as closure under a set-valued operator. Just as in other familiar settings, such as best response dynamics in game theory or belief hierarchies in epistemic models, the rationalizable set can be described as the largest fixed point of the operator and computed by iterated application starting from the full domain.

Example (continued). If $A$ is compact, for instance a compact interval, expression (3) for $\Gamma$ can be simplified further. For any $\sigma \in \Delta A$, define the probability measure
$$\nu_\sigma(da) := \frac{(\alpha + a)^2\, \sigma(da)}{\int (\alpha + a)^2\, \sigma(da)}.$$
Then $\theta_m(\sigma)$ can be written as a convex combination of degenerate values:
$$\theta_m(\sigma) = \int \theta_m(\delta_a)\, \nu_\sigma(da). \tag{5}$$
Since $\nu_\sigma$ is a probability measure supported on $A$, equation (5) shows that $\theta_m(\sigma)$ lies in the convex hull of the set $\{\theta_m(\delta_a) : a \in A\}$. Moreover, because $\theta_m(\delta_a)$ is continuous on the compact set $A$, it attains its minimum and maximum, and this convex hull is simply the interval
$$\left[\min_{a \in A} \theta_m(\delta_a),\ \max_{a \in A} \theta_m(\delta_a)\right].$$
Applying $(c')^{-1}$ to this interval, we conclude that
$$\Gamma(A) = \left[(c')^{-1}\!\left(\min_{a \in A} \theta_m(\delta_a)\right),\ (c')^{-1}\!\left(\max_{a \in A} \theta_m(\delta_a)\right)\right]. \tag{6}$$
This step uses the fact that the image of a closed interval under a continuous, strictly increasing function, $(c')^{-1}$, is again a closed interval.

This characterization of $\Gamma$ motivates the definition of a simpler operator that acts directly on actions. Define $T: A \to A$ by
$$T(a) := (c')^{-1}(\theta_m(\delta_a)),$$
and note that the set of fixed points of $T$ coincides with the set of Berk-Nash equilibrium actions. We will show that the operator $T$ also characterizes the best response operator $\Gamma$.

Overconfidence ($\alpha > \alpha^*$).
In this case, the function $\theta_m(\delta_a)$ is increasing in $a$, so $T$ is increasing. An increasing function may have multiple fixed points, so multiple Berk-Nash equilibria are possible. For any interval $A = [a_{\min}, a_{\max}]$, it follows from (6) that $\Gamma(A) = [T(a_{\min}), T(a_{\max})]$.

To characterize the limit of iterated best responses, define a sequence of intervals $A_k = [a^k_{\min}, a^k_{\max}]$ by
$$a^{k+1}_{\min} = T(a^k_{\min}), \qquad a^{k+1}_{\max} = T(a^k_{\max}),$$
starting from $a^0_{\min} = 0$ and $a^0_{\max} = \bar{a}$, where $\bar{a}$ is an upper bound on optimal actions. Then $(a^k_{\min})_k$ increases, $(a^k_{\max})_k$ decreases, and both sequences converge. The limits
$$a^\infty_{\min} = \lim_{k \to \infty} a^k_{\min}, \qquad a^\infty_{\max} = \lim_{k \to \infty} a^k_{\max}$$
are fixed points of $T$ (hence, equilibria), and, by Proposition 1, the limiting interval $[a^\infty_{\min}, a^\infty_{\max}]$ is the set of all rationalizable actions. This is the case in Figure 1a, where $a_S = a^\infty_{\min}$ and $a_L = a^\infty_{\max}$. If $T$ has a unique fixed point, the limit is a singleton and rationalizability coincides with equilibrium.

Underconfidence ($\alpha < \alpha^*$). In this case, the function $\theta_m(\delta_a)$ is decreasing in $a$, so $T$ is decreasing. A decreasing function has at most one fixed point, so there is a unique Berk-Nash equilibrium. For any interval $A = [a_{\min}, a_{\max}]$, we have $\Gamma(A) = [T(a_{\max}), T(a_{\min})]$. To analyze the dynamics of $\Gamma$, define
$$a^{k+1}_{\min} = T(a^k_{\max}), \qquad a^{k+1}_{\max} = T(a^k_{\min}),$$
again starting from $[0, \bar{a}]$. Then $(a^k_{\min})_k$ increases, $(a^k_{\max})_k$ decreases, and both sequences converge to
$$a^\infty_{\min} = \lim_{k \to \infty} a^k_{\min}, \qquad a^\infty_{\max} = \lim_{k \to \infty} a^k_{\max}.$$
By Proposition 1, the limiting interval $[a^\infty_{\min}, a^\infty_{\max}]$ is the set of
rationalizable actions. The limits also satisfy
$$T(a^\infty_{\min}) = a^\infty_{\max}, \qquad T(a^\infty_{\max}) = a^\infty_{\min},$$
so they form a 2-cycle of $T$ and are fixed points of $T^2$. If $T^2$ has a unique fixed point, then it must also be a fixed point of $T$, and the limit is a singleton. In that case, rationalizability coincides with equilibrium. If $T^2$ has multiple fixed points, then $a^\infty_{\min}$ and $a^\infty_{\max}$ are the smallest and largest among them, respectively. This latter situation is illustrated in Figure 1b. β—Š

[Figure 1 here: panel (a), Overconfident agent, and panel (b), Underconfident agent. Each panel plots the marginal cost curve $c'(\cdot)$ and the KL-minimizing curve $\theta_m(\delta_\cdot)$ against the true return $\theta^*$ and the optimal action $a_{\mathrm{opt}}$.]

Figure 1: Returns to effort example. The optimal action $a_{\mathrm{opt}}$ is where the marginal cost curve intersects the true return $\theta^*$. Berk-Nash equilibrium actions lie at intersections of the marginal cost curve and the KL-minimizing curve $\theta_m(\delta_\cdot)$. In the overconfident case, the set of rationalizable actions is $[a_S, a_L]$; in the underconfident case, it is the 2-cycle interval $[a^\infty_{\min}, a^\infty_{\max}]$.

Our approach is direct and avoids the use of dynamic or stochastic approximation techniques often required in convergence analyses. For example, Heidhues, KΕ‘szegi and Strack (2018) show convergence in the overconfident case by assuming a unique equilibrium, while Heidhues, KΕ‘szegi and Strack (2021) allow for multiple equilibria but rely on normal priors. Moreover, neither paper establishes general results for the underconfident case.

The example also highlights two useful patterns. In some cases, such as when the agent is overconfident, the set of rationalizable actions can be characterized by identifying the smallest and largest Berk-Nash equilibria. In other cases, like underconfidence, this no longer holds, but it is still possible to simplify the analysis by restricting attention to degenerate beliefs and actions.
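The two endpoint iterations just described can be sketched numerically. The map $T$ below is an arbitrary smooth toy choice, not the paper's parametrization: an increasing $T$ for the overconfident case, and a decreasing $T$ for the underconfident case, where the limits form a 2-cycle.

```python
import math

def iterate_increasing(T, lo, hi, n=5000):
    """Overconfident case: Gamma([lo, hi]) = [T(lo), T(hi)] with T increasing,
    so the two endpoints are iterated by T separately."""
    for _ in range(n):
        lo, hi = T(lo), T(hi)
    return lo, hi

def iterate_decreasing(T, lo, hi, n=5000):
    """Underconfident case: Gamma([lo, hi]) = [T(hi), T(lo)] with T decreasing;
    the limits form a 2-cycle of T (i.e., fixed points of T composed with T)."""
    for _ in range(n):
        lo, hi = T(hi), T(lo)
    return lo, hi

# Hypothetical toy maps on [0, 1] (not derived from the model in the text).
T_up = lambda a: 0.5 + 0.45 * math.tanh(6.0 * (a - 0.5))    # increasing, three fixed points
T_down = lambda a: 0.5 - 0.45 * math.tanh(6.0 * (a - 0.5))  # decreasing, unique fixed point

a_S, a_L = iterate_increasing(T_up, 0.0, 1.0)  # smallest and largest fixed points of T_up
lo, hi = iterate_decreasing(T_down, 0.0, 1.0)  # 2-cycle: T_down(lo) = hi, T_down(hi) = lo
```

In the increasing case the limits are the extreme fixed points of `T_up` (the extreme equilibria); in the decreasing case they satisfy $T(a^\infty_{\min}) = a^\infty_{\max}$ and $T(a^\infty_{\max}) = a^\infty_{\min}$, matching the 2-cycle structure above.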
In both cases, the structure of the environment allows us to reduce the problem of finding the full rationalizable set to a more tractable form. We now present two broad classes of environments where such simplifications are possible: supermodular and single-peaked settings.

Supermodular environments.

Definition 2. The environment is supermodular if (i) $A$ and $\Theta$ are compact sublattices of Euclidean space under the coordinatewise partial order, (ii) $U$ is quasi-supermodular in $a$ and single crossing in $(a, \theta)$, and (iii) the negative of the KL function, $-K$, is quasi-supermodular in $\theta$ and is single crossing in $(\theta, a)$.^11

This supermodular environment ensures monotone comparative statics: the optimal action is increasing in $\theta$, and the type $\theta$ that best fits a given action is increasing in the action. In general, one can directly verify the conditions in Definition 2 or rely on existing results from the literature on monotone comparative statics under uncertainty (e.g., Athey (2002)).^12

A standard implication of supermodular environments (e.g., Milgrom and Shannon (1994); Topkis (1998)) is that, for each $\theta$, the set $F(\delta_\theta)$ admits a smallest and largest element, denoted $\underline{a}(\theta)$ and $\bar{a}(\theta)$, respectively, both of which are continuous and nondecreasing in $\theta$. Similarly, for each $a$, the set $\Theta_m(\delta_a)$ has a smallest and largest element, denoted $\underline{\theta}(a)$ and $\bar{\theta}(a)$, respectively, and both are continuous and nondecreasing in $a$.

Proposition 2. In supermodular environments, there exist smallest and largest Berk-Nash rationalizable
actions, denoted $a_S$ and $a_L$, respectively. These actions are also the smallest and largest Berk-Nash equilibrium actions. They can be constructed as the limits of the monotone sequences
$$a^{k+1}_S = \underline{a}(\underline{\theta}(a^k_S)), \qquad a^{k+1}_L = \bar{a}(\bar{\theta}(a^k_L)),$$
starting from $a^0_S = \min A$ and $a^0_L = \max A$. The limits $a_S = \lim_{k\to\infty} a^k_S$ and $a_L = \lim_{k\to\infty} a^k_L$ are fixed points of the respective composition functions:
$$a_S = \underline{a}(\underline{\theta}(a_S)), \qquad a_L = \bar{a}(\bar{\theta}(a_L)).$$

In particular, Proposition 2 shows that all Berk-Nash rationalizable actions lie within the interval $[a_S, a_L]$. These bounds are defined as fixed points over degenerate actions and beliefs, making them far simpler to compute than the full rationalizable set, which requires iterating the best response operator over mixed strategies and belief supports. In applications, such bounds are typically informative and can be used to conduct comparative statics. Moreover, when $a_S = a_L$, all rationalizable notions collapse to a unique action, implying convergence of the agent's action.

We prove Proposition 2 in the Appendix, following an argument similar to that of Milgrom and Roberts (1990), who showed that in supermodular games, the smallest and largest serially undominated strategy profiles coincide with pure Nash equilibria.

^11 A function $f: X \times T \to \mathbb{R}$ defined over a lattice $X$ and a partially ordered set $T$ is quasi-supermodular in $x$ if, for all $x, x'$ and all $t$, $f(x, t) \ge (>)\, f(x \wedge x', t) \implies f(x \vee x', t) \ge (>)\, f(x', t)$. It is single crossing in $(x, t)$ if, for all $x' > x$ and $t' > t$, $f(x', t) \ge (>)\, f(x, t) \implies f(x', t') \ge (>)\, f(x, t')$.

^12 When the sets of actions, consequences, and models are one-dimensional intervals, the environment is supermodular provided that (i) $\pi$ is single crossing in $(a, y)$, (ii) $q$ and $q_\theta$ are strictly positive, (iii) $q(y|a)$ satisfies the MLRP (monotone likelihood ratio property) in $(a, y)$, and (iv) $q_\theta(y|a)$ satisfies the MLRP separately in $(a, y)$, $(\theta, a)$, and $(\theta, y)$.
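The monotone construction in Proposition 2 is straightforward to sketch. The toy example below uses hypothetical nondecreasing selections, written as if $\underline{a} = \bar{a}$ and $\underline{\theta} = \bar{\theta}$ (unique best responses); none of the functional forms come from the paper.

```python
def extreme_fixed_point(best_action, best_model, a0, n=200):
    """Iterate a_{k+1} = best_action(best_model(a_k)); for nondecreasing
    selections the sequence is monotone and converges to a fixed point of
    the composition, as in the construction of Proposition 2."""
    a = a0
    for _ in range(n):
        a = best_action(best_model(a))
    return a

# Hypothetical nondecreasing selections on A = Theta = [0, 1].
best_model = lambda a: 0.3 + 0.6 * a      # theta(a): best-fitting model, nondecreasing
best_action = lambda th: 0.9 * th ** 0.5  # a(theta): optimal action, nondecreasing

a_S = extreme_fixed_point(best_action, best_model, 0.0)  # from min A, sequence nondecreasing
a_L = extreme_fixed_point(best_action, best_model, 1.0)  # from max A, sequence nonincreasing
```

In this toy example the composition $a \mapsto 0.9\sqrt{0.3 + 0.6a}$ has a unique fixed point, so $a_S = a_L$ and the bounds collapse to a single action; with several fixed points the two limits would instead bracket the interval $[a_S, a_L]$.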
A key difference in our setting arises from the structure of the operator $\Gamma$, which maps $A$ to $F(\cup_{\sigma \in \Delta A} \Delta\Theta_m(\sigma))$. Unlike in standard games, this operator does not permit an interpretation where one player selects actions from $A$ while the other selects models from $\Theta$, because different models $\theta_1, \theta_2 \in \Theta$ may be supported by distinct beliefs $\sigma_1, \sigma_2 \in \Delta A$, and mixtures over $\sigma_1$ and $\sigma_2$ may not be feasible in justifying an action under $\Gamma$. To address this, we instead work with the "larger" operator $A \mapsto F(\Delta\Theta_m(\Delta A))$, which allows for separate iteration over the spaces of actions and models. We then adopt the approach of Milgrom and Roberts (1990) to characterize its infinite iteration and show that it provides tight bounds for the set of Berk-Nash rationalizable actions.

Single-peaked environments.

Definition 3. The environment is single peaked if (i) $A$ and $\Theta$ are compact intervals in $\mathbb{R}$, (ii) $U(a, \theta)$ is strictly quasi-concave in $a$ for each $\theta$, and (iii) $K(\theta, a)$ is strictly quasi-convex in $\theta$ for each $a$.

This structure guarantees that optimal actions and best-fitting models are uniquely determined by degenerate inputs. We denote by $\tilde{a}: \Theta \to A$ the function mapping each model $\theta$ to its unique optimal action, and by $\tilde{\theta}: A \to \Theta$ the function mapping each action $a$ to the unique model that best fits it. Both functions are well defined and continuous by strict quasi-concavity and strict quasi-convexity, respectively. Moreover, utility and model fit degrade strictly as one moves away from these optima, in the sense that suboptimal actions or models yield strictly worse performance the further they are from the best-fitting value.

Proposition 3. In single-peaked environments, the set of rationalizable actions is given by
$$\mathcal{B} = \bigcap_{k=0}^{\infty} \tilde{B}_k,$$
where $\tilde{B}_0 = A$ and $\tilde{B}_{k+1} = T(\tilde{B}_k)$, with $T(\cdot) = \tilde{a} \circ \tilde{\theta}(\cdot)$.

This characterization is convenient because the operator $\Gamma$ is defined over mixed actions and mixed beliefs, which can be difficult to work with directly. In contrast, the operator $T$ in Proposition 3 allows us to focus entirely on degenerate actions and beliefs, while still capturing the full set of rationalizable actions.

A Appendix

A.1 Proof of Lemma 1

Define
$$L_t(a^\infty, y^\infty)(\theta) := t^{-1} \sum_{\tau=1}^{t} \ln \frac{q(y_\tau \mid a_\tau)}{q_\theta(y_\tau \mid a_\tau)}.$$
We will use the following fact:

Uniform SLLN. For any $a^\infty$, there exists $\mathcal{Y}_{a^\infty}$ with $P(\mathcal{Y}_{a^\infty} \mid a^\infty) = 1$ such that, for any $y^\infty \in \mathcal{Y}_{a^\infty}$ and for every $\varepsilon > 0$, there exists $T^1_\varepsilon(a^\infty, y^\infty)$ such that, for all $t \ge T^1_\varepsilon(a^\infty, y^\infty)$,
$$\sup_{\theta \in \Theta} \left| L_t(a^\infty, y^\infty)(\theta) - \int K(\theta, a)\, \sigma_t(a^\infty)(da) \right| < \varepsilon. \tag{7}$$
See Appendix A.2 for a proof of this result.

From this point on, we fix any $a^\infty$ and consider any $y^\infty \in \mathcal{Y}_{a^\infty}$. For simplicity, we drop $(a^\infty, y^\infty)$ from the notation. Let $(\sigma_{t_k})_k$ be a subsequence converging to $\sigma$ and let $E \subseteq \Theta$ be a closed set disjoint from $\Theta_m(\sigma)$. We rely on the following two results:

Approximate $(\sigma_{t_k})_k$ with its limit $\sigma$. Since $K$ is continuous and $\Theta$ is compact, for every $\varepsilon > 0$ there exists $T^2_\varepsilon$ such that, for all $k$ with $t_k \ge T^2_\varepsilon$,
$$\sup_{\theta \in \Theta} \left| \int K(\theta, a)\, \sigma_{t_k}(da) - \int K(\theta, a)\, \sigma(da) \right| < \varepsilon.$$

Uniform separation. Since $K$ is continuous and $E$ is closed in a compact set (hence, compact), there exists $\delta > 0$ such that, for all $\theta \in E$,
$$\int K(\theta, a)\, \sigma(da) \ge K^*(\sigma) + \delta,$$
where $K^*(\sigma) := \min_{\theta \in \Theta} \int K(\theta, a)\, \sigma(da)$.

We now prove (4) in Lemma 1. For any $\xi > 0$, note that
$$\mu_{t_k}(E) = \frac{\int_E e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta)}{\int_\Theta e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta)} \le \frac{\int_E e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta)}{\int_{\Theta_\xi(\sigma)} e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta)},$$
where $\Theta_\xi(\sigma) := \{\theta \in \Theta : \int K(\theta, a)\, \sigma(da) \le K^*(\sigma) + \xi\}$. Consider first the numerator.
From the Uniform SLLN, the approximation of $(\sigma_{t_k})_k$ with its limit $\sigma$, and Uniform separation, it follows that, for all $k$ such that $t_k \ge \max\{T^1_\varepsilon, T^2_\varepsilon\}$, $L_{t_k}(\theta) \ge K^*(\sigma) + \delta - 2\varepsilon$ for all $\theta \in E$, and so
$$\int_E e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta) \le \mu_0(E)\, e^{-t_k (K^*(\sigma) + \delta - 2\varepsilon)}. \tag{8}$$
Next, consider the denominator. From the Uniform SLLN and the same approximation step, it follows that, for all $k$ such that $t_k \ge \max\{T^1_\varepsilon, T^2_\varepsilon\}$, $L_{t_k}(\theta) \le K^*(\sigma) + \xi + 2\varepsilon$ for all $\theta \in \Theta_\xi(\sigma)$, and so
$$\int_{\Theta_\xi(\sigma)} e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta) \ge \mu_0(\Theta_\xi(\sigma))\, e^{-t_k (K^*(\sigma) + \xi + 2\varepsilon)}. \tag{9}$$
Combining (8) and (9) and setting $\varepsilon = \xi < \delta / 10$, we obtain that, for all $k$ such that $t_k \ge \max\{T^1_\varepsilon, T^2_\varepsilon\}$,
$$\mu_{t_k}(E) \le \frac{\mu_0(E)}{\mu_0(\Theta_\xi(\sigma))}\, e^{-t_k (\delta / 2)}.$$
Since $\Theta$ is compact and $\theta \mapsto \int K(\theta, a)\, \sigma(da)$ is continuous, a minimizer $\theta^* \in \Theta$ exists and satisfies $\theta^* \in \Theta_\xi(\sigma)$. By continuity, there exists an open ball around $\theta^*$ contained in $\Theta_\xi(\sigma)$. Since $\mu_0$ has full support, this ball has positive measure, so $\mu_0(\Theta_\xi(\sigma)) > 0$. Equation (4) in the statement of Lemma 1 then follows by setting $\rho = \delta / 2 > 0$ and $C = \mu_0(E) / \mu_0(\Theta_\xi(\sigma))$. Since $\mu_0(E) \ge 0$ and $\mu_0(\Theta_\xi(\sigma)) > 0$, it follows that $C \ge 0$ and $C < \infty$.

Suppose, in addition, that $(\mu_{t_k})_k$ converges to $\mu$. By equation (4), for any closed $E$ such that $E \cap \Theta_m(\sigma) = \emptyset$, $\liminf_{k \to \infty} \mu_{t_k}(E) = 0$. By the Portmanteau lemma, $\mu(E) \le \liminf_{k \to \infty} \mu_{t_k}(E) = 0$. Now consider any $\theta \in \Theta \setminus \Theta_m(\sigma)$. There exists an open neighborhood $U_\theta$ with closure $\bar{U}_\theta$ such that $\bar{U}_\theta \cap \Theta_m(\sigma) = \emptyset$. Since $\bar{U}_\theta$ is closed, $\mu(\bar{U}_\theta) = 0$, and so $\theta$ is not in the support of $\mu$. Thus, $\mathrm{supp}\,\mu \subseteq \Theta_m(\sigma)$.

A.2 Uniform SLLN

Esponda, Pouzo and Yamamoto (2021) (henceforth, EPY 2021) prove the uniform SLLN (equation 7) for the case of a finite number of actions (see their Lemma 2). The proof for the case where $A$ is a compact subset of Euclidean space is essentially identical. We point out the two places where the more general $A$ makes a difference. In Step 1 of their proof, they use
the fact that there is a function $g_a \in L^2(Y, \mathbb{R}, Q(\cdot \mid a))$ such that $\sup_{\theta' \in O(\theta, \varepsilon)} (g(\theta', y, a))^2 \le (g_a(y))^2$, where $g(\theta, y, a) := \log(q(y \mid a) / q_\theta(y \mid a))$ and $O(\theta, \varepsilon) := \{\theta' : \|\theta' - \theta\| < \varepsilon\}$. In our case, we need a function $G$ that does not depend on $a$; hence our assumption (v).

In Step 2 of their proof, they conclude that, for each $\theta \in \Theta$,
$$\Lambda_\delta(a) := \int \sup_{\theta' \in O(\theta, \delta)} |g(\theta', y, a) - g(\theta, y, a)|\, Q(dy \mid a)$$
vanishes as $\delta$ converges to zero. The proof requires convergence to be uniform in $A$, which is immediate if $A$ is finite. We now establish such uniform convergence for our more general case. By pointwise convergence and the Arzela-Ascoli Theorem, it suffices to establish equicontinuity of the class $(\Lambda_\delta)_{\delta \ge 0}$. To do so, take any $\epsilon > 0$ and observe that, for any $a, a' \in A$,
$$|\Lambda_\delta(a) - \Lambda_\delta(a')| \le \left| \int \sup_{\theta' \in O(\theta, \delta)} |g(\theta', y, a') - g(\theta, y, a')|\, (q(y \mid a) - q(y \mid a'))\, \nu(dy) \right| + \left| \int \sup_{\theta' \in O(\theta, \delta)} |\log q_{\theta'}(y \mid a) - \log q_{\theta'}(y \mid a')|\, q(y \mid a)\, \nu(dy) \right| =: I + II.$$
Under Assumption (v), the term $I$ is bounded by $2 \int G(y)\, |q(y \mid a) - q(y \mid a')|\, \nu(dy)$. Under Assumption (iv), there exists a $\gamma > 0$ (not depending on $\delta$) such that $2 \int G(y)\, |q(y \mid a) - q(y \mid a')|\, \nu(dy) \le 0.5\epsilon$ for any $a, a'$ such that $\|a - a'\| \le \gamma$.

Regarding the term $II$, under Assumptions (iv) and (v), $(\theta, a) \mapsto \log q_\theta(y \mid a)$ is jointly uniformly continuous, so, for any $\epsilon' > 0$ and $Q(\cdot \mid a)$-almost all $y$, there exists a $\gamma'_y > 0$ (not depending on $\theta$) such that $|\log q_\theta(y \mid a) - \log q_\theta(y \mid a')| \le \epsilon'$ for all $a, a'$ such that $\|a - a'\| \le \gamma'_y$. Hence, $\sup_{\theta' \in O(\theta, \delta)} |\log q_{\theta'}(y \mid a) - \log q_{\theta'}(y \mid a')| \le \epsilon'$ for all such $a, a'$.
By the DCT, under Assumption (v), this result implies that there exists a $\gamma'$ such that the term $II$ is bounded by $\epsilon'$ for all $a, a'$ such that $\|a - a'\| \le \gamma'$. By taking $\epsilon' = 0.5\epsilon$, these results show that, for all $a, a'$ such that $\|a - a'\| \le \min\{\gamma, \gamma'\}$,
$$|\Lambda_\delta(a) - \Lambda_\delta(a')| \le \epsilon$$
for any $\delta \ge 0$, thus establishing equicontinuity as desired.

A.3 Proof that $A^*$ in Theorem 1 has probability one

Let $\Omega_1 := \{(a^\infty, y^\infty) \in \Omega : a_t \in F(\mu_t(\omega)) \text{ for all } t\}$. By assumption, every feasible history is in this set. For each $a^\infty$, let $\mathcal{Y}_{a^\infty}$ denote the probability-one set of outcome sequences for which the conclusion of Lemma 1 holds. By the law of total probability, the set $\Omega_2 := \{(a^\infty, y^\infty) \in \Omega : y^\infty \in \mathcal{Y}_{a^\infty}\}$ has probability one.^13 Therefore, the intersection of these two sets, $\Omega^* := \Omega_1 \cap \Omega_2$, also has probability one. Since $P(\Omega^*) = 1$ and since $A^*$ defined in Theorem 1 is the projection of $\Omega^*$ over $A^\infty$, the law of total probability implies that $P_{A^\infty}(A^*) = 1$. This is the set of action sequences for which the statement in Theorem 1 holds.

^13 $P(\Omega_2) = \int_{A^\infty} P(\mathcal{Y}_{a^\infty} \mid a^\infty)\, P_{A^\infty}(da^\infty) = \int_{A^\infty} 1\, P_{A^\infty}(da^\infty) = 1$.

A.4 Proof of Proposition 2

Define the operator $\rho: 2^\Theta \to 2^A$ by $\rho(E) = F(\Delta E)$ for all $E \subseteq \Theta$, and the operator $\varphi: 2^A \to 2^\Theta$ by $\varphi(A') = \Theta_m(\Delta A')$ for all $A' \subseteq A$. Observe that, for any $A' \subseteq A$,
$$\Gamma(A') = F(\cup_{\sigma \in \Delta A'} \Delta\Theta_m(\sigma)) \subseteq (\rho \circ \varphi)(A') = F(\Delta\Theta_m(\Delta A')). \tag{10}$$
Letting $B_0 = A$ and defining the sequence $B_{k+1} = \Gamma(B_k)$, it follows from Proposition 1 that the set of Berk-Nash rationalizable actions is given by $\mathcal{B} = \bigcap_{k=0}^{\infty} B_k$. Moreover, by equation (10), this set satisfies
$$\mathcal{B} \subseteq \bigcap_{k \ge 0} (\rho \circ \varphi)^k(A). \tag{11}$$
The rest of the proof characterizes the infinite intersection above. This relies on the following claim.

Claim 1. (i) For any $\theta_l \le \theta_h$, $\rho([\theta_l, \theta_h]) \subseteq [\underline{a}(\theta_l), \bar{a}(\theta_h)]$. (ii) For any $a_l \le a_h$, $\varphi([a_l, a_h]) \subseteq [\underline{\theta}(a_l), \bar{\theta}(a_h)]$.

Proof. (i) Consider $a \not\le \bar{a}(\theta_h)$. Since $\bar{a}(\theta_h)$ is the largest maximizer of $U(\cdot, \theta_h)$ and $a \vee \bar{a}(\theta_h) > \bar{a}(\theta_h)$, then $U(a \vee \bar{a}(\theta_h), \theta_h) < U(\bar{a}(\theta_h), \theta_h)$.
Since $U$ is single crossing in $(a, \theta)$, this implies $U(a \vee \bar{a}(\theta_h), \theta) < U(\bar{a}(\theta_h), \theta)$ for all $\theta \in [\theta_l, \theta_h]$. Since $U$ is quasi-supermodular in $a$, this implies $U(a, \theta) < U(a \wedge \bar{a}(\theta_h), \theta)$ for all $\theta \in [\theta_l, \theta_h]$. Thus, for any $a \not\le \bar{a}(\theta_h)$ and any $\mu$ supported on $[\theta_l, \theta_h]$, we have $\int U(a, \theta)\, \mu(d\theta) < \int U(a \wedge \bar{a}(\theta_h), \theta)\, \mu(d\theta)$, implying $a \notin F(\mu)$. A similar argument establishes that, for any $a \not\ge \underline{a}(\theta_l)$ and any $\mu$ supported on $[\theta_l, \theta_h]$, $a \notin F(\mu)$. (ii)
Consider $\theta \not\le \bar{\theta}(a_h)$. Since $\bar{\theta}(a_h)$ is the largest minimizer of $K(\cdot, a_h)$ and $\theta \vee \bar{\theta}(a_h) > \bar{\theta}(a_h)$, then $K(\bar{\theta}(a_h), a_h) < K(\theta \vee \bar{\theta}(a_h), a_h)$. Since $-K$ is single crossing in $(\theta, a)$, this implies $K(\bar{\theta}(a_h), a) < K(\theta \vee \bar{\theta}(a_h), a)$ for all $a \in [a_l, a_h]$. Since $-K$ is quasi-supermodular in $\theta$, this implies $K(\theta \wedge \bar{\theta}(a_h), a) < K(\theta, a)$ for all $a \in [a_l, a_h]$. Thus, for any $\theta \not\le \bar{\theta}(a_h)$ and any $\sigma$ supported on $[a_l, a_h]$, we have $\int K(\theta \wedge \bar{\theta}(a_h), a)\, \sigma(da) < \int K(\theta, a)\, \sigma(da)$, implying $\theta \notin \Theta_m(\sigma)$. A similar argument establishes that, for any $\theta \not\ge \underline{\theta}(a_l)$ and any $\sigma$ supported on $[a_l, a_h]$, $\theta \notin \Theta_m(\sigma)$.

To characterize the iterative application of $\rho \circ \varphi$, we define sequences $(a^k_S)_k$ and $(a^k_L)_k$ such that
$$a^{k+1}_S = \underline{a}(\underline{\theta}(a^k_S)) \quad \text{and} \quad a^{k+1}_L = \bar{a}(\bar{\theta}(a^k_L)),$$
where $a^0_S$ and $a^0_L$ are the smallest and largest actions in $A$, respectively. Consider the sequence of sets defined by $A_{k+1} = [a^{k+1}_S, a^{k+1}_L]$, where $A_0 = A = [a^0_S, a^0_L]$ is the set of all actions. Since $a^0_S$ is the smallest element and $\underline{a}(\underline{\theta}(\cdot))$ is nondecreasing, the sequence $(a^k_S)_k$ is nondecreasing. Similarly, the sequence $(a^k_L)_k$ is nonincreasing. Therefore, since $A$ is compact, these sequences converge. Let $a_S$ and $a_L$ denote the limits. Since $\underline{a} \circ \underline{\theta}$ is continuous, we can pass the limit through the composition: $a_S = \lim_{k\to\infty} a^{k+1}_S = \lim_{k\to\infty} \underline{a}(\underline{\theta}(a^k_S)) = \underline{a}(\underline{\theta}(a_S))$, so $a_S$ is a fixed point of the map $\underline{a} \circ \underline{\theta}$. Similarly, $a_L$ is a fixed point of $\bar{a} \circ \bar{\theta}$. Moreover,
$$\bigcap_{k \ge 0} A_k = \bigcap_{k \ge 0} [a^k_S, a^k_L] = [a_S, a_L]. \tag{12}$$
By Claim 1, for all $k$,
$$(\rho \circ \varphi)(A_k) \subseteq \rho([\underline{\theta}(a^k_S), \bar{\theta}(a^k_L)]) \subseteq [\underline{a}(\underline{\theta}(a^k_S)), \bar{a}(\bar{\theta}(a^k_L))] = A_{k+1}. \tag{13}$$
Therefore, it follows from (11), (12), and (13) that $\mathcal{B} \subseteq [a_S, a_L]$.

We conclude by showing that $a_L$ is the largest (Berk-Nash) equilibrium action. A similar argument holds for $a_S$ and is omitted. Recall that $BNE = \{a \in A : a \in \Gamma(a)\} = \{a \in A : a \in F(\Delta\Theta_m(\delta_a))\}$ is the set of Berk-Nash equilibrium actions. Since $a_L = \bar{a}(\bar{\theta}(a_L))$, then $a_L \in F(\delta_{\bar{\theta}(a_L)})$, and since $\bar{\theta}(a_L) \in \Theta_m(\delta_{a_L})$, it follows that $a_L$ is an equilibrium.
To show that it is the largest equilibrium, observe that
$$BNE \subseteq \{a : a \in F(\Delta[\underline{\theta}(a), \bar{\theta}(a)])\} = \{a : a \in \rho([\underline{\theta}(a), \bar{\theta}(a)])\} \subseteq \{a : a \in [\underline{a}(\underline{\theta}(a)), \bar{a}(\bar{\theta}(a))]\}, \tag{14}$$
where the first line holds because $\Theta_m(\delta_a)$ is contained in $[\underline{\theta}(a), \bar{\theta}(a)]$, the second by definition of $\rho$, and the third by Claim 1(i). Let $y$ belong to the set defined in (14). Then $y \le \bar{a}(\bar{\theta}(y))$. Since $y \le a^0_L$ and $\bar{a}(\bar{\theta}(\cdot))$ is nondecreasing, $\bar{a}(\bar{\theta}(y)) \le \bar{a}(\bar{\theta}(a^0_L)) = a^1_L$. Iterating, $y \le a^k_L$ for all $k$. Since $a^k_L$ converges to $a_L$, it follows that $y \le a_L$. Since this is true for every $y$ in a set that contains all equilibria, this must also be true for all equilibria.

A.5 Proof of Proposition 3

We will establish that, for any compact interval $A' \subseteq A$,
$$\tilde{a} \circ \tilde{\theta}(A') = \Gamma(A'). \tag{15}$$
The result then follows from Proposition 1. We begin by showing that
$$\tilde{\theta}(A') = \cup_{\sigma \in \Delta A'} \Theta_m(\sigma). \tag{16}$$
The inclusion $\subseteq$ is obvious. To show the reverse inclusion, we will establish that if $\theta \notin \tilde{\theta}(A')$, then $\theta \notin \cup_{\sigma \in \Delta A'} \Theta_m(\sigma)$. Suppose $\theta \notin \tilde{\theta}(A')$. By continuity of $\tilde{\theta}(\cdot)$ and compactness of the interval $A'$, $\tilde{\theta}(A')$ is a compact interval in $\mathbb{R}$; denote it by $[\underline{\theta}, \bar{\theta}]$. Suppose that $\theta > \bar{\theta}$ (the proof is similar if $\theta < \underline{\theta}$). Strict quasi-convexity of $K(\cdot, a)$ implies that, for all $a \in A'$,
$$K(\theta, a) > K(\bar{\theta}, a) \ge K(\tilde{\theta}(a), a).$$
Therefore, for any $\sigma \in \Delta A'$,
$$\int K(\theta, a)\, \sigma(da) > \int K(\bar{\theta}, a)\, \sigma(da),$$
implying that $\theta \notin \Theta_m(\sigma)$.

We now show equation (15). The inclusion $\subseteq$ is obvious. To show the reverse inclusion, we will establish that if $a \notin \tilde{a} \circ \tilde{\theta}(A')$, then $a \notin \Gamma(A')$. Suppose $a \notin \tilde{a} \circ \tilde{\theta}(A')$. By continuity of $\tilde{a} \circ \tilde{\theta}(\cdot)$ and compactness of the interval $A'$, $\tilde{a} \circ \tilde{\theta}(A')$ is a compact interval in $\mathbb{R}$; denote it by $[\underline{a}, \bar{a}]$. Suppose that $a > \bar{a}$ (the proof is
similar if $a < \underline{a}$). Strict quasi-concavity of $U(\cdot, \theta)$ implies that, for all $\theta \in \tilde{\theta}(A')$,
$$U(a, \theta) < U(\bar{a}, \theta) \le U(\tilde{a}(\theta), \theta).$$
Therefore, for any $\mu \in \Delta\tilde{\theta}(A')$,
$$\int U(a, \theta)\, \mu(d\theta) < \int U(\bar{a}, \theta)\, \mu(d\theta),$$
implying that $a \notin F(\Delta\tilde{\theta}(A')) = F(\Delta(\cup_{\sigma \in \Delta A'} \Theta_m(\sigma)))$, where the equality follows from equation (16). Since $F(\Delta(\cup_{\sigma \in \Delta A'} \Theta_m(\sigma))) \supseteq \Gamma(A')$, then $a \notin \Gamma(A')$.

References

Athey, Susan. 2002. "Monotone comparative statics under uncertainty." The Quarterly Journal of Economics, 117(1): 187–223.

Berk, Robert H. 1966. "Limiting behavior of posterior distributions when the model is incorrect." The Annals of Mathematical Statistics, 37(1): 51–58.

Bernheim, B Douglas. 1984. "Rationalizable strategic behavior." Econometrica: Journal of the Econometric Society, 1007–1028.

Bohren, J Aislinn. 2016. "Informational herding with model misspecification." Journal of Economic Theory, 163: 222–247.

Bohren, J Aislinn, and Daniel N Hauser. 2021. "Learning with heterogeneous misspecified models: Characterization and robustness." Econometrica, 89(6): 3025–3077.

Eliaz, Kfir, and Ran Spiegler. 2020. "A model of competing narratives." American Economic Review, 110(12): 3786–3816.

Esponda, Ignacio. 2008. "Behavioral equilibrium in economies with adverse selection." American Economic Review, 98(4): 1269–1291.

Esponda, Ignacio, and Demian Pouzo. 2016. "Berk–Nash equilibrium: A framework for modeling agents with misspecified models." Econometrica, 84(3): 1093–1130.

Esponda, Ignacio, and Demian Pouzo. 2021. "Equilibrium in misspecified Markov decision processes." Theoretical Economics, 16(2): 717–757.

Esponda, Ignacio, Demian Pouzo, and Yuichi Yamamoto. 2021. "Asymptotic behavior of Bayesian learners with misspecified models." Journal of Economic Theory, 195: 105260.

Eyster, Erik, and Matthew Rabin. 2005. "Cursed equilibrium." Econometrica, 73(5): 1623–1672.

Frick, Mira, Ryota Iijima, and Yuhta Ishii. 2023.
"Belief convergence under misspecified learning: A martingale approach." The Review of Economic Studies, 90(2): 781–814.

Frick, Mira, Ryota Iijima, and Yuhta Ishii. 2024. "Welfare comparisons for biased learning." American Economic Review, 114(6): 1612–1649.

Fudenberg, Drew, Giacomo Lanzani, and Philipp Strack. 2021. "Limit points of endogenous misspecified learning." Econometrica, 89(3): 1065–1098.

Fudenberg, Drew, Giacomo Lanzani, and Philipp Strack. 2024. "Selective-Memory Equilibrium." Journal of Political Economy, 132(12): 3978–4020.

Fudenberg, Drew, Gleb Romanyuk, and Philipp Strack. 2017. "Active learning with a misspecified prior." Theoretical Economics, 12(3): 1155–1189.

Heidhues, Paul, Botond KΕ‘szegi, and Philipp Strack. 2018. "Unrealistic expectations and misguided learning." Econometrica, 86(4): 1159–1214.

Heidhues, Paul, Botond KΕ‘szegi, and Philipp Strack. 2021. "Convergence in models of misspecified learning." Theoretical Economics, 16(1): 73–99.

He, Kevin. 2022. "Mislearning from censored data: The gambler's fallacy and other correlational mistakes in optimal-stopping problems." Theoretical Economics, 17(3): 1269–1312.

He, Kevin, and Jonathan Libgober. 2025. "Higher-Order Beliefs and (Mis)learning from Prices." Working Paper.

Jehiel, Philippe. 2005. "Analogy-based expectation equilibrium." Journal of Economic Theory, 123(2): 81–104.

Milgrom, Paul, and Chris Shannon. 1994. "Monotone comparative statics." Econometrica: Journal of the Econometric Society, 157–180.

Milgrom, Paul, and John Roberts. 1990. "Rationalizability, learning, and equilibrium in games with strategic complementarities." Econometrica: Journal of the Econometric Society, 1255–1277.

Murooka, Takeshi, and Yuichi Yamamoto. 2021. "Multi-Player Bayesian Learning with Misspecified Models." Osaka School of International Public Policy, Osaka University.

Nyarko, Yaw. 1991.
"Learning in mis-specified models and the possibility of cycles." Journal of Economic Theory, 55(2): 416–427.

Pearce, David G. 1984. "Rationalizable strategic behavior and the problem of perfection."
arXiv:2505.20946v1 [math.ST] 27 May 2025

Almost Unbiased Liu Estimator in Bell Regression Model: Theory and Application

Caner TanıŸ† and Yasin Asar‑

† Department of Statistics, Γ‡ankΔ±rΔ± Karatekin University. e-mail: canertanis@karatekin.edu.tr
‑ Department of Mathematics and Computer Sciences, Necmettin Erbakan University, Konya, Turkey. e-mail: yasar@erbakan.edu.tr, yasinasar@hotmail.com

Abstract

In this research, we propose a novel regression estimator as an alternative to the Liu estimator for addressing multicollinearity in the Bell regression model, referred to as the "almost unbiased Liu estimator". Moreover, the theoretical characteristics of the proposed estimator are analyzed, along with several theorems that specify the conditions under which the almost unbiased Liu estimator outperforms its alternatives. A comprehensive simulation study is conducted to demonstrate the superiority of the almost unbiased Liu estimator and to compare it against the Bell Liu estimator and the maximum likelihood estimator. The practical applicability and advantage of the proposed regression estimator are illustrated through a real-world dataset. The results from both the simulation study and the real-world data application indicate that the new almost unbiased Liu regression estimator outperforms its counterparts based on the mean square error criterion.

Keywords: Bell Regression Model, Monte Carlo Simulation, Multicollinearity, Liu Estimator, Almost Unbiased Liu Estimator

Supplementary Information (SI): Appendices 0-5.

1 Introduction

Count regression models are useful for modeling data in various scientific fields such as biology, chemistry, physics, veterinary medicine, agriculture, engineering, and medicine (Walters, 2007). In the literature, the well-known count regression models are listed as follows: Poisson, negative binomial, geometric, and their modified ones. The Poisson distribution has the limitation that the variance is equal to the mean.
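The equidispersion constraint can be checked directly from the Poisson pmf; a minimal sketch (the truncation length is an arbitrary choice):

```python
import math

def poisson_moments(lam, n_terms=200):
    """Mean and variance of Poisson(lam) computed from a truncated pmf,
    illustrating the equidispersion property Var(Y) = E(Y) = lam."""
    pmf, p = [], math.exp(-lam)
    for k in range(n_terms):
        pmf.append(p)
        p *= lam / (k + 1)  # recurrence: p_{k+1} = p_k * lam / (k + 1)
    mean = sum(k * q for k, q in enumerate(pmf))
    var = sum((k - mean) ** 2 * q for k, q in enumerate(pmf))
    return mean, var

m, v = poisson_moments(3.0)  # both approximately 3.0
```

Because the variance can never exceed the mean, the plain Poisson model cannot accommodate overdispersed counts, which motivates the Bell model introduced below.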
This is a disadvantage for the Poisson regression model in modeling inflated data. Multicollinearity negatively affects the maximum likelihood method used to estimate the coefficients of the Poisson regression model. When multicollinearity is present, the disadvantages of the maximum likelihood estimator (MLE) are as follows: it increases the variance and standard error of the estimated regression coefficients and leads to inconsistent estimates. Furthermore, the multicollinearity problem causes unreliable hypothesis testing and wider confidence intervals for the estimated parameters (MΓ₯nsson and Shukur, 2011; Amin et al., 2022).

The literature presents several approaches to address multicollinearity in multiple regression models. Liu (1993) introduced the Liu estimator, which provides a solution to multicollinearity by employing a single biasing parameter, resulting in the estimated coefficients being a linear function of $d$, unlike ridge regression. Recent studies have expanded upon this work by utilizing Liu estimators in various regression models. For instance, the Liu estimator has been extended to the logit and Poisson regression models, with methods proposed to select the biasing parameter. It has also been generalized to negative binomial regression, and researchers have introduced its use in gamma regression as a viable alternative to the maximum likelihood estimator when facing multicollinearity. Moreover, the application of Liu estimators has been explored in Beta regression models, where new variants of Liu-type estimators have been
https://arxiv.org/abs/2505.20946v1
developed to fit the specific needs of these regression models. More recent studies have proposed a novel Liu estimator for Bell regression, with performance evaluations conducted through simulation studies. Comparative analyses between ridge and Liu estimators have also been undertaken, particularly in the context of zero-inflated Bell regression models. Also, advancements include the introduction of a two-parameter estimator for gamma regression, further expanding the utility of Liu-type estimators in addressing multicollinearity across various regression models. Some of the recent references can be listed as follows: MΓ₯nsson et al. (2011), MΓ₯nsson et al. (2012), MΓ₯nsson (2013), Qasim et al. (2018), Karlsson et al. (2020), Algamal and Asar (2020), Algamal and Abonazel (2022), Majid et al. (2022), Algamal et al. (2022), Asar and Algamal (2022), Akram et al. (2022).

Another method to solve multicollinearity in multiple regression models is the use of the almost unbiased estimator introduced by Kadiyala (1984). Recently, almost unbiased estimators have been introduced by several authors. Some studies can be listed as follows: Xinfeng (2015) introduced almost unbiased estimators in the logistic regression model. Al-Taweel and Algamal (2020) examined the performances of some almost unbiased ridge estimators in the zero-inflated negative binomial regression model. Asar and Korkmaz (2022) suggested an almost unbiased Liu-type estimator in the gamma regression model. Erdugan (2022) proposed an almost unbiased Liu-type estimator in the linear regression model. Omara (2023) introduced an almost unbiased Liu-type estimator in the tobit regression model. Ertan et al. (2023) proposed a new Liu-type estimator in the Bell regression model. Algamal et al. (2023) modified the Jackknifed ridge estimator for the Bell regression model.

This study provides a new almost unbiased Liu estimator as an alternative to the Liu estimator in the Bell regression model.
The suggested estimator is also compared to its competitors, namely the Liu estimator and the MLE, in terms of the scalar and matrix mean squared error criteria. Furthermore, one of the objectives of this study is to support the proposed theoretical findings through simulation studies and real data analysis that evaluate the superiority of the proposed estimator over its competitors. The rest of the study is organized as follows: In Section 2, the main properties of the Bell regression model, the definition of the Liu estimator, and a new almost unbiased Liu estimator for the Bell regression model are given. Section 3 compares the estimators via theoretical properties. We consider a comprehensive Monte Carlo simulation study to evaluate the performances of the examined estimators via simulated mean squared error (MSE) and squared bias (SB) criteria. Then, we provide a real-world data example to illustrate the superiority of the proposed estimator over its competitors in Section 5. Finally, concluding remarks are presented in Section 6.

2 Bell Regression Model

Bell (1934a,b) proposed the Bell distribution. The probability mass function (pmf) of the Bell distribution is

P(Y = y | Ξ³) = Ξ³^y exp{βˆ’exp(Ξ³) + 1} B_y / y!,   y = 0, 1, 2, ...   (1)

where Ξ³ > 0 and B_y denotes the Bell numbers, defined as

B_n = (1/e) Ξ£_{k=0}^∞ k^n / k!.

The mean and variance of the Bell distribution are given by

E(Y) = Ξ³ exp(Ξ³),   (2)

and

Var(Y) = Ξ³ (1 + Ξ³) exp(Ξ³),   (3)

respectively (Castellares et al., 2018; Majid et al., 2022). The essential properties of Bell regression can be summarised as follows:

β€’ The Bell distribution is a one-parameter
distribution.
β€’ The Bell distribution is a member of the one-parameter exponential family of distributions.
β€’ The Bell distribution is unimodal.
β€’ The Poisson distribution does not belong to the Bell family of distributions; however, for small values of the parameter, the Bell distribution approximates the Poisson distribution.
β€’ The variance of the Bell distribution is greater than its mean, which indicates that the one-parameter Bell distribution can be suitable for modelling overdispersed data (Castellares et al., 2018; Majid et al., 2022).

Castellares et al. (2018) suggested Bell regression as an alternative to Poisson, negative binomial, and other popular discrete regression models. In a regression model, it is often more useful to model the mean of the dependent variable. Therefore, to obtain a regression structure for the mean of the Bell distribution, the Bell regression model with a different parametrization of the probability function of the Bell distribution is defined by Castellares et al. (2018) as follows. Let ΞΌ = Ξ³ exp(Ξ³), and therefore Ξ³ = W_0(ΞΌ), where W_0 is the Lambert W function. In this regard, the pmf of the Bell distribution is

P(Y = y | ΞΌ) = W_0(ΞΌ)^y exp{1 βˆ’ exp(W_0(ΞΌ))} B_y / y!,   y = 0, 1, 2, ...   (4)

The mean and variance of the Bell distribution are rewritten as

E(Y) = ΞΌ,   (5)

and

Var(Y) = ΞΌ (1 + W_0(ΞΌ)),   (6)

where ΞΌ > 0 and W_0(ΞΌ) > 0. Thus, it is clear that Var(Y) > E(Y). This means that the Bell distribution is potentially suitable for modelling overdispersed count data, like the negative binomial distribution. An advantage of the Bell distribution over the negative binomial distribution is that no additional (dispersion) parameter is required to accommodate overdispersion (Castellares et al., 2018).

Let y_1, y_2, ..., y_n be n independent random variables, where each y_i, for i = 1, 2, ..., n, follows the pmf (Eq.
4) with mean ΞΌ_i; that is, y_i ~ Bell(W_0(ΞΌ_i)) for i = 1, 2, ..., n. Assume the mean of y_i fulfils the following functional relation:

g(ΞΌ_i) = Ξ·_i = x_i^⊀ Ξ²,   i = 1, 2, ..., n,

where Ξ² = (Ξ²_1, Ξ²_2, ..., Ξ²_p)^⊀ ∈ R^p represents a p-dimensional vector of regression coefficients (p < n), Ξ·_i denotes the linear predictor, and x_i^⊀ = (x_i1, x_i2, ..., x_ip) corresponds to the observations for the p known covariates. Note that the variance of y_i depends on ΞΌ_i and, consequently, on the values of the covariates. As a result, the model naturally accommodates non-constant response variances. We assume that the mean link function g : (0, ∞) β†’ R is strictly monotonic and twice differentiable. Several options exist for the mean link function, with examples including the logarithmic link g(ΞΌ) = log(ΞΌ), the square root link g(ΞΌ) = √μ, and the identity link g(ΞΌ) = ΞΌ; the first two, in particular, help ensure the positivity of the fitted means. These functions are also discussed in McCullagh and Nelder (1989).

The parameter vector Ξ² is estimated by the maximum likelihood method, and the log-likelihood function, excluding constant terms, is expressed as

β„“(Ξ²) = Ξ£_{i=1}^n [ y_i log(W_0(ΞΌ_i)) βˆ’ exp(W_0(ΞΌ_i)) ],

where ΞΌ_i = g^{βˆ’1}(Ξ·_i) is a function of Ξ², and g^{βˆ’1}(.) is the inverse of g(.). The score function is given by the p-vector

U(Ξ²) = X^⊀ W^{1/2} V^{βˆ’1/2} (y βˆ’ ΞΌ),

where the model matrix X = (x_1, x_2, ..., x_n)^⊀ has full column rank, W = diag{w_1, w_2, ..., w_n}, V = diag{V_1, V_2, ..., V_n}, y = (y_1, y_2, ..., y_n)^⊀, ΞΌ = (ΞΌ_1, ΞΌ_2, ..., ΞΌ_n)^⊀, and

w_i = (dΞΌ_i / dΞ·_i)^2 / V_i,   V_i = ΞΌ_i [1 + W_0(ΞΌ_i)],   i = 1, 2, ..., n,

where V_i is the variance function of y_i. The Fisher information matrix for Ξ² is given by K(Ξ²) = X^⊀ W X. The maximum likelihood estimator Ξ²Μ‚ = (Ξ²Μ‚_1, Ξ²Μ‚_2, ..., Ξ²Μ‚_p)^⊀ of Ξ² = (Ξ²_1, Ξ²_2, ..., Ξ²_p)^⊀ is obtained as the
solution of U(Ξ²Μ‚) = 0_p, where 0_p refers to a p-dimensional vector of zeros. The maximum likelihood estimator Ξ²Μ‚ lacks a closed-form solution, necessitating numerical computation. For instance, the Newton–Raphson iterative method is one possible approach. Alternatively, the Fisher scoring method may be employed to estimate Ξ² by iteratively solving

Ξ²^{(m+1)} = (X^⊀ W^{(m)} X)^{βˆ’1} X^⊀ W^{(m)} z^{(m)},   (7)

where m = 0, 1, ... is the iteration counter, z = (z_1, z_2, ..., z_n)^⊀ = Ξ· + W^{βˆ’1/2} V^{βˆ’1/2} (y βˆ’ ΞΌ) acts as a modified (working) response variable in Eq. (7), W is a weight matrix, and Ξ· = (Ξ·_1, Ξ·_2, ..., Ξ·_n)^⊀. The maximum likelihood estimate Ξ²Μ‚_MLE can be obtained iteratively from Eq. (7) with any software offering a weighted linear regression routine, such as R (Castellares et al., 2018). Thus, the MLE of Ξ² in the Bell regression model, obtained by the IRLS algorithm at the final step, is

Ξ²Μ‚_MLE = (X^⊀ Ŵ X)^{βˆ’1} X^⊀ Ŵ ẑ,   (8)

where Ŵ and ẑ are computed at the final iteration. The scalar mean squared error (MSE) of Ξ²Μ‚_MLE can be given as (Majid et al., 2022)

MSE(Ξ²Μ‚_MLE) = E( (Ξ²Μ‚_MLE βˆ’ Ξ²)^⊀ (Ξ²Μ‚_MLE βˆ’ Ξ²) ) = tr( (X^⊀ Ŵ X)^{βˆ’1} ) = Ξ£_{j=1}^p 1/Ξ»_j,   (9)

where tr(.) denotes the trace operator and Ξ»_j represents the jth eigenvalue of the weighted cross-product matrix X^⊀ Ŵ X. It is evident from Eq. (9) that the variance of the maximum likelihood estimator (MLE) may be adversely influenced by the ill-conditioning of the matrix X^⊀ Ŵ X, a phenomenon commonly referred to as the multicollinearity problem. For an in-depth discussion of collinearity issues in generalized linear models, refer to Segerstedt (1992) and Mackinnon and Puterman (1989).
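The Fisher scoring/IRLS recursion of Eqs. (7)–(8) can be sketched for the logarithmic link, for which dΞΌ/dΞ· = ΞΌ and the working response simplifies to z = Ξ· + (y βˆ’ ΞΌ)/ΞΌ. This is an illustrative sketch, not the authors' code: the helper `lambert_w0` (a Newton solver for the principal Lambert W branch) and the function names are ours, and convergence safeguards are kept minimal.

```python
import numpy as np

def lambert_w0(mu, iters=50):
    """Solve w * exp(w) = mu for w >= 0 (principal Lambert W branch) by Newton's method."""
    w = np.log1p(mu)  # rough starting point, exact at mu = 0
    for _ in range(iters):
        ew = np.exp(w)
        w = w - (w * ew - mu) / (ew * (1.0 + w))
    return w

def bell_irls(X, y, tol=1e-8, max_iter=100):
    """IRLS / Fisher scoring for Bell regression with log link, per Eq. (7):
    beta <- (X' W X)^{-1} X' W z with z = eta + (y - mu) / mu."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(max_iter):
        eta = X @ beta
        mu = np.exp(eta)                    # log link: mu = exp(eta)
        V = mu * (1.0 + lambert_w0(mu))     # Var(Y) = mu (1 + W0(mu)), Eq. (6)
        w = mu ** 2 / V                     # w_i = (dmu/deta)^2 / V_i with dmu/deta = mu
        z = eta + (y - mu) / mu             # working response for the log link
        Wm = np.diag(w)
        beta_new = np.linalg.solve(X.T @ Wm @ X, X.T @ Wm @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

With noiseless responses y_i = exp(x_i^⊀ Ξ²) the recursion has Ξ² as a fixed point and recovers it to numerical precision, which gives a quick sanity check of the weights and working response.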
Let Q^⊀ X^⊀ Ŵ X Q = Ξ› = diag(Ξ»_1, Ξ»_2, ..., Ξ»_p), where Ξ»_1 β‰₯ Ξ»_2 β‰₯ ... β‰₯ Ξ»_p > 0 are the eigenvalues of X^⊀ Ŵ X arranged in descending order, and Q is a p Γ— p matrix whose columns are the normalized eigenvectors of X^⊀ Ŵ X. Consequently, we have the relationship Ξ± = Q^⊀ Ξ², and the maximum likelihood estimator (MLE) in its canonical form can be expressed as Ξ±Μ‚_MLE = Q^⊀ Ξ²Μ‚_MLE.

2.1 The Bell Liu Estimator

The Liu estimator (LE) is proposed by Majid et al. (2022) for the Bell regression model as follows:

Ξ²Μ‚_LE = (F + I)^{βˆ’1} F_d Ξ²Μ‚_MLE = E_d Ξ²Μ‚_MLE,   (10)

where 0 < d < 1, F = X^⊀ Ŵ X, F_d = (F + dI), and E_d = (F + I)^{βˆ’1} F_d. The covariance matrix and bias vector of the LE are, respectively,

Cov(Ξ²Μ‚_LE) = E_d F^{βˆ’1} E_d^⊀,   (11)

bias(Ξ²Μ‚_LE) = (d βˆ’ 1) (F + I)^{βˆ’1} Ξ².   (12)

Thus, the matrix mean squared error (MMSE) and MSE functions of the LE are

MMSE(Ξ²Μ‚_LE) = E_d F^{βˆ’1} E_d^⊀ + (d βˆ’ 1)^2 (F + I)^{βˆ’1} Ξ² Ξ²^⊀ (F + I)^{βˆ’1},   (13)

MSE(Ξ²Μ‚_LE) = Ξ£_{j=1}^p (Ξ»_j + d)^2 / (Ξ»_j (Ξ»_j + 1)^2) + (d βˆ’ 1)^2 Ξ£_{j=1}^p Ξ±_j^2 / (Ξ»_j + 1)^2,   (14)

where Ξ±_j is the jth component of Ξ±.

2.2 The New Almost Unbiased Bell Liu Estimator

In this subsection, we propose a new estimator, called the almost unbiased Liu estimator (AULE), as an alternative to the LE and the MLE in the Bell regression model.

Definition 2.1 (Xu and Yang, 2011). Suppose Ξ²Μ‚ is a biased estimator of the parameter vector Ξ², and the bias vector of Ξ²Μ‚ is given by b(Ξ²Μ‚) = E(Ξ²Μ‚) βˆ’ Ξ² = RΞ², so that E(Ξ²Μ‚ βˆ’ RΞ²) = Ξ². Then the estimator ˜β = Ξ²Μ‚ βˆ’ RΞ²Μ‚ = (I βˆ’ R)Ξ²Μ‚ is called the almost unbiased estimator based on the biased estimator Ξ²Μ‚.
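The LE of Eq. (10) and its scalar MSE in Eq. (14) are straightforward to compute once F = X^⊀ Ŵ X and Ξ²Μ‚_MLE are available. The sketch below uses our own function names and assumes Ξ± is ordered to match the ascending eigenvalues returned by `eigvalsh`; it is an illustration, not the authors' implementation.

```python
import numpy as np

def liu_estimator(F, beta_mle, d):
    """Bell Liu estimator, Eq. (10): beta_LE = (F + I)^{-1} (F + d I) beta_MLE."""
    I = np.eye(F.shape[0])
    return np.linalg.solve(F + I, (F + d * I) @ beta_mle)

def liu_mse(F, alpha, d):
    """Scalar MSE of the LE, Eq. (14), in terms of the eigenvalues lambda_j of F
    and the canonical coefficients alpha_j (ordered to match eigvalsh's output)."""
    lam = np.linalg.eigvalsh(F)  # ascending order
    variance = np.sum((lam + d) ** 2 / (lam * (lam + 1) ** 2))
    sq_bias = (d - 1) ** 2 * np.sum(alpha ** 2 / (lam + 1) ** 2)
    return variance + sq_bias
```

Note that at d = 1 the LE reduces to the MLE, since (F + I)^{βˆ’1}(F + I) = I; this gives a convenient consistency check.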
Based on Definition 2.1, the almost unbiased Bell Liu estimator (AULE) can be defined by

Ξ²Μ‚_AULE = Ξ²Μ‚_LE βˆ’ ( βˆ’(1 βˆ’ d)(F + I)^{βˆ’1} Ξ²Μ‚_LE )
       = Ξ²Μ‚_LE + (1 βˆ’ d)(F + I)^{βˆ’1} Ξ²Μ‚_LE
       = ( I + (1 βˆ’ d)(F + I)^{βˆ’1} ) Ξ²Μ‚_LE
       = ( I + (1 βˆ’ d)(F + I)^{βˆ’1} ) E_d Ξ²Μ‚_MLE
       = ( I + (1 βˆ’ d)(F + I)^{βˆ’1} )( I βˆ’ (1 βˆ’ d)(F + I)^{βˆ’1} ) Ξ²Μ‚_MLE
       = ( I βˆ’ (1 βˆ’ d)^2 (F + I)^{βˆ’2} ) Ξ²Μ‚_MLE,   (15)

where the penultimate step uses E_d = (F + I)^{βˆ’1}(F + I βˆ’ (1 βˆ’ d)I) = I βˆ’ (1 βˆ’ d)(F + I)^{βˆ’1}, and βˆ’βˆž < d < ∞ is a biasing parameter (Alheety and Kibria, 2009). According to our literature review, the AULE has not been suggested or studied in the Bell regression model. In the Bell regression model, the AULE is

Ξ²Μ‚_AULE = ( I βˆ’ (1 βˆ’ d)^2 (F + I)^{βˆ’2} ) Ξ²Μ‚_MLE.

The covariance matrix and bias vector of the AULE are

Cov(Ξ²Μ‚_AULE) = ( I βˆ’ (1 βˆ’ d)^2 (F + I)^{βˆ’2} ) Cov(Ξ²Μ‚_MLE) ( I βˆ’ (1 βˆ’ d)^2 (F + I)^{βˆ’2} )^⊀
            = ( I βˆ’ (1 βˆ’ d)^2 (F + I)^{βˆ’2} ) F^{βˆ’1} ( I βˆ’ (1 βˆ’ d)^2 (F + I)^{βˆ’2} )^⊀,   (16)

and

Bias(Ξ²Μ‚_AULE) = E(Ξ²Μ‚_AULE) βˆ’ Ξ² = βˆ’(1 βˆ’ d)^2 (F + I)^{βˆ’2} Ξ²,   (17)

respectively. In this regard, the MMSE and MSE of the AULE are, respectively,

MMSE(Ξ²Μ‚_AULE) = ( I βˆ’ (1 βˆ’ d)^2 (F + I)^{βˆ’2} ) F^{βˆ’1} ( I βˆ’ (1 βˆ’ d)^2 (F + I)^{βˆ’2} ) + (1 βˆ’ d)^4 (F + I)^{βˆ’4} Ξ² Ξ²^⊀,   (18)

and

MSE(Ξ²Μ‚_AULE) = Ξ£_{j=1}^p (Ξ»_j + d)^2 (Ξ»_j + 2 βˆ’ d)^2 / (Ξ»_j (Ξ»_j + 1)^4) + (1 βˆ’ d)^4 Ξ£_{j=1}^p Ξ±_j^2 / (Ξ»_j + 1)^4.   (19)

3 Theoretical Comparisons Between Estimators

In this section, we establish the superiority of the AULE over the LE and the MLE via several theorems.
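Before the formal comparisons, the equivalence of the two forms of the AULE in Eq. (15) and its scalar MSE in Eq. (19) can be verified numerically. This is our own sketch with hypothetical helper names and an arbitrary small test matrix, not code from the paper.

```python
import numpy as np

def aule(F, beta_mle, d):
    """Almost unbiased Bell Liu estimator, final form of Eq. (15):
    beta_AULE = (I - (1-d)^2 (F+I)^{-2}) beta_MLE."""
    I = np.eye(F.shape[0])
    A = np.linalg.inv(F + I)
    return (I - (1.0 - d) ** 2 * (A @ A)) @ beta_mle

def aule_mse(F, alpha, d):
    """Scalar MSE of the AULE, Eq. (19); alpha ordered to match eigvalsh's
    ascending eigenvalues of F."""
    lam = np.linalg.eigvalsh(F)
    variance = np.sum((lam + d) ** 2 * (lam + 2 - d) ** 2 / (lam * (lam + 1) ** 4))
    sq_bias = (1 - d) ** 4 * np.sum(alpha ** 2 / (lam + 1) ** 4)
    return variance + sq_bias
```

Applying Definition 2.1 directly, i.e. computing (I + (1 βˆ’ d)(F + I)^{βˆ’1}) Ξ²Μ‚_LE, should reproduce `aule` exactly, and for d inside the interval of Theorem 2 the AULE's MSE should fall below the LE's.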
The squared bias of an estimator Ξ²Μ‚ is defined as

SB(Ξ²Μ‚) = Bias(Ξ²Μ‚)^⊀ Bias(Ξ²Μ‚) = β€–Bias(Ξ²Μ‚)β€–_2^2.

In this regard, we compare the squared biases of the LE and the AULE in the following theorem.

Theorem 1. The squared bias of the AULE is lower than that of the LE for d ∈ (βˆ’Ξ»_j, Ξ»_j + 2), namely,

β€–Bias(Ξ²Μ‚_LE)β€–_2^2 βˆ’ β€–Bias(Ξ²Μ‚_AULE)β€–_2^2 > 0.

Proof. The difference in squared bias is

β€–Bias(Ξ²Μ‚_LE)β€–^2 βˆ’ β€–Bias(Ξ²Μ‚_AULE)β€–^2 = Ξ£_{j=1}^p ( (d βˆ’ 1)^2 Ξ±_j^2 / (Ξ»_j + 1)^2 βˆ’ (d βˆ’ 1)^4 Ξ±_j^2 / (Ξ»_j + 1)^4 )
  = Ξ£_{j=1}^p (d βˆ’ 1)^2 Ξ±_j^2 [ (Ξ»_j + 1)^2 βˆ’ (d βˆ’ 1)^2 ] / (Ξ»_j + 1)^4.

Considering that (d βˆ’ 1)^2 > 0, Ξ±_j^2 > 0, and (Ξ»_j + 1)^4 > 0, it is sufficient for β€–Bias(Ξ²Μ‚_LE)β€–^2 βˆ’ β€–Bias(Ξ²Μ‚_AULE)β€–^2 to be positive that (Ξ»_j + 1)^2 βˆ’ (d βˆ’ 1)^2 > 0.
Thus, we can investigate the positivity of the following function:

f_bias(d) = (Ξ»_j + 1)^2 βˆ’ (d βˆ’ 1)^2 = ( (Ξ»_j + 1) βˆ’ (d βˆ’ 1) )( (Ξ»_j + 1) + (d βˆ’ 1) ) = (Ξ»_j βˆ’ d + 2)(Ξ»_j + d).

The function f_bias(d) is positive on the interval d ∈ (βˆ’Ξ»_j, Ξ»_j + 2). Thus, the proof is completed.

Now, we compare the MSE functions of the LE and the AULE in the following theorem.

Theorem 2. In the Bell regression model, the AULE has a lower MSE value than the LE if d ∈ (1, Ξ»_j + 2) for j = 1, 2, ..., p, namely,

MSE(Ξ²Μ‚_LE) βˆ’ MSE(Ξ²Μ‚_AULE) > 0.

Proof. From Eqs. (14) and (19), the difference in scalar MSE is

MSE(Ξ²Μ‚_LE) βˆ’ MSE(Ξ²Μ‚_AULE)
  = Ξ£_{j=1}^p [ (Ξ»_j + d)^2 + (d βˆ’ 1)^2 Ξ»_j Ξ±_j^2 ] / (Ξ»_j (Ξ»_j + 1)^2)
    βˆ’ Ξ£_{j=1}^p [ (Ξ»_j + d)^2 (Ξ»_j + 2 βˆ’ d)^2 + (1 βˆ’ d)^4 Ξ»_j Ξ±_j^2 ] / (Ξ»_j (Ξ»_j + 1)^4)
  = Ξ£_{j=1}^p (Ξ»_j + 1)^2 [ (Ξ»_j + d)^2 + (d βˆ’ 1)^2 Ξ»_j Ξ±_j^2 ] / (Ξ»_j (Ξ»_j + 1)^4)
    βˆ’ Ξ£_{j=1}^p [ (Ξ»_j + d)^2 (Ξ»_j + 2 βˆ’ d)^2 + (1 βˆ’ d)^4 Ξ»_j Ξ±_j^2 ] / (Ξ»_j (Ξ»_j + 1)^4)
  = Ξ£_{j=1}^p (Ξ»_j + d)^2 [ (Ξ»_j + 1)^2 βˆ’ (Ξ»_j + 2 βˆ’ d)^2 ] / (Ξ»_j (Ξ»_j + 1)^4)
    + Ξ£_{j=1}^p Ξ±_j^2 (d βˆ’ 1)^2 [ (Ξ»_j + 1)^2 βˆ’ (1 βˆ’ d)^2 ] / (Ξ»_j + 1)^4.

Considering (Ξ»_j + 1)^4 > 0 and Ξ±_j^2 > 0, if (Ξ»_j + 1)^2 βˆ’ (Ξ»_j + 2 βˆ’ d)^2 > 0 and (Ξ»_j + 1)^2 βˆ’ (1 βˆ’ d)^2 > 0 for j = 1, 2, ..., p, the difference between the scalar MSEs of the LE and the AULE becomes positive. First,

f_MSE1(d) = (Ξ»_j + 1)^2 βˆ’ (Ξ»_j + 2 βˆ’ d)^2 = (2Ξ»_j + 3 βˆ’ d)(d βˆ’ 1),

which is positive for d ∈ (1, 2Ξ»_j + 3). Second,

f_MSE2(d) = (Ξ»_j + 1)^2 βˆ’ (1 βˆ’ d)^2 = (Ξ»_j + 2 βˆ’ d)(Ξ»_j + d),

which is positive for d ∈ (βˆ’Ξ»_j, Ξ»_j + 2). Both f_MSE1(d) and f_MSE2(d) are positive only on the intersection of these two intervals, d ∈ (1, Ξ»_j + 2). Thus, MSE(Ξ²Μ‚_LE) βˆ’ MSE(Ξ²Μ‚_AULE) > 0 for d ∈ (1, Ξ»_j + 2). The proof is completed.

Now, we compare the variances of the MLE and the AULE in the following theorem.

Theorem 3. The AULE has a lower variance value than the MLE, i.e. Var(Ξ²Μ‚_MLE) βˆ’ Var(Ξ²Μ‚_AULE) > 0, when d ∈ ( 1 βˆ’ √2 (Ξ»_j + 1), 1 + √2 (Ξ»_j + 1) ).

Proof. The difference in variances is

Var(Ξ²Μ‚_MLE) βˆ’ Var(Ξ²Μ‚_AULE) = Ξ£_{j=1}^p 1/Ξ»_j βˆ’ Ξ£_{j=1}^p (Ξ»_j + d)^2 (Ξ»_j + 2 βˆ’ d)^2 / (Ξ»_j (Ξ»_j + 1)^4)
  = Ξ£_{j=1}^p [ (Ξ»_j + 1)^4 βˆ’ (Ξ»_j + d)^2 (Ξ»_j + 2 βˆ’ d)^2 ] / (Ξ»_j (Ξ»_j + 1)^4).

The difference between the variances of the MLE and the AULE is positive when (Ξ»_j + 1)^4 βˆ’ (Ξ»_j + d)^2 (Ξ»_j + 2 βˆ’ d)^2 > 0. Considering

f_Var(d) = (Ξ»_j + 1)^4 βˆ’ (Ξ»_j + d)^2 (Ξ»_j + 2 βˆ’ d)^2
  = ( (Ξ»_j + 1)^2 βˆ’ (Ξ»_j + d)(Ξ»_j + 2 βˆ’ d) )( (Ξ»_j + 1)^2 + (Ξ»_j + d)(Ξ»_j + 2 βˆ’ d) )
  = (1 βˆ’ d)^2 ( 2Ξ»_j^2 + 4Ξ»_j + 1 + 2d βˆ’ d^2 )
  = βˆ’(1 βˆ’ d)^2 ( d^2 βˆ’ 2d βˆ’ (2Ξ»_j^2 + 4Ξ»_j + 1) ),

and noting that the roots of d^2 βˆ’ 2d βˆ’ (2Ξ»_j^2 + 4Ξ»_j + 1) = 0 are d = 1 Β± √(2(Ξ»_j + 1)^2) = 1 Β± √2 (Ξ»_j + 1), the function f_Var(d) is positive for d ∈ ( 1 βˆ’ √2 (Ξ»_j + 1), 1 + √2 (Ξ»_j + 1) ). Thus, the proof is completed.

3.1 Selection of the Parameter d

We use the following procedure to select the parameter d. By differentiating Eq.
(19) with respect to d and equating it to zero, we obtain

βˆ‚/βˆ‚d MSE(Ξ²Μ‚_AULE) = Ξ£_{j=1}^p [ 4(1 βˆ’ d) / (Ξ»_j (Ξ»_j + 1)^4) ] { (Ξ»_j + d)(Ξ»_j + 2 βˆ’ d) βˆ’ (1 βˆ’ d)^2 Ξ»_j Ξ±_j^2 } = 0.

Since (Ξ»_j + 1)^4 is always positive, it is enough (for d β‰  1) to find d satisfying

(Ξ»_j + d)(Ξ»_j + 2 βˆ’ d) βˆ’ (1 βˆ’ d)^2 Ξ»_j Ξ±_j^2 = 0.

Then, by solving the above quadratic equation in d, we derive the following optimal biasing parameters:

d_j = 1 βˆ“ (Ξ»_j + 1) √( 1 / (1 + Ξ»_j Ξ±_j^2) ).

We suggest using d_j = 1 βˆ’ (Ξ»_j + 1) √( 1 / (1 + Ξ»_j Ξ±_j^2) ). In this paper, we suggest the following estimators of d:

AULE(d1): d1 = harmmean(d_j),
AULE(d2): d2 = median(d_j),
AULE(d3): d3 = max(d_j).

4 Monte Carlo Simulation

In this section, we conduct a comprehensive Monte Carlo simulation study to evaluate and compare the mean squared error (MSE) performance of the estimators. Given that one of our primary objectives is to examine the behavior of the estimators in the presence of multicollinearity, we generate the design matrix X following the methodology outlined by Amin et al. (2023):

x_ij = (1 βˆ’ ρ^2)^{1/2} w_ij + ρ w_{i(p+1)},   i = 1, 2, ..., n,   j = 1, 2, ..., p,

where the w_ij are independent standard normal pseudo-random numbers and ρ determines the degree of correlation between any two explanatory variables, which is given by ρ^2.
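The optimal d_j and the three summaries d1, d2, d3 can be checked against the stationarity condition above. This sketch uses our own helper names; in particular, the harmonic mean is computed directly, which matches the paper's harmmean(d_j) only up to the usual caveat that d_j values may be negative.

```python
import numpy as np

def optimal_d(lam, alpha):
    """Suggested root of the stationarity condition:
    d_j = 1 - (lambda_j + 1) * sqrt(1 / (1 + lambda_j * alpha_j^2))."""
    return 1.0 - (lam + 1.0) * np.sqrt(1.0 / (1.0 + lam * alpha ** 2))

def d_estimators(lam, alpha):
    """The three suggested summaries of d_j: harmonic mean, median, maximum."""
    dj = optimal_d(lam, alpha)
    d1 = len(dj) / np.sum(1.0 / dj)  # harmonic mean
    d2 = np.median(dj)
    d3 = np.max(dj)
    return d1, d2, d3
```

Each d_j returned by `optimal_d` satisfies (Ξ»_j + d)(Ξ»_j + 2 βˆ’ d) βˆ’ (1 βˆ’ d)Β² Ξ»_j Ξ±_jΒ² = 0 exactly, which is a direct check of the algebra leading to the closed form.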
Table 1: Simulated squared bias values when p = 4

 n    ρ     LE       AULE(d1)  AULE(d2)  AULE(d3)
 100  0.80  4.6585   4.6086    4.4742    4.6769
 200  0.80  4.4266   4.3918    4.2099    4.4361
 400  0.80  3.7487   3.7224    3.5122    3.7525
 100  0.90  5.2551   5.2160    5.1094    5.2752
 200  0.90  4.7315   4.7049    4.5809    4.7414
 400  0.90  3.0218   3.0003    2.9350    3.0245
 100  0.95  5.5127   5.4989    5.4760    5.5330
 200  0.95  4.8069   4.7955    4.7699    4.8169
 400  0.95  2.4927   2.4862    2.4831    2.4948

Table 2: Simulated squared bias values when p = 8

 n    ρ     LE       AULE(d1)  AULE(d2)  AULE(d3)
 100  0.80  5.7714   5.6681    5.4469    5.7929
 200  0.80  4.4813   4.3938    4.0834    4.4905
 400  0.80  2.8507   2.7833    2.5532    2.8530
 100  0.90  6.6413   6.5653    6.4233    6.6643
 200  0.90  3.7775   3.7266    3.6682    3.7849
 400  0.90  2.2306   2.1239    2.0961    2.2189
 100  0.95  7.1071   7.0824    7.0585    7.1304
 200  0.95  3.2036   3.2065    3.2063    3.2123
 400  0.95  1.7250   1.6709    1.6545    1.7213

Table 3: Simulated squared bias values when p = 12

 n    ρ     LE        AULE(d1)  AULE(d2)  AULE(d3)
 100  0.80  16.6143   16.5651   16.3483   16.6376
 200  0.80  9.3368    9.2602    8.9179    9.3497
 400  0.80  1.2888    1.2171    1.2033    1.2719
 100  0.90  18.4322   18.3763   18.1337   18.4540
 200  0.90  9.0344    8.9579    8.6998    9.0471
 400  0.90  5.8266    5.7646    5.5946    5.8322
 100  0.95  19.2328   19.1946   19.0770   19.2529
 200  0.95  13.6958   13.6524   13.5391   13.7079
 400  0.95  8.8918    8.8643    8.8046    8.8982

Table 4: Simulated MSE values when p = 4

 n    ρ     MLE       LE       AULE(d1)  AULE(d2)  AULE(d3)
 100  0.80  15.3684   4.6838   4.6333    4.4986    4.7023
 200  0.80  14.8173   4.4380   4.4030    4.2211    4.4475
 400  0.80  13.3872   3.7565   3.7299    3.5184    3.7603
 100  0.90  16.5517   5.2945   5.2482    5.1354    5.3143
 200  0.90  15.4715   4.7503   4.7207    4.5908    4.7601
 400  0.90  11.7297   3.0397   3.0136    2.9411    3.0421
 100  0.95  17.0654   5.5808   5.5327    5.5002    5.5956
 200  0.95  15.6238   4.8431   4.8150    4.7800    4.8511
 400  0.95  10.3919   2.5382   2.5024    2.4961    2.5321

Table 5: Simulated MSE values when p = 8

 n    ρ     MLE       LE       AULE(d1)  AULE(d2)  AULE(d3)
 100  0.80  25.0197   5.8124   5.7071    5.4855    5.8341
 200  0.80  22.0185   4.5115   4.4218    4.1066    4.5207
 400  0.80  17.5404   2.8820   2.8101    2.5686    2.8843
 100  0.90  27.0835   6.7053   6.6125    6.4533    6.7282
 200  0.90  20.2087   3.8454   3.7655    3.6878    3.8513
 400  0.90  15.6651   2.6264   2.2375    2.1897    2.5359
 100  0.95  28.1156   7.2217   7.1323    7.0908    7.2397
 200  0.95  18.6185   3.3703   3.2506    3.2402    3.3452
 400  0.95  13.6104   2.3251   1.8060    1.7470    2.2019

Table 6: Simulated MSE values when p = 12

 n    ρ     MLE       LE        AULE(d1)  AULE(d2)  AULE(d3)
 100  0.80  54.8945   16.6320   16.5829   16.3667   16.6553
 200  0.80  40.3239   9.3505    9.2739    8.9333    9.3634
 400  0.80  8.5767    1.5107    1.2721    1.2473    1.4435
 100  0.90  58.3163   18.4548   18.3980   18.1533   18.4767
 200  0.90  39.6896   9.0631    8.9823    8.7148    9.0759
 400  0.90  32.0425   5.8544    5.7866    5.6060    5.8599
 100  0.95  59.8244   19.2839   19.2314   19.0972   19.3042
 200  0.95  49.3954   13.7293   13.6769   13.5513   13.7413
 400  0.95  39.3569   8.9406    8.8976    8.8187    8.9468

The sample size n increases as 100, 200 and 400, and the number of predictor variables p is taken as 4, 8 and 12. In this setting, ρ controls the degree of correlation between the predictors, and it is considered as 0.8, 0.9 and 0.95. The n observations of the response variable are generated from the Bell distribution, y_i ~ Bell(W_0(ΞΌ_i)), where ΞΌ_i = exp(x_i^⊀ Ξ²), i = 1, 2, ..., n. The number of repetitions in the simulation is taken as 1000.
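The simulation design above can be sketched as follows. `make_design` implements the correlated-design recipe of Amin et al. (2023), and `simulated_mse` computes the Monte Carlo MSE criterion over replications; the helper names are ours, and Bell response generation is omitted since it requires a Bell random number generator not described here.

```python
import numpy as np

def make_design(n, p, rho, rng):
    """Correlated design: x_ij = sqrt(1 - rho^2) * w_ij + rho * w_{i(p+1)},
    so any two columns have correlation about rho^2."""
    w = rng.normal(size=(n, p + 1))
    return np.sqrt(1.0 - rho ** 2) * w[:, :p] + rho * w[:, [p]]

def simulated_mse(estimates, beta):
    """Simulated MSE over R replications: mean of ||beta_hat_r - beta||^2,
    given an (R, p) array of replicate estimates."""
    diff = estimates - beta
    return np.mean(np.sum(diff ** 2, axis=1))
```

For ρ = 0.9, for example, the sample correlation between any two generated columns concentrates around ρ² = 0.81 as n grows, which matches the stated design property.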
The simulated MSE of an estimator Ξ²Μ‚* is computed as follows:

MSE(Ξ²Μ‚*) = (1/1000) Ξ£_{r=1}^{1000} (Ξ²Μ‚* βˆ’ Ξ²)_r^⊀ (Ξ²Μ‚* βˆ’ Ξ²)_r.

In the simulation study, the Bell regression model is fitted without any standardization and without an intercept. The results of the Monte Carlo simulation study are presented in Tables 1–6. From the simulation results, we draw the following conclusions:

β€’ As the sample sizes increase, all MSEs and squared biases decrease, as expected.
β€’ The AULE is generally superior to its competitors, the LE and the MLE, in terms of MSE.
β€’ The squared bias of the AULE is smaller than that of the LE for d1 and d2.
β€’ In all settings, the MSE of the AULE is smaller than that of the LE for d1 and d2.
β€’ In all selected cases, the MSE of AULE(d2) is smaller than those of its competitors.

Finally, we conclude that the AULE is a good alternative to the LE and the MLE in the Bell regression model.

5 Real Data Application

In this section, we present a real data example to illustrate the superiority of the AULE over its competitors, the MLE and the LE, in the Bell regression model. For this reason, we analyse the plastic plywood data set given by Marcondes Filho and Sant'Anna (2016). The data set relates to the quality of plastic plywood. Plywood is a composite material created by layering thin veneers of wood, which results in a structure that is both strong and moderately flexible. The descriptions of the variables in the plastic plywood data set are given in Table 7.

Table 7: The description of the plastic plywood data

 y (response variable)  the number of defects per laminated plastic plywood area
 x1                     volumetric shrinkage
 x2                     assembly time
 x3                     wood density
 x4                     drying temperature

The design matrix is centered and standardized so that X^⊀X is in correlation form before the estimators are obtained. A Bell regression model without intercept is fitted. The MLE, LE and AULE are computed, and their coefficients and MSE values are given in Table 8. The condition number, defined as the square root of the ratio of the maximum and minimum eigenvalues of the matrix X^⊀ Ŵ X, is computed as 74.5281, which shows that there is a severe collinearity problem in this data set. The eigenvalues of X^⊀ Ŵ X are obtained as 27.8119, 2.6898, 0.1976 and 0.0050. According to Table 8, we observe that the MSE of AULE(d2) is lower than the MSEs of AULE(d1), AULE(d3), the LE and the MLE. Also, AULE(d1) and AULE(d2) are both superior to the LE and the MLE in terms of MSE. We conclude that the AULE(d2) estimator with parameter d2 performs better than AULE(d1) and AULE(d3) with parameters d1 and d3 in terms of MSE in the real data analysis.
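The collinearity diagnostic used here is easy to reproduce: the condition number is the square root of the ratio of the largest to smallest eigenvalue of X^⊀ Ŵ X. Below is our own sketch, applied to a diagonal matrix built from the paper's reported (rounded) eigenvalues, so the result only approximately matches the reported 74.5281.

```python
import numpy as np

def condition_number(M):
    """Square root of the ratio of the largest to the smallest eigenvalue
    of a symmetric positive definite matrix M (e.g. X' W X)."""
    lam = np.linalg.eigvalsh(M)  # ascending order
    return float(np.sqrt(lam[-1] / lam[0]))

# Reported eigenvalues of X' W X for the plywood data (rounded in the paper):
reported = np.diag([27.8119, 2.6898, 0.1976, 0.0050])
cn = condition_number(reported)  # close to the reported 74.5281
```

A condition number well above common rules of thumb (e.g. 30) signals severe multicollinearity, which is what motivates the biased estimators in this application.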
Table 8: Coefficients and MSE values of the estimators

        MLE       LE        AULE(d1)  AULE(d2)  AULE(d3)
 Ξ²Μ‚1     13.2792   18.1904   9.8211    8.8019    12.4839
 Ξ²Μ‚2     1.2203    -3.4026   4.6221    5.6241    2.0039
 Ξ²Μ‚3     8.8130    10.5962   7.6453    7.3012    8.5444
 Ξ²Μ‚4     5.9243    4.6560    7.1704    7.5377    6.2108
 MSE    205.1824  728.8193  60.2740   55.5340   154.2389
 SB     0.0000    50.2975   26.4350   44.3130   1.3980

Figure 1: MSE values of the estimators for p = 4 (panels for n ∈ {100, 200, 400} and ρ ∈ {0.8, 0.9, 0.95}).

Figure 2: MSE values of the estimators for p = 8 (panels for n ∈ {100, 200, 400} and ρ ∈ {0.8, 0.9, 0.95}).
Figure 3: MSE values of the estimators for p = 12 (panels for n ∈ {100, 200, 400} and ρ ∈ {0.8, 0.9, 0.95}).

Figure 4: MSEs and SBs of the estimators for βˆ’1 < d < 1.

Figure 5: MSEs and SBs of the estimators for 0 < d < 1.

6 Conclusion

In this paper, we introduced a new biased estimator, called the AULE, as an alternative to the LE and the MLE in the Bell regression model. We proved three theorems establishing the conditions under which the AULE is superior to the LE and the MLE. The AULE is numerically superior to the LE and the MLE regarding the scalar MSE and the squared bias. We also conducted a comprehensive Monte Carlo simulation study to show that the theoretically proved conditions hold in practice. According to the findings of the simulation study, the AULE has smaller squared bias and MSE values than the LE and the MLE. In the real-world data example, the results also support the simulation results. In conclusion, we recommend the AULE as an effective competitor to the LE and the MLE in the Bell regression model. In future work, other estimators can be considered as alternatives to the AULE in the Bell regression model.

Acknowledgements This study was supported by TUBITAK 2218-National Postdoctoral Research Fellowship Programme with project number 122C104.

Author Contributions Caner Tanış: Introduction, Methodology, Simulation, Real data application, Writing-original draft. Yasin Asar: Methodology, Simulation, Real data application, Writing-reviewing & editing.

Funding The authors declare that they have no financial interests.

Data Availability The dataset supporting the findings of this study is openly available in the reference list.

Declarations

Conflict of interest All authors declare that they have no conflict of interest.

Ethics statements The paper is not under consideration for publication in any other venue or language at this time.

Bibliography

Akram, M. N., Amin, M., Sami, F., Mastor, A. B., Egeh, O. M., Muse, A. H. (2022).
A new Conway Maxwell–Poisson Liu regression estimator–method and application. Journal of Mathematics, Article ID 3323955, https://doi.org/10.1155/2022/3323955.

Algamal, Z. Y., Asar, Y. (2020). Liu-type estimator for the gamma regression model. Communications in Statistics–Simulation and Computation, 49(8), 2035–2048.

Algamal, Z. Y., Lukman, A. F., Abonazel, M. R., Awwad, F. A. (2022). Performance of the ridge and Liu estimators in the zero-inflated Bell regression model. Journal of Mathematics, Volume 2022, Article ID 9503460.

Algamal, Z. Y., Abonazel, M. R. (2022). Developing a Liu-type estimator in beta regression model. Concurrency and Computation: Practice and Experience, 34(5), e6685.

Algamal, Z., Lukman, A., Golam, B. K., Taofik, A. (2023). Modified Jackknifed ridge estimator in Bell regression model: theory, simulation and applications. Iraqi Journal For Computer Science and Mathematics, 4(1), 146–154.

Alheety, M. I., Kibria, B. G. (2009). On the Liu and almost unbiased Liu estimators in the presence of multicollinearity with heteroscedastic or correlated errors. Surveys in Mathematics and its Applications, 4, 155–167.

Al-Taweel, Y., Algamal, Z. (2020). Some almost unbiased ridge regression estimators for the zero-inflated negative binomial regression model. Periodicals of Engineering and Natural Sciences, 8(1), 248–255.
Amin, M., Qasim, M., Afzal, S., Naveed, K. (2022). New ridge estimators in the inverse Gaussian regression: Monte Carlo simulation and application to chemical data. Communications in Statistics–Simulation and Computation, 51(10), 6170–6187.

Amin, M., Akram, M. N., Majid, A. (2023). On the estimation of Bell regression model using ridge estimator. Communications in Statistics–Simulation and Computation, 52(3), 854–867.

Asar, Y., Algamal, Z. (2022). A new two-parameter estimator for the gamma regression model. Statistics, Optimization & Information Computing, 10(3), 750–761.

Asar, Y., Korkmaz, M. (2022). Almost unbiased Liu-type estimators in gamma regression model. Journal of Computational and Applied Mathematics, 403, 113819.

Bell, E. T. (1934a). Exponential polynomials. Annals of Mathematics, 258–277.

Bell, E. T. (1934b). Exponential numbers. The American Mathematical Monthly, 41(7), 411–419.

Castellares, F., Ferrari, S. L., Lemonte, A. J. (2018). On the Bell distribution and its associated regression model for count data. Applied Mathematical Modelling, 56, 172–185.

Erdugan, F. (2022). An almost unbiased Liu-type estimator in the linear regression model. Communications in Statistics–Simulation and Computation, 1–13.

Ertan, E., Algamal, Z. Y., Erkoç, A., Akay, K. U. (2023). A new improvement Liu-type estimator for the Bell regression model. Communications in Statistics–Simulation and Computation, 1–12.

Marcondes Filho, D., Sant'Anna, A. M. O. (2016). Principal component regression-based control charts for monitoring count data. The International Journal of Advanced Manufacturing Technology, 85, 1565–1574.

Kadiyala, K. (1984). A class of almost unbiased and efficient estimators of regression coefficients. Economics Letters, 16(3-4), 293–296.

Karlsson, P., MΓ₯nsson, K., Kibria, B. M. G. (2020). A Liu estimator for the beta regression model and its application to chemical data. Journal of Chemometrics, 34(10), e3300.

Liu, K. (1993).
A new class of biased estimate in linear regression. Communications in Statistics–Theory and Methods, 22(2), 393–402.

Mackinnon, M. J., Puterman, M. L. (1989). Collinearity in generalized linear models. Communications in Statistics–Theory and Methods, 18(9), 3463–3472.

MΓ₯nsson, K., Shukur, G. (2011). A Poisson ridge regression estimator. Economic Modelling, 28, 1475–1481.

MΓ₯nsson, K., Kibria, B. G., SjΓΆlander, P., Shukur, G., Sweden, V. (2011). New Liu Estimators for the Poisson regression model: Method and application, 51. HUI Research.

MΓ₯nsson, K., Kibria, B. G., SjΓΆlander, P., Shukur, G. (2012). Improved Liu estimators for the Poisson regression model. International Journal of Statistics and Probability, 1(1), 1–5.

MΓ₯nsson, K. (2013). Developing a Liu estimator for the negative binomial regression model: method and application. Journal of Statistical Computation and Simulation, 83, 1773–1780.

Majid, A., Amin, M., Akram, M. N. (2022). On the Liu estimation of Bell regression model in the presence of multicollinearity. Journal of Statistical Computation and Simulation, 92(2), 262–282.

McCullagh, P., Nelder, J. (1989). Generalized Linear Models, second ed., Chapman & Hall, London.

Omara, T. M. (2023). Almost unbiased Liu-type estimator for Tobit regression and its application. Communications in Statistics–Simulation and Computation, 1–16.

Segerstedt, B. (1992). On ordinary ridge regression in generalized linear models. Communications in Statistics–Theory and Methods, 21(8), 2227–2246.

Qasim, M., Amin, M., Amanullah, M. (2018). On the performance of some new Liu parameters for the gamma