https://arxiv.org/abs/2505.20607v1

to consider a far broader class of low coordinate degree (LCD) algorithms. In fact we will consider a slightly broader class of $\mathbb{R}^n$-valued LCD algorithms, supplemented by an additional rounding scheme into $\Sigma_n$. The randomized rounding $\tilde{\mathcal{A}}$ of $\mathcal{A}$ is defined by $\tilde{\mathcal{A}} := \mathrm{round}_\omega \circ \mathcal{A}$. Here
$$\mathrm{round}_\omega(x_1, \ldots, x_n) = (\mathrm{round}_{\omega_1}(x_1), \ldots, \mathrm{round}_{\omega_n}(x_n)) = (\mathrm{sgn}(x_1 - \omega_1), \ldots, \mathrm{sgn}(x_n - \omega_n))$$
for i.i.d. uniforms $\omega_i \in [-1, 1]$ and
$$\mathrm{sgn}(z) := \begin{cases} +1 & z > 0, \\ -1 & z \le 0. \end{cases}$$

Theorem 1.4 (Hardness for LCD Algorithms). Let $g \sim \mathcal{N}(0, I_n)$ be a standard Normal random vector. Let $\mathcal{A}$ be any coordinate degree $D$ algorithm with $\mathbb{E}\|\mathcal{A}(g)\|^2 \le Cn$, and let $\tilde{\mathcal{A}}$ be its randomized rounding as described above. Assume that (a) if $E = \delta n$ for $\delta \in (0, 1)$, then $D \le o(n)$; (b) if $\omega(\log^2 n) \le E \le o(n)$, then $D \le o(E / \log^2(n/E))$. Then $\mathbb{P}(\tilde{\mathcal{A}}(g) \in S(E; g)) = o(1)$.

Remark 1.5. Although Theorem 1.4 is stated for Gaussian disorder for simplicity and consistency with the low degree polynomial case, the proof extends verbatim to the much more general situation in which $(g_1, \ldots, g_n)$ are independent random variables with uniformly bounded density (with respect to Lebesgue measure). Note that if each $g_i$ has density uniformly bounded by some constant $L$ independent of $n$, then the same holds (with the same value of $L$) for any independent sum $\sum_{i \in S} g_i$ with deterministic $S \subseteq [n]$. This is the only property of the Gaussian density needed to prove Theorem 1.4; it appears in Lemma 2.1 (which is used to show Lemma 2.4) and Lemma 2.12.

1.2 Heuristic Optimality of Theorem 1.4

Theorem 1.4 is essentially best possible under the low degree heuristic. In particular, this heuristic suggests from the energy-degree tradeoff $D \le \tilde{O}(E)$ that finding solutions with energy $E$ requires time $e^{\tilde{\Omega}(E)}$. This tradeoff is attainable along the full range $1 \ll E \le n$ via the following restricted brute-force search:

(a) Choose a subset $J \subseteq [n]$ of $E$ coordinates (say, the first $E$).
(b) Run an existing NPP algorithm on the restricted instance $g_{\bar{J}}$ to find $x_{\bar{J}}$ with $\langle g_{\bar{J}}, x_{\bar{J}} \rangle \le 1$.
(c) Fixing $x_{\bar{J}}$, the NPP given by $g$ turns into finding $x_J$ minimizing $|\langle g, x \rangle| = |\langle g_{\bar{J}}, x_{\bar{J}} \rangle + \langle g_J, x_J \rangle|$.
(d) Fixing $i \in J$ and the coordinate $x_i = 1$, the conditional law of $\langle g_{\bar{J}}, x_{\bar{J}} \rangle + g_i$ is contiguous with a standard Gaussian. If $\langle g_{\bar{J}}, x_{\bar{J}} \rangle + g_i$ were exactly standard Gaussian, then by the previously mentioned results on statistically optimal solutions, with high probability there would exist $x$ attaining energy $\Theta(E)$. By contiguity, the same holds even with the shift $\langle g_{\bar{J}}, x_{\bar{J}} \rangle$.
(e) Given the previous step, we can brute-force search over the remaining $E$ coordinates in $J$, yielding a solution with energy $\Theta(E)$ with high probability in $e^{O(E)}$ time.

It is reasonable to ask whether the low (coordinate) degree heuristic is truly appropriate in our setting. In previous problems where it has been applied, the objective under consideration is stable to small input perturbations. By contrast, solutions to the NPP are quite brittle, a fact which guides our proof. This brittleness also underlies the failure of low degree polynomials in Theorem 1.3, which stems from the fact that polynomials can depend on the entries $g_i$ only in a somewhat coarse way. Nevertheless, the sharpness of Theorem 1.4 is notable and highly suggestive. See also [LS24] for related discussion. We also mention that conjecturally optimal trade-offs of a similar flavor between running time and signal-to-noise ratio have been established for sparse PCA in [AWZ23], [Din+24] and for tensor PCA in [KWB19], [WEM19].

Finally, we note that algorithms with coordinate degree $\Omega(n)$ by definition involve nonlinear interactions between a constant fraction of the $n$ coordinates. Thus Theorem 1.4 essentially states that good NPP algorithms must be "truly global", which is echoed by recent heuristic algorithms for the NPP [KKS14], [COY19], [SBD21].

1.3 Notations and Preliminaries

We use the standard Bachmann-Landau notations $O(\cdot)$, $o(\cdot)$, $\omega(\cdot)$, $\Omega(\cdot)$, $\Theta(\cdot)$, in the limit $n \to \infty$. We abbreviate $f(n) \ll g(n)$ or $f(n) \gg g(n)$ when $f(n) = o(g(n))$ or $f(n) = \omega(g(n))$, respectively. In addition, we write $f(n) \lesssim g(n)$ or $f(n) \gtrsim g(n)$ when there exists an $n$-independent constant $C$ such that $f(n) \le C g(n)$ or $f(n) \ge C g(n)$ for all $n$. We write $[n] := \{1, \ldots, n\}$. If $S \subseteq [n]$, we write $\bar{S} := [n] \setminus S$ for the complementary set of indices. If $x \in \mathbb{R}^n$ and $S \subseteq [n]$, then the restriction of $x$ to the coordinates in $S$ is the vector $x_S$ with
$$(x_S)_i := \begin{cases} x_i & i \in S, \\ 0 & \text{else.} \end{cases}$$
In particular, for $x, y \in \mathbb{R}^n$, $\langle x_S, y \rangle = \langle x, y_S \rangle = \langle x_S, y_S \rangle$. On $\mathbb{R}^n$, we write $\|\cdot\|$ for the Euclidean norm, and $B(x, r) := \{ y \in \mathbb{R}^n : \|y - x\| < r \}$ for the Euclidean ball of radius $r$ around $x$.
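The randomized rounding scheme $\mathrm{round}_\omega$ defined above is straightforward to simulate. The following sketch (ours, purely for illustration; the function names are not from the paper) checks that corners of the cube are almost surely fixed by the rounding, and that an interior coordinate $x_i \in [-1, 1]$ rounds to $+1$ with probability $(1 + x_i)/2$:

```python
import numpy as np

def sgn(z):
    # sgn(z) := +1 if z > 0, -1 if z <= 0, as in the definition above
    return np.where(z > 0, 1, -1)

def randomized_round(x, rng):
    # round_omega(x)_i = sgn(x_i - omega_i) with i.i.d. omega_i ~ Unif[-1, 1]
    omega = rng.uniform(-1.0, 1.0, size=len(x))
    return sgn(x - omega)

rng = np.random.default_rng(0)
# corners of [-1, 1]^n are (almost surely) left unchanged
corner = np.array([1.0, -1.0, 1.0])
rounded_corner = randomized_round(corner, rng)
# an interior coordinate x_i rounds to +1 with probability (1 + x_i) / 2
trials = np.array([randomized_round(np.array([0.5]), rng)[0] for _ in range(20000)])
p_plus = (trials == 1).mean()
```

Averaging over $\omega$, this rounding is unbiased in each coordinate, which is the property the later analysis of Section 2.6 exploits.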
In addition, we write $B_\Sigma(x, r) := B(x, r) \cap \Sigma_n = \{ y \in \Sigma_n : \|y - x\| < r \}$ to denote the points of $\Sigma_n$ within distance $r$ of $x$. We use $\mathcal{N}(\mu, \sigma^2)$ to denote the scalar Normal distribution with given mean and variance, and the abbreviation "r.v." for random variable (or random vector, if it is clear from context).

$p$-Correlated and $p$-Resampled Pairs. For $p \in [0, 1]$ and a pair $(g, g')$ of $n$-dimensional standard Normal random vectors, we say $(g, g')$ are $p$-correlated if $g'$ is distributed (conditionally on $g$) as $g' = pg + \sqrt{1 - p^2}\,\tilde{g}$, where $\tilde{g}$ is an independent copy of $g$. We say $(g, g')$ are $p$-resampled if $g$ is a standard Normal random vector and $g'$ is drawn as follows: for each $i \in [n]$ independently,
$$g'_i = \begin{cases} g_i & \text{with probability } p, \\ \text{drawn from } \mathcal{N}(0, 1) & \text{with probability } 1 - p. \end{cases}$$
We denote such a pair by $g' \sim \mathcal{R}_p(g)$. In both cases, $g$ and $g'$ are marginally multivariate standard Normal and have entrywise correlation $p$. However, while $\mathbb{P}_{g' \sim_p g}(g = g') = 0$, when $(g, g')$ are $(1 - \varepsilon)$-resampled we have
$$\mathbb{P}_{g' \sim \mathcal{R}_{1-\varepsilon}(g)}(g = g') = \prod_{i=1}^n \mathbb{P}(g_i = g'_i) = (1 - \varepsilon)^n, \tag{1.3}$$
which is non-negligible for $\varepsilon \le O(1/n)$.

Coordinate Degree. Let $\gamma_n$ be the $n$-dimensional standard Normal measure on $\mathbb{R}^n$, and consider the space $L^2(\gamma_n)$; this is the space of $L^2$ functions of $n$ i.i.d. standard Normal r.v.s. For $S \subseteq [n]$ and $D \in [n]$, we can define the subspaces of $L^2(\gamma_n)$
$$V_S := \{ f \in L^2(\gamma_n) : f(g) \text{ depends only on } g_S \}, \qquad V_{\le D} := \mathrm{span}\left( \{ V_J : J \subseteq [n],\ |J| \le D \} \right).$$
These subspaces describe functions which depend only on some given subset of coordinates, or on some bounded number of coordinates. Note that $V_{[n]} = V_{\le n} = L^2(\gamma_n)$. The coordinate degree of a function $f \in L^2(\gamma_n)$ is defined as $\min\{ D : f \in V_{\le D} \}$. Note that if $f$ is a degree $D$ polynomial, then it has coordinate degree at most $D$. See [Kun24, §1.3] or [O'D14, §8.3] for further discussion.

1.4 Stability of Low Degree Algorithms

The key property of LDP and LCD algorithms for us is $L^2$ stability under input perturbations.

Proposition 1.6 (Low Degree Stability). Suppose $\mathcal{A} : \mathbb{R}^n \to \mathbb{R}^n$ is a deterministic algorithm with polynomial degree (resp. coordinate degree) $D$ and norm $\mathbb{E}\|\mathcal{A}(g)\|^2 \le Cn$. Then, for standard Normal r.v.s $g$ and $g'$ which are $(1 - \varepsilon)$-correlated (resp. $(1 - \varepsilon)$-resampled),
$$\mathbb{E}\|\mathcal{A}(g) - \mathcal{A}(g')\|^2 \le 2CD\varepsilon n, \tag{1.4}$$
and thus for any $\eta > 0$,
$$\mathbb{P}\left( \|\mathcal{A}(g) - \mathcal{A}(g')\| \ge 2\sqrt{\eta n} \right) \le \frac{CD\varepsilon}{2\eta}. \tag{1.5}$$

Proof: We show (1.4) when $\mathcal{A}$ has coordinate degree $D$ and $(g, g')$ are $(1 - \varepsilon)$-resampled. See e.g. [HS25b, Prop. 1.7] for the case where $\mathcal{A}$ is polynomial. In both cases, Markov's inequality gives (1.5). We follow [Gam+22, Lem. 3.4]. Assume without loss of generality that $\mathbb{E}\|\mathcal{A}(g)\|^2 = 1$. Writing $\mathcal{A} = (\mathcal{A}_1, \ldots, \mathcal{A}_n)$, observe that for $g' \sim \mathcal{R}_{1-\varepsilon}(g)$,
$$\mathbb{E}\|\mathcal{A}(g) - \mathcal{A}(g')\|^2 = \mathbb{E}\|\mathcal{A}(g)\|^2 + \mathbb{E}\|\mathcal{A}(g')\|^2 - 2\,\mathbb{E}\langle \mathcal{A}(g), \mathcal{A}(g') \rangle = 2 - 2\,\mathbb{E}\langle \mathcal{A}(g), \mathcal{A}(g') \rangle.$$
By [O'D14, Exer. 8.18], we know that for each $\mathcal{A}_i \in V_{\le D}$ we have
$$(1 - \varepsilon)^D\, \mathbb{E}|\mathcal{A}_i(g)|^2 \le \mathbb{E}[\mathcal{A}_i(g)\mathcal{A}_i(g')] \le \mathbb{E}|\mathcal{A}_i(g)|^2.$$
Summing this over $i$ gives $(1 - \varepsilon)^D \le \mathbb{E}\langle \mathcal{A}(g), \mathcal{A}(g') \rangle \le 1$. Combining this with the above, and using $1 - (1 - \varepsilon)^D \le \varepsilon D$, yields (1.4). □

Remark 1.7. Proposition 1.6 also holds for randomized algorithms, with exactly the same proof.

Next we introduce a family of locally improving algorithms, which will be useful later in showing the failure of randomized rounding. Below, fix a distance $r = O(1)$. Given an $\mathbb{R}^n$-valued $\mathcal{A}$, we can obtain a $\Sigma_n$-valued algorithm by first rounding $\mathcal{A}(g)$ into the solid hypercube $[-1, 1]^n$ and then picking the best corner of $\Sigma_n$ within constant distance of this output. Such a modification requires calculating the energy of at most $n^r$ additional points on $\Sigma_n$, and thus preserves e.g. any polynomial runtime bound. Since $r = O(1)$, it will also preserve stability. We formalize this construction as follows. Let $\mathrm{clip} : \mathbb{R}^n \to [-1, 1]^n$ be the function which rounds $x \in \mathbb{R}^n$ into the cube $[-1, 1]^n$:
$$\mathrm{clip}(x)_i := \begin{cases} -1 & x_i \le -1, \\ x_i & -1 < x_i < 1, \\ 1 & x_i \ge 1. \end{cases}$$
Note that $\mathrm{clip}$ is 1-Lipschitz with respect to the Euclidean norm.

Definition 1.8. Let $r > 0$ and let $\mathcal{A}$ be an algorithm. Define the $[-1, 1]^n$-valued algorithm $\tilde{\mathcal{A}}_r$ by
$$\tilde{\mathcal{A}}_r(g) := \begin{cases} \operatorname{argmin}_{x' \in B_\Sigma(\mathrm{clip}(\mathcal{A}(g)),\, r)} |\langle g, x' \rangle| & \text{if } B_\Sigma(\mathrm{clip}(\mathcal{A}(g)), r) \ne \emptyset, \\ \mathrm{clip}(\mathcal{A}(g)) & \text{else.} \end{cases} \tag{1.6}$$

The next simple lemma shows that if $\mathcal{A}$ is stable, then $\tilde{\mathcal{A}}_r$ is also stable.

Lemma 1.9. Suppose $\mathcal{A} : \mathbb{R}^n \to \mathbb{R}^n$ is a deterministic algorithm with polynomial degree (resp. coordinate degree) $D$ and norm $\mathbb{E}\|\mathcal{A}(g)\|^2 \le Cn$. Then, for $r = O(1)$ and standard Normal r.v.s $g$ and $g'$ which are $(1 - \varepsilon)$-correlated (resp. $(1 - \varepsilon)$-resampled), $\tilde{\mathcal{A}}_r$ as in Definition 1.8 satisfies
$$\mathbb{E}\|\tilde{\mathcal{A}}_r(g) - \tilde{\mathcal{A}}_r(g')\|^2 \le 4CD\varepsilon n + 8r^2. \tag{1.7}$$
Thus for any $\eta > 0$,
$$\mathbb{P}\left( \|\tilde{\mathcal{A}}_r(g) - \tilde{\mathcal{A}}_r(g')\| \ge 2\sqrt{\eta n} \right) \le \frac{CD\varepsilon n + 2r^2}{\eta n}. \tag{1.8}$$

Proof: Observe that by the triangle inequality, $\|\tilde{\mathcal{A}}_r(g) - \tilde{\mathcal{A}}_r(g')\|$ is bounded by
$$\|\tilde{\mathcal{A}}_r(g) - \mathrm{clip}(\mathcal{A}(g))\| + \|\mathrm{clip}(\mathcal{A}(g)) - \mathrm{clip}(\mathcal{A}(g'))\| + \|\mathrm{clip}(\mathcal{A}(g')) - \tilde{\mathcal{A}}_r(g')\| \le 2r + \|\mathcal{A}(g) - \mathcal{A}(g')\|.$$
This follows as $\mathrm{clip}$ is 1-Lipschitz and the corner-picking step in (1.6) only moves $\tilde{\mathcal{A}}_r(g)$ away from $\mathrm{clip}(\mathcal{A}(g))$ by at most $r$. By Jensen's inequality, squaring this gives
$$\|\tilde{\mathcal{A}}_r(g) - \tilde{\mathcal{A}}_r(g')\|^2 \le 2\left( 4r^2 + \|\mathcal{A}(g) - \mathcal{A}(g')\|^2 \right).$$
Combining this with Proposition 1.6 gives (1.7), and (1.8) follows from Markov's inequality. □

Of course, our construction of $\tilde{\mathcal{A}}_r$ is certainly never polynomial and does not preserve coordinate degree in a controllable way. However, because the rounding does not drastically alter the stability analysis, we are still able to show that for any $\mathbb{R}^n$-valued low coordinate degree algorithm $\mathcal{A}$ and $r = O(1)$, strong low degree hardness holds for $\tilde{\mathcal{A}}_r$. Establishing the failure of $\tilde{\mathcal{A}}_r$ will be a useful intermediate step towards the full proof of Theorem 1.4.
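For small $n$, the clip-and-pick-best-corner construction of Definition 1.8 can be implemented by brute-force enumeration of $\Sigma_n$. The sketch below is ours (the paper does not prescribe an implementation); it is exponential-time and purely illustrative:

```python
import itertools
import numpy as np

def clip(x):
    # round x into the solid cube [-1, 1]^n; this map is 1-Lipschitz
    return np.clip(x, -1.0, 1.0)

def locally_improve(g, a_out, r):
    # Definition 1.8: among corners x' of {-1, 1}^n with ||x' - clip(a_out)|| < r,
    # return one minimizing |<g, x'>|; if no corner is that close, return clip(a_out).
    c = clip(a_out)
    best = None
    for corner in itertools.product([-1.0, 1.0], repeat=len(g)):
        x = np.array(corner)
        if np.linalg.norm(x - c) < r and (best is None or abs(g @ x) < abs(g @ best)):
            best = x
    return c if best is None else best

rng = np.random.default_rng(1)
n, r = 8, 3.0
g = rng.standard_normal(n)
a_out = 0.3 * rng.standard_normal(n)  # stand-in for an R^n-valued algorithm's output
x = locally_improve(g, a_out, r)
```

Whenever the plain sign rounding of $\mathrm{clip}(\mathcal{A}(g))$ lies within distance $r$, it is among the candidates, so the picked corner is at least as good as sign rounding in that case.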
(The same argument proves hardness when $\mathcal{A}$ is a low degree polynomial algorithm; this is omitted for brevity.)

2 Hardness for Low Degree Algorithms

In this section, we prove Theorem 1.3 and Theorem 1.4; that is, we exhibit strong low degree hardness for both low polynomial degree and low coordinate degree algorithms. Our argument utilizes what can be thought of as a "conditional" OGP. Previously, most OGP proofs identify a global obstruction: with high probability, there do not exist any tuples of good solutions to a family of correlated instances which are, e.g., roughly the same distance apart. Here, however, we show a local obstruction: we condition on being able to solve a single instance, and show that after a small change to the instance, it is unlikely that any solutions exist close to the first one. This is an instance of the "brittleness" that makes the NPP challenging to solve; even small changes in the instance break the landscape geometry, so that even if solutions exist, there is no way to know where they will end up. Related strategies have been used recently in [AG24], [Ala24], [HS25b]. The first of these studied the Ising perceptron (in our terminology, the VBP with $m = \Theta(n)$), and deduced hardness of sampling from the fact that a typical solution to a given instance will disappear in a slightly correlated instance. For our result, it is crucial that every solution to a given NPP instance is likely to disappear when the input is slightly perturbed. On the other hand, this property is weaker than requiring all solutions to simultaneously disappear under such a perturbation, as in a more standard "global" OGP argument. We note that Gamarnik and Kızıldağ showed in [GK23, Thm. 2.5] that sublinear energies do not exhibit a certain multi-OGP.
This suggests that a sharp analysis via global OGP arguments may be challenging to implement.

Let us give a more detailed outline of our strategy, in the case of low coordinate degree. Let $E$ be an energy level, and assume $\mathcal{A}$ is a $\Sigma_n$-valued algorithm with coordinate degree at most $D \le \tilde{O}(E)$. We choose suitable parameters
$$\eta \asymp \frac{E}{n \log n} \qquad \text{and} \qquad \varepsilon \asymp \frac{\log(n/D)}{n},$$
and aim to show that $\mathbb{P}(\mathcal{A}(g) \in S(E; g)) \to 0$ as $n \to \infty$. To do so, we consider a $(1 - \varepsilon)$-resampled pair $(g, g')$ of NPP instances and proceed according to the following steps.

(a) For $\varepsilon$ small, $g$ and $g'$ have correlation close to 1. By Proposition 1.6, this implies that the outputs of an LCD algorithm $\mathcal{A}$ will be within distance $2\sqrt{\eta n}$ of each other with high probability.
(b) Since $\varepsilon \gg 1/n$, we will have $g \ne g'$ with high probability, and we assume below that this holds. (This is the central difference between the LCD and LDP cases; in the latter there is no issue in taking $\varepsilon$ much smaller, which turns out to drastically affect the resulting hardness bound.)
(c) For $\eta$ small and fixed $\mathcal{A}(g)$, Lemma 2.4 shows using the Markov inequality that, conditional on $g$ and the event $g' \ne g$, the perturbed instance $g'$ typically has no solutions within distance $2\sqrt{\eta n}$ of $\mathcal{A}(g)$. This is the conditional landscape obstruction we described above.
(d) Put together, these steps imply that it is unlikely for $\mathcal{A}$ to find solutions to both $g$ and $g'$ such that the stability guarantee of Proposition 1.6 holds. Using a positive correlation property (see Lemma 2.5), we conclude that $\mathcal{A}(g) \notin S(E; g)$ with high probability.
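Steps (a) and (b) can be sanity-checked in the simplest possible case: the identity map $\mathcal{A}(g) = g$ has coordinate degree $D = 1$ and $\mathbb{E}\|\mathcal{A}(g)\|^2 = n$ (so $C = 1$), and for a $(1 - \varepsilon)$-resampled pair one computes $\mathbb{E}\|g - g'\|^2 = 2\varepsilon n$, consistent with the bound $2CD\varepsilon n$ of (1.4). A Monte Carlo sketch (ours, for illustration only):

```python
import numpy as np

def resampled_pair(n, eps, rng):
    # (1 - eps)-resampled pair: each coordinate of g' is independently redrawn
    # from N(0, 1) with probability eps, and kept equal to g_i otherwise
    g = rng.standard_normal(n)
    redraw = rng.random(n) < eps
    gp = np.where(redraw, rng.standard_normal(n), g)
    return g, gp

rng = np.random.default_rng(2)
n, eps, trials = 2000, 0.05, 300
sq_dists, corrs = [], []
for _ in range(trials):
    g, gp = resampled_pair(n, eps, rng)
    sq_dists.append(np.sum((g - gp) ** 2))
    corrs.append(np.mean(g * gp))
mean_sq = np.mean(sq_dists)   # expect about 2 * eps * n (the identity map's 2 C D eps n)
mean_corr = np.mean(corrs)    # entrywise correlation, expect about 1 - eps
```

Note also that with $\varepsilon n = 100 \gg 1$ here, every sampled pair satisfies $g \ne g'$, matching step (b).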
The above argument handles low coordinate degree algorithms without the randomized rounding step. To handle the latter, we observe that solutions to a given NPP instance are isolated with high probability. This implies that randomized rounding either changes only $O(1)$ coordinates (which preserves stability), or else injects too much randomness to preserve any chance of finding a good solution. For convenience, we pursue this extension only in the LCD case (as the main message of the LDP case is that polynomial degree is a poor proxy for complexity in the NPP).

2.1 Preliminary Estimates

We begin with some general estimates that will be utilized repeatedly throughout the proof. First, we bound the probability of a Normal r.v. being exponentially close to zero. We denote $2^x$ by $\exp_2(x)$.

Lemma 2.1. Let $E, \sigma^2 > 0$, and suppose that conditionally on $\mu$, we have $X \sim \mathcal{N}(\mu, \sigma^2)$. Then
$$\mathbb{P}\left( |X| \le 2^{-E} \mid \mu \right) \le \exp_2\left( -E - \frac{1}{2}\log_2(\sigma^2) + O(1) \right), \qquad \forall \mu \in \mathbb{R}. \tag{2.1}$$

Proof: Observe that conditional on $\mu$, the density of $X$ is bounded as
$$p_{X \mid \mu}(z) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(z - \mu)^2}{2\sigma^2}} \le (2\pi\sigma^2)^{-1/2}.$$
Integrating over $|z| \le 2^{-E}$ then gives (2.1), via
$$\mathbb{P}(|X| \le 2^{-E}) = \int_{|z| \le 2^{-E}} p_{X \mid \mu}(z)\, \mathrm{d}z \le \exp_2\left( -E - \frac{1}{2}\log_2(2\pi\sigma^2) + 1 \right). \qquad \square$$

Note that (2.1) is a decreasing function of $\sigma^2$. Thus, if there exists $\gamma$ with $\sigma^2 \ge \gamma > 0$, then (2.1) is bounded by $\exp_2(-E - \log_2(\gamma)/2 + O(1))$.

Next, we recall the following bound on partial sums of binomial coefficients; this will be used for a first moment computation in Section 2.2.

Lemma 2.2 (Chernoff-Hoeffding). Suppose that $K \le n/2$, and let $h(x) = -x\log_2(x) - (1 - x)\log_2(1 - x)$ be the binary entropy function. Then, for $\eta := K/n$,
$$\sum_{k \le K} \binom{n}{k} \le \exp_2(n h(\eta)) \le \exp_2\left( 2n\eta\log_2\left(\frac{1}{\eta}\right) \right).$$

Proof: For a $\mathrm{Bin}(n, \eta)$ random variable $S$, we have
$$1 \ge \mathbb{P}(S \le K) = \sum_{k \le K} \binom{n}{k} \eta^k (1 - \eta)^{n-k} \ge \sum_{k \le K} \binom{n}{k} \eta^K (1 - \eta)^{n-K}.$$
The last inequality follows by multiplying each term by $(\eta/(1 - \eta))^{K-k} \le 1$. Rearranging gives
$$\sum_{k \le K} \binom{n}{k} \le \eta^{-K}(1 - \eta)^{-(n-K)} = \exp_2\left( -K\log_2(\eta) - (n - K)\log_2(1 - \eta) \right) = \exp_2\left( n \cdot \left( -\frac{K}{n}\log_2(\eta) - \frac{n - K}{n}\log_2(1 - \eta) \right) \right) = \exp_2\left( n \cdot \left( -\eta\log_2(\eta) - (1 - \eta)\log_2(1 - \eta) \right) \right) = \exp_2(n h(\eta)).$$
The final inequality in the statement then follows from the bound $h(\eta) \le 2\eta\log_2(1/\eta)$ for $\eta \le 1/2$. □

Finally, we show that for any energy $E \le n$, there exists a choice of "distance" $\eta$ such that the term in the previous lemma is controlled.

Lemma 2.3. For all $E \le n$, there exist universal constants $C, C' > 0$ such that
$$\eta := \frac{E}{C' n \log_2(Cn/E)} \tag{2.2}$$
satisfies $\eta \in (0, 1/2)$ and
$$2\eta\log_2\left(\frac{1}{\eta}\right) < \frac{E}{4n}. \tag{2.3}$$
In addition, (a) if $E = \Theta(n)$, this $\eta$ is $\Theta(1)$; (b) if $E \le o(n)$, this $\eta$ satisfies $\eta \gtrsim E / (n\log_2(Cn/E))$.

Proof: It is not hard to show that if $0 < \eta, E/n \ll 1$ and $-\eta\log\eta \sim E/n$, then we have $\eta \sim E/(n\log(n/E))$. Pick suitable constants in (2.2); e.g. $C = 8$ and $C' = 16$ suffice.
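Lemma 2.2 and the parameter choice (2.2)-(2.3) are easy to verify numerically. The following script (ours, for verification only) uses the constants $C = 8$ and $C' = 16$ suggested in the proof of Lemma 2.3:

```python
import math

def h(x):
    # binary entropy h(x) = -x log2(x) - (1 - x) log2(1 - x)
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# Lemma 2.2: sum_{k <= K} C(n, k) <= 2^{n h(K/n)} <= 2^{2 n eta log2(1/eta)}
n, K = 40, 12
eta = K / n
partial_sum = sum(math.comb(n, k) for k in range(K + 1))
entropy_bound = 2 ** (n * h(eta))
crude_bound = 2 ** (2 * n * eta * math.log2(1 / eta))

# Lemma 2.3: eta from (2.2) with C = 8, C' = 16 lies in (0, 1/2) and satisfies (2.3)
def eta_of(E, nn, C=8.0, Cp=16.0):
    return E / (Cp * nn * math.log2(C * nn / E))

ok = all(
    0 < (e := eta_of(E, nn)) < 0.5 and 2 * e * math.log2(1 / e) < E / (4 * nn)
    for nn, E in [(10**6, 10**6), (10**6, 10**4), (10**6, 400)]
)
```

The three $(n, E)$ test points cover the linear, sublinear, and near-$\log^2 n$ energy regimes appearing in Theorem 1.4.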
It is then easy to see that $\eta \in (0, 1/2)$ and that (2.3) holds for all $E \le n$; the resulting asymptotics follow immediately. □

2.2 Conditional Landscape Obstruction

We turn now to establishing the central conditional landscape obstruction of our argument. The idea is that for an $x \in \Sigma_n$ depending on $g$, and a related instance $g'$, the probability that any fixed $x' \in \Sigma_n$ solves $g'$ is much smaller than the reciprocal of the number of points within a neighborhood of $x$. Thus, even small changes to the instance destroy any solutions.

Lemma 2.4. Let $(g, g')$ be a pair of either $(1 - \varepsilon)$-correlated or $(1 - \varepsilon)$-resampled instances. Let $x \in \Sigma_n$ be conditionally independent of $g'$ given $g$. Then for any $\eta \in (0, 1/2)$,
$$\mathbb{P}_{g' \sim_{1-\varepsilon} g}\left( \exists x' \in S(E; g') \text{ s.t. } \|x - x'\| \le 2\sqrt{\eta n} \right) \le \exp_2\left( -E - \frac{1}{2}\log_2(\varepsilon) + 2\eta\log_2\left(\frac{1}{\eta}\right) n + O(\log n) \right), \tag{2.4}$$
and
$$\mathbb{P}_{g' \sim \mathcal{R}_{1-\varepsilon}(g)}\left( \exists x' \in S(E; g') \text{ s.t. } \|x - x'\| \le 2\sqrt{\eta n} \;\middle|\; g \ne g' \right) \le \exp_2\left( -E + 2\eta\log_2\left(\frac{1}{\eta}\right) n + O(1) \right). \tag{2.5}$$

Proof: Throughout, abbreviate $r_\eta := 2\sqrt{\eta n}$. We first show (2.4), by bounding the probability that
$$|B(x, r_\eta) \cap S(E; g')| = \sum_{x' \in B_\Sigma(x, r_\eta)} \mathbb{1}\{ x' \in S(E; g') \}$$
is nonzero. By Markov's inequality, this probability is upper bounded by
$$\mathbb{E}\left[ \sum_{x' \in B_\Sigma(x, r_\eta)} \mathbb{1}\{ x' \in S(E; g') \} \right] = \mathbb{E}\left[ \sum_{x' \in B_\Sigma(x, r_\eta)} \mathbb{E}\left[ \mathbb{1}\{ x' \in S(E; g') \} \mid g \right] \right] = \mathbb{E}\left[ \sum_{x' \in B_\Sigma(x, r_\eta)} \mathbb{P}\left( |\langle g', x' \rangle| \le 2^{-E} \mid g \right) \right]. \tag{2.6}$$
Note in particular that the range of this sum is independent of the inner probability, as $g'$ and $x$ are conditionally independent given $g$. To bound the number of terms in (2.6), let $k$ be the number of coordinates which differ between $x$ and $x'$.
Then $\|x - x'\|^2 = 4k$, so that $\|x - x'\| \le 2\sqrt{\eta n}$ if and only if $k \le \eta n < n/2$. By Lemma 2.2,
$$|B_\Sigma(x, r_\eta)| = \sum_{k \le \eta n} \binom{n}{k} \le \exp_2\left( 2\eta\log_2\left(\frac{1}{\eta}\right) n \right). \tag{2.7}$$
To bound the inner probability under $g' \sim_{1-\varepsilon} g$, fix any $x' \in \Sigma_n$ and write $g' = pg + \sqrt{1 - p^2}\,\tilde{g}$ for $p := 1 - \varepsilon$ and $\tilde{g}$ an independent copy of $g$. We know $\langle \tilde{g}, x' \rangle \sim \mathcal{N}(0, n)$, so this gives
$$\langle g', x' \rangle \mid g \sim \mathcal{N}\left( p\langle g, x' \rangle,\ (1 - p^2)n \right).$$
This is nondegenerate, as $(1 - p^2)n \ge \varepsilon n > 0$; by Lemma 2.1, we get
$$\mathbb{P}_{g' \sim_{1-\varepsilon} g}\left( |\langle g', x' \rangle| \le 2^{-E} \mid g \right) \le \exp_2\left( -E - \frac{1}{2}\log_2(\varepsilon) + O(\log n) \right).$$
Using (2.7) to upper bound the number of terms in (2.6) and summing this bound gives (2.4).

Alternatively, for (2.5), we know that if $g = g'$, then $B(x, r_\eta) \cap S(E; g')$ will be nonempty whenever $x$ is chosen to be a solution to $g$; we thus condition on $g \ne g'$ throughout (2.6). To bound the corresponding inner term, again fix any $x' \in \Sigma_n$ and let $\tilde{g}$ be an independent copy of $g$. Let $J \subseteq [n]$ be a random subset where each $i \in J$ independently with probability $1 - \varepsilon$, so that $g'$ can be represented as $g' = g_J + \tilde{g}_{\bar{J}}$. Conditional on $(g, J)$, we know that $\langle \tilde{g}_{\bar{J}}, x' \rangle$ is $\mathcal{N}(0, n - |J|)$ and $\langle g_J, x' \rangle$ is deterministic, so that
$$\langle g', x' \rangle \mid (g, J) \sim \mathcal{N}\left( \langle g_J, x' \rangle,\ n - |J| \right).$$
As $\{ g \ne g' \} = \{ |J| < n \}$, we have $n - |J| \ge 1$ conditional on $g \ne g'$. Thus, Lemma 2.1 gives
$$\mathbb{P}_{g' \sim \mathcal{R}_{1-\varepsilon}(g)}\left( |\langle g', x' \rangle| \le 2^{-E} \mid g,\ g \ne g' \right) \le \exp_2(-E + O(1)),$$
and we conclude (2.5) as in the previous case. □

The following lemma shows a positive correlation property that enables us to avoid union-bounding over $\mathcal{A}$ finding solutions to correlated or resampled pairs of instances. Below, the set $S$ plays the role of the event $\mathcal{A}(g, \omega) \in S(E; g)$.

Lemma 2.5 (Adapted from [HS25b, Lem. 2.7]). Let $(g, g')$ be a pair of either $p$-correlated or $p$-resampled instances. Then for any set $S \subseteq \mathbb{R}^n$, writing $q := \mathbb{P}(g \in S)$,
$$\mathbb{P}_{g' \sim_p g}(g \in S,\ g' \in S) \ge q^2 \qquad \text{and} \qquad \mathbb{P}_{g' \sim \mathcal{R}_p(g)}(g \in S,\ g' \in S) \ge q^2.$$

Proof: In the first case, let $\tilde{g}, g^{(0)}, g^{(1)}$ be three i.i.d. copies of $g$, and observe that $g' \sim_p g$ are jointly representable as
$$g = \sqrt{p}\,\tilde{g} + \sqrt{1 - p}\,g^{(0)}, \qquad g' = \sqrt{p}\,\tilde{g} + \sqrt{1 - p}\,g^{(1)}.$$
Since $g, g'$ are conditionally i.i.d. given $\tilde{g}$, we have by Jensen's inequality that
$$\mathbb{P}_{g' \sim_p g}(g \in S,\ g' \in S) = \mathbb{E}[\mathbb{P}(g \in S,\ g' \in S \mid \tilde{g})] = \mathbb{E}[\mathbb{P}(g \in S \mid \tilde{g})^2] \ge \mathbb{E}[\mathbb{P}(g \in S \mid \tilde{g})]^2 = q^2.$$
Likewise, when $g' \sim \mathcal{R}_p(g)$, let $J$ be a random subset of $[n]$ where each $i \in J$ independently with probability $p$, so that $(g, g')$ are jointly representable as
$$g = \tilde{g}_J + g^{(0)}_{\bar{J}}, \qquad g' = \tilde{g}_J + g^{(1)}_{\bar{J}}.$$
Thus $g$ and $g'$ are conditionally i.i.d. given $(\tilde{g}, J)$, and we conclude in the same way. □

Remark 2.6. Note that Lemma 2.5 also holds when there is an auxiliary random seed $\omega$ shared across the instances. In this case the success event is a set $S \subseteq \mathbb{R}^n \times \Omega_\omega$, and we write
$$q = \mathbb{P}((g, \omega) \in S), \qquad \bar{q} = \mathbb{P}((g, \omega) \in S,\ (g', \omega) \in S),$$
$$q(\omega) = \mathbb{P}((g, \omega) \in S \mid \omega), \qquad \bar{q}(\omega) = \mathbb{P}((g, \omega) \in S,\ (g', \omega) \in S \mid \omega).$$
Lemma 2.5 shows that for any $\omega \in \Omega_\omega$, $\bar{q}(\omega) \ge q(\omega)^2$. Then, by Jensen's inequality,
$$\bar{q} = \mathbb{E}[\bar{q}(\omega)] \ge \mathbb{E}[q(\omega)^2] \ge \mathbb{E}[q(\omega)]^2 = q^2.$$
Thus, in combination with Remark 1.7, the proofs of Theorem 1.3 and Theorem 1.4 hold without modification when $\mathcal{A}$ depends on an independent random seed $\omega$.

2.3 Hardness for LDP Algorithms

We first prove Theorem 1.3. Let $\mathcal{A}$ be a $\Sigma_n$-valued algorithm with degree $D$ satisfying (1.5); by Remark 2.6, assume without loss of generality that $\mathcal{A} = \mathcal{A}(g)$ is deterministic. Consider a pair of $(1 - \varepsilon)$-correlated instances $(g, g')$. Let $x := \mathcal{A}(g)$ and define the events
$$S_{\mathrm{solve}} := \{ \mathcal{A}(g) \in S(E; g),\ \mathcal{A}(g') \in S(E; g') \},$$
$$S_{\mathrm{stable}} := \{ \|\mathcal{A}(g) - \mathcal{A}(g')\| \le 2\sqrt{\eta n} \},$$
$$S_{\mathrm{cond}}(x) := \{ \nexists x' \in S(E; g') \text{ such that } \|x - x'\| \le 2\sqrt{\eta n} \}. \tag{2.8}$$
Intuitively, the first two events ask that the algorithm solves both instances and is stable, respectively. The last event, which depends on $x$, corresponds to the conditional landscape obstruction: for an $x$ depending only on $g$, there is no solution to $g'$ which is close to $x$.

Lemma 2.7. For $x := \mathcal{A}(g)$, we have $S_{\mathrm{solve}} \cap S_{\mathrm{stable}} \cap S_{\mathrm{cond}}(x) = \emptyset$.

Proof: Suppose that $S_{\mathrm{solve}}$ and $S_{\mathrm{stable}}$ both occur. Letting $x := \mathcal{A}(g)$ (which depends only on $g$) and $x' := \mathcal{A}(g')$, we know that $x' \in S(E; g')$ and is within distance $2\sqrt{\eta n}$ of $x$, contradicting $S_{\mathrm{cond}}(x)$. □
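The counting step (2.7) in the proof of Lemma 2.4 rests on the fact that sign vectors differing in $k$ coordinates satisfy $\|x - x'\|^2 = 4k$, so that the Euclidean ball $B_\Sigma(x, 2\sqrt{\eta n})$ is exactly a Hamming ball. For small $n$ this can be checked by exhaustive enumeration (sketch ours):

```python
import itertools
import math

# For x, x' in Sigma_n differing in k coordinates, ||x - x'||^2 = 4k, so the
# set of x' with ||x - x'|| <= 2*sqrt(eta*n) is the Hamming ball {k <= eta*n}.
n, eta = 10, 0.25
x = tuple([1] * n)
radius = 2 * math.sqrt(eta * n)
euclidean_count = sum(
    1 for xp in itertools.product([1, -1], repeat=n) if math.dist(x, xp) <= radius
)
hamming_count = sum(math.comb(n, k) for k in range(math.floor(eta * n) + 1))
```

Here both counts equal $\sum_{k \le \lfloor \eta n \rfloor} \binom{n}{k}$, the quantity bounded by Lemma 2.2 in (2.7).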
Proof of Theorem 1.3: Let $p_{\mathrm{solve}} := \mathbb{P}(\mathcal{A}(g) \in S(E; g))$ be the probability that $\mathcal{A}$ solves one instance. By Lemma 2.5,
$$\mathbb{P}_{g' \sim_{1-\varepsilon} g}(S_{\mathrm{solve}}) \ge p_{\mathrm{solve}}^2.$$
In addition, let
$$p_{\mathrm{cond}} := \max_{x \in \Sigma_n}\left( 1 - \mathbb{P}_{g' \sim_{1-\varepsilon} g}(S_{\mathrm{cond}}(x)) \right), \qquad p_{\mathrm{unstable}} := 1 - \mathbb{P}_{g' \sim_{1-\varepsilon} g}(S_{\mathrm{stable}}).$$
Set $\varepsilon := 2^{-E/2}$ and set $\eta$ as in Lemma 2.3. By Lemma 2.4, for $n$ sufficiently large,
$$p_{\mathrm{cond}} \le \exp_2\left( -E - \frac{1}{2}\log_2(\varepsilon) + 2\eta\log_2\left(\frac{1}{\eta}\right) n + O(\log n) \right) \le \exp_2\left( -E + \frac{E}{4} + \frac{E}{4} + O(\log n) \right) = \exp_2\left( -\frac{E}{2} + O(\log n) \right) = o(1).$$
Next, for $\log n \ll E \le n$ and $D = o(\exp_2(E/4))$, (1.5) gives
$$p_{\mathrm{unstable}} \lesssim \frac{D\varepsilon}{\eta} \lesssim D \exp_2\left( -\frac{E}{2} \right) \frac{n\log_2(Cn/E)}{E} \le D \exp_2\left( -\frac{E}{2} + O(\log n) \right) \le o(1) \cdot \exp_2\left( -\frac{E}{4} + O(\log n) \right) = o(1).$$
That is, both $p_{\mathrm{cond}}$ and $p_{\mathrm{unstable}}$ vanish for large $n$. To conclude, Lemma 2.7 gives (for $\mathbb{P} = \mathbb{P}_{g' \sim_{1-\varepsilon} g}$)
$$\mathbb{P}(S_{\mathrm{solve}}) + \mathbb{P}(S_{\mathrm{stable}}) + \mathbb{P}(S_{\mathrm{cond}}(x)) \le 2,$$
and rearranging yields
$$p_{\mathrm{solve}}^2 \le p_{\mathrm{unstable}} + \left( 1 - \mathbb{P}(S_{\mathrm{cond}}(x)) \right) \le p_{\mathrm{unstable}} + p_{\mathrm{cond}} = o(1). \qquad \square$$

2.4 Hardness for Non-Rounded LCD Algorithms

We now build towards the main result, Theorem 1.4. As a stepping stone, we first show strong low degree hardness for algorithms $\mathcal{A}$ with coordinate degree $D$ which are $\Sigma_n$-valued. We then extend to $O(1)$-close and rounded algorithms. Recalling Remark 2.6, we assume $\mathcal{A}$ is deterministic for convenience.

Consider a pair of $(1 - \varepsilon)$-resampled instances $(g, g')$. Let $x := \mathcal{A}(g)$ and keep the definitions of $S_{\mathrm{solve}}, S_{\mathrm{stable}}, S_{\mathrm{cond}}$ from (2.8). In addition, define $S_{\mathrm{diff}} := \{ g \ne g' \}$.

Lemma 2.8. For $x := \mathcal{A}(g)$, we have $S_{\mathrm{diff}} \cap S_{\mathrm{solve}} \cap S_{\mathrm{stable}} \cap S_{\mathrm{cond}}(x) = \emptyset$.

Proof: This follows from Lemma 2.7, noting that its proof did not use that $g \ne g'$ almost surely. □

As before, our proof follows from showing that for appropriate choices of $\varepsilon$ and $\eta$ (depending on $D$, $E$, and $n$), the events $S_{\mathrm{stable}}$ and $S_{\mathrm{cond}}(x)$ hold with high probability. However, when $S_{\mathrm{diff}}$ fails, $S_{\mathrm{cond}}(x)$ always fails. Thus, to ensure the appropriate probabilities vanish, we are required to choose $\varepsilon \gg 1/n$, which by (1.3) ensures $g \ne g'$ with high probability. Contrast this with Section 2.3, where $\varepsilon$ could be exponentially small in $E$. This restriction on $\varepsilon$ prevents us from showing hardness for algorithms with degree larger than the best possible level, as discussed in Section 1.2.

Proof of Theorem 1.4, for $\Sigma_n$-valued algorithms without randomized rounding: Again let $p_{\mathrm{solve}} := \mathbb{P}(\mathcal{A}(g) \in S(E; g))$ be the probability that $\mathcal{A}$ solves one instance. By Lemma 2.5,
$$\mathbb{P}_{g' \sim \mathcal{R}_{1-\varepsilon}(g)}(S_{\mathrm{solve}}) \ge p_{\mathrm{solve}}^2.$$
We now redefine $p_{\mathrm{cond}}$ and $p_{\mathrm{unstable}}$ via
$$p_{\mathrm{cond}} := \max_{x \in \Sigma_n}\left( 1 - \mathbb{P}_{g' \sim \mathcal{R}_{1-\varepsilon}(g)}(S_{\mathrm{cond}}(x) \mid S_{\mathrm{diff}}) \right), \qquad p_{\mathrm{unstable}} := 1 - \mathbb{P}_{g' \sim \mathcal{R}_{1-\varepsilon}(g)}(S_{\mathrm{stable}} \mid S_{\mathrm{diff}}). \tag{2.9}$$
Setting $\eta$ as in Lemma 2.3, we have by Lemma 2.4 that
$$p_{\mathrm{cond}} \le \exp_2\left( -E + 2\eta\log_2\left(\frac{1}{\eta}\right) n + O(1) \right) \le \exp_2\left( -\frac{3E}{4} + O(1) \right) = o(1).$$
Next, set $\varepsilon := \log_2(n/D)/n$. This clearly has $\varepsilon n \gg 1$, so
$$\mathbb{P}(S_{\mathrm{diff}}) = 1 - (1 - \varepsilon)^n \ge 1 - e^{-\varepsilon n} \to 1. \tag{2.10}$$
By Proposition 1.6, we have for (a) $E = \delta n$ and $D = o(n)$ that $\eta = \Theta(1)$, so
$$p_{\mathrm{unstable}} \lesssim D\varepsilon = \frac{D}{n}\log_2\left(\frac{n}{D}\right) = o(1).$$
Likewise, for (b) $\log^2 n \ll E \ll n$ and $D = o(E/\log^2(Cn/E))$, we get
$$p_{\mathrm{unstable}} \lesssim \frac{D\log_2(n/D)\log_2(Cn/E)}{E}.$$
As $D$ is an upper bound on the maximum possible algorithm degree, we may increase $D$ without loss of generality in the analysis, so that $D$ grows only slightly slower than $E$. Thus we assume henceforth that $D \ge E/\log_2^3(n/E)$, so that $n/D \le n\log_2^3(n/E)/E$. This lets us bound
$$\log_2(n/D) \le \log_2(n/E) + 3\log_2\log_2(n/E) \lesssim \log_2(Cn/E),$$
which gives
$$p_{\mathrm{unstable}} \lesssim \frac{D\log_2^2(Cn/E)}{E} = o(1).$$
As before, these choices of $\varepsilon$ and $\eta$ ensure that both $p_{\mathrm{cond}}$ and $p_{\mathrm{unstable}}$ vanish for large $n$ and arbitrary energy $\log^2 n \ll E \le n$. To conclude, for $x = \mathcal{A}(g)$, Lemma 2.8 implies
$$\mathbb{P}_{g' \sim \mathcal{R}_{1-\varepsilon}(g)}(S_{\mathrm{solve}},\ S_{\mathrm{stable}},\ S_{\mathrm{cond}}(x) \mid S_{\mathrm{diff}}) = 0,$$
so
$$\mathbb{P}(S_{\mathrm{solve}} \mid S_{\mathrm{diff}}) + \mathbb{P}(S_{\mathrm{stable}} \mid S_{\mathrm{diff}}) + \mathbb{P}(S_{\mathrm{cond}}(x) \mid S_{\mathrm{diff}}) \le 2.$$
Thus, rearranging and multiplying by $\mathbb{P}(S_{\mathrm{diff}})$ gives
$$\mathbb{P}(S_{\mathrm{solve}} \text{ and } S_{\mathrm{diff}}) \le \mathbb{P}(S_{\mathrm{diff}}) \cdot (p_{\mathrm{unstable}} + p_{\mathrm{cond}}) \le p_{\mathrm{unstable}} + p_{\mathrm{cond}}.$$
Finally, adding $\mathbb{P}(S_{\mathrm{solve}}, \neg S_{\mathrm{diff}}) \le 1 - \mathbb{P}(S_{\mathrm{diff}})$, which is $o(1)$ by our choice of $\varepsilon$, to both sides (so as to apply Lemma 2.5) lets us conclude
$$p_{\mathrm{solve}}^2 \le \mathbb{P}(S_{\mathrm{solve}}) \le p_{\mathrm{unstable}} + p_{\mathrm{cond}} + (1 - \mathbb{P}(S_{\mathrm{diff}})) = o(1). \qquad \square$$

2.5 Locally Improving Algorithms

So far, we have established strong low degree hardness for both low degree polynomial and low coordinate degree algorithms which take values in $\Sigma_n$. Next we show that this last condition is not in fact as restrictive as it might appear.
appear, focusing on the LCD case. In the next step towards Theorem 1.4, we show that our preceding analysis extends to $O(1)$-distance perturbations of an LCD algorithm, thanks to the preservation of stability. Throughout the below, fix a distance $r = O(1)$. We consider the event that the $\mathbb{R}^n$-valued algorithm $\mathcal{A}$ outputs a point close to a solution for an instance $g$:
\[ S_{\mathrm{close}}(r) = \{\exists\, \hat{x} \in S(E; g) \text{ s.t. } \mathrm{sign}(\mathcal{A}(g)) \in B(\hat{x}, r)\} = \{B_\Sigma(\mathrm{sign}(\mathcal{A}(g)), r) \cap S(E; g) \neq \emptyset\}. \]
Equivalently, this means that the rounded algorithm $\hat{\mathcal{A}}^r$ defined in (1.6) solves the NPP.

Proposition 2.9 (Hardness for Locally Improved LCD Algorithms). Let $g \sim \mathcal{N}(0, I_n)$ be a standard Normal random vector. Let $r > 0$ be an $n$-independent constant and $\mathcal{A}$ be any coordinate degree $D$ algorithm with $\|\mathcal{A}(g)\|_2 \lesssim \sqrt{n}$. Assume that (a) if $E = \delta n$ for $\delta \in (0, 1)$, then $D \le o(n)$; (b) if $\omega(\log^2 n) \le E \le o(n)$, then $D \le o(E/\log_2^2(Cn/E))$. Then
\[ \mathbb{P}(S_{\mathrm{close}}(r)) = \mathbb{P}(\hat{\mathcal{A}}^r(g) \in S(E; g)) = o(1). \]

Proof of Proposition 2.9: We maintain the setup of the proof of Theorem 1.4. Let $x \coloneqq \hat{\mathcal{A}}^r(g)$ and define the events $S_{\mathrm{diff}}$, $S_{\mathrm{solve}}$, $S_{\mathrm{stable}}$, and $S_{\mathrm{cond}}(x)$ as in Section 2.4, replacing $\mathcal{A}$ with $\hat{\mathcal{A}}^r$. Let $p_{\mathrm{solve}} \coloneqq \mathbb{P}(\hat{\mathcal{A}}^r(g) \in S(E; g))$; letting $S$ be the event
\[ \{\hat{\mathcal{A}}^r(g) \in S(E; g)\} = \{\mathrm{sign}(\mathcal{A}(g)) \in S(E; g) + B(0, r)\}, \]
Lemma 2.5 ensures $\mathbb{P}_{g' \sim_{1-p} g}(S_{\mathrm{solve}}) \ge p_{\mathrm{solve}}^2$. We keep the same definitions
of $p_{\mathrm{unstable}}$ and $p_{\mathrm{cond}}$ as in (2.9), again replacing $\mathcal{A}$ with $\hat{\mathcal{A}}^r$. Thus, we choose $p \asymp \log_2(n/D)/n$, so that $\mathbb{P}(S_{\mathrm{diff}}) \approx 1$. Additionally, choose $\eta$ as in Lemma 2.3, so that $p_{\mathrm{cond}} = o(1)$. It then suffices to show that $p_{\mathrm{unstable}} = o(1)$. To see this, recall that when $E = \delta n$, our choice of $\eta$ is $\Theta(1)$, so $\eta^2 n p = \omega(1)$. In the sublinear case $\omega(\log^2 n) \le E \le o(n)$, we instead get
\[ \eta^2 n p \gtrsim \frac{E}{n \log_2(Cn/E)} \cdot n \ge \frac{E}{\log_2(Cn/E)} = \omega(1), \]
since $E \gg \log^2 n$. Applying the properly modified Lemma 1.9 (knowing that the first term bounding $p_{\mathrm{unstable}}$ is $o(1)$ with these choices of $p$ and $\eta$), we see that $\tilde{p}_{\mathrm{unstable}} = o(1)$, as desired. $\square$

2.6 Truly Random Rounding

To complete the proof of Theorem 1.4, we need to show that randomized rounding cannot help solve the NPP. For this, we show in Lemma 2.12 below that if one has a subcube of $\Sigma_n$ with dimension growing slowly with $n$, then at most one of those points will be a solution. Then we show that randomized rounding fails with high probability whenever it changes a diverging number of coordinates. Together with the previous subsection, which addresses $O(1)$ coordinate changes, this completes the proof.

Below, let $\mathcal{A}$ be an $\mathbb{R}^n$-valued algorithm and $\omega(\log^2 n) \le E \le n$. We then define the deterministically rounded algorithm $\mathcal{A}^*(g) \coloneqq \mathrm{sign}(\mathcal{A}(g))$, which is the most likely single outcome of randomized rounding on $\mathcal{A}(g)$. Define $q_1(x), \dots, q_n(x) \in [0, \tfrac{1}{2}]$ by
\[ q_i(x) \coloneqq \mathbb{P}(\mathrm{round}_\omega(x)_i \neq \mathrm{sign}(x)_i) = \frac{|x_i - \mathrm{sign}(x_i)|}{2}. \tag{2.11} \]
(Here we suppress the dependence on $\vec{\omega} = (\omega_1, \dots, \omega_n)$ from just
above Theorem 1.4.)

Lemma 2.10. Fix $x \in \mathbb{R}^n$. Draw $n$ coin flips $I_{x,i} \sim \mathrm{Bern}(2 q_i(x))$ as well as $n$ signs $s_i \sim \mathrm{Unif}\{\pm 1\}$, all mutually independent; define the random variable $\tilde{x} \in \Sigma_n$ by $\tilde{x}_i \coloneqq s_i I_{x,i} + (1 - I_{x,i}) \mathrm{sign}(x)_i$. Then $\tilde{x} \sim \mathrm{round}_\omega(x)$.

Proof: Let $x^* \coloneqq \mathrm{sign}(x)$. Conditioning on $I_{x,i}$, we can check that
\[ \mathbb{P}(\tilde{x}_i \neq x^*_i) = 2 q_i(x) \cdot \mathbb{P}(\tilde{x}_i \neq x^*_i \mid I_{x,i} = 1) + (1 - 2 q_i(x)) \cdot \mathbb{P}(\tilde{x}_i \neq x^*_i \mid I_{x,i} = 0) = q_i(x). \]
Thus, $\mathbb{P}(\tilde{x}_i = x^*_i) = \mathbb{P}(\mathrm{round}_\omega(x)_i = x^*_i)$. $\square$

By Lemma 2.10, we can redefine $\mathrm{round}_\omega(x)$ to be $\tilde{x}$ as constructed above without loss of generality. It thus makes sense to define $\tilde{\mathcal{A}}(g) \coloneqq \mathrm{round}_\omega(\mathcal{A}(g))$, which is now always $\Sigma_n$-valued. We have seen in Proposition 2.9 that when $\|\tilde{\mathcal{A}} - \mathcal{A}^*\|_2 \le O(1)$, low degree hardness will still apply. On the other hand, when $\|\tilde{\mathcal{A}} - \mathcal{A}^*\|_1 \ge \omega(1)$, any rounding scheme will introduce so much randomness that $\tilde{x}$ will effectively be a random point, which has a vanishing probability of being a solution due to the local sparsity of the solution set. (These two cases suffice, as $\|\tilde{\mathcal{A}} - \mathcal{A}^*\|_2 \le \|\tilde{\mathcal{A}} - \mathcal{A}^*\|_1$.)

To see this, we first show that any randomized rounding scheme as in Lemma 2.10 which differs with high probability from $\mathcal{A}^*$ must resample a diverging number of coordinates.

Lemma 2.11. Fix $x \in \mathbb{R}^n$, and let $q_1(x), \dots, q_n(x)$ be defined as in (2.11). Then $\tilde{x} \neq x^*$ holds with probability $1 - o(1)$ if and only if $\sum_i q_i(x) = \omega(1)$. Moreover, if $\sum_i q_i(x) = \omega(1)$
, the number of coordinates $\tilde{x}$ is resampled in diverges in probability (i.e., exceeds any fixed $N < \infty$ with probability $1 - o(1)$).

Proof: Recall that for $x \in [0, 1/2]$, we have $\log_2(1 - x) = -\Theta(x)$. Thus, as each coordinate of $x$ is rounded independently, we can compute
\[ \mathbb{P}(\tilde{x} = x^*) = \prod_i (1 - q_i(x)) = \exp_2\Big( \sum_i \log_2(1 - q_i(x)) \Big) \le \exp_2\Big( -\Theta\Big( \sum_i q_i(x) \Big) \Big). \]
Thus, $\mathbb{P}(\tilde{x} = x^*) = o(1)$ if and only if $\sum_i q_i(x) = \omega(1)$.

Next, following the construction of $\tilde{x}$ in Lemma 2.10, let $E_i = \{I_{x,i} = 1\}$ be the event that $\tilde{x}_i$ is resampled from $\mathrm{Unif}\{\pm 1\}$, independently of $x^*_i$. The $E_i$ are independent Bernoulli variables, so $\sum_i \mathbf{1}_{E_i}$ has variance at most its mean. Hence the second claim follows by Chebyshev's inequality. $\square$

The following Lemma 2.12 takes advantage of this by showing that NPP solutions are isolated.

Lemma 2.12 (Solutions Repel). Consider any distances $r = \Omega(1)$ and energy levels $E \gg r \log n$. With high probability, there are no pairs of distinct solutions $x, x' \in S(E; g)$ to an instance $g$ with $\|x - x'\| \le 2\sqrt{r}$ (i.e., within $r$ sign flips of each other):
\[ \mathbb{P}(\exists\, (x, x') \in S(E; g) \text{ s.t. } \|x - x'\| \le 2\sqrt{r}) \le \exp_2(-E + O(r \log n)) = o(1). \tag{2.12} \]

Proof: Consider any $x \neq x'$, and let $J \subseteq [n]$ denote the coordinates in which $x, x'$ differ. Then $x = x_{\bar{J}} + x_J$ and $x' = x_{\bar{J}} - x_J$. Assuming both $x, x' \in S(E; g)$, we can expand the inequalities $-2^{-E} \le \langle g, x \rangle, \langle g, x' \rangle \le 2^{-E}$ into
\[ -2^{-E} \le \langle g, x_{\bar{J}} \rangle + \langle g, x_J \rangle \]
\[ \le 2^{-E}, \qquad -2^{-E} \le \langle g, x_{\bar{J}} \rangle - \langle g, x_J \rangle \le 2^{-E}. \]
Multiplying the lower inequality by $-1$ and adding the resulting inequalities gives $|\langle g, x_J \rangle| \le 2^{-E}$. Thus, finding pairs of distinct solutions within distance $2\sqrt{r}$ implies finding a subset $J \subseteq [n]$ of at most $r$ coordinates and $|J|$ signs $x_J$ such that $|\langle g_J, x_J \rangle| \le 2^{-E}$. By [Ver18, Exer. 0.0.5], there are
\[ \sum_{1 \le r' \le r} \binom{n}{r'} \le \Big( \frac{en}{r} \Big)^r \le (en)^r = 2^{O(r \log n)} \]
choices of such subsets, and at most $2^r$ choices of signs. Now, $\langle g_J, x_J \rangle \sim \mathcal{N}(0, |J|)$, and as $|J| \ge 1$, Lemma 2.1 and the following remark imply
\[ \mathbb{P}(|\langle g_J, x_J \rangle| \le 2^{-E}) \le \exp_2(-E + O(1)). \]
Union bounding this over the $2^{O(r \log n)}$ possibilities gives (2.12). $\square$

Next we deduce strong hardness for "truly randomized" algorithms which resample a diverging number of coordinates. Roughly, this holds because if enough coordinates are resampled, the resulting point is conditionally random within a subcube of $\Sigma_n$ whose dimension grows slowly with $n$, which by Lemma 2.12 can only contain a single solution.

Theorem 2.13. Let $x = \mathcal{A}(g)$, and define $x^*$, $\tilde{x}$, etc., as previously. Moreover, assume that for any $x$ in the possible outputs of $\mathcal{A}$, we have $\sum_i q_i(x) = \omega(1)$. Then, for any $E \ge \omega(\log^2 n)$, we have
\[ \mathbb{P}(\tilde{\mathcal{A}}(g) \in S(E; g)) = \mathbb{P}(\tilde{x} \in S(E; g)) \le o(1). \]

Proof: Following the characterization of $\tilde{x}$ in Lemma 2.10, let $K \coloneqq \min(\log_2 n, \sum_i I_{x,i})$. By the assumptions on $\sum_i q_i(x)$ and Lemma 2.11, we know that $K$, which is at most the number of coordinates which are resampled, is bounded as $1 \ll K \le \log_2 n$,
for any possible $x = \mathcal{A}(g)$. Now, let $J \subseteq [n]$ denote the set of the first $K$ coordinates to be resampled, so that $K = |J|$, and consider $\mathbb{P}(\tilde{x} \in S(E; g) \mid \tilde{x}_{\bar{J}})$, where we fix the coordinates outside of $J$ and let $\tilde{x}$ be uniformly sampled from a $K$-dimensional subcube of $\Sigma_n$. All such $\tilde{x}$ are within distance $2\sqrt{K}$ of each other, so by Lemma 2.12, the probability that there is more than one such $\tilde{x} \in S(E; g)$ is bounded by
\[ \exp_2(-E + O(K \log n)) \le \exp_2(-E + O(\log^2 n)) = o(1), \]
by assumption on $E$. Thus, the probability that any of the $\tilde{x}$ is in $S(E; g)$ is bounded by $2^{-K}$, whence
\[ \mathbb{P}(\tilde{x} \in S(E; g)) = \mathbb{E}[\mathbb{P}(\tilde{x} \in S(E; g) \mid \tilde{x}_{\bar{J}})] \le 2^{-K} = o(1). \qquad \square \]

Finally, we combine the preceding results to deduce Theorem 1.4.

Proof of Theorem 1.4: Fix an arbitrarily large integer $r > 0$ (but fixed as $n$ grows). First, we know from Proposition 2.9 that $\mathbb{P}(\hat{\mathcal{A}}^r(g) \in S(E; g)) = o_{n \to \infty}(1)$ for any fixed $r$. Therefore, by taking $r$ large slowly with $n$, it suffices to show that
\[ \mathbb{P}(\tilde{\mathcal{A}}(g) \in S(E; g), \hat{\mathcal{A}}^r(g) \notin S(E; g)) \le o_{n \to \infty}(1). \]
Indeed, since $\|x - x^*\|_2 \le \|x - x^*\|_1$, the left-hand probability tends to $0$ uniformly as $r$ grows by Theorem 2.13. This completes the proof. $\square$

Acknowledgement. Thanks to Brice Huang and Subhabrata Sen for comments and discussions.

References

[AC08] D. Achlioptas and A. Coja-Oghlan, "Algorithmic Barriers
from Phase Transitions," in 2008 49th Annual IEEE Symposium on Foundations of Computer Science, Oct. 2008, pp. 793-802. doi: 10.1109/FOCS.2008.11.
[ACR11] D. Achlioptas, A. Coja-Oghlan, and F. Ricci-Tersenghi, "On the solution-space geometry of random constraint satisfaction problems," Random Structures & Algorithms, vol. 38, no. 3, pp. 251-268, 2011.
[AFG96] M. F. Argüello, T. A. Feo, and O. Goldschmidt, "Randomized Methods for the Number Partitioning Problem," Computers & Operations Research, vol. 23, no. 2, pp. 103-111, Feb. 1996, doi: 10.1016/0305-0548(95)E0020-L.
[AG24] A. E. Alaoui and D. Gamarnik, "Hardness of sampling solutions from the Symmetric Binary Perceptron," arXiv preprint arXiv:2407.16627, 2024.
[AGJ20] G. B. Arous, R. Gheissari, and A. Jagannath, "Algorithmic Thresholds for Tensor PCA," The Annals of Probability, vol. 48, no. 4, pp. 2052-2087, Jul. 2020, doi: 10.1214/19-AOP1415.
[Ala24] A. E. Alaoui, "Near-optimal shattering in the Ising pure p-spin and rarity of solutions returned by stable algorithms," arXiv preprint arXiv:2412.03511, 2024.
[Ali+05] B. Alidaee, F. Glover, G. A. Kochenberger, and C. Rego, "A New Modeling and Solution Approach for the Number Partitioning Problem," Journal of Applied Mathematics and Decision Sciences, vol. 2005, no. 2, pp. 113-121, Jan. 2005, doi: 10.1155/JAMDS.2005.113.
[AMS25] A. E. Alaoui, A. Montanari, and M. Sellke, "Shattering in pure spherical spin glasses," Communications in Mathematical Physics, vol. 406, no. 5, pp. 1-36, 2025.
[AWZ23] G. B. Arous, A. S. Wein, and I. Zadik, "Free energy wells and overlap gap property in sparse PCA,"
Communications on Pure and Applied Mathematics, vol. 76, no. 10, pp. 2410-2473, 2023.
[Bar+19] B. Barak, S. Hopkins, J. Kelner, P. K. Kothari, A. Moitra, and A. Potechin, "A nearly tight sum-of-squares lower bound for the planted clique problem," SIAM Journal on Computing, vol. 48, no. 2, pp. 687-735, 2019.
[BCP01] C. Borgs, J. Chayes, and B. Pittel, "Phase Transition and Finite-size Scaling for the Integer Partitioning Problem," Random Structures & Algorithms, vol. 19, no. 3-4, pp. 247-288, Oct. 2001, doi: 10.1002/rsa.10004.
[BFM04] H. Bauke, S. Franz, and S. Mertens, "Number Partitioning as a Random Energy Model," Journal of Statistical Mechanics: Theory and Experiment, vol. 2004, no. 4, p. P4003, Apr. 2004, doi: 10.1088/1742-5468/2004/04/P04003.
[BH21] G. Bresler and B. Huang, "The Algorithmic Phase Transition of Random k-SAT for Low Degree Polynomials," in 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), 2021, pp. 298-309. doi: 10.1109/FOCS52979.2021.00038.
[BM04] H. Bauke and S. Mertens, "Universality in the Level Statistics of Disordered Systems," Physical Review E, vol. 70, no. 2, p. 25102, Aug. 2004, doi: 10.1103/PhysRevE.70.025102.
[BM08] S. Boettcher and S. Mertens, "Analysis of the Karmarkar-Karp Differencing Algorithm," The European Physical Journal B, vol. 65, no. 1, pp. 131-140, Sep. 2008, doi: 10.1140/epjb/e2008-00320-9.
[Bor+09a] C. Borgs, J. Chayes, S. Mertens, and C. Nair, "Proof of the Local REM Conjecture for Number Partitioning. I: Constant Energy Scales," Random Structures & Algorithms, vol. 34, no
. 2, pp. 217-240, 2009, doi: 10.1002/rsa.20255.
[Bor+09b] C. Borgs, J. Chayes, S. Mertens, and C. Nair, "Proof of the Local REM Conjecture for Number Partitioning. II. Growing Energy Scales," Random Structures & Algorithms, vol. 34, no. 2, pp. 241-284, 2009, doi: 10.1002/rsa.20256.
[BR13] Q. Berthet and P. Rigollet, "Complexity theoretic lower bounds for sparse principal component detection," Journal of Machine Learning Research, 2013.
[CE15] A. Coja-Oghlan and C. Efthymiou, "On Independent Sets in Random Graphs," Random Structures & Algorithms, vol. 47, no. 3, pp. 436-486, Oct. 2015, doi: 10.1002/rsa.20550.
[CGJ78] E. G. Coffman Jr., M. R. Garey, and D. S. Johnson, "An Application of Bin-Packing to Multiprocessor Scheduling," SIAM Journal on Computing, vol. 7, no. 1, pp. 1-17, Feb. 1978, doi: 10.1137/0207001.
[Che+19] W.-K. Chen, D. Gamarnik, D. Panchenko, and M. Rahman, "Suboptimality of Local Algorithms for a Class of Max-Cut Problems," The Annals of Probability, vol. 47, no. 3, May 2019, doi: 10.1214/18-AOP1291.
[CL91] E. G. Coffman and G. S. Lueker, Probabilistic Analysis of Packing and Partitioning Algorithms. in Wiley-Interscience Series in Discrete Mathematics and Optimization. New York: J. Wiley & Sons, 1991.
[COY19] D. Corus, P. S. Oliveto, and D. Yazdani, "Artificial Immune Systems Can Find Arbitrarily Good Approximations for the NP-hard Number Partitioning Problem," Artificial Intelligence, vol. 274, pp. 180-196, Sep. 2019, doi: 10.1016/j.artint.2019.03.001.
[Der80] B. Derrida, "Random-Energy Model: Limit of a Family of Disordered Models," Physical R
eview Letters, vol. 45, no. 2, pp. 79-82, Jul. 1980, doi: 10.1103/PhysRevLett.45.79.
[Der81] B. Derrida, "Random-Energy Model: An Exactly Solvable Model of Disordered Systems," Physical Review B, vol. 24, no. 5, pp. 2613-2626, Sep. 1981, doi: 10.1103/PhysRevB.24.2613.
[DGH25] H. Du, S. Gong, and R. Huang, "The algorithmic phase transition of random graph alignment problem," Probability Theory and Related Fields, pp. 1-56, 2025.
[Din+24] Y. Ding, D. Kunisky, A. S. Wein, and A. S. Bandeira, "Subexponential-time algorithms for sparse PCA," Foundations of Computational Mathematics, vol. 24, no. 3, pp. 865-914, 2024.
[DM15] Y. Deshpande and A. Montanari, "Improved sum-of-squares lower bounds for hidden clique and hidden submatrix problems," in Conference on Learning Theory, 2015, pp. 523-562.
[Gam+22] D. Gamarnik, E. C. Kızıldağ, W. Perkins, and C. Xu, "Algorithms and barriers in the symmetric binary perceptron model," in 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), 2022, pp. 576-587.
[Gam+23] D. Gamarnik, E. C. Kizildağ, W. Perkins, and C. Xu, "Geometric barriers for stable and online algorithms for discrepancy minimization," in Conference on Learning Theory, 2023, pp. 3231-3263.
[Gam21] D. Gamarnik, "The Overlap Gap Property: A Geometric Barrier to Optimizing over Random Structures," Proceedings of the National Academy of Sciences, vol. 118, no. 41, p. e2108492118, Oct. 2021, doi: 10.1073/pnas.2108492118.
[GJ21] D. Gamarnik and A. Jagannath, "The overlap gap property and approximate message passing algorithms for p-spin models," The Annals of Probability, vol. 49, no. 1, pp. 180-205, 2021.
[GJ79] M. R. Garey
and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. in A Series of Books in the Mathematical Sciences. New York: W. H. Freeman, 1979.
[GJK23] D. Gamarnik, A. Jagannath, and E. C. Kızıldağ, "Shattering in the Ising pure p-spin model", arXiv preprint arXiv:2307.07461, 2023.
[GJS21] D. Gamarnik, A. Jagannath, and S. Sen, "The Overlap Gap Property in Principal Submatrix Recovery," Probability Theory and Related Fields, vol. 181, no. 4, pp. 757-814, Dec. 2021, doi: 10.1007/s00440-021-01089-7.
[GJW24] D. Gamarnik, A. Jagannath, and A. S. Wein, "Hardness of Random Optimization Problems for Boolean Circuits, Low-Degree Polynomials, and Langevin Dynamics," SIAM Journal on Computing, vol. 53, no. 1, pp. 1-46, 2024.
[GK23] D. Gamarnik and E. C. Kızıldağ, "Algorithmic obstructions in the random number partitioning problem," The Annals of Applied Probability, vol. 33, no. 6B, pp. 5497-5563, 2023.
[GL18] D. Gamarnik and Q. Li, "Finding a Large Submatrix of a Gaussian Random Matrix," Ann. Stat., vol. 46, no. 6A, pp. 2511-2561, 2018.
[Gra69] R. L. Graham, "Bounds on Multiprocessing Timing Anomalies," SIAM Journal on Applied Mathematics, vol. 17, no. 2, pp. 416-429, Mar. 1969, doi: 10.1137/0117039.
[GS14] D. Gamarnik and M. Sudan, "Limits of Local Algorithms over Sparse Random Graphs," in Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, in ITCS '14. New York, NY, USA: Association for Computing Machinery, Jan. 2014, pp. 369-376. doi: 10.1145/2554797.2554831.
[GS17] D. Gamarnik and M. Sudan, "Performance of sequential local algorithms for the random NAE-K-SAT pr
oblem", SIAM Journal on Computing, vol. 46, no. 2, pp. 590-619, 2017.
[GW00] I. Gent and T. Walsh, "Phase Transitions and Annealed Theories: Number Partitioning as a Case Study," Instituto Cultura, Jun. 2000.
[GW98] I. P. Gent and T. Walsh, "Analysis of Heuristics for Number Partitioning," Computational Intelligence, vol. 14, no. 3, pp. 430-451, 1998, doi: 10.1111/0824-7935.00069.
[GZ22] D. Gamarnik and I. Zadik, "Sparse high-dimensional linear regression. Estimating squared error and a phase transition," Ann. Stat., vol. 50, no. 2, pp. 880-903, 2022.
[GZ24] D. Gamarnik and I. Zadik, "The landscape of the planted clique problem: Dense subgraphs and the overlap gap property," The Annals of Applied Probability, vol. 34, no. 4, pp. 3375-3434, 2024.
[Har+24] C. Harshaw, F. Sävje, D. A. Spielman, and P. Zhang, "Balancing covariates in randomized experiments with the gram-schmidt walk design," Journal of the American Statistical Association, vol. 119, no. 548, pp. 2934-2946, 2024.
[Hay02] B. Hayes, "The Easiest Hard Problem," American Scientist, vol. 90, no. 2, pp. 113-117, Apr. 2002.
[Hob+17] R. Hoberg, H. Ramadas, T. Rothvoss, and X. Yang, "Number balancing is as hard as Minkowski's theorem and shortest vector," in International Conference on Integer Programming and Combinatorial Optimization, 2017, pp. 254-266.
[Hop+17] S. B. Hopkins, P. K. Kothari, A. Potechin, P. Raghavendra, T. Schramm, and D. Steurer, "The power of sum-of-squares for detecting hidden structures," in 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), 2017, pp. 720-731.
[Hop18] S. Hopkins, "Statistical Inference and the Sum of Squares Method," 2018. [Online]. Available: https://
www.samuelbhopkins.com/thesis.pdf
[HS23] B. Huang and M. Sellke, "Algorithmic Threshold for Multi-Species Spherical Spin Glasses," arXiv:2303.12172, 2023.
[HS25a] B. Huang and M. Sellke, "Tight Lipschitz Hardness for Optimizing Mean Field Spin Glasses," Communications on Pure and Applied Mathematics, vol. 78, no. 1, pp. 60-119, 2025.
[HS25b] B. Huang and M. Sellke, "Strong Low Degree Hardness for Stable Local Optima in Spin Glasses." Accessed: Jan. 30, 2025. [Online]. Available: http://arxiv.org/abs/2501.06427
[HSS15] S. B. Hopkins, J. Shi, and D. Steurer, "Tensor principal component analysis via sum-of-square proofs," in Conference on Learning Theory, 2015, pp. 956-1006.
[Jer92] M. Jerrum, "Large Cliques Elude the Metropolis Process," Random Structures & Algorithms, vol. 3, no. 4, pp. 347-359, Jan. 1992, doi: 10.1002/rsa.3240030402.
[Joh+89] D. S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon, "Optimization by Simulated Annealing: An Experimental Evaluation; Part I, Graph Partitioning," Operations Research, vol. 37, no. 6, pp. 865-892, 1989, Accessed: Mar. 15, 2025. [Online]. Available: http://www.jstor.org/stable/171470
[Joh+91] D. S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon, "Optimization by Simulated Annealing: An Experimental Evaluation; Part II, Graph Coloring and Number Partitioning," Operations Research, vol. 39, no. 3, pp. 378-406, 1991, Accessed: Mar. 15, 2025. [Online]. Available: http://www.jstor.org/stable/171393
[Jon+23] C. Jones, K. Marwaha, J. S. Sandhu, and J. Shi, "Random Max-CSPs Inherit Algorithmic Hardness from Spin Glasses," in
14th Innovations in Theoretical Computer Science Conference (ITCS 2023), 2023, p. 77.
[KAK19] A. M. Krieger, D. Azriel, and A. Kapelner, "Nearly Random Designs with Greatly Improved Balance," Biometrika, vol. 106, no. 3, pp. 695-701, Sep. 2019, doi: 10.1093/biomet/asz026.
[Kar+86] N. Karmarkar, R. M. Karp, G. S. Lueker, and A. M. Odlyzko, "Probabilistic Analysis of Optimum Partitioning," Journal of Applied Probability, vol. 23, no. 3, pp. 626-645, 1986, doi: 10.2307/3214002.
[Kis15] N. Kistler, "Derrida's random energy models: From spin glasses to the extremes of correlated random fields," Correlated Random Systems: Five Different Methods: CIRM Jean-Morlet Chair, Spring 2013. Springer, pp. 71-120, 2015.
[KK83] N. Karmarkar and R. M. Karp, "The Differencing Method of Set Partitioning," 1983. Accessed: Mar. 15, 2025. [Online]. Available: https://www2.eecs.berkeley.edu/Pubs/TechRpts/1983/6353.html
[KKS14] J. Kratica, J. Kojić, and A. Savić, "Two Metaheuristic Approaches for Solving Multidimensional Two-Way Number Partitioning Problem," Computers & Operations Research, vol. 46, pp. 59-68, Jun. 2014, doi: 10.1016/j.cor.2014.01.003.
[Kot+17] P. K. Kothari, R. Mori, R. O'Donnell, and D. Witmer, "Sum of squares lower bounds for refuting any CSP," in Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017, pp. 132-145.
[Kun24] D. Kunisky, "Low Coordinate Degree Algorithms I: Universality of Computational Thresholds for Hypothesis Testing." Accessed: Mar. 26, 2025. [Online]. Available: http://arxiv.org/abs/2403.07
862
[KWB19] D. Kunisky, A. S. Wein, and A. S. Bandeira, "Notes on computational hardness of hypothesis testing: Predictions using the low-degree likelihood ratio," in ISAAC Congress (International Society for Analysis, its Applications and Computation), 2019, pp. 1-50.
[LKZ15a] T. Lesieur, F. Krzakala, and L. Zdeborová, "MMSE of Probabilistic Low-Rank Matrix Estimation: Universality with Respect to the Output Channel," in 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Sep. 2015, pp. 680-687. doi: 10.1109/ALLERTON.2015.7447070.
[LKZ15b] T. Lesieur, F. Krzakala, and L. Zdeborova, "Phase Transitions in Sparse PCA," in 2015 IEEE International Symposium on Information Theory (ISIT), Jun. 2015, pp. 1635-1639. doi: 10.1109/ISIT.2015.7282733.
[LS24] S. Li and T. Schramm, "Some Easy Optimization Problems Have the Overlap-Gap Property." Accessed: Mar. 31, 2025. [Online]. Available: http://arxiv.org/abs/2411.01836
[LSZ24] S. Li, T. Schramm, and K. Zhou, "Discrepancy algorithms for the binary perceptron," arXiv:2408.00796, 2024.
[Lue87] G. S. Lueker, "A Note on the Average-Case Behavior of a Simple Differencing Method for Partitioning," Operations Research Letters, vol. 6, no. 6, pp. 285-287, Dec. 1987, doi: 10.1016/0167-6377(87)90044-7.
[Mer01] S. Mertens, "A Physicist's Approach to Number Partitioning," Theoretical Computer Science, vol. 265, no. 1, pp. 79-108, Aug. 2001, doi: 10.1016/S0304-3975(01)00153-0.
[Mer06] S. Mertens, "The Easiest Hard Problem: Number Partitioning," Computational Complexity and Statistical Physics, Chapter 5; Editors A. Percus and G. Istrate and C. Moor
e, pp. 125-160, 2006.
[MH78] R. Merkle and M. Hellman, "Hiding Information and Signatures in Trapdoor Knapsacks," IEEE Transactions on Information Theory, vol. 24, no. 5, pp. 525-530, Sep. 1978, doi: 10.1109/TIT.1978.1055927.
[MMZ05] M. Mézard, T. Mora, and R. Zecchina, "Clustering of Solutions in the Random Satisfiability Problem," Physical Review Letters, vol. 94, no. 19, p. 197205, May 2005, doi: 10.1103/PhysRevLett.94.197205.
[MPW15] R. Meka, A. Potechin, and A. Wigderson, "Sum-of-Squares Lower Bounds for Planted Clique," in Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, Portland, Oregon, USA: ACM, Jun. 2015, pp. 87-96. doi: 10.1145/2746539.2746600.
[O'D14] R. O'Donnell, Analysis of Boolean Functions. Cambridge University Press, 2014.
[RV17] M. Rahman and B. Virag, "Local Algorithms for Independent Sets Are Half-Optimal," The Annals of Probability, vol. 45, no. 3, May 2017, doi: 10.1214/16-AOP1094.
[SBD21] V. Santucci, M. Baioletti, and G. Di Bari, "An Improved Memetic Algebraic Differential Evolution for Solving the Multidimensional Two-Way Number Partitioning Problem," Expert Systems with Applications, vol. 178, p. 114938, Sep. 2021, doi: 10.1016/j.eswa.2021.114938.
[SFD96] R. H. Storer, S. W. Flanders, and S. David Wu, "Problem Space Local Search for Number Partitioning," Annals of Operations Research, vol. 63, no. 4, pp. 463-487, Aug. 1996, doi: 10.1007/BF02156630.
[Sha82] A. Shamir, "A Polynomial Time Algorithm for Breaking the Basic Merkle-Hellman Cryptosystem," in 23rd Annual Symposium on Foundations of Computer Science (Sfcs 1982), N
ov. 1982, pp. 145-152. doi: 10.1109/SFCS.1982.5.
[TMR20] P. Turner, R. Meka, and P. Rigollet, "Balancing Gaussian vectors in high dimension," in Conference on Learning Theory, 2020, pp. 3455-3486.
[Tsa92] L.-H. Tsai, "Asymptotic Analysis of an Algorithm for Balanced Parallel Processor Scheduling," SIAM Journal on Computing, vol. 21, no. 1, pp. 59-64, Feb. 1992, doi: 10.1137/0221007.
[Ver18] R. Vershynin, High-Dimensional Probability: An Introduction with Applications in Data Science, 1st ed. in Cambridge Series in Statistical and Probabilistic Mathematics. New York, NY: Cambridge University Press, 2018.
[VV25] N. Vafa and V. Vaikuntanathan, "Symmetric Perceptrons, Number Partitioning and Lattices." Accessed: Mar. 20, 2025. [Online]. Available: http://arxiv.org/abs/2501.16517
[Wei22] A. S. Wein, "Optimal low-degree hardness of maximum independent set," Mathematical Statistics and Learning, vol. 4, no. 3, pp. 221-251, 2022.
[WEM19] A. S. Wein, A. El Alaoui, and C. Moore, "The Kikuchi hierarchy and tensor PCA," in 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), 2019, pp. 1446-1468.
[Yak96] B. Yakir, "The Differencing Algorithm LDM for Partitioning: A Proof of a Conjecture of Karmarkar and Karp," Mathematics of Operations Research, vol. 21, no. 1, pp. 85-99, Feb. 1996, doi: 10.1287/moor.21.1.85.
arXiv:2505.20668v1 [math.ST] 27 May 2025

Eigenstructure inference for high-dimensional covariance with generalized shrinkage inverse-Wishart prior

Seongmin Kim1, Kwangmin Lee2, Sewon Park3, and Jaeyong Lee1

1Department of Statistics, Seoul National University
2Department of Big Data Convergence, Chonnam National University
3Security Algorithm Lab, Samsung SDS

May 28, 2025

Abstract

In multivariate statistics, estimating the covariance matrix is essential for understanding the interdependence among variables. In high-dimensional settings, where the number of covariates increases with the sample size, it is well known that the eigenstructure of the sample covariance matrix is inconsistent. The inverse-Wishart prior, a standard choice for covariance estimation in Bayesian inference, also suffers from posterior inconsistency.

To address the issue of eigenvalue dispersion in high-dimensional settings, the shrinkage inverse-Wishart (SIW) prior has recently been proposed. Despite its conceptual appeal and empirical success, the asymptotic justification for the SIW prior has remained limited.

In this paper, we propose a generalized shrinkage inverse-Wishart (gSIW) prior for high-dimensional covariance modeling. By extending the SIW framework, the gSIW prior accommodates a broader class of prior distributions and facilitates the derivation of theoretical properties under specific parameter choices. In particular, under the spiked covariance assumption, we establish the asymptotic behavior of the posterior distribution for both eigenvalues and eigenvectors by directly evaluating the posterior expectations for two sets of parameter choices. This direct evaluation provides insights into the large-sample behavior of the posterior that cannot be obtained through general posterior asymptotic theorems.
Finally, simulation studies illustrate that the proposed prior prov ides accurate estimation of the eigenstructure, particularly for spiked eigenvalu es, achieving nar- rowercredibleintervalsandhighercoverageprobabilitiescompared toexistingmeth- ods. For spiked eigenvectors, the performance is generally comparable to that of competing approaches, including the sample covariance. 1 Introduction Suppose X1,...,X narep-dimensional independent samples from N(0,Ξ£), where Ξ£ is a pΓpcovariance matrix. In the study of multivariate statistics , estimating the covariance matrix is crucial for understanding the interdependence am ong variables. Extensive re- search has been conducted on covariance estimation, includ ingEfron and Morris (1976), Dey and Srinivasan (1985), andDaniels and Kass (2001). In the high-dimensional settings, where the number of covar iates increases with the sample size, Johnstone (2001) andJohnstone and Lu (2009) demonstrated the inconsis- tency of the largest eigenvalue and eigenvector of sample co variance matrix. To address the challenges in high-dimensional settings, se veral shrinkage estimators have been proposed. Stein(1975),Haο¬(1980),Ledoit and Wolf (2004) andBodnar et al. (2014) suggested shrinkage estimators for large covariances by m inimizing speciο¬c loss functions. Additionally, Karoui(2008),Ledoit and Wolf (2012), andLam(2016) pro- posed orthogonal invariant shrinkage estimators based on r andom matrix theory. These methods require the ratio p/nto be bounded, which limits their applicability in ultra-hi gh dimensional settings. In contrast, we consider the regime w herep/nβ β. 2 Some structural assumptions are necessary to address the pr oblem of estimating high- dimensional covariance where p/nβ β. The ο¬rst structural assumption is the sparsity, either in the covariance or precision matrix. In the Frequen tist literature, Bickel and Levina(2008a),Cai and Zhou (2012), andCai et al. (2016)
suggested sparse covariance estimation using thresholding methods. In the Bayesian literature, Banerjee and Ghosal (2015) suggested sparse precision estimation using the Gaussian graphical model. Lee et al. (2022) introduced a beta-mixture shrinkage prior for sparse covariance and showed that the posterior has a nearly optimal contraction rate. Recently, Lee and Lee (2023) suggested a post-processed posterior for the sparse covariance model. Another structural assumption is bandedness. In the Frequentist literature, Bickel and Levina (2008b), Cai et al. (2010), and Bien et al. (2016) proposed banded covariance estimation using tapering or banding methods. In the Bayesian literature, Banerjee and Ghosal (2014) introduced banded precision estimation by using the Gaussian graphical model. Recently, Lee et al. (2023) suggested a post-processed posterior for the banded covariance model. These assumptions of sparsity or bandedness require prior knowledge of the covariance matrix structure, which may not always be available.

Without requiring sparsity or bandedness assumptions on the covariance, the spiked covariance assumption was initially proposed by Johnstone (2001). A spiked covariance $\Sigma$ is composed of $k$ large eigenvalues, while the remaining eigenvalues are relatively small. Paul (2007), Shen et al. (2016), and Wang and Fan (2017) investigated the asymptotic properties of the eigenstructure of the sample covariance under the assumption that the true covariance is spiked. Further, Wang and Fan (2017) suggested an estimator of covariance based on the asymptotic properties of the sample covariance. In the Bayesian literature, Ning and Ning (2021) and Xie et al. (2022) proposed a prior for the spiked covariance and verified the posterior convergence rate of the covariance.

In Bayesian inference, the inverse-Wishart prior is commonly used for covariance estimation. Surprisingly, there has been little prior research in Bayesian statistics on the eigenstructures of unconstrained covariance matrices. Recently, Lee et al. (2024) explored the asymptotic properties of the eigenstructure of covariance matrices under the inverse-Wishart prior, marking the first time such properties have been verified in a Bayesian context. Additionally, the shrinkage inverse-Wishart (SIW) prior was proposed by Berger et al. (2020) to address some of the limitations of the inverse-Wishart prior. However, despite its conceptual appeal and empirical success, the asymptotic justification for the SIW prior has remained limited.

In this paper, we propose a generalized shrinkage inverse-Wishart (gSIW) prior that accommodates a broader class of prior distributions and facilitates theoretical analysis of the eigenstructure of high-dimensional covariance matrices under specific parameter choices. It is known that the eigenvalues of the sample covariance matrix are biased (Wang and Fan 2017), and that the inverse-Wishart posterior is also biased for the eigenvalues (Lee et al. 2024). When parameters are chosen appropriately, the proposed gSIW prior offers an asymptotically unbiased estimator for the eigenstructure and enables rapid posterior computation.

There has been extensive research on posterior convergence rates for the covariance matrix in high-dimensional settings, including Banerjee and Ghosal (2014), Xiang et al. (2015), Lee et al. (2022), and Lee and Lee (2023). These studies primarily focus on the covariance or the precision matrix itself, with little attention given to the eigenstructures of unconstrained covariance matrices in the Bayesian framework. Unlike these studies, which rely on the general theorems of Ghosal et al. (2000) and Ghosal and van der Vaart (2007) to establish posterior convergence rates,
we directly compute the posterior expectation.

Commonly, to investigate the asymptotic properties of the posterior, researchers resort to general theorems on posterior convergence rates (e.g., Ghosal et al. (2000); Ghosal and van der Vaart (2007)). However, in this paper, we directly evaluate the posterior expectations of eigenvalues and eigenvectors. Our approach offers a couple of advantages. In particular, we derive an explicit expression for the bounds of the posterior expectations, which provides deeper statistical insights into the behavior of the posterior. Moreover, our method is applicable to general functionals of the covariance matrix.

The rest of the paper is structured as follows. In Section 2, we propose a generalization of the shrinkage inverse-Wishart prior introduced by Berger et al. (2020). We also suggest a sampling method for the posterior. Section 3 presents the posterior convergence rates for the eigenvalues and eigenvectors of the covariance matrix. Simulation studies are discussed in Section 4, followed by real data analysis in Section 5. A conclusion is provided in Section 6. The proofs of the theorems and related lemmas are provided in the Appendix.

2 Generalized shrinkage inverse-Wishart prior and its posterior

2.1 Notation

For any positive sequences $a_n$ and $b_n$, we write $a_n = o(b_n)$, or equivalently $a_n \ll b_n$, if $a_n / b_n \to 0$ as $n \to \infty$. We write $a_n = O(b_n)$, or equivalently $a_n \sim b_n$, if $|a_n / b_n| \le c$ for some constant $c > 0$. For real constants $a$ and $b$, we use $a \wedge b$ and $a \vee b$ to denote the minimum and maximum of $a$ and $b$, respectively. Let $\|A\|_F = \sqrt{\mathrm{tr}(A^T A)}$ denote the Frobenius norm, and let $\|A\|$ denote the spectral norm, i.e., the largest singular value of $A$. For a square matrix $A$, we use $\mathrm{etr}(A)$ to represent $\exp(\mathrm{tr}(A))$. We define the set of $p$-dimensional orthogonal matrices as $O(p) := \{U \in \mathbb{R}^{p \times p} : U^T U = I_p\}$ and the set of $p \times p$ positive definite matrices as $\mathcal{C}_p := \{A \in \mathbb{R}^{p \times p} : A \text{ is positive definite}\}$.
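As a quick numerical companion to the notation above, the following sketch (ours, in NumPy) checks the definitions of $\mathrm{etr}$, the Frobenius norm, and the spectral norm on a small matrix:

```python
import numpy as np

# A quick numerical check (ours) of the notation: etr(A) = exp(tr(A)),
# the Frobenius norm ||A||_F = sqrt(tr(A^T A)), and the spectral norm
# ||A|| as the largest singular value.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

etr = np.exp(np.trace(A))                     # etr(A) = exp(tr(A))
fro = np.sqrt(np.trace(A.T @ A))              # ||A||_F
spec = np.linalg.svd(A, compute_uv=False)[0]  # ||A||, largest singular value

# Both agree with NumPy's built-in norms.
assert np.isclose(fro, np.linalg.norm(A, "fro"))
assert np.isclose(spec, np.linalg.norm(A, 2))
```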
We write $X \sim \mathrm{InvGam}(\alpha, \beta)$ if $X$ follows the inverse-gamma distribution with shape parameter $\alpha > 0$ and scale parameter $\beta > 0$, having density proportional to $x^{-\alpha - 1}\exp(-\beta/x)$ for $x > 0$. The notation $X_i \overset{iid}{\sim} P$ indicates that the random variables $X_i$ are independent and identically distributed according to distribution $P$, while $X_i \overset{ind}{\sim} P_i$ denotes that the $X_i$'s are independent but not necessarily identically distributed.

Suppose $\Sigma = (\sigma_{ij}, 1 \le i, j \le p)$ is a $p \times p$ symmetric matrix, $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_p)$ is a $p \times p$ diagonal matrix, and $U = [u_1, \ldots, u_p] \in O(p)$, where $u_i$ is the $i$-th column of $U$. We define the matrix differential forms of these matrices as follows. Let
$$(d\Sigma) := \bigwedge_{i \le j} d\sigma_{ij}$$
denote the exterior product of the distinct elements of $\Sigma$,
$$(d\Lambda) := \bigwedge_{i=1}^{p} d\lambda_i, \quad \text{and} \quad (dU) := \bigwedge_{1 \le i < j \le p} u_j^T \, du_i,$$
which represents the differential form corresponding to the invariant measure on $O(p)$. Note that the volume of a set in $O(p)$ can be obtained by integrating $(dU)$ over the set, i.e.,
$$\mathrm{volume}(D) = \int_D (dU), \quad D \subset O(p).$$
For a detailed discussion of differential forms related to matrices, see Muirhead (2009).

2.2 Generalized shrinkage inverse-Wishart prior

Suppose $X_1, \ldots, X_n$ are independent random vectors following a $p$-dimensional multivariate normal distribution with covariance matrix $\Sigma \in \mathcal{C}_p$:
$$X_i \overset{iid}{\sim} N(0, \Sigma), \quad \forall i = 1, \ldots, n. \tag{1}$$
Berger et al. (2020) proposed the shrinkage inverse-Wishart (SIW) prior, a new class of priors for $\Sigma$ whose densities are given by
$$\pi_{SIW}(\Sigma)(d\Sigma) = \pi_{SIW}(\Sigma \mid a, b, H)(d\Sigma) \propto \frac{\mathrm{etr}(-\frac{1}{2}\Sigma^{-1}H)}{|\Sigma|^a \, [\prod_{i<j}(\lambda_i - \lambda_j)]^b}\,(d\Sigma),$$
where $a \ge 0$, $b \in [0, 1]$, $H \in \mathcal{C}_p$, and $\lambda_1, \ldots, \lambda_p$ are the ordered eigenvalues of $\Sigma$. This prior is called the shrinkage inverse-Wishart prior, as it induces shrinkage of the eigenvalues through the term $\prod_{i<j}(\lambda_i - \lambda_j)$ compared to the inverse-Wishart prior. When $b = 0$, the SIW prior reduces to the inverse-Wishart prior. When $b < 0$, the eigenvalue spreading problem, which is already problematic under the inverse-Wishart prior
in high-dimensional settings, becomes even more severe. On the other hand, when $b > 1$, the prior forces the eigenvalues to become nearly identical. Accordingly, as proposed in Berger et al. (2020), we restrict $b$ to the range $[0, 1]$ to avoid these undesirable behaviors.

Consider the spectral decomposition $\Sigma = U\Lambda U^T$, where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_p)$ and $U = [u_1, \ldots, u_p]$ is a $p \times p$ matrix whose $i$-th column $u_i$ is the eigenvector corresponding to the $i$-th eigenvalue $\lambda_i$. We can rewrite $(d\Sigma)$ in terms of $(dU)$ and $(d\Lambda)$:
$$(d\Sigma) = \prod_{i<j}(\lambda_i - \lambda_j)\,(d\Lambda)(dU). \tag{2}$$
See Farrell (2012) and Muirhead (2009). Using equation (2), the density of the shrinkage inverse-Wishart prior can be written as
$$\pi_{SIW}(\Lambda, U \mid a, b, H)(d\Lambda)(dU) \propto \frac{\mathrm{etr}(-\frac{1}{2}\Sigma^{-1}H)}{|\Sigma|^a \, [\prod_{i<j}(\lambda_i - \lambda_j)]^{b-1}}\,(d\Lambda)(dU),$$
where $\lambda_1, \ldots, \lambda_p$ are the ordered eigenvalues of $\Sigma$. Since the SIW prior is symmetric with respect to the eigenvalues, the ordering constraint can be removed without affecting the distribution. The SIW prior has demonstrated empirically successful performance but is not supported by comprehensive theoretical analysis. In this paper, we propose a generalized shrinkage inverse-Wishart (gSIW) prior whose density is defined as
$$\pi_{gSIW}(\Lambda, U \mid a_1, \ldots, a_p, b, H)(d\Lambda)(dU) \propto \frac{\mathrm{etr}(-\frac{1}{2}U\Lambda^{-1}U^T H)}{\prod_{i=1}^{p}\lambda_i^{a_i}\,\prod_{i<j}|\lambda_i - \lambda_j|^{b-1}}\,(d\Lambda)(dU), \tag{3}$$
where $a_1, \ldots, a_p > 0$, $b \in [0, 1]$, and $\lambda_1, \ldots, \lambda_p$ are the unordered eigenvalues of $\Sigma$. By allowing each $a_i$ to be defined differently for each $\lambda_i$, we can consider a prior distribution that is more general than the SIW prior. In particular, when $a_1 = \cdots = a_p = a$, the gSIW prior reduces to the SIW prior. When $b = 1$, the term $|\lambda_i - \lambda_j|^{1-b}$ disappears from the prior distribution of $(\Lambda, U)$, which facilitates faster sampling and simplifies theoretical analysis. Therefore, we assume $b = 1$ in the following sections to establish the theoretical properties. The gSIW prior is conjugate to the multivariate normal distribution.
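For concreteness, the unnormalized log-density of the gSIW prior in $(\Lambda, U)$ coordinates can be sketched as follows. This is an illustration of the formula only: the function and all variable names are ours, not the authors' implementation.

```python
import numpy as np

# Sketch (ours) of the unnormalized log-density of the gSIW prior in
# (Lambda, U) coordinates:
#   log pi = -0.5 * tr(U Lam^{-1} U^T H) - sum_i a_i * log(lam_i)
#            + (1 - b) * sum_{i<j} log|lam_i - lam_j|,
# where the last term comes from the |lam_i - lam_j|^{b-1} factor in
# the denominator.
def gsiw_logdensity(lam, U, a, b, H):
    p = len(lam)
    val = -0.5 * np.trace(U @ np.diag(1.0 / lam) @ U.T @ H)  # etr term
    val -= np.sum(a * np.log(lam))                           # prod lam_i^{a_i}
    iu = np.triu_indices(p, k=1)
    diffs = np.abs(lam[:, None] - lam[None, :])[iu]          # |lam_i - lam_j|
    val += (1.0 - b) * np.sum(np.log(diffs))
    return val

lam = np.array([5.0, 2.0, 1.0])
U = np.eye(3)
a = np.array([2.0, 2.0, 2.0])
H = 4.0 * np.eye(3)

v1 = gsiw_logdensity(lam, U, a, b=1.0, H=H)  # b = 1: difference term vanishes
v0 = gsiw_logdensity(lam, U, a, b=0.0, H=H)  # b = 0: inverse-Wishart-like case
```

With $b = 1$ the pairwise-difference term drops out, matching the simplification used in the theoretical sections below.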
The posterior density of $(\Lambda, U)$ is given by
$$\pi(\Lambda, U \mid \mathbb{X}_n)(d\Lambda)(dU) \propto \prod_{i=1}^{p}\lambda_i^{-a_i - n/2}\,\mathrm{etr}\Big(-\frac{1}{2}\Lambda^{-1}U^T(H + nS)U\Big)(d\Lambda)(dU),$$
where $\mathbb{X}_n = (X_1, \ldots, X_n)$ and $S$ is the sample covariance. Consider the spectral decomposition $nS = QWQ^T$, where $W = \mathrm{diag}(n\hat{\lambda}_1, \ldots, n\hat{\lambda}_{n \wedge p}, 0, \ldots, 0)$ and $Q$ is a $p \times p$ matrix whose $i$-th column is the eigenvector corresponding to the $i$-th eigenvalue. If we denote $\Gamma = Q^T U$ and $H = hI_p$ for some positive $h > 0$, then $(dU) = (d\Gamma)$ and we get
$$\pi(\Lambda, \Gamma \mid \mathbb{X}_n)(d\Lambda)(d\Gamma) \propto \prod_{i=1}^{p}\lambda_i^{-a_i - n/2}\,\mathrm{etr}\Big(-\frac{1}{2}\Lambda^{-1}\Gamma^T(hI_p + W)\Gamma\Big)(d\Lambda)(d\Gamma).$$
To obtain the posterior convergence rate, we need the following expression:
$$\pi(\Gamma \mid \mathbb{X}_n)(d\Gamma) \propto \Big(\int \pi(\Lambda, \Gamma \mid \mathbb{X}_n)(d\Lambda)\Big)(d\Gamma) \propto \prod_{i=1}^{p}\big(2^{a_i + n/2 - 1}\,\Gamma(a_i + n/2 - 1)\big) \cdot \prod_{i=1}^{p} c_i^{-a_i - n/2 + 1}\,(d\Gamma),$$
where $c_i$ is the $(i,i)$-th element of $\Gamma^T(hI_p + W)\Gamma$.

2.3 Sampling Method

We use the Gibbs sampling method as proposed by Berger et al. (2020). The posterior density is given by
$$\pi(\Lambda, \Gamma \mid \mathbb{X}_n) \propto \prod_{i=1}^{p}\lambda_i^{-a_i - n/2}\,\mathrm{etr}\Big(-\frac{1}{2}\Lambda^{-1}\Gamma^T(hI_p + W)\Gamma\Big),$$
where $\lambda_1, \ldots, \lambda_p$ are the unordered diagonal elements of $\Lambda$.

1. Sampling $\Lambda$ given $(\Gamma, \mathbb{X}_n)$:
$$\pi(\Lambda \mid \Gamma, \mathbb{X}_n) \propto \prod_{i=1}^{p}\lambda_i^{-a_i - n/2}\exp\Big(-\frac{c_i}{2\lambda_i}\Big),$$
where $c_i$ is the $(i,i)$-th element of $\Gamma^T(hI_p + W)\Gamma$. Each $\lambda_i$ can be sampled independently from $\lambda_i \sim \mathrm{InvGam}(a_i + n/2 - 1, c_i/2)$.

2. Sampling $\Gamma$ given $(\Lambda, \mathbb{X}_n)$:
$$\pi(\Gamma \mid \Lambda, \mathbb{X}_n) \propto \mathrm{etr}\Big(-\frac{1}{2}\Lambda^{-1}\Gamma^T(hI_p + W)\Gamma\Big).$$
We use the Gibbs sampling scheme proposed in Berger et al. (2020), which iteratively updates pairs of rows of $\Gamma$. The procedure for updating the first two rows is as follows:

Step I. Rotation transformation. To update the first two rows of $\Gamma$, we construct the updated matrix as
$$\Gamma_{new} = \mathrm{diag}(R, I_{p-2})\begin{pmatrix}\Gamma_{12}\\ \Gamma_{-12}\end{pmatrix},$$
where $\Gamma_{12}$ and $\Gamma_{-12}$ denote the first two and the remaining $p - 2$ rows of $\Gamma$, respectively. The matrix $R$ is a signed rotation matrix defined as
$$R = \begin{pmatrix}\epsilon_1 & 0\\ 0 & \epsilon_2\end{pmatrix}R_\theta, \quad R_\theta = \begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix},$$
where $\epsilon_1, \epsilon_2 \in \{-1, 1\}$ and $\theta \in (-\pi/2, \pi/2]$.

Step II. Conditional distribution of $\theta$. Since $H_0 = hI_p + W$ is diagonal, it can be partitioned as
$$H_0 = \begin{pmatrix}H_1 & 0\\ 0 & H_2\end{pmatrix},$$
where $H_1 \in \mathbb{R}^{2 \times 2}$. The conditional distribution of $\theta$ becomes
$$\pi(\theta \mid \Gamma_{12}, \Gamma_{-12}, \Lambda; H_0) \propto \mathrm{etr}\Big(-\frac{1}{2}H_1 R_\theta \Gamma_{12}\Lambda^{-1}\Gamma_{12}^T R_\theta^T\Big).$$
Let the spectral decomposition of $\Gamma_{12}\Lambda^{-1}\Gamma_{12}^T$ be
$$\Gamma_{12}\Lambda^{-1}\Gamma_{12}^T = \begin{pmatrix}\cos\phi & -\sin\phi\\ \sin\phi & \cos\phi\end{pmatrix}\begin{pmatrix}s_1 & 0\\ 0 & s_2\end{pmatrix}\begin{pmatrix}\cos\phi & \sin\phi\\ -\sin\phi & \cos\phi\end{pmatrix},$$
with $s_1 > s_2$ and $\phi \in (-\pi/2, \pi/2]$. Then the conditional density of $\theta$ simplifies to
$$\pi(\theta \mid \Gamma_{12}, \Gamma_{-12}, \Lambda; H_0) \propto \exp\big(c\cos^2(\theta + \phi)\big),$$
where $c = -\frac{1}{2}|(s_1 - s_2)(h_1 - h_2)|$, with $h_1$ and $h_2$ the diagonal entries of $H_1$.

Step III. Sampling via transformation. Let $\alpha = \cos^2(\theta + \phi)$; then we obtain
$$\pi(\alpha \mid \Gamma_{12}, \Gamma_{-12}, \Lambda; H_0) \propto \exp(c\alpha)\,\alpha^{-1/2}(1 - \alpha)^{-1/2}, \quad \alpha \in (0, 1),$$
which corresponds to a tilted Beta(1/2, 1/2) distribution. We sample $\alpha$ using rejection sampling with a Beta proposal. Once $\alpha$ is drawn, the corresponding pair of rows of $\Gamma$ is updated accordingly, and this procedure is repeated for all row pairs. Finally, since $\Gamma = Q^T U$, the eigenvector matrix of $\Sigma$ is recovered as $U = Q\Gamma$.

2.4 Choice of the number of spiked eigenvalues k

To implement the proposed gSIW prior, the number of spiked eigenvalues $k$ must be specified in advance. To select $k$, we consider three methods. First, we adopt the Watanabe-Akaike Information Criterion (WAIC; Watanabe and Opper 2010), defined for each candidate value $k$ as
$$\mathrm{WAIC}(k) = -2\sum_{i=1}^{n}\log E_{\Sigma \mid \mathbb{X}_n^{(k)}}[p(X_i \mid \Sigma)] + 2\sum_{i=1}^{n}\mathrm{Var}_{\Sigma \mid \mathbb{X}_n^{(k)}}[\log p(X_i \mid \Sigma)],$$
where the posterior distribution is derived under the gSIW prior assuming $k$ spiked eigenvalues. In computing the WAIC, we follow the second approach proposed by Gelman et al. (2014), which estimates the effective number of parameters using the variance of the log predictive density. The value of $k$ is then selected by minimizing WAIC over $k = 1, \ldots, k_{max}$, where $k_{max}$ is a suitably chosen upper bound.

Second, we consider the Growth Ratio (GR) method proposed by Ahn and Horenstein (2013), which is based on the eigenvalues of the sample covariance.
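Before turning to the GR criterion, the tilted-Beta draw in Step III of the sampler above admits a short sketch. Since $c \le 0$, the factor $\exp(c\alpha) \le 1$ can itself serve as an acceptance probability against a plain Beta(1/2, 1/2) proposal; this is a minimal sketch under that observation, with names of our own choosing.

```python
import numpy as np

# Sketch (ours) of Step III: draw alpha from the tilted Beta(1/2, 1/2)
# density pi(alpha) ∝ exp(c*alpha) * alpha^{-1/2} * (1 - alpha)^{-1/2}
# on (0, 1) via rejection sampling with a plain Beta(1/2, 1/2) proposal.
# Since c = -|(s1 - s2)(h1 - h2)|/2 <= 0, exp(c*alpha) <= 1 is a valid
# acceptance probability.
def sample_tilted_beta(c, rng):
    assert c <= 0.0
    while True:
        alpha = rng.beta(0.5, 0.5)            # proposal draw
        if rng.random() < np.exp(c * alpha):  # accept w.p. exp(c*alpha)
            return alpha

rng = np.random.default_rng(0)
draws = np.array([sample_tilted_beta(-2.0, rng) for _ in range(2000)])
# With c < 0 the tilt pushes mass toward alpha = 0, so the sample mean
# should fall below the untilted Beta(1/2, 1/2) mean of 0.5.
```

Once $\alpha$ is accepted, $\theta$ is recovered from $\alpha = \cos^2(\theta + \phi)$ together with the sign choices $\epsilon_1, \epsilon_2$ from Step I.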
For each $k$, the GR is defined as
$$\mathrm{GR}(k) = \frac{\log\big(1 + \hat{\lambda}_k / V(k)\big)}{\log\big(1 + \hat{\lambda}_{k+1} / V(k+1)\big)}, \quad \text{where } V(k) = \sum_{j=k+1}^{n \wedge p}\hat{\lambda}_j,$$
and $\hat{\lambda}_j$ denotes the $j$-th largest eigenvalue of the sample covariance matrix. The value of $k$ is selected by maximizing GR over $k = 1, \ldots, k_{max}$.

Finally, we consider the information criterion $IC_{p3}$ proposed by Bai and Ng (2002), among several criteria introduced in their work, as it showed relatively strong empirical performance under our model setting. It is defined as
$$IC_{p3}(k) = \log\Big(\frac{1}{np}\big\|X - XU_1(U_1^T U_1)^{-1}U_1^T\big\|_F^2\Big) + k\,\frac{\log(n \wedge p)}{n \wedge p},$$
where $X \in \mathbb{R}^{n \times p}$ denotes the observed data matrix and $U_1 \in \mathbb{R}^{p \times k}$ denotes the matrix of the estimated leading $k$ eigenvectors. The value of $k$ is selected by minimizing $IC_{p3}$ over $k = 1, \ldots, k_{max}$.

In practice, all three methods provide good estimates when the spiked eigenvalues are large, and WAIC generally performs better even when the spiked eigenvalues are relatively small. Overall, WAIC exhibits more robust performance across a range of settings. Therefore, we recommend using WAIC for selecting $k$. The code for the Gibbs sampling procedure and the choice of $k$ is publicly available at https://github.com/swpark0413/besiw.

3 Main Results

Consider $n$ independent samples $X_1, \ldots, X_n$ from $N(0, \Sigma)$, where $\Sigma$ is a $p \times p$ positive definite matrix. We assume the gSIW prior as defined in (3) with $b = 1$. Under the gSIW prior, the posterior is given by
$$\pi(\Lambda, \Gamma \mid \mathbb{X}_n)(d\Lambda)(d\Gamma) \propto \prod_{i=1}^{p}\lambda_i^{-a_i - n/2}\,\mathrm{etr}\Big(-\frac{1}{2}\Lambda^{-1}\Gamma^T(hI_p + W)\Gamma\Big)(d\Lambda)(d\Gamma),$$
where $\lambda_1, \ldots, \lambda_p$ are the unordered eigenvalues of $\Sigma$. Suppose that $\Sigma$ is a spiked covariance with $k$ spiked eigenvalues. Let $\lambda_{0,1}, \ldots, \lambda_{0,p}$ be the ordered eigenvalues of the true covariance $\Sigma_0$. To obtain the
posterior convergence rates for the eigenstructure, we assume the following conditions:

A1. We consider high-dimensional settings where $n/p \to 0$.

A2. There exist positive constants $c_0$ and $C_0$ such that, for all $n$, $C_0 > \lambda_{0,k+1} > \cdots > \lambda_{0,p} > c_0$.

A3. The $k$ largest eigenvalues are sufficiently separated by a constant value $\delta_0 > 0$:
$$\frac{\lambda_{0,j} - \lambda_{0,j+1}}{\lambda_{0,j}} \ge \delta_0, \quad \forall j = 1, \ldots, k.$$

A4. For the $k$ spiked eigenvalues, the quantities $d_j = \frac{p}{n\lambda_{0,j}}$ are bounded.

A5. We set the hyperparameters of the prior as follows:
$$a_1 \le \cdots \le a_k \le a_{k+1} = \cdots = a_n \le a_{n+1} = \cdots = a_p, \quad a_k \ll n, \quad H = hI_p \text{ with } h < n.$$

The conditions A1-A4 are used to establish the asymptotic properties of the sample covariance as in Wang and Fan (2017). The condition A2 implies that the non-spiked eigenvalues are bounded away from zero and from above. The condition A4 represents the requirement that $\lambda_{0,j}$ is at least of order $\frac{p}{n}$. The conditions we impose differ slightly from those considered in Lee et al. (2024), which assume that $\frac{k^3}{n} \to 0$ and $\frac{\lambda_{0,k+1}\,p}{\lambda_{0,k}\,n} \to 0$. In their setting, the number of spiked eigenvalues $k$ is allowed to diverge, and the non-spiked eigenvalues are not necessarily bounded by a constant. In contrast, we assume that $k$ is fixed and impose a boundedness condition on the non-spiked eigenvalues. The condition A5 specifies the hyperparameters of the proposed prior and serves as a sufficient condition for verifying posterior convergence.

We demonstrate the posterior convergence rates for eigenvalues and eigenvectors when using the gSIW prior. The proofs rely on the asymptotic properties of the sample covariance of Wang and Fan (2017) and are provided in the Appendix. We begin by presenting the shrinkage rates of the posterior distribution itself.

Lemma 3.1. Under model (1) and prior (3), assume that conditions A1-A5 hold. Let $\epsilon > 0$, and define
$$\kappa = \min_{l < k}(a_{l+1} - a_l) \cdot \min_{l < k}\log\Big(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\Big).$$
Suppose that $\frac{np}{a_n} \ll \frac{\lambda_{0,k}}{n\lambda_{0,1}}\cdot\frac{\kappa}{2}\epsilon$ and $\epsilon \gtrsim \sqrt{\frac{np}{a_n}} \vee \kappa^{-1}$. Then,
$$\int_{D_\epsilon^c}\pi(\Gamma \mid \mathbb{X}_n)(d\Gamma) = O\Big(\exp\Big(-\frac{\kappa}{2}\epsilon\Big)\Big) + O\Big(\Big(\frac{n\lambda_{0,k} + p}{p}\Big)^{-\epsilon^2 a_n}\Big), \tag{4}$$
where the subset $D_\epsilon$ of $O(p)$ is defined as
$$D_\epsilon = \Big\{\Gamma \in O(p) : \inf_{Q_2 \in O_{p-k}}\Big\|\begin{pmatrix}I_k & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F < \epsilon\Big\}.$$

In this paper, we consider two sets of prior parameter choices for the asymptotic analysis of the posterior. First, we examine the case where $a_1, \ldots, a_k$ are approximately equal. In this case, we need to reparametrize $\Lambda$ and $\Gamma$ to investigate the posterior convergence rate. For $\Gamma \in O(p)$, we determine the $k \times k$ permutation matrix $P_l$ by solving the following optimization problem:
$$P_l = \underset{P \in \mathcal{P}}{\mathrm{argmin}}\Big[\inf_{Q_2 \in O_{p-k}}\Big\|\begin{pmatrix}P & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F\Big],$$
where $\mathcal{P}$ is the set of all $k \times k$ permutation matrices. Once $P_l$ is determined, we consider the following reparametrization of $\Lambda$ and $\Gamma$:
$$\tilde{\Lambda} = \begin{pmatrix}P & 0\\ 0 & I_{p-k}\end{pmatrix}^T\Lambda\begin{pmatrix}P & 0\\ 0 & I_{p-k}\end{pmatrix}, \quad \tilde{\Gamma} = \Gamma\begin{pmatrix}P & 0\\ 0 & I_{p-k}\end{pmatrix}. \tag{5}$$
Since the transformation preserves the measure, it follows that $(d\Gamma) = (d\tilde{\Gamma})$. By reparametrizing the variables, the posterior distribution becomes
$$\pi(\tilde{\Lambda}, \tilde{\Gamma} \mid \mathbb{X}_n)(d\tilde{\Lambda})(d\tilde{\Gamma}) = \prod_{i=1}^{p}\tilde{\lambda}_i^{-a_i - n/2}\,\mathrm{etr}\Big(-\frac{1}{2}\tilde{\Lambda}^{-1}\tilde{\Gamma}^T(hI_p + W)\tilde{\Gamma}\Big)(d\tilde{\Lambda})(d\tilde{\Gamma}).$$
For each $\Gamma \in O(p)$, the optimal permutation $P_l$ can be selected, allowing us to define the subsets $D_{\epsilon,l} \subset O(p)$ as
$$D_{\epsilon,l} = \Big\{\Gamma \in O(p) : \inf_{Q_2 \in O_{p-k}}\Big\|\begin{pmatrix}P_l & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F < \epsilon\Big\},$$
where $P_1, \ldots, P_L$ denote all $k \times k$ permutation matrices. It follows that
$$\int_{(\bigcup_{l=1}^{L}D_{\epsilon,l})^c}\pi(\Gamma \mid \mathbb{X}_n)(d\Gamma) \propto \int_{D_\epsilon^c}\pi(\tilde{\Gamma} \mid \mathbb{X}_n)(d\tilde{\Gamma}),$$
and thus, when $a_1, \ldots, a_k$ are approximately equal, the posterior probability over $(\bigcup_{l=1}^{L}D_{\epsilon,l})^c$ can be analyzed analogously to Lemma 3.1.

Lemma 3.2. Under model (1) and prior (3), assume that conditions A1-A5 hold, and that $\max_{i \le k}a_i - \min_{i \le k}a_i \le q$ for some positive $q$, which
may depend on $n$ and $p$. Let $\epsilon > 0$, and define
$$\gamma = \min_{l < k}\log\Big(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\Big).$$
Suppose that $\frac{np}{a_n} \ll \frac{\lambda_{0,k}}{n\lambda_{0,1}}\cdot\frac{\gamma}{2}n\epsilon^2$ and $\epsilon \gtrsim \sqrt{\frac{np}{a_n}} \vee (n\gamma)^{-1/2}$. Then,
$$\int_{(\bigcup_{l=1}^{L}D_{\epsilon,l})^c}\pi(\Gamma \mid \mathbb{X}_n)(d\Gamma) = O\Big(\exp\Big(-\frac{\gamma}{2}n\epsilon^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\Big) + O\Big(\Big(\frac{n\lambda_{0,k} + p}{p}\Big)^{-\epsilon^2 a_n}\Big). \tag{6}$$

Lemmas 3.1 and 3.2 describe the degree of posterior shrinkage on the sets of orthogonal matrices $D_\epsilon$ and $\bigcup_{l=1}^{L}D_{\epsilon,l}$, respectively. These lemmas provide the foundation for analyzing posterior expectations. Consider the posterior expectation of a general function $f(\Lambda, \Gamma)$, defined as
$$E\big[f(\Lambda, \Gamma) \mid \mathbb{X}_n\big] = \frac{\int f'(c, \Gamma)\prod_i c_i^{-a_i - \frac{n}{2} + 1}(d\Gamma)}{\int \prod_i c_i^{-a_i - \frac{n}{2} + 1}(d\Gamma)},$$
where $f'(c, \Gamma) = E_\Lambda[f(\Lambda, \Gamma)]$ and $\lambda_i \overset{ind}{\sim} \mathrm{InvGam}(a_i + n/2 - 1, c_i/2)$. Suppose the function $f'$ is sufficiently bounded such that
$$\sup|f'| \ll \begin{cases}\exp\big(\frac{\gamma}{2}n\epsilon^2\big)\cdot\big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{-kq} \wedge \big(\frac{n\lambda_{0,k}+p}{p}\big)^{\epsilon^2 a_n}, & \text{if } a_1 \approx \cdots \approx a_k,\\[4pt] \exp\big(\frac{\kappa}{2}\epsilon\big) \wedge \big(\frac{n\lambda_{0,k}+p}{p}\big)^{\epsilon^2 a_n}, & \text{if } a_1 < \cdots < a_k.\end{cases}$$
Under this condition, Lemma 3.1 or 3.2 can be applied to derive the posterior expectation, thereby establishing the posterior convergence rate. The following results demonstrate specific cases where the posterior expectation can be explicitly derived.

Theorem 3.3. Under the assumptions of Lemma 3.2, let $\lambda_{(i)}$ denote the $i$-th largest eigenvalue of $\Sigma$. Suppose that the ratio $\lambda_{0,1}/\lambda_{0,k}$ is bounded by a positive constant, $\epsilon \asymp n^{-1/4}$, $a_1, \ldots, a_k \sim n^{1/2}$, and $a_n \gtrsim n^{3/2}p$. Then,
$$E\Big[\frac{\lambda_{(i)} - \lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,\mathbb{X}_n\Big] = O\Big(\frac{p}{n\lambda_{0,i}}\Big) + O(n^{-1/2+\delta}),$$
for $i = 1, \ldots, k$ and for all small $\delta > 0$.

The convergence rate stated in Theorem 3.3 is equivalent to the rate achieved by the inverse-Wishart prior, as established by Lee et al. (2024). To achieve faster rates than those given in Theorem 3.3, we set the hyperparameter $a_i$ depending on the eigenvalues of the sample covariance.

Theorem 3.4. Under the assumptions of Lemmas 3.1 and 3.2, let $\lambda_{(i)}$ denote the $i$-th largest eigenvalue of $\Sigma$. Suppose that the ratio $\lambda_{0,1}/\lambda_{0,k}$ is bounded by a positive constant, $\epsilon \asymp n^{-1/4}$, and the parameters satisfy
$$2a_i - 4 = \frac{nt}{\hat{\lambda}_i - t}, \quad i = 1, \ldots, k, \quad \text{with } t \in [\hat{\lambda}_{k+1}, \hat{\lambda}_n],$$
where $\hat{\lambda}_i$ denotes the $i$-th eigenvalue of the sample covariance matrix, and $a_n \gtrsim n^{3/2}p$. Then,
$$E\Big[\frac{\lambda_{(i)} - \lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,\mathbb{X}_n\Big] = O\Big(\frac{1}{\lambda_{0,i}}\sqrt{\frac{p}{n}}\Big) + O(n^{-\frac{1}{2}+\delta}),$$
for $i = 1, \ldots, k$ and for all small $\delta > 0$.

Theorem 3.4 provides a posterior convergence rate that is better than or at least equivalent to that described in Theorem 3.3. If the condition $\lambda_{0,i} \ll p/\sqrt{n}$ holds, then the convergence rate given in Theorem 3.4 is faster than that of the prior described in Theorem 3.3. The rate coincides with that of the shrinkage eigenvalues proposed by Wang and Fan (2017), given by
$$\hat{\lambda}_j^S = \hat{\lambda}_j - \frac{p}{np - nk - pk}\Big(\mathrm{tr}(S) - \sum_{i=1}^{k}\hat{\lambda}_i\Big).$$

Let $u_j$ be the eigenvector corresponding to $\lambda_j$, the $j$-th eigenvalue of the covariance $\Sigma$, and let $u_{0,j}$ denote the eigenvector corresponding to the true eigenvalue $\lambda_{0,j}$. Theorem 3.5 presents the posterior convergence rate for the first $k$ eigenvectors.

Theorem 3.5. Under the assumptions of Theorem 3.4, let $u_{(j)}$ denote the eigenvector corresponding to the $j$-th largest eigenvalue $\lambda_{(j)}$ of $\Sigma$. Then,
$$E\Big[1 - \big|u_{0,j}^T u_{(j)}\big|^2\,\Big|\,\mathbb{X}_n\Big] = O\Big(\frac{p}{n\lambda_{0,j}}\Big) + O_p(\eta_j),$$
for $j = 1, \ldots, k$, where $\eta_j = \frac{1}{\lambda_{0,j}}\sqrt{\frac{p}{n}} + \frac{p}{n^{3/2}\lambda_{0,j}} + \frac{1}{n}$.

This convergence rate is equivalent to the rate of the sample eigenvector as an estimator of the first $k$ eigenvectors. According to the study of Cai et al. (2013), this rate is proven to be optimal.

Corollary 3.6. Under the assumptions of Theorem 3.4,
$$E\Big[\frac{\|\Sigma - \Sigma_0\|_F^2}{\|\Sigma_0\|_F^2}\,\Big|\,\mathbb{X}_n\Big] = O\Big(\frac{p}{\lambda_{0,1}^2 + p} + \frac{n^{-1+2\delta}\lambda_{0,1}^2}{\lambda_{0,1}^2 + p}\Big) + O\Big(\frac{p}{n\lambda_{0,k}}\,\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2 + p}\Big) + O_p\Big(\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2 + p}\,\eta_k\Big), \tag{7}$$
for all small $\delta > 0$, where $\eta_k = \frac{1}{\lambda_{0,k}}\sqrt{\frac{p}{n}} + \frac{p}{n^{3/2}\lambda_{0,k}} + \frac{1}{n}$. Furthermore, for the sample covariance, we obtain the following rate:
$$\frac{\|S - \Sigma_0\|_F^2}{\|\Sigma_0\|_F^2} = O\Big(\frac{p^2}{n(\lambda_{0,1}^2 + p)} + \frac{n^{-1+2\delta}\lambda_{0,1}^2}{\lambda_{0,1}^2 + p}\Big) + O\Big(\frac{p}{n\lambda_{0,k}}\,\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2 + p}\Big) + O_p\Big(\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2 + p}\,\eta_k\Big). \tag{8}$$

The first term on the right-hand side of (7) corresponds to
the convergence rate of the eigenvalues, while the second and third terms are attributed to the convergence of the eigenvectors. Comparing the rate in (7) with the sample covariance rate in (8), we observe that the first term in the sample covariance rate is larger.

4 Simulation Studies

To evaluate the performance of the proposed gSIW prior, we consider observations sampled from a multivariate normal distribution $N(0, \Sigma_0)$. We design two types of simulation settings and compare six different methods. Throughout the simulations, we assume the true number of spiked eigenvalues is known and focus on estimating the spiked eigenvalues and their corresponding eigenvectors. Each simulation is repeated 100 times.

Method I. Sample covariance (Sample).

Method II. Generalized shrinkage inverse-Wishart prior (gSIW):
$$\pi(\Sigma \mid a, H) \propto \frac{\mathrm{etr}(-\frac{1}{2}U\Lambda^{-1}U^T H)}{\prod_{i=1}^{p}\lambda_i^{a_i}\,\prod_{i<j}|\lambda_i - \lambda_j|}.$$

Method III. Generalized inverse-Wishart prior (gIW):
$$\pi(\Sigma \mid a, H) \propto \frac{\mathrm{etr}(-\frac{1}{2}U\Lambda^{-1}U^T H)}{\prod_{i=1}^{p}\lambda_i^{a_i}},$$
where the hyperparameters $a$ and $H$ are set identically for both gSIW and gIW. In particular, we use
$$a_i = \frac{nt}{2(\hat{\lambda}_i - t)} + 2 \quad \text{for } i = 1, \ldots, k, \quad \text{where } t = \frac{1}{n-k}\sum_{i=1}^{n-k}\hat{\lambda}_i,$$
$a_n = p/2$, $a_p = 2p$, and $H = 4I_p$.

Method IV. Shrinkage inverse-Wishart prior proposed by Berger et al. (2020) (SIW):
$$\pi(\Sigma \mid a, H) \propto \frac{\mathrm{etr}(-\frac{1}{2}U\Lambda^{-1}U^T H)}{\prod_{i=1}^{p}\lambda_i^{a}\,\prod_{i<j}|\lambda_i - \lambda_j|},$$
where $a = 4$ and $H = 4I_p$.

Method V. Inverse-Wishart prior (IW):
$$\pi(\Sigma \mid a, H) \propto \frac{\mathrm{etr}(-\frac{1}{2}U\Lambda^{-1}U^T H)}{\prod_{i=1}^{p}\lambda_i^{a}},$$
where $a = p + 1$ and $H = I_p$, as specified by Lee et al. (2024).

Method VI. Shrinkage POET estimator proposed by Wang and Fan (2017) (S-POET).

To evaluate performance, we use the following errors for spiked eigenvalues and eigenvectors:
$$\mathrm{Err}_\lambda = \frac{|\lambda_{0,i} - \lambda_i|}{\lambda_{0,i}}, \quad \mathrm{Err}_u = 1 - (u_i^T u_{i,0})^2.$$
For the Bayesian methods, the posterior mean is used as the point estimator. Eigenvector estimates are obtained by aligning signs, taking the Euclidean average, and then normalizing. We also report the 95% credible (or confidence) interval length (IL) and the coverage probability (CP) for each spiked eigenvalue. The coverage probability is defined as the proportion of repetitions in which the true value lies within the credible (or confidence) interval.

For the S-POET method, the confidence intervals are derived from the asymptotic distribution
$$\sqrt{n}\Big(\frac{\hat{\lambda}_i^S}{\lambda_{0,i}} - 1\Big) \overset{d}{\to} N(0, 2), \quad \text{if } \sqrt{p} = o(\lambda_{0,i}),$$
where $\hat{\lambda}_i^S$ is the shrinkage eigenvalue, as established by Wang and Fan (2017).

4.1 Case 1

In this simulation, we set $(n, p) = (50, 500)$ with $\Sigma_0 = \mathrm{diag}(50, 20, 10, 1, \ldots, 1)$. For this $\Sigma_0$, we obtain $d_1 = 0.2$, $d_2 = 0.5$, and $d_3 = 1$, where $d_j = \frac{p}{n\lambda_{0,j}}$. This setting satisfies assumption A4, as the values of $d_j$ fall within the specified range. The goal of this setting is to assess the performance of the methods when the dimension $p$ is large and the spiked eigenvalues are significantly larger than the others.

| Eigenvalue | Sample Err | gSIW Err / CP / IL | gIW Err / CP / IL | SIW Err / CP / IL | IW Err / CP / IL | S-POET Err / CP / IL |
|---|---|---|---|---|---|---|
| $\lambda_1$ | 0.258 | 0.164 / 0.88 / 36.1 | 5.28 / 0.01 / 608 | 0.173 / 0.85 / 35.3 | 0.589 / 0.06 / 56.2 | 0.170 / 0.92 / 48.3 |
| $\lambda_2$ | 0.517 | 0.149 / 0.95 / 13.7 | 2.23 / 0.01 / 77.7 | 0.165 / 0.94 / 14.6 | 1.59 / 0 / 26.7 | 0.128 / 0.98 / 19.5 |
| $\lambda_3$ | 1.05 | 0.141 / 0.91 / 6.14 | 1.28 / 0.05 / 19.5 | 0.289 / 0.65 / 6.93 | 3.13 / 0 / 17.2 | 0.174 / 0.94 / 10.7 |

Table 1: Average errors (Err_λ), coverage probabilities (CP), and interval lengths (IL) for the estimated spiked eigenvalues.

| Eigenvector | Sample | gSIW | gIW | SIW | IW | S-POET |
|---|---|---|---|---|---|---|
| $u_1$ | 0.202 | 0.227 | 0.223 | 0.203 | 0.221 | 0.206 |
| $u_2$ | 0.419 | 0.443 | 0.466 | 0.427 | 0.673 | 0.428 |
| $u_3$ | 0.639 | 0.642 | 0.658 | 0.679 | 0.921 | 0.620 |

Table 2: Average errors (Err_u) for estimated eigenvectors.

Table 1 presents the average errors (Err_λ), coverage probabilities (CP), and credible (or confidence) interval lengths (IL) for the estimated spiked eigenvalues across different methods. The IW and gIW methods perform poorly in both estimation accuracy and coverage. In contrast, SIW, S-POET, and gSIW outperform the other methods. Among these, gSIW achieves the lowest errors for the first and third eigenvalues, while S-POET performs best for the second eigenvalue. The gSIW, SIW, and S-POET methods exhibit comparable coverage probabilities. Although SIW performs slightly worse and S-POET slightly better than gSIW, the credible intervals of gSIW are narrower than those of S-POET. Furthermore, gSIW achieves lower estimation errors than SIW for all spiked eigenvalues.

Table 2 presents the average errors (Err_u) for the estimated eigenvectors. The sample covariance achieves the lowest errors for the first and second spiked eigenvectors, while the S-POET method demonstrates the best performance for the third spiked eigenvector. Overall, the differences among the methods are relatively small.
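A minimal sketch of the Case 1 data-generating step and the eigenvalue error $\mathrm{Err}_\lambda$ for the sample covariance (Method I only; the seed and all names are ours):

```python
import numpy as np

# Sketch (ours) of the Case 1 setup: n = 50 draws from N(0, Sigma0)
# with p = 500 and Sigma0 = diag(50, 20, 10, 1, ..., 1), followed by
# the eigenvalue error Err_lambda = |lambda_{0,i} - lambda_i|/lambda_{0,i}
# for the sample covariance.
rng = np.random.default_rng(2025)
n, p = 50, 500
spikes = np.array([50.0, 20.0, 10.0])
sigma0_diag = np.concatenate([spikes, np.ones(p - 3)])

# Rows are N(0, Sigma0) draws since Sigma0 is diagonal.
X = rng.standard_normal((n, p)) * np.sqrt(sigma0_diag)
S = X.T @ X / n                               # sample covariance (mean known 0)
lam_hat = np.sort(np.linalg.eigvalsh(S))[::-1]  # descending eigenvalues

err = np.abs(spikes - lam_hat[:3]) / spikes   # Err_lambda for the spikes
d = p / (n * spikes)                          # d_j = p / (n * lambda_{0,j})
```

The quantities `d` recover $d_1 = 0.2$, $d_2 = 0.5$, $d_3 = 1$ from the text, and the sample spikes show the upward bias that the shrinkage methods are designed to correct.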
4.2 Case 2

In this simulation, we vary the sample size $n$ and $p$ as $(n, p) = (20, 100), (40, 100), \ldots, (80, 100)$, while fixing the covariance as $\Sigma_0 = \mathrm{diag}(5, 4, 3, 1, \ldots, 1)$. We assume that the true number of spiked eigenvalues is 3. As $n$ increases, the quantity $d_j = \frac{p}{n\lambda_{0,j}}$ decreases, in accordance with assumption A4, which holds more strongly for larger $n$.

| Eigenvalue | $n$ | Sample Err | gSIW Err / CP / IL | gIW Err / CP / IL | SIW Err / CP / IL | IW Err / CP / IL | S-POET Err / CP / IL |
|---|---|---|---|---|---|---|---|
| $\lambda_1$ | 20 | 1.45 | 0.422 / 0.69 / 6.27 | 2.80 / 0 / 29.3 | 0.110 / 1 / 5.37 | 4.23 / 0 / 27.4 | 0.670 / 0.42 / 16.8 |
| | 40 | 0.732 | 0.152 / 0.96 / 3.81 | 3.00 / 0 / 27.4 | 0.154 / 0.96 / 3.90 | 2.16 / 0 / 10.8 | 0.402 / 0.55 / 7.60 |
| | 60 | 0.499 | 0.102 / 0.98 / 3.11 | 3.64 / 0 / 30.9 | 0.132 / 0.96 / 3.50 | 1.41 / 0 / 6.66 | 0.307 / 0.64 / 5.36 |
| | 80 | 0.387 | 0.0958 / 0.97 / 2.75 | 4.40 / 0 / 35.4 | 0.113 / 0.96 / 3.09 | 1.03 / 0 / 4.89 | 0.254 / 0.70 / 4.30 |
| $\lambda_2$ | 20 | 1.54 | 0.292 / 0.81 / 3.55 | 1.80 / 0 / 11.6 | 0.0793 / 1 / 2.72 | 3.23 / 0 / 12.7 | 0.555 / 0.64 / 12.5 |
| | 40 | 0.783 | 0.105 / 0.97 / 2.43 | 2.10 / 0 / 12.0 | 0.165 / 0.88 / 2.08 | 1.95 / 0 / 6.08 | 0.390 / 0.65 / 6.03 |
| | 60 | 0.525 | 0.0791 / 0.98 / 2.04 | 2.54 / 0 / 13.7 | 0.184 / 0.8 / 2.06 | 1.36 / 0 / 3.92 | 0.314 / 0.66 / 4.31 |
| | 80 | 0.395 | 0.0895 / 0.98 / 1.86 | 3.07 / 0 / 15.6 | 0.163 / 0.78 / 2.10 | 1.03 / 0 / 2.98 | 0.262 / 0.72 / 3.46 |
| $\lambda_3$ | 20 | 1.98 | 0.322 / 0.84 / 2.83 | 1.54 / 0 / 7.41 | 0.0913 / 1 / 1.93 | 3.16 / 0 / 7.76 | 0.655 / 0.47 / 10.0 |
| | 40 | 1.05 | 0.123 / 0.99 / 2.02 | 1.80 / 0 / 7.88 | 0.0544 / 1 / 1.43 | 2.18 / 0 / 4.16 | 0.533 / 0.17 / 4.99 |
| | 60 | 0.700 | 0.0758 / 0.99 / 1.70 | 2.11 / 0 / 8.83 | 0.101 / 0.98 / 1.31 | 1.65 / 0 / 2.84 | 0.453 / 0.14 / 3.58 |
| | 80 | 0.528 | 0.0812 / 0.97 / 1.54 | 2.49 / 0 / 9.92 | 0.145 / 0.88 / 1.30 | 1.31 / 0 / 2.18 | 0.404 / 0.09 / 2.89 |

Table 3: Average errors (Err_λ), coverage probabilities (CP), and credible (or confidence) interval lengths (IL) for estimated eigenvalues under varying $n$.

The goal of this setting is to evaluate the performance of the methods when the spiked eigenvalues
https://arxiv.org/abs/2505.20668v1
are relatively weak. Table 3 presents the average errors (Err Ξ»), coverage probabilities (CP), and credible (or confidence) interval lengths (IL) for the estimated spiked eigenvalues across different methods and values of n. The gSIW, SIW, and S-POET methods consistently outperform the sample covariance. In contrast, the IW and gIW methods exhibit significantly poorer performance across all settings. For the small sample size n = 20, the SIW method shows superior performance in terms of both error and coverage probability, while for larger sample sizes, gSIW achieves the best overall performance. In particular, the coverage probability of gSIW increases with n, indicating that the method becomes more reliable as the sample size grows. Furthermore, across all values of n, the credible intervals produced by gSIW are shorter than those of S-POET, highlighting the efficiency of gSIW in quantifying uncertainty. These results demonstrate the advantages of the gSIW prior, particularly for moderate to large sample sizes.

Table 4 presents the average errors (Err ΞΎ) for the estimated spiked eigenvectors across different methods and varying sample sizes n. Since the gIW and IW methods fail to accurately estimate the eigenvalues, as shown in Table 3, we exclude them from further consideration.

Eigenvector  n   Sample  gSIW   gIW    SIW    IW     S-POET
ΞΎ1           20  0.747   0.788  0.784  0.791  0.828  0.737
             40  0.614   0.643  0.671  0.671  0.726  0.634
             60  0.543   0.560  0.607  0.565  0.638  0.578
             80  0.492   0.521  0.528  0.490  0.543  0.536
ΞΎ2           20  0.897   0.893  0.913  0.930  0.938  0.885
             40  0.794   0.837  0.835  0.886  0.896  0.798
             60  0.709   0.732  0.752  0.788  0.811  0.738
             80  0.650   0.680  0.703  0.657  0.786  0.698
ΞΎ3           20  0.960   0.961  0.948  0.958  0.961  0.952
             40  0.889   0.908  0.905  0.945  0.938  0.879
             60  0.802   0.813  0.831  0.917  0.902  0.806
             80  0.740   0.745  0.792  0.841  0.915  0.756

Table 4: Average errors (Err ΞΎ) for estimated eigenvectors under varying n.

At n = 20, S-POET shows the best performance, whereas the sample covariance performs the best for all other cases.
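The Case 2 data-generating model can be sketched in a few lines; this is a minimal illustration of one configuration (n = 80, p = 100) showing how the sample spiked eigenvalues and the quantity d_j = p/(nΞ»_{0,j}) are computed, not an implementation of any of the estimators compared in Tables 3 and 4.

```python
import numpy as np

# Case 2 setting (one configuration): Sigma_0 = diag(5, 4, 3, 1, ..., 1), p = 100.
rng = np.random.default_rng(1)
n, p, k = 80, 100, 3
lam0 = np.ones(p)
lam0[:k] = [5.0, 4.0, 3.0]                        # three true spiked eigenvalues
X = rng.standard_normal((n, p)) * np.sqrt(lam0)   # rows ~ N(0, diag(lam0))
S = X.T @ X / n                                   # sample covariance
sample_evals = np.sort(np.linalg.eigvalsh(S))[::-1]
# d_j = p / (n * lambda_{0,j}): the spike-strength quantity varied in this case study;
# it shrinks as n grows, in accordance with assumption A4.
d = p / (n * lam0[:k])
```

Because p > n here, the leading sample eigenvalues are visibly inflated relative to the truth, which is exactly the distortion the compared priors attempt to correct.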
Overall, the sample covariance, gSIW, and S-POET exhibit comparable performance in estimating eigenvectors.

Combining the results from Case 1 and Case 2, we observe that IW and gIW generally exhibit poor performance across all settings. In contrast, gSIW shows improved performance in estimating spiked eigenvalues when the quantity p/(nΞ»_i) is small. For spiked eigenvectors, the sample covariance consistently demonstrates strong performance, and both gSIW and S-POET perform comparably to the sample covariance.

5 Real Data

In this section, we apply the proposed gSIW prior to the MNIST dataset. To estimate the number of spiked eigenvalues in the covariance matrix, we select 50 images from the MNIST dataset, all labeled as the digit 7. Based on the selected number of spiked eigenvalues k, we perform dimensionality reduction.

We assume that the MNIST dataset follows a multivariate normal distribution with a gSIW prior on the covariance matrix. The number of spiked eigenvalues, denoted as k, is estimated using two model selection criteria: the Watanabe–Akaike Information Criterion (WAIC) and ICp3. Dimensionality reduction is then performed based on the selected k. For comparison, we also include results based on the Growth Ratio (GR) method.

Figure 1: The images of selected 50 MNIST samples labeled as 7.

Figure 1 displays the 50 selected MNIST samples labeled as 7. We flatten the 28Γ—28 images into 784-dimensional vectors. For each candidate
k, the gSIW prior is applied, and the WAIC and ICp3 are computed. WAIC and ICp3 select k = 7 and k = 9, respectively, while the GR method selects k = 1 as the optimal number of spiked eigenvalues.

(a) Dimensionality reduction with k = 7 (WAIC). (b) Dimensionality reduction with k = 9 (ICp3).
Figure 2: The images after dimensionality reduction of the selected 50 MNIST samples labeled as 7 using different model selection criteria.

Figure 2 shows the images reconstructed using dimensionality reduction with k = 7 and k = 9, as determined by WAIC and ICp3, respectively. The reconstructed images successfully retain the shape of the digit 7, indicating that both WAIC and ICp3 effectively capture the relevant subspace structure.

Figure 3: The first 50 eigenvalues of the sample covariance matrix.

In this case, where (n,p) = (50,784), the sample covariance eigenvalues are known to be heavily distorted due to high dimensionality. To illustrate this distortion, Figure 3 shows the first 50 eigenvalues of the sample covariance matrix. A few leading eigenvalues dominate the eigenvalue structure, while the remaining eigenvalues are much smaller. As a result, the GR method, which is based directly on sample eigenvalues, selects k = 1. By contrast, both WAIC and ICp3 select larger values of k that lead to substantially better reconstruction.

To further assess the quality of the reduced representation, we compute the normalized mean squared reconstruction error (NMSE), defined as the mean squared error divided by the square of the range of the true values. The NMSE for k = 7 (WAIC) is 0.030, and for k = 9 (ICp3) it is 0.024, both substantially lower than the NMSE for k = 1 (GR), which is 0.069. This suggests that both WAIC and ICp3 identify subspaces that preserve the original variation well, while the GR-based choice performs poorly.
We also evaluate the cumulative explained variance ratio (CVE), defined as the proportion of total variance captured by the top k eigenvalues of the sample covariance matrix. The CVE for k = 7 is 74.3%, and for k = 9 it is 79.0%, both substantially higher than the value for k = 1, which is 40.0%.

These results demonstrate that, in this experiment, both WAIC and ICp3 provide reasonable estimates of k that lead to effective dimensionality reduction and data reconstruction, while the GR method continues to underestimate k due to the distortion of sample eigenvalues in high-dimensional settings.

6 Conclusion

We proposed the generalized shrinkage inverse-Wishart (gSIW) prior, which generalizes the shrinkage inverse-Wishart (SIW) prior of Berger et al. (2020) by allowing componentwise shrinkage and distinguishing between spiked and non-spiked eigenvalue structures. We established posterior expectations for both eigenvalues and eigenvectors under the gSIW prior, and showed that it outperforms existing priors both theoretically and empirically when Assumptions A1 through A5 are satisfied.

Our theoretical analysis builds on the asymptotic behavior of sample covariance matrices developed in Wang and Fan (2017), particularly under the spiked covariance model. Future work may extend our results beyond the current assumptions, such as relaxing the spiked structure or
improving asymptotic bounds for sample eigenstructures in more general settings.

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2023R1A2C1003050).

References

Ahn, S. C. and Horenstein, A. R. (2013). Eigenvalue ratio test for the number of factors, Econometrica 81(3): 1203–1227.

Bai, J. and Ng, S. (2002). Determining the number of factors in approximate factor models, Econometrica 70(1): 191–221.

Banerjee, S. and Ghosal, S. (2014). Posterior convergence rates for estimating large precision matrices using graphical models, Electronic Journal of Statistics 8: 2111–2137.

Banerjee, S. and Ghosal, S. (2015). Bayesian structure learning in graphical models, Journal of Multivariate Analysis 136: 147–162.

Berger, J. O., Sun, D. and Song, C. (2020). Bayesian analysis of the covariance matrix of a multivariate normal distribution with a new class of priors, The Annals of Statistics 48(4): 2381–2403.

Bickel, P. J. and Levina, E. (2008a). Covariance regularization by thresholding, The Annals of Statistics pp. 2577–2604.

Bickel, P. J. and Levina, E. (2008b). Regularized estimation of large covariance matrices, The Annals of Statistics pp. 199–227.

Bien, J., Bunea, F. and Xiao, L. (2016). Convex banding of the covariance matrix, Journal of the American Statistical Association 111(514): 834–845.

Bodnar, T., Gupta, A. K. and Parolya, N. (2014). On the strong convergence of the optimal linear shrinkage estimator for large dimensional covariance matrix, Journal of Multivariate Analysis 132: 215–228.

Cai, T. T., Ma, Z. and Wu, Y. (2013). Sparse PCA: Optimal rates and adaptive estimation, Annals of Statistics 41(6): 3074–3110.

Cai, T. T., Ren, Z. and Zhou, H. H. (2016). Estimating structured high-dimensional covariance and precision matrices: Optimal rates and adaptive estimation, Electronic Journal of Statistics 10: 1–59.

Cai, T. T., Zhang, C. H. and Zhou, H. H. (2010).
Optimal rates of convergence for covariance matrix estimation, Annals of Statistics 38(4): 2118–2144.

Cai, T. T. and Zhou, H. H. (2012). Optimal rates of convergence for sparse covariance matrix estimation, The Annals of Statistics 40(271): 2389–2420.

Daniels, M. J. and Kass, R. E. (2001). Shrinkage estimators for covariance matrices, Biometrics 57(4): 1173–1184.

Dey, D. K. and Srinivasan, C. (1985). Estimation of a covariance matrix under Stein's loss, The Annals of Statistics pp. 1581–1591.

Efron, B. and Morris, C. (1976). Multivariate empirical Bayes and estimation of covariance matrices, The Annals of Statistics pp. 22–32.

Farrell, R. H. (2012). Multivariate Calculation: Use of the Continuous Groups, Springer Science & Business Media.

Gelman, A., Hwang, J. and Vehtari, A. (2014). Understanding predictive information criteria for Bayesian models, Statistics and Computing 24: 997–1016.

Ghosal, S., Ghosh, J. K. and Van Der Vaart, A. W. (2000). Convergence rates of posterior distributions, Annals of Statistics pp. 500–531.

Ghosal, S. and van der Vaart, A. (2007). Convergence rates of posterior distributions for non-iid observations, Annals of Statistics 35(1): 192–223.

Haff, L. (1980). Empirical Bayes estimation of the multivariate normal covariance matrix, The Annals of Statistics 8(3): 586–597.

Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal components analysis, The Annals of Statistics 29(2): 295–327.

Johnstone, I. M. and Lu, A. Y. (2009). On consistency and sparsity for principal components analysis in high dimensions, Journal of the American Statistical Association 104(486): 682–693.

Karoui, N. E. (2008). Spectrum estimation for large dimensional covariance matrices using random matrix theory, The Annals of Statistics pp. 2757–2790.

Lam,
C. (2016). Nonparametric eigenvalue-regularized precision or covariance matrix estimator, Annals of Statistics 44(3): 928–953.

Ledoit, O. and Wolf, M. (2004). A well-conditioned estimator for large-dimensional covariance matrices, Journal of Multivariate Analysis 88(2): 365–411.

Ledoit, O. and Wolf, M. (2012). Nonlinear shrinkage estimation of large-dimensional covariance matrices, The Annals of Statistics 40(2): 1024–1060.

Lee, K., Jo, S. and Lee, J. (2022). The beta-mixture shrinkage prior for sparse covariances with near-minimax posterior convergence rate, Journal of Multivariate Analysis 192: 105067.

Lee, K. and Lee, J. (2023). Post-processed posteriors for sparse covariances, Journal of Econometrics 236(1): 105475.

Lee, K., Lee, K. and Lee, J. (2023). Post-processed posteriors for banded covariances, Bayesian Analysis 18(3): 1017–1040.

Lee, K., Park, S., Kim, S. and Lee, J. (2024). Posterior asymptotics of high-dimensional spiked covariance model with inverse-Wishart prior, arXiv preprint arXiv:2412.10753.

Muirhead, R. J. (2009). Aspects of Multivariate Statistical Theory, John Wiley & Sons.

Ning, B. Y.-C. and Ning, N. (2021). Spike and slab Bayesian sparse principal component analysis, arXiv preprint arXiv:2102.00305.

Paul, D. (2007). Asymptotics of sample eigenstructure for a large dimensional spiked covariance model, Statistica Sinica pp. 1617–1642.

Shen, D., Shen, H., Zhu, H. and Marron, J. (2016). The statistics and mathematics of high dimension low sample size asymptotics, Statistica Sinica 26(4): 1747.

Stein, C. (1975). Estimation of a covariance matrix, 39th Annual Meeting IMS, Atlanta, GA, 1975.

Wang, W. and Fan, J. (2017). Asymptotics of empirical eigenstructure for high dimensional spiked covariance, Annals of Statistics 45(3): 1342.

Watanabe, S. and Opper, M. (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory, Journal of Machine Learning Research 11(12).

Xiang, R., Khare, K.
and Ghosh, M. (2015). High dimensional posterior convergence rates for decomposable graphical models, Electronic Journal of Statistics 9: 2828–2854.

Xie, F., Cape, J., Priebe, C. E. and Xu, Y. (2022). Bayesian sparse spiked covariance model with a continuous matrix shrinkage prior, Bayesian Analysis 17(4): 1193–1217.

Supplementary material of "Eigenstructure inference for high-dimensional covariance with generalized shrinkage inverse-Wishart prior"

Seongmin Kim*1, Kwangmin Lee†2, Sewon Park‑3, and Jaeyong LeeΒ§1
1Department of Statistics, Seoul National University
2Department of Big Data Convergence, Chonnam National University
3Security Algorithm Lab, Samsung SDS
May 28, 2025
*zlatjdals@snu.ac.kr  †klee564@jnu.ac.kr  ‑swpark0413@gmail.com  Β§leejyc@gmail.com

S1 Appendix

Consider the transformed data $Y_i = \Gamma^T X_i$, which follows $N(0,\Lambda)$, where $\Lambda = \mathrm{diag}(\lambda_{0,1},\dots,\lambda_{0,p})$ and $\lambda_{0,i}$ is the $i$th eigenvalue of the true covariance. Since the eigenvalues of the sample covariance are invariant under orthogonal transformation, we consider the sample covariance $S = \frac{1}{n} Y^T Y$, where $Y = (Y_1,\dots,Y_n)^T$. Define $\hat S = \frac{1}{n} Y Y^T$, which preserves the first $n$ eigenvalues of $S$. Let $\hat Y_i$ be the $i$th column of $Y$, so that $\hat Y_i \overset{ind}{\sim} N(0, \lambda_{0,i} I_n)$. Therefore,
$$\hat S = \frac{1}{n}\sum_{i=1}^p \hat Y_i \hat Y_i^T.$$
It can be expressed as
$$\hat S = \frac{1}{n}\sum_{i=1}^p \lambda_{0,i} Z_i Z_i^T,$$
where $Z_1,\dots,Z_p$ are independent $n$-dimensional vectors, and each element of $Z_i$ follows an independent standard normal distribution.

Asymptotic properties of the eigenstructure of the sample covariance

Lemma S1.1 (Lemma A.1 of Wang and Fan, 2017). Suppose that $A_1,\dots,A_p$ are independent $n$-dimensional Gaussian random vectors with mean $0$ and variance $I_n$.
For all $t \ge 0$, the following inequality holds with probability at least $1 - 2\exp(-ct^2)$ for some positive constant $c$:
$$\bar w - \max(\delta,\delta^2) \;\le\; \lambda_n\Big(\frac{1}{p}\sum_{i=1}^p w_i A_i A_i^T\Big) \;\le\; \lambda_1\Big(\frac{1}{p}\sum_{i=1}^p w_i A_i A_i^T\Big) \;\le\; \bar w + \max(\delta,\delta^2),$$
where $\delta = C\sqrt{n/p} + t/\sqrt{p}$, $C$ is a positive constant, $|w_i|$ is bounded for all $i$, and $\bar w = p^{-1}\sum_{i=1}^p w_i$.

The first $n$ eigenvalues of the sample covariance coincide with those of $\hat S$. The matrix $\hat S$ can be decomposed into the
sum of two matrices $A$ and $B$, where $A = \frac{1}{n}\sum_{i=1}^k \lambda_{0,i} Z_i Z_i^T$ and $B = \frac{1}{n}\sum_{i=k+1}^p \lambda_{0,i} Z_i Z_i^T$.

Lemma S1.2 (Asymptotic properties of eigenvalues of the sample covariance). Under model (1), the eigenvalues of the sample covariance satisfy the following for all sufficiently large $n$:
$$\frac{\hat\lambda_j}{\lambda_{0,j}} = \begin{cases} 1 + \bar d d_j + \alpha_j \lambda_{0,j}^{-1}\sqrt{p/n} + \varepsilon_j, & j = 1,\dots,k,\\[2pt] \bar d d_j + \alpha_j \lambda_{0,j}^{-1}\sqrt{p/n}, & j = k+1,\dots,n,\end{cases}$$
where $\alpha_j$ is a constant in the interval $[-C,C]$ for some positive constant $C$, and $\varepsilon_j \lesssim n^{-1/2+\delta}$ for all small $\delta > 0$.

Proof. According to Lemma S1.1, for $t = \sqrt{n}$ the following inequality holds with probability at least $1 - 2\exp(-cn)$:
$$\frac{1}{p-k}\sum_{i=k+1}^p \lambda_{0,i} - C\sqrt{\frac{n}{p}} \;\le\; \lambda_l\Big(\frac{n}{p-k}B\Big) \;\le\; \frac{1}{p-k}\sum_{i=k+1}^p \lambda_{0,i} + C\sqrt{\frac{n}{p}}, \qquad (S1)$$
for some positive constant $C$ and for all sufficiently large $n$. The following inequality is a consequence of Weyl's theorem:
$$\frac{\lambda_j(A)}{\lambda_{0,j}} + \frac{\lambda_n(B)}{\lambda_{0,j}} \;\le\; \frac{\hat\lambda_j}{\lambda_{0,j}} = \frac{\lambda_j(A+B)}{\lambda_{0,j}} \;\le\; \frac{\lambda_j(A)}{\lambda_{0,j}} + \frac{\lambda_1(B)}{\lambda_{0,j}},$$
for $j = 1,\dots,n$. Given the bound in (S1), we obtain
$$\frac{\lambda_j(A)}{\lambda_{0,j}} + \bar d d_j - C\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}} \;\le\; \frac{\hat\lambda_j}{\lambda_{0,j}} \;\le\; \frac{\lambda_j(A)}{\lambda_{0,j}} + \bar d d_j + C\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}},$$
where $\bar d = \frac{1}{p-k}\sum_{i=k+1}^p \lambda_{0,i}$ and $d_j = p/(n\lambda_{0,j})$. Therefore,
$$\frac{\hat\lambda_j}{\lambda_{0,j}} = \frac{\lambda_j(A)}{\lambda_{0,j}} + \bar d d_j + \alpha_j\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}}, \qquad j = 1,\dots,n,$$
for some constant $\alpha_j \in [-C,C]$. According to Lemma A.2 of Wang and Fan (2017), the following asymptotic normality and independence hold:
$$\sqrt{n}\Big(\frac{\lambda_j(A)}{\lambda_{0,j}} - 1\Big) \overset{d}{\to} N(0,2), \qquad j = 1,\dots,k.$$
We therefore obtain the asymptotic normality
$$\sqrt{n}\bigg(\frac{\hat\lambda_j}{\lambda_{0,j}} - \Big(1 + \bar d d_j + \alpha_j\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}}\Big)\bigg) \overset{d}{\to} N(0,2), \qquad j = 1,\dots,k.$$
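The leading-order behaviour in Lemma S1.2 can be checked numerically. The sketch below (with illustrative spike sizes, not the paper's settings) compares the observed ratio $\hat\lambda_j/\lambda_{0,j}$ for the spiked coordinates against the predicted leading term $1 + \bar d d_j$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, k = 400, 200, 2
lam0 = np.ones(p)
lam0[:k] = [50.0, 30.0]                 # strong illustrative spikes
Z = rng.standard_normal((n, p))
Y = Z * np.sqrt(lam0)                   # rows ~ N(0, diag(lam0))
S = Y.T @ Y / n                         # sample covariance
lam_hat = np.sort(np.linalg.eigvalsh(S))[::-1]

dbar = lam0[k:].mean()                  # mean of the non-spiked eigenvalues
d = p / (n * lam0[:k])                  # d_j = p / (n * lambda_{0,j})
predicted_ratio = 1 + dbar * d          # leading term of Lemma S1.2
observed_ratio = lam_hat[:k] / lam0[:k]
```

With these spike sizes the correction $\bar d d_j$ is small but systematic, and the observed ratios should sit within the $O(n^{-1/2})$ fluctuation band around the prediction.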
Since $k$ is constant, for $\delta_n \ll \sqrt{n}$ the following inequality holds for sufficiently large $n$ and $p$:
$$\delta_n\bigg|\frac{\hat\lambda_j}{\lambda_{0,j}} - \Big(1 + \bar d d_j + \alpha_j\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}}\Big)\bigg| < \epsilon, \qquad \forall j = 1,\dots,k.$$
Therefore, we obtain the equality
$$\frac{\hat\lambda_j}{\lambda_{0,j}} = 1 + \varepsilon_j + \bar d d_j + \alpha_j\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}}, \qquad j = 1,\dots,k,$$
for $\varepsilon_j \lesssim n^{-1/2+\delta}$ for all small $\delta > 0$. For $j > k$, $\lambda_j(A) = 0$, which implies
$$\frac{\hat\lambda_j}{\lambda_{0,j}} = \bar d d_j + \alpha_j\lambda_{0,j}^{-1}\sqrt{\frac{p}{n}}. \qquad \square$$

Let $(S,d)$ be a metric space and $K \subseteq S$. For $\epsilon > 0$, the $\epsilon$-covering number of $K$ with respect to the metric $d$, denoted by $N(K,d,\epsilon)$, is the minimum number of closed balls of radius $\epsilon$ (with respect to $d$) required to cover the set $K$. Similarly, the $\epsilon$-packing number of $K$ with respect to $d$, denoted by $M(K,d,\epsilon)$, is the maximum number of points in $K$ such that any two distinct points are at least distance $\epsilon$ apart.

Lemma S1.3 (Covering number of the orthogonal group; Proposition 7 of Szarek, 1982). Let $\tau$ be a unitarily invariant norm, for which $\tau(PAQ) = \tau(A)$ holds for any unitary matrices $P$ and $Q$ and any matrix $A$. Then there exist universal positive constants $c$ and $C$ such that for $\epsilon \in (0, 2\tau(I)]$,
$$(c\,\tau(I)/\epsilon)^d \;\le\; N(O(m),\tau,\epsilon) \;\le\; (C\,\tau(I)/\epsilon)^d,$$
where $d = m(m-1)/2$ and $N(O(m),\tau,\epsilon)$ is the $\epsilon$-covering number of $O(m)$ with respect to the metric $\rho(x,y) = \tau(x-y)$ for $x,y \in O(m)$.

The Grassmann manifold $G_{n,p}$ denotes the set of $n$-dimensional subspaces of $\mathbb{R}^p$. It can be expressed as the quotient space $G_{n,p} = O(p)/(O(n)\times O(p-n))$. Edelman et al. (1998) state that a point in the Grassmann manifold can be represented as an equivalence class
$$[Q] = \Big\{Q\begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix} : Q_1 \in O(n),\ Q_2 \in O(p-n)\Big\},$$
which consists of all orthogonal matrices that span the same subspace as the first $n$ columns of $Q$. Szarek (1982) defines the quotient metric on $G_{n,p}$ induced by the unitarily invariant norm $\tau$ on $O(p)$ as
$$\rho_\tau(H_1,H_2) = \inf\{\tau(I-V) : V \in O(p),\ V H_1 = H_2\},$$
for $H_1, H_2 \in G_{n,p}$.
Therefore, $H_1$ and $H_2$ can be represented as
$$H_1 = \Big\{P_1\begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix} : Q_1 \in O(n),\ Q_2 \in O(p-n)\Big\}, \qquad H_2 = \Big\{P_2\begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix} : Q_1 \in O(n),\ Q_2 \in O(p-n)\Big\},$$
for some $P_1, P_2 \in O(p)$. The quotient metric is then given by
$$\rho_\tau(H_1,H_2) = \inf_{\substack{Q_1,Q_3 \in O(n)\\ Q_2,Q_4 \in O(p-n)}} \tau\Big(I - P_2\begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix}\begin{pmatrix}Q_3 & 0\\ 0 & Q_4\end{pmatrix}P_1^T\Big) = \inf_{\substack{Q_1 \in O(n)\\ Q_2 \in O(p-n)}} \tau\Big(P_1 - P_2\begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix}\Big).$$

Lemma S1.4 (Covering number of the Grassmann manifold; Proposition 8 of Szarek, 1982). For every unitarily invariant norm $\tau$, there exist universal positive constants $m$ and $M$ such that for $\epsilon \in (0, D_\tau]$,
$$(m D_\tau/\epsilon)^d \;\le\; N(G_{n,p},\rho_\tau,\epsilon) \;\le\; (M D_\tau/\epsilon)^d,$$
where $d = n(p-n) = \dim G_{n,p}$ and $D_\tau$ is the diameter of $G_{n,p}$ with respect to the metric $\rho_\tau$. Specifically, for the Frobenius norm, the diameter
is given by $D_\tau = 2\sqrt{\min(n,p-n)}$.

Lemma S1.5 (Probability of a subset of the orthogonal group). Consider the following subset of the orthogonal group $O(p)$:
$$B_\epsilon = \Big\{\Gamma \in O(p) : \inf_{\substack{Q_1 \in O(n)\\ Q_2 \in O(p-n)}} \Big\|\begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F < \epsilon\Big\}.$$
Then the probability measure of $B_\epsilon$ satisfies the lower bound
$$P(B_\epsilon) := \int_{B_\epsilon} [d\Gamma] \;\ge\; \Big(\frac{2M\sqrt{\min(n,p-n)}}{\epsilon}\Big)^{-n(p-n)},$$
for some positive constant $M$.

Proof. Let $S_1,\dots,S_N$, with $N = N(G_{n,p},\rho_{\|\cdot\|_F},\epsilon)$, form an $\epsilon$-covering of $G_{n,p}$ with respect to the distance $\rho_{\|\cdot\|_F}$. Then
$$1 = P\Big(\bigcup_{i=1}^{N} \{\Gamma \in O(p) : \rho_{\|\cdot\|_F}([\Gamma],S_i) < \epsilon\}\Big) \le \sum_{i=1}^{N} P\big(\{\Gamma \in O(p) : \rho_{\|\cdot\|_F}([\Gamma],S_i) < \epsilon\}\big) = N \cdot P\big(\{\Gamma \in O(p) : \rho_{\|\cdot\|_F}([\Gamma],[I_p]) < \epsilon\}\big),$$
where $[\Gamma] := \{\Gamma \begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix} : Q_1 \in O(n),\ Q_2 \in O(p-n)\}$ for $\Gamma \in O(p)$. We obtain the lower bound on the probability measure of $B_\epsilon$:
$$P(B_\epsilon) = P\big(\{\Gamma \in O(p) : \rho_{\|\cdot\|_F}([\Gamma],[I_p]) < \epsilon\}\big) \;\ge\; \frac{1}{N(G_{n,p},\rho_{\|\cdot\|_F},\epsilon)} \;\ge\; \Big(\frac{2M\sqrt{\min(n,p-n)}}{\epsilon}\Big)^{-n(p-n)},$$
where $M$ is a positive constant.
The last inequality follows from Lemma S1.4. $\square$

Lemma S1.6 (Probability of a subset of the orthogonal group). Consider the following subset of the orthogonal group $O(p)$:
$$C_\epsilon = \Big\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-n)} \Big\|\begin{pmatrix}I_n & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F < \epsilon\Big\}.$$
Then the probability measure of $C_\epsilon$ satisfies the lower bound
$$P(C_\epsilon) := \int_{C_\epsilon} [d\Gamma] \;\ge\; \Big(\frac{c\sqrt{n}}{\epsilon}\Big)^{-n(p-\frac{n}{2}-\frac{1}{2})}, \qquad \epsilon \le 2\sqrt{n},$$
for some positive constant $c$.

Proof. Let $S_1,\dots,S_{N(O(n),\|\cdot\|_F,\epsilon_1)}$ form an $\epsilon_1$-covering of $O(n)$ with respect to the distance $\|\cdot\|_F$, for $\epsilon_1 \in (0,\epsilon)$. For every $Q_1 \in O(n)$ there exists some $S_i$ such that $\|S_i - Q_1\|_F < \epsilon_1$. By the triangle inequality, we obtain
$$\Big\|\begin{pmatrix}S_i & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F \le \Big\|\begin{pmatrix}S_i & 0\\ 0 & Q_2\end{pmatrix} - \begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix}\Big\|_F + \Big\|\begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F. \qquad (S2)$$
Taking the infimum over $Q_2 \in O(p-n)$ on both sides of (S2) gives
$$\inf_{Q_2 \in O(p-n)}\Big\|\begin{pmatrix}S_i & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F \le \inf_{Q_2 \in O(p-n)}\Big\|\begin{pmatrix}S_i & 0\\ 0 & Q_2\end{pmatrix} - \begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix}\Big\|_F + \inf_{Q_2 \in O(p-n)}\Big\|\begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F. \qquad (S3)$$
We obtain the following lower bound on the probability measure of $C_\epsilon$:
$$\begin{aligned}P(C_\epsilon) &= P\Big(\Big\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-n)}\Big\|\begin{pmatrix}I_n & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F < \epsilon\Big\}\Big)\\ &\ge \frac{1}{N(O(n),\|\cdot\|_F,\epsilon_1)}\, P\Big(\bigcup_{i=1}^{N(O(n),\|\cdot\|_F,\epsilon_1)}\Big\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-n)}\Big\|\begin{pmatrix}S_i & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F < \epsilon\Big\}\Big)\\ &\ge \frac{1}{N(O(n),\|\cdot\|_F,\epsilon_1)}\, P\Big(\Big\{\Gamma \in O(p) : \inf_{\substack{Q_1 \in O(n)\\ Q_2 \in O(p-n)}}\Big\|\begin{pmatrix}Q_1 & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F < \epsilon - \epsilon_1\Big\}\Big)\\ &= \frac{P(B_{\epsilon-\epsilon_1})}{N(O(n),\|\cdot\|_F,\epsilon_1)}.\end{aligned}$$
The first inequality holds by the fact that
$$P(C_\epsilon) = P\Big(\Big\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-n)}\Big\|\begin{pmatrix}S_i & 0\\ 0 & Q_2\end{pmatrix} - \Gamma\Big\|_F < \epsilon\Big\}\Big), \qquad i = 1,\dots,N(O(n),\|\cdot\|_F,\epsilon_1),$$
and the second inequality holds by (S3). Therefore, by applying Lemma S1.3 and Lemma S1.4, we obtain
$$P(C_\epsilon) \ge \Big(\frac{2M\sqrt{\min(n,p-n)}}{\epsilon-\epsilon_1}\Big)^{-n(p-n)}\Big(\frac{C\sqrt{n}}{\epsilon_1}\Big)^{-n(n-1)/2} \ge \Big(\frac{2M\sqrt{n}}{\epsilon-\epsilon_1}\Big)^{-n(p-n)}\Big(\frac{C\sqrt{n}}{\epsilon_1}\Big)^{-n(n-1)/2} = \Big(\frac{4M\sqrt{n}}{\epsilon}\Big)^{-n(p-n)}\Big(\frac{2C\sqrt{n}}{\epsilon}\Big)^{-n(n-1)/2} \ge \Big(\frac{\max(4M,2C)\sqrt{n}}{\epsilon}\Big)^{-n(p-\frac{n}{2}-\frac{1}{2})},$$
where we set $\epsilon_1 = \epsilon/2$, and $M$ and $C$ are positive constants. As a result, for all sufficiently large $n$, we obtain the lower bound
$$P(C_\epsilon) \ge \Big(\frac{c\sqrt{n}}{\epsilon}\Big)^{-n(p-\frac{n}{2}-\frac{1}{2})},$$
for some positive constant $c$. $\square$

We define $c_i$ as the $(i,i)$ element of $\Gamma^T(hI_p + W)\Gamma$, where
$$W = \mathrm{diag}(n\hat\lambda_1,\dots,n\hat\lambda_n,0,\dots,0),$$
and $\hat\lambda_i$ is the $i$th eigenvalue of the sample covariance $S$. By Lemma S1.2, we can express $c_i$ as
$$\frac{c_i}{n} = \frac{h}{n} + \sum_{j=1}^n \Gamma_{ji}^2\hat\lambda_j = \frac{h}{n} + \sum_{j=1}^k \Gamma_{ji}^2\Big\{(1+\varepsilon_j)\lambda_{0,j} + \Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}}\Big)\Big\} + \sum_{j=k+1}^n \Gamma_{ji}^2\Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}}\Big).$$
Next, we establish two lemmas that provide upper and lower bounds for $c_i$.
Lemma S1.7 (Upper bound of $c_i$). Under conditions A1–A4 in Section 3, the quantity $c_i$ satisfies the following upper bound on $C_\epsilon$ for all sufficiently large $n$:
$$\frac{c_i}{n} \le \begin{cases}\Big((1+\varepsilon_i)\lambda_{0,i} + \Big(\dfrac{\bar d p}{n} + \alpha_i\sqrt{\dfrac{p}{n}}\Big) + \dfrac{h}{n}\Big)\Big(1 + \dfrac{4\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Big), & i = 1,\dots,k,\\[6pt] \Big(\dfrac{\bar d p}{n} + \alpha_i\sqrt{\dfrac{p}{n}} + \dfrac{h}{n}\Big)\Big(1 + \dfrac{4\epsilon^2 n\lambda_{0,1}}{\bar d p}\Big), & i = k+1,\dots,n.\end{cases}$$
Furthermore, the following inequality also holds
on $C_\epsilon$:
$$\prod_{i=n+1}^p \frac{c_i}{n} \le \Big(\frac{h}{n}\Big)^{p-n}\Big[1 + \frac{2\epsilon^2 n}{(p-n)h}\lambda_{0,1}\Big]^{p-n},$$
where $\varepsilon_{\max} = \max_{1\le j\le k}\varepsilon_j$, for all sufficiently large $n$.

Proof. For $\Gamma \in C_\epsilon$, the following inequality holds for $i = 1,\dots,n$:
$$(\Gamma_{ii}-1)^2 + \sum_{j\neq i}\Gamma_{ji}^2 < \epsilon^2.$$
Since $\sum_{j=1}^p \Gamma_{ji}^2 = 1$, it follows that $1 - \Gamma_{ii}^2 < \epsilon^2$, which implies $\Gamma_{ii}^2 > 1 - \epsilon^2$ for $i = 1,\dots,n$.

For $i = 1,\dots,k$, we have the bound
$$\begin{aligned}\frac{c_i}{n} &= \frac{h}{n} + \sum_{j=1}^k (1+\varepsilon_j)\Gamma_{ji}^2\lambda_{0,j} + \sum_{j=1}^n\Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}}\Big)\Gamma_{ji}^2\\ &\le \frac{h}{n} + (1+\varepsilon_i)\lambda_{0,i}\Gamma_{ii}^2 + (1+\varepsilon_{\max})\lambda_{0,1}\sum_{j\neq i}\Gamma_{ji}^2 + \Big(\frac{\bar d p}{n} + \alpha_i\sqrt{\frac{p}{n}}\Big)\Gamma_{ii}^2 + \Big(\frac{\bar d p}{n} + C\sqrt{\frac{p}{n}}\Big)\sum_{j\neq i}\Gamma_{ji}^2\\ &\le \frac{h}{n} + (1+\varepsilon_i)\lambda_{0,i} + \Big(\frac{\bar d p}{n} + \alpha_i\sqrt{\frac{p}{n}}\Big) + \Big((1+\varepsilon_{\max})\lambda_{0,1} + \frac{\bar d p}{n} + C\sqrt{\frac{p}{n}}\Big)(1-\Gamma_{ii}^2)\\ &\le \frac{h}{n} + (1+\varepsilon_i)\lambda_{0,i} + \Big(\frac{\bar d p}{n} + \alpha_i\sqrt{\frac{p}{n}}\Big) + 2\epsilon^2\lambda_{0,1}\\ &\le \Big((1+\varepsilon_i)\lambda_{0,i} + \Big(\frac{\bar d p}{n} + \alpha_i\sqrt{\frac{p}{n}}\Big) + \frac{h}{n}\Big)\Big(1 + \frac{4\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Big),\end{aligned}$$
for all sufficiently large $n$. The third inequality follows from the fact that $\lambda_{0,1} \gtrsim \varepsilon_{\max}\lambda_{0,1} + \frac{\bar d p}{n} + C\sqrt{p/n}$. The fourth inequality follows from $(1+\varepsilon_i)\lambda_{0,i} + \big(\frac{\bar d p}{n} + \alpha_i\sqrt{p/n}\big) + \frac{h}{n} \ge \frac{\lambda_{0,k}}{2}$ for all sufficiently large $n$.
For $i = k+1,\dots,n$, we have the bound
$$\begin{aligned}\frac{c_i}{n} &= \frac{h}{n} + \sum_{j=1}^k (1+\varepsilon_j)\Gamma_{ji}^2\lambda_{0,j} + \sum_{j=1}^n\Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}}\Big)\Gamma_{ji}^2\\ &\le \frac{h}{n} + (1+\varepsilon_{\max})\lambda_{0,1}\sum_{j\neq i}\Gamma_{ji}^2 + \Big(\frac{\bar d p}{n} + \alpha_i\sqrt{\frac{p}{n}}\Big)\Gamma_{ii}^2 + \Big(\frac{\bar d p}{n} + C\sqrt{\frac{p}{n}}\Big)\sum_{j\neq i}\Gamma_{ji}^2\\ &\le \frac{h}{n} + \Big(\frac{\bar d p}{n} + \alpha_i\sqrt{\frac{p}{n}}\Big) + \Big((1+\varepsilon_{\max})\lambda_{0,1} + \frac{\bar d p}{n} + C\sqrt{\frac{p}{n}}\Big)(1-\Gamma_{ii}^2)\\ &\le \frac{h}{n} + \Big(\frac{\bar d p}{n} + \alpha_i\sqrt{\frac{p}{n}}\Big) + 2\lambda_{0,1}\epsilon^2\\ &\le \Big(\frac{\bar d p}{n} + \alpha_i\sqrt{\frac{p}{n}} + \frac{h}{n}\Big)\Big(1 + \frac{4\epsilon^2 n\lambda_{0,1}}{\bar d p}\Big),\end{aligned}$$
for all sufficiently large $n$. The third inequality follows from the fact that $\lambda_{0,1} \gtrsim \varepsilon_{\max}\lambda_{0,1} + \frac{\bar d p}{n} + C\sqrt{p/n}$. The fourth inequality follows from $\frac{\bar d p}{n} + \alpha_i\sqrt{p/n} + \frac{h}{n} \ge \frac{1}{2}\frac{\bar d p}{n}$ for all sufficiently large $n$.

For $\Gamma \in C_\epsilon$, the following bound holds for $j = 1,\dots,n$: $\sum_{i=n+1}^p \Gamma_{ji}^2 < \epsilon^2$. Therefore, for $i = n+1,\dots,p$, we obtain
$$\begin{aligned}\sum_{i=n+1}^p \frac{c_i}{n} &= \frac{h(p-n)}{n} + \sum_{i=n+1}^p\sum_{j=1}^k (1+\varepsilon_j)\Gamma_{ji}^2\lambda_{0,j} + \sum_{i=n+1}^p\sum_{j=1}^n\Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}}\Big)\Gamma_{ji}^2\\ &\le \frac{h(p-n)}{n} + (1+\varepsilon_{\max})\lambda_{0,1}\sum_{i=n+1}^p\sum_{j=1}^k \Gamma_{ji}^2 + \Big(\frac{\bar d p}{n} + C\sqrt{\frac{p}{n}}\Big)\sum_{i=n+1}^p\sum_{j=1}^n \Gamma_{ji}^2\\ &\le \frac{h(p-n)}{n} + \epsilon^2(1+\varepsilon_{\max})\lambda_{0,1} + \epsilon^2\Big(\frac{\bar d p}{n} + C\sqrt{\frac{p}{n}}\Big)\\ &\le \frac{h(p-n)}{n} + 2\epsilon^2\lambda_{0,1},\end{aligned}$$
where $\varepsilon_{\max} = \max_{1\le j\le k}\varepsilon_j$, for all sufficiently large $n$. The last inequality follows from $\lambda_{0,1} \gtrsim \varepsilon_{\max}\lambda_{0,1} + \frac{\bar d p}{n} + C\sqrt{p/n}$.
By the arithmetic–geometric mean inequality, we obtain
$$\prod_{i=n+1}^p \frac{c_i}{n} \le \Big(\frac{1}{p-n}\sum_{i=n+1}^p \frac{c_i}{n}\Big)^{p-n} \le \Big[\frac{h}{n} + \frac{2\epsilon^2}{p-n}\lambda_{0,1}\Big]^{p-n} = \Big(\frac{h}{n}\Big)^{p-n}\Big[1 + \frac{2\epsilon^2 n}{(p-n)h}\lambda_{0,1}\Big]^{p-n}. \qquad \square$$

Lemma S1.8 (Lower bound of $c_i$). Under conditions A1–A4 in Section 3, $c_i$ has the following lower bound:
$$\frac{c_i}{n} \ge \prod_{j=1}^k\Big((1+\varepsilon_j)\lambda_{0,j} + \Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}}\Big) + \frac{h}{n}\Big)^{\Gamma_{ji}^2}\cdot\prod_{j=k+1}^n\Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}} + \frac{h}{n}\Big)^{\Gamma_{ji}^2}\cdot\prod_{j=n+1}^p\Big(\frac{h}{n}\Big)^{\Gamma_{ji}^2}.$$

Proof. By the arithmetic–geometric mean inequality,
$$\frac{c_i}{n} = \sum_{j=1}^k \Gamma_{ji}^2\Big\{(1+\varepsilon_j)\lambda_{0,j} + \Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}}\Big) + \frac{h}{n}\Big\} + \sum_{j=k+1}^n \Gamma_{ji}^2\Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}} + \frac{h}{n}\Big) + \sum_{j=n+1}^p \Gamma_{ji}^2\,\frac{h}{n} \ge \prod_{j=1}^k\Big((1+\varepsilon_j)\lambda_{0,j} + \Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}}\Big) + \frac{h}{n}\Big)^{\Gamma_{ji}^2}\cdot\prod_{j=k+1}^n\Big(\frac{\bar d p}{n} + \alpha_j\sqrt{\frac{p}{n}} + \frac{h}{n}\Big)^{\Gamma_{ji}^2}\cdot\prod_{j=n+1}^p\Big(\frac{h}{n}\Big)^{\Gamma_{ji}^2}. \qquad \square$$

Lemma S1.9 (Hoffman–Wielandt theorem; Stewart and Sun, 1990). For $n\times n$ normal matrices $A$ and $\hat A$, the following inequality holds:
$$\min_\pi \sum_i \big|\hat\lambda_{\pi(i)} - \lambda_i\big| \le \big\|\hat A - A\big\|_F,$$
where $\pi$ is a permutation of the set $[n] = \{1,2,\dots,n\}$ and $\min_\pi$ denotes the minimum over all permutations of $[n]$.
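Lemma S1.9 is stated in an β„“1 form; the classical Hoffman–Wielandt inequality bounds the squared spectral mismatch by the squared Frobenius distance. The sketch below numerically checks that classical squared form for symmetric matrices, where sorting both spectra realizes the optimal matching.

```python
import numpy as np

def hw_gap(A, B):
    # Classical Hoffman-Wielandt for symmetric matrices:
    #   sum_i (lambda_i(A) - lambda_i(B))^2 <= ||A - B||_F^2,
    # with both spectra sorted in the same order.
    la = np.sort(np.linalg.eigvalsh(A))
    lb = np.sort(np.linalg.eigvalsh(B))
    lhs = np.sum((la - lb) ** 2)
    rhs = np.linalg.norm(A - B, "fro") ** 2
    return lhs, rhs

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                      # random symmetric matrix
E = rng.standard_normal((6, 6))
B = A + 0.1 * (E + E.T) / 2            # small symmetric perturbation
lhs, rhs = hw_gap(A, B)
```

This is the mechanism used in the proof of Lemma S1.10: a perturbation of the identity moves the eigenvalues by no more in aggregate than the Frobenius size of the perturbation.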
Lemma S1.10 (Block matrix approximation). For all $\Gamma\in O(p)$, if $\|\Gamma_{1:n,\,n+1:p}\|_F<\eta$, then it follows that
$$\inf_{Q_1\in O(n),\,Q_2\in O(p-n)}\big\|\mathrm{diag}(Q_1,Q_2)-\Gamma\big\|_F<2\eta,$$
for all $\eta\in(0,1)$.

Proof. Consider the following block form of $\Gamma$:
$$\Gamma=\begin{pmatrix}\Gamma_{11}&\Gamma_{12}\\ \Gamma_{21}&\Gamma_{22}\end{pmatrix},$$
where $\Gamma_{11}\in\mathbb R^{n\times n}$ and $\Gamma_{22}\in\mathbb R^{(p-n)\times(p-n)}$. Let $A=I_{p-n}$, $\tilde A=I_{p-n}-\Gamma_{12}^{\top}\Gamma_{12}$, and $B_\eta=\{\Gamma\in O(p):\|\Gamma_{12}\|_F<\eta\}$. For all $\Gamma\in B_\eta$, the following inequality holds by Lemma S1.9:
$$\sum_{i=1}^{p-n}\big|\tilde\lambda_i-1\big|^2\le\big\|\Gamma_{12}^{\top}\Gamma_{12}\big\|_F^2<\eta^4,$$
where $\tilde\lambda_i$ is the $i$-th eigenvalue of $\tilde A$. Consider the singular value decomposition $\Gamma_{22}=UDV^{\top}$, where $U,V\in O(p-n)$ and $D=\mathrm{diag}(\sqrt{\tilde\lambda_1},\dots,\sqrt{\tilde\lambda_{p-n}})$; note that $\Gamma_{22}^{\top}\Gamma_{22}=I_{p-n}-\Gamma_{12}^{\top}\Gamma_{12}=\tilde A$, since the last $p-n$ columns of $\Gamma$ are orthonormal. Then,
|
https://arxiv.org/abs/2505.20668v1
|
the following holds for all $\Gamma\in B_\eta$:
$$\inf_{Q\in O(p-n)}\|\Gamma_{22}-Q\|_F^2=\inf_{Q\in O(p-n)}\|D-Q\|_F^2=\inf_{Q\in O(p-n)}\sum_{i=1}^{p-n}\big[(\sqrt{\tilde\lambda_i}-q_i)^2+(1-q_i^2)\big]=\sum_{i=1}^{p-n}(\sqrt{\tilde\lambda_i}-1)^2$$
$$=\sum_{i=1}^{p-n}\frac{(\tilde\lambda_i-1)^2}{(\sqrt{\tilde\lambda_i}+1)^2}\le\sum_{i=1}^{p-n}(\tilde\lambda_i-1)^2<\eta^4\le\eta^2,$$
where $q_i$ is the $(i,i)$ element of $Q$ (each summand $(\sqrt{\tilde\lambda_i}-q_i)^2+(1-q_i^2)$ is minimized at $q_i=1$). Likewise we attain $\inf_{Q\in O(n)}\|\Gamma_{11}-Q\|_F^2\le\eta^2$. For $\Gamma\in B_\eta$, we obtain the inequality:
$$\inf_{Q_1\in O(n),\,Q_2\in O(p-n)}\big\|\Gamma-\mathrm{diag}(Q_1,Q_2)\big\|_F^2=\inf_{Q_1\in O(n)}\|Q_1-\Gamma_{11}\|_F^2+\inf_{Q_2\in O(p-n)}\|Q_2-\Gamma_{22}\|_F^2+\|\Gamma_{12}\|_F^2+\|\Gamma_{21}\|_F^2\le 4\eta^2. \qquad\blacksquare$$

Lemma S1.11. Consider the following subset of $O(p)$:
$$A_\epsilon=\Big\{\Gamma\in O(p):\inf_{Q_1\in O(k),\,Q_2\in O(p-k)}\big\|\Gamma-\mathrm{diag}(Q_1,Q_2)\big\|_F<\epsilon\Big\}.$$
Under the conditions A1–A5, suppose that $\epsilon^2a_n\ll np$. Then, the following inequality holds:
$$\frac{\int_{A_\epsilon^c}\prod_{i=1}^pc_i^{-a_i-n/2+1}(d\Gamma)}{\int\prod_{i=1}^pc_i^{-a_i-n/2+1}(d\Gamma)}\lesssim\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2a_n}.$$
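The $2\eta$ bound of Lemma S1.10 can be illustrated on the smallest nontrivial case. In the sketch below (a hypothetical example, not from the paper) we take $p=2$, $n=1$, so $\Gamma$ is a planar rotation, the off-diagonal block is the scalar $\sin\theta$, and $O(1)\times O(1)=\{\pm1\}\times\{\pm1\}$ makes the infimum a minimum over four sign matrices.

```python
import math

def dist_to_block_diag(theta):
    # Gamma = 2x2 rotation; with p = 2, n = 1 the off-diagonal block Gamma_12
    # is sin(theta), and the block-diagonal orthogonal matrices are diag(+-1, +-1).
    c, s = math.cos(theta), math.sin(theta)
    gamma = [[c, -s], [s, c]]
    best = float("inf")
    for q1 in (1.0, -1.0):
        for q2 in (1.0, -1.0):
            q = [[q1, 0.0], [0.0, q2]]
            d = math.sqrt(sum((gamma[i][j] - q[i][j]) ** 2
                              for i in range(2) for j in range(2)))
            best = min(best, d)
    return best

for theta in (0.05, 0.1, 0.3, 0.7):
    eta = abs(math.sin(theta))            # ||Gamma_{1:n, n+1:p}||_F
    assert dist_to_block_diag(theta) < 2 * eta   # the Lemma S1.10 bound
```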
Proof. Consider the following subset of the orthogonal group $O(p)$:
$$C_\epsilon=\Big\{\Gamma\in O(p):\inf_{Q_2\in O(p-n)}\big\|\Gamma-\mathrm{diag}(I_n,Q_2)\big\|_F<\epsilon\Big\}.$$
Define the following quantities:
$$K_i=(1+\delta_i)\lambda_{0,i}+\Big(\frac{\bar dp}{n}+\alpha_i\sqrt{\tfrac pn}\Big)+\frac hn,\quad i=1,\dots,k,\qquad L_i=\frac{\bar dp}{n}+\alpha_i\sqrt{\tfrac pn}+\frac hn,\quad i=k+1,\dots,n,\qquad M=\frac hn,$$
$$r_n=4\epsilon_2^2\frac{\lambda_{0,1}}{\lambda_{0,k}},\qquad s_n=4\epsilon_2^2\frac{n\lambda_{0,1}}{\bar dp},\qquad t_n=\frac{2\epsilon_2^2n}{(p-n)h}\lambda_{0,1}.$$
Using Lemma S1.7 and Lemma S1.8, we obtain the following inequality:
$$\frac{\int_{A_\epsilon^c}\prod_ic_i^{-a_i-n/2+1}(d\Gamma)}{\int_{C_{\epsilon_2}}\prod_ic_i^{-a_i-n/2+1}(d\Gamma)}\le\frac{P(A_\epsilon^c)\cdot\sup_{A_\epsilon^c}\prod_ic_i^{-a_i-n/2+1}}{P(C_{\epsilon_2})\cdot\inf_{C_{\epsilon_2}}\prod_ic_i^{-a_i-n/2+1}}\le\frac{1}{P(C_{\epsilon_2})}\cdot\frac{\sup_{C_{\epsilon_2}}\prod_ic_i^{\,a_i+n/2-1}}{\inf_{A_\epsilon^c}\prod_ic_i^{\,a_i+n/2-1}}$$
$$\le\frac{1}{P(C_{\epsilon_2})}\cdot\frac{\prod_{j=1}^k\big[K_j(1+r_n)\big]^{a_j+n/2-1}\cdot\prod_{j=k+1}^n\big[L_j(1+s_n)\big]^{a_n+n/2-1}\cdot\big[M(1+t_n)\big]^{(p-n)(a_p+n/2-1)}}{\inf_{A_\epsilon^c}\prod_{i=1}^p\Big[\prod_{j=1}^kK_j^{\Gamma_{ji}^2}\cdot\prod_{j=k+1}^nL_j^{\Gamma_{ji}^2}\cdot\prod_{j=n+1}^pM^{\Gamma_{ji}^2}\Big]^{a_i+n/2-1}}$$
$$=\frac{(1+r_n)^{\sum_{j=1}^k(a_j+n/2-1)}(1+s_n)^{(n-k)(a_n+n/2-1)}(1+t_n)^{(p-n)(a_p+n/2-1)}}{P(C_{\epsilon_2})}\times\sup_{A_\epsilon^c}\frac{\prod_{j=1}^kK_j^{a_j+n/2-1}\cdot\prod_{j=k+1}^nL_j^{a_n+n/2-1}\cdot M^{(p-n)(a_p+n/2-1)}}{\prod_{i=1}^p\Big[\prod_{j=1}^kK_j^{\Gamma_{ji}^2}\cdot\prod_{j=k+1}^nL_j^{\Gamma_{ji}^2}\cdot\prod_{j=n+1}^pM^{\Gamma_{ji}^2}\Big]^{a_i+n/2-1}}.$$
Now, we focus on bounding the term
$$\sup_{A_\epsilon^c}\frac{\prod_{j=1}^kK_j^{a_j+n/2-1}\cdot\prod_{j=k+1}^nL_j^{a_n+n/2-1}\cdot M^{(p-n)(a_p+n/2-1)}}{\prod_{j=1}^kK_j^{\sum_{i=1}^p\Gamma_{ji}^2(a_i+n/2-1)}\cdot\prod_{j=k+1}^nL_j^{\sum_{i=1}^p\Gamma_{ji}^2(a_i+n/2-1)}\cdot M^{\sum_{j=n+1}^p\sum_{i=1}^p\Gamma_{ji}^2(a_i+n/2-1)}}.$$
Splitting each exponent $\sum_{i=1}^p\Gamma_{ji}^2(a_i+n/2-1)$ over the index ranges $i\le k$, $k<i\le n$ and $i>n$, using that the factor
$$\prod_{j=1}^kK_j^{\sum_{i=1}^k\Gamma_{ji}^2(a_i+n/2-1)+\sum_{i=k+1}^p\Gamma_{ji}^2(a_k+n/2-1)}$$
is minimized when $\Gamma_{jj}^2=1$ for $j=1,\dots,k$ (the second inequality below), and using the ordering conditions $K_1\ge\cdots\ge K_k$ and $L_{k+1}\ge\cdots\ge L_n$ (the third inequality below) together with $K_k,L_{k+1}>1$, $M<1$ and $a_p\ge a_n$, the supremum is bounded by
$$\sup_{A_\epsilon^c}\Big[\Big(\frac{L_n}{K_k}\Big)^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_n-a_k)}\cdot L_n^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_k+n/2-1)}\cdot M^{-k(a_1+n/2-1)}\Big].$$
Since Lemma S1.10 implies $\|\Gamma_{1:n,\,n+1:p}\|_F>\epsilon/2$ on $A_\epsilon^c$, the exponent $\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2$ is bounded below by a constant multiple of $\epsilon^2$ on $A_\epsilon^c$.
Next, we derive an upper bound for the remaining term:
$$(1+r_n)^{\sum_{i=1}^k(a_i+n/2-1)}(1+s_n)^{(n-k)(a_n+n/2-1)}(1+t_n)^{(p-k)(a_p+n/2-1)}\lesssim\exp\Big(\epsilon_2^2\frac{\lambda_{0,1}}{\lambda_{0,k}}a_k\Big)\cdot\exp\Big(4\epsilon_2^2\frac{n\lambda_{0,1}}{\bar dp}\,na_n\Big)\cdot\exp\Big(\frac{2\epsilon_2^2n}{(p-n)h}\lambda_{0,1}\,pa_p\Big),$$
using the bound $\log(1+x)\le x$ for all small $x>0$. Therefore, we obtain the final upper bound
$$\frac{\int_{A_\epsilon^c}\prod c_i^{-a_i-n/2+1}(d\Gamma)}{\int_{C_{\epsilon_2}}\prod c_i^{-a_i-n/2+1}(d\Gamma)}\le\frac{1}{P(C_{\epsilon_2})}\cdot\exp\Big(\epsilon_2^2\frac{\lambda_{0,1}}{\lambda_{0,k}}a_k\Big)\cdot\exp\Big(4\epsilon_2^2\frac{n\lambda_{0,1}}{\bar dp}na_n\Big)\cdot\exp\Big(\frac{2\epsilon_2^2n}{(p-n)h}\lambda_{0,1}pa_p\Big)$$
$$\times\sup_{A_\epsilon^c}\Big[\Big(\frac{L_n}{K_k}\Big)^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_n-a_k)}\cdot L_n^{\sum_{j=1}^k\sum_{i=k+1}^p\Gamma_{ji}^2(a_k+n/2-1)}\cdot M^{-k(a_1+n/2-1)}\Big]$$
$$\lesssim\Big(\frac{c}{\epsilon_2}\Big)^{np-n^2}\cdot\exp\Big(\frac{n^2\epsilon_2^2\lambda_{0,1}a_n}{p}\Big)\cdot\exp\Big(\frac{n\epsilon_2^2\lambda_{0,1}a_p}{h}\Big)\cdot\Big(\frac{L_n}{K_k}\Big)^{\sum_{i=1}^k\sum_{j=k+1}^n\Gamma_{ji}^2a_n}\lesssim\Big(\frac{K_k}{L_n}\Big)^{-\epsilon^2a_n/4}\lesssim\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2a_n},$$
where $\epsilon_2^2\asymp\frac{h}{pa_p\lambda_{0,1}}\wedge\frac{p^2}{n\lambda_{0,1}a_n}$, and $\epsilon^2a_n\ll np$. $\blacksquare$

Lemma S1.12. Suppose that $a_1<\cdots<a_n$ and $b_1>\cdots>b_n$, and let $\sigma$ be a permutation of the set $[n]=\{1,\dots,n\}$. If $\sigma$ is not the identity permutation, then there exists an index $i\in\{1,\dots,n\}$ such that $\sigma(i)\ne i$. In this case, the following inequality holds:
$$\sum_{l=1}^na_lb_{\sigma(l)}\ge\sum_{l=1}^na_lb_l+\min_{l<n}(a_{l+1}-a_l)\cdot\min_{l<n}(b_l-b_{l+1}).$$
Proof. Let $i$ be the smallest integer which satisfies $\sigma(i)\ne i$; then $\sigma(i)=j>i$. Furthermore, there exists $k>i$ such that $\sigma(k)=i$. Now, we define a new permutation $\tilde\sigma$ as follows:
$$\tilde\sigma(l)=\begin{cases}\sigma(l)&\text{for }l\ne i,k,\\ i&\text{for }l=i,\\ j&\text{for }l=k.\end{cases}$$
Using the newly defined permutation $\tilde\sigma$, we have
$$\sum_{l=1}^na_lb_{\sigma(l)}=\sum_{l=1}^na_lb_{\tilde\sigma(l)}+(a_ib_j+a_kb_i)-(a_ib_i+a_kb_j)=\sum_{l=1}^na_lb_{\tilde\sigma(l)}+(a_k-a_i)(b_i-b_j).$$
Since the rearrangement inequality guarantees that
$$\sum_{l=1}^na_lb_{\tilde\sigma(l)}\ge\sum_{l=1}^na_lb_l,$$
it follows that
$$\sum_{l=1}^na_lb_{\sigma(l)}\ge\sum_{l=1}^na_lb_l+(a_k-a_i)(b_i-b_j).$$
For any non-identity permutation $\sigma$, we obtain the following inequality:
$$\sum_{l=1}^na_lb_{\sigma(l)}\ge\sum_{l=1}^na_lb_l+\inf_{i<j,\,i<k}(a_k-a_i)(b_i-b_j)\ge\sum_{l=1}^na_lb_l+\min_{l<n}(a_{l+1}-a_l)\cdot\min_{l<n}(b_l-b_{l+1}). \qquad\blacksquare$$

Lemma S1.13. Assume that conditions A1–A5 hold. Define the sets as follows:
$$D_\eta=\Big\{\Gamma\in O(p):\inf_{Q_2\in O(p-k)}\big\|\mathrm{diag}(I_k,Q_2)-\Gamma\big\|_F<\eta\Big\},\qquad E_\eta=\Big\{\Gamma\in O(p):\inf_{Q_2\in O(p-k)}\big\|\mathrm{diag}(S,Q_2)-\Gamma\big\|_F<\eta\Big\},$$
where $S\in O(k)$ has non-negative diagonal elements and satisfies $\|I_k-S\|_F\ge\epsilon$. If $\eta\gg\lambda_{0,k}/\lambda_{0,1}$, then the following inequality holds:
$$\frac{\sup_{D_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}\lesssim\frac{\exp\big(n\eta^2\tilde\lambda_1/\tilde\lambda_k\big)}{\exp\big(\frac{\epsilon}{2\sqrt k}\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log(\tilde\lambda_l/\tilde\lambda_{l+1})\big)}.$$
Proof.
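Before continuing, the discrete inequality of Lemma S1.12 can be verified exhaustively for small $n$, since the statement quantifies over all non-identity permutations. The sketch below (with arbitrary illustrative sequences, not from the paper) brute-forces every permutation with `itertools.permutations`.

```python
import itertools

def check_lemma(a, b):
    # a strictly increasing, b strictly decreasing; verify Lemma S1.12's bound
    # sum a_l b_{sigma(l)} >= sum a_l b_l + min gap(a) * min gap(b)
    # for every non-identity permutation sigma.
    n = len(a)
    base = sum(x * y for x, y in zip(a, b))
    gap = (min(a[l + 1] - a[l] for l in range(n - 1))
           * min(b[l] - b[l + 1] for l in range(n - 1)))
    for sigma in itertools.permutations(range(n)):
        if sigma == tuple(range(n)):
            continue
        val = sum(a[l] * b[sigma[l]] for l in range(n))
        assert val >= base + gap - 1e-12
    return True

assert check_lemma([1.0, 2.5, 2.7, 5.0], [9.0, 4.0, 3.5, 1.0])
assert check_lemma([0.1, 0.2, 0.3], [3.0, 2.0, 1.0])
```

Note that swapping the two entries around the smallest gaps attains the bound with equality, so the gap term cannot be improved in general.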
$$\frac{\sup_{D_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}=\frac{\sup_{D_\eta}\prod_{i=1}^k\big[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\tilde\lambda_j\big]^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k\big[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\tilde\lambda_j\big]^{a_i+n/2-1}}.$$
At first, we focus on the following term:
$$\sup_{D_\eta}\prod_{i=1}^k\Big[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\tilde\lambda_j\Big]^{a_i+n/2-1}. \qquad\text{(S4)}$$
For $\Gamma\in D_\eta$, $(\Gamma_{ii}-1)^2+\sum_{j\ne i}\Gamma_{ji}^2<\eta^2$ holds, and it implies $\Gamma_{ii}^2\ge1-\eta^2$. Therefore, we obtain the upper bound of (S4) as follows:
$$\text{(S4)}\le\sup_{D_\eta}\prod_{i=1}^k\Big[\frac hn+\Gamma_{ii}^2\tilde\lambda_i+\sum_{j\ne i}\Gamma_{ji}^2\tilde\lambda_j\Big]^{a_i+n/2-1}\le\sup_{D_\eta}\prod_{i=1}^k\Big[\frac hn+\Gamma_{ii}^2\tilde\lambda_i+(1-\Gamma_{ii}^2)\tilde\lambda_1\Big]^{a_i+n/2-1}$$
$$\le\prod_{i=1}^k\Big[\frac hn+\tilde\lambda_i+\eta^2\tilde\lambda_1\Big]^{a_i+n/2-1}\le\prod_{i=1}^k\Big[\Big(\frac hn+\tilde\lambda_i\Big)\Big(1+\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_i}\Big)\Big]^{a_i+n/2-1}\le\prod_{i=1}^k\Big[\frac hn+\tilde\lambda_i\Big]^{a_i+n/2-1}\cdot\Big(1+\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\Big)^{\sum_{i=1}^k(a_i+n/2-1)}.$$
Now, we focus on the following term:
$$\inf_{E_\eta}\prod_{i=1}^k\Big[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\tilde\lambda_j\Big]^{a_i+n/2-1}. \qquad\text{(S5)}$$
For $\Gamma\in E_\eta$, $\sum_{j=1}^k(\Gamma_{ji}^2-S_{ji}^2)\ge-\eta^2$ for $i=1,\dots,k$. Therefore, we obtain the lower bound of (S5) as follows:
$$\text{(S5)}\ge\inf_{E_\eta}\prod_{i=1}^k\Big[\sum_{j=1}^k\Gamma_{ji}^2\Big(\tilde\lambda_j+\frac hn\Big)\Big]^{a_i+n/2-1}=\inf_{E_\eta}\prod_{i=1}^k\Big[\sum_{j=1}^kS_{ji}^2\Big(\tilde\lambda_j+\frac hn\Big)+\sum_{j=1}^k(\Gamma_{ji}^2-S_{ji}^2)\Big(\tilde\lambda_j+\frac hn\Big)\Big]^{a_i+n/2-1}$$
$$\ge\inf_{E_\eta}\prod_{i=1}^k\Big[\sum_{j=1}^kS_{ji}^2\Big(\tilde\lambda_j+\frac hn\Big)-\eta^2\sum_{j=1}^k\Big(\tilde\lambda_j+\frac hn\Big)\Big]^{a_i+n/2-1}\ge\prod_{i=1}^k\Big[\sum_{j=1}^kS_{ji}^2\Big(\tilde\lambda_j+\frac hn\Big)-k\eta^2\Big(\tilde\lambda_1+\frac hn\Big)\Big]^{a_i+n/2-1}$$
$$\ge\prod_{i=1}^k\Big[\sum_{j=1}^kS_{ji}^2\Big(\tilde\lambda_j+\frac hn\Big)\Big(1-k\eta^2\frac{\tilde\lambda_1+h/n}{\tilde\lambda_k+h/n}\Big)\Big]^{a_i+n/2-1}\ge\prod_{i=1}^k\Big[\sum_{j=1}^kS_{ji}^2\Big(\tilde\lambda_j+\frac hn\Big)\Big]^{a_i+n/2-1}
$$\cdot\Big(1-2k\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\Big)^{\sum_{i=1}^k(a_i+n/2-1)}.$$
Consider the $k\times k$ matrix $M$ whose $(j,i)$-th entry is given by $S_{ji}^2$. Since $S$ is an orthonormal matrix, the matrix $M$ is doubly stochastic, meaning that the sum of each row and the sum of each column both equal 1. By the Birkhoff–von Neumann theorem, any doubly stochastic matrix can be expressed as a convex combination of permutation matrices. Therefore, there exist nonnegative weights $w_0,w_1,\dots,w_l\ge0$ satisfying $\sum_iw_i=1$ such that
$$M=w_0I_k+\sum_{i=1}^lw_iP_i,$$
where $I_k,P_1,\dots,P_l$ are all $k\times k$ permutation matrices. Since $\|S-I_k\|_F\ge\epsilon$, we obtain the following inequality:
$$\|S-I_k\|_F^2=\sum_{j=1}^k\Big((S_{jj}-1)^2+\sum_{i\ne j}S_{ji}^2\Big)=\sum_{j=1}^k(2-2S_{jj})\le\sum_{j=1}^k(2-2S_{jj}^2)=\sum_{j=1}^k\Big((S_{jj}^2-1)^2+\sum_{i\ne j}(S_{ji}^2)^2\Big)=\|M-I_k\|_F^2,$$
which implies $\|M-I_k\|_F^2\ge\epsilon^2$. Therefore, the inequality holds:
$$\|M-I_k\|_F=\Big\|\sum_{i=1}^lw_iP_i-(1-w_0)I_k\Big\|_F=\Big\|\sum_{i=1}^lw_i(P_i-I_k)\Big\|_F\le\sum_{i=1}^lw_i\|P_i-I_k\|_F\le\sum_{i=1}^lw_i(\|P_i\|_F+\|I_k\|_F)=2\sqrt k(1-w_0),$$
which implies $w_0<1-\frac{\epsilon}{2\sqrt k}$.
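The Birkhoff–von Neumann step above can be illustrated numerically. The sketch below (illustrative matrices, not from the paper) forms $M=(S_{ji}^2)$ for a small orthogonal $S$, checks that $M$ is doubly stochastic, and recovers a convex combination of permutation matrices by a greedy bottleneck search, which is adequate for the small $k$ considered here.

```python
import itertools
import math

def birkhoff_decompose(M, tol=1e-12):
    # Greedy Birkhoff-von Neumann: peel off the permutation with the largest
    # bottleneck weight until (numerically) nothing is left.
    k = len(M)
    M = [row[:] for row in M]
    combo = []
    while True:
        best = None
        for sigma in itertools.permutations(range(k)):
            w = min(M[i][sigma[i]] for i in range(k))
            if best is None or w > best[1]:
                best = (sigma, w)
        sigma, w = best
        if w <= tol:
            break
        combo.append((w, sigma))
        for i in range(k):
            M[i][sigma[i]] -= w
    return combo

# The entrywise squares of an orthogonal matrix form a doubly stochastic matrix.
c, s = math.cos(0.4), math.sin(0.4)
S = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
M = [[S[j][i] ** 2 for i in range(3)] for j in range(3)]
assert all(abs(sum(row) - 1) < 1e-12 for row in M)          # rows sum to 1
assert all(abs(sum(M[j][i] for j in range(3)) - 1) < 1e-12  # columns sum to 1
           for i in range(3))
combo = birkhoff_decompose(M)
assert abs(sum(w for w, _ in combo) - 1) < 1e-9             # weights sum to 1
```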
Let
$$g(A)=\sum_{i=1}^k\Big(a_i+\frac n2-1\Big)\log\Big(\sum_{j=1}^kA_{ji}\Big(\tilde\lambda_j+\frac hn\Big)\Big)$$
for a doubly stochastic matrix $A\in\mathbb R^{k\times k}$. Then the inequality holds:
$$g(M)=\sum_{i=1}^k\Big(a_i+\frac n2-1\Big)\log\Big(\sum_{j=1}^kM_{ji}\Big(\tilde\lambda_j+\frac hn\Big)\Big)=\sum_{i=1}^k\Big(a_i+\frac n2-1\Big)\log\Big(w_0\sum_{j=1}^k[I_k]_{ji}\Big(\tilde\lambda_j+\frac hn\Big)+\sum_lw_l\sum_{j=1}^k[P_l]_{ji}\Big(\tilde\lambda_j+\frac hn\Big)\Big)$$
$$\ge\sum_{i=1}^k\Big(a_i+\frac n2-1\Big)\Big[w_0\log\Big(\sum_{j=1}^k[I_k]_{ji}\Big(\tilde\lambda_j+\frac hn\Big)\Big)+\sum_lw_l\log\Big(\sum_{j=1}^k[P_l]_{ji}\Big(\tilde\lambda_j+\frac hn\Big)\Big)\Big]=w_0g(I_k)+\sum_lw_lg(P_l),$$
where the inequality holds by Jensen's inequality, since $\log$ is concave. Since $a_1<\cdots<a_k$ and $\tilde\lambda_1>\cdots>\tilde\lambda_k$, applying Lemma S1.12 yields
$$g(P_l)\ge g(I_k)+\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}(\log\tilde\lambda_l-\log\tilde\lambda_{l+1})=g(I_k)+\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\frac{\tilde\lambda_l}{\tilde\lambda_{l+1}}.$$
Therefore, we obtain the lower bound of $g(M)$ as follows:
$$g(M)\ge w_0g(I_k)+\sum_lw_lg(P_l)\ge g(I_k)+\sum_lw_l\cdot\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\frac{\tilde\lambda_l}{\tilde\lambda_{l+1}}\ge g(I_k)+\frac{\epsilon}{2\sqrt k}\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\frac{\tilde\lambda_l}{\tilde\lambda_{l+1}},$$
where the last inequality uses $\sum_lw_l=1-w_0>\epsilon/(2\sqrt k)$.
The lower bound of (S5) is given by
$$\text{(S5)}\ge\prod_{i=1}^k\Big[\tilde\lambda_i+\frac hn\Big]^{a_i+n/2-1}\exp\Big(\frac{\epsilon}{2\sqrt k}\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\frac{\tilde\lambda_l}{\tilde\lambda_{l+1}}\Big)\cdot\Big(1-2k\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\Big)^{\sum_{i=1}^k(a_i+n/2-1)}.$$
Combining the upper bound of (S4) and the lower bound of (S5), the following inequality holds:
$$\frac{\sup_{D_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}\le\frac{\prod_{i=1}^k\big[\frac hn+\tilde\lambda_i\big]^{a_i+n/2-1}\cdot\big(1+\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\big)^{\sum_{i=1}^k(a_i+n/2-1)}}{\prod_{i=1}^k\big[\tilde\lambda_i+\frac hn\big]^{a_i+n/2-1}\exp\big(\frac{\epsilon}{2\sqrt k}\min_{l<k}(a_{l+1}-a_l)\min_{l<k}\log\frac{\tilde\lambda_l}{\tilde\lambda_{l+1}}\big)\cdot\big(1-2k\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\big)^{\sum_{i=1}^k(a_i+n/2-1)}}$$
$$\lesssim\frac{\exp\big(n\eta^2\tilde\lambda_1/\tilde\lambda_k\big)}{\exp\big(\frac{\epsilon}{2\sqrt k}\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log(\tilde\lambda_l/\tilde\lambda_{l+1})\big)}. \qquad\blacksquare$$

Lemma S1.14. Assume that conditions A1–A5 hold.
Define the sets as follows:
$$D_\eta=\Big\{\Gamma\in O(p):\inf_{Q_2\in O(p-k)}\big\|\mathrm{diag}(I_k,Q_2)-\Gamma\big\|_F<\eta\Big\},\qquad E_\eta=\Big\{\Gamma\in O(p):\inf_{Q_2\in O(p-k)}\big\|\mathrm{diag}(S,Q_2)-\Gamma\big\|_F<\eta\Big\},$$
where $S\in O(k)$ has non-negative diagonal elements, and $\|P-S\|_F\ge\epsilon$ for all permutation matrices $P$. If $a_1=\cdots=a_k$, then the inequality holds:
$$\frac{\sup_{D_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}\lesssim\frac{\exp\big(n\eta^2\tilde\lambda_1/\tilde\lambda_k\big)}{\exp\big(n\epsilon^2\min_{i<k}\log(\tilde\lambda_i/\tilde\lambda_{i+1})\big)}.$$
Proof. By the proof of Lemma S1.13, we obtain the following:
$$\frac{\sup_{D_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k(c_i/n)^{a_i+n/2-1}}=\frac{\sup_{D_\eta}\prod_{i=1}^k\big[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\tilde\lambda_j\big]^{a_1+n/2-1}}{\inf_{E_\eta}\prod_{i=1}^k\big[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\tilde\lambda_j\big]^{a_1+n/2-1}}. \qquad\text{(S6)}$$
At first, the upper bound of the numerator of (S6) is given by
$$\sup_{D_\eta}\prod_{i=1}^k\Big[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\tilde\lambda_j\Big]^{a_1+n/2-1}\le\prod_{i=1}^k\Big[\frac hn+\tilde\lambda_i\Big]^{a_1+n/2-1}\cdot\Big(1+\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\Big)^{k(a_1+n/2-1)}.$$
Next, the lower bound of the denominator of (S6) is given by
$$\inf_{E_\eta}\prod_{i=1}^k\Big[\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\tilde\lambda_j\Big]^{a_1+n/2-1}\ge\prod_{i=1}^k\Big[\sum_{j=1}^kS_{ji}^2\Big(\tilde\lambda_j+\frac hn\Big)\Big]^{a_1+n/2-1}\cdot\Big(1-2k\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\Big)^{k(a_1+n/2-1)}.$$
Since $\|P-S\|_F\ge\epsilon$ for all permutation matrices $P$, there exists an entry $S_{uv}$ which satisfies $S_{uv}^2\in[\epsilon^2/k,\,1-\epsilon^2/k]$. Then the inequality holds:
$$\sum_{j=1}^kS_{jv}^2\Big(\tilde\lambda_j+\frac hn\Big)=\sum_{j<u}S_{jv}^2\Big(\tilde\lambda_j+\frac hn\Big)+S_{uv}^2\Big(\tilde\lambda_u+\frac hn\Big)+\sum_{j>u}S_{jv}^2\Big(\tilde\lambda_j+\frac hn\Big)$$
$$\ge\Big(\sum_{j<u}S_{jv}^2\Big)\prod_{j<u}\Big(\tilde\lambda_j+\frac hn\Big)^{S_{jv}^2/\sum_{j<u}S_{jv}^2}+S_{uv}^2\Big(\tilde\lambda_u+\frac hn\Big)+\Big(\sum_{j>u}S_{jv}^2\Big)\prod_{j>u}\Big(\tilde\lambda_j+\frac hn\Big)^{S_{jv}^2/\sum_{j>u}S_{jv}^2}, \qquad\text{(S7)}$$
where the last inequality holds by the weighted arithmetic–geometric mean inequality.
Since $S_{uv}^2\in[\epsilon^2/k,\,1-\epsilon^2/k]$, either $\sum_{j<u}S_{jv}^2$ or $\sum_{j>u}S_{jv}^2$ is larger than $\epsilon^2/(2k)$. Without loss of generality, we assume that $\sum_{j<u}S_{jv}^2$ is larger than $\epsilon^2/(2k)$. Then the lower bound of (S7) is given by
$$\text{(S7)}\ge\Big(\sum_{j<u}S_{jv}^2\Big)\prod_{j<u}\Big(\tilde\lambda_j+\frac hn\Big)^{S_{jv}^2/\sum_{j<u}S_{jv}^2}+\Big(\sum_{j\ge u}S_{jv}^2\Big)\prod_{j\ge u}\Big(\tilde\lambda_j+\frac hn\Big)^{S_{jv}^2/\sum_{j\ge u}S_{jv}^2}$$
$$=y_1^{t_1}y_2^{1-t_1}\cdot\Big[t_1\Big(\frac{y_1}{y_2}\Big)^{1-t_1}+(1-t_1)\Big(\frac{y_2}{y_1}\Big)^{t_1}\Big]=\prod_{j=1}^k\Big(\tilde\lambda_j+\frac hn\Big)^{S_{jv}^2}\cdot\Big[t_1\Big(\frac{y_1}{y_2}\Big)^{1-t_1}+(1-t_1)\Big(\frac{y_2}{y_1}\Big)^{t_1}\Big], \qquad\text{(S8)}$$
where $t_1=\sum_{j<u}S_{jv}^2$, $y_1=\prod_{j<u}(\tilde\lambda_j+\frac hn)^{S_{jv}^2/\sum_{j<u}S_{jv}^2}$, and $y_2=\prod_{j\ge u}(\tilde\lambda_j+\frac hn)^{S_{jv}^2/\sum_{j\ge u}S_{jv}^2}$.

Let $f(t)=ta^{t-1}+(1-t)a^t$ with $a>1$. Then
$$f'(t)=a^{t-1}\big((\log a-a\log a)t+a\log a-a+1\big)$$
holds. For $t\in(s,1-s)$ with $s>0$, the inequality $f(t)\ge f(s)\wedge f(1-s)$ holds. We apply a Taylor expansion to $f(s)$ and $f(1-s)$, and obtain the following equalities:
$$f(s)=a^s(sa^{-1}+1-s)=\big(1+(\log a)s+\tfrac12(\log a)^2s^2+O(s^3)\big)(sa^{-1}+1-s)=1+s(a^{-1}+\log a-1)+O(s^2),$$
$$f(1-s)=a^{-s}(1-s+as)=\big(1-(\log a)s+\tfrac12(\log a)^2s^2+O(s^3)\big)(1-s+as)=1+s(a-1-\log a)+O(s^2),$$
for sufficiently small $s$. It implies $f(s)\wedge f(1-s)=f(s)$. Therefore, the lower bound of (S8) is given by
$$\text{(S8)}\ge\prod_{j=1}^k\Big(\tilde\lambda_j+\frac hn\Big)^{S_{jv}^2}\cdot\Big[1+\frac{\epsilon^2}{2k}\Big(\Big(\frac{y_1}{y_2}\Big)^{-1}+\log\frac{y_1}{y_2}-1\Big)+O\Big(\Big(\frac{\epsilon^2}{2k}\Big)^2\Big)\Big]$$
$$\ge\prod_{j=1}^k\Big(\tilde\lambda_j+\frac hn\Big)^{S_{jv}^2}\cdot\Big[1+c\frac{\epsilon^2}{2k}\log\frac{y_1}{y_2}+O(\epsilon^4)\Big]\ge\prod_{j=1}^k\Big(\tilde\lambda_j+\frac hn\Big)^{S_{jv}^2}\cdot\Big[1+c\frac{\epsilon^2}{4k}\min_{i<k}\log\frac{\tilde\lambda_i}{\tilde\lambda_{i+1}}\Big],$$
for some positive constant $c$.
Finally, the upper bound of (S6) is given by
$$\text{(S6)}\le\frac{\prod_{i=1}^k\big[\frac hn+\tilde\lambda_i\big]^{a_1+n/2-1}\cdot\big(1+\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\big)^{k(a_1+n/2-1)}}{\prod_{i=1}^k\big[\sum_{j=1}^kS_{ji}^2\big(\tilde\lambda_j+\frac hn\big)\big]^{a_1+n/2-1}\cdot\big(1-2k\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\big)^{k(a_1+n/2-1)}}$$
$$\le\frac{\prod_{i=1}^k\big[\frac hn+\tilde\lambda_i\big]^{a_1+n/2-1}\cdot\big(1+\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\big)^{k(a_1+n/2-1)}}{\prod_{i=1}^k\big[\prod_{j=1}^k\big(\tilde\lambda_j+\frac hn\big)^{S_{ji}^2}\big]^{a_1+n/2-1}\cdot\big[1+c\frac{\epsilon^2}{4k}\min_{i<k}\log\frac{\tilde\lambda_i}{\tilde\lambda_{i+1}}\big]^{a_1+n/2-1}\cdot\big(1-2k\eta^2\frac{\tilde\lambda_1}{\tilde\lambda_k}\big)^{k(a_1+n/2-1)}}$$
$$\lesssim\frac{\exp\big(n\eta^2\tilde\lambda_1/\tilde\lambda_k\big)}{\exp\big(n\epsilon^2\min_{i<k}\log(\tilde\lambda_i/\tilde\lambda_{i+1})\big)}. \qquad\blacksquare$$

Theorem S1.15. Under the assumptions of Lemma 3.1, for $i=1,\dots,k$, we have
$$E\Big[\frac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X_n\Big]=\frac{n}{n+2a_i-4}\,\frac{1}{\lambda_{0,i}}\Big[(1+\delta_i)\lambda_{0,i}+\frac{\bar dp}{n}+\alpha_i\sqrt{\frac pn}+\frac hn\Big]-1+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\kappa\epsilon}{2}\Big)+\frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2a_n}\Big),$$
where $\alpha_i\in[-C,C]$ for some positive constant $C$, $\delta_i\lesssim n^{-1/2+\zeta}$ for all small $\zeta>0$, $\bar d=\frac{1}{p-k}\sum_{j=k+1}^p\lambda_{0,j}$, and $\kappa=\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\big(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\big)$.

Lemma S1.16. Under the assumptions of Lemma 3.1, for $i=1,\dots,k$, we have
$$\frac{1}{\lambda_{0,i}}\big|E[\lambda_i\,|\,X_n]-E[\lambda_{(i)}\,|\,X_n]\big|=\frac{\lambda_{0,1}}{\lambda_{0,i}}\cdot\Big(O\Big(\frac1n\Big)+O\Big(\exp\Big(-\frac{\kappa\epsilon}{2}\Big)\Big)+O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2a_n}\Big)\Big),$$
where $\kappa=\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\big(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\big)$.
Proof. Let $D_\epsilon=\big\{\Gamma\in O(p):\inf_{Q_2\in O(p-k)}\|\mathrm{diag}(I_k,Q_2)-\Gamma\|_F<\epsilon\big\}$. By Lemma 3.1, we have
$$\int_{D_\epsilon}\int\Pi(\Lambda,\Gamma\,|\,X_n)(d\Lambda)(d\Gamma)=1+O\Big(\exp\Big(-\frac{\kappa\epsilon}{2}\Big)\Big)+O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2a_n}\Big),$$
where $\kappa=\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\big(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\big)$. On $D_\epsilon$, since $(\Gamma_{ii}-1)^2+(1-\Gamma_{ii}^2)<\epsilon^2$, it follows that $\Gamma_{ii}^2>1-\epsilon^2$. Given $\frac{c_i}{n}=\frac hn+\sum_{j=1}^n\Gamma_{ji}^2\tilde\lambda_j$, we obtain the bounds:
$$\frac{c_i}{n}\in\Big[(1-\epsilon^2)\tilde\lambda_i+\frac hn,\ (1-\epsilon^2)\tilde\lambda_i+\epsilon^2\tilde\lambda_1+\frac hn\Big],\quad i=1,\dots,n,\qquad \frac{c_i}{n}\in\Big[\frac hn,\ \epsilon^2\tilde\lambda_1+\frac hn\Big],\quad i=n+1,\dots,p.$$
Given $\Gamma\in D_\epsilon$, each $\lambda_i$ follows an inverse gamma distribution,
$$\lambda_i\,|\,\Gamma,X_n\ \overset{\mathrm{ind}}{\sim}\ \mathrm{InvGam}\Big(a_i+\frac n2-1,\ \frac{c_i}{2}\Big).$$
By Chebyshev's inequality,
$$\Pi\big(|\lambda_i-E[\lambda_i\,|\,\Gamma,X_n]|>\beta_i\,\big|\,\Gamma,X_n\big)\le\frac{\mathrm{Var}(\lambda_i)}{\beta_i^2}=\frac{2}{\beta_i^2}\Big(\frac{c_i}{n}\Big)^2\cdot\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}.$$
Let $p_i$ be an upper bound for each of these probabilities:
$$p_i=\begin{cases}\dfrac{2}{\beta_i^2}\Big((1-\epsilon^2)\tilde\lambda_i+\epsilon^2\tilde\lambda_1+\dfrac hn\Big)^2\cdot\dfrac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}&\text{for }i=1,\dots,n,\\[1ex]
\dfrac{2}{\beta_i^2}\Big(\epsilon^2\tilde\lambda_1+\dfrac hn\Big)^2\cdot\dfrac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}&\text{for }i=n+1,\dots,p.\end{cases}$$
Choose $\beta_i=\frac14\zeta_0\tilde\lambda_i$ for $i=1,\dots,n$, and $\beta_i=\frac14\zeta_0\tilde\lambda_n$ for $i=n+1,\dots,p$. Then with probability at least $\prod_{i=1}^p(1-p_i)$, the inequalities $\lambda_i\in[E[\lambda_i\,|\,\Gamma,X_n]-\beta_i,\ E[\lambda_i\,|\,\Gamma,X_n]+\beta_i]$ hold for $i=1,\dots,p$. This implies $\lambda_1>\cdots>\lambda_k$ and $\lambda_k>\lambda_i$ for $i=k+1,\dots,p$, and therefore $\lambda_{(i)}=\lambda_i$ for all $i=1,\dots,k$. We now bound the difference of expectations:
$$\big|E[\lambda_{(i)}-\lambda_i\,|\,X_n]\big|=\big|E[(\lambda_{(i)}-\lambda_i)\cdot I(\lambda_{(i)}\ne\lambda_i)\,|\,X_n]\big|\le2E\Big[\sum_{j=1}^p\lambda_j\cdot I(\lambda_{(i)}\ne\lambda_i)\,\Big|\,X_n\Big]$$
$$=2E\Big[\sum_{j=1}^pE[\lambda_j\cdot I(\lambda_{(i)}\ne\lambda_i)\,|\,\Gamma,X_n]\cdot\big(I(\Gamma\in D_\epsilon)+I(\Gamma\notin D_\epsilon)\big)\,\Big|\,X_n\Big]$$
$$\le2E\Big[\sum_{j=1}^pE[\lambda_j\cdot I(\lambda_{(i)}\ne\lambda_i)\,|\,\Gamma,X_n]\cdot I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Big]+2E\Big[\sum_{j=1}^pE[\lambda_j\,|\,\Gamma,X_n]\cdot I(\Gamma\notin D_\epsilon)\,\Big|\,X_n\Big]. \qquad\text{(S9)}$$
There exists $\delta_j(\Gamma)>0$ such that
$$\Pi\big(\lambda_{(i)}\ne\lambda_i\,|\,\Gamma,X_n\big)=\Pi\big(\lambda_j\ge\delta_j(\Gamma)\,|\,\Gamma,X_n\big),$$
for each $j=1,\dots,p$. Therefore, the inequality holds:
$$E\Big[\sum_{j=1}^pE[\lambda_j\cdot I(\lambda_{(i)}\ne\lambda_i)\,|\,\Gamma,X_n]\cdot I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Big]\le E\Big[\sum_{j=1}^pE[\lambda_j\cdot I(\lambda_j\ge\delta_j(\Gamma))\,|\,\Gamma,X_n]\cdot I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Big].$$
By applying Hölder's inequality, we obtain the following bound on the first term on the right-hand side of (S9):
$$E\Big[\sum_{j=1}^pE[\lambda_j\cdot I(\lambda_j\ge\delta_j(\Gamma))\,|\,\Gamma,X_n]\cdot I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Big]\le E\Big[\sum_{j=1}^p\sqrt{E[\lambda_j^2\,|\,\Gamma]}\cdot\sqrt{E[I(\lambda_j>\delta_j(\Gamma))\,|\,\Gamma,X_n]}\,I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Big]$$
$$\lesssim E\Big[\sum_{j=1}^pE[\lambda_j\,|\,\Gamma]\cdot\sqrt{\Pi(\lambda_j>\delta_j(\Gamma)\,|\,\Gamma,X_n)}\,I(\Gamma\in D_\epsilon)\,\Big|\,X_n\Big]\le\sup_{\Gamma\in D_\epsilon}\Big(\sum_{j=1}^p\frac{c_j}{n+2a_j-4}\Big)\cdot\sqrt{1-\prod_{i=1}^p(1-p_i)}\le\sup_{\Gamma\in D_\epsilon}\Big(\sum_{j=1}^p\frac{c_j}{n+2a_j-4}\Big)\cdot\sqrt{\sum_{i=1}^pp_i}.$$
We obtain the following bound on the second term on the right-hand side of (S9):
$$E\Big[\sum_{j=1}^pE[\lambda_j\,|\,\Gamma,X_n]\cdot I(\Gamma\notin D_\epsilon)\,\Big|\,X_n\Big]=E\Big[\Big(\sum_{j=1}^p\frac{c_j}{n+2a_j-4}\Big)\cdot I(\Gamma\notin D_\epsilon)\,\Big|\,X_n\Big]$$
$$\le\sup_{\Gamma\notin D_\epsilon}\Big(\sum_{j=1}^p\frac{c_j}{n+2a_j-4}\Big)\cdot\Big(O\Big(\exp\Big(-\frac{\kappa\epsilon}{2}\Big)\Big)+O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2a_n}\Big)\Big).$$
Therefore, we obtain
$$\big|E[\lambda_i\,|\,X_n]-E[\lambda_{(i)}\,|\,X_n]\big|\le\sup_{\Gamma\in D_\epsilon}\Big(\sum_{j=1}^p\frac{c_j}{n+2a_j-4}\Big)\cdot\sum_{i=1}^pp_i+\sup_{\Gamma\notin D_\epsilon}\Big(\sum_{j=1}^p\frac{c_j}{n+2a_j-4}\Big)\cdot\Big(O\Big(\exp\Big(-\frac{\kappa\epsilon}{2}\Big)\Big)+O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2a_n}\Big)\Big)$$
$$\le\sup_\Gamma\Big(\sum_{j=1}^p\frac{c_j}{n+2a_j-4}\Big)\cdot\sum_{i=1}^pp_i+\sup_\Gamma\Big(\sum_{j=1}^p\frac{c_j}{n+2a_j-4}\Big)\cdot\Big(O\Big(\exp\Big(-\frac{\kappa\epsilon}{2}\Big)\Big)+O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2a_n}\Big)\Big).$$
Since $a_1\le\cdots\le a_p$, the term $\sum_{j=1}^p\frac{c_j}{n+2a_j-4}$ is maximized when $\Gamma=I_p$, and hence
$$\sup_\Gamma\Big(\sum_{j=1}^p\frac{c_j}{n+2a_j-4}\Big)\le\sum_{j=1}^n\frac{n\tilde\lambda_j+h}{n+2a_j-4}+\sum_{j=n+1}^p\frac{h}{n+2a_j-4}=O\Big(\sum_{j=1}^k\lambda_{0,j}+\frac{np}{a_n}+\frac{hp}{a_p}\Big)=O\Big(\sum_{j=1}^k\lambda_{0,j}\Big)=O(\lambda_{0,1}),$$
where we used the condition $a_n\gg n^{3/2}p$. Next, we evaluate the upper bound of $\sum_{i=1}^pp_i$:
$$\sum_{i=1}^pp_i=\sum_{i=1}^n\frac{2}{\beta_i^2}\Big((1-\epsilon^2)\tilde\lambda_i+\epsilon^2\tilde\lambda_1+\frac hn\Big)^2\cdot\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}+\sum_{i=n+1}^p\frac{2}{\beta_i^2}\Big(\epsilon^2\tilde\lambda_1+\frac hn\Big)^2\cdot\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}$$
$$\lesssim\sum_{i=1}^k\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}+\sum_{i=k+1}^n\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}+\frac{h^2}{n^2\tilde\lambda_n^2}\sum_{i=n+1}^p\frac{n^2}{(2a_i+n-4)^2(2a_i+n-6)}$$
$$=O\Big(\frac kn\Big)+O\Big(\frac{n^2(n-k)}{a_n^3}\Big)+O\Big(\frac{h^2}{p^2}\cdot\frac{n^2(p-n)}{a_p^3}\Big)=O\Big(\frac1n\Big),$$
again using $a_n\gg n^{3/2}p$. Combining these, we conclude
$$\frac{1}{\lambda_{0,i}}\big|E[\lambda_i\,|\,X_n]-E[\lambda_{(i)}\,|\,X_n]\big|\le\frac{1}{\lambda_{0,i}}\cdot\sup_\Gamma\Big(\sum_{j=1}^p\frac{c_j}{n+2a_j-4}\Big)\cdot\Big(\sum_{i=1}^pp_i+O\Big(\exp\Big(-\frac{\kappa\epsilon}{2}\Big)\Big)+O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2a_n}\Big)\Big)$$
$$=\frac{\lambda_{0,1}}{\lambda_{0,i}}\cdot\Big(O\Big(\frac1n\Big)+O\Big(\exp\Big(-\frac{\kappa\epsilon}{2}\Big)\Big)+O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2a_n}\Big)\Big). \qquad\blacksquare$$

Theorem S1.17.
Under the assumptions of Lemma 3.2, for \(i = 1,\ldots,k\), we have
\[
E\Big[\frac{\lambda_i - \lambda_{0,i}}{\lambda_{0,i}} \,\Big|\, X_n\Big]
= \frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\Big[(1+\delta_i)\lambda_{0,i} + \bar d\,\frac{p}{n} + \gamma_i\sqrt{\frac{p}{n}} + \frac{h}{n}\Big] - 1
+ O\bigg(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}} + \frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\tau}{2}n\epsilon^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq} + \frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\bigg),
\]
where \(\gamma_i \in [-C, C]\) for some positive constant \(C\), \(\delta_i \lesssim n^{-1/2+\zeta}\) for all small \(\zeta > 0\), \(\bar d = \frac{1}{p-k}\sum_{j=k+1}^{p}\lambda_{0,j}\), and \(\tau = \min_{l<k}\log\big(\lambda_{0,l}/\lambda_{0,l+1}\big)\).

Lemma S1.18. Under the assumptions of Lemma 3.2, for \(i = 1,\ldots,k\), we have
\[
\frac{1}{\lambda_{0,i}}\big|E[\lambda_i \mid X_n] - E[\lambda_{(i)} \mid X_n]\big|
= \frac{\lambda_{0,1}}{\lambda_{0,i}}\cdot\bigg(O\Big(\frac{1}{n}\Big) + O\Big(\exp\Big(-\frac{\tau}{2}n\epsilon^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\Big) + O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big)\bigg),
\]
where \(\tau = \min_{l<k}\log\big(\lambda_{0,l}/\lambda_{0,l+1}\big)\).

Proof. The subset \(D_{\epsilon,l}\) of the orthogonal group \(O(p)\) is defined as
\[
D_{\epsilon,l} = \bigg\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} P_l & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon\bigg\},
\]
where \(P_1,\ldots,P_L\) denote all \(k \times k\) permutation matrices. The proof proceeds by applying the same argument as in Lemma S1.16, with \(D_\epsilon\) replaced by the union \(\bigcup_{l=1}^{L} D_{\epsilon,l}\). By Lemma 3.1, we have
\[
\int_{\bigcup_{l=1}^{L} D_{\epsilon,l}} \int \pi(\Gamma, \Lambda \mid X_n)\,(d\Gamma)(d\Lambda)
= 1 + O\Big(\exp\Big(-\frac{\tau}{2}n\epsilon^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\Big) + O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big),
\]
where \(\tau = \min_{l<k}\log(\lambda_{0,l}/\lambda_{0,l+1})\). Following the same bounding steps as in Lemma S1.16, we obtain
\[
\frac{1}{\lambda_{0,i}}\big|E[\lambda_i \mid X_n] - E[\lambda_{(i)} \mid X_n]\big|
= \frac{\lambda_{0,1}}{\lambda_{0,i}}\cdot\bigg(O\Big(\frac{1}{n}\Big) + O\Big(\exp\Big(-\frac{\tau}{2}n\epsilon^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\Big) + O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big)\bigg). \qquad\square
\]

Proof of Lemma 3.1

Proof. We consider the following inequality:
\[
\Pi\bigg(\bigg\{\Gamma \in O(p) : \inf_{\substack{Q_1 \in O(k),\\ Q_2 \in O(p-k)}}\bigg\|\begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon\bigg\} \,\bigg|\, X_n\bigg) \tag{S10}
\]
\[
\le \Pi\bigg(\bigg\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon_4\bigg\} \,\bigg|\, X_n\bigg) \tag{S11}
\]
\[
+ \Pi\bigg(\bigg\{\Gamma \in O(p) : \inf_{\substack{Q_1 \in O(k),\\ Q_2 \in O(p-k)}}\bigg\|\begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon, \;
\inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F \ge \epsilon_4\bigg\} \,\bigg|\, X_n\bigg), \tag{S12}
\]
for some positive \(\epsilon_4\). By Lemma S1.11, we have derived the following lower bound on the posterior probability:
\[
(\text{S10}) \ge 1 - \Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}.
\]
Thus, in order to establish the convergence of (S11) to 1, it suffices to show that (S12) vanishes as \(n \to \infty\). Now, we focus on the term
\[
\Pi\bigg(\bigg\{\Gamma \in O(p) : \inf_{\substack{Q_1 \in O(k),\\ Q_2 \in O(p-k)}}\bigg\|\begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon, \;
\inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F \ge \epsilon_4\bigg\} \,\bigg|\, X_n\bigg).
\]
Suppose that \(\{R_1,\ldots,R_{M(O(k),\|\cdot\|_F,\epsilon_5)}\}\) forms a maximal \(\epsilon_5\)-packing of \(O(k)\), where \(R_{M(O(k),\|\cdot\|_F,\epsilon_5)} = I_k\), which also serves as an \(\epsilon_5\)-covering. For all \(Q_1 \in O(k)\), there exists some \(R_i\) such that \(\|R_i - Q_1\|_F < \epsilon_5\).
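The packing argument uses the standard fact that a maximal \(\epsilon_5\)-packing is automatically an \(\epsilon_5\)-covering (otherwise an uncovered point could be added to the packing, contradicting maximality). A small numerical illustration of this fact on \(SO(2)\), a toy stand-in for the \(O(k)\) of the proof, with a greedy maximal packing in Frobenius norm:

```python
import numpy as np

def rot(theta):
    # 2x2 rotation matrix, a convenient parametrization of SO(2).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

eps = 0.5
# Greedy maximal packing: add a candidate only if it is eps-separated
# (in Frobenius norm) from every center chosen so far.
centers = []
for theta in np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False):
    R = rot(theta)
    if all(np.linalg.norm(R - C, "fro") >= eps for C in centers):
        centers.append(R)

# Maximality => the packing is also an eps-covering of the sampled points.
covered = all(
    min(np.linalg.norm(rot(t) - C, "fro") for C in centers) < eps
    for t in np.linspace(0.0, 2.0 * np.pi, 500, endpoint=False)
)
assert covered
```

The grid sizes and `eps` here are arbitrary illustrative choices; the same maximality-implies-covering reasoning is what allows the proof to approximate any \(Q_1 \in O(k)\) by some packing center \(R_i\).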
By the triangle inequality, we obtain
\[
\bigg\|\begin{pmatrix} R_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F
\le \bigg\|\begin{pmatrix} R_i & 0 \\ 0 & Q_2 \end{pmatrix} - \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix}\bigg\|_F
+ \bigg\|\begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F. \tag{S13}
\]
Taking the infimum over \(Q_2 \in O(p-k)\) on both sides of (S13), the inequality
\[
\inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} R_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F
\le \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} R_i & 0 \\ 0 & Q_2 \end{pmatrix} - \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix}\bigg\|_F
+ \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F
\le \epsilon_5 + \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F \tag{S14}
\]
holds. Let \(\{S_1,\ldots,S_m\}\) be a subset of \(\{R_1,\ldots,R_{M(O(k),\|\cdot\|_F,\epsilon_5)}\}\) which satisfies the following condition:
\[
\bigg\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon + \epsilon_5, \;
\inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F \ge \epsilon_4\bigg\} \ne \emptyset. \tag{S15}
\]
Moreover, it follows that \(I_k \notin \{S_1,\ldots,S_m\}\), where \(\epsilon_4 > \epsilon + \epsilon_5\).
Then, the following inequality holds:
\[
\begin{aligned}
&\Pi\bigg(\bigg\{\Gamma \in O(p) : \inf_{\substack{Q_1 \in O(k),\\ Q_2 \in O(p-k)}}\bigg\|\begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon, \;
\inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F \ge \epsilon_4\bigg\} \,\bigg|\, X_n\bigg) \\
&\le \Pi\bigg(\bigcup_{i=1}^{M(O(k),\|\cdot\|_F,\epsilon_5)}\bigg\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} R_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon + \epsilon_5, \;
\inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F \ge \epsilon_4\bigg\} \,\bigg|\, X_n\bigg) \\
&\le \Pi\bigg(\bigcup_{i=1}^{m}\bigg\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon + \epsilon_5\bigg\} \,\bigg|\, X_n\bigg),
\end{aligned}
\]
for some positive \(\epsilon_5\) which satisfies \(\epsilon_4 > \epsilon + \epsilon_5\). The first inequality follows from (S14), and the second inequality follows from (S15). By the triangle inequality, the following inequality holds for \(i = 1,\ldots,m\):
\[
\bigg\|\begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix}\bigg\|_F
+ \bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F
\ge \bigg\|\begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F.
\]
Taking the infimum over \(Q_2 \in O(p-k)\) on both sides yields, for \(i = 1,\ldots,m\),
\[
\|S_i - I_k\|_F + \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F
\ge \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F.
\]
By the definition of \(S_i\), there exists \(\Gamma_i \in O(p)\) satisfying condition (S15). Therefore, we obtain the inequality
\[
\|S_i - I_k\|_F > \epsilon_4 - (\epsilon + \epsilon_5), \qquad i = 1,\ldots,m.
\]
Next, we define the sets
\[
D_\eta = \bigg\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \eta\bigg\}, \qquad
E_{\eta,l} = \bigg\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} S_l & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \eta\bigg\},
\]
where \(S_l \in O(k)\) and \(\|S_l - I_k\|_F > \epsilon_4 - (\epsilon + \epsilon_5)\).
By considering the variable transformation \(\Gamma^* = \Gamma\begin{pmatrix} S_l^{T} & 0 \\ 0 & I_{p-k} \end{pmatrix}\), it follows that \((d\Gamma) = (d\Gamma^*)\), since \((d\Gamma)\) is an invariant measure. Therefore, we obtain the following equality:
\[
\int_{E_{\eta,l}} \prod_{i=k+1}^{p}\Big(\frac{c_i}{n}\Big)^{-a_i-n/2+1}(d\Gamma)
= \int_{E_{\eta,l}} \prod_{i=k+1}^{p}\Big(\frac{h}{n} + \sum_{j=1}^{n}\Gamma_{ji}^2\hat\lambda_j\Big)^{-a_i-n/2+1}(d\Gamma)
= \int_{D_\eta} \prod_{i=k+1}^{p}\Big(\frac{h}{n} + \sum_{j=1}^{n}(\Gamma^*_{ji})^2\hat\lambda_j\Big)^{-a_i-n/2+1}(d\Gamma^*)
= \int_{D_\eta} \prod_{i=k+1}^{p}\Big(\frac{c_i}{n}\Big)^{-a_i-n/2+1}(d\Gamma).
\]
The last equality holds due to
\[
\bigg\|\begin{pmatrix} S_l & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F
= \bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma^*\bigg\|_F,
\]
and \(\Gamma_{ji}^2 = (\Gamma^*_{ji})^2\) for \(i > k\). Therefore, we obtain the following inequality:
\[
\frac{\displaystyle\int_{E_{\epsilon+\epsilon_5,l}} \prod_{i=1}^{p} c_i^{-a_i-n/2+1}(d\Gamma)}{\displaystyle\int_{D_{\epsilon+\epsilon_5}} \prod_{i=1}^{p} c_i^{-a_i-n/2+1}(d\Gamma)}
\le \frac{\displaystyle\sup_{E_{\epsilon+\epsilon_5,l}} \prod_{i=1}^{k} c_i^{-a_i-n/2+1} \cdot \int_{E_{\epsilon+\epsilon_5,l}} \prod_{i=k+1}^{p} c_i^{-a_i-n/2+1}(d\Gamma)}{\displaystyle\inf_{D_{\epsilon+\epsilon_5}} \prod_{i=1}^{k} c_i^{-a_i-n/2+1} \cdot \int_{D_{\epsilon+\epsilon_5}} \prod_{i=k+1}^{p} c_i^{-a_i-n/2+1}(d\Gamma)}
= \frac{\displaystyle\sup_{D_{\epsilon+\epsilon_5}} \prod_{i=1}^{k} \big(\tfrac{c_i}{n}\big)^{a_i+n/2-1}}{\displaystyle\inf_{E_{\epsilon+\epsilon_5,l}} \prod_{i=1}^{k} \big(\tfrac{c_i}{n}\big)^{a_i+n/2-1}}. \tag{S16}
\]
By applying Lemma S1.13, the upper bound of (S16) is given by
\[
(\text{S16}) \sim \exp\Big(\frac{n(\epsilon+\epsilon_5)^2\hat\lambda_1}{\hat\lambda_k}\Big)\exp\Big(-\frac{\epsilon_4-(\epsilon+\epsilon_5)}{2\sqrt{k}}\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\Big(\frac{\hat\lambda_l}{\hat\lambda_{l+1}}\Big)\Big)
\asymp \exp\Big(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Big)\exp\Big(-\epsilon_4\min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\Big(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\Big)\Big),
\]
where \(\epsilon_4 \gg \epsilon+\epsilon_5\) and \(\epsilon \gg \epsilon_5\). Let
\[
\kappa = \min_{l<k}(a_{l+1}-a_l)\cdot\min_{l<k}\log\Big(\frac{\lambda_{0,l}}{\lambda_{0,l+1}}\Big).
\]
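The change of variables above relies only on right-invariance of the Haar measure \((d\Gamma)\) on \(O(p)\): for a fixed block-orthogonal matrix, \(\Gamma \mapsto \Gamma^*\) preserves \((d\Gamma)\). This can be checked by Monte Carlo using the standard QR construction of Haar-distributed orthogonal matrices; the dimensions and the test function `g` below are arbitrary illustrative choices, not quantities from the proof:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_orthogonal(p):
    # QR of a Gaussian matrix, with the sign correction that makes
    # the resulting factor exactly Haar-distributed on O(p).
    z = rng.standard_normal((p, p))
    q, r = np.linalg.qr(z)
    return q * np.sign(np.diag(r))

p, k = 6, 2
# Fixed block-orthogonal matrix diag(S, I_{p-k}) with S in O(k).
S = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.eye(p)
B[:k, :k] = S

def g(G):
    # Arbitrary bounded test function of the trailing p-k columns,
    # mimicking the integrands, which depend on Gamma_{ji}^2 for i > k.
    return np.sum(G[:, k:] ** 2 * np.arange(p)[:, None])

vals = np.array([g(haar_orthogonal(p)) for _ in range(20_000)])
vals_shifted = np.array([g(haar_orthogonal(p) @ B) for _ in range(20_000)])
# Invariance: E[g(Gamma)] == E[g(Gamma B)] under Haar measure.
assert abs(vals.mean() - vals_shifted.mean()) < 0.15
```

Since each column of a Haar matrix is uniform on the sphere, both averages concentrate around the same value, which is exactly why the integrals over \(E_{\eta,l}\) and \(D_\eta\) agree.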
Then, we get the following inequality:
\[
\begin{aligned}
\Pi\bigg(\bigg\{\Gamma \in O(p) : \inf_{\substack{Q_1 \in O(k),\\ Q_2 \in O(p-k)}}\bigg\|\begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon\bigg\} \,\bigg|\, X_n\bigg)
&\le \Pi(D_{\epsilon_4} \mid X_n) + m\,\Pi(E_{\epsilon+\epsilon_5,1} \mid X_n) \\
&\le \Pi(D_{\epsilon_4} \mid X_n) + M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Big(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}} - \kappa\epsilon_4\Big)\cdot\Pi(D_{\epsilon+\epsilon_5} \mid X_n) \\
&\le \bigg(1 + M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Big(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}} - \kappa\epsilon_4\Big)\bigg)\,\Pi(D_{\epsilon_4} \mid X_n).
\end{aligned}
\]
We obtain the following bound on the posterior probability:
\[
\begin{aligned}
\Pi(D_{\epsilon_4} \mid X_n)
&\ge \bigg(1 + M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Big(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}} - \kappa\epsilon_4\Big)\bigg)^{-1}\bigg(1 - \Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\bigg) \\
&\ge \bigg(1 + \exp\Big(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}} - \kappa\epsilon_4\Big)\bigg)^{-1}\bigg(1 - \Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\bigg) \\
&\ge \Big(1 + \exp\Big(-\frac{\kappa}{2}\epsilon_4\Big)\Big)^{-1}\bigg(1 - \Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\bigg),
\end{aligned}
\]
where \(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}} \le \frac{\kappa}{2}\epsilon_4\). Therefore, we obtain
\[
\Pi(D_{\epsilon_4}^c \mid X_n)
\le 1 - \Big(1 + \exp\Big(-\frac{\kappa}{2}\epsilon_4\Big)\Big)^{-1}\bigg(1 - \Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\bigg)
= \frac{\exp\big(-\frac{\kappa}{2}\epsilon_4\big) + \big(\frac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon_4^2 a_n}}{1 + \exp\big(-\frac{\kappa}{2}\epsilon_4\big)}
= O\Big(\exp\Big(-\frac{\kappa}{2}\epsilon_4\Big)\Big) + O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\Big),
\]
where \(\epsilon_4 \gg \kappa^{-1}\). In summary, if there exists \(\epsilon > 0\) satisfying
\[
\frac{np}{a_n} \ll \epsilon^2 \le \frac{\lambda_{0,k}}{n\lambda_{0,1}}\cdot\frac{\kappa}{2}\epsilon_4
\qquad\text{and}\qquad \epsilon_4 \gg \epsilon \vee \kappa^{-1},
\]
then the following holds:
\[
\Pi(D_{\epsilon_4}^c \mid X_n) = O\Big(\exp\Big(-\frac{\kappa}{2}\epsilon_4\Big)\Big) + O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\Big).
\]
In particular, the same posterior bound holds under the simplified condition \(\frac{np}{a_n} \ll \frac{\lambda_{0,k}}{n\lambda_{0,1}}\cdot\frac{\kappa}{2}\epsilon_4\) and \(\epsilon_4 \gg \sqrt{\frac{np}{a_n}} \vee \kappa^{-1}\). \(\square\)

Proof of Lemma 3.2

Proof. The proof follows the same procedure as the proof of Lemma 3.1. By replacing
\[
\inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} I_k & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F \ge \epsilon_4
\quad\text{with}\quad
\inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} P & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F \ge \epsilon_4
\;\text{ for all permutation matrices } P,
\]
we obtain the following. Suppose that \(\{R_1,\ldots,R_{M(O(k),\|\cdot\|_F,\epsilon_5)}\}\) forms a maximal \(\epsilon_5\)-packing of \(O(k)\), where \(R_{M(O(k),\|\cdot\|_F,\epsilon_5)} = I_k\), which also serves as an \(\epsilon_5\)-covering.
Let \(\{S_1,\ldots,S_m\}\) be a subset of \(\{R_1,\ldots,R_{M(O(k),\|\cdot\|_F,\epsilon_5)}\}\) which satisfies the following condition:
\[
\bigg\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} S_i & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon + \epsilon_5, \;
\inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} P_l & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F \ge \epsilon_4, \; l = 1,\ldots,L\bigg\} \ne \emptyset. \tag{S17}
\]
Moreover, it follows that \(P_l \notin \{S_1,\ldots,S_m\}\) for \(l = 1,\ldots,L\), where \(\epsilon_4 > \epsilon + \epsilon_5\). Next, we define the sets
\[
D_{\eta,l} = \bigg\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} P_l & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \eta\bigg\}, \qquad
E_{\eta,r} = \bigg\{\Gamma \in O(p) : \inf_{Q_2 \in O(p-k)}\bigg\|\begin{pmatrix} S_r & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \eta\bigg\},
\]
where \(S_r \in O(k)\) and \(\|S_r - P_l\|_F > \epsilon_4 - (\epsilon + \epsilon_5)\).
By considering the variable transformation \(\Gamma^* = \Gamma\begin{pmatrix} S_r^{T}P_l & 0 \\ 0 & I_{p-k} \end{pmatrix}\), it follows that \((d\Gamma) = (d\Gamma^*)\), since \((d\Gamma)\) is an invariant measure. Therefore, we obtain the following equality:
\[
\int_{E_{\eta,r}} \prod_{i=k+1}^{p}\Big(\frac{c_i}{n}\Big)^{-a_i-n/2+1}(d\Gamma)
= \int_{E_{\eta,r}} \prod_{i=k+1}^{p}\Big(\frac{h}{n} + \sum_{j=1}^{n}\Gamma_{ji}^2\hat\lambda_j\Big)^{-a_i-n/2+1}(d\Gamma)
= \int_{D_{\eta,l}} \prod_{i=k+1}^{p}\Big(\frac{h}{n} + \sum_{j=1}^{n}(\Gamma^*_{ji})^2\hat\lambda_j\Big)^{-a_i-n/2+1}(d\Gamma^*)
= \int_{D_{\eta,l}} \prod_{i=k+1}^{p}\Big(\frac{c_i}{n}\Big)^{-a_i-n/2+1}(d\Gamma).
\]
The last equality holds due to
\[
\bigg\|\begin{pmatrix} S_r & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F
= \bigg\|\begin{pmatrix} P_l & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma^*\bigg\|_F,
\]
and \(\Gamma_{ji}^2 = (\Gamma^*_{ji})^2\) for \(i > k\).
Therefore, we obtain the following inequality:
\[
\begin{aligned}
\frac{\displaystyle\int_{E_{\epsilon+\epsilon_5,r}} \prod_{i=1}^{p} c_i^{-a_i-n/2+1}(d\Gamma)}{\displaystyle\int_{D_{\epsilon+\epsilon_5,l}} \prod_{i=1}^{p} c_i^{-a_i-n/2+1}(d\Gamma)}
&\le \frac{\displaystyle\sup_{E_{\epsilon+\epsilon_5,r}} \prod_{i=1}^{k} c_i^{-a_i-n/2+1} \cdot \int_{E_{\epsilon+\epsilon_5,r}} \prod_{i=k+1}^{p} c_i^{-a_i-n/2+1}(d\Gamma)}{\displaystyle\inf_{D_{\epsilon+\epsilon_5,l}} \prod_{i=1}^{k} c_i^{-a_i-n/2+1} \cdot \int_{D_{\epsilon+\epsilon_5,l}} \prod_{i=k+1}^{p} c_i^{-a_i-n/2+1}(d\Gamma)}
= \frac{\displaystyle\sup_{D_{\epsilon+\epsilon_5,l}} \prod_{i=1}^{k} \big(\tfrac{c_i}{n}\big)^{a_i+n/2-1}}{\displaystyle\inf_{E_{\epsilon+\epsilon_5,r}} \prod_{i=1}^{k} \big(\tfrac{c_i}{n}\big)^{a_i+n/2-1}} \\
&\le \Bigg[\frac{\displaystyle\sup_{D_{\epsilon+\epsilon_5,l}} \prod_{i=1}^{k} \tfrac{c_i}{n}}{\displaystyle\inf_{E_{\epsilon+\epsilon_5,r}} \prod_{i=1}^{k} \tfrac{c_i}{n}}\Bigg]^{a_1+n/2-1}
\times \frac{\displaystyle\sup_{D_{\epsilon+\epsilon_5,l}} \prod_{i=1}^{k} \big(\tfrac{c_i}{n}\big)^{a_i-a_1}}{\displaystyle\inf_{E_{\epsilon+\epsilon_5,r}} \prod_{i=1}^{k} \big(\tfrac{c_i}{n}\big)^{a_i-a_1}} \\
&\le \Bigg[\frac{\displaystyle\sup_{D_{\epsilon+\epsilon_5,l}} \prod_{i=1}^{k} \tfrac{c_i}{n}}{\displaystyle\inf_{E_{\epsilon+\epsilon_5,r}} \prod_{i=1}^{k} \tfrac{c_i}{n}}\Bigg]^{a_1+n/2-1}
\times \Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}. \tag{S18}
\end{aligned}
\]
By applying Lemma S1.14, the upper bound of (S18) is given by
\[
(\text{S18}) \sim \exp\Big(\frac{n(\epsilon+\epsilon_5)^2\hat\lambda_1}{\hat\lambda_k}\Big)\exp\Big(-n(\epsilon_4-(\epsilon+\epsilon_5))^2\min_{i<k}\log\Big(\frac{\hat\lambda_i}{\hat\lambda_{i+1}}\Big)\Big)\times\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}
\sim \exp\Big(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}}\Big)\exp\Big(-n\epsilon_4^2\min_{i<k}\log\Big(\frac{\lambda_{0,i}}{\lambda_{0,i+1}}\Big)\Big)\times\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq},
\]
where \(\epsilon_4 \gg \epsilon+\epsilon_5\) and \(\epsilon \gg \epsilon_5\). Let \(\tau = \min_{i<k}\log\big(\lambda_{0,i}/\lambda_{0,i+1}\big)\). Then, we get the following inequality:
\[
\begin{aligned}
\Pi\bigg(\bigg\{\Gamma \in O(p) : \inf_{\substack{Q_1 \in O(k),\\ Q_2 \in O(p-k)}}\bigg\|\begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix} - \Gamma\bigg\|_F < \epsilon\bigg\} \,\bigg|\, X_n\bigg)
&\le \Pi\Big(\bigcup_{l=1}^{L} D_{\epsilon_4,l} \,\Big|\, X_n\Big) + m\,\Pi(E_{\epsilon+\epsilon_5,1} \mid X_n) \\
&\le \Pi\Big(\bigcup_{l=1}^{L} D_{\epsilon_4,l} \,\Big|\, X_n\Big) + M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Big(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}} - \tau n\epsilon_4^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\,\Pi(D_{\epsilon+\epsilon_5,1} \mid X_n) \\
&\le \bigg(1 + M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Big(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}} - \tau n\epsilon_4^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\bigg)\,\Pi\Big(\bigcup_{l=1}^{L} D_{\epsilon_4,l} \,\Big|\, X_n\Big).
\end{aligned}
\]
We obtain the following bound on the posterior probability:
\[
\begin{aligned}
\Pi\Big(\bigcup_{l=1}^{L} D_{\epsilon_4,l} \,\Big|\, X_n\Big)
&\ge \bigg(1 + M(O(k),\|\cdot\|_F,\epsilon_5)\cdot\exp\Big(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}} - \tau n\epsilon_4^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\bigg)^{-1}\bigg(1 - \Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\bigg) \\
&\ge \bigg(1 + \exp\Big(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}} - \tau n\epsilon_4^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\bigg)^{-1}\bigg(1 - \Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\bigg) \\
&\ge \bigg(1 + \exp\Big(-\frac{\tau}{2}n\epsilon_4^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\bigg)^{-1}\bigg(1 - \Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\bigg),
\end{aligned}
\]
where \(\frac{n\epsilon^2\lambda_{0,1}}{\lambda_{0,k}} \le \frac{\tau}{2}n\epsilon_4^2\). Therefore, we obtain
\[
\Pi\Big(\Big(\bigcup_{l=1}^{L} D_{\epsilon_4,l}\Big)^c \,\Big|\, X_n\Big)
\le 1 - \bigg(1 + \exp\Big(-\frac{\tau}{2}n\epsilon_4^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\bigg)^{-1}\bigg(1 - \Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\bigg)
= \frac{\exp\big(-\frac{\tau}{2}n\epsilon_4^2\big)\cdot\big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq} + \big(\frac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon_4^2 a_n}}{1 + \exp\big(-\frac{\tau}{2}n\epsilon_4^2\big)}
= O\Big(\exp\Big(-\frac{\tau}{2}n\epsilon_4^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\Big) + O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\Big),
\]
where \(\epsilon_4 \gg (n\tau)^{-1/2}\). In summary, if there exists \(\epsilon > 0\) satisfying
\[
\frac{np}{a_n} \ll \epsilon^2 \le \frac{\lambda_{0,k}}{n\lambda_{0,1}}\cdot\frac{\tau}{2}n\epsilon_4^2
\qquad\text{and}\qquad \epsilon_4 \gg \epsilon \vee (n\tau)^{-1/2},
\]
then the following holds:
\[
\Pi\Big(\Big(\bigcup_{l=1}^{L} D_{\epsilon_4,l}\Big)^c \,\Big|\, X_n\Big) = O\Big(\exp\Big(-\frac{\tau}{2}n\epsilon_4^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\Big) + O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon_4^2 a_n}\Big).
\]
In particular, the same posterior bound holds under the simplified condition \(\frac{np}{a_n} \ll \frac{\lambda_{0,k}}{n\lambda_{0,1}}\cdot\frac{\tau}{2}n\epsilon_4^2\) and \(\epsilon_4 \gg \sqrt{\frac{np}{a_n}} \vee (n\tau)^{-1/2}\). \(\square\)

Proof of Theorem S1.17

Proof. The proof follows the same procedure as the proof of Theorem S1.15.
By using the reparametrization mentioned at (5), we obtain the following inequality:
\[
\begin{aligned}
E\Big[\frac{\tilde\lambda_i - \lambda_{0,i}}{\lambda_{0,i}} \,\Big|\, X_n\Big]
&= \frac{\displaystyle\int \frac{\tilde\lambda_i - \lambda_{0,i}}{\lambda_{0,i}} \prod_i \tilde\lambda_i^{-a_i-\frac{n}{2}}\exp\Big(-\frac{\tilde c_i}{2\tilde\lambda_i}\Big)(d\tilde\Lambda)(d\tilde\Gamma)}{\displaystyle\int \prod_i \tilde\lambda_i^{-a_i-\frac{n}{2}}\exp\Big(-\frac{\tilde c_i}{2\tilde\lambda_i}\Big)(d\tilde\Lambda)(d\tilde\Gamma)}
= \frac{\displaystyle\int f(\tilde c)\prod_i \tilde c_i^{-a_i-\frac{n}{2}+1}(d\tilde\Gamma)}{\displaystyle\int \prod_i \tilde c_i^{-a_i-\frac{n}{2}+1}(d\tilde\Gamma)} \\
&\le \frac{\displaystyle\int_{D_\epsilon} f(\tilde c)\prod_i \tilde c_i^{-a_i-\frac{n}{2}+1}(d\tilde\Gamma)}{\displaystyle\int_{D_\epsilon} \prod_i \tilde c_i^{-a_i-\frac{n}{2}+1}(d\tilde\Gamma)}
+ \frac{\displaystyle\int_{D_\epsilon^c} f(\tilde c)\prod_i \tilde c_i^{-a_i-\frac{n}{2}+1}(d\tilde\Gamma)}{\displaystyle\int \prod_i \tilde c_i^{-a_i-\frac{n}{2}+1}(d\tilde\Gamma)} \\
&\le \sup_{D_\epsilon} f(\tilde c) + \sup_{D_\epsilon^c} f(\tilde c)\,\frac{\displaystyle\int_{D_\epsilon^c} \prod_i \tilde c_i^{-a_i-\frac{n}{2}+1}(d\tilde\Gamma)}{\displaystyle\int \prod_i \tilde c_i^{-a_i-\frac{n}{2}+1}(d\tilde\Gamma)}
= \sup_{D_\epsilon} f(\tilde c) + \sup_{D_\epsilon^c} f(\tilde c)\,\frac{\displaystyle\int_{(\bigcup_{l=1}^{L} D_{\epsilon,l})^c} \prod_i c_i^{-a_i-\frac{n}{2}+1}(d\Gamma)}{\displaystyle\int \prod_i c_i^{-a_i-\frac{n}{2}+1}(d\Gamma)},
\end{aligned}
\]
where
\[
f(\tilde c) = E_{\lambda_i}\Big[\frac{\lambda_i}{\lambda_{0,i}} - 1\Big]
= \frac{\Gamma(a_i+n/2-2)}{2\,\Gamma(a_i+n/2-1)}\frac{\tilde c_i}{\lambda_{0,i}} - 1,
\qquad \lambda_i \overset{\mathrm{iid}}{\sim} \mathrm{InvGam}\Big(a_i+\frac{n}{2}-1,\, \frac{\tilde c_i}{2}\Big).
\]
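The function \(f(\tilde c)\) is just a normalized inverse-gamma posterior mean: using \(\Gamma(x-1)/\Gamma(x) = 1/(x-1)\), if \(\lambda_i \sim \mathrm{InvGam}(a_i + n/2 - 1,\, \tilde c_i/2)\), then \(E[\lambda_i] = (\tilde c_i/2)/(a_i + n/2 - 2) = \frac{n}{n+2a_i-4}\cdot\frac{\tilde c_i}{n}\), the prefactor used throughout. A Monte Carlo check with hypothetical values of \(a_i\), \(n\), \(\tilde c_i\), drawing inverse-gamma variates as reciprocals of gamma variates:

```python
import numpy as np

rng = np.random.default_rng(2)

n, a_i, c_i = 200, 15.0, 900.0      # hypothetical illustrative values
alpha = a_i + n / 2 - 1             # InvGam shape
beta = c_i / 2                      # InvGam scale

# If X ~ Gamma(alpha, scale=1/beta) then 1/X ~ InvGam(alpha, beta),
# whose mean is beta / (alpha - 1) for alpha > 1.
draws = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=500_000)

closed_form = beta / (alpha - 1)                  # = c_i / (n + 2 a_i - 4)
prefactor_form = n / (n + 2 * a_i - 4) * c_i / n  # the form used in the proof
assert np.isclose(closed_form, prefactor_form)
assert abs(draws.mean() - closed_form) < 0.01 * closed_form
```

This identity is why bounding \(\sup_{D_\epsilon}(c_i/n)\) and \(\inf_{D_\epsilon}(c_i/n)\) immediately bounds the posterior expectation.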
Since \(\sup_{D_\epsilon^c} f(\tilde c) \sim \lambda_{0,1}/\lambda_{0,i}\), applying Lemma 3.2, we obtain
\[
\begin{aligned}
E\Big[\frac{\tilde\lambda_i - \lambda_{0,i}}{\lambda_{0,i}} \,\Big|\, X_n\Big]
&\le \frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\cdot\sup_{D_\epsilon}\Big(\frac{\tilde c_i}{n}\Big) - 1
+ O\Big(\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\tau}{2}n\epsilon^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\Big) + O\Big(\frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big) \\
&= \frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\cdot\sup_{D_\epsilon}\Big(\frac{c_i}{n}\Big) - 1
+ O\Big(\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\tau}{2}n\epsilon^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq}\Big) + O\Big(\frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big).
\end{aligned}
\]
The last equality follows from \(c_i = \tilde c_i\) on \(D_\epsilon\). Since \(c_i\) is given by
\[
\frac{c_i}{n} = \sum_{j=1}^{k}\Gamma_{ji}^2\Big\{(1+\delta_j)\lambda_{0,j} + \Big(\bar d\,\frac{p}{n} + \gamma_j\sqrt{\frac{p}{n}}\Big) + \frac{h}{n}\Big\}
+ \sum_{j=k+1}^{n}\Gamma_{ji}^2\Big(\bar d\,\frac{p}{n} + \gamma_j\sqrt{\frac{p}{n}} + \frac{h}{n}\Big)
+ \sum_{j=n+1}^{p}\Gamma_{ji}^2\,\frac{h}{n},
\]
we obtain the upper bound of \(\sup_{D_\epsilon}(c_i/n)\) as follows:
\[
\sup_{D_\epsilon}\Big(\frac{c_i}{n}\Big)
\le (1-\epsilon^2)\Big[(1+\delta_i)\lambda_{0,i} + \Big(\bar d\,\frac{p}{n} + \gamma_i\sqrt{\frac{p}{n}}\Big) + \frac{h}{n}\Big]
+ \epsilon^2\Big[(1+\delta_1)\lambda_{0,1} + \Big(\bar d\,\frac{p}{n} + \gamma_1\sqrt{\frac{p}{n}}\Big) + \frac{h}{n}\Big]
\le \Big[(1+\delta_i)\lambda_{0,i} + \Big(\bar d\,\frac{p}{n} + \gamma_i\sqrt{\frac{p}{n}}\Big) + \frac{h}{n}\Big] + 2\lambda_{0,1}\epsilon^2.
\]
Therefore, the posterior expectation is bounded by
\[
E\Big[\frac{\tilde\lambda_i - \lambda_{0,i}}{\lambda_{0,i}} \,\Big|\, X_n\Big]
\le \frac{n}{n+2a_i-4}\frac{1}{\lambda_{0,i}}\Big[(1+\delta_i)\lambda_{0,i} + \bar d\,\frac{p}{n} + \gamma_i\sqrt{\frac{p}{n}} + \frac{h}{n}\Big] - 1
+ O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}} + \frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\tau}{2}n\epsilon^2\Big)\cdot\Big(\frac{2\lambda_{0,1}}{\lambda_{0,k}}\Big)^{kq} + \frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big).
\]
Next, we derive the lower bound of the posterior expectation: E/bracketleftBigΛΒΌiβΒΌ0,i ΒΌ0,i/vextendsingle/vextendsingleXn/bracketrightBig =/integraldisplay f(Λc)/parenleftBig/productdisplay Λci/parenrightBigβaiβn 2+1 (dΛΞ) /integraldisplay/parenleftBig/productdisplay Λci/parenrightBigβaiβn 2+1 (dΛΞ) g/integraldisplay DΟ΅f(Λc)/parenleftBig/productdisplay ci/parenrightBigβaiβn 2+1 (dΛΞ) /integraldisplay DΟ΅/parenleftBig/productdisplay Λci/parenrightBigβaiβn 2+1 (dΛΞ)Β·/integraldisplay DΟ΅/parenleftBig/productdisplay Λci/parenrightBigβaiβn 2+1 (dΛΞ) /integraldisplay/parenleftBig/productdisplay Λci/parenrightBigβaiβn 2+1 (dΛΞ) =/integraldisplay DΟ΅f(Λc)/parenleftBig/productdisplay ci/parenrightBigβaiβn 2+1 (dΛΞ) /integraldisplay DΟ΅/parenleftBig/productdisplay Λci/parenrightBigβaiβn 2+1 (dΛΞ)Β·/parenleftBigg 1β/integraldisplay Dc Ο΅/parenleftBig/productdisplay Λci/parenrightBigβaiβn 2+1 (dΛΞ) /integraldisplay/parenleftBig/productdisplay Λci/parenrightBigβaiβn 2+1 (dΛΞ)/parenrightBigg = inf DΟ΅f(Λc)Γ/parenleftBigg
https://arxiv.org/abs/2505.20668v1
$$1-\frac{\int_{(\bigcup_{l=1}^{L}D_{\epsilon,l})^c}\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}(d\Gamma)}{\int\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}(d\Gamma)}\Bigg)\ge\inf_{D_\epsilon}f(c)\times\Big(1+O\big(\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}\big)+O\big(\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\big)\Big).$$

The last inequality holds by Lemma 3.2 and $c_i=\hat c_i$ on $D_\epsilon$. We obtain the lower bound of $\inf_{D_\epsilon}(c_i/n)$ as follows:

$$\inf_{D_\epsilon}\Big(\frac{c_i}{n}\Big)\ge(1-\epsilon^2)\Big[(1+\delta_i)\lambda_{0,i}+\Big(\frac{\bar dp}{n}+\sigma_i\sqrt{\frac{p}{n}}\Big)+\frac{h}{n}\Big]+\epsilon^2\,\frac{h}{n}\ge\Big[(1+\delta_i)\lambda_{0,i}+\Big(\frac{\bar dp}{n}+\sigma_i\sqrt{\frac{p}{n}}\Big)+\frac{h}{n}\Big]-2\epsilon^2\lambda_{0,i}.$$

Therefore, the lower bound of the posterior expectation is given by

$$\mathbb{E}\Big[\tfrac{\hat\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]\ge\Bigg[\frac{n}{n+2a_i-4}\,\frac{1}{\lambda_{0,i}}\Big(\Big[(1+\delta_i)\lambda_{0,i}+\Big(\frac{\bar dp}{n}+\sigma_i\sqrt{\frac{p}{n}}\Big)+\frac{h}{n}\Big]-2\epsilon^2\lambda_{0,i}\Big)-1\Bigg]\times\Big(1+O\big(\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}\big)+O\big(\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\big)\Big)$$

$$=\frac{n}{n+2a_i-4}\,\frac{1}{\lambda_{0,i}}\Big[(1+\delta_i)\lambda_{0,i}+\frac{\bar dp}{n}+\sigma_i\sqrt{\frac{p}{n}}+\frac{h}{n}\Big]-1+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}+\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\Big).$$
Thus, we conclude that the posterior expectation satisfies:

$$\mathbb{E}\Big[\tfrac{\hat\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=\frac{n}{n+2a_i-4}\,\frac{1}{\lambda_{0,i}}\Big[(1+\delta_i)\lambda_{0,i}+\frac{\bar dp}{n}+\sigma_i\sqrt{\frac{p}{n}}+\frac{h}{n}\Big]-1+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\Big).\qquad\blacksquare$$

Proof of Theorem 3.3

Proof. By Theorem S1.17, we obtain the following posterior expectation:

$$\mathbb{E}\Big[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=\frac{n}{n+2a_i-4}\,\frac{1}{\lambda_{0,i}}\Big[(1+\delta_i)\lambda_{0,i}+\frac{\bar dp}{n}+\sigma_i\sqrt{\frac{p}{n}}+\frac{h}{n}\Big]-1+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\Big)$$

$$=\Big(\frac{(1+\delta_i)n}{n+2a_i-4}-1\Big)+\frac{n}{n+2a_i-4}\,\frac{1}{\lambda_{0,i}}\Big[\frac{\bar dp}{n}+\sigma_i\sqrt{\frac{p}{n}}+\frac{h}{n}\Big]+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\Big)$$

$$=\frac{\delta_i n-2a_i+4}{n+2a_i-4}+O\Big(\frac{p}{n\lambda_{0,i}}\Big)+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\Big).$$

If $a_i\sim n^{1/2}$, then the following holds:

$$\frac{\delta_i n-2a_i+4}{n+2a_i-4}\sim\delta_i=O(n^{-1/2+\eta}).$$

Therefore, the posterior expectation is given by

$$\mathbb{E}\Big[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=O(n^{-1/2+\eta})+O\Big(\frac{p}{n\lambda_{0,i}}\Big)+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\Big).$$
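The algebraic step $\frac{(1+\delta_i)n}{n+2a_i-4}-1=\frac{\delta_i n-2a_i+4}{n+2a_i-4}$ used above is elementary and can be verified exactly with rational arithmetic (our own sanity sketch, not part of the proof):

```python
from fractions import Fraction

def lhs(n, a, d):
    return (1 + d) * n / (n + 2 * a - 4) - 1

def rhs(n, a, d):
    return (d * n - 2 * a + 4) / (n + 2 * a - 4)

# exact check over a grid of rational values
for n in (10, 100, 1000):
    for a in (Fraction(3), Fraction(7, 2)):
        for d in (Fraction(1, 100), Fraction(-1, 50)):
            assert lhs(Fraction(n), a, d) == rhs(Fraction(n), a, d)
```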
By the assumptions of the theorem, in particular $\lambda_{0,1}/\lambda_{0,k}=O(1)$ and $\epsilon=n^{-1/4}$, we apply Lemma S1.18 to obtain

$$\mathbb{E}\Big[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=\mathbb{E}\Big[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]+\mathbb{E}\Big[\tfrac{\lambda_{(i)}-\lambda_i}{\lambda_{0,i}}\,\Big|\,X^n\Big]$$

$$=O(n^{-1/2+\eta})+O\Big(\frac{p}{n\lambda_{0,i}}\Big)+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\Big)+\frac{\lambda_{0,1}}{\lambda_{0,i}}\cdot O\Big(\frac{1}{n}+\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}+\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\Big)$$

$$=O(n^{-1/2+\eta})+O\Big(\frac{p}{n\lambda_{0,i}}\Big)+O\Big(\Big(\epsilon^2+\frac{1}{n}\Big)\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\Big),$$

where $\lambda_{(i)}$ is the $i$-th largest eigenvalue of $\Sigma$. Since $\lambda_{0,1}/\lambda_{0,k}$ is bounded by a positive constant, $a_1,\dots,a_k\lesssim n^{1/2}$, and $\epsilon\asymp n^{-1/4}$, it follows that

$$\epsilon^2+\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}+\big(\tfrac{n\lambda_{0,k}+p}{p}\big)^{-\epsilon^2 a_n}\lesssim n^{-1/2+\eta}+\frac{p}{n\lambda_{0,i}}.$$

Therefore, we conclude

$$\mathbb{E}\Big[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=O\Big(\frac{p}{n\lambda_{0,i}}\Big)+O(n^{-1/2+\eta}).$$
$\blacksquare$

Proof of Theorem S1.15

Proof. Using Lemma 3.1, we obtain the following inequality:

$$\mathbb{E}\Big[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=\frac{\int\frac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\prod\lambda_i^{-a_i-\frac{n}{2}}\exp\big(-\frac{c_i}{2\lambda_i}\big)(d\Lambda)(d\Gamma)}{\int\prod\lambda_i^{-a_i-\frac{n}{2}}\exp\big(-\frac{c_i}{2\lambda_i}\big)(d\Lambda)(d\Gamma)}=\frac{\int f(c)\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}(d\Gamma)}{\int\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}(d\Gamma)}$$

$$\le\frac{\int_{D_\epsilon}f(c)\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}(d\Gamma)}{\int_{D_\epsilon}\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}(d\Gamma)}+\frac{\int_{D_\epsilon^c}f(c)\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}(d\Gamma)}{\int\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}(d\Gamma)}\le\sup_{D_\epsilon}f(c)+\sup_{D_\epsilon^c}f(c)\cdot\frac{\int_{D_\epsilon^c}\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}(d\Gamma)}{\int\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}(d\Gamma)},$$

where

$$f(c)=\mathbb{E}_{\lambda_i}\Big[\frac{\lambda_i}{\lambda_{0,i}}-1\Big]=\frac{\Gamma(a_i+n/2-2)}{2\,\Gamma(a_i+n/2-1)}\,\frac{c_i}{\lambda_{0,i}}-1,\qquad\lambda_i\overset{\text{iid}}{\sim}\mathrm{InvGam}\Big(a_i+\frac{n}{2}-1,\ \frac{c_i}{2}\Big).$$

By the same procedure as in the proof of Theorem S1.17, we conclude that the posterior expectation satisfies

$$\mathbb{E}\Big[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=\frac{n}{n+2a_i-4}\,\frac{1}{\lambda_{0,i}}\Big[(1+\delta_i)\lambda_{0,i}+\frac{\bar dp}{n}+\sigma_i\sqrt{\frac{p}{n}}+\frac{h}{n}\Big]-1+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\kappa}{2\epsilon}\Big)+\frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big).\qquad\blacksquare$$

Proof of Theorem 3.4

Proof. Let $2a_i-4=\dfrac{nt}{\hat\lambda_i-t}$.
Then, by Theorem S1.15, the posterior expectation can be written as:

$$\mathbb{E}\Big[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=\frac{1}{\lambda_{0,i}}\Big[\frac{n\hat\lambda_i}{n+2a_i-4}-\lambda_{0,i}\Big]+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\kappa}{2\epsilon}\Big)+\frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big)$$

$$=\frac{1}{\lambda_{0,i}}\big[\hat\lambda_i-t-\lambda_{0,i}\big]+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\kappa}{2\epsilon}\Big)+\frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big)$$

$$=\frac{1}{\lambda_{0,i}}\Big[\frac{h}{n}+\delta_i\lambda_{0,i}+\frac{\bar dp}{n}+\sigma_i\sqrt{\frac{p}{n}}-t\Big]+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\kappa}{2\epsilon}\Big)+\frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big).$$

If $t\in[\hat\lambda_{k+1},\hat\lambda_n]$, then for some constant $\sigma_0\in[-C,C]$ we have $t=\frac{\bar dp}{n}+\sigma_0\sqrt{\frac{p}{n}}$. Therefore, the posterior expectation simplifies to:

$$\mathbb{E}\Big[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=\frac{1}{\lambda_{0,i}}(\sigma_i-\sigma_0)\sqrt{\frac{p}{n}}+\delta_i+\frac{1}{\lambda_{0,i}}\frac{h}{n}+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\kappa}{2\epsilon}\Big)+\frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big)=O\Big(\lambda_{0,i}^{-1}\sqrt{\frac{p}{n}}\Big)+O(\delta_i)+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\kappa}{2\epsilon}\Big)+\frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big).$$

By the assumptions of the theorem, in particular $\lambda_{0,1}/\lambda_{0,k}=O(1)$ and $\epsilon=n^{-1/4}$, we apply Lemma S1.18 to obtain

$$\mathbb{E}\Big[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=\mathbb{E}\Big[\tfrac{\lambda_i-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]+\mathbb{E}\Big[\tfrac{\lambda_{(i)}-\lambda_i}{\lambda_{0,i}}\,\Big|\,X^n\Big]=O\Big(\lambda_{0,i}^{-1}\sqrt{\frac{p}{n}}\Big)+O(\delta_i)+O\Big(\epsilon^2\frac{\lambda_{0,1}}{\lambda_{0,i}}+\frac{\lambda_{0,1}}{\lambda_{0,i}}\exp\Big(-\frac{\kappa}{2\epsilon}\Big)+\frac{\lambda_{0,1}}{\lambda_{0,i}}\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big)+\frac{\lambda_{0,1}}{\lambda_{0,i}}\cdot O\Big(\frac{1}{n}+\exp\Big(-\frac{\kappa}{2\epsilon}\Big)+\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big)$$

$$=O\Big(\frac{1}{\lambda_{0,i}}\sqrt{\frac{p}{n}}\Big)+O(n^{-\frac12+\eta})+O\Big(\frac{1}{n}+\exp\Big(-\frac{\kappa}{2\epsilon}\Big)+\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big),$$

where $\lambda_{(i)}$ is the $i$-th largest eigenvalue of $\Sigma$. Since $2a_i-4=\frac{nt}{\hat\lambda_i-t}$ for $i=1,\dots,k$, $\epsilon\asymp n^{-1/4}$, and $\lambda_{0,k}\lesssim p/n^{1/4}$, it follows that

$$\epsilon^2+\exp\Big(-\frac{\kappa}{2\epsilon}\Big)+\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\lesssim n^{-1/2+\eta}+\frac{1}{\lambda_{0,i}}\sqrt{\frac{p}{n}}.$$

Therefore, we conclude

$$\mathbb{E}\Big[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=O\Big(\frac{1}{\lambda_{0,i}}\sqrt{\frac{p}{n}}\Big)+O(n^{-1/2+\eta}),$$

where $\lambda_{0,k}\lesssim p/n^{1/4}$. On the other hand, if $\lambda_{0,k}\gtrsim p/n^{1/4}$, then it follows that

$$\max_{i\le k}a_i-\min_{i\le k}a_i\sim\frac{p}{\lambda_{0,k}}\lesssim n^{1/2}.$$
In this case, Theorem 3.3 applies and yields

$$\mathbb{E}\Big[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=O\Big(\frac{p}{n\lambda_{0,i}}\Big)+O(n^{-1/2+\eta}).$$

Since $\lambda_{0,i}\gtrsim p/n^{1/4}$ in this regime, it follows that

$$O\Big(\frac{p}{n\lambda_{0,i}}\Big)+O(n^{-1/2+\eta})=O\Big(\frac{1}{\lambda_{0,i}}\sqrt{\frac{p}{n}}\Big)+O(n^{-1/2+\eta})=O(n^{-1/2+\eta})$$

holds. Therefore, in both regimes, we obtain the unified posterior bound:

$$\mathbb{E}\Big[\tfrac{\lambda_{(i)}-\lambda_{0,i}}{\lambda_{0,i}}\,\Big|\,X^n\Big]=O\Big(\frac{1}{\lambda_{0,i}}\sqrt{\frac{p}{n}}\Big)+O(n^{-1/2+\eta}).\qquad\blacksquare$$

Proof of Corollary 3.5

Proof. Consider the spectral decomposition of $nS=QWQ^{T}$. Then, on $D_\epsilon$, we have the following inequality:

$$(\gamma_j^{T}\hat\gamma_j)^2=\big([Q\Gamma]_{\cdot j}^{T}Q_{\cdot j}\big)^2=\big[(Q\Gamma e_j^{T})^{T}(Qe_j^{T})\big]^2=\Gamma_{jj}^2\ge1-\epsilon^2,$$

for $j=1,\dots,k$. By Theorem 3.2 of Wang and Fan (2017), the following limit holds:

$$\big|\gamma_{0,j}^{T}\hat\gamma_j\big|-(1+\bar dd_j)^{-\frac12}=O_p(\tau_j),\qquad\text{where }\tau_j=\frac{1}{\lambda_{0,j}}\sqrt{\frac{p}{n}}+\frac{p}{n^{3/2}\lambda_{0,j}}+\frac{1}{n}.$$

Applying the triangular inequality of angles from Castano et al.
(2016), where $\theta(u,v)=\arccos\frac{|u^{T}v|}{\|u\|\,\|v\|}$, we obtain:

$$\big|\gamma_{0,j}^{T}\gamma_j\big|\le\cos\Big[\arccos\big(\big|\hat\gamma_j^{T}\gamma_j\big|\big)+\arccos\big(\big|\gamma_{0,j}^{T}\hat\gamma_j\big|\big)\Big]\le\cos\Bigg[\arccos\big(\sqrt{1-\epsilon^2}\big)+\arccos\Bigg(\frac{1}{\sqrt{1+\bar dd_j}}+O_p(\tau_j)\Bigg)\Bigg]$$

$$=\cos\Bigg[\arccos\Bigg(\sqrt{1-\epsilon^2}\sqrt{\frac{1}{1+\bar dd_j}+O_p(\tau_j)}-\epsilon\sqrt{1-\Big(\frac{1}{1+\bar dd_j}+O_p(\tau_j)\Big)^2}\Bigg)\Bigg]=\cos\Bigg[\arccos\Bigg(\sqrt{\frac{1-\epsilon^2}{1+\bar dd_j}}-\sqrt{\frac{\bar dd_j\,\epsilon^2}{1+\bar dd_j}}+O_p(\tau_j)\Bigg)\Bigg]\le(1+\bar dd_j)^{-1/2}+O_p(\tau_j).$$

Similarly, applying the lower bound using $\theta(\gamma_{0,j},\gamma_j)\ge\theta(\gamma_{0,j},\hat\gamma_j)-\theta(\hat\gamma_j,\gamma_j)$:

$$\big|\gamma_{0,j}^{T}\gamma_j\big|\ge\cos\Big[\arccos\big(\big|\gamma_{0,j}^{T}\hat\gamma_j\big|\big)-\arccos\big(\big|\hat\gamma_j^{T}\gamma_j\big|\big)\Big]\ge\cos\Bigg[\arccos\Bigg(\frac{1}{\sqrt{1+\bar dd_j}}\Bigg)-\arccos\big(\sqrt{1-\epsilon^2}\big)\Bigg]$$

$$=\cos\Bigg[\arccos\Bigg(\sqrt{1-\epsilon^2}\sqrt{\frac{1}{1+\bar dd_j}+O_p(\tau_j)}+\epsilon\sqrt{1-\Big(\frac{1}{1+\bar dd_j}+O_p(\tau_j)\Big)^2}\Bigg)\Bigg]=\cos\Bigg[\arccos\Bigg(\sqrt{\frac{1-\epsilon^2}{1+\bar dd_j}}+\sqrt{\frac{\bar dd_j\,\epsilon^2}{1+\bar dd_j}}+O_p(\tau_j)\Bigg)\Bigg]=\sqrt{\frac{1-\epsilon^2}{1+\bar dd_j}}+\epsilon\sqrt{\frac{\bar dd_j}{1+\bar dd_j}}+O_p(\tau_j).$$
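The quantity $\theta(u,v)=\arccos\frac{|u^{T}v|}{\|u\|\|v\|}$ is the angle between the lines spanned by $u$ and $v$, and the triangle inequality invoked from Castano et al. (2016) can be spot-checked on random vectors (an illustrative sketch of ours, not part of the proof):

```python
import numpy as np

def angle(u, v):
    # angle between the lines spanned by u and v (the Castano et al. metric)
    c = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, 0.0, 1.0))

rng = np.random.default_rng(1)
for _ in range(1000):
    u, v, w = rng.standard_normal((3, 5))
    # triangle inequality: theta(u, w) <= theta(u, v) + theta(v, w)
    assert angle(u, w) <= angle(u, v) + angle(v, w) + 1e-9
```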
Therefore, we obtain the expectation:

$$\mathbb{E}\big[1-(\gamma_{0,j}^{T}\gamma_j)^2\,\big|\,X^n\big]=\sup_{D_\epsilon}\big(1-\big|\gamma_{0,j}^{T}\gamma_j\big|^2\big)=1-\Big(\frac{1-\epsilon^2}{1+\bar dd_j}+O_p(\tau_j)\Big)-\Big(\frac{\bar dd_j\,\epsilon^2}{1+\bar dd_j}\Big)-2\sqrt{\Big(\frac{1-\epsilon^2}{1+\bar dd_j}\Big)\Big(\frac{\bar dd_j\,\epsilon^2}{1+\bar dd_j}\Big)}+O_p(\tau_j)$$

$$=\frac{\bar dd_j}{1+\bar dd_j}+O\Big(\frac{p}{n\lambda_{0,j}}\,\epsilon^2\Big)+O\Big(\epsilon\sqrt{\frac{p}{n\lambda_{0,j}}}\Big)+O_p(\tau_j)=\frac{p\bar d}{n\lambda_{0,j}+p\bar d}+O\Big(\frac{p}{n\lambda_{0,j}}\,\epsilon^2\Big)+O\Big(\epsilon\sqrt{\frac{p}{n\lambda_{0,j}}}\Big)+O_p(\tau_j).$$

If $\lambda_{0,1}\lesssim p/n^{1/4}$, then by the proof of Lemma S1.16, we obtain

$$\Big|\mathbb{E}\big[1-(\gamma_{0,j}^{T}\gamma_j)^2|X^n\big]-\mathbb{E}\big[1-(\gamma_{0,j}^{T}\gamma_{(j)})^2|X^n\big]\Big|=\Big|\mathbb{E}\big[\big((\gamma_{0,j}^{T}\gamma_{(j)})^2-(\gamma_{0,j}^{T}\gamma_j)^2\big)\cdot 1(\lambda_{(i)}\neq\lambda_i)\,\big|\,X^n\big]\Big|\le2\,\mathbb{E}\big[1(\lambda_{(i)}\neq\lambda_i)\,\big|\,X^n\big]$$

$$=2\,\mathbb{E}\big[\mathbb{E}[1(\lambda_{(i)}\neq\lambda_i)|\Gamma]\cdot\big(1(\Gamma\in D_\epsilon)+1(\Gamma\notin D_\epsilon)\big)\,\big|\,X^n\big]\le2\,\mathbb{E}\big[\mathbb{E}[1(\lambda_{(i)}\neq\lambda_i)|\Gamma]\cdot1(\Gamma\in D_\epsilon)\,\big|\,X^n\big]+2\,\mathbb{E}\big[\mathbb{E}[1|\Gamma]\cdot1(\Gamma\notin D_\epsilon)\,\big|\,X^n\big]$$

$$\le2\Big(1-\prod_{i=1}^{p}(1-p_i)\Big)+\Pi(\Gamma\notin D_\epsilon|X^n)\le2\sum_{i=1}^{p}p_i+O\Big(\exp\Big(-\frac{\kappa}{2\epsilon}\Big)\Big)+O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big).$$

Similarly, if $\lambda_{0,1}\gtrsim p/n^{1/4}$, then by the proof of Lemma S1.18, we obtain

$$\Big|\mathbb{E}\big[1-(\gamma_{0,j}^{T}\gamma_j)^2|X^n\big]-\mathbb{E}\big[1-(\gamma_{0,j}^{T}\gamma_{(j)})^2|X^n\big]\Big|\le2\sum_{i=1}^{p}p_i+O\Big(\exp\big(-\tfrac{\kappa}{2}n\epsilon^2\big)\big(\tfrac{2\lambda_{0,1}}{\lambda_{0,k}}\big)^{kq}\Big)+O\Big(\Big(\frac{n\lambda_{0,k}+p}{p}\Big)^{-\epsilon^2 a_n}\Big).$$
By the assumptions of the theorem, in particular $\lambda_{0,1}/\lambda_{0,k}=O(1)$ and $\epsilon\asymp n^{-1/4}$, we have $\sum_{i=1}^{p}p_i=O(1/n)$. Therefore,

$$\Big|\mathbb{E}\big[1-(\gamma_{0,j}^{T}\gamma_j)^2|X^n\big]-\mathbb{E}\big[1-(\gamma_{0,j}^{T}\gamma_{(j)})^2|X^n\big]\Big|\lesssim n^{-1/2+\eta}+\frac{1}{\lambda_{0,i}}\sqrt{\frac{p}{n}}.$$

Hence, we obtain the final posterior bound

$$\mathbb{E}\big[1-\big|\gamma_{0,j}^{T}\gamma_{(j)}\big|^2\,\big|\,X^n\big]=O\Big(\frac{p}{n\lambda_{0,j}}\Big)+O_p(\tau_j),\qquad j=1,\dots,k,\qquad\text{where }\tau_j=\frac{1}{\lambda_{0,j}}\sqrt{\frac{p}{n}}+\frac{p}{n^{3/2}\lambda_{0,j}}+\frac{1}{n}.\qquad\blacksquare$$

We consider the spectral decomposition $\Sigma=\Gamma\Lambda\Gamma^{T}$, where

$$\Lambda=\begin{pmatrix}\Lambda_1&0\\0&\Lambda_2\end{pmatrix},$$

with $\Lambda_1$ being a $k\times k$ matrix and $\Lambda_2$ a $(p-k)\times(p-k)$ matrix. Furthermore, we write $\Gamma=(\Gamma_1,\Gamma_2)$, where $\Gamma_1$ is a $p\times k$ matrix and $\Gamma_2$ is a $p\times(p-k)$ matrix.

Lemma S1.19 (Spectral difference of eigenvectors). For the spiked eigenvector matrix $\Gamma_1$, the following inequality holds on the set $D_\epsilon$:

$$\|\Gamma_{0,1}-\Gamma_1\|_2^2=O(d_k)+O_p(\tau_k),\qquad\text{where }\tau_k=\frac{1}{\lambda_{0,k}}\sqrt{\frac{p}{n}}+\frac{p}{n^{3/2}\lambda_{0,k}}+\frac{1}{n}.$$

Proof.

$$\|\Gamma_{0,1}-\Gamma_1\|_2^2\le\|\Gamma_{0,1}-\Gamma_1\|_F^2\le\sum_{i=1}^{k}\|\gamma_{0,i}-\gamma_i\|_2^2=2\sum_{i=1}^{k}\big(1-\big|\gamma_{0,i}^{T}\gamma_i\big|\big)\le2\sum_{i=1}^{k}\Bigg(1-\Big(\sqrt{\frac{1-\epsilon^2}{1+d_i}}+\epsilon\sqrt{\frac{d_i}{1+d_i}}+O_p(\tau_i)\Big)\Bigg),$$

where $\tau_i=\frac{1}{\lambda_{0,i}}\sqrt{\frac{p}{n}}+\frac{p}{n^{3/2}\lambda_{0,i}}+\frac{1}{n}$. The last inequality follows from the proof of Corollary 3.5. Moreover,

$$2\sum_{i=1}^{k}\Bigg(1-\Big(\sqrt{\frac{1-\epsilon^2}{1+d_i}}+\epsilon\sqrt{\frac{d_i}{1+d_i}}\Big)\Bigg)\le2\sum_{i=1}^{k}\Bigg(1-\sqrt{\frac{1-\epsilon^2}{1+d_i}}\Bigg)\le2k\Bigg(1-\sqrt{\frac{1-\epsilon^2}{1+d_k}}\Bigg)\le2k\big(\sqrt{1+d_k}-\sqrt{1-\epsilon^2}\big)=2k\,\frac{d_k+\epsilon^2}{\sqrt{1+d_k}+\sqrt{1-\epsilon^2}}=O(d_k).$$

Since $\sum_{j=1}^{k}\tau_j\le k\tau_k$, it follows that $\|\Gamma_{0,1}-\Gamma_1\|_2^2=O(d_k)+O_p(\tau_k)$. $\blacksquare$

Proof of Corollary 3.6

Proof.
$$\|\Sigma-\Sigma_0\|_F^2\le2\sum_{i=1}^{2}\big\|\Gamma_i\Lambda_i\Gamma_i^{T}-\Gamma_{0,i}\Lambda_{0,i}\Gamma_{0,i}^{T}\big\|_F^2\le2\sum_{i=1}^{2}\Big[\big\|(\Gamma_i-\Gamma_{0,i})\Lambda_{0,i}\Gamma_i^{T}\big\|_F+\big\|\Gamma_i(\Lambda_i-\Lambda_{0,i})\Gamma_i^{T}\big\|_F+\big\|\Gamma_{0,i}\Lambda_{0,i}(\Gamma_i-\Gamma_{0,i})^{T}\big\|_F\Big]^2$$

$$\le6\sum_{i=1}^{2}\Big[\big\|(\Gamma_i-\Gamma_{0,i})\Lambda_{0,i}\Gamma_i^{T}\big\|_F^2+\big\|\Gamma_i(\Lambda_i-\Lambda_{0,i})\Gamma_i^{T}\big\|_F^2+\big\|\Gamma_{0,i}\Lambda_{0,i}(\Gamma_i-\Gamma_{0,i})^{T}\big\|_F^2\Big]\qquad\text{(S19)}$$

$$\le6\sum_{i=1}^{2}\Big[2\|\Gamma_{0,i}-\Gamma_i\|^2\|\Lambda_{0,i}\|_F^2+\|\Lambda_{0,i}-\Lambda_i\|_F^2\Big]$$

$$\le6\Big[8\Big(\sum_{i=1}^{k}\lambda_{0,i}^2\Big)\cdot\big(O(d_k)+O_p(\tau_k)\big)+\sum_{i=1}^{k}(\lambda_{0,i}-\lambda_i)^2\Big]+6\Big[8\sum_{i>k}\lambda_{0,i}^2+\sum_{i>k}(\lambda_{0,i}-\lambda_i)^2\Big]\qquad\text{(S20)}$$

$$\le6\Big[8\sum_{i=1}^{k}\lambda_{0,i}^2\cdot O(d_k)+\sum_{i=1}^{k}(\lambda_{0,i}-\lambda_i)^2\Big]+6\Big[8\sum_{i>k}\lambda_{0,i}^2+\sum_{i>k}(\lambda_{0,i}-\lambda_i)^2\Big]+O_p\Big(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\tau_k\Big).$$

The inequality (S19) follows from the Cauchy-Schwarz inequality, and (S20) follows from Lemma S1.19.
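The first inequality rests on the telescoping identity $\Gamma_1\Lambda_1\Gamma_1^{T}-\Gamma_0\Lambda_0\Gamma_0^{T}=(\Gamma_1-\Gamma_0)\Lambda_0\Gamma_1^{T}+\Gamma_1(\Lambda_1-\Lambda_0)\Gamma_1^{T}+\Gamma_0\Lambda_0(\Gamma_1-\Gamma_0)^{T}$, together with $(a+b+c)^2\le3(a^2+b^2+c^2)$. Both can be checked numerically; the sketch below is our own sanity check with random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
p, k = 6, 3
G1, G0 = rng.standard_normal((2, p, k))   # stand-ins for the eigenvector blocks
L1 = np.diag(rng.uniform(1.0, 2.0, k))    # stand-ins for the eigenvalue blocks
L0 = np.diag(rng.uniform(1.0, 2.0, k))

lhs = G1 @ L1 @ G1.T - G0 @ L0 @ G0.T
rhs = ((G1 - G0) @ L0 @ G1.T
       + G1 @ (L1 - L0) @ G1.T
       + G0 @ L0 @ (G1 - G0).T)
assert np.allclose(lhs, rhs)              # telescoping identity

a, b, c = rng.uniform(0.0, 5.0, 3)
assert (a + b + c) ** 2 <= 3 * (a**2 + b**2 + c**2) + 1e-12   # the (S19) step
```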
The posterior expectation is given by:

$$\mathbb{E}\big[(\lambda_i-\lambda_{0,i})^2\,\big|\,X^n\big]=\frac{\int(\lambda_i-\lambda_{0,i})^2\prod\lambda_i^{-a_i-\frac{n}{2}}\exp\big(-\frac{c_i}{2\lambda_i}\big)\,d\Lambda\,d\Gamma}{\int\prod\lambda_i^{-a_i-\frac{n}{2}}\exp\big(-\frac{c_i}{2\lambda_i}\big)\,d\Lambda\,d\Gamma}=\frac{\int f(c)\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}\,d\Gamma}{\int\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}\,d\Gamma}\le\sup_{D_\epsilon}f(c)+\frac{\int_{D_\epsilon^c}f(c)\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}\,d\Gamma}{\int\big(\prod c_i\big)^{-a_i-\frac{n}{2}+1}\,d\Gamma},$$

where

$$f(c)=\mathbb{E}_{\lambda_i}\big[(\lambda_i-\lambda_{0,i})^2\big]=\Big(\lambda_{0,i}-\frac{c_i}{n+2a_i-4}\Big)^2+\frac{2c_i^2}{(n+2a_i-4)^2(n+2a_i-6)},\qquad\lambda_i\overset{\text{iid}}{\sim}\mathrm{InvGam}\Big(a_i+\frac{n}{2}-1,\ \frac{c_i}{2}\Big).$$

Therefore, we obtain for $i=1,\dots,k$:

$$\sup_{D_\epsilon}f(c)=\Big(O\Big(\sqrt{\frac{p}{n}}+\lambda_{0,i}\,n^{-\frac12+\eta}\Big)\Big)^2+O\Big(\frac{\lambda_{0,i}^2}{n}\Big)=O\Big(\frac{p}{n}+\frac{\lambda_{0,i}^2}{n^{1-2\eta}}\Big),$$

for all small $\eta>0$. The posterior expectation is thus given by:

$$\mathbb{E}\big[(\lambda_i-\lambda_{0,i})^2\,\big|\,X^n\big]=O\Big(\frac{p}{n}+\frac{\lambda_{0,i}^2}{n^{1-2\eta}}\Big),\qquad i=1,\dots,k.$$

Similarly, we obtain the upper bound for $f(c)$ over $D_\epsilon$. Using the assumption that $a_i\gtrsim p$ for $i>k$, we obtain:

$$\sup_{D_\epsilon}f(c)=\Big(\lambda_{0,i}-\frac{c_i}{n+2a_i-4}\Big)^2+\frac{2}{n+2a_i-6}\cdot\Big(\frac{c_i}{n+2a_i-4}\Big)^2\le\lambda_{0,i}^2+\frac{1}{a_i^2}\,\lambda_{0,i}^2\le2\lambda_{0,i}^2.$$

The posterior expectation is given by:

$$\mathbb{E}\big[(\lambda_i-\lambda_{0,i})^2\,\big|\,X^n\big]=O(\lambda_{0,i}^2),\qquad\text{for all }i>k.$$
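The closed forms above come from the inverse-gamma moments: for $\lambda\sim\mathrm{InvGam}(\alpha,\beta)$ with $\alpha=a_i+\frac{n}{2}-1$ and $\beta=\frac{c_i}{2}$, the mean $\beta/(\alpha-1)$ equals $c_i/(n+2a_i-4)$, which is the centering that appears throughout. A Monte Carlo sketch of ours (parameter values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(3)
a_i, n, c_i = 5.0, 200, 400.0
alpha = a_i + n / 2 - 1            # inverse-gamma shape
beta = c_i / 2                     # inverse-gamma scale

# InvGam(alpha, beta) draws: beta over Gamma(alpha, 1) draws
lam = beta / rng.gamma(alpha, 1.0, size=1_000_000)
closed_form = c_i / (n + 2 * a_i - 4)   # = beta / (alpha - 1)
assert np.isclose(lam.mean(), closed_form, rtol=1e-2)
```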
Therefore, we obtain the following inequality for the posterior expectation:

$$\mathbb{E}\big[\|\Sigma-\Sigma_0\|_F^2\,\big|\,X^n\big]=O\Big(d_k\sum_{i=1}^{k}\lambda_{0,i}^2+\frac{p}{n}+n^{-1+2\eta}\sum_{i=1}^{k}\lambda_{0,i}^2\Big)+O\Big(\sum_{i>k}\lambda_{0,i}^2\Big)+O_p\Big(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\tau_k\Big)$$

$$=O\Big(\big(n^{-1+2\eta}+\frac{p}{n\lambda_{0,k}}\big)\sum_{i=1}^{k}\lambda_{0,i}^2+\frac{p}{n}+\sum_{i=k+1}^{p}\lambda_{0,i}^2\Big)+O_p\Big(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\tau_k\Big)=O\Big(\big(n^{-1+2\eta}+\frac{p}{n\lambda_{0,k}}\big)\sum_{i=1}^{k}\lambda_{0,i}^2+p\Big)+O_p\Big(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\tau_k\Big).$$

Since $\lambda_{0,1}/\lambda_{0,k}$ is bounded by a positive constant, the posterior expectation of the covariance is given by:

$$\mathbb{E}\Big[\frac{\|\Sigma-\Sigma_0\|_F^2}{\|\Sigma_0\|_F^2}\,\Big|\,X^n\Big]=O\Big(\frac{p+n^{-1+2\eta}\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}\Big)+O\Big(\frac{p}{n\lambda_{0,k}}\cdot\frac{\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}\Big)+O_p\Big(\frac{\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}\cdot\tau_k\Big)$$

$$=O\Big(\frac{p+n^{-1+2\eta}\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\Big)+O\Big(\frac{p}{n\lambda_{0,k}}\cdot\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\Big)+O_p\Big(\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\cdot\tau_k\Big).$$

We consider the spectral decomposition $S=\hat\Gamma\hat\Lambda\hat\Gamma^{T}$, where

$$\hat\Lambda=\begin{pmatrix}\hat\Lambda_1&0\\0&\hat\Lambda_2\end{pmatrix},$$

with $\hat\Lambda_1$ being a $k\times k$ matrix and $\hat\Lambda_2$ a $(p-k)\times(p-k)$ matrix. Furthermore, we write $\hat\Gamma=(\hat\Gamma_1,\hat\Gamma_2)$, where $\hat\Gamma_1$ is a $p\times k$ matrix and $\hat\Gamma_2$ is a $p\times(p-k)$ matrix. By Theorem 3.2 of Wang and Fan (2017), the following limit holds for each $j=1,\dots,k$:

$$\big|\gamma_{0,j}^{T}\hat\gamma_j\big|=(1+\bar dd_j)^{-\frac12}+O_p(\tau_j).$$
Using this, we bound the spectral norm between the true and sample eigenvectors as

$$\big\|\Gamma_{0,1}-\hat\Gamma_1\big\|^2\le\big\|\Gamma_{0,1}-\hat\Gamma_1\big\|_F^2\le\sum_{j=1}^{k}\big\|\gamma_{0,j}-\hat\gamma_j\big\|_2^2=2\sum_{j=1}^{k}\big(1-\big|\gamma_{0,j}^{T}\hat\gamma_j\big|\big)=2\sum_{j=1}^{k}\Big(1-\big((1+\bar dd_j)^{-\frac12}+O_p(\tau_j)\big)\Big)$$

$$=2\sum_{j=1}^{k}\frac{\bar dd_j}{\sqrt{1+\bar dd_j}\big(\sqrt{1+\bar dd_j}+1\big)}+O_p(\tau_k)=2\sum_{j=1}^{k}O(d_j)+O_p(\tau_k)=O(d_k)+O_p(\tau_k).$$

By the same procedure, we have

$$\|\Sigma-\Sigma_0\|_F^2\le6\sum_{i=1}^{2}\Big[2\big\|\Gamma_{0,i}-\hat\Gamma_i\big\|^2\|\Lambda_{0,i}\|_F^2+\big\|\Lambda_{0,i}-\hat\Lambda_i\big\|_F^2\Big]\le6\Big[8\Big(\sum_{i=1}^{k}\lambda_{0,i}^2\Big)\cdot\big(O(d_k)+O_p(\tau_k)\big)+\sum_{i=1}^{k}(\lambda_{0,i}-\hat\lambda_i)^2\Big]+6\Big[8\sum_{i>k}\lambda_{0,i}^2+\sum_{i>k}(\lambda_{0,i}-\hat\lambda_i)^2\Big].\qquad\text{(S21)}$$

By Lemma S1.2, we obtain

$$(\lambda_{0,i}-\hat\lambda_i)^2=\begin{cases}O\big(\frac{p^2}{n^2}+\frac{\lambda_{0,i}^2}{n^{1-2\eta}}\big)&\text{for }i=1,\dots,k\\[2pt]O\big(\frac{p^2}{n^2}\big)&\text{for }i=k+1,\dots,n\\[2pt]O(1)&\text{for }i=n+1,\dots,p.\end{cases}$$

Substituting into (S21), we obtain

$$\|\Sigma-\Sigma_0\|_F^2\le6\Big[8\Big(\sum_{i=1}^{k}\lambda_{0,i}^2\Big)\cdot\big(O(d_k)+O_p(\tau_k)\big)+\sum_{i=1}^{k}O\Big(\frac{p^2}{n^2}+\frac{\lambda_{0,i}^2}{n^{1-2\eta}}\Big)\Big]+6\Big[8\sum_{i>k}O(1)+\sum_{i=k+1}^{n}O\Big(\frac{p^2}{n^2}\Big)+\sum_{i=n+1}^{p}O(1)\Big]$$

$$=O\Big(d_k\sum_{i=1}^{k}\lambda_{0,i}^2+\frac{p^2}{n}+n^{-1+2\eta}\sum_{i=1}^{k}\lambda_{0,i}^2+p\Big)+O_p\Big(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\tau_k\Big)=O\Big(\frac{p}{n\lambda_{0,k}}\sum_{i=1}^{k}\lambda_{0,i}^2+\frac{p^2}{n}+n^{-1+2\eta}\sum_{i=1}^{k}\lambda_{0,i}^2\Big)+O_p\Big(\sum_{i=1}^{k}\lambda_{0,i}^2\cdot\tau_k\Big).$$

Since $\lambda_{0,1}/\lambda_{0,k}$ is bounded by a positive constant, it follows that:

$$\frac{\|S-\Sigma_0\|_F^2}{\|\Sigma_0\|_F^2}=O\Big(\frac{p}{n\lambda_{0,k}}\cdot\frac{\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}+\frac{p^2}{n\sum_{i=1}^{p}\lambda_{0,i}^2}+n^{-1+2\eta}\,\frac{\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}\Big)+O_p\Big(\frac{\sum_{i=1}^{k}\lambda_{0,i}^2}{\sum_{i=1}^{p}\lambda_{0,i}^2}\cdot\tau_k\Big)$$

$$=O\Big(\frac{p^2}{n(\lambda_{0,1}^2+p)}+\frac{n^{-1+2\eta}\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\Big)+O\Big(\frac{p}{n\lambda_{0,k}}\cdot\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\Big)+O_p\Big(\frac{\lambda_{0,1}^2}{\lambda_{0,1}^2+p}\cdot\tau_k\Big).\qquad\blacksquare$$

References

Castano, D., Paksoy, V. E. and Zhang, F. (2016). Angles, triangle inequalities, correlation matrices and metric-preserving and subadditive functions, Linear Algebra and its Applications 491: 15-29.
Edelman, A., Arias, T. A. and Smith, S. T. (1998). The geometry of algorithms with orthogonality constraints, SIAM Journal on Matrix Analysis and Applications 20(2): 303-353.

Stewart, G. and Sun, J. (1990). Matrix Perturbation Theory, Computer Science and Scientific Computing, Elsevier Science.

Szarek, S. J. (1982). Nets of Grassmann manifold and orthogonal group, Proceedings of Research Workshop on Banach Space Theory (Iowa City, Iowa, 1981), Vol. 169, University of Iowa, Iowa City, IA, p. 185.

Wang, W. and Fan, J. (2017). Asymptotics of empirical eigenstructure for high dimensional spiked covariance, Annals of Statistics 45(3): 1342.
arXiv:2505.20681v1 [stat.ME] 27 May 2025

HYBRID BAYESIAN ESTIMATION IN THE ADDITIVE HAZARDS MODEL

Álvarez, Enrique Ernesto¹; Riddick, Maximiliano Luis²

¹ Instituto de CÑlculo, Universidad de Buenos Aires - CONICET; Universidad Nacional de LujÑn, Argentina. mail: enriqueealvarez@fibertel.com.ar

² NUCOMPA and Departamento de MatemÑtica, Facultad de Ciencias Exactas, Universidad Nacional del Centro de la Provincia de Buenos Aires, Tandil (7000), Argentina. mail: mriddick@nucompa.exa.unicen.edu.ar

Abstract: Hereby we propose a Bayesian method of estimation for the semiparametric Additive Hazards Model (AHM) from Survival Analysis under right-censoring. With this aim, we review the AHM revisiting the likelihood function, so as to comment on the challenges posed by Bayesian estimation from the full likelihood. Through an algorithmic reformulation of that likelihood, we present an alternative method based on a hybrid Bayesian treatment that exploits Lin and Ying's (1994) estimating equation approach and which chooses tractable priors for the parameters. We obtain the estimators from the posterior distributions in closed form, we perform a small simulation experiment, and lastly, we illustrate our method with the classical Nickels Miners dataset and a brief simulation experiment.

Keywords: Additive Hazards Model, Survival Analysis, Bayesian Inference.

2000 AMS Subject Classification: 62N02 - 62F15

1 Introduction

Suppose we have a sample of $n$ individuals who may experience a certain event of interest, such as death, definite illness remission or permanent retirement from the labor force, over an observation window $[0,\tau]$. We denote by $T_i^*$ the true, possibly unobserved, time to occurrence for the $i$-th individual. Because some individuals experience censoring at times $C_i$, their duration until the event is observed only when $C_i\ge T_i^*$.
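The right-censoring scheme can be sketched in a few lines: one observes $t_i=\min(T_i^*,C_i)$ together with an indicator of whether the event, rather than the censoring, occurred. The distributions below (exponential durations, uniform censoring) are illustrative choices of ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
t_star = rng.exponential(scale=2.0, size=n)   # true event times T*_i (illustrative)
c = rng.uniform(0.0, 5.0, size=n)             # censoring times C_i (illustrative)

t = np.minimum(t_star, c)                     # observed durations
delta = (t_star <= c).astype(int)             # 1 = event observed, 0 = censored

# the event time is recovered exactly when it was not censored
assert ((delta == 1) == (t == t_star)).all()
```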
In classical Survival Analysis, it is of interest to study these variables as related to observed individual-level characteristics, as measured by some independent variables or, as they are frequently called, covariates. Notice that we are only considering that we are able to observe at most one event time for each individual. Even though this model could be extended to repeated events, we leave that as a matter for future research. For example, in Medicine, it could be of interest how remission may depend on the choice of different drugs, or on other individual-specific characteristics.

In the literature, models for survival data are typically defined according to the so-called hazard rate,

$$\lambda_i(t,\theta):=\lim_{\varepsilon\to0}\varepsilon^{-1}P_\theta\big(T_i^*\le t+\varepsilon\,\big|\,T_i^*>t\big),$$

which is a heuristic measure of the instantaneous risk at any given time. Approaches for parametric models, where $\theta\in\Theta\subset\mathbb{R}^k$, are plenty. Some common choices are the exponential, Weibull or Gamma distributions (e.g., Lawless 2003). Alternatively, useful models have been proposed within the semi-parametric framework, in which the hazard function is decomposed into a nonparametric baseline hazard function $\lambda_0(\cdot)$ and a Euclidean parameter $\beta\in\mathbb{R}^k$. In this setting the three most common models are: (i) Cox's Proportional Hazards Model (CPM), which decomposes the hazard multiplicatively, proposing

$$\lambda(t,\beta)=\lambda_0(t)\exp(z'\beta);\qquad(1)$$

(ii) the Additive Hazards Model (AHM), which proposes instead an additive decomposition

$$\lambda(t,\beta)=\lambda_0(t)+z'\beta;\qquad(2)$$

and (iii) the Accelerated Failure Time Model (AFTM), which takes a slightly different approach than
the previous ones by proposing a rescaling of the duration times themselves, i.e., $T^*=T_0^*\exp(Z'\beta)$, where $T_0^*\sim F_0(\cdot)$ is the baseline cumulative distribution function. Interestingly, for the corresponding hazard functions, this entails that

$$\lambda(t,\beta)=\lambda_0\big(t\exp(Z'\beta)\big)\exp(Z'\beta),\qquad(3)$$

which is neither multiplicative nor additive.

Developments in Classical Estimation for the above three models have been fairly vast. An extensive account of them can be found in classical textbooks such as Kalbfleisch and Prentice (1980), Klein and Melvin (2006) or Lawless (2003), among others. As for Bayesian estimation, most of the statistical focus in the literature was put on the Multiplicative Hazards Model, an extensive account of which can be found in the textbook of Ibrahim, Chen and Sinha (2001). Comparatively, Bayesian developments for the Additive Hazards Model and for the Accelerated Failure Time Model have been fewer. For the AHM, a recent literature review is provided by Alvarez and Riddick (2019), and for the AFT model, the relevant references are discussed in Zhang and Lawson (2011).

It is our goal in this study to propose a Bayesian method of estimation for the semi-parametric Additive Hazards Model under right-censoring. This paper is organized as follows. In Section 2 we review the Additive Hazards Model, we introduce the likelihood function and we discuss the challenges posed by either Classical or Bayesian estimation from the full likelihood. We further discuss alternative approaches based on Cox's Partial Likelihood and on the seminal work of Lin and Ying (1994), who proposed an approach based on an estimating equation method developed from Counting Process theory. Section 3 is our main contribution in this manuscript, in which, based on a convenient expression of the likelihood, we present a new hybrid Bayesian method of estimation.
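The three specifications (1)-(3) can be put side by side in code. In the sketch below, the baseline $\lambda_0(t)=t$ is an arbitrary illustrative choice of ours; with $z'\beta=0$ all three reduce to the baseline hazard:

```python
import numpy as np

def lambda0(t):
    return t                          # illustrative baseline hazard

def hazard_cpm(t, z, beta):           # (1) Cox: multiplicative
    return lambda0(t) * np.exp(z @ beta)

def hazard_ahm(t, z, beta):           # (2) additive hazards
    return lambda0(t) + z @ beta

def hazard_aftm(t, z, beta):          # (3) accelerated failure time
    r = np.exp(z @ beta)
    return lambda0(t * r) * r

z = np.array([1.0, 0.5])
t = 2.0
for h in (hazard_cpm, hazard_ahm, hazard_aftm):
    assert np.isclose(h(t, z, np.zeros(2)), lambda0(t))
```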
This method combines the classical Lin and Ying estimating equation with Bayesian priors for the Euclidean parameter and for the baseline hazard function. We obtain the posterior distributions and provide formulas for the estimators in closed form. The availability of those closed-form expressions is one of the main advantages of our method, which avoids the necessity of relying on approximate sampling from the posterior distributions or on other types of numerical approximation. Section 4 contains a small simulation experiment that illustrates some of the properties of our hybrid Bayesian estimators. In Section 5 we carry out a comparative Bayesian analysis of the Welsh Nickels Refiners dataset, first introduced by Doll, Morgan and Speizer (1970) and subsequently analyzed from a Classical perspective by Breslow and Day (1987), Lin and Ying (1994), and Alvarez and Ferrario (2016), among others. Lastly, in Section 6, we highlight the main contributions of this article, which is part of Maximiliano Riddick's PhD thesis (2020).

2 Additive Hazards Model

The parameter vector $\beta$ is non-negative and $k$-dimensional, and $\lambda_0(\cdot)\in\mathcal{L}$ is a baseline hazard function. This entails that, denoting

$$\mathcal{L}:=\Big\{h:\mathbb{R}_+\to\mathbb{R}_+,\ \int_0^\infty h(s)\,ds=\infty\Big\},$$

the parameter space for $(\beta,\lambda_0)$ is $\Xi=\mathbb{R}_+^k\times\mathcal{L}$. It is in that sense that the model is semi-parametric. It is noteworthy that in the general AHM formulation what needs to be positive is the
https://arxiv.org/abs/2505.20681v1
hazard function itself, but not necessarily all the coefficients. In our context, there are two ways to accomplish that goal: (i) by adding the specific constraint that the estimated λ̂(t) = λ̂0(t) + β′z be nonnegative and performing constrained inference; or (ii) by using only positive covariates zk, either as given by nature or constructed purposely to achieve positivity, and forcing all the coefficients βk also to be positive. Although it seems restrictive, this option could be extended to allow for negative parameters through the model proposed by Dunson & Herring (2005). Our choice, which is sufficient yet not necessary to obtain nonnegative hazards, is the alternative that we adopted in this manuscript. The main advantage of being willing to live with this limitation is the availability of estimators in closed form. According to the hazard function of the model (2), and under right censoring, our interest is to estimate the parameters of the model. Under right censoring δi = 1(ti ≥ u), the likelihood function in survival models is, denoting by Θ the model parameters,

Ln(Θ) = ∏_{i=1}^n λ(ti | Θ)^{δi} S(ti | Θ).

In the remainder of this article, and for practicality, we will omit from the expressions the conditioning on the parameters of the model. So, in the AHM, when n i.i.d. triplets (ti, zi, δi) are observed, the likelihood expression results in

Ln(β, λ0) = ∏_{i=1}^n [f(ti)]^{δi} [S(ti)]^{1−δi} = ∏_{i=1}^n [λ(ti)]^{δi} S(ti) = ∏_{i=1}^n (λ0(ti) + β′zi)^{δi} exp{−Λ0(ti) − ti β′zi},   (4)

since S(t) = exp{−Λ(t)} = exp{−∫0^t λ(u) du}, where Λ0(t) denotes the cumulative baseline hazard function, i.e., Λ0(t) = ∫0^t λ0(u) du. We will expand on this in the algorithm explanation.

Piecewise constant model. For simplicity, we will in this Section model the baseline hazard function as independent of the Euclidean parameter and as a piecewise constant function. E.g., we fix m ∈ N and we choose a partition 0 =: s0 < s1 < . . . < s_{m−1} < ∞ =: sm.
Define further a random nonnegative stepwise function

L0(t) := A1 if 0 ≤ t < s1; A2 if s1 ≤ t < s2; . . . ; A_{m−1} if s_{m−2} ≤ t < s_{m−1}; Am if t ≥ s_{m−1},   (5)

and let us call A the m-dimensional random vector with entries A1, . . . , A_{m−1} ≥ 0 and Am > 0 (where the last entry is strictly positive in order to guarantee that any realization of the hazard function integrates to infinity). Strictly speaking, in a Bayesian setting, the random function L0 has distribution Q, which is a probability measure on the space L. Several choices are possible for Q. But, for simplicity, we assume throughout this manuscript that the grid s0, . . . , sm is fixed and was chosen before collecting any data.

Polynomial expansion of the likelihood under the piecewise constant hazard model. The next construction is crucial, since it makes available the Bayesian treatment developed in the next Section. First, we reconsider the likelihood, including the piecewise exponential model proposed for the baseline hazard:

Ln(β, λ0) = exp{−β′ Σ_{i=1}^n ti zi} exp{−Σ_{i=1}^n Λ0(ti)} ∏_{i=1}^n (λ0(ti) + β′zi)^{δi}.

Now, the trick is to rewrite the expression ∏_{i=1}^n (λ0(ti) + β′zi)^{δi} according to the grid and the values defined in the piecewise exponential model. Let nj be the number of
observations that belong to each grid interval, and calling (z_{kj}, δ_{kj}) their corresponding covariates and censoring indicators; this yields

Ln(β, λ0) = exp{−β′ Σ_{i=1}^n ti zi} exp{−Σ_{i=1}^n Λ0(ti)} ∏_{j=1}^m ∏_{kj=1}^{nj} (aj + β′z_{kj})^{δ_{kj}}.   (6)

Notice that the formula in Equation (6) above does not correspond to any standard density for which moment or quantile formulae, nor simulation algorithms, are readily available. Because of that, the previous statistical inference approaches to this model were based on numerical methods. We now endeavor to find approximations to its first two moments, from a tractable re-expression of this formula. Let us expand the product

∏_{kj=1}^{nj} (aj + β′z_{kj})^{δ_{kj}} = d0 + d1 aj + . . . + d_{Nj} aj^{Nj},   (8)

which is a polynomial of order Nj = Σ_{kj=1}^{nj} δ_{kj}, with coefficients di that depend on the sample (ti, zi, δi), i = 1, . . . , n, and on the parameters. To estimate the coefficients of Equation (8), Chernoukhov (2013, 2018) proposed a method which essentially consisted of two steps: (i) evaluating the polynomial at (Nj + 1) different points; and (ii) solving a system of linear equations. The drawback of that method is its numerical instability. Instead, in this manuscript we propose an alternative approach which is more efficient numerically and which allows for estimators in closed form. With that aim, we basically exploit a recursive relationship obtained for the polynomial coefficients. We explain this as follows.

Recursion. Let us propose the following recursive method, based on how the polynomial coefficients change when a degree-n polynomial is multiplied by a monomial (x + b). If the number of uncensored times is n (i.e., a degree-n polynomial, for a generic variable λ), we denote the polynomial Pn(λ) = Σ_{j=0}^n dj λ^j. When the sample is augmented by one uncensored observation, this is equivalent to multiplying Pn by (λ + β′z_{n+1}), i.e.,

P_{n+1}(λ) = (Σ_{j=0}^n dj λ^j)(λ + β′z_{n+1}) = Σ_{j=0}^n dj λ^{j+1} + Σ_{j=0}^n dj β′z_{n+1} λ^j.
Then, the coefficients of P_{n+1} are updated as (i) d0^{(n+1)} = d0^{(n)} β′z_{n+1}; (ii) dj^{(n+1)} = d_{j−1}^{(n)} + dj^{(n)} β′z_{n+1}, for j ∈ {1, . . . , n}; and lastly (iii) d_{n+1}^{(n+1)} = dn^{(n)} = 1. With this recursive relationship between the coefficients, we can calculate the di in a fairly easy and numerically efficient way. According to this modification, we propose to estimate β first, and then estimate λ0. In order to achieve that goal, we deal with the likelihood expression Ln(λ0 | β), which becomes

Ln(λ0 | β) ∝ ∏_{j=1}^m (Σ_{k=0}^{Nj} dk aj^k) exp{−Σ_{i=1}^n Λ0(ti)}.   (9)

We endeavour now to obtain a tractable expression for exp{−Σ_{i=1}^n Λ0(ti)} in Equation (9), for each interval in the grid, expressing each Λ0(ti) as a function of each aj. For that, notice that, for each aj, Λ0(ti) (i) does not depend on aj if ti ≤ s_{j−1}; (ii) depends on aj and ti if s_{j−1} < ti ≤ sj; or (iii) otherwise:

Λ0(ti) := Λ0(ti) if ti ≤ s_{j−1};
Λ0(ti) := Λ0(s_{j−1}) + aj (ti − s_{j−1}) if s_{j−1} < ti ≤ sj;
Λ0(ti) := Λ0(s_{j−1}) + aj (sj − s_{j−1}) + ∫_{sj}^{ti} λ0(u) du if sj < ti.

Then, calling mj = #{ti > sj}, after standard calculations we obtain

exp{−Σ_{i=1}^n Λ0(ti)} ∝ exp{−aj [Σ_{kj=1}^{nj} (t_{kj} − s_{j−1}) + mj (sj − s_{j−1})]}.   (10)

Therefore, combining Equations (9) and (10) we see that Ln(λ0 | β) is proportional to a mixture of Gamma distributions, i.e.,

Ln(λ0 | β) ∝ ∏_{j=1}^m (Σ_{k=0}^{Nj} dk aj^k exp{−aj [Σ_{kj=1}^{nj} (t_{kj} − s_{j−1}) + mj (sj − s_{j−1})]}).   (11)
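As a concrete illustration, the coefficient recursion above takes only a few lines of code. This is a hypothetical helper, not part of the paper: the list `bs` plays the role of the scalars β′z_{kj} attached to the uncensored observations of one grid interval, and the returned list holds d0, . . . , d_{Nj}, lowest degree first.

```python
def poly_coeffs(bs):
    """Coefficients of the monic polynomial prod_i (lam + b_i), lowest
    degree first.  Multiplying by (lam + b) shifts every coefficient up
    one degree and adds b times the old coefficient at the same degree,
    which is exactly rules (i)-(iii) of the recursion in the text."""
    d = [1.0]  # degree-0 polynomial P_0(lam) = 1
    for b in bs:
        new = [0.0] * (len(d) + 1)
        for j, dj in enumerate(d):
            new[j + 1] += dj   # the lam * d_j lam^j term
            new[j] += b * dj   # the b * d_j lam^j term
        d = new
    return d
```

For example, (λ + 2)(λ + 3) = λ² + 5λ + 6, so `poly_coeffs([2, 3])` returns `[6.0, 5.0, 1.0]`, with leading coefficient 1 as rule (iii) requires.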
Being a mixture of Gammas, the display above provides a tractable expression which we extensively exploit in the sequel. Without this convenient expression, several algorithms were presented in the literature with innovative adaptations of acceptance-rejection sampling, Metropolis-Hastings and Metropolis-within-Gibbs (e.g., Turkman, Paulino & Müller 2019). Instead, our formulation, because it leads to a mixture of Gamma distributions, enables avoiding the Gibbs sampler and provides estimators in closed form.

2.1 Uninformative Priors

According to the covariates and censoring indicators, under the piecewise constant model for the baseline hazard we arrive at a mixture-of-Gammas expression for the likelihood function of the AHM. We leave the analysis under non-informative prior formulations, such as the Laplace prior (equal to 1 over the parameter space) or MAXENT priors, for future research. In the present article the objective is to develop Bayesian inference under informative priors.

2.2 Informative Priors

As mentioned above, in a Bayesian treatment, we view the parameters as realizations of a random vector B = β and a random curve L0 = λ0(·). In this Subsection, we specify informative prior distributions in the following manner:

Prior for the Euclidean parameter. Consider

B ∼ Nk(µβ, Cβ) | R+^k,   (12)

that is, a truncation to the positive orthant R+^k of a multivariate normal distribution with mean vector µβ and positive definite covariance matrix Cβ. This entails that the density function for B is given by

fB(β) = Nk(β) ∏_{j=1}^k 1(βj ≥ 0) / ∫0^∞ · · · ∫0^∞ Nk(β) dβ,   (13)

where Nk(β) = (2π)^{−k/2} |det(Cβ)|^{−1/2} exp{−(1/2)(β − µβ)′ Cβ^{−1} (β − µβ)}. The denominator is the probability P(β1 ≥ 0; . . . ; βk ≥ 0) under the nontruncated normal distribution. It is a constant that we denote by Ω(µβ, Cβ) and which depends only on µβ and Cβ.
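For a diagonal Cβ, the normalizing constant Ω(µβ, Cβ) factors into a product of univariate normal tail probabilities, so it can be evaluated exactly with the standard library. The sketch below covers the diagonal case only (a general Cβ would require multivariate-normal orthant probabilities); the function name is ours, not the paper's.

```python
from statistics import NormalDist

def orthant_prob_diag(mu, sd):
    """Omega(mu_beta, C_beta) for C_beta = diag(sd**2): the probability
    that every component of an untruncated N(mu, C_beta) vector is >= 0,
    computed as a product of independent one-dimensional tail areas."""
    p = 1.0
    for m, s in zip(mu, sd):
        p *= 1.0 - NormalDist(mu=m, sigma=s).cdf(0.0)
    return p
```

For a single standard-normal coordinate centered at zero the orthant probability is exactly 1/2, and for two independent such coordinates it is 1/4.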
Apart from its mathematical tractability, an additional advantage of choosing a prior based on a normal distribution is that it may simplify the process of prior elicitation. This is because field experts are often reasonably acquainted with normal distributions, specified via the mean, the mode or some quantiles.

Prior for the cumulative baseline hazard function. In order to allow for certain dependence among the baseline hazard function parameters, we opted to model it as a Gamma process, with a pre-specified increasing and left-continuous function α(t) and a scale parameter c. So, calling ηi := α(si) − α(s_{i−1}) the increment of the function α(t) over the interval [s_{i−1}, si), the joint prior density for the cumulative baseline parameters Λ+_{0j} according to the selected time grid is

f(a+_{01}, . . . , a+_{0m}) = ∏_{i=1}^m [c^{ηi} / Γ(ηi)] (a+_{0i})^{ηi − 1} exp{−c a+_{0i}},   (14)

where the new parameters (a+_{01}, . . . , a+_{0m}) are a realization of the random vector Λ+_0 = (Λ+_{01}, Λ+_{02}, . . . , Λ+_{0m}), constructed from the baseline hazard increments. We provide algebraic details of these calculations below in Section 3.2. We let µβ = 1k be a vector of ones, and Cβ = wIk, i.e., a constant w > 0 times the identity matrix of order k. Alternative methods which rely on choosing the hyperparameters for both B and L0 under prior elicitation from expert knowledge
are currently under research.

3 The Hybrid Bayesian Method

We attempt to develop a Bayesian method that achieves two goals: (i) it disentangles estimation of β from the baseline hazard function λ0(·), as the latter is often treated as a nuisance in many applications, and (ii) it generates estimators in closed form. Those are two important goals because Survival Analysis typically has two objectives, namely: (i) prediction of the survival of an individual with given covariate values, and/or (ii) assessing how survival, or the risk of the event occurring at any given time, may depend on some covariates. While for the first objective (i.e., prediction) estimation of both the baseline hazard function and the Euclidean parameters is necessary, for the second objective only the Euclidean parameters are needed. As already mentioned, straight application of the maximum likelihood approach to any of the three models mentioned in this article, namely PHM, AHM or AFTM, which estimates the baseline hazard function and the Euclidean parameter jointly (e.g., Andersen et al., 1993), has presented computational difficulties that encouraged researchers to propose alternative estimation methods in the literature. For instance, for the PHM, Cox developed the approach of the so-called Partial Likelihood, which made it possible to estimate β disentangled from λ0(·). First, we propose an approach to estimate β, and then we develop a method to estimate the baseline hazard given the estimate of β.

3.1 Estimation of β

In the AHM context, a classical way to estimate β uses an estimating equation approach instead of the score function from either maximum likelihood or partial maximum likelihood. This is because neither of them allows the estimation of β to be disentangled from estimation of λ0 and, furthermore, they do not allow estimators in closed form.
The pioneering work of Lin and Ying (1994) proposed an estimating equation for β:

U(β) := [Σ_{i=1}^n δi (zi − z̄n(ti))] − [Σ_{i=1}^n ∫0^{ti} (zi − z̄n(u))^{⊗2} du] β = 0,   (15)

where for a given column vector a we denote the matrix a^{⊗2} = a a′, and

z̄n(u) := Σ_{i=1}^n zi 1(ti ≥ u) / Σ_{i=1}^n 1(ti ≥ u)   (16)

is the vector function which averages all the z's corresponding to time values greater than or equal to u. In order to put the estimating function U(β) of Equation (15) into a Bayesian context, let us now denote the statistics

V1 := (1/n) Σ_{i=1}^n δi (zi − z̄n(ti)),
V2 := (1/n) Σ_{i=1}^n ∫0^{ti} (zi − z̄n(u))^{⊗2} du,
V3 := (1/n) Σ_{i=1}^n (zi − z̄n(ti))^{⊗2},

where V1 is a vector, while V2 and V3 are symmetric matrices. Notice that with this notation the LY estimating equation is U(β) = V1 − V2 β = 0. Consider further the function

g(β) = exp{−(1/2) (β − V2^{−1}V1)′ M (β − V2^{−1}V1)},  where M := [n^{−1} V2^{−1}V3V2^{−1}]^{−1}.   (17)

Taking logarithms and derivatives with respect to β we see that

log g(β) = −(1/2) [β′ M β − 2 β′ M (V2^{−1}V1) + (V2^{−1}V1)′ M (V2^{−1}V1)],

(∂/∂β) log g(β) = −M β + M (V2^{−1}V1).

From the display above, we observe that U(β) = 0 whenever (∂/∂β) log g(β) = 0. As a consequence, it is remarkable that Lin and Ying's point estimator of β corresponds to
the mean or the mode of a multivariate normal distribution with mean vector m = V2^{−1}V1 and covariance matrix D = n^{−1} (V2^{−1}V3V2^{−1}). This entails that in a Bayesian context we could consider the LY estimator as belonging to a posterior normal distribution for the Euclidean parameter under a flat (improper) prior. In this manuscript, in contrast, we have opted for an informative conjugate multivariate normal prior truncated to the positive orthant, given by Equation (12). From there, we propose the pseudo-marginal posterior distribution for [B | X, L0],

f_{B|X,L0}(β) ∝ exp{−(1/2) [(β − m)′ D^{−1} (β − m) + (β − µβ)′ Cβ^{−1} (β − µβ)]} ∏_{j=1}^k 1(βj ≥ 0),   (18)

which from standard calculations entails

[B | X, L0] ∼ Nk[(D^{−1} + Cβ^{−1})^{−1} (D^{−1} m + Cβ^{−1} µβ); (D^{−1} + Cβ^{−1})^{−1}] restricted to R+^k.   (19)

It is worth remarking, at this point, why we call the distribution in the Equation above a "pseudo" posterior, instead of a (plain) posterior distribution. That is because the Bayesian calculations that led to the density f_{B|X,L0} in Equation (18) started from g(β), which is not the conditional distribution of the sample given the parameters, but a convenient transformation inspired by an estimating equation. That is why we call our method "hybrid". Within a Bayesian framework, our method shares the spirit of Cox's partial likelihood and of Lin and Ying's estimating equation, which are alternative frequentist approaches to inference. All those methods rely on estimating equations that depart from the traditional likelihood function. Furthermore, it turns out to be convenient that with our formulation [B | X, L0] = [B | X], i.e., this marginal posterior distribution is independent of the baseline hazard function. This enables estimation of β to be disentangled from that of λ0(·), which in many contexts is treated as a nuisance parameter. Secondly, we will see that our formulation enables point estimation of β in closed form, a feature which is shared by the LY development, while not by Cox's.
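In the single-covariate case the location of the untruncated normal above reduces to a precision-weighted average, which makes its behavior under flat or tight priors easy to check numerically. This is a hypothetical scalar sketch (not the paper's code): `m` and `D` are the scalar versions of the LY quantities, and the clipping at zero reflects the truncation of the prior to the positive orthant.

```python
def posterior_mode_1d(m, D, mu_b, C_b):
    """Scalar posterior location: precision-weighted average of the
    LY-type estimate m (variance D) and the prior mean mu_b (prior
    variance C_b), clipped at zero because of the truncated prior."""
    b = (m / D + mu_b / C_b) / (1.0 / D + 1.0 / C_b)
    return max(b, 0.0)
```

As C_b grows the prior washes out and the estimate approaches m; a negative m is clipped to zero, unlike the unconstrained LY estimator, while a tiny C_b pulls the estimate toward the prior mean.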
As for the choice of the Bayesian point estimator of β, we opt in this article for the mode of the posterior distribution. This is

β̂B = (D^{−1} + Cβ^{−1})^{−1} (D^{−1} m + Cβ^{−1} µβ)   (20)

whenever positive, or zero otherwise. It is noteworthy that under a noninformative prior, i.e., when ‖Cβ‖ → ∞, the point estimator β̂B becomes asymptotically equivalent to m = V2^{−1}V1, which coincides in the positive case with the LY classical estimator. In contrast with the LY estimator, however, ours can never turn out negative for any sample. Notice also that an alternative Bayesian estimator, based on the expectation of the pseudo-posterior distribution, would have a serious problem: it would diverge to infinity as ‖Cβ‖ → ∞, due to the effect of the truncation of the prior multivariate normal distribution to the positive orthant. For the same reason, we propose not to estimate the variance of β̂B by the variance of the (truncated) pseudo-posterior distribution, but instead with a different method. We explain that as follows.

Estimation of the variance. We proceed in the following steps:

1. Estimate β̂B according to Equation (20) whenever positive, or zero otherwise.

2. To estimate the precision of each component β̂B_k we construct the highest posterior density (HPD) credible interval
of (1 − α) coverage. The interval is of the form [bl, bu], where 0 ≤ bl < bu < ∞. It is noteworthy that in Lin and Ying's formulation, since the confidence intervals are based on the approximate normal distribution, they take the form β̂LY ± z_{1−α/2} σ̂(β̂LY), which allows for a possibly negative lower endpoint. Here, for the sake of comparability of the standard deviation estimators, we propose

σ̂(β̂B) := (bu − bl) / (2 z_{1−α/2}).   (21)

Notice also that because we work with truncated normal distributions, the point estimates need not be at the center of the intervals; that will happen only when bl > 0.

This method for estimating σ̂(β̂B) also has an important consequence for testing the significance of covariates. Based on the duality between confidence intervals and testing, we will consider that when zero belongs to an interval, it is an indication of nonsignificance of the particular covariate. In this regard, constructing credible intervals of highest pseudo-posterior density is essential for testing purposes as well as for model selection.

3.2 Estimation of the Baseline Hazard

We first recall the so-called Gamma Process. Let G(a, b) denote the gamma distribution with shape parameter a > 0 and scale parameter b > 0. Let α(t) be an increasing left-continuous function on t ≥ 0 such that α(0) = 0. Further, let Z(t) be a stochastic process on t ≥ 0 with the properties: (i) Z(0) = 0; (ii) Z(t) has independent increments in disjoint intervals; and (iii) for any t > s, Z(t) − Z(s) ∼ G(c(α(t) − α(s)), c). Then the process {Z(t) : t > 0} is called a Gamma Process and is denoted by Z(t) ∼ GP(cα(t), c). We opt in this manuscript to incorporate a Gamma Process prior on the cumulative baseline hazard function. With that goal, we first need to find an alternative expression for the likelihood of the AHM, expressed in terms of the cumulative hazard increments Λ+_0 = (Λ+_{01}, Λ+_{02}, . . . , Λ+_{0m} := ∞) associated to each interval in the grid.
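The Gamma Process just defined is easy to simulate on a grid, which can be useful for prior predictive checks. This is a hypothetical sketch using only the standard library (not the paper's code): `alpha_grid` holds the values α(s0), . . . , α(sm), and each increment is drawn as a Gamma variate with shape c·(α(sj) − α(s_{j−1})) and rate c (i.e., scale 1/c), so its prior mean is α(sj) − α(s_{j−1}).

```python
import random

def sample_gp_increments(alpha_grid, c, rng=random):
    """Draw the independent increments Z(s_j) - Z(s_{j-1}) of a Gamma
    process GP(c*alpha(t), c) on a grid.  random.gammavariate takes
    (shape, scale), so rate c corresponds to scale 1/c."""
    return [rng.gammavariate(c * (a1 - a0), 1.0 / c)
            for a0, a1 in zip(alpha_grid, alpha_grid[1:])]
```

With the α(t) values used later in the simulation Section (0, 5, 6, 6.3), the three increments have prior means 5, 1 and 0.3, and larger c concentrates the draws around those means.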
Since it makes no sense to try to estimate the cumulative increment in the last (unbounded) interval of the grid, we truncate it at a fixed time tF (i.e., sm = tF); this guarantees that Λ+_{0m} < ∞. This does not represent a limitation in practice, because every empirical study necessarily has a finite follow-up time. Considering expression (5), which introduced the Aj's, we deduce Λ+_{0j} = Aj (sj − s_{j−1}). Then, replacing for each j we obtain:

f_{Λ+_{0j} | X, β}(a+_{0j}) ∝ Σ_{k=0}^{Nj} dk (a+_{0j} / (sj − s_{j−1}))^k exp{−(a+_{0j} / (sj − s_{j−1})) [Σ_{kj=1}^{nj} (t_{kj} − s_{j−1}) + mj (sj − s_{j−1})]}.   (22)

For a fixed increasing left-continuous function α and a scale parameter c, the prior for each Λ+_{0j} is G(c(α(sj) − α(s_{j−1})), c). By construction, different grid intervals are disjoint and, following the gamma process structure, their increments are independent. Accordingly, the prior for Λ+_0 is

π(Λ+_0) = ∏_{j=1}^m f_{G(c(α(sj) − α(s_{j−1})), c)}(a+_{0j}) = ∏_{j=1}^m f_{G(c αj, c)}(a+_{0j}),

where αj = α(sj) − α(s_{j−1}), and f_{G(a,b)}(t) denotes the density function of a Gamma distribution with parameters a and b evaluated at t. Using that factor, we obtain the posterior

f_{Λ+_0 | X, β}(a+_0) ∝ ∏_{j=1}^m [Σ_{k=0}^{Nj} dk (a+_{0j} / (sj − s_{j−1}))^k exp{−(a+_{0j} / (sj − s_{j−1})) [Σ_{kj=1}^{nj} (t_{kj} − s_{j−1}) + mj (sj − s_{j−1})]}] ∏_{j=1}^m (a+_{0j})^{c αj − 1} e^{−a+_{0j} c} c^{c αj} / Γ(c αj). In
order now to simplify notation, let us call

d^{(j)}_k := dk / [(sj − s_{j−1})^k Γ(c αj)],   cj := [Σ_{kj=1}^{nj} (t_{kj} − s_{j−1}) + mj (sj − s_{j−1})] / (sj − s_{j−1}) + c.

With that notation we express the posterior as

f_{Λ+_{0j} | X, β}(a+_{0j}) = Σ_{k=0}^{Nj} d^{(j)}_k (a+_{0j})^{k + c αj − 1} e^{−a+_{0j} cj} / ∫0^∞ Σ_{k=0}^{Nj} d^{(j)}_k (a+_{0j})^{k + c αj − 1} e^{−a+_{0j} cj} da+_{0j} = Σ_{k=0}^{Nj} (d^{(j)}_k / cj^k) Γ(k + c αj) f_{G(k + c αj, cj)}(a+_{0j}) / Σ_{k=0}^{Nj} (d^{(j)}_k / cj^k) Γ(k + c αj).

Therefore, the resulting posterior distribution is proportional to a mixture of Gamma distributions, which enables point estimation in closed form under squared error loss. This is one of the most important advantages of our choice of prior specification in this article. Naturally, other choices of prior distributions for the baseline hazards are possible, for example by placing a prior on the plain hazard function itself, or on the baseline survival function. Those options will be presented in a separate article, a first version of which can be obtained from the authors upon request. In order now to provide formulae for the point estimators and their variances, let us call e^{(j)}_k := (d^{(j)}_k / cj^k) Γ(k + c αj), so that

â+B_{0j} = E[Λ+_{0j} | X, β] = ∫0^∞ a+_{0j} [Σ_{k=0}^{Nj} e^{(j)}_k f_{G(k + c αj, cj)}(a+_{0j}) / Σ_{k=0}^{Nj} e^{(j)}_k] da+_{0j} = Σ_{k=0}^{Nj} e^{(j)}_k (k + c αj) / [Σ_{k=0}^{Nj} e^{(j)}_k cj].

To estimate the variances, we take

V[Λ+_{0j} | X, β] = ∫0^∞ (a+_{0j})^2 [Σ_{k=0}^{Nj} e^{(j)}_k f_{G(k + c αj, cj)}(a+_{0j}) / Σ_{k=0}^{Nj} e^{(j)}_k] da+_{0j} − (E[Λ+_{0j} | X, β])^2 = Σ_{k=0}^{Nj} e^{(j)}_k [(k + c αj) / cj^2 + ((k + c αj) / cj)^2] / Σ_{k=0}^{Nj} e^{(j)}_k − (â+B_{0j})^2.

As with our proposed estimator for β, it is worth noting the properties of â+B_{0j} as we relax or tighten the prior.

Remark 1: When c diverges to ∞, each â+B_{0j} converges to αj (i.e., its prior mean). To see that, we express

â+B_{0j} = Σ_{k=0}^{Nj} e^{(j)}_k (k + c αj) / [Σ_{k=0}^{Nj} e^{(j)}_k cj] = Σ_{k=0}^{Nj} e^{(j)}_k k / [Σ_{k=0}^{Nj} e^{(j)}_k cj] + c αj / cj = Σ_{k=1}^{Nj} e^{(j)}_k k / [(e^{(j)}_0 + Σ_{k=1}^{Nj} e^{(j)}_k) cj] + c αj / cj.

Since e^{(j)}_k > 0 for all k ∈ {0, 1, . . . , Nj},

0 ≤ Σ_{k=1}^{Nj} e^{(j)}_k k / [(e^{(j)}_0 + Σ_{k=1}^{Nj} e^{(j)}_k) cj] ≤ Σ_{k=1}^{Nj} e^{(j)}_k Nj / [Σ_{k=1}^{Nj} e^{(j)}_k cj] = Nj / cj.
Since Nj is bounded in probability, â+B_{0j} converges in probability to αj as c → ∞.

Remark 2: In the opposite case to the first Remark, i.e., when c converges to zero, the Bayesian estimator â+B_{0j} does not depend on the prior function αj. The proof is clear since, every time αj appears in the expression, it is multiplied by c.

It is worth mentioning again that our choice of loss function is different for the Euclidean coefficients than for the baseline hazard function. While we used the mode of the posterior distribution for estimation of the Euclidean parameter β, we have opted to use the mean of the posterior distribution for estimation of the baseline hazard. It is also for this reason that we call our method hybrid. Again, we notice that one of the main advantages of this approach is that it enables estimators in closed form.

4 Simulation

In this Section, we present a small simulation experiment according to the following choices: (i) hazard function λ(t, β) = 1 + 0.5z, i.e., λ0(t) = 1 on R+, and
a single covariate with regression parameter β = 0.5; (ii) censoring variable C with an Exponential distribution with mean 2; (iii) covariate Z ∼ χ²₁; (iv) sample sizes n = 100 or 500; and (v) R = 1000 replicates.

Simulations for the regression parameter β. The results are presented in Tables 1 and 2. We observe that in all cases β̂B ≥ 0 and, further, if either ‖Cβ‖ → ∞ (i.e., the prior becomes less informative) or n → ∞ (the sample size increases), the point estimator of β and its standard deviation converge to those of Lin and Ying, i.e., β̂B → β̂LY and σ̂B(β̂B) → σ̂LY(β̂LY). The simulation tables also show the effects of varying the position parameter µβ and the scale parameter w of the β-prior. Naturally, the estimates of β get closer to the true value (0.5) as the prior position parameter µβ gets closer to the truth.

µβ \ w    0.1        0.5        1          2          5          10         1000
0         0.3241     0.4813     0.5155     0.5352     0.5479     0.5524     0.5569
          (0.1625)   (0.2122)   (0.2211)   (0.2259)   (0.2291)   (0.2301)   (0.2312)
0.25      0.4157     0.5093     0.5306     0.5430     0.5512     0.5540     0.5569
          (0.1784)   (0.2154)   (0.2227)   (0.2268)   (0.2294)   (0.2303)   (0.2312)
0.5       0.5073     0.5372     0.5457     0.5509     0.5544     0.5556     0.5569
          (0.1851)   (0.2182)   (0.2242)   (0.2275)   (0.2297)   (0.2304)   (0.2312)
0.75      0.5989     0.5652     0.5608     0.5587     0.5576     0.5573     0.5569
          (0.1872)   (0.2203)   (0.2256)   (0.2283)   (0.2300)   (0.2306)   (0.2312)
1         0.6905     0.5932     0.5759     0.5666     0.5608     0.5589     0.5569
          (0.1879)   (0.2221)   (0.2268)   (0.2290)   (0.2303)   (0.2307)   (0.2312)
2         1.0570     0.7051     0.6362     0.5981     0.5737     0.5654     0.5570
          (0.1884)   (0.2262)   (0.2304)   (0.2314)   (0.2315)   (0.2313)   (0.2312)
10        3.9886     1.6002     1.1189     0.8496     0.6770     0.6175     0.5575
          (0.1884)   (0.2290)   (0.2365)   (0.2394)   (0.2378)   (0.2354)   (0.2312)

Table 1: Simulations for the regression parameter β: n = 100, LY = 0.5569 (0.2457).

Simulation for the baseline hazard parameters.
In this second part of the simulation experiment, we choose a grid according to the approximate quantiles 0.2, 0.4, 0.6 and 0.8 of the observed event times. The decreasing number of "at risk" times along the grid makes the estimates more accurate in earlier intervals of the time axis, as is seen in the estimates of the standard deviations. According to the chosen grid, the true values are Λ+_0 = (0.125, 0.3, 0.6, 1.15). Also, for this part of the simulation, the prior selected is a left-continuous function α(t) which takes the values α(0) = 0, α(0.125) = 5, α(0.3) = 6, α(0.6) = 6.3, and α(1.15) = 6.31. Thus, the corresponding true values of the vector (α1, α2, α3, α4) = (5, 1, 0.3, 0.01). The prior weight is regulated by the hyperparameter c. We present the simulation with c = 10, c = 1 and c = 0.1. Also, for this simulation, we have previously estimated β̂ with the hybrid Bayesian estimator, with prior parameters µβ = 0.5 and w = 10000.

µβ \ w    0.1        0.5        1          2          5          10         1000
0         0.4585     0.4992     0.5048     0.5077     0.5094     0.5100     0.5106
0.25      0.4829     0.5045     0.5075     0.5090     0.5100     0.5103     0.5106
0.5       0.5074     0.5099     0.5102     0.5104     0.5105     0.5106     0.5106
0.75      0.5319     0.5152     0.5129     0.5118     0.5111     0.5108     0.5106
1         0.5564     0.5205     0.5156     0.5131     0.5116     0.5111     0.5106
2         0.6543     0.5419     0.5264     0.5186     0.5138     0.5122     0.5106
10        1.4375     0.7128     0.6128     0.5620     0.5312     0.5209     0.5107
(standard deviations, identical across all rows: (0.0982) (0.1024) (0.1030) (0.1033) (0.1035) (0.1035) (0.1036))

Table 2: Simulations for the regression parameter β: n = 500, LY = 0.5106 (0.1036).

The results are presented in Tables 3 and 4. As expected, we see that (i) the value of c has an impact on the estimator, giving more or less weight to the prior; (ii) the standard deviation increases as the intervals shift to the right; and (iii) as the sample size increases, the prior loses impact on the estimates.

5 Analysis of a real dataset

In this last Section we compute the hybrid Bayesian estimates of β for the Welsh nickel refinery dataset and compare them with those of Lin and Ying (1994). The "Welsh nickel refinery" dataset was originally introduced by Doll et al. (1970), and subsequently analyzed by Breslow & Day (1987), Lin & Ying (1994) and Álvarez & Ferrario (2016), among others. It contains information about 679 workers from a nickel refinery in the south of Wales, who were presumably exposed to cancer-triggering substances.
Data were collected from payroll registers from 1934 to 1981. The covariates are AFE := age at first employment, YFE := year at first employment, and EXP := exposure level (an index of the degree of contamination). We fit the same model as in Lin & Ying (1994). We observed that the hybrid Bayesian estimate approaches the classical LY estimator as the prior gets flatter.

c      Λ+01 = 0.125   Λ+02 = 0.175   Λ+03 = 0.3   Λ+04 = 0.55
10     0.6527         0.2967         0.2991       0.3354
       (0.0824)       (0.0657)       (0.0825)     (0.1127)
1      0.1880         0.1845         0.2967       0.5167
       (0.0495)       (0.0587)       (0.0928)     (0.1762)
0.1    0.1265         0.1688         0.2951       0.5384
       (0.0431)       (0.0577)       (0.0943)     (0.1850)

Table 3: Simulation for the baseline hazard parameters: n = 100, β̂ = 0.5569.

c      Λ+01 = 0.125   Λ+02 = 0.175   Λ+03 = 0.3   Λ+04 = 0.55
10     0.2510         0.2041         0.2991       0.4919
       (0.0247)       (0.0269)       (0.0406)     (0.0732)
1      0.1384         0.1771         0.2990       0.5437
       (0.0198)       (0.0260)       (0.0416)     (0.0799)
0.1    0.1257         0.1742         0.2990       0.5494
       (0.0192)       (0.0259)       (0.0417)     (0.0807)

Table 4: Simulation for the baseline hazard parameters: n = 500, β̂ = 0.5106.

                      β̂LY      µβ = 0, w = 0.01   µβ = 0, w = 1000
log(AFE − 10)         2.5126    2.4183             2.5126
(YFE − 1915)/10       0.2179    0.1868             0.2179
−(YFE − 1915)²/100    2.6005    2.3829             2.6005
log(EXP + 1)          1.7172    1.7081             1.7172

Table 5: (Results are multiplied by 10³.)

The results are presented in Table 5, where we visualize the impact of the choice of the hyperparameters on the final estimates. Those results show the versatility of our proposed hybrid Bayesian method, which can be calibrated to give the desired weight to the prior information. As discussed, this makes the hybrid Bayesian method a potentially valuable tool for the statistical practitioner in Data Analysis.

6 Conclusions

In this manuscript, we presented an alternative way to express the likelihood function of the AHM as a mixture of Gamma distributions. This enabled an estimation method in two steps: (i) estimation of the Euclidean coefficients in closed form, and (ii) estimation of the baseline hazard function parameters. Our Bayesian proposal is based on constant regression coefficients, as was the work of LY. A flexible extension of those methods is to consider the general Aalen (1980) model, where β = β(t), as in Silva & Amaral-Turkman (2005), who additionally included frailty terms in their approach.

Acknowledgements

Maximiliano L. Riddick wishes to thank CONICET for his doctoral fellowship, and the Departamento de Matemática, Facultad de Ciencias Exactas, Universidad Nacional de La Plata. Both authors thank Universidad Nacional de La Plata (PPID UNLP I231).

References

Aalen, O. (1980). A model for nonparametric regression analysis of counting processes. Mathematical Statistics and Probability Theory, 1-25.

Álvarez, E. E. and Ferrario, J. (2016). Robust estimation in the additive hazards model. Communications in Statistics - Theory and Methods 45 (4), 906-921.

Álvarez, E. E. and Riddick, M. L. (2019). Review of Bayesian analysis in additive hazards model. Asian Journal of Probability and Statistics, 1-14.

Álvarez, E. E. and Ferrario, J. (2012). Revisión de la Estimación Robusta en Modelos Semiparamétricos de Supervivencia.
IASI (Journal of the Interamerican Statistical Institute) 64 (182-183), 85-106.

Andersen, P. K., Borgan, O., Gill, R. D. and Keiding, N. (1993). Statistical Models Based on Counting Processes. Springer.

Breslow, N. E. and Day, N. E. (1987). Statistical Methods in Cancer Research: The Design and Analysis of Cohort Studies. International Agency for Research on Cancer.

Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society, Series B (Methodological) 34 (2), 187-220.

Chernoukhov, A. (2013). Bayesian Spatial Additive Hazard Model. Electronic Theses and Dissertations, Windsor University.

Chernoukhov, A., Hussein, A., Nkurunziza, S. and Bandyopadhyay, D. (2018). Bayesian inference in time-varying additive hazards models with applications to disease mapping. Environmetrics 29 (5-6), e2478.

Doll, R., Morgan, L. G. and Speizer, F. E. (1970). Cancers of the lung and nasal sinuses in nickel workers. British Journal of Cancer 24 (4), 623-632.

Dunson, D. B. and Herring, A. H. (2005). Bayesian model selection and averaging in additive and proportional hazards models. Lifetime Data Analysis 11, 213-232.

Ibrahim, J. G., Chen, M. and Sinha, D. (2001). Bayesian Survival Analysis. Springer Science & Business Media.

Kalbfleisch, J. D. and Prentice, R. L. (1990). The Statistical Analysis of Failure Time Data. Hoboken: John Wiley
https://arxiv.org/abs/2505.20681v1
& Sons.

Klein, J. P. and Moeschberger, M. L. (2006). Survival Analysis: Techniques for Censored and Truncated Data. Springer Science & Business Media.

Lawless, J. F. (2003). Event history analysis and longitudinal surveys. Analysis of Survey Data, 221-243.

Lin, D. Y. and Ying, Z. (1994). Semiparametric Analysis of the Additive Risk Model. Biometrika 81(1), 61-71.

Riddick, M. L. (2020). Estimación Bayesiana en el modelo de riesgos aditivos. Doctoral dissertation, Universidad Nacional de La Plata, Argentina (https://doi.org/10.35537/10915/138368).

Silva, G. L. and Amaral-Turkman, M. A. (2005). Bayesian analysis of an additive survival model with frailty. Communications in Statistics - Theory and Methods 33(10), 2517-2533.

Turkman, M. A. A., Paulino, C. D. and Müller, P. (2019). Computational Bayesian Statistics: An Introduction (Vol. 11). Cambridge University Press.

Zhang, J. and Lawson, A. B. (2011). Bayesian parametric accelerated failure time spatial model and its application to prostate cancer. Journal of Applied Statistics 38(3), 591-603.
arXiv:2505.20708v1 [econ.TH] 27 May 2025

Berk-Nash Rationalizability*

Ignacio Esponda (UC Santa Barbara), Demian Pouzo (UC Berkeley)

May 28, 2025

Abstract

We introduce Berk-Nash rationalizability, a new solution concept for misspecified learning environments. It parallels rationalizability in games and captures all actions that are optimal given beliefs formed using the model that best fits the data in the agent's misspecified model class. Our main result shows that, with probability one, every limit action (any action played or approached infinitely often) is Berk-Nash rationalizable. This holds regardless of whether behavior converges. We apply the concept to known examples and identify classes of environments where it is easily characterized. The framework provides a general tool for bounding long-run behavior without assuming convergence to a Berk-Nash equilibrium.

*Esponda: iesponda@ucsb.edu. Pouzo: dpouzo@berkeley.edu

Contents: 1 Introduction; 2 Model and solution concept; 3 Justification of solution concept; 4 Characterization of rationalizability; A Appendix; References.

1 Introduction

A growing literature studies learning in environments where agents operate under misspecified models, that is, models that exclude the true data-generating process. This framework captures both systematic biases, such as overconfidence, selection neglect, and mistaken causal reasoning, and the idea that agents may be unable to represent the world accurately due to complexity or informational constraints.[1] Much of the existing analysis of misspecified learning focuses on long-run behavior characterized by Berk-Nash equilibrium (Esponda and Pouzo (2016); henceforth EP), where agents best respond to beliefs that minimize the Kullback-Leibler (KL) divergence relative to the data they generate. EP show that if behavior converges, it must converge to such an equilibrium.
More recent work has studied whether and when convergence occurs, using techniques from stochastic approximation and martingale theory (e.g., Fudenberg, Lanzani and Strack (2021); Esponda, Pouzo and Yamamoto (2021); Murooka and Yamamoto (2021); Frick, Iijima and Ishii (2023)).[2]

This paper takes a complementary approach. Rather than impose conditions that guarantee convergence, we ask what can be said about asymptotic behavior even when convergence fails. To that end, we introduce a solution concept, Berk-Nash rationalizability, that is inspired by rationalizability in game theory (Bernheim (1984); Pearce (1984)) but tailored to the structure of misspecified learning problems. Our main result shows that, with probability one, every limit action[3] is Berk-Nash rationalizable. Equivalently, if an action is not rationalizable, then there exists an open neighborhood around it that is visited only finitely many times (in the finite case, the action is played at most finitely often).

The result is useful for two reasons. First, it holds without requiring convergence of actions or beliefs, and avoids the need to verify technical conditions that often appear in convergence analyses, such as characterizing the asymptotic behavior of solutions to differential inclusions. Also, some existing results establish convergence to equilibrium only under specific initial conditions or with positive probability, and may not describe what happens when actions do not converge. Second, when the rationalizable set is a singleton, our result pins down long-run behavior: the agent almost surely converges to that action. More generally, when the rationalizable set is not a point, it provides bounds on the actions the agent may take in the limit (though in such cases, existing convergence results may still help identify which actions are selected within the rationalizable set).

The definition of Berk-Nash rationalizability is based on a best response operator that maps sets of actions into new sets. Given a candidate set of actions, the operator considers all probability distributions over that set, identifies the models that minimize Kullback-Leibler (KL) divergence relative to the outcome distribution those actions generate, and then constructs beliefs supported on those models. It then maps the original set to the set of actions that are optimal under such beliefs. A set is closed under this operator if it includes all the actions it justifies in this way. An action is Berk-Nash rationalizable if it belongs to some closed set. This operator admits a standard characterization: starting from the full action space, we apply the operator iteratively.

[1] Examples include Jehiel (2005), Eyster and Rabin (2005), Esponda (2008), Schwartzstein (2014), Bohren (2016), Spiegler (2016), Heidhues, Kőszegi and Strack (2018), Eliaz and Spiegler (2020), Fudenberg, Lanzani and Strack (2024), Frick, Iijima and Ishii (2024), and He and Libgober (2025), among many others.
[2] Several papers examine the convergence of misspecified learning dynamics in single-agent settings that are either tailored to specific environments or rely on parametric assumptions such as Gaussian signals or binary states. Examples include Nyarko (1991), Fudenberg, Romanyuk and Strack (2017), Heidhues, Kőszegi and Strack (2018), Heidhues, Kőszegi and Strack (2021), Bohren and Hauser (2021), and He (2022), among many others.
[3] Limit actions are defined in the natural way: in discrete action spaces, they are actions played infinitely often; in continuous spaces, they are actions that are approached arbitrarily closely and infinitely often.
The largest closed set is the intersection of this sequence and coincides with the set of rationalizable actions.

We illustrate this construction and show how it can be applied in well-known environments. In particular, we revisit an example of Heidhues, Kőszegi and Strack (2018), in which overconfidence distorts learning. In their overconfident case, convergence to a unique equilibrium is established under assumptions that guarantee uniqueness. We relax these assumptions and show that the limit set of actions lies between the smallest and largest equilibrium. In the underconfident case, where their convergence results do not apply, we show that the rationalizable set is still informative (it may be a singleton or a well-structured interval), thus delivering bounds on long-run behavior. We conclude by identifying two general classes of environments, supermodular and single-peaked, in which the rationalizable set can be characterized in a tractable way.

Overall, the paper contributes a method for bounding asymptotic behavior under misspecified learning, without relying on convergence, and shifts focus from equilibrium outcomes to the broader class of rationalizable actions. For concreteness, we focus on single-agent decision problems, though the solution concept extends naturally to other environments such as games (EP) and Markov decision processes (Esponda and Pouzo 2021).

2 Model and solution concept

Primitives. We consider a single-agent decision problem with an action space A, a space of observable consequences Y, and a consequence function that maps actions to probability distributions over consequences, denoted by Q: A → ∆Y. The agent does not necessarily know Q but possesses a parametric model of it, represented as Q_θ: A → ∆Y, where the model parameter θ belongs to a parameter space Θ. The agent's behavior is described by an action correspondence F: ∆Θ → A that is non-empty valued and upper hemi-continuous. This correspondence specifies the set of actions the agent might choose given a belief over the set of models Θ. For concreteness, we consider the standard case in which the agent chooses actions that maximize a payoff function π: A × Y → R, i.e.,

F(µ) = argmax_{a ∈ A} ∫_Θ U(a, θ) µ(dθ),

where U(a, θ) := ∫_Y π(a, y) Q_θ(dy | a). We maintain the following assumptions throughout the paper.[4]

Assumption. (i) A is a compact subset and Y is a Borel subset of Euclidean space; (ii) for all a ∈ A, there exists a density q(· | a) in L^1(Y, R, ν) such that ∫_{Y'} q(y | a) ν(dy) = Q(Y' | a) for any Borel Y' ⊆ Y; (iii) for all a ∈ A and θ ∈ Θ, there exists a density q_θ(· | a) ∈ L^1(Y, R, ν) such that ∫_{Y'} q_θ(y | a) ν(dy) = Q_θ(Y' | a) for any Borel Y' ⊆ Y; (iv) Continuity: Θ is a compact subset of Euclidean space and, for all a ∈ A, (θ, a) ↦ q_θ(· | a) and a ↦ q(· | a) are continuous almost surely with respect to Q(· | a); (v) Uniform integrability: there exists a measurable function G: Y → R_+ such that, for every a ∈ A, G ∈ L^2(Y, R, Q(· | a)), and for every θ ∈ Θ, |log(q(· | a)/q_θ(· | a))| ≤ G(·) almost surely with respect to Q(· | a); (vi) π: A × Y → R is jointly continuous.

Assumptions (i)-(iii) allow A and Y to be continuous or discrete, with density functions defined accordingly. The agent employs a parametric model, with compact parameter space and continuous densities. Assumption (v) is instrumental in establishing a uniform law of large numbers.
Moreover, this condition guarantees that, for every θ and a, the support of Q_θ(· | a) contains the support of Q(· | a); that is, every observation can be generated by the agent's model. Finally, assumption (vi), along with the preceding conditions, ensures that F is upper hemi-continuous, so that behavior varies continuously with beliefs.

[4] As usual, L^p(Y, R, ν) denotes the space of all functions f: Y → R such that ∫ |f(y)|^p ν(dy) < ∞. All Euclidean spaces, including A and Θ, are endowed with their Borel σ-algebra. The spaces ∆A and ∆Θ denote the sets of Borel probability measures on A and Θ, respectively, endowed with the topology of weak convergence.

Throughout the paper, we use the following example from Heidhues, Kőszegi and Strack (2018) to illustrate definitions, results, and their application.

Example (Returns to effort). An agent takes effort a ∈ A = [0, ∞), leading to an observable outcome given by y = (α* + a)θ* + ε, where α* ≥ 0 represents the agent's true ability, θ* > 0 is the true return to effort, and ε ~ N(0, 1) is Gaussian noise with known mean and variance (normalized, without loss of generality, to 1). The agent is uncertain about the return to effort and learns over θ ∈ Θ = [θ̲, θ̄] ⊆ [0, ∞), but holds a fixed belief about ability: they assume the outcome is generated by y = (α + a)θ + ε, where α ∈ (0, ∞) is perceived ability and may differ from α*. The agent is overconfident if α > α* and underconfident if α < α*. For each θ ∈ Θ, the agent's model Q_θ(· | a) corresponds to a normal distribution with mean (α + a)θ and known variance, and the true distribution Q(· | a) corresponds to a normal distribution with mean (α* + a)θ* and the same variance. These distributions admit densities with respect to the Lebesgue measure that vary continuously in a and θ, and they satisfy the uniform integrability condition with a quadratic envelope. The agent's payoff function is π(a, y) = y − c(a), where c: [0, ∞) → R is increasing, strictly convex, and continuously differentiable, with c(0) = c'(0) = 0 and lim_{a→∞} c'(a) = ∞. Since the marginal return to effort is bounded above by θ̄ and the marginal cost diverges, it follows that the agent's optimal actions lie in a compact subset [0, ā], where ā = (c')^{-1}(θ̄). Thus, the example satisfies all parts of the Assumption above. ◆

Kullback-Leibler divergence. The KL divergence is a function K: Θ × A → R, defined for any parameter value θ ∈ Θ and action a ∈ A as

K(θ, a) := ∫_Y ln( q(y | a) / q_θ(y | a) ) q(y | a) ν(dy).   (1)

Furthermore, for any σ ∈ ∆A, we define

Θ_m(σ) := argmin_{θ ∈ Θ} ∫ K(θ, a) σ(da).

The KL divergence measures the "distance" between the true model Q and the parametric model Q_θ. The set Θ_m(σ) consists of the parameter values θ ∈ Θ whose associated model Q_θ provides the best fit to the true model, in the sense of minimizing the expected Kullback-Leibler divergence given data generated by Q when actions are drawn according to σ. By continuity and uniform integrability (assumptions (iv) and (v)), K is jointly continuous. Therefore, Θ_m(·) is upper hemi-continuous, nonempty-valued, and compact-valued.

Example (continued). The KL divergence is K(θ, a) = (1/2) ((α + a)θ − (α* + a)θ*)^2. There is a unique minimizer of the expected KL divergence for any distribution σ ∈ ∆A, given by

θ_m(σ) = θ* · ∫ (α + a)(α* + a) σ(da) / ∫ (α + a)^2 σ(da) = θ* + θ* (α* − α) ∫ (α + a) σ(da) / ∫ (α + a)^2 σ(da).
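As a quick numerical sanity check, the closed form for θ_m(σ) can be verified against a brute-force minimization of the expected KL divergence. The sketch below uses hypothetical parameter values (θ* = 2, α* = 1, α = 1.5) chosen only for illustration; they are not from the paper.

```python
import numpy as np

# Hypothetical parameters: true return, true ability, perceived ability.
theta_star, alpha_star, alpha = 2.0, 1.0, 1.5

def kl(theta, a):
    """K(theta, a) between N((alpha*+a)theta*, 1) and N((alpha+a)theta, 1)."""
    return 0.5 * ((alpha + a) * theta - (alpha_star + a) * theta_star) ** 2

def theta_m(actions, weights):
    """Closed-form KL minimizer for a discrete action distribution sigma."""
    actions, weights = np.asarray(actions), np.asarray(weights)
    num = theta_star * np.sum(weights * (alpha + actions) * (alpha_star + actions))
    den = np.sum(weights * (alpha + actions) ** 2)
    return num / den

# Compare against a grid minimization of the expected KL divergence.
sigma_a, sigma_w = [0.5, 2.0], [0.3, 0.7]          # a two-point sigma
grid = np.linspace(0.01, 5.0, 100_001)
expected_kl = sum(w * kl(grid, a) for a, w in zip(sigma_a, sigma_w))
assert abs(grid[np.argmin(expected_kl)] - theta_m(sigma_a, sigma_w)) < 1e-3
```

Since α > α* here (overconfidence), the computed θ_m(σ) comes out below θ*, matching the bias direction discussed next.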
This expression shows that the estimated θ is a biased version of the true return θ*, with the direction and magnitude of the bias depending on the relationship between perceived and true ability:

• If α > α* (overconfidence), then θ_m(σ) < θ*: the agent underestimates the return to effort.
• If α < α* (underconfidence), then θ_m(σ) > θ*: the agent overestimates the return to effort.

In the special case where σ is degenerate at some action a ∈ A, the minimizer simplifies to

θ_m(δ_a) = θ* + θ* · (α* − α)/(α + a).

The size of the bias depends on the level of effort a. As a increases, the difference between α and α* becomes less important relative to a, so the bias shrinks. In particular, lim_{a→∞} θ_m(δ_a) = θ*. This reflects that at high effort levels, ability is swamped by effort in the signal, so the agent's misspecification about ability becomes less consequential for inference about θ*. ◆

Best response operator. For each Borel set of actions A ⊆ A, we define the best response set as

Γ(A) := F( ∪_{σ ∈ ∆A} ∆Θ_m(σ) ).   (2)

Note that Γ is nonempty-valued and upper hemi-continuous by the previous observation about Θ_m(·) and the fact that F is nonempty-valued and upper hemi-continuous. Equivalently,

Γ(A) = {a ∈ A : ∃ σ ∈ ∆A, µ ∈ ∆Θ_m(σ) such that a ∈ F(µ)}.

In other words, the set Γ(A) consists of all actions that the agent might choose (according to F) when she assigns probability one to the set of models that provide the best fit under some action distribution with support in A. This concept serves as a natural counterpart to the best response correspondence in game theory and aligns with our interpretation of KL divergence. Specifically, if feedback about consequences arises from actions drawn from the set A, then the models that provide the best fit are those that minimize KL divergence for some action distribution supported on A. Consequently, the agent will follow actions that are optimal for beliefs that assign probability one to these best-fit models.

Example (continued). The optimal action corresponding to a degenerate belief δ_θ is (c')^{-1}(θ), since the agent chooses a to solve θ = c'(a). For any Borel set A ⊆ A, the best response operator is

Γ(A) = { (c')^{-1}(θ_m(σ)) : σ ∈ ∆A }.   (3)

◆

Solution concept. We are now ready to define our solution concept for this environment.

Definition 1. An action a is Berk-Nash rationalizable if there exists a set A ⊆ A such that a ∈ A and A ⊆ Γ(A).

A Berk-Nash rationalizable action belongs to a set that is closed under the operator Γ. That is, if a set A ⊆ A satisfies A ⊆ Γ(A), then every action in A is rationalizable. Each action in such a set is optimal for a belief supported on model parameters that best fit data generated by some distribution over A. Different actions may be justified by different such distributions. A formal justification of this solution concept is provided in the next section.[5]

The solution concept typically used in the literature is that of a Berk-Nash equilibrium (EP). In terms of our best response operator, an action a ∈ A is a Berk-Nash equilibrium if a ∈ Γ({a}). An immediate implication is that a Berk-Nash equilibrium action is rationalizable, but the converse is not necessarily true. In particular, the set of rationalizable actions

[5] This is not equivalent to rationalizability in a game where player 1 chooses actions and player 2 chooses model parameters. That interpretation would correspond to a larger operator F( ∆( ∪_{σ ∈ ∆A} Θ_m(σ) ) ), which allows arbitrary mixtures over model parameters fit to different action distributions.
In contrast, our definition restricts attention to beliefs supported on Θ_m(σ) for some fixed σ ∈ ∆A.

may be larger than the set of equilibrium actions, as we now illustrate.[6]

Example (continued). Figure 1a illustrates an example in which the agent is overconfident. In this case, there are three Berk-Nash equilibrium actions, denoted a_S, a_M, and a_L, each satisfying the fixed point condition a_j = (c')^{-1}(θ_m(δ_{a_j})) for j = S, M, L. Note also that the image of the function a ↦ (c')^{-1}(θ_m(δ_a)) over a ∈ [a_S, a_L] is the interval [a_S, a_L]. Since this image is contained in Γ([a_S, a_L]) (by expression (3)), the interval [a_S, a_L] is closed under the best response operator Γ. It follows that every action in this interval is rationalizable. In Section 4, we develop results that help characterize and compute the set of rationalizable actions, and show that, in this example, the interval [a_S, a_L] coincides exactly with that set. ◆

3 Justification of solution concept

This section justifies Berk-Nash rationalizability in a dynamic environment. We consider an agent who learns by Bayesian updating and chooses myopically optimal actions over time.[7] The agent begins with a full-support prior µ_0 ∈ ∆Θ over a space of models Θ. At each discrete time t ≥ 1:

• The agent holds a belief µ_t ∈ ∆Θ,
• selects an action a_t ∈ F(µ_t), where F: ∆Θ → A is the agent's action correspondence,
• observes a consequence y_t drawn from the distribution Q(· | a_t),
• and updates beliefs via Bayes' rule: µ_{t+1} = Bay(a_t, y_t, µ_t),

[6] EP also considered mixed actions. Every action in the support of an equilibrium mixed action is rationalizable.
[7] EP consider forward-looking agents under a condition (weak identification) that guarantees incentives to experiment vanish in the long run. The same condition and argument can be used to extend Theorem 1 below to forward-looking agents.

where, for any Borel set S ⊆ Θ,

Bay(a, y, µ)(S) = ∫_S q_θ(y | a) µ(dθ) / ∫_Θ q_θ(y | a) µ(dθ)   for Q(· | a)-almost all y.[8]

3.1 Asymptotic beliefs

Before analyzing endogenous action selection, we first study Bayesian learning given a fixed sequence of actions. Let a^∞ = (a_1, a_2, ...) denote a deterministic or stochastic sequence of actions. Let A^∞ denote the space of infinite action sequences, endowed with the product topology. In each period t ≥ 1, given a_t, an outcome y_t is drawn independently from the distribution Q(· | a_t). Let Y^∞ denote the space of infinite outcome sequences, also endowed with the product topology. Define P(· | a^∞) as the probability measure over Y^∞ induced by the product of the conditional distributions, P(· | a^∞) = ⨂_{t=1}^{∞} Q(· | a_t). Thus, P(· | a^∞) governs the law of outcomes conditional on the given sequence of actions. The empirical action distribution up to time t is denoted by σ_t ∈ ∆A and defined, for any Borel set A ⊆ A, by

σ_t(A) = (1/t) Σ_{τ=1}^{t} 1_A(a_τ).

When A is finite, σ_t coincides with the usual notion of action frequencies. The following result characterizes asymptotic beliefs.

Lemma 1. Fix any a^∞. Almost surely with respect to P(· | a^∞), the following holds: suppose that there exists a subsequence (t_k)_k such that σ_{t_k} → σ. Then, for every closed set E ⊆ Θ satisfying E ∩ Θ_m(σ) = ∅
, there exist constants C > 0, ρ > 0, and an integer K such that, for all k ≥ K,

µ_{t_k}(E) ≤ C exp{−ρ t_k}.   (4)

In particular, if the subsequence (µ_{t_k})_k converges to some µ, then µ ∈ ∆Θ_m(σ).

Lemma 1 says that along any subsequence where the empirical action distribution converges, the posterior probability assigned to any set of models incompatible with the limiting action distribution converges to zero.[9] Consequently, any limit belief must be supported on the set of models that best explain the observed limiting behavior.

[8] Assumption (v) implies that q_θ(· | a) > 0 for all θ ∈ Θ, Q(· | a)-almost surely, so Bayes' rule is well defined for Q(· | a)-a.e. y.
[9] The convergence is exponentially fast, but as shown in the proof, the speed of convergence is not relevant for characterizing the limiting belief.

The idea originates in Berk (1966)'s analysis of misspecified models under i.i.d. data and has since been extended to dynamic learning settings, including by EP, who use it to characterize long-run behavior under misspecified learning. Lemma 1 generalizes these results by allowing for a continuum of actions and by working with subsequences rather than requiring convergence of the empirical action distribution. Except for minor modifications, the argument for the lemma closely follows existing proofs and is therefore included in Appendix A.1.

3.2 Asymptotic behavior

We now return to the full environment where actions are selected endogenously. The infinite sequence of actions and consequences (a^∞, y^∞) belongs to the space Ω = A^∞ × Y^∞. Let P denote the probability measure over Ω induced by the agent's choice rule and the conditional distributions Q(· | a). The marginal distribution of P over A^∞ is denoted by P_{A^∞}.

An action a ∈ A is called a limit action of the sequence a^∞ = (a_1, a_2, ...) if there exists a subsequence (a_{t_k})_k such that a_{t_k} → a as k → ∞. Equivalently, a is a limit action if, for every open neighborhood U ⊆ A of a, there are infinitely many times t ∈ N such that a_t ∈ U. When A is finite, an action is a limit action if and only if it is played infinitely often.

Theorem 1. Almost surely with respect to P_{A^∞}, every limit action of a Bayesian agent is Berk-Nash rationalizable.

Proof. For each a^∞, let Y_{a^∞} denote the probability-one set of outcome sequences for which the statement of Lemma 1 holds. Define

A* := {a^∞ ∈ A^∞ : ∃ y^∞ ∈ Y_{a^∞} such that a_t ∈ F(µ_t(a^∞, y^∞)) for all t}.

In Appendix A.3, we show that A* has probability one. Fix any a^∞ ∈ A* and let A(a^∞) denote the set of its limit actions. To establish the result, we show that A(a^∞) ⊆ Γ(A(a^∞)). By definition of A*, there exists some y^∞ ∈ Y_{a^∞} such that a_t ∈ F(µ_t(a^∞, y^∞)) for all t. We fix one such y^∞ and proceed with the proof. Let a ∈ A(a^∞). Consider the subsequences (a_{t_k})_k, (σ_{t_k}(a^∞))_k, and (µ_{t_k}(a^∞, y^∞))_k such that: (i) lim a_{t_k} = a (this is possible because a is a limit action); (ii) lim µ_{t_k}(a^∞, y^∞) = µ and lim σ_{t_k}(a^∞) = σ (this is possible because these sequences lie in compact spaces, so convergent subsequences always exist); (iii) σ ∈ ∆A(a^∞) (this follows because A(a^∞) is the set of limit
These ο¬ndings para llel existing results on equilibrium notions. For example, EP show that if the action sequence converges to aβ, thenaβmust be a Berk-Nash equilibrium, but not every Berk-Nash equilibrium action is necessarily a limit action. 4 Characterization of rationalizability ThefollowingresultprovidesacharacterizationofthesetofBerk- Nashrationalizableactions, leveraging the monotonicity of the best response operator. Proposition 1. The set of Berk-Nash rationalizable actions, B βA, is nonempty, compact, and is the largest ο¬xed point of Ξ, meaning B= Ξ(B)andB βAfor anyAsuch that A= Ξ(A). It can be obtained iteratively, as B=β/intersectiondisplay k=0BkwithB0=AandBk+1= Ξ(Bk). Proof.LetP(A) denote the power set of A, and consider the partially ordered set ( P(A),β). Deο¬ne S:={AβA:AβΞ(A)}. Then the set of Berk-Nash rationalizable actions is given by Bβ²:=/uniondisplay AβSA, which is the supremum of S. Since Ξ is monotone, Tarskiβs ο¬xed point theorem implies thatBβ²is the largest ο¬xed point of Ξ. The iterative characterization follows from Kleeneβs 10Proof: If x /β A(aβ), there exists an open neighborhood Uβxsuch that atkβUonly ο¬nitely often. Hence,Οtk(U)β0, and by the Portmanteau lemma, Ο(U) = 0. As this holds for every such x, the support ofΟis contained in A(aβ), soΟββA(aβ). 10 ο¬xed point theorem, i.e., B=Bβ². Monotonicity of Ξ ensures that the sequence {Bk}is nested. Since Ais compact
|
https://arxiv.org/abs/2505.20708v1
|
and Ξ is nonempty-valued and upper hemicontinuous, eac h set Bkis nonempty and compact. The inο¬nite intersection of nested, none mpty compact sets is nonempty, so Bis nonempty. This typeofcharacterizationisstandardwhen rationalizabilityisdeο¬ nedasclosureunder a set-valued operator. Just as in other familiar settingsβsuch as b est response dynamics in gametheoryorbelief hierarchies inepistemic modelsβtherationalizab leset canbedescribed as the largest ο¬xed point of the operator and computed by iterate d application starting from the full domain. Example (continued) .IfAis compactβfor instance, a compact intervalβexpression ( 3) for Ξ can be simpliο¬ed further. For any ΟββA, deο¬ne the probability measure Ξ½Ο(da) :=(Ξ±+a)2Ο(da)/integraltext (Ξ±+a)2Ο(da). ThenΞΈm(Ο) can be written as a convex combination of degenerate values: ΞΈm(Ο) =/integraldisplay ΞΈm(Ξ΄a)Ξ½Ο(da). (5) SinceΞ½Οis a probability measure supported on A, equation ( 5) shows that ΞΈm(Ο) lies in the convex hull of the set {ΞΈm(Ξ΄a) :aβA}. Moreover, because ΞΈm(Ξ΄a) is continuous on the compact set A, it attains its minimum and maximum, and this convex hull is simply the interval /bracketleftbigg min aβAΞΈm(Ξ΄a),max aβAΞΈm(Ξ΄a)/bracketrightbigg . Applying ( cβ²)β1to this interval, we conclude that Ξ(A) =/bracketleftbigg (cβ²)β1/parenleftbigg min aβAΞΈm(Ξ΄a)/parenrightbigg ,(cβ²)β1/parenleftbigg max aβAΞΈm(Ξ΄a)/parenrightbigg/bracketrightbigg . (6) This step uses the fact that the image of a closed interval under a c ontinuous, strictly increasing function, ( cβ²)β1, is again a closed interval. This characterization of Ξ motivates the deο¬nition of a simpler opera tor that acts directly on actions. Deο¬ne T:AβAby T(a) := (cβ²)β1(ΞΈm(Ξ΄a)), 11 and note that the set of ο¬xed points of Tcoincides with the set of BerkβNash equilibrium actions. We will show that the operator Talso characterizes the best response operator Ξ. Overconο¬dence ( Ξ± > Ξ±β). 
In this case, the function ΞΈm(Ξ΄a) is increasing in a, soTis increasing. An increasing function may have multiple ο¬xed points, so m ultiple Berk-Nash equilibria are possible. For any interval A= [amin,amax], it follows from ( 6) that Ξ(A) = [T(amin),T(amax)]. To characterize the limit of iterated best responses, deο¬ne a sequ ence of intervals Ak= [ak min,ak max] by ak+1 min=T(ak min), ak+1 max=T(ak max), starting from a0 min= 0 and a0 max= Β―a, where Β―ais an upper bound on optimal actions. Then (ak min)kincreases, ( ak max)kdecreases, and both sequences converge. The limits aβ min= lim kββak min, aβ max= lim kββak max are ο¬xed points of T(hence, equilibria), and, by Proposition 1, the limiting interval [aβ min,aβ max] is the set of all rationalizable actions. This is the case in Figure 1a, where aS=aβ minandaL=aβ max. IfThas a unique ο¬xed point, the limit is a singleton and rationalizability coincides with equilibrium. Underconο¬dence ( Ξ± < Ξ±β). In this case, the function ΞΈm(Ξ΄a) is decreasing in a, soTis decreasing. Adecreasing functionhasatmost oneο¬xed point, so t here isaunique Berk-Nash equilibrium. For any interval A= [amin,amax], we have Ξ(A) = [T(amax),T(amin)]. To analyze the dynamics of Ξ, deο¬ne ak+1 min=T(ak max), ak+1 max=T(ak min), again starting from [0 ,Β―a]. Then ( ak min)kincreases, ( ak max)kdecreases, and both sequences converge to aβ min= lim kββak min, aβ max= lim kββak max, By Proposition 1, the limiting interval [ aβ min,aβ max] is the set of
|
https://arxiv.org/abs/2505.20708v1
|
rationalizable actions. The limits also satisfy T(aβ min) =aβ max, T(aβ max) =aβ min, 12 so they form a 2-cycle of Tand are ο¬xed points of T2. IfT2has a unique ο¬xed point, then it must also be a ο¬xed point of T, and the limit is a singleton. In that case, rationalizability coincides with equilibrium. If T2has multiple ο¬xed points, then aβ minandaβ maxare the smallest and largest among them, respectively. This latter situation is illustrated in Figure 1b. /diamondsolid ΞΈβ overconο¬dent: ΞΈm(δ·)mg cost: cβ²(Β·) aopt aSaMaL a (a) Overconο¬dent agentaβ minaβ maxΞΈβunderconο¬dent: ΞΈm(δ·)mg cost: cβ²(Β·) aopt a (b) Underconο¬dent agent Figure 1: Returns to eο¬ort example. The optimal action aoptis where the marginal cost curve intersects the true return ΞΈβ. BerkβNash equilibrium actions lie at intersections of the marginal co st curve and the KL-minimizing curve ΞΈm(δ·). In the overconο¬dent case, the set of rationalizable actions is [ aS,aL]; in the underconο¬dent case, it is the 2-cycle interval [ aβ min,aβ max]. Our approach is direct and avoids the use of dynamic or stochastic a pproximation tech- niques often required in convergence analyses. For example, Heidhues, KΛ oszegi and Strack (2018) show convergence in the overconο¬dent case by assuming a unique equilibrium, while Heidhues, KΛ oszegi and Strack (2021) allow for multiple equilibria but rely on normal priors. Moreover, neither paper establishes general results for the und erconο¬dent case. The example also highlights two useful patterns. In some casesβsu ch as when the agent is overconο¬dentβthe set of rationalizable actions can be characte rized by identifying the smallest andlargestBerkβNashequilibria. Inothercases, likeunder conο¬dence, thisnolonger holds, but it is still possible to simplify the analysis by restricting atten tion to degenerate beliefs and actions. 
In both cases, the structure of the environm ent allows us to reduce the problem of ο¬nding the full rationalizable set to a more tractable form . We now present two broad classes of environments where such simpliο¬cations are possib le: supermodular and single-peaked settings. Supermodular environments. 13 Deο¬nition 2. The environment is supermodular if (i) AandΞare compact sublattices of Euclidean space under the coordinatewise partial order, (ii)Uis quasi-supermodular in aand single crossing in (a,ΞΈ), and (iii) The negative of the KL function, βK, is quasi- supermodular in ΞΈand is single crossing in (ΞΈ,a).11 This supermodular environment ensures monotone comparative st atics: the optimal ac- tion is increasing in ΞΈ, and the type ΞΈthat best ο¬ts a given action is increasing in the action. In general, one can directly verify the conditions in Deο¬nition 2or rely on existing results from the literature on monotone comparative statics unde r uncertainty (e.g., Athey (2002)).12 A standard implication of supermodular environments (e.g., Milgrom and Shannon (1994);Topkis(1998)) is that, for each ΞΈ, the set F(δθ) admits a smallest and largest ele- ment, denoted a(ΞΈ) and Β―a(ΞΈ), respectively, both of which are continuous and nondecreasing inΞΈ. Similarly, for each a, the set Ξm(Ξ΄a) has a smallest and largest element, denoted ΞΈ(a) andΒ―ΞΈ(a), respectively, and both are continuous and nondecreasing in a. Proposition 2. In supermodular environments, there exist smallest and lar gest Berk-Nash rationalizable
https://arxiv.org/abs/2505.20708v1
actions, denoted $a_S$ and $a_L$, respectively. These actions are also the smallest and largest Berk–Nash equilibrium actions. They can be constructed as the limits of the monotone sequences
$$a^{k+1}_S = \underline{a}(\underline{\theta}(a^k_S)), \qquad a^{k+1}_L = \bar{a}(\bar{\theta}(a^k_L)),$$
starting from $a^0_S = \min A$ and $a^0_L = \max A$. The limits $a_S = \lim_{k\to\infty} a^k_S$ and $a_L = \lim_{k\to\infty} a^k_L$ are fixed points of the respective composition functions: $a_S = \underline{a}(\underline{\theta}(a_S))$ and $a_L = \bar{a}(\bar{\theta}(a_L))$.

In particular, Proposition 2 shows that all Berk–Nash rationalizable actions lie within the interval $[a_S, a_L]$. These bounds are defined as fixed points over degenerate actions and beliefs, making them far simpler to compute than the full rationalizable set, which requires iterating the best response operator over mixed strategies and belief supports. In applications, such bounds are typically informative and can be used to conduct comparative statics.

(Footnote 11: A function $f: X \times T \to \mathbb{R}$ defined over a lattice $X$ and a partially ordered set $T$ is quasi-supermodular in $x$ if for all $x, x'$ and all $t$, $f(x,t) \ge (>)\, f(x \wedge x', t) \implies f(x \vee x', t) \ge (>)\, f(x', t)$. It is single crossing in $(x,t)$ if, for all $x' > x$ and $t' > t$, $f(x',t) \ge (>)\, f(x,t) \implies f(x',t') \ge (>)\, f(x,t')$.)

(Footnote 12: When the sets of actions, consequences, and models are one-dimensional intervals, the environment is supermodular provided that (i) $\pi$ is single crossing in $(a,y)$, (ii) $q$ and $q_\theta$ are strictly positive, (iii) $q(y|a)$ satisfies the MLRP (monotone likelihood ratio property) in $(a,y)$, and (iv) $q_\theta(y|a)$ satisfies the MLRP separately in $(a,y)$, $(\theta,a)$, and $(\theta,y)$.)

Moreover, when $a_S = a_L$, all rationalizable notions collapse to a unique action, implying convergence of the agent's action. We prove Proposition 2 in the Appendix, following an argument similar to that of Milgrom and Roberts (1990), who showed that in supermodular games the smallest and largest serially undominated strategy profiles coincide with pure Nash equilibria. A key difference in our setting arises from the structure of the operator $\Lambda$, which maps a set of actions $A' \subset A$ to $F(\cup_{\sigma \in \Delta A'} \Theta_m(\sigma))$.
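The monotone sequences in Proposition 2 translate directly into a fixed-point iteration. A minimal sketch, under the simplifying assumption that the smallest and largest selections coincide ($\underline{a} = \bar{a}$ and $\underline{\theta} = \bar{\theta}$); the maps below are toy nondecreasing functions, not from the paper:

```python
def equilibrium_bounds(a_of_theta, theta_of_a, a_min, a_max, tol=1e-12, max_iter=10000):
    """Iterate a_S^{k+1} = a(theta(a_S^k)) from min A and a_L^{k+1} from max A.
    With nondecreasing maps, (a_S^k) is nondecreasing and (a_L^k) nonincreasing,
    so both sequences converge to fixed points of the composition."""
    aS, aL = a_min, a_max
    for _ in range(max_iter):
        nS = a_of_theta(theta_of_a(aS))
        nL = a_of_theta(theta_of_a(aL))
        if abs(nS - aS) < tol and abs(nL - aL) < tol:
            break
        aS, aL = nS, nL
    return aS, aL

# Toy nondecreasing maps on A = Theta = [0, 1] (illustrative only)
aS, aL = equilibrium_bounds(lambda t: 0.5 + 0.4 * t, lambda a: a, 0.0, 1.0)
```

All rationalizable actions then lie in $[a_S, a_L]$; in this toy run the two limits coincide, which is the unique-equilibrium case.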
Unlike in standard games, this operator does not permit an interpretation where one player selects actions from $A$ while the other selects models from $\Theta$, because different models $\theta_1, \theta_2 \in \Theta$ may be supported by distinct beliefs $\sigma_1, \sigma_2 \in \Delta A$, and mixtures over $\sigma_1$ and $\sigma_2$ may not be feasible in justifying an action under $\Lambda$. To address this, we instead work with the "larger" operator $A' \mapsto F(\Delta \Theta_m(\Delta A'))$, which allows for separate iteration over the spaces of actions and models. We then adopt the approach of Milgrom and Roberts (1990) to characterize its infinite iteration and show that it provides tight bounds for the set of Berk–Nash rationalizable actions.

Single-peaked environments.

Definition 3. The environment is single peaked if (i) $A$ and $\Theta$ are compact intervals in $\mathbb{R}$, (ii) $U(a,\theta)$ is strictly quasi-concave in $a$ for each $\theta$, and (iii) $K(\theta,a)$ is strictly quasi-convex in $\theta$ for each $a$.

This structure guarantees that optimal actions and best-fitting models are uniquely determined by degenerate inputs. We denote by $\hat{a}: \Theta \to A$ the function mapping each model $\theta$ to its unique optimal action, and by $\hat{\theta}: A \to \Theta$ the function mapping each action $a$ to the unique model that best fits it. Both functions are well defined and continuous by strict quasi-concavity and strict quasi-convexity, respectively. Moreover, utility and model fit degrade strictly as one moves away from these optima, in the sense that suboptimal actions or models yield strictly worse performance the further they are from the best-fitting value.

Proposition 3. In single-peaked environments, the set of rationalizable actions is given by $B = \cap_{k=0}^{\infty} \hat{B}^k$,
where $\hat{B}^0 = A$ and $\hat{B}^{k+1} = T(\hat{B}^k)$, with $T(\cdot) = \hat{a} \circ \hat{\theta}(\cdot)$.

This characterization is convenient because the operator $\Lambda$ is defined over mixed actions and mixed beliefs, which can be difficult to work with directly. In contrast, the operator $T$ in Proposition 3 allows us to focus entirely on degenerate actions and beliefs, while still capturing the full set of rationalizable actions.

A Appendix

A.1 Proof of Lemma 1

Define
$$L_t(a^\infty, y^\infty)(\theta) := t^{-1} \sum_{\tau=1}^{t} \ln \frac{q(y_\tau \mid a_\tau)}{q_\theta(y_\tau \mid a_\tau)}.$$
We will use the following fact:

Uniform SLLN. For any $a^\infty$, there exists $Y_{a^\infty}$ with $P(Y_{a^\infty} \mid a^\infty) = 1$ such that, for any $y^\infty \in Y_{a^\infty}$ and every $\varepsilon > 0$, there exists $T^1_\varepsilon(a^\infty, y^\infty)$ such that for all $t \ge T^1_\varepsilon(a^\infty, y^\infty)$,
$$\sup_{\theta \in \Theta} \Big| L_t(a^\infty, y^\infty)(\theta) - \int K(\theta, a)\, \sigma_t(a^\infty)(da) \Big| < \varepsilon. \quad (7)$$
See Appendix A.2 for a proof of this result.

From this point on, we fix any $a^\infty$ and consider any $y^\infty \in Y_{a^\infty}$. For simplicity, we drop $(a^\infty, y^\infty)$ from the notation. Let $(\sigma_{t_k})_k$ be a subsequence converging to $\sigma$ and let $E \subset \Theta$ be a closed set disjoint from $\Theta_m(\sigma)$. We rely on the following two results:

Approximate $(\sigma_{t_k})_k$ with its limit $\sigma$. Since $K$ is continuous and $\Theta$ is compact, for every $\varepsilon > 0$ there exists $T^2_\varepsilon$ such that, for all $k$ with $t_k \ge T^2_\varepsilon$,
$$\sup_{\theta \in \Theta} \Big| \int K(\theta,a)\, \sigma_{t_k}(da) - \int K(\theta,a)\, \sigma(da) \Big| < \varepsilon.$$

Uniform separation. Since $K$ is continuous and $E$ is closed in a compact set (hence compact), there exists $\delta > 0$ such that for all $\theta \in E$,
$$\int K(\theta,a)\, \sigma(da) \ge K^*(\sigma) + \delta, \quad \text{where } K^*(\sigma) := \min_{\theta \in \Theta} \int K(\theta,a)\, \sigma(da).$$

We now prove (4) in Lemma 1. For any $\xi > 0$, note that
$$\mu_{t_k}(E) = \frac{\int_E e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta)}{\int_\Theta e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta)} \le \frac{\int_E e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta)}{\int_{\Theta_\xi(\sigma)} e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta)},$$
where $\Theta_\xi(\sigma) := \{\theta \in \Theta : \int K(\theta,a)\, \sigma(da) \le K^*(\sigma) + \xi\}$. Consider first the numerator.
From the Uniform SLLN, the approximation of $(\sigma_{t_k})_k$ by its limit $\sigma$, and Uniform separation, it follows that, for all $k$ such that $t_k \ge \max\{T^1_\varepsilon, T^2_\varepsilon\}$, $L_{t_k}(\theta) \ge K^*(\sigma) + \delta - 2\varepsilon$ for all $\theta \in E$, and so
$$\int_E e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta) \le \mu_0(E)\, e^{-t_k (K^*(\sigma) + \delta - 2\varepsilon)}. \quad (8)$$

Next, consider the denominator. From the Uniform SLLN and uniform convergence, it follows that, for all $k$ such that $t_k \ge \max\{T^1_\varepsilon, T^2_\varepsilon\}$, $L_{t_k}(\theta) \le K^*(\sigma) + \xi + 2\varepsilon$ for all $\theta \in \Theta_\xi(\sigma)$, and so
$$\int_{\Theta_\xi(\sigma)} e^{-t_k L_{t_k}(\theta)}\, \mu_0(d\theta) \ge \mu_0(\Theta_\xi(\sigma))\, e^{-t_k (K^*(\sigma) + \xi + 2\varepsilon)}. \quad (9)$$

Combining (8) and (9) and setting $\varepsilon = \xi < \delta/10$, we obtain that for all $k$ such that $t_k \ge \max\{T^1_\varepsilon, T^2_\varepsilon\}$,
$$\mu_{t_k}(E) \le \frac{\mu_0(E)}{\mu_0(\Theta_\xi(\sigma))}\, e^{-t_k (\delta/2)}.$$
Since $\Theta$ is compact and $\theta \mapsto \int K(\theta,a)\, \sigma(da)$ is continuous, a minimizer $\theta^* \in \Theta$ exists and satisfies $\theta^* \in \Theta_\xi(\sigma)$. By continuity, there exists an open ball around $\theta^*$ contained in $\Theta_\xi(\sigma)$. Since $\mu_0$ has full support, this ball has positive measure, so $\mu_0(\Theta_\xi(\sigma)) > 0$. Equation (4) in the statement of Lemma 1 then follows by setting $\rho = \delta/2 > 0$ and $C = \mu_0(E)/\mu_0(\Theta_\xi(\sigma))$. Since $\mu_0(E) \ge 0$ and $\mu_0(\Theta_\xi(\sigma)) > 0$, it follows that $C \ge 0$ and $C < \infty$.

Suppose, in addition, that $(\mu_{t_k})_k$ converges to $\mu$. By equation (4), for any closed $E$ such that $E \cap \Theta_m(\sigma) = \emptyset$, $\liminf_{k\to\infty} \mu_{t_k}(E) = 0$. By the Portmanteau lemma, $\mu(E) \le \liminf_{k\to\infty} \mu_{t_k}(E) = 0$. Now consider any $\theta \in \Theta \setminus \Theta_m(\sigma)$. There exists an open neighborhood $U_\theta$ with closure $\bar{U}_\theta$ such that $\bar{U}_\theta \cap \Theta_m(\sigma) = \emptyset$. Since $\bar{U}_\theta$ is closed, $\mu(\bar{U}_\theta) = 0$, and so $\theta$ is not in the support of $\mu$. Thus, $\operatorname{supp} \mu \subset \Theta_m(\sigma)$.

A.2 Uniform SLLN

Esponda, Pouzo and Yamamoto (2021) (henceforth, EPY 2021) prove the uniform SLLN (equation 7) for the case of a finite number of actions (see their Lemma 2). The proof for the case where $A$ is a compact subset of Euclidean space is essentially identical. We point out the two places where the more general $A$ makes a difference. In Step 1 of their proof, they use
the fact that there is a function $g_a \in L^2(Y, \mathbb{R}, Q(\cdot \mid a))$ such that $\sup_{\theta' \in O(\theta,\varepsilon)} (g(\theta', y, a))^2 \le (g_a(y))^2$, where $g(\theta, y, a) := \log(q(y \mid a)/q_\theta(y \mid a))$ and $O(\theta,\varepsilon) := \{\theta' : \|\theta' - \theta\| < \varepsilon\}$. In our case, we need a function $G$ that does not depend on $a$; hence our assumption (v).

In Step 2 of their proof, they conclude that, for each $\theta \in \Theta$, $\Pi_\delta(a) := \int \sup_{\theta' \in O(\theta,\delta)} |g(\theta', y, a) - g(\theta, y, a)|\, Q(dy \mid a)$ vanishes as $\delta$ converges to zero. The proof requires convergence to be uniform in $A$, which is immediate if $A$ is finite. We now establish such uniform convergence for our more general case. By pointwise convergence and the Arzelà–Ascoli Theorem, it suffices to establish equicontinuity of the class $(\Pi_\delta)_{\delta \ge 0}$. To do so, take any $\epsilon > 0$ and observe that for any $a, a' \in A$,
$$|\Pi_\delta(a) - \Pi_\delta(a')| \le \Big| \int \sup_{\theta' \in O(\theta,\delta)} |g(\theta', y, a') - g(\theta, y, a')|\, (q(y \mid a) - q(y \mid a'))\, \nu(dy) \Big| + \Big| \int \sup_{\theta' \in O(\theta,\delta)} |\log q_{\theta'}(y \mid a) - \log q_{\theta'}(y \mid a')|\, q(y \mid a)\, \nu(dy) \Big| =: I + II.$$

Under Assumption (v), the term $I$ is bounded by $2\int G(y)\, |q(y \mid a) - q(y \mid a')|\, \nu(dy)$. Under Assumption (iv), there exists a $\gamma > 0$ (not depending on $\delta$) such that $2\int G(y)\, |q(y \mid a) - q(y \mid a')|\, \nu(dy) \le 0.5\epsilon$ for any $a, a'$ such that $\|a - a'\| \le \gamma$.

Regarding the term $II$, under Assumptions (iv) and (v), $(\theta, a) \mapsto \log q_\theta(y \mid a)$ is jointly uniformly continuous, so, for any $\epsilon' > 0$ and $Q(\cdot \mid a)$-almost all $y$, there exists a $\gamma'_y > 0$ (not depending on $\theta$) such that $|\log q_\theta(y \mid a) - \log q_\theta(y \mid a')| \le \epsilon'$ for all $a, a'$ such that $\|a - a'\| \le \gamma'_y$. Hence, $\sup_{\theta' \in O(\theta,\delta)} |\log q_{\theta'}(y \mid a) - \log q_{\theta'}(y \mid a')| \le \epsilon'$ for all such $a, a'$.
By the DCT, under Assumption (v), this result implies that there exists a $\gamma'$ such that the term $II$ is bounded by $\epsilon'$ for all $a, a'$ such that $\|a - a'\| \le \gamma'$. By taking $\epsilon' = 0.5\epsilon$, these results show that for all $a, a'$ such that $\|a - a'\| \le \min\{\gamma, \gamma'\}$, $|\Pi_\delta(a) - \Pi_\delta(a')| \le \epsilon$ for any $\delta \ge 0$, thus establishing equicontinuity as desired.

A.3 Proof that $A^\infty$ in Theorem 1 has probability one

Let $\Omega_1 := \{(a^\infty, y^\infty) \in \Omega : a_t \in F(\mu_t(\omega))\}$. By assumption, every feasible history is in this set. For each $a^\infty$, let $Y_{a^\infty}$ denote the probability-one set of outcome sequences for which the conclusion of Lemma 1 holds. By the law of total probability, the set $\Omega_2 := \{(a^\infty, y^\infty) \in \Omega : y^\infty \in Y_{a^\infty}\}$ has probability 1. (Footnote 13: $P(\Omega_2) = \int_{A^\infty} P(Y_{a^\infty} \mid a^\infty)\, P_{A^\infty}(da^\infty) = \int_{A^\infty} 1\, P_{A^\infty}(da^\infty) = 1$.) Therefore, the intersection of these two sets, $\Omega^* := \Omega_1 \cap \Omega_2$, also has probability one. Since $P(\Omega^*) = 1$ and since $A^\infty$ defined in Theorem 1 is the projection of $\Omega^*$ onto $A^\infty$, the law of total probability implies that $P_{A^\infty}(A^\infty) = 1$. This is the set of actions for which the statement in Theorem 1 holds.

A.4 Proof of Proposition 2

Define the operator $\varphi: 2^\Theta \to 2^A$ by $\varphi(E) = F(\Delta E)$ for all $E \subset \Theta$, and the operator $\psi: 2^A \to 2^\Theta$ by $\psi(A') = \Theta_m(\Delta A')$ for all $A' \subset A$. Observe that for any $A' \subset A$,
$$\Lambda(A') = F(\cup_{\sigma \in \Delta A'} \Theta_m(\sigma)) \subset (\varphi \circ \psi)(A') = F(\Delta \Theta_m(\Delta A')). \quad (10)$$
Letting $B^0 = A$ and defining the sequence $B^{k+1} = \Lambda(B^k)$, it follows from Proposition 1 that the set of Berk–Nash rationalizable actions is given by $B = \cap_{k=0}^{\infty} B^k$. Moreover, by equation (10), this set satisfies
$$B \subset \cap_{k \ge 0}\, (\varphi \circ \psi)^k(A). \quad (11)$$
The rest of the proof characterizes the infinite intersection above. This relies on the following claim.

Claim 1. (i) For any $\theta_l \le \theta_h$, $\varphi([\theta_l, \theta_h]) \subset [\underline{a}(\theta_l), \bar{a}(\theta_h)]$. (ii) For any $a_l \le a_h$, $\psi([a_l, a_h]) \subset [\underline{\theta}(a_l), \bar{\theta}(a_h)]$.

Proof. (i) Consider $a \not\le \bar{a}(\theta_h)$. Since $\bar{a}(\theta_h)$ is the largest maximizer of $U(\cdot, \theta_h)$ and $a \vee \bar{a}(\theta_h) >$
SinceUis single crossing in ( a,ΞΈ), this implies U(aβ¨Β―a(ΞΈh),ΞΈ)< U(Β―a(ΞΈh),ΞΈ) for all ΞΈβ[ΞΈl,ΞΈh]. Since Uis quasi-supermodular in a, this impliesU(a,ΞΈ)< U(aβ§Β―a(ΞΈh),ΞΈ) for all ΞΈβ[ΞΈl,ΞΈh]. Thus, for any a/\e}atio\slashβ€Β―a(ΞΈh) and any Β΅ supported on [ ΞΈl,ΞΈh], we have/integraltext U(a,ΞΈ)Β΅(dΞΈ)</integraltext U(aβ§Β―a(ΞΈh),ΞΈ)Β΅(dΞΈ), implying a /βF(Β΅). A similar argument establishes that for any a/\e}atio\slashβ₯a(ΞΈl) and any Β΅supported on[ ΞΈl,ΞΈh],a /βF(Β΅). (ii)
Consider $\theta \not\le \bar{\theta}(a_h)$. Since $\bar{\theta}(a_h)$ is the largest minimizer of $K(\cdot, a_h)$ and $\theta \vee \bar{\theta}(a_h) > \bar{\theta}(a_h)$, then $K(\bar{\theta}(a_h), a_h) < K(\theta \vee \bar{\theta}(a_h), a_h)$. Since $-K$ is single crossing in $(\theta, a)$, this implies $K(\bar{\theta}(a_h), a) < K(\theta \vee \bar{\theta}(a_h), a)$ for all $a \in [a_l, a_h]$. Since $-K$ is quasi-supermodular in $\theta$, this implies $K(\theta \wedge \bar{\theta}(a_h), a) < K(\theta, a)$ for all $a \in [a_l, a_h]$. Thus, for any $\theta \not\le \bar{\theta}(a_h)$ and any $\sigma$ supported on $[a_l, a_h]$, we have $\sum_a K(\theta \wedge \bar{\theta}(a_h), a)\, \sigma(a) < \sum_a K(\theta, a)\, \sigma(a)$, implying $\theta \notin \Theta_m(\sigma)$. A similar argument establishes that for any $\theta \not\ge \underline{\theta}(a_l)$ and any $\sigma$ supported on $[a_l, a_h]$, $\theta \notin \Theta_m(\sigma)$.

To characterize the iterative application of $\varphi \circ \psi$, we define sequences $(a^k_S)_k$ and $(a^k_L)_k$ such that
$$a^{k+1}_S = \underline{a}(\underline{\theta}(a^k_S)) \quad \text{and} \quad a^{k+1}_L = \bar{a}(\bar{\theta}(a^k_L)),$$
where $a^0_S$ and $a^0_L$ are the smallest and largest actions in $A$, respectively. Consider the sequence of sets defined by $A^{k+1} = [a^{k+1}_S, a^{k+1}_L]$, where $A^0 = A = [a^0_S, a^0_L]$ is the set of all actions. Since $a^0_S$ is the smallest element and $\underline{a}(\underline{\theta}(\cdot))$ is nondecreasing, the sequence $(a^k_S)_k$ is nondecreasing. Similarly, the sequence $(a^k_L)_k$ is nonincreasing. Therefore, since $A$ is compact, these sequences converge. Let $a_S$ and $a_L$ denote the limits. Since $\underline{a} \circ \underline{\theta}$ is continuous, we can pass the limit through the composition: $a_S = \lim_{k\to\infty} a^{k+1}_S = \lim_{k\to\infty} \underline{a}(\underline{\theta}(a^k_S)) = \underline{a}(\underline{\theta}(a_S))$, so $a_S$ is a fixed point of the map $\underline{a} \circ \underline{\theta}$. Similarly, $a_L$ is a fixed point of $\bar{a} \circ \bar{\theta}$. Moreover,
$$\cap_{k \ge 0} A^k = \cap_{k \ge 0}\, [a^k_S, a^k_L] = [a_S, a_L]. \quad (12)$$
By Claim 1, for all $k$,
$$(\varphi \circ \psi)(A^k) \subset \varphi([\underline{\theta}(a^k_S), \bar{\theta}(a^k_L)]) \subset [\underline{a}(\underline{\theta}(a^k_S)), \bar{a}(\bar{\theta}(a^k_L))] = A^{k+1}. \quad (13)$$
Therefore, it follows from (11), (12), and (13) that $B \subset [a_S, a_L]$.

We conclude by showing that $a_L$ is the largest (Berk–Nash) equilibrium action. A similar argument holds for $a_S$ and is omitted. Recall that $BNE = \{a \in A : a \in \Lambda(a)\} = \{a \in A : a \in F(\Delta \Theta_m(\delta_a))\}$ is the set of Berk–Nash equilibrium actions. Since $a_L = \bar{a}(\bar{\theta}(a_L))$, then $a_L \in F(\delta_{\bar{\theta}(a_L)})$, and since $\bar{\theta}(a_L) \in \Theta_m(\delta_{a_L})$, it follows that $a_L$ is an equilibrium.
To show that it is the largest equilibrium, observe that
$$BNE \subset \{a : a \in F(\Delta [\underline{\theta}(a), \bar{\theta}(a)])\} = \{a : a \in \varphi([\underline{\theta}(a), \bar{\theta}(a)])\} \subset \{a : a \in [\underline{a}(\underline{\theta}(a)), \bar{a}(\bar{\theta}(a))]\}, \quad (14)$$
where the first inclusion holds because $\Theta_m(\delta_a)$ is contained in $[\underline{\theta}(a), \bar{\theta}(a)]$, the equality by definition of $\varphi$, and the second inclusion by Claim 1(i). Let $y$ belong to the set defined in (14). Then $y \le \bar{a}(\bar{\theta}(y))$. Since $y \le a^0_L$ and $\bar{a}(\bar{\theta}(\cdot))$ is nondecreasing, $\bar{a}(\bar{\theta}(y)) \le \bar{a}(\bar{\theta}(a^0_L)) = a^1_L$. Iterating, $y \le a^k_L$ for all $k$. Since $a^k_L$ converges to $a_L$, it follows that $y \le a_L$. Since this is true for every $y$ in a set that contains all equilibria, it must also be true for all equilibria.

A.5 Proof of Proposition 3

We will establish that, for any compact interval $A' \subset A$,
$$\hat{a} \circ \hat{\theta}(A') = \Lambda(A'). \quad (15)$$
The result then follows from Proposition 1. We begin by showing that
$$\hat{\theta}(A') = \cup_{\sigma \in \Delta A'} \Theta_m(\sigma). \quad (16)$$
The inclusion $\subset$ is obvious. To show the reverse inclusion, we will establish that if $\theta \notin \hat{\theta}(A')$, then $\theta \notin \cup_{\sigma \in \Delta A'} \Theta_m(\sigma)$. Suppose $\theta \notin \hat{\theta}(A')$. By continuity of $\hat{\theta}(\cdot)$ and compactness of the interval $A'$, $\hat{\theta}(A')$ is a compact interval in $\mathbb{R}$; denote it by $[\underline{\theta}, \bar{\theta}]$. Suppose that $\theta > \bar{\theta}$ (the proof is similar if $\theta < \underline{\theta}$). Strict quasiconvexity of $K(\cdot, a)$ implies that, for all $a \in A'$, $K(\theta, a) > K(\bar{\theta}, a) \ge K(\hat{\theta}(a), a)$. Therefore, for any $\sigma \in \Delta A'$,
$$\int K(\theta, a)\, \sigma(da) > \int K(\bar{\theta}, a)\, \sigma(da),$$
implying that $\theta \notin \Theta_m(\sigma)$.

We now show equation (15). The inclusion $\subset$ is obvious. To show the reverse inclusion, we will establish that if $a \notin \hat{a} \circ \hat{\theta}(A')$, then $a \notin \Lambda(A')$. Suppose $a \notin \hat{a} \circ \hat{\theta}(A')$. By continuity of $\hat{a} \circ \hat{\theta}(\cdot)$ and compactness of the interval $A'$, $\hat{a} \circ \hat{\theta}(A')$ is a compact interval in $\mathbb{R}$; denote it by $[\underline{a}, \bar{a}]$. Suppose that $a > \bar{a}$ (the proof is
similar if $a < \underline{a}$). Strict quasiconcavity of $U(\cdot, \theta)$ implies that, for all $\theta \in \hat{\theta}(A')$, $U(a, \theta) < U(\bar{a}, \theta) \le U(\hat{a}(\theta), \theta)$. Therefore, for any $\mu \in \Delta \hat{\theta}(A')$,
$$\int U(a, \theta)\, \mu(d\theta) < \int U(\bar{a}, \theta)\, \mu(d\theta),$$
implying that $a \notin F(\Delta \hat{\theta}(A')) = F(\Delta(\cup_{\sigma \in \Delta A'} \Theta_m(\sigma)))$, where the equality follows from equation (16). Since $\Lambda(A') \subset F(\Delta(\cup_{\sigma \in \Delta A'} \Theta_m(\sigma)))$, then $a \notin \Lambda(A')$.

References

Athey, Susan. 2002. "Monotone comparative statics under uncertainty." The Quarterly Journal of Economics, 117(1): 187-223.
Berk, Robert H. 1966. "Limiting behavior of posterior distributions when the model is incorrect." The Annals of Mathematical Statistics, 37(1): 51-58.
Bernheim, B. Douglas. 1984. "Rationalizable strategic behavior." Econometrica: Journal of the Econometric Society, 1007-1028.
Bohren, J. Aislinn. 2016. "Informational herding with model misspecification." Journal of Economic Theory, 163: 222-247.
Bohren, J. Aislinn, and Daniel N. Hauser. 2021. "Learning with heterogeneous misspecified models: Characterization and robustness." Econometrica, 89(6): 3025-3077.
Eliaz, Kfir, and Ran Spiegler. 2020. "A model of competing narratives." American Economic Review, 110(12): 3786-3816.
Esponda, Ignacio. 2008. "Behavioral equilibrium in economies with adverse selection." American Economic Review, 98(4): 1269-1291.
Esponda, Ignacio, and Demian Pouzo. 2016. "Berk-Nash equilibrium: A framework for modeling agents with misspecified models." Econometrica, 84(3): 1093-1130.
Esponda, Ignacio, and Demian Pouzo. 2021. "Equilibrium in misspecified Markov decision processes." Theoretical Economics, 16(2): 717-757.
Esponda, Ignacio, Demian Pouzo, and Yuichi Yamamoto. 2021. "Asymptotic behavior of Bayesian learners with misspecified models." Journal of Economic Theory, 195: 105260.
Eyster, Erik, and Matthew Rabin. 2005. "Cursed equilibrium." Econometrica, 73(5): 1623-1672.
Frick, Mira, Ryota Iijima, and Yuhta Ishii. 2023.
"Belief convergence under misspecified learning: A martingale approach." The Review of Economic Studies, 90(2): 781-814.
Frick, Mira, Ryota Iijima, and Yuhta Ishii. 2024. "Welfare comparisons for biased learning." American Economic Review, 114(6): 1612-1649.
Fudenberg, Drew, Giacomo Lanzani, and Philipp Strack. 2021. "Limit points of endogenous misspecified learning." Econometrica, 89(3): 1065-1098.
Fudenberg, Drew, Giacomo Lanzani, and Philipp Strack. 2024. "Selective-Memory Equilibrium." Journal of Political Economy, 132(12): 3978-4020.
Fudenberg, Drew, Gleb Romanyuk, and Philipp Strack. 2017. "Active learning with a misspecified prior." Theoretical Economics, 12(3): 1155-1189.
Heidhues, Paul, Botond Kőszegi, and Philipp Strack. 2018. "Unrealistic expectations and misguided learning." Econometrica, 86(4): 1159-1214.
Heidhues, Paul, Botond Kőszegi, and Philipp Strack. 2021. "Convergence in models of misspecified learning." Theoretical Economics, 16(1): 73-99.
He, Kevin. 2022. "Mislearning from censored data: The gambler's fallacy and other correlational mistakes in optimal-stopping problems." Theoretical Economics, 17(3): 1269-1312.
He, Kevin, and Jonathan Libgober. 2025. "Higher-Order Beliefs and (Mis)learning from Prices." Working Paper.
Jehiel, Philippe. 2005. "Analogy-based expectation equilibrium." Journal of Economic Theory, 123(2): 81-104.
Milgrom, Paul, and Chris Shannon. 1994. "Monotone comparative statics." Econometrica: Journal of the Econometric Society, 157-180.
Milgrom, Paul, and John Roberts. 1990. "Rationalizability, learning, and equilibrium in games with strategic complementarities." Econometrica: Journal of the Econometric Society, 1255-1277.
Murooka, Takeshi, and Yuichi Yamamoto. 2021. "Multi-Player Bayesian Learning with Misspecified Models." Osaka School of International Public Policy, Osaka University.
Nyarko, Yaw. 1991.
"Learning in mis-specified models and the possibility of cycles." Journal of Economic Theory, 55(2): 416-427.
Pearce, David G. 1984. "Rationalizable strategic behavior and the problem of perfection."
arXiv:2505.20946v1 [math.ST] 27 May 2025

Almost Unbiased Liu Estimator in Bell Regression Model: Theory and Application

Caner Tanış* and Yasin Asar**
* Department of Statistics, Çankırı Karatekin University, e-mail: canertanis@karatekin.edu.tr
** Department of Mathematics and Computer Sciences, Necmettin Erbakan University, Konya, Turkey, e-mail: yasar@erbakan.edu.tr, yasinasar@hotmail.com

Abstract

In this research, we propose a novel regression estimator as an alternative to the Liu estimator for addressing multicollinearity in the Bell regression model, referred to as the "almost unbiased Liu estimator". Moreover, the theoretical characteristics of the proposed estimator are analyzed, along with several theorems that specify the conditions under which the almost unbiased Liu estimator outperforms its alternatives. A comprehensive simulation study is conducted to demonstrate the superiority of the almost unbiased Liu estimator and to compare it against the Bell Liu estimator and the maximum likelihood estimator. The practical applicability and advantage of the proposed regression estimator are illustrated through a real-world dataset. The results from both the simulation study and the real-world data application indicate that the new almost unbiased Liu regression estimator outperforms its counterparts based on the mean square error criterion.

Keywords: Bell Regression Model, Monte Carlo Simulation, Multicollinearity, Liu Estimator, Almost Unbiased Liu Estimator

Supplementary Information (SI): Appendices 0-5.

1 Introduction

Count regression models are useful for modeling data in various scientific fields such as biology, chemistry, physics, veterinary medicine, agriculture, engineering, and medicine (Walters, 2007). In the literature, the well-known count regression models are the Poisson, negative binomial, geometric, and their modified versions. The Poisson distribution has the limitation that the variance is equal to the mean.
This is a disadvantage for the Poisson regression model when modeling inflated data. Multicollinearity negatively affects the maximum likelihood method used to estimate the coefficients of the Poisson regression model. When multicollinearity is present, the disadvantages of the maximum likelihood estimator (MLE) are as follows: the variances and standard errors of the estimated regression coefficients increase, leading to inconsistent estimates. Furthermore, the multicollinearity problem causes unreliable hypothesis testing and wider confidence intervals for the estimated parameters (Månsson and Shukur, 2011; Amin et al., 2022).

The literature presents several approaches to address multicollinearity in multiple regression models. Liu (1993) introduced the Liu estimator, which provides a solution to multicollinearity by employing a single biasing parameter, resulting in the estimated coefficients being a linear function of $d$, unlike ridge regression. Recent studies have expanded upon this work by utilizing Liu estimators in various regression models. For instance, the Liu estimator has been extended to the logit and Poisson regression models, with methods proposed to select the biasing parameter. It has also been generalized to negative binomial regression, and researchers have introduced its use in gamma regression as a viable alternative to the maximum likelihood estimator when facing multicollinearity. Moreover, the application of Liu estimators has been explored in Beta regression models, where new variants of Liu-type estimators have been
https://arxiv.org/abs/2505.20946v1
developed to fit the specific needs of these regression models. More recent studies have proposed a novel Liu estimator for Bell regression, with performance evaluations conducted through simulation studies. Comparative analyses between ridge and Liu estimators have also been undertaken, particularly in the context of zero-inflated Bell regression models. Further advancements include the introduction of a two-parameter estimator for gamma regression, expanding the utility of Liu-type estimators in addressing multicollinearity across various regression models. Some of the recent references can be listed as follows: Månsson et al. (2011), Månsson et al. (2012), Månsson (2013), Qasim et al. (2018), Karlsson et al. (2020), Algamal and Asar (2020), Algamal and Abonazel (2022), Majid et al. (2022), Algamal et al. (2022), Asar and Algamal (2022), Akram et al. (2022).

Another method to address multicollinearity in multiple regression models is the almost unbiased estimator introduced by Kadiyala (1984). Recently, almost unbiased estimators have been introduced by several authors: Xinfeng (2015) introduced almost unbiased estimators in the logistic regression model. Al-Taweel and Algamal (2020) examined the performances of some almost unbiased ridge estimators in the zero-inflated negative binomial regression model. Asar and Korkmaz (2022) suggested an almost unbiased Liu-type estimator in the gamma regression model. Erdugan (2022) proposed an almost unbiased Liu-type estimator in the linear regression model. Omara (2023) introduced an almost unbiased Liu-type estimator in the tobit regression model. Ertan et al. (2023) proposed a new Liu-type estimator in the Bell regression model. Algamal et al. (2023) modified the Jackknifed ridge estimator for the Bell regression model.

This study provides a new almost unbiased Liu estimator as an alternative to the Liu estimator in the Bell regression model.
The suggested estimator is also compared to its competitors, namely the Liu estimator and the MLE, in terms of the scalar and matrix mean squared error criteria. Furthermore, one of the objectives of this study is to support the proposed theoretical findings through simulation studies and real data analysis evaluating the superiority of the proposed estimator over its competitors.

The rest of the study is organized as follows: In Section 2, the main properties of the Bell regression model, the definition of the Liu estimator, and a new almost unbiased Liu estimator for the Bell regression model are given. Section 3 compares the estimators via theoretical properties. We consider a comprehensive Monte Carlo simulation study to evaluate the performances of the examined estimators via simulated mean squared error (MSE) and squared bias (SB) criteria. Then, we provide a real-world data example to illustrate the superiority of the proposed estimator over its competitors in Section 5. Finally, concluding remarks are presented in Section 6.

2 Bell Regression Model

Bell (1934a,b) proposed the Bell distribution. The probability mass function (pmf) of the Bell distribution is
$$P(Y = y \mid \gamma) = \frac{\gamma^y \exp\{-\exp(\gamma) + 1\}\, B_y}{y!}, \quad y = 0, 1, 2, \ldots \quad (1)$$
where $\gamma > 0$ and $B_y$ denotes the Bell numbers, defined as
$$B_n = \frac{1}{e} \sum_{k=0}^{\infty} \frac{k^n}{k!}.$$
The mean and variance of the Bell distribution are given by
$$E(Y) = \gamma \exp(\gamma), \quad (2)$$
and
$$Var(Y) = \gamma(1 + \gamma)\exp(\gamma), \quad (3)$$
respectively (Castellares et al., 2018; Majid et al., 2022). The essential properties of the Bell distribution can be summarised as follows:

- The Bell distribution is a one-parameter
distribution.
- The Bell distribution is a member of the one-parameter exponential family of distributions.
- The Bell distribution is unimodal.
- The Poisson distribution does not belong to the Bell family of distributions, but if the parameter takes a small value, the Bell distribution approximates the Poisson distribution.
- The variance of the Bell distribution is greater than its mean, which indicates that the one-parameter Bell distribution can be suitable for modelling overdispersed data (Castellares et al., 2018; Majid et al., 2022).

Castellares et al. (2018) suggested Bell regression as an alternative to the Poisson, negative binomial, and other popular discrete regression models. In a regression model, it is often more useful to model the mean of the dependent variable. Therefore, to obtain a regression structure for the mean of the Bell distribution, the Bell regression model with a different parametrization of the probability function of the Bell distribution is defined by Castellares et al. (2018) as follows:

Let $\mu = \gamma \exp(\gamma)$, and therefore $\gamma = W_0(\mu)$, where $W_0(\cdot)$ is the Lambert W function. In this regard, the pmf of the Bell distribution is
$$P(Y = y \mid \mu) = \frac{W_0(\mu)^y \exp\{1 - \exp(W_0(\mu))\}\, B_y}{y!}, \quad y = 0, 1, 2, \ldots \quad (4)$$
The mean and variance of the Bell distribution are rewritten as
$$E(Y) = \mu, \quad (5)$$
and
$$Var(Y) = \mu\left(1 + W_0(\mu)\right), \quad (6)$$
where $\mu > 0$ and $W_0(\mu) > 0$. Thus, it is clear that $Var(Y) > E(Y)$, which means that the Bell distribution can be potentially suitable for modelling overdispersed count data, such as the negative binomial distribution. An advantage of the Bell distribution over the negative binomial distribution is that no additional (dispersion) parameter is required to accommodate overdispersion (Castellares et al., 2018).

Let $y_1, y_2, \ldots, y_n$ be $n$ independent random variables, where each $y_i$, for $i = 1, 2, \ldots, n$, follows the pmf (Eq.
4) with mean $\mu_i$; that is, $y_i \sim \mathrm{Bell}(W_0(\mu_i))$ for $i = 1, 2, \ldots, n$. Assume the mean of $y_i$ fulfils the following functional relation:
$$g(\mu_i) = \eta_i = x_i^\top \beta, \quad i = 1, 2, \ldots, n,$$
where $\beta = (\beta_1, \beta_2, \ldots, \beta_p)^\top \in \mathbb{R}^p$ represents a $p$-dimensional vector of regression coefficients ($p < n$), $\eta_i$ denotes the linear predictor, and $x_i^\top = (x_{i1}, x_{i2}, \ldots, x_{ip})$ corresponds to the observations of the $p$ known covariates. It is noted that the variance of $y_i$ depends on $\mu_i$ and, consequently, on the values of the covariates. As a result, models typically incorporate non-constant response variances. We assume that the mean link function $g: (0, \infty) \to \mathbb{R}$ is strictly monotonic and twice differentiable. Several options exist for the mean link function, including the logarithmic link $g(\mu) = \log(\mu)$, the square root link $g(\mu) = \sqrt{\mu}$, and the identity link $g(\mu) = \mu$, with the choice typically guided by the need to ensure positivity of the estimates. These functions are also discussed in McCullagh and Nelder (1989).

The parameter vector $\beta$ is estimated using the maximum likelihood method, and the log-likelihood function, excluding constant terms, is expressed as
$$\ell(\beta) = \sum_{i=1}^{n} \left[ y_i \log(W_0(\mu_i)) - \exp(W_0(\mu_i)) \right],$$
where $\mu_i = g^{-1}(\eta_i)$ is a function of $\beta$, and $g^{-1}(\cdot)$ is the inverse of $g(\cdot)$. The score function is given by the $p$-vector
$$U(\beta) = X^\top W^{1/2} V^{-1/2} (y - \mu),$$
where the model matrix $X = (x_1, x_2, \ldots, x_n)^\top$ has full column rank, $W = \mathrm{diag}\{w_1, w_2, \ldots, w_n\}$, $V = \mathrm{diag}\{V_1, V_2, \ldots, V_n\}$, $y = (y_1, y_2, \ldots, y_n)^\top$, $\mu = (\mu_1, \mu_2, \ldots, \mu_n)^\top$, and
$$w_i = \frac{(d\mu_i / d\eta_i)^2}{V_i}, \qquad V_i = \mu_i \left[1 + W_0(\mu_i)\right], \quad i = 1, 2, \ldots, n,$$
where $V_i$ is the variance function of $y_i$. The Fisher information matrix for $\beta$ is given by $K(\beta) = X^\top W X$. The maximum likelihood estimator $\hat{\beta} = (\hat{\beta}_1, \hat{\beta}_2, \ldots, \hat{\beta}_p)^\top$ of $\beta = (\beta_1, \beta_2, \ldots, \beta_p)^\top$ is obtained as the
solution of $U(\hat{\beta}) = 0_p$, where $0_p$ refers to a $p$-dimensional vector of zeros. Regrettably, the maximum likelihood estimator $\hat{\beta}$ lacks a closed-form solution, necessitating its numerical computation. For instance, the Newton-Raphson iterative method is one possible approach. Alternatively, the Fisher scoring method may be employed to estimate $\beta$ by iteratively solving
$$\beta^{(m+1)} = \left( X^\top W^{(m)} X \right)^{-1} X^\top W^{(m)} z^{(m)}, \quad (7)$$
where $m = 0, 1, \ldots$ is the iteration counter, $z = (z_1, z_2, \ldots, z_n)^\top = \eta + W^{-1/2} V^{-1/2} (y - \mu)$ acts as a modified response variable in Eq. (7), $W$ is a weight matrix, and $\eta = (\eta_1, \eta_2, \ldots, \eta_n)^\top$. The maximum likelihood estimate $\hat{\beta}_{MLE}$ can be obtained iteratively using Eq. (7) through any software with a weighted linear regression routine, such as R (Castellares et al., 2018). Thus, the MLE of $\beta$ in the Bell regression model obtained using the IRLS algorithm at the final step is given by
$$\hat{\beta}_{MLE} = \left( X^\top \hat{W} X \right)^{-1} X^\top \hat{W} \hat{z}, \quad (8)$$
where $\hat{W}$ and $\hat{z}$ are computed at the final iteration. The scalar mean squared error (MSE) of $\hat{\beta}_{MLE}$ can be given as (Majid et al., 2022)
$$MSE(\hat{\beta}_{MLE}) = E\left( (\hat{\beta}_{MLE} - \beta)^\top (\hat{\beta}_{MLE} - \beta) \right) = \mathrm{tr}\left( (X^\top \hat{W} X)^{-1} \right) = \sum_{j=1}^{p} \frac{1}{\lambda_j}. \quad (9)$$
Here, $\mathrm{tr}(\cdot)$ denotes the trace operator, and $\lambda_j$ represents the $j$th eigenvalue of the weighted cross-product matrix $X^\top \hat{W} X$. It is evident from Eq. (9) that the variance of the MLE may be adversely influenced by the ill-conditioning of the matrix $X^\top \hat{W} X$, a phenomenon commonly referred to as the multicollinearity problem. For an in-depth discussion of collinearity issues in generalized linear models, refer to Segerstedt (1992) and Mackinnon and Puterman (1989).
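The Bell model quantities above, from the pmf in Eq. (4) to the log-likelihood, require evaluating the Lambert $W_0$ function and the Bell numbers; both are straightforward to compute. A self-contained sketch (the Newton iteration for $W_0$ and the truncation of Dobinski's series are our own implementation choices, not prescribed by the paper):

```python
import math

def lambert_w0(x, tol=1e-14):
    """Principal branch W0: solves w * exp(w) = x for x >= 0 by Newton's method."""
    w = math.log1p(x)  # reasonable starting point on x >= 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def bell_number(n, terms=60):
    """Dobinski's formula B_n = (1/e) * sum_{k>=0} k^n / k!, truncated."""
    return sum(k ** n / math.factorial(k) for k in range(terms)) / math.e

def bell_pmf(y, mu):
    """Eq. (4): P(Y = y | mu) = W0(mu)^y * exp(1 - exp(W0(mu))) * B_y / y!."""
    g = lambert_w0(mu)
    return g ** y * math.exp(1.0 - math.exp(g)) * bell_number(y) / math.factorial(y)
```

Numerical checks confirm Eqs. (5) and (6): the probabilities sum to one, the mean equals $\mu$, and the variance exceeds the mean, the overdispersion property noted above.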
Let $Q^\top X^\top \widehat{W} X\, Q = \Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_p)$, where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p > 0$ are the eigenvalues of $X^\top \widehat{W} X$ arranged in descending order and $Q$ is the $p \times p$ matrix whose columns are the corresponding normalized eigenvectors. Consequently, writing $\alpha = Q^\top \beta$, the maximum likelihood estimator in canonical form is $\widehat{\alpha}_{\mathrm{MLE}} = Q^\top \widehat{\beta}_{\mathrm{MLE}}$.

2.1 The Bell Liu Estimator

The Liu estimator (LE) was proposed by Majid et al. (2022) for the Bell regression model as

$$\widehat{\beta}_{\mathrm{LE}} = (F + I)^{-1} F_d\, \widehat{\beta}_{\mathrm{MLE}} = E_d\, \widehat{\beta}_{\mathrm{MLE}}, \qquad (10)$$

where $0 < d < 1$, $F = X^\top \widehat{W} X$, $F_d = F + dI$, and $E_d = (F + I)^{-1} F_d$. The covariance matrix and bias vector of the LE are, respectively,

$$\mathrm{Cov}\big(\widehat{\beta}_{\mathrm{LE}}\big) = E_d F^{-1} E_d^\top, \qquad (11)$$

$$\mathrm{bias}\big(\widehat{\beta}_{\mathrm{LE}}\big) = (d - 1)(F + I)^{-1} \beta. \qquad (12)$$

Thus, the matrix mean squared error (MMSE) and the scalar MSE of the LE are

$$\mathrm{MMSE}\big(\widehat{\beta}_{\mathrm{LE}}\big) = E_d F^{-1} E_d^\top + (d - 1)^2 (F + I)^{-1} \beta \beta^\top (F + I)^{-1}, \qquad (13)$$

$$\mathrm{MSE}\big(\widehat{\beta}_{\mathrm{LE}}\big) = \sum_{j=1}^{p} \frac{(\lambda_j + d)^2}{\lambda_j (\lambda_j + 1)^2} + (d - 1)^2 \sum_{j=1}^{p} \frac{\alpha_j^2}{(\lambda_j + 1)^2}, \qquad (14)$$

where $\alpha_j$ is the $j$th component of $\alpha$.

2.2 The new almost unbiased Bell Liu estimator

In this subsection, we propose a new estimator, the almost unbiased Liu estimator (AULE), as an alternative to the LE and the MLE in the Bell regression model.

Definition 2.1 (Xu and Yang, 2011). Suppose $\hat{\beta}$ is a biased estimator of the parameter vector $\beta$ whose bias vector is $b(\hat{\beta}) = E(\hat{\beta}) - \beta = R\beta$, so that $E(\hat{\beta} - R\beta) = \beta$. Then the estimator $\tilde{\beta} = \hat{\beta} - R\hat{\beta} = (I - R)\hat{\beta}$ is called the almost unbiased estimator based on the biased estimator $\hat{\beta}$.
Based on Definition 2.1, the almost unbiased Bell Liu estimator (AULE) can be defined by

$$\begin{aligned}
\widehat{\beta}_{\mathrm{AULE}} &= \widehat{\beta}_{\mathrm{LE}} - \big( -(1 - d)(F + I)^{-1} \widehat{\beta}_{\mathrm{LE}} \big) \\
&= \widehat{\beta}_{\mathrm{LE}} + (1 - d)(F + I)^{-1} \widehat{\beta}_{\mathrm{LE}} \\
&= \big( I + (1 - d)(F + I)^{-1} \big) \widehat{\beta}_{\mathrm{LE}} \\
&= \big( I + (1 - d)(F + I)^{-1} \big) E_d\, \widehat{\beta}_{\mathrm{MLE}} \\
&= \big( I + (1 - d)(F + I)^{-1} \big) \big( I - (1 - d)(F + I)^{-1} \big) \widehat{\beta}_{\mathrm{MLE}} \\
&= \big( I - (1 - d)^2 (F + I)^{-2} \big) \widehat{\beta}_{\mathrm{MLE}},
\end{aligned} \qquad (15)$$

where $-\infty < d < \infty$ is a biasing parameter (Alheety and Kibria, 2009). To the best of our knowledge, the AULE has not previously been suggested or studied in the Bell regression model.

In the Bell regression model, the AULE is therefore $\widehat{\beta}_{\mathrm{AULE}} = \big( I - (1 - d)^2 (F + I)^{-2} \big) \widehat{\beta}_{\mathrm{MLE}}$. Its covariance matrix and bias vector are

$$\begin{aligned}
\mathrm{Cov}\big(\widehat{\beta}_{\mathrm{AULE}}\big) &= \big( I - (1 - d)^2 (F + I)^{-2} \big)\, \mathrm{Cov}\big(\widehat{\beta}_{\mathrm{MLE}}\big)\, \big( I - (1 - d)^2 (F + I)^{-2} \big)^\top \\
&= \big( I - (1 - d)^2 (F + I)^{-2} \big)\, F^{-1} \big( I - (1 - d)^2 (F + I)^{-2} \big)^\top, \qquad (16)
\end{aligned}$$

and

$$\mathrm{Bias}\big(\widehat{\beta}_{\mathrm{AULE}}\big) = E\big(\widehat{\beta}_{\mathrm{AULE}}\big) - \beta = -(1 - d)^2 (F + I)^{-2} \beta, \qquad (17)$$

respectively. Accordingly, the MMSE and MSE of the AULE are

$$\mathrm{MMSE}\big(\widehat{\beta}_{\mathrm{AULE}}\big) = \big( I - (1 - d)^2 (F + I)^{-2} \big) F^{-1} \big( I - (1 - d)^2 (F + I)^{-2} \big) + (1 - d)^4 (F + I)^{-4} \beta \beta^\top, \qquad (18)$$

and

$$\mathrm{MSE}\big(\widehat{\beta}_{\mathrm{AULE}}\big) = \sum_{j=1}^{p} \frac{(\lambda_j + d)^2 (\lambda_j + 2 - d)^2}{\lambda_j (\lambda_j + 1)^4} + (1 - d)^4 \sum_{j=1}^{p} \frac{\alpha_j^2}{(\lambda_j + 1)^4}. \qquad (19)$$

3 Theoretical Comparisons Between Estimators

In this section, we establish conditions under which the AULE is superior to the LE and the MLE by means of several theorems.
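Before the formal comparisons, the spectral MSE expressions (14) and (19) can be checked numerically against the matrix forms (11)–(13) and (16)–(18). A sketch with a hypothetical positive definite $F$ and coefficient vector $\beta$ (all values here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
A = rng.normal(size=(p, p))
F = A @ A.T + 0.1 * np.eye(p)     # stands in for X^T W-hat X (symmetric PD)
beta = rng.normal(size=p)
d = 0.5
I = np.eye(p)
Finv = np.linalg.inv(F)
lam, Q = np.linalg.eigh(F)
alpha = Q.T @ beta                 # canonical parameters

# LE: matrix form tr(E_d F^{-1} E_d^T) + bias^T bias, bias = (d-1)(F+I)^{-1} beta
Ed = np.linalg.solve(F + I, F + d * I)
bias_le = (d - 1) * np.linalg.solve(F + I, beta)
mse_le_matrix = np.trace(Ed @ Finv @ Ed.T) + bias_le @ bias_le
# Spectral form, Eq. (14)
mse_le_spec = np.sum((lam + d)**2 / (lam * (lam + 1)**2)) \
            + (d - 1)**2 * np.sum(alpha**2 / (lam + 1)**2)

# AULE: matrix form from Eqs. (16)-(17)
P = np.linalg.inv(F + I)
M = I - (1 - d)**2 * (P @ P)
bias_au = -(1 - d)**2 * (P @ P) @ beta
mse_au_matrix = np.trace(M @ Finv @ M.T) + bias_au @ bias_au
# Spectral form, Eq. (19)
mse_au_spec = np.sum((lam + d)**2 * (lam + 2 - d)**2 / (lam * (lam + 1)**4)) \
            + (1 - d)**4 * np.sum(alpha**2 / (lam + 1)**4)
```

The two forms agree because $E_d$ and $(F+I)^{-2}$ are polynomials in $F$ and therefore share its eigenvectors.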
The squared bias of an estimator $\hat{\beta}$ is defined as

$$\mathrm{SB}\big(\hat{\beta}\big) = \mathrm{Bias}\big(\hat{\beta}\big)^\top \mathrm{Bias}\big(\hat{\beta}\big) = \big\| \mathrm{Bias}\big(\hat{\beta}\big) \big\|_2^2.$$

We compare the squared biases of the LE and the AULE in the following theorem.

Theorem 1. The squared bias of the AULE is lower than that of the LE for $d \in (-\lambda_j, \lambda_j + 2)$; namely,

$$\big\| \mathrm{Bias}\big(\widehat{\beta}_{\mathrm{LE}}\big) \big\|_2^2 - \big\| \mathrm{Bias}\big(\widehat{\beta}_{\mathrm{AULE}}\big) \big\|_2^2 > 0.$$

Proof. The difference in squared bias is

$$\big\| \mathrm{Bias}\big(\widehat{\beta}_{\mathrm{LE}}\big) \big\|^2 - \big\| \mathrm{Bias}\big(\widehat{\beta}_{\mathrm{AULE}}\big) \big\|^2 = \sum_{j=1}^{p} \left( \frac{(d - 1)^2 \alpha_j^2}{(\lambda_j + 1)^2} - \frac{(d - 1)^4 \alpha_j^2}{(\lambda_j + 1)^4} \right) = \sum_{j=1}^{p} (d - 1)^2 \alpha_j^2\, \frac{(\lambda_j + 1)^2 - (d - 1)^2}{(\lambda_j + 1)^4}.$$

Since $(d - 1)^2 > 0$, $\alpha_j^2 > 0$, and $(\lambda_j + 1)^4 > 0$, it suffices for the difference to be positive that $(\lambda_j + 1)^2 - (d - 1)^2 > 0$.
Thus we investigate the positivity of the function

$$f_{\mathrm{bias}}(d) = (\lambda_j + 1)^2 - (d - 1)^2 = \big( (\lambda_j + 1) - (d - 1) \big) \big( (\lambda_j + 1) + (d - 1) \big) = (\lambda_j - d + 2)(\lambda_j + d).$$

The function $f_{\mathrm{bias}}(d)$ is positive on the interval $d \in (-\lambda_j, \lambda_j + 2)$, which completes the proof.

Next, we compare the MSE functions of the LE and the AULE.

Theorem 2. In the Bell regression model, the AULE has a lower MSE value than the LE if $d \in (1, \lambda_j + 2)$ for $j = 1, 2, \ldots, p$; namely,

$$\mathrm{MSE}\big(\widehat{\beta}_{\mathrm{LE}}\big) - \mathrm{MSE}\big(\widehat{\beta}_{\mathrm{AULE}}\big) > 0.$$

Proof. From Eqs. (14) and (19), the difference in scalar MSE is

$$\begin{aligned}
\mathrm{MSE}\big(\widehat{\beta}_{\mathrm{LE}}\big) - \mathrm{MSE}\big(\widehat{\beta}_{\mathrm{AULE}}\big)
&= \sum_{j=1}^{p} \frac{(\lambda_j + d)^2 + (d - 1)^2 \lambda_j \alpha_j^2}{\lambda_j (\lambda_j + 1)^2} - \sum_{j=1}^{p} \frac{(\lambda_j + d)^2 (\lambda_j + 2 - d)^2 + (1 - d)^4 \lambda_j \alpha_j^2}{\lambda_j (\lambda_j + 1)^4} \\
&= \sum_{j=1}^{p} \frac{(\lambda_j + d)^2 \big( (\lambda_j + 1)^2 - (\lambda_j + 2 - d)^2 \big)}{\lambda_j (\lambda_j + 1)^4} + \sum_{j=1}^{p} \frac{\alpha_j^2 (d - 1)^2 \big( (\lambda_j + 1)^2 - (1 - d)^2 \big)}{(\lambda_j + 1)^4}.
\end{aligned}$$

Since $(\lambda_j + 1)^4 > 0$ and $\alpha_j^2 > 0$, the difference is positive whenever $(\lambda_j + 1)^2 - (\lambda_j + 2 - d)^2 > 0$ and $(\lambda_j + 1)^2 - (1 - d)^2 > 0$ for $j = 1, 2, \ldots, p$. Consider

$$f_{\mathrm{MSE}_1}(d) = (\lambda_j + 1)^2 - (\lambda_j + 2 - d)^2 = (2\lambda_j + 3 - d)(d - 1),$$

which is positive for $d \in (1,\, 2\lambda_j + 3)$, and

$$f_{\mathrm{MSE}_2}(d) = (\lambda_j + 1)^2 - (1 - d)^2 = (\lambda_j + 2 - d)(\lambda_j + d),$$

which is positive for $d \in (-\lambda_j,\, \lambda_j + 2)$. Both functions are positive only on the intersection of these two intervals, $d \in (1,\, \lambda_j + 2)$. Hence $\mathrm{MSE}\big(\widehat{\beta}_{\mathrm{LE}}\big) - \mathrm{MSE}\big(\widehat{\beta}_{\mathrm{AULE}}\big) > 0$ for $d \in (1,\, \lambda_j + 2)$, which completes the proof.

Finally, we compare the variances of the MLE and the AULE.

Theorem 3. The AULE has a lower variance value than the MLE, i.e.,

$$\mathrm{Var}\big(\widehat{\beta}_{\mathrm{MLE}}\big) - \mathrm{Var}\big(\widehat{\beta}_{\mathrm{AULE}}\big) > 0,$$

when $d \in \big( 1 - \sqrt{2}\,(\lambda_j + 1),\; 1 + \sqrt{2}\,(\lambda_j + 1) \big)$ with $d \ne 1$.

Proof. The difference in variances is

$$\mathrm{Var}\big(\widehat{\beta}_{\mathrm{MLE}}\big) - \mathrm{Var}\big(\widehat{\beta}_{\mathrm{AULE}}\big) = \sum_{j=1}^{p} \frac{1}{\lambda_j} - \sum_{j=1}^{p} \frac{(\lambda_j + d)^2 (\lambda_j + 2 - d)^2}{\lambda_j (\lambda_j + 1)^4} = \sum_{j=1}^{p} \frac{(\lambda_j + 1)^4 - (\lambda_j + d)^2 (\lambda_j + 2 - d)^2}{\lambda_j (\lambda_j + 1)^4},$$

which is positive when $(\lambda_j + 1)^4 - (\lambda_j + d)^2 (\lambda_j + 2 - d)^2 > 0$. Consider

$$\begin{aligned}
f_{\mathrm{Var}}(d) &= (\lambda_j + 1)^4 - (\lambda_j + d)^2 (\lambda_j + 2 - d)^2 \\
&= \Big( (\lambda_j + 1)^2 - (\lambda_j + d)(\lambda_j + 2 - d) \Big) \Big( (\lambda_j + 1)^2 + (\lambda_j + d)(\lambda_j + 2 - d) \Big) \\
&= (1 - d)^2 \big( 2\lambda_j^2 + 4\lambda_j + 2d - d^2 + 1 \big) \\
&= -(1 - d)^2 \Big( d^2 - 2d - \big( 2\lambda_j^2 + 4\lambda_j + 1 \big) \Big).
\end{aligned}$$

The quadratic $d^2 - 2d - (2\lambda_j^2 + 4\lambda_j + 1)$ has roots $d = 1 \pm \sqrt{2(\lambda_j + 1)^2} = 1 \pm \sqrt{2}\,(\lambda_j + 1)$, so $f_{\mathrm{Var}}(d)$ is positive for

$$d \in \Big( 1 - \sqrt{2}\,(\lambda_j + 1),\; 1 + \sqrt{2}\,(\lambda_j + 1) \Big), \qquad d \ne 1.$$

This completes the proof.

3.1 Selection of the parameter d

We select the parameter $d$ by the following procedure. Differentiating Eq. (19) with respect to $d$ and equating the derivative to zero gives

$$\frac{\partial}{\partial d}\, \mathrm{MSE}\big(\widehat{\beta}_{\mathrm{AULE}}\big) = \sum_{j=1}^{p} \frac{4(1 - d)}{\lambda_j (\lambda_j + 1)^4} \Big\{ (\lambda_j + d)(\lambda_j + 2 - d) - (1 - d)^2 \lambda_j \alpha_j^2 \Big\} = 0.$$

Since $(\lambda_j + 1)^4$ is always positive, it is enough (apart from the trivial root $d = 1$) to find $d$ satisfying

$$(\lambda_j + d)(\lambda_j + 2 - d) - (1 - d)^2 \lambda_j \alpha_j^2 = 0.$$
Then, solving this equation for each $j$, we derive the optimum biasing parameter

$$d_j = 1 - (\lambda_j + 1) \sqrt{\frac{1}{1 + \lambda_j \alpha_j^2}}, \qquad j = 1, 2, \ldots, p.$$

Based on these $d_j$, we suggest the following estimators of $d$:

$$d_1 = \operatorname{harmmean}(d_j), \qquad d_2 = \operatorname{median}(d_j), \qquad d_3 = \max_j (d_j),$$

yielding the estimators AULE($d_1$), AULE($d_2$), and AULE($d_3$).

4 Monte Carlo Simulation

In this section, we conduct a comprehensive Monte Carlo simulation study to evaluate and compare the mean squared error (MSE) performance of the estimators. Since one of our primary objectives is to examine the behavior of the estimators in the presence of multicollinearity, we generate the design matrix $X$ following the methodology outlined by Amin et al. (2023):

$$x_{ij} = (1 - \rho^2)^{1/2} w_{ij} + \rho\, w_{i(p+1)}, \qquad i = 1, 2, \ldots, n, \quad j = 1, 2, \ldots, p,$$

where the $w_{ij}$ are independent standard normal pseudo-random numbers and $\rho$ controls the degree of multicollinearity: the correlation between any two explanatory variables is $\rho^2$.
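The per-component rule for $d_j$ and the collinear design scheme above can be sketched together. This is a rough illustration only: it uses unit IRLS weights (so $F = X^\top X$) and a hypothetical $\beta$, neither of which comes from the paper's full pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, rho = 200, 4, 0.9

# Collinear design: x_ij = sqrt(1 - rho^2) w_ij + rho w_{i,p+1}
W = rng.standard_normal((n, p + 1))
X = np.sqrt(1 - rho**2) * W[:, :p] + rho * W[:, [p]]

# Stand-in for the weighted cross-product X^T W-hat X (unit weights here)
F = X.T @ X
lam, Q = np.linalg.eigh(F)

beta = np.array([0.5, -0.3, 0.2, 0.1])   # hypothetical coefficients
alpha = Q.T @ beta                        # canonical parameters

# Optimum per-component biasing parameter d_j (always < 1 by construction)
d_j = 1 - (lam + 1) * np.sqrt(1.0 / (1.0 + lam * alpha**2))

d1 = len(d_j) / np.sum(1.0 / d_j)   # harmonic mean -> AULE(d1)
d2 = np.median(d_j)                 # AULE(d2)
d3 = np.max(d_j)                    # AULE(d3)
```

Note that individual $d_j$ can be negative when $\lambda_j \alpha_j^2$ is small, which is consistent with the AULE allowing any real biasing parameter.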
Table 1: Simulated squared bias values when p = 4

  n    rho    LE        AULE(d1)  AULE(d2)  AULE(d3)
 100   0.80   4.6585    4.6086    4.4742    4.6769
 200   0.80   4.4266    4.3918    4.2099    4.4361
 400   0.80   3.7487    3.7224    3.5122    3.7525
 100   0.90   5.2551    5.2160    5.1094    5.2752
 200   0.90   4.7315    4.7049    4.5809    4.7414
 400   0.90   3.0218    3.0003    2.9350    3.0245
 100   0.95   5.5127    5.4989    5.4760    5.5330
 200   0.95   4.8069    4.7955    4.7699    4.8169
 400   0.95   2.4927    2.4862    2.4831    2.4948

Table 2: Simulated squared bias values when p = 8

  n    rho    LE        AULE(d1)  AULE(d2)  AULE(d3)
 100   0.80   5.7714    5.6681    5.4469    5.7929
 200   0.80   4.4813    4.3938    4.0834    4.4905
 400   0.80   2.8507    2.7833    2.5532    2.8530
 100   0.90   6.6413    6.5653    6.4233    6.6643
 200   0.90   3.7775    3.7266    3.6682    3.7849
 400   0.90   2.2306    2.1239    2.0961    2.2189
 100   0.95   7.1071    7.0824    7.0585    7.1304
 200   0.95   3.2036    3.2065    3.2063    3.2123
 400   0.95   1.7250    1.6709    1.6545    1.7213

Table 3: Simulated squared bias values when p = 12

  n    rho    LE        AULE(d1)  AULE(d2)  AULE(d3)
 100   0.80  16.6143   16.5651   16.3483   16.6376
 200   0.80   9.3368    9.2602    8.9179    9.3497
 400   0.80   1.2888    1.2171    1.2033    1.2719
 100   0.90  18.4322   18.3763   18.1337   18.4540
 200   0.90   9.0344    8.9579    8.6998    9.0471
 400   0.90   5.8266    5.7646    5.5946    5.8322
 100   0.95  19.2328   19.1946   19.0770   19.2529
 200   0.95  13.6958   13.6524   13.5391   13.7079
 400   0.95   8.8918    8.8643    8.8046    8.8982

Table 4: Simulated MSE values when p = 4

  n    rho    MLE       LE        AULE(d1)  AULE(d2)  AULE(d3)
 100   0.80  15.3684    4.6838    4.6333    4.4986    4.7023
 200   0.80  14.8173    4.4380    4.4030    4.2211    4.4475
 400   0.80  13.3872    3.7565    3.7299    3.5184    3.7603
 100   0.90  16.5517    5.2945    5.2482    5.1354    5.3143
 200   0.90  15.4715    4.7503    4.7207    4.5908    4.7601
 400   0.90  11.7297    3.0397    3.0136    2.9411    3.0421
 100   0.95  17.0654    5.5808    5.5327    5.5002    5.5956
 200   0.95  15.6238    4.8431    4.8150    4.7800    4.8511
 400   0.95  10.3919    2.5382    2.5024    2.4961    2.5321

Table 5: Simulated MSE values when p = 8

  n    rho    MLE       LE        AULE(d1)  AULE(d2)  AULE(d3)
 100   0.80  25.0197    5.8124    5.7071    5.4855    5.8341
 200   0.80  22.0185    4.5115    4.4218    4.1066    4.5207
 400   0.80  17.5404    2.8820    2.8101    2.5686    2.8843
 100   0.90  27.0835    6.7053    6.6125    6.4533    6.7282
 200   0.90  20.2087    3.8454    3.7655    3.6878    3.8513
 400   0.90  15.6651    2.6264    2.2375    2.1897    2.5359
 100   0.95  28.1156    7.2217    7.1323    7.0908    7.2397
 200   0.95  18.6185    3.3703    3.2506    3.2402    3.3452
 400   0.95  13.6104    2.3251    1.8060    1.7470    2.2019

Table 6: Simulated MSE values when p = 12

  n    rho    MLE       LE        AULE(d1)  AULE(d2)  AULE(d3)
 100   0.80  54.8945   16.6320   16.5829   16.3667   16.6553
 200   0.80  40.3239    9.3505    9.2739    8.9333    9.3634
 400   0.80   8.5767    1.5107    1.2721    1.2473    1.4435
 100   0.90  58.3163   18.4548   18.3980   18.1533   18.4767
 200   0.90  39.6896    9.0631    8.9823    8.7148    9.0759
 400   0.90  32.0425    5.8544    5.7866    5.6060    5.8599
 100   0.95  59.8244   19.2839   19.2314   19.0972   19.3042
 200   0.95  49.3954   13.7293   13.6769   13.5513   13.7413
 400   0.95  39.3569    8.9406    8.8976    8.8187    8.9468

The sample size $n$ increases over 100, 200, and 400, and the number of predictor variables $p$ is taken as 4, 8, and 12. In this setting, $\rho$ controls the degree of correlation between the predictors and is taken as 0.8, 0.9, and 0.95. The $n$ observations of the response variable are generated from the Bell distribution, $y_i \sim \mathrm{Bell}\big(W_0(\mu_i)\big)$ with $\mu_i = \exp(x_i^\top \beta)$, $i = 1, 2, \ldots, n$. The number of repetitions in the simulation is taken as 1000.
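In a study like this, each estimator is scored by its empirical MSE, the average squared distance between the replicated estimates and the true $\beta$. A minimal helper (the function name is ours, not the paper's):

```python
import numpy as np

def simulated_mse(estimates, beta_true):
    """Empirical MSE over R replications:
    (1/R) * sum_r (beta_hat_r - beta)^T (beta_hat_r - beta)."""
    diff = np.asarray(estimates) - np.asarray(beta_true)   # shape (R, p)
    return float(np.mean(np.sum(diff**2, axis=1)))

# Sanity checks on synthetic replications
beta = np.array([1.0, -0.5, 0.2])
```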
The simulated MSE of an estimator $\hat{\beta}^{*}$ is computed as

$$\mathrm{MSE}\big(\hat{\beta}^{*}\big) = \frac{1}{1000} \sum_{r=1}^{1000} \big(\hat{\beta}^{*}_{r} - \beta\big)^\top \big(\hat{\beta}^{*}_{r} - \beta\big).$$

In the simulation study, the Bell regression model is fitted without any standardization and without an intercept. The results of the Monte Carlo simulation study are presented in Tables 1–6. From the simulation results, we observe the following:

β’ As the sample size increases, all MSE and squared bias values decrease, as expected.
β’ The AULE is generally superior to its competitors, the LE and the MLE, in terms of MSE.
β’ The squared bias of the AULE is smaller than that of the LE for $d_1$ and $d_2$.
β’ In all settings, the MSE of the AULE is smaller than that of the LE for $d_1$ and $d_2$.
β’ In all selected cases, the MSE of AULE($d_2$) is smaller than that of its competitors.

Overall, we conclude that the AULE is a good alternative to the LE and the MLE in the Bell regression model.

5 Real Data Application

In this section, we present a real data example to illustrate the superiority of the AULE over its competitors, the MLE and the LE, in the Bell regression model. To this end, we analyse the plastic plywood data set given by Filho and Sant'Anna (2016), which concerns the quality of plastic plywood. Plywood is a composite material created by layering thin veneers of wood, resulting in a structure that is both strong and moderately flexible. The variables in the plastic plywood data set are described in Table 7.

Table 7: The description of the plastic plywood data

  y (response variable)   the number of defects per laminated plastic plywood area
  x1                      volumetric shrinkage
  x2                      assembly time
  x3                      wood density
  x4                      drying temperature

The design matrix is centered and standardized so that $X^\top X$ is in correlation form before the estimators are obtained. A Bell regression model without intercept is fitted. The MLE, the LE, and the AULE are computed, and their coefficients and MSE values are given in Table 8. The condition number, the square root of the ratio of the largest to the smallest eigenvalue of $X^\top \widehat{W} X$, is 74.5281, which indicates a severe collinearity problem in these data. The eigenvalues of $X^\top \widehat{W} X$ are 27.8119, 2.6898, 0.1976, and 0.0050. According to Table 8, the MSE of AULE($d_2$) is lower than the MSEs of AULE($d_1$), AULE($d_3$), the LE, and the MLE. Moreover, AULE($d_1$) and AULE($d_2$) are both superior to the LE and the MLE in terms of MSE. We conclude that the AULE($d_2$) estimator performs better than AULE($d_1$) and AULE($d_3$) in terms of MSE in the real data analysis.
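The reported condition number can be reproduced directly from the reported eigenvalues; the slight deviation below comes from the eigenvalues being printed to four decimal places:

```python
import numpy as np

# Reported (rounded) eigenvalues of X^T W-hat X for the plywood data
lam = np.array([27.8119, 2.6898, 0.1976, 0.0050])

# Condition number = sqrt(lambda_max / lambda_min); values well above ~30
# are commonly read as indicating severe collinearity
kappa = np.sqrt(lam.max() / lam.min())
```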
Table 8: Coefficients and MSE values of the estimators

                 MLE        LE         AULE(d1)   AULE(d2)   AULE(d3)
 $\hat\beta_1$   13.2792    18.1904     9.8211     8.8019    12.4839
 $\hat\beta_2$    1.2203    -3.4026     4.6221     5.6241     2.0039
 $\hat\beta_3$    8.8130    10.5962     7.6453     7.3012     8.5444
 $\hat\beta_4$    5.9243     4.6560     7.1704     7.5377     6.2108
 MSE            205.1824   728.8193    60.2740    55.5340   154.2389
 SB               0.0000    50.2975    26.4350    44.3130     1.3980

Figure 1: MSE values of the estimators for p = 4. [Nine panels (a)–(i), one per combination of n in {100, 200, 400} and rho in {0.8, 0.9, 0.95}, comparing AULE(d1), AULE(d2), AULE(d3), and LE.]

Figure 2: MSE values of the estimators for p = 8. [Same nine-panel layout as Figure 1.]
Figure 3: MSE values of the estimators for p = 12. [Nine panels (a)–(i), same layout as Figures 1 and 2.]

Figure 4: MSEs and SBs of the estimators for -1 < d < 1. [Left panel: MSE of AULE, LE, and MLE versus d; right panel: SB of AULE and LE versus d.]

Figure 5: MSEs and SBs of the estimators for 0 < d < 1. [Same layout as Figure 4, restricted to 0 < d < 1.]

6 Conclusion

In this paper, we introduced a new biased estimator, the AULE, as an alternative to the LE and the MLE in the Bell regression model. We proved three theorems giving the conditions under which the AULE is superior to the LE and the MLE. The AULE is numerically superior to the LE and the MLE with respect to the scalar MSE and the squared bias. We also carried out a comprehensive Monte Carlo simulation study showing that the theoretically established conditions hold in practice. According to the findings of the simulation study, the AULE has smaller squared bias and MSE values than the LE and the MLE, and the results of the real-world data example support the simulation results. In conclusion, we recommend the AULE as an effective competitor to the LE and the MLE in the Bell regression model. In future work, other estimators may be considered as alternatives to the AULE in the Bell regression model.

Acknowledgements: This study was supported by TUBITAK 2218-National Postdoctoral Research Fellowship Programme with project number 122C104.

Author Contributions: Caner TanΔ±ΕŸ: Introduction, Methodology, Simulation, Real data application, Writing-original draft. Yasin Asar: Methodology, Simulation, Real data application, Writing-reviewing & editing.

Funding: The authors declare that they have no financial interests.

Data Availability: The dataset supporting the findings of this study is openly available; see the reference list.

Declarations: Conflict of interest: All authors declare that they have no conflict of interest. Ethics statements: The paper is not under consideration for publication in any other venue or language at this time.

Bibliography

Akram, M. N., Amin, M., Sami, F., Mastor, A. B., Egeh, O. M., Muse, A. H. (2022).
A new Conway Maxwell–Poisson Liu regression estimator: method and application. Journal of Mathematics, Article ID 3323955, https://doi.org/10.1155/2022/3323955.

Algamal, Z. Y., Asar, Y. (2020). Liu-type estimator for the gamma regression model. Communications in Statistics–Simulation and Computation, 49(8), 2035–2048.

Algamal, Z. Y., Lukman, A. F., Abonazel, M. R., Awwad, F. A. (2022). Performance of the Ridge and Liu Estimators in the zero-inflated Bell Regression Model. Journal of Mathematics, Volume 2022, Article ID 9503460.

Algamal, Z. Y., Abonazel, M. R. (2022). Developing a Liu-type estimator in beta regression model. Concurrency and Computation: Practice and Experience, 34(5), e6685.

Algamal, Z., Lukman, A., Golam, B. K., Taofik, A. (2023). Modified Jackknifed Ridge Estimator in Bell Regression Model: Theory, Simulation and Applications. Iraqi Journal For Computer Science and Mathematics, 4(1), 146–154.

Alheety, M. I., Kibria, B. G. (2009). On the Liu and almost unbiased Liu estimators in the presence of multicollinearity with heteroscedastic or correlated errors. Surveys in Mathematics and its Applications, 4, 155–167.

Al-Taweel, Y., Algamal, Z. (2020). Some almost unbiased ridge regression estimators for the zero-inflated negative binomial regression model. Periodicals of Engineering and Natural Sciences, 8(1), 248–255.
Amin, M., Qasim, M., Afzal, S., Naveed, K. (2022). New ridge estimators in the inverse Gaussian regression: Monte Carlo simulation and application to chemical data. Communications in Statistics–Simulation and Computation, 51(10), 6170–6187.

Amin, M., Akram, M. N., Majid, A. (2023). On the estimation of Bell regression model using ridge estimator. Communications in Statistics–Simulation and Computation, 52(3), 854–867.

Asar, Y., Algamal, Z. (2022). A new two-parameter estimator for the gamma regression model. Statistics, Optimization & Information Computing, 10(3), 750–761.

Asar, Y., Korkmaz, M. (2022). Almost unbiased Liu-type estimators in gamma regression model. Journal of Computational and Applied Mathematics, 403, 113819.

Bell, E. T. (1934a). Exponential polynomials. Annals of Mathematics, 258–277.

Bell, E. T. (1934b). Exponential numbers. The American Mathematical Monthly, 41(7), 411–419.

Castellares, F., Ferrari, S. L., Lemonte, A. J. (2018). On the Bell distribution and its associated regression model for count data. Applied Mathematical Modelling, 56, 172–185.

Erdugan, F. (2022). An almost unbiased Liu-type estimator in the linear regression model. Communications in Statistics–Simulation and Computation, 1–13.

Ertan, E., Algamal, Z. Y., ErkoΓ§, A., Akay, K. U. (2023). A new improvement Liu-type estimator for the Bell regression model. Communications in Statistics–Simulation and Computation, 1–12.

Marcondes Filho, D., Sant'Anna, A. M. O. (2016). Principal component regression-based control charts for monitoring count data. The International Journal of Advanced Manufacturing Technology, 85, 1565–1574.

Kadiyala, K. (1984). A class of almost unbiased and efficient estimators of regression coefficients. Economics Letters, 16(3–4), 293–296.

Karlsson, P., MΓ₯nsson, K., Kibria, B. M. G. (2020). A Liu estimator for the beta regression model and its application to chemical data. Journal of Chemometrics, 34(10), e3300.

Liu, K. (1993). A new class of biased estimate in linear regression. Communications in Statistics–Theory and Methods, 22(2), 393–402.

Mackinnon, M. J., Puterman, M. L. (1989). Collinearity in generalized linear models. Communications in Statistics–Theory and Methods, 18(9), 3463–3472.

MΓ₯nsson, K., Shukur, G. (2011). A Poisson ridge regression estimator. Economic Modelling, 28, 1475–1481.

MΓ₯nsson, K., Kibria, B. G., SjΓΆlander, P., Shukur, G., Sweden, V. (2011). New Liu Estimators for the Poisson regression model: Method and application, 51. HUI Research.

MΓ₯nsson, K., Kibria, B. G., Sjolander, P., Shukur, G. (2012). Improved Liu estimators for the Poisson regression model. International Journal of Statistics and Probability, 1(1), 1–5.

MΓ₯nsson, K. (2013). Developing a Liu estimator for the negative binomial regression model: method and application. Journal of Statistical Computation and Simulation, 83, 1773–1780.

Majid, A., Amin, M., Akram, M. N. (2022). On the Liu estimation of Bell regression model in the presence of multicollinearity. Journal of Statistical Computation and Simulation, 92(2), 262–282.

McCullagh, P., Nelder, J. (1989). Generalized Linear Models, second ed., Chapman & Hall, London.

Omara, T. M. (2023). Almost unbiased Liu-type estimator for Tobit regression and its application. Communications in Statistics–Simulation and Computation, 1–16.

Segerstedt, B. (1992). On ordinary ridge regression in generalized linear models. Communications in Statistics–Theory and Methods, 21(8), 2227–2246.

Qasim, M., Amin, M., Amanullah, M. (2018). On the performance of some new Liu parameters for the gamma