$\prod_{(i,j)\in E(H)}Q_{x_i,x_j}\Bigr)$. Now, we make the following observation. Suppose that $H$ is not a tree. Then one can remove some edge $(i,j)\in E(H)$ to obtain a graph $H'$ that is still connected. Clearly,
\[
\Phi_{\mathsf{SBM}(p,Q)}(H') = \sum_{x_1,\dots,x_h\in[k]}\Bigl(\prod_{i=1}^h p_{x_i}\times\prod_{(i,j)\in E(H')}Q_{x_i,x_j}\Bigr) \ge \sum_{x_1,\dots,x_h\in[k]}\Bigl(\prod_{i=1}^h p_{x_i}\times\prod_{(i,j)\in E(H)}Q_{x_i,x_j}\Bigr) = \Phi_{\mathsf{SBM}(p,Q)}(H), \tag{26}
\]
where we used the simple fact that $Q\in[0,1]^{k\times k}$ and $E(H')\subseteq E(H)$.

Repeating the same edge-removal procedure, we are left with some spanning tree $T$ of $H$ such that $\Phi_{\mathsf{SBM}(p,Q)}(T)\ge\Phi_{\mathsf{SBM}(p,Q)}(H)$. As $T$ is spanning, $|V(T)|=|V(H)|$, so
\[
|\Phi_{\mathsf{SBM}(p,Q)}(T)|^{\frac{1}{|V(T)|}} \ge |\Phi_{\mathsf{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}}.
\]
Now, we further remove edges from the tree. Suppose that $T$ is not a star. Then it has a subgraph which is a path of length 3. Removing the middle edge partitions $T$ into two trees $T_1,T_2$ (both with at least one edge) such that $V(T_1)\cup V(T_2)=V(T)$ and $V(T_1)\cap V(T_2)=\emptyset$. Using the same argument as in (26), we conclude that $\Phi_{\mathsf{SBM}(p,Q)}(T_1\sqcup T_2)\ge\Phi_{\mathsf{SBM}(p,Q)}(T)$. Using Corollary 2.1, we conclude that
\[
|\Phi_{\mathsf{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}} \le |\Phi_{\mathsf{SBM}(p,Q)}(T)|^{\frac{1}{|V(T)|}} \le \max\Bigl(|\Phi_{\mathsf{SBM}(p,Q)}(T_1)|^{\frac{1}{|V(T_1)|}},\,|\Phi_{\mathsf{SBM}(p,Q)}(T_2)|^{\frac{1}{|V(T_2)|}}\Bigr).
\]
Repeating this operation until no paths of length at least 3 remain, we conclude that
\[
|\Phi_{\mathsf{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}} \le |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_t)|^{\frac{1}{|V(\mathrm{Star}_t)|}}
\]
for some star graph $\mathrm{Star}_t$ on $t\le D$ edges.

3.3 SBMs with Non-Vanishing Community Probabilities

Theorem 3.4 (Maximizing Partition Functions with Non-Vanishing Community Probabilities). Suppose that $\mathsf{SBM}(p,Q)$ is a stochastic block model on $k$ communities such that $p_i\ge c$ for all $i\in[k]$, for some constant $c>0$. Then, for any connected graph $H$ on at most $D$ edges,
\[
\Psi_{\mathsf{SBM}(p,Q)}(H) \lesssim_{c,D} \max\bigl(\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4),\,\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_1),\,\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr).
\]

Proof. Denote $h=|V(H)|$. We split the proof into two parts depending on whether $H$ is a tree.

1. $H$ is not a tree.
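Before continuing with the case analysis, note that the edge-removal inequality (26) is easy to check numerically. The sketch below is ours, not the paper's: it implements the labeling sum of Proposition 1.3 by brute force (the helper `phi` and the specific model values are illustrative assumptions) and confirms that deleting an edge of a connected graph can only increase the partition function when $Q$ has entries in $[0,1]$.

```python
import itertools

def phi(p, Q, edges, h):
    """Brute-force partition function in the sense of Proposition 1.3:
    sum over all labelings x in [k]^h of prod_i p[x_i] * prod_{(i,j)} Q[x_i][x_j]."""
    k = len(p)
    total = 0.0
    for x in itertools.product(range(k), repeat=h):
        w = 1.0
        for i in range(h):
            w *= p[x[i]]
        for (i, j) in edges:
            w *= Q[x[i]][x[j]]
        total += w
    return total

# Illustrative 2-community model with Q entries in [0, 1] (our choice).
p = [0.3, 0.7]
Q = [[0.8, 0.2], [0.2, 0.6]]

# H = a 4-cycle with a chord (not a tree); H' = H minus the chord (still connected).
H  = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
Hp = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Removing an edge can only increase Phi when Q is in [0, 1], matching (26).
assert phi(p, Q, Hp, 4) >= phi(p, Q, H, 4)
```

Each term of the sum loses a factor in $[0,1]$ when an edge is deleted, which is exactly why (26) holds term by term.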
In that case, we can show that 4-cycles "dominate" $H$. We need the following statement on signed 4-cycle counts.

Lemma 3.2. Suppose that $\mathsf{SBM}(p,Q)$ is a stochastic block model on $k$ communities such that $p_i\ge c$ for all $i\in[k]$, for some universal constant $c>0$. Then
\[
\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4) \gtrsim_c \max_{i,j\in[k]} Q_{i,j}^4 \quad\text{and, equivalently,}\quad \Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4) \gtrsim_c \max_{i,j\in[k]}|Q_{i,j}|.
\]

Before we present the proof of the lemma, we show that it immediately implies the following statement.

Claim 3.3. Under the assumptions on $\mathsf{SBM}(p,Q)$ in Theorem 3.4, for any connected graph $H$ which is not a tree, $\Psi_{\mathsf{SBM}(p,Q)}(H)\lesssim_{c,D}\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)$.

Proof. Since $H$ is not a tree, $|V(H)|\le|E(H)|$. Recalling Proposition 1.3,
\[
\bigl|\Phi_{\mathsf{SBM}(p,Q)}(H)\bigr| = \Bigl|\sum_{x_1,\dots,x_h\in[k]}\Bigl(\prod_{i=1}^h p_{x_i}\times\prod_{(i,j)\in E(H)}Q_{x_i,x_j}\Bigr)\Bigr|
\le \sum_{x_1,\dots,x_h\in[k]}\prod_{i=1}^h p_{x_i}\times\prod_{(i,j)\in E(H)}\bigl(\max_{u,v}|Q_{u,v}|\bigr)
= \sum_{x_1,\dots,x_h\in[k]}\prod_{i=1}^h p_{x_i}\,\bigl(\max_{u,v}|Q_{u,v}|\bigr)^{|E(H)|}
\le \sum_{x_1,\dots,x_h\in[k]}\prod_{i=1}^h p_{x_i}\,\bigl(\max_{u,v}|Q_{u,v}|\bigr)^{|V(H)|}
= \bigl(\max_{u,v}|Q_{u,v}|\bigr)^{|V(H)|}.
\]
The claim follows immediately from Lemma 3.2.

Proof of Lemma 3.2. We start by analyzing Proposition 1.3:
\[
\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4) = \sum_{x_1,x_2,x_3,x_4} p_{x_1}p_{x_2}p_{x_3}p_{x_4}\,Q_{x_1,x_2}Q_{x_2,x_3}Q_{x_3,x_4}Q_{x_4,x_1}
= \sum_{x_1,x_3} p_{x_1}p_{x_3}\Bigl(\sum_x p_x Q_{x_1,x}Q_{x,x_3}\Bigr)^2
\ge \sum_y p_y^2\Bigl(\sum_x p_x Q_{y,x}^2\Bigr)^2
\ge \sum_y p_y^2\sum_x p_x^2 Q_{x,y}^4 = \sum_{x,y}p_x^2p_y^2Q_{x,y}^4 \ge \max_{x,y} c^4 Q_{x,y}^4 \gtrsim_c \max_{x,y}Q_{x,y}^4,
\]
as desired.

2. $H$ is a tree. Suppose that $H$ is a tree.
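Before turning to the tree case in detail, note that the chain of inequalities in the proof of Lemma 3.2 gives the explicit bound $\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)\ge c^4\max_{i,j}Q_{i,j}^4$ with $c=\min_i p_i$. The brute-force sanity check below is our own sketch; the signed model values are an illustrative assumption, not from the paper.

```python
import itertools

def phi_cyc4(p, Q):
    """Signed 4-cycle partition function: sum over labelings of the
    cycle 1-2-3-4-1, as in Proposition 1.3 specialized to Cyc4."""
    k = len(p)
    total = 0.0
    for x1, x2, x3, x4 in itertools.product(range(k), repeat=4):
        total += (p[x1] * p[x2] * p[x3] * p[x4]
                  * Q[x1][x2] * Q[x2][x3] * Q[x3][x4] * Q[x4][x1])
    return total

# Illustrative signed model: Q may have negative entries, every p_i >= c.
p = [0.4, 0.6]
Q = [[0.5, -0.7], [-0.7, 0.3]]
c = min(p)
max_q4 = max(abs(Q[i][j]) for i in range(2) for j in range(2)) ** 4

# Lemma 3.2's explicit form: Phi(Cyc4) >= c^4 * max_{i,j} Q_{i,j}^4.
assert phi_cyc4(p, Q) >= c ** 4 * max_q4
```

The key point the check illustrates is that the signed 4-cycle count is always non-negative (it is a sum of squares after summing over the two "diagonal" labels), so it never suffers the cancellations that stars can.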
We can assume that $H$ has at least three edges, as otherwise $H$ is a star and there is nothing to prove. In particular, this means that $H$ has at least two leaves; let these be $h$ and $h-1$, and let their parents be $\mathrm{par}(h-1)$ and $\mathrm{par}(h)$. We rewrite Proposition 1.3 in a way that allows us to compare with signed 2-stars and 4-cycles. Applying the leaf-isolation technique in Eq. (14),
\[
\bigl|\Phi_{\mathsf{SBM}(p,Q)}(H)\bigr| \le \sum_{x_1,\dots,x_{h-2}}\Biggl(p_{x_1}p_{x_2}\cdots p_{x_{h-2}}\prod_{(i,j)\in E(H)\setminus\{(\mathrm{par}(h-1),h-1),(\mathrm{par}(h),h)\}}|Q_{x_i,x_j}|\times\Bigl(\bigl(\sum_{x_{h-1}}p_{x_{h-1}}Q_{x_{\mathrm{par}(h-1)},x_{h-1}}\bigr)^2+\bigl(\sum_{x_h}p_{x_h}Q_{x_{\mathrm{par}(h)},x_h}\bigr)^2\Bigr)\Biggr).
\]
We used the simple inequality $|a|\times|b|\le a^2+b^2$. We now interpret the terms above.

Square Terms and 2-Stars. From Corollary 2.2 and the fact that $p_i\ge c>0$ for any $i$,
\[
\bigl(\sum_{x_{h-1}}p_{x_{h-1}}Q_{x_{\mathrm{par}(h-1)},x_{h-1}}\bigr)^2+\bigl(\sum_{x_h}p_{x_h}Q_{x_{\mathrm{par}(h)},x_h}\bigr)^2
\lesssim_c p_{x_{\mathrm{par}(h-1)}}\bigl(\sum_{x_{h-1}}p_{x_{h-1}}Q_{x_{\mathrm{par}(h-1)},x_{h-1}}\bigr)^2+p_{x_{\mathrm{par}(h)}}\bigl(\sum_{x_h}p_{x_h}Q_{x_{\mathrm{par}(h)},x_h}\bigr)^2
\lesssim_c \Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2).
\]

4-Cycles. Now,
https://arxiv.org/abs/2504.17202v1
by Lemma 3.2,
\[
\sum_{x_1,\dots,x_{h-2}}p_{x_1}p_{x_2}\cdots p_{x_{h-2}}\prod_{(i,j)\in E(H)\setminus\{(\mathrm{par}(h-1),h-1),(\mathrm{par}(h),h)\}}|Q_{x_i,x_j}|
\le \sum_{x_1,\dots,x_{h-2}}p_{x_1}p_{x_2}\cdots p_{x_{h-2}}\bigl(\max_{u,v}|Q_{u,v}|\bigr)^{|E(H)|-2}
= \bigl(\max_{u,v}|Q_{u,v}|\bigr)^{|V(H)|-3}
\lesssim_{c,D} |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|-3}{4}}.
\]
Altogether, we obtain that
\[
\bigl|\Phi_{\mathsf{SBM}(p,Q)}(H)\bigr| \lesssim_{c,D} \bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr|\times|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|-3}{4}}.
\]
We now proceed similarly to Corollary 2.1. Namely, we rewrite
\[
\bigl|\Phi_{\mathsf{SBM}(p,Q)}(H)\bigr|^{\frac{1}{|V(H)|}} \lesssim_{c,D} \bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr|^{\frac{1}{|V(H)|}}\times|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|-3}{4}\cdot\frac{1}{|V(H)|}}
\iff
\bigl|\Phi_{\mathsf{SBM}(p,Q)}(H)\bigr|^{\frac{1}{|V(H)|}} \lesssim_{c,D} \Bigl(\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr|^{\frac13}\Bigr)^{\frac{3}{|V(H)|}}\times\Bigl(\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)\bigr|^{\frac14}\Bigr)^{\frac{|V(H)|-3}{|V(H)|}}.
\]
As $\frac{|V(H)|-3}{|V(H)|}+\frac{3}{|V(H)|}=1$, the conclusion follows.

Again, Theorem 2.4 applied for any $D\ge8$ gives the corresponding testing result.

Theorem 3.5 (Testing with Non-Vanishing Community Probabilities). Suppose that $c>0$ is an absolute constant and $\mathsf{SBM}(p,Q)$ is an SBM model on $k$ communities such that $p_i>c$ for all $i\in[k]$. If there exists a constant-degree test distinguishing $G(n,1/2)$ and $\mathsf{SBM}(n;p,Q)$ with high probability, one can also distinguish the two distributions with high probability using the signed count of one of the $\mathrm{Star}_1$ (edge), $\mathrm{Star}_2$ (wedge), or $\mathrm{Cyc}_4$ (4-cycle) graphs.

3.4 General 2-SBMs

Theorem 3.6 (Maximizing Partition Functions in 2-SBMs). Let $D\in\mathbb N$ be some fixed even natural number. Suppose that $\mathsf{SBM}(p,Q)$ is an arbitrary SBM on $k=2$ communities. Then, for any connected graph $H$ on at most $D$ edges,
\[
\Psi_{\mathsf{SBM}(p,Q)}(H) \lesssim_D \max\Bigl(\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4),\,\max_{1\le t\le D}\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_t)\Bigr).
\]
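Before the proof, it is instructive to see concretely why stars alone cannot dominate in the signed setting. The sketch below is our own: the specific $p$ and $Q$ are an illustrative choice in the spirit of Theorem A.2, not an example taken from the paper. It evaluates star counts both by brute force and via the closed form of Corollary 2.2, and exhibits a model whose signed star counts all vanish while those of $\mathsf{SBM}(p,|Q|)$ do not.

```python
import itertools

def phi_star_bruteforce(p, Q, T):
    """Partition function of a star with center vertex 0 and T leaves,
    summed over all labelings (Proposition 1.3)."""
    k = len(p)
    total = 0.0
    for x in itertools.product(range(k), repeat=T + 1):
        w = p[x[0]]
        for leaf in range(1, T + 1):
            w *= p[x[leaf]] * Q[x[0]][x[leaf]]
        total += w
    return total

def phi_star(p, Q, T):
    """Closed form from Corollary 2.2: sum_c p_c (sum_x p_x Q_{c,x})^T."""
    k = len(p)
    return sum(p[c] * sum(p[x] * Q[c][x] for x in range(k)) ** T
               for c in range(k))

# A signed model in which every star count vanishes exactly (our choice).
p = [0.5, 0.5]
Q = [[1.0, -1.0], [-1.0, 1.0]]
absQ = [[abs(q) for q in row] for row in Q]

for T in range(1, 7):
    # Closed form agrees with the brute-force labeling sum.
    assert abs(phi_star(p, Q, T) - phi_star_bruteforce(p, Q, T)) < 1e-12
    assert phi_star(p, Q, T) == 0.0      # signed star counts all vanish...
    assert phi_star(p, absQ, T) == 1.0   # ...but not after replacing Q by |Q|
```

This is exactly the failure mode of the naive comparison (29) below: taking entry-wise absolute values of $Q$ can turn identically-zero star counts into strictly positive ones, which is why 4-cycles enter the statement of Theorem 3.6.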
Due to the length and complexity of the proof, we split it into several sections. Without loss of generality, suppose that $p_1\le p_2$. As $p_1+p_2=1$, this means that $p_2\ge1/2$. We will also use throughout the following simple claim about signed 4-cycles in 2-SBMs.

Claim 3.4. Whenever $p_2\ge p_1$ in a 2-community stochastic block model $\mathsf{SBM}(p,Q)$,
\[
\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4) = \Theta\bigl(\max(p_1^4Q_{1,1}^4,\;p_1^2Q_{1,2}^4,\;Q_{2,2}^4)\bigr).
\]

Proof. Expanding Proposition 1.3,
\[
\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4) = p_1^4Q_{1,1}^4+4p_1^3p_2Q_{1,1}^2Q_{1,2}^2+4p_1^2p_2^2Q_{1,1}Q_{1,2}^2Q_{2,2}+2p_1^2p_2^2Q_{1,2}^4+4p_1p_2^3Q_{1,2}^2Q_{2,2}^2+p_2^4Q_{2,2}^4.
\]
Observe that $|4p_1^2p_2^2Q_{1,1}Q_{1,2}^2Q_{2,2}|\le 2p_1^3p_2Q_{1,1}^2Q_{1,2}^2+2p_1p_2^3Q_{1,2}^2Q_{2,2}^2$ by AM–GM. Hence the above expression is bounded between
$p_1^4Q_{1,1}^4+2p_1^3p_2Q_{1,1}^2Q_{1,2}^2+2p_1^2p_2^2Q_{1,2}^4+2p_1p_2^3Q_{1,2}^2Q_{2,2}^2+p_2^4Q_{2,2}^4$
and
$p_1^4Q_{1,1}^4+6p_1^3p_2Q_{1,1}^2Q_{1,2}^2+2p_1^2p_2^2Q_{1,2}^4+6p_1p_2^3Q_{1,2}^2Q_{2,2}^2+p_2^4Q_{2,2}^4$.
Similarly, observe that $0\le 2p_1^3p_2Q_{1,1}^2Q_{1,2}^2\le p_1^4Q_{1,1}^4+p_1^2p_2^2Q_{1,2}^4$. Using also that $p_2\in[1/2,1)$ gives the desired conclusion.

3.4.1 Step 1: Comparison with a non-negative block model.

The key idea in the proof is a comparison between SBM models. Namely, let $|Q|$ be the matrix formed by taking entry-wise absolute values of $Q$, and consider $\mathsf{SBM}(p,|Q|)$. Proposition 1.3 combined with the triangle inequality implies that
\[
|\Phi_{\mathsf{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}} = \Bigl|\sum_{x_1,\dots,x_h\in[k]}\Bigl(\prod_{i=1}^h p_{x_i}\times\prod_{(i,j)\in E(H)}Q_{x_i,x_j}\Bigr)\Bigr|^{\frac{1}{|V(H)|}}
\le \Bigl(\sum_{x_1,\dots,x_h\in[k]}\prod_{i=1}^h p_{x_i}\times\prod_{(i,j)\in E(H)}|Q_{x_i,x_j}|\Bigr)^{\frac{1}{|V(H)|}}
= |\Phi_{\mathsf{SBM}(p,|Q|)}(H)|^{\frac{1}{|V(H)|}}. \tag{27}
\]
On the other hand, from Theorem 3.3, we know that
\[
|\Phi_{\mathsf{SBM}(p,|Q|)}(H)|^{\frac{1}{|V(H)|}} \lesssim_D |\Phi_{\mathsf{SBM}(p,|Q|)}(\mathrm{Star}_t)|^{\frac{1}{|V(\mathrm{Star}_t)|}}
\]
for some $t\in\{1,2,\dots,D\}$. Altogether, this implies that
\[
|\Phi_{\mathsf{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}} \lesssim_D |\Phi_{\mathsf{SBM}(p,|Q|)}(\mathrm{Star}_t)|^{\frac{1}{|V(\mathrm{Star}_t)|}}.
\]
(28)

Inequality (28) would imply the result if it were the case that taking absolute values of $Q$ does not significantly increase star counts, i.e.,
\[
|\Phi_{\mathsf{SBM}(p,|Q|)}(\mathrm{Star}_t)|^{\frac{1}{|V(\mathrm{Star}_t)|}} \lesssim_t |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_t)|^{\frac{1}{|V(\mathrm{Star}_t)|}}. \tag{29}
\]
Unfortunately, (29) is certainly incorrect, as there are matrices $Q$ for which $\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_t)=0$ for every $t\ge1$ but $\Phi_{\mathsf{SBM}(p,|Q|)}(\mathrm{Star}_t)>0$ (see Theorem A.2). Yet, it turns out that whenever (29) does not hold, $\mathsf{SBM}(p,Q)$ needs to have a very specific structure which leads to cancellations in the Fourier coefficients of stars. The rest of the proof is devoted to first describing such a structure, and then exploiting it to compare the Fourier coefficient of $H$ with those of stars and 4-cycles. For the rest of the proof, let $T$ be one value of $t\in\{1,2,\dots,D\}$ for which (28) is satisfied.

3.4.2 Step 2: Identifying structure which leads to
cancellations in the Fourier coefficients of stars.

By Corollary 2.2,
\[
\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_T) = p_1(p_1Q_{1,1}+p_2Q_{1,2})^T+p_2(p_1Q_{1,2}+p_2Q_{2,2})^T.
\]
Now, suppose that (29) does not hold. Then, for some large absolute constant $C$ (for concreteness, $C=128$ works), it must be the case that
\[
(4C^2)^T\bigl|p_1(p_1Q_{1,1}+p_2Q_{1,2})^T+p_2(p_1Q_{1,2}+p_2Q_{2,2})^T\bigr|
\le p_1(p_1|Q_{1,1}|+p_2|Q_{1,2}|)^T+p_2(p_1|Q_{1,2}|+p_2|Q_{2,2}|)^T. \tag{30}
\]
There might be two reasons for this inequality:

1. Within-community cancellation:
\[
p_1\bigl(p_1|Q_{1,1}|+p_2|Q_{1,2}|\bigr)^T \ge C^T\times p_1\bigl|p_1Q_{1,1}+p_2Q_{1,2}\bigr|^T \tag{31.a}
\]
or
\[
p_2\bigl(p_1|Q_{1,2}|+p_2|Q_{2,2}|\bigr)^T \ge C^T\times p_2\bigl|p_1Q_{1,2}+p_2Q_{2,2}\bigr|^T. \tag{31.b}
\]
We refer to the disjunction of (31.a) and (31.b) as (31).

2. Between-community cancellation:
\[
p_1\bigl(p_1|Q_{1,1}|+p_2|Q_{1,2}|\bigr)^T \le C^T\times p_1\bigl|p_1Q_{1,1}+p_2Q_{1,2}\bigr|^T
\quad\text{and}\quad
p_2\bigl(p_1|Q_{1,2}|+p_2|Q_{2,2}|\bigr)^T \le C^T\times p_2\bigl|p_1Q_{1,2}+p_2Q_{2,2}\bigr|^T,
\]
but
\[
(4C^2)^T\times\bigl|p_1(p_1Q_{1,1}+p_2Q_{1,2})^T+p_2(p_1Q_{1,2}+p_2Q_{2,2})^T\bigr|
\le p_1\bigl|p_1Q_{1,1}+p_2Q_{1,2}\bigr|^T+p_2\bigl|p_1Q_{1,2}+p_2Q_{2,2}\bigr|^T. \tag{32}
\]

3.4.3 Step 3: Within-community Cancellations.

We begin by analyzing the case of within-community cancellations, which turns out to be the simpler case.

Lemma 3.5. Suppose that (30) and (31) hold for some $C\ge4$. Then $|Q_{1,2}|\le Cp_1|Q_{1,1}|$.

Proof. Case 1) First, suppose that (31.a) holds.
Then,
\[
\bigl(p_1|Q_{1,1}|+p_2|Q_{1,2}|\bigr) \ge C\bigl|p_1Q_{1,1}+p_2Q_{1,2}\bigr| \ge C\bigl|p_1|Q_{1,1}|-p_2|Q_{1,2}|\bigr|.
\]
When $C\ge4$, this immediately implies that $p_2|Q_{1,2}|\le 2p_1|Q_{1,1}|$, which is enough as $p_2\ge1/2$.

Case 2) Now, suppose that (31.b) holds. Then
\[
p_1|Q_{1,2}|+p_2|Q_{2,2}| \ge C\bigl|p_1Q_{1,2}+p_2Q_{2,2}\bigr| \ge C\bigl|p_1|Q_{1,2}|-p_2|Q_{2,2}|\bigr|.
\]
As in Case 1), $p_2|Q_{2,2}|\le 2p_1|Q_{1,2}|$. Hence, $\bigl(p_1|Q_{1,2}|+p_2|Q_{2,2}|\bigr)\le 3p_1|Q_{1,2}|$, so
\[
\bigl|p_1Q_{1,2}+p_2Q_{2,2}\bigr| \le \frac1C\bigl(p_1|Q_{1,2}|+p_2|Q_{2,2}|\bigr) \le \frac3C\,p_1|Q_{1,2}|. \tag{33}
\]
There are two cases. Either $|p_1Q_{1,1}+p_2Q_{1,2}|\le\frac{1}{\sqrt C}\bigl(p_1|Q_{1,1}|+p_2|Q_{1,2}|\bigr)$, which immediately implies $p_2|Q_{1,2}|\le2p_1|Q_{1,1}|$ as in Case 1), provided $C\ge4^2$. Or,
\[
|p_1Q_{1,1}+p_2Q_{1,2}| > \frac{1}{\sqrt C}\bigl(p_1|Q_{1,1}|+p_2|Q_{1,2}|\bigr) \ge \frac{1}{\sqrt C}\,p_2|Q_{1,2}| \ge \frac{1}{2\sqrt C}|Q_{1,2}|. \tag{34}
\]
Combining (33) and (34),
\[
\bigl|p_1(p_1Q_{1,1}+p_2Q_{1,2})^T+p_2(p_1Q_{1,2}+p_2Q_{2,2})^T\bigr|
\ge \bigl|p_1(p_1Q_{1,1}+p_2Q_{1,2})^T\bigr|-\bigl|p_2(p_1Q_{1,2}+p_2Q_{2,2})^T\bigr|
\ge p_1\Bigl(\frac{1}{2\sqrt C}|Q_{1,2}|\Bigr)^T-p_2\Bigl(\frac3C p_1|Q_{1,2}|\Bigr)^T
\ge p_1\Bigl(\frac{1}{2\sqrt C}|Q_{1,2}|\Bigr)^T-p_1\Bigl(\frac3C|Q_{1,2}|\Bigr)^T
\ge \frac{p_1|Q_{1,2}|^T}{(2C)^T}
\]
whenever $C\ge128$. Now, recall (30).
Then, as $\bigl(p_1|Q_{1,2}|+p_2|Q_{2,2}|\bigr)\le 3p_1|Q_{1,2}|$, it must be the case that
\[
p_1\bigl(p_1|Q_{1,1}|+p_2|Q_{1,2}|\bigr)^T+p_2\bigl(3p_1|Q_{1,2}|\bigr)^T
\ge p_1\bigl(p_1|Q_{1,1}|+p_2|Q_{1,2}|\bigr)^T+p_2\bigl(p_1|Q_{1,2}|+p_2|Q_{2,2}|\bigr)^T
\ge (4C^2)^T\bigl|p_1(p_1Q_{1,1}+p_2Q_{1,2})^T+p_2(p_1Q_{1,2}+p_2Q_{2,2})^T\bigr|
\ge (4C^2)^T\,\frac{p_1|Q_{1,2}|^T}{(2C)^T}.
\]
For large enough $C\ge128$, this immediately implies that $p_1|Q_{1,1}|+p_2|Q_{1,2}|\ge\frac{3C}{2}\times|Q_{1,2}|$. Again, this implies $|Q_{1,2}|\le Cp_1|Q_{1,1}|$.

It turns out that Lemma 3.5 is enough to imply Theorem 3.6.

Lemma 3.6. Suppose that $|Q_{1,2}|\le Cp_1|Q_{1,1}|$ for some absolute constant $C$. Then, for any connected graph $H$ on at most $D$ edges,
\[
\Psi_{\mathsf{SBM}(p,Q)}(H) \lesssim_D \max\Bigl(\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4),\,\max_{1\le t\le D}\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_t)\Bigr).
\]
In the rest of the section, we prove Lemma 3.6. Again, the proof depends on whether $H$ is a tree.

Proof of Lemma 3.6 when $H$ is not a tree. Suppose that $H$ is connected and not a tree. Then $H$ has at least $|V(H)|$ edges. Now, we rewrite Proposition 1.3 as follows:
\[
\Phi_{\mathsf{SBM}(p,Q)}(H) = \sum_{x_1,\dots,x_h\in\{1,2\}}\Bigl(\prod_{i=1}^h p_{x_i}\times\prod_{(i,j)\in E(H)}Q_{x_i,x_j}\Bigr)
= \sum_{K\subseteq V(H)} p_1^{|K|}\,p_2^{|V(H)|-|K|}\,Q_{1,1}^{|E_H(K,K)|}Q_{1,2}^{|E_H(K,V(H)\setminus K)|}Q_{2,2}^{|E_H(V(H)\setminus K,V(H)\setminus K)|}, \tag{35}
\]
where the equivalence between the second and the third expression follows simply by choosing $K$ to be the subset of vertices labeled by 1. In particular,
\[
|\Phi_{\mathsf{SBM}(p,Q)}(H)| \le \sum_{K\subseteq V(H)}\Bigl|p_1^{|K|}\,p_2^{|V(H)|-|K|}\,Q_{1,1}^{|E_H(K,K)|}Q_{1,2}^{|E_H(K,V(H)\setminus K)|}Q_{2,2}^{|E_H(V(H)\setminus K,V(H)\setminus K)|}\Bigr|
\lesssim_D \max_{K\subseteq V(H)} p_1^{|K|}\times|Q_{1,1}|^{|E_H(K,K)|}|Q_{1,2}|^{|E_H(K,V(H)\setminus K)|}|Q_{2,2}|^{|E_H(V(H)\setminus K,V(H)\setminus K)|}. \tag{36}
\]
Figure 2: Decomposition of $H$ into vertices and edges in the case of in-community cancellations when $H$ is not a tree. Here $|V(H)|=a+b+t+s$, $|E_H(K,K)|\ge t$, $|E_H(K,V(H)\setminus K)|\ge a+b-1$, $|E_H(V(H)\setminus K,V(H)\setminus K)|\ge s$, and $|E(H)|\ge|V(H)|=a+b+t+s$. The blue circles represent the connected components in $H|_K$ and $H|_{V(H)\setminus K}$, respectively.

From now on, we will focus on a specific choice of $K$. Suppose that $H|_K$ has $a$ connected components and $|K|=a+t$ vertices. Hence, $|E_H(K,K)|\ge t$. Let $H|_{V(H)\setminus K}$ have $b$ connected components and $b+s$ vertices. Hence, $|E_H(V(H)\setminus K,V(H)\setminus K)|\ge s$. Finally, observe that $|E_H(K,V(H)\setminus K)|\ge a+b-1$ since $H$ is connected. Since $H$ is not a tree, $|E(H)|\ge|V(H)|=a+b+t+s$. Thus, there is at least one more edge besides the $a+b+t+s-1$ identified so far; suppose that it contributes a factor of type $|Q_{i,j}|$. As in the layout from the end of Section 1.6.2, we aim to replace some of the $|Q_{1,2}|$ instances by $p_1|Q_{1,1}|$ so that we can compare to a 4-cycle. Namely:
\[
p_1^{|K|}|Q_{1,1}|^{|E_H(K,K)|}|Q_{1,2}|^{|E_H(K,V(H)\setminus K)|}|Q_{2,2}|^{|E_H(V(H)\setminus K,V(H)\setminus K)|}
\]
\[
\le p_1^{a+t}\times|Q_{1,1}|^t\times|Q_{1,2}|^{a+b-1}\times|Q_{2,2}|^s\times|Q_{i,j}|
\lesssim p_1^{a+t+a+b-1}\times|Q_{1,1}|^{a+b+t-1}\times|Q_{2,2}|^s\times|Q_{i,j}|,
\]
where we used Lemma 3.5 in the second inequality. If $(i,j)\in\{(1,1),(1,2)\}$, then $a\ge1$ (as label 1 is used) and $|Q_{i,j}|=O(|Q_{1,1}|)$. Thus, the above expression is bounded by
\[
p_1^{a+b+t}\times|Q_{1,1}|^{a+b+t}\times|Q_{2,2}|^s
\le (p_1|Q_{1,1}|)^{a+b+t}\times|Q_{2,2}|^s
\lesssim_D |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{a+b+t}{4}}\times|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac s4}
= |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|}{4}},
\]
which completes the proof. If $(i,j)=(2,2)$, the above expression is bounded by
\[
p_1^{t+a+b-1}\times|Q_{1,1}|^{a+b+t-1}\times|Q_{2,2}|^{s+1}.
\]
We argue similarly that this is at most $|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|}{4}}$ up to multiplicative constants depending on $D$.

If we were to apply the same argument when $H$ is a tree, we would only prove that
\[
|\Phi_{\mathsf{SBM}(p,Q)}(H)| \lesssim_D |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|-1}{4}},
\]
which is not strong enough. Thus, we need to use the technique of isolating leaves as in Theorem 3.4.

Proof of Lemma 3.6 when $H$ is a tree. Now, suppose that $H$ is a tree. Again, there is nothing to prove if it has fewer than three vertices. If it has at least three vertices, then it has at least two leaves; let these be $h-1$ and $h$.

We now bound $|\Phi_{\mathsf{SBM}(p,Q)}(H)|$ by using the same technique as in Theorem 3.4. Namely,
\[
\Phi_{\mathsf{SBM}(p,Q)}(H) = \sum_{K\subseteq V(H)\setminus\{h-1,h\}} p_1^{|K|}p_2^{|V(H)|-2-|K|}\,Q_{1,1}^{|E_H(K,K)|}Q_{1,2}^{|E_H(K,V(H)\setminus K)|}Q_{2,2}^{|E_H(V(H)\setminus K,V(H)\setminus K)|}\times(p_1Q_{1,1}+p_2Q_{1,2})^{P_1(K)}(p_1Q_{1,2}+p_2Q_{2,2})^{P_2(K)}, \tag{37}
\]
where $P_1(K)$ is the number of parents of $h,h-1$ with label 1 according to $K$, and $P_2(K)$ the number with label 2 (in case of a common parent, we count it twice). Note that $P_1(K)+P_2(K)=2$. By AM–GM,
\[
\bigl|(p_1Q_{1,1}+p_2Q_{1,2})^{P_1(K)}(p_1Q_{1,2}+p_2Q_{2,2})^{P_2(K)}\bigr|
\lesssim p_1^{-\mathbb 1[|K|\ge1]}\Bigl(p_1(p_1Q_{1,1}+p_2Q_{1,2})^2+p_2(p_1Q_{1,2}+p_2Q_{2,2})^2\Bigr)
= p_1^{-\mathbb 1[|K|\ge1]}\,\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2).
\]
The $\mathbb 1[|K|\ge1]$ factor comes from the fact that if $|K|=0$, then $P_1(K)=0$. Combining with (37),
\[
\bigl|\Phi_{\mathsf{SBM}(p,Q)}(H)\bigr| \lesssim_D \max_{K\subseteq V(H)\setminus\{h,h-1\}} p_1^{|K|}|Q_{1,1}|^{|E_H(K,K)|}|Q_{1,2}|^{|E_H(K,V(H)\setminus K)|}|Q_{2,2}|^{|E_H(V(H)\setminus K,V(H)\setminus K)|}\times p_1^{-\mathbb 1[|K|\ge1]}\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr|. \tag{38}
\]
Again, we denote by $a$ the number of connected components of $H|_K$, by $a+t$ its number of vertices, by $b$ the number of connected components of $H|_{V(H)\setminus K}$, and by $b+s$ its number of vertices. We rewrite the right-hand side of (38) as
\[
p_1^{a+t}|Q_{1,1}|^t|Q_{1,2}|^{a+b-1}|Q_{2,2}|^s\,p_1^{-\mathbb 1[a\ge1]}\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr|
\le p_1^{a+t+a+b-1-\mathbb 1[a\ge1]}|Q_{1,1}|^{a+b+t-1}|Q_{2,2}|^s\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr|,
\]
where again the inequality follows from Lemma 3.5. Using that $a-\mathbb 1[a\ge1]\ge0$, we continue as follows:
\[
(p_1|Q_{1,1}|)^{a+b+t-1}|Q_{2,2}|^s\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr|
\lesssim_D |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{a+b+t-1}{4}}\,|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac s4}\,\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr|
= |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|-3}{4}}\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr|.
\]
Exactly as in the proof of Theorem 3.4, this implies that
\[
\Psi_{\mathsf{SBM}(p,Q)}(H) \lesssim_D \max\bigl(\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4),\,\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)\bigr).
\]

Figure 3: Decomposition of $H$ into vertices and edges in the case of in-community cancellations when $H$ is a tree. Here $|V(H)|=a+b+t+s+2$, $|E_H(K,K)|\ge t$, $|E_H(K,V(H)\setminus K)|=a+b-1$, $|E_H(V(H)\setminus K,V(H)\setminus K)|=s$, and $|E(H)|=a+b+t+s+1$. The blue circles represent the connected components in $H|_K$ and $H|_{V(H)\setminus K}$, respectively. We have drawn the two leaves with parents of different labels for the purposes of illustration, but this might not be the case.

3.4.4 Step 4: Between-Community Cancellations.

Now, suppose that (30) holds in the case of between-community cancellations, (32). Additionally, we assume that $|Q_{1,2}|\ge p_1|Q_{1,1}|$, as otherwise we can invoke Lemma 3.6 and complete the proof. In particular, $p_1|Q_{1,1}|+p_2|Q_{1,2}|\le 2|Q_{1,2}|$. By (32), also $|p_1Q_{1,1}+p_2Q_{1,2}|\ge\frac{p_2}{C}|Q_{1,2}|\ge\frac{1}{2C}|Q_{1,2}|$.
We record:
\[
\frac{1}{2C}|Q_{1,2}| \le |p_1Q_{1,1}+p_2Q_{1,2}| \le 2|Q_{1,2}|. \tag{39}
\]
Furthermore, (32) implies that $p_1(p_1Q_{1,1}+p_2Q_{1,2})^T$ and $p_2(p_1Q_{1,2}+p_2Q_{2,2})^T$ have different signs. In particular, $T$ is odd. Hence, $T\le D-1$ as $D$ is even. Next, we will show that unless $p_1\ge|Q_{1,2}|$, $\mathrm{Star}_{T+1}$ dominates $H$.

Lemma 3.7. Suppose that (30) holds and $p_1\le|Q_{1,2}|$. Then $\Psi_{\mathsf{SBM}(p,Q)}(H)\lesssim_D\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{T+1})$.

Proof. For brevity, denote $\mu=p_1Q_{1,1}+p_2Q_{1,2}$ and $\lambda=p_1Q_{1,2}+p_2Q_{2,2}$. When (32) holds, $C|\mu|\ge p_1|Q_{1,1}|+p_2|Q_{1,2}|$ and $C|\lambda|\ge p_1|Q_{1,2}|+p_2|Q_{2,2}|$. However, $(4C^2)^T\times|p_1\mu^T+p_2\lambda^T|\le p_1|\mu|^T+p_2|\lambda|^T$. In particular, this means $(4C^2)^T\times\bigl|p_1|\mu|^T-p_2|\lambda|^T\bigr|\le p_1|\mu|^T+p_2|\lambda|^T$, so $p_1|\mu|^T\in(1\pm1/2)\,p_2|\lambda|^T$ when $C\ge4$. By (39), also $|\mu|\ge\frac{1}{2C}|Q_{1,2}|\ge\frac{p_1}{2C}$.

Now, since $T+1$ is even and $|\mu|\ge\frac{p_1}{2C}$, we have $(p_1\mu^{T+1})^{1/(T+2)}\gtrsim_D(p_1|\mu|^T)^{1/(T+1)}$. Altogether,
\[
|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{T+1})|^{\frac{1}{|V(\mathrm{Star}_{T+1})|}} = |p_1\mu^{T+1}+p_2\lambda^{T+1}|^{1/(T+2)} \ge |p_1\mu^{T+1}|^{1/(T+2)} \gtrsim_D |p_1\mu^T|^{1/(T+1)}
\gtrsim_D \bigl(p_1|\mu|^T+p_2|\lambda|^T\bigr)^{1/(T+1)} \asymp_D |\Phi_{\mathsf{SBM}(p,|Q|)}(\mathrm{Star}_T)|^{1/(T+1)} \gtrsim_D |\Phi_{\mathsf{SBM}(p,Q)}(H)|^{\frac{1}{|V(H)|}}.
\]

Thus, from now on, we assume that
\[
|Q_{1,2}|\le p_1. \tag{40}
\]

Lemma
3.8. Suppose that (30) holds, $|Q_{1,2}|\le p_1$, and $T>1$. Then $\Psi_{\mathsf{SBM}(p,Q)}(H)\lesssim_D\Psi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{T-1})$.

Proof sketch. The proof is exactly the same as that of Lemma 3.7. The only difference is that this time $|\mu|\lesssim_D p_1$, which implies $(p_1\mu^{T-1})^{1/T}\gtrsim_D(p_1|\mu|^T)^{1/(T+1)}$.

Hence, what is left is the case $T=1$. For convenience, we restate and recall the conditions in the remaining case:

1. $T=1$.
2. (32): $p_1|Q_{1,1}|+p_2|Q_{1,2}|\le C\times|p_1Q_{1,1}+p_2Q_{1,2}|$ and $p_1|Q_{1,2}|+p_2|Q_{2,2}|\le C\times|p_1Q_{1,2}+p_2Q_{2,2}|$, but $(4C^2)\times|p_1(p_1Q_{1,1}+p_2Q_{1,2})+p_2(p_1Q_{1,2}+p_2Q_{2,2})|\le p_1|p_1Q_{1,1}+p_2Q_{1,2}|+p_2|p_1Q_{1,2}+p_2Q_{2,2}|$.
3. $|Q_{1,2}|\ge p_1|Q_{1,1}|$, as otherwise Lemma 3.6 gives Theorem 3.6.
4. $|Q_{1,2}|\le p_1$, as otherwise Lemma 3.7 implies the result.

Observe that under the above assumptions, $|p_1(p_1Q_{1,1}+p_2Q_{1,2})|\le p_1|p_1Q_{1,1}|+p_1|p_2Q_{1,2}|\le 2p_1|Q_{1,2}|$. Furthermore,
\[
p_1|p_1Q_{1,1}+p_2Q_{1,2}|+p_2|p_1Q_{1,2}+p_2Q_{2,2}|
\ge C\times\bigl|p_1(p_1Q_{1,1}+p_2Q_{1,2})+p_2(p_1Q_{1,2}+p_2Q_{2,2})\bigr|
\ge C\times\Bigl|p_1|p_1Q_{1,1}+p_2Q_{1,2}|-p_2|p_1Q_{1,2}+p_2Q_{2,2}|\Bigr|.
\]
For large enough $C$, this implies
\[
p_2|p_1Q_{1,2}+p_2Q_{2,2}| \le 2p_1|p_1Q_{1,1}+p_2Q_{1,2}| \le 4p_1|Q_{1,2}|. \tag{41}
\]
As $p_2|p_1Q_{1,2}+p_2Q_{2,2}|\ge\frac1C p_1|Q_{1,2}|+\frac1C p_2|Q_{2,2}|$, this further implies:

5. $|Q_{2,2}|\le 8Cp_1|Q_{1,2}|$ for large enough $C$. Finally, this immediately implies that $|p_1Q_{1,2}+p_2Q_{2,2}|=O(p_1|Q_{1,2}|)$. As we are in the case of between-community cancellations, the reverse inequality is also true, and $|p_1Q_{1,2}+p_2Q_{2,2}|\asymp p_1|Q_{1,2}|$.

6. For any even $t\in[2,D]$,
\[
\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_t) = p_1(p_1Q_{1,1}+p_2Q_{1,2})^t+p_2(p_1Q_{1,2}+p_2Q_{2,2})^t
\asymp_D p_1(p_1|Q_{1,1}|+p_2|Q_{1,2}|)^t+p_2(p_1|Q_{1,2}|+p_2|Q_{2,2}|)^t
\asymp_D p_1|Q_{1,2}|^t+p_1^t|Q_{1,2}|^t \asymp_D p_1|Q_{1,2}|^t. \tag{42}
\]

Now, we will carry out a similar analysis as in Section 3.4.3, but we will need an even more careful treatment of the leaves. Namely, suppose that $L(H)\subseteq V(H)$ is the set of leaves and, for $K\subseteq V(H)\setminus L(H)$, the set of leaves with parents of label 1 is $L_1(K)$ and the set with parents of label 2 is $L_2(K)$.
Hence,
\[
\bigl|\Phi_{\mathsf{SBM}(p,Q)}(H)\bigr| = \Bigl|\sum_{K\subseteq V(H)\setminus L(H)} p_1^{|K|}p_2^{|V(H)|-|L(H)|-|K|}\times Q_{1,1}^{|E_H(K,K)|}Q_{1,2}^{|E_H(K,V(H)\setminus(K\cup L(H)))|}Q_{2,2}^{|E_H(V(H)\setminus(K\cup L(H)),V(H)\setminus(K\cup L(H)))|}\times(p_1Q_{1,1}+p_2Q_{1,2})^{|L_1(K)|}(p_1Q_{1,2}+p_2Q_{2,2})^{|L_2(K)|}\Bigr|
\]
\[
\lesssim_D \max_{K\subseteq V(H)\setminus L(H)} p_1^{|K|}\times|Q_{1,1}|^{|E_H(K,K)|}|Q_{1,2}|^{|E_H(K,V(H)\setminus(K\cup L(H)))|}|Q_{2,2}|^{|E_H(V(H)\setminus(K\cup L(H)),V(H)\setminus(K\cup L(H)))|}\times|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}|p_1Q_{1,2}+p_2Q_{2,2}|^{|L_2(K)|}. \tag{43}
\]
From now on, let $K$ be the respective maximizer in (43). Again, let $H|_K$ have $a$ connected components and $a+t$ vertices, and $H|_{V(H)\setminus(K\cup L(H))}$ have $b$ connected components and $b+s$ vertices. In particular, $|V(H)|=a+b+t+s+|L_1(K)|+|L_2(K)|$.

For a fixed $K$, the last expression in (43) is at most
\[
p_1^{a+t}|Q_{1,1}|^t|Q_{1,2}|^{a+b-1}|Q_{2,2}|^s\,|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}|p_1Q_{1,2}+p_2Q_{2,2}|^{|L_2(K)|}.
\]
Using that $|Q_{2,2}|\lesssim p_1|Q_{1,2}|$ and (41), the above expression is bounded by
\[
p_1^{a+t+s+|L_2(K)|}|Q_{1,1}|^t|Q_{1,2}|^{a+b-1+s+|L_2(K)|}|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}. \tag{44}
\]
We now analyze two cases based on which of the two quantities $|Q_{1,1}|\sqrt{p_1}$ and $|Q_{1,2}|$ is larger. Again, the intuition is as in Section 1.6.2: this will allow us to replace instances of $|Q_{1,2}|$ with $|Q_{1,1}|\sqrt{p_1}$ and compare with a 4-cycle.

Figure 4: Decomposition of $H$ into vertices and edges in the case of between-community cancellations. Here $|V(H)|=a+b+t+s+|L_1(K)|+|L_2(K)|$, $|E_H(K,K)|\ge t$, $|E_H(K,V(H)\setminus K)|\ge a+b-1$, and $|E_H(V(H)\setminus K,V(H)\setminus K)|\ge s$. The blue circles represent the connected components in $H|_K$ and $H|_{V(H)\setminus K}$.

Case 1) $|Q_{1,1}|\sqrt{p_1}\le|Q_{1,2}|$. Using $|Q_{1,1}|\sqrt{p_1}\le|Q_{1,2}|$, we bound (44) by
\[
p_1^{a+t/2+s+|L_2(K)|}|Q_{1,2}|^{a+b-1+s+|L_2(K)|+t}|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}. \tag{45}
\]
Case 1.1) $|L_1(K)|\le2$. Using that $p_1|Q_{1,1}|\le|Q_{1,2}|$, the expression in (45) is bounded by
\[
p_1^{a+t/2+s+|L_2(K)|}\times|Q_{1,2}|^{a+b-1+s+|L_2(K)|+t+|L_1(K)|}
\lesssim p_1^{a+t/2+s+|L_2(K)|-1}\times|Q_{1,2}|^{a+b-3+s+|L_2(K)|+t+|L_1(K)|}\times\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2),
\]
where we used (42).
Case 1.1.1) Now, suppose that
\[
a+t/2+s+|L_2(K)|-1 \ge \tfrac12\bigl(a+b-3+s+t+|L_2(K)|+|L_1(K)|\bigr) = \tfrac12\bigl(|V(H)|-3\bigr).
\]
Then, using Claim 3.4, the above expression is bounded by
\[
\bigl(\sqrt{p_1}|Q_{1,2}|\bigr)^{a+b-3+s+|L_2(K)|+t+|L_1(K)|}\times\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)
\lesssim |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|-3}{4}}\times|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)|.
\]
As in Theorem 3.3, this expression is at most
\[
\max\bigl(|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{1/4},\,|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)|^{1/3}\bigr)^{|V(H)|}.
\]

Case 1.1.2) The remaining case is
\[
a+t/2+s+|L_2(K)|-1 < \tfrac12\bigl(a+b-3+s+|L_2(K)|+t+|L_1(K)|\bigr)
\iff a+s+|L_2(K)|+1 < b+|L_1(K)|
\iff a+s+|L_2(K)|+2 \le b+|L_1(K)|.
\]
Since $|L_1(K)|\le2$ in the current case, it follows that $b=a+s+|L_2(K)|+r$ for some $r\ge0$. Under these assumptions, we prove the following fact.

Claim 3.9. $H|_{V(H)\setminus(K\cup L_1(K))}$ has at least $R\ge b-s-|L_2(K)|=a+r$ isolated vertices.

Proof. Indeed, note that $H|_{V(H)\setminus(K\cup L(H))}$ has $b$ connected components and $b+s$ vertices. Hence, at least $b-s$ of the vertices in $H|_{V(H)\setminus(K\cup L(H))}$ are isolated. Such a vertex fails to be isolated in $H|_{V(H)\setminus(K\cup L_1(K))}$ if and only if it has a neighbor in $L_2(K)$. As each vertex in $L_2(K)$ is a leaf, there are at most $|L_2(K)|$ such vertices. Hence, at least $b-s-|L_2(K)|=a+r$ of the vertices in $V(H)\setminus(K\cup L(H))$ are isolated in $H|_{V(H)\setminus(K\cup L_1(K))}$.

Denote these isolated vertices by $\mathcal S$, so $|\mathcal S|$
$\ge a+r$. Now consider $H|_{V(H)\setminus L(H)}$. This is a connected graph, since $H$ is connected and $L(H)$ is the set of leaves. Hence, each connected component in $H|_{V(H)\setminus(K\cup L(H))}$ has at least one edge to a vertex in $K$. If, furthermore, this connected component is one of the vertices in $\mathcal S$, it must have at least 2 edges to $K$: at least one edge due to connectivity, and at least one more as the vertices of $\mathcal S$ are not leaves. Indeed, note that $\mathcal S\subseteq V(H)\setminus(K\cup L_1(K))$, and its vertices have no edges to other vertices of $V(H)\setminus(K\cup L_1(K))$ (they are isolated there), nor to $L_1(K)$ (the parents of leaves in $L_1(K)$ are all in $K$). Altogether, this means that the number of edges between $K$ and $V(H)\setminus(L(H)\cup K)$ is at least $2|\mathcal S|+(b-|\mathcal S|)\ge b+|\mathcal S|\ge b+a+r$. Note that all of these edges are of type $(1,2)$. Thus, the bound in (43) becomes
\[
\bigl|\Phi_{\mathsf{SBM}(p,Q)}(H)\bigr| \lesssim_D p_1^{a+t}|Q_{1,1}|^t\times|Q_{1,2}|^{b+a+r}\times|Q_{2,2}|^s\times|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}|p_1Q_{1,2}+p_2Q_{2,2}|^{|L_2(K)|}
\]
(using that $\sqrt{p_1}|Q_{1,1}|\le|Q_{1,2}|$, $p_1|Q_{1,2}|\gtrsim|Q_{2,2}|$, and Eqs. (39) and (41))
\[
\lesssim_D p_1^{a+t}|Q_{1,2}|^t p_1^{-t/2}\,|Q_{1,2}|^{b+a+r}\times|Q_{1,2}|^s p_1^s\,|Q_{1,2}|^{|L_1(K)|}|Q_{1,2}|^{|L_2(K)|}p_1^{|L_2(K)|}
\lesssim_D p_1^{a+t/2+s+|L_2(K)|}\,|Q_{1,2}|^{t+b+a+r+s+|L_1(K)|+|L_2(K)|}
\]
\[
\lesssim p_1^{a+t/2+s+|L_2(K)|-1}\times|Q_{1,2}|^{t+b+a+s+|L_1(K)|+|L_2(K)|-3}\times p_1|Q_{1,2}|^2\times|Q_{1,2}|^{1+r}
\]
(using that $|Q_{1,2}|\le p_1$)
\[
\lesssim p_1^{a+t/2+s+|L_2(K)|+r}\times|Q_{1,2}|^{t+b+a+s+|L_1(K)|+|L_2(K)|-3}\times p_1|Q_{1,2}|^2.
\]
Now, observe that
\[
a+t/2+s+|L_2(K)|+r \ge \tfrac12\bigl(t+b+a+s+|L_1(K)|+|L_2(K)|-3\bigr),
\]
since this is equivalent to $a+s+|L_2(K)|+2r+3\ge b+|L_1(K)|$, but $a+s+|L_2(K)|+r=b$ and $r+3\ge3>|L_1(K)|$. Altogether, the above expression is bounded by the familiar
\[
\bigl(\sqrt{p_1}|Q_{1,2}|\bigr)^{t+b+a+s+|L_1(K)|+|L_2(K)|-3}\times p_1|Q_{1,2}|^2
\lesssim |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|-3}{4}}\times|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)|,
\]
where the inequality follows from Claim 3.4 and Eq. (42).

Case 1.2) $|L_1(K)|\ge3$. Let $\xi\in\{0,1\}$ be such that $|L_1(K)|-\xi$ is even.
We bound (43) by
\[
p_1^{a+t+s}|Q_{1,1}|^t|Q_{1,2}|^{a+b-1+s}\,|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}\times|p_1Q_{1,2}+p_2Q_{2,2}|^{|L_2(K)|}
\]
(using that $\sqrt{p_1}|Q_{1,1}|\le|Q_{1,2}|$, and (39) and (41))
\[
\lesssim_D p_1^{a+t+s+|L_2(K)|}|Q_{1,2}|^t p_1^{-t/2}\,|Q_{1,2}|^{a+b-1+s+|L_2(K)|}\,|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|-\xi}|Q_{1,2}|^\xi
\]
(using (42))
\[
\lesssim_D p_1^{a+t/2+s+|L_2(K)|}\,|Q_{1,2}|^{a+b-1+s+|L_2(K)|+t+\xi}\,\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{|L_1(K)|-\xi})\bigr|.
\]
Again, we consider two cases.

Case 1.2.1) If
\[
a+t/2+s+|L_2(K)| \ge \tfrac12\bigl(a+b-1+s+|L_2(K)|+t+\xi\bigr) \iff a+s+|L_2(K)| \ge b+\xi-1,
\]
the last expression is bounded by
\[
\bigl(\sqrt{p_1}|Q_{1,2}|\bigr)^{a+b-1+s+|L_2(K)|+t+\xi}\,\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{|L_1(K)|-\xi})\bigr|
\lesssim |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|-|L_1(K)|-1+\xi}{4}}\,\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{|L_1(K)|-\xi})\bigr|
\le \max\bigl(|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{1/4},\,|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{|L_1(K)|-\xi})|^{1/(|L_1(K)|-\xi+1)}\bigr)^{|V(H)|},
\]
using Claim 3.4 in the first inequality. The last inequality follows in the same way as for 2-stars and 4-cycles in Theorem 3.3.

Case 1.2.2) Otherwise,
\[
a+t/2+s+|L_2(K)| < \tfrac12\bigl(a+b-1+s+|L_2(K)|+t+\xi\bigr) \iff a+s+|L_2(K)| < b+\xi-1,
\]
so $b+\xi\ge a+s+|L_2(K)|+2$. Again, let $b+\xi=a+s+|L_2(K)|+r$ for some $r\ge2$. We use the same argument as in Case 1.1.2) to argue that there are at least $b+a+r-\xi$ edges between $K$ and $V(H)\setminus(K\cup L(H))$. We bound (43) by
\[
\bigl|\Phi_{\mathsf{SBM}(p,Q)}(H)\bigr| \lesssim_D p_1^{a+t}|Q_{1,1}|^t\times|Q_{1,2}|^{b+a+r-\xi}\times|Q_{2,2}|^s\times|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}|p_1Q_{1,2}+p_2Q_{2,2}|^{|L_2(K)|}
\]
(using that $|Q_{1,1}|\sqrt{p_1}\le|Q_{1,2}|$, $|Q_{2,2}|\lesssim p_1|Q_{1,2}|$, and (39), (41), (42))
\[
\lesssim_D p_1^{a+t}|Q_{1,2}|^t p_1^{-t/2}\,|Q_{1,2}|^{b+a+r-\xi}\times|Q_{1,2}|^s p_1^s\times p_1^{-1}\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{|L_1(K)|-\xi})\bigr|\times|Q_{1,2}|^\xi|Q_{1,2}|^{|L_2(K)|}p_1^{|L_2(K)|}
\lesssim_D p_1^{a+t/2+s+|L_2(K)|-1}\,|Q_{1,2}|^{t+b+a+r+s+|L_2(K)|}\times\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{|L_1(K)|-\xi})\bigr|
\]
(using that $|Q_{1,2}|\le p_1$)
\[
\lesssim p_1^{a+t/2+s+|L_2(K)|+r-\xi}\,|Q_{1,2}|^{t+b+a+s+|L_2(K)|+\xi-1}\times\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{|L_1(K)|-\xi})\bigr|.
\]
Now, observe that
\[
a+t/2+s+|L_2(K)|+r-\xi \ge \tfrac12\bigl(t+b+a+s+|L_2(K)|+\xi-1\bigr) \iff a+s+|L_2(K)|+2r+1 \ge b+3\xi,
\]
but $b+\xi=a+s+|L_2(K)|+r$ and $r+1\ge3\ge2\xi$. Altogether, this means that the above expression is at most
\[
\bigl(\sqrt{p_1}|Q_{1,2}|\bigr)^{t+b+a+s+|L_2(K)|+\xi-1}\times\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{|L_1(K)|-\xi})\bigr|
\le |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|-|L_1(K)|-1+\xi}{4}}\times\bigl|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_{|L_1(K)|-\xi})\bigr|.
\]
Again, this is enough.

Case 2) $|Q_{1,1}|\sqrt{p_1}>|Q_{1,2}|$. We consider the same two cases based on $|L_1(K)|$.
Case 2.1) $|L_1(K)|\le2$. We bound the expression in (43) by
\[
p_1^{a+t+s+|L_2(K)|}|Q_{1,1}|^t|Q_{1,2}|^{a+b-1+s+|L_2(K)|}|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}
\]
(using (39) and (41))
\[
\lesssim_D p_1^{a+t+s+|L_2(K)|}|Q_{1,1}|^t\times|Q_{1,2}|^{a+b-1+s+|L_2(K)|}\times|Q_{1,2}|^{|L_1(K)|}
\]
(using that $p_1\ge|Q_{1,2}|$)
\[
\lesssim_D p_1^{a+t+s+|L_2(K)|-1}\times|Q_{1,1}|^t\times|Q_{1,2}|^{a+b+s+|L_2(K)|+|L_1(K)|-3}\times p_1|Q_{1,2}|^2
\]
(using that $\sqrt{p_1}|Q_{1,1}|>|Q_{1,2}|$)
\[
\lesssim_D p_1^{a+t+s+|L_2(K)|-1}\times|Q_{1,1}|^t\times|Q_{1,1}|^{a+b+s+|L_2(K)|+|L_1(K)|-3}\,p_1^{\frac12(a+b+s+|L_2(K)|+|L_1(K)|-3)}\times p_1|Q_{1,2}|^2
\]
(using (42))
\[
\lesssim_D p_1^{a+t+s+|L_2(K)|-1+\frac12(a+b+s+|L_2(K)|+|L_1(K)|-3)}\,|Q_{1,1}|^{t+a+b+s+|L_2(K)|+|L_1(K)|-3}\times\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2).
\]

Case 2.1.1) If
\[
a+t+s+|L_2(K)|-1+\tfrac12\bigl(a+b+s+|L_2(K)|+|L_1(K)|-3\bigr) \ge t+a+b+s+|L_2(K)|+|L_1(K)|-3
\iff a+s+|L_2(K)|+1 \ge b+|L_1(K)|,
\]
then going back we have
\[
p_1^{a+t+s+|L_2(K)|-1+\frac12(a+b+s+|L_2(K)|+|L_1(K)|-3)}\,|Q_{1,1}|^{t+a+b+s+|L_2(K)|+|L_1(K)|-3}\times\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)
\le (p_1|Q_{1,1}|)^{|V(H)|-3}\,\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)
\le |\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{\frac{|V(H)|-3}{4}}\times\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)
\le \max\bigl(|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Cyc}_4)|^{1/4},\,|\Phi_{\mathsf{SBM}(p,Q)}(\mathrm{Star}_2)|^{1/3}\bigr)^{|V(H)|}.
\]

Case 2.1.2) Otherwise, $a+s+|L_2(K)|+1<b+|L_1(K)|\le b+2$, so $b=a+s+|L_2(K)|+r$, where $r\ge0$. Again, using the argument of counting isolated vertices, we lower-bound the number of edges between $K$ and $V(H)\setminus(K\cup L(H))$ by $a+b+r$, so
\[
\bigl|\Phi_{\mathsf{SBM}(p,Q)}(H)\bigr| \lesssim_D p_1^{a+t}|Q_{1,1}|^t\times|Q_{1,2}|^{b+a+r}\times|Q_{2,2}|^s\times|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}|p_1Q_{1,2}+p_2Q_{2,2}|^{|L_2(K)|}
\]
(by (43); using (39), (41) and $|Q_{2,2}|\lesssim p_1|Q_{1,2}|$)
\[
\lesssim p_1^{a+t}|Q_{1,1}|^t\times|Q_{1,2}|^{b+a+r}\times|Q_{1,2}|^s p_1^s\times|Q_{1,2}|^{|L_1(K)|}|Q_{1,2}|^{|L_2(K)|}p_1^{|L_2(K)|}
\lesssim p_1^{a+t+s+|L_2(K)|}\times|Q_{1,1}|^t\times|Q_{1,2}|^{b+a+r+s+|L_1(K)|+|L_2(K)|}
\]
(using that $p_1>|Q_{1,2}|$)
\[
\lesssim p_1^{a+t+s+|L_2(K)|+r}\,|Q_{1,1}|^t\times|Q_{1,2}|^{b+a+s+|L_1(K)|+|L_2(K)|}
\lesssim p_1^{a+t+s+|L_2(K)|+r}\,|Q_{1,1}|^t\times|Q_{1,2}|^{b+a+s+|L_1(K)|+|L_2(K)|-3}\,p_1|Q_{1,2}|^2
\]
(using that $\sqrt{p_1}|Q_{1,1}|>|Q_{1,2}|$)
\[
\lesssim p_1^{a+t+s+|L_2(K)|+r+\frac12(b+a+s+|L_1(K)|+|L_2(K)|-3)}\,|Q_{1,1}|^{t+b+a+s+|L_1(K)|+|L_2(K)|-3}\,p_1|Q_{1,2}|^2. \tag{46}
\]
Now, observe that
\[
a+t+s+|L_2(K)|+r+\tfrac12\bigl(b+a+s+|L_1(K)|+|L_2(K)|-3\bigr) \ge t+b+a+s+|L_1(K)|+|L_2(K)|-3
\iff 3+2r+a+s+|L_2(K)| \ge b+|L_1(K)|.
\]
This inequality holds as r+a+s+|L_2(K)| ≥ b and 3+r ≥ 3 ≥ |L_1(K)|. Hence, the last expression in (46) is bounded by

(p_1|Q_{1,1}|)^{t+b+a+s+|L_1(K)|+|L_2(K)|−3} p_1|Q_{1,2}|^2 ≲ |Φ_{SBM(p,Q)}(Cyc_4)|^{(|V(H)|−3)/4} × |Φ_{SBM(p,Q)}(Star_2)|.

Case 2.2) |L_1(K)| > 2. Let ξ ∈ {0,1} be such that |L_1(K)|−ξ is even.
https://arxiv.org/abs/2504.17202v1
Then, we bound the expression in (43) by

p_1^{a+t+s} |Q_{1,1}|^t |Q_{1,2}|^{a+b−1+s} |p_1Q_{1,2}+p_2Q_{2,2}|^{|L_2(K)|} |p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}

(using Eqs. (39) and (41))

≲_D p_1^{a+t+s+|L_2(K)|} |Q_{1,1}|^t |Q_{1,2}|^{a+b−1+s+|L_2(K)|} |p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|}

≲_D p_1^{a+t+s+|L_2(K)|−1} |Q_{1,1}|^t × |Q_{1,2}|^{a+b−1+s+|L_2(K)|} × p_1|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|−ξ} × |Q_{1,2}|^ξ

(using (42))

≲ p_1^{a+t+s+|L_2(K)|−1} |Q_{1,1}|^t × |Q_{1,2}|^{a+b−1+s+|L_2(K)|+ξ} × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

(using √p_1 |Q_{1,1}| > |Q_{1,2}|)

≲ p_1^{a+t+s+|L_2(K)|−1} |Q_{1,1}|^t × |Q_{1,1}|^{a+b−1+s+|L_2(K)|+ξ} p_1^{(1/2)(a+b−1+s+|L_2(K)|+ξ)} × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

≲ p_1^{a+t+s+|L_2(K)|−1+(1/2)(a+b−1+s+|L_2(K)|+ξ)} |Q_{1,1}|^{t+a+b−1+s+|L_2(K)|+ξ} × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|.

Case 2.2.1) If

a+t+s+|L_2(K)|−1+(1/2)(a+b−1+s+|L_2(K)|+ξ) ≥ t+a+b−1+s+|L_2(K)|+ξ ⟺ a−1+s+|L_2(K)| ≥ b+ξ,

the above expression is at most

(p_1|Q_{1,1}|)^{t+a+b−1+s+|L_2(K)|+ξ} × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

(using Claim 3.4)

≲ |Φ_{SBM(p,Q)}(Cyc_4)|^{(|V(H)|−|L_1(K)|−1+ξ)/4} × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

≤ max(|Φ_{SBM(p,Q)}(Cyc_4)|^{1/4}, |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|^{1/(|L_1(K)|−ξ+1)})^{|V(H)|}.
Case 2.2.2) Otherwise, a−1+s+|L_2(K)| < b+ξ, so b+ξ = a+s+|L_2(K)|+r, where r ≥ 0. Using the isolated-vertices argument, there are at least b+a+r−ξ edges between K and V(H)\(K∪L(H)). We bound (43) as

|Φ_{SBM(p,Q)}(H)|

≲_D p_1^{a+t} |Q_{1,1}|^t × |Q_{1,2}|^{b+a+r−ξ} × |Q_{2,2}|^s × |p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|} |p_1Q_{1,2}+p_2Q_{2,2}|^{|L_2(K)|}

(using p_1|Q_{1,2}| ≳ |Q_{2,2}|, and (39) and (41))

≲ p_1^{a+t} |Q_{1,1}|^t |Q_{1,2}|^{b+a+r−ξ} × |Q_{1,2}|^s p_1^s p_1^{−1} × p_1|p_1Q_{1,1}+p_2Q_{1,2}|^{|L_1(K)|−ξ} × |p_1Q_{1,1}+p_2Q_{1,2}|^ξ |Q_{1,2}|^{|L_2(K)|} p_1^{|L_2(K)|}

(using (42))

≲ p_1^{a+t} |Q_{1,1}|^t |Q_{1,2}|^{b+a+r−ξ} × |Q_{1,2}|^s p_1^s p_1^{−1} |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})| × |Q_{1,2}|^{|L_2(K)|+ξ} p_1^{|L_2(K)|}

≲ p_1^{a+t+s+|L_2(K)|−1} × |Q_{1,1}|^t × |Q_{1,2}|^{b+a+r+s+|L_2(K)|} × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

(using p_1 ≥ |Q_{1,2}|)

≲ p_1^{a+t+s+|L_2(K)|−1} × |Q_{1,1}|^t × |Q_{1,2}|^{b+a+r+s+|L_2(K)|−3} × p_1Q_{1,2}^2 × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

(using (42))

≲ p_1^{a+t+s+|L_2(K)|−1} × |Q_{1,1}|^t × |Q_{1,2}|^{b+a+r+s+|L_2(K)|−3} × |Φ_{SBM(p,Q)}(Star_2)| × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

(using p_1 ≥ |Q_{1,2}|)

≲ p_1^{a+t+s+|L_2(K)|−1+1+r−ξ} × |Q_{1,1}|^t × |Q_{1,2}|^{b+a+s+|L_2(K)|−4+ξ} × |Φ_{SBM(p,Q)}(Star_2)| × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

(using √p_1 |Q_{1,1}| ≥ |Q_{1,2}|)

≲ p_1^{a+t+s+|L_2(K)|+r−ξ} × |Q_{1,1}|^{t+b+a+s+|L_2(K)|−4+ξ} × p_1^{(1/2)(b+a+s+|L_2(K)|−4+ξ)} × |Φ_{SBM(p,Q)}(Star_2)| × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

≲ p_1^{a+t+s+|L_2(K)|+r−ξ+(1/2)(b+a+s+|L_2(K)|−4+ξ)} × |Q_{1,1}|^{t+b+a+s+|L_2(K)|−4+ξ} × |Φ_{SBM(p,Q)}(Star_2)| × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|. (47)

Now, observe that

a+t+s+|L_2(K)|+r−ξ+(1/2)(b+a+s+|L_2(K)|−4+ξ) ≥ t+b+a+s+|L_2(K)|−4+ξ ⟺ 2r+a+s+|L_2(K)|+4 ≥ 3ξ+b,

which is true since b+ξ = r+a+s+|L_2(K)| and 2ξ ≤ 4. Furthermore,

t+b+a+s+|L_2(K)|−4+ξ = |V(H)\L(K)|+|L_2(K)|−4+ξ ≥ 1+3−4 ≥ 0,

since |L_2(K)| ≥ 3 in Case 2.2), |V(H)\L(K)| ≥ 1 as H is connected with at least 3 leaves and any such graph has a non-leaf vertex, and ξ ≥ 0 by definition.
Altogether, (47) is bounded by

(p_1|Q_{1,1}|)^{t+b+a+s+|L_2(K)|−4+ξ} × |Φ_{SBM(p,Q)}(Star_2)| × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

(using Claim 3.4)

≲_D |Φ_{SBM(p,Q)}(Cyc_4)|^{(|V(H)|+ξ−|L_1(K)|−4)/4} × |Φ_{SBM(p,Q)}(Star_2)| × |Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|

= (|Φ_{SBM(p,Q)}(Cyc_4)|^{1/4})^{|V(H)|+ξ−|L_1(K)|−4} × (|Φ_{SBM(p,Q)}(Star_2)|^{1/3})^3 × (|Φ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ})|^{1/(|L_1(K)|−ξ+1)})^{|L_1(K)|−ξ+1}

≤ max(Ψ_{SBM(p,Q)}(Cyc_4), Ψ_{SBM(p,Q)}(Star_2), Ψ_{SBM(p,Q)}(Star_{|L_1(K)|−ξ}))^{|V(H)|}.

With this, the proof of Theorem 3.6 is complete.

3.4.5 Testing in 2-SBMs

Theorem 3.7 (Testing in 2-SBMs). Suppose that SBM(n;p,Q) is a stochastic block model on two communities such that min(p_1,p_2) = ω(1/n). Suppose that there exists a polynomial test of degree at most D, for some even absolute constant D ≥ 4, distinguishing SBM(n;p,Q) and G(n,1/2) with high probability. Then, one can also distinguish the two graph distributions with high probability via either the signed 4-cycle count or the signed count of a star on at most D edges.

Remark 1. The condition min(p_1,p_2) = ω(1/n) is very minimal. Suppose that p_1 ≤ p_2. If min(p_1,p_2) = o(1/n), so p_1 = o(1/n), then with high probability there is no vertex of label 1, so SBM(n;p,Q) is (in total variation) close to G(n,(Q_{2,2}+1)/2). Trivially, the signed edge count is an optimal distinguisher between G(n,q) and G(n,1/2) for any q. If p_1 = Θ(1/n), then we can prove a slightly weaker result by blowing up the "vertex-sample complexity" as in Definition 3. Namely, the result we will prove is that if there exists a polynomial test of degree at most D for some even D ≥ 4 distinguishing SBM(n;p,Q) and G(n,1/2), then one can distinguish SBM(N(n);p,Q) and G(N(n),1/2) via either the signed 4-cycle count or the signed count of a star on at most D edges, where N(n) is any function that satisfies N(n) = ω(n). The idea is that if N(n) = ω(n) and p_1 = Θ(1/n), then p_1 = ω(1/N(n)).
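The proof of Theorem 3.7 repeatedly uses the closed form Φ_{SBM(p,Q)}(Star_r) = p_1λ_1^r + p_2λ_2^r, where λ_i := p_1Q_{1,i}+p_2Q_{2,i}, obtained by summing out the leaf labels in Proposition 1.3. The following is a minimal numerical sketch of this identity; the parameters p and Q below are arbitrary illustrative values, not taken from the paper.

```python
import itertools

# Arbitrary 2-community parameters (illustrative values only, not from the paper).
p = [0.3, 0.7]
Q = [[0.5, -0.2], [-0.2, 0.1]]

def phi_star_bruteforce(r):
    """Phi_SBM(p,Q)(Star_r) via the defining sum: sum over labels of the
    center x0 and the r leaves of p_{x0}...p_{xr} * prod_j Q_{x0, xj}."""
    total = 0.0
    for labels in itertools.product(range(2), repeat=r + 1):
        x0, leaves = labels[0], labels[1:]
        weight = 1.0
        for x in labels:
            weight *= p[x]
        for x in leaves:
            weight *= Q[x0][x]
        total += weight
    return total

# lambda_i = p_1 Q_{1,i} + p_2 Q_{2,i} (0-indexed below).
lam = [p[0] * Q[0][i] + p[1] * Q[1][i] for i in range(2)]

# Check Phi(Star_r) = p_1 lambda_1^r + p_2 lambda_2^r for small r.
for r in range(1, 6):
    closed_form = p[0] * lam[0] ** r + p[1] * lam[1] ** r
    assert abs(phi_star_bruteforce(r) - closed_form) < 1e-12
```

The brute-force sum has 2^{r+1} terms, while the closed form is O(1) per star size, which is what makes the case analysis over λ_1, λ_2 in the proof tractable.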
Proof. By Theorem 3.6, we know that for any graph H on at most D edges without isolated vertices,

Ψ_{SBM(p,Q)}(H) ≲_D max(Ψ_{SBM(p,Q)}(Cyc_4), max_{1≤t≤D} Ψ_{SBM(p,Q)}(Star_t)).

If an approximate maximum on the right-hand side above is achieved by a 4-cycle or a star on at most D/2 edges, the conclusion follows by Theorem 2.4. Suppose instead that the maximum is achieved by some Star_t such that t ∈ {D/2+1, ..., D}.

Step 1: Identifying relationships between p, Q. As in the proof of Theorem 3.6, we first identify several relationships between p and Q. First, from Theorem 2.3,

Ψ_{SBM(p,Q)}(Star_t) = ω(n^{−1/2}). (48)

Second, it implies
that Star_2 is not an approximate maximizer, so

Ψ_{SBM(p,Q)}(Star_2) = o(Ψ_{SBM(p,Q)}(Star_t)). (49)

Without loss of generality, let p_1 ≤ p_2. Let also λ_i := p_1Q_{1,i}+p_2Q_{2,i}, so Φ_{SBM(p,Q)}(Star_r) = p_1λ_1^r + p_2λ_2^r. Now, (49) implies that

(p_1λ_1^2+p_2λ_2^2)^{1/3} = o((p_1λ_1^t+p_2λ_2^t)^{1/(t+1)})

⟹ (p_1λ_1^2+p_2λ_2^2)^{t+1} = o((p_1|λ_1|^t+p_2|λ_2|^t)^3)

⟹ p_1^{t+1}|λ_1|^{2(t+1)} + p_2^{t+1}|λ_2|^{2(t+1)} = o(p_1^3|λ_1|^{3t} + p_2^3|λ_2|^{3t}).

Now, one of the following two systems of inequalities needs to be satisfied.

1. p_2^{t+1}|λ_2|^{2(t+1)} = o(p_2^3|λ_2|^{3t}) and p_2^3|λ_2|^{3t} ≥ p_1^3|λ_1|^{3t}. Then, as t = O(1), the first inequality implies that p_2^{t−2} = o(|λ_2|^{t−2}). Thus,

p_2 = o(|λ_2|) = o(|p_1Q_{1,2}+p_2Q_{2,2}|) = o(p_1|Q_{1,2}|+p_2|Q_{2,2}|) = o(2p_2),

a contradiction. We used that p_1 ≤ p_2 and |Q_{i,j}| ≤ 1.

2. p_1^{t+1}|λ_1|^{2(t+1)} = o(p_1^3|λ_1|^{3t}) and p_1^3|λ_1|^{3t} ≥ p_2^3|λ_2|^{3t}. As in the previous case, the first inequality implies that p_1 = o(|λ_1|). The second implies that |λ_1| ≥ |λ_2|, as p_1 ≤ p_2.

Altogether, we have learned that:

p_1 ≤ p_2, p_1 = ω(1/n), p_1 = o(|λ_1|), and |λ_1| ≥ |λ_2|. (50)

This further implies the following two inequalities:

p_1|λ_1|^s ≥ p_2|λ_2|^s for any s ≥ t. (51)

This is true since p_1|λ_1|^s / (p_2|λ_2|^s) = (p_1|λ_1|^t / (p_2|λ_2|^t)) × (|λ_1|/|λ_2|)^{s−t}, which is a product of terms at least equal to 1.

|p_1λ_1^s + p_2λ_2^s|^{1/(s+1)} ≳_D |p_1λ_1^t + p_2λ_2^t|^{1/(t+1)} for any even s ∈ [t,D]. (52)

Indeed, from (51), it is enough to show that

(p_1|λ_1|^s)^{1/(s+1)} ≥ (p_1|λ_1|^t)^{1/(t+1)} ⟺ |λ_1|^{s−t} ≥ p_1^{s−t},

which follows from s ≥ t and p_1 = o(|λ_1|). Furthermore, (48) implies that

|λ_1| = ω((p_1|λ_1|^t)^{1/(t+1)}) = ω(|Φ_{SBM(p,Q)}(Star_t)|^{1/(t+1)}) = ω(n^{−1/2}). (53)

Finally, from (52), we may assume that t = D.

Step 2: Analysis of the Variance of the SBM. We need to show that

|E_{G∼SBM(p,Q)}[SC_{Star_D}(G)]| = ω(Var_{G∼SBM(p,Q)}[SC_{Star_D}(G)]^{1/2}).
As in the proof of Theorem 2.4, take any graph H which is isomorphic to S_1 ⊗ S_2, where S_1, S_2 are both isomorphic to Star_D and share at least one vertex. Suppose that there are M_H copies of the signed count of H after expanding Var_{G∼SBM(p,Q)}[SC_{Star_D}(G)]. Then, we have to show that

n^{2(D+1)}(p_1λ_1^D+p_2λ_2^D)^2 = ω(M_H × Φ_{SBM(p,Q)}(H)).

We consider three possible cases for H = S_1 ⊗ S_2.

Case 1) S_1 and S_2 share their central vertex. Then, H is a star on ℓ ≤ 2D leaves. This means that S_1 and S_2 also share (2D−ℓ)/2 leaves. Altogether, |V(S_1)∪V(S_2)| = 1+ℓ+(2D−ℓ)/2 = 1+D+ℓ/2. Hence, M_H = Θ(n^{1+D+ℓ/2}). All we need to prove is that

n^{2(D+1)}(p_1λ_1^D+p_2λ_2^D)^2 = ω(n^{1+D+ℓ/2}(p_1λ_1^ℓ+p_2λ_2^ℓ)) ⟺ n^{1+D−ℓ/2} max(p_1^2λ_1^{2D}, p_2^2λ_2^{2D}) = ω(p_1λ_1^ℓ+p_2λ_2^ℓ). (54)

Again, there are two cases. If ℓ ≥ D, then by (51) the inequality is equivalent to

n^{1+D−ℓ/2} p_1^2λ_1^{2D} = ω(p_1λ_1^ℓ) ⟺ n^{1+D−ℓ/2} p_1λ_1^{2D−ℓ} = ω(1).

Note that p_1 = ω(1/n) and |λ_1| = ω(n^{−1/2}) by (53), which is enough. If ℓ < D, then |p_1λ_1^ℓ+p_2λ_2^ℓ| = |Φ_{SBM(p,Q)}(Star_ℓ)| ≲_D |Φ_{SBM(p,Q)}(Star_D)|^{(ℓ+1)/(D+1)}, so the inequality reduces to

n^{1+D−ℓ/2} |Φ_{SBM(p,Q)}(Star_D)|^{2−(ℓ+1)/(D+1)} = ω(1),

which follows immediately from (48).

Case 2) S_1 and S_2 do not have a common central vertex and do not have common edges. Hence, S_1⊗S_2 has at most 2(D+1)−1 vertices (as S_1, S_2 need to share at least one vertex) and 2D edges. Hence, M_H = O(n^{2D+1}). By Theorem 3.6 applied for 2D, |Φ_{SBM(p,Q)}(S_1⊗S_2)| ≲_D |Φ_{SBM(p,Q)}(Star_{2D})|. Hence, all we need to show is that

n^{2(D+1)}(p_1λ_1^D+p_2λ_2^D)^2 = ω(n^{1+2D}(p_1λ_1^{2D}+p_2λ_2^{2D})).

This is a special case of (54) proved in Case 1) when ℓ = 2D.

Case 3) S_1 and S_2 do not have a common central vertex but do have common edges. As S_1 and S_2 are stars that do not share their central vertex, they can have at most one common edge.
Hence, S_1⊗S_2 has at most 2D−2 edges, so |Φ_{SBM(p,Q)}(S_1⊗S_2)| ≲_D |Φ_{SBM(p,Q)}(Star_{2D−2})| by Theorem 3.6 applied for 2D. As S_1, S_2 share an edge, they have at least two common vertices, so |V(S_1)∪V(S_2)| ≤ 2(D+1)−2 ≤ 2D. Thus, M_H = O(n^{2D}). Altogether, we need to prove that

n^{2(D+1)}(p_1λ_1^D+p_2λ_2^D)^2 = ω(n^{2D}(p_1λ_1^{2D−2}+p_2λ_2^{2D−2})).

This is a special case of (54) proved in Case 1) when ℓ = 2D−2.

4 Comparison
Inequalities

We note that all arguments in the current section apply more generally to any graphon instead of a stochastic block model, provided no measurability issues occur.

4.1 Cycle Comparisons: A Spectral Approach

We prove the following theorem, which explains why signed triangles and 4-cycles are used for detecting stochastic block models, but larger cycles are not.

Theorem 4.1. For any SBM(p,Q) distribution and t ≥ 5,

Ψ_{SBM(p,Q)}(Cyc_t) ≤ Ψ_{SBM(p,Q)}(Cyc_4).

Proof. Consider the expression for the signed t-cycle count. It is given by

Φ_{SBM(p,Q)}(Cyc_t) = Σ_{x_1,x_2,...,x_t} p_{x_1}p_{x_2}···p_{x_t} Q_{x_1x_2}Q_{x_2x_3}···Q_{x_tx_1}

= Σ_{x_1,x_2,...,x_t} (√p_{x_1}Q_{x_1,x_2}√p_{x_2})(√p_{x_2}Q_{x_2,x_3}√p_{x_3})···(√p_{x_t}Q_{x_t,x_1}√p_{x_1})

= tr((√P Q √P)^t),

where √P is the diagonal matrix with entries (√p_1, √p_2, ..., √p_k). Let λ_1, λ_2, ..., λ_k be the k eigenvalues of √P Q √P. Then,

Φ_{SBM(p,Q)}(Cyc_t) = Σ_{i=1}^k λ_i^t.

Now, for any t ≥ 5,

|Φ_{SBM(p,Q)}(Cyc_t)|^{1/t} = |Σ_{i=1}^k λ_i^t|^{1/t} ≤ (Σ_{i=1}^k |λ_i|^t)^{1/t} ≤ (Σ_{i=1}^k |λ_i|^4)^{1/4} = |Φ_{SBM(p,Q)}(Cyc_4)|^{1/4}.

The first inequality is the triangle inequality, and the second is monotonicity of t ↦ ‖(λ_1,...,λ_k)‖_t.

Remark 2 (Triangles vs 4-Cycles). The above proof also sheds light on when triangles are more informative than 4-cycles and vice versa.
Namely, observe that if the positive (or negative) eigenvalues of √P Q √P dominate, then

|Σ_{i=1}^k λ_i^3|^{1/3} ≈ (Σ_{i=1}^k |λ_i|^3)^{1/3} ≥ (Σ_{i=1}^k λ_i^4)^{1/4},

so |Φ_{SBM(p,Q)}(Cyc_3)|^{1/3} ≥ c|Φ_{SBM(p,Q)}(Cyc_4)|^{1/4}. Whenever this is not the case (as in the quiet planted coloring distribution in [KVWX23], the L_∞ random geometric graphs in [BB24a], and certain non-PSD Gaussian random geometric graphs in [BB25]), the mass on positive and negative eigenvalues should be relatively balanced. Heuristically, this is caused by a certain bipartiteness/hyperbolic behavior of SBM(p,Q). A more detailed analysis of the performance of signed triangles and 4-cycles for testing against sparser Erdős–Rényi graphs is carried out in [JKL19].

4.2 Exploiting Symmetries: A Sum-of-Squares Approach

We prove the following theorem, which explains why signed K_4^− counts are not used for detecting latent space structure, where K_4^− is the graph on 4 vertices and 5 edges (say, with vertex set {1,2,3,4} and all edges but (2,4)). One can certainly generalize this approach to other graphs with enough symmetry. The argument appears implicitly in [LR23a, Lemma 4.10].

Theorem 4.2. For any SBM(p,Q) distribution,

|Φ_{SBM(p,Q)}(K_4^−)| ≤ |Φ_{SBM(p,Q)}(Cyc_4)|.
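Before the proof, both comparison inequalities of this section admit a quick numerical sanity check: Theorem 4.1 via the trace identity tr((√P Q √P)^t), and Theorem 4.2 via direct summation over labelings. The sketch below uses an arbitrary illustrative 3-community choice of p and Q, not parameters from the paper.

```python
import itertools
import numpy as np

# Arbitrary 3-community parameters (illustrative values only, not from the paper).
p = np.array([0.2, 0.3, 0.5])
Q = np.array([[0.4, -0.3, 0.1],
              [-0.3, 0.2, -0.1],
              [0.1, -0.1, 0.3]])

sqrtP = np.diag(np.sqrt(p))
M = sqrtP @ Q @ sqrtP  # the matrix sqrt(P) Q sqrt(P) from the proof of Theorem 4.1

def phi_cycle(t):
    # Phi_SBM(p,Q)(Cyc_t) = tr((sqrt(P) Q sqrt(P))^t).
    return np.trace(np.linalg.matrix_power(M, t))

def phi_graph(edges, num_vertices):
    # Direct summation of Phi_SBM(p,Q)(H) over all community labelings.
    total = 0.0
    for labels in itertools.product(range(3), repeat=num_vertices):
        w = np.prod(p[list(labels)])
        for (i, j) in edges:
            w *= Q[labels[i], labels[j]]
        total += w
    return total

cyc4 = phi_cycle(4)

# Theorem 4.1: Psi(Cyc_t) <= Psi(Cyc_4) for t >= 5.
for t in range(5, 9):
    assert abs(phi_cycle(t)) ** (1 / t) <= cyc4 ** 0.25 + 1e-12

# Theorem 4.2: |Phi(K_4^-)| <= Phi(Cyc_4); K_4^- has all edges but (1,3) (0-indexed).
k4_minus = phi_graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 4)
assert abs(k4_minus) <= cyc4 + 1e-12
```

The trace form also makes the nonnegativity of Φ(Cyc_4) = Σ_i λ_i^4 immediate, which is used repeatedly throughout this section.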
Proof. Consider Cyc_4 with edges (12), (23), (34), (14) and K_4^− on these edges with the extra edge (13). We will first rewrite the expression for Φ_{SBM(p,Q)}(Cyc_4). Note that

Φ_{SBM(p,Q)}(Cyc_4) = E_{x_1,x_2,x_3,x_4}[Q_{x_1,x_2}Q_{x_2,x_3}Q_{x_3,x_4}Q_{x_4,x_1}] = E_{x_1,x_3}[E_{x_2,x_4}[Q_{x_1,x_2}Q_{x_2,x_3}Q_{x_3,x_4}Q_{x_4,x_1} | x_1,x_3]]. (55)

Note that x_2, x_4 are independent even conditioned on x_1, x_3. Hence,

E[Q_{x_1,x_2}Q_{x_2,x_3}Q_{x_3,x_4}Q_{x_4,x_1} | x_1,x_3] = E[Q_{x_1,x_2}Q_{x_2,x_3} | x_1,x_3] × E[Q_{x_1,x_4}Q_{x_4,x_3} | x_1,x_3] = E[Q_{x_1,x_2}Q_{x_2,x_3} | x_1,x_3]^2,

where we used the fact that the two conditional expectations are identically distributed. Hence, we obtain that

Φ_{SBM(p,Q)}(Cyc_4) = E[E[Q_{x_1,x_2}Q_{x_2,x_3} | x_1,x_3]^2].

In the exact same way, we conclude that

Φ_{SBM(p,Q)}(K_4^−) = E[E[Q_{x_1,x_2}Q_{x_2,x_3} | x_1,x_3]^2 Q_{x_1,x_3}].

Altogether,

Φ_{SBM(p,Q)}(Cyc_4) − Φ_{SBM(p,Q)}(K_4^−) = E[E[Q_{x_1,x_2}Q_{x_2,x_3} | x_1,x_3]^2 (1−Q_{x_1,x_3})] ≥ 0,

Φ_{SBM(p,Q)}(Cyc_4) + Φ_{SBM(p,Q)}(K_4^−) = E[E[Q_{x_1,x_2}Q_{x_2,x_3} | x_1,x_3]^2 (1+Q_{x_1,x_3})] ≥ 0,

where we used that −1 ≤ Q_{x_1,x_3} ≤ 1. Together, the two inequalities give the desired result.

4.3 Ghost Vertices: A Second Moment Approach

The key idea in the proof of Theorem 4.2 was the symmetry around (1,3). One can "artificially" create such symmetry via a second moment argument, but this unfortunately yields comparison inequalities too weak for the purposes of (5). Here, we present one possible result. The proof implicitly follows [CGW88, Fact 12] and [LR23a, (4) in Section 3].

Theorem 4.3. Consider any graph H and suppose that it has a vertex of
degree d. Then, for any SBM(p,Q) distribution,

|Φ_{SBM(p,Q)}(H)| ≤ |Φ_{SBM(p,Q)}(K_{2,d})|^{1/2}.

Proof. Let the vertex set of H be {1,2,...,h}, such that h has degree d. Then,

|Φ_{SBM(p,Q)}(H)| = |E[Π_{(i,j)∈E(H)} Q_{x_i x_j}]|

= |E[Π_{i,j<h: (i,j)∈E(H)} Q_{x_i,x_j} × E[Π_{k: (k,h)∈E(H)} Q_{x_k x_h} | x_1,x_2,...,x_{h−1}]]|

≤ E[|Π_{i,j<h: (i,j)∈E(H)} Q_{x_i,x_j} × E[Π_{k: (k,h)∈E(H)} Q_{x_k x_h} | x_1,x_2,...,x_{h−1}]|]

≤ E[|E[Π_{k: (k,h)∈E(H)} Q_{x_k x_h} | x_1,x_2,...,x_{h−1}]|]

≤ E[E[Π_{k: (k,h)∈E(H)} Q_{x_k x_h} | x_1,x_2,...,x_{h−1}]^2]^{1/2}

= E_{x_1,x_2,...,x_{h−1},x_{h′},x_{h′′}}[Π_{k: (k,h)∈E(H)} Q_{x_k x_{h′}} Q_{x_k x_{h′′}}]^{1/2}

= |Φ_{SBM(p,Q)}(K_{2,d})|^{1/2}.

5 Future Directions

Beyond the natural direction of proving Conjecture 1 for all stochastic block models (and, more generally, graphons), we outline several different directions of interest. They modify the conditions in the current work in different ways.

1. Beyond dense graphs: testing against a sparse Erdős–Rényi.
One natural direction is to develop a quasirandomness theory for testing against G(n,q) when q depends on n, say q = n^{−β} for some absolute constant β ∈ (0,1]. The q-biased Fourier coefficients are

Φ^q_{SBM(p,Q)}(H) := E[Π_{(i,j)∈E(H)} (G_{ij}−q)/√(q(1−q))] ≍ n^{β|E(H)|/2} × E[Π_{(i,j)∈E(H)} (G_{ij}−q)],

where the asymptotic equivalence holds for constant-sized graphs H. The condition equivalent to (5) is |Φ^q_{SBM(p,Q)}(H)|^{1/|V(H)|} = ω(n^{−1/2}). What makes this problem different is the n^{β|E(H)|/(2|V(H)|)} term in |Φ^q_{SBM(p,Q)}(H)|^{1/|V(H)|}. This term poses several challenges, including in the argument on the variance of the planted model in Theorem 2.4. One viable approach seems to be the consideration of balanced graphs, which [DMW25] use exactly in the setting of planting a dense subgraph in G(n,q). In a balanced graph H, the key quantity |E(H)|/|V(H)| is larger than the same quantity for any subgraph H′ of H.

2. Beyond SBMs: testing against vertex-transitive distributions. The question of developing a quasirandomness theory for testing against G(n,1/2) is, of course, not restricted to stochastic block models as in our work and planted subgraph models as in [YZZ24]. In fact, one can even ask the question for fixed graphs, as in the original quasirandomness work of [CGW88]. One way to phrase Problem 1 for a fixed graph is as follows. Take any fixed graph G and let Π_G be the distribution formed by applying a uniformly random vertex permutation to G. Find the possible approximate maximizers of H ↦ Ψ_{Π_G}(H), where H ranges over constant-sized graphs without isolated vertices.

3. Beyond constant-degree tests: towards computational hardness. Our results, as well as the results of [YZZ24], apply only to indistinguishability against constant-degree polynomial tests. However, it would be useful to have results for polynomials of higher degree.
Especially desirable would be hardness against degree D = ω(log n) polynomial tests, as this is frequently viewed as strong (even though by no means perfect) evidence for computational hardness. Our current framework does not allow for any super-constant D, since one can check that the implicit constants in ≳_D are on the order of 2^{Θ(D log D)} (when enumerating over all graphs on at most D edges in Theorem 2.3 and taking a max over K in Theorems 3.3 and 3.6). Explicitly, this means that when D = ω(1), the different ≳_D factors no longer indicate inequalities up to absolute constants. Yet, if we allow for an n^{o(1)} blow-up in the sample complexity, the same techniques with a more careful bookkeeping of the ≳_D dependence still apply. To illustrate, the following sample-complexity formulation of (HT) is useful.

Definition 3 ("Vertex"-Sample-Complexity Perspective of Testing against Erdős–Rényi). Consider a sequence of stochastic block models {SBM(p_k,Q_k)}_{k∈N} such that each coordinate of p_k is positive and Q_k is not the zero matrix. Find the minimal number of vertices n(k) such that one can test between SBM(n(k);p_k,Q_k) and G(n(k),1/2) via
a degree-D polynomial test with success probability 1−o_k(1).

With a small blow-up in the sample complexity, we can show the following theorem. The proofs are identical, observing that the hidden factors are on the order of 2^{O(D log D)} everywhere. Hence, if D = o(log n / log log n), the hidden factors are of order n^{o(1)}.

Proposition 5.1. Suppose that there is a degree D = o(log n/log log n) polynomial test that succeeds in distinguishing SBM(n(k);p_k,Q_k) and G(n(k),1/2) with success probability 1−o_k(1). Suppose furthermore that the family SBM(n(k);p_k,Q_k) belongs to one of the four cases i ∈ {1,2,3,4} described in Theorem 1.4. Then, for some appropriate ε_k = o_k(1), there exists some signed subgraph count of H ∈ A^i_D that distinguishes SBM(n(k)^{1+ε_k};p_k,Q_k) and G(n(k)^{1+ε_k},1/2) with success probability 1−o_k(1).

The reason why we write all the proofs for constant degree instead of o(log n/log log n) is that, while we are not aware of any advantage of o(log n/log log n) tests, there is significant notational simplicity in hiding factors depending on the degree. Of course, if one manages to push the result to degree Θ(log n), this would have the significant advantage of capturing spectral methods. However, this seems to be not merely a matter of carefully keeping track of the constants in our proofs. If any form of Conjecture 1 holds true for degree D = ω(log n) polynomial tests, it seems to require essentially new techniques.

Remark 3 (Completeness of Definition 3). A priori, it is not even clear that n(k) in Definition 3 exists. Yet, a simple argument mimicking Claim 3.4 and Lemma 3.2 shows that

Φ_{SBM(p,Q)}(Cyc_4) = Ω(max_{i,j} p_i^2 p_j^2 |Q_{i,j}|^4)

for any SBM(p,Q). As the variance under the planted model scales as O_k(n^7) = o_k(n^{2|V(Cyc_4)|}), one can use this to show that for large enough n(k), one can always test via the signed 4-cycle count.

4. Beyond complete graphs: "edge"-sample-complexity of testing.
A different sample-complexity perspective on graph hypothesis testing was introduced in [MVW24] and further analyzed in [BB24b]. The goal is to capture the query complexity of low-degree polynomial tests. We take the following formulation, as described in [BB24b]. For a mask M ∈ {view, hide}^{N×N} and (adjacency) matrix A ∈ {0,1}^{N×N}, denote by A⊙M the N×N array in which (A⊙M)_{ji} = A_{ji} whenever M_{ji} = view and (A⊙M)_{ji} = ? whenever M_{ji} = hide. Testing between graph distributions with masks corresponds to a non-adaptive edge-query model. Instead of viewing a full graph, one can choose to observe a smaller, more structured set M of edges in order to obtain a more data-efficient algorithm. The number of view entries |M| of M is a natural proxy for "sample complexity" in the case of low-degree polynomials, as the input variables of low-degree polynomials are edges rather than vertices. Instead of asking for n(k) as in Definition 3, one can ask for the size of the optimal mask M(k).

Definition 4 ("Edge"-Sample-Complexity Perspective of Testing against Erdős–Rényi). Consider a sequence of stochastic block models {SBM(p_k,Q_k)}_{k∈N} such that each coordinate of p_k is positive and Q_k is not the zero matrix. Find the minimal number of edges M_k such that there exists some N(k) ∈ N and a mask M_k of size |M_k| = M_k on N(k) vertices with the following property: one can test between M_k ⊙ SBM(N(k);p_k,Q_k) and M_k ⊙ G(N(k),1/2) via a degree-D polynomial test with success probability 1−o_k(1).

Again, one can ask for a quasirandomness criterion in the case of edge-query complexity. A
theorem due to Alon [Alo81] shows that the maximal number of graphs isomorphic to H in a graph on M edges is Θ_H(M^{(|V(H)|+δ(H))/2}), where

δ(H) := max_{S⊆V(H)} |S| − |{j ∈ V(H) : ∃i ∈ S s.t. (j,i) ∈ E(H)}|.

Reasoning as in Section 1.3, instead of finding the approximate maximizers of H ↦ |Φ_{SBM(p,Q)}(H)|^{1/|V(H)|}, one should look for the approximate maximizers of H ↦ |Φ_{SBM(p,Q)}(H)|^{1/(|V(H)|+δ(H))}. Needless to say, different techniques than ours are required.

5. The Sign of Fourier Coefficients of Stochastic Block Models. The argument of Lemma 3.2 shows that for any SBM(p,Q) such that Q is not the zero matrix, Φ_{SBM(p,Q)}(Cyc_4) > 0. What other graphs H besides Cyc_4 have this property? The argument in Lemma 3.2 applies verbatim to any other graph K_{2,d} when d is even. The examples in Appendix A show that H does not necessarily satisfy this property if H has an odd-degree vertex (Theorem A.2), H has an odd number of edges (Theorem A.6), H is not 2-connected (Theorem A.7), or H is not bipartite (Theorem A.5). What if we relax the inequality: which graphs H have the property that Φ_{SBM(p,Q)}(H) ≥ 0 for any SBM(p,Q)? The argument in Theorem 4.1 shows that all cycles of even length, in addition to the graphs K_{2,d}, satisfy this property.

Acknowledgments

We thank Hannah Munkhbat for participating in early stages of this work.

References

[Abb18] Emmanuel Abbe. Community detection and stochastic block models: Recent developments. Journal of Machine Learning Research, 18(177):1–86, 2018.

[AKS98] Noga Alon, Michael Krivelevich, and Benny Sudakov. Finding a large hidden clique in a random graph. Random Structures and Algorithms, 13(3-4):457–466, 1998.

[Alo81] Noga Alon. On the number of subgraphs of prescribed type of graphs with a given number of edges. Israel Journal of Mathematics, 38:116–130, 1981.

[Alo91] Noga Alon. Independent sets in regular graphs and sum-free subsets of finite groups. Israel Journal of Mathematics, 73:247–256, 1991.

[BB19] Matthew Brennan and Guy Bresler.
Optimal Average-Case Reductions to Sparse PCA: From Weak Assumptions to Strong Hardness. In Alina Beygelzimer and Daniel Hsu, editors, Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pages 469–470. PMLR, 25–28 Jun 2019.

[BB20] Matthew Brennan and Guy Bresler. Reducibility and statistical-computational gaps from secret leakage. In Conference on Learning Theory, pages 648–847. PMLR, 2020.

[BB24a] Kiril Bangachev and Guy Bresler. Detection of L∞ Geometry in Random Geometric Graphs: Suboptimality of Triangles and Cluster Expansion. In Shipra Agrawal and Aaron Roth, editors, Proceedings of Thirty Seventh Conference on Learning Theory, volume 247 of Proceedings of Machine Learning Research, pages 427–497. PMLR, 30 Jun–03 Jul 2024.

[BB24b] Kiril Bangachev and Guy Bresler. On the Fourier Coefficients of High-Dimensional Random Geometric Graphs. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 549–560, New York, NY, USA, 2024. Association for Computing Machinery.

[BB25] Kiril Bangachev and Guy Bresler. Random Algebraic Graphs and Their Convergence to Erdős–Rényi. Random Structures & Algorithms, 66(1):e21276, 2025.

[BBH18] Matthew Brennan, Guy Bresler, and Wasim Huleihel. Reducibility and computational lower bounds for problems with planted sparse
structure. In Sébastien Bubeck, Vianney Perchet, and Philippe Rigollet, editors, Proceedings of the 31st Conference On Learning Theory, volume 75 of Proceedings of Machine Learning Research, pages 48–166. PMLR, 06–09 Jul 2018.

[BBH+21] Matthew S. Brennan, Guy Bresler, Sam Hopkins, Jerry Li, and Tselil Schramm. Statistical query algorithms and low degree tests are almost equivalent. Proceedings of Machine Learning Research, 134:774–774, 15–19 Aug 2021.

[BBH24] Matthew Brennan, Guy Bresler, and Brice Huang. Threshold for detecting high dimensional geometry in anisotropic random geometric graphs. Random Structures & Algorithms, 64(1):125–137, 2024.

[BBN19] Matthew Brennan, Guy Bresler, and Dheeraj M. Nagaraj. Phase transitions for detecting latent geometry in random graphs. Probability Theory and Related Fields, 178:1215–1289, 2019.

[BCLS84] Thang Nguyen Bui, Soma Chaudhuri, Frank Thomson Leighton, and Michael Sipser. Graph bisection algorithms with good average case behavior. Combinatorica, 7:171–191, 1984.

[BDER14] Sébastien Bubeck, Jian Ding, Ronen Eldan, and Miklós Rácz. Testing for high-dimensional geometry in random graphs. Random Structures & Algorithms, 49, November 2014.

[BHK+19] Boaz Barak, Samuel Hopkins, Jonathan Kelner, Pravesh K. Kothari, Ankur Moitra, and Aaron Potechin. A nearly tight sum-of-squares lower bound for the planted clique problem. SIAM Journal on Computing, 48(2):687–735, 2019.

[BJR07] Béla Bollobás, Svante Janson, and Oliver Riordan. The phase transition in inhomogeneous random graphs. Random Structures & Algorithms, 31(1):3–122, 2007.

[Bop87] Ravi B. Boppana. Eigenvalues and graph bisection: An average-case analysis. In Proceedings of the 28th Annual Symposium on Foundations of Computer Science, SFCS '87, pages 280–285, USA, 1987. IEEE Computer Society.

[CGW88] F. Chung, R. Graham, and R. Wilson. Quasi-random graphs. Combinatorica, pages 345–362, 1988.

[CO10] Amin Coja-Oghlan.
Graph partitioning via adaptive spectral techniques. Combinatorics, Probability and Computing, 19(2):227–284, 2010.

[DF89] M. E. Dyer and A. M. Frieze. The solution of some random NP-hard problems in polynomial expected time. Journal of Algorithms, 10(4):451–489, 1989.

[DGLU11] Luc Devroye, András György, Gábor Lugosi, and Frederic Udina. High-Dimensional Random Geometric Graphs and their Clique Number. Electronic Journal of Probability, 16:2481–2508, 2011.

[DJPR17] Ewan Davies, Matthew Jenssen, Will Perkins, and Barnaby Roberts. Independent sets, matchings, and occupancy fractions. Journal of the London Mathematical Society, 96(1):47–66, 2017.

[DKMZ11] Aurelien Decelle, Florent Krzakala, Cristopher Moore, and Lenka Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Phys. Rev. E, 84:066106, Dec 2011.

[DMW25] Abhishek Dhawan, Cheng Mao, and Alexander S. Wein. Detection of dense subhypergraphs by low-degree polynomials. Random Structures & Algorithms, 66(1):e21279, 2025.

[DWXY21] Jian Ding, Yihong Wu, Jiaming Xu, and Dana Yang. The planted matching problem: sharp threshold and infinite-order phase transition. Probability Theory and Related Fields, 187:1–71, 2021.

[FGKS24] Tobias Friedrich, Andreas Göbel, Maximilian Katzmann, and Leon Schiller. Cliques in high-dimensional geometric inhomogeneous random graphs. SIAM Journal on Discrete Mathematics, 38(2):1943–2000, 2024.

[Gal11] David Galvin. A threshold phenomenon for
random independent sets in the discrete hypercube. Comb. Probab. Comput., 20(1):27–51, January 2011.

[HLL83] Paul W. Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.

[Hop18] Samuel Hopkins. Statistical inference and the sum of squares method, 2018.

[HS17a] S. B. Hopkins and D. Steurer. Efficient Bayesian Estimation from Few Samples: Community Detection and Related Problems. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 379–390, Los Alamitos, CA, USA, Oct 2017. IEEE Computer Society.

[HS17b] Samuel B. Hopkins and David Steurer. Efficient Bayesian estimation from few samples: community detection and related problems. In Foundations of Computer Science (FOCS), 2017 IEEE 58th Annual Symposium on, pages 379–390. IEEE, 2017.

[HS24] Shuichi Hirahara and Nobutaka Shimizu. Planted clique conjectures are equivalent. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 358–366, New York, NY, USA, 2024. Association for Computing Machinery.

[HWX15] Bruce E. Hajek, Yihong Wu, and Jiaming Xu. Computational lower bounds for community detection on random graphs. In COLT, pages 899–928, 2015.

[Jer92] Mark Jerrum. Large cliques elude the Metropolis process. Random Structures & Algorithms, 3(4):347–359, 1992.

[JKL19] Jiashun Jin, Zheng Tracy Ke, and Shengming Luo. Optimal adaptivity of signed-polygon statistics for network testing. The Annals of Statistics, 2019.

[Kah01] Jeff Kahn. An entropy approach to the hard-core model on bipartite graphs. Comb. Probab. Comput., 10(3):219–237, May 2001.

[Kuč95] Luděk Kučera. Expected complexity of graph partitioning problems. Discrete Applied Mathematics, 57(2-3):193–212, 1995.

[KVWX23] Pravesh Kothari, Santosh S. Vempala, Alexander S. Wein, and Jeff Xu. Is planted coloring easier than planted clique? In Annual Conference on Computational Learning Theory, 2023.
[KWB19] Dmitriy Kunisky, Alexander S. Wein, and Afonso S. Bandeira. Notes on computational hardness of hypothesis testing: Predictions using the low-degree likelihood ratio, 2019.

[LMSY22] Siqi Liu, Sidhanth Mohanty, Tselil Schramm, and Elizabeth Yang. Testing thresholds for high-dimensional sparse random geometric graphs. In Proceedings of the 54th Annual ACM Symposium on Theory of Computing, STOC 2022, New York, NY, USA, 2022. Association for Computing Machinery.

[LR23a] Suqi Liu and Miklós Z. Rácz. A probabilistic view of latent space graphs and phase transitions. Bernoulli, 29(3):2417–2441, 2023.

[LR23b] Suqi Liu and Miklós Z. Rácz. Phase transition in noisy high-dimensional random geometric graphs. Electronic Journal of Statistics, 17(2):3512–3574, 2023.

[Mas14] Laurent Massoulié. Community detection thresholds and the weak Ramanujan property. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC '14, pages 694–703, New York, NY, USA, 2014. Association for Computing Machinery.

[MNS18] Elchanan Mossel, Joe Neeman, and Allan Sly. A proof of the block model threshold conjecture. Combinatorica, 38(3):665–708, June 2018.

[MNWS+23] Elchanan Mossel, Jonathan Niles-Weed, Youngtak Sohn, Nike Sun, and Ilias Zadik. Sharp thresholds in inference of planted subgraphs. In Gergely Neu and Lorenzo Rosasco, editors, Proceedings of Thirty
https://arxiv.org/abs/2504.17202v1
Sixth Conference on Learning Theory , volume 195 ofProceedings of Machine Learning Research , pages 5573–5577. PMLR, 12–15 Jul 2023. [MVW24] Jay Mardia, Kabir Aladin Verchand, and Alexander S. Wein. Low-degree phase transitions for detecting a planted clique in sublinear tim e. In Shipra Agrawal and Aaron Roth, editors, Proceedings of Thirty Seventh Conference on Learning Theory , volume 247 of Proceedings of Machine Learning Research , pages 3798–3822. PMLR, 30 Jun–03 Jul 2024. [MW22] Andrea Montanari and Alexander S. Wein. Equivalence of approximate message passing and low-degree polynomials in rank-one matrix esti mation, 2022. 54 [MWZ23] Cheng Mao, Alexander S. Wein, and Shenduo Zhang. Det ection-recovery gap for planted dense cycles. In Gergely Neu and Lorenzo Rosasco, ed itors,Proceedings of Thirty Sixth Conference on Learning Theory , volume 195 of Proceedings of Machine Learning Research , pages 2440–2481. PMLR, 12–15 Jul 2023. [MWZ24] Cheng Mao, Alexander S. Wein, and Shenduo Zhang. Inf ormation-theoretic thresh- olds for planted dense cycles, 2024. [RL08] J¨ org Reichardt and Michele Leone. (un)detectable c luster structure in sparse net- works.Phys. Rev. Lett. , 101:078701, Aug 2008. [RSWY23] Cynthia Rush, Fiona Skerman, Alexander S. Wein, an d Dana Yang. Is it easier to count communities than find them? In Yael Tauman Kalai, edito r,14th Innovations in Theoretical Computer Science Conference, ITCS 2023, January 10 -13, 2023, MIT, Cambridge, Massachusetts, USA , volume 251 of LIPIcs, pages 94:1–94:23, 2023. [SSSZ18] Ashwin Sah, Mehtaab Sawhney, David Stoner, and Yuf ei Zhao. A reverse Sidorenko inequality. Inventiones mathematicae , 221:665–711, 2018. [SSSZ19] Ashwin Sah, Mehtaab Sawhney, David Stoner, and Yuf ei Zhao. The number of in- dependent sets in an irregular graph. Journal of Combinatorial Theory, Series B , 138:172–195, September 2019. [WXXZ24] VirginiaVassilevskaWilliams, YinzhanXu, Zixua nXu, andRenfeiZhou. 
New Bounds for Matrix Multiplication: from Alpha to Omega, pages 3792–3835. SIAM, 2024.

[YZZ24] Xifan Yu, Ilias Zadik, and Peiyuan Zhang. Counting stars is constant-degree optimal for detecting any planted subgraph, 2024.

[Zha09] Yufei Zhao. The number of independent sets in a regular graph. Combinatorics, Probability and Computing, 19(2):315–320, November 2009.

A Library of Examples

We will repeatedly refer to the following condition.

Proposition A.1. We call an SBM(p,Q) distribution on $k$ communities fully unbiased if for any $x\in[k]$, it holds that $\sum_y p_y Q_{x,y} = 0$. For any fully unbiased SBM(p,Q) and graph $H$ with a leaf (in particular, any tree), $\Phi_{\mathrm{SBM}(p,Q)}(H) = 0$.

Proof. Suppose that $H$ has $h$ vertices and vertex $h$ is a leaf with parent $h-1$. By Proposition 1.3,
$$\Phi_{\mathrm{SBM}(p,Q)}(H) = \sum_{x_1,x_2,\ldots,x_h\in[k]}\Bigg(\prod_{i=1}^h p_{x_i}\times\prod_{(i,j)\in E(H)}Q_{x_i,x_j}\Bigg) = \sum_{x_1,\ldots,x_{h-1}\in[k]}\Bigg(\prod_{i=1}^{h-1} p_{x_i}\times\prod_{(i,j)\in E(H)\setminus\{(h-1,h)\}}Q_{x_i,x_j}\times\sum_{x_h\in[k]}p_{x_h}Q_{x_{h-1},x_h}\Bigg) = 0,$$
where we use the fully unbiased condition for $x = x_{h-1}$.

Throughout, we only verify condition (5), but do not compute the variance of the planted model.

Theorem A.2 (Failure of All Signed Trees). There exists an SBM distribution SBM(n;p,Q) on $k = 2$ communities with $p = (1/2,1/2)$ which has the following properties. For any graph $H$ with an odd degree vertex, $\Phi_{\mathrm{SBM}(p,Q)}(H) = 0$. For any graph $H$ without odd degree vertices, $\Phi_{\mathrm{SBM}(p,Q)}(H) = 1$. Furthermore, as $n$ grows, SBM(n;p,Q) can be distinguished from $G(n,1/2)$ with high probability via the signed 4-cycle count, for example. Finally, any signed subgraph count distinguishes SBM(n;p,|Q|) and $G(n,1/2)$ with high probability as $n$ grows.

Construction. Take $k = 2$, $p = (1/2,1/2)$ and $Q = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$.
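Since the construction is fully explicit, the claimed values of $\Phi$ can be spot-checked by brute force directly from the formula of Proposition 1.3. The following small script (an illustration of ours, not part of the paper) enumerates all labelings:

```python
from itertools import product

def phi(p, Q, edges, h):
    """Brute-force Phi_{SBM(p,Q)}(H): sum over all labelings x in [k]^h of
    prod_i p_{x_i} * prod_{(i,j) in E(H)} Q_{x_i,x_j} (Proposition 1.3)."""
    total = 0.0
    for x in product(range(len(p)), repeat=h):
        term = 1.0
        for i in range(h):
            term *= p[x[i]]
        for i, j in edges:
            term *= Q[x[i]][x[j]]
        total += term
    return total

p = [0.5, 0.5]
Q = [[1, -1], [-1, 1]]

edge = [(0, 1)]                          # both endpoints have odd degree
star2 = [(0, 1), (0, 2)]                 # the two leaves have odd degree
triangle = [(0, 1), (1, 2), (2, 0)]      # every degree equals 2 (even)
cyc4 = [(0, 1), (1, 2), (2, 3), (3, 0)]  # every degree equals 2 (even)

print(phi(p, Q, edge, 2), phi(p, Q, star2, 3))     # 0.0 0.0
print(phi(p, Q, triangle, 3), phi(p, Q, cyc4, 4))  # 1.0 1.0
```

Here $Q_{x,y} = (-1)^{x+y}$, so the product over edges is $(-1)^{\sum_i \deg(i) x_i}$, whose average over uniform labels is 1 exactly when all degrees are even and 0 otherwise, matching the output.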
Theorem A.3 (Balanced 2-SBMs in which 1-Stars Dominate). There exists a stochastic block model SBM(n;p,Q) on 2 balanced communities with the following property. It can be distinguished from $G(n,1/2)$ with high probability via the signed count of 1-stars, but not via the signed counts of any other connected graph of constant size.

Construction. Consider the following SBM model on $k = 2$ communities: $p = (1/2,1/2)$ and $Q = \begin{pmatrix} n^{-\beta} & 0 \\ 0 & n^{-\beta} \end{pmatrix}$ where $\beta\in(3/4,1)$. For any connected graph $H$, $\Phi_{\mathrm{SBM}(n;p,Q)}(H) = O(n^{-\beta|E(H)|})$. Hence, if $H$ has at least 2 edges and is connected, $\Psi_{\mathrm{SBM}(n;p,Q)}(H) = O(n^{-\beta\frac{|E(H)|}{|V(H)|}}) = O(n^{-2\beta/3}) = o(n^{-1/2})$ as $\beta \ge 3/4$. Then, $\Phi_{\mathrm{SBM}(n;p,Q)}(\mathrm{Star}_1) = \Theta(n^{-\beta})$ and, since $\mathrm{Star}_1$ has 2 vertices, $\Psi_{\mathrm{SBM}(n;p,Q)}(\mathrm{Star}_1) = \Theta(n^{-\beta/2}) = \omega(n^{-1/2})$ as $\beta \le 1$.

Theorem A.4 (Balanced 2-SBMs in which 2-Stars Dominate). There exists a stochastic block model SBM(n;p,Q) on 2 balanced communities with the following property. It can be distinguished from $G(n,1/2)$ with high probability via the signed count of 2-stars, but not via the signed counts of any other connected graph of constant size.

Construction. Consider the following SBM model on $k = 2$ communities: $p = (1/2,1/2)$ and $Q = \begin{pmatrix} n^{-\beta} & 0 \\ 0 & -n^{-\beta} \end{pmatrix}$ where $\beta\in(2/3,3/4)$. For any connected graph $H$, $\Phi_{\mathrm{SBM}(n;p,Q)}(H) = O(n^{-\beta|E(H)|})$. Hence, if $H$ has at least 3 edges and is connected, $\Psi_{\mathrm{SBM}(n;p,Q)}(H) = O(n^{-\beta\frac{|E(H)|}{|V(H)|}}) = O(n^{-3\beta/4}) = o(n^{-1/2})$ as $\beta \ge 2/3$. Then, $\Phi_{\mathrm{SBM}(n;p,Q)}(\mathrm{Star}_2) = \Theta(n^{-2\beta})$ and $\Psi_{\mathrm{SBM}(n;p,Q)}(\mathrm{Star}_2) = \Theta(n^{-2\beta/3}) = \omega(n^{-1/2})$ as $\beta \le 3/4$. Finally, $\Phi_{\mathrm{SBM}(n;p,Q)}(\mathrm{Star}_1) = 0$.

Theorem A.5 (2-SBMs in which Large Stars Dominate). Let $D \ge 3$ be a natural number. There exists a stochastic block model SBM(n;p,Q) on 2 communities with the following property. It can be distinguished from $G(n,1/2)$ with high probability via the signed count of $D$-stars, but not via the signed count of any other connected graph on at most $D$ edges.

Construction. Consider the following SBM model on $k = 2$ communities:
$$p = (n^{-\alpha},\, 1-n^{-\alpha}) \quad\text{and}\quad Q = \begin{pmatrix} 0 & n^{-\beta} \\ n^{-\beta} & 0 \end{pmatrix}, \quad\text{where } \alpha = 3/4,\ \beta = \frac{2D-1}{4D} - \frac{1}{8D(D-1)}.$$
Let $H$ be any connected graph. If $H$ is not bipartite, clearly $\Phi_{\mathrm{SBM}(p,Q)}(H) = 0$. Now, suppose that $H$ is bipartite and has parts of sizes $u$ and $v$ where $1 \le u \le v$ and $u + v = |V(H)| \le D+1$. A simple calculation shows that
$$\Phi_{\mathrm{SBM}(p,Q)}(H) = \Theta(p_1^u\, n^{-\beta|E(H)|}) = \Theta(n^{-\alpha u - \beta|E(H)|}) \implies \Psi_{\mathrm{SBM}(p,Q)}(H) = \Theta\big(n^{-\alpha\frac{u}{|V(H)|} - \beta\frac{|E(H)|}{|V(H)|}}\big).$$
Now, suppose first that $H$ is not a tree. A non-tree bipartite graph on at most $D$ edges has at most $D$ vertices and each part in the bipartition is of size at least 2. Thus,
$$\Psi_{\mathrm{SBM}(p,Q)}(H) = O\big(n^{-\frac{3}{2D} - \frac{2D-1}{4D} + \frac{1}{8D(D-1)}}\big) = O\big(n^{-\frac{1}{2} - \frac{10D-11}{8D(D-1)}}\big) = o(n^{-1/2}).$$
Next, if $H$ is a tree and $u \ge 2$,
$$\Psi_{\mathrm{SBM}(p,Q)}(H) = O\big(n^{-\frac{3}{2}\cdot\frac{1}{|V(H)|} - \beta\cdot\frac{|V(H)|-1}{|V(H)|}}\big).$$
As $\beta < 3/2$, the convex combination $\frac{3}{2}\cdot\frac{1}{|V(H)|} + \beta\cdot\frac{|V(H)|-1}{|V(H)|}$ decreases with $|V(H)|$, hence the bound is maximized at $|V(H)| = D+1$, where
$$n^{-\frac{3}{2}\cdot\frac{1}{|V(H)|} - \beta\cdot\frac{|V(H)|-1}{|V(H)|}} = n^{-\frac{3}{2}\cdot\frac{1}{D+1} - \big(\frac{2D-1}{4D} - \frac{1}{8D(D-1)}\big)\cdot\frac{D}{D+1}} = n^{-\frac{1}{2} - \frac{6D-7}{8(D-1)(D+1)}} = o(n^{-1/2}).$$
Finally, if $H$ is a tree and $u = 1$, we obtain
$$\Psi_{\mathrm{SBM}(p,Q)}(H) = \Theta\big(n^{-\frac{3}{4}\cdot\frac{1}{|V(H)|} - \beta\cdot\frac{|V(H)|-1}{|V(H)|}}\big).$$
When $|V(H)| = D+1$, the case of $\mathrm{Star}_D$,
$$n^{-\frac{3}{4}\cdot\frac{1}{|V(H)|} - \beta\cdot\frac{|V(H)|-1}{|V(H)|}} = n^{-\frac{3}{4}\cdot\frac{1}{D+1} - \big(\frac{2D-1}{4D} - \frac{1}{8D(D-1)}\big)\cdot\frac{D}{D+1}} = n^{-\frac{1}{2} + \frac{1}{8(D-1)(D+1)}} = \omega(n^{-1/2}).$$
When $|V(H)| < D+1$, the case of smaller stars,
$$n^{-\frac{3}{4}\cdot\frac{1}{|V(H)|} - \beta\cdot\frac{|V(H)|-1}{|V(H)|}} \le n^{-\frac{3}{4}\cdot\frac{1}{D} - \big(\frac{2D-1}{4D} - \frac{1}{8D(D-1)}\big)\cdot\frac{D-1}{D}} = n^{-\frac{1}{2} - \frac{1}{8D^2}} = o(n^{-1/2}).$$
Thus, $\mathrm{Star}_D$ is the only connected graph $H$ on at most $D$ edges such that $\Psi_{\mathrm{SBM}(n;p,Q)}(H) = \omega(n^{-1/2})$.

Theorem A.6 (2-SBMs in which 4-Cycles Dominate). There exists an SBM distribution SBM(p,Q) with the following properties. All Fourier coefficients corresponding to graphs with at least one leaf (including all stars) are zero and the Fourier coefficients corresponding to graphs with an odd number of edges (including triangles) are zero. Furthermore, the 4-cycle is the unique connected graph of constant size which can distinguish SBM(n;p,Q) and $G(n,1/2)$ with
high probability.

Construction. The construction is inspired by the "quiet planting" distribution in [KVWX23]. Take some $q\in\mathbb{N}$ such that $q = \omega(n^{5/8})$, $q = o(n^{2/3})$, and consider an SBM on $k = q^2$ communities labeled by $[q]\times[q]$, where $p_{(a,b)} = 1/q^2$ for all $(a,b)\in[q]\times[q]$ and
$$Q_{(a_1,b_1),(a_2,b_2)} = \begin{cases} 1, & a_1 = b_1 \text{ and } a_2 \ne b_2,\\ -1, & a_1 \ne b_1 \text{ and } a_2 = b_2,\\ 0, & \text{otherwise}. \end{cases}$$
The Fourier coefficient corresponding to any graph with a leaf (including stars) is 0, as the SBM is fully unbiased as in Proposition A.1. The Fourier coefficient corresponding to any graph with an odd number of edges (including triangles) is also 0. This follows since the measure-preserving map on labels $(a,b)\longmapsto(b,a)$ acts by $Q_{b,a} = -Q_{a,b}$. This is enough via (7).

Finally, the Fourier coefficient of any connected graph $H$ is $O(q^{1-|V(H)|})$ and the Fourier coefficient of the signed 4-cycle is $\Theta(q^{-3})$. In particular, this means that if $H$ is connected, $\Psi_{\mathrm{SBM}(p,Q)}(H) = O(q^{-\frac{|V(H)|-1}{|V(H)|}})$. As $q = \omega(n^{5/8})$, this means that $\Psi_{\mathrm{SBM}(p,Q)}(H) = o(n^{-1/2})$ if $|V(H)| \ge 5$. On the other hand, $\Psi_{\mathrm{SBM}(p,Q)}(\mathrm{Cyc}_4) = \Theta(q^{-3/4}) = \omega(n^{-1/2})$ as $q = o(n^{2/3})$. Finally, any other connected graph $H$ on at most 4 vertices has either an odd number of edges or an odd degree vertex, so $\Psi_{\mathrm{SBM}(p,Q)}(H) = 0$.

Theorem A.7 (2-SBMs in which Triangles Dominate). There exists a stochastic block model SBM(n;p,Q) which can be distinguished from $G(n,1/2)$ with high probability via the signed triangle count, but not via the signed count of any other connected graph on a constant number of vertices.

Construction. One such construction appears in [KVWX23] and with a slight modification in [BB24b]. We give full detail for completeness. Take $k = n^\alpha$ for some $\alpha\in(2/3,3/4)$. Consider an SBM on $k$ communities with $p_i = 1/k = n^{-\alpha}$ for all $i\in[k]$ and
$$Q_{i,j} = \begin{cases} 1, & i = j,\\ -\frac{1}{k-1}, & i \ne j. \end{cases}$$
Let $H$ be any connected graph. One can again observe that the SBM is fully unbiased, which implies that $\Phi_{\mathrm{SBM}(p,Q)}(H) = 0$ if $H$ is a tree via Proposition A.1.
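The fully unbiased property here is a one-line computation, $\sum_y p_y Q_{x,y} = \frac{1}{k}\big(1 - (k-1)\cdot\frac{1}{k-1}\big) = 0$; a quick numeric illustration (ours, not the paper's) at a small $k$:

```python
k = 10
p = [1.0 / k] * k
Q = [[1.0 if i == j else -1.0 / (k - 1) for j in range(k)] for i in range(k)]

# Fully unbiased: for every community x, sum_y p_y * Q[x][y] vanishes.
row_sums = [sum(p[y] * Q[x][y] for y in range(k)) for x in range(k)]
print(max(abs(s) for s in row_sums))  # ~0 (floating-point roundoff only)
```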
Similarly, one can show that if $H$ has any edge the removal of which makes the graph disconnected (i.e., $H$ is not 2-connected), $\Phi_{\mathrm{SBM}(p,Q)}(H) = 0$. Finally, if $H$ is 2-connected, that is, $H$ is connected and continues to be so after the removal of any edge, one can show that $\Phi_{\mathrm{SBM}(p,Q)}(H) \asymp k^{1-|V(H)|}$. Hence, $\Psi_{\mathrm{SBM}(p,Q)}(H) \asymp k^{-\frac{|V(H)|-1}{|V(H)|}}$. This quantity is decreasing in $|V(H)|$. As $k = o(n^{3/4})$ and $k = \omega(n^{2/3})$, the signed triangle count is the unique signed count of a connected constant-sized graph sufficient for testing.

Theorem A.8 (One-to-One Comparisons of Fourier Coefficients Fail). There exist two stochastic block models $\mathrm{SBM}(n;p^1,Q^1)$ and $\mathrm{SBM}(n;p^2,Q^2)$ on 2 communities and a connected graph $H$ on 6 vertices and 8 edges with the following two properties:
1. In $\mathrm{SBM}(n;p^1,Q^1)$, $\Psi_{\mathrm{SBM}(n;p^1,Q^1)}(\mathrm{Star}_t) = o(\Psi_{\mathrm{SBM}(n;p^1,Q^1)}(H))$ for any $t\in\mathbb{N}$.
2. In $\mathrm{SBM}(n;p^2,Q^2)$, $\Psi_{\mathrm{SBM}(n;p^2,Q^2)}(\mathrm{Cyc}_4) = o(\Psi_{\mathrm{SBM}(n;p^2,Q^2)}(H))$.
But, by Theorem 3.6, for any 2-SBM $\mathrm{SBM}(p,Q)$, it holds that
$$\Psi_{\mathrm{SBM}(n;p,Q)}(H) \lesssim \max\Big(\Psi_{\mathrm{SBM}(n;p,Q)}(\mathrm{Cyc}_4),\ \max_{1\le t\le 8}\Psi_{\mathrm{SBM}(n;p,Q)}(\mathrm{Star}_t)\Big).$$

Proof. Take $H = K_{2,4}$. Let $\mathrm{SBM}(n;p^1,Q^1)$ be just the SBM from Theorem A.2. Then, for any star $\mathrm{Star}_t$, $\Phi_{\mathrm{SBM}(p^1,Q^1)}(\mathrm{Star}_t) = 0$. However, as $K_{2,4}$ has only even degree vertices, $\Phi_{\mathrm{SBM}(p^1,Q^1)}(K_{2,4}) = 1$.

Now, let $\mathrm{SBM}(n;p^2,Q^2)$ be the SBM with $p^2 = (n^{-\alpha}, 1-n^{-\alpha})$ where $\alpha\in(0,1)$ and $Q^2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. Then, $\Phi_{\mathrm{SBM}(p^2,Q^2)}(\mathrm{Cyc}_4) = \Phi_{\mathrm{SBM}(p^2,Q^2)}(K_{2,2}) \asymp (p^2_1)^2 = n^{-2\alpha}$. Thus,
$$|\Phi_{\mathrm{SBM}(n;p^2,Q^2)}(\mathrm{Cyc}_4)|^{\frac{1}{|V(\mathrm{Cyc}_4)|}} \asymp n^{-\alpha/2} = o\big(|\Phi_{\mathrm{SBM}(n;p^2,Q^2)}(K_{2,4})|^{\frac{1}{|V(K_{2,4})|}}\big) = o(n^{-\alpha/3}).$$

B Some Omitted Details

B.1 Proof of Inequality (25)

We rewrite as follows.

Lemma B.1. Suppose that $(p_1,p_2,\ldots,p_k)$ is a pmf on $[k]$. Let $q_1,q_2,\ldots,q_k$ be real numbers in $[0,1]$. Then,
$$v \longmapsto \Big(\sum_{i=1}^k p_i^v q_i^{v-1}\Big)^{1/v}$$
is non-increasing on $[1,+\infty)$.

Proof. Define the function $f(v) := \log\Big(\sum_{i=1}^k p_i^v q_i^{v-1}\Big)^{1/v} = \frac{1}{v}\log\Big(\sum_{i=1}^k p_i^v q_i^{v-1}\Big)$. It is enough to show that $f(v)$ is non-increasing.
Equivalently, we need to show that $f'(v) \le 0$:
$$f'(v) \le 0 \iff -\frac{1}{v^2}\log\Big(\sum_{i=1}^k p_i^v q_i^{v-1}\Big)$$
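While the derivative computation carries the proof, the monotonicity claim of Lemma B.1 can also be spot-checked numerically; the script below (an illustration of ours, with randomly drawn $p$ and $q$, not part of the proof) evaluates the map at increasing values of $v$:

```python
import random

def g(p, q, v):
    """The quantity of Lemma B.1: (sum_i p_i^v * q_i^(v-1))^(1/v)."""
    return sum(pi ** v * qi ** (v - 1) for pi, qi in zip(p, q)) ** (1.0 / v)

random.seed(0)
k = 5
p = [random.random() for _ in range(k)]
total = sum(p)
p = [pi / total for pi in p]               # a pmf on [k]
q = [random.random() for _ in range(k)]    # q_i in [0, 1]

vals = [g(p, q, v) for v in (1.0, 1.5, 2.0, 3.0, 5.0, 10.0)]
print(all(a >= b - 1e-12 for a, b in zip(vals, vals[1:])))  # True
```

Note that $g(p,q,1) = \sum_i p_i = 1$, so the sequence starts at 1 and, per the lemma, only decreases from there.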
arXiv:2504.17451v1 [math.ST] 24 Apr 2025

Functional K-Sample Problem via Multivariate Optimal Measure Transport-Based Permutation Test

Šárka Hudecová, Daniel Hlubinka and Zdeněk Hlávka

Abstract The null hypothesis of equality of distributions of functional data coming from $K$ samples is considered. The proposed test statistic is multivariate and its components are based on pairwise Cramér-von Mises comparisons of empirical characteristic functionals. The significance of the test statistic is evaluated via the novel multivariate permutation test, where the final single $p$-value is computed using the discrete optimal measure transport. The methodology is illustrated by real data on cumulative intraday returns of Bitcoin.

1 Introduction

Functional Data Analysis (FDA) is widely applied in various practical applications where the observed data can be represented as (possibly discretized) functions. This includes situations where the variable of interest is observed over a continuous time domain or over a high-frequency discrete time domain; setups often arising in economics, finance, biomedicine, engineering, and many other fields. FDA has become a very popular approach to analyzing high-dimensional datasets in the past few decades, [14, 18]. One of the basic statistical problems in FDA is the functional $K$-sample comparison, see [2, 3, 6, 22] and further references therein.
Most studies focus on the null hypothesis of equality of means, resulting in a functional version of analysis of variance (ANOVA). However, the equality of covariance operators has also been investigated, [8]. In this contribution, we are concerned with a more strict null hypothesis that the whole underlying distributions in the $K$ samples are equal.

In many FDA tests, the resulting test statistic is univariate and its (asymptotic) distribution is rather complex. It is, therefore, common to apply a permutation test, because it requires minimal distributional assumptions and leads to reasonable results even in small samples. The significance is then evaluated using the so-called permutation distribution obtained by random rearrangements of the data. The corresponding $p$-value is computed as a proportion of permutation test statistics that are more extreme than the value computed from the original data.

Unfortunately, the permutation principle cannot be easily generalized for multivariate test statistics, mainly due to the lack of ordering in dimension $d \ge 2$. A standard tool here is to compute $p$-values for each component of the vector test statistic (so-called partial $p$-values) and then to combine them, [15, Chapter 6].

Šárka Hudecová: Charles University, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics, Sokolovská 83, Prague, Czech Republic, e-mail: hudecova@karlin.mff.cuni.cz
Daniel Hlubinka: Charles University, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics, Sokolovská 83, Prague, Czech Republic, e-mail: hlubinka@karlin.mff.cuni.cz
Zdeněk Hlávka: Charles University, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics, Sokolovská 83, Prague, Czech Republic, e-mail: hlavka@karlin.mff.cuni.cz
Recently, [11] proposed a multivariate permutation test based on discrete optimal measure transport, [16, 20],
which is referred to in the following as the OMT permutation test.

Let $X_{j,1},\ldots,X_{j,n_j}$ be independent and identically distributed functional observations coming from a distribution with a characteristic functional $\varphi_j$, $j = 1,\ldots,K$, and let the $K$ samples be independent. Consider the null hypothesis of equality of distributions
$$H_0: \varphi_1 = \cdots = \varphi_K \qquad (1)$$
against a general alternative. For $K = 2$, [12] propose a Cramér-von Mises type test statistic for $H_0$ based on the empirical characteristic functionals. For $K > 2$, the authors recommend to reformulate $H_0$ using $d = \binom{K}{2}$ pairwise comparisons, i.e.
$$H_0^{(j,l)}: \varphi_j = \varphi_l, \quad\text{for } 1 \le j < l \le K. \qquad (2)$$
The test statistic is then computed for each pair $(j,l)$, $j < l$, leading to a $d$-dimensional vector test statistic $T = (T_1,\ldots,T_d)^\top$. Subsequently, the permutation test is applied to the univariate statistic $\max_{1\le j\le d} T_j$. It is clear that some information can be lost by the transformation of the multivariate test statistic $T$ to its maximum. Hence, in this contribution, we follow the latter approach for $K > 2$ samples, but the significance of $T$ is evaluated using the OMT permutation test. This approach not only uses the information from all the elements of $T$, but also allows for an interpretation of the contributions of the individual pairwise comparisons to the rejection of the null hypothesis. That is, if the null hypothesis $H_0$ is rejected, the OMT permutation test detects the pairs $(j,l)$ that contribute the most to the rejection. The benefits of this procedure are illustrated on a real dataset on cumulative intraday returns of Bitcoin prices.

The paper is organized as follows. The considered multivariate test statistic $T$ is introduced in Section 2.
Section 3 describes the OMT permutation test and discusses related issues. The real data application is provided in Section 4.

2 Multivariate Test Statistic for the K-Sample Problem

A functional random variable $X$ is a random element taking values in the Hilbert space $\mathcal{X} = L^2[0,1]$ of square integrable functions on $[0,1]$ such that $\mathrm{E}\int_0^1 X^2(t)\,\mathrm{d}t < \infty$. The space $\mathcal{X}$ is equipped with an inner product $\langle u,v\rangle = \int_0^1 u(t)v(t)\,\mathrm{d}t$, $u,v\in\mathcal{X}$. A distribution of $X$ is fully described by a characteristic functional $\varphi(w) = \mathrm{E}\exp\{\mathrm{i}\langle w,X\rangle\}$ for $w\in\mathcal{X}$, where $\mathrm{i} = \sqrt{-1}$.

Consider first the two sample problem, i.e. $K = 2$. Then (1) reduces to $H_0^{(1,2)}: \varphi_1 = \varphi_2$. [12] proposed a test statistic for $H_0^{(1,2)}$ based on a comparison of the empirical characteristic functionals of the two samples
$$S_{1,2} = \int_{\mathcal{X}} |\widehat\varphi_1(w) - \widehat\varphi_2(w)|^2\,\mathrm{d}Q, \qquad (3)$$
where $\widehat\varphi_j$ is the empirical characteristic functional in the $j$-th sample, defined as
$$\widehat\varphi_j(w) = \frac{1}{n_j}\sum_{i=1}^{n_j}\exp\{\mathrm{i}\langle w, X_{j,i}\rangle\}, \quad j = 1,2, \qquad (4)$$
and $Q$ is a centered Gaussian measure on $\mathcal{X}$ with a covariance operator with a kernel function $v: [0,1]^2\to\mathbb{R}$. In practice, the functions $X_{j,i}$, $i = 1,\ldots,n_j$, $j = 1,\ldots,K$, are recorded on a finite set of points $t_1 < \cdots < t_J$ from $[0,1]$, and therefore, the inner products in (4) need to be replaced by a suitable Riemann approximation of the integral. Furthermore, it suffices to specify $v$ on the measurement points, via a matrix $V = \big(v(t_i,t_j)\big)_{i,j=1}^J$.
Further details on the numerical computation of $S_{1,2}$ and discussions on the choice of $V$ can be found in [12].

Consider now the general problem with $K > 2$, the null hypothesis $H_0$ in (1) and the pairwise partial hypothesis in (2). For each pair $(j,l)$, $1 \le j < l \le K$, let $S_{j,l}$ be the
two sample test statistic defined in (3) for samples $j$ and $l$. It follows from (3) that large values of $S_{j,l}$ indicate that the distribution in samples $j$ and $l$ may differ. Hence, define the test statistic
$$T = (S_{1,2}, S_{1,3},\ldots,S_{K-1,K})^\top = (T_1,\ldots,T_d)^\top,$$
i.e. $T$ is a vector with elements $S_{j,l}$ ordered according to the lexicographic ordering of $(j,l)$. Then $T$ takes values in $[0,\infty)^d$ and large values of its components indicate violation of the corresponding pairwise null hypothesis.

Remark 1 It is clear that the dimension $d$ of $T$ grows rapidly with the number of samples $K$. Remark that the dimension can be reduced, if required, by considering only some of the pairwise comparisons. For instance, $H_0$ holds if and only if $\varphi_1 = \varphi_j$ for all $j = 2,\ldots,K$. Therefore, one can consider only $H_0^{(1,j)}$ for $j = 2,\ldots,K$ and get $T$ of dimension $K-1$.

3 Permutation Test Based on the Optimal Measure Transport

Let $\mathbf{Z} = (X_{1,1},\ldots,X_{1,n_1},\ldots,X_{K,1},\ldots,X_{K,n_K})$ be the ordered list of the pooled data. Denote as $T_0$ the $d$-dimensional test statistic $T$ computed as described in Section 2 for the original data $\mathbf{Z}$. A permutation version of $T$ is obtained as follows: The elements of $\mathbf{Z}$ are randomly permuted, leading to a new ordered list $\mathbf{Z}^* = (Z^*_{1,1},\ldots,Z^*_{1,n_1},\ldots,Z^*_{K,1},\ldots,Z^*_{K,n_K})$. The test statistic $T$ is then computed for the corresponding $K$ samples, with the $j$-th sample being $Z^*_{j,1},\ldots,Z^*_{j,n_j}$. This procedure is repeated independently $B$ times, leading to permutation test statistics $T_1,\ldots,T_B$.
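The resampling scheme just described is straightforward to implement. The sketch below is ours, not the authors' code, and assumes a user-supplied function `pairwise_stats` mapping a list of $K$ samples to the vector $T$ (standing in for the Cramér-von Mises statistics of (3)):

```python
import numpy as np

def permutation_replicas(samples, pairwise_stats, B, rng=None):
    """samples: list of K arrays, each of shape (n_j, J) (curves on a grid).
    Returns (T0, replicas) where replicas has shape (B, d)."""
    rng = np.random.default_rng(rng)
    sizes = [len(s) for s in samples]
    pooled = np.concatenate(samples, axis=0)
    split_at = np.cumsum(sizes)[:-1]
    T0 = np.asarray(pairwise_stats(samples))
    replicas = []
    for _ in range(B):
        shuffled = pooled[rng.permutation(len(pooled))]
        # Re-split the permuted pool into K samples of the original sizes.
        replicas.append(pairwise_stats(np.split(shuffled, split_at)))
    return T0, np.asarray(replicas)
```

For $K = 3$, for instance, `pairwise_stats` returns the $d = 3$ vector of comparisons $(1,2)$, $(1,3)$, $(2,3)$.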
The OMT permutation test from [11] is based on the discrete $L^2$ optimal measure transport of the set $\mathcal{T} = \{T_0, T_1,\ldots,T_B\}$ of $B+1$ points in $\mathbb{R}^d$ to a specified grid set $\mathcal{G}$ of $B+1$ points in the unit ball $\{x\in\mathbb{R}^d : \|x\| \le 1\}$, whose choice is discussed later. Namely, the optimal mapping $F^*$ is defined as a bijection from $\mathcal{T}$ to $\mathcal{G}$ that minimizes the quadratic loss $\sum_{i=0}^B \|F(T_i) - T_i\|^2$. Note that computation of the discrete OMT $F^*$ is a standard optimization task, see [16, Chapter 3]. In R, [17], it can be computed using package clue, [13].

The permutation $p$-value is computed as the relative frequency of permutation statistics that are "more extreme" than $T_0$ from the center-outward perspective (i.e. that are more distant from the center), that is
$$\widehat{p} = \frac{1}{B+1}\Bigg(1 + \sum_{b=1}^B \mathbb{1}\big\{\|F^*(T_b)\| \ge \|F^*(T_0)\|\big\}\Bigg),$$
where $\mathbb{1}\{\cdot\}$ is the indicator function. If the grid set $\mathcal{G}$ is specified as (5) below, one can use also a $p$-value $\widetilde{p} = 1 - \|F^*(T_0)\|$.

Apart from the single final $p$-value, the OMT permutation test provides also an interpretation of the partial contributions of the individual pairwise comparisons. The quantity $(1-\widetilde{p})^2$ can be interpreted as an overall non-conformity score that measures the deviation of the data from the null hypothesis, and $H_0$ is rejected on level $\alpha\in(0,1)$ if and only if the non-conformity score exceeds $(1-\alpha)^2$. Let $F^*(T_0) = \big(F_1(T_0),\ldots,F_d(T_0)\big)^\top$.
Then
$$(1-\widetilde{p})^2 = \|F^*(T_0)\|^2 = \sum_{j=1}^d F^*_j(T_0)^2,$$
so the value $F^*_j(T_0)^2$ can be interpreted as an absolute contribution of the $j$-th component of $T$ (corresponding to the $j$-th pairwise comparison) to the rejection of the composite null hypothesis $H_0$. Denote as $D_j = F^*_j(T_0)/\|F^*(T_0)\|$. Then $\sum_{j=1}^d D_j^2 = 1$, and the vector $(D_1^2,\ldots,D_d^2)^\top$ can be interpreted as a vector of relative contributions of the individual pairwise comparisons. See [11] for examples and more details.

The computation of the OMT permutation $p$-value requires a choice of the grid $\mathcal{G}$. In the current context, each component of
$T$ takes values in $\mathbb{R}^+$ with large values indicating violation of the corresponding partial pairwise hypothesis. Therefore, in view of Section 3.2 in [11], a suitable grid set $\mathcal{G}$ is a subset of $\{x\in[0,1]^d : \|x\| \le 1\}$. In view of [11], it is beneficial to specify $B$ such that $B+1 = n_R\cdot n_S$ for positive integers $n_R$ and $n_S$ and compute the grid points $\mathcal{G} = \{g_{ij}\}_{i,j=1}^{n_R,n_S}$ as
$$g_{ij} = \frac{i}{n_R+1}\, u_j, \quad i = 1,\ldots,n_R,\ j = 1,\ldots,n_S, \qquad (5)$$
where $u_1,\ldots,u_{n_S}$ are distinct directional vectors from the set $\mathcal{S}^+ = \{x\in[0,1]^d : \|x\| = 1\}$, distributed as uniformly as possible over $\mathcal{S}^+$. These directions can be obtained from a sequence $\{z_i\}_{i=1}^{n_S}$ of low discrepancy points in $[0,1]^{d-1}$ as $u_j = \tau(z_j)$, where $\tau$ is a mapping from $[0,1]^{d-1}$ to $\mathcal{S}^+$ such that $\tau(Z)$ has a uniform distribution on $\mathcal{S}^+$ whenever $Z$ has a uniform distribution in $[0,1]^{d-1}$. The mapping $\tau$ can be derived analogously as in [5, Section 1.5.3], see also Section 4 for the application with $d = 3$. Remark that the minimal $p$-value $\widehat{p}$ that can be obtained by the OMT permutation test with grid from (5) is $1/n_R$. For instance, if $n_R = 20$, then $1/n_R = 0.05$ and results with this $p$-value have to be interpreted as significant.

4 Real Data Application

As an illustration of the proposed methodology, we analyze the intraday Bitcoin prices from 2022. The data come from a larger database available at the kaggle.com repository¹. We consider intraday data with frequency 20 min (so there are $J = 72$ observations each day) and calculate the logarithmic cumulative intraday returns (CIR).
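The CIR transformation can be sketched as follows (our illustration; we assume the common convention of log prices referenced to the first observation of the day — the paper's exact normalization may differ):

```python
import numpy as np

def cumulative_intraday_returns(prices):
    """prices: array of shape (n_days, J) of intraday prices.
    Returns log cumulative intraday returns, log P(t_k) - log P(t_1),
    so each day's curve starts at zero."""
    logp = np.log(prices)
    return logp - logp[:, :1]

prices = np.array([[100.0, 101.0, 99.0],
                   [200.0, 202.0, 203.0]])
cir = cumulative_intraday_returns(prices)
print(cir[:, 0])  # each day's first value is exactly 0 by construction
```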
We are concerned with the question whether the behavior of CIR differs for working and weekend days. In particular, we consider the cumulative intraday returns for Mondays (sample 1), Wednesdays (sample 2) and Saturdays (sample 3), and test $H_0$ for $K = 3$. The three samples, of sizes $n_1 = 52$, $n_2 = 52$, and $n_3 = 53$, respectively, are shown in Figure 1.

Remark that the intraday Bitcoin prices for successive days are inherently dependent due to various market dynamics and external factors influencing price trends. Moreover, they are unlikely to be identically distributed because of long-term trends and evolving patterns. The latter issue is effectively addressed by transforming the data to CIR. The potential dependence between successive days is also partially mitigated by the CIR transformation. Furthermore, the selection of non-consecutive days (e.g., Monday, Wednesday, Saturday) reduces any lingering dependence between the observed data points. Consequently, any residual dependence between these chosen days can be reasonably assumed negligible.

¹ https://www.kaggle.com/datasets/jkraak/bitcoin-price-dataset

Fig. 1 Logarithmic cumulative intraday returns for Bitcoin in 2022: Mondays (52 curves), Wednesdays (52 curves), and Saturdays (53 curves).

The trivariate test statistic $T$, whose components correspond to pairwise comparisons of samples (1,2), (1,3) and (2,3), respectively, was computed as described in Section 2. Based on empirical evidence, the authors from [12] recommend to choose the matrix $V$ as data dependent, namely as an approximation to the inverse of the sample covariance matrix, because this choice leads to generally
reasonable results under the null as well as against various alternatives. Following this recommendation, the following test statistics were considered:
− $T^{\mathrm{inv}}$ for $V$ being the approximate inverse to the sample covariance matrix computed from all data (without distinguishing the samples), and
− $T^{\mathrm{inv.pool}}$ for $V$ being the approximate inverse to the pooled sample covariance matrix.
In both cases, the approximated inverse is computed from the 9 largest eigenvalues. The permutation test statistics were calculated for $B = 999$. The grid $\mathcal{G}$ in $\mathbb{R}^3$ is constructed as (5) with $n_R = 40$ and $n_S = 25$. The sequence $\{z_i\}_{i=1}^{n_S}$ is taken as a Halton sequence in $[0,1]^2$, see [9], obtained in R using package randtoolbox, [4]. For $d = 3$, the desired transformation from $z_j$ to $u_j$ is via mapping $\tau = (\tau_1,\tau_2,\tau_3)^\top$ defined as $\tau_1(z_1,z_2) = 1 - z_1$, $\tau_2(z_1,z_2) = \sqrt{2z_1 - z_1^2}\cos(\pi z_2/2)$, and $\tau_3(z_1,z_2) = \sqrt{2z_1 - z_1^2}\sin(\pi z_2/2)$.

Figure 2 shows the test statistic $T^{\mathrm{inv}}$ computed from the original data together with its $B$ permutation replicas, and the grid set $\mathcal{G}$ with the transformation $F^*(T_0^{\mathrm{inv}})$. It is visible that $F^*(T_0^{\mathrm{inv}})$, and subsequently $T_0^{\mathrm{inv}}$, is one of the most extreme points, so the obtained $p$-value is the minimal possible and significant. Namely, we get $\widehat{p} = 0.025$ and $\widetilde{p} = 0.024$. The vector of individual contributions is $(D_1^2, D_2^2, D_3^2)^\top = (0.353, 0.305, 0.343)^\top$, so it indicates that all three pairwise comparisons contribute equally to the rejection of the null hypothesis. The conclusions for the test statistic $T^{\mathrm{inv.pool}}$ are very similar, so these are not discussed in detail.
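The whole OMT evaluation step — the grid (5), the mapping $\tau$ for $d = 3$, the optimal assignment and the $p$-values — can be sketched compactly. This is our illustration, not the authors' implementation: it uses SciPy's assignment solver in place of the R package clue, and random directions instead of a Halton sequence:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def tau(z1, z2):
    """The d = 3 mapping from [0,1]^2 onto S+ given in the text."""
    r = np.sqrt(2 * z1 - z1 ** 2)
    return np.array([1 - z1,
                     r * np.cos(np.pi * z2 / 2),
                     r * np.sin(np.pi * z2 / 2)])

def omt_test(T_all, n_R, n_S, seed=0):
    """T_all: array of shape (B+1, 3); row 0 is the observed statistic T0.
    Returns (p_hat, p_tilde, squared relative contributions D^2)."""
    assert T_all.shape == (n_R * n_S, 3)
    rng = np.random.default_rng(seed)
    directions = np.array([tau(z1, z2) for z1, z2 in rng.random((n_S, 2))])
    radii = np.arange(1, n_R + 1) / (n_R + 1)
    grid = (radii[:, None, None] * directions[None, :, :]).reshape(-1, 3)
    # Optimal bijection F*: minimize the total squared distance (assignment).
    cost = ((T_all[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)
    _, cols = linear_sum_assignment(cost)
    F = grid[cols]                       # F[i] = F*(T_i)
    norms = np.linalg.norm(F, axis=1)
    p_hat = (1 + np.sum(norms[1:] >= norms[0])) / T_all.shape[0]
    p_tilde = 1 - norms[0]
    D2 = (F[0] / norms[0]) ** 2          # relative contributions, sums to 1
    return p_hat, p_tilde, D2
```

Note that every image of $\tau$ has unit norm, since $(1-z_1)^2 + (2z_1 - z_1^2) = 1$, so the grid points indeed lie on spheres of radii $i/(n_R+1)$.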
Table 1 presents $p$-values of various functional ANOVA tests for equality of mean functions, computed with the help of package fdANOVA, [7]. In contrast to the results of our tests, all the tests for equality of mean function lead to highly non-significant $p$-values. This suggests that the violation of $H_0$ is not due to the difference in means, but rather due to some other distributional aspects.

Fig. 2 Left panel: The test statistic $T^{\mathrm{inv}}$ (larger square) together with $B$ permutation replicas (small circles). All values multiplied by $10^4$. Right panel: The grid $\mathcal{G}$ with highlighted point $F^*(T_0^{\mathrm{inv}})$.

Figure 1 suggests that the difference in the three groups can be related to different covariance structures. Hence, we consider also the null hypothesis $\widetilde{H}_0: C_1 = C_2 = C_3$, where $C_j$ is the covariance operator in sample $j$. This hypothesis can be reformulated in terms of three pairwise comparisons. For each pair, the operators can be compared using the test statistic based on the square-root distance, proposed by [1]. This procedure results in a trivariate test statistic that can be evaluated via the OMT permutation test. Using again $B = 999$ and the same grid set, we get $\widehat{p} = 0.05$ and $\widetilde{p} = 0.049$. Note that due to the discreteness of possible $p$-values for the OMT permutation test, these values need to be interpreted as significant. The corresponding vector of individual contributions is $(0.062, 0.909, 0.028)^\top$, which reveals that $\widetilde{H}_0$ is rejected mainly due to the difference in covariance operators of Mondays and Saturdays.

Finally note that our conclusions about differences in individual days are in agreement with some other empirical studies that examined differences in Bitcoin trading, see [10] and references therein.

Table 1 Results for various functional ANOVA tests
for equality of mean functions for the considered three samples. FP is permutation test based on basis function representation from [6], CS is $L^2$-norm-based parametric bootstrap test for heteroscedastic samples from [3]. L2B is $L^2$-norm-based test with bias-reduced method of estimation, while L2b is $L^2$-norm-based bootstrap test, see [22]. FB is F-type test with bias-reduced method of estimation, see [19, 21], and Fb is F-type bootstrap test, see [22].

Test       FP     CS     L2B    L2b    FB     Fb
p-value    0.830  0.790  0.772  0.775  0.775  0.789

Acknowledgements The research was supported by the Czech Science Foundation project GAČR No. 25-15844S.

References

1. A. Cabassi, D. Pigoli, P. Secchi, and P. A. Carter. Permutation tests for the equality of covariance operators of functional data with applications to evolutionary biology. Electron. J. Stat., 11(2):3815–3840, 2017.
2. J. A. Cuesta-Albertos and M. Febrero-Bande. A simple multiway ANOVA for functional data. TEST, 19(3):537–557, 2010.
3. A. Cuevas, M. Febrero, and R. Fraiman. An ANOVA test for functional data. Comput. Statist. Data Anal., 47(1):111–122, 2004.
4. C. Dutang and P. Savicky. randtoolbox: Generating and Testing Random Numbers, 2022. R package version 2.0.3.
5. K.-T. Fang and Y. Wang. Number-theoretic Methods in Statistics. Chapman & Hall, 1994.
6. T. Górecki and L. Smaga. A comparison of tests for the one-way ANOVA problem for functional data. Comput. Statist., 30(4):987–1010, 2015.
7. T. Górecki and L. Smaga. fdANOVA: Analysis of Variance for Univariate and Multivariate Functional Data, 2018. R package version 0.1.2.
8. J. Guo, B. Zhou, and J.-T. Zhang. New tests for equality of several covariance functions for functional data. J. Amer. Statist. Assoc., 114(527):1251–1263, 2019.
9. J. H. Halton. On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals. Numer. Math.
, 2:84–90, 1960. 10. P. R. Hansen, C. Kim, and W. Kimbrough. Periodicity in cry ptocurrency volatility and liquidity. J. Financ. Econom. , 22(1):224–251, 2024. 11. Z. Hl ´avka, D. Hlubinka, and ˇS. Hudecov ´a. Multivariate quantile-based permuta- tion tests with application to functional data. J. Comput. Graph. Stat. , 2024. https://doi.org/10.1080/10618600.2024.2444302. 12. Z. Hl ´avka, D. Hlubinka, and K. Ko ˇnasov ´a. Functional ANOV A based on empirical character- istic functionals. J. Multivariate Anal. , 189:104878, 2022. 13. K. Hornik. A CLUE for CLUster Ensembles. J. Stat. Softw. , 14(12), 2005. 14. L. Horv ´ath and P. Kokoszka. Inference for Functional Data with Applications . Springer-Verlag, New York, 2012. 15. F. Pesarin. Multivariate permutation tests: with applications in bios tatistics . Wiley, 2001. 16. G. Peyr ´e, M. Cuturi, et al. Computational optimal transport: With a pplications to data science. Found. Trends Mach. Learn. g , 11(5-6):355–607, 2019. 17. R Core Team. R: A Language and Environment for Statistical Computing . R Foundation for Statistical Computing, Vienna, Austria, 2022. 18. J. O. Ramsay and B. W. Silverman. Functional Data Analysis . Springer Series in Statistics. Springer New York, 2013. 19. Q.
Concentration inequalities and cut-off phenomena for penalized model selection within a basic Rademacher framework

Pascal Massart
Institut de Mathématique d'Orsay, Bâtiment 307, Université Paris-Saclay, 91405 Orsay-Cedex, France

Vincent Rivoirard
CEREMADE, Université Paris Dauphine, Place du Maréchal de Lattre de Tassigny, 75016 Paris, France & Université Paris-Saclay, CNRS, Inria, Laboratoire de mathématiques d'Orsay, 91405, Orsay, France

April 25, 2025

arXiv:2504.17559v1 [math.ST] 24 Apr 2025

Abstract

This article exists first and foremost to contribute to a tribute to Patrick Cattiaux. One of the two authors has known Patrick Cattiaux for a very long time, and owes him a great deal. If we are to illustrate the adage that life is made up of chance, then what could be better than the meeting of two young people in the 80s, both of whom fell in love with the mathematics of randomness, and one of whom changed the other's life by letting him in on a secret: if you really believe in it, you can turn this passion into a profession. By another happy coincidence, this tribute comes at just the right time, as Michel Talagrand has been awarded the Abel prize. The temptation was therefore great to do a double. Following one of the many galleries opened up by mathematics, we shall first draw a link between the mathematics of Patrick Cattiaux and that of Michel Talagrand. Then we shall show how the abstract probabilistic material on the concentration of product measures thus revisited can be used to shed light on cut-off phenomena in our field of expertise, mathematical statistics. Nothing revolutionary here, as everyone knows the impact that Talagrand's work has had on the development of mathematical statistics since the late 90s, but we've chosen a very simple framework in which everything can be explained with minimal technicality, leaving the main ideas to the fore.
1 Introduction

Talagrand's work on concentration of measure gave a decisive impetus to this subject, not only in its fundamental aspects, but also in its implications for other fields, such as statistics and machine learning, which will be of particular interest to us here. There is such an overflow of results that we thought it would be useful to highlight a few key ideas within a deliberately simple and streamlined framework. Our ambition is to illustrate why and how concentration inequalities come into play to understand cut-off phenomena in some high-dimensional statistical problems, while giving an insight into how these tools are constructed. The statistical framework in which we are going to place ourselves is that of linear regression, for which one observes a random vector
$$Y = f + \sigma\epsilon$$
in the Euclidean space $\mathbb{R}^n$, where $f$ is an unknown vector to be estimated and the noise level $\sigma$ is assumed to be known at first. The noise vector $\epsilon$ has independent and identically distributed components. They are assumed to be centered in expectation and normalized, i.e. with variance equal to 1. To keep things as simple as possible, we shall even assume that they are Rademacher variables, i.e. uniformly distributed random signs. Of course, this
assumption is merely a convenience to lighten the presentation, but it is clear that everything we present here extends immediately to the case where the noise variables are bounded in absolute value by a constant $M$ (greater than 1, of course, since the variables have a variance equal to 1). This a priori parametric estimation problem can easily be turned into a non-parametric one if we bear in mind that $f$ is nothing but the vector of signal intensities on $[0,1]$ at successive instants $i/n$, with $1 \le i \le n$. In high dimension, i.e. if $n$ tends to infinity, estimating a smooth signal is more or less the same as estimating the vector $f$. A flexible strategy for this is to select a least-squares estimator from a family given a priori. A simple illustration of this strategy is to start from an ordered basis $(\phi_j)_{1\le j\le n}$ of $\mathbb{R}^n$, then form the linear model family $S_D$, with $1 \le D \le n$, where $S_D$ is the vector space spanned by the first $D$ basis functions $(\phi_j)_{1\le j\le D}$. In the case where $f$ comes from a signal, the natural ordered basis comes from the Fourier basis, as will be detailed in Section 5 of the article. Selecting a proper model $S_D$, and therefore a proper least-squares estimator $\hat f_D$ built on $S_D$ for $f$, can be performed by minimizing the penalized least-squares criterion
$$\mathrm{crit}(D) = -\|\hat f_D\|^2 + \mathrm{pen}(D).$$
Choosing a penalty function of the form $\mathrm{pen}(D) = \kappa\sigma^2 D$, the following interesting high-dimensional cut-off phenomenon can be observed (on simulations and on real data as well) on the behavior of $\hat D_\kappa = \arg\min_D\, -\|\hat f_D\|^2 + \kappa\sigma^2 D$: if $\kappa$ stays below some critical value, $\hat D_\kappa$ takes large values, while at this critical value $\hat D_\kappa$ suddenly drops to a much smaller value. Interestingly, this phenomenon can be observed for a variety of penalized model selection criteria and it helps to calibrate these criteria from the data themselves. See [3, 4, 5, 9, 23, 28] for regression models, [7, 14, 16, 17, 23, 26] for density estimation, or [12, 18, 25] for more involved models.
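The cut-off just described is easy to reproduce numerically. The sketch below is our own illustration, not taken from the paper: the signal, the dimension $n$ and the orthonormal DCT-II basis (a stand-in for the Fourier basis mentioned above) are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 512, 1.0

# Observation model Y = f + sigma * eps with Rademacher noise (random signs).
t = np.arange(1, n + 1) / n
f = np.sin(2 * np.pi * t)                      # a smooth signal (illustrative)
Y = f + sigma * rng.choice([-1.0, 1.0], n)

# Orthonormal DCT-II basis of R^n, playing the role of the ordered Fourier basis.
j = np.arange(n)[:, None]
i = np.arange(n)[None, :]
B = np.sqrt(2.0 / n) * np.cos(np.pi * j * (2 * i + 1) / (2 * n))
B[0] /= np.sqrt(2.0)

coeffs = B @ Y                                 # <Y, phi_j> for j = 1..n
cum = np.cumsum(coeffs ** 2)                   # ||f_hat_D||^2 for D = 1..n
D = np.arange(1, n + 1)

def D_hat(kappa):
    """argmin over D of crit(D) = -||f_hat_D||^2 + kappa * sigma^2 * D."""
    return int(D[np.argmin(-cum + kappa * sigma**2 * D)])

# Below the critical value kappa = 1, D_hat is typically huge;
# above it, D_hat drops to the few coordinates actually carrying the signal.
print(D_hat(0.8), D_hat(1.2))
```

The heuristic behind the jump is visible in the code: for pure-noise coordinates each squared coefficient has mean $\sigma^2$, so the criterion drifts downward when $\kappa < 1$ (pushing $\hat D_\kappa$ toward $n$) and upward when $\kappa > 1$.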
We also refer the reader to [2] for a survey on minimal penalties related to the slope heuristics. In this paper, we shall provide a complete mathematical analysis which shows that this phenomenon occurs with high probability with a critical value which is asymptotically equal to 1. Since the same result has been proved in [9] for standard Gaussian errors, this shows the robustness of the phenomenon to non-Gaussianity. The reason why concentration inequalities play a crucial role in the mathematical understanding of this phenomenon is particularly clear if we consider the case of pure noise where $f = 0$ and if we assume that $\sigma = 1$ to make things even simpler. In this case $Y = \epsilon$ and the square norm of the least-squares estimator $\hat f_D$ can be explicitly computed as
$$\|\hat f_D\|^2 = \sum_{1\le j\le D}\langle \epsilon, \phi_j\rangle^2.$$
In the Gaussian case, the summands are independent and this quantity is merely distributed according to a chi-square distribution with $D$ degrees of freedom. In the Rademacher case it is no longer true and it is not that obvious to get a sharp probabilistic control. The very simple trick which allows
to connect the issue of understanding the behavior of this quantity with Talagrand's works on concentration of product measures is to consider $\|\hat f_D\|$ rather than its square, just because of the formula
$$\|\hat f_D\| = \sup_{b\in B}\langle b, \epsilon\rangle \quad\text{where}\quad B = \Big\{\sum_{1\le j\le D}\theta_j\phi_j \;\Big|\; \sum_{1\le j\le D}\theta_j^2 \le 1\Big\}.$$
This formula allows one to interpret $\|\hat f_D\|$ as the supremum of a Rademacher process. For such a process, one can apply different related techniques to ultimately obtain concentration inequalities of $\|\hat f_D\|^2$ around $D$. This concentration of $\|\hat f_D\|^2/D$ around 1 provides an explanation for the asymptotic behavior of the critical value for $\kappa$ mentioned above. The paper is organized in two parts: the first part is probabilistic while the second one is devoted to statistics. In the first part, we first revisit the connection between optimal transportation and Talagrand's geometrical approach to concentration. In particular, we emphasize the importance of the variational formula for entropy to establish this connection. Needless to say, the variational formula plays an important role in statistical mechanics, and this topic as well as optimal transportation are among the topics of interest of Patrick Cattiaux. These abstract results are used to build explicit concentration bounds for suprema of Rademacher processes. In the second part, we prove two complementary results on penalized least-squares model selection that highlight the above-mentioned cut-off phenomenon. Finally, we illustrate the advantages of this approach in a non-parametric estimation context and produce a few simulations that allow us to visualize the cut-off phenomenon.

2 Transportation and Talagrand's convex distance for product measures

The aim of this probabilistic part of the article is twofold. Firstly, we wish to demonstrate, in an elementary way, the link between transport inequalities - one of Patrick Cattiaux's mathematical interests - and concentration inequalities.
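Returning to the formula $\|\hat f_D\| = \sup_{b\in B}\langle b,\epsilon\rangle$ displayed above: it can be checked numerically, since the supremum of the linear form $b \mapsto \langle b,\epsilon\rangle$ over the unit ball of $\mathrm{span}(\phi_1,\ldots,\phi_D)$ is attained at the normalized projection of $\epsilon$. The orthonormal family below is an arbitrary one obtained by QR, purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, D = 40, 7

# An arbitrary orthonormal family (phi_j)_{j<=D}: columns of a reduced QR factor.
Phi, _ = np.linalg.qr(rng.standard_normal((n, D)))
eps = rng.choice([-1.0, 1.0], n)               # Rademacher vector

inner = Phi.T @ eps                            # <eps, phi_j>, j = 1..D
chi = np.sqrt(np.sum(inner ** 2))              # ||f_hat_D|| in the pure-noise case

# The supremum over B = {sum_j theta_j phi_j : sum_j theta_j^2 <= 1}
# is attained at theta_j = <eps, phi_j> / chi:
b_star = Phi @ (inner / chi)
assert abs(b_star @ eps - chi) < 1e-10

# ... and no other element of B does better:
for _ in range(500):
    theta = rng.standard_normal(D)
    theta /= max(1.0, np.linalg.norm(theta))   # keep theta in the unit ball
    assert (Phi @ theta) @ eps <= chi + 1e-10
```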
In passing, we shall revisit the connection between the functional point of view and the isoperimetric point of view developed by Talagrand in his works on concentration of measure. Secondly, the resulting concentration inequalities will be used to control the suprema of Rademacher processes. This will prove to be the crucial tool for the statistical part of the article. The point of view adopted here is to focus our attention on concentration inequalities for a function of independent random variables $\zeta(X_1, X_2, \ldots, X_n)$, where $\zeta$ denotes some real valued measurable function on some abstract product space $\mathcal{X}^n = \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n$ equipped with some product $\sigma$-field $\mathcal{A}^n = \mathcal{A}_1 \otimes \mathcal{A}_2 \otimes \cdots \otimes \mathcal{A}_n$. More precisely, we shall consider the following regularity condition. If $v$ denotes a positive real number, we say that $\zeta$ satisfies the bounded differences condition in quadratic mean $(C_v)$ if
$$\zeta(x) - \zeta(y) \le \sum_{i=1}^n c_i(x)\,\mathbb{1}_{x_i \ne y_i}, \tag{1}$$
where the coefficients $c_i$ are measurable and
$$\Big\|\sum_{i=1}^n c_i^2\Big\|_\infty \le v.$$
The strength of this condition is that no structure is needed to formulate it. However, if one wants to figure out what it means, it is interesting to realize that if $\zeta$ is a smooth (continuously differentiable) convex function on $[0,1]^n$, for instance, then $\zeta$ satisfies $(C_v)$ whenever $\big\|\,\|\nabla\zeta\|^2\big\|_\infty \le v$. In this spirit, this regularity condition will typically enable us to study the behavior
of suprema of Rademacher processes, which, as announced above, will be our target example here. But the fact that no structure is required is important because it also allows one to study many examples of functions of independent random variables from random combinatorics. It was in this field that Talagrand's early work had its most immediate impact. The contribution of Talagrand's seminal work in [30] in this context is to relax the bounded differences condition used in McDiarmid's bound [24], which involves $\sum_{i=1}^n \|c_i\|_\infty^2$ instead of $\big\|\sum_{i=1}^n c_i^2\big\|_\infty$.

2.1 Talagrand's convex distance

The crucial concept introduced by Talagrand to make this breakthrough is what he called the convex distance, which can be defined as follows. For any measurable set $A$ and any point $x$ in $\mathcal{X}^n$, let
$$d_T(x, A) = \sup_{\alpha \in B_n^+}\, \inf_{y \in A}\, \sum_{i=1}^n \alpha_i\,\mathbb{1}_{x_i \ne y_i}, \tag{2}$$
where $B_n^+$ denotes the set of vectors of the closed unit Euclidean ball of $\mathbb{R}^n$ with non-negative components. As explained very well in Michel Ledoux's fine article [15], for example, the concentration property of a probability measure on a metric space results in concentration inequalities of Lipschitz functions around their median. The nice thing is that an analogous mechanism can be set up for Talagrand's convex distance on a product probability space, with the $(C_v)$ condition replacing the Lipschitz condition. More precisely, the role played by $d_T$ in the study of functions satisfying condition $(C_v)$ is as follows. Assume that $v = 1$ for simplicity. Choosing $A$ as a level set of the function $\zeta$, i.e. $A = \{\zeta \le s\}$, we notice that
$$\inf_{y\in A}\sum_{i=1}^n c_i(x)\,\mathbb{1}_{x_i\ne y_i} \le d_T(x, A)$$
and therefore, if $d_T(x, A) < t$, there exists some point $y$ such that $\zeta(y) \le s$ and $\sum_{i=1}^n c_i(x)\,\mathbb{1}_{x_i\ne y_i} < t$. Using such a point in condition $(C_1)$ leads to $\zeta(x) < t + s$. In other words, for a function $\zeta$ satisfying condition $(C_1)$, the following inclusion between level sets holds true for any real number $s$ and any non-negative real number $t$:
$$\{\zeta \ge s+t\} \subseteq \{d_T(\cdot, \{\zeta \le s\}) \ge t\}. \tag{3}$$
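For intuition, $d_T$ can be computed exactly on tiny examples using a standard equivalent formulation: the convex distance equals the Euclidean distance from the origin to the convex hull of the mismatch patterns $\{(\mathbb{1}_{x_i\ne y_i})_i : y \in A\}$. The Frank-Wolfe loop below is our own illustrative implementation of that minimum-norm problem, not something taken from the paper.

```python
import numpy as np

def convex_distance(x, A):
    """Talagrand's convex distance d_T(x, A) for a finite set A, computed as
    the minimal Euclidean norm over the convex hull of the mismatch patterns
    u_y = (1[x_i != y_i])_i, via a Frank-Wolfe iteration with exact line search."""
    U = np.array([[float(xi != yi) for xi, yi in zip(x, y)] for y in A])
    nu = U[0].astype(float)                    # start at an arbitrary vertex
    for _ in range(2000):
        k = np.argmin(U @ nu)                  # vertex minimizing <nu, u_y>
        d = U[k] - nu
        denom = d @ d
        if denom < 1e-15:
            break
        gamma = min(1.0, max(0.0, -(nu @ d) / denom))   # exact line search
        nu = nu + gamma * d
    return float(np.linalg.norm(nu))

# d_T((1,1), {(0,0)}) = sqrt(2): both coordinates must change.
print(convex_distance((1, 1), [(0, 0)]))
# d_T((1,1), {(0,1), (1,0)}) = 1/sqrt(2), although the Hamming distance is 1:
# the sup over alpha cannot favor both coordinates at once.
print(convex_distance((1, 1), [(0, 1), (1, 0)]))
```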
This means that in terms of level sets, everything works as if $d_T$ were really a usual distance between points and $\zeta$ were a 1-Lipschitz function with respect to $d_T$. Given some random vector $X$ taking its values in $\mathcal{X}^n$, we can now connect the concentration of $\zeta(X)$ around its median $M$ to the concentration rate of the probability distribution of $X$ on $\mathcal{X}^n$ with respect to $d_T$, defined as
$$\rho(t) = \sup_A P\{X \in A\}\, P\{d_T(X, A) \ge t\},$$
where the supremum in the formula above is extended to all measurable sets $A$. Indeed, (3) leads to
$$P\{\zeta(X) \le s\}\, P\{\zeta(X) \ge s+t\} \le \rho(t),$$
so that, given a median $M$ of $\zeta(X)$, using this inequality with $s = M$ or $s = M - t$ alternatively implies that
$$P\{\zeta(X) \le M - t\} \vee P\{\zeta(X) \ge M + t\} \le 2\rho(t). \tag{4}$$
The remarkable thing is that for independent variables $X_1, X_2, \ldots, X_n$, Talagrand's convex distance inequality provides a universal sub-Gaussian control of $\rho$.

Theorem 1 Let $X_1, X_2, \ldots, X_n$ be independent random variables and set $X = (X_1, X_2, \ldots, X_n)$. For all non-negative real numbers $t$,
$$\sup_A P\{X \in A\}\, P\{d_T(X, A) \ge t\} \le \exp(-t^2/4),$$
where the supremum is taken over all measurable subsets of $\mathcal{X}^n$.

It is easy to relax the normalization constraint on the
function $\zeta$ that we have used above. Given some function $\zeta$ satisfying condition $(C_v)$, combining Talagrand's convex distance inequality with inequality (4) (used for $\zeta/\sqrt{v}$ instead of $\zeta$) leads to the following immediate consequence.

Corollary 2 Let $\zeta$ satisfy regularity condition $(C_v)$, and let $X_1, X_2, \ldots, X_n$ be independent random variables. Setting $Z = \zeta(X_1, X_2, \ldots, X_n)$, if $M$ is a median of $Z$, then for all non-negative real numbers $t$,
$$P\{Z - M \le -t\} \vee P\{Z - M \ge t\} \le 2\exp(-t^2/(4v)).$$
Of course, from this concentration inequality of $\zeta(X_1, X_2, \ldots, X_n)$ around the median, it is possible to deduce a concentration inequality around the expectation, possibly with slightly worse constants. As a matter of fact, it is better to take an alternative route. More precisely, starting from a proper transportation inequality one can prove a concentration inequality around the expectation under the same regularity condition as above with neat constants. As a bonus, we shall see that it will also provide a simple proof of Talagrand's convex distance inequality.

2.2 Marton's transportation inequality in action

The link between optimal transportation and concentration has been pointed out by Katalin Marton in a series of papers (see [20], [21] and [22]). Let us give a few lines of explanation based on the variational formula for entropy. Let $Q$ be some probability distribution which is absolutely continuous with respect to $P^n$. Let $P$ be some probability distribution coupling $P^n$ to $Q$, which merely means that it is a probability distribution on $\mathcal{X}^n \times \mathcal{X}^n$ with first marginal $P^n$ and second marginal $Q$. Let $\zeta$ be some function satisfying condition $(C_v)$. Then we may write
$$E_Q(\zeta) - E_{P^n}(\zeta) = E_P[\zeta(Y) - \zeta(X)] \le \sum_{i=1}^n E_P\big[c_i(Y)\,P\{X_i \ne Y_i \mid Y\}\big],$$
which implies, by applying the Cauchy-Schwarz inequality twice,
$$E_Q(\zeta) - E_{P^n}(\zeta) \le \sum_{i=1}^n E_P\big[c_i^2(Y)\big]^{1/2}\, E_P\big[P^2\{X_i \ne Y_i \mid Y\}\big]^{1/2} \le \Big(\sum_{i=1}^n E_P\big[c_i^2(Y)\big]\Big)^{1/2}\Big(\sum_{i=1}^n E_P\big[P^2\{X_i \ne Y_i \mid Y\}\big]\Big)^{1/2}.$$
We derive from this inequality that
$$E_Q(\zeta) - E_{P^n}(\zeta) \le \sqrt{v}\, \inf_{P \in \mathcal{P}(P^n, Q)}\Big(\sum_{i=1}^n E_P\big[P^2\{X_i \ne Y_i \mid Y\}\big]\Big)^{1/2},$$
where $\mathcal{P}(P^n, Q)$ denotes the set of all probability distributions $P$ coupling $Q$ to $P^n$. Of course, exchanging the roles of $X$ and $Y$, a similar inequality holds for $-\zeta$ instead of $\zeta$; more precisely,
$$-E_Q(\zeta) + E_{P^n}(\zeta) \le \sqrt{v}\, \inf_{P \in \mathcal{P}(P^n, Q)}\Big(\sum_{i=1}^n E_P\big[P^2\{X_i \ne Y_i \mid X\}\big]\Big)^{1/2}.$$
Marton's following beautiful result tells us what happens when the coupling is chosen in a clever way. The version provided below (in which a symmetric conditioning with respect to $X$ and $Y$ is involved) is due to Paul-Marie Samson (see [27]).

Theorem 3 (Marton's conditional transportation inequality) Let $P^n$ be some product probability distribution on some product measurable space $\mathcal{X}^n$ and let $Q$ be some probability measure absolutely continuous with respect to $P^n$. Then
$$\min_{P \in \mathcal{P}(P^n, Q)} E_P\Big[\sum_{i=1}^n P^2\{X_i \ne Y_i \mid X\} + P^2\{X_i \ne Y_i \mid Y\}\Big] \le 2 D(Q \| P^n),$$
where $(X_i, Y_i)$, $1 \le i \le n$, denote the coordinate mappings on $\mathcal{X}^n \times \mathcal{X}^n$ and $D(Q \| P^n)$ denotes the Kullback-Leibler divergence of $Q$ from $P^n$.

Now we can forget about the way the optimal coupling has been designed and focus on what the combination of Theorem 3 and the preceding inequalities gives us. If we do so, we end up with the following inequality
$$E_Q(\zeta) - E_{P^n}(\zeta) \le \sqrt{2 v D(Q \| P^n)}, \tag{5}$$
which holds true for any probability distribution $Q$ which is absolutely continuous with respect to $P^n$ (the same inequality remaining true for $-\zeta$ instead of $\zeta$). It remains to connect this inequality with concentration, which can be done thanks to a
very simple but powerful engine: the variational formula for entropy. This formula is also well known in statistical mechanics, which is another domain of interest of Patrick Cattiaux. Let us briefly recall what this formula says for a random variable $\xi$ on some probability space $(\Omega, \mathcal{A}, P)$:
$$\log E_P\big[e^\xi\big] = \sup_{Q \ll P}\big(E_Q(\xi) - D(Q \| P)\big). \tag{6}$$
How to use it? The trick is to rewrite (5) differently. Noticing that for any non-negative real number $a$,
$$\inf_{\lambda > 0}\Big(\frac{a}{\lambda} + \frac{\lambda v}{2}\Big) = \sqrt{2av} \tag{7}$$
and using (7) with $a = D(Q \| P^n)$, inequality (5) means that for any positive $\lambda$,
$$\sup_{Q \ll P^n}\big[\lambda\big(E_Q(\zeta) - E_{P^n}(\zeta)\big) - D(Q \| P^n)\big] \le \frac{\lambda^2 v}{2}.$$
It remains to combine this inequality with the variational formula (6) applied to the random variable $\xi = \lambda(\zeta - E_{P^n}(\zeta))$ to derive that for any positive $\lambda$,
$$\log E_{P^n}\big[e^{\lambda(\zeta - E_{P^n}(\zeta))}\big] \le \frac{\lambda^2 v}{2}.$$
Since the same inequality holds true for $-\zeta$ instead of $\zeta$, this means that it actually holds for any real number $\lambda$. Applying Chernoff's inequality leads to the following concentration result around the mean, which is the analogue of the preceding concentration inequality around the median, apart from the fact that the numerical constants are slightly different.

Corollary 4 Let $\zeta$ satisfy regularity condition $(C_v)$, and let $X_1, X_2, \ldots, X_n$ be independent random variables. Setting $Z = \zeta(X_1, X_2, \ldots, X_n)$, then for all non-negative real numbers $t$,
$$P\{Z - EZ \le -t\} \vee P\{Z - EZ \ge t\} \le \exp(-t^2/(2v)).$$
Interestingly, as pointed out in [10], this result strictly implies Talagrand's convex distance inequality (and therefore Corollary 2). In other words, Marton's transportation inequality implies Talagrand's convex distance inequality.

2.3 The convex distance inequality revisited

The key is that, given any measurable subset $A$ of $\mathcal{X}^n$, Talagrand's convex distance $d_T(\cdot, A)$ itself satisfies condition $(C_1)$.
Indeed, if $c(x)$ is a vector of $B_n^+$ for which the supremum in formula (2) is achieved (which does exist, since an upper semi-continuous function achieves a maximum on a compact set), we have
$$d_T(x, A) - d_T(y, A) \le \inf_{x' \in A}\sum_{i=1}^n c_i(x)\,\mathbb{1}_{x_i \ne x_i'} - \inf_{y' \in A}\sum_{i=1}^n c_i(x)\,\mathbb{1}_{y_i \ne y_i'} \le \sum_{i=1}^n c_i(x)\,\mathbb{1}_{x_i \ne y_i},$$
with $\big\|\sum_{i=1}^n c_i^2\big\|_\infty \le 1$. This means that $d_T(\cdot, A)$ satisfies condition $(C_1)$ and Corollary 4 merely applies to $d_T(X, A)$. It turns out that this property strictly implies Talagrand's convex distance inequality. Indeed, setting $Z = d_T(X, A)$ and $\theta = EZ$, by the right-tail bound provided by Corollary 4,
$$P\{Z - \theta \ge x\} \le \exp\Big(-\frac{x^2}{2}\Big).$$
Noticing that $x^2 \ge -\theta^2 + (x+\theta)^2/2$, this upper-tail inequality a fortiori leads to
$$P\{Z - \theta \ge x\} \le \exp\Big(\frac{\theta^2}{2}\Big)\exp\Big(-\frac{(x+\theta)^2}{4}\Big). \tag{8}$$
Setting $x = t - \theta$, this inequality also implies that for positive $t$,
$$P\{Z \ge t\} \le \exp\Big(\frac{\theta^2}{2}\Big)\exp\Big(-\frac{t^2}{4}\Big)$$
(notice that this bound is trivial whenever $t \le \theta$ and therefore we may always assume that $t > \theta$, which warrants that $x > 0$). On the other hand, using the left-tail bound $P\{\theta - Z \ge x\} \le \exp(-x^2/2)$ with $x = \theta$, we derive that
$$P\{X \in A\} = P\{Z = 0\} \le \exp\Big(-\frac{\theta^2}{2}\Big). \tag{9}$$
Combining (8) with (9) leads to
$$P\{X \in A\}\, P\{Z \ge t\} \le \exp\Big(-\frac{t^2}{4}\Big),$$
which is precisely Talagrand's convex distance inequality.

2.4 Application to Rademacher processes

Recalling that a Rademacher variable is merely a uniformly distributed random sign, if $\epsilon_1, \epsilon_2, \ldots, \epsilon_n$ are independent Rademacher variables and
if $B$ is a subset of $\mathbb{R}^n$, a Rademacher process is nothing else than $b \to \langle b, \epsilon\rangle$, where $\langle\cdot,\cdot\rangle$ denotes the canonical scalar product. The quantity of interest here is the supremum of such a process: $Z = \sup_{b\in B}\langle b, \epsilon\rangle$. One knows from Hoeffding's inequality (see [13]) that for each given vector $b$ with Euclidean norm at most 1 and all positive real numbers $t$,
$$P\{\langle b, \epsilon\rangle \ge t\} \le \exp(-t^2/2). \tag{10}$$
If $B$ is a subset of the closed unit Euclidean ball $B_n$ of $\mathbb{R}^n$, and if one wants to prove a similar sub-Gaussian inequality for $Z - EZ$ and $EZ - Z$, where $Z = \sup_{b\in B}\langle b, \epsilon\rangle$, this is a typical situation where the preceding theory applies. Consider the function $\zeta$, defined on $[-1,1]^n$ by
$$\zeta : x \to \sup_{b\in B}\langle b, x\rangle.$$
Then $\zeta$ satisfies condition $(C_v)$ with $v = 4$. Indeed, possibly changing $B$ into its closure, one can always assume that $B$ is compact. Hence, there exists some point $b(x)$ belonging to $B$ such that $\zeta(x) = \langle b(x), x\rangle$, and therefore, since $x$ and $y$ belong to $[-1,1]^n$,
$$\zeta(x) - \zeta(y) \le \langle b(x), x\rangle - \langle b(x), y\rangle = \langle b(x), x - y\rangle \le 2\sum_{i=1}^n |b_i(x)|\,\mathbb{1}_{x_i \ne y_i},$$
which clearly means that $\zeta$ satisfies $(C_4)$. Recalling that a Rademacher variable is merely a uniformly distributed random sign, applying Corollary 4 we derive the following concentration result for the supremum of a Rademacher process $Z = \sup_{b\in B}\langle b, \epsilon\rangle$.

Proposition 5 Let $B$ be some subset of the closed unit Euclidean ball of $\mathbb{R}^n$ and let $\epsilon_1, \epsilon_2, \ldots, \epsilon_n$ be independent Rademacher random variables. Setting $Z = \sup_{b\in B}\langle b, \epsilon\rangle$, then for all non-negative real numbers $t$,
$$P\{Z - EZ \le -t\} \vee P\{Z - EZ \ge t\} \le \exp(-t^2/8).$$
Furthermore, the variance of the supremum of a Rademacher process can easily be controlled via the Efron-Stein inequality. Considering independent Rademacher variables $\epsilon_1', \epsilon_2', \ldots, \epsilon_n'$ which are independent from $\epsilon_1, \epsilon_2, \ldots, \epsilon_n$ and setting $Z_i' = \sup_{b\in B}\big(b_i\epsilon_i' + \sum_{j\ne i} b_j\epsilon_j\big)$, the Efron-Stein inequality states that
$$\mathrm{Var}(Z) \le \sum_{i=1}^n E\big[(Z - Z_i')_+^2\big].$$
Now, writing $Z$ as $Z = \langle b(\epsilon), \epsilon\rangle$ and noticing that $Z - Z_i' \le b_i(\epsilon)(\epsilon_i - \epsilon_i')$ leads to
$$E\big[(Z - Z_i')_+^2 \mid \epsilon\big] \le b_i^2(\epsilon)(1 + \epsilon_i^2)$$
and finally to
$$\mathrm{Var}(Z) \le 2 E\Big[\sum_{i=1}^n b_i^2(\epsilon)\Big] \le 2. \tag{11}$$
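Both the dimension-free variance bound $\mathrm{Var}(Z) \le 2$ and its consequence for the chi-square type statistic $\chi$ considered next are easy to test by Monte Carlo. In the sketch below (our own illustration, with an arbitrary random subspace), note that the orthonormal family must be non-trivial: with the standard basis, $\chi = \sqrt{D}$ identically, since $\epsilon_j^2 = 1$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, D, reps = 32, 10, 20000

# chi = sup over the unit ball of a D-dimensional subspace, i.e. the Euclidean
# norm of the projection of eps onto that subspace.
Phi, _ = np.linalg.qr(rng.standard_normal((n, D)))   # orthonormal columns

eps = rng.choice([-1.0, 1.0], size=(reps, n))        # reps Rademacher vectors
chi = np.sqrt(((eps @ Phi) ** 2).sum(axis=1))        # one draw of Z = chi per row

print(chi.var())                  # empirically below the dimension-free bound 2
print(D - 2, chi.mean() ** 2, D)  # (E chi)^2 should land between D - 2 and D
```

The sandwich in the last line follows from $E(\chi^2) = D$ together with $(E\chi)^2 = E(\chi^2) - \mathrm{Var}(\chi) \ge D - 2$.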
The latter inequality is especially interesting to control the expectation of the square root of a chi-square type statistic from below. More precisely, if we consider some orthonormal family of vectors $\{\phi_j, 1 \le j \le D\}$ and if we define the chi-square type statistic
$$\chi^2 = \sum_{1\le j\le D}\langle \epsilon, \phi_j\rangle^2,$$
then $\chi$ can be interpreted as the supremum of a Rademacher process. Indeed, if we simply set $B = \big\{\sum_{1\le j\le D}\theta_j\phi_j \mid \sum_{1\le j\le D}\theta_j^2 \le 1\big\}$, then
$$\chi = \sup_{b\in B}\langle b, \epsilon\rangle.$$
Applying the above results to control the upper and lower tails of $\chi$ is exactly what we shall need in the statistical part of the paper to highlight phase transition phenomena in the behavior of penalized least-squares model selection criteria. More precisely, since we know by (11) that $\mathrm{Var}(\chi) \le 2$, we derive the following sharp inequalities for the expectation of $\chi$ (using $E(\chi^2) = D$ and $(E\chi)^2 = E(\chi^2) - \mathrm{Var}(\chi)$):
$$D - 2 \le (E(\chi))^2 \le D.$$
Combining this with Proposition 5 leads to the following ready-to-use upper and lower tail controls, which hold for all positive $x$:
$$\chi \le \sqrt{D} + 2\sqrt{2x} \tag{12}$$
except on a set with probability less than $e^{-x}$, while
$$\chi \ge \sqrt{(D-2)_+} - 2\sqrt{2x} \tag{13}$$
except on a set with probability less than $e^{-x}$.

3 Model selection for regression with Rademacher errors
Our aim is to show how some fairly general ideas (as those developed in [6], [8] or [23] for instance) work in a very simple context where the technical aspects are deliberately reduced. The statistical framework we have chosen is that of regression with Rademacher errors, which can be described as follows. One observes
$$Y = f + \sigma\epsilon, \tag{14}$$
where $f$ is some unknown vector in $\mathbb{R}^n$, $\epsilon$ is a random vector in $\mathbb{R}^n$ with components $\epsilon_1, \epsilon_2, \ldots, \epsilon_n$ which are independent Rademacher random variables, and $\sigma$ is some positive real number (the level of noise, which is assumed to be known at this point). The issue is to estimate $f$, and the model selection approach to do so consists in starting from a (finite or countable) collection of models $\{S_m, m \in \mathcal{M}\}$ that we assume here to be linear subspaces of the Euclidean space $\mathbb{R}^n$. Consider for each model $S_m$ the least-squares estimator, which is merely defined as
$$\hat f_m = \arg\min_{g\in S_m}\|Y - g\|^2; \tag{15}$$
in other words, $\hat f_m$ is the orthogonal projection of $Y$ onto $S_m$. The purpose is to select an estimator from the collection $\{\hat f_m, m \in \mathcal{M}\}$ in a clever way. We need some quality criterion here. Since we are dealing with least squares, a natural one is the quadratic expected risk. Since everything is explicit, it is easy to compute it in this case. By Pythagoras' identity we can indeed decompose the quadratic loss of $\hat f_m$ as follows:
$$\|\hat f_m - f\|^2 = \|f_m - f\|^2 + \|\hat f_m - f_m\|^2,$$
where $f_m$ denotes the orthogonal projection of $f$ onto $S_m$. The connection with the probabilistic part of the paper comes from the analysis of the random part of the decomposition. Denoting by $\Pi_m$ the orthogonal projection operator onto $S_m$, the random term can be written as
$$\|\hat f_m - f_m\|^2 = \|\Pi_m(Y - f)\|^2 = \sigma^2\|\Pi_m(\epsilon)\|^2. \tag{16}$$
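In code, the least-squares estimator (15) is just the orthogonal projection of $Y$ onto the model, which for an orthonormal basis matrix is a single matrix product. A minimal sketch, with an arbitrary random model and illustrative choices of $n$, $D_m$ and $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(6)
n, Dm, sigma = 64, 5, 1.0

f = np.sin(2 * np.pi * np.arange(1, n + 1) / n)      # some unknown signal
Y = f + sigma * rng.choice([-1.0, 1.0], n)           # Rademacher regression model

# Model S_m: span of D_m orthonormal columns (an arbitrary subspace here).
Phi, _ = np.linalg.qr(rng.standard_normal((n, Dm)))

# Least-squares estimator = orthogonal projection of Y onto S_m.
f_hat = Phi @ (Phi.T @ Y)

# Sanity check: f_hat beats random competitors g in S_m for ||Y - g||^2.
best = np.linalg.norm(Y - f_hat) ** 2
for _ in range(300):
    g = Phi @ rng.standard_normal(Dm)
    assert np.linalg.norm(Y - g) ** 2 >= best - 1e-9
```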
Taking some orthonormal basis $\{\phi_j^{(m)}, 1 \le j \le D_m\}$ of $S_m$, the quantity $\|\Pi_m(\epsilon)\|^2$ appears to be some chi-square type statistic
$$\chi_m^2 = \|\Pi_m(\epsilon)\|^2 = \sum_{1\le j\le D_m}\langle \epsilon, \phi_j^{(m)}\rangle^2,$$
and therefore the expected quadratic risk of $\hat f_m$ can be computed as
$$E_f\|\hat f_m - f\|^2 = \|f_m - f\|^2 + \sigma^2 D_m.$$
This formula for the quadratic risk perfectly reflects the model choice paradigm: if one wants to choose a model in such a way that the risk of the resulting least-squares estimator remains under control, we have to warrant that the bias term $\|f_m - f\|^2$ and the variance term $\sigma^2 D_m$ remain simultaneously under control. This corresponds intuitively to what one should expect from a "good" model: it should fit the data but should not be too complex, in order to avoid overfitting. We therefore keep the quadratic risk as a quality criterion, which means that, mathematically speaking, an "ideal" model should minimize $E_f\|\hat f_m - f\|^2$ with respect to $m \in \mathcal{M}$. It is called an "oracle". Of course, since we do not know the bias, the quadratic risk cannot be used as a statistical model choice criterion, but just as a benchmark. The issue is now to consider data-driven criteria to select an estimator which tends to mimic an oracle, i.e. one would like the risk of the selected estimator $\hat f_{\hat m}$ to be as close as possible to the oracle benchmark $\inf_{m\in\mathcal{M}} E_f\|\hat f_m - f\|^2$.

3.1 Model selection via penalization and Mallows' heuristics

Let us describe the method. The penalized least-squares procedure consists in considering some proper
penalty function pen: $\mathcal{M} \to \mathbb{R}_+$ and taking $\hat m$ minimizing
$$\|Y - \hat f_m\|^2 + \mathrm{pen}(m)$$
over $\mathcal{M}$. Since by Pythagoras' identity $\|Y - \hat f_m\|^2 = \|Y\|^2 - \|\hat f_m\|^2$, we can equivalently consider $\hat m$ minimizing $-\|\hat f_m\|^2 + \mathrm{pen}(m)$ over $\mathcal{M}$. Then, we can define the selected model $S_{\hat m}$ and the corresponding selected least-squares estimator $\hat f_{\hat m}$. Penalized criteria were proposed in the early seventies by Akaike and Schwarz (see [1] and [29]) for penalized maximum log-likelihood in the density estimation framework, and by Mallows for penalized least-squares regression (see [11] and [19]). The crucial issue is: how to penalize? The classical answer given by Mallows' $C_p$ is based on some heuristics and on the unbiased risk estimation principle. It can be described as follows. An "ideal" model should minimize the quadratic risk
$$\|f_m - f\|^2 + \sigma^2 D_m = \|f\|^2 - \|f_m\|^2 + \sigma^2 D_m,$$
or equivalently $-\|f_m\|^2 + \sigma^2 D_m$. At this step, it is tempting to use $\|\hat f_m\|^2$ as an estimator of $\|f_m\|^2$. But this estimator turns out to be biased. Indeed, starting from the decomposition
$$\|\hat f_m\|^2 - \|f_m\|^2 = \|\hat f_m - f_m\|^2 + 2\langle f_m, \hat f_m - f_m\rangle = \sigma^2\|\Pi_m(\epsilon)\|^2 + 2\sigma\langle f_m, \Pi_m(\epsilon)\rangle$$
and noticing that by orthogonality $\langle f_m, \Pi_m(\epsilon)\rangle = \langle f_m, \epsilon\rangle$ leads to the following meaningful formula:
$$\|\hat f_m\|^2 = \|f_m\|^2 + \sigma^2\chi_m^2 + 2\sigma\langle f_m, \epsilon\rangle. \tag{17}$$
From this formula we see that the expectation of $\|\hat f_m\|^2$ is equal to $\|f_m\|^2 + \sigma^2 D_m$. We can now remove this bias. Substituting for $\|f_m\|^2$ its natural unbiased estimator $\|\hat f_m\|^2 - \sigma^2 D_m$ leads to Mallows' $C_p$:
$$-\|\hat f_m\|^2 + 2\sigma^2 D_m.$$
The weakness of this analysis is that it relies on the computation of the expectation of $\|\hat f_m\|^2$ for every given model, but nothing warrants that $\|\hat f_m\|^2$ will stay of the same order of magnitude as its expectation for all models simultaneously. This leads to consider some more general model selection criteria involving penalties which may differ from Mallows' penalty.

3.2 An oracle type inequality

The above heuristics can be justified (or corrected) if one can specify how close $\|\hat f_m\|^2$ is to its expectation $\|f_m\|^2 + \sigma^2 D_m$, uniformly with respect to $m \in \mathcal{M}$.
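Returning to the unbiasedness computation behind Mallows' $C_p$ above — $E\|\hat f_m\|^2 = \|f_m\|^2 + \sigma^2 D_m$, so $\|\hat f_m\|^2 - \sigma^2 D_m$ is unbiased for $\|f_m\|^2$ — this is easy to confirm by Monte Carlo. The setup below is our own illustrative choice of signal and model:

```python
import numpy as np

rng = np.random.default_rng(7)
n, Dm, sigma, reps = 32, 6, 1.0, 20000

f = 0.5 * np.ones(n)                                   # some fixed signal
Phi, _ = np.linalg.qr(rng.standard_normal((n, Dm)))    # orthonormal basis of S_m
f_m = Phi @ (Phi.T @ f)                                # projection of f onto S_m

eps = rng.choice([-1.0, 1.0], size=(reps, n))          # reps independent samples
Y = f + sigma * eps
coeffs = Y @ Phi                                       # <Y, phi_j^(m)>, row-wise
norm2_hat = (coeffs ** 2).sum(axis=1)                  # ||f_hat_m||^2 per sample

# E ||f_hat_m||^2 = ||f_m||^2 + sigma^2 D_m, so the debiased average
# ||f_hat_m||^2 - sigma^2 D_m should come out close to ||f_m||^2:
debiased = norm2_hat.mean() - sigma**2 * Dm
print(debiased, np.linalg.norm(f_m) ** 2)
```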
The upper-tail probability bound provided by Proposition 5 will precisely be the adequate tool to do that. The price to pay is to consider more flexible penalty functions that can take into account the complexity of the list of models. As a consequence, the performance of the selected least-squares estimator is judged by an oracle inequality that differs slightly from what might have been expected. The following result is the exact analogue of the model selection theorem established in [8] in the Gaussian regression framework.

Theorem 6 Let $\{x_m\}_{m\in\mathcal{M}}$ be some family of positive numbers such that
$$\sum_{m\in\mathcal{M}}\exp(-x_m) = \Sigma < \infty. \tag{18}$$
Let $K > 1$ and assume that
$$\mathrm{pen}(m) \ge K\sigma^2\big(\sqrt{D_m} + 2\sqrt{2x_m}\big)^2. \tag{19}$$
Let $\hat m$ minimize the penalized least-squares criterion
$$\mathrm{crit}(m) = -\|\hat f_m\|^2 + \mathrm{pen}(m) \tag{20}$$
over $m \in \mathcal{M}$. The corresponding penalized least-squares estimator $\hat f_{\hat m}$ satisfies the following risk bound:
$$E_f\|\hat f_{\hat m} - f\|^2 \le C(K)\Big[\inf_{m\in\mathcal{M}}\big(\|f_m - f\|^2 + \mathrm{pen}(m)\big) + (1 + \Sigma)\sigma^2\Big], \tag{21}$$
where $C(K)$ depends only on $K$.

The proof of this result is based on two claims. The first one provides a risk bound which derives from the very definition of the selection procedure via some elementary calculus, while the second one is a consequence of the probabilistic material brought by the first part of the paper. Let us first
introduce some notation. For all $m, m' \in \mathcal{M}$ we define
$$\chi_{m,m'} = \sup_{g\in S_{m'}} \frac{\langle g - f_m, \epsilon\rangle}{\|f_m - f\| + \|g - f\|}. \tag{22}$$
The role of this supremum of a Rademacher process in the proof of Theorem 6 is elucidated by the following statement.

Claim 7 If $\hat m$ minimizes the penalized least-squares criterion (20), then for every $m \in \mathcal{M}$ and all $\eta \in\ ]0,1[$,
$$\eta\|\hat f_{\hat m} - f\|^2 \le \eta^{-1}\|f_m - f\|^2 + \mathrm{pen}(m) + \frac{1+\eta}{1-\eta}\,\sigma^2\chi^2_{m,\hat m} - \mathrm{pen}(\hat m).$$

Proof. Pythagoras' identity combined with (17) leads to
$$\|f\|^2 + \mathrm{crit}(m) = \|f - f_m\|^2 - \sigma^2\chi_m^2 - 2\sigma\langle f_m, \epsilon\rangle + \mathrm{pen}(m). \tag{23}$$
Let $m$ be some given element of $\mathcal{M}$. By (23), $\mathrm{crit}(\hat m) \le \mathrm{crit}(m)$ means that
$$\|f - f_{\hat m}\|^2 - \sigma^2\chi^2_{\hat m} \le \|f - f_m\|^2 - \sigma^2\chi_m^2 + \mathrm{pen}(m) + 2\sigma\langle f_{\hat m} - f_m, \epsilon\rangle - \mathrm{pen}(\hat m).$$
We can drop the non-positive term $-\sigma^2\chi_m^2$ and add $2\sigma^2\chi^2_{\hat m}$ on both sides of the preceding inequality, which leads to
$$\|f - f_{\hat m}\|^2 + \sigma^2\chi^2_{\hat m} \le \|f - f_m\|^2 + \mathrm{pen}(m) + 2\sigma\langle f_{\hat m} - f_m, \epsilon\rangle + 2\sigma^2\chi^2_{\hat m} - \mathrm{pen}(\hat m).$$
Noticing that $\sigma\chi^2_{\hat m} = \langle \hat f_{\hat m} - f_{\hat m}, \epsilon\rangle$, and therefore $\langle f_{\hat m} - f_m, \epsilon\rangle + \sigma\chi^2_{\hat m} = \langle \hat f_{\hat m} - f_m, \epsilon\rangle$, we finally derive the inequality that we shall rely upon to prove Claim 7:
$$\|f - f_{\hat m}\|^2 + \sigma^2\chi^2_{\hat m} \le \|f - f_m\|^2 + \mathrm{pen}(m) + 2\sigma\langle \hat f_{\hat m} - f_m, \epsilon\rangle - \mathrm{pen}(\hat m),$$
which yields
$$\|\hat f_{\hat m} - f\|^2 \le \|f - f_m\|^2 + \mathrm{pen}(m) + 2\sigma\langle \hat f_{\hat m} - f_m, \epsilon\rangle - \mathrm{pen}(\hat m). \tag{24}$$
To finish the proof, notice first that
$$2\sigma\langle \hat f_{\hat m} - f_m, \epsilon\rangle \le 2\sigma\big(\|f_m - f\| + \|\hat f_{\hat m} - f\|\big)\,\chi_{m,\hat m}.$$
Now we define $\delta = (1-\eta)/(1+\eta)$ and use repeatedly the inequality $2ab \le a^2 + b^2$ to derive that, on the one hand,
$$2\sigma\langle \hat f_{\hat m} - f_m, \epsilon\rangle \le \delta^{-1}\sigma^2\chi^2_{m,\hat m} + \delta\big(\|f_m - f\| + \|\hat f_{\hat m} - f\|\big)^2$$
and, on the other hand,
$$\big(\|f_m - f\| + \|\hat f_{\hat m} - f\|\big)^2 \le (1 + \eta^{-1})\|f_m - f\|^2 + (1 + \eta)\|\hat f_{\hat m} - f\|^2.$$
Combining these two inequalities and plugging the resulting upper bound on $2\sigma\langle \hat f_{\hat m} - f_m, \epsilon\rangle$ into (24) finally leads to the claim.

Let us now state the second claim, which will provide some control on the quantity $\chi_{m,m'}$ defined by (22).

Claim 8 For every $m, m' \in \mathcal{M}$, the following probability bound holds true. For all non-negative real numbers $x$,
$$\chi_{m,m'} \le 1 + \sqrt{D_{m'}} + 2\sqrt{2x}$$
except on a set with probability less than $e^{-x}$.

Proof. Since $\|g - f_m\| \le \|f - f_m\| + \|g - f\|$, we can apply Proposition 5 and assert that $\chi_{m,m'} \le E(\chi_{m,m'}) + 2\sqrt{2x}$ except on a set with probability less than $e^{-x}$.
It remains to bound $\mathbb{E}(\chi_{m,m'})$. To do that we split the supremum defining $\chi_{m,m'}$ into two terms. Namely, we set
$$\chi^{(1)}_{m,m'} = \sup_{g\in S_{m'}} \frac{\langle g-f_{m'},\epsilon\rangle_+}{\|f_m-f\|+\|g-f\|} \quad\text{and}\quad \chi^{(2)}_{m,m'} = \sup_{g\in S_{m'}} \frac{\langle f_{m'}-f_m,\epsilon\rangle_+}{\|f_m-f\|+\|g-f\|},$$
noticing that $\chi_{m,m'} \le \chi^{(1)}_{m,m'} + \chi^{(2)}_{m,m'}$. To control the first term, we note that since the orthogonal projection is a contraction, $\|g-f\| \ge \|g-f_{m'}\|$ for all $g\in S_{m'}$, and therefore by linearity
$$\chi^{(1)}_{m,m'} \le \sup_{g\in S_{m'}} \frac{\langle g-f_{m'},\epsilon\rangle_+}{\|g-f_{m'}\|} = \sup_{g\in S_{m'}} \frac{\langle g,\epsilon\rangle}{\|g\|} = \chi_{m'}.$$
Of course, this bound implies that $\mathbb{E}\big(\chi^{(1)}_{m,m'}\big) \le \sqrt{D_{m'}}$. To control the second term, we note, by the definition of $f_{m'}$ and the triangle inequality, that for all $g\in S_{m'}$
$$\|f_m-f\|+\|g-f\| \ge \|f_m-f\|+\|f_{m'}-f\| \ge \|f_{m'}-f_m\|,$$
and therefore
$$\chi^{(2)}_{m,m'} \le \frac{\langle f_{m'}-f_m,\epsilon\rangle_+}{\|f_{m'}-f_m\|}.$$
Invoking Cauchy-Schwarz, and using the fact that $a_{m,m'} = (f_{m'}-f_m)/\|f_{m'}-f_m\|$ has norm 1, leads to
$$\mathbb{E}\big(\chi^{(2)}_{m,m'}\big) \le \sqrt{\mathbb{E}\langle a_{m,m'},\epsilon\rangle^2} = 1.$$
Collecting the upper bounds on $\mathbb{E}\big(\chi^{(1)}_{m,m'}\big)$ and $\mathbb{E}\big(\chi^{(2)}_{m,m'}\big)$, we get $\mathbb{E}(\chi_{m,m'}) \le 1+\sqrt{D_{m'}}$, and the proof is complete.

Once these two claims are available, the proof of Theorem 6 is quite straightforward.

Proof of Theorem 6. To prove the required bound
on the expected risk, we first prove an exponential probability bound and then integrate it. Towards this aim we introduce some positive real number $\xi$ (this is the variable that we shall use at the end of the proof to integrate the tail bound that we shall obtain) and we fix some model $m\in\mathcal{M}$. Using a union bound, Claim 8 ensures that for all $m'\in\mathcal{M}$ simultaneously
$$\chi_{m,m'} \le 1+\sqrt{D_{m'}}+2\sqrt{2(x_{m'}+\xi)}$$
except on a set with probability less than $\Sigma\exp(-\xi)$. Using $\sqrt{a+b}\le\sqrt a+\sqrt b$ and using again $2ab\le a^2+b^2$, if we define
$$p_{m'} = \sigma^2\big(\sqrt{D_{m'}}+2\sqrt{2x_{m'}}\big)^2,$$
the latter inequality implies that, except on a set with probability less than $\Sigma\exp(-\xi)$,
$$\sigma^2\chi^2_{m,\hat m} \le (1+\eta)p_{\hat m} + (1+\eta^{-1})\sigma^2\big(1+2\sqrt{2\xi}\big)^2. \tag{25}$$
Let us notice that the quantity $p_{m'}$ which appears here is precisely the one involved in the statement of Theorem 6 to bound the penalty from below. More precisely, this constraint can be written as $\mathrm{pen}(m')\ge K p_{m'}$ for all $m'\in\mathcal{M}$. Let us now choose $\eta$ in such a way that $K=(1+\eta)^2/(1-\eta)$; then the assumption on the penalty ensures that
$$\frac{1+\eta}{1-\eta}(1+\eta)p_{\hat m} - \mathrm{pen}(\hat m) \le 0.$$
Taking this constraint into account and combining (25) with Claim 7 leads to
$$\eta\|\hat f_{\hat m}-f\|^2 \le \eta^{-1}\|f_m-f\|^2 + \mathrm{pen}(m) + \frac{(1+\eta)^2}{\eta(1-\eta)}\sigma^2\big(1+2\sqrt{2\xi}\big)^2$$
except on a set with probability less than $\Sigma\exp(-\xi)$. Using one last time $2ab\le a^2+b^2$, we upper bound $\big(1+2\sqrt{2\xi}\big)^2$ by $2+16\xi$, and it remains to integrate the resulting tail bound with respect to $\xi$ in order to get the desired upper bound on the expected risk.

It is interesting to exhibit some simple condition under which the above result can be applied with a penalty of the form $\mathrm{pen}(m)=K'\sigma^2 D_m$, since in this case the risk bound provided by the theorem has the expected shape: up to some constant, the performance of the selected least-squares estimator is comparable to the infimum of the quadratic risks $\mathbb{E}_f\|\hat f_m-f\|^2$ when $m$ varies in $\mathcal{M}$. This is connected to the possibility of choosing weights $x_m$ of the form $x_m = \alpha D_m$.
The simplest scheme under which this can be done is the situation where the models are nested. In other words, one starts from a family of linearly independent vectors $\phi_1,\phi_2,\dots,\phi_N$, and each model $S_D$, with $1\le D\le N$, is defined as the linear span of $\phi_1,\phi_2,\dots,\phi_D$. Indeed, in this case, since there is exactly one model per dimension, the choice $x_D=\alpha D$ leads to
$$\sum_{1\le D\le N} e^{-x_D} \le \sum_{D\ge 1} e^{-\alpha D} = \frac{1}{e^{\alpha}-1}.$$
Choosing a sufficiently small value for $\alpha$, we finally derive from Theorem 6 that if the penalty is chosen as $\mathrm{pen}(D)=\kappa\sigma^2 D$ with $\kappa>1$, then, for some constant $C'(\kappa)$ depending only on $\kappa$,
$$\mathbb{E}_f\|\hat f_{\hat D}-f\|^2 \le C'(\kappa)\inf_{1\le D\le N}\mathbb{E}_f\|\hat f_D-f\|^2.$$
The same result holds true if the number of models per dimension increases polynomially with respect to the dimension. The purpose of the following section is to show that in these situations, if one takes
a penalty of the form $\mathrm{pen}(m)=\kappa\sigma^2 D_m$, the value $\kappa=1$ is indeed critical, in the sense that below this value the selection method becomes inconsistent. To enlighten this cut-off phenomenon, the lower-tail probability bounds established in the section devoted to concentration will play a crucial role.

3.3 Cut-off for the penalty: lower tails in action

To exhibit this cut-off phenomenon for the penalty, we shall restrict ourselves to the situation where all the models are included in a model $S_{m_N}$ with dimension $N$. We allow the list of models to depend on $N$ (we shall therefore write $\mathcal{M}_N$ instead of $\mathcal{M}$) and we shall let $N$ go to infinity, assuming that the number of models is sub-exponential with respect to $N$, which more precisely means that
$$N^{-1}\log \#\mathcal{M}_N \to 0 \quad \text{as } N\to\infty. \tag{26}$$
Note that in the nested case this assumption is satisfied, and that it still holds true when the number of models with dimension $D$ is less than $CD^k$, since in this case $\#\mathcal{M}_N \le C N^{k+1}$. We are now ready to state the announced negative result. This result has the same flavor as the one established in [9] in the Gaussian framework, but interestingly it is based solely on concentration arguments, without any extra properties (in [9] the Gaussian framework is crucially involved, since some specific lower-tail bounds for non-central chi-square distributions are used in the proof).

Theorem 9 Let $\{S_m, m\in\mathcal{M}_N\}$ be a collection of linear subspaces of $\mathbb{R}^n$ such that all the models $S_m$ are included in some model $S_{m_N}$ with dimension $N$. Assume furthermore that condition (26) on the cardinality of $\mathcal{M}_N$ is satisfied. Take a penalty function of the form $\mathrm{pen}(m)=\kappa\sigma^2 D_m$ and consider $\hat m$ minimizing the penalized least-squares criterion (20). Assume that $\kappa<1$. Then, for any $\delta\in(0,1)$, there exists $N_0$, depending on $\delta$ and $\kappa$ but not on $f$ or $\sigma$, such that, whatever $f$, for all $N\ge N_0$
$$\mathbb{P}_f\{D_{\hat m}\ge N/2\} \ge 1-\delta \tag{27}$$
and the following lower bound on the expected risk holds true:
$$\mathbb{E}_f\|\hat f_{\hat m}-f\|^2 \ge \|f_{m_N}-f\|^2 + \sigma^2 N/4. \tag{28}$$
Proof.
Let $m\in\mathcal{M}_N$ and first notice that, since $S_m\subseteq S_{m_N}$, by orthogonality,
$$-\|f_{m_N}\|^2 + \|f_m\|^2 = -\|f_{m_N}-f_m\|^2.$$
Let us now use formula (17) to assert that
$$-\|\hat f_{m_N}\|^2 + \|\hat f_m\|^2 = -\sigma^2(\chi^2_{m_N}-\chi^2_m) + 2\sigma\langle f_m-f_{m_N},\epsilon\rangle - \|f_{m_N}-f_m\|^2.$$
Let us set $g_m = (f_m-f_{m_N})/\|f_m-f_{m_N}\|$ if $f_m\ne f_{m_N}$, and $g_m=0$ otherwise. Using again $2ab\le a^2+b^2$, the preceding identity leads to
$$-\|\hat f_{m_N}\|^2 + \|\hat f_m\|^2 \le -\sigma^2(\chi^2_{m_N}-\chi^2_m) + \sigma^2\langle g_m,\epsilon\rangle_+^2.$$
Using the definition of the penalized least-squares criterion, we finally derive the inequality that we shall start from to make the probabilistic analysis of the behavior of this criterion:
$$\sigma^{-2}\big(\mathrm{crit}(m_N)-\mathrm{crit}(m)\big) \le -(\chi^2_{m_N}-\chi^2_m) + \langle g_m,\epsilon\rangle_+^2 + \kappa(N-D_m). \tag{29}$$
The point now is that for models such that $D_m\le N/2$, the negative term $-(1-\kappa)(N-D_m)$ stays below $-(1-\kappa)N/2$. We argue that this is enough to ensure that, with high probability, for all such models simultaneously $\mathrm{crit}(m_N)-\mathrm{crit}(m)<0$, and therefore $D_{\hat m}$ has to be larger than $N/2$. To complete this road map, we make use of the lower-tail probability bounds established in the probabilistic section of the paper. Indeed, since $S_m\subseteq S_{m_N}$, the quantity $\chi^2_{m_N}-\chi^2_m$ is a pseudo chi-square statistic with dimension $N-D_m\ge N/2$, to which we can apply the lower-tail inequality (13). If
we do so, and if we use a union bound, we derive that for all models such that $D_m\le N/2$
$$\sqrt{\chi^2_{m_N}-\chi^2_m} \ge \sqrt{(N-D_m-2)_+} - 2\sqrt{2x},$$
while simultaneously, by Hoeffding's inequality (10),
$$\langle g_m,\epsilon\rangle_+ \le \sqrt{2x},$$
except on a set with probability less than $2\#\mathcal{M}_N\exp(-x)$. We choose $x=\log(2/\delta)+\log\#\mathcal{M}_N$ in order to warrant that the above inequalities simultaneously hold true except on a set with probability less than $\delta$. It is now time to use asymptotic arguments. If we take into account assumption (26), we know that our choice of $x$ is small compared to $N$, and therefore it is also small compared to $N-D_m$, uniformly over the set of models such that $D_m\le N/2$, when $N$ is large. Using this argument, we derive from the above tail probability bounds that, given $\eta>0$, if $N$ is large enough, the following inequalities hold for all models such that $D_m\le N/2$:
$$\sqrt{\chi^2_{m_N}-\chi^2_m} \ge \sqrt{(1-\eta)(N-D_m)} \quad\text{and}\quad \langle g_m,\epsilon\rangle_+ \le \sqrt{\eta(N-D_m)},$$
except on a set with probability less than $\delta$. If $N$ is large enough, plugging these inequalities into (29) and choosing $\eta=(1-\kappa)/4$ leads to
$$\sigma^{-2}\big(\mathrm{crit}(m_N)-\mathrm{crit}(m)\big) \le -(1-2\eta-\kappa)(N-D_m) \le -\frac{1-\kappa}{2}(N-D_m)$$
for all models such that $D_m\le N/2$, except on a set with probability less than $\delta$. The proof of (27) is now complete.

Proving (28) is quite easy. We first observe that, since $S_{m_N}$ includes all the other models,
$$\|\hat f_{\hat m}-f\|^2 \ge \|f_{m_N}-f\|^2 + \|\hat f_{\hat m}-f_{\hat m}\|^2 = \|f_{m_N}-f\|^2 + \sigma^2\chi^2_{\hat m},$$
so that it remains to bound $\chi^2_{\hat m}$ in expectation from below. To do that, we argue exactly as above to assert that, if $N$ is large enough, for all models such that $D_m\ge N/2$ simultaneously
$$\chi^2_m \ge (2/3)D_m \ge N/3,$$
except on a set with probability less than $\delta$. Combining this with (27) and using again a union bound argument, we know that if $N$ is large enough, $\chi^2_{\hat m}\ge N/3$ except on a set with probability less than $2\delta$. It remains to use Markov's inequality and choose $\delta=1/8$ to ensure that
$$\mathbb{E}\big(\chi^2_{\hat m}\big) \ge (1-2\delta)N/3 = N/4,$$
completing the proof of (28).

Comment. Let us come back to the nested case, for which one starts from a set of linearly independent vectors $\phi_1,\phi_2,\dots$
$,\phi_N$ and a model is merely the linear span $S_D$ of $\{\phi_j, 1\le j\le D\}$, where $D$ varies between 1 and $N$. In this case the situation is clear. If one considers the penalized least-squares model selection criterion
$$\mathrm{crit}(D) = -\|\hat f_D\|^2 + \kappa\sigma^2 D,$$
the two preceding theorems tell us that $\kappa=1$ is a critical value, in the sense that if $\kappa$ is above this value, the selected least-squares estimator is comparable (up to some constant depending on $\kappa$) to the best estimator in the collection, while below this value the criterion will tend to select the largest models, whatever the target $f$ to be estimated. This cut-off is so visible (on simulations and on real data) that it can be used to estimate $\sigma^2$. Of course, the notion of a "large" model only makes sense if $N$ is large (and thus so is $n$). In the next and final section of the article, we shall see that this framework becomes very natural in the context of non-parametric estimation.

3.4 Adaptive functional estimation

In this section, the goal
is to estimate the function $f$ on the interval $[0,1]$ in the model
$$Y_k = f(t_k) + \sigma\epsilon_k, \quad k=1,\dots,n, \tag{30}$$
where $t_k=k/n$. We assume in the sequel that the $\epsilon_k$'s are independent Rademacher random variables, but our results remain valid if they are only centered i.i.d. bounded random variables; the noise level $\sigma>0$ is assumed to be known. We shall also assume that $f$ is square integrable and $f(0)=f(1)$, so that we can expand $f$ on the Fourier basis $(\phi_j)_{j\ge1}$ with $\phi_1\equiv1$ and, for any $j\ge1$ and any $t\in[0,1]$, $\phi_{2j}(t)=\sqrt2\cos(2\pi jt)$ and $\phi_{2j+1}(t)=\sqrt2\sin(2\pi jt)$. Denoting by $\theta=(\theta_j)_{j\ge1}$ the sequence of Fourier coefficients of $f$, we obtain
$$f=\sum_{j=1}^{+\infty}\theta_j\phi_j.$$
We derive oracle inequalities in the same spirit as Theorem 6, except that we consider both the empirical norm associated with the design $t_k$'s and the functional $L^2$-norm. We then introduce the following notation: for any function $g$, we set
$$\|g\|_n^2 = \frac1n\sum_{k=1}^n g^2(t_k), \qquad \|g\|_{L^2}^2 = \int_0^1 g^2(t)\,dt.$$
The associated scalar products are denoted $\langle\cdot,\cdot\rangle_n$ and $\langle\cdot,\cdot\rangle_{L^2}$. We recall that the Fourier basis satisfies, for any $1\le j,j'\le n-1$,
$$\langle\phi_j,\phi_{j'}\rangle_n = \frac1n\sum_{k=1}^n \phi_j(t_k)\phi_{j'}(t_k) = \mathbb{1}_{\{j=j'\}}, \tag{31}$$
which makes its use suitable for our study. We consider a collection of models $\{S_m, m\in\mathcal{M}\}$, with here $S_m = \mathrm{span}\{\phi_j, j\in m\}$, where $\mathcal{M}$ is a set of subsets of $\{1,2,\dots,n-1\}$. Similarly to (20), we consider for any $m\in\mathcal{M}$ the criterion
$$\mathrm{crit}(m) = -\|\hat f_m\|_n^2 + \mathrm{pen}(m), \tag{32}$$
with
$$\hat f_m = \sum_{j\in m}\Big(\frac1n\sum_{k=1}^n Y_k\phi_j(t_k)\Big)\phi_j.$$
Observe that if $Y$ is any (random) 1-periodic $L^2$-function such that $Y(t_k)=Y_k$, then $\hat f_m$ is the projection of the function $Y$ onto $S_m$ for the empirical norm $\|\cdot\|_n$:
$$\hat f_m = \sum_{j\in m}\langle Y,\phi_j\rangle_n\,\phi_j.$$
Therefore,
$$\|Y\|_n^2 - \|\hat f_m\|_n^2 = \|Y-\hat f_m\|_n^2 = \frac1n\sum_{k=1}^n\big(Y_k-\hat f_m(t_k)\big)^2,$$
which justifies the use of (32). Observe also that $f_m$, the mean of $\hat f_m$, satisfies
$$f_m = \mathbb{E}_f(\hat f_m) = \sum_{j\in m}\langle f,\phi_j\rangle_n\,\phi_j,$$
and $f_m$ is the orthogonal projection of $f$ on $S_m$ for the empirical norm. The following result is the analogue of Theorem 6 in the functional framework. We denote $S_{n-1} = \mathrm{span}(\phi_1,\dots,\phi_{n-1})$.
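The discrete orthonormality property (31) can be checked numerically. The sketch below (an illustration, not the authors' code) evaluates the Fourier basis on the design $t_k=k/n$ and verifies that the empirical Gram matrix is the identity:

```python
import math

def phi(j, t):
    # Fourier basis: phi_1 = 1, phi_{2j} = sqrt(2) cos(2 pi j t), phi_{2j+1} = sqrt(2) sin(2 pi j t)
    if j == 1:
        return 1.0
    freq = j // 2
    if j % 2 == 0:
        return math.sqrt(2.0) * math.cos(2.0 * math.pi * freq * t)
    return math.sqrt(2.0) * math.sin(2.0 * math.pi * freq * t)

def inner_n(j, jp, n):
    # empirical scalar product <phi_j, phi_j'>_n over the design t_k = k/n
    return sum(phi(j, k / n) * phi(jp, k / n) for k in range(1, n + 1)) / n

n = 32
ok = all(
    abs(inner_n(j, jp, n) - (1.0 if j == jp else 0.0)) < 1e-9
    for j in range(1, n) for jp in range(1, n)
)
print(ok)  # exact discrete orthonormality for 1 <= j, j' <= n-1
```

This exactness for indices up to $n-1$ is what makes the empirical projection $\hat f_m$ a simple coefficient truncation.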
Theorem 10 Let $\{x_m\}_{m\in\mathcal{M}}$ be some family of positive numbers such that
$$\sum_{m\in\mathcal{M}}\exp(-x_m) = \Sigma < \infty. \tag{33}$$
Let $K>1$ and assume that
$$\mathrm{pen}(m) \ge K\frac{\sigma^2}{n}\big(\sqrt{D_m}+2\sqrt{2x_m}\big)^2. \tag{34}$$
Let $\hat m$ minimize the penalized least-squares criterion defined in (32) over $m\in\mathcal{M}$. The corresponding penalized least-squares estimator $\hat f_{\hat m}$ satisfies the following risk bound:
$$\mathbb{E}_f\|\hat f_{\hat m}-f\|_n^2 \le C(K)\inf_{m\in\mathcal{M}}\Big(\|f_m-f\|_n^2 + \mathrm{pen}(m) + \frac{(1+\Sigma)\sigma^2}{n}\Big), \tag{35}$$
where $C(K)$ depends only on $K$. We also have
$$\mathbb{E}_f\|\hat f_{\hat m}-f\|_{L^2}^2 \le C'(K)\inf_{m\in\mathcal{M}}\Big(\|f_m-f\|_{L^2}^2 + \mathrm{pen}(m) + \inf_{g\in S_{n-1}}\|f-g\|_{L^\infty}^2 + \frac{(1+\Sigma)\sigma^2}{n}\Big), \tag{36}$$
where $C'(K)$ depends only on $K$ and $\|\cdot\|_{L^\infty}$ denotes the sup-norm on $[0,1]$.

Proof. We observe that $\|\hat f_m\|_n^2 = \sum_{j\in m}\langle Y,\phi_j\rangle_n^2$. To prove the first point of Theorem 10, we then follow the same lines as used to prove Theorem 6, with
$$\chi_m^2 = \frac{1}{\sigma^2}\|\hat f_m-f_m\|_n^2.$$
Proposition 5 is applied with
$$\chi_{m,m'} = \sup_{g\in S_{m'}} \frac{\langle g-f_m, Y-f\rangle_n}{\|f_m-f\|_n+\|g-f\|_n}.$$
The last point is a simple consequence of (35) and of the identity $\|g\|_{L^2}=\|g\|_n$ for any function $g\in S_{n-1}$. Indeed, we have,
for any $g\in S_{n-1}$,
$$\|\hat f_{\hat m}-f\|_{L^2} \le \|\hat f_{\hat m}-g\|_{L^2} + \|f-g\|_{L^2} \le \|\hat f_{\hat m}-g\|_n + \|f-g\|_{L^\infty} \le \|\hat f_{\hat m}-f\|_n + 2\|f-g\|_{L^\infty}$$
and
$$\|f_m-f\|_n \le \|f_m-g\|_n + \|f-g\|_n \le \|f_m-g\|_{L^2} + \|f-g\|_{L^\infty} \le \|f_m-f\|_{L^2} + 2\|f-g\|_{L^\infty}.$$
We have used that $\|f-g\|_n \le \|f-g\|_{L^\infty}$ and $\|f-g\|_{L^2} \le \|f-g\|_{L^\infty}$.

To prove the optimality of our procedure, we consider the minimax setting and establish the rates of our estimator on the class of (periodized) Sobolev spaces. We recall the definition of the Sobolev ball for integer smoothness $\alpha$.

Definition 11 Let $\alpha\in\{1,2,\dots\}$ and $R>0$. The Sobolev ball $W(\alpha,R)$ is defined by
$$W(\alpha,R) = \Big\{ g:[0,1]\to\mathbb{R} : g^{(\alpha-1)} \text{ is absolutely continuous and } \int_0^1 \big(g^{(\alpha)}(x)\big)^2 dx \le R^2 \Big\}.$$
In our setting, we consider the periodic Sobolev ball $W_{\mathrm{per}}(\alpha,R)$ defined by
$$W_{\mathrm{per}}(\alpha,R) = \big\{ g\in W(\alpha,R) : g^{(j)}(0)=g^{(j)}(1),\ j=0,1,\dots,\alpha-1 \big\}.$$
In the subsequent Theorem 12, we consider the model selection procedure with $\mathcal{M}$ such that $m\in\mathcal{M}$ if and only if $m$ is of the form $m=\{1,\dots,D\}$ for some $1\le D\le n-1$. In this case, $D_m=D$. Applying Theorem 10 with $x_m = xD_m$ for an arbitrary constant $x$ and
$$\mathrm{pen}(m) = K\frac{\sigma^2}{n}\big(\sqrt{D_m}+2\sqrt{2x_m}\big)^2,$$
for some constant $K>1$, we obtain:

Theorem 12 Let $\alpha\ge1$ and $R>0$. Then we have
$$\sup_{f\in W_{\mathrm{per}}(\alpha,R)} \mathbb{E}\|\hat f_{\hat m}-f\|_{L^2}^2 \le C n^{-\frac{2\alpha}{2\alpha+1}},$$
where $C$ depends on $\sigma$, $\alpha$ and $R$. It can be proved by standard arguments that
$$\liminf_{n\to+\infty} \inf_{T_n} \sup_{f\in W_{\mathrm{per}}(\alpha,R)} \mathbb{E}\Big[ n^{\frac{2\alpha}{2\alpha+1}} \|T_n-f\|_{L^2}^2 \Big] \ge \tilde C,$$
where $\inf_{T_n}$ denotes the infimum over all estimators and where the constant $\tilde C$ depends on $\sigma$, $\alpha$ and $R$. Therefore, the previous theorem shows that $\hat f_{\hat m}$ achieves the optimal minimax rate. It is also adaptive, since it does not depend on the parameters $\alpha$ and $R$, which are unknown in practice.

Proof. The set $W_{\mathrm{per}}(\alpha,R)$ can be characterized in terms of Fourier coefficients by using the following proposition, established in [31]:

Proposition 13 Let $\alpha\in\{1,2,\dots\}$ and $R>0$. Then the function $f$ belongs to $W_{\mathrm{per}}(\alpha,R)$ if and only if the sequence of its Fourier coefficients $\theta=(\theta_j)_{j\ge1}$ belongs to the ellipsoid $\Theta(c,r)$ defined by
$$\Theta(c,r) = \Big\{ \theta\in\ell^2 : \sum_{j=1}^{+\infty} c_j^2\theta_j^2 \le r^2 \Big\},$$
with $r=R/\pi^\alpha$ and
$$c_j = \begin{cases} j^\alpha & \text{if } j \text{ is even},\\ (j-1)^\alpha & \text{if } j \text{ is odd}. \end{cases}$$
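The rate $n^{-2\alpha/(2\alpha+1)}$ of Theorem 12 comes from balancing an approximation term of order $D^{-2\alpha}$ against a variance term of order $D\sigma^2/n$. A quick numerical check of this trade-off (a toy illustration, not the proof) confirms that the optimal dimension is of order $n^{1/(2\alpha+1)}$ and that the resulting risk tracks the minimax rate:

```python
def risk_proxy(D, n, alpha, sigma2=1.0):
    # bias^2 + variance proxy: D^{-2*alpha} + D*sigma^2/n
    return D ** (-2 * alpha) + D * sigma2 / n

def best_dim(n, alpha):
    # minimize the proxy over the admissible dimensions 1..n-1
    return min(range(1, n), key=lambda D: risk_proxy(D, n, alpha))

alpha, n = 2, 100000
D_star = best_dim(n, alpha)
rate = n ** (-2 * alpha / (2 * alpha + 1))
print(D_star)                                   # of order n^{1/(2*alpha+1)} = n^{1/5}
print(risk_proxy(D_star, n, alpha) / rate)      # ratio stays bounded by a constant
```

The same balance, with $\sigma^2/n$ replaced by $\sigma^2$, reproduces the classical Gaussian sequence-model calibration.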
Now we use Inequality (36) of Theorem 10. Let $m\in\mathcal{M}$ be fixed. Proposition 13 allows us to control the bias term:
$$\|f_m-f\|_{L^2}^2 = \sum_{j\in m}\big(\langle f,\phi_j\rangle_n - \theta_j\big)^2 + \sum_{j\notin m}\theta_j^2.$$
For the first term, we have, for any $j\in m$,
$$\langle f,\phi_j\rangle_n - \theta_j = \frac1n\sum_{k=1}^n\sum_{i=1}^{+\infty}\theta_i\phi_i(t_k)\phi_j(t_k) - \theta_j = \sum_{i=1}^{n-1}\theta_i\cdot\frac1n\sum_{k=1}^n\phi_i(t_k)\phi_j(t_k) - \theta_j + \frac1n\sum_{k=1}^n\sum_{i=n}^{+\infty}\theta_i\phi_i(t_k)\phi_j(t_k) = \frac1n\sum_{k=1}^n\sum_{i=n}^{+\infty}\theta_i\phi_i(t_k)\phi_j(t_k).$$
Thus,
$$\max_{j\in m}\big|\langle f,\phi_j\rangle_n - \theta_j\big| \le 2\sum_{i=n}^{+\infty}|\theta_i|$$
and
$$\|f_m-f\|_{L^2}^2 = \sum_{j\in m}\big(\langle f,\phi_j\rangle_n - \theta_j\big)^2 + \sum_{j\notin m}\theta_j^2 \le 4D_m\Big(\sum_{i=n}^{+\infty}|\theta_i|\Big)^2 + \sum_{j=D_m+1}^{+\infty}\theta_j^2 \le 4D_m\times\sum_{i=1}^{+\infty}c_i^2\theta_i^2\times\sum_{i\ge n}c_i^{-2} + D_m^{-2\alpha}\sum_{j=1}^{+\infty}c_j^2\theta_j^2 \le c_{\alpha,R}\big(D_m n^{-2\alpha+1} + D_m^{-2\alpha}\big),$$
with $c_{\alpha,R}$ depending only on $\alpha$ and $R$. We have used $\alpha>1/2$. We also have
$$\inf_{g\in S_{n-1}}\|f-g\|_{L^\infty} \le \Big\|\sum_{i\ge n}\theta_i\phi_i\Big\|_{L^\infty} \le \sqrt2\sum_{i\ge n}|\theta_i| \le \sqrt2\Big(\sum_{i=1}^{+\infty}c_i^2\theta_i^2\times\sum_{i\ge n}c_i^{-2}\Big)^{1/2}.$$
Finally,
$$\inf_{g\in S_{n-1}}\|f-g\|_{L^\infty} \le c'_{\alpha,R}\, n^{-\alpha+1/2},$$
with $c'_{\alpha,R}$ depending only on $\alpha$ and $R$. To conclude, we observe that
$$\inf_{m\in\mathcal{M}}\Big(\|f_m-f\|_{L^2}^2 + \mathrm{pen}(m) + \inf_{g\in S_{n-1}}\|f-g\|_{L^\infty}^2 + \frac{(1+\Sigma)\sigma^2}{n}\Big) \le C\inf_{1\le D_m\le n-1}\Big( D_m n^{-2\alpha+1} + D_m^{-2\alpha} + \frac{D_m\sigma^2}{n}\Big) + n^{-2\alpha+1} + \frac{(1+\Sigma)\sigma^2}{n},$$
with $C$ depending on $\alpha$ and $R$. We take $D_m\in\{1,\dots,n-1\}$ of order $(n/\sigma^2)^{1/(2\alpha+1)}$ to conclude. Observe that the assumption $\alpha\ge1$ allows us to state that the
term $D_m n^{-2\alpha+1}$ is smaller than $D_m^{-2\alpha}\vee \frac{D_m\sigma^2}{n}$, up to a constant.

We end this section by deriving the cut-off phenomenon for the penalty in the functional setting. Even if the analogues of the general results of Section 3.3 can be obtained, we only consider the case where the collection of models $\mathcal{M}$ is the following: a model $m\in\mathcal{M}$ if and only if it is of the form $m=\{1,\dots,d\}$ for some $d\in\{1,\dots,n-1\}$. In particular, all models are nested and $\#\mathcal{M}=n-1$. For the sake of simplicity, we further assume that $f\in S_{n-1}$. Mimicking the proof of Theorem 9, we obtain:

Theorem 14 Take a penalty function of the form
$$\mathrm{pen}(m) = \kappa\sigma^2\frac{D_m}{n}$$
and consider $\hat m$ minimizing the penalized least-squares criterion (32). Assume that $\kappa<1$. Then, for any $\delta\in(0,1)$, there exists $N_0$, depending on $\delta$ and $\kappa$ but not on $f$ or $\sigma$, such that, whatever $f\in S_{n-1}$, for all $n\ge N_0$
$$\mathbb{P}_f\{D_{\hat m}\ge n/2\} \ge 1-\delta$$
and the following lower bounds on the expected risks hold true:
$$\mathbb{E}_f\|\hat f_{\hat m}-f\|_n^2 \ge \frac{\sigma^2}{4}, \qquad \mathbb{E}_f\|\hat f_{\hat m}-f\|_{L^2}^2 \ge \frac{\sigma^2}{4}.$$
Some simulations are carried out in Figure 1 to illustrate this last result in the non-asymptotic setting. More precisely, in the framework of Model (30) with $\sigma=1$ and $n=100$, we consider the estimation of the function
$$f(x) = 2 + 0.7\sqrt2\cos(2\pi x) + 0.5\sqrt2\sin(2\pi x),$$
which brings this problem into the setting of Theorem 14. Note in particular that $f$ belongs to $S_m$, with $m=\{1,2,3\}$. The graph on the left-hand side provides the value of $D_{\hat m}$ with respect to $\kappa$, where $\kappa$ is the constant involved in the penalty function of Theorem 14. We observe a jump around the value $\kappa=1$, as predicted by the theory, with in particular very large models being selected when $\kappa<1$. Observe that the true model is selected ($D_{\hat m}=3$) as soon as $\kappa\ge1.3$. On the right-hand side of Figure 1, we display the value of $\|\hat f_m\|^2$ with respect to $D_m$. Once $D_m$ is larger than or equal to 3, this function is approximately linear, and the estimated slope of the linear part of the curve equals $\hat\kappa\times\sigma^2/n$ with $\hat\kappa=0.988$.
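The cut-off of Theorem 14 is easy to reproduce in a few lines. The sketch below is a minimal re-implementation of the Figure 1 experiment under stated assumptions (the random seed, and the exact Fourier indexing, are our choices, not the authors'): it simulates Model (30) with Rademacher noise, computes the empirical Fourier coefficients $\langle Y,\phi_j\rangle_n$, and selects $d$ by minimizing $\mathrm{crit}(d)=-\|\hat f_{\{1,\dots,d\}}\|_n^2+\kappa\sigma^2 d/n$:

```python
import math
import random

random.seed(1)
n, sigma = 100, 1.0

def phi(j, t):
    # Fourier basis of Section 3.4 (assumed indexing: phi_1 = 1, then cos/sin pairs)
    if j == 1:
        return 1.0
    freq = j // 2
    if j % 2 == 0:
        return math.sqrt(2.0) * math.cos(2.0 * math.pi * freq * t)
    return math.sqrt(2.0) * math.sin(2.0 * math.pi * freq * t)

def f(x):
    # the target of the Figure 1 experiment: 2 + 0.7*phi_2 + 0.5*phi_3
    return 2.0 + 0.7 * phi(2, x) + 0.5 * phi(3, x)

# Model (30) with Rademacher noise on the design t_k = k/n
y = [f(k / n) + sigma * random.choice([-1.0, 1.0]) for k in range(1, n + 1)]

# empirical Fourier coefficients <Y, phi_j>_n, j = 1..n-1
coef = [sum(y[k - 1] * phi(j, k / n) for k in range(1, n + 1)) / n for j in range(1, n)]

def selected_dim(kappa):
    # minimize crit(d) = -||f_hat_{1..d}||_n^2 + kappa * sigma^2 * d / n
    best_d, best_crit, sq = 1, float("inf"), 0.0
    for d in range(1, n):
        sq += coef[d - 1] ** 2
        crit = -sq + kappa * sigma**2 * d / n
        if crit < best_crit:
            best_d, best_crit = d, crit
    return best_d

print(selected_dim(0.5), selected_dim(2.0))  # large model below the cutoff, small above
```

The jump of $D_{\hat m}$ as $\kappa$ crosses 1 is visible for essentially any seed, which is exactly what makes the cut-off usable as a device to calibrate $\sigma^2$.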
24 102030 0 1 2 3 4 5 κDm^κDm^κ versus κ 5.05.5 0 10 20 30 Dm||f^ m||2||f^ m||2 versus DmFigure 1: Estimation of the function f(x) = 2+0 .7√ 2 cos(2 πx)+0.5√ 2 sin(2 πx) in Model (30) with σ= 1 and n= 100 in the setting of Theorem 14. Left hand side: graph of κ7−→Dˆm. Right hand side: graph of Dm7−→ ∥ ˆfm∥2. Acknowledgements The authors are very grateful to Suzanne Varet, who carried out the numerical study of Section 3.4 and produced the graphs of Figure 1. References [1]Akaike , H. Information theory and an extension of the maximum likeli- hood principle. In P.N. Petrov and F. Csaki, editors, Proceedings 2nd In- ternational Symposium on Information Theory , pages 267–281. Akademia Kiado, Budapest, 1973. [2]Arlot , S. Minimal penalties and the slope heuristics: a survey. J. SFdS , 160, 3, 1-106 (2019). [3]Arlot , S. and Bach , F. Data-driven calibration of linear estimators
with minimal penalties. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, NIPS'09, pages 46-54, Red Hook, NY, USA, Curran Associates Inc. (2009).

[4] Arlot, S. and Bach, F. Data-driven calibration of linear estimators with minimal penalties. arXiv:0909.1884v2 (2011).

[5] Arlot, S. and Massart, P. Data-driven calibration of penalties for least-squares regression. J. Mach. Learn. Res., 10, 245-279 (2009).

[6] Barron, A.R., Birgé, L. and Massart, P. Risk bounds for model selection via penalization. Probab. Th. Rel. Fields, 113, 301-415 (1999).

[7] Bertin, K., Le Pennec, E. and Rivoirard, V. Adaptive Dantzig density estimation. Ann. Inst. Henri Poincaré Probab. Stat., 47, 1, 43-74 (2011).

[8] Birgé, L. and Massart, P. Gaussian model selection. Journal of the European Mathematical Society, 3, 203-268 (2001).

[9] Birgé, L. and Massart, P. Minimal penalties for Gaussian model selection. Probab. Th. Rel. Fields, 138, 33-73 (2007).

[10] Boucheron, S., Lugosi, G. and Massart, P. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press (2013).

[11] Daniel, C. and Wood, F.S. Fitting Equations to Data. Wiley, New York (1971).

[12] Gassiat, E. and van Handel, R. Consistent order estimation and minimal penalties. IEEE Transactions on Information Theory, 59, 2, 1115-1128 (2013).

[13] Hoeffding, W. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58, 13-30 (1963).

[14] Lacour, C. and Massart, P. Minimal penalty for Goldenshluger-Lepski method. Stochastic Processes and their Applications, 126, 12, 3774-3789 (2016).

[15] Ledoux, M. The Concentration of Measure Phenomenon. Mathematical Surveys and Monographs 89, American Mathematical Society (2001).

[16] Lerasle, M. Optimal model selection in density estimation. Ann. Inst. Henri Poincaré Probab. Stat., 48, 3, 884-908 (2012).

[17] Lerasle, M., Magalhães, N.M. and Reynaud-Bouret, P.
Optimal kernel selection for density estimation. In High Dimensional Probability VII, volume 71 of Progr. Probab., pages 425-460. Springer (2016).

[18] Lerasle, M. and Takahashi, Y.T. Sharp oracle inequalities and slope heuristic for specification probabilities estimation in discrete random fields. Bernoulli, 22, 1, 325-344 (2016).

[19] Mallows, C.L. Some comments on Cp. Technometrics, 15, 661-675 (1973).

[20] Marton, K. A simple proof of the blowing up lemma. IEEE Trans. Inform. Theory, IT-32, 445-446 (1986).

[21] Marton, K. Bounding d-distance by information divergence: a method to prove measure concentration. Ann. Probab., 24, 927-939 (1996).

[22] Marton, K. A measure concentration inequality for contracting Markov chains. Geom. Funct. Anal., 6, 3, 556-571 (1996).

[23] Massart, P. Concentration Inequalities and Model Selection. École d'été de Probabilités de Saint-Flour 2003. Lecture Notes in Mathematics 1896, Springer, Berlin/Heidelberg (2007).

[24] McDiarmid, C. On the method of bounded differences. In Surveys in Combinatorics 1989, pages 148-188. Cambridge University Press, Cambridge (1989).

[25] Reynaud-Bouret, P. and Rivoirard, V. Near optimal thresholding estimation of a Poisson intensity on the real line. Electron. J. Stat., 4, 172-238 (2010).

[26] Reynaud-Bouret, P., Rivoirard, V. and Tuleau-Malot, C. Adaptive density estimation: a curse of support? J. Statist. Plann. Inference, 141, 1, 115-139 (2011).

[27] Samson, P.M. Concentration
arXiv:2504.17611v1 [math.ST] 24 Apr 2025

Some Results on Generalized Familywise Error Rate Controlling Procedures under Dependence

Monitirtha Dey*1 and Subir Kumar Bhandari†2

1 Institute for Statistics, University of Bremen, Bremen, Germany
2 Interdisciplinary Statistical Research Unit, Indian Statistical Institute, Kolkata, India

Abstract

The topic of multiple hypotheses testing now has a potpourri of novel theories and ubiquitous applications in diverse scientific fields. However, the universal utility of this field often hinders the possibility of having a generalized theory that accommodates every scenario. This tradeoff is better reflected through the lens of dependence, a central piece behind the theoretical and applied developments of multiple testing. Although omnipresent in many scientific avenues, the nature and extent of dependence vary substantially with the context and complexity of the particular scenario. Positive dependence is the norm in testing many treatments versus a single control or in spatial statistics. On the contrary, negative dependence arises naturally in tests based on split samples and in cyclical, ordered comparisons. In GWAS, the SNP markers are generally considered to be weakly dependent. Generalized familywise error rate (k-FWER) control has been one of the prominent frequentist approaches in simultaneous inference. However, the performances of k-FWER controlling procedures are yet unexplored under different dependencies. This paper revisits the classical testing problem of normal means in different correlated frameworks. We establish upper bounds on the generalized familywise error rates under each dependence, consequently giving rise to improved testing procedures. Towards this, we present improved probability inequalities, which are of independent theoretical interest.
* mdey@uni-bremen.de
† subirkumar.bhandari@gmail.com

1 Introduction

The field of simultaneous statistical inference has witnessed a beautiful blend of theory and applications in its development. Now, we have a potpourri of novel theories and a stream of wide-reaching applications in diverse scientific fields. However, the universal utility of this topic often hinders the possibility of having a generalized theory that fits every situation. This tradeoff is better reflected through the lens of dependence, a central piece behind the theoretical and applied developments of multiple testing. While dependence is a natural phenomenon in a plethora of scientific avenues, its nature and extent vary with the context and complexity of the particular scenario:

1. Positively correlated observations (e.g., the equicorrelated setup) arise in several applications, e.g., when comparing a control against several treatments. Consequently, numerous recent works in multiple testing consider the equicorrelated setup and the positively correlated setup (Delattre and Roquain, 2011; Dey, 2024; Dey and Bhandari, 2023, 2024; Proschan and Shaw, 2011; Roy and Bhandari, 2024).

2. Negative dependence also naturally appears in many testing and multiple comparison scenarios (Chi et al., 2025; Joag-Dev and Proschan, 1983), e.g., in tests based on split samples and in cyclical, ordered comparisons.

3. The correlation between two single nucleotide polymorphisms (SNPs) is thought (Proschan and Shaw, 2011) to decrease with genomic distance. Many authors have argued that, for the large numbers of markers typically used for a GWA study, the test statistics
are weakly correlated because of this largely local presence of correlation between SNPs. Storey and Tibshirani (2003) define weak dependence as "any form of dependence whose effect becomes negligible as the number of features increases to infinity" and remark that weak dependence generally holds in genome-wide scans.

The generalized familywise error rate has been one of the most prominent approaches in frequentist simultaneous inference. However, the performances of k-FWER controlling procedures are yet unexplored under different dependencies. This paper revisits the classical testing problem of normal means in different correlated frameworks. We establish upper bounds on the generalized familywise error rates under each dependence, consequently giving rise to improved testing procedures. Towards this, we also present improved probability inequalities, which are of independent theoretical interest.

This paper is organized as follows. We introduce the testing framework with relevant notations and summarize some results on the limiting behavior of the Bonferroni method under the equicorrelated normal setup in the next section. Section 3 presents improved bounds on the k-FWER under independence and negative dependence of the test statistics. We obtain new and improved probability inequalities in Section 4. Section 5 employs these results in our multiple testing framework. We consider the nearly independent setup in Section 6. Results of an empirical study and real data analysis are presented in Section 7, before we conclude with a brief discussion in Section 8.

2 The Framework

We address the multiple testing problem through a Gaussian sequence model:
$$X_i \sim N(\mu_i, 1), \quad i\in\{1,\dots,n\},$$
where $\mathrm{Corr}(X_i,X_j)=\rho_{ij}$ for each $i\ne j$. We wish to test
$$H_{0i}: \mu_i=0 \quad\text{vs}\quad H_{1i}: \mu_i>0, \qquad 1\le i\le n.$$
The global null $H_0=\bigcap_{i=1}^n H_{0i}$ hypothesizes that each mean is zero. In the following, $\Sigma_n$ denotes the correlation matrix of $X_1,\dots,X_n$, with $(i,j)$'th entry $\rho_{ij}$.
The usual Bonferroni procedure uses the cutoff $\Phi^{-1}(1-\alpha/n)$ to control the FWER at level $\alpha$. Lehmann and Romano (2005) remark that controlling the k-FWER allows one to decrease this cutoff to $\Phi^{-1}(1-k\alpha/n)$, and thus significantly increase the ability to identify false hypotheses. Thus, for their Bonferroni-type procedure, under the global null,
$$k\text{-FWER}(n,\alpha,\Sigma_n) = \mathbb{P}_{\Sigma_n}\big( X_i > \Phi^{-1}(1-k\alpha/n) \text{ for at least } k\ i\text{'s} \mid H_0 \big).$$
Evidently, when $k=1$, the Lehmann-Romano procedure simplifies to the Bonferroni method and the k-FWER reduces to the usual FWER.

In Dey and Bhandari (2024), we proposed a multiple testing procedure that controls the generalized FWER under non-negative dependence. The generalized FWER for this procedure is given by
$$k\text{-FWER}_{\mathrm{modified}}(n,\alpha,\Sigma_n) = \mathbb{P}_{\Sigma_n}\big( X_i > \Phi^{-1}(1-k\alpha^\star/n) \text{ for at least } k\ i\text{'s} \mid H_0 \big),$$
where
$$\alpha^\star := \operatorname*{argmax}_{\beta\in(0,1)} \big\{ \min\{ f_{n,k,\Sigma_n}(\beta), g_{n,k,\Sigma_n}(\beta) \} \le \alpha \big\}.$$
Here $f$ and $g$ are functions defined as follows:
$$f_{n,k,\Sigma_n}(\alpha) = \frac{(n-1)k}{n(k-1)}\cdot\alpha^2 + \frac{1}{\pi k(k-1)}\sum_{1\le i<j\le n}\int_0^{\rho_{ij}} \frac{1}{\sqrt{1-z^2}}\, e^{-\frac{\{\Phi^{-1}(1-\frac{k\alpha}{n})\}^2}{1+z}}\,dz,$$
$$g_{n,k,\Sigma_n}(\alpha) = \alpha\cdot\frac{n+k-1}{n} - \frac{n-1}{n}\cdot\frac{k\alpha^2}{n} - \frac{1}{2\pi k n}\sum_{j=1,\, j\ne i^\star}^{n}\int_0^{\rho_{i^\star j}} \frac{1}{\sqrt{1-z^2}}\, e^{-\frac{\{\Phi^{-1}(1-\frac{k\alpha}{n})\}^2}{1+z}}\,dz,$$
where $i^\star = \operatorname*{argmax}_i \sum_{j=1,\, j\ne i}^n \rho_{ij}$. For each $k>1$, Dey and Bhandari (2024) show that
$$k\text{-FWER}_{\mathrm{modified}}(n,\alpha,\Sigma_n) \le \min\{ f_{n,k,\Sigma_n}(\alpha), g_{n,k,\Sigma_n}(\alpha) \}.$$
Therefore, $k\text{-FWER}_{\mathrm{modified}}(n,\alpha,\Sigma_n)$ is not more than $\alpha$ when
$$\alpha^\star := \operatorname*{argmax}_{\beta\in(0,1)} \big\{ \min\{ f_{n,k,\Sigma_n}(\beta), g_{n,k,\Sigma_n}(\beta) \} \le \alpha \big\}.$$
Since $\alpha^\star \ge \alpha$, their procedure improves the Lehmann-Romano method. In Dey and Bhandari (2024), the authors establish the following result:

Theorem 2.1. Let $k>1$. Suppose $X_1,\dots,X_n$ are independent. Moreover, let $\alpha \le \frac{n(k-1)}{(n-1)k}$. Then
$$\alpha^\star = \sqrt{\frac{n(k-1)\alpha}{(n-1)k}}.$$
This result implies that, under independence of the test statistics, $\alpha^\star$ can be chosen close to $\sqrt\alpha$. This greatly increases the ability to reject false null hypotheses.
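To make the gain concrete, the following sketch (an illustration with parameter values of our choosing) compares the Bonferroni cutoff, the Lehmann-Romano k-FWER cutoff, and the cutoff obtained from the independent-case $\alpha^\star$ of Theorem 2.1; a smaller cutoff means more rejections, hence more power:

```python
import math
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf  # standard normal quantile function

n, k, alpha = 1000, 5, 0.05
bonf = Phi_inv(1 - alpha / n)        # Bonferroni cutoff
lr = Phi_inv(1 - k * alpha / n)      # Lehmann-Romano k-FWER cutoff
alpha_star = math.sqrt(n * (k - 1) * alpha / ((n - 1) * k))  # Theorem 2.1
modified = Phi_inv(1 - k * alpha_star / n)  # modified procedure's cutoff

# the cutoffs shrink: Bonferroni > Lehmann-Romano > modified
print(round(bonf, 3), round(lr, 3), round(modified, 3))
```

With these values, $\alpha^\star\approx0.2$ is roughly $\sqrt\alpha$, so the working level is four times the nominal one.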
Naturally, the following question arises: do there exist nontrivial constants $D_{n,k}$ such that
one has
$$\mathbb{P}_{\Sigma_n}\big( X_i > \Phi^{-1}(1-k\alpha^\star/n) \text{ for at least } k\ i\text{'s} \mid H_0 \big) \le \alpha$$
with some $\alpha^\star \ge D_{n,k}\cdot\alpha^{1/k}$? If this is answered in the affirmative, then we would have sharper upper bounds on generalized FWERs and, consequently, improved multiple testing procedures. Otherwise, can one devise improved multiple testing procedures using the theory of probability inequalities?

3 Improved Bounds under Independence and Negative Dependence

For a random vector $T=(T_1,\dots,T_n)$, let $F_k$ be the distribution function of $T_k$ for $k\in[n]:=\{1,\dots,n\}$. $T$ is called (lower) weakly negatively dependent if
$$\mathbb{P}\Big( \bigcap_{k\in A} \big\{ T_k \le F_k^{-1}(p) \big\} \Big) \le \prod_{k\in A} \mathbb{P}\big( T_k \le F_k^{-1}(p) \big)$$
for all $A\subseteq[n]$ and $p\in(0,1)$. Note that when there is exact equality for each $A$ and each $p$, we have independence. We consider the means testing problem described in Section 2, but with a general underlying distribution $F$.

Theorem 3.1. Suppose $X=(X_1,\dots,X_n)$ is negatively dependent, where $X_i\sim F$ with the density of $F$ symmetric about zero. Then
$$\mathbb{P}_{\Sigma_n}\big( X_i > F^{-1}(1-k\alpha^\star/n) \text{ for at least } k\ i\text{'s} \mid H_0 \big) \le \alpha,$$
where
$$\alpha^\star = \Bigg[ \frac{n}{k\cdot\binom{n}{k}^{1/k}} \Bigg]\alpha^{1/k}.$$
Proof. Let $A_i$ denote the event $\big\{ X_i > F^{-1}\big(1-\frac{k\alpha^\star}{n}\big) \big\} \equiv \big\{ -X_i < -F^{-1}\big(1-\frac{k\alpha^\star}{n}\big) \big\} \equiv \big\{ -X_i < F^{-1}\big(\frac{k\alpha^\star}{n}\big) \big\}$. Then $\mathbb{P}_{H_0}(A_i) = k\alpha^\star/n$. Now,
$$k\text{-FWER}_{\mathrm{modified}}(n,\alpha,I_n) = \mathbb{P}(\text{at least } k \text{ many } A_i\text{'s occur}) = \mathbb{P}\Big( \bigcup_{i_1<\dots<i_k} \{ A_{i_1}\cap\dots\cap A_{i_k} \} \Big) \le \sum_{i_1<\dots<i_k} \mathbb{P}(A_{i_1}\cap\dots\cap A_{i_k}) \quad \text{(using Boole's inequality)} \le \binom{n}{k}\cdot\Big(\frac{k\alpha^\star}{n}\Big)^k \quad \text{(from negative dependence)}.$$
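The choice of $\alpha^\star$ in Theorem 3.1 makes the Boole/negative-dependence bound exactly equal to $\alpha$, which is easy to confirm numerically (a sanity check on the algebra, with illustrative parameter values):

```python
import math

def alpha_star_boole(n, k, alpha):
    # Theorem 3.1's choice: alpha_star = [n / (k * C(n,k)^{1/k})] * alpha^{1/k}
    return (n / (k * math.comb(n, k) ** (1 / k))) * alpha ** (1 / k)

def boole_bound(n, k, alpha_star):
    # the bound C(n,k) * (k * alpha_star / n)^k derived in the proof
    return math.comb(n, k) * (k * alpha_star / n) ** k

n, k, alpha = 50, 3, 0.05
a_star = alpha_star_boole(n, k, alpha)
print(a_star > alpha)                                 # a much larger working level
print(abs(boole_bound(n, k, a_star) - alpha) < 1e-9)  # the bound collapses to alpha
```

Substituting the formula shows why: $\binom{n}{k}\big(\binom{n}{k}^{-1/k}\alpha^{1/k}\big)^k=\alpha$, so no slack is left in the Boole step.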
Now, $\binom{n}{k}\cdot\big(\frac{k\alpha^\star}{n}\big)^k \le \alpha$ gives
$$\frac{k\alpha^\star}{n} \le \Bigg\{ \frac{\alpha}{\binom{n}{k}} \Bigg\}^{1/k}.$$
The rest follows.

When $(X_1,\dots,X_n)$ are independent, upper bounds on the generalized FWER may also be obtained through the Chernoff bound:

Theorem 3.2. Let $Y_1,\dots,Y_n$ be independent Bernoulli($p$) random variables. Suppose $Y=\sum_{i=1}^n Y_i$. Then
$$\mathbb{P}(Y\ge a) \le \inf_{t>0} e^{-ta}(1-p+pe^t)^n.$$
Putting $a=(1+\delta)np$ (for $\delta>0$) and $t=\log(1+\delta)$, one obtains the following result:

Theorem 3.3. Let $Y_1,\dots,Y_n$ be independent Bernoulli($p$) random variables. Suppose $Y=\sum_{i=1}^n Y_i$. Then, for any $\delta>0$,
$$\mathbb{P}\big[ Y\ge(1+\delta)np \big] \le \Bigg[ \frac{e^\delta}{(1+\delta)^{1+\delta}} \Bigg]^{np}.$$
In our multiple testing context, let us consider the Bernoulli random variables $Y_i = \mathbb{1}\big\{ X_i > F^{-1}(1-k\alpha^\star/n) \big\}$, $1\le i\le n$. Then, under the global null, $\mathbb{P}(Y_i=1)=k\alpha^\star/n$ for each $i$. We observe that $k\text{-FWER}_{\mathrm{modified}}(n,\alpha,I_n)$ is the same as $\mathbb{P}\big(\sum_{i=1}^n Y_i \ge k\big)$. Consider $\delta = 1/\alpha^\star - 1$. Then Theorem 3.3 gives
$$\mathbb{P}\Big( \sum_{i=1}^n Y_i \ge k \Big) \le \Bigg[ \frac{e^{\frac{1}{\alpha^\star}-1}}{\big(\frac{1}{\alpha^\star}\big)^{\frac{1}{\alpha^\star}}} \Bigg]^{k\alpha^\star} = \big[ e^{1-\alpha^\star}\cdot\alpha^\star \big]^k.$$
We wish to choose $\alpha^\star$ for which $\big[e^{1-\alpha^\star}\cdot\alpha^\star\big]^k \le \alpha$. It is sufficient to have $(e\alpha^\star)^k\le\alpha$. In other words, we may choose $\alpha^\star = \frac1e\cdot\alpha^{1/k}$. Hence, we obtain the following result:

Theorem 3.4. Suppose $X_1,\dots,X_n$ are independent. Then
$$\mathbb{P}_{\Sigma_n}\big( X_i > F^{-1}(1-k\alpha^\star/n) \text{ for at least } k\ i\text{'s} \mid H_0 \big) \le \alpha,$$
where $\alpha^\star = \frac1e\cdot\alpha^{1/k}$.

We now compare the bounds obtained through Boole's inequality and Chernoff's bound. Towards this, we note
$$\binom{n}{k} = \frac{n!}{(n-k)!\,k!} = \frac{n(n-1)\cdots(n-k+1)}{k!} \le \frac{n^k}{k!}.$$
On the other hand, $e^k = \sum_{i=0}^\infty \frac{k^i}{i!} > \frac{k^k}{k!}$. This implies $\frac{1}{k!} < \big(\frac ek\big)^k$. Substituting this in the above inequality, we get
$$\binom{n}{k} \le \Big(\frac{en}{k}\Big)^k.$$
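The Chernoff-based guarantee of Theorem 3.4 can be sanity-checked by simulation under the global null (a hedged sketch with parameter values and a seed of our choosing): with $\alpha^\star=\alpha^{1/k}/e$, the Monte Carlo estimate of $\mathbb{P}\big(\sum_i Y_i\ge k\big)$ should stay below the derived bound $\big[e^{1-\alpha^\star}\alpha^\star\big]^k$, which in turn stays below $\alpha$:

```python
import math
import random

random.seed(0)

def kfwer_mc(n, k, alpha_star, trials=5000):
    # Monte Carlo estimate of P(Binomial(n, k*alpha_star/n) >= k) under the global null
    p = k * alpha_star / n
    hits = 0
    for _ in range(trials):
        count = sum(1 for _ in range(n) if random.random() < p)
        hits += count >= k
    return hits / trials

n, k, alpha = 200, 3, 0.05
alpha_star = alpha ** (1 / k) / math.e                    # Theorem 3.4's choice
chernoff = (math.exp(1 - alpha_star) * alpha_star) ** k   # the bound derived above
est = kfwer_mc(n, k, alpha_star)
print(est <= chernoff <= alpha)
```

The slack between the empirical error rate and $\alpha$ reflects the looseness of the Chernoff step, which is exactly what the comparison with Theorem 3.1 below quantifies.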
This results in $\frac{1}{e}\le\frac{n}{k\cdot\binom{n}{k}^{1/k}}$. Hence, Theorem 3.1 is stronger than Theorem 3.4.

Remark 1. We have previously noted that $k\text{-FWER}_{\text{modified}}(n,\alpha,I_n)$ is the same as $P(\sum_{i=1}^n Y_i\ge k)$, where $Y_i=\mathbb{1}\{X_i>\Phi^{-1}(1-k\alpha^\star/n)\}$ for $1\le i\le n$. Hoeffding's inequality gives
$$P\Big[\sum_{i=1}^n Y_i\ge E\Big(\sum_{i=1}^n Y_i\Big)+\delta\Big]\le e^{-2\delta^2/n}.$$
Putting $\delta=k(1-\alpha^\star)$ results in
$$k\text{-FWER}_{\text{modified}}(n,\alpha,I_n)\le e^{-2k^2(1-\alpha^\star)^2/n}.$$
This implies that whenever $k^2/n\to\infty$ as $n\to\infty$, $k\text{-FWER}_{\text{modified}}(n,\alpha,I_n)$ approaches zero. In particular, for any sequence $k_n=n^{\beta}$ with $\beta>1/2$, $k\text{-FWER}_{\text{modified}}(n,\alpha,I_n)$ approaches zero. Also note that this is true under general distributions, since the proof nowhere uses normality of the test statistics.

4 A New Probability Inequality
https://arxiv.org/abs/2504.17611v1
We have previously observed that the $k$-FWER is $P(\text{at least }k\text{ out of }n\ A_i\text{'s occur})$ for suitably defined events $A_i$, $1\le i\le n$. Naturally, one wonders whether the theory of probability inequalities helps to find improved upper bounds on $P(\text{at least }k\text{ out of }n\ A_i\text{'s occur})$. Accurate computation of this probability requires knowing the complete dependence between the events $(A_1,\dots,A_n)$, which we typically do not know unless they are independent. As also mentioned in Dey (2024) and Dey and Bhandari (2024), the available information is often the marginal probabilities and joint probabilities up to level $k_0$ ($k_0\ll n$). Towards finding such an easily computable upper bound on the probability that at least $k$ out of $n$ events occur, Dey and Bhandari (2024) establish the following:

Theorem 4.1. Let $A_1,A_2,\dots,A_n$ be $n$ events. Then, for each $k\ge2$,
$$P(\text{at least }k\text{ out of }n\ A_i\text{'s occur})\le\min\left\{\frac{S_1-S_2'}{k}+\frac{k-1}{k}\cdot\max_{1\le i\le n}P(A_i),\ \frac{2S_2}{k(k-1)}\right\},$$
where $S_1=\sum_{i=1}^n P(A_i)$, $S_2=\sum_{1\le i<j\le n}P(A_i\cap A_j)$, and
$$S_2'=\max_{1\le i\le n}\sum_{j=1,\,j\ne i}^n P(A_i\cap A_j).$$

We wish to obtain sharper probability inequalities. Towards this, we define some quantities. For $1\le m\le n$, suppose
$$S_m=\sum_{1\le i_1<\dots<i_m\le n}P(A_{i_1}\cap\dots\cap A_{i_m})$$
denotes the sum of probabilities of $m$-wise intersections. Also, let
$$S_m'=\max_{i_1<\dots<i_{m-1}}\ \sum_{j=1,\,j\notin\{i_1,\dots,i_{m-1}\}}^n P(A_j\cap A_{i_1}\cap\dots\cap A_{i_{m-1}}).$$

Lemma 4.1. Let $A_1,A_2,\dots,A_n$ be $n$ events. Given any $k\ge2$, we have
$$P(\text{at least }k\text{ out of }n\ A_i\text{'s occur})\le\frac{S_1-S_m'}{k}+\frac{k-m+1}{k}\cdot\max_{i_1<\dots<i_{m-1}}P(A_{i_1}\cap\dots\cap A_{i_{m-1}})$$
for each $2\le m\le k$.

Proof. Let $I_i(w)$ be the indicator random variable of the event $A_i$ for $1\le i\le n$. Then the random variable $\max I_{i_1}(w)\cdots I_{i_k}(w)$ is the indicator of the event that at least $k$ among the $n$ $A_i$'s occur. Here the maximum is taken over all tuples $(i_1,\dots,i_k)$ with $i_1,\dots,i_k\in\{1,\dots,n\}$, $i_1<\dots<i_k$. Now, for any $(m-1)$-tuple $(i_1,\dots,i_{m-1})$,
$$\max I_{i_1}(w)\cdots I_{i_k}(w)\le\frac{1}{k}\big[1-I_{i_1}(w)\cdots I_{i_{m-1}}(w)\big]\sum_{j=1}^n I_j(w)+I_{i_1}(w)\cdots I_{i_{m-1}}(w).$$
Taking expectations in the above, we obtain
$$P(\text{at least }k\text{ out of }n\ A_i\text{'s occur})\le\frac{1}{k}\sum_{j=1}^n P(A_j)-\frac{1}{k}\sum_{j\notin\{i_1,\dots,i_{m-1}\}}P(A_j\cap A_{i_1}\cap\dots\cap A_{i_{m-1}})+\frac{k-m+1}{k}\cdot P(A_{i_1}\cap\dots\cap A_{i_{m-1}}).$$
The rest follows by observing that the above holds for any $(m-1)$-tuple $(i_1,\dots,i_{m-1})$ and also for any $m$ satisfying $2\le m\le k$. $\square$

Lemma 4.2. Let $A_1,A_2,\dots,A_n$ be $n$ events. Given any $k\ge2$, we have
$$P(\text{at least }k\text{ out of }n\ A_i\text{'s occur})\le\min_{1\le m\le k}\frac{S_m}{\binom{k}{m}}.$$

Proof. Let $T_n$ denote the number of events occurring. Then, for each $1\le m\le k$,
$$P(\text{at least }k\text{ out of }n\ A_i\text{'s occur})=P(T_n\ge k)=P\left[\binom{T_n}{m}\ge\binom{k}{m}\right]\le\frac{1}{\binom{k}{m}}E\left[\binom{T_n}{m}\right]$$
$$=\frac{1}{\binom{k}{m}}\cdot E\left[\sum_{1\le i_1<\dots<i_m\le n}I_{i_1}(w)I_{i_2}(w)\cdots I_{i_m}(w)\right]=\frac{1}{\binom{k}{m}}\cdot\sum_{1\le i_1<\dots<i_m\le n}P(A_{i_1}\cap\dots\cap A_{i_m})=\frac{S_m}{\binom{k}{m}}.$$
The rest is obvious. $\square$

Thus we have the improved inequality:

Theorem 4.2. Let $A_1,A_2,\dots,A_n$ be $n$ events. Suppose $S_m$ and $S_m'$ are as defined earlier. Then, for each $k\ge2$,
$$P(\text{at least }k\text{ out of }n\ A_i\text{'s occur})\le\min\{A,B\},$$
where
$$A:=\min_{2\le m\le k}\ \frac{S_1-S_m'}{k}+\frac{k-m+1}{k}\cdot\max_{i_1<\dots<i_{m-1}}P(A_{i_1}\cap\dots\cap A_{i_{m-1}}),\qquad B:=\min_{1\le m\le k}\frac{S_m}{\binom{k}{m}}.$$
Theorem 4.2 is an extremely general probability inequality that works under any kind of joint behavior of the events $A_1,\dots,A_n$.

5 Improved Bounds under Arbitrary Dependence and Equicorrelation

Suppose $(X_1,X_2,\dots,X_n)$ has covariance matrix $\Sigma_n=((\rho_{ij}))$. We define $A_i=\{X_i>\Phi^{-1}(1-k\alpha/n)\}$ for $1\le i\le n$. This implies
$$k\text{-FWER}(n,\alpha,\Sigma_n)=P_{\Sigma_n}(\text{at least }k\ A_i\text{'s occur}\mid H_0).$$
Hence, Theorem 4.2 gives the following immediate corollary:

Corollary 5.1. Let $\Sigma_n$ be the correlation matrix of $X_1,\dots,X_n$ with $(i,j)$'th entry $\rho_{ij}$. Suppose $A_i=\{X_i>\Phi^{-1}(1-k\alpha/n)\}$ for $1\le i\le n$. Then, for each $k>1$,
$$k\text{-FWER}(n,\alpha,\Sigma_n)\le\min\{A,B\},$$
where $A$ and $B$ are as defined earlier.
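To see Theorem 4.2 (and hence Corollary 5.1) in action, here is a sketch for the toy case of $n$ independent events each of probability $p$, where $S_m=\binom{n}{m}p^m$, $S_m'=(n-m+1)p^m$, and $P(A_{i_1}\cap\dots\cap A_{i_{m-1}})=p^{m-1}$; the combined bound $\min\{A,B\}$ is then compared against the exact binomial tail probability. The independent-events specialization is our illustrative assumption, not the general setting of the theorem.

```python
from math import comb

def theorem_4_2_bound_indep(n: int, k: int, p: float) -> float:
    """min{A, B} of Theorem 4.2, specialized to n independent events of prob p."""
    S = {m: comb(n, m) * p**m for m in range(1, k + 1)}
    A = min((S[1] - (n - m + 1) * p**m) / k + (k - m + 1) / k * p ** (m - 1)
            for m in range(2, k + 1))
    B = min(S[m] / comb(k, m) for m in range(1, k + 1))
    return min(A, B)

def exact_tail(n: int, k: int, p: float) -> float:
    """P(at least k of n independent events occur) = P(Bin(n,p) >= k)."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

n, k, p = 10, 3, 0.2
print(theorem_4_2_bound_indep(n, k, p), exact_tail(n, k, p))
```

The bound is of course conservative here; its value lies in requiring only low-order intersection probabilities, which remain available under dependence.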
We now focus on the equicorrelated case. When $\rho_{ij}=\rho$ for each $i\ne j$, one has
$$S_m=\binom{n}{m}\cdot a_m,\quad\text{where}\quad a_m=P\left[\bigcap_{j=1}^m\big\{X_{i_j}>\Phi^{-1}(1-k\alpha/n)\big\}\right].$$
This means
$$\frac{S_m}{\binom{k}{m}}=\frac{\binom{n}{m}\cdot a_m}{\binom{k}{m}}=\frac{n!}{k!}\cdot\frac{(k-m)!}{(n-m)!}\cdot a_m.$$
Thus,
$$r_m:=\frac{S_{m+1}/\binom{k}{m+1}}{S_m/\binom{k}{m}}=\frac{n-m}{k-m}\cdot\frac{a_{m+1}}{a_m}.$$
We have the following:

Theorem 5.1. Consider the equicorrelated normal set-up
with correlation $\rho>0$. Then, the sequence $r_m$ is increasing in $1\le m\le k-1$.

Proof. Consider the block equicorrelation structure described in the Appendix. Suppose $\mathbf{k}=(m+1,m-1,0,\dots,0)$ and $\mathbf{k}^\star=(m,m,0,\dots,0)$, where $2m-2$ zeros are there in each of these two vectors. So $\mathbf{k}\succ\mathbf{k}^\star$. Applying Theorem A.1, we obtain $a_{m+1}a_{m-1}\ge a_m^2$. This means $a_{m+1}/a_m$ is increasing in $m$. Since $(n-m)/(k-m)$ is also increasing in $m$, we have the desired result. $\square$

Lemma 5.1. Consider the equicorrelated normal set-up with correlation $\rho>0$. Suppose $k>1$ and $k/n,\alpha<.5$. Then $r_1<1$.

To prove this, we need the following representation theorem by Monhor (2013):

Lemma 5.2. Suppose $(X,Y)$ follows a bivariate normal distribution with parameters $(0,0,1,1,\rho)$ with $\rho\ge0$. Then, for all $x>0$,
$$P(X\le x,Y\le x)=[\Phi(x)]^2+\frac{1}{2\pi}\int_0^{\rho}\frac{1}{\sqrt{1-z^2}}\,e^{-\frac{x^2}{1+z}}\,dz.$$

Proof. We start by finding an expression for $a_2$:
$$a_2=P_{H_0}(A_i\cap A_j)=1-P_{H_0}(A_i^c\cup A_j^c)=1-P_{H_0}(A_i^c)-P_{H_0}(A_j^c)+P_{H_0}(A_i^c\cap A_j^c)$$
$$=1-(1-k\alpha/n)-(1-k\alpha/n)+P_{H_0}\big(X_i\le\Phi^{-1}(1-k\alpha/n),\,X_j\le\Phi^{-1}(1-k\alpha/n)\big)$$
$$=\frac{2k\alpha}{n}-1+(1-k\alpha/n)^2+\frac{1}{2\pi}\int_0^{\rho}\frac{1}{\sqrt{1-z^2}}\,e^{-\frac{\{\Phi^{-1}(1-\frac{k\alpha}{n})\}^2}{1+z}}\,dz\quad\text{(using Lemma 5.2)}$$
$$=\frac{k^2\alpha^2}{n^2}+\frac{1}{2\pi}\int_0^{\rho}\frac{1}{\sqrt{1-z^2}}\,e^{-\frac{\{\Phi^{-1}(1-\frac{k\alpha}{n})\}^2}{1+z}}\,dz.$$
Hence, writing $c=\Phi^{-1}(1-k\alpha/n)$,
$$r_1<1\iff a_2<\frac{k-1}{n-1}\cdot a_1\iff\Big(\frac{k\alpha}{n}\Big)^2+\frac{1}{2\pi}\int_0^{\rho}\frac{e^{-\frac{c^2}{1+z}}}{\sqrt{1-z^2}}\,dz<\frac{k-1}{n-1}\cdot\frac{k\alpha}{n}$$
$$\iff\frac{1}{2\pi}\int_0^{\rho}\frac{e^{-\frac{c^2}{1+z}}}{\sqrt{1-z^2}}\,dz<\frac{k\alpha}{n}\left[\frac{k-1}{n-1}-\frac{k\alpha}{n}\right]$$
$$\impliedby\frac{\sin^{-1}(\rho)}{2\pi}\cdot e^{-\frac{c^2}{2}}<\frac{k\alpha}{n}\left[\frac{k-1}{n-1}-\frac{k\alpha}{n}\right]\impliedby\frac{1}{2}\,\frac{k\alpha}{n}\left[\frac{k-1}{n-1}-\frac{k\alpha}{n}\right]<c^2$$
$$\impliedby\frac{1}{4}<c\ \ (\text{since }k/n,\alpha<.5)\impliedby k\alpha/n<1-\Phi(1/4),$$
which is true since $k/n,\alpha<.5$. $\square$

Thus, $r_m$ is increasing in $m$ and $r_1<1$. Hence, if there exists $m^\star$ such that $r_{m^\star-1}<1<r_{m^\star}$, then
$$\min_{1\le m\le k}\frac{S_m}{\binom{k}{m}}=\frac{S_{m^\star}}{\binom{k}{m^\star}}.$$

6 Results under Nearly Independence

The correlation between two single nucleotide polymorphisms (SNPs) is thought (Proschan and Shaw, 2011) to decrease with genomic distance.
Many authors have argued that, for the large numbers of markers typically used in a GWA study, the test statistics are weakly correlated because of this largely local presence of correlation between SNPs. Storey and Tibshirani (2003) define weak dependence as "any form of dependence whose effect becomes negligible as the number of features increases to infinity" and remark that weak dependence generally holds in genome-wide scans. Das and Bhandari (2025) consider the nearly independent setup:
$$\forall\, i\ne j,\quad\mathrm{Corr}(X_i,X_j)=O\Big(\frac{1}{n^{\beta}}\Big)=\rho_{ij}\ (\text{say})$$
for some $\beta>0$. Under the global null, we have
$$k\text{-FWER}_{\text{modified}}(n,\alpha,\Sigma_n)\leqslant\binom{n}{k}\cdot\max_{(i_1,\dots,i_k)}P\left[\bigcap_{j=1}^k\big\{X_{i_j}>c_{k,\alpha^\star,n}\big\}\right],$$
where $c_{k,\alpha^\star,n}=\Phi^{-1}(1-k\alpha^\star/n)$. Proceeding along similar lines to Das and Bhandari (2025) (see Lemma 4.3 therein), one obtains the following:
$$P\left[\bigcap_{j=1}^k\big\{X_{i_j}>c_{k,\alpha^\star,n}\big\}\right]\sim\frac{f(c_{k,\alpha^\star,n},\Sigma)}{\prod_{i=1}^k\Delta_i}\quad\text{as }n\to\infty,$$
provided that $c\to\infty$ as $n\to\infty$. Here $\Delta_i=\frac{c_{k,\alpha^\star,n}}{1+(k-1)\rho}$ for each $i$. We also have the following asymptotic approximation:
$$\frac{f(c_{k,\alpha^\star,n},\Sigma)}{\prod_{i=1}^k\Delta_i}\sim\Big(\frac{k\alpha^\star}{n}\Big)^k\left[1+\frac{c^2}{2}\sum_{l\ne m\in\{i_1,\dots,i_k\}}\rho_{lm}\right]\quad\text{as }n\to\infty.$$
Combining these two approximations gives that, under the global null, for all sufficiently large values of $n$,
$$k\text{-FWER}_{\text{modified}}(n,\alpha,\Sigma_n)\leqslant\binom{n}{k}\Big(\frac{k\alpha^\star}{n}\Big)^k\cdot\max_{(i_1,\dots,i_k)}\left[1+\frac{c^2}{2}\sum_{l\ne m\in\{i_1,\dots,i_k\}}\rho_{lm}\right].$$

7 Empirical Study and Real Data Analysis

The set-up of our empirical study is as follows:
• number of hypotheses: n = 1000.
• choice of k: k = 25, 50, 75.
• choice of equicorrelation: ρ ∈ {.1, .15, .2, .25, .3}.
• desired level of control: α = .05.
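For configurations like those above, the quantities of Section 5 can be evaluated numerically. The sketch below computes $a_2$ from Lemma 5.2's integral representation (trapezoidal quadrature is our implementation choice, not part of the paper) and checks that $r_1=\frac{n-1}{k-1}\cdot\frac{a_2}{a_1}<1$, consistent with Lemma 5.1.

```python
from math import exp, sqrt, pi
from statistics import NormalDist

def a2_equicorrelated(n, k, alpha, rho, steps=10_000):
    """a2 = P(X_i > c, X_j > c) under equicorrelation rho, via Lemma 5.2,
    with c = Phi^{-1}(1 - k*alpha/n)."""
    c = NormalDist().inv_cdf(1 - k * alpha / n)
    # Trapezoidal rule for (1/2pi) * int_0^rho exp(-c^2/(1+z)) / sqrt(1-z^2) dz
    h = rho / steps
    f = lambda z: exp(-c * c / (1 + z)) / sqrt(1 - z * z)
    integral = h * (0.5 * f(0) + sum(f(i * h) for i in range(1, steps)) + 0.5 * f(rho))
    return (k * alpha / n) ** 2 + integral / (2 * pi)

n, k, alpha, rho = 1000, 25, 0.05, 0.2
a1 = k * alpha / n
r1 = (n - 1) / (k - 1) * a2_equicorrelated(n, k, alpha, rho) / a1
print(r1)  # strictly below 1, as Lemma 5.1 guarantees
```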
We present the empirical results for the $k$-FWER for $n=1000$, $\alpha=.05$, and the aforementioned choices of $(n,k,\rho,\alpha)$, along with the existing bounds of Dey and Bhandari (2024) and our proposed bounds (given by Corollary 5.1), in Table 1. It is noteworthy that for very small values of $\rho$, our proposed bounds are significantly smaller than the existing bounds, while the proposed
bounds are never worse than the existing ones.

We now elucidate the practical utility of our proposed bounds through a prostate cancer dataset (Singh et al., 2002). This dataset involves expression levels of $n=6033$ genes for $N=102$ individuals: among them, 52 are prostate cancer patients and the rest are healthy persons. Let $T_{ij}$ denote the expression level for gene $i$ on individual $j$. The dataset is an $n\times N$ matrix $T$ with $1\le j\le50$ for the healthy persons and $51\le j\le102$ for the cancer patients. Let $\bar T_i(1)$ and $\bar T_i(2)$ be the averages of $T_{ij}$ for these two groups, respectively. One of the foremost problems is to identify genes that have significantly different expression levels between the two populations (Efron, 2010). In other words, one is interested in testing
$$H_{0i}: T_{ij}\ \text{has the same distribution for the two populations.}$$

Table 1: Estimates of k-FWER(n = 1000, α = .05, ρ)

                          ρ = 0.1   0.15      0.2      0.25     0.3
k = 25  k̂-FWER(ρ)         1e-04     0.0013    0.0024   0.0050   0.0062
        Existing Bound     0.007098  0.011068  0.01672  0.02456  0.03522
        Proposed Bound     0.001392  0.006829  0.01672  0.02456  0.03522
k = 50  k̂-FWER(ρ)         0e+00     0.0006    0.0017   0.0032   0.0048
        Existing Bound     0.006185  0.009164  0.01321  0.01857  0.02557
        Proposed Bound     0.000447  0.003484  0.00962  0.01857  0.02557
k = 75  k̂-FWER(ρ)         0e+00     0.0003    0.0013   0.0025   0.0037
        Existing Bound     0.005739  0.008254  0.01157  0.01587  0.02134
        Proposed Bound     0.000191  0.002094  0.00681  0.01374  0.02134

A natural way to test $H_{0i}$ is to compute the usual $t$ statistic
$$t_i=\frac{\bar T_i(2)-\bar T_i(1)}{S_i},\quad\text{where}\quad S_i^2=\frac{\sum_{j=1}^{50}\big(T_{ij}-\bar T_i(1)\big)^2+\sum_{j=51}^{102}\big(T_{ij}-\bar T_i(2)\big)^2}{100}\cdot\Big(\frac{1}{50}+\frac{1}{52}\Big).$$
One rejects $H_{0i}$ at $\alpha=.05$ (based on the usual normality assumptions) if $|t_i|$ exceeds 1.98, i.e., the two-tailed 5% point for a Student-$t$ random variable with 100 d.f. We consider the following transformation: $X_i=\Phi^{-1}(F(t_i))$, where $F$ denotes the cdf of the $t_{100}$ distribution. This means $H_{0i}: X_i\sim N(0,1)$.
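Before comparing procedures on the data, it helps to see how the two rejection cutoffs differ numerically: the Lehmann-Romano procedure rejects when $X_i>\Phi^{-1}(1-k\alpha/n)$, while the proposed procedure uses $\Phi^{-1}(1-k\alpha^\star/n)$ with the $\alpha^\star$ of Theorem 3.1. A sketch, with simulated standard normal statistics standing in for the gene expression data (which we do not reproduce here):

```python
import random
from math import comb
from statistics import NormalDist

def thresholds(n: int, k: int, alpha: float):
    nd = NormalDist()
    t_lr = nd.inv_cdf(1 - k * alpha / n)                     # Lehmann-Romano cutoff
    a_star = (n / (k * comb(n, k) ** (1.0 / k))) * alpha ** (1.0 / k)
    t_prop = nd.inv_cdf(1 - k * a_star / n)                  # proposed cutoff
    return t_lr, t_prop

n, k, alpha = 6033, 10, 0.05
t_lr, t_prop = thresholds(n, k, alpha)

random.seed(1)
x = [random.gauss(0, 1) for _ in range(n)]   # illustrative statistics only
rej_lr = sum(xi > t_lr for xi in x)
rej_prop = sum(xi > t_prop for xi in x)
print(t_lr, t_prop, rej_lr, rej_prop)        # t_prop < t_lr, so rej_prop >= rej_lr
```

Since $\alpha^\star>\alpha$ here, the proposed threshold is strictly lower and at least as many hypotheses are rejected, which is the pattern reported in Table 2.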
The observed value of the usual correlation coefficient between the $n$ $X$ values is less than .001. Hence, we take $\rho=0$ in our computations. For a given $k$, the Lehmann-Romano procedure at level $\alpha=.05$ rejects $H_{0i}$ if
$$X_i>\Phi^{-1}(1-k\alpha/n).$$
Our proposed procedure rejects $H_{0i}$ if $X_i>\Phi^{-1}(1-k\alpha^\star/n)$, where
$$\alpha^\star=\left[\frac{n}{k\cdot\binom{n}{k}^{1/k}}\right]\alpha^{1/k}.$$
The numbers of hypotheses rejected for different values of $k$ by the Lehmann-Romano procedure, the method proposed in Dey and Bhandari (2024), and our proposed procedure are given in Table 2.

Table 2: Number of rejected hypotheses for different values of k

k                                   2    10   20   40   60
Lehmann-Romano Method               6    13   17   26   27
Method in Dey and Bhandari (2024)   12   26   34   51   62
Proposed Method                     12   28   47   63   78

Table 2 demonstrates that our proposed method rejects significantly more hypotheses than the method proposed in Dey and Bhandari (2024) and the Lehmann-Romano method for each value of $k$. This illustrates the superiority of our proposed bound on the probability of occurrence of at least $k$ among $n$ events.

8 Concluding Remarks

Computing generalized familywise error rates involves knowledge of the joint distribution of the test statistics under the null hypotheses. While this distribution is relatively straightforward under independence, it becomes intractable under dependence, especially arbitrary or unknown dependence. While dependence is a natural phenomenon in a plethora of scientific avenues, the nature and extent of dependence varies with the context and complexity of the particular scenario. This
paper revisits the classical testing problem of normal means in different correlated frameworks. We establish upper bounds on the generalized familywise error rates under each dependence structure, consequently giving rise to improved testing procedures. Towards this, we also present improved inequalities on the probability that at least $k$ among $n$ events occur, which are of independent theoretical interest. This probability arises, e.g., in transportation and communication networks. The new probability inequalities proposed in this work might be insightful in those areas, too.

Acknowledgements

Dey is grateful to Prof. Thorsten Dickhaus for his constant encouragement and support during this work.

References

Z. Chi, A. Ramdas, and R. Wang. Multiple testing under negative dependence. Bernoulli, 31(2):1230-1255, 2025. URL https://doi.org/10.3150/24-BEJ1768.

N. Das and S. K. Bhandari. FWER for normal distribution in nearly independent setup. Statistics & Probability Letters, 219:110340, 2025. URL https://doi.org/10.1016/j.spl.2024.110340.

S. Delattre and E. Roquain. On the false discovery proportion convergence under gaussian equi-correlation. Statistics & Probability Letters, 81(1):111-115, 2011. URL https://www.sciencedirect.com/science/article/pii/S0167715210002750.

M. Dey. Behavior of FWER in normal distributions. Communications in Statistics - Theory and Methods, 53(9):3211-3225, 2024. URL https://doi.org/10.1080/03610926.2022.2150826.

M. Dey and S. K. Bhandari. Bounds on generalized family-wise error rates for normal distributions. Statistical Papers, 65:2313-2326, 2024. URL https://doi.org/10.1007/s00362-023-01487-0.

M. Dey and S. K. Bhandari. FWER goes to zero for correlated normal. Statistics & Probability Letters, 193:109700, 2023. URL https://www.sciencedirect.com/science/article/pii/S0167715222002139.

B. Efron.
Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction. Institute of Mathematical Statistics Monographs. Cambridge University Press, 2010. URL https://doi.org/10.1017/CBO9780511761362.

K. Joag-Dev and F. Proschan. Negative association of random variables with applications. The Annals of Statistics, 11(1):286-295, 1983. URL https://doi.org/10.1214/aos/1176346079.

E. L. Lehmann and Joseph P. Romano. Generalizations of the familywise error rate. The Annals of Statistics, 33(3):1138-1154, 2005. URL https://doi.org/10.1214/009053605000000084.

D. Monhor. Inequalities for correlated bivariate normal distribution function. Probability in the Engineering and Informational Sciences, 27(1):115-123, 2013. URL https://doi.org/10.1017/S0269964812000332.

M. A. Proschan and P. A. Shaw. Asymptotics of Bonferroni for dependent normal test statistics. Statistics & Probability Letters, 81(7):739-748, 2011. URL https://www.sciencedirect.com/science/article/pii/S0167715210003329.

R. Roy and S. K. Bhandari. Asymptotic Bayes' optimality under sparsity for exchangeable dependent multivariate normal test statistics. Statistics & Probability Letters, 207:110030, 2024. URL https://www.sciencedirect.com/science/article/pii/S0167715223002535.

D. Singh, P. G. Febbo, K. Ross, D. G. Jackson, J. Manola, C. Ladd, P. Tamayo, A. A. Renshaw, A. V. D'Amico, J. P. Richie, E. S. Lander, M. Loda, P. W. Kantoff, T. R. Golub, and W. R. Sellers. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell, 1(2):203-209, March 2002. URL https://doi.org/10.1016/s1535-6108(02)00030-2.

J. D. Storey and R. Tibshirani. Statistical significance for genomewide studies. Proceedings of the National Academy of Sciences, 100(16):9440-9445, 2003. URL https://www.pnas.org/doi/abs/10.1073/pnas.1530509100.

Y. L. Tong.
The Multivariate Normal Distribution
. Springer Series in Statistics. Springer-Verlag, New York, 1990.

A Appendix

Let $k_1,\dots,k_n$ be nonnegative integers such that $\sum_{i=1}^n k_i=n$, and denote $\mathbf{k}=(k_1,\dots,k_n)'$. Without loss of generality, it may be assumed that
$$k_1\ge\dots\ge k_r>0,\qquad k_{r+1}=\dots=k_n=0$$
for some $r\le n$. Tong (1990) defines $r$ square matrices $\Sigma_{11},\dots,\Sigma_{rr}$ such that $\Sigma_{jj}(\mathbf{k})$ has 1 on the diagonal and $\rho$ on each off-diagonal, for $j=1,\dots,r$. We define an $n\times n$ matrix $\Sigma(\mathbf{k})$ given by
$$\Sigma(\mathbf{k})=\begin{pmatrix}\Sigma_{11}&0&\cdots&0\\0&\Sigma_{22}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\Sigma_{rr}\end{pmatrix}.$$
Let $X=(X_1,\dots,X_n)'$ have an $N_n(\mu,\Sigma(\mathbf{k}))$ distribution. For fixed block sizes $k_1,\dots,k_r$, $X_i$ and $X_j$ are correlated with a common correlation coefficient $\rho$ if they are in the same block, and are independent if they belong to different blocks. The problem of interest is how the distributions and moments of the extreme order statistics $X_{(1)}$ and $X_{(n)}$ depend on the block size vector $\mathbf{k}$. Let $\mathbf{k}^\star=(k_1^\star,\dots,k_n^\star)'$ denote another vector of nonnegative integers such that
$$k_1^\star\ge\dots\ge k_{r^\star}^\star>0,\qquad k_{r^\star+1}^\star=\dots=k_n^\star=0,\qquad\sum_{i=1}^{r^\star}k_i^\star=n.$$
Let $\Sigma(\mathbf{k}^\star)$ denote the matrix analogous to $\Sigma(\mathbf{k})$ but based on $\mathbf{k}^\star$ instead of $\mathbf{k}$. One has the following:

Theorem A.1 (see Theorem 6.4.5 of Tong (1990)). Let $(X_1,\dots,X_n)'$ have an $N_n(\mu,\Sigma(\mathbf{k}))$ distribution, let $(X_1^\star,\dots,X_n^\star)'$ have an $N_n(\mu,\Sigma(\mathbf{k}^\star))$ distribution, and let
$$X_{(1)}\le\dots\le X_{(n)},\qquad X_{(1)}^\star\le\dots\le X_{(n)}^\star$$
denote the corresponding order statistics. If $\mu_1=\dots=\mu_n$ and $\mathbf{k}\succ\mathbf{k}^\star$ (i.e., $\mathbf{k}$ majorizes $\mathbf{k}^\star$), then
$$P\big[X_{(1)}\ge\lambda\big]\ge P\big[X_{(1)}^\star\ge\lambda\big]\quad\forall\lambda.$$
Bernstein Polynomial Processes for Continuous Time Change Detection

Dan Cunha (1), Mark Friedl (2), and Luis Carvalho (1)
(1) Department of Mathematics and Statistics, Boston University
(2) Department of Earth and Environment, Boston University

Abstract

There is a lack of methodological results for continuous time change detection due to the challenges of noninformative prior specification and efficient posterior inference in this setting. Most methodologies to date assume data are collected according to uniformly spaced time intervals. This assumption incurs bias in the continuous time setting where, a priori, two consecutive observations measured closely in time are less likely to change than two consecutive observations that are far apart in time. Models proposed in this setting have required MCMC sampling, which is not ideal. To address these issues, we derive the heterogeneous continuous time Markov chain that models change point transition probabilities noninformatively. By construction, change points under this model can be inferred efficiently using the forward backward algorithm and do not require MCMC sampling. We then develop a novel loss function for the continuous time setting, derive its Bayes estimator, and demonstrate its performance on synthetic data. A case study using time series of remotely sensed observations is then carried out on three change detection applications. To reduce falsely detected changes in this setting, we develop a semiparametric mean function that captures interannual variability due to weather in addition to trend and seasonal components.

Keywords: heterogeneous continuous time Markov chain, expectation maximization, land disturbance modeling

arXiv:2504.17876v1 [stat.ME] 24 Apr 2025

1 Introduction

Change detection is a widely used modeling and inference procedure for a vast number of applications, many of which use data measured in continuous time or along a continuous axis.
However, methodologies to date have largely focused on the discrete time model (Truong et al., 2020; Aminikhanghahi and Cook, 2017). It is not hard to reason, a priori, that the probability of change ought to be lower when observations are closer in time than when observations are further apart in time. Our objective is to develop this prior intuition in a principled noninformative way that also yields analytically tractable posterior inference of change points in continuous time models.

We will start by introducing the discrete time setting where most methodologies have been developed. Suppose we have conditionally independent observations $[y_i\mid\Theta_k,z_i=j]$ indexed by equally spaced discrete times $i=0,\dots,n$, with $j=1,\dots,k$ model states. The likelihood distribution is conditioned on a latent change point process $\{z_i\}_{i=0}^n$ that takes on states $1,\dots,k$, as well as parameters $\Theta_k\in\mathbb{R}^{p\times k}$. When $z_i=j$, the likelihood distribution of $y_i$ is a function of the $j$th parameter vector $\theta_j\in\mathbb{R}^{p\times1}$. What makes $\{z_i\}_{i=0}^n$ a change point process is that $z_0=1$, $z_n=k$, and if $z_i=j$, then $z_{i+1}\in\{j,j+1\}$ with probability 1. We index observations from $i=0$ since $z_0$ always equals 1, which simplifies notation later on. Notice, we could just as well have defined $k$ segment length parameters $\{\zeta_j\}_{j=1}^k$ such that $\sum_{j=1}^k\zeta_j=n$ and $\zeta_j=i$ when $z_{i-1}=j$ and $z_i=j+1$. The majority of change detection literature postulates models as a
function of segment length parameters $\zeta$ or change point locations $\tau_j=\sum_{l=1}^j\zeta_l$ (Scott and Knott, 1974a; Auger and Lawrence, 1989; Killick et al., 2012; Fearnhead, 2006; Fearnhead and Liu, 2007; Adams and MacKay, 2007). While the state space and segment length models are equivalent in terms of their likelihood distributions, their corresponding Bayesian inference procedures can be quite different in terms of tractability of the posterior distribution. In the continuous time offline setting, the posterior distribution of the segment length parameters $\zeta$ is not analytically tractable under a noninformative $\mathrm{Dir}(\mathbf{1}_k)$ prior (Stephens, 1994). The main contribution of this paper is the development of a heterogeneous continuous time Markov chain
$$\pi_k(z_t=h\mid z_s=j):=\pi(z_t=h\mid z_s=j,k)$$
for times $0<s<t<1$, that is noninformative in the offline setting and enjoys analytical posterior inferences without approximation or MCMC sampling.

1.1 State space models versus partition models

Chib (1998) was the first to show the connection between state variables and segment lengths for the online model. In this setting, segment lengths are assumed to follow a geometric distribution, $\pi_k^{(G)}(\zeta_j=i)=p_j^{i-1}(1-p_j)$ (Yao, 1984; Barry and Hartigan, 1993). Chib (1998) showed these segment lengths can be reparameterized in terms of state variables with transition probabilities $\pi_k^{(G)}(z_{i+1}=j\mid z_i=j)=p_j$ and $\pi_k^{(G)}(z_{i+1}=j+1\mid z_i=j)=(1-p_j)$. While these models are distributionally equivalent, the latter is a special case of a hidden Markov model and is equipped with methodological conveniences such as the forward backward algorithm for computing posterior expectations or analytic formulas for simulation (Chib, 1996; Fearnhead, 2006).

1.2 Offline modeling versus online modeling

In the current work, we operate in the retrospective (offline) setting and assume a priori that all change point sequences are equally likely. Please see Truong et al. (2020) for a review of offline approaches.
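The equal-prior sample space is easy to enumerate for small cases. The sketch below generates all change point sequences by choosing which of the $n$ transitions carry the $k-1$ jumps, recovering both the three-element sample space for $n=3$, $k=3$ discussed next and the count $\binom{n}{k-1}$ used in Section 2.

```python
from itertools import combinations
from math import comb

def change_point_sequences(n: int, k: int):
    """All sequences (z_0, ..., z_n) with z_0 = 1, z_n = k, z_{i+1} in {z_i, z_i + 1}.

    Each sequence is determined by the k-1 positions i (out of n) at which the
    state jumps from z_i to z_i + 1."""
    seqs = []
    for jumps in combinations(range(n), k - 1):
        z, state = [1], 1
        for i in range(n):
            if i in jumps:
                state += 1
            z.append(state)
        seqs.append(tuple(z))
    return sorted(seqs)

print(change_point_sequences(3, 3))
# [(1, 1, 2, 3), (1, 2, 2, 3), (1, 2, 3, 3)]
assert len(change_point_sequences(10, 4)) == comb(10, 3)
```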
For example, in discrete time, let $\Omega_{n,k}$ be the sample space of all change point process sequences. If $n=3$ and $k=3$, then $\Omega_{3,3}=\{\{1,1,2,3\},\{1,2,2,3\},\{1,2,3,3\}\}$, and each of these sequences is given equal prior probability in the offline setting. The corresponding prior $\pi_k(\zeta_1)$ is discrete uniform on $0,\dots,n$, and the conditional prior on the $j$th segment length, $\pi_k(\zeta_j\mid\sum_{l=1}^{j-1}\zeta_l=i_0)$, is discrete uniform on the remaining $n-(k-j)-i_0$ positions. The $(k-j)$ term is subtracted to ensure there are enough positions for the remaining segments. The last length $\zeta_k$ is restricted by $\sum_{j=1}^k\zeta_j=n$ (Stephens, 1994). However, the corresponding offline model for state variables $z$ has been unexplored in both discrete and continuous time. We develop both approaches in this work.

1.3 Continuous time versus discrete time

In continuous time, the noninformative prior on segment lengths is $\zeta\sim\mathrm{Dir}(\mathbf{1}_k)$, but the posterior distribution of this model is intractable and requires MCMC sampling (Stephens, 1994) or an approximation. For this reason, we take a different approach. First, noninformative priors for the state variables $z$ are developed in discrete time and then relaxed to continuous time. We then show our model of the continuous time state variables $\{z_{t_i}\}_{i=0}^n$ is distributionally equivalent to $\zeta\sim\mathrm{Dir}(\mathbf{1}_k)$ using the relationship $\mathbb{1}\{z_{t_i}=j\}=\mathbb{1}\{\sum_{l=1}^{j-1}\zeta_l\le t_i<\sum_{l=1}^j\zeta_l\}$. In doing so, we are able to derive the heterogeneous
continuous time Markov chain $\pi_k(z_t=h\mid z_s=j)$ for $0\le s<t\le1$ and $h\ge j$, for which exact posterior inference procedures are analytically available without MCMC or approximation.

1.4 Modeling environmental changes using satellite imagery

Change detection is an important and challenging problem for remote sensing data. Applications include detecting land cover change from both natural events (e.g., desertification, fires, etc.) and land use by humans (e.g., urbanization, agriculture, forestry, etc.) (Zhu and Woodcock, 2014; Keenan et al., 2014; Zhu et al., 2020). Due to the high frequency of missing data in available remote sensing data sources and the variability in satellite periodicity, the data are collected in continuous time and as such require continuous time methods for their analysis. In this work, we provide three case studies to demonstrate that our model generalizes across a range of situations. The first is a deforestation example in the Rondonia region of the Amazon rainforest, the second is an agricultural land management example in the San Joaquin Valley, California, and the third is a study of vegetative drought detection in a semi-arid region in Texas.

1.5 Paper structure

The paper is structured as follows. In Section 2, we develop our retrospective Bayesian change detection model in discrete time by deriving noninformative marginals $\pi_k(z_i=j)$ and extending them to their corresponding transition probabilities. In Section 3, we derive the continuous time marginal distribution $\pi_k(z_t=j)$ and prove these marginals have a distributional equivalence to the noninformative prior $\zeta\sim\mathrm{Dir}(\mathbf{1}_k)$. In Section 4, we derive the heterogeneous continuous time Markov chain $\pi_k(z_t=h\mid z_s=j)$ for $0\le s<t\le1$ and $h\ge j$ under the noninformative prior measure. In Section 5, we develop a methodology for inference using expectation maximization, a novel loss function suited for the continuous time change point problem, and derive the Bayes estimator for that loss function.
The Bayes estimator can be computed with the posterior moments made available by the forward backward algorithm. We then extend our model to handle outlier observations. In Section 6, we provide a simulation study and compare our method to other popular change detection methods. In Section 7, we introduce a semiparametric model that captures interannual variation due to variation in weather and derive constraints on that function to ensure its continuity. Finally, in Section 8, we provide case studies of change detection examples using remote sensing, including detecting deforestation, crop management, and detecting shrub and grassland drought responses to interannual variation in weather in semi-arid regions.

2 Noninformative Priors in Discrete Time

Let $\Omega_{n,k}$ be the sample space of all change point process sequences in discrete time with $n+1$ time points (including time 0) and $k$ segments. The cardinality of $\Omega_{n,k}$ is $\binom{n}{k-1}$, since there are $n$ positions from which to choose the $k-1$ changes. Now suppose we define a probability measure $\pi$ that places equal probability on all change point sequences in $\Omega_{n,k}$. Using the same counting argument as above, the marginal probability $\pi_k(z_i=j)$ can be evaluated by counting the number of change point sequences that occur before $z_i=j$ and the number that occur after. That is, the number of ways to choose $j-1$ changes from $i$ time points, times the
number of ways to choose $k-j$ change points from $n-i$ time points.

Proposition 1 (Marginal noninformative prior in discrete time). The marginal noninformative prior on the state space $\{z_i\}_{i=0}^n$ in discrete time is hypergeometric distributed,
$$\pi_k(z_i=j)=\frac{\binom{n-i}{k-j}\binom{i}{j-1}}{\binom{n}{k-1}}.$$

An example of these discrete time marginals is represented as the dotted lines in Figure 1 (Left). While this may be interesting, it is not immediately useful since the joint distribution of $z$ is not a product of marginals. But, what is clear from the definition of a change point process is that each state variable only depends on the previous state, and thus the joint distribution can be factored as a product of transition probabilities. To compute the transition probabilities, start by noting that $z_{i+1}=1$ implies $z_i=1$ by the definition of a change point process. Then, since the transition probability is the joint distribution divided by the marginal, we have $\pi_k(z_{i+1}=1\mid z_i=1)=\pi_k(z_{i+1}=1)/\pi_k(z_i=1)$. In a similar fashion we can write
$$\pi_k(z_{i+1}=2)=\big[1-\pi_k(z_{i+1}=1\mid z_i=1)\big]\,\pi_k(z_i=1)+\pi_k(z_{i+1}=2\mid z_i=2)\,\pi_k(z_i=2)$$
and solve for $\pi_k(z_{i+1}=2\mid z_i=2)$. Proceeding recursively, and representing all transitions as functions of marginals, we arrive at the noninformative discrete time transition probabilities for the retrospective model:

Proposition 2. The noninformative transition probabilities of the state space variables $\{z_i\}_{i=0}^n$ in discrete time are functions of the noninformative marginals,
$$\pi_k(z_i=j\mid z_{i-1}=j)=\frac{\sum_{l=1}^j\pi_k(z_i=l)-\sum_{l=1}^{j-1}\pi_k(z_{i-1}=l)}{\pi_k(z_{i-1}=j)}.$$

The proposition shows that in discrete time the noninformative transition probabilities from $z_i$ to $z_{i+1}$ are a direct functional of the hypergeometric distributed marginals. Please see Figure 1 (Right) for a demonstration of these noninformative transition probabilities. Note how each state only has non-zero probability under two conditions. The first condition is that enough observations have been collected to identify that segment.
For example, $z_2$ cannot equal 3 since there have only been two observations. The second condition is that enough observations remain to exhaust all $k$ segments. For example, $z_{n-1}$ cannot equal $k-2$ since there is only one observation remaining and $z_n=k$ by definition.

3 Relaxing to Continuous Time

To derive the continuous time Markov chain for the state variables $\{z_{t_i}\}_{i=0}^n$, it will be helpful to first derive the continuous time marginals $\pi_k(z_{t_i}=j)$ so that we can later evaluate the transitions via $\pi_k(z_{t_{i+1}}=h\mid z_{t_i}=j)=\pi_k(z_{t_{i+1}}=h,z_{t_i}=j)/\pi_k(z_{t_i}=j)$. We start with the discrete time, hypergeometric distributed marginals from Proposition 1 and derive their convergence in distribution as time becomes continuous. To that end, define time in the interval $t\in[0,1]$. To relate discrete time to continuous time, let continuous time $t$ be mapped to discrete time through the function $i(t)=\lfloor tn\rfloor$.

Theorem 1 (Noninformative marginal convergence: Hypergeometric to Bernstein). Let $t\in[0,1]$. The limit of the marginal prior probability of state $j$ at discrete time $i(t)=\lfloor tn\rfloor$,

Figure 1: (Left) An example of noninformative discrete time hypergeometric state marginals (dotted), $n=10$ and $k=4$, with green as segment 1 and pink as segment 4. Noninformative continuous
https://arxiv.org/abs/2504.17876v1
time state marginals are the corresponding solid lines. (Right) Discrete time noninformative transition probabilities for n = 25 and k = 10 for illustration. The colors range from π_10(z_{i+1} = 1 | z_i = 1) (purple) to π_10(z_{i+1} = 10 | z_i = 10) (pink).

as n → ∞, and after normalizing the index i(t)/n = ⌊tn⌋/n, converges to the Bernstein polynomial distribution,

\[
\pi_k(z_t = j) = \binom{k-1}{j-1} t^{j-1} (1-t)^{k-j}.
\]

Please see Figure 1 (Left) for an example of these continuous-time marginals compared with the discrete time marginals from Proposition 1.

3.1 Relating state variables with segment lengths

Now that we have continuous time marginals, the final piece to solving the continuous time transition probabilities is evaluating π_k(z_{t_{i+1}} = h, z_{t_i} = j). Recall from our discussion in Section 1 that there is an equivalence relation between the two parameterizations, namely,

\[
\mathbf{1}(z_{t_i} = j) = \mathbf{1}\Big( \sum_{l=1}^{j-1} \zeta_l \le t_i < \sum_{l=1}^{j} \zeta_l \Big).
\]

Furthermore, the noninformative prior on segment lengths is ζ ∼ Dir(1_k) (Stephens, 1994). Having distributional equivalence between 1(z_{t_i} = j) and 1(Σ_{l=1}^{j−1} ζ_l ≤ t_i < Σ_{l=1}^{j} ζ_l) would provide a path for computing the joint probabilities, since

\[
\pi_k(z_{t_{i+1}} = h,\ z_{t_i} = j) = \pi_k\Big( \sum_{l=1}^{h-1} \zeta_l \le t_{i+1} < \sum_{l=1}^{h} \zeta_l,\ \sum_{l=1}^{j-1} \zeta_l \le t_i < \sum_{l=1}^{j} \zeta_l \Big). \tag{1}
\]

We prove this equivalence in the following theorem.

Theorem 2 (Distributional equivalence in continuous time). Let segment lengths ζ ∼ Dir(1_k) and let z_t be the random vector defined by the indicators 1(z_t = j) := 1(Σ_{l=1}^{j−1} ζ_l ≤ t < Σ_{l=1}^{j} ζ_l) for j = 1, …, k. Then the marginals of z_t are Bernstein polynomial distributed,

\[
\pi_k(z_t = j) = \binom{k-1}{j-1} (1-t)^{k-j} t^{j-1}.
\]

This theorem establishes the duality between the state variables {z_{t_i}}_{i=0}^n and the segment lengths {ζ_j}_{j=1}^k in the offline setting and provides the tools to evaluate the joint probability π_k(z_{t_{i+1}} = h, z_{t_i} = j) needed for the continuous time transitions.

4 Continuous Time Change Point Processes

We are now in a position to derive the continuous time Markov chain π_k(z_t = h | z_s = j) for 0 ≤ s < t ≤ 1 and h ≥ j.
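The convergence in Theorem 1 can be checked numerically. The sketch below (function names are ours) compares the hypergeometric marginal at i = ⌊tn⌋ for a large n against the Bernstein polynomial limit:

```python
from math import comb

def discrete_marginal(i, j, n, k):
    # Hypergeometric marginal from Proposition 1
    return comb(n - i, k - j) * comb(i, j - 1) / comb(n, k - 1)

def bernstein_marginal(t, j, k):
    # Limiting Bernstein polynomial marginal from Theorem 1
    return comb(k - 1, j - 1) * t ** (j - 1) * (1 - t) ** (k - j)

n, k, t = 100_000, 4, 0.3
for j in range(1, k + 1):
    # The discrete marginal at i = floor(t*n) approaches the Bernstein limit.
    assert abs(discrete_marginal(int(t * n), j, n, k) - bernstein_marginal(t, j, k)) < 1e-3
```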
Since we have the marginals from Theorem 1, we only need to evaluate the joint probability π_k(z_t = h, z_s = j). By the distributional equivalence in Theorem 2, the segment lengths in Equation 1 are distributed Dirichlet with parameter 1_k. Putting these tools together to evaluate π_k(Σ_{l=1}^{h−1} ζ_l ≤ t_{i+1} < Σ_{l=1}^{h} ζ_l, Σ_{l=1}^{j−1} ζ_l ≤ t_i < Σ_{l=1}^{j} ζ_l), we have the following.

Theorem 3. For times 0 ≤ s < t ≤ 1 and states j = 1, …, k and h = j, …, k, we have the following transition probabilities P_{jh}(s, t) := π_k(z_t = h | z_s = j):

\[
P_{jh}(s, t) = \binom{k-j}{h-j} \left( 1 - \frac{1-t}{1-s} \right)^{h-j} \left( \frac{1-t}{1-s} \right)^{k-h} = b_{h-j,\,k-j}\!\left( \frac{t-s}{1-s} \right),
\]

where b_{ν,n}(x) = \binom{n}{ν} x^ν (1−x)^{n−ν} is the ν-index, n-degree Bernstein polynomial. Furthermore, these transition probabilities satisfy the Kolmogorov equations, P_{jh}(s,t) = Σ_{l=j}^{h} P_{jl}(s,r) P_{lh}(r,t) for 0 ≤ s < r < t ≤ 1.

The proof is in the appendix. There are a number of interesting corollaries of Theorem 3. For instance, the continuous time marginals of z_t from Theorem 1 are verified by plugging in p_j(t) := P_{1,j}(0,t) = \binom{k-1}{j-1} t^{j-1}(1-t)^{k-j}. Also, in particular, self-transitions are given by P_{jj}(s,t) = [(1−t)/(1−s)]^{k−j}, which verifies that once state k is reached, the chain stays in state k: P_{kk}(s,t) = 1.

Figure 2: An example of continuous time transitions on 25 uniformly distributed times and 5 segments. (Left)
Self-transitions π_5(z_{t_{i+1}} = j | z_{t_i} = j). Pink is j = 1, with the remaining in gray, following in increasing order of probability. (Right) Transitions π_5(z_{t_{i+1}} = h | z_{t_i} = 1) for h = 1, …, k−1. Pink is h = 1, followed by green, yellow, orange, and brown.

Note, one important difference between change point modeling in discrete time versus continuous time is that more than one change can occur between two consecutive observations. For this reason, our prior specification includes transitions from state j to any of h = j, …, k. The prior distribution for the vector z of continuous time state variables is

\[
\pi_k(\mathbf{z}) = \prod_{i=1}^{n} \prod_{j=1}^{k} \prod_{h=j}^{k} \pi_k(z_{t_i} = h \mid z_{t_{i-1}} = j)^{\mathbf{1}\{z_{t_i} = h\}\,\mathbf{1}\{z_{t_{i-1}} = j\}}. \tag{2}
\]

For the remainder of this work, we refer to this prior as the Bernstein polynomial process, or BPP, and to change detection models that assume this prior as BPP models. Figure 2 (Left) captures important behaviors of the self-transitions π_k(z_{t_{i+1}} = j | z_{t_i} = j) for an example of 25 time points and k = 5 segments. When observations are far apart in time, the probability of staying in the same state drops; when observations are very close in time, the probability jumps close to 1. This behavior reflects our noninformative belief that a change is more likely to occur as more time elapses between observations. Using the same example, we can also study the transitions π_k(z_{t_{i+1}} = h | z_{t_i} = 1) for h = 1, …, k−1 in the (Right) panel of Figure 2. Notice the probability of staying in state 1 dominates for the first half of time, but as time comes close to an end, the probabilities of transitioning to higher states take over.

5 Methodology

The main benefit of Theorem 3 is that the complete data likelihood and priors can now be expressed in terms of state variables z ∼ BPP drawn from a Bernstein polynomial process, as opposed to segment lengths ζ ∼ Dir(1_k), which are hard to perform posterior inference on.
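The Bernstein polynomial transitions from Theorem 3 that define the BPP are a few lines of code; the sketch below is our own illustrative implementation (not the authors' released code), and it numerically verifies both the row normalization and the Kolmogorov (Chapman-Kolmogorov) property:

```python
from math import comb

def P(j, h, s, t, k):
    """Transition pi_k(z_t = h | z_s = j) from Theorem 3: the Bernstein
    polynomial b_{h-j, k-j} evaluated at x = (t - s)/(1 - s)."""
    if h < j:
        return 0.0
    x = (t - s) / (1 - s)
    return comb(k - j, h - j) * x ** (h - j) * (1 - x) ** (k - h)

k, s, r, t = 5, 0.2, 0.5, 0.8
for j in range(1, k + 1):
    # Each row is a proper distribution over the reachable states h = j, ..., k.
    assert abs(sum(P(j, h, s, t, k) for h in range(j, k + 1)) - 1.0) < 1e-12
    for h in range(j, k + 1):
        # Chapman-Kolmogorov: composing through any intermediate time r agrees.
        composed = sum(P(j, l, s, r, k) * P(l, h, r, t, k) for l in range(j, h + 1))
        assert abs(P(j, h, s, t, k) - composed) < 1e-12

# Once state k is reached, the chain stays there.
assert P(k, k, s, t, k) == 1.0
```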
With this state variable parameterization, efficient inference using EM is possible since the marginal and pairwise posterior expectations of [z | y, Θ_k] can be determined using the forward-backward algorithm. Furthermore, posterior samples of the full vector [z | y, Θ_k] can be simulated exactly within a broader Gibbs approach (Chib, 1996) which does not require a Metropolis-Hastings step. In contrast, the posterior distribution [ζ | y, Θ_k] requires a component-wise MCMC on each ζ_j or a posterior approximation (Stephens, 1994; Chib, 1998). We derive this simulation approach in the appendix and for the remainder of the paper focus on an EM approach.

In this section, we will characterize our model end-to-end, providing prior justifications on the change point locations, number of segments, and parameterizations. We will introduce an additional latent variable framework for modeling heavy tailed error distributions, and finally propose a novel change detection loss function and derive its Bayes estimator in discrete and continuous time.

5.1 Model

For a fixed number of segments k, the model can be characterized by its prior distribution on the change point process z, its likelihood distribution f, and its prior on the parameters Θ_k:

\[
\prod_{i=0}^{n} \prod_{j=1}^{k} \big[ f(y_{t_i} \mid \theta_j)\, p(\theta_j) \big]^{\mathbf{1}\{z_{t_i} = j\}} \prod_{i=0}^{n-1} \prod_{j=1}^{k} \prod_{h=j}^{k} \pi_k(z_{t_{i+1}} = h \mid z_{t_i} = j)^{\mathbf{1}\{z_{t_{i+1}} = h\}\,\mathbf{1}\{z_{t_i} = j\}}.
\]

We assume the parameters are independent across segments
θ_j ⊥⊥ θ_l for j ≠ l, as well as conditional independence of the likelihood observations given the model, y_i ⊥⊥ y_j | z, Θ_k. The choice of likelihood distribution f also does not affect our main results; the EM and simulation procedures for the posterior distribution of z are analytically tractable regardless of the likelihood distribution. The maximization steps for Θ_k in the EM approach and the posterior distribution of [Θ_k | y, z] in the simulation approach are the only steps whose analytical tractability depends on the form of the likelihood. In this work, the likelihood distribution f is assumed to be Gaussian.

5.2 Robustness to outliers

There is a tradeoff between outliers and change points. Roughly, if there is an outlier very far from its conditional expectation, there may be more evidence for placing two change points directly before and after the outlier, even though it is not a true change. One way to address this problem is to assume a likelihood distribution with heavy tails. To that end, we introduce auxiliary variance scaling parameters drawn i.i.d. q_{t_i} ∼ Ga(ν/2, ν/2) such that [y_{t_i} | θ_j, σ²_k, q_{t_i}] ∼ N(θ_j, σ²_k/q_{t_i}) and the marginal distribution [y_{t_i} | θ_j, σ²_k] ∼ lst(θ_j, σ²_k, ν) is location-scale t-distributed with ν degrees of freedom. This approach extends that of Little and Rubin (2019) to the regression and change point settings.

There are two major methodological benefits to introducing these auxiliary parameters. The first is that within EM or a Gibbs sampling framework, using a Gaussian likelihood, the conditional posterior distribution [q_{t_i} | y_{t_i}, θ_j, σ²_k] is gamma distributed, making expectations and simulation straightforward. Furthermore, the maximization steps for θ_j and σ²_k remain analytically tractable since the conditional likelihood is Gaussian. The second major benefit is that the marginal likelihood is t-distributed, enabling analytically tractable inference of posterior moments of z using the forward-backward algorithm.
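The scale-mixture construction above is easy to simulate. The sketch below (parameter values are illustrative) draws q ∼ Ga(ν/2, ν/2) and then y | q ∼ N(θ, σ²/q), which marginally produces t-distributed draws with visibly heavier tails than a Gaussian:

```python
import random

random.seed(0)
nu, theta, sigma = 3.0, 0.0, 1.0
ys = []
for _ in range(200_000):
    # q ~ Ga(shape = nu/2, rate = nu/2); gammavariate takes (shape, scale = 1/rate).
    q = random.gammavariate(nu / 2, 2 / nu)
    # y | q ~ N(theta, sigma^2 / q); marginally y ~ lst(theta, sigma^2, nu).
    ys.append(random.gauss(theta, sigma / q ** 0.5))

# Heavier tails than N(0, 1): P(|Y| > 3) is roughly 0.058 for nu = 3,
# versus roughly 0.0027 for the standard Gaussian.
tail = sum(abs(y) > 3 for y in ys) / len(ys)
assert 0.04 < tail < 0.08
```

Note the rate parameterization of the gamma distribution is assumed here, so that E[q] = 1 and the mixture leaves the scale of y unchanged on average.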
These benefits are detailed in subsection 5.4.

5.3 Prior on number of segments

We would like to discuss two different assumptions that lead to two different priors, respectively, on the number of segments. Typically, researchers choose the prior on the number of segments proportional to the volume of the space of change point sequences associated with that number of segments (Chib, 1998; Fearnhead, 2006; Peluso et al., 2019). For example, in the geometric online setting described in those works, the implied prior probability on the number of segments is binomial distributed. In the offline/retrospective setting, all change point sequences are equally likely a priori, and thus, under that reasoning, this would lead to a prior on the number of segments that is proportional to their volume of change point sequences. In the following, we challenge this assumption, noting that just because we assume sequences are equally likely within each number of segments k, this does not oblige us to carry that assumption across the number of segments k, as is typically assumed.

5.3.1 Argument for using noninformative inverse volume

On the one hand, we could continue to assume all change point sequences are equally likely across k = 1, …, K, where K is the maximum number of segments, but this would
lead to a combinatorially increasing prior probability with respect to the number of segments, which may not be believable. For example, in the discrete time offline setting, this would lead to π(k) ∝ \binom{n}{k-1}, that is, the normalizing constant found in Proposition 1. In the continuous time setting, note from Theorem 3 that, after removing all terms that depend on j or h from the transition probability π_k(z_{t_i} = h | z_{t_{i−1}} = j), the remaining constant is [(1−t_i)/(1−t_{i−1})]^k, and thus the normalizing constant is the inverse of that value. As such, we would have

\[
\pi_0(k) \propto \prod_{i=1}^{n} \left( \frac{1-t_i}{1-t_{i-1}} \right)^{-k}
\]

for k = 1, …, K under the assumption of equally likely change point sequences across k = 1, …, K.

On the other hand, we may assume change point sequences are only equally likely within each k = 1, …, K, but that the prior on the number of segments should be noninformative with respect to its volume of change point sequences. In this case, the prior probability of each k should be inversely proportional to its volume of sequences. In the discrete time setting, this amounts to π(k) ∝ \binom{n}{k-1}^{-1}, and in the continuous time setting

\[
\pi(k) \propto \prod_{i=1}^{n} \left( \frac{1-t_i}{1-t_{i-1}} \right)^{k}
\]

for k = 1, …, K. This prior is more attractive, for example, in remote sensing applications where we expect a small number of changes on the ground.

5.3.2 Argument for incorporating parameter space volume

We also extend this reasoning to the volume of the parameter space associated with each number of segments k. For example, suppose we are modeling changes in the mean parameters of a regression with constant variance across segments. If we assume a Gaussian prior for the mean parameters θ_j | σ²_k ∼ N(0, σ²_k Φ) and an improper prior for the variance, p(σ²_k) ∝ 1/σ²_k on some reasonable closed interval for σ²_k, their joint distribution is

\[
p(\Theta_k, \sigma^2_k) = \prod_{j=1}^{k} (2\pi\sigma^2_k)^{-\frac{p}{2}} |\Phi^{-1}|^{\frac{1}{2}} \exp\!\left( -\frac{1}{2\sigma^2_k}\, \theta_j^T \Phi^{-1} \theta_j \right) \frac{1/\sigma^2_k}{C_{\sigma^2_k}},
\]

where dim(θ_j) = p × 1.
Removing all terms that do not depend on Θ_k, σ²_k, we have that the volume of the parameter space is (2π)^{pk/2} |Φ^{−1}|^{−k/2} C_{σ²_k}. As (Θ_k, σ²_k) ⊥⊥ z a priori, the volume of their joint distribution is the product of their volumes. Under the assumption that change point sequences are equally likely across k = 1, …, K,

\[
\pi_0(k) \propto (2\pi)^{\frac{pk}{2}} |\Phi^{-1}|^{-\frac{k}{2}} \prod_{i=1}^{n} \left( \frac{1-t_i}{1-t_{i-1}} \right)^{-k}. \tag{3}
\]

Whereas, under the assumption that the number of segments is noninformative with respect to its corresponding volume, we invert the normalizing constant and obtain

\[
\pi(k) \propto (2\pi)^{-\frac{pk}{2}} |\Phi^{-1}|^{\frac{k}{2}} \prod_{i=1}^{n} \left( \frac{1-t_i}{1-t_{i-1}} \right)^{k}. \tag{4}
\]

Figure 3: Two priors on number of segments are compared. Twenty time points are simulated from a uniform distribution on [0, 1], and π(k) is plotted for each. The prior on k assuming equally likely change point sequences across k = 1, …, 4 is in gray, reaching maximum probability at k = 4. The prior on k assuming k is noninformative with respect to the volume of its model is
in pink, having maximum probability at k = 1.

We compare these priors from Equations 3 and 4 in Figure 3 across 2000 samples of time from a uniform distribution for an intercept-only model. Furthermore, we examine the performance of both priors in the case study and find the noninformative prior on the number of segments from Equation 4 has largely better performance.

5.4 Expectation Maximization

In many applications of change detection models, particularly in remote sensing data with trillions of time series to analyze, efficient estimation procedures are necessary. The forward-backward algorithm within an Expectation Maximization (EM) framework is an efficient inference algorithm for hidden Markov models. The algorithm uses dynamic programming to evaluate the posterior moments of z conditioned on maximum a posteriori point estimates of the parameters (Bishop, 2006; Dempster et al., 1977). Details for the EM algorithm are available in a wide variety of sources (Little and Rubin, 2019). As noted earlier, the main contribution of this paper is the continuous time Markov chain from Theorem 3, enabling efficient and exact inference for this model, whereas MCMC or approximate methods were required before.

Define the Q function as the expectation of the log complete data likelihood with respect to the posterior distribution [z, q | y, Θ^{(s)}_k, σ²^{(s)}_k] for the s-th iteration of the algorithm. Plugging in a Gaussian likelihood for the function f and removing any terms that are not a factor of Θ_k or σ²_k, we arrive at

\[
Q(\Theta_k \mid \Theta_k^{(s)}) \overset{(c)}{=} \mathbb{E}_{q,z \mid y, X, \Theta_k^{(s)}} \left[ \sum_{i=0}^{n} \sum_{j=1}^{k} \mathbf{1}\{z_{t_i} = j\} \left( -\log(\sigma_k) - \frac{q_{t_i}}{2\sigma_k^2} (y_{t_i} - x_{t_i}^T \theta_j)^2 \right) \right] + \log p(\Theta_k, \sigma_k^2),
\]

where we assume a Gaussian prior for the mean parameters θ_j | σ²_k ∼ N(0, σ²_k Φ) to represent our prior belief that the mean parameters are not far from zero, and an improper prior for the variance, p(σ²_k) ∝ 1/σ²_k. The first step of the EM algorithm is to evaluate the posterior expectations of the relevant terms in the Q function.
In this case, we evaluate the posterior expectations of 1{z_{t_i} = j} q_{t_i} and 1{z_{t_i} = j}. The first of these can be evaluated as the product of a conditional expectation and a marginal expectation,

\[
\mathbb{E}_{z_{t_i}, q_{t_i} \mid y, X, \Theta_k^{(s)}} \big[ \mathbf{1}\{z_{t_i} = j\}\, q_{t_i} \big] = \mathbb{E}\big[ q_{t_i} \mid z_{t_i} = j, y, X, \Theta_k^{(s)} \big]\; \mathbb{E}_{z_{t_i} \mid y, X, \Theta_k^{(s)}} \big[ \mathbf{1}\{z_{t_i} = j\} \big].
\]

The conditional expectation can be evaluated using the posterior distribution

\[
\big[ q_{t_i} \mid z_{t_i} = j, y_{t_i}, X, \theta_j^{(s)}, \sigma_k^{2(s)} \big] \overset{(d)}{=} \mathrm{Ga}\!\left( \frac{\nu + 1}{2},\ \frac{\nu}{2} + \frac{(y_{t_i} - x_{t_i}^T \theta_j^{(s)})^2}{2\sigma_k^{2(s)}} \right).
\]

The expectations of 1{z_{t_i} = j} can be evaluated using the forward-backward algorithm. After these expectations are evaluated, the Q function can be optimized with respect to the parameters Θ_k and σ²_k. The M-steps for the mean parameters can be evaluated analytically since the likelihood is Gaussian,

\[
\theta_j^{(s+1)} = \left( X^T W_j^{(s)} X + \Phi^{-1} \right)^{-1} X^T W_j^{(s)} y,
\]

where W^{(s)}_j is a diagonal matrix with entries E[1{z_{t_i} = j} q_{t_i} | y, Θ^{(s)}_k, σ²^{(s)}_k]. The M-step for the variance σ²_k can also be evaluated analytically,

\[
\sigma_k^{2(s+1)} = \frac{ \sum_{i=0}^{n} \sum_{j=1}^{k} \mathbb{E}\big[ \mathbf{1}\{z_{t_i} = j\}\, q_{t_i} \mid y, \Theta^{(s)}, \sigma_k^{2(s)} \big] \big( y_{t_i} - x_{t_i}^T \theta_j^{(s+1)} \big)^2 + \sum_{j=1}^{k} \theta_j^{(s+1)T} \Phi^{-1} \theta_j^{(s+1)} }{ n + pk + 2 },
\]

where p is the dimension of θ_j for all j. After the M-step is complete, the E-step is then repeated conditioned on the updated parameters. The algorithm is repeated until convergence of the Q function. Additional details are in the appendix.
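To make the M-steps concrete, here is a small numerical sketch, with synthetic data, a single segment, and an illustrative prior precision Φ⁻¹ = I (none of these choices come from the paper), of the ridge-like weighted least squares update for θ_j and the closed-form variance update:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))                 # design matrix
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + rng.normal(scale=0.1, size=n)
w = rng.uniform(0.5, 1.5, size=n)           # stand-in E-step weights E[1{z=j} q | ...]
Phi_inv = np.eye(p)                         # illustrative prior precision

# M-step for theta_j: (X^T W X + Phi^{-1})^{-1} X^T W y
W = np.diag(w)
theta = np.linalg.solve(X.T @ W @ X + Phi_inv, X.T @ W @ y)

# M-step for sigma^2_k with k = 1 segment (so pk = p)
sigma2 = ((w * (y - X @ theta) ** 2).sum() + theta @ Phi_inv @ theta) / (n + p + 2)
```

With moderate noise the shrinkage from Φ⁻¹ is mild, so θ lands near the generating coefficients; in the full algorithm the weights w would come from the forward-backward E-step rather than being drawn at random.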
5.5 Marginal posterior distribution on number of segments

A major benefit of reparameterizing the change point problem using state variables z ∼ BPP within a hidden Markov model is that the marginal likelihood can be computed using results from the forward-backward algorithm. The forward recursions for this model represent the joint probability a_j(i) = p(y_{t_0}, …, y_{t_i}, z_{t_i} = j | Θ_k, σ²_k) and take the form

\[
a_j(0) = \begin{cases} f(y_{t_0} \mid \theta_1, \sigma_k^2) & \text{if } j = 1, \\ 0 & \text{else,} \end{cases}
\qquad
a_j(i+1) = f(y_{t_{i+1}} \mid \theta_j, \sigma_k^2) \sum_{l=1}^{j} a_l(i)\, \pi_k(z_{t_{i+1}} = j \mid z_{t_i} = l),
\]

where f is the location-scale t-distributed likelihood described in subsection 5.2. Note, however, marginal likelihoods can be obtained using this approach for general conditional likelihood distributions. The marginal likelihood is given by f(y | Θ_k, σ²_k) = Σ_{j=1}^{k} a_j(n). Since the integral of f(y | Θ_k, σ²_k) p(Θ_k, σ²_k) over Θ_k and σ²_k is intractable, we use a Laplace approximation, keeping only the terms associated with the Bayesian information criterion for computational purposes (Schwarz, 1978; Tierney and Kadane, 1986; Konishi and Kitagawa, 2008; Killick et al., 2012). As such, the log marginal posterior distribution on the number of segments is approximated up to a normalizing constant,

\[
\log p(k \mid y) \overset{(c)}{\approx} \log f(y \mid \hat{\Theta}_k, \hat{\sigma}_k^2) - \frac{p_k}{2} \log(n) + \log p(k), \tag{5}
\]

where p_k = dim(Θ_k) + 1 and the prior p(k) is from Equation 4, established in subsection 5.3.

5.6 Loss function for change point locations

In this section, we introduce a loss function on change point locations and derive a Bayes estimator for that loss function in both discrete and continuous time. Define τ_j = Σ_{l=1}^{j} ζ_l for j = 1, …, k−1 as change point location parameters. To avoid identifiability issues, we restrict these change point locations to be at observed times t_i ∈ [0,1] for continuous time or t_i ∈ {0, …, n} for discrete time. For a specified number of changes k−1, a natural loss function for comparing two change point configurations is the absolute loss between the τ_j locations, L(τ, τ*) := Σ_{j=1}^{k−1} |τ_j − τ*_j|. See Truong et al.
(2020) for a review of other loss functions. This absolute loss in turn induces a weighted Hamming loss between change point state sequences z and z*:

\[
L(\tau, \tau^*) := \sum_{j=1}^{k-1} |\tau_j - \tau_j^*| = \sum_{i=1}^{n} \sum_{j=1}^{k-1} \mathbf{1}\big( \min\{\tau_j, \tau_j^*\} < t_i \le \max\{\tau_j, \tau_j^*\} \big)(t_i - t_{i-1}) = \sum_{i=1}^{n} |z_{t_i} - z_{t_i}^*|\,(t_i - t_{i-1}) = H(z, z^*).
\]

The second equality holds since the difference |τ_j − τ*_j| is the sum of the time increments within that window. The third equality holds since the indicator of an observed time t_i ∈ [min{τ_j, τ*_j}, max{τ_j, τ*_j}] can occur for multiple segments j, and thus the interval (t_i − t_{i−1}) should be summed |z_{t_i} − z*_{t_i}| times. This final step yields a doubly weighted Hamming distance. Note that in discrete time each of the intervals (t_i − t_{i−1}) = 1, and so the distance reduces to summing over |z_{t_i} − z*_{t_i}|. Note also that the weighted Hamming loss does not depend on the number of change points in the configurations and is thus more general. We choose to find the Bayes estimator for H(z, z*) so we can infer change point locations and the number of change points simultaneously.

Theorem 4. The Bayes estimator for the weighted Hamming loss H(z, z*) between change point process realizations {z_{t_i}}_{i=0}^n is, for each z_{t_i},

\[
\hat{z}_{t_i} = \min \left\{ j : \sum_{k=1}^{K} \sum_{l=1}^{j} \pi(z_{t_i} = l \mid k, y)\, \pi(k \mid y) \ge 0.5 \right\}.
\]

That is, the Bayes estimator for z is the
component-wise medians. Furthermore, the Bayes estimator {ẑ_{t_i}}_{i=1}^n is a change point process. This result holds in both the discrete and continuous time settings.

Using our EM inference procedure, we estimate this Bayes estimator as follows. The π(k | y) term is estimated using Equation 5, and the π(z_{t_i} = l | k, y) terms are estimated using the marginal expectations E[z_{t_i} = l | k, Θ^{(s)}, y] computed in the last, s-th iteration of the forward-backward algorithm.

6 Simulation Study

Our simulation study is aimed at characterizing the performance of different change point models across a variety of conditions within a factorial setup. Each combination in the factorial study has 100 replicates. All simulated datasets are intercept-only models with constant variance across segments and scaled t-distributed error as described below. Here are the settings that make up the factorial study:

•Time and change point distributions: 1.) uniformly spaced discrete time with uniformly distributed change points (the time and change point distribution assumed in Killick et al. (2012)); 2.) Beta(0.5, 0.5) distributed time with BPP distributed change points; 3.) Beta(2, 2) distributed time with BPP distributed change points.

•Error variance: σ² ∈ (0.1, 0.2, 0.3)

•Robustness parameter: ν ∈ (3, 10, 100)

•Size of change in intercept: (0.1, 0.3, 0.5, 0.7, 0.9, 1.1)

•Number of segments: k = 1, …, 4

Figure 4: True positive rate against false positive rate on a synthetic data set with varying sizes of change, time distribution, outlier magnitude, and number of segments. The first row compares BPP, PELT, and BinSeg on all of the data from the factorial study, and the second row compares them on a subset when ν = 3.

There are six models being compared in this study.
All models explore up to K = 6 segments except for the PELT model, which does not support a maximum-number-of-segments argument.

•BPP: This is our main model, with transition probabilities according to Theorem 3, location-scale t-distributed likelihood with ν = 3 according to subsection 5.2, and the noninformative prior on the number of segments from Equation 4.

•PELT and BinSeg: These are two popular models used for change detection (Killick et al., 2012; Scott and Knott, 1974b). Both models are available in the changepoint R package (Killick and Eckley, 2014). These models assume observations are from uniformly spaced discrete time intervals. We used the default setting for the cpt.mean function, which assumes a Normal cost function and a Modified BIC penalty (Zhang and Siegmund, 2007). Readers are encouraged to learn more about these models in the references above.

Figure 5: Breakdown of commission rate, omission rate, and F1 score for the BPP, PELT, and BinSeg models.

•BPP nonrobust: This model is the same as BPP except it assumes a Gaussian likelihood without robustness to outliers.

•Noninformative discrete time
model: This is our noninformative discrete time model from Propositions 1 and 2, with location-scale t-distributed likelihood from subsection 5.2 and the noninformative prior on the number of segments from Equation 4.

•BPPE: This is the BPP model but with the prior on the number of segments that assumes equally likely sequences across numbers of segments k, from Equation 3.

The results for the last three models are in the appendix. In total, there are 3³ · 6 · 4 · 100 = 64,800 datasets that are modeled. For each of the 648 different factorial settings, the 100 replicates were used to calculate the omission and commission rates for each model. In order to capture settings similar to our case study, each dataset is simulated with n = 500 observations over a 20-year period. Detected changes are considered true if they are within a 3-month window of the true change, which reduces to a 0.0225 window after time is mapped to [0,1] (Zhu and Woodcock, 2014; Zhu et al., 2020; Cohen et al., 2017). If two changes are detected within the window of a true change, the closer one is considered a true change and the other is considered a false positive (Killick et al., 2012). Those 648 results were then plotted as points with a kernel density estimator with bandwidth 0.5 as a visualization aid.

The performance of the BPP is notable in Figure 4 and Figure 5. It appears that the combination of continuous time state space modeling, a noninformative prior on the number of segments, and a robust likelihood leads to better performance than the other models. The results for all 6 models are broken down further by time distribution, error variance, robustness, and number of segments in the appendix. Those results confirm that each of the additional models (BPP without robustness, without a continuous time prior, or without a noninformative prior on the number of segments) performs worse than the full BPP model.
Notably, Figures 18 and 12 respectively show the discrete time noninformative model has poorer performance on the k = 1 datasets, whereas the BPP model does very well in the k = 1 setting. As the only difference between those two models is the assumption of discrete versus continuous time, this demonstrates the need for a continuous-time change detection model to avoid additional false positives in the continuous time setting.

Note in Figure 11 in the appendix, the main performance disparities between PELT, BinSeg, and BPP appear in the datasets with ν = 3, that is, the datasets with the heaviest-tailed error distribution; otherwise their performance is comparable. In Figure 17, the BPP nonrobust model also does well on the ν = 3 subset of datasets compared to PELT and BinSeg, demonstrating that, even though it assumes a Gaussian likelihood, its BPP continuous time prior achieves robustness to outliers by noninformatively down-weighting the probability of change in cases when an outlier occurs shortly after the previous observation.

While our BPP model and methodology offer orders of magnitude of computational improvement compared to MCMC methods, PELT and BinSeg
have much lower computational cost. In our simulation study, the BPP model runs in 3×10⁻¹ seconds per dataset, whereas the PELT and BinSeg models run in 7×10⁻⁴ and 9×10⁻⁴ seconds per dataset, respectively. This computational disparity lies in the difference of the model being assumed: PELT and BinSeg do not allow the penalty (the log prior in our setting) to depend on the number or location of change points (Killick et al., 2012), whereas the BPP model assumes a prior on both the number and location of change points due to its latent Markov chain. Future research may explore whether pruning can be used for inference in the BPP model to enjoy computational benefits similar to those enjoyed by PELT. Otherwise, a part of this computational gap may be closable by reimplementing our code in C, which we save for future work. Finally, we derive a full Bayesian approach for the BPP model using exact simulation for the conditional posterior of the continuous time state variables following Chib (1996) and test its performance on the synthetic data in the appendix as well. Code for running our proposed models can be found at https://github.com/daniel-s-cunha/BPP/.

7 Phenological Modeling with Multiple Change Points

Phenology is the study of the timing of biological activity over the course of a year, particularly in relation to climate. Phenological modeling of vegetation is often carried out using imagery data from Earth observation satellites. Spectral reflectances of Earth's surface are commonly combined to create vegetation indices, the most widely used of which is the Normalized Difference Vegetation Index (NDVI), which are then used to monitor seasonal changes in vegetation. Specifically, the NDVI exploits the fact that healthy leaves are highly reflective in near-infrared wavelengths and highly absorptive of light in the red wavelengths.
By taking the normalized difference of these two measurements, the NDVI provides an excellent surrogate measure for the amount of green leaf area on the ground:

\[
\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{Red}}{\mathrm{NIR} + \mathrm{Red}}.
\]

Our goal is to model and detect changes within this vegetation index over time, so that land cover changes due to environmental factors are not mistakenly modeled as phenological signal.

7.1 Harmonic regression

Let y be an observed time series of NDVI from a single pixel of satellite imagery. Let time be standardized such that t_i ∈ [0,1] by subtracting the minimum time and dividing by the total time interval. Let T be the time interval of the study with units in days. For an observation y_{t_i}, define the harmonic regression model as

\[
y_{t_i} \mid \theta, \sigma^2, q_{t_i} \overset{ind}{\sim} \mathcal{N}\!\left( \alpha + \beta t_i + \sum_{h=1}^{H} \big[ \gamma_h \sin(h\omega t_i) + \delta_h \cos(h\omega t_i) \big],\ \frac{\sigma^2}{q_{t_i}} \right), \tag{6}
\]

where θ = (α, β, {γ_h, δ_h}_{h=1}^H)^T are the mean model parameters, ω = 2πT/365 is the harmonic frequency, q_{t_i} ∼ Ga(ν/2, ν/2) is the robustness latent variable from subsection 5.2, and H is the number of harmonics in the model. Define a design matrix X such that x_{t_i}^T θ = α + βt_i + Σ_{h=1}^H [γ_h sin(hωt_i) + δ_h cos(hωt_i)]. This model is popular in the remote sensing community because the decomposition of phenological dynamics into an intercept, slope, and harmonics enables researchers to make inferences about seasonality as well as long-term trends
(Zhu and Woodcock, 2014).

7.2 Interannually varying harmonics

One limitation of the above model is that it assumes the mean function follows the same seasonality pattern each year. Change point detection algorithms that use the above model may exhibit higher false positive rates, since seasonal anomalies are not captured by the model and thus may be falsely detected as change points. To address this limitation, we introduce harmonic contrasts to the model, giving it the flexibility to capture interannual variation. Let l(t) be the year at time t and consider a harmonic contrast for each year as follows:

\[
y_{t_i} \mid \theta, \phi, \sigma^2, q_{t_i} \overset{ind}{\sim} \mathcal{N}\!\left( x_{t_i}^T \theta + \sum_{h=1}^{H} \big[ \gamma_{h,l(i)} \sin(h\omega t_i) + \delta_{h,l(i)} \cos(h\omega t_i) \big],\ \frac{\sigma^2}{q_{t_i}} \right),
\]

where φ = ({γ_{h,l}}_{h=1,l=1}^{H,J}, {δ_{h,l}}_{h=1,l=1}^{H,J})^T is the contrast parameter vector and x_{t_i}^T θ is the mean function without contrasts. Let W be the design matrix for the harmonic contrasts, designed such that w_{t_i}^T φ = Σ_{h=1}^H [γ_{h,l(i)} sin(hωt_i) + δ_{h,l(i)} cos(hωt_i)]. Our model can then be summarized in terms of the mean components and contrast components as

\[
y_{t_i} \mid \theta, \phi, \sigma^2, q_{t_i} \overset{ind}{\sim} \mathcal{N}\!\left( x_{t_i}^T \theta + w_{t_i}^T \phi,\ \frac{\sigma^2}{q_{t_i}} \right). \tag{7}
\]

7.3 Continuity constraints on the mean function and its derivative

Since we are interested in detecting changes in phenological signal, it is important that the mean x_t^T θ + w_t^T φ and its first derivative are continuous for all t ∈ [0,1], so there are no discontinuities that can be mistaken for changes in the intercept or slope. Thus, we introduce the following constraints.

Proposition 3. Placing continuity constraints, with respect to time, on the mean function and its first derivative in (7) yields the following linear constraints on the contrast harmonic parameters for each l-th year:

\[
\gamma_{H,l} = -\sum_{h=1}^{H-1} \gamma_{h,l} \quad \text{and} \quad \delta_{H,l} = -\sum_{h=1}^{H-1} \delta_{h,l}.
\]

These continuity constraints also have implications for how we design the prior for the contrast parameters.
Specifically, if the vector φ of all contrasts, including the H-th harmonic parameters, has the distribution φ ∼ N(0, Φ^{(0)}), then we need to adjust the prior by conditioning on the continuity constraints,

\[
\left[ \phi \,\middle|\, \Big\{ \gamma_{H,l} = -\sum_{h=1}^{H-1} \gamma_{h,l} \Big\}_{l=1}^{L},\ \Big\{ \delta_{H,l} = -\sum_{h=1}^{H-1} \delta_{h,l} \Big\}_{l=1}^{L} \right] \sim \mathcal{N}(0, \Phi^{(1)}),
\]

where the new covariance matrix Φ^{(1)} is derived in the appendix.

8 Case Study

Our case study aims to demonstrate robust continuous-time change point detection for three remote sensing examples. We chose canonical examples of how change detection is or can be used in this broad field of research, using data collected from Earth observation satellites. Data from the Landsat satellites are used in all three studies (Friedl et al., 2022). These imagery have a spatial resolution of 30 meters and a repeat frequency of 8 to 16 days, not accounting for missing data from clouds. To provide independent reference data allowing us to identify changes on the ground, we use temporally sparse, high-spatial-resolution imagery in Google Earth, and high-quality continuous precipitation data such as the standardized precipitation-evapotranspiration index and drought data (Yu and Gong, 2012; Beguería et al., 2014; Owens, 2007). Note that while these data sources are highly informative, they are not sufficiently comprehensive to determine all changes. However, they are sufficient for the purpose of demonstrating the robustness of our method.

The first study applies our method to the challenge of detecting deforestation in the
https://arxiv.org/abs/2504.17876v1
Rondonia region of the Amazon rainforest. The second study applies our method to the problem of detecting changes in land management in an agricultural field in the San Joaquin Valley of California, and the third study applies our method to detecting responses of semi-arid vegetation in Texas to drought and year-to-year variation in precipitation.

8.1 Case study model

The phenological model from Equation 7 is used with $H = 2$ harmonics, a maximum of $K = 6$ segments, and a parameter prior covariance that accounts for continuity constraints on the mean function and its first derivative, as detailed in the appendix. We assume a robustness parameter of $\nu = 3$. Changes are searched for in the mean parameters $\theta = \{\alpha, \beta, \{\gamma_h, \delta_h\}_{h=1}^{H}\}$ but not in the interannual harmonics $\phi = (\{\gamma_{h,l}\}_{h=1,l=1}^{H,J}, \{\delta_{h,l}\}_{h=1,l=1}^{H,J})$ nor in the error variance. For example, a change in the intercept $\alpha$ could represent deforestation or other changes where a land cover is removed or added. A change in trend $\beta$ can represent a growth pattern, a decline, or a stabilization. Changes in the mean harmonics $\{\gamma_h, \delta_h\}_{h=1}^{H}$ can capture events such as crop changes or land cover changes in general. While the interannual harmonics $\phi$ are necessary for capturing inter-seasonal variation in the phenological signature, we do not search for changes in these parameters, since they are temporally local parameters introduced for each year. We assume a single variance $\sigma^2$ across all segments to reflect our belief that measurement error is independent of the phenological signal.

8.2 Deforestation in Rondonia

Monitoring and limiting deforestation is of paramount importance towards slowing anthropogenically driven climate change. This can be difficult to do, especially in remote regions such as Amazonia, which are difficult to navigate and observe on the ground.
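Before turning to the individual studies, the model settings of Section 8.1 can be collected in one place. The key names below are illustrative; the paper does not define a code interface.

```python
# Case-study configuration (Section 8.1). Key names are our own shorthand,
# not an API published with the paper.
case_study_model = {
    "H": 2,      # harmonics in the seasonal mean function
    "K": 6,      # maximum number of segments
    "nu": 3,     # robustness parameter of the heavy-tailed likelihood
    # parameters searched for changes: intercept, slope, mean harmonics
    "change_params": ["alpha", "beta", "gamma_h", "delta_h"],
    # parameters held fixed across segments: interannual contrasts, error variance
    "shared_params": ["phi", "sigma2"],
}
```

Keeping the interannual contrasts and $\sigma^2$ out of the change search mirrors the modeling choice described above: they absorb year-to-year weather variation rather than signal structural change.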
The Rondonia region of the Amazon rainforest has experienced some of the highest rates of deforestation on the planet over the last 50 years (Pedlowski et al., 1997, 2005; Butt et al., 2011). Here we present results from a single pixel located at -11.89 latitude and -63.59 longitude for a study period extending from 2000 to the end of 2022. From a data perspective, detecting forest cover changes in tropical rainforests can be difficult because these regions have persistent cloud cover throughout much of the year. This leads to high frequencies of missing data and non-uniform spacing of cloud-free observations. Hence, discrete time change detection models will be biased if they do not account for the missingness properly. We used an externally generated forest change data source (the MapBiomas Brazil project; Souza et al., 2020) to confirm the location and timing of deforestation in this case study.

[Figure 6: NDVI time series for a single pixel in the Amazon rainforest of Rondonia. A deforestation event (pink dotted line) occurs in 2005, as confirmed using high resolution imagery in Google Earth Pro as well as MapBiomas Brazil (Souza et al., 2020). The model-detected change is shown in the lower panel.]

The model results
are shown in Figure 6.

8.3 Crop rotation in the San Joaquin Valley

Identifying and monitoring land management in agricultural regions from remote sensing data is an important task for a wide array of applications such as harvest and food supply projections (Li et al., 2024; Boryan et al., 2011). We chose an agriculture plot in the San Joaquin Valley of California with latitude 35.03 and longitude -118.91. Monitoring agricultural land management can be a difficult task because of crop rotations and other management decisions made by growers. For this reason, researchers often apply classifications at annual time steps that are independent of other years in the time series. This strategy loses the benefits of longer time series that are available from remote sensing in many locations. Change detection methods can help solve this problem, as they can be used to determine when phenological changes happen (i.e., changes that are diagnostic of specific crops or management practices), and thus classification can be done on each change segment of data as opposed to each year. Using Google Earth Imagery and CropScape (Li et al., 2024), we annotated three changes in land management that were clearly visible in high-resolution imagery. The high-resolution imagery showed stable crops from 2000 until August 2006, at which point there is a fallow period with no crops. In August of 2012, the field is re-planted, and in October 2016 the geometric patterns related to crop type visibly change in the high resolution imagery.

[Figure 7: NDVI time series for a pixel located in an agriculture field in California. Three crop rotation events are annotated by the pink dotted lines. The model detects an additional change towards the beginning of the time series when high resolution imagery is not available.]
The model detects each of these three changes, as well as an additional change at the beginning of the time series during a period when we do not have high resolution imagery available for confirmation. The interpreted changes from the high-resolution imagery agree well with the changes detected automatically in the low temporal and medium spatial resolution Landsat imagery (Figure 7).

8.4 Semi-arid vegetation responses to drought

Climate change is affecting precipitation regimes in many parts of the world, leading to, for example, faster drought onsets (Mukherjee et al., 2018; Shenoy et al., 2022). Detecting changes in phenological signatures due to drought is thus an important and open question that can have implications for climate modeling and other tasks such as land management. Semi-arid regions are particularly affected by drought, and understanding the resilience of vegetation to stress from climate change in these areas is important. Our study area for this case study is located in a semi-arid region of Texas at latitude 31.87 and longitude -103.64. We used high resolution imagery from Google Earth Pro to find a location with stable shrub and grass land cover (i.e., no land use) in order to isolate the effects of drought. Drought index
data are provided by the National Drought Mitigation Center, University of Nebraska-Lincoln (Owens, 2007). We use these data to compare drought events to changes detected in NDVI at the same location. Specifically, they provide a No Drought measurement which scales from 0 (i.e., full drought) to 100 (i.e., no drought). We denote this measurement as Absence of Drought in the third panel of Figure 8.

[Figure 8: NDVI time series for a pixel located in a semi-arid region of Texas. Drought data from the National Drought Mitigation Center are plotted in the third panel. The model detects four major changes in the NDVI that appear to be related to major precipitation and drought events captured in the drought data.]

Our model of NDVI detects four changes in the phenological signature that co-occur with significant precipitation anomalies and drought events as captured by the drought monitor. The results are in Figure 8. Note that when we run our model without interannually varying harmonics, the model does not detect any of the drought/precipitation events. Those additional results can be found in the appendix.

9 Discussion

In this work, we offer an end-to-end solution for continuous time change detection. The change detection problem is first reframed in terms of a state space model where efficient and exact inference on the state variables is possible using the forward-backward algorithm. We then derived noninformative priors on change point processes and their corresponding transition probabilities in both discrete and continuous time, and showed the continuous time priors have moments equivalent to those of $\mathrm{Dir}(\mathbf{1}_k)$. The continuous time transition probabilities are particularly notable, forming a class of Bernstein polynomial processes that adjust to the spacing of time measurements for the data at hand.
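As a quick numerical illustration of the continuous-time marginal underlying these Bernstein polynomial processes, $p(z_t = j) = \binom{k-1}{j-1}(1-t)^{k-j}t^{j-1}$ (Theorem 1 in the appendix), the sketch below checks that the $k$ marginals form a partition of unity at any $t$; the function name is ours.

```python
from math import comb

def bernstein_marginal(t: float, j: int, k: int) -> float:
    """Continuous-time marginal p(z_t = j) for a k-segment change point
    process: the (j-1)-th Bernstein basis polynomial of degree k-1."""
    return comb(k - 1, j - 1) * (1 - t) ** (k - j) * t ** (j - 1)

k = 6
# By the binomial theorem the k marginals sum to 1 at every t, and
# probability mass shifts smoothly from segment 1 toward segment k.
for t in (0.1, 0.5, 0.9):
    assert abs(sum(bernstein_marginal(t, j, k) for j in range(1, k + 1)) - 1.0) < 1e-12
assert bernstein_marginal(0.0, 1, k) == 1.0  # at t = 0 the process is in segment 1
```

The smoothness of these basis polynomials is what makes the transition probabilities adapt gracefully to irregular observation spacing.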
These priors confirm our intuition that two consecutive observations that are closer in time are less likely to have a change between them than two consecutive observations far apart in time. The prior on the number of segments is also tackled in this work. We provide a discourse on measuring model space volumes in order to construct noninformative prior mass on the number of segments. Our reasoning is confirmed in synthetic studies, where the BPP model competes with current state-of-the-art methods and outperforms them in the heavy-tailed error distribution cases. This performance benefit is also owed to our development of a robust likelihood that can be inferred efficiently within the forward-backward algorithm framework used to infer change points. Our case study addresses three canonical examples of change detection commonly used in the remote sensing literature. We developed a new semiparametric model that captures interannual variability due to weather while also maintaining interpretable parameters, such as intercept and slope, for which we'd like to infer changes. This new likelihood model outperformed its commonly used harmonic regression predecessor, as demonstrated in the appendix. Future work may consider extending this continuous time model to the spatial domain. There are also interesting parallels between our findings and theory
regarding random partitions that did not fall within the scope of this work. Finally, it would be interesting to see how the continuous time transition probabilities can be parameterized to accommodate additional prior information or for use within an empirical Bayes approach.

References

Adams, R. P. and MacKay, D. J. (2007). Bayesian online changepoint detection. arXiv preprint arXiv:0710.3742.

Aminikhanghahi, S. and Cook, D. J. (2017). A survey of methods for time series change point detection. Knowledge and Information Systems, 51(2):339–367.

Auger, I. E. and Lawrence, C. E. (1989). Algorithms for the optimal identification of segment neighborhoods. Bulletin of Mathematical Biology, 51(1):39–54.

Barry, D. and Hartigan, J. A. (1993). A Bayesian analysis for change point problems. Journal of the American Statistical Association, 88(421):309–319.

Beguería, S., Vicente-Serrano, S. M., Reig, F., and Latorre, B. (2014). Standardized precipitation evapotranspiration index (SPEI) revisited: parameter fitting, evapotranspiration models, tools, datasets and drought monitoring. International Journal of Climatology, 34(10):3001–3023.

Bishop, C. M. (2006). Pattern Recognition and Machine Learning, volume 4. Springer.

Boryan, C., Yang, Z., Mueller, R., and Craig, M. (2011). Monitoring US agriculture: the US Department of Agriculture, National Agricultural Statistics Service, Cropland Data Layer program. Geocarto International, 26(5):341–358.

Butt, N., De Oliveira, P. A., and Costa, M. H. (2011). Evidence that deforestation affects the onset of the rainy season in Rondônia, Brazil. Journal of Geophysical Research: Atmospheres, 116(D11).

Chib, S. (1996). Calculating posterior distributions and modal estimates in Markov mixture models. Journal of Econometrics, 75(1):79–97.

Chib, S. (1998). Estimation and comparison of multiple change-point models. Journal of Econometrics, 86(2):221–241.

Cohen, W. B., Healey, S. P., Yang, Z., Stehman, S. V., Brewer, C.
K., Brooks, E. B., Gorelick, N., Huang, C., Hughes, M. J., Kennedy, R. E., et al. (2017). How similar are forest disturbance maps derived from different Landsat time series algorithms? Forests, 8(4):98.

Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1–22.

Fearnhead, P. (2006). Exact and efficient Bayesian inference for multiple changepoint problems. Statistics and Computing, 16:203–213.

Fearnhead, P. and Liu, Z. (2007). On-line inference for multiple changepoint problems. Journal of the Royal Statistical Society Series B: Statistical Methodology, 69(4):589–605.

Friedl, M. A., Woodcock, C. E., Olofsson, P., Zhu, Z., Loveland, T., Stanimirova, R., Arevalo, P., Bullock, E., Hu, K.-T., Zhang, Y., et al. (2022). Medium spatial resolution mapping of global land cover and land cover change across multiple decades from Landsat. Frontiers in Remote Sensing, 3:894571.

Keenan, T. F., Gray, J., Friedl, M. A., Toomey, M., Bohrer, G., Hollinger, D. Y., Munger, J. W., O'Keefe, J., Schmid, H. P., Wing, I. S., et al. (2014). Net carbon uptake has increased through warming-induced changes in temperate forest phenology. Nature Climate Change, 4(7):598–604.

Killick, R., Fearnhead, P., and Eckley, I. A. (2012). Optimal detection of changepoints with a linear computational cost. Journal of the American Statistical Association, 107(500):1590–1598.

Killick, R. and Eckley, I.
A. (2014). changepoint: An R package for changepoint analysis. Journal of Statistical Software, 58:1–19.

Konishi, S. and Kitagawa, G. (2008). Information Criteria and Statistical Modeling. Springer Science & Business Media.

Li, H., Di, L., Zhang, C., Lin, L., Guo, L., Yu, E. G., and Yang, Z. (2024). Automated in-season crop-type data layer mapping without ground truth for the conterminous United States based on multisource satellite imagery. IEEE Transactions on Geoscience and Remote Sensing, 62:1–14.

Little, R. J. and Rubin, D. B. (2019). Statistical Analysis with Missing Data, volume 793. John Wiley & Sons.

Mukherjee, S., Mishra, A., and Trenberth, K. E. (2018). Climate change and drought: a perspective on drought indices. Current Climate Change Reports, 4:145–163.

Owens, J. C. (2007). UNL center helps develop national drought initiative.

Pedlowski, M. A., Dale, V. H., Matricardi, E. A., and da Silva Filho, E. P. (1997). Patterns and impacts of deforestation in Rondônia, Brazil. Landscape and Urban Planning, 38(3-4):149–157.

Pedlowski, M. A., Matricardi, E. A., Skole, D., Cameron, S., Chomentowski, W., Fernandes, C., and Lisboa, A. (2005). Conservation units: a new deforestation frontier in the Amazonian state of Rondônia, Brazil. Environmental Conservation, 32(2):149–155.

Peluso, S., Chib, S., and Mira, A. (2019). Semiparametric multivariate and multiple change-point modeling.

Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, pages 461–464.

Scott, A. J. and Knott, M. (1974). A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30(3):507–512.

Shenoy, S., Gorinevsky, D., Trenberth, K. E., and Chu, S. (2022). Trends of extreme US weather events in the changing climate. Proceedings of the National Academy of Sciences, 119(47):e2207536119.
Souza, C. M., Shimbo, J. Z., Rosa, M. R., Parente, L. L., Alencar, A. A., Rudorff, B. F. T., Hasenack, H., Matsumoto, M., Ferreira, L. G., Souza-Filho, P. W. M., de Oliveira, S. W., Rocha, W. F., Fonseca, A. V., Marques, C. B., Diniz, C. G., Costa, D., Monteiro, D., Rosa, E. R., Vélez-Martin, E., Weber, E. J., Lenti, F. E. B., Paternost, F. F., Pareyn, F. G. C., Siqueira, J. V., Viera, J. L., Neto, L. C. F., Saraiva, M. M., Sales, M. H., Salgado, M. P. G., Vasconcelos, R., Galano, S., Mesquita, V. V., and Azevedo, T. (2020). Reconstructing three decades of land use and land cover changes in Brazilian biomes with Landsat archive and Earth Engine. Remote Sensing, 12(17).

Stephens, D. (1994). Bayesian retrospective multiple-changepoint identification. Journal of the Royal Statistical Society: Series C (Applied Statistics), 43(1):159–178.

Tierney, L. and Kadane, J. B. (1986). Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81(393):82–86.

Truong, C., Oudre, L., and Vayatis, N. (2020). Selective review of offline change point detection methods. Signal Processing, 167:107299.

Yao, Y.-C. (1984). Estimation of a noisy discrete-time step function: Bayes and empirical Bayes approaches. The Annals of
Statistics, pages 1434–1447.

Yu, L. and Gong, P. (2012). Google Earth as a virtual globe tool for Earth science applications at the global scale: progress and perspectives. International Journal of Remote Sensing, 33(12):3966–3986.

Zhang, N. R. and Siegmund, D. O. (2007). A modified Bayes information criterion with applications to the analysis of comparative genomic hybridization data. Biometrics, 63(1):22–32.

Zhu, Z. and Woodcock, C. E. (2014). Continuous change detection and classification of land cover using all available Landsat data. Remote Sensing of Environment, 144:152–171.

Zhu, Z., Zhang, J., Yang, Z., Aljaddani, A. H., Cohen, W. B., Qiu, S., and Zhou, C. (2020). Continuous monitoring of land disturbance based on Landsat time series. Remote Sensing of Environment, 238:111116.

10 Appendix A: Theoretical Results

10.1 Proofs

Proof of Proposition 1. We remove the prior designation of $p$ for notational simplicity. Recall the binomial coefficient property $\binom{m}{l} = \binom{m-1}{l} + \binom{m-1}{l-1}$. Note there are four possibilities for position $i$ being in state $j$,
$$p(z_i = j) = \sum_{l=j}^{j+1} \sum_{m=j-1}^{j} p(z_i = j,\ z_{i-1} = m,\ z_{i+1} = l).$$
Similarly to how the denominator was counted, note that when $z_i = j$, there are $\binom{i-1}{m-1}$ ways to choose the initial $m-1$ changes in the first $i$ time points, and $\binom{n-i-1}{k-l}$ ways to choose the final $k-l$ changes in the final $n-i$ time points. Following with the distributive property of multiplication and the binomial coefficient property,
$$\begin{aligned}
p(z_i = j) &= \sum_{l=j}^{j+1} \sum_{m=j-1}^{j} \frac{\binom{i-1}{m-1}\binom{n-i-1}{k-l}}{\binom{n}{k-1}} \\
&= \frac{\binom{i-1}{j-2}\binom{n-i-1}{k-j}}{\binom{n}{k-1}} + \frac{\binom{i-1}{j-1}\binom{n-i-1}{k-j}}{\binom{n}{k-1}} + \frac{\binom{i-1}{j-2}\binom{n-i-1}{k-j-1}}{\binom{n}{k-1}} + \frac{\binom{i-1}{j-1}\binom{n-i-1}{k-j-1}}{\binom{n}{k-1}} \\
&= \frac{\binom{i}{j-1}\binom{n-i-1}{k-j}}{\binom{n}{k-1}} + \frac{\binom{i}{j-1}\binom{n-i-1}{k-j-1}}{\binom{n}{k-1}} \\
&= \frac{\binom{i}{j-1}\binom{n-i}{k-j}}{\binom{n}{k-1}}.
\end{aligned}$$

Proof of Proposition 2.
First note that in order to be in segment 1 at time $i$, the process must have been in segment 1 at time $i-1$,
$$p(z_i = 1) = p(z_i = 1 \mid z_{i-1} = 1)\, p(z_{i-1} = 1),$$
which implies
$$p(z_i = 1 \mid z_{i-1} = 1) = \frac{p(z_i = 1)}{p(z_{i-1} = 1)}.$$
Moving on to the next segment,
$$\begin{aligned}
p(z_i = 2) &= p(z_i = 2 \mid z_{i-1} = 2)\, p(z_{i-1} = 2) + p(z_i = 2 \mid z_{i-1} = 1)\, p(z_{i-1} = 1) \\
&= p(z_i = 2 \mid z_{i-1} = 2)\, p(z_{i-1} = 2) + \big(1 - p(z_i = 1 \mid z_{i-1} = 1)\big)\, p(z_{i-1} = 1) \\
&= p(z_i = 2 \mid z_{i-1} = 2)\, p(z_{i-1} = 2) + \Big(1 - \frac{p(z_i = 1)}{p(z_{i-1} = 1)}\Big)\, p(z_{i-1} = 1) \\
&= p(z_i = 2 \mid z_{i-1} = 2)\, p(z_{i-1} = 2) + \big(p(z_{i-1} = 1) - p(z_i = 1)\big),
\end{aligned}$$
which implies
$$p(z_i = 2 \mid z_{i-1} = 2) = \frac{\sum_{l=1}^{2} p(z_i = l) - \sum_{l=1}^{1} p(z_{i-1} = l)}{p(z_{i-1} = 2)}.$$
In general we find recursively, for all $1 < j < k$,
$$p(z_i = j \mid z_{i-1} = j) = \frac{\sum_{l=1}^{j} p(z_i = l) - \sum_{l=1}^{j-1} p(z_{i-1} = l)}{p(z_{i-1} = j)},$$
with $p(z_i = k \mid z_{i-1} = k) = 1$ by assumption.

Proof of Theorem 1. Let $t \in [0,1]$ and define the corresponding discrete time $i = \lfloor tn \rfloor$. We prove the statement in terms of $i$, noting that $\lim_{n\to\infty} i/n = \lim_{n\to\infty} \lfloor tn \rfloor / n = t$. Define the continuous time marginal $p(z_t = j) := \lim_{n\to\infty} p(z_i = j)$. Begin by evaluating the binomial coefficients,
$$\begin{aligned}
p(z_i = j) &= \frac{\binom{n-i}{k-j}\binom{i}{j-1}}{\binom{n}{k-1}} \\
&= \frac{\frac{(n-i)!}{(k-j)!\,(n-i-k+j)!}\cdot\frac{i!}{(j-1)!\,(i-j+1)!}}{\frac{n!}{(k-1)!\,(n-k+1)!}} &&\text{(expand)} \\
&= \binom{k-1}{j-1}\,\frac{(n-i)!}{(n-i-k+j)!}\cdot\frac{i!}{(i-j+1)!}\cdot\frac{(n-k+1)!}{n!} &&\text{(rearrange)} \\
&= \binom{k-1}{j-1}\,\frac{n^{k-1}}{n^{k-j}\,n^{j-1}}\cdot\frac{(n-i)!}{(n-i-k+j)!}\cdot\frac{i!}{(i-j+1)!}\cdot\frac{(n-k+1)!}{n!} &&\text{(multiply by 1)} \\
&= \binom{k-1}{j-1}\,\frac{(n-i)!}{n^{k-j}\,(n-i-k+j)!}\cdot\frac{i!}{n^{j-1}\,(i-j+1)!}\cdot\frac{n^{k-1}\,(n-k+1)!}{n!}. &&\text{(rearrange)}
\end{aligned}$$
We inspect the limit of each of the three fractions separately as $n \to \infty$,
$$\begin{aligned}
\lim_{n\to\infty} \frac{1}{n^{k-j}}\frac{(n-i)!}{(n-i-k+j)!} &= \lim_{n\to\infty} \frac{(n-i)\cdots(n-i-k+j+1)}{n^{k-j}} \\
&= \lim_{n\to\infty} \frac{(n-i)}{n}\cdots\frac{(n-i-k+j+1)}{n} &&\text{(there are $k-j$ terms)} \\
&= \lim_{n\to\infty} (1-t)\cdots\Big(1-t-\tfrac{k}{n}+\tfrac{j}{n}+\tfrac{1}{n}\Big) &&\text{(every term is $\tfrac{n-i+\text{const.}}{n}$)} \\
&= (1-t)^{k-j},
\end{aligned}$$
$$\begin{aligned}
\lim_{n\to\infty} \frac{1}{n^{j-1}}\frac{i!}{(i-j+1)!} &= \lim_{n\to\infty} \frac{i\cdots(i-j+2)}{n^{j-1}} \\
&= \lim_{n\to\infty} \frac{i}{n}\cdots\frac{(i-j+2)}{n} &&\text{(there are $j-1$ terms)} \\
&= \lim_{n\to\infty} t\cdots\Big(t-\tfrac{j}{n}+\tfrac{2}{n}\Big) &&\text{(every term is $\tfrac{i+\text{const.}}{n}$)} \\
&= t^{j-1},
\end{aligned}$$
$$\begin{aligned}
\lim_{n\to\infty} \frac{n^{k-1}\,(n-k+1)!}{n!} &= \lim_{n\to\infty} \frac{n^{k-1}}{n\cdots(n-k+2)} \\
&= \lim_{n\to\infty} \frac{n}{n}\cdots\frac{n}{n-k+2} &&\text{(there are $k-1$ terms)} \\
&= 1.
\end{aligned}$$
Finally, multiply these three limits together; since the limit of products is the product of limits when each limit is convergent,
$$p(z_t = j) = \binom{k-1}{j-1}(1-t)^{k-j}\,t^{j-1}.$$

Proposition 4. For state variables $\{z_{t_i}\}_{i=0}^{n}$ and segment lengths $\{\zeta_j\}_{j=1}^{k}$ the following equivalence representation holds:
$$\mathbb{1}(z_{t_i} = j) = \mathbb{1}\Big(\sum_{l=1}^{j-1}\zeta_l \le t < \sum_{l=1}^{j}\zeta_l\Big).$$

Proof of Proposition 4. Note the definition of $\zeta_j$ is the length of time between the first occurrence of state $j$ and state $j+1$. Then $\sum_{l=1}^{j}\zeta_l$ is equal to the time of the first occurrence of state $j+1$. As such, if $t_i$ is between the first time of state $j$ and the first time of state $j+1$, then by the definition of a change point process $z_{t_i} = j$.

We will need the following lemma to prove Theorem 2.

Lemma 1. Let $t \in [0,1]$ and suppose $\{\zeta_j\}_{j=1}^{k}$ are the continuous time segment lengths that sum to 1. Then,
$$P\Big(\sum_{l=1}^{j-1}\zeta_l \le t < \sum_{l=1}^{j}\zeta_l\Big) = P\Big(\sum_{l=1}^{j-1}\zeta_l \le t\Big) - P\Big(\sum_{l=1}^{j}\zeta_l \le t\Big).$$

Proof of Lemma 1.
$$\begin{aligned}
P\Big(\sum_{l=1}^{j-1}\zeta_l \le t < \sum_{l=1}^{j}\zeta_l\Big) &= P\Big(\Big(\sum_{l=1}^{j-1}\zeta_l \le t\Big)\cap\Big(t < \sum_{l=1}^{j}\zeta_l\Big)\Big) \\
&= P\Big(t < \sum_{l=1}^{j}\zeta_l \,\Big|\, \sum_{l=1}^{j-1}\zeta_l \le t\Big)\, P\Big(\sum_{l=1}^{j-1}\zeta_l \le t\Big) \\
&= \Big(1 - P\Big(\sum_{l=1}^{j}\zeta_l \le t \,\Big|\, \sum_{l=1}^{j-1}\zeta_l \le t\Big)\Big)\, P\Big(\sum_{l=1}^{j-1}\zeta_l \le t\Big) \\
&= \Bigg(1 - \frac{P\big(\sum_{l=1}^{j}\zeta_l \le t \,\cap\, \sum_{l=1}^{j-1}\zeta_l \le t\big)}{P\big(\sum_{l=1}^{j-1}\zeta_l \le t\big)}\Bigg)\, P\Big(\sum_{l=1}^{j-1}\zeta_l \le t\Big) \\
&= \Bigg(1 - \frac{P\big(\sum_{l=1}^{j}\zeta_l \le t\big)}{P\big(\sum_{l=1}^{j-1}\zeta_l \le t\big)}\Bigg)\, P\Big(\sum_{l=1}^{j-1}\zeta_l \le t\Big) &&\Big(\textstyle\big(\sum_{l=1}^{j}\zeta_l \le t\big) \Rightarrow \big(\sum_{l=1}^{j-1}\zeta_l \le t\big)\Big) \\
&= P\Big(\sum_{l=1}^{j-1}\zeta_l \le t\Big) - P\Big(\sum_{l=1}^{j}\zeta_l \le t\Big).
\end{aligned}$$

Proof of Theorem 2. Define the Bernstein polynomial $b_j(t) := \binom{k-1}{j-1}(1-t)^{k-j}t^{j-1}$ and the distribution of the Dirichlet indicator $d_j(t) := p\big(\sum_{l=1}^{j-1}\zeta_l \le t < \sum_{l=1}^{j}\zeta_l\big)$. We wish to prove $d_j(t) = b_j(t)$. The proof proceeds as follows. The first step is to evaluate $d_j(t)$ using the aggregation property of the Dirichlet distribution.
The next step is to argue that, since neither function has an additive constant (note $b_j(0) = d_j(0)$), it is sufficient to prove $\frac{db_j(t)}{dt} = \frac{dd_j(t)}{dt}$. Finally, we establish equality of the two derivatives.

We start with $d_j(t)$. We have, by Lemma 1,
$$d_j(t) = p\Big(\sum_{l=1}^{j-1}\zeta_l \le t\Big) - p\Big(\sum_{l=1}^{j}\zeta_l \le t\Big).$$
These two cumulative distributions can be evaluated using the aggregation property of the Dirichlet distribution. As such, we have $\big(\sum_{l=1}^{j-1}\zeta_l,\ \sum_{l=j}^{k}\zeta_l\big)' \sim \mathrm{Dir}(j-1,\ k-j+1)$ and $\big(\sum_{l=1}^{j}\zeta_l,\ \sum_{l=j+1}^{k}\zeta_l\big)' \sim \mathrm{Dir}(j,\ k-j)$. Thus, the cumulative distribution for the $(j-1)$th case is
$$p\Big(\sum_{l=1}^{j-1}\zeta_l \le t\Big) = \frac{1}{B(j-1,\ k-j+1)}\int_0^t u^{(j-1)-1}(1-u)^{(k-j+1)-1}\,du,$$
and similarly for the $j$th case. The derivative of the Bernstein polynomial follows from the product and chain rules,
$$\frac{db_j(t)}{dt} = \binom{k-1}{j-1}\Big[(j-1)t^{j-2}(1-t)^{k-j} - (k-j)t^{j-1}(1-t)^{k-j-1}\Big], \tag{8}$$
and the derivative of the Dirichlet cumulative distribution follows from the fundamental theorem of calculus,
$$\frac{d}{dt}\, p\Big(\sum_{l=1}^{j-1}\zeta_l \le t\Big) = \frac{1}{B(j-1,\ k-j+1)}\, t^{(j-1)-1}(1-t)^{(k-j+1)-1},$$
which holds similarly for the $j$th case. As such, the derivative of $d_j(t)$ is
$$\frac{dd_j(t)}{dt} = \frac{1}{B(j-1,\ k-j+1)}\, t^{(j-1)-1}(1-t)^{(k-j+1)-1} - \frac{1}{B(j,\ k-j)}\, t^{j-1}(1-t)^{(k-j)-1}. \tag{9}$$
We can now show that (9) equals (8), as desired:
$$\begin{aligned}
\frac{dd_j(t)}{dt} &= \frac{1}{B(j-1,\ k-j+1)}\, t^{j-2}(1-t)^{k-j} - \frac{1}{B(j,\ k-j)}\, t^{j-1}(1-t)^{k-j-1} \\
&= \frac{\Gamma(k)}{\Gamma(j-1)\Gamma(k-j+1)}\, t^{j-2}(1-t)^{k-j} - \frac{\Gamma(k)}{\Gamma(j)\Gamma(k-j)}\, t^{j-1}(1-t)^{k-j-1} \\
&= \frac{(k-1)!}{(j-2)!\,(k-j)!}\, t^{j-2}(1-t)^{k-j} - \frac{(k-1)!}{(j-1)!\,(k-j-1)!}\, t^{j-1}(1-t)^{k-j-1} \\
&= \binom{k-1}{j-1}(j-1)\, t^{j-2}(1-t)^{k-j} - \binom{k-1}{j-1}(k-j)\, t^{j-1}(1-t)^{k-j-1} \\
&= \frac{db_j(t)}{dt}.
\end{aligned}$$

Proof of Theorem 3.
Using Theorem 2, note that the probability $p(z_t = h \mid z_s = j)$ is equivalent to the probability $p\big(\sum_{l=1}^{h-1}\zeta_l \le t < \sum_{l=1}^{h}\zeta_l \,\big|\, \sum_{l=1}^{j-1}\zeta_l \le s < \sum_{l=1}^{j}\zeta_l\big)$. As such, we evaluate the joint probability of these events and then divide by the marginal.

Case 1: $h = j$. The joint probability is the flat Dirichlet density $B^{-1}(\mathbf{1}_k) = (k-1)!$ integrated over the event: the lengths $\zeta_1, \dots, \zeta_{j-1}$ each range over $\big[0,\ s - \sum_{m<l}\zeta_m\big]$, the $j$th length ranges over $\big[t - \sum_{l=1}^{j-1}\zeta_l,\ 1 - \sum_{l=1}^{j-1}\zeta_l\big]$, and the remaining lengths fill the rest of the unit interval,
$$p\Big(\sum_{l=1}^{j-1}\zeta_l \le s,\ t < \sum_{l=1}^{j}\zeta_l\Big) = \int_0^{s}\!\int_0^{s-\zeta_1}\!\!\cdots\!\int_0^{s-\sum_{l=1}^{j-2}\zeta_l} \int_{t-\sum_{l=1}^{j-1}\zeta_l}^{1-\sum_{l=1}^{j-1}\zeta_l} \int_0^{1-\sum_{l=1}^{j}\zeta_l}\!\!\cdots\!\int_0^{1-\sum_{l=1}^{k-2}\zeta_l} B^{-1}(\mathbf{1}_k)\, d\zeta_{k-1}\cdots d\zeta_1.$$
Integrating out $\zeta_{k-1}, \zeta_{k-2}, \dots, \zeta_{j+1}$ in turn accumulates one power of the remaining length per step, giving the factor $\frac{\big(1-\sum_{l=1}^{j}\zeta_l\big)^{k-j-1}}{(k-j-1)!}$; the integral over $\zeta_j$ then yields $\frac{(1-t)^{k-j}}{(k-j)!}$, and the integrals over $\zeta_{j-1}, \dots, \zeta_1$ accumulate $\frac{s^{j-1}}{(j-1)!}$. Hence,
$$p\Big(\sum_{l=1}^{j-1}\zeta_l \le s,\ t < \sum_{l=1}^{j}\zeta_l\Big) = B^{-1}(\mathbf{1}_k)\,\frac{s^{j-1}}{(j-1)!}\,\frac{(1-t)^{k-j}}{(k-j)!} = \binom{k-1}{j-1}\, s^{j-1}(1-t)^{k-j}.$$
Using this as the numerator of $p(z_t = j \mid z_s = j)$, and using the continuous time marginal of $z_s$ from Theorem 1,
$$p(z_t = j \mid z_s = j) = \frac{\binom{k-1}{j-1}\, s^{j-1}(1-t)^{k-j}}{\binom{k-1}{j-1}\, s^{j-1}(1-s)^{k-j}} = \Big(\frac{1-t}{1-s}\Big)^{k-j} = \binom{k-j}{j-j}\Big(1-\frac{1-t}{1-s}\Big)^{j-j}\Big(\frac{1-t}{1-s}\Big)^{k-j}.$$

Case 2: $h = j+1$. Now we derive the transition from $j$ to $j+1$. Here the event constrains $\zeta_j$ to $\big[s-\sum_{l=1}^{j-1}\zeta_l,\ t-\sum_{l=1}^{j-1}\zeta_l\big]$ and $\zeta_{j+1}$ to $\big[t-\sum_{l=1}^{j}\zeta_l,\ 1-\sum_{l=1}^{j}\zeta_l\big]$. Integrating out $\zeta_{k-1},\dots,\zeta_{j+2}$ gives the factor $\frac{\big(1-\sum_{l=1}^{j+1}\zeta_l\big)^{k-j-2}}{(k-j-2)!}$; the integral over $\zeta_{j+1}$ yields $\frac{(1-t)^{k-j-1}}{(k-j-1)!}$, the integral over $\zeta_j$ yields the factor $(t-s)$, and the integrals over $\zeta_{j-1},\dots,\zeta_1$ yield $\frac{s^{j-1}}{(j-1)!}$,
$$p\Big(\sum_{l=1}^{j-1}\zeta_l \le s < \sum_{l=1}^{j}\zeta_l \le t < \sum_{l=1}^{j+1}\zeta_l\Big) = B^{-1}(\mathbf{1}_k)\,\frac{s^{j-1}}{(j-1)!}\,(t-s)\,\frac{(1-t)^{k-j-1}}{(k-j-1)!} = \binom{k-1}{j-1}\, s^{j-1}(t-s)(1-t)^{k-j-1}(k-j).$$
Then the transition probability is given by dividing by the marginal probability of $z_s = j$,
$$p(z_t = j+1 \mid z_s = j) = \frac{\binom{k-1}{j-1}\, s^{j-1}(t-s)(1-t)^{k-j-1}(k-j)}{\binom{k-1}{j-1}\, s^{j-1}(1-s)^{k-j}} = (k-j)\,\frac{t-s}{1-s}\Big(\frac{1-t}{1-s}\Big)^{k-j-1} = \binom{k-j}{j+1-j}\Big(1-\frac{1-t}{1-s}\Big)^{j+1-j}\Big(\frac{1-t}{1-s}\Big)^{k-j-1}.$$

Case 3: $h > j+1$. Here $\zeta_j$ ranges over $\big[s-\sum_{l=1}^{j-1}\zeta_l,\ t-\sum_{l=1}^{j-1}\zeta_l\big]$, the lengths $\zeta_{j+1},\dots,\zeta_{h-1}$ range over $\big[0,\ t-\sum_{m<l}\zeta_m\big]$, and $\zeta_h$ ranges over $\big[t-\sum_{l=1}^{h-1}\zeta_l,\ 1-\sum_{l=1}^{h-1}\zeta_l\big]$. Integrating out $\zeta_{k-1},\dots,\zeta_{h+1}$ and then $\zeta_h$ gives $\frac{(1-t)^{k-h}}{(k-h)!}$; integrating out $\zeta_{h-1},\dots,\zeta_{j+1}$ gives $\frac{\big(t-\sum_{l=1}^{j}\zeta_l\big)^{h-j-1}}{(h-j-1)!}$, whose integral over $\zeta_j$ gives $\frac{(t-s)^{h-j}}{(h-j)!}$; and the integrals over $\zeta_{j-1},\dots,\zeta_1$ give $\frac{s^{j-1}}{(j-1)!}$,
$$p\Big(\sum_{l=1}^{j-1}\zeta_l \le s < \sum_{l=1}^{j}\zeta_l \le \sum_{l=1}^{h-1}\zeta_l \le t < \sum_{l=1}^{h}\zeta_l\Big) = B^{-1}(\mathbf{1}_k)\,\frac{s^{j-1}}{(j-1)!}\,\frac{(t-s)^{h-j}}{(h-j)!}\,\frac{(1-t)^{k-h}}{(k-h)!}.$$
Now divide by the marginal of $z_s = j$ to get the transition probability,
$$\begin{aligned}
p(z_t = h \mid z_s = j) &= \frac{(k-1)!\,\frac{s^{j-1}}{(j-1)!}\,\frac{(t-s)^{h-j}}{(h-j)!}\,\frac{(1-t)^{k-h}}{(k-h)!}}{\frac{(k-1)!}{(j-1)!\,(k-j)!}\, s^{j-1}(1-s)^{k-j}} = \frac{(t-s)^{h-j}}{(h-j)!}\,\frac{(1-t)^{k-h}}{(k-h)!}\,\frac{(k-j)!}{(1-s)^{k-j}} \\
&= \binom{k-j}{h-j}\frac{(t-s)^{h-j}(1-t)^{k-h}}{(1-s)^{k-j}} = \binom{k-j}{h-j}\Big(\frac{t-s}{1-s}\Big)^{h-j}\Big(\frac{1-t}{1-s}\Big)^{k-h} = \binom{k-j}{h-j}\Big(1-\frac{1-t}{1-s}\Big)^{h-j}\Big(\frac{1-t}{1-s}\Big)^{k-h}.
\end{aligned}$$

Proof of Theorem 3, Kolmogorov Equations.
$$\begin{aligned}
\sum_{l=j}^{h} P_{jl}(s,r)\, P_{lh}(r,t) &= \sum_{l=j}^{h} \binom{k-j}{l-j}\Big(1-\frac{1-r}{1-s}\Big)^{l-j}\Big(\frac{1-r}{1-s}\Big)^{k-l} \binom{k-l}{h-l}\Big(1-\frac{1-t}{1-r}\Big)^{h-l}\Big(\frac{1-t}{1-r}\Big)^{k-h} \\
&= \Big(\frac{1-t}{1-s}\Big)^{k-h} \sum_{l=j}^{h} \binom{k-j}{l-j}\binom{k-l}{h-l}\Big(\frac{r-s}{1-s}\Big)^{l-j}\Big(\frac{1-r}{1-s}\Big)^{h-l}\Big(\frac{t-r}{1-r}\Big)^{h-l} \\
&= \Big(\frac{1-t}{1-s}\Big)^{k-h} \sum_{l=j}^{h} \binom{k-j}{l-j}\binom{k-l}{h-l}\Big(\frac{r-s}{1-s}\Big)^{l-j}\Big(\frac{t-r}{1-s}\Big)^{h-l} \\
&= \Big(\frac{1-t}{1-s}\Big)^{k-h}\Big(\frac{1}{1-s}\Big)^{h-j} \sum_{l=j}^{h} \binom{k-j}{l-j}\binom{k-l}{h-l}(r-s)^{l-j}(t-r)^{h-l} \\
&= \Big(\frac{1-t}{1-s}\Big)^{k-h}\Big(\frac{1}{1-s}\Big)^{h-j} \sum_{l=j}^{h} \frac{(k-j)!}{(l-j)!\,(k-l)!}\,\frac{(k-l)!}{(h-l)!\,(k-h)!}\,(r-s)^{l-j}(t-r)^{h-l} \\
&= \binom{k-j}{h-j}\Big(\frac{1-t}{1-s}\Big)^{k-h}\Big(\frac{1}{1-s}\Big)^{h-j} \sum_{l=j}^{h} \binom{h-j}{l-j}(r-s)^{l-j}(t-r)^{h-l} \\
&= \binom{k-j}{h-j}\Big(\frac{1-t}{1-s}\Big)^{k-h}\Big(\frac{t-s}{1-s}\Big)^{h-j} = P_{jh}(s,t),
\end{aligned}$$
where the last sum collapses by the binomial theorem, since $(r-s) + (t-r) = t-s$.

Proof of Theorem 4. This is a constrained optimization problem, since we need to find the configuration $z$ that minimizes the expected loss subject to being a change point process. We first derive the Bayes estimator in the unconstrained space (which contains the constrained space). We then show that, despite finding the estimator in the bigger unconstrained space, the estimator yields a change point process almost surely, satisfying the constraint.

Estimator for the unconstrained space. Since the loss is a sum over $i = 1,\dots,n$, and since we are operating in the unconstrained space, the problem reduces to finding the Bayes estimator separately for each $z_{t_i}$,
$$\arg\min_{z_{t_i}} E_{z^* \mid y}\big[\,|z_{t_i} - z^*_{t_i}|\,(t_i - t_{i-1})\big] = \arg\min_{z_{t_i}} E_{z^* \mid y}\big[\,|z_{t_i} - z^*_{t_i}|\,\big],$$
since $(t_i - t_{i-1})$ is a constant. It is well known that the Bayes estimator for absolute loss is the median. Thus, the Bayes estimator for the unconstrained problem is
$$\hat{z}_{t_i} = \min\Big\{\,j : \sum_{k=1}^{K}\sum_{l=1}^{j} p(z_{t_i} = l \mid k, y)\, p(k \mid y) \ge 0.5\Big\}.$$

Show this estimator is a change point process: discrete time case. Now we show this estimator is a change point process with probability 1. The proof strategy is to show, for an arbitrary median $\hat{z}_{t_i}$, that the median $\hat{z}_{t_{i+1}} \in \{\hat{z}_{t_i}, \hat{z}_{t_i} + 1\}$ with probability 1 under the posterior measure of interest. To that end, let $\Omega$ be the set of all change point process sample points $\omega$ with positive support under the prior on $z$. These configurations represent a superset of the configurations with positive support under the posterior measure. Let $\hat{z}_{t_i} = \mathrm{median}(z_{t_i})$ under the posterior measure be arbitrary.
By definition of the median,
$$p(z_{t_i}(\omega) \le \hat{z}_{t_i} \mid y) \ge 0.5 \quad\text{and}\quad p(z_{t_i}(\omega) \ge \hat{z}_{t_i} \mid y) \ge 0.5.$$
Since $\omega$ is a change point process,
$$\omega \in \Omega\big(z_{t_i} = \hat{z}_{t_i}\big) \implies \omega \in \Omega\big(z_{t_{i+1}} \in \{\hat{z}_{t_i}, \hat{z}_{t_i} + 1\}\big)$$
with probability 1, where we define $\Omega(A)$ as the subset of the sample space $\Omega$ where the condition $A$ is true. The above then also implies
$$\omega \in \Omega\big(z_{t_i} \le \hat{z}_{t_i}\big) \implies \omega \in \Omega\big(z_{t_{i+1}} \le \hat{z}_{t_i} \ \text{or}\ z_{t_{i+1}} \le \hat{z}_{t_i} + 1\big)$$
with probability 1. Plugging these implications back into the probability inequalities that define the median, and using the fact that $A \subset B$ implies $p(A) \le p(B)$,
$$p(z_{t_{i+1}}(\omega) \le \hat{z}_{t_i}\ \text{or}\ z_{t_{i+1}}(\omega) \le \hat{z}_{t_i} + 1 \mid y) \ge 0.5 \quad\text{and}\quad p(z_{t_{i+1}}(\omega) \ge \hat{z}_{t_i}\ \text{or}\ z_{t_{i+1}}(\omega) \ge \hat{z}_{t_i} + 1 \mid y) \ge 0.5.$$
The "or"-events in these probabilities can be reduced to mutual exclusivity by removing their intersection as follows,
$$p(z_{t_{i+1}}(\omega) \le \hat{z}_{t_i} \mid y) + p(z_{t_{i+1}}(\omega) = \hat{z}_{t_i} + 1 \mid y) \ge 0.5 \quad\text{and}\quad p(z_{t_{i+1}}(\omega) = \hat{z}_{t_i} \mid y) + p(z_{t_{i+1}}(\omega) \ge \hat{z}_{t_i} + 1 \mid y) \ge 0.5.$$
We are now in a position to determine the median of $z_{t_{i+1}}$. The first case is when $p(z_{t_{i+1}}(\omega) \le \hat{z}_{t_i} \mid y) > 0.5$, which implies $p(z_{t_{i+1}}(\omega) \ge \hat{z}_{t_i} + 1 \mid y) < 0.5$, since the two probabilities sum to 1. In this case, we have $\hat{z}_{t_{i+1}} = \hat{z}_{t_i}$, since
$$p(z_{t_{i+1}}(\omega) \le \hat{z}_{t_i} \mid y) \ge 0.5 \quad\text{and}\quad p(z_{t_{i+1}}(\omega) = \hat{z}_{t_i} \mid y) + p(z_{t_{i+1}}(\omega) \ge \hat{z}_{t_i} + 1 \mid y) \ge 0.5.$$
The second case is when $p(z_{t_{i+1}}(\omega) \le \hat{z}_{t_i} \mid y) < 0.5$, which implies $p(z_{t_{i+1}}(\omega) \ge \hat{z}_{t_i} + 1 \mid y) > 0.5$, since the two probabilities sum to 1. In this case, we have $\hat{z}_{t_{i+1}} = \hat{z}_{t_i} + 1$, since
$$p(z_{t_{i+1}}(\omega) \le \hat{z}_{t_i} \mid y) + p(z_{t_{i+1}}(\omega) = \hat{z}_{t_i} + 1 \mid y) \ge 0.5 \quad\text{and}\quad p(z_{t_{i+1}}(\omega) \ge \hat{z}_{t_i} + 1 \mid y) \ge 0.5.$$
The last case, when $p(z_{t_{i+1}}(\omega) \le \hat{z}_{t_i} \mid y) = 0.5$, follows similarly.
Thus, in all cases, the median $\hat{z}_{t_{i+1}}$ is either $\hat{z}_{t_i}$ or $\hat{z}_{t_i} + 1$ with probability 1, and the resulting estimator is the Bayes estimator for the weighted Hamming loss in the constrained space of change point processes in discrete time.

Show this estimator is a change point process: continuous time case. In continuous time, more than one change point can occur between two consecutive observations, so the proof changes slightly. Suppose the median at time $t_i$ is $\hat{z}_{t_i}$. Let $\omega$ be a continuous time change point process such that $\omega \in \Omega(z_{t_i} = \hat{z}_{t_i})$. This implies $\omega \in \Omega(z_{t_{i+1}} \in \{\hat{z}_{t_i}, \dots, K\})$. Furthermore, extending these statements with inequalities, we have that $\omega \in \Omega(z_{t_i} \le \hat{z}_{t_i})$ implies $\omega \in \Omega(z_{t_{i+1}} \le \hat{z}_{t_i}\ \text{or}\ z_{t_{i+1}} \in \{\hat{z}_{t_i} + 1, \dots, K\})$, and similarly $\omega \in \Omega(z_{t_i} \ge \hat{z}_{t_i})$ implies $\omega \in \Omega(z_{t_{i+1}} \in \{\hat{z}_{t_i}, \dots, K\})$. Using the fact that $A \subset B$ implies $p(A) \le p(B)$, applying these results to the definition of the median,
$$p(z_{t_{i+1}}(\omega) \le \hat{z}_{t_i} \mid y) + \sum_{j=\hat{z}_{t_i}+1}^{K} p(z_{t_{i+1}}(\omega) = j \mid y) \ge 0.5 \quad\text{and}\quad p(z_{t_{i+1}}(\omega) = \hat{z}_{t_i} \mid y) + \sum_{j=\hat{z}_{t_i}+1}^{K} p(z_{t_{i+1}}(\omega) = j \mid y) \ge 0.5.$$
The first of those inequalities is trivial, since the probabilities sum to one, but it is also constructive for the proof. Now proceed by ruling out each possibility, as we did in the discrete case. If $p(z_{t_{i+1}}(\omega) \le \hat{z}_{t_i} \mid y) > 0.5$, then the second summation in the second inequality is less than 0.5, and the median is $\hat{z}_{t_{i+1}} = \hat{z}_{t_i}$. Proceeding iteratively, now suppose $p(z_{t_{i+1}}(\omega) \le \hat{z}_{t_i} \mid y) < 0.5$. Then, since the probabilities sum to 1, $\sum_{j=\hat{z}_{t_i}+1}^{K} p(z_{t_{i+1}}(\omega) = j \mid y) \ge 0.5$, and the median is $\hat{z}_{t_{i+1}} = \hat{z}_{t_i} + 1$. This process iterates, and we conclude that $\hat{z}_{t_{i+1}} \in \{\hat{z}_{t_i}, \dots, K\}$ with probability 1.

Proof of Proposition 3. WLOG, let $t^* = 365/T$ be the starting time of the second year. Enforcing continuity of the mean function requires setting its left limit equal to its right limit at time $t^*$. Limiting from the left, the harmonic contrast coefficients are zero, since there is no contrast in the first year.
We also have that the sin terms are zero and the cos terms are 1 at $t^*$; thus,
$$\lim_{t \to t^{*-}} \mu(t) + \sum_{h=1}^{H} \gamma_{h,j}(t)\sin(h\omega t) + \delta_{h,j}(t)\cos(h\omega t) = \lim_{t \to t^{*-}} \mu(t) \qquad \text{(contrasts are 0 in the first year)}$$
$$= \alpha + \beta t^* + \sum_{h=1}^{H} \delta_h \qquad \text{(sin terms 0, cos terms 1 at } t^*\text{).}$$
From the right, the contrast coefficients for the second year need not be 0. All sin terms are zero in the limit and the cos terms are 1:
$$\lim_{t \to t^{*+}} \mu(t) + \sum_{h=1}^{H} \gamma_{h,j}(t)\sin(h\omega t) + \delta_{h,j}(t)\cos(h\omega t) = \alpha + \beta t^* + \sum_{h=1}^{H} \delta_h + \sum_{h=1}^{H} \delta_{h,1}.$$
Setting these two limits equal, we arrive at the first result,
$$\sum_{h=1}^{H} \delta_{h,1} = 0. \tag{10}$$
Now let $t^*$ be the starting time of the third year. Using arguments similar to those above, we arrive at the equality
$$\sum_{h=1}^{H} \delta_{h,1} = \sum_{h=1}^{H} \delta_{h,2}, \qquad \text{so } 0 = \sum_{h=1}^{H} \delta_{h,2} \quad \text{(from Equation 10)}.$$
Thus, the continuity constraint on the mean function at the starting time of each $j$th year leads to the constraints
$$\delta_{H,j} = -\sum_{h=1}^{H-1} \delta_{h,j}.$$
Using a similar argument to enforce continuity of the derivative of the mean function, we have
$$\gamma_{H,j} = -\sum_{h=1}^{H-1} \gamma_{h,j}.$$

11 Appendix B: Noninformative Segment Lengths in Discrete and Continuous Time

We wish to derive the noninformative distribution of $\{\zeta_j\}_{j=1}^{k}$ in the discrete time case. Whereas, in the
hypergeometric distribution, the number of samples until success is considered fixed and the number of successes at that time is considered random, we wish to relate this distribution to one where the segment lengths are random; that is, the number of samples until a specified number of successes is random. With this in mind, define the Inverse Hypergeometric Distribution as the distribution of the number of samples $i$ until the $j$th success, with population size $n$ and $J$ total successes in the population. We first derive the $(n, J, 1)$-Inverse Hypergeometric Distribution, that is, the distribution of the length until the first success.

Proposition 5 ($(n, J, 1)$-Inverse Hypergeometric Distribution). Suppose there is an urn with population $n$ and $J$ total successes. The distribution of the length until the first success is
$$p(\zeta_1 = i) = \frac{J}{n-(i-1)} \cdot \binom{n-J}{i-1} \bigg/ \binom{n}{i-1}.$$

Proof of Proposition 5. Choose the first $i-1$ draws out of the possible $n-J$ failures. The denominator of those first $i-1$ draws is all the ways to choose $i-1$ draws from a population of $n$. Then, conditioned on the first $i-1$ failures, the probability that the $i$th draw is a success is $J/(n-(i-1))$.

Using this distribution, we derive the general case by conditioning on one success at a time.

Theorem 5 ($(n, J, j)$-Inverse Hypergeometric Distribution). Suppose there is an urn with population $n$ and total successes $J$. Define $\zeta_0 = 0$. The distribution of $\{\zeta_l\}_{l=1}^{j}$, the first $j$ consecutive lengths-until-success, is the product
$$p(\{\zeta_l = i^{(l)}\}_{l=1}^{j}) = \prod_{l=1}^{j} \mathrm{IHG}\Big(n - \sum_{m=0}^{l-1} \zeta_m,\; J-l+1,\; 1\Big).$$
We say $\{\zeta_l = i^{(l)}\}_{l=1}^{j}$ is $\mathrm{IHG}(n, J, j)$ distributed. In the special case of $J = k-1$ and $j = k-1$, we have
$$p(\{\zeta_l = i^{(l)}\}_{l=1}^{k-1}) = \prod_{l=1}^{k-1} \mathrm{IHG}\Big(n - \sum_{m=0}^{l-1} \zeta_m,\; k-l,\; 1\Big) = \frac{(k-1)!}{n(n-1)\cdots(n-(k-2))} = 1 \bigg/ \binom{n}{k-1}.$$

Proof of Theorem 5. From Proposition 5, $p(\zeta_1 = i^{(1)})$ is $(n, J, 1)$-Inverse Hypergeometric distributed.
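As a quick sanity check on Proposition 5, the probabilities $p(\zeta_1 = i)$ for $i = 1, \dots, n-J+1$ should sum to one; a minimal sketch (the urn parameters $n = 20$, $J = 5$ are arbitrary):

```python
from math import comb

def ihg_first_success_pmf(n, J, i):
    # P(first success occurs on draw i): the first i-1 draws are failures,
    # then a success with probability J / (n - (i - 1))
    return (J / (n - (i - 1))) * comb(n - J, i - 1) / comb(n, i - 1)

n, J = 20, 5
# the first success must occur within the first n - J + 1 draws
total = sum(ihg_first_success_pmf(n, J, i) for i in range(1, n - J + 2))
print(abs(total - 1.0) < 1e-12)
```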
Note that, conditioned on $\zeta_1 = i^{(1)}$, the population is now $n - i^{(1)}$ and the remaining total number of successes is $J - 1$; thus,
$$p(\zeta_2 = i^{(2)} \mid \zeta_1 = i^{(1)}) = \mathrm{IHG}(n - i^{(1)},\, J-1,\, 1).$$
In the general case, for $p(\zeta_l = i^{(l)} \mid \{\zeta_m = i^{(m)}\}_{m=1}^{l-1})$, a similar argument holds: the population is reduced to $n - \sum_{m=1}^{l-1} \zeta_m$ and the number of successes is reduced to $J - l + 1$. Thus, using the law of conditional probability, the result is a product of inverse hypergeometric distributions, as written in the statement.

To prove the simplification, first consider the cases $k = 2$ and $k = 3$. Let $\{\zeta_j\}_{j=1}^{k-1}$ be $\mathrm{IHG}(n, k-1, k-1)$ distributed. Suppose $k = 2$; then
$$p(\zeta_1 = i_1) = \frac{1}{n - i_1 + 1} \cdot \binom{n-1}{i_1 - 1} \bigg/ \binom{n}{i_1 - 1} = \frac{1}{n - i_1 + 1} \cdot \frac{(n-1)!\,(n - i_1 + 1)!}{(n - i_1)!\, n!} = \frac{1!}{n}.$$
When $k = 3$, observe the following telescopic cancellation:
$$p(\zeta_1 = i_1, \zeta_2 = i_2) = p(\zeta_1 = i_1)\, p(\zeta_2 = i_2 \mid \zeta_1 = i_1)$$
$$= \left(\frac{2}{n - i_1 + 1} \binom{n-2}{i_1-1} \bigg/ \binom{n}{i_1-1}\right) \cdot \left(\frac{1}{n - i_1 - i_2 + 1} \binom{n - i_1 - 1}{i_2 - 1} \bigg/ \binom{n - i_1}{i_2 - 1}\right)$$
$$= \left(\frac{2}{n - i_1 + 1}\cdot\frac{(n-2)!\,(n - i_1 + 1)!}{(n - i_1 - 1)!\, n!}\right) \cdot \left(\frac{1}{n - i_1 - i_2 + 1}\cdot\frac{(n - i_1 - 1)!\,(n - i_1 - i_2 + 1)!}{(n - i_1 - i_2)!\,(n - i_1)!}\right)$$
$$= \frac{2(n - i_1)}{n(n-1)} \cdot \frac{1}{n - i_1} = \frac{2!}{n(n-1)}.$$
For general $k$, using the same telescoping cancellation, notice
$$p(\{\zeta_j = i_j\}_{j=1}^{k-1}) = \frac{k-1}{n - i_1 + 1}\binom{n-(k-1)}{i_1-1}\bigg/\binom{n}{i_1-1} \cdot \frac{k-2}{n - i_1 - i_2 + 1}\binom{n - i_1 - (k-2)}{i_2-1}\bigg/\binom{n - i_1}{i_2-1} \cdot p(\{\zeta_j\}_{j=3}^{k-1} \mid \{\zeta_j\}_{j=1}^{2})$$
$$= \frac{k-1}{n - i_1 + 1}\cdot\frac{(n-(k-1))!\,(n - i_1 + 1)!}{n!\,(n - k - i_1 + 2)!} \cdot \frac{k-2}{n - i_1 - i_2 + 1}\cdot\frac{(n - i_1 - k + 2)!\,(n - i_1 - i_2 + 1)!}{(n - i_1)!\,(n - i_1 - i_2 - k + 3)!} \cdot p(\{\zeta_j\}_{j=3}^{k-1} \mid \{\zeta_j\}_{j=1}^{2})$$
$$= \frac{(k-1)(k-2)}{n(n-1)\cdots(n-(k-2))} \cdot \frac{(n - i_1 - i_2)!}{(n - i_1 - i_2 - k + 3)!} \cdot p(\{\zeta_j\}_{j=3}^{k-1} \mid \{\zeta_j\}_{j=1}^{2}).$$
There are three points to make here. The first is that the numerator is recursively forming $(k-1)!$. The second is that the first denominator is already equal to
$(n(n-1)\cdots(n-(k-2)))^{-1}$. Finally, the term in the middle can be rewritten in terms of $j$, in order to understand how it changes during the recursion:
$$\frac{(n - i_1 - i_2)!}{(n - i_1 - i_2 - k + 3)!} = \frac{(n - \sum_{l=1}^{j} i_l)!}{(n - \sum_{l=1}^{j} i_l - k + (j+1))!}.$$
Using this equation, after recursing through $k - 2$ conditional probabilities, we arrive at
$$= \frac{(k-1)!}{n(n-1)\cdots(n-(k-2))} \cdot \frac{(n - \sum_{l=1}^{k-2} i_l)!}{(n - \sum_{l=1}^{k-2} i_l - 1)!} \cdot p(\zeta_{k-1} \mid \{\zeta_j\}_{j=1}^{k-2})$$
$$= \frac{(k-1)!}{n(n-1)\cdots(n-(k-2))} \cdot \frac{(n - \sum_{l=1}^{k-2} i_l)!}{(n - \sum_{l=1}^{k-2} i_l - 1)!} \cdot \frac{1}{n - \sum_{l=1}^{k-1} i_l + 1} \cdot \frac{(n - \sum_{l=1}^{k-2} i_l - 1)!\,(n - \sum_{l=1}^{k-1} i_l + 1)!}{(n - \sum_{l=1}^{k-2} i_l)!\,(n - \sum_{l=1}^{k-1} i_l)!}$$
$$= \frac{(k-1)!}{n(n-1)\cdots(n-(k-2))}.$$
The last part of this theorem confirms that the Inverse Hypergeometric distribution is the noninformative prior on discrete segment lengths, as it assigns each change point process sample point equal probability $1/\binom{n}{k-1}$.

From the other direction, we ought to expect that the Inverse Hypergeometric distribution in discrete time converges in distribution to the Dirichlet as well. This is indeed the case.

Theorem 6 (Noninformative Segment Length Convergence: Inverse Hypergeometric to Dirichlet). Let $\{\zeta^*_j\}_{j=1}^{k-1} \in (0,1)$ be arbitrary with $\sum_{j=1}^{k-1} \zeta^*_j < 1$. Define the corresponding discrete case as $\zeta_j = \lfloor (n - j + 1)\zeta^*_j \rfloor$ for all $j$. Then the inverse hypergeometric segment lengths converge in distribution to the noninformative continuous time distribution on segment lengths, $\mathrm{Dirichlet}(\mathbf{1}_k)$, as $n \to \infty$:
$$F_{\mathrm{IHG}}(\{\zeta_j\}_{j=1}^{k-1}) \to F_{\mathrm{Dir}}(\{\zeta^*_j\}_{j=1}^{k-1}; \mathbf{1}_k),$$
where $F$ denotes the distribution function and the IHG distribution is parameterized as in the noninformative case, with population $n$, $k-1$ total successes, and samples until $k-1$ successes.

Proof of Theorem 6. Let $\{\zeta^*_j\}_{j=1}^{k} \in (0,1)$ with $\sum_{j=1}^{k} \zeta^*_j = 1$ be otherwise arbitrary. Define the discretization $\zeta_j = \lfloor (n - j + 1)\zeta^*_j \rfloor$. Note that $\{\zeta_j\}_{j=1}^{k}$ as defined represents the sample space of the IHG probability measure, and $\{\zeta^*_j\}_{j=1}^{k}$ represents the sample space of the Dirichlet random variable.
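The closed form in Theorem 5 can be spot-checked numerically: chaining the single-success pmf over successive draws, with the population and success count reduced each time, should give $(k-1)!/(n(n-1)\cdots(n-k+2)) = 1/\binom{n}{k-1}$ for any feasible length vector. A small sketch with the arbitrary choice $n = 10$, $k = 4$:

```python
from math import comb

def ihg1(n, J, i):
    # (n, J, 1)-inverse hypergeometric pmf: first success on draw i
    return (J / (n - (i - 1))) * comb(n - J, i - 1) / comb(n, i - 1)

def joint_pmf(n, k, lengths):
    # product of IHG(., ., 1) terms with the population and success
    # counts reduced after each observed segment, as in Theorem 5
    p, pop, succ = 1.0, n, k - 1
    for i in lengths:
        p *= ihg1(pop, succ, i)
        pop -= i
        succ -= 1
    return p

n, k = 10, 4
uniform = 1 / comb(n, k - 1)  # every feasible length vector gets the same mass
checks = [(1, 2, 3), (2, 2, 2), (5, 1, 1)]
print(all(abs(joint_pmf(n, k, L) - uniform) < 1e-12 for L in checks))
```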
As such, the CDF of the IHG random variable satisfies
$$F_{\mathrm{IHG}}(\zeta_1, \dots, \zeta_{k-1}) = F_{\mathrm{IHG}}(\lfloor n\zeta^*_1 \rfloor, \dots, \lfloor (n-k+2)\zeta^*_{k-1} \rfloor)$$
$$= \sum_{i_1=1}^{\lfloor n\zeta^*_1 \rfloor} \cdots \sum_{i_{k-1}=1}^{\lfloor (n-k+2)\zeta^*_{k-1} \rfloor} \frac{(k-1)!}{n(n-1)\cdots(n-k+2)} \qquad \text{(Theorem 5, for $n$ large enough)}$$
$$= (k-1)!\, \frac{\lfloor n\zeta^*_1 \rfloor}{n}\, \frac{\lfloor (n-1)\zeta^*_2 \rfloor}{n-1} \cdots \frac{\lfloor (n-k+2)\zeta^*_{k-1} \rfloor}{n-k+2} = B^{-1}(\mathbf{1}_k)\, \frac{\lfloor n\zeta^*_1 \rfloor}{n} \cdots \frac{\lfloor (n-k+2)\zeta^*_{k-1} \rfloor}{n-k+2}$$
$$\to B^{-1}(\mathbf{1}_k)\, \zeta^*_1 \cdots \zeta^*_{k-1} \qquad (n \to \infty)$$
$$= B^{-1}(\mathbf{1}_k) \int_0^{\zeta^*_1} \cdots \int_0^{\zeta^*_{k-1}} \partial\zeta^*_1 \cdots \partial\zeta^*_{k-1} = F_{\mathrm{Dirichlet}}(\zeta^*_1, \dots, \zeta^*_{k-1}; \mathbf{1}_k).$$

12 Appendix C: Supplementary Results for Simulation Study

12.1 Simulation study: factorial subsets

Following up from Section 6, we break the factorial study down into subsets along the time distribution (Figure 9), the error distribution (Figure 10), and the robustness distribution (Figure 11). Finally, the performance of the three main models is broken down by the number of segments in the synthetic data in Figure 12.

[Figure 9: Row 1 is a subset of the data generated with time distribution uniformly spaced and change points uniformly distributed. Row 2 is a subset of the data generated with time distribution $t_i \stackrel{iid}{\sim} \mathrm{Beta}(0.5, 0.5)$ and change points simulated from BPP. Row 3 is a subset of the data generated with time distribution $t_i \stackrel{iid}{\sim} \mathrm{Beta}(2, 2)$
and change points simulated from BPP.]

[Figure 10: Row 1 is a subset of the data generated with error variance 0.1. Row 2 with error variance 0.2. Row 3 with error variance 0.3.]

[Figure 11: Row 1 is a subset of the data generated with robustness parameter ν = 3. Row 2 with ν = 10. Row 3 with ν = 100.]

[Figure 12: Study is broken down by the number of changes, from 0 changes in the first row to 3 changes in the fourth row.]

[Figure 13: The Gibbs sampler from subsection 14.2 is run on 10 replications of the same synthetic data from Section 6.]

12.2 Simulation study with Gibbs sampler

Figure 13 implements the Gibbs sampler from subsection 14.2 on the synthetic data study with 10 replicates per setting, as described in Section 6.

12.3 Simulation study with other models

Following up from Section 6, we run the full study on three additional models in Figure 14. The first model is the BPP change point process model with a Normal likelihood for the error distribution instead of the t-distributed likelihood, the second is the noninformative discrete time model from Proposition 2, and the third model is BPP with a different prior on the number of segments, following Equation 3.
We then break the factorial study down into subsets along the time distribution (Figure 15), the error distribution (Figure 16), and the robustness distribution (Figure 17) for these three models. Finally, the performance of the three additional models is broken down by the number of segments in the synthetic data in Figure 18.

[Figure 14: Comparing three additional models on the full factorial synthetic study. Left is the continuous time noninformative BPP model but with a normal error distribution. Middle is the noninformative discrete time model from Proposition 2. Right is the continuous time noninformative BPP model but with a prior on the number of segments that represents equally likely sequences across k.]

13 Appendix D: Supplementary Results for Case Study

13.1 Prior on parameters

The mean parameters $\theta = (\alpha, \beta, \{\gamma_h, \delta_h\}_{h=1}^{H})^T$ are a priori independent
with zero precision, except for $\beta$. Since we do not want short periods of change to be captured by sharp slopes, we set the precision of $\beta$ to 5 to help regularize and avoid spurious changes. Denote the corresponding precision matrix by $\Lambda_\theta$.

Now, consider the prior distribution on the annual harmonic contrasts $\phi = (\{\gamma_{h,l}\}_{h=1,l=1}^{H,J}, \{\delta_{h,l}\}_{h=1,l=1}^{H,J})^T$ given their continuity constraints. We construct this prior separately for $\gamma_{h,l}$ and $\delta_{h,l}$ for each year, and then put it together afterwards. Define $\gamma_l = (\gamma_{1,l}, \dots, \gamma_{H,l})^T$ as the vector of sin coefficients for the $l$th year. Assume these contrasts are Gaussian with mean zero, having an exponentially decaying diagonal variance of the seasonal anomalies with respect to the harmonic number. Given the prior for $\gamma_l$, we will derive the prior distribution for $\gamma_{l,-H}$ conditioned on the continuity constraints on the $H$th harmonic.

Let $\gamma_l \sim N(0, \Phi_C)$, where $\Phi_C = \psi\,\mathrm{Diag}_{h=1,\dots,H}\{\exp(\lambda(1-h))\}$. The $\psi$ parameter is the prior variance of the first harmonic, which then decays exponentially according to $\lambda$ as the harmonics increase. In all that follows, we assume $\lambda = 1$. The joint distribution of $\gamma_l$ and the continuity constraint $\xi_l = \sum_{h=1}^{H} \gamma_{h,l}$ is, with $s = \sum_{h'}\sum_{h} (\Phi_C)_{h h'}$,
$$\begin{pmatrix} \gamma \\ \xi \end{pmatrix} = \begin{pmatrix} I_H \\ \mathbf{1}_H^\top \end{pmatrix} \gamma \sim N\left(0, \begin{pmatrix} \Phi_C & \Phi_C \mathbf{1}_H \\ (\Phi_C \mathbf{1}_H)^\top & s \end{pmatrix}\right).$$

[Figure 15: Row 1 is a subset of the data generated with time distribution uniformly spaced and change points uniformly distributed. Row 2 is a subset of the data generated with time distribution $t_i \stackrel{iid}{\sim} \mathrm{Beta}(0.5, 0.5)$ and change points simulated from BPP. Row 3 is a subset of the data generated with time distribution $t_i \stackrel{iid}{\sim} \mathrm{Beta}(2, 2)$ and change points simulated from BPP.]
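The conditioning step that follows from this joint Gaussian can be sketched numerically with the standard formula for Gaussian conditional covariances; a minimal example, assuming $H = 3$, $\psi = 1$, $\lambda = 1$ (all values illustrative):

```python
import numpy as np

H, psi, lam = 3, 1.0, 1.0
Phi_C = psi * np.diag(np.exp(lam * (1 - np.arange(1, H + 1))))  # decaying prior variances
one = np.ones(H)
s = one @ Phi_C @ one          # Var(xi_l) = sum of all entries of Phi_C

# Cov(gamma_l | xi_l = 0) via the Gaussian conditional covariance formula
cond_cov = Phi_C - np.outer(Phi_C @ one, Phi_C @ one) / s

# every row of cond_cov sums to zero, so any draw satisfies sum(gamma_l) = 0
print(np.allclose(cond_cov @ one, 0.0))
```

The zero row sums make the constraint hold almost surely for draws from the conditional prior, which is why only the first $H-1$ coordinates need to be stored.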
[Figure 16: Row 1 is a subset of the data generated with error variance 0.1. Row 2 with error variance 0.2. Row 3 with error variance 0.3.]

[Figure 17: Row 1 is a subset of the data generated with robustness parameter ν = 3. Row 2 with ν = 10. Row 3 with ν = 100.]
[Figure 18: Study is broken down by the number of changes, from 0 changes in the first row to 3 changes in the fourth row.]

[Figure 19: Evaluating the same case study location for deforestation in Rondonia, but without interannually varying harmonics as in Equation 6.]

Using the formulae for Gaussian conditional distributions, we arrive at $\gamma_l \mid \xi_l = 0 \sim N(0, \Phi_C - \Phi_C \mathbf{1}_H (\Phi_C \mathbf{1}_H)^\top / s)$. Only the first $H-1$ positions of this conditional multivariate Gaussian are used, since the $H$th harmonic is constrained. The contrast covariance matrix is then the Kronecker product over $2J$ copies of this covariance matrix, for the two harmonic coefficient types (sin and cos) and $J$ years:
$$\Lambda_\phi^{-1} = I_{2J} \otimes \big(\Phi_C - \Phi_C \mathbf{1}_H (\Phi_C \mathbf{1}_H)^\top / s\big).$$
The full parameter precision matrix is then the block diagonal combination of the precision matrices on $\theta$ and $\phi$:
$$\Phi^{-1} = \mathrm{blkdiag}(\Lambda_\theta, \Lambda_\phi).$$

13.2 Applying other models to the case study

13.2.1 Case study results for model without interannually varying harmonics

We also evaluate the three case study locations for the harmonic model without interannually varying harmonics from Equation 6. These results are in Figure 19, Figure 20 and Figure 21. The mean phenology function estimates are clearly different from our model in the original case study, since interannual variation is not being captured. The detected changes for deforestation and crop rotation are similar; however, the model fails to capture the changes due to drought in the shrub and grassland example.

13.2.2 Case study results for different prior on number of segments

In subsection 5.3, we introduced two priors on the number of segments. The prior we use in the case study in Section 8 is from Equation 4.
In this subsection, we evaluate the case study pixels under the inverse of that prior,
$$\pi_0(k) \propto (2\pi)^{pk/2}\,|\Phi^{-1}|^{-k/2} \prod_{i=1}^{n} \big((1-t_i)/(1-t_{i-1})\big)^{-k}. \tag{11}$$
The results are in Figure 22, Figure 23, and Figure 24.

[Figure 20: Evaluating the same case study location for crop rotation, but without interannually varying harmonics as in Equation 6. This model detects similar changes despite the fact that it does not capture interannual variation.]

[Figure 21: Evaluating the same case study location for drought responses in shrub and grassland, but without interannually varying harmonics as in Equation 6. This model fails to detect changes due to drought as a result of removing interannual variation.]

[Figure 22: Evaluating the same case study location for deforestation in Rondonia, but with a different prior on the number of segments. Notice three more changes are added. These extra changes appear to be false positives, as supported by high resolution imagery and reference to MapBiomas.]

The deforestation example demonstrates that the model under
this prior incurs extra falsely detected changes compared to the prior in Equation 4.

14 Appendix E: Supplementary Results for Methodology: EM and Simulation

14.1 Expectation Maximization

Expectation maximization will be used to obtain posterior expectations of the robustness variables $\{q_i\}_{i=0}^{n}$ as well as the state variables $\{z_{t_i}\}_{i=0}^{n}$, and to maximize the marginal likelihood with respect to the mean and variance parameters for each segment $(\Theta, \sigma^2)$. As such, we evaluate the posterior expectations $E_{z_{t_i}|y,X,\Theta^{(s)}}[1\{z_{t_i}=j\}]$ and $E_{z_{t_i},z_{t_{i-1}}|y,X,\Theta^{(s)}}[1\{z_{t_i}=j, z_{t_{i-1}}=j\}]$. Following Little and Rubin (2019), the posterior distribution of $q_i \mid z_i = j, y_i, X, \Theta_j^{(s)}$ is $\mathrm{Ga}\big(\frac{\nu+1}{2},\, \frac{\nu}{2} + \frac{(y_i - x_i^T\theta_j^{(s)})^2}{2\sigma_j^{2(s)}}\big)$, from which the corresponding E-steps are readily available.

[Figure 23: Evaluating the same case study location for crop rotation, but with the inverse prior on the number of segments. This model detects similar changes despite the different prior.]

[Figure 24: Evaluating the same case study location for drought responses in shrub and grassland, but with the inverse prior on the number of segments. Change detection results do not change after switching the prior.]

Conditioning on $z_i = j$ and the likelihood mean function for the $j$th state at time $t_i$, $\mu_{j,t_i}^{(s)}$, and assessing the posterior for a single $q_i$,
$$p(q_{t_i} \mid y_{t_i}, z_{t_i} = j; \nu) \propto q_{t_i}^{1/2} \exp\Big(-\frac{q_{t_i}(y_{t_i}-\mu_{j,t_i}^{(s)})^2}{2\sigma^{2(s)}}\Big)\, q_i^{\frac{\nu}{2}-1} \exp\Big(-\frac{q_i \nu}{2}\Big) \propto q_i^{\frac{\nu+1}{2}-1} \exp\Big(-q_i\Big(\frac{\nu}{2} + \frac{(y_i-\mu_{j,t_i}^{(s)})^2}{2\sigma^{2(s)}}\Big)\Big),$$
which is a gamma distribution $\mathrm{Ga}\big(\frac{\nu+1}{2},\, \frac{\nu}{2} + \frac{(y_{t_i}-\mu_{j,t_i}^{(s)})^2}{2\sigma^{2(s)}}\big)$.
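The resulting E-step weight is just the mean of this gamma posterior, $(\nu+1)/(\nu + (y-\mu)^2/\sigma^2)$; a small sketch (the numeric inputs are illustrative):

```python
def e_step_weight(y, mu, sigma2, nu):
    # posterior mean of the robustness variable q_i given z_i = j:
    # the mean of Ga((nu + 1)/2, nu/2 + (y - mu)^2 / (2 * sigma2))
    a = (nu + 1) / 2
    b = nu / 2 + (y - mu) ** 2 / (2 * sigma2)
    return a / b

# observations far from the segment mean receive smaller weights,
# which is what makes the t-likelihood robust to outliers
print(e_step_weight(5.0, 0.0, 1.0, 3) < e_step_weight(0.1, 0.0, 1.0, 3))
```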
The Q function follows:
$$Q(\Theta \mid \Theta^{(s)}) = E_{q,z|y,X,\Theta^{(s)}}\Bigg[\sum_{i=0}^{n}\sum_{j=1}^{k} 1\{z_{t_i}=j\}\Big(-\log(\sigma) - \frac{q_{t_i}}{2\sigma^2}\big(y_{t_i}-x_{t_i}^T\theta_j\big)^2\Big) - pk\log(\sigma) - \frac{1}{2\sigma^2}\sum_{j=1}^{k}\theta_j^T\Phi^{-1}\theta_j - \log(\sigma^2) + \sum_{i=1}^{n}\sum_{j=1}^{k-1}\sum_{h=j}^{k} 1\{z_{t_i}=h,\, z_{t_{i-1}}=j\}\log\pi(z_{t_i}=h \mid z_{t_{i-1}}=j)\Bigg].$$
The M-steps for the mean parameters are weighted least squares,
$$\hat\theta_j = (X^T W_j X + \Phi^{-1})^{-1} X^T W_j y,$$
where $W_j$ is a diagonal matrix with entries $E[1\{z_i=j\}q_i \mid y, \Theta^{(s)}] = E[q_i \mid z_i=j, y, \Theta^{(s)}] \cdot E[1\{z_i=j\} \mid y, \Theta^{(s)}]$. The first of those expectations is given above, and the marginal expectation of $z_i = j$ is provided by the forward-backward algorithm. The joint posterior expectation of $1\{z_i = j+1, z_{i-1} = j\}$ is also provided by the forward-backward algorithm. The M-step for the variance $\sigma^2$ can also be evaluated analytically:
$$\sigma^{2(s+1)} = \frac{\sum_{i=0}^{n}\sum_{j=1}^{k} E\big[1\{z_{t_i}=j\}\,q_{t_i} \mid y, \Theta^{(s)}, \sigma^{2(s)}\big]\big(y_{t_i}-x_{t_i}^T\theta_j^{(s+1)}\big)^2 + \sum_{j=1}^{k}\theta_j^{(s+1)T}\Phi^{-1}\theta_j^{(s+1)}}{\sum_{i=0}^{n}\sum_{j=1}^{k} E\big[1\{z_{t_i}=j\} \mid y, \Theta^{(s)}, \sigma^{2(s)}\big] + pk + 2}.$$
After the M-step is complete, the E-step is repeated conditioned on the updated parameters. The algorithm is repeated until convergence of the $Q$ function.

The likelihood distribution of $y_{t_i} \mid z_{t_i} = j, \Theta$, after marginalizing out $q_{t_i}$, is t-distributed as follows:
$$f(y_{t_i}; \mu_{t_i}, \sigma^2) = (2\pi\sigma^2)^{-1/2}\,\frac{(\nu/2)^{\nu/2}}{\Gamma(\nu/2)} \int q_i^{\frac{\nu+1}{2}-1} \exp\Big(-q_i\Big(\frac{\nu}{2} + \frac{(y_i-\mu_{j,t_i})^2}{2\sigma^2}\Big)\Big)\, dq_i$$
$$= (2\pi\sigma^2)^{-1/2}\,\frac{(\nu/2)^{\nu/2}\,\Gamma(\frac{\nu+1}{2})}{\Gamma(\nu/2)} \Big(\frac{\nu}{2} + \frac{(y_i-\mu_{j,t_i})^2}{2\sigma^2}\Big)^{-\frac{\nu+1}{2}}$$
$$= \frac{1}{\sigma}\,\frac{\nu^{\nu/2}}{2^{\frac{\nu+1}{2}}}\,\frac{\Gamma(\frac{\nu+1}{2})}{\pi^{1/2}\,\Gamma(\frac{\nu}{2})} \Big(\frac{\nu}{2} + \frac{(y_i-\mu_{j,t_i})^2}{2\sigma^2}\Big)^{-\frac{\nu+1}{2}} = \frac{1}{\sigma}\,\frac{\Gamma(\frac{\nu+1}{2})}{(\pi\nu)^{1/2}\,\Gamma(\frac{\nu}{2})} \Big(1 + \frac{(y_i-\mu_{j,t_i})^2}{\sigma^2\nu}\Big)^{-\frac{\nu+1}{2}},$$
which is a location-scale t-distribution with mean $\mu_{j,t_i}$ and scale $\sigma$.

14.2 Gibbs Sampling

Toward full Bayesian inference, analytical posteriors are not available for general models (see Fearnhead (2006) for a model obtaining exact posterior inference); however, simulation from the full conditional distribution $[z \mid y, \Theta, \sigma^2]$ can be derived and used within a broader Gibbs sampling methodology. The posterior conditional
distribution of the mean vectors $\theta_j$ follows a Gaussian distribution, since their prior is Gaussian. Let $W^{(j)}_{ii} = q_i 1\{z_i = j\}$ be diagonal. Then
$$p(\theta_j \mid y, z, q, \sigma_j^2) \propto \exp\big\{-\tfrac{1}{2}(\theta_j - \mu_j)^T \Lambda_j (\theta_j - \mu_j)\big\},$$
where $\mu_j = (X^T W^{(j)} X + \Phi^{-1})^{-1} X^T W^{(j)} y$ and $\Lambda_j = (X^T W^{(j)} X + \Phi^{-1})/\sigma^2$ are the mean and precision matrix of the Gaussian posterior for $\theta_j$. The posterior conditional distribution of $\sigma^2$ is scaled-inverse-$\chi^2$:
$$p(\sigma^2 \mid y, z, q, \Theta) \propto (\sigma^2)^{-\big(\sum_{i=1}^{n}\sum_{j=1}^{k} 1\{z_{t_i}=j\} + pk\big)/2 - 1} \exp\Bigg(-\frac{\sum_{i=1}^{n}\sum_{j=1}^{k} q_{t_i} 1\{z_{t_i}=j\}\big(y_{t_i}-x_{t_i}^T\theta_j\big)^2 + \sum_{j=1}^{k}\theta_j^T\Phi^{-1}\theta_j}{2\sigma^2}\Bigg),$$
which is a scaled-inverse-$\chi^2(\nu_0, \tau_0^2)$ with parameters $\nu_0 = \sum_{i=1}^{n}\sum_{j=1}^{k} 1\{z_{t_i}=j\} + pk$ and
$$\tau_0^2 = \frac{\sum_{i=1}^{n}\sum_{j=1}^{k} q_{t_i} 1\{z_{t_i}=j\}\big(y_{t_i}-x_{t_i}^T\theta_j\big)^2 + \sum_{j=1}^{k}\theta_j^T\Phi^{-1}\theta_j}{\sum_{i=1}^{n}\sum_{j=1}^{k} 1\{z_{t_i}=j\} + pk},$$
where $p$ is the dimension of $\theta_j$ for all $j = 1, \dots, k$.

14.2.1 Conditional distribution of state variables

The conditional distribution $p(z \mid y, \theta, \sigma^2, q)$ can be derived using the contribution of Chib (1996), while carefully handling the robustness parameters $q$. We cover the high level details from Chib (1996) here for our model. Define the history $Z^{t_i} = (z_{t_0}, \dots, z_{t_i})^T$ and the future $Z_{t_{i+1}} = (z_{t_{i+1}}, \dots, z_{t_n})^T$, with similar vectors $Y^{t_i}, Y_{t_{i+1}}, Q^{t_i}, Q_{t_{i+1}}$ for the observations $y$ and robustness parameters $q$. Start by factorizing the conditional distribution as
$$p(z \mid y, \theta, \sigma^2, q) = p(z_{t_n} \mid y, \theta, \sigma^2, q)\, p(z_{t_{n-1}} \mid Z_{t_n}, y, \theta, \sigma^2, q) \cdots p(z_{t_i} \mid Z_{t_{i+1}}, y, \theta, \sigma^2, q) \cdots p(z_{t_0} \mid Z_{t_1}, y, \theta, \sigma^2, q).$$
Except for the first $z_{t_n}$ term, these terms take the form $p(z_{t_i} \mid Z_{t_{i+1}}, y, \theta, \sigma^2, q)$. Using Bayes' rule and noting the conditional independencies from the Markov chain,
$$p(z_{t_i} \mid Z_{t_{i+1}}, y, \theta, \sigma^2, q) \propto p(z_{t_i}, Z_{t_{i+1}}, Y_{t_{i+1}}, Q_{t_{i+1}} \mid Y^{t_i}, Q^{t_i}, \theta, \sigma)$$
$$\propto p(Z_{t_{i+1}}, Y_{t_{i+1}}, Q_{t_{i+1}} \mid z_{t_i}, \theta, \sigma)\, p(z_{t_i} \mid Y^{t_i}, Q^{t_i}, \theta, \sigma)$$
$$\propto p(Y_{t_{i+1}}, Q_{t_{i+1}} \mid Z_{t_{i+1}}, \theta, \sigma)\, p(Z_{t_{i+1}} \mid z_{t_i}, \theta, \sigma)\, p(z_{t_i} \mid Y^{t_i}, Q^{t_i}, \theta, \sigma)$$
$$\propto p(z_{t_{i+1}} \mid z_{t_i})\, p(z_{t_i} \mid Y^{t_i}, Q^{t_i}, \theta, \sigma^2).$$
The first term is the continuous time transition probability from Theorem 3. Regarding the second term, first note that $p(z_{t_0} = 1 \mid Y^{t_0}, Q^{t_0}, \theta, \sigma^2) = 1$, since the prior is a point mass at 1, and thus we can proceed recursively. Assume $p(z_{t_{i-1}} \mid Y^{t_{i-1}}, Q^{t_{i-1}}, \theta, \sigma^2)$ is known.
We have
$$p(z_{t_i} \mid Y^{t_i}, Q^{t_i}, \theta, \sigma^2) \propto p(z_{t_i}, y_{t_i}, q_{t_i} \mid Y^{t_{i-1}}, Q^{t_{i-1}}) \propto p(z_{t_i} \mid Y^{t_{i-1}}, Q^{t_{i-1}}, \theta, \sigma^2)\, p(y_{t_i} \mid q_{t_i}, z_{t_i}, \theta, \sigma^2),$$
since $p(q_i \mid z_i, \theta, \sigma^2) = p(q_i)$, which is constant with respect to $z_{t_i}$. The first term above can be written as
$$p(z_{t_i} \mid Y^{t_{i-1}}, Q^{t_{i-1}}, \theta, \sigma^2) = \sum_{j=1}^{k} p(z_{t_i} \mid z_{t_{i-1}} = j)\, p(z_{t_{i-1}} = j \mid Y^{t_{i-1}}, Q^{t_{i-1}}, \theta, \sigma^2),$$
and the second term is the likelihood distribution for $y_i$.
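The forward recursion plus the backward factorization amount to a forward-filter, backward-sample pass in the style of Chib (1996). A compact sketch for a generic left-to-right chain, with the transition matrix and per-state log-likelihoods supplied by the surrounding model (all inputs below are illustrative placeholders, not the paper's fitted quantities):

```python
import numpy as np

def ffbs(trans, loglik, rng):
    # trans[j, h] = p(z_{t_{i+1}} = h | z_{t_i} = j)
    # loglik[i, j] = log p(y_{t_i} | z_{t_i} = j), already integrating q if needed
    n, k = loglik.shape
    filt = np.zeros((n, k))
    filt[0, 0] = 1.0                       # prior is a point mass at state 1
    for i in range(1, n):                  # forward filtering
        pred = filt[i - 1] @ trans         # p(z_{t_i} | data up to t_{i-1})
        post = pred * np.exp(loglik[i])
        filt[i] = post / post.sum()
    z = np.zeros(n, dtype=int)
    z[-1] = rng.choice(k, p=filt[-1])
    for i in range(n - 2, -1, -1):         # backward sampling:
        # p(z_{t_i} | z_{t_{i+1}}, data) is proportional to
        # p(z_{t_{i+1}} | z_{t_i}) * p(z_{t_i} | data up to t_i)
        w = trans[:, z[i + 1]] * filt[i]
        z[i] = rng.choice(k, p=w / w.sum())
    return z

trans = np.array([[0.7, 0.2, 0.1], [0.0, 0.7, 0.3], [0.0, 0.0, 1.0]])
loglik = np.zeros((6, 3))                  # flat likelihood, for illustration only
z = ffbs(trans, loglik, np.random.default_rng(0))
print(z[0] == 0 and all(z[i] <= z[i + 1] for i in range(5)))
```

Because the transition matrix is upper triangular (a left-to-right change point chain), every sampled path starts in state 1 and is nondecreasing.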
arXiv:2504.17885v1 [math.PR] 24 Apr 2025

Maximal Inequalities for Independent Random Vectors

Supratik Basu¹ and Arun Kumar Kuchibhotla²
¹supratik.basu@duke.edu, ²arunku@cmu.edu
¹Department of Statistical Science, Duke University
²Department of Statistics & Data Science, Carnegie Mellon University

April 28, 2025

Abstract

Maximal inequalities refer to bounds on expected values of the supremum of averages of random variables over a collection. They play a crucial role in the study of non-parametric and high-dimensional estimators, and especially in the study of empirical risk minimizers. Although the expected supremum over an infinite collection appears more often in these applications, the expected supremum over a finite collection is a basic building block; this follows from the generic chaining argument. For the case of a finite maximum, most existing bounds stem from the Bonferroni inequality (or the union bound). The optimality of such bounds is not obvious, especially in the context of heavy-tailed random vectors. In this article, we consider the problem of finding sharp upper and lower bounds for the expected $L_\infty$ norm of the mean of finite-dimensional random vectors under marginal variance bounds and an integrable envelope condition.

1 Introduction and Motivation

Suppose $W_1, \dots, W_n$ are independent random variables in some measurable space, taking values in $\mathcal{W}$. Let $\mathcal{F}$ be a collection of real-valued functions $f : \mathcal{W} \to \mathbb{R}$. The quantity
$$E_n(\mathcal{F}) := E\Bigg[\sup_{f\in\mathcal{F}} \bigg|\frac{1}{n}\sum_{i=1}^{n} \{f(W_i) - E[f(W_i)]\}\bigg|\Bigg]$$
plays a crucial role in the study of empirical risk minimization (van de Geer, 2000; Van der Vaart, 1998; van der Vaart and Wellner, 2023; Talagrand, 2021; Bartl and Mendelson, 2025).
To mention one specific example, suppose we have observations $(X_i, Y_i)$ from the non-parametric regression model $Y_i = \mu_0(X_i) + \xi_i$, with $X_i$ independent of $\xi_i$ and $\mu_0 \in \mathcal{M}$ for some function class $\mathcal{M}$ equipped with the $L_2$-norm $\|\mu_1 - \mu_2\| = (E[|\mu_1(X)-\mu_2(X)|^2])^{1/2}$. Consider the least squares estimator (LSE) on the function class $\mathcal{M}$:
$$\hat\mu_n = \operatorname*{argmin}_{\mu\in\mathcal{M}} \frac{1}{n}\sum_{i=1}^{n}(Y_i - \mu(X_i))^2.$$
The results of Han and Wellner (2018) and Han and Wellner (2019) characterize the rates of convergence of the LSE using the behavior of
$$\phi_n(\delta) = E\Bigg[\sup_{\mu\in\mathcal{M}: \|\mu-\mu_0\|\le\delta} \bigg|\frac{1}{n}\sum_{i=1}^{n} \xi_i(\mu-\mu_0)(X_i)\bigg|\Bigg].$$
Hence, optimal (upper and lower) bounds on $\phi_n(\delta)$ allow one to understand the precise rate of convergence of the LSE. Without the independence between $\xi_i$ and $X_i$, $\phi_n(\delta)$ can be difficult to control, as discussed in Kuchibhotla and Patra (2022), especially if the $\xi_i$'s only have a finite $q$-th moment.

Even though the suprema mentioned above are over an infinite collection, they can be reduced to a finite collection using chaining arguments. A detailed description can be found in Talagrand (2021), Bartl and Mendelson (2025), and Dirksen (2015). The argument goes as follows: suppose $\mathcal{F}_0 \subset \mathcal{F}_1 \subset \cdots \subset \mathcal{F}_k \subset \cdots \subset \mathcal{F}$ is a collection of finite sets approaching $\mathcal{F}$ (as $k \to \infty$), with $|\mathcal{F}_0| = 1$ (i.e., a singleton). Assume, for simplicity, that $E[f(W_i)] = 0$ for all $f \in \mathcal{F}$ and $1 \le i \le n$. Also, for any $f \in \mathcal{F}$, let $\pi_k(f)$ denote the element in $\mathcal{F}_k$ that is "closest" to $f$; one can use any generic notion of closeness. With this notation, we can write
$$E_n(\mathcal{F}) \le \sum_{j=0}^{\infty} E\Bigg[\sup_{f\in\mathcal{F}} \bigg|\frac{1}{n}\sum_{i=1}^{n}(\pi_{j+1}(f) - \pi_j(f))(W_i)\bigg|\Bigg].$$
The expected value on the right hand side is that of a finite supremum, because there are at most $|\mathcal{F}_j||\mathcal{F}_{j+1}|$ pairs $(\pi_{j+1}(f), \pi_j(f))$ as $f$ ranges over $\mathcal{F}$. On the other hand, note that
$$E_n(\mathcal{F}) \ge \max_{j\ge 0} E\Bigg[\sup_{f\in\mathcal{F}_j}\bigg|\frac{1}{n}\sum_{i=1}^{n} f(W_i)\bigg|\Bigg],$$
which is also the expected value of a finite maximum of averages of random variables. Hence, bounding $E_n(\mathcal{F})$ can be reduced to bounding the expected value of a finite maximum of averages of random variables. This is the main focus of the manuscript.

Note that if $\mathcal{F}$ is a singleton, then $E_n(\mathcal{F}) = O(1/\sqrt{n})$ whenever $\mathrm{Var}(f(W)) < \infty$. Hence, for a scaling of $n^{-1/2}$ (in terms of sample size) for $E_n(\mathcal{F})$, we need $\max_{f\in\mathcal{F}} \mathrm{Var}(f(W)) < \infty$. With only a marginal variance condition, there exists a function class $\mathcal{F}$ such that $E_n(\mathcal{F}) \ge c \max_{f\in\mathcal{F}} \sqrt{\mathrm{Var}(f(W))} \sqrt{|\mathcal{F}|/n}$ for some absolute constant $c$. From the point of view of high-dimensional or nonparametric applications, we want bounds for $E_n(\mathcal{F})$ that have a logarithmic dependence on the cardinality of $\mathcal{F}$. For this, we consider assumptions on the envelope function $F(w) := \sup_{f\in\mathcal{F}} |f(w)|$. Hence, we consider the problem of bounding $E_n(\mathcal{F})$ under a bound on the marginal variances and integrability of the envelope function. We do not explicitly restrict $|\mathcal{F}| < \infty$, but our upper and lower bounds depend on $|\mathcal{F}|$ and show that the worst case bounds are infinite if $|\mathcal{F}| = \infty$.

The remaining article is organized as follows. In Section 2, we formally describe the problem of obtaining sharp upper and lower bounds for the expected maximum and provide some simple reductions. In Section 3, we consider the special case of bounded random vectors. This is the most commonly considered case in the literature, where one applies Hoeffding or Bernstein inequalities to obtain the maximal inequalities. Our results in Section 3 improve upon these existing results, and we also prove a lower bound. In Section 4, we consider the general case of random vectors with a finite $q$-th moment on the envelope.
As part of our analysis, we improve the Fuk-Nagaev inequalities refined in Rio (2017), which may be of independent interest in the study of heavy-tailed random variables.¹ Finally, we conclude the article with some remarks in Section 5.

¹We refer to random variables with an infinite moment generating function as heavy-tailed random variables.

2 Problem Description

For $q > 0$, we consider the problem of characterizing (up to universal constants)
$$\mathcal{E}_q(\sigma, B) := \sup_{P^n}\Bigg\{ E_{P^n}\Bigg[\max_{1\le j\le p}\bigg|\frac{1}{n}\sum_{i=1}^{n} X_i(j)\bigg|\Bigg] : V(P^n) \le \sigma^2 \text{ and } D_q(P^n) \le B \Bigg\},$$
where the supremum is taken over all joint probability distributions $P^n$ of independent random vectors $X_1, X_2, \dots, X_n$ satisfying $E_{P^n}[X_i] = 0$, $1 \le i \le n$,
$$V(P^n) := \max_{1\le j\le p} \frac{1}{n}\sum_{i=1}^{n} E[X_i^2(j)], \quad \text{and} \quad D_q(P^n) := \Bigg(\frac{1}{n}\sum_{i=1}^{n} E\Big[\max_{1\le j\le p} |X_i(j)|^q\Big]\Bigg)^{1/q}. \tag{1}$$
For $q = \infty$, the constraint $D_q(P^n) \le B$ should be understood as $P_{P^n}(\max_{1\le i\le n} \|X_i\|_\infty > B) = 0$. For notational convenience, throughout, we use $\mathcal{P}^n_0$ to denote the collection of all joint probability distributions $P^n$ of independent random vectors $X_1, \dots, X_n$ satisfying $E_{P^n}[X_i] = 0$, $1 \le i \le n$. Note that each $P^n \in \mathcal{P}^n_0$ is of the form $P^n = \prod_{i=1}^{n} P_i$ for some probability distributions $P_i$ on $\mathbb{R}^p$.

By characterize (up to universal constants), we mean finding a function $(n, p, q, \sigma, B) \mapsto L(n, p, q, \sigma, B)$ such that for some universal constants $\underline{C}, \overline{C}$,
$$\underline{C}\, L(n, p, q, \sigma, B) \le \mathcal{E}_q(\sigma, B) \le \overline{C}\, L(n, p, q, \sigma, B)$$
for all $n, p \ge 1$, $q > 0$, and $\sigma, B \ge 0$.

To characterize $\mathcal{E}_q(\sigma, B)$, we show that it suffices to consider the supremum over independent and identically distributed random vectors $X_1, \dots, X_n$. Then, we show that the study of $\mathcal{E}_q(\sigma, B)$ is intricately related to that of bounded random vectors. For the
first claim, we use Proposition 1 of Pruss (1997), and for the second claim, we use Theorem 1.4.4 of de la Peña and Giné (1999). Define the sub-class of mean zero probability distributions $P$ on $\mathbb{R}^p$ as
$$\mathcal{P}_q(\sigma, B) := \Big\{ P \text{ a probability distribution} : E_P[X] = 0,\ \max_{1\le j\le p} E_P[X^2(j)] \le \sigma^2,\ (E_P[\|X\|_\infty^q])^{1/q} \le B \Big\},$$
and define the expectation as
$$\mathcal{E}^*_q(\sigma, B) := \sup_P \Bigg\{ E_P\Bigg[\bigg\|\frac{1}{n}\sum_{i=1}^{n} X_i\bigg\|_\infty\Bigg] : X_1, X_2, \dots, X_n \stackrel{iid}{\sim} P,\ P \in \mathcal{P}_q(\sigma, B) \Bigg\}.$$

Theorem 2.1. For any $n, p \ge 1$ and $0 \le \sigma \le B$,
$$\mathcal{E}^*_q(\sigma, B) \le \mathcal{E}_q(\sigma, B) \le 16\, \mathcal{E}^*_q(\sigma, B).$$

The proof of this result can be found in Section S.1.1 of the Supplementary Material. For the second claim, proving that the study of $\mathcal{E}_q(\sigma, B)$ is related to that of bounded random vectors, define
$$\mathcal{P}_{q,\infty}(\sigma, B_q, B_\infty) := \{ P \in \mathcal{P}_q(\sigma, B_q) : P_P(\|X\|_\infty > B_\infty) = 0 \},$$
and
$$\mathcal{E}^*_{q,\infty}(\sigma, B_q, B_\infty) := \sup\Bigg\{ E_P\Bigg[\bigg\|\frac{1}{n}\sum_{i=1}^{n} X_i\bigg\|_\infty\Bigg] : X_1, \dots, X_n \stackrel{iid}{\sim} P,\ P \in \mathcal{P}_{q,\infty}(\sigma, B_q, B_\infty) \Bigg\}.$$
For any distribution $P$ on $\mathbb{R}^p$, define
$$T(P; n) := \inf\Big\{ t \in [0, \infty) : P_P(\|X\|_\infty > t) \le \frac{1}{8n} \Big\}.$$
Define
$$\tau_q(\sigma, B) := \sup\{ T(P; n) : P \in \mathcal{P}_q(\sigma, B) \},$$
$$M_q(\sigma, B) := \sup\Big\{ E_P\Big[\max_{1\le i\le n} \|X_i\|_\infty\Big] : X_1, \dots, X_n \stackrel{iid}{\sim} P,\ P \in \mathcal{P}_q(\sigma, B) \Big\},$$
$$M'_q(\sigma, B) := \sup\Big\{ E_P\Big[\max_{1\le i\le n} \|X_i\|_\infty 1\{\|X_i\|_\infty > \tau_q(\sigma, B)\}\Big] : X_1, \dots, X_n \stackrel{iid}{\sim} P,\ P \in \mathcal{P}_q(\sigma, B) \Big\}.$$

Theorem 2.2. There exist universal constants $\underline{C} \le \overline{C}$ such that for any $q \ge 1$ and $n, p \ge 1$,
$$\underline{C}\Big[\mathcal{E}_{q,\infty}(\sigma, B, \tau_q(\sigma, B)) + \frac{1}{n} M_q(\sigma, B)\Big] \le \mathcal{E}_q(\sigma, B) \le \overline{C}\Big[\mathcal{E}_{q,\infty}(\sigma, B, \tau_q(\sigma, B)) + \frac{1}{n} M_q(\sigma, B)\Big]. \tag{2}$$
One can take $\overline{C} = 128$ and $\underline{C} = 1/4$.
Moreover, inequality (2) continues to hold when $\mathcal{M}_q(\sigma,B)$ is replaced with $\mathcal{M}_q'(\sigma,B)$. The details of the proof are discussed in Section S.1.2 of the Supplementary material. The following lemma provides some inequalities relating $\mathcal{M}_q(\sigma,B)$ and $\tau_q(\sigma,B)$; a rigorous proof is provided in Section S.1.3 of the Supplementary Material.

Lemma 2.1. For any $q > 0$ and $\sigma, B > 0$,
$$\tau_q(\sigma,B) \le \left(1 - (1 - 1/(8n))^n\right)^{-1}\mathcal{M}_q(\sigma,B) \le 9\,\mathcal{M}_q(\sigma,B),$$
and
$$\frac{n^{1/q}}{(\log n)^{2/q}}\left(B\,\mathbf{1}_S + \frac{p\sigma^2}{B}\,\mathbf{1}_{S^c}\right) \lesssim \mathcal{M}_q(\sigma,B) \le n^{1/q}B,$$
where $S = \{2p - 1 \ge 5(B/\sigma)^2/(1+2q)^{2/q}\}$.

3 Optimal Bounds under Bounded Envelope

In this section, we obtain matching upper and lower bounds for $\mathcal{E}_\infty(\sigma,B)$. To do so, we compute an upper bound on the expectation using Bennett's inequality (Bennett, 1962; Wellner, 2017). It turns out that the bound so obtained also serves as a lower bound, when appropriately multiplied by a constant, in all but one case, namely when the value of $\sigma$ is too small compared to the value of $B$. We proceed by first analyzing the nature of this “Bennett bound,” after which we investigate the behaviour of, and provide sharp upper and lower bounds for, $\mathcal{E}_\infty'(\sigma,B)$, where $\mathcal{E}'$ is just $\mathcal{E}$ restricted to appropriately shifted and scaled Bernoulli random variables. Keeping these results in mind, we then establish inequalities for the general case and compare our results with those of Blanchard and Voracek (2024).
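The first inequality in Lemma 2.1 rests on the elementary fact that the factor $(1-(1-1/(8n))^n)^{-1}$ is bounded uniformly in $n$ (the proof in Section S.1.3 uses the bound 8.52). The short Python sketch below (the function name is ours, not from the paper) makes this constant easy to check numerically:

```python
def tau_factor(n):
    """The factor (1 - (1 - 1/(8n))^n)^{-1} appearing in Lemma 2.1.

    It equals 8 at n = 1 and increases towards 1/(1 - e^{-1/8}) ~ 8.51,
    so it stays below the constant 9 used in the lemma.
    """
    return 1.0 / (1.0 - (1.0 - 1.0 / (8.0 * n)) ** n)
```

Evaluating `tau_factor` over a range of $n$ confirms that the worst case is the limit as $n \to \infty$, which is still comfortably below 9.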
3.1 Analysis of the “Bennett Bound”

Observe that for random vectors $X_1,\ldots,X_n$ i.i.d. from a distribution $P \in \mathcal{P}_\infty(\sigma,B)$,
$$\mathbb{E}\left[\max_{1\le j\le p}\left|\frac{1}{n}\sum_{i=1}^n X_i(j)\right|\right] = \int_0^B \mathbb{P}\left(\max_{1\le j\le p}\left|\frac{1}{n}\sum_{i=1}^n X_i(j)\right| > t\right)dt \le \int_0^B \min\left\{1,\ \sum_{j=1}^p \mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^n X_i(j)\right| > t\right)\right\}dt.$$
Here $X_i(j)$, $1\le i\le n$, are i.i.d. real-valued mean zero random variables with variance bounded by $\sigma^2$ that are almost surely bounded by $B$. Hence, one can use any univariate concentration inequality for bounded random variables to obtain an upper bound for $\mathcal{E}_\infty(\sigma,B)$. We use the Bennett bound in the following and show its optimality. First, we characterize the upper bounds obtained using Bennett's inequality. For this, we introduce some notation. Define $\Psi(x) = (1+x)\ln(1+x) - x$ for $x \ge 0$. In addition, let
$q_{\mathrm{Benn}}(\cdot)$ denote the upper bound on the tail probability of the random variable $\|\bar X\|_\infty$ obtained from an application of Bennett's inequality. We therefore have
$$q_{\mathrm{Benn}}(t) = 2p\exp\left(-\frac{n\sigma^2}{B^2}\Psi\left(\frac{tB}{\sigma^2}\right)\right) \quad \text{for } t \ge 0,$$
and further define $p_{\mathrm{Benn}}(t) = \min\{1, q_{\mathrm{Benn}}(t)\}$ for any $t \ge 0$. See, for example, Proposition 3.1 of Wellner (2017). We begin by presenting the following result on the nature of the Bennett bound.

Theorem 3.1. For $n \in \mathbb{N}$ and $0 < \sigma \le B$, define
$$A = \frac{\sigma^2}{B}\Psi^{-1}\left(\frac{B^2\ln(2p)}{n\sigma^2}\right) \quad \text{and} \quad f(t) = \frac{q_{\mathrm{Benn}}(t)}{\ln(1+tB/\sigma^2)},\quad t \ge 0.$$
If $A \le B$, then
$$A + \frac{(\ln 2)^2}{4\sqrt{2}}\,\frac{B}{n}\left(f(A) - f(B)\right) \le \int_0^B p_{\mathrm{Benn}}(t)\,dt \le A + \frac{B}{n}\left(f(A) - f(B)\right).$$
If $A > B$, we have $\int_0^B p_{\mathrm{Benn}}(t)\,dt = B$. In general, we have
$$(A\wedge B) + \frac{(\ln 2)^2}{4\sqrt{2}}\,\frac{B}{n}\left(f(A\wedge B) - f(B)\right) \le \int_0^B p_{\mathrm{Benn}}(t)\,dt \le (A\wedge B) + \frac{B}{n}\left(f(A\wedge B) - f(B)\right).$$

Theorem 3.1 characterizes the upper bound that can be obtained from the Bennett bound and the Bonferroni inequality. Theorem 3.1 is split into two cases depending on the ordering of $A$ and $B$. The following proposition provides some insight into these cases.

Proposition 3.1. $A > B$ implies $n\sigma^2/(\sigma^2+B^2) \ge 1/(2p)$.

Proof. $A > B$ is equivalent to
$$\Psi^{-1}\left(\frac{B^2\ln(2p)}{n\sigma^2}\right) \ge \frac{B^2}{\sigma^2} \iff \frac{B^2\ln(2p)}{n\sigma^2} \ge \Psi\left(\frac{B^2}{\sigma^2}\right).$$
This in turn implies that
$$\frac{B^2}{\sigma^2}\left(1 + \frac{\ln(2p)}{n}\right) \ge (1+B^2/\sigma^2)\log(1+B^2/\sigma^2) \ge (B^2/\sigma^2)\log(B^2/\sigma^2).$$
Note that the last quantity is non-negative, as $B^2 \ge \sigma^2$. This further implies that
$$\left(1 + \frac{\ln(2p)}{n}\right) \ge \log(B^2/\sigma^2) \iff B^2/\sigma^2 \le e(2p)^{1/n}.$$
If possible, let $n\sigma^2/(\sigma^2+B^2) < 1/(2p)$. This means that $B^2/\sigma^2 > 2np - 1$. Combining this with the fact that $B^2/\sigma^2 \le e(2p)^{1/n}$, we find that
$$2np - 1 < e(2p)^{1/n} \Rightarrow 2np - 1 - e(2p)^{1/n} < 0.$$
It is easy enough to verify that $2np - 1 - e(2p)^{1/n}$ is increasing in $n$, and since $n \ge 2$, we must have $4p - 1 - e\sqrt{2p} < 0$. Again, this quantity is seen to be increasing in $p$, and is positive for $p \ge 2$, a contradiction. Hence, for $p \ge 2$, we must have $n\sigma^2/(\sigma^2+B^2) \ge 1/(2p)$. When $p = 1$, we have $\sigma^2 \le B^2 \le e\sqrt{2}\sigma^2$. This proves the proposition.
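To make the quantities in Theorem 3.1 concrete, the following Python sketch (function names are ours) evaluates $\Psi$, the Bennett tail estimate $p_{\mathrm{Benn}}$, and the crossing point $A$ at which $q_{\mathrm{Benn}}(A) = 1$, inverting $\Psi$ numerically by bisection (valid since $\Psi$ is increasing on $[0,\infty)$):

```python
import math

def psi(x):
    """Bennett's function Psi(x) = (1 + x) ln(1 + x) - x, for x >= 0."""
    return (1.0 + x) * math.log1p(x) - x

def q_bennett(t, n, p, sigma, B):
    """Bennett-plus-union-bound tail estimate for ||Xbar||_inf at level t."""
    return 2.0 * p * math.exp(-(n * sigma**2 / B**2) * psi(t * B / sigma**2))

def p_bennett(t, n, p, sigma, B):
    """The truncated estimate p_Benn(t) = min{1, q_Benn(t)}."""
    return min(1.0, q_bennett(t, n, p, sigma, B))

def psi_inv(y, tol=1e-12):
    """Invert Psi on [0, infinity) by bisection; Psi is increasing there."""
    lo, hi = 0.0, 1.0
    while psi(hi) < y:
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def threshold_A(n, p, sigma, B):
    """A = (sigma^2/B) Psi^{-1}(B^2 ln(2p) / (n sigma^2)) from Theorem 3.1."""
    return (sigma**2 / B) * psi_inv(B**2 * math.log(2.0 * p) / (n * sigma**2))
```

At $t = A$ the tail estimate $q_{\mathrm{Benn}}$ equals one, which is why the integral $\int_0^B p_{\mathrm{Benn}}$ splits into the two pieces appearing in Theorem 3.1.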
We now show that this upper bound is the best possible by constructing random vectors $X_1,X_2,\ldots,X_n$ that have independent (centered) Bernoulli components such that the corresponding expected maximum is lower bounded by the Bennett upper bound up to constants. The fact that the Bennett upper bound is matched by Bernoulli random vectors is not surprising, given the results of Bentkus (2004).

3.2 Analysis for Random Vectors with Bernoulli Components

Consider independent $p$-variate random vectors $\{W_i\}_{1\le i\le n}$ such that the components $\{W_{i,j}\}_{1\le j\le p}$ are independent for each $i \in [n]$. The random variables $W_{i,j}$ are such that
$$\mathbb{P}(W_{i,j} = w) = \begin{cases} B^2/(\sigma^2+B^2) & \text{if } w = -\sigma^2/B,\\ \sigma^2/(\sigma^2+B^2) & \text{if } w = B,\\ 0 & \text{otherwise.}\end{cases}$$
Define $\overline{W} := n^{-1}\sum_{i=1}^n W_i$ and analogously define $\overline{W}_j := n^{-1}\sum_{i=1}^n W_i(j)$. We use $W$ for these two-point random variables to indicate that they are, in some sense, worst-case random variables. The following result provides matching upper and lower bounds for the expected $L_\infty$ norm of the mean of these worst-case random vectors.

Theorem 3.2. For any $n,p \ge 1$ and $A \le B$,
$$\mathbb{E}\left[\|\overline{W}\|_\infty\right] = \mathbb{E}\left[\max_{1\le j\le p}\overline{W}_j\right] \le A + \frac{B}{n}\left(f(A) - f(B)\right).$$
Additionally,
(a) When $n\sigma^2/(\sigma^2+B^2) \ge 1/(2p)$,
$$\mathbb{E}\left[\|\overline{W}\|_\infty\right] \gtrsim A + \frac{B}{n}\left(f(A) - f(B)\right).$$
(b) When $n\sigma^2/(\sigma^2+B^2) < 1/(2p)$,
$$\mathbb{E}\left[\|\overline{W}\|_\infty\right] \asymp \frac{p\sigma^2}{B}.$$
When $A > B$, we have $\mathbb{E}\left[\|\overline{W}\|_\infty\right] \asymp B$.

The proof, which heavily exploits the results of Zubkov and Serov (2013), can be found in Section S.2.2 of the Supplement. Let $\mathcal{E}_{W,\infty}(\sigma,B)$ be the quantity $\mathcal{E}_\infty(\sigma,B)$ where the components of the random vectors have the same marginal distribution as the $W_i(j)$'s. We do not, however, assume anything about the correlation structure exhibited by the components of these vectors.
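The two-point law of Section 3.2 is easy to simulate. The Python sketch below (our own illustration, not part of the paper) draws the worst-case vectors with independent components and Monte Carlo estimates $\mathbb{E}\|\overline{W}\|_\infty$, which can then be compared against the bound of Theorem 3.2:

```python
import random

def sample_W(sigma, B, rng):
    """One draw of the two-point variable of Section 3.2:
    value B w.p. sigma^2/(sigma^2 + B^2), value -sigma^2/B otherwise."""
    if rng.random() < sigma**2 / (sigma**2 + B**2):
        return B
    return -sigma**2 / B

def mc_expected_sup_norm(n, p, sigma, B, reps=1000, seed=0):
    """Monte Carlo estimate of E max_j |n^{-1} sum_i W_i(j)| with
    independent components (the setting of Theorem 3.2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        sup = 0.0
        for _j in range(p):
            col_mean = sum(sample_W(sigma, B, rng) for _ in range(n)) / n
            sup = max(sup, abs(col_mean))
        total += sup
    return total / reps
```

Since each component mean lies in $[-\sigma^2/B, B]$, the estimate is always bounded by $B$, matching the trivial bound $\mathcal{E}_\infty(\sigma,B) \le B$.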
We can still argue that the upper and lower bounds that we obtained for the $W_i$'s hold even when the components are not necessarily independent. To that end, the following result will prove instrumental.

Theorem 3.3. Consider independent mean zero
random vectors $X_1,X_2,\ldots,X_n$ such that for all $i = 1,2,\ldots,n$,
$$\max_{1\le j\le p}|X_i(j)| \le B \quad \text{and} \quad \max_{1\le j\le p}\mathrm{Var}[X_i(j)] \le \sigma^2.$$
Then there exist independent random vectors $Y_1,Y_2,\ldots,Y_n$ such that $Y_i(1),Y_i(2),\ldots,Y_i(p)$ are jointly independent for all $i = 1,2,\ldots,n$,
$$\max_{1\le j\le p}|Y_i(j)| \le B \quad \text{and} \quad \max_{1\le j\le p}\mathrm{Var}[Y_i(j)] \le \sigma^2,$$
and
$$\mathbb{E}\left[\|\overline{X}\|_\infty\right] \le 2\,\mathbb{E}\left[\|\overline{Y}\|_\infty\right].$$

The following corollary is therefore an immediate consequence of Theorem 3.2 and Theorem 3.3 (proved in Section S.2.3 of the Appendix).

Corollary 3.1. For any $n,p \ge 1$ and $A \le B$, the following hold:
(a) When $n\sigma^2/(\sigma^2+B^2) \ge 1/(2p)$,
$$\mathcal{E}_{W,\infty}(\sigma,B) \asymp \min\left\{B,\ A + \frac{B}{n}\left(f(A) - f(B)\right)\right\}.$$
(b) When $n\sigma^2/(\sigma^2+B^2) < 1/(2p)$,
$$\mathcal{E}_{W,\infty}(\sigma,B) \asymp \frac{p\sigma^2}{B}.$$
In addition, when $A > B$, we have $\mathbb{E}\left[\|\overline{W}\|_\infty\right] \asymp B$.

3.3 Optimality of the bounds on $\mathcal{E}_\infty(\sigma,B)$ for the general case

The discussion in this section focuses on extracting bounds for $\mathcal{E}_\infty(\sigma,B)$ when there are no restrictions on the distribution functions of the random variables $X_1,X_2,\ldots,X_n$ except those mentioned in (1). As before, we shall divide our analysis into two parts: one where $n\sigma^2/(\sigma^2+B^2) \ge 1/(2p)$ and the other where $n\sigma^2/(\sigma^2+B^2) < 1/(2p)$. In the former case, by virtue of the proof of the first part of Theorem 3.2, we have
$$A + \frac{B}{n}\left(f(A) - f(B)\right) \lesssim \mathbb{E}\left[\|\overline{W}\|_\infty\right] \le \mathcal{E}_\infty(\sigma,B) \le A + \frac{B}{n}\left(f(A) - f(B)\right).$$
As for the latter case, we are unable to obtain such a sharp bound and must therefore settle for
$$\frac{p\sigma^2}{B} \lesssim \mathcal{E}_\infty(\sigma,B) \le A + \frac{B}{n}\left(f(A) - f(B)\right).$$
This gives us the following result regarding the characterization of $\mathcal{E}_\infty(\sigma,B)$:

Theorem 3.4. (a) When $n\sigma^2/(\sigma^2+B^2) \ge 1/(2p)$, we have
$$\mathcal{E}_\infty(\sigma,B) \asymp \min\left\{B,\ A + \frac{B}{n}\left(f(A) - f(B)\right)\right\}.$$
(b) When $n\sigma^2/(\sigma^2+B^2) < 1/(2p)$,
$$\frac{p\sigma^2}{B} \lesssim \mathcal{E}_\infty(\sigma,B) \lesssim \min\left\{B,\ A + \frac{B}{n}\left(f(A) - f(B)\right)\right\}.$$
Note that $\mathcal{E}_\infty(\sigma,B) \le B$ follows from the triangle inequality and the fact that the components are all bounded by $B$. The result above matches our findings for the Bernoulli variables except in the case where the ratio of the upper bound on the standard deviation of the variables to the bound on the variables shrinks to zero at a rate faster than $1/\sqrt{np}$, i.e., $\sigma/B = o\left((np)^{-1/2}\right)$.

3.4 A comparative study with Blanchard and Voracek (2024)

In this section, we compare Theorem 3.2 with the results of Blanchard and Voracek (2024), who considered the problem of bounding the expected $L_\infty$ norm of the mean of $n$ i.i.d. centred random vectors $X_1,\ldots,X_n$ with independent Bernoulli components. They assume that $X_i(j) \sim \mathrm{Ber}(p_j)$ and furnish bounds for the quantity
$$\Delta_n(p) = \mathbb{E}\|\bar p_n - p\|_\infty, \quad \text{where } \bar p_n = \sum_{i=1}^n X_i/n.$$
This setting is similar in spirit to the one in Theorem 3.2. In light of this, it may be fruitful to observe that if we plug in $p = \sigma^2/(\sigma^2+B^2)\mathbf{1}_p$, we obtain a setting identical to that of Theorem 3.2. The only difference is that the $X_i$'s are a scaled and shifted version of the random vectors $W_i$'s that we worked with; in particular, $X_i = (BW_i + \sigma^2)/(B^2+\sigma^2)$. The results they obtained are as follows:

(1) When $p(j) \le 1/(2nj)$ for all $j$, then $\Delta_n(p) \asymp 1/n \wedge \sum_{j\ge1} p(j)$. In our case, this translates to $np\sigma^2/(\sigma^2+B^2) \le 1/2$, in which case this result takes the form $\Delta_n(p) \asymp 1/n \wedge p\sigma^2/(\sigma^2+B^2)$, which is exactly consistent with our result in part (b) of Theorem 3.2.

(2) In all other cases,
$$\Delta_n(p) \asymp 1 \wedge \sup_{j\ge1}\left(\sqrt{\frac{p(j)\ln(j+1)}{n}} \vee \frac{\ln(j+1)}{n\ln\left(2 + \frac{\ln(j+1)}{np(j)}\right)}\right).$$
Plugging in $p(j) = \sigma^2/(\sigma^2+B^2)$ for $1\le j\le p$, this result is equivalent to Theorem 3.2 up to a universal constant.

Corollary 3 of Blanchard and Voracek (2024), which pertains to an analysis of dependent Bernoulli random variables, corresponds to Corollary
3.1. While their analysis is restricted to Bernoulli random variables, our analysis implies that Bernoulli random variables, in some sense, give the worst-case bound. In the following section, we also consider the case of unbounded random vectors.

4 Bounds under Integrable Envelope

This section focuses on acquiring upper bounds for $\mathcal{E}_q(\sigma,B)$ for $q \ge 2$. The bound we obtain is consistent with the bounds in the previous section, in the sense that the upper bound obtained in this section converges to the one from Theorem 3.4 if $q = \infty$. An encouraging fact is that this convergence is monotonic up to constants. We use Theorem 2.2 to obtain bounds for $\mathcal{E}_q(\sigma,B)$. For this purpose, we need concentration inequalities for averages of real-valued random variables that have a finite variance, a finite $q$-th moment, and are bounded.

Before we proceed further, we introduce some notation and definitions taken from Section 2 of Rio (2017). We define the transformation $T$ on the class $\Psi_c$ of convex functions $\psi:[0,\infty)\to[0,\infty]$ such that $\psi(0) = 0$ by
$$T\psi(x) = \inf\left\{t^{-1}(\psi(t) + x) : t \in (0,\infty)\right\} \quad \text{for any } x \ge 0.$$
We also define the integrated quantile function $\widetilde{Q}_X$ of a real-valued and integrable random variable $X$ as follows:
$$\widetilde{Q}_X(u) = u^{-1}\int_0^u Q_X(s)\,ds,$$
where the function $Q_X$ is the càdlàg inverse of the tail probability function of the random variable $X$. We now present a result that will be instrumental in obtaining an upper bound for $\mathcal{E}_q(\sigma,B)$. The following result is a refinement of the argument for Theorem 3.1(a) of Rio (2017).

Theorem 4.1. Consider independent mean zero real-valued random variables $X_1,X_2,\ldots,X_n$ such that for some $q \ge 2$,
$$\frac{1}{n}\sum_{i=1}^n \mathrm{Var}[X_i] \le \sigma^2,\quad \frac{1}{n}\sum_{j=1}^n \mathbb{E}[|X_j|^q] \le B^q,\quad \text{and}\quad \max_{1\le j\le n}|X_j| \le K.$$
Then for any $z > 1$,
$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^n X_i \ge B\left(\frac{\sigma^2}{B^2}\right)^{(q-1)/(q-2)}\Psi^{-1}\left[\left(\frac{B^2}{\sigma^2}\right)^{q/(q-2)}\frac{\ln z}{n}\right] + \frac{B^q}{K^{q-1}}\Psi^{-1}\left[\left(\frac{K}{B}\right)^q\frac{\ln z}{n}\right]\right) \le \frac{1}{z}.$$

Note that if $B^q = \sigma^2K^{q-2}$, then Theorem 4.1 reduces to
$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^n X_i \ge \frac{2\sigma^2}{K}\Psi^{-1}\left(\frac{K^2\ln z}{n\sigma^2}\right)\right) \le \frac{1}{z},$$
which is precisely the Bennett bound (except for the constant factor of 2). Observe that the condition $B^q = \sigma^2K^{q-2}$ means that we have no information on the random variables other than mean zero, variance bounded by $\sigma^2$, and an almost sure bound of $K$. Also, if $q = 2$, then the condition becomes the same as in Bennett's inequality, and Theorem 4.1 becomes Bennett's inequality. The proof is inspired by the proof of Theorem 3.1(a) of Rio (2017) and is detailed in Section S.3.1 of the Supplement. The following is an alternative to Theorem 4.1 that exhibits more desirable properties in connection with Bennett's inequality.

Theorem 4.2. Consider independent mean zero real-valued random variables $X_1,X_2,\ldots,X_n$ such that for some $q \ge 2$,
$$\frac{1}{n}\sum_{i=1}^n \mathrm{Var}[X_i] \le \sigma^2,\quad \frac{1}{n}\sum_{j=1}^n \mathbb{E}[|X_j|^q] \le B^q,\quad \text{and}\quad \max_{1\le j\le n}|X_j| \le K.$$
Then for any $z > 1$, with probability at least $1 - 1/z$,
$$\frac{1}{n}\sum_{i=1}^n X_i < B\left(\frac{\sigma^2}{B^2}\right)^{(q-1)/(q-2)}\Psi^{-1}\left(\left(\frac{B^2}{\sigma^2}\right)^{q/(q-2)}\frac{\ln z}{n}\right) + \frac{2K\ln z}{nq}\left(W\left(\frac{(K/B)^{q/([q]+1)}(\ln z)^{1/([q]+1)}}{10(nR)^{1/([q]+1)}}\right)\right)^{-1},$$
where $R = \left(1 - \left(B^q/(\sigma^2K^{q-2})\right)^{1/(q-2)}\right)_+$ and $W(\cdot)$ represents the Lambert function.

See Section S.3.2 of the supplement for a proof of Theorem 4.2. Note that $R = 0$ if $B^q \ge \sigma^2K^{q-2}$, and in this case the second term in the bound in Theorem 4.2 becomes zero, because the Lambert function is increasing and unbounded.
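The second term of Theorem 4.2 involves the principal branch of the Lambert function, defined by $W(x)e^{W(x)} = x$. For readers wishing to evaluate the bound numerically, a self-contained Newton iteration suffices (a sketch of ours; in practice a library routine such as `scipy.special.lambertw` would be the usual choice):

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal-branch Lambert W for x > 0: solves w * exp(w) = x
    by Newton's method, starting from log(1 + x)."""
    w = math.log1p(x)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w
```

Since $W$ is increasing and unbounded on $(0,\infty)$, the factor $(W(\cdot))^{-1}$ in Theorem 4.2 indeed vanishes as its argument grows, which is exactly the $R = 0$ behaviour noted above.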
Moreover, as $q \to \infty$, the argument of the Lambert function becomes $(K/B)/10$, and therefore the second term again becomes zero because of the factor of $1/q$ in the second term. Hence, as $q \to \infty$ or at $q = 2$, Theorem
4.2 becomes the Bennett bound. To elaborate on the behavior of the second term, note that from Theorem 2.3 of Hoorfar and Hassani (2008), we have
$$W(x) \ge \ln(x) - \ln\left(\ln\left(\frac{x(1+1/e)}{\ln x}\right)\right) \quad \text{for all } x > 0.$$
This follows by taking $y = x/e$ in Theorem 2.3 of Hoorfar and Hassani (2008) and using the fact that $W(x) = x/e^{W(x)}$.

Theorems 4.1 and 4.2 can be used to obtain bounds for $\mathcal{E}_q(\sigma,B)$ using Theorem 2.2. For simplicity, we use Theorem 4.1 for the following result. See Section S.3.3 of the supplement for a proof.

Proposition 4.1. Fix any $\sigma, B > 0$ and any $q \ge 2$. Then $\mathcal{E}_q(\sigma,B) \le \min\{B,\ U(q,\sigma,B,p,n)\}$, where
$$U(q,\sigma,B,p,n) = \begin{cases} 10\sigma\sqrt{\dfrac{\ln(2p)}{n}} & \text{if } (B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n) \le e,\\[2ex] \dfrac{5B(B^2/\sigma^2)^{1/(q-2)}(\ln(2p)/n)}{\ln\left(1 + (B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n)\right)} & \text{if } (B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n) > e.\end{cases}$$

The bound provided in Proposition 4.1 can be further improved for the case $(B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n) > e$, as shown in the following result. This improvement stems from directly applying Bennett's inequality to the truncated random vectors (ignoring the $q$-th moment bound) and optimizing the truncation parameter. Although counterintuitive, this leads to a better bound than Theorem 4.1, which accounts for the $q$-th moment condition.

Proposition 4.2. For $q \ge 2$, $n \ge 1$, $p \ge 1$, and $\sigma, B \ge 0$ such that $(B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n) > e$, we have
$$\mathcal{E}_q(\sigma,B) \lesssim B\left(\frac{\ln(2p)}{n}\right)^{1-1/q}\left[\ln\left(\frac{B^2}{\sigma^2}\left(\frac{\ln(2p)}{n}\right)^{1-2/q}\right)\right]^{1/q-1}. \quad (3)$$

The proof is available in Section S.3.4 of the Supplementary material. Note that Proposition 4.2 holds only for the case $(B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n) > e$. For this reason, we combine it with Proposition 4.1 to obtain an upper bound encompassing all possible cases, stated in the following result.

Theorem 4.3.
For $\sigma, B > 0$ and $q \ge 2$, we have $\mathcal{E}_q(\sigma,B) \lesssim \min\{B,\ U^*(q,\sigma,B,p,n)\}$, where
$$U^*(q,\sigma,B,p,n) = \begin{cases} \sigma\sqrt{\dfrac{\ln(2p)}{n}} & \text{if } (B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n) \le e,\\[2ex] B\left(\dfrac{\ln(2p)}{n}\right)^{1-1/q}\left[\ln\left(\dfrac{B^2}{\sigma^2}\left(\dfrac{\ln(2p)}{n}\right)^{1-2/q}\right)\right]^{1/q-1} & \text{if } (B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n) > e.\end{cases}$$

In this setting, the bound presented in Proposition B.1 of Kuchibhotla and Patra (2022) is of the order
$$\sigma\sqrt{\frac{\ln(2p)}{n}} + B\left(\frac{\ln(2p)}{n}\right)^{1-1/q}.$$
Note that this is of the order $\sigma\sqrt{\ln(2p)/n}$ when $(B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n) \le e$ and of the order $B(\ln(2p)/n)^{1-1/q}$ when $(B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n) > e$. In the former case, our bound performs comparably with that of Kuchibhotla and Patra (2022), while in the latter case, our bound is better by the poly-log factor in (3). This difference is more pronounced as $q$ increases, i.e., our bound always performs at least as well, and more so when $q$ is larger.

4.1 Properties of the bound

Having studied the upper bounds for $\mathcal{E}_q(\sigma,B)$, certain natural questions arise, namely: how does the bound behave as $q \to \infty$, and is the bound optimal in some sense? In this section, we discuss such properties. The upper bound exhibits the following desirable properties:
(1) The upper bound obtained is a continuous function of $q$ for $q \ge 2$.
(2) The bound, as a function of $q$, is decreasing in $q$ for $q \ge 2$ if $\ln(2p)/n$ does not diverge.
(3) As $q \to \infty$, the bound converges to a quantity of the same order as $\mathcal{E}_\infty(\sigma,B)$.

Property (1) is easy to verify from the results we have obtained so far. To vouch for Property (2), we present the following result.

Theorem 4.4. If $\ln(2p)/n \le 1$, the upper bound on $\mathcal{E}_q(\sigma,B)$ obtained in Theorem 4.3 is decreasing in $q$ (up to constants).

Proof. Note that it suffices to establish that for $q, q' > 2$ such that $q < q'$, $U^*(q',\sigma,B,p,n) \le U^*(q,\sigma,B,p,n)$. Decreasingness at $q = 2$ is then just a consequence of continuity.
Towards proving decreasingness for $q > 2$, we first define the collection $\mathcal{S}_q$ of tuples $(\sigma,B,p,n,q)$ such that
$$\mathcal{S}_q = \left\{(\sigma,B,p,n,q) : (B^2/\sigma^2)^{\frac{q}{q-2}}(\ln(2p)/n) > e\right\}.$$
It
is clear that the collection is decreasing in $q$, i.e., $\mathcal{S}_{q'} \subset \mathcal{S}_q$. Hence the corresponding sets $\Omega\setminus\mathcal{S}_q$ are increasing in $q$. Further, observe that for any $b \ge a \ge 1/3$, we have
$$\left(1 - \frac{1}{q}\right)\ln(b/a) < \frac{1}{6}(b/a - 1) + 1 \le 1 + \frac{1}{2}\left(1 - \frac{2}{q}\right)(b/a - 1).$$
Now, take $a = (B^2/\sigma^2)(\ln(2p)/n)^{1-2/q'}$ and $b = (B^2/\sigma^2)(\ln(2p)/n)^{1-2/q}$. Thus,
$$\left(1 - \frac{1}{q}\right)\ln\left(\ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q}\right)\right) - \left(1 - \frac{1}{q'}\right)\ln\left(\ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q'}\right)\right)$$
$$\le \left(1 - \frac{1}{q}\right)\ln\left(\ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q}\right)\right) - \left(1 - \frac{1}{q}\right)\ln\left(\ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q'}\right)\right) + \frac{1}{3}\ln 3$$
$$\le 1 + \frac{1}{2}\left(1 - \frac{2}{q}\right)\left(\ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q'}\right)\big/\ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q}\right) - 1\right) + \frac{1}{3}\ln 3$$
$$\le 1 + \frac{1}{2}\left(\ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q}\right) - \ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q'}\right)\right) + \frac{1}{3}\ln 3$$
$$\le 1 + \frac{1}{3}\ln 3 - \left(\frac{1}{q} - \frac{1}{q'}\right)\ln(\ln(2p)/n).$$
From this, we can conclude that the ratio of the upper bound on $\mathcal{E}_{q'}(\sigma,B)$ to the upper bound on $\mathcal{E}_q(\sigma,B)$ is bounded above by 4. This shows that on $\mathcal{S}_{q'}$ the bound is decreasing. Further, on $\Omega\setminus\mathcal{S}_q$, the bounds are of the same order. All we are left with is $(\Omega\setminus\mathcal{S}_{q'})\cap\mathcal{S}_q$. For this case, the bound on $\mathcal{E}_{q'}(\sigma,B)$ is of the order $\sigma\sqrt{\ln(2p)/n}$, while that on $\mathcal{E}_q(\sigma,B)$ is of the order
$$B(\ln(2p)/n)^{1-1/q}\left[\ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q}\right)\right]^{1/q-1}.$$
On the set $(\Omega\setminus\mathcal{S}_{q'})\cap\mathcal{S}_q$, these two are of the same order, which shows that the upper bound on $\mathcal{E}_q(\sigma,B)$ is at least as large (up to constants, and possibly even in order) as the upper bound on $\mathcal{E}_{q'}(\sigma,B)$.

As for Property (3), we may write the bound $U_q$ on $\mathcal{E}_q(\sigma,B)$ as follows:
$$U_q = \sigma\sqrt{\frac{\ln(2p)}{n}}\,\mathbf{1}\{\mathcal{S}_q^c\} + B(\ln(2p)/n)^{1-1/q}\left[\ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q}\right)\right]^{1/q-1}\mathbf{1}\{\mathcal{S}_q\},$$
where $\mathcal{S}_q = \{(B^2/\sigma^2)^{q/(q-2)}(\ln(2p)/n) > e\}$.
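The bound of Theorem 4.3 is straightforward to evaluate numerically. Here is a Python sketch (function names ours, restricted to $q > 2$ so that the exponent $q/(q-2)$ is finite) of the piecewise bound $U^*$ and of $\min\{B, U^*\}$:

```python
import math

def u_star(q, sigma, B, p, n):
    """Piecewise bound U*(q, sigma, B, p, n) of Theorem 4.3
    (up to universal constants); assumes q > 2."""
    r = math.log(2.0 * p) / n
    if (B**2 / sigma**2) ** (q / (q - 2.0)) * r <= math.e:
        # small-ratio regime: the sub-Gaussian term dominates
        return sigma * math.sqrt(r)
    # heavy-envelope regime: the truncation-optimized term dominates
    inner = (B**2 / sigma**2) * r ** (1.0 - 2.0 / q)
    return B * r ** (1.0 - 1.0 / q) * math.log(inner) ** (1.0 / q - 1.0)

def e_q_bound(q, sigma, B, p, n):
    """min{B, U*}, as in Theorem 4.3."""
    return min(B, u_star(q, sigma, B, p, n))
```

Plotting `e_q_bound` over a grid of $q$ values with $\ln(2p)/n \le 1$ illustrates the monotonicity asserted in Theorem 4.4.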
As we saw in the proof of Theorem 4.4, the $\mathcal{S}_q$'s are decreasing in $q$, and they decrease to
$$\mathcal{S}_\infty = \{(B^2/\sigma^2)(\ln(2p)/n) > e\},$$
while $\mathcal{S}_q^c \uparrow \mathcal{S}_\infty^c$. Further, as $q \to \infty$, we must have
$$B(\ln(2p)/n)^{1-1/q}\left[\ln\left((B^2/\sigma^2)(\ln(2p)/n)^{1-2/q}\right)\right]^{1/q-1} \to \frac{B\ln(2p)/n}{\ln\left(B^2\ln(2p)/(n\sigma^2)\right)}.$$
It is easy to verify from here that the limiting bound is of the same order as the bound we obtained on $\mathcal{E}_\infty(\sigma,B)$. In this sense, the upper bound we obtained for $\mathcal{E}_q(\sigma,B)$ may be considered optimal.

5 Conclusions and Future Work

In this article, we studied the problem of characterizing (up to universal constants) the expected $L_\infty$ norm of the mean of independent random vectors by providing matching upper and lower bounds. Two conditions were imposed on the random vectors: the componentwise variances are finite (bounded above by $\sigma^2$), and the envelope has a finite $q$-th moment bounded by $B^q$. Such concentration inequalities play a fundamental role in empirical process theory. Our results demonstrate that it suffices to analyze the case of independent and identically distributed (i.i.d.) random vectors, and also to study bounded random vectors.

For bounded random vectors, it is common in the literature to use Bernstein's inequality. Our results provide a closed-form upper bound from Bennett's inequality and show that it is optimal up to universal constants. It improves upon Bernstein's inequality by a log factor, as can be expected. Except for the case where the variance of the random variables is too small compared to the upper bound on the random variables, our result is final. We also study the case where the envelope is not bounded but only has a finite $q$-th moment. Our results in this case are limited to upper bounds, which we show exhibit several desirable properties. We leave the question of lower bounds for
future research. Finally, our results can be applied to study or refine maximal inequalities for the supremum of empirical processes. For instance, some results of Kuchibhotla and Patra (2022) can be improved by replacing their Proposition B.1 with our Theorem 4.3. We leave these important applications to future research.

References

Bartl, D. and Mendelson, S. (2025). Uniform mean estimation via generic chaining. arXiv preprint arXiv:2502.15116.

Batir, N. (2008). Sharp inequalities for factorial n. Proyecciones (Antofagasta), 27(1):97–102.

Bennett, G. (1962). Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57(297):33–45.

Bentkus, V. (2004). On Hoeffding's inequalities. Annals of Probability, 32(2):1650–1673.

Blanchard, G. and Voracek, V. (2024). Tight bounds for local Glivenko-Cantelli. In International Conference on Algorithmic Learning Theory, pages 179–220. PMLR.

Csiszár, I. and Talata, Z. (2006). Context tree estimation for not necessarily finite memory processes, via BIC and MDL. IEEE Transactions on Information Theory, 52(3):1007–1016.

de la Peña, V. H. and Giné, E. (1999). Decoupling. Probability and its Applications (New York). Springer-Verlag, New York. From dependence to independence, Randomly stopped processes. U-statistics and processes. Martingales and beyond.

Dirksen, S. (2015). Tail bounds via generic chaining. Electron. J. Probab., 20:no. 53, 1–29.

Giné, E. and Nickl, R. (2021). Mathematical foundations of infinite-dimensional statistical models. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press.

Han, Q. and Wellner, J. A. (2018). Robustness of shape-restricted regression estimators: an envelope perspective. arXiv preprint arXiv:1805.02542.

Han, Q. and Wellner, J. A. (2019). Convergence rates of least squares regression estimators with heavy-tailed errors. Ann. Statist., 47:2286–2319.

Hoorfar, A. and Hassani, M.
(2008). Inequalities on the Lambert W function and hyperpower function. J. Inequal. Pure and Appl. Math, 9(2):5–9.

Kock, A. B. and Preinerstorfer, D. (2024). A remark on moment-dependent phase transitions in high-dimensional Gaussian approximations. Statist. Probab. Lett., 211:Paper No. 110149, 6.

Kuchibhotla, A. K. and Patra, R. K. (2022). On least squares estimation under heteroscedastic and heavy-tailed errors. Ann. Statist., 50(1):277–302.

Ledoux, M. and Talagrand, M. (2011). Probability in Banach spaces. Classics in Mathematics. Springer-Verlag, Berlin. Isoperimetry and processes, Reprint of the 1991 edition.

Montgomery-Smith, S. J. and Pruss, A. R. (2001). A comparison inequality for sums of independent random variables. Journal of Mathematical Analysis and Applications, 254(1):35–42.

Pruss, A. R. (1997). Comparisons between tail probabilities of sums of independent symmetric random variables. In Annales de l'Institut Henri Poincaré (B) Probability and Statistics, volume 33, pages 651–671. Elsevier.

Rio, E. (2017). About the constants in the Fuk-Nagaev inequalities. Electron. Commun. Probab., 22:Paper No. 28, 12.

Talagrand, M. (2021). Upper and lower bounds for stochastic processes—decomposition theorems, volume 60 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics. Springer, Cham, second edition.

van de Geer, S. A. (2000). Applications of empirical process theory,
volume 6 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge.

Van der Vaart, A. W. (1998). Asymptotic statistics, volume 3 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge.

van der Vaart, A. W. and Wellner, J. A. (2023). Weak convergence and empirical processes—with applications to statistics. Springer Series in Statistics. Springer, Cham, second edition.

Wellner, J. A. (2017). The Bennett-Orlicz norm. Sankhya A, 79(2):355–383.

Zubkov, A. M. and Serov, A. A. (2013). A complete proof of universal inequalities for the distribution function of the binomial law. Theory Probab. Appl., 57(3):539–544.

Supplement to “Maximal Inequalities for Independent Random Vectors”

Abstract

This supplement contains the proofs of all the main results in the paper and some supporting lemmas.

S.1 Proofs of Results in Section 1

S.1.1 Proof of Theorem 2.1

The lower bound $\mathcal{E}_q^*(\sigma,B) \le \mathcal{E}_q(\sigma,B)$ is trivially true, because $P^{\otimes n} = \prod_{i=1}^n P$ for $P \in \mathcal{P}_q(\sigma,B)$ is a distribution in $\mathcal{P}_0^n$ satisfying $V(P^{\otimes n}) \le \sigma^2$ and $D_q(P^{\otimes n}) \le B$. To prove the upper bound, we show that for any $P_n \in \mathcal{P}_0^n$ satisfying $V(P_n) \le \sigma^2$ and $D_q(P_n) \le B$, there exists a distribution $P \in \mathcal{P}_q(\sigma,B)$ such that
$$\mathbb{E}_{P_n}\left[\left\|\frac{1}{n}\sum_{i=1}^n X_i\right\|_\infty\right] \le c\,\mathbb{E}_P\left[\left\|\frac{1}{n}\sum_{i=1}^n X_i\right\|_\infty\right].$$
By symmetrization (Giné and Nickl, 2021, Theorem 3.1.21), we have
$$\mathbb{E}_{P_n}\left[\left\|\frac{1}{n}\sum_{i=1}^n X_i\right\|_\infty\right] \le 2\,\mathbb{E}_{P_n}\left[\left\|\frac{1}{n}\sum_{i=1}^n \varepsilon_iX_i\right\|_\infty\right], \quad (E.1)$$
where $\varepsilon_i$, $1 \le i \le n$, are independent Rademacher random variables (i.e., $\mathbb{P}(\varepsilon_i = -1) = \mathbb{P}(\varepsilon_i = 1) = 1/2$). We can write the right-hand side of (E.1) as $\mathbb{E}_{Q_n}[\|n^{-1}\sum_{i=1}^n Y_i\|_\infty]$, where $Y_i \stackrel{d}{=} \varepsilon_iX_i$ (and $Q_i$ is the distribution of $Y_i$). Proposition 1 of Pruss (1997) implies that for any $Q_n = \prod_{i=1}^n Q_i$, there exists a distribution $\widetilde{Q}$ supported on $\mathbb{R}^p$ such that
$$\mathbb{P}_{Q_n}\left(\left\|\sum_{i=1}^n Y_i\right\|_\infty > \lambda\right) \le 8\,\mathbb{P}_{\widetilde{Q}}\left(\left\|\sum_{i=1}^n \widetilde{Y}_i\right\|_\infty > \lambda/2\right), \quad (E.2)$$
where $\widetilde{Y}_1,\ldots,\widetilde{Y}_n \stackrel{iid}{\sim} \widetilde{Q}$. (Proposition 1 of Pruss (1997) is only stated for real-valued random variables, but the proof holds for random vectors, as stated in Montgomery-Smith and Pruss (2001).) This implies
$$\mathbb{E}_{Q_n}\left[\left\|\frac{1}{n}\sum_{i=1}^n X_i\right\|_\infty\right] \le 16\,\mathbb{E}_{\widetilde{Q}}\left[\left\|\frac{1}{n}\sum_{i=1}^n \widetilde{Y}_i\right\|_\infty\right].$$
The distribution $\widetilde{Q}$ that satisfies (E.2) is defined via
$$\mathbb{E}_{\widetilde{Q}}[g(\widetilde{Y})] = \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{Q_n}[g(Y_i)] \quad \text{for all bounded Borel measurable functions } g:\mathbb{R}^p\to\mathbb{R}.$$
Because $V(Q_n) \le \sigma^2$ and $D_q(Q_n) \le B$, this implies that $V(\widetilde{Q}^{\otimes n}) \le \sigma^2$ and $D_q(\widetilde{Q}^{\otimes n}) \le B$. Therefore, $\widetilde{Q} \in \mathcal{P}_q(\sigma,B)$, which implies $\mathcal{E}_q(\sigma,B) \le 16\,\mathcal{E}_q^*(\sigma,B)$.

S.1.2 Proof of Theorem 2.2

We prove the result by proving the equivalence for $\mathcal{E}_q^*(\sigma,B)$; by Theorem 2.1, the equivalence then also holds for $\mathcal{E}_q(\sigma,B)$. By the triangle inequality, for any $t$ and distribution $P$, we have
$$\mathbb{E}_P\left[\left\|\frac{1}{n}\sum_{i=1}^n X_i\right\|_\infty\right] \le \mathbb{E}_P\left[\left\|\frac{1}{n}\sum_{i=1}^n X_i\mathbf{1}\{\|X_i\|_\infty \le t\}\right\|_\infty\right] + \mathbb{E}_P\left[\left\|\frac{1}{n}\sum_{i=1}^n X_i\mathbf{1}\{\|X_i\|_\infty > t\}\right\|_\infty\right].$$
Taking $t = \tau_q(\sigma,B)$, we get that for any $P \in \mathcal{P}_q(\sigma,B)$ the first term is bounded by $\mathcal{E}_{q,\infty}(\sigma,B,\tau_q(\sigma,B))$. For the second term, note that for any $P \in \mathcal{P}_q(\sigma,B)$,
$$\mathbb{P}_P\left(\left\|\frac{1}{n}\sum_{i=1}^n X_i\mathbf{1}\{\|X_i\|_\infty > \tau_q(\sigma,B)\}\right\|_\infty > 0\right) \le \sum_{i=1}^n \mathbb{P}_P(\|X_i\|_\infty > \tau_q(\sigma,B)) = n\,\mathbb{P}_P(\|X_i\|_\infty > \tau_q(\sigma,B)) \le 1/8.$$
Hence, by Proposition 6.8 of Ledoux and Talagrand (2011), for any $P \in \mathcal{P}_q(\sigma,B)$,
$$\mathbb{E}_P\left[\left\|\frac{1}{n}\sum_{i=1}^n X_i\mathbf{1}\{\|X_i\|_\infty > \tau_q(\sigma,B)\}\right\|_\infty\right] \le \frac{8}{n}\,\mathbb{E}_P\left[\max_{1\le i\le n}\|X_i\|_\infty\right] \le \frac{8}{n}\,\mathcal{M}_q(\sigma,B). \quad (E.3)$$
Therefore, by Theorem 2.1,
$$\mathcal{E}_q(\sigma,B) \le 16\,\mathcal{E}_q^*(\sigma,B) \le 16\left[\mathcal{E}_{q,\infty}^*(\sigma,B,\tau_q(\sigma,B)) + \frac{8}{n}\,\mathcal{M}_q(\sigma,B)\right].$$
This completes the proof of the upper bound.

To prove the lower bound, note that $\mathcal{P}_q(\sigma,B) \supset \mathcal{P}_{q,\infty}(\sigma,B,\tau_q(\sigma,B))$ and hence $\mathcal{E}_q^*(\sigma,B) \ge \mathcal{E}_{q,\infty}^*(\sigma,B,\tau_q(\sigma,B))$. Moreover, for any $P \in \mathcal{P}_q(\sigma,B)$, by Theorem 1.1.1 of de la Peña and Giné (1999),
$$\mathbb{E}_P\left[\max_{1\le i\le n}\|X_i\|_\infty\right] \le 2\,\mathbb{E}_P\left[\left\|\sum_{i=1}^n X_i\right\|_\infty\right].$$
Taking the supremum over $P \in \mathcal{P}_q(\sigma,B)$ yields $n^{-1}\mathcal{M}_q(\sigma,B) \le 2\,\mathcal{E}_q^*(\sigma,B)$. Therefore,
https://arxiv.org/abs/2504.17885v1
\[
\mathcal{E}^*_q(\sigma,B) \ge \max\Big\{\mathcal{E}^*_{q,\infty}(\sigma,B,\tau_q(\sigma,B)),\ \frac{1}{2n}\mathcal{M}_q(\sigma,B)\Big\} \ge \frac12\,\mathcal{E}^*_{q,\infty}(\sigma,B,\tau_q(\sigma,B))+\frac{1}{4n}\mathcal{M}_q(\sigma,B).
\]
This completes the proof of the lower bound.

To prove the final statement, that (2) holds even when $\mathcal{M}_q(\sigma,B)$ is replaced with $\mathcal{M}'_q(\sigma,B)$, first note that $\mathcal{M}'_q(\sigma,B)\le\mathcal{M}_q(\sigma,B)$. This implies that the lower bound with $\mathcal{M}'_q(\sigma,B)$ readily follows from the lower bound in (2). For the upper bound with $\mathcal{M}'_q(\sigma,B)$, we revisit the application of Proposition 6.8 of Ledoux and Talagrand (2011) in (E.3). By that proposition, we in fact obtain
\[
\mathbb{E}_P\bigg[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\mathbf{1}\{\|X_i\|_\infty>\tau_q(\sigma,B)\}\Big\|_\infty\bigg] \le \frac{8}{n}\,\mathbb{E}_P\Big[\max_{1\le i\le n}\|X_i\|_\infty\mathbf{1}\{\|X_i\|_\infty>\tau_q(\sigma,B)\}\Big] \le \frac{8}{n}\,\mathcal{M}'_q(\sigma,B).
\]
This completes the proof.

S.1.3 Proof of Lemma 2.1

Observe that for any $P\in\mathcal{P}_q(\sigma,B)$ and $X_1,\dots,X_n\overset{\mathrm{iid}}{\sim}P$,
\[
1-\big(1-\mathbb{P}_P(\|X_i\|_\infty>t)\big)^n = \mathbb{P}_P\Big(\max_{1\le i\le n}\|X_i\|_\infty>t\Big) \le \frac{\mathbb{E}_P\big[\max_{1\le i\le n}\|X_i\|_\infty\mathbf{1}\{\|X_i\|_\infty>t\}\big]}{t} \le \frac{\mathcal{M}_q(\sigma,B)}{t}.
\]
Taking $t=(1-(1-1/(8n))^n)^{-1}\mathcal{M}_q(\sigma,B)$, we get $\mathbb{P}_P(\|X_i\|_\infty>t)\le\frac{1}{8n}$ for all $P\in\mathcal{P}_q(\sigma,B)$. Therefore, $\tau_q(\sigma,B)\le(1-(1-1/(8n))^n)^{-1}\mathcal{M}_q(\sigma,B)$ for all $n\ge1$. Because $(1-(1-1/(8n))^n)^{-1}\le 8.52$ for all $n\ge1$, the stated inequality for $\tau_q(\sigma,B)$ follows. By an application of Jensen's inequality, we get
\[
\mathbb{E}_P\Big[\max_{1\le i\le n}\|X_i\|_\infty\Big] \le \big(n\,\mathbb{E}_P[\|X_i\|_\infty^q]\big)^{1/q} \le n^{1/q}B.
\]
For any $q\ge2$, define a random variable $G=G(q)$ via the distribution function
\[
\mathbb{P}(G\le x) :=
\begin{cases}
(1/2)\,|x|^{-q}(\max\{\ln|x|,1\})^{-2}, & \text{if } x\le -1,\\
1/2, & \text{if } x\in(-1,1),\\
1-(1/2)\,x^{-q}(\max\{\ln x,1\})^{-2}, & \text{if } x\ge 1.
\end{cases}
\]
Then $\mathbb{E}[G]=0$ and $\mathbb{E}[|G|^q]=1+2q$; see Eq. (9) of Kock and Preinerstorfer (2024) for details. It is easy to see that $\mathbb{E}[|G|^{q+\delta}]=\infty$ for any $\delta>0$.
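The constant $8.52$ used above is easy to confirm numerically: the sequence $(1-(1-1/(8n))^n)^{-1}$ increases from exactly $8$ at $n=1$ toward its limit $(1-e^{-1/8})^{-1}\approx 8.5103$. A minimal standalone sketch (function name is illustrative, not from the paper):

```python
# Check: (1 - (1 - 1/(8n))^n)^{-1} <= 8.52 for all n >= 1.
import math

def c(n: int) -> float:
    """The constant (1 - (1 - 1/(8n))^n)^{-1} from the proof of Lemma 2.1."""
    return 1.0 / (1.0 - (1.0 - 1.0 / (8 * n)) ** n)

values = [c(n) for n in range(1, 20001)]
limit = 1.0 / (1.0 - math.exp(-1.0 / 8.0))  # ~ 8.5103

assert abs(values[0] - 8.0) < 1e-12      # n = 1 gives exactly 8
assert all(v <= 8.52 for v in values)    # the bound stated in the proof
assert abs(values[-1] - limit) < 1e-3    # the sequence approaches its limit
```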
Also, for $0\le\tau\le K$, define a two-point random variable $W=W(\tau^2,K)$ by
\[
\mathbb{P}\Big(W=-\frac{\tau^2}{K}\Big)=\frac{K^2}{K^2+\tau^2} \quad\text{and}\quad \mathbb{P}(W=K)=\frac{\tau^2}{K^2+\tau^2}.
\]
It is easy to verify that $\mathbb{E}[W]=0$, $\mathbb{P}(|W|\le K)=1$, and
\[
\operatorname{Var}(W)=\frac{\tau^4}{K^2}\cdot\frac{K^2}{K^2+\tau^2}+\frac{K^2\tau^2}{K^2+\tau^2}=\tau^2.
\]
Finally, consider the random vector $H\in\mathbb{R}^p$ with $H(j)=G(q)W_j$, where $W_1,\dots,W_p$ are independent and identically distributed copies of $W(\tau^2,K)$, independent of $G(q)$. Then for any $1\le j\le p$,
\[
\mathbb{E}[H(j)]=\mathbb{E}[G(q)]\,\mathbb{E}[W(\tau^2,K)]=0, \qquad \mathbb{E}[H^2(j)]=\mathbb{E}[G^2(q)]\,\mathbb{E}[W^2(\tau^2,K)]=\mathbb{E}[G^2(q)]\,\tau^2,
\]
and
\[
\big(\mathbb{E}[\|H\|_\infty^q]\big)^{1/q} = \big(\mathbb{E}[|G(q)|^q]\big)^{1/q}\Big(\mathbb{E}\Big[\max_{1\le j\le p}|W_j|^q\Big]\Big)^{1/q} = (1+2q)^{1/q}\bigg[\Big(\frac{\tau^2}{K}\Big)^q\Big(\frac{K^2}{K^2+\tau^2}\Big)^p + K^q\Big(1-\Big(\frac{K^2}{K^2+\tau^2}\Big)^p\Big)\bigg]^{1/q}.
\]
This follows by noting that
\[
\mathbb{P}\Big(\max_{1\le j\le p}|W_j|=\frac{\tau^2}{K}\Big) = 1-\mathbb{P}\Big(\max_{1\le j\le p}|W_j|=K\Big) = \Big(\frac{K^2}{K^2+\tau^2}\Big)^p,
\]
which in turn yields
\[
\mathbb{E}\Big[\max_{1\le j\le p}|W_j|^q\Big] = \Big(\frac{\tau^2}{K}\Big)^q\Big(\frac{K^2}{K^2+\tau^2}\Big)^p + K^q\Big(1-\Big(\frac{K^2}{K^2+\tau^2}\Big)^p\Big).
\]
Observe that
\[
\mathbb{E}[G^2(q)] = \int_0^\infty \mathbb{P}\big(|G(q)|\ge x^{1/2}\big)\,dx = 1+\int_1^\infty \mathbb{P}\big(|G(q)|\ge x^{1/2}\big)\,dx \ge 1.
\]
Moreover, by Jensen's inequality (using $q\ge2$),
\[
\mathbb{E}[G^2(q)] \le \big(\mathbb{E}[|G(q)|^q]\big)^{2/q} \le (1+2q)^{2/q} \le 5.
\]
Taking $\tau^2=\sigma^2/5$, we get $\mathbb{E}[H^2(j)]\le 5\tau^2=\sigma^2$. Moreover, take $K=K(\sigma;q,p)$ to be the unique solution of the equation
\[
(1+2q)\bigg[\Big(\frac{\sigma^2}{5K}\Big)^q\Big(\frac{5K^2}{5K^2+\sigma^2}\Big)^p + K^q\Big(1-\Big(\frac{5K^2}{5K^2+\sigma^2}\Big)^p\Big)\bigg] = B^q. \tag{E.4}
\]
By Lemma S.4.1, this equation has a unique solution in $K$.
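The elementary identities for $W(\tau^2,K)$ and for $\mathbb{E}[\max_j|W_j|^q]$ can be verified mechanically. A small standalone sketch (the parameter values are arbitrary test choices, not from the paper):

```python
from itertools import product

def w_moments(tau, K):
    """Mean and variance of the two-point variable W(tau^2, K)."""
    p_neg = K**2 / (K**2 + tau**2)    # P(W = -tau^2/K)
    p_pos = tau**2 / (K**2 + tau**2)  # P(W = K)
    mean = (-tau**2 / K) * p_neg + K * p_pos
    second = (tau**2 / K) ** 2 * p_neg + K**2 * p_pos
    return mean, second - mean**2

def emax_q_formula(tau, K, p, q):
    """Closed form for E[max_j |W_j|^q] over p iid copies."""
    r = (K**2 / (K**2 + tau**2)) ** p  # P(all coordinates equal -tau^2/K)
    return (tau**2 / K) ** q * r + K**q * (1 - r)

def emax_q_bruteforce(tau, K, p, q):
    """Direct enumeration over all 2^p support points."""
    vals = [tau**2 / K, K]
    probs = [K**2 / (K**2 + tau**2), tau**2 / (K**2 + tau**2)]
    total = 0.0
    for outcome in product([0, 1], repeat=p):
        pr = 1.0
        for o in outcome:
            pr *= probs[o]
        total += pr * max(vals[o] for o in outcome) ** q
    return total

for tau, K in [(0.5, 1.0), (1.0, 3.0), (2.0, 2.0)]:
    mean, var = w_moments(tau, K)
    assert abs(mean) < 1e-12 and abs(var - tau**2) < 1e-12
for tau, K, p, q in [(0.5, 1.0, 3, 2), (1.0, 2.0, 4, 3)]:
    assert abs(emax_q_formula(tau, K, p, q) - emax_q_bruteforce(tau, K, p, q)) < 1e-12
```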
Having found the unique solution, we may write
\[
\mathbb{E}\Big[\max_{1\le i\le n}\|H_i\|_\infty\Big] \ge \mathbb{E}\Big[\max_{1\le i\le n}|G_i(q)|\Big]\max_{1\le i\le n}\mathbb{E}\Big[\max_{1\le j\le p}|W_{i,j}|\Big] = \mathbb{E}\Big[\max_{1\le i\le n}|G_i(q)|\Big]\,\mathbb{E}\Big[\max_{1\le j\le p}|W_j|\Big].
\]
Now,
\[
\mathbb{E}\Big[\max_{1\le j\le p}|W_j|\Big] = \frac{\tau^2}{K}\Big(\frac{K^2}{K^2+\tau^2}\Big)^p + K\Big(1-\Big(\frac{K^2}{K^2+\tau^2}\Big)^p\Big) = \frac{\tau^2}{K}+\Big(K-\frac{\tau^2}{K}\Big)\Big(1-\Big(\frac{K^2}{K^2+\tau^2}\Big)^p\Big).
\]
When $p\tau^2/(\tau^2+K^2)\ge 1/2$, we have
\[
\frac{\tau^2}{\tau^2+K^2}\ge\frac{1}{2p} \implies 1-\Big(\frac{K^2}{\tau^2+K^2}\Big)^p \ge 1-\Big(1-\frac{1}{2p}\Big)^p \ge 1-\frac{1}{\sqrt e}.
\]
When $p\tau^2/(\tau^2+K^2)<1/2$, we have
\[
1-\Big(\frac{K^2}{\tau^2+K^2}\Big)^p \ge \frac{p\tau^2}{\tau^2+K^2}-\frac12\Big(\frac{p\tau^2}{\tau^2+K^2}\Big)^2 \ge \frac12\cdot\frac{p\tau^2}{\tau^2+K^2}.
\]
From here it is easy to see that
\[
\mathbb{E}\Big[\max_{1\le j\le p}|W_j|\Big] \gtrsim
\begin{cases}
K, & \text{if } p\tau^2/(\tau^2+K^2)\ge 1/2,\\
p\tau^2/K, & \text{if } p\tau^2/(\tau^2+K^2)<1/2.
\end{cases}
\]
Now, in equation (E.4), taking the left side as a function $g$ of $K$, it is easy to see that $g$ is continuous and that $g(B/(1+2q)^{1/q})\le B^q$, which means that the solution satisfies $K\ge B/(1+2q)^{1/q}$. For the choice $K^*=B/(1+2q)^{1/q}$, both $\operatorname{Var}(H_i)\le\sigma^2$ and $\mathbb{E}[\|H_i\|_\infty^q]\le B^q$, and hence we may plug in $K=B/(1+2q)^{1/q}$ to obtain a lower bound. Thus, we have
\[
\mathbb{E}\Big[\max_{1\le j\le p}|W_j|\Big] \gtrsim
\begin{cases}
B, & \text{if } 2p-1\ge 5(B/\sigma)^2/(1+2q)^{2/q},\\
p\sigma^2/B, & \text{if } 2p-1< 5(B/\sigma)^2/(1+2q)^{2/q}.
\end{cases}
\]
We now compute a lower bound for the expected maximum of the $G_i$'s. Note that if $(2qn(\log n)^{-2})^{1/q}>e$,
\[
\mathbb{E}\Big[\max_{1\le i\le n}|G_i(q)|\Big] = \int_1^e 1-(1-x^{-q})^n\,dx + \int_e^\infty 1-\big(1-x^{-q}(\ln x)^{-2}\big)^n\,dx \ge \int_e^{(2qn(\log n)^{-2})^{1/q}} 1-\big(1-x^{-q}(\ln x)^{-2}\big)^n\,dx
\]
\[
\ge \Big((2qn(\log n)^{-2})^{1/q}-e\Big)\bigg(1-\exp\Big(-\frac{q}{2}\cdot\frac{(\log n)^2}{(\log n+\log(2q))^2}\Big)\bigg) \gtrsim \big(n(\log n)^{-2}\big)^{1/q}.
\]
The penultimate inequality is a simple application of the fact that $1-x\le e^{-x}$ for $x\ge0$, which leads to the fact that $1-(1-x)^n\ge 1-e^{-nx}$. The last inequality holds because the multiplier of $((2qn(\log n)^{-2})^{1/q}-e)$ is increasing in each of $n$ and $q$, so plugging in $n=q=2$ we get $(1-e^{-1/9})$.

When $(2qn(\log n)^{-2})^{1/q}\le e$, we need only show that the expected maximum is larger than a constant. This is done as follows. Consider the solution $z$ to the equation $1-e^{-x}/2=x$. Then $1-e^{-nz} = 1-(2(1-z))^n \le 1-(1-z)^n$. This means that $t=z^{-1/q}\le\sqrt2$. Thus we may write
\[
\mathbb{E}\Big[\max_{1\le i\le n}|G_i(q)|\Big] \ge \int_{\sqrt2}^{e} 1-(1-x^{-q})^n\,dx \ge \int_{\sqrt2}^{e} 1-e^{-nx^{-q}/2}\,dx \ge (e-\sqrt2)\big(1-e^{-ne^{-q}/2}\big) \ge \frac{e-\sqrt2}{2} \gtrsim \big(n(\log n)^{-2}\big)^{1/q}.
\]
Taking $S=\{2p-1\ge 5(B/\sigma)^2/(1+2q)^{2/q}\}$, we get
\[
\mathcal{M}_q(\sigma,B) \ge \mathbb{E}\Big[\max_{1\le i\le n}\|H_i\|_\infty\Big] \gtrsim \big(n(\log n)^{-2}\big)^{1/q}\Big(B\,\mathbf{1}_S+\frac{p\sigma^2}{B}\,\mathbf{1}_{S^c}\Big).
\]

S.2 Proofs of Results in Section 2

S.2.1 Proof of Theorem 3.1

Recall that we have
\[
p_{\mathrm{Benn}}(t) = \min\Big\{1,\,2p\exp\Big(-\frac{n\sigma^2}{B^2}\Psi\Big(\frac{tB}{\sigma^2}\Big)\Big)\Big\}.
\]
In addition, the observation that $\Psi''(x)=1/(1+x)>0$ for all $x\ge0$ enables us to apply Lemma S.4.2, which gives
\[
\int_0^B p_{\mathrm{Benn}}(t)\,dt - A \le \int_A^B 2p\exp\Big(-\frac{n\sigma^2}{B^2}\Psi\Big(\frac{tB}{\sigma^2}\Big)\Big)dt \le \frac{B}{n}\big(f(A)-f(B)\big).
\]
Before we proceed further, recall that $p_{\mathrm{Benn}}(t)=\mathbf{1}\{t\le A\}+q_{\mathrm{Benn}}(t)\mathbf{1}\{t>A\}$. Define the finite sequence $\ell_1,\ell_2,\dots,\ell_{K+1}$ such that $\ell_1=A$ and, in general, for $k\ge2$,
\[
\ell_k = \min\Big\{t>A:\ 2p\exp\Big(-\frac{n\sigma^2}{B^2}\Psi\Big(\frac{tB}{\sigma^2}\Big)\Big)\le\frac{1}{2^{k-1}}\Big\} = \frac{\sigma^2}{B}\Psi^{-1}\Big(\frac{B^2\ln(2^kp)}{n\sigma^2}\Big).
\]
Here $K$ is chosen such that $K=\max\{k\in\mathbb{N}:\ell_{k+1}\le B\}$.
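Several of the steps that follow rely only on elementary properties of Bennett's function $\Psi(x)=(1+x)\ln(1+x)-x$, namely $\Psi'(x)=\ln(1+x)$ and $\Psi''(x)=1/(1+x)>0$. A small numerical sketch (the bisection inverse is an illustrative stand-in for $\Psi^{-1}$, not the paper's construction):

```python
import math

def Psi(x: float) -> float:
    """Bennett's function: Psi(x) = (1+x) ln(1+x) - x."""
    return (1 + x) * math.log(1 + x) - x

def Psi_inv(y: float) -> float:
    """Inverse of Psi on [0, inf), computed by bisection."""
    lo, hi = 0.0, 1.0
    while Psi(hi) < y:
        hi *= 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if Psi(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

h = 1e-4
for x in [0.5, 1.0, 5.0]:
    # Psi'(x) = ln(1+x), via a central first difference
    d1 = (Psi(x + h) - Psi(x - h)) / (2 * h)
    assert abs(d1 - math.log1p(x)) < 1e-6
    # Psi''(x) = 1/(1+x), via a central second difference
    d2 = (Psi(x + h) - 2 * Psi(x) + Psi(x - h)) / h**2
    assert abs(d2 - 1 / (1 + x)) < 1e-5
# Round trip of the inverse: Psi(Psi_inv(y)) == y
for y in [0.1, 1.0, 10.0]:
    assert abs(Psi(Psi_inv(y)) - y) < 1e-9
```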
This allows us to rewrite $p_{\mathrm{Benn}}(t)$ as
\[
p_{\mathrm{Benn}}(t) = \mathbf{1}\{t\le A\}+\sum_{k=1}^{K}q_{\mathrm{Benn}}(t)\mathbf{1}\{\ell_k<t\le\ell_{k+1}\}+q_{\mathrm{Benn}}(t)\mathbf{1}\{\ell_{K+1}<t\le B\}.
\]
From the definition of the $\ell_k$'s, it is evident that
\[
\frac{1}{2^k}(\ell_{k+1}-\ell_k) \le \int_{\ell_k}^{\ell_{k+1}}q_{\mathrm{Benn}}(t)\,dt \le \frac{1}{2^{k-1}}(\ell_{k+1}-\ell_k).
\]
Appealing to Lagrange's mean value theorem, for every $k$ there exists $\tau_k\in[0,1]$ such that
\[
\ell_{k+1}-\ell_k = \frac{\sigma^2}{B}\Big(\frac{B^2\ln(2^{k+1}p)}{n\sigma^2}-\frac{B^2\ln(2^kp)}{n\sigma^2}\Big)\Big/\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{k+\tau_k}p)}{n\sigma^2}\Big)\Big) = \frac{B\ln 2}{n}\Big/\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{k+\tau_k}p)}{n\sigma^2}\Big)\Big),
\]
which implies that
\[
\frac{B\ln 2}{n}\Big/\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{k+1}p)}{n\sigma^2}\Big)\Big) \le \ell_{k+1}-\ell_k \le \frac{B\ln 2}{n}\Big/\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{k}p)}{n\sigma^2}\Big)\Big).
\]
Lemma S.4.4 implies that for $k$ such that $B^2\ln(2^{k+1}p)/(n\sigma^2)\le 1$, we have
\[
\frac{\Psi'\big(\Psi^{-1}\big(B^2\ln(2^{k+1}p)/(n\sigma^2)\big)\big)}{\Psi'\big(\Psi^{-1}\big(B^2\ln(2^{k}p)/(n\sigma^2)\big)\big)} \le \frac{3}{2\sqrt2}\cdot\frac{\ln\Big(1+\sqrt{B^2\ln(2^{k+1}p)/(n\sigma^2)}\Big)}{\ln\Big(1+\sqrt{B^2\ln(2^{k}p)/(n\sigma^2)}\Big)} \le \frac{3}{2\sqrt2}\sqrt{\frac{\ln(2^{k+1}p)}{\ln(2^kp)}} \le \frac32. \tag{E.5}
\]
This follows from the fact that $x\mapsto\ln(1+x)/x$ is a decreasing function, hence for $0<x<y$,
\[
\frac{\ln(1+y)}{\ln(1+x)}\le\frac{y}{x}. \tag{E.6}
\]
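The decreasing-ratio fact just used — $x\mapsto\ln(1+x)/x$ is decreasing on $(0,\infty)$, so $\ln(1+y)/\ln(1+x)\le y/x$ for $0<x<y$ — admits a quick standalone grid check:

```python
import math
import itertools

# Since log(1+x)/x is decreasing, for 0 < x < y we must have
# log(1+y)/log(1+x) <= y/x.  Verify on a grid of ordered pairs.
grid = [0.01 * k for k in range(1, 300)]
for x, y in itertools.combinations(grid, 2):  # guarantees x < y
    assert math.log1p(y) / math.log1p(x) <= y / x + 1e-12
```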
Now, using (E.6) and appealing to Lemma S.4.5, we get that for $k$ such that $B^2\ln(2^{k+1}p)/(n\sigma^2)>1$,
\[
\frac{\Psi'\big(\Psi^{-1}\big(B^2\ln(2^{k+1}p)/(n\sigma^2)\big)\big)}{\Psi'\big(\Psi^{-1}\big(B^2\ln(2^{k}p)/(n\sigma^2)\big)\big)} \le \frac{\sqrt2}{\ln 2}\cdot\frac{\ln\big(1+B^2\ln(2^{k+1}p)/(n\sigma^2)\big)}{\ln\big(1+B^2\ln(2^{k}p)/(n\sigma^2)\big)} \le \frac{\sqrt2}{\ln 2}\cdot\frac{\ln(2^{k+1}p)}{\ln(2^kp)} \le \frac{2\sqrt2}{\ln 2}. \tag{E.7}
\]
Combining (E.5) and (E.7), we conclude that
\[
\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{k+1}p)}{n\sigma^2}\Big)\Big)\Big/\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{k}p)}{n\sigma^2}\Big)\Big) \le \frac{2\sqrt2}{\ln 2} \quad\text{for all } k\ge1.
\]
Taking $a=2\sqrt2/\ln2$, we have that for any $1\le k\le K$,
\[
\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{k+1}p)}{n\sigma^2}\Big)\Big) \le a\,\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{k}p)}{n\sigma^2}\Big)\Big) \le \cdots \le a^k\,\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2p)}{n\sigma^2}\Big)\Big).
\]
For further analysis, we need to consider two cases separately.

Case 1: $\ell_2\le B$. In this case,
\begin{align*}
\int_A^B p_{\mathrm{Benn}}(t)\,dt &= \sum_{k=1}^K\int_{\ell_k}^{\ell_{k+1}}q_{\mathrm{Benn}}(t)\,dt + \int_{\ell_{K+1}}^B q_{\mathrm{Benn}}(t)\,dt \ge \sum_{k=1}^K\int_{\ell_k}^{\ell_{k+1}}q_{\mathrm{Benn}}(t)\,dt \ge \sum_{k=1}^K\frac{1}{2^k}(\ell_{k+1}-\ell_k)\\
&\ge \sum_{k=1}^K\frac{1}{2^k}\cdot\frac{B\ln2}{n}\Big/\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{k+1}p)}{n\sigma^2}\Big)\Big) \ge \sum_{k=1}^K\frac{1}{(2a)^k}\cdot\frac{B\ln2}{n}\cdot\frac{1}{\ln(1+AB/\sigma^2)}\\
&\ge \frac{1}{2a}\cdot\frac{B\ln2}{n}\cdot\frac{1}{\ln(1+AB/\sigma^2)} = \frac{1}{2a}\cdot\frac{B\ln2}{n}f(A) \ge \frac{1}{2a}\cdot\frac{B\ln2}{n}\big(f(A)-f(B)\big). \tag{E.8}
\end{align*}

Case 2: $\ell_2>B$. In this case, we have
\[
\int_A^B p_{\mathrm{Benn}}(t)\,dt \ge \int_A^B\frac12\,dt = \frac{B-A}{2}.
\]
Further, note that
\[
f(A)-f(B) = \frac{1}{\ln(1+AB/\sigma^2)}-\frac{p_{\mathrm{Benn}}(B)}{\ln(1+B^2/\sigma^2)} \le \frac{1}{\ln(1+AB/\sigma^2)}-\frac{1}{2\ln(1+B^2/\sigma^2)} \le \frac{1}{\ln(1+AB/\sigma^2)}-\frac{1}{2\ln(1+\ell_2B/\sigma^2)} \le \frac{1}{\ln(1+AB/\sigma^2)}-\frac{1}{2a\ln(1+AB/\sigma^2)} = \Big(1-\frac{1}{2a}\Big)\frac{1}{\ln(1+AB/\sigma^2)}. \tag{E.9}
\]
Moreover, by continuity of $\Psi^{-1}$, for some $\alpha\in(0,1)$,
\begin{align}
B-A &= \frac{\sigma^2}{B}\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{1+\alpha}p)}{n\sigma^2}\Big)-\Psi^{-1}\Big(\frac{B^2\ln(2p)}{n\sigma^2}\Big)\Big) = \frac{B\ln2}{n}\Big/\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^{1+\beta}p)}{n\sigma^2}\Big)\Big) \tag{E.10}\\
&\ge \frac{B\ln2}{n}\Big/\Psi'\Big(\Psi^{-1}\Big(\frac{B^2\ln(2^2p)}{n\sigma^2}\Big)\Big) = \frac{B\ln2}{n}\cdot\frac{1}{\ln(1+\ell_2B/\sigma^2)} \ge \frac{B\ln2}{na}\cdot\frac{1}{\ln(1+AB/\sigma^2)}. \tag{E.11}
\end{align}
Note that
there exists $\beta\in(0,\alpha)$ satisfying (E.10) by Lagrange's mean value theorem. Also, from (E.9) and (E.11), we have
\[
\frac{B}{n}\big(f(A)-f(B)\big) \le \frac{2a-1}{2\ln2}(B-A).
\]
Consequently, for $B<\ell_2$ we must have
\[
\int_A^B p_{\mathrm{Benn}}(t)\,dt \ge \frac{B-A}{2} \ge \frac{\ln2}{2a-1}\cdot\frac{B}{n}\big(f(A)-f(B)\big). \tag{E.12}
\]
Combining the results (E.8) and (E.12) for the cases $B\ge\ell_2$ and $B<\ell_2$ respectively, we find that
\[
\int_A^B p_{\mathrm{Benn}}(t)\,dt \ge \frac{\ln2}{2a}\cdot\frac{B}{n}\big(f(A)-f(B)\big).
\]
Our analysis thus culminates in the following result:
\[
A+\frac{\ln2}{2a}\cdot\frac{B}{n}\big(f(A)-f(B)\big) \le \int_0^B p_{\mathrm{Benn}}(t)\,dt \le A+\frac{B}{n}\big(f(A)-f(B)\big),
\]
or in other words,
\[
\int_0^B p_{\mathrm{Benn}}(t)\,dt - A \asymp \frac{B}{n}\big(f(A)-f(B)\big).
\]

S.2.2 Proof of Theorem 3.2

By Bennett's inequality (Bennett, 1962), for independent random variables $X_1,X_2,\dots,X_n$ with $|X_i|\le M_i$ and $\operatorname{Var}(X_i)=\sigma_i^2$, we have
\[
\mathbb{P}(S_n\ge t) \le \exp\Big(-\frac{\sigma^2}{M^2}\Psi\Big(\frac{tM}{\sigma^2}\Big)\Big), \tag{E.13}
\]
where $S_n=\sum_{i=1}^nX_i$, $M=\max_{1\le i\le n}M_i$ and $\sigma^2=\sum_{i=1}^n\sigma_i^2$. Replacing the $X_i$'s by $-X_i$'s, we get the inequality
\[
\mathbb{P}(S_n\le -t) \le \exp\Big(-\frac{\sigma^2}{M^2}\Psi\Big(\frac{tM}{\sigma^2}\Big)\Big). \tag{E.14}
\]
Combining (E.13) and (E.14), we have
\[
\mathbb{P}(|S_n|\ge t) \le 2\exp\Big(-\frac{\sigma^2}{M^2}\Psi\Big(\frac{tM}{\sigma^2}\Big)\Big). \tag{E.15}
\]
Towards obtaining an upper bound on the tail probability of the $L_\infty$ norm of $\overline{X}=\frac{1}{n}\sum_{i=1}^nX_i$, where the $X_i$'s are independent $p$-variate random vectors such that, for every $i\in\{1,2,\dots,n\}$, $|X_{i,j}|\le B$ for $j=1,2,\dots,p$ (with $X_i=(X_{i,1},X_{i,2},\dots,X_{i,p})$) and $\operatorname{Var}(X_{i,j})\le\sigma^2$, we first define $S_n=\sum_{i=1}^nX_i$ and $S_{n,j}=\sum_{i=1}^nX_{i,j}$ and observe the following:
\[
\mathbb{P}\big(\|\overline{X}\|_\infty\ge t\big) = \mathbb{P}\big(\|S_n\|_\infty\ge nt\big) = \mathbb{P}\Big(\bigcup_{j=1}^p\big\{|S_{n,j}|\ge nt\big\}\Big).
\]
Now, applying the union bound and using (E.15),
\[
\mathbb{P}\big(\|\overline{X}\|_\infty\ge t\big) \le \sum_{j=1}^p\mathbb{P}\big(|S_{n,j}|\ge nt\big) \le 2p\exp\Big(-\frac{n\sigma^2}{B^2}\Psi\Big(\frac{tB}{\sigma^2}\Big)\Big) = q_{\mathrm{Benn}}(t). \tag{E.16}
\]
It is, however, immediate that the bound in (E.16) can be improved to $p_{\mathrm{Benn}}(t)$, and, as we saw in the proof of Theorem 3.1,
\[
\int_0^B p_{\mathrm{Benn}}(t)\,dt \le A+\frac{B}{n}\big(f(A)-f(B)\big).
\]
Consequently,
\[
\mathbb{E}\big[\|W\|_\infty\big] = \int_0^B\mathbb{P}\big(\|W\|_\infty\ge t\big)\,dt \le \int_0^B p_{\mathrm{Benn}}(t)\,dt \le A+\frac{B}{n}\big(f(A)-f(B)\big).
\]
This proves the first part of the theorem.

(a) In this part, we are required to elicit lower bounds on the expected $L_\infty$ norm when the condition $n\sigma^2/(\sigma^2+B^2)\ge 1/(2p)$ holds. We split our analysis into several cases, the first of which is $B^2\ln(2p)\le en\sigma^2$. To handle this case, we make use of the following bound on the inverse of $\Psi(x)$, obtained by invoking Lemma S.4.8:
\[
\sqrt{2x}\le\Psi^{-1}(x)\le 2\sqrt{2x} \quad\text{for } 0\le x\le e.
\]
In our case we have $B^2\ln(2p)/(n\sigma^2)\le e$, and appealing to Lemma S.4.8,
\[
\sigma\sqrt{\frac{2\ln(2p)}{n}} \le \frac{\sigma^2}{B}\Psi^{-1}\Big(\frac{B^2\ln(2p)}{n\sigma^2}\Big) \le 2\sigma\sqrt{\frac{2\ln(2p)}{n}} \ \Rightarrow\ \sigma\sqrt{\frac{2\ln(2p)}{n}}\le A\le 2\sigma\sqrt{\frac{2\ln(2p)}{n}}. \tag{E.17}
\]
Further,
\[
\frac{B}{n}\big(f(A)-f(B)\big) \le \frac{B}{n}f(A) = \frac{B}{n}\cdot\frac{1}{\ln(1+AB/\sigma^2)} \le \frac{B}{n}\cdot\frac{1}{\ln\big(1+B\sqrt{\ln(2p)}/(\sigma\sqrt n)\big)}, \tag{E.18}
\]
where the last inequality follows from Lemma S.4.9. Combining (E.18) with Lemma S.4.10, we get
\[
\frac{B}{n}\big(f(A)-f(B)\big) \le \frac{3B}{n}\cdot\frac{\sigma\sqrt n}{B\sqrt{\ln(2p)}} = \frac{3\sigma}{\sqrt n}\cdot\frac{1}{\sqrt{\ln(2p)}}, \tag{E.19}
\]
and (E.17), Lemma S.4.11 and (E.19) together yield
\[
A+\frac{B}{n}\big(f(A)-f(B)\big) \le \frac{\sigma}{\sqrt n}\sqrt{2\ln(2p)}\bigg(2+\frac{3}{\sqrt{\ln(2p)}}\bigg) \le b\,\frac{\sigma}{\sqrt n}\sqrt{2\ln(2p)} \le 2b\,\frac{\sigma}{\sqrt n}\sqrt{2\ln\bigg(\frac{2p}{\sqrt{\ln(2p)}}\bigg)},
\]
where $b=2+3/\sqrt{\ln2}$.
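The union-bound Bennett tail $q_{\mathrm{Benn}}$ of (E.16) can be spot-checked by simulation. A standalone sketch using symmetric $\pm B$ coordinates, for which $\sigma^2=B^2$; all parameter values here are illustrative test choices, not from the paper:

```python
import math
import random

def q_benn(t, n, p, sigma2, B):
    """The union-bound Bennett tail bound q_Benn(t) from (E.16)."""
    psi = lambda x: (1 + x) * math.log(1 + x) - x
    return min(1.0, 2 * p * math.exp(-(n * sigma2 / B**2) * psi(t * B / sigma2)))

random.seed(0)
n, p, B, reps, t = 50, 5, 1.0, 5000, 0.4
hits = 0
for _ in range(reps):
    # ||Xbar||_inf for one sample of n iid vectors with +-B coordinates
    maxcoord = max(
        abs(sum(random.choice((-B, B)) for _ in range(n)) / n)
        for _ in range(p)
    )
    if maxcoord >= t:
        hits += 1
# The empirical tail should sit (well) below the Bennett bound.
assert hits / reps <= q_benn(t, n, p, B**2, B)
```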
We now choose $t=\sigma\sqrt{\ln\big(2p/\sqrt{\ln(2p)}\big)}\big/\sqrt{en}$, and by Lemma S.4.12 we observe that
\[
\mathbb{P}\Big(\Big|X-\frac{n\sigma^2}{\sigma^2+B^2}\Big|\ge\frac{nBt}{\sigma^2+B^2}\Big) \ge \mathbb{P}\Big(X-\frac{n\sigma^2}{\sigma^2+B^2}\ge\frac{nBt}{\sigma^2+B^2}\Big) \ge \frac{\tau}{\gamma}\,\Phi\Big(-\frac{t\sqrt{2n}}{\sigma}\Big).
\]
Observing that
\[
\frac{\ln\big(2p/\sqrt{\ln(2p)}\big)}{2e} \le \frac{\ln(2p)}{e} \le \ln\big(2p/\sqrt{\ln(2p)}\big),
\]
and from Eq. (E.34), it follows that
\begin{align*}
\Phi\Big(-\frac{t\sqrt{2n}}{\sigma}\Big) &= \Phi\Big(-\sqrt{2\ln\big(2p/\sqrt{\ln(2p)}\big)}\Big) \ge \frac{1}{1+\sqrt{2\ln\big(2p/\sqrt{\ln(2p)}\big)}}\cdot\frac{1}{\sqrt{2\pi}}\cdot\frac{\sqrt{\ln(2p)}}{2p}\\
&\ge \frac{1}{2\sqrt{2\ln\big(2p/\sqrt{\ln(2p)}\big)}}\cdot\frac{1}{\sqrt{2\pi}}\cdot\frac{\sqrt{\ln(2p)}}{2p} = \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{\sqrt{2-\ln(\ln(2p))/\ln(2p)}}\cdot\frac{1}{4p} \ge \frac{1}{24p}.
\end{align*}
Setting $x=\tau/(\tau+24\gamma)$ and using Lemma S.4.13,
\[
\Big(\frac{24\gamma}{\tau+24\gamma}\Big)^{1/p} \ge 1-\frac{\tau}{24p\gamma}.
\]
Consequently,
\[
\mathbb{P}\big(\|W\|_\infty\ge t\big) = \mathbb{P}\Big(\max_{1\le j\le p}|W_j|\ge t\Big) = 1-\big(1-\mathbb{P}(|W_j|\ge t)\big)^p \ge \frac{\tau}{\tau+24\gamma}.
\]
Finally, by Markov's inequality,
\[
\mathbb{E}[\|W\|_\infty] \ge t\,\mathbb{P}\big(\|W\|_\infty\ge t\big) = \frac{t\tau}{\tau+24\gamma} \ge \frac{\tau}{4b\sqrt e\,(\tau+24\gamma)}\Big(A+\frac{B}{n}\big(f(A)-f(B)\big)\Big).
\]
This settles our analysis for the case $B^2\ln(2p)\le en\sigma^2$. The other case, $B^2\ln(2p)>en\sigma^2$, is considerably more involved, which is why we divide this broad case into multiple
parts. In the first part, alongside $B^2\ln(2p)>en\sigma^2$, we have $t^*B\ge 4(\sigma^2+B^2)/(5n)$, where
\[
t^* = \frac{4\sigma^2}{9B}\bigg(\frac{(1+B^2/\sigma^2)(4\ln(2p)/5n)}{W\big((1+B^2/\sigma^2)(4\ln(2p)/5n)\big)}-1\bigg).
\]
Now, plugging $x=B^2/\sigma^2$ and $y=\ln(2p)/n$ into Lemma S.4.14, we get
\[
\Psi^{-1}\Big(\frac{B^2\ln(2p)}{n\sigma^2}\Big) \le \frac{e}{e-1}\cdot\frac{B^2\ln(2p)/(n\sigma^2)-1}{\ln\big(1+(B^2\ln(2p)/(n\sigma^2)-1)/e\big)} \le \frac{e^2}{e-1}\cdot\frac{(1+B^2/\sigma^2)(\ln(2p)/n)}{\ln\big((1+B^2/\sigma^2)(\ln(2p)/n)\big)}.
\]
From Lemma S.4.4 and Lemma S.4.5, we have
\[
\Psi'\big(\Psi^{-1}(x)\big) \ge \frac{1}{\sqrt2}\ln(1+x),
\]
from which it follows that
\[
\frac{B^2}{n\sigma^2}\big(f(A)-f(B)\big) \le \frac{B^2}{n\sigma^2}\cdot\frac{1}{\ln(1+AB/\sigma^2)} \le \frac{B^2}{n\sigma^2}\cdot\frac{\sqrt2}{\ln\big(1+B^2\ln(2p)/(n\sigma^2)\big)}.
\]
Consider first the case $\ln(2p)\le n$. In this case,
\[
\frac{B^2}{n\sigma^2}\cdot\frac{\sqrt2}{\ln\big(1+B^2\ln(2p)/(n\sigma^2)\big)} \le \frac{B^2\ln(2p)}{n\sigma^2}\cdot\frac{\sqrt2/\ln2}{\ln\big((1+B^2/\sigma^2)(\ln(2p)/n)\big)} \le \frac{\sqrt2}{\ln2}\cdot\frac{(1+B^2/\sigma^2)(\ln(2p)/n)}{\ln\big((1+B^2/\sigma^2)(\ln(2p)/n)\big)}.
\]
As for the other case, i.e. $\ln(2p)>n$, we have
\[
\frac{B^2}{n\sigma^2}\cdot\frac{\sqrt2}{\ln\big(1+B^2\ln(2p)/(n\sigma^2)\big)} \le \frac{B^2\ln(2p)}{n\sigma^2}\cdot\frac{\sqrt2}{\ln\big((1+B^2/\sigma^2)\ln(2p)\big)}. \tag{E.20}
\]
Further,
\[
\frac{B^2\ln(2p)}{n\sigma^2}\cdot\frac{\sqrt2}{\ln\big((1+B^2/\sigma^2)\ln(2p)\big)} \le \sqrt2\cdot\frac{(1+B^2/\sigma^2)(\ln(2p)/n)}{\ln\big((1+B^2/\sigma^2)(\ln(2p)/n)\big)}. \tag{E.21}
\]
Inequality (E.21) holds because
\[
\frac{\ln(2p)}{n} \le \ln(2p) \le (2p)^{1-1/n},
\]
which means that
\[
\ln(1+B^2/\sigma^2)+\ln\big(\ln(2p)/n\big) \le \frac{\ln(2p)}{n}\ln(1+B^2/\sigma^2)+\Big(1-\frac1n\Big)\ln(2p)\ln\big(1+B^2/\sigma^2\big),
\]
with the last inequality holding because $1=\ln(e)\le 1+B^2/\sigma^2$. Combining (E.20) and (E.21), we get
\[
\frac{B}{\sigma^2}\Big(A+\frac{B}{n}\big(f(A)-f(B)\big)\Big) \le \Big(\frac{e^2}{e-1}+\frac{\sqrt2}{\ln2}+\frac1e\Big)\frac{(1+B^2/\sigma^2)(\ln(2p)/n)}{\ln\big((1+B^2/\sigma^2)(\ln(2p)/n)\big)}-1 \le 9\,\frac{(1+B^2/\sigma^2)(4\ln(2p)/5n)}{\ln\big((1+B^2/\sigma^2)(4\ln(2p)/5n)\big)}-1. \tag{E.22}
\]
Note that
\[
\frac{x}{\ln x}\ge e \implies \frac{x}{\ln x}\ge\frac{67}{25} \iff \frac{25x}{\ln x}\ge 67 \iff \frac{34x}{\ln x}-68 \ge \frac{9x}{\ln x}-1. \tag{E.23}
\]
In addition, for $x\ge 4e/5$,
\[
W(x)\le\ln(1+x)\le 2\ln x.
\]
Combining the above with (E.22) and (E.23), we get
\[
\frac{B}{\sigma^2}\Big(A+\frac{B}{n}\big(f(A)-f(B)\big)\Big) \le 68\bigg(\frac{(1+B^2/\sigma^2)(4\ln(2p)/5n)}{W\big((1+B^2/\sigma^2)(4\ln(2p)/5n)\big)}-1\bigg).
\]
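The inequality $W(x)\le\ln(1+x)\le 2\ln x$ for $x\ge 4e/5$ can be verified directly: $W(x)\le\ln(1+x)$ is equivalent to $x\le(1+x)\ln(1+x)$, i.e. $\Psi(x)\ge0$. A standalone sketch, with a bisection-based Lambert $W$ as an illustrative stand-in (not the paper's code):

```python
import math

def lambert_w(y: float) -> float:
    """Principal branch of W(y) for y >= 0, via bisection on w * e^w = y."""
    lo, hi = 0.0, 1.0
    while hi * math.exp(hi) < y:
        hi *= 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.exp(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x0 = 4 * math.e / 5
for k in range(400):
    x = x0 + 0.05 * k
    assert lambert_w(x) <= math.log1p(x) + 1e-9      # W(x) <= ln(1+x)
    assert math.log1p(x) <= 2 * math.log(x) + 1e-12  # ln(1+x) <= 2 ln(x)
```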
By Zubkov and Serov (2013),
\[
\mathbb{P}\Big(\Big|X-\frac{n\sigma^2}{\sigma^2+B^2}\Big|\ge\frac{nBt^*}{\sigma^2+B^2}\Big) \ge \mathbb{P}\Big(X\ge\frac{n(\sigma^2+Bt^*)}{\sigma^2+B^2}\Big) \ge \Phi\big(-b(t^*)\big),
\]
where $X\sim\mathrm{Bin}\big(n,\sigma^2/(\sigma^2+B^2)\big)$ and
\[
b(t^*) = \sqrt{\frac{2n\sigma^2}{\sigma^2+B^2}\Big(1+\frac{t^*B+(\sigma^2+B^2)/n}{\sigma^2}\Big)\ln\Big(1+\frac{t^*B+(\sigma^2+B^2)/n}{\sigma^2}\Big)}.
\]
Since it is given that $t^*B\ge 4(\sigma^2+B^2)/(5n)$, we must have
\[
b(t^*) \le \sqrt{\frac{2n\sigma^2}{\sigma^2+B^2}\Big(1+\frac{9t^*B}{4\sigma^2}\Big)\ln\Big(1+\frac{9t^*B}{4\sigma^2}\Big)}.
\]
Moreover,
\[
\sqrt{\frac{2n\sigma^2}{\sigma^2+B^2}\Big(1+\frac{9t^*B}{4\sigma^2}\Big)\ln\Big(1+\frac{9t^*B}{4\sigma^2}\Big)} = \sqrt{\frac{2n\sigma^2}{\sigma^2+B^2}\Big(1+\frac{B^2}{\sigma^2}\Big)\frac{4\ln(2p)}{5n}} = \sqrt{\frac{8\ln(2p)}{5}} \le \sqrt{2\ln\bigg(\frac{2p}{\sqrt{\ln(2p)}}\bigg)}.
\]
As shown in part (a), we must have $\Phi(-b(t^*))\ge\frac{1}{24p}$. Consequently, $\mathbb{P}(\|W\|_\infty\ge t^*)\ge\frac{1}{25}$. Finally, by Markov's inequality,
\[
\mathbb{E}[\|W\|_\infty] \ge t^*\,\mathbb{P}\big(\|W\|_\infty\ge t^*\big) \ge \frac{t^*}{25} \ge \frac{1}{3825}\Big(A+\frac{B}{n}\big(f(A)-f(B)\big)\Big).
\]
For the case where $B^2\ln(2p)>en\sigma^2$ and $t^*B<4(\sigma^2+B^2)/(5n)$, note first that these two assumptions imply that
\[
\frac49\bigg(\frac{(1+B^2/\sigma^2)(4\ln(2p)/5n)}{W\big((1+B^2/\sigma^2)(4\ln(2p)/5n)\big)}-1\bigg) < \frac{4(\sigma^2+B^2)}{5n\sigma^2}
\]
\[
\implies \Big(1+\frac{B^2}{\sigma^2}\Big)\frac{4\ln(2p)}{5n} < \Big(1+\frac{9(\sigma^2+B^2)}{5n\sigma^2}\Big)\ln\Big(1+\frac{9(\sigma^2+B^2)}{5n\sigma^2}\Big)
\]
\[
\implies \Big(1+\frac{B^2}{\sigma^2}\Big)\frac{4\ln(2p)}{5n} <
\begin{cases}
\dfrac{9(\sigma^2+B^2)}{5n\sigma^2}\cdot 4\ln 10, & \text{if } \dfrac{\sigma^2+B^2}{n\sigma^2}\le 5,\\[2ex]
\dfrac{18(\sigma^2+B^2)}{\sigma^2}\ln\Big(\dfrac{18(\sigma^2+B^2)}{\sigma^2}\Big), & \text{if } \dfrac{\sigma^2+B^2}{n\sigma^2}>5,
\end{cases} \tag{E.24}
\]
where (E.24) follows from the fact that
\[
\frac{\sigma^2+B^2}{n\sigma^2} \ge \frac59\inf_{x>1}\Big(\frac{x}{2\ln x}-1\Big) \ge \frac59\Big(\frac e2-1\Big).
\]
We first consider the case where $(\sigma^2+B^2)/(n\sigma^2)\le5$. In this case, we have
\[
\Big(1+\frac{B^2}{\sigma^2}\Big)\frac{4\ln(2p)}{5n} \le \frac{8\ln10\,(1+B^2/\sigma^2)}{n} \le 8\Big(1+\frac{B^2}{\sigma^2}\Big)\frac{\ln10}{\ln2}\cdot\frac{\ln(2p)}{n}.
\]
This in turn implies that $\ln2\le\ln(2p)\le 10\ln10$.
Let $\gamma=10\ln10/\ln2$. Then
\begin{align*}
\Phi\big(-b(t^*)\big) &\ge \Phi\bigg(-\sqrt{\frac{2n\gamma\sigma^2}{\sigma^2+B^2}\Big(1+\frac{B^2}{\sigma^2}\Big)\frac{\ln(2p)}{n}}\bigg) = \Phi\big(-\sqrt{2\gamma\ln(2p)}\big)\\
&\ge \frac{1}{1+\sqrt{2\gamma\ln(2p)}}\exp\big(-\gamma\ln(2p)\big) \ge \frac{1}{2\sqrt{2\gamma\ln(2p)}}\Big(\frac{1}{2p}\Big)^\gamma \ge \frac12\cdot\frac{1}{\gamma\sqrt{2\ln2}}\Big(\frac{1}{10^{10}}\Big)^{\gamma-1}\frac1p = \frac{\varepsilon}{p}. \tag{E.25}
\end{align*}
From (E.25) and Lemma S.4.8, we get
\[
\mathbb{P}\big(\|W\|_\infty\ge t^*\big) \ge \frac{\varepsilon}{1+\varepsilon}.
\]
Now, as shown in the proof of part (b), $t^*$ is lower bounded by a constant multiple of the Bennett bound, and hence by Markov's inequality the result follows.

We are now only left with the case that $n\sigma^2/(\sigma^2+B^2)<1/5$ and $nBt^*/(\sigma^2+B^2)<4/5$. Together they imply that $n(\sigma^2+Bt^*)/(\sigma^2+B^2)<1$. For notational simplicity, in the computations below we denote the quantity $\sigma^2/(\sigma^2+B^2)$ by $p_{\sigma,B}$. Note that
\[
\mathbb{P}\big(|W_j|\ge t^*\big) = \mathbb{P}\Big(X\le\frac{n\sigma^2}{\sigma^2+B^2}-\frac{nBt^*}{\sigma^2+B^2}\Big)+\mathbb{P}\Big(X\ge\frac{n\sigma^2}{\sigma^2+B^2}+\frac{nBt^*}{\sigma^2+B^2}\Big),
\]
where $X\sim\mathrm{Bin}(n,p_{\sigma,B})$. When $t^*\le\sigma^2/B$, the above probability becomes $\mathbb{P}(X\le0)+\mathbb{P}(X\ge1)=1$. By Markov's inequality,
\[
\mathbb{E}\Big[\max_{1\le j\le p}|W_j|\Big] \ge t^*\,\mathbb{P}\Big(\max_{1\le j\le p}|W_j|\ge t^*\Big) = t^*\Big(1-\big(1-\mathbb{P}(|W_j|\ge t^*)\big)^p\Big) = t^* \ge \frac{1}{153}\Big(A+\frac{B}{n}\big(f(A)-f(B)\big)\Big).
\]
We now turn our attention to the case where $t^*>\sigma^2/B$. We are given that $np_{\sigma,B}\ge1/(2p)$. When $t^*>\sigma^2/B$, we can immediately say that
\[
\mathbb{P}\big(|W_j|\ge t^*\big) = \mathbb{P}(X\ge1) = 1-(1-p_{\sigma,B})^n.
\]
As a consequence,
\[
\mathbb{P}\Big(\max_{1\le j\le p}|W_j|\ge t^*\Big) = 1-(1-p_{\sigma,B})^{np} \ge 1-\Big(1-\frac{1}{2np}\Big)^{np} \ge 1-e^{-1/2}.
\]
Hence, by Markov's inequality,
\[
\mathbb{E}[\|W\|_\infty] \ge t^*\,\mathbb{P}\Big(\max_{1\le j\le p}|W_j|\ge t^*\Big) \ge \frac{1-e^{-1/2}}{153}\Big(A+\frac{B}{n}\big(f(A)-f(B)\big)\Big).
\]
Throughout this proof, we have implicitly assumed that $A\le B$. If it is the other way around, i.e. $A>B$, then $A+(B/n)(f(A)-f(B))\gtrsim B$. This time around, however, the term $(B/n)(f(A)-f(B))$ is negative, but it is of lower order compared to $A$. Further, we have not used the fact that $f$ is decreasing anywhere in the proof: we have lower bounded our quantity of interest by constant multiples of $A+Bf(A)/n$, and lower bounded that by $A+(B/n)(f(A)-f(B))$. In addition, by Proposition 3.1, $A>B$ corresponds to the case $n\sigma^2/(\sigma^2+B^2)\ge1/(2p)$. Thus, the lower bounds computed in this proof not only apply to the case that $A>B$, but they also imply that
\[
\mathbb{E}\big[\|W\|_\infty\big] \asymp B.
\]

(b) We now work with the assumption $n\sigma^2/(\sigma^2+B^2)<1/(2p)$. Consequently, observing that
\[
W_j = \frac{\sigma^2+B^2}{nB}\widetilde{W}_j-\frac{\sigma^2}{B}, \quad\text{where } \widetilde{W}_j\sim\mathrm{Bin}(n,p_{\sigma,B}),
\]
and noting that $n\sigma^2/(\sigma^2+B^2)<1/2$, we can easily see that $W_j$ can take only one negative value, namely $-\sigma^2/B$, and that value is in fact smaller in absolute value than the smallest positive value $\big((\sigma^2+B^2)/(nB)-\sigma^2/B\big)$ that $W_j$ can take.
Hence, we may write the following:
\begin{align*}
\mathbb{E}\Big[\max_{1\le j\le p}|W_j|\Big] &= \mathbb{E}\Big[\Big|\max_{1\le j\le p}W_j\Big|\Big] = \mathbb{E}\Big[\Big(\max_{1\le j\le p}W_j\Big)^+\Big]+\mathbb{E}\Big[\Big(\max_{1\le j\le p}W_j\Big)^-\Big]\\
&= \mathbb{E}\Big[\Big(\max_{1\le j\le p}W_j\Big)^+\Big]-\mathbb{E}\Big[\Big(\max_{1\le j\le p}W_j\Big)^-\Big]+2\,\mathbb{E}\Big[\Big(\max_{1\le j\le p}W_j\Big)^-\Big]\\
&= \mathbb{E}\Big[\max_{1\le j\le p}W_j\Big]+\frac{2\sigma^2}{B}\Big(\frac{B^2}{\sigma^2+B^2}\Big)^{np} = \frac{\sigma^2+B^2}{nB}\,\mathbb{E}\Big[\max_{1\le j\le p}\widetilde{W}_j\Big]-\frac{\sigma^2}{B}+\frac{2\sigma^2}{B}\Big(\frac{B^2}{\sigma^2+B^2}\Big)^{np}.
\end{align*}
Let $U=\max_{1\le j\le p}\widetilde{W}_j$, where the $\widetilde{W}_j$ are iid binomial random variables with parameters $n$ and $p_{\sigma,B}$. Then we know that
\[
\mathbb{E}[U] = \sum_{j=1}^{n}\mathbb{P}(U\ge j) = \sum_{j=1}^{n}\big[1-\mathbb{P}(\widetilde{W}_1\le j-1)^p\big] = \sum_{j=1}^{n}\big[1-\big(1-\mathbb{P}(\widetilde{W}_1\ge j)\big)^p\big].
\]
Along with
\[
\mathbb{E}[U] \le \mathbb{E}\Big[\sum_{j=1}^{p}\widetilde{W}_j\Big] = p\,\mathbb{E}[\widetilde{W}_1] = npp_{\sigma,B},
\]
appealing to Lemma S.4.6, we have
\begin{align*}
\mathbb{E}[U] = \sum_{j=1}^{n}\big[1-\mathbb{P}(\widetilde{W}_1\le j-1)^p\big] &\ge 1-\mathbb{P}(\widetilde{W}_1=0)^p = 1-(1-p_{\sigma,B})^{np} \ge npp_{\sigma,B}-\binom{np}{2}p_{\sigma,B}^2\\
&= npp_{\sigma,B}\Big(1-\frac{(np-1)p_{\sigma,B}}{2}\Big) \ge npp_{\sigma,B}\Big(1-\frac{npp_{\sigma,B}}{2}\Big) > npp_{\sigma,B}\Big(1-\frac14\Big) = \frac34\,npp_{\sigma,B}.
\end{align*}
Combining both inequalities above, we get
\[
1-\frac{(np-1)p_{\sigma,B}}{2} \le \frac{\mathbb{E}[U]}{npp_{\sigma,B}} \le 1,
\]
and as $npp_{\sigma,B}=o(1)$, this implies that $\mathbb{E}[U]\sim npp_{\sigma,B}$. Note that in this case $f\sim g$ means that the ratio of $f$ and $g$ converges to 1.
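The sandwich $\frac34\,npp_{\sigma,B}\le\mathbb{E}[U]\le npp_{\sigma,B}$ (valid once $npp_{\sigma,B}\le1/2$) can be confirmed with an exact computation of $\mathbb{E}[U]=\sum_{j\ge1}\mathbb{P}(U\ge j)$; the parameter values below are illustrative:

```python
import math

def binom_cdf(k, n, p):
    """P(Bin(n, p) <= k), computed exactly from the pmf."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(0, k + 1))

def e_max_binom(n, p_success, p_copies):
    """E[max of p_copies iid Bin(n, p_success)] via E[U] = sum_j P(U >= j)."""
    return sum(1 - binom_cdf(j - 1, n, p_success) ** p_copies
               for j in range(1, n + 1))

# With n = 10, p_sb = 0.01, p = 4 we have n * p * p_sb = 0.4 < 1/2,
# so the proof's sandwich should hold.
n, p_sb, p = 10, 0.01, 4
eu = e_max_binom(n, p_sb, p)
assert 0.75 * n * p * p_sb <= eu <= n * p * p_sb
```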
To be precise,
\[
\mathbb{E}\Big[\max_{1\le j\le p}|W_j|\Big] \le \frac{(p+1)\sigma^2}{B}
\]
and
\[
\mathbb{E}\Big[\max_{1\le j\le p}|W_j|\Big] \ge \frac{\sigma^2}{B}\bigg[\frac{3p}{4}-1+2\Big(\frac{B^2}{B^2+\sigma^2}\Big)^{np}\bigg] \implies \mathbb{E}\Big[\max_{1\le j\le p}|W_j|\Big] \ge \frac{\sigma^2}{B}\bigg[\frac{3p}{4}-1+2\Big(1-\frac{np\sigma^2}{\sigma^2+B^2}\Big)\bigg] \implies \mathbb{E}\Big[\max_{1\le j\le p}|W_j|\Big] \ge \frac{3p\sigma^2}{4B}.
\]
These findings immediately lead us to the desired conclusion that
\[
\mathbb{E}\big[\|W\|_\infty\big] \asymp \frac{p\sigma^2}{B}
\]
when $npp_{\sigma,B}<1/2$.

S.2.3 Proof of Theorem 3.3

Consider independent random vectors $Y_1,Y_2,\dots,Y_n$ such that $Y_i(1),Y_i(2),\dots,Y_i(p)$ are jointly independent for all $i=1,2,\dots,n$, with $\max_{1\le j\le p}|Y_i(j)|\le B$ and $\max_{1\le j\le p}\operatorname{V}[Y_i(j)]\le\sigma^2$. As the random variables $|\overline{Y}(j)|$ are independent, we have
\[
\mathbb{E}\big[\|\overline{Y}\|_\infty\big] = \mathbb{E}\Big[\max_{1\le j\le p}|\overline{Y}(j)|\Big] = \int_0^B\mathbb{P}\big(\|\overline{Y}\|_\infty\ge t\big)\,dt = \int_0^B\mathbb{P}\Big(\max_{1\le j\le p}|\overline{Y}(j)|\ge t\Big)\,dt = \int_0^B 1-\prod_{j=1}^{p}\Big(1-\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big)\,dt.
\]
Further, we have
\[
\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)>1 \implies \prod_{j=1}^{p}\exp\Big(-\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big)<\frac1e \implies \prod_{j=1}^{p}\Big(1-\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big)<\frac1e \implies 1-\prod_{j=1}^{p}\Big(1-\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big)>1-\frac1e.
\]
As in the proof of Theorem 3.1, we define the sequence $\{\ell_k\}_{k\ge1}$ as follows:
\[
\ell_k = \inf\Big\{0\le x\le B:\ \sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge x\big)\le\frac{1}{2^{k-1}}\Big\}.
\]
Combining this with Lemma S.4.7, we get
\[
\int_0^{\ell_1}\Big(1-\frac1e\Big)dt \le \int_0^{\ell_1}1-\prod_{j=1}^{p}\Big(1-\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big)\,dt \le \int_0^{\ell_1}\min\Big\{1,\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big\}\,dt \le \int_0^{\ell_1}dt. \tag{E.26}
\]
We also define $K=\sup\{k\ge0:\ell_{k+1}\le B\}$. Note that $K$ may be finite or infinite. We first take up the case where $K$ is finite. We know that for $t$ between $\ell_k$ and $\ell_{k+1}$,
\[
\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big) \in \Big[\frac{1}{2^k},\frac{1}{2^{k-1}}\Big].
\]
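The first implication chain above — if $\sum_j p_j>1$ then $\prod_j(1-p_j)\le\prod_j e^{-p_j}<1/e$, and hence $1-\prod_j(1-p_j)>1-1/e$ — admits a quick randomized spot-check:

```python
import math
import random

random.seed(1)
for _ in range(1000):
    m = random.randint(2, 20)
    probs = [random.random() for _ in range(m)]
    if sum(probs) > 1:
        prod = 1.0
        for pj in probs:
            prod *= (1 - pj)        # prod_j (1 - p_j) <= exp(-sum_j p_j) < 1/e
        assert 1 - prod > 1 - 1 / math.e - 1e-12
```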
Further, appealing to Lemma S.4.7, we find that
\begin{align*}
1-\prod_{j=1}^{p}\Big(1-\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big) &\ge \sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)-\sum_{i=1}^{p-1}\sum_{j>i}\mathbb{P}\big(|\overline{Y}(i)|\ge t\big)\,\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\\
&= \sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)-\frac12\Big(\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big)^2+\frac12\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)^2\\
&\ge \sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big(1-\frac12\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big)\\
&\ge \sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big(1-\frac{1}{2^k}\Big) \ge \frac12\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big). \tag{E.27}
\end{align*}
Since
\[
\mathbb{E}\Big[\max_{1\le j\le p}|\overline{Y}(j)|\Big] = \int_0^B F(t)\,dt = \int_0^{\ell_1}F(t)\,dt+\sum_{k=1}^{K}\int_{\ell_k}^{\ell_{k+1}}F(t)\,dt+\int_{\ell_{K+1}}^{B}F(t)\,dt,
\]
where
\[
F(t) = 1-\prod_{j=1}^{p}\Big(1-\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big),
\]
it follows from (E.26) and (E.27) that
\[
\frac12\,\mathcal{E}_1 \le \mathbb{E}\Big[\max_{1\le j\le p}|\overline{Y}(j)|\Big] \le \mathcal{E}_1,
\]
where
\[
\mathcal{E}_1 = \int_0^{\ell_1}dt+\sum_{k=1}^{K}\int_{\ell_k}^{\ell_{k+1}}\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\,dt+\int_{\ell_{K+1}}^{B}\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\,dt.
\]
On the other hand, when $K$ is infinite, by the exact same arguments we find that
\[
\frac12\,\mathcal{E}_2 \le \mathbb{E}\Big[\max_{1\le j\le p}|\overline{Y}(j)|\Big] \le \mathcal{E}_2, \quad\text{where}\quad \mathcal{E}_2 = \int_0^{\ell_1}dt+\sum_{k=1}^{\infty}\int_{\ell_k}^{\ell_{k+1}}\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\,dt.
\]
Therefore,
\[
\frac12\,\mathcal{E} \le \mathbb{E}\Big[\max_{1\le j\le p}|\overline{Y}(j)|\Big] \le \mathcal{E}, \quad\text{where}\quad \mathcal{E} = \int_0^B\min\Big\{1,\sum_{j=1}^{p}\mathbb{P}\big(|\overline{Y}(j)|\ge t\big)\Big\}\,dt.
\]
All that remains in this proof is the actual nature of the distribution functions of the variables $Y_i(j)$. For each $i$ and $j$, we choose $Y_i(j)\overset{d}{=}X_i(j)$; that is, we take the distribution of $Y_i(j)$ to be the marginal distribution of $X_i(j)$. Note that the $Y_i(j)$'s are chosen independent of each other.
Now,
\[
\mathbb{E}\big[\|X\|_{\infty}\big]=\mathbb{E}\Big[\max_{1\le j\le p}\big|X^{(j)}\big|\Big]
=\int_{0}^{B}\mathbb{P}\big(\|X\|_{\infty}\ge t\big)\,dt
=\int_{0}^{B}\mathbb{P}\Big(\max_{1\le j\le p}\big|X^{(j)}\big|\ge t\Big)\,dt
\]
\[
\le\int_{0}^{B}\min\Big(1,\sum_{j=1}^{p}\mathbb{P}\big(|X^{(j)}|\ge t\big)\Big)\,dt
=\int_{0}^{B}\min\Big(1,\sum_{j=1}^{p}\mathbb{P}\big(|Y^{(j)}|\ge t\big)\Big)\,dt
=E\le 2\,\mathbb{E}\Big[\max_{1\le j\le p}\big|Y^{(j)}\big|\Big]
=2\,\mathbb{E}\big[\|Y\|_{\infty}\big].
\]
This completes the proof.

S.3 Proofs of Results in Section 3

S.3.1 Proof of Theorem 4.1

Consider the random variables $X_i$ divided by $B$. Using Lemma S.4.15, $\ln\mathbb{E}\big[e^{tS_n}\big]$ (where $S_n=\sum_{i=1}^{n}X_i/B$) is bounded above by $n(\ell_0(t)+\ell_1(t)+\ell_2(t))$, where
\[
\ell_0(t)=\frac{vt^{2}}{2},\qquad
\ell_1(t)=\sum_{2<k<q}v^{(q-k)/(q-2)}\frac{t^{k}}{k!},\qquad
\ell_2(t)=\sum_{k\ge q}\Big(\frac{K}{B}\Big)^{k-q}\frac{t^{k}}{k!},
\]
and $v=\sigma^{2}/B^{2}$. Further, using Equation 2.4 of Rio (2017), we get
\[
\widetilde{Q}_{S_n/B}(1/z)\le T\big(n\ell^{*}(x)+n\ell^{**}(x)\big)\le T\big(n\ell^{*}(x)\big)+T\big(n\ell^{**}(x)\big),
\]
where $x=\log z$ and
\[
\ell^{*}(t)=\sum_{k=2}^{\infty}v^{(q-k)/(q-2)}\frac{t^{k}}{k!}
\quad\text{and}\quad
\ell^{**}(t)=\sum_{k=2}^{\infty}\Big(\frac{K}{B}\Big)^{k-q}\frac{t^{k}}{k!}.
\]
Therefore, we obtain
\[
\big(Tn\ell^{**}\big)(x)=\inf_{t>0}\frac{n\,(K/B)^{-q}\big(e^{Kt/B}-1-Kt/B\big)+x}{t}.
\]
This infimum is attained at the solution of the equation
\[
-\frac{n\,(K/B)^{-q}\big(e^{Kt/B}-1-Kt/B\big)}{t^{2}}
+\frac{n\,(K/B)^{1-q}\big(e^{Kt/B}-1\big)}{t}-\frac{x}{t^{2}}=0.
\]
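As a quick numerical sanity check, the stationarity condition above can be compared with the closed-form minimizer stated next; the parameter values $n$, $K$, $B$, $q$, $x$ below are illustrative, not from the paper, and $\Psi^{-1}$ is evaluated by bisection:

```python
import math

def Psi(u):
    return (1.0 + u) * math.log1p(u) - u

def Psi_inv(x):
    # Psi is increasing on [0, inf), so invert by bisection
    lo, hi = 0.0, 1e9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Psi(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative parameters (not from the paper)
n, K, B, q, x = 100.0, 3.0, 1.0, 4.0, 2.0
a = K / B

def objective(t):
    # (n * l**(t) + x) / t with l**(t) = (K/B)^{-q} (e^{Kt/B} - 1 - Kt/B)
    return (n * a ** (-q) * (math.exp(a * t) - 1.0 - a * t) + x) / t

y = Psi_inv(a ** q * x / n)
t_star = (B / K) * math.log1p(y)      # claimed minimizer
val_star = n * a ** (1.0 - q) * y     # claimed value of the infimum, cf. (E.28)

assert abs(objective(t_star) - val_star) < 1e-6 * val_star
# t_star is a genuine minimizer: nearby points do not do better
for t in [0.5 * t_star, 0.9 * t_star, 1.1 * t_star, 2.0 * t_star]:
    assert objective(t) >= val_star - 1e-9
```

Algebraically, substituting $e^{Kt^{*}/B}=1+y$ with $y=\Psi^{-1}\big((K/B)^{q}x/n\big)$ reduces the stationarity condition to $\Psi(y)=(K/B)^{q}x/n$, which is how the closed form arises.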
The solution to the equation comes out to be
\[
t^{*}=\frac{B}{K}\ln\left[1+\Psi^{-1}\left(\Big(\frac{K}{B}\Big)^{q}\frac{x}{n}\right)\right].
\]
Consequently, the quantity $(Tn\ell^{**})(x)$ turns out to be
\[
\big(Tn\ell^{**}\big)(x)=n\Big(\frac{K}{B}\Big)^{1-q}\Psi^{-1}\left(\Big(\frac{K}{B}\Big)^{q}\frac{x}{n}\right). \tag{E.28}
\]
We adopt a similar approach towards bounding the quantity $(Tn\ell^{*})(x)$. Note that $\ell^{*}(t)$ can be expressed as follows:
\[
\ell^{*}(t)=\sum_{k=2}^{\infty}v^{(q-k)/(q-2)}\frac{t^{k}}{k!}
=v^{q/(q-2)}\Big(e^{v^{-1/(q-2)}t}-1-v^{-1/(q-2)}t\Big).
\]
The technique for computing the value of $(Tn\ell^{**})(x)$ is reapplied to obtain
\[
\big(Tn\ell^{*}\big)(x)=n\,v^{(q-1)/(q-2)}\,\Psi^{-1}\left[v^{-q/(q-2)}\frac{\ln z}{n}\right]. \tag{E.29}
\]
Combining (E.28) and (E.29), we obtain the result
\[
\widetilde{Q}_{S_n}(1/z)\le
nB\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{(q-1)/(q-2)}\Psi^{-1}\left[\Big(\frac{B^{2}}{\sigma^{2}}\Big)^{q/(q-2)}\frac{\ln z}{n}\right]
+nB\Big(\frac{K}{B}\Big)^{1-q}\Psi^{-1}\left[\Big(\frac{K}{B}\Big)^{q}\frac{\ln z}{n}\right].
\]
Hence, by Proposition 2.4 of Rio (2017),
\[
\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}X_i\ge
B\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{(q-1)/(q-2)}\Psi^{-1}\left[\Big(\frac{B^{2}}{\sigma^{2}}\Big)^{q/(q-2)}\frac{\ln z}{n}\right]
+\frac{B^{q}}{K^{q-1}}\Psi^{-1}\left[\Big(\frac{K}{B}\Big)^{q}\frac{\ln z}{n}\right]\right)\le\frac{1}{z}.
\]

S.3.2 Proof of Theorem 4.2

Let $S_n=\sum_{i=1}^{n}X_i$. For any $t\ge 0$, we get
\[
\log\mathbb{E}[e^{tS_n}]=\sum_{i=1}^{n}\log\mathbb{E}[e^{tX_i}]
=\sum_{i=1}^{n}\log\left(1+\sum_{k=2}^{\infty}\frac{t^{k}\,\mathbb{E}[X_i^{k}]}{k!}\right)
\le\sum_{k=2}^{\infty}\frac{t^{k}}{k!}\sum_{i=1}^{n}\mathbb{E}[X_i^{k}]
\]
\[
\le n\sum_{2\le k<q}(\sigma^{2})^{(q-k)/(q-2)}(B^{q})^{(k-2)/(q-2)}\frac{t^{k}}{k!}
+n\sum_{k\ge q}K^{k-q}B^{q}\frac{t^{k}}{k!}
\]
\[
=n\sum_{k=2}^{\infty}(\sigma^{2})^{(q-k)/(q-2)}(B^{q})^{(k-2)/(q-2)}\frac{t^{k}}{k!}
+n\sum_{k\ge q}\Big(K^{k-q}B^{q}-(\sigma^{2})^{(q-k)/(q-2)}(B^{q})^{(k-2)/(q-2)}\Big)\frac{t^{k}}{k!}
=:\ell_1(t)+\ell_2(t).
\]
Observe that
\[
\ell_1(t)=n\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{q/(q-2)}\sum_{k=2}^{\infty}\frac{1}{k!}\left(t\Big(\frac{B^{q}}{\sigma^{2}}\Big)^{1/(q-2)}\right)^{k}
=n\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{q/(q-2)}\Big(e^{(B^{q}/\sigma^{2})^{1/(q-2)}t}-1-\Big(\frac{B^{q}}{\sigma^{2}}\Big)^{1/(q-2)}t\Big).
\]
Note that
\[
K^{k-q}B^{q}-(\sigma^{2})^{(q-k)/(q-2)}(B^{q})^{(k-2)/(q-2)}
=K^{k-q}B^{q}\left(1-\Big(\frac{B^{q}}{\sigma^{2}K^{q-2}}\Big)^{(k-q)/(q-2)}\right)
\le K^{k-q}B^{q}(k-q)R.
\]
Observe that if $R<0$, then $\ell_2(t)\le 0$ for all $t\ge 0$. Hence,
\[
\ell_2(t)\le n\sum_{k\ge q}K^{k-q}B^{q}(k-q)R\,\frac{t^{k}}{k!}
=nRB^{q}\sum_{k>q}(k-q)K^{k-q}\frac{t^{k}}{k!}
\le nRB^{q}\sum_{k=[q]+1}^{\infty}\frac{(k-[q])\,K^{k-q}\,t^{k}}{k!}
\]
\[
= nRB^{q}\sum_{k=[q]+1}^{\infty}\frac{K^{k-q}\,t^{k}}{(k-[q]-1)!\,[q]!\,\binom{k}{[q]}}
\le\frac{nRB^{q}\,t^{[q]+1}K^{[q]-q+1}}{[q]!}\sum_{k=[q]+1}^{\infty}\frac{(tK)^{k-[q]-1}}{(k-[q]-1)!}
=\frac{nRB^{q}\,t^{[q]+1}K^{[q]-q+1}}{[q]!}\,e^{tK}=:\bar\ell_2(t).
\]
Therefore, $\log\mathbb{E}[e^{tS_n}]\le\ell_1(t)+\bar\ell_2(t)$. From Eq. (2.4) of Rio (2017), we get that
https://arxiv.org/abs/2504.17885v1
\[
\widetilde{Q}_{S_n}(1/z)\le (T\ell_1)(\log z)+(T\bar\ell_2)(\log z).
\]
We now evaluate the quantities on the right-hand side. By Lemma S.4.19, we have
\[
(T\ell_1)(\log z)=n\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{q/(q-2)}\Big(\frac{B^{q}}{\sigma^{2}}\Big)^{1/(q-2)}
\Psi^{-1}\left(\frac{\log z}{n(\sigma^{2}/B^{2})^{q/(q-2)}}\right)
=nB\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{(q-1)/(q-2)}
\Psi^{-1}\left(\frac{\log z}{n(\sigma^{2}/B^{2})^{q/(q-2)}}\right),
\]
and by Lemma S.4.20, we have
\[
(T\bar\ell_2)(\log z)\le\frac{2K\log z}{[q]+1}
\left(W\left(\frac{K(\log z)^{1/([q]+1)}}{2([q]+1)\big(nRB^{q}K^{1-q+[q]}/[q]!\big)^{1/([q]+1)}}\right)\right)^{-1}
\le\frac{2K\log z}{q}
\left(W\left(\frac{(K/B)^{q/([q]+1)}(\log z)^{1/([q]+1)}}{10\,(nR)^{1/([q]+1)}}\right)\right)^{-1}, \tag{E.30}
\]
where the last inequality follows because $([q]!)^{1/([q]+1)}/([q]+1)\ge 0.2$ for all $q\ge 2$; see, for example, Batir (2008).

One can also use an alternative, simpler bound for $\ell_2(t)$:
\[
\ell_2(t)\le nB^{q}t^{[q]+1}\sum_{k=[q]+1}^{\infty}\frac{K^{k-q}\,t^{k-[q]-1}}{k!}
\le nB^{q}t^{[q]+1}K^{1-q+[q]}\sum_{k=[q]+1}^{\infty}\frac{(Kt)^{k-[q]-1}}{(k-[q]-1)!\,([q]+1)!\,\binom{k}{[q]+1}}
\le\frac{nB^{q}K^{1-q+[q]}\,t^{[q]+1}}{([q]+1)!}\,e^{tK}.
\]
Therefore,
\[
(T\ell_2)(\log z)\le\frac{2K\log z}{[q]+1}
\left(W\left(\frac{K(\log z)^{1/([q]+1)}}{2([q]+1)\big(nB^{q}K^{1-q+[q]}/([q]+1)!\big)^{1/([q]+1)}}\right)\right)^{-1}
\le\frac{2K\log z}{q}
\left(W\left(\frac{(K/B)^{q/([q]+1)}(\log z)^{1/([q]+1)}}{2e\,n^{1/([q]+1)}}\right)\right)^{-1}.
\]
Because $R\le 1$, this bound is weaker than the bound in (E.30).
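The numerical constant invoked in the last step of (E.30) can be verified directly; the range of integer parts $m=[q]$ below is an arbitrary test range, and $\ln m!$ is evaluated via the log-gamma function to avoid overflow:

```python
import math

# ([q]!)^{1/([q]+1)} / ([q]+1) >= 0.2 for every integer part m = [q] with q >= 2,
# the constant used to pass from the exact Lambert-W bound to the simplified one
for m in range(2, 300):
    val = math.exp(math.lgamma(m + 1) / (m + 1)) / (m + 1)  # lgamma(m+1) = ln(m!)
    assert val >= 0.2
```

By Stirling's approximation, $(m!)^{1/(m+1)}/(m+1)\to 1/e\approx 0.368$, so the bound $0.2$ is comfortable for large $m$; the smallest value occurs at small $m$ and is still above $0.2$.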
S.3.3 Proof of Proposition 4.1

By the triangle inequality,
\[
\mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\Big\|_{\infty}\right]
\le \mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\mathbf{1}\{\|X_i\|_{\infty}\le K\}\Big\|_{\infty}\right]
+\mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\mathbf{1}\{\|X_i\|_{\infty}>K\}\Big\|_{\infty}\right].
\]
Towards bounding the second term, observe that
\[
\mathbb{E}\left[\Big\|\sum_{i=1}^{n}X_i\mathbf{1}\{\|X_i\|_{\infty}>K\}\Big\|_{\infty}\right]
\le\sum_{i=1}^{n}\mathbb{E}\big[\|X_i\mathbf{1}\{\|X_i\|_{\infty}>K\}\|_{\infty}\big]
=\sum_{i=1}^{n}\int_{K}^{\infty}\mathbb{P}(\|X_i\|_{\infty}>t)\,dt
\]
\[
=\sum_{i=1}^{n}\int_{K}^{\infty}qt^{q-1}\,\frac{\mathbb{P}(\|X_i\|_{\infty}>t)}{qt^{q-1}}\,dt
\le\frac{1}{qK^{q-1}}\sum_{i=1}^{n}\int_{0}^{\infty}qt^{q-1}\mathbb{P}(\|X_i\|_{\infty}>t)\,dt
\le\frac{nB^{q}}{qK^{q-1}}.
\]
As for the first term, having observed that the random variables $U_i=X_i\mathbf{1}\{\|X_i\|_{\infty}\le K\}$ satisfy
\[
\max_{1\le j\le p}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big[U_i(j)^{2}\big]\le\sigma^{2},\qquad
\max_{1\le j\le p}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big[|U_i(j)|^{q}\big]\le B^{q},\qquad
\max_{1\le i\le n}\|U_i\|_{\infty}\le K,
\]
we seek to bound the expected $L_{\infty}$ norm of the mean of the $U_i$'s. To that end, we utilize the result in Theorem 4.1.
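Since $\Psi^{-1}$ has no elementary closed form, a bisection evaluation is convenient when checking the bounds in this proof numerically. The sketch below also spot-checks the bounds of Lemma S.4.8 and the inequality $z/\Psi'(\Psi^{-1}(z))\le\Psi^{-1}(z)$ that is used in the chain of estimates that follows; the test points are arbitrary:

```python
import math

def Psi(y):
    # Psi(y) = (1+y) ln(1+y) - y
    return (1.0 + y) * math.log1p(y) - y

def Psi_inv(x):
    # Psi is increasing on [0, inf), so invert by bisection
    lo, hi = 0.0, 1e12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Psi(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for x in [0.01, 0.1, 1.0, 2.0, math.e, 10.0, 1e4]:
    y = Psi_inv(x)
    assert abs(Psi(y) - x) < 1e-6 * max(1.0, x)
    # Lemma S.4.8: g(x) <= Psi_inv(x) <= 2 g(x)
    g = math.sqrt(2.0 * x) if x <= math.e else (x - 1.0) / math.log1p((x - 1.0) / math.e) - 1.0
    assert g - 1e-9 <= y <= 2.0 * g + 1e-9
    # x / Psi'(Psi_inv(x)) <= Psi_inv(x), with Psi'(y) = ln(1+y)
    assert x / math.log1p(y) <= y + 1e-9
```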
Choosing
\[
z^{*}=B\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{\frac{q-1}{q-2}}\Psi^{-1}\left[\Big(\frac{B^{2}}{\sigma^{2}}\Big)^{\frac{q}{q-2}}\frac{\ln(2p)}{n}\right]
+\frac{B^{q}}{K^{q-1}}\Psi^{-1}\left[\Big(\frac{K}{B}\Big)^{q}\frac{\ln(2p)}{n}\right],
\]
observe that
\[
\mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}U_i\Big\|_{\infty}\right]
=\int_{0}^{\infty}\mathbb{P}\left(\Big\|\frac{1}{n}\sum_{i=1}^{n}U_i\Big\|_{\infty}\ge x\right)dx
\le\int_{0}^{z^{*}}dx+\int_{z^{*}}^{\infty}2p\,\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}U_i(1)\ge x\right)dx.
\]
The value of the first integral equals $z^{*}$. As for the second integral, we make the change of variable
\[
x=B\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{\frac{q-1}{q-2}}\Psi^{-1}\left[\Big(\frac{B^{2}}{\sigma^{2}}\Big)^{\frac{q}{q-2}}\frac{\ln z}{n}\right]
+\frac{B^{q}}{K^{q-1}}\Psi^{-1}\left[\Big(\frac{K}{B}\Big)^{q}\frac{\ln z}{n}\right]
\]
and get the following:
\[
\int_{z^{*}}^{\infty}2p\,\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}U_i(1)\ge x\right)dx
\le\int_{2p}^{\infty}\frac{2p}{nz^{2}}\left[
B\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{\frac{q-1}{q-2}}
\frac{(B^{2}/\sigma^{2})^{\frac{q}{q-2}}}{\Psi'\big(\Psi^{-1}\big((B^{2}/\sigma^{2})^{\frac{q}{q-2}}\tfrac{\ln z}{n}\big)\big)}
+\frac{B^{q}}{K^{q-1}}\frac{(K/B)^{q}}{\Psi'\big(\Psi^{-1}\big((K/B)^{q}\tfrac{\ln z}{n}\big)\big)}\right]dz
\]
\[
\le\int_{2p}^{\infty}\frac{2p}{nz^{2}}\left[
B\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{\frac{q-1}{q-2}}
\frac{(B^{2}/\sigma^{2})^{\frac{q}{q-2}}}{\Psi'\big(\Psi^{-1}\big((B^{2}/\sigma^{2})^{\frac{q}{q-2}}\tfrac{\ln(2p)}{n}\big)\big)}
+\frac{B^{q}}{K^{q-1}}\frac{(K/B)^{q}}{\Psi'\big(\Psi^{-1}\big((K/B)^{q}\tfrac{\ln(2p)}{n}\big)\big)}\right]dz
\]
\[
=\frac{B}{n}\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{\frac{q-1}{q-2}}
\frac{(B^{2}/\sigma^{2})^{\frac{q}{q-2}}}{\Psi'\big(\Psi^{-1}\big((B^{2}/\sigma^{2})^{\frac{q}{q-2}}\tfrac{\ln(2p)}{n}\big)\big)}
+\frac{B^{q}}{nK^{q-1}}\frac{(K/B)^{q}}{\Psi'\big(\Psi^{-1}\big((K/B)^{q}\tfrac{\ln(2p)}{n}\big)\big)}
\]
\[
\le B\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{\frac{q-1}{q-2}}
\frac{(B^{2}/\sigma^{2})^{\frac{q}{q-2}}\,\ln(2p)/n}{\Psi'\big(\Psi^{-1}\big((B^{2}/\sigma^{2})^{\frac{q}{q-2}}\tfrac{\ln(2p)}{n}\big)\big)}
+\frac{B^{q}}{K^{q-1}}\frac{(K/B)^{q}\,\ln(2p)/n}{\Psi'\big(\Psi^{-1}\big((K/B)^{q}\tfrac{\ln(2p)}{n}\big)\big)}
\]
\[
\le B\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{\frac{q-1}{q-2}}\Psi^{-1}\left[\Big(\frac{B^{2}}{\sigma^{2}}\Big)^{\frac{q}{q-2}}\frac{\ln(2p)}{n}\right]
+\frac{B^{q}}{K^{q-1}}\Psi^{-1}\left[\Big(\frac{K}{B}\Big)^{q}\frac{\ln(2p)}{n}\right]=z^{*}.
\]
The last inequality holds due to the following chain of inequalities, valid for all $y=\Psi^{-1}(z)$ with $y,z\ge 0$:
\[
\ln(1+y)\le y
\implies\ln(1+y)+y\ln(1+y)-y\le y\ln(1+y)
\implies\Psi(y)\le y\,\Psi'(y)
\]
\[
\implies\Psi\big(\Psi^{-1}(z)\big)\le\Psi^{-1}(z)\,\Psi'\big(\Psi^{-1}(z)\big)
\implies\frac{z}{\Psi'\big(\Psi^{-1}(z)\big)}\le\Psi^{-1}(z).
\]
We therefore find that
\[
\mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\Big\|_{\infty}\right]
\le 2B\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{\frac{q-1}{q-2}}\Psi^{-1}\left[\Big(\frac{B^{2}}{\sigma^{2}}\Big)^{\frac{q}{q-2}}\frac{\ln(2p)}{n}\right]
+\frac{2B^{q}}{K^{q-1}}\Psi^{-1}\left[\Big(\frac{K}{B}\Big)^{q}\frac{\ln(2p)}{n}\right]
+\frac{B^{q}}{qK^{q-1}}.
\]
We partition the rest of our analysis into two cases.

Case 1: $(B^{2}/\sigma^{2})^{q/(q-2)}\,\ln(2p)/n\le e$. In this situation, by Lemma S.4.8, we can say that
\[
\mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\Big\|_{\infty}\right]
\le 4\sqrt{2}\,B\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{\frac{q-1}{q-2}}\Big(\frac{B}{\sigma}\Big)^{\frac{q}{q-2}}\sqrt{\frac{\ln(2p)}{n}}
+\frac{2B^{q}}{K^{q-1}}\Psi^{-1}\left[\Big(\frac{K}{B}\Big)^{q}\frac{\ln(2p)}{n}\right]
+\frac{B^{q}}{qK^{q-1}}
\]
\[
=4\sqrt{2}\,\sigma\sqrt{\frac{\ln(2p)}{n}}
+\frac{2B^{q}}{K^{q-1}}\Psi^{-1}\left[\Big(\frac{K}{B}\Big)^{q}\frac{\ln(2p)}{n}\right]
+\frac{B^{q}}{qK^{q-1}}.
\]
Now, setting $K=\big(\sqrt{e}\,(B^{q}/\sigma)\sqrt{n/\ln(2p)}\big)^{1/(q-1)}$, we find that $B^{q}/K^{q-1}=(1/\sqrt{e})\,\sigma\sqrt{\ln(2p)/n}$. Note that this choice of $K$ is feasible, i.e. $K\ge B$, as $\sqrt{n/\ln(2p)}\ge(1/\sqrt{e})(B/\sigma)^{q/(q-2)}$ and hence $K/B\ge(B^{2}/\sigma^{2})^{1/(q-2)}>1$. Further,
\[
\Big(\frac{K}{B}\Big)^{q}\frac{\ln(2p)}{n}
=(\sqrt{e})^{\frac{q}{q-1}}\Big(\frac{B}{\sigma}\Big)^{\frac{q}{q-1}}\left(\sqrt{\frac{\ln(2p)}{n}}\right)^{\frac{q-2}{q-1}}
\le(\sqrt{e})^{\frac{q}{q-1}}(\sqrt{e})^{\frac{q-2}{q-1}}=e.
\]
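The algebra behind this choice of $K$ can be spot-checked numerically; the parameter values below are illustrative and chosen only to satisfy the Case 1 condition:

```python
import math

# illustrative parameters (not from the paper) satisfying the Case 1 condition
q, B, sigma, p, n = 4.0, 2.0, 1.0, 50, 10_000
y = math.log(2 * p) / n                       # ln(2p)/n
assert (B ** 2 / sigma ** 2) ** (q / (q - 2)) * y <= math.e

# K = ( sqrt(e) * (B^q / sigma) * sqrt(n / ln(2p)) )^{1/(q-1)}
K = (math.sqrt(math.e) * (B ** q / sigma) / math.sqrt(y)) ** (1.0 / (q - 1))

# then B^q / K^{q-1} = (1/sqrt(e)) * sigma * sqrt(ln(2p)/n)
lhs = B ** q / K ** (q - 1)
rhs = sigma * math.sqrt(y) / math.sqrt(math.e)
assert abs(lhs - rhs) <= 1e-12 * rhs

# feasibility K >= B and the bound (K/B)^q * ln(2p)/n <= e
assert K >= B
assert (K / B) ** q * y <= math.e
```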
Therefore, we find that
\[
\mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\Big\|_{\infty}\right]\le 10\,\sigma\sqrt{\frac{\ln(2p)}{n}}.
\]
Case 2: $(B^{2}/\sigma^{2})^{q/(q-2)}\,\ln(2p)/n>e$. In this case, we choose $K=(B^{q}/\sigma^{2})^{1/(q-2)}$. For this choice of $K$, note that
\[
\Big(\frac{K}{B}\Big)^{q}\frac{\ln(2p)}{n}=\Big(\frac{B^{2}}{\sigma^{2}}\Big)^{\frac{q}{q-2}}\frac{\ln(2p)}{n}>e.
\]
Further notice that, for $x>e$,
\[
\frac{2}{q}\le 1\le\frac{x}{\ln(1+x)}\le\Psi^{-1}(x)\le\frac{2x}{\ln(1+x)}.
\]
Therefore,
\[
\mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\Big\|_{\infty}\right]
\le\frac{9}{2}\,B\Big(\frac{\sigma^{2}}{B^{2}}\Big)^{\frac{q-1}{q-2}}\Psi^{-1}\left[\Big(\frac{B^{2}}{\sigma^{2}}\Big)^{\frac{q}{q-2}}\frac{\ln(2p)}{n}\right]
\le\frac{9\,B\,(B^{2}/\sigma^{2})^{1/(q-2)}\,\ln(2p)/n}{\ln\big(1+(B^{2}/\sigma^{2})^{q/(q-2)}\ln(2p)/n\big)}.
\]

S.3.4 Proof of Proposition 4.2

By the Triangle
Inequality,
\[
\mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\Big\|_{\infty}\right]
\le \mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\mathbf{1}\{\|X_i\|_{\infty}\le K\}\Big\|_{\infty}\right]
+\mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\mathbf{1}\{\|X_i\|_{\infty}>K\}\Big\|_{\infty}\right].
\]
Towards bounding the second term, observe that
\[
\mathbb{E}\left[\Big\|\sum_{i=1}^{n}X_i\mathbf{1}\{\|X_i\|_{\infty}>K\}\Big\|_{\infty}\right]
\le\sum_{i=1}^{n}\mathbb{E}\big[\|X_i\mathbf{1}\{\|X_i\|_{\infty}>K\}\|_{\infty}\big]
=\sum_{i=1}^{n}\int_{K}^{\infty}\mathbb{P}(\|X_i\|_{\infty}>t)\,dt
\]
\[
=\sum_{i=1}^{n}\int_{K}^{\infty}qt^{q-1}\,\frac{\mathbb{P}(\|X_i\|_{\infty}>t)}{qt^{q-1}}\,dt
\le\frac{1}{qK^{q-1}}\sum_{i=1}^{n}\int_{0}^{\infty}qt^{q-1}\mathbb{P}(\|X_i\|_{\infty}>t)\,dt
\le\frac{nB^{q}}{qK^{q-1}}.
\]
As for the first term, having observed that the random variables $U_i=X_i\mathbf{1}\{\|X_i\|_{\infty}\le K\}$ satisfy
\[
\max_{1\le j\le p}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big[U_i(j)^{2}\big]\le\sigma^{2},\qquad
\max_{1\le j\le p}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big[|U_i(j)|^{q}\big]\le B^{q},\qquad
\max_{1\le i\le n}\|U_i\|_{\infty}\le K,
\]
we bound it by making use of Bennett's inequality, as in the proof of Theorem 3.2, to obtain
\[
\mathbb{E}\left[\Big\|\sum_{i=1}^{n}X_i\mathbf{1}\{\|X_i\|_{\infty}\le K\}\Big\|_{\infty}\right]
\le nA_0+\frac{K}{\ln(1+A_0K/\sigma^{2})},
\quad\text{where}\quad
A_0=\frac{\sigma^{2}}{K}\,\Psi^{-1}\left(\frac{K^{2}\ln(2p)}{n\sigma^{2}}\right).
\]
Consequently, we have
\[
\mathbb{E}\left[\Big\|\sum_{i=1}^{n}X_i\Big\|_{\infty}\right]\le nA_0+\frac{K}{\ln(1+A_0K/\sigma^{2})}+\frac{nB^{q}}{qK^{q-1}}.
\]
Appealing to Lemmas S.4.4 and S.4.5, we therefore obtain
\[
\frac{1}{\sqrt{2}}\ln\Big(1+\frac{K^{2}\ln(2p)}{n\sigma^{2}}\Big)\le\ln\big(1+A_0K/\sigma^{2}\big).
\]
Further, using Lemma S.4.8 and the fact that the function $x/\ln(1+x)$ is increasing, we have
\[
\mathbb{E}\left[\Big\|\sum_{i=1}^{n}X_i\Big\|_{\infty}\right]
\le\frac{(e\ln(2p)+\sqrt{2})K}{\ln\big(1+K^{2}\ln(2p)/(n\sigma^{2})\big)}+\frac{nB^{q}}{qK^{q-1}}
\le\frac{(e\ln(2p)+\sqrt{2})K}{\ln\big(1+K\sqrt{\ln(2p)/(n\sigma^{2})}\big)}+\frac{nB^{q}}{qK^{q-1}}. \tag{E.31}
\]
Since $K$ is a free parameter, we can improve the upper bound to
\[
\mathbb{E}\left[\Big\|\sum_{i=1}^{n}X_i\Big\|_{\infty}\right]
\le\inf_{K>0}\left\{\frac{(e\ln(2p)+\sqrt{2})K}{\ln\big(1+K\sqrt{\ln(2p)/(n\sigma^{2})}\big)}+\frac{nB^{q}}{qK^{q-1}}\right\}.
\]
Clearly, the right-hand side of this inequality is differentiable, and we can find the infimum by solving the equation
\[
\frac{\Psi\big(K\sqrt{\ln(2p)/(n\sigma^{2})}\big)}
{\big(1+K\sqrt{\ln(2p)/(n\sigma^{2})}\big)\ln^{2}\big(1+K\sqrt{\ln(2p)/(n\sigma^{2})}\big)}
=\frac{n(q-1)B^{q}}{(e\ln(2p)+\sqrt{2})\,qK^{q}}.
\]
However, it is not possible to solve this equation analytically, and hence we resort to an approximation of the function on the left-hand side. By Lemma S.4.16, when $x>\sqrt{e}$,
\[
\frac{1}{6\ln x}\le\frac{1}{3\ln(1+x)}\le\frac{\Psi(x)}{(1+x)\ln^{2}(1+x)}\le\frac{1}{\ln(1+x)}\le\frac{1}{\ln x}.
\]
It is therefore more convenient to solve an equation of the form
\[
\frac{1}{\ln\big(K\sqrt{\ln(2p)/(n\sigma^{2})}\big)}=\frac{n(q-1)B^{q}}{(e\ln(2p)+\sqrt{2})\,qK^{q}}.
\]
This can be written as an equation of the form
\[
\frac{x^{q}}{q\ln x}=c,
\]
where $K\sqrt{\ln(2p)/(n\sigma^{2})}$ plays the role of $x$. There may be two types of solutions to this equation. One is simply
\[
K\sqrt{\frac{\ln(2p)}{n\sigma^{2}}}=\big[-DW(-1/D)\big]^{1/q},
\]
where $W$ is the Lambert function.
However, by Lemma S.4.18, this solution is bounded between $1$ and $e^{1/q}$. If $q>2$, then the maximum value that this solution can attain is at most $\sqrt{e}$, whereas the minimum value of $K\sqrt{\ln(2p)/(n\sigma^{2})}$ is $\sqrt{e}$. Therefore, we discard this solution due to its limited scope of use. By Lemma S.4.17,
\[
K\sqrt{\frac{\ln(2p)}{n\sigma^{2}}}=\big[D\ln D\ln D\ln D\cdots\big]^{1/q},
\quad\text{where}\quad
D=\frac{(q-1)\,n\,\big(B\sqrt{\ln(2p)/(n\sigma^{2})}\big)^{q}}{(e\ln(2p)+\sqrt{2})\,q^{2}}.
\]
Note that here we are implicitly assuming that $K\sqrt{\ln(2p)/(n\sigma^{2})}>e$, which allows us to argue the optimality of the bound (up to constant multiples) when $[D\ln D]^{1/q}\ge e$. Nevertheless, we can always plug in this value of $K$ to obtain an upper bound, which may or may not be optimal in all situations. However, it is to be noted that the bound is meaningful when $K^{2}\ln(2p)/(n\sigma^{2})>1$, which in turn implies that $[D\ln D\ln D\cdots]^{1/q}\ge 1$; this follows if we make sure that $[D\ln D]^{1/q}\ge 1$. Note that
\[
[D\ln D]^{1/q}\asymp 2\,\frac{B}{\sigma}\Big(\frac{\ln(2p)}{n}\Big)^{\frac{1}{2}-\frac{1}{q}}
\left[\ln\left(\frac{B}{\sigma}\Big(\frac{\ln(2p)}{n}\Big)^{\frac{1}{2}-\frac{1}{q}}\right)\right]^{1/q}\ge 1 \tag{E.32}
\]
when $(B^{2}/\sigma^{2})^{q/(q-2)}\,\ln(2p)/n>e$. Plugging this value of $K$ into (E.31), we have
\[
\mathbb{E}\left[\Big\|\frac{1}{n}\sum_{i=1}^{n}X_i\Big\|_{\infty}\right]
\le\frac{(e\ln(2p)+\sqrt{2})K}{n\ln\big(K\sqrt{\ln(2p)/(n\sigma^{2})}\big)}+\frac{B^{q}}{qK^{q-1}}
=K\left[\frac{e\ln(2p)+\sqrt{2}}{n\ln\big(K\sqrt{\ln(2p)/(n\sigma^{2})}\big)}+\frac{B^{q}}{qK^{q}}\right]
\]
\[
=K\left[\frac{(q-1)B^{q}}{qK^{q}}+\frac{B^{q}}{qK^{q}}\right]
=\frac{B^{q}}{K^{q-1}}
\le B\left[\frac{(q-1)\,n}{(e\ln(2p)+\sqrt{2})\,q^{2}}\,
\ln\left(\frac{(q-1)\,n\,\big(B\sqrt{\ln(2p)/(n\sigma^{2})}\big)^{q}}{(e\ln(2p)+\sqrt{2})\,q^{2}}\right)\right]^{\frac{1}{q}-1}
\]
\[
\lesssim B\Big(\frac{\ln(2p)}{n}\Big)^{1-\frac{1}{q}}
\left[\ln\left(\frac{B^{2}}{\sigma^{2}}\Big(\frac{\ln(2p)}{n}\Big)^{1-\frac{2}{q}}\right)\right]^{\frac{1}{q}-1}.
\]
Even when $[D\ln D]^{1/q}<1$, we may argue that it is only off by a constant, whereby we may directly choose $(K/\sigma)\sqrt{\ln(2p)/n}$ to be the quantity on the right of (E.32), and we will obtain the same bound
as follows:
\[
K=2B\Big(\frac{\ln(2p)}{n}\Big)^{-1/q}
\left[\ln\left(\frac{B}{\sigma}\Big(\frac{\ln(2p)}{n}\Big)^{\frac{1}{2}-\frac{1}{q}}\right)\right]^{1/q}.
\]
It is then easy to see that both
\[
\frac{B^{q}}{K^{q-1}}\lesssim B\Big(\frac{\ln(2p)}{n}\Big)^{1-\frac{1}{q}}
\left[\ln\left(\frac{B^{2}}{\sigma^{2}}\Big(\frac{\ln(2p)}{n}\Big)^{1-\frac{2}{q}}\right)\right]^{\frac{1}{q}-1}
\]
and
\[
\frac{K\,(e\ln(2p)+\sqrt{2})}{n\ln\big(K\sqrt{\ln(2p)/(n\sigma^{2})}\big)}
\lesssim B\Big(\frac{\ln(2p)}{n}\Big)^{1-\frac{1}{q}}
\left[\ln\left(\frac{B^{2}}{\sigma^{2}}\Big(\frac{\ln(2p)}{n}\Big)^{1-\frac{2}{q}}\right)\right]^{\frac{1}{q}-1},
\]
and the bound holds either way.

S.4 Auxiliary Lemmas and their proofs

Lemma S.4.1. Equation (E.4) has a unique solution in $K$.

Proof. Recall that Eq. (E.4) is as follows:
\[
(1+2^{q})\left[\Big(\frac{\sigma^{2}}{5K}\Big)^{q}\Big(\frac{5K^{2}}{5K^{2}+\sigma^{2}}\Big)^{d}
+K^{q}\left(1-\Big(\frac{5K^{2}}{5K^{2}+\sigma^{2}}\Big)^{d}\right)\right]=B^{q},
\]
where $B\ge\sigma$ and $\sqrt{5}K\ge\sigma$ by the definition of the random variables $W_j$ in the proof of Lemma 2.1. Observe further that
\[
(1+2^{q})\left[\Big(\frac{\sigma^{2}}{5K}\Big)^{q}\Big(\frac{5K^{2}}{5K^{2}+\sigma^{2}}\Big)^{d}
+K^{q}\left(1-\Big(\frac{5K^{2}}{5K^{2}+\sigma^{2}}\Big)^{d}\right)\right]=B^{q}
\]
\[
\implies\Big(\frac{\sigma^{2}}{5K}\Big)^{q}\Big(\frac{5K^{2}}{5K^{2}+\sigma^{2}}\Big)^{d}
+K^{q}\left(1-\Big(\frac{5K^{2}}{5K^{2}+\sigma^{2}}\Big)^{d}\right)=\frac{B^{q}}{1+2^{q}}
\]
\[
\implies\Big(\frac{\sigma}{\sqrt{5}K}\Big)^{q}\Big(\frac{5K^{2}}{5K^{2}+\sigma^{2}}\Big)^{d}
+\Big(\frac{\sqrt{5}K}{\sigma}\Big)^{q}\left(1-\Big(\frac{5K^{2}}{5K^{2}+\sigma^{2}}\Big)^{d}\right)
=\frac{1}{1+2^{q}}\Big(\frac{\sqrt{5}B}{\sigma}\Big)^{q}. \tag{E.33}
\]
Taking $y=\sqrt{5}K/\sigma$, we will show that the function
\[
F(y)=y^{-q}\Big(\frac{y^{2}}{1+y^{2}}\Big)^{d}+y^{q}\left(1-\Big(\frac{y^{2}}{1+y^{2}}\Big)^{d}\right)
\]
is strictly increasing in $y$, whereby Eq. (E.33) has a unique solution in $\sqrt{5}K/\sigma$ and consequently in $K$.
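Before the analytic argument, a numerical spot-check of the claimed monotonicity is reassuring; the values of $q$ and $d$ below are illustrative (with $q>2$), not prescribed by the lemma:

```python
# F(y) = y^{-q} (y^2/(1+y^2))^d + y^q (1 - (y^2/(1+y^2))^d)
def F(y, q, d):
    r = (y * y / (1.0 + y * y)) ** d
    return y ** (-q) * r + y ** q * (1.0 - r)

for q in [2.5, 3.0, 4.0, 7.0]:
    for d in [1, 2, 5, 20]:
        ys = [1.0 + 0.01 * k for k in range(1, 500)]
        vals = [F(y, q, d) for y in ys]
        # strictly increasing along the grid
        assert all(a < b for a, b in zip(vals, vals[1:]))
```

Note that the boundary case $q=2$, $d=1$ is degenerate ($F\equiv 1$ there), which is why the check uses $q>2$.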
To show that the function $F$ is strictly increasing in $y$ for all $y\ge 1$, we take the derivative of $F$ with respect to $y$, which comes out to be
\[
F'(y)=(1+y^{2})^{-(d+1)}\Big(qy^{q-1}(1+y^{2})^{d+1}-qy^{2d-q-1}(1+y^{2})+2dy^{2d-q-1}
-qy^{2d+q-1}(1+y^{2})-2dy^{2d+q-1}\Big).
\]
To show that this is positive, we only need to show that
\[
qy^{q-1}(1+y^{2})^{d+1}>(q-2d)y^{2d-q-1}+qy^{2d-q+1}+(q+2d)y^{2d+q-1}+qy^{2d+q+1}.
\]
We first consider the case where $d\ge 2$. Observe that the coefficients of the four largest powers of $y$ in the expansion of $qy^{q-1}(1+y^{2})^{d+1}$ are $q$, $q(d+1)$, $qd(d+1)/2$ and $qd(d^{2}-1)/6$. It is easy to see, therefore, that the first term in the expansion cancels out with $qy^{2d+q+1}$. As for the second term, note that $q\ge 2$ implies $q(d+1)\ge q+2d$, and hence $q(d+1)y^{2d+q-1}\ge(q+2d)y^{2d+q-1}$. As for the third and fourth terms, not only are the coefficients in the expansion larger, but the powers of $y$ are also higher, and given that $y\ge 1$, the inequality is proved.

For the case $d=1$, we are required to show that
\[
qy^{q-1}(1+2y^{2}+y^{4})>qy^{q+3}+(q+2)y^{q+1}+qy^{3-q}+(q-2)y^{1-q}.
\]
Note that
\[
qy^{q-1}(1+2y^{2}+y^{4})-\big(qy^{q+3}+(q+2)y^{q+1}+qy^{3-q}+(q-2)y^{1-q}\big)
=(q-2)y^{q+1}-(q-2)y^{1-q}+qy^{q-1}-qy^{3-q}>0.
\]
This proves the result.

Lemma S.4.2. If $\psi(x)$ is a twice differentiable, strictly increasing, non-negative convex function on $[a,b)$ such that $\psi'(a)>0$, then
\[
\int_{a}^{b}\exp(-\psi(x))\,dx\le\frac{1}{\psi'(a)}\exp(-\psi(a))-\frac{1}{\psi'(b)}\exp(-\psi(b)).
\]
Proof. The first observation we make is that
\[
\int_{a}^{b}\exp(-\psi(x))\,dx=\int_{a}^{b}\frac{1}{\psi'(x)}\,\psi'(x)\exp(-\psi(x))\,dx.
\]
Now, applying integration by parts, we get
\[
\int_{a}^{b}\frac{1}{\psi'(x)}\,\psi'(x)\exp(-\psi(x))\,dx
=-\frac{1}{\psi'(x)}\exp(-\psi(x))\Big|_{a}^{b}-\int_{a}^{b}\frac{\psi''(x)}{\psi'(x)^{2}}\exp(-\psi(x))\,dx
\]
\[
=\frac{1}{\psi'(a)}\exp(-\psi(a))-\frac{1}{\psi'(b)}\exp(-\psi(b))-\int_{a}^{b}\frac{\psi''(x)}{\psi'(x)^{2}}\exp(-\psi(x))\,dx
\le\frac{1}{\psi'(a)}\exp(-\psi(a))-\frac{1}{\psi'(b)}\exp(-\psi(b)).
\]
The last inequality follows from the fact that $\psi$, being convex and twice differentiable, has a non-negative second derivative; this implies that the integrand (of the integral in the penultimate step), and hence the integral, is non-negative.

Lemma S.4.3.
For $x\ge 0$, let $W(x)$ be the principal branch of the Lambert function, defined by $W(x)e^{W(x)}=x$. Then we have
\[
\Big(1-\frac{1}{e}\Big)\ln(1+x)\le W(x)\le\ln(1+x).
\]
Proof. The upper bound on the Lambert function is easy to show. Observe that $\Psi(x)=(1+x)\ln(1+x)-x\ge 0$. This implies that we must have $(1+x)\ln(1+x)\ge x=W(x)e^{W(x)}$, which in turn implies that $\ln(1+x)\ge W(x)$, since $t\mapsto te^{t}$ is increasing. As for the lower bound, it suffices to show that
\[
\Big(1-\frac{1}{e}\Big)(1+x)^{(1-1/e)}\ln(1+x)\le x,
\]
since then $u=(1-1/e)\ln(1+x)$ satisfies $ue^{u}\le x=W(x)e^{W(x)}$, whence $u\le W(x)$. Taking
\[
T(x)=\Big(1-\frac{1}{e}\Big)(1+x)^{(1-1/e)}\ln(1+x)-x,
\]
it is enough to show that $T'(x)\le 0$ for any $x>0$. Now,
\[
T'(x)=\Big(1-\frac{1}{e}\Big)^{2}\frac{\ln(1+x)}{(1+x)^{1/e}}+\Big(1-\frac{1}{e}\Big)\frac{1}{(1+x)^{1/e}}-1.
\]
If we can show that
\[
R(x)=\frac{(1-1/e)\ln(1+x)+1}{(1+x)^{1/e}}\le\frac{e}{e-1},
\]
then we are done, since $T'(x)=(1-1/e)R(x)-1$. Note that $R(x)$ has a unique maximum at $x=e^{e(e-2)/(e-1)}-1$. The maximum value attained by $R(x)$ is therefore $(e-1)/e^{(e-2)/(e-1)}<e/(e-1)$. This proves the result.

Lemma S.4.4. For $x\in[0,e]$, $\Psi'(\Psi^{-1}(x))$ is bounded as follows:
\[
\frac{1}{\sqrt{2}}\ln(1+x)\le\sqrt{2}\ln(1+\sqrt{x})\le\Psi'(\Psi^{-1}(x))\le\frac{3}{2}\ln(1+\sqrt{x}).
\]
Proof. We begin
by observing that showing the validity of the lower bound boils down to an equivalent problem:
\[
\Psi'(\Psi^{-1}(x))\ge\sqrt{2}\ln(1+\sqrt{x})
\iff\ln\big(1+\Psi^{-1}(x)\big)\ge\sqrt{2}\ln(1+\sqrt{x})
\iff\Psi^{-1}(x)\ge(1+\sqrt{x})^{\sqrt{2}}-1
\]
\[
\iff\sqrt{2}\,(1+\sqrt{x})^{\sqrt{2}}\ln(1+\sqrt{x})-(1+\sqrt{x})^{\sqrt{2}}+1-x\le 0.
\]
Take $\zeta(x)=\sqrt{2}\,(1+\sqrt{x})^{\sqrt{2}}\ln(1+\sqrt{x})-(1+\sqrt{x})^{\sqrt{2}}+1-x$. Since $\zeta(0)=0$, in order for us to show that $\zeta(x)\le 0$ for $x\in[0,e]$, it suffices to show that it is a decreasing function. As
\[
\zeta'(x)=\frac{(1+\sqrt{x})^{\sqrt{2}-1}\ln(1+\sqrt{x})}{\sqrt{x}}-1
\]
and $\lim_{x\to 0}\ln(1+\sqrt{x})/\sqrt{x}=1$, it is enough to prove that
\[
\ln(1+\sqrt{x})\le\frac{\sqrt{x}}{(1+\sqrt{x})^{\sqrt{2}-1}}.
\]
Taking $\zeta_1(x)=\ln(1+\sqrt{x})-\sqrt{x}/(1+\sqrt{x})^{\sqrt{2}-1}$, observe that
\[
\zeta_1'(x)=\frac{(1+\sqrt{x})^{-\sqrt{2}-1}\big((\sqrt{2}-2)x+(\sqrt{2}-3)\sqrt{x}+(1+\sqrt{x})^{\sqrt{2}}-1\big)}{2\sqrt{x}}
=\frac{(1+\sqrt{x})^{-\sqrt{2}-1}\big((1+\sqrt{x})^{\sqrt{2}}-(1+\sqrt{x})+(\sqrt{2}-2)\sqrt{x}\,(1+\sqrt{x})\big)}{2\sqrt{x}}
\]
\[
=\frac{(1+\sqrt{x})^{-\sqrt{2}}\big((1+\sqrt{x})^{\sqrt{2}-1}+(\sqrt{2}-2)\sqrt{x}-1\big)}{2\sqrt{x}}.
\]
As $\lim_{x\to 0}\zeta_1'(x)=\sqrt{2}-3/2<0$, the fact that $\zeta_1$ is decreasing follows if we can show that, writing $t=\sqrt{x}\in[0,\sqrt{e}]$,
\[
\zeta_2(t)=(1+t)^{\sqrt{2}-1}+(\sqrt{2}-2)t-1\le 0.
\]
This is easy to see, since $\zeta_2(0)=0$ and $\zeta_2'(t)=(\sqrt{2}-1)(1+t)^{\sqrt{2}-2}+(\sqrt{2}-2)$ is decreasing and takes the value $2\sqrt{2}-3<0$ at $t=0$.

To prove the right side of the inequality, we observe that
\[
\Psi'(\Psi^{-1}(x))\le\frac{3}{2}\ln(1+\sqrt{x})
\iff\ln\big(1+\Psi^{-1}(x)\big)\le\frac{3}{2}\ln(1+\sqrt{x})
\iff\Psi^{-1}(x)\le(1+\sqrt{x})^{3/2}-1
\]
\[
\iff\zeta_3(x)=\frac{3}{2}(1+\sqrt{x})^{3/2}\ln(1+\sqrt{x})-(1+\sqrt{x})^{3/2}+1-x\ge 0.
\]
Then, as $\zeta_3(0)=0$, it suffices to show that
\[
\zeta_3'(x)=\frac{9}{8}\,\frac{(1+\sqrt{x})^{1/2}}{\sqrt{x}}\ln(1+\sqrt{x})-1\ge 0.
\]
Observe that we need only show that, for $x\in[0,\sqrt{e}]$,
\[
\frac{9}{8}\,\frac{(1+x)^{1/2}}{x}\ln(1+x)-1\ge 0
\iff\zeta_4(x)=\frac{9}{8}\ln(1+x)-\frac{x}{(1+x)^{1/2}}\ge 0.
\]
Observe that
\[
\zeta_4'(x)=\frac{-8(1+x)+9\sqrt{1+x}+4x}{8(1+x)^{3/2}}.
\]
Taking $y=\sqrt{1+x}$, note that $\zeta_4'(x)\ge 0$ if and only if $4y^{2}-9y+4\le 0$, which happens if and only if $(9-\sqrt{17})/8\le y\le(9+\sqrt{17})/8$. In this case, as $x\in[0,\sqrt{e}]$, we have $y\in\big[1,\sqrt{1+\sqrt{e}}\big]$, and hence we must have $\zeta_4'(x)\ge 0$. Also, $\zeta_4(0)=0$, and hence the result follows. Lastly,
\[
2\sqrt{x}\ge 0
\implies 1+x+2\sqrt{x}\ge 1+x
\implies(1+\sqrt{x})^{2}\ge 1+x
\implies 2\ln(1+\sqrt{x})\ge\ln(1+x)
\implies\sqrt{2}\ln(1+\sqrt{x})\ge\frac{1}{\sqrt{2}}\ln(1+x),
\]
which implies the result.

Lemma S.4.5.
For $x\in(1,\infty)$, we have
\[
\frac{1}{\sqrt{2}}\ln(1+x)\le\Psi'(\Psi^{-1}(x))\le\frac{1}{\ln 2}\ln(1+x).
\]
Proof. Before we proceed with the proof, recall that
\[
\Psi^{-1}(x)=\frac{x-1}{W\big((x-1)/e\big)}-1.
\]
It is therefore easy to see that
\[
\Psi'(\Psi^{-1}(x))=\ln\big(1+\Psi^{-1}(x)\big)=1+W\Big(\frac{x-1}{e}\Big).
\]
Combining this with Lemma S.4.3, we obtain
\[
1+\Big(1-\frac{1}{e}\Big)\ln\Big(1+\frac{x-1}{e}\Big)\le\Psi'(\Psi^{-1}(x))\le 1+\ln\Big(1+\frac{x-1}{e}\Big)
\iff\frac{1}{e}+\Big(1-\frac{1}{e}\Big)\ln(x+e-1)\le\Psi'(\Psi^{-1}(x))\le\ln(x+e-1).
\]
Having observed that $\ln(x+e-1)/\ln(1+x)$ is decreasing for $x>1$, it is immediate that $\ln(x+e-1)/\ln(1+x)\le 1/\ln 2$. Employing the techniques used in the proof of Lemma S.4.4, we can see that proving $\Psi'(\Psi^{-1}(x))\ge\ln(1+x)/\sqrt{2}$ is equivalent to showing that
\[
\xi(x)=\frac{1}{\sqrt{2}}(1+x)^{1/\sqrt{2}}\ln(1+x)-(1+x)^{1/\sqrt{2}}+1-x\le 0.
\]
Note that since
\[
\xi'(x)=\frac{\ln(1+x)}{2(1+x)^{1-1/\sqrt{2}}}-1
\]
and $\xi(0)=0$, it is enough to show that
\[
\xi_1(x)=\ln(1+x)-2(1+x)^{1-1/\sqrt{2}}\le 0.
\]
To that end, it is useful to observe that the function
\[
\xi_1'(x)=\frac{1}{1+x}-\frac{2(1-1/\sqrt{2})}{(1+x)^{1/\sqrt{2}}}
\]
has a unique root at the point $x_0=\big(1+1/\sqrt{2}\big)^{2+\sqrt{2}}-1$. Having checked that the maximum value of $\xi_1$ is $\xi_1(x_0)<0$, we may conclude that $\xi_1(x)\le 0$ for $x\in(1,\infty)$. This establishes the lemma.

Lemma S.4.6. For $x\in(0,1)$ and $p\ge 1$,
\[
px-\binom{p}{2}x^{2}\le 1-(1-x)^{p}\le px.
\]
Proof. Firstly, equality holds trivially for the case $p=1$. So, it suffices to show the result for $p\ge 2$. Consider the functions
\[
\nu_1(x)=1-(1-x)^{p}-px
\quad\text{and}\quad
\nu_2(x)=1-(1-x)^{p}-px+\binom{p}{2}x^{2}.
\]
Now, $\nu_1'(x)=p(1-x)^{p-1}-p\le 0$. Hence $\nu_1$ is decreasing, and $\nu_1(0)=0$, which implies that $\nu_1(x)\le 0$ for all $x\in(0,1)$. Similarly, $\nu_2'(x)=p(1-x)^{p-1}-p+p(p-1)x$ and $\nu_2''(x)=-p(p-1)(1-x)^{p-2}+p(p-1)\ge 0$. Note that $\nu_2'(0)=0=\nu_2(0)$. This, coupled with the fact that $\nu_2'(x)$ is clearly an increasing function, implies the result.

Lemma S.4.7. Consider real numbers $p_1,p_2,\dots,p_n$ such that $0\le p_i\le 1$ for each $i=1,2,\dots,n$. Then
\[
\sum_{i=1}^{n}p_i-\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}p_ip_j\le 1-\prod_{i=1}^{n}(1-p_i)\le\sum_{i=1}^{n}p_i.
\]
Proof. We will prove the result by induction.
It is easy to see that for $n=1$ equality holds throughout, and for $n=2$ equality holds for the
first inequality, while there is strict inequality for the second one if $p_1p_2>0$. Let the lemma hold for $n=m$, i.e.
\[
\sum_{i=1}^{m}p_i-\sum_{i=1}^{m-1}\sum_{j=i+1}^{m}p_ip_j\le 1-\prod_{i=1}^{m}(1-p_i)\le\sum_{i=1}^{m}p_i.
\]
Now, for $n=m+1$,
\[
1-\prod_{i=1}^{m+1}(1-p_i)=1-(1-p_{m+1})\prod_{i=1}^{m}(1-p_i)
=1-\prod_{i=1}^{m}(1-p_i)+p_{m+1}\prod_{i=1}^{m}(1-p_i)
\le\sum_{i=1}^{m}p_i+p_{m+1}\prod_{i=1}^{m}(1-p_i)
\le\sum_{i=1}^{m}p_i+p_{m+1}=\sum_{i=1}^{m+1}p_i.
\]
On the other hand,
\[
1-\prod_{i=1}^{m+1}(1-p_i)=1-\prod_{i=1}^{m}(1-p_i)+p_{m+1}\prod_{i=1}^{m}(1-p_i)
\ge\sum_{i=1}^{m}p_i-\sum_{i=1}^{m-1}\sum_{j=i+1}^{m}p_ip_j+p_{m+1}\prod_{i=1}^{m}(1-p_i)
\]
\[
\ge\sum_{i=1}^{m}p_i-\sum_{i=1}^{m-1}\sum_{j=i+1}^{m}p_ip_j+p_{m+1}\Big(1-\sum_{i=1}^{m}p_i\Big)
=\sum_{i=1}^{m}p_i-\sum_{i=1}^{m-1}\sum_{j=i+1}^{m}p_ip_j+p_{m+1}-p_{m+1}\sum_{i=1}^{m}p_i
=\sum_{i=1}^{m+1}p_i-\sum_{i=1}^{m}\sum_{j=i+1}^{m+1}p_ip_j.
\]
Hence, by induction, the lemma holds.

Lemma S.4.8. Let $\Psi(x)=(1+x)\ln(1+x)-x$. Then $g(x)\le\Psi^{-1}(x)\le 2g(x)$, where
\[
g(x)=\begin{cases}\sqrt{2x}, & \text{if } x\in[0,e],\\[2pt]
\dfrac{x-1}{\ln\big(1+(x-1)/e\big)}-1, & \text{if } x>1.\end{cases}
\]
Proof. This result is motivated by Lemma 6.3 of Wellner (2017). We first consider the case where $x\in[0,e]$. Proving the lemma in this case is equivalent to proving that, for $0\le x\le e$,
\[
\Psi(\sqrt{2x})\le x\le\Psi(2\sqrt{2x}).
\]
Note that the inequality reduces to an equality when $x=0$. So, it is enough for us to consider the case $0<x\le e$. To that end, let
\[
\alpha(x)=\Psi(\sqrt{2x})-x.
\]
Then
\[
\alpha'(x)=\frac{\ln(1+\sqrt{2x})}{\sqrt{2x}}-1.
\]
We know that $\ln(1+y)/y$ is decreasing in $y$ and $\lim_{y\to 0}\ln(1+y)/y=1$. Hence it follows that $\alpha'(x)$ is negative for all $x>0$. Consequently, $\alpha(x)$ is decreasing in $x$ and attains the value $0$ at $x=0$. Hence, $\alpha(x)$ must be negative for all $x>0$, which in turn implies that $\Psi(\sqrt{2x})\le x$ for all $x>0$. As for the other part, i.e.
$\Psi(2\sqrt{2x})\ge x$, we consider the function $\beta(x)=\Psi(2\sqrt{2x})-x$. Observe that
\[
\beta'(x)=\frac{2}{\sqrt{2x}}\ln\big(1+2\sqrt{2x}\big)-1.
\]
Once again, $\beta'$ is decreasing and attains its minimum value on $(0,e]$ at $x=e$. But
\[
\beta'(e)=\frac{\sqrt{2}}{\sqrt{e}}\ln\big(1+2\sqrt{2e}\big)-1>0.
\]
Hence $\beta$ is increasing on $(0,e]$ and attains the value $0$ at $x=0$. It is thus established that, on $(0,e]$, $\Psi(2\sqrt{2x})\ge x$.

To perform the analysis in the case $x>1$, we first find the exact form of the inverse of $\Psi$ in terms of the Lambert function $W$:
\[
\Psi(\Psi^{-1}(x))=\big(1+\Psi^{-1}(x)\big)\ln\big(1+\Psi^{-1}(x)\big)-\Psi^{-1}(x)
\implies x-1=\big(1+\Psi^{-1}(x)\big)\Big(\ln\big(1+\Psi^{-1}(x)\big)-1\Big)
\]
\[
\implies\frac{x-1}{e}=\frac{1+\Psi^{-1}(x)}{e}\,\ln\Big(\frac{1+\Psi^{-1}(x)}{e}\Big)
\implies W\Big(\frac{x-1}{e}\Big)\exp\Big(W\Big(\frac{x-1}{e}\Big)\Big)
=\frac{1+\Psi^{-1}(x)}{e}\,\ln\Big(\frac{1+\Psi^{-1}(x)}{e}\Big)
\]
\[
\implies\frac{1+\Psi^{-1}(x)}{e}=\exp\Big(W\Big(\frac{x-1}{e}\Big)\Big)
\implies\Psi^{-1}(x)=\frac{x-1}{W\big(\frac{x-1}{e}\big)}-1.
\]
Using Lemma S.4.3, we have for $x>1$
\[
\frac{x-1}{\ln\big(1+\frac{x-1}{e}\big)}-1\le\Psi^{-1}(x)\le\frac{e}{e-1}\,\frac{x-1}{\ln\big(1+\frac{x-1}{e}\big)}-1.
\]
Consider the functions
\[
\ell(x)=\frac{x-1}{\ln\big(1+\frac{x-1}{e}\big)}-1
\quad\text{and}\quad
u(x)=\frac{e}{e-1}\,\frac{x-1}{\ln\big(1+\frac{x-1}{e}\big)}-1.
\]
Then
\[
u(x)\le 2\ell(x)
\iff 2\left(\frac{x-1}{\ln\big(1+\frac{x-1}{e}\big)}-1\right)\ge\frac{e}{e-1}\,\frac{x-1}{\ln\big(1+\frac{x-1}{e}\big)}-1
\iff\Big(2-\frac{e}{e-1}\Big)\frac{x-1}{\ln\big(1+\frac{x-1}{e}\big)}\ge 1
\iff\frac{e-2}{e-1}(x-1)\ge\ln\Big(1+\frac{x-1}{e}\Big).
\]
Hence it suffices to show that $f(x)=\frac{e-2}{e-1}(x-1)-\ln\big(1+\frac{x-1}{e}\big)\ge 0$ for all $x>1$. This follows easily from the fact that $f(1)=0$ and $f'(x)=\frac{e-2}{e-1}-\frac{1}{x+e-1}>0$ for all $x>1$.

Lemma S.4.9. For $0\le x\le e$,
\[
\Psi'(\Psi^{-1}(x))\ge\ln(1+\sqrt{x}).
\]
Proof. By Lemma S.4.8, we observe that for $0\le x\le e$,
\[
\Psi^{-1}(x)\ge\sqrt{2x}\ge\sqrt{x}.
\]
It immediately follows that $\Psi'(\Psi^{-1}(x))=\ln(1+\Psi^{-1}(x))\ge\ln(1+\sqrt{x})$, and the lemma is established.

Lemma S.4.10. For $0\le x\le e$,
\[
\ln(1+x)\ge\frac{x}{3}.
\]
Proof. $\ln(1+x)/x$ is a decreasing function and attains its minimum value on $[0,e]$ at $x=e$.
Hence
\[
\frac{\ln(1+x)}{x}\ge\frac{\ln(1+e)}{e}\ge\frac{\ln e}{e}=\frac{1}{e}>\frac{1}{3}.
\]
Lemma S.4.11. For $x\ge 1/2$,
\[
\ln 2x\le\frac{5}{4}\ln\Big(\frac{2x}{\sqrt{\ln 2x}}\Big)\le 4\ln\Big(\frac{2x}{\sqrt{\ln 2x}}\Big).
\]
Proof. To show that the lemma holds, it suffices to prove that
\[
\alpha(x)=\frac{5}{4}\ln\Big(\frac{2x}{\sqrt{\ln 2x}}\Big)-\ln 2x=\frac{1}{4}\ln 2x-\frac{5}{8}\ln\ln 2x\ge 0.
\]
Observe that
\[
\alpha'(x)=\frac{2\ln 2x-5}{8x\ln 2x}.
\]
It is evident