Source: https://arxiv.org/abs/2505.17285v1
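The index computations in this section repeatedly use the fact that summing a vector over a full cyclic shift of its indices leaves the sum unchanged, so sums of differences such as $\sum_{j\in[k_2]}(w_j-w_{j+\ell})$ vanish. A quick numeric sanity check (a sketch; the array `w` and the shifts are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
k2 = 7
w = rng.normal(size=k2)

# For every shift ell, the cyclically shifted vector has the same sum,
# so the sum of differences w_j - w_{(j+ell) mod k2} is exactly zero.
for ell in range(1, k2):
    shifted = np.roll(w, -ell)  # entry j is w_{(j+ell) mod k2}
    assert abs(np.sum(w - shifted)) < 1e-12
```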
0 because $w_{k_2+1}=w_1$. Similarly,
$$\sum_{j\in[k_2]}(w_j-w_{j+k_2-1})=\sum_{j\in[k_2]}w_j-\sum_{j\in[k_2]}w_{j+k_2-1}=\sum_{j\in[k_2]}w_j-\sum_{j=2}^{k_2}w_{j+k_2-1}-w_{k_2},$$
which vanishes because $\sum_{j=2}^{k_2}w_{j+k_2-1}=\sum_{j=2}^{k_2}w_{j-1}=\sum_{j=1}^{k_2-1}w_j$. In general, for any $1\le\ell\le k_2-1$, we obtain that
$$\sum_{j\in[k_2]}w_{j+\ell}=\sum_{j=1}^{k_2-\ell}w_{j+\ell}+\sum_{j=k_2-\ell+1}^{k_2}w_{j+\ell}=\sum_{j=\ell+1}^{k_2}w_j+\sum_{j=k_2+1}^{k_2+\ell}w_j=\sum_{j=\ell+1}^{k_2}w_j+\sum_{j=1}^{\ell}w_j=\sum_{j\in[k_2]}w_j.$$
Therefore, $\sum_{j\in[k_2]}(w_j-w_{j+\ell})=0$ for any $1\le\ell\le k_2-1$, which, combined with (S5.29) and (S5.30), implies that $-\eta(\Delta_i x,0_{k_2-1})<\infty$. Since $i$ was arbitrary, we obtain $-\eta(\Delta_i x,0_{k_2-1})<\infty$ for each $i\in[k_1]$. For each $i\in[k_1]$, the block-wise permutation symmetry of $\eta$ as in (S5.1) implies that $\eta(\Delta_i x,0_{k_2-1})=\eta(x_i-x_{i+1},\dots,x_i-x_{i+k_1-1},0_{k_2-1})$, where for $i>k_1$, $x_i=x_{i\bmod k_1}$. Since $-\eta(x_i-x_{i+1},\dots,x_i-x_{i+k_1-1},0_{k_2-1})<\infty$ for all $i\in[k_1]$, Jensen's inequality yields
$$\infty>-\frac{1}{k_1}\sum_{i\in[k_1]}\eta(x_i-x_{i+1},\dots,x_i-x_{i+k_1-1},0_{k_2-1})\ge-\eta\Big(\frac{\sum_{i\in[k_1]}(x_i-x_{i+1})}{k_1},\frac{\sum_{i\in[k_1]}(x_i-x_{i+2})}{k_1},\dots,\frac{\sum_{i\in[k_1]}(x_i-x_{i+k_1-1})}{k_1},0_{k_2-1}\Big).$$
Proceeding as in the case for $w$, we can prove that the right-hand side of the above inequality equals $-\eta(0_{k_1-1},0_{k_2-1})$. Thus we have shown that $0_{k_1+k_2-2}\in\mathrm{dom}(-\eta)$. It remains to prove $0_{k_1+k_2-2}\in\mathrm{int}(\mathrm{dom}(-\eta))$, which will be proved by contradiction. If possible, suppose $0_{k_1+k_2-2}\notin\mathrm{int}(\mathrm{dom}(-\eta))$. Then we must have $0_{k_1+k_2-2}\in\partial(\mathrm{dom}(-\eta))$. We claim that, in that case, there exists an open orthant in $\mathbb{R}^{k_1+k_2-2}$, which we will soon define, whose intersection with $\mathrm{dom}(-\eta)$ is empty. For any $k\in\mathbb{N}$ and $\epsilon\in\{\pm1\}^k$, we define the open orthant $Q(\epsilon)$ in $\mathbb{R}^k$ as
$$Q(\epsilon)=\{x\in\mathbb{R}^k:x_i\epsilon_i>0\text{ for all }i\in[k]\}.$$
Note that an open orthant in $\mathbb{R}^{k_1+k_2-2}$ is of the form $Q(\epsilon)\times Q(\epsilon')$, where $Q(\epsilon)$ is an open orthant in $\mathbb{R}^{k_1-1}$ corresponding to some $\epsilon\in\{\pm1\}^{k_1-1}$ and $Q(\epsilon')$ is an open orthant in $\mathbb{R}^{k_2-1}$ corresponding to some $\epsilon'\in\{\pm1\}^{k_2-1}$.

Fact S5.4. If $\eta$ is concave with $0_{k_1+k_2-2}\in\partial(\mathrm{dom}(-\eta))$, then there exist $\epsilon\in\{\pm1\}^{k_1-1}$ and $\epsilon'\in\{\pm1\}^{k_2-1}$ so that the open orthant $Q(\epsilon)\times Q(\epsilon')$ satisfies $\{Q(\epsilon)\times Q(\epsilon')\}\cap\mathrm{dom}(-\eta)=\emptyset$.

S5 PROOF OF THEOREM 3.1/S5.4 Proofs of main lemmas 64

Fact S5.4 is proved in Section S5.5.4.

We will prove that, under the setup of the current lemma, $\{Q(\epsilon)\times Q(\epsilon')\}\cap\mathrm{dom}(-\eta)\ne\emptyset$ for all $\epsilon\in\{\pm1\}^{k_1-1}$ and $\epsilon'\in\{\pm1\}^{k_2-1}$, which will lead to the desired contradiction. Fix such $\epsilon$ and $\epsilon'$. We denote the number of positive elements in $\epsilon$ and $\epsilon'$ by $a_1$ and $a_2$, respectively. Since $\cap_{i=1}^{k_1}\cap_{j=1}^{k_2}\mathrm{int}(\mathrm{dom}(\psi(\cdot;i,j)))\ne\emptyset$, there is an open set $U\subset\mathbb{R}^{k_1+k_2}$ so that for all $(x,w)\in U$, $-\psi(x,w;i,j)<\infty$ for all $i\in[k_1]$ and $j\in[k_2]$. In particular, we can choose $x\in\mathbb{R}^{k_1}$ and $w\in\mathbb{R}^{k_2}$ such that the elements of $x$ and $w$ are all distinct. Suppose $r\in[k_2]$ is such that $w_r=w_{(a_2+1)}$. Since $w$ has distinct elements, $r$ is unique, there are $a_2$ many $j$'s such that $w_j<w_r$, and $k_2-1-a_2$ many $w_j$'s are strictly greater than $w_r$. Note that $\infty>-\psi(x,w;i,j)=-\eta(\Delta_i x,w_j-w_1,\dots,w_j-w_{k_2})$, where the term $w_j-w_j$ is omitted. In particular, taking $j=r$, the vector $v=(w_r-w_1,\dots,w_r-w_{k_2})$ (with $w_r-w_r$ omitted) has exactly $a_2$ many positive and $k_2-1-a_2$ many negative elements. Therefore, we can permute the vector $v$ in a way so that $\upsilon(v)\in Q(\epsilon')$. However, by (S5.1), $\eta(\Delta_i x,v)=\eta(\Delta_i x,\upsilon(v))$ for any permutation $\upsilon:[k_2-1]\mapsto[k_2-1]$. Therefore, there is a permutation $\upsilon$ so that $\upsilon(v)\in Q(\epsilon')$ and $(\Delta_i x,\upsilon(v))\in\mathrm{dom}(-\eta)$ both hold. Thus we have shown that for each $i\in[k_1]$, there exists $v_i\in Q(\epsilon')$ such that $(\Delta_i x,v_i)\in\mathrm{dom}(-\eta)$. Since the elements of $x$ are distinct, there is an $i\in[k_1]$ so that $x_{(a_1+1)}=x_i$. Then $a_1$ many elements of $\Delta_i x$ are positive, and the rest are
negative. The block permutation symmetry of (S5.1) implies that $\eta(\Delta_i x,v_i)=\eta(\upsilon(\Delta_i x),v_i)$ for any permutation $\upsilon:[k_1-1]\mapsto[k_1-1]$. In particular, we can choose $\upsilon$ in a way so that $\upsilon(\Delta_i x)\in Q(\epsilon)$. Since $\eta(\Delta_i x,v_i)=\eta(\upsilon(\Delta_i x),v_i)$, it holds that $-\eta(\upsilon(\Delta_i x),v_i)<\infty$, and $(\upsilon(\Delta_i x),v_i)\in Q(\epsilon)\times Q(\epsilon')$. Hence $\{Q(\epsilon)\times Q(\epsilon')\}\cap\mathrm{dom}(-\eta)$ is non-empty. Then Fact S5.4 implies that $0_{k_1+k_2-2}\in\partial(\mathrm{dom}(-\eta))$ cannot hold, so the assumption $0_{k_1+k_2-2}\notin\mathrm{int}(\mathrm{dom}(-\eta))$ leads to a contradiction. Hence, we must have $0_{k_1+k_2-2}\in\mathrm{int}(\mathrm{dom}(-\eta))$.

S5.4.2. Proof of Lemma S5.2

Proof of Lemma S5.2. Note that (S5.7) implies that when $p_{ij}=1$ for all $i\in[k_1]$ and $j\in[k_2]$,
$$V_\psi(x,y_1,\dots,y_{k_1};p)=\sum_{i\in[k_1]}\sum_{j\in[k_2]}\eta(x_i-x_1,\dots,x_i-x_{k_1},y_{ij}-y_{i1},\dots,y_{ij}-y_{ik_2})$$
for all $x\in\mathbb{R}^{k_1}$ and $y_1,\dots,y_{k_1}\in\mathbb{R}^{k_2}$. Note that if $(x,y_1,\dots,y_{k_1})\in\mathcal{C}_e$, then
$$V_\psi(x,y_1,\dots,y_{k_1};p)=\sum_{i\in[k_1]}\sum_{j\in[k_2]}\eta(0_{k_1-1},0_{k_2-1})=V_\psi(0_{k_1},0_{k_2},\dots,0_{k_2};p).$$
Since $(0_{k_1-1},0_{k_2-1})\in\mathrm{int}(\mathrm{dom}(-\eta))$ by Lemma S5.1, the above implies that $(0_{k_1},0_{k_2},\dots,0_{k_2})\in\mathrm{int}(\mathrm{dom}(-V_\psi(\cdot;p)))$. The above display also implies that the proof will be complete if we can show that $(0_{k_1},0_{k_2},\dots,0_{k_2})$ lies in $\mathrm{argmax}_{x,y_1,\dots,y_{k_1}}V_\psi(x,y_1,\dots,y_{k_1};p)$. Note that $(0_{k_1},0_{k_2},\dots,0_{k_2})$ is just the zero vector $0_{k_1+k_1k_2}$ and $(0_{k_1-1},0_{k_2-1})$ is just the zero vector $0_{k_1+k_2-2}$. To streamline notation, we will write $0_{k_1+k_1k_2}$ and $0_{k_1+k_2-2}$ for these vectors from now on. Let us denote $G=-V_\psi(\cdot;p)$, so that $G$ is a convex function with $0_{k_1+k_1k_2}\in\mathrm{dom}(G)$. Thus we need to show that $0_{k_1+k_1k_2}$ is a minimizer of $G$. Hence, it is enough to prove $\partial G(0_{k_1+k_1k_2})\ni0_{k_1+k_1k_2}$ (Theorem 2.2.1, pp. 177, Hiriart-Urruty and Lemaréchal, 2004). In particular, we will show that $\partial G(0_{k_1+k_1k_2})$ is a singleton and equals $\{0_{k_1+k_1k_2}\}$. For $i\in[k_1]$ and $j\in[k_2]$, let us define
$$\tilde\phi_{ij}(x,w)=-\eta(x_i-x_1,\dots,x_i-x_{k_1},w_j-w_1,\dots,w_j-w_{k_2})\quad\text{and}\quad H_{ij}(x,y_1,\dots,y_{k_1})=\tilde\phi_{ij}(x,y_i),\qquad(S5.31)$$
where $x\in\mathbb{R}^{k_1}$, $w\in\mathbb{R}^{k_2}$, $y_i\in\mathbb{R}^{k_2}$ for $i\in[k_1]$, and the terms $x_i-x_i$ and $w_j-w_j$ are omitted in the above definitions.

Then by Theorem 4.1.1 of Hiriart-Urruty and Lemaréchal (2004),
$$\partial G(x,y_1,\dots,y_{k_1})=\sum_{i\in[k_1]}\sum_{j\in[k_2]}\partial H_{ij}(x,y_1,\dots,y_{k_1}),$$
where for sets $C_1,\dots,C_n$, the sum $C_1+\dots+C_n$ is the Minkowski sum defined in Section S3. If $(x,y_1,\dots,y_{k_1})$ lies in $\mathrm{int}(\mathrm{dom}(H_{ij}))$ for some $i\in[k_1]$ and $j\in[k_2]$, then the corresponding $\partial H_{ij}(x,y_1,\dots,y_{k_1})$ is non-empty (see Theorem 23.4, pp. 214, of Rockafellar, 1970, for a proof). Otherwise, it is just an empty set. Thus the above expression continues to hold even if $(x,y_1,\dots,y_{k_1})\notin\mathrm{int}(\mathrm{dom}(H_{ij}))$ for some $i\in[k_1]$ and $j\in[k_2]$. Consider $i\in[k_1]$ and $j\in[k_2]$. Equation (S5.31) implies that among the $y_r$'s ($r\in[k_1]$), $H_{ij}$ depends only on $y_i$. Hence, if $(t,w_1,\dots,w_{k_1})\in\partial H_{ij}(x,y_1,\dots,y_{k_1})$ for some $t\in\mathbb{R}^{k_1}$ and $w_1,\dots,w_{k_1}\in\mathbb{R}^{k_2}$, then $w_r=0$ for $r\ne i$ (cf. Remark 4.1.2, pp. 184, Hiriart-Urruty and Lemaréchal, 2004). Moreover, $(t,0_{k_2},\dots,w_i,\dots,0_{k_2})$ is in $\partial H_{ij}(x,y_1,\dots,y_{k_1})$ if and only if $(t,w_i)\in\partial\tilde\phi_{ij}(x,y_i)$. Using the above, we can specify $\partial H_{ij}(x,y_1,\dots,y_{k_1})$ for any
$x\in\mathbb{R}^{k_1}$ and $y_1,\dots,y_{k_1}\in\mathbb{R}^{k_2}$ as follows:
$$\partial H_{ij}(x,y_1,\dots,y_{k_1})=\{(t,0_{k_2},\dots,w,\dots,0_{k_2}):(t,w)\in\partial\tilde\phi_{ij}(x,y_i),\,t\in\mathbb{R}^{k_1},\,w\in\mathbb{R}^{k_2}\},\qquad(S5.32)$$
where $w$ appears in the $i$th block. Therefore, we need to find $\partial\tilde\phi_{ij}$ if we want to infer on $\partial H_{ij}$. Equation (S5.31) also implies that for each $i,j$, $\tilde\phi_{ij}(x,w)=h(C_{ij}(x,w))$, where
$$C_{ij}=\begin{pmatrix}A_i&0_{(k_1-1)\times k_2}\\0_{(k_2-1)\times k_1}&B_j\end{pmatrix}$$
is a matrix with $A_i\in\mathbb{R}^{(k_1-1)\times k_1}$ and $B_j\in\mathbb{R}^{(k_2-1)\times k_2}$ satisfying
$$A_ix=(x_i-x_1,\dots,x_i-x_{i-1},x_i-x_{i+1},\dots,x_i-x_{k_1})^T\quad\text{and}\quad B_jw=(w_j-w_1,\dots,w_j-w_{j-1},w_j-w_{j+1},\dots,w_j-w_{k_2})^T$$
for any $i\in[k_1]$, $j\in[k_2]$, $x\in\mathbb{R}^{k_1}$, and $w\in\mathbb{R}^{k_2}$. The above implies
$$A_1=\begin{pmatrix}1_{k_1-1}&-I_{k_1-1}\end{pmatrix},\quad A_{k_1}=\begin{pmatrix}-I_{k_1-1}&1_{k_1-1}\end{pmatrix},\quad B_1=\begin{pmatrix}1_{k_2-1}&-I_{k_2-1}\end{pmatrix},\quad B_{k_2}=\begin{pmatrix}-I_{k_2-1}&1_{k_2-1}\end{pmatrix},$$
$$A_i=\begin{pmatrix}-I_{i-1}&1_{i-1}&0_{(i-1)\times(k_1-i)}\\0_{(k_1-i)\times(i-1)}&1_{k_1-i}&-I_{k_1-i}\end{pmatrix},\qquad(S5.33)$$
and
$$B_j=\begin{pmatrix}-I_{j-1}&1_{j-1}&0_{(j-1)\times(k_2-j)}\\0_{(k_2-j)\times(j-1)}&1_{k_2-j}&-I_{k_2-j}\end{pmatrix}$$
for $i\in[2:k_1-1]$ and $j\in[2:k_2-1]$. Here we slightly overload notation because these $A_i$ and $B_j$'s are different from those defined in Section S5.2. Since $k_1,k_2\ge2$ by our assumption on $k_1$ and $k_2$, the above matrices are well-defined. Therefore, $\partial\tilde\phi_{ij}(x,w)=C_{ij}^T\,\partial(-\eta)(C_{ij}(x,w))$ for $x\in\mathbb{R}^{k_1}$ and $w\in\mathbb{R}^{k_2}$ (cf. Theorem 4.2.1 of Hiriart-Urruty and Lemaréchal, 2004). Since $0_{k_1+k_2-2}\in\mathrm{int}(\mathrm{dom}(-\eta))$ by Lemma S5.1 and $-\eta$ is differentiable by our assumption, it follows that $-\eta$ is differentiable at $0_{k_1+k_2-2}$. Thus $\partial(-\eta)(0_{k_1+k_2-2})$ is a singleton and equals $\{-\nabla\eta(0_{k_1+k_2-2})\}$. Therefore, $\tilde\phi_{ij}$ is differentiable at $0_{k_1+k_2}$ and
$$\nabla\tilde\phi_{ij}(0_{k_1+k_2})=-C_{ij}^T\nabla\eta(0_{k_1+k_2-2}).\qquad(S5.34)$$
Equation (S5.31) implies that $H_{ij}$, and consequently $G=-V_\psi$, are also differentiable at $0_{k_1+k_1k_2}$. We will calculate $\nabla\eta(0_{k_1+k_2-2})$ now. To this end, we will use (S5.1), which implies $\eta(u_1,u_2,\dots,u_{k_1-1},v)=\eta(u_2,u_1,\dots,u_{k_1-1},v)$ for any $u\in\mathbb{R}^{k_1-1}$ and $v\in\mathbb{R}^{k_2-1}$, leading to
$$\frac{\partial\eta(u_1,u_2,\dots,u_{k_1-1},v)}{\partial u_1}=\frac{\partial\eta(u_2,u_1,\dots,u_{k_1-1},v)}{\partial u_1}$$
whenever $\eta$ is differentiable at $(u,v)$. Therefore, $\eta_1(u_1,u_2,\dots,u_{k_1-1},v)=\eta_2(u_2,u_1,\dots,u_{k_1-1},v)$. In particular, when $u=0_{k_1-1}$, it follows that $\eta_1(0_{k_1-1},v)=\eta_2(0_{k_1-1},v)$. In the same way, we can show that $\eta_i(0_{k_1-1},v)$ is constant across $i\in[k_1-1]$. Similarly, we can show that $\eta_{k_1-1+j}(u,0_{k_2-1})$ is constant across $j\in[k_2-1]$.

In particular, $\eta_i(0_{k_1+k_2-2})=\eta_1(0_{k_1+k_2-2})$ for all $i\in[k_1-1]$ and $\eta_j(0_{k_1+k_2-2})=\eta_{k_1}(0_{k_1+k_2-2})$ for all $j\in[k_1:k_1+k_2-2]$. Thus
$$\nabla\eta(0_{k_1+k_2-2})=\big(\eta_1(0_{k_1+k_2-2})1_{k_1-1},\ \eta_{k_1}(0_{k_1+k_2-2})1_{k_2-1}\big).$$
Then (S5.34) implies that
$$\nabla\tilde\phi_{ij}(0_{k_1+k_2})=-C_{ij}^T\nabla\eta(0_{k_1+k_2-2})=-\begin{pmatrix}\eta_1(0_{k_1+k_2-2})A_i^T1_{k_1-1}\\\eta_{k_1}(0_{k_1+k_2-2})B_j^T1_{k_2-1}\end{pmatrix}.$$
We have already shown that $G$ is differentiable at $0_{k_1+k_1k_2}$. It remains to calculate $\nabla G(0_{k_1+k_1k_2})$. To this end, (S5.31) and (S5.32) imply that
$$\nabla G(0_{k_1+k_1k_2})=-\sum_{i\in[k_1]}\sum_{j\in[k_2]}\begin{pmatrix}\eta_1(0_{k_1+k_2-2})A_i^T1_{k_1-1}\\0\\\vdots\\\eta_{k_1}(0_{k_1+k_2-2})B_j^T1_{k_2-1}\ (i\text{th block})\\\vdots\\0\end{pmatrix}=-\begin{pmatrix}k_2\,\eta_1(0_{k_1+k_2-2})\sum_{i\in[k_1]}A_i^T1_{k_1-1}\\\eta_{k_1}(0_{k_1+k_2-2})\sum_{j\in[k_2]}B_j^T1_{k_2-1}\\\vdots\\\eta_{k_1}(0_{k_1+k_2-2})\sum_{j\in[k_2]}B_j^T1_{k_2-1}\end{pmatrix}.$$
Note that (S5.33) implies that
$$A_i^T=\begin{pmatrix}-I_{i-1}&0_{(i-1)\times(k_1-i)}\\1_{i-1}^T&1_{k_1-i}^T\\0_{(k_1-i)\times(i-1)}&-I_{k_1-i}\end{pmatrix}\quad\text{and}\quad B_j^T=\begin{pmatrix}-I_{j-1}&0_{(j-1)\times(k_2-j)}\\1_{j-1}^T&1_{k_2-j}^T\\0_{(k_2-j)\times(j-1)}&-I_{k_2-j}\end{pmatrix}.$$
Hence,
$$A_i^T1_{k_1-1}=\begin{pmatrix}-1_{i-1}\\k_1-1\\-1_{k_1-i}\end{pmatrix}\quad\text{and}\quad B_j^T1_{k_2-1}=\begin{pmatrix}-1_{j-1}\\k_2-1\\-1_{k_2-j}\end{pmatrix}.$$
Thus $\sum_{i\in[k_1]}A_i^T1_{k_1-1}=0_{k_1}$ and $\sum_{j\in[k_2]}B_j^T1_{k_2-1}=0_{k_2}$, which indicates $\nabla G(0_{k_1+k_1k_2})=0_{k_1+k_1k_2}$ and completes the proof.

S5.4.3. Proof of Lemma S5.3

Proof of Lemma S5.3. Since $0_k\in\mathrm{dom}(h)$ and $0_k\in\mathrm{dom}(h)\cap\mathrm{Range}(C_i)$ for each $i\in[m]$, we have $(h\circ C_i)'_\infty=h'_\infty\circ C_i$ for each $i\in[m]$ by part (c) of Fact S5.2. Using part (a) of the same fact, we obtain that for each $w\in\mathbb{R}^m_{>0}$,
$$f'_\infty(x;w)=\sum_{i=1}^m w_ih'_\infty(C_ix),\qquad x\in\mathbb{R}^k,$$
where $f'_\infty(\cdot;w)$ is the recession function of $f(\cdot;w)$. Since $h$ is convex and bounded below, $f'_\infty(\cdot;w)$ is convex and bounded below for each $w\in\mathbb{R}^m_{>0}$. If there exists $w_0\in\mathbb{R}^m_{>0}$ so that the minimum of the convex function $f(\cdot;w_0)$ is not attained
in $\mathbb{R}^k$, then $f'_\infty(x;w_0)=0$ for some $x\ne0_k$ by Fact S5.1. Since $w_0\in\mathbb{R}^m_{>0}$, it must hold that $h'_\infty(C_ix)=0$ for each $i\in[m]$ for this $x$. Therefore,
$$\sum_{i=1}^m w_ih'_\infty(C_ix)=0\quad\text{for all }w\in\mathbb{R}^m_{>0}.$$
Another application of Fact S5.1 yields that the minimum of the convex function $x\mapsto\sum_{i=1}^m w_ih(C_ix)$ is not attained for any $w\in\mathbb{R}^m_{>0}$, which completes the proof.

S5.4.4. Proof of Lemma S5.4

Proof of Lemma S5.4. We fix $p\in\mathbb{R}^4_{>0}$ and denote $u^*(p),v_1^*(p),\dots,v_{k_1}^*(p)$ by $u^*,v_1^*,\dots,v_{k_1}^*$, respectively. We will prove Lemma S5.4 in three steps. (a) We will show that there exists $x\in\mathbb{R}$ so that $u^*=x1_{k_1-1}$. (b) We will next show that $v_2^*=\dots=v_{k_1}^*$. (c) Finally, we will show that $v_i^*=c1_{k_2-1}$ for some $c\in\mathbb{R}$ for each $i\in[k_1]$.

Proving (a): the $u_i^*$'s are equal. For fixed $i$ and $j$ in $[k_1-1]$, let us denote $C_{ij}=\{u\in\mathbb{R}^{k_1-1}:u_i=u_j\}$. We will first show that $u^*\in C_{ij}$ for all $i,j\in[k_1-1]$. Since the claim is trivial for $i=j$, we assume $i\ne j$. Fix $i,j\in[k_1-1]$ with $i\ne j$. Let us denote by $\upsilon$ the permutation on $[k_1-1]$ that swaps $i$ with $j$, i.e., $\upsilon(i)=j$ and $\upsilon(j)=i$. To show $u^*\in C_{ij}$, our first step is to prove
$$V_\eta(u,v_1,\dots,v_{k_1};p)=V_\eta(\upsilon u,v_1,v_{\upsilon(1)+1},\dots,v_{\upsilon(k_1-1)+1};p)\qquad(S5.35)$$
for all $u\in\mathbb{R}^{k_1-1}$ and $v_1,\dots,v_{k_1}\in\mathbb{R}^{k_2-1}$. To that end, note that $\upsilon u=P_{ij}u$, where $P_{ij}$ is the permutation matrix that swaps the $i$th row with the $j$th row. For any $u\in\mathbb{R}^{k_1-1}$ and $w\in\mathbb{R}^{k_2-1}$, it follows that $\eta(\upsilon u,w)=\eta(u,w)$ by the blockwise permutation symmetry of $\eta$ in (S5.1). Also,
$$\eta(A_i\upsilon u,w)=\eta(A_iP_{ij}u,w)\overset{(a)}{=}\eta(P_{ij}A_jP_{ij}^2u,w)\overset{(b)}{=}\eta(P_{ij}A_ju,w)\overset{(c)}{=}\eta(A_ju,w),$$
where (a) uses the fact that $A_i=P_{ij}A_jP_{ij}$ for any $i,j\in[k_1-1]$, (b) uses the fact that $P_{ij}^2=I$, and (c) uses the blockwise permutation symmetry of $\eta$ as in (S5.1). Similarly, we can show that $\eta(A_j\upsilon u,w)=\eta(A_iu,w)$ for all $u\in\mathbb{R}^{k_1-1}$ and $w\in\mathbb{R}^{k_2-1}$. Now suppose $r\ne i$ and $r\ne j$. Consider the case $i<r<j$. Then for all $u\in\mathbb{R}^{k_1-1}$ and $w\in\mathbb{R}^{k_2-1}$,
$$\eta(A_r\upsilon u,w)=\eta(A_rP_{ij}u,w)=\eta(\upsilon u_1-\upsilon u_r,\dots,-\upsilon u_r,\dots,\upsilon u_i-\upsilon u_r,\dots,\upsilon u_j-\upsilon u_r,\dots,\upsilon u_{k_1-1}-\upsilon u_r,w)$$
$$=\eta(u_1-u_r,\dots,u_j-u_r,\dots,-u_r,\dots,u_i-u_r,\dots,u_{k_1-1}-u_r,w)\overset{(a)}{=}\eta(u_1-u_r,\dots,u_i-u_r,\dots,-u_r,\dots,u_j-u_r,\dots,u_{k_1-1}-u_r,w)=\eta(A_ru,w),$$
where (a) follows by the blockwise permutation symmetry of $\eta$. For all other orderings of the triplet $(i,r,j)$, we can similarly prove that $\eta(A_r\upsilon u,w)=\eta(A_ru,w)$. Therefore, (S5.15) implies that
$$V_\eta(\upsilon u,v_1,\dots,v_{k_1};p)=p_{11}\eta(\upsilon u,v_1)+p_{12}\sum_{j'=1}^{k_2-1}\eta(\upsilon u,B_{j'}v_1)+p_{21}\sum_{i'=1}^{k_1-1}\eta(A_{i'}\upsilon u,v_{i'+1})+p_{22}\sum_{i'=1}^{k_1-1}\sum_{j'=1}^{k_2-1}\eta(A_{i'}\upsilon u,B_{j'}v_{i'+1})$$
$$=p_{11}\eta(u,v_1)+p_{12}\sum_{j'=1}^{k_2-1}\eta(u,B_{j'}v_1)+p_{21}\sum_{i'\in[k_1-1]\setminus\{i,j\}}\eta(A_{i'}\upsilon u,v_{i'+1})+p_{22}\sum_{i'\in[k_1-1]\setminus\{i,j\}}\sum_{j'=1}^{k_2-1}\eta(A_{i'}\upsilon u,B_{j'}v_{i'+1})+p_{21}\sum_{i'\in\{i,j\}}\eta(A_{i'}\upsilon u,v_{i'+1})+p_{22}\sum_{i'\in\{i,j\}}\sum_{j'=1}^{k_2-1}\eta(A_{i'}\upsilon u,B_{j'}v_{i'+1}).$$
By our previous calculations,
$$p_{21}\sum_{i'\in[k_1-1]\setminus\{i,j\}}\eta(A_{i'}\upsilon u,v_{i'+1})+p_{22}\sum_{i'\in[k_1-1]\setminus\{i,j\}}\sum_{j'=1}^{k_2-1}\eta(A_{i'}\upsilon u,B_{j'}v_{i'+1})=p_{21}\sum_{i'\in[k_1-1]\setminus\{i,j\}}\eta(A_{i'}u,v_{i'+1})+p_{22}\sum_{i'\in[k_1-1]\setminus\{i,j\}}\sum_{j'=1}^{k_2-1}\eta(A_{i'}u,B_{j'}v_{i'+1})$$
and
$$p_{21}\sum_{i'\in\{i,j\}}\eta(A_{i'}\upsilon u,v_{i'+1})+p_{22}\sum_{i'\in\{i,j\}}\sum_{j'=1}^{k_2-1}\eta(A_{i'}\upsilon u,B_{j'}v_{i'+1})$$
$$=p_{21}\eta(A_i\upsilon u,v_{i+1})+p_{22}\sum_{j'=1}^{k_2-1}\eta(A_i\upsilon u,B_{j'}v_{i+1})+p_{21}\eta(A_j\upsilon u,v_{j+1})+p_{22}\sum_{j'=1}^{k_2-1}\eta(A_j\upsilon u,B_{j'}v_{j+1})$$
$$=p_{21}\eta(A_ju,v_{i+1})+p_{22}\sum_{j'=1}^{k_2-1}\eta(A_ju,B_{j'}v_{i+1})+p_{21}\eta(A_iu,v_{j+1})+p_{22}\sum_{j'=1}^{k_2-1}\eta(A_iu,B_{j'}v_{j+1}),$$
which, noting $j=\upsilon(i)$ and $i=\upsilon(j)$, equals
$$p_{21}\sum_{i'\in\{i,j\}}\eta(A_{i'}u,v_{\upsilon(i')+1})+p_{22}\sum_{i'\in\{i,j\}}\sum_{j'=1}^{k_2-1}\eta(A_{i'}u,B_{j'}v_{\upsilon(i')+1}).$$
Combining all the above pieces, we have
$$V_\eta(\upsilon u,v_1,\dots,v_{k_1};p)=p_{11}\eta(u,v_1)+p_{12}\sum_{j'=1}^{k_2-1}\eta(u,B_{j'}v_1)+p_{21}\sum_{i'\in[k_1-1]\setminus\{i,j\}}\eta(A_{i'}u,v_{i'+1})+p_{22}\sum_{i'\in[k_1-1]\setminus\{i,j\}}\sum_{j'=1}^{k_2-1}\eta(A_{i'}u,B_{j'}v_{i'+1})+p_{21}\sum_{i'\in\{i,j\}}\eta(A_{i'}u,v_{\upsilon(i')+1})
$+p_{22}\sum_{i'\in\{i,j\}}\sum_{j'=1}^{k_2-1}\eta(A_{i'}u,B_{j'}v_{\upsilon(i')+1})$, which can be rewritten as
$$p_{11}\eta(u,v_1)+p_{12}\sum_{j'=1}^{k_2-1}\eta(u,B_{j'}v_1)+p_{21}\sum_{i'=1}^{k_1-1}\eta(A_{i'}u,v_{\upsilon(i')+1})+p_{22}\sum_{i'=1}^{k_1-1}\sum_{j'=1}^{k_2-1}\eta(A_{i'}u,B_{j'}v_{\upsilon(i')+1})$$
because $\upsilon(i')=i'$ for $i'\notin\{i,j\}$. However, the above implies
$$V_\eta(\upsilon u,v_1,\dots,v_{k_1};p)=V_\eta(u,v_1,v_{\upsilon(1)+1},\dots,v_{\upsilon(k_1-1)+1};p)\qquad(S5.36)$$
for all $u\in\mathbb{R}^{k_1-1}$ and $v_1,\dots,v_{k_1}\in\mathbb{R}^{k_2-1}$. Now observe that since $\upsilon=\upsilon^{-1}$, we have
$$V_\eta(\upsilon u,v_1,v_{\upsilon(1)+1},\dots,v_{\upsilon(k_1-1)+1};p)=V_\eta(\upsilon u,v_1,v_{\upsilon^{-1}(1)+1},\dots,v_{\upsilon^{-1}(k_1-1)+1};p).$$
However, (S5.36) applied to $(u,v_1,v_{\upsilon^{-1}(1)+1},\dots,v_{\upsilon^{-1}(k_1-1)+1})$ leads to
$$V_\eta(\upsilon u,v_1,v_{\upsilon^{-1}(1)+1},\dots,v_{\upsilon^{-1}(k_1-1)+1};p)=V_\eta(u,v_1,\dots,v_{k_1};p),$$
which proves (S5.35). Now we will prove that $u^*\in C_{ij}$. Suppose $u_i^*\ne u_j^*$. Since $\eta$ is bounded, there exists $c\in\mathbb{R}$ so that $V_\eta(u^*,v_1^*,\dots,v_{k_1}^*;p)=c$. Then (S5.35) implies that $V_\eta(\upsilon u^*,v_1^*,v_{\upsilon(1)+1}^*,\dots,v_{\upsilon(k_1-1)+1}^*;p)=c$ as well. Since $V_\eta$ is a sum of strictly concave functions, it is strictly concave. Therefore, Jensen's inequality yields that
$$c=\frac{V_\eta(u^*,v_1^*,\dots,v_{k_1}^*;p)+V_\eta(\upsilon u^*,v_1^*,v_{\upsilon(1)+1}^*,\dots,v_{\upsilon(k_1-1)+1}^*;p)}{2}\overset{(a)}{<}V_\eta\Big(\frac{u^*+\upsilon u^*}{2},v_1^*,\frac{v_2^*+v_{\upsilon(1)+1}^*}{2},\dots,\frac{v_{k_1}^*+v_{\upsilon(k_1-1)+1}^*}{2};p\Big),$$
where the strict inequality (a) holds because of strict concavity. If $u'=\frac{u^*+\upsilon u^*}{2}$, then $u_i'=u_j'$. Then the above implies that if $u_i^*\ne u_j^*$, then
$$V_\eta(u^*,v_1^*,\dots,v_{k_1}^*;p)<\sup_{u\in C_{ij},\,v_1,\dots,v_{k_1}\in\mathbb{R}^{k_2-1}}V_\eta(u,v_1,\dots,v_{k_1};p),\qquad(S5.37)$$
which contradicts the fact that $(u^*,v_1^*,\dots,v_{k_1}^*)$ is the unique maximizer of $V_\eta$. Hence, we must have $u_i^*=u_j^*$. Since $i$ and $j$ were arbitrary, we must have $u^*=x1_{k_1-1}$ for some $x\in\mathbb{R}$.

Showing the $v_j^*$'s are equal for $j\ge2$: Suppose $\upsilon$ is a permutation of $[2:k_1]$. Then $(\upsilon(2)-1,\dots,\upsilon(k_1)-1)$ is a permutation of $[k_1-1]$. Let us write $\upsilon(2)-1=\varsigma(1),\dots,\upsilon(k_1)-1=\varsigma(k_1-1)$, so that $\varsigma$ is a permutation of $[k_1-1]$. Therefore, we have shown that for $i\in[k_1-1]$, we can write $\upsilon(i+1)=\varsigma(i)+1$ for a permutation $\varsigma$ of $[k_1-1]$. Therefore,
$$V_\eta(u,v_1,v_{\upsilon(2)},\dots,v_{\upsilon(k_1)};p)=V_\eta(u,v_1,v_{\upsilon(1+1)},\dots,v_{\upsilon(1+k_1-1)};p)=V_\eta(u,v_1,v_{1+\varsigma(1)},\dots,v_{1+\varsigma(k_1-1)};p).$$
However, applying (S5.35) on $(\varsigma^{-1}u,v_1,\dots,v_{k_1})$ (noting that (S5.35) extends to arbitrary permutations of $[k_1-1]$ by composing transpositions), we obtain that
$$V_\eta(\varsigma^{-1}(u),v_1,v_2,\dots,v_{k_1};p)=V_\eta(u,v_1,v_{1+\varsigma(1)},\dots,v_{1+\varsigma(k_1-1)};p).$$
Therefore, we have shown that $V_\eta(u,v_1,v_{\upsilon(2)},\dots,v_{\upsilon(k_1)};p)=V_\eta(\varsigma^{-1}(u),v_1,v_2,\dots,v_{k_1};p)$. In particular,
$$V_\eta(u^*,v_1^*,v_{\upsilon(2)}^*,\dots,v_{\upsilon(k_1)}^*;p)=V_\eta(\varsigma^{-1}(u^*),v_1^*,v_2^*,\dots,v_{k_1}^*;p)=V_\eta(u^*,v_1^*,v_2^*,\dots,v_{k_1}^*;p)$$
because we proved in the last step that $u_1^*=\dots=u_{k_1-1}^*$. However, $(u^*,v_1^*,v_2^*,\dots,v_{k_1}^*)$ is the unique maximizer of $V_\eta$, implying $(v_{\upsilon(2)}^*,\dots,v_{\upsilon(k_1)}^*)=(v_2^*,\dots,v_{k_1}^*)$. Since $\upsilon$ is an arbitrary permutation of $[2:k_1]$, this implies that $v_2^*=\dots=v_{k_1}^*$.

Showing the $v_j^*$'s have equal elements: Suppose $\upsilon$ is an arbitrary permutation of $[k_2-1]$. Then by (S5.15),
$$V_\eta(u,\upsilon(v_1),v_2,\dots,v_{k_1};p)=p_{11}\eta(u,\upsilon(v_1))+p_{12}\sum_{j'=1}^{k_2-1}\eta(u,B_{j'}\upsilon(v_1))+p_{21}\sum_{i'=1}^{k_1-1}\eta(A_{i'}u,v_{i'+1})+p_{22}\sum_{i'=1}^{k_1-1}\sum_{j'=1}^{k_2-1}\eta(A_{i'}u,B_{j'}v_{i'+1}),\qquad(S5.38)$$
where the $B_j$'s and $A_i$'s are as in (S5.13) and (S5.14).
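The interplay between the maps $B_{j'}$ and the permutation $\upsilon$ used in the next step, namely $\upsilon^{-1}(B_{j'}\upsilon(v_1))=B_{\upsilon(j')}v_1$, can be checked numerically. The sketch below assumes, as in (S5.14) and the computation that follows, that $B_j$ maps $v\in\mathbb{R}^{k_2-1}$ to the vector with entries $v_l-v_j$ for $l\ne j$ and $-v_j$ in position $j$; the dimension and random seed are arbitrary:

```python
import numpy as np

def apply_B(j, v):
    """B_j as in (S5.14): entry l is v_l - v_j for l != j,
    and -v_j in position j (j is 1-indexed)."""
    out = v - v[j - 1]
    out[j - 1] = -v[j - 1]
    return out

rng = np.random.default_rng(0)
k2 = 6
v = rng.normal(size=k2 - 1)
perm = rng.permutation(k2 - 1)        # a permutation upsilon of [k2 - 1], 0-indexed
inv = np.argsort(perm)                # its inverse permutation

# Check upsilon^{-1}(B_j upsilon(v)) == B_{upsilon(j)} v for every j in [k2 - 1].
for j in range(1, k2):
    lhs = apply_B(j, v[perm])[inv]    # upsilon^{-1}(B_j upsilon(v))
    rhs = apply_B(perm[j - 1] + 1, v)  # B_{upsilon(j)} v
    assert np.allclose(lhs, rhs)
```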
First note that $\eta(u,\upsilon(v_1))=\eta(u,v_1)$ by (S5.1). By definition of $B_j$,
$$\eta(u,B_{j'}\upsilon(v_1))=\eta(u,v_{1\upsilon(1)}-v_{1\upsilon(j')},\dots,-v_{1\upsilon(j')},\dots,v_{1\upsilon(k_2-1)}-v_{1\upsilon(j')}).$$
However, by (3.4), $\eta(u,B_{j'}\upsilon(v_1))=\eta(u,\upsilon^{-1}(B_{j'}\upsilon(v_1)))$. Let $l=B_{j'}\upsilon(v_1)=(v_{1\upsilon(1)}-v_{1\upsilon(j')},\dots,-v_{1\upsilon(j')},\dots,v_{1\upsilon(k_2-1)}-v_{1\upsilon(j')})$. We want to figure out what $\upsilon^{-1}(l)$ is. We have
$$(\upsilon^{-1}(l))_i=l_{\upsilon^{-1}(i)}=\begin{cases}v_{1i}-v_{1\upsilon(j')}&\text{if }i\ne\upsilon(j'),\\-v_{1\upsilon(j')}&\text{if }i=\upsilon(j').\end{cases}$$
Therefore,
$$\upsilon^{-1}(B_{j'}\upsilon(v_1))=\big(\underbrace{v_{11}-v_{1\upsilon(j')},\dots,v_{1,\upsilon(j')-1}-v_{1\upsilon(j')}}_{\upsilon(j')-1\text{ elements}},\,-v_{1\upsilon(j')},\,\underbrace{v_{1,\upsilon(j')+1}-v_{1\upsilon(j')},\dots,v_{1,k_2-1}-v_{1\upsilon(j')}}_{k_2-1-\upsilon(j')\text{ elements}}\big),$$
which is just $B_{\upsilon(j')}v_1$. Thus we have shown that $\eta(u,B_{j'}\upsilon(v_1))=\eta(u,B_{\upsilon(j')}v_1)$, indicating that
$$\sum_{j'=1}^{k_2-1}\eta(u,B_{j'}\upsilon(v_1))=\sum_{j'=1}^{k_2-1}\eta(u,B_{\upsilon(j')}v_1)=\sum_{j=1}^{k_2-1}\eta(u,B_jv_1).$$
Hence, (S5.38) implies $V_\eta(u,\upsilon(v_1),v_2,\dots,v_{k_1};p)=V_\eta(u,v_1,v_2,\dots,v_{k_1};p)$ for all $u\in\mathbb{R}^{k_1-1}$ and $v_1,v_2,\dots,v_{k_1}\in\mathbb{R}^{k_2-1}$. In particular,
$$V_\eta(u^*,\upsilon(v_1^*),v_2^*,\dots,v_{k_1}^*;p)=V_\eta(u^*,v_1^*,v_2^*,\dots,v_{k_1}^*;p).$$
The uniqueness of $v_1^*$ implies $v_1^*=\upsilon(v_1^*)$. Since $\upsilon$ is an arbitrary permutation, it follows that $v_1^*=c1_{k_2-1}$ for some $c\in\mathbb{R}$. The proof for the other $v_i^*$'s follows similarly, replacing $u$ by $A_{i-1}u$, and is hence skipped.

S5.4.5. Proof of Lemma S5.5

Proof of Lemma S5.5. We have previously shown that the minimizer of $-V_\eta(\cdot;p)$ is unique, and by Lemma S5.4, it is of the form $(u_1^*1_{k_1-1},v_{11}^*1_{k_2-1},v_{21}^*1_{k_2-1},\dots,v_{21}^*1_{k_2-1})$ for some $u_1^*,v_{11}^*,v_{21}^*\in\mathbb{R}$, possibly depending on $p$. Therefore, if $(x,y,z)\ne(u_1^*,v_{11}^*,v_{21}^*)$, it holds that
$$-V_\eta(x1_{k_1-1},y1_{k_2-1},z1_{k_2-1},\dots,z1_{k_2-1})>-V_\eta(u_1^*1_{k_1-1},v_{11}^*1_{k_2-1},v_{21}^*1_{k_2-1},\dots,v_{21}^*1_{k_2-1}),$$
where the strict inequality follows from the uniqueness of the minimum. The above implies $\Phi(x,y,z;p)>\Phi(u_1^*,v_{11}^*,v_{21}^*;p)$ for all such $(x,y,z)$, because (S5.15) implies that $\Phi(x,y,z;p)=-V_\eta(x1_{k_1-1},y1_{k_2-1},z1_{k_2-1},\dots,z1_{k_2-1})$. Therefore, $\Phi(x,y,z;p)$ has a unique minimizer $(x^*(p),y^*(p),z^*(p))$ with $x^*(p)=u_1^*(p)$, $y^*(p)=v_{11}^*(p)$, and $z^*(p)=v_{21}^*(p)$, thus proving the first part of the lemma.

Now we will show that the $-\vartheta(\cdot;i,j)$'s and $\Phi(\cdot;p)$ are proper, closed, and strictly convex for any $i,j\in[2]$ and $p\in\mathbb{R}^4_{>0}$. Note that the $-\vartheta(\cdot;i,j)$'s and $\Phi$ are strictly convex because $-\eta$ is strictly convex. To show a convex function is proper, we need to show that (1) it never attains the value $-\infty$ and (2) its domain is non-empty (pp. 24 of Rockafellar, 1970). Since $-\eta$ is bounded below, (1) follows trivially for the $-\vartheta(\cdot;i,j)$'s and $\Phi$. Since $0_{k_1+k_2-2}\in\mathrm{dom}(-\eta)$ by Lemma S5.1, (S5.17) implies that $(0,0)\in\mathrm{dom}(-\vartheta(\cdot;i,j))$ for all $i,j\in[2]$. Similarly, (S5.18) implies that $(0,0,0)\in\mathrm{dom}(\Phi(\cdot;p))$ for all $p\in\mathbb{R}^4_{>0}$. Therefore, (2) follows for the $-\vartheta(\cdot;i,j)$'s and $\Phi(\cdot;p)$, so they are proper. Using Facts S11.1 and S11.2 (see Section S11), one can easily verify that the $-\vartheta(\cdot;i,j)$'s and $\Phi(\cdot;p)$ are closed. We have shown that, for each $p\in\mathbb{R}^4_{>0}$, the minimum of $\Phi(\cdot;p)$ is attained. Therefore, $\Phi(\cdot;p)$ is 0-coercive by Fact S5.1.

S5.4.6. Proof of Fact S5.3

Proof of Fact S5.3. Note that
$$-\Phi^*(p)=\sup_{w\in\mathbb{R}^3}\{-\Phi(w;p)\}=\sup_{(x,y,z)\in\mathbb{R}^3}\Big\{p_{11}\vartheta(x,y;1,1)
+p_{12}\vartheta(x,y;1,2)+p_{21}\vartheta(x,z;2,1)+p_{22}\vartheta(x,z;2,2)\Big\}=\varsigma_{\mathrm{Im}(\vartheta)}(p),\qquad(S5.39)$$
where $\varsigma$ is the support function defined in (1.1) and
$$\mathrm{Im}(\vartheta)=\{w\in\mathbb{R}^4:w=(\vartheta(x,y;1,1),\vartheta(x,y;1,2),\vartheta(x,z;2,1),\vartheta(x,z;2,2))\text{ for some }(x,y,z)\in\mathbb{R}^3\}$$
is the image set of $\vartheta$. Thus $-\Phi^*$ is the support function of $\mathrm{Im}(\vartheta)$, and hence a closed convex function. Consequently, it is continuous on the relative interior of its domain (cf. Remark 3.1.3, pp. 104, Hiriart-Urruty and Lemaréchal, 2004). However, because $\vartheta$ is bounded above, $-\Phi^*(p)<\infty$ for $p\in\mathbb{R}^4_{>0}$. Hence,
$$\mathrm{dom}(-\Phi^*)\supset\mathbb{R}^4_{>0}.\qquad(S5.40)$$
Since $\mathbb{R}^4_{>0}$ is open, $\mathbb{R}^4_{>0}\subset\mathrm{int}(\mathrm{dom}(-\Phi^*))$. Thus $\Phi^*$ is continuous everywhere on $\mathbb{R}^4_{>0}$. To show the finiteness of $\Phi^*$, note that if $\Phi^*(p)=\inf_{w\in\mathbb{R}^3}\Phi(w;p)=\infty$ for some $p\in\mathbb{R}^4_{>0}$, then $\Phi(w;p)=\infty$ for all $w\in\mathbb{R}^3$, which would imply that $\Phi(\cdot;p)$ is improper. However, Lemma S5.5 implies that $\Phi(\cdot;p)$ is proper. Thus $\Phi^*(p)<\infty$. Also, $-\Phi^*(p)<\infty$ for all $p\in\mathbb{R}^4_{>0}$ because $-\Phi^*$ is a convex function with $p\in\mathrm{dom}(-\Phi^*)$ by (S5.40). Therefore, $|\Phi^*(p)|<\infty$.

S5.4.7. Proof of Lemma S5.7

Proof of Lemma S5.7. First we prove the results on $\vartheta$. Lemma S5.1 implies that under the setup of Theorem 3.1, $0_{k_1+k_2-2}\in\mathrm{int}(\mathrm{dom}(-\eta))$. If $x,y$ are in a small neighborhood $U_0$ of $0$, then $(xA1_{k_1-1},yB1_{k_2-1})$ is in a small neighborhood of $0_{k_1+k_2-2}$ for all matrices $A$ and $B$ of appropriate dimensions. Noting $0_{k_1+k_2-2}\in\mathrm{int}(\mathrm{dom}(-\eta))$, we deduce that if $U_0$ is small enough, then $(xA_i1_{k_1-1},yB_j1_{k_2-1})\in\mathrm{int}(\mathrm{dom}(-\eta))$ for all $(x,y)\in U_0^2$, $i\in\{0,\dots,k_1-1\}$, and $j\in\{0,\dots,k_2-1\}$, where the $A_i$ and $B_j$'s are as defined in (S5.13) and (S5.14). Thus, $U_0^2\subset\mathrm{int}(\mathrm{dom}(\vartheta(\cdot;i,j)))$ for $i,j\in[2]$, which proves part 1 of the current lemma. On the other hand, if $(xA_i1_{k_1-1},yB_j1_{k_2-1})\in\mathrm{int}(\mathrm{dom}(-\eta))$, then $\eta$ is thrice continuously differentiable at $(xA_i1_{k_1-1},yB_j1_{k_2-1})$. Therefore, $\vartheta(\cdot;i,j)$ is also thrice continuously differentiable on $U_0^2$ if $U_0$ is a sufficiently small neighborhood, which proves part 3 of the current lemma. Now we will prove the results on $\Phi$.

Since for any $p\in\mathbb{R}^4_{>0}$ the function $\Phi(\cdot;p)$ is a non-negative linear combination of convex functions, namely the $-\vartheta(\cdot;i,j)$'s, by (S5.18), it follows that $\mathrm{dom}(\Phi(\cdot;p))=\cap_{i,j\in[2]}\mathrm{dom}(\vartheta(\cdot;i,j))$ (pp. 33, Rockafellar, 1970). Therefore, part 2 follows from part 1. Part 4 follows from part 3 due to (S5.18).

S5.4.8. Proof of Lemma S5.8

Proof of Lemma S5.8. The bulk of the proof is devoted to showing that there exists $\delta^*>0$ so that if $q\in B(p,\delta)$ for some $\delta\le\delta^*$, then $\Lambda^*(q)$ is uniformly bounded, in the sense that it lies in a compact set depending only on $p$ and $\vartheta$. Denote
$$\mathcal{S}^q_r=\{v\in\mathbb{R}^3:\Phi(v;q)\le r\}\quad\text{for all }q\in\mathbb{R}^4_{>0},$$
so that $\mathcal{S}^q_r$ is the level-$r$ sublevel set of $\Phi(\cdot;q)$. We will prove the boundedness of $\Lambda^*(q)$ in two main steps. The first step is to show that $\Lambda^*(q)\in\mathcal{S}^q_{\Phi^*(p)+C_1\delta}$ for some constant $C_1>0$. The second step will show that $\mathcal{S}^q_r\subset\mathcal{S}^p_{C(p,r)}$ for some bounded function $C(p,r)$ for all $r\in\mathbb{R}$ and $q\in B(p,\delta)$, as long as $\delta<\min(p)/2\wedge1$. The rest of the boundedness proof will follow from the uniqueness
of the minimizer of $\Phi(\cdot;p)$, which follows from Lemma S5.5. We will take $\delta^*=\min(p)/2\wedge1$. Note that if $q\in B(p,\delta)$ with $\delta\le\delta^*$, then $q\in\mathbb{R}^4_{>0}$ automatically because $\min(q)>\min(p)-\delta\ge\min(p)/2>0$. First of all, since $-\Phi^*$ is a convex function, it is locally Lipschitz on the interior of its domain (Theorem 3.1.2, pp. 103, Hiriart-Urruty and Lemaréchal, 2004). However, (S5.40) implies that $\mathrm{dom}(-\Phi^*)\supset\mathbb{R}^4_{>0}$. Therefore, for all $q\in B(p,\delta)$,
$$|\Phi^*(p)-\Phi^*(q)|\le L_p\|p-q\|_2$$
for some constant $L_p>0$. Therefore, for $q\in B(p,\delta)$,
$$\Phi(\Lambda^*(q);q)=\Phi^*(q)\le\Phi^*(p)+L_p\|p-q\|_2.$$
Noting $\Phi^*(p)+L_p\|p-q\|_2\in\mathbb{R}$ by Fact S5.3, we deduce that when $\delta\le\delta^*$,
$$\Lambda^*(q)\in\mathcal{S}^q_{\Phi^*(p)+L_p\|p-q\|_2}\quad\text{for all }q\in B(p,\delta),\qquad(S5.41)$$
which implies
$$\Lambda^*(q)\in\mathcal{S}^q_{\Phi^*(p)+L_p\delta}\quad\text{for all }q\in B(p,\delta).\qquad(S5.42)$$
Now we will prove that if $\delta\le\delta^*$, then
$$\mathcal{S}^q_r\subset\mathcal{S}^p_{r+8\delta(C\|p\|_1+2C+|r|)/\min(p)}\quad\text{for all }q\in B(p,\delta).\qquad(S5.43)$$
Suppose $v=(x,y,z)\in\mathcal{S}^q_r$ for some $r\in\mathbb{R}$. Thus
$$-\vartheta(x,y;1,1)q_{11}-\vartheta(x,y;1,2)q_{12}-\vartheta(x,z;2,1)q_{21}-\vartheta(x,z;2,2)q_{22}\le r.$$
Note that the $\vartheta(\cdot;i,j)$'s are bounded above by some constant $C>0$ for $i,j\in[2]$ by Lemma S5.11. Thus $-\vartheta(\cdot;i,j)>-C$ for all $i,j\in[2]$, and the above display implies
$$-\vartheta(x,y;1,1)q_{11}-C(q_{12}+q_{21}+q_{22})\le r.$$
Therefore, we have obtained that
$$-\vartheta(x,y;1,1)\le\frac{C(q_{12}+q_{21}+q_{22})+r}{q_{11}}<\frac{C\|q\|_1+|r|}{\min(q)},$$
where we used the fact that $\min(q)>0$. Note that $\vartheta(x,y;1,1)\le C\le C\|q\|_1/\min(q)$. Therefore,
$$|\vartheta(x,y;1,1)|\le\frac{C\|q\|_1+|r|}{\min(q)}.$$
We will express the above bound in terms of $p$. By the triangle inequality,
$$\|q\|_1\le\|q-p\|_1+\|p\|_1\le2\|p-q\|_2+\|p\|_1,$$
because $\|p-q\|_1\le2\|p-q\|_2$ by the Cauchy–Schwarz inequality (the vectors lie in $\mathbb{R}^4$). Also,
$$\min(q)\ge\min(p)-\|p-q\|_\infty\ge\min(p)-\|p-q\|_2,$$
because $\|p\|_\infty\le\|p\|_2$ for any vector $p$. Since $q\in B(p,\delta)$ and $\delta\le\delta^*$, we have $\|p-q\|_2<\delta^*\le\min(p)/2$. Hence $\min(q)\ge\min(p)-\|p-q\|_2\ge\min(p)/2$, and the following bound holds:
$$|\vartheta(x,y;1,1)|\le\frac{C\|q\|_1+|r|}{\min(q)}\le\frac{2C\|p\|_1+2C\|p-q\|_2+|r|}{\min(p)}.$$
Similarly, we can show that
$$|\vartheta(x,y;1,2)|,\ |\vartheta(x,z;2,1)|,\ |\vartheta(x,z;2,2)|\le\frac{2C\|p\|_1+2C\|p-q\|_2+|r|}{\min(p)}.$$
Therefore, if $v\in\mathcal{S}^q_r$, then
$$|\Phi(v;p)-\Phi(v;q)|\le\big(|\vartheta(x,y;1,1)|+|\vartheta(x,y;1,2)|+|\vartheta(x,z;2,1)|+|\vartheta(x,z;2,2)|\big)\|p-q\|_\infty\le8\|p-q\|_2\,\frac{C\|p\|_1+2C\|p-q\|_2+|r|}{\min(p)},\qquad(S5.44)$$
where we again used the fact that $\|p\|_\infty\le\|p\|_2$ for any vector $p$. Since $\Phi(v;q)\le r$ and $\|p-q\|_2<\delta\le\delta^*\le1$, it follows that for all $q\in B(p,\delta)$,
$$\Phi(v;p)\le r+8\|p-q\|_2\,\frac{C\|p\|_1+2C\|p-q\|_2+|r|}{\min(p)}\le r+8\|p-q\|_2\,\frac{C\|p\|_1+2C+|r|}{\min(p)},\qquad(S5.45)$$
which completes the proof of (S5.43). Now suppose $r=\Phi^*(p)+L_p\|p-q\|_2$. Then
$$r+8\|p-q\|_2\,\frac{C\|p\|_1+2C+|r|}{\min(p)}\le\Phi^*(p)+L_p\|p-q\|_2+8\|p-q\|_2\,\frac{C\|p\|_1+2C+|\Phi^*(p)+L_p\|p-q\|_2|}{\min(p)}=\Phi^*(p)+\|p-q\|_2\underbrace{\Big(L_p+8\,\frac{C\|p\|_1+2C+|\Phi^*(p)|+L_p}{\min(p)}\Big)}_{C_p},\qquad(S5.46)$$
where we used the fact that $\|p-q\|_2\le\delta\le\delta^*\le1$. Since $|\Phi^*(p)|<\infty$ by Fact S5.3, $C_p<\infty$. Hence, if $\delta\le\delta^*$, then (S5.41), (S5.45), and (S5.46) imply that
$$\Lambda^*(q)\in\mathcal{S}^p_{\Phi^*(p)+C_p\|p-q\|_2}\subset\mathcal{S}^p_{\Phi^*(p)+\delta C_p}\qquad(S5.47)$$
for all $q\in B(p,\delta)$. Since $\Phi(\cdot;p)$ is 0-coercive for all $p\in\mathbb{R}^4_{>0}$ by Lemma S5.11, the sublevel sets of $\Phi(\cdot;p)$ are compact by Proposition 3.2.4, pp. 107, of Hiriart-Urruty and Lemaréchal (2004). Hence, $\mathcal{S}^p_{\Phi^*(p)+\delta C_p}$ is compact. Now it remains to prove that if $q_k\to p$, then $\Lambda^*(q_k)\to\Lambda^*(p)$. First of all, note that if $q_k\to p$, then given any small $\delta>0$, $q_k\in B(p,\delta)$ for all sufficiently large $k$. Let us take $\delta<\delta^*$. Then the above calculations imply that the sequence $\Lambda^*(q_k)$ lies in a compact set for all large $k$, and hence is bounded. To show $\Lambda^*(q_k)\to\Lambda^*(p)$, it suffices to show that
given any subsequence of $\Lambda^*(q_k)$, there exists a further subsequence that converges to $\Lambda^*(p)$. Since any subsequence of $\Lambda^*(q_k)$ is bounded, we can always extract a convergent subsequence from it. We will show that its limit is $\Lambda^*(p)$. If possible, suppose the limit is some $w\in\mathbb{R}^3$ for a subsequence, where $w$ may depend on the particular subsequence. The proof will be complete if we can show that $w=\Lambda^*(p)$ for any such subsequence. To streamline notation, we will denote this subsequence by $\Lambda^*(q_k)$ as well. $\Phi(\cdot;p)$ is closed by Lemma S5.5. By Proposition 1.2.2 of Hiriart-Urruty and Lemaréchal (2004), the sublevel sets of a closed function are closed (possibly empty). Since $q_k\in B(p,\delta^*)$ for all sufficiently large $k$, using (S5.47) we obtain that $\Lambda^*(q_k)\in\mathcal{S}^p_{\Phi^*(p)+\delta C_p}$ for all large $k\in\mathbb{N}$. Since $\mathcal{S}^p_{\Phi^*(p)+\delta C_p}$ is closed and $\Lambda^*(q_k)\to w$, we have $w\in\mathcal{S}^p_{\Phi^*(p)+\delta C_p}$. Thus $w\in\mathrm{dom}(\Phi(\cdot;p))$. However, we do not know whether $w\in\mathrm{int}(\mathrm{dom}(\Phi(\cdot;p)))$, so we do not know whether $\Phi(\cdot;p)$ is continuous at $w$. However, since $\Phi(\cdot;p)$ is closed,
$$\liminf_k\Phi(\Lambda^*(q_k);p)\ge\Phi(w;p)$$
since $\Lambda^*(q_k)\to w$ (for a lower semicontinuous function $f$, $\liminf_{y\to x}f(y)\ge f(x)$; cf. pp. 55 of Rockafellar, 1970). Note that
$$\limsup_k\big(\Phi(w;p)-\Phi(\Lambda^*(q_k);q_k)\big)\le\limsup_k\big(\Phi(w;p)-\Phi(\Lambda^*(q_k);p)+\Phi(\Lambda^*(q_k);p)-\Phi(\Lambda^*(q_k);q_k)\big).\qquad(S5.48)$$
Moreover,
$$\limsup_k\big(\Phi(w;p)-\Phi(\Lambda^*(q_k);p)\big)=\Phi(w;p)-\liminf_k\Phi(\Lambda^*(q_k);p)\le0$$
because we have already argued that $\liminf_k\Phi(\Lambda^*(q_k);p)\ge\Phi(w;p)$. (S5.42) implies that $\Lambda^*(q)\in\mathcal{S}^q_r$ with $r=\Phi^*(p)+L_p\delta^*$ as long as $q\in B(p,\delta^*)$. Then by (S5.44),
$$\Phi(\Lambda^*(q_k);p)-\Phi(\Lambda^*(q_k);q_k)\le8\|p-q_k\|_2\,\frac{C\|p\|_1+2C\|p-q_k\|_2+|r|}{\min(p)}.$$
Since $|\Phi^*(p)|<\infty$, it follows that $|r|<\infty$. Therefore, using the fact that $q_k\to p$, we deduce that
$$\limsup_k\big(\Phi(\Lambda^*(q_k);p)-\Phi(\Lambda^*(q_k);q_k)\big)=0.$$
Hence,
$$\limsup_k\big(\Phi(w;p)-\Phi(\Lambda^*(q_k);p)\big)+\limsup_k\big(\Phi(\Lambda^*(q_k);p)-\Phi(\Lambda^*(q_k);q_k)\big)$$
is well-defined and non-positive. Therefore,
$$\limsup_k\big(\Phi(w;p)-\Phi(\Lambda^*(q_k);p)+\Phi(\Lambda^*(q_k);p)-\Phi(\Lambda^*(q_k);q_k)\big)\le\limsup_k\big(\Phi(w;p)-\Phi(\Lambda^*(q_k);p)\big)+\limsup_k\big(\Phi(\Lambda^*(q_k);p)-\Phi(\Lambda^*(q_k);q_k)\big)$$
is non-positive.

Therefore, (S5.48) implies that
$$0\ge\Phi(w;p)-\liminf_k\Phi(\Lambda^*(q_k);q_k)=\Phi(w;p)-\liminf_k\Phi^*(q_k)=\Phi(w;p)-\Phi^*(p),$$
where the last step follows from (S5.40) because $-\Phi^*$ is continuous on $\mathbb{R}^4_{>0}$ by Fact S5.3. Therefore, we have obtained that $\Phi(w;p)\le\Phi^*(p)$. Lemma S5.5 implies that $\Phi(\cdot;p)$ has a unique minimum for each $p\in\mathbb{R}^4_{>0}$. Therefore, $w=\Lambda^*(p)$, which completes the proof.

S5.4.9. Proof of Lemma S5.10

Proof of Lemma S5.10. We will prove the result for $y^*(p)$ only, because the proof for $z^*(p)$ follows similarly. Before proving the lemma, we collect some results that will be required later in the proof. Since Lemma S5.6 implies $\Lambda^*(p^0)=(0,0,0)$, $\Phi(\cdot;p^0)$ is minimized at $(0,0,0)$. Lemma S5.7 implies that $\Phi(\cdot;p)$ is differentiable at $(0,0,0)$ for all $p\in\mathbb{R}^4_{>0}$, and in particular at $p^0$, which implies $\nabla\Phi(0,0,0;p^0)=0$, yielding
$$\frac{\partial\vartheta(0,0;1,1)}{\partial x}+\frac{\partial\vartheta(0,0;1,2)}{\partial x}+\frac{\partial\vartheta(0,0;2,1)}{\partial x}+\frac{\partial\vartheta(0,0;2,2)}{\partial x}=0,$$
$$\frac{\partial\vartheta(0,0;1,1)}{\partial y}+\frac{\partial\vartheta(0,0;1,2)}{\partial y}=0,\qquad\frac{\partial\vartheta(0,0;2,1)}{\partial y}+\frac{\partial\vartheta(0,0;2,2)}{\partial y}=0.\qquad(S5.49)$$
Let us also define
$$M_{12}(x,y)=\frac{\partial^2\vartheta(x,y;1,1)}{\partial x\partial y}+\frac{\partial^2\vartheta(x,y;1,2)}{\partial x\partial y},\qquad N_{12}(x,y)=\frac{\partial^2\vartheta(x,y;2,1)}{\partial x\partial y}+\frac{\partial^2\vartheta(x,y;2,2)}{\partial x\partial y},\qquad(S5.50)$$
where $M_{12}(x,y)$ and $N_{12}(x,y)$ exist if $x,y\in U_0$, and in particular at $(x,y)=(0,0)$. The following lemma, proved in Section S5.5.1, provides the values
of $M_{12}(0,0)$ and $N_{12}(0,0)$.

Lemma S5.14. $M_{12}(0,0)=N_{12}(0,0)=0$ when $M_{12}$ and $N_{12}$ are continuous functions.

The following lemma proves the Hölder continuity of $\Lambda^*$ at all $p$ in a neighborhood of $p^0$. Lemma S5.15 is proved in Section S5.5.3.

Lemma S5.15. Consider the setup in Theorem 3.1. Suppose $p\in B(p^0,\delta_0)$, where $\delta_0$ is as in Lemma S5.9. Then there exists $\delta_p>0$, possibly depending on $p$, such that if $q\in B(p,\delta_p)$, then $\|\Lambda^*(p)-\Lambda^*(q)\|_2\le C_p\sqrt{\|p-q\|_2}$, where $C_p$ depends only on $\vartheta$ and $p$.

We will choose $\delta>0$ in a way so that a few conditions are met. In particular, we let $\delta=\min(\delta_1,\delta_2,\delta_3,\delta_4,\delta_5)$, where $\delta_1,\delta_2,\delta_3,\delta_4$, and $\delta_5$ are as explained below.

1. Let $\delta_1>0$ be such that $\Lambda^*(p)\in U_0^3$ for all $p\in B(p^0,\delta_1)$. Such a $\delta_1$ exists by Lemma S5.9.

2. Let $\delta_2>0$ be such that for all $p\in B(p^0,\delta_2)$, $\|\Lambda^*(p)-\Lambda^*(p^0)\|_\infty=\|\Lambda^*(p)\|_\infty<2$. Such a $\delta_2$ exists by Lemma S5.8.

3. Let us denote
$$C=\frac{\partial^2\vartheta(0,0;1,1)}{\partial y^2}+\frac{\partial^2\vartheta(0,0;1,2)}{\partial y^2}.$$
Note that $C$ is a negative number because $\vartheta$ is strictly concave. If $p$ is in a small neighborhood of $p^0=(1,1,1,1)$, then $|p_{11}-1|$ and $|p_{12}-1|$ are small. If $\delta$ is sufficiently small, then $(x^*(p),y^*(p))$ will be sufficiently close to $(0,0)$ for all $p\in B(p^0,\delta)$ by Lemma S5.8. Also, $\frac{\partial^2\vartheta(x,y;1,1)}{\partial y^2}$ and $\frac{\partial^2\vartheta(x,y;1,2)}{\partial y^2}$ are close to $\frac{\partial^2\vartheta(0,0;1,1)}{\partial y^2}$ and $\frac{\partial^2\vartheta(0,0;1,2)}{\partial y^2}$ if $(x,y)$ is sufficiently close to $(0,0)$, because the second-order partial derivatives of the $\vartheta(\cdot;i,j)$'s are continuous at $(0,0)$ by Lemma S5.7. Therefore, there exists $\delta_3>0$ so that if $\delta<\delta_3$, then for all $p\in B(p^0,\delta)$,
$$p_{11}\frac{\partial^2\vartheta(x^*(p),\xi;1,1)}{\partial y^2}+p_{12}\frac{\partial^2\vartheta(x^*(p),\xi';1,2)}{\partial y^2}\in(2C,C/2)\qquad(S5.51)$$
for all $\xi$ and $\xi'$ between $0$ and $y^*(p)$.

4. Since the third-order partial derivatives of the $\vartheta(\cdot;i,j)$'s are continuous on $U_0^2$ by Lemma S5.7, and $\Lambda^*(p)$ is continuous with $\Lambda^*(p)\in U_0^3$ for all $p$ in a small neighborhood of $p^0$ by Lemma S5.8, we can choose $\delta$ so small that for all $p\in B(p^0,\delta)$,
$$\Big|p_{11}\frac{\partial^3\vartheta(\xi_1,0;1,1)}{\partial x^2\partial y}+p_{12}\frac{\partial^3\vartheta(\xi_2,0;1,2)}{\partial x^2\partial y}\Big|\le p_{11}\Big|\frac{\partial^3\vartheta(0,0;1,1)}{\partial x^2\partial y}\Big|+p_{12}\Big|\frac{\partial^3\vartheta(0,0;1,2)}{\partial x^2\partial y}\Big|+1$$
for all $\xi_1$ and $\xi_2$ between $0$ and $x^*(p)$. We will refer to this $\delta$ by $\delta_4$. We will also choose $\delta_4$ to satisfy $\delta_4<1$. Then $p_{11},p_{12}<2$ because $p$ satisfies $\|p-p^0\|_2<\delta$. Therefore, the last display implies
$$\Big|p_{11}\frac{\partial^3\vartheta(\xi_1,0;1,1)}{\partial x^2\partial y}+p_{12}\frac{\partial^3\vartheta(\xi_2,0;1,2)}{\partial x^2\partial y}\Big|\le2\Big(\Big|\frac{\partial^3\vartheta(0,0;1,1)}{\partial x^2\partial y}\Big|+\Big|\frac{\partial^3\vartheta(0,0;1,2)}{\partial x^2\partial y}\Big|\Big)+1\qquad(S5.52)$$
for all $\xi_1$ and $\xi_2$ between $0$ and $x^*(p)$.

5. Let $\delta_5>0$ be such that for all $p\in B(p^0,\delta_5)$, $\|\Lambda^*(p)-\Lambda^*(p^0)\|_2<C\sqrt{\|p-p^0\|_2}$ for some $C>0$ depending only on $\vartheta$. The existence of such a $\delta_5$ is guaranteed by Lemma S5.15. Note that since $\Lambda^*(p^0)=0_3$ by Lemma S5.6, the above implies that $|x^*(p)|=|x^*(p)-x^*(p^0)|<C\sqrt{\|p-p^0\|_2}$ for all $p\in B(p^0,\delta_5)$.

The $\vartheta(\cdot;i,j)$'s defined in (S5.17) are thrice continuously differentiable on $U_0^2$ for all $i,j\in[2]$, and $\Phi(\cdot;p)$ is thrice continuously differentiable on $U_0^3$ for all $p\in\mathbb{R}^4_{>0}$. By our choice of $\delta$, $\Lambda^*(p)\in U_0^3$ for all $p\in B(p^0,\delta)$. Therefore, the $\vartheta(\cdot;i,j)$'s and $\Phi(\cdot;p)$ will be thrice continuously differentiable at $\Lambda^*(p)$ if $p\in B(p^0,\delta)$. Therefore, for all $p\in B(p^0,\delta)$, $\partial\Phi(\Lambda^*(p);p)/\partial y$ exists, and
$$\frac{\partial\Phi(\Lambda^*(p);p)}{\partial y}=-p_{11}\frac{\partial\vartheta(x^*(p),y^*(p);1,1)}{\partial y}-p_{12}\frac{\partial\vartheta(x^*(p),y^*(p);1,2)}{\partial y}=0,$$
where the last step follows because $\Lambda^*(p)$ is the minimizer of $\Phi(\cdot;p)$. Since
https://arxiv.org/abs/2505.17285v1
U0 contains both 0 and y∗(p), the real-valued functions y ↦ ∂ϑ(x∗(p), y; 1,1)/∂y and y ↦ ∂ϑ(x∗(p), y; 1,2)/∂y are differentiable on an interval containing 0 and y∗(p). Therefore, we can take a first-order Taylor expansion of these functions around y = 0, evaluated at y∗(p), which leads to

p11 ∂ϑ(x∗(p), 0; 1,1)/∂y + p12 ∂ϑ(x∗(p), 0; 1,2)/∂y + y∗(p) [ p11 ∂2ϑ(x∗(p), ξ; 1,1)/∂y2 + p12 ∂2ϑ(x∗(p), ξ′; 1,2)/∂y2 ] = 0, (S5.53)

where ξ and ξ′ are between 0 and y∗(p). The case y∗(p) = 0 need not be considered because the proof follows trivially when y∗(p) = 0. We will prove that for any y∗(p) ≠ 0,

|y∗(p)| ≤ (2/|C|) | p11 ∂ϑ(x∗(p), 0; 1,1)/∂y + p12 ∂ϑ(x∗(p), 0; 1,2)/∂y |. (S5.54)

First, we consider the case when y∗(p) is positive.

y∗(p) is positive. In this case, (S5.53) leads to

p11 ∂ϑ(x∗(p), 0; 1,1)/∂y + p12 ∂ϑ(x∗(p), 0; 1,2)/∂y = −y∗(p) [ p11 ∂2ϑ(x∗(p), ξ; 1,1)/∂y2 + p12 ∂2ϑ(x∗(p), ξ′; 1,2)/∂y2 ] (a)≥ −y∗(p)C/2 = |y∗(p)||C|/2,

where (a) follows from (S5.51) and the last step uses C < 0. As mentioned previously, |C| is non-zero, which implies (S5.54).

y∗(p) is negative. In this case, using (S5.51) again, we obtain that

p11 ∂ϑ(x∗(p), 0; 1,1)/∂y + p12 ∂ϑ(x∗(p), 0; 1,2)/∂y = −y∗(p) [ p11 ∂2ϑ(x∗(p), ξ; 1,1)/∂y2 + p12 ∂2ϑ(x∗(p), ξ′; 1,2)/∂y2 ] ≤ −(C/2) y∗(p) = −|y∗(p)||C|/2,

where we use the fact C < 0. Thus

|y∗(p)| ≤ −(2/|C|) [ p11 ∂ϑ(x∗(p), 0; 1,1)/∂y + p12 ∂ϑ(x∗(p), 0; 1,2)/∂y ],

which implies (S5.54) because the last display implies that when y∗(p) < 0,

p11 ∂ϑ(x∗(p), 0; 1,1)/∂y + p12 ∂ϑ(x∗(p), 0; 1,2)/∂y < 0.

To complete the proof, we need to upper bound the right-hand side of (S5.54). To this end, note that Lemma S5.7 implies that the univariate functions x ↦ ∂ϑ(x, 0; 1,1)/∂y and x ↦ ∂ϑ(x, 0; 1,2)/∂y are twice continuously differentiable on U0. Our choice of δ ensures that x∗(p) ∈ U0 for all p ∈ B(p0, δ). Therefore, the above univariate functions are twice continuously differentiable on an open interval containing 0 and x∗(p) for all p ∈ B(p0, δ).
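The bound (S5.54) is an instance of a generic mean-value inequality: if h′(y∗) = 0 and h′′ stays in (2C, C/2) for some C < 0 between 0 and y∗, then |y∗| ≤ (2/|C|)|h′(0)|. A toy numerical sketch (the cubic h′ below is illustrative only, not the paper's ϑ):

```python
# Toy check of the mean-value bound behind (S5.54): h'(y*) = 0 and
# h''(xi) in (2C, C/2) with C = h''(0) < 0 imply |y*| <= (2/|C|) |h'(0)|.
b = 0.3                                   # plays the role of the gradient at 0
h1 = lambda y: -2.0 * y - 0.4 * y**3 + b  # h'(y); strictly decreasing
C = -2.0                                  # h''(0)

# locate y* with h'(y*) = 0 by bisection on [-1, 1]
lo, hi = -1.0, 1.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if h1(mid) > 0:
        lo = mid
    else:
        hi = mid
y_star = 0.5 * (lo + hi)

assert abs(h1(y_star)) < 1e-10            # stationarity
assert abs(y_star) <= (2.0 / abs(C)) * abs(h1(0.0))  # the (S5.54)-type bound
print("mean-value bound holds")
```

The same structure appears in the proof with h′(y) = p11 ∂ϑ(x∗(p), y; 1,1)/∂y + p12 ∂ϑ(x∗(p), y; 1,2)/∂y.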
A second order Taylor series expansion of the function x7→p11∂ϑ(x,0;1,1) ∂y+p12∂ϑ(x,0;1,2) ∂yatx∗(p) around 0 yields p11∂ϑ(x∗(p),0; 1,1) ∂y+p12∂ϑ(x∗(p),0; 1,2) ∂y =p11∂ϑ(0,0; 1,1) ∂y+p12∂ϑ(0,0; 1,2) ∂y+x∗(p)p11∂2ϑ(0,0; 1,1) ∂x∂y +x∗(p)p12∂2ϑ(0,0; 1,2) ∂x∂y+x∗(p)2 2 p11∂3ϑ(ξ1,0; 2,2) ∂x2∂y+p12∂3ϑ(ξ2,0; 1,2) ∂x2∂y , (S5.55) where ξ1andξ2are between 0 and x∗(p). Note that by our choice of δand (S5.52), there exists a constant C′, depending only on ϑ, so that p11∂3ϑ(ξ1,0; 2,2) ∂x2∂y+p12∂3ϑ(ξ2,0; 1,2) ∂x2∂y < C′. (S5.56) Using Lemma S5.14 and (S5.49), we obtain that p11∂ϑ(0,0; 1,1) ∂y+p12∂ϑ(0,0; 1,2) ∂y+x∗(p) p11∂2ϑ(0,0; 1,1) ∂x∂y+p12∂2ϑ(0,0; 1,2) ∂x∂y = (p11−p12)∂ϑ(0,0; 1,1) ∂y+x∗(p)∂2ϑ(0,0; 1,1) ∂x∂y | {z } C1(p), which implies p11∂ϑ(0,0; 1,1) ∂y+p12∂ϑ(0,0; 1,2) ∂y+x∗(p) p11∂2ϑ(0,0; 1,1) ∂x∂y+p12∂2ϑ(0,0; 1,2) ∂x∂y ≤ |p1−p2||C1(p)|. (S5.57) Since we have chosen δso that Λ∗(p)<2 for all p∈B(p0, δ), it follows that |x∗(p)|<2. Therefore, |C1(p)| ≤ ∂ϑ(0,0; 1,1) ∂y + ∂2ϑ(0,0; 1,1) ∂x∂y , (S5.58) which depends only on ϑ. On the other hand, since p0= (1,1,1,1), it holds that |p1−p2|=|p1−p0 1+p0 2−p2| ≤ ∥p−p0∥1≤√ 2∥p−p0∥2, where the last step follows by Cauchy-Schwartz inequality. Hence, combining (S5.55), (S5.56), (S5.57), and (S5.58), we obtain that there exist constants C′>0 and C2>0, depending only on ϑ, so that p11∂ϑ(x∗(p),0; 1,1) ∂y+p12∂ϑ(x∗(p),0; 1,2) ∂y ≤C′x∗(p)2+C2∥p−p0∥2. (S5.59) Since δ < δ 5, there exists C >0, depending only on ϑ, so that x∗(p)2≤C∥p−p0∥2. Hence, the proof follows from (S5.54)
and (S5.59). S5.4.10. Proof of Lemma S5.11 Proof of Lemma S5.11. By Lemma S5.7, there exists an open neighborhood U0∋0 such that U2 0⊂int(dom(Φ( ·;p)) for all p∈R4 >0. Lemma S5.8 implies that we can choose a δ >0 so small such that Λ∗(p)∈U3 0for all p∈B(p0, δ). We will consider that p∈B(p0, δ) for the rest of this proof. S5 PROOF OF THEOREM 3.1/S5.4 Proofs of main lemmas 76 Proving P1 Lemma S5.5 implies that u∗(p) =x∗(p)1k1−1,v∗ 1(p) =y∗(p)1k2−1andv∗ i(p) =z∗(p)1k2−1for all i∈[2 :k1], where these vectors are as defined in Lemma S5.4. Therefore (S5.10) implies that x∗(p) = (0 ,−x∗(p)1k1−1),y∗ 1(p) = (0 ,−y∗(p)1k2−1), y∗ 2(p) =. . . ,y∗ k1(p) = (0 ,−z∗(p)1k2−1) is a maximizer of Vψ. Hence, a version of ˜dis given by ˜d1= pred(0 ,−x∗(p)1k1−1) and ˜d2(i) =( pred(0 ,−y∗(p)1k1−1) if i= 1 pred(0 ,−z∗(p)1k2−1) if i∈[2 :k1].(S5.60) Note that ˜d1(or˜d2) can take only two values, one and k1(ork2). We remind the reader that although we consider p∈R4 >0 forP∈ Pk1k2 b, this pis the abbreviation of ( pij)i∈[k1],j∈[k2]. However, for P∈ Pk1k2 b,p1j=p12ifj≥2,pi1=p21fori≥2 andpi2=p22fori≥2. Therefore, to avoid redundancy, we only consider the four elements of palthough pijexists for all i∈[k1] and j∈[k2]. In light of the above, we can keep using the expression V(˜d) =p˜d1,˜d2(˜d1)forP∈ Pk1,k2 bas well. Since V(d) =pd1,d2(d1)for any DTR d,V(˜d) =p˜d1,˜d2(˜d1). Subcase max(p11,p12)>max(p21,p22):Ifψis Fisher consistent, then Definition 2.1 implies that V(˜d) = V∗= max(p11,p12). Therefore, p˜d1,˜d2(˜d1)= max( p11,p12). Hence, ˜d1̸= 2,3, . . . , k 1because in these cases, p˜d1,˜d2(˜d1)≤max(p21,p22)< V∗. Thus ˜d1= 1, which implies pred(0 ,−x∗(p)1k1−1) = 1, which implies max(argmax(0 ,−x∗(p))) = 1. Therefore, 0 >−x∗(p) orx∗(p)>0. Subcase max(p21,p22)>max(p11,p12):In this case, proceeding in the same way as the previous case, we can similarly prove that pred(0 ,−x∗(p)1k1−1) =k1, which implies max(argmax(0 ,−x∗(p)1k1−1)) = k1. However, this only implies that x∗(p)≤0. 
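The tie-breaking convention pred(x) = max(argmax(x)) drives the case analysis above: x∗(p) > 0 makes pred(0, −x∗(p)1k1−1) = 1, while x∗(p) ≤ 0 makes it k1. A minimal sketch, with a hypothetical pred helper mirroring that convention:

```python
def pred(x):
    """max(argmax(x)): the largest 1-based index attaining the maximum of x."""
    m = max(x)
    return max(i + 1 for i, v in enumerate(x) if v == m)

# pred applied to (0, -x* 1_{k-1}) as in the proof of Property P1:
k = 4
x_pos, x_neg = 0.7, -0.7
assert pred([0.0] + [-x_pos] * (k - 1)) == 1   # x* > 0 selects arm 1
assert pred([0.0] + [-x_neg] * (k - 1)) == k   # x* < 0 selects arm k
assert pred([0.0] * k) == k                    # x* = 0: ties resolve to the last arm
```

The last assertion is why x∗(p) = 0 must be ruled out separately: a tie still yields pred = k1, but the argument needs strict negativity.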
Therefore, we still need to show that x∗(p)̸= 0. If possible, suppose x∗(p) = 0 for some p∈B(p0, δ). Then for thisp,x∗(p) =0k1. Consider the sequence wn= (1/n,0k1−1). Our next step is to show that Vψ(wn,y∗ 1(p), . . . ,y∗ k1(p);p)→nVψ ∗. To this end, it suffices to show that Vη(∆wn,v∗ 1(p), . . . ,v∗ k1(p);p)→n Vη ∗. Note that ∆ wn= 1k1−1/n. Also, Lemma S5.5 implies that v∗ 1(p) =y∗(p)1k2−1andv∗ 2(p) =. . .=v∗ k1(p) =z∗(p)1k2−1. Therefore, by (S5.19), Vη(∆wn,v∗ 1(p), . . . ,v∗ k1(p);p) =−Φ(1/n, y∗(p), z∗(p);p) and Vη ∗=−Φ∗(p). Therefore, it suffices to show that Φ(1 /n, y∗(p), z∗(p);p)→nΦ∗(p). Since Λ∗(p) = (0 , y∗(p), z∗(p))∈U3 0, it follows that (0 , y∗(p), z∗(p))∈ int(dom(Φ( ·;p))). Since Φ( ·;p) is convex by Lemma S5.5, the above implies that Φ( ·;p) is continuous at (0 , y∗(p), z∗(p)) because convex functions are continuous on the interior of its domain (cf. pp. 104 of Hiriart-Urruty and Lemar´ echal, 2004). Therefore, lim nΦ(1/n, y∗(p), z∗(p);p) = Φ(0 , y∗(p), z∗(p);p) = Φ∗(p), where we used the fact that Λ∗(p) = (0 , y∗(p), z∗(p)). Therefore, it follows that Vψ(wn,y∗ 1(p), . . . ,y∗ k1(p);p)→nVψ ∗. Ifψ is Fisher consistent, then the above implies
that V(˜dn) → V∗ = max(p) = max(p21, p22), where the DTR ˜dn = (˜d1n, ˜d2n) corresponds to (wn, y∗1(p), …, y∗k1(p)). Note that ˜dn satisfies

˜d1n = pred(∆wn), ˜d2n(i) = pred(0, −y∗(p)1k2−1) if i = 1, and pred(0, −z∗(p)1k2−1) if i ∈ [2 : k1]. (S5.61)

Thus in this case, ˜d1n = pred(0, −1k1−1/n) = 1. Since V(˜d) = V(d1, d2(d1)) for any DTR d ≡ (d1, d2) in Pk1,k2, V(˜dn) = p1,d2n(1). Therefore, V(˜dn) ≤ max(p11, p12) < max(p21, p22) = V∗. The above implies lim supn V(˜dn) < V∗, which violates V(˜dn) → V∗. Therefore, we arrive at a contradiction if we assume x∗(p) = 0. Therefore, if p ∈ B(p0, δ), then x∗(p) < 0 if max(p21, p22) > max(p11, p12).

Proving P2 Case 1: max(p11, p12) > max(p21, p22). Suppose p11 > p12. Then V∗ = max(p) = p11 and pij < V∗ unless (i, j) = (1, 1). If ψ is Fisher consistent, then V(˜d) = p˜d1,˜d2(˜d1) = p11. Therefore, ˜d2(˜d1) = 1. We have already proved that x∗(p) > 0 in this case, implying ˜d1 = pred(0, −x∗(p)1k1−1) = 1. Therefore, ˜d2(1) = 1. Hence, (S5.60) implies that pred(0, −y∗(p)1k2−1) = 1, which indicates that max(argmax(0, −y∗(p)1k2−1)) = 1. Therefore, 0 > −y∗(p), which implies y∗(p) > 0. Now if p11 < p12, then proceeding as before, we arrive at max(argmax(0, −y∗(p)1k2−1)) = k2, which only ensures that y∗(p) ≤ 0. As before, let us assume y∗(p) = 0 for some p ∈ B(p0, δ). We will show that the assumption y∗(p) = 0 leads to a contradiction. Consider the sequence (x∗(p), 1/n, z∗(p)), which obviously converges to Λ∗(p) as n → ∞. Let us also denote wn = (1/n, 0k2−1). Then, proceeding as in the case x∗(p) = 0, we can show that Φ(x∗(p), 1/n, z∗(p); p) →n Φ∗(p). As before, it follows that Vψ∗ = −Φ∗(p) and Vψ(x∗(p), wn, y∗2(p), …, y∗k1(p)) = −Φ(x∗(p), 1/n, z∗(p); p). Therefore, Vψ(x∗(p), wn, y∗2(p), …, y∗k1(p)) →n Vψ∗. Suppose ˜dn is the DTR associated with (x∗(p), wn, y∗2(p), …, y∗k1(p)). We can show that

˜d1n = pred(0, −x∗(p)1k1−1), ˜d2n(i) = pred(∆wn) if i = 1, and pred(0, −z∗(p)1k2−1) if i ∈ [2 : k1].

Fisher consistency implies that V(˜dn) = p˜d1n,˜d2n(˜d1n) →n p12.
However, we have shown that ˜d1n = 1 in this case. Also, ˜d2n(˜d1n) = ˜d2n(1) = pred(0, −1k2−1/n) = 1. Therefore, we have V(˜dn) = p11 < V∗ = p12, where the last equality follows because max(p) = p12. Hence, we arrive at a contradiction, implying y∗(p) cannot be zero.

Case 2: max(p11, p12) < max(p21, p22). We have shown that, in this case, x∗(p) < 0. Therefore, pred(x∗(p)) = pred(0, −x∗(p)1k1−1) = k1. Suppose p21 > p22. In this case, V∗ = max(p) = p21. Using Fisher consistency as in Case 1, we can show that ˜d2(˜d1) = 1, implying ˜d2(k1) = 1. Therefore, pred(0, −z∗(p)1k2−1) = 1, which implies z∗(p) > 0. If p21 < p22, then we can show that pred(0, −z∗(p)1k2−1) = k2, which implies z∗(p) ≤ 0. If possible, suppose z∗(p) = 0 for some p ∈ B(p0, δ). In this case, max(p) = p22. Consider the sequence (x∗(p), y∗(p), 1/n). Let us denote wn = (1/n, 0k2−1). Proceeding as before, we can show that Φ(x∗(p), y∗(p), 1/n; p) →n Φ∗(p). From (S5.19), it follows that Vψ∗ = −Φ∗(p) and Vψ(x∗(p), y∗1(p), wn, …, wn) = −Φ(x∗(p), y∗(p), 1/n; p). Therefore, Vψ(x∗(p), y∗1(p), wn, …, wn) →n Vψ∗. If ψ is Fisher consistent, we must also have V(˜dn) →n V∗ where ˜dn is the DTR corresponding to (x∗(p), y∗1(p), wn, …,
wn). We can show that

˜d1n = pred(0, −x∗(p)1k1−1), ˜d2n(i) = pred(0, −y∗(p)1k2−1) if i = 1, and pred(∆wn) if i ∈ [2 : k1].

Since x∗(p) < 0, the corresponding ˜dn satisfies ˜d1n = k1. Therefore, ˜d2n(˜d1n) = ˜d2n(k1) = pred(0, −1k2−1/n) = 1. Hence, p˜d1n,˜d2n(˜d1n) = pk11. However, since P ∈ Pk1k2 b, pk11 = p21 by definition. Therefore, V(˜dn) = p21. Since p21 < p22, it follows that lim supn V(˜dn) < V∗, which implies V(˜dn) does not converge to V∗. Thus we again encounter a contradiction, implying z∗(p) cannot be zero.

S5.5. Proof of auxiliary results

S5.5.1. Proof of Lemma S5.14

Proof of Lemma S5.14. We prove the case for M12 because the proof for N12 follows in a similar way. Lemma S5.7 implies that the second-order partial derivatives of the ϑ(·; i, j)’s are continuous on U2 0 for all i, j ∈ [2]. Therefore, M12 and N12 are continuous on the open set U2 0. Suppose M12(0,0) ≠ 0. First, if possible, suppose M12(0,0) > C for some C > 0. Then there exists ϵ > 0 such that |x|, |y| < ϵ implies that M12(x, y) > C/2. By Lemma S5.8, we can choose δ1 > 0 so small that |x∗(p) − x∗(p0)| < ϵ for all p ∈ B(p0, δ1). Since x∗(p0) = 0 by Lemma S5.6, it follows that |x∗(p)| < ϵ for all p ∈ B(p0, δ1). By Lemma S5.16 below (proved in Section S5.5.2), we can choose p ∈ B(p0, δ1) so that p11 = p12 > p21, p22, Λ∗(p) ∈ U3 0, y∗(p) = 0, and x∗(p) > 0.

Lemma S5.16. There exists p ∈ B(p0, δ1) so that p11 = p12 > p21, p22, Λ∗(p) ∈ U3 0, x∗(p) > 0, and y∗(p) = 0.

Noting (a) Λ∗(p) = (x∗(p), y∗(p), z∗(p)) is the minimizer of Φ(·; p) and (b) Φ(·; p) is thrice differentiable on U3 0, and hence at Λ∗(p), we obtain that 0 = ∂Φ(x∗(p), y∗(p), z∗(p); p)/∂y. Then by a first-order Taylor expansion in the x-coordinate, we have

∂Φ(0, y∗(p), z∗(p); p)/∂y + x∗(p) ∂2Φ(ξ, y∗(p), z∗(p); p)/∂x∂y = 0,

where ξ is a number between 0 and x∗(p). Therefore,

p11 ∂ϑ(0, y∗(p); 1,1)/∂y + p12 ∂ϑ(0, y∗(p); 1,2)/∂y + x∗(p) [ p11 ∂2ϑ(ξ, y∗(p); 1,1)/∂x∂y + p12 ∂2ϑ(ξ, y∗(p); 1,2)/∂x∂y ] = 0.
(S5.62) Since p11 = p12 and y∗(p) = 0, dividing by p11 > 0 we have

∂ϑ(0,0; 1,1)/∂y + ∂ϑ(0,0; 1,2)/∂y + x∗(p) ∂2ϑ(ξ,0; 1,1)/∂x∂y + x∗(p) ∂2ϑ(ξ,0; 1,2)/∂x∂y = 0.

Using (S5.49) we obtain that

x∗(p) ∂2ϑ(ξ,0; 1,1)/∂x∂y + x∗(p) ∂2ϑ(ξ,0; 1,2)/∂x∂y = 0.

Since x∗(p) > 0,

M12(ξ,0) = ∂2ϑ(ξ,0; 1,1)/∂x∂y + ∂2ϑ(ξ,0; 1,2)/∂x∂y = 0.

Therefore, we have obtained that there exists ξ ∈ [0, x∗(p)] such that M12(ξ,0) = 0. Since |x∗(p)| < ϵ, ξ satisfies |ξ| < ϵ. However, we have previously shown that M12(x, y) > C/2 for all x, y ∈ (−ϵ, ϵ). Therefore, we arrive at a contradiction. Hence, our assumption M12(0,0) > 0 is incorrect. Similarly, we can show that assuming M12(0,0) < 0 leads to a contradiction as well. Thus, M12(0,0) = 0.

S5.5.2. Proof of Lemma S5.16

Proof of Lemma S5.16. By Lemmas S5.7 and S5.11, we can choose δ ∈ (0, δ1) so small that for all p ∈ B(p0, δ), Λ∗(p) ∈ U3 0 and Λ∗(p) satisfies Properties P1 and P2. We consider a p ∈ B(p0, δ/3) of the form (c, c, c1, c2) where c > c1, c2 > 0. Such a p exists in the ball B(p0, δ/3) because p0 = (1,1,1,1). Suppose, if possible, y∗(p) > 0. We will show that this leads to a contradiction. If y∗(p) > 0, we can find an ϵ > 0 such that y∗(p) > ϵ. Since Λ∗ is
a continuous map by Lemma S5.8, we can find a δ′>0 such that for all q∈B(p, δ′),∥Λ∗(p)−Λ∗(q)∥2≤ϵ/2. We choose δ′to be smaller than δ/3 so that B(p, δ′)⊂B(p0,2δ/3)⊂B(p0, δ). If ( x∗(q), y∗(q), z∗(q)) = Λ∗(q),|y∗(p)−y∗(q)| ≤ϵ/2. Hence, y∗(q)>0. In particular, we consider q= (c, c+δ′/2, c1, c2). It is not hard to see that q∈B(p, δ′). However, since B(p, δ′)⊂B(p0, δ), the corresponding Λ∗(q) satisfies Properties P1 and P2. Since argmax( q) = (1 ,2), Property P2 implies y∗(q)<0, which leads to a contradiction. Thus y∗(p)>0 can not hold. In a similar way, we can show that y∗(p)<0 can not hold. Hence, we must have y∗(p) = 0. S5.5.3. Proof of Lemma S5.15 Proof of Lemma S5.15. In this proof, we will use the fact that (0 ,0) lies inside int(dom( ϑ(·;i, j))) for all i, j∈[2], which follows from Lemma S5.7. If p∈B(p0, δ0) where δ0is as in Lemma S5.9, then Λ∗(p)∈U3 0, where the latter is an open set containing the origin. Therefore, Φ( ·;p) is thrice continuously differentiable at Λ∗(p). Therefore, the hessian of Φ( ·;p) exists and all of its elements are continuous at U3 0∋Λ∗(p). Since Φ( ·;p) is a strictly convex function, the hessian of w7→Φ(w;p) is positive definite at Λ∗(p). The minimum eigenvalue of the hessian at Λ∗(p) is thus bounded below by 2 cpfor some cp>0. Since the hessian of Φ( ·;p) is continuous on U3 0and the minimum eigenvalue of a matrix is a continuous function of the matrix (cf. Corollary 6.3.8, p.407 of Horn and Johnson, 2012), the minimum eigenvalue of the hessian of w7→Φ(w;p) is a continuous function of wonU3 0. We can therefore choose a ϵ >0 so small such that the minimum eigenvalue of the hessian of Φ(w;p) is bounded away from cpfor all w∈B(Λ∗(p), ϵ)⊂U3 0. Thus Φ( ·;p) is strongly convex on B(Λ∗(p), ϵ) with constant cp>0 that possibly depends only on ϑandp. Since Λ∗(p) is continuous at p, there is δ′>0 so that Λ∗(q)∈B(Λ∗(p), ϵ) for all q∈B(p, δ′). Let us take δp= min( δ′, δ∗) where δ∗is as in the proof of Lemma S5.8. Note that δ∗depends on p. 
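The step that follows converts strong convexity of Φ(·; p) near its minimizer into control of ∥Λ∗(p) − Λ∗(q)∥2. The sketch below checks the key inequality on a toy quadratic family (illustrative only, not the paper's Φ): Φ(Λ∗(q); p) − Φ∗(p) ≥ (cp/2)∥Λ∗(p) − Λ∗(q)∥2 2.

```python
import numpy as np

# Toy strongly convex family: Phi(w; p) = 0.5 w^T A w - p^T w.
# Its minimizer is Lambda*(p) = A^{-1} p, and the strong-convexity
# constant c_p is the smallest eigenvalue of the (constant) Hessian A.
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.5, 0.3],
              [0.0, 0.3, 1.0]])

def phi(w, p):
    return 0.5 * w @ A @ w - p @ w

def minimizer(p):
    return np.linalg.solve(A, p)

c = np.linalg.eigvalsh(A).min()  # strong-convexity constant

rng = np.random.default_rng(0)
for _ in range(100):
    p, q = rng.normal(size=3), rng.normal(size=3)
    lp, lq = minimizer(p), minimizer(q)
    gap = phi(lq, p) - phi(lp, p)            # Phi(L*(q); p) - Phi*(p)
    lhs = 0.5 * c * np.sum((lp - lq) ** 2)   # (c_p/2) ||L*(p) - L*(q)||^2
    assert lhs <= gap + 1e-12
print("strong-convexity lower bound verified")
```

Combined with the upper bound Φ(Λ∗(q); p) ≤ Φ∗(p) + Cp∥p − q∥2 from (S5.47), this sandwich is exactly what yields the ½-Hölder continuity of Λ∗ in Lemma S5.15.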
Suppose δ < δ p. Then for all q∈B(p, δ), Φ(Λ∗(q),p)≥Φ(Λ∗(p),p) +⟨s,Λ∗(q)−Λ∗(p)⟩+cp 2∥Λ∗(p)−Λ∗(q)∥2 2, where s∈∂Φ(Λ∗(p),p) by Theorem 6.1.2, pp. 200 of Hiriart-Urruty and Lemar´ echal (2004). Since Λ∗(p) is the minimizer of Φ(·,p), 0∈∂Φ(Λ∗(p),p). Taking s= 0, we obtain the following for all q∈B(p, δ): Φ(Λ∗(q),p)≥Φ(Λ∗(p),p) +cp 2∥Λ∗(p)−Λ∗(q)∥2 2. On the other hand, since δ < δ p≤δ∗, (S5.47) implies that Φ(Λ∗(q),p)≤Φ∗(p) +Cp∥p−q∥2 for all q∈B(p, δ). Because Φ(Λ∗(p),p) = Φ∗(p), we thus have Φ∗(p) +cp 2∥Λ∗(p)−Λ∗(q)∥2 2≤Φ(Λ∗(q),p)≤Φ∗(p) +∥p−q∥2 2Cp for all q∈B(p, δ), implying ∥Λ∗(p)−Λ∗(q)∥2 2≤2Cp cp∥p−q∥2. S5.5.4. Proof of Fact S5.4 Proof of Fact S5.4. If0k1+k2−2∈∂(dom( −η)), then by the supporting hyperplane theorem (pp. 100, Theorem 11.6 Rock- afellar, 1970), there exists a hyperplane TTu+CTv+c= 0 with T∈Rk1−1,C∈Rk2−1, and c∈Rso that ( T,C)̸=0k1+k2−2 andTTu+CTv+c≤0 for all ( u,v)∈dom(−η) and 0k1+k2−2lies on the hyperplane. The supporting hyperplane theorem S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/ 79 applies because dom( −η) is a
convex set due to η being concave. Since 0k1+k2−2 lies on the hyperplane, c = 0 and, hence, the hyperplane passes through the origin. Suppose ϵ = (ϵi)i∈[k1−1] ∈ {±1}k1−1 is such that ϵi > 0 if Ti > 0 and ϵi < 0 if Ti < 0. If Ti = 0, then ϵi can be either 1 or −1. Thus ϵ need not be unique. Similarly, we define ϵ′ = (ϵ′i)i∈[k2−1] ∈ {±1}k2−1 such that ϵ′i > 0 if Ci > 0 and ϵ′i < 0 if Ci < 0. ϵ′ need not be unique either since C can contain zero elements. Consider the open orthant Q0∩Q1(ϵ) × Q0∩Q1(ϵ′). Suppose u ∈ Q0∩Q1(ϵ) and v ∈ Q0∩Q1(ϵ′). The definition of ϵ and ϵ′ ensures that Tiui > 0 if Ti ≠ 0 and Cjvj > 0 if Cj ≠ 0. Then

TTu + CTv = Σi:Ti≠0 |Tiui| + Σj:Cj≠0 |Cjvj|.

Also, T and C cannot both be zero vectors because (T, C) ≠ 0k1+k2−2. Therefore, there exists at least one i or j such that Ti ≠ 0 or Cj ≠ 0, leading to Tiui > 0 or Cjvj > 0. Therefore, TTu + CTv > 0. Therefore, (u, v) ∉ dom(−η) because TTu + CTv ≤ 0 for all (u, v) ∈ dom(−η). Since (u, v) was an arbitrary element of Q0∩Q1(ϵ) × Q0∩Q1(ϵ′), it follows that {Q0∩Q1(ϵ) × Q0∩Q1(ϵ′)} ∩ dom(−η) = ∅.

S6. Proofs of Theorem 4.1 and Proposition 1

This section is organized as follows. In Section S6.0.1, we introduce some terminology that will be used in the proofs of Theorem 4.1 and Proposition 1. Section S6.1 lists some preliminary results on ψ’s satisfying Conditions N1-N2. These results will be used not only for the proofs of this section, but also for proving Theorem 4.2 and other results of Section 4. Then, in Section S6.2, we state and prove some auxiliary lemmas on the value function under Assumptions I-V, which will be used in the proofs of this section. Sections S6.3 and S6.4 provide the proofs of Theorem 4.1 and Proposition 1, respectively.

S6.0.1. New terminologies

Notation and terminologies: In what follows, we will denote κ(x; p) = ppred(x). For the purpose of induction, we introduce some new notation. Let us define the function pT : HT ↦ RkT so that pT(HT)i = E[Y1 + … + YT | HT, AT = i] for i ∈ [kT], and for t = T−1, . . .
, 1, the functions pt:Ht7→Rktare recursively defined as pt(Ht;f)i=E[Ψ1+t(f1+t(H1+t);p1+t(H1+t;f))|Ht, At=i] for all i∈[kt]. (S6.1) When fis clear from the context, we will denote pt(Ht;f) simply by pt(Ht). We also define the functions p∗ t:Ht7→Rktso thatp∗ T=pTand for t=T−1, . . . , 1, the p∗ t’s are recursively defined as p∗ t(Ht)i=E[Ψ∗ 1+t(p∗ 1+t(H1+t))|Ht, At=i], i∈[kt]. (S6.2) For a function h:Ht7→[kt], expressions such as pt(Ht)h(Ht)andp∗ t(Ht)h(Ht)may also be denoted by pt(Ht, h(Ht)) andp∗ t(Ht, h(Ht)), respectively, especially when the expression of h(Ht) is long or convoluted. For the sake of notational simplicity, for any x,p∈Rk, κ(x;p) =ppred(x). (S6.3) S6.1. Lemmas on the properties of ϕunder Conditions N1-N2 This section is organized as follows. First, we list some facts that we will repeatedly use in the proofs of Theorem 4.1 and 4.2. In Section S6.1.1, we establish properties that ϕsatisfies when Condition N1 holds. In Section S6.1.2, we derive properties ofϕunder both Conditions N1 and N2. Under Assumption I, Eϕt(ft(Ht);At) πt(At|Ht) Ht = Ψ t(ft(Ht);1kt) for all t∈[T], (S6.4) and for any random variable Vandi∈[kt], E V1[At=i] πt(At|Ht) Ht =E[V|Ht, At=i]
(S6.5) and E Vϕt(ft(Ht);At) πt(At|Ht) Ht = Ψ t(ft(Ht);p(Ht)) (S6.6) where p(Ht)j=E[V|Ht, At=j] for all j∈[kt]. Therefore, for any Ht, sup ft∈FtE Vϕt(ft(Ht);At) πt(At|Ht) =E[Ψ∗ t(p(Ht))]. (S6.7) S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.1 Lemmas on the properties of ϕunder Conditions N1-N2 80 Moreover, if h(Ht) is any P-measurable function of Ht, then sup ft∈FtE h(Ht)Vϕt(ft(Ht);At) πt(At|Ht) =E[h(Ht)Ψ∗ t(p(Ht))] for all t∈[T]. (S6.8) Also, if Assumption I holds, with p(Ht)j=E[V|Ht, At=j], we obtain that E1[At= pred( p(Ht))]−1[At=i] πt(At|Ht)h(Ht)V Ht =E1[At= pred( p(Ht))]−1[At=i] πt(At|Ht)h(Ht)E[V|Ht, At] Ht =E1[At= pred( p(Ht))]−1[At=i] πt(At|Ht)h(Ht)p(Ht)At Ht =E max(p(Ht))−p(Ht)i h(Ht) Ht . (S6.9) We will also use the fact that ETY j=t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht = 1 for all t∈[T]. (S6.10) S6.1.1. Properties of ϕsatisfying Condition N1 Fact S6.1. Suppose ϕ:Rk×[k]7→Rsatisfies Condition N1, where k∈N. Ifϕ(x(m);p)→mΨ∗(p)for some p∈Rk ≥0, then pred(x(m))∈argmax( p)for all sufficiently large m. Proof of Fact S6.1. If the assertion in Fact S6.1 does not hold, we can extract a subsequence of x(m), if necessary, so that pred(x(m))̸∈argmax( p) but Ψ( x(m);p)→Ψ∗ t(p). We denote this subsequence by {x(m)}m≥1for the sake of notational simplicity. Then, sup x:ppred( x)<max(p)Ψ(x;p)≥sup mΨ(x(m);p) = Ψ∗(p), which would contradict (4.4). Lemma S6.1. Suppose ϕ:Rk×[k]7→Rsatisfies Condition N1, where k∈N. Then the ϕ(·;i)’s are bounded above for each i∈[k]. Moreover, for each p∈Rk ≥0such that p̸=0k,Ψ∗(p)>0. Proof of Lemma S6.1. First we prove the boundedness of ϕ. Suppose there exists i∈[k] such that supx∈Rkϕ(x;i) =∞. In this case, Ψ∗(p) =∞for all p∈Rk ≥0. Also there exists {x(m)} ⊂Rkso that ϕ(x(m);i)→m∞. Therefore, Ψ( x(m);p)→mΨ∗(p) for all p∈Rk ≥0. Fact S6.1 implies that pred( x(m))∈argmax( p) for all sufficiently large m∈N. Consider p1,p2inRk ≥0such that argmax( p1) = 1 and argmax( p2) = 2. For both i∈[1,2], it holds that pred( x(m))∈argmax( pi) for all sufficiently large m. 
Then there exists M∈Nsuch that for all m≥M, pred( x(m))∈argmax( p1)∩argmax( p2) =∅, which can not hold. Therefore, sup ϕ(·;i) must be finite. Now suppose Ψ∗(p′) = 0 for some p′∈Rk ≥0, but p′̸=0k. LetC={i∈[k] :p′ i>0}, which is non-empty because p′̸=0k. Since Ψ∗(p′) = 0, Ψ( x;p′) = 0 for all x∈Rkifi∈ C. Therefore, if i∈ C,ϕ(x;i) = 0 for all x∈Rk. Then for any psuch that {i∈[k] :pi>0}=C, Ψ(x;p) = 0 for all x∈Rkand Ψ∗(p) = 0. We can choose p∈Rk ≥0so that {i∈[k] :pi>0}=Cand the non-zero pi’s are all different so that [ kr]\argmax( p) is non-empty. Therefore, there exists x∈Rksatisfying ppred(x)<max(p). Therefore, the set {x∈Rk:ppred(x)<max(p)}is non-empty, which implies sup x∈Rk:ppred( x)<max(p)Ψ(x;p)>−∞. Since 0 ≤Ψ(x;p)≤Ψ∗(p) for all x∈Rkand Ψ∗(p) = 0, it follows that the above supremum is 0. Therefore, (4.4) is violated. Therefore, Ψ∗(p)>0 for all non-zero p∈Rk ≥0. Lemma S6.2. Ifϕ:Rk×[k]satisfies Condition N1 for some k∈N, then Ψ∗:Rk7→Ris positively homogenous, convex, and a continuous function. Proof of Lemma S6.2. In Section 4, it was mentioned that Ψ∗is the support function of the set Vϕ. A support function of a nonempty set is sublinear (cf. Proposition 2.1.2, pp. 134 of Hiriart-Urruty and Lemar´ echal, 2004). Sublinear functions are functions that are both convex and positively homogenous (cf. Definition 1.1.1, pp. 123 of
Hiriart-Urruty and Lemaréchal, 2004). By Lemma S6.1, ϕ(·; i) is bounded above for each i. Therefore, |Ψ∗(p)| < ∞ for all p ∈ Rk. Hence, dom(Ψ∗) = Rk. Since a convex function is continuous on the interior of its domain (cf. pp. 104 of Hiriart-Urruty and Lemaréchal, 2004), Ψ∗ is continuous.

Lemma S6.3. Let B be any bounded set in Rk≥0. Suppose ϕ : Rk × [k] ↦ R≥0 satisfies Condition N1. Then there exists a non-negative convex function ϱ : R≥0 ↦ R≥0, depending only on ψ and B, so that Ψ∗(p) − Ψ(x; p) ≥ ϱ(max(p) − κ(x; p)) for all p ∈ B, where κ is as in (S6.3). Moreover, ϱ(0) = 0, ϱ is positive on (0, ∞), and for any sequence {ϵm}m≥1 ⊂ R≥0, limm→∞ ϱ(ϵm) = 0 if and only if limm→∞ ϵm = 0.

Proof. The proof follows from Zhang (2004) but is given here for the sake of completeness. Let us denote

Υ(ϵ) = inf p∈B, x∈Rk { Ψ∗(p) − Ψ(x; p) : max(p) − κ(x; p) ≥ ϵ },

where the infimum is ∞ if the set is empty. Then by Proposition 23 of Zhang (2004), Υ is non-negative, Υ(0) = 0, and Υ is non-decreasing on R≥0. Proposition 23 of Zhang (2004) further implies that for any p ∈ B and x ∈ Rk, Ψ∗(p) − Ψ(x; p) ≥ Υ(max(p) − κ(x; p)). We will now show that if ϵ > 0, then Υ(ϵ) > 0. For fixed p ∈ B, if κ(x; p) ≤ max(p) − ϵ, then κ(x; p) < max(p), indicating pred(x) ∉ argmax(p). Moreover, p ≠ 0k since argmax(0k) = [k]. Therefore,

sup x:max(p)−κ(x;p)≥ϵ Ψ(x; p) ≤ sup x:pred(x)∉argmax(p) Ψ(x; p) (a)< Ψ∗(p),

where (a) follows because ϕ satisfies Condition N1. Hence, for any x ∈ Rk and p ∈ B, Ψ∗(p) − Ψ(x; p) > 0 if max(p) − κ(x; p) ≥ ϵ. Now suppose Υ(ϵ) = 0. Then there exists a sequence (x(m), p(m))m≥1 ⊂ Rk × B such that max(p(m)) − κ(x(m); p(m)) ≥ ϵ for all m ≥ 1 but Ψ∗(p(m)) − Ψ(x(m); p(m)) →m 0. Since B is bounded, p(m) has a convergent subsequence that converges to p0 ∈ B. Then

| Ψ∗(p(m)) − Ψ(x(m); p(m)) − ( Ψ∗(p0) − Ψ(x(m); p0) ) | ≤ | Ψ∗(p(m)) − Ψ∗(p0) | + | Ψ(x(m); p0) − Ψ(x(m); p(m)) |,

whose first term goes to zero as m → ∞ because Ψ∗ is continuous on Rk by Lemma S6.2.
For the second term, note that since Ψ is linear in p, Ψ(x(m);p0)−Ψ(x(m);p(m)) = Ψ(x(m);p0−p(m)) ≤ sup x∈Rk,i∈[k]ϕ(x;i)∥p0−p(m)∥1 which goes to zero as m→ ∞ because ϕis bounded by Lemma S6.1 and p(m)→mp0. Hence, we have showed that Ψ∗(p(m))−Ψ(x(m);p(m))− Ψ∗(p0)−Ψ(x(m);p0) →m0, which, combined with the fact that Ψ∗(p(m))−Ψ(x(m),p(m))→m0, implies that Ψ(x(m);p0)→mΨ∗(p0). On the other hand, max( p(m))−κ(x(m);p(m))≥ϵfor all m≥1. Therefore, max(p0)−κ(x(m);p0) ≥max(p(m))−κ(x(m);p(m))− |κ(x(m);p(m))−κ(x(m);p0)| − |max(p(m))−max(p0)| ≥ϵ− |κ(x(m);p(m))−κ(x(m);p0)| − |max(p(m))−max(p0)|. Since p(m)→mp0, we have max( p(m))→mmax(p0). Also |κ(x(m);p(m))−κ(x(m);p0)|=|(p(m))pred(x(m))−(p0)pred(x(m))| ≤ ∥p(m)−p0∥2, which converges to 0 as m→ ∞ . Hence, max( p0)−κ(x(m);p0)≥ϵ/2 for all sufficiently large m. Hence, ppred(x)<max(p0) for all sufficiently large m. However, Ψ( x(m);p0)→mΨ∗(p0). Therefore, sup p0 pred( x)<max(p0)Ψ(x(m);p0) = Ψ∗(p0), which is a contradiction since p0∈B⊂Rk ≥0. Thus, the assumption that Υ( ϵ) = 0 was wrong. Hence, Υ( ϵ)>0. Let us extend Υ to Rby setting Υ to be 0 on ( −∞,0). Since Υ ≥0, Υ is bounded below by the affine function f(0) = 0. Then Proposition 25 of Zhang (2004) applied to our case (we need to take Q=B,l(p, d) = max( p)−κ(d,p), and v≡1 in S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.1 Lemmas on the properties of ϕunder Conditions N1-N2 82 that proposition) implies there is a convex function ϱsuch that ϱ(x)≤Υ(x),ϱ(ϵ)>0 whenever ϵ >0,ϱ(0) = 0, and ϱis non-decreasing. Since
ϱ ≤ Υ and Υ < ∞ on R, dom(ϱ) = R. It remains to show that ϱ(ϵm) →m 0 if and only if ϵm → 0. For the if part, note that ϱ is continuous at 0 because ϱ is convex and 0 ∈ int(dom(ϱ)) ≡ R. Thus limϵ→0 ϱ(ϵ) = 0. Now we prove the only if part. Suppose {ϵm}m≥1 ⊂ R≥0 satisfies ϱ(ϵm) →m 0; we claim that ϵm → 0. If not, we can replace {ϵm}m≥1 with a subsequence so that ϵm ≥ ϵ for all m ≥ 1, for some ϵ > 0. Then ϱ(ϵm) ≥ ϱ(ϵ) > 0 along that subsequence, since ϱ is non-decreasing and positive on (0, ∞). However, every subsequence of ϱ(ϵm) must go to zero, leading to a contradiction. Hence, we have shown that ϱ(ϵm) →m 0 only if ϵm → 0.

Lemma S6.4. Suppose k ∈ N and the single-stage surrogate ϕ : Rk × [k] ↦ R satisfies Condition N1 and Ψ∗(1k) = 1. Let p ∈ Rk≥0 satisfy p ≠ 0k. Then there exists Z(p) > 0, depending only on p and ϕ, so that for any sequence x(m) satisfying Ψ(x(m); p) →m Ψ∗(p), lim infm ϕ(x(m); pred(x(m))) > Z(p).

Proof. Had the ϕ(·; i)’s been zero for all i ∈ [k], then Ψ(x; p) = Ψ∗(p) = 0 for all p ∈ Rk≥0. This contradicts that Ψ∗(1k) = 1. Thus, there exists an i ∈ [k] so that ϕ(·; i) is not identically zero. Since p ≠ 0k, the above implies Ψ∗(p) ≥ max(p) supx∈Rk ϕ(x; i) > 0. Let us define the function

Ω(p) = sup pred(x)∉argmax(p) Ψ(x; p) for all p ∈ Rk≥0.

Since ϕ satisfies Condition N1, for all p ∈ Rk≥0, it holds that Ω(p) < Ψ∗(p). Recall that we denote by e(i)k the unit vector of length k with one in the ith position. For any p, we denote the set

J(p) = { p − e(i)k pi : i ∈ argmax(p) }.

Note that |J(p)| = |argmax(p)| is a finite number. Moreover, for any q ∈ J(p), since the ϕ’s are non-negative,

Ψ(x; q) = Σi∈[k] ϕ(x; i)qi ≤ Σi∈[k] ϕ(x; i)pi = Ψ(x; p) for all x ∈ Rk.

Therefore, for any q ∈ J(p), Ψ∗(q) = supx∈Rk Ψ(x; q) ≤ Ψ∗(p). By definition of Ω, Ω(q) < Ψ∗(q) ≤ Ψ∗(p) for each q ∈ J(p) satisfying q ≠ 0k. Since J(p) is a finite set, it therefore follows that maxq∈J(p), q≠0k Ω(q) < Ψ∗(p). Therefore

˜Z(p) := Ψ∗(p) − maxq∈J(p), q≠0k Ω(q) (S6.11)

is positive. Since p ≠ 0k, there exists a q ∈ J(p) so that q ≠ 0k. Hence, the maximum maxq∈J(p), q≠0k Ω(q) > −∞.
Therefore, ˜Z(p)∈R>0. Since Ψ( x(m);p)→mΨ∗(p), and ϕsatisfies Condition N1, pred( x(m))∈argmax( p) for all sufficiently large m∈Nby Fact S6.1. Therefore, given any ϵ >0, Ψ(x(m),p)>Ψ∗(p)−ϵfor all sufficiently large m. Let us choose ϵ <˜Z(p)/2. Note that Ψ(x(m);p) =X i∈[k1]:i̸=pred( x(m))ϕ(x(m);i)pi+ max( p)ϕ(x(m); pred( x(m))). Therefore, max(p)ϕ(x(m); pred( x(m))) = Ψ( x(m);p)−X i∈[k1]:i̸=pred( x(m))ϕ(x(m);i)pi ≥Ψ∗(p)−ϵ−X i∈[k1]:i̸=pred( x(m))ϕ(x(m);i)pi for all sufficiently large m. Let us define q(m)so that q(m) i=piifi̸= pred( x(m)) and 0 otherwise. Since pred( x(m))∈ argmax( p) for all sufficiently large m∈N,q(m)∈J(p) for all sufficiently large m. Then X i∈[k1]:i̸=pred( x(m))ϕ(x(m);i)pi= Ψ(x(m);q(m)). Since p̸=0k, max( p)>0. There can be two situations. Either q(m)=0kor max( q(m))>0. In the first case,P i∈[k1]:i̸=pred( x(m))ϕ(x(m);i)pi= 0. Therefore, ϕ(x(m); pred( x(m))≥Ψ∗(p)−ϵ max(p)>Ψ∗(p) 2 max( p) S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.1 Lemmas on the properties of ϕunder Conditions N1-N2 83 because ϵ <˜Z(p)/2≤Ψ∗(p)/2 by (S6.11). In the second case, pred( x(m))/∈argmax( q(m)) because q(m) pred(x(m))= 0. Suppose x∈Rksatisfies pred( x)/∈argmax( q) for some q∈J(p) such that q̸=0k. Then it holds that Ψ(x;q)≤Ω(q)≤ max q∈J(p),q̸=0kΩ(q) = Ψ∗(p)−˜Z(p). Since q(m)∈J(p) for all sufficiently large m∈N, it follows that
Ψ(x(m); q(m)) < Ψ∗(p) − ˜Z(p) given m is sufficiently large. Therefore, in the second case,

ϕ(x(m); pred(x(m))) ≥ ( Ψ∗(p) − ϵ − Ψ(x(m); q(m)) ) / max(p) ≥ ( ˜Z(p) − ϵ ) / max(p) ≥ ˜Z(p) / (2 max(p))

since ϵ ≤ ˜Z(p)/2. Moreover, since Ω is non-negative, the definition of ˜Z(p) in (S6.11) implies that ˜Z(p) ≤ Ψ∗(p), so the bound obtained in the first case is also at least ˜Z(p)/(2 max(p)). Thus, we have shown that for all sufficiently large m,

ϕ(x(m); pred(x(m))) ≥ ˜Z(p) / (2 max(p)).

Since ˜Z(p) > 0, the proof follows by taking, e.g., Z(p) = ˜Z(p)/(4 max(p)).

Lemma S6.5. Let k ∈ N. Suppose ϕ : Rk × [k] ↦ R, which is not identically zero, satisfies Condition N1. Let x(m) be such that there exists p ∈ Rk≥0 with p ≠ 0k so that Ψ(x(m); p) → Ψ∗(p). Then there exists a constant c > 0, depending only on ψ, so that lim infm→∞ ϕ(x(m); pred(x(m))) ≥ c.

Proof of Lemma S6.5. Since ϕ is not identically zero, Ψ∗(1k) > 0. Without loss of generality, we assume that Ψ∗(1k) = 1; if not, we can replace ϕ by ϕ/Ψ∗(1k). This ensures that we can apply Lemma S6.4, which implies lim infm→∞ ϕ(x(m); pred(x(m))) > Z(p). Therefore, it suffices to show that c = infp∈Rk≥0, p≠0 Z(p) > 0, where Z is as in Lemma S6.4. Suppose, if possible, c = 0. Then there exist a sequence {p(m)}m≥1 ⊂ Rk≥0 and sequences {xm,r}r≥1 ⊂ Rk so that p(m) ≠ 0, Ψ(xm,r; p(m)) →r Ψ∗(p(m)) for each m ∈ N, but lim infr ϕ(xm,r; pred(xm,r)) →m 0. Each {xm,r}r≥1 has a subsequence {xm,r′}r′≥1 so that limr′→∞ ϕ(xm,r′; pred(xm,r′)) = lim infr→∞ ϕ(xm,r; pred(xm,r)). Replacing the original sequences by these subsequences, we obtain sequences {xm,r}r≥1 ⊂ Rk so that Ψ(xm,r; p(m)) →r Ψ∗(p(m)) for each m ∈ N, but limm→∞ limr→∞ ϕ(xm,r; pred(xm,r)) = 0. Note that since p(m) ≠ 0k, ∥p(m)∥1 = Σl∈[k] p(m)l > 0. Therefore, we obtain that

Ψ(xm,r; p(m)/∥p(m)∥1) (a)= Ψ(xm,r; p(m))/∥p(m)∥1 (b)→r Ψ∗(p(m))/∥p(m)∥1 (c)= Ψ∗(p(m)/∥p(m)∥1),

where (a) follows because the map p ↦ Ψ(x; p) is linear in p, (b) follows because Ψ(xm,r; p(m)) →r Ψ∗(p(m)), and (c) follows because Ψ∗ is positively homogeneous by Lemma S6.2.
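The proof next extracts a diagonal subsequence from the doubly indexed family xm,r. The pattern, choosing indices (ml, nl) so that the diagonal sequence inherits the iterated limit, can be sketched on a toy doubly indexed array (illustrative only):

```python
# a(m, r) = 1/(m+1) + 1/(r+1): lim_m lim_r a(m, r) = 0. We extract an
# increasing index sequence (m_l, n_l) with a(m_l, n_l) < 2/l, mimicking
# the diagonal construction in the surrounding argument.
def a(m, r):
    return 1.0 / (m + 1) + 1.0 / (r + 1)

diagonal = []
m_prev, n_prev = -1, -1
for l in range(1, 50):
    # choose m_l with the inner limit 1/(m_l+1) < 1/l, then n_l large
    # enough that a(m_l, n_l) < 2/l, keeping both index sequences increasing
    m_l = max(m_prev + 1, l)
    n_l = max(n_prev + 1, l)
    assert a(m_l, n_l) < 2.0 / l
    diagonal.append(a(m_l, n_l))
    m_prev, n_prev = m_l, n_l

assert diagonal[-1] < diagonal[0]
print("diagonal sequence tends to 0")
```

In the proof, the role of a(m, r) is played by ϕ(xm,r; pred(xm,r)) together with the approximation errors |Ψ∗(p(m)) − Ψ(xm,r; p(m))|.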
Therefore, without the loss of generality, we can write that there exists {p(m)}m≥1⊂ Sk−1so that Ψ( xm,r;p(m))→rΨ∗(p(m)) for each m∈Nbut lim m→∞lim r→∞ϕ(xm,r; pred( xm,r)) = 0 . SinceSk−1is closed and bounded, there is a subsequence {ml}l≥1⊂ {m}m≥1so that p(ml)→lp∗∈ Sk−1. For the sake of notational simplicity, we will denote the sequence {ml}l≥1by{l}l≥1. Hence l→ ∞ andp(l)→lp∗asl→ ∞ . We will next show that if p(l)→lp∗, then there exists a sequence x(l)so that Ψ( x(l);p∗)→lΨ∗(p∗) but lim l→∞ϕ(x(l); pred( x(l))) = 0, which will contradict Lemma S6.4, thus completing the proof. Fixl≥1. Since lim m→∞limr→∞ϕ(xm,r,pred(xm,r)) = 0, there exists Ml∈Nso that for all m≥Ml, lim r→∞ϕ(xm,r,pred(xm,r))<1/l. This implies that for all m≥Ml, there exists Rm>0 so that for all r≥Rm, ϕ(xm,r,pred(xm,r))<1/l. S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.1 Lemmas on the properties of ϕunder Conditions N1-N2 84 In particular, ϕ(xMl,RMl,pred(xMl,RMl))<1/l. Since Ψ( xm,r;p(m))→rΨ∗(p(m)) for each m∈N, given any mandl, there exists Km,l>0 so that if r≥Km,l, it holds that |Ψ∗(p(m))−Ψ(xm,r;p(m))|<1/l. We choose x(1)=x1,1. For any l∈Nso that l≥2, we choose x(l)=xml,nlso that ml= max( ml−1+ 1, Ml), and nl= max( nl−1, Rml,Kml,l). We will now show that if p(i)→lp∗, then Ψ( x(l);p∗)→lΨ∗(p∗). For each l∈N, Ψ∗(p∗)−Ψ(x(l);p∗)≤|Ψ∗(p∗)−Ψ∗(p(ml))|+|Ψ∗(p(ml))−Ψ(xml,nl;p(ml))| +|Ψ(xml,nl;p(ml))−Ψ(xml,nl;p∗)|. Since ml> m l−1, we have ml→ ∞ . Thus p(ml)→lp∗asl→ ∞ . Since Ψ∗is continuous on Rkby Lemma S6.2, the above implies Ψ∗(p(ml))→lΨ∗(p∗), indicating that the first term in the above bound approaches zero. Since nl≥Kml,l, it holds that |Ψ∗(p(ml))−Ψ(xml,nl;p(ml))|<1/l. Therefore, lim l→∞|Ψ∗(p(ml))−Ψ(xml,nl;p(ml))|= 0. On the other
https://arxiv.org/abs/2505.17285v1
hand, |Ψ(xml,nl;p(ml))−Ψ(xml,nl;p∗)| ≤ sup x∈Rk,l∈[k] |ϕ(x, l)| ∥p(ml)−p∗∥1, which goes to zero as l→∞ because p(ml)→lp∗ and ϕ is bounded by Lemma S6.1. Therefore, we have shown that if p(ml)→lp∗, then Ψ(x(l);p∗)→lΨ∗(p∗). Therefore, it just remains to show that lim l→∞ ϕ(x(l); pred(x(l))) = 0. Since ml≥Ml and nl≥Rml, ϕ(x(l); pred(x(l))) =ϕ(xml,nl; pred(xml,nl))<1/l. Since ϕ is non-negative, the above implies that lim l→∞ ϕ(xml,nl; pred(xml,nl)) = 0, which proves the desired contradiction.

Lemma S6.6. Let k∈N. Suppose ϕ:Rk×[k]7→R, which is not identically zero, satisfies Condition N1. The sequences {x(m)} ⊂Rk and {p(m)} ⊂Rk≥0 are such that Ψ∗(p(m))−Ψ(x(m);p(m))→m0. Moreover, the sequence p(m) satisfies lim inf m→∞ ∥p(m)∥1>0. Then there exists a constant J>0, depending only on ψ, so that lim inf m→∞ ϕ(x(m); pred(x(m)))≥ J.

Proof of Lemma S6.6. Since lim inf m→∞ ∥p(m)∥1>0, ∥p(m)∥1 is bounded away from 0. Therefore, q(m)=p(m)/∥p(m)∥1 is well defined for all sufficiently large m∈N and Ψ∗(q(m))−Ψ(x(m);q(m)) (a)= (Ψ∗(p(m))−Ψ(x(m);p(m)))/∥p(m)∥1→m0, where (a) follows because Ψ∗ is positively homogeneous by Lemma S6.2 and Ψ(x;·) is linear for each fixed x∈Rk. Therefore, we can replace p(m) with q(m). Note that q(m)∈ Sk−1. Hence, without loss of generality, we assume that p(m)∈ Sk−1. If possible, suppose the assertion in the statement of Lemma S6.6 is false. This implies there exist ϵ>0 and sequences {x(m)} ⊂Rk and {p(m)} ⊂ Sk−1 such that Ψ∗(p(m))−Ψ(x(m);p(m))→m0 but, passing to a subsequence achieving the limit inferior if necessary, lim m→∞ ϕ(x(m); pred(x(m)))≤ J − ϵ. Therefore, all subsequences x(mr) of x(m) must satisfy lim r→∞ ϕ(x(mr); pred(x(mr)))≤ J −ϵ. We will show that there exists a subsequence {mr}r≥1⊂ {m} so that lim inf r→∞ ϕ(x(mr); pred(x(mr)))≥ J, which will finish the proof by contradiction. We will consider the following subsequence. Since {p(m)}m≥1⊂ Sk−1 and Sk−1 is closed, there exists a subsequence {mr}r≥1⊂ {m} so that p(mr)→rp∗∈ Sk−1. It is clear that p∗̸= 0.
If we can show Ψ(x(mr);p∗)→rΨ∗(p∗), then Lemma S6.5 would imply lim inf r→∞ ϕ(x(mr); pred(x(mr)))≥ J, leading to the desired contradiction. Therefore, it is enough to prove that Ψ(x(mr);p∗)→rΨ∗(p∗). To this end, note that Ψ∗(p∗)−Ψ(x(mr);p∗) = Ψ∗(p∗)−Ψ∗(p(mr)) + Ψ∗(p(mr))−Ψ(x(mr);p(mr)) + Ψ(x(mr);p(mr))−Ψ(x(mr);p∗), which converges to zero as r→∞ because (a) Ψ∗(p(mr))→rΨ∗(p∗) since Ψ∗ is continuous by Lemma S6.2 and p(mr)→rp∗, (b) Ψ∗(p(mr))−Ψ(x(mr);p(mr))→r0 because {mr}r≥1⊂ {m} and Ψ∗(p(m))−Ψ(x(m);p(m))→m0, and (c) Ψ(x(mr);p(mr))− Ψ(x(mr);p∗)→r0 since Ψ(x(mr);p(mr))−Ψ(x(mr);p∗)≤ sup x∈Rk,i∈[k] ϕ(x;i) ∥p(mr)−p∗∥1, which converges to 0 as p(mr)→rp∗. Hence, the proof follows.

S6.1.2. Properties of ϕ under both Conditions N1 and N2

Lemma S6.7. Suppose ϕ:Rk×[k]7→R satisfies Conditions N1 and N2 with Cϕ= 1. Then there exists Ω<1 so that for all x∈Rk, Σ i∈[k]:i̸=pred(x) ϕ(x;i)≤Ω.

Proof of Lemma S6.7. Fix i∈[k]. Let Ci={x∈Rk: pred(x) =i}. Recall that we denote by e(i)k the unit vector with one in the ith position. Consider p(i)=1k−e(i)k, whose ith element is zero but all other elements are one. Clearly, if x∈ Ci, then pred(x) =i /∈argmax(p(i)). Let Gi= sup x∈Ci Ψ(x;p(i)). Note that Ψ∗(p(i)) = max(p(i)) because ϕ satisfies Condition N2 with Cϕ= 1. Thus, Ψ∗(p(i)) = 1. Consequently, Gi<1 since Gi= sup x∈Ci Ψ(x;p(i)) (a)< Ψ∗(p(i)) = 1, where step (a) follows from Condition N1. Since Ψ(x;p(i)) =Σ j∈[k]:j̸=i ϕ(x;j), we have shown that if x∈ Ci, then Σ j∈[k]:j̸=i ϕ(x;j)≤Gi for some Gi<1. We can show that the above holds for
all i∈[k]. Let Ω = max i∈[k] Gi. Clearly, Ω <1. Since any x∈Rk belongs to Cpred(x), we have Σ j∈[k]:j̸=pred(x) ϕ(x;j)≤Gpred(x)≤Ω, which completes the proof.

Lemma S6.8. Suppose ϕ:Rk×[k]7→R satisfies Conditions N1 and N2. Then there exists χϕ∈(0,1], depending only on ϕ, so that Ψ∗(p)−Ψ(x;p)≥Cϕχϕ(max(p)−κ(x;p)) for all x∈Rk and p∈Rk≥0. Here Cϕ>0 is as in Condition N2.

Proof of Lemma S6.8. First, we will consider the case when Cϕ= 1. Recall from (S6.3) that κ(x;p) =ppred(x). Let us define z=κ(x;p). Then Ψ∗(p)−Ψ(x;p) = Ψ∗(p)−z+z(1−Σ i∈[k] ϕ(x;i)) +Σ i∈[k]:i̸=pred(x) (z−pi)ϕ(x;i), which is bounded below by Ψ∗(p)−z+Σ i∈[k]:i̸=pred(x) (z−pi)ϕ(x;i) since z≥0 and Σ i∈[k] ϕ(x;i)≤Ψ∗(1k) (a)= Cϕ= 1, where (a) follows from Condition N2. However, since Ψ∗(p) = max(p), ϕ(x;i)≥0, and z−pi≥z−max(p), the expression in the above display is bounded below by max(p)−z+Σ i∈[k]:i̸=pred(x) (z−max(p))ϕ(x;i) = (max(p)−z)(1−Σ i∈[k]:i̸=pred(x) ϕ(x;i)). Lemma S6.7 implies Σ i∈[k]:i̸=pred(x) ϕ(x;i)≤Ω where Ω <1, and it depends only on ϕ. Since z=κ(x;p), it follows that Ψ∗(p)−Ψ(x;p)≥(1−Ω)(max(p)−κ(x;p)). Since Ω ∈[0,1), 1−Ω∈(0,1]. Hence, the proof follows for Cϕ= 1. When Cϕ̸= 1, the proof follows taking ˜ϕ to be ϕ/Cϕ and applying the above result to ˜ϕ.

Lemma S6.9. Suppose the single-stage surrogate ϕ:Rk×[k]7→R satisfies Conditions N1 and N2 for some k∈N. Further suppose there exists C >0 so that Ψ(x;1k) =C for all x∈Rk. Then there exists J>0 so that ϕ(x; pred(x))>J.

Proof of Lemma S6.9. Without loss of generality, we assume that C= 1. If not, we can replace ϕ by ϕ/C so that Ψ(x;1k) = 1. Hence, the conditions of Lemma S6.4 are satisfied. We let Ω, J, and ˜Z be as in the proof of Lemma S6.4. We have shown in the proof of Lemma S6.4 that ˜Z(p)>0 for all p∈Rk≥0 such that p̸=0k. Since ϕ satisfies Ψ(x;1k) = 1, 1 = Ψ(x;1k) =Σ i∈[k]:i̸=pred(x) ϕ(x;i) +ϕ(x; pred(x)).
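For a concrete instance of the two preceding lemmas, one can take ϕ(x;i) to be the i-th softmax coordinate (an illustrative choice with Σ i ϕ(x;i) = 1, not the paper's surrogate). Then the off-prediction mass is at most Ω = (k−1)/k < 1, because the softmax coordinate at pred(x) is at least 1/k, and the Lemma S6.8-style margin bound holds with χϕ = 1/k:

```python
import math
import random

def softmax(x):
    # toy surrogate: phi(x; i) = i-th softmax probability
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def pred(x):
    # pred(x) = max(argmax(x)): the largest index attaining the maximum
    top = max(x)
    return max(i for i, v in enumerate(x) if v == top)

random.seed(0)
k = 4
for _ in range(1000):
    x = [random.gauss(0, 3) for _ in range(k)]
    p = [5 * random.random() for _ in range(k)]
    s = softmax(x)
    j = pred(x)
    # Lemma S6.7-style bound: off-prediction mass <= Omega = (k-1)/k < 1
    assert sum(s[i] for i in range(k) if i != j) <= (k - 1) / k + 1e-12
    # Lemma S6.8-style margin (here Psi*(p) = max(p)):
    # max(p) - Psi(x; p) = sum_i s_i (max(p) - p_i) >= s_j (max(p) - p_j)
    # and s_j >= 1/k, so chi = 1/k works for this surrogate
    margin = max(p) - sum(pi * si for pi, si in zip(p, s))
    assert margin >= (1 / k) * (max(p) - p[j]) - 1e-12
```

The constants Ω and χϕ here are specific to the softmax stand-in; the lemmas only assert their existence for any surrogate satisfying Conditions N1–N2.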
Hence, ϕ(x; pred( x)) = 1 −X i∈[k1]:i̸=pred( x)ϕ(x;i) = 1−Ψ(x;q), where q=1k−e(pred( x)) k. Note that q∈J(1k). Moreover, argmax( q) = [k]\ {pred(x)}, indicating pred( x)/∈argmax( q). Therefore, Ψ( x;q)≤Ω(q). Hence, ϕ(x; pred( x))≥1−Ω(q)≥1−sup q∈J(1k)Ω(q) = max( 1k)−sup q∈J(1k)Ω(q) = Ψ∗(1k)−sup q∈J(1k)Ω(q) =˜Z(1k). The proof follows since ˜Z(p)>0 for all p̸=0k, as mentioned in the beginning of the proof. Lemma S6.10. Letk∈N. Suppose ϕ:Rk×[k]7→R≥0satisfies Conditions N1 and N2. Given any p∈Rk ≥0, ifΨ∗(p) = ⟨v∗,p⟩for some v∗∈Rk, then the vector v∗satisfies v∗∈conv(Cset), where Cset={e(i) k:i∈argmax( p)}. Moreover, if Ψ(x(m);p)→mΨ∗(p), then ϕ(x(m);i)→m0ifi /∈argmax( p). Proof of Lemma S6.10. The proof is trivial if p=0k. Thus we assume p̸=0k. Without loss of generality, let us assume that Cϕ= 1 because otherwise we can replace ϕbyϕ/C ϕ. Recall the image set Vdefined in (4.3). Since Cϕ= 1, Ψ( x;1k)≤1, which implies ϕ(x;i)≤1 for each i∈[k] and V ⊂ x∈Rk ≥0:xi∈[0,1],X i∈[k]xi≤1 :=C. Since Φ∗is the support function of V(see the proof of Lemma S6.2), and Φ∗(p) =⟨v∗,p⟩, it follows that v∗lies on a face of the closed convex hull of V(cc. pp. 145 of Hiriart-Urruty and Lemar´ echal, 2004). Since V ⊂ C , andCis both closed and convex, conv(V)⊂ C. Therefore, v∗∈ C, which implies
v∗ i∈[0,1] for each i∈[k] andP i∈[k]v∗ i≤1. Therefore, ⟨v∗,p⟩ ≤max(p)X i∈[k]v∗ i≤max(p). Since ϕsatisfies Condition N2, it follows that Ψ∗(p) = max( p), implying ⟨v∗,p⟩= max( p)X i∈[k]v∗ i= max( p), implyingP i∈[k]v∗ i= 1 and ⟨v∗,p⟩= max( p)X i∈[k]v∗ i. Therefore,X i∈[k](max( p)−pi)v∗ i= 0, implying v∗ i>0 only if max( p) =pi. SinceP i∈[k]v∗ i= 1, it follows that v∗=P i∈[k]v∗ ie(i) k=P i∈argmax( p)v∗ ie(i) k∈conv(Cset). Lety(m) i=ϕ(x(m);i). We will show that, if i /∈argmax( p), given any subsequence of y(m) i, we can find a further subsequence that converges to zero. Therefore, it will follow that y(m) i→m0. Since ϕsatisfies Condition N1, ϕis bounded by Lemma S6.1. Therefore, given any subsequence of {x(m)}m≥1, we can find a further subsequence so that the y(m) i=ϕ(x(m);i)’s converge (say to some v∗ i∈R) for all i∈[k] along this subsequence. To simplify notation, we will denote the corresponding subsequences by x(m)andy(m). Hence, y(m)→mv∗. Therefore, it suffices to show that v∗ i= 0 unless i∈argmax( p). Note that ⟨y(m),p⟩ →m⟨v∗,p⟩because the l2inner product is a continuous function. Noting Ψ( x(m);p) =⟨y(m),p⟩and Ψ(x(m);p)→mΨ∗(p), we obtain that ⟨v∗,p⟩= Ψ∗(p). Therefore, from the first part, it follows that v∗∈conv(Cset), which implies v∗ i= 0 if i /∈argmax( p). S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.2 Preliminary auxlliary lemmas 87 S6.2. Preliminary auxlliary lemmas Lemma S6.11. Suppose T≥2,Psatisfies Assumptions I-IV, and (ϕ2, . . . , ϕ T)satisfies Conditions N1 and N2 with Cϕt= 1 fort∈[2 :T]. Then Ψ∗ t(p∗ t(Ht)) =ETX i=1YiTY j=t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht (S6.12) andargmax( p∗ t(Ht)) = argmax( Q∗ t(Ht))fort∈[2 :T]. Proof of Lemma S6.11. We will show that Ψ∗ t(p∗ t(Ht)) =ETX i=1YiTY j=t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht fort∈[2 :T] using induction. Our induction hypothesis is that the above and argmax( p∗ t(Ht)) = argmax( Q∗ t(Ht)) holds for t∈[2 :T]. Let t=T. Note that (S6.2) implies p∗ T(HT)i=EhPT i=1Yi|HT, AT=ii . 
Since Q∗ T(HT, i) =E[YT|HT, AT=i] for all i∈[kT] and pred( x) = max(argmax( x)) for any vector x, it follows that pred(p∗ T(HT)) = max(argmax i∈[kT]E[YT|HT, AT=i]) = max(argmax( Q∗ T(HT)) = pred( Q∗ T(HT)) =d∗ T(HT), where the last step follows because (S3.1) defines d∗ t(Ht) to be pred( Q∗ t(Ht)) for all t∈[T]. Since ϕt’s satisfy Condition N2 fort∈[2 :T] with Cϕt= 1, Ψ∗ t(p∗ T(Ht)) = max( p∗ T(HT))(a)=p∗ T(HT)pred(p∗ T(HT)) =ETX i=1Yi|AT= pred( p∗ T(HT)), HT (b)=ETX i=1Yi|AT=d∗ T(HT), HT , where (a) follows because pred( x)∈argmax( x) for any vector xand (b) follows because we just showed that pred( p∗ T(HT)) = d∗ T(HT). Since Psatisfies Assumption I, (S6.5) yields ETX i=1Yi HT, AT=d∗ T(HT) =ETX i=1Yi1[AT=d∗ T(HT)] πT(AT|HT) HT , which implies that the induction hypothesis holds when t=T. Suppose the induction hypothesis holds for some 1 + t∈[3 :T]. We will show that the induction hypothesis holds for t. Since t∈[2 :T],ϕtsatisfies Condition N2 with Cϕt= 1. Therefore, Ψ∗ t(p∗ t(Ht)) = max( p∗ t(Ht)) where by (S6.2), for any i∈[kt], p∗ t(Ht)i=E[Ψ∗ 1+t(p∗ 1+t(H1+t))|Ht, At=i] =E ETX i=1YiTY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) H1+t Ht, At=i =ETX i=1YiTY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj)
Ht, At=i (S6.13) since the Induction hypothesis holds for 1 + t. Thus Ψ∗ t(p∗ t(Ht)) =ETX i=1YiTY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht, At= pred( p∗ t(Ht)) =ETX i=1Yi1[At= pred( p∗ t(Ht))]QT j=1+t1[Aj=d∗ j(Hj)] QT j=tπj(Aj|Hj) Ht , where the last step follows from (S6.5). Thus to show that the induction hypothesis follows for t, it only remains to show S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.2 Preliminary auxlliary lemmas 88 that pred( p∗ t(Ht)) =d∗ t(Ht). To show the latter, we will first prove a fact. Since ( Y1, . . . , Y t−1)⊂Ht, ETX i=1YiTY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht, At=j =t−1X i=1Yi ETY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht, At=i +ETX i=tYiTY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht, At=j =t−1X i=1Yi+ETX i=tYiTY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht, At=j , (S6.14) where the last step follows from (S6.10). Since pred( p) = max(argmax( p)) for any vector p, (S6.13) implies pred( p∗ t(Ht)) equals max argmax j∈[kt]ETX i=1YiTY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht, At=j (a)= max argmax j∈[kt]ETX i=tYiTY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht, At=j , where (a) follows due to (S6.14). However, Fact S11.4 implies ETX i=tYiTY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht, At=i =Q∗ t(Ht, i), implying pred( p∗ t(Ht)) = pred( Q∗ t(Ht)) =d∗ t(Ht). Therefore, the proof follows. Lemma S6.12. Under the setup of Lemma S6.11, there exist c1, c2>0, depending only on P, so that the p∗ t:Ht7→R defined in (S6.2) satisfies c1<p∗ t(ht)i< c2for all ht∈ H t,i∈[kt], and t∈[T]. Proof of Lemma S6.12. We will use (S6.13). By our assumption, there is a constant depending on P, i.e., Cmin(P), so that Yt> C min(P) for each t∈[T]. Hence, (S6.13) implies for each i∈[kt], p∗ t(Ht)i≥TCmin(P)ETY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) , which equals TCmin(P) by (S6.10). Moreover, Assumption IV implies that there exists a constant depending on p, say Cmax(P), so that Yj≤Cmax(P) for all j∈[T]. Hence, p∗ t(Ht)i≤TCmax(P)ETY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) =TCmax(P) by the same argument. Hence, the proof follows. Lemma S6.13. 
Suppose Psatisfies Assumptions I-V, and if T≥2, then for t∈[2 :T],ϕtsatisfies Conditions N1 and N2 with Cϕt= 1. Then p∗ 1(H1) =Q∗ 1(H1)andV∗=E[max( p∗ 1(H1))]. Proof of Lemma S6.13. IfT= 1, then the proof follows trivially because in this case, p∗ 1(H1)i=p1(H1)i=E[Y1|H1, A1= i] =Q∗ 1(H1, i) for all i∈[k1]. Hence, we assume that T≥2. For t∈[2 :T],Cϕt= 1 implies Ψ∗ t(1kt) = 1, which indicates Lemmas S6.3–S6.8 hold. By (S6.2) and (S6.12), for any i∈[k1], p∗ 1(H1)i=E[Ψ∗ 2(p∗ 2(H2))|H1, A1=i] =E ETX i=1YiTY j=21[Aj=d∗ j(Hj)] πj(Aj|Hj) H2 H1, A1=i where the last step follows by Lemma S6.11. Using Fact S11.4 in step (a), we obtain that p∗ 1(H1)i=ETX i=1YiTY j=21[Aj=d∗ j(Hj)] πj(Aj|Hj) H1, A1=i (a)=Q∗ 1(H1, i). That V∗=E[max( p∗ 1(H1))] follows noting V∗=E[max i∈[k1]Q∗ 1(H1, i)]. S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.2 Preliminary auxlliary lemmas 89 Lemma S6.14. Suppose Psatisfies Assumptions I-V. Then Vψ ∗=E[Ψ∗ 1(p∗ 1(H1))], and for any f∈ F,Vψ(f) =E[Ψ(f1(H1);p1(H1))] where p1(H1)≡p1(H1;f). In the special case where ϕ1also satisfies Condition N2 with Cϕ1= 1, then Vψ ∗=V∗. Proof of Lemma S6.14. We will prove Vψ ∗=E[Ψ∗ 1(p∗ 1(H1))] by induction. Since the proof of
Vψ(f) =E[Ψ(f1(H1);p1(H1)] is similar, it is skipped. Induction hypothesis: Fort∈[T], sup fi∈Fi:i∈[t:T]ETX i=1YiTY j=1ϕj(fj(Hj);Aj) πj(Aj|Hj) =Et−1Y j=1ϕj(fj(Hj);At) πj(Aj|Hj)Ψ∗ t(p∗ t(Ht)) , where the product term is one when the range of the product is empty, which occurs only when t= 1. When t=T, taking V=PT i=tYtandh(HT) =QT−1 j=1{ϕj(fj(Hj);Aj)/πj(Aj|Hj)}in (S6.8), we obtain that sup fT∈FTETX i=1YiTY j=1ϕj(fj(Hj);Aj) πj(Aj|Hj) =ET−1Y j=1ϕj(fj(Hj);Aj) πj(Aj|Hj)Ψ∗ T(p∗ T(HT)) . Thus the induction hypothesis holds for t=T. Suppose it holds for some t∈[2 :T]. We will show that then the induction hypothesis holds for t−1. Note that t−1∈[T−1] and sup fi:i∈[t−1:T]ETX i=1YiTY j=1ϕj(fj(Hj);Aj) πj(Aj|Hj) (a)= sup ft−1Et−1Y j=1ϕj(fj(Hj);Aj) πj(Aj|Hj)Ψ∗ t(p∗ t(Ht)) = sup ft−1∈Ft−1Et−2Y j=1ϕj(fT(Ht);At) πj(Aj|Hj) Ψ∗ t(p∗ t(Ht))ϕt−1(ft−1(Ht−1);At−1) πt−1(At−1|Ht−1) . Now we apply (S6.8) on the right hand side of the above display at time stage t−1 with V= Ψ∗ t(p∗ t(Ht)) and h(Ht) =Qt−2 j=1{ϕj(fj(Hj);Aj)/πj(Aj|Hj)}. Noting that E[V|Ht−1, At−1=i] =E[Ψ∗ t(p∗ t(Ht))|Ht−1, At−1=i] =p∗ t−1(Ht−1)i by (S6.2), and applying (S6.8), we obtain that sup ft−1∈Ft−1Et−2Y j=1ϕj(fT(Ht);At) πj(Aj|Hj) Ψ∗ t(p∗ t(Ht))ϕt−1(ft−1(Ht−1);At−1) πt−1(At−1|Ht−1) = sup ft−1∈Ft−1Et−2Y j=1ϕj(fT(Ht);At) πj(Aj|Hj)Ψ∗ t−1(p∗ t−1(Ht−1)) , which implies that the induction hypothesis holds for t−1. Therefore, the induction hypothesis holds for all t∈[1 :T]. Thus, setting t= 1, we obtain that Vψ ∗= sup ft∈Ft:t∈[T]ETX i=1YiTY j=1ϕj(fj(Hj);Aj) πj(Aj|Hj) =E[Ψ∗ 1(p∗ 1(H1))]. Finally, in the special case, Ψ∗ 1(p∗ 1(H1)) = max( p∗ 1(H1)) by Assumption N2. Hence, Vψ ∗=E[max( p∗ 1(H1))], which is V∗by Lemma S6.13. Auxiliary lemmas for induction steps Lemmas S6.15 and S6.16 are used during the induction steps in the proofs of Theorem 4.1 and 4.2, respectively. Lemma S6.15. Suppose Psatisfies Assumptions I-V, ϕ1satisfies Condition N1 and for t∈[2 :T],ϕtsatisfies Conditions N1 and N2 with Cϕt= 1. 
Let Zdiff,t=PT i=1Yi 1[At=pred(x)] QT r=tπr(Ar|Hr)T−1Y r=t+1ϕr fr(Hr);pred(fr(Hr)) ×(TY r=1+t1[Ar=d∗ r(Hr)]−TY r=1+t1[Ar=pred(fr(Hr))]) , where the product terms in the expectation are one if the range is empty. Then for t= 1, . . . , T , for all f∈ F, and for any x∈Rkt,E[Zdiff,t|Ht]is non-negative and Ψt(x;p∗ t(Ht))−Ψt(x;pt(Ht))≥ min 1+t≤i≤Tχϕi ϕt(x;pred(x))E[Zdiff,t|Ht], (S6.15) where the χϕt’s are as in Lemma S6.8 and pt(Ht)≡pt(Ht;f). S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.2 Preliminary auxlliary lemmas 90 Proof of Lemma S6.15. First of all, the assumptions imply Lemma S6.8 is applicable to ϕ2, . . . , ϕ T. When t=T, we can show that the LHS of (S6.15) is zero for all x∈Rktbecause p∗ T(hT) =pT(hT) for all hT∈ H Tby definition. Also, the RHS of (S6.15) is zero in this case because the range [1 + T, T] is empty. Thus,(S6.15) trivially follows for t=T. Therefore, it suffices to consider the case t∈[T−1], which implies we also consider T≥2. We will prove the lemma by Induction. Our induction hypothesis is that (S6.15) holds and the RHS is non-negative for t∈[T−1]. First, we will show that (S6.15) holds for t=T−1. Then we will show that if (S6.15) holds for t+ 1∈[2 :T−1], then it holds for tas well. When t=T−1, the LHS of (S6.15) equals ΨT−1(x;p∗ T−1(HT−1))−ΨT−1(x;pT−1(HT−1)) =kT−1X i=1ϕT−1(x;i) p∗ T−1(HT−1)i−pT−1(HT−1)i =kT−1X i=1ϕT−1(x;i) E[Ψ∗ T(p∗ T(HT))−ΨT(fT(HT);pT(HT))|HT−1, AT−1=i] by (S6.2). Since p∗ T=pT, it follows that
the above is non-negative. Lemma S6.8 a applies to ϕTbecause T≥2. Applying Lemma S6.8, we obtain that there exists χϕT>0, depending only on ϕt, so that kT−1X i=1ϕT−1(x;i) E[Ψ∗ t(p∗ T(HT))−Ψt(fT(HT);p∗ T(HT))|HT−1, AT−1=i] ≥χϕTkT−1X i=1ϕT−1(x;i)E[max( p∗ T(HT))−κ(fT(HT);pT(HT))|HT−1, AT−1=i] which is bounded below by χϕTϕT−1(x; pred( x))E[max( p∗ T(HT))−κ(fT(HT);pT(HT))|HT−1, AT−1= pred( x)], which is non-negative because pT=p∗ T. However, E[max( p∗ T(HT))−κ(fT(HT);pT(HT))|HT−1, AT−1= pred( x)] (S6.16) (a)=ETX i=1Yi1[AT= pred( p∗ T(HT))]−1[AT= pred( fT(HT))] πT(AT|HT) HT−1, AT−1= pred( x) (b)=ETX i=1Yi 1[AT−1= pred( x)]1[AT= pred( p∗ T(HT))]−1[AT= pred( fT(HT))]QT t=T−1πt(At|Ht) HT−1 (c)=ETX i=1Yi 1[AT−1= pred( x)]1[AT=d∗ T(HT)]−1[AT= pred( fT(HT))]QT t=T−1πt(At|Ht) HT−1 where (a) follows from (S6.9) with i= pred( fT(HT)), (b) follows from an application of (S6.5), and (c) follows because pred(p∗ T(HT)) =d∗ T(HT) by Lemma S6.13. Since max(p∗ T(HT))≥κ(fT(HT);p∗ T(HT)) =κ(fT(HT);pT(HT)), (S6.16) also implies that ETX i=1Yi 1[AT−1= pred( x)]×1[AT=d∗ T(HT)]−1[AT= pred( fT(HT))]QT t=T−1πt(At|Ht) HT−1 , which is non-negative. Since χϕT, ϕT−1≥0, it also follows that ΨT−1(x;p∗ T−1(HT−1))−ΨT−1(x;pT−1(HT−1))≥χϕTϕT−1(x; pred( x)) ×ETX i=1Yi 1[AT−1= pred( x)]1[AT=d∗ T(HT)]−1[AT= pred( fT(HT))]QT t=T−1πt(At|Ht) HT−1 , which is non-negative, implying that the induction hypothesis holds for t=T−1. IfT= 2, then there is nothing to prove. Suppose T≥3 and that the induction hypothesis is true for t+ 1 where t∈[1, T−2]. We will show that the induction hypothesis is true for t, which will complete the proof of the current lemma. Note that Ψt(x;p∗ t(Ht))−Ψt(x;pt(Ht)) =ktX i=1ϕt(x;i) p∗ t(Ht)i−pt(Ht)i (S6.17) =ktX i=1ϕt(x;i) E[Ψ∗ 1+t(p∗ 1+t(H1+t))−Ψ1+t(f1+t(H1+t);p1+t(H1+t))|Ht, At=i] S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.2 Preliminary auxlliary lemmas 91 by (S6.2) since t∈[T−2]. 
However, the above equals ktX i=1ϕt(x;i) E[Ψ∗ 1+t(p∗ 1+t(H1+t))−Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t))|Ht, At=i] | {z } S1 +ktX i=1ϕt(x;i) E[Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t))−Ψ1+t(f1+t(H1+t);p1+t(H1+t))|Ht, At=i] | {z } S2. (S6.18) Since 1 + t∈[2 :T−1], Lemma S6.8 can be applied to ϕ1+t, yielding S1≥χϕ1+tktX i=1ϕt(x;i)E[max( p∗ 1+t(H1+t))−κ(f1+t(H1+t);p∗ 1+t(H1+t))|Ht, At=i] ≥χϕ1+tϕt(x; pred( x))E[max( p∗ 1+t(H1+t))−κ(f1+t(H1+t);p∗ 1+t(H1+t))|Ht, At= pred( x)] ≥ min 1+t≤i≤Tχϕi ϕt(x; pred( x)) (S6.19) ×Eh max(p∗ 1+t(H1+t))−κ(f1+t(H1+t);p∗ 1+t(H1+t))|Ht, At= pred( x)i . Since p∗ 1+t(H1+t)i=E[Ψ∗ 2+t(p∗ 2+t(H2+t))|H1+t, A1+t=i] (S6.20) for all i∈[k1+t], it follows that E[max( p∗ 1+t(H1+t))−κ(f1+t(H1+t);p∗ 1+t(H1+t))|Ht, At= pred( x)] =E E max(p∗ 1+t(H1+t))−κ(f1+t(H1+t);p∗ 1+t(H1+t)) H1+t Ht, At= pred( x) (a)=E E1[A1+t= pred( p∗ 1+t(H1+t))]−1[A1+t= pred( f1+t(H1+t))] π1+t(A1+t|H1+t) ×Ψ∗ 2+t(p∗ 2+t(H2+t)) H1+t Ht, At= pred( x) (S6.21) where (a) uses (S6.9) on H1+twith i= pred( f1+t(H1+t)) and p(H1+t) =p∗ 1+t(H1+t), whose expression is given by (S6.20). Also, note that E1[A1+t= pred( p∗ 1+t(H1+t))]−1[A1+t= pred( f1+t(H1+t))] π1+t(A1+t|H1+t)Ψ∗ 2+t(p∗ 2+t(H2+t)) H1+t =E max(p∗ 1+t(H1+t))−κ(f1+t(H1+t);p∗ 1+t(H1+t)) H1+t (S6.22) is non-negative because max( p∗ 1+t(H1+t))≥κ(f1+t(H1+t);p∗ 1+t(H1+t)). Since Cϕt= Ψ∗ t(1kt) and Cϕt= 1 for all t∈[2 :T], it follows that ϕt(x; pred( x))≤1 for each x∈Rktfor all t∈[2 :T]. In particular 0≤T−1Y r=t+1ϕr fr(Hr); pred( fr(Hr)) ≤1 for all t≥1. Therefore, denoting S11=ET−1Y r=t+1ϕr fr(Hr); pred( fr(Hr)) ×E1[A1+t= pred( p∗ 1+t(H1+t))]−1[A1+t= pred( f1+t(H1+t))] π1+t(A1+t|H1+t) ×Ψ∗ 2+t(p∗ 2+t(H2+t)) H1+t Ht, At= pred( x) , (S6.23) and using (S6.21), we obtain that E[max( p∗ 1+t(H1+t))−κ(f1+t(H1+t);p∗ 1+t(H1+t))|Ht, At= pred( x)]≥S11. (S6.24) Since 0 ≤QT−1 r=t+1ϕr fr(Hr); pred( fr(Hr)) , from (S6.22) it follows that S11≥0. Also, Lemma S6.11 implies Ψ∗ 2+t(p∗ 2+t(H2+t)) =ETX i=1YiTY j=t+21[Aj=d∗
j(Hj)] πj(Aj|Hj) H2+t , S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.2 Preliminary auxlliary lemmas 92 where H2+tis non-empty because the case under consideration has t∈[T−2]. Therefore, S11=ET−1Y r=t+1ϕr fr(Hr); pred( fr(Hr)) ×1[A1+t= pred( p∗ 1+t(H1+t))]−1[A1+t= pred( f1+t(H1+t))] π1+t(A1+t|H1+t) ×TX i=1YiTY j=t+21[Aj=d∗ j(Hj)] πj(Aj|Hj) Ht, At= pred( x) . An application of (S6.5) yields that S11=ET−1Y r=t+1ϕr fr(Hr); pred( fr(Hr)) ×1[A1+t= pred( p∗ 1+t(H1+t))]−1[A1+t= pred( f1+t(H1+t))] π1+t(A1+t|H1+t) ×TX i=1YiTY j=t+21[Aj=d∗ j(Hj)] πj(Aj|Hj)1[At= pred( x)] πt(At|Ht) Ht . Combining the above with (S6.19) and (S6.24), and using the fact that ϕt’s and χϕt’s are non-negative, we obtain the following lower bound for S1: S1≥ min 1+t≤i≤Tχϕi ϕt(x; pred( x))S11 = min 1+t≤i≤Tχϕi ϕt(x; pred( x))ETX i=1YiT−1Y r=t+1ϕr fr(Hr); pred( fr(Hr)) (S6.25) ×QT j=t+21[Aj=d∗ j(Hj)] QT j=tπj(Aj|Hj) × 1[A1+t=d∗ 1+t(H1+t)]−1[A1+t= pred( f1+t(H1+t))] 1[At= pred( x)] Ht Since we already argued that S11≥0, the RHS of (S6.25) is non-negative. Now we will bound S2. By the induction hypothesis, each element of the sum S2is non-negative. Therefore, S2is bounded below by ϕt(x; pred( x)) times E[Ψ∗ 1+t(p∗ 1+t(H1+t))−Ψ1+t(f1+t(H1+t);p1+t(H1+t))|Ht, At= pred( x)] =E Ψ∗ 1+t(p∗ 1+t(H1+t))−Ψ1+t(f1+t(H1+t);p1+t(H1+t))1[At= pred( x)] πt(At|Ht) Ht where the last step follows from (S6.5). Since the Induction hypothesis holds for 1 + t, it holds that Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t))−Ψ1+t(f1+t(H1+t);p1+t(H1+t)) ≥ min 2+t≤i≤Tχϕi ϕ1+t f1+t(H1+t); pred( f1+t(H1+t)) ×E T−1Y r=t+2ϕr fr(Hr)); pred( fr(Hr)) (S6.26) ×TX i=1Yi ×1[A1+t= pred( f1+t(H1+t)] (S6.27) ×QT r=t+21[Ar=d∗ r(Hr)]−QT r=t+21[Ar= pred( fr(Hr)] QT r=t+1πr(Ar|Hr) H1+t , which is also non-negative by the induction hypothesis. 
Since the χϕt’s and the ϕt’s are non-negative, we obtain that S2≥ min 2+t≤i≤Tχϕi ϕt(x; pred( x))E T−1Y r=t+1ϕr fr(Hr)); pred( fr(Hr)) ×1[At= pred( x)] πt(At|Ht)TX i=1Yi ×1[A1+t= pred( f1+t(H1+t)] ×QT r=t+21[Ar=d∗ r(Hr)]−QT r=t+21[Ar= pred( fr(Hr))] QT r=t+1πr(Ar|Hr) Ht , S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.3 Proof of Theorem 4.1 93 which is also non-negative. Adding the above inequality with the inequality in (S6.25), we obtain that S1+S2≥ min 1+t≤i≤Tχϕi ϕt(x; pred( x))E[Zdiff,t|Ht] Since we have shown that S11(the RHS of (S6.25)) and the RHS of (S6.26) are non-negative, it also follows that E[Zdiff,t|Ht] is non-negative. The induction hypothesis follows for tnoting (S6.17) and (S6.18) imply Ψt(x;p∗ t(Ht))−Ψt(x;pt(Ht)) =S1+S2, thus completing the proof of Lemma S6.15. Lemma S6.16. Suppose P,ϕ1, . . . , ϕ T, and Zdiff,tare as in Lemma S6.15. Additionally, ϕt(x;pred(x))≥ Jtfor some Jt≥0 for all t∈[T]andx∈Rkt. Then for t= 1, . . . , T , for all f∈ F, and for any x∈Rkt, Ψt(x;p∗ t(Ht))−Ψt(x;pt(Ht))≥TY i=tJi min 1+t≤i≤Tχϕi E[Zdiff,t|Ht], andE[Zdiff,t|Ht]≥0. Here the χϕt’s are as in Lemma S6.8 and pt(Ht)≡pt(Ht;f). Proof. Note that Lemma S6.15 implies ϕt(ft(Ht),pred( ft(Ht)))≥ J tsince ft(Ht)∈Rktfor all t∈[T]. If we redo all steps of Lemma S6.15 replacing ϕr(fr(Hr),pred( fr(Hr))) and ϕt(x,pred(x)) with Jtin all lower bounds, Lemma S6.16 will follow. Since the proof is similar, it has been skipped. S6.3. Proof of Theorem 4.1 We need to show that if Vψ(f(m))→mVψ ∗for some {f(m)}m≥1⊂ F, then V(f(m))→mV∗. Without loss of generality, let us assume Cϕt= 1 for t∈[2 :T] because we can substitute ϕtbyϕt/Cϕtotherwise. We claim that the proof of Fisher consistency follows directly from
Lemma S6.17 below. Lemma S6.17. Suppose ϕ1satisfies Condition N1 and ϕ2, . . . , ϕ Tsatisfy both Conditions N1 and N2 with Cϕt= 1 for t∈[2;T]. IfVψ(f(m))→mV∗, then under Assumptions I-V, ETX i=1Yit−1Y j=11[A1=pred(f(m) 1(H1))] π1(A1|H1)TY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) × 1[At=pred(p∗ t(Ht))]−1[At=pred(f(m) t(Ht))] πt(At|Ht) →m0 (S6.28) for all t∈[T], where the products with empty range are taken to be one. To see how the proof follows the Lemma S6.17, first note that V∗−V(f) =ETX i=1YiQT t=11[At=d∗ t(Ht)]−QT t=11[At= pred( f(m) t(Ht))] QT t=1πt(At|Ht) . Letting products over empty ranges be one, we calculate TX t=1t−1Y j=11[A1= pred( f(m) 1(H1))] π1(A1|H1)TY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) × 1[At= pred( p∗ t(Ht))]−1[At= pred( f(m) t(Ht))] πt(At|Ht) (a)=TX t=1t−1Y j=11[A1= pred( f(m) 1(H1))] π1(A1|H1)TY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) × 1[At=d∗ t(Ht)]−1[At= pred( f(m) t(Ht))] πt(At|Ht) =TX t=1t−1Y j=11[A1= pred( f(m) 1(H1))] π1(A1|H1)TY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj)1[At=d∗ t(Ht)] πt(At|Ht) −TX t=1t−1Y j=11[A1= pred( f(m) 1(H1))] π1(A1|H1)TY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj)1[At= pred( f(m) t(Ht))] πt(At|Ht) =TX t=1t−1Y j=11[A1= pred( f(m) 1(H1))] π1(A1|H1)TY j=t1[Aj=d∗ j(Hj)] πj(Aj|Hj) −TX t=1tY j=11[A1= pred( f(m) 1(H1))] π1(A1|H1)TY j=1+t1[Aj=d∗ j(Hj)] πj(Aj|Hj), S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.3 Proof of Theorem 4.1 94 where (a) follows from Lemmas S6.11 and S6.13. The telescopic sum in the RHS of the above display equals QT t=11[At=d∗ t(Ht)]−QT t=11[At= pred( f(m) t(Ht))] QT t=1πt(At|Ht). Thus, we derived that V∗−V(f) equals ETX i=1YiTX t=1t−1Y j=11[A1= pred( f(m) 1(H1))] π1(A1|H1)TY j=1+t ×1[Aj=d∗ j(Hj)] πj(Aj|Hj) 1[At= pred( p∗ t(Ht))]−1[At= pred( f(m) t(Ht))] πt(At|Ht) , which goes to zero as m→ ∞ by Lemma S6.17. The rest of the proof is devoted towards proving Lemma S6.17. S6.3.1. Proof of Lemma S6.17 LetHt(d(m)) be the counterfactual Htunder the DTR d(m) t(Ht) = pred( f(m) t(Ht)). 
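The telescoping cancellation computed above — summing, over t, the product with the first t−1 factors taken from the pred(f)-indicators and the rest from the d∗-indicators, minus the product with the split moved one step — is a purely algebraic identity; a quick numerical check with arbitrary reals standing in for the weighted indicator terms:

```python
import random

def prod(seq):
    # running product; an empty range gives 1, as in the text
    out = 1.0
    for v in seq:
        out *= v
    return out

random.seed(1)
T = 5
a = [random.random() for _ in range(T)]  # stand-ins for the d*-indicator factors
b = [random.random() for _ in range(T)]  # stand-ins for the pred(f)-indicator factors

# sum_t [ prod(b_1..b_{t-1}) * prod(a_t..a_T) - prod(b_1..b_t) * prod(a_{t+1}..a_T) ]
telescope = sum(
    prod(b[: t - 1]) * prod(a[t - 1 :]) - prod(b[:t]) * prod(a[t:])
    for t in range(1, T + 1)
)
# successive terms cancel, leaving prod(a) - prod(b)
assert abs(telescope - (prod(a) - prod(b))) < 1e-12
```

This is exactly why the sum over t in the display above collapses to the single difference of the two full indicator products.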
In other words, the random variable Ht(d(m)) is distributed as Ht in the world where the treatments are assigned according to d(m) instead of the propensity scores πt. Thus H1(d(m))≡H1, H2(d(m)) = (H1, d(m)1(H1), O2(d(m)1(H1)), Y1(d(m)1(H1))), and so on. Thus Ht(d(m))'s distribution is not the same as the distribution of Ht under P for t≥2. From Lemma 1 of Orellana et al. (2010) (see also Murphy et al., 2001b), it follows that for t∈[T−1], the distribution of H1+t(d(m)) is absolutely continuous with respect to the distribution of H1+t under Assumptions I-III with Radon-Nikodym derivative tY i=11[Ai=d(m) i(Hi)] πi(Ai|Hi), which is well defined as πi(Ai|Hi)>0 for all i∈[T] by Assumption I. Therefore, under Assumptions I-III, for any measurable function h:H1+t7→R, the expectation under Pd(m) is identified as E[h(H1+t(d(m)))] =E[h(H1+t) tY i=11[Ai=d(m) i(Hi)] πi(Ai|Hi)]. (S6.29) We will prove Lemma S6.17 by mathematical induction. We consider the following induction hypothesis: under the setup of Lemma S6.17, for each t∈[T], (P.i) (S6.28) holds, and (P.ii) E[t−1Y i=11[Ai= pred( f(m) i(Hi))] πi(Ai|Hi) (Ψ∗ t(p∗ t(Ht))−Ψt(f(m) t(Ht);p∗ t(Ht)))]→m0, (S6.30) where the product over an empty range is defined to be one. We will denote pt(Ht;f(m)) by p(m) t(Ht) throughout the proof of Lemma S6.17. First we show that the induction hypotheses (P.i) and (P.ii) hold for t= 1.

S6.3.2. Case t= 1

Note that Vψ ∗−Vψ(f(m)) (a)= E[Ψ∗ 1(p∗ 1(H1))]−E[Ψ1(f(m) 1(H1);p(m) 1(H1))] =E[Ψ∗ 1(p∗ 1(H1))−Ψ1(f(m) 1(H1);p∗ 1(H1))] +E[Ψ1(f(m) 1(H1);p∗ 1(H1))−Ψ1(f(m) 1(H1);p(m) 1(H1))]
(b) ≥E[Ψ∗ 1(p∗ 1(H1))−Ψ1(f(m) 1(H1);p∗ 1(H1))], where (a) follows from Lemma S6.14 and (b) follows as Lemma S6.15 implies Ψ1(f(m) 1(H1);p∗ 1(H1))≥Ψ1(f(m) 1(H1);p(m) 1(H1)). Also, the RHS of the above display is non-negative because Ψ∗ 1(p∗ 1(H1))≥Ψ1(f(m) 1(H1);p∗ 1(H1)) by the definition of Ψ∗in (4.2). Since Vψ(f(m))→mVψ ∗, it then follows that E[Ψ∗ 1(p∗ 1(H1))−Ψ1(f(m) 1(H1);p∗ 1(H1))]→m0, (S6.31) S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.3 Proof of Theorem 4.1 95 implying (S6.30). Hence Hypothesis P.ii holds for t= 1. Let B1= [c1, c2]k1where c1, c2>0 is as in Lemma S6.12. Then Lemma S6.12 implies that p∗ 1(h1)∈ B1for all h1∈ H 1. Note that c2depends on P. Since ϕ1satisfies Condition N1, Lemma S6.3 applies on ϕ1withp=p∗ 1(h1) andB=B1. Let ϱbe as in Lemma S6.3, which depends on ϕ1andP(viaB1). Since ϱis convex with dom( ϱ)⊃R≥0, max( p∗ 1(H1))−κ(f(m) 1(H1);p∗ 1(H1))∈dom( ϱ) for all H1. Thus Jensen’s inequality implies that ϱ Eh max(p∗ 1(H1))−κ(f(m) 1(H1);p∗ 1(H1))i ≤Eh ϱ max(p∗ 1(H1))−κ(f(m) 1(H1);p∗ 1(H1))i (a) ≤Eh Ψ∗ 1(p∗ 1(H1))−Ψ1(f(m) 1(H1);p∗ 1(H1))i , where (a) follows from Lemma S6.3 because p∗ 1(H1)∈ B1. Lemma S6.3 states that ϱ(x)→0 implies x→0 for x≥0. Since max(p∗ 1(H1))≥κ(f(m) 1(H1);p∗ 1(H1)) by the definition of κin (S6.3), E[max( p∗ 1(H1))−κ(f(m) 1(H1);p∗ 1(H1))]≥0. Therefore, (S6.31) leads to Eh max(p∗ 1(H1))−κ(f(m) 1(H1);p∗ 1(H1))i →m0. (S6.32) On one hand, Lemma S6.13 implies E[max( p∗ 1(H1))] = V∗=ETX i=1YiTY j=11[Aj=d∗ j(Hj)] πj(Aj|Hj) . On the other hand, κ(f(m) 1(H1);p∗ 1(H1)) =p∗ 1(H1)pred( f(m) 1(H1)) (a)=ETX i=1YiTY j=21[Aj=d∗ j(Hj)] πj(Aj|Hj) H1, A1= pred( f(m) 1(H1)) , where (a) follows by (S6.13). Then (S6.5) implies E[κ(f(m) 1(H1);p∗ 1(H1))] =ETX i=1YiTY j=21[Aj=d∗ j(Hj)] πj(Aj|Hj)1[A1= pred( f(m) 1(H1))] π1(A1|H1) , which, combined with (S6.32), implies ETX i=1YiTY j=21[Aj=d∗ j(Hj)] πj(Aj|Hj)1[A1=d∗ 1(H1)]−1[A1= pred( f(m) 1(H1))] π1(A1|H1) →m0, (S6.33) which implies induction hypothesis P.i holds when t= 1. 
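The Jensen step used above — moving the convex function ϱ inside the expectation — can be sanity-checked with any convex function; here x↦x² stands in for the ϱ of Lemma S6.3 (an illustrative substitute, since ϱ is not given in closed form):

```python
import random

random.seed(2)
# nonnegative samples standing in for max(p*) - kappa(f; p*) >= 0
xs = [abs(random.gauss(0, 1)) for _ in range(10000)]

rho = lambda v: v * v  # convex stand-in for the paper's rho
mean = sum(xs) / len(xs)

# Jensen's inequality: rho(E[X]) <= E[rho(X)] for convex rho
assert rho(mean) <= sum(rho(v) for v in xs) / len(xs)
```

The direction of the inequality is what lets the proof pass from the convergence of E[Ψ∗1−Ψ1] to the convergence of the expected margin E[max(p∗1)−κ(f(m)1;p∗1)].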
If T= 1, then there is nothing else to prove. Therefore, we will consider the case T≥2 from now on. S6.3.3. Case t∈[2 :T] LetT≥2. Suppose the induction hypotheses P.i and P.ii hold for t∈[1 :T−1]. We will show that they hold for t+ 1. Let us denote wt=t−1Y i=11[Ai= pred( f(m) i(Hi))] πi(Ai|Hi), (S6.34) implying w1= 1. Since t < T ,ptandp∗ tare as in (S6.1) and (S6.2), respectively. Using the fact that wt≥0 and equations (S6.17) and (S6.18) from the proof of Lemma S6.15, we obtain that E wt Ψt(f(m) t(Ht);p∗ t(Ht))−Ψt(f(m) t(Ht);p(m) t(Ht)) ≥E wtktX i=1ϕt(f(m) t(Ht);i)E[Ψ∗ 1+t(p∗ 1+t(H1+t)) −Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t))|Ht, At=i] +E wtktX i=1ϕt(f(m) t(Ht);i)E[Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t)) −Ψ1+t(f1+t(H1+t);p1+t(H1+t))|Ht, At=i] , S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.3 Proof of Theorem 4.1 96 whose second term is non-negative by Lemma S6.15. Since the ϕt’s are non-negative and Ψ∗ 1+t(p∗ 1+t(H1+t))≥Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t)) for all t∈[T−1], and (S6.30) holds for t, E wtϕt(f(m) t(Ht); pred( f(m) t(Ht)))E[Ψ∗ 1+t(p∗ 1+t(H1+t)) (S6.35) −Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t))|Ht, At= pred( f(m) t(Ht))] →m0. An application of (S6.5) implies that E wtϕt(f(m) t(Ht); pred( f(m) t(Ht)))E[Ψ∗ 1+t(p∗ 1+t(H1+t)) −Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t))|Ht, At= pred( f(m) t(Ht))] =E wt1[At= pred( f(m) t(Ht))] πt(At|Ht)ϕt(f(m) t(Ht); pred( f(m) t(Ht)))E[Ψ∗ 1+t(p∗ 1+t(H1+t)) −Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t))|Ht] (a)=E tY i=11[Ai= pred( f(m) i(Hi))] πi(Ai|Hi) ϕt(f(m) t(Ht); pred(
f(m) t(Ht))) × Ψ∗ 1+t(p∗ 1+t(H1+t))−Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t)) , where (a) follows from (S6.34). However, the above, combined with (S6.29), implies that E ϕt f(m) t(Ht(d(m))); pred( f(m) t(Ht(d(m)))) Ψ∗ 1+t(p∗ 1+t(H1+t(d(m)))) −Ψ1+t(f1+t(H1+t(d(m)));p∗ 1+t(H1+t(d(m)))) →m0. While we use Ht(d(m)), we want to remind the readers that the functions ptandp∗ tdo not depend on the distribution of Ht(d(m)), and their definition remains as in as in (S6.1) and (S6.2), respectively, which depends only on P. Lemma S6.18 below states that the above convergence implies E Ψ∗ 1+t(p∗ 1+t(H1+t(d(m))))−Ψ1+t(f1+t(H1+t(d(m)));p∗ 1+t(H1+t(d(m)))) →m0, which combined with (S6.29), implies that E w1+t Ψ∗ 1+t(p∗ 1+t(H1+t))−Ψ1+t(f1+t(H1+t);p∗ 1+t(H1+t)) →m0. Lemma S6.18. Consider the setup of Lemma S6.17. Let T≥2and Z(m)= Ψ∗ 1+t(p∗ 1+t(H1+t(d(m))))−Ψ1+t(f1+t(H1+t(d(m)));p∗ 1+t(H1+t(d(m)))) (S6.36) for some t∈[T−1]. Suppose (S6.30) holds and E ϕt f(m) t(Ht(d(m)));pred(f(m) t(Ht(d(m)))) Z(m) →m0. (S6.37) Then, under the setup of Lemma S6.17, E[Z(m)]→m0. Lemma S6.18 is proved in Section S6.3.4. Since t≥1, we have 1 + t≥2. Therefore, Lemma S6.8 implies Ψ∗ 1+t(p∗ 1+t(H1+t))−Ψ1+t(f(m) 1+t(H1+t);p∗ 1+t(H1+t)) ≥χϕ1+t max(p∗ 1+t(H1+t))−κ(f(m) 1+t(H1+t);p∗ 1+t(H1+t)) . (S6.38) Since wt’s are non-negative by (S6.34), we therefore obtain E w1+t Ψ∗ 1+t(p∗ 1+t(H1+t))−Ψ1+t(f(m) 1+t(H1+t);p∗ 1+t(H1+t)) ≥χϕ1+tE w1+t max(p∗ 1+t(H1+t))−κ(f(m) 1+t(H1+t);p∗ 1+t(H1+t)) . S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.3 Proof of Theorem 4.1 97 Thus (S6.38) leads to E w1+t max(p∗ 1+t(H1+t))−κ(f(m) 1+t(H1+t);p∗ 1+t(H1+t)) →m0. (S6.39) On the other hand, E w1+t max(p∗ 1+t(H1+t))−κ(f(m) 1+t(H1+t);p∗ 1+t(H1+t)) =E w1+tEh max(p∗ 1+t(H1+t))−p∗ 1+t(H1+t)pred( f(m) 1+t(H1+t))|H1+ti . Now we want to use (S6.9) with H1+t,h(H1+t) = 1, and V=TX i=1YiTY j=2+t1[Aj=d∗ j(Hj)] πj(Aj|Hj). Note that (S6.13) implies E[V|H1+t, A1+t=i] =p∗ 1+t(H1+t)ifor all i∈[k1+t]. 
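Reweighting identities of the (S6.5)/(S6.9) type reduce, for a single discrete action, to the exact relation E[h(A)·1[A=d]/π(A)] = h(d): the propensity in the denominator cancels the sampling probability. A minimal sketch with made-up toy values (the policy π and outcome function h below are assumptions of the illustration):

```python
# behavior policy pi(a) and outcome h(a) over three actions (toy values)
pi = {0: 0.2, 1: 0.5, 2: 0.3}
h = {0: 1.0, 1: 4.0, 2: -2.0}
d = 2  # deterministic target action

# exact expectation, under A ~ pi, of the inverse-probability-weighted indicator
ipw = sum(prob * h[a] * (1.0 if a == d else 0.0) / prob for a, prob in pi.items())

# matches the value h(d) obtained by always assigning action d
assert abs(ipw - h[d]) < 1e-12
```

The multi-stage versions used in the proof are the same computation applied conditionally at each stage, with the product of per-stage weights playing the role of 1[A=d]/π(A).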
Therefore, (S6.9) implies E max(p∗ 1+t(H1+t))−p∗ 1+t(H1+t)pred( f(m) 1+t(H1+t)) H1+t =E1[A1+t= pred( p∗ 1+t(H1+t))]−1[A1+t= pred( f(m) 1+t(H1+t))] π1+t(A1+t|H1+t)V H1+t . Thus we obtain that E w1+t max(p∗ 1+t(H1+t))−κ(f(m) 1+t(H1+t);p∗ 1+t(H1+t)) =E w1+tTX i=1YiTY j=2+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) × 1[A1+t= pred( p∗ 1+t(H1+t))]−1[A1+t= pred( f(m) 1+t(H1+t))] π1+t(A1+t|H1+t) , If 2 + t=T+ 1, the product will be one since the range is empty. Hence, (S6.39) implies that ETX i=1Yi w1+tTY j=2+t1[Aj=d∗ j(Hj)] πj(Aj|Hj) × 1[A1+t= pred( p∗ 1+t(H1+t))]−1[A1+t= pred( f(m) 1+t(H1+t))] π1+t(A1+t|H1+t) →m0, which proves P.i for 1 + t. S6.3.4. The proof of Lemma S6.18 Proof of Lemma S6.18. First, we will show that the Z(m)in (S6.36) is non-negative and bounded above. Our Z(m)is also non-negative by the definition of Ψ∗ tin (4.2). The boundedness follows because the ϕt’s are bounded and p∗ 1+t(h1+t) is bounded above by a constant c1, potentially depending on P, for all h1+t∈ H 1+t—as shown in Lemma S6.12. It suffices to show that given any subsequence of {m}, there exists a subsequence along which E[Z(m)]→m0. First note that ϕt f(m) t(Ht(d(m))); pred( f(m) t(Ht(d(m)))) Z(m)≥0, (S6.37) implies ϕt f(m) t(Ht(d(m))); pred( f(m) t(Ht(d(m)))) Z(m)→P0. (S6.40) The convergence in (S6.30) implies that Et−1Y i=11[Ai= pred( f(m) i(Hi))] πi(Ai|Hi) Ψ∗ t(p∗ t(Ht))−Ψt(f(m) t(Ht);p∗ t(Ht)) →m0. S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.4 Proof of Proposition 1 98 Thus (S6.29) leads to E Ψ∗ t p∗ t(Ht(d(m))) −Ψt f(m) t(Ht(d(m)));p∗ t(Ht(d(m))) →m0. The random variable inside the above expectation is non-negative by (4.2), which implies the above
convergence is L1-convergence. Since L1 convergence implies convergence in probability, Ψ∗ t p∗ t(Ht(d(m))) −Ψt f(m) t(Ht(d(m)));p∗ t(Ht(d(m))) →P0, (S6.41) where P corresponds to the underlying probability space. Since convergence in probability implies almost sure convergence along a subsequence, given any subsequence of m, we can find a further subsequence {mr}r≥1⊂ {m} so that the convergences in (S6.40) and (S6.41) are almost sure convergences w.r.t. the underlying probability P. The proof will be complete if we can show that E[Z(mr)]→r0. For the sake of notational simplicity, we will denote the subsequence mr by m. Hence, we need to show that if ϕt f(m) t(Ht(d(m))); pred( f(m) t(Ht(d(m)))) Z(m)→a.s.0 (S6.42) and Ψ∗ t p∗ t(Ht(d(m))) −Ψt f(m) t(Ht(d(m)));p∗ t(Ht(d(m))) →a.s.0 (S6.43) as m→ ∞, then it holds that E[Z(m)]→m0. Now we will use Lemma S6.6 with x(m)=f(m) t(Ht(d(m))) and p(m)= p∗ t(Ht(d(m))). This lemma applies because Lemma S6.12 implies p∗ t(ht)∈[c1, c2]kt for all ht∈ H t, which indicates that ∥p∗ t(ht)∥1 is bounded below by a constant, perhaps depending on P, for all ht∈ H t. Let Jt>0 be the fixed constant appearing in Lemma S6.6, which depends only on ϕt. Lemma S6.6 implies P lim inf m→∞ϕt f(m) t(Ht(d(m))); pred( f(m) t(Ht(d(m))) >Jt ≥P Ψ∗ t p∗ t(Ht(d(m))) −Ψt f(m) t(Ht(d(m)));p∗ t(Ht(d(m))) →m0 , which is one because of (S6.43). Then (S6.42), combined with the fact that Z(m)≥0, implies Z(m)→a.s.0 as m→ ∞. Since Z(m) is a bounded random variable, Fact S11.5 implies E[Z(m)]→m0, which completes the proof. S6.4. Proof of Proposition 1 We show the proof first for the case Cϕt= 1 for each t∈[T].
For any f∈ F, Vψ ∗−Vψ(f) =E[Ψ∗ 1(p∗ 1(H1))]−E[Ψ1(f1(H1);p1(H1))] =E[Ψ∗ 1(p∗ 1(H1))−Ψ1(f1(H1);p∗ 1(H1))] (S6.44) +Eh Ψ1(f1(H1);p∗ 1(H1))−Ψ1(f1(H1);p1(H1))i , where the first term is non-negative, and by Lemma S6.16, the second term is bounded below by CE PT i=1Yi 1[A1= pred( f1(H1))] ×QT r=21[Ar=d∗ r(Hr)]−QT r=21[Ar= pred( fr(Hr))] QT r=1πr(Ar|Hr) H1 , where C=QT i=1Ji min2≤i≤Tχϕi. Lemma S6.16 also indicates that the above expression is non-negative. Lemma S6.8 implies that Eh Ψ∗ 1(p∗ 1(H1))−Ψ1(f1(H1);p∗ 1(H1))i ≥χϕ1 Eh max(p∗ 1(H1)−κ(f1(H1),p∗ 1(H1))i where χϕ1is as in Lemma S6.8. Then (S6.44) reduces to Vψ ∗−Vψ(f)≥χϕ1 Eh max(p∗ 1(H1)−κ(f1(H1),p∗ 1(H1))i +CEPT i=1Yi ×1[A1= pred( f1(H1))] QT r=1πr(Ar|Hr) ×TY r=21[Ar=d∗ r(Hr)]−TY r=21[Ar= pred( fr(Hr))] H1 . S6 PROOFS OF THEOREM 4.1 AND PROPOSITION 1/S6.4 Proof of Proposition 1 99 where C=QT i=1Ji min2≤i≤Tχϕi. Lemma S6.13 implies that p∗ 1(H1) =Q∗ 1(H1). Thus max(p∗ 1(H1)) = max( Q∗ 1(H1)) =Q∗ 1(H1, d∗ 1(H1)) and κ(f(m) 1(H1),p∗ 1(h1)) =Q∗ 1(H1,pred( f(m) 1(h1))). However, Q∗ 1(H1, i) =ETX i=1YiTY j=21[Aj=d∗ j(Hj)] πj(Aj|Hj) H1, A1=i by Fact S11.4. Therefore, Eh max(p∗ 1(h1))−κ(f(m) 1(h1),p∗ 1(h1))i =ETX i=1YiTY j=21[Aj=d∗ j(Hj)] πj(Aj|Hj) H1, A1=d∗ 1(H1) −ETX i=1YiTY j=21[Aj=d∗ j(Hj)] πj(Aj|Hj) H1, A1= pred( f(m) 1(H1)) =EPT i=1YiQT j=21[Aj=d∗ j(Hj)] QT j=1πj(Aj|Hj)1[A1=d∗ 1(H1)] −1[A1= pred( f(m) 1(H1))] =V∗−ETX i=1YiQT j=21[Aj=d∗ j(Hj)] QT j=1πj(Aj|Hj)1[A1= pred( f(m) 1(H1))] (S6.45) Now (S6.45) implies that Eh max(p∗ 1(H1)−κ(f1(H1),p∗ 1(H1))i =V∗−ETX i=1YiQT j=21[Aj=d∗ j(Hj)] QT j=1πj(Aj|Hj)1[A1= pred( f1(H1))] , implying Vψ ∗−Vψ(f)≥χϕ1 V∗−ETX i=1YiQT j=21[Aj=d∗ j(Hj)] QT j=1πj(Aj|Hj)1[A1= pred( f1(H1))] +CE PT
i=1Yi 1[A1= pred( f1(H1))] QT r=1πr(Ar|Hr) ×TY r=21[Ar=d∗ r(Hr)]−TY r=21[Ar= pred( fr(Hr))] H1# ≥min(χϕ1, C) V∗−V(f) , where the last step follows because both terms in the sum are non-negative. Note that min(χϕ1, C)≥TY i=1min(Ji,1) min 1≤i≤Tχϕi. However, since Ψ∗ t(1kt) = 1 for each t∈[T], it follows that ϕt(x; pred( x))≤1 for each t∈[T]. Therefore, Jt≤1. Hence, min(χϕ1, C)≥TY t=1Jt min 1≤i≤Tχϕi. Thus we have shown Vψ ∗−Vψ(f)≥TY t=1Jt min 1≤t≤Tχϕt V∗−V(f) (S6.46) when Cϕt= 1 for each t∈[T]. Now suppose Cϕt̸= 1. Then (S6.46) holds if we transform ϕt7→ϕt/Cϕt. However, Jt also becomes Jt/Cϕt following this transformation. Therefore, (S6.46) implies Vψ ∗−Vψ(f)QT t=1Cϕt≥TY t=1Jt Cϕt min 1≤t≤Tχϕt V∗−V(f) , which completes the proof. S7. Proof of Theorem 4.2 This section is organized as follows. Section S7.1 contains auxiliary lemmas necessary for proving the necessity of Condition N1. The proof of the necessity of Condition N1 is then given in Section S7.2. We establish the necessity of Condition N2 in Section S7.3. S7.1. Lemmas required for proving Condition N1 Lemma S7.1. Suppose P satisfies Assumptions I-IV and Yt≥0 for t∈[T]. Let T≥2. Additionally, Yt= 0 for t > r where r∈[T−1] is an integer, and Yr⊥Or+1, Ar+1, . . . , O T, AT|Hr, Ar. Then Q∗ t(Ht, At) = 0 for all t > r and Q∗ r(Hr, ar) =E[Yr|Hr, Ar=ar] for all ar∈[k]. Additionally, sup f∈FETY t=1ϕt(ft(Ht);At) πt(At|Ht)TX t=1Yt =TY t=1+rΨ∗ t(1kt) sup f1,...,f rErY t=1ϕt(ft(Ht);At) πt(At|Ht)X t∈[r]Yt . Proof of Lemma S7.1. First of all, note that for any a∈[kT], Q∗ T(HT, a) =E[YT|HT, a] = 0. Similarly, we can show that if T−1> r, for any a∈[kT−1], Q∗ T−1(HT−1, a) =Eh YT−1+ max aT∈[kT]E[YT|HT, aT] HT−1, AT−1=a = 0. By induction, we can show that Q∗ t(ht, at) = 0 for all ht∈ H t and at∈[kt] as long as t > r. Therefore, for any ar∈[kr], Q∗ r(Hr, ar) =E[Yr+Q∗ r+1(Hr+1, Ar+1)|Hr, Ar=ar] =E[Yr|Hr, Ar=ar], which proves the first part of the lemma.
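The first part of the lemma is plain backward induction: once the rewards after stage r vanish, every later Bellman backup is zero and Q∗ at stage r collapses to a one-stage conditional mean. A toy sketch of that collapse (the horizon, arm count, and reward means are hypothetical, and state dependence is dropped so each Q∗_t is just a vector over actions):

```python
import numpy as np

# Toy 3-stage problem with Y_t = 0 for t > r = 1 and no state dependence.
T, k, r = 3, 2, 1
mean_reward = {1: np.array([0.4, 0.9])}   # hypothetical E[Y_1 | A_1 = a]

Q = {T + 1: np.zeros(k)}
for t in range(T, 0, -1):
    y = mean_reward.get(t, np.zeros(k))
    Q[t] = y + Q[t + 1].max()             # Bellman backup: E[Y_t + max_a Q*_{t+1}]

assert np.allclose(Q[3], 0) and np.allclose(Q[2], 0)   # Q*_t = 0 for all t > r
assert np.allclose(Q[1], mean_reward[1])               # Q*_r = E[Y_r | A_r = a]
```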
To prove the second part of the current lemma, we will show that for any m∈[r, T−1], sup f∈FETY t=1ϕt(ft(Ht);At) πt(At|Ht)TX t=1Yt =TY t=1+mΨ∗ t(1kt) sup f1,...,f mEX t∈[r]YtmY t=1ϕt(ft(Ht);At) πt(At|Ht) .(S7.1) To prove (S7.1), we will show that the statement holds for m=T−1. Then we will show that if (S7.1) holds for some m, then it holds for m−1 as well, provided m−1≥r. Then, by induction, it will follow that m= 1 + rsatisfies (S7.1), which will complete the proof of the current lemma. To this end, note that sup f∈FETY t=1ϕt(ft(Ht);At) πt(At|Ht)TX t=1Yt = sup f1,...,f TEX t∈[r]YtT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)EϕT(fT(HT);AT) πT(AT|HT) HT, AT (a)= sup f1,...,f TEX t∈[r]YtT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)ΨT(fT(HT);1kT) (b)= sup f1,...,f T−1EX t∈[r]YtT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)sup x∈RkTΨT(x;1kT) , where (a) follows from (S6.4) and step (b) uses the non-negativity of the Yt’s and the ϕt’s (by our assumption). Note that supx∈RkTPkT i=1ϕT(x;i) = Ψ∗ t(1kT). Thus we have shown that sup f∈FETY t=1ϕt(ft(Ht);At) πt(At|Ht)TX t=1Yt = Ψ∗ T(1kT) sup f1,...,f T−1EX t∈[r]YtT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) , which proves (S7.1) holds for m=T−1. If r=T−1, then there is nothing to prove.
Suppose r≤T−2 (note that it requires T >2 for such r∈N to exist). Now suppose (S7.1) holds for some m≤T−1 such that m≥1 +r. Then we will show that (S7.1) holds for m−1 as well. Since m≥1 +r, it holds that m−1≥r and Y1, . . . , Y r⊂Hm. Hence, sup f1,...,f mEX t∈[r]YtmY t=1ϕt(ft(Ht);At) πt(At|Ht) = sup f1,...,f mEX t∈[r]Ytm−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)Eϕm(fm(Hm);Am) πm(Am|Hm) Hm, Am . Using (S6.4), we obtain that Eϕm(fm(Hm);Am) πm(Am|Hm) Hm, Am = Ψ m(fm(Hm);1km). Using the non-negativity of the Yt’s and the non-negativity of the ϕt’s, we obtain that sup f1,...,f mEX t∈[r]YtmY t=1ϕt(ft(Ht);At) πt(At|Ht) = sup f1,...,f m−1EX t∈[r]Ytm−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)sup x∈RkmΨm(x;1km) = sup x∈RkmΨm(x;1km) sup f1,...,f m−1EX t∈[r]Ytm−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) . Since supx∈RkmΨm(x;1km) = Ψ∗ m(1km) and (S7.1) holds for m, the above implies that (S7.1) holds for m−1 as well, thus completing the proof. Lemma S7.2. Let r∈[T]. Suppose P is as in the setup of Theorem 4.2. Moreover, Yt≥0 for t∈[T] and Yt= 0 for all t̸=r. Further suppose (Hr−1, Ar−1)⊥(Ar, Or, Yr), and Yr⊥(Or+1, Ar+1, . . . , O T, AT)|(Or, Ar). Here Ht, At=∅ if t <1 or t > T. Then sup f∈FETY t=1ϕt(ft(Ht);At) πt(At|Ht)TX t=1Yt =E[Ψ∗ r(Q∗ r(Or))]TY t=1,t̸=rΨ∗ t(1kt), where Q∗ r(Or)ar=Q∗ r(Hr, ar) =E[Yr|Or, Ar=ar] for ar∈[kr]. Proof of Lemma S7.2. Note that Yt= 0 is possible under the setup of Theorem 4.2. By Lemma S7.1, Q∗ r(Hr, a) =E[Yr|Hr, Ar=a]. Moreover, Yr⊥(Hr−1, Ar−1, Yr−1), where we used the fact that Yr−1= 0. Therefore, E[Yr|Hr, Ar] =E[Yr|Or, Ar]. Hence, Q∗ r(Hr, a) =E[Yr|Or, Ar=a] for all a∈[kr], which implies Q∗ r(Or)a=Q∗ r(Hr, a) for a∈[kr]. When T= 1, the only possible value for r is 1. In that case, sup f1∈F1Eϕ1(f1(H1);A1) π1(A1|H1)Y1 =E[Ψ∗ 1(Q∗ 1(H1))] by (S6.7). Therefore, the proof follows trivially for the T= 1 case.
If T >1 and r= 1, then from Lemma S7.1 it follows that sup f∈FETY t=1ϕt(ft(Ht);At) πt(At|Ht)TX t=1Yt =TY t=2Ψ∗ t(1kt) sup frEϕr(fr(Hr);Ar) πr(Ar|Hr)Yr , which equals E[Ψ∗ r(Qr(Hr))]QT t=2Ψ∗ t(1kt) by (S6.7). Therefore, the proof of the current lemma follows trivially also when r= 1. Hence, we consider the case r >1. Note that if r >1, then T≥2. Lemma S7.1 implies when r∈[T−1], sup f∈FETY t=1ϕt(ft(Ht);At) πt(At|Ht)TX t=1Yt =TY t=1+rΨ∗ t(1kt) sup f1,...,f rErY t=1ϕt(ft(Ht);At) πt(At|Ht)Yr . The product in the above expression is one if r=T. It suffices to show that sup f1,...,f rErY t=1ϕt(ft(Ht);At) πt(At|Ht)Yr =E[Ψ∗ r(Q∗ r(Or))]r−1Y t=1Ψ∗ t(1kt). (S7.2) S7 PROOF OF THEOREM 4.2 /S7.1 Lemmas required for proving Condition N1 102 Suppose Vis any non-negative random variable that is a function of ( Hr−1, Ar−1). Using Assumption I and the non- negativity of the ϕt’s, we obtain that sup frE Vϕr(fr(Hr);Ar) πr(Ar|Hr)Yr (a)= sup frE VEϕr(fr(Hr);Ar) πr(Ar|Hr)Yr Hr (b)= sup frE VX ar∈[kr]E[Yr|Hr, Ar=ar]ϕr(fr(Hr);ar) (c)= sup frE VX ar∈[kr]E[Yr|Or, Ar=ar]ϕr(fr(Hr);ar) (d)=E Vsup x∈RkrΨr(x;Q∗ r(Or)) , where (a) follows because Vis a function of Hr−1andAr−1, (b) follows because the propensity scores are bounded away from zero by Assumption I, (c) follows because Yr⊥Hr−1, Ar−1, Yr−1(we again used the fact that Yr−1, being zero, is independent of
Yr) and (d) follows using the non-negativity of V. Therefore, we have proved that sup frE Vϕr(fr(Hr);Ar) πr(Ar|Hr)Yr =E[VΨ∗ r(Q∗ r(Or))] =E[V]E[Ψ∗ r(Q∗ r(Or))] (S7.3) because Or⊥(Hr−1, Ar−1) and V is a function of ( Hr−1, Ar−1). Letting V= 1 in (S7.3), we obtain that sup frEϕr(fr(Hr);Ar) πr(Ar|Hr)Yr =E[Ψ∗ r(Q∗ r(Or))]. Letting V=r−1Y t=1ϕt(ft(Ht);At) πt(At|Ht), we obtain that sup frErY t=1ϕt(ft(Ht);At) πt(At|Ht)Yr =Er−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) E[Ψ∗ r(Q∗ r(Or))]. Therefore, sup f1,...,f rErY t=1ϕt(ft(Ht);At) πt(At|Ht)Yr = sup f1,...,f r−1Er−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) E[Ψ∗ r(Q∗ r(Or))]. The proof will be complete if we can show that sup f1,...,f r−1Er−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) =r−1Y t=1Ψ∗ t(1kt). (S7.4) For any j≤r−1, sup f1,...,f jEjY t=1ϕt(ft(Ht);At) πt(At|Ht) = sup f1,...,f jEjY t=1ϕt(ft(Ht);At) πt(At|Ht)Eϕj(fj(Hj);Aj) πj(Aj|Hj) Hj, Aj (a)= sup f1,...,f jEjY t=1ϕt(ft(Ht);At) πt(At|Ht)Ψj(fj(Hj);1kj), where (a) follows from (S6.4). Since the ϕt’s are non-negative, the above equals sup f1,...,f j−1EjY t=1ϕt(ft(Ht);At) πt(At|Ht)Ψ∗ j(1kj) = Ψ∗ j(1kj) sup f1,...,f j−1EjY t=1ϕt(ft(Ht);At) πt(At|Ht) . Equation S7.4 follows by applying (S6.4) repeatedly to j=r−1, . . . , 1. S7.2. Proving the necessity of Condition N1 We will prove this by contradiction. Suppose there exists r∈[T] so that ϕr does not satisfy Condition N1. Hence, there exists p∈Rkr ≥0 that violates (4.4) with Ψr. Suppose p is such that all elements of p are equal. Then the left hand side of (4.4) is −∞ since we defined the supremum of the empty set to be −∞. Therefore, (4.4) automatically holds. Therefore, in what follows, we assume that the elements of p are not all equal. Since (4.4) fails, there exists p∈Rkr ≥0 and a sequence {xm,r}m≥1⊂Rkr so that as m→ ∞, Ψr(xm,r;p)→Ψ∗ r(p) but ppred(xm,r)<max(p) for all m∈N.
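For orientation, the 0-1 loss ϕdis(x; i) = 1[pred(x) = i], used repeatedly below, does satisfy Condition N1: Ψ(x; p) = p_pred(x), so Ψ∗(p) = max(p), and restricting pred(x) outside argmax(p) caps Ψ at the second-largest entry of p. A numeric sketch of these two identities (a hedged aside with an illustrative p, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 5
p = np.array([0.3, 1.7, 0.9, 1.7, 0.2])   # illustrative, ties allowed

def psi_dis(x, p):
    # Psi(x; p) = sum_i 1[pred(x) = i] * p_i = p_{pred(x)}
    return p[np.argmax(x)]

values = [psi_dis(x, p) for x in rng.normal(size=(10_000, k))]
assert max(values) <= p.max() + 1e-12   # Psi(x; p) never exceeds max(p)
assert psi_dis(p, p) == p.max()         # the sup is attained, e.g. at x = p
```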
We will show that if the above-mentioned xm,r exists, then we can construct a Pand a sequence of class score vectors (these are vector-valued functions) {f(m)}m≥1⊂ F so that Vψ(f(m))→mVψ ∗butV(f(m))̸→V∗, which will complete the proof. For t∈[T] such that t̸=r, we define the sequences {xm,t}m≥1⊂Rktso that Ψ t(xm,t;1kt)→mΨ∗ t(1kt). We will define our f(m)andPnow. We take f(m)to be so that f(m) t≡xm,tfor all t∈[T], i.e., f(m) t(ht) =xm,tfor all ht∈ H t. We let Pbe as in Lemma S7.2, i.e., Psatisfies Assumptions I-IV, Yt= 0 for all t̸=r, and Yr≥0. Note that Yt= 0 is allowed because under the set up of Theorem 4.2, Pdoes not need to satisfy Assumption V. Additionally, the independence conditions (Hr−1, Ar−1)⊥(Ar, Or, Yr) and Yr⊥(Or+1, Ar+1, . . . , O T, AT)|(Or, Ar) hold for all t∈[T], where HtandAtare empty sets if t <1 ort > T . Furthermore, we assume that E[Yr|Hr, Ar=i] =pifori∈[k]. It is straightforward to verify the existence of such distributions. From Lemma S7.2, it follows that Q∗ r(Hr, Ar) =pArand sup f1,...,f TETY t=1ϕt(ft(Ht);At) πt(At|Ht)Yr = Ψ∗ r(p)Y t̸=rΨ∗ t(1kt). (S7.5) First, we will show that Vψ(f(m))→mVψ ∗. Next, we will show that V(f(m))̸→mV∗, which will contradict the Fisher consistency of ψ, hence completing the proof by contradiction. Showing Vψ(f(m))→mVψ ∗It suffices to show that for each m∈[N], Vψ(f(m)) = Ψ r(xm,r;p)TY t̸=r,t=1Ψt(xm,t;1kt) (S7.6) because the above converges to Ψ∗
r(p)QT t̸=r,t=1Ψ∗ t(1kt) due to Ψ r(xm,r;p)→mΨ∗ r(p) and Ψ t(xm,t;1kt)→mΨ∗ t(1kt) for t̸=r. However, (S7.5) implies Vψ ∗= Ψ∗ r(p)TY t̸=r,t=1Ψ∗ t(1kt), which would complete the proof of Vψ(f(m))→mVψ ∗. To show (S7.6), we will first show that for any r∈[T], Vψ(f(m)) =E YrTY t=1ϕt(f(m) t(Ht);At) πt(At|Ht) =TY t=1+rΨt(xm,t;1kt)E YrrY t=1ϕt(f(m) t(Ht);At) πt(At|Ht) .(S7.7) Ifr=T, (S7.7) holds trivially because the product in (S7.7) becomes one since the range of the product becomes empty. Forr∈[T−1], E YrTY t=1ϕt(f(m) t(Ht);At) πt(At|Ht) =E YrT−1Y t=1ϕt(f(m) t(Ht);At) πt(At|Ht)EΨt(f(m) T(HT);AT) πT(AT|HT) HT, AT (a)=E YrT−1Y t=1ϕt(f(m) t(Ht);At) πt(At|Ht)ΨT(xm,t;1kT) = Ψ T(xm,t;1kT)E YrT−1Y t=1ϕt(f(m) t(Ht);At) πt(At|Ht) , S7 PROOF OF THEOREM 4.2 /S7.2 Proving the necessity of Condition N1 104 where (a) follows from (S6.4). Proceeding similarly and applying (S6.4) repeatedly, we can show (S7.7) holds. To finish the proof of (S7.6), now observe that E YrrY t=1ϕt(f(m) t(Ht);At) πt(At|Ht) =Er−1Y t=1ϕt(f(m) t(Ht);At) πt(At|Ht)E Yrϕr(f(m) r(Hr);Ar) πr(Ar|Hr) (a)=Er−1Y t=1ϕt(f(m) t(Ht);At) πt(At|Ht)Ψr(f(m) r(Hr);Q∗ r(Or)) , where (a) follows from (S6.6) with Q∗ r(Or)j=E[Yr|Hr, Ar=j] =pj by our definition of P. Since f(m) r=xm,r, E YrrY t=1ϕt(f(m) t(Ht);At) πt(At|Ht) = Ψ r(xm,r;p)Er−1Y t=1ϕt(f(m) t(Ht);At) πt(At|Ht) . Using (S6.4) and the fact that f(m)(Ht) =xm,tis non-stochastic, we can show that Er−1Y t=1ϕt(f(m) t(Ht);At) πt(At|Ht) =r−1Y t=1Ψt(xm,t;1kt). Therefore, (S7.6) follows, completing the proof of Vψ(f(m))→mVψ ∗. Showing lim supm→∞V(f(m))< V∗It remains to prove that V(f(m))̸→mV∗. First, we will prove that V(f(m)) = ppred(xm,r). To see this, note that V(f(m)) =E YrTY t=11[At= pred( f(m)(Ht))] πt(At|Ht) =E YrT−1Y t=11[At= pred( f(m)(Ht))] πt(At|Ht)E1[AT= pred( f(m)(HT))] πT(AT|HT) HT . Letting the ϕin (S6.4) be ϕdis, we obtain that E1[AT= pred( f(m)(HT))] πT(AT|HT) = 1. 
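The displayed identity is just the statement that the 0-1 weights integrate to one under any propensity: conditionally on H_T, the weighted mean equals Σ_a π(a) · 1[a = pred]/π(a) = 1 regardless of f. A one-line check with an arbitrary (illustrative) propensity vector:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4
pi = rng.dirichlet(np.ones(k))   # an arbitrary illustrative propensity vector

# sum_a pi(a) * 1[a = pred] / pi(a) = 1 for every possible prediction
for pred in range(k):
    val = sum(pi[a] * float(a == pred) / pi[a] for a in range(k))
    assert abs(val - 1.0) < 1e-12
```

This is why each stage after (or before) the one carrying the reward contributes a factor of exactly one under ϕdis.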
Thus V(f(m)) =E YrT−1Y t=11[At= pred( f(m)(Ht))] πt(At|Ht) , which rewrites as V(f(m)) =Er−1Y t=11[At= pred( f(m)(Ht))] πt(At|Ht)E Yr1[Ar= pred( f(m)(Hr))] πr(Ar|Hr) Hr where the product term is one if r= 1. Letting the ϕin (S6.6) be ϕdis, we obtain that E Yr1[Ar= pred( f(m)(Hr))] πr(Ar|Hr) Hr =X a∈[kr]E[Yr|Hr, a]1[a= pred( xm,r)], (S7.8) which equals ppred(xm,r). Thus V(f(m)) =ppred(xm,r)Er−1Y t=11[At= pred( f(m)(Ht))] πt(At|Ht) . Applying (S6.4) repeatedly with ϕ=ϕdis, we can show that Er−1Y t=11[At= pred( f(m)(Ht))] πt(At|Ht) = 1. Hence, we showed that V(f(m)) =ppred(xm,r). It remains to show that lim supm→∞ppred(xm,r)< V∗. By our assumption, for each m∈N,ppred(x(m))<max(p). Therefore, pred( x(m))/∈argmax( p) for all m∈N. Since phas a finite length, this implies lim supm→∞ppred(x(m))<max(p). Therefore, lim supm→∞V(f(m))<max(p). Thus, it suffices to show that V∗≥max(p). To this end, note that, for any x∈Rkr, setting f(m) r≡x,f(m) t≡xm,tfort̸=r, f(m)= (f(m) 1, . . . , f(m) r, . . . , f(m) T), and working as before, we can show that V(f(m))→mppred(x). If we take x=p, then V(f(m))→mppred(p)= max( p). Thus V∗≥max(p), which completes the proof. S7 PROOF OF THEOREM 4.2 /S7.3 Proving the necessity of Condition N2 105 S7.3. Proving the necessity of Condition N2 We only need to consider the T≥2 case in this proof. Since we have already proved that ϕt’s satisfy Condition N1 for t∈[T], we will use this fact in the proof. In particular, ϕt’s are bounded and Ψ∗ t(p)∈(0,∞) for all p∈Rk+t ≥0such that
p̸=0kt by Lemma S6.1 and Ψ∗ t is continuous by Lemma S6.2. Moreover, Fact S6.1 will also apply. We will prove the necessity of Condition N2 in three steps. To this end, let r∈[1 :T−1]. Step 1. We will show that for any p,q∈Rk1+r ≥0, max( p)>max(q) implies Ψ∗ 1+r(p)≥Ψ∗ 1+r(q). Step 2. Using the result from Step 1, we will show that for any p,q∈Rk1+r ≥0, max( p) = max( q) implies Ψ∗ 1+r(p) = Ψ∗ 1+r(q). This will imply that there exists a function h1+r:R→R so that Ψ∗ 1+r(p) =h1+r(max( p)) for all p∈Rk1+r ≥0. Step 3. We will show that h1+r(x) = Ψ∗ 1+r(1k1+r)x for all x >0, where Ψ∗ 1+r(1k1+r)<∞ by Lemma S6.1. Step 3 implies Ψ∗ 1+r(p) = Ψ∗ 1+r(1k1+r) max( p) for all p∈Rk1+r ≥0. Since Ψ∗ 1+r(1k1+r) is positive for each r∈[T−1] by Lemma S6.1, ϕ1+r satisfies Condition N2. Since r can be any number in [1 : T−1], it follows that ϕt satisfies Condition N2 if t∈[2 :T], thus completing the proof. S7.3.1. Step 1: Showing max(p)>max(q) implies Ψ∗ 1+r(p)≥Ψ∗ 1+r(q) for any p,q∈Rk1+r ≥0 and any r∈[1 :T−1] Note that if q=0k1+r, then Ψ∗ 1+r(q) = 0. Since the ϕ(·;i)’s are non-negative, Ψ∗ 1+r(p)≥Ψ∗ 1+r(q) automatically holds. Hence, it suffices to consider the case where q̸=0k1+r so that max( q)>0. In this case, note that we also have p̸=0k1+r. Let r∈[1 :T−1]. We will prove Step 1 in some substeps. •Step 1a: We will define a simpler class of distributions Psmall, depending on p and q, that satisfy Assumptions I-IV and Yt≥0 for all t∈[T]. •Step 1b: We will show that when P∈ Psmall, for any collection of non-negative surrogates ϕ1, . . . , ϕ T, we have simpler expressions of Vψ ∗ and Vψ(f) for some classes of f. •Step 1c: Using the form of Vψ ∗, we will derive the form of V∗ by letting ϕ be the 0-1 loss. •Step 1d: We will define a sequence of class-score functions {f(m)}m≥1⊂ F. •Step 1e: We will show that Vψ(f(m))→mVψ ∗ for this sequence. •Step 1f: We calculate V(f(m)) for the aforementioned sequence of score functions. •Step 1g: Fisher consistency implies V(f(m))→mV∗, which leads to the desired inequality Ψ∗ 1+r(p)≥Ψ∗ 1+r(q). Step 1a.
Defining the small class Psmall Suppose p and q are fixed vectors in Rk1+r ≥0 such that max( p)>max(q). Let Psmall be the class of all P’s that satisfy Assumptions I-IV and Yt≥0 for all t∈[T] in addition to the following: 1.Yi= 0 unless i= 1 + r. 2.Hr−1, Ar−1⊥(Ar, Or, A1+r, O1+r, Y1+r) for all t∈[2 :T−1]. If r= 1, we take H0=∅ and A0= 0. Hence, the above statement is vacuously true for r= 1. 3.Or=O1+r=∅. Hence, E[Y1+r|H1+r, A1+r] =E[Y1+r|Ar, A1+r]. Also, for i∈[kr] and j∈[k1+r], we let E[Y1+r|Ar=i, A1+r=j] =p(i) j where p(i)=( p if i= 1, q if i∈[2 :kr].(S7.9) 4.Y1+r⊥Or+2, A2+r, . . .|Hr. We take OT+1=∅ and AT+1= 0. Note that Psmall resembles a 2-stage DTR. The outcome Y1+r does not depend on any variable other than Ar and A1+r. Thus the stages r and 1 + r are the only stages that have a connection to the outcome. The T-stage DTR is formed from this 2-stage DTR by padding r−1 stages before and T−1−r after these two stages. The proof builds on the idea that,
since the ϕt’s are Fisher consistent, ( ϕr, ϕ1+r) are Fisher consistent for the two-stage DTR embedded in the Tstage DTR. Step 1b. Properties under Psmall Distributions in Psmall have some interesting properties. First we will show that Vψ ∗ has some closed form expression under P∈ Psmall for non-negative surrogates. Let us define ν∈Rkr ≥0so that νi= Ψ∗ 1+r(p(i)) for each i∈[kr], (S7.10) where the p(i)’s are as defined in (S7.9). Here the νi’s are non-negative by Lemma S6.1 because Ψ∗ 1+r(p(i))>0 since p,q̸=0k1+r. Lemma S7.3. Suppose ψis as in (4.1), where the losses ϕ1, . . . , ϕ Tare bounded and non-negative. Let r∈[1 :T−1]. Then for any P∈ Psmall, it follows that Vψ ∗= Ψ∗ r(ν)r−1Y t=1Ψ∗ t(1kt)TY t=2+rΨ∗ t(1kt), where the products are one if the range is empty and νis as defined in (S7.10) S7 PROOF OF THEOREM 4.2 /S7.3 Proving the necessity of Condition N2 106 Proof of Lemma S7.3. First, we want to show Vψ ∗=TY t=2+rΨ∗ t(1kt) sup f1,...,f 1+rE Y1+r1+rY t=1ϕt(ft(Ht);At) πt(At|Ht) , (S7.11) where the products in the integrand are 1 if the range is empty. The equation in (S7.11) trivially holds if r=T−1. Let us consider the case r < T −1. Then Vψ ∗= sup f1,...,f TE Y1+rTY t=1ϕt(ft(Ht);At) πt(At|Ht) = sup f1,...,f TE Y1+rT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)EϕT(fT(HT);AT) πT(AT|HT) HT (a)= sup f1,...,f TE Y1+rT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)ΨT(fT(HT);1kT) (b)= sup f1,...,f T−1E Y1+rT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)sup x∈RkTΨT(x;1kT) = sup f1,...,f T−1E Y1+rT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)Ψ∗ t(1kT) = Ψ∗ t(1kT) sup f1,...,f T−1E Y1+rT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) where step (b) uses the non-negativity of Y1+rand the ϕt’s and step (a) follows from (S6.4). Proceeding as above, we can show (S7.11) holds. 
Observe that sup f1,...,f 1+rE Y1+r1+rY t=1ϕt(ft(Ht);At) πt(At|Ht) = sup f1,...,f 1+rErY t=1ϕt(ft(Ht);At) πt(At|Ht)E Y1+rϕ1+r(f1+r(H1+r);A1+r) π1+r(A1+r|H1+r) H1+r = sup f1,...,f 1+rErY t=1ϕt(ft(Ht);At) πt(At|Ht)Ψ1+r(f1+r(H1+r);p2(H1+r)) by (S6.6), where p2(H1+r)j=E[Y1+r|H1+r, A1+r=j]. However, for our P, E[Y1+r|H1+r, A1+r=j] =E[Y1+r|Ar, A1+r=j] =p(Ar) jfor all j∈[k1+r] Therefore, p2(H1+r) =p(Ar). Hence, we have shown that E Y1+rϕ1+r(f1+r(H1+r);A1+r) π1+r(A1+r|H1+r) H1+r = Ψ 1+r(f1+r(H1+r);p(Ar)), (S7.12) and, in particular, sup f1,...,f 1+rE Y1+r1+rY t=1ϕt(ft(Ht);At) πt(At|Ht) = sup f1,...,f 1+rErY t=1ϕt(ft(Ht);At) πt(At|Ht)Ψ1+r(f1+r(H1+r);p(Ar)) = sup f1,...,f rErY t=1ϕt(ft(Ht);At) πt(At|Ht)sup x∈Rk1+rΨ1+r(x;p(Ar)) = sup f1,...,f rErY t=1ϕt(ft(Ht);At) πt(At|Ht)Ψ∗ 1+r(p(Ar)) = sup f1,...,f rEr−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)Eϕr(fr(Hr);Ar) πr(Ar|Hr)Ψ∗ 1+r(p(Ar)) Hr where the product term is one if r= 1. Another application of (S6.6) yields that the above equals sup f1,...,f rEr−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)Ψr(fr(Hr);ν) , S7 PROOF OF THEOREM 4.2 /S7.3 Proving the necessity of Condition N2 107 where νj=E[Ψ∗ 1+r(p(Ar))|Hr, Ar=j] = Ψ∗ 1+r(p(j)) for all j∈[kr], i.e.,νis non-stochastic and as defined in (S7.10). Using the fact that the ϕt’s are non-negatve, it then follows that sup frEr−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)Ψr(fr(Hr);ν) =Er−1Y t=1ϕt(x;At) πt(At|Ht)sup x∈RkrΨr(x;ν) =Er−1Y t=1ϕt(x;At) πt(At|Ht)Ψ∗ r(ν) . Thus we have shown that sup f1,...,f 1+rE Y1+r1+rY t=1ϕt(ft(Ht);At) πt(At|Ht) = Ψ∗ r(ν) sup f1,...,f r−1Er−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) . The proof follows by combining the above with (S7.11), and noting that, for r−1≥1, (S7.4) implies that sup f1,...,f r−1Er−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) =r−1Y t=1Ψ∗ t(1kt). For simpler forms of f’s, we can find their surrogate value function Vψ(f) explicitly under Psmall.
Lemma S7.4. Suppose P∈ P small andψis as in (4.1), where ϕ1, . . . , ϕ Tare bounded and non-negative. Consider r∈ [1, T−1]. For t /∈ {1+r}, we let ft(ht) =x(t)for all ht∈ H twhere x(t)∈Rktare fixed vectors. For H1+r= (Hr, Ar, Yr), we letf1+r(H1+r) =M(Ar)where, for each i∈[kr],M(i)∈Rk1+ris a non-stochastic vector. Let us define γ∈Rkris defined by γi= Ψ 1+r(M(i);p(i))for all i∈[kr]. (S7.13) Then Vψ(f) = Ψ r(x(r);γ)r−1Y t=1Ψt(x(t);1kt)TY t=r+2Ψt(x(t);1kt), where γis as defined in (S7.13) , thep(i)’s are as in (S7.9) , and the products are one if the ranges are empty. Proof of Lemma S7.4. Our first step is to show that Vψ(f) = TY t=2+rΨt(x(t);1kt) E Y1+r1+rY t=1ϕt(ft(Ht);At) πt(At|Ht) , (S7.14) where the product is one if the range of the product is empty. The above follows trivially if r=T−1. Suppose T >1 +r. Then Vψ(f) =E Y1+rTY t=1ϕt(ft(Ht);At) πt(At|Ht) =E Y1+rT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)EϕT(fT(HT);AT) πT(AT|HT) HT (a)=E Y1+rT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht)ΨT(fT(HT);1kT) = Ψ T(x(T);1kT)E Y1+rT−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) where (a) follows from (S6.4). Proceeding as above, and applying (S6.4) repeatedly, (S7.14) can be proved. Using (S7.12), we can show that E Y1+r1+rY t=1ϕt(ft(Ht);At) πt(At|Ht) =E Ψ1+r(f1+r(H1+r);p(Ar))rY t=1ϕt(ft(Ht);At) πt(At|Ht) . However, according to our definition, f1+r(H1+r) =M(Ar), S7 PROOF OF THEOREM 4.2 /S7.3 Proving the necessity of Condition N2 108 where Ar∈H1+r. Therefore, E Y1+r1+rY t=1ϕt(ft(Ht);At) πt(At|Ht) =E Ψ1+r(M(Ar);p(Ar))rY t=1ϕt(ft(Ht);At) πt(At|Ht) =E E Ψ1+r(M(Ar);p(Ar))ϕr(fr(Ht);Ar) πr(Ar|Hr) Hrr−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) , where the product over the rangeQr−1 t=1is one if r−1 = 0. Another application of (S6.6) yields that E Ψ1+r(M(Ar);p(Ar))ϕr(fr(Hr);Ar) πr(Ar|Hr) Hr = Ψ r(fr(Hr);γ), where γi=E[Ψ1+r(M(Ar);p(Ar))|Hr, Ar=i] = Ψ 1+r(M(i);p(i)). Since fr(Hr) =x(r), E Ψ1+r(M(Ar);p(Ar))ϕt(ft(Ht);At) πt(At|Ht) Hr = Ψ r(x(r);γ). 
Therefore, E E Ψ1+r(M(Ar);p(Ar))ϕt(fr(Hr);Ar) πr(Ar|Hr) Hrr−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) = Ψ r(x(r);γ)Er−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) , where, if r−1≥1, using (S6.4) repeatedly, one can show that Er−1Y t=1ϕt(ft(Ht);At) πt(At|Ht) =r−1Y t=1Ψt(x(t);1kt). Therefore, the proof follows. Step 1c. Calculating V∗Lemma S7.3 implies that, for any non-negative ϕ1, . . . , ϕ T, Vψ ∗= Ψ∗ r(ν)r−1Y t=1Ψ∗ t(1kt)TY t=2+rΨ∗ t(1kt) where νis as in (S7.10). Suppose for each t∈[T],ϕt(x;i) = 1[pred( x) =i] where x∈Rkt. Then Ψ t(x;1kt) = 1. Therefore, Ψ∗ t(1kt) = 1. For any p∈Rkt ≥0in general, Ψ t(x;p) =ppred(x). Therefore, Ψ∗ t(p) = max( p). Hence, (S7.10) implies νi= Ψ∗ 1+r(p(i)) = max( p(i)). Therefore, V∗= max( ν) = max i∈[kr]max(p(i)). Step 1d. Defining the sequences Now we will define a sequence of class scores {f(m)}m≥1⊂ F. To this end, we define some sequences of vectors first. The following fact will be used while defining these sequences: since Ψ∗ t(p) = supx∈RktΨt(x;p) for each p∈Rkt, there exists a sequence {x(m)}t≥1⊂Rktso that Ψ t(x(m);p)→mΨ∗ t(p) for each p∈Rkt ≥0andt∈[T]. For each i∈[kr], let{Mm,i}m≥1⊂Rk1+rbe a sequence that satisfies Ψ1+r(Mm,i;p(i))→mΨ∗ 1+r(p(i)), (S7.15) where p(i)is as in (S7.9) for each i∈[kr]. We let {xm,r}m≥1⊂Rkrbe a sequence of vectors that satisfy Ψr(xm,r;ν)→mΨ∗ r(ν), (S7.16) where νas in (S7.10). For each t /∈ {r,1 +r}andm∈N, define the sequences {xm,t}m≥1⊂Rktsuch that Ψ t(xm,t;1kt)→m Ψ∗ t(1kt). Now we are ready to define f(m). For t /∈ {r,1 +r}, we let f(m) t≡xm,t, i.e., f(m) t(ht)
=xm,t for all ht∈ H t. When t=r, we let f(m) r≡xm,r. Thus f(m) t is a constant vector for all t̸= 1 + r. However, we will consider a non-constant f(m) 1+r. Note that H1+r= (Hr, Ar) for P∈ Psmall because Yr= 0 and Or=O1+r=∅. Therefore, f1+r is a function from Hr×[kr] to R. For i∈[kr], we let f(m) 1+r(H1+r) =f(m) 1+r(Hr, i) =Mm,i. Step 1e. Showing Vψ(f(m))→mVψ ∗ Our surrogates are non-negative by assumption and they are bounded by Lemma S6.1. Therefore, Lemmas S7.3 and S7.4 apply. Vψ ∗ is given by Lemma S7.3. Vψ(f(m)) is given by Lemma S7.4, which yields that Vψ(f(m)) = Ψ r(xm,r;γ(m))r−1Y t=1Ψt(xm,t;1kt)TY t=r+2Ψt(xm,t;1kt), (S7.17) where γ(m) i= Ψ 1+r(Mm,i;p(i)) for all i∈[kr], (S7.18) and the products are one if the ranges are empty. Since Ψ t(xm,t;1kt)→mΨ∗ t(1kt) for all t /∈ {r,1 +r} by construction, to show Vψ(f(m))→mVψ ∗, it suffices to show that Ψ r(xm,r;γ(m))→mΨ∗ r(ν) where ν∈Rkr is as in (S7.10). To this end, we note that Ψr(xm,r;γ(m))−Ψ∗ r(ν) ≤ krX i=1 ϕr(xm,r;i)γ(m) i−ϕr(xm,r;i)νi + krX i=1ϕr(xm,r;i)νi−Ψ∗ r(ν) ≤ sup x∈Rkr,i∈[kr]ϕr(x;i)krX i=1 γ(m) i−νi +|Ψr(xm,r;ν)−Ψ∗ r(ν)|, where we used the non-negativity of ϕr. By Lemma S6.1, supx∈Rkr,i∈[kr]ϕr(x;i) is finite. For each i∈[kr], |γ(m) i−νi|=|Ψ1+r(Mm,i;p(i))−Ψ∗ 1+r(p(i))|, which goes to 0 as m→ ∞ by the definition of Mm,i in (S7.15). On the other hand, Ψ r(xm,r;ν)→mΨ∗ r(ν) by (S7.16). Therefore, we have shown that Ψ r(xm,r;γ(m))→mΨ∗ r(ν), which completes the proof of this step. Step 1f. Calculating V(f(m)) We will apply Lemma S7.4 with ϕt(x;i) =ϕdis(x;i) = 1[pred( x) =i] for x∈Rkt and t∈[T]. Equation S7.17 yields that V(f(m)) = Ψ r(xm,r;γ(m))r−1Y t=1Ψt(xm,t;1kt)TY t=r+2Ψt(xm,t;1kt), where ϕt≡ϕdis and γ(m) is as in (S7.18). For ϕdis, we have already shown that Ψ t(x;p) =ppred(x) for any x∈Rkt and p∈Rkt ≥0. Therefore, Ψ t(xm,t;1kt) = 1 for each t /∈ {r,1 +r} and Ψ r(xm,r;γ(m)) =γ(m) pred(xm,r). Therefore, V(f(m)) =γ(m) pred(xm,r). Step 1g. Implication of Fisher consistency Suppose the ϕt’s are Fisher consistent. Since Vψ(f(m))→mVψ ∗ by Step 1e, Fisher consistency implies V(f(m))→mV∗. Step 1f shows that V(f(m)) =γ(m) pred(xm,r) and Step 1c shows that V∗= max i∈[kr]max(p(i)). Therefore, Fisher consistency yields that γ(m) pred(xm,r)→mmax i∈[kr]max(p(i)). We will use this convergence to show that pred( xm,r) = 1 for all sufficiently large m. The definition of x(m) in (S7.16) implies that Ψr(xm,r;ν)→mΨ∗ r(ν). Since we proved that ϕr satisfies Condition N1, Fact S6.1 applies. Hence, pred( xm,r)∈argmax( ν) for all sufficiently large m. Therefore, if pred( xm,r) = 1 for all sufficiently large m, it will follow that 1 ∈argmax( ν). Then the definition of ν in (S7.10) would imply that Ψ∗ 1+r(p)≥Ψ∗ 1+r(q), completing the proof of Step 1. Thus it suffices to show that pred( xm,r) = 1 for all sufficiently large m. To this end, note that (S7.13) implies γ(m) i= (p(i))pred(Mm,i)≤max(p(i)) for each i∈[kr]. Let us denote the vector v0= (max( p(1)), . . . , max(p(kr))). Therefore, γ(m) i≤v0 i for each i∈[kr]. Hence, it holds that γ(m) pred(xm,r)≤v0 pred(xm,r)≤max( v0). Since max( v0) = max i∈[kr]max(p(i)) and Fisher consistency implies γ(m) pred(xm,r)→mmax i∈[kr]max(p(i)), it follows that γ(m) pred(xm,r)→mmax( v0). Therefore, it follows
that v0 pred(xm,r)→mmax( v0). Since argmax( v0) = argmaxi∈[kr]max(p(i)) = 1 for P∈ Psmall, it follows that pred( xm,r) = 1 for all sufficiently large m. S7.3.2. Step 2: Proving max(p) = max( q) implies Ψ∗ t(p) = Ψ∗ t(q) for all p,q∈Rkt ≥0 and t >1 Suppose t∈[2 :T]. Consider p(m)=p+ (1/m)1kt. Then max( p(m)) = max( p) + 1 /m > max(q). Taking r=t−1 in the previous step, we obtain that Ψ∗ t(p(m))≥Ψ∗ t(q). Therefore, lim m→∞Ψ∗ t(p(m))≥Ψ∗ t(q). Since p(m)→mp and Ψ∗ t is continuous for each t∈[T] by Lemma S6.2, it follows that Ψ∗ t(p)≥Ψ∗ t(q). Switching the roles of p and q, we can similarly prove that Ψ∗ t(p)≤Ψ∗ t(q), which implies Ψ∗ t(p) = Ψ∗ t(q). Therefore, the function Ψ∗ t:Rkt ≥0→R satisfies Ψ∗ t(p) =Gt(max( p)) for all p∈Rkt ≥0 for some Gt:R→R. S7.3.3. Step 3: Showing that Gt is linear on R≥0 for t∈[2 :T] Now, we will show that Gt is positively homogeneous on R≥0 in that Gt(x) =Ctx for some Ct>0 for all x≥0. Note that if x≥0, then Gt(x) =Gt(max( x1kt)) = Ψ∗ t(x1kt). However, Ψ∗ t(x1kt) =x sup v∈RktktX i=1ϕt(v;i) =xΨ∗ t(1kt). Hence, we have shown that Ψ∗ t(p) = Ψ∗ t(1kt) max( p) for all p∈Rkt ≥0. The proof follows noting Ψ∗ t(1kt)>0 by Lemma S6.1. S8. Proofs of the lemmas from Section 4 S8.1. Single-stage case: proof of Lemma 4.1 Proof of Lemma 4.1. The sufficiency part follows directly from Theorem 4.1. We will only consider the proof of the necessity of Condition N1. In this case T= 1 and f=f1. Note that given any p∈Rk >0, we can find a P satisfying Assumptions I-V so that E[Y1|H1, A1=i] =pi for all i∈[k]. In this case, Q∗ 1(H1, A1) =E[Y1|H1, A1] =pA1. From Lemma S7.2, it follows that Vψ ∗= sup f1∈F1Eϕ1(f1(H1);A1) π1(A1|H1)Y1 = Ψ∗ 1(p). Equation S7.7 implies that Vψ(f) =E[Ψ1(f1(H1);p)]. The rest of the proof follows similarly to Section S7.2, and is hence skipped. S8.2. Proof of Lemma 4.2 Proof of Lemma 4.2. In this proof, we will use that Ψ∗(1k)>0, which follows from Lemma S6.1 since ϕ satisfies Condition N1.
Only if part: Suppose Condition N2 holds. Without loss of generality, let us assume that $\Psi^*(\mathbf 1_k)=1$; otherwise, we can replace $\phi$ by $\phi/\Psi^*(\mathbf 1_k)$. Since $\Psi^*(\mathbf 1_k)=1$, $C_\phi$ is one. Thus, we need to show that $S_{k-1}\subset\mathrm{conv}(\overline{\mathcal V}_\phi)$. Note that it suffices to show
\[\{e^{(1)}_k,\ldots,e^{(k)}_k\}\subset\overline{\mathcal V}_\phi\tag{S8.1}\]
because $S_{k-1}\subset\mathrm{conv}(\overline{\mathcal V}_\phi)$ would follow by taking the convex hull on both sides of (S8.1). The closed convex hull of a set in $\mathbb R^k$ is the same as that of its closure, cf. page 31 of Hiriart-Urruty and Lemaréchal (2004). Therefore, $\overline{\mathrm{conv}}(\mathcal V_\phi)=\overline{\mathrm{conv}}(\overline{\mathcal V}_\phi)$, which would complete the proof of $S_{k-1}\subset\mathrm{conv}(\overline{\mathcal V}_\phi)$. To prove (S8.1), we will show that $e^{(1)}_k\in\overline{\mathcal V}_\phi$; the proof follows similarly for the other $e^{(i)}_k$'s. Consider $p\in\mathbb R^k_{>0}$ such that $\operatorname{argmax}(p)=\{1\}$. Then, from Lemma S6.10, there exists a sequence $\{v^{(m)}\}_{m\ge 1}\subset\mathcal V_\phi$ such that $v^{(m)}_i\to_m 0$ if $i\neq 1$ and $\langle v^{(m)},p\rangle\to_m p_1$. Since $p_1>0$, $v^{(m)}_1\to_m 1$. Here Lemma S6.10 applies because both Conditions N1 and N2 hold with $C_\phi=1$. Therefore, $v^{(m)}\to_m e^{(1)}_k$. Hence, it
follows that $e^{(1)}_k\in\overline{\mathcal V}_\phi$. Therefore, the proof of (S8.1) follows.

If part: Suppose $C_\phi S_{k-1}\subset\mathrm{conv}(\overline{\mathcal V}_\phi)$, where $C_\phi=\Psi^*(\mathbf 1_k)$. Without loss of generality, we assume that $\Psi^*(\mathbf 1_k)=1$; otherwise, we can replace $\phi$ by $\phi/\Psi^*(\mathbf 1_k)$. Thus, we have $S_{k-1}\subset\mathrm{conv}(\overline{\mathcal V}_\phi)$. Hence, $\varsigma_{S_{k-1}}\le\varsigma_{\mathrm{conv}(\overline{\mathcal V}_\phi)}$ (cf. Theorem 3.3.1, p. 151, Hiriart-Urruty and Lemaréchal, 2004). Also, the support function of $S_{k-1}$ equals the support function of $\{e^{(1)}_k,\ldots,e^{(k)}_k\}$ (cf. Prop 2.2.1, p. 137, Hiriart-Urruty and Lemaréchal, 2004). Hence, for any $p\in\mathbb R^k$,
\[\varsigma_{S_{k-1}}(p)=\max\{p_1,\ldots,p_k\}=\max(p).\]
Therefore, we have obtained that for any $p\in\mathbb R^k$, $\max(p)\le\varsigma_{\mathrm{conv}(\overline{\mathcal V}_\phi)}(p)=\Psi^*(p)$. If $p\in\mathbb R^k_{\ge 0}$, then $\Psi^*(p)\le\max(p)\Psi^*(\mathbf 1_k)=\max(p)$ because we assumed $\Psi^*(\mathbf 1_k)=1$. Therefore, for $p\in\mathbb R^k_{>0}$, $\Psi^*(p)=\max(p)$. Hence, Condition N2 holds.

S8.3. Proof of Lemma 4.3

Proof of Lemma 4.3. Note that for any $p\in\mathbb R^k_{\ge 0}$,
\[\sup_{x\in\mathbb R^k}\Psi_{a,b}(x;p)=b\sup_{x\in\mathbb R^k}\Psi(ax;p)=b\,\Psi^*(p)=b\,C_\phi\max(p)\]
because $\phi$ satisfies Condition N2. Hence, $\phi_{a,b}$ satisfies Condition N2. Also,
\[\sup_{x:\,\mathrm{pred}(x)\notin\operatorname{argmax}(p)}\Psi_{a,b}(x;p)=b\sup_{x\in\mathbb R^k:\,\mathrm{pred}(x)\notin\operatorname{argmax}(p)}\Psi(ax;p).\]
Note that $\mathrm{pred}(ax)=\mathrm{pred}(x)$ since $a>0$. Therefore,
\[\sup_{x\in\mathbb R^k:\,\mathrm{pred}(x)\notin\operatorname{argmax}(p)}\Psi(ax;p)=\sup_{x:\,\mathrm{pred}(ax)\notin\operatorname{argmax}(p)}\Psi(ax;p)=\sup_{x\in\mathbb R^k:\,\mathrm{pred}(x)\notin\operatorname{argmax}(p)}\Psi(x;p)<\Psi^*(p)\]
since $\phi$ satisfies (4.4). Thus it follows that
\[\sup_{x\in\mathbb R^k:\,\mathrm{pred}(x)\notin\operatorname{argmax}(p)}\Psi_{a,b}(x;p)<b\,\Psi^*(p)=\sup_{x\in\mathbb R^k}\Psi_{a,b}(x;p),\]
indicating that $\phi_{a,b}$ satisfies Condition N1. Hence, we have shown that $\phi_{a,b}$ satisfies Conditions N1 and N2.

With an abuse of notation, we denote the surrogate $\phi+c$ by $\phi_c$ and the corresponding $\Psi$ and $\Psi^*$ transforms by $\Psi_c$ and $\Psi^*_c$, respectively. Observe that
\[\sup_{x\in\mathbb R^k:\,\mathrm{pred}(x)\notin\operatorname{argmax}(p)}\Psi_c(x;p)=\sup_{x\in\mathbb R^k:\,\mathrm{pred}(x)\notin\operatorname{argmax}(p)}\Psi(x;p)+c\sum_{i\in[k]}p_i\overset{(a)}{<}\sup_{x\in\mathbb R^k}\Psi(x;p)+c\sum_{i\in[k]}p_i=\sup_{x\in\mathbb R^k}\Psi_c(x;p)=\Psi^*_c(p),\]
where (a) follows because $\phi$ satisfies Condition N1. Therefore, $\phi_c$ satisfies Condition N1.
However,
\[\Psi^*_c(p)=\sup_{x\in\mathbb R^k}\Psi_c(x;p)=\Psi^*(p)+c\sum_{i\in[k]}p_i=C_\phi\max(p)+c\sum_{i\in[k]}p_i\]
because $\phi$ satisfies Condition N2. Therefore, $\phi_c$ does not satisfy Condition N2 unless $c=0$.

S8.4. Restricted class: Proof of Lemma 4.4

As in Proposition 1, it suffices to prove the result for the case where $C_{\phi_t}=1$ for all $t\in[T]$, since the general case then follows by replacing $\phi_t$ with $\phi_t/C_{\phi_t}$. Note that since $af\in\mathcal L$ for all $a>0$ and $f\in\mathcal L$, if we can prove that $\lim_{a\to\infty}V^\psi(af^*)=\sup_{f\in\mathcal F}V^\psi(f)$, then $\sup_{f\in\mathcal F}V^\psi(f)=\sup_{f\in\mathcal L}V^\psi(f)$, which, combined with Proposition 1, completes the proof. Hence, it suffices to show that $\lim_{a\to\infty}V^\psi(af^*)=V^\psi_*$.

Lemma S6.14 implies that under Conditions N1 and N2, $V^\psi_*=E[\Psi^*_1(p^*_1(H_1))]$, where $p^*_1(H_1)$ is as in (S6.2). However, Condition N2 implies $\Psi^*_1(p^*_1(H_1))=\max(p^*_1(H_1))$. Lemma S6.13 implies $p^*_1(H_1)=Q^*_1(H_1)$. Thus $V^\psi_*=E[\max(Q^*_1(H_1))]$. However, Fact S11.4 implies $V_*=E[\max(Q^*_1(H_1))]$. Therefore, $V^\psi_*=V_*$. Hence, it suffices to show that $\lim_{a\to\infty}V^\psi(af^*)=V_*$.

If $\operatorname{argmax}(f^*_t(H_t))$ is a singleton, then Condition N3 implies $\lim_{a\to\infty}\phi(af^*_t(H_t);j)=1[j=\operatorname{argmax}(f^*_t(H_t))]$. Thus
\[P\Big(\lim_{a\to\infty}\prod_{t=1}^T\frac{\phi(af^*_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}=\prod_{t=1}^T\frac{1[A_t=\operatorname{argmax}(f^*_t(H_t))]}{\pi_t(A_t\mid H_t)}\Big)=1.\]
Under the setup of the current lemma, $\operatorname{argmax}(f^*_t(H_t))=d^*_t(H_t)$ for some version of $d^*_t$ with $P$-probability one. Therefore, the above implies
\[P\Big(\lim_{a\to\infty}\prod_{t=1}^T\frac{\phi(af^*_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}=\prod_{t=1}^T\frac{1[A_t=d^*_t(H_t)]}{\pi_t(A_t\mid H_t)}\Big)=1.\]
Since Condition N1 is satisfied, Lemma S6.1 implies that the $\phi$'s are all bounded. Assumption I implies that the $\pi_t$'s are all bounded below, and Assumption IV implies that the $Y_t$'s are bounded above. Therefore,
\[\sum_{i=1}^T Y_i\prod_{t=1}^T\frac{\phi(af^*_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}<C\]
for some constant $C>0$ potentially depending on $P$. Hence, the dominated convergence theorem implies that
\[\lim_{a\to\infty}V^\psi(af^*)=\lim_{a\to\infty}E\Big[\sum_{i=1}^T Y_i\prod_{t=1}^T\frac{\phi(af^*_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}\Big]=E\Big[\sum_{i=1}^T Y_i\prod_{t=1}^T\frac{1[A_t=d^*_t(H_t)]}{\pi_t(A_t\mid H_t)}\Big],\]
which equals $V_*$, thus completing the proof.

S8.5. Proof of the results for the kernel-based surrogates in Section 4.2.1

Proof of Lemma 4.5. Without loss of generality, let us assume $C=1$ in (4.10). Since $\mathrm{pred}(x)=\max(\operatorname{argmax}(x))$ for all $x\in\mathbb R^k$, (4.10) implies
\[\phi(x;j)=\int_{\mathbb R^k}1[\mathrm{pred}(x-u)=j]K(u)\,du=\int_{\mathbb R^k}\prod_{i=j+1}^k 1[x_j-Z_j>x_i-Z_i]\prod_{i=1}^{j-1}1[x_j-Z_j\ge x_i-Z_i]\,dP_Z\overset{(a)}{=}P_Z\big(Z_j\le Z_i+x_j-x_i\text{ for all }i\neq j,\ i\in[k]\big)\tag{S8.2}\]
where (a) uses the fact that $P_Z(Z_j=Z_i+x_j-x_i)=0$ for all $i$ and $j$ in $[k]$, which follows because the $Z_i$'s have a density. Now,
\[P_Z\big(Z_j\le Z_i+x_j-x_i\text{ for all }i\neq j,\ i\in[k]\big)=E_K\big[P_Z\big(Z_j\le Z_i+x_j-x_i\text{ for all }i\neq j,\ i\in[k]\mid Z_j\big)\big]=E_K\Big[\prod_{i\neq j}P_Z\big(Z_j\le Z_i+x_j-x_i\mid Z_j\big)\Big]\]
because $Z_1,\ldots,Z_k$ are i.i.d. given $Z_j$. Since they are independent of $Z_j$ as well, the above equals
\[E_K\Big[\prod_{i\neq j}\big(1-F_K(Z_j+x_i-x_j)\big)\Big]=E_K\Big[\prod_{i\neq j}\big(1-F_K(Z+x_i-x_j)\big)\Big],\]
where $Z$ is a random variable with distribution $F_K$. Thus the first part of the lemma follows.

Suppose $j\in\operatorname{argmax}(x)$. Then $x_j-x_i\ge 0$ for all $i\in[k]$. Thus, $1-F_K(Z+x_i-x_j)\ge 1-F_K(Z)$. Therefore, $\phi(x;\mathrm{pred}(x))\ge E_K[(1-F_K(Z))^{k-1}]$. Now note that since $Z\sim F_K$, $F_K(Z)\sim U(0,1)$. Hence, $1-F_K(Z)\equiv U\sim U(0,1)$ as well. Thus $\phi(x;\mathrm{pred}(x))\ge E[U^{k-1}]=1/k$. Hence, the second part of the lemma follows.

S8.5.1. Forms of the kernel-based $\phi$ in special cases

Lemma 4.5 implies
\[\phi(x;j)=C\,E_K\Big[\prod_{i\neq j}\bar F_K(Z+x_i-x_j)\Big]\quad\text{for all }j\in[k],\]
where $\bar F_K=1-F_K$. If $K$ is symmetric about $0$, then
\[\phi(x;j)=C\,E_K\Big[\prod_{i\neq j}F_K(x_j-x_i-Z)\Big]\quad\text{for all }j\in[k].\]
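As a numerical sanity check on Lemma 4.5 (an illustration only, not part of the proof; the logistic kernel and the test point $x$ are arbitrary choices), the sketch below compares the closed form $E_K\prod_{i\neq j}(1-F_K(Z+x_i-x_j))$ with the defining smoothed probability $P_Z(\mathrm{pred}(x-U)=j)$ by Monte Carlo, and checks the bound $\phi(x;\mathrm{pred}(x))\ge 1/k$:

```python
import math, random

random.seed(0)
sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))  # F_K for the standard logistic kernel

def sample_logistic():
    u = random.random()
    return math.log(u / (1.0 - u))

x = [2.0, 0.5, -1.0]   # arbitrary score vector; pred(x) = 0 here (0-indexed)
k, n = len(x), 200_000

def phi_closed(j):
    # closed form of Lemma 4.5: E_K prod_{i != j} (1 - F_K(Z + x_i - x_j))
    acc = 0.0
    for _ in range(n):
        z = sample_logistic()
        prod = 1.0
        for i in range(k):
            if i != j:
                prod *= 1.0 - sigmoid(z + x[i] - x[j])
        acc += prod
    return acc / n

def phi_direct(j):
    # defining smoothed probability P(pred(x - U) = j), U_i i.i.d. logistic
    hits = 0
    for _ in range(n):
        vals = [x[i] - sample_logistic() for i in range(k)]
        if max(range(k), key=lambda i: vals[i]) == j:  # ties have probability 0
            hits += 1
    return hits / n

a, b = phi_closed(0), phi_direct(0)
assert abs(a - b) < 0.02   # the two representations agree up to MC error
assert a >= 1.0 / k        # phi(x; pred(x)) >= 1/k, the second part of the lemma
```

Both estimators target the same probability, mirroring the chain of equalities in the proof above.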
Suppose $k=3$ and $C=1$. Then it follows that
\[\phi(x;i)=\begin{cases}\eta_s(x_1-x_2,\,x_1-x_3)&\text{if }i=1\\ \eta_s(x_2-x_3,\,x_2-x_1)&\text{if }i=2\\ \eta_s(x_3-x_1,\,x_3-x_2)&\text{if }i=3\end{cases}\]
where, for $x,y\in\mathbb R$,
\[\eta_s(x,y)=\begin{cases}E_K[F_K(x-Z)F_K(y-Z)]&\text{if }K\text{ is symmetric about }0\\ E_K[\bar F_K(Z-x)\bar F_K(Z-y)]&\text{otherwise.}\end{cases}\tag{S8.3}\]
In Section 3, we used $\eta$ to denote the template for two-stage PERM losses, so $\eta_s$ may appear to be overloaded notation. However, the expressions above show that $\phi$ is a single-stage PERM loss with template $\eta_s$ (see also Wang and Scott, 2023b, for the single-stage definition of PERM losses). Here, the subscript $s$ indicates "single stage."

Since the logistic is a symmetric (w.r.t. $0$) distribution, when $K$ is the standard logistic density, straightforward algebra yields
\[\eta_s(a,b)=\frac{e^{a+b}\big[a(e^b-1)^2-b(e^a-1)^2+(e^a-1)(e^b-1)(e^a-e^b)\big]}{(e^a-1)^2(e^b-1)^2(e^a-e^b)}\quad\text{for all }a,b\in\mathbb R.\]
When $K$ is the standard Gumbel density, straightforward algebra yields
\[\eta_s(a,b)=\frac{e^{a+b}-1}{(1+e^a)(1+e^b)}+\frac{1}{1+e^a+e^b}\quad\text{for all }a,b\in\mathbb R.\]

S8.5.2. Proof of Lemma 4.6

Proof of Lemma 4.6. Without loss of generality, we assume $C=1$. Therefore, to show $C_\phi=C$, it suffices to show that $C_\phi=1$. First, we will show
that $\sup_{x\in\mathbb R^k}\Psi(x;p)=\max(p)$, which would imply that $\phi$ satisfies Condition N2 with $C_\phi=1$. Note that
\[\Psi(x;p)=\sum_{j\in[k]}p_j\int_{\mathbb R^k}1[\mathrm{pred}(x-u)=j]K(u)\,du=\int_{\mathbb R^k}\Big(\sum_{j\in[k]}p_j1[\mathrm{pred}(x-u)=j]\Big)K(u)\,du=\int_{\mathbb R^k}p_{\mathrm{pred}(x-u)}K(u)\,du\le\max(p)\int_{\mathbb R^k}K(u)\,du.\]
Since $\int K(u)\,du=1$, we have $\Psi(x;p)\le\max(p)$. Suppose $j\in\operatorname{argmax}(p)$, and let $x_n=ne^{(k)}_j$. Then
\[\lim_{n\to\infty}\Psi(x_n;p)=\lim_{n\to\infty}\int_{\mathbb R^k}p_{\mathrm{pred}(x_n-u)}K(u)\,du=\int_{\mathbb R^k}\lim_{n\to\infty}p_{\mathrm{pred}(x_n-u)}K(u)\,du\]
by the dominated convergence theorem, since $p_{\mathrm{pred}(x_n-u)}\le\max(p)$ and $\int K(u)\,du=1$. However, for each fixed $u$, $\mathrm{pred}(x_n-u)=j$ for all sufficiently large $n$, so $\lim_{n\to\infty}p_{\mathrm{pred}(x_n-u)}=p_j$. Therefore, $\lim_{n\to\infty}\Psi(x_n;p)=p_j=\max(p)$. Hence, we have proved that $\sup_{x\in\mathbb R^k}\Psi(x;p)=\max(p)$, which implies Condition N2 holds with $C_\phi=1$.

Now we will show that Condition N1 holds. To this end, it suffices to show that (4.7) holds. Suppose $\mathrm{pred}(x)\notin\operatorname{argmax}(p)$. Then $\Psi(x;p)$ equals
\[\int_{\mathbb R^k}p_{\mathrm{pred}(x-u)}K(u)\,du=\int_{u:\,\mathrm{pred}(x-u)=\mathrm{pred}(x)}p_{\mathrm{pred}(x-u)}K(u)\,du+\int_{u:\,\mathrm{pred}(x-u)\neq\mathrm{pred}(x)}p_{\mathrm{pred}(x-u)}K(u)\,du\]
\[\le p_{\mathrm{pred}(x)}\int_{u:\,\mathrm{pred}(x-u)=\mathrm{pred}(x)}K(u)\,du+\max(p)\int_{u:\,\mathrm{pred}(x-u)\neq\mathrm{pred}(x)}K(u)\,du=p_{\mathrm{pred}(x)}P_K\big(\mathrm{pred}(x-\vec Z)=\mathrm{pred}(x)\big)+\max(p)P_K\big(\mathrm{pred}(x-\vec Z)\neq\mathrm{pred}(x)\big),\]
where $\vec Z=(Z_1,\ldots,Z_k)$ with the $Z_i$'s being i.i.d. with common density $K$, and $P_K$ is the probability measure induced by $K$, the joint density of the random vector $\vec Z$. Therefore, we have obtained that
\[\Psi(x;p)\le\max(p)-P_K\big(\mathrm{pred}(x-\vec Z)=\mathrm{pred}(x)\big)\big(\max(p)-p_{\mathrm{pred}(x)}\big),\]
which, noting $\Psi^*(p)=\max(p)$, implies
\[\Psi^*(p)-\Psi(x;p)\ge P_K\big(\mathrm{pred}(x-\vec Z)=\mathrm{pred}(x)\big)\big(\max(p)-p_{\mathrm{pred}(x)}\big).\tag{S8.4}\]
Observe that if $\mathrm{pred}(x)=\mathrm{pred}(-\vec Z)$, then $\operatorname{argmax}(x-\vec Z)\ni\mathrm{pred}(x)$. Letting $P_Z$ denote the probability measure corresponding to the distribution of the $Z_i$'s, we obtain that
\[P_K\big(\operatorname{argmax}(x-\vec Z)\ni\mathrm{pred}(x)\big)\ge P_K\big(\mathrm{pred}(-\vec Z)=\mathrm{pred}(x)\big)\ge P_Z\big(-Z_{\mathrm{pred}(x)}>-Z_i,\ i\neq\mathrm{pred}(x)\big)=P_Z\big(-Z_1>-Z_i,\ i\in[2:k]\big),\]
where the last step follows because $-Z_1,\ldots,-Z_k$ are i.i.d., and hence exchangeable, and $x$ is non-stochastic. The right-hand side of the above display is at least $2^{-(k-1)}$ because the $Z_i$'s have a density.
Now note that
\[P_K\big(\operatorname{argmax}(x-\vec Z)\ni\mathrm{pred}(x)\big)\le P_K\big(\mathrm{pred}(x-\vec Z)=\mathrm{pred}(x)\big)+P_K\big(\operatorname{argmax}(x-\vec Z)\text{ is not a singleton}\big),\]
whose second term vanishes because $\vec Z$ has a density. Thus we have shown that
\[P_K\big(\mathrm{pred}(x-\vec Z)=\mathrm{pred}(x)\big)\ge 2^{-(k-1)}.\]
Therefore, (S8.4) implies that (4.7) holds with $\chi_\phi=2^{-(k-1)}$.

It remains to show that Condition N3 holds. To this end, it suffices to show that if $\operatorname{argmax}(x)=\{j\}$, then $\lim_{a_n\to\infty}\phi(a_nx;j)=1$. By (4.10),
\[\liminf_{a_n\to\infty}\phi(a_nx;j)=\liminf_{a_n\to\infty}E_K\Big[\prod_{i\in[k]:i\neq j}\big(1-F_K(Z+a_nx_i-a_nx_j)\big)\Big]\overset{(a)}{\ge}E_K\Big[\liminf_{a_n\to\infty}\prod_{i\in[k]:i\neq j}\big(1-F_K(Z+a_nx_i-a_nx_j)\big)\Big]\]
\[=E_K\Big[\prod_{i\in[k]:i\neq j}\big(1-\liminf_{a_n\to\infty}F_K(Z+a_nx_i-a_nx_j)\big)\Big]\overset{(b)}{=}E_K\Big[\prod_{i\in[k]:i\neq j}\big(1-F_K\big(Z+\liminf_{a_n\to\infty}(a_nx_i-a_nx_j)\big)\big)\Big],\]
where (a) follows by Fatou's lemma since $0\le F_K\le 1$, and (b) follows because $F_K$ is continuous on $\mathbb R$ since it has a density. Since $x_j-x_i>0$ for all $i\in[k]$ such that $i\neq j$, we have $a_nx_i-a_nx_j\to_n-\infty$. Thus, the right-hand side of the above display is one, and we have shown that $\liminf_{a_n\to\infty}\phi(a_nx;j)\ge 1$. However, $\phi(x;j)\le\Psi(x;\mathbf 1_k)=1$ since $\Psi(x;\mathbf 1_k)=1$ for the kernel-based surrogate. Therefore, $\lim_{a_n\to\infty}\phi(a_nx;j)=1$ in this case.

Now suppose $\operatorname{argmax}(x)$ is a singleton and $j\notin\operatorname{argmax}(x)$. Then $1-F_K(Z+a_nx_{\mathrm{pred}(x)}-a_nx_j)\to_{a.s.}0$. Therefore,
\[\prod_{i\in[k]:i\neq j}\big(1-F_K(Z+a_nx_i-a_nx_j)\big)\to_{a.s.}0.\]
Therefore, by the dominated convergence theorem, it follows that
\[\liminf_{a_n\to\infty}E_K\Big[\prod_{i\in[k]:i\neq j}\big(1-F_K(Z+a_nx_i-a_nx_j)\big)\Big]=E_K\Big[\liminf_{a_n\to\infty}\prod_{i\in[k]:i\neq j}\big(1-F_K(Z+a_nx_i-a_nx_j)\big)\Big],\]
which equals zero. The proof
follows since $\liminf_{a_n\to\infty}\phi(a_nx;j)=\liminf_{a_n\to\infty}E_K\big[\prod_{i\in[k]:i\neq j}\big(1-F_K(Z+a_nx_i-a_nx_j)\big)\big]$.

S8.6. Proof of Lemma 4.7 for the product-based losses

Proof of Lemma 4.7. Without loss of generality, let us assume that $C=1$, which implies $\tau=\bar F_K(-\,\cdot\,)$. Because $K$ is symmetric about zero, $\bar F_K(-x)=F_K(x)$, implying $\tau=F_K$ in this case. Since $\tau$ is a distribution function in this case, it is non-decreasing. Hence, for the surrogate in (4.11), $\phi(x;\mathrm{pred}(x))\ge\tau(0)^{k-1}$. If $K$ is symmetric about zero, then $\tau(0)=\int_{-\infty}^{\infty}1[y\le 0]K(y)\,dy=1/2$. Hence, $J=2^{-(k-1)}$ for such $\phi$'s.

Proving Conditions N1, N2, and (4.7): Since we assumed $C=1$, it suffices to show that Conditions N1, N2, and (4.7) hold with $C_\phi=1$. If we can show that $\Psi^*(p)=\max(p)$ and
\[\Psi^*(p)-\Psi(x;p)\ge\big(\max(p)-p_{\mathrm{pred}(x)}\big)/2,\tag{S8.5}\]
then it will follow that Condition N2 holds with $C_\phi=1$ and (4.7) holds with $\chi_\phi=1/2$. The latter would imply that Condition N1 holds. Thus it suffices to show (S8.5).

First, we will show that $\Psi^*(p)\ge\max(p)$. To see this, consider $x^{(m)}=me^{(k)}_{\mathrm{pred}(p)}$. In this case, $\mathrm{pred}(x^{(m)})=\mathrm{pred}(p)$. Since $\tau$ is a distribution function, it follows that $\lim_{m\to\infty}\phi(x^{(m)};i)=1$ if $i=\mathrm{pred}(p)$ and $\lim_{m\to\infty}\phi(x^{(m)};i)=0$ for all other $i$'s in $[k]$. Therefore, $\Psi(x^{(m)};p)\to_m\max(p)$. Since $\Psi^*(p)=\sup_{x\in\mathbb R^k}\Psi(x;p)$, it follows that $\Psi^*(p)\ge\lim_{m\to\infty}\Psi(x^{(m)};p)=\max(p)$.

Therefore, to show that Condition N2 holds, it suffices to show that $\Psi^*(p)\le\max(p)$. We will show that, given any $k\in\mathbb N$,
\[\Psi(x;p)\le\big(\max(p)+p_{\mathrm{pred}(x)}\big)/2\quad\text{for all }p\in\mathbb R^k_{\ge 0}\text{ and }x\in\mathbb R^k.\]
Note that the above implies
\[2\max(p)-2\Psi(x;p)\ge\max(p)-p_{\mathrm{pred}(x)},\tag{S8.6}\]
which, combined with $\Psi^*(p)\ge\max(p)$, leads to (S8.5) with $\chi_\phi=1/2$. Also, $\Psi(x;p)\le(\max(p)+p_{\mathrm{pred}(x)})/2$ implies $\Psi^*(p)\le\max(p)$, from which Condition N2 follows. Therefore, it suffices to prove (S8.6). We will use mathematical induction to prove this. Our induction hypothesis is that (S8.6) holds for some $k\in\mathbb N$. We will first show that (S8.6) holds for $k=1$.
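The inequality $\Psi(x;p)\le(\max(p)+p_{\mathrm{pred}(x)})/2$ behind (S8.6) can also be spot-checked numerically. The sketch below (an illustration only; the logistic choice of $\tau$ and the random draws are ours) evaluates the product-based surrogate directly:

```python
import math, random

random.seed(1)
tau = lambda t: 1.0 / (1.0 + math.exp(-t))  # tau = F_K for a symmetric (logistic) K

def phi(x, j):
    # product-based surrogate (4.11) with C = 1
    return math.prod(tau(x[j] - x[i]) for i in range(len(x)) if i != j)

def Psi(x, p):
    return sum(p[j] * phi(x, j) for j in range(len(x)))

def pred(x):
    m = max(x)
    return max(i for i, v in enumerate(x) if v == m)  # pred = largest argmax index

for _ in range(1000):
    k = random.choice([2, 3, 4])
    x = [random.uniform(-3.0, 3.0) for _ in range(k)]
    p = [random.uniform(0.0, 1.0) for _ in range(k)]
    # the target bound behind (S8.6): Psi(x; p) <= (max(p) + p_pred(x)) / 2
    assert Psi(x, p) <= (max(p) + p[pred(x)]) / 2 + 1e-12
```

No random draw violates the bound, consistent with the induction argument that follows.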
Next, we will show that if (S8.6) holds for some $k\in\mathbb N$, then it holds for $k+1$ as well, which will complete the proof.

When $k=1$, $x=x\in\mathbb R$ and $p=p\in\mathbb R_{\ge 0}$. Note that $\Psi(x;p)=p\tau(x)$. Since $\tau$ is a distribution function, $\sup_{x\in\mathbb R}\tau(x)=1$ and $\Psi^*(p)=p=\max(p)$. Also, $p_{\mathrm{pred}(x)}=p$. Since $\tau$ is a distribution function, $\tau(x)\le 1$. Thus
\[\Psi(x;p)=p\tau(x)\le p=\big(\max(p)+p_{\mathrm{pred}(x)}\big)/2=(p+p)/2,\]
implying (S8.6) holds for $k=1$.

Suppose the induction hypothesis holds for some $k\in\mathbb N$. Consider $x\in\mathbb R^{k+1}$ and $p\in\mathbb R^{k+1}_{\ge 0}$. Let $\mathrm{pred}(x)=i^*$, and let $r$ be such that $r=\max\{\operatorname{argmax}_{i\in[k+1]:i\neq i^*}x_i\}$. Note that we allow the case $x_r=\max(x)$, which can happen if $\operatorname{argmax}(x)$ is not a singleton. Also, $x_r\le x_{i^*}$ and $x_r\ge x_i$ for all $i\in[k+1]\setminus\{r,i^*\}$. Note that $\Psi(x;p)$ equals
\[p_{i^*}\phi(x;i^*)+\sum_{j\in[k+1]:j\neq i^*}p_j\phi(x;j)=p_{i^*}\tau(x_{i^*}-x_r)\prod_{i\notin\{r,i^*\}}\tau(x_{i^*}-x_i)+\sum_{j\neq i^*}p_j\tau(x_j-x_{i^*})\prod_{i\notin\{i^*,j\}}\tau(x_j-x_i).\]
Here, the products are one if the range is empty, which is the case when $k+1=2$ since $r\neq i^*$. Since $\tau$ is a distribution function, $\tau\le 1$. Also, since $\tau$ is a distribution function, it is non-decreasing, implying $\tau(x_j-x_{i^*})\le\tau(x_r-x_{i^*})$ for all $j\neq i^*$. Therefore,
\[\Psi(x;p)\le p_{i^*}\tau(x_{i^*}-x_r)+\tau(x_r-x_{i^*})\sum_{j\in[k+1]:j\neq i^*}p_j\prod_{i\notin\{i^*,j\}}\tau(x_j-x_i).\]
Let us denote $p'=(p_1,\ldots,p_{i^*-1},p_{i^*+1},\ldots,p_{k+1})$ and $x'=(x_1,\ldots,x_{i^*-1},x_{i^*+1},\ldots,x_{k+1})$. Then
$p'\in\mathbb R^k_{\ge 0}$ and $x'\in\mathbb R^k$. Let us also denote by $\Psi(x';p')$ the $k$-dimensional version of $\Psi$. Since the induction hypothesis holds for $k$, $\Psi(x';p')\le\max(p')$. Note that the last display can be rewritten as
\[\Psi(x;p)\le p_{i^*}\tau(x_{i^*}-x_r)+\tau(x_r-x_{i^*})\Psi(x';p')\le p_{i^*}\tau(x_{i^*}-x_r)+\tau(x_r-x_{i^*})\max(p')\le p_{i^*}\tau(x_{i^*}-x_r)+\tau(x_r-x_{i^*})\max(p)\le\sup_{x\ge 0}\big\{p_{i^*}\tau(x)+\tau(-x)\max(p)\big\},\]
where we used the fact that $x_{i^*}\ge x_r$. Since $K$ is symmetric about zero, $\tau(-x)=1-\tau(x)$. The derivative of the function $x\mapsto p_{i^*}\tau(x)+\max(p)\tau(-x)$ is $K(x)(p_{i^*}-\max(p))$, which is non-positive because $p_{i^*}\le\max(p)$ and $K(x)\ge 0$ as $K$ is a density. Thus, on the positive half-line, $p_{i^*}\tau(x)+\max(p)\tau(-x)$ is maximized at $x=0$. However, since $K$ is symmetric about zero, $\tau(0)=1/2$, which implies $\Psi(x;p)\le(p_{i^*}+\max(p))/2$. Thus, we have shown that the induction hypothesis holds for $k+1$. Hence, the induction hypothesis holds for all $k\in\mathbb N$, which completes the proof.

Proving Condition N3: Since $\tau$ is a distribution function for $C=1$, $\tau(\infty)=1$ and $\tau(-\infty)=0$. Note also that we have shown that $C_\phi=1$. If $\operatorname{argmax}(x)=\{j\}$, then
\[\lim_{a\to\infty}\phi(ax;j)=\prod_{i\neq j}\lim_{a\to\infty}\tau(a(x_j-x_i))=1.\]
If $j\notin\operatorname{argmax}(x)$, and $\operatorname{argmax}(x)$ is a singleton, then $a(x_j-x_{\mathrm{pred}(x)})\to-\infty$ as $a\to\infty$. The proof follows noting that, in this case,
\[\lim_{a\to\infty}\phi(ax;j)=\prod_{i\neq j}\lim_{a\to\infty}\tau(a(x_j-x_i))=0.\]

S9. Proofs for Section 6.2

S9.0.1. Proof of Lemma 6.1

Proof. Proposition 1 implies $C_*(V_*-V(\hat f))\le V^\psi_*-V^\psi(\hat f)$. Since $J_t>0$ for all $t\in[T]$, it follows that $C_*>0$. Recall from Algorithm 1 that $\hat f=\mathrm{tran}(\hat g)$, where the tran operator is as in (5.2). Since we defined $V^{\psi,\mathrm{rel}}(g)$ to be $V^\psi(\mathrm{tran}(g))$, we have $V^\psi(\hat f)=V^{\psi,\mathrm{rel}}(\hat g)$. From Section 5, it follows that for relative-margin-based surrogates, $V^\psi_*=\sup_{g\in\mathcal W}V^{\psi,\mathrm{rel}}(g)$. Thus $V^\psi_*-V^\psi(\hat f)=\sup_{g\in\mathcal W}V^{\psi,\mathrm{rel}}(g)-V^{\psi,\mathrm{rel}}(\hat g)$, implying the following decomposition of the $\psi$-regret $V^\psi_*-V^\psi(\hat f)$:
\[\Big(\sup_{g\in\mathcal W}V^{\psi,\mathrm{rel}}(g)-V^{\psi,\mathrm{rel}}(\tilde g)\Big)+\Big((V^{\psi,\mathrm{rel}}-\hat V^{\psi,\mathrm{rel}})(\tilde g)-(V^{\psi,\mathrm{rel}}-\hat V^{\psi,\mathrm{rel}})(\hat g)\Big)+\Big(\hat V^{\psi,\mathrm{rel}}(\tilde g)-\sup_{g\in\mathcal U_n}\hat V^{\psi,\mathrm{rel}}(g)\Big)+\underbrace{\Big(\sup_{g\in\mathcal U_n}\hat V^{\psi,\mathrm{rel}}(g)-\hat V^{\psi,\mathrm{rel}}(\hat g)\Big)}_{\text{Optimization error: }\mathrm{Opt}_n}.\]
The proof follows noting that $\hat V^{\psi,\mathrm{rel}}(\tilde g)-\sup_{g\in\mathcal U_n}\hat V^{\psi,\mathrm{rel}}(g)\le 0$ because $\mathcal U_n\ni\tilde g$.

S9.1. Proof of Result 4

Proof of Result 4. The proof follows trivially if $j\in\operatorname{argmax}(x)$. Suppose $j\notin\operatorname{argmax}(x)$, and let $\mathrm{pred}(x)=i$. Then
\[\psi(x;j)=P_K\big(Z_j\le Z_{i'}+x_j-x_{i'}\text{ for all }i'\neq j,\ i'\in[k]\big)\le P_K\big(Z_j\le Z_i+x_j-x_i\big)=P_K\big(Z_i-Z_j\ge x_i-x_j\big).\]
However, since $x_i-x_j>0$,
\[P_K\big(Z_i-Z_j\ge x_i-x_j\big)\le P_K\big((Z_i-Z_j)^2\ge(x_i-x_j)^2\big)\le\frac{E_K[(Z_1-Z_2)^2]}{(x_i-x_j)^2}\]
by Markov's inequality, since $Z_1-Z_2$ has a finite second moment. Moreover, $C_a=E_K[(Z_1-Z_2)^2]=E_K[Z_1^2]+E_K[Z_2^2]=2E_K[Z_1^2]>0$ because $E_K[Z_1^2]>0$.

S9.2. Proof of Theorem 6.1

Without loss of generality, we assume that $C_{\phi_t}=1$ for all $t\in[T]$. If that is not the case, we can replace the $\phi_t$'s by $\phi_t/C_{\phi_t}$. Our first step is to express $V^\psi_*-V^\psi(f)$ in terms of the optimal Q-functions. To this end, we will need the following lemma.

Lemma S9.1. Under the setup of Theorem 6.1, for any $f=(f_1,\ldots,f_T)\in\mathcal F$,
\[V^\psi_*-V^\psi(f)=E\Big[\sum_{j=1}^T\prod_{i=1}^j\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\big(\max(Q^*_j(H_j))-Q^*_j(H_j;A_j)\big)\Big].\]
Lemma S9.1 is proved in Section S9.3. Since $\phi_t\ge 0$ and $\sum_{j\in[k_t]}\phi_t(x;j)\le\Psi^*_t(\mathbf 1_{k_t})=1$, it follows that $\phi_t\le 1$ for all $t\in[T]$. Moreover, since $\pi_t>C_\pi$ for all $t\in[T]$, using Lemma S9.1 and (S6.6), we obtain
\[V^\psi_*-V^\psi(f)\le\frac{\sum_{t\in[T]}E\big[\sum_{i\in[k_t]:i\neq d^*_t(H_t)}\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)\phi_t(f_t(H_t);i)\big]}{C_\pi^T}.\tag{S9.1}\]
For fixed $i\in[k_t]$,
the quantity $\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)\phi_t(f_t(H_t);i)$ equals
\[\underbrace{\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)\phi_t(b_nQ^*_t(H_t);i)}_{S_1}-\underbrace{\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)\big(\phi_t(b_nQ^*_t(H_t);i)-\phi_t(f_t(H_t);i)\big)}_{S_2}.\]
Defining $\tilde h=\mathrm{tran}(\tilde g)$, we will now show that when $f=b_n\tilde h$, then
\[S_1\lesssim C_ab_n^{-2}z^{-1}+z\,1[H_t\in\mathcal E_{ti}]\tag{S9.2}\]
and
\[S_2\lesssim C_ab_n^{-2}\delta_n^{-1}+\delta_n1_{\mathcal E'_{ti}}.\tag{S9.3}\]

S9.2.1. Bound on $S_1$

For fixed $t\in[T]$, $i\in[k_t]$, and $z>0$, let us denote $\mathcal E_{ti}=\{h_t\in\mathcal H_t:Q^*_t(h_t;d^*_t(h_t))-Q^*_t(h_t;i)\le z\}$. Ideally, $\mathcal E_{ti}$ should display its dependence on $z$, but we omit the dependence for notational simplicity. Let us denote the random variables $1[H_t\in\mathcal E_{ti}]$ and $1[H_t\in\mathcal E^c_{ti}]$ by $1_{\mathcal E_{ti}}$ and $1_{\mathcal E^c_{ti}}$, respectively. Then
\[S_1=\underbrace{\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)\phi_t(b_nQ^*_t(H_t);i)1_{\mathcal E^c_{ti}}}_{S_{11}}+\underbrace{\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)\phi_t(b_nQ^*_t(H_t);i)1_{\mathcal E_{ti}}}_{S_{12}}.\]
We apply Condition N3.s on the first term to obtain
\[\phi_t(b_nQ^*_t(H_t);i)\le C_ab_n^{-2}\big(Q^*_t(H_t,d^*_t(H_t))-Q^*_t(H_t,i)\big)^{-2}.\]
Thus
\[S_{11}\lesssim C_ab_n^{-2}\big(Q^*_t(H_t,d^*_t(H_t))-Q^*_t(H_t,i)\big)^{-1}1_{\mathcal E^c_{ti}}\le C_ab_n^{-2}z^{-1}\]
because for $H_t\in\mathcal E^c_{ti}$, $Q^*_t(H_t,d^*_t(H_t))-Q^*_t(H_t,i)>z>0$. Finally, (S9.2) follows noting that, since $\phi_t\le 1$ for all $t\in[T]$, we have $S_{12}\le z\,1_{\mathcal E_{ti}}$.

S9.2.2. Bound on $S_2$

Now, for fixed $t\in[T]$, $i\in[k_t]$, and any $\delta_n>0$, we denote $\mathcal E'_{ti}=\{h_t\in\mathcal H_t:Q^*_t(h_t;d^*_t(h_t))-Q^*_t(h_t;i)\le 3\delta_n\}$. Here we suppress the dependence of $\mathcal E'_{ti}$ on $n$ for the sake of simplicity. Now notice that $S_2$ is bounded above by
\[\underbrace{\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)\big|\phi_t(b_nQ^*_t(H_t);i)-\phi_t(b_n\tilde h_t(H_t);i)\big|1_{(\mathcal E'_{ti})^c}}_{S_{21}}+\underbrace{\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)\big|\phi_t(b_nQ^*_t(H_t);i)-\phi_t(b_n\tilde h_t(H_t);i)\big|1_{\mathcal E'_{ti}}}_{S_{22}}.\]
We will analyze $S_{21}$ first, for which we use Lemma S9.2 below. This lemma is proved in Supplement S9.3.1.

Lemma S9.2. Under the setup of Theorem 6.1, for all $t\in[T]$, $h_t\in\mathcal H_t$, and $i\in[k_t]$, it holds that
\[\tilde h_{t,d^*_t(h_t)}(h_t)-\tilde h_{ti}(h_t)\ge Q^*_t(h_t;d^*_t(h_t))-Q^*_t(h_t;i)-2\delta_n,\quad\text{where }\tilde h_t=\mathrm{tran}(\tilde g_t).\tag{S9.4}\]
Since $H_t\in(\mathcal E'_{ti})^c$ implies $Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)>3\delta_n$, Lemma S9.2 implies the following for $H_t\in(\mathcal E'_{ti})^c$:
\[\tilde h_{t,d^*_t(H_t)}(H_t)-\tilde h_{ti}(H_t)>\delta_n>0\quad\text{for all }i\neq d^*_t(H_t).\]
Therefore, $\operatorname{argmax}(\tilde h_t(H_t))=\{d^*_t(H_t)\}$ for $H_t\in(\mathcal E'_{ti})^c$.
On the other hand, Condition N3.s implies
\[\phi_t(b_n\tilde h_t(H_t);i)\le b_n^{-2}C_a\big(\tilde h_{t,d^*_t(H_t)}(H_t)-\tilde h_{ti}(H_t)\big)^{-2}\le b_n^{-2}C_a\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)-2\delta_n\big)^{-2},\]
where the last step follows from Lemma S9.2 and the fact that $Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)-2\delta_n>\delta_n>0$ for $H_t\in(\mathcal E'_{ti})^c$. Moreover, if $Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)>3\delta_n$, then $2\delta_n<\tfrac23\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)$, which implies, for $H_t\in(\mathcal E'_{ti})^c$,
\[Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)-2\delta_n\ge\tfrac13\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big).\]
Therefore, for $H_t\in(\mathcal E'_{ti})^c$,
\[\phi_t(b_n\tilde h_t(H_t);i)\le 9b_n^{-2}C_a\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)^{-2}.\]
On the other hand, Condition N3.s applied to $\phi_t(b_nQ^*_t(H_t);i)$ implies that
\[\phi_t(b_nQ^*_t(H_t);i)\le C_ab_n^{-2}\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)^{-2}.\]
Therefore, we have shown that for $H_t\in(\mathcal E'_{ti})^c$,
\[\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)\big|\phi_t(b_nQ^*_t(H_t);i)-\phi_t(b_n\tilde h_t(H_t);i)\big|\le 10C_ab_n^{-2}\big(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i)\big)^{-1},\]
which implies $S_{21}\le 10C_ab_n^{-2}\delta_n^{-1}1_{(\mathcal E'_{ti})^c}$. On the other hand, from the definition of $\mathcal E'_{ti}$, and using $\phi_t\le 1$, it is not hard to see that $S_{22}\le 6\delta_n1_{\mathcal E'_{ti}}$. Thus, (S9.3) is proved. Combining (S9.1), (S9.2), and (S9.3), we obtain that
\[V^\psi_*-V^\psi(b_n\tilde h)\lesssim\frac{1}{C_\pi^T}\Big(Tb_n^{-2}C_a(z^{-1}+\delta_n^{-1})+\sum_{t\in[T]}E\Big[\sum_{i\in[k_t]:i\neq d^*_t(H_t)}\big(z\,1_{\mathcal E_{ti}}+\delta_n1_{\mathcal E'_{ti}}\big)\Big]\Big).\]
Let $\mathcal E_t=\{h_t:\mu(Q^*_t(h_t))\le z\}$ and $\mathcal E'_t=\{h_t:\mu(Q^*_t(h_t))\le 3\delta_n\}$. Then
\[\sum_{i\in[k_t]:i\neq d^*_t(H_t)}1_{\mathcal E_{ti}}\le(k_t-1)1_{\mathcal E_t}\quad\text{and}\]
\[\sum_{i\in[k_t]:i\neq d^*_t(H_t)}1_{\mathcal E'_{ti}}\le(k_t-1)1_{\mathcal E'_t}.\]
By the small-noise assumption, $P(\mathcal E_t)\le z^\alpha$ and $P(\mathcal E'_t)\lesssim\delta_n^\alpha$. Thus
\[V^\psi_*-V^\psi(b_n\tilde h)\lesssim\frac{1}{C_\pi^T}\big(b_n^{-2}z^{-1}C_a+b_n^{-2}\delta_n^{-1}C_a+z^{1+\alpha}+\delta_n^{1+\alpha}\big).\]
We want the $z^{1+\alpha}$ term to match $\delta_n^{1+\alpha}$. Thus, letting $z=\delta_n$,
\[V^\psi_*-V^\psi(b_n\tilde h)\lesssim\frac{1}{C_\pi^T}\big(b_n^{-2}\delta_n^{-1}C_a+\delta_n^{1+\alpha}\big).\]
Since $b_n>\delta_n^{-(1+\alpha/2)}$, it follows that $\delta_n^{1+\alpha}>b_n^{-2}\delta_n^{-1}$. Thus
\[V^\psi_*-V^\psi(b_n\tilde h)\lesssim\frac{1}{C_\pi^T}(1+C_a)\delta_n^{1+\alpha}.\]
Since $\sup_{g\in\mathcal W}V^{\psi,\mathrm{rel}}(g)=V^\psi_*$ for relative-margin-based surrogates, and $V^{\psi,\mathrm{rel}}(b_n\tilde g)=V^\psi(b_n\mathrm{tran}(\tilde g))=V^\psi(b_n\tilde h)$, the proof follows.

S9.3. Proof of the auxiliary lemmas for proving Theorem 6.1

Lemma S9.3. Under Assumptions I-V, for all $t\in[T]$ and $i\in[k_t]$,
\[\max(p^*_t(H_t))-p^*_t(H_t;i)=Q^*_t(H_t,d^*_t(H_t))-Q^*_t(H_t;i).\tag{S9.5}\]
Proof of Lemma S9.3. Note that for $j\in[k_T]$, (S6.2) implies
\[p^*_T(H_T;j)=E\Big[\sum_{i=1}^T Y_i\,\Big|\,H_T,A_T=j\Big]=E[Y_T\mid H_T,A_T=j]+\sum_{i=1}^{T-1}Y_i,\]
which equals $Q^*_T(H_T,j)+\sum_{i=1}^{T-1}Y_i$. Therefore, $\max(p^*_T(H_T))-p^*_T(H_T;i)=\max(Q^*_T(H_T))-Q^*_T(H_T;i)$. Hence, the proof follows for $t=T$. For $t=1$, the proof follows from Lemma S6.13. Suppose $T>2$. For $t\in[2:T-1]$, using (S6.13) we obtain that for $i\in[k_t]$,
\[p^*_t(H_t;i)=E\Big[E\Big[\sum_{i'=1}^T Y_{i'}\prod_{j=t+1}^T\frac{1[A_j=d^*_j(H_j)]}{\pi_j(A_j\mid H_j)}\,\Big|\,H_{t+1}\Big]\,\Big|\,H_t,A_t=i\Big]\]
\[=E\Big[E\Big[\sum_{i'=1}^{t-1}Y_{i'}\prod_{j=t+1}^T\frac{1[A_j=d^*_j(H_j)]}{\pi_j(A_j\mid H_j)}\,\Big|\,H_{t+1}\Big]\,\Big|\,H_t,A_t=i\Big]+E\Big[E\Big[\sum_{i'=t}^T Y_{i'}\prod_{j=t+1}^T\frac{1[A_j=d^*_j(H_j)]}{\pi_j(A_j\mid H_j)}\,\Big|\,H_{t+1}\Big]\,\Big|\,H_t,A_t=i\Big],\]
whose last term is $Q^*_t(H_t,i)$ by Fact S11.4, and whose first term equals
\[\sum_{i'=1}^{t-1}Y_{i'}\,E\Big[E\Big[\prod_{j=t+1}^T\frac{1[A_j=d^*_j(H_j)]}{\pi_j(A_j\mid H_j)}\,\Big|\,H_{t+1}\Big]\,\Big|\,H_t,A_t=i\Big]=\sum_{i'=1}^{t-1}Y_{i'}\]
because
\[E\Big[\prod_{j=t+1}^T\frac{1[A_j=d^*_j(H_j)]}{\pi_j(A_j\mid H_j)}\,\Big|\,H_{t+1}\Big]=1\]
by (S6.10). Therefore, for $1<t<T$ and $i\in[k_t]$, $p^*_t(H_t;i)=Q^*_t(H_t,i)+\sum_{i'=1}^{t-1}Y_{i'}$. Hence, the lemma follows for $t\in[2:T-1]$.

Proof of Lemma S9.1. Without loss of generality, we assume that $C_{\phi_t}=1$ for each $t\in[T]$. Otherwise, we can replace $\phi_t$ by $\phi_t/C_{\phi_t}$ for each $t\in[T]$. Since $\phi_t$ is symmetric for each $t\in[T]$, it holds that $\Psi^*_t(\mathbf 1_{k_t})=\Psi_t(x;\mathbf 1_{k_t})$ for each $x\in\mathbb R^{k_t}$. Since $\Psi^*_t(\mathbf 1_{k_t})=C_{\phi_t}=1$, it follows that $\Psi_t(x;\mathbf 1_{k_t})=\sum_{i\in[k_t]}\phi_t(x;i)=1$ for each $x\in\mathbb R^{k_t}$.
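The normalization $\sum_{i\in[k_t]}\phi_t(x;i)=1$ holds exactly for the kernel-based surrogate of Section S8.5, since there the $\phi(x;j)$'s are the probabilities $P_K(\mathrm{pred}(x-\vec Z)=j)$. A quick Monte Carlo sketch (logistic kernel and an arbitrary score vector, chosen only for illustration):

```python
import math, random

random.seed(2)
sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))

def sample_logistic():
    u = random.random()
    return math.log(u / (1.0 - u))

def phi(x, j, n=200_000):
    # kernel-based surrogate, closed form from Lemma 4.5, logistic kernel
    k, acc = len(x), 0.0
    for _ in range(n):
        z = sample_logistic()
        prod = 1.0
        for i in range(k):
            if i != j:
                prod *= 1.0 - sigmoid(z + x[i] - x[j])
        acc += prod
    return acc / n

x = [0.8, -0.2, 0.3, 0.0]           # arbitrary score vector
total = sum(phi(x, j) for j in range(len(x)))
assert abs(total - 1.0) < 0.02      # sum_j phi(x; j) = 1 up to MC error
```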
This fact will be used repeatedly in the proof. We will also use the notation introduced in Section S6.0.1 for the proof of Theorem 4.1.

For $t\in[T]$, let us define
\[T_t=E\big[\Psi_t(f_t(H_t);p^*_t(H_t))-\Psi_t(f_t(H_t);p_t(H_t))\mid H_t\big].\]
Since $p^*_T(H_T)=p_T(H_T)$, we obtain that $T_T=0$. For $t\in[T-1]$, we obtain that $T_t$ equals
\[E\Big[\sum_{i\in[k_t]}\phi_t(f_t(H_t);i)\big(p^*_t(H_t)_i-p_t(H_t)_i\big)\,\Big|\,H_t\Big]\overset{(a)}{=}E\Big[\frac{\phi_t(f_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}\big(p^*_t(H_t;A_t)-p_t(H_t;A_t)\big)\,\Big|\,H_t\Big]\]
\[=E\Big[\frac{\phi_t(f_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}E\big[\Psi^*_{t+1}(p^*_{t+1}(H_{t+1}))-\Psi_{t+1}(f_{t+1}(H_{t+1});p_{t+1}(H_{t+1}))\mid H_t,A_t\big]\,\Big|\,H_t\Big]\]
\[=E\Big[\frac{\phi_t(f_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}E\big[\Psi^*_{t+1}(p^*_{t+1}(H_{t+1}))-\Psi_{t+1}(f_{t+1}(H_{t+1});p^*_{t+1}(H_{t+1}))\mid H_t,A_t\big]\,\Big|\,H_t\Big]\]
\[\quad+E\Big[\frac{\phi_t(f_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}E\big[\Psi_{t+1}(f_{t+1}(H_{t+1});p^*_{t+1}(H_{t+1}))-\Psi_{t+1}(f_{t+1}(H_{t+1});p_{t+1}(H_{t+1}))\mid H_t,A_t\big]\,\Big|\,H_t\Big]\]
\[\overset{(b)}{=}E\Big[\frac{\phi_t(f_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}E\Big[\sum_{i\in[k_{t+1}]}\big(\max(p^*_{t+1}(H_{t+1}))-p^*_{t+1}(H_{t+1};i)\big)\phi_{t+1}(f_{t+1}(H_{t+1});i)\,\Big|\,H_t,A_t\Big]\,\Big|\,H_t\Big]\]
\[\quad+E\Big[\frac{\phi_t(f_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}E\big[\Psi_{t+1}(f_{t+1}(H_{t+1});p^*_{t+1}(H_{t+1}))-\Psi_{t+1}(f_{t+1}(H_{t+1});p_{t+1}(H_{t+1}))\mid H_{t+1}\big]\,\Big|\,H_t\Big],\]
where (a) uses (S6.6) and (b) uses the following facts: (i) $\Psi^*_t(p)=\max(p)$ for all $p\in\mathbb R^{k_t}_{\ge 0}$ since $\phi_t$ satisfies Condition N2 with $C_{\phi_t}=1$; (ii) $\sum_{i\in[k_t]}\phi_t(x;i)=1$ for each $t\in[T]$ and $x\in\mathbb R^{k_t}$, as shown previously; and (iii) $H_{t+1}\supset\{H_t,A_t\}$. The second term on the right-hand side of the above display equals
\[E\Big[\frac{\phi_t(f_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}T_{t+1}\,\Big|\,H_t\Big].\]
Note that Lemma S9.3 implies $\max(p^*_t(H_t))-p^*_t(H_t;i)=Q^*_t(H_t,d^*_t(H_t))-Q^*_t(H_t;i)$. It then follows that for $t\in[T-1]$, $T_t$ equals
\[E\Big[\frac{\phi_t(f_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}\Big(E\Big[\sum_{i\in[k_{t+1}]}\big(\max(Q^*_{t+1}(H_{t+1}))-Q^*_{t+1}(H_{t+1};i)\big)\phi_{t+1}(f_{t+1}(H_{t+1});i)\,\Big|\,H_t,A_t\Big]+T_{t+1}\Big)\,\Big|\,H_t\Big]\]
\[=E\Big[\frac{\phi_t(f_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}\Big(\frac{\phi_{t+1}(f_{t+1}(H_{t+1});A_{t+1})}{\pi_{t+1}(A_{t+1}\mid H_{t+1})}\big(\max(Q^*_{t+1}(H_{t+1}))-Q^*_{t+1}(H_{t+1};A_{t+1})\big)+T_{t+1}\Big)\,\Big|\,H_t\Big]\tag{S9.6}\]
by (S6.6). We claim that for any $t\in[T-1]$,
\[T_t=E\Big[\sum_{j=t+1}^T\prod_{i=t}^j\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\big(\max(Q^*_j(H_j))-Q^*_j(H_j;A_j)\big)\,\Big|\,H_t\Big].\tag{S9.7}\]
Note that since $T_T=0$, (S9.6) implies
\[T_{T-1}=E\Big[\frac{\phi_{T-1}(f_{T-1}(H_{T-1});A_{T-1})}{\pi_{T-1}(A_{T-1}\mid H_{T-1})}\frac{\phi_T(f_T(H_T);A_T)}{\pi_T(A_T\mid H_T)}\big(\max(Q^*_T(H_T))-Q^*_T(H_T;A_T)\big)\,\Big|\,H_{T-1}\Big].\]
Hence, (S9.7) holds for $t=T-1$. Suppose (S9.7) holds for some $t\in[2:T-1]$. Then (S9.6) implies
\[T_{t-1}=E\Big[\sum_{j=t+1}^T\prod_{i=t-1}^j\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\big(\max(Q^*_j(H_j))-Q^*_j(H_j;A_j)\big)\,\Big|\,H_{t-1}\Big]+E\Big[\prod_{i=t-1}^t\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\big(\max(Q^*_t(H_t))-Q^*_t(H_t;A_t)\big)\,\Big|\,H_{t-1}\Big]\]
\[=E\Big[\sum_{j=t}^T\prod_{i=t-1}^j\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\big(\max(Q^*_j(H_j))-Q^*_j(H_j;A_j)\big)\,\Big|\,H_{t-1}\Big].\]
Hence, (S9.7) holds for $t-1$ as well. Therefore, by induction, (S9.7) holds for $t=1$, and
\[T_1=E\Big[\sum_{j=2}^T\prod_{i=1}^j\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\big(\max(Q^*_j(H_j))-Q^*_j(H_j;A_j)\big)\,\Big|\,H_1\Big].\]
By the definition of $T_t$,
\[E[T_1]=E\big[\Psi_1(f_1(H_1);p^*_1(H_1))-\Psi_1(f_1(H_1);p_1(H_1))\big].\]
Hence, it follows that
\[E\big[\Psi_1(f_1(H_1);p^*_1(H_1))-\Psi_1(f_1(H_1);p_1(H_1))\big]=E\Big[\sum_{j=2}^T\prod_{i=1}^j\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\big(\max(Q^*_j(H_j))-Q^*_j(H_j;A_j)\big)\Big].\]
Using Lemma S6.14, we can show that $V^\psi_*-V^\psi(f)$ equals
\[E\big[\Psi^*_1(p^*_1(H_1))-\Psi_1(f_1(H_1);p^*_1(H_1))\big]+E\big[\Psi_1(f_1(H_1);p^*_1(H_1))-\Psi_1(f_1(H_1);p_1(H_1))\big]\]
\[\overset{(a)}{=}E\big[\max(p^*_1(H_1))-\Psi_1(f_1(H_1);p^*_1(H_1))\big]+E\Big[\sum_{j=2}^T\prod_{i=1}^j\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\big(\max(Q^*_j(H_j))-Q^*_j(H_j;A_j)\big)\Big]\]
\[\overset{(b)}{=}E\Big[\sum_{i=1}^{k_1}\phi_1(f_1(H_1);i)\big(\max(p^*_1(H_1))-p^*_1(H_1)_i\big)\Big]+E\Big[\sum_{j=2}^T\prod_{i=1}^j\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\big(\max(Q^*_j(H_j))-Q^*_j(H_j;A_j)\big)\Big]\]
\[\overset{(c)}{=}E\Big[\frac{\phi_1(f_1(H_1);A_1)}{\pi_1(A_1\mid H_1)}\big(\max(Q^*_1(H_1))-Q^*_1(H_1;A_1)\big)\Big]+E\Big[\sum_{j=2}^T\prod_{i=1}^j\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\big(\max(Q^*_j(H_j))-Q^*_j(H_j;A_j)\big)\Big],\]
where (a) follows since $\phi_1$ satisfies Condition N2, (b) follows because $\phi_1$ satisfies $\sum_{i\in[k_1]}\phi_1(x;i)=1$, and (c) follows from Lemma S9.3 and (S6.6). Therefore, the proof follows.

S9.3.1. Proof of Lemma S9.2

Proof of Lemma S9.2.
We consider three cases: (i) $d^*_t(h_t)\neq 1$, $i\neq 1$; (ii) $d^*_t(h_t)=1$, $i\neq 1$; and (iii) $d^*_t(h_t)\neq 1$, $i=1$. We will show that (S9.4) holds in cases (i) and (ii). The proof for case (iii) follows similarly to that of case (ii), and is hence skipped.

Case (i): Since $\tilde h_t=\mathrm{tran}(\tilde g_t)$, for $i\ge 2$, $\tilde h_{ti}(h_t)=-\tilde g_{t,i-1}(h_t)$. Thus,
\[\tilde h_{t,d^*_t(h_t)}(h_t)-\tilde h_{ti}(h_t)=\tilde g_{t,i-1}(h_t)-\tilde g_{t,d^*_t(h_t)-1}(h_t),\]
which equals
\[\big(\tilde g_{t,i-1}(h_t)-(Q^*_t(h_t,1)-Q^*_t(h_t,i))\big)-\big(\tilde g_{t,d^*_t(h_t)-1}(h_t)-(Q^*_t(h_t,1)-Q^*_t(h_t,d^*_t(h_t)))\big)+Q^*_t(h_t,d^*_t(h_t))-Q^*_t(h_t,i)\]
\[\ge Q^*_t(h_t,d^*_t(h_t))-Q^*_t(h_t,i)-2\sup_{j\in[k_t-1]}\|\tilde g_{t,j}-(Q^*_t(\cdot,1)-Q^*_t(\cdot,1+j))\|_\infty.\]
Thus (S9.4) follows noting that our assumption on $\tilde g$ and (6.4) imply
\[\sup_{j\in[k_t-1]}\|\tilde g_{tj}-(Q^*_t(\cdot,1)-Q^*_t(\cdot,1+j))\|_\infty=\sup_{j\in[k_t-1]}\|\tilde g_{tj}-\mathrm{blip}_{tj}\|_\infty\le\delta_n.\tag{S9.8}\]
Case (ii): Since $d^*_t(h_t)=1$ and $\tilde h_t=\mathrm{tran}(\tilde g_t)$, it follows that $\tilde h_{t,d^*_t(h_t)}(h_t)=0$. However, since $i\ge 2$, $\tilde h_{ti}(h_t)=-\tilde g_{t,i-1}(h_t)$. Thus,
\[\tilde h_{t,d^*_t(h_t)}(h_t)-\tilde h_{ti}(h_t)=\tilde g_{t,i-1}(h_t)=\big(\tilde g_{t,i-1}(h_t)-(Q^*_t(h_t,1)-Q^*_t(h_t,i))\big)+Q^*_t(h_t,d^*_t(h_t))-Q^*_t(h_t,i),\]
where we used $d^*_t(h_t)=1$. However, $|\tilde g_{t,i-1}(h_t)-(Q^*_t(h_t,1)-Q^*_t(h_t,i))|\le\delta_n$ by (6.4) and (S9.8). Thus,
\[\tilde h_{t,d^*_t(h_t)}(h_t)-\tilde h_{ti}(h_t)\ge Q^*_t(h_t,d^*_t(h_t))-Q^*_t(h_t,i)-\delta_n.\]

S10. Proofs for Section 6.3: combined regret decay rate

S10.1. Proof of Theorem 6.2

Without loss of generality, we assume that $C_{\phi_t}=1$ for each $t\in[T]$. Otherwise, we can replace each $\phi_t$ by $\phi_t/C_{\phi_t}$. As shown in the proof of Lemma S9.1, $C_{\phi_t}=1$ implies $\sum_{i\in[k_t]}\phi_t(x;i)=1$ for all $x\in\mathbb R^{k_t}$. Also, as shown in the proof of Theorem 6.1, $C_{\phi_t}=1$ implies $\Psi^*_t(\mathbf 1_{k_t})=1$, which
indicates $\phi_t\le 1$ for all $t\in[T]$. These facts will be used repeatedly in our proof. Throughout this proof, we will denote $\mathrm{tran}(\tilde g)$ by $\tilde f$. We also remind the reader that $\hat f=\mathrm{tran}(\hat g)$. Since the $\phi_t$'s satisfy the conditions of Proposition 1 with positive $J$, the regret of $\hat f$ is bounded by a constant multiple of the $\psi$-regret $V^\psi_*-V^\psi(\hat f)$, which we will denote by $r_n$ from now on. Therefore, it suffices to prove the concentration bound on $r_n$. Since $\hat f$ is stochastic, $r_n$ is also a random variable. Parts of the proof of Theorem 6.2 follow arguments similar to those of Theorem 5 in Laha et al. (2024b). Therefore, we focus on the parts that differ from that proof.

First, we introduce some new functions. It will be convenient for us to write $V^\psi_*$ as $V^\psi(f)$ for some $f$. There are extended-valued scores for which the above is possible. Let us define $f^*=(f^*_1,\ldots,f^*_T)$ with $f^*_t:\mathcal H_t\mapsto\mathbb R^{k_t}$ so that
\[f^*_{ti}(H_t)=\begin{cases}\infty&\text{if }i=d^*_t(H_t)\\ 0&\text{otherwise,}\end{cases}\tag{S10.1}\]
where $d^*$ is any version of the optimal DTR. Note that we can write $f^*_t(H_t)=\infty\times e^{(d^*_t(H_t))}_{k_t}$. We define $\phi_t(f^*_t(H_t);j):=\lim_{m\to\infty}\phi_t(me^{(d^*_t(H_t))}_{k_t};j)$. Let us fix $h_t\in\mathcal H_t$. Condition N3.s implies that for $j\neq d^*_t(h_t)$,
\[\limsup_{m\to\infty}\phi_t(me^{(d^*_t(h_t))}_{k_t};j)\le\lim_{m\to\infty}C_am^{-2}=0.\]
Therefore,
\[\lim_{m\to\infty}\phi_t(me^{(d^*_t(h_t))}_{k_t};j)=0\quad\text{for all }j\neq d^*_t(h_t).\tag{S10.2}\]
Since we showed that, under our assumptions, $\sum_{i\in[k_t]}\phi_t(x;i)=1$, it follows that
\[\phi_t(x;i)=1-\sum_{j\in[k_t]:j\neq i}\phi_t(x;j)\quad\text{for all }x\in\mathbb R^{k_t},\]
implying that
\[\phi_t(me^{(d^*_t(h_t))}_{k_t};d^*_t(h_t))=1-\sum_{j\in[k_t]:j\neq d^*_t(h_t)}\phi_t(me^{(d^*_t(h_t))}_{k_t};j).\]
Hence,
\[\lim_{m\to\infty}\phi_t(me^{(d^*_t(h_t))}_{k_t};d^*_t(h_t))=1-\sum_{j\in[k_t]:j\neq d^*_t(h_t)}\lim_{m\to\infty}\phi_t(me^{(d^*_t(h_t))}_{k_t};j)=1\]
by (S10.2). Therefore, by our definition,
\[\phi_t(f^*_t(h_t);j)=1[j=d^*_t(h_t)],\quad j\in[k_t],\ t\in[T].\tag{S10.3}\]
With the above definition of the $\phi_t(f^*_t(H_t);j)$'s, $V^\psi(f^*)$ is well-defined.

Lemma S10.1. Let $f^*$ be as defined in (S10.1). Then, under the conditions of Theorem 6.2, $V^\psi(f^*)=V^\psi_*=V_*$ when $C_{\phi_t}=1$ for all $t\in[T]$.

Proof of Lemma S10.1.
When $C_{\phi_t}=1$ for all $t\in[T]$, Lemma S6.14 implies
\[V^\psi_*=V_*=E\Big[\sum_{i=1}^T Y_i\prod_{j=1}^T\frac{1[A_j=d^*_j(H_j)]}{\pi_j(A_j\mid H_j)}\Big]=E\Big[\sum_{i=1}^T Y_i\prod_{j=1}^T\frac{\phi_j(f^*_j(H_j);A_j)}{\pi_j(A_j\mid H_j)}\Big],\]
which equals $V^\psi(f^*)$. Here, the last step follows from (S10.3).

For any $u,u'\in\mathcal F$, define
\[U_{u,u'}(D):=\sum_{i=1}^T Y_i\,\frac{\prod_{t=1}^T\phi_t(u_t(H_t);A_t)-\prod_{t=1}^T\phi_t(u'_t(H_t);A_t)}{\prod_{t=1}^T\pi_t(A_t\mid H_t)}.\tag{S10.4}\]
The proof of Lemma 6.1 shows that
\[r_n=V^\psi_*-V^\psi(\hat f)\le\text{Approximation error}+\text{Estimation error}+\mathrm{Opt}_n,\]
where the approximation error is $\sup_{g\in\mathcal W}V^{\psi,\mathrm{rel}}(g)-V^{\psi,\mathrm{rel}}(\tilde g)=V^\psi_*-V^\psi(\mathrm{tran}(\tilde g))=V^\psi_*-V^\psi(\tilde f)$, the estimation error is $(V^{\psi,\mathrm{rel}}-\hat V^{\psi,\mathrm{rel}})(\tilde g)-(V^{\psi,\mathrm{rel}}-\hat V^{\psi,\mathrm{rel}})(\hat g)=(P_n-P)[\mathcal L(D;\hat g)-\mathcal L(D;\tilde g)]$, and $\mathrm{Opt}_n$ is the optimization error. Let us denote the approximation error by $\mathrm{App}_n$, and denote the absolute value of the estimation error by $\mathrm{Est}_n$, i.e., $\mathrm{Est}_n=|(P_n-P)[\mathcal L(D;\hat g)-\mathcal L(D;\tilde g)]|$. Thus $r_n\le\mathrm{App}_n+\mathrm{Est}_n+\mathrm{Opt}_n$.

Since for any $i\in[k_t]$, $\Gamma_t(x;i)=\phi_t(\mathrm{tran}(x);i)$ by Definition 4.1, it is not hard to see that
\[\mathcal L(D;\hat g)-\mathcal L(D;\tilde g)=U_{\mathrm{tran}(\hat g),\mathrm{tran}(\tilde g)}(D)=U_{\hat f,\tilde f}(D)\]
because $\tilde f=\mathrm{tran}(\tilde g)$ and $\hat f=\mathrm{tran}(\hat g)$. Thus $\mathrm{Est}_n=|(P_n-P)U_{\hat f,\tilde f}|$. Note that $\delta_n^{-(1+\alpha/2)}=n^{1/2}(\rho_n\log I_n)^{-1/2}$, which is less than $b_n$ by our assumption on $b_n$. Therefore, $\mathrm{App}_n$ can be bounded directly by applying Theorem 6.1, which yields
$\mathrm{App}_n\lesssim\delta_n^{1+\alpha}$. The non-trivial step is bounding $\mathrm{Est}_n$. To this end, we first need a bound on $\|U_{f^*,f}\|_{P,2}$, which can be controlled using $r_n$ and $\mathrm{App}_n$.

Lemma S10.2. Under the setup of Theorem 6.2, there exists a constant $C>0$ depending only on $P$ so that
\[\|U_{f^*,f}\|^2_{P,2}\le C\big(V^\psi(f^*)-V^\psi(f)\big)^{\alpha/(1+\alpha)},\]
where $\alpha$ is as in Assumption 1.

The above lemma is proved in Section S10.2.1. It implies
\[\|U_{f^*,\tilde f}\|^2_{P,2}\le C\big(V^\psi(f^*)-V^\psi(\tilde f)\big)^{\alpha/(1+\alpha)}=C\big(V^\psi_*-V^\psi(\tilde f)\big)^{\alpha/(1+\alpha)}\]
because $V^\psi(f^*)=V^\psi_*$ by Lemma S10.1. However, since $V^\psi_*-V^\psi(\tilde f)=\mathrm{App}_n\lesssim\delta_n^{1+\alpha}$, we have $\|U_{f^*,\tilde f}\|^2_{P,2}\lesssim C\delta_n^\alpha$. Thus
\[\|U_{\hat f,\tilde f}\|_{P,2}\le\|U_{f^*,\tilde f}\|_{P,2}+\|U_{f^*,\hat f}\|_{P,2}\overset{(a)}{\lesssim}\delta_n^{\alpha/2}+r_n^{\alpha/\{2(1+\alpha)\}}:=\epsilon_n,\tag{S10.5}\]
where (a) follows since $\|U_{f^*,\tilde f}\|^2_{P,2}\lesssim C\delta_n^\alpha$ and, by Lemma S10.2,
\[\|U_{f^*,\hat f}\|^2_{P,2}\le C\big(V^\psi(f^*)-V^\psi(\hat f)\big)^{\alpha/(1+\alpha)}=C\big(V^\psi_*-V^\psi(\hat f)\big)^{\alpha/(1+\alpha)}\lesssim r_n^{\alpha/(1+\alpha)}.\]
Next, define the set
\[\mathcal G_n(\epsilon)=\big\{U_{f,\tilde f}:f=\mathrm{tran}(g),\ g\in\mathcal U_n,\ \|U_{f,\tilde f}\|_{P,2}\le\epsilon\big\}.\]
Note that (S10.5) implies $U_{\hat f,\tilde f}\in\mathcal G_n(\epsilon_n)$. Therefore, we can bound $\mathrm{Est}_n$ by
\[\mathrm{Est}_n\le\sup_{u\in\mathcal G_n(\epsilon_n)}|(P_n-P)u|=E\Big[\sup_{u\in\mathcal G_n(\epsilon_n)}|(P_n-P)u|\Big]+\underbrace{\sup_{u\in\mathcal G_n(\epsilon_n)}|(P_n-P)u|-E\Big[\sup_{u\in\mathcal G_n(\epsilon_n)}|(P_n-P)u|\Big]}_{\text{deviation term}}.\]
Bounding the deviation term requires Talagrand's inequality, and this part is exactly similar to Laha et al. (2024b) if we plug our $\mathcal G_n(\epsilon_n)$ into the proof of Step 3 of Theorem 5 therein. From the symmetrization inequality (cf. Theorem 2.1 of Koltchinskii, 2009), it follows that
\[E\Big[\sup_{u\in\mathcal G_n(\epsilon_n)}|(P_n-P)u|\Big]\lesssim E\Big[\sup_{u\in\mathcal G_n(\epsilon_n)}|R_n(u)|\Big],\]
where $R_n$ is the Rademacher process. The number $E[\sup_{u\in\mathcal G_n(\epsilon_n)}|R_n(u)|]$ is called the Rademacher complexity of the class $\mathcal G_n(\epsilon_n)$. Our next step is to show that the bracketing entropy $N_{[\,]}(\epsilon,\mathcal G_n(\epsilon_n),L_2(P_n))$ of $\mathcal G_n(\epsilon_n)$ is of the order $(I_n/\epsilon)^{\rho'_n}$. It can then be used to derive an upper bound on the Rademacher complexity of the class $\mathcal G_n(\epsilon_n)$ using the following fact, given by (3.12), pp. 40, of Koltchinskii (2009).

Fact S10.1 (Fact from Koltchinskii (2009)). Suppose $\mathcal G$ is a function class with envelope $F$ such that $\|F\|_\infty\le C$, where $C>0$.
Further suppose there exists $\rho_n > 0$ such that $N_{[\,]}(\epsilon, \mathcal{G}, L_2(\mathbb{P}_n)) \lesssim (I_{\mathcal{G}}/\epsilon)^{\rho_n}$. Let $\sigma^2 = \sup_{u \in \mathcal{G}} P u^2$. Suppose there exists $c > 0$ such that $n\sigma^2 > c$. Then
\[ E[\|R_n\|_{\mathcal{G}}] \lesssim \max\left\{ \sigma \sqrt{\frac{\rho_n \log(I_{\mathcal{G}}/\sigma)}{n}},\ \frac{\rho_n \log(I_{\mathcal{G}}/\sigma)}{n} \right\}. \]
By Lemma 9.1 of Wellner (2005), $N_{[\,]}(\epsilon, \mathcal{G}_n(\epsilon_n), L_2(P)) \lesssim N(\epsilon/2, \mathcal{G}_n(\epsilon_n), \|\cdot\|_\infty)$ for any measure $P$. On the other hand, since $\tilde f$ is fixed, for any $\epsilon > 0$ the covering number $N(\epsilon, \mathcal{G}_n(\epsilon_n), \|\cdot\|_\infty)$ is not larger than the covering number of the function class
\[ \mathcal{C}_1 = \left\{ D \mapsto \Big(\sum_{i=1}^T Y_i\Big) \frac{\prod_{t=1}^T \phi_t(f_t(H_t); a_t)}{\prod_{t=1}^T \pi_t(a_t \mid H_t)} \ \Big|\ f = \mathrm{tran}(g),\ g \in \mathcal{U}_n \right\}. \]
Note that $D \mapsto \sum_{i=1}^T Y_i / \prod_{t=1}^T \pi_t(a_t \mid H_t)$ is a fixed function, and it is bounded because the $Y_i$'s are bounded by a constant, say $C_{\max}$, by Assumption IV, and the $\pi_t$'s are bounded below by $C_\pi > 0$ by Assumption I. Since the $Y_t$'s are also positive by Assumption V, this function takes values in $[0, TC_{\max}/C_\pi^T]$. Let us denote $c = TC_{\max}/C_\pi^T$. Letting
\[ \mathcal{C}_2 = \left\{ D \mapsto \prod_{t=1}^T \phi_t(f_t(H_t); A_t) \ \Big|\ f = \mathrm{tran}(g),\ g \in \mathcal{U}_n \right\}, \]
we note that
\[ N(\epsilon, \mathcal{C}_1, \|\cdot\|_\infty) \le N(\epsilon/c, \mathcal{C}_2, \|\cdot\|_\infty). \tag{S10.6} \]
Thus, to apply Fact S10.1, it suffices to show that there exists $\rho'_n > 0$ so that $N(\epsilon, \mathcal{C}_2, \|\cdot\|_\infty) \lesssim (I_n/\epsilon)^{\rho'_n}$ for all $\epsilon > 0$. Recall that $g_t \in \mathcal{U}_{tn}^{k_t-1}$ and $g_{ti} \in \mathcal{U}_{tn}$ for each $i \in [k_t]$. For a fixed $(i, t)$ pair with $t \in [T]$ and $i \in [k_t]$, we define the
class of functions
\[ \mathcal{C}(i, t) := \left\{ u : \mathcal{H}_t \mapsto \mathbb{R} \ \Big|\ u(h_t) = \phi_t(f_t(h_t); i),\ f_t = \mathrm{tran}(g_t),\ g_t \in \mathcal{U}_{tn}^{k_t-1} \right\}. \]
We will apply Lemma S10.3 in Section S10.2 with $\mathcal{G} = \mathcal{C}(i, t)$, $\mathcal{C} = \mathcal{U}_{tn}$, $\mathcal{X} = \mathcal{H}_t$, $k = k_t$, $\phi = \phi_t$, and $u(\cdot) = \phi_t(f_t(\cdot); i)$ to obtain its covering number. Lemma S10.3 applies here because, by Condition 1, for fixed $i \in [k_t]$ the function $x_t \mapsto \phi_t(x_t; i)$ is globally Lipschitz. Lemma S10.3 yields
\[ N(\epsilon, \mathcal{C}(i,t), \|\cdot\|_\infty) \lesssim N(\epsilon/(C\sqrt{k_t}), \mathcal{U}_{tn}, \|\cdot\|_\infty)^{k_t-1} \le N(\epsilon/(C\sqrt{k_t}), \mathcal{U}_{tn}, \|\cdot\|_\infty)^{k_t}. \tag{S10.7} \]
Now consider the function class
\[ \mathcal{C}(t) := \left\{ u : \mathcal{H}_t \times [k_t] \mapsto \mathbb{R} \ \Big|\ u(h_t, i) = \phi_t(f_t(h_t); i) \text{ for all } h_t \in \mathcal{H}_t,\ i \in [k_t], \text{ where } f_t = \mathrm{tran}(g_t),\ g_t \in \mathcal{U}_{tn}^{k_t-1} \right\}. \]
We now use Lemma S10.4 in Section S10.2 to bound the covering number of $\mathcal{C}(t)$. To this end, we take $\mathcal{X} = \mathcal{H}_t$, $\mathcal{G} = \mathcal{C}(t)$, $u_a = \phi_t(f_t(\cdot); a)$, $\mathcal{C}_a = \mathcal{C}(a, t)$, and $\mathcal{A} = [k_t]$ to obtain
\[ N(\epsilon, \mathcal{C}(t), \|\cdot\|_\infty) \lesssim N(\epsilon, \mathcal{C}(i,t), \|\cdot\|_\infty)^{k_t}. \tag{S10.8} \]
Now note that the functions in $\mathcal{C}_2$ are $T$-fold products of functions from the function classes $\mathcal{C}(1), \ldots, \mathcal{C}(T)$. Moreover, the functions in $\mathcal{C}(t)$ are positive for each $t \in [T]$ because the $\phi_t$'s are positive. We also showed that the $\phi_t$'s are uniformly bounded by one; hence the functions in $\mathcal{C}(t)$ are also bounded by one. Therefore,
\[ N(\epsilon, \mathcal{C}_2, \|\cdot\|_\infty) \lesssim \prod_{t=1}^T N(\epsilon, \mathcal{C}(t), \|\cdot\|_\infty). \tag{S10.9} \]
Combining (S10.7), (S10.8), and (S10.9) with (6.6), we obtain
\[ N(\epsilon, \mathcal{C}_2, \|\cdot\|_\infty) \lesssim \prod_{t=1}^T N(\epsilon, \mathcal{C}(t), \|\cdot\|_\infty) \le \prod_{t=1}^T \left(\frac{I_n C\sqrt{k_t}}{\epsilon}\right)^{\rho_n k_t^2} \lesssim \left(\frac{I_n C\sqrt{K}}{\epsilon}\right)^{\rho_n \sum_{t=1}^T k_t^2}, \]
where $K = \sum_{t=1}^T k_t$. Since the $k_t$'s, $T$, and $C$ are finite,
\[ \left(\frac{I_n C\sqrt{K}}{\epsilon}\right)^{\rho_n \sum_{t=1}^T k_t^2} \lesssim \left(\frac{I_n}{\epsilon}\right)^{\rho_n \sum_{t=1}^T k_t^2} = \left(\frac{I_n}{\epsilon}\right)^{\rho'_n}, \]
where $\rho'_n = \rho_n \sum_{t=1}^T k_t^2$. Since the $k_t$'s are finite integers, $\liminf_n \rho_n > 0$, $\rho_n \log I_n = o(n)$, and $\liminf_n \rho_n \log I_n > 0$, it holds that $\liminf_n \rho'_n > 0$, $\rho'_n \log I_n = o(n)$, and $\liminf_n \rho'_n \log I_n > 0$. Therefore, (S10.6) implies $N(\epsilon, \mathcal{C}_1, \|\cdot\|_\infty) \lesssim (I_n/\epsilon)^{\rho'_n}$, and hence we have established $N(\epsilon, \mathcal{G}_n(\epsilon_n), \|\cdot\|_\infty) \lesssim (I_n/\epsilon)^{\rho'_n}$. By Lemma 9.1 of Wellner (2005), $N_{[\,]}(\epsilon, \mathcal{G}_n(\epsilon_n), L_2(Q)) \lesssim N(\epsilon/2, \mathcal{G}_n(\epsilon_n), \|\cdot\|_\infty)$ for any measure $Q$. In particular, taking $Q = \mathbb{P}_n$, we obtain $N_{[\,]}(\epsilon, \mathcal{G}_n(\epsilon_n), L_2(\mathbb{P}_n)) \lesssim (I_n/\epsilon)^{\rho'_n}$.
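Since the entropy bound above is exactly what feeds into the maximum in Fact S10.1, a purely numerical sketch can show how the two terms of that maximum trade off as the radius $\sigma$ shrinks. All numeric values below are hypothetical and chosen only for illustration; constants are suppressed as in the fact itself.

```python
import math

def fact_s10_1_bound(sigma, rho, I, n):
    """Evaluate the right-hand side of Fact S10.1 (constants suppressed):
    max( sigma * sqrt(rho * log(I/sigma) / n),  rho * log(I/sigma) / n )."""
    t = rho * math.log(I / sigma) / n
    return max(sigma * math.sqrt(t), t)

# hypothetical values, not taken from the paper
n, rho, I = 10_000, 5.0, 50.0
moderate = fact_s10_1_bound(0.5, rho, I, n)   # here the sub-Gaussian (sqrt) term dominates
tiny = fact_s10_1_bound(1e-6, rho, I, n)      # here the 1/n (linear) term dominates
```

For moderate $\sigma$ the term $\sigma\sqrt{\rho\log(I/\sigma)/n}$ dominates, while for very small $\sigma$ the $\rho\log(I/\sigma)/n$ term takes over; this trade-off is why the bound in Fact S10.1 is stated as a maximum.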
Therefore, Fact S10.1 can be applied to $\mathcal{G}_n(\epsilon_n)$ with $\rho_n$ replaced by $\rho'_n$. The rest of the proof is similar to the proof of Step 2 of Theorem 5 of Laha et al. (2024b), and hence omitted.

S10.2. Proof of auxiliary lemmas for proving Theorem 6.2

Lemma S10.3. Let $\mathcal{X}$ be a space, and let $\mathcal{C}$ be a space of functions mapping $\mathcal{X}$ to $\mathbb{R}$. Let $\phi$ be a fixed Lipschitz map with constant $C > 0$ from the metric space $(\mathbb{R}^{k-1}, l_2)$ to $\mathbb{R}$ endowed with the usual absolute-value metric. Define
\[ \mathcal{G} = \left\{ u : \mathcal{X} \mapsto \mathbb{R} \ \big|\ u = \phi(0, g_1, \ldots, g_{k-1}),\ g_i \in \mathcal{C} \text{ for all } i \in [k-1] \right\}. \]
Further suppose the covering number $N(\epsilon, \mathcal{C}, \|\cdot\|_\infty)$ is finite for each $\epsilon > 0$. Then
\[ N(\epsilon, \mathcal{G}, \|\cdot\|_\infty) \le N(\epsilon/(C\sqrt{k-1}), \mathcal{C}, \|\cdot\|_\infty)^{k-1}. \]

Proof of Lemma S10.3. Any $u \in \mathcal{G}$ is of the form $u = \phi(0, g_1, \ldots, g_{k-1})$, where the $g_i$'s are in $\mathcal{C}$. Suppose $N = N(\epsilon/(C\sqrt{k-1}), \mathcal{C}, \|\cdot\|_\infty)$. Then there exists an $\epsilon/(C\sqrt{k-1})$-covering $\Theta$ of $\mathcal{C}$ of size $N$. Suppose $\{\mathcal{A}_1, \ldots, \mathcal{A}_{k-1}\} \subset \Theta$ is such that $\|\mathcal{A}_i - g_i\|_\infty < \epsilon/(C\sqrt{k-1})$ for all $i \in [k-1]$. Let us denote $u_{\mathrm{cov}} = \phi(0, \mathcal{A}_1, \ldots, \mathcal{A}_{k-1})$. For any $x \in \mathcal{X}$,
\[ |u(x) - u_{\mathrm{cov}}(x)| = |\phi(0, g_1(x), \ldots, g_{k-1}(x)) - \phi(0, \mathcal{A}_1(x), \ldots, \mathcal{A}_{k-1}(x))| \le C\sqrt{\sum_{i=1}^{k-1}(g_i(x) - \mathcal{A}_i(x))^2} \]
because $\phi$ is Lipschitz with constant $C$. Thus
\[ \|u - u_{\mathrm{cov}}\|_\infty \le C\sqrt{k-1} \sup_{1 \le i \le k-1} \|g_i - \mathcal{A}_i\|_\infty \le \epsilon. \]
Therefore,
\[ \mathcal{G}_\Theta = \left\{ g : \mathcal{X} \mapsto \mathbb{R} \ \big|\ g = \phi(0, \mathcal{A}_1, \ldots, \mathcal{A}_{k-1}),\ \mathcal{A}_i \in \Theta \text{ for all } i \in [k-1] \right\} \]
is an $\epsilon$-covering of $\mathcal{G}$. Note that the cardinality of $\mathcal{G}_\Theta$ is $N^{k-1}$. Thus, the proof follows.

Lemma S10.4. Suppose $\mathcal{X}$ is a space and $\mathcal{A}$ is a finite set. For each $a \in \mathcal{A}$, let $\mathcal{C}_a$ be a space of functions mapping $\mathcal{X}$ to $\mathbb{R}$. Let us define
\[ \mathcal{G} = \left\{ f : \mathcal{X} \times \mathcal{A} \mapsto \mathbb{R} \ \big|\ f(x, a) = u_a(x) \text{ where } u_a \in \mathcal{C}_a \right\}. \]
Then
\[ N(\epsilon, \mathcal{G}, \|\cdot\|_\infty) \le \prod_{a \in \mathcal{A}} N(\epsilon, \mathcal{C}_a, \|\cdot\|_\infty). \]

Proof of Lemma S10.4. We will construct an $\epsilon$-covering of $\mathcal{G}$. For each $a \in \mathcal{A}$, let $\Theta_a$ be a minimal $\epsilon$-covering of $\mathcal{C}_a$, so that its cardinality $N_a$ satisfies $N_a = N(\epsilon, \mathcal{C}_a, \|\cdot\|_\infty)$. Every $f \in \mathcal{G}$ is of the form
\[ f(x, i) = \sum_{a \in \mathcal{A}} u_a(x) 1[a = i] \quad \text{for all } x \in \mathcal{X},\ i \in \mathcal{A}, \]
where the $u_a$'s are some functions satisfying $u_a \in \mathcal{C}_a$ for each $a \in \mathcal{A}$. There exist $\mathcal{A}_a \in \Theta_a$ for each $a \in \mathcal{A}$ so that $\|\mathcal{A}_a - u_a\|_\infty \le \epsilon$. Then $f_{\mathrm{cover}}(x, i) = \sum_{a \in \mathcal{A}} \mathcal{A}_a(x) 1[i = a]$ satisfies
\[ \|f_{\mathrm{cover}} - f\|_\infty \le \sup_{a \in \mathcal{A}} \|\mathcal{A}_a - u_a\|_\infty \le \epsilon. \]
Therefore,
\[ \left\{ f : \mathcal{X} \times \mathcal{A} \mapsto \mathbb{R} \ \big|\ f(x, i) = \sum_{a \in \mathcal{A}} \mathcal{A}_a(x) 1[i = a], \text{ where } \mathcal{A}_a \in \Theta_a \text{ for each } a \in \mathcal{A} \right\} \]
is an $\epsilon$-cover of $\mathcal{G}$. The cardinality of the above set is bounded by $\prod_{a \in \mathcal{A}} N_a$, which completes the proof of the current lemma.

S10.2.1. Proof of Lemma S10.2

Proof of Lemma S10.2. We will first show that for any $f \in \mathcal{F}$,
\[ \|U_{f^*,f}\|_{P,1} \le C\left(V^\psi(f^*) - V^\psi(f)\right)^{\alpha/(1+\alpha)}. \]
Note that $\|U_{f^*,f}\|_{P,1}$ equals
\[ \|U_{f^*,f}\|_{P,1} = E\left[ \frac{\sum_{i=1}^T Y_i}{\prod_{i=1}^T \pi_i(A_i \mid H_i)} \left| \prod_{t=1}^T \phi_t(f^*_t(H_t); A_t) - \prod_{t=1}^T \phi_t(f_t(H_t); A_t) \right| \right], \]
where we used the fact that the $Y_i$'s and $\pi_i$'s are positive. Now we will use the following lemma, which is proved in Section S10.2.3.

Lemma S10.5. Under the setup of Theorem 6.2, any $f \equiv (f_1, \ldots, f_T) \in \mathcal{F}$ satisfies
\[ \prod_{i=1}^T \phi_i(f^*_i(H_i); A_i) - \prod_{i=1}^T \phi_i(f_i(H_i); A_i) = \sum_{t \in [T]} \left( \phi_t(f^*_t(H_t); A_t) - \phi_t(f_t(H_t); A_t) \right) \prod_{i=1}^{t-1} \phi_i(f_i(H_i); A_i) \prod_{i=t+1}^T \phi_i(f^*_i(H_i); A_i), \]
where products over $\prod_{i=1}^0$ or $\prod_{i=T+1}^T$ are taken to be one.
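Lemma S10.5 is a telescoping identity for a difference of products. A quick numerical check with synthetic factors (random numbers standing in for the $\phi$-values, purely for illustration) confirms the algebra:

```python
import random

random.seed(0)
T = 5
# a[i] plays phi_i(f*_i(H_i); A_i), b[i] plays phi_i(f_i(H_i); A_i)
a = [random.uniform(0.1, 1.0) for _ in range(T)]
b = [random.uniform(0.1, 1.0) for _ in range(T)]

def prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

# difference of products vs. the stagewise decomposition of Lemma S10.5:
# at stage t the product switches from the f-factors (i < t) to the f*-factors (i > t)
lhs = prod(a) - prod(b)
rhs = sum((a[t] - b[t]) * prod(b[:t]) * prod(a[t + 1:]) for t in range(T))
assert abs(lhs - rhs) < 1e-12
```

The identity holds for arbitrary real factors; positivity and boundedness of the $\phi_t$'s are only needed later, when the decomposition is integrated.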
Since the $\phi_t$'s are positive, using Lemma S10.5, we obtain
\[ E[|U_{f^*,f}|] = E\left[ \frac{\sum_{i=1}^T Y_i}{\prod_{i=1}^T \pi_i(A_i \mid H_i)} \left| \sum_{t \in [T]} \left( \phi_t(f^*_t(H_t); A_t) - \phi_t(f_t(H_t); A_t) \right) \prod_{i=1}^{t-1} \phi_i(f_i(H_i); A_i) \prod_{i=t+1}^T \phi_i(f^*_i(H_i); A_i) \right| \right], \]
where the products over $\prod_{i=1}^0$ and $\prod_{i=T+1}^T$ are taken to be one. By the triangle inequality, the above expression is bounded above by
\[ \sum_{t \in [T]} E\left[ \frac{\left(\sum_{i=1}^T Y_i\right) \prod_{i=1}^{t-1} \phi_i(f_i(H_i); A_i) \prod_{i=t+1}^T \phi_i(f^*_i(H_i); A_i)}{\prod_{i=1}^T \pi_i(A_i \mid H_i)} \left| \phi_t(f^*_t(H_t); A_t) - \phi_t(f_t(H_t); A_t) \right| \right] \]
\[ = \sum_{t \in [T]} E\left[ E\left[ \frac{\sum_{i=1}^T Y_i \prod_{i=t+1}^T \phi_i(f^*_i(H_i); A_i)}{\prod_{i=t+1}^T \pi_i(A_i \mid H_i)} \,\Big|\, H_t, A_t \right] \frac{\left| \phi_t(f^*_t(H_t); A_t) - \phi_t(f_t(H_t); A_t) \right| \prod_{i=1}^{t-1} \phi_i(f_i(H_i); A_i)}{\prod_{i=1}^t \pi_i(A_i \mid H_i)} \right]. \]
Combining (S10.3), which gives $\phi_i(f^*_i(H_i); A_i) = 1[A_i = d^*_i(H_i)]$ for all $i \in [T]$, with Fact S11.4, we obtain that
\[ E\left[ \frac{\left(\sum_{i=1}^T Y_i\right) \prod_{i=t+1}^T \phi_i(f^*_i(H_i); A_i)}{\prod_{i=t+1}^T \pi_i(A_i \mid H_i)} \,\Big|\, H_t, A_t \right] = Q^*_t(H_t, A_t). \]
Therefore, $\|U_{f^*,f}\|_{P,1}$ is bounded by
\[ \sum_{t \in [T]} E\left[ \frac{\prod_{i=1}^{t-1} \phi_i(f_i(H_i); A_i)}{\prod_{i=1}^{t-1} \pi_i(A_i \mid H_i)} \, \frac{Q^*_t(H_t, A_t) \left| \phi_t(f^*_t(H_t); A_t) - \phi_t(f_t(H_t); A_t) \right|}{\pi_t(A_t \mid H_t)} \right] \]
\[ \overset{(a)}{=} \sum_{t \in [T]} E\left[ \frac{\prod_{i=1}^{t-1} \phi_i(f_i(H_i); A_i)}{\prod_{i=1}^{t-1} \pi_i(A_i \mid H_i)} \, E\left[ \frac{Q^*_t(H_t, A_t) \left| \phi_t(f^*_t(H_t); A_t) - \phi_t(f_t(H_t); A_t) \right|}{\pi_t(A_t \mid H_t)} \,\Big|\, H_t \right] \right] \]
\[ \overset{(b)}{=} \sum_{t \in [T]} E\left[ \frac{\prod_{i=1}^{t-1} \phi_i(f_i(H_i); A_i)}{\prod_{i=1}^{t-1} \pi_i(A_i \mid H_i)} \sum_{i \in [k_t]} Q^*_t(H_t, i) \left| \phi_t(f^*_t(H_t); i) - \phi_t(f_t(H_t); i) \right| \right], \tag{S10.10} \]
where step (a) follows because $(H_i, A_i) \subset H_t$ for any $i \in [t-1]$ and all $t \in [2:T]$, and step (b) follows because
\[ E\left[ \frac{Q^*_t(H_t, A_t) \left| \phi_t(f^*_t(H_t); A_t) - \phi_t(f_t(H_t); A_t) \right|}{\pi_t(A_t \mid H_t)} \,\Big|\, H_t \right] = \sum_{i \in [k_t]} Q^*_t(H_t, i) \left| \phi_t(f^*_t(H_t); i) - \phi_t(f_t(H_t); i) \right|, \]
which is obtained by (S6.6).

S10.2.2. Lower bound on $V^\psi(f^*) - V^\psi(f)$

Since $V^\psi_* = V^\psi(f^*)$ by Lemma S10.1, $V^\psi(f^*) - V^\psi(f) = V^\psi_* - V^\psi(f)$, which, by Lemma S9.1, equals
\[ E\left[ \sum_{t \in [T]} \prod_{i=1}^{t-1} \frac{\phi_i(f_i(H_i); A_i)}{\pi_i(A_i \mid H_i)} \sum_{i \in [k_t]: i \ne d^*_t(H_t)} \left( Q^*_t(H_t; d^*_t(H_t)) - Q^*_t(H_t; i) \right) \phi_t(f_t(H_t); i) \right]. \]
Equation (S10.3) implies $\phi_t(f^*_t(H_t); i) = 0$ for $i \ne d^*_t(H_t)$. Therefore,
\[ V^\psi(f^*) - V^\psi(f) = \sum_{t \in [T]} E\left[ \prod_{i=1}^{t-1} \frac{\phi_i(f_i(H_i); A_i)}{\pi_i(A_i \mid H_i)} \sum_{i \in [k_t]: i \ne d^*_t(H_t)} \left( Q^*_t(H_t; d^*_t(H_t)) - Q^*_t(H_t; i) \right) \left( \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right) \right]. \]
Note that the integrand is always nonnegative because, for $i \ne d^*_t(H_t)$,
\[ \left( Q^*_t(H_t; d^*_t(H_t)) - Q^*_t(H_t; i) \right) \left( \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right) \ge 0. \]
Let us fix $z > 0$ and denote $\mathcal{E}_t = \{ h_t \in \mathcal{H}_t : \mu(Q^*_t(h_t)) \le z \}$. Let us denote the random variables $1[H_t \in \mathcal{E}_t]$ and $1[H_t \in \mathcal{E}_t^c]$ by $1_{\mathcal{E}_t}$ and $1_{\mathcal{E}_t^c}$, respectively. Then $V^\psi(f^*) - V^\psi(f)$ is bounded below by
\[ V^\psi(f^*) - V^\psi(f) \ge \sum_{t \in [T]} E\left[ \prod_{i=1}^{t-1} \frac{\phi_i(f_i(H_i); A_i)}{\pi_i(A_i \mid H_i)} \sum_{i \in [k_t]: i \ne d^*_t(H_t)} \left( Q^*_t(H_t; d^*_t(H_t)) - Q^*_t(H_t; i) \right) \left( \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right) 1_{\mathcal{E}_t^c} \right]. \tag{S10.11} \]
By Assumption IV, $Y_t \le C_{\max}$ for some $C_{\max} > 0$ for all $t \in [T]$. Thus $Q^*_t(H_t; i) + Q^*_t(H_t; d^*_t(H_t)) \le 2C_{\max}$. Hence, for $H_t \in \mathcal{E}_t^c$ and $i \ne d^*_t(H_t)$,
\[ Q^*_t(H_t; d^*_t(H_t)) - Q^*_t(H_t; i) \ge \mu(Q^*_t(H_t)) > z \overset{(a)}{\ge} z \, \frac{Q^*_t(H_t; i) + Q^*_t(H_t; d^*_t(H_t))}{2C_{\max}}, \]
where (a) follows because $\{Q^*_t(H_t; i) + Q^*_t(H_t; d^*_t(H_t))\}/(2C_{\max}) \le 1$. When $i \ne d^*_t(H_t)$, $\phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) = \phi_t(f_t(H_t); i) \ge 0$, which implies that if $i \ne d^*_t(H_t)$ and $H_t \in \mathcal{E}_t^c$, then
\[ \left( Q^*_t(H_t; d^*_t(H_t)) - Q^*_t(H_t; i) \right) \left( \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right) \ge \frac{z}{2C_{\max}} \left( Q^*_t(H_t; d^*_t(H_t)) + Q^*_t(H_t; i) \right) \left( \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right). \tag{S10.12} \]
However,
\[ \sum_{i \in [k_t]: i \ne d^*_t(H_t)} \left( Q^*_t(H_t; d^*_t(H_t)) + Q^*_t(H_t; i) \right) \left( \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right) \overset{(a)}{=} Q^*_t(H_t; d^*_t(H_t)) \sum_{i \in [k_t]: i \ne d^*_t(H_t)} \phi_t(f_t(H_t); i) + \sum_{i \in [k_t]: i \ne d^*_t(H_t)} Q^*_t(H_t; i) \left( \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right), \tag{S10.13} \]
where in step (a) we used the fact that $\phi_t(f^*_t(H_t); i) = 0$ for all $i \ne d^*_t(H_t)$, which follows from (S10.3).
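Once $\phi_t(f^*_t(H_t); \cdot)$ puts mass one on $d^*_t(H_t)$ and $\phi_t(f_t(H_t); \cdot)$ sums to one over the arms, the manipulation starting from (S10.13) amounts to the identity $\sum_{i \ne d}(Q_d + Q_i)\phi_i = \sum_i Q_i\,|\phi_i - 1[i = d]|$. A small numerical check with synthetic stand-ins for $Q^*_t$ and $\phi_t$ (the values below are made up, purely illustrative) verifies it:

```python
import random

random.seed(1)
k, d = 4, 2                        # hypothetical number of arms and optimal arm index
Q = [random.uniform(0.0, 2.0) for _ in range(k)]        # plays Q*_t(H_t; i)
w = [random.random() for _ in range(k)]
phi = [x / sum(w) for x in w]                           # plays phi_t(f_t(H_t); i); sums to one
phi_star = [1.0 if i == d else 0.0 for i in range(k)]   # phi_t(f*_t(H_t); i) under (S10.3)

# left side: the sum over suboptimal arms appearing in (S10.13)
lhs = sum((Q[d] + Q[i]) * (phi[i] - phi_star[i]) for i in range(k) if i != d)
# right side: the absolute-value form the proof compares it against
rhs = sum(Q[i] * abs(phi[i] - phi_star[i]) for i in range(k))
assert abs(lhs - rhs) < 1e-12
```

In fact the two sides agree exactly here; the proof only needs the "$\ge$" direction, which therefore holds with room to spare.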
Since $\sum_{i \in [k_t]} \phi_t(x; i) = 1$ for all $x \in \mathbb{R}^{k_t}$, the right-hand side of (S10.13) equals
\[ Q^*_t(H_t; d^*_t(H_t)) \left( 1 - \phi_t(f_t(H_t); d^*_t(H_t)) \right) + \sum_{i \in [k_t]: i \ne d^*_t(H_t)} Q^*_t(H_t; i) \left( \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right) \]
\[ \overset{(a)}{=} Q^*_t(H_t; d^*_t(H_t)) \left( \phi_t(f^*_t(H_t); d^*_t(H_t)) - \phi_t(f_t(H_t); d^*_t(H_t)) \right) + \sum_{i \in [k_t]: i \ne d^*_t(H_t)} Q^*_t(H_t; i) \left( \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right), \tag{S10.14} \]
where in step (a) we used the fact that $\phi_t(f^*_t(H_t); d^*_t(H_t)) = 1$, which follows from (S10.3). Now we show that each difference appearing in (S10.14) is nonnegative, i.e.,
\[ \phi_t(f^*_t(H_t); d^*_t(H_t)) - \phi_t(f_t(H_t); d^*_t(H_t)) \ge 0 \quad \text{and} \quad \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \ge 0 \ \text{for } i \ne d^*_t(H_t). \tag{S10.15} \]
For the first claim, since $\phi_t \le 1$ and $\phi_t(f^*_t(H_t); d^*_t(H_t)) = 1$, we have $\phi_t(f^*_t(H_t); d^*_t(H_t)) - \phi_t(f_t(H_t); d^*_t(H_t)) \ge 0$. For $i \ne d^*_t(H_t)$, the claim holds since $\phi_t(f^*_t(H_t); i) = 0$ by (S10.3) and $\phi_t \ge 0$. Thus each difference in (S10.14) equals its absolute value, so the right-hand side of (S10.14) equals $\sum_{i \in [k_t]} Q^*_t(H_t; i) |\phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i)|$. Then (S10.15), combined with (S10.13) and (S10.14), implies that
\[ \sum_{i \in [k_t]: i \ne d^*_t(H_t)} \left( Q^*_t(H_t; d^*_t(H_t)) + Q^*_t(H_t; i) \right) \left( \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right) \ge \sum_{i \in [k_t]} Q^*_t(H_t; i) \left| \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right|. \]
The above, combined with (S10.11) and (S10.12), implies that $V^\psi(f^*) - V^\psi(f)$ is bounded below by
\[ \frac{z}{2C_{\max}} \sum_{t \in [T]} E\left[ \prod_{i=1}^{t-1} \frac{\phi_i(f_i(H_i); A_i)}{\pi_i(A_i \mid H_i)} \sum_{i \in [k_t]} Q^*_t(H_t; i) \left| \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right| 1_{\mathcal{E}_t^c} \right] \]
\[ = \frac{z}{2C_{\max}} \sum_{t \in [T]} E\left[ \prod_{i=1}^{t-1} \frac{\phi_i(f_i(H_i); A_i)}{\pi_i(A_i \mid H_i)} \sum_{i \in [k_t]} Q^*_t(H_t; i) \left| \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right| \right] - \frac{z}{2C_{\max}} \sum_{t \in [T]} E\left[ \prod_{i=1}^{t-1} \frac{\phi_i(f_i(H_i); A_i)}{\pi_i(A_i \mid H_i)} \sum_{i \in [k_t]} Q^*_t(H_t; i) \left| \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right| 1_{\mathcal{E}_t} \right]. \]
Because $\phi_t \le 1$ for all $t \in [T]$, $\sum_{i=1}^T |Y_i| \le TC_{\max}$, and $\pi_t \ge C_\pi$ for all $t \in [T]$, it follows that
\[ E\left[ \prod_{i=1}^{t-1} \frac{\phi_i(f_i(H_i); A_i)}{\pi_i(A_i \mid H_i)} \sum_{i \in [k_t]} \left| \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right| Q^*_t(H_t; i) \, 1_{\mathcal{E}_t} \right] \le \frac{k_t C_{\max} P(\mathcal{E}_t)}{C_\pi^T}, \]
which is less than $Cz^\alpha$, where the last step follows from Assumption 1. Here $C > 0$ is a constant depending only on $P$. Thus we have shown that
\[ V^\psi(f^*) - V^\psi(f) \ge -Cz^{1+\alpha} + \frac{z}{2C_{\max}} \sum_{t \in [T]} E\left[ \prod_{i=1}^{t-1} \frac{\phi_i(f_i(H_i); A_i)}{\pi_i(A_i \mid H_i)} \sum_{i \in [k_t]} Q^*_t(H_t; i) \left| \phi_t(f_t(H_t); i) - \phi_t(f^*_t(H_t); i) \right| \right], \]
which, combined with the upper bound on $\|U_{f^*,f}\|_{P,1}$ from (S10.10), leads to
\[ V^\psi(f^*) - V^\psi(f) \ge Cz\|U_{f^*,f}\|_{P,1} - Cz^{1+\alpha} \tag{S10.16} \]
for some constant $C > 0$. Since $\pi_t \ge C_\pi$, $\sum_{t \in [T]} |Y_t| \le TC_{\max}$, and $\phi_t \le 1$, it follows that
\[ \left| \sum_{t \in [T]} Y_t \prod_{j=1}^T \frac{\phi_j(f_j(H_j); A_j)}{\pi_j(A_j \mid H_j)} \right| \le \frac{TC_{\max}}{C_\pi^T}. \]
Therefore $U_{f^*,f}$ is uniformly bounded by $TC_{\max}C_\pi^{-T}$, and hence $\|U_{f^*,f}\|^2_{P,2} \le TC_{\max}C_\pi^{-T}\|U_{f^*,f}\|_{P,1}$. Thus, (S10.16) leads to
\[ V^\psi(f^*) - V^\psi(f) \ge C\left( z\|U_{f^*,f}\|^2_{P,2} - z^{1+\alpha} \right), \]
which leads to
\[ \|U_{f^*,f}\|^2_{P,2} \le z^\alpha + \left( V^\psi(f^*) - V^\psi(f) \right) z^{-1}. \]
Optimizing over $z$, we find that the upper bound is smallest (up to constants) when $z = (V^\psi(f^*) - V^\psi(f))^{1/(1+\alpha)}$, which completes the proof.

S10.2.3. Proof of Lemma S10.5

Proof of Lemma S10.5. The proof follows by noting the telescoping sum
\[ \sum_{t \in [T]} \left( \phi_t(f^*_t(H_t); A_t) - \phi_t(f_t(H_t); A_t) \right) \prod_{i=1}^{t-1} \phi_i(f_i(H_i); A_i) \prod_{i=t+1}^T \phi_i(f^*_i(H_i); A_i) \]
\[ = \prod_{i=1}^T \phi_i(f^*_i(H_i); A_i) + \sum_{t=2}^T \prod_{i=1}^{t-1} \phi_i(f_i(H_i); A_i) \prod_{i=t}^T \phi_i(f^*_i(H_i); A_i) - \sum_{t=1}^{T-1} \prod_{i=1}^t \phi_i(f_i(H_i); A_i) \prod_{i=t+1}^T \phi_i(f^*_i(H_i); A_i) - \prod_{i=1}^T \phi_i(f_i(H_i); A_i). \tag{S10.17} \]
Replacing $t$ by $t' = t - 1$ in the second sum on the RHS of (S10.17), we obtain
\[ \sum_{t=2}^T \prod_{i=1}^{t-1} \phi_i(f_i(H_i); A_i) \prod_{i=t}^T \phi_i(f^*_i(H_i); A_i) = \sum_{t'=1}^{T-1} \prod_{i=1}^{t'} \phi_i(f_i(H_i); A_i) \prod_{i=t'+1}^T \phi_i(f^*_i(H_i); A_i), \]
which cancels the third term on the RHS of (S10.17). Hence, the proof follows from (S10.17).

S10.3. Proof of Corollary 1

Proof of Corollary 1. We have already mentioned in Section 6.3 that $\mathcal{U}_{tn} = \mathcal{F}(N_n, W_n, s_n)$ satisfies (6.6) with $\rho_n = s_n + 1$ and $I_n = (N_n + 1)(s_n + 1)^{2N_n + 4}$. We will apply Theorem 6.2. To this end, we first show that $\rho_n \log I_n = o(n)$ and $\liminf_n \rho_n \log I_n > 0$. Since $N_n, s_n \ge 1$, we have $I_n \ge 2 \times 2^6$; thus $\log I_n > 0$ trivially follows. Also, $\rho_n = s_n + 1 > 0$, so $\rho_n \log I_n > 0$ for all $n$. Moreover, since $\log(s_n) = O(\log n)$ and $N_n = O(\log n)$, $\log I_n$ is of the order $O((\log n)^2)$.
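The polynomial exponent in the final rate of Corollary 1 can be written either as $(1+\alpha)\beta/((2+\alpha)\beta + q)$ (the form that comes out of $s_n = c_2 n^{q/((2+\alpha)\beta+q)}$) or as $(1+\alpha)/((2+\alpha) + q/\beta)$. These are algebraically identical; an exact-arithmetic check over a few arbitrary rational triples (the particular values of $\alpha$, $\beta$, $q$ below are made up) confirms this:

```python
from fractions import Fraction

def exponents_match(alpha, beta, q):
    # two ways of writing the same rate exponent:
    #   (1+alpha)*beta / ((2+alpha)*beta + q)  vs  (1+alpha) / ((2+alpha) + q/beta)
    left = (1 + alpha) * beta / ((2 + alpha) * beta + q)
    right = (1 + alpha) / ((2 + alpha) + q / beta)
    return left == right

# exact rational arithmetic for hypothetical (alpha, beta, q) triples
cases = [(Fraction(1, 2), Fraction(3, 2), Fraction(4)),
         (Fraction(1), Fraction(2), Fraction(10)),
         (Fraction(3, 4), Fraction(1, 3), Fraction(7, 2))]
assert all(exponents_match(a, b, q) for a, b, q in cases)
```

Using `Fraction` makes the comparison exact, so the check is of the identity itself rather than of floating-point coincidence.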
Noting $\rho_n = O(s_n) = O\left(n^{q/((2+\alpha)\beta + q)}\right)$, we thus conclude that $\rho_n \log I_n = o(n)$. Therefore, it suffices to show that (6.6) holds for some $b_n \gg \sqrt{n}$. We will take $b_n = n$. Since $s_n = c_2 n^{q/((2+\alpha)\beta + q)}$ and $N_n = c_1 \log n$, we have
\[ \left( \frac{\rho_n \log I_n}{n} \right)^{1/(2+\alpha)} \ge C \left( \frac{s_n(2N_n + 4)\log(s_n) + s_n \log\log n}{n} \right)^{1/(2+\alpha)} \ge C \left( \frac{(\log n)^2 s_n}{n} \right)^{1/(2+\alpha)} \ge C (\log n)^{\frac{2}{2+\alpha}} n^{-\frac{\beta}{(2+\alpha)\beta + q}} \gtrsim (\log n)^{\frac{2}{2+\alpha}} s_n^{-\beta/q}, \tag{S10.18} \]
where the last step uses $s_n^{-\beta/q} \asymp n^{-\beta/((2+\alpha)\beta + q)}$. Let $t \in [T]$. Under Assumption 2, following the proof of Supplementary Lemma M.2 of Laha et al. (2024b), we can show that for all $i \in [k_t - 1]$ there exists a network $\tilde g_{ti} \in \mathcal{F}(N_n, W_n, s_n)$ and a constant $C_{q,\beta} > 0$ so that, under our conditions on $N_n$, $W_n$, and $s_n$, the following holds for all $t \in [T]$:
\[ \|\tilde g_{ti}/b_n - Q^*_t(\cdot, 1) + Q^*_t(\cdot, i+1)\|_\infty \le C_{q,\beta}\, s_n^{-\beta/q}, \]
which is bounded above by a constant multiple of $(\rho_n \log I_n / n)^{1/(2+\alpha)}$ by (S10.18). Hence, (6.6) holds. Therefore, the rest of the proof follows from Theorem 6.2, noting
\[ \left( \frac{\rho_n \log I_n}{n} \right)^{\frac{1+\alpha}{2+\alpha}} \lesssim (\log n)^{\frac{2(1+\alpha)}{2+\alpha}} n^{-\frac{1+\alpha}{(2+\alpha) + q/\beta}}. \]

S10.4. Proof of Result 5

Proof of Result 5. Fix $j \in [k]$. If we can show that $\phi(\cdot; j)$ is differentiable and its gradient is uniformly bounded in $l_2$ norm, then global Lipschitz continuity follows from the mean value theorem. First, we show that $\phi(\cdot; j)$ is differentiable. To this end, we first show that $K$ is differentiable with uniformly bounded partial derivatives. Since $K$ satisfies $K(x) = \prod_{i=1}^k K(x_i)$ for all $x \in \mathbb{R}^k$ and the univariate $K$ is differentiable, the partial derivatives of $K$ exist everywhere and equal
\[ \frac{\partial K(x)}{\partial x_i} = K'(x_i) \prod_{r \in [k]: r \ne i} K(x_r) \quad \text{for all } i \in [k]. \]
Since $K'$ and $K$ are bounded, the partial derivatives are also bounded. To show that $\phi(\cdot; j)$ is differentiable, it suffices to show that its partial derivatives exist and are continuous. Since all partial derivatives of $K$ exist and are bounded for all $x \in \mathbb{R}^k$, by the Leibniz integral rule,
\[ \frac{\partial \phi(x; j)}{\partial x_i} = \frac{\partial}{\partial x_i} \int_{\mathbb{R}^k} 1[\mathrm{pred}(t) = j] K(x - t)\,dt = \int_{\mathbb{R}^k} 1[\mathrm{pred}(t) = j] \frac{\partial K(x - t)}{\partial x_i}\,dt = \int_{\mathbb{R}^k} 1[\mathrm{pred}(t) = j] K'(x_i - t_i) \prod_{r \in [k]: r \ne i} K(x_r - t_r)\,dt \]
for all $i \in [k]$. Thus all partial derivatives of $\phi(\cdot; j)$ exist. Now we will show that the partial derivatives of $\phi(\cdot; j)$ are continuous. Suppose $x^{(n)} \to x$. Then
\[ \lim_{n \to \infty} \frac{\partial \phi(x^{(n)}; j)}{\partial x_i} = \lim_{n \to \infty} \int_{\mathbb{R}^k} 1[\mathrm{pred}(t) = j] K'(x_i^{(n)} - t_i) \prod_{r \in [k]: r \ne i} K(x_r^{(n)} - t_r)\,dt. \]
Since $K'$ and $K$ are bounded, the continuity of $\partial \phi(\cdot; j)/\partial x_i$ follows from the dominated convergence theorem and the continuity of $K$ and $K'$. Thus we have shown that all partial derivatives of $\phi(\cdot; j)$ exist and are continuous at every $x \in \mathbb{R}^k$. Hence, $\phi(\cdot; j)$ is differentiable with gradient
\[ \nabla \phi(x; j) = \int_{\mathbb{R}^k} 1[\mathrm{pred}(t) = j] \nabla K(x - t)\,dt. \]
To show that $\phi(\cdot; j)$ is Lipschitz, it remains to show that $\nabla \phi(x; j)$ is bounded in $l_2$ norm uniformly in $x \in \mathbb{R}^k$. To this end, it is enough to show that the partial derivatives are uniformly bounded. This follows since, for any $i$ and $j$ in $[k]$,
\[ \sup_{x \in \mathbb{R}^k} \left| \frac{\partial \phi(x; j)}{\partial x_i} \right| = \sup_{x \in \mathbb{R}^k} \left| \int_{\mathbb{R}^k} 1[\mathrm{pred}(t) = j] K'(x_i - t_i) \prod_{r \in [k]: r \ne i} K(x_r - t_r)\,dt \right| \le \sup_{x \in \mathbb{R}^k} \int_{\mathbb{R}^k} |K'(x_i - t_i)| \prod_{r \in [k]: r \ne i} K(x_r - t_r)\,dt = \sup_{x_i \in \mathbb{R}} \int_{\mathbb{R}} |K'(x_i - t_i)|\,dt_i = \int_{-\infty}^{\infty} |K'(t)|\,dt, \]
which is bounded because $K'$ is integrable.

S11. Additional facts

Fact S11.1. Suppose $f_1, \ldots, f_k$ are real-valued functions on $\mathbb{R}^k$ for some $k \in \mathbb{N}$. If they are closed and bounded below, then their sum is closed.

Proof of Fact S11.1. Suppose $f_1, \ldots, f_k$ are bounded below by $C \in \mathbb{R}$. Consider the function $h = \sum_{i=1}^k (f_i - C)$. Clearly, $h$ is a sum of non-negative closed functions. Closedness is equivalent to lower semicontinuity (Hiriart-Urruty and Lemaréchal, 2004, pp. 78).
However, a sum of non-negative lower semicontinuous functions is lower semicontinuous (this fact follows by noting that $f$ is lower semicontinuous if and only if $\liminf_{x \to x_0} f(x) \ge f(x_0)$ for all $x_0 \in \mathbb{R}^k$). Therefore, $h$ is lower semicontinuous, and hence closed. However, since $\sum_{i=1}^k f_i = h + kC$, the function $\sum_{i=1}^k f_i$ is closed as well.

Fact S11.2. Let $u \in \mathbb{R}^{k_1 - 1}$ and $w \in \mathbb{R}^{k_2 - 1}$ be two fixed vectors. If $f : \mathbb{R}^{k_1 + k_2 - 2} \to \mathbb{R}$ is closed, then so is the bivariate function $(x, y) \mapsto f(xu, yw)$, where $x, y \in \mathbb{R}$.

Proof of Fact S11.2. To show closedness, it suffices to show that the above function is lower semicontinuous everywhere. Let $(x_k)$ and $(y_k)$ be two real sequences, and denote $\mathbf{x}_k = x_k u$ and $\mathbf{y}_k = y_k w$. For any $x_0, y_0 \in \mathbb{R}$, it holds that
\[ \liminf_{x_k \to x_0,\ y_k \to y_0} f(x_k u, y_k w) = \liminf_{\mathbf{x}_k \to x_0 u,\ \mathbf{y}_k \to y_0 w} f(\mathbf{x}_k, \mathbf{y}_k) \ge f(x_0 u, y_0 w) \]
because $f$ is lower semicontinuous (Hiriart-Urruty and Lemaréchal, 2004, pp. 78).

Fact S11.3 (Lemma 5 of Schmidt-Hieber (2020)). Suppose $\mathcal{F}(N, W, s)$ is a class of ReLU neural networks with linear output layer, depth $N$, width vector $W$, sparsity $s$, and weights uniformly bounded by one. Denote by $C_N$ the constant $(N+1)\prod_{i=0}^{N+1}(W_i + 1)^2$. Then for any $\epsilon > 0$,
\[ \log N(\epsilon, \mathcal{F}(N, W, s), \|\cdot\|_\infty) \lesssim (s + 1)\log\left(2\epsilon^{-1} C_N\right). \]
Also, since $W_i \le s$ for each $i \in \{0, \ldots, N+1\}$, $C_N \lesssim N(s+1)^{2N+4}$.

Fact S11.4. For $t \in [T]$, the Q-functions satisfy
\[ Q^*_t(H_t, a_t) = E\left[ \sum_{i=t}^T Y_i \prod_{j=t+1}^T \frac{1[A_j = d^*_j(H_j)]}{\pi_j(A_j \mid H_j)} \,\Big|\, H_t, A_t = a_t \right]. \]

Proof of Fact S11.4. We will prove this fact by induction, where the induction hypothesis
\[ Q^*_t(H_t, a_t) = E\left[ \sum_{i=t}^T Y_i \prod_{j=t+1}^T \frac{1[A_j = d^*_j(H_j)]}{\pi_j(A_j \mid H_j)} \,\Big|\, H_t, A_t = a_t \right] \]
holds for $t = T$ because $Q^*_T(H_T, A_T) = E[Y_T \mid H_T, A_T]$ and we have defined products over an empty range to be one. Suppose the induction hypothesis holds for $t + 1$, where $t \in [1 : T-1]$. We will show that it holds for $t$ as well. Note that
\[ E\left[ \sum_{i=t}^T Y_i \prod_{j=t+1}^T \frac{1[A_j = d^*_j(H_j)]}{\pi_j(A_j \mid H_j)} \,\Big|\, H_t, A_t = i \right] = E\left[ Y_t \, E\left[ \prod_{j=t+1}^T \frac{1[A_j = d^*_j(H_j)]}{\pi_j(A_j \mid H_j)} \,\Big|\, H_{t+1} \right] + \sum_{i'=t+1}^T Y_{i'} \prod_{j=t+1}^T \frac{1[A_j = d^*_j(H_j)]}{\pi_j(A_j \mid H_j)} \,\Big|\, H_t, A_t = i \right] \tag{S11.1} \]
\[ \overset{(a)}{=} E[Y_t \mid H_t, A_t = i] + E\left[ E\left[ \sum_{i'=t+1}^T Y_{i'} \prod_{j=t+1}^T \frac{1[A_j = d^*_j(H_j)]}{\pi_j(A_j \mid H_j)} \,\Big|\, H_{t+1} \right] \,\Big|\, H_t, A_t = i \right], \tag{S11.2} \]
where (a) follows by applying (S6.10) to the first term. Now applying (S6.5) to the second term of the RHS of (S11.1) with
\[ V = \sum_{i'=t+1}^T Y_{i'} \prod_{j=t+2}^T \frac{1[A_j = d^*_j(H_j)]}{\pi_j(A_j \mid H_j)}, \]
we see that the RHS of (S11.1) equals
\[ E[Y_t \mid H_t, A_t = i] + E\left[ E\left[ \sum_{i'=t+1}^T Y_{i'} \prod_{j=t+2}^T \frac{1[A_j = d^*_j(H_j)]}{\pi_j(A_j \mid H_j)} \,\Big|\, H_{t+1}, A_{t+1} = d^*_{t+1}(H_{t+1}) \right] \,\Big|\, H_t, A_t = i \right], \]
where the inner product is one if $t = T - 1$, when the range of the product is empty. Since the induction hypothesis holds for $t + 1$, it follows that $Q^*_{t+1}(H_{t+1}, d^*_{t+1}(H_{t+1}))$ equals
\[ E\left[ \sum_{i'=t+1}^T Y_{i'} \prod_{j=t+2}^T \frac{1[A_j = d^*_j(H_j)]}{\pi_j(A_j \mid H_j)} \,\Big|\, H_{t+1}, A_{t+1} = d^*_{t+1}(H_{t+1}) \right]. \]
Thus (S11.1) leads to
\[ E\left[ \sum_{i=t}^T Y_i \prod_{j=t+1}^T \frac{1[A_j = d^*_j(H_j)]}{\pi_j(A_j \mid H_j)} \,\Big|\, H_t, A_t = i \right] = E\left[ Y_t + Q^*_{t+1}(H_{t+1}, d^*_{t+1}(H_{t+1})) \,\Big|\, H_t, A_t = i \right], \]
which equals $Q^*_t(H_t, i)$.

Fact S11.5. If $X_n$ is bounded and $X_n \to_P 0$, then $E[X_n] \to 0$.

Proof. Uniform integrability together with convergence in probability implies $L_1$ convergence. Bounded sequences of random variables are uniformly integrable (Billingsley, 2017). Hence $E[|X_n|] \to 0$.

References

Akhtar, M., Tanveer, M., and Arshad, M. (2024). Hawkeye: advancing robust regression with bounded, smooth, and insensitive loss function. arXiv preprint arXiv:2401.16785.
Athey, S. and Wager, S. (2021). Policy learning with observational data. Econometrica, 89(1), 133–161.
Audibert, J.-Y. and Tsybakov, A. B. (2007). Fast learning rates for plug-in classifiers. The Annals of Statistics, 35(2), 608–633.
Bartlett, P. L., Jordan, M. I., and McAuliffe, J. D. (2006).
Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473), 138–156.
Beijbom, O., Saberian, M., Kriegman, D., and Vasconcelos, N. (2014). Guess-averse loss functions for cost-sensitive multiclass boosting. In International Conference on Machine Learning, pages 586–594. PMLR.
Bennett, A. and Kallus, N. (2020). Efficient policy learning from surrogate-loss classification reductions. In International Conference on Machine Learning, pages 788–798. PMLR.
Bhattacharyya, A., Ghoshal, S., and Saket, R. (2018). Hardness of learning noisy halfspaces using polynomial thresholds. In Conference On Learning Theory, pages 876–917. PMLR.
Billingsley, P. (2017). Probability and Measure. John Wiley & Sons.
Bottou, L., Curtis, F. E., and Nocedal, J. (2018). Optimization methods for large-scale machine learning. SIAM Review, 60(2), 223–311.
Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press, New York, NY, USA.
Calauzenes, C., Usunier, N., and Gallinari, P. (2012). On the (non-)existence of convex, calibrated surrogate losses for ranking. Advances in Neural Information Processing Systems 25 (NIPS 2012), pages 197–205.
Chakraborty, B. and Moodie, E. E. (2013). Statistical Methods for Dynamic Treatment Regimes, volume 2. Springer.
Chapagain, N. (2024a). Data application code for SDSS method. https://github.com/nilson01/ApplicationFin. GitHub repository.
Chapagain, N. (2024b). Simulation code for SDSS method. https://github.com/nilson01/SimulationDirectSearch. GitHub repository.
Charoenphakdee, N., Lee, J., and Sugiyama, M. (2019). On symmetric losses for learning from corrupted labels. In International Conference on Machine Learning, pages 961–970. PMLR.
Chen, J., Fu, H., He, X., Kosorok, M. R., and Liu, Y. (2018). Estimating individualized treatment rules for ordinal treatments. Biometrics, 74(3), 924–933.
Chen, Y., Chi, Y., Fan, J., and Ma, C. (2019). Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval. Mathematical Programming, 176, 5–37.
Dauphin, Y. N., Pascanu, R., Gulcehre, C., Cho, K., Ganguli, S., and Bengio, Y. (2014). Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pages 2933–2941.
Deliu, N., Williams, J. J., and Chakraborty, B. (2022). Reinforcement learning in modern biostatistics: constructing optimal adaptive interventions. arXiv preprint arXiv:2203.02605.
Do, S. N., Dao, C. X., Nguyen, T. A., Nguyen, M. H., Pham, D. T., Nguyen, N. T., Huynh, D. Q., Hoang, Q. T. A., Van Bui, C., Vu, T. D., et al. (2023). Sequential organ failure assessment (SOFA) score for predicting mortality in patients with sepsis in Vietnamese intensive care units: a multicentre, cross-sectional study. BMJ Open, 13(3), e064870.
Doss, C. R. and Wellner, J. A. (2016). Global rates of convergence of the MLEs of log-concave and s-concave densities. Ann. Statist., 44, 954–981.
Doss, C. R. and Wellner, J. A. (2019). Univariate log-concave density estimation with symmetry or modal constraints. Electron. J. Stat., 13, 2391–2461.
Duchi, J. C., Mackey, L. W., and Jordan, M. I. (2010). On the consistency of ranking algorithms. In ICML.
Dudík, M., Schapire, R. E., and Telgarsky, M. (2022). Convex analysis at infinity: An introduction to astral space. arXiv preprint arXiv:2205.03260.
Fathony, R., Liu, A., Asif, K., and Ziebart, B. (2016). Adversarial multiclass classification: A risk minimization perspective.
Advances in Neural Information Processing Systems, 29.
Feng, H., Ning, Y., and Zhao, J. (2022). Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses. The Annals of Statistics, 50(4), 2284–2305.
Finocchiaro, J., Frongillo, R., and Waggoner, B. (2019). An embedding framework for consistent polyhedral surrogates. Advances in Neural Information Processing Systems, 32.
Gao, W. and Zhou, Z.-H. (2011). On the consistency of multi-label learning. In Proceedings of the 24th Annual Conference on Learning Theory, pages 341–358.
Giné, E. and Nickl, R. (2015). Mathematical Foundations of Infinite-Dimensional Statistical Models, volume 40 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge.
Glasmachers, T., Igel, C., et al. (2016). A unified view on multi-class support vector classification. Journal of Machine Learning Research, 17(45), 1–32.
Glorot, X. and Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256. JMLR Workshop and Conference Proceedings.
Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. MIT Press. http://www.deeplearningbook.org.
Hager, R., Tsiatis, A. A., and Davidian, M. (2018). Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data. Biometrics, 74(4), 1180–1192.
Han, S. (2021). Identification in nonparametric models for dynamic treatment effects. Journal of Econometrics, 225(2), 132–147.
He, K., Zhang, X., Ren, S., and Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034.
Hirano, K., Imbens, G. W., and Ridder, G. (2003). Efficient estimation of average treatment effects using the estimated propensity score. Econometrica, 71(4), 1161–1189.
Hiriart-Urruty, J.-B. and Lemaréchal, C. (2004). Fundamentals of Convex Analysis. Springer Science & Business Media.
Hochreiter, S., Bengio, Y., Frasconi, P., Schmidhuber, J., et al. (2001). Gradient flow in recurrent nets: the difficulty of learning long-term dependencies.
Horn, R. A. and Johnson, C. R. (2012). Matrix Analysis. Cambridge University Press.
Horowitz, J. L. (1992). A smoothed maximum score estimator for the binary response model. Econometrica: Journal of the Econometric Society, pages 505–531.
Jiang, B., Song, R., Li, J., and Zeng, D. (2019). Entropy learning for dynamic treatment regimes. Statistica Sinica, 29(4), 1633.
Jiang, N. and Li, L. (2016). Doubly robust off-policy value evaluation for reinforcement learning. In International Conference on Machine Learning, pages 652–661. PMLR.
Jin, C., Liu, Q., and Miryoosefi, S. (2021). Bellman eluder dimension: New rich classes of RL problems, and sample-efficient algorithms. Advances in Neural Information Processing Systems, 34, 13406–13418.
Johnson, A. and Pollard, T. (2018). sepsis3-mimic.
Johnson, A., Pollard, T., Shen, L., Lehman, L., Feng, M., Ghassemi, M., Moody, B., Szolovits, P., Celi, L., and Mark, R. (2016). MIMIC-III, a freely accessible critical care database. Scientific Data, 3, 160035.
Johnson, A. E., Aboab, J., Raffa, J. D., Pollard, T. J., Deliberato, R. O., Celi, L. A., and Stone, D. J. (2018). A comparative analysis of sepsis identification methods in an electronic database.
Critical Care Medicine, 46(4), 494.
Kallus, N. and Uehara, M. (2020). Double reinforcement learning for efficient off-policy evaluation in Markov decision processes. Journal of Machine Learning Research, 21(167).
Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Koltchinskii, V. (2009). 2008 Saint Flour lectures: Oracle inequalities in empirical risk minimization and sparse recovery problems.
Komorowski, M., Celi, L., Badawi, O., Gordon, A., and Faisal, A. (2018). The artificial intelligence clinician learns optimal treatment strategies for sepsis in intensive care. Nature Medicine, 24(11), 1716–1720.
Kosorok, M. R. and Laber, E. B. (2019). Precision medicine. Annual Review of Statistics and Its Application, 6, 263–286.
Laber, E. B., Lizotte, D. J., Qian, M., Pelham, W. E., and Murphy, S. A. (2014). Dynamic treatment regimes: Technical challenges and applications. Electronic Journal of Statistics, 8(1), 1225.
Laha, N. (2021). Adaptive estimation in symmetric location model under log-concavity constraint. Electronic Journal of Statistics, 15, 2939–3014.
Laha, N., Sonabend-W, A., Mukherjee, R., and Cai, T. (2024a). Finding the optimal dynamic treatment regimes using smooth Fisher consistent surrogate loss. Annals of Statistics, 52(2), 679–707.
Laha, N., Sonabend-W, A., Mukherjee, R., and Cai, T. (2024b). Finding the optimal dynamic treatment regimes using smooth Fisher consistent surrogate loss.
Lat, I., Coopersmith, C. M., and De Backer, D. (2021). The surviving sepsis campaign: fluid resuscitation and vasopressor therapy research priorities in adult patients. Intensive Care Medicine Experimental, 9(1), 1–16.
Lee, Y., Lin, Y., and Wahba, G. (2004). Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 99(465), 67–81.
Liu, M., Shen, X., and Pan, W. (2021). Outcome weighted ψ-learning for individualized treatment rules. Stat, 10(1), e343.
Liu, M., Wang, Y., Fu, H., and Zeng, D. (2024a). Controlling cumulative adverse risk in learning optimal dynamic treatment regimens. Journal of the American Statistical Association, 119(548), 2622–2633.
Liu, M., Wang, Y., Fu, H., and Zeng, D. (2024b). Learning optimal dynamic treatment regimens subject to stagewise risk controls. Journal of Machine Learning Research, 25(128), 1–64.
Liu, Y. (2007). Fisher consistency of multicategory support vector machines. In Artificial Intelligence and Statistics, pages 291–298.
Liu, Y. and Shen, X. (2006). Multicategory ψ-learning. Journal of the American Statistical Association, 101(474), 500–509.
Liu, Y., Wang, Y., Kosorok, M. R., Zhao, Y., and Zeng, D. (2018). Augmented outcome-weighted learning for estimating optimal dynamic treatment regimens. Statistics in Medicine, 37(26), 3776–3788.
Luedtke, A. R. and Van Der Laan, M. J. (2016). Statistical inference for the mean outcome under a possibly non-unique optimal treatment strategy. Annals of Statistics, 44(2), 713.
Marik, P. (2015). The demise of early goal-directed therapy for severe sepsis and septic shock. Acta Anaesthesiologica Scandinavica, 59(5), 561–567.
Masnadi-Shirazi, H. and Vasconcelos, N. (2008). On the design of loss functions for classification: theory, robustness to outliers, and SavageBoost. Advances in Neural Information Processing Systems, 21.
Meng, H., Zhao, Y.-Q., Fu, H., and Qiao, X. (2020). Near-optimal individualized treatment recommendations. J. Mach. Learn. Res.
, 21, 183–1.
Moodie, E. E., Richardson, T. S., and Stephens, D. A. (2007). Demystifying optimal dynamic treatment regimes. Biometrics, 63(2), 447–455.
Moodie, E. E., Dean, N., and Sun, Y. R. (2014). Q-learning: Flexible learning about useful utilities. Statistics in Biosciences, 6, 223–243.
Murphy, S. A. (2003). Optimal dynamic treatment regimes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(2), 331–355.
Murphy, S. A. (2005). A generalization error for Q-learning. Journal of Machine Learning Research, 6, 1073–1097.
Murphy, S. A., van der Laan, M. J., and Robins, J. M. (2001a). Marginal mean models for dynamic regimes. Journal of the American Statistical Association, 96(456), 1410–1423.
Murphy, S. A., van der Laan, M. J., and Robins, J. M. (2001b). Marginal mean models for dynamic regimes. Journal of the American Statistical Association, 96(456), 1410–1423.
Natarajan, N., Dhillon, I. S., Ravikumar, P. K., and Tewari, A. (2013). Learning with noisy labels. Advances in Neural Information Processing Systems, 26.
Neykov, M., Liu, J. S., and Cai, T. (2016). On the characterization of a class of Fisher-consistent loss functions and its application to boosting. The Journal of Machine Learning Research, 17(1), 2498–2529.
Nguyen, T. and Sanner, S. (2013). Algorithms for direct 0–1 loss optimization in binary classification. In International Conference on Machine Learning, pages 1085–1093. PMLR.
Orellana, L., Rotnitzky, A., and Robins, J. M. (2010). Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, part I: main content. The International Journal of Biostatistics, 6(2).
Osokin, A., Bach, F., and Lacoste-Julien, S. (2017). On structured prediction theory with calibrated convex surrogate losses. Advances in Neural Information Processing Systems, 30.
Pan, Y. and Zhao, Y.-Q. (2021). Improved doubly robust estimation in learning optimal individualized treatment rules. Journal of the American Statistical Association, 116(533), 283–294.
Patrini, G., Rozza, A., Krishna Menon, A., Nock, R., and Qu, L. (2017). Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1944–1952.
Qian, M. and Murphy, S. A. (2011). Performance guarantees for individualized treatment rules. Annals of Statistics, 39(2), 1180.
Raghu, A., Komorowski, M., Celi, L. A., Szolovits, P., and Ghassemi, M. (2017). Continuous state-space models for optimal sepsis treatment: a deep reinforcement learning approach. abs/1705.08422.
Ramaswamy, H. G. and Agarwal, S. (2016). Convex calibration dimension for multiclass loss matrices. The Journal of Machine Learning Research, 17(1), 397–441.
Rigollet, P. and Zeevi, A. J. (2010). Nonparametric bandits with covariates. In Annual Conference on Computational Learning Theory.
Robins, J. M. (1997). Causal inference from complex longitudinal data. In Latent Variable Modeling and Applications to Causality, pages 69–117. Springer.
Robins, J. M. (2004). Optimal structural nested models for optimal sequential decisions. In Proceedings of the Second Seattle Symposium on Biostatistics, D. Y. Lin and P. Heagerty (Eds.), pages 189–326. New York: Springer.
Robins, J. M., Rotnitzky, A., and Zhao, L. P. (1994). Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427), 846–866.
Rockafellar, R. T. (1970). Convex Analysis. Princeton University Press.
Rosset, S., Zhu, J., and Hastie, T. (2003). Margin maximizing loss functions. Advances in neural information processing systems ,16. Saumard, A. and Wellner, J. A. (2014). Log-concavity and strong log-concavity: a review. Statistics surveys ,8, 45. Schmidt-Hieber (2020). Nonparametric regression using deep neural networks with relu activation function. Annals of Statistics ,48(4), 1875–1897. Schulte, P. J., Tsiatis, A. A., Laber, E. B., and Davidian, M. (2014). Q- and A-learning methods for estimating optimal dynamic treatment regimes. Statist. Sci. ,29(4), 640–661. Seregin, A. and Wellner, J. A. (2010). Nonparametric estimation of multivariate convex-transformed densities. Annals of statistics ,38(6), 3751. Sonabend-W, A., Laha, N., Ananthakrishnan, A. N., Cai, T., and Mukherjee, R. (2023). Semi-supervised off-policy rein- forcement learning and value estimation for dynamic treatment regimes. Journal of Machine Learning Research ,24(323), 1–86. Steinwart, I., Scovel, C., et al. (2007). Fast rates for support vector machines using gaussian kernels. The Annals of Statistics , 35(2), 575–607. Sun, Y. and Wang, L. (2021). Stochastic tree search for estimating optimal dynamic treatment regimes. Journal of the American Statistical Association ,116(533), 421–432. Sutton, R. S. (2018). Reinforcement learning : an introduction . Adaptive computation and machine learning. The MIT Press, Cambridge, Massachusetts ; London, England, second edition. edition. Tao, Y. and Wang, L. (2017). Adaptive contrast weighted learning for multi-stage multi-treatment decision-making. Bio- metrics ,73(1), 145–155. Tao, Y., Wang, L., and Almirall, D. (2018). Tree-based reinforcement learning for estimating
|
https://arxiv.org/abs/2505.17285v1
|
optimal dynamic treatment regimes. The annals of applied statistics ,12(3), 1914. Tewari, A. and Bartlett, P. L. (2007). On the consistency of multiclass classification methods. Journal of Machine Learning Research ,8(May), 1007–1025. Thomas, P. and Brunskill, E. (2016). Data-efficient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning , pages 2139–2148. PMLR. Tsiatis, A. (2006). Semiparametric theory and missing data. 73. Tsiatis, A. A., Davidian, M., Holloway, S. T., and Laber, E. B. (2019). Dynamic treatment regimes: Statistical methods for precision medicine . Chapman and Hall/CRC. Tsybakov, A. B. (2004). Optimal aggregation of classifiers in statistical learning. The Annals of Statistics ,32(1), 135–166. Ven, L. and Lederer, J. (2021). Regularization and reparameterization avoid vanishing gradients in sigmoid-type networks. arXiv preprint arXiv:2106.02260 . Wallace, M. P. and Moodie, E. E. (2015). Doubly-robust dynamic treatment regimen estimation via weighted least squares. Biometrics ,71(3), 636–644. Wang, K., Pei, H., Cao, J., and Zhong, P. (2020). Robust regularized extreme learning machine for regression with non-convex loss function via dc program. Journal of the Franklin Institute ,357(11), 7069–7091. Wang, Y. and Scott, C. (2023a). On classification-calibration of gamma-phi losses. In The Thirty Sixth Annual Conference on Learning Theory , pages 4929–4951. PMLR. Wang, Y. and Scott, C. (2023b). Unified binary and multiclass margin-based classification. arXiv preprint arXiv:2311.17778 . Wang, Y., Fu, H., and Zeng, D. (2018). Learning optimal personalized treatment rules in consideration of benefit and risk: with an application to treating type 2 diabetes patients with insulin therapies. Journal of the American Statistical Association ,113(521), 1–13. Watkins, C. J. C. H. (1989). Learning from Delayed Rewards . Ph.D. thesis, King’s College, Cambridge, UK. Wellner, J. A. (2005). 
Empirical processes: Theory and applications. Lecture notes. Weston, J., Watkins, C., et al. (1999). Support vector machines for multi-class pattern recognition. In Esann , volume 99, pages 219–224. Wu, Y. and Liu, Y. (2007). Robust truncated hinge loss support vector machines. Journal of the American Statistical Association ,102(479), 974–983. Xu, T., Wang, J., and Fang, Y. (2014). A model-free estimation for the covariate-adjusted youden index and its associated cut-point. Statistics in medicine ,33(28), 4963–4974. Xu, Y., M¨ uller, P., Wahed, A. S., and Thall, P. F. (2016). Bayesian nonparametric estimation for dynamic treatment regimes with sequential transition times. Journal of the American Statistical Association ,111(515), 921–950. Xue, F., Zhang, Y., Zhou, W., Fu, H., and Qu, A. (2022). Multicategory angle-based learning for estimating optimal dynamic treatment regimes with censored data. Journal of the American Statistical Association ,117(539), 1438–1451. Yang, Y. (1999). Minimax nonparametric classification. I. rates of convergence. IEEE Transactions on Information Theory , 45(7), 2271–2284. Zajonc, T. (2012). Bayesian inference for dynamic treatment regimes: Mobility, equity, and efficiency in student tracking. Journal of the American Statistical Association ,107(497), 80–92. Zhang, C. and Liu, Y. (2014). Multicategory angle-based large-margin classification. Biometrika ,101(3), 625–640. Zhang, C., Chen, J., Fu, H., He, X., Zhao, Y.-Q., and Liu, Y. (2020). Multicategory outcome weighted margin-based learning for estimating individualized treatment rules. Statistica sinica ,30, 1857. Zhang, J. and Tchetgen, E. T. (2024). On identification of dynamic treatment regimes with proxies
|
https://arxiv.org/abs/2505.17285v1
|
of hidden confounders. arXiv preprint arXiv:2402.14942 . S11 ADDITIONAL FACTS/ 135 Zhang, T. (2004). Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research ,5(Oct), 1225–1251. Zhang, Y., Laber, E. B., Davidian, M., and Tsiatis, A. A. (2018). Interpretable dynamic treatment regimes. Journal of the American Statistical Association ,113(524), 1541–1549. Zhang, Z., Jordan, M., Li, W.-J., and Yeung, D.-Y. (2009). Coherence functions for multicategory margin-based classification methods. In Artificial Intelligence and Statistics , pages 647–654. PMLR. Zhao, Y.-Q., Zeng, D., Rush, A. J., and Kosorok, M. R. (2012). Estimating individualized treatment rules using outcome weighted learning. Journal of the American Statistical Association ,107, 1106–1118. Zhao, Y.-Q., Zeng, D., Laber, E. B., and Kosorok, M. R. (2015). New statistical learning methods for estimating optimal dynamic treatment regimes. Journal of the American Statistical Association ,110, 583–598. Zhao, Y.-Q., Laber, E. B., Ning, Y., Saha, S., and Sands, B. E. (2019). Efficient augmentation and relaxation learning for individualized treatment rules using observational data. Journal of Machine Learning Research ,20(48), 1–23. Zhao, Y.-Q., Zhu, R., Chen, G., and Zheng, Y. (2020). Constructing dynamic treatment regimes with shared parameters for censored data. Statistics in medicine ,39(9), 1250–1263. Zhou, Z., Athey, S., and Wager, S. (2022). Offline multi-action policy learning: Generalization and optimization. Operations Research . Zou, H., Zhu, J., and Hastie, T. (2008). New multicategory boosting algorithms based on multicategory Fisher-consistent losses. The Annals of Applied Statistics ,2(4), 1290.
|
https://arxiv.org/abs/2505.17285v1
|
Wasserstein Transfer Learning

Kaicheng Zhang^1*, Sinian Zhang^2*, Doudou Zhou^3†, Yidong Zhou^4†

^1 School of Mathematical Sciences, Zhejiang University, China
^2 Division of Biostatistics and Health Data Science, University of Minnesota, USA
^3 Department of Statistics and Data Science, National University of Singapore, Singapore
^4 Department of Statistics, University of California, Davis, USA

ddzhou@nus.edu.sg, ydzhou@ucdavis.edu

May 26, 2025

arXiv:2505.17404v1 [cs.LG] 23 May 2025

Abstract

Transfer learning is a powerful paradigm for leveraging knowledge from source domains to enhance learning in a target domain. However, traditional transfer learning approaches often focus on scalar or multivariate data within Euclidean spaces, limiting their applicability to complex data structures such as probability distributions. To address this, we introduce a novel framework for transfer learning in regression models whose outputs are probability distributions residing in the Wasserstein space. When the informative subset of transferable source domains is known, we propose an estimator with provable asymptotic convergence rates, quantifying the impact of domain similarity on transfer efficiency. For cases where the informative subset is unknown, we develop a data-driven transfer learning procedure designed to mitigate negative transfer. The proposed methods are supported by rigorous theoretical analysis and are validated through extensive simulations and real-world applications.

Keywords: Data Aggregation, Domain Adaptation, Multitask Learning, non-Euclidean Output, Optimal Transport

*Equal contribution. †Corresponding author.

1 Introduction

In recent years, transfer learning (Torrey & Shavlik 2010) has emerged as a powerful paradigm in machine learning, enabling models to leverage knowledge acquired from one domain and apply it to related tasks in another.
This approach has proven especially valuable in scenarios where data collection and labeling are costly, or where tasks exhibit inherent similarities in structure or representation. While early successes focused on conventional data types such as images (Shin et al. 2016), text (Raffel et al. 2020), and tabular data (Weiss et al. 2016), there is growing interest in extending these methods to more complex data structures. Such data often reside in non-Euclidean spaces and lack basic algebraic operations such as addition, subtraction, or scalar multiplication, posing challenges for traditional learning algorithms. A key example is probability distributions (Petersen et al. 2022), where, for instance, the sum of two density functions does not yield a valid density.

Samples of univariate probability distributions are increasingly encountered across various research domains, such as mortality analysis (Ghodrati & Panaretos 2022), temperature studies (Zhu & Müller 2023), and physical activity monitoring (Lin et al. 2023), among others (Petersen et al. 2022). Recently, there has been a growing focus on directly modeling distributions as elements of the Wasserstein space, a geodesic metric space related to optimal transport (Villani 2009, Panaretos & Zemel 2020). The absence of a linear structure in this space motivates the development of specialized transfer learning techniques that respect its intrinsic geometry.

To address this gap, we introduce Wasserstein Transfer Learning (WaTL), a novel transfer learning framework for regression models whose outputs are univariate probability distributions. WaTL effectively leverages knowledge from source domains to improve learning in a target domain by intrinsically incorporating the Wasserstein metric, which provides a natural way to measure discrepancies between probability distributions.

1.1 Contributions

The primary contributions of this work are summarized as follows.

Methodology. We propose a novel transfer learning framework for regression models with distributional outputs, addressing the challenges inherent in the Wasserstein space, which lacks a conventional linear structure. Our framework includes an efficient algorithm for cases where the informative subset of source domains is known, and a data-driven algorithm for scenarios where the subset is unknown. To the best of our knowledge, this is the first comprehensive transfer learning approach specifically designed for regression models with outputs residing in the Wasserstein space.

Theoretical analysis. We establish asymptotic convergence rates for the WaTL algorithm both when the informative set is known and in the more challenging scenario where it must be estimated. In the latter case, we also prove that the informative set can be consistently identified. In both settings, we demonstrate that WaTL improves model performance on the target data by leveraging information from the source domains. The proofs rely heavily on empirical process theory and a careful analysis of the covariate structure. Our key theoretical results extend beyond responses lying in the Wasserstein space, offering potential applications to other complex outputs.

Simulation studies and real-world applications. We evaluate WaTL through simulations and real data applications, demonstrating its effectiveness in improving target model performance by leveraging source domain information. The benefits become more pronounced with larger source sample sizes, underscoring its ability to harness transferable knowledge.

1.2 Related Work

Transfer learning. Transfer learning aims to improve performance in a target population by leveraging information from a related source population and has seen wide application across domains (e.g., Huang et al. 2013, Gong et al. 2012, Choi et al. 2017, Zhou et al. 2024).
Recent theoretical developments have focused on regression in Euclidean settings, including high-dimensional linear (Li et al. 2022) and generalized linear models (Tian & Feng 2023), nonparametric regression (Cai & Pu 2024, Lin & Reimherr 2024), and function mean estimation from discretely sampled data (Cai et al. 2024). In parallel, optimal transport has been used to measure distributional shifts for domain adaptation (Courty et al. 2017, Redko et al. 2017). However, to the best of our knowledge, no existing work has investigated transfer learning in regression models where outputs are probability distributions residing in the Wasserstein space. This represents a significant gap in the literature, highlighting the need for novel methodologies that address this challenging yet important setting.

Distributional data analysis. The increasing prevalence of data where distributions serve as the fundamental units of observation has spurred the development of distributional data analysis (Petersen et al. 2022). Recent advancements in this field include geodesic principal component analysis in the Wasserstein space (Bigot et al. 2017), autoregressive models for time series of distributions (Zhang et al. 2022, Zhu & Müller 2023), and distribution-on-distribution regression (Ghodrati & Panaretos 2022, Chen et al. 2023). Leveraging the Wasserstein metric, regression models with distributional outputs and Euclidean inputs can be viewed as a special case of Fréchet regression (Petersen & Müller 2019), which extends linear and local linear regression to outputs residing in general metric spaces. In practical scenarios where only finite samples from the unknown distributional output are available, empirical measures have been utilized as substitutes for the unobservable distributions in regression models (Zhou & Müller 2024).

2 Preliminaries

2.1 Notations

Let $L^2(0,1)$ be the space of square-integrable functions over the interval $(0,1)$, with the associated $L^2$ norm and metric denoted by $\|\cdot\|_2$ and $d_{L^2}$, respectively; that is, $\|g\|_2 = \{\int_0^1 g^2(z)\,dz\}^{1/2}$ and $d_{L^2}(g_1,g_2) = \|g_1 - g_2\|_2$. For a vector $Z$, $\|Z\|$ denotes the Euclidean norm. Given a matrix $\Sigma$, we define its spectrum as the set of all its singular values. For a subgaussian random vector $X$, we define the subgaussian norm as
$$\|X\|_{\Psi_2} := \sup_{\|v\|=1} \inf\bigl\{ t > 0 : E\bigl(e^{\langle X, v\rangle^2 / t^2}\bigr) \le 2 \bigr\}.$$
We write $a_n \lesssim b_n$ if there exists a positive constant $C$ such that $a_n \le C b_n$ when $n$ is large enough, and $a_n \asymp b_n$ if $a_n \lesssim b_n$ and $b_n \lesssim a_n$. The notation $a_n = O_p(b_n)$ means that $P(|a_n/b_n| \le C) \to 1$ for some constant $C > 0$, while $a_n = o_p(b_n)$ means that $P(|a_n/b_n| > c) \to 0$ for any constant $c > 0$. Superscripts typically indicate different data sources, while subscripts distinguish individual samples from the same source.

2.2 Wasserstein Space

Let $\mathcal{W}$ denote the space of probability distributions on $\mathbb{R}$ with finite second moments, equipped with the 2-Wasserstein, or simply Wasserstein, metric. For two distributions $\mu_1, \mu_2 \in \mathcal{W}$, the Wasserstein metric is given by $d_W^2(\mu_1,\mu_2) = \inf_{\pi \in \Pi(\mu_1,\mu_2)} \int_{\mathbb{R}\times\mathbb{R}} |s-t|^2 \, d\pi(s,t)$, where $\Pi(\mu_1,\mu_2)$ denotes the set of all joint distributions with marginals $\mu_1$ and $\mu_2$ (Kantorovich 1942). For a probability measure $\mu \in \mathcal{W}$ with cumulative distribution function $F_\mu$, we define the quantile function $F_\mu^{-1}$ as the left-continuous inverse of $F_\mu$, namely $F_\mu^{-1}(u) = \inf\{t \in \mathbb{R} : F_\mu(t) \ge u\}$ for $u \in (0,1)$. It has been established (Villani 2009) that the Wasserstein metric can be expressed as the $L^2$ metric between quantile functions:
$$d_W^2(\mu_1,\mu_2) = \int_0^1 \bigl\{F_{\mu_1}^{-1}(u) - F_{\mu_2}^{-1}(u)\bigr\}^2 \, du. \tag{1}$$
The space $\mathcal{W}$, endowed with the Wasserstein metric, forms a complete and separable metric space, commonly known as the Wasserstein space (Villani 2009).

Assuming $E\{d_W^2(\nu,\mu)\} < \infty$ for all $\mu \in \mathcal{W}$, the Fréchet mean (Fréchet 1948) of a random distribution $\nu \in \mathcal{W}$ is given by $\nu_\oplus = \arg\min_{\mu \in \mathcal{W}} E\{d_W^2(\nu,\mu)\}$. Since the Wasserstein space $\mathcal{W}$ is a Hadamard space (Kloeckner 2010), the Fréchet mean is well defined and unique. Moreover, from (1), it follows that the quantile function of the Fréchet mean, denoted $F_{\nu_\oplus}^{-1}$, satisfies $F_{\nu_\oplus}^{-1}(u) = E\{F_\nu^{-1}(u)\}$, $u \in (0,1)$.

2.3 Fréchet Regression

Consider a random pair $(X,\nu)$ with joint distribution $F$ on the product space $\mathbb{R}^p \times \mathcal{W}$. Let $X$ have mean $\theta = E(X)$ and covariance matrix $\Sigma = \mathrm{Var}(X)$, where $\Sigma$ is assumed to be positive definite. To establish a regression framework for predicting the distributional response $\nu$ from the covariate $X$, we employ the Fréchet regression model, which extends multiple linear regression and local linear regression to scenarios where responses reside in a metric space (Petersen & Müller 2019). The Fréchet regression function is defined as the conditional Fréchet mean of $\nu$ given $X = x$,
$$m(x) = \arg\min_{\mu \in \mathcal{W}} E\{d_W^2(\nu,\mu) \mid X = x\}.$$
For a detailed exposition of Fréchet regression, we refer the reader to Petersen & Müller (2019).

Given $n$ independent realizations $\{(X_i,\nu_i)\}_{i=1}^n$, we define the empirical mean and covariance of $X$ as $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ and $\widehat{\Sigma} = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})(X_i - \bar{X})^T$. Global Fréchet regression extends classical multiple linear regression and estimates the conditional Fréchet mean as $m_G(x) = \arg\min_{\mu \in \mathcal{W}} E\{s_G(x)\, d_W^2(\nu,\mu)\}$, where the weight function is given by $s_G(x) = 1 + (X - \theta)^T \Sigma^{-1}(x - \theta)$. The empirical estimator is formulated as
$$\widehat{m}_G(x) = \arg\min_{\mu \in \mathcal{W}} \frac{1}{n}\sum_{i=1}^n s_{iG}(x)\, d_W^2(\nu_i,\mu),$$
where $s_{iG}(x) = 1 + (X_i - \bar{X})^T \widehat{\Sigma}^{-1}(x - \bar{X})$.

Similarly, local Fréchet regression extends classical local linear regression to settings with metric space-valued outputs. In the case of a scalar predictor $X \in \mathbb{R}$, the local Fréchet regression function is $m_{L,h}(x) = \arg\min_{\mu \in \mathcal{W}} E\{s_L(x,h)\, d_W^2(\nu,\mu)\}$, where the weight function is $s_L(x,h) = K_h(X-x)\{u_2 - u_1(X-x)\}/\sigma_0^2$, with $u_j = E\{K_h(X-x)(X-x)^j\}$, $j = 0,1,2$, and $\sigma_0^2 = u_0 u_2 - u_1^2$. Here, $K_h(\cdot) = h^{-1}K(\cdot/h)$ is a kernel function with bandwidth $h$. The empirical version is given by $\widehat{m}_{L,h}(x) = \arg\min_{\mu \in \mathcal{W}} \frac{1}{n}\sum_{i=1}^n s_{iL}(x,h)\, d_W^2(\nu_i,\mu)$, where $s_{iL}(x,h) = K_h(X_i-x)\{\widehat{u}_2 - \widehat{u}_1(X_i-x)\}/\widehat{\sigma}_0^2$, with $\widehat{u}_j = n^{-1}\sum_{i=1}^n K_h(X_i-x)(X_i-x)^j$, $j = 0,1,2$, and $\widehat{\sigma}_0^2 = \widehat{u}_0\widehat{u}_2 - \widehat{u}_1^2$.

3 Methodology

3.1 Setup

We consider a transfer learning problem where target data $\{(X_i^{(0)}, \nu_i^{(0)})\}_{i=1}^{n_0}$ are sampled independently from the target population $(X^{(0)}, \nu^{(0)}) \sim F_0$, and source data $\{(X_i^{(k)}, \nu_i^{(k)})\}_{i=1}^{n_k}$ are sampled independently from the source populations $(X^{(k)}, \nu^{(k)}) \sim F_k$, for $k = 1,\ldots,K$. The goal is to estimate the target model using both the target data and the source data from the $K$ related studies.

For $k = 0,\ldots,K$, assume $X^{(k)}$ has mean $\theta_k$ and covariance $\Sigma_k$, with $\Sigma_k$ positive definite. Define the empirical mean and covariance of $\{X_i^{(k)}\}_{i=1}^{n_k}$ as $\bar{X}_k = n_k^{-1}\sum_{i=1}^{n_k} X_i^{(k)}$ and $\widehat{\Sigma}_k = \frac{1}{n_k}\sum_{i=1}^{n_k}(X_i^{(k)} - \bar{X}_k)(X_i^{(k)} - \bar{X}_k)^T$. For a fixed $x \in \mathbb{R}^p$, the weight function is $s_G^{(k)}(x) = 1 + (X^{(k)} - \theta_k)^T \Sigma_k^{-1}(x - \theta_k)$, with the sample version $s_{iG}^{(k)}(x) = 1 + (X_i^{(k)} - \bar{X}_k)^T \widehat{\Sigma}_k^{-1}(x - \bar{X}_k)$. The target regression function at a given $x \in \mathbb{R}^p$ is then $m_G^{(0)}(x) = \arg\min_{\mu \in \mathcal{W}} E\{s_G^{(0)}(x)\, d_W^2(\mu, \nu^{(0)})\}$. In the following, we present the details of transfer learning for global Fréchet regression; the key difference in the local Fréchet regression setting is the use of a different weight function, so the technical details for transfer learning in local Fréchet regression are deferred to Appendix D.
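Because of display (1), the quantities above — the metric $d_W$, the Fréchet mean, and the global Fréchet weights — reduce to ordinary array operations once each distribution is represented by its quantile function on a grid. The following sketch is our own illustration, not the authors' code; function names such as `wasserstein2` and the grid resolution are arbitrary choices.

```python
import numpy as np
from statistics import NormalDist

def wasserstein2(q1, q2, u_grid):
    """2-Wasserstein distance between distributions given by their
    quantile functions q1, q2 on an equispaced u_grid, via display (1)."""
    du = u_grid[1] - u_grid[0]
    return float(np.sqrt(np.sum((q1 - q2) ** 2) * du))

def frechet_mean(quantile_fns):
    """Fréchet mean in the Wasserstein space: the pointwise average of
    quantile functions, since F^{-1}_{nu_plus}(u) = E{F^{-1}_nu(u)}."""
    return np.mean(quantile_fns, axis=0)

def global_frechet_weights(X, x):
    """Empirical global Fréchet weights
    s_iG(x) = 1 + (X_i - Xbar)^T SigmaHat^{-1} (x - Xbar)."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    xbar = X.mean(axis=0)
    Xc = X - xbar
    sigma_hat = Xc.T @ Xc / X.shape[0]
    return 1.0 + Xc @ np.linalg.solve(sigma_hat, np.asarray(x, dtype=float) - xbar)

u = np.linspace(0.001, 0.999, 999)
q_a = np.array([NormalDist(0.0, 1.0).inv_cdf(p) for p in u])  # N(0, 1)
q_b = q_a + 1.0                                               # N(1, 1)
d = wasserstein2(q_a, q_b, u)                # exact value is 1
q_mean = frechet_mean(np.stack([q_a, q_b]))  # quantiles of the Fréchet mean

rng = np.random.default_rng(0)
s = global_frechet_weights(rng.normal(size=(50, 2)), np.array([0.3, -0.1]))
mean_weight = s.mean()  # equals 1: the centered covariates sum to zero
```

The weighted global Fréchet estimate at $x$ is then the $s$-weighted average of the sample quantile functions, projected back onto the set of non-decreasing functions.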
The set of informative auxiliary samples (the informative set) consists of sources sufficiently similar to the target data. Formally, the informative set is defined as
$$\mathcal{A}_\psi = \bigl\{1 \le k \le K : \|f^{(0)}(x) - f^{(k)}(x)\|_2 \le \psi\bigr\}$$
for some $\psi > 0$, where $f^{(k)}(x) = E\{s_G^{(k)}(x) F_{\nu^{(k)}}^{-1}\}$. For simplicity, let $n_A = \sum_{k=1}^K n_k$.

3.2 Wasserstein Transfer Learning

We propose the Wasserstein Transfer Learning (WaTL) algorithm, which combines information from source datasets under the assumption that all source data are "informative enough." This assumption implies that the discrepancies between the sources and the target are small enough to enhance estimation compared to using the target data alone. When this condition is met for all source datasets, the informative set is $\mathcal{A}_\psi = \{1,\ldots,K\}$. The detailed steps of WaTL are presented in Algorithm 1.

Algorithm 1 Wasserstein Transfer Learning (WaTL)
Input: Target and source data $\{(x_i^{(0)}, \nu_i^{(0)})\}_{i=1}^{n_0} \cup \bigl(\cup_{1 \le k \le K}\{(x_i^{(k)}, \nu_i^{(k)})\}_{i=1}^{n_k}\bigr)$, regularization parameter $\lambda$, and query point $x \in \mathbb{R}^p$.
Output: Target estimator $\widehat{m}_G^{(0)}(x)$.
1: Weighted auxiliary estimator: $\widehat{f}(x) = \frac{1}{n_0 + n_A}\sum_{k=0}^K n_k \widehat{f}^{(k)}(x)$, where $\widehat{f}^{(k)}(x) = n_k^{-1}\sum_{i=1}^{n_k} s_{iG}^{(k)}(x) F_{\nu_i^{(k)}}^{-1}$.
2: Bias correction using target data: $\widehat{f}_0(x) = \arg\min_{g \in L^2(0,1)} \frac{1}{n_0}\sum_{i=1}^{n_0} s_{iG}^{(0)}(x) \|F_{\nu_i^{(0)}}^{-1} - g\|_2^2 + \lambda \|g - \widehat{f}(x)\|_2$.
3: Projection to the Wasserstein space: $\widehat{m}_G^{(0)}(x) = \arg\min_{\mu \in \mathcal{W}} \|F_\mu^{-1} - \widehat{f}_0(x)\|_2$.

In Step 1, the initial estimate $\widehat{f}$ aggregates information from both the target and the sources, weighted by their respective sample sizes. While this step incorporates valuable auxiliary information, the resulting estimate may be biased due to distributional differences between the target and source populations. In Step 2, the bias in $\widehat{f}$ is corrected by focusing on the target data.
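The WaTL steps, together with the adaptive source selection introduced in Section 3.3, can be sketched numerically at a fixed query point. This is our own simplification, not the authors' implementation: it takes all Fréchet weights equal to 1 (a covariate-free special case), replaces the Step 2 penalty by the squared version $\lambda\|g-\widehat f(x)\|_2^2$ so that the minimizer has a pointwise closed form, and uses sorting (monotone rearrangement) as a simple surrogate for the exact $L^2$ projection onto quantile functions.

```python
import numpy as np

def watl_estimate(target_q, source_qs, lam):
    """Minimal sketch of Algorithm 1 at a fixed query point, with all
    Fréchet weights set to 1 for clarity.
    target_q: (n0, m) target quantile functions on a shared u-grid;
    source_qs: list of (n_k, m) arrays, one per source."""
    # Step 1: weighted auxiliary estimator = sample-size-weighted average,
    # i.e. the pooled mean of all quantile functions.
    pooled = np.vstack([target_q] + list(source_qs))
    f_hat = pooled.mean(axis=0)
    # Step 2: bias correction toward the target. With the penalty taken as
    # lam * ||g - f_hat||_2^2 (squared, for a closed form), the pointwise
    # minimizer is a convex combination of target mean and pooled mean.
    f0 = (target_q.mean(axis=0) + lam * f_hat) / (1.0 + lam)
    # Step 3: projection onto the Wasserstein space, i.e. onto non-decreasing
    # functions; sorting is a simple surrogate for the exact L2 projection.
    return np.sort(f0)

def select_informative(target_q, source_qs, L):
    """Step 1 of Algorithm 2 (sketch): rank sources by the empirical
    discrepancy psi_k between mean quantile functions, keep the L closest."""
    f_target = target_q.mean(axis=0)
    psi = np.array([np.linalg.norm(s.mean(axis=0) - f_target) for s in source_qs])
    return sorted(np.argsort(psi)[:L].tolist())

rng = np.random.default_rng(1)
u = np.linspace(0.01, 0.99, 99)
target = u + 0.05 * rng.normal(size=(20, 99))        # noisy F^{-1}(u) = u
near = u + 0.02 + 0.05 * rng.normal(size=(200, 99))  # informative source
far = u + 0.50 + 0.05 * rng.normal(size=(200, 99))   # dissimilar source
keep = select_informative(target, [near, far], L=1)  # picks the near source
m_hat = watl_estimate(target, [near], lam=0.5)
```

With `lam = 0` the sketch falls back to the target-only estimate, and as `lam` grows it leans on the pooled auxiliary estimate, mirroring the bias/robustness trade-off discussed below.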
The regularization term $\lambda \|g - \widehat{f}(x)\|_2$ ensures a balance between target-specific precision and auxiliary-informed robustness. Theoretical guidelines for selecting $\lambda$ are provided in Theorem 2. The final step projects the corrected estimate $\widehat{f}_0$ onto the Wasserstein space, ensuring that the output $\widehat{m}_G^{(0)}(x)$ respects the intrinsic geometry of $\mathcal{W}$. Such a projection exists and is unique because $\mathcal{W}$ is a closed and convex subset of $L^2(0,1)$.

3.3 Adaptive Selection of Informative Sources

In many practical scenarios, the assumption that all source datasets belong to the informative set $\mathcal{A}_\psi$ may not hold. To address this, we extend WaTL with an adaptive selection procedure that identifies the informative set. The discrepancy for each source dataset $k$ is defined as $\psi_k = \|f^{(0)}(x) - f^{(k)}(x)\|_2$, which measures the distance between the target and the auxiliary distribution. Since $f^{(0)}(x)$ and $f^{(k)}(x)$ are unknown, we compute an empirical estimate $\widehat{\psi}_k$ of $\psi_k$, which is used to adaptively estimate the informative set $\mathcal{A}_\psi$. To implement this approach, an additional input parameter $L$, which specifies the approximate number of informative sources, is required. In practice, $L$ can be treated as a tuning parameter and selected through cross-validation or other model selection techniques. The full procedure is formalized in Algorithm 2.

Algorithm 2 Adaptive Wasserstein Transfer Learning (AWaTL)
Input: Target and source data $\{(x_i^{(0)}, \nu_i^{(0)})\}_{i=1}^{n_0} \cup \bigl(\cup_{1 \le k \le K}\{(x_i^{(k)}, \nu_i^{(k)})\}_{i=1}^{n_k}\bigr)$, regularization parameter $\lambda$, number of informative sources $L$, and query point $x \in \mathbb{R}^p$.
Output: Target estimator $\widehat{m}_G^{(0)}(x)$.
1: Compute discrepancy scores. For each source dataset $k = 1,\ldots,K$, compute the empirical discrepancy $\widehat{\psi}_k = \|\widehat{f}^{(0)}(x) - \widehat{f}^{(k)}(x)\|_2$, where $\widehat{f}^{(k)}(x) = n_k^{-1}\sum_{i=1}^{n_k} s_{iG}^{(k)}(x) F_{\nu_i^{(k)}}^{-1}$. Construct the adaptive informative set by selecting the $L$ smallest discrepancy scores: $\widehat{\mathcal{A}} = \{1 \le k \le K : \widehat{\psi}_k \text{ is among the smallest } L \text{ values}\}$.
2: Weighted auxiliary estimator: $\widehat{f}(x) = \frac{1}{\sum_{k \in \widehat{\mathcal{A}} \cup \{0\}} n_k}\sum_{k \in \widehat{\mathcal{A}} \cup \{0\}} n_k \widehat{f}^{(k)}(x)$.
3: Bias correction using target data: $\widehat{f}_0(x) = \arg\min_{g \in L^2(0,1)} \frac{1}{n_0}\sum_{i=1}^{n_0} s_{iG}^{(0)}(x) \|F_{\nu_i^{(0)}}^{-1} - g\|_2^2 + \lambda \|g - \widehat{f}(x)\|_2$.
4: Projection to the Wasserstein space: $\widehat{m}_G^{(0)}(x) = \arg\min_{\mu \in \mathcal{W}} \|F_\mu^{-1} - \widehat{f}_0(x)\|_2$.

The proposed algorithm adaptively identifies the informative set $\widehat{\mathcal{A}}$ in Step 1 by evaluating the empirical discrepancy scores $\widehat{\psi}_k$. The selected set is then used to compute the weighted auxiliary estimator in Step 2, ensuring that only the most relevant source datasets contribute to the final target estimator. Steps 3 and 4 follow the same bias correction and projection procedures as described in Section 3.2. This adaptive approach enhances the robustness of WaTL by excluding irrelevant or highly dissimilar source datasets.

4 Theory

In this section, we establish the theoretical guarantees of the proposed WaTL and AWaTL algorithms using techniques from empirical process theory (van der Vaart & Wellner 2023). For WaTL, we first present a lemma that characterizes the convergence rate of each term contributing to the weighted auxiliary estimator $\widehat{f}(x)$ computed in Step 1.

Condition 1. For $k = 0,\ldots,K$, the covariate $X^{(k)}$ is subgaussian with $\|X^{(k)}\|_{\Psi_2} \in [\sigma_1, \sigma_2]$, the mean vector satisfies $\|\theta_k\| \le R_1$, and the spectrum of the covariance matrix $\Sigma_k$ lies within the interval $[R_2, R_3]$. Moreover, $\nu^{(k)}$ is supported on a bounded interval.

Lemma 1. Let $\widehat{f}^{(k)}(x) = n_k^{-1}\sum_{i=1}^{n_k} s_{iG}^{(k)}(x) F_{\nu_i^{(k)}}^{-1}$ and let $f^{(k)}(x) = E\{s_G^{(k)}(x) F_{\nu^{(k)}}^{-1}\}$ be its population counterpart, for $k = 0,\ldots,K$. Then, under Condition 1, $\|\widehat{f}^{(k)}(x) - f^{(k)}(x)\|_2 = O_p(n_k^{-1/2})$.

To derive the convergence rate of $\widehat{f}(x)$, we rely on the following condition.

Condition 2. There exist positive constants $C_1, C_2, C_3, C_4$ such that
$$\sum_{k=0}^K 2 e^{-C_1 \frac{4}{R_3^2} n_k} + C_2 e^{-C_3 \bigl(\frac{2}{R_3} - \sqrt{\frac{C_4}{n_k}}\bigr) n_k} = o(1),$$
where $R_3$ is as in Condition 1. In addition,
$$\frac{\sqrt{n_0 + n_A}}{\min_{1 \le k \le K} n_k} = o(1), \qquad \frac{\sqrt{n_0 + n_A}}{n_0} = o(1).$$

Remark 1. These conditions are typically satisfied in practice, as they are not overly restrictive. Condition 1 requires that the covariates and covariance matrices are bounded in a specific way, which is standard in the transfer learning
literature and generally holds in real-world scenarios (Cai et al. 2024). In particular, this assumption is common when dealing with high-dimensional data where regularization is necessary (Li et al. 2022). Condition 2 assumes that the number of samples is significantly larger than the number of sources $K$, which is reasonable since $K$ is usually fixed in practical settings and we often have abundant source data compared to target data. In practice, Condition 2 may be slightly violated if there exists a source $k$ with a relatively small $n_k$, such that $\sqrt{n_0 + n_A}/n_k$ is not $o(1)$. In such cases, the $k$th source can simply be excluded from Step 1 of the WaTL algorithm.

Theorem 1. Suppose Conditions 1 and 2 hold. Then, for the WaTL algorithm and a fixed $x \in \mathbb{R}^p$,
$$\|\widehat{f}(x) - f(x)\|_2 = O_p\left(\frac{\sum_{k=0}^K \sqrt{n_k}}{n_0 + n_A} + (n_0 + n_A)^{-1/2}\right),$$
where $f(x) = (n_0 + n_A)^{-1}\sum_{k=0}^K n_k f^{(k)}(x)$.

The proof of Theorem 1 involves a detailed analysis of the sample covariance matrix and leverages M-estimation techniques within the framework of empirical process theory. The result extends beyond responses in the Wasserstein space, applying to other metric spaces that satisfy mild regularity conditions. Consequently, Theorem 1 provides a versatile framework that can be applied to transfer learning in regression models with responses such as networks (Zhou & Müller 2022), symmetric positive-definite matrices (Pennec et al. 2006), or trees (Billera et al. 2001).

The following theorem establishes the convergence rate of the estimated regression function $\widehat{m}_G^{(0)}(x)$ in the WaTL algorithm.

Theorem 2. Assume Conditions 1 and 2 hold and that the regularization parameter satisfies $\lambda \asymp n_0^{-1/2+\epsilon}$ for some $\epsilon > 0$. Then, for the WaTL algorithm and a fixed $x \in \mathbb{R}^p$,
$$d_W^2\bigl(\widehat{m}_G^{(0)}(x), m_G^{(0)}(x)\bigr) = O_p\left(n_0^{-1/2+\epsilon}\left(\psi + \frac{\sum_{k=0}^K \sqrt{n_k}}{n_0 + n_A} + (n_0 + n_A)^{-1/2}\right)\right),$$
where $\psi = \max_{1 \le k \le K}\|f^{(0)}(x) - f^{(k)}(x)\|_2$ quantifies the maximum discrepancy between the target and the sources.

Theorem 2 can be contrasted with the convergence rate of global Fréchet regression (Petersen & Müller 2019) applied solely to the target data, where the rate for regression with distributional outputs is $d_W(\widehat{m}_G^{(0)}(x), m_G^{(0)}(x)) = O_p(n_0^{-1/2})$. The WaTL algorithm achieves a faster convergence rate when there are sufficient source data and the auxiliary sources are informative enough, satisfying $\psi \lesssim n_0^{-1/2-\epsilon}$. This result highlights that knowledge transfer from auxiliary samples can significantly enhance the learning performance of the target model, provided the auxiliary sources are closely aligned with the target data.

For the AWaTL algorithm, we require the following condition.

Condition 3. The regularization parameter satisfies $\lambda \asymp n_0^{-1/2+\epsilon}$ for some $\epsilon > 0$, and for some $\epsilon' > \epsilon$ there exists a non-empty subset $A \subset \{1,\ldots,K\}$ such that
$$\frac{n_*^{-1/2} + n_0^{-1/2}}{\min_{k \in A^C} \psi_k} = o(1), \qquad \frac{\max_{k \in A} \psi_k}{n_*^{-1/2} + n_0^{-1/2-\epsilon'}} = O(1),$$
where $A^C = \{1,\ldots,K\} \setminus A$, $n_* = \min_{1 \le k \le K} n_k$, and $\psi_k = \|f^{(0)}(x) - f^{(k)}(x)\|_2$.

Remark 2. We allow $A$ to be the whole set $\{1,\ldots,K\}$, in which case the condition becomes $\max_{1 \le k \le K}\psi_k / (n_*^{-1/2} + n_0^{-1/2-\epsilon'}) = O(1)$. Besides, if $A$ exists, then it is unique, since
$$\Bigl\{1 \le k \le K : \tfrac{n_*^{-1/2} + n_0^{-1/2}}{\psi_k} = o(1)\Bigr\} \cap \Bigl\{1 \le k \le K : \tfrac{\psi_k}{n_*^{-1/2} + n_0^{-1/2-\epsilon'}} = O(1)\Bigr\} = \emptyset.$$

Condition 3 ensures that the source datasets can be effectively partitioned into two groups: one that is informative and another that is sufficiently different from the target data. This separation guarantees that the informative set can be accurately identified. Under this condition, setting the number of informative sources as $L = |A|$, we establish the following rate of convergence for the AWaTL algorithm. In practice, $L$ can be decided
https://arxiv.org/abs/2505.17404v1
by cross-validation.

Theorem 3. Under Condition 3 and the conditions of Theorem 2, for the AWaTL algorithm with a fixed number of sources $K$, we have $P(\widehat{A} = A) \to 1$ and
$$d_W^2\big(\widehat{m}^{(0)}_G(x), m^{(0)}_G(x)\big) = O_p\Big( n_0^{-1/2+\epsilon} \Big( \max_{k \in A} \psi_k + \frac{\sum_{k \in A \cup \{0\}} \sqrt{n_k}}{\sum_{k \in A \cup \{0\}} n_k} + \big( \sum_{k \in A \cup \{0\}} n_k \big)^{-1/2} \Big) \Big).$$

The convergence rate of the AWaTL algorithm simplifies to $n_0^{-1-\epsilon'+\epsilon}$ when the informative source data is sufficiently large. This rate surpasses that of global Fréchet regression (Petersen & Müller 2019) applied solely to the target data, offering a theoretical guarantee that AWaTL effectively mitigates negative transfer by selectively integrating relevant auxiliary information.

5 Numerical Experiments

In this section, we evaluate the performance of the proposed WaTL algorithm alongside two baseline approaches: global Fréchet regression using only target data (Only Target) and using only source data (Only Source). Consider $K = 5$ source sites. The data is generated as follows. For the target population, we sample $X^{(0)} \sim U(0,1)$ and generate the response distribution, represented by its quantile function, as
$$F^{-1}_{\nu^{(0)}}(u) = w^{(0)}(1-u)u + (1 - X^{(0)})u + X^{(0)} F^{-1}_{Z^{(0)}}(u), \quad u \in (0,1),$$
where $Z^{(0)} \sim N(0.5, 1)|_{(0,1)}$ and $w^{(0)} \sim N(0,1)|_{(-0.5,0.5)}$. Here, $N(\mu,\sigma^2)|_{(a,b)}$ denotes a normal distribution with mean $\mu$ and variance $\sigma^2$, truncated to the interval $(a,b)$. For the source populations, we define $\psi_k = 0.1k$ for $k = 1,\dots,K$ and generate $X^{(k)} \sim U(0,1)$. The corresponding response distribution is generated as
$$F^{-1}_{\nu^{(k)}}(u) = w^{(k)}(1-u)u + (1 - X^{(k)})u + X^{(k)} F^{-1}_{Z^{(k)}}(u), \quad u \in (0,1),$$
where $Z^{(k)} \sim N(0.5, 1-\psi_k)|_{(0,1)}$ and $w^{(k)} \sim N(0,1)|_{(-0.5,0.5)}$. Consequently, for each predictor $x$, the true regression function, expressed as a quantile function, is $m^{(k)}_G(x)(u) = (1-x)u + x F^{-1}_{Z^{(k)}}(u)$ for $k = 0,1,\dots,K$. We vary the target sample size $n_0$ from 200 to 800, while the source sample size is set as $n_k = k\tau$, where $\tau \in \{100, 200\}$ and $k = 1,\dots,K$.
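As an illustrative sketch (not the authors' code), the target-population generator above can be written directly in terms of quantile functions; for univariate distributions, the 2-Wasserstein distance is the $L^2$ distance between quantile functions, which is also how the prediction risk below can be evaluated on a grid. The truncated-normal quantile is implemented with the standard-library `statistics.NormalDist`.

```python
import numpy as np
from statistics import NormalDist

def truncnorm_ppf(u, mu, sigma, a, b):
    """Quantile function of N(mu, sigma^2) truncated to (a, b)."""
    nd = NormalDist()
    Fa = nd.cdf((a - mu) / sigma)
    Fb = nd.cdf((b - mu) / sigma)
    return mu + sigma * np.array([nd.inv_cdf(Fa + ui * (Fb - Fa)) for ui in np.atleast_1d(u)])

def target_response(u, x, w):
    """F^{-1}_{nu(0)}(u) = w(1-u)u + (1-x)u + x F^{-1}_{Z(0)}(u),
    with Z(0) ~ N(0.5, 1) truncated to (0, 1)."""
    return w * (1 - u) * u + (1 - x) * u + x * truncnorm_ppf(u, 0.5, 1.0, 0.0, 1.0)

def w2_distance(q1, q2, u):
    """2-Wasserstein distance between univariate laws, computed as the L2
    distance of their quantile functions (trapezoidal rule on a grid)."""
    d2 = (q1 - q2) ** 2
    return float(np.sqrt(np.sum(0.5 * (d2[1:] + d2[:-1]) * np.diff(u))))

u = np.linspace(0.001, 0.999, 999)
x, w = 0.4, 0.2                      # a predictor value and one draw of w^(0)
q = target_response(u, x, w)         # a sampled response quantile function
m = target_response(u, x, 0.0)       # the true regression function (E[w^(0)] = 0)
```

Since the $w^{(0)}(1-u)u$ perturbation vanishes in expectation, setting `w = 0` recovers the true regression function $m^{(0)}_G(x)$, and `w2_distance(q, m, u)` measures the deviation of one sampled response from it.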
The regularization parameter $\lambda$ in Algorithm 1 is selected via five-fold cross-validation, varying from 0 to 3 in increments of 0.1. To evaluate performance, we sample 100 predictors uniformly from the target distribution. Using Algorithm 1, we compute $\widehat{m}^{(0)}_G(x)$ and compare it with the corresponding estimates obtained from global Fréchet regression using only target or source data. Performance is assessed using the root mean squared prediction risk
$$\mathrm{RMSPR} = \sqrt{ \frac{1}{100} \sum_{i=1}^{100} d_W^2\big( \widehat{m}^{(0)}_G(x_i), m^{(0)}_G(x_i) \big) },$$
where $x_i$ denotes the sampled predictor, $\widehat{m}^{(0)}_G(x_i)$ is the estimated function, and $m^{(0)}_G(x_i)$ represents the ground truth. To ensure robustness, we repeat the simulation 50 times and report the average RMSPR, as shown in Figure 1 (a).

As shown in Figure 1 (a), WaTL consistently outperforms global Fréchet regression trained solely on target or source data. When the target sample size $n_0$ is small, the Only Target method exhibits a high RMSPR due to the instability of models trained on limited data. In contrast, WaTL significantly reduces RMSPR by effectively incorporating auxiliary information from the source domain. As $n_0$ increases, the performance of Only Target improves and gradually approaches that of WaTL, which is expected, as larger sample sizes lead to more stable and accurate estimators. Nevertheless, WaTL maintains a consistent advantage across all $n_0$, suggesting that it leverages complementary information from the source. The performance of the Only Source estimator remains nearly unchanged across different $n_0$ values, as it does not benefit from additional target data. Comparing the two panels of Figure 1 (a), we also observe that WaTL improves as the source sample size increases, confirming its ability to effectively integrate source information. This demonstrates the benefit of multi-source transfer learning, where WaTL balances knowledge from
both target and source domains to achieve improved prediction.

To better understand when negative transfer may occur, we conduct an ablation study with $K = 1$ source and vary the similarity parameter $\psi_1$ from 0.01 to 1 in increments of 0.01, with $n_0 = 100$ and $n_1 = 200$. Our method outperforms the Only Target approach when $\psi_1 < 0.9$, while for $\psi_1 \ge 0.9$ the Only Target method becomes preferable. This confirms that negative transfer arises when the source is too dissimilar to the target.

We further evaluate the effectiveness of AWaTL in selecting informative sources. In this experiment, we set $L = 2$ and $n_k = 100$ for $k = 0,\dots,5$. The similarity parameters are set as $\psi_k = 0.1$ for $k = 1,2$ (informative sources), while $\psi_k$ varies from 0.2 to 1 in increments of 0.1 for $k = 3,4,5$ (uninformative sources). We repeat each setting 100 times and report the selection rates in Figure 1 (b). The results show that AWaTL successfully identifies informative sources, with selection rates for sources 1 and 2 rapidly increasing and reaching perfect accuracy when $\psi > 0.6$. This highlights the robustness of AWaTL in distinguishing useful sources under varying similarity levels.

Figure 1: (a) Root mean squared prediction risk (RMSPR) of WaTL, Only Source, and Only Target methods under varying target sample sizes, with source sample sizes $\tau = 100$ (left) and $\tau = 200$ (right); (b) Selection rate of each source site as $\psi$ increases.

6 Real-world Applications

We evaluate the WaTL algorithm using data from the National Health and Nutrition Examination Survey (NHANES) 2005–2006, focusing on modeling the distribution of physical activity intensity.
NHANES is a large-scale health survey in the United States that combines interviews with physical examinations to assess the health and nutrition of both adults and children. The dataset includes extensive demographic, socioeconomic, dietary, and medical assessments, providing a comprehensive resource for health-related research. During the 2005–2006 NHANES cycle (https://wwwn.cdc.gov/nchs/nhanes/ContinuousNhanes/Default.aspx?BeginYear=2005), participants aged 6 and older wore an ActiGraph 7164 accelerometer on their right hip for seven days, recording physical activity intensity in 1-minute epochs. Participants were instructed to remove the device during water-based activities and sleep. The device measured counts per minute (CPM), ranging from 0 to 32767, capturing variations in activity levels throughout the monitoring period.

Since female and male participants exhibit distinct physical activity patterns (Feraco et al. 2024), we analyze them separately. Physical activity intensity is influenced by multiple factors, and we consider body mass index (BMI) and age as key predictors (Klein & Becker 2012, Cleven et al. 2023). To accommodate potential nonlinear relationships, we implement local Fréchet regression within the WaTL algorithm, treating the distribution of physical activity intensity as the response and BMI and age as predictors.

Following the data preprocessing steps in Lin et al. (2023), we remove unreliable observations per NHANES protocols. For each participant, we exclude activity counts above 1000 CPM or equal to zero (as zeros may correspond to various low-activity states such as sleep or swimming). Participants with fewer than 100 valid observations or missing BMI, age, or gender information are also excluded. The remaining activity counts over
seven days are concatenated to form the distribution of each participant's activity intensity.

To evaluate WaTL, we set White, Mexican American, and Other Hispanic individuals as sources and Black individuals as the target. For females, the source data include 1308 White, 884 Mexican American, and 108 Other Hispanic individuals. For males, the source data include 1232 White, 805 Mexican American, and 92 Other Hispanic individuals. We set 200 Black participants as the target data for both genders. During evaluation, we perform five-fold cross-validation, using four folds for training and one for testing, cycling through each fold. For comparison, we also apply local Fréchet regression using only the target data. The results, summarized in Figure 2 (a), show that WaTL improves performance over local Fréchet regression for both females and males, demonstrating its ability to leverage information from other participants to enhance modeling for Black participants.

Figure 2: (a) Root mean squared prediction risk (RMSPR) of WaTL and Only Target methods for females and males, evaluated using five-fold cross-validation; (b) Cumulative distribution function of physical activity levels for one selected female (left) and one selected male (right), along with estimates from WaTL and Only Target methods.

To further illustrate the effectiveness of WaTL, we visualize the cumulative distribution function of physical activity levels for one female and one male in Figure 2 (b), along with estimates from WaTL and local Fréchet regression using only the target data. The results indicate that WaTL provides a better fit to the true distribution, outperforming the estimate obtained using only the target data.
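The participant-level filtering described in the preprocessing above can be sketched as follows. This is a hedged illustration only: the record layout and field names (`counts`, `bmi`, `age`, `gender`) are hypothetical placeholders, not the actual NHANES variable names.

```python
def preprocess(records, min_valid=100, max_cpm=1000):
    """Apply the exclusion rules: drop counts above `max_cpm` CPM or equal to
    zero, then drop participants with missing BMI/age/gender or fewer than
    `min_valid` valid observations. `records` maps a participant id to a dict
    with keys 'counts', 'bmi', 'age', 'gender' (illustrative schema)."""
    kept = {}
    for pid, rec in records.items():
        if any(rec[key] is None for key in ("bmi", "age", "gender")):
            continue                                   # missing covariate info
        valid = [c for c in rec["counts"] if 0 < c <= max_cpm]
        if len(valid) >= min_valid:                    # enough valid epochs
            kept[pid] = {**rec, "counts": valid}
    return kept

records = {
    "A": {"counts": [50] * 150 + [0, 5000], "bmi": 25.0, "age": 40, "gender": "F"},
    "B": {"counts": [50] * 60, "bmi": 27.0, "age": 55, "gender": "F"},   # too few
    "C": {"counts": [50] * 150, "bmi": None, "age": 35, "gender": "M"},  # missing BMI
}
clean = preprocess(records)
```

In this toy run, only participant "A" survives: the zero and the 5000 CPM epoch are dropped first, leaving 150 valid observations, while "B" falls below the 100-observation threshold and "C" lacks BMI.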
We further evaluate the robustness of WaTL on a human mortality dataset in Appendix A, where WaTL continues to show strong performance.

7 Conclusion

We introduce Wasserstein transfer learning and its adaptive variant for scenarios where the informative set is unknown, addressing the challenges posed by the lack of linear operations in the Wasserstein space. By leveraging the Wasserstein metric, the proposed algorithm accounts for the non-Euclidean structure of distributional outputs, ensuring compatibility with the geometry of the Wasserstein space. Supported by rigorous theoretical guarantees, the framework demonstrates improved estimation performance compared to methods that rely solely on target data.

This paper focuses on univariate distributions, which arise frequently in real-world applications such as the NHANES and human mortality studies discussed in Section 6. The proposed framework, however, is not limited to the univariate case and can be extended to multivariate distributions by incorporating metrics such as the Sinkhorn (Cuturi 2013) or sliced Wasserstein distance (Bonneel et al. 2015). Extending the methodology to higher dimensions presents an exciting future direction and would involve addressing both computational and theoretical challenges specific to high-dimensional Wasserstein spaces.

Our theoretical analysis assumes that each probability distribution is fully observed, consistent with much of the existing literature on distributional data analysis (Petersen & Müller 2019, Chen et al. 2023). However, this assumption could be unrealistic in practical applications, where only independent samples from the underlying distributions
are available. This limitation can be addressed by replacing the unobservable distributions with empirical measures constructed from these samples (Zhou & Müller 2024). The derived convergence rates can be extended to incorporate an additional term reflecting the number of independent samples per distribution, providing a straightforward way to account for this practical consideration.

A Human Mortality Data

We further assess WaTL using the age-at-death distributions from 162 countries in 2015, compiled from the United Nations Databases (https://data.un.org/) and the UN World Population Prospects 2022 (https://population.un.org/wpp/Download). The dataset provides country-level age-specific death counts, which we convert into smooth age-at-death densities using local linear smoothing; see Figure 3 (a) for an illustration. For this analysis, we define the 24 developed countries as the target site and the remaining 138 developing countries as the source site. Given the limited sample size, we input the entire dataset into the model without performing cross-validation.

Country-level age-at-death distributions are shaped by a range of factors, including economic conditions, healthcare systems, and social and environmental contexts. Among these, we select population density and gross domestic product (GDP) per capita as key predictive covariates. The results, shown in Figure 3 (b), demonstrate that the proposed WaTL method successfully transfers knowledge across these demographically diverse regions, effectively addressing the socioeconomic disparities between the source and target domains.

Figure 3: (a) Age-at-death densities of developed and developing countries; (b) Root mean squared prediction risk (RMSPR) of WaTL and Only Target methods for human mortality data.

References

Bigot, J., Gouet, R., Klein, T. & López, A.
(2017), ‘Geodesic PCA in the Wasserstein space by convex PCA’, Annales de l’Institut Henri Poincaré B: Probability and Statistics 53, 1–26.

Billera, L. J., Holmes, S. P. & Vogtmann, K. (2001), ‘Geometry of the space of phylogenetic trees’, Advances in Applied Mathematics 27(4), 733–767.

Bonneel, N., Rabin, J., Peyré, G. & Pfister, H. (2015), ‘Sliced and Radon Wasserstein barycenters of measures’, Journal of Mathematical Imaging and Vision 51(1), 22–45.

Cai, T. T., Kim, D. & Pu, H. (2024), ‘Transfer learning for functional mean estimation: Phase transition and adaptive algorithms’, Annals of Statistics 52(2), 654–678.

Cai, T. T. & Pu, H. (2024), ‘Transfer learning for nonparametric regression: Non-asymptotic minimax analysis and adaptive procedure’, arXiv preprint arXiv:2401.12272.

Chen, Y., Lin, Z. & Müller, H.-G. (2023), ‘Wasserstein regression’, Journal of the American Statistical Association 118(542), 869–882.

Choi, K., Fazekas, G., Sandler, M. & Cho, K. (2017), Transfer learning for music classification and regression tasks, in ‘The 18th International Society for Music Information Retrieval (ISMIR) Conference 2017, Suzhou, China’, International Society for Music Information Retrieval.

Cleven, L., Syrjanen, J. A., Geda, Y. E., Christenson, L. R., Petersen, R. C., Vassilaki, M., Woll, A. & Krell-Roesch, J. (2023), ‘Association between physical activity and longitudinal change in body mass index in middle-aged and older adults’, BMC Public Health 23(1), 202.

Courty, N., Flamary, R., Habrard, A. & Rakotomamonjy, A. (2017), Joint distribution optimal transportation for domain adaptation, in ‘Advances in Neural Information Processing Systems (NeurIPS)’, Vol. 30, pp. 3730–3739.

Cuturi, M. (2013),
Sinkhorn distances: Lightspeed computation of optimal transport, in C. Burges, L. Bottou, M. Welling, Z. Ghahramani & K. Weinberger, eds, ‘Advances in Neural Information Processing Systems’, Vol. 26.

Feraco, A., Gorini, S., Camajani, E., Filardi, T., Karav, S., Cava, E., Strollo, R., Padua, E., Caprio, M., Armani, A. & Lombardo, M. (2024), ‘Gender differences in dietary patterns and physical activity: An insight with principal component analysis (PCA)’, Journal of Translational Medicine 22(1), 1112.

Fréchet, M. (1948), ‘Les éléments aléatoires de nature quelconque dans un espace distancié’, Annales de l’Institut Henri Poincaré 10(4), 215–310.

Ghodrati, L. & Panaretos, V. M. (2022), ‘Distribution-on-distribution regression via optimal transport maps’, Biometrika 109(4), 957–974.

Gong, B., Shi, Y., Sha, F. & Grauman, K. (2012), Geodesic flow kernel for unsupervised domain adaptation, in ‘2012 IEEE Conference on Computer Vision and Pattern Recognition’, pp. 2066–2073.

Huang, J.-T., Li, J., Yu, D., Deng, L. & Gong, Y. (2013), Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers, in ‘2013 IEEE International Conference on Acoustics, Speech and Signal Processing’, pp. 7304–7308.

Kantorovich, L. V. (1942), ‘On the translocation of masses’, Dokl. Akad. Nauk SSSR 37, 227–229. (Translated in Journal of Mathematical Sciences 133, 1381–1382, 2006.)

Klein, T. & Becker, S. (2012), ‘Age and exercise: a theoretical and empirical analysis of the effect of age and generation on physical activity’, Journal of Public Health 20, 11–21.

Kloeckner, B. (2010), ‘A geometric study of Wasserstein spaces: Euclidean spaces’, Annali della Scuola Normale Superiore di Pisa - Classe di Scienze 9(2), 297–323.

Li, S., Cai, T. T. & Li, H. (2022), ‘Transfer learning for high-dimensional linear regression: Prediction, estimation and minimax optimality’, Journal of the Royal Statistical Society Series B: Statistical Methodology 84(1), 149–173.
Lin, H. & Reimherr, M. (2024), Smoothness adaptive hypothesis transfer learning, in R. Salakhutdinov, Z. Kolter, K. Heller, A. Weller, N. Oliver, J. Scarlett & F. Berkenkamp, eds, ‘Proceedings of the 41st International Conference on Machine Learning’, Vol. 235 of Proceedings of Machine Learning Research, PMLR, pp. 30286–30316.

Lin, Z., Kong, D. & Wang, L. (2023), ‘Causal inference on distribution functions’, Journal of the Royal Statistical Society Series B: Statistical Methodology 85(2), 378–398.

Panaretos, V. M. & Zemel, Y. (2020), An Invitation to Statistics in Wasserstein Space, Springer New York.

Pennec, X., Fillard, P. & Ayache, N. (2006), ‘A Riemannian framework for tensor computing’, International Journal of Computer Vision 66(1), 41–66.

Petersen, A. & Müller, H.-G. (2019), ‘Fréchet regression for random objects with Euclidean predictors’, Annals of Statistics 47(2), 691–719.

Petersen, A., Zhang, C. & Kokoszka, P. (2022), ‘Modeling probability density functions as data objects’, Econometrics and Statistics 21, 159–178.

Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W. & Liu, P. J. (2020), ‘Exploring the limits of transfer learning with a unified text-to-text transformer’, Journal of Machine Learning Research 21(140), 1–67.

Redko, I., Habrard, A. & Sebban, M. (2017), Theoretical analysis of domain adaptation with optimal transport, in M. Ceci, J. Hollmén, L. Todorovski, C. Vens & S. Džeroski, eds, ‘Machine Learning and Knowledge Discovery in Databases’,
Springer International Publishing, Cham, pp. 737–753.

Shin, H.-C., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., Yao, J., Mollura, D. & Summers, R. M. (2016), ‘Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning’, IEEE Transactions on Medical Imaging 35(5), 1285–1298.

Tian, Y. & Feng, Y. (2023), ‘Transfer learning under high-dimensional generalized linear models’, Journal of the American Statistical Association 118(544), 2684–2697.

Torrey, L. & Shavlik, J. (2010), Transfer learning, in ‘Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques’, IGI Global, pp. 242–264.

van der Vaart, A. W. & Wellner, J. (2023), Weak Convergence and Empirical Processes: With Applications to Statistics, 2nd edn, Springer New York.

Villani, C. (2009), Optimal Transport: Old and New, Vol. 338, Springer.

Weiss, K., Khoshgoftaar, T. M. & Wang, D. (2016), ‘A survey of transfer learning’, Journal of Big Data 3(9), 1–40.

Zhang, C., Kokoszka, P. & Petersen, A. (2022), ‘Wasserstein autoregressive models for density time series’, Journal of Time Series Analysis 43(2), 30–52.

Zhou, D., Liu, M., Li, M. & Cai, T. (2024), ‘Doubly robust augmented model accuracy transfer inference with high dimensional features’, Journal of the American Statistical Association, pp. 1–26.

Zhou, Y. & Müller, H.-G. (2022), ‘Network regression with graph Laplacians’, Journal of Machine Learning Research 23(320), 1–41.

Zhou, Y. & Müller, H.-G. (2024), ‘Wasserstein regression with empirical measures and density estimation for sparse data’, Biometrics 80(4), ujae127.

Zhu, C. & Müller, H.-G. (2023), ‘Autoregressive optimal transport models’, Journal of the Royal Statistical Society Series B: Statistical Methodology 85(3), 1012–1033.
arXiv:2505.17469v1 [cs.LG] 23 May 2025

Efficient compression of neural networks and datasets

Lukas Silvester Barth (1,*) lukas.barth@mis.mpg.de
Paulo von Petersenn (1,2,*) vonpetersenn@mis.mpg.de
(1) Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
(2) University of Leipzig, Leipzig, Germany

Abstract

We compare, improve, and contribute methods that substantially decrease the number of parameters of neural networks while maintaining high test accuracy. When applying our methods to minimize description length, we obtain very effective data compression algorithms. In particular, we develop a probabilistic reformulation of $\ell_0$-regularized optimization for nonlinear models that does not require Monte Carlo sampling and thus improves upon previous methods. We also improve upon methods involving smooth approximations to the $\ell_0$ norm, and investigate layerwise methods. We compare the methods on different architectures and datasets, including convolutional networks trained on image datasets and transformers trained on parts of Wikipedia. We also created a synthetic teacher-student setup to investigate compression in a controlled continuous setting. Finally, we conceptually relate compression algorithms to Solomonoff's theory of inductive inference and empirically verify the prediction that regularized models can exhibit more sample-efficient convergence.

1 Introduction

Methods for efficient compression are relevant both from a practical and a conceptual point of view. The rapid development of large language models [56, 26, 7, 66], as well as image and video diffusion models [10, 17, 30, 41], has led to widespread adoption of networks that consume an enormous amount of energy. Such models also require large infrastructure and cannot be run or trained on smaller devices. It is therefore of strong practical interest to find ways to reduce the size of such models while maintaining their generalization capability.
Similarly, the amount of digital resources is rapidly growing, and compression technologies are relevant for companies that process large amounts of data.

However, the motivation to study compression runs deeper. Since compression is about the reduction of redundancy, and redundancy can often be reduced most effectively by identifying the rules that generated the data, compression is intimately related to generative artificial intelligence and model identification. Indeed, the idea that compression is a fundamental learning principle has quite a long history by now. Formal notions of information arose with Shannon's source coding theorem [61], which relates entropy to the optimal average description length, while the compressibility of single objects was formalized with the fundamental notion of algorithmic complexity by Kolmogorov in [33]. Almost simultaneously, [64] introduced what is now called Solomonoff induction, which related compression to probabilistic inference and thus to learning theory, and later proved an important posterior consistency result [63]. Shortly after, minimum description length was proposed as a fundamental learning principle in [58], and the ideas developed into an area of their own [20]. In statistics, compressive sensing [51, 65, 3, 19] emerged as the discipline whose aim is to reconstruct sparse signals. Finally, algorithmic complexity has also been related to reinforcement learning [28] and curiosity-driven learning [60]. In Section 2 we briefly show how all these different theoretical ideas can be subsumed under a unified viewpoint.

(* Equal contribution. Contributions are detailed in Section 5.)

Though the above mentioned theories have been known for a couple
of decades, we are only starting to be able to fully realize their potential now because the objects involved are so difficult to compute. The Kolmogorov complexity is in general uncomputable (though it can be approximated from above), and even the special case of compressive sensing turns out to be NP-hard. However, the era of deep learning opens up the exciting opportunity to get closer to the theoretical limits of description length than ever before, because we can now train sophisticated probability distributions with backpropagation that generally compress better than alternative methods [11].

1.1 Our contributions and related work

Reducing the number of parameters of neural networks is an NP-hard problem.² We therefore contribute to methods that efficiently find good local optima, using reformulations that are amenable to gradient descent. Concretely, we present the following contributions.

1. One can recast NP-hard $\ell_0$-regularized objectives into a probabilistic form that admits gradient descent. This approach is carried through, for example, in [69, 14, 35] and [43]. However, these methods require Monte Carlo sampling, which can be very inefficient. To resolve this, we use a result from [1] to reformulate the problem as a constrained minimax problem in Section 2.2.1, which allows us to optimize fully nonlinear regularized objectives.

2. We also use these techniques to derive a new layerwise pruning method, described at the end of Section 2.2.1 and in Appendix C.5.

3. Another approach is to approximate the $\ell_0$ norm directly with some piecewise-differentiable function. The most successful approach that we found is described in [8]. We reimplemented it and helped to improve it with a number of innovations described in Section 2.2.2. In particular, we help to reduce the number of hyperparameters and optimization complexity.

4. The $\ell_1$ norm constitutes another piecewise-differentiable approximation, motivated by its success in compressive sensing.
Though it is perhaps often used (see e.g. [59]), we did not find a systematic comparison. We therefore implemented a refined version and benchmark it together with our other techniques against a wide array of related methods in Section 3.1.

5. We found that most pruning methods exhibit the problem that pruning can leave spurious weights that do not influence the function's output. We solve this with a method that we call Random Gradient Pruning in Section 2.2.3.

6. Many approaches require a pruning threshold, which is a parameter subject to tuning. We propose Threshold Adaptive Mask Determination (TAMADE) in Section 2.2.4, which automatically finds the correct threshold given a requirement on model performance. This significantly reduces the computational cost of hyperparameter search.

7. In [11], it is shown that transformers are generally better at compressing text and images than conventional methods like LZMA2 or gzip. We apply $\ell_0$ regularization to a transformer trained on Wikipedia and find that it helps to improve compression even further, cf. Section 3.3. Furthermore, we show that the offline setting generally (and somewhat unexpectedly) performs better than the online setting with respect to description length minimization.

8. We conceptually motivate $\ell_0$ regularization from first principles of algorithmic complexity and Solomonoff induction and show how maximum likelihood estimation, mean square error regression, and compressive sensing can all be seen as
special cases of these principles in Section 2.1. In Section 3.2, we empirically verify the prediction of Solomonoff induction that regularized models can exhibit more sample-efficient convergence.

We implemented our methods both in Python using PyTorch [50] and in Julia using Lux [47, 48], and all our code is available at github.com/LUK4S-B/efficient-compression . We carefully specified seeds for all random number generators to facilitate reproducibility.

(² The reduction of the number of parameters of neural networks is NP-hard because it includes the $\ell_0$ regularization of linear models, which is already NP-hard [19].)

2 Theory

2.1 Preliminaries

It turns out that all notions of compression can be subsumed under a Bayesian viewpoint when adopting priors that encode model complexity. We briefly present this viewpoint, starting from the concept of Kolmogorov complexity [33]. Let $U$ be a universal reference machine (e.g. a universal Turing machine), $x$ a finite string over some alphabet, $\mathcal{P}$ the set of programs (encoded as strings) on which $U$ terminates, and $\ell(p)$ the length (in bits) of $p \in \mathcal{P}$. The Kolmogorov complexity $K_U$ is then
$$K_U(x) := \min_{p \in \mathcal{P}} \{ \ell(p) \mid U(p) = x \}. \quad (1)$$
$K_U$ assigns to a given string the length of (one of) the shortest program(s) that can generate $x$. In the context of compression, we are often most interested in approximations of $p^*(x) := \mathrm{argmin}_p \{ \ell(p) \mid U(p) = x \}$. Arithmetic coding [49, 57] allows one to use any probability distribution $\mu$ to encode any finite sequence $x$ with $-\log_2(\mu(x))$ bits, up to a small constant. By Shannon's source coding theorem [61], this is optimal on average (w.r.t. $\mu$), and the average is (up to a small constant) equal to the entropy of $\mu$. However, when assuming that the receiver of the code only has access to the universal machine $U$ but not to $\mu$, the description length $L_\mu(x)$ of $x$ given $\mu$ is actually equal to
$$L_\mu(x) = \ell(\mu) - \log_2(\mu(x)), \quad (2)$$
where $\ell$ is some appropriate length function that outputs the number of bits needed to describe $\mu$ as a program that runs on $U$.
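As a toy illustration of the two-part code length in Eq. (2), the following sketch (with made-up model costs and probabilities, not values from the paper) compares two candidate models of a short string, charging each model both for its own description and for the data under it:

```python
import math

def description_length(model_bits, model_probs, data):
    """Two-part code length L_mu(x) = l(mu) - log2(mu(x)) in bits,
    for a model mu that factorizes over the symbols of x."""
    return model_bits + sum(-math.log2(model_probs[c]) for c in data)

data = "aaab"
uniform = {c: 0.25 for c in "abcd"}                 # assume l(mu) = 0 extra bits
skewed = {"a": 0.7, "b": 0.1, "c": 0.1, "d": 0.1}   # assume it costs 16 bits to describe

L_uniform = description_length(0.0, uniform, data)  # 4 symbols * 2 bits = 8 bits
L_skewed = description_length(16.0, skewed, data)
# On this short string the model cost outweighs the better fit, so the
# uniform code wins; on longer mostly-'a' data the comparison flips.
```

This is exactly the trade-off Eq. (2) formalizes: a richer model pays an $\ell(\mu)$ penalty that must be amortized by shorter data codes, which is why the model-description term dominates when model sizes approach data sizes.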
The term $\ell(\mu)$ is especially important in machine learning, where model sizes can easily exceed data sizes. Eq. (2) is the fundamental object in the theory of (minimum) description length (MDL) [58, 20]. If $\mathcal{M}$ is the space of all computable distributions, then one can prove that $\min_{\mu \in \mathcal{M}} L_\mu(x)$ and $K_U(x)$ are equivalent up to a constant that does not depend on $x$; see Appendix A. However, one advantage of (2) over (1) is that it relates description length and probability theory. Furthermore, it allows one to separate information about the distribution from information about individual samples or noise.³ We can rewrite the argmin of (2) as an argmax,
$$\mathrm{argmin}_\mu L_\mu(x) = \mathrm{argmax}_\mu\, 2^{-\ell(\mu)} \mu(x), \quad (3)$$
which shows that $L_\mu(x)$ (and thus $K_U(x)$) is the maximum a posteriori estimator (MAP) of the Bayesian posterior probability $p(\mu \mid x) \propto 2^{-\ell(\mu)} \mu(x)$. Note that if we choose a set $\mathcal{M}$ with $n$ elements on which $\ell(\mu) = \log_2(n)$ is constant, then we recover the uniform prior, on which the argmax does not depend, and end up with usual maximum likelihood estimation. We can naturally consider the sum instead of the maximum to obtain the predictive prior distribution:
$$\xi(x) \propto \sum_{\mu \in \mathcal{M}} 2^{-\ell(\mu)} \mu(x). \quad (4)$$
This, in turn, is the fundamental object in the theory of Solomonoff induction [64]. Its importance in particular stems from a strong posterior consistency result
[63, 28]:

Theorem 2.1. Given any $\mu \in \mathcal{M}$, let $\xi(x)$ be a (semi-)probability distribution that satisfies $\xi(x) \ge w(\mu)\mu(x)$ for some fixed $w(\mu)$ and all $x$. Then, for all $n \in \mathbb{N}$,
$$\sum_{t=1}^{n} \mathbb{E}_{x_{<t} \sim \mu} \Big[ \sum_{x_t} \big( \mu(x_t \mid x_{<t}) - \xi(x_t \mid x_{<t}) \big)^2 \Big] \le \ln w_\mu^{-1}.$$

A similar bound has been proven in [53] for the MAP, however with considerably slower convergence rate, namely a right-hand side proportional to $w_\mu^{-1}$, even though it was shown in the same article that this bound is sharp under the conditions of Theorem 2.1. The important point of these theorems is that they show that adopting a complexity prior can lead to more sample-efficient convergence, if one assumes the true model $\mu$ to be one of low complexity (relative to the other models in $\mathcal{M}$ that generate similar samples). We are able to use our methods to check this empirically in Section 3.

(³ It can be useful to slightly generalize this definition by allowing $\mathcal{M}$ to be any (finite or infinite) subset of computable distributions and by allowing $\ell: \mathcal{P} \to \mathbb{R}_{\ge 0}$ to be any non-negative length function.)
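The bound of Theorem 2.1 can be checked numerically in a minimal setting (our own illustrative sketch, not from the paper): take $\mathcal{M}$ to contain two Bernoulli models with uniform prior weight $w$, let $\xi$ be the Bayes mixture (so $\xi(x) \ge w\,\mu(x)$ holds by construction), and compute the cumulative expected squared prediction error exactly by enumerating all prefixes:

```python
import itertools
import math

models = [0.3, 0.7]            # class M: two Bernoulli models, p = P(x = 1)
w = 0.5                        # uniform prior weight w(mu)
mu = 0.3                       # the true model

def bern(p, x):
    return p if x == 1 else 1.0 - p

def seq_prob(p, xs):
    out = 1.0
    for x in xs:
        out *= bern(p, x)
    return out

def xi(xs):                    # Bayes mixture: xi(x) = sum_mu w(mu) mu(x)
    return sum(w * seq_prob(p, xs) for p in models)

n = 8
total = 0.0                    # sum_t E_{x<t ~ mu} sum_{x_t} (mu - xi)^2
for t in range(n):
    for prefix in itertools.product([0, 1], repeat=t):
        p_prefix = seq_prob(mu, prefix)
        for x in (0, 1):
            cond_xi = xi(prefix + (x,)) / xi(prefix)
            total += p_prefix * (bern(mu, x) - cond_xi) ** 2

assert total <= math.log(1.0 / w)   # Theorem 2.1 bound: ln w(mu)^{-1} = ln 2
```

The mixture's conditional predictions concentrate on the true model as evidence accumulates, so the per-step error terms shrink and the cumulative error stays below $\ln w_\mu^{-1}$, as the theorem guarantees.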
There one approximates ℓ(µ) by the ℓ0 norm^5 of µ. Even for this special case, the problem is provably NP-hard [19] (Chapter 2.3). Compressive-sensing research showed that the ℓ0 norm can often be efficiently approximated by the ℓ1 norm, as in the prominent LASSO method [65]. A refined version, the relaxed LASSO [45, 25], can additionally asymptotically reconstruct the original signal, and we use it as inspiration for our non-linear methods.

To summarize: adopting a Bayesian complexity prior of the form w(µ) ∝ 2^{−ℓ(µ)} produces Solomonoff's theory of induction when one considers the resulting Bayesian predictive prior and posterior, while the minimum description length principle is obtained as the MAP. In the space of all computable distributions, the minimum description length equals the Kolmogorov complexity up to a small constant. Setting ℓ(µ) = log2(n) for finite M results in conventional unregularized maximum-likelihood estimation. When specializing the distributions to a class of Gaussians, one obtains refined squared-error losses, and when further specializing to linear mean functions and to ℓ = ℓ0 or ℓ1, one obtains the losses of compressive sensing.

2.2 Computationally tractable methods

Even though the best posterior-consistency results were shown for Solomonoff's predictive posterior, ξ involves intractable sums, and therefore in practice the most relevant objective for us remains (2). We must also specialize to function families f_θ amenable to backpropagation, and we approximate ℓ by ℓ0. Nevertheless, we still obtain a fairly general objective to which our methods can be applied:

$$L_\theta(x) = \alpha\,\ell_0(\theta) + L(f_\theta, x), \quad (6)$$

where L is a differentiable loss function and α ∈ R was introduced for convenience.^6 Note that this approximation is still connected to Solomonoff's theory, because one can choose a prior w_µ ∝ 2^{−ℓ0(µ)} or even w_µ ∝ 2^{−ℓ1(µ)} in (3) and (4), and Theorem 2.1 and its MAP (MDL) approximation still hold. Since ℓ0 is not differentiable, we need to find a reformulation that makes it amenable to optimization. The subsequent subsections introduce different possibilities.

^4 There is a subtlety here because the Gaussian is a probability density and not a distribution. Nevertheless, on a discretized space, (5) is still correct up to an additive constant that is irrelevant for the argmin, cf. Appendix B.
^5 Defined as the number of non-zero elements of the vector θ that parameterizes µ.
^6 Whenever we write R, we actually mean a properly discretized version, e.g. using floating-point numbers.

2.2.1 Probabilistic reformulations

The first step of a probabilistic reformulation is to rewrite θ = wz, where wz means componentwise multiplication. The advantage is that now ℓ0(θ) = ℓ0(z) = Σ_i z_i. Let π_γ denote the family of all Bernoulli distributions over {0,1}^n, parameterized by γ ∈ [0,1]^n. The idea is to rewrite the objective (6) as a probabilistic expectation E_{z∼π_γ}[α ℓ0(z) + L(f_{wz}, x)]. It was proven in [69] that this can be done at least in the linear case. In [43], the same reformulation was adopted for the nonlinear case (6), but without proof (though the authors might have known one). Our first theoretical contribution is to provide a clean proof of the general statement.

Proposition 2.2. Let π_γ be the family of all Bernoulli distributions over {0,1}^n with parameter γ ∈ [0,1]^n and let g: {0,1}^n → R be any function. Then

$$\min_\gamma \mathbb{E}_{z\sim\pi_\gamma}[g(z)] = \min_z g(z) \quad (7)$$
and at any minimum, γ_i is either 1 or 0 for all i ∈ {1, ..., n}.

Proof. See Section C.1 for a rigorous proof. The intuition is that the expected value is linear in each γ_i, and a function over the hypercube [0,1]^n that is separately linear in all of its arguments must attain its extremal values at the corner points of the cube.

What hinders the application of this powerful proposition is that writing out the expected value results in a sum with 2^n terms if z ∈ {0,1}^n. To evade this problem, Monte Carlo sampling with low-variance estimators is used in [69, 14, 35]. In [43], z ∈ {0,1}^n is instead smoothed to a variable in [0,1]^n and a Monte Carlo estimator is derived by employing a reparameterization as in [31]. Next, we introduce a method that does not require any Monte Carlo sampling. As a first step, note that the expectation of a sum linear in z equals the corresponding sum over γ, which results in

$$\mathbb{E}_{z\sim\pi_\gamma}[\ell_0(z)] = \sum_{i=1}^{n}\gamma_i. \quad (8)$$

This was already used in [43] to simplify (6). However, a similar replacement is not possible for the fully non-linear term E_{z∼π_γ}[L(f_{wz}, x)], because one cannot cancel any terms. For the next step, we build on a result of our companion article [1], where it is shown that

Lemma 2.3.
$$\mathbb{E}_{z\sim\pi_\gamma}\Big[\sum_i\Big(y_i-\sum_j X_{ij}w_j z_j\Big)^2\Big] \;=\; \sum_i\Big(y_i-\sum_j X_{ij}w_j\gamma_j\Big)^2 + \sum_{i,j}X_{ij}^2 w_j^2\,\gamma_j(1-\gamma_j).$$

Proof. We provide a copy of the proof in Appendix C.2.

This lemma provides a way to substantially reduce the number of summands compared to the naive expectation. However, it is still not applicable to the fully non-linear objective (6). To solve this problem, we rewrite the objective as a constrained optimization problem. The minimum of L_θ(x) equals

$$\min_{\theta,w,z}\big\{\alpha\,\ell_0(z)+L(f_\theta,x)\;\big|\;\theta=wz\big\} \;=\; \min_{\theta,w,z}\,\max_{u}\big\{\alpha\,\ell_0(z)+L(f_\theta,x)+u\cdot(\theta-wz)^2\big\}. \quad (9)$$

Note that the maximum over u forces θ to equal wz at the global optimum^7, while z appears at most quadratically. We can now apply Proposition 2.2, thereafter (8), and finally Eq. (9) to obtain

Corollary 2.4.
$$\min_\theta L_\theta(x) \;=\; \min_{\theta,w,\gamma}\,\max_u\Big\{\alpha\sum_i\gamma_i + L(f_\theta,x) + u\cdot(\theta-w\gamma)^2 + u\cdot\big(w^2\gamma(1-\gamma)\big)\Big\}. \quad (10)$$

In Appendix C.3 we explain why a quadratic (as opposed to a simpler linear) constraint, and therefore Lemma 2.3, is key for the procedure to work. We optimize this minimax objective with alternating gradient descent and ascent steps. We also considered more refined optimization procedures such as ADMM [5] and provide a derivation of the corresponding update equations in Appendix C.4; even though alternating gradient updates exhibited better performance in our experiments, those derivations might still be useful. We call our method Probabilistic Minimax Pruning (PMMP) because it is based on the probabilistic reformulation of Proposition 2.2, cast in the form of the minimax objective of Corollary 2.4.

Furthermore, we also leverage Lemma 2.3 to develop a new Layerwise Pruning method for neural networks. Essentially, for each layer, we replace X_ij in Lemma 2.3 by input batches to that layer and promote w_j z_j to matrices w_jk z_jk, in order to find the sparsest possible matrix that produces similar output batches y_ik. Details are given in Appendix C.5, where we also explain why the combination with Random Gradient Pruning (introduced in Section 2.2.3) is quite effective.
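Lemma 2.3 can be checked by brute force: for small n, the expectation over z ∼ π_γ is computable exactly by enumerating all 2^n binary vectors and can be compared with the closed form. A sketch (the dimensions and seed are arbitrary):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3                      # rows of X, number of Bernoulli gates z_j
X = rng.normal(size=(m, n))
w = rng.normal(size=n)
y = rng.normal(size=m)
gamma = rng.uniform(size=n)      # Bernoulli parameters

# Left-hand side: exact expectation, enumerating all 2^n values of z.
lhs = 0.0
for bits in itertools.product([0, 1], repeat=n):
    z = np.array(bits)
    p = np.prod(np.where(z == 1, gamma, 1 - gamma))   # P(z) under pi_gamma
    lhs += p * np.sum((y - X @ (w * z)) ** 2)

# Right-hand side: the closed form of Lemma 2.3 (mean term + variance term).
rhs = np.sum((y - X @ (w * gamma)) ** 2) \
    + np.sum((X ** 2) @ (w ** 2 * gamma * (1 - gamma)))
assert np.isclose(lhs, rhs)
```

The identity is just the bias-variance decomposition of each residual: the z_j are independent, so the variance of Σ_j X_ij w_j z_j is Σ_j X_ij² w_j² γ_j(1−γ_j).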
^7 Here u·(θ−wz)² denotes the dot product of u ∈ R^n with the componentwise-squared (θ−wz), while juxtaposed variables like wz always denote componentwise multiplication.

2.2.2 Smooth reformulations

Another approach is to directly approximate ℓ0(θ) by some piecewise smooth function. The most effective such approach that we found is described in [8], who use an approximation that goes back to [21, 6, 44]. The simple but effective idea is to write

$$\ell_0(\theta) \;\approx\; \sum_i\big(1-e^{-\beta|\theta_i|}\big). \quad (11)$$

The full optimization objective proposed in [8] reads min_θ L(f_θ, x) + α Σ_i (1 − e^{−β|θ_i|}) + ρ||θ||²₂. The authors further propose layerwise normalization of the hyperparameters α and ρ. The obvious advantage of this differentiable approximation of ℓ0 is that it allows direct application of gradient descent. However, the method is computationally costly because it comes with three hyperparameters, α, β, and ρ, which are substantially harder to fine-tune than just one. Additionally, parameters do not become exactly 0 as they do in probabilistic methods, so one must also fix a pruning threshold ϵ below which all parameters are eliminated. To improve the method, we therefore investigated how to reduce the computational complexity involved in choosing these parameters. We conducted extensive studies, summarized in Appendix F, which inform the following suggestions: (1) contrary to [8], we do not find that additional ℓ2 regularization is strongly beneficial, which allows us to drop the ρ||θ||²₂ term; (2) we find that increasing the regularization strength α during training can yield lower loss and more stable outcomes. Thus we divide training into three phases: α = 0, α ≠ 0, and finetuning [37] after pruning. Since the authors did not provide a name for their method, we call it Differentiable Relaxation of ℓ0 Regularization (DRR) in this paper and refer to their version as DRR-O (O for "original").

As a final contender, ℓ1 regularization is introduced. Its advantage is that piecewise gradients are equal to the sign of the parameters and therefore independent of the parameters' magnitude. Hence all parameters are pulled towards 0 with a constant gradient. This induces sparsity in the resulting network, in contrast to the ℓ2 norm, which merely reduces differences between weights. In fact, this difference makes it possible to use a combination of the ℓ1 and ℓ2 norms to induce group sparsity [70]. The disadvantage of ℓ1 regularization, however, is that weights do not converge to their optimal magnitude but are always slightly damped towards 0. In scenarios with a lot of noise this can help to reduce outlier effects, but in general it prevents the consistency of the corresponding estimation process. This was the reason for [45] to introduce a relaxed method, whose simplified form (cf. [25]) consists in training with the ℓ1 norm until active-set convergence is achieved (all parameters that are supposed to be 0 are very close to 0) and then, in a second phase, removing the ℓ1 norm and finetuning only the parameters that are not close to 0. This strategy was also adopted in [59] for pruning neural networks. We call this method Relaxed ℓ1-regularization (R-L1). Similar to DRR, we combine it with Random Gradient Pruning and TAMADE, described below, to make it more robust and efficient.
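The behavior of the surrogate (11) is easy to verify numerically: for large β it approaches the exact ℓ0 norm, while for small β it behaves like a β-scaled ℓ1 penalty. A sketch with arbitrary example values:

```python
import numpy as np

def drr_penalty(theta, beta):
    """Smooth surrogate of the l0 norm, Eq. (11)."""
    return np.sum(1.0 - np.exp(-beta * np.abs(theta)))

theta = np.array([0.0, 0.5, -2.0, 0.0, 3.0])
l0 = np.count_nonzero(theta)                 # exact l0 norm = 3

# Large beta: each nonzero entry contributes ~1, zeros contribute exactly 0.
assert abs(drr_penalty(theta, beta=1e4) - l0) < 1e-3

# Small beta: 1 - exp(-beta*|t|) ~ beta*|t|, i.e. a scaled l1 penalty.
beta = 1e-4
assert np.isclose(drr_penalty(theta, beta), beta * np.sum(np.abs(theta)), rtol=1e-3)
```

This interpolation between ℓ1-like behavior near the origin and ℓ0-like saturation for large weights is what makes the penalty attractive, but it is also why the extra hyperparameter β must be tuned.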
2.2.3 Random gradient pruning

Many pruning methods leave spurious weights behind that do not contribute to the function the network computes. For example, all outgoing connections of some neuron might have been pruned away while its incoming connections remain. An example image is given in Figure 3 in Appendix D. To solve this problem, we invented the Random Gradient Pruning method. It is infeasible to compute directly, for all elements x in the domain of a parameterized function f_θ, which changes in θ leave all values f_θ(x) invariant. However, if the derivative ∂f_θ(x)/∂θ_i is zero for all x, then f_θ does not depend on θ_i. For a specific x, a vanishing gradient of f_θ(x) could simply indicate a local minimum with respect to θ_i. Nevertheless, if we sample a random element z, and we assume that every important parameter of f_θ can influence its value at z, then we can detect the dependence of f_θ on θ_i with high probability. In practice, when f_θ is a vector-valued neural network, we sample a random pair (y, z) and compute g := ∂/∂θ ||y − f_θ(z)||². After this, we set all θ_i to 0 where g_i is exactly equal to 0. In practice, this eliminated all and only the spurious weights with a single backward pass.^8 We remark that one can increase the probability by sampling several times, though this was not necessary in practice.

^8 Furthermore, note that this works well for neural networks because usually every weight can influence the value of the function at any point. If certain weights only influence the function in certain regions of the domain, then one has to sample a random point for each such region.

2.2.4 Threshold adaptive mask determination

The smooth reformulations do not set parameters exactly to zero during optimization. This necessitates an additional magnitude-pruning step with threshold hyperparameter ϵ. This threshold is coupled with the regularization strength α, which complicates hyperparameter optimization. We solve this problem by introducing Threshold Adaptive Mask Determination (TAMADE). It makes use of two monotone relationships: (1) increasing ϵ increases the number of parameters set to zero during pruning, and (2) model accuracy is non-increasing as a function of the number of deleted parameters beyond a certain threshold. These relationships allow us to optimize ϵ efficiently through (a variant of) binary search. In practice, we determine ϵ as the largest threshold such that the accuracy of the model does not decrease by more than δ > 0 (in absolute terms) or such that the loss does not increase by more than δ > 0 (in relative terms). Pseudocode is given in Appendix E. This dynamic determination of ϵ reduces the dimension of the hyperparameter space.

3 Experiments

The optimal choice of hyperparameters for each method is determined through extensive ablation studies in Appendix F. Computational resources are described in Appendix L. For all simulations, we employed the ADAM optimizer [32], divided the data into training, validation and test sets, and used a refined convergence criterion for early stopping, described in Appendix G, that takes into account the noise variations in the loss curve.
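A minimal numpy sketch of the idea behind Random Gradient Pruning on a toy two-layer ReLU network (manual backprop stands in for an autodiff framework; the architecture, seed, and probe are illustrative, not the setup used in the experiments): a hidden neuron whose outgoing weights were pruned is detected because the gradient of ||y − f_θ(z)||² with respect to its incoming weights vanishes exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 2))     # hidden x input
W2 = rng.normal(size=(1, 4))     # output x hidden
W2[:, 2] = 0.0                   # neuron 2 lost all outgoing weights: spurious

z = rng.normal(size=2)           # random probe input
y = rng.normal(size=1)           # random target

# Forward pass and manual backprop of g = d||y - f(z)||^2 / dW.
h_pre = W1 @ z
h = np.maximum(h_pre, 0.0)       # ReLU
out = W2 @ h
d_out = -2.0 * (y - out)
d_h = W2.T @ d_out
g_W1 = np.outer(d_h * (h_pre > 0.0), z)

# The incoming weights of the disconnected neuron have exactly zero gradient,
# so a single backward pass detects them; keep only weights with g != 0.
mask1 = g_W1 != 0.0
assert not mask1[2].any()
W1 = W1 * mask1
```

The gradient of row 2 of W1 is zero regardless of the probe, because it is multiplied by the (zeroed) outgoing weights of neuron 2; for other weights the gradient is nonzero with high probability over the random probe, as described above.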
Implementation details and source code are provided at github.com/LUK4S-B/efficient-compression.

3.1 Image classifier experiments

In the context of ℓ0 regularization and pruning of nonlinear neural networks, the most extensive benchmarks so far were done on image datasets like MNIST [36] and CIFAR [34]. We therefore compare our methods on the same datasets and architectures. The results for these 3 setups are listed in Table 1. A visual representation that also provides information on errors is given in Figure 5 in Appendix H.

We observe that our improvements of the smooth reformulations result in very high compression rates, while the loss is comparable to other methods. Our refined DRR method outperforms all other methods in Table 1. R-L1 still achieves very good performance while being simpler to optimize, as it only requires one hyperparameter in our setup (thanks to TAMADE, Section 2.2.4).^9

A remarkable result in Table 1c is that our methods improve the test accuracy of the VGG classifier while decreasing its size considerably. This confirms the idea that compression can help to prevent overfitting. Unfortunately, PMMP does not perform as well as we would have hoped, possibly because minimax objectives are generally not easy to optimize. Nevertheless, it achieves performance comparable to L0-HC [43], the other method that employs a probabilistic reformulation, while converging more efficiently because Monte Carlo sampling is not necessary. In contrast to the smooth reformulations, it is the only method that provides an exact reformulation of the original optimization objective (6). We therefore believe that there is more potential in this method and hope that the underlying ideas will be taken up and improved in the future.

3.2 Teacher-Student experiments

In the teacher-student setup, both teacher and student networks are multi-layer perceptrons with the same number of layers and the same input and output dimensions but with hidden layers of different dimensions. The teacher is sparse in the sense that its hidden layers have fewer dimensions than those of the student. These networks act as functions that compute the mean value m(x) in the code-length formula (5). After fixing some teacher network and a standard deviation, data is sampled from a Gaussian distribution with mean computed by the teacher. This allows us to investigate the minimization of description length in the continuous setting, and whether models exhibit more sample-efficient convergence if the true underlying distribution is itself of low description complexity.

^9 We remark that DRR-O was reported in [8] to achieve even better results (namely a CR of 96 and 359 in the two MNIST experiments, respectively) when tuning the α and β hyperparameters separately for each layer, but we do not consider this a practical approach, as it requires the tuning of far too many parameters, all of which are coupled in complicated ways. As explained in Subsection 2.2.2, the number of hyperparameters of the default method already imposed a considerable burden that needed to be addressed. Indeed, in the VGG case, the authors omitted the separate-layer optimization entirely for computational reasons.

Table 1: Prune method comparison for different datasets and models. (EI = Error Increase, CR = Compression Rate)

(a) MNIST with LeNet-300-100

Method         | EI    | CR
BC-GHS [42]    | -     | 9
L0-HC [43]     | -     | 10
THP [24]       | -0.05 | 12
L-OBS [13]     | 0.06  | 14
L0-ARM [40]    | -     | 14
SWS [67]       | 0.05  | 23
L0-ADMM [72]   | 0.00  | 23
GrOWL [71]     | 0.20  | 24
SNIP [38]      | 0.70  | 50
L0-L2-LC [29]  | -     | 50
DNS [23]       | -0.29 | 56
GSM [12]       | 0.01  | 60
SparseVD [46]  | 0.28  | 68
Autoprune [68] | -     | 80
CPA [52]       | -0.04 | 85
DRR-O [8]      | 0.16  | 90
DRR            | 0.16  | 92
R-L1           | 0.23  | 68
PMMP           | 0.19  | 11

(b) MNIST with LeNet-5-Caffe

Method         | EI    | CR
THP [24]       | -0.03 | 12
L-OBS [13]     | 0.00  | 14
L0-ADMM [72]   | 0.00  | 71
L0-HC [43]     | -     | 93
SNIP [38]      | 0.20  | 100
L0-L2-LC [29]  | -     | 100
DNS [23]       | 0.00  | 111
BC-GHS [42]    | -     | 156
L0-ARM [40]    | -     | 196
DRR-O [8]      | 0.16  | 200
SWS [67]       | 0.09  | 200
SparseVD [46]  | -0.05 | 280
GSM [12]       | 0.15  | 300
Autoprune [68] | -     | 310
CPA [52]       | -0.02 | 317
DRR            | 0.16  | 342
R-L1           | 0.08  | 246
R-L1           | 0.26  | 384
PMMP           | 0.17  | 77

(c) CIFAR with VGG-16-512

Method         | EI    | CR
SSS [27]       | 0.00  | 1.67
EConvNets [39] | -0.15 | 2.78
VAR-CONV [73]  | 0.07  | 3.75
PP [62]        | 0.14  | 6.43
RFP [2]        | 0.72  | 9.52
DRR-O [8]      | 0.70  | 15
SparseVD [46]  | 0.00  | 48
CPA [52]       | -0.01 | 56
DRR            | -0.89 | 82
R-L1           | -0.73 | 71
PMMP           | 3.88  | 34

In Figure 1 we investigate the dependence of the test
loss and description length on the model size, which in turn depends on the regularization strength. In principle, we would expect that the smaller the dataset in comparison to the initial model size, the more beneficial ℓ0 regularization is for achieving low test loss. Indeed, we observe exactly this effect in the figure, where the optimal size (the one leading to the lowest test loss) correlates strongly with the dataset size.

Figure 1: "Test Loss" vs "Model Byte Size" for different dataset sizes in the teacher-student setup ((a) dataset size 30, (b) 300, (c) 2000). Description-length (in bytes) isolines are color coded. Curves and error bars were computed as described in Appendix I. Teacher and student architectures have layer sizes [2, 5, 8, 1] and [2, 25, 25, 1], respectively. The data was sampled from the teacher network with standard deviation σ = 0.08.

In Figure 1b, we observe a U-shaped curve whose minimum marks the point of least overfitting; in Figures 1a and 1c, this U-curve is tilted according to the ratio of initial model size and dataset size. Interestingly, the description-length isolines show that the points of smallest loss are slightly shifted towards bigger model sizes compared to the points of optimal compression, though they are very close. In Appendix I, we show additional figures for a variety of dataset sizes and noise levels. Furthermore, Table 3 summarizes optimal description-length results for different settings. Those experiments confirm the observation in Figure 1 and, similar to the classifier experiments on CIFAR, verify the predictions of Solomonoff induction that regularized models exhibit more sample-efficient convergence, as described in Section 2.1 and below Eq. (6).
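The data-generating process of this setup can be sketched as follows (the activation function, weight initialization, and input distribution are our illustrative choices; the layer sizes and σ follow the Figure 1 caption):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random MLP weights for layer sizes like [2, 5, 8, 1]."""
    return [rng.normal(scale=1.0 / np.sqrt(m), size=(n, m))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(weights, x):
    """Forward pass computing the mean m(x) of the Gaussian in Eq. (5)."""
    for W in weights[:-1]:
        x = np.tanh(W @ x)
    return weights[-1] @ x

# Sparse teacher; a student would use the wider sizes [2, 25, 25, 1].
teacher = init_mlp([2, 5, 8, 1])
sigma = 0.08

# Sample a dataset of size 300: Gaussian noise around the teacher's mean.
X = rng.uniform(-1.0, 1.0, size=(300, 2))
Y = np.array([mlp(teacher, x) + rng.normal(scale=sigma, size=1) for x in X])
assert Y.shape == (300, 1)
```

The student is then trained on (X, Y) with the regularized code-length objective, and its final description length and test loss are compared across regularization strengths.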
3.3 Transformer experiments

Building upon experiments performed in [11], we compressed parts of Wikipedia (the Wiki-40b dataset [22]) with transformers and additionally tested how the degree of compression depends on the regularization method. To this end, we trained a transformer decoder by minimizing description length via regularized next-token prediction. For the regularization, we again compared PMMP, DRR and R-L1 (cf. Section 2.2). In Figure 2 we plot the mean loss against the model size for different dataset sizes; the color-coded isolines mark the description length. Since our transformer has 4.1 million parameters, corresponding to an uncompressed model size of about 16 MB, we used dataset sizes in comparison to which the transformer size is non-negligible, namely 16 MB, 50 MB and 300 MB.^10 In this regime, regularization can have significant effects, while the benefits diminish for very large dataset-to-model-size ratios, as shown in Figure 8 in Appendix J.1.

Figure 2: Mean loss vs model size with description-length (in MB) isolines for the transformer described in Section 3.3. From left to right, the 3 images correspond to dataset sizes of 16 MB, 50 MB and 300 MB, respectively. The solid lines correspond to the three different regularization methods PMMP, DRR and R-L1 described in Section 2.2. The crosses correspond to transformers of the indicated model size trained without regularization.

A very interesting effect shown in Figure 2 is that smaller models trained without regularization, marked by crosses in the figure, often perform significantly worse than the regularized big models, even if the final model size of the regularized big model at the end of training equals that of the smaller unregularized model. We conjecture that the main reason is that the regularization can pick out, for a given regularization strength, a locally optimal subset of parameters among exponentially many possible subsets, while fixing a subset a priori, before training, might not be locally optimal in this sense. Furthermore, networks that initially have more parameters also live in a loss landscape where local minima tend to be more connected (cf. [15]).

Among the tested methods, DRR and R-L1 perform very similarly; of the two, R-L1 is simpler because DRR requires an additional hyperparameter β. PMMP performs slightly worse, possibly again due to the challenges of minimax optimization. We compare this performance to conventional compression algorithms by setting the description lengths in proportion to those obtained with LZMA2, which performed best-in-class among the conventional compressors in [11]. Table 2 shows that the regularized transformers achieve the best results. Finally, we also compare these description lengths with the online-learning description length, obtained by computing the integral under the loss curve of the unregularized model, as explained in [55, 4]. As one can see, the online approach usually performs worse than regularized offline compression.
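The online (prequential) description length just mentioned can be sketched as the area under the next-token loss curve; the function name and the toy loss curve below are ours, following the idea in [55, 4]:

```python
import numpy as np

def online_description_length(per_token_losses_bits, tokens_per_step):
    """Prequential code length: area under the online next-token loss curve.

    per_token_losses_bits[t] is the mean loss (in bits/token) of the model
    right before it trains on chunk t; tokens_per_step is the chunk size.
    """
    return np.sum(np.asarray(per_token_losses_bits)) * tokens_per_step

# Toy loss curve decaying from 8 to 2 bits/token over 100 chunks of 10k tokens.
losses = np.linspace(8.0, 2.0, 100)
dl_bits = online_description_length(losses, tokens_per_step=10_000)
dl_mb = dl_bits / 8 / 1e6        # 5 bits/token on average over 1M tokens
assert abs(dl_mb - 0.625) < 1e-9
```

In contrast, the regularized offline description length is the final code length of the data under the trained model plus the (compressed) size of the model itself.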
This is also true for big models like Llama [66], as we explain in more detail in Appendix J.2. Table 2 extends the results in [11] to regularized methods that explicitly minimize description length and shows quite clearly that regularization outperforms all other approaches.

^10 We remark that while we are working with comparatively small language models, we operate in dataset-model-size regimes similar to those of high-end foundation models: for our 4.1M-parameter model, a dataset size of approximately 205 MB would match the ratio of model parameters to training tokens used for the 65B LLaMA model [66]; a dataset size of approximately 90 MB would match the ratio employed for the 671B-parameter DeepSeek-V3 model, which was trained on 14.8T tokens [9].

Table 2: Description length of different dataset sizes with LZMA2 (chunk size: 2048) and transformer-based models. UT stands for "unregularized transformer", and IUC for the integral under the loss curve of the unregularized model used in the online approach (cf. [55]). All values in MB.

Dataset size | LZMA2 | UT    | UT 8  | UT2   | PMMP | DRR  | R-L1 | IUC
16.4         | 9.2   | 20.1  | 8.4   | 6.5   | 7.8  | 6.1  | 6.2  | 10.6
50.0         | 28.1  | 26.7  | 168.5 | 16.5  | 17.3 | 13.5 | 13.8 | 27.0
300.0        | 168.5 | 78.6  | 85.4  | 94.5  | 76.8 | 70.0 | 70.4 | 137.8
9307         | 5230  | 1915  | 2366  | 2965  | 2018 | 1912 | 1912 | 2244

4 Conclusion, limitations and outlook

We improved nonlinear ℓ0-regularization methods and connected our empirical simulations to the theory of algorithmic complexity, verifying corresponding posterior-consistency results. Though the PMMP method did not deliver the strongest empirical performance, we hope that further development of its interesting theory might strengthen it. Our approach is broadly applicable across all architectures and optimization objectives that are amenable to backpropagation. Nevertheless, it is limited in that the ℓ0 and ℓ1 norms are specific approximations of the description length of a model. One might achieve even better results by combining the approach with methods that compress the regularities in the pattern of the weights of the networks. A more refined approach would attempt to reduce the bit length of arbitrary data-generating programs; however, it is not yet clear to us how to optimize in such a space. When applying our approach to recursive neural networks, it might be possible to obtain even smaller models for datasets that exhibit a recursively generated structure. Finally, we touch upon societal impacts in Appendix K.

Acknowledgments and Disclosure of Funding

We thank Jürgen Jost, Rostislav Matveev, Guido Montúfar and Marcus Hutter for stimulating discussions on the topic of this article. We gratefully acknowledge support from the Max Planck Institute for Mathematics in the Sciences and the German Academic Scholarship Foundation. The authors declare that they have no competing interests (related financial activities outside the submitted work).

5 Contributions

The research presented in this paper was conducted in close collaboration between the authors. Below we list our contributions, indicating the individual who made the primary contribution to each component.
Component Primary Contributor Probabilistic reformulation of ℓ0-Regularization including development of PMMP method and ADMM equationsLukas Barth Development of Layerwise Pruning method and Random Gradient Pruning methodLukas Barth Development of Gaussian Loss for MDL learning Lukas Barth Convergence guarantees for Gaussian Loss Paulo von Petersenn Experiments on improvement of DRR method and ablation studiesPaulo von Petersenn Development of TAMADE Lukas Barth Experiments on compressing the Wikipedia dataset using transformers and transformer studiesPaulo von Petersenn Conceptualization of research idea and most writing Lukas Barth Statistical analysis and plots of simulation results Paulo von Petersenn Comparative experiments on different ℓ0-regularization meth- ods on classifier datasets and teacher-student datasetsEqual Contribution Nevertheless, we discussed each component with each other and consider the entire piece of work as something to which we equally contributed. References [1] Anonymous. Probabilistic and nonlinear compressive sensing. To appear soon , 2025. [2]Babajide O. Ayinde, Tamer Inanc, and Jacek M. Zurada. Redundant feature pruning for accelerated inference in deep neural networks. Neural Networks , 118:148–158, 2019. [3]Thomas Blumensath and Mike E Davies. Iterative thresholding for sparse approximations. Journal of Fourier analysis and Applications , 14:629–654, 2008. [4]Jorg Bornschein, Yazhe Li, and Marcus Hutter. Sequential learning of neural networks for prequential mdl. arXiv preprint arXiv:2210.07931 , 2022. [5]Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. Distributed opti- mization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine learning , 3(1):1–122, 2011. [6]Paul S Bradley and Olvi L Mangasarian. Feature selection via concave minimization and support vector machines. In ICML , volume 98, pages 82–90, 1998. 
[7]Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research , 24(240):1– 113, 2023. 11 [8]Felipe Dennis de Resende Oliveira, Eduardo
|
https://arxiv.org/abs/2505.17469v1
|
Luiz Ortiz Batista, and Rui Seara. On the compression of neural networks using l0-norm regularization and weight pruning. Neural Networks , 171:343–352, 2024. [9] DeepSeek-AI. Deepseek-v3 technical report, 2024. [10] Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, et al. Scaling vision transformers to 22 billion parameters. In International Conference on Machine Learning , pages 7480–7512. PMLR, 2023. [11] Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christo- pher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, et al. Language modeling is compression. arXiv preprint arXiv:2309.10668 , 2023. [12] Xiaohan Ding, Xiangxin Zhou, Yuchen Guo, Jungong Han, Ji Liu, et al. Global sparse momentum sgd for pruning very deep neural networks. Advances in Neural Information Processing Systems , 32, 2019. [13] Xin Dong, Shangyu Chen, and Sinno Pan. Learning to prune deep neural networks via layer-wise optimal brain surgeon. Advances in neural information processing systems , 30, 2017. [14] Zhe Dong, Andriy Mnih, and George Tucker. Disarm: An antithetic gradient estimator for binary latent variables. Advances in neural information processing systems , 33:18637–18647, 2020. [15] Felix Draxler, Kambis Veschgini, Manfred Salmhofer, and Fred A. Hamprecht. Essentially no barriers in neural network energy landscape, 2019. [16] H. Edelsbrunner, D. Kirkpatrick, and R. Seidel. On the shape of a set of points in the plane. IEEE Transactions on Information Theory , 29(4):551–559, 1983. [17] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transform- ers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning , 2024. [18] Joaquim F. 
and contributors. Gmt.jl: Julia wrapper for the generic mapping tools. online, 2014–2025. Accessed: 2025-05-02. [19] Simon Foucart and Holger Rauhut. A Mathematical Introduction to Compressive Sensing . Applied and Numerical Harmonic Analysis. Springer New York, New York, NY , 2013. [20] Peter D Grünwald. The minimum description length principle . MIT press, 2007. [21] Yuantao Gu, Jian Jin, and Shunliang Mei. The l0 norm constraint lms algorithm for sparse system identification. IEEE Signal Processing Letters , 16(9):774–777, 2009. [22] Mandy Guo, Zihang Dai, Denny Vrande ˇci´c, and Rami Al-Rfou. Wiki-40b: Multilingual language model dataset. In Proceedings of the Twelfth Language Resources and Evaluation Conference , pages 2440–2452, 2020. [23] Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. Advances in neural information processing systems , 29, 2016. [24] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. Advances in neural information processing systems , 28, 2015. [25] Trevor Hastie, Robert Tibshirani, and Ryan J Tibshirani. Extended comparisons of best subset selection, forward stepwise selection, and the lasso. arXiv preprint arXiv:1707.08692 , 2017. [26] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556 , 2022. [27] Zehao Huang and Naiyan Wang. Data-driven sparse structure
|
https://arxiv.org/abs/2505.17469v1
|
selection for deep neural networks. InProceedings of the European conference on computer vision (ECCV) , pages 304–320, 2018. [28] Marcus Hutter. Universal artificial intelligence: Sequential decisions based on algorithmic probability . Springer Science & Business Media, 2005. 12 [29] Yerlan Idelbayev and Miguel Á Carreira-Perpiñán. Exploring the effect of l0 / l2 regularization in neural network pruning using the lc toolkit. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) , pages 3373–3377. IEEE, 2022. [30] Imagen-Team-Google, :, Jason Baldridge, Jakob Bauer, Mukul Bhutani, Nicole Brichtova, Andrew Bunner, Lluis Castrejon, Kelvin Chan, Yichang Chen, Sander Dieleman, Yuqing Du, Zach Eaton-Rosen, Hongliang Fei, Nando de Freitas, Yilin Gao, Evgeny Gladchenko, Ser- gio Gómez Colmenarejo, Mandy Guo, Alex Haig, Will Hawkins, Hexiang Hu, Huilian Huang, Tobenna Peter Igwe, Christos Kaplanis, Siavash Khodadadeh, Yelin Kim, Ksenia Konyushkova, Karol Langner, Eric Lau, Rory Lawton, Shixin Luo, So ˇna Mokrá, Henna Nandwani, Yasumasa Onoe, Aäron van den Oord, Zarana Parekh, Jordi Pont-Tuset, Hang Qi, Rui Qian, Deepak Ramachandran, Poorva Rane, Abdullah Rashwan, Ali Razavi, Robert Riachi, Hansa Srinivasan, Srivatsan Srinivasan, Robin Strudel, Benigno Uria, Oliver Wang, Su Wang, Austin Waters, Chris Wolff, Auriel Wright, Zhisheng Xiao, Hao Xiong, Keyang Xu, Marc van Zee, Junlin Zhang, Katie Zhang, Wenlei Zhou, Konrad Zolna, Ola Aboubakar, Canfer Akbulut, Oscar Akerlund, Isabela Albuquerque, Nina Anderson, Marco Andreetto, Lora Aroyo, Ben Bariach, David Barker, Sherry Ben, Dana Berman, Courtney Biles, Irina Blok, Pankil Botadra, Jenny Brennan, Karla Brown, John Buckley, Rudy Bunel, Elie Bursztein, Christina Butterfield, Ben Caine, Viral Carpenter, Norman Casagrande, Ming-Wei Chang, Solomon Chang, Shamik Chaudhuri, Tony Chen, John Choi, Dmitry Churbanau, Nathan Clement, Matan Cohen, Forrester Cole, Mikhail Dektiarev, Vincent 
Du, Praneet Dutta, Tom Eccles, Ndidi Elue, Ashley Feden, Shlomi Fruchter, Frankie Garcia, Roopal Garg, Weina Ge, Ahmed Ghazy, Bryant Gipson, Andrew Goodman, Dawid Górny, Sven Gowal, Khyatti Gupta, Yoni Halpern, Yena Han, Susan Hao, Jamie Hayes, Jonathan Heek, Amir Hertz, Ed Hirst, Emiel Hoogeboom, Tingbo Hou, Heidi Howard, Mohamed Ibrahim, Dirichi Ike-Njoku, Joana Iljazi, Vlad Ionescu, William Isaac, Reena Jana, Gemma Jennings, Donovon Jenson, Xuhui Jia, Kerry Jones, Xiaoen Ju, Ivana Kajic, Christos Kaplanis, Burcu Karagol Ayan, Jacob Kelly, Suraj Kothawade, Christina Kouridi, Ira Ktena, Jolanda Kumakaw, Dana Kurniawan, Dmitry Lagun, Lily Lavitas, Jason Lee, Tao Li, Marco Liang, Maggie Li-Calis, Yuchi Liu, Javier Lopez Alberca, Matthieu Kim Lorrain, Peggy Lu, Kristian Lum, Yukun Ma, Chase Malik, John Mellor, Thomas Mensink, Inbar Mosseri, Tom Murray, Aida Nematzadeh, Paul Nicholas, Signe Nørly, João Gabriel Oliveira, Guillermo Ortiz-Jimenez, Michela Paganini, Tom Le Paine, Roni Paiss, Alicia Parrish, Anne Peckham, Vikas Peswani, Igor Petrovski, Tobias Pfaff, Alex Pirozhenko, Ryan Poplin, Utsav Prabhu, Yuan Qi, Matthew Rahtz, Cyrus Rashtchian, Charvi Rastogi, Amit Raul, Ali Razavi, Sylvestre-Alvise Rebuffi, Susanna Ricco, Felix Riedel, Dirk Robinson, Pankaj Rohatgi, Bill Rosgen, Sarah Rumbley, Moonkyung Ryu, Anthony Salgado, Tim Salimans, Sahil Singla, Florian Schroff, Candice Schumann, Tanmay Shah, Eleni Shaw, Gregory Shaw, Brendan Shillingford, Kaushik Shivakumar, Dennis Shtatnov, Zach Singer, Evgeny Sluzhaev, Valerii Sokolov, Thibault Sottiaux, Florian Stimberg, Brad Stone, David Stutz, Yu-Chuan Su, Eric Tabellion, Shuai Tang, David Tao, Kurt Thomas,
https://arxiv.org/abs/2505.17469v1
Gregory Thornton, Andeep Toor, Cristian Udrescu, Aayush Upadhyay, Cristina Vasconcelos, Alex Vasiloff, Andrey Voynov, Amanda Walker, Luyu Wang, Miaosen Wang, Simon Wang, Stanley Wang, Qifei Wang, Yuxiao Wang, Ágoston Weisz, Olivia Wiles, Chenxia Wu, Xingyu Federico Xu, Andrew Xue, Jianbo Yang, Luo Yu, Mete Yurtoglu, Ali Zand, Han Zhang, Jiageng Zhang, Catherine Zhao, Adilet Zhaxybay, Miao Zhou, Shengqi Zhu, Zhenkai Zhu, Dawn Bloxwich, Mahyar Bordbar, Luis C. Cobo, Eli Collins, Shengyang Dai, Tulsee Doshi, Anca Dragan, Douglas Eck, Demis Hassabis, Sissie Hsiao, Tom Hume, Koray Kavukcuoglu, Helen King, Jack Krawczyk, Yeqing Li, Kathy Meier-Hellstern, Andras Orban, Yury Pinsky, Amar Subramanya, Oriol Vinyals, Ting Yu, and Yori Zwols. Imagen 3, 2024.
[31] Diederik P Kingma. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[32] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[33] Andrei N Kolmogorov. On tables of random numbers. Sankhyā: The Indian Journal of Statistics, Series A, pages 369–376, 1963.
[34] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. N.A., 2009.
[35] Russell Z Kunes, Mingzhang Yin, Max Land, Doron Haviv, Dana Pe’er, and Simon Tavaré. Gradient estimation for binary latent variables via gradient variance clipping. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37-7, pages 8405–8412, 2023.
[36] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[37] Yann LeCun, John Denker, and Sara Solla. Optimal brain damage. Advances in neural information processing systems, 2, 1989.
[38] Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. Snip: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340, 2018.
[39] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
[40] Yang Li and Shihao Ji. L0-arm: Network sparsification via stochastic binary optimization. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 432–448. Springer, 2019.
[41] Yixin Liu, Kai Zhang, Yuan Li, Zhiling Yan, Chujie Gao, Ruoxi Chen, Zhengqing Yuan, Yue Huang, Hanchi Sun, Jianfeng Gao, Lifang He, and Lichao Sun. Sora: A review on background, technology, limitations, and opportunities of large vision models, 2024.
[42] Christos Louizos, Karen Ullrich, and Max Welling. Bayesian compression for deep learning. Advances in neural information processing systems, 30, 2017.
[43] Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through L0 regularization. arXiv preprint arXiv:1712.01312, 2017.
[44] Olvi L Mangasarian. Machine learning via polyhedral concave minimization. In Applied Mathematics and Parallel Computing: Festschrift for Klaus Ritter, pages 175–188. Springer, 1996.
[45] Nicolai Meinshausen. Relaxed lasso. Computational Statistics & Data Analysis, 52(1):374–393, 2007.
[46] Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In International conference on machine learning, pages 2498–2507. PMLR, 2017.
[47] Avik Pal. Lux: Explicit parameterization of deep neural networks in julia, April 2023.
[48] Avik Pal. On
efficient training & inference of neural differential equations, 2023.
[49] Richard Clark Pasco. Source coding algorithms for fast data compression. PhD thesis, Stanford University CA, 1976.
[50] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. CoRR, abs/1912.01703, 2019.
[51] Y.C. Pati, R. Rezaiifar, and P.S. Krishnaprasad. Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In Proceedings of 27th Asilomar Conference on Signals, Systems and Computers, pages 40–44, vol. 1, 1993.
[52] Dzung T Phan, Lam M Nguyen, Nam H Nguyen, and Jayant R Kalagnanam. Pruning deep neural networks with l_0-constrained optimization. In 2020 IEEE International Conference on Data Mining (ICDM), pages 1214–1219. IEEE, 2020.
[53] Jan Poland and Marcus Hutter. Convergence of discrete mdl for sequential prediction. In Learning Theory: 17th Annual Conference on Learning Theory, COLT 2004, Banff, Canada, July 1-4, 2004. Proceedings 17, pages 300–314. Springer, 2004.
[54] Christopher Rackauckas, Yingbo Ma, Julius Martensen, Collin Warner, Kirill Zubov, Rohit Supekar, Dominic Skinner, and Ali Ramadhan. Universal differential equations for scientific machine learning. arXiv preprint arXiv:2001.04385, 2020.
[55] Jack Rae. Compression for AGI | Stanford MLSys #76 | talk on YouTube, 2023.
[56] Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021.
[57] J. J. Rissanen.
Generalized Kraft inequality and arithmetic coding. IBM Journal of Research and Development, 20(3):198–203, 1976.
[58] Jorma Rissanen. Modeling by shortest data description. Automatica, 14(5):465–471, 1978.
[59] Subham Sahoo, Christoph Lampert, and Georg Martius. Learning equations for extrapolation and control. In International Conference on Machine Learning, pages 4442–4450. PMLR, 2018.
[60] Jürgen Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010.
[61] Claude Elwood Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423, 1948.
[62] Pravendra Singh, Vinay Kumar Verma, Piyush Rai, and Vinay P Namboodiri. Play and prune: Adaptive filter pruning for deep model compression. arXiv preprint arXiv:1905.04446, 2019.
[63] Ray Solomonoff. Complexity-based induction systems: comparisons and convergence theorems. IEEE Transactions on Information Theory, 24(4):422–432, 1978.
[64] Ray J Solomonoff. A formal theory of inductive inference. Part I. Information and Control, 7(1):1–22, 1964.
[65] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B: Statistical Methodology, 58(1):267–288, 1996.
[66] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[67] Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017.
[68]
Xia Xiao, Zigeng Wang, and Sanguthevar Rajasekaran. Autoprune: Automatic network pruning by regularizing auxiliary parameters. Advances in neural information processing systems, 32, 2019.
[69] Mingzhang Yin, Nhat Ho, Bowei Yan, Xiaoning Qian, and Mingyuan Zhou. Probabilistic best subset selection via gradient-based optimization. arXiv preprint arXiv:2006.06448, 2020.
[70] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B: Statistical Methodology, 68(1):49–67, 2006.
[71] Dejiao Zhang, Haozhu Wang, Mario Figueiredo, and Laura Balzano. Learning to share: Simultaneous parameter tying and sparsification in deep learning. In International Conference on Learning Representations, 2018.
[72] Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad, and Yanzhi Wang. A systematic DNN weight pruning framework using alternating direction method of multipliers. In Proceedings of the European conference on computer vision (ECCV), pages 184–199, 2018.
[73] Chenglong Zhao, Bingbing Ni, Jian Zhang, Qiwei Zhao, Wenjun Zhang, and Qi Tian. Variational convolutional neural network pruning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2780–2789, 2019.

Appendix

A Equivalence of minimum description length and Kolmogorov complexity

Here, we briefly show the following lemma for the reader.

Lemma A.1. Up to small constants, the Kolmogorov complexity $K_U$ defined in (1) and the minimum of the description length $L_\mu$ defined in (2) are equivalent if we take $\mathcal{M}$ to be the space of all computable probability distributions over the set of all finite sequences over some alphabet.

Proof. Given any distribution $\mu$, arithmetic coding allows one to encode any string $x$ into a code of length no more than $-\log_2(\mu(x)) + c_0$ bits, where $c_0$ is a small constant if we assume sufficient numerical precision for the encoding.
Hence we know that for each computable distribution $\mu$, there is a program of length at most $\ell(AC) + \ell(\mu) - \log_2(\mu(x)) + c_0 = L_\mu + \ell(AC) + c_0 =: L_\mu + c$ bits that generates $x$, where $\ell(AC)$ is the (usually small) length of the program that performs arithmetic (de)coding on the fixed universal machine $U$ (and we defined $c := \ell(AC) + c_0$). Since $K_U(x)$ is the length of the shortest possible program that generates $x$, it must in particular be shorter than $L_\mu + c$ for all $\mu$. Thus
$$K_U(x) \le c + \min_{\mu \in \mathcal{M}} L_\mu(x).$$
Conversely, we can use $p^*(x) := \arg\min_p \{\ell(p) \mid U(p) = x\}$ to define a distribution $\lambda_x$ by
$$\lambda_x(y) = \begin{cases} 1, & \text{if } y = U(p^*(x)), \\ 0, & \text{else.} \end{cases} \quad (12)$$
Then, up to a small constant $c'$, $\ell(\lambda_x)$ is equal to $\ell(p^*(x)) = K_U(x)$, while $-\log_2(\lambda_x(x)) = 0$. Thus we get $\min_\mu L_\mu(x) \le L_{\lambda_x}(x) = \ell(\lambda_x) = K_U(x) + c'$.

Thus, for all $x$, up to usually small constants $c$ and $c'$ that depend only on $U$ and not on $x$, $K_U(x)$ and $\min_\mu L_\mu(x)$ are equivalent. ■

B Refined square loss

Here we rederive (5) more carefully and show, theoretically and empirically, that the argmin yields the correct standard deviation $\sigma$ if the data-generating process is given by a conditional Gaussian probability density and enough samples are provided during training.

As explained around eq. (2), the description length of a dataset $X = \{(x_i, y_i)\}_{i \in \{1,\dots,n\}}$ under a probabilistic model $\mu$ is (up to a small constant) equal to $-\sum_i \log_2(\mu(x_i)) + \ell(\mu)$, where $\ell(\mu)$ is the description length of $\mu$. Now suppose that the model is described by a density $p_{\theta,\sigma}$, and assume that $\{p_{\theta,\sigma}\}_{\theta,\sigma}$ is a family of conditional Gaussian
densities, that is,
$$p_{\theta,\sigma}(b \mid a) := \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(b - f_\theta(a))^2}{2\sigma^2}\right), \quad (13)$$
with $\sigma \in \mathbb{R}$, $\theta \in \mathbb{R}^n$, and $f_\theta$ the function computed by the parameters $\theta$ of a neural network.

If we want to encode a datapoint $y_i$, then we need the probability $p_{\theta,\sigma}(y_i)$. However, we only have a density, and the Lebesgue measure of a single point vanishes. This is why, in a continuous space, arithmetic coding is not directly applicable. However, we can discretize the space, for example using Float32 or Float64 precision. Suppose that $A_i$ is the smallest positive area around $y_i$ on which $p_{\theta,\sigma}(y_i \mid x_i)$ is constant. For example, this could be the smallest positive Float32 number. Note, however, that larger floats are spaced more sparsely than smaller floats and hence have a larger $A_i$. Alternatively, one could use a subset of floats with uniform spacing, as is common when sampling pseudorandom numbers uniformly between $0$ and $1$; in that case, all $A_i$ would have equal size. In any case, since $p_{\theta,\sigma}$ is constant on $A_i$, we can easily evaluate the following integral and hence compute the probability as follows:
$$p_{\theta,\sigma}(y_i) = \int_{A_i} p_{\theta,\sigma}(y_i \mid x_i)\,\mu(y)\,dy = A_i\, p_{\theta,\sigma}(y_i \mid x_i)\,\mu(y). \quad (14)$$
If $y$ is sampled uniformly, then $\mu(y)$ would be $1/N$, where $N$ is the cardinality of the sample space. The description length becomes
$$\ell(p_{\theta,\sigma}) - \sum_i \log_2\!\big(A_i\, p_{\theta,\sigma}(y_i \mid x_i)/N\big) = \ell(\theta,\sigma) - \sum_i \log_2\!\big(p_{\theta,\sigma}(y_i \mid x_i)\big) - \sum_i \log_2(A_i) + \sum_i \log_2(N). \quad (15)$$
What is important in this last expression is that the last two sums do not depend on $(\theta, \sigma)$. Hence, when optimizing over those parameters, both terms can be ignored. This shows that the argmin of the "density description length" is actually equivalent to the argmin of the proper description length. We can also refine the second term:
$$-\sum_i \log_2\!\big(p_{\theta,\sigma}(y_i \mid x_i)\big) = \sum_i \left[\frac{1}{2}\log_2(2\pi\sigma^2) + \frac{(y_i - f_\theta(x_i))^2}{2\log(2)\,\sigma^2}\right]. \quad (16)$$
This is, interestingly, almost equal to the mean-squared-error (MSE) loss, but it is more refined in the sense that the standard deviation $\sigma$ is taken into account and jointly optimized.
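To make eq. (16) concrete, the per-sample code length can be written down directly and checked against the negative log of the Gaussian density in eq. (13); this is a small self-check of our own (the function name is ours, not from the paper):

```python
import numpy as np

def code_length_bits(y, f, sigma):
    # Per-sample description length from eq. (16):
    # 0.5*log2(2*pi*sigma^2) + (y - f)^2 / (2*log(2)*sigma^2)
    return 0.5 * np.log2(2 * np.pi * sigma**2) \
        + (y - f) ** 2 / (2 * np.log(2) * sigma**2)

# Sanity check: this equals -log2 of the Gaussian density in eq. (13)
y, f, sigma = 1.3, 0.8, 0.5
density = np.exp(-(y - f) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
assert np.isclose(code_length_bits(y, f, sigma), -np.log2(density))
```

As expected, the code length grows quadratically with the residual $y - f_\theta(x)$, exactly like the MSE loss, but with a $\sigma$-dependent scale and offset.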
Of course, if $\sigma$ is considered a constant, then the argmin of the above term is the same as the argmin of the MSE loss. One can therefore consider MSE a crude special case of description length minimization, obtained by specializing to Gaussian families, a uniform $\ell$-regularizer, and by ignoring the variation of the distribution. One can, however, optimize the following more exact loss:
$$L_{\text{Gauß}}(\theta, \sigma, X) = \frac{1}{n}\left[\ell(\theta,\sigma) + \sum_i \frac{1}{2}\log_2(2\pi\sigma^2) + \sum_i \frac{(y_i - f_\theta(x_i))^2}{2\log(2)\,\sigma^2}\right]. \quad (17)$$
The normalization factor $1/n = 1/|X|$ does not change the argmin for any given dataset $X$, but it helps to simplify the following arguments. We provide theoretical guarantees that, in the large-dataset limit $n \to \infty$, minimizing $L_{\text{Gauß}}$ recovers the exact function $f_\theta$ as well as the noise level $\sigma^2$.

Proposition B.1. Let $\theta \in \mathbb{R}^n$ be the parameters of a neural network, from a family of neural networks $\Theta$ satisfying the condition $\max_{\theta \in \Theta} \ell_0(\theta) < \infty$. Let $X = \{(x_i, y_i) \mid i = 1, \dots, n\}$, where the inputs are sampled from a uniform distribution $x_i \sim U(\mathrm{dom}(f_\theta))$ and the targets are sampled from a distribution $y \sim \mathcal{D} = \mathcal{D}(x_i)$, where $\mathcal{D}$ satisfies the condition
$$\forall x_i \in \mathrm{dom}(f_\theta): \quad \mathbb{E}(y) = f_\theta(x_i) \quad \text{and} \quad \mathrm{Var}(y) = \sigma^2 < \infty.$$
In the limit $n \to \infty$,
$$\vartheta_{\text{opt}}, \varsigma_{\text{opt}} := \arg\min_{\vartheta, \varsigma} L_{\text{Gauß}}$$
are such that $\varsigma^2_{\text{opt}} = \sigma^2$ and $f_{\vartheta_{\text{opt}}} = f_\theta$.

An example of such a distribution $\mathcal{D}$ is the normal distribution $\mathcal{N}(\mu = f_\theta, \sigma^2)$. The requirements on $\mathcal{D}$ are much weaker, though, and apply to any basic setting
where we encounter a signal $f_\theta(x_i)$ corrupted by noise of bounded variance.

Proof of Proposition B.1. For convenience we write
$$L_{\text{Gauß}}(\theta, \sigma, X) = \frac{\ell(\theta,\sigma)}{n} + L_{\text{simple}}(\theta, \sigma, X). \quad (18)$$
We may assume that $\ell(\theta,\sigma)$ is bounded by some constant, and thus in the limit $n \to \infty$:
$$\arg\min_{\vartheta, \varsigma} L_{\text{Gauß}}(\vartheta, \varsigma, X) = \arg\min_{\vartheta, \varsigma} L_{\text{simple}}(\vartheta, \varsigma, X). \quad (19)$$
It therefore suffices to prove the proposition for $L_{\text{simple}}$ rather than $L_{\text{Gauß}}$. Further, we use the following helper lemmas (stated without proof):

Lemma B.2. Let $x \sim \mathcal{D}$, $\mu = \mathbb{E}(\mathcal{D})$, $\varsigma^2 = \mathrm{Var}(\mathcal{D})$ for some distribution $\mathcal{D}$ over $\mathbb{R}$ whose expectation is a bounded linear operator and whose variance $\mathrm{Var}(\mathcal{D}) \equiv \varsigma^2$ is finite. Let $m \in \mathbb{R}$. Then
$$\arg\min_m \mathbb{E}[(x - m)^2] = \mu \quad \text{and} \quad \min_m \mathbb{E}[(x - m)^2] = \varsigma^2.$$
An example for $\mathcal{D}$ is the normal distribution $\mathcal{N}(\mu, \varsigma^2)$.

Lemma B.3. Let $f = f(a) > 0$ and $g = g(a, b) > 0$ be positive real functions and $b^*(a) := \arg\min_b g(a, b)$. Then the minimum of $f(a) + g(a, b)$ occurs at $b = b^*(a)$, i.e.,
$$\min_{a,b}\,[f(a) + g(a, b)] = \min_a\,[f(a) + g(a, b^*(a))].$$

The law of large numbers tells us that for large $n$, the empirical average approximates the expectation; in the limit $n \to \infty$, equality holds. Assuming $n$ is large, we can thus make the approximation
$$L_{\text{simple}}(\vartheta, \varsigma, X) \approx \mathbb{E}\left[\frac{1}{2}\log_2(2\pi\varsigma^2) + \frac{(y_i - f_\vartheta(x_i))^2}{2\log(2)\,\varsigma^2}\right].$$
Due to linearity of expectation,
$$\mathbb{E}\left[\frac{1}{2}\log_2(2\pi\varsigma^2) + \frac{(y_i - f_\vartheta(x_i))^2}{2\log(2)\,\varsigma^2}\right] = \frac{1}{2}\log_2(2\pi\varsigma^2) + \frac{1}{2\log(2)\,\varsigma^2}\,\mathbb{E}\big[(y_i - f_\vartheta(x_i))^2\big].$$
Evaluating the argmin of this term with respect to $\vartheta$ and $\varsigma$, we can apply Lemma B.3 to conclude that, for any given $\varsigma$, the optimal $\vartheta$ minimizes
$$\frac{1}{2\log(2)\,\varsigma^2}\,\mathbb{E}\big[(y_i - f_\vartheta(x_i))^2\big].$$
We remind ourselves that $y_i \sim \mathcal{D}(x_i)$ with $\mathbb{E}(y_i) = f_\theta(x_i)$ and $\mathrm{Var}(y_i) = \sigma^2$. Since, for any $a > 0$, $\arg\min_x f(x) = \arg\min_x a f(x)$, we can apply Lemma B.2 to conclude that $\vartheta_{\text{opt}}$ must be such that $f_{\vartheta_{\text{opt}}}(x_i) = f_\theta(x_i)$ and $\mathbb{E}\big[(y_i - f_{\vartheta_{\text{opt}}}(x_i))^2\big] = \sigma^2$. With this, the second half of the statement is already proven.
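Lemma B.2 is easy to check numerically: over a grid of candidates $m$, the empirical risk $\mathbb{E}[(x - m)^2]$ is minimized near the mean, with minimal value near the variance. A small sketch with arbitrary example values of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 0.7
x = rng.normal(mu, sigma, size=100_000)

# Empirical risk E[(x - m)^2] over a grid of candidate values m
ms = np.linspace(0.0, 4.0, 401)
risk = np.array([np.mean((x - m) ** 2) for m in ms])

assert abs(ms[np.argmin(risk)] - mu) < 0.02   # argmin at the mean
assert abs(risk.min() - sigma**2) < 0.02      # minimal risk equals the variance
```

The same check goes through for non-Gaussian noise with the same mean and variance, matching the weak requirements on $\mathcal{D}$ in Proposition B.1.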
To show that $\varsigma^2_{\text{opt}} = \sigma^2$, we can now simplify:
$$\arg\min_{\vartheta,\varsigma}\left[\frac{\log_2(2\pi\varsigma^2)}{2} + \frac{1}{2\log(2)\,\varsigma^2}\,\mathbb{E}\big[(y_i - f_\vartheta(x_i))^2\big]\right] = \arg\min_\varsigma\left[\frac{\log_2(2\pi\varsigma^2)}{2} + \frac{\sigma^2}{2\log(2)\,\varsigma^2}\right].$$
Solving this problem using classical optimization yields that
$$f(\varsigma) = \frac{\log_2(2\pi\varsigma^2)}{2} + \frac{\sigma^2}{2\log(2)\,\varsigma^2}$$
has one local optimum, which is a minimum at $\varsigma^2 = \sigma^2$. ■

C Probabilistic reformulation

C.1 Proof of Proposition 2.2

The Bernoulli distribution $\pi_\gamma$ over $\{0,1\}^n$ is given by
$$\pi_\gamma(z) = \prod_{j=1}^{n}\big(\gamma_j \delta_{1,z_j} + (1-\gamma_j)\delta_{0,z_j}\big), \quad (20)$$
where $\delta_{i,j}$ is $1$ if $i = j$ and $0$ else, and $\gamma_j \in [0,1]$. Hence,
$$\mathbb{E}_{z\sim\pi_\gamma}[g(z)] = \sum_{z\in\{0,1\}^n} \pi_\gamma(z)\, g(z) = \sum_z \prod_{j=1}^{n}\big(\gamma_j \delta_{1,z_j} + (1-\gamma_j)\delta_{0,z_j}\big)\, g(z)$$
$$= \sum_z \prod_{j\ne k}\big(\gamma_j \delta_{1,z_j} + (1-\gamma_j)\delta_{0,z_j}\big)\big(\gamma_k \delta_{1,z_k} + (1-\gamma_k)\delta_{0,z_k}\big)\, g(z)$$
$$= \gamma_k \sum_{\{z \mid z_k=1\}} \prod_{j\ne k}\big(\gamma_j \delta_{1,z_j} + (1-\gamma_j)\delta_{0,z_j}\big)\, g(z) + (1-\gamma_k) \sum_{\{z \mid z_k=0\}} \prod_{j\ne k}\big(\gamma_j \delta_{1,z_j} + (1-\gamma_j)\delta_{0,z_j}\big)\, g(z).$$
Since $\gamma_k$ is bounded and the expression $\mathbb{E}_{z\sim\pi}[g(z)]$ is linear in $\gamma_k$, the expression must reach its extremal points at the boundaries, i.e. at $\gamma_k = 0$ and at $\gamma_k = 1$. Since $k$ was arbitrary in the argument above, this must hold for every $k \in \{1,\dots,n\}$. Therefore, $\min_\pi \mathbb{E}_{z\sim\pi}[g(z)] = \min_z g(z)$. ■

C.2 Proof of Lemma 2.3

In the context of Section 2.2.1, we reproduce below the proof of Lemma 2.3 provided in [1] for the convenience of the reader. As a first step, one can expand the square:
$$\sum_i \Big(y_i - \sum_j X_{ij} w_j z_j\Big)^2 = \sum_i \bigg[y_i^2 - 2 y_i \sum_j X_{ij} w_j z_j + \Big(\sum_j X_{ij} w_j z_j\Big)^2\bigg]. \quad (21)$$
Next we exchange $\mathbb{E}_{z\sim\pi_\gamma}$ with the linear sums and use $\mathbb{E}_{z\sim\pi_\gamma}(z_j) = \gamma_j \cdot 1 + (1-\gamma_j)\cdot 0 = \gamma_j$ to obtain
$$\mathbb{E}_{z\sim\pi_\gamma}\bigg[\sum_i \Big(y_i - \sum_j X_{ij} w_j z_j\Big)^2\bigg] = \sum_i \bigg[y_i^2 - 2 y_i \sum_j X_{ij} w_j \gamma_j + \mathbb{E}_{z\sim\pi_\gamma}\Big(\sum_j X_{ij} w_j z_j\Big)^2\bigg]. \quad (22)$$
For the remaining expectation of $\big(\sum_j X_{ij} w_j z_j\big)^2$, note that its terms are of the form $X_{ij_1} X_{ij_2} w_{j_1} w_{j_2} z_{j_1} z_{j_2}$. Whenever $j_1 \ne j_2$, such terms are again separately linear in $z_{j_1}$ and $z_{j_2}$ and can be replaced by $\gamma_{j_1}\gamma_{j_2}$ terms. (To see this, note that $\mathbb{E}_{z\sim\pi_\gamma}[z_{j_1} z_{j_2}] = \gamma_{j_1}\gamma_{j_2}\cdot 1\cdot 1 + (1-\gamma_{j_1})\gamma_{j_2}\cdot 0\cdot 1 + \gamma_{j_1}(1-\gamma_{j_2})\cdot 1\cdot 0 + (1-\gamma_{j_1})(1-\gamma_{j_2})\cdot 0\cdot 0 = \gamma_{j_1}\gamma_{j_2}$.) Only if $j_1 = j_2$ do we get $\mathbb{E}_{z\sim\pi_\gamma}[z_{j_1} z_{j_2}] = \mathbb{E}_{z\sim\pi_\gamma}[z_{j_1}^2] = \gamma_{j_1}\cdot 1^2 + (1-\gamma_{j_1})\cdot 0^2 = \gamma_{j_1}$, i.e. the expectation linearizes the squares. If the sum over $j$ has $n$ terms, then the square of the sum has exactly $n$ terms of the form $z_{j_k}^2$, and we therefore obtain
$$\mathbb{E}_{z\sim\pi_\gamma}\Big(\sum_j X_{ij} w_j z_j\Big)^2 = \Big(\sum_j X_{ij} w_j \gamma_j\Big)^2 - \sum_j X_{ij}^2 w_j^2 \gamma_j^2 + \sum_j X_{ij}^2 w_j^2 \gamma_j. \quad (23)$$
When reinserting this into (22), we can recombine the first three terms into a square again and finally get
$$\mathbb{E}_{z\sim\pi_\gamma}\bigg[\sum_i \Big(y_i - \sum_j X_{ij} w_j z_j\Big)^2\bigg] = \sum_i \Big(y_i - \sum_j X_{ij} w_j \gamma_j\Big)^2 + \sum_{i,j} X_{ij}^2 w_j^2 \gamma_j (1-\gamma_j). \qquad \blacksquare$$

C.3 Square terms in the constrained formulations

It is important to use a square term $u\cdot(\theta - wz)^2$ for the constrained formulations in Eqs. (9) and (10), because a linear term $u\cdot(\theta - wz)$ would have become $u\cdot(\theta - w\gamma)$, which would not have ensured that $\gamma$ is pushed towards $0$ or $1$. The reason is that one feasible solution would make $\gamma$ vanishingly small (to reduce the loss of the $\ell_0$ term $\sum_i \gamma_i$) and $w$, which is free, equal to $\theta/\gamma$. When adopting a quadratic constraint, however, we get two terms, namely $u\cdot(\theta - w\gamma)^2$ and $u\cdot(w^2\gamma(1-\gamma))$. If again we choose $w = \theta/\gamma$ and $\gamma \approx 0$ to bring both $\sum_i \gamma_i$ and the first term towards $0$, then the second term becomes $u\cdot(\theta^2(1-\gamma)/\gamma) \approx u\cdot(\theta^2/\gamma)$ (where the division is done componentwise). Since $\gamma$ is small, this term becomes very large. As a result, the quadratic formulation evades the problem of the linear constraint, which could make the optimization end up in an optimum that we do not want. Therefore, Lemma 2.3 is key to making the procedure work well in practice.
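Because $z$ ranges over the finite set $\{0,1\}^n$, the identity of Lemma 2.3 can be verified directly by brute-force enumeration. A small sketch of our own with random data:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
X = rng.normal(size=(m, n))
w = rng.normal(size=n)
y = rng.normal(size=m)
gamma = np.array([0.2, 0.7, 0.5])

# Left-hand side: exact expectation over all z in {0,1}^n under pi_gamma
lhs = 0.0
for bits in itertools.product([0, 1], repeat=n):
    z = np.array(bits)
    prob = np.prod(np.where(z == 1, gamma, 1 - gamma))
    lhs += prob * np.sum((y - X @ (w * z)) ** 2)

# Right-hand side: the closed form of Lemma 2.3
rhs = np.sum((y - X @ (w * gamma)) ** 2) \
    + np.sum((X ** 2) @ (w ** 2 * gamma * (1 - gamma)))

assert np.isclose(lhs, rhs)
```

The variance-like second term vanishes exactly when every $\gamma_j$ is $0$ or $1$, which is what pushes the relaxation towards binary masks.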
C.4 ADMM update equations

C.4.1 Preliminaries

The alternating direction method of multipliers (ADMM) is an optimization method for constrained optimization problems with stable convergence guarantees for a wide class of problems [5]. Furthermore, it allows one to parallelize operations over multiple processors and compute nodes. In general, it takes the form
$$\text{minimize} \quad \sum_i f_i(x_i) + g(z) \qquad \text{subject to} \quad M_i x_i = P z + c \quad \forall i, \quad (24)$$
with $x_i, z \in \mathbb{R}^n$, $i \in \{1,\dots,N\}$, $M_i$ and $P$ arbitrary matrices of appropriate dimensions, and $c$ a constant vector. An important special case is $M_i = \mathrm{id} = P$ and $c = 0$. Then the problem is equivalent to the minimization of $\sum_i f_i(x) + g(x)$, but expressing it in the form (24) allows for parallel processing, as we will see below. For example, $f_i(x)$ could be the loss on the $i$-th batch of some model parameterized by $x$.

We can associate a Lagrangian to this problem, namely
$$L(\{x_i\}_i, z, \{y_i\}_i) = \sum_i \big[f_i(x_i) + y_i^\top(M_i x_i - P z - c)\big] + g(z), \quad (25)$$
where the $y_i$ are vectors whose elements are Lagrange multipliers. The constrained problem (24) is then equivalent to the following unconstrained minimax problem:
$$\min_{x_i,\, z}\, \max_{y_i}\, L(\{x_i\}_i, z, \{y_i\}_i). \quad (26)$$
At solution points of the problem, the constraints $r_i := M_i x_i - P z - c = 0$ (also known as the residuals) must be satisfied, and hence we can augment the Lagrangian with the following term, which is quadratic in the residuals:
$$L_\rho(\{x_i\}_i, z, \{y_i\}_i) = \sum_i \Big[f_i(x_i) + y_i^\top(M_i x_i - P z - c) + \frac{\rho}{2}\,\|M_i x_i - P z - c\|_2^2\Big] + g(z).$$
We have $\min_{x_i,z}\max_{y_i} L(\{x_i\}_i, z, \{y_i\}_i) = \min_{x_i,z}\max_{y_i} L_\rho(\{x_i\}_i, z, \{y_i\}_i)$, but at the same time the quadratic augmentation stabilizes
the numerical solution procedure and guarantees convergence for a wider class of functions.

The ADMM procedure now consists of alternatingly minimizing with respect to $x_i$ and $z$ and maximizing with respect to the Lagrange multipliers $y_i$ (hence the name alternating direction method of multipliers). Concretely, the updates are given by the following recursive equations:
$$x_i^{k+1} = \arg\min_{x_i \in \mathbb{R}^n}\, L_\rho(\{x_i\}_i, z^k, \{y_i^k\}_i) = \arg\min_{x_i \in \mathbb{R}^n} \Big\{f_i(x_i) + (y_i^k)^\top(M_i x_i - P z^k - c) + \frac{\rho}{2}\,\|M_i x_i - P z^k - c\|_2^2\Big\},$$
$$z^{k+1} = \arg\min_{z \in \mathbb{R}^n} \Big\{g(z) + \sum_i \Big[(y_i^k)^\top(M_i x_i^{k+1} - P z - c) + \frac{\rho}{2}\,\|M_i x_i^{k+1} - P z - c\|_2^2\Big]\Big\},$$
$$y_i^{k+1} = y_i^k + \rho\,(M_i x_i^{k+1} - P z^{k+1} - c). \quad (27)$$
These can be brought into a slightly simpler form by introducing rescaled variables $u_i := y_i/\rho$:
$$x_i^{k+1} = \arg\min_{x_i \in \mathbb{R}^n} \Big\{f_i(x_i) + \frac{\rho}{2}\,\|M_i x_i - P z^k - c + u_i^k\|_2^2\Big\},$$
$$z^{k+1} = \arg\min_{z \in \mathbb{R}^n} \Big\{g(z) + \sum_i \frac{\rho}{2}\,\|M_i x_i^{k+1} - P z - c + u_i^k\|_2^2\Big\},$$
$$u_i^{k+1} = u_i^k + M_i x_i^{k+1} - P z^{k+1} - c. \quad (28)$$
This algorithm is known to converge to a global optimum under the conditions that $f_i$ and $g$ are convex, closed and proper, or equivalently, that their epigraphs are convex sets (cf. [5]). Though in our applications $f_i$ and $g$ are not convex (but are closed and proper), we can still attempt to use the method to reach a local optimum.

C.4.2 L0-regularized ADMM

What is to our advantage when inspecting the equations (28) of this method is that the regularization term $g(z)$ is separated from $f_i(x)$ and occurs only together with terms at most quadratic in $z$. This means that we can apply our Proposition 2.2, eq. (8), and Lemma 2.3 to this method in the following way:

1. Suppose we have a model $f_\theta$ with parameters $\theta$ that we want to optimize to fit some data $D$, and that we want to distribute our computation with $N$ batches of data $D_i$, $i \in \{1,\dots,N\}$, on $N$ compute nodes that run in parallel.
We can do this by copying the model $z := \theta$ to $N$ other machines, where we define $x_i := \mathrm{copy}(\theta)$ and where we also copy the data and model such that $f_i(x_i) = \mathrm{copy}(f_\theta)(D_i)$, and optimize
$$\text{minimize} \quad \sum_i f_\theta(D_i) + \alpha L_0(\theta) = \sum_i f_i(x_i) + \alpha L_0(z), \qquad \text{subject to} \quad x_i = z. \quad (29)$$
We see now that this fits the general scheme of (24) with $M = \mathrm{id} = P$ and $c = 0$. For example, we could have $f_i(x_i) = -\log_2(p_\theta(D_i))$, minimizing the negative log-likelihood of the data, with the $L_0$ constraint as in (2) and (6). However, for now we keep the more general constraints with $M$, $P$, $c$ generic and only specialize to $g(z) = \alpha L_0(z)$.

2. The big advantage is now that the $L_0(\theta)$ term occurs only within an optimization step that does not involve $f_\theta(D)$. So we can use Proposition 2.2 to reformulate the $z^{k+1}$-update as follows. First, we introduce two new vector-valued variables $w$ and $Z$ and write $z = wZ$, where $wZ$ is the componentwise product, $w \in \mathbb{R}^n$, and $Z \in \{0,1\}^n$. Then we apply the ideas from Section 2.2.1:
$$z^{k+1} = \arg\min_{z \in \mathbb{R}^n} \Big\{\alpha L_0(z) + \sum_i \frac{\rho}{2}\,\|M_i x_i^{k+1} - P z - c + u_i^k\|_2^2\Big\}$$
$$\overset{\text{(Prop. 2.2)}}{=} \arg\min_{w \in \mathbb{R}^n,\, \pi \in \mathrm{Bern}}\, \mathbb{E}_{Z\sim\pi}\Big[\alpha \sum_j Z_j + \sum_i \frac{\rho}{2}\,\|M_i x_i^{k+1} - c + u_i^k - P(wZ)\|_2^2\Big]$$
$$\overset{\text{(Lemma 2.3)}}{=} \arg\min_{w \in \mathbb{R}^n,\, \pi \in [0,1]^n} \Big[\alpha \sum_j \pi_j + \sum_i \frac{\rho}{2}\,\|M_i x_i^{k+1} - c + u_i^k - P(w\pi)\|_2^2 + \sum_{i=1}^{N} \frac{\rho}{2} \sum_{p,j} P_{pj}^2\, w_j^2\, \pi_j (1-\pi_j)\Big]$$
$$=: \arg\min_{w \in \mathbb{R}^n,\, \pi \in [0,1]^n} L(w, \pi). \quad (30)$$

3. We can attempt to solve for $w$ by computing
$$\frac{\partial L(w,\pi)}{\partial w_\ell} = -\sum_i \sum_j \rho \Big[(M_i x_i^{k+1} - c + u_i^k)_j - \sum_q P_{jq}\, w_q\, \pi_q\Big] P_{j\ell}\, \pi_\ell + N\rho \sum_p P_{p\ell}^2\, w_\ell\, \pi_\ell (1-\pi_\ell)$$
$$= -\rho \sum_{i,j} (M_i x_i^{k+1} - c + u_i^k)_j\, P_{j\ell}\, \pi_\ell + N\rho \sum_{j,q} \pi_\ell\, P^\top_{\ell j} P_{jq}\, \pi_q\, w_q + N\rho \sum_{j,q} P^\top_{\ell j} P_{j\ell}\, \pi_\ell (1-\pi_\ell)\, \delta_{\ell q}\, w_q$$
$$= -\rho\, \pi_\ell \sum_{i,j} P^\top_{\ell j}\, (M_i x_i^{k+1} - c + u_i^k)_j + N\rho\, \pi_\ell \sum_{j,q} P^\top_{\ell j} P_{jq}\, \big(\pi_q + (1-\pi_q)\,\delta_{\ell q}\big)\, w_q \quad (31)$$
and setting this expression to $0$.

4. In general, this can be non-trivial to solve, but for the important special case that $P = \mathrm{id}$, or $P_{jq} = \delta_{jq}$, we obtain
$$0 = -\rho\,\pi_\ell \sum_i (M_i x_i^{k+1} - c + u_i^k)_\ell + N\rho\,\pi_\ell\, \hat w_\ell. \quad (32)$$
This leads to either an arbitrary value for $\hat w_\ell$, if $\pi_\ell = 0$, or otherwise to
$$\hat w = \frac{1}{N} \sum_i (M_i x_i^{k+1} - c + u_i^k) =: \frac{1}{N} \sum_i r_i^k. \quad (33)$$
This is an average that does not depend on $\pi$ anymore.

5. Inserting this back into $L(\hat w, \pi)$, for the case $P_{pj} = \delta_{pj}$, we obtain
$$L(\hat w, \pi) = \alpha \sum_j \pi_j + \sum_i \frac{\rho}{2}\,\|r_i^k - \hat w\pi\|_2^2 + \frac{N\rho}{2} \sum_j \hat w_j^2\, \pi_j (1-\pi_j). \quad (34)$$
Since this is linear in $\pi_\ell$ for all $\ell \in \{1,\dots,n\}$, the only thing we need to do in order to find a minimum is to check the sign of the gradient with respect to $\pi_\ell$ and move $\pi_\ell$ to the most extreme value admitted within the feasible set in the direction opposite to the sign. This must be a vertex of the cube $[0,1]^n$. We compute
$$\frac{\partial L(\hat w, \pi)}{\partial \pi_\ell} = \alpha - \sum_i \rho\,(r_i^k - \hat w\pi)_\ell\, \hat w_\ell + \frac{N\rho}{2}\, \hat w_\ell^2\, \big((1-\pi_\ell) - \pi_\ell\big) = \alpha - \rho\,\hat w_\ell \sum_i (r_i^k)_\ell + \frac{N\rho}{2}\, \hat w_\ell^2 = \alpha - \frac{\rho}{2} N \hat w_\ell^2. \quad (35)$$
As a consequence, whenever $\alpha - \frac{\rho}{2} N \hat w_\ell^2 < 0$, we set $\pi_\ell = 1$, and if it is $\ge 0$, we set $\pi_\ell = 0$. To summarize the result of all the steps above, our optimization step for $z^{k+1} = w^{k+1}\pi^{k+1}$ is:
$$w_\ell^{k+1} = \frac{1}{N} \sum_{i=1}^{N} \big(M_i x_i^{k+1} - c + u_i^k\big)_\ell, \qquad \pi_\ell^{k+1} = \begin{cases} 1, & \text{if } 2\alpha < \rho N (w_\ell^{k+1})^2, \\ 0, & \text{else.} \end{cases} \quad (36)$$
This simple exact solution consists of a hard-threshold operator that can be carried out in a single step, and it can be considered a generalization of hard thresholding in compressive sensing. The final scheme is thus:
$$x_i^{k+1} = \arg\min_{x_i \in \mathbb{R}^n} \Big\{f_i(x_i) + \frac{\rho}{2}\,\|M_i x_i - z^k - c + u_i^k\|_2^2\Big\},$$
$$z^{k+1} = w^{k+1}\pi^{k+1}, \quad \text{as given in (36)},$$
$$u_i^{k+1} = u_i^k + M_i x_i^{k+1} - z^{k+1} - c. \quad (37)$$
To handle the $x$-update step, we can train a possibly nonlinear model via gradient descent until close to convergence and then start alternating, always providing the previous $x^k$ as a warm start.
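The scheme (37) with $M_i = P = \mathrm{id}$ and $c = 0$ can be sketched end-to-end on a toy consensus problem. We pick quadratic per-node losses $f_i(x) = \frac{1}{2}\|x - a_i\|^2$ purely for illustration (not the paper's neural-network setting), since their $x$-update is then available in closed form:

```python
import numpy as np

def l0_consensus_admm(a_list, alpha=0.5, rho=1.0, iters=100):
    """ADMM scheme (37) for  sum_i 0.5*||x_i - a_i||^2 + alpha*||z||_0
    subject to x_i = z  (M_i = P = id, c = 0)."""
    N, n = len(a_list), a_list[0].shape[0]
    z = np.zeros(n)
    u = [np.zeros(n) for _ in range(N)]
    for _ in range(iters):
        # x-update: closed form for the quadratic node losses
        x = [(a + rho * (z - ui)) / (1.0 + rho) for a, ui in zip(a_list, u)]
        # z-update, eq. (36): averaging followed by a hard threshold
        w = sum(xi + ui for xi, ui in zip(x, u)) / N
        keep = rho * N * w**2 > 2.0 * alpha
        z = w * keep
        # scaled dual update
        u = [ui + xi - z for xi, ui in zip(x, u)]
    return z

a_list = [np.array([3.01, 0.05, -2.00]),
          np.array([2.99, 0.06, -2.00]),
          np.array([3.00, 0.04, -1.98])]
z = l0_consensus_admm(a_list)
# The tiny middle coordinate is pruned exactly to zero;
# the two large coordinates reach consensus near the node averages.
```

Note that the hard threshold sets pruned coordinates of $z$ exactly to zero in a single step, in contrast to the smooth relaxations discussed elsewhere in the paper.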
C.5 Layerwise pruning

The slight generalization of Lemma 2.3 that is needed for layerwise pruning reads
$$\mathbb{E}_{z\sim\pi_\gamma}\bigg[\sum_{k,i}\Big(y^\ell_{ki} - \sum_j w^\ell_{kj}\, z^\ell_{kj}\, X^\ell_{ji}\Big)^2\bigg] = \sum_{k,i}\Big(y^\ell_{ki} - \sum_j w^\ell_{kj}\, \gamma^\ell_{kj}\, X^\ell_{ji}\Big)^2 + \sum_{k,i,j} (w^\ell_{kj})^2\, \gamma^\ell_{kj}(1-\gamma^\ell_{kj})\, (X^\ell_{ji})^2. \quad (38)$$
If we now suppose that $X^\ell_{ji}$ is the activation of layer $\ell-1$, and that the current weights of layer $\ell$ are $\hat w_{kj}\hat z_{kj}$, then we can set $y^\ell_{ki} = \sum_j \hat w_{kj}\hat z_{kj} X^\ell_{ji}$ and thereafter minimize (38) with respect to $w, \gamma$ with gradient descent until we converge to an optimum where all components of the vector $\gamma$ are either $0$ or $1$. Thereafter, we set the new weight matrix equal to $w$. This also works seamlessly with biases: indeed, we can think of $w$ as a matrix whose last column contains the bias values and add an additional row of ones to $X$ to cover this case with the same theorem and algorithm.

We additionally found that it is advantageous to traverse the network in reverse order when combining this method with Random Gradient Pruning, introduced in Section 2.2.3: we start with this procedure at the last layer and prune
away all unnecessary weights. This then produces spurious weights as described in Section 2.2.3 and Appendix D. Those weights can thereafter be eliminated efficiently with Random Gradient Pruning before proceeding to the next-to-last layer, and so on. In this way, part of the layerwise information is propagated through the entire network.

D Random gradient pruning illustrations

To illustrate the discussion in Subsection 2.2.3, Figure 3 shows a two-hidden-layer network. On the left side, some method pruned the network and left behind spurious connections that do not contribute to the function that the network computes. For example, the first, fourth and seventh neurons in the penultimate layer are not connected to the final neuron, and the last neuron in the second layer receives no inputs while its associated bias is zero. Random gradient pruning removes those spurious weights, as one can see on the right-hand side.

Figure 3: Weights in a simple neural network before (left) and after (right) random gradient pruning. (Line thickness and color indicate connection strength and sign of weights. Circle colors indicate strength of biases.)

E Adaptive Mask Determination pseudocode

The binary search method discussed in Subsection 2.2.4 can be described with the pseudocode in Algorithm 1. It turned out to be highly effective in pruning the right amount of weights, given a tolerated loss increase.

Algorithm 1 TAMADE
Input: loss function $L$, neural network $f_\theta$, target $t$, lower bound for threshold $low$, upper bound for threshold $high$, tolerance $tol$, binary search resolution $r$
Output: highest threshold that does not increase the loss by more than $100 \cdot tol$ percent.
Remark: Often $t$ is initialized as $L(f_\theta)$ (the current loss), $low$ as $0$, and $high$ as the highest absolute value of the weights in $\theta$. A good default value for $r$ is 1e-7.
1: $best = low$
2: $steps = 0$
3: while $|high - low| > r$ do
4:  $steps = steps + 1$
5:  $mid = (low + high)/2$ {Typical binary search step}
6:  $\theta' = \mathrm{copy}(\theta)$
7:  $\theta'[|\theta'| \le mid] = 0$ {Set all weights to $0$ whose absolute value is at most the current threshold value.}
8:  if $L(f_{\theta'}) \le (1 + tol)\cdot t$ then
9:   $best = mid$ {If the loss is within tolerance, then we have a new best threshold value.}
10:   $low = mid$ {However, we do not return this best value immediately but continue the loop with $low = mid$, because we might find an even higher threshold that still results in a loss within tolerance. This is a variation of standard binary search.}
11:  else
12:   $high = mid$ {Otherwise, we decrease the highest possible threshold value.}
13:  end if
14: end while
15: return $best$, $steps$

F Ablation Studies

F.1 Experiments on the DRR Method

This appendix presents a comprehensive set of experiments designed to systematically address the four primary limitations of the Differentiable Relaxation of $\ell_0$-Regularization (DRR) method:

1. Strong gradients near zero may trap parameters in local minima.
2. Parameters with large magnitudes but low utility may receive weak gradients and be inefficiently optimized.
3. The relaxation does not yield exact zeros, necessitating a manual pruning threshold.
4. The method introduces four hyperparameters ($\alpha$, $\beta$, $\rho$, $\epsilon$), complicating tuning.

Our experiments examine and propose targeted remedies for each of these
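Algorithm 1 translates almost line for line into code. The following sketch is our own rendering (the toy loss and weight values are invented for illustration; the real $L(f_{\theta'})$ would evaluate the pruned network on data):

```python
import numpy as np

def tamade(loss, theta, target, low, high, tol=0.05, r=1e-7):
    """Algorithm 1: binary search for the highest pruning threshold
    whose induced loss stays within (1 + tol) * target."""
    best, steps = low, 0
    while abs(high - low) > r:
        steps += 1
        mid = (low + high) / 2
        pruned = np.where(np.abs(theta) <= mid, 0.0, theta)
        if loss(pruned) <= (1 + tol) * target:
            best, low = mid, mid   # feasible: try an even higher threshold
        else:
            high = mid             # infeasible: lower the upper bound
    return best, steps

# Toy check: two weights are pure noise, two carry the signal
true_w = np.array([5.0, 0.0, -3.0, 0.0])
theta = np.array([5.0, 0.001, -3.0, 0.002])
loss = lambda w: float(np.mean((w - true_w) ** 2))  # stand-in for L(f_theta)
best, steps = tamade(loss, theta, target=loss(theta),
                     low=0.0, high=np.max(np.abs(theta)))
# best lands just below 3.0: the largest threshold that prunes the two
# noise weights without touching the signal weights 5.0 and -3.0.
```

As in the remark of Algorithm 1, the search needs only on the order of $\log_2((high - low)/r)$ loss evaluations, which is what keeps the overhead minimal.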
limitations.

Improving the $\ell_0$ Approximation To mitigate issues with gradient magnitude (points 1 and 2), we evaluated alternative smooth approximations to the $\ell_0$ indicator function:
$$\ell_0(\theta_i) \approx 1 - \exp(-\beta\,|\theta_i|) \quad (39)$$
$$\ell_0(\theta_i) \approx 1 - \exp\!\big(-\theta_i^2/(2\beta^2)\big) \quad (40)$$
$$\ell_0(\theta_i) \approx 1 - \frac{1}{1 + \theta_i^2/(3\beta^2)} \quad (41)$$
The first is the exponential form used in [8], while the second and third draw inspiration from the Gaussian and Student-$t$ distributions, respectively. Despite the conceptual appeal of these alternatives, our experiments across all three benchmark datasets found that the original exponential formulation either outperformed or matched the others, offering no consistent benefit to switching.

Adaptive Thresholding with TAMADE To address the need for manual pruning (limitation 3), we implemented and evaluated TAMADE (Threshold Adaptive Mask Determination), as described in Section 2.2.2. TAMADE enables automated, performance-driven threshold selection and reduces the effective hyperparameter dimensionality. Our results demonstrate that TAMADE consistently identifies effective thresholds with minimal computational overhead, delivering high compression rates without compromising test performance. All downstream method comparisons incorporate TAMADE.

Reducing Hyperparameter Burden To address limitation 4, we first note that TAMADE reduces the search space by one parameter. We further investigated whether additional simplifications to the ($\alpha$, $\beta$, $\rho$, $\epsilon$) configuration could maintain performance while easing tuning complexity. To visualize the trade-off between model compression and predictive performance, we use $\alpha$-hull plots, which capture the optimal frontiers of accuracy versus sparsity. Based on the $\alpha$-shape method [16], implemented via the GMT library [18], this technique provides a holistic comparison by outlining the best achievable trade-offs across thousands of hyperparameter configurations.
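The three relaxations can be compared directly. This small sketch of ours (with an arbitrary $\beta$) confirms that all of them vanish exactly at $\theta_i = 0$, stay bounded below $1$, and that the exponential form has by far the steepest gradient near zero, which is the source of limitation 1:

```python
import numpy as np

beta = 5.0
theta = np.linspace(-2.0, 2.0, 401)   # index 200 is theta = 0

exp_form     = 1 - np.exp(-beta * np.abs(theta))       # eq. (39), used in [8]
gauss_form   = 1 - np.exp(-theta**2 / (2 * beta**2))   # eq. (40)
student_form = 1 - 1 / (1 + theta**2 / (3 * beta**2))  # eq. (41)

for approx in (exp_form, gauss_form, student_form):
    assert approx[200] == 0.0                    # an exact zero weight costs nothing
    assert np.all((approx >= 0) & (approx < 1))  # bounded relaxation of the 0/1 indicator

# Finite-difference slope just right of zero
slope = lambda f: (f[201] - f[200]) / (theta[201] - theta[200])
assert slope(exp_form) > slope(gauss_form) and slope(exp_form) > slope(student_form)
```

The quadratic forms (40) and (41) trade the strong gradient at zero for flatness, which is exactly the behavior probed in this ablation.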
While previous work [8] emphasized the importance of ℓ2-regularization in tandem with DRR, our exhaustive grid searches suggest otherwise. We compared pure DRR (ρ = 0) against several positive ρ values on three datasets and found that the inclusion of ℓ2 offers only marginal benefit, limited to certain compression regimes. Figure 4 highlights that for MNIST, the α-Hulls for different ρ values are nearly indistinguishable, indicating no benefit from ℓ2 regularization. CIFAR-10 and the teacher-student setup show slight gains in some compression-rate regimes, but not enough to justify the additional tuning burden.

We also examined whether α and β can be reduced to a single hyperparameter. While this simplification holds for MNIST, where fixing one and sweeping the other yields similar performance independent of the fixed value, it breaks down on more complex datasets like CIFAR-10, where their interplay becomes critical.

Figure 4: Effect of ℓ2-regularization strength (ρ) on performance across datasets, visualized via α-Hull plots. Each dot corresponds to a different hyperparameter configuration. Solid lines denote the optimal trade-off frontiers. Compression is measured as the ratio of uncompressed to regularized model size. Color encodes ρ: yellow (highest), pink (intermediate), and green (ρ = 0). (a) MNIST (LeNet-300-100), (b) CIFAR-10 (MLP with three 512-d hidden layers), and (c) Teacher-Student setting with teacher [2, 5, 5, 1] and student [2, 25, 25, 1]. In (c), inverse test MSE is used (higher is better); the dashed vertical line indicates teacher model size.

Dynamic Regularization Schedules

A further finding relates to training dynamics: gradually introducing DRR during training improves convergence. We tested both fixed schedules that
increase α over time and warm-up phases without regularization. Both approaches led to more stable optimization and stronger final results. This staged strategy balances early-stage learning (favoring flexibility) with late-stage compression (favoring sparsity), and we recommend it as a general practice when applying DRR in new contexts.

F.2 Hyperparameters for Method Comparison

This appendix provides details on the ablation studies conducted to determine optimal hyperparameters for our comparative experiments between the DRR, PMMP, and R-L1 methods.

Methodology

For each experiment and method, we tested multiple hyperparameter combinations and selected those that yielded the best results. We evaluated performance by plotting accuracy/loss versus regularization strength α, with color information representing description length. For teacher-student experiments, we chose parameters that performed best overall across six different combinations of noise level and training-set size. The experiments included:

• MNIST with LeNet-300-100
• MNIST with LeNet-5-Caffe
• CIFAR-10 with VGG16
• Teacher with Gaussian loss
• Teacher with MSE loss

where the teachers are tanh-MLPs with four layers of dimensions [2, 5, 8, 1].

Important Findings

Our investigations revealed that layerwise pruning generally does not improve results in terms of accuracy and description length when α is properly chosen. In some cases, it even produced negative effects. While layerwise pruning occasionally helped when α was set too low, this effectively overwrote the explicitly set low regularization level. Based on these findings, we decided not to use layerwise pruning in our main experiments.

For the DRR method, we compared two β values: 5 and 25. The NORM approach proposed by de Resende Oliveira et al. [8], which scales hyperparameters with the size of the layers, performed worse than the regular implementation and was therefore excluded from our main experiments.
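The staged regularization schedule recommended above can be sketched as follows. The linear ramp and the warm-up fraction of 0.2 are our illustrative assumptions; the paper does not prescribe a specific functional form.

```python
def alpha_schedule(epoch, num_epochs, alpha_max, warmup_frac=0.2):
    """Gradually introduce the DRR penalty: no regularization during a
    warm-up phase, then ramp alpha linearly up to its final value.
    warmup_frac and the linear ramp are illustrative assumptions."""
    warmup_epochs = int(warmup_frac * num_epochs)
    if epoch < warmup_epochs:
        return 0.0  # early training: favor flexibility, no sparsity pressure
    progress = (epoch - warmup_epochs) / max(1, num_epochs - warmup_epochs)
    return alpha_max * min(1.0, progress)  # late training: favor compression
```

At each epoch, the returned value would multiply the DRR penalty term in the training loss.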
For the PMMP method, we tested six combinations of initial probabilities p and multiplier values u:

p ∈ {0.0, 0.5, 1.0} and u ∈ {1.0, 10.0, 100.0}

For the R-L1 method, we did not use layerwise pruning, and there were no additional hyperparameters to set beyond the regularization strength α. The following table summarizes the optimal hyperparameter values identified for each method across our experimental settings:

Experiment      Best β (DRR)   Best (p_init, u-multi) (PMMP)
MNIST MLP       5              (0.0, 1.0)
MNIST Caffe     25             (0.5, 1.0)
CIFAR VGG       5              (1.0, 1.0)
Teacher Gauss   5              (1.0, 1.0)
Teacher MSE     5              (0.0, 1.0)

For simplicity in our main experiments, we standardized on β = 5.0 for the DRR method across all datasets, as this value performed consistently well in most settings.

G Noise-aware convergence

Here we describe in more detail the criterion used to determine whether a given loss curve is saturated, which is employed in the classifier and teacher-student simulations in Sections 3.1 and 3.2. Given an array L consisting of the logged loss values (either training or validation loss, one value per epoch) and an even integer that we call the smoothing window length w, we run a function that performs the following computations every k epochs (k determines how often convergence is checked): if the length ℓ(L) is lower than w, the function simply returns false. Otherwise, it splits the subarray of the last w values in L into two halves, determines
the average of each, and then checks whether the second average is greater than or equal to the first one, up to the variation in the data in those halves. The slightly innovative part of this procedure is how we determine the variation in the data. Since the data varies along a (usually decaying) loss curve, which has itself varying y-values, we fit smoothed curves to the two halves, subtract the y-values of the smoothed curves from the y-values of the data, and then compute the standard deviations of the results. To compute the smoothed curves, we use the collocate_data function from the DiffEqFlux library [54], which employs an Epanechnikov kernel to determine the smoothed curve. Fitting this smoothed curve is usually so fast that we did not notice a significant increase in time per epoch, even when k is low (and convergence is checked frequently).

The idea behind this slightly more involved saturation criterion is that it can take into account trends of the loss curve even when the loss curve itself is noisy (and thus does not allow for a simpler procedure that triggers saturation/convergence as soon as the validation loss increases). Furthermore, it can be used both for validation-loss and for training-loss saturation.

Regarding our experiments with transformer decoders and the Wikipedia dataset, we did not use a convergence criterion to determine when to stop training but rather trained for a predetermined number of epochs, as large language models are typically not trained until convergence [66, 9]. For each dataset, the number of epochs was chosen such that the total number of parameter updates summed to approximately 1M, which is the number of training iterations used by Delétang et al. in [11].

H Additional classifier benchmark figures

To provide additional context for the results described in Section 3.1, we present accuracy-vs-compression-rate plots in Figure 5.
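The saturation check described above can be sketched in Python. Two substitutions are our assumptions: a hand-rolled Epanechnikov-kernel smoother stands in for DiffEqFlux's collocate_data, and the two residual standard deviations are combined by taking their maximum, which the paper does not specify.

```python
import numpy as np

def epanechnikov_smooth(y, bandwidth=3.0):
    """Kernel-smooth y over the index axis (stand-in for collocate_data)."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)
    smoothed = np.empty_like(y)
    for i, xi in enumerate(x):
        u = (x - xi) / bandwidth
        w = np.maximum(0.0, 0.75 * (1.0 - u ** 2))  # Epanechnikov kernel
        smoothed[i] = np.sum(w * y) / np.sum(w)
    return smoothed

def is_saturated(L, w):
    """True when the mean of the second half of the last w losses is >= the
    mean of the first half, up to the noise level estimated in each half."""
    if len(L) < w:
        return False
    tail = np.asarray(L[-w:], dtype=float)
    first, second = tail[: w // 2], tail[w // 2:]
    # Noise level: std of residuals around a smoothed fit of each half
    # (combining the two stds via max is an assumption).
    noise = max(np.std(first - epanechnikov_smooth(first)),
                np.std(second - epanechnikov_smooth(second)))
    return second.mean() >= first.mean() - noise
```

A steadily decreasing loss curve is not flagged as saturated, while a flat one is, which matches the intended trend-aware behavior.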
The error bars in those figures were computed using an adaptive binning approach that groups data points along the x-axis into bins containing at least 10 points each and spanning at least 0.1 on the log10 scale. For each bin, the median performance metric was calculated, with error tubes indicating the interquartile range (IQR). This method provides a more robust representation of performance trends than simple averaging, especially given the varied outcomes that result from different hyperparameter combinations and network initializations. One can observe that DRR and R-L1 generally perform similarly, while PMMP has somewhat lower performance. Note that the x-axis is log-scaled and that some of the models achieve very high compression rates before their accuracy falls below the −3% mark. In the case of CIFAR, one can clearly see how moderate regularization robustly improves test accuracy.

[Figure 5 plots; the embedded glyphs are not recoverable from the extracted text.]
Figure 5: 'Accuracy' vs 'Model Size Compression Rate' for different datasets and models. (a) MNIST with LeNet-300-100, (b) MNIST with LeNet-5-Caffe, (c) CIFAR with VGG-16-512.

I Teacher Student