| text string | source string |
|---|---|
$0$ because $w_{k_2+1}=w_1$. Similarly, $\sum_{j\in[k_2]}(w_j-w_{j+k_2-1})=\sum_{j\in[k_2]}w_j-\sum_{j\in[k_2]}w_{j+k_2-1}=\sum_{j\in[k_2]}w_j-\sum_{j=2}^{k_2}w_{j+k_2-1}-w_{k_2}$, which vanishes because $\sum_{j=2}^{k_2}w_{j+k_2-1}=\sum_{j=2}^{k_2}w_{j-1}=\sum_{j=1}^{k_2-1}w_j$. In general, for any $1\le l\le k_2-1$, we obtain that $\sum_{j\in[k_2]}w_{j+l}=\sum_{j=1}^{k_2-l}w_{j+l}+\sum_{j=k_2-l+1}^{k_2}w_{j+l}=\sum_{j=l+1}^{k_2}w_j+\sum_{j=k_2+1}^{k_2+l}w_j=\sum_{j=l+1}^{k_2}w_j+\sum_{j=1}^{l}w_j=\sum_{j\in[k_2]}$... | https://arxiv.org/abs/2505.17285v1 |
negative. The block permutation symmetry of (S5.1) implies that $\eta(\Delta_i x,v_i)=\eta(\upsilon(\Delta_i x),v_i)$ for any permutation $\upsilon:[k_1]\mapsto[k_1]$. In particular, we can choose $\upsilon$ in a way so that $\upsilon(\Delta_i x)\in\mathcal{Q}_0\cap\mathcal{Q}_1(\epsilon)$. Therefore, $\upsilon(\Delta_i x)\in\mathcal{Q}_0\cap\mathcal{Q}_1(\epsilon)$ and $-\eta(\upsilon(\Delta_i x),v_i)<\infty$. This implies $(\upsilon(\Delta_i x),v_i)\in(\mathcal{Q}_0\cap\mathcal{Q}_1(\epsilon))\times(\mathcal{Q}_0\cap\mathcal{Q}_1(\epsilon'))$. However, since $\eta(\Delta_i x,v_i)=\eta(\upsilon(\Delta$... | https://arxiv.org/abs/2505.17285v1 |
$x\in\mathbb{R}^{k_1}$ and $y_1,\ldots,y_{k_1}\in\mathbb{R}^{k_2}$ as follows: $\partial H_{ij}(x,y_1,\ldots,y_{k_1})=\{(t,0_{k_2},\ldots,w,\ldots,0_{k_2}):(t,w)\in\partial\tilde\phi_{ij}(x,y_i),\,t\in\mathbb{R}^{k_1},\,w\in\mathbb{R}^{k_2}\}$ (S5.32). Therefore, we need to find $\partial\tilde\phi_{ij}$ if we want to infer on $\partial H_{ij}$. Equation S5.31 also implies that for each $i,j$, $\tilde\phi_{ij}(x,y)=h(C_{ij}(x,y))$ where $C_{ij}=\begin{pmatrix}A_i & 0_{(k_1-1)\times k_2}\\ 0_{(k_2-1)\times k_1} & B_j\end{pmatrix}$ is a matrix with $A_i\in\mathbb{R}^{(k_1-1)\times\ldots}$... | https://arxiv.org/abs/2505.17285v1 |
in $\mathbb{R}^k$, then $f'_\infty(x;w_0)=0$ for some $x\ne 0_k$ by Fact S5.1. Since $w_0\in\mathbb{R}^m_{>0}$, it must hold that $h'_\infty(C_i x)=0$ for each $i\in[m]$ for this $x$. Therefore, $\sum_{i=1}^m w_i h'_\infty(C_i x)=0$ for all $w\in\mathbb{R}^m_{>0}$. Another application of Fact S5.1 yields that the minimum of the convex function $x\mapsto\sum_{i=1}^m w_i h(C_i x)$ is not attained for any $w\in\mathbb{R}^m$. Hence, proved... | https://arxiv.org/abs/2505.17285v1 |
$+\,p_{22}\sum_{i'\in\{i,j\}}\sum_{j'=1}^{k_2-1}\eta(A_{i'}u,B_{j'}v_{\upsilon(i')+1})$, which can be rewritten as $p_{11}\eta(u,v_1)+p_{12}\sum_{j'=1}^{k_2-1}\eta(u,B_{j'}v_1)+p_{21}\sum_{i'=1}^{k_1-1}\eta(A_{i'}u,v_{\upsilon(i')+1})+p_{22}\sum_{i'=1}^{k_1-1}\sum_{j'=1}^{k_2-1}\eta(A_{i'}u,B_{j'}v_{\upsilon(i')+1})$ because $\upsilon(i')=i'$ for $i'\notin\{i,j\}$. However, the above implies $V_\eta(\upsilon u,v_1,\ldots,v_{k_1};p)=V_\eta(u,v_1,v_{\upsilon(1)+1},\ldots,v_{\upsilon(k_1-1)+1};p)$ (S5.36) for al... | https://arxiv.org/abs/2505.17285v1 |
First note that $\eta(u,\upsilon(v_1))=\eta(u,v_1)$ by (S5.1). By definition of $B_j$, $\eta(u,B_{j'}\upsilon(v_1))=\eta(u,(v_{1\upsilon(1)}-v_{1\upsilon(j)},\ldots,-v_{1\upsilon(j)},\ldots,v_{1\upsilon(k_2-1)}-v_{1\upsilon(j)}))$. However, by (3.4), $\eta(u,B_{j'}\upsilon(v_1))=\eta(u,\upsilon^{-1}(B_{j'}\upsilon(v_1)))$. Let $l=B_{j'}\upsilon(v_1)=(v_{1\upsilon(1)}-v_{1\upsilon(j)},\ldots,-v_{1\upsilon(j)},\ldots,v_{1\upsilon(k_2-1)}-v_{1\upsilon(j)})$. We want to figure out what $\upsilon^{-1}(l)$ is. Suppos... | https://arxiv.org/abs/2505.17285v1 |
$+\,p_{12}\vartheta(x,y;1,2)+p_{21}\vartheta(x,z;2,1)+p_{22}\vartheta(x,z;2,2)=\varsigma_{\mathrm{Im}(\vartheta)}(p)$ (S5.39), where $\varsigma$ is the support function defined in (1.1) and $\mathrm{Im}(\vartheta)=\{w\in\mathbb{R}^4:w=(\vartheta(x,y;1,1),\vartheta(x,y;1,2),\vartheta(x,z;2,1),\vartheta(x,z;2,2))$ for some $(x,y,z)\in\mathbb{R}^3\}$ is the image set of $\vartheta$. Thus $-\Phi^*$ is the support function of $\mathrm{Im}(\vartheta)$ and hence a closed convex function... | https://arxiv.org/abs/2505.17285v1 |
of the minimizer of $\Phi(\cdot;p)$, which follows from Lemma S5.5. We will take $\delta^*=\min(p)/2\wedge 1$. Note that if $q\in B(p,\delta)$ with $\delta\le\delta^*$, then $q\in\mathbb{R}^4_{>0}$ automatically because $\min(q)>\min(p)-\delta\ge\min(p)/2>0$. First of all, since $-\Phi^*$ is a convex function, it is locally Lipschitz on its domain (Theorem 3.1.2, pp. 103, Hiriart-Urruty and Lemaré... | https://arxiv.org/abs/2505.17285v1 |
given any subsequence of $\Lambda^*(q_k)$, there exists a further subsequence that converges to $\Lambda^*(p)$. Since any subsequence of $\Lambda^*(q_k)$ is bounded, we can always extract a convergent subsequence from it. We will show that the limit is $\Lambda^*(p)$. If possible, suppose that the limit is $w\in\mathbb{R}^3$ for some subsequence, where $w$ may depend on the particu... | https://arxiv.org/abs/2505.17285v1 |
of $M_{12}(0,0)$ and $N_{12}(0,0)$. Lemma S5.14. $M_{12}(0,0)=N_{12}(0,0)=0$ when $M_{12}$ and $N_{12}$ are continuous functions. The following lemma proves the Hölder continuity of $\Lambda^*$ at all $p$ in a neighborhood of $p_0$. Lemma S5.15 is proved in Section S5.5.3. Lemma S5.15. Consider the setup in Theorem 3.1. Suppose $p\in B(p_0,\delta_0)$ where $\delta_0$ is as in Lemm... | https://arxiv.org/abs/2505.17285v1 |
$U_0$ contains both $0$ and $y^*(p)$, the real-valued functions $y\mapsto\partial\vartheta(x^*(p),y;1,1)/\partial y$ and $y\mapsto\partial\vartheta(x^*(p),y;1,2)/\partial y$ are differentiable on an interval containing $0$ and $y^*(p)$. Therefore, we can take a Taylor series expansion of these functions at $y^*(p)$ around $y=0$, which leads to $p_{11}\,\partial\vartheta(x^*(p),0;1,1)/\partial y+p_{12}\,\partial\vartheta(x^*(p),0;1,2)/\partial y+y^*(p)\,(p$... | https://arxiv.org/abs/2505.17285v1 |
and (S5.59). S5.4.10. Proof of Lemma S5.11. Proof of Lemma S5.11. By Lemma S5.7, there exists an open neighborhood $U_0\ni 0$ such that $U_0^2\subset\mathrm{int}(\mathrm{dom}(\Phi(\cdot;p)))$ for all $p\in\mathbb{R}^4_{>0}$. Lemma S5.8 implies that we can choose a $\delta>0$ so small that $\Lambda^*(p)\in U_0^3$ for all $p\in B(p_0,\delta)$. We will assume $p\in B(p_0,\delta)$ for the rest of this proo... | https://arxiv.org/abs/2505.17285v1 |
that $V(\tilde d_n)\to V^*=\max(p)=\max(p_{21},p_{22})$, where the DTR $\tilde d_n=(\tilde d_{1n},\tilde d_{2n})$ corresponds to $(w_n,y^*_1(p),\ldots,y^*_{k_1}(p))$. Note that $\tilde d_n$ satisfies $\tilde d_{1n}=\mathrm{pred}(\Delta w_n)$ and $\tilde d_{2n}(i)=\mathrm{pred}(0,-y^*(p)1_{k_2-1})$ if $i=1$, $\mathrm{pred}(0,-z^*(p)1_{k_2-1})$ if $i\in[2:k_1]$ (S5.61). Thus in this case, $\tilde d_{1n}=\mathrm{pred}(0,-1_{k_1-1}/n)=1$. Since $V(\tilde d)=V(d_1,d_2(d_1))$ for a... | https://arxiv.org/abs/2505.17285v1 |
$w_n$). We can show that $\tilde d_{1n}=\mathrm{pred}(0,-x^*(p)1_{k_1-1})$, and $\tilde d_{2n}(i)=\mathrm{pred}(0,-y^*(p)1_{k_2-1})$ if $i=1$, $\mathrm{pred}(\Delta w_n)$ if $i\in[2:k_1]$. Since $x^*(p)<0$, the corresponding $\tilde d_n$ satisfies $\tilde d_{1n}=k_1$. Therefore, $\tilde d_{2n}(\tilde d_{1n})=\tilde d_{2n}(k_1)=\mathrm{pred}(0,-1_{k_2-1}/n)=1$. Hence, $p_{\tilde d_{1n},\tilde d_{2n}(\tilde d_{1n})}=p_{k_1 1}$. However, since $P\in\mathcal{P}^{k_1 k_2}_b$, $p_{k_1 1}=p_{21}$ by definition. Therefore, $V(\tilde d$... | https://arxiv.org/abs/2505.17285v1 |
a continuous map by Lemma S5.8, we can find a $\delta'>0$ such that for all $q\in B(p,\delta')$, $\lVert\Lambda^*(p)-\Lambda^*(q)\rVert_2\le\epsilon/2$. We choose $\delta'$ to be smaller than $\delta/3$ so that $B(p,\delta')\subset B(p_0,2\delta/3)\subset B(p_0,\delta)$. If $(x^*(q),y^*(q),z^*(q))=\Lambda^*(q)$, then $\lvert y^*(p)-y^*(q)\rvert\le\epsilon/2$. Hence, $y^*(q)>0$. In particular, we consider $q=(c,c+\delta'/2,c_1,c_2)$. It is not hard to see that... | https://arxiv.org/abs/2505.17285v1 |
convex set due to $\eta$ being concave. Since $0_{k_1+k_2-2}$ lies on the hyperplane, $c=0$ and, hence, the hyperplane passes through the origin. Suppose $\epsilon=(\epsilon_i)_{i=1}^{k_1-1}\in\{\pm 1\}^{k_1-1}$ is such that $\epsilon_i>0$ if $T_i>0$ and $\epsilon_i<0$ if $T_i<0$. If $T_i=0$, then $\epsilon_i$ can be either $1$ or $-1$. Thus $\epsilon$ need not be unique. Similarly, we define $\epsilon'=(\epsilon'_i)_{i=1}^{k_2-1}\in\{\pm 1\}^{k_2-1}$... | https://arxiv.org/abs/2505.17285v1 |
(S6.5) and $E[V\phi_t(f_t(H_t);A_t)/\pi_t(A_t\mid H_t)\mid H_t]=\Psi_t(f_t(H_t);p(H_t))$ (S6.6), where $p(H_t)_j=E[V\mid H_t,A_t=j]$ for all $j\in[k_t]$. Therefore, for any $H_t$, $\sup_{f_t\in\mathcal{F}_t}E[V\phi_t(f_t(H_t);A_t)/\pi_t(A_t\mid H_t)]=E[\Psi^*_t(p(H_t))]$ (S6.7). Moreover, if $h(H_t)$ ... | https://arxiv.org/abs/2505.17285v1 |
Hiriart-Urruty and Lemaréchal, 2004). By Lemma S6.1, $\phi(\cdot;i)$ is bounded above for each $i$. Therefore, $\lvert\Psi^*(p)\rvert<\infty$ for all $p\in\mathbb{R}^k$. Hence, $\mathrm{dom}(\Psi^*)=\mathbb{R}^k$. Since a convex function is continuous on the interior of its domain (cf. pp. 104 of Hiriart-Urruty and Lemaréchal, 2004), $\Psi^*$ is continuous. ... | https://arxiv.org/abs/2505.17285v1 |
$\varrho\le\Upsilon$ and $\Upsilon<\infty$ on $\mathbb{R}$, $\mathrm{dom}(\varrho)=\mathbb{R}$. It remains to show that $\varrho(\epsilon_m)\to_m 0$ if and only if $\epsilon_m\to 0$. For the if part, note that $\varrho$ is continuous at $0$ because $\varrho$ is convex and $0\in\mathrm{int}(\mathrm{dom}(\varrho))\equiv\mathbb{R}$. Thus $\lim_{\epsilon\to 0}\varrho(\epsilon)=0$. Now we prove the only if part. Suppose $\{\epsilon_m\}_{m\ge 1}$ is a subsequence so that $\epsilon_m\ge 0$ for all $m\ge 1$ and $\varrho(\epsilon_m)\to_m 0$. Then we claim that $\epsilon_m\to 0$... | https://arxiv.org/abs/2505.17285v1 |
$\Psi(x^{(m)};q^{(m)})<\Psi^*(p)-\tilde Z(p)$ given $m$ is sufficiently large. Therefore, in the second case, $\phi(x^{(m)};\mathrm{pred}(x^{(m)}))\ge\frac{\Psi^*(p)-\epsilon-\Psi(x^{(m)};q^{(m)})}{\max(p)}\ge\frac{\tilde Z(p)-\epsilon}{\max(p)}\ge\frac{\tilde Z(p)}{2\max(p)}$ since $\epsilon\le\tilde Z(p)/2$. Thus, we have shown that for all sufficiently large $m$, $\phi(x^{(m)};\mathrm{pred}(x^{(m)}))\ge\frac{\tilde Z(p)}{2\max(p)}$. The definition of $\tilde Z(p)$ in (S6.11) implies tha... | https://arxiv.org/abs/2505.17285v1 |
hand, $\lvert\Psi(x_{m_l,n_l};p^{(m_l)})-\Psi(x_{m_l,n_l};p^*)\rvert\le\sup_{x\in\mathbb{R}^k,l\in[k]}\lvert\phi(x,l)\rvert\,\lVert p^{(m_l)}-p^*\rVert_1$, which goes to zero as $l\to\infty$ because $p^{(m_l)}\to_l p^*$ and $\phi$ is bounded by Lemma S6.1. Therefore, we have shown that if $p^{(i)}\to_l p^*$, then $\Psi(x^{(l)};p^*)\to_l\Psi^*(p^*)$. Therefore, it just remains to show that $\lim_{l\to\infty}\phi(x^{(l)};\mathrm{pred}(x^{(l)}))=0$. Since $m_l\ge M_l$ and $n_l\ge R_{m_l}$, $\phi(x^{(l)};$ ... | https://arxiv.org/abs/2505.17285v1 |
all $i\in[k]$. Let $\Omega=\max_{i\in[k]}G_i$. Clearly, $\Omega<1$. Since any $x\in\mathbb{R}^k$ belongs to $C_{\mathrm{pred}(x)}$, we have $\sum_{j\in[k]:j\ne\mathrm{pred}(x)}\phi(x;j)\le\alpha_{\mathrm{pred}(x)}\le\Omega$, which completes the proof. Lemma S6.8. Suppose $\phi:\mathbb{R}^k\times[k]\mapsto\mathbb{R}$ satisfies Conditions N1 and N2. Then there exists $\chi_\phi\in(0,1]$, depending only on $\phi$, so that $\Psi^*(p)-\Psi(x;p)\ge C_\phi\chi_\phi(\max(p)-\kappa(x;p))$ for all $x\in$... | https://arxiv.org/abs/2505.17285v1 |
$v^*_i\in[0,1]$ for each $i\in[k]$ and $\sum_{i\in[k]}v^*_i\le 1$. Therefore, $\langle v^*,p\rangle\le\max(p)\sum_{i\in[k]}v^*_i\le\max(p)$. Since $\phi$ satisfies Condition N2, it follows that $\Psi^*(p)=\max(p)$, implying $\langle v^*,p\rangle=\max(p)\sum_{i\in[k]}v^*_i=\max(p)$, implying $\sum_{i\in[k]}v^*_i=1$ and $\langle v^*,p\rangle=\max(p)\sum_{i\in[k]}v^*_i$. Therefore, $\sum_{i\in[k]}(\max(p)-p_i)v^*_i=0$, implying $v^*_i>0$ only if $\max($... | https://arxiv.org/abs/2505.17285v1 |
$H_t,A_t=i]$ (S6.13) since the induction hypothesis holds for $1+t$. Thus $\Psi^*_t(p^*_t(H_t))=E[\sum_{i=1}^T Y_i\prod_{j=1+t}^T\frac{\mathbf{1}[A_j=d^*_j(H_j)]}{\pi_j(A_j\mid H_j)}\mid H_t,A_t=\mathrm{pred}(p^*_t(H_t))]=E[\sum_{i=1}^T Y_i\,\mathbf{1}[A_t=\mathrm{pred}(p^*_t(H_t))]\,\frac{\prod_{j=1+t}^T\mathbf{1}[A_j=d^*_j(H_j)]}{\prod_{j=t}^T\pi_j(A_j\mid H_j)}\mid H_t]$, where the last step follows from (S6.5). Thus to show that the induction hypothesis... | https://arxiv.org/abs/2505.17285v1 |
$V_\psi(f)=E[\Psi(f_1(H_1);p_1(H_1))]$ is similar, it is skipped. Induction hypothesis: For $t\in[T]$, $\sup_{f_i\in\mathcal{F}_i:i\in[t:T]}E[\sum_{i=1}^T Y_i\prod_{j=1}^T\phi_j(f_j(H_j);A_j)/\pi_j(A_j\mid H_j)]=E[\prod_{j=1}^{t-1}\phi_j(f_j(H_j);A_j)/\pi_j(A_j\mid H_j)\,\Psi^*_t(p^*_t(H_t))]$, where the product term is one when the range of the product is empty, which occurs only when $t=1$. When $t=T$, taking $V=\sum_{i=\ldots}^T$... | https://arxiv.org/abs/2505.17285v1 |
the above is non-negative. Lemma S6.8 applies to $\phi_T$ because $T\ge 2$. Applying Lemma S6.8, we obtain that there exists $\chi_{\phi_T}>0$, depending only on $\phi_T$, so that $\sum_{i=1}^{k_{T-1}}\phi_{T-1}(x;i)\,E[\Psi^*_T(p^*_T(H_T))-\Psi_T(f_T(H_T);p^*_T(H_T))\mid H_{T-1},A_{T-1}=i]\ge\chi_{\phi_T}\sum_{i=1}^{k_{T-1}}\phi_{T-1}(x;i)\,E[\max(p^*_T(H_T))-\kappa(f_T(H_T);p_T(H_T))\mid H_{T-1},A_{T-1}=i]$, which is bounded below by ... | https://arxiv.org/abs/2505.17285v1 |
$\mathbf{1}[A_j=d^*_j(H_j)]/\pi_j(A_j\mid H_j)\mid H_{2+t}]$, where $H_{2+t}$ is non-empty because the case under consideration has $t\in[T-2]$. Therefore, $S_{11}=E[\prod_{r=t+1}^{T-1}\phi_r(f_r(H_r);\mathrm{pred}(f_r(H_r)))\times\frac{\mathbf{1}[A_{1+t}=\mathrm{pred}(p^*_{1+t}(H_{1+t}))]-\mathbf{1}[A_{1+t}=\mathrm{pred}(f_{1+t}(H_{1+t}))]}{\pi_{1+t}(A_{1+t}\mid H_{1+t})}\times\sum_{i=1}^T Y_i\prod_j^T$... | https://arxiv.org/abs/2505.17285v1 |
Lemma S6.17 below. Lemma S6.17. Suppose $\phi_1$ satisfies Condition N1 and $\phi_2,\ldots,\phi_T$ satisfy both Conditions N1 and N2 with $C_{\phi_t}=1$ for $t\in[2:T]$. If $V_\psi(f^{(m)})\to_m V^*$, then under Assumptions I-V, $E[\sum_{i=1}^T Y_i\prod_{j=1}^{t-1}\frac{\mathbf{1}[A_j=\mathrm{pred}(f^{(m)}_j(H_j))]}{\pi_j(A_j\mid H_j)}\prod_{j=1+t}^T\frac{\mathbf{1}[A_j=d^*_j(H_j)]}{\pi_j(A_j\mid H_j)}\times(\mathbf{1}[A_t=\mathrm{pred}(p^*_t(H_t))]-\mathbf{1}[A_t=\mathrm{pred}(f^{(m)}_t(H_t))])$ ... | https://arxiv.org/abs/2505.17285v1 |
(b) $\ge E[\Psi^*_1(p^*_1(H_1))-\Psi_1(f^{(m)}_1(H_1);p^*_1(H_1))]$, where (a) follows from Lemma S6.14 and (b) follows as Lemma S6.15 implies $\Psi_1(f^{(m)}_1(H_1);p^*_1(H_1))\ge\Psi_1(f^{(m)}_1(H_1);p^{(m)}_1(H_1))$. Also, the RHS of the above display is non-negative because $\Psi^*_1(p^*_1(H_1))\ge\Psi_1(f^{(m)}_1(H_1);p^*_1(H_1))$ by the definition of $\Psi^*$ in (4.2). Since $V_\psi(f^{(m)})\to_m$... | https://arxiv.org/abs/2505.17285v1 |
$f^{(m)}_t(H_t)))\times(\Psi^*_{1+t}(p^*_{1+t}(H_{1+t}))-\Psi_{1+t}(f_{1+t}(H_{1+t});p^*_{1+t}(H_{1+t})))$, where (a) follows from (S6.34). However, the above, combined with (S6.29), implies that $E[\phi_t(f^{(m)}_t(H_t(d^{(m)}));\mathrm{pred}(f^{(m)}_t(H_t(d^{(m)}))))\,(\Psi^*_{1+t}(p^*_{1+t}(H_{1+t}(d^{(m)})))-\Psi_{1+t}(f_{1+t}(H_{1+t}(d^{(m)}));p^*_{1+t}(H_{1+t}(d^{(m)}))))]\to_m 0$. While we use $H_t(d^{(m)})$, we want to... | https://arxiv.org/abs/2505.17285v1 |
convergence is $L_1$-convergence. Since $L_1$ convergence implies convergence in probability, $\Psi^*_t(p^*_t(H_t(d^{(m)})))-\Psi_t(f^{(m)}_t(H_t(d^{(m)}));p^*_t(H_t(d^{(m)})))\to_P 0$ (S6.41), where $P$ corresponds to the underlying probability space. Since convergence in probability implies almost sure convergence along a subsequence, given any subse... | https://arxiv.org/abs/2505.17285v1 |
$\sum_{i=1}^T Y_i\,\frac{\mathbf{1}[A_1=\mathrm{pred}(f_1(H_1))]}{\prod_{r=1}^T\pi_r(A_r\mid H_r)}\times(\prod_{r=2}^T\mathbf{1}[A_r=d^*_r(H_r)]-\prod_{r=2}^T\mathbf{1}[A_r=\mathrm{pred}(f_r(H_r))])\mid H_1]\ge\min(\chi_{\phi_1},C)(V^*-V(f))$, where the last step follows because both terms in the sum are non-negative. Note that $\min(\chi_{\phi_1},C)\ge\prod_{i=1}^T\min(J_i,1)\min_{1\le i\le T}\chi_{\phi_i}$. However, since $\Psi^*_t(1_{k_t})=1$ for each $t\in[T]$, it follows that $\phi_t(x$... | https://arxiv.org/abs/2505.17285v1 |
Suppose $r\le T-2$ (note that it requires $T>2$ for such $r\in\mathbb{N}$ to exist). Now suppose (S7.1) holds for some $m\le T-1$ such that $m\ge 1+r$. Then we will show that (S7.1) holds for $m-1$ as well. Since $m\ge 1+r$, it holds that $m-1\ge r$ and $Y_1,\ldots,Y_r\subset H_m$. Hence, $\sup_{f_1}$... | https://arxiv.org/abs/2505.17285v1 |
$Y_r$) and (d) follows using the non-negativity of $V$. Therefore, we have proved that $\sup_{f_r}E[V\phi_r(f_r(H_r);A_r)Y_r/\pi_r(A_r\mid H_r)]=E[V\Psi^*_r(Q^*_r(O_r))]=E[V]\,E[\Psi^*_r(Q^*_r(O_r))]$ (S7.3) because $O_r\perp(H_{r-1},A_{r-1})$ and $V$ is a function of $(H_{r-1},A_{r-1})$. Letting $V=1$ in (S7.3), we obtain that $\sup_{f_r}E[\phi_r(f_r(H_r);A_r)Y_r/\pi_r(A_r\mid H_r)]=E[\Psi^*_r(Q^*_r(O_r)$... | https://arxiv.org/abs/2505.17285v1 |
$\Psi^*_r(p)\prod_{t=1,t\ne r}^T\Psi^*_t(1_{k_t})$ due to $\Psi_r(x_{m,r};p)\to_m\Psi^*_r(p)$ and $\Psi_t(x_{m,t};1_{k_t})\to_m\Psi^*_t(1_{k_t})$ for $t\ne r$. However, (S7.5) implies $V^\psi_*=\Psi^*_r(p)\prod_{t=1,t\ne r}^T\Psi^*_t(1_{k_t})$, which would complete the proof of $V_\psi(f^{(m)})\to_m V^\psi_*$. To show (S7.6), we will first show that for any $r\in[T]$, $V_\psi(f^{(m)})=E[Y_r\prod_{t=1}^T\phi_t(f^{(m)}_t(H_t);A_t)/\pi_t(A_t\mid H_t)]=\prod_{t=1+r}^T\Psi_t(x_m$... | https://arxiv.org/abs/2505.17285v1 |
$p\ne 0_{k_t}$ by Lemma S6.1 and $\Psi^*_t$ is continuous by Lemma S6.2. Moreover, Fact S6.1 will also apply. We will prove the necessity of Condition N2 in 3 steps. To this end, let $r\in[1:T-1]$. Step 1. We will show that for any $p,q\in\mathbb{R}^{k_{1+r}}_{\ge 0}$, $\max(p)>\max(q)$ implies $\Psi^*_{1+r}(p)\ge\Psi^*_{1+r}(q)$. Step 2. Using the result from Step 1, we will sho... | https://arxiv.org/abs/2505.17285v1 |
since the $\phi_t$'s are Fisher consistent, $(\phi_r,\phi_{1+r})$ are Fisher consistent for the two-stage DTR embedded in the $T$-stage DTR. Step 1b. Properties under $\mathcal{P}_{\mathrm{small}}$. Distributions in $\mathcal{P}_{\mathrm{small}}$ have some interesting properties. First we will show that $V^\psi_*$ has a closed-form expression under $P\in\mathcal{P}_{\mathrm{small}}$ for non-negative surrogates. L... | https://arxiv.org/abs/2505.17285v1 |
Lemma S7.4. Suppose $P\in\mathcal{P}_{\mathrm{small}}$ and $\psi$ is as in (4.1), where $\phi_1,\ldots,\phi_T$ are bounded and non-negative. Consider $r\in[1,T-1]$. For $t\notin\{1+r\}$, we let $f_t(h_t)=x^{(t)}$ for all $h_t\in\mathcal{H}_t$, where $x^{(t)}\in\mathbb{R}^{k_t}$ are fixed vectors. For $H_{1+r}=(H_r,A_r,Y_r)$, we let $f_{1+r}(H_{1+r})=M(A_r)$ where, for each $i\in[k_r]$, $M(i)\in\mathbb{R}^{k_{1+r}}$ is a non-stochastic vector. Let u... | https://arxiv.org/abs/2505.17285v1 |
$=x_{m,t}$ for all $h_t\in\mathcal{H}_t$. When $t=r$, we let $f^{(m)}_r\equiv x_{m,r}$. Thus $f^{(m)}_t$ is a constant vector for all $t\ne 1+r$. However, we will consider a non-constant $f^{(m)}_{1+r}$. Note that $H_{1+r}=(H_r,A_r)$ for $P\in\mathcal{P}_{\mathrm{small}}$ because $Y_r=0$ and $O_r=O_{1+r}=\emptyset$. Therefore, $f_{1+r}$ is a function from $\mathcal{H}_r\times[k_r]$ to $\mathbb{R}$. For $i\in[k_r]$, we let $f^{(m)}_{1+r}(H_{1+r})=f^{(m)}_{1+r}(H_r,i)=M_m$... | https://arxiv.org/abs/2505.17285v1 |
that $v_{0,\mathrm{pred}(x_{m,r})}\to_m\max(v_0)$. Since $\mathrm{argmax}(v_0)=\mathrm{argmax}_{i\in[k_r]}\max(p^{(i)})=1$ for $P\in\mathcal{P}_{\mathrm{small}}$, it follows that $\mathrm{pred}(x_{m,r})=1$ for all sufficiently large $m$. S7.3.2. Step 2: Proving $\max(p)=\max(q)$ implies $\Psi^*_t(p)=\Psi^*_t(q)$ for all $p,q\in\mathbb{R}^{k_t}_{\ge 0}$ and $t>1$. Suppose $t\in[2:T]$. Consider $p^{(m)}=p+(1/m)1_{k_t}$. Then $\max(p^{(m)})=\max(p)+1/m$... | https://arxiv.org/abs/2505.17285v1 |
follows that $e^{(1)}_k\in V_\phi$. Therefore, the proof of (S8.1) follows. If part: Suppose $C_\phi S^{k-1}\subset\mathrm{conv}(V_\phi)$ where $C_\phi=\Psi^*(1_k)$. Without loss of generality, we assume that $\Psi^*(1_k)=1$. Otherwise, we can replace $\phi$ by $\phi/\Psi^*(1_k)$. Thus, we have $S^{k-1}\subset\mathrm{conv}(V_\phi)$. Hence, $\varsigma_{S^{k-1}}\le\varsigma_{\mathrm{conv}(V_\phi)}$ (cf. Theorem 3.3.1, p. 151, Hiriart-Urruty and Lemaréchal, ... | https://arxiv.org/abs/2505.17285v1 |
$\lim_{a\to\infty}\prod_{t=1}^T\frac{\phi(af^*_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}=\prod_{t=1}^T\frac{\mathbf{1}[j=d^*_t(H_t)]}{\pi_t(A_t\mid H_t)}=1$. Since Condition N1 is satisfied, Lemma S6.1 implies that the $\phi$'s are all bounded. Assumption I implies that the $\pi_t$'s are all bounded below and Assumption IV implies that the $Y_t$'s are bounded above. Therefore, $\sum_{i=1}^T Y_i\prod_{t=1}^T\frac{\phi(af^*_t(H_t);A_t)}{\pi_t(A$... | https://arxiv.org/abs/2505.17285v1 |
that $\sup_{x\in\mathbb{R}^k}\Psi(x;p)=\max(p)$, which would imply that $\phi$ satisfies Condition N2 with $C_\phi=1$. Note that $\Psi(x;p)=\sum_{j\in[k]}p_j\int_{\mathbb{R}^k}\mathbf{1}[\mathrm{pred}(x-u)=j]K(u)\,du=\int_{\mathbb{R}^k}\sum_{j\in[k]}p_j\mathbf{1}[\mathrm{pred}(x-u)=j]K(u)\,du=\int_{\mathbb{R}^k}p_{\mathrm{pred}(x-u)}K(u)\,du\le\max(p)\int_{\mathbb{R}^k}K(u)\,du$. Since $\int K(u)\,du=1$, we have $\Psi(x;p)\le\max(p)$. Suppose $j\in\mathrm{argmax}(p)$. Let $x_n=ne^{(k)}_j$. Then $\lim_{n\to\infty}\Psi(x_n;$... | https://arxiv.org/abs/2505.17285v1 |
follows since $\liminf_{a_n\to\infty}\phi(a_nx,j)=\liminf_{a_n\to\infty}E_K\prod_{i\in[k]:i\ne j}(1-F_K(Z+a_nx_i-a_nx_j))$. S8.6. Proof of Lemma 4.7 for the product-based losses. Proof of Lemma 4.7. Without loss of generality, let us assume that $C=1$, which implies $\tau=1-F_K$. Because $K$ is symmetric about zero, $1-F_K(x)=F_K(-x)$, implying $\tau(x)=F_K(-x)$ in this case. Since $\tau$ is ... | https://arxiv.org/abs/2505.17285v1 |
$p'\in\mathbb{R}^k_{\ge 0}$ and $x'\in\mathbb{R}^k$. Let us also denote by $\Psi(x';p')$ the $k$-dimensional version of $\phi$. Since the induction hypothesis holds for $k$, $\Psi(x';p')\le\max(p')$. Note that the last display can be rewritten as $\Psi(x;p)\le p_{i^*}\tau(x_{i^*}-x_r)+\tau(x_r-x_{i^*})\Psi(x';p')\le p_{i^*}\tau(x_{i^*}-x_r)+\tau(x_r-x_{i^*})\max(p')\le p_{i^*}\tau(x_{i^*}-x_r)+\tau(x_r-x_{i^*})\max(p)\le\sup_{x\ge 0}(p_{i^*}\tau(x)+\tau(-x$... | https://arxiv.org/abs/2505.17285v1 |
$_t(H_t;i))\,\phi_t(f_t(H_t);i)=\underbrace{(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i))\,\phi_t(b_nQ^*_t(H_t);i)}_{S_1}-\underbrace{(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i))\,(\phi_t(b_nQ^*_t(H_t);i)-\phi_t(f_t(H_t);i))}_{S_2}$. Defining $\tilde h=\mathrm{tran}(\tilde g)$, we will now show that when $f=b_n\tilde h$, then $S_1\lesssim C_ab_n^{-2}z^{-1}+z\mathbf{1}[H_t\in\mathcal{E}_{ti}]$ (S9.2) and $S_2\lesssim C_ab_n^{-2}\delta_n^{-1}+\delta_n\mathbf{1}_{\mathcal{E}'_{ti}}$ (S9.3). ... | https://arxiv.org/abs/2505.17285v1 |
$\sum_{i\in[k_t]:i\ne d^*_t(H_t)}\mathbf{1}_{\mathcal{E}'_{ti}}\le(k_t-1)\mathbf{1}_{\mathcal{E}'_t}$. By the small noise assumption, $P(\mathcal{E}_t)\le z^\alpha$ and $P(\mathcal{E}'_t)\lesssim\delta_n^\alpha$. Thus $V^\psi_*-V_\psi(b_n\tilde h)\lesssim\frac{1}{C_\pi^T}(b_n^{-2}z^{-1}C_a+b_n^{-2}\delta_n^{-1}C_a+z^{1+\alpha}+\delta_n^{1+\alpha})$. We want the $z^{1+\alpha}$ term to match $\delta_n^{1+\alpha}$. Thus, letting $z=\delta_n$, $V^\psi_*-V_\psi(b_n\tilde h)\lesssim\frac{1}{C_\pi^T}(b_n^{-2}\delta_n^{-1}C_a+\delta_n^{1+\alpha})$. Since $b_n>\delta_n^{-(1+\alpha/2)}$, it follows that $\delta_n^{1+\alpha}>b_n^{-2}\delta_n^{-1}$. T... | https://arxiv.org/abs/2505.17285v1 |
$\sum_{i\in[k_{1+t}]}(\max(Q^*_{1+t}(H_{1+t}))-Q^*_{1+t}(H_{1+t};i))\times\phi_{1+t}(f_{1+t}(H_{1+t});i)\mid H_t,A_t]+T_{1+t}\mid H_t]=E[\frac{\phi_t(f_t(H_t);A_t)}{\pi_t(A_t\mid H_t)}\frac{\phi_{1+t}(f_{1+t}(H_{1+t});A_{1+t})}{\pi_{1+t}(A_{1+t}\mid H_{1+t})}\times(\max(Q^*_{1+t}(H_{1+t}))-Q^*_{1+t}(H_{1+t};A_{1+t}))+T_{1+t}\mid H_t]$ (S9.6) by (S6.6). We claim that for any $t\in[T-1]$, $T_t=E[\sum_{j=t+1}^T\prod_{i=t}^j\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}(\max(Q^*_j(H_j))-Q^*_j(H_j;A_j))$... | https://arxiv.org/abs/2505.17285v1 |
indicates $\phi_t\le 1$ for all $t\in[T]$. These facts will be used repeatedly in our proof. Throughout this proof, we will denote $\mathrm{tran}(\tilde g)$ by $\tilde f$. We also remind the readers that $\widehat f=\mathrm{tran}(\widehat g)$. Since the $\phi_t$'s satisfy the conditions of Proposition 1 with positive $J$, the regret of $\widehat f$ is bounded by a constant multiple of the $\psi$-regret ... | https://arxiv.org/abs/2505.17285v1 |
$\mathrm{App}_n\lesssim\delta_n^{1+\alpha}$. The non-trivial step is bounding $\mathrm{Est}_n$. To this end, first, we need a bound on $\lVert U_{f^*,f}\rVert_{P,2}$, which can be bounded using $r_n$ and $\mathrm{App}_n$. Lemma S10.2. Under the setup of Theorem 6.2, there exists a constant $C>0$ depending only on $P$, so that $\lVert U_{f^*,f}\rVert^2_{P,2}\le C(V_\psi(f^*)-V_\psi(f))^{\alpha/(1+\alpha)}$ where $\alpha$ is as in Assumption 1. The above ... | https://arxiv.org/abs/2505.17285v1 |
class of functions $\mathcal{C}(i,t):=\{u:\mathcal{H}_t\mapsto\mathbb{R}\mid u(h_t)=\phi_t(f_t(h_t);i),\ f_t=\mathrm{tran}(g_t),\ g_t\in\mathcal{U}^{k_t-1}_{tn}\}$. We will apply Lemma S10.3 in Section S10.2 with $\mathcal{G}=\mathcal{C}(i,t)$, $\mathcal{C}=\mathcal{U}_{tn}$, $\mathcal{X}=\mathcal{H}_t$, $k=k_t$, $\phi=\phi_t$, and $u(\cdot)=\phi_t(f_t(\cdot),i)$ to obtain its covering number. Lemma S10.3 applies here because, by Condition 1, for fixed $i\in[k_t]$, the function $x_t\mapsto\phi_t(x_t,i)$ ... | https://arxiv.org/abs/2505.17285v1 |
$\ldots,g_{k-1}(x))-\phi(A_1(x),\ldots,A_{k-1}(x))\rvert\le C\sqrt{\sum_{i=1}^{k-1}(g_i(x)-A_i(x))^2}$ because $\phi$ is Lipschitz with constant $C$. Thus $\lVert u-u_{\mathrm{cov}}\rVert_\infty\le C\sqrt{k-1}\sup_{1\le i\le k-1}\lVert g_i-A_i\rVert_\infty\le\epsilon$. Therefore, $\mathcal{G}_\Theta=\{g:\mathcal{X}\mapsto\mathbb{R}\mid g=\phi(0,A_1,\ldots,A_{k-1}),\ A_i\in\Theta\ \text{for all}\ i\in[k-1]\}$ is an $\epsilon$-covering of $\mathcal{G}$. Note that the cardinality of $\mathcal{G}_\Theta$ is $N^{k-1}$. Thus, the proof follows. ... | https://arxiv.org/abs/2505.17285v1 |
(S6.6). S10.2.2. Lower bound on $V_\psi(f^*)-V_\psi(f)$. Since $V^\psi_*=V_\psi(f^*)$ by Lemma S10.1, $V_\psi(f^*)-V_\psi(f)=V^\psi_*-V_\psi(f)$, which, by Lemma S9.1, equals $E[\sum_{t\in[T]}\prod_{i=1}^{t-1}\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\sum_{i\in[k_t]:i\ne d^*_t(H_t)}(Q^*_t(H_t;d^*_t(H_t))-Q^*_t(H_t;i))\,\phi_t(f_t(H_t);i)]$. Equation S10.3 implies $\phi_t(f^*_t(H_t);i)=0$ for $i\ne d^*_t(H_t)$. Therefore, $V(f$... | https://arxiv.org/abs/2505.17285v1 |
$Cz^\alpha$, where the last step follows from Assumption 1. Here $C>0$ is a constant depending only on $P$. Thus we have shown that $V(f^*)-V_\psi(f)\ge -Cz^{1+\alpha}+\frac{z}{2C_{\max}}\sum_{t\in[T]}E[\prod_{i=1}^{t-1}\frac{\phi_i(f_i(H_i);A_i)}{\pi_i(A_i\mid H_i)}\sum_{i\in[k_t]}Q^*_t(H_t;i)\,\lvert\phi_t(f_t(H_t);i)-\phi_t(f^*_t(H_t);i)\rvert]$, which, combined with the upper bound on $\lVert U_{f^*,f}\rVert_{P,1}$ from (S10.10), leads to ... | https://arxiv.org/abs/2505.17285v1 |
the partial derivatives are also bounded. To show that $\phi_t(\cdot,j)$ is differentiable, it suffices to show that its partial derivatives exist and are continuous. Since all partial derivatives of $K$ exist and are bounded for all $x\in\mathbb{R}^k$, by the Leibniz integral rule, $\frac{\partial\phi(x;j)}{\partial x_i}=\frac{\partial}{\partial x_i}\int_{\mathbb{R}^k}\mathbf{1}[\mathrm{pred}(t)=j]K(x-t)\,dt=\int_{\mathbb{R}^k}\mathbf{1}[\mathrm{pred}(t)=j]$... | https://arxiv.org/abs/2505.17285v1 |
$=E[\sum_{i=t}^T Y_i\prod_{j=1+t}^T\frac{\mathbf{1}[A_j=d^*_j(H_j)]}{\pi_j(A_j\mid H_j)}\mid H_t,A_t=a_t]$ holds for $t=T$ because $Q^*_T(H_T,A_T)=E[Y_T\mid H_T,A_T]$ and we have defined products over an empty range to be one. Suppose the induction hypothesis holds for $1+t$ where $t\in[1:T-1]$. We will show that the induction hypothesis holds for $t$ as well. Note that $E[\sum_{i=t}^T Y_i\prod_j^T$... | https://arxiv.org/abs/2505.17285v1 |
and Sugiyama, M. (2019). On symmetric losses for learning from corrupted labels. In International Conference on Machine Learning, pages 961–970. PMLR. Chen, J., Fu, H., He, X., Kosorok, M. R., and Liu, Y. (2018). Estimating individualized treatment rules for ordinal treatments. Biometrics, 74(3), 924–933. Chen, Y., ... | https://arxiv.org/abs/2505.17285v1 |
nonparametric models for dynamic treatment effects. Journal of Econometrics, 225(2), 132–147. He, K., Zhang, X., Ren, S., and Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1... | https://arxiv.org/abs/2505.17285v1 |
fluid resuscitation and vasopressor therapy research priorities in adult patients. Intensive Care Medicine Experimental, 9(1), 1–16. Lee, Y., Lin, Y., and Wahba, G. (2004). Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data. Journal of the ... | https://arxiv.org/abs/2505.17285v1 |
i: main content. The International Journal of Biostatistics, 6(2). Osokin, A., Bach, F., and Lacoste-Julien, S. (2017). On structured prediction theory with calibrated convex surrogate losses. Advances in Neural Information Processing Systems, 30. Pan, Y. and Zhao, Y.-Q. (2021). Improved doubly robust estimation in lea... | https://arxiv.org/abs/2505.17285v1 |
optimal dynamic treatment regimes. The Annals of Applied Statistics, 12(3), 1914. Tewari, A. and Bartlett, P. L. (2007). On the consistency of multiclass classification methods. Journal of Machine Learning Research, 8(May), 1007–1025. Thomas, P. and Brunskill, E. (2016). Data-efficient off-policy policy evaluation for ... | https://arxiv.org/abs/2505.17285v1 |
of hidden confounders. arXiv preprint arXiv:2402.14942. Zhang, T. (2004). Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5(Oct), 1225–1251. Zhang, Y., Laber, E. B., Davidian, M., and Tsiatis, A. A. (2018). Interpretable d... | https://arxiv.org/abs/2505.17285v1 |
Wasserstein Transfer Learning. Kaicheng Zhang¹*, Sinian Zhang²*, Doudou Zhou³†, Yidong Zhou⁴†. ¹School of Mathematical Sciences, Zhejiang University, China; ²Division of Biostatistics and Health Data Science, University of Minnesota, USA; ³Department of Statistics and Data Science, National University of Singapore, Singapore; ⁴De... | https://arxiv.org/abs/2505.17404v1 |
contributions of this work are summarized as follows: Methodology. We propose a novel transfer learning framework for regression models with distributional outputs, addressing the challenges inherent in the Wasserstein space, which lacks a conventional linear structure. Our framework includes an efficient algorithm for... | https://arxiv.org/abs/2505.17404v1 |
residing in general metric spaces. In practical scenarios where only finite samples from the unknown distributional output are available, empirical measures have been utilized as substitutes for the unobservable distributions in regression models (Zhou & Müller 2024). 2 Preliminaries. 2.1 Notations. Let $L^2(0,1)$ be the sp... | https://arxiv.org/abs/2505.17404v1 |
$1+(X_i-\bar X)^T\widehat\Sigma^{-1}(x-\bar X)$. Similarly, local Fréchet regression extends classical local linear regression to settings with metric space-valued outputs. In the case of a scalar predictor $X\in\mathbb{R}$, the local Fréchet regression function is $m_{L,h}(x)=\arg\min_{\mu\in\mathcal{W}}E\{s_L(x,h)d^2_W(\nu,\mu)\}$, where the weight function is $s_L(x,h)=K_h(X-x)$... | https://arxiv.org/abs/2505.17404v1 |
guidelines for selecting $\lambda$ are provided in Theorem 2. The final step projects the corrected estimate $\widehat f_0$ onto the Wasserstein space, ensuring the output $\widehat m^{(0)}_G(x)$ satisfies the intrinsic geometry of $\mathcal{W}$. Such a projection exists and is unique as $\mathcal{W}$ is a closed and convex subset of $L^2(0,1)$. 3.3 Adaptive Selecti... | https://arxiv.org/abs/2505.17404v1 |
literature and generally holds in real-world scenarios (Cai et al. 2024). In particular, this assumption is common when dealing with high-dimensional data where regularization is necessary (Li et al. 2022). Condition 2 assumes that the number of samples is significantly larger than the number of sources K, which is rea... | https://arxiv.org/abs/2505.17404v1 |
by cross-validation. Theorem 3. Under Condition 3 and the conditions of Theorem 2, for the AWaTL algorithm with a fixed number of sources $K$ we have $P(\widehat{\mathcal{A}}=\mathcal{A})\to 1$ and $d^2_W(\widehat m^{(0)}_G(x),m^{(0)}_G(x))=O_p\Big(n_0^{-1/2+\epsilon}\Big(\max_{k\in\mathcal{A}}\psi_k+\frac{\sum_{k\in\mathcal{A}\cup\{0\}}\sqrt{n_k}}{\sum_{k\in\mathcal{A}\cup\{0\}}n_k}+(\sum$... | https://arxiv.org/abs/2505.17404v1 |
both target and source domains to achieve improved prediction. To better understand when negative transfer may occur, we conduct an ablation study with $K=1$ source and vary the similarity parameter $\psi_1$ from 0.01 to 1 in increments of 0.01, with $n_0=100$ and $n_1=200$. Our method outperforms the Only Target approach when $\psi_1<0.9$, wh... | https://arxiv.org/abs/2505.17404v1 |
seven days are concatenated to form the distribution of each participant's activity intensity. To evaluate the WaTL, we set White, Mexican Americans, and other Hispanic individuals as sources and Black as target. For females, the source data include 1308 White people, 884 Mexican Americans, and 108 Other Hispanic individ... | https://arxiv.org/abs/2505.17404v1 |
are available. This limitation can be addressed by replacing the unobservable distributions with empirical measures constructed from these samples (Zhou & Müller 2024). The derived convergence rates can be extended to incorporate an additional term reflecting the number of independent samples per distribution, provid... | https://arxiv.org/abs/2505.17404v1 |
Sinkhorn distances: Lightspeed computation of optimal transport, in C. Burges, L. Bottou, M. Welling, Z. Ghahramani & K. Weinberger, eds, ‘Advances in Neural Information Processing Systems’, Vol. 26. Feraco, A., Gorini, S., Camajani, E., Filardi, T., Karav, S., Cava, E., Strollo, R., Padua, E., Caprio, M., Armani, A. &... | https://arxiv.org/abs/2505.17404v1 |
Springer International Publishing, Cham, pp. 737–753. Shin, H.-C., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., Yao, J., Mollura, D. & Summers, R. M. (2016), ‘Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning’, IEEE Transactions on Med... | https://arxiv.org/abs/2505.17404v1 |
arXiv:2505.17469v1 [cs.LG] 23 May 2025. Efficient compression of neural networks and datasets. Lukas Silvester Barth¹* (lukas.barth@mis.mpg.de), Paulo von Petersenn¹²* (vonpetersenn@mis.mpg.de). ¹Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany; ²University of Leipzig, Leipzig, Germany. Abstract. We compare... | https://arxiv.org/abs/2505.17469v1 |
of decades, we are only starting to be able to fully realize their potential now because the objects involved are so difficult to compute. The Kolmogorov complexity is in general uncomputable (though it can be approximated from above), and even the special case of compressive sensing turns out to be NP-hard. However, t... | https://arxiv.org/abs/2505.17469v1 |
special cases of these principles in Section 2.1. In Section 3.2, we empirically verify the prediction of Solomonoff induction that regularized models can exhibit more sample-efficient convergence. We implemented our methods both in Python using PyTorch [50] as well as in Julia using Lux [47, 48], and all our code is a... | https://arxiv.org/abs/2505.17469v1 |
[63, 28]: Theorem 2.1. Given any $\mu\in\mathcal{M}$, let $\xi(x)$ be a (semi-)probability distribution that satisfies $\xi(x)\ge w(\mu)\mu(x)$ for some fixed $w(\mu)$ and all $x$. Then, for all $n\in\mathbb{N}$, $\sum_{t=1}^n E_{x_{<t}\sim\mu}\sum_{x_t}(\mu(x_t\mid x_{<t})-\xi(x_t\mid x_{<t}))^2\le\ln w_\mu^{-1}$. A similar bound has been proven in [53] for the MAP, however with considerably slower convergence rate, na... | https://arxiv.org/abs/2505.17469v1 |
practice the most relevant objective for us remains (2). We must also specialize to function families $f_\theta$ amenable to backpropagation and we approximate $\ell$ by $\ell_0$. Nevertheless, we still obtain a fairly general objective that our methods can be applied to, $L_\theta(x)=\alpha\ell_0(\theta)+L(f_\theta,x)$ (6), where $L$ is a differentiable loss functio... | https://arxiv.org/abs/2505.17469v1 |
$(y_i-\sum_j X_{ij}w_jz_j)^2=\sum_i(y_i-\sum_j X_{ij}w_j\gamma_j)^2+\sum_{i,j}X^2_{ij}w^2_j\gamma_j(1-\gamma_j)$. Proof. We provide a copy of the proof in Appendix C.2. This lemma provides a way to substantially reduce the number of summands compared to the naive expectation. However, this is still not applicable to the fully non-linear objective (6). To solve this prob... | https://arxiv.org/abs/2505.17469v1 |
beneficial, allowing us to drop the $\rho\lVert\theta\rVert^2_2$ term; (2) we find that increasing the regularization strength $\alpha$ during training can yield lower loss and more stable outcomes. Thus we divide training into 3 phases: $\alpha=0$, $\alpha\ne 0$, and finetuning [37] after pruning. Since the authors did not provide a name for their method, we ca... | https://arxiv.org/abs/2505.17469v1 |
can increase the probability by sampling several times, though this was not necessary in practice. Furthermore, note that this works well for neural networks because usually every weight can influence the value of the function at any point. If certain weights only influence the function in certain regions of the domain... | https://arxiv.org/abs/2505.17469v1 |
sampling is not necessary. In contrast to the smooth reformulations, it is the only method that provides an exact reformulation of the original optimization objective (6). We therefore believe that there is more potential in this method and hope that the underlying ideas will be taken up and improved in the future. 3.2... | https://arxiv.org/abs/2505.17469v1 |
loss and description length on the model size, which in turn depends on the regularization strength. In principle, we would expect that the smaller the dataset in comparison to the initial model size, the more beneficial $\ell_0$ regularization is for achieving low test loss. Indeed, we can observe exactly this effect in the... | https://arxiv.org/abs/2505.17469v1 |
crosses in the figure, often perform significantly worse than the regularized big models, even if the final model size of the regularized big model at the end of training equals the smaller unregularized model size. We conjecture that the main reason is that the regularization can pick out, for a given regularization s... | https://arxiv.org/abs/2505.17469v1 |
Nevertheless, it is limited in that the $\ell_0$ and $\ell_1$ norms are specific approximations of the description length of a model. One might be able to achieve even better results by combining the approach with methods that compress the regularities in the pattern of the weights of the networks. A more refined approach would attemp... | https://arxiv.org/abs/2505.17469v1 |
Luiz Ortiz Batista, and Rui Seara. On the compression of neural networks using l0-norm regularization and weight pruning. Neural Networks, 171:343–352, 2024. [9] DeepSeek-AI. DeepSeek-V3 technical report, 2024. [10] Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas... | https://arxiv.org/abs/2505.17469v1 |
selection for deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 304–320, 2018. [28] Marcus Hutter. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer Science & Business Media, 2005. [29] Yerlan Idelbayev and Miguel Á Carr... | https://arxiv.org/abs/2505.17469v1 |
Gregory Thornton, Andeep Toor, Cristian Udrescu, Aayush Upadhyay, Cristina Vasconcelos, Alex Vasiloff, Andrey Voynov, Amanda Walker, Luyu Wang, Miaosen Wang, Simon Wang, Stanley Wang, Qifei Wang, Yuxiao Wang, Ágoston Weisz, Olivia Wiles, Chenxia Wu, Xingyu Federico Xu, Andrew Xue, Jianbo Yang, Luo Yu, Mete Yurtoglu, A... | https://arxiv.org/abs/2505.17469v1 |
efficient training & inference of neural differential equations, 2023. [49] Richard Clark Pasco. Source Coding Algorithms for Fast Data Compression. PhD thesis, Stanford University CA, 1976. [50] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gi... | https://arxiv.org/abs/2505.17469v1 |
Xia Xiao, Zigeng Wang, and Sanguthevar Rajasekaran. AutoPrune: Automatic network pruning by regularizing auxiliary parameters. Advances in Neural Information Processing Systems, 32, 2019. [69] Mingzhang Yin, Nhat Ho, Bowei Yan, Xiaoning Qian, and Mingyuan Zhou. Probabilistic best subset selection via gradient-based op... | https://arxiv.org/abs/2505.17469v1 |
densities, that is $p_{\theta,\sigma}(b\mid a):=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(b-f_\theta(a))^2}{2\sigma^2}\right)$ (13), with $\sigma\in\mathbb{R}$ and $\theta\in\mathbb{R}^n$ and $f_\theta$ the function calculated by the parameters $\theta$ of a neural network. If we want to encode a datapoint $y_i$, then we need the probability $p_{\theta,\sigma}(y_i)$. However, we only have a density and the Lebesgue measure of a single point vanishes. This is wh... | https://arxiv.org/abs/2505.17469v1 |
where we encounter a signal $f_\theta(x_i)$ chained with a noise of bounded variance. Proof of Proposition B.1: For convenience we write $L_{\text{Gauß}}(\theta,\sigma,X)=\frac{\ell(\theta,\sigma)}{n}+L_{\text{simple}}(\theta,\sigma,X)$ (18). We may assume that $\ell(\theta,\sigma)$ is bounded by some constant and thus in the limit $n\to\infty$: $\arg\min_{\vartheta,\varsigma}L_{\text{Gauß}}(\vartheta,\varsigma,X)=\arg\min_{\vartheta,\varsigma}L_{\text{simple}}(\vartheta,\varsigma,X)$ ... | https://arxiv.org/abs/2505.17469v1 |
$\ldots_i-2y_i\sum_j X_{ij}w_j\gamma_j+E_{z\sim\pi_\gamma}\big(\sum_j X_{ij}w_jz_j\big)^2$ (22). For the remaining expectation over $\big(\sum_j X_{ij}w_jz_j\big)^2$, note that terms will be of the form $X_{ij_1}X_{ij_2}w_{j_1}w_{j_2}z_{j_1}z_{j_2}$. Whenever $j_1\ne j_2$, such terms are again separately linear in $z_{j_1}$ and $z_{j_2}$ and can be replaced by $\gamma_{j_1}\gamma_{j_2}$ terms. Only if $j_1=j_2$, we get $E_{z\sim\pi_\gamma}[z_{j_1}z_{j_2}]=E_{z\sim\pi_\gamma}[z^2_{j_1}]=\gamma_{j_1}\cdot 1$... | https://arxiv.org/abs/2505.17469v1 |
the numerical solution procedure and guarantees convergence for a wider class of functions. The ADMM procedure now consists of alternatingly minimizing with respect to $x_i$ and to $z$ and maximizing with respect to the Lagrange multipliers $y_i$ (hence the name alternating direction method of multipliers). Concretely, the ... | https://arxiv.org/abs/2505.17469v1 |
$\sum_p P^2_{p\ell}w_\ell\pi_\ell(1-\pi_\ell)=-\rho\sum_{i,j}(M_ix^{k+1}_i-c+u^k_i)_jP_{j\ell}\pi_\ell+N\rho\sum_{j,q}\pi_\ell P^T_{\ell j}P_{jq}\pi_qw_q+N\rho\sum_{j,q}P^T_{\ell j}P_{j\ell}\pi_\ell(1-\pi_\ell)\delta_{\ell q}w_q=-\rho\pi_\ell\sum_{i,j}P^T_{\ell j}(M_ix^{k+1}_i-c+u^k_i)_j+N\rho\pi_\ell\sum_{j,q}P^T_{\ell j}P_{jq}(\pi_q+(1-\pi_q)\delta_{\ell q})w_q$ (31) and setting this to 0. 4. In general, this can be non-trivial to solve but for the important special case that $P=\mathrm{id}$, or $P_{jq}=\delta_{jq}$, we obtain $0=$... | https://arxiv.org/abs/2505.17469v1 |
away all unnecessary weights. This then produces spurious weights as described in Section 2.2.3 and appendix D. Those weights can thereafter be eliminated efficiently with Random Gradient Pruning before proceeding to the next-to-last layer and so on. In this way, part of the layerwise information is propagated through ... | https://arxiv.org/abs/2505.17469v1 |
limitations. Improving the $\ell_0$ Approximation. To mitigate issues with gradient magnitude (points 1 and 2), we evaluated alternative smooth approximations to the $\ell_0$ indicator function: $\ell_0(x)\approx 1-\exp(-\beta\lvert\theta_i\rvert)$ (39), $\ell_0(x)\approx 1-\exp(-\theta_i^2/(2\beta^2))$ (40), $\ell_0(x)\approx 1-\frac{1}{1+\theta_i^2/(3\beta^2)}$ (41). The first is the exponential form used in [8], while t... | https://arxiv.org/abs/2505.17469v1 |
increase $\alpha$ over time and warm-up phases without regularization. Both approaches led to more stable optimization and stronger final results. This staged strategy balances early-stage learning (favoring flexibility) with late-stage compression (favoring sparsity), and we recommend it as a general practice when applying DR... | https://arxiv.org/abs/2505.17469v1 |
the average of each and then checks whether the second average is greater than or equal to the first one, up to the variation in the data in those halves. The slightly innovative part in this procedure is how we determine the variation in the data. Since the data varies along a (usually decaying loss) curve, which has itself varying... | https://arxiv.org/abs/2505.17469v1 |
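
The compression rows above (arXiv:2505.17469) quote three smooth surrogates for the $\ell_0$ indicator, equations (39)-(41), the regularized objective $L_\theta(x)=\alpha\ell_0(\theta)+L(f_\theta,x)$ from (6), and a closed-form expectation over Bernoulli gates (the lemma ending in (22)). The PyTorch sketch below is a minimal illustration of those formulas only, assuming nothing beyond what the excerpts state; the function names, the default $\beta$ and $\alpha$, and the toy tensor shapes are hypothetical, not the authors' implementation.

```python
import torch

# Smooth l0 surrogates quoted in (39)-(41); beta is the sharpness parameter.
def l0_exp_abs(theta, beta=0.1):
    # (39): 1 - exp(-beta * |theta_i|), summed over all entries
    return (1 - torch.exp(-beta * theta.abs())).sum()

def l0_gauss(theta, beta=0.1):
    # (40): 1 - exp(-theta_i^2 / (2 beta^2))
    return (1 - torch.exp(-theta.pow(2) / (2 * beta ** 2))).sum()

def l0_cauchy(theta, beta=0.1):
    # (41): 1 - 1 / (1 + theta_i^2 / (3 beta^2))
    return (1 - 1 / (1 + theta.pow(2) / (3 * beta ** 2))).sum()

def objective(model, x, y, alpha=1e-3, surrogate=l0_gauss):
    # Objective in the spirit of (6): alpha * l0(theta) + differentiable data loss.
    data_loss = torch.nn.functional.mse_loss(model(x), y)
    reg = sum(surrogate(p) for p in model.parameters())
    return data_loss + alpha * reg

# Monte-Carlo check of the Bernoulli-gate identity:
# E_z sum_i (y_i - sum_j X_ij w_j z_j)^2
#   = sum_i (y_i - sum_j X_ij w_j g_j)^2 + sum_{i,j} X_ij^2 w_j^2 g_j (1 - g_j),
# with independent gates z_j ~ Bernoulli(g_j).
torch.manual_seed(0)
X, w, y = torch.randn(5, 4), torch.randn(4), torch.randn(5)
g = torch.rand(4)
z = torch.bernoulli(g.expand(200_000, 4))      # one gate vector per MC sample
mc = ((y - (w * z) @ X.T) ** 2).sum(1).mean()  # empirical expectation
closed = ((y - X @ (w * g)) ** 2).sum() + ((X ** 2) @ (w ** 2 * g * (1 - g))).sum()
print(float(mc), float(closed))  # the two agree up to Monte-Carlo error
```

A staged schedule like the one described in the rows above (start with $\alpha=0$, then increase $\alpha$, then finetune after pruning) could be reproduced by passing a growing `alpha` to `objective` across training phases.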