the MDP framework by defining transition probabilities
$$P_t^{s_t,a_t}(s_{t+1}\in A) := P_t\{\Phi_t(s_t,a_t,\xi_t)\in A\},\qquad A\in\mathcal F_{t+1},\qquad (4.2)$$
where $\xi_t\sim P_t$. However, there is an essential difference between the SOC and MDP modeling. In the MDP the probability law of the process is defined by the transition kernels and there is no explicitly defined ...
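As a concrete illustration (ours, not from the paper): for linear dynamics with additive Gaussian noise, the construction in (4.2) yields an explicit Gaussian transition kernel,
$$\Phi_t(s_t,a_t,\xi_t) = A s_t + B a_t + \xi_t,\quad \xi_t\sim N(0,\Sigma)\;\Longrightarrow\; P_t^{s_t,a_t} = N(A s_t + B a_t,\,\Sigma),$$
so the MDP transition law is recovered directly from the dynamics $\Phi_t$ and the noise distribution $P_t$.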
https://arxiv.org/abs/2505.16651v1
the following robust counterpart of the risk averse functionals
$$\mathcal R_t^{s_t,a_t}(\cdot) := \sup_{P_t^{s_t,a_t}\in\mathfrak M_t}\mathcal R_{P_t^{s_t,a_t}}(\cdot),\qquad t=1,\dots,T,\qquad (4.9)$$
and the respective nested functional $\mathcal R^{\pi} = \mathcal R^{s_1,\pi_1(s_1)}\circ\cdots\circ\mathcal R^{s_T,\pi_T(s_T)}$ defined iteratively as in (4.6). Then the corresponding robust nested formulation is obtained by replacing $\mathcal R^{\pi}$ in (4.7) with $\mathcal R^{\pi}$...
https://arxiv.org/abs/2505.16651v1
arXiv:2505.16780v1 [math-ph] 22 May 2025. Large time and distance asymptotics of the one-dimensional impenetrable Bose gas and Painlevé IV transition. Zhi-Xuan Meng1, Shuai-Xia Xu2, and Yu-Qiu Zhao1. 1Department of Mathematics, Sun Yat-sen University, Guangzhou 510275, China. 2Institut Franco-Chinois de l'Energie Nuclé...
https://arxiv.org/abs/2505.16780v1
3.5 RH problem for M
3.6 Final transformation
3.7 Large x asymptotics in the space-like region
4 Asymp...
https://arxiv.org/abs/2505.16780v1
6.1.1 Deformation of the jump contour
6.1.2 Local parametrix near z = 0
6.1.3 Local parametrix near z = −1
6.1.4 Final transformation ...
https://arxiv.org/abs/2505.16780v1
These determinants have been applied to characterize the bulk scaling limit distributions of particles in the noninteracting spinless fermion system, the Moshe-Neuberger-Shapiro random matrix ensemble, and non-intersecting Brownian motions [27, 35, 38]. Recently, a completely integrable system of PDEs and integro-differe...
https://arxiv.org/abs/2505.16780v1
$L^2(-1,1)$ with the sine kernel $K^{(\sin)}$. It is remarkable that the logarithmic derivative of the Fredholm determinant of $K^{(\sin)}_x$,
$$\sigma_V(x;\gamma) = x\frac{d}{dx}\ln\det\big(I-\gamma K^{(\sin)}_x\big),\qquad (1.12)$$
satisfies the $\sigma$-form of the fifth Painlevé equation [26, Eq. (2.27)]
$$(x\sigma_V'')^2 + 4\big(4\sigma_V - 4x\sigma_V' - \sigma_V'^2\big)\big(\sigma_V - x\sigma_V'\big) = 0,\qquad (1.13)$$
with the boundary conditi...
https://arxiv.org/abs/2505.16780v1
$x$ bounded away from the zeros of $\cos(2x)$. Moreover, we have the asymptotic expansions of the corresponding solutions of the separated NLS equations $b_{++}$ and $B_{--}$, defined by (1.5) and (1.6), as $x\to+\infty$:
$$b_{++}(x,t) = -\pi e^{-2it}\cos(2x) - \sqrt{\tfrac{\pi}{2t}}\,e^{i\left(\frac{x^2}{2t}-\frac{\pi}{4}\right)}\left(1 - \frac{4it\tan(2x)}{x-2t} + \frac{4t^2 e^{4ix}}{(x^2-4t^2)\cos^2(2x)}\right) + O(x^{-1}),\qquad (1.18)$$
$$B_{--}(x,t) = e^{2it}...$$
https://arxiv.org/abs/2505.16780v1
$\pi\cos(2x)$. (1.26)
Remark 1.6. From Theorems 1.1 and 1.2, we observe a phase transition in the leading asymptotics of $\partial_t D(x,t)$ given in (1.16) and (1.20). Specifically, the phase changes from $e^{i\left(\frac{x^2}{2t}+2t+\frac{\pi}{4}\right)}$ to $e^{-i\left(\frac{x^2}{2t}+2t+\frac{\pi}{4}\right)}$ as $(x,t)$ moves across the critical curve $x=2t$, as shown in Fig. 1. From Theorems 1.1 and...
https://arxiv.org/abs/2505.16780v1
to (1.20). Therefore, the Painlevé IV asymptotics describes the phase transition between the asymptotics of $\partial_t D(x,t)$ in the space-like and time-like regions as given in (1.16) and (1.20). Similarly, the Painlevé IV asymptotics shown in Theorem 1.8 also describe the phase transition in the asymptotics of the quant...
https://arxiv.org/abs/2505.16780v1
$2\pi i\,\vec f(\lambda)\vec g^{\,T}(\lambda)$, $\lambda\in(-1,1)$. (2.7)
(3) $X(\lambda) = I + O(1/\lambda)$, as $\lambda\to\infty$.
Then the solution to the above RH problem is expressible in terms of the functions $\vec F$ and $\vec g$ by using the Cauchy integral
$$X(\lambda) = I + \int_{-1}^{1}\frac{\vec F(\mu)\vec g^{\,T}(\mu)}{\mu-\lambda}\,d\mu.\qquad (2.8)$$
The behavior of $X$ at infinity can be expressed as
$$X(\lambda) = I + \frac{X_1}{\lambda} + \frac{X_2}{\lambda^2} + O\!\left(\frac{1}{\lambda^3}\right).\qquad (2.9)$$
W...
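As a quick check (a standard expansion, stated here for the reader's convenience): expanding the Cauchy kernel in (2.8) at infinity, $\frac{1}{\mu-\lambda} = -\sum_{k\ge 0}\frac{\mu^k}{\lambda^{k+1}}$ uniformly for $\mu\in(-1,1)$, gives
$$X(\lambda) = I - \sum_{k\ge 0}\frac{1}{\lambda^{k+1}}\int_{-1}^{1}\mu^{k}\,\vec F(\mu)\vec g^{\,T}(\mu)\,d\mu,$$
so the coefficients in (2.9) are the moments
$$X_1 = -\int_{-1}^{1}\vec F(\mu)\vec g^{\,T}(\mu)\,d\mu,\qquad X_2 = -\int_{-1}^{1}\mu\,\vec F(\mu)\vec g^{\,T}(\mu)\,d\mu.$$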
https://arxiv.org/abs/2505.16780v1
where the branch for $\xi^{\Theta_\infty}$ is chosen such that $\arg\xi\in(-\frac{\pi}{2},\frac{3\pi}{2})$.
(4) As $\xi\to 0$, for $\Theta\ne 0$, $\Psi(\xi) = \Psi^{(0)}(\xi)\,\xi^{\Theta\sigma_3}$, where $\Psi^{(0)}(\xi)$ is analytic near $\xi=0$. For $\Theta=0$, we have $\Psi(\xi) = O(\ln|\xi|)$, as $\xi\to 0$.
Then, the Painlevé IV transcendent $u$ and the quantities $y$ and $H$ can be expressed in terms of the elements of $\Psi_1$ and $\Psi_2$ as follows: $y$...
https://arxiv.org/abs/2505.16780v1
Appendix A. The solution to the above RH problem can be constructed as follows:
$$P^{(-1)}(\lambda) = E^{(-1)}(\lambda)\,P_0^{(-1)}(\xi(\lambda))\,e^{i(t\lambda^2+x\lambda)\sigma_3},\qquad \lambda\in U(-1,\delta),\qquad (3.7)$$
where
$$P_0^{(-1)}(\xi) = \begin{cases}\Phi^{(CHF)}(\xi)\,(e^{\pi i}\pi)^{-\frac{1}{2}\sigma_3}, & \arg\xi\in(0,\frac{\pi}{3})\cup(\frac{2\pi}{3},\pi),\\[2pt] \Phi^{(CHF)}(\xi)\begin{pmatrix}1&0\\-i&1\end{pmatrix}(e^{\pi i}\pi)^{-\frac{1}{2}\sigma_3}, & \arg\xi\in(\frac{\pi}{3},\frac{\pi}{2}),\\[2pt] \Phi^{(CHF)}(\xi)\begin{pmatrix}1&0\\i&1\end{pmatrix}(e^{\pi ...}\end{cases}$$
https://arxiv.org/abs/2505.16780v1
$\delta$) and $\partial U(1,\delta)$.
RH problem for M
(1) $M(\lambda)$ is analytic for $\lambda\in\mathbb C\setminus(\partial U(-1,\delta)\cup\partial U(1,\delta))$.
(2) On the boundaries $\partial U(-1,\delta)$ and $\partial U(1,\delta)$, we have
$$M_+(\lambda) = M_-(\lambda)\begin{pmatrix}1 & \frac{\lambda-1}{(\lambda+1)\pi}e^{2i(x-t)}\\ 0 & 1\end{pmatrix},\qquad \lambda\in\partial U(-1,\delta),\qquad (3.19)$$
$$M_+(\lambda) = M_-(\lambda)\begin{pmatrix}1 & 0\\ -\frac{\lambda+1}{\pi(\lambda-1)}e^{2i(x+t)} & 1\end{pmatrix},\qquad \lambda\in\partial U(1,\delta).\qquad (3.20)$$
(3) As $\lambda\to\infty$, $M(\lambda) = I + O(1/\lambda)$.
Let
$$A(\lambda) = I + \frac{B}{\lambda+1} + \frac{C}{\lambda-1},\qquad (3.21...$$
https://arxiv.org/abs/2505.16780v1
$\lambda$), $\lambda\in\partial U(\lambda_0,\delta)$, (3.39) with
$$\Delta(\lambda) = -\frac{\sqrt{-\lambda_0}\,\pi\,e^{i\left(\frac{x^2}{2t}-\frac{\pi}{4}\right)}}{\lambda-\lambda_0}\begin{pmatrix}\frac{1}{\lambda+1}\frac{2\pi e^{2i(x+t)}}{1+e^{4ix}}+\frac{1}{\lambda^2-1}\frac{4\pi e^{2i(x+t)+4ix}}{(1+e^{4ix})^2} & \frac{\lambda-1}{\lambda+1}+\frac{1}{\lambda+1}\frac{4e^{4ix}}{1+e^{4ix}}+\frac{1}{\lambda^2-1}\frac{4e^{8ix}}{(1+e^{4ix})^2}\\[4pt] -\frac{1}{\lambda^2-1}\frac{4\pi^2 e^{4i(x+t)}}{(1+e^{4ix})^2} & -\frac{1}{\lambda+1}\frac{2\pi e^{2i(x+t)}}{1+e^{4ix}}-\frac{1}{\lambda^2-1}\frac{4\pi e^{2i(x+t)+4ix}}{(1+e^{4ix})^2}\end{pmatrix}.\qquad (3.40)$$
We obtain that
$$R^{(1)}(\lambda) = \begin{cases}\frac{C}{\lambda-\lambda_0}, & \lambda\in\mathbb C\setminus U(\lambda_0,\delta),\\ \frac{C}{\lambda-\lambda_0}...\end{cases}$$
https://arxiv.org/abs/2505.16780v1
Appendix A. The solution to the above RH problem can be constructed as follows:
$$P^{(-1)}(\lambda) = E^{(-1)}(\lambda)\,P_0^{(-1)}(\xi(\lambda))\,e^{i(t\lambda^2+x\lambda)\sigma_3},\qquad \lambda\in U(-1,\delta),\qquad (4.6)$$
where
$$P_0^{(-1)}(\xi) = \begin{cases}\Phi^{(CHF)}(\xi)\,\pi^{\frac{1}{2}\sigma_3}\sigma_1, & \arg\xi\in(0,\frac{\pi}{3})\cup(\frac{2\pi}{3},\pi),\\[2pt] \Phi^{(CHF)}(\xi)\begin{pmatrix}1&0\\i&1\end{pmatrix}\pi^{\frac{1}{2}\sigma_3}\sigma_1, & \arg\xi\in(\frac{\pi}{3},\frac{\pi}{2}),\\[2pt] \Phi^{(CHF)}(\xi)\begin{pmatrix}1&0\\-i&1\end{pmatrix}\pi^{\frac{1}{2}\sigma_3}\sigma_1...\end{cases}$$
https://arxiv.org/abs/2505.16780v1
uniform for $\lambda$ bounded away from the jump contour for $R$.
4.5 Large t asymptotics in the time-like region
By tracing back the series of invertible transformations $Y\mapsto T\mapsto R$, (4.18) we obtain that as $t\to+\infty$,
$$Y(\lambda) = R(\lambda)A(\lambda)P^{(\infty)}(\lambda),\qquad \lambda\in\mathbb C\setminus\big(\cup_{i=1}^{4}\Omega_i\cup U(1,\delta)\cup U(-1,\delta)\cup U(\lambda_0,\delta)\big),\qquad (4.19)$$
where $P^{(\infty)}$ and $A$ are defined in (3.3) and (3.21), ...
https://arxiv.org/abs/2505.16780v1
global parametrix as given in (3.3).
5.2 Local parametrix near λ = −1
In this subsection, we seek a parametrix $P^{(-1)}$ that satisfies the same jump conditions as $T$ on $\Sigma$ in the neighborhood $U(-1,\delta)$, for some $\delta>0$.
RH problem for P(−1)
(1) $P^{(-1)}(\lambda)$ is analytic for $\lambda\in U(-1,\delta)\setminus\Sigma$.
(2) $P^{(-1)}(\lambda)$ has the same jumps as $T(\lambda)$ on $U(-1,...$
https://arxiv.org/abs/2505.16780v1
$\lambda\in U(-1,\delta)\setminus\Sigma$, (5.18) where $P^{(\infty)}$ is defined in (3.3). Then $R$ fulfills the following RH problem.
Figure 8: The jump contour of the RH problem for R.
RH problem for R
(1) $R(\lambda)$ is analytic for $\lambda\in\mathbb C\setminus\Sigma$, where the contour is shown in Fig. 8.
(2) $R_+(\lambda) = R_-(\lambda)J_R(\lambda)$, $\lambda\in\Sigma$, where $J_R(\lambda) = ...$
https://arxiv.org/abs/2505.16780v1
in (3.34), (5.25) and (5.33). From (5.34), (5.35) and Proposition 2.1, we obtain the asymptotics of $\partial_t D$, $\partial_x D$, $b_{++}$ and $B_{--}$ as $t\to+\infty$ in the transition region, as given in (1.38)-(1.41), which completes the proof of Theorem 1.8.
6 Asymptotic analysis of the PIV equation
In this section, we derive the asymptotics of a special solu...
https://arxiv.org/abs/2505.16780v1
the above RH problem can be constructed as follows:
$$P^{(0)}(z) = E^{(0)}(z)\,P_0^{(0)}(\zeta(z))\,e^{i\frac{\tau^2}{2}\varphi(z)\sigma_3},\qquad (6.12)$$
where
$$P_0^{(0)}(\zeta) = \begin{cases}\Phi^{(CHF)}(\zeta), & \arg\zeta\in(0,\frac{\pi}{4})\cup(\frac{2\pi}{3},\pi),\\[2pt] \Phi^{(CHF)}(\zeta)\begin{pmatrix}1&0\\2i&1\end{pmatrix}, & \arg\zeta\in(\frac{\pi}{4},\frac{\pi}{3}),\\[2pt] \Phi^{(CHF)}(\zeta)\begin{pmatrix}1&0\\i&1\end{pmatrix}, & \arg\zeta\in(\frac{\pi}{3},\frac{2\pi}{3}),\\[2pt] \Phi^{(CHF)}(\zeta)\begin{pmatrix}0&i\\i&0\end{pmatrix}, & \arg\zeta\in(\pi,\frac{5\pi}{4})\cup(-\frac{\pi}{3},-\pi ...\end{cases}$$
https://arxiv.org/abs/2505.16780v1
(6.27) where the error term is uniform for $z$ bounded away from the jump contour for $R$. Here $R^{(1)}$ satisfies
$$R^{(1)}_+(z) - R^{(1)}_-(z) = \Delta(z),\qquad z\in\partial U(-1,\delta),\qquad (6.28)$$
with
$$\Delta(z) = -\frac{e^{i\tau^2+\frac{3\pi i}{4}}}{\sqrt{\pi}\,z(z+1)}\begin{pmatrix}0&1\\0&0\end{pmatrix},\qquad z\in\partial U(-1,\delta).\qquad (6.29)$$
We obtain that
$$R^{(1)}(z) = \begin{cases}\frac{C}{z+1}, & z\in\mathbb C\setminus U(-1,\delta),\\ \frac{C}{z+1}-\Delta(z), & z\in U(-1,\delta),\end{cases}\qquad (6.30)$$
where $C = \mathrm{Res}(\Delta(z),-1)$ i...
https://arxiv.org/abs/2505.16780v1
(6.8), (6.43)-(6.45) and (A.6), the matching condition (6.40) is fulfilled.
6.2.3 Local parametrix near z = 1
In this subsection, we seek a parametrix $P^{(1)}$ satisfying the same jump conditions as $T$ on $\Sigma^{(T)}$ in the neighborhood $U(1,\delta)$, for some $\delta>0$.
RH problem for P(1)
(1) $P^{(1)}(z)$ is analytic for $z\in U(1,\delta)\setminus\Sigma^{(T)}$.
(2) $P^{(1)}($...
https://arxiv.org/abs/2505.16780v1
$$\Phi(z) = \left(I+\frac{\Phi_1}{z}+\frac{\Phi_2}{z^2}+O(z^{-3})\right)z^{-\frac{1}{2}\sigma_3},\qquad z\to\infty,\qquad (6.62)$$
where
$$\Phi_1 = R_1 + P^{(\infty)}_1,\qquad \Phi_2 = R_1 P^{(\infty)}_1 + R_2 + P^{(\infty)}_2.\qquad (6.63)$$
Here $P^{(\infty)}_1$, $P^{(\infty)}_2$, $R_1$ and $R_2$ are defined in (6.25) and (6.61). From (2.27), (2.28), (2.30), (6.54), (6.62) and (6.63), we obtain the following asymptotics for $y$, $H$ and $u$ as $\tau = e^{\frac{\pi i}{4}}s\to-\infty$:
$$y(s) = -2(\Psi_1)_{12} = -2(\Phi_1)_{12} = 2 + \frac{2e^{s^2}}{\sqrt{\pi}\,s} + O(s...$$
https://arxiv.org/abs/2505.16780v1
References
[1] M.J. Ablowitz, D.J. Kaup, A.C. Newell and H. Segur, The inverse scattering transform – Fourier analysis for nonlinear problems, Stud. Appl. Math., 53, 249–315 (1974).
[2] G. Amir, I. Corwin and J. Quastel, Probability distribution of the free energy of the continuum directed random polymer in 1 + 1 dime...
https://arxiv.org/abs/2505.16780v1
[20] A.R. Its, A.G. Izergin and V.E. Korepin, Long-distance asymptotics of temperature correlators of the impenetrable Bose gas, Comm. Math. Phys., 130, 471–488 (1990).
[21] A.R. Its, A.G. Izergin and V.E. Korepin, Temperature correlators of the impenetrable Bose gas as an integrable system, Comm. Math. Phys., 129, 205–222 (1990). ...
https://arxiv.org/abs/2505.16780v1
arXiv:2505.17285v1 [math.ST] 22 May 2025On Fisher Consistency of Surrogate Losses for Optimal Dynamic Treatment Regimes with Multiple Categorical Treatments per Stage Nilanjana Laha1,∗, Nilson Chapagain1, Victoria Cicherski1, and Aaron Sonabend-W2 1Department of Statistics, Texas A&M, College Station, TX 77843, e-mail:...
https://arxiv.org/abs/2505.17285v1
$k_t\ge 2$ available treatment options. We assume that each $k_t$ is fixed and does not grow with the sample size $n$. To distinguish this setting from the binary-treatment setting, i.e., where $k_t=2$ for all $t\in\{1,\dots,T\}$, we refer to it as the general DTR setting throughout this paper. Plenty of approaches have been proposed...
https://arxiv.org/abs/2505.17285v1
to an optimization problem with the discontinuous 0-1 loss $x\mapsto 1[x\le 0]$. The support vector machine replaces the 0-1 loss with the hinge loss $x\mapsto\max\{0,x\}$, which is Fisher consistent for the 0-1 loss (Bartlett et al., 2006). However, the discontinuous loss corresponding to the sequential DTR classification, referred to h...
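A minimal numerical illustration of this surrogate idea (ours, not the paper's; we use the classical margin convention, where the 0-1 loss of a margin m is 1[m <= 0] and the hinge surrogate is max(0, 1 - m)):

import numpy as np

# The hinge loss is a continuous convex upper bound of the discontinuous
# 0-1 loss, which is what makes gradient-based optimization possible.

def zero_one_loss(margin: np.ndarray) -> np.ndarray:
    """Discontinuous 0-1 loss: 1 when the margin is non-positive."""
    return (margin <= 0).astype(float)

def hinge_loss(margin: np.ndarray) -> np.ndarray:
    """Convex hinge surrogate; dominates the 0-1 loss everywhere."""
    return np.maximum(0.0, 1.0 - margin)

margins = np.linspace(-2, 2, 9)
print("margin:", margins)
print("0-1   :", zero_one_loss(margins))
print("hinge :", hinge_loss(margins))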
https://arxiv.org/abs/2505.17285v1
for the general DTR setting falls within this category. Although computationally efficient, these methods either incur a loss of sample size (e.g., the backward outcome weighted learning of Zhao et al., 2015) or depend on model-based objects such as pseudo-outcomes (e.g., the augmented outcome weighted learning of Liu et ...
https://arxiv.org/abs/2505.17285v1
Fisher consistent when extended to the T = 2 case. Our examples include both smooth and non-smooth (e.g., hinge loss) losses, as well as losses using sum-zero constraints. In particular, Theorem 3.1 shows that, under smoothness conditions, permutation equivariant and relative-margin-based (PERM) surrogate losses cannot...
https://arxiv.org/abs/2505.17285v1
class (Lemma 4.4).
1.1.2. Methodological contribution
The proposed DTR learning method, SDSS, solves a surrogate value optimization problem based on the above-mentioned smooth, non-convex, Fisher consistent surrogate losses. The smoothness of the surrogates allows for gradient-based optimization, enabling fast implem...
https://arxiv.org/abs/2505.17285v1
$\pm\infty$. Denote by $\mathbb R_{\ge 0}$ and $\mathbb R_{>0}$ the sets $[0,\infty)$ and $(0,\infty)$, respectively. For any $k\in\mathbb N$, define $\mathcal S^{k-1}$ to be the simplex in $\mathbb R^k$, i.e. $\mathcal S^{k-1}:=\{p\in\mathbb R^k: p_i\ge 0\ \text{for all}\ i\in[k],\ \sum_{i=1}^{k}p_i=1\}$. The $k$-dimensional vectors of all zeros and all ones are denoted by $0_k$ and $1_k$, respectively, and $I_k$ denotes the identity matrix of order $k$. A permutation $v$ on $[k$...
https://arxiv.org/abs/2505.17285v1
every patient, which are random vectors. The dimension of $O_t$ can change with stage $t$. The following diagram gives the sequence of events: $O_1\to A_1\to Y_1\to\cdots\to O_T\to A_T\to Y_T$. The $t$-th stage history $H_t$ contains everything observed before $A_t$; that is, $H_1=O_1$, and $H_t=(O_1,A_1,Y_1,\dots,O_t)$ for $2\le t\le T$. We denote the space of each $H_t$ b...
https://arxiv.org/abs/2505.17285v1
$$\prod_{t=1}^{T}\frac{1[d_t(H_t)=A_t]}{\pi_t(A_t\mid H_t)}\sum_{j=1}^{T}Y_j\Bigg].\qquad (2.2)$$
We define the optimal policy or DTR $d^*$ to be a maximizer of $V(d)$ over all possible DTRs, i.e., $d^*\in\operatorname{argmax}_d V(d)$. We denote the optimal value function $V(d^*)$ by $V^*$. Under Assumptions I-III, $d^*$ can be represented using the optimal Q-functions. We define the T-stage optimal Q-functio...
https://arxiv.org/abs/2505.17285v1
$d^*_t$ is linear if there exists $f^*_t$ such that $f^*_t:\mathcal H_t\mapsto\mathbb R^{k_t}$ is linear for all $t\in[T]$, and non-linear otherwise. Similar to Zhao et al. (2015) and Jiang et al. (2019), we use the inverse probability weighted (IPW) estimator to estimate $V(f)$, given by
$$\widehat V(f) = \mathbb P_n\Bigg[\prod_{t=1}^{T}\frac{1[A_t=\mathrm{pred}(f_t(H_t))]}{\pi_t(A_t\mid H_t)}\sum_{j=1}^{T}Y_j\Bigg].\qquad (2.5)$$
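A minimal sketch of how (2.5) could be computed, under simplifying assumptions of our own (known propensities, histories and scores supplied as arrays; the function and argument names are illustrative):

import numpy as np

def ipw_value(histories, actions, rewards, propensities, scores):
    """
    IPW estimate of V(f) as in (2.5).
    histories    : list of T arrays; histories[t][i] is H_t for subject i
    actions      : (n, T) int array, A_{it} in {0, ..., k_t - 1}
    rewards      : (n, T) float array, Y_{it}
    propensities : (n, T) float array, pi_t(A_{it} | H_{it})
    scores       : list of T callables; scores[t](H) -> (n, k_t) score matrix
    """
    n, T = actions.shape
    weights = np.ones(n)
    for t in range(T):
        pred_t = np.argmax(scores[t](histories[t]), axis=1)  # pred(f_t(H_t))
        weights *= (actions[:, t] == pred_t) / propensities[:, t]
    return np.mean(weights * rewards.sum(axis=1))            # P_n [ ... ]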
https://arxiv.org/abs/2505.17285v1
$\frac{\sum_{j=1}^{T}Y_j}{\prod_{t=1}^{T}\pi_t(A_t\mid H_t)}$. An ideal estimator of $f^*$ would be the maximizer of $\widehat V(f)$. However, direct optimization of $\widehat V(f)$ is computationally challenging due to the discontinuity of $\psi_{dis}$, which arises from the pred operator and the indicator function (Tewari and Bartlett, 2007; Bhattacharyya et al., 2018). In both machine...
https://arxiv.org/abs/2505.17285v1
$\widehat V_\psi$ can lead to sub-optimal DTRs, even if the latter efficiently estimates $V_\psi$. In the multiclass classification literature, surrogate frameworks often involve the sum-zero constraint, requiring the class score functions to sum to zero (Tewari and Bartlett, 2007; Zhang, 2004). However, in this paper, we do not consider a...
https://arxiv.org/abs/2505.17285v1
$T=2$. Then we establish the inconsistency of concave PERM losses under smoothness conditions. Finally, we present examples of non-PERM concave surrogates, demonstrating that they may also fail to be Fisher consistent.
Example 1. (Exponential loss) Let us consider the concave surrogate loss $\psi(x,y;a_1,a_2) = -\sum_{i\in[k_1]}\sum_{j\in[...}$
https://arxiv.org/abs/2505.17285v1
, 2016; Fathony et al., 2016). We used the class of PERM losses here because demonstrating Fisher inconsistency for all concave surrogates, even under smoothness conditions, is challenging: the class of all concave surrogates is too broad. As demonstrated by the insightful work of Wang and Scott (2023b), the P...
https://arxiv.org/abs/2505.17285v1
Theorem 3.1 below uses $-\eta$ and $-\psi$ instead of $\eta$ and $\psi$, respectively.
Theorem 3.1. Suppose $T=2$, $k_1,k_2\ge 2$, and $\psi$ is an above-bounded concave PERM loss such that $\cap_{i=1}^{k_1}\cap_{j=1}^{k_2}\mathrm{int}(\mathrm{dom}(-\psi(\cdot;i,j)))\ne\emptyset$. Further suppose $-\eta$ is proper, closed, and strictly convex, where $\eta$ is the template of $\psi$. Also, we assume $\eta$ is thrice continuously d...
https://arxiv.org/abs/2505.17285v1
losses coincide with Laha et al. (2024a)’s margin-based losses. Therefore, while Laha et al. (2024a) developed tools to prove the inconsistency of margin-based concave surrogates in the binary-treatment case, those techniques fail to apply here. The loss of permutation equivariance substantially alters the structure of...
https://arxiv.org/abs/2505.17285v1
on the score function when the surrogate risk is minimized (Zhang, 2004). The corresponding surrogate is given by
$$\phi(x;a_1) = \sum_{i\in[k_1]:\,i\ne a_1}\tilde\phi(-x_i)\quad\text{for all } x\in\mathbb R^{k_1}\text{ and } a_1\in[k_1],\qquad (3.13)$$
where $\tilde\phi$ is a univariate function. An example is the multi-category support vector machine (SVM) of Lee et al. (2004), where $\tilde\phi$ is the convex los...
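A sketch of (3.13) in code. The truncated text does not show the paper's $\tilde\phi$, so as an assumption we take the hinge-type choice $\tilde\phi(u)=\max(0,\,1/(k-1)-u)$, which recovers the multi-category SVM loss of Lee et al. (2004), $\sum_{i\ne a_1}\max(0,\,x_i+1/(k-1))$:

import numpy as np

def lee_mcsvm_loss(x: np.ndarray, a1: int) -> float:
    """phi(x; a1) = sum_{i != a1} phi_tilde(-x_i), with the hinge-type
    phi_tilde(u) = max(0, 1/(k-1) - u); penalizes large scores on the
    classes other than a1."""
    k = x.size
    mask = np.arange(k) != a1
    return np.maximum(0.0, x[mask] + 1.0 / (k - 1)).sum()

x = np.array([1.0, -0.5, -0.5])   # sum-zero score vector, k = 3
print(lee_mcsvm_loss(x, a1=0))    # 0.0: the assigned class dominates
print(lee_mcsvm_loss(x, a1=1))    # 1.5: class 0 still carries a high score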
https://arxiv.org/abs/2505.17285v1
$A_2=j$] for $i\in[3]$ and $j\in[2]$. □
To conclude, our results and examples in this section do not show much promise for concave surrogates in the DTR classification problem. In fact, it may be that convexifying simultaneous direct search is fundamentally infeasible without sacrificing Fisher consistency. Thus, the comput...
https://arxiv.org/abs/2505.17285v1
complicate the classification (Liu et al., 2024b; Wang et al., 2018). Similar surrogates also appear in the multiclass classification literature (Liu and Shen, 2006; Wu and Liu, 2007).
4.0.1. Single-stage setting (T = 1)
The necessary and sufficient condition of Fisher consistency in the single-stage case is analogous...
https://arxiv.org/abs/2505.17285v1
N2 reduces to $\mathrm{conv}(\mathcal V_\phi)\supset\mathcal S^{k-1}$. The appearance of $\mathcal S^{k-1}$ may seem unexpected, but as shown in the proof of Lemma 4.2, it is simply the closed convex hull of the image set of $\phi_{dis}$. Thus, Condition N2 requires the image set of $\phi$ to contain, in convex hull, the image set of the original discontinuous surrogate $\phi_{dis}$. If a surr...
https://arxiv.org/abs/2505.17285v1
adjusted accordingly. A key step in proving F2 is to show that for any $t\ge 2$ and $x\in\mathbb R^{k_t}_{\ge 0}$, $\Psi^*_t(x) = G_t(\max(x))$ for some univariate function $G_t$. The linearity of $G_t$ is then derived from the positive homogeneity of $\Psi^*_t$. While Theorems 4.1 and 4.2 do not explicitly refer to the original loss, they can be interpreted as su...
https://arxiv.org/abs/2505.17285v1
so under a scale transformation. However, a location transformation by a positive constant violates Condition N2, although it preserves Condition N1. The proof of Lemma 4.3 is provided in Supplement S8.3.
Lemma 4.3. Suppose $\phi:\mathbb R^k\times[k]\to\mathbb R_{\ge 0}$ satisfies Conditions N1 and N2. Then, for any $a,b>0$, the scaled surrogate defined ...
https://arxiv.org/abs/2505.17285v1
then $af\in\mathcal L$ for any $a>0$. Assume there exists $f^*\in\mathcal L$ such that, for all $t\in[T]$, the set $\operatorname{argmax}(f^*_t(H_t))$ is a singleton and agrees with $d^*_t(h_t)$ with $P$-probability one. Let $\psi$ be as in Proposition 1, and suppose it also satisfies Condition N3. Let $C^*$ be as in Proposition 1. Then the following assertion holds for any $f\in\mathcal L$: $\sup_{f\in...}$
https://arxiv.org/abs/2505.17285v1
for $\phi$ in specific cases. In particular, Supplement S8.5.1 provides explicit formulas for $\phi$ when $k=3$ and $K$ is either the standard logistic or standard Gumbel density. Lemma 4.6, proved in Supplement S8.5.2, establishes that the kernel-based surrogate $\phi$ satisfies the conditions in Proposition 1 and Lemma 4.4.
Lemma 4.6. The...
https://arxiv.org/abs/2505.17285v1
al., 2022; Xu et al., 2014), covariate-adjusted Youden index estimation, and one-bit compressed sensing (Feng et al., 2022). In the context of dynamic treatment regimes, the smoothed 0-1 loss appears in Laha et al. (2024a) and Xue et al. (2022). In binary classification, where Fisher-consistent convex surrogates are...
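One common construction of a smoothed 0-1 loss (our illustration; the paper's exact definition in (4.9) is not reproduced in this excerpt) replaces the indicator 1[m > 0] by the CDF of a smooth density K evaluated at m / h for a bandwidth h > 0:

import numpy as np

def smoothed_indicator(margin: np.ndarray, h: float = 0.5) -> np.ndarray:
    """Logistic-CDF smoothing of 1[margin > 0]; h -> 0 recovers the indicator."""
    return 1.0 / (1.0 + np.exp(-margin / h))

m = np.linspace(-2, 2, 9)
for h in (1.0, 0.25, 0.05):
    print(h, np.round(smoothed_indicator(m, h), 3))
# Smaller h tracks the discontinuous indicator more closely but makes the
# gradient spikier: the usual smoothness/approximation trade-off.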
https://arxiv.org/abs/2505.17285v1
except their bumps appear in the second and fourth quadrants, respectively, since they are maximized at $(-\infty,\infty)$ and $(\infty,-\infty)$.
Remark 2. In Section 2, we mentioned the angle-based framework, an alternative surrogate framework based on an angle-based pred function. Similar to the relative-margin-based surrogates, the angle...
https://arxiv.org/abs/2505.17285v1
$\mathrm{tran}(g_t) = (0,-g_t)$. Also, we let $\mathrm{tran}(g) = (\mathrm{tran}(g_1),\dots,\mathrm{tran}(g_T))$. Thus, if $g\in\mathcal W$, then $\mathrm{tran}(g)\in\mathcal F$. Then it is straightforward to verify that (5.1) is equivalent to maximizing $\widehat V_\psi(\mathrm{tran}(g))$ over $g\in\mathcal W$. In particular, if $\widehat f$ solves (5.1) and $\widehat g\in\mathcal W$ maximizes $\widehat V_\psi(\mathrm{tran}(g))$, then they are related by $\widehat f_t = \mathrm{tran}(\widehat g_t)$...
https://arxiv.org/abs/2505.17285v1
where $T=1$, $k_1=3$, $n=7$, and $\mathcal H_1=\mathbb R$, using the dataset in Table 2. We assume $\pi(A_1=i\mid H_1)=1/3$ for all $i\in[3]$. In this illustration, we use the product-based surrogate in (4.11) with $\tau(x) = 1+\tanh(x)$, though the same challenges arise for all Fisher consistent surrogates cons...
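The exact form of (4.11) is not reproduced in this excerpt; the sketch below is only a plausible stand-in of ours, in which the indicator 1[a = argmax f(H)] is replaced by a product of smoothed pairwise comparisons tau(f_a - f_i)/2 with tau(x) = 1 + tanh(x), so each factor lies in (0, 1) and the product approaches the indicator as the scores separate:

import numpy as np

def tau(x: np.ndarray) -> np.ndarray:
    return 1.0 + np.tanh(x)

def product_surrogate(scores: np.ndarray, a: int) -> float:
    """Smooth stand-in for 1[a = argmax(scores)] (hypothetical form)."""
    diffs = scores[a] - np.delete(scores, a)   # relative margins of action a
    return np.prod(tau(diffs) / 2.0)

f = np.array([3.0, 0.0, -1.0])
print(product_surrogate(f, 0))   # close to 1: action 0 is clearly maximal
print(product_surrogate(f, 1))   # close to 0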
https://arxiv.org/abs/2505.17285v1
descent iterates can move very slowly, even when approaching the optimal region. For example, see Path 6 in Figure 3a, traced by iterates initialized at (5, –5) in our toy data example. This issue can be partially addressed by momentum-based gradient descent techniques, such as Adaptive Moment Estimation or ADAM (see F...
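Since ADAM is the remedy invoked here for these slow plateaus, a minimal self-contained version of its update may help (the standard Kingma-Ba algorithm; the plateau-shaped objective in the usage example is our toy choice, not the paper's):

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM step for minimization; returns updated (theta, m, v)."""
    m = b1 * m + (1 - b1) * grad            # first moment (momentum)
    v = b2 * v + (1 - b2) * grad ** 2       # second moment (scale)
    m_hat = m / (1 - b1 ** t)               # bias corrections
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage: minimize the plateau-shaped g(x) = -exp(-x^2) from a flat region.
theta, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    grad = 2 * theta * np.exp(-theta ** 2)  # g'(x); nearly zero at x = 5
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)  # near 0: the rescaled step escapes the vanishing gradient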
https://arxiv.org/abs/2505.17285v1
variance of the randomly initialized parameter $\theta^{(0)}$ appropriately. Otherwise, the variance of the gradient may explode (He et al., 2015). Several initialization methods account for this, including He initialization (He et al., 2015) and Xavier initialization (Glorot and Bengio, 2010), both widely used to stabilize the v...
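The two schemes in one place (standard formulas; our compact sketch for a dense layer with fan_in inputs and fan_out outputs):

import numpy as np

rng = np.random.default_rng(0)

def he_normal(fan_in: int, fan_out: int) -> np.ndarray:
    """He et al. (2015): Var(W) = 2 / fan_in, suited to ReLU-type activations."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

def xavier_normal(fan_in: int, fan_out: int) -> np.ndarray:
    """Glorot & Bengio (2010): Var(W) = 2 / (fan_in + fan_out)."""
    return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_out, fan_in))

W = he_normal(256, 128)
print(W.std())   # approx sqrt(2/256) ~= 0.088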
https://arxiv.org/abs/2505.17285v1
frequency, is typically chosen from $\{2,3,4\}$. If the previously saved best value of the EMA is larger than $\mathrm{EMA}^{(r)}_{val}$, we update it to $\mathrm{EMA}^{(r)}_{val}$, and set the $r$-th iterate $\theta^{(r)}$ as the best iterate $\theta_{best}$ up to iteration $r$. Otherwise, the best EMA and $\theta_{best}$ remain unchanged. If the EMA fails to improve beyond a small thresho...
https://arxiv.org/abs/2505.17285v1
then
Phase 4 – Reinitialization
procedure ReinitializeModel
    Reinitialize θ(r) using He or Xavier initialization.
    Reset ADAM moments: mom1_r ← 0, mom2_r ← 0.
    Reset learning rate: lr ← lr0.
    Reset counters: R2 ← 0, R1 ← 0.
end procedure
end if
end if
end procedure
else
EMA...
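A compact sketch of the monitoring logic in the listing above, reconstructed from the text (names such as freq, patience, and the state dictionary are our illustrative choices, not the paper's exact interface):

def update_monitor(state, r, val_loss, freq=3, alpha=0.9, tol=1e-4, patience=10):
    """Track an EMA of the validation objective every `freq` iterations,
    remember the best iterate, and signal reinitialization on a stall.
    state: dict with keys ema, best_ema, theta_current, theta_best, stall_count."""
    if r % freq != 0:                            # evaluate only every `freq` steps
        return state, False
    ema = state["ema"]
    ema = val_loss if ema is None else alpha * ema + (1 - alpha) * val_loss
    state["ema"] = ema
    if state["best_ema"] is None or ema < state["best_ema"] - tol:
        state["best_ema"] = ema                  # EMA improved: save best iterate
        state["theta_best"] = state["theta_current"].copy()
        state["stall_count"] = 0
    else:
        state["stall_count"] += 1                # EMA failed to improve
    reinitialize = state["stall_count"] >= patience   # trigger Phase 4
    if reinitialize:
        state["stall_count"] = 0                 # counters reset on reinitialization
    return state, reinitialize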
https://arxiv.org/abs/2505.17285v1
the other hand, $-\widehat V_{\psi,rel}$ is unlikely to be strongly convex near the optimum since we expect $\widehat V_{\psi,rel}$ to plateau as in the toy example. This lack of strong convexity prevents the application of traditional results, such as those in Bottou et al. (2018), on the convergence of gradient descent to a local optimum. Similar issues w...
https://arxiv.org/abs/2505.17285v1
approximation error) so that $\phi(x;j)\le C_a(\max(x)-x_j)^{-2}$ for all $x\in\mathbb R^k$ and $j\in[k_t]$. We assume that the $\phi_t$'s satisfy Condition N3.s and are symmetric. The symmetry assumption is used because symmetric surrogates yield sharper upper bounds on the approximation error. We can still bound the approximation error without the symmetry...
https://arxiv.org/abs/2505.17285v1
blip functions (Schulte et al., 2014).
Theorem 6.1. Suppose $\psi$ is as in (4.1) with non-negative $\phi_t$'s, and each $\phi_t$ satisfies Conditions N1, N2, and N3.s. Moreover, for each $t\in[T]$, we assume $\phi_t$ to be relative-margin-based (Definition 4.1) and symmetric. Further suppose $P$ satisfies Assumptions I-V and Assumption 1. Let $\tilde g = (\tilde{...}$
https://arxiv.org/abs/2505.17285v1
(cf. Schmidt-Hieber, 2020). Here, sparsity refers to the number of non-zero weights in the network. We denote this class by $\mathcal F(N_n, W_n, s_n)$. From Lemma 5 of Schmidt-Hieber (2020) (see Supplementary Fact S11.3), it follows that $\mathcal U_{tn} = \mathcal F(N_n, W_n, s_n)$ satisfies (6.6) with $\rho_n = s_n+1$ and $I_n = (N_n+1)(s_n+1)2^{N_n+4}$. Other important e...
https://arxiv.org/abs/2505.17285v1
the function classes.
Example of surrogates satisfying the conditions of Theorem 6.2
Result 5 shows that kernel-based surrogates satisfy Condition 1 under mild conditions.
Result 5. Suppose $\phi:\mathbb R^k\times[k]\to\mathbb R$ is a kernel-based surrogate as defined in (4.9), where the kernel $K$ is bounded and continuously differentiable, and $K'$ is b...
https://arxiv.org/abs/2505.17285v1
and $C_{Hol}>0$ such that, for all $t\in[T]$, for any fixed $h_t\in\mathcal H_{tc}$, and any $i\in[k_t]$, the function $\mathrm{blip}_{ti}(\cdot,h_t)$ is in $\mathcal C^\beta(\mathcal H_{ts}, C_{Hol})$. Here, the blip function $\mathrm{blip}_{tj}$ is as defined in (6.4). Recall that we denote the dimension of $\mathcal H_T$ by $q$. Corollary 1 below implies that under Assumption 2, the regret decays at the rate $O_p\big(n^{-\frac{1+\alpha}{2+\alpha+q/\beta}}\big)$ ...
https://arxiv.org/abs/2505.17285v1
than SDSS. Therefore, in simpler settings, where all methods are expected to incur low approximation and estimation error, SDSS may have no clear advantage due to its higher optimization error. However, SDSS can have an advantage when the approximation and estimation errors are expected to be substantial for all meth...
https://arxiv.org/abs/2505.17285v1
rely on a model-based quantity called pseudo-outcomes. Model misspecification for the pseudo-outcomes can result in higher estimation and approximation error, which also propagates through stages (Murphy, 2005).
(a) Plot of $\widehat V_{\psi,rel}(x,y)$ (plotted on the Z axis) vs $x$ and $y$; (b) Conto...
https://arxiv.org/abs/2505.17285v1
in Section 5.1. The plots display the paths traced by iterates initiated from 6 different initialization points for (a) vanilla gradient descent, (b) ADAM (without minibatching), (c) SGD, and (d) ADAM with SGD. The white circle and the solid black rectangle mark the starting point and the end point of the paths, respect...
https://arxiv.org/abs/2505.17285v1
and Wang, 2017). All three methods, SDSS, Q-learning, and ACWL, are considered for nonlinear policies. The non-linear DTR for SDSS and Q-learning is implemented using a deep neural network. For ACWL, we use its default policy class based on classification and regression trees (CART), which is nonlinear. Since currently A...
https://arxiv.org/abs/2505.17285v1
the value function of the estimated policy remains comparable to the above-mentioned best-performing $\tau$. However, ensembling slightly increases the variance of $V(\widehat d)$ for nonlinear policy classes, likely due to the higher parameter burden in this case. Figure 8 in Supplement 7.1 illustrates this under Scheme 2. Benchmark...
https://arxiv.org/abs/2505.17285v1
less sensitive to higher variance in $Y$ compared to the other methods, which use the squared error loss for modeling purposes. In a related context, bounded non-convex surrogate losses have been empirically demonstrated to offer increased robustness to label noise in classification tasks (Natarajan et al., 2013; Akhtar ...
https://arxiv.org/abs/2505.17285v1
ments at two critical time points: admission (Stage 1) and four hours post-admission (Stage 2). The data were cleaned and pre-processed according to Johnson and Pollard (2018). We discretized the rate of IV fluid transfusion (in mL/kg body weight) to apply our methods, which is a common practice in reinforcement learnin...
https://arxiv.org/abs/2505.17285v1
$_1(H_1,A_1) + \widehat Q_2(H_2,d_2(H_2)) + \mathbb P_n\Bigg[\prod_{i=1}^{2}\frac{1[A_i=d_i(H_i)]}{\widehat\pi_i(A_i\mid H_i)}\Big(Y_2-\widehat Q_2(H_2,A_2)\Big)\Bigg],\qquad (7.1)$
where $\widehat\pi_1(A_1\mid H_1)$ and $\widehat\pi_2(A_2\mid H_2)$ are the propensity score estimators, $\mathbb P_n$ is the empirical distribution corresponding to the test data, and for any DTR $d$ and $i,j\in[3]$, the functions $\widehat Q_2(H_2,i)$ and $\widehat Q^d_1(H_1,j)$ are the estimators of $Q_2(H_2,$...
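A sketch of how the doubly robust value estimate in (7.1) might be computed on test data, under our simplifying assumption that the fitted Q-functions and propensities are supplied as pre-evaluated arrays (the function and argument names are ours):

import numpy as np

def dr_value(q1d, q2_at_d2, ind_d, pi1, pi2, y2, q2_at_A2):
    """
    q1d      : (n,) first-stage fitted Q term (the truncated leading term in (7.1))
    q2_at_d2 : (n,) hatQ2(H2, d2(H2))
    ind_d    : (n, 2) indicators 1[A_i = d_i(H_i)] for stages 1 and 2
    pi1, pi2 : (n,) estimated propensities at the observed actions
    y2       : (n,) stage-2 outcomes
    q2_at_A2 : (n,) hatQ2(H2, A2) at the observed action
    """
    # IPW-weighted residual correction; it vanishes when hatQ2 fits perfectly.
    augmentation = (ind_d[:, 0] * ind_d[:, 1] / (pi1 * pi2)) * (y2 - q2_at_A2)
    return np.mean(q1d + q2_at_d2 + augmentation)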
https://arxiv.org/abs/2505.17285v1
methods for binary treatments, with extensions to specialized settings such as constrained and censored settings (Zhao et al., 2015; Jiang et al., 2019; Liu et al., 2018; Zhao et al., 2020; Liu et al., 2024b). The key differences between the binary-treatment and the general DTR setting have already been discussed ...
https://arxiv.org/abs/2505.17285v1
consistency. Building on the initial findings of Laha et al. (2024a), the present paper shifts focus toward a deeper theoretical investigation of Fisher consistency by developing precise necessary and sufficient conditions. In Section 4, we show that for separable surrogates, DTR Fisher consistency indeed imposes stric...
https://arxiv.org/abs/2505.17285v1
Discussion
In this paper, we have explored the limitations and possibilities of simultaneous direct search for finding the optimal DTR. When both the number of treatments and the number of stages are arbitrary, simultaneous direct search reduces to an optimization problem with a complex, discontinuous loss. We begin by proving som...
https://arxiv.org/abs/2505.17285v1
T: horizon, or the total number of stages
(O_t, A_t, Y_t): covariate, treatment, and outcome triplet at stage t
H_t: the history at stage t
D_i: trajectory of the i-th patient, defined in (2.1)
k_t: total number of treatments at stage t
K: the sum of all k_t's, i.e., K = Σ_{t∈[T]} k_t
q: dimension of H_T
F: the class of all scores f
P0: class of ...
https://arxiv.org/abs/2505.17285v1
by a constant $c>0$, $\inf_{x\in\mathbb R^{k_t}}\phi_t(x;i)$ also scales by $c$, implying that $J_t$ becomes $cJ_t$. From (4.7), however, it follows that such scaling does not affect the positive fraction $\chi_{\phi_t}$. Consequently, $C^*$ scales linearly with $\phi_t$. H...
https://arxiv.org/abs/2505.17285v1
Therefore, the value functions resulting from these surrogates have suboptimal plateau regions and the vanishing gradient problem persists.
S1.4. More details on He and Xavier initialization
When the relative class scores $g_{ti}(\cdot;\theta_{ti})$ are linear, He initialization samples the elements of the initial parameter, i.e., $\theta^{(0)}$...
https://arxiv.org/abs/2505.17285v1
base excess (BE), serum bicarbonate (HCO3), arterial lactate, Sequential Organ Failure Assessment (SOFA) score, Systemic Inflammatory Response Syndrome (SIRS) score, shock index (HR/SBP), cumulative fluid balance, peripheral capillary oxygen saturation (SpO2), blood urea nitrogen (BUN), serum creatinine, serum glutamic...
https://arxiv.org/abs/2505.17285v1
Table 8: Neural network parameters for non-linear SDSS that varied across settings in our data simulations. Here hidden dim. corresponds to the number of units per hidden layer. Columns: Parameter; Setting 1; Setting 2; Setting 3; Setting 4; Setting 5; Setting...
https://arxiv.org/abs/2505.17285v1
hidden dimension corresponds to the number of units per hidden layer.
Parameter: Value/Configuration
Initial Learning Rate (lr0): 0.007
Nepoch: 120
Train/Validation Split: 80/20
Network Architecture: 3-layer network
Activation Function: ELU
Hidden Dimensions (Stage 1): 40
Hidden Dimensions (Stage 2): 40
Dropout Rate: 0.4
S2.2.2...
https://arxiv.org/abs/2505.17285v1
$A_2=a_2$]. Here Lemma S4.1 applies because $E[Y_1+Y_2\mid H_2, A_2=a_2]>0$ by Assumption V, which implies the $c_{a_2}(H_2)$'s are positive real numbers. Since $Q^*_2(H_2,A_2) = E[Y_1+Y_2\mid H_2,A_2]$ and $\tilde d_2 = \mathrm{pred}(\tilde f_2)$, it follows that $\tilde d_2(H_2)\in\operatorname{argmax}_{a_2\in[k_2]}Q^*_2(H_2,a_2)$. Lemma S4.1 also implies that $\sup_{f_2}V_\psi(f)$ equals
$$-E\sum_{i\in[k_1]}e^{f_{1i}(H_1)-f_{1A_1}(H_1)}\sum_{j\in[k_2...}$$
https://arxiv.org/abs/2505.17285v1
$p_j(H_2)\in(0,1)$ for each $j\in[k_2]$. Hence, the vector $p(h_2)\in\mathcal S^{k_2-1}$. Fact S4.1 implies that $\operatorname{argmax}(\tilde f_2(h_2))\subset\operatorname{argmax}(p(h_2)) = \operatorname{argmax}_{a_2\in[k_2]}E[Y_1+Y_2\mid h_2,a_2]$. Therefore, $\tilde d_2(H_2) = \mathrm{pred}(\tilde f_2(H_2))\in\operatorname{argmax}_{a_2\in[k_2]}E[Y_1+Y_2\mid H_2,a_2]$. Also, Fact S4.1 implies that
$$\sup_{y\in\mathbb R^{k_2}}\Bigg(-\sum_{a_2\in[k_2]}\Big(e^{-y_{a_2}}+\sum_{j\in[k_2]:j\ne a_2}e^{y_j}\Big)\frac{E[Y_1+Y_2\mid h_2,a_2]}{\pi_1(a_1\mid h_1)}\Bigg) = ...$$
https://arxiv.org/abs/2505.17285v1
can show that for a fixed $h_2=(h_1,a_1,y_1,o_2)$, whenever $\tilde f_2(h_2)$ exists,
$$\mathrm{pred}(\tilde f_2(h_2))\in\operatorname{argmax}_{a_2\in[k_2]}E\Big[\frac{Y_1+Y_2}{\pi_1(A_1\mid H_1)}\,\Big|\,H_2=h_2,A_2=a_2\Big] = \operatorname{argmax}_{a_2\in[k_2]}\frac{1}{\pi_1(a_1\mid h_1)}\big(y_1+E[Y_2\mid H_2=h_2,A_2=a_2]\big) = \operatorname{argmax}_{a_2\in[k_2]}E[Y_2\mid H_2=h_2,A_2=a_2],$$
where we used the fact that $\pi_1(a_1\mid h_1)>0$ (by Assumption I). The proof follows noting $\tilde d_t(H_t) = \mathrm{pr...}$
https://arxiv.org/abs/2505.17285v1
$k_1=3$ and $k_2=2$, Lemma S4.3 implies that
$$\inf_{y\in\mathbb R^{k_2}:\sum_{j\in[k_2]}y_j=0}\ \sum_{j\in[k_2]}\big(1-p_j(H_2)\big)\sum_{i\in[k_1-1]}\max\big(v_i(H_1;A_1),y_j\big) = \begin{cases}v_1(H_1;A_1)+v_2(H_1;A_1) & \text{if } \min(v(H_1;A_1))\ge 0,\\ \kappa(v(H_1;A_1),p(H_2)) & \text{otherwise,}\end{cases} = \begin{cases}\sum_{i\in[k_1]:i\ne A_1}f_{1i}(H_1) & \text{if } f_{1i}(H_1)\ge 0 \text{ for } i\ne A_1,\\ \kappa(v(H_1;A_1),p(H_2)) & \text{otherwise,}\end{cases}$$
where the last step follows because $\min(v(H_1;A_1))\ge 0$ if an...
https://arxiv.org/abs/2505.17285v1
. The point where the vertical and horizontal lines meet is the point where the minimum occurs. The value of the function decreases as the color turns from green to red. The plot for Setting 1 is skipped because it is similar to that of Setting 2.
Additional lemmas for the proofs of Section S4.0.4
Lemma S4.3. Let $v\in\mathbb R^{k_1-1}$ and...
https://arxiv.org/abs/2505.17285v1
(S4.15) Hence, $\inf_{x\in[-c,c]}\tilde\phi(x) = \min\big(h_c(p_1\wedge p_2),\,-2|v_2(p_2-p_1)|\big)$. Now note that
$$h_c(p_1\wedge p_2) = 2c(p_1\wedge p_2) + (p_1\vee p_2)(v_2-c) = -c|p_1-p_2| + c(p_1\wedge p_2) + v_2(p_1\vee p_2).\qquad (S4.16)$$
Therefore, $h_c(p_1\wedge p_2)\gtrless -2|v_2(p_2-p_1)|$ if and only if $-c|p_1-p_2| + c(p_1\wedge p_2) + v_2(p_1\vee p_2)\gtrless 2v_2|p_2-p_1|$, where we used the fact that $v_2<0$. The last display is equivalent to (...
https://arxiv.org/abs/2505.17285v1
all elements of $y'$ are greater than or equal to $v_1$. Thus we have shown that $\inf_{y\in\mathcal C_1}\tilde\phi(y)\le\inf_{y\in\mathcal C}\tilde\phi(y)$, which, combined with $\mathcal C_1\subset\mathcal C$, implies $\inf_{y\in\mathcal C_1}\tilde\phi(y) = \inf_{y\in\mathcal C}\tilde\phi(y)$.
S5. Proof of Theorem 3.1
S5.1. Proof preparation
Before proceeding with the proof of Theorem 3.1, we first gather some key facts that will be utilized thro...
https://arxiv.org/abs/2505.17285v1
equivariant w.r.t. $x$ and $i$ when $w$ and $j$ are fixed, and vice versa. Using this result, we can prove that $0_{k_1+k_2-2}\in\mathrm{int}(\mathrm{dom}(-\eta))$ under the setup of Theorem 3.1. Lemma S5.1 establishes this fact. The proof of Lemma S5.1 can be found in Section S5.4.1.
Lemma S5.1. Suppose $\psi$ is a PERM loss with template $\eta$, which is concave and boun...
https://arxiv.org/abs/2505.17285v1
value function $V(d_1,d_2) = p_{d_1,d_2(d_1)}$. Thus, the contribution of $P\in\mathcal P^{k_1,k_2}$ to the value function is reflected only via $(p_{ij})$. For each $A_1=i$, any number in the set $\operatorname{argmax}_{j\in[k_2]}p_{ij}$ is a candidate for the optimal second stage treatment assignment $d^*_2(i)$. Also, since $H_1=\emptyset$, it follows that $d^*_1$ can be any number in the set $\operatorname{argmax}_i$...
https://arxiv.org/abs/2505.17285v1
$=p_{ij}$ for any $p\equiv(p_{ij})_{i\in[k_1],j\in[k_2]}$ with positive $p_{ij}$ values, it follows that for any $(p_{11},p_{12},p_{21},p_{22})\in\mathbb R^4_{>0}$, there exists a $P\in\mathcal P^{k_1,k_2}_b$ corresponding to these values. Since we will only consider $P\in\mathcal P^{k_1,k_2}_b$ from now on, by an abuse of notation, we will denote $(p_{11},p_{12},p_{21},p_{22})$ by $p$. The surrogate loss reduces to $V_\psi(x,y_1,$...
https://arxiv.org/abs/2505.17285v1
by (3.4), for any $u\in\mathbb R^{k_1-1}$ and $w\in\mathbb R^{k_2-1}$,
$$\eta(u,\,-w_j,\,w_1-w_j,\dots,w_{k_2-1}-w_j) = \eta(u,\,w_1-w_j,\dots,-w_j,\dots,w_{k_2-1}-w_j)$$
and
$$\eta(-u_i,\,u_1-u_i,\dots,u_{k_1-1}-u_i,\,w) = \eta(u_1-u_i,\dots,-u_i,\dots,u_{k_1-1}-u_i,\,w)$$
for any $i\in[k_1-1]$, $j\in[k_2-1]$, $u\in\mathbb R^{k_1-1}$ and $w\in\mathbb R^{k_2-1}$. In the above, terms such as $u_i-u_i$ and $w_j-w_j$ are omitted from...
https://arxiv.org/abs/2505.17285v1
$\dots,v^*_{k_1}(p))$. The following lemma, which is proved in Section S5.4.4, shows that the maximizer lies in the lower dimensional space
$$\mathcal C = \big\{(u,v_1,\dots,v_{k_1})\in\mathbb R^{k_1-1}\times\mathbb R^{k_1(k_2-1)}:\ u=x1_{k_1-1},\ v_1=y1_{k_2-1},\ v_2=\dots=v_{k_1}=z1_{k_2-1},\ \text{where } x,y,z\in\mathbb R\big\}.\qquad (S5.16)$$
Lemma S5.4. For $p\in\mathbb R^4_{>0}$, let $(u^*(p),v^*_1(p)$...
https://arxiv.org/abs/2505.17285v1
we will show that, if $\psi$ is Fisher consistent, then $\vartheta$ satisfies some properties, which violate the abovementioned restrictions. This contradiction would imply that $\psi$ cannot be Fisher consistent, thus completing the proof. It is hard to find a closed form expression for $\Lambda^*(p)$ or analyze its behavior for an arbitrary $p$. Es...
https://arxiv.org/abs/2505.17285v1
$p\in\mathbb R^4_{>0}$. Lemma S5.7 is proved in Section S5.4.7.
Lemma S5.7. Under the setup of Theorem 3.1, there exists an open neighborhood $U_0$ of $0$ so that
1. $U_0^2\subset\mathrm{int}(\mathrm{dom}(\vartheta(\cdot;i,j)))$ for all $i,j\in[2]$.
2. $U_0^3\subset\mathrm{int}(\mathrm{dom}(\Phi(\cdot;p)))$ for all $p\in\mathbb R^4_{>0}$.
3. The $\vartheta(\cdot;i,j)$'s are thrice continuously differentiable on $U_0^2$ for all $i,j\in[2]$.
4. $\Phi(\cdot;p)$ is thr...
https://arxiv.org/abs/2505.17285v1
$\frac{\partial}{\partial x} - p_{21}\frac{\partial\vartheta(0,z^*(p);2,1)}{\partial x} - p_{22}\frac{\partial\vartheta(0,z^*(p);2,2)}{\partial x}$ (S5.24)
for all $p\in B(p_0,\delta)$, provided $\delta$ is smaller than the $\delta$ in Lemma S5.9. The inequalities (S5.22) and (S5.23) entail a restriction on $\vartheta$ and $\Lambda^*(p)$.
S5.3.4. Step 2d: implication of Fisher consistency
In this section, we will show that the Fisher consistency of $\psi$ enforces s...
https://arxiv.org/abs/2505.17285v1
$\vartheta(0,0;i,j)$'s cannot satisfy Property 1.
Lemma S5.12. Under the setup of Theorem 3.1, if $\partial\vartheta(0,0;1,1)/\partial x = \partial\vartheta(0,0;1,2)/\partial x = \partial\vartheta(0,0;2,1)/\partial x = \partial\vartheta(0,0;2,2)/\partial x = 0$, then the $\vartheta(\cdot;i,j)$'s cannot satisfy Property 1.
Proof of Lemma S5.12. Note that $\partial\Phi(0,0,0;p)/\partial x = 0$ for all $p\in\mathbb R^4_{>0}$ if $\partial\vartheta(0,0;1,1)/\partial x = \partial\vartheta(0,0;1,2)/\partial x = \partial\vartheta(0,0;2,1)/$...
https://arxiv.org/abs/2505.17285v1
where $\xi_1$ is between $0$ and $y^*(p)$. We can have a similar Taylor series expansion for all the other terms, leading to
$$-\frac{\partial\Phi(0,y^*(p),z^*(p);p)}{\partial x} = \frac{\partial\Phi(0;p)}{\partial x} + y^*(p)\Big(\Delta_1\frac{\partial^2\vartheta(0,0;1,1)}{\partial y\partial x} + \Delta_2\frac{\partial^2\vartheta(0,0;1,2)}{\partial y\partial x}\Big) + y^*(p)M_{12}(0,0) + z^*(p)N_{12}(0,0) + (1+\Delta_1)\frac{y^*(p)^2}{2}\frac{\partial^3\vartheta(0,\xi_1;1,1)}{\partial y^2\partial x} + (1+\Delta_2)\frac{y^*(p)^2}{2}\frac{\partial^3\vartheta(0,\xi...}{\ }$$
https://arxiv.org/abs/2505.17285v1
$\Delta_1$ and $\Delta_2$, (S5.26) reduces to
$$\frac{1}{k}\Big(\frac{\partial\vartheta(0,0;1,1)}{\partial x} + y^*(p_k)\frac{\partial^2\vartheta(0,0;1,1)}{\partial y\partial x} + \frac{C_1}{k}\Big) + \frac{1}{k^2}\Big(\frac{\partial\vartheta(0,0;1,2)}{\partial x} + y^*(p_k)\frac{\partial^2\vartheta(0,0;1,2)}{\partial y\partial x} + \frac{C_1}{k^2}\Big) > 0$$
for all $p_k$ of the form $(1+1/k,\,1+1/k^2,\,1,\,1)$ for all sufficiently large $k\in\mathbb N$. Multiplying both sides by $k$, we get
$$\frac{\partial\vartheta(0,0;1,1)}{\partial x} + y^*(p_k)\frac{\partial^2\vartheta(0,0;1,1)}{\partial y\partial x} + \frac{C_1}{k} + \frac{1}{k}\Big(\frac{\partial\vartheta(0,0;1,2)}{\partial x} + y^*(p_k)\frac{\partial^2\vartheta(0,0;1,2)}{\partial y\partial x}\dots\Big)$$
https://arxiv.org/abs/2505.17285v1