the MDP framework by defining transition probabilities
\[
P^{s_t,a_t}_t(s_{t+1}\in A) := P_t\big\{\Phi_t(s_t,a_t,\xi_t)\in A\big\},\quad A\in\mathcal{F}_{t+1}, \tag{4.2}
\]
where $\xi_t\sim P_t$. However, there is an essential difference between the SOC and MDP modeling. In the MDP the probability law of the process is defined by the transition kernels, and there is no explicitly defined random data process. The basic assumption of the SOC, used in Section 3, that the distribution of the random data process does not depend on states and controls, is not directly applicable in the MDP setting.

The dynamic equations for problem (4.1) are: $V_{T+1}(s_{T+1})=c_{T+1}(s_{T+1})$ and, for $t=T,\dots,1$ and $s_t\in\mathcal{S}_t$,
\[
V_t(s_t)=\inf_{a_t\in\mathcal{A}_t} \mathbb{E}_{P^{s_t,a_t}_t}\big[c_t(s_t,a_t,s_{t+1})+V_{t+1}(s_{t+1})\big], \tag{4.3}
\]
where $\mathbb{E}_{P^{s_t,a_t}_t}$ denotes the expectation with respect to the probability measure $P^{s_t,a_t}_t$ on $(\mathcal{S}_{t+1},\mathcal{F}_{t+1})$. The optimal policy is given by
\[
\pi_t(s_t)\in\arg\min_{a_t\in\mathcal{A}_t} \mathbb{E}_{P^{s_t,a_t}_t}\big[c_t(s_t,a_t,s_{t+1})+V_{t+1}(s_{t+1})\big],\quad t=1,\dots,T. \tag{4.4}
\]

As before, we assume that with every probability measure $P_t$ on $(\mathcal{S}_t,\mathcal{F}_t)$ is associated a functional $\mathcal{R}_{P_t}:\mathcal{Z}_t\to\mathbb{R}$ defined on a linear space $\mathcal{Z}_t$ of measurable functions $Z_t:\mathcal{S}_t\to\mathbb{R}$. Unless stated otherwise, we assume that $\mathcal{R}_{P_t}$ satisfies the axioms of monotonicity and translation equivariance. The counterpart of dynamic equations (4.3) (compare with dynamic equations (3.8) for the risk averse SOC) is
\[
V_t(s_t)=\inf_{a_t\in\mathcal{A}_t} \mathcal{R}_{P^{s_t,a_t}_t}\big(c_t(s_t,a_t,s_{t+1})+V_{t+1}(s_{t+1})\big). \tag{4.5}
\]
These dynamic equations correspond to the nested formulation of the respective risk averse MDP. That is, for a policy $\pi\in\Pi$, with $a_t=\pi_t(s_t)$, the kernels $P^{s_t,a_t}_t$ define the respective probability distribution on the sequences $(s_1,\dots,s_{T+1})\in\mathcal{S}_1\times\cdots\times\mathcal{S}_{T+1}$ (Ionescu Tulcea theorem, [23]). Consequently the nested functional
\[
\mathfrak{R}^\pi=\mathcal{R}_{P^{s_1,\pi_1(s_1)}_1}\circ\cdots\circ\mathcal{R}_{P^{s_T,\pi_T(s_T)}_T}
\]
is defined on the history of the process $s_1,\dots,s_{T+1}$ (compare with (2.41)). That is, for a measurable $Z:\mathcal{S}_1\times\cdots\times\mathcal{S}_{T+1}\to\mathbb{R}$, define iteratively $Z_{T+1}=Z$ and, for $t=T,\dots,1$,
\[
Z_t(s_1,\dots,s_t):=\mathcal{R}_{P^{s_t,\pi_t(s_t)}_t}\big(Z_{t+1}(s_1,\dots,s_t,\cdot)\big). \tag{4.6}
\]
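The nested recursion (4.5)-(4.6) is easy to exercise on a toy finite MDP. The sketch below takes $\mathcal{R}_P$ to be the Average Value-at-Risk $\mathrm{AV@R}_\alpha$ and is purely illustrative: the states, actions, kernels and costs are hypothetical, not from the text; for $\alpha=1$ the recursion reduces to the risk neutral dynamic equations (4.3).

```python
# Illustrative sketch of the risk averse dynamic equations (4.5) on a toy
# finite MDP, with R_P = Average Value-at-Risk at level alpha.
# All numbers (states, actions, kernels P, costs c) are hypothetical.

S = [0, 1]            # states
A = [0, 1]            # actions
T = 2                 # horizon; terminal cost c_{T+1} = 0

# P[(s, a)][s'] = transition kernel P_t^{s,a}; c[(s, a)][s'] = one-step cost
P = {(0, 0): [0.9, 0.1], (0, 1): [0.5, 0.5],
     (1, 0): [0.2, 0.8], (1, 1): [0.6, 0.4]}
c = {(0, 0): [1.0, 5.0], (0, 1): [2.0, 2.0],
     (1, 0): [4.0, 1.0], (1, 1): [3.0, 0.0]}

def avar(values, probs, alpha):
    """AV@R_alpha of a discrete cost distribution: mean of the worst
    alpha-tail; equals the plain expectation when alpha = 1."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    tail, acc = 0.0, 0.0
    for i in order:
        take = min(probs[i], alpha - acc)
        tail += take * values[i]
        acc += take
        if acc >= alpha - 1e-12:
            break
    return tail / alpha

def value_functions(alpha):
    """Backward recursion V_t(s) = min_a AV@R_alpha(c + V_{t+1})."""
    V = [0.0 for _ in S]                      # V_{T+1} = 0
    for _ in range(T):
        V = [min(avar([c[(s, a)][sp] + V[sp] for sp in S],
                      P[(s, a)], alpha) for a in A) for s in S]
    return V

V_neutral = value_functions(1.0)   # alpha = 1: risk neutral values
V_averse = value_functions(0.3)    # smaller alpha: more risk averse
```

Since $\mathrm{AV@R}_\alpha$ dominates the expectation for costs, the risk averse values are never below the risk neutral ones, which the last two lines make easy to compare.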
Eventually, for $t=1$, the value $Z_1(s_1)$ is deterministic, and $\mathfrak{R}^\pi(Z)=Z_1(s_1)$. The nested counterpart of problem (4.1) is
\[
\min_{\pi\in\Pi} \mathfrak{R}^\pi\Big[\textstyle\sum_{t=1}^T c_t(s_t,a_t,s_{t+1})+c_{T+1}(s_{T+1})\Big], \tag{4.7}
\]
with the corresponding dynamic equations given in (4.5). Let us emphasize that here the nested functional $\mathfrak{R}^\pi$ is associated with the policy $\pi\in\Pi$ and is defined on the history of the process $s_1,\dots,s_{T+1}$, with the probability of moving from $s_t\in\mathcal{S}_t$ to $s_{t+1}\in\mathcal{S}_{t+1}$ given by the transition kernel $P^{s_t,\pi_t(s_t)}_t$. For coherent risk measures $\mathcal{R}_{P_t}$, in particular for the Average Value-at-Risk, the nested formulation (4.7) and the respective dynamic equations (4.5) are equivalent to the construction introduced in Ruszczyński [13], where it was developed in terms of the dual representation of coherent risk measures. In the case $\mathcal{R}_{P_t}=\mathbb{E}_{P_t}$, problem (4.7) coincides with the risk neutral problem (4.1).

We can also consider robust risk averse functionals (compare with (3.10)). That is, let $\mathfrak{M}_t$ be a set of conditional probability measures $P^{s_t,a_t}_t$. Then consider the following (robust) extension of dynamic equations (4.5):
\[
V_t(s_t)=\inf_{a_t\in\mathcal{A}_t}\sup_{P^{s_t,a_t}_t\in\mathfrak{M}_t} \mathcal{R}_{P^{s_t,a_t}_t}\big(c_t(s_t,a_t,s_{t+1})+V_{t+1}(s_{t+1})\big). \tag{4.8}
\]
We can give two equivalent formulations of the corresponding robust risk averse problem. We can define
https://arxiv.org/abs/2505.16651v1
the following robust counterpart of the risk averse functionals
\[
\mathcal{R}^{s_t,a_t}_t(\cdot):=\sup_{P^{s_t,a_t}_t\in\mathfrak{M}_t} \mathcal{R}_{P^{s_t,a_t}_t}(\cdot),\quad t=1,\dots,T, \tag{4.9}
\]
and the respective nested functional $\bar{\mathfrak{R}}^\pi=\mathcal{R}^{s_1,\pi_1(s_1)}\circ\cdots\circ\mathcal{R}^{s_T,\pi_T(s_T)}$, defined iteratively as in (4.6). Then the corresponding robust nested formulation is obtained by replacing the nested functional in (4.7) with $\bar{\mathfrak{R}}^\pi$, that is,
\[
\min_{\pi\in\Pi} \bar{\mathfrak{R}}^\pi\Big[\textstyle\sum_{t=1}^T c_t(s_t,a_t,s_{t+1})+c_{T+1}(s_{T+1})\Big]. \tag{4.10}
\]
Alternatively, we can formulate it as a game between the controller (decision maker) and the nature (cf. [9]). That is, consider the min-max problem
\[
\min_{\pi\in\Pi}\sup_{\gamma\in\Gamma} \mathfrak{R}^{\pi,\gamma}\Big[\textstyle\sum_{t=1}^T c_t(s_t,\pi_t(s_t),s_{t+1})+c_{T+1}(s_{T+1})\Big], \tag{4.11}
\]
where $\gamma_t(s_t)\in\mathfrak{M}_t$, $t=1,\dots,T$, defines the respective policy of the nature, and $\mathfrak{R}^{\pi,\gamma}$ is the nested functional defined iteratively as in (4.6) with $P^{s_t,a_t}_t=\gamma_t(s_t)$.

Remark 4.1. Similar to the static formulation⁵ of the distributionally robust counterpart of the risk-neutral MDP [6, 10], in the considered risk-averse setting it is also possible to consider the following counterpart of (4.11):
\[
\min_{\pi\in\Pi}\sup_{P_t\in\mathcal{P}_t,\,1\le t\le T} \mathfrak{R}^{\pi,\{P_t\}_{t=1}^T}\Big[\textstyle\sum_{t=1}^T c_t(s_t,\pi_t(s_t),s_{t+1})+c_{T+1}(s_{T+1})\Big], \tag{4.12}
\]
where $\mathfrak{R}^{\pi,\{P_t\}_{t=1}^T}(\cdot)$ refers to the nested risk functional (4.7) with fixed reference kernels $\{P_t\}_{t=1}^T$. It should be noted that in (4.12) the kernels $\{P_t\}_{t=1}^T$ are chosen before the realization of the data process, and consequently the dynamic equations (4.8) do not apply to (4.12) in general. For dynamic equations to hold, there is a need for an additional rectangularity structure of the ambiguity sets $\mathcal{P}_t$ (cf. [6, 10, 24]), in which case (4.11) and its static counterpart (4.12) become equivalent.

⁵In some publications such static formulations are referred to as robust MDPs.

4.1 Infinite horizon setting

In the infinite horizon setting, for the counterpart of the robust risk averse problem (4.10), the corresponding Bellman equation can be written as
\[
V(s)=\inf_{a\in\mathcal{A}}\sup_{P^{s,a}\in\mathfrak{M}} \mathcal{R}_{P^{s,a}}\big(c(s,a,\cdot)+\beta V(\cdot)\big). \tag{4.13}
\]
Here $\beta\in(0,1)$ is the discount factor, $\mathcal{A}$ is the set of actions, $c:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to\mathbb{R}$ is the cost function, $\mathfrak{M}$ is the set of transition kernels, i.e., elements of $\mathfrak{M}$ are conditional probability measures $P^{s,a}$ on the state space $(\mathcal{S},\mathcal{F})$, and $\mathcal{R}_P$ is the risk functional. As before, we assume that $\mathcal{S}$ and $\mathcal{A}$ are Polish spaces.

For a bounded cost function, equation (4.13) has a unique solution. The proof is rather standard (and similar to the proof for the SOC discussed in Section 3.1). Consider the space $\mathbb{B}$ of bounded measurable functions $g:\mathcal{S}\to\mathbb{R}$, equipped with the sup-norm, and the Bellman operator $T:\mathbb{B}\to\mathbb{B}$,
\[
T(g)(s):=\inf_{a\in\mathcal{A}}\sup_{P^{s,a}\in\mathfrak{M}} \mathcal{R}_{P^{s,a}}\big(c(s,a,\cdot)+\beta g(\cdot)\big).
\]
The operator $T$ has the following properties. It is monotone, i.e., if $g,g'\in\mathbb{B}$ and $g(\cdot)\ge g'(\cdot)$, then $T(g)(\cdot)\ge T(g')(\cdot)$; this follows from the monotonicity property of $\mathcal{R}_P$. Also, from the translation equivariance property of $\mathcal{R}_P$ follows the constant-shift property: for any $g\in\mathbb{B}$ and $c\in\mathbb{R}$, $T(g+c)=T(g)+\beta c$. It follows that $T$ is a contraction mapping, i.e.,
\[
\|T(g)-T(g')\|_\infty\le\beta\|g-g'\|_\infty,\quad g,g'\in\mathbb{B},
\]
and hence by the Banach Fixed Point Theorem it has a unique fixed point.

References

[1] P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath. Coherent measures of risk. Mathematical Finance, 9:203-228, 1999.

[2] D.P. Bertsekas and S.E. Shreve. Stochastic Optimal Control: The Discrete Time Case. Academic Press, New York, 1978.

[3] C. Dellacherie and P.-A. Meyer. Probabilities and Potential. North-Holland Publishing Co., Amsterdam, 1988.

[4]
P. M. Esfahani and D. Kuhn. Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations. Mathematical Programming, pages 115-166, 2018.

[5] H. Federer. Curvature measures. Transactions of the American Mathematical Society, 93(3):418-491, 1959.

[6] G.N. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30:257-280, 2005.

[7] M. R. Kosorok. Introduction to Empirical Processes and Semiparametric Inference. Springer, 2008.

[8] D. Kuhn, S. Shafiee, and W. Wiesemann. Distributionally robust optimization. https://arxiv.org/abs/2411.02549, 2024.

[9] Yan Li and A. Shapiro. Rectangularity and duality of distributionally robust Markov decision processes. https://arxiv.org/abs/2308.11139, 2023.

[10] A. Nilim and L. El Ghaoui. Robust control of Markov decision processes with uncertain transition probabilities. Operations Research, 53:780-798, 2005.

[11] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.

[12] R.T. Rockafellar and S. Uryasev. Conditional value-at-risk for general loss distributions. Journal of Banking and Finance, 26(7):1443-1471, 2002.

[13] A. Ruszczyński. Risk-averse dynamic programming for Markov decision processes. Mathematical Programming, 125:235-261, 2010.

[14] A. Ruszczyński and A. Shapiro. Conditional risk mappings. Mathematics of Operations Research, 31:544-561, 2006.

[15] A. Ruszczyński and A. Shapiro. Optimization of convex risk functions. Mathematics of Operations Research, 31:433-452, 2006.

[16] H. Scarf. A min-max solution of an inventory problem. In Studies in the Mathematical Theory of Inventory and Production, pages 201-209. Stanford University Press, 1958.

[17] A. Shapiro. Interchangeability principle and dynamic equations in risk averse stochastic programming. Operations Research Letters, 45:377-381, 2017.

[18] A. Shapiro and Y. Cheng. Central limit theorem and sample complexity of stationary stochastic programs. Operations Research Letters, 49:676-681, 2021.

[19] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia, third edition, 2021.

[20] A. Shapiro and Yan Li. Distributionally robust stochastic optimal control. Operations Research Letters, 2025.

[21] A. Shapiro and A. Pichler. Conditional distributionally robust functionals. Operations Research, 72:2745-2757, 2024.

[22] M. Sion. On general minimax theorems. Pacific Journal of Mathematics, 8:171-176, 1958.

[23] C. Ionescu Tulcea. Mesures dans les espaces produits. Atti Accad. Naz. Lincei Rend., 7:208-211, 1949.

[24] W. Wiesemann, D. Kuhn, and B. Rustem. Robust Markov decision processes. Mathematics of Operations Research, 38(1):153-183, 2013.
arXiv:2505.16780v1 [math-ph] 22 May 2025

Large time and distance asymptotics of the one-dimensional impenetrable Bose gas and Painlevé IV transition

Zhi-Xuan Meng¹, Shuai-Xia Xu², and Yu-Qiu Zhao¹

¹Department of Mathematics, Sun Yat-sen University, Guangzhou 510275, China.
²Institut Franco-Chinois de l'Energie Nucléaire, Sun Yat-sen University, Guangzhou 510275, China.

Email addresses: mengzhx@mail2.sysu.edu.cn (Z.-X. Meng); xushx3@mail.sysu.edu.cn (S.-X. Xu); stszyq@mail.sysu.edu.cn (Y.-Q. Zhao)

Abstract

In the present paper, we study the time-dependent correlation function of the one-dimensional impenetrable Bose gas, which can be expressed in terms of the Fredholm determinant of a time-dependent sine kernel and the solutions of the separated NLS equations. We derive the large time and distance asymptotic expansions of this determinant and of the solutions of the separated NLS equations in both the space-like region and the time-like region of the $(x,t)$-plane. Furthermore, we observe a phase transition between the asymptotic expansions in these two different regions. The phase transition is then shown to be described by a particular solution of the Painlevé IV equation.

2010 Mathematics Subject Classification: 33E17; 34E05; 34M55; 41A60
Keywords and phrases: One-dimensional Bose gas; Nonlinear Schrödinger equations; Painlevé IV equation; Fredholm determinant; Riemann-Hilbert problems; Deift-Zhou method

Contents

1 Introduction
  1.1 Statement of results
2 RH problem for the determinant and the PIV equation
  2.1 RH problem for the determinant
  2.2 RH problem for the PIV equation
3 Asymptotic analysis in the space-like region
  3.1 Deformation of the jump contour
  3.2 Global parametrix
  3.3 Local parametrices near λ = ±1
    3.3.1 Local parametrix near λ = −1
    3.3.2 Local parametrix near λ = 1
  3.4 Local parametrix near the stationary point
  3.5 RH problem for M
  3.6 Final transformation
  3.7 Large x asymptotics in the space-like region
4 Asymptotic analysis in the time-like region
  4.1 Deformation of the jump contour
  4.2 Local parametrix near λ = −1
  4.3 Local parametrix near the stationary point
  4.4 Final transformation
  4.5 Large t asymptotics in the time-like region
5 Asymptotic analysis in the transition region
  5.1 Deformation of the jump contour
  5.2 Local parametrix near λ = −1
  5.3 RH problem for M
  5.4 Final transformation
  5.5 Painlevé IV asymptotics in the transition region
6 Asymptotic analysis of the PIV equation
  6.1 Asymptotic analysis of the PIV equation as τ → +∞
    6.1.1 Deformation of the jump contour
    6.1.2 Local parametrix near z = 0
    6.1.3 Local parametrix near z = −1
    6.1.4 Final transformation
    6.1.5 Proof of Proposition 1.7: asymptotics of the PIV as τ = e^{πi/4}s → +∞
  6.2 Asymptotic analysis of the PIV equation as τ → −∞
    6.2.1 Deformation of the jump contour
    6.2.2 Local parametrix near z = 0
    6.2.3 Local parametrix near z = 1
    6.2.4 Final transformation
    6.2.5 Proof of Proposition 1.7: asymptotics of the PIV as τ = e^{πi/4}s → −∞
A Confluent hypergeometric parametrix

1 Introduction

In this paper, we consider the asymptotics of the correlation functions of the one-dimensional impenetrable Bose gas. It is known, as shown in [32-34], that the one-dimensional Bose gas is exactly solvable. In the state of thermal equilibrium at positive temperature, the momentum distribution of the particles is given by the Fermi weight. The thermodynamics of the model at positive temperature was developed by Yang and Yang in [41]. In [22], a completely integrable system describing the temperature correlation functions was constructed by developing a general theory of integral operators. In particular, the correlation functions were expressed as Fredholm determinants of integrable operators and the corresponding Riemann-Hilbert representations were established, which allow for the calculation of the asymptotics of the correlation functions; see [7, 17, 20-23, 25, 28, 30].
In recent years, there has been renewed interest in the study of the Fredholm determinants and their finite-temperature deformations, both in the physics and in the mathematics literature. At equal time, the correlation functions of the one-dimensional Bose gas can be expressed in terms of the Fredholm determinants of the sine kernel and its finite-temperature generalization.
These determinants have been applied to characterize the bulk scaling limit distributions of particles in the noninteracting spinless fermion system, the Moshe-Neuberger-Shapiro random matrix ensemble, and non-intersecting Brownian motions [27, 35, 38]. Recently, a completely integrable system of PDEs and an integro-differential Painlevé V equation have been derived for a large class of weight functions extending the finite-temperature sine kernel [8]. The asymptotics of the finite-temperature sine kernel has been derived in several different regimes in the $(x,s)$-plane, where a third-order phase transition is observed and described by an integral involving the Hastings-McLeod solution of the second Painlevé equation [40]. It is remarkable that the finite-temperature Airy-kernel determinant has been used to characterize the solution of the Kardar-Parisi-Zhang equation with the narrow wedge initial condition [2-4, 6].

In the present paper, we consider the time-dependent correlation function of the one-dimensional impenetrable Bose gas. At zero temperature, the correlation function can be characterized by the determinant of the following time-dependent sine kernel [22]
\[
K(\lambda,\mu;x,t)=\frac{f_1(\lambda)f_2(\mu)-f_1(\mu)f_2(\lambda)}{\lambda-\mu}, \tag{1.1}
\]
where
\[
f_1(\lambda)=f_2(\lambda)E(\lambda),\qquad f_2(\lambda)=\frac{1}{\pi}e^{it\lambda^2+ix\lambda}, \tag{1.2}
\]
\[
E(\lambda)=\mathrm{P.V.}\int_{-\infty}^{+\infty}\frac{1}{\tau-\lambda}\,e^{-2it\tau^2-2ix\tau}\,d\tau. \tag{1.3}
\]
Here $x$ and $t$ are the distance and time variables. Define the Fredholm determinant
\[
D(x,t)=\ln\det(I+K_{x,t}), \tag{1.4}
\]
where $K_{x,t}$ denotes the integrable operator acting on $L^2(-1,1)$ with the kernel (1.1). Let $B_{ij}$, $i,j=+,-$, be the potentials
\[
B_{++}=\int_{-1}^1 f_1(\mu)F_1(\mu)\,d\mu,\qquad B_{+-}=\int_{-1}^1 f_1(\mu)F_2(\mu)\,d\mu,
\]
\[
B_{-+}=\int_{-1}^1 f_2(\mu)F_1(\mu)\,d\mu,\qquad B_{--}=\int_{-1}^1 f_2(\mu)F_2(\mu)\,d\mu, \tag{1.5}
\]
where $F_k=(I+K_{x,t})^{-1}f_k$, $k=1,2$. Denote
\[
b_{++}=B_{++}-G,\qquad G=\int_{-\infty}^{+\infty}e^{-2it\tau^2-2ix\tau}\,d\tau. \tag{1.6}
\]
Then, we have
\[
\partial_{xx}D(x,t)=4\,b_{++}B_{--}. \tag{1.7}
\]
Furthermore, the two-point time-dependent correlation function for the one-dimensional impenetrable Bose gas can be represented by
\[
\psi(x_2,t_2)\psi^{+}(x_1,t_1)=-\frac{1}{2\pi}\,e^{2it}\,b_{++}(x,t)\exp\big(D(x,t)\big), \tag{1.8}
\]
where the distance $x$ and time $t$ are related to $x_1,x_2$, $t_1,t_2$ and the chemical potential $h$ by
\[
x=\tfrac{1}{2}\sqrt{h}\,|x_1-x_2|,\qquad t=\tfrac{1}{2}h(t_2-t_1); \tag{1.9}
\]
see [22, Eqs. (6.1) and (6.2)] and also [29, Chapter XIV.5]. Furthermore, it was shown in [22, Eq. (6.23)] and [23, Eq. (1.18)] that the potentials $B_{ij}$ satisfy the separated NLS equations
\[
2i\,\partial_t b_{++}=-\partial_x^2 b_{++}-8b_{++}^2B_{--},\qquad
2i\,\partial_t B_{--}=\partial_x^2 B_{--}+8b_{++}B_{--}^2. \tag{1.10}
\]
The above system is the first nontrivial pair of equations of the AKNS hierarchy, and it reduces to the nonlinear Schrödinger equation if $b_{++}=B_{--}$; see [1]. It should be mentioned that the finite-temperature unequal-time correlators can be expressed in terms of the determinant of the time-dependent sine kernel multiplied by a Fermi weight. The above PDEs are also valid for the finite-temperature determinant. These results were obtained in [22] by developing a general theory for the integral operators and constructing Riemann-Hilbert problems for these operators. These Riemann-Hilbert representations allow for calculations of the asymptotics of the correlation functions. From these Riemann-Hilbert representations, the large time and distance asymptotics of the correlation function of impenetrable bosons at finite temperature have been derived in [23].

By a computation using (1.1)-(1.3), we see that the kernel (1.1) tends to the classical sine kernel as $t\to0$:
\[
K(\lambda,\mu;x,0)=-\gamma K^{(\sin)}(\lambda,\mu;x),\qquad
K^{(\sin)}(\lambda,\mu;x)=\frac{\sin x(\lambda-\mu)}{\pi(\lambda-\mu)}, \tag{1.11}
\]
with $\gamma=2$. This corresponds to the equal-time situation of the correlation function (1.8); see [22, Eq. (1.10)]. Denote by $K^{(\sin)}_x$ the integrable operator acting on
$L^2(-1,1)$ with the sine kernel $K^{(\sin)}$. It is remarkable that the logarithmic derivative of the Fredholm determinant of $K^{(\sin)}_x$,
\[
\sigma_V(x;\gamma)=x\frac{d}{dx}\ln\det\big(I-\gamma K^{(\sin)}_x\big), \tag{1.12}
\]
satisfies the $\sigma$-form of the fifth Painlevé equation [26, Eq. (2.27)]
\[
(x\sigma_V'')^2+4\big(4\sigma_V-4x\sigma_V'-\sigma_V'^2\big)\big(\sigma_V-x\sigma_V'\big)=0, \tag{1.13}
\]
with the boundary conditions
\[
\sigma_V(x;\gamma)=-\frac{2}{\pi}\gamma x+O(x^2),\quad x\to0, \tag{1.14}
\]
and, as $x\to\infty$ [15, Eq. (2.14)] and [36, Eqs. (1.16) and (1.21)],
\[
\sigma_V(x;\gamma)=\begin{cases}
4kx+O(1), & \gamma<1,\\
-x^2+O(1), & \gamma=1,\\
4kx-2x\tan(\theta(x))+O(1), & \gamma>1,
\end{cases} \tag{1.15}
\]
with $k=\frac{1}{2\pi}\ln|\gamma-1|$, $\theta(x)=2x+2k\ln x+c_0$ and $c_0=4k\ln2-2\arg\Gamma\big(ik+\frac12\big)$.

[Figure 1: The space-like region, time-like region and transition region in the $(x,t)$-plane, separated by the critical line $t=\frac{1}{2}x$.]

Therefore, the equal-time correlation function (1.8) with $t=0$ is related to $\sigma_V(x;\gamma)$ with $\gamma=2$, which has singular asymptotic behavior as $x\to\infty$. For $\gamma=1$, integrating (1.12) along $[0,x]$ leads to the famous integral expression for the gap probability distribution of the classical sine process in random matrix theory. In [15], Dyson derived the large $x$ asymptotics of this determinant with $\gamma=1$ up to a conjectured constant term. The constant was derived rigorously later in [10, 16, 31]. The asymptotics of this determinant on the union of several intervals, as the size of the intervals tends to infinity, have also been explored in [5, 11, 18].

The present work is devoted to the study of the time-dependent correlation function of the one-dimensional Bose gas (1.8), which can be expressed in terms of the Fredholm determinant (1.4) and the solutions of the separated NLS equations (1.10). By using the Riemann-Hilbert representation for the determinant (1.4), we derive the large time and distance asymptotic approximations of the derivatives of the determinant (1.4) and the solutions of the separated NLS equations in both the space-like region and the time-like region of the $(x,t)$-plane.
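As a numerical aside (not from the source), the small-$x$ boundary condition (1.14) for the sine-kernel determinant (1.12) can be checked with a standard Nyström discretization of the operator on $L^2(-1,1)$; the quadrature size, step, and evaluation point below are arbitrary choices, and numpy is assumed to be available.

```python
# Nystrom check (illustrative) of sigma_V(x; gamma) ~ -(2/pi) gamma x as x -> 0,
# for the sine-kernel determinant (1.12) on L^2(-1, 1).
import numpy as np

def log_det_sine(x, gamma, n=40):
    """log det(I - gamma K_x^(sin)) via Gauss-Legendre Nystrom discretization."""
    nodes, w = np.polynomial.legendre.leggauss(n)
    L = nodes[:, None] - nodes[None, :]
    Lsafe = np.where(L == 0, 1.0, L)                  # avoid 0/0 on the diagonal
    K = np.where(L == 0, x / np.pi, np.sin(x * L) / (np.pi * Lsafe))
    M = np.eye(n) - gamma * np.sqrt(w[:, None] * w[None, :]) * K
    return np.log(np.linalg.det(M))

def sigma_V(x, gamma, h=1e-5):
    """sigma_V(x; gamma) = x d/dx log det(I - gamma K_x^(sin)), central differences."""
    return x * (log_det_sine(x + h, gamma) - log_det_sine(x - h, gamma)) / (2 * h)

# At x = 0.01 the computed value is close to the leading term -(2/pi)*gamma*x,
# up to the O(x^2) correction in (1.14).
```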
Furthermore, we observe a phase transition between the asymptotic expansions in these two different regions. The phase transition is then shown to be described by a particular solution of the Painlevé IV equation.

1.1 Statement of results

To state our main results, we define for $x,t>0$ the space-like region, time-like region and transition region as follows:
• space-like region: $\frac{x}{2t}>1+\delta$,
• time-like region: $\frac{x}{2t}<1-\delta$,
• transition region: $t^{\frac12}\left|\frac{x}{2t}-1\right|\le C$,
with any small but fixed $\delta>0$ and any constant $C>0$; see Fig. 1. Then, we derive the asymptotic expansions of the derivatives of the determinant (1.4) and the solutions of the separated NLS equations in these regions; the main results are given in the following theorems.

Theorem 1.1 (Large distance asymptotics in the space-like region). Let $D(x,t)$ be the Fredholm determinant defined in (1.4). We have the following asymptotic expansions as $x\to+\infty$:
\[
\partial_t D(x,t)=-\frac{\sqrt{2}\,e^{i\left(\frac{x^2}{2t}+2t+\frac{\pi}{4}\right)}}{\sqrt{\pi t}\,\cos(2x)}+O(x^{-1}), \tag{1.16}
\]
\[
\partial_x D(x,t)=-2\tan(2x)+\frac{2\sqrt{2t}\,e^{i\left(\frac{x^2}{2t}+2t+\frac{\pi}{4}\right)}}{\sqrt{\pi}\,(x-2t)\cos(2x)}\left(1-\frac{2t\,e^{2ix}}{(x+2t)\cos(2x)}\right)+O(x^{-1}), \tag{1.17}
\]
where the error terms are uniform for $(x,t)$ in the space-like region and for
$x$ bounded away from the zeros of $\cos(2x)$. Moreover, we have the asymptotic expansions of the corresponding solutions of the separated NLS equations $b_{++}$ and $B_{--}$, defined by (1.5) and (1.6), as $x\to+\infty$:
\[
b_{++}(x,t)=-\frac{\pi e^{-2it}}{\cos(2x)}-\sqrt{\frac{\pi}{2t}}\,e^{i\left(\frac{x^2}{2t}-\frac{\pi}{4}\right)}\left(1-\frac{4it\tan(2x)}{x-2t}+\frac{4t^2e^{4ix}}{(x^2-4t^2)\cos^2(2x)}\right)+O(x^{-1}), \tag{1.18}
\]
\[
B_{--}(x,t)=\frac{e^{2it}}{\pi\cos(2x)}-\frac{2\sqrt{2}\,t^{\frac32}e^{i\left(\frac{x^2}{2t}+4t-\frac{\pi}{4}\right)}}{\pi^{\frac32}(x^2-4t^2)\cos^2(2x)}+O(x^{-1}), \tag{1.19}
\]
where the error terms are uniform for $(x,t)$ in the space-like region and for $x$ bounded away from the zeros of $\cos(2x)$.

Theorem 1.2 (Large time asymptotics in the time-like region). Let $D(x,t)$ be the Fredholm determinant defined in (1.4). We have the following asymptotic expansions as $t\to+\infty$:
\[
\partial_t D(x,t)=\frac{\sqrt{2}\,e^{-i\left(\frac{x^2}{2t}+2t+\frac{\pi}{4}\right)}}{\sqrt{\pi t}\,\cos(2x)}+O(t^{-1}), \tag{1.20}
\]
\[
\partial_x D(x,t)=-2\tan(2x)-\frac{2\sqrt{2t}\,e^{-i\left(\frac{x^2}{2t}+2t+\frac{\pi}{4}\right)}}{\sqrt{\pi}\,(x+2t)\cos(2x)}\left(1+\frac{2t\,e^{2ix}}{(x-2t)\cos(2x)}\right)+O(t^{-1}), \tag{1.21}
\]
where the error terms are uniform for $(x,t)$ in the time-like region and for $x$ bounded away from the zeros of $\cos(2x)$. Moreover, we have the asymptotic expansions of the corresponding solutions of the separated NLS equations $b_{++}$ and $B_{--}$, defined by (1.5) and (1.6), as $t\to+\infty$:
\[
b_{++}(x,t)=-\frac{\pi e^{-2it}}{\cos(2x)}-\frac{2\sqrt{2}\,\pi t^{\frac32}e^{-i\left(\frac{x^2}{2t}+4t-\frac{\pi}{4}\right)}}{(x^2-4t^2)\cos^2(2x)}+O(t^{-1}), \tag{1.22}
\]
\[
B_{--}(x,t)=\frac{e^{2it}}{\pi\cos(2x)}-\frac{e^{i\left(-\frac{x^2}{2t}+\frac{\pi}{4}\right)}}{\pi^{\frac32}\sqrt{2t}}\left(1+\frac{4it\tan(2x)}{x+2t}+\frac{4t^2e^{4ix}}{(x^2-4t^2)\cos^2(2x)}\right)+O(t^{-1}), \tag{1.23}
\]
where the error terms are uniform for $(x,t)$ in the time-like region and for $x$ bounded away from the zeros of $\cos(2x)$.

Remark 1.3. Letting $t\to0$ in (1.17), we have the asymptotics as $x\to+\infty$:
\[
\partial_x D(x,0)=-2\tan(2x)+O(x^{-1}). \tag{1.24}
\]
From (1.11) and (1.12), we have
\[
x\,\partial_x D(x,0)=x\frac{d}{dx}\ln\det\big(I-\gamma K^{(\sin)}_x\big)=\sigma_V(x;\gamma), \tag{1.25}
\]
where $K^{(\sin)}_x$ is the integral operator with the sine kernel and $\sigma_V(x;\gamma)$ is the solution of the $\sigma$-form of the fifth Painlevé equation (1.13) with $\gamma=2$. The asymptotics (1.24) is consistent with the large $x$ asymptotics of $\sigma_V(x;\gamma)$ with $\gamma=2$ as given in (1.15), which was obtained earlier by McCoy and Tang in [36].

Remark 1.4.
From (1.17), we see that $\partial_x D$ has singular asymptotic behavior as $x\to+\infty$. This phenomenon can also be observed in the large $x$ asymptotics of the sine kernel determinant (1.12) and of $\sigma_V(x;\gamma)$ for the parameter $\gamma>1$, as shown in (1.15). Therefore, it is natural to expect that $D(x,t)$ may have poles near the zeros of $\cos(2x)$ for large $x$. For the results in the above theorems to be valid, we require that $x$ is bounded away from the zeros of $\cos(2x)$. It would be desirable to derive the asymptotics near the zeros of $\cos(2x)$ and the asymptotics of $D(x,t)$ itself. Furthermore, similar to the sine kernel determinant, it would be interesting to study the asymptotics of the $\gamma$-parameter generalization of the determinant (1.4), namely $\det(I-\gamma K_{x,t})$. We leave these problems to future investigations.

Remark 1.5. It is noted that the leading terms in both the large distance asymptotics (1.18) and (1.19), and the large time asymptotics (1.22) and (1.23), are given by the following special periodic solutions of the separated NLS equations (1.10):
\[
b_{++}(x,t)=-\frac{\pi e^{-2it}}{\cos(2x)},\qquad B_{--}(x,t)=\frac{e^{2it}}{\pi\cos(2x)}. \tag{1.26}
\]
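That the pair (1.26) indeed solves (1.10) is a short computation; as a sanity check (assuming sympy is available), one can form the residuals of the two equations in (1.10) and verify that they vanish:

```python
# Check (illustrative) that the periodic pair (1.26) solves the separated
# NLS equations (1.10): the residuals of both equations should vanish.
import sympy as sp

x, t = sp.symbols('x t', real=True)
b = -sp.pi * sp.exp(-2 * sp.I * t) / sp.cos(2 * x)   # b_{++} in (1.26)
B = sp.exp(2 * sp.I * t) / (sp.pi * sp.cos(2 * x))   # B_{--} in (1.26)

# Residuals of 2i b_t = -b_xx - 8 b^2 B and 2i B_t = B_xx + 8 b B^2
r1 = 2 * sp.I * sp.diff(b, t) + sp.diff(b, x, 2) + 8 * b**2 * B
r2 = 2 * sp.I * sp.diff(B, t) - sp.diff(B, x, 2) - 8 * b * B**2

assert sp.simplify(r1) == 0
assert sp.simplify(r2) == 0
```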
Remark 1.6. From Theorems 1.1 and 1.2, we observe a phase transition in the leading asymptotics of $\partial_t D(x,t)$ given in (1.16) and (1.20). Specifically, the phase changes from $e^{i\left(\frac{x^2}{2t}+2t+\frac{\pi}{4}\right)}$ to $e^{-i\left(\frac{x^2}{2t}+2t+\frac{\pi}{4}\right)}$ as $(x,t)$ moves across the critical curve $x=2t$, as shown in Fig. 1. A similar phase transition can also be found in the large time and distance asymptotics of the other quantities, including $\partial_x D(x,t)$, $b_{++}(x,t)$ and $B_{--}(x,t)$.

From Theorems 1.1 and 1.2, we observe a phase transition near the critical curve $x=2t$ in the large time and distance asymptotic expansions. Next, we show that the transition can be described by a special solution of the Painlevé IV equation. Let $u(s)$ be a solution of the Painlevé IV (PIV) equation
\[
\frac{d^2u}{ds^2}=\frac{1}{2u}\left(\frac{du}{ds}\right)^2+\frac{3}{2}u^3+4su^2+2(s^2+1-2\Theta_\infty)u-\frac{8\Theta^2}{u}, \tag{1.27}
\]
and define
\[
\frac{1}{y}\frac{dy}{ds}=-u-2s, \tag{1.28}
\]
\[
z=\frac{1}{4}\left(-\frac{du}{ds}+u^2+2su+4\Theta\right), \tag{1.29}
\]
and the associated Hamiltonian
\[
H=\frac{z}{u}(z-2\Theta)-\left(\frac{u}{2}+s\right)(z-\Theta-\Theta_\infty). \tag{1.30}
\]
The following solution of the PIV equation with the parameters $\Theta=0$ and $\Theta_\infty=\frac12$ plays a central role in the asymptotics in the transition region.

Proposition 1.7 (Large $s$ asymptotics of PIV). For the parameters $\Theta=0$ and $\Theta_\infty=\frac12$, there exists a unique solution of the PIV equation (1.27) corresponding to the special Stokes multipliers
\[
\{s_1=s_2=2i,\ s_3=s_4=0\}. \tag{1.31}
\]
This solution has the following asymptotic expansions:
\[
u(s)=-2s-\frac{2i\,e^{-s^2}}{\sqrt{\pi}}+O(s^{-1}),\quad e^{\frac{\pi i}{4}}s\to+\infty, \tag{1.32}
\]
\[
u(s)=-2s-\frac{2\,e^{s^2}}{\sqrt{\pi}}+O(s^{-1}),\quad e^{\frac{\pi i}{4}}s\to-\infty. \tag{1.33}
\]
Moreover, we have the following asymptotic expansions of $y(s)$ and the associated Hamiltonian $H(s)$, defined in (1.28) and (1.30), respectively:
\[
y(s)=2-\frac{2i\,e^{-s^2}}{\sqrt{\pi}\,s}+O(s^{-2}),\quad e^{\frac{\pi i}{4}}s\to+\infty, \tag{1.34}
\]
\[
H(s)=O(s^{-1}),\quad e^{\frac{\pi i}{4}}s\to+\infty, \tag{1.35}
\]
\[
y(s)=2+\frac{2\,e^{s^2}}{\sqrt{\pi}\,s}+O(s^{-2}),\quad e^{\frac{\pi i}{4}}s\to-\infty, \tag{1.36}
\]
\[
H(s)=-\frac{e^{s^2}}{\sqrt{\pi}}+O(s^{-1}),\quad e^{\frac{\pi i}{4}}s\to-\infty. \tag{1.37}
\]
Theorem 1.8 (Asymptotics in the transition region). Let $D(x,t)$ be the Fredholm determinant defined in (1.4). We have the following asymptotic expansions as $x,t\to+\infty$:
\[
\partial_t D(x,t)=\frac{2\sqrt{2}\,i\,e^{\frac{\pi i}{4}}}{\sqrt{t}}\left[H(s)-\frac{y(s)}{2}\,\frac{\left(\frac{u(s)}{2}+s\right)e^{4ix}}{1+\frac{y(s)}{2}e^{4ix}}\right]+O(t^{-1}), \tag{1.38}
\]
\[
\partial_x D(x,t)=2i\,\frac{\frac{y(s)}{2}e^{4ix}-1}{1+\frac{y(s)}{2}e^{4ix}}+\sqrt{\frac{2}{t}}\,e^{-\frac{\pi i}{4}}\,\frac{H(s)-\frac{y^2(s)}{4}\left(\frac{u(s)}{2}+s-H(s)\right)e^{8ix}}{\left(1+\frac{y(s)}{2}e^{4ix}\right)^2}+O(t^{-1}). \tag{1.39}
\]
Moreover, we have the asymptotic expansions of the corresponding solutions of the separated NLS equations $b_{++}$ and $B_{--}$, defined by (1.5) and (1.6), as $t\to+\infty$:
\[
b_{++}(x,t)=-\frac{\pi y(s)e^{2i(x-t)}}{1+\frac{y(s)}{2}e^{4ix}}+\frac{\pi y(s)e^{2i(x-t)+\frac{\pi i}{4}}}{2\sqrt{2t}\left(1+\frac{y(s)}{2}e^{4ix}\right)^2}\left[y(s)H(s)e^{4ix}-\big(1+y(s)e^{4ix}\big)\left(\frac{u(s)}{2}+s\right)\right]+O(t^{-1}), \tag{1.40}
\]
\[
B_{--}(x,t)=\frac{2e^{2i(x+t)}}{\pi\left(1+\frac{y(s)}{2}e^{4ix}\right)}+\frac{e^{2i(x+t)+\frac{i\pi}{4}}}{\pi\sqrt{2t}\left(1+\frac{y(s)}{2}e^{4ix}\right)^2}\left[2H(s)+\frac{y(s)}{2}\left(\frac{u(s)}{2}+s\right)e^{4ix}\right]+O(t^{-1}). \tag{1.41}
\]
Here the variable $s=e^{-\frac{\pi i}{4}}\sqrt{2t}\left(\frac{x}{2t}-1\right)$, and the error terms are uniform for $(x,t)$ in the transition region such that $s$ is bounded. The function $u(s)$ is the solution of the PIV equation (1.27), and $y(s)$ and $H(s)$ are defined by (1.28) and (1.30), with the properties specified in Proposition 1.7.

Remark 1.9. As $t^{\frac12}\left(\frac{x}{2t}-1\right)\to+\infty$, from (1.32), (1.34) and (1.35) we see that the Painlevé IV asymptotics (1.38) degenerates to (1.16). On the other hand, as $t^{\frac12}\left(\frac{x}{2t}-1\right)\to-\infty$, from (1.33), (1.36) and (1.37), the asymptotics (1.38) is reduced
to (1.20). Therefore, the Painlevé IV asymptotics describes the phase transition between the asymptotics of $\partial_t D(x,t)$ in the space-like and time-like regions as given in (1.16) and (1.20). Similarly, the Painlevé IV asymptotics shown in Theorem 1.8 also describes the phase transition in the asymptotics of the quantities $\partial_x D(x,t)$, $b_{++}(x,t)$ and $B_{--}(x,t)$.

Notations. In this paper, we will frequently use the following notations.
• If $A$ is a matrix, then $(A)_{ij}$ denotes its $(i,j)$-th entry and $A^T$ represents its transpose.
• We define $U(a,\delta)$ as the open disc centered at $a$ with radius $\delta>0$:
\[
U(a,\delta):=\{z\in\mathbb{C}:|z-a|<\delta\}, \tag{1.42}
\]
and $\partial U(a,\delta)$ as its boundary with the clockwise orientation.
• The Pauli matrices are defined as follows:
\[
\sigma_1=\begin{pmatrix}0&1\\1&0\end{pmatrix},\quad
\sigma_3=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\quad
\sigma_+=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad
\sigma_-=\begin{pmatrix}0&0\\1&0\end{pmatrix}. \tag{1.43}
\]
• We will carry out the Deift-Zhou nonlinear steepest descent analysis of Riemann-Hilbert problems several times. Each time, we will use the same notations, such as $T$, $P^{(\infty)}$, $P^{(\pm1)}$ and $R$. These notations will have a different meaning in each context, and we expect this will not cause confusion.

The rest of the present paper is arranged as follows. In Section 2, we introduce the Riemann-Hilbert (RH) problem for the Fredholm determinant (1.4), which was introduced in [23]. The RH problem for the classical PIV equation is also given in this section. In Sections 3 and 4, we perform the Deift-Zhou nonlinear steepest descent analysis [12-14] of the RH problem for the determinant in the space-like region and the time-like region, respectively. The proofs of Theorems 1.1 and 1.2 are given at the end of Sections 3 and 4, respectively. In Section 5, we consider the asymptotics in the transition region and prove Theorem 1.8. Finally, in Section 6, we derive the asymptotics of a special solution of the PIV equation and prove Proposition 1.7 by using the associated RH problem.
2 RH problem for the determinant and the PIV equation

In this section, we introduce the RH problem representation for the determinant in (1.4), as constructed in [22, 23], and then present the RH problem for the PIV equation (1.27).

2.1 RH problem for the determinant

The kernel in (1.1) can be expressed in the following form:
\[
K(\lambda,\mu)=\frac{\vec{f}^{\,T}(\lambda)\vec{g}(\mu)}{\lambda-\mu}, \tag{2.1}
\]
where
\[
\vec{f}(\lambda)=\begin{pmatrix}f_1(\lambda)\\ f_2(\lambda)\end{pmatrix},\qquad
\vec{g}(\lambda)=\begin{pmatrix}f_2(\lambda)\\ -f_1(\lambda)\end{pmatrix}. \tag{2.2}
\]
Then the kernel of the resolvent operator $(I+K_{x,t})^{-1}K_{x,t}$ can be expressed as [22]
\[
R(\lambda,\mu)=\frac{\vec{F}^{\,T}(\lambda)\vec{G}(\mu)}{\lambda-\mu}, \tag{2.3}
\]
with
\[
\vec{F}(\lambda)=(I+K_{x,t})^{-1}\vec{f}(\lambda),\qquad
\vec{G}(\lambda)=\big(I+K^T_{x,t}\big)^{-1}\vec{g}(\lambda). \tag{2.4}
\]
Here $K_{x,t}$ denotes the integrable operator acting on $L^2(-1,1)$ with the kernel (1.1), and $K^T_{x,t}$ represents the real adjoint of the operator $K_{x,t}$, with the kernel
\[
K^T(\lambda,\mu)=K(\mu,\lambda). \tag{2.5}
\]
Furthermore, $\vec{F}$ and $\vec{G}$ can be expressed as
\[
\vec{F}(\lambda)=X(\lambda)\vec{f}(\lambda),\qquad
\vec{G}(\lambda)=X^{-T}(\lambda)\vec{g}(\lambda), \tag{2.6}
\]
where $X$ satisfies the following RH problem; see [22, 23].

RH problem for X

(1) $X(\lambda)$ is analytic for $\lambda\in\mathbb{C}\setminus[-1,1]$.
(2) $X(\lambda)$ has continuous boundary values $X_\pm(\lambda)$ as $\lambda$ approaches the real axis from the positive and negative sides, respectively, and they satisfy the relation
\[
X_+(\lambda)=X_-(\lambda)\Big(I+2\pi i\,\vec{f}(\lambda)\vec{g}^{\,T}(\lambda)\Big),\quad \lambda\in(-1,1). \tag{2.7}
\]
(3) $X(\lambda)=I+O\!\left(\frac{1}{\lambda}\right)$ as $\lambda\to\infty$.

Then the solution to the above RH problem is expressible in terms of the functions $\vec{F}$ and $\vec{g}$ by using the Cauchy integral
\[
X(\lambda)=I+\int_{-1}^1\frac{\vec{F}(\mu)\vec{g}^{\,T}(\mu)}{\mu-\lambda}\,d\mu. \tag{2.8}
\]
The behavior of $X$ at infinity can be expressed as
\[
X(\lambda)=I+\frac{X_1}{\lambda}+\frac{X_2}{\lambda^2}+O\!\left(\frac{1}{\lambda^3}\right). \tag{2.9}
\]
We denote $X_1$ and $X_2$ by
\[
X_1=\begin{pmatrix}-B_{-+}&B_{++}\\ -B_{--}&B_{+-}\end{pmatrix},\qquad
X_2=\begin{pmatrix}-C_{-+}&C_{++}\\ -C_{--}&C_{+-}\end{pmatrix}; \tag{2.10}
\]
then the potentials $B_{ij}$, $i,j=+,-$, are given by (1.5). In order to simplify the jump condition, we introduce the transformation
\[
Y(\lambda)=X(\lambda)\begin{pmatrix}1&\int_{-\infty}^{+\infty}\frac{e^{-2i\theta(\tau)}}{\tau-\lambda}\,d\tau\\ 0&1\end{pmatrix}, \tag{2.11}
\]
where the phase function is
\[
\theta(\lambda)=t\lambda^2+x\lambda. \tag{2.12}
\]
Then $Y$ satisfies the following RH problem.

RH problem for Y

(1) $Y(\lambda)$ is analytic for $\lambda\in\mathbb{C}\setminus\mathbb{R}$.
(2) $Y_+(\lambda)=Y_-(\lambda)J_Y(\lambda)$, $\lambda\in\mathbb{R}$, where
\[
J_Y(\lambda)=\begin{cases}
\begin{pmatrix}-1&0\\ \frac{2i}{\pi}e^{2i\theta(\lambda)}&-1\end{pmatrix}, & \lambda\in(-1,1),\\[2mm]
\begin{pmatrix}1&2\pi i\,e^{-2i\theta(\lambda)}\\ 0&1\end{pmatrix}, & \lambda\in(-\infty,-1)\cup(1,+\infty).
\end{cases} \tag{2.13}
\]
(3) $Y(\lambda)=I+\frac{Y_1}{\lambda}+\frac{Y_2}{\lambda^2}+O\!\left(\frac{1}{\lambda^3}\right)$ as $\lambda\to\infty$, where
\[
Y_1=\begin{pmatrix}-B_{-+}&b_{++}\\ -B_{--}&B_{+-}\end{pmatrix},\qquad
Y_2=\begin{pmatrix}-C_{-+}&C_{++}+B_{-+}G-\int_{-\infty}^{+\infty}\tau e^{-2i\theta(\tau)}\,d\tau\\ -C_{--}&C_{+-}+B_{--}G\end{pmatrix}, \tag{2.14}
\]
with $b_{++}$ and $G$ defined in (1.6), and $B_{ij}$, $C_{ij}$ given in (2.10).
(4) $Y(\lambda)=O(\ln|\lambda\mp1|)$ as $\lambda\to\pm1$.

Let
\[
\Psi(\lambda)=Y(\lambda)e^{-i\theta(\lambda)\sigma_3}. \tag{2.15}
\]
We have the following Lax pair [1, 42]:
\[
\Psi_x(\lambda;x,t)=L_1(\lambda;x,t)\Psi(\lambda;x,t),\qquad
\Psi_t(\lambda;x,t)=L_2(\lambda;x,t)\Psi(\lambda;x,t), \tag{2.16}
\]
where
\[
L_1(x,t)=-i\lambda\sigma_3+i[\sigma_3,Y_1], \tag{2.17}
\]
\[
L_2(x,t)=-i\lambda^2\sigma_3+i\lambda[\sigma_3,Y_1]+i[\sigma_3,Y_2]-i[\sigma_3,Y_1]Y_1, \tag{2.18}
\]
with $Y_1$ and $Y_2$ given in (2.14). The zero-curvature equation
\[
\frac{\partial L_2}{\partial x}-\frac{\partial L_1}{\partial t}-[L_1,L_2]=0 \tag{2.19}
\]
gives us the separated NLS equations (1.10). According to [22] and [29, Chapter XIV.5], we have
\[
\partial_x D=-2iB_{+-},\qquad \partial_t D=-2iGB_{--}-2i(C_{+-}+C_{-+}), \tag{2.20}
\]
where $G$, $B_{ij}$ and $C_{ij}$ appeared in (2.14). Therefore, the derivatives of the determinant (1.4) and the associated solutions of the separated NLS equations $b_{++}$ and $B_{--}$, defined by (1.5) and (1.6), can be expressed in terms of the elements of the solution to the RH problem.

Proposition 2.1.
The derivatives of D in (1.4) and the associate solutions of the separated NLS equations b_{++} and B_{−−}, defined by (1.5) and (1.6), can be expressed in terms of the elements of Y_1 and Y_2 in (2.14):

∂_t D = 2i((Y_2)_{11} − (Y_2)_{22}),   (2.21)
∂_x D = 2i(Y_1)_{11} = −2i(Y_1)_{22},   (2.22)
b_{++} = (Y_1)_{12},   (2.23)
B_{−−} = −(Y_1)_{21}.   (2.24)

2.2 RH problem for the PIV equation

In this section, we recall the RH problem for the PIV equation (1.27) constructed in [19, Chapter 5.1]. A particular solution of the PIV equation with special parameters plays an important role in the description of the asymptotics of the logarithmic derivative of the Fredholm determinant and the corresponding solutions of the NLS equations in (1.10) in the transition region.

[Figure 2: The jump contour of the RH problem for Ψ (the contours Σ_1, ..., Σ_4 meeting at the origin O).]

RH problem for Ψ

(1) Ψ(ξ, s) (Ψ(ξ) for short) is analytic for ξ ∈ C \ ∪_{i=1}^4 Σ_i, where Σ_i, i = 1, ..., 4, are shown in Fig. 2.

(2) Ψ_+(ξ) = Ψ_−(ξ)S_i, ξ ∈ Σ_i, i = 1, 2, 3; Ψ_+(ξ) = Ψ_−(ξ)S_4 e^{2πiΘ_∞σ_3} for ξ ∈ Σ_4, where

S_1 = [[1, 0], [s_1, 1]],  S_2 = [[1, s_2], [0, 1]],  S_3 = [[1, 0], [s_3, 1]],  S_4 = [[1, s_4], [0, 1]].

The Stokes multipliers s_i, i = 1, ..., 4, satisfy

(1 + s_2 s_3) e^{2πiΘ_∞} + [s_1 s_4 + (1 + s_3 s_4)(1 + s_1 s_2)] e^{−2πiΘ_∞} = 2 cos 2πΘ.   (2.25)

(3) As ξ → ∞,

Ψ(ξ) = ( I + Ψ_1(s)/ξ + Ψ_2(s)/ξ^2 + O(ξ^{−3}) ) e^{(ξ^2/2 + sξ)σ_3} ξ^{−Θ_∞σ_3},   (2.26)
where the branch for ξ^{Θ_∞} is chosen such that arg ξ ∈ (−π/2, 3π/2).

(4) As ξ → 0, for Θ ≠ 0, Ψ(ξ) = Ψ^{(0)}(ξ) ξ^{Θσ_3}, where Ψ^{(0)}(ξ) is analytic near ξ = 0. For Θ = 0, we have Ψ(ξ) = O(ln|ξ|), as ξ → 0.

Then, the Painlevé IV transcendent u, and the quantities y and H, can be expressed in terms of the elements of Ψ_1 and Ψ_2 as follows:

y(s) = −2(Ψ_1)_{12}(s),   (2.27)
H(s) = (Ψ_1)_{22}(s),   (2.28)
u(s) = −2s − (d/ds) ln (Ψ_1)_{12}(s),   (2.29)

or

u(s) = 2(Ψ_1)_{22}(s) − 2s − 2(Ψ_2)_{12}(s)/(Ψ_1)_{12}(s).   (2.30)

In our situation, we take the parameters Θ = 0 and Θ_∞ = 1/2, and the Stokes multipliers s_1 = s_2 = 2i and s_3 = s_4 = 0.

3 Asymptotic analysis in the space-like region

In this section, we derive the large x asymptotics of the derivatives of D defined in (1.4) and the corresponding solutions of the separated NLS equations in the space-like region x/(2t) > 1 + δ, for any small but fixed δ > 0, by performing Deift-Zhou nonlinear steepest descent analysis [12–14] of the RH problem for Y.

3.1 Deformation of the jump contour

[Figure 3: Deformation of the jump contour; the regions Ω_1, ..., Ω_4 and the contours Σ_1, ..., Σ_5, with the points −1, 1 and the stationary point λ_0.]

Define

T(λ) = Y(λ) [[1, −2πi e^{−2iθ(λ)}], [0, 1]],  λ ∈ Ω_1,
T(λ) = Y(λ) [[1, 2πi e^{−2iθ(λ)}], [0, 1]],  λ ∈ Ω_2 ∪ Ω_4,
T(λ) = Y(λ) [[1, 0], [(2i/π) e^{2iθ(λ)}, 1]],  λ ∈ Ω_3,
T(λ) = Y(λ),  λ ∈ C \ ∪_{i=1}^4 Ω_i,   (3.1)

where the regions Ω_i, i = 1, ..., 4, are illustrated in Fig. 3. Then T solves the following RH problem.

RH problem for T

(1) T(λ) is analytic for λ ∈ C\Σ, where Σ is shown in Fig. 3.

(2) T_+(λ) = T_−(λ)J_T(λ), λ ∈ Σ, where

J_T(λ) = [[1, 2πi e^{−2iθ(λ)}], [0, 1]],  λ ∈ Σ_1 ∪ Σ_2 ∪ Σ_5,
J_T(λ) = [[1, 0], [−(2i/π) e^{2iθ(λ)}, 1]],  λ ∈ Σ_3,
J_T(λ) = −I,  λ ∈ Σ_4.   (3.2)

(3) As λ → ∞, T(λ) = I + O(1/λ).

(4) As λ → ±1, T(λ) = O(ln(λ ∓ 1)).

3.2 Global parametrix

From (3.2), we have J_T(λ) → I, as x → +∞, for λ ∈ Σ\Σ_4. As x → +∞, it is expected that T can be approximated by a solution to the following RH problem with the remaining jump matrix along the interval (−1, 1).

RH problem for P^{(∞)}

(1) P^{(∞)}(λ) is analytic for λ ∈ C\[−1, 1].

(2) P^{(∞)}_+(λ) = −P^{(∞)}_−(λ), λ ∈ (−1, 1).

(3) P^{(∞)}(λ) = I + O(1/λ), as λ → ∞.
The solution to the RH problem for P^{(∞)} can be constructed as follows:

P^{(∞)}(λ) = ((λ − 1)/(λ + 1))^{σ_3/2},   (3.3)

where ((λ − 1)/(λ + 1))^{1/2} takes the branch cut along [−1, 1] and behaves like 1 as λ → ∞.

3.3 Local parametrices near λ = ±1

In this subsection, we seek two parametrices P^{(±1)} that satisfy the same jump conditions as T on Σ in the neighborhoods U(±1, δ), for some δ > 0.

3.3.1 Local parametrix near λ = −1

In this section, we construct the solution to the following RH problem for P^{(−1)}.

RH problem for P^{(−1)}

(1) P^{(−1)}(λ) is analytic for λ ∈ U(−1, δ)\Σ.

(2) P^{(−1)}(λ) has the same jumps as T(λ) on U(−1, δ) ∩ Σ.

(3) On the boundary ∂U(−1, δ), P^{(−1)}(λ) satisfies

P^{(−1)}(λ){P^{(∞)}(λ)}^{−1} = [[1, ((λ − 1)/(λ + 1)) π e^{2i(x−t)}], [0, 1]] + O(x^{−1}),  x → +∞.   (3.4)

We define the following conformal mapping

ξ(λ) = 2[θ(λ) − θ(−1)] = 2(x − 2t)(λ + 1) + 2t(λ + 1)^2.   (3.5)

As λ → −1, we have

ξ(λ) ∼ 2(x − 2t)(λ + 1).   (3.6)

Let Φ^{(CHF)} be the confluent hypergeometric parametrix with the parameter β = 1/2, as given in
Appendix A. The solution to the above RH problem can be constructed as follows: P(−1)(λ) =E(−1)(λ)P(−1) 0(ξ(λ))ei(tλ2+xλ)σ3, λ∈U(−1, δ), (3.7) where P(−1) 0(ξ) = Φ(CHF )(ξ)(eπiπ)−1 2σ3, argξ∈(0,π 3)∪(2π 3, π), Φ(CHF )(ξ)1 0 −i1 (eπiπ)−1 2σ3, argξ∈(π 3,π 2), Φ(CHF )(ξ)1 0 i1 (eπiπ)−1 2σ3, argξ∈(π 2,2π 3), Φ(CHF )(ξ)0i i0 (eπiπ)−1 2σ3, argξ∈(π,4π 3)∪(−π 3,0), Φ(CHF )(ξ)0i i1 (eπiπ)−1 2σ3, argξ∈(4π 3,3π 2), Φ(CHF )(ξ)0i i−1 (eπiπ)−1 2σ3, argξ∈(−π 2,−π 3),(3.8) and E(−1)(λ) =P(∞)(λ)ξ(λ)1 2σ3ei(x−t)σ3(eπiπ)1 2σ3. (3.9) It follows from (3.3) and (3.6) that E(−1)(λ) is analytic for λ∈U(−1, δ). From (3.3), (3.7)-(3.9) and (A.6), we have (3.4). 3.3.2 Local parametrix near λ= 1 In this section, we seek the solution to the following RH problem for P(1). 16 RH problem for P(1) (1)P(1)(λ) is analytic for λ∈U(1, δ)\Σ. (2)P(1)(λ) has the same jumps as T(λ) onU(1, δ)∩Σ. (3) On the boundary ∂U(1, δ),P(1)(λ) satisfies P(1)(λ){P(∞)(λ)}−1= 1 0 −λ+1 (λ−1)πe2i(x+t)1! +O(x−1), x→+∞. (3.10) We define the following conformal mapping ξ(λ) = 2[ θ(λ)−θ(1)] = 2( x+ 2t)(λ−1) + 2 t(λ−1)2. (3.11) Asλ→1, we have ξ(λ)∼2(x+ 2t)(λ−1). (3.12) Let Φ(CHF )be the confluent hypergeometric parametrix with the parameter β=−1 2, as given in Appendix A. The solution to the above RH problem can be constructed as follows: P(1)(λ) =E(1)(λ)P(1) 0(ξ(λ))ei(tλ2+xλ)σ3, λ∈U(1, δ), (3.13) where P(1) 0(ξ) = Φ(CHF )(ξ)(eπiπ)−1 2σ3, argξ∈(0,π 3)∪(2π 3, π), Φ(CHF )(ξ)1 0 i1 (eπiπ)−1 2σ3, argξ∈(π 3,π 2), Φ(CHF )(ξ)1 0 −i1 (eπiπ)−1 2σ3, argξ∈(π 2,2π 3), Φ(CHF )(ξ)0i i0 (eπiπ)−1 2σ3, argξ∈(π,4π 3)∪(−π 3,0), Φ(CHF )(ξ)0i i−1 (eπiπ)−1 2σ3, argξ∈(4π 3,3π 2), Φ(CHF )(ξ)0i i1 (eπiπ)−1 2σ3, argξ∈(−π 2,−π 3),(3.14) and E(1)(λ) =P(∞)(λ)ξ−1 2σ3e−i(x+t)σ3(eπiπ)1 2σ3. (3.15) It follows from (3.3) and (3.12) that E(1)(λ) is analytic for λ∈U(1, δ). From (3.3), (3.13)-(3.15) and (A.6), we have (3.10). 
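As a quick consistency check (an illustrative sketch, not part of the original argument), one can verify symbolically that the conformal mappings (3.5) and (3.11) agree with the phase θ(λ) = tλ² + xλ from (2.12), and numerically that the global parametrix (3.3), built from the principal branch of ((λ−1)/(λ+1))^{1/2}, indeed jumps by a sign across (−1, 1) and tends to I at infinity; the sample point z0 = 0.3 below is an arbitrary choice.

```python
import numpy as np
import sympy as sp

lam, x, t = sp.symbols('lam x t')
theta = t*lam**2 + x*lam  # the phase function (2.12)

# (3.5): 2[theta(lam) - theta(-1)] = 2(x - 2t)(lam + 1) + 2t(lam + 1)^2
d_m1 = sp.expand(2*(theta - theta.subs(lam, -1))
                 - (2*(x - 2*t)*(lam + 1) + 2*t*(lam + 1)**2))
# (3.11): 2[theta(lam) - theta(1)] = 2(x + 2t)(lam - 1) + 2t(lam - 1)^2
d_p1 = sp.expand(2*(theta - theta.subs(lam, 1))
                 - (2*(x + 2*t)*(lam - 1) + 2*t*(lam - 1)**2))

# The scalar in P^(inf)(lam) = ((lam-1)/(lam+1))^{sigma_3/2}: the principal
# square root puts the branch cut exactly on [-1, 1] and tends to 1 at infinity.
def f(z):
    return np.sqrt((z - 1)/(z + 1))

z0 = 0.3                                           # arbitrary point inside (-1, 1)
eps = 1e-9
jump_err = abs(f(z0 + 1j*eps) + f(z0 - 1j*eps))    # f_+ = -f_- on (-1, 1)
infty_err = abs(f(1e8) - 1.0)                      # f -> 1 at infinity
```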
3.4 Local parametrix near the stationary point In this subsection, we seek a parametrix P(0)that satisfies the same jump conditions as Ton Σ within the neighborhood U(λ0, δ), for some δ >0, where λ0=−x 2t. 17 RH problem for P(0) (1)P(0)(λ) is analytic for λ∈U(λ0, δ)\Σ. (2)P(0)(λ) has the same jumps as T(λ) onU(λ0, δ)∩Σ. (3) On the boundary ∂U(λ0, δ),P(0)(λ) satisfies P(0)(λ){P(∞)(λ)}−1=I+O(x−1 2), x→+∞. (3.16) The solution to the above RH problem can be constructed by using the Cauchy integral P(0)(λ) =P(∞)(λ) I+Z Γ0e−2i(ts2+xs) s−λds0 1 0 0! , (3.17) where P(∞)is defined in (3.3). The integral contour is defined as Γ 0=U(λ0, δ)∩(Σ1∪Σ2), where Σ 1and Σ 2are shown in Fig. 3. As x→+∞, we have the asymptotics of the integral by using the steepest descent method Z Γ0e−2i(ts2+xs) s−λds=eix2 2tZ Γ0e−2it(s+x 2t)2 (s+x 2t)−(λ+x 2t)ds =eix2 2tZ Γ1x 2te−2ix2 4t(u+1)2 x 2t(u+ 1)−(λ+x 2t)du =−rπ 2tei x2 2t−π 4 (λ−λ0)−1+O(x−3 2),(3.18) where integral contour Γ 1=2t xΓ0. From (3.3), (3.17) and (3.18), we have (3.16). 3.5 RH problem for M Asx→+∞, from (3.4) and (3.10), we see that P(−1) P(∞) −1andP(1) P(∞) −1do not tend to the identity matrix on ∂U(−1, δ) and ∂U(1, δ), respectively. To resolve this issue, we construct a matrix-valued function M(λ), which solves the remaining jumps on ∂U(−1,
δ) and ∂U(1, δ). RH problem for M (1)M(λ) is analytic for λ∈C\(∂U(−1, δ)∪∂U(1, δ)). (2) On the boundaries ∂U(−1, δ) and ∂U(1, δ), we have M+(λ) =M−(λ)1λ−1 λ+1πe2i(x−t) 0 1 , λ∈∂U(−1, δ), (3.19) M+(λ) =M−(λ) 1 0 −λ+1 π(λ−1)e2i(x+t)1! , λ∈∂U(1, δ). (3.20) 18 (3) As λ→ ∞ ,M(λ) =I+O 1 λ . Let A(λ) =I+B λ+ 1+C λ−1, (3.21) then we seek a solution to the above RH problem of the following form: M(λ) = A(λ)1−λ−1 λ+1πe2i(x−t) 0 1 , λ ∈U(−1, δ), A(λ) 1 0 λ+1 π(λ−1)e2i(x+t)1! , λ ∈U(1, δ), A(λ), λ ∈C\(U(−1, δ)∪U(1, δ)).(3.22) By the condition that Mis analytic near λ=±1, we determine the coefficients in (3.21) B= 0−2πe2i(x−t) 1+e4ix 0−2e4ix 1+e4ix! , C = 2e4ix 1+e4ix 0 −2 πe2i(x+t) 1+e4ix 0! . (3.23) From (3.3), (3.7), (3.13), (3.17) and (3.22), we have as x→+∞, M−(λ)P(1)(λ){P(∞)(λ)}−1M−1 +(λ) =I+O x−1 , λ∈∂U(1, δ), (3.24) M−(λ)P(−1)(λ){P(∞)(λ)}−1M−1 +(λ) =I+O x−1 , λ∈∂U(−1, δ), (3.25) M(λ)P(0)(λ){P(∞)(λ)}−1M−1(λ) =I+O x−1 2 , λ∈∂U(λ0, δ). (3.26) 3.6 Final transformation The final transformation is defined as R(λ) = T(λ) M(λ)P(∞)(λ) −1, λ ∈C\(U(λ0, δ)∪U(1, δ)∪U(−1, δ)), T(λ) M(λ)P(0)(λ) −1, λ ∈U(λ0, δ)\Σ, T(λ) M(λ)P(1)(λ) −1, λ ∈U(1, δ)\Σ, T(λ) M(λ)P(−1)(λ) −1, λ ∈U(−1, δ)\Σ.(3.27) Then Rfulfills the following RH problem. RH problem for R (1)R(λ) is analytic for λ∈C\Σ, where the contour is shown in Fig. 4. (2)R+(λ) =R−(λ)JR(λ),λ∈Σ, where JR(λ) = M(λ)P(0)(λ) P(∞)(λ) −1M−1(λ), λ ∈∂U(λ0, δ), M−(λ)P(1)(λ) P(∞)(λ) −1M−1 +(λ), λ ∈∂U(1, δ), M−(λ)P(−1)(λ) P(∞)(λ) −1M−1 +(λ), λ ∈∂U(−1, δ), M(λ)P(∞)(λ) 1 2πie−2iθ(λ) 0 1 P(∞)(λ) −1M−1(λ), λ ∈Σ1∪Σ2∪Σ5, M(λ)P(∞)(λ)1 0 −2i πe2iθ(λ)1 P(∞)(λ) −1M−1(λ), λ ∈Σ3. (3.28) 19 -1 1 𝝀𝟎𝜮𝟏 𝜮𝟐𝜮𝟑 𝜮𝟓𝝏𝑼−𝟏 𝝏𝑼𝟏 𝝏𝑼𝟎Figure 4: The jump contour of the RH problem for R (3)R(λ) =I+O 1 λ , asλ→ ∞ . From the matching conditions (3.24)-(3.26), we have as x→+∞, JR(λ) = I+O(x−1), λ ∈∂U(±1, δ), I+O(x−1 2), λ ∈∂U(λ0, δ), I+O(e−c1x), λ ∈Σ1∪Σ2∪Σ3∪Σ5,(3.29) where c1is some positive constant. 
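The coefficients (3.23) are forced by analyticity: the following symbolic sketch (an illustration, not taken from the paper) confirms that with B and C as in (3.23), the function M defined piecewise in (3.22) has vanishing residues at λ = ±1, so the apparent poles of A(λ) are cancelled by the triangular factors.

```python
import sympy as sp

lam = sp.symbols('lam')
x, t = sp.symbols('x t', real=True)
E4 = sp.exp(4*sp.I*x)
Em = sp.exp(2*sp.I*(x - t))          # e^{2i(x-t)}
Ep = sp.exp(2*sp.I*(x + t))          # e^{2i(x+t)}

# B and C from (3.23)
B = sp.Matrix([[0, -2*sp.pi*Em/(1 + E4)],
               [0, -2*E4/(1 + E4)]])
C = sp.Matrix([[2*E4/(1 + E4), 0],
               [-2*Ep/(sp.pi*(1 + E4)), 0]])
A = sp.eye(2) + B/(lam + 1) + C/(lam - 1)     # (3.21)

# M inside U(-1, delta) and inside U(1, delta), from (3.22)
M_m = A * sp.Matrix([[1, -(lam - 1)/(lam + 1)*sp.pi*Em], [0, 1]])
M_p = A * sp.Matrix([[1, 0], [(lam + 1)*Ep/(sp.pi*(lam - 1)), 1]])

# Residues at lam = -1 and lam = +1 must vanish entrywise
res_m = ((lam + 1)*M_m).applyfunc(lambda e: sp.simplify(sp.limit(e, lam, -1)))
res_p = ((lam - 1)*M_p).applyfunc(lambda e: sp.simplify(sp.limit(e, lam, 1)))
```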
Then we have as x → +∞,

R(λ) = I + O(x^{−1/2}),   (3.30)

where the error term is uniform for λ bounded away from the jump contour for R.

3.7 Large x asymptotics in the space-like region

By tracing back the series of invertible transformations

Y ↦ T ↦ R,   (3.31)

we obtain that as x → +∞,

Y(λ) = R(λ)A(λ)P^{(∞)}(λ),  λ ∈ C \ ( ∪_{i=1}^4 Ω_i ∪ U(1, δ) ∪ U(−1, δ) ∪ U(λ_0, δ) ),   (3.32)

where P^{(∞)} and A are defined in (3.3) and (3.21), and the regions Ω_i, i = 1, ..., 4, are shown in Fig. 3. From (3.3), we have

P^{(∞)}(λ) = I + P^{(∞)}_1/λ + P^{(∞)}_2/λ^2 + O(1/λ^3),  λ → ∞,   (3.33)

where

P^{(∞)}_1 = −σ_3,  P^{(∞)}_2 = (1/2)I.   (3.34)

From (3.21), we have

A(λ) = I + A_1/λ + A_2/λ^2 + O(1/λ^3),  λ → ∞,   (3.35)

where

A_1 = [[2e^{4ix}/(1+e^{4ix}), −2πe^{2i(x−t)}/(1+e^{4ix})], [−2e^{2i(x+t)}/(π(1+e^{4ix})), −2e^{4ix}/(1+e^{4ix})]],
A_2 = [[2e^{4ix}/(1+e^{4ix}), 2πe^{2i(x−t)}/(1+e^{4ix})], [−2e^{2i(x+t)}/(π(1+e^{4ix})), 2e^{4ix}/(1+e^{4ix})]].   (3.36)

We have the asymptotic expansion

R(λ) = I + R_1/λ + R_2/λ^2 + O(1/λ^3),  λ → ∞.   (3.37)

As x → +∞, we have

R(λ) = I + R^{(1)}(λ)/x^{1/2} + O(x^{−1}),   (3.38)

where the error term is uniform for λ bounded away from the jump contour for R. Here R^{(1)} satisfies R^{(1)}_+(λ) − R^{(1)}_−(λ) = ∆(
λ), λ∈∂U(λ0, δ), (3.39) with ∆(λ) =−p −λ0πei x2 2t−π 4 λ−λ0 1 λ+12 πe2i(x+t) 1+e4ix+1 λ2−14 πe2i(x+t)+4ix (1+e4ix)2λ−1 λ+1+1 λ+14e4ix 1+e4ix+1 λ2−14e8ix (1+e4ix)2 −1 λ2−14 π2e4i(x+t) (1+e4ix)2 −1 λ+12 πe2i(x+t) 1+e4ix−1 λ2−14 πe2i(x+t)+4ix (1+e4ix)2 . (3.40) We obtain that R(1)(λ) =( C λ−λ0, λ ∈C\U(λ0, δ), C λ−λ0−∆(λ), λ ∈U(λ0, δ),(3.41) where C= Res(∆( λ), λ0) is given by C=−p −λ0πei x2 2t−π 4 1 λ0+12 πe2i(x+t) 1+e4ix+1 λ2 0−14 πe2i(x+t)+4ix (1+e4ix)2λ0−1 λ0+1+1 λ0+14e4ix 1+e4ix+1 λ2 0−14e8ix (1+e4ix)2 −1 λ2 0−14 π2e4i(x+t) (1+e4ix)2 −1 λ0+12 πe2i(x+t) 1+e4ix−1 λ2 0−14 πe2i(x+t)+4ix (1+e4ix)2 . (3.42) Expanding R(1)into the Taylor series at infinity, we obtain the asymptotics for R1andR2: R1=C x1 2+O(x−1), R 2=λ0C x1 2+O(x−1), x→+∞. (3.43) Then, Ycan be expressed in the following form Y(λ) =I+Y1 λ+Y2 λ2+O1 λ3 , λ→ ∞ , (3.44) where Y1=R1+A1+P(∞) 1, Y 2=R1A1+R1P(∞) 1+A1P(∞) 1+R2+A2+P(∞) 2. (3.45) Here P(∞) 1,P(∞) 2,A1,A2,R1andR2are defined in (3.34), (3.36) and (3.43). From (3.44), (3.45) and Proposition 2.1, we obtain the asymptotics of ∂tD,∂xD,b++and B−−asx→+∞in the space-like region, as given in (1.16)-(1.19), which complete the proof of Theorem 1.1. 21 𝜴𝟐𝜴𝟑 -1 𝝀𝟎𝜴𝟒𝜴𝟏 1𝜮𝟏 𝜮𝟔𝜮𝟑𝜮𝟐𝜮𝟒 𝜮𝟓Figure 5: Deformation of the jump contour 4 Asymptotic analysis in the time-like region In this section, we derive the large tasymptotics of the derivatives of Ddefined in (1.4) and the corresponding solutions of the separated NLS equations in the time-like region, wherex 2t<1−δ for any small but fixed δ >0, by performing Deift-Zhou nonlinear steepest descent analysis of the RH problem for Y. 4.1 Deformation of the jump contour Define T(λ) = Y(λ) 1−2πie−2iθ(λ) 0 1 , λ ∈Ω1, Y(λ)1 0 −2i πe2iθ(λ)1 , λ ∈Ω2, Y(λ)1 0 2i πe2iθ(λ)1 , λ ∈Ω3, Y(λ) 1 2πie−2iθ(λ) 0 1 , λ ∈Ω4, Y(λ), λ ∈C\S4 i=1Ωi,(4.1) where the regions Ω i,i= 1, . . . , 4, are illustrated in Fig. 5. Then Tsolves the following RH problem. RH problem for T (1)T(λ) is analytic for λ∈C\Σ, where Σ is shown in Fig 5. 
22 (2)T+(λ) =T−(λ)JT(λ) ,λ∈Σ, where JT(λ) = 1 2πie−2iθ(λ) 0 1 , λ ∈Σ1∪Σ6, −I, λ ∈Σ2∪Σ5,1 0 −2i πe2iθ(λ)1 , λ ∈Σ3∪Σ4.(4.2) (3) As λ→ ∞ ,T(λ) =I+O 1 λ . (4) As λ→ ±1,T(λ) =O(ln(λ∓1)). From (4.2), we have JT(λ)→I, ast→+∞forλ∈Σ\(Σ2∪Σ5). As t→+∞, it is expected that Tcan be approximated by a solution to a RH problem with the jump matrix along the line segment ( −1,1). Therefore, we construct the same global parametrix as given in (3.3). 4.2 Local parametrix near λ=−1 In this subsection, we seek a parametrix P(−1)that satisfies the same jump conditions as Ton Σ in the neighborhood U(−1, δ), for some δ >0. RH problem for P(−1) (1)P(−1)(λ) is analytic for λ∈U(−1, δ)\Σ. (2)P(−1)(λ) has the same jumps as T(λ) onU(−1, δ)∩Σ. (3) On the boundary ∂U(−1, δ),P(−1)(λ) satisfies P(−1)(λ){P(∞)(λ)}−1M−1 +(λ) =I+O(t−1), t→+∞, (4.3) where Mis defined in (3.22). We define the following conformal mapping ξ(λ) =−2[θ(λ)−θ(−1)] =−2(x−2t)(λ+ 1)−2t(λ+ 1)2. (4.4) Asλ→ −1, we have ξ(λ)∼2(2t−x)(λ+ 1). (4.5) Let Φ(CHF )be the confluent hypergeometric parametrix with the parameter β=−1 2, as given in
Appendix A. The solution to the above RH problem can be constructed as follows: P(−1)(λ) =E(−1)(λ)P(−1) 0(ξ(λ))ei(tλ2+xλ)σ3, λ∈U(−1, δ), (4.6) 23 where P(−1) 0(ξ) = Φ(CHF )(ξ)π1 2σ3σ1, argξ∈(0,π 3)∪(2π 3, π), Φ(CHF )(ξ)1 0 i1 π1 2σ3σ1, argξ∈(π 3,π 2), Φ(CHF )(ξ)1 0 −i1 π1 2σ3σ1, argξ∈(π 2,2π 3), Φ(CHF )(ξ)0−i −i0 π1 2σ3σ1, argξ∈(π,4π 3)∪(−π 3,0), Φ(CHF )(ξ)0−i −i1 π1 2σ3σ1, argξ∈(4π 3,3π 2), Φ(CHF )(ξ)0−i −i−1 π1 2σ3σ1, argξ∈(−π 2,−π 3),(4.7) and E(−1)(λ) =M(λ)P(∞)(λ)σ1ξ(λ)−1 2σ3ei(t−x)σ3π−1 2σ3. (4.8) It follows from (3.3), (3.22) and (4.5) that E(−1)(λ) is analytic for λ∈U(−1, δ). From (3.3), (3.22), (4.6)-(4.8) and (A.6), the matching condition (4.3) is fulfilled. Asλ→1,Thas the same jumps as the one in the space-like region in Section 3.1. So we can construct the local parametrix P(1) P(1)(λ) =M(λ)E(1)(λ)P(1) 0(ξ(λ))ei(tλ2+xλ)σ3, λ∈U(1, δ), (4.9) where P(1) 0,E(1)andMare defined in (3.14), (3.15) and (3.22). Then as t→+∞, we have the matching condition P(1)(λ){P(∞)(λ)}−1M−1 +(λ) =I+O(t−1). (4.10) 4.3 Local parametrix near the stationary point In this subsection, we seek a parametrix P(0)that satisfies the same jump conditions as Ton Σ in the neighborhood U(λ0, δ), for some δ >0, where λ0=−x 2t. RH problem for P(0) (1)P(0)(λ) is analytic for λ∈U(λ0, δ)\Σ. (2)P(0)(λ) has the same jumps as T(λ) onU(λ0, δ)∩Σ. (3) On the boundary ∂U(λ0, δ),P(0)(λ) satisfies P(0)(λ){P(∞)(λ)}−1M−1(λ) =I+O(t−1 2), t→+∞, (4.11) where Mis defined in (3.22). Similarly, the solution to the above RH problem can be constructed by using the Cauchy integral P(0)(λ) =M(λ)P(∞)(λ) I−1 2πiZ Γ02ie2i(ts2+xs) π(s−λ)ds0 0 1 0! , (4.12) 24 𝜮𝟏 𝜮𝟔𝜮𝟑𝜮𝟒 𝝏𝑼𝟎 𝝏𝑼−𝟏 𝝏𝑼𝟏 𝝀𝟎 1 -1Figure 6: The jump contour of the RH problem for R where P(∞)is defined in (3.3). The integral contour is defined as Γ 0=U(λ0, δ)∩(Σ3∪Σ4), where Σ 3and Σ 4are shown in Fig. 5. 
As t→+∞, we have the asymptotics of the integral by using the steepest descent method 1 2πiZ Γ02ie2i(ts2+xs) π(s−λ)ds=e−ix2 2t π2Z Γ0e2it(s+x 2t)2 s+x 2t − λ+x 2tds =−ei −x2 2t+π 4 √ 2tπ3 2(λ−λ0)+O(t−3 2).(4.13) From (3.3), (3.22), (4.12) and (4.13), the matching condition (4.11) is fulfilled. 4.4 Final transformation The final transformation is defined as R(λ) = T(λ) M(λ)P(∞)(λ) −1, λ ∈C\(U(λ0, δ)∪U(1, δ)∪U(−1, δ)), T(λ) P(0)(λ) −1, λ ∈U(λ0, δ)\Σ, T(λ) P(1)(λ) −1, λ ∈U(1, δ)\Σ, T(λ) P(−1)(λ) −1, λ ∈U(−1, δ)\Σ,(4.14) where P(∞)andMare defined in (3.3) and (3.22). Then Rfulfills the following RH problem. RH problem for R (1)R(λ) is analytic for λ∈C\Σ, where the contour is shown in Fig. 6. 25 (2)R+(λ) =R−(λ)JR(λ), λ∈Σ, where JR(λ) = P(0)(λ) P(∞)(λ) −1M−1(λ), λ ∈∂U(λ0, δ), P(1)(λ) P(∞)(λ) −1M−1 +(λ), λ ∈∂U(1, δ), P(−1)(λ) P(∞)(λ) −1M−1 +(λ), λ ∈∂U(−1, δ), M(λ)P(∞)(λ) 1 2πie−2iθ(λ) 0 1 P(∞)(λ) −1M−1(λ), λ ∈Σ1∪Σ6, M(λ)P(∞)(λ)1 0 −2i πe2iθ(λ)1 P(∞)(λ) −1M−1(λ), λ ∈Σ3∪Σ4. (4.15) (3)R(λ) =I+O 1 λ , asλ→ ∞ . From the matching conditions (4.3), (4.10) and (4.11), we have as t→+∞, JR(λ) = I+O(t−1), λ ∈∂U(±1, δ), I+O(t−1 2), λ ∈∂U(λ0, δ), I+O(e−c2t), λ ∈Σ1∪Σ3∪Σ4∪Σ6,(4.16) where c2is some positive constant. Then we have as t→+∞, R(λ) =I+O(t−1 2), (4.17) where the error term is
uniform for λbounded away from the jump contour for R. 4.5 Large tasymptotics in the time-like region By tracing back the series of invertible transformations Y7→T7→R, (4.18) we obtain that as t→+∞, Y(λ) =R(λ)A(λ)P(∞)(λ), λ∈C\ ∪4 i=1Ωi∪U(1, δ)∪U(−1, δ)∪U(λ0, δ) , (4.19) where P(∞)andAare defined in (3.3) and (3.21), and the regions Ω i,i= 1, . . . , 4, are shown in Fig. 5. As λ→ ∞ , the asymptotic expansions of P(∞)andAare given in (3.33) and (3.35). We expand Rasλ→ ∞ , R(λ) =I+R1 λ+R2 λ2+O1 λ3 . (4.20) Ast→+∞, we have R(λ) =I+R(1)(λ) t1 2+O(t−1), (4.21) where the error term is uniform for λbounded away from the jump contour for R. Here R(1) satisfies R(1) +(λ)−R(1) −(λ) = ∆( λ), λ∈∂U(λ0, δ), (4.22) 26 with ∆(λ) =e−i x2 2t−π 4 √ 2π3 2(λ−λ0) −1 λ−12πe2i(x−t) 1+e4ix+1 λ2−14πe2i(x−t)+4ix (1+e4ix)2 −1 λ2−14π2e4i(x−t) (1+e4ix)2 λ+1 λ−1−1 λ−14e4ix 1+e4ix+1 λ2−14e8ix (1+e4ix)21 λ−12πe2i(x−t) 1+e4ix−1 λ2−14πe2i(x−t)+4ix (1+e4ix)2! . (4.23) We obtain that R(1)(λ) =( C λ−λ0, λ ∈C\U(λ0, δ), C λ−λ0−∆(λ), λ ∈U(λ0, δ),(4.24) where C= Res(∆( λ), λ0) is given by C=e−i x2 2t−π 4 √ 2π3 2 −1 λ0−12πe2i(x−t) 1+e4ix+1 λ2 0−14πe2i(x−t)+4ix (1+e4ix)2 −1 λ2 0−14π2e4i(x−t) (1+e4ix)2 λ0+1 λ0−1−1 λ0−14e4ix 1+e4ix+1 λ2 0−14e8ix (1+e4ix)21 λ0−12πe2i(x−t) 1+e4ix−1 λ2 0−14πe2i(x−t)+4ix (1+e4ix)2! .(4.25) Expanding R(1)into the Taylor series at infinity, we obtain the asymptotics for R1andR2: R1=C t1 2+O(t−1), R 2=λ0C t1 2+O(t−1), t→+∞. (4.26) Then, it follows that Ycan be expressed in the following form Y(λ) =I+Y1 λ+Y2 λ2+O1 λ3 , λ→ ∞ , (4.27) where Y1=R1+A1+P(∞) 1, Y 2=R1A1+R1P(∞) 1+A1P(∞) 1+R2+A2+P(∞) 2. (4.28) Here P(∞) 1,P(∞) 2,A1,A2,R1andR2are defined in (3.34), (3.36) and (4.26). From (4.27), (4.28) and Proposition 2.1, we obtain the asymptotics of ∂tD,∂xD,b++and B−−ast→+∞in the time-like region, as given in (1.20)-(1.23), which complete the proof of Theorem 1.2. 
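Both stationary-phase evaluations (3.18) and (4.13) rest on the model Gaussian integral ∫ e^{±2itu²} du taken along the steepest-descent ray arg u = ±π/4, where the integrand decays like e^{−2tv²} and the closed form is e^{±iπ/4}√(π/(2t)). A small numerical sketch (the value t = 3 is an arbitrary test choice, not from the paper) of the variant entering (4.13):

```python
import numpy as np

t = 3.0                                  # arbitrary test value
v = np.linspace(-15.0, 15.0, 300001)     # real parameter along the ray u = e^{i pi/4} v
dv = v[1] - v[0]
u = np.exp(1j*np.pi/4) * v
f = np.exp(2j*t*u**2)                    # = exp(-2 t v^2) on this ray: pure decay

# Riemann sum approximation of the contour integral, du = e^{i pi/4} dv
integral = np.sum(f) * np.exp(1j*np.pi/4) * dv
exact = np.exp(1j*np.pi/4) * np.sqrt(np.pi/(2*t))
err = abs(integral - exact)
```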
5 Asymptotic analysis in the transition region In this section, we derive the large tasymptotics of the derivatives of Ddefined in (1.4) and the corresponding solutions of the separated NLS equations in the transition region, where t1 2 x 2t−1 ≤Cfor any constant C > 0, by performing Deift-Zhou nonlinear steepest descent analysis of the RH problem for Y. 5.1 Deformation of the jump contour Define T(λ) = Y(λ) 1−2πie−2iθ(λ) 0 1 , λ ∈Ω1, Y(λ)1 0 2i πe2iθ(λ)1 , λ ∈Ω2, Y(λ) 1 2πie−2iθ(λ) 0 1 , λ ∈Ω3, Y(λ), λ ∈C\S3 i=1Ωi,(5.1) 27 𝜴𝟐𝜮𝟐 𝜮𝟑-1 1𝜴𝟏 𝜴𝟑𝜮𝟏 𝜮𝟒Figure 7: Deformation of the jump contour where the regions Ω i,i= 1,2,3, are illustrated in Fig. 7. Then Tsolves the following RH problem. RH problem for T (1)T(λ) is analytic for λ∈C\Σ, where Σ is shown in Fig 7. (2)T+(λ) =T−(λ)JT(λ) ,λ∈Σ, where JT(λ) = 1 2πie−2iθ(λ) 0 1 , λ ∈Σ1∪Σ4, 1 0 −2i πe2iθ(λ)1 , λ ∈Σ2, −I, λ ∈Σ3.(5.2) (3) As λ→ ∞ ,T(λ) =I+O 1 λ . (4) As λ→ ±1,T(λ) =O(ln(λ∓1)). From (5.2), we have JT(λ)→I, ast→+∞forλ∈Σ\Σ3. Ast→+∞, it is expected that Tcan be approximated by a solution to the RH problem with the jump matrix along the line segment ( −1,1). Therefore, we construct the same
global parametrix as given in (3.3). 5.2 Local parametrix near λ=−1 In this subsection, we seek a parametrix P(−1)that satisfies the same jump conditions as Ton Σ in the neighborhood U(−1, δ), for some δ >0. 28 RH problem for P(−1) (1)P(−1)(λ) is analytic for λ∈U(−1, δ)\Σ. (2)P(−1)(λ) has the same jumps as T(λ) onU(−1, δ)∩Σ. (3) On the boundary ∂U(−1, δ),P(−1)(λ) satisfies P(−1)(λ){P(∞)(λ)}−1=1y 2λ−1 λ+1πe2i(x−t) 0 1 +O(t−1 2), t→+∞. (5.3) We define the following conformal mapping ξ(λ) =e−πi 4√ 2t(λ+ 1). (5.4) Let s=e−πi 4√ 2tx 2t−1 . (5.5) Asλ→ −1, we have ξ2(λ) 2+sξ(λ) =−iθ(λ)−i(x−t). (5.6) Let Ψ be the solution to the RH problem associated with PIV equation with the parameters Θ∞=1 2, Θ = 0 and the four Stokes multipliers s1=s2= 2i, s3=s4= 0; see Section 2.2. The solution to the above RH problem can be constructed as follows: P(−1)(λ) =E(−1)(λ)P(−1) 0(ξ(λ))ei(tλ2+xλ)σ3, λ∈U(−1, δ), (5.7) where P(−1) 0(ξ) =( Ψ(ξ, s)(eπiπ)−1 2σ3, argξ∈(−π 4,3π 2), −Ψ(ξ, s)(eπiπ)−1 2σ3, argξ∈(−π 2,−π 4),(5.8) and E(−1)(λ) =P(∞)(λ)ei(x−t)σ3ξ1 2σ3(eπiπ)1 2σ3. (5.9) It follows from (3.3) and (5.6) that E(−1)(λ) is analytic for λ∈U(−1, δ). From (2.26)-(2.29), (3.3) and (5.7)-(5.9), we have (5.3). We mention that in [37] a model problem which is similar to the RH problem associated with the PIV equation has been used in the studies of the asymptotics of the focusing NLS equation. Asλ→1,Thas the same jumps as the one in the space-like region in Section 3.1. Therefore, forλ∈U(1, δ) with some δ >0, we can construct the same local parametrix P(1)as given in (3.13). From (3.3), (3.13)-(3.15) and (A.6), we have P(1)(λ){P(∞)(λ)}−1= 1 0 −λ+1 (λ−1)πe2i(x+t)1! +O(t−1), t→+∞. (5.10) 5.3 RH problem for M Ast→+∞, it follows from (5.3) and (5.10) that P(−1) P(∞) −1andP(1) P(∞) −1do not tend to the identity matrix on ∂U(−1, δ) and ∂U(1, δ), respectively. To resolve this issue, we construct a matrix-valued function M(λ), which solves the remaining jumps along ∂U(1, δ) and ∂U(−1, δ). 
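The identity (5.6) behind the choice of conformal variable and parameter is elementary but easy to misread in this notation; a symbolic sketch (illustrative only), with ξ(λ) = e^{−πi/4}√(2t)(λ + 1) from (5.4) and s = e^{−πi/4}√(2t)(x/(2t) − 1) from (5.5):

```python
import sympy as sp

lam, x = sp.symbols('lam x', real=True)
t = sp.symbols('t', positive=True)
w = sp.exp(-sp.I*sp.pi/4)

theta = t*lam**2 + x*lam                  # the phase (2.12)
xi = w*sp.sqrt(2*t)*(lam + 1)             # (5.4)
s = w*sp.sqrt(2*t)*(x/(2*t) - 1)          # (5.5)

# (5.6): xi^2/2 + s*xi = -i*theta(lam) - i*(x - t)
diff = sp.simplify(xi**2/2 + s*xi + sp.I*theta + sp.I*(x - t))
```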
29 RH problem for M (1)M(λ) is analytic for λ∈C\(∂U(−1, δ)∪∂U(1, δ)). (2) On the boundaries ∂U(−1, δ) and ∂U(1, δ), we have M+(λ) =M−(λ)1y 2λ−1 λ+1πe2i(x−t) 0 1 , λ∈∂U(−1, δ), (5.11) M+(λ) =M−(λ) 1 0 −λ+1 (λ−1)πe2i(x+t)1! , λ∈∂U(1, δ). (5.12) (3) As λ→ ∞ ,M(λ) =I+O 1 λ . Let A(λ) =I+B λ+ 1+C λ−1, (5.13) then we seek a solution to the above RH problem of the form: M(λ) = A(λ)1−y 2λ−1 λ+1πe2i(x−t) 0 1 , λ ∈U(−1, δ), A(λ) 1 0 λ+1 π(λ−1)e2i(x+t)1! , λ ∈U(1, δ), A(λ), λ ∈C\(U(−1, δ)∪U(1, δ)).(5.14) By the condition that Mis analytic near λ=±1, we derive the coefficients in (5.13) B= 0−πye2i(x−t) 1+y 2e4ix 0−ye4ix 1+y 2e4ix , C = ye4ix 1+y 2e4ix 0 −2 πe2i(x+t) 1+y 2e4ix0 . (5.15) From (3.3), (5.3), (5.10) and (5.14), we have as t→+∞, M−(λ)P(1)(λ){P(∞)(λ)}−1M−1 +(λ) =I+O t−1 , λ∈∂U(1, δ), (5.16) M−(λ)P(−1)(λ){P(∞)(λ)}−1M−1 +(λ) =I+O t−1 2 , λ∈∂U(−1, δ). (5.17) 5.4 Final transformation The final transformation is defined as R(λ) = T(λ) M(λ)P(∞)(λ) −1, λ ∈C\(U(1, δ)∪U(−1, δ)), T(λ) M(λ)P(1)(λ) −1, λ ∈U(1, δ)\Σ, T(λ) M(λ)P(−1)(λ) −1,
λ ∈U(−1, δ)\Σ,(5.18) where P(∞)is defined in (3.3). Then Rfulfills the following RH problem. 30 𝜮𝟐𝜮𝟏 𝜮𝟒𝝏𝑼−𝟏 𝝏𝑼𝟏 -1 1Figure 8: The jump contour of the RH problem for R RH problem for R (1)R(λ) is analytic for λ∈C\Σ, where the contour is shown in Fig. 8. (2)R+(λ) =R−(λ)JR(λ), λ∈Σ, where JR(λ) = M−(λ)P(1)(λ) P(∞)(λ) −1M−1 +(λ), λ ∈∂U(1, δ), M−(λ)P(−1)(λ) P(∞)(λ) −1M−1 +(λ), λ ∈∂U(−1, δ), M(λ)P(∞)(λ)1 2πie−2iθ 0 1 P(∞)(λ) −1M−1(λ), λ ∈Σ1∪Σ4, M(λ)P(∞)(λ)1 0 −2i πe2iθ1 P(∞)(λ) −1M−1(λ), λ ∈Σ2. (5.19) (3)R(λ) =I+O 1 λ , asλ→ ∞ . From the matching conditions (5.16) and (5.17), we have as t→+∞, JR(λ) = I+O(t−1), λ ∈∂U(1, δ), I+O(t−1 2), λ ∈∂U(−1, δ), I+O(e−c3t), λ ∈Σ1∪Σ2∪Σ4,(5.20) where c3is some positive constant. Then we have as t→+∞, R(λ) =I+O(t−1 2), (5.21) where the error term is uniform for λbounded away from the jump contour for R. 5.5 Painlev´ e IV asymptotics in the transition region By tracing back the series of invertible transformations Y7→T7→R, (5.22) 31 we obtain that as t→+∞, Y(λ) =R(λ)A(λ)P(∞)(λ), λ∈C\ ∪3 i=1Ωi∪∂U(1, δ)∪∂(−1, δ) , (5.23) where P(∞)andAare defined in (3.3) and (5.13), and the regions Ω i,i= 1,2,3, are shown in Fig. 7. As λ→ ∞ , the asymptotic expansion of P(∞)is given in (3.34). From (5.13), we have A(λ) =I+A1 λ+A2 λ2+1 λ3 , λ→ ∞ , (5.24) where A1= ye4ix 1+y 2e4ix−πye2i(x−t) 1+y 2e4ix −2 πe2i(x+t) 1+y 2e4ix−ye4ix 1+y 2e4ix , A 2= ye4ix 1+y 2e4ixπye2i(x−t) 1+y 2e4ix −2 πe2i(x+t) 1+y 2e4ixye4ix 1+y 2e4ix . (5.25) We have the asymptotic expansion R(λ) =I+R1 λ+R2 λ2+O1 λ3 , λ→ ∞ . (5.26) Ast→+∞, we have R(λ) =I+R(1)(λ) t1 2+O t−1 , (5.27) where the error term is uniform for λbounded away from the jump contour for R. 
Here R(1) satisfies R(1) +(λ)−R(1) −(λ) = ∆( λ), λ∈∂U(−1, δ), (5.28) with ∆(λ) =eπ 4i √ 2(λ+ 1) −HA(λ)σ3A−1(λ)−πy 2u 2+sλ−1 λ+ 1e2i(x−t)A(λ)0 1 0 0 A−1(λ) =−eπ 4i √ 2(λ+ 1) H 1 +4ye4ix (λ2−1)(1+y 2e4ix)22yπe2i(x−t) (λ+1)(1+y 2e4ix)+2πy2e2i(x−t)+4ix (λ2−1)(1+y 2e4ix)2 −4e2i(x+t) π(λ−1)(1+y 2e4ix)+4ye2i(x+t)+4ix π(λ2−1)(1+y 2e4ix)2 −1−4ye4ix (λ2−1)(1+y 2e4ix)2 + πy 2u 2+s e2i(x−t) 2 πe2i(x+t) (λ+1)(1+y 2e4ix)+2 πye2i(x+t)+4ix (λ2−1)(1+y 2e4ix)2λ−1 λ+1+2ye4ix (λ+1)(1+y 2e4ix)+y2e8ix (λ2−1)(1+y 2e4ix)2 −4 π2e4i(x+t) (λ2−1)(1+y 2e4ix)2 −2 πe2i(x+t) (λ+1)(1+y 2e4ix)−2 πye2i(x+t)+4ix (λ2−1)(1+y 2e4ix)2 . (5.29) We obtain that R(1)(λ) =(C λ+1+D (λ+1)2, λ ∈C\U(−1, δ), C λ+1+D (λ+1)2−∆(λ), λ ∈U(−1, δ),(5.30) where C=−eπi 4H√ 2 1−ye4ix (1+y 2e4ix)2−πy2e2i(x−t)+4ix 2(1+y 2e4ix)2 2e2i(x+t) π(1+y 2e4ix)2−1 +ye4ix (1+y 2e4ix)2 −πye2i(x−t)+πi 4 2√ 2u 2+s −ye2i(x+t)+4ix 2π(1+y 2e4ix)21−y2e8ix 4(1+y 2e4ix)2 e4i(x+t) π2(1+y 2e4ix)2ye2i(x+t)+4ix 2π(1+y 2e4ix)2 ,(5.31) 32 D=−eπi 4H√ 2 −2ye4ix (1+y 2e4ix)22πye2i(x−t) (1+y 2e4ix)2 −2ye2i(x+t)+4ix π(1+y 2e4ix)22ye4ix (1+y 2e4ix)2 −πye2i(x−t)+πi 4 2√ 2u 2+s 2e2i(x+t) π(1+y 2e4ix)2−2 +2ye4ix 1+y 2e4ix−y2e8ix 2(1+y 2e4ix)2 2e4i(x+t) π2(1+y 2e4ix)2 −2e2i(x+t) π(1+y 2e4ix)2 .(5.32) Expanding R(1)into the Taylor series at infinity, we obtain the asymptotics for R1andR2: R1=C t1 2+O(t−1), R 2=−C+D t1 2+O(t−1), t→+∞. (5.33) Then, Ycan be expressed in the following form Y(λ) =I+Y1 λ+Y2 λ2+O1 λ3 , λ→ ∞ , (5.34) where Y1=R1+A1+P(∞) 1, Y 2=R1A1+R1P(∞) 1+A1P(∞) 1+R2+A2+P(∞) 2. (5.35) Here P(∞) 1,P(∞) 2,A1,A2,R1andR2are defined
in (3.34), (5.25) and (5.33). From (5.34), (5.35) and Proposition 2.1, we obtain the asymptotics of ∂tD,∂xD,b++and B−−ast→+∞in the transition region, as given in (1.38)-(1.41), which complete the proof of Theorem 1.8. 6 Asymptotic analysis of the PIV equation In this section, we derive the asymptotics of a special solution of the PIV equation (1.27) with the parameters Θ ∞=1 2, Θ = 0, which is corresponding to the special Stokes multipliers s1=s2= 2i, s3=s4= 0, by using the RH problem given in Section 2.2. Define τ=eπi 4s∈R. (6.1) We introduce the transformation Φ(z) = e−πi 4|τ|1 2σ3Ψ(e−πi 4|τ|z, e−πi 4τ)e−iτ2 2φ(z)σ3,argz∈(−π 4,7π 4), (6.2) where φ(z) =z2+ 2 sgn( τ)z. (6.3) Then Φ solves the following RH problem. 33 O𝚺𝟐𝚽 𝚺𝟏𝚽 𝚺𝟑𝚽Figure 9: The jump contour of the RH problem for Φ RH problem for Φ (1) Φ( z) is analytic for z∈C\ΣΦ, shown in Fig. 9. (2) Φ +(z) = Φ −(z)JΦ(z),z∈Σ(Φ), where JΦ(z) = 1 0 2ieiτ2φ(z)1 , z ∈Σ(Φ) 1, 1 2ie−iτ2φ(z) 0 1 , z ∈Σ(Φ) 2, −I, z ∈Σ(Φ) 3.(6.4) (3) As z→ ∞ , Φ(z) = I+O1 z z−1 2σ3, (6.5) where the branch for z1 2is taken such that arg z∈(−π 4,7π 4). (4) As z→0, Φ( z) =O(ln|z|). Since the position of the stationary point of the phase function φdepends on the sign of τ, we will consider these two cases separately in Sections 6.1 and 6.2. 6.1 Asymptotic analysis of the PIV equation as τ→+∞ Forτ >0, the function φ(z) =z2+ 2zpossesses the stationary point z=−1. 6.1.1 Deformation of the jump contour Define 34 O -1𝛀𝚺𝟐𝑻 𝚺𝟏𝑻 𝚺𝟑𝑻Figure 10: The jump contour of the RH problem for T T(z) = Φ(z) 1−2ie−iτ2φ(z) 0 1 , z ∈Ω, Φ(z), z ∈C\Ω,(6.6) where the region Ω is shown in Fig. 10. RH problem for T (1)T(z) is analytic for z∈C\Σ(T), where Σ(T)is shown in Fig. 10. (2)T+(z) =T−(z)JT(z) ,z∈Σ(T), where JT(z) = 1 0 2ieiτ2φ(z)1 , z ∈Σ(T) 1, 1 2ie−iτ2φ(z) 0 1 , z ∈Σ(T) 2, −I, z ∈Σ(T) 3.(6.7) (3) As z→ ∞ ,T(z)z1 2σ3=I+O 1 z . (4) As z→0,T(z) =O(ln|z|). From (6.7), we have JT(z)→I, asτ→+∞, forz∈Σ(T)\Σ(T) 3. Asτ→+∞, it is expected that Tcan be approximated by z−1 2σ3. 
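The scaling (6.2) is designed so that the PIV exponent ξ²/2 + sξ turns into the phase φ of (6.3); the following symbolic sketch (illustrative, covering both signs of τ) checks this, writing τ = sgn(τ)|τ|, ξ = e^{−πi/4}|τ|z and s = e^{−πi/4}τ as in (6.1)-(6.2).

```python
import sympy as sp

z = sp.symbols('z')
ta = sp.symbols('ta', positive=True)      # ta stands for |tau|
w = sp.exp(-sp.I*sp.pi/4)

diffs = []
for sgn in (1, -1):                       # tau > 0 and tau < 0
    tau = sgn*ta
    xi = w*ta*z                           # xi = e^{-i pi/4} |tau| z
    s = w*tau                             # s = e^{-i pi/4} tau, as in (6.1)
    phi = z**2 + 2*sgn*z                  # phi(z) = z^2 + 2 sgn(tau) z, (6.3)
    # (xi^2/2 + s*xi) should equal -i*tau^2*phi(z)/2
    diffs.append(sp.simplify(xi**2/2 + s*xi + sp.I*tau**2*phi/2))
```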
In order to fulfill the matching conditions later, we define the global parametrix as follows:

P^{(∞)}(z) = [[1, −1/z], [0, 1]] z^{−σ_3/2},   (6.8)

where the branch for z^{1/2} is taken such that arg z ∈ (−π/4, 7π/4).

6.1.2 Local parametrix near z = 0

In this subsection, we seek a parametrix P^{(0)} satisfying the same jump conditions as T on Σ^{(T)} in the neighborhood U(0, δ), for some δ > 0.

RH problem for P^{(0)}

(1) P^{(0)}(z) is analytic for z ∈ U(0, δ)\Σ^{(T)}.

(2) P^{(0)}(z) has the same jumps as T(z) on U(0, δ) ∩ Σ^{(T)}.

(3) On the boundary ∂U(0, δ), P^{(0)}(z) satisfies

P^{(0)}(z){P^{(∞)}(z)}^{−1} = I + O(τ^{−2}),  τ → +∞.   (6.9)

We define the following conformal mapping

ζ(z) = 2τ^2 (z + z^2/2).   (6.10)

As z → 0, we have

ζ(z) ∼ 2τ^2 z.   (6.11)

Let Φ^{(CHF)} be the confluent hypergeometric parametrix with the parameter β = 1/2, as given in Appendix A. The solution to
the above RH problem can be constructed as follows: P(0)(z) =E(0)(z)P(0) 0(ζ(z))eiτ2 2φ(z)σ3, (6.12) where P(0) 0(ζ) = Φ(CHF )(ζ), argζ∈(0,π 4)∪(2π 3, π), Φ(CHF )(ζ)1 0 2i1 , argζ∈(π 4,π 3), Φ(CHF )(ζ)1 0 i1 , argζ∈(π 3,2π 3), Φ(CHF )(ζ)0i i0 , argζ∈(π,5π 4)∪(−π 3,−π 4), Φ(CHF )(ζ)0i i−2 , argζ∈(5π 4,4π 3), Φ(CHF )(ζ)0i i−1 , argζ∈(4π 3,3π 2)∪(−π 2,−π 3), Φ(CHF )(ζ)0−i −i0 , argζ∈(−π 4,0),(6.13) and E(0)(z) =z−1 2σ3ζ(z)1 2σ3. (6.14) It follows from (6.11) that E(0)(z) is analytic for z∈U(0, δ). From (6.8), (6.12)-(6.14) and (A.6), the matching condition (6.9) is fulfilled. 36 6.1.3 Local parametrix near z=−1 In this subsection, we seek a parametrix P(−1)that satisfies the same jump conditions as Ton Σ(T)in the neighborhood U(−1, δ), for some δ >0. RH problem for P(−1) (1)P(−1)(z) is analytic for z∈U(−1, δ)\Σ(T). (2)P(−1)(z) has the same jumps as T(z) onU(−1, δ)∩Σ(T). (3) On the boundary ∂U(−1, δ),P(−1)(z) satisfies P(−1)(z){P(∞)(z)}−1=I+O τ−1 , τ→+∞. (6.15) The solution to the above RH problem can be constructed by using the Cauchy integral P(−1)(z) =P(∞)(z) I+1 πZ Γ−1e−iτ2(s2+2s) s−zds0 1 0 0! , (6.16) where P(∞)is defined in (6.8). The integral contour is defined as Γ −1=U(−1, δ)∩Σ(T) 2, where Σ(T) 2is shown in Fig. 10. As τ→+∞, we have the asymptotics of the integral by using the steepest descent method 1 πZ Γ−1e−iτ2(s2+2s) s−zds=−eiτ2+3π 4i τ(z+ 1)√π+O(τ−2). (6.17) From (6.8), (6.16) and (6.17), the matching condition (6.15) is fulfilled. 6.1.4 Final transformation The final transformation is defined as R(z) = T(z) P(∞)(z) −1, z ∈C\(U(0, δ)∪U(−1, δ)), T(z) P(0)(z) −1, z ∈U(0, δ)\Σ(T), T(z) P(−1)(z) −1, z ∈U(−1, δ)\Σ(T).(6.18) Then Rfulfills the following RH problem. RH problem for R (1)R(z) is analytic for z∈C\Σ(R), where Σ(R)is shown in Fig. 11. 
(2)R+(z) =R−(z)JR(z), z∈Σ(R), where JR(z) = P(0)(z) P(∞)(z) −1, z ∈∂U(0, δ), P(−1)(z) P(∞)(z) −1, z ∈∂U(−1, δ), P(∞)(z)1 0 2ieiτ2φ(z)1 P(∞)(z) −1, z ∈Σ(R) 1, P(∞)(z) 1 2ie−iτ2φ(z) 0 1 P(∞)(z) −1, z ∈Σ(R) 2.(6.19) 37 𝝏𝑼−𝟏 𝝏𝑼𝟎 -1 O𝚺𝟐𝑹𝚺𝟏𝑹Figure 11: The jump contour of the RH problem for R (3)R(z) =I+O 1 z , asz→ ∞ . From the matching conditions (6.9) and (6.15), we have as τ→+∞, JR(z) = I+O(τ−2), z ∈∂U(0, δ), I+O(τ−1), z ∈∂U(−1, δ), I+O(e−c4τ), z ∈Σ(R) 1∪Σ(R) 2,(6.20) where c4is some positive constant. Then we have as τ→+∞, R(z) =I+O(τ−1), (6.21) where the error term is uniform for zbounded away from the jump contour for R. 6.1.5 Proof of Proposition 1.7: asymptotics of the PIV as τ=eπi 4s→+∞ By tracing back the series of invertible transformations Ψ7→Φ7→T7→R, (6.22) we obtain that for z∈C\(Ω∪U(0, δ)∪U(−1, δ)), where the region Ω is shown in Fig. 10, as τ→+∞, Ψ(sz, s) =s−1 2σ3Φ(z)es2 2φ(z)σ3,Φ(z) =R(z)P(∞)(z). (6.23) Here s=e−πi 4τandP(∞)is defined in (6.8). From (6.8), we have as z→ ∞ , P(∞)(z) =" I+P(∞) 1 z+P(∞) 2 z2+O1 z3# z−1 2σ3, (6.24) where P(∞) 1=0−1 0 0 , P(∞) 2= 0. (6.25) 38 Asz→ ∞ , we have the asymptotic expansion R(z) =I+R1 z+R2 z2+O1 z3 . (6.26) Asτ→+∞, we have R(z) =I+R(1)(z) τ+O(τ−2),
(6.27) where the error term is uniform for zbounded away from the jump contour for R. Here R(1) satisfies R(1) +(z)−R(1) −(z) = ∆( z), z∈∂U(−1, δ), (6.28) with ∆(z) =−eiτ2+3πi 4√πz(z+ 1)0 1 0 0 , z∈∂U(−1, δ). (6.29) We obtain that R(1)(z) =C z+1, z ∈C\U(−1, δ), C z+1−∆(z), z ∈U(−1, δ),(6.30) where C= Res(∆( z),−1) is given by C=eiτ2+3π 4i √π0 1 0 0 . (6.31) Expanding R(1)into the Taylor series at infinity, we obtain the asymptotics for R1andR2: R1=C τ+O(τ−2), R 2=−C τ+O(τ−2), τ→+∞. (6.32) Then, Φ can be expressed in the following form Φ(z) = I+Φ1 z+Φ2 z2+O(z−3) z−1 2σ3, z→ ∞ , (6.33) where Φ1=R1+P(∞) 1,Φ2=R1P(∞) 1+R2+P(∞) 2. (6.34) Here P(∞) 1,P(∞) 2,R1andR2are defined in (6.25) and (6.32). From (2.27), (2.28), (2.30), (6.23), (6.33) and (6.34), we obtain the following asymptotics fory,Handuasτ=eπi 4s→+∞: y=−2(Ψ1)12=−2(Φ1)12= 2−2ie−s2 √πs+O(s−2), (6.35) H= (Ψ 1)22=s(Φ1)22=O(s−1), (6.36) u(s) =−2s−2ie−s2 √π+O(s−1), (6.37) as given in (1.32), (1.34) and (1.35). 39 O 1𝜴𝚺𝟐𝑻𝚺𝟏𝑻 𝚺𝟑𝑻Figure 12: The jump contour of the RH problem of T 6.2 Asymptotic analysis of the PIV equation as τ→ −∞ Forτ <0, the phase function φ(z) =z2−2zpossesses the stationary point z= 1. 6.2.1 Deformation of the jump contour Define T(z) = Φ(z)1 0 2ieiτ2φ(z)1 , z ∈Ω, Φ(z), z ∈C\Ω,(6.38) where the region Ω is shown in Fig. 12. RH problem for T (1)T(z) is analytic for z∈C\Σ(T), where Σ(T)is shown in Fig. 12. (2)T+(z) =T−(z)JT(z) ,z∈Σ(T), where JT(z) = 1 0 2ieiτ2φ(z)1 , z ∈Σ(T) 1, 1 2ie−iτ2φ(z) 0 1 , z ∈Σ(T) 2, −I, z ∈Σ(T) 3.(6.39) (3) As z→ ∞ ,T(z)z1 2σ3=I+O 1 z . (4) As z→0,T(z) =O(ln|z|). From (6.39), we have JT(z)→I, asτ→ −∞ , for z∈Σ(T)\Σ(T) 3. As τ→ −∞ , it is expected that Tcan be approximated by z−1 2σ3. Similarly, we can construct the same global parametrix P(∞)given in (6.8). 40 6.2.2 Local parametrix near z= 0 In this subsection, we seek a parametrix P(0)satisfying the same jump conditions as Ton Σ(T) in the neighborhood U(0, δ), for some δ >0. 
RH problem for $P^{(0)}$

(1) $P^{(0)}(z)$ is analytic for $z\in U(0,\delta)\setminus\Sigma^{(T)}$.
(2) $P^{(0)}(z)$ has the same jumps as $T(z)$ on $U(0,\delta)\cap\Sigma^{(T)}$.
(3) On the boundary $\partial U(0,\delta)$, $P^{(0)}(z)$ satisfies
$$P^{(0)}(z)\{P^{(\infty)}(z)\}^{-1}=I+O\left(\tau^{-2}\right),\qquad \tau\to-\infty.\tag{6.40}$$
We define the following conformal mapping:
$$\zeta(z)=2\tau^2\left(z-\frac{z^2}{2}\right).\tag{6.41}$$
As $z\to0$, we have
$$\zeta(z)\sim2\tau^2z.\tag{6.42}$$
Let $\Phi^{(CHF)}$ be the confluent hypergeometric parametrix with the parameter $\beta=-\frac{1}{2}$, as given in Appendix A. The solution to the above RH problem can be constructed as follows:
$$P^{(0)}(z)=E^{(0)}(z)P^{(0)}_0(\zeta(z))e^{\frac{i\tau^2}{2}\varphi(z)\sigma_3},\tag{6.43}$$
where
$$P^{(0)}_0(\zeta)=\begin{cases}\Phi^{(CHF)}(\zeta)\sigma_1e^{-\frac{\pi i}{2}\sigma_3}, & \arg\zeta\in(0,\frac{\pi}{3})\cup(\frac{3\pi}{4},\pi),\\ \Phi^{(CHF)}(\zeta)\begin{pmatrix}1&0\\ i&1\end{pmatrix}\sigma_1e^{-\frac{\pi i}{2}\sigma_3}, & \arg\zeta\in(\frac{\pi}{3},\frac{2\pi}{3}),\\ \Phi^{(CHF)}(\zeta)\begin{pmatrix}1&0\\ 2i&1\end{pmatrix}\sigma_1e^{-\frac{\pi i}{2}\sigma_3}, & \arg\zeta\in(\frac{2\pi}{3},\frac{3\pi}{4}),\\ \Phi^{(CHF)}(\zeta)\begin{pmatrix}0&-i\\ -i&0\end{pmatrix}\sigma_1e^{-\frac{\pi i}{2}\sigma_3}, & \arg\zeta\in(\pi,\frac{4\pi}{3}),\\ \Phi^{(CHF)}(\zeta)\begin{pmatrix}0&-i\\ -i&1\end{pmatrix}\sigma_1e^{-\frac{\pi i}{2}\sigma_3}, & \arg\zeta\in(\frac{4\pi}{3},\frac{3\pi}{2})\cup(-\frac{\pi}{2},-\frac{\pi}{3}),\\ \Phi^{(CHF)}(\zeta)\begin{pmatrix}0&-i\\ -i&2\end{pmatrix}\sigma_1e^{-\frac{\pi i}{2}\sigma_3}, & \arg\zeta\in(-\frac{\pi}{3},-\frac{\pi}{4}),\\ \Phi^{(CHF)}(\zeta)\begin{pmatrix}0&i\\ i&0\end{pmatrix}\sigma_1e^{-\frac{\pi i}{2}\sigma_3}, & \arg\zeta\in(-\frac{\pi}{4},0),\end{cases}\tag{6.44}$$
and
$$E^{(0)}(z)=z^{-\frac{1}{2}\sigma_3}e^{\frac{\pi i}{2}\sigma_3}\sigma_1\zeta(z)^{-\frac{1}{2}\sigma_3}.\tag{6.45}$$
It follows from (6.8) and (6.42) that $E^{(0)}(z)$ is analytic for $z\in U(0,\delta)$. From
(6.8), (6.43)–(6.45) and (A.6), the matching condition (6.40) is fulfilled.

6.2.3 Local parametrix near $z=1$

In this subsection, we seek a parametrix $P^{(1)}$ satisfying the same jump conditions as $T$ on $\Sigma^{(T)}$ in the neighborhood $U(1,\delta)$, for some $\delta>0$.

RH problem for $P^{(1)}$

(1) $P^{(1)}(z)$ is analytic for $z\in U(1,\delta)\setminus\Sigma^{(T)}$.
(2) $P^{(1)}(z)$ has the same jumps as $T(z)$ on $U(1,\delta)\cap\Sigma^{(T)}$.
(3) On the boundary $\partial U(1,\delta)$, $P^{(1)}(z)$ satisfies
$$P^{(1)}(z)\{P^{(\infty)}(z)\}^{-1}=I+O\left(\tau^{-1}\right),\qquad \tau\to-\infty.\tag{6.46}$$
The solution to the above RH problem can be constructed by using the Cauchy integral
$$P^{(1)}(z)=P^{(\infty)}(z)\left(I+\frac{1}{\pi}\int_{\Gamma_1}\frac{e^{i\tau^2(s^2-2s)}}{s-z}\,ds\begin{pmatrix}0&0\\ 1&0\end{pmatrix}\right),\tag{6.47}$$
where $P^{(\infty)}$ is defined in (6.8). The integral contour is defined as $\Gamma_1=U(1,\delta)\cap\Sigma^{(T)}_1$, where $\Sigma^{(T)}_1$ is shown in Fig. 12. As $\tau\to-\infty$, we have the asymptotics of the integral by using the steepest descent method:
$$\frac{1}{\pi}\int_{\Gamma_1}\frac{e^{i\tau^2(s^2-2s)}}{s-z}\,ds=\frac{e^{-i\tau^2+\frac{\pi}{4}i}}{\tau(z-1)\sqrt{\pi}}+O(\tau^{-2}).\tag{6.48}$$
From (6.8), (6.47) and (6.48), the matching condition (6.46) is fulfilled.

6.2.4 Final transformation

The final transformation is defined as
$$R(z)=\begin{cases}T(z)\{P^{(\infty)}(z)\}^{-1}, & z\in\mathbb{C}\setminus(U(0,\delta)\cup U(1,\delta)),\\ T(z)\{P^{(0)}(z)\}^{-1}, & z\in U(0,\delta)\setminus\Sigma^{(T)},\\ T(z)\{P^{(1)}(z)\}^{-1}, & z\in U(1,\delta)\setminus\Sigma^{(T)}.\end{cases}\tag{6.49}$$
Then $R$ fulfills the following RH problem.

RH problem for $R$

(1) $R(z)$ is analytic for $z\in\mathbb{C}\setminus\Sigma^{(R)}$, where $\Sigma^{(R)}$ is shown in Fig. 13.
(2) $R_+(z)=R_-(z)J_R(z)$, $z\in\Sigma^{(R)}$, where
$$J_R(z)=\begin{cases}P^{(0)}(z)\{P^{(\infty)}(z)\}^{-1}, & z\in\partial U(0,\delta),\\ P^{(1)}(z)\{P^{(\infty)}(z)\}^{-1}, & z\in\partial U(1,\delta),\\ P^{(\infty)}(z)\begin{pmatrix}1&0\\ 2ie^{i\tau^2\varphi(z)}&1\end{pmatrix}\{P^{(\infty)}(z)\}^{-1}, & z\in\Sigma^{(R)}_1,\\ P^{(\infty)}(z)\begin{pmatrix}1&2ie^{-i\tau^2\varphi(z)}\\ 0&1\end{pmatrix}\{P^{(\infty)}(z)\}^{-1}, & z\in\Sigma^{(R)}_2.\end{cases}\tag{6.50}$$

Figure 13: The jump contour of the RH problem for $R$.

(3) $R(z)=I+O\left(\frac{1}{z}\right)$, as $z\to\infty$.

From the matching conditions (6.40) and (6.46), we have, as $\tau\to-\infty$,
$$J_R(z)=\begin{cases}I+O(\tau^{-2}), & z\in\partial U(0,\delta),\\ I+O(\tau^{-1}), & z\in\partial U(1,\delta),\\ I+O(e^{c_5\tau}), & z\in\Sigma^{(R)}_1\cup\Sigma^{(R)}_2,\end{cases}\tag{6.51}$$
where $c_5$ is some positive constant. Then we have, as $\tau\to-\infty$,
$$R(z)=I+O(\tau^{-1}),\tag{6.52}$$
where the error term is uniform for $z$ bounded away from the jump contour for $R$.
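The steepest descent evaluation in (6.48) rests on deforming the contour to the path of steepest descent through the stationary point $z=1$ of $\varphi(z)=z^2-2z$, along which the oscillatory integrand becomes a real Gaussian. A minimal numeric sketch of this mechanism, assuming the parametrization $s=1+e^{i\pi/4}t$ of the descent path (the overall sign of the leading term depends on the orientation chosen for the contour, which may differ from that of $\Gamma_1$ in the text):

```python
import cmath, math

def descent_integral(tau, z, half_width=1.5, n=30000):
    """(1/pi) * integral of e^{i tau^2 (s^2 - 2s)}/(s - z) ds along the
    steepest descent path s = 1 + e^{i pi/4} t, on which the phase
    i*tau^2*(s^2 - 2s) = -tau^2 t^2 - i tau^2 is a decaying Gaussian."""
    w = cmath.exp(1j * math.pi / 4)          # direction of the descent path
    h = 2 * half_width / n
    total = 0.0 + 0.0j
    for k in range(n + 1):
        t = -half_width + k * h
        s = 1 + w * t
        f = cmath.exp(1j * tau**2 * (s * s - 2 * s)) / (s - z)
        total += (0.5 if k in (0, n) else 1.0) * f   # trapezoid rule
    return total * h * w / math.pi

tau, z = 6.0, 3.0
approx = descent_integral(tau, z)
# leading-order Gaussian contribution from the stationary point, cf. (6.48)
# (written with 1 - z; the sign matches up to contour orientation)
leading = cmath.exp(-1j * tau**2 + 1j * math.pi / 4) / (tau * (1 - z) * math.sqrt(math.pi))
rel_err = abs(approx - leading) / abs(leading)   # expected to be O(tau^{-2})
```

At `tau = 6` the relative error is of order $1/(2\tau^2)$, consistent with the $O(\tau^{-2})$ correction claimed in (6.48).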
6.2.5 Proof of Proposition 1.7: asymptotics of the PIV as $\tau=e^{\frac{\pi i}{4}}s\to-\infty$

By tracing back the series of invertible transformations
$$\Psi\mapsto\Phi\mapsto T\mapsto R,\tag{6.53}$$
we obtain that for $z\in\mathbb{C}\setminus(\Omega\cup U(0,\delta)\cup U(1,\delta))$, where the region $\Omega$ is shown in Fig. 12, as $\tau\to-\infty$,
$$\Psi(sz,s)=(e^{\pi i}s)^{-\frac{1}{2}\sigma_3}\Phi(z)e^{\frac{s^2}{2}\varphi(z)\sigma_3},\qquad \Phi(z)=R(z)P^{(\infty)}(z).\tag{6.54}$$
Here $s=e^{-\frac{\pi i}{4}}\tau$, $P^{(\infty)}$ is defined in (6.8), and the asymptotic expansion of $P^{(\infty)}$ is given in (6.24). As $z\to\infty$, we have the asymptotic expansion
$$R(z)=I+\frac{R_1}{z}+\frac{R_2}{z^2}+O\left(\frac{1}{z^3}\right).\tag{6.55}$$
As $\tau\to-\infty$, we have
$$R(z)=I+\frac{R^{(1)}(z)}{\tau}+O(\tau^{-2}),\tag{6.56}$$
where the error term is uniform for $z$ bounded away from the jump contour for $R$. Here $R^{(1)}$ satisfies
$$R^{(1)}_+(z)-R^{(1)}_-(z)=\Delta(z),\qquad z\in\partial U(1,\delta),\tag{6.57}$$
with
$$\Delta(z)=\frac{e^{-i\tau^2+\frac{\pi i}{4}}}{\sqrt{\pi}(z-1)}\begin{pmatrix}-1&-\frac{1}{z}\\ z&1\end{pmatrix},\qquad z\in\partial U(1,\delta).\tag{6.58}$$
We obtain that
$$R^{(1)}(z)=\begin{cases}\dfrac{C}{z-1}, & z\in\mathbb{C}\setminus U(1,\delta),\\[6pt] \dfrac{C}{z-1}-\Delta(z), & z\in U(1,\delta),\end{cases}\tag{6.59}$$
where $C=\operatorname{Res}(\Delta(z),1)$ is given by
$$C=-\frac{e^{-i\tau^2+\frac{\pi}{4}i}}{\sqrt{\pi}}\begin{pmatrix}1&1\\ -1&-1\end{pmatrix}.\tag{6.60}$$
Expanding $R^{(1)}$ into the Taylor series at infinity, we obtain the asymptotics for $R_1$ and $R_2$:
$$R_1=\frac{C}{\tau}+O(\tau^{-2}),\qquad R_2=\frac{C}{\tau}+O(\tau^{-2}),\qquad \tau\to-\infty.\tag{6.61}$$
Then $\Phi$ can be expressed in the following form:
$$\Phi(z)=\left[I+\frac{\Phi_1}{z}+\frac{\Phi_2}{z^2}+O(z^{-3})\right]z^{-\frac{1}{2}\sigma_3},\qquad z\to\infty,\tag{6.62}$$
where
$$\Phi_1=R_1+P^{(\infty)}_1,\qquad \Phi_2=R_1P^{(\infty)}_1+R_2+P^{(\infty)}_2.\tag{6.63}$$
Here $P^{(\infty)}_1$, $P^{(\infty)}_2$, $R_1$ and $R_2$ are defined in (6.25) and (6.61). From (2.27), (2.28), (2.30), (6.54), (6.62) and (6.63), we obtain the following asymptotics for $y$, $H$ and $u$ as $\tau=e^{\frac{\pi i}{4}}s\to-\infty$:
$$y(s)=-2(\Psi_1)_{12}=-2(\Phi_1)_{12}=2+\frac{2e^{s^2}}{\sqrt{\pi}\,s}+O(s^{-2}),\tag{6.64}$$
$$H(s)=(\Psi_1)_{22}=-s(\Phi_1)_{22}=-\frac{e^{s^2}}{\sqrt{\pi}}+O(s^{-1}),\tag{6.65}$$
$$u(s)=-2s-\frac{2e^{s^2}}{\sqrt{\pi}}+O(s^{-1}),\tag{6.66}$$
as given in (1.33), (1.36) and (1.37).

Figure 14: The jump contour for the RH problem for $\Phi^{(CHF)}$.

Acknowledgements

We thank the anonymous referees for their constructive suggestions. The work of Shuai-Xia Xu was supported in part by the National Natural Science Foundation of China under grant numbers 12431008, 12371257 and 11971492, and by the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2022B1515020063). Yu-Qiu Zhao was supported in part by the National Natural Science Foundation of China under grant numbers 11971489 and 12371077.

A Confluent hypergeometric parametrix

As shown in [24], the confluent hypergeometric parametrix $\Phi^{(CHF)}(\xi)=\Phi^{(CHF)}(\xi;\beta)$, with a parameter $\beta$, is a solution to the following RH problem. Some applications of this parametrix in the study of the asymptotics of Fredholm determinants of integrable kernels with jump-type Fisher-Hartwig singularities can be found in [5, 9], etc.

RH problem for $\Phi^{(CHF)}$

(1) $\Phi^{(CHF)}(\xi)$ is analytic in $\mathbb{C}\setminus\cup_{i=1}^{6}\Sigma_i$, where $\Sigma_i$, $i=1,\dots,6$, are shown in Fig. 14.
(2) $\Phi^{(CHF)}_+(\xi)=\Phi^{(CHF)}_-(\xi)J_i(\xi)$, $\xi\in\Sigma_i$, $i=1,\dots,6$, where
$$J_1(\xi)=\begin{pmatrix}0&e^{-\beta\pi i}\\ -e^{\beta\pi i}&0\end{pmatrix},\quad J_2(\xi)=\begin{pmatrix}1&0\\ e^{\beta\pi i}&1\end{pmatrix},\quad J_3(\xi)=\begin{pmatrix}1&0\\ e^{-\beta\pi i}&1\end{pmatrix},$$
$$J_4(\xi)=\begin{pmatrix}0&e^{\beta\pi i}\\ -e^{-\beta\pi i}&0\end{pmatrix},\quad J_5(\xi)=\begin{pmatrix}1&0\\ e^{-\beta\pi i}&1\end{pmatrix},\quad J_6(\xi)=\begin{pmatrix}1&0\\ e^{\beta\pi i}&1\end{pmatrix}.$$
(3) As $\xi\to\infty$,
$$\Phi^{(CHF)}(\xi)=\left[I+O(\xi^{-1})\right]\xi^{-\beta\sigma_3}e^{-\frac{i\xi}{2}\sigma_3}\begin{cases}I, & 0<\arg\xi<\pi,\\ \begin{pmatrix}0&-e^{\beta\pi i}\\ e^{-\beta\pi i}&0\end{pmatrix}, & \pi<\arg\xi<\frac{3}{2}\pi,\\ \begin{pmatrix}0&-e^{-\beta\pi i}\\ e^{\beta\pi i}&0\end{pmatrix}, & -\frac{\pi}{2}<\arg\xi<0.\end{cases}\tag{A.1}$$
(4) As $\xi\to0$, $\Phi^{(CHF)}(\xi)=O(\ln|\xi|)$.
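Returning to the asymptotics (6.64)–(6.66): the passage from (6.63) to the stated leading terms is pure matrix algebra. With $R_1=C/\tau$, $C$ from (6.60), and $P^{(\infty)}_1$ from (6.25), one has $(\Phi_1)_{12}=C_{12}/\tau-1$ and $(\Phi_1)_{22}=C_{22}/\tau$, and substituting $s=e^{-\pi i/4}\tau$ (so that $e^{s^2}=e^{-i\tau^2}$) gives (6.64) and (6.65). A small numeric sanity check of this algebra (these are exact identities, so agreement is to machine precision):

```python
import cmath, math

tau = -7.0                                   # tau -> -infinity regime
s = cmath.exp(-1j * math.pi / 4) * tau       # s = e^{-pi i/4} tau
pref = -cmath.exp(-1j * tau**2 + 1j * math.pi / 4) / math.sqrt(math.pi)
C = [[pref, pref], [-pref, -pref]]           # cf. (6.60)
P1 = [[0, -1], [0, 0]]                       # cf. (6.25)
Phi1_12 = C[0][1] / tau + P1[0][1]           # (1,2) entry of (6.63)
Phi1_22 = C[1][1] / tau + P1[1][1]           # (2,2) entry of (6.63)

y_lead = -2 * Phi1_12                        # leading term of (6.64)
H_lead = -s * Phi1_22                        # leading term of (6.65)
y_pred = 2 + 2 * cmath.exp(s**2) / (math.sqrt(math.pi) * s)
H_pred = -cmath.exp(s**2) / math.sqrt(math.pi)
assert abs(y_lead - y_pred) < 1e-12 and abs(H_lead - H_pred) < 1e-12
```

The same bookkeeping with $C$ from (6.31) reproduces the $\tau\to+\infty$ formulas (6.35)–(6.37).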
For $\xi$ belonging to the region bounded by $\Sigma_1$ and $\Sigma_2$, the solution to the RH problem can be constructed by using the confluent hypergeometric function $\psi(a,b;\xi)$ [24]:
$$\Phi^{(CHF)}(\xi)=\begin{pmatrix}\psi(\beta,1,e^{\frac{\pi i}{2}}\xi)e^{\frac{\beta\pi i}{2}} & -\frac{\Gamma(1-\beta)}{\Gamma(\beta)}\psi(1-\beta,1,e^{-\frac{\pi i}{2}}\xi)e^{-\frac{\beta\pi i}{2}}\\ -\frac{\Gamma(1+\beta)}{\Gamma(-\beta)}\psi(1+\beta,1,e^{\frac{\pi i}{2}}\xi)e^{\frac{3\beta\pi i}{2}} & \psi(-\beta,1,e^{-\frac{\pi i}{2}}\xi)\end{pmatrix}e^{-\frac{i\xi}{2}\sigma_3},\tag{A.2}$$
where $\psi(a,b;\xi)$ is the unique solution of Kummer's equation
$$\xi\frac{d^2y}{d\xi^2}+(b-\xi)\frac{dy}{d\xi}-ay=0\tag{A.3}$$
that is recessive at infinity. If the parameter $b=1$, the expansions of the function $\psi(a,1;\xi)$, for $\arg\xi\in(-\frac{3}{2}\pi,\frac{3}{2}\pi)$, at infinity and zero are known to be
$$\psi(a,1;\xi)=\xi^{-a}\left(1-\frac{a^2}{\xi}+\frac{a^2(a+1)^2}{2\xi^2}+O(\xi^{-3})\right),\qquad \xi\to\infty,\tag{A.4}$$
$$\psi(a,1;\xi)=-\frac{1}{\Gamma(a)}\left(\ln\xi+\frac{\Gamma'(a)}{\Gamma(a)}+2\gamma_E\right)+O(\xi\ln\xi),\qquad \xi\to0,\tag{A.5}$$
see [39], Chapter 13, where $\gamma_E$ is Euler's constant. From (A.4) and (A.5), we obtain the asymptotics of $\Phi^{(CHF)}$ in (A.2) near infinity and zero:
$$\Phi^{(CHF)}(\xi)=\left(I+\frac{1}{\xi}\begin{pmatrix}i\beta^2 & -i\frac{\Gamma(1-\beta)}{\Gamma(\beta)}e^{-\beta\pi i}\\ i\frac{\Gamma(1+\beta)}{\Gamma(-\beta)}e^{\beta\pi i} & -i\beta^2\end{pmatrix}+O(\xi^{-2})\right)\xi^{-\beta\sigma_3}e^{-\frac{i\xi}{2}\sigma_3},\qquad \xi\to\infty,\tag{A.6}$$
$$\Phi^{(CHF)}(\xi)\begin{pmatrix}1&0\\ -e^{-\beta\pi i}&1\end{pmatrix}e^{-\frac{\beta\pi i}{2}\sigma_3}=\Upsilon_0\left[I+\Upsilon_1\xi+O(\xi^2)\right]\begin{pmatrix}1&-\frac{1-e^{2\beta\pi i}}{2\pi i}\ln\left(e^{-\frac{\pi i}{2}}\xi\right)\\ 0&1\end{pmatrix},\qquad \xi\to0,\tag{A.7}$$
where
$$\Upsilon_0=\begin{pmatrix}\Gamma(1-\beta)e^{-\beta\pi i} & \frac{1}{\Gamma(\beta)}\left(\frac{\Gamma'(1-\beta)}{\Gamma(1-\beta)}+2\gamma_E\right)\\ \Gamma(1+\beta) & -\frac{e^{\beta\pi i}}{\Gamma(-\beta)}\left(\frac{\Gamma'(-\beta)}{\Gamma(-\beta)}+2\gamma_E\right)\end{pmatrix}.\tag{A.8}$$
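The large-$\xi$ expansion of $\psi(a,1;\xi)$ can be checked numerically: the recessive solution of Kummer's equation is Tricomi's function $U(a,1,\xi)$, which for $\operatorname{Re}a>0$ and $\xi>0$ admits the Laplace-type integral representation $U(a,1,\xi)=\Gamma(a)^{-1}\int_0^\infty e^{-\xi t}t^{a-1}(1+t)^{-a}\,dt$ (see Chapter 13 of [39]); the second-order coefficient used below is $a^2(a+1)^2/2$, matching the standard expansion. A self-contained sketch for $a=\tfrac12$ using only the standard library (the substitution $t=u^2$ removes the integrable endpoint singularity):

```python
import math

def tricomi_U_half(x, half_width=2.0, n=200000):
    """U(1/2, 1, x) via its integral representation with t = u^2,
    so the integrand 2*exp(-x*u^2)/sqrt(1+u^2) is smooth at u = 0."""
    h = half_width / n
    total = 0.0
    for k in range(n + 1):
        u = k * h
        f = 2.0 * math.exp(-x * u * u) / math.sqrt(1.0 + u * u)
        total += (0.5 if k in (0, n) else 1.0) * f   # trapezoid rule
    return total * h / math.gamma(0.5)

a, x = 0.5, 40.0
# first three terms of the large-xi expansion of psi(a,1;xi) = U(a,1,xi)
series = x**(-a) * (1 - a**2 / x + a**2 * (a + 1)**2 / (2 * x**2))
num = tricomi_U_half(x)
assert abs(num - series) < 1e-4 * abs(series)   # agreement up to O(x^{-3})
```

As a cross-check, for $a=\tfrac12$ this expansion reduces to $\xi^{-1/2}(1-\tfrac{1}{4\xi}+\tfrac{9}{32\xi^2})$, consistent with $U(\tfrac12,1,\xi)$ expressed through the Bessel function $K_0$.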
References

[1] M.J. Ablowitz, D.J. Kaup, A.C. Newell and H. Segur, The inverse scattering transform-Fourier analysis for nonlinear problems, Stud. Appl. Math., 53, 249–315 (1974).
[2] G. Amir, I. Corwin and J. Quastel, Probability distribution of the free energy of the continuum directed random polymer in 1+1 dimensions, Comm. Pure Appl. Math., 64, 466–537 (2011).
[3] M. Cafasso and T. Claeys, A Riemann-Hilbert approach to the lower tail of the Kardar-Parisi-Zhang equation, Comm. Pure Appl. Math., 75, 493–540 (2022).
[4] M. Cafasso, T. Claeys and G. Ruzza, Airy kernel determinant solutions to the KdV equation and integro-differential Painlevé equations, Comm. Math. Phys., 386, 1107–1153 (2021).
[5] C. Charlier, Large gap asymptotics for the generating function of the sine point process, P. Lond. Math. Soc., 123, 103–152 (2021).
[6] C. Charlier, T. Claeys and G. Ruzza, Uniform tail asymptotics for Airy kernel determinant solutions to KdV and for the narrow wedge solution to KPZ, J. Funct. Anal., 283, 109608 (2022).
[7] V.V. Cheianov and M.B. Zvonarev, Zero temperature correlation functions for the impenetrable fermion gas, J. Phys. A, 37, 2261–2297 (2003).
[8] T. Claeys and S. Tarricone, On the integrable structure of deformed sine kernel determinants, Math. Phys. Anal. Geom., 27, Paper No. 3, 35 pp (2024).
[9] D. Dai, S.-X. Xu and L. Zhang, On the deformed Pearcey determinant, Adv. Math., 400, 108291 (2022).
[10] P. Deift, A. Its, I. Krasovsky and X. Zhou, The Widom-Dyson constant for the gap probability in random matrix theory, J. Comput. Appl. Math., 202, 26–47 (2007).
[11] P. Deift, A. Its and X. Zhou, A Riemann-Hilbert approach to asymptotic problems arising in the theory of random matrix models, and also in the theory of integrable statistical mechanics, Ann. Math., 146, 149–235 (1997).
[12] P. Deift, T. Kriecherbauer, K. T.-R. McLaughlin, S. Venakides and X. Zhou, Uniform asymptotics for polynomials orthogonal with respect to varying exponential weights and applications to universality questions in random matrix theory, Comm. Pure Appl. Math., 52, 1335–1425 (1999).
[13] P. Deift, T. Kriecherbauer, K. T.-R. McLaughlin, S. Venakides and X. Zhou, Strong asymptotics of orthogonal polynomials with respect to exponential weights, Comm. Pure Appl. Math., 52, 1491–1552 (1999).
[14] P. Deift and X. Zhou, A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation, Ann. Math., 137, 295–368 (1993).
[15] F.J. Dyson, Fredholm determinants and inverse scattering problems, Comm. Math. Phys., 47, 171–183 (1976).
[16] T. Ehrhardt, Dyson's constant in the asymptotics of the Fredholm determinant of the sine kernel, Comm. Math. Phys., 262, 317–341 (2006).
[17] F.H. Essler, H. Frahm, A.R. Its and V.E. Korepin, Determinant representation for a quantum correlation function of the lattice sine-Gordon model, J. Phys. A, 30, 219–244 (1997).
[18] B. Fahs and I. Krasovsky, Sine-kernel determinant on two large intervals, Comm. Pure Appl. Math., 77, 1958–2029 (2024).
[19] A.S. Fokas, A.R. Its, A.A. Kapaev and V.Y. Novokshenov, Painlevé Transcendents: The Riemann-Hilbert Approach, AMS Mathematical Surveys and Monographs, vol. 128, American Mathematical Society, Providence, RI (2006).
[20] A.R. Its,
A.G. Izergin and V.E. Korepin, Long-distance asymptotics of temperature correlators of the impenetrable Bose gas, Comm. Math. Phys., 130, 471–488 (1990).
[21] A.R. Its, A.G. Izergin and V.E. Korepin, Temperature correlators of the impenetrable Bose gas as an integrable system, Comm. Math. Phys., 129, 205–222 (1990).
[22] A.R. Its, A.G. Izergin, V.E. Korepin and N.A. Slavnov, Differential equations for quantum correlation functions, Int. J. Mod. Phys. B, 04, 1003–1037 (1990).
[23] A.R. Its, A.G. Izergin, V.E. Korepin and G.G. Varzugin, Large time and distance asymptotics of field correlation function of impenetrable bosons at finite temperature, Phys. D, 54, 351–395 (1992).
[24] A.R. Its and I. Krasovsky, Hankel determinant and orthogonal polynomials for the Gaussian weight with a jump, Contemp. Math., 458, 215–248 (2008).
[25] A.R. Its and N.A. Slavnov, On the Riemann-Hilbert approach to asymptotic analysis of the correlation functions of the quantum nonlinear Schrödinger equation: interacting fermion case, Theor. Math. Phys., 119, 541–593 (1999).
[26] M. Jimbo, T. Miwa, Y. Môri and M. Sato, Density matrix of an impenetrable Bose gas and the fifth Painlevé transcendent, Phys. D, 1, 80–158 (1980).
[27] K. Johansson, From Gumbel to Tracy-Widom, Probab. Theory Relat. Fields, 138, 75–112 (2007).
[28] V.E. Korepin, Dual field formulation of quantum integrable models, Comm. Math. Phys., 113, 177–190 (1987).
[29] V.E. Korepin, N.M. Bogoliubov and A.G. Izergin, Quantum Inverse Scattering Method and Correlation Functions, Cambridge Univ. Press (1993).
[30] V.E. Korepin and N.A. Slavnov, The time dependent correlation function of an impenetrable Bose gas as a Fredholm minor, Comm. Math. Phys., 129, 103–113 (1990).
[31] I. Krasovsky, Gap probability in the spectrum of random matrices and asymptotics of polynomials orthogonal on an arc of the unit circle, Int. Math. Res. Not., 2004, 1249–1272 (2004).
[32] E.H. Lieb and W. Liniger, Exact analysis of an interacting Bose gas. I. The general solution and the ground state, Phys. Rev., 130, 1605–1616 (1963).
[33] E.H. Lieb, Exact analysis of an interacting Bose gas. II. The excitation spectrum, Phys. Rev., 130, 1616–1624 (1963).
[34] E.H. Lieb and D.C. Mattis (eds.), Mathematical Physics in One Dimension, New York, London: Academic Press (1966).
[35] K. Liechty and D. Wang, Asymptotics of free fermions in a quadratic well at finite temperature and the Moshe-Neuberger-Shapiro random matrix model, Ann. Inst. H. Poincaré Probab. Statist., 56, 1072–1098 (2020).
[36] B.M. McCoy and Sh. Tang, Connection formulae for Painlevé V functions II. The δ-function Bose gas problem, Phys. D, 20, 187–216 (1986).
[37] A.B.D. Monvel, J. Lenells and D. Shepelsky, The focusing NLS equation with step-like oscillating background: asymptotics in a transition zone, Comm. Math. Phys., 383, 893–952 (2021).
[38] M. Moshe, H. Neuberger and B. Shapiro, Generalized ensemble of random matrices, Phys. Rev. Lett., 73, 1497–1500 (1994).
[39] F.W.J. Olver, A.B. Olde Daalhuis, D.W. Lozier, B.I. Schneider, R.F. Boisvert, C.W. Clark, B.R. Miller and B.V. Saunders, eds., NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/, Release 1.2.2 of 2024-09-15.
[40] S.-X. Xu, Asymptotics of the finite-temperature sine kernel determinant, to appear in Comm. Math. Phys., arXiv:2403.06722.
arXiv:2505.17285v1 [math.ST] 22 May 2025

On Fisher Consistency of Surrogate Losses for Optimal Dynamic Treatment Regimes with Multiple Categorical Treatments per Stage

Nilanjana Laha¹,*, Nilson Chapagain¹, Victoria Cicherski¹, and Aaron Sonabend-W²

¹Department of Statistics, Texas A&M, College Station, TX 77843, e-mail: nlaha@tamu.edu; nchapagain@stat.tamu.edu; cicherskiv114@tamu.edu
²Google Research, Mountain View, CA 94043, USA, e-mail: asonabend@gmail.com

Abstract: Patients with chronic diseases often receive treatments at multiple time points, or stages. Our goal is to learn the optimal dynamic treatment regime (DTR) from longitudinal patient data. When both the number of stages and the number of treatment levels per stage are arbitrary, estimating the optimal DTR reduces to a sequential, weighted, multiclass classification problem (Kosorok and Laber, 2019). In this paper, we aim to solve the above-mentioned classification problem, simultaneously across all stages, through Fisher consistent surrogate losses. Although computationally feasible Fisher consistent surrogates exist in special cases, e.g., the binary treatment setting, a unified theory of Fisher consistency remains largely unexplored. We establish necessary and sufficient conditions for DTR Fisher consistency within the class of non-negative, stagewise separable surrogate losses. To our knowledge, this is the first result in the DTR literature to provide necessary conditions for Fisher consistency within a non-trivial surrogate class. Furthermore, we show that many convex surrogate losses fail to be Fisher consistent for the DTR classification problem, and we formally establish this inconsistency for smooth, permutation equivariant, and relative-margin-based convex losses. Building on this, we propose SDSS (Simultaneous Direct Search with Surrogates), which uses smooth, non-concave surrogate losses to learn the optimal DTR.
We design a computationally fast, gradient-based algorithm for implementing SDSS. When the optimization error is small, we establish a sharp upper bound on SDSS's regret decay rate. We assess the numerical performance of SDSS through simulations. Finally, we demonstrate its real-world performance by applying it to estimate optimal fluid resuscitation strategies for severe septic patients using electronic health record data.

Keywords and phrases: Dynamic treatment regimes, Classification, Fisher consistency, Classification calibration, Non-convex optimization.

1. Introduction

Personalized medicine seeks to customize treatments according to individual patient characteristics (Kosorok and Laber, 2019; Deliu et al., 2022). During the treatment of chronic diseases, such as cancer, sepsis, and diabetes, patients may receive treatments multiple times (Johnson et al., 2018; Raghu et al., 2017; Sonabend-W et al., 2023). In recent times, a wealth of such longitudinal data has become available in the form of electronic health records (EHR) (Raghu et al., 2017). Our objective is to learn the optimal sequence of treatments, collectively referred to as a treatment policy or dynamic treatment regime (DTR), from such patient data. In line with the contemporary DTR literature, we define the optimal DTR to be the DTR that maximizes the expected counterfactual outcome, also known as the value function (Chakraborty and Moodie, 2013; Tsiatis et al., 2019). We refer to the corresponding problem as the DTR learning problem. We consider the setting where patient data are collected over T stages for some T > 1, and at each stage t ∈ {1, . . . , T}, there are
kt ≥ 2 available treatment options. We assume that each kt is fixed and does not grow with the sample size n. To distinguish this setting from the binary-treatment setting, i.e., where kt = 2 for all t ∈ {1, . . . , T}, we refer to it as the general DTR setting throughout this paper. Plenty of approaches have been proposed in recent decades for estimating the optimal DTR. Traditional methods either model the full data distribution (Xu et al., 2016; Zajonc, 2012) or focus on partial modeling. Popular examples of the latter include Q-learning (Moodie et al., 2014; Murphy, 2005; Watkins, 1989; Zhang et al., 2018; Chakraborty and Moodie, 2013), the stochastic tree search algorithm (Sun and Wang, 2021), A-learning, and marginal structural mean models (Chakraborty and Moodie, 2013; Moodie et al., 2007; Murphy, 2003; Robins, 2004; Schulte et al., 2014). Model-based methods can be computationally efficient. However, in this case, modeling carries a substantial risk of misspecification (Kosorok and Laber, 2019). Model-based approaches typically fit the models using parametric or nonparametric methods via backward recursion: the model for stage T is fitted first, and models for stages T−1, T−2, etc., are recursively fitted using the estimates from later stages, causing model misspecification errors to propagate and accumulate over stages (Murphy, 2005). A-learning and marginal structural mean models are more robust to model misspecification than Q-learning. However, they still require the correct specification of the contrasts of certain functions called Q-functions (Schulte et al., 2014). The cost of model misspecification is expected to be higher in high-dimensional data or when the optimal DTR has a nonlinear decision boundary, as is likely in real-world medical datasets (Laha et al., 2024a). Our work focuses on a more recent alternative approach to learning the optimal DTR.
This approach, called direct search or value search, frames the DTR learning problem as a weighted, sequential, multiclass classification¹ problem (Kosorok and Laber, 2019; Zhao et al., 2015). The advantage of this approach is that it does not require modeling any quantities except the treatment assignment probabilities, which are also known in experimental designs. Hereafter, we refer to the above sequential multiclass classification as the sequential DTR classification, or simply the DTR classification.

*Corresponding author.
¹Multiclass classification is a generalization of binary classification where the number of classes can exceed two (Tewari and Bartlett, 2007; Zhang, 2004).

The DTR classification problem transforms into a discontinuous and non-convex loss-minimization problem, which is atypical of classification problems (Bartlett et al., 2006; Tewari and Bartlett, 2007; Ramaswamy and Agarwal, 2016). A common strategy to address this is to replace the discontinuous loss with a well-behaved surrogate loss, leading to the surrogate loss minimization approach (Bartlett et al., 2006; Tewari and Bartlett, 2007; Ramaswamy and Agarwal, 2016). The surrogate losses that lead to the same population-level solution as the original discontinuous loss are called calibrated or Fisher consistent for that loss (Bartlett et al., 2006; Tewari and Bartlett, 2007; Ramaswamy and Agarwal, 2016). For example, binary classification leads
to an optimization problem with the discontinuous 0-1 loss x ↦ 1[x ≤ 0]. The support vector machine replaces the 0-1 loss with the hinge loss x ↦ max{0, 1−x}, which is Fisher consistent for the 0-1 loss (Bartlett et al., 2006). However, the discontinuous loss corresponding to the sequential DTR classification, referred to hereafter as the DTR loss, is more convoluted than the discontinuous loss of binary or multiclass classification. Since no general recipe exists for constructing computationally efficient Fisher consistent surrogates for an arbitrary loss (Finocchiaro et al., 2019), finding a Fisher consistent surrogate loss for the DTR loss is not straightforward (Kosorok and Laber, 2019). As a consequence, to date, Fisher consistent surrogate losses for the sequential DTR classification have been obtained only in special cases, e.g., the single-stage case (T = 1) (Bennett and Kallus, 2020; Chen et al., 2018; Luedtke and Van Der Laan, 2016; Meng et al., 2020; Wang et al., 2018; Pan and Zhao, 2021; Zhang et al., 2020), in the context of maximizing the conditional survival function (Xue et al., 2022), and, most notably, the widely studied binary-treatment case (Liu et al., 2018; Zhao et al., 2015; Laha et al., 2024a; Jiang et al., 2019; Liu et al., 2024a,b; Zhao et al., 2020). As a result, the general DTR case (kt ≥ 2) has received much less attention in the direct search literature, despite its practical relevance (Tao and Wang, 2017; Xue et al., 2022). In addition, most Fisher-consistency-related work on DTR focuses on identifying Fisher consistent surrogate losses. In contrast, research across machine learning has moved beyond finding Fisher consistent surrogates to understanding the specific properties of surrogates that drive Fisher consistency. For instance, in binary and multiclass classification, considerable effort has been devoted to establishing necessary and sufficient conditions for Fisher consistency (Bartlett et al.
, 2006; Tewari and Bartlett, 2007; Ramaswamy and Agarwal, 2016; Wang and Scott, 2023b). Other areas where Fisher consistency of surrogate losses has been investigated in depth include multi-label learning (Gao and Zhou, 2011), ranking (Duchi et al., 2010; Calauzenes et al., 2012), and structured prediction (Osokin et al., 2017). In comparison, the properties of surrogate losses that elicit Fisher consistency for the sequential DTR classification remain poorly understood, even in the binary treatment case. Yet, such knowledge is important for understanding the scope of model-free DTR learning. In addition to constructing computationally feasible Fisher consistent surrogate problems for DTR classification, this paper also attempts to address this conceptual gap. Before outlining our contributions, we provide some important clarifications. First, many DTR approaches break the sequential DTR classification problem into T single-stage DTR classification problems, which are then solved recursively using tree-based search (Tao and Wang, 2017; Tao et al., 2018) or Fisher consistent surrogates for the single-stage case (Liu et al., 2024b, 2018; Hager et al., 2018). We refer to this approach as the stagewise direct search, which bypasses the need to solve the full DTR classification problem. Most existing work on direct search
for the general DTR setting falls within this category. Although computationally efficient, these methods either incur a loss of sample size (e.g., the backward outcome weighted learning of Zhao et al., 2015) or depend on model-based objects such as pseudo-outcomes (e.g., the augmented outcome weighted learning of Liu et al. (2018) or the tree-based methods of Tao and Wang (2017); Tao et al. (2018)). Thus, stagewise direct search, a hybrid between direct search and model-based methods, does not align with the goal of this paper. We aim to directly attack the sequential DTR classification, and learn the optimal treatment assignment for each stage simultaneously without estimating any pseudo-outcomes. To emphasize the distinction from stagewise direct search, we refer to our approach as Simultaneous Direct Search with Surrogates (SDSS). Second, the surrogate loss theory for the general DTR setting differs substantially from that of the binary treatment setting. The binary treatment case turns into a sequential binary classification, whereas the general case turns into a sequential multiclass classification. An advantage of binary classification is that it admits a special class of surrogate losses called margin-based losses, which has no direct extension to the multiclass setting (Wang and Scott, 2023b). Moreover, DTR classifications in the binary and general settings are at least as different as binary and multiclass classification, which have spurred separate research trajectories (Tewari and Bartlett, 2007; Zhang, 2004; Zou et al., 2008; Ramaswamy and Agarwal, 2016). Finally, Fisher consistency is quite well understood in the special case when T = 1 (also referred to as individualized treatment regimes or ITR) due to the efforts of Zhang et al. (2020); Pan and Zhao (2021); Bennett and Kallus (2020), among others. However, the theoretical insights from this setting do not directly extend to the general T-stage case.
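As a concrete reference point for the Fisher consistency notion invoked above, consider plain binary classification (not the DTR setting): with class-1 probability η, the population minimizer of the expected hinge risk η·max{0, 1−f} + (1−η)·max{0, 1+f} has the same sign as the Bayes rule sign(2η−1), which is precisely what Fisher consistency of the hinge surrogate for the 0-1 loss means (Bartlett et al., 2006). A minimal, purely illustrative sketch via grid search (all names here are ours, not from the paper):

```python
def hinge_risk(f, eta):
    """Expected hinge loss of score f when P(Y = +1) = eta."""
    return eta * max(0.0, 1.0 - f) + (1.0 - eta) * max(0.0, 1.0 + f)

def surrogate_minimizer(eta, lo=-2.0, hi=2.0, steps=4001):
    """Grid-search the population surrogate risk minimizer."""
    grid = [lo + k * (hi - lo) / (steps - 1) for k in range(steps)]
    return min(grid, key=lambda f: hinge_risk(f, eta))

# Fisher consistency check: the minimizer's sign matches sign(2*eta - 1)
for eta in (0.1, 0.3, 0.6, 0.9):
    f_star = surrogate_minimizer(eta)
    assert f_star * (2 * eta - 1) > 0
```

In the sequential DTR setting discussed in this paper no such convex surrogate is available in general, which is the content of the impossibility results summarized below.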
The primary reason is that when T = 1, DTR classification reduces to a weighted multiclass classification problem, allowing the exploitation of the extensive Fisher consistency literature on multiclass classification (cf. Tewari and Bartlett, 2007; Ramaswamy and Agarwal, 2016; Zhang, 2004; Zhang and Liu, 2014). In contrast, sequential DTR classification differs substantially from multiclass classification and, consequently, from single-stage DTR. We will discuss these differences in detail in Section 4.

1.1. Main contributions

Given that the main focus of this paper is the Fisher consistency of surrogate losses, similar to Liu et al. (2024b,a); Zhao et al. (2015); Laha et al. (2024a), we restrict our attention to the case where the propensity scores, i.e., the treatment assignment probabilities, are known. Our contribution is two-fold. The first part of the paper focuses on the subject of Fisher consistency in the DTR classification problem. The second part focuses on the construction of our method, SDSS, and investigates its theoretical and empirical performance.

1.1.1. Theoretical results on Fisher consistency

Concave losses. Concave surrogate losses are attractive because they result in a convex optimization problem. Since DTR learning is a maximization problem, concave—rather than convex—surrogates are more appropriate. In Section 3, we show that many popular concave surrogates that are Fisher consistent in the single-stage setting fail to be
Fisher consistent when extended to the T = 2 case. Our examples include both smooth and non-smooth (e.g., hinge loss) losses, as well as losses using sum-zero constraints. In particular, Theorem 3.1 shows that, under smoothness conditions, permutation equivariant and relative-margin-based (PERM) surrogate losses cannot be Fisher consistent if they are also concave (Wang and Scott, 2023b). Here, we use PERM losses because (a) they provide enough structure that can be exploited to derive interpretable Fisher consistency-related results and (b) they form a rich class including numerous popular multivariate losses (see Section 3 for details). Our previous work in Laha et al. (2024a) established the inconsistency of margin-based, smooth, concave losses in the binary treatment setting. However, margin-based losses constitute a useful but small subset of all possible surrogate losses in that setting (Wang and Scott, 2023b). Hence, our Theorem 3.1 does not follow from Laha et al. (2024a)'s impossibility results. Moreover, our proof technique differs substantially from that of Laha et al. (2024a) because multiclass PERM losses exhibit more degrees of variability than the margin-based losses of the binary treatment setting (a detailed comparison with Laha et al. (2024a) is provided in Section 8.1; see also Section 3.1.1). At a high level, these results provide a negative answer to the possibility of convexifying sequential DTR classification. This negative finding has two implications. First, model-free DTR learning will likely incur the computational burden of nonconvex optimization. Second, since single-stage DTR classification problems can be convexified (cf. Zhang et al., 2020; Pan and Zhao, 2021), this highlights the intrinsic difficulty of the multi-stage (T > 1) setting compared to the single-stage setting.

Necessary and sufficient conditions. The negative results on concave surrogates force us to look beyond concave surrogate losses.
In Section 4, we consider the class of non-negative stagewise separable surrogates, i.e., surrogates that can be written as a product of non-negative single-stage surrogates (see (4.1)). This class was chosen because it has been used in the DTR Fisher-consistency literature due to its computational and theoretical advantages (Xue et al., 2022; Laha et al., 2024a). Informally, we show that surrogates in this class are Fisher consistent if and only if the corresponding single-stage surrogates (a) are Fisher consistent for single-stage DTR and (b) for t ≥ 2, have image sets with sufficiently large convex hulls (see Theorems 4.1 and 4.2 for the precise result). The second condition restricts the class of Fisher consistent separable surrogates for T ≥ 2, which underscores the inherent challenge of the multi-stage setting compared to the single-stage setting. To the best of our knowledge, this is the first result establishing necessary conditions for DTR Fisher consistency in non-trivial surrogate loss classes. In Section 4.2, we establish the existence of smooth, non-convex Fisher consistent losses. This finding indicates that sequential DTR classification can at least be smoothed, even though convexification is not currently attainable. Moreover, we show that our surrogates preserve Fisher consistency even when the search space of policies is restricted to a smaller class, e.g., the class of linear policies, provided the optimal DTR lies in that
class (Lemma 4.4).

1.1.2. Methodological contribution

The proposed DTR learning method, SDSS, solves a surrogate value optimization problem based on the above-mentioned smooth, non-convex, Fisher consistent surrogate losses. The smoothness of the surrogates allows for gradient-based optimization, enabling fast implementation that scales to the sample sizes of modern EHR data. However, the non-convexity of these losses introduces optimization challenges, illustrated with a toy example in Section 5.1. To mitigate these issues, we incorporate strategies such as random reinitialization and minibatching, which help prevent optimization iterates from stagnating in suboptimal regions. SDSS can accommodate any number of stages and any number of treatments per stage. It is simultaneous in that it learns the treatment assignments for all stages simultaneously. Therefore, unlike the stagewise methods, SDSS does not require modeling pseudo-outcomes and is completely model-free when the propensity scores are known. SDSS is flexible with respect to policy classes in that it can be implemented with any policy class.

Regret decay rate. In Section 6, we show that if the policy class is sufficiently rich, e.g., a neural network class, and the optimization error is small, then we can provide a tight upper bound on the regret of SDSS—defined as the difference between the expected total reward of the optimal DTR and that of the DTR estimated by SDSS. To this end, we use a small noise assumption (Tsybakov, 2004) (see Assumption 1) and standard smoothness assumptions similar to those in Zhao et al. (2015); Sun and Wang (2021). For specific surrogates and neural network policy classes, SDSS's regret decay rate matches the minimax rate of risk decay for binary classification under similar assumptions (Corollary 1).
Our empirical analysis in Section 7 suggests that despite the non-convexity, SDSS may asymptotically outperform traditional DTR learning methods in scenarios with high model misspecification risk.

1.1.3. Organization

In Section 2, we discuss the mathematical formulation of the DTR classification problem and formally define Fisher consistency for this problem. Sections 3 and 4 describe the Fisher consistency properties of concave and separable losses, respectively, as discussed under Contribution 1.1.1. Section 5 introduces SDSS and discusses its implementation. In Section 6, we provide the regret analysis of SDSS. Section 7 presents an empirical comparison of SDSS with other DTR methods using various simulation setups (see Section 7.1) and EHR data from sepsis patients (see Section 7.2). Section 8 compares our work with related literature. We conclude with some discussion and suggestions for future research in Section 9. For the reader's convenience, a list of notation is provided in Table 3.

1.2. Notation

We let N denote the set of natural numbers. For any integer k ∈ N, let [k] denote the set {1, 2, . . . , k}. For any vector x ∈ R^k and any j ∈ [k], x_j denotes the j-th element of x, so that x = (x_1, . . . , x_k). For a vector x ∈ R^k, argmax(x) denotes the set argmax_{i ∈ [k]} x_i, and in particular, max(argmax(x)) = max{i ∈ [k] : x_i = max(x)}. We let R denote the set of real numbers and R̄ the extended real line, i.e., R ∪
{±∞}. Denote by R≥0 and R>0 the sets [0, ∞) and (0, ∞), respectively. For any k ∈ N, define S^{k−1} to be the simplex in R^k, i.e., S^{k−1} := {p ∈ R^k : p_i ≥ 0 for all i ∈ [k], Σ_{i=1}^k p_i = 1}. The k-dimensional vectors of all zeros and all ones are denoted by 0_k and 1_k, respectively, and I_k denotes the identity matrix of order k. A permutation v on [k] is a bijection from [k] to itself. By an abuse of notation, for any x ∈ R^k, we will denote by v(x) the k-dimensional vector (x_{v(1)}, . . . , x_{v(k)}), which extends the permutation v to a map from R^k to itself. For k ∈ N, we let ∥ · ∥_k denote the ℓ_k norm; that is, for x ∈ R^k, ∥x∥_k = (Σ_{i=1}^k |x_i|^k)^{1/k}. If x ∈ N^k, we denote by |x|_1 the sum Σ_{i=1}^k x_i. Unless otherwise mentioned, lowercase letters such as x and y will denote real numbers, whereas boldface letters such as x, y, p, w, u, and v will denote vectors. For any set C, the indicator function 1[X ∈ C] takes the value one if X ∈ C and zero otherwise. We denote by int(C) and C̄ the interior and the closure of the set C, respectively, and the cardinality of C is denoted by |C|. When C ⊂ R^k, for any x ∈ R, we define xC = {xc : c ∈ C}. The closed convex hull of C is denoted by conv(C). For a finite set C, max(C) denotes its maximum element. If C = ∅, we define inf(C) = ∞ and sup(C) = −∞. For any k ∈ N, a probability measure P on X, and any P-measurable function h : X → R, we denote the L_k(P) norm of h by ∥h∥_{P,k} = (∫ |h(x)|^k dP(x))^{1/k} and the uniform norm by ∥h∥_∞ = sup_{x ∈ X} |h(x)|. Also, P[h] denotes the integral ∫ h dP. We define the domain (effective domain) of a convex function h : R^k → R̄ as on page 23 of Rockafellar (1970), i.e., dom(h) = {x ∈ R^k : h(x) < ∞}. For any set C ⊂ R^k, we define its support function ς_C as in Definition 2.1.1 (p. 104) of Hiriart-Urruty and Lemaréchal (2004):

ς_C(x) = sup_{w ∈ C} x^T w for all x ∈ R^k. (1.1)

For any differentiable function h : R^k → R, ∇h will denote the gradient of h. Throughout this paper, we use the convention ±∞ × 0 = 0. In addition, we use C and c to denote generic constants that may vary from line to line.
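As a quick numerical illustration (the helper names below are ours, not from the paper), the ℓ_k norm and the support function (1.1) of a finite set can be computed directly:

```python
import numpy as np

def lk_norm(x, k):
    """ell_k norm: (sum_i |x_i|^k)^(1/k)."""
    return float(np.sum(np.abs(np.asarray(x, dtype=float)) ** k) ** (1.0 / k))

def support_fn(C, x):
    """Support function of a finite set C (given as rows of an array):
    sigma_C(x) = sup over w in C of x^T w, as in (1.1)."""
    return float(np.max(np.asarray(C, dtype=float) @ np.asarray(x, dtype=float)))
```

For instance, lk_norm([3, 4], 2) returns 5.0, and for C the vertices of the simplex S^1 the support function recovers max(x).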
Many results in this paper are asymptotic (in n) in nature and thus require some standard asymptotic notation. If a_n and b_n are two sequences of real numbers, then a_n ≫ b_n (resp. a_n ≪ b_n) means that a_n/b_n → ∞ (resp. a_n/b_n → 0) as n → ∞. Similarly, a_n ≳ b_n (resp. a_n ≲ b_n) means that lim inf_{n→∞} a_n/b_n = C for some C ∈ (0, ∞] (resp. lim sup_{n→∞} a_n/b_n = C for some C ∈ R≥0). Alternatively, a_n = o(b_n) will also mean a_n ≪ b_n, and a_n = O(b_n) will mean that lim sup_{n→∞} a_n/b_n = C for some C ∈ R≥0.

2. Mathematical setup

We assume that each patient receives T treatments A_1, . . . , A_T at stages 1, . . . , T, respectively. Each treatment A_t is a random variable taking values in a finite set A_t with cardinality k_t. For the sake of simplicity, we let A_t = {1, . . . , k_t} for t ∈ [T]. We denote Σ_{t=1}^T k_t = K. After each treatment, the patient observes responses (outcomes) Y_1, . . . , Y_T, which are real-valued random variables. Without loss of generality, we assume higher values of outcomes are desirable. Hence the outcomes are also referred to as rewards. We also observe covariates O_1, . . . , O_T for
every patient, which are random vectors. The dimension of O_t can change with stage t. The following diagram gives the sequence of events: O_1 → A_1 → Y_1 → ··· → O_T → A_T → Y_T. The t-th stage history H_t contains everything observed before A_t; that is, H_1 = O_1, and H_t = (O_1, A_1, Y_1, . . . , O_t) for 2 ≤ t ≤ T. We denote the space of each H_t by H_t, i.e., H_t ∈ H_t for all t ∈ [T]. Therefore, H_t ⊂ O_1 × A_1 × Y_1 × · · · × A_{t−1} × Y_{t−1} × O_t, where the inclusion can be strict. Let q denote the dimension of H_T. The t-th stage individualized treatment assignment d_t is a map from H_t to the treatment space A_t. A treatment policy or DTR is the collection of all such T treatment assignments; that is, d = (d_1, . . . , d_T) is a DTR. For a patient, the trajectory D denotes the sequence of all covariates, actions, and outcomes, i.e.,

D = (O_1, A_1, Y_1, . . . , O_T, A_T, Y_T). (2.1)

Suppose P is the distribution of D, i.e., P is the distribution of the observed data. The expectation corresponding to P will be denoted by E. We observe n independent trajectories D_i from n patients. The empirical distribution of D_1, . . . , D_n will be denoted by P_n. For any function h mapping D to a real number, we denote by P_n[h] the average n^{−1} Σ_{i=1}^n h(D_i). We will denote the t-th stage propensity score, i.e., the treatment assignment probability P(A_t = a_t | H_t), by π_t(a_t | H_t). We define the value function of a DTR or policy d by V(d) = E_d[Σ_{t=1}^T Y_t], where the expectation E_d is with respect to a distribution which is similar to P, except that the treatment assignments follow the potentially unobserved DTR d. Thus E_d is different from E, the expectation with respect to the observed data. Therefore, we need additional assumptions on the observed data distribution P so that V(d) becomes identifiable under P.

Assumptions for identifiability:

I. Positivity: There exists a constant C_π ∈ (0, 1) so that π_t(a_t | h_t) > C_π for all a_t ∈ [k_t], h_t ∈ H_t, and t ∈ [T].
II. Consistency: For all t ∈ [T], the observed outcomes Y_t and covariates O_t agree with the potential outcomes and covariates under the treatments actually received. See Schulte et al. (2014); Robins et al. (1994) for more details.

III. Sequential ignorability: For each t = 1, . . . , T, the treatment assignment A_t is conditionally independent of the future potential outcomes Y_t and future potential clinical profile O_{t+1} given H_t. Here we take O_{T+1} to be the empty set. Our version of sequential ignorability follows from Robins (1997); Murphy et al. (2001a).

Assumptions I-III are standard in the DTR literature (cf. Schulte et al., 2014; Murphy et al., 2001a; Tsiatis et al., 2019). Assumption III is generally satisfied under sequentially randomized trials (Chakraborty and Moodie, 2013). Recent research shows promise for relaxing Assumption III in observational data, provided proxy variables are available (Han, 2021; Zhang and Tchetgen, 2024). However, we do not pursue this direction in the current paper, as we first need to understand Fisher consistency in the simpler unconfounded setting before tackling the added complexity of unmeasured confounders. Under Assumptions I-III, it holds that (cf. p. 224 of Tsiatis et al., 2019)

V(d) = E[ Π_{t=1}^T (1[d_t(H_t) = A_t] / π_t(A_t | H_t)) Σ_{j=1}^T Y_j ]. (2.2)

We define the optimal policy or DTR d∗ to be a maximizer of V(d) over all possible DTRs, i.e., d∗ ∈ argmax_d V(d). We denote the optimal value function V(d∗) by V∗. Under Assumptions I-III, d∗ can be represented using the optimal Q-functions. We define the T-stage optimal Q-function as Q∗_T(H_T, A_T) = E[Y_T | H_T, A_T], and for t ∈ {1, . . . , T − 1}, we define the optimal Q-functions recursively as follows:

Q∗_t(H_t, A_t) = E[ Y_t + max_{i ∈ [k_{t+1}]} Q∗_{t+1}(H_{t+1}, i) | H_t, A_t ].

It can be shown (cf. Chakraborty and Moodie, 2013) that under Assumptions I-III, any d satisfying

d_t(H_t) ∈ argmax_{i ∈ [k_t]} Q∗_t(H_t, i) for all t ∈ [T] (2.3)

is a candidate for the optimal DTR d∗, and any version of the optimal DTR d∗ satisfies (2.3) with probability one. It is common to transform the maximization of V(d) into an optimization problem with real-valued functions, thereby avoiding optimization over discrete-valued maps (Tewari and Bartlett, 2007; Ramaswamy and Agarwal, 2016). To formalize, for any t ∈ [T], let F_t denote the space of all Borel measurable functions from H_t to R^{k_t}. We consider the T-fold product of these function classes, denoted by F = F_1 × . . . × F_T. Any f ∈ F is of the form f = (f_1, . . . , f_T), where f_t : H_t → R^{k_t} is Borel measurable. Thus any f ∈ F is a map from H_1 × . . . × H_T to R^K, where we denoted K = Σ_{t=1}^T k_t. For any (h_1, . . . , h_T) ∈ H_1 × . . . × H_T, f(h_1, . . . , h_T) = (f_1(h_1), . . . , f_T(h_T)), which is in R^K. For all i ∈ [k_t] and t ∈ [T], the i-th element of f_t(h_t) will be denoted by f_{ti}(h_t). Similar to Wang and Scott (2023a,b); Zhang (2004), we will refer to f as the class score function, and f_t as the class score function for stage t. In the classification literature, these functions are also known as the prediction function, decision function vector, or classification function. We require a link function to map each class score function to a treatment.
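To make the backward recursion concrete, here is a minimal sketch (with hypothetical reward numbers, not from the paper) in a toy two-stage setting where Y_1 = 0 and H_2 is determined by A_1, so the optimal Q-functions reduce to a table:

```python
import numpy as np

# Hypothetical conditional mean rewards: R[i, j] = E[Y2 | A1 = i+1, A2 = j+1].
R = np.array([[1.0, 0.2],
              [0.4, 0.5],
              [0.3, 0.6]])

# Backward induction. Stage 2: Q*_2(h2, a2) = E[Y2 | H2, A2]; here Q2 = R.
Q2 = R
# Stage 1: Q*_1(a1) = E[Y1 + max_j Q*_2(H2, j) | A1 = a1] = max_j R[a1, j],
# since Y1 = 0 and H2 is a deterministic function of A1.
Q1 = Q2.max(axis=1)

d2_star = Q2.argmax(axis=1) + 1  # optimal stage-2 treatment for each history
d1_star = int(Q1.argmax()) + 1   # optimal stage-1 treatment (1-based labels)
```

Here each stage maximizes its optimal Q-function, exactly as in (2.3): d1_star equals 1 and d2_star equals (1, 2, 2).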
Following the contemporary classification literature (Zhang, 2004; Wang and Scott, 2023a,b), we call this link function the pred function, defined by the operator pred(x) = max(argmax_{1 ≤ j ≤ k_t} x_j) for each vector x. The outer max operator ensures that pred returns a number rather than a set. We omit the dimension of x from the notation of pred because, similar to max, the domain of pred should be understandable from the context. Notice that each f_t defines a treatment assignment d_t : H_t → [k_t] via pred(f_t). Let pred(f) denote the function (pred(f_1), . . . , pred(f_T)). Then each f ∈ F defines a DTR (or policy) d via pred(f). We say that a DTR d is linear if each f_t is linear; otherwise, it is non-linear. By an abuse of notation, we denote V(pred(f)) by V(f), so that

V(f) = E[ Π_{t=1}^T (1[A_t = pred(f_t(H_t))] / π_t(A_t | H_t)) Σ_{j=1}^T Y_j ]. (2.4)

If f∗ = (f∗_1, . . . , f∗_T) is a maximizer of (2.4), then it can be shown that d∗ = pred(f∗) is an optimal DTR, and vice versa (Tsiatis, 2006). Furthermore, sup_{f ∈ F} V(f) = V∗. We say that
d∗_t is linear if there exists a linear f∗_t : H_t → R^{k_t} for all t ∈ [T], and non-linear otherwise. Similar to Zhao et al. (2015); Jiang et al. (2019), we use the inverse probability weighted (IPW) estimator of V(f), given by

V̂(f) = P_n[ Π_{t=1}^T (1[A_t = pred(f_t(H_t))] / π_t(A_t | H_t)) Σ_{j=1}^T Y_j ]. (2.5)

When T = 1, the maximization of (2.5) corresponds to a multiclass classification problem with class labels 1, . . . , k_1, feature space H_1, and weight E[Y_1 | H_1]/π_1(A_1 | H_1) (Tao and Wang, 2017; Xue et al., 2022). Assigning treatment A_1 to a patient with history H_1 corresponds to classifying this patient to one label from the set [k_1]. When T > 1, the maximization of (2.5) can be described as a weighted sequential multiclass classification problem, where a patient is assigned to the class pred(f_t(H_t)) at the t-th stage (cf. Tsiatis, 2006). Hereafter, DTR classification refers to the maximization of (2.5). Before moving on to the surrogate problem construction, we introduce two additional assumptions on P. Assumption IV is a boundedness assumption on the outcomes.

• Assumption IV. The Y_t's are bounded above by a constant, potentially depending on P, for all t ∈ [T].

A boundedness assumption on the rewards is common in the DTR and ITR literature (Jiang et al., 2019; Zhao et al., 2019; Laha et al., 2024a). It is not overly stringent since, in most of our applications, the Y_t's represent medical measurements and are automatically bounded. Weaker versions of Assumption IV, such as the boundedness of the optimal Q-functions, may be sufficient for some of our results. However, the boundedness of the Y_t's is required for showing the sufficiency of the proposed necessary conditions. It is unclear whether interpretable sufficient conditions can still be obtained for the separable surrogates in Section 4 without Assumption IV.
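The IPW estimator (2.5), together with the pred link, can be sketched as follows on simulated randomized data (all names, dimensions, and numbers below are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def pred(x):
    """Link function: the largest index attaining the maximum, 1-based,
    i.e., pred(x) = max(argmax(x))."""
    x = np.asarray(x, dtype=float)
    return int(np.flatnonzero(x == x.max()).max()) + 1

# Hypothetical randomized data: n patients, T = 2 stages, 2 treatments per
# stage, known propensities pi_t = 1/2, positive rewards (Assumption V).
n, T = 500, 2
A = rng.integers(1, 3, size=(n, T))     # observed treatments in {1, 2}
Y = rng.random(size=(n, T)) + 0.1       # rewards bounded away from zero
H = rng.random(size=(n, T))             # scalar "histories", for illustration
pi = np.full((n, T), 0.5)               # known propensity scores

def V_hat(f_list):
    """IPW estimate (2.5): average over patients of the product of stagewise
    indicator/propensity ratios, times the total reward."""
    w = np.ones(n)
    for t, ft in enumerate(f_list):
        match = np.array([pred(ft(H[i, t])) == A[i, t] for i in range(n)])
        w *= match / pi[:, t]
    return float(np.mean(w * Y.sum(axis=1)))

# A trivial policy: always assign treatment 2 at both stages.
always_two = [lambda h: np.array([0.0, 1.0])] * T
v = V_hat(always_two)
```

Only patients whose observed treatments match the policy at every stage contribute to the average, which is why the product of indicators makes direct maximization over f discontinuous.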
Since maximizing E_d[Σ_{t=1}^T Y_t] is equivalent to maximizing E_d[Σ_{t=1}^T (Y_t + c)] for any c ∈ R, without loss of generality we take Y_t > C_min for all t = 1, . . . , T for some C_min > 0, leading to the following assumption.

• Assumption V. There is a constant C_min > 0, potentially depending on P, so that Y_t > C_min for all t ∈ [T].

Although we refer to the above as an assumption, it is not technically an assumption on P, since a location transformation of the outcome data will ensure Assumption V holds, provided their supports are bounded below. Assumption V has appeared in related DTR literature (Laha et al., 2024a; Zhao et al., 2015, 2012).

2.0.1. Construction of the surrogate problem

For any k ∈ N, i ∈ [k], and x ∈ R^k, let us define the operator ϕ_dis as ϕ_dis(x; i) = 1[i = pred(x)], where "dis" in the subscript stands for "discontinuous". Although ϕ_dis depends on the dimension of x, we suppress this dependence to streamline the notation. For x_1 ∈ R^{k_1}, . . . , x_T ∈ R^{k_T}, and a_1 ∈ [k_1], . . . , a_T ∈ [k_T], let us define the loss

ψ_dis(x_1, . . . , x_T; a_1, . . . , a_T) = Π_{t=1}^T ϕ_dis(x_t; a_t). (2.6)

Equation (2.5) implies that

V̂(f) = P_n[ ψ_dis(f_1(H_1), . . . , f_T(H_T); A_1, . . . , A_T) Σ_{j=1}^T Y_j / Π_{t=1}^T π_t(A_t | H_t) ].

An ideal estimator of f∗ would be a maximizer of V̂(f). However, direct optimization of V̂(f) is computationally challenging due to the discontinuity of ψ_dis, which arises from the pred operator and the indicator function (Tewari and Bartlett, 2007; Bhattacharyya et al., 2018). In both the machine learning and DTR literature, direct optimization of discontinuous losses is typically avoided; they are replaced by a more well-behaved function, called a surrogate loss (Xu et al., 2014; Horowitz, 1992; Feng et al., 2022; Gao and Zhou, 2011; Calauzenes et al., 2012). Moreover, empirical evidence from simpler machine learning problems, such as binary classification, shows no significant advantage in directly optimizing the 0-1 loss over using certain surrogates like the sigmoidal loss (Nguyen and Sanner, 2013). In view of the above, we adopt the surrogate loss approach, replacing ψ_dis with a surrogate loss ψ : R^K × Π_{t=1}^T [k_t] → R. The resulting surrogate problem maximizes the surrogate value function

V̂^ψ(f) = P_n[ ψ(f_1(H_1), . . . , f_T(H_T); A_1, . . . , A_T) Σ_{j=1}^T Y_j / Π_{t=1}^T π_t(A_t | H_t) ]. (2.7)

However, the surrogate ψ needs to be chosen carefully so that the DTR resulting from the maximization of V̂^ψ(f) is a consistent estimator of d∗. The relevant concept in the literature is Fisher consistency or calibration (Bartlett et al., 2006; Tewari and Bartlett, 2007; Liu et al., 2018; Zhao et al., 2015). We need to introduce some terminology before formally defining Fisher consistency for the DTR classification problem. For f ∈ F, we define the population-level surrogate value function by

V^ψ(f) = E[ ψ(f_1(H_1), . . . , f_T(H_T); A_1, . . . , A_T) Σ_{j=1}^T Y_j / Π_{t=1}^T π_t(A_t | H_t) ]. (2.8)

For any surrogate ψ, let us denote V^ψ_∗ = sup_{f ∈ F} V^ψ(f). Let ˜f = (˜f_1, . . . , ˜f_T) be a maximizer of V^ψ(f) over F. Note that ˜f may not exist or be unique. In some cases, the maximum of V^ψ(f) is not attained in F, but there exists an extended-real-valued score function ˜f such that V^ψ(˜f) = V^ψ_∗.
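As an illustration of the smoothing idea (a hypothetical surrogate of our own for exposition, not one of the paper's losses), one smooth stand-in for ψ_dis in (2.6) replaces each stagewise indicator with a softmax probability; sharpening the temperature recovers ψ_dis away from ties:

```python
import numpy as np

def softmax(x, s=10.0):
    """Numerically stable softmax with inverse temperature s."""
    z = s * (np.asarray(x, dtype=float) - np.max(x))
    e = np.exp(z)
    return e / e.sum()

def psi_smooth(xs, acts, s=10.0):
    """Smooth surrogate for psi_dis: product over stages of the a_t-th
    softmax probability of the stage-t score vector (1-based labels)."""
    out = 1.0
    for x, a in zip(xs, acts):
        out *= softmax(x, s)[a - 1]
    return out

# Scores strongly favoring treatment 1 at stage 1 and treatment 2 at stage 2:
xs = [np.array([5.0, 0.0]), np.array([0.0, 5.0])]
hi = psi_smooth(xs, [1, 2])   # close to psi_dis = 1
lo = psi_smooth(xs, [2, 1])   # close to psi_dis = 0
```

Unlike ψ_dis, this surrogate is differentiable in the scores, so a plug-in version of (2.7) can be maximized by gradient ascent; this mirrors the motivation for smooth surrogates, though the specific losses studied in this paper differ.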
We will denote pred(˜f) by ˜d.

Definition 2.1 (DTR Fisher Consistency w.r.t. P). Let P be a set of distributions satisfying Assumptions I-V. We say that a surrogate ψ is Fisher consistent with respect to P if, for any probability measure P ∈ P, every sequence {f_m}_{m≥1} ⊂ F satisfying V^ψ(f_m) → V^ψ_∗ as m → ∞ also satisfies V(f_m) → V∗ as m → ∞, where the expectations in V^ψ and V are taken with respect to P.

Denote by P_0 the class of all probability distributions satisfying Assumptions I-V. Throughout this paper, we say ψ is Fisher consistent if it is Fisher consistent with respect to P_0. Our definition of Fisher consistency allows argmax_{f ∈ F} V^ψ(f) to be empty, aligning with the definition of Fisher consistency in the broader machine learning literature (Bartlett et al., 2006; Tewari and Bartlett, 2007; Ramaswamy and Agarwal, 2016). If ˜f exists, then Fisher consistency implies that ˜d = pred(˜f) is a candidate for the optimal DTR d∗. We say that a surrogate loss is single-stage Fisher consistent if it is Fisher consistent for the DTR classification problem when T = 1. Fisher consistency is an important requirement for surrogate losses because otherwise the optimization of
V̂^ψ can lead to sub-optimal DTRs, even if the latter efficiently estimates V^ψ. In the multiclass classification literature, surrogate frameworks often involve the sum-zero constraint, requiring the class score functions to sum to zero (Tewari and Bartlett, 2007; Zhang, 2004). However, in this paper, we do not consider a constrained optimization framework because constraints can complicate the optimization, especially given that DTR classification would impose T sum-zero constraints, one for each stage (Xue et al., 2022). Although the pred function considered above is the most traditional link function for policy learning and classification, alternatives exist. Notable among these is the recently proposed angle-based link function, which, although originally developed for multiclass classification, has been applied to ITR (Zhang et al., 2020) and to DTR for maximizing survival probability (Xue et al., 2022). In this framework, pred(x) does not rely on argmax(x); instead, it depends on the angle between x and a set of prespecified vectors determined by the dimension of x. See Remark 2 for further discussion, where we explain why we adopt the argmax-based pred over the angle-based alternative for our problem. Before closing this section, we note the distinction between the general and the binary-treatment case. In the latter case, the treatments A_t can be represented as ±1 rather than {1, 2}, so that A_1, . . . , A_T ∈ {±1}. This embedding has an advantage. Letting g_t : H_t → R for t ∈ [T], we can represent d_t by sign(g_t). Consequently, 1[A_t = pred(f_t(H_t))] can be replaced with the 0-1 loss function 1[A_t g_t(H_t) > 0] in (2.4) and (2.5). Therefore, ϕ_dis simplifies to the univariate 0-1 loss x ↦ 1[x ≥ 0], allowing margin-based surrogates. See Wang and Scott (2023b) for a discussion of the advantages of margin-based surrogate losses.

3. Concave surrogates

Convex surrogates are attractive because they yield convex optimization problems, facilitating efficient optimization techniques (Calauzenes et al., 2012). This section presents some negative results regarding the Fisher consistency of concave surrogates in DTR classification. Readers primarily interested in Fisher consistent surrogates may skip to Section 4. If concave surrogates fail to be Fisher consistent in the T = 2 case, they cannot succeed in the general T ≥ 2 case. Hence, we will use T = 2 throughout this section to streamline our presentation. To our knowledge, to date, no concave surrogate loss has been proposed for DTR classification in the general case described in Section 2.0.1. Consequently, no reference concave surrogate exists for our setting. However, Fisher consistent surrogates for multiclass classification yield Fisher consistent surrogates for the T = 1 case since, as noted in Section 2, these two problems have a direct correspondence (see also Zhang et al., 2020; Tao and Wang, 2017). The concave surrogate losses considered in this section are direct extensions of these single-stage losses to the T = 2 case. The corresponding losses in multiclass classification are convex, as that setting involves minimization. Throughout this section, references to adaptations from multiclass classification should be understood to mean the concave versions of those losses. We begin this section with an example of the concave version of the exponential loss, which is Fisher inconsistent when
T = 2. Then we establish the inconsistency of concave PERM losses under smoothness conditions. Finally, we present examples of non-PERM concave surrogates, demonstrating that they may also fail to be Fisher consistent.

Example 1. (Exponential loss) Let us consider the concave surrogate loss

ψ(x, y; a_1, a_2) = −Σ_{i ∈ [k_1]} Σ_{j ∈ [k_2]} exp(−(x_{a_1} − x_i + y_{a_2} − y_j)), (3.1)

where x ∈ R^{k_1}, y ∈ R^{k_2}, a_1 ∈ [k_1], and a_2 ∈ [k_2]. It is a concave two-stage extension of the following exponential surrogate loss:

ϕ(x; a) = −Σ_{i ∈ [k_1], i ≠ a} exp(−(x_a − x_i)) for all x ∈ R^{k_1}, a ∈ [k_1], (3.2)

which is a Fisher consistent surrogate for multiclass classification with k_1 classes (Zhang, 2004; Tewari and Bartlett, 2007). Using the above, it can be shown that the surrogate in (3.2) is also single-stage Fisher consistent, i.e., Fisher consistent for DTR classification when T = 1. Result 1, proved in Supplement S4.0.1, provides the formula of ˜d = pred(˜f) when T = 2.

Result 1. Under Assumptions I-V, the surrogate loss defined in (3.1) satisfies

˜d_1(H_1) ∈ argmax_{a_1 ∈ [k_1]} E[ ( Σ_{a_2 ∈ [k_2]} √(Y_1 + Q∗_2(H_2, a_2)) )² | H_1, A_1 = a_1 ],
˜d_2(H_2) ∈ argmax_{a_2 ∈ [k_2]} Q∗_2(H_2, a_2). (3.3)

Equation (2.3) implies that the second-stage treatment assignment ˜d_2(H_2) is optimal, but the first-stage treatment assignment ˜d_1(H_1) may be suboptimal because it need not agree with a maximizer of Q∗_1(H_1, a_1), i.e., E[Y_1 + max_{i ∈ [k_2]} Q∗_2(H_2, i) | H_1, A_1 = a_1], over a_1 ∈ [k_1]. Since Fisher consistency implies ˜d = d∗, the loss in (3.1) cannot be Fisher consistent. □

Example 1 is not an isolated case. In all our examples (see Section 3.1.2 for illustrations when T = 2), T-stage extensions of Fisher consistent surrogates from multiclass classification yield optimal treatment only in the final stage, while earlier treatment assignments become suboptimal. This probably occurs because the last stage of DTR classification resembles a multiclass classification problem.

3.1.
PERM losses

Now we will show that a large class of concave surrogates cannot be Fisher consistent for sequential DTR classification. To this end, we first define the class of permutation equivariant and relative margin-based surrogate losses, i.e., PERM losses, which were introduced in Wang and Scott (2023b) for the multiclass classification problem. This requires introducing the notion of permutation symmetry. We say η : R^{k_1+k_2−2} → R is permutation symmetric if for any permutations υ : [k_1 − 1] → [k_1 − 1] and υ′ : [k_2 − 1] → [k_2 − 1],

η(u, v) = η(υ(u), υ′(v)) for all u ∈ R^{k_1−1}, v ∈ R^{k_2−1}. (3.4)

Extending the definition from Wang and Scott (2023b) to the two-stage case, we say that a surrogate loss ψ is PERM if there exists a permutation-symmetric function η such that for all a_1 ∈ [k_1], a_2 ∈ [k_2], x ∈ R^{k_1}, and y ∈ R^{k_2},

ψ(x, y; a_1, a_2) = η(x_{a_1} − x_1, . . . , x_{a_1} − x_{k_1}, y_{a_2} − y_1, . . . , y_{a_2} − y_{k_2}), (3.5)

where the zero terms x_{a_1} − x_{a_1} and y_{a_2} − y_{a_2} are omitted from the arguments in (3.5). PERM losses are simply surrogate losses that are both permutation equivariant and relative margin-based. They are permutation equivariant in that they do not give special weight to any specific treatment label. This is a mild restriction because, to the best of our knowledge, most surrogate losses in practical use are permutation equivariant. PERM losses are relative margin-based in the sense that ψ(x, y; a_1, a_2) depends on x (or y) only through pairwise differences of its elements, as is evident from (3.5). Relative margin-based losses are also common in the multiclass classification literature (Rosset et al., 2003; Glasmachers et al.
, 2016; Fathony et al., 2016). We use the class of PERM losses here because demonstrating Fisher inconsistency for all concave surrogates, even under smoothness conditions, is challenging: the class of all concave surrogates is too broad. As demonstrated by the insightful work of Wang and Scott (2023b), PERM losses provide enough structure that can be manipulated to establish interpretable theoretical results on Fisher consistency. Moreover, PERM losses include well-known surrogate loss classes. Below, we present some examples of surrogate loss classes that are PERM. See Wang and Scott (2023b) for more examples.

Example 2. (Pairwise comparison loss (Weston et al., 1999)) In the multiclass classification context, this loss is defined by

ϕ(x; a) = Σ_{i ∈ [k_1], i ≠ a} ˜ϕ(x_a − x_i) for x ∈ R^{k_1}, a ∈ [k_1], (3.6)

where ˜ϕ is a real-valued function. From Zhang (2004) (see also Tewari and Bartlett, 2007), it follows that if ˜ϕ is bounded above, non-decreasing, concave, and ˜ϕ′(0) > 0, then the ϕ in (3.6) is Fisher consistent for single-stage DTR classification. Two-stage extensions of ϕ can be defined as

ψ(x, y; a_1, a_2) = Σ_{i ∈ [k_1]} Σ_{j ∈ [k_2]} ˜ϕ(x_{a_1} − x_i, y_{a_2} − y_j) (3.7)

or

ψ(x, y; a_1, a_2) = Σ_{i ∈ [k_1], i ≠ a_1} ˜ϕ_1(x_{a_1} − x_i) + Σ_{j ∈ [k_2], j ≠ a_2} ˜ϕ_2(y_{a_2} − y_j), (3.8)

where ˜ϕ_1 and ˜ϕ_2 are real-valued functions. The exponential loss in Example 1 is of the form (3.7). □

Example 3. (Gamma-Phi losses (Beijbom et al., 2014)) In the multiclass classification context, the Gamma-Phi loss is defined as

ϕ(x; a_1) = ϕ_out( Σ_{i ∈ [k_1]: i ≠ a_1} ϕ_in(x_{a_1} − x_i) ) for all x ∈ R^{k_1} and a_1 ∈ [k_1], (3.9)

where ϕ_in : R → R and ϕ_out : R → R are real-valued functions. Many popular multiclass losses are Gamma-Phi losses. For example, when ϕ_out(x) = log(1 + x) and ϕ_in(x) = exp(−x), (3.9) reduces to the cross-entropy loss or the multinomial logistic loss. When ϕ_out(x) = c log(1 + x) and ϕ_in(x) = exp((1 − x)/c) for some c > 0, (3.9) reduces to the coherence loss, which is used in multiclass boosting algorithms (Zhang et al., 2009).
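As a quick sanity check (an illustrative sketch with our own function names), the cross-entropy instance of (3.9) can be evaluated directly, and it agrees with the negative log-softmax probability of the label, since log(1 + Σ_{i≠a} e^{−(x_a − x_i)}) = log(Σ_i e^{x_i − x_a}):

```python
import numpy as np

def gamma_phi(x, a, phi_out, phi_in):
    """Gamma-Phi loss (3.9): phi_out(sum over i != a of phi_in(x_a - x_i)).
    Treatment labels are 1-based."""
    x = np.asarray(x, dtype=float)
    margins = np.delete(x[a - 1] - x, a - 1)  # relative margins, i != a
    return float(phi_out(np.sum(phi_in(margins))))

# Cross-entropy instance: phi_out = log(1 + .), phi_in = exp(-.)
cross_entropy = lambda x, a: gamma_phi(x, a, np.log1p, lambda t: np.exp(-t))

def neg_log_softmax(x, a):
    """Negative log of the softmax probability of class a (1-based)."""
    x = np.asarray(x, dtype=float)
    return float(np.log(np.sum(np.exp(x - x[a - 1]))))
```

The two functions coincide for every score vector and label, confirming that the cross-entropy loss is a Gamma-Phi loss.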
Both these losses are convex and Fisher consistent for multiclass classification (Wang and Scott, 2023a). In the DTR problem, however, the concave version of ϕ is relevant. From Wang and Scott (2023a), it follows that if (a) ϕ_in and ϕ_out are strictly decreasing, real-valued, and continuously differentiable functions, (b) ϕ_out is bounded above, and (c) inf_{x ∈ R} ϕ_in(x) = 0, then the Gamma-Phi loss in (3.9) is Fisher consistent for the single-stage DTR case. Two possible two-stage extensions of (3.9) are given below:

ϕ(x, y; a_1, a_2) = ϕ_out( Σ_{i ∈ [k_1]: i ≠ a_1} Σ_{j ∈ [k_2]: j ≠ a_2} ϕ_in(x_{a_1} − x_i, y_{a_2} − y_j) ),
ϕ(x, y; a_1, a_2) = ϕ_out( Σ_{i ∈ [k_1]: i ≠ a_1} ϕ_in(x_{a_1} − x_i) + Σ_{j ∈ [k_2]: j ≠ a_2} ϕ_in(y_{a_2} − y_j) ),

where a_1 ∈ [k_1], a_2 ∈ [k_2], x ∈ R^{k_1}, and y ∈ R^{k_2}. Pairwise comparison losses are a special case of Gamma-Phi losses when ϕ_out is the identity function. □

Now we are ready to state the main theorem of this section. To do so, we first review some definitions from convex analysis. A convex function f : R^k → R̄ is called proper if f(x) > −∞ for all x ∈ R^k and dom(f) is non-empty (cf. p. 24, Rockafellar, 1970). Also, f is called closed if its sublevel sets are either closed or empty (cf. Definition 1.2.3, p. 78, Hiriart-Urruty and Lemaréchal, 2004). Note that a PERM loss is concave if its template η is concave. Therefore, we will assume that the template η is concave. Because we define domain, closedness, and properness in the context of convex functions, the statement of
Theorem 3.1 below uses −η and −ψ instead of η and ψ, respectively.

Theorem 3.1. Suppose T = 2, k_1, k_2 ≥ 2, and ψ is an above-bounded concave PERM loss such that ∩_{i=1}^{k_1} ∩_{j=1}^{k_2} int(dom(−ψ(·; i, j))) ≠ ∅. Further suppose −η is proper, closed, and strictly convex, where η is the template of ψ. Also, we assume η is thrice continuously differentiable on int(dom(−η)). Then ψ cannot be Fisher consistent.

The conditions in Theorem 3.1 merit some discussion.

(i) Closedness and properness: These are mild regularity assumptions on −η intended to exclude pathological cases. Improper convex functions are often omitted from the formal definition of convex functions (cf. p. 74, Hiriart-Urruty and Lemaréchal, 2004). Closedness affects behavior only at the boundary and, like properness, is a standard assumption for convex functions in the statistical literature (Saumard and Wellner, 2014; Laha, 2021; Seregin and Wellner, 2010; Doss and Wellner, 2016, 2019). To our knowledge, all convex surrogates in practical use are proper and closed.

(ii) Domain condition: The condition ∩_{i=1}^{k_1} ∩_{j=1}^{k_2} int(dom(−ψ(·; i, j))) ≠ ∅ is a mild technical requirement. The proof of Theorem 3.1 involves the construction of a bad set of distributions. This condition ensures the existence of ˜f_1 and ˜f_2 for such distributions, where (˜f_1, ˜f_2) ∈ argmax_{f ∈ F} V^ψ(f). While the theorem may still hold without this assumption, the associated analysis becomes technically more involved. All multiclass PERM losses in practical use satisfy this domain assumption.

(iii) Boundedness and strict convexity: These conditions are mainly required for establishing the existence and uniqueness of ˜f_1 and ˜f_2 for the above-mentioned bad set of distributions. Wang and Scott (2023a,b) also used such assumptions on PERM losses.

(iv) Smoothness condition: The proof of Theorem 3.1 depends crucially on the smoothness condition. However, we are not aware of any non-smooth concave loss, PERM or otherwise, that is Fisher consistent for T ≥ 2.
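For intuition, the exponential loss (3.1) supplies a concrete smooth template: writing u and v for the relative margins, factorizing the double sum in (3.1) and separating the i = a_1 and j = a_2 terms gives η(u, v) = −(1 + Σ_i e^{−u_i})(1 + Σ_j e^{−v_j}). A quick numerical check of the permutation symmetry (3.4) (an illustrative sketch of ours, not part of the paper's proofs):

```python
import numpy as np
from itertools import permutations

def eta(u, v):
    """Template of the two-stage exponential loss (3.1):
    eta(u, v) = -(1 + sum_i exp(-u_i)) * (1 + sum_j exp(-v_j))."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return -(1.0 + np.exp(-u).sum()) * (1.0 + np.exp(-v).sum())

# Verify the permutation symmetry (3.4) at a random point (k1 = 4, k2 = 3,
# so u has k1 - 1 = 3 entries and v has k2 - 1 = 2 entries).
rng = np.random.default_rng(1)
u, v = rng.normal(size=3), rng.normal(size=2)
base = eta(u, v)
symmetric = all(
    np.isclose(eta(u[list(p)], v[list(q)]), base)
    for p in permutations(range(3)) for q in permutations(range(2))
)
```

This template is everywhere finite and infinitely differentiable, and it is permutation symmetric because it depends on u and v only through sums; the inconsistency found in Example 1 is thus in line with the negative conclusion of Theorem 3.1 for smooth concave PERM losses.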
Later in this section, we provide an example of a popular non-smooth concave surrogate that is Fisher consistent in the single-stage case but inconsistent in the two-stage DTR case. Smoothness-related assumptions on surrogates are common in the Fisher consistency literature (Zhang, 2004; Xue et al., 2022; Wang and Scott, 2023a,b; Zhang et al., 2020). Supplement S1.1 discusses the conditions under which our PERM loss examples satisfy the assumptions of Theorem 3.1.

3.1.1. Proof outline and challenges

Theorem 3.1 is proved in Supplement S5. The proof, which occupies a substantial part of our proof sections, draws on tools from convex analysis. First, we identify a subset of P_0 where ˜f exists and is analytically tractable. Fisher inconsistency is then demonstrated within this subset. The next step is a reduction to the binary treatment case. At a high level, we show that under the conditions of Theorem 3.1, the existence of a Fisher consistent PERM loss ψ in the general setting implies the existence of a Fisher consistent relative margin-based surrogate ϑ in the binary treatment setting. Note that we lose the permutation equivariance property of ψ in this reduction. The remainder of the proof establishes that if ϑ is concave, it cannot be Fisher consistent. The loss of permutation equivariance implies that ϑ has more degrees of variability than a PERM loss in the binary treatment case. Interestingly, in the binary treatment case, PERM
losses coincide with Laha et al. (2024a)'s margin-based losses. Therefore, while Laha et al. (2024a) developed tools to prove the inconsistency of margin-based concave surrogates in the binary-treatment case, those techniques fail to apply here. The loss of permutation equivariance substantially alters the structure of the problem, making our analysis significantly more intricate than that in Laha et al. (2024a). See Section 8.1 for further comparisons between the current paper and Laha et al. (2024a).

3.1.2. Additional examples of non-PERM concave surrogates

The negative result in Theorem 3.1 is unlikely to be an artifact of smoothness or the PERM structure. In our search across other classes of concave losses, we found no surrogate for which ˜d_1 agrees with d∗_1. Below, we provide examples of inconsistency arising from non-PERM concave surrogate classes. Our examples include a non-smooth loss, which is a two-stage extension of the multivariate hinge loss.

Example 4. (One-vs-all loss) Here we consider a two-stage concave version of the one-vs-all surrogate loss of Zhang (2004). Following the multiclass classification literature, in the single-stage case, we can define the one-vs-all loss by

ϕ(x; a_1) = ˜ϕ(x_{a_1}) + Σ_{i ∈ [k_1]: i ≠ a_1} ˜ϕ(−x_i) for all x ∈ R^{k_1} and a_1 ∈ [k_1],

where ˜ϕ is a univariate function. Using Zhang (2004), it can be shown that ϕ is single-stage Fisher consistent if ˜ϕ is concave, bounded above, differentiable, and ˜ϕ(−x) < ˜ϕ(x) for all x > 0. A two-stage extension of the above surrogate is given by

ψ(x, y; a_1, a_2) = ˜ϕ(x_{a_1}, y_{a_2}) + Σ_{j ∈ [k_2]: j ≠ a_2} ˜ϕ(x_{a_1}, −y_j) + Σ_{i ∈ [k_1]: i ≠ a_1} ˜ϕ(−x_i, y_{a_2}) + Σ_{i ∈ [k_1]: i ≠ a_1} Σ_{j ∈ [k_2]: j ≠ a_2} ˜ϕ(−x_i, −y_j), (3.10)

where x ∈ R^{k_1}, y ∈ R^{k_2}, a_1 ∈ [k_1], a_2 ∈ [k_2], and ˜ϕ is a bivariate function. In particular, let us take ˜ϕ(x, y) = −exp(−x − y).² In that case, the loss in (3.10) takes the form

ψ(x, y; a_1, a_2) = −( e^{−x_{a_1}} + Σ_{i ∈ [k_1]: i ≠ a_1} e^{x_i} )( e^{−y_{a_2}} + Σ_{j ∈ [k_2]: j ≠ a_2} e^{y_j} ).
(3.11)

Result 2, proved in Supplement S4.0.2, implies that, similar to Examples 1 and 5, the first-stage treatment assignment may be suboptimal for the loss in (3.11).

Result 2. Consider the surrogate ψ in (3.11). Under the setup of Result 1, ˜d2(H2) is as in (3.3), and any element of the following set is a candidate for ˜d1(H1):
\[
\operatorname*{argmax}_{i\in[k_1]}\ \sum_{a_2\in[k_2]} E\Big[\, E[Y_1+Y_2\mid H_2, a_2] \sum_{j\in[k_2]:\, j\neq a_2} E[Y_1+Y_2\mid H_2, j] \ \Big|\ H_1, A_1 = i \Big]. \qquad\square
\]

Example 5 (Additive losses). A simple way of constructing a two-stage concave surrogate loss is adding two concave single-stage surrogate losses. We consider losses of the form
\[
\psi(x, y; a_1, a_2) = \phi^{(1)}(x, a_1) + \phi^{(2)}(y, a_2) \tag{3.12}
\]
for all x ∈ R^{k_1}, y ∈ R^{k_2}, a_1 ∈ [k_1], and a_2 ∈ [k_2]. The loss ψ in (3.12) will be concave if both ϕ^{(1)} and ϕ^{(2)} are concave. Suppose ϕ^{(1)} and ϕ^{(2)} are single-stage Fisher consistent losses. Result 3 below, which is proved in Supplement S4.0.3, shows that the first-stage treatment assignment ˜d1 does not match d∗1, although the second-stage treatment assignment remains optimal. This result is general in that concavity of the single-stage losses is not required.

Result 3. Suppose Assumptions I-V hold and ψ is as in (3.12). Further suppose ϕ^{(1)} and ϕ^{(2)} are each single-stage Fisher consistent surrogates. Then, whenever they exist, ˜d2 is as in (3.3) and
\[
\tilde d_1(H_1) \in \operatorname*{argmax}_{a_1\in[k_1]}\ E\Big[\sum_{a_2\in[k_2]} E[Y_1+Y_2\mid H_2, A_2=a_2] \ \Big|\ H_1, A_1=a_1\Big]. \qquad\square
\]

Example 6 (Sum-zero constraints). In multiclass classification, a popular method is the constrained comparison method, where the sum-zero constraint \(\sum_{i=1}^{k_1} f_{1i} = 0\) is imposed
on the score function when the surrogate risk is minimized (Zhang, 2004). The corresponding surrogate is given by
\[
\phi(x; a_1) = \sum_{i\in[k_1]:\, i\neq a_1} \tilde\phi(-x_i) \quad\text{for all } x\in\mathbb R^{k_1} \text{ and } a_1\in[k_1], \tag{3.13}
\]
where ˜ϕ is a univariate function. An example is the multi-category support vector machine (SVM) of Lee et al. (2004), where ˜ϕ is the convex loss ˜ϕ(x) = max(1 − x, 0) for all x ∈ R, also known as the hinge loss. Zhang (2004) and Liu (2007) show that the resulting surrogate is Fisher consistent for the multiclass classification problem. A two-stage extension of (3.13) is given by
\[
\psi(x, y; a_1, a_2) = \sum_{i\in[k_1]:\, i\neq a_1}\ \sum_{j\in[k_2]:\, j\neq a_2} \tilde\phi(-x_i, -y_j) \tag{3.14}
\]
for all x ∈ R^{k_1}, y ∈ R^{k_2}, a_1 ∈ [k_1], and a_2 ∈ [k_2], where the constrained maximization problem becomes
\[
\begin{aligned}
&\text{maximize}_{f_1\in\mathcal F_1,\ f_2\in\mathcal F_2}\ \ V^{\psi}(f_1, f_2)\\
&\text{subject to } \sum_{i=1}^{k_t} f_{ti}(h_t) = 0 \ \text{ for all } h_t\in\mathcal H_t \text{ and } t\in\{1,2\}.
\end{aligned} \tag{3.15}
\]
Let us consider ˜ϕ(x, y) = min(x − 1, y − 1, 0), the concave version of the multivariate hinge loss (Zhao et al., 2015). Exact solutions of (3.15) with this choice of ψ are analytically intractable and beyond the scope of this paper. Instead, we consider toy settings where the solutions ˜f1 and ˜f2 can be found numerically. To make the calculations simple, we consider the scenario where O1, O2 = ∅ and Y1 = 0. In this case, H2 ≡ A1. Since H1 = O1 = ∅, ˜f1 is a fixed vector in R³, and d∗1 and ˜d1 are fixed numbers. It can be shown that d∗ is totally determined by the expected (conditional) outcome matrix (E[Y2 | A1 = i, A2 = j])_{i∈[k1], j∈[k2]} in this setting. Letting k1 = 3 and k2 = 2, we consider three choices of this matrix, which leads to the three settings in Table 1. In all these settings, d∗1 is unique. Table 1 displays ˜f1, argmax(˜f1), and d∗1 for each setting. The calculations behind the derivation of ˜f for the settings in Table 1 can be found in Supplement S4.0.4. Table 1 shows that for the first two settings, ˜d1 = pred(˜f1) disagrees with d∗1. It can

²Note that this ˜ϕ is an above-bounded, differentiable concave function satisfying ˜ϕ(x, y) > ˜ϕ(x, −y) when y > 0 and ˜ϕ(x, y) > ˜ϕ(−x, y) when x > 0.
also be shown that V(˜f) < V∗ in these two settings. Therefore, the surrogate under consideration is not Fisher consistent. Nevertheless, in the third setting, ˜d1 agrees with d∗1. In this case, it turns out that Q∗1(1) = 2, Q∗1(2) = 80, but Q∗1(3) = 3, indicating that treatment level 2, which is also the optimal treatment for the first stage, has a much higher treatment effect compared to the remaining treatments. The higher effect size makes optimal treatment assignment an easier task in the first stage of the third setting.

Setting | E[Y2|1,1] E[Y2|1,2] E[Y2|2,1] E[Y2|2,2] E[Y2|3,1] E[Y2|3,2] | (˜f11, ˜f12, ˜f13) | argmax(˜f1) | d∗1
1       | 5         1         3         4         4         4         | (0, 0, 0)          | (1, 2, 3)   | 1
2       | 5         7         3         4         4         4         | (0, 0, 0)          | (1, 2, 3)   | 1
3       | 1         2         80        1         3         3         | (−1, 2, −1)        | 2           | 2

Table 1: Toy settings for Example 6. This table displays three settings under the toy setup of Example 6. It shows the E[Y2 | A1 = i, A2 = j] values and the corresponding optimal first-stage treatment d∗1. The provided ˜f1 ≡ (˜f11, ˜f12, ˜f13) is the solution to (3.15), using the multivariate hinge loss surrogate of Example 6. Here, E[Y2 | i, j] abbreviates E[Y2 | A1 = i,
A2 = j] for i ∈ [3] and j ∈ [2]. □

To conclude, our results and examples in this section do not show much promise for concave surrogates in the DTR classification problem. In fact, it may be that convexifying simultaneous direct search is fundamentally infeasible without sacrificing Fisher consistency. Thus, the computational burden of non-convex optimization may be necessary for theoretically valid simultaneous direct search, which can be viewed as the cost of reducing reliance on models. This limitation may narrow the scope of simultaneous DTR direct search, but Fisher inconsistency of concave/convex surrogate losses is neither surprising nor uncommon. Convex surrogates, especially smooth ones, struggle with Fisher consistency across a range of machine learning problems, including multilabel classification with ranking loss (Gao and Zhou, 2011), ranking with pairwise disjoint loss (Calauzenes et al., 2012; Duchi et al., 2010), and maximum score estimation (Feng et al., 2022). Even in DTR classification with risk constraints, no concave surrogates have been identified to date, and non-concave losses such as the ramp loss remain in use (Liu et al., 2024b,a). According to Calauzenes et al. (2012), the failure of concave losses occurs in problems where the convex surrogate objective function (Vψ(f) in our case) fails to closely approximate the discontinuous objective function (V(f) in our case). We note that properties related to Fisher consistency rely on the choice of the link function pred (Ramaswamy and Agarwal, 2016). Consequently, the aforementioned negative results are specific to our argmax-based pred function, which is arguably the most common pred function in the DTR, ITR, and even classification literature. Smooth concave surrogate losses may achieve Fisher consistency with alternative choices of pred, although whether such pred functions can yield computationally feasible methods is a separate question.

4.
Necessary and Sufficient Conditions for Fisher Consistency

The inconsistency results for concave surrogates in Section 3 prompt us to look beyond the concave surrogates. In this section, we establish necessary and sufficient conditions for Fisher consistency among separable surrogates, which we define as
\[
\psi(x_1, \dots, x_T; a_1, \dots, a_T) = \prod_{t=1}^{T} \phi_t(x_t; a_t) \quad\text{for all } x_t\in\mathbb R^{k_t},\ a_t\in[k_t],\ t\in[T], \tag{4.1}
\]
where each ϕt : R^{k_t} × [k_t] → R can be thought of as a single-stage surrogate loss. We also assume ϕt ≥ 0 for each t ∈ [T]. We then present examples of Fisher-consistent surrogates within this class and discuss their properties. Here we consider products of single-stage surrogates rather than sums, as Example 5 shows that addition can lead to Fisher inconsistency. Non-negativity of the ϕt's offers analytical advantages. In particular, it enables iterative closed-form expressions for Vψ(f) and Vψ∗, facilitating a detailed analysis of the ψ-regret Vψ∗ − Vψ(f). However, non-negativity implies non-concavity for non-constant ϕt's. That said, concavity of the ϕt's may offer limited benefit, since a separable ψ can still be non-concave even if each ϕt is concave. Notably, the original discontinuous surrogate ψdis is separable with non-negative ϕdis. Non-negative separable surrogates have been used in DTR classification, particularly for survival data (Xue et al., 2022) and binary treatments (Laha et al., 2024a). Even in single-stage direct search, non-negative, non-concave surrogates are used when the presence of risk constraints
complicates the classification (Liu et al., 2024b; Wang et al., 2018). Similar surrogates also appear in the multiclass classification literature (Liu and Shen, 2006; Wu and Liu, 2007).

4.0.1. Single-stage setting (T = 1)

The necessary and sufficient condition for Fisher consistency in the single-stage case is analogous to multiclass classification, and can be concisely expressed using functionals of the ϕt's. Let k ∈ N. For any single-stage surrogate ϕ : R^k × [k] → R, x ∈ R^k, and any p ∈ R^k, let us define the functionals
\[
\Psi(x; p) = \sum_{i=1}^{k} p_i\, \phi(x; i) \quad\text{and}\quad \Psi^*(p) = \sup_{x\in\mathbb R^k} \Psi(x; p). \tag{4.2}
\]
Equation 1.1 implies that Ψ∗ is simply the support function of the set
\[
\mathcal V_\phi = \big\{ v\in\mathbb R^k : v_i = \phi(x; i) \text{ for some } x\in\mathbb R^k \big\}. \tag{4.3}
\]
By a slight abuse of terminology, we will refer to Vϕ as the image set of ϕ. When ϕ = ϕt, Ψ and Ψ∗ will be denoted by Ψt and Ψ∗t, respectively. We will now introduce a condition.

Condition N1. For any p ∈ R^k_{≥0}, the surrogate ϕ : R^k × [k] → R_{≥0} satisfies
\[
\Psi^*(p) - \sup_{x\in\mathbb R^k:\ p_{\mathrm{pred}(x)} < \max(p)} \Psi(x; p) > 0. \tag{4.4}
\]
Since we defined the supremum of an empty set to be −∞, (4.4) holds trivially when the elements of p are all equal. Condition N1 essentially states that if pred(x) ∉ argmax(p), then x cannot maximize Ψ(·; p). Such conditions are common in the multiclass classification literature (Ramaswamy and Agarwal, 2016; Tewari and Bartlett, 2007). Tewari and Bartlett (2007) also provide geometric interpretations of conditions similar to Condition N1, though under the link function pred(x) = argmax_{i∈[k]} ϕ(x; i). Lemma 4.1 below shows that Condition N1 is both necessary and sufficient for Fisher consistency in the single-stage setting. The analogous result for multiclass classification has been proved by Tewari and Bartlett (2007).

Lemma 4.1. Condition N1 is necessary and sufficient for the Fisher consistency of ϕ : R^k × [k] → R_{≥0} in the single-stage (T = 1) setting with respect to P0, i.e., the class of all distributions satisfying Assumptions I-V.
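To make Condition N1 concrete, the sketch below numerically maximizes Ψ(·; p) by crude random search for a simple non-negative surrogate, namely the product-based loss introduced later in Section 4.2.2 with a logistic kernel and C = 1, and checks that the approximate maximizer's predicted label lands in argmax(p). The particular p, the search range, and the random-search optimizer are illustrative choices, not part of the paper's method.

```python
import numpy as np

def sigma(z):
    """Logistic CDF, used here as the smoothing function."""
    return 1.0 / (1.0 + np.exp(-z))

def phi(x, i):
    """Product-based surrogate with C = 1: prod_{j != i} sigma(x_i - x_j)."""
    return np.prod([sigma(x[i] - x[j]) for j in range(len(x)) if j != i])

def Psi(x, p):
    """Psi(x; p) = sum_i p_i * phi(x; i), as in (4.2)."""
    return sum(p[i] * phi(x, i) for i in range(len(p)))

rng = np.random.default_rng(0)
p = np.array([0.2, 0.5, 0.3])        # unique argmax(p) = {1}
best_x, best_val = None, -np.inf
for _ in range(20_000):              # crude random search for sup_x Psi(x; p)
    x = rng.uniform(-10.0, 10.0, size=3)
    v = Psi(x, p)
    if v > best_val:
        best_val, best_x = v, x
# pred of the best score vector found by the search
print(int(np.argmax(best_x)))        # lands in argmax(p), i.e. prints 1
```

For this p, score vectors with pred(x) outside argmax(p) cap Ψ(·; p) at a visibly smaller value than the supremum, which is exactly the gap Condition N1 requires.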
Under Condition N1, ϕ satisfies several interesting properties. For example, it is bounded (cf. Supplementary Lemma S6.1), and, given any bounded set B ⊂ R^k_{≥0}, there exists a non-negative, non-decreasing, convex function ϱ such that
\[
\Psi^*(p) - \Psi(x; p) \ \ge\ \varrho\big(\max(p) - p_{\mathrm{pred}(x)}\big) \tag{4.5}
\]
for all x ∈ R^k and all p ∈ B (cf. Supplementary Lemma S7.2). Analogous results also appear in the multiclass classification literature (Zhang, 2004).

4.0.2. Multi-stage setting (T ≥ 2)

As we shall see, Condition N1 is no longer sufficient for Fisher consistency of separable surrogates when T ≥ 2. In this case, we will require more restrictions on the ϕt's. To this end, we introduce another condition on the Ψ∗-transformation.

Condition N2. Let k ∈ N. Then ϕ : R^k × [k] → R satisfies
\[
\Psi^*(p) = C_\phi \max(p) \quad\text{for all } p\in\mathbb R^k_{\ge 0}, \text{ for some } C_\phi > 0. \tag{4.6}
\]
It is straightforward to verify that Cϕ = Ψ∗(1k). Interestingly, when ϕ = ϕdis, we have Ψ∗(p) = max(p). Thus Condition N2 essentially requires the Ψ∗-functionals of ϕ and ϕdis to be proportional. Lemma 4.2 below shows that Condition N2 can also be characterized in terms of conv(Vϕ), the closed convex hull of the image set Vϕ defined in (4.3).

Lemma 4.2. Suppose k ∈ N and ϕ : R^k × [k] → R_{≥0} satisfies Condition N1. Then ϕ satisfies Condition N2 if and only if CϕSk−1 ⊂ conv(Vϕ), where Sk−1 is the simplex in R^k, Cϕ = Ψ∗(1k), and Vϕ is as in (4.3).

Note that the Cϕ in Lemma 4.2 agrees with that in Condition N2. The lemma further implies that when Cϕ = 1, Condition
N2 reduces to conv(Vϕ) ⊃ Sk−1. The appearance of Sk−1 may seem unexpected, but as shown in the proof of Lemma 4.2, it is simply the closed convex hull of the image set for ϕdis. Thus, Condition N2 requires the image set of ϕ to contain, in convex hull, the image set of the original discontinuous surrogate ϕdis. If a surrogate satisfies both Conditions N1 and N2, then the ϱ in (4.5) becomes linear. In particular, Supplementary Lemma S6.8 shows that there exists χϕ ∈ (0, 1], depending only on ϕ, so that
\[
\Psi^*(p) - \Psi(x; p) \ \ge\ C_\phi\, \chi_\phi \big(\max(p) - p_{\mathrm{pred}(x)}\big) \tag{4.7}
\]
for all x ∈ R^k and p ∈ R^k_{≥0}, where Cϕ is the constant appearing in Condition N2. The original discontinuous loss ϕdis also satisfies (4.7). Thus Conditions N1 and N2 together enforce a stricter restriction on the surrogates than Condition N1 alone. Theorem 4.1 below establishes that these two conditions are sufficient for Fisher consistency.

Theorem 4.1 (Sufficient conditions). Suppose ψ is a separable surrogate, i.e., it is as in (4.1), where the ϕt's are non-negative. Then the following are sufficient conditions for Fisher consistency (with respect to P0, i.e., the set of all distributions satisfying Assumptions I-V):
F1. Each ϕt satisfies Condition N1 for t ∈ [T].
F2. For t ≥ 2, ϕt also satisfies Condition N2.

The proof of Theorem 4.1 can be found in Supplement S7. The proof exploits properties of surrogates satisfying Conditions N1 and N2, such as inequalities (4.5) and (4.7). However, the main technical challenge lies in showing that ϕt(ft(Ht); pred(ft(Ht))) stays bounded away from zero when Vψ∗ − Vψ(f) is small. Section 4.2 provides examples of ψ that satisfy conditions F1 and F2. It turns out that these conditions are also necessary for Fisher consistency if we slightly relax Assumption V, which requires Yt > 0, to allow Yt ≥ 0. The resulting class of distributions is slightly larger than P0.
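Condition N2 can also be probed numerically for a candidate surrogate. The sketch below estimates Ψ∗(p) by crude random search for the product-based surrogate of Section 4.2.2 (logistic kernel, C = 1, so that Cϕ = 1 by Lemma 4.7) and compares the estimate with Cϕ · max(p) for a few nonnegative p. The specific p vectors and the optimizer are illustrative choices only.

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def phi(x, i):
    """Product-based surrogate with logistic kernel and C = 1 (Section 4.2.2)."""
    return np.prod([sigma(x[i] - x[j]) for j in range(len(x)) if j != i])

def psi_star(p, n_draws=20_000, seed=0):
    """Crude random-search estimate of Psi*(p) = sup_x sum_i p_i phi(x; i)."""
    rng = np.random.default_rng(seed)
    best = -np.inf
    for _ in range(n_draws):
        x = rng.uniform(-12.0, 12.0, size=len(p))
        best = max(best, sum(p[i] * phi(x, i) for i in range(len(p))))
    return best

for p in ([0.2, 0.5, 0.3], [1.0, 4.0, 2.0], [3.0, 3.0, 1.0]):
    # Condition N2 predicts Psi*(p) = C_phi * max(p), here with C_phi = 1
    print(round(psi_star(np.array(p)), 2), max(p))  # estimate vs max(p)
```

In each case the random-search estimate approaches max(p) from below, in line with Ψ∗(p) = Cϕ max(p).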
Theorem 4.2 (Necessary conditions). Suppose ψ is a separable surrogate with non-negative ϕt's. Further suppose ψ is Fisher consistent with respect to the set of all distributions satisfying Assumptions I-IV and Yt ≥ 0 for all t ∈ [T]. Then ψ must satisfy conditions F1 and F2 of Theorem 4.1.

We chose to work with the slightly broader class that allows Yt ≥ 0 in Theorem 4.2 for technical convenience in the proof. It may be possible to show that F1 and F2 are necessary for Fisher consistency even under P0 (i.e., when Yt > 0), but the corresponding proof would be considerably more technical without offering additional conceptual insight. Despite this minor discrepancy, we will refer to F1 and F2 as the necessary and sufficient conditions for Fisher consistency within the class of separable surrogates in the sequel. Similar to Theorem 4.1, the proof of Theorem 4.2 is non-trivial, and together they occupy a substantial portion of the Supplement. The proof of Theorem 4.2 relies on identifying good sets of distributions under which the optimal policy and quantities such as ˜f and Vψ(f) are tractable. To prove Condition F1, we consider distributions that resemble a one-stage DTR, where treatment affects the outcome at only one stage. However, proving Condition F2 requires distributions that embed a two-stage DTR structure, and the corresponding good set is
adjusted accordingly. A key step in proving F2 is to show that for any t ≥ 2 and x ∈ R^{k_t}_{≥0}, Ψ∗t(x) = Gt(max(x)) for some univariate function Gt. The linearity of Gt is then derived from the positive homogeneity of Ψ∗t. While Theorems 4.1 and 4.2 do not explicitly refer to the original loss, they can be interpreted as suggesting that, compared to the single-stage setting, Fisher consistency in the multi-stage setting requires the ϕt's to more closely resemble the original discontinuous loss ϕdis for t ≥ 2. As a downside, similar to ψdis, such surrogates do not preserve the ranking among suboptimal treatments. In particular, for two suboptimal treatments i and j, it is possible that ˜fti(Ht) < ˜ftj(Ht) even when Q∗t(Ht, i) > Q∗t(Ht, j), where ˜f satisfies Vψ(˜f) = Vψ∗. In fact, our proof implies that, if ˜f exists, ϕt(˜ft(Ht), j) = 0 for any suboptimal treatment j ∈ [kt]. Theorems 4.1-4.2 imply that Fisher consistency does not require ϕ1 to satisfy Condition N2. However, if ϕ1 does satisfy Condition N2, we can obtain a stronger result.

Proposition 1. Suppose Assumptions I-V are satisfied and ψ is as in Theorem 4.1, except that ϕ1 also satisfies Condition N2. Further suppose that for each t ∈ [T], there exists Jt ≥ 0 so that ϕt(x, pred(x)) ≥ Jt for all x ∈ R^{k_t}. Then for all f ∈ F,
\[
V^{\psi}_{*} - V^{\psi}(f) \ \ge\ C_{*}\,\big(V_{*} - V(f)\big), \quad\text{where}\quad C_{*} = \Big(\prod_{t=1}^{T} J_t\Big)\ \min_{1\le t\le T} \chi_{\phi_t}. \tag{4.8}
\]
Here the Cϕt's are as in Condition N2, and the χϕt's are as in (4.7).

The proof of Proposition 1 is provided in Supplement S6.4. The constant C∗ depends on ψ, but we omit this dependence from the notation for the sake of simplicity. The lower bound in (4.8) is nontrivial only when Jt > 0, which requires the ϕt(x; pred(x))'s to be bounded away from zero. The original discontinuous loss ϕdis and all surrogates considered in the next section satisfy this lower boundedness condition. Supplementary Lemma S6.4 implies that this condition is automatically met under Condition N1 if the ϕt's are symmetric, i.e., if \(\sum_{i=1}^{k_t} \phi_t(x; i)\) is constant for all x ∈ R^{k_t}.
In general, however, Conditions N1 and N2 do not ensure that ϕt(x, pred(x)) is bounded away from zero uniformly over x ∈ R^{k_t}, and counterexamples can be easily constructed. There exist surrogates for which (4.8) holds with equality, e.g., when the ϕt's are constant multiples of ϕdis. To see this, note that Jt = χϕt = 1 for ϕdis. However, for specific surrogates, the constant C∗ may not be tight, and sharper constants may exist. As such, C∗ may not serve as a reliable criterion for selecting among surrogates. Moreover, since our separable surrogates are non-convex, a surrogate that is easier to optimize is ultimately the more practical choice. Further discussion on C∗, including upper bounds on C∗ and the roles of Jt and χϕt, is provided in Supplement S1.2.

Remark 1. We anticipate that non-separable Fisher consistent surrogates may also exist. From Liu et al. (2024a), it follows that the multivariate ramp loss, which is non-separable, is Fisher consistent for the DTR classification problem in the binary-treatment setting. However, this loss is margin-based, and its extension to the kt > 2 case remains an open question.

Location and scale transformation. Lemma 4.3 below shows that if a single-stage surrogate ϕt satisfies Conditions N1 and N2, it continues to do
so under a scale transformation. However, a location transformation by a positive constant violates Condition N2, although it preserves Condition N1. The proof of Lemma 4.3 is provided in Supplement S8.3.

Lemma 4.3. Suppose ϕ : R^k × [k] → R_{≥0} satisfies Conditions N1 and N2. Then, for any a, b > 0, the scaled surrogate defined by ϕa,b(x; i) = bϕ(ax; i) for x ∈ R^k and i ∈ [k] also satisfies Conditions N1 and N2. The shifted surrogate ϕ + c satisfies Condition N1 for all c ∈ R but fails to satisfy Condition N2 unless c = 0.

Lemma 4.3 proves the existence of non-negative ϕt's that satisfy Condition N1 but not Condition N2. Conversely, there are surrogates that satisfy Condition N2 but not Condition N1. As a trivial example, consider ϕt(x, i) = 1[pred(x) = kt − i + 1] for x ∈ R^{k_t} and i ∈ [kt]. Thus, neither Condition N1 nor Condition N2 implies the other.

4.1. Restricted classes

In the upcoming Section 6, we will see that if we search for the best DTR within a sufficiently rich policy class, e.g., neural networks, decision trees, or basis expansion classes, we may expect a DTR with high value in large samples. However, in many applications, practitioners may be interested in simpler, interpretable policy classes that may not be universal approximation classes. For instance, linear policies are often favored for their interpretability. Suppose L ⊂ F is such a restricted class of policies and f∗ ∈ L is the maximizer of V(f) over F. Informally, we say ψ is Fisher consistent over L if, for any sequence {fn}n≥1 ⊂ L, the convergence Vψ(fn) → sup_{f∈L} Vψ(f) implies V(fn) → sup_{f∈L} V(f). An interesting question is: under what conditions is a surrogate ψ Fisher consistent over restricted classes? The answer depends on the specific structure of L, but we anticipate that Conditions N1 and N2 (which are essentially restrictions on Ψ∗) will generally not suffice.
These conditions sufficed in our earlier discussion because the maximizer of Vψ(f) over F can be explicitly characterized via Ψ∗ (see Lemma S6.13 in the Supplement). The maximizer of Vψ(f) over L, in general, has little connection with Ψ∗. Developing a comprehensive theory for restricted classes is out of the scope of the present paper. However, in this section, we show that in the special case where the optimal DTR within L coincides with d∗, Conditions N1 and N2 can still be used to investigate the DTR obtained by maximizing Vψ(f) over L. Nevertheless, even in this case, we need more structure on ψ. There are many ways to impose such structure. We choose to restrict the behavior of the ϕt(x; j)'s when the elements of x are large, since the upcoming examples in Section 4.2 satisfy this condition. This leads to Condition N3 below.

Condition N3. Suppose x ∈ R^k is such that argmax(x) is a singleton. Then for any real sequence {bn}n≥1 ⊂ R_{≥0} such that bn → ∞,
\[
\phi(b_n x; j) \ \xrightarrow{\ n\to\infty\ }\ C_\phi\, 1[\,j = \operatorname{argmax}(x)\,] \quad\text{for all } j\in[k],
\]
where Cϕ is as in Condition N2.

Lemma 4.4 below shows that, when the optimal DTR within L coincides with d∗, and L is closed under multiplication by positive scalars, Conditions N1, N2, and N3 together guarantee a regret bound analogous to Proposition 1.

Lemma 4.4. Suppose L is closed under multiplication by positive scalars, i.e., if f ∈ L,
then af ∈ L for any a > 0. Assume there exists f∗ ∈ L such that, for all t ∈ [T], the set argmax(f∗t(Ht)) is a singleton and agrees with d∗t(ht) with P-probability one. Let ψ be as in Proposition 1, and suppose it also satisfies Condition N3. Let C∗ be as in Proposition 1. Then the following assertion holds for any f ∈ L:
\[
\sup_{f'\in\mathcal L} V^{\psi}(f') - V^{\psi}(f) \ \ge\ C_{*}\,\big(V_{*} - V(f)\big).
\]

Lemma 4.4 is proved in Supplement S8.4. The assumptions in Lemma 4.4 imply that the optimal treatment assignments are unique with probability one. This condition often arises in the theoretical analysis of model-based DTR methods, such as parametric Q-learning and A-learning, which require uniqueness of the optimal DTR for efficient estimation of model parameters (Laber et al., 2014; Schulte et al., 2014; Robins et al., 1994; Chakraborty and Moodie, 2013; Tsiatis, 2006). It has also appeared in the context of direct search with linear policies (Laha et al., 2024a). An important example of L is the class of linear policies (Murphy, 2005; Wallace and Moodie, 2015; Robins et al., 1994; Sonabend-W et al., 2023). Here, the scores are linear functions of Ht or its feature transformations. Lemma 4.4 implies that if the optimal DTR d∗ is linear, then direct search over L with suitable surrogates can, in principle, recover the optimal DTR.

4.2. Examples of Fisher-consistent ϕ

In this section, we present examples of surrogates ϕt that satisfy Conditions N1, N2, and N3, and that satisfy ϕ(x; pred(x)) > J for some J > 0. These surrogates are defined for a general dimension k, and the corresponding expression for ϕt can be obtained by replacing k with kt. Note that the scale of ϕ can be adjusted by multiplying it by any positive constant; however, such scaling has no impact on the theoretical or practical performance of our method.

4.2.1. Kernel-based surrogates

For any k ∈ N, let K : R^k → R_{≥0} be a kernel which is positive everywhere and integrates to one. For the sake of simplicity, suppose K is the joint density of k i.i.d. random variables Z1, . .
., Zk with density K. This implies K(x) = \prod_{i=1}^{k} K(x_i) for each x ∈ R^k. Examples of K include densities such as the Gaussian, logistic, etc. For x ∈ R^k and j ∈ [k], consider the single-stage surrogate loss
\[
\phi(x; j) = C \int_{\mathbb R^k} 1[\mathrm{pred}(x - u) = j]\, K(u)\, du = C \int_{\mathbb R^k} 1[\mathrm{pred}(u) = j]\, K(x - u)\, du. \tag{4.9}
\]
For this surrogate, Ψ∗(1k) = C. To see this, observe that \(\sum_{i\in[k]} \phi(x; i) = C\) for all x ∈ R^k. Since the sum is constant, ϕ is a symmetric surrogate loss, as previously discussed in Section 4.0.2. Symmetric losses are known to be more robust to label corruption in multiclass classification problems (Patrini et al., 2017; Charoenphakdee et al., 2019). As previously mentioned, symmetric losses also satisfy inf_{x∈R^k} ϕ(x, pred(x)) > 0 (see Lemma S6.4 in the Supplement). In fact, as shown in Lemma 4.5, this quantity is lower bounded by 1/k for the kernel-based surrogates. Lemma 4.5 is proved in Supplement S8.5.

Lemma 4.5. Suppose ϕ is as in (4.9). Then
\[
\phi(x, j) = C\, E_K\Big[\prod_{i\neq j} \big(1 - F_K(Z + x_i - x_j)\big)\Big], \tag{4.10}
\]
where Z is a random variable with density K, and FK and EK are the distribution function and the expectation operator corresponding to K, respectively. Moreover, ϕ(x, pred(x)) > J for all x ∈ R^k, where J = C/k.

Using (4.10), we can derive closed-form expressions
for ϕ in specific cases. In particular, Supplement S8.5.1 provides explicit formulas for ϕ when k = 3 and K is either the standard logistic or standard Gumbel density. Lemma 4.6, proved in Supplement S8.5.2, establishes that the kernel-based surrogate ϕ satisfies the conditions in Proposition 1 and Lemma 4.4.

Lemma 4.6. The kernel-based ϕ in (4.9) satisfies Conditions N1, N2, and N3, provided K(x) > 0 for each x ∈ R. Moreover, the constant Cϕ in Condition N2 equals C, and the constant χϕ in (4.7) equals 2^{−(k−1)}.

Suppose the C in (4.9) is one. Since J = 1/k and χϕ = 2^{−(k−1)} by Lemmas 4.5 and 4.6, respectively, we have
\[
C_{*} = \prod_{t=1}^{T} \frac{1}{k_t}\ \min_{t\in[T]} 2^{-k_t} = \frac{2^{-\max_{t\in[T]} k_t}}{\prod_{t=1}^{T} k_t}, \quad\text{where } C_{*} \text{ is as in Proposition 1.}
\]
In particular, if kt = k for all t ∈ [T], then C∗ = 2^{−k}/k^T.

4.2.2. Product-based surrogates

Suppose K is a density supported on R, as in Section 4.2.1. For x ∈ R^k and C > 0, the product-based surrogate is of the form
\[
\phi(x; j) = \prod_{i\neq j,\ 1\le i\le k} \tau(x_j - x_i), \quad\text{where } \tau(x) = C\,\big(1 - F_K(-x)\big), \tag{4.11}
\]
with FK being the distribution function of the density K. To understand the intuition behind this surrogate, assuming C = 1, note that τ(x) = \(\int_{\mathbb R} 1[x - u \ge 0]\, K(u)\, du\) can be represented as a smoothed (using the kernel K) version of the univariate 0-1 losses 1[x ≥ 0] and 1[x > 0]. The discontinuous loss 1[j = pred(x)] can be written as a product of univariate 0-1 losses as follows:
\[
1[j = \mathrm{pred}(x)] = \prod_{1\le i < j} 1[x_j - x_i \ge 0]\ \prod_{j+1\le i\le k} 1[x_j - x_i > 0]. \tag{4.12}
\]
If we smooth each 0-1 loss in (4.12) using the kernel K, we obtain the product-based surrogate in (4.11) with C = 1. The main difference between the kernel-based surrogates and the product-based surrogates is as follows: the kernel-based surrogate smooths the multivariate 0-1 loss 1[pred(x) = j] at once, whereas the product-based surrogate splits this loss into a product of univariate 0-1 losses and smooths each loss separately. Unlike the kernel-based surrogate, the product-based surrogate may not satisfy that \(\sum_{i\in[k]} \phi(x; i)\) is a constant.
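The contrast drawn in the last sentence is easy to see numerically. The following sketch (standard logistic kernel, C = 1; the score vector x and the Monte Carlo size are arbitrary illustrative choices) estimates the kernel-based surrogate via the representation (4.10) and evaluates the product-based surrogate with τ equal to the logistic CDF: the kernel-based values sum to C, while the product-based values need not.

```python
import numpy as np

def sigma(z):
    """Logistic CDF F_K; also serves as tau for the product-based loss."""
    return 1.0 / (1.0 + np.exp(-z))

def phi_kernel(x, j, z):
    """Monte Carlo estimate of (4.10) with C = 1:
    E prod_{i != j} (1 - F_K(Z + x_i - x_j)), using a shared logistic sample z."""
    prod = np.ones_like(z)
    for i in range(len(x)):
        if i != j:
            prod *= 1.0 - sigma(z + x[i] - x[j])
    return prod.mean()

def phi_product(x, j):
    """Product-based surrogate: prod_{i != j} tau(x_j - x_i) with tau = sigma."""
    return np.prod([sigma(x[j] - x[i]) for i in range(len(x)) if i != j])

rng = np.random.default_rng(1)
z = rng.logistic(size=400_000)        # draws from the standard logistic density
x = np.array([0.3, -1.0, 2.0])
kernel_sum = sum(phi_kernel(x, j, z) for j in range(3))
product_sum = sum(phi_product(x, j) for j in range(3))
print(round(kernel_sum, 1))   # 1.0: the kernel-based surrogate is symmetric
print(round(product_sum, 2))  # 0.94 for this x: no longer sums to C
```

The symmetry of the kernel-based loss holds for any x, since the indicators 1[pred(x − u) = j] in (4.9) partition R^k up to ties; the product-based sum varies with x.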
Lemma 4.7 below implies that the product-based surrogate satisfies Conditions N1, N2, and N3. This lemma is proved in Supplement S8.6.

Lemma 4.7. Suppose k ∈ N. If the density K is symmetric about zero, then the product-based surrogate loss ϕ, as defined in (4.11), satisfies Conditions N1, N2, and N3 with Cϕ = C. Moreover, it satisfies (4.7) with χϕ = 1/2, and ϕ(x, pred(x)) ≥ J with J = C·2^{−(k−1)} for all x ∈ R^k.

Lemma 4.7 implies that Proposition 1 and Lemma 4.4 apply to the product-based surrogates with C∗ = Cϕ^T · 2^{−Σ_{t=1}^T kt + T − 1}. In particular, when Cϕ = 1 and kt = k for all t ∈ [T], we have C∗ = 2^{−kT+T−1}. Examples of K that are symmetric about zero include the logistic, Cauchy, and Student's t densities (for suitable degrees of freedom). In the binary treatment case, the product-based surrogate with symmetric K corresponds to the margin-based Fisher consistent surrogates studied by Laha et al. (2024a). We will revisit this connection in Section 4.3. To the best of our knowledge, our kernel-based and product-based surrogates have not been used previously in DTR or ITR research. However, smoothing the 0-1 loss is a widely adopted approach for constructing surrogate methods, particularly in machine learning and statistics problems where suitable convex surrogates are unavailable. For example, the smoothed 0-1 loss has been used for constructing surrogates in multilabel classification with ranking loss (Gao and Zhou, 2011), maximum score estimation (Feng et
al., 2022; Xu et al., 2014), covariate-adjusted Youden index estimation, and one-bit compressed sensing (Feng et al., 2022). In the context of dynamic treatment regimes, the smoothed 0-1 loss appears in Laha et al. (2024a) and Xue et al. (2022). In binary classification, where Fisher-consistent convex surrogates are available, Nguyen and Sanner (2013) demonstrate that the smoothed 0-1 loss exhibits greater robustness to outliers and data contamination compared to popular convex surrogates.

4.3. Relative-margin-based representation

The product-based and kernel-based surrogates share a useful property. They are relative-margin-based, which means that ϕ depends on x only through the pairwise differences x_i − x_j. This property can lead to a dimension reduction during the optimization.

Definition 4.1 (Relative-margin-based losses). Let k ∈ N. The surrogate ϕ : R^k × [k] → R is relative-margin-based if there exists a function Γ : R^{k−1} × [k] → R so that for each i ∈ [k] and x ∈ R^k,
\[
\phi(x; i) = \Gamma(\Delta x_1; i), \quad\text{where } \Delta x_1 = (x_1 - x_2, \dots, x_1 - x_k).
\]
The function Γ is referred to as the template for ϕ.

[Fig 1: Plot of Γ(x, y; 1) when k = 3. For the product-based surrogate, Γ is as in (4.13) and τ(x) = (1 + tanh(x))/2, which is the distribution function of the centered logistic distribution with scale 2. For kernel-based surrogates, the template Γ is provided in (4.14); its closed formulas for the logistic and Gumbel densities are provided in Supplement S8.5.1.]

Relative-margin-based surrogate losses are quite common in multiclass classification theory, where they have appeared in the context of multiclass support vector machines (Glasmachers et al., 2016), Gamma-Phi losses (Wang and Scott, 2023a), and PERM losses (Wang and Scott, 2023b). Lemma 4.8 below states that the product-based and kernel-based surrogates are relative-margin-based, and provides the corresponding templates. We will say that the separable ψ defined in (4.1) is relative-margin-based if each ϕt is relative-margin-based.
We will denote the corresponding templates by Γ1, . . . , ΓT, respectively.

Lemma 4.8. If ϕ : R^k × [k] → R_{≥0} is either the kernel-based surrogate in (4.9) or the product-based surrogate in (4.11), then ϕ is relative-margin-based. For the product-based surrogate, Γ takes the form
\[
\Gamma(y; i) =
\begin{cases}
\displaystyle \prod_{j\in[k]\setminus\{1\}} \tau(y_j) & \text{if } i = 1,\\[4pt]
\displaystyle \tau(-y_i)\, \prod_{j\in[k]\setminus\{1, i\}} \tau(y_j - y_i) & \text{if } i \neq 1,
\end{cases} \tag{4.13}
\]
where τ is as in (4.11). For the kernel-based surrogate loss, Γ takes the form
\[
\Gamma(y; i) =
\begin{cases}
\displaystyle C_\phi\, E_K\Big[\prod_{j\in[k]\setminus\{1\}} \big(1 - F_K(Z - y_j)\big)\Big] & \text{if } i = 1,\\[4pt]
\displaystyle C_\phi\, E_K\Big[\big(1 - F_K(Z + y_i)\big)\prod_{j\in[k]\setminus\{1, i\}} \big(1 - F_K(Z + y_i - y_j)\big)\Big] & \text{if } i \neq 1,
\end{cases} \tag{4.14}
\]
where Z is a random variable with density K, as in (4.10), and the coordinates of y = ∆x1 are indexed as y_j = x_1 − x_j for j = 2, . . . , k.

The proof of Lemma 4.8 follows directly from the definitions of the surrogates in (4.10) and (4.11), and is therefore omitted. In the binary treatment case, i.e., when kt = 2 for all t ∈ [T], ∆x1 reduces to a real number, leading to the margin-based formulation of the surrogate problem. In this case, the Γ corresponding to the product-based surrogate leads to Laha et al. (2024a)'s surrogate losses, provided the smoothing kernel K is symmetric. See Figure 1 for the plot of Γ(x, y; 1) for some product-based and kernel-based surrogates when k = 3. The function has no local extrema but attains its maximum at (∞, ∞), resulting in a bump in the first quadrant. The plots of Γ(x, y; 2) and Γ(x, y; 3) are similar,
except their bumps appear in the second and fourth quadrants, respectively, since they are maximized at (−∞, ∞) and (∞, −∞).

Remark 2. In Section 2, we mentioned the angle-based framework, an alternative surrogate framework based on an angle-based pred function. Similar to the relative-margin-based surrogates, the angle-based framework requires only kt − 1 scores. To the best of our knowledge, under this framework, concave Fisher consistent surrogates are currently available only for the T = 1 case (Zhang et al., 2020). For T > 1, Fisher consistency guarantees exist for some non-concave surrogates, albeit under a setting quite different from ours (see Section 8.2 for further discussion on Xue et al., 2022). Similar to our product-based losses, these surrogates rely on smoothed 0-1 loss functions. In this paper, we focus on the argmax-based link function, as it is better suited to the mathematical manipulations needed for deriving the necessary and sufficient conditions. We anticipate that some parallels to our results would hold under the angle-based framework as well, although the corresponding calculations can be more analytically challenging.

5. SDSS method

In this section, we introduce the SDSS method. Section 5.1 then addresses the optimization challenges posed by the non-convexity of our Fisher consistent surrogates and presents an algorithm for the optimization component of SDSS. In what follows, we assume that ψ is separable as in (4.1), with each ϕt being non-negative and relative-margin-based. As we will see, relative-margin-based forms offer a computational advantage.

If ψ is relative-margin-based, i.e., if each ϕt is relative-margin-based for t ∈ [T], then by Definition 4.1, ϕt(ft(Ht); At) depends only on the pairwise differences among the components of ft(Ht), not on their absolute values. As a result, V̂ψ(f) cannot have a unique maximizer, even when maximizers exist.
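The non-uniqueness stems from translation invariance: adding the same constant to every coordinate of a score vector leaves all pairwise differences, and hence a relative-margin-based loss, unchanged. A minimal sketch with the product-based surrogate of Section 4.2.2 (logistic τ, C = 1; the scores and the shift 7.3 are arbitrary illustrative values):

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def phi_product(x, j):
    """Relative-margin-based: depends on x only via pairwise differences."""
    return np.prod([sigma(x[j] - x[i]) for i in range(len(x)) if i != j])

x = np.array([0.5, -0.2, 1.1])
shifted = x + 7.3                      # shift every score by the same constant
vals = [(phi_product(x, j), phi_product(shifted, j)) for j in range(3)]
print(all(abs(a - b) < 1e-9 for a, b in vals))   # True: the loss is unchanged
```

So any maximizer generates a continuum of maximizers via coordinate-wise shifts, which motivates pinning down one score per stage.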
For such surrogates, imposing linear constraints on f_t, such as \sum_{i=1}^{k_t} f_{ti} = 0 or f_{t1} = 0, does not affect the supremum of \hat V_ψ(f). We prefer the constraint f_{t1} = 0 because then we need to optimize only over the remaining k_t − 1 score functions, thereby lowering the problem dimension. Instead of setting f_{t1} = 0, one could equivalently choose f_{t2} = 0 or, more generally, f_{ti} = 0 for any i ∈ [k_t]. This choice is unlikely to impact SDSS's computational complexity or theoretical performance. For simplicity, we take i = 1. Specifically, we replace the maximization of \hat V_ψ(f) with the constrained problem
\[
\underset{f \in \mathcal{F}}{\text{maximize}}\ \hat V_\psi(f) \quad \text{subject to} \quad f_{t1}(H_t) = 0 \ \text{for all } t \in [T],\tag{5.1}
\]
whose optimal value coincides with sup_{f∈F} \hat V_ψ(f) when ψ is relative-margin-based. Similarly, maximizing the constrained version of V_ψ(f) yields V_*^ψ for relative-margin-based ψ's.

We now reformulate (5.1) as an unconstrained optimization problem. To this end, we introduce some notation. For each t ∈ [T], let W_t denote the class of all Borel measurable functions from H_t to R^{k_t−1}, and define W = W_1 × … × W_T. Let g = (g_1, …, g_T), where each g_t ∈ W_t. We refer to g as the relative class score function and each g_t as the relative score function at stage t. For all t ∈ [T] and i ∈ [k_t − 1], the i-th element of the vector-valued function g_t will be denoted by g_{ti}. For any g_t ∈ W_t, define the transformation
tran(g_t) = (0, −g_t). Also, we let tran(g) = (tran(g_1), …, tran(g_T)). Thus, if g ∈ W, then tran(g) ∈ F. It is then straightforward to verify that (5.1) is equivalent to maximizing \hat V_ψ(tran(g)) over g ∈ W. In particular, if \hat f solves (5.1) and \hat g ∈ W maximizes \hat V_ψ(tran(g)), then they are related by
\[
\hat f_t = \mathrm{tran}(\hat g_t) = (0, -\hat g_t) \quad \text{for all } t \in [T].\tag{5.2}
\]
For any g ∈ W, let us denote \hat V_{ψ,rel}(g) := \hat V_ψ(tran(g)). Here the word "rel" stands for relative-margin-based. Straightforward algebra yields that if ψ is relative-margin-based, then
\[
\hat V_{\psi,\mathrm{rel}}(g) = \mathbb{P}_n \overbrace{\Big(\sum_{t=1}^{T} Y_t\Big) \prod_{t=1}^{T} \frac{\Gamma_t(g_t; A_t)}{\pi_t(A_t \mid H_t)}}^{L(D;\, g)}.\tag{5.3}
\]
Therefore, solving (5.1) reduces to maximizing \hat V_{ψ,rel}(g) over g ∈ W. Note that the latter maximization is over \sum_{t=1}^T (k_t − 1) score functions, whereas maximizing \hat V_ψ(f) over f ∈ F would involve \sum_{t=1}^T k_t score functions.

In practice, maximization over W is generally infeasible. Therefore, SDSS restricts optimization to a subset U_n ⊂ W. Let U_tn be a subset of the space of all Borel measurable functions from H_t to R. Then we define U_n to be U_n = U_{1n}^{k_1−1} × … × U_{Tn}^{k_T−1}. For the sake of simplicity, we will refer to U_n, the search space of relative class scores, as the policy class. Algorithm 1 provides pseudo-code for SDSS. The optimization part of SDSS will be discussed in detail in Section 5.1.

Algorithm 1 SDSS
Require: (i) U_n: function class for relative class scores, (ii) ψ: a relative-margin-based, non-negative, smooth, separable surrogate (typically chosen to be Fisher consistent), and (iii) {D_i}_{i∈[n]}: n i.i.d. trajectories.
1: Let \hat g denote an approximate minimizer of −\hat V_{ψ,rel}(g) over g ∈ U_n, where \hat V_{ψ,rel}(g) is as in (5.3). We use Algorithm 2 for the minimization, which will be explained later.
2: Set \hat f = tran(\hat g).
3: Return estimated policy: \hat d = pred(\hat f).

5.1. Optimization part of SDSS

If ψ is smooth, then \hat V_{ψ,rel}(g) is a smooth functional of g and can be optimized using standard gradient-based methods. Gradient-based methods scale well to large sample sizes and are fast (Bottou et al.
, 2018). Therefore, we implement SDSS with smooth ψ (cf. examples in Figure 1). In this section, we discuss the optimization part of SDSS for a pre-fixed ψ. A discussion on the choice of surrogate and when to use SDSS is provided in Section 6.4. Throughout this section, any reference to gradient descent specifically refers to the minimization problem in Step 1 of the SDSS Algorithm (Algorithm 1).

We assume U_n is parameterized by a real vector θ so that U_n ⊂ {g_θ : θ ∈ R^{k_dim}}, where k_dim is the dimension of θ. While θ and k_dim may depend on n, we omit this dependence for notational simplicity. In the case of linear relative class scores, we may take g_{ti}(H_t) = θ_{ti}^⊤ H_t for t ∈ [T] and i ∈ [k_t − 1], where each θ_{ti} is a vector of the same dimension as H_t, and θ = (θ_{ti})_{t∈[T], i∈[k_t−1]}. The function class U_n can also be non-linear, e.g., neural networks, decision trees, decision lists, basis expansion classes, etc. In the neural network case, θ contains the associated weight and bias parameters. In the basis expansion case, we may take g_{ti}(H_t) = θ_{ti}^⊤ J(H_t), where J(H_t) is a vector of basis functions.

While gradient-based methods can speed up optimization, they still face challenges due to non-convexity. We illustrate this with a toy example
where T = 1, k_1 = 3, n = 7, and H_1 = R, using the dataset in Table 2. We assume π(A_1 = i | H_1) = 1/3 for all i ∈ [3]. In this illustration, we use the product-based surrogate in (4.11) with τ(x) = 1 + tanh(x), though the same challenges arise for all Fisher consistent surrogates considered in this paper. For each z ∈ R, let g_{11}(z) = xz and g_{12}(z) = yz for x, y ∈ R, so that θ = (x, y) and
\[
\mathcal{U}_n = \{g_\theta : g_\theta(z) = (xz, yz) \ \text{for all } z \in \mathbb{R} \equiv \mathcal{H}_1, \ \text{where } x, y \in \mathbb{R}\}.
\]
This parameterization reduces −\hat V_{ψ,rel}(g_θ) to a function of only two variables, x and y, with the form
\[
\hat V_{\psi,\mathrm{rel}}(x, y) \equiv \hat V_{\psi,\mathrm{rel}}(g_\theta) = \frac{3}{n} \sum_{i=1}^{n} Y_{1i}\, \Gamma(xH_{1i}, yH_{1i}; A_{1i}),\tag{5.4}
\]
where θ = (x, y), allowing visual exploration of the surface of \hat V_{ψ,rel}(g_θ).

i       1     2     3     4     5     6    7
H_{1i}  2     1     -1    0.5   -0.5  -1   0.5
A_{1i}  1     2     3     1     2     2    3
Y_{1i}  0.33  0.67  0.67  0.33  0.23  1    0.13

Table 2: Toy data of (H_1, A_1, Y_1) triplets when T = 1, k_1 = 3, and H_1 ∈ R. Here n = 7.

The surface plot of \hat V_{ψ,rel} for this toy dataset is given in Figure 2a. In this plot, \hat V_{ψ,rel} does not have any local optimum, but it has a horizontal asymptote. Moreover, \hat V_{ψ,rel} exhibits several plateau regions for the toy data. While the gradient in these regions is not exactly zero, it can be very small, as illustrated in Figure 2c. Gradient descent iterates can become trapped if they enter these regions, leading to what is known as the vanishing gradient problem (Hochreiter et al., 2001; Ven and Lederer, 2021). However, for our toy data, getting trapped in a plateau region is not always bad. Figure 2a shows that there is a conical plateau region in the first quadrant (yellow in Figure 2b), where \hat V_{ψ,rel} becomes concave and slowly plateaus to the maximum. The contour plot of \hat V_ψ in Figure 2b shows that \hat V_{ψ,rel}(x, y) can be very close to the optimal value when (x, y) lies inside the optimal plateau. For instance, at the point (10, 4) inside the plateau, the value is 3.24993, while the global optimum, attained at infinity, is approximately 3.26.
The main concern is whether gradient descent iterates get trapped in suboptimal plateaus. The sample size was seven in our toy data, but plateau regions and horizontal asymptotes persist in larger samples as well. This is unsurprising because the surface of \hat V_{ψ,rel} in Figure 2a is not an artifact of the toy data, but is inherited from the geometry of Γ (see Figure 1). Moreover, all surrogate functions we are aware of that satisfy Conditions N1–N3 exhibit similar suboptimal plateau regions, and the vanishing gradient issue persists across them. Supplement S1.3 explores in more detail how the geometry of these surrogates gives rise to this behavior.

We now illustrate the optimization challenges the plateau regions pose and demonstrate how we address them using the toy dataset. To this end, we use Figure 3, which displays the optimization trajectories of several gradient-based methods initialized from six different starting points in our toy data example.

Slow convergence The presence of plateau regions implies that gradient
descent iterates can move very slowly, even when approaching the optimal region. For example, see Path 6 in Figure 3a, traced by iterates initialized at (5, −5) in our toy data example. This issue can be partially addressed by momentum-based gradient descent techniques, such as Adaptive Moment Estimation (ADAM; see Figure 3b for its application to our toy data example), because momentum helps accelerate convergence (cf. Boyd and Vandenberghe, 2004).

Iterates diverging to suboptimal regions Suboptimal plateau regions and valleys can cause gradient descent iterates to diverge in suboptimal directions. Figures 3a and 3b show that both vanilla gradient descent and ADAM diverge toward plateau regions with suboptimal value (around 2.29) when initialized at (−1, −1) (Path 1) and (−5, 8) (Path 4) in our toy data example. We verified that tuning the learning rate or increasing the number of iterations does not prevent the divergence along Paths 1 and 4 for either vanilla gradient descent or ADAM. This finding also establishes that SDSS with vanilla gradient descent or ADAM may fail to reach the global optimum if initiated from certain suboptimal regions. Such sensitivity to initialization is typical in non-convex optimization. In our implementation, we use random initialization, but we adopt two strategies to prevent stagnation of the gradients: (i) restarting gradient descent with new initial points if the iterates get stuck, and (ii) injecting noise via minibatches, i.e., computing gradients using small, random subsets of the data. For example, in Figures 3c and 3d, we use stochastic gradient descent (SGD) with minibatch size one. In Figure 3d, when we use SGD with ADAM, all six paths of the iterates eventually enter the optimal region.

While our toy example did not exhibit local optima and saddle points, they may appear in more complex settings. To address this, alongside the earlier strategies, we use ReduceLROnPlateau (Goodfellow et al.
, 2016), a learning rate scheduler that adaptively adjusts the learning rate based on validation-set performance. Below, we detail how these strategies are integrated into the optimization step of SDSS, which is summarized in Algorithm 2. In our empirical study in Section 7, U_n consists of either linear functions or deep neural networks with ELU/ReLU activations. Accordingly, the discussion below focuses on these function classes, though Algorithm 2 is applicable to any choice of U_n.

5.1.1. SDSS Optimization procedure

First, SDSS splits the data into two parts: 80% for training (D_train) and 20% for validation (D_val). As we will see shortly, the validation set is used to monitor loss improvement and trigger necessary adjustments.

Initialization In our empirical study, we implement SDSS with random initialization. Random initialization is a popular strategy for non-convex problems, especially when deep neural networks are involved (Chen et al., 2019; He et al., 2015; Goodfellow et al., 2016). An advantage of random initialization is that it is agnostic to the model or policy class being used (Chen et al., 2019). However, when θ is high-dimensional, as in the case where U_n consists of deep neural networks, we need to scale the
variance of the randomly initialized parameter θ^(0) appropriately. Otherwise, the variance of the gradient may explode (He et al., 2015). Several initialization methods account for this, including He initialization (He et al., 2015) and Xavier initialization (Glorot and Bengio, 2010), both widely used to stabilize the variance of the gradients during training. Supplement S1.4 details how these initializers sample θ^(0) for neural network and linear policy classes. In our numerical experiments, we used the He initializer, as it consistently performed better than or comparably to the Xavier initializer for both linear and neural network policies. It may be possible to customize the initialization for specific policy classes, which can be an interesting direction for future research.

Minibatch stochastic gradient descent with ADAM For the minimization of −\hat V_{ψ,rel}, we use minibatch stochastic gradient descent with ADAM (Kingma and Ba, 2014). The momentum helps to accelerate convergence, while the noise due to minibatching helps prevent stagnation in suboptimal regions (Goodfellow et al., 2016; Bottou et al., 2018). ADAM is widely used in non-convex optimization (Kingma and Ba, 2014; Goodfellow et al., 2016). Alternative adaptive learning rate optimization algorithms, such as RMSprop, exist, but ADAM consistently outperformed RMSprop in our simulations. We now explain how ADAM is implemented in SDSS, as detailed in Phase 1 of Algorithm 2. At iteration r ∈ {1, 2, …}, we sample a minibatch B from the training set, and the gradient of the loss function is computed as
\[
\mathrm{grad}_r = -\frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \nabla_\theta\, L(D_i; \theta^{(r-1)}) \quad \text{for } r \geq 1,
\]
where L is as defined in (5.3). ADAM uses estimators of the first and second (elementwise) moments of grad_r, which we denote by mom1_r and mom2_r, respectively (cf. Kingma and Ba, 2014, for details).
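A minimal sketch of the ADAM loop in this notation (moment updates, bias correction, parameter step), assuming the standard form of Kingma and Ba (2014); the full SDSS procedure with validation tracking and restarts is in Algorithm 2.

```python
import math

def adam_minimize(grad_fn, theta, lr=0.1, D1=0.9, D2=0.999, eps_num=1e-8, n_iter=200):
    # Minimal ADAM loop: moment estimates as in (5.5), bias correction as in
    # (5.6), then the parameter update with numerical-stability constant eps_num.
    mom1 = [0.0] * len(theta)
    mom2 = [0.0] * len(theta)
    for r in range(1, n_iter + 1):
        grad = grad_fn(theta)
        for j in range(len(theta)):
            mom1[j] = D1 * mom1[j] + (1 - D1) * grad[j]
            mom2[j] = D2 * mom2[j] + (1 - D2) * grad[j] ** 2
            m1_hat = mom1[j] / (1 - D1 ** r)  # bias correction
            m2_hat = mom2[j] / (1 - D2 ** r)
            theta[j] -= lr * m1_hat / (math.sqrt(m2_hat) + eps_num)
    return theta

# Sanity check on a convex toy loss f(theta) = ||theta||^2, whose gradient is 2*theta:
theta = adam_minimize(lambda th: [2 * t for t in th], [5.0, -3.0])
```

Note how the early steps have size roughly `lr` regardless of the raw gradient magnitude, since the update normalizes the first moment by the square root of the second; this is the acceleration effect the text attributes to momentum.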
These terms are initialized at zero and, at each iteration r, are updated as
\[
\mathrm{mom1}_r = D_1\, \mathrm{mom1}_{r-1} + (1 - D_1)\, \mathrm{grad}_r, \qquad \mathrm{mom2}_r = D_2\, \mathrm{mom2}_{r-1} + (1 - D_2)\, \mathrm{grad}_r^2,\tag{5.5}
\]
where grad_r^2 is the vector obtained by squaring each element of grad_r. The constants D_1 > 0 and D_2 > 0, the decay parameters, control the decay rates of the moment estimates. They are typically set as D_1 = 0.9 and D_2 = 0.999 (Kingma and Ba, 2014). Both moment estimates are typically initialized at zero, which introduces a bias toward smaller values in early iterations (Kingma and Ba, 2014). To address this, ADAM applies the following bias corrections to mom1_r and mom2_r:
\[
\mathrm{mom1}_r \leftarrow \frac{\mathrm{mom1}_r}{1 - D_1^r} \quad \text{and} \quad \mathrm{mom2}_r \leftarrow \frac{\mathrm{mom2}_r}{1 - D_2^r}.\tag{5.6}
\]
The r-th parameter update is then computed as
\[
\theta^{(r)} \leftarrow \theta^{(r-1)} - \mathrm{lr}_r\, \frac{\mathrm{mom1}_r}{\sqrt{\mathrm{mom2}_r} + \epsilon_{\mathrm{num}}}.
\]
Here lr_r is the learning rate and ϵ_num is a small constant (typically 10^{-8}) added for numerical stability to prevent division by zero. The updating schedule of lr_r is discussed below.

Learning rate scheduler ReduceLROnPlateau We start with an initial learning rate lr_0, which is then dynamically adjusted using a learning rate scheduler called ReduceLROnPlateau (Goodfellow et al., 2016). This scheduler tracks an exponential moving average (EMA) of the validation loss, as described in Phase 2 of Algorithm 2. After every r_eval iterations, the EMA is updated as
\[
\mathrm{EMA}^{(r)}_{\mathrm{val}} := \kappa\, L_{\mathrm{val}}(\theta^{(r)}) + (1 - \kappa)\, \mathrm{EMA}^{(r-1)}_{\mathrm{val}},
\]
where L_val is the validation loss defined in (5.7), and the constant κ is called the EMA smoothing parameter; κ is typically chosen from (0.1, 0.8). The number r_eval, called the evaluation
frequency, is typically chosen from {2, 3, 4}. If the previously saved best value of the EMA is larger than EMA^(r)_val, we update it to EMA^(r)_val and set the r-th iterate θ^(r) as the best iterate θ_best up to iteration r. Otherwise, the best EMA and θ_best remain unchanged. If the EMA fails to improve beyond a small threshold for N_patience (typically set to ten) consecutive updates of θ^(r), the learning rate lr is reduced to RF × lr, where the reduction factor RF is usually chosen from (0.5, 0.9) (Goodfellow et al., 2016). This update rule is described in Phase 3 of Algorithm 2. Adaptive learning rate reduction helps prevent overshooting and improves stability as parameters approach a local optimum.

Random re-initialization If learning rate reductions fail to improve the validation loss, we infer that the iterates are trapped in a plateau region (Dauphin et al., 2014) and reinitiate from a new random starting point. Formally, reinitialization is triggered when the EMA fails to improve beyond a small threshold after N_restart many learning rate updates, where N_restart is typically chosen from {3, …, 10} (see Phase 4 of Algorithm 2). Upon reinitialization, the learning rate resets to lr_0, but SDSS retains the best θ, i.e., θ_best, and the best EMA found so far.

Stopping rule The total number of iterations, N_epoch, is pre-specified. SDSS tracks the best-performing parameter throughout optimization using the validation loss. After N_epoch iterations, the algorithm terminates and returns the best recorded value of θ.

Algorithm 2: SDSS-OPTIM (Optimization part of SDSS)
1: Input:
   1. Training set: D_train, validation set: D_val
   2. Maximum epochs: N_epoch
   3. Initial learning rate: lr_0 ∈ [10^{-3}, 0.2], reduction factor: RF ∈ (0.5, 0.8)
   4. Evaluation frequency: r_eval ∈ {2, …, 4}
   5. Patience: N_patience ∈ N (default 10) and restart threshold: N_restart ∈ {3, …, 10}
   6. ADAM parameters: D_1 > 0 (default 0.9), D_2 > 0 (default 0.99)
   7. Minibatch size: n_mini ∈ N
   8.
EMA smoothing parameter: κ ∈ (0.1, 0.8) (default 0.8)
   9. Numerical constant: ϵ_num > 0 (default 10^{-8})
   10. Improvement threshold: δ_imp ∈ (0, 0.001) (default 0)
2: Initialize: θ^(0) using the He or Xavier initializer, mom1_0 ← 0, mom2_0 ← 0, lr ← lr_0, R_2 ← 0, R_1 ← 0, best_val ← +∞, EMA^(0)_val ← +∞.
3: for r = 1, 2, …, N_epoch do    (Phase 1 – Training Step)
4:   procedure TrainingStep
5:     Sample a minibatch B ⊂ D_train of size n_mini.
6:     Compute gradient: grad_r ← −(1/|B|) Σ_{i∈B} ∇_θ L(D_i; θ^(r−1)).
7:     Gradient clipping: grad_r ← grad_r / max{1, ∥grad_r∥_2}.
8:     Update ADAM moments mom1_r and mom2_r using D_1 and D_2 as in (5.5).
9:     Bias-correct the moments as in (5.6).
10:    Update parameters: θ^(r) ← θ^(r−1) − lr · mom1_r / (√(mom2_r) + ϵ_num).
11:  end procedure
12:  if r is a multiple of r_eval then    (Phase 2 – Updating validation loss)
13:    procedure ValidationStep
14:      Compute validation loss: L_val(θ^(r)) ← −(1/|D_val|) Σ_{i∈D_val} L(D_i; θ^(r)), where L is as in (5.3).    (5.7)
15:      EMA update: EMA^(r)_val ← κ L_val(θ^(r)) + (1 − κ) EMA^(r−1)_val.
16:      if EMA^(r)_val < best_val − δ_imp then
17:        best_val ← EMA^(r)_val.
18:        Save model: θ_best ← θ^(r).
19:        Reset counters: R_2 ← 0, R_1 ← 0.
20:      else
21:        R_2 ← R_2 + 1.
22:        if R_2 ≥ N_patience then    (Phase 3 – Learning Rate Reduction)
23:          procedure ReduceLearningRate
24:            lr ← RF × lr.
25:            Reset R_2 ← 0.
26:            R_1 ← R_1 + 1.
27:          end procedure
28:        end if
29:        if R_1 ≥ N_restart
then    (Phase 4 – Reinitialization)
30:          procedure ReinitializeModel
31:            Reinitialize θ^(r) using He or Xavier initialization.
32:            Reset ADAM moments: mom1_r ← 0, mom2_r ← 0.
33:            Reset learning rate: lr ← lr_0.
34:            Reset counters: R_2 ← 0, R_1 ← 0.
35:          end procedure
36:        end if
37:      end if
38:    end procedure
39:  else
40:    EMA^(r)_val ← EMA^(r−1)_val.
41:  end if
42: end for
43: Output: θ* ← θ_best.

6. Regret decay

In this section, we analyze the regret decay rate of the SDSS policy \hat d ≡ pred(\hat f) from Algorithm 1. The regret of \hat d equals V* − V(\hat d), which can also be expressed using the estimated class scores as V* − V(\hat f). Our theoretical results focus on non-linear function classes, where the sequence {U_n}_{n≥1} is assumed to be dense in W. In the following discussion, the ψ-regret of any relative class score g is defined as V_*^ψ − V^ψ(f), where f = tran(g) is the class score corresponding to g.

6.1. Regret decomposition

For any g ∈ W, define V_{ψ,rel}(g) := V_ψ(tran(g)). Since V_{ψ,rel}(g) = E[\hat V_{ψ,rel}(g)], V_{ψ,rel}(g) can be interpreted as the population version of \hat V_{ψ,rel}(g). The regret of the SDSS-estimated policy \hat d can be bounded using V_{ψ,rel}(g) and \hat V_{ψ,rel}(g). Lemma 6.1, proved in Supplement S9.0.1, provides a decomposition of the regret.

Lemma 6.1. Suppose ψ is relative-margin-based and satisfies the conditions of Proposition 1 with J_t > 0 for all t ∈ [T]. Suppose U_n ⊂ W. Let \hat f and \hat g denote the class scores and relative class scores, respectively, associated with the policy class U_n, as defined in Algorithm 1. Then for any ˜g ∈ U_n,
\[
V^* - V(\hat f) \leq C_*^{-1}\Big[\underbrace{\sup_{g \in \mathcal{W}} V_{\psi,\mathrm{rel}}(g) - V_{\psi,\mathrm{rel}}(\tilde g)}_{\text{Approximation error}} + \underbrace{(V_{\psi,\mathrm{rel}} - \hat V_{\psi,\mathrm{rel}})(\tilde g - \hat g)}_{\text{Estimation error}} + \underbrace{\sup_{g \in \mathcal{U}_n} \hat V_{\psi,\mathrm{rel}}(g) - \hat V_{\psi,\mathrm{rel}}(\hat g)}_{\text{Optimization error: } \mathrm{Opt}_n}\Big],\tag{6.1}
\]
where C_* is as in Proposition 1.

Our definitions of the approximation and estimation errors depend on the choice of ˜g. Ideally, these definitions should use ˜g = argmax_{g∈U_n} V_{ψ,rel}(g).
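The decomposition in (6.1) follows from a standard add-and-subtract argument. Assuming Proposition 1 bounds the regret of \hat f by C_*^{-1} times its ψ-regret, a sketch is:

```latex
% Assuming Proposition 1 gives
%   V^* - V(\hat f) \le C_*^{-1}\bigl[\sup_{g\in\mathcal W} V_{\psi,\mathrm{rel}}(g)
%                                      - V_{\psi,\mathrm{rel}}(\hat g)\bigr],
% telescope around any \tilde g \in \mathcal U_n:
\begin{aligned}
\sup_{g\in\mathcal W} V_{\psi,\mathrm{rel}}(g) - V_{\psi,\mathrm{rel}}(\hat g)
&= \underbrace{\sup_{g\in\mathcal W} V_{\psi,\mathrm{rel}}(g)
     - V_{\psi,\mathrm{rel}}(\tilde g)}_{\text{approximation}}
 + \underbrace{(V_{\psi,\mathrm{rel}} - \hat V_{\psi,\mathrm{rel}})(\tilde g)
     - (V_{\psi,\mathrm{rel}} - \hat V_{\psi,\mathrm{rel}})(\hat g)}_{\text{estimation}}
 + \hat V_{\psi,\mathrm{rel}}(\tilde g) - \hat V_{\psi,\mathrm{rel}}(\hat g), \\
\hat V_{\psi,\mathrm{rel}}(\tilde g) - \hat V_{\psi,\mathrm{rel}}(\hat g)
&\le \sup_{g\in\mathcal U_n}\hat V_{\psi,\mathrm{rel}}(g)
     - \hat V_{\psi,\mathrm{rel}}(\hat g)
 = \mathrm{Opt}_n, \qquad \text{since } \tilde g \in \mathcal U_n.
\end{aligned}
```

Each term on the right-hand side of the first line cancels telescopically, so the identity is exact; only the last step introduces the inequality that yields Opt_n.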
However, since the maximizer may not always exist, in our theoretical analysis we use a ˜g ∈ U_n with small approximation error. This motivates our use of a more general definition of the approximation and estimation errors. The approximation error arises because SDSS optimizes \hat V_{ψ,rel} over the policy class U_n rather than the full function space W. The estimation error arises due to the finite sample size, and it depends on the complexity of U_n. The optimization error occurs because our problem is non-convex.

There is a trade-off between the approximation error and the estimation error. On one hand, complex universal approximation classes U_n may have lower approximation error, although the estimation error increases with the complexity of U_n. On the other hand, a simple U_n, e.g., a class of linear functions, may have lower estimation error, but its approximation error can be high if the optimal policy does not belong to that class.

We have discussed the optimization procedure in detail in Section 5.1. Since our optimization problem is non-convex, providing global-convergence-type results on the optimization error is challenging. In fact, as noted in Section 5.1, global convergence does not hold if the iterates are initiated from bad starting points, even in simple toy settings. Moreover, it can be easily shown that the Polyak–Łojasiewicz inequality, which guarantees global convergence for gradient-based algorithms, fails to hold in our setting. On
the other hand, −\hat V_{ψ,rel} is unlikely to be strongly convex near the optimum, since we expect \hat V_{ψ,rel} to plateau as in the toy example. This lack of strong convexity prevents application of traditional results, such as those in Bottou et al. (2018), on the convergence of gradient descent to a local optimum. Similar issues were encountered in Laha et al. (2024a) due to the same structural limitation. To the best of our knowledge, the recent work of Dudík et al. (2022) is the only study on gradient descent's optimization error that is relevant to our setting. Although their focus is convex optimization, they consider functions without a finite minimizer, similar to ours. However, they rely on a new theoretical framework based on astral planes to treat such functions. While it may be possible to analyze Opt_n using their approach, extending their astral plane theory to our setting would likely require a separate paper. Although this is an interesting direction for future research, it is beyond the scope of the current work.

In light of the above, this paper focuses only on the approximation and estimation errors. Our regret bound is derived through a sharp analysis of both components, under Tsybakov's small noise condition (Tsybakov, 2004) and complexity assumptions on U_n. We will see that the resulting bound matches the best known regret decay rates for DTR under these conditions.

Remark 3 (Requirement of relative-margin-based surrogates). The rate results presented in this paper do not require ψ to be relative-margin-based. Analogous results to Theorems 6.1–6.2 and Corollary 1 can be established for potentially non-relative-margin-based surrogates with minimal modifications to the proofs, provided the other conditions are satisfied. However, since the SDSS method explicitly uses relative-margin-based surrogates, we restrict our attention to those surrogates in this section.

6.2.
Approximation error

In this section, we derive a sharp upper bound on SDSS's approximation error, which will be used in Section 6.3 to bound the regret of \hat d. While the approximation error primarily depends on the choice of U_n, it also depends on the surrogate ψ and the underlying distribution P. In particular, the approximation error can be small even for simpler policy classes if the optimal treatment is well-separated from suboptimal ones with high probability. To obtain a sharp upper bound, we therefore impose appropriate conditions on both ψ and P.

6.2.1. Assumptions on ψ

Even under Conditions N1–N3, the class of relative-margin-based surrogates ψ is too broad to permit a sharp analysis of the approximation error. However, for specific surrogates where the minimizer of V_{ψ,rel}(g) over g ∈ W is better understood, a more detailed analysis becomes possible. To this end, we use a stronger version of Condition N3. Recall that, if a single-stage surrogate ϕ satisfies Condition N3, then ϕ(x; j) is small for large values of x when j ∉ argmax(x). The stronger version, Condition N3.s, imposes tighter control by requiring a polynomial decay of ϕ(x; j) whenever j ≠ pred(x). If a surrogate is symmetric, i.e., if Σ_{i∈[k]} ϕ(x; i) is constant in x, then Condition N3.s implies Condition N3.

Condition N3.s. Consider ϕ: R^k × [k] → R. There exists C_a > 0 ("a" in C_a refers to
approximation error) such that
\[
\phi(x; j) \leq C_a \big(\max(x) - x_j\big)^{-2} \quad \text{for all } x \in \mathbb{R}^k \text{ and } j \in [k].
\]
We assume that the ϕ_t's satisfy Condition N3.s and are symmetric. The symmetry assumption is used because symmetric surrogates yield sharper upper bounds on the approximation error. We can still bound the approximation error without the symmetry assumption, but the bounds will be looser. We revisit this point in the discussion following Theorem 6.1. The kernel-based surrogate losses introduced in Section 4.2.1 are symmetric, and Result 4 (proved in Supplement S9.1) shows that they also satisfy Condition N3.s. Thus, the rate results developed in this section apply to these kernel-based surrogates.

Result 4. Suppose ϕ is a kernel-based surrogate as defined in (4.9), and the random variable Z associated with the density K satisfies 0 < E_K[Z²] < ∞. Then ϕ satisfies Condition N3.s with C_a = 2 E_K[Z²].

The finite second moment condition is mild. In particular, the examples in Figure 1 satisfy this condition.

6.2.2. Small noise condition on P

Although it was introduced by Tsybakov (2004) in the context of binary classification, we will use the multiclass version of the small noise condition, which was also used by Rigollet and Zeevi (2010); Qian and Murphy (2011); Liu and Shen (2006). To introduce this condition, we define the operator
\[
\mu(x) = \max(x) - \max_{i \in [k]:\, i \neq \mathrm{pred}(x)} x_i,\tag{6.2}
\]
where x is any real vector. Note that µ(x) > 0 implies x has a unique maximum, and µ(x) = 0 implies max(x) is attained at multiple elements of x.

Assumption 1 (Small noise). For each t ∈ [T], let Q*_t(H_t) denote the vector of optimal Q-functions, i.e.,
\[
Q^*_t(H_t) = \big(Q^*_t(H_t, 1), \ldots, Q^*_t(H_t, k_t)\big).\tag{6.3}
\]
Suppose µ is as defined in (6.2). Then there exists C > 0 such that for all sufficiently small z > 0,
\[
P\Big(\mu\big(Q^*_t(H_t)\big) \leq z \ \text{for some } t \in [T]\Big) \leq C z^{1+\alpha}.
\]
The optimal Q-function Q*_t(H_t, a_t) can be seen as the overall utility of assigning treatment a_t at stage t provided the patient receives the optimal treatment in the subsequent stages.
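The gap operator µ in (6.2) is simple to compute. Below is a direct sketch, where pred is taken to be the first argmax index on ties (an assumption for illustration; the paper defines pred through its argmax-based link):

```python
def pred(x):
    # Argmax-based link: 1-based index of the maximal score, first one on ties.
    return max(range(len(x)), key=lambda i: (x[i], -i)) + 1

def mu(x):
    # Utility gap (6.2): max(x) minus the largest score over indices other
    # than pred(x). Zero exactly when the maximum is attained more than once.
    p = pred(x) - 1
    return max(x) - max(x[i] for i in range(len(x)) if i != p)
```

For instance, `mu([3.0, 1.0, 2.0])` is the top-versus-runner-up gap 1.0, while `mu([2.0, 2.0, 1.0])` is 0.0 because the maximum is tied, matching the dichotomy noted after (6.2).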
A large value of µ(Q*_t(H_t)) indicates that, for a patient with history H_t, the optimal treatment at stage t yields substantially higher utility than the second-best option, making the optimal treatment easier to detect. Assumption 1 ensures that the probability of this utility gap being small at any stage is small. An assumption of this type is typically required to obtain sharp theoretical guarantees on the estimated policy, as the problem becomes too hard if the utility gap is small with high probability (Robins, 2004; Luedtke and Van Der Laan, 2016). Therefore, the small noise assumption, or similar assumptions, e.g., the geometric noise assumption (Steinwart et al., 2007), is common in the DTR and ITR literature (Luedtke and Van Der Laan, 2016; Qian and Murphy, 2011; Liu et al., 2024a,b; Zhao et al., 2015).

Now we are ready to state the main theorem on the approximation error, which essentially says that this error can be made small if, for each j ∈ [k_t − 1], functions of the form
\[
\mathrm{blip}_{tj}(x) = Q^*_t(x; 1) - Q^*_t(x; 1 + j), \quad x \in \mathcal{H}_t,\tag{6.4}
\]
are well approximated within the class U_tn for all t ∈ [T]. Following the DTR literature, we refer to the above functions as
blip functions (Schulte et al., 2014).

Theorem 6.1. Suppose ψ is as in (4.1) with non-negative ϕ_t's, and each ϕ_t satisfies Conditions N1, N2, and N3.s. Moreover, for each t ∈ [T], we assume ϕ_t to be relative-margin-based (Definition 4.1) and symmetric. Further suppose P satisfies Assumptions I–V and Assumption 1. Let ˜g = (˜g_1, …, ˜g_T) ∈ U_n be such that
\[
\max_{t \in [T]} \max_{j \in [k_t - 1]} \|\tilde g_{tj} - \mathrm{blip}_{tj}\|_\infty \leq \delta_n.\tag{6.5}
\]
Then there exists a constant C > 0, depending only on ψ and P, such that for any b_n > \delta_n^{-(1+\alpha/2)},
\[
\sup_{g \in \mathcal{W}} V_{\psi,\mathrm{rel}}(g) - V_{\psi,\mathrm{rel}}(b_n \tilde g) \leq C \delta_n^{1+\alpha}.
\]
The proof of Theorem 6.1 can be found in Supplement S9. The ˜g in Theorem 6.1 may depend on n, but we omit this dependence from the notation for simplicity. We now briefly discuss the conditions of Theorem 6.1. The condition in (6.5) will be satisfied if, for example, the blip functions belong to a smoothness class such as the Hölder class, and U_n is dense within these classes. Examples of such U_n include neural network, cubic spline, wavelet, and decision tree classes (Sun and Wang, 2021). See Section 6.3.1 for more details on the neural network example. The conditions on the surrogates are satisfied by the kernel-based surrogates, as shown by Result 4. The symmetry condition plays an important role; in its absence, we obtain a looser upper bound on the approximation error. For instance, the product-based surrogate violates the symmetry condition, and it can be shown that its approximation error bound becomes δ_n^α, leading to slower overall regret decay when combined with the estimation error. However, as previously discussed, the relative-margin-based condition is not necessary. Finally, the original discontinuous loss ϕ_d satisfies Conditions N1, N2, and N3.s with C_a = 0. Therefore, the assertions of Theorem 6.1 hold for this loss.

6.3. Combined regret decay rate

In this section, we present the final regret bound by combining the approximation and estimation errors.
Recall that U_n = U_{1n}^{k_1−1} × … × U_{Tn}^{k_T−1}, where each U_tn is a class of Borel measurable functions from H_t to R. We begin with a general theorem that provides a regret bound under a complexity restriction on the U_tn classes (Theorem 6.2). Then, in Section 6.3.1, we consider the special case where the U_tn's correspond to ReLU neural networks and the blip functions are smooth.

Complexity condition on U_tn Complexity assumptions on policy classes are standard in the reinforcement learning and DTR literature for bounding the regret (Jin et al., 2021; Athey and Wager, 2021; Sun and Wang, 2021). In this paper, we use covering numbers to measure the complexity of function classes. For any ϵ > 0, the covering number N(ϵ, U_tn, ∥·∥_∞) is the minimum number of balls of radius ϵ (with respect to the uniform norm ∥·∥_∞) needed to cover U_tn. We will assume U_tn satisfies
\[
N(\epsilon, \mathcal{U}_{tn}, \|\cdot\|_\infty) \lesssim (I_n/\epsilon)^{\rho_n} \quad \text{for all } t \in [T]\tag{6.6}
\]
for some I_n, ρ_n > 0, which will be specified later. Function classes satisfying (6.6) are referred to as VC-type classes (Koltchinskii, 2009, p. 41). As an example, suppose U_tn is a class of ReLU networks with depth N_n ∈ N, width vector W_n, sparsity s_n ∈ N, a linear output layer, and weights bounded by one
(cf. Schmidt-Hieber, 2020). Here, sparsity refers to the number of non-zero weights in the network. We denote this class by F(N_n, W_n, s_n). From Lemma 5 of Schmidt-Hieber (2020) (see Supplementary Fact S11.3), it follows that U_tn = F(N_n, W_n, s_n) satisfies (6.6) with ρ_n = s_n + 1 and I_n = (N_n + 1)(s_n + 1)^{2N_n+4}. Other important examples of classes satisfying (6.6) include wavelets (Giné and Nickl, 2015; Laha et al., 2024a) and classification and regression trees (Athey and Wager, 2021).

Lipschitz condition on the ϕ_t's In addition to the complexity condition, we require a Lipschitz condition on the surrogates.

Condition 1. For each t ∈ [T] and i ∈ [k_t], the function x ↦ ϕ_t(x; i) is Lipschitz with constant C > 0, in that for all x, y ∈ R^{k_t},
\[
|\phi_t(x; i) - \phi_t(y; i)| \leq C \|x - y\|_2.
\]
Condition 1 ensures that the complexity restrictions on the U_tn's transfer to function classes of the form x ↦ ϕ_t(f_t(x); i), which is required for bounding the Rademacher complexity of certain classes during our regret analysis. Lipschitz assumptions on surrogate losses are common in theoretical analyses within both the DTR and classification literature (Xue et al., 2022; Bartlett et al., 2006). If the functions ϕ_t(·; i) are differentiable with gradients bounded in ℓ_2, then Condition 1 is automatically satisfied. Under mild differentiability conditions, the kernel-based surrogates satisfy this requirement, as will be discussed following Theorem 6.2.

Theorem 6.2 shows that, when the optimization error is negligible and both the above conditions and those in Theorem 6.1 are satisfied, the regret of SDSS is of order O_p(n^{−(1+α)/(2+α)}), up to a polylogarithmic factor. This matches the regret decay rate obtained by Laha et al. (2024a) in the binary-treatment case.

Theorem 6.2. Suppose P satisfies Assumptions I–V and Assumption 1 with α > 0. Let ψ be as in Theorem 6.1. Additionally, suppose the ϕ_t's satisfy Condition 1 and that ϕ_t(x; pred(x)) > J for all t ∈ [T] and x ∈ R^{k_t}, where J > 0 is a constant. Let U_n = U_{1n}^{k_1−1} × … × U_{Tn}^{k_T−1}.
For all $t\in[T]$, suppose the $\mathcal{U}_{tn}$'s satisfy (6.6) with constants $I_n>0$ and $\rho_n>0$ such that $\liminf_n \rho_n>0$, $\rho_n\log I_n=o(n)$, and $\liminf_n \rho_n\log I_n>0$. Further suppose there exists $\tilde g\in\mathcal{U}_n$ so that $\tilde g/b_n$ satisfies (6.5) with $\delta_n=(\rho_n n^{-1}\log I_n)^{1/(2+\alpha)}$ for some $b_n>\sqrt{n/(\rho_n\log I_n)}$. Then there exist constants $C>0$ and $N_0\ge 1$ such that, for all $n\ge N_0$ and all $x>0$, the SDSS score estimator $\hat f$ (as defined in Algorithm 1) satisfies
$$V^*-V(\hat f)\le C\max\left\{(1+x)^2(\log n)^2\left(\frac{\rho_n\log I_n}{n}\right)^{\frac{1+\alpha}{2+\alpha}},\ \mathrm{Opt}_n\right\}$$
with probability at least $1-\exp(-x)$.

The proof of Theorem 6.2 is provided in Supplement S10 and relies on tools from empirical risk minimization theory. The rate-related conditions in Theorem 6.2 merit some discussion. The conditions on $\rho_n$ and $I_n$ are required to apply Rademacher complexity bounds to certain function classes in our proof. The choice of $\delta_n$ ensures that the estimation error is of order $O_p(\delta_n^{1+\alpha})$ up to a polylogarithmic term. Theorem 6.1 established that the approximation error of $\mathcal{U}_n$ is of the same order. Hence, the approximation error matches the estimation error under the conditions of Theorem 6.1. Theorem 6.2 also requires that $\mathcal{U}_n$ contain a function $\tilde g$ such that $\tilde g/b_n$ satisfies (6.5). This implies $\mathcal{U}_n$ includes functions with magnitudes at least on the order of $b_n$. The result in Theorem 6.2 is nonparametric, as it imposes no other structural restrictions on
the function classes.

Example of surrogates satisfying the conditions of Theorem 6.2. Result 5 shows that kernel-based surrogates satisfy Condition 1 under mild conditions.

Result 5. Suppose $\phi:\mathbb{R}^k\times[k]\to\mathbb{R}$ is a kernel-based surrogate as defined in (4.9), where the kernel $K$ is bounded and continuously differentiable, and $K'$ is bounded and integrable with respect to the Lebesgue measure. Then $\phi$ satisfies Condition 1.

The proof of Result 5 in Supplement S10.4 relies on showing that the gradient of $\phi$ is bounded, which requires the integrability of $K'$. We require $K$ and $K'$ to be bounded in order to apply the Leibniz rule for differentiating an integral while deriving the partial derivatives of $\phi$. A kernel-based surrogate satisfies all conditions of Theorem 6.2 if $K$ satisfies the assumptions of both Results 4 and 5. Examples of such $K$ include the logistic and Gumbel densities (cf. Figure 1), as well as the Gaussian density, Weibull and Gamma densities with shape parameter greater than two, and the beta density with both parameters greater than one.

Remark 4 (Product-based surrogates). As noted earlier, their approximation error is of order $O_p(\delta_n^{\alpha})$. Applying proof techniques similar to Theorem 6.2, one would obtain a slower regret decay rate of $O_p(n^{-\alpha/(2+\alpha)})$ for the product-based surrogates under conditions similar to Theorem 6.2.

6.3.1. Special case: neural network

In this example, we will consider the neural network class $\mathcal{F}(N_n, W_n, s_n)$ as $\mathcal{U}_{tn}$. As previously noted, (6.6) holds for this class. To ensure that (6.5) is satisfied, i.e., that the blip functions are well approximable by $\mathcal{U}_{tn}$, we impose structural assumptions on the blip functions. Specifically, we assume they belong to a Hölder class. Hölder assumptions on the Q- and blip functions are quite common in the DTR literature (Zhao et al., 2015; Sun and Wang, 2021; Laha et al., 2024a).
Moreover, such smoothness assumptions are weaker than parametric restrictions on the Q- and blip functions, which are also common in the DTR literature (Schulte et al., 2014; Kosorok and Laber, 2019). We now define a Hölder class. Let $k\in\mathbb{N}$. Recall that for any multi-index $v\in\mathbb{N}^k$, we denote $|v|_1=\sum_{i\in[k]}v_i$. For any $\mathcal{X}\subset\mathbb{R}^k$, a function $u:\mathcal{X}\to\mathbb{R}$ is said to have Hölder smoothness index $\beta>0$ if, for all multi-indices $v=(v_1,\ldots,v_k)\in\mathbb{N}^k$ satisfying $|v|_1<\beta$, the partial derivative $\partial^v u=\partial^{v_1}\cdots\partial^{v_k}u$ exists, and there exists a constant $C>0$ such that
$$\frac{|\partial^v u(x)-\partial^v u(y)|}{\|x-y\|_2^{\beta-\lfloor\beta\rfloor}}<C$$
for all $x,y\in\mathcal{X}$ such that $x\neq y$. Here, and throughout, $\lfloor\beta\rfloor$ denotes the largest integer less than or equal to $\beta$. For some constant $C_{\mathrm{Hol}}>0$, we define the Hölder class $\mathcal{C}^\beta(\mathcal{X},C_{\mathrm{Hol}})$ as
$$\mathcal{C}^\beta(\mathcal{X},C_{\mathrm{Hol}})=\left\{u:\mathcal{X}\to\mathbb{R}\ \Big|\ \sum_{v:|v|_1<\beta}\|\partial^v u\|_\infty+\sum_{v:|v|_1=\lfloor\beta\rfloor}\sup_{\substack{x,y\in\mathcal{X}\\ x\neq y}}\frac{|\partial^v u(x)-\partial^v u(y)|}{\|x-y\|_2^{\beta-\lfloor\beta\rfloor}}\le C_{\mathrm{Hol}}\right\}.$$
Since $H_t$ may include categorical variables, we separate its continuous and categorical components. To this end, for each $t\in[T]$, we write $H_t=(H_{ts},H_{tc})$, where $H_{ts}$ and $H_{tc}$ denote the continuous and categorical parts of $H_t$, respectively. We also assume that $H_{ts}\in\mathcal{H}_{ts}$ and $H_{tc}\in\mathcal{H}_{tc}$, where $\mathcal{H}_t=\mathcal{H}_{ts}\times\mathcal{H}_{tc}$ for each $t\in[T]$. Our smoothness assumption, presented in Assumption 2, is on the blip function restricted to $\mathcal{H}_{ts}$.

Assumption 2 (Smoothness assumption). For each $t\in[T]$, $\mathcal{H}_t$ is compact and $\mathcal{H}_{tc}$ is finite. There exist constants $\beta>0$
and $C_{\mathrm{Hol}}>0$ such that, for all $t\in[T]$, for any fixed $h_t\in\mathcal{H}_{tc}$, and any $i\in[k_t]$, the function $\mathrm{blip}_{ti}(\cdot,h_t)$ is in $\mathcal{C}^\beta(\mathcal{H}_{ts},C_{\mathrm{Hol}})$. Here, the blip function $\mathrm{blip}_{ti}$ is as defined in (6.4).

Recall that we denote the dimension of $H_T$ by $q$. Corollary 1 below implies that under Assumption 2, the regret decays at the rate $O_p(n^{-\frac{1+\alpha}{2+\alpha+q/\beta}})$ up to a logarithmic factor, modulo the optimization error.

Corollary 1. Suppose $P$ and $\psi$ are as in Theorem 6.2, and that $P$ also satisfies Assumption 2 with parameter $\beta>0$. Let $\mathcal{U}_{tn}$ be the neural network class $\mathcal{F}(N_n,W_n,s_n)$ with depth $N_n=c_1\log n$, sparsity $s_n=c_2 n^{q/((2+\alpha)\beta+q)}$, and maximal width satisfying $\max W_n\le c_3 s_n/N_n$, where $c_1,c_2,c_3>0$. Then there exist constants $N_0>0$ and $C>0$, depending on $P$ and $\psi$, such that if $c_1,c_2,c_3>C$, then for any $n\ge N_0$ and $x>0$, the following holds with probability at least $1-\exp(-x)$:
$$V^*-V(\hat f)\le C\max\left\{(1+x)^2(\log n)^{\frac{6+4\alpha}{2+\alpha}}\,n^{-\frac{1+\alpha}{2+\alpha+q/\beta}},\ \mathrm{Opt}_n\right\}.$$

The proof of Corollary 1 is provided in Supplement S10.3. Under similar smoothness and small-noise conditions, the rate $n^{-\frac{1+\alpha}{2+\alpha+q/\beta}}$ is the minimax rate of risk decay in binary classification (Audibert and Tsybakov, 2007). It is also the sharpest available rate, up to polylog terms, for DTR regret decay under analogous conditions (Laha et al., 2024a). Laha et al. (2024a) conjectured that this rate is minimax-optimal for regret decay in DTR settings, under assumptions analogous to Assumptions 1 and 2. Under such conditions, the available upper bound on the regret decay rate of nonparametric Q-learning is $n^{-\frac{1+\alpha}{2+\alpha}\cdot\frac{2}{2+q/\beta}}$. This upper bound is calculated using Qian and Murphy (2011)'s regret bounds for the single-stage case; see Laha et al. (2024a) for more details. The Q-learning regret decay rate is slower because it relies on the estimation of the Q-functions, and the minimax $L_2(P)$ error rate of nonparametrically estimating $\beta$-smooth functions is $n^{-\frac{2}{2+q/\beta}}$ (cf. Yang, 1999).
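As a concrete illustration of the network sizing prescribed by Corollary 1, the following sketch computes the depth, sparsity, width cap, and the resulting (log-free) regret rate for example problem dimensions. The constants `c1`, `c2`, `c3` here are arbitrary placeholders, not values derived in the paper.

```python
import math

def corollary1_sizing(n, q, alpha, beta, c1=2.0, c2=5.0, c3=1.0):
    """Illustrative sizing of the sparse ReLU class F(N_n, W_n, s_n) following
    Corollary 1; c1, c2, c3 are hypothetical constants, not from the paper."""
    N_n = max(1, round(c1 * math.log(n)))                          # depth ~ c1 log n
    s_n = max(1, round(c2 * n ** (q / ((2 + alpha) * beta + q))))  # sparsity
    W_max = max(1, round(c3 * s_n / N_n))                          # max width bound
    rate = n ** (-(1 + alpha) / (2 + alpha + q / beta))            # regret rate, no polylog
    return N_n, s_n, W_max, rate

# e.g. n = 15,000 samples, q = 10 covariates, alpha = 1, beta = 2
N, s, W, r = corollary1_sizing(n=15000, q=10, alpha=1.0, beta=2.0)
```

Note how the rate degrades as the dimension-to-smoothness ratio $q/\beta$ grows, reflecting the usual nonparametric curse of dimensionality.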
In principle, under general conditions such as ours, Q-function estimation is a harder nonparametric problem than learning the optimal DTR, since the optimal DTR depends only on the signs of pairwise Q-function differences, not their exact values. That said, sharper regret decay rates for Q-learning can be achieved under stronger restrictions on the Q-function class, as such restrictions simplify the underlying estimation problem (Qian and Murphy, 2011; Laha et al., 2024a).

6.4. Practical guidance regarding SDSS

Although the approximation and estimation errors of SDSS decay to zero, $\mathrm{Opt}_n$ can remain non-trivial. We are currently unable to reliably quantify $\mathrm{Opt}_n$ because the minimum attainable value of the loss $-\hat V_\psi$ is unknown, unlike for standard losses such as the squared error. Instead, we offer several guidelines for using SDSS.

First, although simultaneous optimization uses the sample effectively by pooling all information together, it can become highly overparameterized as $T$ increases, especially with nonparametric policy classes. Thus, we recommend using SDSS only when the sample size is large. For instance, when $T=2$ and $k_1=k_2=3$, SDSS typically needs several thousand observations to be competitive with stagewise methods. Such sample sizes are common in EHR data.

Second, SDSS generally incurs higher optimization error than stagewise methods, which solve separate single-stage optimization problems sequentially, each involving fewer parameters
than SDSS. Therefore, in simpler settings, where all methods are expected to incur low approximation and estimation error, SDSS may have no clear advantage due to its higher optimization error. However, SDSS can have an advantage when the approximation and estimation errors are expected to be substantial for all methods, such as in cases with a high risk of model misspecification (e.g., with interpretable policies) or when covariates are high-dimensional or noisy. In such challenging scenarios, SDSS may outperform stagewise methods despite its higher optimization error because (i) it does not need outcome or pseudo-outcome modeling, and (ii) it has no error propagation across stages due to its simultaneous optimization framework.³ Moreover, both the kernel-based and product-based surrogates are bounded and taper off at infinity. In the context of classification, a growing body of research supports that such surrogates exhibit superior robustness to outliers and data contamination (Liu and Shen, 2006; Wu and Liu, 2007; Masnadi-Shirazi and Vasconcelos, 2008; Nguyen and Sanner, 2013; Wang et al., 2020). The challenges of model misspecification and noise are common in real-world EHR data. It is therefore unsurprising that SDSS outperforms both Q-learning and ACWL in our sepsis data example (Section 7.2). Hence, we recommend SDSS in settings with noisy or high-dimensional covariates, or when model misspecification is a concern.

Finally, multiple Fisher consistent surrogates are available for SDSS. One way to combine several surrogates is through an ensemble approach, where we estimate the optimal DTR using multiple $\psi$'s, and the final treatment assignment for each subject is determined by majority vote. However, ensembling may increase variance, particularly with non-linear policy classes (see Section 7.1 for details).

7.
Empirical Analysis

As discussed in Section 5.1, we recommend SDSS as an alternative to conventional methods only in settings where the sample size is large and modeling may be challenging. Therefore, we focus only on such scenarios. Despite the non-convexity discussed in Section 5.1, our optimization strategy involving ADAM, random restarts, and minibatching consistently allowed SDSS to find high-value policies in these simulation settings, often outperforming comparators. We evaluate the performance of our method both on synthetic datasets (Section 7.1) and on EHR data on septic patients (Section 7.2). To investigate the benefit of SDSS in such scenarios, we compare SDSS with two well-known DTR learning methods: (i) Q-learning (Watkins, 1989) and (ii) Adaptive Contrast Weighted Learning, or ACWL (Tao and Wang, 2017).

7.1. Simulations

We consider two simulation schemes, each comprising two or three settings.

Scheme 1. This is a two-stage scenario with treatments $A_1,A_2\in\{1,2,3\}$, assigned uniformly at random with $\pi_1(A_1=i|H_1)=\pi_2(A_2=i|H_2)=1/3$ for $i\in\{1,2,3\}$. Therefore, $k_1=k_2=3$. The baseline covariates are $O_1=(X_{11},X_{12},X_{13})\sim N(0_3,10 I_3)$, which have a substantial spread due to their high variance. The first-stage outcome model is linear in the covariates. Specifically, $Y_1=A_1(X_{11}+X_{12}+X_{13})+3+\varepsilon_1$, where $\varepsilon_1\sim N(0,1)$ is a random error independent of the treatments and covariates. We let $O_2\sim N(0,1)$. The second-stage outcome model, which includes a nonlinear component, is given by $Y_2=\omega X_{1A_2}^2+O_2+3+\varepsilon_2$,

³The stagewise methods generally
rely on a model-based quantity called pseudo-outcomes. Model misspecification for the pseudo-outcomes can result in higher estimation and approximation error, which also propagates through stages (Murphy, 2005).

Fig 2: Plots related to $\hat V_{\psi,\mathrm{rel}}$ for the toy example in Section 5.1: (a) plot of $\hat V_{\psi,\mathrm{rel}}(x,y)$ (on the Z axis) vs. $x$ and $y$; (b) contour plot of $\hat V_{\psi,\mathrm{rel}}$; (c) contour plot of the logarithm (base 10) of the $\ell_2$ norm of the gradient of $\hat V_{\psi,\mathrm{rel}}$. The formula for $\hat V_{\psi,\mathrm{rel}}(x,y)$ is provided in (5.4) and the toy data in Table 2. In plot (c), higher negative values indicate plateau regions, where the gradients become close to zero.

where $\omega>0$ is a simulation parameter and $\varepsilon_2\sim N(0,1)$ is a random error, which is independent of the other random variables. Larger values of $\omega$ increase the variance of $Y_2$, as well as the nonlinear contribution of the second stage to the value function. Therefore, as we shall see, modeling the Q-functions becomes more challenging as $\omega$ increases. Hence, we will refer to $\omega$ as the complexity parameter. We let $\omega$ vary in $\{10,20,40\}$, generating three distinct settings. The optimal policies are given by
$$d_1^*(H_1)=\begin{cases}3 & \text{if } X_{11}+X_{12}+X_{13}>0\\ 1 & \text{otherwise},\end{cases} \qquad d_2^*(H_2)=\operatorname*{argmax}_{i\in\{1,2,3\}} X_{1i}^2.$$
According to our definition of linear policies in Section 2, $d_2^*$ is non-linear since the corresponding class scores are quadratic, but $d_1^*$ is linear because the corresponding class scores are linear.

Scheme 2. This scheme has high-dimensional covariates. The baseline covariate $O_1\in\mathbb{R}^p$ is distributed as a centered $p$-variate Gaussian random vector with identity covariance matrix, where $p\in\{50,100\}$. The covariate space of the sepsis data in Section 7.2 has similar dimension. We provide the stage-specific outcome models of Scheme 2 below:
$$Y_1=\lambda^{(1)}_{A_1}\sin\big((O_1^{\top}1_p)^2\big)+\zeta^{(1)}_{A_1}\cos\big(O_1^{\top}1_p\big)+\varepsilon_1,$$
$$Y_2=\lambda^{(2)}_{A_2}\cos\big((O_1^{\top}1_p)^2\big)+\zeta^{(2)}_{A_2}\sin\big(O_1^{\top}1_p\big)+\varepsilon_2,$$
where $\varepsilon_1,\varepsilon_2\sim N(0,1)$.
The stage-specific parameters are $\lambda^{(1)},\lambda^{(2)},\zeta^{(1)},\zeta^{(2)}\in\mathbb{R}^3$, whose values are provided in Supplement S2. Here we remind the reader that for a vector $x\in\mathbb{R}^k$, $x_i$ represents its $i$-th element. The Scheme 2 outcome models are non-linear functions of the history. The resulting optimal treatment assignments are also non-linear. Their formulas are provided in Supplement S2, which shows that, at both stages, the optimal assignments $d_t^*(H_t)$ depend solely on $O_1^{\top}1_p$ and take values in $\{2,3\}$. Figure 4 plots the optimal treatment assignments as a function of $O_1^{\top}1_p$. As can be seen from Figure 4, for each stage $t$, the regions where the optimal treatment is two (or three) are disconnected sets of the form $\cup_{i=1}^{\infty}\{h_t\in\mathcal{H}_t: c_i\le o_1^{\top}1_p\le c_i',\ o_1\subset h_t\}$, where $\{c_i\}_{i\ge1}$ and $\{c_i'\}_{i\ge1}$ are sequences of distinct numbers. The disconnected components appear due to the periodic nature of the outcome models. Decision boundaries of this form are hard to approximate with linear policies, but they are approximable with suitable non-linear policies.

Fig 3 (panel settings: (a) vanilla gradient descent, learning rate 0.05; (b) ADAM without minibatches, learning rate 0.05; (c) SGD, learning rate 0.05; (d) ADAM with SGD, learning rate 0.10): Gradient descent for the toy data
in Section 5.1. The plots display the paths traced by iterates initiated from 6 different initialization points for (a) vanilla gradient descent, (b) ADAM (without minibatching), (c) SGD, and (d) ADAM with SGD. The white circle and the solid black rectangle mark the starting point and the end point of each path, respectively. Although the algorithms minimize $-\hat V_{\psi,\mathrm{rel}}$, the paths are shown on the contour plot of $\hat V_{\psi,\mathrm{rel}}$ (also provided in Figure 2b). The legend to the right presents the color scale. The yellow region indicates the optimal plateau, where the objective value is close to the optimum. The first three plots used 5,000 iterations, while the last used 20,000. The SGD batch size was one. The ADAM parameters were set to their default values as specified in Algorithm 2.

Fig 4: Optimal treatment assignments for Scheme 2 as a function of $O_1^{\top}1_p$. As mentioned in Section 7.1, the optimal treatment assignments in Scheme 2 are functions of $O_1^{\top}1_p$ only. The X-axis represents $O_1^{\top}1_p$, while the Y-axis indicates the corresponding optimal treatment for stages 1 and 2. Here, the optimal treatment assignments are either two or three, which are colored blue and red, respectively.

Propensity scores. Here the propensity scores depend on the history. For $i\in[3]$, let us define the function $v^{\mathrm{sim}}_t:\mathcal{H}_t\times[3]\to\mathbb{R}^p$ as
$$v^{\mathrm{sim}}_t(H_t;i)=\begin{cases}1_p & \text{if } i=d_t^*(H_t)\\ -1_p & \text{if } i\neq d_t^*(H_t).\end{cases}$$
The propensity scores are given by
$$\pi_1(A_1|H_1)=\frac{\exp\big(O_1^{\top}v^{\mathrm{sim}}_1(H_1;A_1)\big)}{\sum_{i=1}^3\exp\big(O_1^{\top}v^{\mathrm{sim}}_1(H_1;i)\big)},\qquad \pi_2(A_2|H_2)=\frac{\exp\big(O_1^{\top}v^{\mathrm{sim}}_2(H_2;A_2)+0.5A_1+0.5Y_1\big)}{\sum_{i=1}^3\exp\big(O_1^{\top}v^{\mathrm{sim}}_2(H_2;i)+0.5A_1+0.5Y_1\big)}.$$

7.1.1. Common simulation setup

For each of the 5 simulation settings resulting from the 2 schemes, we consider two sample sizes, $n=5{,}000$ and $n=15{,}000$. For each sample size and each setting, we generate 100 Monte Carlo samples. We compare the methods using their value functions. We also use the value function of the optimal DTR as a benchmark.
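The stage-1 propensity model above simplifies considerably: since $v^{\mathrm{sim}}_1(H_1;i)$ is $\pm 1_p$, the inner product $O_1^{\top}v^{\mathrm{sim}}_1$ is just $\pm\sum_j O_{1j}$. A minimal sketch of that softmax (with a max-shift for numerical stability; the function name is ours, not from the paper):

```python
import math

def propensity_stage1(o1, optimal_arm, arm):
    """Scheme 2 stage-1 propensity: softmax of O1^T v over the 3 arms, where
    v = 1_p for the optimal arm and -1_p otherwise, so the logit is +-sum(o1)."""
    s = sum(o1)
    logits = [s if i == optimal_arm else -s for i in (1, 2, 3)]
    m = max(logits)                       # subtract max before exponentiating
    exps = [math.exp(l - m) for l in logits]
    return exps[arm - 1] / sum(exps)

# probabilities of the three arms for one subject with sum(o1) = 0.5
probs = [propensity_stage1([0.2, -0.1, 0.4], optimal_arm=2, arm=a) for a in (1, 2, 3)]
```

When $\sum_j O_{1j}>0$, the optimal arm receives the largest assignment probability, so the observed treatments are correlated with the optimal rule, which is exactly what makes these history-dependent propensities harder than the uniform randomization of Scheme 1.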
We estimate the value function of each DTR using an independent held-out sample of size $n=10{,}000$, using the formula $V(d)=E_d\big[\sum_{t=1}^T Y_t\big]$. To ensure comparability across methods, the held-out set was the same for all methods in each Monte Carlo replication.

Fig 5: Boxplots of value functions: boxplots of the estimated value function of the different methods over 100 replications, for (a) non-linear policies (Scheme 1), (b) linear policies (Scheme 1), (c) non-linear policies (Scheme 2), and (d) linear policies (Scheme 2). The value functions in Scheme 1 are scaled by $10^{-2}$ for better visual comparability. The X-axis indicates the sample size and the Y-axis represents the average value function over 100 replications. Each box shows the interquartile range (IQR), with edges at the 25th and 75th percentiles and a line for the median. Whiskers extend to 1.5·IQR. The red dashed line represents the optimal value function. In Scheme 1, $\omega$ corresponds to the complexity parameter, and in Scheme 2, $p$ corresponds to the dimension of the baseline covariate space.

7.1.2. Methods

As mentioned previously, we benchmark the performance of SDSS against Q-learning and ACWL (Tao
and Wang, 2017). All three methods, SDSS, Q-learning, and ACWL, are considered for nonlinear policies. The non-linear DTR for SDSS and Q-learning is implemented using deep neural networks. For ACWL, we use its default policy class based on classification and regression trees (CART), which is nonlinear. Since ACWL currently only uses tree-based policies, we exclude it from comparisons involving linear policies. For Scheme 1, whose first-stage outcome model includes treatment-covariate interaction terms, we incorporated first-order interaction terms between treatment and covariates in the linear model used for Q-learning. In contrast, no interaction terms were included for Scheme 2 due to the high dimensionality of the covariates. The main implementation details for the methods are provided below.

SDSS. SDSS was implemented using Algorithm 2 with a 4:1 training:validation split. In the case of neural network policies, the networks for the relative class scores $g_{ti}$ share parameters at each stage $t$, differing only in the outer layer of each stage. This strategy was adopted to help reduce the parameter burden. For simplicity, we also used the same network configuration (e.g., depth, dropout rate, width) for each relative score. SDSS was implemented using PyTorch, where the linear policy was implemented as a neural network with no activation functions. The SDSS hyperparameters are provided in Supplement S2.1. The performance of SDSS was generally robust to moderate changes in hyperparameters such as the initial learning rate, the number of epochs, and the optimization parameters from Algorithm 2. Most optimization-related parameters were fixed throughout the simulations (see Table 6 in the Supplement), using default values from Algorithm 2. For neural network policies, the ReLU and ELU activation functions yielded the best, and comparable, results.
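The parameter-sharing scheme described above, a common trunk per stage with a separate output head per relative class score, can be sketched without any deep-learning framework. The sketch below uses random untrained weights purely to illustrate the architecture; it is not the paper's PyTorch implementation.

```python
import random

def make_shared_net(in_dim, hidden, n_scores, seed=0):
    """Sketch of the shared-trunk architecture: one hidden ReLU layer shared
    by all relative class scores of a stage, plus one linear head per score.
    Weights are random placeholders, not trained."""
    rng = random.Random(seed)
    W = [[rng.gauss(0, 0.5) for _ in range(in_dim)] for _ in range(hidden)]
    b = [0.0] * hidden
    heads = [[rng.gauss(0, 0.5) for _ in range(hidden)] for _ in range(n_scores)]

    def forward(x):
        # shared ReLU trunk: same hidden representation for every score
        h = [max(0.0, sum(wij * xj for wij, xj in zip(wi, x)) + bi)
             for wi, bi in zip(W, b)]
        # per-score linear output heads (the only non-shared parameters)
        return [sum(hw * hj for hw, hj in zip(head, h)) for head in heads]

    return forward

net = make_shared_net(in_dim=4, hidden=8, n_scores=3)  # k_t = 3 relative scores
scores = net([0.1, -0.2, 0.3, 0.0])
```

With $k_t$ scores per stage, sharing the trunk keeps the parameter count roughly constant in $k_t$ except for the final layer, which is the point of the design.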
In this case, varying the number of hidden layers between one and three, we observed the best performance in Scheme 1 with one layer, while the high-dimensional Scheme 2 saw the best performance with three layers. In practice, we recommend tuning the neural network hyperparameters, such as the number of layers, dropout rate, and learning rate, by monitoring the training and validation error curves. For the optimization-related hyperparameters, the values in Supplementary Table 6 may be used. The number of random restarts was capped at three in all replications. The code for our simulation section, including the implementation of SDSS, is available at the GitHub repository Chapagain (2024b).

Choice of surrogate. Although the approximation and estimation error bounds for the kernel-based surrogates are sharper due to their symmetry property, they were outperformed by the product-based surrogates in our simulations. We suspect that the simpler form of the product-based surrogate may have optimization-related advantages, making this surrogate less sensitive to hyperparameters. In view of the above, we present our results only for the product-based surrogate, where we take $\tau(x)=1+x/\sqrt{1+x^2}$ as it performed slightly better than the other $\tau$'s we considered, namely $1+\tanh(5x)$, $1+x/(1+|x|)$, and $1+2\arctan(\pi x/2)/\pi$. That said, all four choices led to comparable performance. If we ensemble these four surrogates using the strategy mentioned in Section 6.4,
the value function of the estimated policy remains comparable to the above-mentioned best-performing $\tau$. However, ensembling slightly increases the variance of $V(\hat d)$ for nonlinear policy classes, likely due to the higher parameter burden in this case. Figure 8 in Supplement 7.1 illustrates this under Scheme 2.

Benchmark methods. Both Q-learning and ACWL are stagewise methods relying on backward induction. While Q-learning is purely model-based, ACWL uses both modeling and single-stage direct search. Q-learning requires modeling the Q-functions, whereas ACWL requires modeling their contrasts, both of which can become challenging when the outcome models are complex. However, since ACWL's stage-wise direct search is based on a doubly robust construction, it is expected to be less affected by model misspecification as long as the propensity scores are consistently estimated. Here, the true propensity scores are provided to ensure comparability with SDSS.

Q-learning. Similar to SDSS, we implemented both linear and nonlinear versions of Q-learning using PyTorch. The nonlinear Q-learning uses a feed-forward ReLU neural network with two hidden layers with 64 and 32 neurons per layer. To ensure comparability with SDSS, we used the ADAM optimizer with random restarts and a learning rate scheduler. We considered a training-validation split of 4:1 because, similar to SDSS, the learning rate was updated when the validation loss became stagnant. More details on the network architecture and optimization hyperparameters are provided in Supplement S2.1.3. Q-learning was generally robust to moderate changes in the neural network and optimization-related hyperparameters, and performed best with ReLU networks with three hidden layers for the neural network policies.

ACWL. Tao and Wang (2017)'s ACWL is a stagewise method which requires estimating contrasts of Q-functions. ACWL uses tree-based policies to directly optimize a doubly robust estimator of single-stage value functions.
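Returning briefly to the surrogate-ensembling strategy from Section 6.4 used above: combining the four $\tau$'s amounts to a per-subject majority vote over the four fitted policies. A minimal sketch (the tie-breaking rule, favoring the first fit's choice, is our illustrative convention, not specified in the paper):

```python
from collections import Counter

def majority_vote(assignments):
    """Combine treatment assignments from several surrogate-specific DTR fits
    by per-subject majority vote; ties go to the earliest fit's choice.
    `assignments` is a list of per-surrogate treatment lists."""
    n_subjects = len(assignments[0])
    combined = []
    for j in range(n_subjects):
        votes = [fit[j] for fit in assignments]
        counts = Counter(votes)
        best = max(counts.values())
        winners = {t for t, c in counts.items() if c == best}
        # deterministic tie-break: first fit whose vote is among the winners
        combined.append(next(t for t in votes if t in winners))
    return combined

# four surrogates' treatment assignments for five subjects
fits = [[1, 2, 3, 1, 2], [1, 2, 2, 1, 3], [2, 2, 3, 1, 3], [1, 3, 3, 2, 3]]
ens = majority_vote(fits)  # -> [1, 2, 3, 1, 3]
```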
We implemented ACWL using the authors' publicly available R code, which uses the rpart package, under its default settings. The provided functions for ACWL do not take CART hyperparameters as input.

7.1.3. Results

Among the methods, Q-learning was the fastest (4–5 seconds), followed by SDSS (7–8 seconds), and ACWL was the slowest (about 11 seconds); overall, the runtimes were broadly comparable. The simulations were run on a MacBook Pro with a 2.6 GHz 6-core Intel Core i7 processor and 16GB RAM. Figure 5 displays the boxplots of the value function estimates. We discuss the case of non-linear policies first. Q-learning has the lowest value function estimates, with the highest Monte Carlo variance, among all methods. This observation indicates that Q-learning may still be affected by model misspecification even when we use universal approximation classes such as deep neural networks and fairly large sample sizes. Although the stagewise method ACWL is competitive with SDSS on average, the value function estimates of SDSS are consistently the highest among all methods, underscoring the advantage of simultaneous optimization in our simulation settings. Figure 5a shows that the gap between SDSS's and the other methods' value function estimates slowly increases as the complexity parameter $\omega$ increases. To this end, note that both the range and the variance of $Y_2$ increase with $\omega$. Since SDSS uses a bounded surrogate loss, it may be potentially
less sensitive to higher variance in $Y$ compared to the other methods, which use the squared error loss for modeling purposes. In a related context, bounded non-convex surrogate losses have been empirically demonstrated to offer increased robustness to label noise in classification tasks (Natarajan et al., 2013; Akhtar et al., 2024).

When considering linear policies, Q-learning has lower variance than SDSS, as shown by Figures 5b and 5d. This is unsurprising because, being a linear-regression-based method, linear Q-learning is computationally simpler than linear SDSS, which still involves non-convex optimization. In general, the value functions for linear policies are smaller than those of non-linear policies for both methods, which is expected because the optimal DTR is non-linear under both schemes. The value function estimates of SDSS are consistently higher than those of Q-learning, suggesting that SDSS is less affected by policy class misspecification in our simulation settings. In the binary treatment setting, Zhao et al. (2015), Jiang et al. (2019), and Laha et al. (2024a) also observed that direct-search-based methods often outperform or match Q-learning when the policy class is misspecified. As in the nonlinear case, the performance gap between SDSS and Q-learning increases as the complexity parameter $\omega$ increases in Scheme 1, demonstrating the advantage of SDSS as the variance of $Y$ increases. These findings indicate that in our simulation settings, SDSS is either competitive with or exhibits better performance than the other methods, both with linear and non-linear policies.

7.2. Application to an EHR Study of Sepsis

Sepsis is a critical condition in which the body's immune response to infection causes severe, and potentially lethal, complications. Intravenous (IV) fluid resuscitation is often used to manage severe cases of sepsis (Raghu et al., 2017).
Due to the significant variability in how sepsis affects individuals, personalized treatment approaches for IV fluid resuscitation are recommended (Lat et al., 2021; Komorowski et al., 2018). Despite its critical importance, there still remains considerable uncertainty about the best ways to administer IV fluid, particularly across diverse patient subgroups and clinical conditions (Marik, 2015). In this section, we investigate the effectiveness of SDSS and the comparators in estimating the optimal IV fluid resuscitation DTR using the Sepsis3 data from the Medical Information Mart for Intensive Care version III (MIMIC-III) database (Johnson et al., 2016). It is an openly available dataset comprising patients admitted to the Beth Israel Deaconess Medical Center. Following Raghu et al. (2017), we use arterial lactate levels to assess the effectiveness of IV-fluid resuscitation policies, since elevated levels indicate worsening of sepsis. Since the true value function of any policy/DTR is unknown, we estimate it using the doubly robust Augmented Inverse Probability Weighted (AIPW) estimator of Kallus and Uehara (2020) (Jiang and Li, 2016; Thomas and Brunskill, 2016).

Study Cohort and Data Description. We consider policies over two stages. We extracted a cohort of adult ICU patients from MIMIC-III, restricting the analysis to individuals with recorded IV fluid administration and relevant clinical measurements at two critical time points: admission (Stage 1) and four hours post-admission (Stage 2). The data was cleaned and pre-processed according to Johnson and Pollard (2018). We discretized the rate of IV fluid transfusion (in mL/kg body weight) to apply our methods, which is a common practice in reinforcement learning (Raghu et al., 2017; Komorowski et al., 2018). We considered three treatment levels: no IV (no fluid administered), moderate (0–100 mL/kg body weight), and high (>100 mL/kg body weight). Therefore, $k_1=k_2=3$ in both stages. Since lower lactate levels are associated with effective sepsis management, we transformed the lactate level $L_{\mathrm{arterial}}$ as $\tilde L_{\mathrm{arterial}}=4(22-L_{\mathrm{arterial}})$ to use it as the reward, or outcome. This transformation was selected because the original arterial lactate values range from 0.2 to 21.06; it ensures a positive outcome variable with a broader range. The covariates include diverse physiological measurements and laboratory values. A total of 98.42% of patients had at least one missing value in their trajectory, with 27.95% missing more than 50% of their trajectory. Covariates with more than 70% missingness, e.g., temperature and the PaO2/FiO2 ratio, were excluded. Similar to prior works that used the Sepsis3 repository, such as Raghu et al. (2017) and Johnson et al. (2018), we imputed the remaining missing values using K-nearest neighbors (KNN) imputation with K=3. Then, we estimated the stagewise propensity scores $\pi_t(A_t|H_t)$ of each patient using multinomial logistic regression, and removed all patients with a propensity score below 0.15 at any stage, leading to the exclusion of about 5% of patients. Clipping observations with extreme probability estimates is recommended for improving the predictions of logistic regression (Hirano et al., 2003).
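The two preprocessing steps just described, the outcome transform $\tilde L=4(22-L)$ and the exclusion of patients with any propensity below 0.15, can be sketched as follows. The function name and data layout are ours, chosen for illustration.

```python
def preprocess_outcome_and_propensity(lactate, propensities, cutoff=0.15):
    """Sketch of two preprocessing steps: the reward transform
    L~ = 4 * (22 - L) on arterial lactate, and a keep-flag that is False for
    any patient whose estimated propensity falls below `cutoff` at any stage."""
    reward = [4.0 * (22.0 - l) for l in lactate]
    keep = [all(p >= cutoff for p in ps) for ps in propensities]
    return reward, keep

# three illustrative patients; the middle one has a stage-1 propensity of 0.1
rewards, keep = preprocess_outcome_and_propensity(
    lactate=[0.2, 21.06, 5.0],
    propensities=[(0.4, 0.5), (0.1, 0.6), (0.2, 0.3)],
)
```

Note that the transform is decreasing in lactate, so higher rewards correspond to better sepsis management, and it is strictly positive on the observed range 0.2 to 21.06.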
In our case, this step also helps stabilize the inverse propensity scores, which are required not only during the training of SDSS and ACWL, but also in the testing phase, for estimating the value function using Kallus and Uehara (2020)'s AIPW estimator. After data pre-processing, we obtained a final sample of 16,551 patients with complete data on 45 and 42 covariates in the first and second stages, respectively. The complete covariate list is provided in Supplement S1.7. In the final data, the percentages of patients who received no IV, a moderate dose, and a high dose of IV in stage 1 were 65.23%, 12.80%, and 21.97%, respectively. The corresponding percentages in stage 2 were 37.51%, 19.59%, and 42.90%.

Training and testing. We reshuffled the dataset 100 times and, after each shuffle, split the data into training and testing sets with a 50-50 ratio, resulting in 100 replications. In each replication, the training subset was used for training the policies following the procedures in Section 7.1, and the test set was used exclusively for estimating the value function of the trained policies. The hyperparameters for SDSS and Q-learning are provided in Supplement S2.2. During training, the maximum number of random restarts was set to six in all replications. When $T=2$ and $k_1=k_2=3$, Kallus and Uehara (2020)'s AIPW estimator of the value function takes the form
$$\hat V_{\mathrm{AIPW}}(d)=\mathbb{P}_n\Big[\hat Q^d_1(H_1,d_1(H_1))\Big]+\mathbb{P}_n\left[\frac{1[A_1=d_1(H_1)]}{\hat\pi_1(A_1|H_1)}\Big(Y_1-\hat Q^d_1(H_1,A_1)+\hat Q_2(H_2,d_2(H_2))\Big)\right]+\mathbb{P}_n\left[\prod_{i=1}^{2}\frac{1[A_i=d_i(H_i)]}{\hat\pi_i(A_i|H_i)}\Big(Y_2-\hat Q_2(H_2,A_2)\Big)\right],\qquad(7.1)$$
where $\hat\pi_1(A_1|H_1)$ and $\hat\pi_2(A_2|H_2)$ are the propensity score estimators, $\mathbb{P}_n$ is the empirical distribution corresponding to the test data, and for any DTR $d$ and $i,j\in[3]$, the functions $\hat Q_2(H_2,i)$ and $\hat Q^d_1(H_1,j)$ are the estimators of $Q_2(H_2,i)=E[Y_2\,|\,H_2,A_2=i]$ and $Q^d_1(H_1,j)=E\big[Y_1+Q_2(H_2,d_2(H_2))\,|\,H_1,A_1=j\big]$, respectively. The functions $Q_2$ and $Q^d_1$ are referred to as the Q-functions under the DTR $d$ (Sutton, 2018). The above Q-functions were estimated nonparametrically using a 3-layer feed-forward neural network on the test data, whose details can be found in Supplementary Tables 14 and 10. The value function estimates of the different methods were stable under moderate variations of the parameters of this 3-layer neural network. For $\hat\pi_1(A_1|H_1)$ and $\hat\pi_2(A_2|H_2)$, we used probability estimates obtained from the multinomial logistic regression fitted on the full data. Figure 6 provides the boxplots of the value function estimates obtained from the 100 replications. As in Section 7.1, ACWL was excluded from the linear policy computations. The Python code for the data application is available in the GitHub repository cited as Chapagain (2024a).

Fig 6: Box plots of estimated value functions for the sepsis data, for (a) linear policies and (b) nonlinear policies. The Y-axis shows AIPW estimates of the value function across 100 replications generated by reshuffling the data as described in Section 7.2. Each box extends from the 25th to the 75th percentile, with the median indicated by the central line. The whiskers extend up to 1.5 times the interquartile range.

Results. In Figure 6, non-linear SDSS exhibits higher value function estimates with lower variability than the non-linear comparators. SDSS also outperforms Q-learning when using linear policies.
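Before continuing with the results, the two-stage AIPW estimator (7.1) used above can be illustrated with a toy implementation. The nuisance inputs (Q-function and propensity estimates) are user-supplied stubs here, not the paper's fitted neural-network and logistic models.

```python
def aipw_value(data, d1, d2, Q1d, Q2, pi1, pi2):
    """Toy implementation of the two-stage AIPW value estimator in (7.1).

    `data` holds tuples (H1, A1, Y1, H2, A2, Y2); Q1d(H1, a) estimates the
    stage-1 Q-function under the DTR d, Q2(H2, a) the stage-2 Q-function,
    and pi1(a, H1), pi2(a, H2) the propensities."""
    total = 0.0
    for H1, A1, Y1, H2, A2, Y2 in data:
        est = Q1d(H1, d1(H1))                 # plug-in term
        if A1 == d1(H1):                      # stage-1 augmentation term
            est += (Y1 - Q1d(H1, A1) + Q2(H2, d2(H2))) / pi1(A1, H1)
            if A2 == d2(H2):                  # stage-2 term: both indicators on
                est += (Y2 - Q2(H2, A2)) / (pi1(A1, H1) * pi2(A2, H2))
        total += est
    return total / len(data)

# sanity check: with zero Q-estimates, unit propensities, and a DTR matching
# the observed treatments, the estimator reduces to mean(Y1 + Y2)
toy = [("h1", 1, 1.0, "h2", 2, 2.0), ("h1b", 1, 3.0, "h2b", 2, 4.0)]
v_hat = aipw_value(toy, d1=lambda h: 1, d2=lambda h: 2,
                   Q1d=lambda h, a: 0.0, Q2=lambda h, a: 0.0,
                   pi1=lambda a, h: 1.0, pi2=lambda a, h: 1.0)  # -> 5.0
```

The double robustness is visible in the structure: the augmentation terms correct the plug-in Q-function term using inverse-propensity-weighted residuals, so the estimator remains consistent if either the Q-models or the propensity models are correct.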
These findings align with our simulations in Section 7.1, where we saw that SDSS can be a competitive method for high-dimensional data. Among Q-learning, SDSS, and ACWL, Q-learning exhibits the lowest value function estimates, which also aligns with our simulations in Section 7.1. It appears that the policy estimated by Q-learning is more conservative than those estimated by ACWL and SDSS. To illustrate this, we examine the SOFA (Sequential Organ Failure Assessment) score, which quantifies organ dysfunction in sepsis (Do et al., 2023). Encoding treatment levels as 1 (no IV), 2 (moderate), and 3 (high), Figure 7 shows that, under non-linear policies, the average treatment level of Q-learning increases rather slowly with the SOFA score when compared with ACWL or SDSS. Figure 7 also shows that the SDSS policy exhibits the steepest escalation in treatment intensity as severity increases, aligning with clinical expectations. The ACWL policy also increases treatment intensity with severity. While the value function estimate for non-linear policies is slightly higher than that of the linear policies for SDSS, the difference is modest. Linear SDSS exhibits slightly lower variability, as it involves optimization over fewer parameters. This observation suggests that in real-world applications, where interpretable policies are desirable but model misspecification is a concern, SDSS with a linear policy may be promising.

8. Comparison with related literature

8.1. Binary treatment setting

Considerable work has been done on direct search
methods for binary treatments, with extensions to specialized settings such as constrained and censored problems (Zhao et al., 2015; Jiang et al., 2019; Liu et al., 2018; Zhao et al., 2020; Liu et al., 2024b). The key differences between the binary-treatment and the general DTR setting have already been discussed in Sections 1 and 2. Here, we compare our work to Laha et al. (2024a), which is the most closely related to the current paper among works on the binary treatment setting.

Fig 7 (a: Stage 1; b: Stage 2): Plot of mean treatment level vs SOFA score in our data application for non-linear policies. We partition the SOFA score into four clinically meaningful bins, 0-4, 5-8, 9-12, and at least 13, corresponding to mild, moderate, severe, and critical organ dysfunction. Treatment levels are encoded as 1 (no IV), 2 (moderate), and 3 (high). For each method and SOFA bin, the average treatment level, displayed on the y-axis, is computed using the training set. Although SOFA scores range from 0 to 24, values above 13 were grouped due to their low frequency in the dataset.

Comparison with Laha et al. (2024a). In Laha et al. (2024a), we characterized a class of margin-based, Fisher consistent surrogates for sequential DTR classification in the binary treatment case, and provided sharp regret bounds for the resulting direct search method. In that paper, we also established the (Fisher) inconsistency of margin-based, smooth, concave surrogates. Below, we highlight the major differences between Laha et al. (2024a) and the present work.

[1.] The first major difference arises from the distinction between binary and multiclass classification, whose Fisher consistency literatures differ significantly. As discussed in Sections 2 and 3, binary classification, and by extension binary-treatment DTRs, heavily exploits the embedding of the classes/treatments into $\{\pm 1\}$. As noted by Zhou et al.
(2022), the absence of this structure complicates multiclass classification and, consequently, general DTR ($k_t\ge 2$) classification. As previously noted, the unavailability of margin-based losses in the general setting is a result of this. More importantly, tools for studying Fisher consistency in binary classification do not directly extend to multiclass settings, necessitating the development of new techniques (Tewari and Bartlett, 2007; Ramaswamy and Agarwal, 2016; Neykov et al., 2016). Likewise, while the insights from Laha et al. (2024a) helped guide this work, their tools and methods were not directly applicable to the general setting. Nonetheless, we obtain regret decay rates similar to Laha et al. (2024a)'s, which indicates that the general DTR problem is not inherently harder than the binary-treatment DTR case.

[2.] Given the apparent failure of concave surrogates in the binary treatment case, Laha et al. (2024a) speculated that DTR Fisher consistency might require stricter conditions on surrogate losses than in binary or multiclass classification, where concave, Fisher consistent surrogates exist. However, Laha et al. (2024a) primarily focused on characterizing a class of Fisher consistent surrogates and did not fully explore necessary and sufficient conditions for Fisher
consistency. Building on the initial findings of Laha et al. (2024a), the present paper shifts focus toward a deeper theoretical investigation of Fisher consistency by developing precise necessary and sufficient conditions. In Section 4, we show that for separable surrogates, DTR Fisher consistency indeed imposes strict restrictions on the image set of the surrogates. Moreover, Laha et al. (2024a)'s surrogate losses satisfy these restrictions. In fact, they are a special case of our product-based surrogate losses (see Section 4.3).

[3.] Laha et al. (2024a) focused on the two-stage case ($T=2$), whereas we address the general $T$-stage setting, which makes our theoretical analysis more technically challenging.

8.2. Direct search for general ($k_t\ge 2$) setting

As noted earlier, research on direct search in the general setting has been scarcer than in the binary treatment setting. The tree-based methods proposed by Tao and Wang (2017) and Tao et al. (2018) are among the few works in this category whose setting aligns with ours. Both methods follow the stagewise approach, which we have already discussed extensively in Sections 1, 7.1, and 7.2. Here, we instead focus on Xue et al. (2022), as it is, to the best of our knowledge, the only other DTR direct search paper adopting a simultaneous optimization framework. However, there are significant differences between Xue et al. (2022) and our work.

Comparison with Xue et al. (2022). First, Xue et al. (2022)'s setting and goal differ substantially from ours. Their focus is on censored survival data, with the goal of maximizing the conditional survival function of patients under censoring. While they demonstrate Fisher consistency of their surrogate losses under certain conditions, exploring the theory of Fisher consistency was not a goal of that paper. Because the survival function is a probability, it factorizes into a product of stage-specific terms.
Taking the logarithm of this product yields a sum of $T$ terms, each corresponding to a different stage's treatment assignment. This tensorization makes the discontinuous loss function additive in the $d_t$'s (the stage-specific treatment assignments). The clever work of Xue et al. (2022) exploits this additive structure to gain a computational advantage. In contrast, our value function, which is the expected sum of potential rewards, does not tensorize. Nevertheless, despite the advantage of tensorization, Xue et al. (2022) require non-convex surrogates, as we do. To this end, they adopt the angle-based pred function mentioned in Remark 2. Second, the Fisher consistency results in Xue et al. (2022) require the set of distributions $\mathcal{P}$ (as denoted in our paper) to satisfy a set of inequalities involving discontinuous functions of the conditional survival probability (see their Condition 2). According to the authors, these restrictions ensure that receiving the optimal treatment increases the targeted survival probability at each stage. However, it is currently unknown whether these conditions are necessary, or whether they are verifiable under more standard assumptions of the DTR literature, such as those in Zhao et al. (2015); Murphy (2005); Kosorok and Laber (2019); Robins (2004). These assumptions have no clear analogue in our setting and are therefore not directly comparable to ours.
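The tensorization point can be illustrated with a small numerical sketch; the stage-wise conditional survival probabilities below are made-up values, not quantities from either paper. The log of a product of stage-specific terms splits into one additive term per stage, which is the structure Xue et al. (2022) exploit; an expected sum of rewards admits no analogous factorization.

```python
import numpy as np

# Hypothetical conditional survival probabilities for T = 2 stages,
# indexed by the stage-specific treatment assignments (d1, d2).
s1 = {1: 0.95, 2: 0.90}             # stage-1 conditional survival under d1
s2 = {(1, 1): 0.80, (1, 2): 0.85,   # stage-2 conditional survival under (d1, d2)
      (2, 1): 0.70, (2, 2): 0.75}

for d1 in (1, 2):
    for d2 in (1, 2):
        survival = s1[d1] * s2[(d1, d2)]                    # product over stages
        log_split = np.log(s1[d1]) + np.log(s2[(d1, d2)])   # one term per stage
        assert np.isclose(np.log(survival), log_split)
```

The additivity appears only after the log transform of a product; the value function in this paper is an expected sum of rewards, so no such transform is available.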
9. Discussion

In this paper, we have explored the limitations and possibilities of simultaneous direct search for finding the optimal DTR. When both the number of treatments and the number of stages are arbitrary, simultaneous direct search reduces to an optimization problem with a complex, discontinuous loss. We begin by proving some negative results on the potential convexification of this problem through concave, Fisher consistent surrogates. Then, focusing on the class of separable surrogates, we show that Fisher consistency imposes restrictive geometric conditions on the surrogates when $T\ge 2$. However, when $T=1$, the necessary conditions resemble those of multiclass classification. These findings underscore that, on top of the possible inconsistency of concave surrogates, the overall class of Fisher consistent surrogates may be substantially narrower in multi-stage settings than in the single-stage (ITR) setting. Taken together, our findings suggest that computational challenges, particularly those stemming from non-convexity, may be an unavoidable cost of Fisher consistency in simultaneous direct search. However, we demonstrate the existence of smooth, Fisher consistent surrogate losses, thereby showing that the discontinuous optimization problem can be smoothed. Building on this, we develop SDSS, a method for the surrogate optimization that exploits gradient-based optimizers to enable fast implementation. Our numerical experiments and real-data analysis suggest that, despite the higher computational burden, SDSS, when combined with ADAM, random restarts, and minibatching, may offer advantages in settings where modeling is difficult, provided the sample size is sufficiently large. SDSS is the initial step towards DTR learning with simultaneous optimization. There remains substantial room for improvement. Several modifications may further improve its performance, some of which are discussed below.

Tailored initialization. We used He initialization for the non-convex optimization.
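For the linear-score case (described in Supplement S1.4), the He initialization used here can be sketched as follows; the function name `he_init_linear` is a hypothetical illustration, not the authors' code.

```python
import numpy as np

def he_init_linear(n_features, rng=None):
    """He-style initialization for a linear relative score:
    i.i.d. centered Gaussian entries with variance 2 / n_features."""
    rng = np.random.default_rng(rng)
    return rng.normal(loc=0.0, scale=np.sqrt(2.0 / n_features), size=n_features)

# e.g., one stage-1 relative score over 45 covariates, as in the sepsis data
theta0 = he_init_linear(45, rng=0)
```

For networks, the same recipe is applied layer by layer with $n_f$ set to the number of input units of the layer, and biases started at zero.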
Developing tailored initialization methods is a promising direction for improvement because non-convex direct search has shown improved performance with customized initializations (Xue et al., 2022; Liu et al., 2024b).

Location transformation. We applied a location transformation to the $Y_t$'s when they were negative. However, this transformation can affect practical performance. It may be possible to avoid it by modifying our surrogates using the strategies in Zhao et al. (2019); Liu et al. (2021).

Unknown propensity scores. Our surrogate framework is based on an IPW estimator of the value function, which can be unstable when the propensity scores need to be estimated (Kosorok and Laber, 2019). When propensity scores are unknown, the surrogate framework should ideally be based on the doubly robust estimator of the value function, as in (7.1). This presents an interesting avenue of future research. However, since the discontinuous loss in (7.1) differs from $\psi_{dis}$ and involves Q-functions, it would alter the conditions required for Fisher consistency.

10. Acknowledgments

Nilanjana Laha and Nilson Chapagain's research was partially supported by National Science Foundation grant DMS-2311098.

Supplementary Material

The supplementary material contains additional discussions, additional details on the tuning parameters used in our empirical study in Section 7, and the proofs of the theorems and lemmas. Code: The Python code for the simulations and data applications is made available on GitHub (Chapagain, 2024b,a).

Table 3: List of important notation.
$t$: shorthand for stage
$T$: horizon, or the total number of stages
$(O_t, A_t, Y_t)$: covariate, treatment, and outcome triplet at stage $t$
$H_t$: the history at stage $t$
$D_i$: trajectory of the $i$-th patient, defined in (2.1)
$k_t$: total number of treatments at stage $t$
$K$: the sum of all $k_t$'s, i.e., $K=\sum_{t\in[T]}k_t$
$q$: dimension of $H_T$
$\mathcal{F}$: the class of all scores $f$
$\mathcal{P}_0$: class of distributions satisfying Assumptions I-V
$d$: a treatment policy or DTR
$d^*$: the optimal DTR, need not be unique
$g$: relative score functions
pred: an operator defined as $\mathrm{pred}(x)=\max(\mathrm{argmax}(x))$ for any vector $x$
tran: an operator defined as $\mathrm{tran}(x)=(0,-x)$ for any vector $x$
$\psi$: $T$-stage surrogate
$\phi$: single-stage surrogate, also denoted by $\phi_t$ when it varies with stage
$K$, $\mathbb{K}$: kernels, where $K$ is generally a univariate density and $\mathbb{K}$ is a multivariate density
$V(f)$: value function of the score $f$
$V_\psi(f)$: surrogate value function of the score $f$, defined in (2.8)
$\tilde f$, $\tilde d$: $\tilde f$ is the maximizer of $V_\psi(f)$ over $f\in\mathcal{F}$ and $\tilde d=\mathrm{pred}(\tilde f)$; both depend on $\psi$
$\widehat V_\psi(f)$: empirical surrogate value function of the score $f$, defined in (2.7)
$\widehat V_{\psi,\mathrm{rel}}(g)$: surrogate value function in the relative margin-based form, defined as $\widehat V_{\psi,\mathrm{rel}}(g)=\widehat V_\psi(\mathrm{tran}(g))$ for any relative score $g$
$L$: a function related to $\widehat V_{\psi,\mathrm{rel}}$, see (5.3)
$Q^*$: optimal Q-function (state-action value function)
$Q^d$: Q-function under policy $d$
$\mathcal{U}_n$: policy class, e.g., class of relative class scores $g$
$\mathcal{L}$: restricted classes of policies
$\mathcal{F}(N,W,s)$: ReLU network with depth $N$, width $W$, and sparsity $s$
$\Gamma$: template of a relative margin-based surrogate (see Definition 4.1)
$\theta^{(r)}$: the $r$-th parameter update in SDSS (see Algorithm 1)
$\mathrm{Opt}_n$: optimization error
$\beta$: smoothness parameter for Hölder classes

Supplementary Material. The supplement is organized as follows. Section S1 provides additional discussion on various conditions related to our theoretical results, and additional details on the simulations in Section 7.1 and the data application in Section 7.2. Section S2 presents tables listing the tuning parameters used for SDSS and Q-learning in Sections 7.1 and 7.2.
The remainder of the supplement contains the proofs.

S1. Additional discussion

S1.1. Further discussion on the conditions in Theorem 3.1

The gamma-phi losses in Example 3 satisfy the boundedness condition if $\phi_{out}$ is bounded above, and satisfy the domain condition if $\phi_{out}$ and $\phi_{in}$ are real-valued. The smoothness condition holds if both $\phi_{in}$ and $\phi_{out}$ are smooth. The pairwise loss in (3.7) satisfies the conditions of Theorem 3.1 if $-\tilde\phi$ is strictly convex, bounded below, closed, proper, thrice continuously differentiable on the interior of its domain, and $(0,0)\in\mathrm{int}(\mathrm{dom}(-\tilde\phi))$. Similarly, the pairwise loss in (3.8) satisfies all the conditions of Theorem 3.1 if $-\tilde\phi_1$ and $-\tilde\phi_2$ are strictly convex, bounded below, closed, proper, thrice continuously differentiable on the interiors of their respective domains, and $0\in\mathrm{int}(\mathrm{dom}(-\tilde\phi_1))\cap\mathrm{int}(\mathrm{dom}(-\tilde\phi_2))$. The exponential loss in Example 1, as well as the concave versions of the cross-entropy and coherence losses in Example 3, satisfy all conditions of Theorem 3.1.

S1.2. Further discussion on the constant $C^*$ in Proposition 1

Since the regret is bounded by $(V_\psi^* - V_\psi(f))/C^*$, a larger value of $C^*$ is preferable. This naturally raises the question: how large can $C^*$ be? If $\phi_t$ is scaled
by a constant $c>0$, then $\inf_{x\in\mathbb{R}^{k_t}}\phi_t(x;i)$ also scales by $c$, implying that $J_t$ becomes $cJ_t$. From (4.7), however, it follows that such scaling does not affect the positive fraction $\chi_{\phi_t}$. Consequently, $C^*$ scales linearly with $\phi_t$. However, so does the $\psi$-regret term $V_\psi^* - V_\psi(f)$. For instance, multiplying $\phi_1$ by 2 results in both $C^*$ and the $\psi$-regret doubling, leaving the overall bound on the regret $V^* - V(f)$ in (4.8) unchanged. Thus, inflating $\phi_t$ by a scaling factor does not change the rate of regret decay. Therefore, let us assume that the $\phi_t$'s are uniformly bounded by one. In this case, the $J_t$'s are also bounded by one. Since $\chi_{\phi_t}\le 1$ by definition, it follows that $C^*\le 1$ in this case. Therefore, although it may be possible for the true regret to be smaller than the $\psi$-regret, the bound in (4.8) offers no such guarantee. A few additional remarks on $C^*$ are in order.

(a) The role of $\chi_{\phi_t}$: Large values of $\chi_{\phi_t}$ lead to large values of $C^*$. For a single-stage surrogate $\phi$, $\chi_\phi$ is large when disagreement between $\mathrm{pred}(x)$ and $\mathrm{pred}(p)$ causes a large gap between $\Psi^*(p)$ and $\Psi(x;p)$. Such surrogates are likely to ensure that the surrogate value function of a suboptimal DTR is substantially smaller than that of the optimal one.

(b) The role of $J_t$: $C^*$ increases linearly with $J_t$, hinting that surrogate losses for which $\phi_t(x;\mathrm{pred}(x))$ is large compared to the other $\phi_t(x;i)$'s ($i\ne\mathrm{pred}(x)$) may yield lower regret. The extreme case is $\phi_t=\phi_{dis}$, in which case $\phi(x;\mathrm{pred}(x))=1$ and all other $\phi(x;i)$'s are zero. Therefore, surrogate losses closer to $\phi_{dis}$ in geometry are expected to yield larger values of $C^*$.

S1.3. Connection between our surrogates and the vanishing gradient problem

Figure 2a shows that $\widehat V_{\psi,\mathrm{rel}}$ exhibits horizontal asymptotes and plateau regions in the toy example from Section 5.1.
These features, which lead to the vanishing gradient problem, arise from $\Gamma$, since, as shown in (5.4), $\widehat V_{\psi,\mathrm{rel}}$ is a positive linear combination of $\Gamma$ functions. Figure 1 shows that the maximum of $\Gamma(x,y;1)$ is not attained in $\mathbb{R}^2$, and that $\Gamma(x,y;1)$ exhibits horizontal asymptotes and plateau regions for both the product-based and kernel-based surrogates. The plots for $\Gamma(x,y;2)$ and $\Gamma(x,y;3)$ are similar, except that the bumps (optimal plateau regions) are in the second and fourth quadrants, respectively. Although Figure 1 uses a specific $\tau$ for the product-based surrogate, we verified that the surface plots have similar properties for the other examples of $\tau$ mentioned in Section 4.2.2. One may wonder whether an alternative surrogate could eliminate plateau-related issues entirely. If we could construct a $\phi$ satisfying Conditions N1 and N2 that also attains a unique maximum in $\mathbb{R}^k$, these issues could be at least partially eradicated. However, all Fisher consistent surrogates we have been able to construct that satisfy Conditions N1 and N2 also satisfy Condition N3, and are variants of the kernel-based or product-based forms. All these surrogates lead to $\Gamma$'s similar to Figure 1; that is, they lack strong concavity, the optimum is attained at infinity, and plateau regions exist.
Therefore, the value functions resulting from these surrogates have suboptimal plateau regions, and the vanishing gradient problem persists.

S1.4. More details on He and Xavier initialization

When the relative class scores $g_{ti}(\cdot;\theta_{ti})$ are linear, He initialization samples the elements of the initial parameter $\theta^{(0)}_{ti}$ independently from a centered Gaussian distribution with variance $2/n_f$, where $n_f$ is the dimension of the feature space. When the relative class scores are deep neural networks, $\theta_{ti}$ consists of the weights and bias parameters of the network. Under He initialization, the bias parameters in each layer are set to zero, while the weights are sampled independently from a centered Gaussian distribution with variance $2/n_f$, where $n_f$ is the number of input units in that layer. Xavier initialization follows a similar approach in both cases, but samples the weights from $U[-1/\sqrt{n_f},\,1/\sqrt{n_f}]$ instead of a normal distribution.

S1.5. More details on Scheme 2

The stage-specific parameters of the Scheme 2 outcome models are given by
$$\lambda^{(1)}=(2.5,\,2.7,\,2.6), \quad \lambda^{(2)}=(2.6,\,2.5,\,2.7), \qquad (S1.1)$$
$$\zeta^{(1)}=(2.1,\,2.0,\,2.2), \quad \zeta^{(2)}=(2.2,\,2.1,\,2.3). \qquad (S1.2)$$
The optimal policies are:
$$d^*_2(H_2)=\mathop{\mathrm{argmax}}_{i\in\{1,2,3\}}\big\{\lambda^{(2)}_i\cos\big(O_1^\top\mathbf{1}_p/2\big)+\zeta^{(2)}_i\sin\big(O_1^\top\mathbf{1}_p\big)\big\},$$
$$d^*_1(H_1)=\mathop{\mathrm{argmax}}_{i\in\{1,2,3\}}\big\{\lambda^{(1)}_i\sin\big(O_1^\top\mathbf{1}_p/2\big)+\zeta^{(1)}_i\cos\big(O_1^\top\mathbf{1}_p\big)\big\}.$$

S1.6. Comparing regular SDSS with ensembled SDSS for Scheme 2

Fig 8 (a: Linear policies; b: Nonlinear policies): Value function of SDSS with and without ensembling under Scheme 2. Ensembling is performed as described in Section 7.1. The regular SDSS (colored blue) uses $\tau(x)=1+x/\sqrt{1+x^2}$. The left panel shows results for linear policies; the right panel shows results for nonlinear policies; both use $p=50$ and $100$.
The x-axis indicates the sample size, and the y-axis represents the value function estimates from 100 Monte Carlo replications. Each box spans the interquartile range (IQR), with edges at the 25th and 75th percentiles, a median line, and whiskers extending to 1.5 times the IQR. The red dashed line marks the optimal value. In all four cases, SDSS was implemented following the procedures in Section 7.1. Ensembling slightly improves the value function for linear policies and yields a comparable value function for nonlinear policies, albeit with slightly higher variance.

S1.7. The list of covariates for the sepsis data

After data pre-processing, we had 45 covariates in stage 1 and 42 covariates in stage 2. The following 42 covariates were present in both stages: mechanical ventilation status, maximum vasopressor dose, ICU readmission status, median vasopressor dose, Glasgow Coma Scale (GCS), heart rate (HR), systolic blood pressure (SysBP), mean arterial pressure (MeanBP), diastolic blood pressure (DiaBP), respiratory rate (RR), fraction of inspired oxygen (FiO2), potassium level, sodium level, chloride level, glucose level, magnesium level, calcium level, hemoglobin (Hb), white blood cell count (WBC), platelet count, partial thromboplastin time (PTT), prothrombin time (PT), arterial pH, arterial partial pressure of oxygen (paO2), arterial partial pressure of carbon dioxide (paCO2), arterial
base excess (BE), serum bicarbonate (HCO3), arterial lactate, Sequential Organ Failure Assessment (SOFA) score, Systemic Inflammatory Response Syndrome (SIRS) score, shock index (HR/SBP), cumulative fluid balance, peripheral capillary oxygen saturation (SpO2), blood urea nitrogen (BUN), serum creatinine, serum glutamic oxaloacetic transaminase (SGOT/AST), serum glutamic pyruvic transaminase (SGPT/ALT), total bilirubin, international normalized ratio (INR), total fluid input, total fluid output, four-hourly fluid output. The extra three covariates in stage 1 were gender, age, and weight.

S2. Tuning parameters for Q-learning and SDSS

S2.1. Tuning parameters for the simulations in Section 7.1

S2.1.1. Summary and definition of settings

The common simulation parameters are provided in Table 6.

Table 4: Summary of simulation schemes.
Scheme 1, linear policies: varied values ω in {10, 20, 40}; sample sizes {5000, 15000}; comparators SDSS, Q-learning.
Scheme 1, non-linear policies: varied values ω in {10, 20, 40}; sample sizes {5000, 15000}; comparators Q-learning, SDSS, ACWL.
Scheme 2, linear policies: varied values p in {50, 100}; sample sizes {5000, 15000}; comparators SDSS, Q-learning.
Scheme 2, non-linear policies: varied values p in {50, 100}; sample sizes {5000, 15000}; comparators Q-learning, SDSS, ACWL.

Table 5: Definition of the settings for the simulations. These definitions are used in the rest of the tables in this section.
Setting 1: Scheme 1, ω = 10, n = 5000
Setting 2: Scheme 1, ω = 10, n = 15000
Setting 3: Scheme 1, ω = 20, n = 5000
Setting 4: Scheme 1, ω = 20, n = 15000
Setting 5: Scheme 1, ω = 40, n = 5000
Setting 6: Scheme 1, ω = 40, n = 15000
Setting 7: Scheme 2, p = 50, n = 5000
Setting 8: Scheme 2, p = 50, n = 15000
Setting 9: Scheme 2, p = 100, n = 5000
Setting 10: Scheme 2, p = 100, n = 15000

Table 6: Tuning parameters that did not change across simulation settings. These values were used for both SDSS and Q-learning.
Train/Validation Split: 70/30
Weight Initialization: He initializer
L2 Regularization (Weight Decay) Parameter: 0.001
Reduction Factor (RF): 0.8
Patience Threshold (N_patience): 30
Restart Threshold (N_restart): 3
Evaluation Frequency (r_eval): 2
ADAM Parameter (D1): 0.9
ADAM Parameter (D2): 0.999
EMA Smoothing Parameter (κ): 0.8
Numerical Constant (ε_num): 10^-8
Improvement Threshold (δ_imp): 0

S2.1.2. SDSS network parameters that varied across schemes

Table 7: Neural network parameters for linear SDSS that varied across settings in our simulations. Here hidden dim. corresponds to the number of units per hidden layer. Values are listed for Settings 1-10 in order.
Initial Learning Rate (lr0): 0.07, 0.07, 0.07, 0.07, 0.07, 0.07, 0.1, 0.1, 0.2, 0.2
Batch size (n_mini): 1000, 1000, 1000, 1000, 200, 1000, 800, 800, 1600, 1600
N_epoch: 10, 10, 10, 10, 10, 10, 20, 20, 60, 60
Activation Function: None (all settings)
Hidden Dim. Stage 1: 10, 10, 20, 20, 20, 20, 20, 20, 10, 20
Hidden Dim. Stage 2: 20, 40, 20, 40, 40, 40, 40, 40, 20, 40
Number of Layers: 1 (all settings)
Table 8: Neural network parameters for non-linear SDSS that varied across settings in our simulations. Here hidden dim. corresponds to the number of units per hidden layer. Values are listed for Settings 1-10 in order.
Initial Learning Rate (lr0): 0.07, 0.07, 0.07, 0.07, 0.07, 0.07, 0.07, 0.07, 0.2, 0.2
Batch size (n_mini): 5000, 5000, 5000, 5000, 5000, 5000, 5000, 5000, 1600, 1600
N_epoch: 20, 10, 20, 10, 10, 10, 10, 10, 60, 60
Activation Function: ReLU, ReLU, ReLU, ReLU, ReLU, ReLU, ELU, ELU, ELU, ELU
Dropout Rate: 0, 0, 0, 0, 0, 0, 0.4, 0.4, 0.4, 0.4
Hidden Dim. Stage 1: 40, 20, 20, 20, 40, 40, 40, 40, 40, 40
Hidden Dim. Stage 2: 40, 40, 40, 40, 40, 40, 20, 40, 40, 40
Number of Layers: 1, 1, 1, 1, 1, 1, 3, 3, 3, 3

S2.1.3. Q-learning parameters that varied across schemes

Table 9: Neural network parameters for non-linear Q-learning that varied across settings in our simulations. Here hidden dim. corresponds to the number of units per hidden layer. Values are listed for Settings 1-10 in order.
Initial Learning Rate (lr0): 0.07, 0.07, 0.07, 0.07, 0.07, 0.07, 0.05, 0.05, 0.05, 0.05
Batch size (n_mini): 5000 (all settings)
N_epoch: 20, 10, 20, 10, 10, 10, 10, 10, 20, 10
Activation Function: ReLU (all settings)
Dropout Rate: 0.1 (all settings)
Hidden Dim. Stage 1: 40, 40, 40, 20, 20, 40, 40, 20, 40, 20
Hidden Dim. Stage 2: 20, 40, 40, 40, 40, 40, 40, 40, 40, 40
Number of Layers: 3 (all settings)

S2.2. Tuning parameters for the sepsis data application

Table 10: Tuning parameters that were common to all methods in the application.
These parameters did not vary across methods or policies unless otherwise specified in Tables 11-14.
Train/Validation Split: 70/30
Weight Initialization: He initializer
L2 Regularization (Weight Decay) Parameter: 0.001
Reduction Factor (RF): 0.8
Patience Threshold (N_patience): 30
Evaluation Frequency (r_eval): 2
ADAM Parameter (D1): 0.9
ADAM Parameter (D2): 0.999
EMA Smoothing Parameter (κ): 0.8
Numerical Constant (ε_num): 10^-8
Improvement Threshold (δ_imp): 0

S2.2.1. Parameters for SDSS in the data application

Table 11: Neural network parameters for linear SDSS in our data application. Here hidden dimensions correspond to the number of units per hidden layer.
Initial Learning Rate (lr0): 0.007
N_epoch: 120
Train/Validation Split: 80/20
Network Architecture: 3-layer network
Activation Function: None
Hidden Dimensions (Stage 1): 40
Hidden Dimensions (Stage 2): 40

Table 12: Neural network parameters for non-linear SDSS in our data application. Here
hidden dimension corresponds to the number of units per hidden layer.
Initial Learning Rate (lr0): 0.007
N_epoch: 120
Train/Validation Split: 80/20
Network Architecture: 3-layer network
Activation Function: ELU
Hidden Dimensions (Stage 1): 40
Hidden Dimensions (Stage 2): 40
Dropout Rate: 0.4

S2.2.2. Parameters for Q-learning and Q-function estimation in the data application

Table 13: Neural network parameters for non-linear Q-learning in our data application. Here hidden dimension corresponds to the number of units per hidden layer.
Learning Rate: 0.07
N_epoch: 150
Network Architecture: 1-layer network
Activation Function: ELU
Hidden Dimensions (Stage 1): 4
Hidden Dimensions (Stage 2): 4
Dropout Rate: 0

Table 14: Neural network parameters for estimating $\widehat Q^d_1$ and $\widehat Q_2$ during the computation of the AIPW value function estimator in our data application. Here hidden dimension corresponds to the number of units per hidden layer.
Learning Rate: 0.07
Train/Validation Split: 70/30
N_epoch: 150
Network Architecture: 3-layer network
Activation Function: ELU
Hidden Dimension (Stage 1): 40
Hidden Dimension (Stage 2): 40
Dropout Rate: 0.4

S3. New notation for proofs

For two integers $m$ and $n$, $m \bmod n$ denotes the remainder when $m$ is divided by $n$. For $m\le n$, $[m:n]$ denotes the set $\{m, m+1, \ldots, n\}$. We denote by $e^{(i)}_k$ the unit vector of length $k$ with one in the $i$-th position and zero everywhere else. When $k$ is clear from the context, we will denote $e^{(i)}_k$ simply by $e^{(i)}$. $0_{m\times n}$ denotes the matrix with $m$ rows and $n$ columns whose elements are all zero. For two sets $A$ and $B$ in $\mathbb{R}^k$, we define the Minkowski sum $A+B$ as the set $\{x+y : x\in A,\, y\in B\}$. For two real numbers $x$ and $y$, we denote $\min(x,y)$ by $x\wedge y$ and $\max(x,y)$ by $x\vee y$. The order statistics of $x$ will be denoted by $(x_{[1]}, x_{[2]}, \ldots, x_{[k]})$. We also introduce some shorthand.
For any $t\in[T]$, we will denote the vector $(Q^*_t(H_t,i))_{i\in[k_t]}$ by $Q^*_t(H_t)$. As mentioned earlier, the $d^*$ defined by (2.3) need not be unique. For ease of representation, unless otherwise mentioned, we will stick to a particular version of $d^*$:
$$d^*_t(H_t)=\max\Big(\mathop{\mathrm{argmax}}_{a_t\in[k_t]} Q^*_t(H_t,a_t)\Big) \quad \text{for all } t\in[T]. \qquad (S3.1)$$

S4. Proofs for the convex surrogate examples in Section 3

In the proofs for this section, $\tilde\phi$ will be used to denote an arbitrary function, whose definition can change from proof to proof.

S4.0.1. Proof of Result 1

Proof of Result 1. Note that in this case $V_\psi(f)$ equals
$$E\bigg[-\sum_{i\in[k_1],\,j\in[k_2]}\exp\Big(-\big(f_{1A_1}(H_1)-f_{1i}(H_1)+f_{2A_2}(H_2)-f_{2j}(H_2)\big)\Big)\frac{Y_1+Y_2}{\pi_1(A_1\mid H_1)\pi_2(A_2\mid H_2)}\bigg]$$
$$=-E\bigg[\sum_{i\in[k_1],\,j\in[k_2]} e^{-\big(f_{1A_1}(H_1)-f_{1i}(H_1)+f_{2A_2}(H_2)-f_{2j}(H_2)\big)}\,\frac{E[Y_1+Y_2\mid H_2,A_2]}{\pi_1(A_1\mid H_1)\pi_2(A_2\mid H_2)}\bigg]$$
$$=E\bigg[-\sum_{a_2\in[k_2]}\sum_{i\in[k_1],\,j\in[k_2]}\exp\Big(-\big(f_{1A_1}(H_1)-f_{1i}(H_1)+f_{2a_2}(H_2)-f_{2j}(H_2)\big)\Big)\frac{E[Y_1+Y_2\mid H_2,A_2=a_2]}{\pi_1(A_1\mid H_1)}\bigg].$$
Therefore, for fixed $H_2$, if it exists, $\tilde f_2(H_2)$ is in
$$\mathop{\mathrm{argmax}}_{y\in\mathbb{R}^{k_2}}\sum_{a_2\in[k_2]}\sum_{i\in[k_1],\,j\in[k_2]} -e^{-f_{1A_1}(H_1)+f_{1i}(H_1)-y_{a_2}+y_j}\,\frac{E[Y_1+Y_2\mid H_2,A_2=a_2]}{\pi_1(A_1\mid H_1)},$$
provided the above set is non-empty. However,
$$\sup_{y\in\mathbb{R}^{k_2}}\sum_{a_2\in[k_2]}\sum_{i\in[k_1],\,j\in[k_2]} -e^{-f_{1A_1}(H_1)+f_{1i}(H_1)-y_{a_2}+y_j}\,\frac{E[Y_1+Y_2\mid H_2,A_2=a_2]}{\pi_1(A_1\mid H_1)}$$
$$=-e^{-f_{1A_1}(H_1)}\inf_{y\in\mathbb{R}^{k_2}}\sum_{a_2\in[k_2]}\frac{E[Y_1+Y_2\mid H_2,A_2=a_2]}{\pi_1(A_1\mid H_1)}\sum_{i\in[k_1],\,j\in[k_2]} e^{f_{1i}(H_1)-y_{a_2}+y_j},$$
which equals
$$-e^{-f_{1A_1}(H_1)}\inf_{y\in\mathbb{R}^{k_2}}\sum_{a_2\in[k_2]} e^{-y_{a_2}}\,\frac{E[Y_1+Y_2\mid H_2,A_2=a_2]}{\pi_1(A_1\mid H_1)}\sum_{j\in[k_2]} e^{y_j}\sum_{i\in[k_1]} e^{f_{1i}(H_1)}$$
$$=-\sum_{i\in[k_1]} e^{f_{1i}(H_1)-f_{1A_1}(H_1)}\inf_{y\in\mathbb{R}^{k_2}}\sum_{a_2\in[k_2]}\underbrace{\frac{E[Y_1+Y_2\mid H_2,A_2=a_2]}{\pi_1(A_1\mid H_1)}}_{c_{a_2}(H_2)}\sum_{j\in[k_2]} e^{y_j-y_{a_2}}.$$
Lemma S4.1 implies that $\tilde f_2(H_2)$ uniquely exists and $\mathrm{argmax}(\tilde f_2(H_2))=\mathop{\mathrm{argmax}}_{a_2\in[k_2]} E[Y_1+Y_2\mid H_2,$
$A_2=a_2]$. Here Lemma S4.1 applies because $E[Y_1+Y_2\mid H_2,A_2=a_2]>0$ by Assumption V, which implies that the $c_{a_2}(H_2)$'s are positive real numbers. Since $Q^*_2(H_2,A_2)=E[Y_1+Y_2\mid H_2,A_2]$ and $\tilde d_2=\mathrm{pred}(\tilde f_2)$, it follows that $\tilde d_2(H_2)\in\mathop{\mathrm{argmax}}_{a_2\in[k_2]}Q^*_2(H_2,a_2)$. Lemma S4.1 also implies that $\sup_{f_2}V_\psi(f)$ equals
$$-E\bigg[\sum_{i\in[k_1]} e^{f_{1i}(H_1)-f_{1A_1}(H_1)}\Big(\sum_{j\in[k_2]}\sqrt{c_j(H_2)}\Big)^2\bigg] = -E\bigg[\frac{\sum_{i\in[k_1]} e^{f_{1i}(H_1)}}{e^{f_{1A_1}(H_1)}}\,E\bigg[\Big(\sum_{j\in[k_2]}\sqrt{c_j(H_2)}\Big)^2\,\bigg|\,H_1,A_1\bigg]\bigg]$$
$$= -E\bigg[\sum_{a_1\in[k_1]} E\bigg[\Big(\sum_{j\in[k_2]}\sqrt{E[Y_1+Y_2\mid H_2,A_2=j]}\Big)^2\,\bigg|\,H_1,A_1=a_1\bigg]\frac{\sum_{i\in[k_1]} e^{f_{1i}(H_1)}}{e^{f_{1a_1}(H_1)}}\bigg].$$
Therefore, for fixed $H_1$, $\tilde f_1(H_1)$ exists if
$$\mathop{\mathrm{argmin}}_{x\in\mathbb{R}^{k_1}}\sum_{a_1\in[k_1]} E\bigg[\Big(\sum_{j\in[k_2]}\sqrt{E[Y_1+Y_2\mid H_2,A_2=j]}\Big)^2\,\bigg|\,H_1,A_1=a_1\bigg]\sum_{i\in[k_1]} e^{x_i-x_{a_1}}$$
is non-empty, and in that case any member of the above set is a candidate for $\tilde f_1(H_1)$. Note that for any $H_1$ and $A_1=a_1$,
$$\Big(\sum_{j\in[k_2]}\sqrt{E[Y_1+Y_2\mid H_2,A_2=j]}\Big)^2>0$$
because $E[Y_1+Y_2\mid H_2,A_2]>0$ due to Assumption V. Applying Lemma S4.1, we can show that the above set is non-empty and
$$\mathrm{argmax}(\tilde f_1(H_1))=\mathop{\mathrm{argmax}}_{a_1\in[k_1]} E\bigg[\Big(\sum_{j\in[k_2]}\sqrt{E[Y_1+Y_2\mid H_2,A_2=j]}\Big)^2\,\bigg|\,H_1,A_1=a_1\bigg].$$
The proof follows noting that $E[Y_1+Y_2\mid H_2,A_2]=Y_1+Q^*_2(H_2,A_2)$ and $\tilde d_1(H_1)=\mathrm{pred}(\tilde f_1(H_1))$.

Lemma S4.1. Suppose
$$\tilde\phi(y)=\sum_{i\in[k]} c_i\sum_{j\in[k]} e^{y_j-y_i} \quad \text{for all } y\in\mathbb{R}^k,$$
where $k\in\mathbb{N}$ and the $c_i$'s are positive real numbers. Then $\inf_{y\in\mathbb{R}^k}\tilde\phi(y)=\big(\sum_{i\in[k]}\sqrt{c_i}\big)^2$ and $\mathrm{argmin}_y\,\tilde\phi(y)$ is a singleton set. Moreover, $y^*=\mathrm{argmin}_y\,\tilde\phi(y)$ satisfies $\mathrm{argmax}(y^*)=\mathrm{argmax}(c)$.

Proof of Lemma S4.1. $\inf_{y\in\mathbb{R}^k}\tilde\phi(y)$ is equivalent to the infimum of
$$g_1(x)=\sum_{i\in[k]} c_i\,\frac{\sum_{j\in[k]} x_j}{x_i},$$
where $x_i=\exp(y_i)>0$ for each $i\in[k]$. However, the above is the infimum of
$$g_2(u)=\sum_{i\in[k]} c_i/u_i, \qquad (S4.1)$$
where $u_i=x_i/\sum_{j\in[k]} x_j$ satisfies the constraints $\sum_{i=1}^k u_i=1$ and $u_i\in(0,1)$. Therefore, we need to minimize the function $u\mapsto\sum_{i\in[k]} c_i/u_i$ over the simplex $\{u\in\mathbb{R}^k : u^\top 1_k=1,\ u_j\in(0,1)\ \text{for all } j\in[k]\}$. By the Lagrange multiplier method, we can show that any critical point of the minimization program
$$\min_{u\in\mathbb{R}^k}\ \sum_{i\in[k]} c_i/u_i \quad \text{s.t.}\ \sum_{i\in[k]} u_i=1,\ u_i\in(0,1)\ \text{for all } i\in[k], \qquad (S4.2)$$
satisfies $u_i=\sqrt{c_i/c_{sum}}$, where $c_{sum}\in\mathbb{R}$ is such that $\sum_{i\in[k]} u_i=1$ holds. This yields $c_{sum}=\big(\sum_{i\in[k]}\sqrt{c_i}\big)^2$, leading to $u_i=\sqrt{c_i}\big/\sum_{i\in[k]}\sqrt{c_i}$.
Since $c_i>0$ for all $i\in[k]$, it follows that $u_i\in(0,1)$ for $i\in[k]$. Therefore, $u^*_i=\sqrt{c_i}/\sum_{i\in[k]}\sqrt{c_i}$ is also the unique solution to the optimization problem (S4.2). The minimum of (S4.2) is therefore $(\sum_{i\in[k]}\sqrt{c_i})^2$. The $y^*_i$ corresponding to $u_i=\sqrt{c_i}/\sum_{i\in[k]}\sqrt{c_i}$ satisfies $y^*_i=\log(c_i)/2$. Thus $y^*$ is the unique minimizer of $\tilde\phi(y)$. The proof is complete noticing $\operatorname{argmax}(y^*)=\operatorname{argmax}(c)$.

S4.0.2. Proof of Result 2

Proof of Result 2. Note that $\psi(\mathbf{x},\mathbf{y};a_1,a_2)$ equals
\[
-\Big[e^{-y_{a_2}}\Big(e^{-x_{a_1}}+\sum_{i\in[k_1]:i\neq a_1}e^{x_i}\Big)+\sum_{j\in[k_2]:j\neq a_2}e^{y_j}\Big(e^{-x_{a_1}}+\sum_{i\in[k_1]:i\neq a_1}e^{x_i}\Big)\Big]
=-\Big(e^{-x_{a_1}}+\sum_{i\in[k_1]:i\neq a_1}e^{x_i}\Big)\Big(e^{-y_{a_2}}+\sum_{j\in[k_2]:j\neq a_2}e^{y_j}\Big).
\]
When $\psi$ is the loss in (3.11), $V_\psi(f)$ equals
\[
\mathbb{E}\Bigg[-\Big(e^{-f_{1A_1}(H_1)}+\sum_{i\in[k_1]:i\neq A_1}e^{f_{1i}(H_1)}\Big)\Big(e^{-f_{2A_2}(H_2)}+\sum_{j\in[k_2]:j\neq A_2}e^{f_{2j}(H_2)}\Big)\frac{Y_1+Y_2}{\pi_1(A_1|H_1)\pi_2(A_2|H_2)}\Bigg]
\]
\[
=\mathbb{E}\Bigg[-\Big(e^{-f_{1A_1}(H_1)}+\sum_{i\in[k_1]:i\neq A_1}e^{f_{1i}(H_1)}\Big)\times\sum_{a_2\in[k_2]}\Big(e^{-f_{2a_2}(H_2)}+\sum_{j\in[k_2]:j\neq a_2}e^{f_{2j}(H_2)}\Big)\frac{\mathbb{E}[Y_1+Y_2\,|\,H_2,A_2=a_2]}{\pi_1(A_1|H_1)}\Bigg].
\]
Note that
\[
\sup_{f_2}V_\psi(f_1,f_2)=\mathbb{E}\Bigg[\Big(e^{-f_{1A_1}(H_1)}+\sum_{i\in[k_1]:i\neq A_1}e^{f_{1i}(H_1)}\Big)\times\sup_{\mathbf{y}\in\mathbb{R}^{k_2}}\Bigg(-\sum_{a_2\in[k_2]}\Big(e^{-y_{a_2}}+\sum_{j\in[k_2]:j\neq a_2}e^{y_j}\Big)\frac{\mathbb{E}[Y_1+Y_2\,|\,h_2,a_2]}{\pi_1(a_1|h_1)}\Bigg)\Bigg].\tag{S4.3}
\]
Therefore, for fixed $h_2=(h_1,a_1,y_1,o_2)\in\mathcal{H}_2$, $\tilde f_2(h_2)$ equals
\[
\operatorname{argmax}_{\mathbf{y}\in\mathbb{R}^{k_2}}\;-\sum_{a_2\in[k_2]}\Big(e^{-y_{a_2}}+\sum_{j\in[k_2]:j\neq a_2}e^{y_j}\Big)\frac{\mathbb{E}[Y_1+Y_2\,|\,h_2,a_2]}{\pi_1(a_1|h_1)}
\]
\[
=\operatorname{argmin}_{\mathbf{y}\in\mathbb{R}^{k_2}}\sum_{a_2\in[k_2]}\Big(e^{-y_{a_2}}\,\mathbb{E}[Y_1+Y_2\,|\,H_2=h_2,A_2=a_2]+e^{y_{a_2}}\sum_{j\in[k_2]:j\neq a_2}\mathbb{E}[Y_1+Y_2\,|\,H_2=h_2,A_2=j]\Big)
\]
\[
=\operatorname{argmin}_{\mathbf{y}\in\mathbb{R}^{k_2}}\sum_{a_2\in[k_2]}\Big(e^{-y_{a_2}}p_{a_2}(h_2)+e^{y_{a_2}}(1-p_{a_2}(h_2))\Big),
\]
where
\[
p_j(h_2)=\frac{\mathbb{E}[Y_1+Y_2\,|\,H_2=h_2,A_2=j]}{\sum_{a_2\in[k_2]}\mathbb{E}[Y_1+Y_2\,|\,H_2=h_2,A_2=a_2]}\quad\text{and}\quad p(h_2)=(p_j(h_2))_{j\in[k_2]}
\]
for all $h_2\in\mathcal{H}_2$. For the rest of the proof, we will denote terms such as $\mathbb{E}[Y_1+Y_2\,|\,H_2=h_2,A_2=a_2]$ by $\mathbb{E}[Y_1+Y_2\,|\,h_2,a_2]$ for the sake of simplicity. Due to Assumption V, $\mathbb{E}[Y_1+Y_2\,|\,H_2,A_2]>0$ for all $H_2$ and $A_2$. Therefore,
pj(H2)∈(0,1) for each j∈[k2]. Hence, the vector p(h2)∈ Sk2−1. Fact S4.1 implies that argmax( ˜f2(h2))⊂argmax( p(h2)) = argmaxa2∈[k2]E[Y1+Y2|h2, a2]. Therefore, ˜d2(H2) = pred( ˜f2(H2))∈argmaxa2∈[k2]E[Y1+Y2|H2, a2]. Also, Fact S4.1 implies that sup y∈Rk2 −X a2∈[k2] e−ya2+X j∈[k2]:j̸=a2eyjE[Y1+Y2|h2, a2] π1(a1|h1) = 2P j∈[k2]E[Y1+Y2|h2, j] π1(a1|h1)X j∈[k2]q p(H2)j(1−p(H2)j) = 2P a2∈[k2]r E[Y1+Y2|h2, a2]P j∈[k2]:j̸=a2E[Y1+Y2|h2, j] π1(a1|h1). Therefore, (S4.3) implies that supf2Vψ(f) equals 2E − e−f1A1(H1)+X i∈[k1]:i̸=A1ef1i(H1) ×X a2∈[k2]r E[Y1+Y2|H2, A2=a2]P j∈[k2]:j̸=a2E[Y1+Y2|H2, j] π1(A1|H1) . Denoting conditional expectation of a random variable Uwith respect to H1andA1=a1asE[U|H1, a1], we obtain that −supf2Vψ(f) equals 2EX a1∈[k1] e−f1a1(H1)+X i∈[k1]:i̸=a1ef1i(H1) ×X a2∈[k2]Es E[Y1+Y2|H2, A2=a2]X j∈[k2]:j̸=a2E[Y1+Y2|H2, j] H1, a1 = 2EX a1∈[k1]e−f1a1(H1)X a2∈[k2]Es E[Y1+Y2|H2, A2=a2]X j̸=a2E[Y1+Y2|H2, j] H1, a1 + 2EX a1∈[k1]ef1a1(H1)X i̸=a1X a2∈[k2]Es E[Y1+Y2|H2, A2=a2]X j̸=a2E[Y1+Y2|H2, j] H1, i , where for any random variable Xandi∈[k1], we use the shorthand E[X|H1, i] forE[X|H1, A1=i]. We overload notation and for all h1∈ H 1anda1∈[k1], we define pa1(h1) =P a2∈[k2]Er E[Y1+Y2|H2, A2=a2]P j∈[k2]:j̸=a2E[Y1+Y2|H2, j] H1, a1 P i∈[k1]P a2∈[k2]Er E[Y1+Y2|H2, A2=a2]P j∈[k2]:j̸=a2E[Y1+Y2|H2, j] H1, i andp(h1) = (pi(h1))i∈[k1]. That p(h1)∈ Sk1−1follows since E[Y1+Y2|H2, A2]>0 for all H2andA2by our assumption. Therefore, for fixed h1∈ H 1, we observe that ˜f1(h1) equals argmin x∈Rk1X a1∈[k1] e−xa1pa1(h1) +exa1(1−pa1(h1)) . Another application of Fact S4.1 implies that ˜f1(h1) uniquely exists in Rk1and argmax( ˜f1(h1))⊂argmax( p(h1)) = argmax i∈[k1]X a2∈[k2]Es E[Y1+Y2|H2, A2=a2]X j̸=a2E[Y1+Y2|H2, j] H1, i . The proof follows noting ˜d1(H1) = pred( ˜f1(H1)). Fact S4.1. For any p∈ Sk−1optimization problem min y∈RkX j∈[k] e−yjpj+ (1−pj)eyj (S4.4) has a unique minimizer y∗satisfying argmax( y∗)⊂argmax( p)and the minimum value is 2P j∈[k]p pj(1−pj). 
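Before turning to the proof, the closed form in Fact S4.1 can be checked numerically: at the claimed minimizer $y^*_j=\tfrac12\log(p_j/(1-p_j))$ the objective in (S4.4) should equal $2\sum_j\sqrt{p_j(1-p_j)}$, random perturbations should do no better, and the largest coordinate of $y^*$ should pick out the largest $p_j$. A minimal sketch (the probability vector is an arbitrary choice):

```python
import math
import random

def objective(y, p):
    # sum_j [ exp(-y_j) p_j + (1 - p_j) exp(y_j) ]  -- the objective in (S4.4)
    return sum(math.exp(-yj) * pj + (1 - pj) * math.exp(yj) for yj, pj in zip(y, p))

p = [0.5, 0.3, 0.2]  # arbitrary point in the simplex
y_star = [0.5 * math.log(pj / (1 - pj)) for pj in p]     # claimed minimizer
min_val = 2 * sum(math.sqrt(pj * (1 - pj)) for pj in p)  # claimed minimum value

assert abs(objective(y_star, p) - min_val) < 1e-12

# random perturbations never beat the claimed minimizer (the objective is convex)
rng = random.Random(0)
for _ in range(1000):
    y = [yj + rng.uniform(-2, 2) for yj in y_star]
    assert objective(y, p) >= min_val - 1e-12

# the largest coordinate of y* corresponds to the largest p_j, as the fact asserts
assert y_star.index(max(y_star)) == p.index(max(p))
```

This is a sanity check only; the proof below establishes the claim exactly.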
S4 PROOFS FOR THE CONVEX SURROGATE EXAMPLES IN SECTION 3/ 44 Proof of Fact S4.1. The given optimization problem is equivalent to min x∈Rk >0X j∈[k] pj/xj+ (1−pj)xj where we used the variable transformation xj=eyjforj∈[k]. The function ˜ϕ(x) =P j∈[k] pj/xj+ (1−pj)xj is convex on Rk >0. Hence, any critical point x∗of˜ϕis the unique global minimum over Rk >0. Since ˜ϕis differentiable on Rk >0, it follows thatx∗satisfies ∇˜ϕ(x∗) = 0. Therefore, x∗ j=p pj/(1−pj) for all j∈[k], which implies ˜ϕ(x∗) = 2P j∈[k]p pj(1−pj). Note that the optimal solution of (S4.4) satisfies y∗ j= log x∗ j . Therefore, y∗is unique and y∗ j=1 2log(pj/(1−pj)). Since ϕ(x) = exp( −x) is convex, bounded below, differentiable, and decreasing, Theorem 9 of Zhang (2004) implies that argmax( y∗)⊂argmax( p). S4.0.3. Proof of Result 3 Proof of Result 3. Vψ(f) equals E(Y1+Y2)ϕ(1)(f1(H1);A1) π1(A1|H1)π2(A2|H2)+(Y1+Y2)ϕ(2)(f1(H1);A2) π1(A1|H1)π2(A2|H2) Since the propensity scores are positive by Assumption I, E ϕ(1)(f1(H1);A1)Y1+Y2 π1(A1|H1)π2(A2|H2) =EX a1∈[k1]EY1+Y2 π2(A2|H2) H1=h1, A1=a1 ϕ(1)(f1(H1);a1) Thus for any h1∈ H 1, a version of ˜f1(h1) exists if argmax x∈Rk1X a1∈[k1]EY1+Y2 π2(A2|H2) H1=h1, A1=a1 ϕ(1)(x;a1) is non-empty. Let us define p(1) i=EY1+Y2 π2(A2|H2) H1=h1, A1=a1 , which is positive because of Assumptions V. In that case, ˜f1(H1)∈argmax x∈Rk1X a1∈[k1]p(1) iϕ(1)(x;a1). However, since p(1)∈Rk1 >0, Lemma S4.2 below implies that if x∗∈argmax x∈Rk1X a1∈[k1]p(1) iϕ(1)(x;a1), then pred( x∗)∈argmax( p(1)). Therefore, we have shown that whenever ˜f1(h1) exists, pred( ˜f1(h1))∈argmax( p(1)) = argmax a1∈[k1]E(Y1+Y2) π2(A2|H2) H1=h1, A1=a1 . Similarly, since ϕ(2)is also single-stage Fisher consistent, taking k1to be k2in Lemma S4.2, we
can show that for a fixed h2= (h1, a1, y1, o2), whenever ˜f2(h2) exists, pred( ˜f2(h2))∈argmax a2∈[k2]E(Y1+Y2) π1(A1|H1) H2=h2, A2=a2 = argmax a2∈[k2]1 π1(a1|h1) y1+E[Y2|H2=h2, A2=a2] = argmax a2∈[k2]E[Y2|H2=h2, A2=a2], where we used the fact that π1(a1|h1)>0 (by Assumption I). The proof follows noting ˜dt(Ht) = pred( ft(Ht)) for t= 1,2. S4 PROOFS FOR THE CONVEX SURROGATE EXAMPLES IN SECTION 3/ 45 Lemma S4.2. Suppose ϕ:Rk17→Ris a Fisher consistent surrogate for T= 1. Then if argmaxx∈Rk1Ψ(x;p)is non-empty for some p∈Rk1 ≥0with Ψas defined in (4.2), then for any x∗∈argmaxx∈Rk1Ψ(x;p), it follows that ppred(x∗)= max( p). Proof. Letp∈Rk1 >0. Consider the distribution P∈ P0so that H1=∅andE[Y1|A1=i] =pifor all i∈[k1]. For such P, f1:H17→Rk1is just a vector in Rk1. Since Palso satisfies Assumption I-III, for any such f1, it holds that V(f1) =E[Y11[pred( f1) =A1]/π1(A1|H1)] =ppred( f1), and Vϕ(f1) =E[Y1ϕ(f1, A1)/π1(A1|H1)] = Ψ( f1;p). Let˜f1be the maximizer of Vϕ(f1) over f1∈ F 1. Therefore, if argmaxx∈Rk1Ψ(x;p) is non-empty, then ˜f1exists. In that case, any x∗∈argmaxx∈Rk1Ψ(f1;p) is a candidate of ˜f1. Since ˜f1exists and ϕis Fisher consistent for T= 1, Definition 2.1 (definition of Fisher consistency) implies V(˜f1) = supf1∈F1V(f1). Therefore, it follows that ppred(x∗)= max( p) for any x∗∈argmaxx∈Rk1Ψ(f1;p). S4.0.4. Proofs for Example 6 First, we derive a formula for ˜f1(h1) under the general two-stage setup. It will be used in Section S4.0.4, where we derive a simpler form of ˜f1(h1) under the toy setup. In this case, for all x∈Rk1,y∈Rk2,a1∈[k1], and a2∈[k2], ψ(x,y;a1, a2) =X i∈[k1]:i̸=A1X j∈[k2]:j̸=a2min(−xi−1,−yj−1,0) =−X i∈[k1]:i̸=A1X j∈[k2]:j̸=a2max(1 + xi,1 +yj,0). 
(S4.5) For any set C ⊂ F 1× F 2, sup(f1,f2)∈CVψ(f1, f2) takes the form sup (f1,f2)∈C−E (Y1+Y2)P i∈[k1]:i̸=A1P j∈[k2]:j̸=A2max(1 + f1i(H1),1 +f2j(H2),0) π1(A1|H1)π2(A2|H2) = sup (f1,f2)∈C−E (Y1+Y2)P i∈[k1]:i̸=A1P j∈[k2]:j̸=A2max(1 + f1i(H1),1 +f2j(H2),0) π1(A1|H1)π2(A2|H2) =−inf (f1,f2)∈CE (Y1+Y2)P i∈[k1]:i̸=A1P j∈[k2]:j̸=A2max(1 + f1i(H1),1 +f2j(H2),0) π1(A1|H1)π2(A2|H2) . Therefore, it suffices to consider the minimization problem. Let us denote the scores corresponding to f1asf1j’s for j∈[k1] so thatf1= (f11, . . . , f 1k1). Similarly, let us denote the scores corresponding to f2asf2j’s for j∈[k2] so that f2= (f21, . . . , f 2k2). Let us denote R(f1, f2) =E (Y1+Y2)P i∈[k1]:i̸=A1P j∈[k2]:j̸=A2max(1 + f1i(H1),1 +f2j(H2),0) π1(A1|H1)π2(A2|H2) , which equals EE[Y1+Y2|H2, A2] π(A1|H1)P i∈[k1]:i̸=A1P j∈[k2]:j̸=A2max(1 + f1i(H1),1 +f2j(H2),0) π2(A2|H2) =EX a2∈[k2]E[Y1+Y2|H2, A2=a2] π(A1|H1)X i∈[k1]:i̸=A1X j∈[k2]:j̸=a2max(1 + f1i(H1),1 +f2j(H2),0) =EX j∈[k2]X i∈[k1]:i̸=A1max(1 + f1i(H1),1 +f2j(H2),0)X a2∈[k2]:a2̸=jE[Y1+Y2|H2, A2=a2] π(A1|H1) . Then inf f2∈F2:P j∈[k2]f2j=0R(f1, f2) equals =E inf y∈Rk2:Pk2 j=1yj=0X j∈[k2]X i∈[k1]:i̸=A1max(1 + f1i(H1),1 +yj,0) ×X a2∈[k2]:a2̸=jE[Y1+Y2|H2, A2=a2] π(A1|H1) S4 PROOFS FOR THE CONVEX SURROGATE EXAMPLES IN SECTION 3/ 46 Forh2∈ H 2, let us denote pa2(h2) =E[Y1+Y2|h2, A2=a2]P j∈[k2]E[Y1+Y2|h2, A2=j]for all a2∈[k2] and p(h2) = (pa2(h2))a2∈[k2], which are well-defined since E[Y1+Y2|h2, A2=j]>0 for all h2andj∈[k2] due to Assumption V. Then inf y∈Rk2:Pk2 j=1yj=0X j∈[k2]X i∈[k1]:i̸=A1max(1 + f1i(H1),1 +yj,0) ×X a2∈[k2]:a2̸=jE[Y1+Y2|H2, A2=a2] =X j∈[k2]E[Y1+Y2|h2, A2=j] × inf y∈Rk2:Pk2 j=1yj=0X j∈[k2]X i∈[k1]:i̸=A1max(1 + f1i(H1),1 +yj,0) 1−pj(H2) . For fixed H1, X j∈[k2] 1−pj(H2)X i∈[k1]:i̸=A1max(1 + f1i(H1),1 +yj,0) =k1X j∈[k2] 1−pj(H2) +X j∈[k2] 1−pj(H2)X i∈[k1]:i̸=A1max( f1i(H1),yj,−1) =k1X j∈[k2] 1−pj(H2) +X j∈[k2] 1−pj(H2)X i∈[k1−1]max(vi(H1;A1),yj), where v(H1;A1) = (max( f1i(H1),−1))i∈[k1]:i̸=A1is a vector of length 2. When
k1= 3 and k2= 2, Lemma S4.3 implies that inf y∈Rk2:P j∈[k2]yj=0X j∈[k2] 1−pj(H2)X i∈[k1−1]max(vi(H1;A1),yj) =( v1(H1;A1) +v2(H1;A1) if min( v(H1;A1))≥0 κ(v(H1;A1),p(H2)) otherwise = P i∈[k1]:i̸=A1f1i(H1) if f1i(H1)≥0 for i̸=A1 κ(v(H1;A1),p(H2)) otherwise, where the last step follows because min( v(H1;A1))≥0 if and only if f1i(H1)>0 for all i̸=A1, in which case, max( f1i(H1),−1) = f1i(H1). SinceP j∈[k2](1−pj(H2)) =k2−1, taking the infimum of X j∈[k2]X i∈[k1]:i̸=A1max(1 + f1i(H1),1 +yj,0)X a2∈[k2]:a2̸=jE[Y1+Y2|H2, A2=a2] over{y∈Rk2:Pk2 j=1yj= 0}yields k1(k2−1)X j∈[k2]E[Y1+Y2|h2, A2=j] +X j∈[k2]E[Y1+Y2|h2, A2=j]X i∈[k1]:i̸=A1f1i(H1)1h min i∈[k1]:i̸=A1f1i(H1)≥0i +X j∈[k2]E[Y1+Y2|h2, A2=j] κ(v(H1;A1),p(H2))1h min i∈[k1]:i̸=A1f1i(H1)<0i . Therefore, maximizing inf f2R(f1, f2) with respect to f1:P i∈[k1]f1i= 0 is equivalent to minimizing E P j∈[k2]E[Y1+Y2|H2, A2=j]P i∈[k1]:i̸=A1max( f1i(H1),−1)1h min i∈[k1]:i̸=A1f1i(H1)≥0i π1(A1|H1) +E P j∈[k2]E[Y1+Y2|H2, A2=j]κ(v(H1;A1),p(H2))1h min i∈[k1]:i̸=A1f1i(H1)<0i π1(A1|H1) S4 PROOFS FOR THE CONVEX SURROGATE EXAMPLES IN SECTION 3/ 47 with respect to f1:P i∈[k1]f1i= 0 . However, the last display equals E π1(A1|H1)−1 P j∈[k2]Eh E[Y1+Y2|H2, A2=j]|H1, A1i ×P i∈[k1]:i̸=A1f1i(H1)1h min i∈[k1]:i̸=A1f1i(H1)≥0i +κ(v(H1;A1),p(H2))1h min i∈[k1]:i̸=A1f1i(H1)<0i =E X a1∈[k1]X j∈[k2]Eh E[Y1+Y2|H2, A2=j]|H1, a1iX i̸=A1f1i(H1)1h min i̸=A1f1i(H1)≥0i +EX a1∈[k1]X j∈[k2]Eh E[Y1+Y2|H2, A2=j]|H1, a1i ×κ(v(H1;A1),p(H2))1h min i∈[k1]:i̸=A1f1i(H1)<0i . Let us denote α(H1, a1) =X j∈[k2]Eh E[Y1+Y2|H2, A2=j]|H1, a1i . Then ˜f1(H1) is the maximizer of X a1∈[k1]α(H1, a1)X i∈[k1]:i̸=A1xi1h min i∈[k1]:i̸=A1xi≥0i +κ(v(a1),p(H2))1h min i∈[k1]:i̸=A1xi<0i (S4.6) subject toP i∈[k1]xi= 0, where v(a1)∈Rk1−1satisfies v(a1) = (max( xi,−1))i∈[k1]:i̸=a1. Toy setup of the sum-zero loss example Under the toy setup of Example 6, H1=∅. Therefore, f1(H1)≡f1∈R3and α(H1, a1)≡α(a1) =X j∈[k2]E[Y2|A1=a1, A2=j]. 
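In the toy setup, $\alpha(a_1)$ is simply a row sum of the table $p_{ij}=\mathbb{E}[Y_2\,|\,A_1=i,A_2=j]$, and the optimal rules characterized later in this section reduce to entrywise maxima of that table. A small sketch with a hypothetical $3\times 2$ reward table (the numbers are illustrative, not those of Table 1):

```python
# hypothetical table p[i][j] = E[Y2 | A1 = i+1, A2 = j+1]; values are illustrative
p = [
    [1.0, 2.0],   # A1 = 1
    [3.0, 1.5],   # A1 = 2
    [2.5, 2.5],   # A1 = 3
]

# alpha(a1) = sum_j E[Y2 | A1 = a1, A2 = j]  (row sums)
alpha = [sum(row) for row in p]

# d2*(i) maximizes p[i][j] over j; Q1*(i) = max_j p[i][j]; d1* maximizes Q1*(i)
d2_star = [max(range(2), key=lambda j: row[j]) + 1 for row in p]
q1_star = [max(row) for row in p]
d1_star = max(range(3), key=lambda i: q1_star[i]) + 1

print(alpha)    # [3.0, 4.5, 5.0]
print(d1_star)  # 2, since max_j p[2][j] = 3.0 is the largest entry of the table
```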
Moreover, since H2≡A1,p(H2) =p(A1), and pj(A1) =E[Y2|A1, A2=j]P j∈[k2]E[Y2|A1, A2=j]forj∈[k2]. In our example, H1=∅, which implies f1(H1) =f1. Also, k1= 3. In this case, (S4.6) reduces to the following optimization problem: min x∈R3:X i∈[k1]xi= 0X a1∈[k1]α(a1) P j∈[k1]:j̸=a1xj1h min i∈[k1]:i̸=A1xi≥0i +κ(v(a1),p(a1))1h min i∈[k1]:i̸=a1xi<0i s.t. x1+x2+x3= 0, v(a1) = (max( xi,−1))i∈[3]:i̸=a1(S4.7) Any solution to solution to (S4.7) is a candidate of ˜f1. Let us define ψ(x1,x2) =X a1∈[k1]α(a1) P j∈[k1]:j̸=a1xj1h min i∈[k1]:i̸=A1xi≥0i +κ(v(a1),p(a1))1h min i∈[k1]:i̸=a1xi<0i (S4.8) where the xused in (S4.8) is x= (x1,x2,−(x1+x2)). Note that ˜fis a solution to (S4.7) if and only if ( ˜f1,˜f2) is a solution to min (x1,x2)∈R2ψ(x1,x2)(S4.9) and ˜f3=−(˜f1+˜f2). The optimization program (S4.9) is an unconstrained optimization problem with 2 variables. We obtained the surface plot of ψusing the software R in the three settings mentioned in Table 1 to find out the optimal solution. Figure 9 gives the surface plots for settings 2 and 3. The plot for setting 1 is skipped because it is similar to that of Setting 2. Finally, in this case, Q∗ 2(H2, A2) =E[Y1+Y2|H2, A2] =E[Y2|A1, A2] and Q∗ 1(H1, A1)≡Q∗ 1(A1) =Eh max j∈[k2]Q∗ 2(H2, j)|A1i =Eh max j∈[k2]E[Y2|A1, A2=j]|A1i , S4 PROOFS FOR THE CONVEX SURROGATE EXAMPLES IN SECTION 3/ 48 which equals max j∈[k2]E[Y2|A1, A2=j]. Hence, any member of argmax i∈[3] max j∈[2]E[Y2|A1=i, A2=j] is a candidate for d∗ 1, the first stage optimal treatment assignment. This explains the d∗ 1values in Table 1. (a) Setting 2 in Table 1 (b) Setting 3 in Table 1 Fig 9: Surface plot of ψdefined in (S4.8) for the last two settings in Table 1
. The point where the vertical and horizontal lines meet is the point where the minima occurs. The value of the function decreases as the color turns red from green. The plot for Setting 1 is skipped because it is similar to that of Setting 2. Additional lemmas for the proofs of Section S4.0.4 Lemma S4.3. Letv∈Rk1−1andp∈ Sk2−1. Ifv[1]<0, then the optimum value of the program min y∈Rk2X j∈[k2] 1−pjX i∈[k1−1]max(vi,yj) s.t.X j∈[k2]yj= 0,(S4.10) isκ(v,p)ifk1= 3andk2= 2, where κ(v,p) = v[1]|p1−p2|+ max( v[2],−v[1])(p1∧p2) +v[2](p1∨p2)ifp1∧p2≤(p1∨p2)/2, −|min(v[2],−v[1])(p2−p1)| +v[2](p1∨p2) +|v[2]|(p1∧p2)otherwise. Ifv[1]≥0, then the optimal value of the program in (S4.10) isv1+v2. Proof. Let us define ˜ϕ(y) =X j∈[k2] 1−pjX i∈[k1−1]max(vi,yj). (S4.11) Note that ˜ϕ(y)≥X i∈[k1−1]viX j∈[k2] 1−pj = (k2−1)X i∈[k1−1]vi for all y∈Rk2. Ifv[1]≥0, then any y∈Rk1with max( y)≤v[1]satisfies ˜ϕ(y) =P i∈[k1−1]viP j∈[k2] 1−pj . In particular, we can take y=0k2. The result for v[1]>0 follows noting p∈ Sk2−1, which impliesP j∈[k2](1−pj) = 1 when k2= 2. Hence, we consider v[1]<0. Note also that the optimum value of the program (S4.10) equals min y∈Rk2X j∈[k2] 1−pjX i∈[k1−1]max(v[i],yj) s.t.X j∈[k2]yj= 0. S4 PROOFS FOR THE CONVEX SURROGATE EXAMPLES IN SECTION 3/ 49 Therefore, it suffices to show the proof considering vis sorted, i.e., v1≤. . .≤vk1−1andv1<0. In this case, from Lemma S4.4 it follows that the optimum value of the optimization program in (S4.10) equals the optimal value of the program min y∈Rk2X j∈[k2] 1−pjX i∈[k1−1]max(vi,yj) s.t.X j∈[k2]yj= 0, yj≥v1for all j∈[k2].(S4.12) When k2= 2, we can represent ybyy= (x,−x) for some x∈R. Then the optimization problem in (S4.12) becomes min x∈Rp2 max(v1, x) + max( v2, x) +p1 max(v1,−x) + max( v2,−x) s.t. x, −x≥v1.(S4.13) When x,−x≥v1, it follows that p2 max(v1, x) + max( v2, x) +p1 max(v1,−x) + max( v2,−x) =x(p2−p1) +p2max(v2, x) +p1max(v2,−x). 
Thus the optimization problem in (S4.13) rewrites as min x∈Rx(p2−p1) +p2max(v2, x) +p1max(v2,−x) s.t.|x| ≤ −v1,(S4.14) where we used the fact that x,−x≥v1is equivalent to |x| ≤ −v1when v1<0. We overload notation and define ˜ϕ(x) :=x(p2−p1) +p2max(v2, x) +p1max(v2,−x). Let us define −v1=c >0. Therefore, the optimal value of (S4.14) equals inf x∈[−c,c]˜ϕ(x). Since v2≥v1, there can be three cases. Case 1: v2=v1Since x∈[−c, c], it follows that x,−x≥c2implying ˜ϕ(x) = 2 x(p2−p1). Then inf x∈[−c,c]˜ϕ(x) =−2|p2−p1|c=−2|v2(p1−p2)| argmin x∈[−c,c]˜ϕ(x) = c p1>p2 −c p2<p1 [−c, c] if p1=p2. Let us call the above number x∗. Suppose y∗= (x∗,−x∗) Then argmax( y∗) = 1 if p1>p2 2 if p2<p1 {1,2} ifp1=p2. Case 2: v1<v2<0 In this case, −c <v2<0<−v2< c, indicating ˜ϕ(x) = x(p2−p1) +p2v2−p1xifx∈[−c,v2) 2x(p2−p1) if x∈(v2,−v2] 2xp2−p1x+p1v2 x∈(−v2, c]. Since ˜ϕ(x) is a piecewise linear function, the knots will be in argminx∈[−c,c]˜ϕ(x) where the knots belong to the set {−c,v2,0,−v2, c}. Note that ˜ϕ(x) = 2p1c−p2c+p2v2ifx=−c 2v2(p2−p1) if x=v2 −2v2(p2−p1) if x=−v2 2cp2−p1c+p1v2ifx=c. Therefore, inf x∈[−c,c]˜ϕ(x) = min 2p1c−p2c+p2v2,−2|v2(p2−p1)|,2cp2−p1c+p1v2 . S4 PROOFS FOR THE CONVEX SURROGATE EXAMPLES IN SECTION 3/ 50 Let us denote hc(y;c, v2) = 2 yc−(1−y)c+ (1−y)v2= 3yc−c+ (1−y)v2. Then inf x∈[−c,c]˜ϕ(x) = min hc(p1), hc(p2),−2|v2(p2−p1)| . It can be easily seen that hc(p1)⋛hc(p2) iffp1⋛p2. Therefore, min(hc(p1), hc(p2)) =hc(min( p1,p2)).
(S4.15) Hence, inf x∈[−c,c]˜ϕ(x) = min hc(p1∧p2),−2|v2(p2−p1)| . Now note that hc(p1∧p2) = 2 c(p1∧p2) + (p1∨p2)(v2−c) =−c|p1−p2|+c(p1∧p2) +v2(p1∨p2). (S4.16) Therefore, hc(p1∧p2)⋛−2|v2(p2−p1)| if and only if −c|p1−p2|+c(p1∧p2) +v2(p1∨p2)⋛2v2|p2−p1| where we used the fact that v2<0. The last display is equivalent to (v2+c)(p1∧p2)⋛(v2+c)|p2−p1|. Since c+v2>0, the above is equivalent to p1∧p2⋛|p2−p1|orp1∧p2⋛p1∨p2 2. Therefore, we obtained that inf x∈[−c,c]˜ϕ(x) =( −c|p1−p2|+c(p1∧p2) +v2(p1∨p2) if p1∧p2≤p1∨p2 2 −2|v2(p2−p1)| otherwise. Case 3: 0≤v2< c In this case, −c <0≤v2< c. Therefore, ˜ϕ(x) = x(p2−p1) +p2v2−p1xifx∈[−c,−v2) x(p2−p1) +v2 ifx∈(−v2,v2] 2xp2−p1x+p1v2 x∈(v2, c] Note that ˜ϕ(x) = 2p1c−p2c+p2v2ifx=−c v2(p1−p2+ 1) if x=−v2 v2(p2−p1+ 1) if x=v2 2cp2−p1c+p1v2ifx=c. Therefore, inf x∈[−c,c]˜ϕ(x) equals min 2p1c−p2c+p2v2,−v2|p2−p1|+v2,2cp2−p1c+p1v2 = min hc(p1), h(p2),−v2|p2−p1|+v2 (a)= min hc(p1∧p2),−v2|p2−p1|+v2 (b)= min −c|p1−p2|+c(p1∧p2) +v2(p1∨p2),−v2|p2−p1|+v2 . where (a) follows from (S4.15) and (b) follows from (S4.16). It can be easily seen that −c|p1−p2|+c(p1∧p2) +v2(p1∨p2)⋚−v2|p2−p1|+v2 if and only if (c−v2)(p1∧p2)⋚(c−v2)|p1−p2|. Since c >v2in this case, the above is equivalent to p1∧p2⋚|p1−p2|orp1∧p2⋚(p1∨p2)/2. Therefore, we have obtained that inf x∈[−c,c]˜ϕ(x) =( −c|p1−p2|+c(p1∧p2) +v2(p1∨p2) if p1∧p2≤(p1∨p2)/2 −|v2(p2−p1)|+v2 otherwise. S4 PROOFS FOR THE CONVEX SURROGATE EXAMPLES IN SECTION 3/ 51 Case 4: c≤v2In this case, −v2≤ −c < c≤v2. Thus −x, x≤v2ifx∈[−c, c] Therefore, ˜ϕ(x) =x(p2−p1) +v2 Therefore, inf x∈[−c,c]˜ϕ(x) =−|p2−p1|c+v2 Combining cases 1-4 for p1∧p2≤(p1∨p2)/2 The above steps imply that if p1∧p2≤(p1∨p2)/2, then inf x∈[−c,c]˜ϕ(x) = 2v2|p1−p2| ifv1=v2 v1|p1−p2| −v1(p1∧p2) +v2(p1∨p2) if v1<v2<−v1 v1|p1−p2|+v2 if−v1≤v2. When v1=v2, −v1(p1∧p2) +v2(p1∨p2) =v1 p1∨p2−p1∧p2 =v1|p1−p2|. Therefore, if p1∧p2≤(p1∨p2)/2, then inf x∈[−c,c]˜ϕ(x) =( v1|p1−p2| −v1(p1∧p2) +v2(p1∨p2) if v1≤v2<−v1 v1|p1−p2|+v2 if−v1≤v2. 
However, when −v1≤v2, v1|p1−p2|+v2=v1|p1−p2|+v2(p1∨p2) +v1p1∧p2 =v1|p1−p2|+v2(p1∨p2) + max( v2,−v1)p1∧p2, and when −v1>v2, v1|p1−p2| −v1(p1∧p2) +v2(p1∨p2) =v1|p1−p2|+ max( v2,−v1)p1∧p2+v2(p1∨p2). Therefore, if p1∧p2≤(p1∨p2)/2, then inf x∈[−c,c]˜ϕ(x) =v1|p1−p2|+ max( v2,−v1)(p1∧p2) +v2(p1∨p2). (S4.17) Combining cases 1-4 for p1∧p2>(p1∨p2)/2 The above cases imply that when p1∧p2>(p1∨p2)/2, then inf x∈[v1,−v1]˜ϕ(x) = −2|v2(p2−p1)| ifv1<v2<0 −|v2(p2−p1)|+v2if 0≤v2<−v1 v1|p1−p2|+v2 if−v1≤v2, which is equivalent to inf x∈[v1,−v1]˜ϕ(x) =( −2|v2(p2−p1)| ifv1<v2<0 −|min(v2,−v1)(p2−p1)|+v2if 0≤v2 When v1<v2<0, −2|v2(p2−p1)| =− |v2(p2−p1)|+v2(p1∨p2)−v2(p1∧p2) =− |min(v2,−v1)(p2−p1)|+v2(p1∨p2)−v2(p1∧p2) =− |min(v2,−v1)(p2−p1)|+v2(p1∨p2) +|v2|(p1∧p2). When 0 ≤v2, v2=v2(p1∨p2) +v2(p1∧p2) =v2(p1∨p2) +|v2|(p1∧p2). Therefore, we have obtained that when p1∧p2>(p1∨p2)/2, inf x∈[v1,−v1]˜ϕ(x) =−|min(v2,−v1)(p2−p1)|+v2(p1∨p2) +|v2|(p1∧p2) (S4.18) Hence, the proof follows combining (S4.17) and (S4.18). Lemma S4.4. Consider the optimization problem in (S4.10) . Ifv1≤. . .≤vk1−1andv1<0, then for any k1, k2≥2, the optimal value of (S4.10) equals inf y∈C1X j∈[k2] 1−pjX i∈[k1]:i̸=A1max(vi,yj), where C1= y∈Rk2:X j∈[k2]yj= 0,yi≥v1 . S5 PROOF OF THEOREM 3.1/ 52 Proof of Lemma S4.4. Let us denote the set C={y∈Rk2:P j∈[k2]yj= 0}. Hence, the optimal value of (S4.10) equals infy∈C˜ϕ(y) where ˜ϕis as defined in (S4.11). We will show that inf y∈C˜ϕ(y) = inf y∈C1˜ϕ(y). If possible, suppose i∈[k2] is such that yi<v1. Let l̸=iandl∈[k2]. Our next step is to show that ˜ϕ(y)≥˜ϕ(y′) where y′is the vector obtained by replacing yiwithv1and replacing ylwithy′ l=yl+yi−v1. Clearly, y′ l<ylandy′ i>yi. 
Note that y′∈ Cand˜ϕ(y)−˜ϕ(y′) equals 1−piX r∈[k1−1]max(vr,yi) + 1−plX r∈[k1−1]max(vr,yl) − 1−piX r∈[k1−1]max(vr,v1)− 1−plX r∈[k1−1]max(vr,yl+yi−v1) = (1−pi)X r∈[k1−1] max(vr,yi)−max(vr,v1) + (1−pl)X r∈[k1−1] max(vr,yl)−max(vr,yl+yi−v1) = (1−pl)X r∈[k1−1] max(vr,yl)−max(vr,yl+yi−v1) where the last equality follows because max( vr,yi) = max( vr,v1) =vror all r∈[k1−1] since yi<v1≤vrfor all r∈[k1−1]. However, since yl+yi−v1<yl, we obtain ˜ϕ(y)≥˜ϕ(y′). We can keep replacing yiwithv1ifyi<v1until
all elements of $\mathbf{y}'$ are greater than or equal to $v_1$. Thus we have shown that $\inf_{\mathbf{y}\in C_1}\tilde\phi(\mathbf{y})\le\inf_{\mathbf{y}\in C}\tilde\phi(\mathbf{y})$, which, combined with $C_1\subset C$, implies $\inf_{\mathbf{y}\in C_1}\tilde\phi(\mathbf{y})=\inf_{\mathbf{y}\in C}\tilde\phi(\mathbf{y})$.

S5. Proof of Theorem 3.1

S5.1. Proof preparation

Before proceeding with the proof of Theorem 3.1, we first gather some key facts that will be utilized throughout the proof and briefly outline the main steps involved. We say $f:\mathbb{R}^k\to\mathbb{R}$ is lower semicontinuous at $x_0\in\mathbb{R}^k$ if $\liminf_{x\to x_0}f(x)\ge f(x_0)$. Following Definition 1.2.3, pp. 78 of Hiriart-Urruty and Lemaréchal (2004), $f$ is lower semicontinuous everywhere if and only if it is closed. Following Section 3.2 of Hiriart-Urruty and Lemaréchal (2004), we define a 0-coercive convex function as follows.

Definition S5.1 (0-coercivity). A closed convex function $f:\mathbb{R}^k\to\mathbb{R}$ is called 0-coercive if $f(x)\to\infty$ as $\|x\|_2\to\infty$.

A 0-coercive convex function is necessarily bounded below. An associated notion is that of the recession function.

Definition S5.2. Suppose the function $f:\mathbb{R}^k\to\mathbb{R}$ is closed and convex. Then for any $x\in\mathbb{R}^k$, the function $f'_\infty$ defined by
\[
f'_\infty(x)=\lim_{t\to\infty}\frac{f(tx)-f(0)}{t}
\]
is known as the recession function of $f$.

It is evident that $f'_\infty(0)=0$. The behavior of $f'_\infty$ for $x\neq 0$ is closely related to the coercivity of $f$ and can inform us about the existence of a minimum. The following result, synthesizing Remark 3.2.7 and Proposition 3.2.4 from Hiriart-Urruty and Lemaréchal (2004), establishes key properties of recession functions in relation to coercivity.

Fact S5.1. Suppose $f$ is convex and bounded below. Then $f'_\infty(x)\ge 0$ for all $x\neq 0$. In fact, there are only two possibilities: (1) $f$ is 0-coercive, and the set of minimum points of $f$ is a non-empty compact set (a singleton if $f$ is strictly convex) – this happens if $f'_\infty(x)>0$ for all $x\neq 0$; or (2) the minimum of $f$ is unattained – this happens if $f'_\infty(x)=0$ for some $x\neq 0$.

Fact S5.1 suggests that we can deduce whether the minimum of $f$ is achieved by evaluating the recession function of $f$.
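Fact S5.1 can be illustrated numerically: the recession function $f'_\infty(d)$ is the limiting slope of $t\mapsto f(td)$, so evaluating $[f(td)-f(0)]/t$ at a large $t$ separates the coercive case from the unattained-minimum case. A small sketch with two toy convex functions (unrelated to $-\eta$; chosen only for illustration):

```python
import math

def recession_slope(f, d, t=700.0):
    # estimate f'_inf(d) = lim_{t -> inf} (f(t*d) - f(0)) / t at a large finite t
    # (t = 700 keeps math.exp within double-precision range)
    return (f(t * d) - f(0)) / t

# g(x) = e^x + e^{-x} is 0-coercive: positive recession slope in both directions,
# and its minimum is attained (at x = 0), matching case (1) of Fact S5.1.
g = lambda x: math.exp(x) + math.exp(-x)

# h(x) = e^x is convex and bounded below, but inf h = 0 is never attained:
# the recession slope vanishes in the direction d = -1, matching case (2).
h = lambda x: math.exp(x)

assert recession_slope(g, 1.0) > 0 and recession_slope(g, -1.0) > 0
assert abs(recession_slope(h, -1.0)) < 0.01   # numerically ~ 0
```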
The subsequent fact provides a useful tool for this purpose.

Fact S5.2 (Proposition 3.2.8 of Hiriart-Urruty and Lemaréchal (2004)).
(a) Consider $m\in\mathbb{N}$ and let $a_1,\dots,a_m$ be positive numbers. Then for closed convex functions $f_1,\dots,f_m:\mathbb{R}^k\to\mathbb{R}$, we have $\big(\sum_{i=1}^m a_i f_i\big)'_\infty=\sum_{i=1}^m a_i (f_i)'_\infty$.
(b) Suppose $h(x_1,\dots,x_k)=f(a_1x_1,\dots,a_kx_k)$, where $a_i\in\{\pm 1\}$ for $i=1,\dots,k$ and $f$ is convex. Then $h'_\infty(x_1,\dots,x_k)=f'_\infty(a_1x_1,\dots,a_kx_k)$.
(c) Let $\operatorname{Range}(A)$ denote the range of $A$ for any matrix $A$ with $k$ columns, where $k\in\mathbb{N}$. If $h(x)=f(Ax)$ for all $x\in\mathbb{R}^k$ for a convex function $f$ and $\operatorname{dom}(f)\cap\operatorname{Range}(A)\neq\emptyset$, then $(f\circ A)'_\infty=f'_\infty\circ A$.

Now we will collect some initial results that will be used throughout the proof. From Wang and Scott (2023b), it follows that if $\upsilon:[k_1]\to[k_1]$ and $\tilde\pi:[k_2]\to[k_2]$ are two permutations and $\mathbf{x}\in\mathbb{R}^{k_1}$ and $\mathbf{w}\in\mathbb{R}^{k_2}$, then
\[
\psi(\mathbf{x},\mathbf{w};i,j)=\psi(\upsilon(\mathbf{x}),\tilde\pi(\mathbf{w});\upsilon^{-1}(i),\tilde\pi^{-1}(j))=\psi(\upsilon^{-1}(\mathbf{x}),\tilde\pi^{-1}(\mathbf{w});\upsilon(i),\tilde\pi(j)).\tag{S5.1}
\]
We refer to (S5.1) as the clockwise permutation symmetry of $\psi$. The above implies $\psi(\mathbf{x},\mathbf{w};i,j)$ is permutation
equivariant w.r.t. $\mathbf{x}$ and $i$ when $\mathbf{w}$ and $j$ are fixed, and vice versa. Using this result, we can prove that $\mathbf{0}_{k_1+k_2-2}\in\operatorname{int}(\operatorname{dom}(-\eta))$ under the setup of Theorem 3.1. Lemma S5.1 establishes this fact; its proof can be found in Section S5.4.1.

Lemma S5.1. Suppose $\psi$ is a PERM loss with template $\eta$, which is concave and bounded above. If $\cap_{i=1}^{k_1}\cap_{j=1}^{k_2}\operatorname{int}(\operatorname{dom}(\psi(\cdot;i,j)))\neq\emptyset$, then $\mathbf{0}_{k_1+k_2-2}\in\operatorname{int}(\operatorname{dom}(-\eta))$.

Since $-\eta$ is bounded below, if $\operatorname{dom}(-\eta)$ is non-empty, then $-\eta$ is proper. Therefore Lemma S5.1 implies that $-\eta$ is a proper convex function.

S5.1.1. The main steps

We will prove this theorem in two main steps. The goal of the first step is to reduce the general problem to a problem with $k_1=2$ and $k_2=2$. In the second step, at a high level, we first show that the concavity of $\psi$ enforces some geometric restrictions on $\tilde f_1$ and $\tilde f_2$ in the corresponding binary-treatment problem. Then we show that if $\psi$ is Fisher consistent, it also induces some restrictions on $\tilde f_1$ and $\tilde f_2$. Using a bad set of distributions, we show that these restrictions are violated in the presence of the aforementioned concavity-induced restrictions. The structure of this section is as follows: Step 1 is carried out in Section S5.2 and Step 2 in Section S5.3. The main lemmas are proved in Section S5.4, while the proofs of additional results are deferred to Section S5.5.

S5.2. Step 1

The first task of this step is to construct a subclass of distributions for which the $\tilde f$ corresponding to our $\psi$ exists and the forms of $\tilde f$, $\tilde d$, $d^*$, etc. are tractable. Then we find a further subclass of distributions for which the DTR optimization problem with $k_1$ and $k_2$ treatment levels reduces to a DTR optimization problem with 2 treatment levels per stage. We consider a subset $\mathcal{P}^{k_1,k_2}$ of $\mathcal{P}_0$, the space of all distributions satisfying Assumptions I-V. The distributions in $\mathcal{P}^{k_1,k_2}$ entail the following setting. For these distributions, $Y_1=0$ and $O_1,O_2=\emptyset$. Therefore, there is no covariate.
We could add covariates to this counterexample, but the main idea of the construction would be similar; the calculations, however, would be more technical. Since $\mathcal{P}^{k_1,k_2}\subset\mathcal{P}_0$, it follows that $\mathbb{E}[Y_2|i,j]>0$ for all $i\in[k_1]$ and $j\in[k_2]$. Since $H_1=\emptyset$, under this setting the first-stage class-score function $f_1$ can be represented by a vector in $\mathbb{R}^{k_1}$. The corresponding $d_1=\operatorname{argmax}_{i\in[k_1]}f_1(i)$ will be a number. Also, since $H_2=A_1$, $\mathcal{H}_2=[k_1]$. Thus $f_2$ is a map from $[k_1]$ to $\mathbb{R}^{k_2}$, and the second-stage DTR $d_2$ is a map from $[k_1]$ to $[k_2]$. For each $i\in[k_1]$, the class score $f_2(i)$ is a $k_2$-dimensional real vector. Therefore, the function $f_2$ can be completely described by the $k_1$ vectors $\{f_2(1),\dots,f_2(k_1)\}$. Since $H_2=A_1$ for distributions in $\mathcal{P}^{k_1,k_2}$, the expectation $\mathbb{E}[Y_1+Y_2|H_2,A_2]=\mathbb{E}[Y_2|A_1,A_2]$. For $i\in[k_1]$ and $j\in[k_2]$, let us denote $\mathbb{E}[Y_2|A_1=i,A_2=j]$ by $p_{ij}$. Since $\mathbb{E}[Y_2|A_1,A_2]>0$ for $P\in\mathcal{P}^{k_1,k_2}$, it follows that $p_{ij}>0$ for all $i\in[k_1]$ and $j\in[k_2]$. Note that every $P\in\mathcal{P}^{k_1,k_2}$ elicits one set of $p_{ij}$'s. Moreover, given any set $\{p_{ij}\}_{i\in[k_1],j\in[k_2]}$ with positive $p_{ij}$'s, we can construct a $P\in\mathcal{P}^{k_1,k_2}$ so that $\mathbb{E}[Y_2|A_1=i,A_2=j]=p_{ij}$. It can also be shown that for any DTR $d=(d_1,d_2)$, the
value function V(d1, d2) =pd1,d2(d1). Thus, the contribution of P∈ Pk1,k2on the value function reflects only via ( pij). For each A1=i, any number in the set argmaxj∈[k2]pijis a candidate of the optimal second stage treatment assignment d∗ 2(i). Also, since H1=∅, it follows that d∗ 1can be any number in the set argmaxi∈[k1]Q∗ 1(i). Under our setting, it follows that Q∗ 1(i) =E[ max j∈[k2]E[Y2|A1, A2=j]|A1=i] =E[ max j∈[k2]pA1j|A1=i] = max j∈[k2]pij Thus any member of argmaxi∈[k1](max j∈[k2]pij) is a candidate for d∗ 1. Also, since V∗= max i∈[k1]Q∗ 1(i), V∗= max i∈[k1],j∈[k2]pij. (S5.2) The surrogate value function for P∈ Pk1,k2takes the form Vψ(f1, f2) :=EY2 π1(A1)π2(A2|A1)ψ(f1, f2(A1);A1, A2) =EE[Y2|A1, A2] π1(A1)π2(A2|A1)ψ(f1, f2(A1);A1, A2) =X j∈[k2]EE[Y2|A1, A2=j] π1(A1)ψ(f1, f2(A1);A1, j) =X i∈[k1]X j∈[k2]E[Y2|i, j]ψ(f1, f2(i);i, j) =X i∈[k1]X j∈[k2]pijψ(f1, f2(i);i, j). (S5.3) S5 PROOF OF THEOREM 3.1/S5.2 Step 1 54 Therefore Vψ(f1, f2) depends on Ponly via pij’s as long as P∈ Pk1,k2. Let us denote p= (pij). From now on, we will denote Vψ(f1, f2) byVψ(f1, f2;p) to make the dependence on the underlying probability distribution Pexplicit. We will introduce some more new notation. Since f1∈Rkandf2(i)∈Rk2,f1can be represented by a vector x∈Rk1 and each f2(i) can be represented by a vector yi∈Rk2,i∈[k1]. As f2= (f2(1), . . . , f 2(k1)), we can describe f2by the k1k2-dimensional vector Y= (y1, . . . ,yk1). For the rest of this proof, unless otherwise specified, we will use the notation x instead of f1and (y1, . . . ,yk1) orYinstead of f2unless otherwise specified. Therefore, Vψ(f1, f2;p) will also be denoted by Vψ(x,y1, . . . ,yk1;p) for the rest of this proof. Using the above notation, (S5.3) can be rewritten as Vψ(x,y1, . . . ,yk1;p) =X i∈[k1]X j∈[k2]pijψ(x,yi;i, j). (S5.4) Suppose argmaxx∈Rk1,y1,...,yk1∈Rk2Vψ(x,y1, . . . ,yk1;p) is non-empty for some pand (x∗(p),y∗ k1(p), . . . ,y∗ k2(p))≡(x∗,y∗ k1, . . . ,y∗ k2) ∈ argmax x∈Rk1,y1,...,yk1∈Rk2Vψ(x,y1, . . . 
,yk1;p). (S5.5) Then for any $P$ corresponding to $\mathbf{p}$, $\mathbf{x}^*(\mathbf{p})$ is a candidate for $\tilde f_1$ and $\mathbf{y}^*_i(\mathbf{p})$ is a candidate for $\tilde f_2(i)$ for all $i\in[k_1]$. If $\tilde d_1$ and $\tilde d_2$ are the policies generated by $\tilde f_1$ and $\tilde f_2$, then $\tilde d_1=\operatorname{pred}(\mathbf{x}^*(\mathbf{p}))$ and $\tilde d_2(i)=\operatorname{pred}(\mathbf{y}^*_i(\mathbf{p}))$ for each $i\in[k_1]$. Since $V(d_1,d_2)=p_{d_1,d_2(d_1)}$ for any DTR $d$, $V(\tilde d_1,\tilde d_2)=p_{\tilde d_1,\tilde d_2(\tilde d_1)}$. For an arbitrary $\mathbf{p}$, there is no guarantee that the maximizer of $V_\psi(\cdot;\mathbf{p})$ exists. However, Lemma S5.2, which is proved in Section S5.4.2, shows that $\operatorname{argmax}_{\mathbf{x}\in\mathbb{R}^{k_1},\mathbf{y}_1,\dots,\mathbf{y}_{k_1}\in\mathbb{R}^{k_2}}V_\psi(\mathbf{x},\mathbf{y}_1,\dots,\mathbf{y}_{k_1};\mathbf{p})$ is non-empty when $p_{ij}=1$ for all $i\in[k_1]$ and $j\in[k_2]$. We will later see that the existence of the maximizer for one $\mathbf{p}$ is sufficient for showing the existence of the maximizer for all $\mathbf{p}$'s.

Lemma S5.2. Suppose $\mathcal{C}_e=\{(x\mathbf{1}_{k_1},y\mathbf{1}_{k_2},\dots,y\mathbf{1}_{k_2})\in\mathbb{R}^{k_1(1+k_2)}:x,y\in\mathbb{R}\}$ and $p_{ij}=1$ for all $i\in[k_1]$ and $j\in[k_2]$. Under the setup of Theorem 3.1, the $V_\psi$ defined in (S5.4) satisfies
\[
\operatorname{argmax}_{\mathbf{x}\in\mathbb{R}^{k_1},\,\mathbf{y}_1,\dots,\mathbf{y}_{k_1}\in\mathbb{R}^{k_2}}V_\psi(\mathbf{x},\mathbf{y}_1,\dots,\mathbf{y}_{k_1};\mathbf{p})\supset\mathcal{C}_e.
\]

We now consider a further subset $\mathcal{P}^{k_1,k_2}_b\subset\mathcal{P}^{k_1,k_2}$ so that for any $P\in\mathcal{P}^{k_1,k_2}_b$, $p_{1j}=p_{12}$ for all $j\ge 2$, $p_{i1}=p_{21}$ for all $i>1$, and $p_{ij}=p_{22}$ for all $j>1$ and $i>1$. Since it is possible to construct a $P\in\mathcal{P}^{k_1,k_2}$ such that $\mathbb{E}[Y_2|A_1=i,A_2=j]$
=pij for any p≡piji∈[k1], j∈[k2] with positive pijvalues, it follows that for any ( p11,p12,p21,p22)∈R4 >0, there exists a P∈ Pk1,k2 bcorresponding to these values. Since we will only consider P∈ Pk1,k2 bfrom now on, by an abuse of notation, we will denote ( p11,p12,p21,p22) byp. The surrogate loss reduces to Vψ(x,y1, . . . ,yk1;p) =p11ψ(x,y1; 1,1) +p12k2X j=2ψ(x,y1; 1, j) +p21k1X i=2ψ(x,yi;i,1) +p22k1X i=2k2X j=2ψ(x,yi;i, j). (S5.6) Since ψis PERM with template η, (3.5) implies that Vψ(x,y1, . . . ,yk1;p) equals p11η(x1−x2, . . . ,x1−xk1,y11−y12, . . . ,y11−y1k2) +p12k2X j=2η(x1−x2, . . . ,x1−xk1,y1j−y11, . . . ,y1j−y1,k2−1) +p21k1X i=2η(xi−x1, . . . ,xi−xk1,yi1−yi2, . . . ,yi1−yik2) +p22k1X i=2k2X j=2η(xi−x1, . . . ,xi−xk1,yij−yi1, . . . ,yij−yik2) (S5.7) where terms such as xi−xiandyij−yijare omitted from the argument of η. Let us denote ∆ xi=x1−xifori∈[2 :k1] and ∆ yij=yi1−yijfori∈[k1] and j∈[2 :k2]. Observe that xi−xj=xi−x1+ (x1−xj) = ∆ xj−∆xiand yij−yij′=yij−yi1+(yi1−yij′) = ∆ yij′−∆yij. Let ∆ x= (∆x2, . . . , ∆xk1), and for i∈[k1], let ∆ yi= (∆yi2, . . . , ∆yik2). S5 PROOF OF THEOREM 3.1/S5.2 Step 1 55 With this notation, for any x∈Rk1andy1, . . . ,yk2∈Rk2, the quantity Vψreduces to Vψ(x,y1, . . . ,yk1;p) =p11η(∆x,∆y1) +p12k2X j=2η(∆x,−∆y1j,∆y12−∆y1j. . . ,∆y1k2−∆y1j) +p21k1X i=2η(−∆xi,∆x2−∆xi. . . ,∆xk1−∆xi,∆yi) +p22k1X i=2k2X j=2η−∆xi,∆x2−∆xi. . . ,∆xk1−∆xi, −∆yij,∆yi2−∆yij. . . ,∆yik2−∆yij =p11η(∆x,∆y1) +p12k2−1X j=1η(∆x,−∆y1,1+j,∆y12−∆y1,1+j. . . ,∆y1k2−∆y1,1+j) +p21k1−1X i=1η(−∆x1+i,∆x2−∆x1+i. . . ,∆xk1−∆x1+i,∆y1+i) +p22k1−1X i=1k2−1X j=1η−∆x1+i,∆x2−∆x1+i. . . ,∆xk1−∆x1+i,−∆y1+i,1+j, ∆y1+i.2−∆y1+i,1+j. . . ,∆y1+i,k2−∆y1+i.1+j . Foru∈Rk1−1andv1, . . . ,vk1∈Rk1−1, by an slight abuse of notation, let us define the function Vη(u,v1, . . . ,vk1;p) =p11η(u,v1) +p12k2−1X j=1η(u,−v1j,v11−v1j. . . ,v1,k2−1−v1j) +p21k1−1X i=1η(−ui,u1−ui. . . ,uk1−1−ui,v1+i) +p22k1−1X i=1k2−1X j=1η−ui,u1−ui. . . ,uk1−1−ui, −v1+i,j,v1+i,1−v1+i,j. . . 
,v1+i,k2−1−v1+i.j , (S5.8) where terms such as ui−uiandvij−vijare omitted from the argument of ηin the above definition. Note that Vηis different from Vψas the domain of ψandηare different. Since ∆ x= (∆x2, . . . , ∆xk1) and ∆ yi= (∆yi2, . . . , ∆yik2) for i∈[k1], letting ∆ x=uand ∆ yi=vi, we can show that Vψ(x,y1, . . . ,yk1;p) =Vη(∆x,∆y1, . . . , ∆yk1;p) (S5.9) for all x∈Rk1andy1, . . . ,yk2∈Rk2. Therefore, Vψdepends on xand the yi’s only via the ∆ xand ∆ yi’s. To maximize Vψwith respect to xand the yi’s, one actually need to maximize it over the ∆ xand ∆ yi’s which are in Rk1−1andRk2−1, respectively. If u∗∈Rk1−1,v∗ 1, . . . ,v∗ k1∈Rk2−1maximizes Vη(·;p), then any x∈Rk1,y1, . . . ,yk1∈Rk2with ∆ x=u∗and ∆yi=v∗ i(i∈[k1]) is a maximizer of Vψ(·;p). In particular x∗= (0,−u∗),y∗ 1= (0,−v∗ 1), . . . ,y∗ k1= (0,−v∗ k1) (S5.10) is a version of ( x∗(p),y∗ 1(p), . . . ,y∗ k1(p)). Alternatively, (∆x∗(p),∆y∗ 1(p), . . . , ∆y∗ k1(p))∈ argmax u∈Rk1−1,v1,...,vk1∈Rk2−1Vη(u,v1, . . . ,vk1) (S5.11) if (x∗(p),y∗ 1(p), . . . ,y∗ k1(p)) exists. However, it is not obvious when (x∗(p),y∗ 1(p), . . . ,y∗ k1(p)) exists. To this end, first we derive a simpler expression for Vη(·;p). Since ηis blockwise permutation symmetric
https://arxiv.org/abs/2505.17285v1
by (3.4), for any u ∈ R^{k1−1} and w ∈ R^{k2−1},

η(u, −wj, w1−wj, ..., wk2−1−wj) = η(u, w1−wj, ..., −wj, ..., wk2−1−wj) and η(−ui, u1−ui, ..., uk1−1−ui, w) = η(u1−ui, ..., −ui, ..., uk1−1−ui, w)

for any i ∈ [k1−1] and j ∈ [k2−1]. In the above, terms such as ui−ui and wj−wj are omitted from the argument of η. Thus

Vη(u, v1, ..., vk1; p) = p11 η(u, v1) + p12 Σ_{j=1}^{k2−1} η(u, v11−v1j, ..., −v1j, ..., v1,k2−1−v1j) + p21 Σ_{i=1}^{k1−1} η(u1−ui, ..., −ui, ..., uk1−1−ui, vi+1) + p22 Σ_{i=1}^{k1−1} Σ_{j=1}^{k2−1} η(u1−ui, ..., −ui, ..., uk1−1−ui, vi+1,1−vi+1,j, ..., −vi+1,j, ..., vi+1,k2−1−vi+1,j). (S5.12)

Let us define A0 = I_{k1−1},

A1 = [ −1, 0^T_{k1−2} ; −1_{k1−2}, I_{k1−2} ], and Ai = [ I_{i−1}, −1_{i−1}, 0_{(i−1)×(k1−1−i)} ; 0^T_{i−1}, −1, 0^T_{k1−1−i} ; 0_{(k1−1−i)×(i−1)}, −1_{k1−1−i}, I_{k1−1−i} ] (S5.13)

for i ∈ [2:k1−1], in case the set [2:k1−1] is non-empty. Therefore, Ai u = (u1−ui, ..., −ui, ..., uk1−1−ui) for all i ∈ [k1−1]. One can verify that we can obtain Ai from Aj by swapping the ith column with the jth column and then swapping the ith row with the jth row. Therefore, Ai = Pij Aj Pij, where, by a slight abuse of notation, we define Pij to be the permutation matrix of appropriate dimension for swapping the ith row with the jth row. Also, we let B0 be I_{k2−1} and

B1 = [ −1, 0^T_{k2−2} ; −1_{k2−2}, I_{k2−2} ], and Bj = [ I_{j−1}, −1_{j−1}, 0_{(j−1)×(k2−1−j)} ; 0^T_{j−1}, −1, 0^T_{k2−1−j} ; 0_{(k2−1−j)×(j−1)}, −1_{k2−1−j}, I_{k2−1−j} ] (S5.14)

for j ∈ [2:k2−1], in case the set [2:k2−1] is non-empty. It can similarly be shown that Bj = P1j B1 P1j. Note that Bj w = (w1−wj, ..., −wj, ..., wk2−1−wj) for any w ∈ R^{k2−1}. With this new notation, (S5.12) reduces to

Vη(u, v1, ..., vk1; p) = p11 η(u, v1) + p12 Σ_{j=1}^{k2−1} η(u, Bj v1) + p21 Σ_{i=1}^{k1−1} η(Ai u, vi+1) + p22 Σ_{i=1}^{k1−1} Σ_{j=1}^{k2−1} η(Ai u, Bj vi+1) (S5.15)

for all u ∈ R^{k1−1} and v1, ..., vk1 ∈ R^{k2−1}. We will now show that the maximizer of Vη(·; p) is attained for all p ∈ R^4_{>0}. If possible, suppose there exists p ∈ R^4_{>0} so that the maximizer of Vη(·; p) is not attained. Then Lemma S5.3 implies that the maximizer of Vη(·; p) is not attained for any p ∈ R^4_{>0}.
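To see the mechanism behind Lemma S5.3 concretely, here is a minimal numerical sketch with a made-up choice h(t) = exp(t) (convex, bounded below, with 0 in its domain but minimum not attained) and 1×1 "matrices" C1 = [1], C2 = [2]; none of these choices come from the paper, they only illustrate why non-attainment cannot depend on the positive weights.

```python
# Illustration (not part of the proof) of the weight-independence in Lemma S5.3:
# f(x; w) = w1*exp(x) + w2*exp(2x) has infimum 0 but no minimizer, and this
# non-attainment holds for every positive weight vector w.
import math

def f(x, w):
    return w[0] * math.exp(x) + w[1] * math.exp(2 * x)

for w in ([1.0, 1.0], [0.1, 7.3]):          # two arbitrary positive weight vectors
    vals = [f(-t, w) for t in range(0, 50, 10)]
    assert all(a > b for a, b in zip(vals, vals[1:]))   # strictly decreasing in x -> -inf
    assert f(-50, w) < 1e-20                            # infimum 0 approached, never attained
print("f(.; w) has no minimizer for either choice of positive w")
```

The direction of unboundedness (here x → −∞) is a property of h and the Ci's alone, which is why changing the positive weights cannot create a minimizer.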
The proof of Lemma S5.3 is provided in Section S5.4.3. Note that the function f in Lemma S5.3 does not represent scores, and h denotes a function rather than a history.

Lemma S5.3. Let r, k ∈ N. Suppose C1, ..., Cm are matrices in R^{r×k} and h: R^r → R is convex and bounded below. Further suppose 0_r ∈ dom(h). Define

f(x; w) = Σ_{i=1}^{m} wi h(Ci x), x ∈ R^k, w ∈ R^m_{>0}.

Suppose there exists w0 ∈ R^m_{>0} so that the minimum of the convex function f(·; w0) is not attained in R^k. Then the minimum of f(·; w) is not attained in R^k for any w ∈ R^m_{>0}.

Here Lemma S5.3 applies because −η is convex and bounded below, and dom(−η) ∋ 0_{k1+k2−2} by Lemma S5.1. In particular, the maximizer of Vη(·; p) is not attained when p11 = p12 = p21 = p22 = 1. Then the maximizer of Vψ(·; p) is also not attained, because otherwise (S5.10) would give a maximizer of Vη(·; p). However, Lemma S5.2 implies that (0_{k1}, 0_{k2}, ..., 0_{k2}) is a maximizer of Vψ when the pij's are all one, which presents a contradiction. Therefore, the maximizer of Vη(·; p) is attained for all p ∈ R^4_{>0}. Since η is strictly concave, Vη(·; p) is strictly concave, implying that the maximizer is unique. Let us denote this maximizer by (u*(p), v*1(p), .
. . , v*k1(p)). The following lemma, which is proved in Section S5.4.4, shows that the maximizer lies in the lower dimensional space

C = { (u, v1, ..., vk1) ∈ R^{k1−1} × R^{k1(k2−1)} : u = x 1_{k1−1}, v1 = y 1_{k2−1}, v2 = ... = vk1 = z 1_{k2−1}, where x, y, z ∈ R }. (S5.16)

Lemma S5.4. For p ∈ R^4_{>0}, let (u*(p), v*1(p), ..., v*k1(p)) = argmax_{u∈R^{k1−1}, v1,...,vk1∈R^{k2−1}} Vη(u, v1, ..., vk1; p). Then (u*(p), v*1(p), ..., v*k1(p)) ∈ C, where C is as defined in (S5.16).

For x, y ∈ R, let us define ϑ: R^2 × {1,2}^2 → R as follows:

ϑ(x, y; 1,1) = η(x 1_{k1−1}, y 1_{k2−1}), ϑ(x, y; 1,2) = Σ_{j=1}^{k2−1} η(x 1_{k1−1}, y Bj 1_{k2−1}), ϑ(x, y; 2,1) = Σ_{i=1}^{k1−1} η(x Ai 1_{k1−1}, y 1_{k2−1}), ϑ(x, y; 2,2) = Σ_{i=1}^{k1−1} Σ_{j=1}^{k2−1} η(x Ai 1_{k1−1}, y Bj 1_{k2−1}), (S5.17)

where the Ai's and Bj's are as defined in (S5.13) and (S5.14), respectively. For x, y, z ∈ R, define

Φ(x, y, z; p) = −p11 ϑ(x, y; 1,1) − p12 ϑ(x, y; 1,2) − p21 ϑ(x, z; 2,1) − p22 ϑ(x, z; 2,2). (S5.18)

Note that Φ is well defined because η is bounded above. If u = x 1_{k1−1}, v1 = y 1_{k2−1}, and v2 = ... = vk1 = z 1_{k2−1}, then for all p ∈ R^4_{>0} and x, y, z ∈ R,

Vη(u, v1, ..., vk1; p) = −Φ(x, y, z; p). (S5.19)

Therefore, Lemma S5.4 and (S5.15) imply that

sup_{(u,v1,...,vk1)∈R^{k1−1}×R^{k1(k2−1)}} Vη(u, v1, ..., vk1; p) = − inf_{x,y,z∈R} Φ(x, y, z; p).

It is possible to show that if ψ is Fisher consistent for the class P^{k1,k2}_b, then ϑ induces a relative-margin-based Fisher consistent loss for the 2-stage DTR problem with binary treatments. When we mention the "relative-margin-based loss induced by ϑ", we refer to the loss defined by ϑloss(x1, x2, y1, y2; i, j) = ϑ(x1−x2, y1−y2; i, j) for all x1, x2, y1, y2 ∈ R. For the purpose of showing the contradiction in Step 2, it will be sufficient to show that the Fisher consistency of ψ induces a weaker version of Fisher consistency on ϑloss or ϑ, which is later elicited in Property 1. Therefore, we will not get into the Fisher consistency of ϑloss(·; i, j).
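The collapse of Vη onto the subspace C can be sanity-checked numerically. The sketch below uses a made-up strictly concave template η(a) = −∥a∥² (not the paper's template) with k1 = k2 = 3, builds the Ai = Bi matrices of (S5.13)/(S5.14), and checks that on C the value of Vη in (S5.15) equals the negative of the weighted ϑ-sum defining Φ in (S5.18); the names V_eta, Phi, and mat are ours.

```python
# Toy check that V_eta restricted to C equals -(p11*th11 + p12*th12 + p21*th21 + p22*th22),
# i.e. -Phi of (S5.18), using eta(a) = -||a||^2 and k1 = k2 = 3 (so m = 2).
def eta(*a):                 # strictly concave, bounded above by 0
    return -sum(t * t for t in a)

m = 2                        # k1 - 1 = k2 - 1 = 2

def mat(i):                  # A_i (= B_i here): row i is -e_i, row l != i is e_l - e_i
    rows = []
    for l in range(m):
        if l == i - 1:
            rows.append([-1.0 if c == i - 1 else 0.0 for c in range(m)])
        else:
            rows.append([(1.0 if l == c else 0.0) - (1.0 if c == i - 1 else 0.0)
                         for c in range(m)])
    return rows

def mv(M, x):
    return [sum(M[l][c] * x[c] for c in range(m)) for l in range(m)]

def V_eta(u, vs, p):         # (S5.15) with k1 = 3: vs = (v1, v2, v3)
    total = p[0] * eta(*u, *vs[0])
    total += p[1] * sum(eta(*u, *mv(mat(j), vs[0])) for j in (1, 2))
    total += p[2] * sum(eta(*mv(mat(i), u), *vs[i]) for i in (1, 2))
    total += p[3] * sum(eta(*mv(mat(i), u), *mv(mat(j), vs[i]))
                        for i in (1, 2) for j in (1, 2))
    return total

def Phi(x, y, z, p):         # (S5.18) built from the theta's of (S5.17)
    th11 = eta(*[x] * m, *[y] * m)
    th12 = sum(eta(*[x] * m, *mv(mat(j), [y] * m)) for j in (1, 2))
    th21 = sum(eta(*mv(mat(i), [x] * m), *[z] * m) for i in (1, 2))
    th22 = sum(eta(*mv(mat(i), [x] * m), *mv(mat(j), [z] * m))
               for i in (1, 2) for j in (1, 2))
    return -(p[0] * th11 + p[1] * th12 + p[2] * th21 + p[3] * th22)

p = (1.3, 0.7, 2.0, 0.4)
x, y, z = 0.5, -1.0, 2.0
u, vs = [x] * m, ([y] * m, [z] * m, [z] * m)   # a point of the subspace C
assert abs(V_eta(u, vs, p) + Phi(x, y, z, p)) < 1e-12
print("V_eta = -Phi on the subspace C")
```

The check works because Ai applied to a constant vector x·1 gives −x e_i, so every η-evaluation in (S5.15) matches one in (S5.17) term by term.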
However, before discussing the weaker version of Fisher consistency, we need to establish more properties of Φ and the ϑ(·; i, j)'s. Lemma S5.5 establishes that Φ(·; p) has a unique minimizer for any p ∈ R^4_{>0}. This lemma also establishes that the −ϑ's and Φ are proper, closed, and strictly convex, where a proper convex function was defined in Section S5.1. Lemma S5.5 is proved in Section S5.4.5.

Lemma S5.5. Under the setup of Theorem 3.1, Φ(·; p) defined in (S5.18) has a unique minimizer over R^3 for all p ∈ R^4_{>0}. Moreover,

(x*(p), y*(p), z*(p)) ≡ (x*, y*, z*) = argmin_{x,y,z∈R} Φ(x, y, z; p) (S5.20)

satisfies x*(p) = u*1(p), y*(p) = v*11(p), and z*(p) = v*21(p). Furthermore, the −ϑ(·; i, j)'s, as well as Φ(·; p), are closed and proper strictly convex functions for all i, j ∈ [2] and p ∈ R^4_{>0}, and Φ(·; p) is 0-coercive for all p ∈ R^4_{>0}.

An important quantity for us will be Λ*(p) = (x*(p), y*(p), z*(p)), which uniquely exists for each p ∈ R^4_{>0} by Lemma S5.5. When p is clear from the context, we will drop p from the notation of x*(p), y*(p), and z*(p).

S5.3. Step 2

In this step, we will show that the Λ*(p) defined in (S5.20) satisfies some restrictions due to the concavity of ϑ. Then
we will show that, if ψ is Fisher consistent, then ϑ satisfies some properties that violate the abovementioned restrictions. This contradiction will imply that ψ cannot be Fisher consistent, thus completing the proof.

It is hard to find a closed form expression for Λ*(p) or to analyze its behavior for an arbitrary p. In particular, for an arbitrary p ∈ R^4_{>0}, it is not guaranteed that Λ*(p) lies in the interior or the relative interior of dom(Φ(·; p)). However, if Λ*(p) ∉ int(dom(Φ(·; p))), we will need to deal with several pathological issues when applying the tools of convex analysis. Therefore, for the rest of the proof, we will focus on a neighborhood of a special p, namely p = p0 = (1,1,1,1). We will later show that if p is in a small neighborhood of p0, then Λ*(p) ∈ int(dom(Φ(·; p))), which makes the application of convex analysis tools easier. Lemma S5.6 implies that Λ*(p0) = (0,0,0).

Lemma S5.6. Suppose p0 = (1,1,1,1). Then Λ*(p0) = (0,0,0).

Proof of Lemma S5.6. This lemma follows trivially from (S5.17) and Lemma S5.2.

We break down Step 2 into further substeps.

2a. In this step, we establish smoothness properties of ϑ and related functions in a neighborhood of p0.

2b. In this step, we establish the continuity properties of the map p ↦ Λ*(p). We will also show that p ↦ y*(p) and p ↦ z*(p) are locally Lipschitz at p = p0. These results enable control over the behavior of Λ*(p) for p's in a neighborhood of p0.

2c. We will show that the concavity of ϑ induces some restrictions on Λ*(p) for all p in a neighborhood of p0.

2d. We will show that if ψ is Fisher consistent, then ϑ satisfies some properties, which will collectively be referred to as Property 1.

2e. We will use the restrictions on Λ*(p) derived in Step 2c to show that ϑ cannot satisfy Property 1, which will imply that ψ cannot be Fisher consistent.

S5.3.1. Step 2a: properties of ϑ and related functions

In this step, we will mainly establish different properties of ϑ, Φ, and Λ*(p).
The function Φ(·; p) is important for us because Λ*(p) is defined through it and, as mentioned previously, the proof hinges on Λ*(p). Another important function for us is

Φ*(p) = inf_{w∈R^3} Φ(w; p), p ∈ R^4_{>0}. (S5.21)

Note that since Λ*(p) is the solution to the above minimization problem, we also have Φ*(p) = Φ(Λ*(p); p) for any p ∈ R^4_{>0}. Fact S5.3 below implies that −Φ* is well-behaved in that it is a convex function that is continuous and finite on R^4_{>0}. This fact requires −η to be bounded below, which is satisfied under the setup of Theorem 3.1. Fact S5.3 is proved in Section S5.4.6.

Fact S5.3. Suppose −η is bounded below. Then −Φ* is a convex function with int(dom(−Φ*)) ⊃ R^4_{>0}. Also, Φ* is continuous on R^4_{>0}. Moreover, |Φ*(p)| < ∞ for all p ∈ R^4_{>0}.

Lemma S5.7 below implies that the ϑ(·; i, j)'s are thrice continuously differentiable at (0,0) for i, j ∈ [2]. In particular, there is an open neighborhood U0 of 0 so that the ϑ(·; i, j)'s are thrice continuously differentiable on U0^2 for all i, j ∈ [2] and Φ(·; p) is thrice continuously differentiable on U0^3 for all
p ∈ R^4_{>0}. Lemma S5.7 is proved in Section S5.4.7.

Lemma S5.7. Under the setup of Theorem 3.1, there exists an open neighborhood U0 of 0 so that

1. U0^2 ⊂ int(dom(ϑ(·; i, j))) for all i, j ∈ [2].
2. U0^3 ⊂ int(dom(Φ(·; p))) for all p ∈ R^4_{>0}.
3. The ϑ(·; i, j)'s are thrice continuously differentiable on U0^2 for all i, j ∈ [2].
4. Φ(·; p) is thrice continuously differentiable on U0^3 for all p ∈ R^4_{>0}.

S5.3.2. Step 2b: continuity properties of Λ*

In this section, we will prove the continuity properties of Λ*. In particular, we will show that Λ* is continuous. Moreover, we will show that y* and z* are locally Lipschitz at p0. Lemma S5.8 proves the continuity of the map p ↦ Λ*(p). This lemma is proved in Section S5.4.8.

Lemma S5.8. Under the setup of Theorem 3.1, p ↦ Λ*(p) is a continuous map at any p ∈ R^4_{>0}.

One immediate corollary of the continuity of Λ* is that Λ*(p) ∈ U0^3 for all p in a small neighborhood of p0.

Lemma S5.9. Under the setup of Theorem 3.1, there exists δ > 0 so that if p ∈ B(p0, δ), then Λ*(p) ∈ U0^3, where U0 is as in Lemma S5.7.

Proof of Lemma S5.9. Lemma S5.6 implies that Λ*(p0) = (0,0,0). By the continuity of Λ* (Lemma S5.8), we can choose δ small enough so that if p ∈ B(p0, δ), then Λ*(p) ∈ U0^3.

Lemma S5.10 below establishes the Lipschitz continuity of y* and z* at p0.

Lemma S5.10. Consider the setup of Theorem 3.1. Then there exist a δ > 0 and C ≥ 0 such that for all p ∈ B(p0, δ), |y*(p)| ≤ C∥p−p0∥2 and |z*(p)| ≤ C∥p−p0∥2.

Lemma S5.10 is proved in Section S5.4.9. Since Λ*(p0) = (0,0,0) by Lemma S5.6, the above can also be re-written as |y*(p)−y*(p0)| ≤ C∥p−p0∥2 and |z*(p)−z*(p0)| ≤ C∥p−p0∥2, which yields the Lipschitz continuity of y* and z* at p0.

S5.3.3. Step 2c: geometric restrictions on ϑ

In this step, we show that ϑ and Λ*(p) need to satisfy a system of inequalities when p ∈ B(p0, δ), where δ is smaller than the δ in Lemma S5.9.
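The phenomena in Lemmas S5.8 and S5.10 can be illustrated on a one-dimensional toy objective (a hypothetical f, not the paper's Φ): for f(x; p) = p1(x−1)² + p2(x+1)², the minimizer x*(p) = (p1−p2)/(p1+p2) is continuous in p, equals 0 at the symmetric point p0 = (1,1), and moves Lipschitz-ly in p near p0.

```python
# Toy analogue of Lemmas S5.8-S5.10: minimizer of a strictly convex objective
# as a function of the weights.  For f(x; p) = p1*(x-1)**2 + p2*(x+1)**2 the
# unique minimizer is x*(p) = (p1 - p2)/(p1 + p2).
def xstar(p1, p2):
    return (p1 - p2) / (p1 + p2)

assert xstar(1.0, 1.0) == 0.0                     # analogue of Lambda*(p0) = (0,0,0)
for (p1, p2) in ((1.1, 0.9), (0.95, 1.05), (1.01, 1.0)):
    # local Lipschitz bound |x*(p) - x*(p0)| <= |p1 - p2| / 2 near p0
    assert abs(xstar(p1, p2) - xstar(1.0, 1.0)) <= abs(p1 - p2) / 2 + 1e-12
print("minimizer moves continuously, with a local Lipschitz bound at p0")
```

As in the paper, the symmetry of the objective at p0 pins the minimizer to the origin, and perturbing the weights moves it at most proportionally to the perturbation.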
For such p's, Λ*(p) ∈ U0^3, which is necessary to guarantee the differentiability of Φ(·; p) at Λ*(p), as well as to ensure that Λ*(p) ∈ int(dom(Φ(·; p))) and Λ*(p) ∈ int(dom(−ϑ(·; i, j))) for all i, j ∈ [2]. Suppose x ∈ U0, so that (x, y*(p), z*(p)) ∈ U0^3. Therefore, Φ(·; p) is differentiable at (x, y*(p), z*(p)) by Lemma S5.7. Since Λ*(p) − (x, y*(p), z*(p)) = (x*(p)−x, 0, 0) and Φ is convex, from the definition of the subdifferential (pp. 167, (1.2.1), Hiriart-Urruty and Lemaréchal, 2004), it follows that

Φ(Λ*(p); p) − Φ(x, y*(p), z*(p); p) ≥ (x*(p)−x) ∂Φ(x, y*(p), z*(p); p)/∂x.

In particular, if x ≠ x*(p), then by strict convexity the above inequality is strict; since Λ*(p) is the minimizer, the left-hand side is nonpositive. Thus we have, for any x ≠ x*(p),

(x*(p)−x) ∂Φ(x, y*(p), z*(p); p)/∂x < 0.

Since 0 ∈ U0, we can choose x = 0, which implies that for all p ∈ B(p0, δ),

x*(p) ∂Φ(0, y*(p), z*(p); p)/∂x < 0. (S5.22)

Similarly, we can show that

z*(p) ∂Φ(x*(p), y*(p), 0; p)/∂z < 0, y*(p) ∂Φ(x*(p), 0, z*(p); p)/∂y < 0 (S5.23)

for all p ∈ B(p0, δ). Note that the concavity of ϑ, which leads to the convexity of Φ(·; p), is the main reason behind the above inequalities. Now we will calculate the partial derivatives. To this end, note that

∂Φ(x*(p), 0, z*(p); p)/∂y = −p11 ∂ϑ(x*(p), 0; 1,1)/∂y − p12 ∂ϑ(x*(p), 0; 1,2)/∂y,

∂Φ(x*(p), y*(p), 0; p)/∂z = −p21 ∂ϑ(x*(p), 0; 2,1)/∂z − p22 ∂ϑ(x*(p), 0; 2,2)/∂z,

∂Φ(0, y*(p), z*(p); p)/∂x = −p11 ∂ϑ(0, y*(p); 1,1)/∂x − p12 ∂ϑ(0, y*(p); 1,2)
/∂x − p21 ∂ϑ(0, z*(p); 2,1)/∂x − p22 ∂ϑ(0, z*(p); 2,2)/∂x (S5.24)

for all p ∈ B(p0, δ), provided δ is smaller than the δ in Lemma S5.9. The inequalities (S5.22) and (S5.23) entail a restriction on ϑ and Λ*(p).

S5.3.4. Step 2d: implication of Fisher consistency

In this section, we will show that the Fisher consistency of ψ enforces some restrictions on ϑ. To this end, we define a set of conditions, which we will refer to as Property 1. One may think of these conditions as a weak version of Fisher consistency. As mentioned earlier, one can show that when ψ is Fisher consistent, the margin-based loss defined by ϑloss(x1, x2, y1, y2; i, j) = ϑ(x1−x2, y1−y2; i, j) for all x1, x2, y1, y2 ∈ R is Fisher consistent in the two-stage binary treatment case. However, to prove this theorem, we do not need to show the above result, which would be technically more cumbersome. For the purpose of this proof, it is enough to show that, when ψ is Fisher consistent, ϑ satisfies some conditions, which are collected in Property 1.

Property 1. The function ϑ: R^2 × {1,2}^2 → R satisfies Property 1 if there exists δ > 0 so that x*(p), y*(p), and z*(p) defined in (S5.20) satisfy the following for all p ∈ B(p0, δ):

P1. If max(p11, p12) > max(p21, p22), then x*(p) > 0. Also, if max(p11, p12) < max(p21, p22), then x*(p) < 0.

P2. Suppose max(p11, p12) > max(p21, p22). Then y*(p) > 0 if p11 > p12, and y*(p) < 0 if p11 < p12. Suppose max(p21, p22) > max(p11, p12). Then z*(p) > 0 if p21 > p22, and z*(p) < 0 if p21 < p22.

Lemma S5.11, which is proved in Section S5.4.10, entails that ϑ satisfies Property 1 if ψ is Fisher consistent.

Lemma S5.11. If ψ is Fisher consistent, then the ϑ defined in (S5.17) satisfies Property 1 under the setup of Theorem 3.1.

Lemma S5.11 implies that to prove that ψ is not Fisher consistent, it suffices to show that ϑ fails to satisfy Property 1. We have p ∈ B(p0, δ) in Property 1 because in this case Lemma S5.9 ensures that Λ*(p) ∈ U0^3 for sufficiently small δ > 0, which, in combination with Lemma S5.7, yields that Λ*(p) ∈ int(dom(Φ(·; p))).
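The sign conditions (S5.22)–(S5.23) used above follow purely from strict convexity, and can be sanity-checked on a toy function (a hypothetical quadratic Φ, not the paper's): for Φ(x, y, z) = (x−1)² + (y+2)² + z² with minimizer (x*, y*, z*) = (1, −2, 0), moving the first coordinate off x* always gives (x*−x) ∂Φ/∂x(x, y*, z*) < 0.

```python
# Toy check of the first-order sign condition mirrored by (S5.22): for the
# strictly convex Phi(x,y,z) = (x-1)**2 + (y+2)**2 + z**2 with minimizer
# (1, -2, 0), the product (x* - x) * dPhi/dx(x, y*, z*) is negative off x*.
xstar_t, ystar_t, zstar_t = 1.0, -2.0, 0.0

def dPhi_dx(x, y, z):
    return 2.0 * (x - 1.0)      # partial derivative in the first coordinate

for x in (-3.0, 0.0, 0.999, 1.001, 5.0):          # any x != x*
    assert (xstar_t - x) * dPhi_dx(x, ystar_t, zstar_t) < 0
print("sign condition holds at every tested x != x*")
```

The same mechanism gives (S5.23) by varying the second or third coordinate instead.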
This ensures that we can use tools of convex analysis on Φ(·; p) to prove Lemma S5.11. When Λ*(p) ∉ int(dom(Φ(·; p))), showing P1 and P2 of Property 1 is difficult for arbitrary p's; proving Lemma S5.11, which relies on the continuity of Φ(·; p) at Λ*(p), is challenging in that case.

S5.3.5. Step 2e: proving the contradiction

This is the final step of the proof. In this step, we will show that ϑ cannot satisfy Property 1. Then Lemma S5.11 will imply that ψ cannot be Fisher consistent, completing the proof. Since ϑ(·; i, j) is differentiable at (0,0) by Lemma S5.7 for all i, j ∈ [2], at least one of the following three cases must hold: (i) ∂ϑ(0,0; 1,1)/∂x, ∂ϑ(0,0; 1,2)/∂x, ∂ϑ(0,0; 2,1)/∂x, and ∂ϑ(0,0; 2,2)/∂x are all zero, (ii) at least one of ∂ϑ(0,0; 1,1)/∂x and ∂ϑ(0,0; 1,2)/∂x is non-zero, and (iii) at least one of ∂ϑ(0,0; 2,1)/∂x and ∂ϑ(0,0; 2,2)/∂x is non-zero. We will show that in none of these cases does ϑ satisfy Property 1. Since the proofs are similar for cases (ii) and (iii), we present the proof only for cases (i) and (ii).

Case (i): the ∂ϑ(0,0; i, j)/∂x's are zero for all i, j ∈ [2]

The following lemma implies that if the ∂ϑ(0,0; i, j)/∂x's are all zero, then
the ϑ(·; i, j)'s cannot satisfy Property 1.

Lemma S5.12. Under the setup of Theorem 3.1, if ∂ϑ(0,0; 1,1)/∂x = ∂ϑ(0,0; 1,2)/∂x = ∂ϑ(0,0; 2,1)/∂x = ∂ϑ(0,0; 2,2)/∂x = 0, then the ϑ(·; i, j)'s cannot satisfy Property 1.

Proof of Lemma S5.12. Note that ∂Φ(0,0,0; p)/∂x = 0 for all p ∈ R^4_{>0} if ∂ϑ(0,0; 1,1)/∂x = ∂ϑ(0,0; 1,2)/∂x = ∂ϑ(0,0; 2,1)/∂x = ∂ϑ(0,0; 2,2)/∂x = 0. Consider p ∈ R^4_{>0} of the form (c1, c1, c2, c2) where c1 > c2 > 0. If ϑ satisfies Property 1, then x*(p) > 0 for all such p, provided c1 and c2 are sufficiently close to one. However, for p's of this form, (S5.49) implies that ∂Φ(0,0,0; p)/∂y = 0 and ∂Φ(0,0,0; p)/∂z = 0. Therefore, ∇Φ(0,0,0; p) = 0. Since Φ(·; p) is strictly convex for all p ∈ R^4_{>0}, ∇Φ(0,0,0; p) = 0 implies that (0,0,0) is the unique minimizer of Φ(·; p) (cf. Theorem 2.2.1, pp. 177, Hiriart-Urruty and Lemaréchal, 2004). Therefore, for our chosen p, Λ*(p) = (0,0,0), i.e., x*(p) = 0, although max(p11, p12) > max(p21, p22). Given any δ > 0, we can find c1 > c2 > 0 such that (c1, c1, c2, c2) ∈ B(p0, δ). Therefore, Property 1 cannot hold.

Case (ii): ∂ϑ(0,0; 1,1)/∂x and/or ∂ϑ(0,0; 1,2)/∂x is non-zero

Our first task is to choose a δ0 > 0; the choice is explained below. Since the ϑ(·; i, j)'s are thrice continuously differentiable at (0,0) by Lemma S5.7, there exists ϵ0 > 0 so that

sup_{|y|<ϵ0} |∂³ϑ(0, y; i, j)/∂y²∂x| < |∂³ϑ(0,0; i, j)/∂y²∂x| + 1 (S5.25)

for all i, j ∈ [2]. By Lemma S5.8, there exists δ > 0 such that for all p ∈ B(p0, δ), it holds that ∥Λ*(p) − Λ*(p0)∥2 ≤ ϵ0. We choose our δ0 > 0 to be smaller than this δ, smaller than one, and also smaller than the δ's in Lemmas S5.9 and S5.10. If Property 1 holds, we also take δ0 to be smaller than the δ mentioned in Property 1. In this proof, we will consider p's of the form p = (1 + ∆1, 1 + ∆2, 1, 1), where ∆1 and ∆2 are reals satisfying ∆1² + ∆2² < δ0², which ensures p ∈ B(p0, δ0). We will choose ∆1 and ∆2 so that max(1 + ∆1, 1 + ∆2) > 1. If Property 1 holds and δ0 is chosen as described, our p's will satisfy x*(p) > 0 by P1 of Property 1.

Lemma S5.13.
Suppose ϑ satisfies Property 1 and δ0 is chosen as described above. If x*(p) > 0 for some p = (1 + ∆1, 1 + ∆2, 1, 1) ∈ B(p0, δ0), then

∆1 (∂ϑ(0,0; 1,1)/∂x + y*(p) ∂²ϑ(0,0; 1,1)/∂y∂x + C∆1) + ∆2 (∂ϑ(0,0; 1,2)/∂x + y*(p) ∂²ϑ(0,0; 1,2)/∂y∂x + C∆2) > 0 (S5.26)

for some constant C > 0 depending only on ϑ.

Proof of Lemma S5.13. Because δ0 is smaller than the δ in Lemma S5.9, Λ*(p) ∈ U0^3 for all p ∈ B(p0, δ0). Since Λ*(p) ∈ U0^3, we also have (0, y*(p), z*(p)) ∈ U0^3, so Φ(·; p) is thrice continuously differentiable at (0, y*(p), z*(p)). Using (S5.24), we obtain that

−∂Φ(0, y*(p), z*(p); p)/∂x = p11 ∂ϑ(0, y*(p); 1,1)/∂x + p12 ∂ϑ(0, y*(p); 1,2)/∂x + p21 ∂ϑ(0, z*(p); 2,1)/∂x + p22 ∂ϑ(0, z*(p); 2,2)/∂x. (S5.27)

Since Λ*(p) ∈ U0^3, (0, y*(p)) ∈ U0^2. Therefore, the ϑ(·; i, j)'s are thrice continuously differentiable at (0, y*(p)). Since the line joining y*(p) and 0 lies in U0, we can take a second order Taylor expansion of y ↦ ∂ϑ(0, y; 1,1)/∂x at y*(p) around zero, which yields

∂ϑ(0, y*(p); 1,1)/∂x = ∂ϑ(0,0; 1,1)/∂x + y*(p) ∂²ϑ(0,0; 1,1)/∂y∂x + (y*(p)²/2) ∂³ϑ(0, ξ1; 1,1)/∂y²∂x,
where ξ1 is between 0 and y*(p). We can take a similar Taylor expansion for all the other terms, leading to

−∂Φ(0, y*(p), z*(p); p)/∂x = −∂Φ(0; p)/∂x + y*(p) (∆1 ∂²ϑ(0,0; 1,1)/∂y∂x + ∆2 ∂²ϑ(0,0; 1,2)/∂y∂x) + y*(p) M12(0,0) + z*(p) N12(0,0) + (1 + ∆1)(y*(p)²/2) ∂³ϑ(0, ξ1; 1,1)/∂y²∂x + (1 + ∆2)(y*(p)²/2) ∂³ϑ(0, ξ2; 1,2)/∂y²∂x + (z*(p)²/2) ∂³ϑ(0, ξ3; 2,1)/∂y²∂x + (z*(p)²/2) ∂³ϑ(0, ξ4; 2,2)/∂y²∂x,

where ξ2 is between 0 and y*(p), ξ3 and ξ4 are between 0 and z*(p), and M12 and N12 are as defined in Section S5.4.9. Note that M12(0,0) and N12(0,0) vanish by Lemma S5.14. Also,

−∂Φ(0; p)/∂x = −∂Φ(0; p0)/∂x + ∆1 ∂ϑ(0,0; 1,1)/∂x + ∆2 ∂ϑ(0,0; 1,2)/∂x,

whose first term is zero by (S5.49). Note that (S5.22) implies that when x*(p) > 0, we must have −∂Φ(0, y*, z*; p)/∂x > 0. Therefore, (S5.27) yields that

∆1 ∂ϑ(0,0; 1,1)/∂x + ∆2 ∂ϑ(0,0; 1,2)/∂x + y*(p) (∆1 ∂²ϑ(0,0; 1,1)/∂y∂x + ∆2 ∂²ϑ(0,0; 1,2)/∂y∂x) + (1 + ∆1)(y*(p)²/2) ∂³ϑ(0, ξ1; 1,1)/∂y²∂x + (1 + ∆2)(y*(p)²/2) ∂³ϑ(0, ξ2; 1,2)/∂y²∂x + (z*(p)²/2) ∂³ϑ(0, ξ3; 2,1)/∂y²∂x + (z*(p)²/2) ∂³ϑ(0, ξ4; 2,2)/∂y²∂x > 0. (S5.28)

We have chosen δ0 so that ∥Λ*(p)∥2 ≤ ϵ0 for all p ∈ B(p0, δ0), where ϵ0 is as in (S5.25). Since ∥Λ*(p)∥2 < ϵ0, |y*(p)| and |z*(p)| are smaller than ϵ0. Therefore, ξ1, ξ2, ξ3, and ξ4 are smaller than ϵ0 in absolute value. Thus (S5.25) yields

|∂³ϑ(0, ξ1; 1,1)/∂y²∂x| < |∂³ϑ(0,0; 1,1)/∂y²∂x| + 1, |∂³ϑ(0, ξ2; 1,2)/∂y²∂x| < |∂³ϑ(0,0; 1,2)/∂y²∂x| + 1, |∂³ϑ(0, ξ3; 2,1)/∂y²∂x| < |∂³ϑ(0,0; 2,1)/∂y²∂x| + 1, |∂³ϑ(0, ξ4; 2,2)/∂y²∂x| < |∂³ϑ(0,0; 2,2)/∂y²∂x| + 1.

In addition, since δ0 is chosen to be smaller than one, |∆1|, |∆2| < 1. Thus

(1 + ∆1)(y*(p)²/2) ∂³ϑ(0, ξ1; 1,1)/∂y²∂x + (1 + ∆2)(y*(p)²/2) ∂³ϑ(0, ξ2; 1,2)/∂y²∂x + (z*(p)²/2) ∂³ϑ(0, ξ3; 2,1)/∂y²∂x + (z*(p)²/2) ∂³ϑ(0, ξ4; 2,2)/∂y²∂x ≤ C(y*(p)² + z*(p)²),

where the constant C > 0 depends only on the ϑ(·; i, j)'s. However, by Lemma S5.10, y*(p)² and z*(p)² are bounded by C∥p−p0∥2² = C(∆1² + ∆2²), where C > 0 is a constant depending on the ϑ(·; i, j)'s.
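The remainder control used above is the standard second-order Taylor bound: if a function has a bounded third derivative near 0, the error of the first-order expansion of its derivative is at most (sup |g'''|) y²/2. A quick check on a generic smooth function (g = sin, unrelated to the paper's ϑ):

```python
# Illustration of the Taylor remainder bound: for g(y) = sin(y) we have
# g'(y) = cos(y), g'(0) = 1, g''(0) = 0, and sup|g'''| = 1, so
# |g'(y) - (g'(0) + y*g''(0))| = |cos(y) - 1| <= y**2 / 2.
import math

for y in (-0.5, -0.1, 0.05, 0.3):
    remainder = abs(math.cos(y) - (1.0 + y * 0.0))
    assert remainder <= y * y / 2 + 1e-15
print("second-order Taylor remainder bounded by y^2/2 for g = sin")
```

In the proof, Lemma S5.10 then converts the y*(p)² and z*(p)² factors into C(∆1² + ∆2²), giving the quadratic error terms in (S5.26).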
Therefore,

(1 + ∆1)(y*(p)²/2) ∂³ϑ(0, ξ1; 1,1)/∂y²∂x + (1 + ∆2)(y*(p)²/2) ∂³ϑ(0, ξ2; 1,2)/∂y²∂x + (z*(p)²/2) ∂³ϑ(0, ξ3; 2,1)/∂y²∂x + (z*(p)²/2) ∂³ϑ(0, ξ4; 2,2)/∂y²∂x ≤ C(∆1² + ∆2²).

The above, combined with (S5.28), implies (S5.26).

Now we are ready to show that under case (ii), ϑ does not satisfy Property 1. We divide case (ii) into two subcases: (a) ∂ϑ(0,0; 1,1)/∂x < 0 or ∂ϑ(0,0; 1,2)/∂x < 0, and (b) ∂ϑ(0,0; 1,1)/∂x > 0 or ∂ϑ(0,0; 1,2)/∂x > 0. If (ii) holds, then either (ii)(a) or (ii)(b) must hold. There can be overlap between subcases (a) and (b), but this does not affect the proof. We will show that, if ϑ satisfies Property 1, then we encounter a contradiction in either subcase.

Case (ii)(a): ∂ϑ(0,0; 1,1)/∂x < 0 or ∂ϑ(0,0; 1,2)/∂x < 0

First consider the case ∂ϑ(0,0; 1,1)/∂x < 0. If possible, suppose Property 1 holds. Consider ∆1 = 1/k and ∆2 = 1/k², where k ∈ N is large enough that the resulting pk = (1 + 1/k, 1 + 1/k², 1, 1) is in B(p0, δ0). Observe that p = pk satisfies max(p11, p12) > max(p21, p22). Therefore, Property 1 implies x*(pk) > 0 for all such k ∈ N. Hence, (S5.26) holds by Lemma S5.13. For this ∆
1 and ∆2, (S5.26) reduces to

(1/k)(∂ϑ(0,0; 1,1)/∂x + y*(pk) ∂²ϑ(0,0; 1,1)/∂y∂x + C/k) + (1/k²)(∂ϑ(0,0; 1,2)/∂x + y*(pk) ∂²ϑ(0,0; 1,2)/∂y∂x + C/k²) > 0

for all pk of the form (1 + 1/k, 1 + 1/k², 1, 1) with k ∈ N sufficiently large. Multiplying both sides by k, we get

∂ϑ(0,0; 1,1)/∂x + y*(pk) ∂²ϑ(0,0; 1,1)/∂y∂x + C/k + (1/k)(∂ϑ(0,0; 1,2)/∂x + y*(pk) ∂²ϑ(0,0; 1,2)/∂y∂x + C/k²) > 0.

Letting k → ∞, and observing that Lemma S5.8 implies y*(pk) → 0 as k → ∞, we obtain ∂ϑ(0,0; 1,1)/∂x ≥ 0, which leads to a contradiction. Therefore, ϑ cannot satisfy Property 1 if ∂ϑ(0,0; 1,1)/∂x < 0. Similarly, we can show that ϑ cannot satisfy Property 1 if ∂ϑ(0,0; 1,2)/∂x < 0 by taking ∆1 = 1/k² and ∆2 = 1/k. Therefore, we have shown that if either ∂ϑ(0,0; 1,1)/∂x < 0 or ∂ϑ(0,0; 1,2)/∂x < 0, Property 1 cannot hold.

Case (ii)(b): ∂ϑ(0,0; 1,1)/∂x > 0 or ∂ϑ(0,0; 1,2)/∂x > 0

First consider the case ∂ϑ(0,0; 1,1)/∂x > 0. If possible, suppose ϑ satisfies Property 1. Take ∆1 = −1/k and ∆2 = 1/k². Let us denote the resulting sequence (1 − 1/k, 1 + 1/k², 1, 1) by pk. Note that pk ∈ B(p0, δ0) if k is sufficiently large, and x*(pk) > 0 for all such k by Property 1. Then (S5.26) holds by Lemma S5.13. In this case, (S5.26) reduces to

−(1/k)(∂ϑ(0,0; 1,1)/∂x + y*(pk) ∂²ϑ(0,0; 1,1)/∂y∂x − C/k) + (1/k²)(∂ϑ(0,0; 1,2)/∂x + y*(pk) ∂²ϑ(0,0; 1,2)/∂y∂x + C/k²) > 0

for all sufficiently large k. Multiplying both sides by k, we get

−(∂ϑ(0,0; 1,1)/∂x + y*(pk) ∂²ϑ(0,0; 1,1)/∂y∂x − C/k) + (1/k)(∂ϑ(0,0; 1,2)/∂x + y*(pk) ∂²ϑ(0,0; 1,2)/∂y∂x + C/k²) > 0.

Letting k → ∞ and observing y*(pk) → 0 by Lemma S5.8, we obtain −∂ϑ(0,0; 1,1)/∂x ≥ 0, which is a contradiction. Therefore, ϑ cannot satisfy Property 1 in this case. The proof for the case ∂ϑ(0,0; 1,2)/∂x > 0 follows similarly by taking ∆1 = 1/k² and ∆2 = −1/k and letting k → ∞.

S5.4. Proofs of main lemmas

S5.4.1. Proof of Lemma S5.1

Proof of Lemma S5.1. In this proof, we will work with −η because it is convex. Because ∩_{i=1}^{k1} ∩_{j=1}^{k2} int(dom(ψ(·; i, j))) ≠ ∅, there exist x ∈ R^{k1} and w ∈ R^{k2} so that (x, w) ∈ int(dom(ψ(·; i, j))) for all i ∈ [k1] and j ∈ [k2]. Let us fix i ∈ [k1].
Our first task is to show that 0_{k1+k2−2} ∈ int(dom(−η)). Let us denote ∆ix = (xi−x1, ..., xi−xi−1, xi−xi+1, ..., xi−xk1). Then for all j ∈ [k2],

−η(∆ix, wj−w1, ..., wj−wk2) < ∞,

where the term wj−wj is omitted. Since η is blockwise permutation symmetric by (S5.1), for all j ∈ [2:k2],

−η(∆ix, wj−wj+1, wj−wj+2, ..., wj−wk2, wj−w1, ..., wj−wj−1) = −η(∆ix, wj−w1, ..., wj−wk2) < ∞.

For the sake of algebra, we will introduce wj for j > k2. We extend the definition of wj to all j ∈ N by letting wj = w_{j mod k2}. Therefore, we can write w1 = wk2+1, ..., wj−1 = wj+k2−1, and so on. According to our notation,

η(∆ix, wj−wj+1, ..., wj−wk2, wj−w1, ..., wj−wj−1) = η(∆ix, wj−wj+1, ..., wj−wj+k2−1)

for all j ∈ [k2]. Therefore, using this notation,

−η(∆ix, wj−wj+1, ..., wj−wj+k2−1) < ∞ (S5.29)

for all j ∈ [k2]. On the other hand, since −η is convex, Jensen's inequality implies that

−(1/k2) Σ_{j∈[k2]} η(∆ix, wj−wj+1, ..., wj−wj+k2−1) ≥ −η(∆ix, Σ_{j∈[k2]}(wj−wj+1)/k2, ..., Σ_{j∈[k2]}(wj−wj+k2−1)/k2), (S5.30)

where wj = w_{j mod k2} as per our notation. Moreover,

Σ_{j∈[k2]}(wj−wj+1) = Σ_{j∈[k2]} wj − Σ_{j∈[k2]} wj+1 = Σ_{j∈[k2]} wj − Σ_{j=2}^{k2} wj − wk2+1 =