TRANSPORT f-DIVERGENCES

$= \int_{-\infty}^{x} q(y)\,dy$. Hence $\frac{du}{dx} = q(x)$ and $x = Q_q(u)$. From the chain rule, we have $q(x) = \frac{du}{dx} = (\frac{dx}{du})^{-1} = \frac{1}{Q_q'(u)}$. Thus
\[
\frac{Q_p'(u)}{Q_q'(u)} = \frac{d}{du} Q_p(F_q(x)) \cdot \frac{du}{dx} = \frac{d}{dx} Q_p(F_q(x)) = T'(x),
\]
where the second equality holds by the chain rule, and the last equality holds because $T(x) = Q_p(F_q(x))$. Substituting the above calculations into (5), we derive (7).

(iii) Denote the change of variables by $x = T_q(z)$. Since $(T_q)_{\#}p_{\mathrm{ref}} = q$, then $p_{\mathrm{ref}}(z) = q(T_q(z))\,T_q'(z)$; in other words, $p_{\mathrm{ref}}(z)\,dz = q(x)\,dx$. Note that $T_p(z) = Q_p(F_{p_{\mathrm{ref}}}(z))$ and $T_q(z) = Q_q(F_{p_{\mathrm{ref}}}(z))$. Hence $T(x) = T(T_q(z)) = T_p(z)$, and
\[
T'(x) = \frac{d}{dz} T(T_q(z)) \cdot \frac{dz}{dx} = \frac{d}{dz} T_p(z) \cdot \Big(\frac{dx}{dz}\Big)^{-1} = \frac{T_p'(z)}{T_q'(z)}.
\]
Substituting the above calculations into (5), we derive formulation (8). $\square$

3.3. Properties. We next present several properties of transport f-divergences.

Proposition 2 (Properties). The following properties hold:

(i) Nonnegativity: the transport f-divergence is nonnegative, $D_{T,f}(p\|q) \ge 0$, and $D_{T,f}(p\|q) = 0$ if and only if $p$ equals $q$ up to a constant shift in their variables, i.e., there exists a constant $c \in \mathbb{R}$ such that $p(x+c) = q(x)$.

(ii) Entropy: let $\Omega = [0,1]$ and $q(x) = 1$. Then the transport f-divergence equals the following negative entropy:
\[
D_{T,f}(p\|1) = \int_\Omega f\Big(\frac{1}{p(x)}\Big)\,p(x)\,dx.
\]

(iii) Additivity and scaling: suppose $f_1, f_2$ are convex functions and $a > 0$; then $D_{T,f_1+af_2}(p\|q) = D_{T,f_1}(p\|q) + a\,D_{T,f_2}(p\|q)$.

(iv) Duality: $D_{T,f}(p\|q) = D_{T,\hat f}(q\|p)$, where $\hat f(u) = f(\frac{1}{u})$.

(v) Transport invariance: let $k\colon \Omega \to \Omega$ be a smooth invertible mapping function, and denote by $k^{-1}$ its inverse. Let $\tilde q = (k^{-1})_{\#}q$ and $\tilde p = \tilde T_{\#}\tilde q$, with
\[
\tilde T(x) = \int_0^x T'(k(y))\,dy + c, \tag{9}
\]
for any constant $c \in \mathbb{R}$. Then $D_{T,f}(p\|q) = D_{T,f}(\tilde p\|\tilde q)$.

(vi) Transport convexity in the first variable: given any densities $p_1, p_2, q \in \mathcal{P}(\Omega)$ with $p_1 = T_{1\#}q$ and $p_2 = T_{2\#}q$, where $T_1, T_2$ are two monotone mapping functions, denote $p_\lambda = \big(\lambda T_1 + (1-\lambda) T_2\big)_{\#}q$.
Then for any constant $\lambda \in [0,1]$, we have
\[
D_{T,f}(p_\lambda\|q) \le \lambda\,D_{T,f}(p_1\|q) + (1-\lambda)\,D_{T,f}(p_2\|q).
\]

Proof. The proofs are based on the definitions of transport f-divergences.

(i) The nonnegativity follows from the fact that $f$ is a positive convex function. Suppose $D_{T,f}(p\|q) = 0$. Following (5), we have $f(T'(x)) = 0$ for $x$ in the support of $q(x)$. In other words, $T'(x) = 1$, i.e., $T(x) = x + c$, where $c \in \mathbb{R}$ is a constant. From $p = T_{\#}q$, we derive the result.

(ii) Denote $T_{\#}q = p$ and $q(x) = 1$; then
\[
D_{T,f}(p\|q) = \int_\Omega f\Big(\frac{1}{p(T(x))}\Big)\,dx = \int_\Omega f\Big(\frac{1}{p(T(x))}\Big)\,p(T(x))\,T'(x)\,dx = \int_\Omega f\Big(\frac{1}{p(T(x))}\Big)\,p(T(x))\,dT(x).
\]
Denote $y = T(x)$; then $D_{T,f}(p\|q) = \int_\Omega f(\frac{1}{p(y)})\,p(y)\,dy$.

(iii) The additivity and scaling property holds from the definition:
\[
D_{T,f_1+af_2}(p\|q) = \int_\Omega \Big(f_1(T'(x)) + a f_2(T'(x))\Big)\,q(x)\,dx = D_{T,f_1}(p\|q) + a\,D_{T,f_2}(p\|q).
\]

(iv) The proof follows from the formulation of the transport f-divergence defined in (8). Note that
\[
D_{T,f}(p\|q) = \int_\Omega f\Big(\frac{T_p'(z)}{T_q'(z)}\Big)\,p_{\mathrm{ref}}(z)\,dz = \int_\Omega \hat f\Big(\frac{T_q'(z)}{T_p'(z)}\Big)\,p_{\mathrm{ref}}(z)\,dz.
\]

(v) The proof follows from the definition of the pushforward operator. From $\tilde q = (k^{-1})_{\#}q$ and $q = k_{\#}\tilde q$, we obtain $q(k(y))\,k'(y) = \tilde q(y)$. From the definition of $\tilde T$ in (9), we have
\[
\frac{d}{dx}\tilde T(x) = \frac{d}{dx}\int_0^x T'(k(y))\,dy = T'(k(x)).
\]
Following the above two equalities, we have
\[
D_{T,f}(\tilde p\|\tilde q) = \int_\Omega f(T'(k(y)))\,\tilde q(y)\,dy = \int_\Omega f(T'(k(y)))\,q(k(y))\,k'(y)\,dy = \int_\Omega f(T'(k(y)))\,q(k(y))\,dk(y) = \int_\Omega f(T'(x))\,q(x)\,dx = D_{T,f}(p\|q),
\]
where we let $x = k(y)$ in the fourth equality.

(vi) The proof follows directly from the convexity of the function $f$. Notice that
\[
D_{T,f}(p_\lambda\|q) = \int_\Omega f\big(\lambda T_1'(x) + (1-\lambda) T_2'(x)\big)\,q(x)\,dx \le \int_\Omega \Big(\lambda f(T_1'(x)) + (1-\lambda) f(T_2'(x))\Big)\,q(x)\,dx = \lambda\,D_{T,f}(p_1\|q) + (1-\lambda)\,D_{T,f}(p_2\|q). \qquad\square
\]

3.4. Variational formulations.
https://arxiv.org/abs/2504.15515v2

In this subsection, we present variational formulations for transport f-divergences.

Theorem 1 (Transport f-dualities). Denote $\hat f(u) = f(\frac{1}{u})$, and denote by $\hat f^*$ the conjugate function of $\hat f$, with $\hat f^*(v) = \sup_{u\in\mathbb{R}}\{uv - \hat f(u)\}$. Assume $\hat f$ is strictly convex with respect to the variable $u$. Then the transport f-divergence satisfies
\[
D_{T,f}(p\|q) = \sup_{\Psi} \int_\Omega \Psi(x)\,p(T(x))\,dx - \int_\Omega \hat f^*(\Psi(x))\,q(x)\,dx, \tag{10}
\]
where the supremum is over all continuous functions $\Psi \in C(\Omega;\mathbb{R})$, and $T$ is a monotone mapping function such that $T_{\#}q = p$. The optimality condition of the supremum problem (10) is described below. Denote the transport f-duality variable
\[
\Psi_{\mathrm{opt}}(x) := \hat f'\Big(\frac{1}{T'(x)}\Big). \tag{11}
\]
Then
\[
D_{T,f}(p\|q) = \int_\Omega \Psi_{\mathrm{opt}}(x)\,p(T(x))\,dx - \int_\Omega \hat f^*(\Psi_{\mathrm{opt}}(x))\,q(x)\,dx.
\]

Proof. The proof follows from the convex conjugate of the function $\hat f$. Notice that $\hat f(u) = \sup_{v\in\mathbb{R}}\{uv - \hat f^*(v)\}$, where the supremum is attained at $v$ with $u = (\hat f^*)'(v)$. Thus
\[
D_{T,f}(p\|q) = \int_\Omega f(T'(x))\,q(x)\,dx = \int_\Omega \hat f\Big(\frac{1}{T'(x)}\Big)\,q(x)\,dx
= \sup_{\Psi} \int_\Omega \Big(\Psi(x)\cdot\frac{1}{T'(x)} - \hat f^*(\Psi(x))\Big)\,q(x)\,dx
= \sup_{\Psi} \int_\Omega \Psi(x)\,\frac{q(x)}{T'(x)}\,dx - \int_\Omega \hat f^*(\Psi(x))\,q(x)\,dx
= \sup_{\Psi} \int_\Omega \Psi(x)\,p(T(x))\,dx - \int_\Omega \hat f^*(\Psi(x))\,q(x)\,dx,
\]
where the last equality holds from equation (2). The optimality condition implies that
\[
\frac{1}{T'(x)} = (\hat f^*)'(\Psi(x)).
\]
By assumption, $\hat f$ is a strictly convex function, so $(\hat f^*)'$ is invertible. Then we have the optimality condition
\[
\Psi_{\mathrm{opt}}(x) = \big((\hat f^*)'\big)^{-1}\Big(\frac{1}{T'(x)}\Big) = \hat f'\Big(\frac{1}{T'(x)}\Big).
\]
Substituting $\Psi_{\mathrm{opt}}$ into formulation (10), we finish the proof. $\square$

We also present a relation between the transport f-dualities and the Kantorovich dualities in Wasserstein-2 distances.

Theorem 2 (Transport f-Kantorovich dualities). The following equation holds:
\[
\Psi_{\mathrm{opt}}(x) = \hat f'\Big(\frac{1}{\Phi_0''(x)+1}\Big),
\]
where $\Phi_0(x)$ is the Kantorovich duality variable defined in (4), and $\Psi_{\mathrm{opt}}$ is the transport f-duality variable defined in (11). In addition, the transport f-divergence is reformulated below:
\[
D_{T,f}(p\|q) = \int_\Omega \hat f'\Big(\frac{1}{\Phi_0''(x)+1}\Big)\,p(\Phi_0'(x)+x)\,dx - \int_\Omega \hat f^*\Big(\hat f'\Big(\frac{1}{\Phi_0''(x)+1}\Big)\Big)\,q(x)\,dx.
\]

Proof. The proof follows from a direct calculation. From Kantorovich duality, we have $\Phi_0'(x) = T(x) - x$.
By taking the derivative in the above formula, we have $\Phi_0''(x) = T'(x) - 1$. Substituting this into (11) and (10), we finish the proof. $\square$

3.5. Local behaviors and Taylor expansions. We last formulate the local behaviors and Taylor expansions of transport f-divergences.

Theorem 3 (Local behaviors). Assume $f \in C^2$ and $f(1) = f'(1) = 0$. Then
\[
\lim_{\lambda\to 0} \frac{1}{\lambda^2} D_{T,f}(p_\lambda\|q) = \frac{f''(1)}{2}\int_\Omega |T'(x)-1|^2\,q(x)\,dx,
\]
where $p_\lambda \in \mathcal{P}(\Omega)$, $\lambda \in [0,1]$, is the geodesic in Wasserstein-2 space connecting the probability densities $q, p$. In other words, $p_\lambda = \big((1-\lambda)\,\mathrm{id} + \lambda T\big)_{\#}q$, where $\mathrm{id}(x) := x$ is the identity mapping.

Proof. By the definition of $p_\lambda$, we have
\[
D_{T,f}(p_\lambda\|q) = \int_\Omega f\big((1-\lambda) + \lambda T'(x)\big)\,q(x)\,dx = \int_\Omega f\big(1 + \lambda(T'(x)-1)\big)\,q(x)\,dx.
\]
By the Taylor expansion of the function $f\big((1-\lambda)+\lambda T'(x)\big)$, we have
\[
D_{T,f}(p_\lambda\|q) = \int_\Omega \Big(f(1) + \lambda f'(1)(T'(x)-1) + \lambda^2\frac{f''(1)}{2}|T'(x)-1|^2\Big)\,q(x)\,dx + o(\lambda^2) = \lambda^2\frac{f''(1)}{2}\int_\Omega |T'(x)-1|^2\,q(x)\,dx + o(\lambda^2).
\]
In the above derivation, we apply the fact that $f(1) = f'(1) = 0$ in the second equality. This finishes the proof. $\square$

Theorem 4 (Taylor expansions in Wasserstein-2 space). Assume $f \in C^4$ and $f(1) = f'(1) = 0$. Then the following equation holds:
\[
D_{T,f}(p\|q) = \int_0^1 \Big[\frac{f''(1)}{2}\Big|\frac{Q_p'(u)-Q_q'(u)}{Q_q'(u)}\Big|^2 + \frac{f'''(1)}{6}\Big(\frac{Q_p'(u)-Q_q'(u)}{Q_q'(u)}\Big)^3\Big]\,du + O\Big(\int_0^1 \Big|\frac{Q_p'(u)-Q_q'(u)}{Q_q'(u)}\Big|^4\,du\Big).
\]
We also represent the above formula in terms of the Kantorovich duality variable $\Phi_0$ defined in (4):
\[
D_{T,f}(p\|q) = \int_\Omega \Big[\frac{f''(1)}{2}|\Phi_0''(x)|^2 + \frac{f'''(1)}{6}(\Phi_0''(x))^3\Big]\,q(x)\,dx + O\Big(\int_\Omega |\Phi_0''(x)|^4\,q(x)\,dx\Big).
\]

Proof. The proof is based on a direct calculation. Firstly, from the quantile density function formulation (7), we have
\[
D_{T,f}(p\|q) = \int_0^1 f(1+h(u))\,du, \quad\text{where } h(u) := \frac{Q_p'(u)-Q_q'(u)}{Q_q'(u)}.
\]
From the Taylor expansion of the function $f$ at $1$ and $f(1) = f'(1) = 0$, we have
\[
f(1+h(u)) = \frac{1}{2}f''(1)\,h(u)^2 + \frac{1}{6}f'''(1)\,h(u)^3 + O(|h(u)|^4).
\]
Secondly, we note that the Kantorovich duality condition (4) holds. Denote $\Phi_0'(x) = T(x) - x = Q_p(F_q(x)) - x$, and
\[
\Phi_0''(x) = \frac{d}{dx}Q_p(F_q(x)) - 1 = Q_p'(F_q(x))\,q(x) - 1.
\]
Denote $u = F_q(x)$. Then we have
\[
\int_\Omega (\Phi_0''(x))^k\,q(x)\,dx = \int_0^1 \Big(\frac{Q_p'(u)}{Q_q'(u)} - 1\Big)^k\,du, \quad\text{for } k = 2, 3.
\]
This finishes the proof. $\square$

4. Examples

In this section, we list several examples of transport f-divergences.

Example 1 (Transport total variation). Let $f(u) = |u-1|$; then
\[
D_{T,f}(p\|q) = D_{\mathrm{TTV}}(p\|q) = \int_\Omega |T'(x)-1|\,q(x)\,dx = \int_0^1 \Big|\frac{Q_p'(u)}{Q_q'(u)} - 1\Big|\,du.
\]
We call $D_{\mathrm{TTV}}$ the transport total variation (TTV). Unfortunately, it is not a distance, since $D_{\mathrm{TTV}}(p\|q) \neq D_{\mathrm{TTV}}(q\|p)$.

Example 2. Let $f(u) = |u-1|^2$; then
\[
D_{T,f}(p\|q) = \int_\Omega |T'(x)-1|^2\,q(x)\,dx = \int_0^1 \Big|\frac{Q_p'(u)}{Q_q'(u)} - 1\Big|^2\,du.
\]

Example 3 (Squared transport Hessian distance [15, 17]). Let $f(u) = |\log u|^2$; then
\[
D_{T,f}(p\|q) = \mathrm{Dist}_{\mathrm{TH}}(p,q)^2 = \int_\Omega |\log T'(x)|^2\,q(x)\,dx = \int_0^1 \Big|\log\frac{Q_p'(u)}{Q_q'(u)}\Big|^2\,du.
\]
We call $\mathrm{Dist}_{\mathrm{TH}}(p,q)$ the transport Hessian distance; see derivations in [17].

Example 4 (Transport KL divergence [16]). Let $f(u) = u - \log u - 1$; then
\[
D_{T,f}(p\|q) = D_{\mathrm{TKL}}(p\|q) = \int_\Omega \big(T'(x) - \log T'(x) - 1\big)\,q(x)\,dx = \int_0^1 \Big(\frac{Q_p'(u)}{Q_q'(u)} - \log\frac{Q_p'(u)}{Q_q'(u)} - 1\Big)\,du.
\]
We call $D_{\mathrm{TKL}}$ the transport KL divergence (TKL). It is the Bregman divergence of the negative Boltzmann-Shannon entropy in Wasserstein-2 space.

Example 5 (Transport Jensen-Shannon divergence [16]). Let $f(u) = -\frac{1}{2}\log\frac{u}{\frac{1}{4}|u+1|^2}$; then
\[
D_{T,f}(p\|q) = D_{\mathrm{TJS}}(p\|q) = -\frac{1}{2}\int_\Omega \log\frac{T'(x)}{\frac{1}{4}|T'(x)+1|^2}\,q(x)\,dx = -\frac{1}{2}\int_0^1 \log\frac{Q_p'(u)\cdot Q_q'(u)}{\frac{1}{4}|Q_p'(u)+Q_q'(u)|^2}\,du.
\]
We name it the transport Jensen-Shannon divergence (TJS). In fact, the TJS is a symmetrized transport KL divergence, i.e.,
\[
D_{\mathrm{TJS}}(p\|q) = \frac{1}{2}\Big(D_{\mathrm{TKL}}(p\|p_{\frac12}) + D_{\mathrm{TKL}}(q\|p_{\frac12})\Big),
\]
where $p_{\frac12} = (\frac12\,\mathrm{id} + \frac12 T)_{\#}q$ is the geodesic midpoint (barycenter) between the densities $p, q$ in Wasserstein-2 space.

Example 6 (Transport $\alpha$-divergences [18]). Let
\[
f_\alpha(u) = \begin{cases} \frac{1}{\alpha^2}\big(u^\alpha - \alpha\log u - 1\big), & \alpha \neq 0; \\ \frac{1}{2}|\log u|^2, & \alpha = 0. \end{cases}
\]
Then
\[
D_{T,f}(p\|q) = D_{T,\alpha}(p\|q) = \frac{1}{\alpha^2}\int_\Omega \Big((T'(x))^\alpha - \alpha\log T'(x) - 1\Big)\,q(x)\,dx = \frac{1}{\alpha^2}\int_0^1 \Big(\Big(\frac{Q_p'(u)}{Q_q'(u)}\Big)^\alpha - \alpha\log\frac{Q_p'(u)}{Q_q'(u)} - 1\Big)\,du.
\]
We call $D_{T,\alpha}$ the transport $\alpha$-divergence. The divergence $D_{T,\alpha}$ is a generalization of the transport KL divergence: if $\alpha = 1$, we have $D_{T,1}(p\|q) = D_{\mathrm{TKL}}(p\|q)$, and if $\alpha = 0$, we obtain $D_{T,0}(p\|q) = \frac{1}{2}\mathrm{Dist}_{\mathrm{TH}}(p,q)^2$. Transport $\alpha$-divergences are related with transport Hessian metric structures [18], which are analogs of information geometry methods in Wasserstein-2 spaces; see [3, 6, 8].

5. Examples

In this section, we present two analytical examples of transport f-divergences, in either location-scale families or generative models.

Example 7 (Location-scale family). Let $p_X, p_Y$ be two one-dimensional location-scale probability densities. Suppose
\[
X \sim p_X, \qquad Y := T(X) = \mu_Y + \frac{\sigma_Y}{\sigma_X}(X - \mu_X) \sim p_Y,
\]
where $\mu_X, \mu_Y \in \mathbb{R}$, $\sigma_X, \sigma_Y > 0$ are the mean values and standard deviations of the random variables $X, Y$, respectively. In this case, $T'(x) = \frac{\sigma_Y}{\sigma_X}$. Hence the transport f-divergence takes the form
\[
D_{T,f}(p_X\|p_Y) = f\Big(\frac{\sigma_Y}{\sigma_X}\Big).
\]
We note that the transport f-divergence does not depend on the location variables of the probability density functions.

Example 8 (Generative family). Suppose $\Omega = \mathbb{R}^1$. Let $p_X, p_Y$ be constructed from generative models. Consider a latent random variable $Z \sim p_{\mathrm{ref}} \in \mathcal{P}(\Omega)$, where $p_{\mathrm{ref}}$ is a given smooth reference measure. Denote a smooth invertible mapping function $G \in C^1(\Omega\times\Theta;\Omega)$, where $\Theta \subset \mathbb{R}^n$, $n \in \mathbb{N}$, is a parameter space. Let $\theta_X, \theta_Y \in \Theta$ and consider
\[
X = G(Z,\theta_X) \sim p_X, \qquad Y = G(Z,\theta_Y) \sim p_Y.
\]
By Proposition 1 (iii), the transport f-divergence satisfies
\[
D_{T,f}(p_X\|p_Y) = \mathbb{E}_{Z\sim p_{\mathrm{ref}}}\Big[f\Big(\frac{\partial_Z G(Z,\theta_X)}{\partial_Z G(Z,\theta_Y)}\Big)\Big].
\]
We remark that transport f-divergences depend on the derivative of the generative mapping functions with respect to the input variable $Z$.

6. Discussion

In this paper, we propose a class of transport f-divergences.
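The quantile-density formulas in the examples above lend themselves to direct numerical checks. As a small sketch (the density pair below is an illustrative assumption, not an example from the paper): for $q = \mathrm{Uniform}(0,1)$ we have $Q_q(u) = u$, for $p = \mathrm{Exp}(1)$ we have $Q_p(u) = -\log(1-u)$, and the squared transport Hessian distance of Example 3 is $\int_0^1 (\log(1-u))^2\,du = 2$:

```python
import math

# Numerical check of the squared transport Hessian distance (Example 3),
#   Dist_TH(p, q)^2 = \int_0^1 ( log( Q'_p(u) / Q'_q(u) ) )^2 du,
# for the illustrative pair (an assumption, not from the paper):
#   q = Uniform(0, 1):  Q_q(u) = u,           so Q'_q(u) = 1,
#   p = Exp(1):         Q_p(u) = -log(1 - u), so Q'_p(u) = 1 / (1 - u).
# Closed form: \int_0^1 (log(1 - u))^2 du = 2.

def dist_th_sq(qp_prime, qq_prime, n=200_000, eps=1e-6):
    """Trapezoid rule on [0, 1 - eps] for the quantile-density integrand."""
    h = (1.0 - eps) / n
    vals = [math.log(qp_prime(i * h) / qq_prime(i * h)) ** 2 for i in range(n + 1)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

val = dist_th_sq(lambda u: 1.0 / (1.0 - u), lambda u: 1.0)
print(val)  # close to the closed-form value 2
```

The same routine applies to any pair of quantile densities, with the caveat that the integrand can blow up at the endpoints; the cutoff `eps` is a numerical convenience, not part of the definition, and should be chosen with the tails in mind.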
The proposed transport-type divergence functionals are built from the derivative of pushforward mapping functions. These divergence functionals have convexity properties in terms of the mapping functions, which contrasts with f-divergences.

The study of transport f-divergences in multi-dimensional sample spaces is left for future work. In general, transport f-divergences can be defined from "matrix divergences", where the matrix refers to the Jacobian matrix of the pushforward mapping function; see [16]. We shall also investigate the convexity properties, inequalities, and variational algorithms for transport f-divergences towards generative-AI-related sampling problems and Bayesian inverse problems.

Acknowledgements. W. Li's work is supported by AFOSR YIP award No. FA9550-23-1-0087, NSF RTG: 2038080, and NSF DMS: 2245097.

References

[1] M. Ali and S. Silvey. A general class of coefficients of divergence of one distribution from another. J. Royal Statistical Society, B, (28), 131-142, 1966.
[2] S. Amari. α-Divergence Is Unique, Belonging to Both f-Divergence and Bregman Divergence Classes. IEEE Transactions on Information Theory, vol. 55, no. 11, pp. 4925-4931, 2009.
[3] S. Amari and A. Cichocki. Information geometry of divergence functions. Bulletin of the Polish Academy of Sciences. Technical Sciences, 58(1), 183-195, 2010.
[4] L. Ambrosio, N. Gigli, and G. Savaré. Gradient Flows in Metric Spaces and in the Space of Probability Measures, 2008.
[5] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein Generative Adversarial Networks. ICML, 2017.
[6] N. Ay, J. Jost, H. V. Lê, and L. Schwachhöfer. Information Geometry, volume 64. Springer, Cham, 2017.
[7] J. Birrell, P. Dupuis, M. Katsoulakis, Y. Pantazis, and L. Rey-Bellet. (f,Γ)-Divergences: Interpolating between f-Divergences and Integral Probability Metrics. Journal of Machine Learning Research, 23, 1-70, 2022.
[8] A. Cichocki and S. Amari. Families of Alpha, Beta and Gamma Divergences: Flexible and Robust Measures of Similarities. Entropy, 12, 1532-1568, 2010.
[9] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications. Wiley, New York, 1991.
[10] I. Csiszár. Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math., 2, 299-318, 1967.
[11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative Adversarial Nets. NIPS, 2014.
[12] X. Guo, J. Hong, and N. Yang. Relaxed Wasserstein with Applications to GANs. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3325-3329, 2021.
[13] J. D. Lafferty. The density manifold and configuration space quantization. Transactions of the American Mathematical Society, 305(2):699-741, 1988.
[14] W. Li. Transport information geometry: Riemannian calculus in probability simplex. Information Geometry, 5, 161-207, 2022.
[15] W. Li. Hessian metric via transport information geometry. Journal of Mathematical Physics, 62, 033301, 2021.
[16] W. Li. Transport information Bregman divergences. Information Geometry, 4, 435-470, 2021.
[17] W. Li. Transport information Hessian distances. Geometric Science of Information, 2021.
[18] W. Li. Transport alpha divergences. arXiv:2504.14084, 2025.
[19] K. Modin. Geometry of matrix decompositions seen through optimal transport and information geometry. Journal of Geometric Mechanics, 9(3): 335-390, 2017.
[20] F. Nielsen. Emerging trends in visual computing, Lecture Notes in Computer Science 6, CD-ROM, 2009.
[21] I. Sason and S. Verdú. f-Divergence Inequalities. IEEE Transactions on Information Theory, vol. 62, no. 11, pp. 5973-6006, 2016.
[22] G. Savaré and G. Toscani. The Concavity of Rényi Entrop
arXiv:2504.15556v1 [math.ST] 22 Apr 2025

Dynamical mean-field analysis of adaptive Langevin diffusions: Propagation-of-chaos and convergence of the linear response

Zhou Fan∗, Justin Ko†, Bruno Loureiro‡, Yue M. Lu§, Yandi Shen¶

Abstract

Motivated by an application to empirical Bayes learning in high-dimensional regression, we study a class of Langevin diffusions in a system with random disorder, where the drift coefficient is driven by a parameter that continuously adapts to the empirical distribution of the realized process up to the current time. The resulting dynamics take the form of a stochastic interacting particle system having both a McKean-Vlasov type interaction and a pairwise interaction defined by the random disorder. We prove a propagation-of-chaos result, showing that in the large system limit, over dimension-independent time horizons, the empirical distribution of sample paths of the Langevin process converges to a deterministic limit law that is described by dynamical mean-field theory. This law is characterized by a system of dynamical fixed-point equations for the limit of the drift parameter and for the correlation and response kernels of the limiting dynamics. Using a dynamical cavity argument, we verify that these correlation and response kernels arise as the asymptotic limits of the averaged correlation and linear response functions of single coordinates of the system. These results enable an asymptotic analysis of an empirical Bayes Langevin dynamics procedure for learning an unknown prior parameter in a linear regression model, which we develop in a companion paper.

Contents

1 Introduction
2 Model and main results
  2.1 Model and dynamics
  2.2 Existence and uniqueness of the DMFT fixed point
  2.3 The dynamical mean-field approximation
  2.4 Interpretation of the DMFT correlation and response
3 Existence and uniqueness of the DMFT fixed point
  3.1 The function spaces S(T) and S(T)_cont
  3.2 Contractive mapping
4 The dynamical mean-field approximation
  4.1 Step 1: DMFT approximation of discrete dynamics
  4.2 Step 2: Discretization error of DMFT equation
  4.3 Step 3: Discretization of Langevin dynamics
  4.4 Completing the proof
5 Convergence of the linear response
  5.1 Convergence of response functions for discrete dynamics
  5.2 Discretization of Langevin response function
A Existence of linear response functions
  A.1 Continuous dynamics
  A.2 Discrete dynamics

∗Department of Statistics and Data Science, Yale University
†Department of Statistics and Actuarial Science, University of Waterloo
‡Département d'Informatique, École Normale Supérieure, PSL & CNRS
§Departments of Electrical Engineering and Applied Mathematics, Harvard University
¶Department of Statistics and Data Science, Carnegie Mellon University

1 Introduction

Let $\theta = (\theta_1,\ldots,\theta_d) \in \mathbb{R}^d$ be a system of $d$ interacting particles, evolving according to a stochastic dynamics
\[
d\theta^t = \Big[-\beta X^\top X\theta^t + \big(s(\theta_j^t,\widehat\alpha^t)\big)_{j=1}^d\Big]\,dt + \sqrt{2}\,db^t, \qquad \frac{d}{dt}\widehat\alpha^t = G\Big(\widehat\alpha^t, \frac{1}{d}\sum_{j=1}^d \delta_{\theta_j^t}\Big). \tag{1}
\]
Here $X \in \mathbb{R}^{n\times d}$ is a matrix of random disorder, and $s(\cdot,\widehat\alpha^t)\colon \mathbb{R}\to\mathbb{R}$ in the drift coefficient is a nonlinear function driven by a stochastic time-dependent parameter $\widehat\alpha^t \in \mathbb{R}^K$ that adapts to the past history $\{\theta^s\}_{s\in[0,t]}$. (We defer formal definitions and conditions for the functions $s(\cdot)$ and $G(\cdot)$ to Section 2.)
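A minimal Euler-Maruyama simulation of (1) helps fix ideas. The drift $s(\theta,\alpha) = -\theta$ and adaptation rule $G(\alpha,P) = \mathbb{E}_P[\theta^2] - \alpha$ below are illustrative assumptions (so $\widehat\alpha^t$ simply tracks the empirical second moment of the particles), not the functions analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the adaptive Langevin dynamics (1).
# Hypothetical choices, for illustration only:
#   s(theta, alpha) = -theta                  (a simple confining drift)
#   G(alpha, P)     = E_P[theta^2] - alpha    (alpha tracks the empirical
#                                              second moment of the particles)
n, d = 100, 50
beta, dt, n_steps = 0.5, 1e-2, 4000

X = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d))  # random disorder, E x_ij^2 = 1/d
A = beta * X.T @ X

theta = rng.normal(size=d)
alpha = 0.0
for _ in range(n_steps):
    drift = -A @ theta - theta                        # -beta X^T X theta + s(theta_j, alpha)
    theta = theta + dt * drift + np.sqrt(2 * dt) * rng.normal(size=d)
    alpha = alpha + dt * (np.mean(theta**2) - alpha)  # d/dt alpha = G(alpha, empirical law)

print(round(float(alpha), 3), round(float(np.mean(theta**2)), 3))
```

With the scaling $\mathbb{E}x_{ij}^2 = 1/d$ used here, the disorder term $-\beta X^\top X\theta^t$ remains of order one as $n, d$ grow at a fixed ratio, which is the regime in which the dynamical mean-field limit discussed below is taken.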
We will study the pathwise convergence of the empirical measure $\frac{1}{d}\sum_{j=1}^d \delta_{\theta_j^t}$ and of the parameter $\widehat\alpha^t$ to deterministic limits as $n,d\to\infty$ at a fixed rate, in this model (1) and in a closely related statistical model.

In the setting of $\beta = 0$, i.e., with no random disorder, the dynamics (1) take a pathwise-exchangeable form as studied classically by [1,2], where the evolution $d\theta_j^t$ of each $j$-th particle depends on the remaining particles only via the empirical law $\frac{1}{d}\sum_{j=1}^d \delta_{\{\theta_j^s\}_{s\in[0,t]}}$. The convergence of this law in the asymptotic limit $d\to\infty$, together with a resulting asymptotic decoupling of low-dimensional marginals of $\{\theta^s\}_{s\in[0,t]}$, is commonly referred to as propagation-of-chaos. We refer to the classical monographs [3,4] for a detailed treatment of such models, and to [5,6] and [7,8] for modern surveys and examples of recent quantitative convergence results.

The study of propagation-of-chaos for dynamics with random disorder ($\beta \neq 0$) has also a separate and rich development in the literature, using techniques of dynamical mean-field theory (DMFT). DMFT was initially developed to study Langevin dynamics in the soft Sherrington-Kirkpatrick (SK) model [9,10] and related spherical p-spin models in spin glass theory [11-13], and relied on deep but non-rigorous techniques of the dynamical cavity method [14,15] and generating functional methods [16-18] of statistical physics. In recent years, DMFT has been applied to shed insight into the learning dynamics in an increasingly wide range of statistical and machine learning models, including matrix and tensor PCA [19-23], phase retrieval and generalized linear models [15,24-26], Gaussian mixture classification [27,28], and deep neural networks [29-33]. Pioneering work of [34-37] established the first mathematical formalizations of DMFT in variants of the SK model, in the forms of large deviations principles for the empirical distributions of sample paths. Mathematical results for spherical models were subsequently obtained in [38,39], and universality of such results with respect to the law of the disorder in [40,41]. Recently, [42] developed a different and innovative new approach to formalizing DMFT via time discretization and reduction to Approximate Message Passing schemes [43-45], and applied this to derive a DMFT limit for gradient flow dynamics in statistical multi-index models.
A related strategy via iterative Gaussian conditioning was developed in [46], which extended the results of [42] to a class of discrete-time Langevin and stochastic gradient dynamics. Non-asymptotic analyses of the entrywise behavior of such dynamics were obtained in [47].

In this work, we will prove a DMFT approximation for the dynamics (1), which has both of the above elements: a pathwise-exchangeable interaction driven by the empirical law, as well as a pairwise interaction driven by random disorder. Our motivation is the study of a variant of Langevin dynamics for posterior sampling in a statistical linear model
\[
y = X\theta^* + \varepsilon, \qquad \theta_j^* \overset{iid}{\sim} g(\cdot,\alpha^*), \qquad \varepsilon_i \overset{iid}{\sim} \mathcal{N}(0,\sigma^2),
\]
where the regression coefficients of interest $\theta_1^*,\ldots,\theta_d^*$ are distributed according to a prior law $g(\cdot,\alpha^*)$ that has an unknown parameter $\alpha^* \in \mathbb{R}^K$. To implement empirical Bayes learning of $\alpha^*$ [48-50], a Langevin dynamics method was introduced in [51]¹ which, from an initial estimate or guess $\widehat\alpha^0$, evolves a prior parameter estimate
\[
\frac{d}{dt}\widehat\alpha^t = \frac{1}{d}\sum_{j=1}^d \nabla_\alpha \log g(\theta_j^t,\widehat\alpha^t) \tag{2}
\]
based on the coordinates of a coupled Langevin diffusion
\[
d\theta^t = \nabla_\theta\Big(-\frac{1}{2\sigma^2}\|y - X\theta^t\|_2^2 + \sum_{j=1}^d \log g(\theta_j^t,\widehat\alpha^t)\Big)\,dt + \sqrt{2}\,db^t \tag{3}
\]
that samples from the posterior law $P(\theta \mid X, y)$. Such dynamics comprise a minor extension of (1) (which motivates our choice to study a disorder matrix having the covariance form $X^\top X$), and we state in (4-5) of Section 2 an extended general dynamics that encompasses this application. We defer a more detailed discussion and analysis of this specific empirical Bayes procedure to our companion paper [52], focusing in this work on the formalization of the limiting dynamics in a general context.

We summarize the main contributions of our paper as follows:

1. Adapting and building upon the methods of [42], we prove a DMFT limit for the dynamics (1) (with a natural extension to the dynamics (4-5) to follow). This will take the form of almost-sure convergence to certain deterministic limits, as $n,d\to\infty$,
\[
\{\widehat\alpha^t\}_{t\in[0,T]} \to \{\alpha^t\}_{t\in[0,T]}, \quad \frac{1}{d}\sum_{j=1}^d \delta_{\{\theta_j^t\}_{t\in[0,T]}} \xrightarrow{W_2} P(\{\theta^t\}_{t\in[0,T]}), \quad \frac{1}{n}\sum_{i=1}^n \delta_{\{\eta_i^t\}_{t\in[0,T]}} \xrightarrow{W_2} P(\{\eta^t\}_{t\in[0,T]})
\]
for the sample path of $\widehat\alpha^t$ and for the empirical laws of sample paths of $\theta^t$ and $\eta^t = X\theta^t$. Each limit $P(\{\theta^t\}_{t\in[0,T]})$ and $P(\{\eta^t\}_{t\in[0,T]})$ represents the law of a univariate stochastic process, which is driven by the above limit $\{\alpha^t\}_{t\in[0,T]}$ for the evolving drift parameter, an additional Gaussian process representing the mean field, and an integrated response. These Gaussian processes and integrated responses are described by correlation and response kernels $C_\theta, C_\eta, R_\theta, R_\eta$, where $\{\alpha^t\}, C_\theta, C_\eta, R_\theta, R_\eta$ are defined self-consistently from the laws $P(\{\theta^t\}_{t\in[0,T]})$ and $P(\{\eta^t\}_{t\in[0,T]})$ via a system of dynamical fixed-point equations. We establish that this dynamical fixed point is unique in a certain domain of functions with exponential growth.

2. We show that the dynamics (1) admit a well-defined notion of a linear response function $R_{AB}(t,s)$ for a class of observables $A,B\colon \mathbb{R}^d\to\mathbb{R}$, where $R_{AB}(t,s)$ represents a linear response of $A(\theta^t)$ to a perturbation of the drift coefficient at a previous time $s$ by $\nabla B(\theta^s)$. We then verify that the above DMFT correlation and response kernels $C_\theta, C_\eta, R_\theta, R_\eta$ arise as the mean-field limits of averages of the correlation and linear response functions for the "single-particle" coordinate observables $A(\theta), B(\theta) = \theta_j$ and $A(\theta), B(\theta) = [X\theta - y]_i$ of the high-dimensional system.

Our methods and analyses in the first contribution above follow the approach of [42].
We incorporate into the dynamical fixed-point system a deterministic limit $\{\alpha^t\}_{t\in[0,T]}$ for the trajectory of the stochastic drift parameter $\{\widehat\alpha^t\}_{t\in[0,T]}$, extend the analyses to encompass processes with more irregular sample paths resulting from the additional Brownian diffusion term $db^t$ in the dynamics, and simplify the approach in [42] for embedding a discrete-time DMFT system into a continuous-time limit.

Our second contribution above is, to our knowledge, novel in the mathematical literature on DMFT (although anticipated by statistical physics derivations of the DMFT equations). To understand $R_\theta, R_\eta$ as asymptotic limits of averaged single-particle linear response functions, we formalize a dynamical cavity calculation that analyzes the response of a single coordinate $\theta_j^t$ to a perturbation of $\theta_j^s$ at a preceding time $s$, via a DMFT approximation of the cavity system with this coordinate left out. This result will be important to our companion work [52], allowing us to transfer a fluctuation-dissipation theorem [53] from the high-dimensional dynamics to the DMFT correlation and response kernels $C_\theta, C_\eta, R_\theta, R_\eta$. This will then allow us to carry out an analysis of the long-time behavior of the DMFT equations in an approximately time-translation-invariant setting, and to show convergence of the prior parameter estimate $\widehat\alpha^t$ in the above empirical Bayes dynamics to a stationary point of a replica-symmetric limit for the model free energy.

¹[51] proposed a nonparametric variant of this method, and we simplify our discussion here to a parametric formulation.

Acknowledgments

This research was initiated during the "Huddle on Learning and Inference from Structured Data" at ICTP Trieste in 2023. We'd like to thank the huddle organizers Jean Barbier, Manuel Sáenz, Subhabrata Sen, and Pragya Sur for their hospitality and many helpful discussions. We are also grateful to Pierfrancesco Urbani, Francesca Mignacco, Emanuele Troiani, and Andrea Montanari for useful discussions. Z. Fan was supported in part by NSF DMS-2142476 and a Sloan Research Fellowship. J. Ko was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chairs programme, and the Ontario Research Fund [RGPIN-2020-04597, DGECR-2020-00199]. B. Loureiro was supported by the French government, managed by the National Research Agency (ANR), under the France 2030 program with the reference "ANR-23-IACL-0008" and the Choose France - CNRS AI Rising Talents program. Y. M. Lu was supported in part by the Harvard FAS Dean's Competitive Fund for Promising Scholarship and by a Harvard College Professorship.

Notational conventions

Constants $C, C', c, c' > 0$ are independent of the dimensions $n, d$ unless otherwise specified, and may depend on the time horizon $T$, the dimension $K$ of the drift parameter, and the scalar parameter $\beta \in \mathbb{R}$. In a separable and complete normed vector space $(\mathcal{M}, \|\cdot\|)$, for any $p \ge 1$, $\mathcal{P}_p(\mathcal{M})$ is the space of probability distributions $P$ on $(\mathcal{M}, \|\cdot\|)$ such that $\mathbb{E}_{\xi\sim P}\|\xi\|^p < \infty$, $W_p(\cdot)$ is the Wasserstein-$p$ metric on $\mathcal{P}_p(\mathcal{M})$, and $P_n \xrightarrow{W_p} P$ denotes $W_p(P_n,P)\to 0$ as $n\to\infty$. For a random variable $\xi$ in $\mathcal{M}$, we will use $P(\xi)$ to denote its law. For a vector $x \in \mathcal{M}^n$, $\widehat P(x) = \frac{1}{n}\sum_{i=1}^n \delta_{x_i} \in \mathcal{P}_p(\mathcal{M})$ (for any $p \ge 1$) denotes the empirical distribution of the coordinates $x_1,\ldots,x_n$ of $x$. On a Euclidean space $\mathbb{R}^d$, $\|\cdot\|$ without subscript is, by convention, the Euclidean (i.e., $\ell_2$) norm.
C([0,T],R) is the space of continuous functions f: [0,T]→Requipped with the norm of uniform con- vergence /⌊a∇⌈⌊lf/⌊a∇⌈⌊l∞= supt∈[0,T]|f(t)|.Z+={0,1,2,...}denotes the nonnegative integers, and R+= [0,∞) denotes the nonnegative reals. For a function f:Rd→R,∇f(x)∈Rdand∇2f(x)∈Rd×dare its gradient and Hessian at x. Forf:Rd→Rm, df(x)∈Rd×mis its derivative at x. TrM,/⌊a∇⌈⌊lM/⌊a∇⌈⌊lop, and/⌊a∇⌈⌊lM/⌊a∇⌈⌊lFare the matrix trace, Euclidean operator norm, and Frobenius norm. 2 Model and main results 2.1 Model and dynamics LetX∈Rn×dbe a random matrix with independent entries, and y=Xθ∗+ε∈Rnthe observations of a linear model with regression design X, regression coefficients θ∗∈Rd, and noise ε∈Rn. Lets:R×RK→Rbe a Lipschitz drift function, G:RK×P2(R)→RKa Lipschitz gradient map (where P2(R) is the space of probability measures on Rwith finite second moment), {bt}t≥0a standard Brownian motion on Rd, andβ∈Ra scalar parameter. We will study the dynamics dθt=/bracketleftBig −βX⊤(Xθt−y)+/parenleftbig s(θt j,/hatwideαt)/parenrightbigd j=1/bracketrightBig dt+√ 2dbt(4) d/hatwideαt=G/parenleftBig /hatwideαt,1 dd/summationdisplay j=1δθt j/parenrightBig dt (5) with initial conditions (θ0,/hatwideα0)∈Rd×RK. (6) 4 This encompasses the general dynamics ( 1) and the application ( 2–3) under a unified model: Specializing toθ∗= 0 and ε= 0 (hence y= 0) recovers ( 1), while specializing to β=σ−2,s(θ,α) =∂θlogg(θ,α), and G(α,P) =Eθ∼P[∇αlogg(θ,α)] recovers ( 2–3). We will refer to these general dynamics ( 4–5) as an adaptive Langevin diffusion. We impose the following assumptions on the components of the above model and dynamics throughout this work. Assumption 2.1 (Model and initial conditions) . (a)
https://arxiv.org/abs/2504.15556v1
(Asymptotic scaling) lim_{n,d→∞} n/d = δ ∈ (0, ∞).

(b) (Random design) X = (x_ij) ∈ R^{n×d} has independent entries satisfying E x_ij = 0, E x_ij² = 1/d, and ‖√d x_ij‖_{ψ2} ≤ C for a constant C > 0, where ‖·‖_{ψ2} is the sub-gaussian norm.

(c) (Linear model and initial conditions) θ^0, θ*, ε are independent of X, and y = Xθ* + ε. For some probability distributions P(θ*, θ^0) and P(ε) having finite moment generating functions in a neighborhood of 0, and for each fixed p ≥ 1, the entries of θ^0, θ*, ε satisfy the Wasserstein-p convergence almost surely as n, d → ∞,

(1/d) Σ_{j=1}^d δ_{(θ*_j, θ^0_j)} →^{W_p} P(θ*, θ^0),   (1/n) Σ_{i=1}^n δ_{ε_i} →^{W_p} P(ε).   (7)

For a deterministic parameter α^0 ∈ R^K, almost surely lim_{n,d→∞} α̂^0 = α^0.

Assumption 2.2 (Drift function). s : R × R^K → R is twice continuously-differentiable, and for some constant C > 0 and all (θ, α) ∈ R × R^K,

|s(θ, α)| ≤ C(1 + |θ| + ‖α‖₂),   ‖∇_{(θ,α)} s(θ, α)‖₂, ‖∇²_{(θ,α)} s(θ, α)‖_op ≤ C.   (8)

Assumption 2.3 (Gradient map). Let P̂(θ) = d^{−1} Σ_{j=1}^d δ_{θ_j} denote the empirical distribution of coordinates of θ ∈ R^d, and let G_k : R^K × P_2(R) → R be the k-th component of G.

(a) For some constant C > 0 and all (α, P) ∈ R^K × P_2(R),

‖G(α, P)‖₂ ≤ C(1 + ‖α‖₂ + E_P[θ²]^{1/2}),   ‖G(α, P) − G(α', P')‖₂ ≤ C( ‖α − α'‖₂ + W₂(P, P') ).   (9)

(b) For each k = 1, ..., K, (θ, α) ↦ G_k(α, P̂(θ)) is twice continuously-differentiable, and for some constant C > 0 and all (θ, α) ∈ R^d × R^K,

‖∇_α G_k(α, P̂(θ))‖₂ ≤ C,   √d ‖∇_θ G_k(α, P̂(θ))‖₂ ≤ C,   (10)
max( d ‖∇²_θ G_k(α, P̂(θ))‖_op,   √d ‖∇_θ ∇_α G_k(α, P̂(θ))‖_op,   ‖∇²_α G_k(α, P̂(θ))‖_op ) ≤ C.   (11)

Viewing (4–5) as a joint diffusion of (θ^t, α̂^t) on R^{d+K}, we remark that (8) and (10) imply that the drift function of this joint diffusion is Lipschitz with respect to the Euclidean norm.
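For concreteness, the adaptive Langevin diffusion (4–5) can be simulated by a forward Euler–Maruyama discretization. In the sketch below, the step size, the dimensions, and the choices s(θ, α) = −θ + α and G(α, P) = E_P[θ] − α (with K = 1) are hypothetical placeholders satisfying the Lipschitz assumptions, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, beta, dt, n_steps = 60, 40, 0.5, 1e-3, 500

X = rng.normal(scale=1.0 / np.sqrt(d), size=(n, d))  # E x_ij^2 = 1/d
theta_star = rng.normal(size=d)
y = X @ theta_star + 0.1 * rng.normal(size=n)

# Illustrative choices of drift s and gradient map G (K = 1).
s = lambda th, a: -th + a               # Lipschitz in (theta, alpha)
G = lambda a, th: th.mean() - a         # depends on empirical law via its mean

theta, alpha = np.zeros(d), 0.0
for _ in range(n_steps):
    drift = -beta * X.T @ (X @ theta - y) + s(theta, alpha)
    theta = theta + drift * dt + np.sqrt(2.0 * dt) * rng.normal(size=d)  # (4)
    alpha = alpha + G(alpha, theta) * dt                                 # (5)
```

This is only a sanity-check discretization of the joint diffusion; the paper's results concern the exact continuous-time process in the limit n, d → ∞.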
Then there exists a unique solution {θ^t, α̂^t}_{t≥0} to (4–5) with initial condition (6) that is adapted to the filtration F_t := F({b^s}_{s∈[0,t]}, θ^0, α̂^0, X, θ*, ε), which will be the process of interest in our main results.

2.2 Existence and uniqueness of the DMFT fixed point

In this section we define the DMFT limit for the preceding dynamics (4–5). Let δ = lim_{n,d→∞} n/d and α^0 = lim_{n,d→∞} α̂^0 be as in Assumption 2.1. Let (θ*, θ^0) ∼ P(θ*, θ^0) denote scalar variables with the distribution (7). Let {b^t}_{t≥0}, {u^t}_{t≥0} be univariate mean-zero Gaussian processes independent of each other and of (θ*, θ^0), where {b^t}_{t≥0} is a standard Brownian motion and {u^t}_{t≥0} has a correlation kernel C_η(·) on [0,∞), defined self-consistently below. Let {α^t}_{t≥0} be a deterministic continuous process on R^K, also defined self-consistently below. We consider univariate processes {θ^t}_{t≥0} and {∂θ^t/∂u^s}_{t≥s≥0} adapted to the filtration F^θ_t := F({b^s}_{s∈[0,t]}, {u^s}_{s∈[0,t]}, θ*, θ^0) (in the sense that θ^t and ∂θ^t/∂u^s for all s ∈ [0,t] are F^θ_t-measurable), defined by the stochastic differential equations

dθ^t = [ −δβ(θ^t − θ*) + s(θ^t, α^t) + ∫_0^t R_η(t,s)(θ^s − θ*) ds + u^t ] dt + √2 db^t   with θ^t|_{t=0} = θ^0,   (12)
d(∂θ^t/∂u^s) = [ −( δβ − ∂_θ s(θ^t, α^t) ) ∂θ^t/∂u^s + ∫_s^t R_η(t,s') ∂θ^{s'}/∂u^s ds' ] dt   with ∂θ^t/∂u^s|_{t=s} = 1.   (13)

We clarify that ∂θ^t/∂u^s is a notation for a univariate process on t ∈ [s,∞), defined via (13) for each s ≥ 0. Consider also ε ∼ P(ε) and (w*, {w^t}_{t≥0}), where ε is a scalar variable with the distribution (7), and (w*, {w^t}_{t≥0}) is a univariate mean-zero Gaussian process indexed by {∗} ∪ [0,∞), independent of ε and with a correlation kernel C_θ(·) on {∗} ∪ [0,∞) also defined self-consistently below.
We consider univariate processes {η^t}_{t≥0} and {∂η^t/∂w^s}_{t≥s≥0} adapted to the filtration F^η_t := F({w^s}_{s≤t}, w*, ε), defined by the integral equations

η^t = −β ∫_0^t R_θ(t,s)( η^s + w* − ε ) ds − w^t,   (14)
∂η^t/∂w^s = β[ −∫_s^t R_θ(t,s') ∂η^{s'}/∂w^s ds' + R_θ(t,s) ].   (15)

Again ∂η^t/∂w^s is a notation for a univariate process on t ∈ [s,∞), defined by (15) for each s ≥ 0. The centered Gaussian processes {u^t}_{t≥0} and (w*, {w^t}_{t≥0}) above have correlation kernels

E[u^t u^s] = C_η(t,s),   E[w^t w^s] = C_θ(t,s),   E[w^t w*] = C_θ(t,∗),   E[(w*)²] = C_θ(∗,∗).   (16)

Denoting by P(θ^t) the law of θ^t solving (12), the above deterministic process {α^t}_{t≥0} and correlation/response kernels C_η, C_θ, R_η, R_θ are defined for all t ≥ s ≥ 0 self-consistently by

(d/dt) α^t = G(α^t, P(θ^t))   with α^t|_{t=0} = α^0,   (17)
C_θ(t,s) = E[θ^t θ^s],   C_θ(t,∗) = E[θ^t θ*],   C_θ(∗,∗) = E[(θ*)²],
C_η(t,s) = δβ² E[(η^t + w* − ε)(η^s + w* − ε)],   R_θ(t,s) = E[ ∂θ^t/∂u^s ],
R_η(t,s) = δβ E[ ∂η^t/∂w^s ].   (18)

We note that the above process {∂η^t/∂w^s}_{t≥s≥0} defined by (15) is in fact deterministic, but we keep the expectation defining R_η(t,s) for symmetry of notation. The equations (17–18) should be understood as fixed-point equations for α, C_θ, C_η, R_θ, R_η, where the laws of the processes {θ^t, ∂θ^t/∂u^s, u^t}_{t≥s≥0} and {η^t, ∂η^t/∂w^s, w^t}_{t≥s≥0} defining (17–18) are in turn defined by α, C_θ, C_η, R_θ, R_η via (12–16). For each fixed time horizon T > 0, let S(T) be a space of functions

(α, C_θ, C_η, R_θ, R_η) ≡ {α^t, C_θ(t,s), C_θ(t,∗), C_θ(∗,∗), C_η(t,s), R_θ(t,s), R_η(t,s)}_{0≤s≤t≤T}

having at most exponential growth, and S(T)^cont a subset of continuous such functions, whose precise definitions we defer to Section 3.1 to follow. The following result establishes existence and uniqueness of a fixed point to (17–18) in this space S(T)^cont.

Theorem 2.4. Under Assumptions 2.1, 2.2, and 2.3, for any fixed T > 0:

(a) For any (α, C_θ, C_η, R_θ, R_η) ∈ S(T) and any realization of the mean-zero Gaussian processes {u^t}_{t≥0} and (w*, {w^t}_{t≥0}) satisfying (16) (independent of (θ*, θ^0, {b^t}_{t≥0}) and ε respectively), there exist unique solutions to (12–13) and (14–15) adapted to {F^θ_t}_{t∈[0,T]} and {F^η_t}_{t∈[0,T]} for times 0 ≤ s ≤ t ≤ T.

(b) There exists a unique fixed point (α, C_θ, C_η, R_θ, R_η) ∈ S(T) satisfying (17–18) for the solution of part (a). This fixed point belongs to S(T)^cont, and in particular {α^t}_{t≥0} is a deterministic continuous process on R^K.

The proof of Theorem 2.4 is given in Section 3. We will call the components of Theorem 2.4(a–b) the unique solution of the DMFT system (12–18).

2.3 The dynamical mean-field approximation

The following is the first main result of our work, showing that the preceding solution to the DMFT system describes the limit of {α̂^t}_{t∈[0,T]} and empirical distributions of coordinates of {θ^t}_{t∈[0,T]} and {η^t}_{t∈[0,T]} ≡ {Xθ^t}_{t∈[0,T]} solving (4–5), for fixed time horizons T > 0 in the limit n, d → ∞.

Theorem 2.5. Suppose Assumptions 2.1, 2.2, and 2.3 hold.
Denote η^t = Xθ^t and η* = Xθ*. Let θ*, ε, η* = −w*, and {θ^t, η^t, α^t}_{t∈[0,T]} be the components of the unique solution to the DMFT system (12–18) given by Theorem 2.4, and let P(·) denote the law of these components. Then for each fixed T > 0, almost surely as n, d → ∞,

(a) (α̂^t)_{t∈[0,T]} → (α^t)_{t∈[0,T]} in C([0,T], R^K).

(b) In the sense of Wasserstein-2 convergence over R × C([0,T], R) and R × R × C([0,T], R),

(1/d) Σ_{j=1}^d δ_{θ*_j, {θ^t_j}_{t∈[0,T]}} →^{W₂} P(θ*, {θ^t}_{t∈[0,T]}),   (1/n) Σ_{i=1}^n δ_{η*_i, ε_i, {η^t_i}_{t∈[0,T]}} →^{W₂} P(η*, ε, {η^t}_{t∈[0,T]}).

The proof of Theorem 2.5 is given in Section 4. For ease of interpretation, we record here two corollaries of this result. The first clarifies an implication of the above Wasserstein-2 convergence in terms of the convergence of pseudo-Lipschitz test functions of finite-dimensional marginals of the processes.

Corollary 2.6. In the setting of Theorem 2.5, for any fixed m ≥ 1 and times t_1, ..., t_m ∈ [0,T], and for any pseudo-Lipschitz test functions f_θ : R^{m+1} → R and f_η : R^{m+2} → R (i.e. satisfying |f(x) − f(y)| ≤ C‖x − y‖₂(1 + ‖x‖₂ + ‖y‖₂)), almost surely as n, d → ∞,

(1/d) Σ_{j=1}^d f_θ(θ*_j, θ^{t_1}_j, ..., θ^{t_m}_j) → E f_θ(θ*, θ^{t_1}, ..., θ^{t_m}),
(1/n) Σ_{i=1}^n f_η(η*_i, ε_i, η^{t_1}_i, ..., η^{t_m}_i) → E f_η(η*, ε, η^{t_1}, ..., η^{t_m}),   (19)

where the expectations on the right side are under the joint laws of the solution to the DMFT system.

Proof. Any pseudo-Lipschitz function (θ*, θ^{t_1}, ..., θ^{t_m}) ↦ f_θ(θ*, θ^{t_1}, ..., θ^{t_m}) is also a pseudo-Lipschitz function of the full sample path (θ*, {θ^t}_{t∈[0,T]}) ∈ R × C([0,T], R). Thus the first statement of (19) follows from Theorem 2.5(b) and the characterization of Wasserstein-p convergence in [54, Definition 6.8 and Theorem 6.9], and the second statement follows similarly.

The second corollary asserts an asymptotic decoupling of the finite-dimensional marginal distributions of (θ*, {θ^t}_{t∈[0,T]}) in a coordinate-exchangeable setting,
which is the usual notion of propagation-of-chaos for interacting particle systems.

Corollary 2.7. In the setting of Theorem 2.5, suppose in addition that (θ*, θ^0) ∈ R^{d×2} and X ∈ R^{n×d} are both invariant in law under permutations of the coordinates {1, ..., d}. Fix any J ≥ 1, and let P(θ*_{1:J}, {θ^t_{1:J}}_{t∈[0,T]}) denote the joint law of sample paths (θ*_j, {θ^t_j}_{t∈[0,T]}) ∈ R × C([0,T], R) for j = 1, ..., J. Let P(θ*, {θ^t}_{t∈[0,T]})^{⊗J} denote the J-fold product of the limit law in Theorem 2.5(b). Then as n, d → ∞, in the sense of weak convergence,

P(θ*_{1:J}, {θ^t_{1:J}}_{t∈[0,T]}) → P(θ*, {θ^t}_{t∈[0,T]})^{⊗J}.

Proof. Under the stated assumptions and the definition of the process (4–5), the law of (θ*, {θ^t}_{t∈[0,T]}) ∈ (R × C([0,T], R))^d remains invariant under permutations of the coordinates {1, ..., d}. Then the stated result is equivalent to convergence of the empirical law (1/d) Σ_{j=1}^d δ_{θ*_j, {θ^t_j}_{t∈[0,T]}} to P(θ*, {θ^t}_{t∈[0,T]}) weakly in probability (c.f. [4, Proposition 2.2]), and this is implied by Theorem 2.5(b).

We clarify that P(θ*_{1:J}, {θ^t_{1:J}}_{t∈[0,T]}) in this statement refers to the law over all randomness including that of θ*, θ^0 and the disorder X. It would be interesting to also study propagation-of-chaos phenomena conditional on parts of this randomness, and we leave such investigations to future work.

2.4 Interpretation of the DMFT correlation and response

Fixing X, θ*, ε and y = Xθ* + ε, define the coordinate observables

e_j(θ) = θ_j,   x_i(θ) = √δ β([Xθ]_i − y_i).
(20)

Fixing also the initial conditions x = (θ^0, α̂^0), for each pair A, B ∈ {e_1, ..., e_d, x_1, ..., x_n}, define {R^x_{AB}(t,s)}_{0≤s≤t} as a response function for the joint dynamics (4–5) that satisfies the following condition: For any continuous bounded function h : [0,∞) → R and any ε > 0, consider the perturbed dynamics

dθ^{t,ε} = [ −βX^⊤(Xθ^{t,ε} − y) + εh(t)∇_θ B(θ^{t,ε}, α̂^{t,ε}) + (s(θ^{t,ε}_j, α̂^{t,ε}))_{j=1}^d ] dt + √2 db^t
dα̂^{t,ε} = G( α̂^{t,ε}, (1/d) Σ_{j=1}^d δ_{θ^{t,ε}_j} ) dt

with the same initial condition (θ^{0,ε}, α̂^{0,ε}) = x. Denote the expectation conditional on X, θ*, ε and x = (θ^0, α̂^0) as ⟨f({θ^t, α̂^t}_{t≥0})⟩_x. Then for any t > 0,

lim_{ε→0} (1/ε)( ⟨A(θ^{t,ε}, α̂^{t,ε})⟩_x − ⟨A(θ^t, α̂^t)⟩_x ) = ∫_0^t R^x_{AB}(t,s) h(s) ds.   (21)

Thus R^x_{AB}(t,s) may be understood as the linear response of the observable A(θ) at time t to a perturbation of the Langevin potential by B(θ) at a preceding time s. Existence of such a response function for smooth bounded observables in uniformly elliptic and hypoelliptic diffusions has been shown in [55, 56]. We verify in Proposition A.1 that the arguments of [56] may be extended to show also the existence of a response function R^x_{AB}(t,s) satisfying (21) in our adaptive Langevin diffusion, for a class of unbounded and Lipschitz observables including all A, B ∈ {e_1, ..., e_d, x_1, ..., x_n}.
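The defining property (21) can be checked numerically by a finite difference: run the dynamics twice with a common Brownian path, once with the drift perturbed by ε h(t), and difference the observable. The sketch below does this for a one-dimensional Ornstein–Uhlenbeck toy dynamics (an illustrative stand-in, not the paper's model) with A(θ) = B(θ) = θ and h ≡ 1, where the response integral ∫_0^T R(T,s) ds = 1 − e^{−T} is known in closed form:

```python
import numpy as np

dt, T, eps, n_paths = 1e-3, 2.0, 1e-2, 1000
n_steps = int(T / dt)
rng = np.random.default_rng(2)

th = np.zeros(n_paths)       # unperturbed OU paths: dθ = -θ dt + √2 db
th_eps = np.zeros(n_paths)   # perturbed drift -θ + ε h(t), SAME noise
for _ in range(n_steps):
    db = np.sqrt(dt) * rng.normal(size=n_paths)
    th = th - th * dt + np.sqrt(2.0) * db
    th_eps = th_eps + (-th_eps + eps) * dt + np.sqrt(2.0) * db

# Finite-difference estimate of ∫_0^T R(T,s) h(s) ds, cf. (21);
# for the OU response R(t,s) = e^{-(t-s)} this equals 1 - e^{-T}.
resp_int = (th_eps.mean() - th.mean()) / eps
```

Using a common noise realization makes the path difference deterministic for this linear toy model, so the estimate matches 1 − e^{−T} up to O(dt) discretization error.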
Let {θ^t, α̂^t}_{t≥0} be the solution to (4–5) with the given initial condition x = (θ^0, α̂^0) of Assumption 2.1, and define the corresponding correlation and response matrices

C_θ(t,s) = ( ⟨e_j(θ^t) e_k(θ^s)⟩_x )_{j,k=1}^d,   C_θ(t,∗) = ( ⟨e_j(θ^t) e_k(θ*)⟩_x )_{j,k=1}^d,   R_θ(t,s) = ( R^x_{e_j e_k}(t,s) )_{j,k=1}^d,
C_η(t,s) = ( ⟨x_j(θ^t) x_k(θ^s)⟩_x )_{j,k=1}^n,   R_η(t,s) = ( R^x_{x_j x_k}(t,s) )_{j,k=1}^n   (22)

for the above coordinate observables e_1, ..., e_d, x_1, ..., x_n. The following is the second main result of our work, showing that the correlation and response kernels C_θ, C_η, R_θ, R_η defining the DMFT limit in Theorem 2.5 are the almost-sure limits of the normalized traces of these matrices, i.e. the correlations and self-responses of the observables e_j and x_i averaged across coordinates j = 1, ..., d and i = 1, ..., n.

Theorem 2.8. Suppose Assumptions 2.1, 2.2, and 2.3 hold, and (θ, α) ↦ ∇²_{(θ,α)} s(θ, α) and (θ, α) ↦ ∇²_{(θ,α)} G_k(α, P̂(θ)) are uniformly Hölder-continuous for each k = 1, ..., K. Let C_θ, C_η, R_θ, R_η be the correlation and response kernels of the solution to the DMFT
system (12–18) given by Theorem 2.4. Then for any fixed t ≥ s ≥ 0, almost surely as n, d → ∞,

d^{−1} Tr C_θ(t,s) → C_θ(t,s),   d^{−1} Tr C_θ(t,∗) → C_θ(t,∗),   n^{−1} Tr C_η(t,s) → C_η(t,s),
d^{−1} Tr R_θ(t,s) → R_θ(t,s),   n^{−1} Tr R_η(t,s) → R_η(t,s).

The proof of Theorem 2.8 is provided in Section 5. We note that the convergence of d^{−1} Tr C_θ and n^{−1} Tr C_η is an immediate consequence of Corollary 2.6. The additional content of this theorem is the convergence of d^{−1} Tr R_θ and n^{−1} Tr R_η, which relies on an inductive analysis of dynamics at a single particle level using a dynamical cavity argument.

Remark 2.9. By an argument similar to our proof of Theorem 2.8, one may show that the DMFT response kernels R_θ(t,s) and R_η(t,s) also represent the limits of d^{−1} Tr R_θ(t,s) and n^{−1} Tr R_η(t,s) defined for a non-adaptive version of the dynamics

dθ̃^t = [ −βX^⊤(Xθ̃^t − y) + (s(θ̃^t_j, α^t))_{j=1}^d ] dt + √2 db^t

which replaces the adaptively-evolving drift parameter {α̂^t}_{t≥0} by its deterministic DMFT limit {α^t}_{t≥0}. The response matrices R_θ, R_η for this non-adaptive dynamics {θ̃^t}_{t≥0} are different from those for the adaptive dynamics (4–5), in that a perturbation in the adaptive system affects {α̂^t}_{t≥s} whereas it does not change {α^t}_{t≥s} in the non-adaptive system. However, our result implies that the almost-sure limits of d^{−1} Tr R_θ and n^{−1} Tr R_η coincide for these two dynamics, i.e. the propagation of the effect of the perturbation through {α̂^t} is negligible in the large-(n,d) limit.

The remainder of this paper will prove the preceding results of Theorems 2.4, 2.5, and 2.8.

3 Existence and uniqueness of the DMFT fixed point

In this section we prove Theorem 2.4. We assume throughout Assumptions 2.1, 2.2, and 2.3. Section 3.1 defines the spaces S(T) and S(T)^cont and proves Theorem 2.4(a) on existence and uniqueness of the processes (12–15). Section 3.2 then proves Theorem 2.4(b) on existence and uniqueness of the dynamical fixed point via a contractive mapping argument similar to that of [42].
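The contraction viewpoint developed below has a direct numerical counterpart: linear Volterra equations of the type appearing in (14–15) and (26–27) can be solved by Picard iteration R ← f + K ∗ R on a time grid, converging geometrically when the kernel's integral is below 1, in parallel with the Laplace-transform contraction of Section 3.2. A toy sketch for a scalar equation R(t) = f(t) + ∫_0^t K(t−s)R(s) ds, with the illustrative choices K(t) = e^{−t} and f ≡ 1 (exact solution R(t) = 1 + t), neither of which is from the paper:

```python
import numpy as np

dt, T = 1e-2, 2.0
t = np.arange(0.0, T + dt, dt)
K = np.exp(-t)       # illustrative convolution kernel
f = np.ones_like(t)  # illustrative forcing

def conv(K, R, dt):
    """Trapezoidal discretization of (K * R)(t_i) = int_0^{t_i} K(t_i - s) R(s) ds."""
    out = np.zeros_like(R)
    for i in range(1, len(R)):
        vals = K[i::-1] * R[: i + 1]
        out[i] = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return out

R = np.zeros_like(t)
for _ in range(300):                 # Picard iteration R <- f + K * R
    R_new = f + conv(K, R, dt)
    if np.max(np.abs(R_new - R)) < 1e-12:
        R = R_new
        break
    R = R_new
```

Here the iteration contracts at rate roughly ∫_0^T K(s) ds = 1 − e^{−T} < 1, mirroring how smallness of a Laplace-transformed kernel drives the fixed-point arguments below.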
3.1 The function spaces S(T) and S(T)^cont

Let τ*² = E(θ*)² and σ² = E ε², and let C_0 > 0 denote a constant larger than the constants C > 0 of (8) and (9). Consider the following system of equations for functions Φ_α, Φ_{Cθ}, Φ_{Cη}, Φ_{Rθ}, Φ_{Rη} on [0,∞):

(d/dt) Φ_α(t) = 4.1 C_0 (1 + Φ_{Cθ}(t)) + 3 C_0 Φ_α(t)   with Φ_α(0) = ‖α^0‖₂,   (23)

(d/dt) Φ_{Cθ}(t) = (6δ²β² + 18C_0² + 1.1) Φ_{Cθ}(t) + 6 ∫_0^t (t−s+1)² Φ²_{Rη}(t−s) Φ_{Cθ}(s) ds
  + 6( δ²β²τ*² + 3C_0² + 3C_0² Φ_α(t) + ∫_0^t (t−s+1)² Φ²_{Rη}(t−s) ds · τ*² + Φ_{Cη}(t) ) + 2   with Φ_{Cθ}(0) = E(θ^0)²,   (24)

Φ_{Cη}(t) = 2δβ²[ (1/δ) ∫_0^t (t−s+1)² Φ²_{Rθ}(t−s) Φ_{Cη}(s) ds + 2Φ_{Cθ}(t) + 2τ*² + σ² ],   (25)

(d/dt) Φ_{Rθ}(t) = (δ|β| + C_0) Φ_{Rθ}(t) + ∫_0^t Φ_{Rη}(t−s) Φ_{Rθ}(s) ds   with Φ_{Rθ}(0) = 1,   (26)

Φ_{Rη}(t) = |β|( ∫_0^t Φ_{Rθ}(t−s) Φ_{Rη}(s) ds + δ|β| Φ_{Rθ}(t) ).   (27)

Lemma 3.1. The system (23–27) has a unique continuous solution. Defining

E(λ) = { continuous functions f : R_+ → R_+ such that ∫_0^∞ e^{−λs} f(s) ds < ∞ },

for any sufficiently large constant λ > 0, this solution satisfies Φ_α, Φ_{Cθ}, Φ_{Cη}, Φ_{Rθ}, Φ_{Rη} ∈ E(λ).

Proof. Let Φ_η = (Φ_{Cη}, Φ_{Rη}), Φ_θ = (Φ_α, Φ_{Cθ}, Φ_{Rθ}), and Φ = (Φ_η, Φ_θ). For any two continuous solutions Φ and Φ̃, there exists some M > 0 such that all components of both solutions are uniformly bounded over [0,T] by M. The above equations then imply

‖Φ_θ(t) − Φ̃_θ(t)‖ ≤ ∫_0^t C ‖Φ(s) − Φ̃(s)‖ ds,   ‖Φ_η(t) − Φ̃_η(t)‖ ≤ C( ∫_0^t ‖Φ(s) − Φ̃(s)‖ ds + ‖Φ_θ(t) − Φ̃_θ(t)‖ )

for a constant C > 0 depending on M, T. Applying Gronwall's lemma to the second inequality shows

sup_{s∈[0,t]} ‖Φ_η(s) − Φ̃_η(s)‖ ≤ C' sup_{s∈[0,t]} ‖Φ_θ(s) − Φ̃_θ(s)‖.   (28)

Then applying this in the first inequality gives

‖Φ_θ(t) − Φ̃_θ(t)‖ ≤ ∫_0^t C'' sup_{r∈[0,s]} ‖Φ_θ(r) − Φ̃_θ(r)‖ ds,

so Gronwall's lemma applied again shows Φ_θ(t) = Φ̃_θ(t) for all t ∈ [0,T].
Then by (28), also Φ(t) = Φ̃(t) for all t ∈ [0,T], so any continuous solution to (23–27) is unique. It remains to show existence of a continuous solution with all components in E(λ). Consider (26–27) as a mapping from
Φ_{Rθ}, Φ_{Rη} on the right side to Φ̃_{Rθ}, Φ̃_{Rη} on the left side, i.e.

Φ̃_{Rθ}(t) = 1 + ∫_0^t ( (δ|β| + C_0) Φ_{Rθ}(t') + ∫_0^{t'} Φ_{Rη}(t'−s) Φ_{Rθ}(s) ds ) dt',
Φ̃_{Rη}(t) = |β|( ∫_0^t Φ_{Rθ}(t−s) Φ_{Rη}(s) ds + δ|β| Φ_{Rθ}(t) ).

If Φ_{Rθ}, Φ_{Rη} ∈ E(λ), then writing L_θ(λ) = ∫_0^∞ Φ_{Rθ}(s) e^{−λs} ds for the Laplace transform of Φ_{Rθ}, and similarly writing L_η, L̃_θ, L̃_η for those of Φ_{Rη}, Φ̃_{Rθ}, Φ̃_{Rη}, taking Laplace transforms of the above gives

λ L̃_θ(λ) − 1 = (δ|β| + C_0) L_θ(λ) + L_η(λ) L_θ(λ),
L̃_η(λ) = |β| L_θ(λ) L_η(λ) + δβ² L_θ(λ).   (29)

This implies in particular that L̃_θ(λ), L̃_η(λ) < ∞, i.e. Φ̃_{Rθ}, Φ̃_{Rη} ∈ E(λ). For ι > 0, define further

E(λ, ι) = {(Φ_{Rθ}, Φ_{Rη}) : L_θ(λ) ≤ ι, L_η(λ) ≤ (δβ² + 1)ι}.

If (Φ_{Rθ}, Φ_{Rη}) ∈ E(λ, ι), then for ι > 0 sufficiently small and λ > 0 sufficiently large, this implies L̃_θ(λ) ≤ λ^{−1}(1 + (δ|β| + C_0)ι + (δβ² + 1)ι²) ≤ ι and L̃_η(λ) ≤ |β|(δβ² + 1)ι² + δβ²ι ≤ (δβ² + 1)ι, so (Φ̃_{Rθ}, Φ̃_{Rη}) ∈ E(λ, ι). For two pairs of inputs (L¹_θ, L¹_η) and (L²_θ, L²_η), note by (29) that the corresponding outputs satisfy

|L̃¹_θ − L̃²_θ| ≤ ((δ|β| + C_0)/λ) |L¹_θ − L²_θ| + (|L¹_η|/λ) |L¹_θ − L²_θ| + (|L²_θ|(δβ² + 1)/λ) · |L¹_η − L²_η|/(δβ² + 1),
|L̃¹_η − L̃²_η|/(δβ² + 1) ≤ |β| |L¹_θ| · |L¹_η − L²_η|/(δβ² + 1) + (|β| |L²_η|/(δβ² + 1)) |L¹_θ − L²_θ| + (δβ²/(δβ² + 1)) |L¹_θ − L²_θ|.

Thus, defining a weighted L¹-norm on E(λ) × E(λ) given by ‖(Φ_{Rθ}, Φ_{Rη})‖ = |L_θ(λ)| + (δβ² + 1)^{−1} |L_η(λ)|, one may check from the above that the mapping (Φ_{Rθ}, Φ_{Rη}) ↦ (Φ̃_{Rθ}, Φ̃_{Rη}) is Lipschitz on E(λ) × E(λ) with respect to ‖·‖, with Lipschitz constant at most

(δ|β| + C_0)/λ + 2(δβ² + 1)ι/λ + 2|β|ι + δβ²/(δβ² + 1).

This is less than 1 for ι > 0 sufficiently small and λ > 0 sufficiently large, so (Φ_{Rθ}, Φ_{Rη}) ↦ (Φ̃_{Rθ}, Φ̃_{Rη}) is a contraction with respect to ‖·‖. This norm is complete on E(λ) × E(λ), and E(λ, ι) is closed in E(λ) × E(λ), so by the Banach fixed-point theorem, there exists a unique fixed point (Φ_{Rθ}, Φ_{Rη}) ∈ E(λ, ι) ⊂ E(λ) × E(λ) which is a solution to (26–27). Given this solution to (26–27), consider now (23–25) as a mapping from (Φ_α, Φ_{Cθ}, Φ_{Cη}) on the right side to (Φ̃_α, Φ̃_{Cθ}, Φ̃_{Cη}) on the left side.
Now let L_α(λ), L_θ(λ), L_η(λ) denote the Laplace transforms of (Φ_α, Φ_{Cθ}, Φ_{Cη}), and define also the Laplace transforms K_η(λ) = ∫_0^∞ (s+1)² Φ²_{Rη}(s) e^{−λs} ds and K_θ(λ) = ∫_0^∞ (s+1)² Φ²_{Rθ}(s) e^{−λs} ds. Choosing λ large enough so that K_η(λ), K_θ(λ) < ∞, if Φ_α, Φ_{Cθ}, Φ_{Cη} ∈ E(λ), then taking Laplace transforms of (23–25) gives

λ L̃_α(λ) − ‖α^0‖₂ = 4.1C_0/λ + 4.1C_0 L_θ(λ) + 3C_0 L_α(λ),
λ L̃_θ(λ) − E(θ^0)² = C_1 L_θ(λ) + 6K_η(λ) L_θ(λ) + 18C_0² L_α(λ) + (6τ*²/λ) K_η(λ) + 6L_η(λ) + C_2/λ,
L̃_η(λ) = 2β² K_θ(λ) L_η(λ) + 4δβ² L_θ(λ) + C_3/λ,

for some constants C_1, C_2, C_3 depending only on δ, β, C_0, σ², τ*². For small ι > 0, suppose further that (Φ_α, Φ_{Cθ}, Φ_{Cη}) ∈ E(λ, ι) where

E(λ, ι) = {(Φ_α, Φ_{Cθ}, Φ_{Cη}) : L_α(λ) ≤ ι, L_θ(λ) ≤ ι, L_η(λ) ≤ (4δβ² + 1)ι}.

Then, using that lim_{λ→∞} K_θ(λ) = 0 and lim_{λ→∞} K_η(λ) = 0, for sufficiently large λ > 0 and small ι > 0, the above Laplace transform equations imply (Φ̃_α, Φ̃_{Cθ}, Φ̃_{Cη}) ∈ E(λ, ι). Furthermore, defining the norm ‖(Φ_α, Φ_{Cθ}, Φ_{Cη})‖ = |L_α(λ)| + |L_θ(λ)| + (4δβ² + 1)|L_η(λ)|, it may be verified as above that the mapping (Φ_α, Φ_{Cθ}, Φ_{Cη}) ↦ (Φ̃_α, Φ̃_{Cθ}, Φ̃_{Cη}) is Lipschitz in ‖·‖ on E(λ, ι), with Lipschitz constant at most

7.1C_0/λ + (C_1 + 6K_η(λ) + 18C_0² + 6(4δβ² + 1))/λ + 2β² K_θ(λ) + 4δβ²/(4δβ² + 1).

For sufficiently large λ > 0, this is again less than 1, so there exists a unique fixed point (Φ_α, Φ_{Cθ}, Φ_{Cη}) ∈ E(λ, ι) ⊂ E(λ) × E(λ) × E(λ) which solves (23–25).

Let (Φ_α, Φ_{Cθ}, Φ_{Cη}, Φ_{Rθ}, Φ_{Rη}) be the above solution to (23–27). For any T > 0 and finite set D = {d_1, ..., d_m} ⊂ (0,T), we call [0, d_1), [d_1, d_2), ..., [d_m, T] the maximal intervals of [0,T] \ D. Fixing T > 0 and denoting

(α, C_θ, C_η, R_θ, R_η) ≡ {α^t, C_θ(t,s), C_θ(t,∗), C_θ(∗,∗), C_η(t,s), R_θ(t,s), R_η(t,s)}_{0≤s≤t≤T},

we define the space S ≡ S(T) in Theorem 2.4 as

S = {(α, C_θ, C_η, R_θ, R_η) : (R_η, C_η, α) ∈ S_η, (R_θ, C_θ, α) ∈ S_θ}.   (30)

Here S_η ≡ S_η(T) is the collection of (R_η, C_η, α) such that, for some (possibly empty) discontinuity set D ⊂ (0,T) of at most finite cardinality:

• C_η is a positive-semidefinite covariance kernel on [0,T] (identifying C_η(s,t) = C_η(t,s)) satisfying

C_η(t,t) ≤ Φ_{Cη}(t) for
all 0 ≤ t ≤ T.   (31)

Furthermore, C_η(t,s) is uniformly continuous over s, t ∈ I for each maximal interval I of [0,T] \ D, and satisfies

|C_η(t,t) − 2C_η(t,s) + C_η(s,s)| ≤ 3β²[ ( T³ sup_{r∈[0,T]} Φ'_{Rθ}(r)² + T sup_{r∈[0,T]} Φ_{Rθ}(r)² ) sup_{r∈[0,T]} Φ_{Cη}(r) + δ( 2T sup_{r∈[0,T]} Φ'_{Cθ}(r) + 4 ) ] · |t − s|   for all s, t ∈ I.   (32)

• R_η(t,s) satisfies

|R_η(t,s)| ≤ Φ_{Rη}(t − s) for all 0 ≤ s ≤ t ≤ T.   (33)

Furthermore, R_η(t,s) is uniformly continuous over s ∈ I' and t ∈ I for any two (possibly equal) maximal intervals I, I' of [0,T] \ D.

• α^t satisfies

‖α^t‖₂ ≤ Φ_α(t) for all 0 ≤ t ≤ T,   (34)

and is uniformly continuous on each maximal interval I of [0,T] \ D.

Similarly S_θ ≡ S_θ(T) is the set of (R_θ, C_θ, α) such that

• C_θ is a positive-semidefinite covariance kernel on {∗} ∪ [0,T] (identifying C_θ(s,t) = C_θ(t,s) and C_θ(t,∗) = C_θ(∗,t)) satisfying

C_θ(t,t) ≤ Φ_{Cθ}(t) for all 0 ≤ t ≤ T.   (35)

Furthermore, C_θ(t,s) is uniformly continuous over s, t ∈ I for each maximal interval I of [0,T] \ D and satisfies

|C_θ(t,t) − 2C_θ(t,s) + C_θ(s,s)| ≤ ( 2T sup_{r∈[0,T]} Φ'_{Cθ}(r) + 4 ) |t − s|   for all s, t ∈ I,   (36)

and C_θ(t,∗) is uniformly continuous over t ∈ I.

• R_θ(t,s) satisfies

|R_θ(t,s)| ≤ Φ_{Rθ}(t − s) for all 0 ≤ s ≤ t ≤ T.   (37)

Furthermore, R_θ(t,s) is uniformly continuous over s ∈ I' and t ∈ I for any two (possibly equal) maximal intervals I, I' of [0,T] \ D, and satisfies

|R_θ(t',s) − R_θ(t,s)| ≤ ( sup_{r∈[0,T]} Φ'_{Rθ}(r) ) |t' − t|   for each fixed s ∈ [0,T] and all t, t' ∈ [s,T] ∩ I.   (38)

• α^t satisfies (34) and is uniformly continuous on each maximal interval I of [0,T] \ D.

We define S^cont(T) ≡ S^cont ⊂ S, S^cont_η(T) ≡ S^cont_η ⊂ S_η, S^cont_θ(T) ≡ S^cont_θ ⊂ S_θ as the subsets of the above spaces where D = ∅, i.e. the above continuity conditions hold on all of [0,T].

Remark 3.2.
By (32), letting {u^t}_{t∈[0,T]} be a mean-zero Gaussian process with covariance C_η, for any maximal interval I of [0,T] \ D, any s, t ∈ I, and some constant C > 0,

E(u^t − u^s)⁴ = 3[E(u^t − u^s)²]² ≤ C|t − s|².

Then Kolmogorov's continuity theorem ([57, Theorem 2.9]) implies that there exists a modification of {u^t}_{t∈[0,T]} that is uniformly Hölder continuous on each such maximal interval I, and similarly for {w^t}_{t∈[0,T]} with covariance C_θ satisfying (36). We will always take {u^t} and {w^t} to be the versions of these processes that satisfy this Hölder continuity.

Let us now establish existence and uniqueness of the solutions to (12–15) given (α, C_θ, C_η, R_θ, R_η) ∈ S.

Lemma 3.3. Fix any T > 0, any (R_η, C_η, α) ∈ S_η, and any realizations of θ^0, θ*, {b^t}_{t≤T} and {u^t}_{t≤T}. Then there exist unique F^θ_t-adapted processes {θ^t}_{t≤T} and {∂θ^t/∂u^s}_{s≤t≤T} solving (12–13).

Proof. Consider the drift function

v(t, {θ^s}_{s≤t}) = −δβ(θ^t − θ*) + s(θ^t, α^t) + ∫_0^t R_η(t,s)(θ^s − θ*) ds + u^t.

Conditioning on θ^0, θ* and {u^t}, and writing 0 for the process θ^t ≡ 0, we have (with probability 1 over θ^0, θ* and {u^t})

sup_{t∈[0,T]} |v(t, 0)| ≤ δ|βθ*| + sup_{t∈[0,T]} |s(0, α^t)| + ∫_0^T Φ_{Rη}(t) dt · |θ*| + sup_{t∈[0,T]} |u^t| < ∞.

Furthermore, for all t ∈ [0,T],

|v(t, {θ^s}_{s≤t}) − v(t, {θ̃^s}_{s≤t})| ≤ ( δ|β| + sup_{(θ,α)∈R×R^K} |∂_θ s(θ, α)| + ∫_0^T Φ_{Rη}(s) ds ) sup_{s∈[0,t]} |θ^s − θ̃^s|,

showing under Assumption 2.2 that {θ^s}_{s≤t} ↦ v(t, {θ^s}_{s≤t}) is Lipschitz in the norm of uniform convergence, uniformly over t ∈ [0,T]. Then existence and uniqueness of a solution {θ^t}_{t≤T} with θ^t|_{t=0} = θ^0 adapted to the filtration of {b^t}_{t≥0} is classical, see e.g. [58, Theorem 11.2]. This solution is a measurable function of θ^0, θ*, and {u^t}, and hence is also F^θ_t-adapted. Conditioning now on {θ^t}, for any fixed s ∈ [0,T], consider

v(t, {x^{s'}}_{s'∈[s,t]}) = −( δβ − ∂_θ s(θ^t, α^t) ) x^t + ∫_s^t R_η(t,s') x^{s'} ds'.
This satisfies v(t, 0) ≡ 0 for all t ∈ [s,T] and

|v(t, {x^{s'}}_{s'∈[s,t]}) − v(t, {x̃^{s'}}_{s'∈[s,t]})| ≤ ( δ|β| + sup_{(θ,α)∈R×R^K} |∂_θ s(θ, α)| + ∫_0^T Φ_{Rη}(t) dt ) sup_{s'∈[s,t]} |x^{s'} − x̃^{s'}|,

so {x^{s'}}_{s'∈[s,t]} ↦ v(t, {x^{s'}}_{s'∈[s,t]}) is also Lipschitz in the norm of uniform convergence, uniformly over t ∈ [s,T]. Then again for each s ∈ [0,T], there exists a unique solution {∂θ^t/∂u^s}_{t∈[s,T]} with ∂θ^t/∂u^s|_{t=s} = 1, which is adapted to the filtration F_t ≡ F({θ^r}_{r∈[s,t]}) and hence also to F^θ_t, showing the lemma.

Lemma 3.4. Fix any T > 0, any (R_θ, C_θ, α) ∈ S_θ, and any realizations
of ε and (w*, {w^t}_{t≤T}). Then there exist unique F^η_t-adapted processes {η^t}_{t≤T} and {∂η^t/∂w^s}_{s≤t≤T} solving (14–15).

Proof. Conditional on ε and (w*, {w^t}), the equations (14–15) are linear Volterra integral equations for which the kernel (s,t) ↦ R_θ(t,s) is continuous on each maximal interval I of [0,T] \ D. Then, for each maximal interval I = [a,b), given the values of {η^t} for t ∈ [0,a], existence and uniqueness of {η^t}_{t∈[a,b)} is classical and follows from e.g. [59, Theorem 2.1.2]. Applying this successively to each maximal interval I shows existence and uniqueness of {η^t} over t ∈ [0,T]. A similar argument shows, for each fixed s ∈ [0,T], the existence and uniqueness of {∂η^t/∂w^s} over t ∈ [s,T]. Here ∂η^t/∂w^s is deterministic by its definition, while η^t is a measurable function of ε, w*, {w^s}_{s≤t} and hence is adapted to F^η_t.

Proof of Theorem 2.4(a). This follows from Lemmas 3.3 and 3.4.

3.2 Contractive mapping

We fix T > 0. For any (R_η, C_η, α) ∈ S_η, define a map T_{η→θ} : (R_η, C_η, α) → (R_θ, C_θ, α̃) by

R_θ(t,s) = E[ ∂θ^t/∂u^s ],   C_θ(t,s) = E[θ^t θ^s],   C_θ(t,∗) = E[θ^t θ*],   C_θ(∗,∗) = E[(θ*)²],
(d/dt) α̃^t = G(α̃^t, P(θ^t))   with α̃^t|_{t=0} = α^0,

where {θ^t}_{t∈[0,T]} and {∂θ^t/∂u^s}_{0≤s≤t≤T} are the unique solutions to (12–13) given (R_η, C_η, α) and θ^0, θ*, {u^t}, guaranteed by Lemma 3.3, and P(θ^t) is the law of θ^t. Similarly, for any (R_θ, C_θ, α̃) ∈ S_θ, define a map T_{θ→η} : (R_θ, C_θ, α̃) → (R_η, C_η, α) by

R_η(t,s) = δβ E[ ∂η^t/∂w^s ],   C_η(t,s) = δβ² E[(η^t + w* − ε)(η^s + w* − ε)],   α^t = α̃^t,

where {η^t}_{t∈[0,T]} and {∂η^t/∂w^s}_{0≤s<t≤T} are the unique solutions to (14–15) given (R_θ, C_θ, α̃) and ε, w*, {w^t}, guaranteed by Lemma 3.4. Finally, define the composite maps

T_{η→η} = T_{θ→η} ∘ T_{η→θ},   T_{θ→θ} = T_{η→θ} ∘ T_{θ→η}.   (39)

The rest of this subsection is divided into two parts:

• (Part 1) We show in Lemma 3.5 (resp. Lemma 3.6) that T_{θ→η} maps S_θ into S_η (resp. T_{η→θ} maps S_η into S_θ).
• (Part 2) We equip S_η and S_θ with certain metrics and derive the moduli of continuity of the maps T_{η→θ} and T_{θ→η} in Lemmas 3.7 and 3.8, thereby concluding that T_{η→η} and T_{θ→θ} in (39) are contractions under these metrics.

Lemma 3.5. T_{θ→η} maps S_θ into S_η, and S^cont_θ into S^cont_η.

Proof. (Condition for C_η) Define ξ^t = η^t + w* − ε, so that

ξ^t = −β ∫_0^t R_θ(t,s) ξ^s ds − w^t + w* − ε

and C_η(t,s) = δβ² E[ξ^t ξ^s]. Then by Cauchy–Schwarz,

C_η(t,t) ≤ 2δβ² E[ β²( ∫_0^t R_θ(t,s) ξ^s ds )² + (w^t − w* + ε)² ]
  ≤ 2δβ²[ β² ∫_0^t (t−s+1)² R_θ(t,s)² E(ξ^s)² ds · ∫_0^t (t−s+1)^{−2} ds + 2C_θ(t,t) + 2τ*² + σ² ]
  ≤ 2δβ²[ (1/δ) ∫_0^t (t−s+1)² Φ²_{Rθ}(t−s) C_η(s,s) ds + 2Φ_{Cθ}(t) + 2τ*² + σ² ].   (40)

Recalling the equation for Φ_{Cη}(·) in (25),

Φ_{Cη}(t) = 2δβ²[ (1/δ) ∫_0^t (t−s+1)² Φ²_{Rθ}(t−s) Φ_{Cη}(s) ds + 2Φ_{Cθ}(t) + 2τ*² + σ² ],

Gronwall's inequality implies that C_η(t,t) ≤ Φ_{Cη}(t), showing (31). We now check (32) on each maximal interval I of [0,T] \ D, where D ⊂ (0,T) is the discontinuity set of S_θ. Note that

C_η(t,t) − 2C_η(t,s) + C_η(s,s) = δβ² E[(ξ^t − ξ^s)²]
  = δβ² E[( −β ∫_0^t R_θ(t,r) ξ^r dr + β ∫_0^s R_θ(s,r) ξ^r dr − w^t + w^s )²]
  ≤ 3δβ²[ β² E( ∫_0^s ( R_θ(t,r) − R_θ(s,r) ) ξ^r dr )² + β² E( ∫_s^t R_θ(t,r) ξ^r dr )² + E(w^t − w^s)² ].
Using δβ² E(ξ^t)² = C_η(t,t) ≤ Φ_{Cη}(t) established above, together with the continuity conditions (38) for R_θ and (36) for C_θ, for any s, t ∈ I it holds that

E( ∫_0^s ( R_θ(t,r) − R_θ(s,r) ) ξ^r dr )² ≤ s ∫_0^s (R_θ(t,r) − R_θ(s,r))² E(ξ^r)² dr ≤ (T²/δβ²)( sup_{r∈[0,T]} Φ'_{Rθ}(r) )² · sup_{r∈[0,T]} Φ_{Cη}(r) · |t − s|²,
E( ∫_s^t R_θ(t,r) ξ^r dr )² ≤ (t − s) ∫_s^t R_θ(t,r)² E(ξ^r)² dr ≤ (1/δβ²) sup_{r∈[0,T]} Φ_{Rθ}(r)² · sup_{r∈[0,T]} Φ_{Cη}(r) · |t − s|²,
E(w^t − w^s)² = C_θ(t,t) − 2C_θ(t,s) + C_θ(s,s) ≤ ( 2T sup_{r∈[0,T]} Φ'_{Cθ}(r) + 4 ) · |t − s|.

Combining these bounds shows (32) over s, t ∈ I. Applying |E[ξ^s ξ^t − ξ^{s'} ξ^{t'}]|² ≤ 2E(ξ^s − ξ^{s'})² E(ξ^{t'})² + 2E(ξ^{s'})² E(ξ^t − ξ^{t'})², this shows also that C_η(t,s) is uniformly continuous over s, t ∈ I. If (C_θ, R_θ, α̃) ∈ S^cont_θ, then D = ∅ so this maximal interval is I = [0,T].

(Condition for R_η) By definition, R_η(t,s) = β[ −∫_s^t R_θ(t,s') R_η(s',s) ds' + δβ R_θ(t,s) ], hence

|R_η(t,s)| ≤ |β|( ∫_s^t |R_θ(t,s')| |R_η(s',s)| ds' + δ|β R_θ(t,s)| ) ≤ |β|( ∫_0^{t−s} Φ_{Rθ}(t−s−s') |R_η(s+s', s)| ds' + δ|β| Φ_{Rθ}(t−s) ).

Recalling the equation for Φ_{Rη} in (27),

Φ_{Rη}(t−s) = |β|( ∫_0^{t−s} Φ_{Rθ}(t−s−s') Φ_{Rη}(s') ds' + δ|β| Φ_{Rθ}(t−s) ),

this implies for all t ∈ [s,T] that |R_η(t,s)| ≤ Φ_{Rη}(t−s), verifying (33).
To show uniform continuity on each pair of maximal intervals I, I' defining S_θ, observe first that for any s, s' ∈ I' and τ ≥ 0 for which s + τ, s' + τ ∈ I,

|R_η(s'+τ, s') − R_η(s+τ, s)|
  ≤ |β| · | ∫_s^{s+τ} R_θ(s+τ, r) R_η(r, s) dr − ∫_{s'}^{s'+τ} R_θ(s'+τ, r) R_η(r, s') dr | + δβ² |R_θ(s'+τ, s') − R_θ(s+τ, s)|
  ≤ |β| ∫_0^τ | R_θ(s+τ, s+r) R_η(s+r, s) − R_θ(s'+τ, s'+r) R_η(s'+r, s') | dr + δβ² |R_θ(s'+τ, s') − R_θ(s+τ, s)|
  ≤ |β| ∫_0^τ |R_θ(s+τ, s+r)| · |R_η(s'+r, s') − R_η(s+r, s)| dr
   + |β| ∫_0^τ |R_θ(s+τ, s+r) − R_θ(s'+τ, s'+r)| · |R_η(s'+r, s')| dr + δβ² |R_θ(s'+τ, s') − R_θ(s+τ, s)|.

Denoting by o_{|s−s'|}(1) an error that converges
to 0 uniformly in τ as |s − s'| → 0, observe that the last term above is o_{|s−s'|}(1) by the uniform continuity of R_θ on I' × I. For the second term, writing the range of integration as [0,τ] = A ∪ B, where r ∈ A are the values for which s+r, s'+r belong to a single maximal interval of [0,T] \ D and r ∈ B are the values for which s+r, s'+r belong to two different maximal intervals, the integral over r ∈ A is o_{|s−s'|}(1) again by the continuity of R_θ, while the integral over r ∈ B is also o_{|s−s'|}(1) by the boundedness of R_θ, R_η and the bound |B| ≤ C|s−s'| for the total length of B. Putting this together,

|R_η(s'+τ, s') − R_η(s+τ, s)| ≤ C ∫_0^τ |R_η(s'+r, s') − R_η(s+r, s)| dr + o_{|s−s'|}(1).

Since R_η(s', s') = R_η(s, s), the above and Gronwall's inequality imply that

|R_η(s'+τ, s') − R_η(s+τ, s)| = o_{|s−s'|}(1)   (41)

uniformly in τ. Now for any s ∈ I' and τ' ≥ τ ≥ 0 for which s+τ, s+τ' ∈ I,

|R_η(s+τ', s) − R_η(s+τ, s)| ≤ |β| ∫_s^{s+τ} |R_θ(s+τ', r) − R_θ(s+τ, r)| |R_η(r, s)| dr + ∫_{s+τ}^{s+τ'} |R_θ(s+τ', r)| · |R_η(r, s)| dr + δ|β| |R_θ(s+τ', s) − R_θ(s+τ, s)|,

so the continuity of R_θ and boundedness of R_θ, R_η again imply that

|R_η(s+τ', s) − R_η(s+τ, s)| = o_{|τ−τ'|}(1)   (42)

uniformly in s. The statements (41) and (42) show that (s, τ) ↦ R_η(s+τ, s) is uniformly continuous over {(s, τ) : s ∈ I', τ ≥ 0, s+τ ∈ I}, implying uniform continuity of (s, t) ↦ R_η(t, s) over (s, t) ∈ I' × I. Again if (C_θ, R_θ, α̃) ∈ S^cont_θ, then this continuity holds over all of I = [0,T].

(Condition for α) By definition, the mapping α̃ ↦ α under T_{θ→η} is the identity, so the required conditions for α hold by those assumed for α̃.

Lemma 3.6. T_{η→θ} maps S_η into S^cont_θ.

Proof. (Condition for C_θ) To verify (35), denote

v^t = −δβ(θ^t − θ*) + s(θ^t, α^t) + ∫_0^t R_η(t,s)(θ^s − θ*) ds + u^t.

Applying Itô's formula to (θ^t)² yields

(d/dt) C_θ(t,t) = (d/dt) E(θ^t)² = E[2θ^t v^t] + 2 ≤ 1.1 · E(θ^t)² + E(v^t)² + 2.   (43)

(The bound holds with 1 in place of 1.1, and we enlarge this to 1.1 to accommodate a later discretized version of this computation.)
Using an argument similar to (40), and letting $C_0>0$ be the constant defining (23–27) which upper bounds $C>0$ in (8) of Assumption 2.2, we may bound $\mathbb{E}(v_t)^2$ as
$$\begin{aligned}
\mathbb{E}(v_t)^2&\le6\bigg[\delta^2\beta^2\,\mathbb{E}(\theta_t)^2+\delta^2\beta^2\tau_*^2+\mathbb{E}[s(\theta_t,\alpha_t)^2]+\mathbb{E}\Big(\int_0^tR_\eta(t,s)\theta_s\,ds\Big)^2+\mathbb{E}\Big(\int_0^tR_\eta(t,s)\theta^*\,ds\Big)^2+\mathbb{E}(u_t)^2\bigg]\\
&\le(6\delta^2\beta^2+18C_0^2)\,\mathbb{E}(\theta_t)^2+6\int_0^t(t-s+1)^2\Phi_{R_\eta}^2(t-s)\,\mathbb{E}(\theta_s)^2\,ds\\
&\qquad+6\Big(\delta^2\beta^2\tau_*^2+3C_0^2+3C_0^2\Phi_\alpha(t)+\int_0^t(t-s+1)^2\Phi_{R_\eta}^2(t-s)\,ds\cdot\tau_*^2+\Phi_{C_\eta}(t)\Big).
\end{aligned}\tag{44}$$
Applying this to (43) and comparing with the equation for $\Phi_{C_\theta}$ from (24),
$$\frac{d}{dt}\Phi_{C_\theta}(t)=(6\delta^2\beta^2+18C_0^2+1.1)\Phi_{C_\theta}(t)+6\int_0^t(t-s+1)^2\Phi_{R_\eta}^2(t-s)\Phi_{C_\theta}(s)\,ds+6\Big(\delta^2\beta^2\tau_*^2+3C_0^2+3C_0^2\Phi_\alpha(t)+\int_0^t(t-s+1)^2\Phi_{R_\eta}^2(t-s)\,ds\cdot\tau_*^2+\Phi_{C_\eta}(t)\Big)+2,\tag{45}$$
we see that since $C_\theta(0,0)=\Phi_{C_\theta}(0)$, we have $C_\theta(t,t)\le\Phi_{C_\theta}(t)$.

Next we prove (36) for all $0\le s\le t\le T$. We have $\theta_t-\theta_s=\int_s^tv_r\,dr+\sqrt2(b_t-b_s)$. Then it holds that
$$C_\theta(t,t)-2C_\theta(t,s)+C_\theta(s,s)=\mathbb{E}[(\theta_t-\theta_s)^2]\le2|t-s|\int_s^t\mathbb{E}(v_r)^2\,dr+4\,\mathbb{E}(b_t-b_s)^2\le2|t-s|^2\sup_{r\in[0,T]}|\Phi'_{C_\theta}(r)|+4|t-s|\le\Big(2T\sup_{r\in[0,T]}\Phi'_{C_\theta}(r)+4\Big)|t-s|,$$
where the second inequality compares (44) to the definition of $\Phi'_{C_\theta}(t)$ in (45). This verifies (36). As in the preceding argument for $C_\eta$, this condition (36) and Cauchy–Schwarz imply that $C_\theta(t,s)$ is uniformly continuous over all $0\le s\le t\le T$, and also that $C_\theta(t,*)$ is uniformly continuous over all $t\in[0,T]$.

(Condition for $R_\theta$) Let $\bar R_\theta(t,s)=\mathbb{E}\big|\frac{\partial\theta_t}{\partial u_s}\big|$, so that $|R_\theta(t,s)|\le\bar R_\theta(t,s)$ by definition.
Note that
$$\frac{d}{dt}\Big|\frac{\partial\theta_t}{\partial u_s}\Big|\le\Big|\frac{d}{dt}\frac{\partial\theta_t}{\partial u_s}\Big|\le\big(\delta|\beta|+|\partial_\theta s(\theta_t,\alpha_t)|\big)\Big|\frac{\partial\theta_t}{\partial u_s}\Big|+\int_s^t|R_\eta(t,s')|\,\Big|\frac{\partial\theta_{s'}}{\partial u_s}\Big|\,ds'\le(\delta|\beta|+C_0)\Big|\frac{\partial\theta_t}{\partial u_s}\Big|+\int_s^t\Phi_{R_\eta}(t-s')\Big|\frac{\partial\theta_{s'}}{\partial u_s}\Big|\,ds',\tag{46}$$
where $C_0>0$ is the constant defining (23–27) which upper bounds $C>0$ in (8) of Assumption 2.2. Taking expectations on both sides yields
$$\frac{d}{dt}\bar R_\theta(t,s)\le(\delta|\beta|+C_0)\bar R_\theta(t,s)+\int_0^{t-s}\Phi_{R_\eta}(t-s-s')\bar R_\theta(s+s',s)\,ds'.$$
Recall the equation for $\Phi_{R_\theta}$ in (26),
$$\frac{d}{dt}\Phi_{R_\theta}(t-s)=(\delta|\beta|+C_0)\Phi_{R_\theta}(t-s)+\int_0^{t-s}\Phi_{R_\eta}(t-s-s')\Phi_{R_\theta}(s')\,ds'.$$
Since $\bar R_\theta(s,s)=1=\Phi_{R_\theta}(0)$, this implies for all $t\in[s,T]$ that $|R_\theta(t,s)|\le\bar R_\theta(t,s)\le\Phi_{R_\theta}(t-s)$, verifying (37) for all $0\le s\le t\le T$.

To show (38) for all $0\le s\le t\le t'\le T$, observe that
$$|R_\theta(t',s)-R_\theta(t,s)|=\bigg|\int_t^{t'}\mathbb{E}\Big[-\Big(\delta\beta-\partial_\theta s(\theta_r,\alpha_r)\Big)\frac{\partial\theta_r}{\partial u_s}\Big]dr+\int_t^{t'}\Big(\int_s^rR_\eta(r,r')R_\theta(r',s)\,dr'\Big)dr\bigg|\le\int_t^{t'}\Big((\delta|\beta|+C_0)\bar R_\theta(r,s)+\int_s^r\Phi_{R_\eta}(r-r')\bar R_\theta(r',s)\,dr'\Big)dr\le|t'-t|\cdot\sup_{r\in[0,T]}\Phi'_{R_\theta}(r),$$
verifying (38). In particular, this shows continuity of $\tau\mapsto R_\theta(s+\tau,s)$ uniformly over $s\in[0,T]$ and $\tau\in[0,T-s]$. For continuity in $s$, observe that
$$\frac{d}{d\tau}\Big|\frac{\partial\theta_{s+\tau}}{\partial u_s}-\frac{\partial\theta_{s'+\tau}}{\partial u_{s'}}\Big|\le(\delta|\beta|+C_0)\Big|\frac{\partial\theta_{s+\tau}}{\partial u_s}-\frac{\partial\theta_{s'+\tau}}{\partial u_{s'}}\Big|+\int_0^\tau\Big|R_\eta(s+\tau,s+r)\frac{\partial\theta_{s+r}}{\partial u_s}-R_\eta(s'+\tau,s'+r)\frac{\partial\theta_{s'+r}}{\partial u_{s'}}\Big|\,dr.$$
We may again divide the range of integration of the second term as $[0,\tau]=A\cup B$, where $s+r,s'+r$ belong to the same maximal interval defining $\mathcal{S}_\eta$ for $r\in A$, and to two different maximal intervals for $r\in B$. Then taking expectations on both sides above and applying the boundedness of $\bar R_\theta,R_\eta$ and continuity of $R_\eta$ to bound the integral over $r\in A$, and $|B|\le C|s-s'|$ to bound the integral over $r\in B$, this shows
$$\frac{d}{d\tau}\mathbb{E}\Big|\frac{\partial\theta_{s+\tau}}{\partial u_s}-\frac{\partial\theta_{s'+\tau}}{\partial u_{s'}}\Big|\le C\bigg(\mathbb{E}\Big|\frac{\partial\theta_{s+\tau}}{\partial u_s}-\frac{\partial\theta_{s'+\tau}}{\partial u_{s'}}\Big|+\int_0^\tau\mathbb{E}\Big|\frac{\partial\theta_{s+r}}{\partial u_s}-\frac{\partial\theta_{s'+r}}{\partial u_{s'}}\Big|\,dr\bigg)+o_{|s-s'|}(1)$$
where $o_{|s-s'|}(1)$ converges to $0$ uniformly in $\tau$ as $|s-s'|\to0$. Then, since $\mathbb{E}\big|\frac{\partial\theta_s}{\partial u_s}-\frac{\partial\theta_{s'}}{\partial u_{s'}}\big|=0$, a Gronwall argument implies $\mathbb{E}\big|\frac{\partial\theta_{s+\tau}}{\partial u_s}-\frac{\partial\theta_{s'+\tau}}{\partial u_{s'}}\big|=o_{|s-s'|}(1)$, so also $s\mapsto R_\theta(s+\tau,s)$ is continuous uniformly over $s\in[0,T]$ and $\tau\in[0,T-s]$.
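The Gronwall step used repeatedly above can be illustrated numerically: if a nonnegative function satisfies $f(\tau)\le C\int_0^\tau f(r)\,dr+\epsilon$, then $f(\tau)\le\epsilon e^{C\tau}$, so $f$ is uniformly small whenever the inhomogeneity $\epsilon$ is. A minimal sketch with illustrative constants (not from the paper):

```python
import numpy as np

# Saturate the integral inequality f(tau) <= C * int_0^tau f + eps on a grid,
# then compare with the Gronwall bound eps * exp(C * tau).
C, eps, h, n = 2.0, 1e-3, 1e-3, 5000
f = np.zeros(n)
f[0] = eps
for k in range(1, n):
    f[k] = C * h * f[:k].sum() + eps  # worst case: inequality holds with equality
tau = h * np.arange(n)
gronwall = eps * np.exp(C * tau)
print(bool(np.all(f <= gronwall + 1e-12)))  # -> True
```

Here $f$ plays the role of $\tau\mapsto\mathbb{E}\big|\frac{\partial\theta_{s+\tau}}{\partial u_s}-\frac{\partial\theta_{s'+\tau}}{\partial u_{s'}}\big|$ and $\epsilon$ the role of the $o_{|s-s'|}(1)$ error term.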
Thus $(s,t)\mapsto R_\theta(t,s)$ is uniformly continuous over all $0\le s\le t\le T$.

(Condition for $\tilde\alpha$) By definition, we have $\frac{d}{dt}\tilde\alpha_t=G(\tilde\alpha_t,P(\theta_t))$ with $\tilde\alpha_0=\alpha_0$. The condition (9) and the boundedness of $C_\theta$ shown above imply that $\alpha\mapsto G(\alpha,P(\theta_t))$ is Lipschitz uniformly over $t\in[0,T]$, so there exists a unique solution $\{\tilde\alpha_t\}_{t\in[0,T]}$ of this equation, which is uniformly continuous on $[0,T]$. Letting $C_0>0$ be the constant defining (23–27) which upper bounds (9) of Assumption 2.3, and applying the above bound $\mathbb{E}(\theta_t)^2=C_\theta(t,t)\le\Phi_{C_\theta}(t)$, this solution satisfies
$$\frac{d}{dt}\|\tilde\alpha_t\|^2\le2\|\tilde\alpha_t\|\cdot\|G(\tilde\alpha_t,P(\theta_t))\|\le2C_0\Big(1+\sqrt{\Phi_{C_\theta}(t)}+\|\tilde\alpha_t\|\Big)\|\tilde\alpha_t\|\le4.1C_0(1+\Phi_{C_\theta}(t))+3C_0\|\tilde\alpha_t\|^2$$
(where we again relax a constant $4$ to $4.1$). Recalling the equation for $\Phi_\alpha$ in (23),
$$\frac{d}{dt}\Phi_\alpha(t)=4.1C_0(1+\Phi_{C_\theta}(t))+3C_0\Phi_\alpha(t),$$
since $C_0>0$ and $\|\tilde\alpha_0\|^2=\Phi_\alpha(0)$, this shows $\|\tilde\alpha_t\|^2\le\Phi_\alpha(t)$. $\square$

Next we equip the spaces $\mathcal{S}_\eta$ and $\mathcal{S}_\theta$ with metrics. Fixing a large constant $\lambda>0$, define
$$\begin{aligned}
d(\alpha^1,\alpha^2)&=\sup_{t\in[0,T]}e^{-\lambda t}\|\alpha^t_1-\alpha^t_2\|\\
d(C^1_\theta,C^2_\theta)&=\inf_{(w^*_1,\{w^t_1\})\sim C^1_\theta,\,(w^*_2,\{w^t_2\})\sim C^2_\theta}\Big[\sqrt{\mathbb{E}(w^*_1-w^*_2)^2}+\sup_{t\in[0,T]}e^{-\lambda t}\sqrt{\mathbb{E}(w^t_1-w^t_2)^2}\Big]\\
d(C^1_\eta,C^2_\eta)&=\inf_{\{u^t_1\}\sim C^1_\eta,\,\{u^t_2\}\sim C^2_\eta}\sup_{t\in[0,T]}e^{-\lambda t}\sqrt{\mathbb{E}(u^t_1-u^t_2)^2}\\
d(R^1_\theta,R^2_\theta)&=\sup_{0\le s\le t\le T}e^{-\lambda t}\big|R^1_\theta(t,s)-R^2_\theta(t,s)\big|\\
d(R^1_\eta,R^2_\eta)&=\sup_{0\le s\le t\le T}e^{-\lambda t}\big|R^1_\eta(t,s)-R^2_\eta(t,s)\big|.
\end{aligned}\tag{47}$$
In the definitions of $d(C^1_\theta,C^2_\theta)$ and $d(C^1_\eta,C^2_\eta)$ above, the infima are taken over all couplings of mean-zero Gaussian processes with covariances $(C^1_\theta,C^2_\theta)$ and $(C^1_\eta,C^2_\eta)$. Writing $X^i=(R^i_\eta,C^i_\eta,\alpha^i)\in\mathcal{S}_\eta$ and $Y^i=(R^i_\theta,C^i_\theta,\tilde\alpha^i)\in\mathcal{S}_\theta$ for $i=1,2$, let
$$d(X^1,X^2)=d(R^1_\eta,R^2_\eta)+d(C^1_\eta,C^2_\eta)+d(\alpha^1,\alpha^2),\tag{48}$$
$$d(Y^1,Y^2)=d(R^1_\theta,R^2_\theta)+d(C^1_\theta,C^2_\theta)+d(\tilde\alpha^1,\tilde\alpha^2).\tag{49}$$

Lemma 3.7 (Modulus of $\mathcal{T}_{\eta\to\theta}$). Let $X^i=(R^i_\eta,C^i_\eta,\alpha^i)\in\mathcal{S}_\eta$ and $Y^i=\mathcal{T}_{\eta\to\theta}(X^i)=(R^i_\theta,C^i_\theta,\tilde\alpha^i)\in\mathcal{S}_\theta$ for $i=1,2$.
Then for any $\varepsilon>0$, there exists a constant $\lambda=\lambda(\varepsilon)>0$ sufficiently large defining the metrics (47) such that
$$d(Y^1,Y^2)\le\varepsilon\cdot d(X^1,X^2).$$
Proof. We write $C,C'>0$ for constants that may depend on $T$ but not on $\lambda$, changing from instance to instance.

Bound of $d(C^1_\theta,C^2_\theta)$. Let $\{u^t_1\}_{t\in[0,T]}$ and $\{u^t_2\}_{t\in[0,T]}$ be an optimal coupling in the definition of $d(C^1_\eta,C^2_\eta)$, i.e.
$$\sup_{t\in[0,T]}e^{-\lambda t}\sqrt{\mathbb{E}[(u^t_1-u^t_2)^2]}=d(C^1_\eta,C^2_\eta).\tag{50}$$
Let $\{\theta^t_i\}$ be the solution to (12) driven by $\{u^t_i,\alpha^t_i,R^i_\eta\}$ for $i=1,2$, with a common Brownian motion $\{b_t\}$ and initialization $\theta_0$, i.e.
$$\theta^t_i=\theta_0+\int_0^t\Big(-\delta\beta(\theta^s_i-\theta^*)+s(\theta^s_i,\alpha^s_i)+\int_0^sR^i_\eta(s,s')(\theta^{s'}_i-\theta^*)\,ds'+u^s_i\Big)ds+\sqrt2\,b_t.\tag{51}$$
By definition, we have $\mathbb{E}[\theta^t_1\theta^s_1]=\mathbb{E}[\theta^t_2\theta^s_2]=C_\theta(t,s)$. Moreover, $\mathbb{E}(\theta^t_1-\theta^t_2)^2\le5[(\mathrm{I})+(\mathrm{II})+(\mathrm{III})+(\mathrm{IV})+(\mathrm{V})]$, where we set
$$(\mathrm{I})=\mathbb{E}\Big(\int_0^t\delta\beta|\theta^s_1-\theta^s_2|\,ds\Big)^2,\qquad(\mathrm{II})=\mathbb{E}\Big(\int_0^t|s(\theta^s_1,\alpha^s_1)-s(\theta^s_2,\alpha^s_2)|\,ds\Big)^2,$$
$$(\mathrm{III})=\mathbb{E}\Big(\int_0^t|\theta^{s'}_1-\theta^{s'}_2|\Big(\int_{s'}^t|R^1_\eta(s,s')|\,ds\Big)ds'\Big)^2,\qquad(\mathrm{IV})=\mathbb{E}\Big(\int_0^t|\theta^{s'}_2-\theta^*|\Big(\int_{s'}^t|R^1_\eta(s,s')-R^2_\eta(s,s')|\,ds\Big)ds'\Big)^2,$$
$$(\mathrm{V})=\mathbb{E}\Big(\int_0^t|u^s_1-u^s_2|\,ds\Big)^2.$$
Term (I) satisfies
$$(\mathrm{I})\le C\int_0^t\mathbb{E}(\theta^s_1-\theta^s_2)^2\,ds=C\int_0^te^{2\lambda s}e^{-2\lambda s}\mathbb{E}(\theta^s_1-\theta^s_2)^2\,ds\le C\sup_{t\in[0,T]}e^{-2\lambda t}\mathbb{E}(\theta^t_1-\theta^t_2)^2\int_0^te^{2\lambda s}\,ds\le\frac{C'}{\lambda}e^{2\lambda t}\sup_{t\in[0,T]}e^{-2\lambda t}\mathbb{E}(\theta^t_1-\theta^t_2)^2.$$
To bound (II), applying the Lipschitz properties of $s(\cdot)$ in Assumption 2.2 and a similar argument,
$$(\mathrm{II})\le C\int_0^t\Big(\mathbb{E}(\theta^s_1-\theta^s_2)^2+\|\alpha^s_1-\alpha^s_2\|^2\Big)ds\le\frac{C'}{\lambda}e^{2\lambda t}\sup_{t\in[0,T]}e^{-2\lambda t}\Big(\mathbb{E}(\theta^t_1-\theta^t_2)^2+\|\alpha^t_1-\alpha^t_2\|^2\Big)\le\frac{C'}{\lambda}e^{2\lambda t}\Big(\sup_{t\in[0,T]}e^{-2\lambda t}\mathbb{E}(\theta^t_1-\theta^t_2)^2+d(\alpha^1,\alpha^2)^2\Big).$$
For (III), using the condition $|R^1_\eta(t,s)|\le\Phi_{R_\eta}(t-s)\le C$, we have
$$(\mathrm{III})\le C\int_0^t\mathbb{E}(\theta^s_1-\theta^s_2)^2\,ds\le\frac{C'}{\lambda}e^{2\lambda t}\sup_{t\in[0,T]}e^{-2\lambda t}\mathbb{E}(\theta^t_1-\theta^t_2)^2.$$
For (IV), using $\mathbb{E}(\theta^s_2-\theta^*)^2\le2C_\theta(s,s)+2\tau_*^2\le2\Phi_{C_\theta}(s)+2\tau_*^2\le C$, we have
$$(\mathrm{IV})\le C\int_0^t\int_{s'}^t\big(R^1_\eta(s,s')-R^2_\eta(s,s')\big)^2\,ds\,ds'\le C\int_0^t\int_{s'}^te^{2\lambda s}\,ds\,ds'\cdot\sup_{0\le s\le t\le T}e^{-2\lambda t}\big(R^1_\eta(t,s)-R^2_\eta(t,s)\big)^2\le\frac{C'}{\lambda}e^{2\lambda t}\,d(R^1_\eta,R^2_\eta)^2.$$
Lastly, for (V), using (50), we have
$$(\mathrm{V})\le C\int_0^t\mathbb{E}(u^s_1-u^s_2)^2\,ds\le\frac{C'}{\lambda}e^{2\lambda t}\cdot d(C^1_\eta,C^2_\eta)^2.$$
Combining these bounds, for a constant $C>0$ independent of $\lambda$,
$$\sup_{t\in[0,T]}e^{-2\lambda t}\mathbb{E}(\theta^t_1-\theta^t_2)^2\le\frac{C}{\lambda}\Big(\sup_{t\in[0,T]}e^{-2\lambda t}\mathbb{E}(\theta^t_1-\theta^t_2)^2+d(X^1,X^2)^2\Big).$$
Thus for any $\varepsilon>0$, choosing $\lambda=\lambda(\varepsilon)$ large enough and rearranging gives
$$\sup_{t\in[0,T]}e^{-2\lambda t}\mathbb{E}(\theta^t_1-\theta^t_2)^2\le\varepsilon^2\,d(X^1,X^2)^2.$$
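The rearrangement in this last step is elementary and recurs throughout Lemmas 3.7 and 3.8: a self-bound $s\le(C/\lambda)(s+D)$ with $\lambda>C$ gives $s\le CD/(\lambda-C)$, which is at most $\varepsilon D$ once $\lambda\ge C(1+1/\varepsilon)$. A small numerical sketch (illustrative constants):

```python
# From s <= (C / lam) * (s + D) with lam > C, rearranging gives
# s * (1 - C / lam) <= (C / lam) * D, i.e. s <= C * D / (lam - C).
def self_bound(C, lam, D):
    return C * D / (lam - C)

C, D = 2.0, 1.0
print([round(self_bound(C, lam, D), 6) for lam in (4.0, 40.0, 400.0)])
# -> [1.0, 0.052632, 0.005025]
```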
Finally, let $(w^*,\{w^t_1\},\{w^t_2\})$ be a centered Gaussian process with second moments matching $(\theta^*,\{\theta^t_1\},\{\theta^t_2\})$. Then $(w^*,\{w^t_1\})$ and $(w^*,\{w^t_2\})$ realize a coupling defining the metric $d(C^1_\theta,C^2_\theta)$ in (47), so
$$d(C^1_\theta,C^2_\theta)\le\sup_{t\in[0,T]}e^{-\lambda t}\sqrt{\mathbb{E}(w^t_1-w^t_2)^2}=\sup_{t\in[0,T]}e^{-\lambda t}\sqrt{\mathbb{E}(\theta^t_1-\theta^t_2)^2}\le\varepsilon\cdot d(X^1,X^2).\tag{52}$$
Bound of $d(R^1_\theta,R^2_\theta)$. Defining the processes $\frac{\partial\theta^t_i}{\partial u_s}$ for $i=1,2$ from the above coupling of $\{\theta^t_1\}$ and $\{\theta^t_2\}$, by definition we have
$$\frac{\partial\theta^t_i}{\partial u_s}=1-\int_s^t\Big(\delta\beta-\partial_\theta s(\theta^{s'}_i,\alpha^{s'}_i)\Big)\frac{\partial\theta^{s'}_i}{\partial u_s}\,ds'+\int_s^t\Big(\int_s^{s'}R^i_\eta(s',s'')\frac{\partial\theta^{s''}_i}{\partial u_s}\,ds''\Big)ds'.$$
Then $\mathbb{E}\big|\frac{\partial\theta^t_1}{\partial u_s}-\frac{\partial\theta^t_2}{\partial u_s}\big|\le4[(\mathrm{I})+(\mathrm{II})+(\mathrm{III})+(\mathrm{IV})]$, where
$$(\mathrm{I})=\int_s^t\mathbb{E}\Big[\big|\partial_\theta s(\theta^{s'}_1,\alpha^{s'}_1)-\partial_\theta s(\theta^{s'}_2,\alpha^{s'}_2)\big|\,\Big|\frac{\partial\theta^{s'}_1}{\partial u_s}\Big|\Big]ds',\qquad(\mathrm{II})=\int_s^t\mathbb{E}\Big[\Big(\delta|\beta|+|\partial_\theta s(\theta^{s'}_2,\alpha^{s'}_2)|\Big)\Big|\frac{\partial\theta^{s'}_1}{\partial u_s}-\frac{\partial\theta^{s'}_2}{\partial u_s}\Big|\Big]ds',$$
$$(\mathrm{III})=\int_s^t\int_s^{s'}\mathbb{E}\Big[\big|R^1_\eta(s',s'')-R^2_\eta(s',s'')\big|\,\Big|\frac{\partial\theta^{s''}_1}{\partial u_s}\Big|\Big]ds''\,ds',\qquad(\mathrm{IV})=\int_s^t\int_s^{s'}\mathbb{E}\Big[|R^2_\eta(s',s'')|\,\Big|\frac{\partial\theta^{s''}_1}{\partial u_s}-\frac{\partial\theta^{s''}_2}{\partial u_s}\Big|\Big]ds''\,ds'.$$
For (I), note that (46) implies $\big|\frac{\partial\theta^{s'}_1}{\partial u_s}\big|\le C$ for a constant $C>0$ with probability $1$.
Then, using the Lipschitz continuity of $\partial_\theta s(\cdot)$ in Assumption 2.2, we have
$$(\mathrm{I})\le C\int_s^t\Big(\mathbb{E}|\theta^{s'}_1-\theta^{s'}_2|+\|\alpha^{s'}_1-\alpha^{s'}_2\|\Big)ds'\le C\int_s^te^{\lambda s'}ds'\,\sup_{s'\in[0,T]}e^{-\lambda s'}\Big(\mathbb{E}|\theta^{s'}_1-\theta^{s'}_2|+\|\alpha^{s'}_1-\alpha^{s'}_2\|\Big)\le\frac{C'}{\lambda}e^{\lambda t}\sup_{s'\in[0,T]}e^{-\lambda s'}\Big(\sqrt{\mathbb{E}(\theta^{s'}_1-\theta^{s'}_2)^2}+\|\alpha^{s'}_1-\alpha^{s'}_2\|\Big)\le\frac{C'}{\lambda}e^{\lambda t}\Big(\varepsilon\cdot d(X^1,X^2)+d(\alpha^1,\alpha^2)\Big),$$
the last step using (52) already shown. For (II), applying the boundedness of $\partial_\theta s(\cdot)$ in Assumption 2.2, we have
$$(\mathrm{II})\le C\int_s^te^{\lambda s'}e^{-\lambda s'}\mathbb{E}\Big[\Big|\frac{\partial\theta^{s'}_1}{\partial u_s}-\frac{\partial\theta^{s'}_2}{\partial u_s}\Big|\Big]ds'\le\frac{C'}{\lambda}e^{\lambda t}\cdot\sup_{0\le s\le t\le T}e^{-\lambda t}\mathbb{E}\Big|\frac{\partial\theta^t_1}{\partial u_s}-\frac{\partial\theta^t_2}{\partial u_s}\Big|.$$
For (III), applying again (46) to bound $\big|\frac{\partial\theta^{s''}_1}{\partial u_s}\big|\le C$,
$$(\mathrm{III})\le\frac{C}{\lambda}e^{\lambda t}\cdot d(R^1_\eta,R^2_\eta).$$
For (IV), applying $|R^2_\eta(t,s)|\le\Phi_{R_\eta}(t-s)\le C$,
$$(\mathrm{IV})\le\frac{C}{\lambda}e^{\lambda t}\sup_{0\le s\le t\le T}e^{-\lambda t}\mathbb{E}\Big|\frac{\partial\theta^t_1}{\partial u_s}-\frac{\partial\theta^t_2}{\partial u_s}\Big|.$$
Combining these bounds,
$$\sup_{0\le s\le t\le T}e^{-\lambda t}\mathbb{E}\Big|\frac{\partial\theta^t_1}{\partial u_s}-\frac{\partial\theta^t_2}{\partial u_s}\Big|\le\frac{C}{\lambda}\Big(\sup_{0\le s\le t\le T}e^{-\lambda t}\mathbb{E}\Big|\frac{\partial\theta^t_1}{\partial u_s}-\frac{\partial\theta^t_2}{\partial u_s}\Big|+d(X^1,X^2)\Big),$$
so rearranging and choosing $\lambda=\lambda(\varepsilon)$ large enough gives
$$d(R^1_\theta,R^2_\theta)\le\sup_{0\le s\le t\le T}e^{-\lambda t}\mathbb{E}\Big|\frac{\partial\theta^t_1}{\partial u_s}-\frac{\partial\theta^t_2}{\partial u_s}\Big|\le\varepsilon\cdot d(X^1,X^2).\tag{53}$$
Bound of $d(\tilde\alpha^1,\tilde\alpha^2)$. By definition, $\tilde\alpha^t_i=\alpha_0+\int_0^tG(\tilde\alpha^s_i,P(\theta^s_i))\,ds$ for $i=1,2$. Letting $\{\theta^t_1\}$ and $\{\theta^t_2\}$ be coupled as above and applying Assumption 2.3,
$$\|\tilde\alpha^t_1-\tilde\alpha^t_2\|\le C\int_0^t\Big(\|\tilde\alpha^s_1-\tilde\alpha^s_2\|+W_2(P(\theta^s_1),P(\theta^s_2))\Big)ds\le C\int_0^t\Big(\|\tilde\alpha^s_1-\tilde\alpha^s_2\|+\sqrt{\mathbb{E}(\theta^s_1-\theta^s_2)^2}\Big)ds.$$
Then
$$\|\tilde\alpha^t_1-\tilde\alpha^t_2\|\le C\int_0^te^{\lambda s}ds\,\sup_{s\in[0,T]}e^{-\lambda s}\Big(\|\tilde\alpha^s_1-\tilde\alpha^s_2\|+\sqrt{\mathbb{E}(\theta^s_1-\theta^s_2)^2}\Big)\le\frac{C'}{\lambda}e^{\lambda t}\Big(d(\tilde\alpha^1,\tilde\alpha^2)+\varepsilon\cdot d(X^1,X^2)\Big).$$
Choosing $\lambda=\lambda(\varepsilon)$ large enough and rearranging shows
$$d(\tilde\alpha^1,\tilde\alpha^2)=\sup_{t\in[0,T]}e^{-\lambda t}\|\tilde\alpha^t_1-\tilde\alpha^t_2\|\le\varepsilon\cdot d(X^1,X^2).\tag{54}$$
The lemma follows from (52), (53), and (54). $\square$

Lemma 3.8 (Modulus of $\mathcal{T}_{\theta\to\eta}$). Let $Y^i=(R^i_\theta,C^i_\theta,\tilde\alpha^i)\in\mathcal{S}_\theta$ and $X^i=\mathcal{T}_{\theta\to\eta}(Y^i)=(C^i_\eta,R^i_\eta,\alpha^i)\in\mathcal{S}_\eta$ for $i=1,2$. Then there exists a constant $C>0$ such that for any sufficiently large $\lambda>0$ defining the metrics (47),
$$d(X^1,X^2)\le C\cdot d(Y^1,Y^2).$$
Proof. The proof is similar to that of Lemma 3.7, so we omit some details. Again let $C,C',C''>0$ denote constants depending on $T$ but not on $\lambda$.

Bound of $d(C^1_\eta,C^2_\eta)$. Let $(w^*_1,\{w^t_1\})$ and $(w^*_2,\{w^t_2\})$ be an optimal coupling for which
$$\sqrt{\mathbb{E}[(w^*_1-w^*_2)^2]}+\sup_{t\in[0,T]}e^{-\lambda t}\sqrt{\mathbb{E}[(w^t_1-w^t_2)^2]}=d(C^1_\theta,C^2_\theta).$$
For $i=1,2$, let
$$\eta^t_i=-\beta\int_0^tR^i_\theta(t,s)\big(\eta^s_i+w^*_i-\varepsilon\big)ds-w^t_i$$
be the corresponding coupled solutions to (14). We write $\xi^t_i=\eta^t_i+w^*_i-\varepsilon$, so that
$$\xi^t_i=-\beta\int_0^tR^i_\theta(t,s)\xi^s_i\,ds-w^t_i+w^*_i-\varepsilon$$
and $C^i_\eta(t,s)=\delta\beta^2\,\mathbb{E}[\xi^t_i\xi^s_i]$. Then
$$\begin{aligned}
\mathbb{E}(\xi^t_1-\xi^t_2)^2&\le C\Big[\int_0^t\big(R^1_\theta(t,s)-R^2_\theta(t,s)\big)^2\,\mathbb{E}(\xi^s_1)^2\,ds+\int_0^tR^2_\theta(t,s)^2\,\mathbb{E}(\xi^s_1-\xi^s_2)^2\,ds+\mathbb{E}(w^t_1-w^*_1-w^t_2+w^*_2)^2\Big]\\
&\le C'\Big[\int_0^te^{2\lambda s}e^{-2\lambda s}\Big(\big(R^1_\theta(t,s)-R^2_\theta(t,s)\big)^2+\mathbb{E}(\xi^s_1-\xi^s_2)^2\Big)ds+\mathbb{E}(w^t_1-w^*_1-w^t_2+w^*_2)^2\Big]\\
&\le\frac{C''}{\lambda}e^{2\lambda t}\Big(\sup_{s\in[0,T]}e^{-2\lambda s}\mathbb{E}(\xi^s_1-\xi^s_2)^2+d(R^1_\theta,R^2_\theta)^2\Big)+C''e^{2\lambda t}\,d(C^1_\theta,C^2_\theta)^2.
\end{aligned}$$
Choosing $\lambda>2C''$ and rearranging yields, for a constant $C>0$,
$$\sup_{t\in[0,T]}e^{-2\lambda t}\mathbb{E}(\xi^t_1-\xi^t_2)^2\le C\cdot d(Y^1,Y^2)^2.$$
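The exponential weight $e^{-\lambda t}$ in the metrics (47) is what makes the time-integral maps above contractive for large $\lambda$: discrepancies accumulating at later times are discounted. A sketch on discretized response kernels (the grid and kernels below are made up purely for illustration):

```python
import numpy as np

def d_R(R1, R2, lam, grid):
    # sup over 0 <= s <= t <= T of exp(-lam * t) * |R1(t, s) - R2(t, s)|, as in (47)
    return max(np.exp(-lam * grid[t]) * abs(R1[t, s] - R2[t, s])
               for t in range(len(grid)) for s in range(t + 1))

grid = np.linspace(0.0, 1.0, 11)
n = len(grid)
R1 = np.exp(-np.subtract.outer(grid, grid))          # a smooth kernel R1(t, s)
R2 = R1 + 0.1 * np.tril(np.tile(grid[:, None], n))   # discrepancy growing with t
print([round(d_R(R1, R2, lam, grid), 4) for lam in (0.0, 5.0, 50.0)])
```

Larger $\lambda$ shrinks the distance between the two kernels; since changing $\lambda$ changes only the metric and not the topology, $\lambda$ may be chosen freely to make the fixed-point maps contractive.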
Then letting $(\{u^t_1\},\{u^t_2\})$ be a pair of centered Gaussian processes with second moments $\mathbb{E}[u^t_iu^s_j]=\delta\beta^2\,\mathbb{E}[\xi^t_i\xi^s_j]$, this realizes a coupling defining $d(C^1_\eta,C^2_\eta)$, so
$$d(C^1_\eta,C^2_\eta)\le\sup_{t\in[0,T]}e^{-\lambda t}\sqrt{\mathbb{E}[(u^t_1-u^t_2)^2]}=\sqrt{\delta\beta^2}\cdot\sup_{t\in[0,T]}e^{-\lambda t}\sqrt{\mathbb{E}(\xi^t_1-\xi^t_2)^2}\le C'\cdot d(Y^1,Y^2).$$
Bound of $d(R^1_\eta,R^2_\eta)$. Defining the (deterministic) process $\frac{\partial\eta^t_i}{\partial w_s}$ driven by $R^i_\theta$ for $i=1,2$, we have
$$R^i_\eta(t,s)=-\beta\int_s^tR^i_\theta(t,s')R^i_\eta(s',s)\,ds'+\delta\beta^2R^i_\theta(t,s),$$
hence
$$\begin{aligned}
|R^1_\eta(t,s)-R^2_\eta(t,s)|&\le|\beta|\int_s^t|R^1_\theta(t,s')-R^2_\theta(t,s')||R^1_\eta(s',s)|\,ds'+|\beta|\int_s^t|R^2_\theta(t,s')||R^1_\eta(s',s)-R^2_\eta(s',s)|\,ds'+\delta\beta^2|R^1_\theta(t,s)-R^2_\theta(t,s)|\\
&\le C\int_s^te^{\lambda s'}e^{-\lambda s'}|R^1_\eta(s',s)-R^2_\eta(s',s)|\,ds'+Ce^{\lambda t}\,d(R^1_\theta,R^2_\theta)\le\frac{C'}{\lambda}e^{\lambda t}\Big(\sup_{0\le s\le t\le T}e^{-\lambda t}|R^1_\eta(t,s)-R^2_\eta(t,s)|\Big)+Ce^{\lambda t}\,d(R^1_\theta,R^2_\theta).
\end{aligned}$$
Choosing $\lambda>2C'$ and rearranging yields
$$d(R^1_\eta,R^2_\eta)=\sup_{0\le s\le t\le T}e^{-\lambda t}|R^1_\eta(t,s)-R^2_\eta(t,s)|\le C\cdot d(R^1_\theta,R^2_\theta)\le C\cdot d(Y^1,Y^2).$$
We note that $\alpha^i=\tilde\alpha^i$ for $i=1,2$ by definition, so also $d(\alpha^1,\alpha^2)=d(\tilde\alpha^1,\tilde\alpha^2)\le d(Y^1,Y^2)$. Combining these bounds shows the lemma. $\square$

Proof of Theorem 2.4(b). Combining Lemmas 3.7 and 3.8, for sufficiently large $\lambda>0$ the composition map $\mathcal{T}_{\eta\to\eta}$ is a contraction on $\mathcal{S}^{\mathrm{cont}}_\eta$ with respect to the metric $d(X^1,X^2)$, and similarly $\mathcal{T}_{\theta\to\theta}$ is a contraction on $\mathcal{S}^{\mathrm{cont}}_\theta$. We note that for any sequence $\{C^k_\theta\}$ of correlation functions in $\mathcal{S}^{\mathrm{cont}}$, as $k\to\infty$, $d(C^k_\theta,C_\theta)\to0$ implies $\sup_{s,t\in[0,T]}|C^k_\theta(s,t)-C_\theta(s,t)|\to0$ by definition of the metric and Cauchy–Schwarz, while $\sup_{s,t\in[0,T]}|C^k_\theta(s,t)-C_\theta(s,t)|\to0$ implies $d(C^k_\theta,C_\theta)\to0$ by, e.g., the construction of a coupling in [52, Lemma D.1]. The same holds for $C_\eta$, so each metric in (47) induces a topology equivalent to that of uniform convergence of continuous functions on the appropriate space $[0,T]$, $\{(s,t):0\le s\le t\le T\}$, or $\{*\}\cup\{(s,t):0\le s\le t\le T\}$. Furthermore, each condition defining $\mathcal{S}^{\mathrm{cont}}_\eta,\mathcal{S}^{\mathrm{cont}}_\theta$ is closed with respect to this topology. Thus $d(X^1,X^2)$ and $d(Y^1,Y^2)$ are complete metrics on $\mathcal{S}^{\mathrm{cont}}_\eta,\mathcal{S}^{\mathrm{cont}}_\theta$, so the Banach fixed-point theorem guarantees that $\mathcal{T}_{\eta\to\eta}$ and $\mathcal{T}_{\theta\to\theta}$ have unique fixed points $X=(R_\eta,C_\eta,\alpha)\in\mathcal{S}^{\mathrm{cont}}_\eta$ and $Y=(R_\theta,C_\theta,\alpha)\in\mathcal{S}^{\mathrm{cont}}_\theta$, for which also $\mathcal{T}_{\eta\to\theta}(X)=Y$. These fixed points remain unique in $\mathcal{S}_\eta$ and $\mathcal{S}_\theta$, because Lemmas 3.5 and 3.6 imply that the images of $\mathcal{T}_{\eta\to\eta},\mathcal{T}_{\theta\to\theta}$ on $\mathcal{S}_\eta,\mathcal{S}_\theta$ are contained in $\mathcal{S}^{\mathrm{cont}}_\eta,\mathcal{S}^{\mathrm{cont}}_\theta$. Then the tuple $(\alpha,C_\theta,C_\eta,R_\theta,R_\eta)\in\mathcal{S}^{\mathrm{cont}}$ is the unique fixed point in $\mathcal{S}$ solving the dynamical fixed-point equations (17–18). $\square$

4 The dynamical mean-field approximation

In this section we prove Theorem 2.5. We assume throughout Assumptions 2.1, 2.2, and 2.3. The proof consists of three steps:

• (Step 1) We prove in Section 4.1 a discrete DMFT limit for a discretized version of the dynamics.
• (Step 2) We show in Section 4.2 that, as the discretization step size goes to zero, the discrete DMFT equations converge in an appropriate sense to (12–18).
• (Step 3) We show in Section 4.3 that, as the discretization step size goes to zero, the discretized dynamics converges in an appropriate sense to (4–5).

This argument follows closely the approach of [42], although in Steps 2 and 3 we will use a different and somewhat simpler piecewise-constant embedding of the discretized DMFT process and discretized Langevin dynamics into continuous time.

4.1 Step 1: DMFT approximation of discrete dynamics

Fix a step size $\gamma>0$. We first define a discretized version of the process (4–5), which we denote by $\{\theta^t_\gamma\}$ and $\{\widehat\alpha^t_\gamma\}$ for $t\in\mathbb{Z}_+=\{0,1,2,\ldots\}$:
$$\theta^{t+1}_\gamma=\theta^t_\gamma+\gamma\Big(-\beta X^\top(X\theta^t_\gamma-y)+s(\theta^t_\gamma,\widehat\alpha^t_\gamma)\Big)+\sqrt2\,(b^{t+1}_\gamma-b^t_\gamma)\tag{55}$$
$$\widehat\alpha^{t+1}_\gamma=\widehat\alpha^t_\gamma+\gamma\cdot G\Big(\widehat\alpha^t_\gamma,\frac1d\sum_{j=1}^d\delta_{\theta^t_{\gamma,j}}\Big)\tag{56}$$
with initialization $(\theta^0_\gamma,\widehat\alpha^0_\gamma)=(\theta_0,\widehat\alpha_0)$, where $\{b^t_\gamma\}$ is a discrete Gaussian process with $b^0_\gamma=0$ and independent increments $b^{t+1}_\gamma-b^t_\gamma\sim\mathcal{N}(0,\gamma I)$. Here and throughout the sequel, we write as shorthand $s(\theta,\widehat\alpha)=(s(\theta_j,\widehat\alpha))_{j=1}^d$. We set $\eta^t_\gamma=X\theta^t_\gamma$, $\eta^*=X\theta^*$.
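The discretization (55–56) is a plain Euler–Maruyama scheme with an adaptively updated drift parameter. A minimal simulation sketch, with hypothetical choices $s(\theta,\alpha)=-\alpha\theta$ and $G(\alpha,\mu)=\mathbb{E}_\mu[\theta^2]-\alpha$ (both Lipschitz; these functions and all constants are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma, beta, steps = 300, 150, 0.005, 0.5, 200
X = rng.normal(size=(n, d)) / np.sqrt(n)
theta_star = rng.normal(size=d)
y = X @ theta_star + 0.1 * rng.normal(size=n)

theta, alpha = np.zeros(d), 0.0
for t in range(steps):
    drift = -beta * X.T @ (X @ theta - y) - alpha * theta   # s(theta, alpha) = -alpha * theta
    theta = theta + gamma * drift + np.sqrt(2.0 * gamma) * rng.normal(size=d)
    alpha = alpha + gamma * (np.mean(theta**2) - alpha)     # G at the empirical law of theta
print(bool(np.isfinite(theta).all()), alpha >= 0.0)  # -> True True
```

Here $\alpha$ tracks the empirical second moment of the iterates, mirroring how $\widehat\alpha^t_\gamma$ in (56) is driven by the empirical distribution $\frac1d\sum_j\delta_{\theta^t_{\gamma,j}}$.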
We correspondingly define a discretized version of the DMFT system (12–18): Given discrete-time correlation and response matrices $\{C^\gamma_\eta(s,r)\}_{r\le s\le t}$, $\{R^\gamma_\eta(s,r)\}_{r<s\le t}$ and a deterministic process $\{\alpha^s_\gamma\}_{s\le t}$ up to time $t$, define (in the probability space of $(\theta^*,\theta_0)\sim P(\theta^*,\theta_0)$)
$$\theta^{t+1}_\gamma=\theta^t_\gamma+\gamma\Big(-\delta\beta(\theta^t_\gamma-\theta^*)+s(\theta^t_\gamma,\alpha^t_\gamma)+\sum_{s=0}^{t-1}R^\gamma_\eta(t,s)(\theta^s_\gamma-\theta^*)+u^t_\gamma\Big)+\sqrt2\,(b^{t+1}_\gamma-b^t_\gamma)\quad\text{with }\theta^0_\gamma=\theta_0,\tag{57}$$
$$\frac{\partial\theta^{t+1}_\gamma}{\partial u^s_\gamma}=\begin{cases}\gamma&\text{for }s=t,\\[4pt]\dfrac{\partial\theta^t_\gamma}{\partial u^s_\gamma}+\gamma\Big[\Big(-\delta\beta+\partial_\theta s(\theta^t_\gamma,\alpha^t_\gamma)\Big)\dfrac{\partial\theta^t_\gamma}{\partial u^s_\gamma}+\displaystyle\sum_{r=s+1}^{t-1}R^\gamma_\eta(t,r)\dfrac{\partial\theta^r_\gamma}{\partial u^s_\gamma}\Big]&\text{for }s<t.\end{cases}\tag{58}$$
Here, $\{u^s_\gamma\}_{0\le s\le t}$ and $\{b^s_\gamma\}_{0\le s\le t}$ are mean-zero Gaussian vectors, independent of each other and of $(\theta^*,\theta_0)$, where $\{u^s_\gamma\}_{0\le s\le t}$ has covariance
$$\mathbb{E}[u^s_\gamma u^r_\gamma]=C^\gamma_\eta(s,r),\tag{59}$$
and $\{b^s_\gamma\}_{0\le s\le t}$ has independent increments $b^{s+1}_\gamma-b^s_\gamma\sim\mathcal{N}(0,\gamma)$ with $b^0_\gamma=0$. We note that $\frac{\partial\theta^{t+1}_\gamma}{\partial u^s_\gamma}$ is the usual partial derivative of $\theta^{t+1}_\gamma$ in $u^s_\gamma$, whose form (58) is derived from (57) via the chain rule. These processes then define $\{C^\gamma_\theta(s,r)\}_{r\le s\le t+1}$, $\{C^\gamma_\theta(s,*)\}_{s\le t+1}$, $\{R^\gamma_\theta(s,r)\}_{r<s\le t+1}$, and $\{\alpha^s_\gamma\}_{s\le t+1}$ up to time $t+1$ via
$$C^\gamma_\theta(s,r)=\mathbb{E}[\theta^s_\gamma\theta^r_\gamma],\quad C^\gamma_\theta(s,*)=\mathbb{E}[\theta^s_\gamma\theta^*],\quad C^\gamma_\theta(*,*)=\mathbb{E}[(\theta^*)^2],\quad R^\gamma_\theta(s,r)=\mathbb{E}\Big[\frac{\partial\theta^s_\gamma}{\partial u^r_\gamma}\Big],\quad\alpha^{t+1}_\gamma=\alpha^t_\gamma+\gamma\cdot G(\alpha^t_\gamma,P(\theta^t_\gamma))\tag{60}$$
where $P(\theta^t_\gamma)$ is the law of $\theta^t_\gamma$. Conversely, given $\{C^\gamma_\theta(s,r)\}_{r\le s\le t}$, $\{C^\gamma_\theta(s,*)\}_{s\le t}$, and $\{R^\gamma_\theta(s,r)\}_{r<s\le t}$ up to time $t$, define (in the probability space of $\varepsilon\sim P(\varepsilon)$)
$$\eta^t_\gamma=-\beta\sum_{s=0}^{t-1}R^\gamma_\theta(t,s)(\eta^s_\gamma+w^*_\gamma-\varepsilon)-w^t_\gamma,\tag{61}$$
$$\frac{\partial\eta^t_\gamma}{\partial w^s_\gamma}=\beta\Big[-\sum_{r=s+1}^{t-1}R^\gamma_\theta(t,r)\frac{\partial\eta^r_\gamma}{\partial w^s_\gamma}+R^\gamma_\theta(t,s)\Big]\quad\text{for }s<t.\tag{62}$$
Here, $(w^*_\gamma,\{w^s_\gamma\}_{0\le s\le t})$ is a mean-zero Gaussian vector with covariance
$$\mathbb{E}[w^s_\gamma w^r_\gamma]=C^\gamma_\theta(s,r),\qquad\mathbb{E}[w^s_\gamma w^*_\gamma]=C^\gamma_\theta(s,*),\qquad\mathbb{E}[(w^*_\gamma)^2]=C^\gamma_\theta(*,*),\tag{63}$$
and again $\frac{\partial\eta^t_\gamma}{\partial w^s_\gamma}$ is the usual partial derivative computed from the chain rule. These define $\{C^\gamma_\eta(s,r)\}_{r\le s\le t}$, $\{R^\gamma_\eta(s,r)\}_{r<s\le t}$ up to time $t$ via
$$C^\gamma_\eta(s,r)=\delta\beta^2\,\mathbb{E}[(\eta^s_\gamma+w^*_\gamma-\varepsilon)(\eta^r_\gamma+w^*_\gamma-\varepsilon)],\qquad R^\gamma_\eta(s,r)=\delta\beta\Big(\frac{\partial\eta^s_\gamma}{\partial w^r_\gamma}\Big),\tag{64}$$
where we note that $\frac{\partial\eta^s_\gamma}{\partial w^r_\gamma}$ is deterministic. These definitions should be understood in the iterative sense
$$\{\theta^s_\gamma\}_{s\le t},\{u^s_\gamma\}_{s<t},\Big\{\frac{\partial\theta^s_\gamma}{\partial u^r_\gamma}\Big\}_{r<s\le t}\Rightarrow\{C^\gamma_\theta(s,r),C^\gamma_\theta(s,*)\}_{r\le s\le t},\{R^\gamma_\theta(s,r)\}_{r<s\le t},\{\alpha^s\}_{s\le t}\Rightarrow w^*_\gamma,\{\eta^s_\gamma,w^s_\gamma\}_{s\le t},\Big\{\frac{\partial\eta^s_\gamma}{\partial w^r_\gamma}\Big\}_{r<s\le t}\Rightarrow\{C^\gamma_\eta(s,r)\}_{r\le s\le t},\{R^\gamma_\eta(s,r)\}_{r<s\le t}\Rightarrow\{\theta^s_\gamma\}_{s\le t+1},\{u^s_\gamma\}_{s<t+1},\Big\{\frac{\partial\theta^s_\gamma}{\partial u^r_\gamma}\Big\}_{r<s\le t+1}\Rightarrow\cdots\tag{65}$$
with initialization $\theta^0_\gamma=\theta_0$. The goal of this section is to show the following discrete analogue of Theorem 2.5.

Lemma 4.1. For any fixed integer $T\ge0$, almost surely as $n,d\to\infty$,
$$\frac1d\sum_{j=1}^d\delta_{(\theta^*_j,\theta^0_{\gamma,j},\ldots,\theta^T_{\gamma,j})}\overset{W_2}{\to}P(\theta^*,\theta^0_\gamma,\ldots,\theta^T_\gamma)\tag{66}$$
$$\frac1n\sum_{i=1}^n\delta_{(\eta^*_i,\varepsilon_i,\eta^0_{\gamma,i},\ldots,\eta^T_{\gamma,i})}\overset{W_2}{\to}P(-w^*_\gamma,\varepsilon,\eta^0_\gamma,\ldots,\eta^T_\gamma)\tag{67}$$
$$(\widehat\alpha^0_\gamma,\ldots,\widehat\alpha^T_\gamma)\to(\alpha^0_\gamma,\ldots,\alpha^T_\gamma).\tag{68}$$

For convenience of the proof, we also define an auxiliary response function
$$R^\gamma_\eta(t,*)=\delta\beta\Big(\frac{\partial\eta^t_\gamma}{\partial w^*_\gamma}\Big)\quad\text{where}\quad\frac{\partial\eta^t_\gamma}{\partial w^*_\gamma}=-\beta\sum_{s=0}^{t-1}R^\gamma_\theta(t,s)\Big(\frac{\partial\eta^s_\gamma}{\partial w^*_\gamma}+1\Big),\tag{69}$$
initialized from $\frac{\partial\eta^0_\gamma}{\partial w^*_\gamma}=0$. Here $\frac{\partial\eta^t_\gamma}{\partial w^*_\gamma}$ is the usual partial derivative of $\eta^t_\gamma$ with respect to $w^*_\gamma$, which is also deterministic. We have the following basic fact relating the response functions (62) and (69).

Lemma 4.2. For any $t\ge1$, we have $\frac{\partial\eta^t_\gamma}{\partial w^*_\gamma}=-\sum_{s=0}^{t-1}\frac{\partial\eta^t_\gamma}{\partial w^s_\gamma}$, and consequently $R^\gamma_\eta(t,*)=-\sum_{s=0}^{t-1}R^\gamma_\eta(t,s)$.

Proof. Let us abbreviate $r_\eta(t,s)=\frac{\partial\eta^t_\gamma}{\partial w^s_\gamma}$ for $s<t$ and $r_\eta(t,*)=\frac{\partial\eta^t_\gamma}{\partial w^*_\gamma}$.
We prove $r_\eta(t,*)=-\sum_{s=0}^{t-1}r_\eta(t,s)$ by induction, with the base case $t=1$ verified by the initial conditions $r_\eta(1,*)=-\gamma\beta$ and $r_\eta(1,0)=\gamma\beta$. Suppose the claim holds for all times up to some $t$; then
$$\begin{aligned}
r_\eta(t+1,*)&=-\beta\sum_{s=0}^tR^\gamma_\theta(t+1,s)(r_\eta(s,*)+1)=-\beta\sum_{s=0}^tR^\gamma_\theta(t+1,s)\Big(-\sum_{r=0}^{s-1}r_\eta(s,r)+1\Big)\\
&=\beta\Big[\sum_{r=0}^{t-1}\sum_{s=r+1}^tR^\gamma_\theta(t+1,s)r_\eta(s,r)-\sum_{r=0}^tR^\gamma_\theta(t+1,r)\Big]\\
&=\beta\sum_{r=0}^{t-1}\Big(\sum_{s=r+1}^tR^\gamma_\theta(t+1,s)r_\eta(s,r)-R^\gamma_\theta(t+1,r)\Big)-\beta R^\gamma_\theta(t+1,t)\\
&=-\sum_{r=0}^{t-1}r_\eta(t+1,r)-r_\eta(t+1,t)=-\sum_{r=0}^tr_\eta(t+1,r),
\end{aligned}$$
as desired. $\square$

Proof of Lemma 4.1. Step 1: Convergence of auxiliary dynamics. Consider the following non-adaptive auxiliary dynamics
$$\tilde\theta^{t+1}_\gamma=\tilde\theta^t_\gamma-\gamma\Big(\beta X^\top(X\tilde\theta^t_\gamma-X\theta^*-\varepsilon)-s(\tilde\theta^t_\gamma,\alpha^t_\gamma)\Big)+\sqrt2\,(b^{t+1}_\gamma-b^t_\gamma).\tag{70}$$
This differs from $\{\theta^t_\gamma\}$ in that we replace $\{\widehat\alpha^t_\gamma\}$ by the deterministic process $\{\alpha^t_\gamma\}$ of the discrete DMFT system. Let $\tilde\eta^t_\gamma=X\tilde\theta^t_\gamma$. We will first show
$$\frac1d\sum_{j=1}^d\delta_{(\theta^*_j,\tilde\theta^0_{\gamma,j},\ldots,\tilde\theta^T_{\gamma,j})}\overset{W_2}{\to}P(\theta^*,\theta^0_\gamma,\ldots,\theta^T_\gamma),\qquad\frac1n\sum_{i=1}^n\delta_{(\eta^*_i,\varepsilon_i,\tilde\eta^0_{\gamma,i},\ldots,\tilde\eta^T_{\gamma,i})}\overset{W_2}{\to}P(-w^*_\gamma,\varepsilon,\eta^0_\gamma,\ldots,\eta^T_\gamma).\tag{71}$$
The proof is based on a reduction to an AMP algorithm: Let $\varepsilon\in\mathbb{R}^n$ be as in the above dynamics, define
$$V=(\theta^*,\theta_0,b^1-b^0,\ldots,b^T-b^{T-1})\in\mathbb{R}^{d\times(T+2)}$$
and let $\varepsilon\sim P(\varepsilon)$, $V=(\theta^*,\theta_0,\rho^1,\ldots,\rho^T)\sim P(\theta^*,\theta_0)\otimes\mathcal{N}(0,\gamma I_T)$. Assumption 2.1 ensures that $|\varepsilon|$ and $\|V\|_2$ have finite moment generating functions in a neighborhood of $0$, and for each fixed $p\ge1$, almost surely as $n,d\to\infty$,
$$\frac1n\sum_{i=1}^n\delta_{\varepsilon_i}\overset{W_p}{\to}\varepsilon,\qquad\frac1d\sum_{j=1}^d\delta_{V_j}\overset{W_p}{\to}V\tag{72}$$
where $V_j$ is the $j$-th row of $V$.
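Since Lemma 4.2 uses only the recursions (62) and (69), the identity can be checked numerically for arbitrary kernel values $R^\gamma_\theta(t,s)$; the random values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta = 6, 0.5
Rth = np.tril(rng.normal(size=(T + 1, T + 1)), k=-1)  # R_theta^gamma(t, s) for s < t

r = np.zeros((T + 1, T + 1))  # r[t, s] = d eta_t / d w_s, recursion (62)
rstar = np.zeros(T + 1)       # rstar[t] = d eta_t / d w_*, recursion (69), rstar[0] = 0
for t in range(1, T + 1):
    for s in range(t):
        r[t, s] = beta * (Rth[t, s]
                          - sum(Rth[t, rr] * r[rr, s] for rr in range(s + 1, t)))
    rstar[t] = -beta * sum(Rth[t, s] * (rstar[s] + 1.0) for s in range(t))

# Lemma 4.2: d eta_t / d w_* = - sum_{s < t} d eta_t / d w_s
err = max(abs(rstar[t] + r[t, :t].sum()) for t in range(1, T + 1))
print(err < 1e-9)  # -> True
```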
Fixing some $k\ge1$, consider the AMP iterations
$$W^i=Xg_i(U^1,\ldots,U^i;V)-\sum_{j=0}^{i-1}f_j(W^0,\ldots,W^j;\varepsilon)\zeta_{ij}\in\mathbb{R}^{n\times k},\qquad U^{i+1}=X^\top f_i(W^0,\ldots,W^i;\varepsilon)-\sum_{j=0}^ig_j(U^1,\ldots,U^j;V)\xi_{ij}\in\mathbb{R}^{d\times k},\tag{73}$$
initialized at $W^0=Xg_0(V)$, where the nonlinearities
$$f_i=(f_{i,1},\ldots,f_{i,k}):\mathbb{R}^{k(i+1)}\times\mathbb{R}\to\mathbb{R}^k,\qquad g_i=(g_{i,1},\ldots,g_{i,k}):\mathbb{R}^{ki}\times\mathbb{R}^{T+2}\to\mathbb{R}^k$$
are Lipschitz-continuous and applied row-wise, the Onsager coefficients are recursively defined as
$$\xi_{ij}=\Big(\delta\,\mathbb{E}\big[d_{W^j}f_i(W^0,\ldots,W^i;\varepsilon)\big]\Big)^\top\in\mathbb{R}^{k\times k},\quad0\le j\le i,\qquad\zeta_{ij}=\Big(\mathbb{E}\big[d_{U^{j+1}}g_i(U^1,\ldots,U^i;V)\big]\Big)^\top\in\mathbb{R}^{k\times k},\quad0\le j\le i-1,$$
and $\{W^j\}_{j\ge0}$ and $\{U^j\}_{j\ge1}$ are mean-zero Gaussian processes in $\mathbb{R}^k$, independent of $\varepsilon,V$, with covariance structure
$$\mathbb{E}[W^iW^{j\top}]=\mathbb{E}\big[g_i(U^1,\ldots,U^i;V)g_j(U^1,\ldots,U^j;V)^\top\big]\in\mathbb{R}^{k\times k},\qquad\mathbb{E}[U^{i+1}U^{j+1\top}]=\mathbb{E}\big[\delta f_i(W^0,\ldots,W^i;\varepsilon)f_j(W^0,\ldots,W^j;\varepsilon)^\top\big]\in\mathbb{R}^{k\times k},\quad i,j\ge0.\tag{74}$$
This is a standard form of an AMP algorithm, see e.g. [45, 60]. The iterations for $(W^0,\ldots,W^{T-1})\in\mathbb{R}^{n\times kT}$ and $(U^1,\ldots,U^T)\in\mathbb{R}^{d\times kT}$ admit a mapping to the form of [60, Eqs. (2.14) and (D.1–D.2)] with $kT$ vector iterates. Then by the AMP state evolution (cf. [60, Theorem 2.21 and Remark 2.2]), under the conditions of Assumption 2.1, almost surely as $n,d\to\infty$,
$$\frac1d\sum_{j=1}^d\delta_{(U^1_j,\ldots,U^m_j,V_j)}\overset{W_2}{\to}P(U^1,\ldots,U^m,V),\qquad\frac1n\sum_{i=1}^n\delta_{(W^0_i,\ldots,W^m_i,\varepsilon_i)}\overset{W_2}{\to}P(W^0,\ldots,W^m,\varepsilon).\tag{75}$$
We will now use the above state evolution to prove the desired conclusion (71). In the AMP algorithm (73), let $k=2$. We show the existence of Lipschitz nonlinearities $g_i=(g_{i,1},g_{i,2}):\mathbb{R}^{2i}\times\mathbb{R}^{T+2}\to\mathbb{R}^2$ and $f_i=(f_{i,1},f_{i,2}):\mathbb{R}^{2(i+1)}\times\mathbb{R}\to\mathbb{R}^2$ such that
$$(\tilde\theta^j_\gamma,\theta^*)=g_j(U^1,\ldots,U^j;V),\tag{76}$$
$$\big(-(\beta/\delta)(X\tilde\theta^j_\gamma-X\theta^*-\varepsilon),\,0\big)=f_j(W^0,\ldots,W^j;\varepsilon).\tag{77}$$
The base case is $g_0(V)=(\theta_0,\theta^*)$ and $f_0(W^0;\varepsilon)=(-(\beta/\delta)(W^0_1-W^0_2-\varepsilon),0)$, where $W^0=(W^0_1,W^0_2)=(X\theta_0,X\theta^*)$. Supposing inductively that (76–77) hold for some Lipschitz functions $g_0,f_0,\ldots,g_j,f_j$, we note that this implies $(\xi_{j\ell})_{12}=(\xi_{j\ell})_{22}=0$ for all $\ell\le j$. Then writing $U^j=(U^j_1,U^j_2)\in\mathbb{R}^{d\times2}$, we have
$$U^{j+1}_1=X^\top f_{j,1}(W^0,\ldots,W^j;\varepsilon)-\sum_{\ell=0}^j\Big(g_{\ell,1}(U^1,\ldots,U^\ell;V)(\xi_{j\ell})_{11}+g_{\ell,2}(U^1,\ldots,U^\ell;V)(\xi_{j\ell})_{21}\Big)=-\frac\beta\delta X^\top(X\tilde\theta^j_\gamma-X\theta^*-\varepsilon)-\sum_{\ell=0}^jg_{\ell,1}(U^1,\ldots,U^\ell;V)(\xi_{j\ell})_{11}-\sum_{\ell=0}^j(\xi_{j\ell})_{21}\cdot\theta^*.$$
So
$$\tilde\theta^{j+1}_\gamma=\tilde\theta^j_\gamma-\gamma\Big(\beta X^\top(X\tilde\theta^j_\gamma-X\theta^*-\varepsilon)-s(\tilde\theta^j_\gamma,\alpha^j_\gamma)\Big)+\sqrt2(b^{j+1}-b^j)=\tilde\theta^j_\gamma+\gamma\delta\Big(U^{j+1}_1+\sum_{\ell=0}^jg_{\ell,1}(U^1,\ldots,U^\ell;V)(\xi_{j\ell})_{11}+\sum_{\ell=0}^j(\xi_{j\ell})_{21}\cdot\theta^*\Big)+\gamma s(\tilde\theta^j_\gamma,\alpha^j_\gamma)+\sqrt2(b^{j+1}-b^j),$$
and to satisfy (76) we may define $g_{j+1}(\cdot)$ as $g_{j+1,2}(U^1,\ldots,U^{j+1};V)=\theta^*$ and
$$g_{j+1,1}(U^1,\ldots,U^{j+1};V)=g_{j,1}(U^1,\ldots,U^j;V)+\gamma\delta\Big(U^{j+1}_1+\sum_{\ell=0}^jg_{\ell,1}(U^1,\ldots,U^\ell;V)(\xi_{j\ell})_{11}+\sum_{\ell=0}^j(\xi_{j\ell})_{21}\cdot\theta^*\Big)+\gamma s\big(g_{j,1}(U^1,\ldots,U^j;V),\alpha^j_\gamma\big)+\sqrt2\,\rho^{j+1},\tag{78}$$
where we recall $V=(\theta^*,\theta_0,\rho^1,\ldots,\rho^T)$.
We note that $\theta\mapsto s(\theta,\alpha^j_\gamma)$ is Lipschitz by Assumption 2.2, so this function $g_{j+1}(\cdot)$ is also Lipschitz by the induction hypothesis. Next, the condition $g_{j+1,2}(\cdot)=\theta^*$ implies $(\zeta_{j+1,\ell})_{12}=(\zeta_{j+1,\ell})_{22}=0$ for $\ell\le j$, and allows us to compute $W^{j+1}=(W^{j+1}_1,W^{j+1}_2)$ as $W^{j+1}_2=X\theta^*$ and
$$W^{j+1}_1=Xg_{j+1,1}(U^1,\ldots,U^{j+1};V)-\sum_{\ell=0}^j\Big(f_{\ell,1}(W^0,\ldots,W^\ell;\varepsilon)(\zeta_{j+1,\ell})_{11}+f_{\ell,2}(W^0,\ldots,W^\ell;\varepsilon)(\zeta_{j+1,\ell})_{21}\Big)=X\tilde\theta^{j+1}_\gamma-\sum_{\ell=0}^jf_{\ell,1}(W^0,\ldots,W^\ell;\varepsilon)(\zeta_{j+1,\ell})_{11}.$$
Hence, with
$$X\tilde\theta^{j+1}_\gamma-X\theta^*-\varepsilon=W^{j+1}_1-W^{j+1}_2+\sum_{\ell=0}^jf_{\ell,1}(W^0,\ldots,W^\ell;\varepsilon)(\zeta_{j+1,\ell})_{11}-\varepsilon,$$
to satisfy (77) we can define $f_{j+1,2}=0$ and
$$f_{j+1,1}(W^0,\ldots,W^{j+1};\varepsilon)=-\frac\beta\delta\Big(W^{j+1}_1-W^{j+1}_2+\sum_{\ell=0}^jf_{\ell,1}(W^0,\ldots,W^\ell;\varepsilon)(\zeta_{j+1,\ell})_{11}-\varepsilon\Big).\tag{79}$$
This is also Lipschitz by the induction hypothesis, completing the induction. So using (76–77), the state evolution (75), and the fact that $X_n\overset{W_2}{\to}X$ implies $f(X_n)\overset{W_2}{\to}f(X)$ for Lipschitz $f$, we conclude that
$$\frac1d\sum_{j=1}^d\delta_{(\theta^*_j,\tilde\theta^0_{\gamma,j},\ldots,\tilde\theta^T_{\gamma,j})}\overset{W_2}{\to}P(\theta^*,\theta^0,\ldots,\theta^T),\qquad\frac1n\sum_{i=1}^n\delta_{(\eta^*_i,\varepsilon_i,\tilde\eta^0_{\gamma,i},\ldots,\tilde\eta^T_{\gamma,i})}\overset{W_2}{\to}P(W^*,\varepsilon,\eta^0,\ldots,\eta^T).\tag{80}$$
Here, the laws on the right side are defined by setting $W^*=W^i_2$ for each $i\ge1$, and
$$\theta^i=g_{i,1}(U^1,\ldots,U^i;V),\qquad\eta^i=-\frac\delta\beta f_{i,1}(W^0,\ldots,W^i;\varepsilon)+W^*+\varepsilon,$$
where $\{U^i\}=\{(U^i_1,U^i_2)\}$ and $\{W^i\}=\{(W^i_1,W^i_2)\}$ are the Gaussian processes from the AMP state evolution, independent of $\varepsilon,V$, with covariance kernels given by (74). Let us now show that
$$P(\theta^*,\theta^0,\ldots,\theta^T)=P(\theta^*,\theta^0_\gamma,\ldots,\theta^T_\gamma),\qquad P(W^*,\varepsilon,\eta^0,\ldots,\eta^T)=P(-w^*_\gamma,\varepsilon,\eta^0_\gamma,\ldots,\eta^T_\gamma),\tag{81}$$
where the laws on the right sides are those of the discrete DMFT equations. This will conclude the proof of (71).
To do so, let us define from the AMP state evolution variables (80) the quantities
$$u^i_\gamma=\delta U^{i+1}_1,\quad w^i_\gamma=-W^i_1,\quad w^*_\gamma=-W^*,\quad\theta^i_\gamma=\theta^i,\quad\eta^i_\gamma=\eta^i,\quad\frac{\partial\theta^i_\gamma}{\partial u^j_\gamma}=\frac1\delta\frac{\partial g_{i,1}}{\partial U^{j+1}_1}(U^1,\ldots,U^i;V),\quad\frac{\partial\eta^i_\gamma}{\partial w^j_\gamma}=\frac\delta\beta\frac{\partial f_{i,1}}{\partial W^j_1}(W^0,\ldots,W^i;\varepsilon).\tag{82}$$
Then it suffices to check that these quantities satisfy the discrete DMFT equations (57–64), by uniqueness of the iterative construction (65) of the solution to these discrete DMFT equations. We first note that by (74), $\{u^j_\gamma\}$ and $(w^*_\gamma,\{w^j_\gamma\})$ thus defined are centered Gaussian processes with covariances
$$\mathbb{E}[u^i_\gamma u^j_\gamma]=\delta^3\,\mathbb{E}[f_{i,1}(W^0,\ldots,W^i;\varepsilon)f_{j,1}(W^0,\ldots,W^j;\varepsilon)]=\delta\beta^2\,\mathbb{E}[(\eta^i-W^*-\varepsilon)(\eta^j-W^*-\varepsilon)],$$
$$\mathbb{E}[w^i_\gamma w^j_\gamma]=\mathbb{E}[g_{i,1}(U^1,\ldots,U^i;V)g_{j,1}(U^1,\ldots,U^j;V)]=\mathbb{E}[\theta^i\theta^j],$$
$$\mathbb{E}[w^i_\gamma w^*_\gamma]=\mathbb{E}[g_{i,1}(U^1,\ldots,U^i;V)g_{0,2}(V)]=\mathbb{E}[\theta^i\theta^*],\qquad\mathbb{E}[(w^*_\gamma)^2]=\mathbb{E}[g_{0,2}(V)^2]=\mathbb{E}[(\theta^*)^2],$$
which verifies (59) and (63) in light of (82). We next check the recursions (58) and (62) for the responses: recall that the AMP Onsager corrections are
$$(\zeta_{j,s})_{11}=\mathbb{E}\Big[\frac{\partial g_{j,1}}{\partial U^{s+1}_1}(U^1,\ldots,U^j;V)\Big],\qquad(\xi_{j,s})_{11}=\mathbb{E}\Big[\delta\frac{\partial f_{j,1}}{\partial W^s_1}(W^0,\ldots,W^j;\varepsilon)\Big],\qquad(\xi_{j,s})_{21}=\mathbb{E}\Big[\delta\frac{\partial f_{j,1}}{\partial W^s_2}(W^0,\ldots,W^j;\varepsilon)\Big].\tag{83}$$
By definition of $g_{j,1}$ in (78), we have $\frac{\partial g_{j,1}}{\partial U^{s+1}_1}=0$ for $s\ge j$, $\frac{\partial g_{j,1}}{\partial U^{s+1}_1}=\gamma\delta$ if $s=j-1$, and if $s\le j-2$,
$$\frac{\partial g_{j,1}}{\partial U^{s+1}_1}=\frac{\partial g_{j-1,1}}{\partial U^{s+1}_1}+\gamma\delta\sum_{\ell=s+1}^{j-1}\frac{\partial g_{\ell,1}}{\partial U^{s+1}_1}(\xi_{j-1,\ell})_{11}+\gamma\,\partial_\theta s(g_{j-1,1},\alpha^{j-1}_\gamma)\frac{\partial g_{j-1,1}}{\partial U^{s+1}_1}=\Big(1+\gamma\delta(\xi_{j-1,j-1})_{11}+\gamma\,\partial_\theta s(g_{j-1,1},\alpha^{j-1}_\gamma)\Big)\frac{\partial g_{j-1,1}}{\partial U^{s+1}_1}+\gamma\delta\sum_{\ell=s+1}^{j-2}(\xi_{j-1,\ell})_{11}\frac{\partial g_{\ell,1}}{\partial U^{s+1}_1}\tag{84}$$
where both sides are evaluated at $(U^1,\ldots,U^j;V)$. Similarly, by definition of $f_{j,1}(\cdot)$ in (79), we have $\frac{\partial f_{j,1}}{\partial W^s_1}=\frac{\partial f_{j,1}}{\partial W^s_2}=0$ if $s>j$, $\frac{\partial f_{j,1}}{\partial W^s_1}=-\frac{\partial f_{j,1}}{\partial W^s_2}=-\beta/\delta$ if $s=j$, and if $s<j$,
$$\frac{\partial f_{j,1}}{\partial W^s_1}=-\frac\beta\delta\sum_{\ell=s}^{j-1}\frac{\partial f_{\ell,1}}{\partial W^s_1}(\zeta_{j,\ell})_{11}=-\frac\beta\delta\Big(\sum_{\ell=s+1}^{j-1}\frac{\partial f_{\ell,1}}{\partial W^s_1}(\zeta_{j,\ell})_{11}-\frac\beta\delta(\zeta_{j,s})_{11}\Big),\tag{85}$$
$$\frac{\partial f_{j,1}}{\partial W^s_2}=-\frac\beta\delta\sum_{\ell=s}^{j-1}\frac{\partial f_{\ell,1}}{\partial W^s_2}(\zeta_{j,\ell})_{11}=-\frac\beta\delta\Big(\sum_{\ell=s+1}^{j-1}\frac{\partial f_{\ell,1}}{\partial W^s_2}(\zeta_{j,\ell})_{11}+\frac\beta\delta(\zeta_{j,s})_{11}\Big).\tag{86}$$
These recursions imply that $\{\frac{\partial f_{j,1}}{\partial W^s_1}\}$ and $\{\frac{\partial f_{j,1}}{\partial W^s_2}\}$ are deterministic.
Under the definitions (60), (64), and (82), we have
$$R^\gamma_\theta(j,s)=\frac1\delta\,\mathbb{E}\Big[\frac{\partial g_{j,1}}{\partial U^{s+1}_1}(U^1,\ldots,U^j;V)\Big]=\frac1\delta(\zeta_{j,s})_{11},\qquad R^\gamma_\eta(j,s)=\delta^2\Big[\frac{\partial f_{j,1}}{\partial W^s_1}\Big]=\delta(\xi_{j,s})_{11}.\tag{87}$$
Then by (84) and (85), $\{\frac{\partial g_{j,1}}{\partial U^{s+1}_1}\}$ and $\{\frac{\partial f_{j,1}}{\partial W^s_1}\}$ satisfy the recursions
$$\frac{\partial g_{j,1}}{\partial U^{s+1}_1}=\Big(1-\gamma\delta\beta+\gamma\,\partial_\theta s(g_{j-1,1};\alpha^{j-1}_\gamma)\Big)\frac{\partial g_{j-1,1}}{\partial U^{s+1}_1}+\gamma\sum_{\ell=s+1}^{j-2}R^\gamma_\eta(j-1,\ell)\frac{\partial g_{\ell,1}}{\partial U^{s+1}_1},\qquad\frac{\partial f_{j,1}}{\partial W^s_1}=-\beta\Big(\sum_{\ell=s+1}^{j-1}R^\gamma_\theta(j,\ell)\frac{\partial f_{\ell,1}}{\partial W^s_1}-\frac\beta\delta R^\gamma_\theta(j,s)\Big),$$
which verify (58) and (62) in view of (82) and the above boundary conditions $\frac{\partial g_{j,1}}{\partial U^j_1}=\gamma\delta$ and $\frac{\partial f_{j,1}}{\partial W^j_1}=-\beta/\delta$.

Finally we check the primary recursions (57) and (61). By (82), the definition of $g_{j+1,1}(\cdot)$ in (78), and $(\xi_{j,j})_{11}=-(\xi_{j,j})_{21}=-\beta$, we have
$$\theta^{j+1}=g_{j+1,1}(U^1,\ldots,U^{j+1};V)=\theta^j+\gamma\delta\Big(U^{j+1}_1+\sum_{\ell=0}^j\theta^\ell(\xi_{j,\ell})_{11}+\sum_{\ell=0}^j(\xi_{j,\ell})_{21}\cdot\theta^*\Big)+\gamma s(\theta^j,\alpha^j_\gamma)+\sqrt2\,\rho^{j+1}=\theta^j-\gamma\delta\beta(\theta^j-\theta^*)+\gamma s(\theta^j,\alpha^j_\gamma)+\gamma\delta\sum_{\ell=0}^{j-1}\theta^\ell(\xi_{j,\ell})_{11}+\gamma\delta\sum_{\ell=0}^{j-1}\theta^*(\xi_{j,\ell})_{21}+\gamma\delta U^{j+1}_1+\sqrt2\,\rho^{j+1}.\tag{88}$$
Similarly, by the definition of $f_{j,1}(\cdot)$ in (79) and $W^i_2\equiv W^*$,
$$\eta^j=-\frac\delta\beta f_{j,1}(W^0,\ldots,W^j;\varepsilon)+W^*+\varepsilon=-\frac\beta\delta\sum_{\ell=0}^{j-1}(\zeta_{j,\ell})_{11}(\eta^\ell-W^*-\varepsilon)+W^j_1.\tag{89}$$
Applying (83), note that in (88) we have $A_j:=\sum_{\ell=0}^{j-1}(\xi_{j,\ell})_{21}=\delta\sum_{\ell=0}^{j-1}\frac{\partial f_{j,1}}{\partial W^\ell_2}$. Then by the recursion (86) and the first identification of (87), we have
$$A_j=-\beta\delta\sum_{\ell=0}^{j-1}\sum_{s=\ell}^{j-1}\frac{\partial f_{s,1}}{\partial W^\ell_2}R^\gamma_\theta(j,s)=-\beta\sum_{s=0}^{j-1}R^\gamma_\theta(j,s)\,\delta\sum_{\ell=0}^s\frac{\partial f_{s,1}}{\partial W^\ell_2}=-\beta\sum_{s=0}^{j-1}R^\gamma_\theta(j,s)(A_s+\beta).$$
This coincides with the recursion for $\{\beta\frac{\partial\eta^j_\gamma}{\partial w^*_\gamma}\}$ in (69). Hence $A_j=\beta\frac{\partial\eta^j_\gamma}{\partial w^*_\gamma}=\frac1\delta R^\gamma_\eta(j,*)=-\frac1\delta\sum_{s=0}^{j-1}R^\gamma_\eta(j,s)$, where the last step applies Lemma 4.2.
https://arxiv.org/abs/2504.15556v1

Applying this form of $A_j$ and (87), we may write the equations (88–89) as
\[
\theta^{j+1} = \theta^j - \gamma\delta\beta(\theta^j - \theta^*) + \gamma s(\theta^j,\alpha^j_\gamma) + \gamma\sum_{\ell=0}^{j-1} R^\gamma_\eta(j,\ell)(\theta^\ell - \theta^*) + \gamma\delta\,U^{j+1}_1 + \sqrt{2}\,\rho^j,
\]
\[
\eta^j = -\beta\sum_{\ell=0}^{j-1} R^\gamma_\theta(j,\ell)(\eta^\ell - W^* - \varepsilon) + W^j_1,
\]
which verifies (57) and (61) in view of (82). This verifies that the definitions (82) indeed satisfy (57–64), concluding the proof of (71).

Step 2: Comparison with auxiliary dynamics. Let us now prove (66–68) for the original dynamics with an adaptive drift parameter $\hat\alpha^t_\gamma$. We will prove via induction that, almost surely as $n,d\to\infty$, for each $t = 0,\ldots,T$,
\[
\frac{1}{d}\|\theta^t_\gamma - \tilde\theta^t_\gamma\|^2 \to 0, \qquad \hat\alpha^t_\gamma \to \alpha^t_\gamma. \tag{90}
\]
Since
\[
W_2\Big(\frac{1}{d}\sum_{j=1}^d \delta_{(\theta^*_j,\theta^0_{\gamma,j},\ldots,\theta^T_{\gamma,j})},\ \frac{1}{d}\sum_{j=1}^d \delta_{(\theta^*_j,\tilde\theta^0_{\gamma,j},\ldots,\tilde\theta^T_{\gamma,j})}\Big)^2 \le \sum_{t=0}^T \frac{1}{d}\|\theta^t_\gamma - \tilde\theta^t_\gamma\|^2, \tag{91}
\]
\[
W_2\Big(\frac{1}{n}\sum_{i=1}^n \delta_{(\eta^*_i,\varepsilon_i,\eta^0_{\gamma,i},\ldots,\eta^T_{\gamma,i})},\ \frac{1}{n}\sum_{i=1}^n \delta_{(\eta^*_i,\varepsilon_i,\tilde\eta^0_{\gamma,i},\ldots,\tilde\eta^T_{\gamma,i})}\Big)^2 \le \sum_{t=0}^T \frac{1}{n}\|\eta^t_\gamma - \tilde\eta^t_\gamma\|^2 \le \sum_{t=0}^T \frac{1}{n}\|X\|_{\mathrm{op}}^2\|\theta^t_\gamma - \tilde\theta^t_\gamma\|^2,
\]
and $\|X\|_{\mathrm{op}}$ is almost surely bounded for all large $n,d$, the above inductive claim together with (71) implies (66–68). The base case of $t = 0$ in (90) holds exactly. Suppose (90) holds up to time $t$. For $t+1$, we see that
\[
\frac{1}{d}\|\theta^{t+1}_\gamma - \tilde\theta^{t+1}_\gamma\|^2 \le C\Big((1 + \|X\|_{\mathrm{op}}^4)\frac{1}{d}\|\theta^t_\gamma - \tilde\theta^t_\gamma\|^2 + \frac{1}{d}\|s(\theta^t_\gamma,\hat\alpha^t_\gamma) - s(\tilde\theta^t_\gamma,\alpha^t_\gamma)\|^2\Big).
\]
Applying boundedness of $\|X\|_{\mathrm{op}}$ and Lipschitz continuity of $s(\cdot)$ in Assumption 2.2, we have by the induction hypothesis that $\frac{1}{d}\|\theta^{t+1}_\gamma - \tilde\theta^{t+1}_\gamma\|^2 \to 0$ almost surely. Next, we have by the Lipschitz continuity of $G(\cdot)$ in Assumption 2.3 that
\[
\|\hat\alpha^{t+1}_\gamma - \alpha^{t+1}_\gamma\| \le \|\hat\alpha^t_\gamma - \alpha^t_\gamma\| + \gamma\Big\|G\Big(\hat\alpha^t_\gamma,\frac{1}{d}\sum_{\ell=1}^d \delta_{\theta^t_{\gamma,\ell}}\Big) - G(\alpha^t_\gamma,P(\theta^t_\gamma))\Big\| \le C\|\hat\alpha^t_\gamma - \alpha^t_\gamma\| + C\,W_2\Big(\frac{1}{d}\sum_{\ell=1}^d \delta_{\theta^t_{\gamma,\ell}},\ P(\theta^t_\gamma)\Big),
\]
which converges almost surely to 0 by the induction hypothesis and the above implication (91). This establishes the induction for (90) and hence completes the proof.

4.2 Step 2: Discretization error of DMFT equation

We now define a piecewise constant embedding of the components of the discrete DMFT system (57–64) into continuous time, and show that this converges to the solution of the continuous DMFT system established in Theorem 2.4, in the limit $\gamma \to 0$. For all times $t \in \mathbb{R}_+$, define
\[
\lfloor t\rfloor = \max\{i\gamma : i\gamma \le t,\ i \in \mathbb{Z}_+\} \in \gamma\mathbb{Z}_+, \qquad \lceil t\rceil = \lfloor t\rfloor + \gamma \in \gamma\mathbb{Z}_+, \qquad [t] = \lfloor t\rfloor/\gamma \in \mathbb{Z}_+. \tag{92}
\]
Fixing $T > 0$, let $\mathcal{D}^\gamma_\eta$ be the space of functions $(\bar R^\gamma_\eta,\bar C^\gamma_\eta,\bar\alpha_\gamma) \equiv \{(\bar R^\gamma_\eta(t,s),\bar C^\gamma_\eta(t,s),\bar\alpha^t_\gamma)\}_{0\le s\le t\le T}$ that are piecewise constant and right-continuous in $(s,t)$ with jumps at $\gamma\mathbb{Z}_+$, i.e. $\bar R^\gamma_\eta(t,s) = \bar R^\gamma_\eta(\lfloor t\rfloor,\lfloor s\rfloor)$ for all $0 \le s \le t \le T$, and similarly for $\bar C^\gamma_\eta,\bar\alpha_\gamma$.
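As a quick aside on the grid notation just introduced in (92), the three maps can be sketched as follows (a minimal illustration; the helper names are hypothetical and not from the paper):

```python
import math

def floor_grid(t: float, gamma: float) -> float:
    # ⌊t⌋ = max{iγ : iγ ≤ t, i ∈ Z+}: left endpoint of the grid interval containing t
    return gamma * math.floor(t / gamma)

def ceil_grid(t: float, gamma: float) -> float:
    # ⌈t⌉ = ⌊t⌋ + γ: per (92) this is the next grid point, even when t lies on the grid
    return floor_grid(t, gamma) + gamma

def grid_index(t: float, gamma: float) -> int:
    # [t] = ⌊t⌋/γ: integer index of the grid interval containing t
    return math.floor(t / gamma)
```

A piecewise constant, right-continuous embedding of a discrete sequence $x^0,x^1,\ldots$ is then $t\mapsto x^{[t]}$, constant on each interval $[i\gamma,(i+1)\gamma)$.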
Analogously, let $\mathcal{D}^\gamma_\theta$ be the space of functions $(\bar R^\gamma_\theta,\bar C^\gamma_\theta,\bar{\tilde\alpha}_\gamma) \equiv \{(\bar R^\gamma_\theta(t,s),\bar C^\gamma_\theta(t,s),\bar{\tilde\alpha}^t_\gamma)\}_{0\le s\le t\le T}$ that are piecewise constant and right-continuous with jumps at $\gamma\mathbb{Z}_+$.

We define a map $\mathcal{T}^\gamma_{\eta\to\theta}:\mathcal{D}^\gamma_\eta\to\mathcal{D}^\gamma_\theta$ as follows: Given $X_\gamma = (\bar R^\gamma_\eta,\bar C^\gamma_\eta,\bar\alpha_\gamma) \in \mathcal{D}^\gamma_\eta$, let $\{\bar u^t_\gamma\}_{t\in[0,T]}$ be a mean-zero Gaussian process with covariance $\bar C^\gamma_\eta$, let $\{b^t\}_{t\ge0}$ be a standard Brownian motion, and define processes $\{\bar\theta^t_\gamma\}_{t\in[0,T]}$ and $\{\partial\bar\theta^t_\gamma/\partial\bar u^s_\gamma\}_{0\le s\le t\le T}$ by
\[
\bar\theta^t_\gamma = \theta^0 + \int_0^{\lfloor t\rfloor}\Big[-\delta\beta(\bar\theta^s_\gamma - \theta^*) + s(\bar\theta^s_\gamma,\bar\alpha^s_\gamma) + \int_0^{\lfloor s\rfloor}\bar R^\gamma_\eta(s,r)(\bar\theta^r_\gamma - \theta^*)\,dr + \bar u^s_\gamma\Big]ds + \sqrt{2}\,b^{\lfloor t\rfloor}, \tag{93}
\]
\[
\frac{\partial\bar\theta^t_\gamma}{\partial\bar u^s_\gamma} = 1 + \mathbf{1}\{\lceil s\rceil \le \lfloor t\rfloor\}\int_{\lceil s\rceil}^{\lfloor t\rfloor}\Big[\Big(-\delta\beta + \partial_\theta s(\bar\theta^r_\gamma,\bar\alpha^r_\gamma)\Big)\frac{\partial\bar\theta^r_\gamma}{\partial\bar u^s_\gamma} + \int_{\lceil s\rceil}^{\lfloor r\rfloor}\bar R^\gamma_\eta(r,r')\frac{\partial\bar\theta^{r'}_\gamma}{\partial\bar u^s_\gamma}\,dr'\Big]dr. \tag{94}
\]
Define from these processes
\[
\bar C^\gamma_\theta(t,s) = \mathbb{E}[\bar\theta^t_\gamma\bar\theta^s_\gamma], \qquad \bar C^\gamma_\theta(t,*) = \mathbb{E}[\bar\theta^t_\gamma\theta^*], \qquad \bar C^\gamma_\theta(*,*) = \mathbb{E}[(\theta^*)^2],
\]
\[
\bar R^\gamma_\theta(t,s) = \mathbb{E}\Big[\frac{\partial\bar\theta^t_\gamma}{\partial\bar u^s_\gamma}\Big], \qquad \bar{\tilde\alpha}^t_\gamma = \alpha^0 + \int_0^{\lfloor t\rfloor}G(\bar{\tilde\alpha}^s_\gamma,P(\bar\theta^s_\gamma))\,ds \tag{95}
\]
and set $\mathcal{T}^\gamma_{\eta\to\theta}(X_\gamma) = (\bar R^\gamma_\theta,\bar C^\gamma_\theta,\bar{\tilde\alpha}_\gamma)$.

We also define a map $\mathcal{T}^\gamma_{\theta\to\eta}:\mathcal{D}^\gamma_\theta\to\mathcal{D}^\gamma_\eta$ as follows: Given $Y_\gamma = (\bar R^\gamma_\theta,\bar C^\gamma_\theta,\bar{\tilde\alpha}_\gamma) \in \mathcal{D}^\gamma_\theta$, let $(\bar w^*_\gamma,\{\bar w^t_\gamma\}_{t\in[0,T]})$ be a mean-zero Gaussian process with covariance $\bar C^\gamma_\theta$, and define
\[
\bar\eta^t_\gamma = -\beta\int_0^{\lfloor t\rfloor}\bar R^\gamma_\theta(t,s)(\bar\eta^s_\gamma + \bar w^*_\gamma - \varepsilon)\,ds - \bar w^t_\gamma, \tag{96}
\]
\[
\frac{\partial\bar\eta^t_\gamma}{\partial\bar w^s_\gamma} = \beta\bar R^\gamma_\theta(t,s) - \mathbf{1}\{\lceil s\rceil \le \lfloor t\rfloor\}\,\beta\int_{\lceil s\rceil}^{\lfloor t\rfloor}\bar R^\gamma_\theta(t,r)\frac{\partial\bar\eta^r_\gamma}{\partial\bar w^s_\gamma}\,dr. \tag{97}
\]
Define from these processes
\[
\bar R^\gamma_\eta(t,s) = \delta\beta\,\mathbb{E}\Big[\frac{\partial\bar\eta^t_\gamma}{\partial\bar w^s_\gamma}\Big], \qquad \bar C^\gamma_\eta(t,s) = \delta\beta^2\,\mathbb{E}\big[(\bar\eta^t_\gamma + \bar w^*_\gamma - \varepsilon)(\bar\eta^s_\gamma + \bar w^*_\gamma - \varepsilon)\big], \qquad \bar\alpha^t_\gamma = \bar{\tilde\alpha}^t_\gamma \tag{98}
\]
and set $\mathcal{T}^\gamma_{\theta\to\eta}(Y_\gamma) = (\bar R^\gamma_\eta,\bar C^\gamma_\eta,\bar\alpha_\gamma)$. These maps may be understood as discrete approximations of $\mathcal{T}_{\eta\to\theta}$ and $\mathcal{T}_{\theta\to\eta}$ constructed in Section 3.2, with domains restricted to the spaces $\mathcal{D}^\gamma_\theta$ and $\mathcal{D}^\gamma_\eta$ of piecewise constant inputs. Recall the spaces $\mathcal{S}_\theta,\mathcal{S}_\eta,\mathcal{S}$ of Section 3.1.
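To see concretely why (93) produces a piecewise constant path, note that the integral only accumulates up to $\lfloor t\rfloor$, so between grid points the solution is a plain Euler recursion. A minimal numerical sketch of the noiseless, memoryless special case ($\bar u\equiv 0$, no Brownian term, drift $s\equiv 0$, kernel $\bar R^\gamma_\eta\equiv 0$; all names hypothetical):

```python
def euler_theta_bar(theta0, theta_star, delta_beta, gamma, T):
    """Grid values of the special case of (93):
    theta_{k+1} = theta_k - gamma * delta_beta * (theta_k - theta_star)."""
    theta, path = theta0, [theta0]
    for _ in range(round(T / gamma)):
        theta = theta - gamma * delta_beta * (theta - theta_star)
        path.append(theta)
    return path

# With theta0 = 1, theta_star = 0, delta_beta = 1, the grid values track e^{-t},
# illustrating the relaxation of theta toward theta_star at rate delta*beta.
path = euler_theta_bar(1.0, 0.0, 1.0, 0.01, 1.0)
```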
The following lemma shows that the above maps $\mathcal{T}^\gamma_{\eta\to\theta},\mathcal{T}^\gamma_{\theta\to\eta}$ are well-defined, that the unique fixed point of these maps is a piecewise constant embedding of the discrete-time DMFT system in Section 4.1, and that furthermore this fixed point belongs to $\mathcal{S}$.

Lemma 4.3.
(a) Given any $X_\gamma \in \mathcal{D}^\gamma_\eta$ and realization of $\theta^*,\theta^0,\{b^t\}_{t\in[0,T]}$, and $\{\bar u^t_\gamma\}_{t\in[0,T]}$, the processes (93–94) have a unique solution, and this solution is piecewise constant and right-continuous with jumps at $\gamma\mathbb{Z}_+$. Consequently, $\mathcal{T}^\gamma_{\eta\to\theta}$ is a well-defined map from $\mathcal{D}^\gamma_\eta$ to $\mathcal{D}^\gamma_\theta$.
(b) Given any $Y_\gamma \in \mathcal{D}^\gamma_\theta$ and realization of $\varepsilon$ and $(\bar w^*_\gamma,\{\bar w^t_\gamma\})$, the processes (96–97) have a unique solution, and this solution is piecewise constant and right-continuous with jumps at $\gamma\mathbb{Z}_+$. Consequently, $\mathcal{T}^\gamma_{\theta\to\eta}$ is a well-defined map from $\mathcal{D}^\gamma_\theta$ to $\mathcal{D}^\gamma_\eta$.
(c) The map $\mathcal{T}^\gamma_{\eta\to\eta} = \mathcal{T}^\gamma_{\theta\to\eta}\circ\mathcal{T}^\gamma_{\eta\to\theta}$ has a unique fixed point in $\mathcal{D}^\gamma_\eta$, and the map $\mathcal{T}^\gamma_{\theta\to\theta} = \mathcal{T}^\gamma_{\eta\to\theta}\circ\mathcal{T}^\gamma_{\theta\to\eta}$ has a unique fixed point in $\mathcal{D}^\gamma_\theta$. These fixed points are given precisely by
\[
\bar\alpha^t_\gamma = \bar{\tilde\alpha}^t_\gamma = \alpha^{[t]}_\gamma, \quad \bar C^\gamma_\theta(t,s) = C^\gamma_\theta([t],[s]), \quad \bar C^\gamma_\theta(t,*) = C^\gamma_\theta([t],*), \quad \bar C^\gamma_\eta(t,s) = C^\gamma_\eta([t],[s]),
\]
\[
\bar R^\gamma_\theta(t,s) = \begin{cases} 1 & \text{if } [s] = [t] \\ \frac{1}{\gamma} R^\gamma_\theta([t],[s]) & \text{if } [s] < [t] \end{cases}, \qquad
\bar R^\gamma_\eta(t,s) = \begin{cases} \delta\beta^2 & \text{if } [s] = [t] \\ \frac{1}{\gamma} R^\gamma_\eta([t],[s]) & \text{if } [s] < [t] \end{cases} \tag{99}
\]
for all $0 \le s \le t \le T$, where $(\alpha_\gamma,C^\gamma_\theta,C^\gamma_\eta,R^\gamma_\theta,R^\gamma_\eta)$ are the components of the discrete DMFT system defined iteratively in time via (60) and (64).
(d) For any $\gamma > 0$ sufficiently small, we have $\mathcal{T}^\gamma_{\eta\to\theta}(\mathcal{D}^\gamma_\eta\cap\mathcal{S}_\eta) \subseteq \mathcal{D}^\gamma_\theta\cap\mathcal{S}_\theta$ and $\mathcal{T}^\gamma_{\theta\to\eta}(\mathcal{D}^\gamma_\theta\cap\mathcal{S}_\theta) \subseteq \mathcal{D}^\gamma_\eta\cap\mathcal{S}_\eta$, and the fixed point (99) belongs to $\mathcal{S}$.

Proof. For (a), if $X_\gamma = (\bar R^\gamma_\eta,\bar C^\gamma_\eta,\bar\alpha_\gamma) \in \mathcal{D}^\gamma_\eta$, then $\{\bar u^t_\gamma\}$ is also piecewise constant and right-continuous by these properties of $\bar C^\gamma_\eta$. Then an easy induction on $k$ shows that (93) has a unique solution over $t \in [0,k\gamma)$ for each integer $k \ge 1$, which is given by $\bar\theta^t_\gamma = \bar\theta^{\lfloor t\rfloor}_\gamma$. By definition, (94) is given by $\partial\bar\theta^t_\gamma/\partial\bar u^s_\gamma = 1$ for all $s \ge 0$ and $t \in [s,\lceil s\rceil)$. Then for each $s \ge 0$, an induction on $k$ shows also that (94) has a unique solution on $[s,\lceil s\rceil + k\gamma)$ for each integer $k \ge 1$, which is given by $\partial\bar\theta^t_\gamma/\partial\bar u^s_\gamma = \partial\bar\theta^{\lfloor t\rfloor}_\gamma/\partial\bar u^s_\gamma$, and furthermore this solution depends on $s$ only via $\lfloor s\rfloor$, i.e. $\partial\bar\theta^t_\gamma/\partial\bar u^s_\gamma = \partial\bar\theta^{\lfloor t\rfloor}_\gamma/\partial\bar u^{\lfloor s\rfloor}_\gamma$.
Thus the solutions of (93–94) are piecewise constant and right-continuous, implying the same properties for $\bar R^\gamma_\theta,\bar C^\gamma_\theta,\bar{\tilde\alpha}_\gamma$ defined by (95). This shows (a).

Part (b) follows from analogous inductive arguments, using that if $Y_\gamma = (\bar R^\gamma_\theta,\bar C^\gamma_\theta,\bar{\tilde\alpha}_\gamma) \in \mathcal{D}^\gamma_\theta$, then $\{\bar w^t_\gamma\}$ is also piecewise constant and right-continuous, and hence so are $\{\bar\eta^t_\gamma\}$ and $\{\partial\bar\eta^t_\gamma/\partial\bar w^s_\gamma\}$.

Part (c) also follows by induction: Since any fixed point is piecewise constant, it suffices to consider the values at $\gamma\mathbb{Z}_+$. By (93), $\bar\theta^0_\gamma = \theta^0$. Then by (95),
\[
\bar R^\gamma_\theta(0,0) = 1, \quad \bar C^\gamma_\theta(0,0) = \mathbb{E}[(\theta^0)^2] = C^\gamma_\theta(0,0), \quad \bar C^\gamma_\theta(0,*) = \mathbb{E}[\theta^0\theta^*] = C^\gamma_\theta(0,*), \quad \bar{\tilde\alpha}^0_\gamma = 0.
\]
Then by (96), $\bar\eta^0_\gamma = -\bar w^0_\gamma$, so $(\bar\eta^0_\gamma,\bar w^*_\gamma)$ is equal in joint law to the discrete DMFT variables $(\eta^0_\gamma,w^*_\gamma)$. Then by (98), any fixed point must satisfy
\[
\bar R^\gamma_\eta(0,0) = \delta\beta^2, \quad \bar C^\gamma_\eta(0,0) = \delta\beta^2\,\mathbb{E}[(\eta^0_\gamma + w^*_\gamma - \varepsilon)^2] = C^\gamma_\eta(0,0), \quad \bar\alpha^0_\gamma = 0.
\]
Suppose inductively that there is a unique fixed point over times $s \le t$ in $\{0,\gamma,\ldots,k\gamma\}$, and consider now $t = (k+1)\gamma$. The equations (93–94) and the piecewise constant nature of all processes imply
\[
\bar\theta^{(k+1)\gamma}_\gamma = \bar\theta^{k\gamma}_\gamma + \int_{k\gamma}^{(k+1)\gamma}\Big[-\delta\beta(\bar\theta^s_\gamma - \theta^*) + s(\bar\theta^s_\gamma,\bar\alpha^s_\gamma) + \int_0^{\lfloor s\rfloor}\bar R^\gamma_\eta(s,r)(\bar\theta^r_\gamma - \theta^*)\,dr + \bar u^s_\gamma\Big]ds + \sqrt{2}(b^{(k+1)\gamma} - b^{k\gamma})
\]
\[
= \bar\theta^{k\gamma}_\gamma + \gamma\Big(-\delta\beta(\bar\theta^{k\gamma}_\gamma - \theta^*) + s(\bar\theta^{k\gamma}_\gamma,\bar\alpha^{k\gamma}_\gamma) + \gamma\sum_{\ell=0}^{k-1}\bar R^\gamma_\eta(k\gamma,\ell\gamma)(\bar\theta^{\ell\gamma}_\gamma - \theta^*) + \bar u^{k\gamma}_\gamma\Big) + \sqrt{2}(b^{(k+1)\gamma} - b^{k\gamma}),
\]
$\partial\bar\theta^{(k+1)\gamma}_\gamma/\partial\bar u^{(k+1)\gamma}_\gamma = 1$, and for any $j \le k$,
\[
\frac{\partial\bar\theta^{(k+1)\gamma}_\gamma}{\partial\bar u^{j\gamma}_\gamma} = \frac{\partial\bar\theta^{k\gamma}_\gamma}{\partial\bar u^{j\gamma}_\gamma} + \int_{k\gamma}^{(k+1)\gamma}\Big[\Big(-\delta\beta + \partial_\theta s(\bar\theta^r_\gamma,\bar\alpha^r_\gamma)\Big)\frac{\partial\bar\theta^r_\gamma}{\partial\bar u^{j\gamma}_\gamma} + \int_{(j+1)\gamma}^{k\gamma}\bar R^\gamma_\eta(r,r')\frac{\partial\bar\theta^{r'}_\gamma}{\partial\bar u^{j\gamma}_\gamma}\,dr'\Big]dr
= \frac{\partial\bar\theta^{k\gamma}_\gamma}{\partial\bar u^{j\gamma}_\gamma} + \gamma\Big[\Big(-\delta\beta + \partial_\theta s(\bar\theta^{k\gamma}_\gamma,\bar\alpha^{k\gamma}_\gamma)\Big)\frac{\partial\bar\theta^{k\gamma}_\gamma}{\partial\bar u^{j\gamma}_\gamma} + \gamma\sum_{\ell=j+1}^{k-1}\bar R^\gamma_\eta(k\gamma,\ell\gamma)\frac{\partial\bar\theta^{\ell\gamma}_\gamma}{\partial\bar u^{j\gamma}_\gamma}\Big].
\]
Comparing these equations with (57–58) and applying $\gamma\bar R^\gamma_\eta(k\gamma,\ell\gamma) = R^\gamma_\eta(k,\ell)$, $\bar\alpha^{k\gamma}_\gamma = \alpha^k_\gamma$, and the equality in law $(\bar u^0_\gamma,\ldots,\bar u^{k\gamma}_\gamma) \overset{L}{=} (u^0_\gamma,\ldots,u^k_\gamma)$ by the induction hypothesis, this shows the equality in law
\[
\Big\{\theta^*,\ \bar\theta^{i\gamma}_\gamma,\ \frac{\partial\bar\theta^{i\gamma}_\gamma}{\partial\bar u^{j\gamma}_\gamma}\Big\}_{i<j\le k+1} \overset{L}{=} \Big\{\theta^*,\ \theta^i_\gamma,\ \gamma^{-1}\frac{\partial\theta^i_\gamma}{\partial u^j_\gamma}\Big\}_{i<j\le k+1}.
\]
Then (99) holds for the components $\bar R^\gamma_\theta,\bar C^\gamma_\theta,\bar{\tilde\alpha}_\gamma$ of any fixed point up to times $s \le t$ in $\{0,\gamma,\ldots,(k+1)\gamma\}$. Now the equations (96–97) imply
\[
\bar\eta^{(k+1)\gamma}_\gamma = -\beta\int_0^{(k+1)\gamma}\bar R^\gamma_\theta((k+1)\gamma,s)(\bar\eta^s_\gamma + \bar w^*_\gamma - \varepsilon)\,ds - \bar w^{(k+1)\gamma}_\gamma = -\beta\gamma\sum_{j=0}^{k}\bar R^\gamma_\theta((k+1)\gamma,j\gamma)\big(\bar\eta^{j\gamma}_\gamma + \bar w^*_\gamma - \varepsilon\big) - \bar w^{(k+1)\gamma}_\gamma,
\]
$\partial\bar\eta^{(k+1)\gamma}_\gamma/\partial\bar w^{(k+1)\gamma}_\gamma = \delta\beta^2$, and for all $j \le k$,
\[
\frac{\partial\bar\eta^{(k+1)\gamma}_\gamma}{\partial\bar w^{j\gamma}_\gamma} = \beta\bar R^\gamma_\theta((k+1)\gamma,j\gamma) - \beta\int_{(j+1)\gamma}^{(k+1)\gamma}\bar R^\gamma_\theta((k+1)\gamma,r)\frac{\partial\bar\eta^r_\gamma}{\partial\bar w^{j\gamma}_\gamma}\,dr
= \beta\bar R^\gamma_\theta((k+1)\gamma,j\gamma) - \beta\gamma\sum_{\ell=j+1}^{k}\bar R^\gamma_\theta((k+1)\gamma,\ell\gamma)\frac{\partial\bar\eta^{\ell\gamma}_\gamma}{\partial\bar w^{j\gamma}_\gamma}.
\]
Comparing these equations with (61–62) and applying again the induction hypothesis, this shows the equality in law
\[
\Big\{\bar\eta^{i\gamma}_\gamma,\ \frac{\partial\bar\eta^{i\gamma}_\gamma}{\partial\bar w^{j\gamma}_\gamma}\Big\}_{i<j\le k+1} \overset{L}{=} \Big\{\eta^i_\gamma,\ \gamma^{-1}\frac{\partial\eta^i_\gamma}{\partial w^j_\gamma}\Big\}_{i<j\le k+1}.
\]
Then (99) also holds for the components $\bar R^\gamma_\eta,\bar C^\gamma_\eta,\bar\alpha_\gamma$ of any fixed point up to times $s \le t$ in $\{0,\gamma,\ldots,(k+1)\gamma\}$, completing the induction.
Thus any fixed points of $\mathcal{T}^\gamma_{\eta\to\eta}$ and $\mathcal{T}^\gamma_{\theta\to\theta}$ must satisfy (99) for all $0 \le s \le t \le T$, implying also that such fixed points are unique in $\mathcal{D}^\gamma_\eta$ and $\mathcal{D}^\gamma_\theta$ by uniqueness of the iterative construction (65) of the solution to the discrete DMFT equations. This shows (c).

For (d), we check that $\mathcal{T}^\gamma_{\eta\to\eta}$ and $\mathcal{T}^\gamma_{\theta\to\theta}$ define contractive mappings on $\mathcal{D}^\gamma_\eta\cap\mathcal{S}_\eta$ and $\mathcal{D}^\gamma_\theta\cap\mathcal{S}_\theta$. Note that if $Y_\gamma = (\bar R^\gamma_\theta,\bar C^\gamma_\theta,\bar{\tilde\alpha}_\gamma) \in \mathcal{D}^\gamma_\theta\cap\mathcal{S}_\theta$ and $X_\gamma = \mathcal{T}^\gamma_{\theta\to\eta}(Y_\gamma) = (\bar R^\gamma_\eta,\bar C^\gamma_\eta,\bar\alpha_\gamma)$, then setting $\bar\xi^t_\gamma = \bar\eta^t_\gamma + \bar w^*_\gamma - \varepsilon$, the same arguments as in Lemma 3.5 show
\[
\mathbb{E}(\bar\xi^t_\gamma)^2 \le 2\Big[\beta^2\int_0^{\lfloor t\rfloor}(t - s + 1)^2\,\Phi^2_{R_\theta}(t - s)\,\mathbb{E}(\bar\xi^s_\gamma)^2\,ds + 2\Phi_{C_\theta}(t) + 2\tau^2_* + \sigma^2\Big],
\]
\[
|\bar R^\gamma_\eta(t,s)| \le |\beta|\Big(\mathbf{1}\{\lceil s\rceil \le \lfloor t\rfloor\}\int_{\lceil s\rceil}^{\lfloor t\rfloor}\Phi_{R_\theta}(t - s')\,|\bar R^\gamma_\eta(s',s)|\,ds' + \delta|\beta|\,\Phi_{R_\theta}(t - s)\Big).
\]
Upper-bounding the integrals $\int_0^{\lfloor t\rfloor}$ and $\int_{\lceil s\rceil}^{\lfloor t\rfloor}$ by $\int_0^t$ and $\int_s^t$, we obtain as in Lemma 3.5 that $\bar C^\gamma_\eta(t,t) \le \Phi_{C_\eta}(t)$ and $\bar R^\gamma_\eta(t,s) \le \Phi_{R_\eta}(t-s)$. All continuity conditions defining $\mathcal{S}_\eta$ are automatically satisfied since the components of $X_\gamma$ are piecewise constant outside the knots $D = [0,T]\cap\gamma\mathbb{Z}_+$. Thus $X_\gamma \in \mathcal{S}_\eta$, i.e. $\mathcal{T}^\gamma_{\theta\to\eta}$ maps $\mathcal{D}^\gamma_\theta\cap\mathcal{S}_\theta$ into $\mathcal{D}^\gamma_\eta\cap\mathcal{S}_\eta$.

Conversely, suppose $X_\gamma = (\bar R^\gamma_\eta,\bar C^\gamma_\eta,\bar\alpha_\gamma) \in \mathcal{D}^\gamma_\eta\cap\mathcal{S}_\eta$, and let $Y_\gamma = \mathcal{T}^\gamma_{\eta\to\theta}(X_\gamma) = (\bar R^\gamma_\theta,\bar C^\gamma_\theta,\bar{\tilde\alpha}_\gamma)$. A small extension of the argument in Lemma 3.6 shows $Y_\gamma \in \mathcal{S}_\theta$. Let us explain this extension for $\bar C^\gamma_\theta$: Defining
\[
\bar v^t_\gamma = -\delta\beta(\bar\theta^t_\gamma - \theta^*) + s(\bar\theta^t_\gamma,\bar\alpha^t_\gamma) + \int_0^{\lfloor t\rfloor}\bar R^\gamma_\eta(t,s)(\bar\theta^s_\gamma - \theta^*)\,ds + \bar u^t_\gamma,
\]
we have $\bar\theta^{t+\gamma}_\gamma = \bar\theta^t_\gamma + \gamma\,\bar v^t_\gamma + \sqrt{2}(b^{t+\gamma} - b^t)$ for $t \in \gamma\mathbb{Z}_+$. Then, analogous to the calculation using Ito's formula in Lemma 3.6, for sufficiently small $\gamma > 0$,
\[
\bar C^\gamma_\theta(t+\gamma,t+\gamma) - \bar C^\gamma_\theta(t,t) = 2\gamma\,\mathbb{E}\bar\theta^t_\gamma\bar v^t_\gamma + \gamma^2\,\mathbb{E}(\bar v^t_\gamma)^2 + 2\gamma
\le \frac{\gamma}{1-\gamma}\mathbb{E}(\bar\theta^t_\gamma)^2 + [\gamma(1-\gamma) + \gamma^2]\,\mathbb{E}(\bar v^t_\gamma)^2 + 2\gamma
\le \gamma\Big(1.1\,\mathbb{E}(\bar\theta^t_\gamma)^2 + \mathbb{E}(\bar v^t_\gamma)^2 + 2\Big) = \int_t^{t+\gamma}\Big(1.1\,\mathbb{E}(\bar\theta^s_\gamma)^2 + \mathbb{E}(\bar v^s_\gamma)^2 + 2\Big)ds,
\]
the last equality holding because $\bar v_\gamma$ and $\bar\theta_\gamma$ are piecewise constant. Summing this inequality shows
\[
\bar C^\gamma_\theta(t,t) - \bar C^\gamma_\theta(0,0) \le \int_0^t\Big(1.1\,\mathbb{E}(\bar\theta^s_\gamma)^2 + \mathbb{E}(\bar v^s_\gamma)^2 + 2\Big)ds
\]
for all $t \in [0,T]$.
Then, bounding $\mathbb{E}(\bar v^s_\gamma)^2$ as in (44) of Lemma 3.6, this shows that $\bar C^\gamma_\theta(t,t) \le \Phi_{C_\theta}(t)$. Similar extensions of the arguments in Lemma 3.6 show that $\bar R^\gamma_\theta(t,s) \le \Phi_{R_\theta}(t-s)$ and $\|\bar{\tilde\alpha}^t_\gamma\|^2 \le \Phi_\alpha(t)$, so $Y_\gamma \in \mathcal{S}_\theta$ as claimed. Then $\mathcal{T}^\gamma_{\eta\to\theta}$ maps $\mathcal{D}^\gamma_\eta\cap\mathcal{S}_\eta$ into $\mathcal{D}^\gamma_\theta\cap\mathcal{S}_\theta$.

The same arguments as in Lemmas 3.7 and 3.8 bound the moduli of continuity of $\mathcal{T}^\gamma_{\eta\to\theta}$ and $\mathcal{T}^\gamma_{\theta\to\eta}$ in the metrics $d(\cdot)$ of Section 3.2, implying that $\mathcal{T}^\gamma_{\eta\to\eta}$ and $\mathcal{T}^\gamma_{\theta\to\theta}$ are contractive for sufficiently large $\lambda > 0$ defining $d(\cdot)$. These metrics induce the topologies of uniform convergence on the spaces $\mathcal{D}^\gamma_\eta\cap\mathcal{S}_\eta$ and $\mathcal{D}^\gamma_\theta\cap\mathcal{S}_\theta$, which are equivalent to closed subsets of finite-dimensional vector spaces and hence also complete. Then $\mathcal{T}^\gamma_{\eta\to\eta}$ and $\mathcal{T}^\gamma_{\theta\to\theta}$ have unique fixed points in $\mathcal{D}^\gamma_\eta\cap\mathcal{S}_\eta$ and $\mathcal{D}^\gamma_\theta\cap\mathcal{S}_\theta$ by the Banach fixed-point theorem. These must coincide with the fixed point (99), by the uniqueness statement (without restriction to $\mathcal{S}_\theta$ and $\mathcal{S}_\eta$) shown in part (c). Thus this fixed point (99) belongs to $\mathcal{S}$.

Lemma 4.4. There exists a constant $C > 0$ (depending on $T$ but not on $\lambda,\gamma$) such that for all large enough $\lambda > 0$ defining the metrics (47) and all sufficiently small $\gamma > 0$:
(a) For any $X_\gamma \in \mathcal{D}^\gamma_\eta\cap\mathcal{S}_\eta$, $d\big(\mathcal{T}_{\eta\to\theta}(X_\gamma),\mathcal{T}^\gamma_{\eta\to\theta}(X_\gamma)\big) \le C\sqrt{\gamma}$.
(b) For any $Y_\gamma \in \mathcal{D}^\gamma_\theta\cap\mathcal{S}_\theta$, $d\big(\mathcal{T}_{\theta\to\eta}(Y_\gamma),\mathcal{T}^\gamma_{\theta\to\eta}(Y_\gamma)\big) \le C\sqrt{\gamma}$.

Proof. We show part (a). Consider any $X_\gamma = (\bar R^\gamma_\eta,\bar C^\gamma_\eta,\bar\alpha_\gamma)$, and denote $\mathcal{T}_{\eta\to\theta}(X_\gamma) = (R_\theta,C_\theta,\tilde\alpha)$ and $\mathcal{T}^\gamma_{\eta\to\theta}(X_\gamma) = (\bar R^\gamma_\theta,\bar C^\gamma_\theta,\bar{\tilde\alpha}_\gamma)$. Throughout, $C,C' > 0$ denote constants depending on $T$ but not on $\lambda,\gamma$, changing from instance to instance.

Bound of $d(C_\theta,\bar C^\gamma_\theta)$. Given $X_\gamma$, let us couple the evolutions (12) and (93) by a common realization of $\{\bar u^t_\gamma\}$ with covariance $\bar C^\gamma_\eta$ and a common Brownian motion. Then by definition, we have
\[
\theta^t = \theta^0 + \int_0^t\Big[-\delta\beta(\theta^s - \theta^*) + s(\theta^s,\bar\alpha^s_\gamma) + \int_0^s\bar R^\gamma_\eta(s,s')(\theta^{s'} - \theta^*)\,ds' + \bar u^s_\gamma\Big]ds + \sqrt{2}\,b^t,
\]
\[
\bar\theta^t_\gamma = \theta^0 + \int_0^{\lfloor t\rfloor}\Big[-\delta\beta(\bar\theta^s_\gamma - \theta^*) + s(\bar\theta^s_\gamma,\bar\alpha^s_\gamma) + \int_0^{\lfloor s\rfloor}\bar R^\gamma_\eta(s,s')(\bar\theta^{s'}_\gamma - \theta^*)\,ds' + \bar u^s_\gamma\Big]ds + \sqrt{2}\,b^{\lfloor t\rfloor}.
\]
Then $\mathbb{E}(\theta^t - \bar\theta^t_\gamma)^2 \le 6[(\mathrm{I}) + (\mathrm{II}) + (\mathrm{III}) + (\mathrm{IV}) + (\mathrm{V}) + (\mathrm{VI})]$, where
\[
(\mathrm{I}) = \mathbb{E}\Big(\int_0^{\lfloor t\rfloor}\delta\beta(\theta^s - \bar\theta^s_\gamma)\,ds\Big)^2, \qquad
(\mathrm{II}) = \mathbb{E}\Big(\int_0^{\lfloor t\rfloor}\big(s(\theta^s,\bar\alpha^s_\gamma) - s(\bar\theta^s_\gamma,\bar\alpha^s_\gamma)\big)\,ds\Big)^2,
\]
\[
(\mathrm{III}) = \mathbb{E}\Big(\int_0^{\lfloor t\rfloor}\int_0^{\lfloor s\rfloor}\bar R^\gamma_\eta(s,s')(\theta^{s'} - \bar\theta^{s'}_\gamma)\,ds'\,ds\Big)^2, \qquad
(\mathrm{IV}) = \mathbb{E}\Big(\int_0^{\lfloor t\rfloor}\int_{\lfloor s\rfloor}^s\bar R^\gamma_\eta(s,s')(\theta^{s'} - \theta^*)\,ds'\,ds\Big)^2,
\]
\[
(\mathrm{V}) = \mathbb{E}\Big(\int_{\lfloor t\rfloor}^t\Big[-\delta\beta(\theta^s - \theta^*) + s(\theta^s,\bar\alpha^s_\gamma) + \int_0^s\bar R^\gamma_\eta(s,s')(\theta^{s'} - \theta^*)\,ds' + \bar u^s_\gamma\Big]ds\Big)^2, \qquad
(\mathrm{VI}) = \mathbb{E}\big(\sqrt{2}\,b^t - \sqrt{2}\,b^{\lfloor t\rfloor}\big)^2.
\]
By the same arguments as in the proof of Lemma 3.7, using the Lipschitz continuity of $s(\cdot)$ in Assumption 2.2, we may show
\[
(\mathrm{I}) + (\mathrm{II}) + (\mathrm{III}) \le \frac{C}{\lambda}e^{2\lambda t}\sup_{s\in[0,T]}e^{-2\lambda s}\,\mathbb{E}(\theta^s - \bar\theta^s_\gamma)^2.
\]
Applying $s - \lfloor s\rfloor \le \gamma$, $t - \lfloor t\rfloor \le \gamma$, and the bounds for $\bar R^\gamma_\eta,\bar C^\gamma_\eta$ implied by $X_\gamma \in \mathcal{S}_\eta$, we have $(\mathrm{IV}) + (\mathrm{V}) + (\mathrm{VI}) \le C\gamma$. Then
\[
\sup_{t\in[0,T]}e^{-2\lambda t}\,\mathbb{E}(\theta^t - \bar\theta^t_\gamma)^2 \le \frac{C}{\lambda}\sup_{t\in[0,T]}e^{-2\lambda t}\,\mathbb{E}(\theta^t - \bar\theta^t_\gamma)^2 + C\gamma,
\]
and choosing large enough $\lambda > 0$ yields
\[
\sup_{t\in[0,T]}e^{-\lambda t}\sqrt{\mathbb{E}(\theta^t - \bar\theta^t_\gamma)^2} \le C'\sqrt{\gamma}. \tag{100}
\]
This implies as in the proof of Lemma 3.7 that $d(C_\theta,\bar C^\gamma_\theta) \le C'\sqrt{\gamma}$.

Bound of $d(R_\theta,\bar R^\gamma_\theta)$. Denote by $r_\theta(t,s) = \partial\theta^t/\partial u^s$ and $\bar r^\gamma_\theta(t,s) = \partial\bar\theta^t_\gamma/\partial\bar u^s_\gamma$ the processes (13) and (94) defined from the above coupling of $\{\theta^t\}$ and $\{\bar\theta^t_\gamma\}$. Then by definition, we have
\[
r_\theta(t,s) = 1 + \int_s^t\Big[\Big(-\delta\beta + \partial_\theta s(\theta^{s'},\bar\alpha^{s'}_\gamma)\Big)r_\theta(s',s) + \int_s^{s'}\bar R^\gamma_\eta(s',s'')\,r_\theta(s'',s)\,ds''\Big]ds',
\]
\[
\bar r^\gamma_\theta(t,s) = 1 + \mathbf{1}\{\lceil s\rceil \le \lfloor t\rfloor\}\int_{\lceil s\rceil}^{\lfloor t\rfloor}\Big[\Big(-\delta\beta + \partial_\theta s(\bar\theta^{s'}_\gamma,\bar\alpha^{s'}_\gamma)\Big)\bar r^\gamma_\theta(s',s) + \int_{\lceil s\rceil}^{\lfloor s'\rfloor}\bar R^\gamma_\eta(s',s'')\,\bar r^\gamma_\theta(s'',s)\,ds''\Big]ds'.
\]
Hence $\mathbb{E}|r_\theta(t,s) - \bar r^\gamma_\theta(t,s)| \le (\mathrm{I}) + (\mathrm{II}) + (\mathrm{III}) + (\mathrm{IV})$, where
\[
(\mathrm{I}) = \mathbb{E}\Big[\mathbf{1}\{\lceil s\rceil \le \lfloor t\rfloor\}\int_{\lceil s\rceil}^{\lfloor t\rfloor}\Big|\Big(-\delta\beta + \partial_\theta s(\theta^{s'},\bar\alpha^{s'}_\gamma)\Big)r_\theta(s',s) - \Big(-\delta\beta + \partial_\theta s(\bar\theta^{s'}_\gamma,\bar\alpha^{s'}_\gamma)\Big)\bar r^\gamma_\theta(s',s)\Big|\,ds'\Big],
\]
\[
(\mathrm{II}) = \mathbb{E}\Big[\mathbf{1}\{\lceil s\rceil \le \lfloor t\rfloor\}\int_{\lceil s\rceil}^{\lfloor t\rfloor}\int_{\lceil s\rceil}^{\lfloor s'\rfloor}\Big|\bar R^\gamma_\eta(s',s'')\big(r_\theta(s'',s) - \bar r^\gamma_\theta(s'',s)\big)\Big|\,ds''\,ds'\Big],
\]
\[
(\mathrm{III}) = \mathbb{E}\Big[\mathbf{1}\{\lceil s\rceil \le \lfloor t\rfloor\}\int_{\lceil s\rceil}^{\lfloor t\rfloor}\int_{(s,\lceil s\rceil)\cup(\lfloor s'\rfloor,s')}\Big|\bar R^\gamma_\eta(s',s'')\,r_\theta(s'',s)\Big|\,ds''\,ds'\Big],
\]
\[
(\mathrm{IV}) = \mathbb{E}\Big[\int_{(s,\lceil s\rceil)\cup(\lfloor t\rfloor,t)}\Big|\Big(-\delta\beta + \partial_\theta s(\theta^{s'},\bar\alpha^{s'}_\gamma)\Big)r_\theta(s',s) + \int_s^{s'}\bar R^\gamma_\eta(s',s'')\,r_\theta(s'',s)\,ds''\Big|\,ds'\Big].
\]
By the same arguments as in the proof of Lemma 3.7, using the above bound (100) and Lipschitz continuity of $\partial_\theta s(\cdot)$ in Assumption 2.2, we may show
\[
(\mathrm{I}) + (\mathrm{II}) \le \frac{C}{\lambda}e^{\lambda t}\Big(\sup_{0\le s\le t\le T}e^{-\lambda t}\,\mathbb{E}|r_\theta(t,s) - \bar r^\gamma_\theta(t,s)|\Big) + C\sqrt{\gamma}.
\]
Applying $\lceil s\rceil - s \le \gamma$, $s' - \lfloor s'\rfloor \le \gamma$, and $t - \lfloor t\rfloor \le \gamma$, we have $(\mathrm{III}) + (\mathrm{IV}) \le C\gamma$. Then
\[
\sup_{0\le s\le t\le T}e^{-\lambda t}\,\mathbb{E}|r_\theta(t,s) - \bar r^\gamma_\theta(t,s)| \le \frac{C}{\lambda}\sup_{0\le s\le t\le T}e^{-\lambda t}\,\mathbb{E}|r_\theta(t,s) - \bar r^\gamma_\theta(t,s)| + C\sqrt{\gamma},
\]
and choosing large enough $\lambda > 0$ and rearranging gives
\[
d(R_\theta,\bar R^\gamma_\theta) \le \sup_{0\le s\le t\le T}e^{-\lambda t}\,\mathbb{E}|r_\theta(t,s) - \bar r^\gamma_\theta(t,s)| \le C\sqrt{\gamma}.
\]
Bound of $d(\tilde\alpha,\bar{\tilde\alpha}_\gamma)$. By definition,
\[
\tilde\alpha^t = \alpha^0 + \int_0^t G(\tilde\alpha^s,P(\theta^s))\,ds, \qquad \bar{\tilde\alpha}^t_\gamma = \alpha^0 + \int_0^{\lfloor t\rfloor}G(\bar{\tilde\alpha}^s_\gamma,P(\bar\theta^s_\gamma))\,ds,
\]
so $\|\tilde\alpha^t - \bar{\tilde\alpha}^t_\gamma\| \le (\mathrm{I}) + (\mathrm{II})$, where
\[
(\mathrm{I}) = \int_0^{\lfloor t\rfloor}\big\|G(\tilde\alpha^s,P(\theta^s)) - G(\bar{\tilde\alpha}^s_\gamma,P(\bar\theta^s_\gamma))\big\|\,ds, \qquad
(\mathrm{II}) = \int_{\lfloor t\rfloor}^t\big\|G(\tilde\alpha^s,P(\theta^s))\big\|\,ds.
\]
By the same arguments as in the proof of Lemma 3.7, using the above bound (100) and the Lipschitz continuity of $G(\cdot)$ in Assumption 2.3, we have
\[
(\mathrm{I}) \le \frac{C}{\lambda}e^{\lambda t}\sup_{s\in[0,T]}e^{-\lambda s}\|\tilde\alpha^s - \bar{\tilde\alpha}^s_\gamma\| + C\sqrt{\gamma}.
\]
Using $t - \lfloor t\rfloor \le \gamma$, we have $(\mathrm{II}) \le C\gamma$. So choosing $\lambda > 0$ large enough and rearranging shows
\[
d(\tilde\alpha,\bar{\tilde\alpha}_\gamma) = \sup_{t\in[0,T]}e^{-\lambda t}\|\tilde\alpha^t - \bar{\tilde\alpha}^t_\gamma\| \le C\sqrt{\gamma}.
\]
This concludes the proof of (a). The proof of (b) is analogous, and we omit it for brevity.

Lemma 4.5. Let $\{\theta^t\}_{t\in[0,T]}$, $\{\eta^t\}_{t\in[0,T]}$, and $\{\alpha^t\}_{t\in[0,T]}$ be the components of the solution to the DMFT system in Theorem 2.4, and let $\{\bar\theta^t_\gamma\}_{t\in[0,T]}$, $\{\bar\eta^t_\gamma\}_{t\in[0,T]}$, $\{\bar\alpha^t_\gamma\}_{t\in[0,T]}$ be defined from the components of the fixed point (99) via (93) and (96). Then for any fixed $m \ge 0$ and $t_1,\ldots,t_m \in [0,T]$, as $\gamma \to 0$,
\[
P(\theta^*,\bar\theta^{t_1}_\gamma,\ldots,\bar\theta^{t_m}_\gamma) \overset{W_2}{\to} P(\theta^*,\theta^{t_1},\ldots,\theta^{t_m}), \tag{101}
\]
\[
P(\bar w^*_\gamma,\varepsilon,\bar\eta^{t_1}_\gamma,\ldots,\bar\eta^{t_m}_\gamma) \overset{W_2}{\to} P(w^*,\varepsilon,\eta^{t_1},\ldots,\eta^{t_m}), \tag{102}
\]
\[
\{\bar\alpha^t_\gamma\}_{t\in[0,T]} \to \{\alpha^t\}_{t\in[0,T]}, \tag{103}
\]
where (103) holds in the sense of uniform convergence on $C([0,T],\mathbb{R}^K)$.

Proof. Let $X_\gamma = (\bar R^\gamma_\eta,\bar C^\gamma_\eta,\bar\alpha_\gamma)$ and $Y_\gamma = (\bar R^\gamma_\theta,\bar C^\gamma_\theta,\bar\alpha_\gamma)$ be the components of the fixed point (99), and let $X = (R_\eta,C_\eta,\alpha)$ and $Y = (R_\theta,C_\theta,\alpha)$ be those of the unique solution to the continuous DMFT system prescribed by Theorem 2.4. Let $d(\cdot)$ denote the metrics introduced in (48) and (49), for a sufficiently large choice of $\lambda > 0$. By Lemma 4.3, $X_\gamma \in \mathcal{D}^\gamma_\eta\cap\mathcal{S}_\eta$ for all sufficiently small $\gamma > 0$, so $\mathcal{T}_{\eta\to\eta}(X_\gamma)$ is well-defined.
Then, applying the fixed point conditions for $X_\gamma$ and $X$,
\[
d(X,X_\gamma) = d\big(\mathcal{T}_{\eta\to\eta}(X),\mathcal{T}^\gamma_{\eta\to\eta}(X_\gamma)\big) \le d\big(\mathcal{T}_{\eta\to\eta}(X),\mathcal{T}_{\eta\to\eta}(X_\gamma)\big) + d\big(\mathcal{T}_{\eta\to\eta}(X_\gamma),\mathcal{T}^\gamma_{\eta\to\eta}(X_\gamma)\big).
\]
By Lemmas 3.7 and 3.8, $\mathcal{T}_{\eta\to\eta}$ is a contraction on $\mathcal{S}_\eta$ for large enough $\lambda > 0$, for which the first term satisfies $d(\mathcal{T}_{\eta\to\eta}(X),\mathcal{T}_{\eta\to\eta}(X_\gamma)) \le \frac{1}{2}d(X,X_\gamma)$. Thus, rearranging shows
\[
d(X,X_\gamma) \le 2d\big(\mathcal{T}_{\eta\to\eta}(X_\gamma),\mathcal{T}^\gamma_{\eta\to\eta}(X_\gamma)\big) = 2d\big(\mathcal{T}_{\theta\to\eta}\circ\mathcal{T}_{\eta\to\theta}(X_\gamma),\ \mathcal{T}^\gamma_{\theta\to\eta}\circ\mathcal{T}^\gamma_{\eta\to\theta}(X_\gamma)\big).
\]
Letting $Y' = \mathcal{T}_{\eta\to\theta}(X_\gamma) \in \mathcal{S}_\theta$ and $Y_\gamma = \mathcal{T}^\gamma_{\eta\to\theta}(X_\gamma) \in \mathcal{D}^\gamma_\theta\cap\mathcal{S}_\theta$, this shows
\[
d(X,X_\gamma) \le 2d\big(\mathcal{T}_{\theta\to\eta}(Y'),\mathcal{T}^\gamma_{\theta\to\eta}(Y_\gamma)\big) \le 2d\big(\mathcal{T}_{\theta\to\eta}(Y'),\mathcal{T}_{\theta\to\eta}(Y_\gamma)\big) + 2d\big(\mathcal{T}_{\theta\to\eta}(Y_\gamma),\mathcal{T}^\gamma_{\theta\to\eta}(Y_\gamma)\big).
\]
By Lemma 4.4, $d(Y',Y_\gamma) = d(\mathcal{T}_{\eta\to\theta}(X_\gamma),\mathcal{T}^\gamma_{\eta\to\theta}(X_\gamma)) \le C\sqrt{\gamma}$. Then by Lemma 3.8, the first term is bounded as $d(\mathcal{T}_{\theta\to\eta}(Y'),\mathcal{T}_{\theta\to\eta}(Y_\gamma)) \le Cd(Y',Y_\gamma) \le C'\sqrt{\gamma}$. By Lemma 4.4, the second term is also bounded as $d(\mathcal{T}_{\theta\to\eta}(Y_\gamma),\mathcal{T}^\gamma_{\theta\to\eta}(Y_\gamma)) \le C\sqrt{\gamma}$. So combining these statements and taking $\gamma \to 0$ shows
\[
\lim_{\gamma\to0}d(X,X_\gamma) = 0, \qquad \lim_{\gamma\to0}d(Y,Y_\gamma) = 0. \tag{104}
\]
By definition of the metrics $d(\cdot)$, the convergence (104) implies the uniform convergence statement (103). It also implies $\lim_{\gamma\to0}\sup_{0\le s\le t\le T}|C_\eta(t,s) - \bar C^\gamma_\eta(t,s)| = 0$. We recall that Theorem 2.4 shows $(R_\eta,C_\eta,\alpha) \in \mathcal{S}^{\mathrm{cont}}_\eta$, for which the continuity property (32) holds for all $0 \le s \le t \le T$. Then there exists a coupling of $\{u^t\}_{t\in[0,T]}$ and $\{\bar u^t_\gamma\}_{t\in[0,T]}$ with covariance kernels $C_\eta(t,s)$ and $\bar C^\gamma_\eta(t,s)$ for which $\lim_{\gamma\to0}\sup_{t\in[0,T]}\mathbb{E}(u^t - \bar u^t_\gamma)^2 = 0$; see e.g. [52, Lemma D.1]. Defining $\{\theta^t\}$ and $\{\bar\theta^t_\gamma\}$ by this coupling of $\{u^t\}$ and $\{\bar u^t_\gamma\}$ and a common Brownian motion, the same arguments as leading to (100) show $\lim_{\gamma\to0}\sup_{t\in[0,T]}e^{-\lambda t}\,\mathbb{E}(\theta^t - \bar\theta^t_\gamma)^2 = 0$, hence in particular $\lim_{\gamma\to0}\mathbb{E}(\theta^t - \bar\theta^t_\gamma)^2 = 0$ for each fixed $t \in [0,T]$ under this coupling, which implies (101). A similar argument shows (102).
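The role of the $\lambda$-weighted metric in the contraction estimates above can be isolated in a toy example (hypothetical names; this is not the paper's map $\mathcal{T}_{\eta\to\eta}$): under $d_\lambda(f,g) = \sup_t e^{-\lambda t}|f(t)-g(t)|$, the Volterra-type map $(Tf)(t) = 1 + \int_0^t f(s)\,ds$ has Lipschitz constant $1/\lambda$, so Banach iteration converges to its fixed point $t\mapsto e^t$.

```python
import math

def apply_T(f, grid):
    # (Tf)(t) = 1 + integral_0^t f(s) ds, left-endpoint (Euler) quadrature on the grid
    out, integral = [], 0.0
    for i in range(len(grid)):
        out.append(1.0 + integral)
        if i + 1 < len(grid):
            integral += f[i] * (grid[i + 1] - grid[i])
    return out

def d_lambda(f, g, grid, lam):
    # weighted sup metric d_lambda(f, g) = sup_t e^{-lam*t} |f(t) - g(t)|
    return max(math.exp(-lam * t) * abs(a - b) for t, a, b in zip(grid, f, g))

h = 0.001
grid = [i * h for i in range(1001)]   # the interval [0, 1]
f = [0.0] * len(grid)
for _ in range(30):                   # Banach fixed-point iteration
    f = apply_T(f, grid)
```

Up to discretization error the iterates converge to $e^t$, and for constant inputs one can check numerically that $d_\lambda(Tf,Tg) \le d_\lambda(f,g)/\lambda$, mirroring how a large $\lambda$ makes the maps of Section 3.2 contractive.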
4.3 Step 3: Discretization of Langevin dynamics

We now consider a piecewise constant embedding $\{\bar\theta^t_\gamma,\bar{\hat\alpha}^t_\gamma\}_{t\in[0,T]}$ of the discretized Langevin process (55–56), defined as
\[
\bar\theta^t_\gamma = \theta^{[t]}_\gamma, \qquad \bar{\hat\alpha}^t_\gamma = \hat\alpha^{[t]}_\gamma, \qquad \bar\eta^t_\gamma = X\bar\theta^t_\gamma,
\]
where $[t] \in \mathbb{Z}_+$ is as previously defined in (92). A simple induction shows that this is equivalently the solution to a modification of the dynamics (4–5),
\[
\bar\theta^t_\gamma = \theta^0 + \int_0^{\lfloor t\rfloor}\Big[-\beta X^\top(X\bar\theta^s_\gamma - y) + \big(s(\bar\theta^s_{\gamma,j},\bar{\hat\alpha}^s_\gamma)\big)_{j=1}^d\Big]ds + \sqrt{2}\,b^{\lfloor t\rfloor}, \qquad
\bar{\hat\alpha}^t_\gamma = \hat\alpha^0 + \int_0^{\lfloor t\rfloor}G\Big(\bar{\hat\alpha}^s_\gamma,\frac{1}{d}\sum_{j=1}^d\delta_{\bar\theta^s_{\gamma,j}}\Big)ds. \tag{105}
\]
We compare this to the solution $\{\theta^t,\hat\alpha^t\}_{t\ge0}$ of the original dynamics (4–5), with $\eta^t = X\theta^t$, to show the following lemma.

Lemma 4.6. Let $\{\theta^t,\eta^t,\hat\alpha^t\}$ be defined by (4–5), and let $\{\bar\theta^t_\gamma,\bar\eta^t_\gamma,\bar{\hat\alpha}^t_\gamma\}$ be defined by the piecewise constant process (105). Then for any fixed $m \ge 1$ and $t_1,\ldots,t_m \in [0,T]$, there exists a function $\iota:\mathbb{R}_+\to\mathbb{R}_+$ satisfying $\lim_{\gamma\to0}\iota(\gamma) = 0$ such that almost surely
\[
\limsup_{n,d\to\infty}W_2\Big(\frac{1}{d}\sum_{j=1}^d\delta_{(\theta^*_j,\theta^{t_1}_j,\ldots,\theta^{t_m}_j)},\ \frac{1}{d}\sum_{j=1}^d\delta_{(\theta^*_j,\bar\theta^{t_1}_{\gamma,j},\ldots,\bar\theta^{t_m}_{\gamma,j})}\Big) < \iota(\gamma),
\]
\[
\limsup_{n,d\to\infty}W_2\Big(\frac{1}{n}\sum_{i=1}^n\delta_{(\eta^*_i,\varepsilon_i,\eta^{t_1}_i,\ldots,\eta^{t_m}_i)},\ \frac{1}{n}\sum_{i=1}^n\delta_{(\eta^*_i,\varepsilon_i,\bar\eta^{t_1}_{\gamma,i},\ldots,\bar\eta^{t_m}_{\gamma,i})}\Big) < \iota(\gamma),
\]
\[
\limsup_{n,d\to\infty}\sup_{t\in[0,T]}\big\|\hat\alpha^t - \bar{\hat\alpha}^t_\gamma\big\| < \iota(\gamma).
\]
We proceed to prove Lemma 4.6.

Lemma 4.7. Let $\{b^t\}_{t\ge0}$ be a standard Brownian motion on $\mathbb{R}^d$. For any fixed $T > 0$, there exists a constant $C > 0$ depending on $T$ such that almost surely
\[
\limsup_{d\to\infty}\sup_{t\in[0,T]}\frac{1}{\sqrt{d}}\|b^t\| \le C, \qquad
\limsup_{d\to\infty}\sup_{t\in[0,T]}\frac{1}{\sqrt{d}}\big(\|b^t - b^{\lfloor t\rfloor}\| + \|b^t - b^{\lceil t\rceil}\|\big) \le C\sqrt{\gamma\max(\log(1/\gamma),1)}.
\]
Proof. We first show that for any $a,b \in \mathbb{R}_+$ with $a \le b$, we have $\mathbb{P}(\sup_{t\in[a,b]}d^{-1/2}\|b^t - b^a\| \ge u) \le \exp\big(-\frac{cdu^2}{b-a}\big)$ for any $u \ge \sqrt{4(b-a)}$ and some constant $c > 0$.
To see this, for any $\lambda \in (0,\frac{d}{2(b-a)})$, we have
\[
\mathbb{P}\Big(\sup_{t\in[a,b]}d^{-1/2}\|b^t - b^a\| \ge u\Big) = \mathbb{P}\Big(\sup_{t\in[a,b]}\exp(\lambda\|b^t - b^a\|^2/d) \ge \exp(\lambda u^2)\Big) \overset{(*)}{\le} e^{-\lambda u^2}\,\mathbb{E}\big[\exp(\lambda\|b^b - b^a\|^2/d)\big] \overset{(**)}{=} e^{-\lambda u^2}\big(1 - 2\lambda(b-a)/d\big)^{-d/2},
\]
where $(*)$ applies Doob's maximal inequality for the nonnegative submartingale $\{\exp(\lambda\|b^t - b^a\|^2/d)\}_{t\in[a,b]}$, and $(**)$ applies the moment generating function of the $\chi^2$ distribution. Choosing $\lambda = cd/(b-a)$ for some small enough $c > 0$ and applying $(1-x) \ge e^{-2x}$ for small $x > 0$, we have
\[
\mathbb{P}\Big(\sup_{t\in[a,b]}d^{-1/2}\|b^t - b^a\| \ge u\Big) \le \exp\Big(-\frac{cdu^2}{b-a} + 2cd\Big) \le \exp\Big(-\frac{cdu^2}{2(b-a)}\Big)
\]
for $u \ge \sqrt{4(b-a)}$, proving the inequality.

For the first claim, we apply this with $a = 0$, $b = T$, $u = \sqrt{4T}$ to yield that $\mathbb{P}(\sup_{t\in[0,T]}d^{-1/2}\|b^t\| \ge \sqrt{4T}) \le \exp(-2cd)$, so the claim follows by the Borel–Cantelli lemma. For the second claim, let $N = T/\gamma$ (assumed without loss of generality to be an integer greater than 1), and $I_i = [(i-1)\gamma,i\gamma)$ for $i \in [N]$. Then applying the inequality over these intervals yields
\[
\mathbb{P}\Big(\sup_{t\in[0,T]}(d\gamma)^{-1/2}\|b^t - b^{\lfloor t\rfloor}\| \ge u\Big) \le N\max_{i\le N}\mathbb{P}\Big(\sup_{t\in I_i}(d\gamma)^{-1/2}\|b^t - b^{i\gamma}\| \ge u\Big) \le Ne^{-cdu^2}
\]
for any $u \ge 2$. Hence by choosing $u = C\sqrt{\log N}$ for large enough $C > 0$, we have $\mathbb{P}(\sup_{t\in[0,T]}(d\gamma)^{-1/2}\|b^t - b^{\lfloor t\rfloor}\| \ge C\sqrt{\log N}) \le \exp(-c'd)$. A similar argument applies to $\sup_{t\in[0,T]}(d\gamma)^{-1/2}\|b^t - b^{\lceil t\rceil}\|$, proving the second claim.

Lemma 4.8. For any fixed $T > 0$, there exists a constant $C > 0$ depending on $T$ such that almost surely
\[
\limsup_{n,d\to\infty}\sup_{t\in[0,T]}\big(\|\theta^t\|/\sqrt{d} + \|\hat\alpha^t\|\big) \le C.
\]
Proof. Let $C > 0$ denote a constant depending on $T$ and changing from instance to instance.
Since
\[
\theta^t = \theta^0 - \int_0^t\Big(\beta X^\top(X\theta^s - y) - s(\theta^s,\hat\alpha^s)\Big)ds + \sqrt{2}\,b^t,
\]
and $\|s(\theta^s,\hat\alpha^s)\| \le C(\sqrt{d} + \|\theta^s\| + \sqrt{d}\,\|\hat\alpha^s\|)$ by Assumption 2.2, we have for every $t \in [0,T]$ that
\[
\|\theta^t\| \le \|\theta^0\| + C(\|X\|_{\mathrm{op}}^2 + 1)\int_0^t\Big(\|\theta^s\| + \sqrt{d}\,\|\hat\alpha^s\|\Big)ds + C\Big(\|X^\top y\| + \sqrt{d} + \sup_{t\in[0,T]}\|b^t\|\Big).
\]
Next by Assumption 2.3, we have
\[
\|\hat\alpha^t\| \le \|\hat\alpha^0\| + C\int_0^t\big(1 + \|\theta^s\|/\sqrt{d} + \|\hat\alpha^s\|\big)ds.
\]
Combining the above two bounds yields
\[
\frac{\|\theta^t\|}{\sqrt{d}} + \|\hat\alpha^t\| \le C(\|X\|_{\mathrm{op}}^2 + 1)\int_0^t\Big(\frac{\|\theta^s\|}{\sqrt{d}} + \|\hat\alpha^s\|\Big)ds + C\Big(\frac{\|\theta^0\|}{\sqrt{d}} + \|\hat\alpha^0\| + \frac{\|X^\top y\|}{\sqrt{d}} + \sup_{t\in[0,T]}\frac{\|b^t\|}{\sqrt{d}} + 1\Big). \tag{106}
\]
Hence by Gronwall's inequality, we have
\[
\sup_{t\in[0,T]}\frac{\|\theta^t\|}{\sqrt{d}} + \|\hat\alpha^t\| \le C\exp\big(C(\|X\|_{\mathrm{op}}^2 + 1)\big)\Big(\frac{\|\theta^0\|}{\sqrt{d}} + \|\hat\alpha^0\| + \frac{\|X^\top y\|}{\sqrt{d}} + \sup_{t\in[0,T]}\frac{\|b^t\|}{\sqrt{d}} + 1\Big).
\]
Under Assumption 2.1, we have almost surely that
\[
\limsup_{n,d\to\infty}\max\Big(\|\hat\alpha^0\|,\ \frac{1}{\sqrt{d}}\|\theta^0\|,\ \frac{1}{\sqrt{d}}\|X^\top y\|,\ \|X\|_{\mathrm{op}}\Big) \le C, \tag{107}
\]
so the conclusion follows from the first claim of Lemma 4.7.

Proof of Lemma 4.6. Here and throughout, $C > 0$ denotes a constant depending on $T$ but not on $\gamma$, and changing from instance to instance. We restrict to the almost-sure event where Lemmas 4.7 and 4.8 hold, and (107) holds for all large $n,d$. Then, coupling (4) and (105) by the same Brownian motion, for any $0 \le t \le T$,
\[
\|\theta^t - \bar\theta^t_\gamma\| \le C\int_0^{\lfloor t\rfloor}\Big(\|X^\top X(\theta^s - \bar\theta^s_\gamma)\| + \|s(\theta^s;\hat\alpha^s) - s(\bar\theta^s_\gamma;\bar{\hat\alpha}^s_\gamma)\|\Big)ds + C\int_{\lfloor t\rfloor}^t\Big\|X^\top X\theta^s + s(\theta^s,\hat\alpha^s)\Big\|ds + \sqrt{2}\,\|b^t - b^{\lfloor t\rfloor}\|.
\]
Applying Lipschitz continuity of $s(\cdot)$ in Assumption 2.2 and the bounds of Lemma 4.8 and (107), this shows
\[
\|\theta^t - \bar\theta^t_\gamma\| \le C\int_0^t\big(\|\theta^s - \bar\theta^s_\gamma\| + \sqrt{d}\,\|\hat\alpha^s - \bar{\hat\alpha}^s_\gamma\|\big)ds + C\gamma\sqrt{d} + C\sup_{t\in[0,T]}\|b^t - b^{\lfloor t\rfloor}\|. \tag{108}
\]
Similarly, using Assumption 2.3 and $W_2^2\big(d^{-1}\sum_{j=1}^d\delta_{u_j},\,d^{-1}\sum_{j=1}^d\delta_{v_j}\big) \le d^{-1}\|u - v\|^2$ for $u,v \in \mathbb{R}^d$,
\[
\|\hat\alpha^t - \bar{\hat\alpha}^t_\gamma\| \le \int_0^{\lfloor t\rfloor}\Big\|G\Big(\hat\alpha^s,\frac{1}{d}\sum_{j=1}^d\delta_{\theta^s_j}\Big) - G\Big(\bar{\hat\alpha}^s_\gamma,\frac{1}{d}\sum_{j=1}^d\delta_{\bar\theta^s_{\gamma,j}}\Big)\Big\|ds + \int_{\lfloor t\rfloor}^t\Big\|G\Big(\hat\alpha^s,\frac{1}{d}\sum_{j=1}^d\delta_{\theta^s_j}\Big)\Big\|ds
\le C\int_0^t\Big(\|\hat\alpha^s - \bar{\hat\alpha}^s_\gamma\| + \frac{1}{\sqrt{d}}\|\theta^s - \bar\theta^s_\gamma\|\Big)ds + C\gamma.
\]
Combining the above display with (108), we have by Gronwall's lemma
\[
\sup_{t\in[0,T]}\|\hat\alpha^t - \bar{\hat\alpha}^t_\gamma\| + \frac{1}{\sqrt{d}}\|\theta^t - \bar\theta^t_\gamma\| \le C\gamma + \frac{C}{\sqrt{d}}\sup_{t\in[0,T]}\|b^t - b^{\lfloor t\rfloor}\|. \tag{109}
\]
By Lemma 4.7, there exists some $C > 0$ such that
\[
\limsup_{d\to\infty}\sup_{t\in[0,T]}\frac{1}{\sqrt{d}}\|b^t - b^{\lfloor t\rfloor}\| \le C\sqrt{\gamma\cdot\max(\log(1/\gamma),1)}.
\]
Substituting this bound in (109) proves the claim on $\alpha$.
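Gronwall's lemma, invoked for (106) and (109) above, has an elementary discrete form: if $x_k \le a + C\gamma\sum_{j<k}x_j$ for all $k$, then $x_k \le a(1+C\gamma)^k \le a\,e^{Ck\gamma}$. A minimal numerical check of this envelope (hypothetical names):

```python
import math

def gronwall_extremal(a, C, gamma, n):
    """Sequence saturating x_k = a + C*gamma*sum_{j<k} x_j, which solves to
    x_k = a*(1 + C*gamma)^k, together with the exponential Gronwall envelope."""
    xs, running_sum = [], 0.0
    for k in range(n + 1):
        x = a + C * gamma * running_sum
        xs.append(x)
        running_sum += x
    envelope = [a * math.exp(C * k * gamma) for k in range(n + 1)]
    return xs, envelope

xs, envelope = gronwall_extremal(a=1.0, C=2.0, gamma=0.01, n=100)
```

Since $1 + x \le e^x$, the extremal sequence always sits below the exponential envelope, which is exactly how the integral inequality (106) is converted into the a priori bound of Lemma 4.8.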
Noting that
\[
W_2\Big(\frac{1}{d}\sum_{j=1}^d\delta_{(\theta^*_j,\theta^{t_1}_j,\ldots,\theta^{t_m}_j)},\ \frac{1}{d}\sum_{j=1}^d\delta_{(\theta^*_j,\bar\theta^{t_1}_{\gamma,j},\ldots,\bar\theta^{t_m}_{\gamma,j})}\Big) \le \sqrt{\frac{1}{d}\sum_{\ell=1}^m\sum_{j=1}^d\big(\theta^{t_\ell}_j - \bar\theta^{t_\ell}_{\gamma,j}\big)^2} \le \sqrt{m}\cdot\frac{1}{\sqrt{d}}\sup_{t\in[0,T]}\|\theta^t - \bar\theta^t_\gamma\|,
\]
this proves also the claim on $\theta$, and the claim on $\eta$ follows from $\|\eta^t - \bar\eta^t_\gamma\| \le \|X\|_{\mathrm{op}}\|\theta^t - \bar\theta^t_\gamma\|$ and the same argument.

4.4 Completing the proof

Proof of Theorem 2.5. For part (a), by the triangle inequality,
\[
\sup_{t\in[0,T]}\|\hat\alpha^t - \alpha^t\| \le \sup_{t\in[0,T]}\|\hat\alpha^t - \bar{\hat\alpha}^t_\gamma\| + \sup_{t\in[0,T]}\|\bar{\hat\alpha}^t_\gamma - \bar\alpha^t_\gamma\| + \sup_{t\in[0,T]}\|\bar\alpha^t_\gamma - \alpha^t\|.
\]
Since $\{\bar{\hat\alpha}^t_\gamma\}$ and $\{\bar\alpha^t_\gamma\}$ are piecewise constant with values equal to those of the discrete processes of Section 4.1, Lemma 4.1 implies that the middle term converges to 0 a.s. as $n,d\to\infty$. Then, taking $n,d\to\infty$ followed by $\gamma\to0$ and applying also Lemmas 4.5 and 4.6 to bound the first and third terms in this limit, this shows (a).

For part (b), similarly combining Lemmas 4.1, 4.5, and 4.6 shows that almost surely, for any $m \ge 1$ and $t_0,t_1,\ldots,t_m \in [0,T]$,
\[
\frac{1}{d}\sum_{j=1}^d\delta_{(\theta^*_j,\theta^{t_0}_j,\ldots,\theta^{t_m}_j)} \overset{W_2}{\to} P(\theta^*,\theta^{t_0},\ldots,\theta^{t_m}). \tag{110}
\]
We now strengthen this to almost-sure convergence in the Wasserstein-2 sense over $\mathbb{R}\times C([0,T])$, equipped with the product norm $\|\theta\|_\infty := |\theta^*| + \sup_{t\in[0,T]}|\theta^t|$. By [54, Definition 6.8 and Theorem 6.9], it suffices to show weak convergence together with convergence of the squared norm $\|\theta\|_\infty^2$, which will be implied by convergence for all pseudo-Lipschitz test functions $f:\mathbb{R}\times C([0,T])\to\mathbb{R}$ satisfying
\[
|f(\theta) - f(\theta')| \le C\|\theta - \theta'\|_\infty(1 + \|\theta\|_\infty + \|\theta'\|_\infty). \tag{111}
\]
Consider the event $E$ where (110) holds for each $m \ge 2$ and $\{t_0,t_1,t_2,\ldots,t_m\} = \{0,\gamma,2\gamma,\ldots,T\}$ with $\gamma = T/m$.
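The display above bounds $W_2$ between two empirical measures by the $\ell_2$ distance of the identity coupling, pairing $\theta^{t_\ell}_j$ with $\bar\theta^{t_\ell}_{\gamma,j}$. In one dimension this is easy to verify directly, since the optimal $W_2$ coupling of two equal-size empirical measures pairs sorted samples (a minimal sketch; hypothetical names and toy data):

```python
import math

def w2_empirical_1d(u, v):
    # exact W2 between (1/d) sum_j delta_{u_j} and (1/d) sum_j delta_{v_j}:
    # the optimal coupling pairs the sorted samples
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sorted(u), sorted(v))) / len(u))

def identity_coupling_l2(u, v):
    # the (generally suboptimal) coupling pairing u_j with v_j,
    # which gives the upper bound used in the proof and in (91)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)) / len(u))

u = [0.3, -1.2, 2.5, 0.0, 1.1]
v = [0.1, -1.0, 2.9, 0.4, 0.8]
```

Any other coupling, the identity coupling included, can only increase the transport cost, which is all the proof needs.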
Let $\{\tilde\theta^t_j\}_{t\in[0,T]}$ be a piecewise linear interpolation of $\{\theta^t_j\}_{t\in[0,T]\cap\gamma\mathbb{Z}_+}$, and similarly let $\{\tilde\theta^t\}_{t\in[0,T]}$ be a piecewise linear interpolation of the DMFT process $\{\theta^t\}_{t\in[0,T]}$. For any pseudo-Lipschitz function $f:\mathbb{R}\times C([0,T])\to\mathbb{R}$, we have
\[
\Big|\frac{1}{d}\sum_{j=1}^d f(\theta^*_j,\{\theta^t_j\}_{t\in[0,T]}) - \mathbb{E}[f(\theta^*,\{\theta^t\}_{t\in[0,T]})]\Big| \le (\mathrm{I}) + (\mathrm{II}) + (\mathrm{III}) \tag{112}
\]
with
\[
(\mathrm{I}) = \Big|\frac{1}{d}\sum_{j=1}^d f(\theta^*_j,\{\tilde\theta^t_j\}_{t\in[0,T]}) - \mathbb{E}[f(\theta^*,\{\tilde\theta^t\}_{t\in[0,T]})]\Big|,
\]
\[
(\mathrm{II}) = \Big|\frac{1}{d}\sum_{j=1}^d f(\theta^*_j,\{\theta^t_j\}_{t\in[0,T]}) - f(\theta^*_j,\{\tilde\theta^t_j\}_{t\in[0,T]})\Big|,
\]
\[
(\mathrm{III}) = \Big|\mathbb{E}[f(\theta^*,\{\theta^t\}_{t\in[0,T]})] - \mathbb{E}[f(\theta^*,\{\tilde\theta^t\}_{t\in[0,T]})]\Big|.
\]
For a piecewise linear process $\tilde\theta$ with knots at $\gamma\mathbb{Z}_+$, $f(\theta^*_j,\{\tilde\theta^t\}_{t\in[0,T]})$ may be understood as a function of $(\theta^*_j,\tilde\theta^0,\tilde\theta^\gamma,\tilde\theta^{2\gamma},\ldots,\tilde\theta^T)$, where this function is pseudo-Lipschitz on $\mathbb{R}^{m+2}$ by the pseudo-Lipschitz property (111) for $f$. Then on the above event $E$, the Wasserstein-2 convergence (110) implies $\lim_{n,d\to\infty}(\mathrm{I}) = 0$.

To bound $(\mathrm{II})$, let $C,C' > 0$ be constants depending on $T$ (but not $\gamma$) and changing from instance to instance. Writing $\theta_j = (\theta^*_j,\{\theta^t_j\}_{t\in[0,T]}) \in \mathbb{R}\times C([0,T],\mathbb{R})$ and applying (111), we have
\[
(\mathrm{II}) \le \frac{C}{d}\sum_{j=1}^d\|\theta_j - \tilde\theta_j\|_\infty\big(1 + \|\theta_j\|_\infty\big) \le C'\Big(\frac{1}{d}\sum_{j=1}^d\|\theta_j - \tilde\theta_j\|_\infty^2\Big)^{1/2}\Big(1 + \frac{1}{d}\sum_{j=1}^d\|\theta_j\|_\infty^2\Big)^{1/2}. \tag{113}
\]
Set $F(\theta,\hat\alpha) = -\beta X^\top(X\theta - y) + s(\theta,\hat\alpha)$ so that by definition, $\theta^t_j = \theta^0_j + \int_0^t e_j^\top F(\theta^s,\hat\alpha^s)\,ds + \sqrt{2}\,b^t_j$. Hence
\[
\sup_{t\in[0,T]}(\theta^t_j)^2 \le C\Big((\theta^0_j)^2 + \int_0^T(e_j^\top F(\theta^s,\hat\alpha^s))^2\,ds + \|b_j\|_\infty^2\Big),
\]
so
\[
\frac{1}{d}\sum_{j=1}^d\|\theta_j\|_\infty^2 \le C\Big(\frac{1}{d}\sum_{j=1}^d\big((\theta^*_j)^2 + (\theta^0_j)^2\big) + \frac{1}{d}\sup_{t\in[0,T]}\|F(\theta^t,\hat\alpha^t)\|^2 + \frac{1}{d}\sum_{j=1}^d\|b_j\|_\infty^2\Big).
\]
On an almost-sure event $E'$, for all large $n,d$, we have that $d^{-1}\sum_j(\theta^*_j)^2 + d^{-1}\sum_j(\theta^0_j)^2 \le C$ by Assumption 2.2, that $\sup_{t\in[0,T]}d^{-1}\|F(\theta^t,\hat\alpha^t)\|^2 \le C$ by the definition of $F(\cdot)$ together with Assumption 2.2 and Lemma 4.8, and that $d^{-1}\sum_j\|b_j\|_\infty^2 \le C$ by Doob's maximal inequality $\mathbb{P}[\|b_j\|_\infty > x] \le 2e^{-x^2/(2T)}$ and Bernstein's inequality for a sum of independent subexponential random variables [61, Theorem 2.8.1]. Thus, on $E'$,
\[
\frac{1}{d}\sum_{j=1}^d\|\theta_j\|_\infty^2 \le C'. \tag{114}
\]
Now fixing any $\alpha \in (0,1/2)$, define the Hölder semi-norm $\|\theta_j\|_\alpha = \sup_{s,t\in[0,T]}|\theta^t_j - \theta^s_j|/|t-s|^\alpha$. Then, since $\tilde\theta_j$ linearly interpolates $\theta_j$ at the knots $\gamma\mathbb{Z}_+$,
\[
\|\theta_j - \tilde\theta_j\|_\infty \le \gamma^\alpha\|\theta_j\|_\alpha.
\]
We have by definition $\theta^t_j - \theta^s_j = \int_s^t e_j^\top F(\theta^r,\hat\alpha^r)\,dr + \sqrt{2}(b^t_j - b^s_j)$, so by Hölder's inequality,
\[
|\theta^t_j - \theta^s_j| \le |t-s|^\alpha\Big(\int_s^t\big|e_j^\top F(\theta^r,\hat\alpha^r)\big|^{\frac{1}{1-\alpha}}\,dr\Big)^{1-\alpha} + \sqrt{2}|b^t_j - b^s_j| \le C|t-s|^\alpha\Big(\Big(\int_0^T(e_j^\top F(\theta^r,\hat\alpha^r))^2\,dr\Big)^{1/2} + \|b_j\|_\alpha\Big)
\]
and hence
\[
\frac{1}{d}\sum_{j=1}^d\|\theta_j\|_\alpha^2 \le C\Big(\frac{1}{d}\sup_{t\in[0,T]}\|F(\theta^t,\hat\alpha^t)\|^2 + \frac{1}{d}\sum_{j=1}^d\|b_j\|_\alpha^2\Big).
\]
On an almost-sure event $\mathcal{E}''$, for all large $n,d$, we have $\sup_{t\in[0,T]}d^{-1}\|F(\theta^t,\widehat\alpha^t)\|^2\le C$ as above, and $d^{-1}\sum_j\|b_j\|_\alpha^2\le C$ by the tail bound $\mathbb{P}[\|b_j\|_\alpha>C+x]\le e^{-cx^2}$ for some $C,c>0$ (see e.g. [62, Theorem 5.32, Example 5.37]) and Bernstein's inequality. Thus on $\mathcal{E}''$,
\[
\frac{1}{d}\sum_{j=1}^d\|\theta_j-\tilde\theta_j\|_\infty^2\le C\gamma^{2\alpha}. \tag{115}
\]
Applying (114) and (115) to (113), on $\mathcal{E}'\cap\mathcal{E}''$, $\limsup_{n,d\to\infty}(II)<C\gamma^\alpha$.

To bound $(III)$, similarly we have
\[
(III)\le C\big(\mathbb{E}\|\theta-\tilde\theta\|_\infty^2\big)^{1/2}\big(1+\mathbb{E}\|\theta\|_\infty^2\big)^{1/2}. \tag{116}
\]
By definition
\[
\theta^t=\theta^0+\int_0^t\Big[-\delta\beta(\theta^s-\theta^*)+s(\theta^s,\alpha^s)+\int_0^s R_\eta(s,s')(\theta^{s'}-\theta^*)\,ds'+u^s\Big]ds+\sqrt{2}\,b^t.
\]
Hence, applying $|s(\theta^s,\alpha^s)|\le C(1+|\theta^s|+\|\alpha^s\|)$ by Assumption 2.2 and uniform boundedness of the continuous functions $\alpha^s$ and $R_\eta(s,s')$ over $[0,T]$,
\[
(\theta^t)^2\le C\Big(1+(\theta^0)^2+(\theta^*)^2+\|u\|_\infty^2+\|b\|_\infty^2+\int_0^t\Big(\sup_{r\in[0,s]}(\theta^r)^2\Big)ds\Big).
\]
Then Gronwall's lemma gives
\[
\sup_{t\in[0,T]}(\theta^t)^2\le C\Big(1+(\theta^0)^2+(\theta^*)^2+\|u\|_\infty^2+\|b\|_\infty^2\Big).
\]
We have $\mathbb{E}(\theta^0)^2,\mathbb{E}(\theta^*)^2\le C$ by assumption. Since $\{u^t\}_{t\in[0,T]}$ has covariance $C_\eta(t,s)$ satisfying $|C_\eta(t,s)|\le C|t-s|$ by the condition (32) defining $S(T)_{\mathrm{cont}}$, we have $\mathbb{P}[\|u\|_\infty\ge C+t]\le e^{-ct^2}$ for some constants $C,c>0$ by a standard application of Dudley's inequality [61, Theorem 8.1.6], so $\mathbb{E}\|u\|_\infty^2\le C$. Similarly $\mathbb{E}\|b\|_\infty^2\le C$, so this gives
\[
\mathbb{E}\|\theta\|_\infty^2\le C. \tag{117}
\]
By definition we have also
\[
\theta^t-\theta^s=\int_s^t\Big[-\delta\beta(\theta^r-\theta^*)+s(\theta^r,\alpha^r)+\int_0^r R_\eta(r,r')(\theta^{r'}-\theta^*)\,dr'+u^r\Big]dr+\sqrt{2}(b^t-b^s),
\]
so
\[
|\theta^t-\theta^s|\le C|t-s|^\alpha\Big(\int_s^t\Big(1+|\theta^*|+\sup_{r'\in[0,r]}|\theta^{r'}|+|u^r|\Big)^{\frac{1}{1-\alpha}}dr\Big)^{1-\alpha}+\sqrt{2}|b^t-b^s|\le C'|t-s|^\alpha\big(1+\|\theta\|_\infty+\|u\|_\infty+\|b\|_\alpha\big).
\]
Then $\mathbb{E}\|\theta\|_\alpha^2\le C(1+\mathbb{E}\|\theta\|_\infty^2+\mathbb{E}\|u\|_\infty^2+\mathbb{E}\|b\|_\alpha^2)\le C'$, so
\[
\mathbb{E}\|\theta-\tilde\theta\|_\infty^2\le\gamma^{2\alpha}\,\mathbb{E}\|\theta\|_\alpha^2\le C'\gamma^{2\alpha}. \tag{118}
\]
Applying (117) and (118) to (116) shows $(III)\le C\gamma^\alpha$.

Applying these bounds for $(I)$, $(II)$, and $(III)$ to take the limit $n,d\to\infty$ followed by $\gamma\to0$ in (112), this shows that on the almost-sure event $\mathcal{E}\cap\mathcal{E}'\cap\mathcal{E}''$ (which does not depend on $f$), for every pseudo-Lipschitz function $f:\mathbb{R}\times C([0,T])\to\mathbb{R}$,
\[
\lim_{n,d\to\infty}\frac{1}{d}\sum_{j=1}^d f(\theta^*_j,\{\theta^t_j\}_{t\in[0,T]})=\mathbb{E}[f(\theta^*,\{\theta^t\}_{t\in[0,T]})].
\]
This implies on $\mathcal{E}\cap\mathcal{E}'\cap\mathcal{E}''$ that
\[
\frac{1}{d}\sum_{j=1}^d\delta_{(\theta^*_j,\{\theta^t_j\}_{t\in[0,T]})}\xrightarrow{W_2}P(\theta^*,\{\theta^t\}_{t\in[0,T]}).
\]
For the convergence for $\eta$, we note that $\eta^t_i=e_i^\top X\theta^t=e_i^\top X\theta^0+\int_0^t e_i^\top XF(\theta^s,\widehat\alpha^s)\,ds+\sqrt{2}\,e_i^\top Xb^t$. Then applying similar arguments as above, fixing any $\alpha\in(0,1/2)$, on an almost-sure event we have for all large $n,d$ that
\[
\frac{1}{n}\sum_{i=1}^n\|\eta_i\|_\infty^2\le C,\qquad \frac{1}{n}\sum_{i=1}^n\|\eta_i-\tilde\eta_i\|_\infty^2\le C\gamma^{2\alpha}.
\]
For the DMFT process we have $\eta^t=-\beta\int_0^t R_\theta(t,s)(\eta^s+w^*-\varepsilon)\,ds-w^t$, hence
\[
(\eta^t)^2\le C\Big((w^*)^2+(w^t)^2+\varepsilon^2+\int_0^t(\eta^s)^2\,ds\Big)
\]
so Gronwall's lemma and a similar argument as above gives $\mathbb{E}\|\eta\|_\infty^2\le C$. Also
\[
|\eta^t-\eta^s|\le C|t-s|^\alpha\Big(\int_s^t\big(|w^*|+|\varepsilon|+|\eta^r|\big)^{\frac{1}{1-\alpha}}dr\Big)^{1-\alpha}+|w^t-w^s|\le C'|t-s|^\alpha\big(|w^*|+|\varepsilon|+\|\eta\|_\infty+\|w\|_\alpha\big)
\]
so
\[
\mathbb{E}\|\eta\|_\alpha^2\le C\Big(\mathbb{E}(w^*)^2+\mathbb{E}\varepsilon^2+\mathbb{E}\|\eta\|_\infty^2+\mathbb{E}\|w\|_\alpha^2\Big).
\]
https://arxiv.org/abs/2504.15556v1
We recall that $\{w^t\}_{t\in[0,T]}$ has covariance satisfying $|C_\theta(t,s)|\le C|t-s|$, so $\mathbb{P}[\|w\|_\alpha>C+x]\le 2e^{-cx^2}$ for some $C,c>0$ (cf. [62, Theorem 5.32]). Thus $\mathbb{E}\|\eta\|_\alpha^2\le C$. Applying these bounds, the same arguments as above show the almost-sure convergence
\[
\frac{1}{n}\sum_{i=1}^n\delta_{(\eta^*_i,\varepsilon_i,\{\eta^t_i\}_{t\in[0,T]})}\xrightarrow{W_2}P(\eta^*,\varepsilon,\{\eta^t\}_{t\in[0,T]})
\]
where we recall that $\eta^*$ on the right side is, by definition, $\eta^*=-w^*$.

5 Convergence of the linear response

In this section, we prove Theorem 2.8. We assume throughout Assumptions 2.1, 2.2, 2.3 and the H\"older-continuity conditions of Theorem 2.8. We first state and prove in Section 5.1 an analogue of Theorem 2.8 for the discrete-time dynamics introduced previously in Section 4.1, and then analyze the discretization error and complete the proof of Theorem 2.8 in Section 5.2.

5.1 Convergence of response functions for discrete dynamics

We recall the discrete, integer-indexed dynamics (55–56), which we reproduce here as
\[
\begin{aligned}
\theta^{t+1}_\gamma&=\theta^t_\gamma-\gamma\Big[\beta X^\top(X\theta^t_\gamma-y)-s(\theta^t_\gamma,\widehat\alpha^t_\gamma)\Big]+\sqrt{2}(b^{t+1}_\gamma-b^t_\gamma),\qquad \eta^t_\gamma=X\theta^t_\gamma,\\
\widehat\alpha^{t+1}_\gamma&=\widehat\alpha^t_\gamma+\gamma\cdot G\big(\widehat\alpha^t_\gamma,\widehat P(\theta^t_\gamma)\big),\qquad \widehat P(\theta)=\frac{1}{d}\sum_{j=1}^d\delta_{\theta_j}.
\end{aligned} \tag{119}
\]
We first show an analogue of Theorem 2.8 for these discrete dynamics.
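The recursion (119) is a plain Euler–Maruyama loop and is straightforward to simulate. The following is a minimal numerical sketch under illustrative assumptions not taken from the text: $s(\theta,\widehat\alpha)=-\theta$ (so the $\widehat\alpha$-update can be dropped), and toy values of $n,d,\beta,\gamma$.

```python
import numpy as np

# Minimal sketch of one run of the discrete-time dynamics (119).
# Illustrative assumptions (not from the text): s(theta, alpha) = -theta,
# so the hat-alpha update can be dropped; n, d, beta, gamma are toy values.
rng = np.random.default_rng(0)
n, d, beta, gamma, n_steps = 50, 100, 0.5, 0.01, 200

X = rng.standard_normal((n, d)) / np.sqrt(d)       # entries of variance 1/d
theta_star = rng.standard_normal(d)
y = X @ theta_star + 0.1 * rng.standard_normal(n)  # y = X theta* + eps, cf. (123)

theta = np.zeros(d)
for t in range(n_steps):
    # theta^{t+1} = theta^t - gamma*[beta X^T(X theta^t - y) - s(theta^t)]
    #               + sqrt(2)(b^{t+1} - b^t)
    bracket = beta * X.T @ (X @ theta - y) + theta  # "- s(theta)" = "+ theta" here
    theta = theta - gamma * bracket + np.sqrt(2 * gamma) * rng.standard_normal(d)

eta = X @ theta  # eta^t = X theta^t as in (119)
```

Since $b^{t+1}_\gamma-b^t_\gamma\sim\mathcal N(0,\gamma I)$, each step adds $\sqrt{2\gamma}$ times a standard Gaussian vector.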
For any $s\in\mathbb{Z}_+$ and any $j\in[d]$ or $i\in[n]$, letting $e_j$ denote the $j$th standard basis vector in either $\mathbb{R}^d$ or $\mathbb{R}^n$, define two sets of perturbed dynamics
\[
\begin{aligned}
\theta^{t+1,(s,j),\varepsilon}_\gamma&=\theta^{t,(s,j),\varepsilon}_\gamma-\gamma\Big[\beta X^\top(X\theta^{t,(s,j),\varepsilon}_\gamma-y)-s(\theta^{t,(s,j),\varepsilon}_\gamma,\widehat\alpha^{t,(s,j),\varepsilon}_\gamma)-\varepsilon e_j\mathbf{1}_{t=s}\Big]+\sqrt{2}(b^{t+1}_\gamma-b^t_\gamma)\\
\widehat\alpha^{t+1,(s,j),\varepsilon}_\gamma&=\widehat\alpha^{t,(s,j),\varepsilon}_\gamma+\gamma\cdot G\big(\widehat\alpha^{t,(s,j),\varepsilon}_\gamma,\widehat P(\theta^{t,(s,j),\varepsilon}_\gamma)\big)
\end{aligned} \tag{120}
\]
and
\[
\begin{aligned}
\theta^{t+1,[s,i],\varepsilon}_\gamma&=\theta^{t,[s,i],\varepsilon}_\gamma-\gamma\Big[\beta X^\top(X\theta^{t,[s,i],\varepsilon}_\gamma-y)-s(\theta^{t,[s,i],\varepsilon}_\gamma,\widehat\alpha^{t,[s,i],\varepsilon}_\gamma)-\varepsilon X^\top e_i\mathbf{1}_{t=s}\Big]+\sqrt{2}(b^{t+1}_\gamma-b^t_\gamma)\\
\widehat\alpha^{t+1,[s,i],\varepsilon}_\gamma&=\widehat\alpha^{t,[s,i],\varepsilon}_\gamma+\gamma\cdot G\big(\widehat\alpha^{t,[s,i],\varepsilon}_\gamma,\widehat P(\theta^{t,[s,i],\varepsilon}_\gamma)\big)
\end{aligned} \tag{121}
\]
with the same initial conditions as (119). We set
\[
\eta^{t,[s,i],\varepsilon}_\gamma=X\theta^{t,[s,i],\varepsilon}_\gamma. \tag{122}
\]
Comparing with (119), these dynamics have a perturbation to the drift in the direction of $e_j$ or $X^\top e_i$ at the single time $s\in\mathbb{Z}_+$. Let $R^\gamma_{\boldsymbol\theta}(t,s)\in\mathbb{R}^{d\times d}$ and $R^\gamma_{\boldsymbol\eta}(t,s)\in\mathbb{R}^{n\times n}$ be matrices of response functions defined by
\[
(R^\gamma_{\boldsymbol\theta}(t,s))_{i,j}=\partial_\varepsilon|_{\varepsilon=0}\langle\theta^{t,(s,j),\varepsilon}_{\gamma,i}\rangle,\qquad (R^\gamma_{\boldsymbol\eta}(t,s))_{i,j}=\delta\beta^2\cdot\partial_\varepsilon|_{\varepsilon=0}\langle\eta^{t,[s,j],\varepsilon}_{\gamma,i}\rangle,
\]
where $\langle\cdot\rangle$ denotes the expectation over only the randomness of $\{b^t_\gamma\}_{t\in\mathbb{Z}_+}$, i.e. conditional on $(X,\theta^*,\varepsilon)$ and on the initial conditions $(\theta^{0,(s,j),\varepsilon}_\gamma,\widehat\alpha^{0,(s,j),\varepsilon}_\gamma)=(\theta^{0,[s,i],\varepsilon}_\gamma,\widehat\alpha^{0,[s,i],\varepsilon}_\gamma)=(\theta^0,\widehat\alpha^0)\in\mathbb{R}^{d+K}$. Recall also the discrete-time DMFT response functions $R^\gamma_\theta(t,s),R^\gamma_\eta(t,s)$ defined by (60) and (64). The goal of this section is to prove the following analogue of the convergence statements for the response functions in Theorem 2.8.

Lemma 5.1. For any fixed $s,t\in\mathbb{Z}_+$ with $s<t$, almost surely
\[
\lim_{n,d\to\infty}\frac{1}{d}\operatorname{Tr}R^\gamma_{\boldsymbol\theta}(t,s)=R^\gamma_\theta(t,s),\qquad \lim_{n,d\to\infty}\frac{1}{n}\operatorname{Tr}R^\gamma_{\boldsymbol\eta}(t,s)=R^\gamma_\eta(t,s).
\]
To ease notation, in the remainder of this section we will drop all subscripts $\gamma$ and write simply $\theta^t=\theta^t_\gamma$, $\widehat\alpha^t=\widehat\alpha^t_\gamma$, $b^t=b^t_\gamma$, etc. to refer to the above discrete-time processes.
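The diagonal averages $\frac1d\sum_j\partial_\varepsilon|_{\varepsilon=0}\theta^{t,(s,j),\varepsilon}_j$ appearing in Lemma 5.1 can be checked numerically by a finite difference with common random numbers, i.e. the same Brownian increments for the perturbed and unperturbed runs. The sketch below again uses the illustrative linear choice $s(\theta,\widehat\alpha)=-\theta$ (not from the text), for which the dynamics are linear in $\theta$, the average $\langle\cdot\rangle$ over the noise is unnecessary, and Lemma 5.4's formula specializes to $\gamma\,d^{-1}\operatorname{Tr}(\Omega^{t-s-1})$ with $\Omega=I-\gamma(\beta X^\top X+I)$; all parameter values are arbitrary.

```python
import numpy as np

# Finite-difference estimate of the averaged response (1/d) Tr R_theta(t,s),
# using common random numbers: perturbed and unperturbed runs share the same
# Brownian increments db. Illustrative assumption (not from the text):
# s(theta, alpha) = -theta, so the dynamics are linear, the response is
# deterministic, and it equals gamma * (1/d) Tr(Omega^{t-s-1}) with
# Omega = I - gamma*(beta*X^T X + I), a special case of Lemma 5.4's formula.
rng = np.random.default_rng(1)
n, d, beta, gamma = 30, 60, 0.5, 0.01
s_time, t_time, eps = 3, 10, 1e-6

X = rng.standard_normal((n, d)) / np.sqrt(d)
y = rng.standard_normal(n)
db = np.sqrt(2 * gamma) * rng.standard_normal((t_time, d))  # shared noise

def run(j=None):
    """Run the dynamics; add eps*e_j to the drift at the single time s_time, cf. (120)."""
    theta = np.zeros(d)
    for t in range(t_time):
        drift = -(beta * X.T @ (X @ theta - y) + theta)  # = -beta X^T(X theta-y) + s(theta)
        if j is not None and t == s_time:
            e = np.zeros(d)
            e[j] = eps
            drift = drift + e
        theta = theta + gamma * drift + db[t]
    return theta

base = run()
# (1/d) * sum over j of d/d(eps) of coordinate j under a perturbation e_j at time s:
trace_fd = sum((run(j)[j] - base[j]) / eps for j in range(d)) / d

Omega = np.eye(d) - gamma * (beta * X.T @ X + np.eye(d))
trace_formula = gamma * np.trace(np.linalg.matrix_power(Omega, t_time - s_time - 1)) / d
```

For this linear drift the finite difference agrees with the trace formula up to floating-point error; for a nonlinear $s(\cdot)$ one would additionally average over the Brownian noise to approximate $\langle\cdot\rangle$.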
We first establish in Section 5.1.1 a set of dynamical cavity estimates, which we will then use to prove Lemma 5.1 in Section 5.1.2.

5.1.1 Dynamical cavity estimates

We introduce the following notations: For any $j\in[d]$ and $i\in[n]$, denote
\[
\theta^t=(\theta^t_j,\theta^t_{-j})\in\mathbb{R}^d,\quad \theta^t_j\in\mathbb{R},\ \theta^t_{-j}\in\mathbb{R}^{d-1},\qquad \eta^t=(\eta^t_i,\eta^t_{-i})\in\mathbb{R}^n,\quad \eta^t_i\in\mathbb{R},\ \eta^t_{-i}\in\mathbb{R}^{n-1},
\]
where $\{\theta^t,\eta^t\}_{t\in\mathbb{Z}_+}$ are the components of the discrete-time process (119), and $\theta^t_{-j}$ are the coordinates of $\theta^t$ excluding the $j$th (and similarly for $\eta^t$). We consider the following leave-one-out versions of (119): For $j\in[d]$, let
\[
X^{(j)}=(X_{ik}\mathbf{1}_{k\neq j})_{i,k}\in\mathbb{R}^{n\times d},\qquad y^{(j)}=X^{(j)}\theta^*+\varepsilon\in\mathbb{R}^n \tag{123}
\]
where $X^{(j)}$ denotes $X$ with $j$th column set to 0. Define
\[
\begin{aligned}
\theta^{t+1,(j)}&=\theta^{t,(j)}-\gamma\Big[\beta(X^{(j)})^\top(X^{(j)}\theta^{t,(j)}-y^{(j)})-s(\theta^{t,(j)},\widehat\alpha^{t,(j)})\Big]+\sqrt{2}(b^{t+1}-b^t)\in\mathbb{R}^d\\
\widehat\alpha^{t+1,(j)}&=\widehat\alpha^{t,(j)}+\gamma\cdot G\big(\widehat\alpha^{t,(j)},\widehat P(\theta^{t,(j)})\big)
\end{aligned} \tag{124}
\]
with initialization $(\theta^{0,(j)},\widehat\alpha^{0,(j)})=(\theta^0,\widehat\alpha^0)$, and write as above
\[
\theta^{t,(j)}=(\theta^{t,(j)}_j,\theta^{t,(j)}_{-j})\in\mathbb{R}^d,\quad \theta^{t,(j)}_j\in\mathbb{R},\ \theta^{t,(j)}_{-j}\in\mathbb{R}^{d-1}.
\]
We note that for convenience of the proof, we define $\theta^{t,(j)}$ to be of the same dimension as $\theta$, where one may check from (124) that the dynamics of $\theta^{t,(j)}_{-j}$ do not involve $\theta^{t,(j)}_j$. Similarly, for $i\in[n]$, let
\[
X^{[i]}=(X_{kj}\mathbf{1}_{k\neq i})_{k,j}\in\mathbb{R}^{n\times d},\qquad y^{[i]}=X^{[i]}\theta^*+\varepsilon,
\]
where $X^{[i]}$ sets the $i$th row of $X$ to 0. Define
\[
\begin{aligned}
\theta^{t+1,[i]}&=\theta^{t,[i]}-\gamma\Big[\beta(X^{[i]})^\top(X^{[i]}\theta^{t,[i]}-y^{[i]})-s(\theta^{t,[i]},\widehat\alpha^{t,[i]})\Big]+\sqrt{2}(b^{t+1}-b^t)\in\mathbb{R}^d\\
\widehat\alpha^{t+1,[i]}&=\widehat\alpha^{t,[i]}+\gamma\cdot G\big(\widehat\alpha^{t,[i]},\widehat P(\theta^{t,[i]})\big)
\end{aligned} \tag{125}
\]
also with initialization $(\theta^{0,[i]},\widehat\alpha^{0,[i]})=(\theta^0,\widehat\alpha^0)$, and write as above
\[
\eta^{t,[i]}=(\eta^{t,[i]}_i,\eta^{t,[i]}_{-i})\in\mathbb{R}^n,\quad \eta^{t,[i]}_i\in\mathbb{R},\ \eta^{t,[i]}_{-i}\in\mathbb{R}^{n-1}.
\]
By construction, $\{\theta^{t,(j)},\widehat\alpha^{t,(j)}\}$ is independent of the $j$th column of $X$, and $\{\theta^{t,[i]},\widehat\alpha^{t,[i]}\}$ is independent of the $i$th row of $X$. The following lemma gives $\ell_2$ estimates on the original dynamics (119) as well as on its difference with the cavity versions (124) and (125).

Lemma 5.2. Fix any $T>0$. Then there exists a constant $C>0$ (depending on $T$ but not $\gamma$) such that for any $\gamma>0$, almost surely for all large $n,d$, we have for all $0\le t\le T/\gamma$ and all $j\in[d]$, $i\in[n]$ that
\[
\frac{\|\theta^t\|}{\sqrt d}+\|\widehat\alpha^t\|\le C,\qquad \frac{\|\theta^{t,(j)}\|}{\sqrt d}+\|\widehat\alpha^{t,(j)}\|\le C,\qquad \frac{\|\theta^{t,[i]}\|}{\sqrt d}+\|\widehat\alpha^{t,[i]}\|\le C, \tag{126}
\]
\[
|\theta^{t,(j)}_j|\le C\Big(1+|\theta^0_j|+\max_{t\in[0,T/\gamma]}|b^t_j|\Big), \tag{127}
\]
\[
\|\theta^{t,(j)}-\theta^t\|+\sqrt d\,\|\widehat\alpha^{t,(j)}-\widehat\alpha^t\|\le C\Big(|\theta^0_j|+|\theta^*_j|+\max_{t\in[0,T/\gamma]}|b^t_j|+\sqrt{\log d}\Big), \tag{128}
\]
\[
\|\theta^{t,[i]}-\theta^t\|+\sqrt d\,\|\widehat\alpha^{t,[i]}-\widehat\alpha^t\|\le C\big(|\varepsilon_i|+\sqrt{\log d}\big). \tag{129}
\]
Proof. Fixing a constant $C_0>0$ large enough (depending on $T$) and any $\gamma>0$, define the event
\[
\mathcal{E}=\Big\{\|X\|_{\mathrm{op}}\le C_0,\ \|\widehat\alpha^0\|\le C_0,\ \|\theta^*\|_2,\|\theta^0\|_2\le C_0\sqrt d,\ \|\varepsilon\|_2\le C_0\sqrt d,\ \max_{t\in[0,T/\gamma]}\|b^t\|_2\le C_0\sqrt d\ \text{for all large } n,d\Big\}.
\]
Note that we have $b^t\sim\mathcal{N}(0,t\gamma I)$, so $\mathbb{P}[\|b^t\|_2>C_0\sqrt{t\gamma d}]\le e^{-cd}$ for some constants $C_0,c>0$ and all large $n,d$ by a chi-squared tail bound. Then, taking a union bound over all $t\in[0,T/\gamma]\cap\mathbb{Z}_+$ and applying the conditions of Assumption 2.1 together with the Borel–Cantelli lemma, we see that this event $\mathcal{E}$ holds almost surely.

We restrict to the event $\mathcal{E}$. Let $C,C'>0$ denote constants depending on $C_0,T$ (but not on $\gamma$) and changing from instance to instance. For (126), we have by definition of $\{\theta^t,\widehat\alpha^t\}$ in (119) that
\[
\theta^t=\theta^0-\gamma\sum_{s=0}^{t-1}\Big[\beta X^\top(X\theta^s-y)-s(\theta^s,\widehat\alpha^s)\Big]+\sqrt{2}\,b^t,\qquad \widehat\alpha^t=\widehat\alpha^0+\gamma\sum_{s=0}^{t-1}G\big(\widehat\alpha^s,\widehat P(\theta^s)\big).
\]
Applying the bounds for $s(\cdot)$ and $G(\cdot)$ in Assumptions 2.2 and 2.3 and the conditions of $\mathcal{E}$,
\[
\|\theta^t\|\le C\gamma\sum_{s=0}^{t-1}\big(\|\theta^s\|+\sqrt d\,\|\widehat\alpha^s\|+\sqrt d\big)+\|\theta^0\|+\sqrt 2\,\|b^t\|,\qquad \|\widehat\alpha^t\|\le C\gamma\sum_{s=0}^{t-1}\big(\|\widehat\alpha^s\|+\|\theta^s\|/\sqrt d+1\big)+\|\widehat\alpha^0\|,
\]
so
\[
1+\frac{\|\theta^t\|}{\sqrt d}+\|\widehat\alpha^t\|\le C\gamma\sum_{s=0}^{t-1}\Big(1+\frac{\|\theta^s\|}{\sqrt d}+\|\widehat\alpha^s\|\Big)+1+\frac{\|\theta^0\|}{\sqrt d}+\|\widehat\alpha^0\|+\sqrt{\frac{2}{d}}\,\|b^t\|.
\]
Iterating this bound over $t$ shows
\[
\frac{\|\theta^t\|}{\sqrt d}+\|\widehat\alpha^t\|\le(1+C\gamma)^t\bigg[1+\frac{\|\theta^0\|}{\sqrt d}+\|\widehat\alpha^0\|+\sqrt{\frac{2}{d}}\max_{s\in[0,t]}\|b^s\|\bigg]\le C',
\]
the last bound holding for $t\le T/\gamma$ and on $\mathcal{E}$. This establishes the first claim of (126). The other two claims of (126) for the cavity dynamics hold by the same argument, noting that on $\mathcal{E}$ we have also $\|X^{(j)}\|_{\mathrm{op}},\|X^{[i]}\|_{\mathrm{op}}\le C_0$ for all $j\in[d]$ and $i\in[n]$.

For (127), we have by definition of (124) that
\[
\theta^{t+1,(j)}_j=\theta^{t,(j)}_j+\gamma\,s(\theta^{t,(j)}_j,\widehat\alpha^{t,(j)})+\sqrt{2}(b^{t+1}_j-b^t_j).
\]
Then
\[
\theta^{t,(j)}_j=\theta^0_j+\gamma\sum_{s=0}^{t-1}s(\theta^{s,(j)}_j,\widehat\alpha^{s,(j)})+\sqrt 2\,b^t_j,
\]
so applying the bound for $s(\cdot)$ in Assumption 2.2 and the bound $\|\widehat\alpha^{t,(j)}\|\le C$ already shown in (126),
\[
1+|\theta^{t,(j)}_j|\le C\gamma\sum_{s=0}^{t-1}\big(1+|\theta^{s,(j)}_j|\big)+1+|\theta^0_j|+\sqrt 2\,|b^t_j|.
\]
Iterating this bound gives, for all $t\le T/\gamma$,
\[
1+|\theta^{t,(j)}_j|\le(1+C\gamma)^t\Big(1+|\theta^0_j|+\sqrt 2\max_{s\in[0,t]}|b^s_j|\Big)\le C'\Big(1+|\theta^0_j|+\max_{t\le T/\gamma}|b^t_j|\Big)
\]
which shows (127).

For (128), by definition,
\[
\theta^{t+1}-\theta^{t+1,(j)}=\big(I-\gamma\beta X^\top X\big)\theta^t-\big(I-\gamma\beta X^{(j)\top}X^{(j)}\big)\theta^{t,(j)}+\gamma\beta\big(X^\top y-X^{(j)\top}y^{(j)}\big)+\gamma\big(s(\theta^t,\widehat\alpha^t)-s(\theta^{t,(j)},\widehat\alpha^{t,(j)})\big).
\]
Then, applying the Lipschitz bound for $s(\cdot)$ in Assumption 2.2 and the conditions defining $\mathcal{E}$,
\[
\|\theta^{t+1}-\theta^{t+1,(j)}\|\le(1+C\gamma)\|\theta^t-\theta^{t,(j)}\|+C\gamma\sqrt d\,\|\widehat\alpha^t-\widehat\alpha^{t,(j)}\|+C\gamma\underbrace{\big(\|(X^{(j)\top}X^{(j)}-X^\top X)\theta^{t,(j)}\|+\|X^\top y-X^{(j)\top}y^{(j)}\|\big)}_{:=\Delta_{t,j}}. \tag{130}
\]
Similarly, by the Lipschitz bound for $G(\cdot)$ in Assumption 2.3,
\[
\|\widehat\alpha^{t+1}-\widehat\alpha^{t+1,(j)}\|\le(1+C\gamma)\|\widehat\alpha^t-\widehat\alpha^{t,(j)}\|+C\gamma\|\theta^t-\theta^{t,(j)}\|/\sqrt d.
\]
Combining the above two inequalities yields
\[
\|\theta^{t+1}-\theta^{t+1,(j)}\|+\sqrt d\,\|\widehat\alpha^{t+1}-\widehat\alpha^{t+1,(j)}\|\le(1+C\gamma)\big(\|\theta^t-\theta^{t,(j)}\|+\sqrt d\,\|\widehat\alpha^t-\widehat\alpha^{t,(j)}\|\big)+C\gamma\Delta_{t,j}, \tag{131}
\]
and hence iterating this bound and using $(\theta^{0,(j)},\widehat\alpha^{0,(j)})=(\theta^0,\widehat\alpha^0)$, for any $t\le T/\gamma$,
\[
\|\theta^t-\theta^{t,(j)}\|+\sqrt d\,\|\widehat\alpha^t-\widehat\alpha^{t,(j)}\|\le\sum_{s=0}^{t-1}(1+C\gamma)^s\,C\gamma\,\max_{s=0}^{t-1}\Delta_{s,j}\le C'\max_{s=0}^{t-1}\Delta_{s,j}.
\]
Let us now bound $\Delta_{t,j}$. Writing $x_j\in\mathbb{R}^n$ for the $j$th column of $X$, we have $X^{(j)}=X-x_je_j^\top$, hence $X^\top X-X^{(j)\top}X^{(j)}=X^\top x_je_j^\top+e_jx_j^\top X-e_jx_j^\top x_je_j^\top$, and
\[
\|(X^{(j)\top}X^{(j)}-X^\top X)\theta^{t,(j)}\|\le\|X^\top x_j\|\,|\theta^{t,(j)}_j|+|x_j^\top X\theta^{t,(j)}|+\|X\|_{\mathrm{op}}^2|\theta^{t,(j)}_j|\le C\|X\|_{\mathrm{op}}^2|\theta^{t,(j)}_j|+|x_j^\top X^{(j)}\theta^{t,(j)}|. \tag{132}
\]
Similarly, we have $X^\top y-X^{(j)\top}y^{(j)}=(X^\top X-X^{(j)\top}X^{(j)})\theta^*+(X-X^{(j)})^\top\varepsilon$, so
\[
\|X^\top y-X^{(j)\top}y^{(j)}\|\le C\|X\|_{\mathrm{op}}^2|\theta^*_j|+|x_j^\top X^{(j)}\theta^*|+|x_j^\top\varepsilon|. \tag{133}
\]
By (127), we have $|\theta^{t,(j)}_j|\le C(1+|\theta^0_j|+\sup_{t\in[0,T/\gamma]}|b^t_j|)$ for $t\le\lfloor T/\gamma\rfloor$. Applying this in the above two bounds yields, on $\mathcal{E}$,
\[
\Delta_{t,j}\le C\Big[1+|\theta^0_j|+|\theta^*_j|+\sup_{t\in[0,T/\gamma]}|b^t_j|+|x_j^\top X^{(j)}\theta^*|+|x_j^\top X^{(j)}\theta^{t,(j)}|+|x_j^\top\varepsilon|\Big].
\]
Define the additional event $\mathcal{E}'$ where
\[
\sup_{j\in[d]}\max_{t\in[0,T/\gamma]}|x_j^\top X^{(j)}\theta^*|+|x_j^\top X^{(j)}\theta^{t,(j)}|+|x_j^\top\varepsilon|\le C_0\sqrt{\log d}\quad\text{for all large } n,d.
\]
Then the desired bound (128) holds for all large $n,d$ on the event $\mathcal{E}\cap\mathcal{E}'$, so it remains to show that $\mathcal{E}'$ holds almost surely for sufficiently large $C_0>0$. For each $j\in[d]$ and $t\in[0,T/\gamma]$, by independence between $x_j$ and $X^{(j)},\theta^{t,(j)}$, we have that $x_j^\top X^{(j)}\theta^{t,(j)}$ is subgaussian conditional on $X^{(j)}\theta^{t,(j)}$, so
\[
\mathbb{P}\bigg[|x_j^\top X^{(j)}\theta^{t,(j)}|\ge C\|X^{(j)}\theta^{t,(j)}\|\sqrt{\frac{\log d}{d}}\bigg]\le e^{-cd}
\]
for some constants $C,c>0$ (conditional on $X^{(j)}\theta^{t,(j)}$, and hence also unconditionally). Then, taking a union bound over $j\in[d]$ and $t\in[0,T/\gamma]\cap\mathbb{Z}_+$ and applying the Borel–Cantelli lemma, almost surely for all large $n,d$,
\[
\sup_{j,t}|x_j^\top X^{(j)}\theta^{t,(j)}|\le C\sup_{j,t}\|X^{(j)}\theta^{t,(j)}\|\sqrt{\frac{\log d}{d}}.
\]
On the event $\mathcal{E}$ we have $\sup_{j,t}\|X^{(j)}\theta^{t,(j)}\|\le C\sqrt d$ by (126) already shown, so $\sup_{j,t}|x_j^\top X^{(j)}\theta^{t,(j)}|\le C'\sqrt{\log d}$ a.s. for all large $n,d$. The terms $|x_j^\top X^{(j)}\theta^*|$ and $|x_j^\top\varepsilon|$ are bounded similarly, verifying that $\mathcal{E}'$ holds almost surely as claimed, and concluding the proof of (128).
For (129), similar to above, we have
\[
\begin{aligned}
\|\theta^{t+1}-\theta^{t+1,[i]}\|&\le(1+C\gamma)\|\theta^t-\theta^{t,[i]}\|+C\gamma\sqrt d\,\|\widehat\alpha^t-\widehat\alpha^{t,[i]}\|+C\gamma\underbrace{\big(\|(X^{[i]\top}X^{[i]}-X^\top X)\theta^{t,[i]}\|+\|X^\top y-X^{[i]\top}y^{[i]}\|\big)}_{=:\Delta_{t,i}},\\
\|\widehat\alpha^{t+1}-\widehat\alpha^{t+1,[i]}\|&\le\|\widehat\alpha^t-\widehat\alpha^{t,[i]}\|+C\gamma\big(\|\widehat\alpha^t-\widehat\alpha^{t,[i]}\|+\|\theta^t-\theta^{t,[i]}\|/\sqrt d\big),
\end{aligned}
\]
which implies
\[
\|\theta^t-\theta^{t,[i]}\|+\sqrt d\,\|\widehat\alpha^t-\widehat\alpha^{t,[i]}\|\le C\max_{s=0}^{t-1}\Delta_{s,i}.
\]
Using $X=X^{[i]}+e_ix_i^\top$, where $x_i\in\mathbb{R}^d$ now denotes (the transpose of) the $i$th row of $X$, we have $X^\top X=X^{[i]\top}X^{[i]}+x_ix_i^\top$ and $X^\top y-X^{[i]\top}y^{[i]}=x_i(x_i^\top\theta^*+\varepsilon_i)$, so
\[
\Delta_{t,i}\le\|X\|_{\mathrm{op}}\big(|x_i^\top\theta^{t,[i]}|+|x_i^\top\theta^*|+|\varepsilon_i|\big).
\]
Using independence between $x_i$ and $\theta^{t,[i]},\theta^*$, we obtain as above that on an almost sure event $\mathcal{E}'$, for all large $n,d$ we have $\Delta_{t,i}\le C(|\varepsilon_i|+\sqrt{\log d})$ for all $t\in[0,T/\gamma]\cap\mathbb{Z}_+$ and $i\in[n]$, showing (129).

5.1.2 Proof of Lemma 5.1

Lemma 5.3. For any $T>0$, on the event where $\|X\|_{\mathrm{op}}\le C_0$, there exists a constant $C>0$ (depending on $T,C_0$ but not $\gamma$) such that
\[
\max_{0\le s\le t\le T/\gamma}\max_{j\in[d]}\Big[\|\partial_\varepsilon|_{\varepsilon=0}\theta^{t,(s,j),\varepsilon}\|+\sqrt d\,\|\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t,(s,j),\varepsilon}\|\Big]\le C\gamma,
\]
\[
\max_{0\le s\le t\le T/\gamma}\max_{i\in[n]}\Big[\|\partial_\varepsilon|_{\varepsilon=0}\theta^{t,[s,i],\varepsilon}\|+\sqrt d\,\|\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t,[s,i],\varepsilon}\|\Big]\le C\gamma.
\]
Proof. For the first statement, we fix $s,j$, and shorthand $\theta^{t,(s,j),\varepsilon},\widehat\alpha^{t,(s,j),\varepsilon}$ as $\theta^{t,\varepsilon},\widehat\alpha^{t,\varepsilon}$. By definition of the process (120), we have for $t\ge s+1$
\[
\begin{aligned}
\partial_\varepsilon|_{\varepsilon=0}\theta^{t+1,\varepsilon}&=\Big(I-\gamma\beta X^\top X+\gamma\,\mathrm{Diag}(\partial_\theta s(\theta^t,\widehat\alpha^t))\Big)\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}+\gamma\nabla_\alpha s(\theta^t,\widehat\alpha^t)\,\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t,\varepsilon},\\
\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t+1,\varepsilon}&=\Big(I+\gamma\,\mathrm{d}_\alpha G(\widehat\alpha^t,\widehat P(\theta^t))\Big)\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t,\varepsilon}+\gamma\,\mathrm{d}_\theta G(\widehat\alpha^t,\widehat P(\theta^t))\,\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}.
\end{aligned}
\]
Then applying the conditions for $s(\cdot)$ and $G(\cdot)$ in Assumptions 2.2 and 2.3 and $\|X\|_{\mathrm{op}}\le C_0$,
\[
\|\partial_\varepsilon|_{\varepsilon=0}\theta^{t+1,\varepsilon}\|\le(1+C\gamma)\|\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}\|+C\gamma\sqrt d\,\|\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t,\varepsilon}\|,
\]
\[
\|\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t+1,\varepsilon}\|\le(1+C\gamma)\|\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t,\varepsilon}\|+C\gamma\|\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}\|/\sqrt d,
\]
where $C>0$ is a constant independent of $\gamma$. Combining and iterating these inequalities yields, for all $t\in[s+1,T/\gamma]$,
\[
\|\partial_\varepsilon|_{\varepsilon=0}\theta^{t+1,\varepsilon}\|+\sqrt d\,\|\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t+1,\varepsilon}\|\le(1+C\gamma)^{t-s}\big(\|\partial_\varepsilon|_{\varepsilon=0}\theta^{s+1,(s,j),\varepsilon}\|+\sqrt d\,\|\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{s+1,(s,j),\varepsilon}\|\big)\le C'\gamma, \tag{134}
\]
using $\partial_\varepsilon|_{\varepsilon=0}\theta^{s+1,(s,j),\varepsilon}=\gamma e_j$ and $\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{s+1,(s,j),\varepsilon}=0$. This holds for all $s\in[0,T/\gamma]$ and $j\in[d]$, showing the first claim. The proof of the second claim is analogous, and omitted for brevity.

Lemma 5.4. Let $\{\theta^t,\widehat\alpha^t\}$ be given by (119). For each $t\in\mathbb{Z}_+$ define the matrix
\[
\Omega^t=I-\gamma\beta X^\top X+\gamma\,\mathrm{Diag}(\partial_\theta s(\theta^t,\widehat\alpha^t))\in\mathbb{R}^{d\times d}. \tag{135}
\]
Then for any fixed $s,t\in[0,T/\gamma]$ with $t\ge s+1$, almost surely
\[
\lim_{n,d\to\infty}\frac{1}{d}\sum_{j=1}^d\partial_\varepsilon|_{\varepsilon=0}\theta^{t,(s,j),\varepsilon}_j-\frac{\gamma}{d}\operatorname{Tr}\big(\Omega^{t-1}\cdots\Omega^{s+1}\big)=0,
\]
\[
\lim_{n,d\to\infty}\frac{1}{n}\sum_{i=1}^n\partial_\varepsilon|_{\varepsilon=0}\eta^{t,[s,i],\varepsilon}_i-\frac{\gamma}{n}\operatorname{Tr}\big(X\Omega^{t-1}\cdots\Omega^{s+1}X^\top\big)=0,
\]
where by convention we set $\Omega^{t-1}\cdots\Omega^{s+1}=I$ for $t=s+1$.
Proof. Let us denote
\[
\nabla_\alpha s(\theta^t,\widehat\alpha^t)=\big(\nabla_\alpha s(\theta^t_i,\widehat\alpha^t)^\top\big)_{i=1}^d\in\mathbb{R}^{d\times K},\qquad r^{t,(s,j)}=\nabla_\alpha s(\theta^t,\widehat\alpha^t)\,\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t,(s,j),\varepsilon}\in\mathbb{R}^d.
\]
Then
\[
\partial_\varepsilon|_{\varepsilon=0}\theta^{t+1,(s,j),\varepsilon}=\underbrace{\Big(I-\gamma\beta X^\top X+\gamma\,\mathrm{Diag}(\partial_\theta s(\theta^t,\widehat\alpha^t))\Big)}_{\Omega^t}\partial_\varepsilon|_{\varepsilon=0}\theta^{t,(s,j),\varepsilon}+\gamma r^{t,(s,j)}.
\]
Iterating this identity with $\partial_\varepsilon|_{\varepsilon=0}\theta^{s+1,(s,j),\varepsilon}=\gamma e_j$ shows
\[
\partial_\varepsilon|_{\varepsilon=0}\theta^{t,(s,j),\varepsilon}=\gamma\,\Omega^{t-1}\cdots\Omega^{s+1}e_j+\gamma\sum_{\ell=s+1}^{t-1}\Omega^{t-1}\cdots\Omega^{\ell+1}r^{\ell,(s,j)} \tag{136}
\]
(where $\Omega^{t-1}\cdots\Omega^{\ell+1}=I$ for $\ell=t-1$).
This implies
\[
\frac{1}{d}\sum_{j=1}^d\partial_\varepsilon|_{\varepsilon=0}\theta^{t,(s,j),\varepsilon}_j=\frac{\gamma}{d}\operatorname{Tr}\big(\Omega^{t-1}\cdots\Omega^{s+1}\big)+\frac{\gamma}{d}\sum_{\ell=s+1}^{t-1}\sum_{j=1}^d e_j^\top\Omega^{t-1}\cdots\Omega^{\ell+1}r^{\ell,(s,j)}. \tag{137}
\]
On an event where $\|X\|_{\mathrm{op}}\le C_0$ for all large $n,d$ (which holds almost surely), by the Lipschitz continuity of $s(\cdot)$ in Assumption 2.1 and the bound in Lemma 5.3, we have $\|\Omega^t\|_{\mathrm{op}}\le C$, $\|\nabla_\alpha s(\theta^t,\widehat\alpha^t)\|_F\le C\sqrt d$, and $\|\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t,(s,j),\varepsilon}\|\le C/\sqrt d$ for all $t$, where $C>0$ is a constant (possibly depending on $\gamma$) changing from instance to instance. Then by Cauchy–Schwarz,
\[
\Big|\sum_{j=1}^d e_j^\top\Omega^{t-1}\cdots\Omega^{\ell+1}r^{\ell,(s,j)}\Big|\le\sqrt{\big\|\Omega^{t-1}\cdots\Omega^{\ell+1}\nabla_\alpha s(\theta^\ell,\widehat\alpha^\ell)\big\|_F^2}\cdot\sqrt{\sum_{j=1}^d\|\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{\ell,(s,j),\varepsilon}\|^2}\le C\sqrt d,
\]
which implies that the second term of (137) converges to 0 a.s. as $n,d\to\infty$. This proves the first claim. The proof of the second claim is analogous, and omitted for brevity.

Let us now introduce a notation for the discrete DMFT response process (57) prior to taking an expectation. Fixing a univariate process $\theta=\{\theta^t\}_{t\in\mathbb{Z}_+}$ and an $\mathbb{R}^K$-valued process $\alpha=\{\alpha^t\}_{t\in\mathbb{Z}_+}$ as inputs, define the following auxiliary process $\{r^{(\theta,\alpha)}_\theta(t,s)\}_{s<t}$:
\[
r^{(\theta,\alpha)}_\theta(t+1,s)=\begin{cases}\gamma&\text{for }s=t,\\[2pt] \big(1-\gamma\delta\beta+\gamma\partial_\theta s(\theta^t,\alpha^t)\big)r^{(\theta,\alpha)}_\theta(t,s)+\gamma\displaystyle\sum_{\ell=s+1}^{t-1}R^\gamma_\eta(t,\ell)\,r^{(\theta,\alpha)}_\theta(\ell,s)&\text{for }s<t.\end{cases} \tag{138}
\]
Note that if the inputs $\{\theta^t,\alpha^t\}$ are given by the discrete-time DMFT processes defined in (57) and (60), then
\[
r^{(\theta,\alpha)}_\theta(t,s)=\frac{\partial\theta^t_\gamma}{\partial u^s_\gamma},\qquad \mathbb{E}[r^{(\theta,\alpha)}_\theta(t,s)]=R^\gamma_\theta(t,s),
\]
which are precisely the auxiliary process defined in (58) and the DMFT response function in (60). We will instead consider (138) with inputs $\{\theta^t_j,\widehat\alpha^t\}_{t\in\mathbb{Z}_+}$ given by the coordinates of $\{\theta^t,\widehat\alpha^t\}$ solving (119).

Lemma 5.5.
Let $\{\theta^t,\widehat\alpha^t\}_{t\in\mathbb{Z}_+}$ be defined by (119), and let $R^\gamma_\theta(t,s)$ be the response function of its DMFT limit defined in (60). Then for any fixed $s,t\in\mathbb{Z}_+$ with $s<t$, almost surely
\[
\lim_{n,d\to\infty}\frac{1}{d}\sum_{j=1}^d r^{(\theta_j,\widehat\alpha)}_\theta(t,s)=R^\gamma_\theta(t,s).
\]
Proof. First note that by the Lipschitz bound for $\partial_\theta s(\cdot)$ in Assumption 2.2, the a.s. convergence $\{\widehat\alpha^t\}\to\{\alpha^t\}$ in Lemma 4.1, and a simple induction argument, we have almost surely
\[
\lim_{n,d\to\infty}\Big(\frac{1}{d}\sum_{j=1}^d r^{(\theta_j,\widehat\alpha)}_\theta(t,s)-\frac{1}{d}\sum_{j=1}^d r^{(\theta_j,\alpha)}_\theta(t,s)\Big)=0.
\]
By a similar induction argument using the boundedness and Lipschitz-continuity of $\partial_\theta s(\cdot)$, for each fixed $s<t$, the map $(\theta^0_j,\ldots,\theta^t_j)\mapsto r^{(\theta_j,\alpha)}_\theta(t+1,s)$ is Lipschitz for each $j$. Then by the empirical Wasserstein-2 convergence for $\theta^0,\ldots,\theta^T$ in (66) of Lemma 4.1, almost surely
\[
\lim_{n,d\to\infty}\frac{1}{d}\sum_{j=1}^d r^{(\theta_j,\alpha)}_\theta(t,s)=\mathbb{E}[r^{(\theta,\alpha)}_\theta(t,s)]
\]
where the inputs $(\theta,\alpha)$ on the right side are the discrete-time DMFT processes (57) and (60), and $\mathbb{E}[\cdot]$ is the expectation over their law. The lemma follows from noting that, by definition, $R^\gamma_\theta(t,s)=\mathbb{E}[r^{(\theta,\alpha)}_\theta(t,s)]$.

We now proceed to prove Lemma 5.1.

Proof of Lemma 5.1. For any fixed $s,t\in\mathbb{Z}_+$ with $s<t$, set also
\[
r_\eta(t,s)=\frac{\partial\eta^t}{\partial w^s}=(\delta\beta)^{-1}R^\gamma_\eta(t,s)
\]
and define the error terms
\[
E^{t,(s,j)}_\theta=\partial_\varepsilon|_{\varepsilon=0}\theta^{t,(s,j),\varepsilon}_j-r^{(\theta_j,\widehat\alpha)}_\theta(t,s), \tag{139}
\]
\[
E^{t,[s,i]}_\eta=\partial_\varepsilon|_{\varepsilon=0}\eta^{t,[s,i],\varepsilon}_i-\beta^{-1}r_\eta(t,s). \tag{140}
\]
We first prove by induction on $t$ that for any $p\ge1$ and $s,t\in\mathbb{Z}_+$ with $s<t$, almost surely
\[
\lim_{n,d\to\infty}\frac{1}{d}\sum_{j=1}^d|E^{t,(s,j)}_\theta|^p=0,\qquad \lim_{n,d\to\infty}\frac{1}{n}\sum_{i=1}^n|E^{t,[s,i]}_\eta|^p=0. \tag{141}
\]
Fixing any $s\in\mathbb{Z}_+$, the base case $t=s+1$ holds, as direct calculation via (120) shows that
\[
\partial_\varepsilon|_{\varepsilon=0}\theta^{s+1,(s,j),\varepsilon}_j=\gamma=r^{(\theta_j,\widehat\alpha)}_\theta(s+1,s),\qquad j\in[d],
\]
and similarly via (121),
\[
\partial_\varepsilon|_{\varepsilon=0}\eta^{s+1,[s,i],\varepsilon}_i=\gamma\|X^\top e_i\|^2=\gamma+E^{s+1,[s,i]}_\eta
\]
where $n^{-1}\sum_{i=1}^n|E^{s+1,[s,i]}_\eta|^p\to0$ a.s. under Assumption 2.1, while $r_\eta(s+1,s)=\beta R^\gamma_\theta(s+1,s)=\gamma\beta$. Suppose by induction that (141) holds for this fixed $s\in\mathbb{Z}_+$ up to time $t$.
Note that Lemmas 5.4 and 5.5 then imply, with the matrix $\Omega^t$ defined in (135), for each $s<t$, almost surely
\[
\lim_{n,d\to\infty}\bigg\{\gamma\cdot\frac{1}{d}\operatorname{Tr}\big(\Omega^{t-1}\cdots\Omega^{s+1}\big),\ \frac{1}{d}\sum_{j=1}^d\partial_\varepsilon|_{\varepsilon=0}\theta^{t,(s,j),\varepsilon}_j\bigg\}=R^\gamma_\theta(t,s),
\]
\[
\lim_{n,d\to\infty}\bigg\{\gamma\cdot\frac{1}{n}\operatorname{Tr}\big(X\Omega^{t-1}\cdots\Omega^{s+1}X^\top\big),\ \frac{1}{n}\sum_{i=1}^n\partial_\varepsilon|_{\varepsilon=0}\eta^{t,[s,i],\varepsilon}_i\bigg\}=\beta^{-1}r_\eta(t,s)=(\delta\beta^2)^{-1}R^\gamma_\eta(t,s). \tag{142}
\]

Claim for $\partial_\varepsilon|_{\varepsilon=0}\theta^{t,(s,j),\varepsilon}_j$. We establish the claim (141) for $E^{t+1,(s,j)}_\theta$. Fixing both $s\in\mathbb{Z}_+$ and $j\in[d]$, let us shorthand $\theta^{t,(s,j),\varepsilon}$ as $\theta^{t,\varepsilon}$, and recall the notations $\theta^t=(\theta^t_j,\theta^t_{-j})$ from Section 5.1.1 where $\theta^t_j$ is the $j$th coordinate of $\theta^t$. Writing correspondingly $X=(x_j,X_{-j})$, we have
\[
\begin{aligned}
\theta^{t+1,\varepsilon}_{-j}&=\big(I-\gamma\beta X_{-j}^\top X_{-j}\big)\theta^{t,\varepsilon}_{-j}-\gamma\beta X_{-j}^\top(x_j\theta^{t,\varepsilon}_j-y)+\gamma s(\theta^{t,\varepsilon}_{-j},\widehat\alpha^{t,\varepsilon})+\sqrt{2}(b^{t+1}_{-j}-b^t_{-j}),\\
\theta^{t+1,\varepsilon}_j&=\big(1-\gamma\beta\|x_j\|^2\big)\theta^{t,\varepsilon}_j-\gamma\beta x_j^\top(X_{-j}\theta^{t,\varepsilon}_{-j}-y)+\gamma s(\theta^{t,\varepsilon}_j,\widehat\alpha^{t,\varepsilon})+\sqrt{2}(b^{t+1}_j-b^t_j),\\
\widehat\alpha^{t+1,\varepsilon}&=\widehat\alpha^{t,\varepsilon}+\gamma\cdot G\big(\widehat\alpha^{t,\varepsilon},\widehat P(\theta^{t,\varepsilon})\big).
\end{aligned}
\]
Define
\[
\nabla_\alpha s(\theta^t,\widehat\alpha^t)=\big(\nabla_\alpha s(\theta^t_i,\widehat\alpha^t)^\top\big)_{i=1}^d\in\mathbb{R}^{d\times K},\qquad r^t=\nabla_\alpha s(\theta^t,\widehat\alpha^t)\,\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t,\varepsilon}\in\mathbb{R}^d.
\]
Then, taking the derivative of $\theta^{t+1,\varepsilon}_j$ in $\varepsilon$,
\[
\partial_\varepsilon|_{\varepsilon=0}\theta^{t+1,\varepsilon}_j=\big(1-\gamma\beta\|x_j\|^2+\gamma\partial_\theta s(\theta^t_j,\widehat\alpha^t)\big)\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}_j+\gamma r^t_j-\gamma\beta x_j^\top X_{-j}\,\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}_{-j}. \tag{143}
\]
Taking the derivative of $\theta^{t,\varepsilon}_{-j}$ in $\varepsilon$,
\[
\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}_{-j}=\underbrace{\Big(I-\gamma\beta X_{-j}^\top X_{-j}+\gamma\,\mathrm{Diag}(\partial_\theta s(\theta^{t-1}_{-j},\widehat\alpha^{t-1}))\Big)}_{:=\Omega^{t-1}_{-j}}\partial_\varepsilon|_{\varepsilon=0}\theta^{t-1,\varepsilon}_{-j}-\gamma\beta X_{-j}^\top x_j\,\partial_\varepsilon|_{\varepsilon=0}\theta^{t-1,\varepsilon}_j+\gamma r^{t-1}_{-j}.
\]
Then, iterating this equality and using $\partial_\varepsilon|_{\varepsilon=0}\theta^{s+1,(s,j),\varepsilon}_{-j}=0$ gives
\[
\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}_{-j}=-\gamma\beta\sum_{k=s+1}^{t-1}\Omega^{t-1}_{-j}\cdots\Omega^{k+1}_{-j}X_{-j}^\top x_j\cdot\partial_\varepsilon|_{\varepsilon=0}\theta^{k,\varepsilon}_j+\gamma\sum_{k=s+1}^{t-1}\Omega^{t-1}_{-j}\cdots\Omega^{k+1}_{-j}r^k_{-j}.
\]
Plugging the above expression into (143), we have
\[
\partial_\varepsilon|_{\varepsilon=0}\theta^{t+1,\varepsilon}_j=\underbrace{\big(1-\gamma\beta\|x_j\|^2+\gamma\partial_\theta s(\theta^t_j,\widehat\alpha^t)\big)\cdot\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}_j}_{(I_j)}+(\gamma\beta)^2\sum_{k=s+1}^{t-1}\underbrace{x_j^\top X_{-j}\Omega^{t-1}_{-j}\cdots\Omega^{k+1}_{-j}X_{-j}^\top x_j\cdot\partial_\varepsilon|_{\varepsilon=0}\theta^{k,\varepsilon}_j}_{(II_{j,k})}-\gamma^2\beta\sum_{k=s+1}^{t-1}\underbrace{x_j^\top X_{-j}\Omega^{t-1}_{-j}\cdots\Omega^{k+1}_{-j}r^k_{-j}}_{(III_{j,k})}+\underbrace{\gamma r^t_j}_{(IV_j)}. \tag{144}
\]

Analysis of $(I_j)$. We have
\[
(I_j)=\big(1-\gamma\delta\beta+\gamma\partial_\theta s(\theta^t_j,\widehat\alpha^t)\big)r^{(\theta_j,\widehat\alpha)}_\theta(t,s)+r^{(j)}_1, \tag{145}
\]
where
\[
r^{(j)}_1=\gamma\beta(\delta-\|x_j\|^2)\big(\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}_j\big)+\big(1-\gamma\delta\beta+\gamma\partial_\theta s(\theta^t_j,\widehat\alpha^t)\big)E^{t,(s,j)}_\theta.
\]
For any $p\ge1$, by the induction hypothesis we have $d^{-1}\sum_j|E^{t,(s,j)}_\theta|^p\to0$ a.s. By the conditions for $X$ in Assumption 2.1, $d^{-1}\sum_j|\delta-\|x_j\|^2|^p\to0$ a.s. By Lemma 5.3, $\max_j|\partial_\varepsilon|_{\varepsilon=0}\theta^{t,\varepsilon}_j|\le C$ a.s. for all large $n,d$, while by Assumption 2.2, $\partial_\theta s(\cdot)$ is also bounded. Combining these bounds gives $d^{-1}\sum_j|r^{(j)}_1|^p\to0$ a.s. for any $p\ge1$.

Analysis of $(II_{j,k})$. Let $\Omega^{t,(j)}_{-j}=I-\gamma\beta X_{-j}^\top X_{-j}+\gamma\,\mathrm{Diag}(\partial_\theta s(\theta^{t,(j)}_{-j},\widehat\alpha^{t,(j)}))$ be the analogue of $\Omega^t_{-j}$ defined by the cavity dynamics $\{\theta^{t,(j)},\widehat\alpha^{t,(j)}\}$. We first show that a.s. for all large $n,d$, we have for every $j\in[d]$ that
\[
|r^{(j,k)}_{2,1}|:=\Big|x_j^\top X_{-j}\Omega^{t-1}_{-j}\cdots\Omega^{k+1}_{-j}X_{-j}^\top x_j-x_j^\top X_{-j}\Omega^{t-1,(j)}_{-j}\cdots\Omega^{k+1,(j)}_{-j}X_{-j}^\top x_j\Big|\le C\sqrt{\frac{\log d}{d}}\Big(|\theta^0_j|+|\theta^*_j|+\max_{u\in[0,t]}|b^u_j|+\sqrt{\log d}\Big). \tag{146}
\]
To see this, note that
\[
|r^{(j,k)}_{2,1}|\le\sum_{\ell=k+1}^{t-1}\underbrace{\Big|x_j^\top X_{-j}\Omega^{t-1,(j)}_{-j}\cdots\Omega^{\ell+1,(j)}_{-j}\big(\Omega^\ell_{-j}-\Omega^{\ell,(j)}_{-j}\big)\Omega^{\ell-1}_{-j}\cdots\Omega^{k+1}_{-j}X_{-j}^\top x_j\Big|}_{:=T^\ell_{(j,k)}}.
\]
Here $\Omega^\ell_{-j}-\Omega^{\ell,(j)}_{-j}=\gamma\,\mathrm{Diag}\big(\partial_\theta s(\theta^\ell_{-j},\widehat\alpha^\ell)-\partial_\theta s(\theta^{\ell,(j)}_{-j},\widehat\alpha^{\ell,(j)})\big)$. Then we may bound
\begin{align*}
|T^\ell_{(j,k)}|&\le\gamma\,\|\Omega^{\ell+1,(j)}_{-j}\cdots\Omega^{t-1,(j)}_{-j}X_{-j}^\top x_j\|_\infty\cdot\|\Omega^{\ell-1}_{-j}\cdots\Omega^{k+1}_{-j}X_{-j}^\top x_j\|_2\cdot\|\partial_\theta s(\theta^\ell_{-j},\widehat\alpha^\ell)-\partial_\theta s(\theta^{\ell,(j)}_{-j},\widehat\alpha^{\ell,(j)})\|_2\\
&\le C\gamma\,\|\Omega^{\ell+1,(j)}_{-j}\cdots\Omega^{t-1,(j)}_{-j}X_{-j}^\top x_j\|_\infty\cdot\|\Omega^{\ell-1}_{-j}\|_{\mathrm{op}}\cdots\|\Omega^{k+1}_{-j}\|_{\mathrm{op}}\|X_{-j}^\top x_j\|_2\cdot\big(\|\theta^{\ell,(j)}_{-j}-\theta^\ell_{-j}\|_2+\sqrt d\,\|\widehat\alpha^{\ell,(j)}-\widehat\alpha^\ell\|_2\big). \tag{147}
\end{align*}
Since $x_j$ is independent of $\Omega^{t,(j)}_{-j}$ and $X_{-j}$, we have by a subgaussian tail bound
\[
\mathbb{P}\bigg[e_i^\top\Omega^{\ell+1,(j)}_{-j}\cdots\Omega^{t-1,(j)}_{-j}X_{-j}^\top x_j\ge C\sqrt{\frac{\log d}{d}}\,\big\|e_i^\top\Omega^{\ell+1,(j)}_{-j}\cdots\Omega^{t-1,(j)}_{-j}X_{-j}^\top\big\|_2\bigg]\le e^{-cd}
\]
for each $i=1,\ldots,d$ and some constants $C,c>0$. Then, taking a union bound and applying the Borel–Cantelli lemma, almost surely for all large $n,d$,
\[
\sup_{j,\ell}\|\Omega^{\ell+1,(j)}_{-j}\cdots\Omega^{t-1,(j)}_{-j}X_{-j}^\top x_j\|_\infty\le\sup_{j,\ell}C\sqrt{\frac{\log d}{d}}\,\|\Omega^{\ell+1,(j)}_{-j}\|_{\mathrm{op}}\cdots\|\Omega^{t-1,(j)}_{-j}\|_{\mathrm{op}}\|X_{-j}\|_{\mathrm{op}}.
\]
The right side is bounded by $C'\sqrt{(\log d)/d}$ on an almost-sure event where $\|X\|_{\mathrm{op}}\le C_0$ holds for all large $n,d$. Then, applying this to (147) and applying also Lemma 5.2 to bound the last term of (147), this shows (146). By the conditions of Assumption 2.1 and the tail estimates of the Brownian motion in Lemma 4.7, this bound (146) in turn implies $d^{-1}\sum_j|r^{(j,k)}_{2,1}|^p\to0$ a.s. for any $p\ge1$.

Now consider
\[
r^{(j,k)}_{2,2}:=x_j^\top X_{-j}\Omega^{t-1,(j)}_{-j}\cdots\Omega^{k+1,(j)}_{-j}X_{-j}^\top x_j-\frac{1}{d}\operatorname{Tr}\big(X_{-j}\Omega^{t-1,(j)}_{-j}\cdots\Omega^{k+1,(j)}_{-j}X_{-j}^\top\big).
\]
Since $x_j$ is independent of $\Omega^{t,(j)}_{-j}$ and $X_{-j}$, the Hanson–Wright inequality yields
\[
\mathbb{P}\bigg[|r^{(j,k)}_{2,2}|\ge\max\Big(\frac{C\sqrt{\log d}}{d}\|W\|_F,\ \frac{C\log d}{d}\|W\|_{\mathrm{op}}\Big)\bigg]\le e^{-cd}
\]
for some $C,c>0$, where $W=X_{-j}\Omega^{t-1,(j)}_{-j}\cdots\Omega^{k+1,(j)}_{-j}X_{-j}^\top$. Again taking a union bound over $j\in[d]$ and applying $\|W\|_{\mathrm{op}}\le C_0$ and $\|W\|_F\le C_0\sqrt d$ a.s. for all large $n,d$, this implies $d^{-1}\sum_j|r^{(j,k)}_{2,2}|^p\to0$ a.s. for any $p\ge1$.

Finally, let $X^{(j)}\in\mathbb{R}^{n\times d}$ be the embedding of $X_{-j}$ with $j$th column set to 0 as defined in (123), and let
\[
\Omega^{t,(j)}=I-\gamma\beta X^{(j)\top}X^{(j)}+\gamma\,\mathrm{Diag}(\partial_\theta s(\theta^{t,(j)},\widehat\alpha^{t,(j)}))\in\mathbb{R}^{d\times d}.
\]
Consider
\[
\begin{aligned}
r^{(j,k)}_{2,3}&:=\frac{1}{d}\operatorname{Tr}\big(X_{-j}\Omega^{t-1,(j)}_{-j}\cdots\Omega^{k+1,(j)}_{-j}X_{-j}^\top\big)-\frac{1}{d}\operatorname{Tr}\big(X\Omega^{t-1}\cdots\Omega^{k+1}X^\top\big)\\
&=\frac{1}{d}\operatorname{Tr}\big(X^{(j)}\Omega^{t-1,(j)}\cdots\Omega^{k+1,(j)}X^{(j)\top}\big)-\frac{1}{d}\operatorname{Tr}\big(X\Omega^{t-1}\cdots\Omega^{k+1}X^\top\big)\\
&=\frac{1}{d}\operatorname{Tr}\big(X^{(j)}\Omega^{t-1,(j)}\cdots\Omega^{k+1,(j)}(X^{(j)}-X)^\top\big)+\sum_{\ell=k+1}^{t-1}\frac{1}{d}\operatorname{Tr}\big(X^{(j)}\Omega^{t-1,(j)}\cdots\Omega^{\ell+1,(j)}(\Omega^\ell-\Omega^{\ell,(j)})\Omega^{\ell-1}\cdots\Omega^{k+1}X^\top\big)\\
&\qquad+\frac{1}{d}\operatorname{Tr}\big((X^{(j)}-X)\Omega^{t-1}\cdots\Omega^{k+1}X^\top\big).
\end{aligned}
\]
Almost surely for all large $n,d$, for every $j\in[d]$ we have $\|X^{(j)}-X\|_F=\|x_j\|\le C$ and
\[
\|\Omega^{t,(j)}-\Omega^t\|_F\le\gamma\beta\|X^{(j)\top}X^{(j)}-X^\top X\|_F+\gamma\|\partial_\theta s(\theta^{t,(j)},\widehat\alpha^{t,(j)})-\partial_\theta s(\theta^t,\widehat\alpha^t)\|\le C\Big(1+|\theta^0_j|+|\theta^*_j|+\max_{u\in[0,t]}|b^u_j|+\sqrt{\log d}\Big),
\]
the second inequality applying the Lipschitz continuity of $s(\cdot)$ in Assumption 2.2 and Lemma 5.2. Then, applying $\operatorname{Tr}\big((A-B)C\big)\le\|A-B\|_F\|C\|_F\le\sqrt d\,\|A-B\|_F\|C\|_{\mathrm{op}}$, we obtain a.s. for all large $n,d$ that for every $j\in[d]$,
\[
|r^{(j,k)}_{2,3}|\le\frac{C}{\sqrt d}\Big(1+|\theta^0_j|+|\theta^*_j|+\max_{u\in[0,t]}|b^u_j|+\sqrt{\log d}\Big),
\]
which implies as above that $d^{-1}\sum_j|r^{(j,k)}_{2,3}|^p\to0$ a.s. for any $p\ge1$.
Combining these bounds for $r^{(j,k)}_{2,1},r^{(j,k)}_{2,2},r^{(j,k)}_{2,3}$, the second statement of (142) for almost sure convergence of $d^{-1}\operatorname{Tr}(X\Omega^{t-1}\cdots\Omega^{k+1}X^\top)$, the induction hypothesis for approximation of $\partial_\varepsilon|_{\varepsilon=0}\theta^{k,\varepsilon}_j$ by $r^{(\theta_j,\widehat\alpha)}_\theta$, and the bound $|\partial_\varepsilon|_{\varepsilon=0}\theta^{k,\varepsilon}_j|\le C$ a.s. for all large $n,d$ by Lemma 5.3, we get that
\[
(II_{j,k})=\frac{1}{\gamma\beta^2}R^\gamma_\eta(t,k)\,r^{(\theta_j,\widehat\alpha)}_\theta(k,s)+r^{(j,k)}_2 \tag{148}
\]
where $d^{-1}\sum_j|r^{(j,k)}_2|^p\to0$ a.s. for any $p\ge1$.

Analysis of $(III_{j,k})$. We apply a similar leave-one-out argument as above. Let
\[
r^{t,(j)}_{-j}=\nabla_\alpha s(\theta^{t,(j)}_{-j},\widehat\alpha^{t,(j)})\,\partial_\varepsilon|_{\varepsilon=0}\widehat\alpha^{t,\varepsilon}\in\mathbb{R}^{d-1}.
\]
(Note that we replace only the first factor $\nabla_\alpha s(\theta^{t,(j)}_{-j},\widehat\alpha^{t,(j)})$ by the cavity dynamics, leaving the second factor unchanged.) Then
\[
\Big|x_j^\top X_{-j}\Omega^{t-1}_{-j}\cdots\Omega^{k+1}_{-j}r^k_{-j}-x_j^\top X_{-j}\Omega^{t-1,(j)}_{-j}\cdots\Omega^{k+1,(j)}_{-j}r^{k,(j)}_{-j}\Big|\le\underbrace{\sum_{\ell=k+1}^{t-1}\Big|x_j^\top X_{-j}\Omega^{t-1,(j)}_{-j}\cdots\Omega^{\ell+1,(j)}_{-j}\big(\Omega^\ell_{-j}-\Omega^{\ell,(j)}_{-j}\big)\Omega^{\ell-1}_{-j}\cdots\Omega^{k+1}_{-j}r^k_{-j}\Big|}_{:=u^{(j,k)}}+\underbrace{\Big|x_j^\top X_{-j}\Omega^{t-1,(j)}_{-j}\cdots\Omega^{k+1,(j)}_{-j}\big(r^k_{-j}-r^{k,(j)}_{-j}\big)\Big|}_{:=v^{(j,k)}}. \tag{149}
\]
We note that a.s. for all large $n,d$, we have $\|r^k\|\le C\sqrt d\cdot C/\sqrt d\le C'$ by the Lipschitz bound for $s(\cdot)$ in Assumption 2.2 and Lemma 5.3. Then, using similar arguments as in the analysis of $r^{(j,k)}_{2,1}$ above, we have $d^{-1}\sum_j|u^{(j,k)}|^p\to0$ a.s. for any $p\ge1$.
For the second term, we have v(j,k)=/vextendsingle/vextendsingle/vextendsinglex⊤ jX−jΩt−1,(j) −j...Ωj+1,(j) −j/parenleftBig ∇αs(θk −j,/hatwideαk)−∇αs(θk,(j) −j,/hatwideαk,(j))/parenrightBig ∂ε|ε=0/hatwideαk,ε/vextendsingle/vextendsingle/vextendsingle ≤C/⌊a∇⌈⌊lxj/⌊a∇⌈⌊l/⌊a∇⌈⌊lX−j/⌊a∇⌈⌊lop/⌊a∇⌈⌊lΩt−1,(j) −j/⌊a∇⌈⌊lop.../⌊a∇⌈⌊lΩk+1,(j) −j/⌊a∇⌈⌊lop(/⌊a∇⌈⌊lθk −j−θk,(j) −j/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideαk−/hatwideαk,(j)/⌊a∇⌈⌊l)/⌊a∇⌈⌊l∂ε|ε=0/hatwideαk,ε/⌊a∇⌈⌊l, which satisfies d−1/summationtext j|v(j,k)|p→0 a.s. for all p≥1 by Lemmas 5.2and5.3. Thus x⊤ jX−jΩt−1 −j...Ωk+1 −jrk −j=x⊤ jX−jΩt−1,(j) −j...Ωk+1,(j) −jrk,(j) −j+r(j,k) whered−1/summationtext j|r(j,k)|p→0 a.s. On the other hand, we have /vextendsingle/vextendsingle/vextendsinglex⊤ jX−jΩt−1,(j) −j...Ωk+1,(j) −jrk,(j) −j/vextendsingle/vextendsingle/vextendsingle=/vextendsingle/vextendsingle/vextendsinglex⊤ jX−jΩt−1,(j) −j...Ωk+1,(j) −j∇αs(θk,(j) −j,/hatwideαk,(j))·∂ε|ε=0/hatwideαk,ε/vextendsingle/vextendsingle/vextendsingle ≤ /⌊a∇⌈⌊lx⊤ jX−jΩt−1,(j) −j...Ωk+1,(j) −j∇αs(θk,(j) −j,/hatwideαk,(j))/⌊a∇⌈⌊l/⌊a∇⌈⌊l∂ε|ε=0/hatwideαk,ε/⌊a∇⌈⌊l. SinceX−j,Ωt,(j) −j,θk,(j) −j,/hatwideαk,(j)are all independent of xj, a subgaussian tail bound and union bound shows, a.s. for all large n,d, that for every j∈[d], /⌊a∇⌈⌊lx⊤ jX−jΩt−1,(j) −j...Ωk+1,(j) −j∇αs(θk,(j) −j,/hatwideαk,(j))/⌊a∇⌈⌊l ≤C/radicalbig logd. Since/⌊a∇⌈⌊l∂ε|ε=0/hatwideαk,ε/⌊a∇⌈⌊l ≤C/√ dby Lemma 5.3, this shows (IIIj,k) =r(j,k) 3 (150) where lim n,d→∞d−1/summationtextd j=1|r(j,k) 3|p= 0 a.s. for any p≥1. 52 Analysis of (IVj).By Assumption 2.2and Lemma 5.3,|(IVj)| ≤γ/⌊a∇⌈⌊l∇αs(θt j,/hatwideαt)/⌊a∇⌈⌊l/⌊a∇⌈⌊l∂ε|ε=0/hatwideαt,ε/⌊a∇⌈⌊l ≤C/√ da.s. for all large n,d, hence (IVj) =r(j) 4 (151) where lim n,d→∞d−1/summationtextd j=1|r(j) 4|p= 0 a.s. for any p≥1. Applying ( 145), (148), (150), and (151) back to ( 144), ∂ε|ε=0θt+1,ε j=/parenleftbig 1−γδβ+γ∂θs(θt j,/hatwideαt)/parenrightbig r(θj,/hatwideα) θ(t,s)+γt−1/summationdisplay k=s+1Rγ η(t,k)r(θj,/hatwideα)
https://arxiv.org/abs/2504.15556v1
θ(k,s)+Et+1,(s,j) θ =r(θj,/hatwideα) θ(t+1,s)+Et+1,(s,j) θ where lim n,d→∞d−1/summationtextd j=1|Et+1,(s,j) θ|p= 0 a.s. for each p≥1, concluding the proof the inductive claim ( 141) forEt+1,(s,j) θ. Claim for ∂ε|ε=0ηt,[s,i],ε i.We now show the claim ( 141) forEt+1,[s,i] η. Again fixing s∈Z+andi∈[n], let us shorthand ηt,[s,i],εandθt,[s,i],εasηt,εandθt,ε. Let us write ηt= (ηt i,ηt −i) as in Section 5.1.1, and write correspondingly y= (yi,y−i),ε= (εi,ε−i), andX= [xi,X⊤ −i]⊤wherexi∈Rddenotes now (the transpose of) the ithrowofX, andX−i∈R(n−1)×d. Then θt+1,ε=θt,ε+γ/bracketleftBig −β/parenleftbig X⊤ −i(X−iθt,ε−y−i)+xi(ηt,ε i−yi)/parenrightbig +s(θt,ε,/hatwideαt,ε)/bracketrightBig +√ 2(bt+1−bt) ηt+1,ε i=ηt,ε i+γ/bracketleftBig −βx⊤ i/parenleftbig X⊤ −i(X−iθt,ε−y−i)+xi(ηt,ε i−yi)/parenrightbig +x⊤ is(θt,ε,/hatwideαt,ε)/bracketrightBig +√ 2x⊤ i(bt+1−bt) /hatwideαt+1,ε=/hatwideαt,ε+γ·G(/hatwideαt,ε,/hatwideP(θt,ε)). Set rt=∇αs(θt,/hatwideαt)∂ε|ε=0/hatwideαt,ε∈Rd. Then, taking the derivative of ηt+1,ε iyields ∂ε|ε=0ηt+1,ε i=/parenleftBig 1−γβ/⌊a∇⌈⌊lxi/⌊a∇⌈⌊l2/parenrightBig ∂ε|ε=0ηt,ε i+x⊤ i/parenleftBig −γβX⊤ −iX−i+γDiag(∂θs(θt,/hatwideαt))/parenrightBig ∂ε|ε=0θt,ε+γx⊤ irt. (152) Taking derivative of θt,εyields ∂ε|ε=0θt,ε=/parenleftBig I−γβX⊤ −iX−i+γDiag(∂θs(θt−1,/hatwideαt−1))/parenrightBig /bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright :=Ωt−1 −i∂ε|ε=0θt−1,ε−γβxi∂ε|ε=0ηt−1,ε i+γrt−1. Iterating this equality and using ∂ε|ε=0θs+1,[s,i],ε=γxigives ∂ε|ε=0θt,ε=γΩt−1 −i...Ωs+1 −ixi−γβt−1/summationdisplay k=s+1Ωt−1 −i...Ωk+1 −ixi·∂ε|ε=0ηk,ε i+γt−1/summationdisplay k=s+1Ωt−1 −i...Ωk+1 −irk. 
Plugging the above expression into ( 152), we have ∂ε|ε=0ηt+1,ε i=/parenleftBig 1−γβ/⌊a∇⌈⌊lxi/⌊a∇⌈⌊l2/parenrightBig ∂ε|ε=0ηt,ε i/bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright (Ii)+γx⊤ i(Ωt −i−I)Ωt−1 −i...Ωs+1 −ixi/bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright (IIi) −γβt−1/summationdisplay k=s+1x⊤ i(Ωt −i−I)Ωt−1 −i...Ωk+1 −ixi·∂ε|ε=0ηk,ε i/bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright (IIIi,k)+γt−1/summationdisplay k=s+1x⊤ i(Ωt −i−I)Ωt−1...Ωk+1rk /bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright (IV)i,k+γx⊤ irt /bracehtipupleft/bracehtipdownright/bracehtipdownleft/bracehtipupright (Vi). (153) 53 The arguments to analyze these terms are similar to the above, and we will omit some details. Analysis of (Ii).By the induction hypothesis and concentration of /⌊a∇⌈⌊lxi/⌊a∇⌈⌊l2around 1, (Ii) = (1−γβ)β−1rη(t,s)+r[i] 1 (154) wheren−1/summationtextn i=1|r[i] 1|p→0 a.s. for any p≥1. Analysis of (IIi).LetΩt,[i] −i=I−γβX⊤ −iX−i+γDiag(∂θs(θt,[i],/hatwideαt,[i])) withθt,[i],/hatwideαt,[i]given by the cavity dynamics of ( 125). Set r[i] 2,1=x⊤ iΩt−1 −i...Ωs+1 −ixi−x⊤ iΩt−1,[i] −i...Ωs+1,[i] −ixi r[i] 2,2=x⊤ iΩt−1,[i] −i...Ωs+1,[i] −ixi−1 dTrΩt−1,[i] −i...Ωs+1,[i] −i r[i] 2,3=1 dTrΩt−1,[i] −i...Ωs+1 −i−1 dTrΩt−1...Ωs+1,[i]. Then the same argumentsaboveyield n−1/summationtextn i=1|r[i] 2,j|p→0for each j= 1,2,3. Applying the samearguments fortin place of t−1, and the first statement of ( 142) for both tandt−1, (IIi) =γ−1Rγ θ(t+1,s)−γ−1Rγ θ(t,s)+r[i] 2 (155) wheren−1/summationtextn i=1|r[i] 2|p→0 a.s. Analysis of (IIIi,k),(IVi,k), and(Vi).Similar arguments as above show (IIIi,k) = (γβ)−1Rγ θ(t+1,k)rη(k,s)−(γβ)−1Rγ θ(t,k)rη(k,s)+r[i,k] 3 (156) (IV)i,k=r[i,k] 4 (157) (V)i=r[i] 5 (158) wheren−1/summationtextn i=1|r[i,k] 3|p→0,n−1/summationtextn i=1|r[i,k] 4|p→0, andn−1/summationtextn i=1|r[i] 5|p→0 a.s. 
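The analysis of $(\mathrm{I}_i)$ above invokes concentration of $\|x_i\|^2$ around 1 for a row $x_i$ of $X$ with i.i.d. $N(0,1/d)$ entries. A quick sketch of the scale of this concentration (the dimension $d$ here is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4000
x = rng.standard_normal(d) / np.sqrt(d)  # models a row x_i of X, entries N(0, 1/d)
sq = float(np.linalg.norm(x) ** 2)

# ||x_i||^2 is chi^2_d / d: mean 1, standard deviation sqrt(2/d), about 0.022 here
assert abs(sq - 1.0) < 0.15
```

The $O(d^{-1/2})$ fluctuation is what lets terms like $1-\gamma\beta\|x_i\|^2$ be replaced by $1-\gamma\beta$ at the cost of an error that vanishes after averaging over $i\in[n]$.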
Applying ( 154), (155), (156), (157), and (158) back to ( 153), for an error term Et+1,[s,i] ηsatisfying n−1/summationtextn i=1|Et+1,[s,i] η|p→0 a.s., we have ∂ε|ε=0ηt+1,ε i= (1−γβ)β−1rη(t,s)+Rγ θ(t+1,s)−Rγ θ(t,s) −t−1/summationdisplay k=s+1(Rγ θ(t+1,k)−Rγ θ(t,s))rη(k,s)+Et+1,[s,i] η =/bracketleftBig −γrη(t,s)+Rγ θ(t+1,s)−t−1/summationdisplay k=s+1Rγ θ(t+1,k)rη(k,s)/bracketrightBig +/bracketleftBig β−1rη(t,s)−Rγ θ(t,s)+t−1/summationdisplay k=s+1Rγ θ(t,k)rη(k,s)/bracketrightBig /bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright =0+Et+1,[s,i] η =Rγ θ(t+1,s)−t/summationdisplay k=s+1Rγ θ(t+1,k)rη(k,s)+Et+1,[s,i] η =β−1rη(t+1,s)+Et+1,[s,i] η. This shows the inductive claim ( 141) forEt+1,[s,i] η, and hence concludes the induction. 54 Toconcludetheproof, byboundednessof ∂θs(·) andthe definition ( 138),d−1/summationtextd j=1r(θj,/hatwideα) θ(t,s)is bounded by a constant. Furthermore, by the expansion ( 137) and its following arguments (which hold also at non- zeroε >0), on the event /⌊a∇⌈⌊lX/⌊a∇⌈⌊lop≤C0, we have that d−1/summationtextd j=1∂εθt,(s,j),ε jis also bounded by a constant for all sufficiently small ε≥0. Then, writing /a\}⌊∇a⌋k⌉tl⌉{t·/a\}⌊∇a⌋k⌉t∇i}htfor the expectation over only the discrete Brownian motion {bt}t∈Z+, we may apply the dominated convergence theorem to the first sta tement of ( 141) to get, almost surely, lim n,d→∞d−1TrRθ(t,s) = lim n,d→∞1 dd/summationdisplay j=1∂ε|ε=0/a\}⌊∇a⌋k⌉tl⌉{tθt,(s,j),ε j/a\}⌊∇a⌋k⌉t∇i}ht= lim n,d→∞1 dd/summationdisplay j=1/a\}⌊∇a⌋k⌉tl⌉{t∂ε|ε=0θt,(s,j),ε j/a\}⌊∇a⌋k⌉t∇i}ht = lim n,d→∞1 dd/summationdisplay j=1/a\}⌊∇a⌋k⌉tl⌉{tr(θj,α) θ(t,s)/a\}⌊∇a⌋k⌉t∇i}ht=Rγ θ(t,s), the last equality holding by Lemma 5.5. Similarly
we may apply the dominated convergence theorem to the second statement of (141) to get, almost surely,
\[
\lim_{n,d\to\infty} n^{-1}\operatorname{Tr}R_\eta(t,s)
=\lim_{n,d\to\infty}\delta\beta^2\cdot\frac{1}{n}\sum_{i=1}^n\partial_\varepsilon\big|_{\varepsilon=0}\big\langle \eta_i^{t,[s,i],\varepsilon}\big\rangle
=\lim_{n,d\to\infty}\delta\beta\cdot\frac{1}{n}\sum_{i=1}^n\big\langle r_\eta(t,s)\big\rangle
=R_\eta^\gamma(t,s),
\]
concluding the proof.

5.2 Discretization of Langevin response function

In the following, we denote $x=(\theta,\widehat\alpha)\in\mathbb{R}^{d+K}$ and consider (4–5) as a joint diffusion in the variables $x^t=(\theta^t,\widehat\alpha^t)$. Let $u:\mathbb{R}^{d+K}\to\mathbb{R}^{d+K}$ and $M\in\mathbb{R}^{(d+K)\times(d+K)}$ be defined by
\[
u(x)=u(\theta,\widehat\alpha)=\Big(-\beta X^\top(X\theta-y)+\big(s(\theta_j,\widehat\alpha)\big)_{j=1}^d,\;G\big(\widehat\alpha,\widehat P(\theta)\big)\Big),\qquad
M=\operatorname{Diag}(I_{d\times d},0_{K\times K}).\tag{159}
\]
Given an initial condition $x^0\in\mathbb{R}^{d+K}$, we consider the continuous-time dynamics for $x^t\in\mathbb{R}^{d+K}$ and $V^t\in\mathbb{R}^{(d+K)\times(d+K)}$ defined by
\[
x^t=x^0+\int_0^t u(x^s)\,ds+\sqrt{2}\,Mb^t,\qquad
V^t=I_{d+K}+\int_0^t[\mathrm{d}u(x^s)V^s]\,ds\tag{160}
\]
where $\mathrm{d}u(x)\in\mathbb{R}^{(d+K)\times(d+K)}$ is the derivative of $u(\cdot)$ at $x$. We consider also a piecewise-constant version of these dynamics
\[
\bar x^t_\gamma=x^0+\int_0^{\lfloor t\rfloor}u(\bar x^s_\gamma)\,ds+\sqrt{2}\,Mb^{\lfloor t\rfloor},\qquad
\bar V^t_\gamma=I_{d+K}+\int_0^{\lfloor t\rfloor}[\mathrm{d}u(\bar x^s_\gamma)\bar V^s_\gamma]\,ds\tag{161}
\]
where $\lfloor t\rfloor\in\gamma\mathbb{Z}_+$ is as previously defined in (92). We note that the process $x^t=(\theta^t,\widehat\alpha^t)$ in (160) is precisely our adaptive Langevin process of interest (4–5). Similarly, the process $\bar x^t_\gamma$ in (161) is the piecewise-constant embedding from Section 4.3 of the discrete dynamics for $x^t_\gamma=(\theta^t_\gamma,\widehat\alpha^t_\gamma)$ which we have rewritten in (119). Denoting $[t]=\lfloor t\rfloor/\gamma\in\mathbb{Z}_+$ as in (92), we have
\[
\bar x^t_\gamma=x^{[t]}_\gamma=(\theta^{[t]}_\gamma,\widehat\alpha^{[t]}_\gamma)\quad\text{for all }t\ge0.\tag{162}
\]
Throughout, we will write $\langle\cdot\rangle_{x^0}$ for expectations only over the Brownian motion $b^t$, i.e. conditional on $X,\theta^*,\varepsilon$ and the initial condition $x^0$.

Lemma 5.6.
Let us write the block forms Vt=/parenleftbiggUt∗ Wt∗/parenrightbigg ,¯Vt γ=/parenleftbigg¯Ut γ∗ ¯Wt γ∗/parenrightbigg ,du(xt) =/parenleftbiggJt 1Jt 2 Jt 3Jt 4/parenrightbigg ,du(¯xt γ) =/parenleftbigg¯Jt γ,1¯Jt γ,2¯Jt γ,3¯Jt γ,4/parenrightbigg with blocks of sizes dandK. Fixing any T >0, on the event {/⌊a∇⌈⌊lX/⌊a∇⌈⌊lop≤C0,/⌊a∇⌈⌊ly/⌊a∇⌈⌊l ≤C0√ d}, there is a constant C >0(depending on T,C0but not on γ) such that for any γ >0, we have sup t∈[0,T]/⌊a∇⌈⌊lJt 1/⌊a∇⌈⌊lop,/⌊a∇⌈⌊l¯Jt γ,1/⌊a∇⌈⌊lop≤C,sup t∈[0,T]/⌊a∇⌈⌊lJt 2/⌊a∇⌈⌊lF,/⌊a∇⌈⌊l¯Jt γ,2/⌊a∇⌈⌊lF≤C√ d, sup t∈[0,T]/⌊a∇⌈⌊lJt 3/⌊a∇⌈⌊lF,/⌊a∇⌈⌊l¯Jt γ,3/⌊a∇⌈⌊lF≤C/√ d,sup t∈[0,T]/⌊a∇⌈⌊lJt 4/⌊a∇⌈⌊lF,/⌊a∇⌈⌊l¯Jt γ,4/⌊a∇⌈⌊lF≤C,(163) sup t∈[0,T]{/⌊a∇⌈⌊lUt/⌊a∇⌈⌊lop,√ d/⌊a∇⌈⌊lWt/⌊a∇⌈⌊lF,/⌊a∇⌈⌊l¯Ut γ/⌊a∇⌈⌊lop,√ d/⌊a∇⌈⌊l¯Wt γ/⌊a∇⌈⌊lF} ≤C, (164) sup t∈[0,T]/⌊a∇⌈⌊l¯Ut+γ γ−¯Ut γ/⌊a∇⌈⌊lop≤Cγ,sup t∈[0,T]/⌊a∇⌈⌊l¯Wt+γ γ−¯Wt γ/⌊a∇⌈⌊lF≤Cγ/√ d. (165) Furthermore, for some ι:R+→R+satisfying limγ→0ι(γ) = 0and for any initial condition x0= (θ0,/hatwideα0), sup t∈[0,T]/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJt 1−¯Jt γ,1/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx0√ d,/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJt 2−¯Jt γ,2/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx0√ d,√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJt 3−¯Jt γ,3/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx0,/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJt 4−¯Jt γ,4/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx0≤ι(γ)/parenleftBig/⌊a∇⌈⌊lθ0/⌊a∇⌈⌊l√ d+/⌊a∇⌈⌊l/hatwideα0/⌊a∇⌈⌊l+1/parenrightBig . (166) Proof.For (163), we have by definition that Jt 1=−βX⊤X+Diag/bracketleftBig/parenleftBig ∂θs(θt j,/hatwideαt)/parenrightBigd j=1/bracketrightBig ,Jt 2=/parenleftBig ∇αs(θt j,/hatwideαt)⊤/parenrightBigd j=1, Jt 3= dθG(/hatwideαt,P(θt)),Jt 4= dαG(/hatwideαt,P(θt)), and similarly for ¯Jt γ,1,¯Jt γ,2,¯Jt γ,3,¯Jt γ,4. Then the desired bounds ( 163) hold on the event where /⌊a∇⌈⌊lX/⌊a∇⌈⌊lop≤C0, by Assumptions 2.2and2.3for the derivatives of s(·) andG(·). For (164), let us first prove the bounds for the discrete dynamics /⌊a∇⌈⌊l¯Uγ/⌊a∇⌈⌊lopand/⌊a∇⌈⌊l¯Wγ/⌊a∇⌈⌊lF. 
By definition, for eacht∈γZ+, ¯Ut+γ γ= (I+γ¯Jt γ,1)¯Ut γ+γ¯Jt γ,2¯Wt γ,¯Wt+1 γ=γ¯Jt γ,3¯Ut γ+(I+γ¯Jt γ,4)¯Wt γ. (167) Then applying ( 163), /⌊a∇⌈⌊l¯Ut+γ γ/⌊a∇⌈⌊lop≤(1+Cγ)/⌊a∇⌈⌊l¯Ut γ/⌊a∇⌈⌊lop+Cγ√ d/⌊a∇⌈⌊l¯Wt γ/⌊a∇⌈⌊lF,/⌊a∇⌈⌊l¯Wt+γ γ/⌊a∇⌈⌊lF≤Cγ√ d/⌊a∇⌈⌊l¯Ut γ/⌊a∇⌈⌊lop+(1+Cγ)/⌊a∇⌈⌊l¯Wt γ/⌊a∇⌈⌊lF, which further implies that /⌊a∇⌈⌊l¯Ut+γ γ/⌊a∇⌈⌊lop+√ d/⌊a∇⌈⌊l¯Wt+γ γ/⌊a∇⌈⌊lF≤(1+2Cγ)/parenleftbig /⌊a∇⌈⌊l¯Ut γ/⌊a∇⌈⌊lop+√ d/⌊a∇⌈⌊l¯Wt γ/⌊a∇⌈⌊lF/parenrightbig . Iterating this bound from the initial conditions ¯U0 γ=Idand¯W0 γ= 0K×dshows (164) for¯Ut γ,¯Wt γand all t≤T. For the continuous version /⌊a∇⌈⌊lUt/⌊a∇⌈⌊lopand/⌊a∇⌈⌊lWt/⌊a∇⌈⌊lF, note that analogously Ut=U0+/integraldisplayt 0(Js 1Us+Js 2Ws)ds,Wt=W0+/integraldisplayt 0(Js 3Us+Js 4Ws)ds sod dt(/⌊a∇⌈⌊lUt/⌊a∇⌈⌊lop+√ d/⌊a∇⌈⌊lWt/⌊a∇⌈⌊lF)≤C(/⌊a∇⌈⌊lUt/⌊a∇⌈⌊lop+√ d/⌊a∇⌈⌊lWt/⌊a∇⌈⌊lF). Then ( 164) follows by Gronwall’s lemma. For (165), we have by ( 167) and (163) /⌊a∇⌈⌊l¯Ut+γ γ−¯Ut γ/⌊a∇⌈⌊lop≤γ/parenleftBig /⌊a∇⌈⌊l¯Jt γ,1/⌊a∇⌈⌊lop/⌊a∇⌈⌊l¯Ut γ/⌊a∇⌈⌊lop+/⌊a∇⌈⌊l¯Jt γ,2/⌊a∇⌈⌊lF/⌊a∇⌈⌊l¯Wt γ/⌊a∇⌈⌊lF/parenrightBig ≤Cγ, /⌊a∇⌈⌊l¯Wt+γ γ−¯Wt γ/⌊a∇⌈⌊lF≤γ/parenleftBig /⌊a∇⌈⌊l¯Jt γ,3/⌊a∇⌈⌊lF/⌊a∇⌈⌊l¯Ut γ/⌊a∇⌈⌊lop+/⌊a∇⌈⌊l¯Jt γ,4/⌊a∇⌈⌊lF/⌊a∇⌈⌊l¯Wt γ/⌊a∇⌈⌊lF/parenrightBig ≤Cγ/√ d. 56 For (166), we have by the Lipschitz continuity
of s(·) in Assumption 2.2thatJt 1−¯Jt γ,1is diagonal with /⌊a∇⌈⌊lJt 1−¯Jt γ,1/⌊a∇⌈⌊lF≤C(/⌊a∇⌈⌊lθt−¯θt γ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideαt−¯/hatwideαt γ/⌊a∇⌈⌊l). Next using the arguments that led to ( 109), we have that on the event {/⌊a∇⌈⌊lX/⌊a∇⌈⌊lop≤C0,/⌊a∇⌈⌊ly/⌊a∇⌈⌊l ≤C0√ d}, withx0= (θ0,/hatwideα0), /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lθt/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0+/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯θt γ/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0+√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l/hatwideαt/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0+√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯/hatwideαt γ/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0≤C(/⌊a∇⌈⌊lθ0/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα0/⌊a∇⌈⌊l+√ d) (168) and /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lθt−¯θt γ/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0+√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l/hatwideαt−¯/hatwideαt γ/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0≤ι(γ)(/⌊a∇⌈⌊lθ0/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα0/⌊a∇⌈⌊l+√ d). (169) This implies the desired bound for /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJt 1−¯Jt γ,1/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx, and a similar argument leads to the bound for /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJt 2− ¯Jt γ,2/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx. Next, by the derivative bounds for G(·) in Assumption 2.3, we have /⌊a∇⌈⌊lJ3 t−¯Jt γ,3/⌊a∇⌈⌊lF≤C(/⌊a∇⌈⌊lθt− ¯θt γ/⌊a∇⌈⌊l/d+/⌊a∇⌈⌊l/hatwideαt−¯/hatwideαt γ/⌊a∇⌈⌊l/√ d) and/⌊a∇⌈⌊lJt 4−¯Jt γ,4/⌊a∇⌈⌊lF≤C(/⌊a∇⌈⌊lθt−¯θt γ/⌊a∇⌈⌊l/√ d+/⌊a∇⌈⌊l/hatwideαt−¯/hatwideαt γ/⌊a∇⌈⌊l), hence the desired bounds also follow by ( 169). Lemma 5.7. Define E={/⌊a∇⌈⌊lX/⌊a∇⌈⌊lop≤C0,/⌊a∇⌈⌊ly/⌊a∇⌈⌊l ≤C0√ d,/⌊a∇⌈⌊lθ0/⌊a∇⌈⌊l ≤C0√ d,/⌊a∇⌈⌊l/hatwideα0/⌊a∇⌈⌊l ≤C0for all large n,d}. (170) Fixing any T >0, there exists a constant C >0(depending on T,C0but not on γ) and a function ι:R+→ R+satisfying limγ→0ι(γ) = 0, such that on E, for any γ >0and all0≤s≤t≤T, |d−1TrRθ(t,s)−d−1γ−1TrRγ θ([t]+1,[s])| ≤ι(γ) (171) |n−1TrRη(t,s)−n−1γ−1TrRγ η([t]+1,[s])| ≤ι(γ). (172) Proof.Discretization of R θ.Let{Pγ t}t∈Z+be the Markov semigroup for the discrete dynamics ( 119), i.e. Pγ tf(x) =/a\}⌊∇a⌋k⌉tl⌉{tf(xt γ)/a\}⌊∇a⌋k⌉t∇i}htx. 
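For intuition, the Markov semigroup $P_tf(x)=\mathbb{E}[f(x^t)\mid x^0=x]$ and its discrete counterpart can be approximated by simulating the piecewise-constant (Euler) dynamics and averaging over Brownian paths. A toy sketch for a 1-D Ornstein–Uhlenbeck diffusion $\mathrm{d}x=-x\,\mathrm{d}t+\sqrt2\,\mathrm{d}b$, where $P_tf$ is explicit for $f(x)=x^2$ (all parameters below are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, t, x0, n_paths = 0.005, 1.0, 1.5, 20000
steps = int(t / gamma)

# Euler discretization of dx = -x dt + sqrt(2) db, run over many paths
x = np.full(n_paths, x0)
for _ in range(steps):
    x += gamma * (-x) + np.sqrt(2 * gamma) * rng.standard_normal(n_paths)

mc = float(np.mean(x ** 2))
# exact semigroup value: E[x_t^2] = x0^2 e^{-2t} + (1 - e^{-2t})
exact = x0 ** 2 * np.exp(-2 * t) + (1 - np.exp(-2 * t))
assert abs(mc - exact) < 0.06
```

The $O(\gamma)$ discretization bias of the Euler scheme is exactly the kind of error that Lemma 5.7 controls uniformly over $t\in[0,T]$ via the function $\iota(\gamma)$.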
Then applying Proposition A.4, for any s,t∈Z+withs < t, ∂ε|ε=0/a\}⌊∇a⌋k⌉tl⌉{tθt,(s,j),ε γ,j/a\}⌊∇a⌋k⌉t∇i}htx=γPγ s+1∂jPγ t−s−1ej(x). This implies, for the given initial condition of the dynamics x0= (θ0,/hatwideα0), that γ−1TrRγ θ(t,s) =d/summationdisplay j=1Pγ s+1∂jPγ t−s−1ej(x0). Let{Pt}t≥0analogously denote the Markov semigroup of the continuous dynam ics (4–5), i.e.Ptf(x) = /a\}⌊∇a⌋k⌉tl⌉{tf(xt)/a\}⌊∇a⌋k⌉t∇i}htx. Then applying Proposition A.1, for any s,t∈R+withs≤t, TrRθ(t,s) =d/summationdisplay j=1Ps∂jPt−sej(x0). Thus, for all s,t∈R+withs≤t, /vextendsingle/vextendsingle/vextendsingleTrRθ(t,s)−γ−1TrRγ θ([t]+1,[s])/vextendsingle/vextendsingle/vextendsingle=/vextendsingle/vextendsingle/vextendsingled/summationdisplay j=1Ps∂jPt−sej(x0)−d/summationdisplay j=1Pγ [s]+1∂jPγ [t]−[s]ej(x0)/vextendsingle/vextendsingle/vextendsingle ≤/vextendsingle/vextendsingle/vextendsinglePs/parenleftBigd/summationdisplay j=1∂jPt−sej−d/summationdisplay j=1∂jPγ [t]−[s]ej/parenrightBig (x0)/vextendsingle/vextendsingle/vextendsingle /bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright (I)+/vextendsingle/vextendsingle/vextendsingle(Ps−Pγ [s]+1)/parenleftBigd/summationdisplay j=1∂jPγ [t]−[s]ej/parenrightBig (x0)/vextendsingle/vextendsingle/vextendsingle /bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright (II). 57 Bound of (I).By Proposition A.2(c),/summationtextd j=1∂jPt−sej(x) =/summationtextd j=1/a\}⌊∇a⌋k⌉tl⌉{t(Vt−s)jj/a\}⌊∇a⌋k⌉t∇i}htx, where{xt,Vt}t≥0are the solution to ( 160) with initial condition x0=x. Similarly, by Lemma A.5and the identification ( 162),/summationtextd j=1∂jPγ [t]−[s]ej(x) =/summationtextd j=1/a\}⌊∇a⌋k⌉tl⌉{t(¯V⌊t⌋−⌊s⌋ γ)jj/a\}⌊∇a⌋k⌉t∇i}htx, where{¯xt γ,¯Vt γ}t≥0are the solution to ( 161). 
Let us write Vt=/parenleftbigg Ut∗ Wt∗/parenrightbigg ,¯Vt γ=/parenleftbigg¯Ut γ∗ ¯Wt γ∗/parenrightbigg ,du(xt) =/parenleftbigg Jt 1Jt 2 Jt 3Jt 4/parenrightbigg ,du(¯xt γ) =/parenleftbigg¯Jt γ,1¯Jt γ,2¯Jt γ,3¯Jt γ,4/parenrightbigg with blocks of sizes dandK. Then /vextendsingle/vextendsingle/vextendsingled/summationdisplay j=1∂jPt−sej(x)−∂jPγ [t]−[s]ej(x)/vextendsingle/vextendsingle/vextendsingle=/vextendsingle/vextendsingle/vextendsingle/a\}⌊∇a⌋k⌉tl⌉{tTrUt−s−Tr¯U⌊t⌋−⌊s⌋ γ/a\}⌊∇a⌋k⌉t∇i}htx/vextendsingle/vextendsingle/vextendsingle ≤√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lUt−s−¯Ut−s γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx+d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Ut−s γ−¯U⌊t⌋−⌊s⌋ γ/⌊a∇⌈⌊lop/a\}⌊∇a⌋k⌉t∇i}htx.(173) Since|(t−s)−(⌊t⌋−⌊s⌋)| ≤Cγ, the second term satisfies /⌊a∇⌈⌊l¯Ut−s−¯Ut−s/⌊a∇⌈⌊lop≤Cγby (165). For the first term, note that by definition Ut=U0+/integraldisplayt 0(Js 1Us+Js 2Ws)ds,Wt=W0+/integraldisplayt 0(Js 3Us+Js 4Ws)ds, ¯Ut γ=U0+/integraldisplay⌊t⌋ 0(¯Js γ,1¯Us γ+¯Js γ,2¯Ws γ)ds,¯Wt γ=W0+/integraldisplay⌊t⌋ 0(¯Js γ,3¯Us γ+¯Js γ,4¯Ws γ)ds. Hence /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lUt−¯Ut γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx≤/integraldisplay⌊t⌋ 0/bracketleftBig /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 1−¯Js γ,1/⌊a∇⌈⌊lF/⌊a∇⌈⌊lUs/⌊a∇⌈⌊lop/a\}⌊∇a⌋k⌉t∇i}htx+/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Js γ,1/⌊a∇⌈⌊lop/⌊a∇⌈⌊lUs−¯Us γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx +/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 2−¯Js γ,2/⌊a∇⌈⌊lF/⌊a∇⌈⌊lWs/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx+/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Js γ,2/⌊a∇⌈⌊lF/⌊a∇⌈⌊lWs−¯Ws γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx/bracketrightBig ds +/integraldisplayt ⌊t⌋/bracketleftBig /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 1/⌊a∇⌈⌊lF/⌊a∇⌈⌊lUs/⌊a∇⌈⌊lop/a\}⌊∇a⌋k⌉t∇i}htx+/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 2/⌊a∇⌈⌊lF/⌊a∇⌈⌊lWs/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx/bracketrightBig ds LetC,C′>0 be constants depending on Tbut notγ, and let ι(γ),ι′(γ) be constants depending also on γand satisfying ι(γ),ι′(γ)→0 asγ→0, all changing from instance to instance. 
By Lemma 5.6, with x= (θ,/hatwideα), we have /⌊a∇⌈⌊lUs/⌊a∇⌈⌊lop≤C,√ d/⌊a∇⌈⌊lWs/⌊a∇⌈⌊lF≤C,/⌊a∇⌈⌊lJs 1/⌊a∇⌈⌊lop,/⌊a∇⌈⌊l¯Js γ,1/⌊a∇⌈⌊lop≤C,/⌊a∇⌈⌊lJs 2/⌊a∇⌈⌊lF,/⌊a∇⌈⌊l¯Js γ,2/⌊a∇⌈⌊lF≤C√ d, /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 1−¯Js γ,1/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx≤ι(γ)(/⌊a∇⌈⌊lθ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα/⌊a∇⌈⌊l+√ d),/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 2−¯Js γ,2/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx≤ι(γ)(/⌊a∇⌈⌊lθ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα/⌊a∇⌈⌊l+√ d), hence /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lUt−¯Ut γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx≤C/integraldisplayt 0/parenleftbig /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lUs−¯Us γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx+√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lWs−¯Ws γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx/parenrightbig ds+ι(γ)(/⌊a∇⌈⌊lθ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα/⌊a∇⌈⌊l+√ d).(174) Next we have /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lWt−¯Wt γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx≤/integraldisplay⌊t⌋ 0/bracketleftBig /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 3−¯Js γ,3/⌊a∇⌈⌊lF/⌊a∇⌈⌊lUs/⌊a∇⌈⌊lop/a\}⌊∇a⌋k⌉t∇i}htx+/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Js γ,3/⌊a∇⌈⌊lF/⌊a∇⌈⌊lUs−¯Us γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx +/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 4−¯Js γ,4/⌊a∇⌈⌊lF/⌊a∇⌈⌊lWs/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx+/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Js γ,4/⌊a∇⌈⌊lF/⌊a∇⌈⌊lWs−¯Ws γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx/bracketrightBig ds +/integraldisplayt ⌊t⌋/bracketleftBig /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 3/⌊a∇⌈⌊lF/⌊a∇⌈⌊lUs/⌊a∇⌈⌊lop/a\}⌊∇a⌋k⌉t∇i}htx+/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 4/⌊a∇⌈⌊lF/⌊a∇⌈⌊lWs/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx/bracketrightBig ds. By Lemma 5.6, we have also /⌊a∇⌈⌊lJs 3/⌊a∇⌈⌊lF,/⌊a∇⌈⌊l¯Js γ,3/⌊a∇⌈⌊lF≤C/√ d,/⌊a∇⌈⌊lJs 4/⌊a∇⌈⌊lF,/⌊a∇⌈⌊l¯Js γ,4/⌊a∇⌈⌊lF≤C, 58 d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lJs 3−¯Js γ,3/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx≤ι(γ)(/⌊a∇⌈⌊lθ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα/⌊a∇⌈⌊l+√ d),√ d/⌊a∇⌈⌊lJs 4−¯Js γ,4/⌊a∇⌈⌊lF≤ι(γ)(/⌊a∇⌈⌊lθ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα/⌊a∇⌈⌊l+√ d), which implies that √ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lWt−¯Wt γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx≤C/integraldisplayt
0(/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lUs−¯Us/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx+√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lWs−¯Ws/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx)ds+ι(γ)/parenleftBig/⌊a∇⌈⌊lθ0/⌊a∇⌈⌊l√ d+/⌊a∇⌈⌊l/hatwideα0/⌊a∇⌈⌊l+1/parenrightBig .(175) Combining ( 174) and (175) yields /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lUt−¯Ut γ/⌊a∇⌈⌊lF+√ d/⌊a∇⌈⌊lWt−¯Wt γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx ≤C/integraldisplayt 0(/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lUs−¯Us γ/⌊a∇⌈⌊lF+√ d/⌊a∇⌈⌊lWs−¯Ws γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx)ds+ι(γ)(/⌊a∇⌈⌊lθ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα/⌊a∇⌈⌊l+√ d), so Gronwall’s lemma gives supt∈[0,T]/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lUt−¯Ut γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx+√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lWt−¯Wt γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx≤ι(γ)(/⌊a∇⌈⌊lθ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα/⌊a∇⌈⌊l+√ d). Hence the bound ( 173) reads, for x= (θ,/hatwideα), /vextendsingle/vextendsingle/vextendsingled/summationdisplay j=1∂jPt−sej(x)−∂jPγ [t]−[s]−1ej(x)/vextendsingle/vextendsingle/vextendsingle≤ι(γ)(√ d/⌊a∇⌈⌊lθ/⌊a∇⌈⌊l+d/⌊a∇⌈⌊l/hatwideα/⌊a∇⌈⌊l+d). (176) Applying this with x=xs= (θs,/hatwideαs), this implies that (I)≤ι(γ)(√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lθs/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0+d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l/hatwideαs/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0+d)≤ι(γ)d, the last step using the bound ( 168) and conditions for ( θ0,/hatwideα0) on the event ( 170). Bound of (II).Letf(x) =/summationtextd j=1∂jPγ [t]−[s]ej(x). WefirstestablishaLipschitzboundfor f: Let{¯xt γ,¯Vt γ}t∈Z+ and{˜xt γ,˜Vt γ}t∈Z+be defined by ( 161) with initializations x= (θ,/hatwideα) and˜x= (˜θ,˜/hatwideα) respectively, coupled by the same Brownian motion. We write /a\}⌊∇a⌋k⌉tl⌉{t·/a\}⌊∇a⌋k⌉t∇i}htfor the average over this Brownian motion, and denote by ˜Ut γ,˜Wt γ and˜Jt γ,1,˜Jt γ,2,˜Jt γ,3,˜Jt γ,4the blocks of ˜Vt γand du(˜xt γ). Then, using f(x) =/a\}⌊∇a⌋k⌉tl⌉{tTr¯Uτ γ/a\}⌊∇a⌋k⌉t∇i}htwithτ=⌊t⌋ − ⌊s⌋as established above, |f(x)−f(˜x)| ≤ |/a\}⌊∇a⌋k⌉tl⌉{tTr¯Uτ γ−Tr˜Uτ γ/a\}⌊∇a⌋k⌉t∇i}ht| ≤√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Uτ γ−˜Uτ γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}ht. 
(177) We apply a similar argument as in term ( I), noting that /⌊a∇⌈⌊l¯Ut+γ γ−˜Ut+γ γ/⌊a∇⌈⌊lF≤γ/⌊a∇⌈⌊l¯Jt γ,1−˜Jt γ,1/⌊a∇⌈⌊lF/⌊a∇⌈⌊l¯Ut γ/⌊a∇⌈⌊lop+(1+γ/⌊a∇⌈⌊l˜Jt γ,1/⌊a∇⌈⌊lop)/⌊a∇⌈⌊l¯Ut γ−˜Ut γ/⌊a∇⌈⌊lF +γ/⌊a∇⌈⌊l¯Jt γ,2−˜Jt γ,2/⌊a∇⌈⌊lF/⌊a∇⌈⌊l¯Wt γ/⌊a∇⌈⌊lF+γ/⌊a∇⌈⌊l˜Jt γ,2/⌊a∇⌈⌊lF/⌊a∇⌈⌊l¯Wt γ−˜Wt γ/⌊a∇⌈⌊lF, /⌊a∇⌈⌊l¯Wt+γ γ−˜Wt+γ γ/⌊a∇⌈⌊lF≤γ/⌊a∇⌈⌊l¯Jt γ,3−˜Jt γ,3/⌊a∇⌈⌊lF/⌊a∇⌈⌊l¯Ut γ/⌊a∇⌈⌊lop+γ/⌊a∇⌈⌊l˜Jt γ,3/⌊a∇⌈⌊lF/⌊a∇⌈⌊l¯Ut γ−˜Ut γ/⌊a∇⌈⌊lF +γ/⌊a∇⌈⌊l¯Jt γ,4−˜Jt γ,4/⌊a∇⌈⌊lF/⌊a∇⌈⌊l¯Wt γ/⌊a∇⌈⌊lF+(1+γ/⌊a∇⌈⌊l˜Jt γ,4/⌊a∇⌈⌊lF)/⌊a∇⌈⌊l¯Wt γ−˜Wt γ/⌊a∇⌈⌊lF. By Lemma 5.6, we have /⌊a∇⌈⌊l¯Ut γ/⌊a∇⌈⌊lop,√ d/⌊a∇⌈⌊l¯Wt γ/⌊a∇⌈⌊lF≤C, and/⌊a∇⌈⌊l˜Jt γ,1/⌊a∇⌈⌊lop,/⌊ar⌈⌊l˜Jt γ,2/⌊ar⌈⌊lF√ d,√ d/⌊a∇⌈⌊l˜Jt γ,3/⌊a∇⌈⌊lF,/⌊a∇⌈⌊l˜Jt γ,4/⌊a∇⌈⌊lF≤C. Further- more similar arguments to ( 166) in Lemma 5.6show that /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Jt γ,1−˜Jt γ,1/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}ht,/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Jt γ,2−˜Jt γ,2/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}ht,d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Jt γ,3−˜Jt γ,3/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}ht,√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Jt γ,4−˜Jt γ,4/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}ht ≤C/angbracketleftBig /⌊a∇⌈⌊l¯θt−˜θt/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l¯/hatwideαt−˜/hatwideαt/⌊a∇⌈⌊l/angbracketrightBig ≤C′/parenleftBig /⌊a∇⌈⌊lθ−˜θ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα−˜/hatwideα/⌊a∇⌈⌊l/parenrightBig , the quantities in the last expression denoting the differences in initial conditions. Hence /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l˜Ut+γ γ−¯Ut+γ γ/⌊a∇⌈⌊lF+√ d/⌊a∇⌈⌊l¯Wt+γ γ−˜Wt+γ γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}ht ≤(1+Cγ)/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Ut γ−˜Ut γ/⌊a∇⌈⌊lF+√ d/⌊a∇⌈⌊l¯Wt γ−˜Wt γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}ht+Cγ/parenleftBig /⌊a∇⌈⌊lθ−˜θ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα−˜/hatwideα/⌊a∇⌈⌊l/parenrightBig . Iterating this bound gives /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯Uτ γ−˜Uτ γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}ht ≤C(/⌊a∇⌈⌊lθ−˜θ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα−˜/hatwideα/⌊a∇⌈⌊l), which applied to ( 177) yields our desired Lipschitz bound |f(x)−f(˜x)| ≤C√ d(/⌊a∇⌈⌊lθ−˜θ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα−˜/hatwideα/⌊a∇⌈⌊l). 
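The iteration pattern used here and throughout Lemma 5.6 is the discrete Gronwall argument: a one-step inequality $a_{k+1}\le(1+C\gamma)a_k+C\gamma\,\epsilon$ iterated over $T/\gamma$ steps yields $a_K\le e^{CT}(a_0+\epsilon)$, uniformly in the step size $\gamma$. A minimal check (the constants are arbitrary):

```python
import math

# Iterate a_{k+1} <= (1 + C*gamma) a_k + C*gamma*eps over T/gamma steps;
# the resulting bound exp(C*T) * (a0 + eps) is uniform in gamma.
C, T, eps, a0 = 2.0, 1.0, 0.3, 0.5
for gamma in (0.1, 0.01, 0.001):
    a = a0
    for _ in range(int(T / gamma)):
        a = (1 + C * gamma) * a + C * gamma * eps  # worst case: run the inequality as equality
    assert a <= math.exp(C * T) * (a0 + eps)
```

The closed form $a_K=(1+C\gamma)^K a_0+\epsilon\big((1+C\gamma)^K-1\big)$ with $(1+C\gamma)^{T/\gamma}\le e^{CT}$ makes the bound transparent.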
59 Then, writing x0= (θ0,/hatwideα0) for the original initial conditions, (II) =/vextendsingle/vextendsingle/vextendsingle(Ps−Pγ [s]+1)f(x0)/vextendsingle/vextendsingle/vextendsingle=/vextendsingle/vextendsingle/vextendsingle/a\}⌊∇a⌋k⌉tl⌉{tf(xs)/a\}⌊∇a⌋k⌉t∇i}htx0−/a\}⌊∇a⌋k⌉tl⌉{tf(¯x⌊s⌋+γ γ)/a\}⌊∇a⌋k⌉t∇i}htx0/vextendsingle/vextendsingle/vextendsingle≤C√ d/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lθs−¯θ⌊s⌋+γ γ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideαs−¯/hatwideα⌊s⌋+γ γ/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0 where we couple {(θt,/hatwideαt)}t≥0and{¯θt γ,¯/hatwideαt γ}t≥0by the same Brownian motion. Bounding /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lθs−¯θ⌊s⌋+γ γ/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0≤ /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lθs−¯θs γ/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0+/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊l¯θs γ−¯θ⌊s⌋+γ γ/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0 and similarly for /hatwideα, and then applying ( 168) and (169), we obtain on the event ( 170) that /a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lθs−¯θ⌊s⌋+γ γ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideαs−¯/hatwideα⌊s⌋+γ γ/⌊a∇⌈⌊l/a\}⌊∇a⌋k⌉t∇i}htx0≤ι(γ)(/⌊a∇⌈⌊lθ0/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα0/⌊a∇⌈⌊l+√ d)≤Cι(γ)√ d. Hence also (II)≤ι(γ)d. The proof of ( 171) is completed by combining the bounds for ( I) and (II). Discretization of R η.LetPγ tandPtbe the discrete and continuous Markov semigroups defined above. Letxi(θ,/hatwideα) =e⊤ iXθ. Introduce the matrix E∈Rd×(d+K)defined by E(θ,/hatwideα) =θ, so that this reads xi(x) =e⊤ iXExforx= (θ,/hatwideα). Then for any s,t∈Z+withs < t, Proposition A.4gives ∂ε|ε=0/a\}⌊∇a⌋k⌉tl⌉{tηt,[s,i],ε i/a\}⌊∇a⌋k⌉t∇i}htx=γPγ s+1e⊤ iXE∇Pγ t−s−1xi(x). Let us introduce the shorthand Pγ t(x) =/a\}⌊∇a⌋k⌉tl⌉{txt γ/a\}⌊∇a⌋k⌉t∇i}htxas a map Pγ t:Rd+K→Rd+K, soPγ txi(x) =e⊤ iXEPγ t(x). Denote also d Pγ t(·) :Rd+K→R(d+K)×(d+K)as the derivative of this map x/ma√sto→Pγ t(x). 
Then the above may be written as ∂ε|ε=0/a\}⌊∇a⌋k⌉tl⌉{tηt,[s,i],ε i/a\}⌊∇a⌋k⌉t∇i}htx=γPγ s+1/parenleftBig e⊤ iXEdPγ t−s−1(·)⊤E⊤X⊤ei/parenrightBig (x), implying that γ−1TrRγ η(t,s) =δβ2n/summationdisplay i=1Pγ s+1/parenleftBig e⊤ iXEdPγ t−s−1(·)⊤E⊤X⊤ei/parenrightBig (x0) =δβ2Pγ s+1Tr/bracketleftBig dPγ t−s−1(·)E⊤X⊤XE/bracketrightBig (x0). By Proposition A.1, we have analogously for any s,t∈R+withs≤tthat TrRη(t,s) =δβ2PsTr/bracketleftbig dPt−s(·)E⊤X⊤XE/bracketrightbig (x0). Hence for all s,t∈R+withs≤t, /vextendsingle/vextendsingle/vextendsingleTrRη(t,s)−γ−1TrRγ η([t]+1,[s])/vextendsingle/vextendsingle/vextendsingle =δβ2/bracketleftbigg/vextendsingle/vextendsingle/vextendsinglePsTr/bracketleftBig/parenleftBig dPt−s(·)−dPγ [t]−[s](·)/parenrightBig E⊤X⊤XE/bracketrightBig (x0)/vextendsingle/vextendsingle/vextendsingle /bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright (I)+/vextendsingle/vextendsingle/vextendsingle(Ps−Pγ [s]+1)Tr/bracketleftBig dPγ [t]−[s](·)E⊤X⊤XE/bracketrightBig (x0)/vextendsingle/vextendsingle/vextendsingle /bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright (II)/bracketrightbigg . Bound of
(I).NotethatbyProposition A.2(c), TrdPt−s(x)E⊤X⊤XE=/a\}⌊∇a⌋k⌉tl⌉{tTrVt−sE⊤X⊤XE/a\}⌊∇a⌋k⌉t∇i}htx=/a\}⌊∇a⌋k⌉tl⌉{tTrUt−s· X⊤X/a\}⌊∇a⌋k⌉t∇i}htx,where{xt,Vt}t≥0followthedynamics( 160)andUtasbeforeistheupper-leftblockof Vt. Similarly, LemmaA.5yields that Trd Pγ [t]−[s](x)E⊤X⊤XE=/a\}⌊∇a⌋k⌉tl⌉{tTr¯V⌊t⌋−⌊s⌋ γE⊤X⊤XE/a\}⌊∇a⌋k⌉t∇i}htx=/a\}⌊∇a⌋k⌉tl⌉{tTr¯U⌊t⌋−⌊s⌋ γX⊤X/a\}⌊∇a⌋k⌉t∇i}htx, where {¯xt γ,¯Vt γ}t≥0follow (161). Hence, with x= (θ,/hatwideα), /vextendsingle/vextendsingle/vextendsingleTr/parenleftBig dPt−s(x)−dPγ [t]−[s](x)/parenrightBig E⊤X⊤XE/vextendsingle/vextendsingle/vextendsingle=/a\}⌊∇a⌋k⌉tl⌉{tTr(Ut−s−¯U⌊t⌋−⌊s⌋ γ)X⊤X/a\}⌊∇a⌋k⌉t∇i}htx ≤√ d/⌊a∇⌈⌊lX/⌊a∇⌈⌊l2 op/a\}⌊∇a⌋k⌉tl⌉{t/⌊a∇⌈⌊lUt−s−¯U⌊t⌋−⌊s⌋ γ/⌊a∇⌈⌊lF/a\}⌊∇a⌋k⌉t∇i}htx ≤ι(γ)(√ d/⌊a∇⌈⌊lθ/⌊a∇⌈⌊l+d/⌊a∇⌈⌊l/hatwideα/⌊a∇⌈⌊l+d) 60 using the preceding bounds leading to ( 176). Then applying this with x=xsshows (I)≤ι(γ)d. Bound of (II).Letf(x) = TrdPγ [t]−[s](x)E⊤X⊤XE= Tr/a\}⌊∇a⌋k⌉tl⌉{t¯Uτ γ/a\}⌊∇a⌋k⌉t∇i}htxX⊤X, whereτ=⌊t⌋ −⌊s⌋. By the same arguments as above, |f(x)−f(˜x)| ≤C√ d(/⌊a∇⌈⌊lθ−˜θ/⌊a∇⌈⌊l+√ d/⌊a∇⌈⌊l/hatwideα−˜/hatwideα/⌊a∇⌈⌊l), leading to ( II)≤ι(γ)d. Combining these bounds for ( I) and (II) shows ( 172). We now conclude the proof of Theorem 2.8. Proof of Theorem 2.8.The claimsfor d−1TrCθ(t,s),d−1TrCθ(t,∗), andn−1TrCη(t,s) followimmediately from the definitions of these quantities, Corollary 2.6applied with fθ(θs,θt) =θsθt,fθ(θ∗,θt) =θ∗θt, fη(η∗,ε,ηs,ηt) =δβ2(ηs−η∗−ε)(ηt−η∗−ε), and an application of the dominated convergence theorem to take expectations over {bt}t∈[0,T]in the almost-sure convergence statements of Corollary 2.6. For the claim for d−1TrRθ(t,s), for any s,t∈[0,T] withs≤t, by Lemma 5.7, almost surely limsup n,d→∞/vextendsingle/vextendsingle/vextendsingle1 dTrRθ(t,s)−1 γ·1 dTrRγ θ([t]+1,[s])/vextendsingle/vextendsingle/vextendsingle≤ι(γ). By Lemma 5.1and the identification ( 99) of Lemma 4.3, almost surely lim n,d→∞1 γ·1 dTrRγ θ([t]+1,[s]) =1 γRγ θ([t]+1,[s]) =¯Rγ θ(t+γ,s). 
The bound (104) implies uniform convergence of $\bar R^\gamma_\theta(t,s)$ to $R^\gamma_\theta(t,s)$ as $\gamma\to0$, and $R^\gamma_\theta(t,s)$ is continuous in $s,t$ by Theorem 2.4 and the definition of the space $S_{\mathrm{cont}}$. Thus $\lim_{\gamma\to0}|\bar R^\gamma_\theta(t+\gamma,s)-R_\theta(t,s)|=0$. Then, taking the limit $n,d\to\infty$ followed by $\gamma\to0$ shows almost surely
\[
\lim_{n,d\to\infty}\Big|\frac{1}{d}\operatorname{Tr}R_\theta(t,s)-R_\theta(t,s)\Big|=0.
\]
The proof of the claim for $n^{-1}\operatorname{Tr}R_\eta(t,s)$ is the same.

A Existence of linear response functions

A.1 Continuous dynamics

Fix any dimension $m\ge1$, and consider the function classes
\[
\mathcal{A}=\Big\{f:\mathbb{R}^m\to\mathbb{R}\text{ twice continuously-differentiable}:\ \nabla f(x),\nabla^2f(x)\text{ are globally bounded}\Big\},
\]
\[
\mathcal{B}=\Big\{f:\mathbb{R}^m\to\mathbb{R}^m\text{ twice continuously-differentiable}:\ \nabla f_i(x),\nabla^2f_i(x)\text{ are globally bounded and H\"older-continuous for each }i=1,\dots,m\Big\}.
\]
We consider a general stochastic diffusion over $x^t\in\mathbb{R}^m$ given by
\[
\mathrm{d}x^t=u(x^t)\,\mathrm{d}t+\sqrt{2}\,M\,\mathrm{d}b^t\tag{178}
\]
where $b^t\in\mathbb{R}^m$ is a standard Brownian motion, $u(\cdot)$ a Lipschitz drift function, and $M\in\mathbb{R}^{m\times m}$ a deterministic diffusion coefficient matrix. We note that the joint evolution of $x^t=(\theta^t,\widehat\alpha^t)$ in (4–5) is of this form, with $m=d+K$ and with $u(\cdot)$ and $M$ as defined in (159). The conditions of Theorem 2.8 ensure that this drift function $u(\cdot)$ satisfies $u\in\mathcal{B}$. We prove in this section the following result:

Proposition A.1. Suppose $u\in\mathcal{B}$, and let $\{x^t\}_{t\ge0}$ be the solution of (178) with initial condition $x^0=x$. For any $a\in\mathcal{A}$, $b\in\mathcal{B}$, and $x\in\mathbb{R}^m$, define
\[
R(t,s)=P_s\big(b^\top\nabla P_{t-s}a\big)(x)\tag{179}
\]
where $P_tf(x)=\mathbb{E}[f(x^t)\mid x^0=x]$. Then $\{R(t,s)\}_{0\le s\le t}$ is the unique continuous function for which the following holds: Let $h:[0,\infty)\to\mathbb{R}$ be any continuous bounded function, and for each $\varepsilon>0$ let $\{x^{t,\varepsilon}\}_{t\ge0}$ be the solution of the perturbed dynamics
\[
\mathrm{d}x^{t,\varepsilon}=\Big(u(x^{t,\varepsilon})+\varepsilon h(t)b(x^{t,\varepsilon})\Big)\,\mathrm{d}t+\sqrt{2}\,M\,\mathrm{d}b^t\tag{180}
\]
with the same initial condition $x^{0,\varepsilon}=x$. Then for any $t>0$,
\[
\lim_{\varepsilon\to0}\frac{1}{\varepsilon}\Big(\mathbb{E}[a(x^{t,\varepsilon})\mid x^{0,\varepsilon}=x]-\mathbb{E}[a(x^t)\mid x^0=x]\Big)=\int_0^tR(t,s)h(s)\,ds.
\]
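Proposition A.1 can be illustrated on a toy example where the response function is explicit: for the 1-D Ornstein–Uhlenbeck diffusion $\mathrm{d}x=-x\,\mathrm{d}t+\sqrt2\,\mathrm{d}b$ with perturbation direction $b(x)=1$ and observable $a(x)=x$, one has $R(t,s)=e^{-(t-s)}$. The sketch below (the profile $h$, step size, and $\varepsilon$ are illustrative choices) compares the finite-difference response, computed with a shared Brownian path, to $\int_0^t R(t,s)h(s)\,ds$:

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, T, epsilon = 0.001, 1.0, 1e-3
steps = int(T / gamma)
h = lambda s: np.cos(3 * s)  # illustrative perturbation profile h(s)

x = xe = 0.7  # same initial condition for both dynamics
for k in range(steps):
    db = np.sqrt(2 * gamma) * rng.standard_normal()
    s = k * gamma
    # unperturbed and perturbed Euler steps, coupled by the same Brownian increment
    x, xe = x + gamma * (-x) + db, xe + gamma * (-xe + epsilon * h(s)) + db
fd = (xe - x) / epsilon  # finite-difference response of E[a(x^t)] with a(x) = x

# predicted response: integral of R(t,s) h(s) with R(t,s) = exp(-(t-s))
ss = gamma * np.arange(steps)
pred = float(gamma * np.sum(np.exp(-(T - ss)) * h(ss)))
assert abs(fd - pred) < 0.01
```

Because the drift is linear and the perturbation additive, the difference of the coupled paths is deterministic, so a single path already matches the prediction; for nonlinear drifts one would average over many paths.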
Statements similar to Proposition A.1 have been established in [55, 56]. Our setting here is somewhat non-standard, in that $M$ may be rank-degenerate, so the PDE describing the law of $\{x^t\}_{t\ge0}$ is not uniformly elliptic. We show Proposition A.1 in two steps, first deriving regularity estimates for the Markov semigroup $\{P_t\}_{t\ge0}$ in such settings using the results of [63], and then applying the proof idea of [56, Theorem 3.9] with these regularity estimates in place of the Schauder estimates derived therein from uniform ellipticity. We will write
\[
P_tf(x)=\langle f(x^t)\rangle_{x^0=x}=\mathbb{E}[f(x^t)\mid x^0=x]\tag{181}
\]
for the Markov semigroup
associated to (178). When the initial condition $x^0=x$ is clear from context, we will abbreviate $\langle f(x^t)\rangle=\langle f(x^t)\rangle_{x^0=x}$. We denote the infinitesimal generator $\mathrm{L}$ of this semigroup by
\[
\mathrm{L}f(x)=u(x)^\top\nabla f(x)+\operatorname{Tr}MM^\top\nabla^2f(x).\tag{182}
\]
Throughout this section, constants $C,C',c>0$ may depend on the dimension $m$ and the functions $u,a,b$.

Proposition A.2. Suppose the assumptions of Proposition A.1 hold. Let $u_i:\mathbb{R}^m\to\mathbb{R}$ be the $i$-th coordinate of $u$, and let $\partial_ju_i$ and $\partial_j\partial_ku_i$ be its first-order and second-order partial derivatives.

(a) For each $x\in\mathbb{R}^m$, the diffusion (178) has a unique solution $\{x^t\}_{t\ge0}$ with initial condition $x^0=x$. Furthermore, there exists a modification $x^t(x)$ of this solution for each initial condition $x^0=x$ such that $x^t(x)$ is jointly continuous in $(t,x)$ and twice continuously-differentiable in $x$.

(b) For every $i=1,\dots,m$, let $x^t_i(x)$ be the $i$-th coordinate of $x^t(x)$, and let $v^t_i(x)=\nabla x^t_i(x)\in\mathbb{R}^m$ and $H^t_i(x)=\nabla^2x^t_i(x)\in\mathbb{R}^{m\times m}$ be its gradient and Hessian in $x$. Then $(v^t_i(x),H^t_i(x))$ are solutions to the first and second variation processes
\[
\begin{cases}
\mathrm{d}v^t_i=\sum_{j=1}^m\partial_ju_i(x^t(x))\cdot v^t_j\,\mathrm{d}t\\[4pt]
\mathrm{d}H^t_i=\Big(\sum_{j,k=1}^m\partial_j\partial_ku_i(x^t(x))\cdot v^t_jv^{t\,\top}_k+\sum_{j=1}^m\partial_ju_i(x^t(x))\cdot H^t_j\Big)\,\mathrm{d}t
\end{cases}\tag{183}
\]
with initial conditions $v^0_i(x)=e_i$ (the $i$-th standard basis vector in $\mathbb{R}^m$) and $H^0_i(x)=0$. Furthermore $\|v^t_i(x)\|_2,\|H^t_i(x)\|_{\mathrm{op}}\le e^{Ct}$ for some $C>0$ and all $x\in\mathbb{R}^m$ and $t\ge0$.

(c) For any $f\in\mathcal{A}$, the map $(t,x)\mapsto P_tf(x)$ is continuously-differentiable in $t$ and twice continuously-differentiable in $x$, and furthermore $\nabla P_tf(x),\nabla^2P_tf(x)$ are uniformly bounded over $t\in[0,T]$ and $x\in\mathbb{R}^m$ for any fixed $T>0$.
For any t≥0and initial condition x0=x, letting (xt,vt i,Ht i)≡ (xt(x),vt i(x),Ht i(x))be as defined in parts (a) and (b), we have ∇Ptf(x) =/angbracketleftbiggm/summationdisplay j=1∂jf(xt)vt j/angbracketrightbigg ∇2Ptf(x) =/angbracketleftbiggm/summationdisplay j,k=1∂j∂kf(xt)vt jvt⊤ k+m/summationdisplay j=1∂jf(xt)Ht j/angbracketrightbigg (184) and ∂tPtf(x) =PtLf(x) = LPtf(x). (185) 62 Proof.Since the coordinates of u∈ Bare Lipschitz with bounded and H¨ older-continuous first and secon d derivatives, part (a) follows directly from [ 63, Theorems II.1.2, II.3.3]. For part (b), since u∈ Bhas bounded and H¨ older-continuous first derivative, [ 63, Theorem II.3.1] shows thatxt(x) has derivative Vt(x) = dxxt∈Rm×msolving the first-variation equation dVt= [du(xt)]Vtdt,V0=I. Noting that vt i=∇xt i(x) is (the transpose of) the ithrow ofVt, this gives the first equation of ( 183) with initial condition v0 i=ei. Next, consider the joint diffusion d(xt,Vt) =P(xt,Vt)dt+√ 2(Mdbt,0), P(x,V) =/parenleftBig u(x),[du(x)]V/parenrightBig . The condition u∈ Bimplies also that P(x,V) has bounded and H¨ older-continuous first derivative d P(x,V), which we identify as a square matrix of dimension ( m+m2)×(m+m2) under the vectorization of V. Then [63, Theorem II.3.1] applied again shows that ( xt(x),Vt(x)) has derivative Ut= d(x,V)(xt,Vt)∈ R(m+m2)×(m+m2)solving the second-variation equation dUt= [dP(xt,Vt)]Utdt,U0=I. (186) Notingthat Ht i=∇2xt i(x)istheblockof Utcorrespondingtod xvt i, andthattheblockcorrespondingtod xxt isVt, one may check that the restriction of ( 186) to the d xvt iblock gives exactly the second equation of ( 183) with initialization H0 i= 0. IfC >0 is an upper bound for supx∈Rm/⌊a∇⌈⌊ldu(x)/⌊a∇⌈⌊lopand supx∈Rm/⌊a∇⌈⌊ldP(x,V)/⌊a∇⌈⌊lop, then integrating these equations gives /⌊a∇⌈⌊lVt/⌊a∇⌈⌊lop≤eCt/⌊a∇⌈⌊lV0/⌊a∇⌈⌊lop=eCtand/⌊a∇⌈⌊lUt/⌊a∇⌈⌊lop≤eCt/⌊a∇⌈⌊lU0/⌊a∇⌈⌊lop=eCt, which implies the bounds for vt iandHt i. For part (c), consider any f∈ A. 
Applying (b) and the chain rule,
\[
\nabla_x f(x^t(x)) = \sum_{j=1}^m \partial_j f(x^t)\, v^t_j, \qquad
\nabla^2_x f(x^t(x)) = \sum_{j,k=1}^m \partial_j \partial_k f(x^t)\, v^t_j v^{t\top}_k + \sum_{j=1}^m \partial_j f(x^t)\, H^t_j. \tag{187}
\]
By parts (a–b) and the condition $f \in \mathcal{A}$, for any $T > 0$, the right sides of (187) are uniformly bounded and continuous in $(t,x)$ over $t \in [0,T]$. Then dominated convergence implies that $P_t f(x)$ is twice continuously-differentiable
https://arxiv.org/abs/2504.15556v1
in $x$, that $\nabla P_t f(x) = \nabla_x \langle f(x^t)\rangle_{x^0=x} = \langle \nabla_x f(x^t(x))\rangle$ and $\nabla^2 P_t f(x) = \nabla^2_x \langle f(x^t)\rangle_{x^0=x} = \langle \nabla^2_x f(x^t(x))\rangle$, and that these are also uniformly bounded and continuous over $t \in [0,T]$ and $x \in \mathbb{R}^m$. For the derivative in $t$, by Itô's formula
\[
f(x^t) = f(x) + \int_0^t \mathrm{L}f(x^s)\,\mathrm{d}s + \int_0^t \nabla f(x^s)^\top \sqrt{2}\,M\,\mathrm{d}b^s
\]
where $\mathrm{L}$ is the generator defined in (182). Since $\nabla f(x^s)$ is bounded over $s \in [0,t]$ and $x^s$ is adapted to the filtration of $\{b^s\}$, the last term is a martingale, so taking expectations gives
\[
P_t f(x) = \langle f(x^t)\rangle = f(x) + \int_0^t \langle \mathrm{L}f(x^s)\rangle\,\mathrm{d}s.
\]
Hence, differentiating in $t$, for any $t > 0$ we have
\[
\partial_t P_t f(x) = \langle \mathrm{L}f(x^t)\rangle = P_t \mathrm{L}f(x). \tag{188}
\]
By Jensen's inequality, for any $s, t \ge 0$, we have
\[
\langle (P_s \mathrm{L}f(x^t))^2 \rangle \le \langle \mathrm{L}f(x^{t+s})^2 \rangle = \big\langle \big( u(x^{t+s})^\top \nabla f(x^{t+s}) + \operatorname{Tr} MM^\top \nabla^2 f(x^{t+s}) \big)^2 \big\rangle \le C\big(1 + \langle \|x^{t+s}\|_2^2 \rangle\big),
\]
the last inequality holding for some $C > 0$ by boundedness of $\nabla f, \nabla^2 f$ and the Lipschitz continuity of $u$. Then [63, Theorem II.2.1] implies that $P_s \mathrm{L}f(x^t(x))$ is uniformly bounded in $L^2$ over compact domains of $s, t \ge 0$ and of the initial condition $x \in \mathbb{R}^m$, and hence is also uniformly integrable over these domains. This uniform integrability for $s = 0$ and dominated convergence shows that $\langle \mathrm{L}f(x^t)\rangle$ in (188) is continuous in $(t,x)$, and hence $P_t f$ is continuously-differentiable in $t$. Taking the limit $t \to 0$ in (188), also $\mathrm{L}f(x) = \lim_{t \to 0} \partial_t P_t f(x)$. Then applying this with $P_t f \in \mathcal{A}$ in place of $f$,
\[
\mathrm{L} P_t f(x) = \lim_{s \to 0} \partial_s P_{t+s} f(x) = \lim_{s \to 0} \partial_s \langle P_s f(x^t)\rangle \overset{(*)}{=} \langle \mathrm{L}f(x^t)\rangle = P_t \mathrm{L}f(x).
\]
Here, to justify $(*)$, we note that $\partial_s P_s f(x^t) = P_s \mathrm{L} f(x^t)$ by (188), so $(*)$ follows from uniform integrability of this quantity and dominated convergence to take the limit $\lim_{s \to 0} \partial_s \langle P_s f(x^t)\rangle = \lim_{s \to 0} \langle P_s \mathrm{L}f(x^t)\rangle = \langle \mathrm{L}f(x^t)\rangle$. Combining with (188), this shows all claims about $\partial_t P_t f$ in part (c). $\square$

Now consider the perturbed dynamics (180) for any $\varepsilon > 0$. Let us denote the perturbed drift as $u^\varepsilon(t,x) = u(x) + \varepsilon h(t) b(x)$. For any $t \ge s \ge 0$, we define its (time-inhomogeneous) Markov semigroup and infinitesimal generator
\[
P^\varepsilon_{s,t} f(x) = \langle f(x^t)\rangle_{x^s = x} = \mathbb{E}[f(x^t) \mid x^s = x], \qquad
\mathrm{L}^\varepsilon_t f(x) = u^\varepsilon(t,x)^\top \nabla f(x) + \operatorname{Tr} MM^\top \nabla^2 f(x).
\]
The following extends the semigroup regularity estimates of Proposition A.2 to this perturbed process.

Proposition A.3. Suppose the assumptions of Proposition A.1 hold. Then for any $f \in \mathcal{A}$, the map $(s,t,x) \mapsto P^\varepsilon_{s,t} f(x)$ is continuously-differentiable in $(s,t)$ and twice continuously-differentiable in $x$, and furthermore $\nabla P^\varepsilon_{s,t} f(x), \nabla^2 P^\varepsilon_{s,t} f(x)$ are uniformly bounded over $s, t \in [0,T]$ and $x \in \mathbb{R}^m$ for any fixed $T > 0$. We have
\[
\partial_t P^\varepsilon_{s,t} f(x) = P^\varepsilon_{s,t} \mathrm{L}^\varepsilon_t f(x), \qquad \partial_s P^\varepsilon_{s,t} f(x) = -\mathrm{L}^\varepsilon_s P^\varepsilon_{s,t} f(x). \tag{189}
\]

Proof. We omit the superscript $\varepsilon$ and write $x^t \equiv x^{t,\varepsilon}$. The same arguments as in Proposition A.2 using [63, Theorems II.1.2, II.3.1, II.3.3] show, for each $s \ge 0$ and $x \in \mathbb{R}^m$, that there exists a modification $\{x^t(s,x)\}_{t \ge s}$ of the solution to (180) with initial condition $x^s = x$, such that $x^t(s,x)$ is jointly continuous in $(s,t,x)$ and twice continuously-differentiable in $x$. Each component $x^t_i(s,x)$ of this solution has gradient $v^t_i = \nabla_x x^t_i(s,x)$ and Hessian $H^t_i = \nabla^2_x x^t_i(s,x)$ solving
\[
\begin{cases}
\mathrm{d}v^t_i = \sum_{j=1}^m \partial_j u^\varepsilon_i(t, x^t(s,x)) \cdot v^t_j\,\mathrm{d}t \\
\mathrm{d}H^t_i = \Big( \sum_{j,k=1}^m \partial_j \partial_k u^\varepsilon_i(t, x^t(s,x)) \cdot v^t_j v^{t\top}_k + \sum_{j=1}^m \partial_j u^\varepsilon_i(t, x^t(s,x)) \cdot H^t_j \Big)\,\mathrm{d}t
\end{cases}
\]
with initial conditions $v^s_i(s,x) = e_i$ and $H^s_i(s,x) = 0$. Furthermore, $\|v^t_i(s,x)\|_2, \|H^t_i(s,x)\|_{\mathrm{op}} \le e^{C(t-s)}$ for some $C > 0$ and all $x \in \mathbb{R}^m$ and $t \ge s \ge 0$.
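As an aside, these variation equations are straightforward to simulate alongside the diffusion itself. The sketch below is our own illustration and not part of the proofs: it runs an Euler–Maruyama discretization of the unperturbed diffusion $\mathrm{d}x^t = u(x^t)\,\mathrm{d}t + \sqrt{2}\,M\,\mathrm{d}b^t$ jointly with the Jacobian equation $\mathrm{d}V^t = [\mathrm{d}u(x^t)]V^t\,\mathrm{d}t$ of Proposition A.2(b), and uses it to form the Monte Carlo gradient estimate of (184). The drift, diffusion matrix, and step counts are hypothetical choices.

```python
import numpy as np

def simulate_with_variation(u, du, M, x0, T, n_steps, rng):
    # Euler-Maruyama for dx^t = u(x^t) dt + sqrt(2) M db^t, simulated jointly
    # with the first-variation equation dV^t = [du(x^t)] V^t dt, V^0 = I,
    # whose i-th row approximates v_i^t = grad_x x_i^t(x) (cf. (183)).
    m = len(x0)
    dt = T / n_steps
    x = np.asarray(x0, dtype=float).copy()
    V = np.eye(m)
    for _ in range(n_steps):
        db = rng.normal(scale=np.sqrt(dt), size=m)
        V = V + (du(x) @ V) * dt            # evaluate du at the pre-update point
        x = x + u(x) * dt + np.sqrt(2.0) * (M @ db)
    return x, V

def grad_Pt_f_estimate(u, du, grad_f, M, x0, T, n_steps, n_chains, seed=0):
    # Monte Carlo estimate of grad P_t f(x) = < sum_j d_j f(x^t) v_j^t >,
    # the first identity of (184), written as the average of V^{t T} grad f(x^t).
    rng = np.random.default_rng(seed)
    acc = np.zeros(len(x0))
    for _ in range(n_chains):
        x, V = simulate_with_variation(u, du, M, x0, T, n_steps, rng)
        acc += V.T @ grad_f(x)
    return acc / n_chains
```

Coupling two runs by the same noise realization (same seed), a finite difference of $x^t$ in the initial condition recovers the columns of $V^t$; for a linear drift the Euler map is affine, so this identity holds to floating-point precision and gives a simple correctness check.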
Then for any $f \in \mathcal{A}$, the same dominated convergence argument as in Proposition A.2 shows that $P^\varepsilon_{s,t} f(x)$ is twice continuously-differentiable in $x$, where its first and second derivatives are uniformly bounded and continuous in $(s,t,x)$ over $s, t \in [0,T]$ and may be computed by differentiating in $x$ under the integral. The same argument as in Proposition A.2 using Itô's formula shows also that $P^\varepsilon_{s,t} f(x)$ is continuously-differentiable in $t$, with
\[
\partial_t P^\varepsilon_{s,t} f(x) = P^\varepsilon_{s,t} \mathrm{L}^\varepsilon_t f(x) = \langle \mathrm{L}^\varepsilon_t f(x^t)\rangle_{x^s = x}.
\]
For the derivative in $s$, we have by Itô's
formula for any $h > 0$,
\[
P^\varepsilon_{s-h,s} f(x) = \langle f(x^s)\rangle_{x^{s-h} = x} = f(x) + \int_{s-h}^s \langle \mathrm{L}^\varepsilon_r f(x^r)\rangle_{x^{s-h} = x}\,\mathrm{d}r.
\]
The same argument as in Proposition A.2 shows that $\mathrm{L}^\varepsilon_t f(x^t(s,x))$ is uniformly integrable over compact domains of $t \ge s \ge 0$ and of $x \in \mathbb{R}^m$, so by dominated convergence we have $\lim_{h \downarrow 0,\, r \uparrow s} \langle \mathrm{L}^\varepsilon_r f(x^r)\rangle_{x^{s-h} = x} = \mathrm{L}^\varepsilon_s f(x)$. So taking the limit $h \to 0$ above and rearranging shows
\[
\mathrm{L}^\varepsilon_s f(x) = \lim_{h \downarrow 0} \frac{P^\varepsilon_{s-h,s} f(x) - f(x)}{h}. \tag{190}
\]
Then for any $s \le t$, applying this to $P^\varepsilon_{s,t} f \in \mathcal{A}$ in place of $f$ gives
\[
\lim_{h \downarrow 0} \frac{P^\varepsilon_{s,t} f(x) - P^\varepsilon_{s-h,t} f(x)}{h} = \lim_{h \downarrow 0} \frac{P^\varepsilon_{s,t} f(x) - P^\varepsilon_{s-h,s}(P^\varepsilon_{s,t} f)(x)}{h} = -\mathrm{L}^\varepsilon_s P^\varepsilon_{s,t} f(x),
\]
i.e. $P^\varepsilon_{s,t} f(x)$ is left-differentiable in $s$. Here $-\mathrm{L}^\varepsilon_s P^\varepsilon_{s,t} f(x) = -u^\varepsilon(s,x)^\top \nabla P^\varepsilon_{s,t} f(x) - \operatorname{Tr} MM^\top \nabla^2 P^\varepsilon_{s,t} f(x)$ is continuous in $(s,t,x)$ by the continuity of $\nabla P^\varepsilon_{s,t} f$ and $\nabla^2 P^\varepsilon_{s,t} f$ argued above. Then $P^\varepsilon_{s,t} f(x)$ is also continuously-differentiable in $s$ with $\partial_s P^\varepsilon_{s,t} f(x) = -\mathrm{L}^\varepsilon_s P^\varepsilon_{s,t} f(x)$. $\square$

Proof of Proposition A.1. Let $\{x^t\}_{t \ge 0}$ and $\{x^{t,\varepsilon}\}_{t \ge 0}$ be the solutions to the unperturbed and perturbed diffusions. Let $\{P_t\}$ and $\mathrm{L}$ be the semigroup and infinitesimal generator for $\{x^t\}_{t \ge 0}$, and let $\{P^\varepsilon_{s,t}\}$ and $\mathrm{L}^\varepsilon_t$ be those for $\{x^{t,\varepsilon}\}_{t \ge 0}$. We write $\partial_s, \partial_t$ for the derivatives in $s, t$ and reserve $\nabla f(t,x)$ for the gradient of $f$ in its second argument $x$. For any $t > s$ and $r \in [s,t]$, define $f^\varepsilon(r,x) = P^\varepsilon_{r,t} a(x)$. Then by Itô's formula applied to the unperturbed process $\{x^t\}_{t \ge 0}$,
\[
f^\varepsilon(t, x^t) = f^\varepsilon(s, x^s) + \int_s^t (\partial_r + \mathrm{L}) f^\varepsilon(r, x^r)\,\mathrm{d}r + \int_s^t \nabla f^\varepsilon(r, x^r)^\top \sqrt{2}\,M\,\mathrm{d}b^r.
\]
Proposition A.3 shows $P^\varepsilon_{r,t} a \in \mathcal{A}$, so $\nabla f^\varepsilon(r, x^r)$ is uniformly bounded and the last term is a martingale.
Then, taking expectations under the initial condition $x^s = x$ and applying (189),
\[
\begin{aligned}
\langle a(x^t)\rangle_{x^s = x} = \langle f^\varepsilon(t, x^t)\rangle_{x^s = x}
&= f^\varepsilon(s,x) + \int_s^t \langle (\partial_r + \mathrm{L}) f^\varepsilon(r, x^r)\rangle_{x^s = x}\,\mathrm{d}r \\
&= P^\varepsilon_{s,t} a(x) + \int_s^t \langle (-\mathrm{L}^\varepsilon_r + \mathrm{L}) P^\varepsilon_{r,t} a(x^r)\rangle_{x^s = x}\,\mathrm{d}r \\
&= P^\varepsilon_{s,t} a(x) - \int_s^t \varepsilon h(r) \langle (b^\top \nabla P^\varepsilon_{r,t} a)(x^r)\rangle_{x^s = x}\,\mathrm{d}r \\
&= P^\varepsilon_{s,t} a(x) - \varepsilon \int_s^t h(r) P_{r-s}(b^\top \nabla P^\varepsilon_{r,t} a)(x)\,\mathrm{d}r.
\end{aligned}
\]
Applying this also with $\varepsilon = 0$ and $P^0_{s,t} = P_{t-s}$ and taking the difference, we obtain the identity
\[
P^\varepsilon_{s,t} a(x) - P_{t-s} a(x) = \varepsilon \int_s^t h(r) P_{r-s}(b^\top \nabla P^\varepsilon_{r,t} a)(x)\,\mathrm{d}r. \tag{191}
\]
From the definition of $P_t f(x)$ and the form of $\nabla P_t f(x)$ in (184), we have
\[
P_{r-s}(b^\top \nabla P^\varepsilon_{r,t} a)(x) = \big\langle (b^\top \nabla P^\varepsilon_{r,t} a)(x^{r-s}) \big\rangle_{x^0 = x}, \tag{192}
\]
\[
\nabla P_{r-s}(b^\top \nabla P^\varepsilon_{r,t} a)(x) = \Big\langle \sum_{i=1}^m \partial_{x_i}\big[b^\top \nabla P^\varepsilon_{r,t} a\big](x^{r-s})\, v^{r-s}_i \Big\rangle_{x^0 = x}. \tag{193}
\]
Since $b \in \mathcal{B}$ is Lipschitz by assumption, and $P^\varepsilon_{r,t} a \in \mathcal{A}$ by Proposition A.3, we have
\[
\big|(b^\top \nabla P^\varepsilon_{r,t} a)(x^{r-s})\big|,\ \big|\partial_{x_i}\big[b^\top \nabla P^\varepsilon_{r,t} a\big](x^{r-s})\big| \le C\big(1 + \|x^{r-s}\|_2\big)
\]
for some $C > 0$. Then these quantities are uniformly integrable over bounded domains of $s \le r \le t$ and $x$, by [63, Theorem II.2.1]. Furthermore $\|v^{r-s}_i\|_2$ is bounded by Proposition A.2(b), so the integrands on the right sides of both (192–193) are also uniformly integrable over these domains. Then applying dominated convergence, we may differentiate (191) in $x$ under the integral to obtain
\[
\nabla P^\varepsilon_{s,t} a(x) - \nabla P_{t-s} a(x) = \varepsilon \int_s^t h(r) \nabla P_{r-s}(b^\top \nabla P^\varepsilon_{r,t} a)(x)\,\mathrm{d}r,
\]
and take the limit $\varepsilon \to 0$ to get $\nabla P^\varepsilon_{s,t} a(x) \to \nabla P_{t-s} a(x)$. Applying this with $s = r$ to the right side of (191), and taking the limit $\varepsilon \to 0$ in (191) using uniform integrability of (192), we arrive at
\[
\lim_{\varepsilon \to 0} \frac{P^\varepsilon_{s,t} a(x) - P_{t-s} a(x)}{\varepsilon} = \int_s^t h(r) P_{r-s}(b^\top \nabla P_{t-r} a)(x)\,\mathrm{d}r.
\]
For $s = 0$, this means
\[
\lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \Big( \langle a(x^{t,\varepsilon})\rangle - \langle a(x^t)\rangle \Big) = \int_0^t h(r) P_r(b^\top \nabla P_{t-r} a)(x)\,\mathrm{d}r,
\]
verifying that (21) holds with response function $R(t,s)$ given by (179). Continuity of this function $R(t,s)$ in $(s,t)$ follows from the above uniform integrability statements, together with the continuity of $t \mapsto \nabla P_t a(x)$ in $t$ as shown in Proposition A.2. For uniqueness, observe that if $\tilde{R}(t,s)$ is any continuous function different from $R(t,s)$, then they must differ on a subset of $(s,t)$ of positive Lebesgue measure. Then there exists a continuous bounded function $h : [0,\infty) \to \mathbb{R}$ such that $\int_0^t R(t,s) h(s)\,\mathrm{d}s \ne \int_0^t \tilde{R}(t,s) h(s)\,\mathrm{d}s$, implying that $\tilde{R}$ cannot satisfy (21). Thus this response function $R(t,s)$ is
unique. $\square$

A.2 Discrete dynamics

We record (elementary) analogues of the preceding results for discrete dynamics
\[
x^{t+1} = x^t + u(x^t) + \sqrt{2}\,M(b^{t+1} - b^t) \tag{194}
\]
where $\{b^t\}_{t \in \mathbb{Z}_+}$ is a Gaussian process with $b^0 = 0$ and independent increments $b^{t+1} - b^t \sim \mathcal{N}(0, \gamma I)$, for some $\gamma > 0$. The following is an analogue of Proposition A.1.

Proposition A.4. Suppose $u : \mathbb{R}^m \to \mathbb{R}^m$ is Lipschitz, and let $\{x^t\}_{t \in \mathbb{Z}_+}$ be the solution of (194) with initial condition $x^0 = x$. For any Lipschitz functions $a : \mathbb{R}^m \to \mathbb{R}$ and $b : \mathbb{R}^m \to \mathbb{R}^m$, define
\[
R(t,s) = P_s\big( b^\top P(\nabla P_{t-s-1} a) \big)(x)
\]
where $P_t f(x) = \mathbb{E}[f(x^t) \mid x^0 = x]$. Then for any $s, t \in \mathbb{Z}_+$ with $s < t$,
\[
R(t,s) = \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \Big( \mathbb{E}[a(x^{t,\varepsilon}) \mid x^{0,\varepsilon} = x] - \mathbb{E}[a(x^t) \mid x^0 = x] \Big)
\]
where $\{x^{t,\varepsilon}\}_{t \in \mathbb{Z}_+}$ is the solution of the perturbed dynamics
\[
x^{t+1,\varepsilon} = x^{t,\varepsilon} + u(x^{t,\varepsilon}) + \varepsilon b(x^{t,\varepsilon}) \mathbf{1}_{s=t} + \sqrt{2}\,M(b^{t+1} - b^t)
\]
with the same initial condition $x^{0,\varepsilon} = x$.

Proof. Write as shorthand $P = P_1$. If $f$ is $L$-Lipschitz, then (coupling the processes with initializations $x, y$ by the same $\{b^t\}$)
\[
|Pf(x) - Pf(y)| = \Big| \mathbb{E}\big[f(x + u(x) + \sqrt{2}Mb^1)\big] - \mathbb{E}\big[f(y + u(y) + \sqrt{2}Mb^1)\big] \Big| \le L(1 + L_u)\|x - y\|
\]
where $L_u$ is the Lipschitz constant of $u$. Hence $Pf$ is Lipschitz, so $P_t f$ is Lipschitz for all $t \ge 0$. Let $P^\varepsilon_t$ be the Markov semigroup for the dynamics $x^{t+1} = x^t + u(x^t) + \varepsilon b(x^t) + \sqrt{2}\,M(b^{t+1} - b^t)$, and write as shorthand $P^\varepsilon = P^\varepsilon_1$. Then by definition,
\[
\mathbb{E}[a(x^{t,\varepsilon}) \mid x^{0,\varepsilon} = x] = P_s P^\varepsilon P_{t-s-1} a(x),
\]
so
\[
\lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \Big( \mathbb{E}[a(x^{t,\varepsilon}) \mid x^{0,\varepsilon} = x] - \mathbb{E}[a(x^t) \mid x^0 = x] \Big) = \partial_\varepsilon \big|_{\varepsilon = 0} P_s P^\varepsilon P_{t-s-1} a(x). \tag{195}
\]
Note that for any $L$-Lipschitz function $f$, we have
\[
\partial_\varepsilon P^\varepsilon f(x) = \partial_\varepsilon \mathbb{E}\big[f(x + u(x) + \varepsilon b(x) + \sqrt{2}Mb^1)\big] = b(x)^\top \mathbb{E}\big[\nabla f(x + u(x) + \varepsilon b(x) + \sqrt{2}Mb^1)\big] \tag{196}
\]
where the derivative may be taken under the expectation by dominated convergence. In particular,
\[
\partial_\varepsilon \big|_{\varepsilon = 0} P^\varepsilon f(x) = b(x)^\top \mathbb{E}\big[\nabla f(x + u(x) + \sqrt{2}Mb^1)\big] = b(x)^\top P(\nabla f)(x).
\]
The derivative (196) is also bounded for all $\varepsilon \ge 0$ by $L\|b(x)\|$, which is integrable under $P_s$ since $b$ is Lipschitz. Then again by dominated convergence,
\[
\partial_\varepsilon \big|_{\varepsilon = 0} P_s P^\varepsilon P_{t-s-1} a(x) = P_s\, \partial_\varepsilon \big|_{\varepsilon = 0} P^\varepsilon P_{t-s-1} a(x) = P_s\big( b^\top P(\nabla P_{t-s-1} a) \big)(x),
\]
and the result follows from applying this to (195). $\square$
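Proposition A.4 also suggests a direct numerical check: couple the perturbed and unperturbed chains by the same Gaussian increments and form the finite difference $(a(x^{t,\varepsilon}) - a(x^t))/\varepsilon$. The sketch below is our own illustration with hypothetical choices of $u$, $b$, $a$ (not from the paper); for a linear drift $u(x) = Ax$, a constant $b$, and a linear $a(x) = c^\top x$, the coupled difference is deterministic and equals $c^\top (I + A)^{t-s-1} b$ for any $\varepsilon$.

```python
import numpy as np

def response_fd(u, b_fn, a, s, t, x0, M, gamma, eps, n_chains, seed=0):
    # Finite-difference Monte Carlo estimate of the response function of
    # Proposition A.4: the perturbed chain receives the kick eps*b(x^s) at
    # step s, and both chains share the increments sqrt(2) M (b^{k+1}-b^k).
    rng = np.random.default_rng(seed)
    m = len(x0)
    acc = 0.0
    for _ in range(n_chains):
        x = np.asarray(x0, dtype=float).copy()
        xe = x.copy()                        # perturbed copy, coupled noise
        for k in range(t):
            db = rng.normal(scale=np.sqrt(gamma), size=m)
            noise = np.sqrt(2.0) * (M @ db)
            kick = eps * b_fn(xe) if k == s else 0.0
            x, xe = x + u(x) + noise, xe + u(xe) + kick + noise
        acc += (a(xe) - a(x)) / eps
    return acc / n_chains
```

In the linear test case the average over chains is superfluous (every chain yields the same value), but for nonlinear $u$, $b$, $a$ it is the Monte Carlo estimate of the limit in Proposition A.4.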
The following is an analogue of the first statement of (184).

Lemma A.5. Let $\{x^t\}_{t \in \mathbb{Z}_+}$ be the solution to (194) where $u(\cdot)$ is Lipschitz, and consider the first variation processes
\[
v^{t+1}_i = v^t_i + \sum_{j=1}^m \partial_j u_i(x^t) \cdot v^t_j
\]
with initializations $v^0_i = e_i$. Denote $P_t f(x) = \mathbb{E}[f(x^t) \mid x^0 = x] = \langle f(x^t)\rangle$. Then for any Lipschitz function $f : \mathbb{R}^m \to \mathbb{R}$,
\[
\nabla P_t f(x) = \Big\langle \sum_{j=1}^m \partial_j f(x^t)\, v^t_j \Big\rangle.
\]

Proof. Stacking $V^t = [v^t_1, \ldots, v^t_m]^\top \in \mathbb{R}^{m \times m}$ with initial condition $V^0 = I_m$, the evolution of $V^t$ is $V^{t+1} = [I + \mathrm{d}u(x^t)]V^t$ where $\mathrm{d}u$ is the derivative of $u(\cdot)$. Writing $x^t(x)$ for the dependence of $x^t$ on the initial condition $x^0 = x$, and writing $\mathrm{d}x^t(x)$ for its derivative in $x$, by the chain rule we have $\mathrm{d}x^{t+1}(x) = [I + \mathrm{d}u(x^t)]\,\mathrm{d}x^t(x)$, with initial condition $\mathrm{d}x^0(x) = I$. Thus $V^t = \mathrm{d}x^t(x)$ for all $t \ge 0$, so
\[
\nabla_x f(x^t(x)) = [\mathrm{d}x^t(x)]^\top \nabla f(x^t) = \sum_{j=1}^m \partial_j f(x^t)\, v^t_j.
\]
By dominated convergence we have $\nabla P_t f(x) = \nabla_x \langle f(x^t(x))\rangle = \langle \nabla_x f(x^t(x))\rangle$, and the result follows. $\square$

References

[1] Mark Kac. Foundations of kinetic theory. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, volume 3, pages 171–197, 1956.
[2] Henry P. McKean Jr. A class of Markov processes associated with nonlinear parabolic equations. Proceedings of the National Academy of Sciences, 56(6):1907–1911, 1966.
[3] Jürgen Gärtner. On the McKean-Vlasov limit for interacting diffusions. Mathematische Nachrichten, 137(1):197–248, 1988.
[4] Alain-Sol Sznitman. Topics in propagation of chaos. École d'été de probabilités de Saint-Flour XIX—1989, 1464:165–251, 1991.
[5] Louis-Pierre Chaintron and Antoine Diez. Propagation of chaos: a review of models, methods and applications. I. Models and methods. arXiv preprint arXiv:2203.00446, 2022.
[6] Louis-Pierre Chaintron and Antoine Diez. Propagation of chaos: a review of models, methods and applications. II. Applications. arXiv preprint arXiv:2106.14812, 2021.
[7] Daniel Lacker. Hierarchies, entropy, and quantitative propagation of chaos for mean field diffusions. Probability and Mathematical Physics, 4(2):377–432, 2023.
[8] Daniel Lacker and Luc Le Flem. Sharp uniform-in-time propagation of chaos. Probability Theory and Related Fields, 187(1-2):443–480, 2023.
[9] Haim Sompolinsky and Annette Zippelius. Dynamic theory of the spin-glass phase. Physical Review Letters, 47(5):359, 1981.
[10] Haim Sompolinsky and Annette Zippelius. Relaxational dynamics of the Edwards-Anderson model and the mean-field theory of spin-glasses. Physical Review B, 25(11):6860, 1982.
[11] Andrea Crisanti, Heinz Horner, and H. J. Sommers. The spherical p-spin interaction spin-glass model: the dynamics. Zeitschrift für Physik B Condensed Matter, 92:257–271, 1993.
[12] Leticia F. Cugliandolo and Jorge Kurchan. Analytical solution of the off-equilibrium dynamics of a long-range spin-glass model. Physical Review Letters, 71(1):173, 1993.
[13] Leticia F. Cugliandolo and Jorge Kurchan. On the out-of-equilibrium relaxation of the Sherrington-Kirkpatrick model. Journal of Physics A: Mathematical and General, 27(17):5749, 1994.
[14] Marc Mézard, Giorgio Parisi, and Miguel Angel Virasoro. Spin Glass Theory and Beyond, volume 9 of World Scientific Lecture Notes in Physics. World Scientific Publishing Co., Inc., Teaneck, NJ, 1987.
[15] Elisabeth Agoritsas, Giulio Biroli, Pierfrancesco Urbani, and Francesco Zamponi. Out-of-equilibrium dynamical mean-field equations for the perceptron model. J. Phys. A, 51(8):085002, 36, 2018.
[16] C. De Dominicis. Dynamics as a substitute for replicas in systems with quenched random impurities. Physical Review B, 18(9):4913, 1978.
[17] Jorge Kurchan. Supersymmetry in spin glass dynamics. Journal de Physique I, 2(7):1333–1352, 1992.
[18] Leticia F. Cugliandolo. Dynamics of glassy systems. arXiv preprint cond-mat/0210312, 2002.
[19] Stefano Sarao Mannelli, Florent Krzakala, Pierfrancesco Urbani, and Lenka Zdeborová. Passed & spurious: Descent algorithms and local minima in spiked matrix-tensor models. In International Conference on Machine Learning, pages 4333–4342, 2019.
[20] Stefano Sarao Mannelli, Giulio Biroli, Chiara Cammarota, Florent Krzakala, and Lenka Zdeborová. Who is afraid of big bad minima? Analysis of gradient-flow in spiked matrix-tensor models. Advances in Neural Information Processing Systems, 32, 2019.
[21] Stefano Sarao Mannelli, Giulio Biroli, Chiara Cammarota, Florent Krzakala, Pierfrancesco Urbani, and Lenka Zdeborová. Marvels and pitfalls of the Langevin algorithm in noisy high-dimensional inference. Physical Review X, 10(1):011057, 2020.
[22] Stefano Sarao Mannelli and Pierfrancesco Urbani. Analytical study of momentum-based acceleration methods in paradigmatic high-dimensional non-convex problems. Advances in Neural Information Processing Systems, 34:187–199, 2021.
[23] Tengyuan Liang, Subhabrata Sen, and Pragya Sur. High-dimensional asymptotics of Langevin dynamics in spiked matrix models. Inf. Inference, 12(4):iaad042, 33, 2023.
[24] Francesca Mignacco, Pierfrancesco Urbani, and Lenka Zdeborová. Stochasticity helps to navigate rough landscapes: comparing gradient-descent-based algorithms in the phase retrieval problem. Machine Learning: Science and Technology, 2(3):035029, 2021.
[25] Stefano Sarao Mannelli, Giulio Biroli, Chiara Cammarota, Florent Krzakala, Pierfrancesco Urbani, and Lenka Zdeborová. Complex dynamics in simple neural networks: Understanding gradient flow in phase retrieval. Advances in Neural Information Processing Systems, 33:3265–3274, 2020.
[26] Qiyang Han and Xiaocong Xu. Gradient descent inference in empirical risk minimization. arXiv preprint arXiv:2412.09498, 2024.
[27] Francesca Mignacco, Florent Krzakala, Pierfrancesco Urbani, and Lenka Zdeborová. Dynamical mean-field theory for
stochastic gradient descent in Gaussian mixture classification. J. Stat. Mech. Theory Exp., (12):Paper No. 124008, 23, 2021.
[28] Francesca Mignacco and Pierfrancesco Urbani. The effective noise of stochastic gradient descent. Journal of Statistical Mechanics: Theory and Experiment, 2022(8):083405, 2022.
[29] Blake Bordelon and Cengiz Pehlevan. Self-consistent dynamical field theory of kernel evolution in wide neural networks. Advances in Neural Information Processing Systems, 35:32240–32256, 2022.
[30] Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborová, and Florent Krzakala. The benefits of reusing batches for gradient descent in two-layer networks: Breaking the curse of information and leap exponents. arXiv preprint arXiv:2402.03220, 2024.
[31] Blake Bordelon, Hamza Chaudhry, and Cengiz Pehlevan. Infinite limits of multi-head transformer dynamics. Advances in Neural Information Processing Systems, 37:35824–35878, 2024.
[32] Blake Bordelon, Alexander Atanasov, and Cengiz Pehlevan. A dynamical model of neural scaling laws. arXiv preprint arXiv:2402.01092, 2024.
[33] Andrea Montanari and Pierfrancesco Urbani. Dynamical decoupling of generalization and overfitting in large two-layer networks. arXiv preprint arXiv:2502.21269, 2025.
[34] G. Ben Arous and A. Guionnet. Large deviations for Langevin spin glass dynamics. Probab. Theory Related Fields, 102(4):455–509, 1995.
[35] M. Grunwald. Sanov results for Glauber spin-glass dynamics. Probab. Theory Related Fields, 106(2):187–232, 1996.
[36] G. Ben Arous and A. Guionnet. Symmetric Langevin spin glass dynamics. Ann. Probab., 25(3):1367–1422, 1997.
[37] A. Guionnet. Averaged and quenched propagation of chaos for spin glass dynamics. Probab. Theory Related Fields, 109(2):183–215, 1997.
[38] G. Ben Arous, A. Dembo, and A. Guionnet. Aging of spherical spin glasses. Probab. Theory Related Fields, 120(1):1–67, 2001.
[39] Gérard Ben Arous, Amir Dembo, and Alice Guionnet. Cugliandolo-Kurchan equations for dynamics of spin-glasses. Probab. Theory Related Fields, 136(4):619–660, 2006.
[40] Amir Dembo and Reza Gheissari. Diffusions interacting through a random matrix: universality via stochastic Taylor expansion. Probab. Theory Related Fields, 180(3-4):1057–1097, 2021.
[41] Amir Dembo, Eyal Lubetzky, and Ofer Zeitouni. Universality for Langevin-like spin glass dynamics. Ann. Appl. Probab., 31(6):2864–2880, 2021.
[42] Michael Celentano, Chen Cheng, and Andrea Montanari. The high-dimensional asymptotics of first order methods with random data. arXiv preprint arXiv:2112.07572, 2021.
[43] Erwin Bolthausen. An iterative construction of solutions of the TAP equations for the Sherrington-Kirkpatrick model. Comm. Math. Phys., 325(1):333–366, 2014.
[44] Mohsen Bayati and Andrea Montanari. The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Trans. Inform. Theory, 57(2):764–785, 2011.
[45] Adel Javanmard and Andrea Montanari. State evolution for general approximate message passing algorithms, with applications to spatial coupling. Inf. Inference, 2(2):115–144, 2013.
[46] Cédric Gerbelot, Emanuele Troiani, Francesca Mignacco, Florent Krzakala, and Lenka Zdeborová. Rigorous dynamical mean-field theory for stochastic gradient descent methods. SIAM J. Math. Data Sci., 6(2):400–427, 2024.
[47] Qiyang Han. Entrywise dynamics and universality of general first order methods. arXiv preprint arXiv:2406.19061, 2024.
[48] Jeffrey P. Spence, Nasa Sinnott-Armstrong, Themistocles L. Assimes, and Jonathan K. Pritchard. A flexible modeling and inference framework for estimating variant effect sizes from GWAS summary statistics. BioRxiv,
pages 2022–04, 2022.
[49] Sumit Mukherjee, Bodhisattva Sen, and Subhabrata Sen. A mean field approach to empirical Bayes estimation in high-dimensional linear regression. arXiv preprint arXiv:2309.16843, 2023.
[50] Youngseok Kim, Wei Wang, Peter Carbonetto, and Matthew Stephens. A flexible empirical Bayes approach to multiple linear regression and connections with penalized regression. Journal of Machine Learning Research, 25(185):1–59, 2024.
[51] Zhou Fan, Leying Guan, Yandi Shen, and Yihong Wu. Gradient flows for empirical Bayes in high-dimensional linear models. arXiv preprint arXiv:2312.12708, 2023.
[52] Zhou Fan, Justin Ko, Bruno Loureiro, Yue M. Lu, and Yandi Shen. Dynamical mean-field analysis of adaptive Langevin diffusions: Replica-symmetric fixed point and empirical Bayes. 2025.
[53] R. Kubo. The fluctuation-dissipation theorem. Reports on Progress in Physics, 29(1):255, 1966.
[54] Cédric Villani. Optimal Transport: Old and New, volume 338. Springer, 2008.
[55] Amir Dembo and Jean-Dominique Deuschel. Markovian perturbation, response and fluctuation dissipation theorem. Annales de l'IHP Probabilités et Statistiques, 46(3):822–852, 2010.
[56] Xian Chen and Chen Jia. Mathematical foundation of nonequilibrium fluctuation–dissipation theorems for inhomogeneous diffusion processes with unbounded coefficients. Stochastic Processes and their Applications, 130(1):171–202, 2020.
[57] Jean-François Le Gall. Brownian Motion, Martingales, and Stochastic Calculus, volume 274 of Graduate Texts in Mathematics. Springer, 2016.
[58] L. Chris G. Rogers and David Williams. Diffusions, Markov Processes, and Martingales: Itô Calculus, volume 2. Cambridge University Press, 2000.
[59] Hermann Brunner. Collocation Methods for Volterra Integral and Related Functional Differential Equations, volume 15. Cambridge University Press, 2004.
[60] Tianhao Wang, Xinyi Zhong, and Zhou Fan. Universality of approximate message passing algorithms and tensor networks. The Annals of Applied Probability, 34(4):3943–3994, 2024.
[61] Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge University Press, 2018.
[62] Ramon van Handel. Probability in high dimension. Lecture Notes (Princeton University), 2014.
[63] H. Kunita. Stochastic differential equations and stochastic flows of diffeomorphisms. École d'Été de Probabilités de Saint-Flour XII—1982, pages 143–303, 1984.
Dynamical mean-field analysis of adaptive Langevin diffusions: Replica-symmetric fixed point and empirical Bayes

Zhou Fan∗, Justin Ko†, Bruno Loureiro‡, Yue M. Lu§, Yandi Shen¶

Abstract

In many applications of statistical estimation via sampling, one may wish to sample from a high-dimensional target distribution that is adaptively evolving to the samples already seen. We study an example of such dynamics, given by a Langevin diffusion for posterior sampling in a Bayesian linear regression model with i.i.d. regression design, whose prior continuously adapts to the Langevin trajectory via a maximum marginal-likelihood scheme. Results of dynamical mean-field theory (DMFT) developed in our companion paper establish a precise high-dimensional asymptotic limit for the joint evolution of the prior parameter and law of the Langevin sample. In this work, we carry out an analysis of the equations that describe this DMFT limit, under conditions of approximate time-translation-invariance which include, in particular, settings where the posterior law satisfies a log-Sobolev inequality. In such settings, we show that this adaptive Langevin trajectory converges on a dimension-independent time horizon to an equilibrium state that is characterized by a system of scalar fixed-point equations, and the associated prior parameter converges to a critical point of a replica-symmetric limit for the model free energy. As a by-product of our analyses, we obtain a new dynamical proof that this replica-symmetric limit for the free energy is exact, in models having a possibly misspecified prior and where a log-Sobolev inequality holds for the posterior law.

Contents

1 Introduction
1.1 Summary of results
1.2 Further related literature
2 Model and main results
2.1 Bayesian linear model and adaptive Langevin dynamics
2.2 DMFT equations
2.3 Replica-symmetric characterization of equilibrium for a fixed prior
2.3.1 Approximately-TTI DMFT systems
2.3.2 Asymptotic MSE and free energy under a posterior LSI
2.4 Convergence of empirical Bayes
Langevin dynamics
2.4.1 A general condition for dimension-free convergence
2.4.2 Examples
3 Analysis of approximately-TTI DMFT systems
3.1 Analysis of θ-equation
3.1.1 Comparison with an auxiliary process
3.1.2 Convergence of the auxiliary process
3.2 Analysis of η-equation

∗Department of Statistics and Data Science, Yale University
†Department of Statistics and Actuarial Science, University of Waterloo
‡Département d'Informatique, École Normale Supérieure, PSL & CNRS
§Departments of Electrical Engineering and Applied Mathematics, Harvard University
¶Department of Statistics and Data Science, Carnegie Mellon University

arXiv:2504.15558v1 [math.ST] 22 Apr 2025

3.2.1 Comparison with an auxiliary process
3.2.2 Convergence of the auxiliary process
3.3 Completing the proof
4 Analysis of fixed-prior Langevin dynamics under LSI
4.1 Preliminaries
4.1.1 Properties of Langevin dynamics
4.1.2 Interpretation of the DMFT correlation and response
4.2 Posterior bounds and Wasserstein-2 convergence
4.3 Properties of the correlation and response
4.4 The DMFT system is approximately-TTI
4.5 Limit MSE and free energy
5 Analysis of empirical Bayes Langevin dynamics
5.1 General analysis under uniform LSI
5.2 Analysis of examples
A Proof of Theorem 2.3
B Correlation and response functions for a Gaussian prior
C Sufficient conditions for a log-Sobolev inequality
D Auxiliary lemmas

1 Introduction

Parameter estimation via Monte Carlo sampling is a common paradigm in statistical learning, arising for example in stochastic implementations of Expectation-Maximization estimation in latent variable models [1,2], and contrastive-divergence [3] and diffusion-based learning [4–7] of generative models for data. In these applications, one wishes to learn a parameter using Monte Carlo samples from an associated distribution on a high-dimensional space. Monte Carlo methods whose target distribution continuously adapts to the learned parameter are natural for such tasks, and we refer to [8–12] for several recent proposals of this form. The goal of our current work is to study the learning dynamics in a particular (classical) instance of this paradigm, namely the estimation of the distribution of regression coefficients in a high-dimensional regression model [13,14].
We will focus on the linear model $y = X\theta^* + \varepsilon$ with a latent and high-dimensional coefficient vector $\theta^* \in \mathbb{R}^d$, whose coordinates have an unknown "prior" distribution $g^*$. Estimation of this prior distribution is a classical example of empirical Bayes inference [15,16], and arises ubiquitously in genetic association analyses where $g^*$ represents the distribution of genetic effect sizes in linear mixed models for complex traits [17–24]. Two recent works [10,25] have established the statistical consistency of nonparametric maximum marginal-likelihood estimators
of $g^*$ in settings of high-dimensional regression designs $X \in \mathbb{R}^{n \times d}$, as $n, d \to \infty$. However, direct computation of this maximum marginal-likelihood estimate is intractable for general regression designs, motivating approaches based on approximate posterior inference schemes. We will investigate in this work a parametric analogue of a learning procedure proposed in [10], modeling the prior distribution via a parametric model $g(\cdot, \alpha)$ and applying an adaptive diffusion to estimate the parameter $\alpha \in \mathbb{R}^K$. This procedure will take the form of a Langevin diffusion
\[
\mathrm{d}\theta^t = \nabla_\theta \log P_{g(\cdot, \hat\alpha^t)}(\theta^t \mid X, y)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}b^t \tag{1}
\]
for sampling from the posterior distribution $P_{g(\cdot, \hat\alpha^t)}(\theta \mid X, y)$ of the regression coefficients, under a prior law $g(\cdot, \hat\alpha^t)$ whose parameter evolves according to a coupled continuous-time dynamics
\[
\mathrm{d}\hat\alpha^t = G\Big( \hat\alpha^t, \frac{1}{d}\sum_{j=1}^d \delta_{\theta^t_j} \Big)\,\mathrm{d}t. \tag{2}
\]
Here $G(\cdot)$ is a map that implements gradient-based maximum marginal-likelihood learning of $\alpha$ via the empirical distribution of coordinates of $\theta^t$, and we defer a discussion of this motivation to Section 2. The procedure may be understood as an approximation to an idealized dynamics
\[
\mathrm{d}\alpha^t = G(\alpha^t, P(\theta^t))\,\mathrm{d}t \tag{3}
\]
where $P(\theta^t)$ denotes the average law of the coordinates $\theta^t_1, \ldots, \theta^t_d$. For these idealized dynamics, the analyses of [10] may be adapted to show that the prior parameter $\alpha^t$ converges to a fixed point of the marginal log-likelihood, under certain conditions for the noise and regression design. Related results have also been shown recently in more general latent variable models in [8,9,12,26], which, in addition, provide convergence guarantees for particle approximations of the McKean-Vlasov type $\mathrm{d}\alpha^t = G\big(\alpha^t, \frac{1}{dM}\sum_{j=1}^d \sum_{m=1}^M \delta_{\theta^{m,t}_j}\big)\,\mathrm{d}t$, having $M$ parallel sampling chains $\{\theta^{1,t}\}_{t \ge 0}, \ldots, \{\theta^{M,t}\}_{t \ge 0}$ for the latent variable $\theta \in \mathbb{R}^d$, in the limit $M \to \infty$.
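To make the coupled dynamics (1)–(2) concrete, here is a toy Euler discretization. It assumes, purely for illustration, the Gaussian prior family $g(\cdot, \alpha) = \mathcal{N}(0, \alpha)$, for which $\nabla_\theta \log P_{g(\cdot,\alpha)}(\theta \mid X, y) = -X^\top(X\theta - y)/\sigma^2 - \theta/\alpha$, and a map $G$ that relaxes $\alpha$ toward the empirical second moment of the coordinates of $\theta^t$ (a moment-matching surrogate for gradient-based marginal-likelihood learning). The prior family, update map, step sizes, and clipping floor are all our own hypothetical choices, not the paper's specification.

```python
import numpy as np

def eb_langevin(X, y, sigma, alpha0, dt, n_steps, lr=0.5, alpha_min=0.05, seed=0):
    # Toy discretization of the adaptive Langevin dynamics (1)-(2) under the
    # (illustrative) Gaussian prior family g(., alpha) = N(0, alpha).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    alpha = alpha0
    for _ in range(n_steps):
        # Langevin step (1): gradient of the log posterior plus noise
        grad = -X.T @ (X @ theta - y) / sigma**2 - theta / alpha
        theta = theta + dt * grad + np.sqrt(2 * dt) * rng.normal(size=d)
        # prior-parameter update (2): driven by the empirical distribution
        # of theta's coordinates (here, through its second moment)
        alpha = max(alpha_min, alpha + lr * dt * (np.mean(theta**2) - alpha))
    return theta, alpha
```

On synthetic data with a well-conditioned design and small step size, $\hat\alpha^t$ tracks the empirical second moment of the Langevin sample; the DMFT analysis of this paper concerns the behavior of exactly this kind of coupled trajectory as $n, d \to \infty$.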
The aforementioned results are not fully satisfactory in our context of a high-dimensional regression model, and leave open the following two interesting questions about the original dynamics (1–2):

1. Single-chain propagation of chaos. In the limit of increasing dimensions $d \to \infty$, are the idealized dynamics (3) well-approximated by (2) using just a single Langevin chain $\{\theta^t\}_{t \ge 0}$ in $\mathbb{R}^d$?
2. Characterization of fixed points. Can the fixed points $\hat\alpha$ of (2) be explicitly characterized? Does (2) exhibit dimension-free convergence to these fixed points, and in what settings is the fixed point representing the maximum marginal-likelihood estimator of $\alpha$ unique?

The purpose of our work is to provide answers to these questions in the context of an i.i.d. regression design. Question 1 is addressed in our companion paper [27], which builds upon the recent results of [28,29] to formalize a dynamical mean-field theory (DMFT) approximation of (2) by (3) over dimension-independent time horizons $t \in [0,T]$, for a general class of such adaptive Langevin dynamics procedures. Our current paper addresses Question 2 by carrying out an analysis of the resulting DMFT system, under an assumption of a uniform log-Sobolev inequality for the posterior law.

1.1 Summary of results

Our main results provide an analysis of the DMFT equations that approximate the empirical Bayes Langevin dynamics (1–2) in the high-dimensional limit as $n, d \to \infty$ proportionally. En route to this analysis, we obtain also new results for the DMFT approximation of the standard non-adaptive Langevin diffusion (1) with a fixed prior $g(\cdot) \equiv g(\cdot, \alpha)$. We summarize
https://arxiv.org/abs/2504.15558v1
these results as follows:

1. In the setting of a non-adaptive Langevin diffusion, we formalize a condition of approximate time-translation-invariance (TTI) for the DMFT system. We perform an analysis of the dynamical fixed-point equations for the DMFT correlation and response functions under this condition, and show that they recover the static fixed-point equations for the free energy and posterior mean-squared-error predicted by a replica-symmetric ansatz [30, 31].

2. We show that a log-Sobolev inequality (LSI) for the posterior law provides a sufficient condition to guarantee the above approximate-TTI property for the DMFT system, and we discuss several settings of log-concavity, high noise, or large sample size where such an LSI holds. As a consequence, we obtain a new dynamical proof of the validity of the replica-symmetric predictions for the free energy and MSE in the Bayesian linear model with a possibly misspecified prior law, under such an LSI condition.

3. When the LSI holds uniformly over the posterior laws corresponding to the deterministic DMFT trajectory of {α^t}_{t≥0}, we show that the empirical Bayes estimate \hat{α}^t converges on a dimension-free time horizon to a critical point α^∞ of the replica-symmetric limit for the free energy. This is explicitly characterized by a system of scalar fixed-point equations, and we discuss examples of models where this critical point may or may not be unique.

We present and discuss these results and examples in further detail in Section 2.

1.2 Further related literature

Approximating the dynamical behavior of many degrees-of-freedom by an effective single-particle problem interacting self-consistently with its environment is an old idea in the statistical physics literature.
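As a minimal, purely illustrative instance of such an effective single-particle description (not the paper's DMFT system), the stationary Ornstein–Uhlenbeck diffusion has explicit correlation and response functions that depend only on the time lag and obey the fluctuation-dissipation relation; the rate parameter below is an arbitrary assumption:

```python
import numpy as np

# For the Ornstein-Uhlenbeck diffusion d theta^t = -lam*theta^t dt + sqrt(2) db^t
# at stationarity, the correlation is C(tau) = exp(-lam*tau)/lam and the impulse
# response is R(tau) = exp(-lam*tau). Thus R(tau) = -C'(tau), i.e. the
# fluctuation-dissipation relation holds exactly, and both functions are
# time-translation-invariant (they depend only on the lag tau).
lam = 0.7
tau = np.linspace(0.0, 5.0, 501)

C = np.exp(-lam * tau) / lam   # stationary correlation (variance 1/lam at tau=0)
R = np.exp(-lam * tau)         # linear response to an impulse perturbation

dC = np.gradient(C, tau)       # numerical derivative of the correlation
fdt_gap = np.max(np.abs(-dC[1:-1] - R[1:-1]))   # check R = -C' on the interior
```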
Relevant to our work is the development of this idea in the context of disordered systems, and in particular the study of high-dimensional Langevin dynamics of soft-spin variants of the Sherrington-Kirkpatrick model [32, 33] and the spherical p-spin model [34–37]. Mathematical proofs of these approximations were first shown for such models in the works of [38–40] using large deviations techniques, and more recently in generalized linear models close to our setting by [28, 29] using different methods around Approximate Message Passing algorithms and iterative Gaussian conditioning. In recent years, DMFT analyses have been applied to study Langevin dynamics and gradient-based optimization in many statistical models and applications, including Gaussian mixture classification [41], matrix and tensor PCA [42–44], phase retrieval and generalized linear models [45, 46], and learning in perceptron and neural network models [47–52]. These analyses have uncovered surprising phenomena about the efficacy of gradient-based methods and relationships to landscape complexity for high-dimensional non-convex problems [42]. Understanding the long-time behavior of DMFT systems, in particular in low-temperature regimes characterized by aging or metastability, has been a primary goal in both the physics and mathematics literature since the original inception of these methods (see [53, 54] and references within for a review). Mathematically rigorous analyses of long-time dynamics have been obtained previously for spherical 2-spin models in [55] and related statistical models in [44, 56] by leveraging the rotational invariance of these models. However, such analyses of DMFT are (to our knowledge) quite rare in more general settings. Our work
takes a step towards filling this gap, by providing a rigorous analysis of the DMFT approximation to Langevin dynamics in a more general model without a rotationally invariant prior, in settings where approximate-TTI holds. As a by-product of our analyses, we obtain a new proof of a replica formula [57] for the free energy and posterior MSE in the Bayesian linear model. This proof is different from several existing proofs of this result [58–62] and from the Gaussian interpolation methods of Guerra-Talagrand [63,64], and is based instead on deducing a static fixed-point equation from the dynamical fixed-point equations of DMFT. Our current result is specific to a high-temperature regime where a LSI holds for the posterior law, but it applies to models where the prior law is misspecified [30,31,65]. In this misspecified context, the closest mathematical result of which we are aware is [66] which proved the replica-symmetric predictions in a setting where the posterior is log-concave. A complete large deviations analysis of the free energy in a related rank-one matrix estimation model with misspecified prior and noise was carried out in [67], showing that in general the asymptotic free energy is characterized by a Parisi-type variational problem whose solution may not be replica-symmetric. Our results imply for the linear model that this solution must be replica-symmetric under our assumed condition of a LSI for the posterior law. In the context of adaptive empirical Bayes Langevin dynamics, our results complement the previous analyses of [10] for more general regression designs, and of [8, 12] in general latent variable models. We deduce a dimension-free convergence rate, in contrast to the results of [10] that established convergence (for a nonparametric variant of this algorithm) on a time horizon growing linearly with n, d, and without employing a time-dependent and decaying learning rate as in [12]. 
Under the additional mean-field structure of our current model, we are able to establish convergence of a single-chain implementation of the empirical Bayes Langevin dynamics using (2), rather than for an idealized dynamics as studied in [10, 12] or for an implementation using M parallel chains as studied in [8]. We are also able to give an explicit characterization and analysis of the fixed points to which the dynamics of {\hat{α}^t}_{t≥0} may converge.

Acknowledgments

This research was initiated during the “Huddle on Learning and Inference from Structured Data” at ICTP Trieste in 2023. We’d like to thank the huddle organizers Jean Barbier, Manuel Sáenz, Subhabrata Sen, and Pragya Sur for their hospitality and many helpful discussions. We are very grateful to Andrea Montanari who suggested to us the approach to prove Theorem 2.5. We’d like to thank also Pierfrancesco Urbani, Francesca Mignacco and Emanuele Troiani for useful discussions. Z. Fan was supported in part by NSF DMS–2142476 and a Sloan Research Fellowship. J. Ko was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chairs programme, and the Ontario Research Fund [RGPIN-2020-04597, DGECR-2020-00199]. B. Loureiro was supported by the French government, managed by the National Research Agency (ANR), under the France
2030 program with the reference “ANR-23-IACL-0008” and the Choose France - CNRS AI Rising Talents program. Y. M. Lu was supported in part by the Harvard FAS Dean’s Competitive Fund for Promising Scholarship and by a Harvard College Professorship. Notational conventions In the context of the posterior law Pg(θ|X,y) for a given prior g(·), we will write ⟨f(θ)⟩=E[f(θ)|X,y] for the posterior expectation conditioning on the “quenched” variables X,y. In the context of Langevin dynamics, we will write similarly ⟨f(θt)⟩=E[f(θt)|X,y] also for an expectation conditioning on X,y. In some arguments it is convenient to consider the expectation also conditioned on the initial condition θ0, and we will denote this by ⟨f(θt)⟩x=E[f(θt)|X,y,θ0=x]. We reserve EandPfor the full expectation and probability also over X,y,θ∗,ε. Constants C, C′, c, c′>0 throughout are independent of the dimensions n, d. For any random variable ξ in a complete and separable normed vector space ( M,∥ · ∥), we will use P(ξ) to denote its law. P2(M) is the space of probability distributions Pon (M,∥ · ∥) such that Eξ∼P∥ξ∥2<∞, and W1(·) and W2(·) denote the Wasserstein-1 and Wasserstein-2 metrics on P2(M). Forf:Rd→R,∇f∈Rdis its gradient, ∇2f∈Rd×dits Hessian, and ∇3f∈Rd×d×dthe symmetric tensor of its 3rd-order partial derivatives. For f:R×RK→R,∂θf(θ, α) and ∇αf(θ, α) denote its partial derivatives with respect to θ∈Randα∈RK.∥ · ∥ 2is the Euclidean norm for vectors and vectorized Euclidean norm for matrices and tensors. Tr Mand∥M∥opare the matrix trace and Euclidean operator norm. C([0, T],Rd) is the space of continuous functions f: [0, T]→Rdequipped with the norm of uniform convergence ∥f∥∞= supt∈[0,T]∥f(t)∥2.Ck(Rd,Rm) is the space of functions f:Rd→Rmthat are k- times continuously-differentiable. For two probability densities p, qonRd, DKL(p∥q) =R q(logq−logp) is the Kullback-Leibler divergence. For a scalar random variable X, Var X=EX2−(EX)2and Ent X= EXlogX−EXlogEX. 
For a random vector X∈Rk, Cov X=EXX⊤−(EX)(EX)⊤∈Rk×k. 2 Model and main results 2.1 Bayesian linear model and adaptive Langevin dynamics We study a linear model y=Xθ∗+ε∈Rn(4) with random effects θ∗∈Rd. Modeling θ∗ 1, . . . , θ∗ diid∼gfor a prior density g(·) on the real line and modeling ε∼ N(0, σ2I) as Gaussian noise, Bayesian inference for θ∗is based upon the posterior density Pg(θ|X,y) =1 Pg(y|X)1 (2πσ2)n/2exp −1 2σ2∥y−Xθ∥2 2dY j=1g(θj). (5) 5 HerePg(y|X) is the marginal likelihood of y(i.e. model evidence or partition function), given by Pg(y|X) =Z1 (2πσ2)n/2exp −1 2σ2∥y−Xθ∥2 2dY j=1g(θj)dθj. (6) We will study (overdamped) Langevin dynamics for sampling from the posterior density (5) in two settings, the first in which the prior law g(·) is fixed but may be misspecified, and the second in which this prior law may adapt to the Langevin trajectory to implement empirical Bayes learning from the observed data ( X,y). In the former setting, we consider the Langevin dynamics dθt=∇θ −1 2σ2∥y−Xθt∥2 2+dX j=1logg(θt j) dt+√ 2 dbt(7) where {bt}t≥0is a standard Brownian motion on Rd. In the latter setting, we will model the prior via a parametric model g(·, α) :α∈RK (8) and consider the empirical Bayes Langevin dynamics dθt=∇θ −1 2σ2∥y−Xθt∥2 2+dX j=1logg(θt j,bαt) dt+√ 2 dbt(9) dbαt=∇α1 ddX j=1logg(θt j,bαt)−R(bαt) dt. (10) The equation (10) describes a
continuous-time evolution of the prior parameter α∈RKthat is coupled to the Langevin diffusion (9) of the posterior sample, and R:RK→Ris a possible smooth regularizer. (In this work, we will be interested mostly in the behavior of these dynamics when R(α)≡0, and we introduce R(α) for theoretical purposes to confine the dynamics of bαtin certain examples.) To motivate the dynamics (9–10) as a procedure that implements maximum marginal-likelihood learning ofα∈RK, we may consider the free energy (i.e. negative marginal log-likelihood) bF(α) =−1 dlogPg(·,α)(y|X) (11) as a function of the prior parameter α∈RK. By the Gibbs variational principle (c.f. [68, Proposition 4.7]), bF(α) = inf q∈P∗(Rd)V(q, α) (12) where P∗(Rd) is the space of all probability densities on Rd, and V(q, α) =1 dZ1 2σ2∥y−Xθ∥2−dX j=1logg(θj, α) + log q(θ) q(θ)dθ+n 2dlog 2πσ2(13) is the Gibbs free energy corresponding to the prior g(·) =g(·, α). We propose to implement maximum- likelihood learning of α∈RKby minimizing the regularized Gibbs free energy V(q, α) +R(α) jointly over (q, α), via a gradient flow in the Wasserstein-2 geometry for q∈ P∗(Rd) and the standard Euclidean geometry forα∈RK. The resulting gradient flow equations take the form d dtqt=−d·gradW2 qV(qt, αt) :=∇θ· qt(θ)∇θ1 2σ2∥y−Xθ∥2 2−dX j=1logg(θj, αt) + Tr∇2 θqt(θ),(14) d dtαt=−∇α[V(qt, αt) +R(αt)] =∇αZ1 ddX j=1logg(θj, αt)qt(θ)dθ−R(αt) . (15) 6 In (14), we identify gradW2 qV(q, α) with the Fokker-Planck equation for the density evolution of θtunder the Langevin diffusion (7) with prior law g(·) =g(·, α), via its variational interpretation put forth in [69]. Then (9–10) may be understood as a particle implementation of (14–15) that uses a single Langevin trajectory {θt}t≥0to simulate the dynamics of qtin (14), and that uses the empirical distribution1 dPd j=1δθt jto approximate the expectation over θ∼qtin the dynamics of αtin (15). 
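The Gibbs variational principle (12) underlying this derivation can be checked numerically in a one-dimensional toy instance (n = d = 1, grid quadrature; the constants of (13) are absorbed into the normalization of V below). The prior, observation, and trial density are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy check of the Gibbs variational principle (12): the free energy
# -log P_g(y|x) equals the minimum over densities q of the Gibbs free
# energy V(q), attained at the posterior.
sigma2, x_obs, y_obs = 0.5, 1.0, 0.8
theta = np.linspace(-8.0, 8.0, 4001)
h = theta[1] - theta[0]

g = np.exp(-theta**2 / 2) / np.sqrt(2 * np.pi)  # prior N(0, 1)
lik = np.exp(-(y_obs - x_obs * theta)**2 / (2 * sigma2)) \
    / np.sqrt(2 * np.pi * sigma2)               # likelihood of y given theta

Z = np.sum(lik * g) * h      # marginal likelihood P_g(y | x)
F = -np.log(Z)               # free energy

def gibbs_V(q):
    """Gibbs free energy E_q[-log lik - log g + log q] by quadrature."""
    return np.sum(q * (-np.log(lik) - np.log(g) + np.log(q))) * h

posterior = lik * g / Z
V_post = gibbs_V(posterior)  # equals F by the variational principle

trial = np.exp(-(theta - 2.0)**2 / 2) / np.sqrt(2 * np.pi)  # mismatched q
V_trial = gibbs_V(trial)     # strictly larger than F
```

Here V_trial − V_post is (up to quadrature error) the KL divergence from the trial density to the posterior, which is strictly positive unless the two coincide.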
An algorithm similar to (9–10) was introduced in [10], with some additional reparametrization ideas to allow for nonparametric modeling of the prior g(·). Here, to simplify technical considerations, we restrict our study to parametric prior models of the form (8). 2.2 DMFT equations The empirical Bayes Langevin diffusion (9–10) is an example of a general class of adaptive Langevin diffusions that we study in our companion work [27]. In particular, the gradient equation (10) for bαtis a function of the empirical distribution of coordinates θt, dbαt=G bαt,1 ddX j=1δθt j dt where G:RK× P2(R)→RKis the gradient map G(α,P) =Eθ∼P[∇αlogg(θ, α)]− ∇R(α). Our analyses will rely on a system of dynamical mean-field theory (DMFT) equations, formalized in [27] and building upon the results of [28,29], that describes a deterministic evolution of a prior parameter αt∈RK and a univariate law P(θt)∈ P2(R) that approximate ( bαt,1 dPd j=1δθt j) in the large system limit n, d→ ∞ . This approximation will hold under the following assumptions, which we assume throughout this work. Assumption 2.1 (Linear model and initial conditions) . (a) (Asymptotic scaling) lim n,d→∞n d=δ∈(0,∞). (b) (Random design) X= (xij)∈Rn×dhas independent entries satisfying E[xij] = 0, E[x2 ij] =1 d, and ∥√ dxij∥ψ2≤Cfor a constant C >0, where ∥ · ∥ψ2is the sub-Gaussian norm. (c) (Bayesian linear model) θ∗,εare independent of
each other and of X, and y=Xθ∗+ε. The entries of θ∗,εare distributed as θ∗ 1, . . . , θ∗ diid∼g∗, ε 1, . . . , ε niid∼ N(0, σ2) (16) for some σ2>0 and probability density g∗(both fixed and independent of n, d), where g∗satisfies the log-Sobolev inequality Entθ∗∼g∗[f(θ∗)2]≤CLSIEθ∗∼g∗[(f′(θ∗))2] for all f∈C1(R). (17) (d) (Initial conditions) The initialization θ0is independent of X,θ∗,ε, and θ0 1, . . . , θ0 diid∼g0 (18) for some probability density g0(fixed and independent of n, d) with finite entropyR g0logg0and finite moment-generating-function in a neighborhood of 0. The initialization bα0satisfies lim n,d→∞bα0=α0 a.s. for a deterministic parameter α0∈RK. Assumption 2.2 (Prior model and regularizer) . (a) In the context of a fixed prior, g(θ) is strictly positive and thrice continuously-differentiable, and (logg)′′′(θ) is uniformly H¨ older continuous over θ∈R. For a constant C >0 and all θ∈R, |(logg)′(θ)| ≤C(1 +|θ|),|(logg)′′(θ)| ≤C, |(logg)′′′(θ)| ≤C, and for some constants r0, c0>0, −(logg)′′(θ)≥c0for all |θ| ≥r0. 7 (b) In the context of an adaptive prior, g(θ, α) is strictly positive, R(α) is nonnegative, and both are thrice continuously-differentiable. For a constant C >0 and all ( θ, α)∈R×RK, ∥∇(θ,α)logg(θ, α)∥2≤C(1 +|θ|+∥α∥2), ∥∇R(α)∥2≤C(1 +∥α∥2).(19) Furthermore, for each compact subset S⊂RK,θ7→ ∇3 (θ,α)logg(θ, α) is H¨ older-continuous uniformly over ( θ, α)∈R×S, and for some constants C(S), r0(S), c0(S)>0, ∥∇2 (θ,α)logg(θ, α)∥2≤C(S),∥∇3 (θ,α)logg(θ, α)∥2≤C(S) for all ( θ, α)∈R×S, −∂2 θlogg(θ, α)≥c0(S) for all |θ| ≥r0(S) and α∈S, ∥∇2R(α)∥2≤C(S),∥∇3R(α)∥2≤C(S) for all α∈S.(20) In particular, Assumption 2.2(b) requires that g(·)≡g(·, α) satisfies Assumption 2.2(a) for each fixed prior parameter α∈RK. We assume the LSI condition (17) for the true prior g∗to ensure concentration of the free energy (c.f. 
Proposition 4.11), and we clarify that the conditions of Assumption 2.2(a) imply that the modeled prior g(·) must also satisfy an LSI of the form (17), as reviewed in Lemma C.1. Under the above Assumptions 2.1 and 2.2(b), the DMFT limit for (9–10) is described by the following construction: Let θ∗∼g∗, θ0∼g0, ε ∼ N(0, σ2) (21) denote independent scalar variables with the distributions (16) and (18), and let δ= limn dbe as in As- sumption 2.1. Let {bt}t≥0,{ut}t≥0, and ( w∗,{wt}t≥0) be centered univariate Gaussian processes indepen- dent of each other and of ( θ∗, θ0, ε), where {bt}t≥0is a standard Brownian motion on R, and{ut}t≥0and (w∗,{wt}t≥0) have covariance kernels E[utus] =Cη(t, s),E[wtws] =Cθ(t, s),E[wtw∗] =Cθ(t,∗),E[(w∗)2] =Cθ(∗,∗) (22) defined self-consistently in (28) below. We consider a system of stochastic differential equations dθt=h −δ σ2(θt−θ∗) +∂θlogg(θt, αt) +Zt 0Rη(t, s)(θs−θ∗)ds+uti dt+√ 2 dbt(23) d∂θt ∂us = −δ σ2−∂2 θlogg(θt, αt)∂θt ∂us+Zt sRη(t, s′)∂θs′ ∂usds′ dt (24) for univariate processes {θt}t≥0and{∂θt ∂us}t≥s≥0adapted to the filtration Fθ t:=F({bs}s≤t,{us}s≤t, θ∗, θ0), with the initial conditions θt|t=0=θ0,∂θt ∂us t=s= 1. These are driven by a deterministic and continuous RK-valued process {αt}t≥0representing the asymptotic limit of {bαt}t≥0. We consider likewise univariate processes {ηt}t≥0and{∂ηt ∂ws}t≥s≥0defined by ηt=−1 σ2Zt 0Rθ(t, s) ηs+w∗−ε ds−wt(25) ∂ηt ∂ws=−1 σ2hZt sRθ(t, s′)∂ηs′ ∂wsds′−Rθ(t, s)i (26) adapted to the filtration Fη t:=F({ws}s≤t, w∗, ε). The deterministic process {αt}t≥0above is defined self- consistently by d
dtαt=G(αt,P(θt)),G(α,P) =Eθ∼P[∇αlogg(θ, α)]− ∇R(α) (27) 8 with initial condition αt|t=0=α0given in Assumption 2.2, where P(θt) is the law of θtin (23). The covariance and response functions Cθ, Cη, Rθ, Rηare also defined for all t≥s≥0 self-consistently via the above processes by Cθ(t, s) =E[θtθs], C θ(t,∗) =E[θtθ∗], C θ(∗,∗) =E[(θ∗)2], Cη(t, s) =δ σ4E[(ηt+w∗−ε)(ηs+w∗−ε)] Rθ(t, s) =Eh∂θt ∂usi , R η(t, s) =δ σ2Eh∂ηt ∂wsi .(28) This DMFT system (22–28) describes the n, d→ ∞ limit of the empirical Bayes Langevin dynamics (9–10). In the setting of a fixed prior g(·)≡g(·, α0), the DMFT limit for the standard Langevin diffusion (7) is the same, upon replacing G(α,P) in (27) by G(α,P) = 0 so that αt=α0for all t≥0. Fixing any time horizon T >0, let us set η∗=−w∗ and denote by P(θ∗,{θt}t∈[0,T])∈ P2(R×C([0, T],R)),P(η∗, ε,{ηt}t∈[0,T])∈ P2(R×R×C([0, T],R)) the joint laws of sample paths ( θ∗,{θt}t∈[0,T]) and ( η∗, ε,{ηt}t∈[0,T]) in this DMFT system. We write θ∗ j, θt j, εi, η∗ i, ηt ifor the coordinates of θ∗,θt,ε,η∗=Xθ∗,ηt=Xθt, andW2→for Wasserstein-2 convergence in the spaces P2(R×C([0, T],R)) and P2(R×R×C([0, T],R)) as n, d→ ∞ . The main result we will use from our companion work [27] is summarized in the following theorem. Theorem 2.3. (a) Suppose Assumptions 2.1 and 2.2(a) hold, and identify g(·)≡g(·, α0). Let{θt}t≥0be the solution to the dynamics (7) with fixed prior g(·), and denote η∗=Xθ∗andηt=Xθt. Then for each fixed T≥0, there exists a solution up to time Tof the DMFT system (22–28) with (27) replaced byG(α,P) = 0 , such that almost surely as n, d→ ∞ , 1 ddX j=1δθ∗ j,{θt j}t∈[0,T]W2→P(θ∗,{θt}t∈[0,T]),1 nnX i=1δη∗ i,εi,{ηt i}t∈[0,T]W2→P(η∗, ε,{ηt}t∈[0,T]). (29) (b) Suppose Assumptions 2.1 and 2.2(b) hold, and let {θt,bαt}t≥0be the solution to the empirical Bayes Langevin dynamics (9–10). 
Then for each fixed T >0, there exists a solution up to time Tof the DMFT system (22–28) such that almost surely as n, d→ ∞ , (29) holds and also {bαt}t∈[0,T]→ {αt}t∈[0,T]inC([0, T],RK). Both parts of this theorem follow from [27, Theorem 2.5], and we explain the details of the reduction to [27, Theorem 2.5] in Appendix A. The above solutions to the DMFT systems are unique in certain domains of exponential growth for {αt}and for the correlation and response functions, and we refer readers to [27, Theorem 2.4] for details of this uniqueness claim. For our analyses of dynamics with fixed prior g(·) in the setting of Theorem 2.3(a), we will require a second result from [27] that gives an interpretation for the DMFT response functions Rθ(t, s) and Rη(t, s) as coordinate averages of single-particle responses in the Langevin diffusion (7). We defer a statement of this result to Section 4.1. 2.3 Replica-symmetric characterization of equilibrium for a fixed prior This and the next section describe the main results of our current paper. We discuss results pertaining to the dynamics (7) with a fixed prior g(·) in this section, and results pertaining to the empirical Bayes dynamics (9–10) in Section 2.4 to follow. 9 2.3.1 Approximately-TTI DMFT systems We first introduce a set of conditions for the correlation
and response functions of the DMFT system that characterize an approximate time-translation-invariance (TTI) property. Under these conditions, we establish convergence of the joint law of ( θ∗, θt) in the DMFT equations to a replica-symmetric fixed point ast→ ∞ . Definition 2.4. In the setting of a fixed prior g(·) [i.e. with G(α,P) = 0 in (27)], the solution of the DMFT system (22–28) is approximately-TTI if it satisfies the following conditions: 1. There exists a scalar value cθ(∗)∈Rand functions ctti θ, ctti η: [0,∞)→Rsuch that, for some ε: [0,∞)→[0,∞) satisfying lim s→∞ε(s) = 0 and for all t≥s≥0, |Cθ(t, s)−ctti θ(t−s)| ≤ε(s), (30) |Cη(t, s)−ctti η(t−s)| ≤ε(s), (31) |Cθ(s,∗)−cθ(∗)| ≤ε(s). (32) Furthermore, there exist values ctti θ(∞), ctti η(∞)≥0 and finite positive measures µθ, µηsupported on [ι,∞) for some ι >0 (strictly) such that ctti θ(τ) =ctti θ(∞) +Z∞ ιe−aτdµθ(a), ctti η(τ) =ctti η(∞) +Z∞ ιe−aτdµη(a). (33) 2. There exist functions rtti θ, rtti η: [0,∞)→Rsuch that, for some ε: [0,∞)→[0,∞) satisfying limt→∞ε(t) = 0, Zt 0|Rθ(t, s)−rtti θ(t−s)|ds≤ε(t), (34) Zt 0|Rη(t, s)−rtti η(t−s)|ds≤ε(t). (35) Furthermore, rtti θ, rtti η, ctti θ, ctti ηsatisfy the fluctuation-dissipation relations rtti θ(τ) =−ctti θ′(τ), rtti η(τ) =−ctti η′(τ). (36) We show that if the DMFT system is approximately-TTI in the above sense, then its t→ ∞ limit is characterized by a system of “static” scalar fixed-point equations. To describe this characterization, consider a scalar Gaussian convolution model y=θ∗+z∈R. (37) Let Pg,ω(θ|y) =1 Pg,ω(y)rω 2πexp −ω 2(y−θ)2 g(θ) (38) be the posterior distribution of θin this model, assuming a prior law θ∼g(·) and independent Gaussian noise z∼ N(0, ω−1), where Pg,ω(y) =Zrω 2πexp −ω 2(y−θ)2 g(θ)dθ (39) denotes the marginal density of yunder these assumptions. 
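The posterior (38) in this scalar Gaussian convolution model is one-dimensional, so its moments can be computed by direct quadrature for any prior. The sketch below (the helper name `posterior_moments` and all parameter values are ours) validates the quadrature against the standard conjugate closed form for a Gaussian prior N(0, τ²), where ⟨θ⟩_{g,ω} = ωy/(ω + 1/τ²) and the posterior variance is 1/(ω + 1/τ²):

```python
import numpy as np

# Posterior mean and variance of P_{g,omega}(theta | y) in the scalar model
# (38), by grid quadrature; the prior is passed as its log-density up to an
# additive constant (normalizing constants cancel in the weights).
def posterior_moments(y, omega, log_g, grid):
    """First two posterior moments of theta given y on a quadrature grid."""
    log_w = -0.5 * omega * (y - grid) ** 2 + log_g(grid)
    w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
    w /= w.sum()                      # normalize the quadrature weights
    mean = np.sum(w * grid)
    var = np.sum(w * (grid - mean) ** 2)
    return mean, var

grid = np.linspace(-10.0, 10.0, 20001)
tau2, omega, y = 2.0, 1.5, 0.9

mean, var = posterior_moments(y, omega, lambda t: -t**2 / (2 * tau2), grid)

# Conjugate Gaussian closed form for the prior N(0, tau2).
mean_exact = omega * y / (omega + 1.0 / tau2)
var_exact = 1.0 / (omega + 1.0 / tau2)
```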
Let the true model be y=θ∗+zwith θ∗∼g∗ and independent noise z∼ N(0, ω−1 ∗), and denote by Pg∗,ω∗;g,ω(θ∗, θ) (40) the joint law of the true parameter θ∗and a posterior sample θunder the generating process θ∗∼g∗, z∼ N(0, ω−1 ∗) (independent) ⇒ y=θ∗+z⇒ θ|y∼Pg,ω(· |y) (41) 10 (where θ|yis defined with misspecified prior law g(·) and misspecified noise variance ω−1). We write ⟨f(θ)⟩g,ωfor the posterior average with respect to Pg,ω(· |y) depending implicitly on y, andEg∗,ω∗f(y) for the expectation under the true model y=θ∗+z. Thus, an expectation over the joint law Pg∗,ω∗;g,ωin (40) takes the form E(θ∗,θ)∼Pg∗,ω∗;g,ωf(θ∗, θ) =Eg∗,ω∗⟨f(θ∗, θ)⟩g,ω. Theorem 2.5. Suppose Assumptions 2.1 and 2.2(a) hold. Consider the Langevin diffusion (7) with a fixed prior g(·), and suppose that the corresponding solution of the DMFT system in Theorem 2.3(a) is approximately-TTI. Define, from the quantities of Definition 2.4, mse = ctti θ(0)−ctti θ(∞), mse∗=E[θ∗2]−2cθ(∗) +ctti θ(∞), ymse =σ4 δ ctti η(0)−ctti η(∞) , ymse∗=σ4 δ 2ctti η(0)−ctti η(∞) −σ2.(42) Then there are unique values ω, ω∗>0(given mse,mse∗) for which mse,mse∗, ω, ω ∗satisfy the fixed-point equations ω=δ(σ2+ mse)−1, ω ∗=δ(σ2+ mse ∗)−1, mse = Eg∗,ω∗[⟨(θ− ⟨θ⟩g,ω)2⟩g,ω], mse∗=Eg∗,ω∗[(θ∗− ⟨θ⟩g,ω)2].(43) The quantities ymse ,ymse∗are related to these fixed points by ymse = σ2 1−ωσ2 δ , ymse∗=σ2+ωσ4 δω ω∗−2 . (44) Furthermore, letting P(θ∗, θt)be the joint law of (θ∗, θt)in the DMFT system, as t→ ∞ , P(θ∗, θt)W2→Pg∗,ω∗;g,ω.
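In the well-specified Gaussian case g = g∗ = N(0, τ²), the scalar-channel posterior variance is 1/(ω + 1/τ²) for every y, so mse = mse∗ and ω = ω∗, and the fixed-point equations (43) collapse to a single scalar equation that plain iteration solves. The parameter values below are illustrative assumptions:

```python
# Simple iteration of the fixed-point equations (43) in the well-specified
# Gaussian case g = g* = N(0, tau2), where mse = posterior variance
# = 1/(omega + 1/tau2) and omega = delta / (sigma^2 + mse).
delta, sigma2, tau2 = 2.0, 0.25, 1.0

mse = tau2                            # initialize at the prior variance
for _ in range(200):
    omega = delta / (sigma2 + mse)    # first equation of (43)
    mse_new = 1.0 / (omega + 1.0 / tau2)
    if abs(mse_new - mse) < 1e-14:
        mse = mse_new
        break
    mse = mse_new

# Residual of the combined fixed-point equation mse * (omega + 1/tau2) = 1.
residual = abs(mse * (delta / (sigma2 + mse) + 1.0 / tau2) - 1.0)
```

The iteration map is a contraction here, so the fixed point is unique for these parameter values; as noted in Remark 2.6, uniqueness need not hold in general.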
(45) Remark 2.6. Let⟨f(θ)⟩and⟨f(θ,θ′)⟩denote the expectation over independent samples θ,θ′∼Pg(· |X,y) from the posterior law (5) with a fixed prior g(·). Then the asymptotic overlaps lim n,d→∞d−1⟨θ⊤θ⟩, lim n,d→∞d−1⟨θ⊤θ′⟩, lim n,d→∞d−1⟨θ⊤θ∗⟩ are predicted in the DMFT system, respectively, by ctti θ(0) = lim t→∞Cθ(t, t), ctti θ(∞) = lim t,τ→∞Cθ(t, t+τ), c θ(∗) = lim t→∞Cθ(t,∗). Thus mse and mse ∗as defined in (42) represent the DMFT predictions for lim n,d→∞d−1⟨∥θ− ⟨θ⟩∥2 2⟩, lim n,d→∞d−1∥θ∗− ⟨θ⟩∥2 2. Similarly, one may check that ymse and ymse∗as defined in (42) represent the DMFT predictions for lim n,d→∞n−1⟨∥Xθ−X⟨θ⟩∥2 2⟩, lim n,d→∞n−1∥Xθ∗−X⟨θ⟩∥2 2. These fixed-point equations (43) that characterize mse and mse ∗coincide with those derived via the replica method (with misspecified prior) under a replica-symmetric ansatz, c.f. [30,31,65]. We clarify that Theorem 2.5 does not claim that the joint solution (mse ,mse∗, ω, ω ∗) of the fixed-point equations (43) is unique. In settings with multiple such fixed points, the theorem pertains to the specific choice of this fixed point that arises from the t→ ∞ limit of the DMFT dynamics. 11 2.3.2 Asymptotic MSE and free energy under a posterior LSI To motivate Definition 2.4, it is illustrative to consider the example of a fixed Gaussian prior g(·), where the Langevin diffusion for θtis a linear Ornstein-Uhlenbeck process. Then Cθ, Cη, Rθ, Rηof the DMFT system may be computed explicitly, as we show in Appendix B, and it is directly checked from their explicit forms that the DMFT system is indeed approximately-TTI. Generalizing this Gaussian prior example, we consider a setting where the posterior distribution (5) satisfies a log-Sobolev inequality. Assumption 2.7. There exists a constant CLSI>0 and a X-dependent event E(X) holding almost surely for all large n, d, for which (a) (LSI for posterior) On E(X), for all y∈Rn, the posterior distribution Pg(θ|X,y) satisfies Ent[f(θ)2|X,y]≤CLSIE[∥∇f(θ)∥2 2|X,y] for all f∈C1(Rd). 
(46) (b) (LSI for larger noise) On E(X), for every noise variance ˜ σ2∈[σ2,∞), (46) holds also for the posterior lawPg(θ|X,y) defined with ˜ σ2in place of σ2(with a uniform constant CLSI>0 for all ˜ σ2≥σ2). For clarity of interpretation, we list in the following proposition three concrete settings in which these LSI conditions hold by currently known techniques. A proof of Proposition 2.8 is given in Appendix C. Proposition 2.8. Suppose Xsatisfies Assumption 2.1(a–b), and g(·)satisfies Assumption 2.2(a). Let C, r0, c0>0be the constants of Assumption 2.2(a), and define C0=2.01 c0exp8r2 0(c0+C)2 πc0 . Suppose, in addition, that at least one of the following conditions hold: (a) (global log-concavity) −(logg)′′(θ)≥c0for all θ∈R, or (b) (high noise) σ2> C 0(4√ δ1{δ >1}+ (√ δ+ 1)21{δ≤1}), or (c) (large sample size) δ >1and(√ δ−1)2>4C0C√ δ. Then Assumption 2.7 holds for a constant CLSI>0depending only on δ, C, r 0, c0. Under the posterior LSI condition of Assumption 2.7(a), we verify that the solution of the DMFT system must be approximately-TTI in the sense of Definition 2.4. Theorem 2.9. Consider the dynamics (7) with a fixed prior g(·), and suppose Assumptions 2.1, 2.2(a), and 2.7(a) hold. Then the DMFT system given by Theorem 2.3(a) is
approximately-TTI, where the statements of Definition 2.4 hold with ε(t) =Ce−ctand some constants C, c > 0. As a consequence, we obtain the following corollary showing that the asymptotic free energy and mean- squared-errors associated to the posterior distribution Pg(θ|X,y) in the linear model (with a possibly misspecified prior) are given by their replica-symmetric predictions, and furthermore the joint empirical distribution of coordinates of θ∗and a posterior sample θ∼Pg(· |X,y) converges to the preceding law Pg∗,ω∗;g,ωin the scalar Gaussian convolution model. (Our analysis for the free energy uses an I-MMSE relation, for which we require the posterior LSI condition of Assumption 2.7(b) for an extended range of noise variances.) Corollary 2.10. Suppose Assumptions 2.1, 2.2(a), and 2.7(a) hold for dynamics (7) with a fixed prior g(·). LetPg(y|X)be the marginal likelihood of yin (6), let ⟨f(θ)⟩denote the posterior expectation under Pg(θ|X,y), and define MSE = d−1⟨∥θ− ⟨θ⟩∥2 2⟩, MSE ∗=d−1∥θ∗− ⟨θ⟩∥2 2 YMSE = n−1⟨∥Xθ− ⟨Xθ⟩∥2 2⟩, YMSE ∗=n−1∥Xθ∗− ⟨Xθ⟩∥2 2.(47) Letmse,mse∗, ω, ω ∗,ymse ,ymse∗be as defined by (42–43) for the corresponding (approximately-TTI) DMFT system, let Pg,ω(y)be the marginal density of yin (39), and let Eg∗,ω∗denote the expectation over y=θ∗+z in (37) with θ∗∼g∗andz∼ N(0, ω−1 ∗). 12 (a) Almost surely, lim n,d→∞MSE = mse , lim n,d→∞MSE ∗= mse ∗, lim n,d→∞YMSE = ymse , lim n,d→∞YMSE ∗= ymse∗, lim n,d→∞ W21 ddX j=1δ(θ∗ j,θj),Pg∗,ω∗;g,ω2 = 0. (48) (b) If furthermore Assumption 2.7(b) holds, then almost surely, lim n,d→∞1 dlogPg(y|X) =Eg∗,ω∗logPg,ω(y) +1 2 δ+ log2π ω−δlog2πδ ω+ (1−δ)ω ω∗+ωσ2ω ω∗−2 . As discussed in Remark 2.6, the fixed-point equations characterizing these limits of the mean-squared- error quantities MSE ,MSE ∗,YMSE ,YMSE ∗are those derived via the replica method under an assumption of replica symmetry. One may verify that the limit of the free energy in part (b) agrees also with the replica prediction that was computed in [31, Eq. 
(20)]. The proof of Theorem 2.5 is given in Section 3, and the proofs of Theorem 2.9 and Corollary 2.10 are given in Section 4. 2.4 Convergence of empirical Bayes Langevin dynamics We now discuss results pertaining to the empirical Bayes Langevin dynamics (9–10) with a data-adaptive evolution of the prior law. 2.4.1 A general condition for dimension-free convergence We impose the following strengthening of Assumption 2.7, ensuring that {αt}t≥0of the DMFT solution remains confined to a bounded domain where the posterior log-Sobolev conditions of Assumption 2.7 hold uniformly. Assumption 2.11. Let{αt}t≥0be the α-component of the DMFT system. There exists a compact subset S⊂RKsuch that αt∈Sfor all t≥0. Furthermore, there exists a (bounded) open neighborhood O⊃Sand an X-dependent event E(X) on which the statements of Assumption 2.7(a–b) hold with a uniform constant CLSI>0 for every prior g∈ {g(·, α) : α∈O}. Under this condition, we will show dimension-free convergence of the prior parameter {αt}t≥0to a fixed point of the replica-symmetric free energy. To state this result, let us recall the free energy bF(α) =−1 dlogPg(·,α)(y|X) of the linear model from (11), and denote by F(α) =−Eg∗,ω∗logPg(·,α),ω(Y)−1 2 δ+ log2π ω−δlog2πδ ω+ (1−δ)ω ω∗+ωσ2ω ω∗−2 (49) its asymptotic limit prescribed by Corollary 2.10,
both viewed as a function of α∈O⊂RK. Here, the fixed points ( ω, ω∗)≡(ω(α), ω∗(α)) implicitly depend on αand are well-defined by Theorem 2.9 for all α∈O. Recalling the law Pg∗,ω∗;g,ω(θ∗, θ) from (40), let us abbreviate this law with g≡g(·, α) and fixed points (ω(α), ω∗(α)) as Pα≡Pg∗,ω∗(α);g(·,α),ω(α). (50) We write θ∼Pαas shorthand for the θ-marginal of ( θ∗, θ)∼Pα. We write also ⟨·⟩αfor the expectation under the posterior law Pg(·,α)(θ|X,y) in the linear model. The following lemma strengthens Corollary 2.10(b) to convergence of F(α) and its gradient, uniformly over the compact subset S⊂Ocontaining {αt}t≥0, and shows also that a true prior parameter α∗∈Omust be a global minimizer of F(α). 13 Lemma 2.12. Suppose Assumptions 2.1, 2.2(b), and 2.11 hold, and let S⊂O⊂RKbe the domains of Assumption 2.11. Then (a)bF(α)andF(α)are continuously differentiable on Owith gradients ∇bF(α) =− 1 ddX j=1∇αlogg(θj, α) α,∇F(α) =−Eθ∼Pα[∇αlogg(θ, α)]. (51) (b) Almost surely lim n,d→∞sup α∈S|bF(α)−F(α)|= 0, lim n,d→∞sup α∈S∥∇bF(α)− ∇F(α)∥2= 0. (c) If g∗(·) =g(·, α∗)for some α∗∈O, then F(α∗) = inf α∈OF(α). We now show that under the uniform LSI condition of Assumption 2.11, the DMFT solution {αt}t≥0 converges as t→ ∞ to a critical point α∞of the asymptotic free energy F(α) (with possible additional regularization by R(α)). Consequently, for a dimension-independent time horizon T > 0 and large system sizes n, d, the learned prior parameter bαTwill be close to α∞, and the Langevin sample bθTwill have entrywise statistics close to those in the scalar Gaussian convolution model described by Theorem 2.5 for the limiting prior g(·) =g(·, α∞). Theorem 2.13. Suppose Assumptions 2.1, 2.2(b), and 2.11 hold. Let O⊂RKbe as in Assumption 2.11, define F(α)forα∈Oby (49), and denote Crit = {α∈S:∇F(α) +∇R(α) = 0}. Consider the empirical Bayes Langevin dynamics (9–10), and let {αt}t≥0be the deterministic approximation of{bαt}t≥0in the solution of the DMFT system in Theorem 2.3(b). 
Then {αt}t≥0satisfies lim t→∞dist(αt,Crit) = 0 . In particular, if all points of Crit are isolated, then there exists a limit α∞= lim t→∞αt∈Crit. (52) Consequently, for any ε >0, there exists a time horizon T:=T(ε)>0independent of n, dsuch that for any fixed t > T (ε), the solution {(θt,bαt)}t≥0of (9–10) satisfies almost surely lim sup n,d→∞∥bαt−α∞∥2< ε, lim sup n,d→∞W21 ddX j=1δ(θ∗ j,θt j),Pα∞ < ε. (53) The proof of Theorem 2.13 is given in Section 5. Supposing that g∗(·) =g(·, α∗) for a true prior parameter α∗∈O, in settings where R(α) = 0 and the critical point α∞∈Crit of F(α) is unique, Lemma 2.12(c) ensures that α∞=α∗, and Theorem 2.13 then provides a guarantee for estimation of this true prior parameter as n, d→ ∞ . In general, F(α) may have multiple critical points. Theorem 2.13 ensures convergence to a point α∞∈Crit that is specified deterministically by the initial conditions of Assumption 2.1(d), and successful learning of α∗may require multiple initializations from different starting values of bα0. We discuss both types of settings in the following examples. 2.4.2 Examples We develop some further implications of Theorem 2.13 in a few specific examples of parametric models for g(·, α). We explore also via numerical
simulation the convergence of ( θt,bαt), the landscape of the replica- symmetric free energy F(α), and the nature of its critical point set Crit in a few settings where a posterior log-Sobolev inequality may not hold. 14 Example 2.14. Consider the Gaussian prior g(θ, α) =rω0 2πexp −ω0 2(θ−α)2 with varying mean α∈Rand a fixed and known prior variance ω−1 0, and suppose g∗(θ) =g(θ, α∗). Consider the empirical Bayes dynamics driven by G(α,P) =Eθ∼P[∂αlogg(θ, α)] in (27), with no regularizer (i.e. R(α)≡0). We verify in Section 5.2 that Assumptions 2.2(b) and 2.11 hold for this example, for a subset O⊂R containing α∗. The posterior mean in the Gaussian convolution model (37) is given explicitly by ⟨θ⟩g(·,α),ω=ω0 ω0+ωα+ω ω0+ωy. Then the condition α∈Crit is 0 = ∇F(α) =Eθ∼Pα[ω0(α−θ)], i.e. α=Eθ∼Pα[θ] =Eg∗,ω∗[⟨θ⟩g(·,α),ω] =Eg∗,ωhω0 ω0+ωα+ω ω0+ωyi =ω0 ω0+ωα+ω ω0+ωα∗, so Crit consists of the unique critical point α∗. Theorem 2.13 then holds with α∞=α∗, i.e. over a dimension- independent time horizon, bαtconverges to α∗(in the limit n, d→ ∞ followed by t→ ∞ as described in Theorem 2.13), and the empirical distribution of coordinates of the Langevin sample θtconverges to that of the posterior distribution for the true prior N(α∗, ω−1 0). Example 2.15. Consider more generally a log-concave location prior g(θ, α) = exp −f(θ−α) where α∈Randf:R→Ris a fixed strongly convex function, such that fis thrice continuously- differentiable with H¨ older-continuous third derivative, and f′(0) = 0 , C ≥f′′(x)≥c0,|f′′′(x)| ≤C for some constants C, c0>0 and all x∈R. Suppose again g∗(θ) =g(θ, α∗), and consider the empirical Bayes dynamics driven by G(α,P) =Eθ∼P[∂αlogg(θ, α)] with no regularizer. We verify in Section 5.2 that Assumptions 2.2(b) and 2.11 hold for this example, for a subset O⊂R containing α∗. Furthermore, we show in Section 5.2 via an adaptation of the Brascamp-Lieb argument of [8, Theorem 3] that F(α) must be strongly convex on O. 
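The critical-point computation of Example 2.14 reduces to the affine fixed-point relation α = (ω0/(ω0 + ω))α + (ω/(ω0 + ω))α∗, which is a contraction whose unique fixed point is α∗. A minimal numerical sketch of this iteration, with illustrative values of ω0, ω, and α∗ (not taken from the paper):

```python
# Numerical check of the Example 2.14 fixed-point relation
#   alpha = omega0/(omega0+omega)*alpha + omega/(omega0+omega)*alpha_star,
# whose unique solution is alpha = alpha_star.
# The values of omega0, omega, alpha_star below are illustrative only.

def fixed_point_map(alpha, omega0, omega, alpha_star):
    """One application of the critical-point equation 0 = grad F(alpha)."""
    return omega0 / (omega0 + omega) * alpha + omega / (omega0 + omega) * alpha_star

def iterate(alpha0, omega0, omega, alpha_star, n_iter=200):
    """Iterate the contraction; converges geometrically at rate omega0/(omega0+omega)."""
    alpha = alpha0
    for _ in range(n_iter):
        alpha = fixed_point_map(alpha, omega0, omega, alpha_star)
    return alpha

if __name__ == "__main__":
    omega0, omega, alpha_star = 1.0, 2.0, 0.7
    alpha_inf = iterate(5.0, omega0, omega, alpha_star)
    print(alpha_inf)  # converges to alpha_star = 0.7
```

The contraction factor ω0/(ω0 + ω) < 1 is exactly why Crit consists of the single point α∗ in this example.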
Hence, Crit consists again of the unique critical point α=α∗, and Theorem 2.13 holds for α∞=α∗. We next consider two canonical examples where the prior g(θ, α) is a Gaussian mixture model that is not log-concave in θ, and where the landscape of F(α) is also not necessarily convex in α. We will check the uniform log-Sobolev condition of Assumption 2.11 and also characterize analytically the landscape of the free energy F(α) for sufficiently large δ= limn d, and explore by simulation the learning dynamics and free energy landscape for some smaller values of δ. The sub-level sets of F(α) may not be bounded in these examples. To confine {αt}t≥0to a bounded subset ofRK, we introduce an additional regularizer: Fix a radius D > 0, and let B(D) ={α∈RK:∥α∥2< D} be the open ball of radius D. For a smooth function r: [0,∞)→[0,∞) having bounded derivatives of all orders and satisfying r(x) = 0 for all x∈[0, D], r(x)≥x−Dfor all x≥D+ 1, r′(x)>0 for all x > D, (54) we fix the regularizer R:RK→Ras R(α) =r(∥α∥2). (55) 15 Note that R(α) = 0 for all α∈ B(D), so adding such a regularizer does not change the critical points α∈Crit∩B(D). We
show in Proposition 5.2 of Section 5.2 that adding such a regularizer indeed confines the dynamics of {αt}t≥0to a bounded domain. We will study analytically a large- δlimit under a reparametrization of the noise variance σ2bys2=σ2/δ, corresponding to a rescaling of the regression design Xto have entries of variance 1 /nand a rescaling of the noise εto have entries N(0, s2). The setting δ→ ∞ with fixed s2>0 is a limiting regime in which each coordinate of the posterior distribution of θdoes not contract around its mode, the Bayes-optimal mean-squared-error for estimating θremains bounded away from 0, and the landscape of F(α) approaches (up to an additive constant) the log-likelihood landscape in the scalar Gaussian convolution model y=θ+z where θ∼g(·, α) and z∼ N(0, s2). We denote by Gs2(α) =−Eg∗,s−2[logPg(·,α),s−2(y)] (56) the negative population log-likelihood in this model as a function of the prior parameter α, when the true distribution of yis given by y=θ∗+zwith θ∗∼g∗. Proposition 2.16. Suppose Assumptions 2.1 and 2.2(b) hold, and the regularizer R(α)is given by (54–55) withα0∈ B(D). Fix s2=σ2/δ, and define Crit G={α∈ B(D) :∇Gs2(α) = 0}. Then, for any s2>0, there exists a constant δ0:=δ0(s2)>0and a function ι: [δ0,∞)→(0,∞)with limδ→∞ι(δ) = 0 such that if δ > δ 0, then Assumption 2.11 holds. Furthermore, 1. Each point of Crit∩B(D)belongs to a ball of radius ι(δ)around some point of Crit G. 2. For each point α∈Crit Gwhere ∇2Gs2(α)is non-singular, there is exactly one point of Critin the ball of radius ι(δ)around α. In particular, if g∗=g(·, α∗)for some α∗∈ B(D), and if α∗is the unique point of Crit Gand∇2Gs2(α∗)is non-singular, then α∗is also the unique point of Crit∩B(D). Example 2.17. Consider a K-component Gaussian mixture prior g(θ, α) =KX k=1pkrωk 2πexp −ωk 2(θ−αk)2 with fixed mixture weights p1, . . . , p Kand variances ω−1 1, . . . , ω−1 K, parametrized by the mixture means α∈RK. 
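As a quick sanity check on the mixture prior just defined, the density g(θ, α) integrates to one for any choice of mixture means α. A minimal sketch with illustrative weights, precisions, and means (not values from the paper):

```python
import math

def mixture_density(theta, means, weights, omegas):
    """K-component Gaussian mixture g(theta, alpha) with precisions omega_k."""
    return sum(
        p * math.sqrt(w / (2 * math.pi)) * math.exp(-0.5 * w * (theta - m) ** 2)
        for p, m, w in zip(weights, means, omegas)
    )

def integrate(means, weights, omegas, lo=-30.0, hi=30.0, n=20001):
    """Trapezoidal integral of the density over a wide grid."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        f = mixture_density(x, means, weights, omegas)
        total += f * (h / 2 if i in (0, n - 1) else h)
    return total

if __name__ == "__main__":
    # Illustrative parameters (not from the paper).
    weights = [0.3, 0.5, 0.2]
    omegas = [1.0, 4.0, 0.25]  # precisions omega_k = 1/variance_k
    print(integrate([-2.0, 0.0, 3.0], weights, omegas))  # ~1.0 for any means
```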
Let us suppose that g∗(θ) = g(θ, α∗) for some α∗ ∈ RK, and the variances ω−1 1, . . . , ω−1 K are distinct. We consider the empirical Bayes dynamics driven by G(α,P) = Eθ∼P[∇α log g(θ, α)] − ∇R(α), where R(α) is a regularizer of the form (54–55) for which α0, α∗ ∈ B(D). We verify in Section 5.2 that Assumption 2.2(b) holds. Then, for fixed s2 > 0 and all sufficiently large δ, Proposition 2.16 ensures that the confinement and log-Sobolev conditions of Assumption 2.11 also hold, and the proposition further establishes a 1-to-1 correspondence between the critical points of F and the (non-singular) critical points of Gs2(α) in B(D). We note that, here, Gs2(α) is the negative population log-likelihood in the Gaussian mixture model
Pg(·,α),s−2(y) = ΣK k=1 pk · (2π(ω−1 k + s2))−1/2 exp(−(y − αk)2/(2(ω−1 k + s2))) (57)
having the same mixture means α ∈ RK as the prior, and elevated mixture variances ω−1 k + s2. The optimization landscape of Gs2(α) is well-studied in the literature, see e.g. [70–73], and in general Gs2(α) may have local minimizers in B(D) that are different from α∗. In such settings, Proposition 2.16 implies that Crit must also have critical points different from α∗ for large δ.
We depict in Figure 1 a simulation of the landscape of F(α) and the dynamics (9–10) across a range of values δ ∈ [0.25, 4], in a simple setting of (1/2)N(α1, 1) + (1/2)N(α2, 0.25) with K = 2 mixture components and true mixture means α∗ = (−1, 1). The Almeida-Thouless condition for stability of the replica-symmetric phase was computed in [31, Eq. (25)] to be (in our notation)
1 − (ω2/δ) Eg∗,ω∗[Varg,ω[θ]2] ≥ 0 (58)
where g(·) = g(·, α) and (ω, ω∗) = (ω(α), ω∗(α)). We have verified that this condition holds at each tested δ > 0 and parameter α ∈ RK depicted in Figure 1, and thus we conjecture that the depicted replica-symmetric free energy function F(α) is indeed the correct asymptotic limit of −(1/d) log Pg(·,α)(y|X) as n, d → ∞ (even in settings where our assumption of a log-Sobolev inequality for the posterior law may not hold). We observe, not only for large δ but across a range of values δ ∈ [0.25, 4], that the landscape F(α) has two local minimizers, one fixed at the true parameter α∗ = (−1, 1) and a second minimizer α† whose location depends on δ.
Figure 1: Simulations for the Gaussian mixture prior model (1/2)N(α1, 1) + (1/2)N(α2, 0.25) of Example 2.17, with true mixture means α∗ = (−1, 1) and linear model noise variance σ2 = δs2 for s = 0.5. Empirical Bayes Langevin dynamics is run for a single instance (X, y) with max(n, d) = 5000, initialization θ0 j iid∼ N(0, 1), and an Euler-Maruyama discretization of the dynamics. (a–e) Landscape of the replica-symmetric free energy F(α) plotted (for visual clarity) as log(F(α) − F(α∗) + 10−3), for δ ∈ {4, 2, 1, 0.5, 0.25}. Two stable fixed points of 0 = ∇F(α) are depicted in red, with star indicating the true parameter α∗ = (−1, 1) and circle indicating a second fixed point α† near (1, −1). Sample paths {bαt}t≥0 from two different initial states bα0 are shown in blue and green. (f) Mean-squared-error (1/d)∥θt − θ∗∥2 2 across iterations for these same two initial states, at δ = 1. The predicted value for a posterior sample θ ∼ Pg(·,α)(· | X, y) is (1/d)∥θ − θ∗∥2 2 ≈ mse(α) + mse∗(α), depicted by dashed lines for α ∈ {α†, α∗}.
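The Euler-Maruyama discretization used in these simulations can be illustrated on a toy problem: unadjusted Langevin dynamics dθ = (log g)′(θ)dt + √2 db targeting a one-dimensional mixture density. As a stand-in target we use the Figure 1 prior shape (1/2)N(−1, 1) + (1/2)N(1, 0.25); this is only an illustration of the discretization, not the paper's posterior dynamics (9–10):

```python
import math
import random

def score(theta):
    """d/dtheta log g for the mixture 0.5*N(-1,1) + 0.5*N(1,0.25)."""
    comps = [(-1.0, 1.0), (1.0, 0.25)]  # (mean, variance)
    dens = [0.5 * math.exp(-0.5 * (theta - m) ** 2 / v) / math.sqrt(2 * math.pi * v)
            for m, v in comps]
    g = sum(dens)
    return sum(d * (-(theta - m) / v) for d, (m, v) in zip(dens, comps)) / g

def euler_maruyama(n_particles=400, n_steps=1500, dt=0.005, seed=0):
    """Unadjusted Langevin: theta <- theta + score(theta)*dt + sqrt(2*dt)*xi."""
    rng = random.Random(seed)
    thetas = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    for _ in range(n_steps):
        thetas = [t + score(t) * dt + math.sqrt(2 * dt) * rng.gauss(0.0, 1.0)
                  for t in thetas]
    return thetas

if __name__ == "__main__":
    sample = euler_maruyama()
    print(sum(sample) / len(sample))  # close to the mixture mean 0
```

The paper's simulations apply the same scheme jointly to (θt, bαt) in dimension d; the toy above only shows the per-coordinate update rule.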
As δ decreases, this second minimizer approaches (1, −1) (characterizing a prior law with mixture means matching those of g∗ = g(·, α∗) but with the mixture variances reversed), and the free energy difference F(α†) − F(α∗) approaches 0, indicating that it becomes increasingly difficult to distinguish α† from the true parameter α∗. The dynamics {bαt}t≥0 follow a smooth trajectory to one of α† or α∗, depending on the initial state bα0.
Figure 2: Simulations for the Gaussian mixture prior model p1(α)N(0, 0.04) + p2(α)N(0, 1) + p3(α)N(0, 25) of Example 2.18, with true weights p(α∗) = (0.6, 0.2, 0.2) and linear model noise variance σ2 = δs2 for s = 0.2. Empirical Bayes Langevin dynamics are run for two initializations bα0 with random θ0 j iid∼ N(0, 1) (black and blue), and an initialization bα0 near α∗ with ground truth θ0 j = θ∗ j (green). The remaining setup is the same as in Figure 1. (a–e) Landscape of the replica-symmetric free energy F(α) for δ ∈ {4, 2, 1, 0.75, 0.5}, plotted as log(F(α) − F(α∗) + 10−3) in the coordinates p(α) on the simplex. The unique critical point p(α∗) is depicted as the red star. Sample paths of {p(bαt)}t≥0 are shown in green, black, and blue. (f)
Mean- squared-error1 d∥θt−θ∗∥2 2across iterations for these same three initial states, at δ= 0.75. The predicted value of mse( α∗) + mse ∗(α∗) for a posterior sample is depicted by the dashed line. Example 2.18. Consider a K+ 1-component Gaussian mixture prior g(θ, α) =KX k=0pk(α)rωk 2πexp −ωk 2(θ−µk)2 , p k(α) =eαk eα0+. . .+eαK with fixed means µ0, . . . , µ Kand variances ω−1 0, . . . , ω−1 K, parametrized instead by the mixture weights pk(α) =eαk/(eα0+. . .+eαK). Let us suppose that g∗(θ) =g(θ, α∗) for some α∗∈RK+1, and the pa- rameter pairs ( µ0, ω0), . . . , (µK, ωK) are distinct. We again consider the dynamics driven by G(α,P) =Eθ∼P[∇αlogg(θ, α)]− ∇R(α), where R(α) is a regularizer of the form (54–55) such that α0, α∗∈ B(D). This parametrization is over- parametrized by a single parameter — however, defining the K-dimensional linear subspace E={α∈ RK+1:α0+. . .+αK= 0}, a direct calculation (c.f. Section 5.2) verifies that ∇αlogg(θ, α)∈Eand ∇R(α)∈Eifα∈E. Thus, initializing bα0∈Eensures bαt∈Efor all t≥0, and we may apply our preceding results upon identifying Eisometrically with RK. We verify in Section 5.2 that Assumption 2.2(b) holds. Then again for fixed s2>0 and all large δ, Proposition 2.16 ensures that Assumption 2.11 also holds, and there is a 1-to-1 correspondence between the critical points of FandGs2(α) onB(D). Here, Gs2(α) is the negative population log-likelihood of the Gaussian mixture model Pg(·,α),s−2(y) =KX k=0pk(α)s 1 2π(ω−1 k+s2)exp −1 2(ω−1 k+s2)(y−µk)2 . (59) 18 Letting S={(p0, . . . , p K) :p0+. . .+pK= 1, p0, . . . , p K>0}be the open probability simplex, the mapping α∈E7→p(α)∈Sis a 1-to-1 smooth parametrization with smooth inverse, and the function Gs2is strictly convex in the parametrization by ( p0, . . . , p K)∈S. Thus p∗=p(α∗)∈Sis the unique critical point where ∇pGs2= 0, and the Hessian ∇2 pGs2is nonsingular at p∗. This implies that α∗∈Eis also the unique critical point where ∇αGs2= 0, and ∇2 αGs2is also non-singular at α∗. 
So for large δ, Proposition 2.16 ensures that α∗must be the unique point of Crit ∩B(D). Figure 2 depicts the simulated landscape of F(α) and dynamics (9–10) in a scaled-mixture-of-normals model p1(α)N(0,0.04) + p2(α)N(0,1) +p3(α)N(0,25) with all components having mean 0, across a range of values δ∈[0.5,4], and with true weights p(α∗) = (0 .6,0.2,0.2). (We have again verified that the Almeida- Thouless stability condition (58) holds at each depicted δ >0 and parameter value α∈RKin these figures.) We observe for all tested values δ∈[0.5,4] that α∗is the unique local minimizer and critical point of F(α). However, as δdecreases, the landscape of F(α) flattens around α∗along a direction representing a family of priors g(·, α) having the same first two moments as g(·, α∗), reflecting that the problem of learning g(·, α∗) beyond its second moment becomes increasingly ill-conditioned. The learned parameter {bαt}t≥0successfully converges to α∗from several different initial states bα0when δ≥0.75, with mixing of Langevin dynamics becoming increasingly slower as δdecreases. For δ= 0.5, the learned parameter {bαt}t≥0fails to converge toα∗under the tested time horizon
from random initializations of θ0, but does converge to α∗under a ground-truth initialization θ0=θ∗andbα0close to α∗. 3 Analysis of approximately-TTI DMFT systems In this section, we prove Theorem 2.5 on the equilibrium properties of the solution to the DMFT equations under an assumption of approximate time-translation-invariance (from an out-of-equilibrium initialization). We assume throughout this section that Assumptions 2.1 and 2.2(a) hold, and that the solution to the DMFT system in Theorem 2.3(a) approximating the dynamics (7) with the fixed prior g(·) is approximately-TTI in the sense of Definition 2.4. We denote by {θt}t≥0,{ηt}t≥0, and Cθ, Cη, Rθ, Rηthe components of this DMFT solution. 3.1 Analysis of θ-equation We first derive, from analysis of the evolution (23) for {θt}t≥0, a representation of ctti θ(0), ctti θ(∞), cθ(∗) in terms of ctti η(0), ctti η(∞), assuming a condition ctti η(0)−ctti η(∞)< δ/σ2which ensures long-time stability of {θt}t≥0under (23). This condition will be checked in our subsequent analysis of the evolution of {ηt}t≥0. Lemma 3.1. Suppose ctti η(0)−ctti η(∞)< δ/σ2. Set ω=δ/σ2−(ctti η(0)−ctti η(∞))andω∗=ω2/ctti η(∞). Then ctti θ(0) =Eg∗,ω∗⟨θ2⟩g,ω, ctti θ(∞) =Eg∗,ω∗⟨θ⟩2 g,ω, c θ(∗) =Eg∗,ω∗[⟨θ⟩g,ωθ∗]. (60) The main idea of the proof is to apply the explicit form of ctti θin (33) together with its fluctuation dissipation relation with rtti θin (36) to approximate Cθ, Rθat large times by correlation and response functions C(M) θ, R(M) θthat admit an interpretation as the effect of marginalization over auxiliary variables (xt 1, . . . , xt M) in a Markovian joint evolution of ( θt, xt 1, . . . , xt M) conditional on θ∗. In contrast to the original high-dimensional dynamics of {θt}t≥0inRd, here Mdoes not depend on ( n, d), and the dynamics of {xt 1, . . . , xt M}will be decoupled given {θt}t≥0. This decoupling allows us to provide a simple explicit form for the θ-marginal of the stationary distribution of ( θ, x1, . . . 
, x M) conditional on θ∗, which in the limit M→ ∞ will match the conditional distribution θ|θ∗under the limit law Pg∗,ω∗;g,ω(θ, θ∗). To implement this idea, we will exhibit a coupling of the processes {θt}t≥0driven by Cθ, Rθand {θt M,T 0}t≥0driven by C(M) θ, R(M) θfrom time T0onwards, and then analyze the convergence of {θt M,T 0}t≥0 under the equivalent Markovian representation of its dynamics. The main technical challenge is to ensure ei- ther that the discretization error ε(M) obtained by approximating Cθ, RθbyC(M) θ, R(M) θdoes not compound exponentially over time, or that the convergence time of {θt M,T 0}t≥0in the equivalent Markovian dynamics is independent of the approximation dimension M. We will take the first approach here, by adapting ideas around sticky and reflection couplings developed in [74,75] to a setting of non-Markovian DMFT dynamics for{θt}t≥0and{θt M,T 0}t≥0. 19 3.1.1 Comparison with an auxiliary process Let us fix a positive integer Mand define two sequences {am}M m=0and{cm}M m=1by am=ι+m√ Mform= 0, . . . , M,c2 m am=µη([am−1, am)) for m= 1, . . . , M (61) where µηis given in Definition 2.4. We set R(M) η(τ) =MX
m=1c2 me−amτ, C(M) η(t, s) =MX m=1c2 m am(e−am|t−s|−e−am(t+s)) +ctti η(∞). A direct calculation of the covariance shows that C(M) η(t, s) =E[ut Mus M] for ut M=z+MX m=1cmZt 0e−am(t−s)√ 2 dbs m, (62) where z∼ N(0, ctti η(∞)) and {bt 1}t≥0, . . . ,{bt M}t≥0are standard Brownian motions independent of each other and of z. In particular, C(M) η(t, s) is a positive-semidefinite covariance kernel on [0 ,∞). For convenience, let us set U(θ, θ∗) =−δ σ2(θ−θ∗) + (log g)′(θ), so the DMFT equation (23) reads dθt=h U(θt, θ∗) +Zt 0Rη(t, s)(θs−θ∗)ds+uti dt+√ 2 dbt. (63) Let{ut M}t≥0be a centered Gaussian process with covariance kernel C(M) η, defined in the probability space of{ut, θt}t≥0and independent of θ∗. Fixing a time T0>0, let{˜bt}t≥T0be a standard Brownian motion ini- tialized at ˜bT0= 0, independent of {ut}t≥0,θ∗, and{θt}t∈[0,T]. We consider an auxiliary process {θt M,T 0}t≥0 defined by θt M,T 0=θtfort∈[0, T0], dθt M,T 0=h U(θt M,T 0, θ∗) +Zt 0R(M) η(t−s)(θs M,T 0−θ∗)ds+ut Mi dt+√ 2 d˜btfort≥T0.(64) We proceed to construct a coupling of {ut M}t≥0with{ut}t≥0and of {˜bt}t≥T0with{bt−bT0}t≥T0defining the DMFT solution {θt}t≥0, to yield a coupling of {θt M,T 0}t≥T0with{θt}t≥T0. Lemma 3.2. For any M, T 0, T > 0, there exists a coupling of {ut M}t≥0and{ut}t≥0such that sup t∈[T0,T0+T]E(ut M−ut)2≤ε(M) +√ T ε(T0), where ε(M)does not depend on T0, Tandε(T0)does not depend on M, T , and limM→∞ε(M) = 0 and limT0→∞ε(T0) = 0 . Proof. Define the covariance kernel C(∞) η(t, s) =R∞ ι e−a|t−s|−e−a(t+s) µη(da) +ctti η(∞) representing the M→ ∞ limit of (62). We will couple Gaussian processes with covariance kernels ( C(M) η, C(∞) η) and with (C(∞) η, Cη) respectively. Coupling of (C(M) η, C(∞) η).LetM′> M be any positive integer for which√ M′is an integer multiple of√ M, and let {˜am}M′ m=0and{˜cm}M′ m=1be the sequences as defined above with M′in place of M. Note then that the grid points {aj}M j=0are a subset of the grid points {˜ai}M′ i=0. 
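The grid nesting used above can be made concrete: with am = ι + m/√M, the grid for M is contained in the refined grid for M′ whenever √M′ is an integer multiple of √M. A small sketch (restricting to perfect-square M so the grid points are exact rationals, and with an illustrative value of ι):

```python
from fractions import Fraction

def grid(M, iota):
    """Grid points a_m = iota + m/sqrt(M), m = 0..M, for a perfect square M."""
    root = int(round(M ** 0.5))
    assert root * root == M, "use a perfect square M so the spacing is exact"
    return [Fraction(iota) + Fraction(m, root) for m in range(M + 1)]

if __name__ == "__main__":
    iota = Fraction(1, 2)  # illustrative value of iota
    coarse, fine = grid(4, iota), grid(16, iota)  # sqrt(16) = 2 * sqrt(4)
    print(set(coarse) <= set(fine))  # True: the coarse grid is a subset
```

Each coarse interval [a_{j-1}, a_j) is thereby partitioned by fine grid points, which is what the index sets Ij in the proof exploit.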
Let ut M′=z+M′X i=1˜ciZt 0e−˜ai(t−s)√ 2 d˜bs i (65) 20 where z∼ N(0, ctti η(∞)) and {˜bt 1}t≥0, . . .{˜bt M′}t≥0are standard Brownian motions independent of each other and of z. Then (62) shows that {ut M′}t≥0has covariance C(M′) η. Now, for each j= 1, . . . , M , let Ij={i:aj−1<˜ai≤aj}, bt j=X i∈Ij˜ci˜bt i sX i∈Ij˜c2 i and set ut M=z+MX j=1cjZt 0e−aj(t−s)√ 2 dbs j. Then{bt 1}t≥0, . . . ,{bt M}t≥0are standard Brownian motions independent of each other and of z, so (62) shows also that {ut}t≥0is a Gaussian process with covariance C(M) η. We may now bound E[(ut M−ut M′)2]≤4EhX i:˜ai>aM˜ciZt 0e−˜ai(t−s)d˜bs i2i + 4EhMX j=1X i∈Ij˜ciZt 0e−˜ai(t−s)d˜bi s−MX j=1cjZt 0e−aj(t−s)dbs j2i . Since aM=ι+√ M, the first term equalsP i:˜ai>ι+√ M˜c2 i/˜ai=P i:˜ai>ι+√ Mµη([˜ai−1,˜ai)), which is at most some ε1(M) satisfying lim M→∞ε1(M) = 0, by finiteness of the measure µη. The second term is bounded as EhMX j=1X i∈Ij˜ciZt 0e−˜ai(t−s)d˜bs i−MX j=1cjZt 0e−aj(t−s)dbs j2i =EhMX j=1X i∈IjZt 0 ˜cie−˜ai(t−s)−cj˜ciqP ℓ∈Ij˜c2 ℓe−aj(t−s) d˜bs i2i =MX j=1X i∈IjZt 0 ˜cie−˜ais−cj˜ciqP ℓ∈Ij˜c2 ℓe−ajs2 ds≤2(I + II) , where I =MX
j=1X i∈IjZt 0˜c2 i e−˜ais−e−ajs2ds, II =MX j=1X i∈IjZt 0˜c2 i 1−cjqP ℓ∈Ij˜c2 ℓ2 e−2ajsds. Let ∆ = 1 /√ Mbe the spacing of {aj}M j=0. Then, since |˜ai−aj| ≤∆ and ˜ ai≤ajfor all i∈Ij, I≤MX j=1X i∈IjZt 0˜c2 ie−2aj−1ss2∆2ds≤MX j=1Zt 0X i∈Ij˜c2 i ˜ai aje−2aj−1ss2∆2ds(∗)=MX j=1c2 jZt 0e−2aj−1ss2∆2ds where we useP i∈Ij˜c2 i/˜ai=P i∈Ijµη([˜ai−1,˜ai)) = µη([aj−1, aj)) = c2 j/ajin (∗). Evaluating this integral, for an absolute constant C >0, I≤C∆2MX j=1c2 j a3 j−1≤C∆2 ι2MX j=1c2 j aj≤C∆2 ι2µη([ι,∞))≤ε2(M) where lim M→∞ε2(M) = 0. For II, since c2 j=P ℓ∈Ijaj ˜aℓ˜c2 ℓ, we have |c2 j−P ℓ∈Ij˜c2 ℓ| ≤∆P ℓ∈Ij˜c2 ℓ/˜aℓ= ∆c2 j/aj, and hence II≤MX j=1X i∈Ij˜c2 i 2aj(cj−qP ℓ∈Ij˜c2 ℓ)2 P ℓ∈Ij˜c2 ℓ≤MX j=1(c2 j−P ℓ∈Ij˜c2 ℓ)2 2ajc2 j≤∆2 2MX j=1c2 j a3 j≤∆2 2ι2MX j=1c2 j aj≤ε3(M) 21 where lim M→∞ε3(M) = 0. In summary, we have shown that supt≥0E[(ut M−ut M′)2]≤ε(M) for some ε(M)→0 asM→ ∞ . Now note that for any fixed T0andT,{ut M′}t∈[T0,T0+T]has covariance kernel C(M′) η that converges uniformly to C(∞) ηover [ T0, T0+T] asM′→ ∞ . It is direct to check from its definitions that C(∞) ηsatisfies the condition (235) of Lemma D.1. So by Lemma D.1, there exists a coupling of {ut M′}t∈[T0,T0+T]and a Gaussian process {ut ∞}t∈[T0,T0+T]with covariance {C(∞) η(t, s)}s,t∈[T0,T0+T]such that supt∈[T0,T0+T]E(ut M′−ut ∞)2→0 asM′→ ∞ . Combining this with the above bound supt≥0E[(ut M−ut M′)2]≤ε(M) and taking M′→ ∞ shows supt∈[T0,T0+T]E(ut M−ut ∞)2≤ε(M). Coupling of (C(∞) η, Cη).By the approximation (31) for Cηin Definition 2.4, we have for any t≥s≥0 that |Cη(t, s)−C(∞) η(t, s)| ≤ε(s) +Z∞ ιe−a(t+s)dµη(a), so there exists a (different) function ε(T0) with lim T0→∞ε(T0) = 0 such that sup s,t∈[T0,T0+T]|Cη(t, s)−C(∞) η(t, s)| ≤ε(T0). Here C(∞) ηsatisfies (235) for a constant C0>0 depending only on µη, so by Lemma D.1, there exists a coupling of {ut ∞}t∈[T0,T0+T]with{ut}t∈[T0,T0+T], the latter having covariance {Cη(t, s)}s,t∈[T0,T0+T], for which supt∈[T0,T0+T]E(ut ∞−ut)2≤C(p T ε(T0) +ε(T0)) for a constant C >0. 
Combining these two couplings yields a coupling of {ut M}t∈[T0,T0+T]with{ut}t∈[T0,T0+T]such that supt∈[T0,T0+T]E(ut M−ut)2≤ε(M) +C(p T ε(T0) +ε(T0)), and extending this arbitrarily to a full cou- pling of {ut M}t≥0and{ut}t≥0and adjusting the value of ε(T0) shows the lemma. Lemma 3.3. Suppose ctti η(0)−ctti η(∞)< δ/σ2. Then for any M, T 0, T > 0, there exists a coupling of the processes {θt}t≥0and{θt M,T 0}t≥0defined by (63) and (64) such that sup t∈[0,T0+T]E|θt−θt M,T 0| ≤ε(M) +√ T ε(T0), where ε(M)does not depend on T0, Tandε(T0)does not depend on M, T , and limM→∞ε(M) = 0 and limT0→∞ε(T0) = 0 . Proof. To ease notation, let us write ˜θt=θt M,T 0and ˜ut=ut M. We couple {ut}t≥0and{˜ut}t≥0according to Lemma 3.2. By definition, {θt}t∈[0,T0]and{˜θt}t∈[0,T0]coincide up to time T0. To construct the coupling of θtand˜θtfor times t∈[T0, T0+T], we adapt the ideas of [74,75]: Fix some ε >0, and let h: [0,∞)→[0,1] be a function such that h(0) = 0, h(x)>0 for x >0,h(x) = 1 for x≥ε, and both x7→h(x) and x7→p 1−h(x)2are Lipschitz. Let {bt}t≥T0and{˜bt}t≥T0be two standard Brownian motions initialized at bT0=˜bT0= 0, independent of each other and of {ut}t≥0,{˜ut}t≥0,θ∗, and{θt}t∈[0,T0].
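One concrete choice of the interpolation function h above (a sketch; the paper only requires the stated properties, not this specific form) is h(x) = sin(πx/(2ε)) for x ≤ ε and h(x) = 1 for x ≥ ε, for which both h and √(1 − h²) = cos(πx/(2ε))·1{x ≤ ε} are Lipschitz:

```python
import math

EPS = 0.1  # the threshold epsilon > 0 (illustrative value)

def h(x):
    """Interpolates reflection (h=1) and synchronous (h=0) coupling regimes:
    h(0)=0, h>0 on (0, eps), h=1 for x >= eps, Lipschitz with constant pi/(2*eps)."""
    return 1.0 if x >= EPS else math.sin(math.pi * x / (2 * EPS))

def h_perp(x):
    """sqrt(1 - h(x)^2); also Lipschitz for this choice of h."""
    return 0.0 if x >= EPS else math.cos(math.pi * x / (2 * EPS))

if __name__ == "__main__":
    print(h(0.0), h(EPS / 2), h(2 * EPS))  # 0 at the origin, 1 beyond eps
```

The sine/cosine pair avoids the square-root singularity at x = ε that a piecewise-linear h(x) = min(x/ε, 1) would create in √(1 − h²).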
We define a coupling of {θt}t≥T0and{˜θt}t≥T0by the joint evolutions, for t≥T0, dθt=h U(θt, θ∗) +Zt 0Rη(t, s)(θs−θ∗)ds+uti dt+h(|θt−˜θt|)√ 2 dbt+q 2(1−h(|θt−˜θt|)2) d˜bt, d˜θt=h U(˜θt, θ∗) +Zt 0R(M) η(t−s)(˜θs−θ∗)ds+ ˜uti dt−h(|θt−˜θt|)√ 2 dbt+q 2(1−h(|θt−˜θt|)2) d˜bt. Thus the coupling of the Brownian motions defining these processes is by reflection at times t≥T0where |θt−˜θt| ≥ε, and it transitions to a synchronous coupling as |θt−˜θt| →0. L´ evy’s characterization of Brownian motion shows that the resulting marginal laws of {θt}t≥T0and{˜θt}t≥T0indeed coincide with those of (63) and (64). 22 Let us write as shorthand ξt=θt−˜θt vt=U(θt, θ∗) +Zt 0Rη(t, s)(θs−θ∗)ds+ut ˜vt=U(˜θt, θ∗) +Zt 0R(M) η(t−s)(˜θs−θ∗)ds+ ˜ut. We derive a SDE for |ξt|that is analogous to [75, Eq. (66)]: For any t≥T0, since d ξt= (vt−˜vt)dt+ 2√ 2h(|ξt|)dbt, Itˆ o’s formula yields d(ξt)2= 2ξt[(vt−˜vt)dt+ 2√ 2h(|ξt|)dbt] + 8h(|ξt|)2dt. For a small constant β >0, let Sβ: [0,∞)→[0,∞) be a twice continuously-differentiable approximation to the square root, satisfying Sβ(x) =√xforx≥β, sup0≤x≤β|Sβ(x)| ≤C, sup0≤x≤β|S′ β(x)| ≤Cβ−1/2, and sup0≤x≤β|S′′ β(x)| ≤Cβ−3/2for a universal constant C >0. (A specific construction is given in [75, Eq. (68)].) Then again by Itˆ o’s formula, for any t≥T0, dSβ((ξt)2) =S′ β((ξt)2)h 2ξt(vt−˜vt)dt+ 4√ 2ξth(|ξt|)dbt+ 8h(|ξt|)2dti + 16S′′ β((ξt)2)(ξt)2h(|ξt|)2dt.(66) We may take the limit β→0 via a dominated convergence argument: Applying S′ β(x) =x−1/2/2 for x≥β and the bound |S′ β(x)| ≤Cβ−1/2forx < β , we have |S′ β((ξt)2)ξt(vt−˜vt)| ≤max( C,1/2)|vt−˜vt|. Since vt−˜vtis continuous and hence integrable over [ T0, t], by dominated convergence lim β→0Zt T0S′ β((ξt)2)ξt(vt−˜vt)dt=Zt T0lim β→0S′ β((ξt)2)ξt(vt−˜vt)dt=Zt T0sign(ξt) 2(vt−˜vt)dt. 
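One explicit choice of the smooth square-root approximation Sβ above is a quadratic on [0, β) matching √x in value and in first and second derivative at x = β; this is a sketch, and the specific construction in [75, Eq. (68)] may differ:

```python
import math

def S(x, beta):
    """C^2 approximation of sqrt(x): equals sqrt(x) for x >= beta, and on
    [0, beta) is the quadratic 3*sqrt(beta)/8 + 3x/(4*sqrt(beta)) - x^2/(8*beta^{3/2}),
    which matches sqrt in value, S' and S'' at x = beta. On [0, beta):
    |S| <= sqrt(beta), |S'| <= (3/4)/sqrt(beta), |S''| = 1/(4*beta^{3/2})."""
    if x >= beta:
        return math.sqrt(x)
    rb = math.sqrt(beta)
    return 3 * rb / 8 + 3 * x / (4 * rb) - x * x / (8 * beta * rb)

if __name__ == "__main__":
    beta = 0.01
    print(S(beta, beta), math.sqrt(beta))  # both 0.1: smooth matching at beta
```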
Applying the Lipschitz bound h(|ξt|)≤h(0) + C|ξt|=C|ξt|and a similar dominated convergence argument for the other terms of (66), we obtain in the limit β→0 that for t≥T0, d|ξt|= sign( ξt)(vt−˜vt)dt+ 2√ 2 sign( ξt)h(|ξt|)dbt which is the analogue of [75, Eq. (66)]. (There is no term corresponding to a local time of ξtat 0 that would instead arise under a pure reflection coupling.) Now let A: [0,∞)→[0,∞) be any continuously-differentiable function, and let f: [0,∞)→[0,∞) be any continuously-differentiable function with absolutely continuous first derivative (for which Itˆ o’s formula applies, c.f. [76, Theorem 71]), and satisfying f′(r)∈[0,1] and f′′(r)≤0 for all r≥0. Set rt=|ξt|+Zt 0A(t−s)|ξs|ds. Then d rt= d|ξt|+ [A(0)|ξt|+Rt 0A′(t−s)|ξs|ds] dt. Applying Itˆ o’s formula and taking expectations gives, fort≥T0, d dtEf(rt) =Eh f′(rt) sign(ξt)(vt−˜vt) +A(0)|ξt|+Zt 0A′(t−s)|ξs|ds + 4f′′(rt)h(|ξt|)2i . (67) Let us define κ: [0,∞)→Rby κ(r) = infn−(logg)′(x) + (log g)′(y) x−y:|x−y|=ro (68) so that [ −(logg)′(θt) + (log g′)(˜θt)]/ξt≥κ(|ξt|). Let us set also ∆t=Zt 0 Rη(t, s)−R(M) η(t−s) ·E|θs−θ∗| ds+E|ut−˜ut|. (69) 23 Then, under our assumption f′(r)∈[0,1], we have the bound Eh f′(rt) sign( ξt)(vt−˜vt)i =Eh f′(rt) sign( ξt) −δ σ2ξt− −(logg)′(θt) + (log g)′(˜θt) +Zt 0R(M) η(t−s)(θs−˜θs)ds+Zt 0 Rη(t, s)−R(M) η(t−s) (θs−θ∗)ds+ (ut−˜ut)i ≤ −δ σ2E[f′(rt)|ξt|]−E[f′(rt)κ(|ξt|)|ξt|] +Eh f′(rt)Zt 0R(M) η(t−s)|ξs|dsi + ∆ t. Applying this to (67), for all t≥T0, d dtEf(rt)≤Eh −δ σ2−A(0) f′(rt)|ξt| −f′(rt)κ(|ξt|)|ξt|+ 4f′′(rt)h(|ξt|)2i +Eh f′(rt)Zt 0 A′(t−s) +R(M) η(t−s) |ξs|dsi + ∆ t. (70) Let us now choose the functions A(·) and f(·). For some small enough c0∈(0, ι), let A(0) =δ σ2−c0, A (τ) =A(0)e−c0τ−Zτ 0e−c0(τ−s)MX m=1c2 me−ams ds. This choice of A(τ) satisfies
A′(τ) =−c0A(τ)−PM m=1c2 me−amτ, i.e. A′(τ) +R(M) η(τ) =−c0A(τ). (71) We will require that A(τ)≥0 for all τ≥0. To check this condition, observe that explicitly evaluating the integral defining A(τ) yields ec0τA(τ) =A(0)−MX m=1c2 m am−c0 1−e−(am−c0)τ ≥δ σ2−c0−MX m=1c2 m am−c0≥δ σ2−c0−MX m=1c2 m am·ι ι−c0, the last inequality using am≥ι≥c0. Further boundingPM m=1c2 m/am=PM m=1µη([am−1, am))≤ µη([ι,∞)) = ctti η(0)−ctti η(∞), this shows ec0τA(τ)≥δ σ2−c0−ι ι−c0(ctti η(0)−ctti η(∞)). Then by the given assumption that ctti η(0)−ctti η(∞)< δ/σ2, we obtain A(τ)≥0 for a sufficiently small choice of c0∈(0, ι) and all τ≥0, as desired. Applying (71) and A(0) = δ/σ2−c0into (70), and recalling the definition rt=|ξt|+Rt 0A(t−s)|ξs|ds, we get for all t≥T0that d dtEf(rt)≤Eh −c0f′(rt)rt−f′(rt)κ(|ξt|)|ξt|+ 4f′′(rt)h(|ξt|)2 | {z } :=F(rt,ξt)i + ∆ t. (72) We next proceed to bound the above quantity E[F(rt, ξt)]. Observe that under the convexity-at-infinity condition for −logg(θ) in Assumption 2.2(a), there must exist constants R0, κ0>0 for which κ(r)≥ −κ0for all r≥0, κ (r)≥0 for all r≥R0. (73) Let us denote κ−(r) = max( −κ(r),0). Then −κ(r)r≤κ−(r)randκ−(r)r∈[0, κ0R0] for all r≥0. Recall the constant ε >0 for which h(x) = 1 when x≥ε, and define K: (ε,∞)→[0, κ0R0] by K(r) = sup t≥T0Eh κ−(|ξt|)|ξt| rt=r,|ξt|> εi . Then define f:R→Rby f(0) = 0 , f′(r) = exp −1 4Zmax( r,2κ0R0/c0) 0K(s)ds forr≥0. 24 Note that f′(r) is absolutely continuous as required, with f′(r)∈[c1,1] for all r≥0 where c1= exp( −κ2 0R2 0 2c0), andf′′(r) =−1 4K(r)f′(r)1{r <2κ0R0/c0} ≤0. By these definitions, for any r > ε andt≥T0, we have E[F(rt, ξt)|rt=r,|ξt|> ε]≤Eh −c0f′(rt)rt+f′(rt)κ−(|ξt|)|ξt|+ 4f′′(rt)h(|ξt|)2 rt=r,|ξt|> εi ≤Eh −c0f′(r)r+f′(r)K(r) + 4f′′(r) rt=r,|ξt|> εi . When r≥2κ0R0/c0we may apply f′′(r) = 0 and K(r)≤κ0R0≤c0r/2, to bound this above by −(c0/2)f′(r)r. When r∈(ε,2κ0R0/c0) we may instead apply f′′(r) =−1 4K(r)f′(r) to see that this equals −c0f′(r)r. 
Thus for all r > ε andt≥T0, E[F(rt, ξt)|rt=r,|ξt|> ε]≤ −(c0/2)f′(r)r. For any r≥0, on the event |ξt| ≤ε(which occurs with probability 1 when rt≤εsince A(t)≥0), let us use f′′(rt)h(|ξt|)2≤0 and −f′(rt)κ(|ξt|)|ξt| ≤εκ0to bound E[F(rt, ξt)|rt=r,|ξt| ≤ε]≤ −c0f′(r)r+εκ0. Combining these cases and taking the full expectation over 1{|ξt|> ε}and over rt, we get for all t≥T0that E[F(rt, ξt)]≤ −(c0/2)E[f′(rt)rt] +εκ0. Applying f′(rt)≥c1andrt≥f(rt) and putting this bound into (72), for all t∈[T0, T0+T], d dtEf(rt)≤ −(c0c1/2)Ef(rt) +εκ0+ max t∈[T0,T0+T]∆t. Since f(rT0) =f(0) = 0, this differential inequality yields for all t∈[T0, T0+T], Ef(rt)≤ εκ0+ max t∈[T0,T0+T]∆t1−e−(c0c1/2)(t−T0) c0c1/2≤2 c0c1 εκ0+ max t∈[T0,T0+T]∆t . Since also rt≤c1f(rt) from the lower bound f′(r)≥c1, this gives Ert≤(2/c0)(εκ0+ max t∈[T0,T0+T]∆t). Applying that A(t)≥0 for all t≥0, we have |ξt| ≤rt, so this gives finally max t∈[T0,T0+T]E|θt−˜θt|= max t∈[T0,T0+T]E|ξt| ≤(2/c0) εκ0+ max t∈[T0,T0+T]∆t . We may choose εsuch that εκ0≤max t∈[T0,T0+T]∆t. Thus, to conclude the proof, it remains to show sup t∈[T0,T0+T]∆t≤ε(M) +√ T ε(T0). (74) We note that under Definition 2.4, E|θt−θ∗| ≤[E(θt−θ∗)2]1/2= (Cθ(t, t)−2Cθ(t,∗) +Eθ∗2)1/2≤Cfor a constant C >0 and all t≥0. Then for the first term of ∆ t, by property (35) of Definition 2.4, Zt 0|Rη(t, s)−R(M) η(t−s)| ·E|θs−θ∗|ds ≤CZt 0|Rη(t, s)−rtti η(t−s)|ds+CZt 0|rtti η(t−s)−R(M) η(t−s)|ds ≤Cε(t) +Zt 0|rtti η(t−s)−R(M) η(t−s)|ds where here lim t→∞ε(t)
= 0. Recalling the sequences {am}M m=0,{cm}M m=1defining R(M) η, rtti η(τ)−R(M) η(τ) =Z∞ ιae−aτµη(da)−MX m=1c2 me−amτ =MX m=1Zam am−1(ae−aτ−ame−amτ)µη(da) +Z a:a>aMae−aτµη(da), 25 hence using the fact that h(a) =ae−aτsatisfies |h′(a)| ≤2e−aτ/2and|am−am−1|= 1/√ M, Zt 0 rtti η(τ)−R(M) η(τ) dτ≤MX m=1Zam am−1Zt 0|ae−aτ−ame−amτ|dτ µη(da) +Z a:a>aMZt 0ae−aτdτ µη(da) ≤MX m=1Zam am−1Zt 0(2/√ M)e−am−1τ/2dτ µη(da) +µη([aM,∞)) ≤4√ MMX m=1Zam am−11 am−1µη(da) +µη([aM,∞)) ≤4 ι√ Mµη([ι,√ M)) +µη([√ M,∞))≤ε(M), where lim M→∞ε(M) = 0. This bounds the first term of ∆ tbyε(M) +Ct·ε(t). Bounding also the second termE|ut−˜ut|of ∆ tby Lemma 3.2, we have sup t∈[T0,T0+T]∆t≤ε(M) +√ T ε(T0) + sup t∈[T0,T0+T]Ct·ε(t) which implies (74) upon adjusting ε(T0). This completes the proof. Adapting part of the previous argument, we record here a uniform bound on E(θt)4for the solution {θt}t≥0of the DMFT equation. Lemma 3.4. Suppose ctti η(0)−ctti η(∞)< δ/σ2. Then supt≥0E(θt)4≤Candsupt≥0E(θt M,T 0)4≤Cfor a constant C >0and all M, T 0>0. Proof. We prove the statement for {θt}t≥0. Let A: [0,∞)→[0,∞) be defined by A(0) =δ σ2−c0, A (τ) =A(0)e−c0τ−Zt 0e−c0(τ−s)rtti η(s)ds for a small enough constant c0∈(0, ι). Here, by the conditions of Definition 2.4, rtti η(s) =−ctti η′(s) =R∞ ιae−asdµη(a), and the same argument as in the preceding proof verifies that inf τ∈[0,∞)A(τ) is bounded below by a positive constant for a sufficiently small choice of c0∈(0, ι) and all τ≥0. Letf:R→[0,∞) be a smooth approximation to the absolute value, satisfying f(x) =|x|for all |x| ≥1, and 1+ f′(x)·x≥f(x)≥ |x|,|f′(x)| ≤1, and |f′′(x)| ≤Cfor all x∈Rand an absolute constant C >0. Let ¯θt=θt−θ∗, and set rt=f(¯θt) +Rt 0A(t−s)f(¯θs)ds. Then by the DMFT equation (23) and Itˆ o’s formula, d¯θt=h −δ σ2¯θt+ (log g)′(θt) +Zt 0Rη(t, s)¯θsds+uti dt+√ 2 dbt, drt=f′(¯θt)d¯θt+f′′(¯θt)dt+h A(0)f(¯θt) +Zt 0A′(t−s)f(¯θs)dsi dt, d(rt)4= 4(rt)3drt+ 12( rt)2f′(¯θt)2dt. 
Applying rt≥0 and the bounds f′(¯θt)¯θt≥f(¯θt)−1,|f′(¯θt)| ≤1,|¯θs| ≤f(¯θs), and |f′′(¯θt)| ≤Cfrom the definition of f(·), this gives d dtE(rt)4≤E" 4(rt)3 −δ σ2[f(¯θt)−1] +f′(¯θt)(log g)′(θt) +Zt 0|Rη(t, s)|f(¯θs)ds+|ut|+C +A(0)f(¯θt) +Zt 0A′(t−s)f(¯θs)ds! + 12( rt)2# . 26 Then, using A(0) = δ/σ2−c0andA′(t−s) +rtti η(t−s) =−c0A(t−s) from the definition of A(·), d dtE(rt)4≤E" 4(rt)3 −c0rt+δ σ2+f′(¯θt)(log g)′(θt)+Zt 0|Rη(t, s)−rtti η(t−s)|f(¯θs)ds+|ut|+C! +12( rt)2# . When |¯θt| ≥1, we must have f′(¯θt) = sign( ¯θt) =|¯θt|/(θt−θ∗). Recalling the function κ(r) from (68), let us bound in this case f′(¯θt)[(log g)′(θt)−(logg)′(θ∗)] =−|¯θt| ·−(logg)′(θt) + (log g)′(θ∗) θt−θ∗≤ −κ(|¯θt|)|¯θt| ≤κ0R0, where κ0R0is the deterministic upper bound for −κ(r)r. For |¯θt| ≤1, let us apply instead the Lipschitz bound |f′(¯θt)[(log g)′(θt)−(logg)′(θ∗)]| ≤Lwhere Lis the Lipschitz constant of (log g)′under Assumption 2.2(a). We also apply |Rη(t, s)−rtti η(t−s)| ≤ε(t) from Definition 2.4, andRt 0f(¯θs)ds≤rt/infτ∈[0,∞)A(τ) by the definition of rt, where we recall that inf τ∈[0,∞)A(τ) is bounded below by a positive constant. Thus, for some constant C′>0, this yields d dtE(rt)4≤ −4c0E(rt)4+C′Eh ε(t)(rt)4+ (rt)3(1 +|(logg)′(θ∗)|+|ut|) + (rt)2i . Since utis a centered Gaussian variable, we note that E(ut)4= 3[E(ut)2]2= 3C(t, t)2which is bounded uniformly for all t≥0 under Definition 2.4. Also E[|(logg)′(θ∗)|4] is finite by the Lipschitz continuity of (logg)′and finiteness of moments of θ∗under Assumption 2.1. Then by H¨ older’s inequality, E[(rt)3(1 + |(logg)′(θ∗)|+|ut|) + (rt)2]≤C(E[(rt)4]3/4+E[(rt)4]1/2) for some C > 0. Thus,
for some C, T, R > 0 sufficiently large depending on C′, c0, the above implies d dtE(rt)4≤CE(rt)4,d dtE(rt)4≤ −4c0E(rt)4+c0E(rt)4<0 whenever t≥TandE(rt)4≥R. This implies that supt≥0E(rt)4is bounded by a constant depending only on C, T, R . Then supt≥0E(θt)4is also bounded since θt=¯θt+θ∗and|¯θt| ≤f(¯θt)≤rt. The argument to bound E(θM,T 0)4is the same upon replacing Rη(t, s) and utbyR(M) η(t, s) and ut Mfor s, t≥T0, and we omit the details for brevity. 3.1.2 Convergence of the auxiliary process Extending the definition (40) of Pg∗,ω∗;g,ω, letP⊗2 g∗,ω∗;g,ωdenote the law of a triple ( θ∗, θ, θ′) where θ∗, θare generated according to (41) defining Pg∗,ω∗;g,ωandθ′is a second independent copy of θdrawn from the posterior measure conditional on y. Lemma 3.5. Suppose ctti η(0)−ctti η(∞)< δ/σ2. Fix any M, T 0>0, set ω(M)=δ/σ2−PM m=1c2 m/amand ω(M) ∗= (ω(M))2/ctti η(∞), and let {θt M,T 0}t≥0be the process (64). Then for any T, T′>0, W1 P(θ∗, θT0+T M,T 0, θT0+T+T′ M,T 0),P⊗2 g∗,ω(M) ∗;g,ω(M) ≤C√ M(e−cT+e−cT′) for some constants C, c > 0not depending on M, T 0, T, T′. Proof. Letz∼ N(0, ctti η(∞)) and let {˜bt}t≥T0and{bt m}t≥0form= 1, . . . , M beM+ 1 standard Brownian motions. These are all independent of each other, of θ∗, and of {θt}t∈[0,T]. We note that the law of {θt M,T 0}t≥0 defined by (64) coincides with the marginal law of {θt M,T 0}t≥0in the joint process θt M,T 0=θtfort∈[0, T0], dθt M,T 0=h −δ σ2(θt M,T 0−θ∗) + (log g)′(θt M,T 0) +z+MX m=1cmxt mi dt+√ 2 d˜btfort > T 0, (75) dxt m= [−amxt m+cm(θt M,T 0−θ∗)]dt+√ 2 dbt mfor 1≤m≤M, t≥0 (76) 27 with initial conditions x0 1=. . .=x0 M= 0. Indeed, given {θt M,T 0}t≥0, the equations (76) for {xt m}t≥0are linear and have the explicit solutions xt m=cmZt 0e−am(t−s)(θs M,T 0−θ∗)ds+Zt 0e−am(t−s)√ 2 dbs m. (77) Substituting these solutions into (75) gives (64), upon identifying ut M=z+PM m=1cmRt 0e−am(t−s)√ 2 dbs m. 
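As a brief aside, the identification of the memory term with the auxiliary linear coordinates can be sanity-checked numerically. The sketch below (toy values of a, c and an arbitrary smooth input u standing in for θ − θ∗; none of these are the paper's constants) verifies that the local ODE coordinate reproduces the convolution integral, which is the mechanism behind the explicit solution (77):

```python
import numpy as np

# Toy sanity check (a, c and the input u are placeholders, not the paper's
# constants): the linear coordinate solving dx/dt = -a*x + c*u(t), x(0) = 0,
# equals the memory integral c * int_0^t exp(-a*(t-s)) u(s) ds, which is how
# the auxiliary equations trade a convolution kernel for a Markovian ODE.
a, c = 1.7, 0.8
dt, N = 1e-4, 20000
ts = dt * np.arange(N)
T = dt * N
u = np.sin(3 * ts)  # arbitrary smooth input standing in for theta - theta*

x = 0.0
for k in range(N):  # Euler integration of the local ODE
    x += dt * (-a * x + c * u[k])

# Direct Riemann sum of the convolution formula at time T
x_formula = c * dt * np.sum(np.exp(-a * (T - ts)) * u)

assert abs(x - x_formula) < 1e-3
```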
Here{ut M}t≥0is a centered Gaussian process independent of θ∗and{˜bt}t≥T0, with covariance kernel exactly C(M) η(t, s) by (62). Thus the marginal law of {θt M,T 0}t≥0coincides with the definition in (64). Fort≥T0, letxt= (θt M,T 0, xt 1, . . . , xt M) and bt= (˜bt, bt 1, . . . , bt M). Conditional on θ∗andz, the evolution of{xt}t≥T0defined by (75–76) is a standard (Markovian) Langevin diffusion given by dxt=−∇H(xt|θ∗, z)dt+√ 2 dbt with Hamiltonian H(x|θ∗, z) =H(θ, x1, . . . , x M|θ∗, z) (78) =1 2δ σ2−MX m=1c2 m am |{z } =ω(M) (θ−θ∗)2−logg(θ)−z θ+MX m=1am 2cm am(θ−θ∗)−xm2 =ω(M) 2(θ−θ∗−z/ω(M))2−logg(θ) +MX m=1am 2cm am(θ−θ∗)−xm2 + const. , (79) for an additive constant not depending on x= (θ, x1, . . . , x M). Note that the given condition ctti η(0)− ctti η(∞)< δ/σ2implies that ω(M)>0 strictly. Convergence of {xt}t≥T0in Wasserstein-1 to a stationary law then follows from the results of [74]: For anyx= (θ, x1, . . . , x M) and x′= (θ′, x′ 1, . . . , x′ M), we have (x−x′)⊤(∇H(x|θ∗, z)− ∇H(x′|θ∗, z)) = ( θ−θ′)(−(logg)′(θ) + (log
g)′(θ′)) + ( x−x′)⊤L(x−x′) where L= δ σ2−c1. . .−cM −c1a1 ...... −cM aM . By the positivity of the Schur complement ω(M)=δ/σ2−PM m=1c2 m/am≥δ/σ2−(ctti η(0)−ctti η(∞)) and of am≥ι, this matrix Lis strictly positive-definite, with smallest eigenvalue bounded away from 0 independently of M. Then, recalling the function κ(r) from (68), (x−x′)⊤(∇H(x|θ∗, z)− ∇H(x′|θ∗, z))≥(θ−θ′)2κ(|θ−θ′|) +c∥x−x′∥2 2 for a constant c >0. Recalling also that κ(r) is positive for all r > R 0and some R0>0, and considering separately the cases where |θ−θ′| ≤R0and|θ−θ′|> R 0, this verifies that inf ∥x−x′∥2=r(x−x′)⊤(∇H(x|θ∗, z)− ∇H(x′|θ∗, z)) ∥x−x′∥2 2> c′ for all r > R′ 0and some constants c′, R′ 0>0. Then by [74, Corollary 3], the Langevin diffusion {xt}t≥T0has the unique stationary law P∞(x)∝exp(−H(x|θ∗, z)). (80) Let us write x∞∼P∞and⟨f(x∞)⟩for a sample and Gibbs average under this stationary law. Let us write alsoW1(·) for the Wasserstein-1 distance conditional on θ∗, z, and P(xT0+T|xT0=x) for the conditional 28 law of xT0+Tgiven θ∗, zand the initial condition xT0=x. Then also by [74, Corollary 2 and 3], there exist constants C, c > 0 such that for any T >0, W1(P(xT0+T|xT0=x),P∞)≤Ce−cTW1(δx,P∞)≤Ce−cT(∥x∥2+⟨∥x∞∥2⟩). Similarly, for any T′>0, W1(P(xT0+T+T′|xT0+T=x),P∞)≤Ce−cT′(∥x∥2+⟨∥x∞∥2⟩). Combining the two conditional couplings that attain these Wasserstein-1 bounds, and taking the average over the sample path {xt}t≥T0(which we denote by ⟨f(xt)⟩, still conditional on θ∗, z), W1(P(xT0+T, xT0+T+T′),(P∞)⊗2)≤C(e−cT+e−cT′)(⟨∥xT0∥2⟩+⟨∥xT0+T∥2⟩+⟨∥x∞∥2⟩) where ( P∞)⊗2is the law of two independent samples from P∞. The explicit form (77) for each {xt m}t≥0 implies that ⟨|xt m|⟩ ≤cmRt 0e−ι(t−s)(⟨|θs M,T 0|⟩+|θ∗|)ds+ (am)−1/2, and hence ⟨∥xt∥2⟩ ≤C√ M 1 +|θ∗|+Zt 0e−ι(t−s)⟨|θs M,T 0|⟩ds for a constant C > 0. Then, taking the full expectation over θ∗, zand applying E⟨|θt M,T 0|⟩ ≤Cby Lemma 3.4, we get E⟨∥xt∥2⟩ ≤C′√ Mfor a constant C′>0 and all t≥0. 
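As an aside, the exponential Wasserstein contraction quoted from [74] can be illustrated in the simplest strongly log-concave case. The sketch below (a toy random positive-definite matrix L, not the paper's arrowhead matrix) couples two Langevin paths through a shared Brownian motion and checks that their distance contracts at rate λmin(L):

```python
import numpy as np

# Minimal sketch (toy matrix L, not the paper's specific matrix): under a
# synchronous coupling (shared Brownian motion), two Langevin diffusions with
# quadratic Hamiltonian H(x) = x^T L x / 2 satisfy d(x - x') = -L (x - x') dt,
# so the coupling distance contracts at rate lambda_min(L) -- the mechanism
# behind the exponential Wasserstein-1 bounds quoted from [74].
rng = np.random.default_rng(0)
M = 4
A = rng.standard_normal((M, M))
L = A @ A.T + np.eye(M)               # strictly positive definite
lam_min = np.linalg.eigvalsh(L).min()

dt, nsteps = 1e-3, 2000
x, xp = rng.standard_normal(M), rng.standard_normal(M)
d0 = np.linalg.norm(x - xp)
for _ in range(nsteps):
    noise = np.sqrt(2 * dt) * rng.standard_normal(M)  # shared by both paths
    x = x + dt * (-L @ x) + noise
    xp = xp + dt * (-L @ xp) + noise
T = dt * nsteps
assert np.linalg.norm(x - xp) <= d0 * np.exp(-lam_min * T) * 1.05
```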
Then, applying this above gives EW1(P(xT0+T, xT0+T+T′),(P∞)⊗2)≤C√ M(e−cT+e−cT′) (81) for some (different) constants C, c > 0. Finally, note that the stationary law P∞(x) defined by (80) with Hamiltonian (79) describes a joint law (conditional on θ∗, z) of ( θ, x1, . . . , x M) where xm|θis Gaussian and independent across m= 1, . . . , M , andθhas marginal law given exactly by P(θ|y) in the Gaussian convolution model (37) with obser- vation y=θ∗+z/ω(M). Here, the noise variable z/ω(M)is Gaussian with variance ctti η(∞)/(ω(M))2= (ω(M) ∗)−1, so the joint law of θ∗and the ( θ, θ′)-marginals of the conditional law ( P∞)⊗2given ( θ∗, z) is pre- cisely P⊗2 g∗,ω(M) ∗;g,ω(M). Then, taking the ( θ, θ′)-marginals of the coupling (conditional on θ∗, z) that attains W1(P(xT0+T, xT0+T+T′),(P∞)⊗2) and combining with the identity coupling of θ∗, we have W1(P(θ∗, θT0+T M,T 0, θT0+T+T′ M,T 0),P⊗2 g∗,ω(M) ∗;g,ω(M))≤W1(P(xT0+T, xT0+T+T′),(P∞)⊗2). Taking the full expectation over θ∗, zon both sides and applying the bound (81) shows the lemma. Lemma 3.6. Suppose ctti η(0)−ctti η(∞)< δ/σ2. For any M > 0, let ω(M)=δ/σ2−MX m=1c2 m/am, ω(M) ∗= (ω(M))2/ctti η(∞), ω=δ/σ2−(ctti η(0)−ctti η(∞)), ω∗=ω2/ctti η(∞). Then limM→∞W1(P⊗2 g∗,ω(M) ∗;g,ω(M),P⊗2 g∗,ω∗;g,ω) = 0 . Proof. Let ( θ∗, θ, θ′)∼P⊗2 g∗,ω∗;g,ω, i.e. θ, θ′are two independent draws from the posterior law
Pg,ω(θ|y) in the scalar Gaussian convolution model (37) where y=θ∗+ω∗−1/2zandz∼ N(0,1). Let ⟨·⟩g,ωbe average over θ, θ′conditional on θ∗, z, and let Fbe the class of 1-Lipschitz functions f(θ∗, θ, θ′). Then, for any f∈ F, Eg∗,ω∗⟨f(θ∗, θ, θ′)⟩g,ω=ER f(θ∗, θ, θ′) exp(−ω 2[(θ∗+ω∗−1/2z−θ)2+ (θ∗+ω∗−1/2z−θ′)2])g(θ)g(θ′)d(θ, θ′)R exp(−ω 2[(θ∗+ω∗−1/2z−θ)2+ (θ∗+ω∗−1/2z−θ′)2])g(θ)g(θ′)d(θ, θ′) 29 where Eon the right side is over θ∗∼g∗andz∼ N(0,1). Writing ⟨·⟩for⟨·⟩g,ωandκ2for its associated posterior covariance, the above is continuously-differentiable in ( ω, ω∗) with ∂ωE⟨f(θ∗, θ, θ′)⟩=Eh κ2 f(θ∗, θ, θ′),−1 2[(θ∗+ω∗−1/2z−θ)2+ (θ∗+ω∗−1/2z−θ′)2]i ∂ω∗E⟨f(θ∗, θ, θ′)⟩=Eh κ2 f(θ∗, θ, θ′),ωz ω3/2 ∗[(θ∗+ω∗−1/2z−θ) + (θ∗+ω∗−1/2z−θ′)]i By the 1-Lipschitz bound for fand the identity Var X=1 2E[(X−X′)2] where X′is an independent copy ofX, we have κ2(f(θ∗, θ, θ′), f(θ∗, θ, θ′))≤C(κ2(θ, θ) +κ2(θ′, θ′)) for an absolute constant C > 0. Then, applying Cauchy-Schwarz to κ2(·) above, we get that ( ω, ω∗)7→Eg∗,ω∗⟨f(θ∗, θ, θ′)⟩g,ωis locally Lipschitz- continuous uniformly over f∈ F. Since lim M→∞PM m=1c2 m/am=µη([ι,∞)) = ctti η(0)−ctti η(∞), we have limM→∞(ω(M), ω(M) ∗) = (ω, ω∗). Then this local Lipschitz continuity implies as desired lim M→∞W1(P⊗2 g∗,ω(M) ∗;g,ω(M),P⊗2 g∗,ω∗;g,ω) = lim M→∞sup f∈F Eg∗,ω∗⟨f(θ∗, θ, θ′)⟩g,ω−Eg∗,ω(M) ∗⟨f(θ∗, θ, θ′)⟩g,ω(M) = 0. We now complete the proof of Lemma 3.1. Proof of Lemma 3.1. By Lemmas 3.3, 3.5, and 3.6, for any M, T 0, T, T′>0, W1(P(θ∗, θT0+T, θT0+T+T′),P⊗2 g∗,ω∗;g,ω)≤ε(M) + 2√ T+T′ε(T0) +C√ M(e−cT+e−cT′). Setting T=T′=t, choosing T0≡T0(t) so that lim t→∞T0(t) =∞and lim t→∞√ 2t ε(T0(t)) = 0, and taking t→ ∞ followed by M→ ∞ , this shows lim t→∞W1(P(θ∗, θT0(t)+t, θT0(t)+2t),P⊗2 g∗,ω∗;g,ω) = 0 . In particular, we have the weak convergence in distribution of ( θ∗, θT0(t)+t, θT0(t)+2t) toP⊗2 g∗,ω∗;g,ω. 
Lemma 3.4 implies that ( θ∗, θT0(t)+t, θT0(t)+2t) is uniformly bounded in L4and hence uniformly integrable in L2, so this implies lim t→∞W2(P(θ∗, θT0(t)+t, θT0(t)+2t),P⊗2 g∗,ω∗;g,ω) = 0 . (82) Then, under Definition 2.4 and by definition of the law P⊗2 g∗,ω∗;g,ω, we have as desired ctti θ(0) = lim t→∞Cθ(T0(t) +t, T0(t) +t) = lim t→∞E[(θT0(t)+t)2] =Eg∗,ω∗⟨θ2⟩g,ω, ctti θ(∞) = lim t→∞Cθ(T0(t) +t, T0(t) + 2t) = lim t→∞E[θT0(t)+tθT0(t)+2t] =Eg∗,ω∗⟨θ⟩2 g,ω, cθ(∗) = lim t→∞Cθ(T0(t) +t,∗) = lim t→∞E[θT0(t)+tθ∗] =Eg∗,ω∗[⟨θ⟩g,ωθ∗]. 3.2 Analysis of η-equation We next derive from an analysis of the evolution (25) for {ηt}t≥0a representation of ctti η(0), ctti η(∞) in terms ofctti θ(0), ctti θ(∞), cθ(∗). Lemma 3.7. It holds that ctti η(0) =δ σ4Eθ∗2+σ2+ctti θ(∞)−2cθ(∗) 1 +σ−2(ctti θ(0)−ctti θ(∞))2+ctti θ(0)−ctti θ(∞) 1 +σ−2(ctti θ(0)−ctti θ(∞)) , (83) ctti η(∞) =δ σ4Eθ∗2+σ2+ctti θ(∞)−2cθ(∗) (1 +σ−2(ctti θ(0)−ctti θ(∞)))2, (84) and in particular ctti η(0)−ctti η(∞)< δ/σ2. 30 The argument is similar to the analysis of {θt}t≥0, where we may approximate the dynamics of {ηt}t≥0 at large times by a Markovian joint evolution of a system ( ηt, xt 1, . . . , xt M). Our argument here is simpler than before, as the dynamics of ( ηt, xt 1, . . . , xt M) will be linear, from which we may explicitly analyze the convergence of ηtand show that it is independent of M; thus we will apply a
simple Gronwall argument to bound the propagation of the discretization error ε(M) over time. 3.2.1 Comparison with an auxiliary process We again fix a positive integer M, and define {am}M m=0and{cm}M m=1by am=ι+m√ Mform= 0, . . . , M,c2 m am=µθ([am−1, am)) with µθnow instead of µη. For convenience, let us introduce ξt=ηt+w∗−εandvt=−wt+w∗−ε, so the DMFT equation (25) for {ηt}t≥0is equivalently ξt=−1 σ2Zt 0Rθ(t, s)ξsds+vt. (85) Here,{vt}t≥0is a centered Gaussian process with covariance E[vtvs] =Cθ(t, s)−Cθ(t,∗)−Cθ(s,∗)+E(θ∗)2+ σ2. We set R(M) θ(τ) =MX m=1c2 me−amτ, C(M) θ(t, s) =MX m=1c2 m am(e−am|t−s|−e−am(t+s)) +ctti θ(∞) and define an auxiliary process {ξt M,T 0}t≥0by ξt M,T 0=ξtfort∈[0, T0) ξt M,T 0=−1 σ2Zt 0R(M) θ(t−s)ξs M,T 0ds+vt Mfort≥T0 (86) where {vt M}t≥0is a centered Gaussian process with covariance E[vt Mvs M] =C(M) θ(t, s)−2cθ(∗)+E(θ∗)2+σ2, defined in the probability space of {ξt}t≥0. (We check in the proof of Lemma 3.10 below that this is indeed a positive-semidefinite covariance kernel.) We note that the process {ξt M,T 0}t≥0may be discontinuous at T0; this is inconsequential for our subsequent analysis. Lemma 3.8. For any M, T 0, T > 0, there exists a coupling of {ξt}t≥0and{ξt M,T 0}t≥0such that sup t∈[0,T0+T]E(ξt−ξt M,T 0)2≤CeCT(ε(M) +√ T ε(T0)) where ε(M)does not depend on T0, Tandε(T0)does not depend on M, T , and limM→∞ε(M) = 0 and limT0→∞ε(T0) = 0 . Proof. Applying the approximation (30) and arguments analogous to Lemma 3.2, we have that sup s,t∈[T0,T0+T]|E[vt Mvs M]−E[vtvs]| ≤ sup s,t∈[T0,T0+T]|C(M) θ(t, s)−Cθ(t, s)|+|cθ(∗)−Cθ(t,∗)|+|cθ(∗)−Cθ(s,∗)| ≤ε(M) +ε(T0), and hence there exists a coupling of {vt M,T 0}t≥0and{vt}t≥0such that sup t∈[T0,T0+T]E(vt−vt M)2≤ε(M) +√ T ε(T0). 31 We bound ξt−ξt M,T 0under this coupling of {vt}t≥0with{vt M}t≥0: Let us write ˜ξt=ξt M,T 0. 
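Before carrying out the coupling bound, here is a toy numerical check of the integral form of Gronwall's inequality on which the argument rests (alpha and C below are arbitrary placeholder constants, not the paper's):

```python
import numpy as np

# Toy check of the integral-form Gronwall inequality: if
#   E(t) <= alpha + C * int_0^t E(s) ds,
# then E(t) <= alpha * exp(C*t). We discretize the borderline equality case
# and confirm the exponential envelope.
alpha, C = 0.3, 1.4
dt, N = 1e-4, 10000
ts = dt * np.arange(N + 1)
E = np.empty(N + 1)
integral = 0.0
for k in range(N + 1):
    E[k] = alpha + C * integral
    integral += dt * E[k]

assert np.all(E <= alpha * np.exp(C * ts) * (1 + 1e-3))
assert abs(E[-1] - alpha * np.exp(C * ts[-1])) < 1e-2 * alpha
```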
We have ξt=˜ξt fort∈[0, T0), while for t∈[T0, T0+T], E(ξt−˜ξt)2≤3h EZt 0R(M) θ(t−s)|ξs−˜ξs|ds2 +EZt 0|Rθ(t, s)−R(M) θ(t−s)||ξs|ds2 +E(vt−vt M)2i .(87) From the explicit definition of R(M) θ(t−s), the first term of (87) satisfies EZt 0|R(M) θ(t−s)||ξs−˜ξs|ds2 =EZt T0|R(M) θ(t−s)||ξs−˜ξs|ds2 ≤CZt T0E(ξs−˜ξs)2ds for a constant C > 0. Following the argument used to bound (74), the second term of (87) is bounded by ε(M) +Cε(t)2where ε(t)→0 ast→ ∞ , while the third term is bounded by ε(M) +√ T ε(T0) under the above coupling. Then by Gronwall’s inequality, sup t∈[T0,T0+T]E(ξt−˜ξt)2≤CeCT ε(M) + sup t∈[T0,T0+T]ε(t)2+√ T ε(T0) , which implies the lemma upon adjusting ε(T0). 3.2.2 Convergence of the auxiliary process Lemma 3.9. The value σ2 Z=Eθ∗2+ctti θ(∞)−2cθ(∗) +σ2is positive. Proof. Let{θt}t≥0be the Langevin diffusion (7) for which the DMFT system of Theorem 2.5 is the large- (n, d) limit. By Theorem 4.3 to follow, Cθ(t, s)−Cθ(t,∗)−Cθ(s,∗) +E(θ∗)2= lim n,d→∞E" 1 ddX i=1(θt i−θ∗ i)(θs i−θ∗ i)# Since{θt}t≥0is Markovian (conditional on X,y,θ∗), we have for all t≥sthat E" 1 ddX i=1(θt i−θ∗ i)(θs i−θ∗ i)# =E" E" 1 ddX i=1(θt i−θ∗ i)(θs i−θ∗ i) θs,X,y,θ∗## =E" 1 ddX i=1(θs i−θ∗ i)2# ≥0, hence Cθ(t, s)−Cθ(t,∗)−Cθ(s,∗)+E(θ∗)2≥0. Setting s=t/2 and taking the limit t→ ∞ under Definition 2.4 shows ctti θ(∞)−2cθ(∗) +Eθ∗2≥0, and the lemma follows. Lemma 3.10. Letc= (c1, . . . , c M),A= diag( a1, . . . , a M),Λ
=A+cc⊤/σ2, and consider the 2-dimensional Gaussian law N(0,ΣM)with ΣM= ρ2 MκM κMρ2 M , κ M=σ2 Z·h 1−c⊤Λ−1c/σ2i2 , ρ2 M=κM+c⊤Λ−1c, where σ2 Z=E(θ∗)2+ctti θ(∞)−2cθ(∗) +σ2. Then there exists an error ε(T)not depending on T0, Mand satisfying limT→∞ε(T) = 0 , such that for any M, T 0, T, T′>0, W2(P(ξT0+T M,T 0, ξT0+T+T′ M,T 0),N(0,ΣM))≤ε(T) +ε(T′). Proof. Letz∼ N (0, σ2 Z), where σ2 Z>0 by Lemma 3.9, and let {bt m}t≥0form= 1, . . . , M be standard Brownian motions. We assume these are independent of each other and of {ξt}t∈[0,T]. Then the law of {ξt M,T 0}t≥0coincides with the marginal law of {ξt M,T 0}t≥0in the joint process ξt M,T 0=ξtfort∈[0, T0) ξt M,T 0=MX m=1cmxt m+zfort≥T0 (88) dxt m=−[amxt m+cmξt M,T 0/σ2]dt+√ 2 dbt mfor 1≤m≤M, t≥0 (89) 32 with initial conditions x0 1=. . .=x0 M= 0. Indeed, given {ξt M,T 0}t≥0, the equations (89) for {xt m}t≥0have the explicit solutions xt m=−1 σ2Zt 0cme−am(t−s)ξs M,T 0ds+Zt 0e−am(t−s)√ 2 dbs m, and substituting this into (88) gives (86) upon identifying vt M=z+Rt 0PM m=1cme−am(t−s)√ 2 dbs m. It is direct to check that {vt M}t≥0thus defined has covariance C(M) θ(t, s)−2cθ(∗) +E(θ∗)2+σ2, so this coincides with the law of {ξt M,T 0}t≥0defined by (86). Let us denote ˜ξt=ξt M,T 0,xt= (xt 1, . . . , xt M), and bt= (bt 1, . . . , bt M). For t≥T0, the evolution of (˜ξt, xt)∈RM+1is a (Markovian) Ornstein-Uhlenbeck process. Substituting (88) into (89), we have dxt=−[Λxt+cz/σ2]dt+√ 2 dbtfort≥T0 where c= (c1, . . . , c M) and Λ = A+cc⊤/σ2with A= diag( a1, . . . , a M). This has the solution, for t≥T0, xt=e−Λ(t−T0)xT0+z σ2Λ−1(e−Λ(t−T0)−I)c+Zt T0e−Λ(t−s)√ 2 dbs. Substituting back into (88), ˜ξt=c⊤e−Λ(t−T0)xT0+zh 1 +1 σ2c⊤Λ−1(e−Λ(t−T0)−I)ci +Zt T0c⊤e−Λ(t−s)√ 2 dbsfort≥T0.(90) Here, we note that the equation (85) implies that {ξt}t≥0is itself a Gaussian process (given by a linear functional of {vt}t≥0), so xT0with coordinates xT0 m=−cm σ2ZT0 0e−am(T0−s)ξsds | {z } =Um+ZT0 0e−am(T0−s)√ 2 dbs m | {z } =Vm(91) is a Gaussian vector. 
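The large-time limits ρ²_M, κ_M arise from the stationary covariance of a linear diffusion. As a toy numerical check (a random symmetric positive-definite Λ, not the paper's Λ = A + cc⊤/σ²), the stationary covariance S of dx = −Λx dt + √2 db solves the Lyapunov equation ΛS + SΛ = 2I, so S = Λ⁻¹ when Λ is symmetric:

```python
import numpy as np

# Toy check (random symmetric positive-definite Lam, not the paper's matrix):
# the linear diffusion dx = -Lam x dt + sqrt(2) db has stationary covariance S
# solving Lam S + S Lam = 2 I, i.e. S = Lam^{-1} for symmetric Lam -- the
# computation underlying the large-T variance limits.
rng = np.random.default_rng(1)
M = 3
B = rng.standard_normal((M, M))
Lam = B @ B.T + np.eye(M)

dt = 2e-4
F = np.eye(M) - dt * Lam            # one Euler step of the deterministic flow
S = np.zeros((M, M))
for _ in range(50_000):             # discrete Lyapunov recursion to fixed point
    S = F @ S @ F.T + 2 * dt * np.eye(M)

assert np.allclose(S, np.linalg.inv(Lam), atol=1e-2)
```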
Consequently, the form (90) shows that for any T, T′>0, (˜ξT0+T,˜ξT0+T+T′) has a centered bivariate Gaussian law. To conclude the proof of the lemma, it suffices to show |E[(˜ξT0+T)2]−ρ2 M|,|E[(˜ξT0+T+T′)2]−ρ2 M| ≤ε(T) +ε(T′) (92) |E[˜ξT0+T˜ξT0+T+T′]−κM| ≤ε(T) +ε(T′) (93) for some errors ε(T), ε(T′) that hold uniformly over all M, T 0>0. For (92), we may compute from the solution (90) that E[(˜ξT0+T)2] =c⊤e−ΛTE[xT0(xT0)⊤]e−ΛTc| {z } =I+σ2 Z·h 1 +1 σ2c⊤Λ−1(e−ΛT−I)ci2 +c⊤Λ−1(I−e−2ΛT)c | {z } =II. Observe that ∥Λ−1/2c∥2 2≤PM m=1c2 m/am=PM m=1µθ([am−1, am))≤µθ([ι,∞)). Hence ∥Λ−1/2c∥2≤Cfor a constant C >0 not depending on M. Since also λmin(Λ)≥ι >0, we have ∥e−ΛT∥op≤e−ιT, so |II−ρ2 M| ≤ε(T) for an error ε(T) not depending on M. To bound I, write xT0=U+Vwhere U, V∈RMhave the coordinates Um, Vmin (91). Then, from the bound E[(u⊤xT0)2]≤2E[(u⊤U)2]+2E[(u⊤V)2] for each unit vector u∈RM, we have ∥E[xT0(xT0)⊤]∥op≤2∥E[UU⊤]∥op+ 2∥E[V V⊤]∥op. 33 For the second term, ∥E[V V⊤]∥op=∥diag( a−1 m(1−e−amT0))∥op≤ι−1. For the first term, ∥E[UU⊤]∥op≤E∥U∥2 2=EMX m=1c2 m σ4ZT0 0e−am(T0−s)ξsds2 ≤MX m=1c2 m σ4ZT0 0e−am(T0−s)ds·ZT0 0e−am(T0−s)E(ξs)2ds. Noting that E(ξt)2= (σ4/δ)Cη(t, t)≤Cfor
all t≥0 under Definition 2.4, this gives ∥E[UU⊤]∥op≤ C′PM m=1c2 m/a2 m≤C′µθ([ι,∞))/ι. Combining these bounds shows ∥E[xT0(xT0)⊤]∥op≤Cfor a constant C >0 not depending on M, T 0. Then, combining with the previous bounds ∥Λ−1/2u∥2≤Candλmin(Λ)≥ι, this shows |I| ≤ε(T), so|E[(˜ξT0+T)2]−ρ2 M| ≤ε(T). The bound for E[(˜ξT0+T+T′)2] in (92) holds similarly. For (93), we may compute similarly from (90) E[(˜ξT0+T)˜ξT0+T+T′] =u⊤e−ΛTE[xT0(xT0)⊤]e−Λ(T+T′)u +σ2 Z·h 1 +1 σ2u⊤Λ−1(e−Λ(T+T′)−I)uih 1 +1 σ2u⊤Λ−1(e−ΛT−I)ui +u⊤Λ−1(e−ΛT′−e−Λ(2T+T′))u, and the arguments to show (93) from this form are the same as above. Lemma 3.11. Consider the 2-dimensional Gaussian law N(0,Σ∞)with Σ∞=ρ2 ∞κ∞ κ∞ρ2 ∞ , κ∞=Eθ∗2+σ2+ctti θ(∞)−2cθ(∗) (1 +σ−2(ctti θ(0)−ctti θ(∞))2, ρ2 ∞=κ∞+ctti θ(0)−ctti θ(∞) 1 +σ−2(ctti θ(0)−ctti θ(∞)). Then limM→∞∥ΣM−Σ∞∥op= 0. Proof. This follows from noting that c⊤Λ−1c=PM m=1c2 m/am 1+σ−2PM m=1c2m/amvia the Sherman-Morrison identity, and PM m=1c2 m/am→R∞ ιµθ(da) =ctti θ(0)−ctti θ(∞) asM→ ∞ . We now complete the proof of Lemma 3.7. Proof of Lemma 3.7. By Lemmas 3.8, 3.10, and 3.11, it holds that W2(P(ξT0+T, ξT0+T+T′),N(0,Σ∞))≤CeC(T+T′)(ε(M) +√ T+T′ε(T0)) +ε(T) +ε(T′) +ε(M). Taking first the limit M→ ∞ , then choosing T=T′=tandT0≡T0(t) such that lim t→∞T0(t) =∞ and lim t→∞e2Ct√ 2t ε(T0(t)) = 0 and taking t→ ∞ , this shows W2(P(ξT0(t)+t, ξT0(t)+2t),N(0,Σ∞))→0 as t→ ∞ . Under Definition 2.4, this implies σ4 δctti η(0) = lim t→∞σ4 δCη(T0(t) +t, T0(t) +t) = lim t→∞E[(ξT0(t)+t)2] =ρ2 ∞, σ4 δctti η(∞) = lim t→∞σ4 δCη(T0(t) +t, T0(t) + 2t) = lim t→∞E[ξT0(t)+tξT0(t)+2t] =κ∞. This shows the desired forms of ctti η(0) and ctti η(∞), and we have also from these forms that ctti η(0)−ctti η(∞) =δ σ2σ−2(ctti θ(0)−ctti θ(∞)) 1 +σ−2(ctti θ(0)−ctti θ(∞)) <δ σ2. 34 3.3 Completing the proof Proof of Theorem 2.5. By Lemmas 3.1 and 3.7, we have five equations (60), (83), (84) for the five variables ctti θ(0), ctti θ(∞), cθ(∗), ctti η(0), ctti η(∞). 
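As an aside, the Sherman-Morrison step in the proof of Lemma 3.11 can be verified numerically. The sketch below uses random placeholder values of a_m, c_m, σ² (not the paper's sequences) and checks the closed form for c⊤Λ⁻¹c with Λ = A + cc⊤/σ²:

```python
import numpy as np

# Numerical check of the Sherman-Morrison identity used in Lemma 3.11
# (random a_m, c_m as placeholders): with A = diag(a), Lambda = A + c c^T/s2,
#   c^T Lambda^{-1} c = (sum_m c_m^2/a_m) / (1 + sum_m c_m^2/a_m / s2).
rng = np.random.default_rng(2)
M, s2 = 5, 1.3
a = 1.0 + rng.random(M)
c = rng.standard_normal(M)
Lam = np.diag(a) + np.outer(c, c) / s2

s = np.sum(c**2 / a)
lhs = c @ np.linalg.solve(Lam, c)
rhs = s / (1 + s / s2)
assert abs(lhs - rhs) < 1e-10
```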
Defining mse ,mse∗by (42), these equations show ω=δ σ2−(ctti η(0)−ctti η(∞)) =δ σ2+ (ctti θ(0)−ctti θ(∞))=δ σ2+ mse, ω∗=ω2 cttiη(∞)=δ Eθ∗2+σ2+cθ(∞)−2cθ(∗)=δ σ2+ mse ∗, as well as mse = ctti θ(0)−ctti θ(∞) =Eg∗,ω∗[⟨θ2⟩g,ω− ⟨θ⟩2 g,ω] =Eg∗,ω∗⟨(θ− ⟨θ⟩g,ω)2⟩g,ω, mse∗=Eθ∗2−2Eg∗,ω∗[θ∗⟨θ⟩g,ω] +Eg∗,ω∗⟨θ⟩2 g,ω=Eg∗,ω∗(θ∗− ⟨θ⟩g,ω)2. This verifies that the fixed-point equations (43) hold, where it is clear that ω, ω∗are uniquely defined from mse,mse∗via (43). Defining ymse ,ymse∗by (42), we have also from the above forms of ω, ω∗that ymse =σ4 δ(ctti η(0)−ctti η(∞)) =σ2 1−ωσ2 δ , ymse∗=σ4 δ(2ctti η(0)−ctti η(∞))−σ2=σ2+ωσ4 δω ω∗−2 , verifying (44). Finally, the statement (45) is a consequence of (82) shown in the proof of Lemma 3.1. 4 Analysis of fixed-prior Langevin dynamics under LSI In this section, we prove Theorem 2.9 and Corollary 2.10 that verify Definition 2.4 and deduce the replica- symmetric limits for the Bayes-optimal mean-squared-errors and free energy, under Assumption 2.7 of a log-Sobolev inequality (LSI) for the posterior law. 4.1 Preliminaries 4.1.1 Properties of Langevin dynamics We review in this section two general results on a Langevin diffusion of the form dθt=∇U(θt)dt+√ 2 dbt(94) with an equilibrium measure eU(θ). The first is a fluctuation-dissipation relation for its correlation and response functions at equilibrium, and the second is a Bismut-Elworthy-Li representation for the spatial derivative
of its Markov semigroup. For bounded observables, similar fluctuation-dissipation theorems have been stated and shown in [77, 78] and Bismut-Elworthy-Li formulae in [79, 80]. We give versions of these results here for a class of unbounded observables which may have linear growth A={f∈C2(Rd,R) :∇f,∇2fare globally bounded }, and a class of drift coefficients B={U∈C3(Rd,R) :∇2U,∇3Uare globally bounded and H¨ older continuous }, (95) drawing upon some analyses of our companion work [27, Appendix A]. We write Ptf(θ) =E[f(θt)|θ0=θ], Lf(θ) =∇U⊤∇f(θ) + Tr ∇2f(θ) for the Markov semigroup and infinitesimal generator associated to (94). It is shown in [27, Proposition A.2] that f∈ A, U∈ B ⇒ ∇ Ptf(θ),∇2Ptf(θ) are uniformly bounded over t∈[0, T],θ∈Rd(96) for any fixed T >0. In particular, Ptf∈ Afor each fixed t >0. 35 Lemma 4.1. Suppose U∈ B, and (94) has the unique stationary distribution q(θ) =eU(θ)with finite third moments. Let {θt}t≥0be the solution to (94) with initial condition θ0=x, and let A∈ AandB∈ B. (a) Define the response function Rx AB(t, s) =Ps(∇B⊤∇Pt−sA)(x). Then Rx AB(t, s)satisfies the following condition: Fix any continuous bounded function h: [0,∞)→R. For each ε >0, let{θt,ε}t≥0denote the solution of the perturbed dynamics dθt,ε=∇[U(θt,ε) +εh(t)B(θt,ε)]dt+√ 2 dbt with the same initial condition θ0,ε=x. Then for any t >0, lim ε→01 ε E[A(θt,ε)|θ0,ε=x]−E[A(θt)|θ0=x] =Zt 0Rx AB(t, s)h(s)ds. (b) Define the correlation function Cx AB(t, s) =E[A(θt)B(θs)|θ0=x]. Then for any t≥s≥0, averaging over an initial condition x∼qdrawn from the stationary distribution, ∂tEx∼qCx AB(t, s) =−Ex∼qRx AB(t, s). Proof. Part (a) is an application of [27, Proposition A.1] of our companion paper (specialized to this setting of dynamics with a fixed and non-adaptive prior). For part (b), we will use also from [27, Proposition A.2] that for A∈ A, we have ∂tPtA= LPtA. 
Since the dynamics (103) are Markovian with stationary distribution q(θ), we have Ex∼qCx AB(t, s) =Ex∼q[E[A(θt−s)|θ0=x]B(x)] =Ex∼q(B·Pt−sA)[x]. To differentiate under the integral in t, note that ∂t(B·Pt−sA) =B·LPt−sA. By the uniform boundedness of∇PtA,∇2PtAovert∈[0, T], the Lipschitz-continuity of ∇B,∇U, and finiteness of third moments of q, we have that ( B·LPtA)[x] is uniformly integrable with respect to x∼qover t∈[0, T]. Thus dominated convergence applies to show ∂tEx∼qCx AB(t, s) =∂tEx∼q(B·Pt−sA)[x] =Ex∼q(B·LPt−sA)[x]. On the other hand, using also that both ∇B⊤∇PtAandB·LPtAare integrable with respect to x∼q, we have via integration-by-parts Ex∼qRx AB(t, s) =Ex∼q(∇B⊤∇Pt−sA)[x] =Z q(θ)(∇B⊤∇Pt−sA)[θ]dθ =−Z B(θ)dX j=1∂j[q ∂j(Pt−sA)](θ)dθ =−Z B(θ) qTr∇2(Pt−sA) +∇(Pt−sA)⊤∇q (θ)dθ =−Z q(θ)B(θ) Tr∇2(Pt−sA) +∇(Pt−sA)⊤∇logq (θ)dθ =−Ex∼q(B·LPt−sA)[x]. Lemma 4.2. Suppose U∈ B, and consider the solution (θt,Vt)∈Rd×Rd×dto dθt=∇U(θt)dt+√ 2 dbt, dVt= [∇2U(θt)]Vtdt (97) with initial condition (θ0,V0) = (x,I), adapted to the canonical filtration of the Brownian motion {bt}t≥0. Then for any f∈ Aand any t >0, ∇Ptf(x) =E[Vt⊤∇f(θt)|(θ0,V0) = (x,I)] (98) =1 t√ 2E f(θt)Zt 0Vs⊤dbs (θ0,V0) = (x,I) (99) 36 Proof. The first identity (98) is the statement of [27, Eq. (184)] (again specialized to this setting of dynamics with a fixed and non-adaptive prior). For the second identity (99), we use from [27, Proposition A.2] that for f∈ A and any fixed t≥0, (s,θ)7→Pt−sf(θ) isC1ins∈[0, t] and C2inθ, with ∂sPt−sf(θ) =−LPt−sf(θ). Then Itˆ o’s formula applied
to g(s,θ) =Pt−sf(θ) gives f(θt) =g(t,θt) =g(0,θ0) +Zt 0∂sg(s,θs)ds+Zt 0∇θg(s,θs)⊤dθs+Zt 0Tr∇2 θg(s,θs)ds =Ptf(x) +Zt 0(∂s+ L)Pt−sf(θs)ds+√ 2Zt 0∇Pt−sf(θs)⊤dbs =Ptf(x) +√ 2Zt 0∇Pt−sf(θs)⊤dbs. Since ∇2Uis bounded, {Vt}t∈[0,T]is bounded over finite time horizons, soRt 0Vs⊤dbsis a martingale. Multiplying both sides by this martingale and taking expectations gives E f(θt)Zt 0Vs⊤dbs (θ0,V0) = (x,I) =√ 2Zt 0E Vs⊤∇Pt−sf(θs)|(θ0,V0) = (x,I)]ds. Since Ptf∈ A, we may apply (98) with Pt−sfin place of fto get Zt 0E Vs⊤∇Pt−sf(θs)|(θ0,V0) = (x,I)]ds=Zt 0∇Ps(Pt−sf)(x)ds=t· ∇Ptf(x). Substituting above and rearranging shows (99). 4.1.2 Interpretation of the DMFT correlation and response We remark that under Assumption 2.2(a), the log-posterior density log Pg(θ|X,y) belongs to the function classB, andPg(θ|X,y) is the unique stationary distribution of (7). Fixing X,y,θ∗, consider the coordinate functions ej(θ) =θj, e∗ j(θ) =θ∗ j, x i(θ) =√ δ σ2([Xθ]i−yi). (100) (Here, e∗ jis a constant function not depending on θ.) We define their associated correlation and response matrices Cθ(t, s) = (Cθ0 ejek(t, s))d j,k=1,Cθ(t,∗) = (Cθ0 eje∗ k(t,0))d j,k=1,Rθ(t, s) = (Rθ0 ejek(t, s))d j,k=1 Cη(t, s) = (Cθ0 xjxk(t, s))n j,k=1,Rη(t, s) = (Rθ0 xjxk(t, s))n j,k=1(101) where Cθ0 AB(t, s) and Rθ0 AB(t, s) are the correlation and response functions as defined in Lemma 4.1 for these coordinate functions, under the dynamics (7) with fixed prior g(·) and the given initial condition θ0of Assumption 2.1. The following result is a direct application of [27, Theorem 2.8]. Theorem 4.3 ( [27]) .Suppose Assumptions 2.1 and 2.2(a) hold, and let Cθ, Cη, Rθ, Rηbe the correlation and response functions of the DMFT system in Theorem 2.3(a) approximating the dynamics (7). Then almost surely as n, d→ ∞ , d−1TrCθ(t, s)→Cθ(t, s), d−1TrCθ(t,∗)→Cθ(t,∗), n−1TrCη(t, s)→Cη(t, s) d−1TrRθ(t, s)→Rθ(t, s), n−1TrRη(t, s)→Rη(t, s). 
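The fluctuation-dissipation relation of Lemma 4.1(b) can be checked in closed form for the solvable Ornstein-Uhlenbeck case U(θ) = −aθ²/2 (a toy choice, not the posterior of this paper): at equilibrium C(t, s) = a⁻¹e^{−a(t−s)} and R(t, s) = e^{−a(t−s)}, so ∂ₜC = −R. A short numerical confirmation:

```python
import numpy as np

# Fluctuation-dissipation check for the solvable OU case U(theta) = -a*theta^2/2
# (toy value of a): at equilibrium q = N(0, 1/a),
#   C(t, s) = E[theta_t theta_s] = (1/a) * exp(-a*(t-s)),
#   R(t, s) = exp(-a*(t-s))   (since grad P_tau e = exp(-a*tau)),
# and the relation d/dt C(t, s) = -R(t, s) of Lemma 4.1(b) holds exactly.
a = 2.5
tau = np.linspace(0.0, 3.0, 3001)
C = (1.0 / a) * np.exp(-a * tau)
R = np.exp(-a * tau)
dC = np.gradient(C, tau)            # numerical d/dtau C
assert np.allclose(dC[1:-1], -R[1:-1], atol=1e-3)
```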
37 4.2 Posterior bounds and Wasserstein-2 convergence Fixing the prior g(·) and the data ( X,y), let us write for convenience q(θ) =Pg(θ|X,y)∝exp −1 2σ2∥y−Xθ∥2 2+dX j=1logg(θj) (102) for the posterior density. The Langevin diffusion (7) with this fixed prior is then dθt=∇logq(θt)dt+√ 2 dbt. (103) We will use the notations ⟨f(θ)⟩=Eθ∼q[f(θ)], P tf(x) =⟨f(θt)⟩x=E[f(θt)|θ0=x] where the former is an average under the posterior law q(·) conditional on X,y, and the latter is an average over{θt}t≥0solving (103) conditional on X,yand also the initial condition θ0=x. We write as shorthand Pt(x) =⟨θt⟩x= (Pte1, . . . , P ted)[x]∈Rd. We reserve ⟨f(θt)⟩for the expectation conditional on X,ybut averaging also over θ0. For constants C0, CLSI>0, define the ( X,θ∗,ε)-dependent event E(C0, CLSI) =n ∥X∥op≤C0,∥θ∗∥2 2,∥ε∥2 2≤C0d,the LSI (46) holds for q(θ)o . (104) Note that under Assumptions 2.1 and 2.7(a), this event holds almost surely for all large n, dfor some sufficiently large choices of constants C0, CLSI>0. All subsequent constants C, C′, c, c′>0 in this section may change from instance to instance, and are dimension-free and depend only on C0, CLSIabove , δ, σ2, g∗of Assumption 2.1 , C, c 0, r0of Assumption 2.2(a) ,and log g(0). (105) We record the following elementary bounds for the posterior expectation ⟨f(θ)⟩=Eθ∼qf(θ). Lemma 4.4. Suppose Assumption 2.2(a) holds. Then on the event where ∥X∥op≤C0, there exists a constant C
>0for which ⟨∥θ∥2 2⟩ ≤C(d+∥y∥2 2), (106) ⟨∥∇logq(θ)∥2 2⟩ ≤C(d+∥y∥2 2), (107) ∥∇2logq(θ)∥op≤C. (108) In particular, on E(C0, CLSI), for a constant C′>0we have ⟨∥θ∥2 2⟩ ≤C′dand⟨∥∇logq(θ)∥2 2⟩ ≤C′d. Proof. (108) is immediate from the form of log q(θ), the bound ∥X∥op≤C0, and Assumption 2.2(a). For (106), write Eg,Pgfor the expectation and probability over the prior θ∼gandθjiid∼g. We note that under Assumption 2.2(a), we have logg(θ) = log g(0) + θ(logg)′(0) +Zθ 0Zx 0(logg)′′(u)dudx≤C(1 +|θ|)−(c0/2)(|θ| −r0)2≤C′−c′θ2 for some constants C, C′, c′>0 depending only on the constants of Assumption 2.2(a) and on log g(0). Then gis subgaussian, and for some constants C, c > 0 (c.f. [81, Eq. (3.1)]) Eg∥θ∥2 2≤Cd, Pg[∥θ∥2 2−Eg∥θ∥2 2≥du]≤Ce−cdufor all u≥1. (109) Write q(θ) =1 Zexp −∥y−Xθ∥2 2 2σ2dY j=1g(θj), Z =Eg exp −∥y−Xθ∥2 2 2σ2 . 38 We have by Jensen’s inequality −logZ≤Eg[∥y−Xθ∥2 2/2σ2]≤C(d+∥y∥2 2+Eg∥θ∥2 2)≤C′(d+∥y∥2 2). Then for any M > 0, also bounding the exponential from above by 1, D ∥θ∥2 21{∥θ∥2 2≥M}E ≤1 ZEgh ∥θ∥2 21{∥θ∥2 2≥M}i ≤eC′(d+∥y∥2 2)Egh ∥θ∥2 21{∥θ∥2 2≥M}i . Integrating the tail bound (109) shows that this is less than d+∥y∥2 2forM=C(d+∥y∥2 2) and a sufficiently large choice of constant C >0. Thus ⟨∥θ∥2 2⟩ ≤M+D ∥θ∥2 21{∥θ∥2 2≥M}E ≤C′(d+∥y∥2 2). This shows (106). Since ∇logq(θ) isC-Lipschitz by (108), and ∥∇logq(0)∥2 2≤2∥X⊤y/σ2∥2 2+ 2d· (logg)′(0)2≤C(d+∥y∥2 2), the statement (107) follows from (106). Remark 4.5. In a later proof, we will require that (106) holds in a form ⟨∥θ∥2 2⟩ ≤Cd+ (C/σ2)∥y∥2 2 (110) for all large noise variances σ2>0, where C > 0 is a constant not depending on σ2. This may be seen from the above arguments: Writing now C, C′>0 for constants not depending on σ2, the above shows −logZ≤(C′/σ2)(d+∥y∥2 2), and hence ⟨∥θ∥2 21{∥θ∥2 2≥M}⟩ ≤ d+∥y∥2 2/σ2forM=Cd+ (C/σ2)∥y∥2 2with a sufficiently large choice of constant C >0. Lemma 4.6. Suppose Assumption 2.2(a) holds. 
Let {θt}t≥0be the solution to (9) with initial condition θ0∼q0, let qt(θt)be the law of θt, and let W2(·)the Wasserstein-2 distance, all conditional on X,θ∗,ε (and averaging over θ0). Then on the event E(C0, CLSI), there exists a constant C >0such that W2(qt, q)≤Ce−(2/CLSI)tW2(q0, q)for all t≥0. (111) Proof. Fort∈[0,1] we may apply a simple synchronous coupling and Gr¨ onwall argument: Let {θt}t≥0and {˜θt}t≥0be the solutions of (9) with initial conditions θ0∼q0and˜θ0∼q, coupled by the same Brownian motion. Thend dt∥(θt−˜θt)∥2≤ ∥d dt(θt−˜θt)∥2=∥∇logq(θt)− ∇logq(˜θt)∥2≤C∥θt−˜θt∥2by definition of the Langevin equation (9) and by (108). Hence ∥θt−˜θt∥2≤eCt∥θ0−˜θ0∥2. (112) Letting ( θ0,˜θ0) be the coupling of ( q0, q) for which ⟨∥θ0−˜θ0∥2 2⟩=W2(q0, q)2, we have that ( θt,˜θt) is a coupling of ( qt, q), so W2(qt, q)2≤ ⟨∥θt−˜θt∥2 2⟩ ≤e2Ct⟨∥θ0−˜θ0∥2 2⟩=e2CtW2(q0, q)2. Thus W2(qt, q)≤C′W2(q0, q) for all t∈[0,1], which implies (111) for t∈[0,1] and some C >0. Fort≥1, under the curvature-dimension lower bound −∇2logq(θ)⪰ −LId for a constant L >0 that is implied by (108), we apply from [82, Lemma 4.2] that DKL(q1∥q)≤1 4α+L 2 W2(q0, q)2, α =e2L−1 2L. (113) Under the LSI condition of E(C0, CLSI), we have the exponential contraction of relative entropy (c.f. [83, Theorem 5.2.1]) DKL(qt∥q)≤e−2(t−1)/CLSIDKL(q1∥q) for all
t≥1. (114) We have also the T2-transportation inequality (c.f. [83, Theorem 9.6.1]) W2(qt, q)2≤CLSIDKL(qt∥q), (115) and (111) for t≥1 follows from combining (113), (114), and (115). 4.3 Properties of the correlation and response In this section, on the event E(C0, CLSI), we now show approximate time-translation-invariance at large times for the correlation and response matrices Cθ,Cη,Rθ,Rηdefined in Section 4.1.2. We may write these using our Markov semigroup notation as Cθ(t, s) = ek(θs)Pt−sej(θs) θ0d j,k=1,Cη(t, s) = xk(θs)Pt−sxj(θs) θ0n j,k=1, Rθ(t, s) = ∇ek(θs)⊤∇Pt−sej(θs) θ0d j,k=1,Rη(t, s) = ∇xk(θs)⊤∇Pt−sxj(θs) θ0n j,k=1. Lemma 4.7. Suppose Assumption 2.2(a) holds. Let Cθ,Cη,Rθ,Rηbe defined for the dynamics (7), and set C∞ θ(τ) = ek(θ)Pτej(θ) d j,k=1, C∞ η(τ) = xk(θ)Pτxj(θ) n j,k=1, R∞ θ(τ) = ∇ek(θ)⊤∇Pτej(θ) d j,k=1, R∞ η(τ) = ∇xk(θ)⊤∇Pτxj(θ) n j,k=1 where ⟨·⟩is expectation under the posterior law q(·). Then on E(C0, CLSI)∩ {∥θ0∥2 2≤C0d}, there exist constants C, c > 0such that for all t≥s≥0, |TrCθ(t, s)−TrC∞ θ(t−s)| ≤Cde−cs(116) |TrRθ(t, s)−TrR∞ θ(t−s)| ≤Cde−cs(117) |TrCη(t, s)−TrC∞ η(t−s)| ≤Cde−cs(118) |TrRη(t, s)−TrR∞ η(t−s)| ≤Cde−cs(119) Proof. Momentarily let qtbe the law of θtconditional on ( X,y) and also on a fixed initial condition θ0=x. For any fixed t≥0, denote by φt∼qa random vector such that ( θt,φt) is a coupling of ( qt, q) for which ⟨∥θt−φt∥2 2⟩x=W2(qt, q)2, where W2(·) is the Wasserstein-2 distance conditional on X,yandθ0=x. Then observe that for any M-Lipschitz function f, we have ⟨∥f(θt)−f(φt)∥2 2⟩x≤M2⟨∥θt−φt∥2 2⟩x=M2W2(qt, q)2. (120) Furthermore W2(qt, q)2≤Ce−ctW2(δx, q)2≤2Ce−ct(∥x∥2 2+⟨∥θ∥2 2⟩) for all t≥0 by Lemma 4.6.
Then, applying (120) with f(x) =xandf(x) =∇logq(x), and applying (106–108) on the event E(C0, CLSI), we have the basic estimates ⟨∥θt−φt∥2 2⟩x≤Ce−ct(∥x∥2 2+d),⟨∥∇logq(θt)− ∇logq(φt)∥2 2⟩x≤Ce−ct(∥x∥2 2+d), ⟨∥θt∥2 2⟩x≤C(∥x∥2 2+d),⟨∥∇logq(θt)∥2 2⟩x≤C(∥x∥2 2+d) ⟨∥φt∥2 2⟩=⟨∥θ∥2 2⟩ ≤Cd, ⟨∥∇logq(φt)∥2 2⟩=⟨∥∇logq(θ)∥2 2⟩ ≤Cd.(121) We note that also ∥Pt(x)−Pt(˜x)∥2 2≤eCt∥x−˜x∥2 2, (122) ∥Pt(x)−Pt(˜x)∥2 2≤Ce−ct(∥x∥2 2+∥˜x∥2 2+d). (123) Indeed, (122) follows from (112) and Jensen’s inequality. Also by Jensen’s inequality and (121), ∥Pt(x)− ⟨θ⟩∥2 2=∥⟨θt−φt⟩x∥2 2≤Ce−ct(∥x∥2 2+d), (124) and applying this bound for both Pt(x) and Pt(˜x) yields (123). For (116), note that for any s, τ≥0, TrCθ(s+τ, s) =dX j=1⟨ej(θs)Pτej(θs)⟩θ0. (125) 40 Now let qsbe the law of θsconditional on ( X,y) and the given initial condition θ0of Assumption 2.2, and let (θs,φs) be the optimal Wasserstein-2 coupling of ( qs, q) as above. Then |TrCθ(s+τ, s)−TrC∞ θ(τ)| ≤dX j=1 ej(θs)Pτej(θs)−ej(φs)Pτej(φs) θ0 ≤dX j=1 (θs j−φs j)Pτej(θs) θ0+ φs j(Pτej(θs)−Pτej(φs)) θ0 ≤ ∥θs−φs∥2 2 1/2 θ0 ∥Pτ(θs)∥2 2 1/2 θ0| {z } I+ ∥φs∥2 2 1/2 ∥Pτ(θs)−Pτ(φs)∥2 2 1/2 θ0| {z } II (126) where we recall our shorthand Pt(x) =⟨θt⟩x∈Rd. We have I ≤Cde−csfor all s≥0 by (121) and ⟨∥Pτ(θs)∥2 2⟩θ0≤ ⟨∥θs+τ∥2 2⟩θ0which follows from Jensen’s inequality. For II, by (122) and (121), we have ∥Pτ(θs)−Pτ(φs)∥2 2 θ0≤eCτ⟨∥θs−φs∥2 2⟩θ0≤eCτ·Cde−cs. Choosing a large enough constant s0>0, for τ≤s/s0, this gives ⟨∥Pτ(θs)−Pτ(φs)∥2 2⟩θ0≤C′de−c′s. For τ > s/s 0, applying instead (123) and (121), we have ⟨∥Pτ(θs)−Pτ(φs)∥2 2⟩θ0≤Ce−cτ(⟨∥θs∥2 2⟩θ0+⟨∥φs∥2 2⟩+ d)≤C′de−c′s. Thus, for some C, c > 0, ∥Pτ(θs)−Pτ(φs)∥2 2 θ0≤Cde−csfor all s, τ≥0. Thus also II ≤Cde−cs, and applying these bounds for I and II to
(126) shows (116). For (117), note that TrRθ(s+τ, s) =dX j=1⟨(∂jPτej)[θs]⟩θ0, TrR∞ θ(τ) =dX j=1⟨(∂jPτej)[θ]⟩. (127) Let d Pt(x)∈Rd×dbe the Jacobian of the vector map x7→Pt(x). By (98) of Lemma 4.2 applied with f=ej for each j= 1, . . . , d , we have dPt(x) =⟨Vt⟩x (128) where (with slight extension of the notation) we write ⟨·⟩xfor the average over {θt,Vt}t≥0solving (97) with initial condition ( θ0,V0) = (x,I). For t≥1, let us write also ∇Ptej(x) =∇P1f(x) with f=Pt−1ej. Noting thatf∈ Aby (96), we may apply (99) of Lemma 4.2 with this f. Doing so for each j= 1, . . . , d gives dPt(x) =1√ 2 Pt−1(θ1)Z1 0(Vs)⊤dbs⊤ xfort≥1. (129) In particular, dX j=1(∂jPτej)[x] =⟨TrVτ⟩x=1√ 2 Pτ−1(θ1)⊤Z1 0(Vs)⊤dbs x(130) with the second equality holding for τ≥1. Now let {θt,Vt}t≥0and{˜θt,˜Vt}t≥0be the solutions to (97) with initial conditions ( θ0,V0) = (x,I) and ( ˜θ0,˜V0) = ( ˜x,I), coupled by the same Brownian motion {bt}t≥0, and write ⟨·⟩x,˜xfor the associated average over {θt,Vt,˜θt,˜Vt}t≥0conditional on these initial conditions. By the form of (97) and by (108), d dt∥Vt∥op≤ ∥∇2logq(θt)·Vt∥op≤C∥Vt∥op, so ∥Vt∥op≤eCt∥V0∥op=eCt. (131) 41 Then also d dt∥Vt−˜Vt∥F≤ ∥[∇2logq(θt)− ∇2logq(˜θt)]Vt∥F+∥[∇2logq(˜θt)](Vt−˜Vt)∥F ≤ ∥∇2logq(θt)− ∇2logq(˜θt)∥F∥Vt∥op+∥∇2logq(˜θt)∥op∥Vt−˜Vt∥F. Applying ∇2logq(θt)−∇2logq(˜θt) = diag((log q)′′(θt j)−(logq)′′(˜θt j)), the bound ∥∇2logq(θ)∥op≤Cfrom (108), |(logg)′′′(θ)| ≤Cunder Assumption 2.2(a), and (112), d dt∥Vt−˜Vt∥F≤C∥θt−˜θt∥2∥Vt∥op+C∥Vt−˜Vt∥F≤CeCt∥x−˜x∥2·eCt+C∥Vt−˜Vt∥F. Integrating this bound, ∥Vt−˜Vt∥F≤C∥x−˜x∥2for all t∈[0,1]. So it follows from the first equality of (130) that for τ∈[0,1], dX j=1∂jPτej(x)−∂jPτej(˜x) = Tr(Vτ−˜Vτ) x,˜x ≤√ d ∥Vτ−˜Vτ∥F x,˜x≤C√ d∥x−˜x∥2. Hence by (127) and (121), for τ∈[0,1], |TrRθ(s+τ, s)−TrR∞ θ(τ)| ≤C√ d⟨∥θs−φs∥2⟩θ0≤C′de−cs. 
(132) Forτ≥1, we apply instead the second equality of (130) and Cauchy-Schwarz to obtain √ 2 dX j=1∂jPτej(x)−∂jPτej(˜x) ≤ Pτ−1(θ1)⊤Z1 0(Vs)⊤dbs−Pτ−1(˜θ1)⊤Z1 0(˜Vs)⊤dbs x,˜x ≤D Pτ−1(θ1)−Pτ−1(˜θ1) 2 2E1/2 x,˜x Z1 0(Vs)⊤dbs 2 2 1/2 x+D Pτ−1(˜θ1) 2 2E1/2 ˜x Z1 0(Vs−˜Vs)⊤dbs 2 2 1/2 x,˜x =D Pτ−1(θ1)−Pτ−1(˜θ1) 2 2E1/2 x,˜x Z1 0∥Vs∥2 Fds 1/2 x+D Pτ−1(˜θ1) 2 2E1/2 ˜x Z1 0∥Vs−˜Vs∥2 Fds 1/2 x,˜x ≤C√ dD Pτ−1(θ1)−Pτ−1(˜θ1) 2 2E1/2 x,˜x+C∥x−˜x∥2D Pτ−1(˜θ1) 2 2E1/2 ˜x. (133) Note that ⟨∥Pτ−1(˜θ1)∥2 2⟩˜x≤ ⟨∥˜θτ∥2 2⟩˜x≤C(∥˜x∥2 2+d) by (121). Applying (122–123), (112), and (121), D Pτ−1(θ1)−Pτ−1(˜θ1) 2 2E x,˜x≤e2C(τ−1)⟨∥θ1−˜θ1∥2 2⟩x,˜x≤Ce2Cτ∥x−˜x∥2 2for all τ≥1, D Pτ−1(θ1)−Pτ−1(˜θ1) 2 2E x,˜x≤Ce−c(τ−1)(⟨∥θ1∥2 2⟩x+⟨∥˜θ1∥2 2⟩˜x+d) ≤C′e−cτ(∥x∥2 2+∥˜x∥2 2+d) for all τ≥2. Choosing a large enough constant s0>0, ifτ∈[1, s/s 0], then we may apply the former bound, (121), and Cauchy-Schwarz to (133) to get |TrRθ(s+τ, s)−TrR∞ θ(τ)| ≤ dX j=1∂jPτej(θs)−∂jPτej(φs) θ0 ≤CeCτ√ d⟨∥θs−φs∥2⟩θ0+CD ∥θs−φs∥2(∥φs∥2+√ d)E θ0 ≤C′d(eCτ+ 1)e−cs≤C′′de−c′s. (134) 42 Ifτ≥s/s0, applying instead the latter bound to (133), |TrRθ(s+τ, s)−TrR∞ θ(τ)| ≤Ce−cτ√ d ⟨∥θs∥2 2⟩θ0+⟨∥φs∥2 2⟩+d1/2+CD ∥θs−φs∥2(∥φs∥2+√ d)E θ0 ≤C′de−c′s. (135) Combining these bounds for τ∈[0,1],τ∈[1, s/s 0], and τ≥s/s0in (132), (134), and (135) shows (117). The arguments for CηandRηin (118–119) are similar: For (118), recall the definitions (100) and note that TrCη(s+τ, s)−TrC∞ η(τ) ≤nX i=1D xi(θs)Pτxi(θs)−xi(φs)Pτxi(φs) E θ0 ≤nX i=1D xi(θs)−xi(φs) Pτxi(θs) E θ0+D xi(φs) Pτxi(θs)−Pτxi(φs) E θ0 ≤δ σ4D ∥X(θs−φs)∥2 2E1/2 θ0D ∥XPτ(θs)−y∥2 2E1/2 θ0+δ σ4D ∥Xφs−y∥2 2E1/2D ∥XPτ(θs)−XPτ(φs)∥2 2E1/2 θ0. The desired result (118) follows from the conditions ∥X∥op≤C0,∥y∥2 2≤C0d, and the preceding bounds for (126). For (119), note that TrRη(s+τ, s) =δ σ4 TrX[dPτ(θs)]X⊤ θ0,
TrR∞ η(τ) =δ σ4 TrX[dPτ(θ)]X⊤ . (136) Let{θt,Vt}t≥0and{˜θt,˜Vt}t≥0be the solutions of (97) with initial conditions ( x,I) and ( ˜x,I). Ifτ∈[0,1], we apply (128) to obtain |TrX[dPτ(x)]X⊤−TrX[dPτ(˜x)]X⊤| ≤ ∥Vτ−˜Vτ∥F x,˜x· ∥X⊤X∥F≤√ d∥X∥2 op· ∥Vτ−˜Vτ∥F x,˜x, which leads to the bound (132) up to a different constant depending on the bound C0for∥X∥op. Ifτ≥1, we apply (129) to obtain √ 2 TrX[dPτ(x)]X⊤−TrX[dPτ(˜x)]X⊤ ≤D Pτ−1(θ1)−Pτ−1(˜θ1)⊤ X⊤XZ1 0Vs⊤dbs E x,˜x+D Pτ−1(˜θ1)⊤X⊤XZ1 0(Vs−˜Vs)⊤dbs E x,˜x ≤ ∥X∥2 opD ∥Pτ−1(θ1)−Pτ−1(˜θ1)∥2E1/2 x,˜x Z1 0∥Vs∥2 Fds 1/2 x+D ∥Pτ−1(˜θ1)∥2E1/2 ˜x Z1 0∥Vs−˜Vs∥2 Fds 1/2 x,˜x . This can be bounded in the same way as (133), (134), and (135) up to different constants depending on the bound C0for∥X∥op. This shows (119). Lemma 4.8. Suppose Assumption 2.2(a) holds. Let {θt}t≥0be the solution to (7). Then on the event E(C0, CLSI)∩ {∥θ0∥2 2≤C0d}, there exist constants C, c > 0such that for all t≥s≥0, TrCθ(t, s)−Ps(θ0)⊤⟨θ⟩ ≤Cde−c(t−s)(137) |TrRθ(t, s)| ≤Cde−c(t−s)(138) TrCη(t, s)−δ σ4(XPs(θ0)−y)⊤(X⟨θ⟩ −y) ≤Cde−c(t−s)(139) |TrRη(t, s)| ≤Cde−c(t−s)(140) and furthermore Ps(θ0)⊤⟨θ⟩ − ∥⟨ θ⟩∥2 2 ≤Cde−cs(141) (XPs(θ0)−y)⊤(X⟨θ⟩ −y)− ∥X⟨θ⟩ −y∥2 2 ≤Cde−cs(142) 43 Proof. For (137) and (141), observe that TrCθ(s+τ, s)−Ps(θ0)⊤⟨θ⟩ = θs⊤(Pτθs− ⟨θ⟩) θ0 ≤ ∥θs∥2 2 1/2 θ0 ∥Pτθs− ⟨θ⟩∥2 2 1/2 θ0≤Cde−cτ, the last inequality applying (121) and (124). Similarly Ps(θ0)⊤⟨θ⟩ − ∥⟨ θ⟩∥2 2 = (Psθ0− ⟨θ⟩)⊤⟨θ⟩ ≤ ∥⟨θ⟩∥2· ∥Psθ0− ⟨θ⟩∥2≤Cde−cs. For (138), recall from (127) that Tr Rθ(s+τ, s) =Pd j=1⟨(∂jPτej)[θs]⟩θ0. Then by the first equality of (130) and (131), we have |TrRθ(s+τ, s)| ≤Cdfor any τ∈[0,1]. For τ≥1, we apply instead the second equality of (130) whereRt 0Vs⊤dbsis a martingale. Then ⟨⟨θ⟩⊤R1 0Vs⊤dbs⟩x= 0 for any initial condition x∈Rd, so for any τ≥1, dX j=1∂jPτej(x) = Pτ−1(θ1)− ⟨θ⟩⊤Z1 0Vs⊤dbs x ≤ ∥Pτ−1(θ1)− ⟨θ⟩∥2⟩1/2 x Z1 0∥Vs∥2 Fds 1/2 x≤Ce−cτ√ d(∥x∥2+√ d), the last inequality using the estimates (124) and (131). 
Then by (121) and Jensen’s inequality, |TrRθ(s+τ, s)| ≤ dX j=1(∂jPτej)[θs] θ0≤C′de−c′τ. Combining these cases τ∈[0,1] and τ≥1 gives (138). The arguments for (139), (140), and (142), are analogous to the above, and we omit these for brevity. 4.4 The DMFT system is approximately-TTI We now prove Theorem 2.9, that under the log-Sobolev condition of Assumption 2.7(a), the DMFT system of Theorem 2.3(a) is approximately-TTI in the sense of Definition 2.4. Lemma 4.9. Under Assumptions 2.1, 2.2(a), and 2.7(a), the DMFT system prescribed by Theorem 2.3(a) satisfies the conditions of Definition 2.4(1) with ε(t) =Ce−ctfor some constants C, c > 0. Proof. We restrict to the almost sure event where the convergence statements of Theorem 4.3 hold, and where E(C0, CLSI)∩ {∥θ0∥2 2≤C0d}holds for all large n, d. Consider first the statements for Cθ(t, s). Applying ∥θ0∥2 2≤C0dand (137) of Lemma 4.8, for some constants C, c > 0, lim sup n,d→∞ d−1TrCθ(t, s)−d−1Ps(θ0)⊤⟨θ⟩ ≤Ce−ctfor all s≤t/2. (143) By Theorem 4.3, lim n,d→∞d−1TrCθ(t, s) =Cθ(t, s) for all t≥s≥0. Then, for each s≥0 and t≥2s, lim sup n,d→∞d−1Ps(θ0)⊤⟨θ⟩ ≤Cθ(t, s) +Ce−ct, lim inf n,d→∞d−1Ps(θ0)⊤⟨θ⟩ ≥Cθ(t, s)−Ce−ct. Taking t→ ∞ on the right side of both statements shows that for each s≥0, there exists a limit ˜cθ(s) := lim n,d→∞d−1Ps(θ0)⊤⟨θ⟩= lim t→∞Cθ(t, s). (144)
Next, (141) of Lemma 4.8 implies for some C, c > 0, lim sup n,d→∞ d−1Ps(θ0)⊤⟨θ⟩ −d−1∥⟨θ⟩∥2 2 ≤Ce−csfor all s≥0. (145) 44 Then lim sup n,d→∞d−1∥⟨θ⟩∥2 2≤˜cθ(s) +Ce−cs, lim inf n,d→∞d−1∥⟨θ⟩∥2 2≥˜cθ(s)−Ce−cs. Taking s→ ∞ on the right side of both statements shows that there exists a limit ctti θ(∞) := lim n,d→∞d−1∥⟨θ⟩∥2 2= lim s→∞˜cθ(s). (146) Now consider C∞ θ(τ) as defined in Lemma 4.7. Let −L =R∞ 0adEabe the spectral decomposition of −L as a positive, self-adjoint operator on L2(q) (c.f. [83, Theorem A.4.2]), where {Ea}a≥0is a family of orthogonal projections onto an increasing family of closed linear subspaces of L2(q). In particular, E0f=⟨f(θ)⟩is the projection onto the constant functions. For each τ≥0 and all f, g∈L2(q), we then have ⟨f(θ)Pτg(θ)⟩=Z∞ 0e−aτd⟨f(θ)Eag(θ)⟩ (147) understood as a Stieltjes integral with respect to the bounded-variation function a7→ ⟨f(θ)Eag(θ)⟩(c.f. [83, Proposition 3.1.6(iii)]). The LSI on the event E(C0, CLSI) implies a spectral gap, i.e. the spectrum of −L is included in {0} ∪[1/CLSI,∞). Thus, fixing any constant ι∈(0,1/CLSI), we have d−1TrC∞ θ(τ) =d−1dX j=1⟨ej(θ)Pτej(θ)⟩=d−1dX j=1⟨ej(θ)E0ej(θ)⟩+d−1dX j=1Z∞ ιe−aτd⟨ej(θ)Eaej(θ)⟩ =d−1∥⟨θ⟩∥2 2+d−1dX j=1Z∞ ιe−aτd⟨ej(θ)Eaej(θ)⟩, the first equality applying (147), and the second equality applyingP j⟨ej(θ)E0ej(θ)⟩=P j⟨θj⟨θj⟩⟩=∥⟨θ⟩∥2. Define ( n, d,X,y)-dependent scalars cθ,d, mθ,d>0 and a positive measure µθ,don [ι,∞) by cθ,d=d−1∥⟨θ⟩∥2 2, m θ,d=d−1dX j=1Z∞ ιd⟨ej(θ)Eaej(θ)⟩, µ θ,d(S) =d−1dX j=1Z Sd⟨ej(θ)Eaej(θ)⟩,(148) noting that a7→d−1Pd j=1⟨ej(θ)Eaej(θ)⟩is nondecreasing and hence defines a valid distribution function forµθ,d. Then d−1TrC∞ θ(τ) =cθ,d+Z∞ ιe−aτµθ,d(da), m θ,d=µθ,d([ι,∞)). (149) Applying (149) with τ= 0, cθ,d+mθ,d=d−1TrC∞ θ(0) = d−1⟨∥θ∥2 2⟩ ≤C. (150) In particular, µθ,dis finite and uniformly bounded in total variation norm for all ( n, d). 
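The representation (149) writes $d^{-1}\mathrm{Tr}\,C^\infty_\theta(\tau)$ as a constant plus a mixture of decaying exponentials over the spectral measure $\mu_{\theta,d}$. A toy numerical illustration with a made-up discrete spectral measure (all numbers hypothetical), also checking the derivative relation $r(\tau) = -c'(\tau)$ given by the fluctuation-dissipation lemma (Lemma 4.1):

```python
import numpy as np

# Toy version of the representation (149): a made-up discrete spectral measure
# mu = sum_i w_i delta_{a_i} supported above a gap iota = 0.5 gives
#   c(tau) = c_inf + sum_i w_i * exp(-a_i * tau),
# which decays to c_inf at rate exp(-iota * tau); by the fluctuation-dissipation
# relation (Lemma 4.1), the matching response is r(tau) = -c'(tau).
a = np.array([0.5, 1.0, 3.0])   # hypothetical eigenvalues above the spectral gap
w = np.array([0.4, 0.3, 0.3])   # hypothetical weights mu({a_i})
c_inf = 2.0

c = lambda t: c_inf + np.sum(w * np.exp(-a * t))
r = lambda t: np.sum(w * a * np.exp(-a * t))

tau, h = 1.3, 1e-6
fd = -(c(tau + h) - c(tau - h)) / (2 * h)  # central difference of -c'
print(fd, r(tau))                          # agree to ~1e-9
```

Such a mixture of exponentials is automatically convex and differentiable in $\tau$, which is the structural fact exploited later in Lemma 4.10.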
We claim that τ7→d−1TrC∞ θ(τ) is uniformly equicontinuous over all ( n, d): Observe that d−1TrC∞ θ(τ)−d−1TrC∞ θ(τ′) = d−1 θ⊤[Pτ−Pτ′](θ) ≤d−1⟨∥θ∥2 2⟩1/2· ⟨∥[Pτ−Pτ′](θ)∥2 2⟩1/2 =d−1⟨∥θ∥2 2⟩1/2· ⟨∥θ−P|τ−τ′|(θ)∥2 2⟩1/2. (151) [84, Theorem II.2.1] implies ∥Pt(x)−x∥2 2≤C(1 +∥x∥2 2)tfor all t∈[0,1] and a constant C > 0. This and (121) imply that the right side of (151) is at most C′|τ−τ′|for a constant C′>0 and all |τ−τ′| ≤1, so τ7→d−1TrC∞ θ(τ) is uniformly equicontinuous as claimed. We note that for any M > 0, by the relation (149), cθ,d+µθ,d([0, M)) +e−Mτµθ,d([M,∞))≥d−1TrC∞ θ(τ). Then setting τ= 1/Mand rearranging yields (1−e−1)µθ,d([M,∞))≤cθ,d+mθ,d−d−1TrC∞ θ(1/M) =d−1TrC∞ θ(0)−d−1TrC∞ θ(1/M). 45 So this uniform equicontinuity implies that the measures µθ,dare uniformly tight. Then, there exists a subsequence {(nk, dk)}k≥1of (n, d) along which µθ,d⇒µθweakly, for some finite positive measure µθon [ι,∞). Recalling also that cθ,d=d−1∥⟨θ⟩∥2 2→ctti θ(∞) asn, d→ ∞ by the definition (146), and setting ctti θ(τ) =ctti θ(∞) +Z∞ ιe−aτµθ(da), (152) this weak convergence applied to (149) implies lim k→∞d−1 kTrC∞ θ(τ) =ctti θ(τ). Combining this with the convergence lim k→∞d−1 kTrCθ(s+τ, s) =Cθ(s+τ, s) by Theorem 4.3, for any s, τ≥0 we have Cθ(s+τ, s)−ctti θ(τ) ≤lim sup k→∞ d−1 kTrCθ(s+τ, s)−d−1 kTrC∞ θ(τ) ≤Ce−cs, (153) where the last inequality holds by (116). Since Cθ(t, s) is non-random, this implies that ctti θ(τ) is also non-random for every τ≥0, and thus also the measure µθis non-random. This shows (30). The statement (31) follows analogously: By arguments parallel to (144) and
(146), applying Theorem 4.3 and (139) and (142) shows that there exist limits ˜cη(s) := lim n,d→∞δ nσ4(XPs(θ0)−y)⊤(X⟨θ⟩ −y) = lim t→∞Cη(t, s), (154) ctti η(∞) := lim n,d→∞δ nσ4∥X⟨θ⟩ −y∥2 2= lim s→∞˜cη(s). (155) Note that n−1TrC∞ η(τ) =n−1nX i=1 xi(θ)Pτxi(θ) =δ nσ4∥X⟨θ⟩ −y∥2 2+1 nnX i=1Z∞ ιe−aτd⟨xi(θ)Eaxi(θ)⟩. Defining cη,n=δ nσ4∥X⟨θ⟩ −y∥2 2, µ η,n(S) =1 nnX i=1Z Sd⟨xi(θ)Eaxi(θ)⟩, m η,n=µη,n([ι,∞)), (156) we have cη,n+mη,n=n−1TrC∞ η(0) =δ nσ4⟨∥Xθ−y∥2 2⟩ ≤C. (157) So along some subsequence {(nk, dk)}k≥1, we have cη,n→ctti η(∞),µη,n⇒µηweakly for a finite positive measure µηon [ι,∞), and lim k→∞n−1 kTrC∞ η(τ) =ctti η(τ) for the quantity ctti η(τ) =ctti η(∞) +Z∞ ιe−aτµη(da). By an argument parallel to (153) using Theorem 4.3 and (118), this shows |Cη(s+τ, s)−ctti η(τ)| ≤Ce−cs, establishing (31). Finally, for (32), observe that by Theorem 4.3, lim n,d→∞d−1Ps(θ0)⊤θ∗=Cθ(s,∗). Noting that lim sup n,d→∞d−1 (Ps(θ0)− ⟨θ⟩)⊤θ∗ ≤lim sup n,d→∞d−1∥Psθ0− ⟨θ⟩∥2· ∥θ∗∥2≤Ce−cs by (124), this implies the existence of the limit cθ(∗) := lim n,d→∞d−1⟨θ⟩⊤θ∗= lim s→∞Cθ(s,∗), (158) which satisfies |Cθ(s,∗)−cθ(∗)| ≤Ce−cs. This shows (32). Lemma 4.10. Under Assumptions 2.1, 2.2(a), and 2.7(a), the DMFT system prescribed by Theorem 2.3(a) satisfies the conditions of Definition 2.4(2) with ε(t) =Ce−ctfor some constants C, c > 0. 46 Proof. We again restrict to the almost sure event where the convergence statements of Theorem 4.3 hold, and where E(C0, CLSI)∩ {∥θ0∥2 2≤C0d}holds for all large n, d. Consider first Rθ(t, s). By (138) of Lemma 4.8 and the convergence Rθ(t, s) = lim n,d→∞d−1Rθ(t, s) of Theorem 4.3, |Rθ(t, s)| ≤Ce−ctfor all s≤t/2. (159) Fors≥t/2, note that the forms of (149) and (152) imply that both d−1TrC∞ θ(τ) and ctti θ(τ) are convex and differentiable in τ≥0. Then, along the subsequence {(nk, dk)}k≥1of the preceding proof, the pointwise convergence lim k→∞d−1 kTrC∞ θ(τ) =ctti θ(τ) implies also lim k→∞d−1 k∂τTrC∞ θ(τ) =∂τctti θ(τ) for each τ≥0 (c.f. [85, Theorem 25.7]). 
By the fluctuation-dissipation relation of Lemma 4.1 applied with A=B=ej for each j= 1, . . . , d , we have ∂τTrC∞ θ(τ) =−TrR∞ θ(τ). Then, defining rtti θ(τ) =−∂τctti θ(τ), this shows limk→∞d−1 kTrR∞ θ(τ) =rtti θ(τ). Combining with lim k→∞d−1 kTrRθ(s+τ, s) =Rθ(s+τ, s) from Theorem 4.3, for any s, τ≥0 we have that Rθ(s+τ, s)−rtti θ(τ) ≤lim sup k→∞ d−1 kTrRθ(s+τ, s)−d−1 kTrR∞ θ(τ) ≤Ce−cs, where the last inequality applies (117). In particular, for any t≥0, |Rθ(t, s)−rtti θ(t−s)| ≤Ce−c′tfor all s∈[t/2, t]. (160) Together, (159) and (160) imply (34). The statement (35) follows analogously, and we omit this for brevity. Proof of Theorem 2.9. This follows from Lemmas 4.9 and 4.10. 4.5 Limit MSE and free energy We now show Corollary 2.10 on the asymptotic values of the mean-squared-errors and the free energy. Proposition 4.11. Suppose Assumptions 2.1, 2.2(a), and 2.7(a) hold. Let YMSE ∗and the marginal like- lihood Pg(y|X)be as defined in Corollary 2.10. Let E[· |X]denote the expectation with respect to θ∗ jiid∼g∗ andεiiid∼ N(0, σ2)conditioning on X. Then almost surely, lim n,d→∞d−1logPg(y|X)−d−1E[logPg(y|X)|X] = 0, lim n,d→∞YMSE ∗−E[YMSE ∗|X] = 0.(161) Proof. We condition on Xthroughout, and restrict to the X-dependent event {∥X∥op≤C0and (46)
holds }. Note that by assumption, this event holds a.s. for all large n, dand does not depend on θ∗,ε. For the first statement, let us consider Z(θ∗,ε) = logZ exp −1 2σ2∥Xθ∗+ε−Xθ∥2 2+dX j=1logg(θj) dθ (which coincides with log Pg(y|X) up to an additive constant) as a function of ( θ∗,ε). Then ∇θ∗Z(θ∗,ε) =−1 σ2X⊤(Xθ∗+ε−X⟨θ⟩),∇εZ(θ∗,ε) =−1 σ2(Xθ∗+ε−X⟨θ⟩). Under Assumption 2.1, note that θ∗andεhave independent subgaussian entries, so there are constants C1, c > 0 such that (c.f. [81, Eq. (3.1)]) P[∥θ∗∥2 2+∥ε∥2 2> C 1d]≤e−cd. (162) When ∥θ∗∥2 2+∥ε∥2 2≤C1d, we have the bound ⟨∥θ∥2 2⟩ ≤Cdfrom (106). Applying this and ∥X∥op≤C0, ∥∇(θ∗,ε)Z(θ∗,ε)∥21{∥θ∗∥2 2+∥ε∥2 2≤C1d} ≤L√ d 47 for a constant L >0. Thus Z(θ∗,ε) isL√ d-Lipschitz on {∥θ∗∥2 2+∥ε∥2 2≤C1d}, so its Lipschitz extension ˜Z(θ∗,ε) = inf x∈Rd+n:∥x∥2 2≤C1dZ(x) +L√ d∥x−(θ∗,ε)∥2 is globally L√ d-Lipschitz on Rd+nand ˜Z(θ∗,ε) =Z(θ∗,ε) over {∥θ∗∥2 2+∥ε∥2 2≤C1d}. Under Assumption 2.1, the joint distribution of ( θ∗,ε) satisfies a log-Sobolev inequality by tensorization, implying the Lipschitz concentration P[|˜Z(θ∗,ε)−E[˜Z(θ∗,ε)|X]| ≥td|X]≤2e−t2d/(2L2). (163) We may bound |E[˜Z(θ∗,ε)|X]−E[Z(θ∗,ε)|X]| ≤Eh 1{∥θ∗∥2 2+∥ε∥2 2≥C1d} |˜Z(θ∗,ε)|+|Z(θ∗,ε)| Xi ≤P[∥θ∗∥2 2+∥ε∥2 2≥C1d|X]1/2 (E[˜Z(θ∗,ε)|X]2)1/2+ (E[Z(θ∗,ε)|X]2)1/2 Applying the upper bound Z(θ∗,ε)≤logR exp(Pd j=1logg(θj))dθ= 0, Jensen’s inequality lower bound Z(θ∗,ε)≥Eg[−1 2σ2∥Xθ∗+ε−Xθ∥2 2] where Eg[·] is the expectation over θjiid∼g, and|˜Z(θ∗,ε)−Z(0)|= |˜Z(θ∗,ε)−˜Z(0)| ≤L√ d(∥θ∗∥2 2+∥ε∥2 2)1/2, we obtain |E[˜Z(θ∗,ε)|X]−E[Z(θ∗,ε)|X]| ≤P[∥θ∗∥2 2+∥ε∥2 2≥C1d|X]1/2·Cd≤e−c′d for all large n, d, the last inequality applying (162). Thus (163) and (162) imply P[|Z(θ∗,ε)−E[Z(θ∗,ε)|X]| ≥td+e−c′d|X]≤2e−t2d/(2L2)+e−cd, implying the first statement of (161) by the Borel-Cantelli lemma. For the second statement, let us write nYMSE ∗(θ∗,ε) =∥Xθ∗+ε−X⟨θ⟩∥2 2 viewed also as a function of ( θ∗,ε). 
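As an aside, the Lipschitz-extension device used for $Z$ above can be verified exactly in a toy 1D setting (a hypothetical function, not the paper's $Z$): the infimal convolution with $L\|\cdot\|$ agrees with $Z$ on the set where $Z$ is $L$-Lipschitz and is globally $L$-Lipschitz.

```python
import numpy as np

# Toy 1D check (hypothetical function, not the paper's Z) of the Lipschitz
# extension used above: if Z is L-Lipschitz on S = [-1, 1], then
#   Z_tilde(x) = inf_{u in S} [ Z(u) + L * |x - u| ]
# is globally L-Lipschitz on R and agrees with Z on S.
L = 2.0
Z = lambda u: np.sin(2.0 * u)          # |Z'| <= 2, so 2-Lipschitz
grid = np.linspace(-1.0, 1.0, 2001)    # discretization of S

def Z_tilde(x):
    # infimum of L-Lipschitz functions of x, hence L-Lipschitz in x
    return np.min(Z(grid) + L * np.abs(x - grid))

print(Z_tilde(0.8), Z(0.8))  # equal on S
```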
Writing κ2(·) for the covariance associated to the posterior mean ⟨·⟩, differentiating in ( θ∗,ε) gives, for any unit vectors u∈Rdandv∈Rn, u⊤∇θ∗[nYMSE ∗] = 2( Xθ∗+ε−X⟨θ⟩)⊤Xu−2 σ2κ2 θ⊤X⊤Xu,(Xθ∗+ε−X⟨θ⟩)⊤Xθ , v⊤∇ε[nYMSE ∗] = 2( Xθ∗+ε−X⟨θ⟩)⊤v−2 σ2κ2 θ⊤X⊤v,(Xθ∗+ε−X⟨θ⟩)⊤Xθ . The Poincar´ e inequality implied by the assumed LSI for Pg(θ|X,y) shows, for any vector x∈Rd, κ2(x⊤θ,u⊤θ)≤C∥x∥2 2. On the event {∥θ∗∥2 2+∥ε∥2 2≤C1d}, applying this Poincar´ e bound, Cauchy-Schwarz for κ2(·), and ∥X∥op≤ C0and⟨∥θ∥2 2⟩ ≤Cdfrom (106), we obtain |u⊤∇θ∗[nYMSE ∗]| ≤C√ dand|v⊤∇ε[nYMSE ∗]| ≤C√ dfor any unit vectors u,v, and hence ∥∇θ∗,ε[nYMSE ∗]∥21{∥θ∗∥2 2+∥ε∥2 2≤C1d} ≤L√ d for some constant L >0. So nYMSE ∗isL√ d-Lipschitz in ( θ∗,ε) on{∥θ∗∥2 2+∥ε∥2 2≤C1d}. For any ( θ∗,ε), we also have the bound |nYMSE ∗(θ∗,ε)| ≤C(∥θ∗∥2 2+∥ε∥2 2)1/2by (106), so the second statement of (161) follows from the same Lipschitz extension and concentration argument as above. Proof of Corollary 2.10(a). We restrict to the almost sure event where E(C0, CLSI) holds for all large n, d. Observe that by (148) and (150), MSE = d−1⟨∥θ− ⟨θ⟩∥2 2⟩=d−1 ⟨∥θ∥2 2⟩ − ∥⟨ θ⟩∥2 2 =md=µd([ι,∞)), 48 so lim n,d→∞MSE = µθ([ι,∞)) =ctti θ(0)−ctti θ(∞) by (152). Also MSE ∗=d−1∥θ∗− ⟨θ⟩∥2 2=d−1 ∥θ∗∥2 2−2⟨θ⟩⊤θ∗+∥⟨θ⟩∥2 2 , so lim n,d→∞MSE ∗=E[θ∗2]−2cθ(∗) +ctti θ(∞) by Assumption 2.1 and the definitions (146) and (158). Thus MSE→mse and MSE ∗→mse∗for the quantities mse ,mse∗defined in (42). Similarly YMSE = n−1 ∥Xθ−X⟨θ⟩∥2 2 =n−1 ⟨∥Xθ−y∥2 2⟩ − ∥X⟨θ⟩ −y∥2 2 . Then by (156) and (157), lim n,d→∞n−1∥X⟨θ⟩−y∥2 2=σ4 δctti
η(∞) and YMSE =σ4 δµη,n([ι,∞))→σ4 δ(ctti η(0)− ctti η(∞)) = ymse as defined in (42). For YMSE ∗, writing E[· |X] for the expectation over ( θ∗,ε) as in Proposition 4.11, observe first that n−1E[∥X⟨θ⟩ −y∥2 2|X] =n−1E[∥X⟨θ⟩ −Xθ∗∥2 2|X]−2n−1E[ε⊤(X⟨θ⟩ −Xθ∗) +σ2|X], and Gaussian integration-by-parts gives E[ε⊤(X⟨θ⟩ −Xθ∗)|X] =E[ε⊤X⟨θ⟩ |X] =E[⟨∥Xθ−Xθ∗∥2 2⟩ |X]−E[∥X⟨θ⟩ −Xθ∗∥2 2|X] =E[⟨∥Xθ−X⟨θ⟩∥2 2⟩ |X] =nE[YMSE |X]. Thus E[YMSE ∗|X] =n−1E[∥X⟨θ⟩ −Xθ∗∥2 2|X] =n−1E[∥X⟨θ⟩ −y∥2 2|X] + 2E[YMSE |X]−σ2. (164) We remark that n−1∥X⟨θ⟩ −y∥2 2and YMSE are bounded for all large n, don the event E(C0, CLSI), by the bound for ⟨∥θ∥2 2⟩ ≤Cfrom (106). Thus, applying YMSE →ymse and n−1∥X⟨θ⟩ −y∥2 2→σ4 δctti η(∞) as argued above and dominated convergence, the right side of (164) converges to ymse∗=σ4 δ(2ctti η(0)− ctti η(∞))−σ2as defined in (42). Then the concentration of YMSE ∗established in Proposition 4.11 combined with (164) show lim n,d→∞YMSE ∗= ymse∗. To show the last statement (48), conditional on X,θ∗,εand averaging over the initial condition θ0∼ q0=g⊗d 0, letqtbe the conditional law of θt. Consider a coupling of a posterior sample θ∼qwithθt∼qt such that ⟨∥θt−θ∥2 2⟩=W2(qt, q)2, where ⟨·⟩denotes the expectation under this coupling and W2(·) is the Wasserstein-2 distance, both conditional on X,θ∗,ε. For a given realization of ( θt,θ) from this coupling, considering the coordinatewise coupling of1 dPd j=1δ(θ∗ j,θt j)with1 dPd j=1δ(θ∗ j,θj)shows W2 1 ddX j=1δ(θ∗ j,θt j),1 ddX j=1δ(θ∗ j,θj) 2 ≤1 ddX j=1(θt j−θj)2=1 d∥θt−θ∥2 2. Then* W2 1 ddX j=1δ(θ∗ j,θt j),1 ddX j=1δ(θ∗ j,θj)!2+ ≤1 d ∥θt−θ∥2 2 =1 dW2(qt, q)2. Applying Lemmas 4.4 and 4.6, W2(qt, q)2≤Ce−ct(⟨∥θ∥2 2⟩+⟨∥θ0∥2 2⟩)≤C′de−cton the event E(C0, CLSI), for some constants C, C′, c > 0. So on this event, lim sup n,d→∞* W2 1 ddX j=1δ(θ∗ j,θt j),1 ddX j=1δ(θ∗ j,θj)!2+ ≤C′e−ct. 
(165) Now by Theorem 2.3(a), for each fixed t≥0, almost surely with respect to the randomness of both X,θ∗,εandθ0,{bt}t≥0defining {θt}t≥0, we have lim n,d→∞W2 1 ddX j=1δ(θ∗ j,θt j),P(θ∗, θt)!2 = 0 (166) 49 where P(θ∗, θt) here is the law of ( θ∗, θt) in the DMFT system. To take an expectation over the randomness ofθ0and{bt}t≥0, note that from the definition (103), we have θt=θ0+Zt 0∇θlogq(θs) ds+√ 2bt=θ0+Zt 0h1 σ2X⊤(y−Xθs) + (log g)′(θs)i ds+√ 2bt, where (log g)′is applied entrywise. Then on E(C0, CLSI), by the Lipschitz continuity of (log g)′(θ), this implies for a constant C >0 that d−1/2∥θt∥2≤Zt 0Cd−1/2∥θs∥2ds+Ct+d−1/2∥θ0∥2+√ 2d−1/2∥bt∥2. Then for any T >0, Gronwall’s inequality gives, for a constant C >0, sup t∈[0,T]d−1/2∥θt∥2≤CeCT T+d−1/2∥θ0∥2+d−1/2sup t∈[0,T]∥bt∥2 (167) For any p >1, applying sup t∈[0,T]d−1∥bt∥2 2p ≤sup t∈[0,T]d−1dX j=1|bt j|2p≤d−1dX j=1sup t∈[0,T]|bt j|2p and Doob’s Lp-maximal inequality, we have that ⟨(supt∈[0,T]d−1∥bt∥2 2)p⟩is bounded by a ( T, p)-dependent constant. Similarly ⟨(d−1∥θ0∥2)p⟩is bounded by a ( T, p)-dependent constant, so D sup t∈[0,T]d−1∥θt∥2 2pE ≤CT,p for a constant CT,p>0, where ⟨·⟩averages over θ0and{bt}t≥0. Since W2(1 dPd j=1δ(θ∗ j,θt j),P(θ∗, θt))2≤ C(d−1∥θ∗∥2 2+d−1∥θt∥2 2+E(θ∗)2+E(θt)2), this implies on the event E(C0, CLSI) that sup t∈[0,T]W21 ddX j=1δ(θ∗ j,θt j),P(θ∗, θt)2p ≤C′ T,p (168) for a different constant C′ T,p>0. In particular, for any fixed t≥0 and p >1,
the squared Wasserstein-2 distance in (166) is uniformly bounded in Lpand hence uniformly integrable with respect to ⟨·⟩for all large n, d, so dominated convergence implies, almost surely, lim n,d→∞* W2 1 ddX j=1δ(θ∗ j,θt j),P(θ∗, θt)!2+ = 0. (169) Combining (165) and (169) shows that for any fixed t≥0, almost surely, lim sup n,d→∞* W2 1 ddX j=1δ(θ∗ j,θj),Pg∗,ω∗;g,ω!2+ ≤C e−ct+W2(P(θ∗, θt),Pg∗,ω∗;g,ω)2 . By Theorem 2.5, we have lim t→∞W2(P(θ∗, θt),Pg∗,ω∗;g,ω) = 0 so taking the limit t→ ∞ shows (48). To show Corollary 2.10(b) on the asymptotic free energy, we will apply an I-MMSE argument, together with the following proposition which guarantees continuity of mse ,mse∗in the noise variance σ2. In the later proof of Theorem 2.13, we will require also continuity in the prior parameter α; thus we establish both statements here. 50 Lemma 4.12. Suppose Assumptions 2.1 and 2.2(b) hold. Fix any open subset O⊂RK, and suppose also that Assumption 2.7 holds for each g∈ {g(·, α) :α∈O}, where the constant CLSI>0is uniform over α∈O. Consider any noise variance ˜σ2≥σ2, and define mse(˜σ2, α),mse∗(˜σ2, α)by (42) via the (approximately- TTI) DMFT limit of the Langevin dynamics (7) with a fixed prior g(·, α)in the linear model (4) with noise variance ˜σ2. Then over any compact interval I⊂[σ2,∞)and compact subset S⊂O,mse(˜σ2, α),mse∗(˜σ2, α)are Lipschitz functions of (˜σ2, α)∈I×S. Proof. Consider noise/prior parameters ( s2, α) and (˜ s2,˜α), where s2,˜s2≥σ2. Let us couple the linear models with noise variances s2and ˜s2byy=Xθ∗+szand˜y=Xθ∗+ ˜sz, where z∼ N (0,I). Fixing X,θ∗,z, let us denote U(θ) =−1 2s2∥Xθ∗+sz−Xθ∥2 2+dX j=1logg(θj, α) so that q(θ)∝eU(θ)is the posterior law given ( X,y) under parameters ( s2, α). Denote similarly ˜U(θ) with (˜s2,˜α) in place of ( s2, α), and ˜ q(θ)∝e˜U(θ)as the posterior law given ( X,˜y). We condition on X,θ∗,zand restrict to the event E′(C0, CLSI) ={∥X∥op≤C0,∥θ∗∥2 2,∥z∥2 2≤C0d,(46) holds for both qand ˜q}, which by assumption holds a.s. 
for all large n, d. We first derive a bound on the Wasserstein-2 distance between qand ˜q, conditional on X,θ∗,z. Let{θt}t≥0be the Langevin diffusion (103) with fixed prior g(·, α) and stationary distribution q(θ), initialized as θ0∼q0where q0has finite second moment and finite entropy. Let us write ⟨f(θt)⟩for the expectation over θ0and{bt}t≥0defining (103), conditional on X,θ∗,z. We apply the following argument of [86] to bound the KL-divergence D KL(qt∥˜q) conditional on X,θ∗,z: Differentiating this KL-divergence in time, d dtDKL(qt∥˜q) =d dtZ qt(logqt−log ˜q) =Zd dtqt (logqt−log ˜q) +Zqt qtd dtqt |{z} =0=Zd dtqt (logqt−˜U+ log ˜Z). The law of θtconditional on X,θ∗,zadmits a density qtwhich is described by the Fokker-Planck equation d dtqt=∇ ·[qt∇(logqt−U)] with initial condition qt|t=0=q0. Then, applying this Fokker-Planck equation and integrating by parts, we obtain d dtDKL(qt∥˜q) =−Z qt∇(logqt−U)⊤∇(logqt−˜U) =−Z qt∥∇(logqt−˜U)∥2 2−Z qt∇(˜U−U)⊤∇(logqt−˜U) ≤ −(1/2)Z qt∥∇(logqt−˜U)∥2 2+ (1/2)Z qt∥∇(˜U−U)∥2 2, the last step applying Cauchy-Schwarz for the second term. By the LSI for ˜ q, the first term (the relative Fisher information) is lower bounded as Z qt∥∇(logqt−˜U)∥2 2=Z qt ∇logqt ˜q 2 2≥1 2CLSIDKL(qt∥˜q). Thusd dtDKL(qt∥˜q)≤ −1 4CLSIDKL(qt∥˜q) +1 2⟨∥∇˜U(θt)− ∇U(θt)∥2 2⟩| {z } :=∆( t). 51 Integrating this inequality
shows, for some constants C, c > 0 depending only on CLSIand for any T >0, DKL(qT∥˜q)≤C sup t∈[0,T]∆(t) +e−cTDKL(q0∥˜q) . (170) We now specialize (170) to the initialization q0= ˜q, and bound ∆( t). We have ∆(t)≤* 1 s2X⊤(Xθ∗+sz−Xθt)−1 ˜s2X⊤(Xθ∗+ ˜sz−Xθt) 2 2+dX j=1 ∂θlogg(θt j, α)−∂θlogg(θt j,˜α)2+ . LetC, C′, C′′>0 be constants depending on the compact sets S, Iof the lemma statement and changing from instance to instance. For α,˜α∈Sands2,˜s2∈I, |s−2−˜s−2| ≤C|s2−˜s2|,|s−1−˜s−1| ≤C|s2−˜s2|,|∂θlogg(θ;α)−∂θlogg(θ; ˜α)| ≤C∥α−˜α∥2, the last inequality holding by Assumption 2.2(b). Thus ∆(t)≤Ch ∥X∥4 op(∥θ∗∥2 2+⟨∥θt∥2 2⟩) +∥X∥2 op∥z∥2 2i (s2−˜s2)2+Cd∥α−˜α∥2 2. On the event E′(C0, CLSI), we have ⟨∥θt∥2 2⟩ ≤ C(⟨∥θ0∥2 2⟩+d) by (121), and ⟨∥θ0∥2 2⟩ ≤ Cdunder the initialization q0= ˜qwhich holds also by (121). Applying these bounds together with ∥X∥op≤C,∥θ∗∥2 2≤Cd, and∥z∥2 2≤Cdby definition of E′(C0, CLSI), we have sup t≥0∆(t)≤C′d(s2−˜s2)2+C′d∥α−˜α∥2 2. Applying this and q0= ˜qto (170), we have on the event E′(C0, CLSI) that sup t≥0DKL(qt∥˜q)≤Cd(s2−˜s2)2+Cd∥α−˜α∥2 2. By lower-semicontinuity of KL-divergence and the T2-transportation inequality for ˜ qimplied by the LSI (c.f. [83, Theorem 9.6.1]), W2(q,˜q)2≤CDKL(q∥˜q)≤Clim inf t→∞DKL(qt∥˜q)≤C′d(s2−˜s2)2+C′d∥α−˜α∥2 2. (171) This gives our desired bound on the Wasserstein-2 distance between qand ˜q. Now let ⟨f(θ)⟩qand⟨f(θ)⟩˜qbe the posterior expectations under q(given y) and ˜ q(given ˜y). Then by Jensen’s inequality, ∥⟨θ⟩q− ⟨θ⟩˜q∥2≤W2(q,˜q). Applying |∥x∥2 2− ∥y∥2 2| ≤ ∥x−y∥2· ∥x+y∥2and Cauchy-Schwarz, also |⟨∥θ∥2 2⟩q− ⟨∥θ∥2 2⟩˜q| ≤W2(q,˜q)·q 2⟨∥θ∥2 2⟩q+ 2⟨∥θ∥2 2⟩˜q≤C√ d W2(q,˜q) where the last inequality applies ⟨∥θ∥2⟩q≤ ⟨∥θ∥2 2⟩1/2 q≤C√ donE(C0, CLSI) by (121), and similarly for ˜ q. 
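The $T_2$-transportation step in (171) can be checked exactly in a Gaussian toy case with made-up means: for isotropic Gaussians sharing covariance $s_2 I$ one has $W_2^2 = \|m_1 - m_2\|_2^2$ and $D_{\mathrm{KL}} = \|m_1 - m_2\|_2^2/(2s_2)$, so $W_2^2 \le C\,D_{\mathrm{KL}}$ holds with equality for $C = 2s_2$.

```python
import numpy as np

# Exact check of a T2-transportation inequality in a Gaussian toy case: for
# q = N(m1, s2*I) and q~ = N(m2, s2*I) (made-up means below),
#   W2(q, q~)^2 = ||m1 - m2||^2  and  KL(q || q~) = ||m1 - m2||^2 / (2*s2),
# so W2^2 = C * KL holds with equality for C = 2*s2.
m1 = np.array([1.0, -2.0, 0.5])
m2 = np.array([0.0, 1.0, 0.5])
s2 = 0.7

w2_sq = np.sum((m1 - m2) ** 2)
kl = np.sum((m1 - m2) ** 2) / (2.0 * s2)
print(w2_sq, 2.0 * s2 * kl)  # equal
```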
Then, denoting by MSE( s2, α) and MSE(˜ s2,˜α) the values of MSE as defined in Corollary 2.10 under qand ˜q, we have |MSE( s2, α)−MSE(˜ s2,˜α)|= d−1⟨∥θ− ⟨θ⟩q∥2 2⟩q−d−1⟨∥θ− ⟨θ⟩˜q∥2 2⟩˜q ≤d−1 ⟨∥θ∥2 2⟩q− ⟨∥θ∥2 2⟩˜q +d−1 ∥⟨θ⟩q∥2 2− ∥⟨θ⟩˜q∥2 2 ≤C′W2(q,˜q)√ d≤C′′|s2−˜s2|+C′′∥α−˜α∥2. Similarly |MSE ∗(s2, α)−MSE ∗(˜s2,˜α)|= d−1∥θ∗− ⟨θ⟩q∥2 2−d−1∥θ∗− ⟨θ⟩˜q∥2 2 ≤2d−1∥θ∗∥2∥⟨θ⟩q− ⟨θ⟩˜q∥2+d−1 ∥⟨θ⟩q∥2 2− ∥⟨θ⟩˜q∥2 2 ≤C′W2(q,˜q)√ d≤C′′|s2−˜s2|+C′′∥α−˜α∥2. 52 SinceE′(C0, CLSI) holds a.s. for all large n, d, and Corollary 2.10(a) already proven shows lim n,d→∞MSE = mse and lim n,d→∞MSE ∗= mse ∗a.s. at both ( s2, α) and (˜ s2,˜α), this implies |mse(s2, α)−mse(˜s2,˜α)|,|mse∗(s2, α)−mse∗(˜s2,˜α)| ≤C|s2−˜s2|+C∥α−˜α∥2, so mse( s2, α) and mse ∗(s2, α) are locally Lipschitz as desired. Proof of Corollary 2.10(b). We apply Corollary 2.10(a) and an I-MMSE relation for mismatched Gaussian channels. Write E[· |X] for the expectation over ( θ∗,ε) conditional on Xas in Proposition 4.11. Let I(y,θ∗) =E logP(y|θ∗,X) Pg∗(y|X) X =−E[logPg∗(y|X)|X]−n 2(1 + log 2 πσ2) be the signal-observation mutual information in the linear model (4) conditional on X, where P(y|θ∗,X) is the Gaussian likelihood of yandPg∗(y|X) is the marginal likelihood (6) under the true prior g∗. Then E[logPg(y|X)|X] =−DKL(Pg∗(y|X)∥Pg(y|X)) +E[logPg∗(y|X)|X] =−DKL(Pg∗(y|X)∥Pg(y|X))−I(y,θ∗)−n 2(1 + log 2 πσ2) (172) where here and throughout the proof, D KL(·) denotes the KL-divergence also conditional on X. Let us denote the inverse noise variance by s−1=σ2and write E(s, g) =E[YMSE ∗|X] =n−1E[∥X⟨θ⟩ −Xθ∗∥2|X] (173) for the expected YMSE ∗in the linear model (4) with assumed prior gand noise variance s−1. We clarify that this means ⟨·⟩in (173) is the posterior average under the
law Pg(θ|X,y)∝exp −s 2∥y−Xθ∥2 2+dX j=1logg(θj) andE[· |X] is the expectation over ( θ∗,ε) where εalso has variance s−1. We write also I[s],DKL[s] for the above quantities I(y,θ∗) and D KL(Pg∗(y|X)∥Pg(y|X)) in this model with noise variance s−1. Then [87, Theorem 2] and [88, Eq. (24)] show the I-MMSE relations d dsI[s] =n 2E(s, g∗),d dsDKL[s] =n 2 E(s, g)−E(s, g∗) . For any fixed n, dandX, in the limit s→0, it is direct to check that I[s]→0 and D KL[s]→0. Thus, for I(y,θ∗)≡I[σ−2] and D KL(Pg∗(y|X)∥Pg(y|X))≡DKL[σ−2] in the original model with noise variance σ2, integrating these I-MMSE relations shows DKL(Pg∗(y|X)∥Pg(y|X)) +I(y,θ∗) =n 2Zσ−2 0E(s, g)ds. (174) Assumption 2.7(b) ensures that the posterior LSI (46) holds a.s. in the model with any noise variance s−1∈ [σ2,∞). Then applying Corollary 2.10(a) already shown and the concentration of YMSE ∗in Proposition 4.11, we have E(s, g)→ymse∗(s, g) a.s. for each s−1∈(σ2,∞), where ymse∗(s, g) is defined by (42) via the DMFT limit of the Langevin dynamics (7) with fixed prior g(·) in the linear model with noise variance s−1. To apply dominated convergence, we note that on the event ∥X∥op≤C0, by the extension (110) of (106), we have E(s, g)≤Cfor a constant C > 0 uniformly over all s∈[0, σ−2] and all n, d. Then, since ∥X∥op≤C0 holds a.s. for all large n, d, taking the limit n, d→ ∞ and applying the bounded convergence theorem to (174) shows that almost surely, lim n,d→∞1 d DKL(Pg∗(y|X)∥Pg(y|X)) +I(y,θ∗) =δ 2Zσ−2 0ymse∗(s, g)ds. (175) 53 Let us now fix the assumed prior g(·), write ymse∗(s)≡ymse∗(s, g), and let (mse( s),mse∗(s), ω(s), ω∗(s)) denote the fixed points (43) corresponding to ymse∗(s). Recall the marginal density Pg,ω(y) of the scalar channel model (39), and define f(ω, ω∗, s) =−Eg∗,ω∗logPg,ω(y)−1 2 2δ+ log2π ω−δlogδs ω+ (1−δ)ω ω∗+ω sω ω∗−2 (176) =ω 2Eθ∗2−ElogZ exp ωθ(θ∗+ω−1/2 ∗z)−ω 2θ2 g(θ)dθ | {z } :=I−1 2 2δ−δlogδs ω−δω ω∗+ω sω ω∗−2 | {z } :=II. 
Here, the expectations in the second line are over θ∗∼g∗andz∼ N(0,1), and we have applied the explicit form of Pg,ω(y) and evaluated Eg∗,ω∗under the true model y=θ∗+ω−1/2 ∗zwith some some algebraic simplification. We now claim that δ 2Zs 0ymse∗(t)dt=f(ω(s), ω∗(s), s) (177) for all s∈(0, σ−2). To show this, it suffices to check lim s→0f(ω(s), ω∗(s), s) = 0 andd dsf(ω(s), ω∗(s), s) = δ 2ymse∗(s), which we may do as follows: •Let MSE( s),MSE ∗(s) denote the values of MSE ,MSE ∗in a linear model with noise variance s−1. On the event ∥X∥op≤C0, the bound (110) implies that MSE( s),MSE ∗(s)≤C(1+s∥y∥2 2/d) for a constant C > 0 (independent of s) and for all s−1∈(σ2,∞). Taking the almost sure limit as n, d→ ∞ shows that mse( s),mse∗(s)≤C. In particular, in the limit s→0, we have that mse( s),mse∗(s) remain bounded, so ω(s), ω∗(s)∼δsby the fixed point relation (43). Then ω(s)→0,ω∗(s)→0,ω(s)/s→δ, andω(s)/ω∗(s)→1 ass→0. Applying this to (176) shows lim s→0f(ω(s), ω∗(s), s) = 0 . •Differentiating the term I of (176) in ω, ω∗and applying Gaussian integration-by-parts with respect to z∼ N(0,1), we may check that ∂ωI
\[
\partial_\omega \mathrm{I} = \frac{1}{2}E\langle(\theta^* - \theta)^2\rangle_{g,\omega} - \frac{\omega}{\omega_*}E\langle(\theta - \langle\theta\rangle_{g,\omega})^2\rangle_{g,\omega}, \qquad \partial_{\omega_*}\mathrm{I} = \frac{\omega^2}{2\omega_*^2}E\langle(\theta - \langle\theta\rangle_{g,\omega})^2\rangle_{g,\omega}.
\]
Then at the fixed points $(\omega, \omega_*) = (\omega(s), \omega_*(s))$, we have
\[
\partial_\omega \mathrm{I}\big|_{(\omega,\omega_*)=(\omega(s),\omega_*(s))} = \frac{1}{2}\big(\mathrm{mse}(s) + \mathrm{mse}_*(s)\big) - \frac{\omega(s)}{\omega_*(s)}\mathrm{mse}(s), \qquad \partial_{\omega_*}\mathrm{I}\big|_{(\omega,\omega_*)=(\omega(s),\omega_*(s))} = \frac{\omega(s)^2}{2\omega_*(s)^2}\mathrm{mse}(s).
\]
Applying $\mathrm{mse}(s) = \delta/\omega(s) - \sigma^2$ and $\mathrm{mse}_*(s) = \delta/\omega_*(s) - \sigma^2$ by (43), and comparing with the derivatives of the second term II of (176), this verifies
\[
\partial_\omega f(\omega(s), \omega_*(s), s) = 0, \qquad \partial_{\omega_*} f(\omega(s), \omega_*(s), s) = 0. \tag{178}
\]
Furthermore, direct calculation shows that at $(\omega, \omega_*) = (\omega(s), \omega_*(s))$,
\[
\partial_s f(\omega(s), \omega_*(s), s) = \frac{\delta\sigma^2}{2} + \frac{\omega(s)\sigma^4}{2}\Big(\frac{\omega(s)}{\omega_*(s)} - 2\Big) = \frac{\delta}{2}\mathrm{ymse}_*(s),
\]
the second equality using (44). Lemma 4.12 implies that $\mathrm{mse}(s), \mathrm{mse}_*(s), \omega(s), \omega_*(s)$ are locally Lipschitz, and hence absolutely continuous, over $s \in (0, \sigma^{-2})$. Then also $s \mapsto f(\omega(s), \omega_*(s), s)$ is absolutely continuous, and we may differentiate by the chain rule to get
\[
\frac{d}{ds} f(\omega(s), \omega_*(s), s) = \partial_\omega f(\omega(s), \omega_*(s), s)\cdot\omega'(s) + \partial_{\omega_*} f(\omega(s), \omega_*(s), s)\cdot\omega_*'(s) + \partial_s f(\omega(s), \omega_*(s), s) = \partial_s f(\omega(s), \omega_*(s), s) = \frac{\delta}{2}\mathrm{ymse}_*(s).
\]
Combining the above arguments verifies the claim (177).

Applying (175) and (177) to (172), and writing $(\omega, \omega_*) = (\omega(\sigma^{-2}), \omega_*(\sigma^{-2}))$ for the fixed points at the original noise variance $\sigma^2$, this shows
\[
\lim_{n,d\to\infty} d^{-1}E[\log P_g(y \mid X) \mid X] = -f(\omega, \omega_*, \sigma^{-2}) - \frac{\delta}{2}(1 + \log 2\pi\sigma^2).
\]
Applying concentration of $d^{-1}\log P_g(y \mid X)$ with respect to $E[\cdot \mid X]$, which is established in Proposition 4.11, and substituting the form of $f$ in (176), this shows Corollary 2.10(b).

5 Analysis of empirical Bayes Langevin dynamics

In this section, we prove Theorem 2.13 on the adaptive empirical Bayes dynamics with time-varying prior parameter $\hat\alpha^t$, and discuss further the examples of Section 2.4.2.

5.1 General analysis under uniform LSI

We introduce a few notational shorthands: Conditional on $X, \theta^*, \varepsilon$, let $q_\alpha(\theta) \equiv P_{g(\cdot,\alpha)}(\theta \mid X, y)$ be the posterior law under the prior parameter $\alpha$. We write $\langle\cdot\rangle_\alpha$ for its posterior expectation. For $\theta \in \mathbb{R}^d$, define
\[
\bar P_\theta = \frac{1}{d}\sum_{j=1}^d \delta_{(\theta^*_j, \theta_j)}, \qquad \bar P_\alpha = \langle \bar P_\theta \rangle_\alpha. \tag{179}
\]
Thus $\bar P_\alpha$ is a $(X, \theta^*, \varepsilon)$-dependent joint law over variables $(\theta^*, \theta)$ which satisfies
\[
E_{(\theta^*,\theta)\sim\bar P_\alpha} f(\theta^*, \theta) = \frac{1}{d}\sum_{j=1}^d \langle f(\theta^*_j, \theta_j)\rangle_\alpha = \frac{1}{d}\sum_{j=1}^d \int f(\theta^*_j, \theta_j)\,q_\alpha(\theta)\,d\theta. \tag{180}
\]
We write $\theta \sim \bar P_\alpha$ as shorthand for the $\theta$-marginal of $(\theta^*, \theta) \sim \bar P_\alpha$.

We note that under Assumptions 2.2(b) and 2.11, all constants in (105) are uniform over $g \in \{g(\cdot,\alpha): \alpha \in O\}$ for the bounded domain $O$ of Assumption 2.11, where a uniform bound for $|\log g(0,\alpha)|$ follows from $|\log g(0,\alpha)| \le |\log g(0,0)| + \|\nabla_\alpha(\log g(0,0))\|_2\cdot\|\alpha\|_2 + C\|\alpha\|_2^2$ as implied by (19) of Assumption 2.2(b). Hence the bounds of Section 4.2 hold uniformly over $\alpha \in O$. In particular, from (106),
\[
\sup_{\alpha\in O}\langle\|\theta\|_2^2\rangle_\alpha \le C(d + \|y\|_2^2) \tag{181}
\]
on an event $\{\|X\|_{\mathrm{op}} \le C_0\}$ that holds a.s. for all large $n, d$. We first prove Lemma 2.12 on the derivatives of $F, \hat F$ and the uniform convergence of $\hat F, \nabla\hat F$ over $S \subset O$.

Proof of Lemma 2.12. For (a), differentiating
\[
\hat F(\alpha) = -\frac{1}{d}\log\int\Big(\frac{1}{2\pi\sigma^2}\Big)^{n/2}\exp\Big(-\frac{1}{2\sigma^2}\|y - X\theta\|_2^2 + \sum_{j=1}^d\log g(\theta_j, \alpha)\Big)\,d\theta
\]
and applying the property (180), we have
\[
\nabla\hat F(\alpha) = -\frac{1}{d}\sum_{j=1}^d\langle\nabla_\alpha\log g(\theta_j, \alpha)\rangle_\alpha = -E_{\theta\sim\bar P_\alpha}\nabla_\alpha\log g(\theta, \alpha). \tag{182}
\]
For the form of $\nabla F(\alpha)$, define analogously to (176)
\[
f(\omega, \omega_*, \alpha) = -E_{g^*,\omega_*}\log P_{g(\cdot,\alpha),\omega}(y) - \frac{1}{2}\Big(2\delta + \log\frac{2\pi}{\omega} - \delta\log\frac{\delta}{\omega\sigma^2} + (1-\delta)\frac{\omega}{\omega_*} + \omega\sigma^2\Big(\frac{\omega}{\omega_*} - 2\Big)\Big) \tag{183}
\]
where the dependence on $\alpha$ is in $P_{g(\cdot,\alpha),\omega}(y)$. For any $\alpha \in O$, let $\omega(\alpha), \omega_*(\alpha)$ be the fixed points $\omega, \omega_*$ defined by (42) via the DMFT system for the dynamics (7) with fixed prior $g \equiv g(\cdot,\alpha)$. (This
DMFT system is approximately-TTI for each $\alpha \in O$ by Assumption 2.11 and Theorem 2.9, hence $\omega(\alpha), \omega_*(\alpha)$ are well-defined.) Then
\[
F(\alpha) = f(\omega(\alpha), \omega_*(\alpha), \alpha) + \frac{\delta}{2}(1 + \log 2\pi\sigma^2). \tag{184}
\]
By the same calculations as (178), at the fixed points $(\omega(\alpha), \omega_*(\alpha))$, we have $\partial_\omega f(\omega(\alpha), \omega_*(\alpha), \alpha) = 0$ and $\partial_{\omega_*} f(\omega(\alpha), \omega_*(\alpha), \alpha) = 0$. By Lemma 4.12, $\omega(\alpha), \omega_*(\alpha)$ are locally Lipschitz and hence absolutely continuous over $\alpha \in O$. Then $F(\alpha)$ is also absolutely continuous over $\alpha \in O$, and differentiating by the chain rule gives
\begin{align*}
\nabla F(\alpha) &= \nabla_\alpha f(\omega, \omega_*, \alpha)\big|_{(\omega,\omega_*)=(\omega(\alpha),\omega_*(\alpha))}\\
&= -\nabla_\alpha\big(E_{g^*,\omega_*}\log P_{g(\cdot,\alpha),\omega}(y)\big)\big|_{(\omega,\omega_*)=(\omega(\alpha),\omega_*(\alpha))}\\
&= -\nabla_\alpha\Big(E_{g^*,\omega_*}\log\int\Big(\frac{\omega}{2\pi}\Big)^{1/2}\exp\Big(-\frac{\omega}{2}(y - \theta)^2 + \log g(\theta, \alpha)\Big)\,d\theta\Big)\Big|_{(\omega,\omega_*)=(\omega(\alpha),\omega_*(\alpha))}.
\end{align*}
By definition $P_\alpha$ is the joint law of $(\theta^*, \theta)$ under the generative process where $(\theta^*, y)$ are drawn from the Gaussian convolution model defining this expectation $E_{g^*,\omega_*}$, and where $\theta \sim P_{g(\cdot,\alpha),\omega}(\theta \mid y)$. Hence, evaluating $\nabla_\alpha$ above gives
\[
\nabla F(\alpha) = -E_{\theta\sim P_\alpha}\nabla_\alpha\log g(\theta, \alpha).
\]
For (b), let $S \subset O$ be any compact subset of the domain $O$ in Assumption 2.11, and let $Q$ be a countable dense subset of $O$. Define
\[
E(C_0, C_{\mathrm{LSI}}) = \{\|X\|_{\mathrm{op}} \le C_0,\ \text{(46) holds for } q_\alpha(\theta) \equiv P_{g(\cdot,\alpha)}(\theta \mid X, y)\ \text{for every } \alpha \in O\}.
\]
Assumptions 2.1 and 2.11 ensure for some $C_0, C_{\mathrm{LSI}} > 0$ that $E(C_0, C_{\mathrm{LSI}})$ holds a.s. for all large $n, d$, where this event depends only on $X$ and not on $\theta^*, \varepsilon$. We restrict to the almost-sure event where the convergence statements of Corollary 2.10 and Proposition 4.11 hold for every $\alpha \in Q$, and where $E(C_0, C_{\mathrm{LSI}})$ holds for all large $n, d$.

Note that Corollary 2.10 shows $\hat F(\alpha) \to F(\alpha)$ for each $\alpha \in Q$. To strengthen this to uniform convergence over $S$, note that Assumption 2.2(b) implies $\partial_\theta\nabla_\alpha\log g(\theta, \alpha)$ is uniformly bounded over $(\theta, \alpha) \in \mathbb{R}\times S$, so $\|\nabla_\alpha\log g(\theta, \alpha)\|_2 \le \|\nabla_\alpha\log g(0, \alpha)\|_2 + C|\theta|$. Then, since $\nabla_\alpha\log g(0, \alpha)$ is bounded over $\alpha \in S$ by compactness of $S$, and $\sup_{\alpha\in S}\langle\|\theta\|_2^2\rangle_\alpha \le Cd$ by (181), we have
\begin{align*}
\sup_{\alpha\in S}\|E_{\theta\sim\bar P_\alpha}\nabla_\alpha\log g(\theta, \alpha)\|_2 &\le \sup_{\alpha\in S}\frac{1}{d}\sum_{j=1}^d\langle\|\nabla_\alpha\log g(\theta_j, \alpha)\|_2\rangle_\alpha\\
&\le \sup_{\alpha\in S}\|\nabla_\alpha\log g(0, \alpha)\|_2 + \frac{C}{d}\sum_{j=1}^d\langle|\theta_j|\rangle_\alpha \le C'. \tag{185}
\end{align*}
This shows $\nabla\hat F(\alpha)$ is bounded over any compact subset $S \subset O$.
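The differentiation step behind (182), namely that the gradient of the negative log-likelihood $\hat F$ equals minus the posterior average of the prior score $\nabla_\alpha\log g$, can be checked numerically in a one-dimensional toy model. All specifics below (a scalar observation with likelihood $\propto e^{-(y-\theta)^2/2}$ and Gaussian location prior $g(\theta,\alpha) = N(\alpha, 1)$) are illustrative assumptions, not the paper's model:

```python
import numpy as np

# 1-D sketch of the score identity behind (182):
#   d/d_alpha F_hat(alpha) = -< d/d_alpha log g(theta, alpha) >_posterior.
# Toy choices (assumptions): likelihood ∝ exp(-(y-theta)^2/2),
# prior g(theta, alpha) = N(alpha, 1), so d/d_alpha log g(theta, alpha) = theta - alpha.
y, alpha, h = 0.3, 1.1, 1e-5
dx = 1e-3
theta = np.arange(-10.0, 10.0, dx)

def F_hat(a):
    # negative log of the (unnormalized) marginal likelihood, via a Riemann sum
    w = np.exp(-0.5 * (y - theta) ** 2 - 0.5 * (theta - a) ** 2)
    return -np.log(w.sum() * dx)

w = np.exp(-0.5 * (y - theta) ** 2 - 0.5 * (theta - alpha) ** 2)
score = -((theta - alpha) * w).sum() / w.sum()   # -<d/d_alpha log g>_posterior
fd = (F_hat(alpha + h) - F_hat(alpha - h)) / (2 * h)  # central finite difference
assert abs(fd - score) < 1e-6                    # both equal (alpha - y)/2 here
```

In this conjugate toy case both sides evaluate to $(\alpha - y)/2$, so the finite difference and the posterior-score average agree to quadrature precision.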
Then for any compact $S \subset O$, the functions $\hat F(\alpha)$ for all $n, d$ are equicontinuous in a neighborhood of each point $\alpha \in S$, and hence are uniformly equicontinuous over $S$, since a finite number of such neighborhoods cover $S$. Then by Arzelà-Ascoli, the convergence $\hat F(\alpha) \to F(\alpha)$ for each $\alpha \in Q$ implies uniform convergence over $\alpha \in S$.

We next show the pointwise convergence $\nabla\hat F(\alpha) \to \nabla F(\alpha)$ for each $\alpha \in Q$. Recalling our definition of $\bar P_\theta$ in (179), and applying Jensen's inequality and the convexity $W_2(\lambda P + (1-\lambda)P', Q)^2 \le \lambda W_2(P, Q)^2 + (1-\lambda)W_2(P', Q)^2$ of the squared Wasserstein-2 distance,
\[
W_2(\bar P_\alpha, P_\alpha)^2 \le \langle W_2(\bar P_\theta, P_\alpha)^2\rangle_\alpha.
\]
For each $\alpha \in Q$, the right side converges to 0 as $n, d \to \infty$ by the statement (48) of Corollary 2.10(a). Thus $\lim_{n,d\to\infty} W_2(\bar P_\alpha, P_\alpha) = 0$. Assumption 2.2(b) ensures that $\nabla_\alpha\log g(\theta, \alpha)$ is Lipschitz in $\theta$, so this Wasserstein-2 convergence implies
\[
\lim_{n,d\to\infty}\nabla\hat F(\alpha) = -\lim_{n,d\to\infty}E_{\theta\sim\bar P_\alpha}\nabla_\alpha\log g(\theta, \alpha) = -E_{\theta\sim P_\alpha}\nabla_\alpha\log g(\theta, \alpha) = \nabla F(\alpha)
\]
for each $\alpha \in Q$, as claimed.

To extend this to uniform convergence over any compact subset $S \subset O$, we differentiate (182) a second time. Writing $\mathrm{Var}_\alpha, \mathrm{Cov}_\alpha$ for the variance and covariance under $\langle\cdot\rangle_\alpha$,
\[
\nabla^2\hat F(\alpha) = -\frac{1}{d}\sum_{j=1}^d\langle\nabla^2_\alpha\log g(\theta_j, \alpha)\rangle_\alpha - \frac{1}{d}\mathrm{Cov}_\alpha\Big(\sum_{j=1}^d\nabla_\alpha\log g(\theta_j, \alpha)\Big). \tag{186}
\]
The first term is uniformly bounded over $\alpha \in S$, by the same argument as showing boundedness of $\nabla\hat F(\alpha)$ above. For the second term, on the event $E(C_0, C_{\mathrm{LSI}})$, for
every unit vector $v \in \mathbb{R}^K$ and $\alpha \in S$,
\[
\mathrm{Var}_\alpha\Big(\sum_{j=1}^d v^\top\nabla_\alpha\log g(\theta_j, \alpha)\Big) \le (C_{\mathrm{LSI}}/2)\Big\langle\sum_{j=1}^d\big(v^\top\partial_\theta\nabla_\alpha\log g(\theta_j, \alpha)\big)^2\Big\rangle_\alpha
\]
by the Poincaré inequality for $q_\alpha$ implied by its LSI. Since $\partial_\theta\nabla_\alpha\log g(\theta, \alpha)$ is bounded over $\alpha \in S$, the second term of (186) is also bounded on $E(C_0, C_{\mathrm{LSI}})$. Thus $\nabla^2\hat F(\alpha)$ is uniformly bounded over $\alpha \in S$ for all large $n, d$. This implies as above that for any compact $S \subset O$, the functions $\nabla\hat F(\alpha)$ for all large $n, d$ are uniformly equicontinuous on $S$, so $\nabla\hat F(\alpha) \to \nabla F(\alpha)$ uniformly over $\alpha \in S$. This shows part (b).

For part (c), note that if $g^* = g(\cdot, \alpha^*)$, then
\[
E[\hat F(\alpha) \mid X] - E[\hat F(\alpha^*) \mid X] = d^{-1}D_{\mathrm{KL}}(P_{g(\cdot,\alpha^*)}(y \mid X)\,\|\,P_{g(\cdot,\alpha)}(y \mid X)) \ge 0,
\]
where here $D_{\mathrm{KL}}(\cdot)$ is the KL-divergence conditional on $X$. Thus $\alpha^*$ is a minimizer of $\alpha \mapsto E[\hat F(\alpha) \mid X]$ over $\mathbb{R}^K$. Applying the convergence $\hat F(\alpha) - E[\hat F(\alpha) \mid X] \to 0$ for each $\alpha \in Q$ from Proposition 4.11, we have also $E[\hat F(\alpha) \mid X] \to F(\alpha)$ for each $\alpha \in Q$. Note that $\nabla_\alpha E[\hat F(\alpha) \mid X] = -E[E_{\theta\sim\bar P_\alpha}\nabla_\alpha\log g(\theta, \alpha) \mid X]$, and that $\sup_{\alpha\in S}E[\langle\|\theta\|_2^2\rangle_\alpha \mid X] \le E[\sup_{\alpha\in S}\langle\|\theta\|_2^2\rangle_\alpha \mid X] \le Cd$ on $E(C_0, C_{\mathrm{LSI}})$, by (181). Then the argument (185) shows also that $\nabla_\alpha E[\hat F(\alpha) \mid X]$ is uniformly bounded and equicontinuous over $\alpha \in S$, hence
\[
\lim_{n,d\to\infty}\sup_{\alpha\in S}|E[\hat F(\alpha) \mid X] - F(\alpha)| = 0.
\]
Since $\alpha^*$ is a minimizer of $E[\hat F(\alpha) \mid X]$, this implies that $F(\alpha) \ge F(\alpha^*)$ for every $\alpha \in S$. Since this holds for every compact subset $S \subset O$, this shows part (c).

We proceed to prove Theorem 2.13. Let $\{\theta^t, \hat\alpha^t\}_{t\ge 0}$ be the solution of the adaptive Langevin equations (9-10). Let $\{\alpha^t\}_{t\ge 0}$ be the (deterministic) $\alpha$-component of the DMFT limit of $\{\hat\alpha^t\}_{t\ge 0}$ prescribed by Theorem 2.3(b), and consider the SDE
\[
d\tilde\theta^t = \nabla_{\tilde\theta}\Big(-\frac{1}{2\sigma^2}\|y - X\tilde\theta^t\|_2^2 + \sum_{j=1}^d\log g(\tilde\theta^t_j, \alpha^t)\Big)dt + \sqrt{2}\,db^t \tag{187}
\]
which replaces $\hat\alpha^t$ by $\alpha^t$. We couple $\{\tilde\theta^t\}_{t\ge 0}$ to $\{\theta^t, \hat\alpha^t\}_{t\ge 0}$ via the same initial conditions $\tilde\theta^0 = \theta^0$ and $\alpha^0$ of Assumption 2.1, and via the same Brownian motion $\{b^t\}_{t\ge 0}$. We write $q^t$ for the density of $\tilde\theta^t$ conditional on $X, \theta^*, \varepsilon$ and averaging over $\tilde\theta^0$, where $q^0 = g_0^{\otimes d}$ is the initial density of $\tilde\theta^0 = \theta^0$. In parallel to (179), we denote
\[
\bar P_{\tilde\theta^t} = \frac{1}{d}\sum_{j=1}^d\delta_{(\theta^*_j, \tilde\theta^t_j)}, \qquad \bar P_t = \langle\bar P_{\tilde\theta^t}\rangle \tag{188}
\]
where $\langle\cdot\rangle$ is the average with respect to $\tilde\theta^t \sim q^t$, i.e.
the average over $\tilde\theta^0$ and $\{b^t\}_{t\ge 0}$. We write $\tilde\theta^t \sim \bar P_t$ for the $\tilde\theta^t$-marginal of a sample $(\theta^*, \tilde\theta^t) \sim \bar P_t$.

Lemma 5.1. Under Assumptions 2.1 and 2.2(b), there exists a unique solution $\{\tilde\theta^t\}_{t\ge 0}$ to (187). Letting $q^t$ be the above conditional density of $\tilde\theta^t$, and letting $V(q, \alpha)$ be the Gibbs free energy (13), almost surely
\[
\limsup_{n,d\to\infty}\sup_{t\in[0,T]}\frac{d}{dt}\big(V(q^t, \alpha^t) + R(\alpha^t)\big) \le 0. \tag{189}
\]

Proof. Fixing $C_0 > 0$ large enough, let $E(C_0) = \{\|X\|_{\mathrm{op}} \le C_0,\ \|\theta^*\|_2^2, \|\varepsilon\|_2^2 \le C_0 d\}$. We restrict to the event where the almost-sure convergence statements of Theorem 2.3(b) hold and where $E(C_0)$ holds for all large $n, d$.

Since $\{\alpha^t\}_{t\ge 0}$ is continuous, for each $T > 0$, there exists a compact ball $S_T$ for which $\alpha^t \in S_T$ for all $t \in [0, T]$. By Assumption 2.2(b), $(\theta, \alpha) \mapsto \partial_\theta\log g(\theta, \alpha)$ restricted to $\alpha \in S_T$ is Lipschitz. Then the drift of (187) is Lipschitz over each time horizon $[0, T]$, so (187) admits a unique solution $\{\tilde\theta^t\}_{t\in[0,T]}$ (c.f. [84, Theorem II.1.2]) over $t \in [0, T]$ for every $T \ge 0$, and hence also over all $t \ge 0$.

We note that
\[
\frac{d}{dt}(\tilde\theta^t - \theta^t) = \frac{1}{\sigma^2}X^\top X(\theta^t - \tilde\theta^t) + \big[\partial_\theta\log g(\tilde\theta^t_j, \alpha^t) - \partial_\theta\log g(\theta^t_j, \hat\alpha^t)\big]_{j=1}^d.
\]
Applying again the Lipschitz property of $(\theta, \alpha) \mapsto \partial_\theta\log g(\theta, \alpha)$ over $\alpha \in S_T$ and the bound $\|X\|_{\mathrm{op}} \le C_0$, there is a constant $C > 0$ depending on $C_0, T$ such that
\[
\Big\|\frac{1}{\sqrt d}\frac{d}{dt}(\tilde\theta^t - \theta^t)\Big\|_2 \le \frac{C}{\sqrt d}\|\tilde\theta^t - \theta^t\|_2 + C\|\alpha^t - \hat\alpha^t\|_2.
\]
Since $\sup_{t\in[0,T]}\|\alpha^t - \hat\alpha^t\|_2 \to 0$ by Theorem 2.3(b), a Gronwall argument implies
\[
\lim_{n,d\to\infty}\sup_{t\in[0,T]}\frac{1}{\sqrt d}\|\tilde\theta^t - \theta^t\|_2 = 0. \tag{190}
\]
By the DMFT equation (27),
the evolution of $\alpha^t$ is given by
\[
\frac{d}{dt}\alpha^t = E_{\theta^t\sim P(\theta^t)}\nabla_\alpha\log g(\theta^t, \alpha^t) - \nabla R(\alpha^t) \tag{191}
\]
where $P(\theta^t)$ is the law of the DMFT variable $\theta^t$. The law $q^t$ of $\tilde\theta^t$ satisfies the Fokker-Planck equation
\[
\frac{d}{dt}q^t(\tilde\theta) = \nabla_{\tilde\theta}\cdot\Big(q^t(\tilde\theta)\,\nabla_{\tilde\theta}\Big(\frac{1}{2\sigma^2}\|y - X\tilde\theta\|_2^2 - \sum_{j=1}^d\log g(\tilde\theta_j, \alpha^t) + \log q^t(\tilde\theta)\Big)\Big). \tag{192}
\]
Then, using (191) and (192) to differentiate $V(q^t, \alpha^t) + R(\alpha^t)$,
\begin{align*}
\frac{d}{dt}\big(V(q^t, \alpha^t) + R(\alpha^t)\big) = &-\frac{1}{d}\underbrace{\int\Big\|\nabla_{\tilde\theta}\Big(\frac{1}{2\sigma^2}\|y - X\tilde\theta\|_2^2 - \sum_{j=1}^d\log g(\tilde\theta_j, \alpha^t) + \log q^t(\tilde\theta)\Big)\Big\|_2^2\,q^t(\tilde\theta)\,d\tilde\theta}_{:=\mathrm{FI}_t} \tag{193}\\
&- \Big(E_{\theta^t\sim P(\theta^t)}\nabla_\alpha\log g(\theta^t, \alpha^t) - \nabla R(\alpha^t)\Big)^\top\Big(\int\frac{1}{d}\sum_{j=1}^d\nabla_\alpha\log g(\tilde\theta_j, \alpha^t)\,q^t(\tilde\theta)\,d\tilde\theta - \nabla R(\alpha^t)\Big).
\end{align*}
Here, the first term $\mathrm{FI}_t$ (the relative Fisher information) arises from differentiation in $q^t$ and integration-by-parts in $\tilde\theta$, while the second term arises from differentiation in $\alpha^t$. Recalling the notation (188),
\[
\int\frac{1}{d}\sum_{j=1}^d\nabla_\alpha\log g(\tilde\theta_j, \alpha^t)\,q^t(\tilde\theta)\,d\tilde\theta = E_{\tilde\theta^t\sim\bar P_t}\nabla_\alpha\log g(\tilde\theta^t, \alpha^t)
\]
so we may write the above as
\begin{align*}
\frac{d}{dt}\big(V(q^t, \alpha^t) + R(\alpha^t)\big) = &-\frac{1}{d}\mathrm{FI}_t - \big\|E_{\tilde\theta^t\sim\bar P_t}\nabla_\alpha\log g(\tilde\theta^t, \alpha^t) - \nabla R(\alpha^t)\big\|^2 \tag{194}\\
&+ \underbrace{\big(E_{\tilde\theta^t\sim\bar P_t}\nabla_\alpha\log g(\tilde\theta^t, \alpha^t) - E_{\theta^t\sim P(\theta^t)}\nabla_\alpha\log g(\theta^t, \alpha^t)\big)^\top\big(E_{\tilde\theta^t\sim\bar P_t}\nabla_\alpha\log g(\tilde\theta^t, \alpha^t) - \nabla R(\alpha^t)\big)}_{:=\Delta_t}.
\end{align*}
By the convexity $W_2(\lambda P + (1-\lambda)P', Q)^2 \le \lambda W_2(P, Q)^2 + (1-\lambda)W_2(P', Q)^2$ and Jensen's inequality,
\[
\sup_{t\in[0,T]} W_2(\bar P_t, P(\theta^*, \theta^t))^2 \le \sup_{t\in[0,T]}\langle W_2(\bar P_{\tilde\theta^t}, P(\theta^*, \theta^t))^2\rangle \le \Big\langle\sup_{t\in[0,T]} W_2(\bar P_{\tilde\theta^t}, P(\theta^*, \theta^t))^2\Big\rangle, \tag{195}
\]
where $\langle\cdot\rangle$ is the average over $\tilde\theta^t \sim q^t$, and $P(\theta^*, \theta^t)$ is the joint law of the DMFT variables $(\theta^*, \theta^t)$. By Theorem 2.3(b) and (190), for any fixed $T > 0$ we have
\[
\sup_{t\in[0,T]} W_2(\bar P_{\tilde\theta^t}, P(\theta^*, \theta^t))^2 \le \sup_{t\in[0,T]}\big(2W_2(\bar P_{\tilde\theta^t}, \bar P_{\theta^t})^2 + 2W_2(\bar P_{\theta^t}, P(\theta^*, \theta^t))^2\big) \to 0 \tag{196}
\]
almost surely as $n, d \to \infty$. The same arguments as leading to (168) show that $\sup_{t\in[0,T]} W_2(\bar P_{\tilde\theta^t}, P(\theta^*, \theta^t))^2$ is uniformly integrable with respect to $\langle\cdot\rangle$ for all large $n, d$. Then applying (196) and dominated convergence to bound the right side of (195), we get
\[
\lim_{n,d\to\infty}\sup_{t\in[0,T]} W_2(\bar P_t, P(\theta^*, \theta^t))^2 = 0. \tag{197}
\]
Finally, applying that $(\theta, \alpha) \mapsto \nabla_\alpha\log g(\theta, \alpha)$ is uniformly Lipschitz over $\alpha \in S_T$ by Assumption 2.2(b), this Wasserstein-2 convergence implies
\[
\lim_{n,d\to\infty}\sup_{t\in[0,T]}\big\|E_{\theta\sim\bar P_t}\nabla_\alpha\log g(\theta, \alpha^t) - E_{\theta\sim P(\theta^t)}\nabla_\alpha\log g(\theta, \alpha^t)\big\| = 0,
\]
hence $\lim_{n,d\to\infty}\sup_{t\in[0,T]}|\Delta_t| = 0$ for the quantity $\Delta_t$ of (194). As the first two terms of (194) are non-positive, this shows (189).

Proof of Theorem 2.13. Let $S \subset O \subset \mathbb{R}^K$ be the domains of Assumption 2.11. Fixing sufficiently large constants $C_0, C_{\mathrm{LSI}} > 0$, define
\[
E(C_0, C_{\mathrm{LSI}}) = \{\|X\|_{\mathrm{op}} \le C_0,\ \|\theta^*\|_2^2, \|\varepsilon\|_2^2 \le C_0 d,\ \text{and (46) holds for } q_\alpha \text{ for every } \alpha \in O\}.
\]
We restrict to the event where the almost-sure convergence statements of Theorem 2.3(b) and Lemma 2.12 hold, and where $E(C_0, C_{\mathrm{LSI}})$ holds for all large $n, d$. Throughout, $C, C', c > 0$ denote constants that may depend on $C_0, C_{\mathrm{LSI}}$ and change from instance to instance.

On the event $E(C_0, C_{\mathrm{LSI}})$, we first note that by Itô's formula,
\[
d\|\tilde\theta^t\|_2^2 = 2(\tilde\theta^t)^\top\Big[\Big(\frac{1}{\sigma^2}X^\top(y - X\tilde\theta^t) + \big(\partial_\theta\log g(\tilde\theta^t_j, \alpha^t)\big)_{j=1}^d\Big)dt + \sqrt{2}\,db^t\Big] + (2d)\,dt,
\]
and hence
\[
\frac{d}{dt}\langle d^{-1}\|\tilde\theta^t\|_2^2\rangle \le C\big(1 + \langle d^{-1/2}\|\tilde\theta^t\|_2\rangle\big) + 2d^{-1}\sum_{j=1}^d\langle\tilde\theta^t_j\cdot\partial_\theta\log g(\tilde\theta^t_j, \alpha^t)\rangle
\]
for a constant $C > 0$. Under the convexity-at-infinity condition of Assumption 2.2(b), there exist constants $C, c > 0$ for which $\theta\cdot\partial_\theta\log g(\theta, \alpha^t) \le C|\theta| - c\theta^2$ for all $\theta \in \mathbb{R}$ and $\alpha^t \in S$. Applying this and Cauchy-Schwarz to the above, we have for some constants $C', c' > 0$ that $\frac{d}{dt}\langle d^{-1}\|\tilde\theta^t\|_2^2\rangle \le C' - c'\langle d^{-1}\|\tilde\theta^t\|_2^2\rangle$. This implies on $E(C_0, C_{\mathrm{LSI}})$ that
\[
d^{-1}\langle\|\tilde\theta^t\|_2^2\rangle \le C \tag{198}
\]
for a constant $C > 0$ and all $t \ge 0$. The arguments leading to (168) show that for any fixed $t \ge 0$, $d^{-1}\|\tilde\theta^t\|_2^2$ is uniformly integrable with respect to $\langle\cdot\rangle$ for all large $n, d$. Since $E(C_0, C_{\mathrm{LSI}})$ holds a.s.
for all large $n, d$, and $\lim_{n,d\to\infty} d^{-1}\|\tilde\theta^t\|_2^2 = E(\theta^t)^2$ a.s. by Theorem 2.3(b) and (190), where $\theta^t$ here is the $\theta$-component of the limiting DMFT system, this implies also
\[
E(\theta^t)^2 \le C \tag{199}
\]
for all $t \ge 0$. Furthermore, for any $s \le t$, applying
\[
\tilde\theta^t - \tilde\theta^s = \int_s^t\Big[\frac{1}{\sigma^2}X^\top(y - X\tilde\theta^r) + \big(\partial_\theta\log g(\tilde\theta^r_j, \alpha^r)\big)_{j=1}^d\Big]dr + \sqrt{2}(b^t - b^s)
\]
and uniform Lipschitz continuity of $\theta \mapsto \partial_\theta\log g(\theta, \alpha^r)$ for $\alpha^r \in S$, we have on $E(C_0, C_{\mathrm{LSI}})$ that
\[
d^{-1/2}\|\tilde\theta^t - \tilde\theta^s\|_2 \le \int_s^t Cd^{-1/2}\|\tilde\theta^r - \tilde\theta^s\|_2\,dr + C(t-s)\big(1 + d^{-1/2}\|\tilde\theta^s\|_2\big) + \sqrt{2}\,d^{-1/2}\|b^t - b^s\|_2.
\]
Then by Gronwall's inequality,
\[
d^{-1/2}\|\tilde\theta^t - \tilde\theta^s\|_2 \le Ce^{C(t-s)}\Big(C(t-s)\big(1 + d^{-1/2}\|\tilde\theta^s\|_2\big) + d^{-1/2}\sup_{r\in[s,t]}\|b^r - b^s\|_2\Big).
\]
Then applying (198) and Doob's maximal inequality shows
\[
d^{-1}\langle\|\tilde\theta^t - \tilde\theta^s\|_2^2\rangle \le C(t-s) \quad\text{for all } s \le t \text{ with } t-s \le 1. \tag{200}
\]
We now show that for a constant $C > 0$,
\[
\int_0^\infty\|\nabla F(\alpha^t) + \nabla R(\alpha^t)\|_2^2\,dt < C. \tag{201}
\]
We remind the reader that $q^t$ is the law of $\tilde\theta^t$ (conditioned on $X, \theta^*, \varepsilon$) and $q_{\alpha^t}$ is the posterior law of $\theta$ under the prior $g \equiv g(\cdot, \alpha^t)$. On $E(C_0, C_{\mathrm{LSI}})$, the LSI for $q_{\alpha^t}$ and its implied $T_2$-transportation inequality (c.f. [83, Theorem 9.6.1]) imply for the Fisher information term $\mathrm{FI}_t$ of (193) that
\[
\mathrm{FI}_t \ge C_{\mathrm{LSI}}^{-1}D_{\mathrm{KL}}(q^t\,\|\,q_{\alpha^t}) \ge C_{\mathrm{LSI}}^{-2}W_2(q^t, q_{\alpha^t})^2
\]
for all $t \ge 0$. The average marginal distribution of coordinates of $\tilde\theta^t \sim q^t$ is the $\tilde\theta^t$-marginal of $\bar P_t$ defined in (188), and that of $\theta \sim q_{\alpha^t}$ is the $\theta$-marginal of $\bar P_{\alpha^t}$ as defined in (179). Considering the coordinatewise coupling of $\bar P_t, \bar P_{\alpha^t}$, we see that $W_2(\bar P_t, \bar P_{\alpha^t})^2 \le d^{-1}W_2(q^t, q_{\alpha^t})^2$, so
\[
d^{-1}\mathrm{FI}_t \ge C_{\mathrm{LSI}}^{-2}W_2(\bar P_t, \bar P_{\alpha^t})^2. \tag{202}
\]
Applying this and the uniform Lipschitz continuity of $\theta \mapsto \nabla_\alpha\log g(\theta, \alpha)$ over $\alpha \in O$ guaranteed by Assumption 2.2(b),
\[
\big\|E_{\tilde\theta^t\sim\bar P_t}\nabla_\alpha\log g(\tilde\theta^t, \alpha^t) - E_{\theta\sim\bar P_{\alpha^t}}\nabla_\alpha\log g(\theta, \alpha^t)\big\|_2^2 \le C'W_2(\bar P_t, \bar P_{\alpha^t})^2 \le Cd^{-1}\mathrm{FI}_t.
\]
Then applying this as a lower bound for $d^{-1}\mathrm{FI}_t$ in (194), and applying also $C^{-1}(a-b)^2 + b^2 \ge c_0 a^2$ for a constant $c_0 > 0$ and all $a, b \in \mathbb{R}$, we get from (194) that
\[
\frac{d}{dt}\big(V(q^t, \alpha^t) + R(\alpha^t)\big) \le -c_0\big\|E_{\theta\sim\bar P_{\alpha^t}}\nabla_\alpha\log g(\theta, \alpha^t) - \nabla R(\alpha^t)\big\|^2 + \Delta_t.
\]
Now note from Lemma 2.12 that $E_{\theta\sim\bar P_{\alpha^t}}\nabla_\alpha\log g(\theta, \alpha^t) = -\nabla\hat F(\alpha^t)$.
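The descent inequality just obtained is the engine of the rest of the proof: the free energy decreases at rate at least $c_0\|\nabla F + \nabla R\|^2$, so boundedness of the energy from below forces the time-integral of the squared gradient to be finite and drives $\alpha^t$ toward critical points. A one-dimensional toy sketch of this mechanism, where an assumed quartic energy stands in for $F + R$:

```python
import numpy as np

# Toy analogue of the energy-dissipation argument behind (201): along the
# gradient flow d(alpha)/dt = -grad(F+R)(alpha), the total energy drop equals
# the integral of ||grad(F+R)||^2, so a lower bound on the energy forces
# convergence to a critical point. The quartic E(a) = a^4 - a^2 is an
# illustrative assumption, not the paper's free energy.
energy = lambda a: a**4 - a**2
grad = lambda a: 4 * a**3 - 2 * a

a, dt = 1.5, 1e-3
dissipation = 0.0
for _ in range(20000):            # forward-Euler gradient flow up to T = 20
    g = grad(a)
    dissipation += g * g * dt     # accumulates the integral of ||grad||^2
    a -= dt * g

assert abs(grad(a)) < 1e-6        # alpha(t) lands at a critical point
assert abs(a - 1 / np.sqrt(2)) < 1e-6   # here, the minimizer at 1/sqrt(2)
```

The accumulated `dissipation` approximately matches the energy drop `energy(1.5) - energy(a)`, mirroring how the finite energy budget bounds the gradient integral in (201).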
Applying $\sup_{t\in[0,T]}\Delta_t \to 0$ and the uniform convergence $\nabla\hat F(\alpha) \to \nabla F(\alpha)$ over $\alpha \in S$ from Lemma 2.12, this shows a strengthening of (189): for any $t \in [0, T]$,
\[
\limsup_{n,d\to\infty}\frac{d}{dt}\big(V(q^t, \alpha^t) + R(\alpha^t)\big) \le -c_0\|\nabla F(\alpha^t) + \nabla R(\alpha^t)\|_2^2.
\]
Then for any $T > 0$,
\[
c_0\int_0^T\|\nabla F(\alpha^t) + \nabla R(\alpha^t)\|_2^2\,dt \le \limsup_{n,d\to\infty}\big(V(q^0, \alpha^0) + R(\alpha^0) - V(q^T, \alpha^T) - R(\alpha^T)\big).
\]
Note that by the definition of $V(q, \alpha)$ in (13) and the conditions of finite moments and finite entropy for $g_0$ in Assumption 2.1, $V(q^0, \alpha^0) = V(g_0^{\otimes d}, \alpha^0)$ is bounded above by a constant on $E(C_0, C_{\mathrm{LSI}})$ for all large $n, d$. Also by the definition (13),
\[
V(q^T, \alpha^T) \ge \frac{1}{d}D_{\mathrm{KL}}(q^T\,\|\,g(\cdot, \alpha^T)^{\otimes d}) + \frac{n}{2d}\log 2\pi\sigma^2 \ge \frac{n}{2d}\log 2\pi\sigma^2
\]
which is bounded below by a constant for all $T$ and all large $n, d$. Then, applying also $R(\alpha^0) \le C$ and $R(\alpha^T) \ge 0$ and taking the limit $n, d \to \infty$ followed by $T \to \infty$, we obtain the claimed bound (201).

Consider the set $\mathrm{Crit} = \{\alpha \in S: \nabla F(\alpha) + \nabla R(\alpha) = 0\}$. Suppose by contradiction that $\{\alpha^t\}_{t\ge 0}$ has a limit point $\alpha^\infty \in S$ that does not belong to $\mathrm{Crit}$. Lemma 2.12 implies that $\nabla F(\alpha) + \nabla R(\alpha)$ is continuous over $\alpha \in O$, so $\|\nabla F(\alpha) + \nabla R(\alpha)\|_2 > \delta$ for all $\alpha \in B_\delta(\alpha^\infty) := \{\alpha: \|\alpha - \alpha^\infty\|_2 < \delta\}$ and some $\delta > 0$. However, Assumption 2.2(b) and the DMFT equation (27) imply
\[
\Big\|\frac{d}{dt}\alpha^t\Big\|_2 \le E_{\theta\sim P(\theta^t)}\|\nabla_\alpha\log g(\theta, \alpha^t)\|_2 + \|\nabla R(\alpha^t)\|_2 \le C\big(1 + E|\theta^t| + \|\alpha^t\|_2\big) \le C' \tag{203}
\]
for some constants $C, C' > 0$ and all $t \ge 0$, where the last inequality applies (199) and the assumption $\alpha^t \in S$. Then for each $t_0 \ge 0$ such that
\[
\alpha^{t_0} \in B_{\delta/2}(\alpha^\infty) \tag{204}
\]
we must have $\alpha^t \in B_\delta(\alpha^\infty)$ for
all $t \in [t_0 - c\delta, t_0 + c\delta]$ and some constant $c > 0$. Then $\int_{t_0-c\delta}^{t_0+c\delta}\|\nabla F(\alpha^t) + \nabla R(\alpha^t)\|_2^2\,dt \ge 2c\delta^3$. The condition (204) must hold for infinitely many times $t_0$ because $\alpha^\infty$ is a limit point of $\{\alpha^t\}_{t\ge 0}$, but this contradicts (201). Thus we must have $\alpha^\infty \in \mathrm{Crit}$. Since this holds for every limit point $\alpha^\infty$ of $\{\alpha^t\}_{t\ge 0}$, and $S$ is compact, this implies $\lim_{t\to\infty}\mathrm{dist}(\alpha^t, \mathrm{Crit}) = 0$. If furthermore all points of $\mathrm{Crit}$ are isolated, then the limit point $\alpha^\infty$ of $\{\alpha^t\}_{t\ge 0}$ must be unique, and $\lim_{t\to\infty}\alpha^t = \alpha^\infty$.

For the remaining statements (53), fix any $\varepsilon > 0$. Choosing $T(\varepsilon)$ such that $\|\alpha^t - \alpha^\infty\|_2 < \varepsilon/2$ for all $t > T(\varepsilon)$, we then have $\limsup_{n,d\to\infty}\|\hat\alpha^t - \alpha^\infty\|_2 < \varepsilon$ by Theorem 2.3(b), showing the first statement of (53). For the second statement of (53), we note from (194) that
\[
\frac{d}{dt}\big(V(q^t, \alpha^t) + R(\alpha^t)\big) \le -\frac{1}{d}\mathrm{FI}_t + \Delta_t.
\]
Then, by the same arguments as above, for some constant $C > 0$ and every $T > 0$,
\[
\limsup_{n,d\to\infty}\int_0^T d^{-1}\mathrm{FI}_t \le \limsup_{n,d\to\infty}\big(V(q^0, \alpha^0) + R(\alpha^0) - V(q^T, \alpha^T) - R(\alpha^T)\big) \le C.
\]
Recalling (202), this implies
\[
\limsup_{n,d\to\infty}\int_0^T W_2(\bar P_t, \bar P_{\alpha^t})^2\,dt \le C. \tag{205}
\]
For each fixed $t \ge 0$, we have
\[
\lim_{n,d\to\infty} W_2(\bar P_t, P(\theta^*, \theta^t))^2 = 0 \tag{206}
\]
by (197). We have also, by Jensen's inequality for the squared Wasserstein-2 distance and (48) of Corollary 2.10(a),
\[
\limsup_{n,d\to\infty} W_2(\bar P_{\alpha^t}, P_{\alpha^t})^2 \le \limsup_{n,d\to\infty}\big\langle W_2(\bar P_\theta, P_{\alpha^t})^2\big\rangle_{\alpha^t} = 0 \tag{207}
\]
where $\langle\cdot\rangle_{\alpha^t}$ is the average over $\theta \sim q_{\alpha^t}$ defining $\bar P_\theta$. Then, combining (206) and (207), we have that $\lim_{n,d\to\infty} W_2(\bar P_t, \bar P_{\alpha^t}) = W_2(P(\theta^*, \theta^t), P_{\alpha^t})$. Applying this and Fatou's lemma to (205), we obtain the bound $\int_0^T W_2(P(\theta^*, \theta^t), P_{\alpha^t})^2\,dt \le C$. Since $T > 0$ is arbitrary, taking $T \to \infty$ gives
\[
\int_0^\infty W_2(P(\theta^*, \theta^t), P_{\alpha^t})^2\,dt \le C. \tag{208}
\]
For any $s \le t$, considering the coordinatewise coupling gives $W_2(\bar P_s, \bar P_t)^2 \le d^{-1}\langle\|\tilde\theta^s - \tilde\theta^t\|_2^2\rangle \le C(t-s)$, where the second inequality holds for a constant $C > 0$ and all $t-s \in [0,1]$ by (200). Also
\[
W_2(\bar P_{\alpha^s}, \bar P_{\alpha^t})^2 \le d^{-1}W_2(q_{\alpha^s}, q_{\alpha^t})^2 \le C\|\alpha^t - \alpha^s\|_2^2 \le C'(t-s)^2 \tag{209}
\]
by the Wasserstein-2 Lipschitz continuity of $q_\alpha$ over $\alpha \in S$ shown in (171), and the bound (203) for $d\alpha^t/dt$.
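The coordinatewise couplings used above rest on the one-dimensional structure of $W_2$: for equal-size empirical measures on $\mathbb{R}$, $W_2$ is the $\ell_2$ distance between sorted values. A quick numerical sketch of the resulting Lipschitz-in-location behavior (the Gaussian location family below is an illustrative assumption):

```python
import numpy as np

# For equal-size 1-D samples, W2 of the empirical measures is the l2 distance
# of sorted values. For a location family q_a = law of (z + a), this gives
# W2(q_a, q_b) = |a - b| under the natural coupling, a toy instance of the
# Lipschitz bounds like (209). The standard-normal z is an assumption.
rng = np.random.default_rng(0)
z = rng.standard_normal(10000)

def w2_empirical(x, y):
    return np.sqrt(np.mean((np.sort(x) - np.sort(y)) ** 2))

a, b = 0.2, 0.9
assert abs(w2_empirical(z + a, z + b) - abs(a - b)) < 1e-10
```

Since shifting every sample by a constant preserves the sort order, the sorted differences are all exactly $b - a$, which is why the empirical $W_2$ matches $|a - b|$ to machine precision.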
Then taking the limit $n, d \to \infty$ using (206) and (207), this shows
\[
\big|W_2(P(\theta^*, \theta^t), P_{\alpha^t})^2 - W_2(P(\theta^*, \theta^s), P_{\alpha^s})^2\big| \le C(t-s)
\]
for all $t-s \in [0,1]$. Then $t \mapsto W_2(P(\theta^*, \theta^t), P_{\alpha^t})^2$ is Lipschitz, so (208) implies
\[
\lim_{t\to\infty} W_2(P(\theta^*, \theta^t), P_{\alpha^t})^2 = 0. \tag{210}
\]
We have similarly to (209) that $W_2(\bar P_{\alpha^t}, \bar P_{\alpha^\infty})^2 \le d^{-1}W_2(q_{\alpha^t}, q_{\alpha^\infty})^2 \le C\|\alpha^t - \alpha^\infty\|_2^2$. Hence by (207), also $W_2(P_{\alpha^t}, P_{\alpha^\infty})^2 \le C\|\alpha^t - \alpha^\infty\|_2^2$, so
\[
\lim_{t\to\infty} W_2(P_{\alpha^t}, P_{\alpha^\infty})^2 = 0. \tag{211}
\]
Combining (210) and (211) shows that for any $\varepsilon > 0$, there exists $T(\varepsilon) > 0$ such that $W_2(P(\theta^*, \theta^t), P_{\alpha^\infty}) < \varepsilon$ for all $t \ge T(\varepsilon)$. The second statement of (53) follows from this and the almost sure convergence $\lim_{n,d\to\infty} W_2(\frac{1}{d}\sum_j\delta_{(\theta^*_j, \theta^t_j)}, P(\theta^*, \theta^t)) = 0$ ensured by Theorem 2.3(b).

5.2 Analysis of examples

Analysis of Examples 2.14 and 2.15. We prove the claims in Example 2.15 that Assumptions 2.2(b) and 2.11 hold, and that $\mathrm{Crit}$ consists of the unique point $\alpha = \alpha^*$. (Then these claims hold also in Example 2.14 for the Gaussian prior, which is a special case.)

Assumption 2.2(b) is immediate from the given conditions for $f(x)$. For Assumption 2.11, let us first show that there exists a compact interval $S \subset \mathbb{R}$ for which $\{\alpha^t\}_{t\ge 0}$ is confined to $S$ (for all $t \ge 0$): By Lemma 5.1 (which does not require Assumption 2.11), for each fixed $t \ge 0$, almost surely
\[
\limsup_{n,d\to\infty}\big(V(q^t, \alpha^t) - V(q^0, \alpha^0)\big) \le 0. \tag{212}
\]
By the Gibbs variational principle (12) and the lower bound $-\log g(\theta, \alpha) = f(\theta - \alpha) \ge f(0) + \frac{c_0}{2}(\theta - \alpha)^2$,
\[
V(q^t, \alpha^t) \ge \hat F(\alpha^t) = -\frac{1}{d}\log\int(2\pi\sigma^2)^{-n/2}\exp\Big(-\frac{1}{2\sigma^2}\|y - X\theta\|_2^2 + \sum_{j=1}^d
\log g(\theta_j, \alpha^t)\Big)\,d\theta \ge -\frac{1}{d}\log\int(2\pi\sigma^2)^{-n/2}\exp\Big(-\frac{1}{2\sigma^2}\|y - X\theta\|_2^2 - f(0)d - \sum_{j=1}^d\frac{c_0}{2}(\theta_j - \alpha^t)^2\Big)\,d\theta.
\]
Applying $\|X\|_{\mathrm{op}} \le C$ a.s. for all large $n, d$, it is readily checked by explicit evaluation of this integral over $\theta$ that
\[
V(q^t, \alpha^t) \ge C + \frac{c_0}{2}(\alpha^t)^2 - \frac{1}{2d}\Big(\frac{X^\top y}{\sigma^2} + c_0\alpha^t\mathbf{1}\Big)^\top\Big(\frac{X^\top X}{\sigma^2} + c_0 I\Big)^{-1}\Big(\frac{X^\top y}{\sigma^2} + c_0\alpha^t\mathbf{1}\Big)
\]
a.s. for all large $n, d$ and a constant $C \in \mathbb{R}$ depending on $\sigma^2, \delta, f(0), c_0, \alpha^*$, where here $\mathbf{1}$ denotes the all-1's vector in $\mathbb{R}^d$. We have
\[
\lim_{n,d\to\infty}\frac{1}{d}\mathbf{1}^\top\Big(\frac{X^\top X}{\sigma^2} + c_0 I\Big)^{-1}\mathbf{1} = \sigma^2 G(-\sigma^2 c_0) < \frac{1}{c_0}
\]
strictly, where $G(z) = \lim d^{-1}\mathrm{Tr}(X^\top X - zI)^{-1}$ denotes the Stieltjes transform of the Marcenko-Pastur spectral limit of $X^\top X$ [89, Theorem 2.5]. Applying this to lower-bound the quadratic term in $\alpha^t$ above, and applying Cauchy-Schwarz to lower-bound the linear term, we get $V(q^t, \alpha^t) \ge C' + c'(\alpha^t)^2$ for some constants $C' \in \mathbb{R}$ and $c' > 0$. Now applying this and $V(q^0, \alpha^0) = V(g_0^{\otimes d}, \alpha^0) \le C$ to (212), we deduce that $(\alpha^t)^2$ is uniformly bounded over all $t \ge 0$, i.e. there exists a compact interval $S$ for which $\alpha^t \in S$ for all $t \ge 0$, as claimed. By enlarging $S$, we may assume without loss of generality $\alpha^* \in S$. Then, taking $O$ to be any neighborhood of $S$, the remaining LSI condition of Assumption 2.11 holds by the strong convexity of $f(x)$ and Proposition 2.8.

We now show that $F(\alpha)$ is strictly convex on $O$, by showing convexity of the original negative log-likelihood $\hat F(\alpha)$: Fixing sufficiently large and small constants $C_0, c > 0$, let us restrict to the event
\[
E = \{\|X\|_{\mathrm{op}} \le C_0,\ \|X\mathbf{1}\|_2 \ge c\sqrt d\}
\]
which holds a.s. for all large $n, d$. Recalling the form of $\nabla^2\hat F(\alpha)$ from (186) and applying this with $-\log g(\theta, \alpha) = f(\theta - \alpha)$,
\[
\hat F''(\alpha) = \frac{1}{d}\Big\langle\sum_{j=1}^d f''(\theta_j - \alpha)\Big\rangle_\alpha - \frac{1}{d}\mathrm{Var}_\alpha\Big(\sum_{j=1}^d f'(\theta_j - \alpha)\Big)
\]
where $\langle\cdot\rangle_\alpha$ is the average under the posterior law corresponding to $g(\cdot, \alpha)$, and $\mathrm{Var}_\alpha$ is its posterior variance. Since $f(x)$ is strictly convex, the posterior density of $\theta$ is strictly log-concave for each fixed $\alpha$.
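The Woodbury matrix identity invoked in the next step of the argument can be verified numerically; the dimensions, entries, and the diagonal matrix below are arbitrary illustrative stand-ins for $X$ and $D_\alpha(\theta)$:

```python
import numpy as np

# Numerical check of the Woodbury identity used in the convexity argument:
# (D + X^T X / sigma^2)^{-1}
#     = D^{-1} - D^{-1} X^T (sigma^2 I + X D^{-1} X^T)^{-1} X D^{-1}.
# Sizes and entries are illustrative choices, not taken from the paper.
rng = np.random.default_rng(0)
n, d, sigma2 = 5, 8, 0.5
X = rng.standard_normal((n, d))
D = np.diag(rng.uniform(1.0, 2.0, size=d))  # stands in for the p.d. diagonal D_alpha(theta)
D_inv = np.linalg.inv(D)

lhs = np.linalg.inv(D + X.T @ X / sigma2)
rhs = D_inv - D_inv @ X.T @ np.linalg.inv(sigma2 * np.eye(n) + X @ D_inv @ X.T) @ X @ D_inv
assert np.allclose(lhs, rhs)
```

The identity converts the $d \times d$ inverse into an $n \times n$ one, which is what lets the argument isolate the positive semi-definite term $D^{-1}X^\top(\sigma^2 I + XD^{-1}X^\top)^{-1}XD^{-1}$ below.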
Then, denoting $v_\alpha(\theta) = \big(f''(\theta_j - \alpha)\big)_{j=1}^d \in \mathbb{R}^d$ and $D_\alpha(\theta) = \mathrm{diag}\big(\big(f''(\theta_j - \alpha)\big)_{j=1}^d\big) \in \mathbb{R}^{d\times d}$, the Brascamp-Lieb inequality [83, Theorem 4.9.1] implies
\[
\mathrm{Var}_\alpha\Big(\sum_{j=1}^d f'(\theta_j - \alpha)\Big) \le \Big\langle v_\alpha(\theta)^\top\Big(D_\alpha(\theta) + \frac{X^\top X}{\sigma^2}\Big)^{-1}v_\alpha(\theta)\Big\rangle_\alpha.
\]
Observing also that $\sum_{j=1}^d f''(\theta_j - \alpha) = v_\alpha(\theta)^\top D_\alpha(\theta)^{-1}v_\alpha(\theta)$, this shows
\[
\hat F''(\alpha) \ge \frac{1}{d}\Big\langle v_\alpha(\theta)^\top\Big(D_\alpha(\theta)^{-1} - \Big(D_\alpha(\theta) + \frac{X^\top X}{\sigma^2}\Big)^{-1}\Big)v_\alpha(\theta)\Big\rangle_\alpha.
\]
Applying the Woodbury matrix identity and $0 \preceq \sigma^2 I + XD_\alpha(\theta)^{-1}X^\top \preceq C'I$ on the event $E$ for some constant $C' > 0$,
\begin{align*}
\hat F''(\alpha) &\ge \frac{1}{d}\Big\langle v_\alpha(\theta)^\top D_\alpha(\theta)^{-1}X^\top\big(\sigma^2 I + XD_\alpha(\theta)^{-1}X^\top\big)^{-1}XD_\alpha(\theta)^{-1}v_\alpha(\theta)\Big\rangle_\alpha\\
&\ge \frac{1}{C'd}\Big\langle v_\alpha(\theta)^\top D_\alpha(\theta)^{-1}X^\top XD_\alpha(\theta)^{-1}v_\alpha(\theta)\Big\rangle_\alpha = \frac{1}{C'd}\mathbf{1}^\top X^\top X\mathbf{1} \ge c',
\end{align*}
the last inequality holding for some $c' > 0$ on $E$. Thus, on $E$, $\hat F(\alpha) - (c'/2)\alpha^2$ is convex over $\alpha \in \mathbb{R}$. Since $E$ holds a.s. for all large $n, d$ and $F(\alpha)$ is the almost-sure pointwise limit of $\hat F(\alpha)$, this implies that $F(\alpha) - (c'/2)\alpha^2$ is also convex [85, Theorem 10.8], so $F(\alpha)$ is strongly convex as claimed. Lemma 2.12(c) implies that $\nabla F(\alpha^*) = 0$, i.e. $\alpha^*$ is a point of $\mathrm{Crit}$, so by this convexity it is the unique point of $\mathrm{Crit}$.

Proposition 5.2. In the setting of Theorem 2.13, suppose $R(\alpha)$ is given by (54-55) with $\|\alpha^0\|_2 \le D$. Then there exists a constant $C(g^*, g_0, \alpha^0) > 0$ depending only on $(g^*, g_0, \alpha^0)$ such that the DMFT process $\{\alpha^t\}_{t\ge 0}$ satisfies
\[
\|\alpha^t\|_2 \le D + C(g^*, g_0, \alpha^0)\Big(\frac{1+\delta}{\sigma^2} + 1\Big)
\]
for all $t \ge 0$.

Proof. By Lemma 5.1, for each fixed $t \ge 0$, almost surely
\[
\limsup_{n,d\to\infty}\Big(\big(V(q^t, \alpha^t) + R(\alpha^t)\big) - \big(V(q^0, \alpha^0) + R(\alpha^0)\big)\Big) \le 0. \tag{213}
\]
By definition of $V(q, \alpha)$ in (13), we have
\[
V(q^0, \alpha^0) = V(g_0^{\otimes d}, \alpha^0) = \frac{1}{d}\int\frac{1}{2\sigma^2}\|y - X\theta\|^2\prod_{j=1}^d g_0(\theta_j)\,d\theta_j + D_{\mathrm{KL}}(g_0\,\|\,g(\cdot, \alpha^0))