that $\alpha(x)$ is increasing for $x > e^{5/2}/2$ and decreasing for $x < e^{5/2}/2$, with $x_0 = e^{5/2}/2$ being the unique minimum. However, at $x_0 = e^{5/2}/2$,
\[
\alpha(x_0) = \frac{5}{8} - \frac{5}{8}\ln 2 > 0.
\]
Hence $\alpha(x)$ must be non-negative and the lemma is established.

Lemma S.4.12. Let $B^2\ln(2p) \le n\sigma^2$ and consider the random variable $W$ such that
\[
W \sim \mathrm{Bin}\!\left(n, \frac{\sigma^2}{\sigma^2 + B^2}\right).
\]
Then for $t = \sigma\sqrt{\ln\!\bigl(2p/\sqrt{\ln(2p)}\bigr)}/\sqrt{en}$, we have
\[
\mathbb{P}\!\left(W \ge \frac{n\sigma^2}{\sigma^2 + B^2} + \frac{nBt}{\sigma^2 + B^2}\right) \ge c\,\Phi\!\left(-\frac{\sqrt{2n}\,t}{\sigma}\right),
\]
where $c < 1$ is a universal constant and $\Phi(\cdot)$ is the standard normal distribution function.

Proof. The first thing to be noted is that
\[
\mathbb{P}\!\left(W \ge \frac{n\sigma^2}{\sigma^2 + B^2} + \frac{nBt}{\sigma^2 + B^2}\right) = \mathbb{P}\!\left(W \ge \left\lceil \frac{n\sigma^2}{\sigma^2 + B^2} + \frac{nBt}{\sigma^2 + B^2} \right\rceil\right),
\]
where $\lceil\cdot\rceil$ is the ceiling function. Consequently,
\[
\left\lceil \frac{n\sigma^2}{\sigma^2 + B^2} + \frac{nBt}{\sigma^2 + B^2} \right\rceil = \frac{n\sigma^2}{\sigma^2 + B^2} + \frac{nBt^*}{\sigma^2 + B^2},
\]
where $t \le t^* < t + (\sigma^2 + B^2)/(nB)$. By Zubkov and Serov (2013), we have
\[
\mathbb{P}\!\left(W \ge \left\lceil \frac{n\sigma^2}{\sigma^2 + B^2} + \frac{nBt}{\sigma^2 + B^2} \right\rceil\right) \ge \Phi\!\left(-\sqrt{2n\,H\!\left(\frac{k}{n}, p_{\sigma,B}\right)}\right),
\]
where $H(a,p) = a\ln(a/p) + (1-a)\ln((1-a)/(1-p))$, $p_{\sigma,B} = \sigma^2/(\sigma^2 + B^2)$ and $k = (n\sigma^2 + nBt^*)/(\sigma^2 + B^2)$. Moreover, $H(a,p)$ is increasing in $a$ for $a \ge p$. By Lemma 6.3 of CsiszΓ‘r and Talata (2006),
\[
H(a,p) \le \frac{(a-p)^2}{p(1-p)},
\]
whereby it suffices to show that
\[
\Phi\!\left(-\sqrt{2n}\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right)\right) \ge c\,\Phi\!\left(-\frac{\sqrt{2n}\,t}{\sigma}\right)
\]
for some $c < 1$.
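As a numerical sanity check (not part of the proof), the Zubkov–Serov lower bound on the binomial upper tail can be verified directly for small cases; the parameter values below are arbitrary choices of ours:

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def binom_tail(n, p, k):
    # exact P(Bin(n, p) >= k)
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def kl_bernoulli(a, p):
    # H(a, p) = a ln(a/p) + (1 - a) ln((1 - a)/(1 - p))
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

# Zubkov-Serov lower bound: P(W >= k) >= Phi(-sqrt(2 n H(k/n, p))) for k > np
n, p = 100, 0.3
for k in range(31, 60):
    bound = norm_cdf(-math.sqrt(2 * n * kl_bernoulli(k / n, p)))
    assert binom_tail(n, p, k) >= bound
```

The loop confirms the inequality for every $k$ strictly above the mean $np$; the proof above applies it with $k/n = p_{\sigma,B} + Bt^*/(\sigma^2+B^2)$.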
Lemma S.4.11, combined with an application of Lemma S.4.2, gives us
\[
\frac{1}{\sqrt{2\pi}}\,\frac{e^{-x^2/2}}{x+1} \le \Phi(-x) \le \frac{1}{\sqrt{2\pi}}\,\frac{e^{-x^2/2}}{x}. \tag{E.34}
\]
By (E.34),
\[
\Phi\!\left(-\sqrt{2n}\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right)\right) \ge \frac{1}{\sqrt{2\pi}}\cdot\frac{1}{1 + \sqrt{2n}\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right)}\exp\!\left(-n\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right)^2\right).
\]
Since $B^2\ln(2p) \le en\sigma^2$, we can write
\[
\sqrt{\frac{\ln(2p)}{ne}} \le \frac{\sigma}{B} \le 1.
\]
As a consequence,
\[
n\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right)^2 = \frac{nt^2}{\sigma^2} + \frac{2t}{\sigma}\cdot\frac{\sigma^2 + B^2}{B\sigma} + \frac{1}{n}\left(\frac{\sigma^2 + B^2}{\sigma B}\right)^2 = \frac{nt^2}{\sigma^2} + \frac{2t}{\sigma}\left(\frac{\sigma}{B} + \frac{B}{\sigma}\right) + \frac{1}{n}\left(\frac{\sigma}{B} + \frac{B}{\sigma}\right)^2
\]
\[
\le \frac{nt^2}{\sigma^2} + \frac{2\sqrt{\ln\!\bigl(2p/\sqrt{\ln(2p)}\bigr)}}{\sqrt{ne}}\left(1 + \frac{B}{\sigma}\right) + \frac{1}{n}\left(1 + \frac{B}{\sigma}\right)^2
\le \frac{nt^2}{\sigma^2} + \frac{2\sqrt{2\ln(2p)}}{\sqrt{ne}}\left(1 + \frac{\sqrt{ne}}{\sqrt{\ln(2p)}}\right) + \frac{1}{n}\left(1 + \frac{\sqrt{ne}}{\sqrt{\ln(2p)}}\right)^2 \tag{E.35}
\]
\[
\le \frac{nt^2}{\sigma^2} + \frac{2\sqrt{2\ln(2p)}}{\sqrt{ne}} + 2 + \left(\frac{1}{\sqrt{n}} + \frac{\sqrt{e}}{\sqrt{\ln(2p)}}\right)^2
\le \frac{nt^2}{\sigma^2} + 5 + \left(1 + \frac{\sqrt{e}}{\sqrt{\ln 2}}\right)^2. \tag{E.36}
\]
(E.35) follows by observing the simple fact that for all $p \ge 1$, $\ln\!\bigl(2p/\sqrt{\ln(2p)}\bigr) \le 2\ln(2p)$, while (E.36) follows by observing that $\sqrt{\ln(2p)/(ne)} \le 1$ and $n \ge 1$, $p \ge 1$. Taking
\[
\ln\tau = -5 - \left(1 + \frac{\sqrt{e}}{\sqrt{\ln 2}}\right)^2,
\]
we observe that
\[
\exp\!\left(-n\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right)^2\right) \ge \tau\exp\!\left(-\frac{nt^2}{\sigma^2}\right).
\]
For $t < (\sigma^2 + B^2)/(nB)$, we must have
\[
\sqrt{2n}\left(\frac{\sigma^2 + B^2}{n\sigma B}\right) \le \sqrt{\frac{2}{n}}\,\frac{\sigma}{B} + \sqrt{\frac{2}{n}}\,\frac{B}{\sigma} \le \sqrt{2} + \sqrt{\frac{2}{n}}\cdot\frac{\sqrt{ne}}{\sqrt{\ln(2p)}} \le \sqrt{2} + \frac{\sqrt{2e}}{\sqrt{\ln 2}} =: \alpha.
\]
Then
\[
1 + \sqrt{2n}\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right) \le 1 + \alpha + \frac{t\sqrt{2n}}{\sigma} \le \frac{t\sqrt{2n}}{\sigma}\bigl(1 + f(1+\alpha)\bigr),
\]
where $\sqrt{e}/f = \min_{p\ge 1}\sqrt{2\ln\!\bigl(2p/\sqrt{\ln(2p)}\bigr)}$.
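The two-sided Gaussian tail bounds in (E.34), which drive the whole argument, are easy to confirm numerically (a check of ours, not part of the proof):

```python
import math

def phi(x):
    # standard normal density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def norm_cdf_neg(x):
    # Phi(-x), computed via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# (E.34): phi(x)/(x+1) <= Phi(-x) <= phi(x)/x for x > 0
for i in range(1, 101):
    x = 0.1 * i          # x in (0, 10]
    lower = phi(x) / (x + 1.0)
    upper = phi(x) / x
    assert lower <= norm_cdf_neg(x) <= upper
```

Both bounds become tight as $x \to \infty$, which is why the constant lost in passing between them is universal.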
As a result,
\[
\Phi\!\left(-\sqrt{2n}\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right)\right) \ge \frac{\tau}{1 + f(1+\alpha)}\cdot\frac{1}{\sqrt{2\pi}}\cdot\frac{\sigma}{t\sqrt{2n}}\exp\!\left(-\frac{nt^2}{\sigma^2}\right) \ge \frac{\tau}{1 + f(1+\alpha)}\,\Phi\!\left(-\frac{t\sqrt{2n}}{\sigma}\right).
\]
For $t \ge (\sigma^2 + B^2)/(nB)$,
\[
1 + \sqrt{2n}\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right) \le \frac{t\sqrt{2n}}{\sigma}\left(2 + f\sqrt{2e}\right) =: \beta\,\frac{t\sqrt{2n}}{\sigma}.
\]
Hence in this case,
\[
\Phi\!\left(-\sqrt{2n}\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right)\right) \ge \frac{\tau}{\beta}\,\Phi\!\left(-\frac{t\sqrt{2n}}{\sigma}\right).
\]
Taking $\gamma = \max\{1 + f(1+\alpha), \beta\}$, we have the desired result
\[
\Phi\!\left(-\sqrt{2n}\left(\frac{t}{\sigma} + \frac{\sigma^2 + B^2}{n\sigma B}\right)\right) \ge \frac{\tau}{\gamma}\,\Phi\!\left(-\frac{t\sqrt{2n}}{\sigma}\right).
\]

Lemma S.4.13. For $x \in [0,1)$ and $p \ge 1$,
\[
(1-x)^{1/p} - \left(1 - \frac{x}{p(1-x)}\right) \ge 0.
\]
Proof. Taking $\mu(x) = (1-x)^{1/p} - \left(1 - \frac{x}{p(1-x)}\right)$, observe that
\[
\mu'(x) = \frac{1 - (1-x)^{1+1/p}}{p(1-x)^2} \ge 0.
\]
This combined with the fact that $\mu(0) = 0$ implies the result.

Lemma S.4.14. For $x > 0$, $y > 0$ and $xy > 1$, we must have
\[
\frac{(xy-1)/e}{\ln\bigl(1 + (xy-1)/e\bigr)} \le \frac{(1+x)y}{\ln\bigl((1+x)y\bigr)}.
\]
Proof. Note that it is enough to show that
\[
\frac{(xy-1)/e}{\ln\bigl(1 + (xy-1)/e\bigr)} \le \frac{(1+x)y - 1}{\ln\bigl((1+x)y\bigr)}.
\]
Now, since both the quantities $(xy-1)/e$ and $(1+x)y - 1$ are positive, our result would follow immediately from the fact that $x/\ln(1+x)$ is increasing for $x > 0$ if we can show that
\[
(xy-1)/e \le (1+x)y - 1.
\]
Note that we can write the following if and only if statements:
\[
(xy-1)/e \le (1+x)y - 1
\iff xy - 1 \le exy + ey - e
\iff exy - xy + ey - e + 1 \ge 0
\iff (e-1)(xy-1) + ey \ge 0,
\]
the last of which is true. Hence, the result follows.

Lemma S.4.15. Let $Y_1, \ldots, Y_n$ be independent, mean zero random variables such that
\[
|Y_i| \le K, \qquad \frac{1}{n}\sum_{i=1}^n \mathbb{E}[|Y_i|^q] \le M_q, \qquad \frac{1}{n}\sum_{i=1}^n \mathbb{E}[Y_i^2] \le \sigma^2.
\]
Then
\[
\frac{1}{n}\sum_{i=1}^n \mathbb{E}[|Y_i|^s] \le
\begin{cases}
\left(\sigma^2/M^2\right)^{\frac{q}{q-2}}\left(M_q/\sigma^2\right)^{s/(q-2)}, & \text{if } 2 \le s \le q,\\[2pt]
K^{s-q}M_q, & \text{if } s \ge q,
\end{cases}
\]
where $M := M_q^{1/q}$. If $M_q = K^{q-2}\sigma^2$, then $K^{q-2}/M^{q-2} = M^2/\sigma^2$ and hence
https://arxiv.org/abs/2504.17885v1
$\left(\sigma^2/M^2\right)^{\frac{q}{q-2}} = (M/K)^q$. This implies that if $M_q = K^{q-2}\sigma^2$, then for all $s \ge 2$ the above bound is $K^{s-q}M_q$. This moment bound is exactly the same as the one used in Bennett's inequality.

Proof. Because $|Y_i| \le K$, we get that for $s \ge q$,
\[
\frac{1}{n}\sum_{i=1}^n \mathbb{E}[|Y_i|^s] \le K^{s-q}\,\frac{1}{n}\sum_{i=1}^n \mathbb{E}[|Y_i|^q] \le K^{s-q}M_q.
\]
Using the inequality $(q-2)|Y_i|^s \le (s-2)a^{s-q}|Y_i|^q + (q-s)a^{s-2}|Y_i|^2$ for $2 \le s \le q$ and any positive $a$, we get, using Jensen's inequality,
\[
\frac{1}{n}\sum_{i=1}^n \mathbb{E}[|Y_i|^s] \le \frac{s-2}{q-2}\,a^{s-q}\,\frac{1}{n}\sum_{i=1}^n \mathbb{E}[|Y_i|^q] + \frac{q-s}{q-2}\,a^{s-2}\,\frac{1}{n}\sum_{i=1}^n \mathbb{E}[|Y_i|^2]
\le \frac{s-2}{q-2}\,a^{s-q}M_q + \frac{q-s}{q-2}\,a^{s-2}\sigma^2.
\]
Minimizing over $a > 0$, we choose $a = (M_q/\sigma^2)^{1/(q-2)}$ and this yields
\[
\frac{1}{n}\sum_{i=1}^n \mathbb{E}[|Y_i|^s] \le M_q^{(s-2)/(q-2)}\,\sigma^{2(q-s)/(q-2)} = \left(\frac{\sigma^2}{M^2}\right)^{\frac{q}{q-2}}\left(\frac{M_q}{\sigma^2}\right)^{s/(q-2)}, \qquad \text{for } 2 \le s \le q.
\]
This completes the proof.

Lemma S.4.16. For $x > \sqrt{e}$,
\[
\frac{1}{6\ln x} \le \frac{1}{3\ln(1+x)} \le \frac{\Psi(x)}{(1+x)\ln^2(1+x)} \le \frac{1}{\ln(1+x)} \le \frac{1}{\ln x}.
\]
Proof. The upper bounds are a bit easier to establish, so let us prove them first. Recall that $\Psi(x) = (1+x)\ln(1+x) - x$ and hence
\[
\frac{(1+x)\ln(1+x) - x}{(1+x)\ln^2(1+x)} = \frac{1}{\ln(1+x)} - \frac{x}{(1+x)\ln^2(1+x)} \le \frac{1}{\ln(1+x)} \le \frac{1}{\ln x}.
\]
Now we focus on establishing the lower bounds. Showing that
\[
\frac{\Psi(x)}{(1+x)\ln^2(1+x)} \ge \frac{1}{3\ln(1+x)}
\]
is equivalent to showing that
\[
\xi(x) = \frac{2}{3}\Psi(x) - \frac{x}{3} \ge 0
\]
for all $x \ge \sqrt{e}$. This is fairly easy to see once we have taken note of the fact that $\xi'(x) = \frac{2}{3}\ln(1+x) - \frac{1}{3}$ is increasing, with $\xi'(\sqrt{e}) > 0$ and $\xi(\sqrt{e}) > 0$. Once this is established, note that for $x > \sqrt{e} > (\sqrt{5}+1)/2$, the function $x^2 - x - 1$ is positive and increasing. Therefore for $x > \sqrt{e}$ we have
\[
x^2 > 1 + x \implies 2\ln x > \ln(1+x) \implies 6\ln x > 3\ln(1+x) \implies \frac{1}{6\ln x} \le \frac{1}{3\ln(1+x)},
\]
and we are done.

Lemma S.4.17. Consider the equation
\[
\frac{x^q}{q\ln x} = c,
\]
where $x$ is known to be larger than $\sqrt{e}$ and $q \ge 1$. Then $\bigl(c\ln(c\ln(c\ln c\cdots))\bigr)^{1/q}$ is a solution to the equation. Further, this solution is $O\bigl((c\ln c)^{1/q}\bigr)$.

Proof. Consider first the equation
\[
\frac{y}{\ln y} = r,
\]
where $r > e$. Define the sequence $f_n = r\ln f_{n-1}$ with $f_1 = r$.
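This fixed-point iteration is straightforward to run numerically; a minimal sketch of ours (the value $r = 10$ is an arbitrary choice) confirms that the limit solves $y/\ln y = r$ and lies between $r\ln r$ and $2r\ln r$:

```python
import math

def iterate_fixed_point(r, steps=200):
    # f_1 = r, f_n = r * ln(f_{n-1}); increasing and bounded above by r^2
    f = r
    for _ in range(steps):
        f = r * math.log(f)
    return f

r = 10.0                      # any r > e works
f_inf = iterate_fixed_point(r)
# the limit solves y / ln(y) = r ...
assert abs(f_inf / math.log(f_inf) - r) < 1e-9
# ... and satisfies r ln r <= f_inf <= 2 r ln r
assert r * math.log(r) <= f_inf <= 2 * r * math.log(r)
```

The iteration contracts at rate $r/f_\infty < 1$ near the limit, so a few hundred steps reach machine precision.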
It is easy to see that this is an increasing sequence in $n$. Further, $f_n \le r^2$ for all $n$, by induction, since $f_{n-1} \le r^2$ implies that
\[
f_n = r\ln f_{n-1} \le r\ln r^2 = 2r\ln r \le r^2.
\]
As $f_n$ is a positive, increasing sequence that is bounded above, the limit $f_\infty = \lim_{n\to\infty} f_n = r\ln(r\ln(r\ln(\cdots)))$ exists finitely.

Consider now the original equation. There, $x^q$ plays the role of $y$. Moreover, the left hand side is increasing in $x$, and if we can show that $(\sqrt{e})^q/(q\ln\sqrt{e}) > e$ for all $q \ge 1$, then we have established the first part of the lemma. Observe that for the function $g(x) = x/2 - \ln(x/2)$, the minimum occurs at the point $x = 2$ and the resultant minimum value is $1$. Therefore $2e^{x/2}/x$ takes a minimum value of $e^1 = e$. So we can say that the equation
\[
\frac{x^q}{q\ln x} = c
\]
has $\bigl(c\ln(c\ln(c\ln c\cdots))\bigr)^{1/q}$ as a solution. As for the second part of the result, observe that since the sequence $f_n$ is increasing, $r\ln r \le f_\infty$, and since each term in the sequence satisfies $f_n \le r^2$, we have $f_\infty \le r^2$. Consequently, $f_\infty = r\ln f_\infty \le r\ln r^2 = 2r\ln r$. Therefore
\[
(c\ln c)^{1/q} \le \bigl(c\ln(c\ln(c\ln c\cdots))\bigr)^{1/q} \le (2c\ln c)^{1/q}.
\]

Lemma S.4.18. Consider the function
\[
f(x) = -xW\!\left(-\frac{1}{x}\right) = \exp\!\left(-W\!\left(-\frac{1}{x}\right)\right)
\]
for $x \ge e$. Then $1 \le f(x) \le e$.

Proof. The way we go about proving this result is by showing that the function $f$ is decreasing. Observe that since the Lambert $W$ function is strictly increasing on $[-1/e, \infty)$, we have
\[
f'(x) = -\exp\!\left(-W\!\left(-\frac{1}{x}\right)\right)W'\!\left(-\frac{1}{x}\right)\frac{1}{x^2} < 0.
\]
Moreover, $f(e) = e$ and
\[
\lim_{x\to\infty} f(x) = \lim_{y\to 0^-} \frac{W(y)}{y} = \lim_{y\to 0^-} e^{-W(y)} = 1,
\]
and we are done.

Lemma S.4.19. Suppose $a, b > 0$. Then
\[
\inf_{t>0} \frac{a(e^{bt} - 1 - bt) + x}{t} = ab\,\Psi^{-1}(x/a).
\]
Proof. The infimum is attained at $t^*$ satisfying
\[
t\left[abe^{bt} - ab\right] - ae^{bt} + a + abt = x.
\]
This is equivalent to
\[
bte^{bt} - e^{bt} + 1 = x/a \iff e^{bt}(bt - 1) + 1 = x/a. \tag{E.37}
\]
Taking $y = e^{bt} - 1$, this equation is the same as $(y+1)\ln(y+1) - y = x/a$, or equivalently, $\Psi(y) = x/a$.
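The closed form of Lemma S.4.19 can be sanity-checked numerically; the values of $a$, $b$, $x$ below are arbitrary choices of ours, and $\Psi^{-1}$ is inverted by bisection:

```python
import math

def psi(y):
    # Psi(y) = (1 + y) ln(1 + y) - y
    return (1.0 + y) * math.log(1.0 + y) - y

def psi_inv(v, lo=0.0, hi=1e6):
    # invert the increasing function Psi on [0, inf) by bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if psi(mid) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, b, x = 2.0, 3.0, 5.0          # arbitrary positive test values
objective = lambda t: (a * (math.exp(b * t) - 1.0 - b * t) + x) / t
t_star = math.log(1.0 + psi_inv(x / a)) / b     # the minimizer from the lemma
claimed = a * b * psi_inv(x / a)                # claimed infimum a*b*Psi^{-1}(x/a)
assert abs(objective(t_star) - claimed) < 1e-6
# t_star is (near-)optimal: nearby points do no better
assert objective(t_star * 0.9) >= claimed - 1e-9
assert objective(t_star * 1.1) >= claimed - 1e-9
```

The check mirrors the first-order condition (E.37): at $t^*$ the objective equals $ab(e^{bt^*}-1) = ab\,\Psi^{-1}(x/a)$.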
Hence, $t^* = \frac{1}{b}\ln\bigl(1 + \Psi^{-1}(x/a)\bigr)$. Observe that from (E.37)
Score-Based Deterministic Density Sampling

Vasily Ilin†, Peter Sushko‑, Jingwei Hu†
†University of Washington   ‑Allen Institute for AI
vilin@uw.edu

Abstract

We propose a deterministic sampling framework using Score-Based Transport Modeling for sampling an unnormalized target density Ο€ given only its score βˆ‡log Ο€. Our method approximates the Wasserstein gradient flow on KL(f_t β€– Ο€) by learning the time-varying score βˆ‡log f_t on the fly using score matching. While having the same marginal distribution as Langevin dynamics, our method produces smooth deterministic trajectories, resulting in monotone noise-free convergence. We prove that our method dissipates relative entropy at the same rate as the exact gradient flow, provided sufficient training. Numerical experiments validate our theoretical findings: our method converges at the optimal rate, has smooth trajectories, and is usually more sample efficient than its stochastic counterpart. Experiments on high dimensional image data show that our method produces high quality generations in as few as 15 steps and exhibits natural exploratory behavior. The memory and runtime scale linearly in the sample size.

1 Introduction

Diffusion generative modeling (DGM) [25, 26] has emerged as a powerful set of techniques to generate "more of the same thing", i.e., given many samples from some unknown distribution Ο€, train a model to generate more samples from Ο€. While SDE-based generation is commonly used in practice, its deterministic counterpart, termed "probability flow ODE" by [26], offers several practical advantages: higher order solvers [11], better dimension dependence [4], and interpolation in the latent space [24]. The two processes are given by the reverse of the Ornstein-Uhlenbeck (OU) process and the corresponding ODE, respectively:

dX_t = (X_t + 2 βˆ‡log f_t(X_t)) dt + √2 dB_t,    (DGM SDE)
dX_t = (X_t + βˆ‡log f_t(X_t)) dt,                (DGM ODE)

where f_t is the law of the OU process.
The score βˆ‡log f_t is learned by running the OU process from Ο€ to N(0, I_d), which crucially depends on having many samples from Ο€. In this work we pose and attempt to answer the following question: how to deterministically sample an unnormalized density Ο€ in the absence of samples, using techniques of diffusion generative modeling?

Preprint. Under review. arXiv:2504.18130v2 [cs.LG] 17 May 2025

Figure 1: Left: Langevin dynamics (stochastic). Right: ours (deterministic). The deterministic algorithm has the same marginal distributions as the stochastic one but with smooth trajectories.

This is a much harder problem than DGM. Indeed, DGM can be reduced to unnormalized density sampling by estimating the score βˆ‡log Ο€ on the samples, but the converse is not true: recently [10] proved an exponential-in-d query complexity lower bound for density sampling. Our main contributions are a deterministic sampling algorithm, scalable to high dimensions, and a proof of fast convergence to log-concave distributions. Unlike the classical Langevin dynamics, our algorithm produces smooth deterministic trajectories, and gives access to the otherwise intractable score βˆ‡log f_t. Unlike other deterministic interacting particle systems, the memory and runtime scale linearly in the number of particles. The convergence analysis is based on a system of coupled gradient flows, a novel framework that can be of independent interest as a general technique for analyzing convergence of NN-based approximations of gradient flows. We are not aware of any other work that uses the neural tangent
kernel for analyzing a dynamically changing loss. Our main theoretical result is Theorem 4.6. In Section 3 we introduce the main algorithm, building on Score-Based Transport Modeling [1]. In Section 4 we prove that the resulting dynamics achieve the optimal convergence rate. Finally, in Section 5 we verify convergence in several numerical experiments. We give the proofs and additional experiments in the Appendix.

2 Related Work

Sampling from unnormalized densities traditionally relies on stochastic algorithms such as Langevin dynamics, whose fast convergence under isoperimetry assumptions is well understood [7, 28, 27, 5]. Deterministic alternatives, notably Stein Variational Gradient Descent (SVGD) [19, 8], rely heavily on handcrafted kernels, resulting in limited theoretical guarantees and empirical success, especially in high dimensions [30]. Inspired by deterministic probability-flow ODEs from diffusion generative modeling [13, 25, 26, 4, 11], which often outperform their stochastic counterparts due to high-order integrators, better dimension dependence, and latent-space interpolation capabilities [24, 20], Boffi & Vanden-Eijnden [1] proposed Score-Based Transport Modeling (SBTM) as a general method of solving the Fokker-Planck equation with a deterministic interacting particle system, building on neural approximations of gradient flows [29, 9]. However, it was not used for sampling. Our work fills this gap by adapting the method to the problem of sampling, providing entropy dissipation guarantees, and proving rapid convergence for the coupled particle system dynamics, integrating smoothly with recent annealing approaches like those proposed by [23, 6, 2].

3 Wasserstein gradient flow

Diffusion generative modeling, (DGM SDE) and (DGM ODE), converges quickly because it is the reverse of a fast process, namely the OU process. In the absence of samples from Ο€, there is no known numerically tractable process that can be run from Ο€ to f_0 and reversed.
Thus, the standard approach to classical sampling is to greedily minimize a divergence, most commonly the relative entropy KL(f_t β€– Ο€) := E_{f_t} log(f_t/Ο€), between f_t and Ο€ at every time step. In continuous time, this is called a Wasserstein gradient flow (GF). The Wasserstein GF on KL(Β· β€– Ο€) can be implemented stochastically or deterministically, mimicking equations (DGM SDE) and (DGM ODE):

dX_t = βˆ‡log Ο€(X_t) dt + √2 dB_t,   B_t := Brownian motion,   (GF SDE)
dX_t = βˆ‡log Ο€(X_t) dt βˆ’ βˆ‡log f_t(X_t) dt,   f_t := law(X_t).   (GF ODE)

Figure 2: Particles are pulled towards the target Ο€ and away from the particle density f_t. Brownian motion acts randomly.

Remarkably, the distributions of X_t in equations (GF SDE) and (GF ODE) match exactly. In this work we simulate the latter. See [15] for the rigorous derivation and the connection to Optimal Transport. The term βˆ’βˆ‡log f_t in the ODE provides deterministic exploration by making samples X^1_t, ..., X^n_t repel from each other, since more particles appear where f_t is larger. We visualize the forces βˆ‡log Ο€, βˆ’βˆ‡log f_t and the noise √2 dB_t in Figure 2.

Remark 3.1. While equations (DGM ODE) and (GF ODE) look similar, there is a big difference in how f_t is defined. In (DGM ODE), f_t is given by the OU process started from Ο€. In (GF ODE), f_t follows the gradient flow started from f_0, usually chosen to be Gaussian N(0, 1).

3.1 Score-Based Transport Modeling

Score-based transport modeling (SBTM) was introduced by [1] as a method of solving Fokker-Planck equations, of which (GF ODE) is a special case.
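The SDE/ODE equivalence can be made concrete in a small worked example of ours (not from the paper): for Ο€ = N(0, 1) and a Gaussian initialization f_0 = N(0, σ₀²), the marginal of both (GF SDE) and (GF ODE) stays Gaussian, and its variance obeys the same ODE dΟƒ_tΒ²/dt = 2(1 βˆ’ Οƒ_tΒ²), with closed form Οƒ_tΒ² = 1 + (σ₀² βˆ’ 1)e^{βˆ’2t}:

```python
import math

sigma2_0 = 0.25          # initial variance (arbitrary choice)
dt, T = 1e-4, 2.0
sigma2 = sigma2_0
for _ in range(int(T / dt)):
    # shared variance ODE of (GF SDE) and (GF ODE) when pi = N(0, 1)
    sigma2 += dt * 2.0 * (1.0 - sigma2)
closed_form = 1.0 + (sigma2_0 - 1.0) * math.exp(-2.0 * T)
assert abs(sigma2 - closed_form) < 1e-3
```

The variance relaxes exponentially to Var(Ο€) = 1, which is the scalar shadow of the exponential KL decay under a log-Sobolev inequality.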
Additionally, SBTM was recently successfully applied to other Fokker-Planck-type equations [14, 21, 12]. The core idea of SBTM is to approximate βˆ‡log f_t with a neural network s_{Θ_t} trained on the samples X^1_t, ..., X^n_t. The training is done by minimizing the score matching loss from [13, 25]. The SBTM dynamics are given by the interacting particle system

dX_t/dt = βˆ‡log Ο€(X_t) βˆ’ s_{Θ_t}(X_t),   X_0 ∼ f_0 i.i.d.,
dΘ_t/dt = βˆ’Ξ· βˆ‡_{Θ_t} L(s_{Θ_t}, f_t),   X_t ∼ f_t,          (SBTM)

where L(s, f_t) = E_{f_t} β€–s βˆ’ βˆ‡log f_tβ€–Β² is the dynamically changing score-matching loss [13] and Ξ·(t) controls the amount of NN training relative to the sampling dynamics. Unlike most other interacting particle systems used for sampling [19, 18, 6], in SBTM the particles X^1_t, ..., X^n_t interact via the neural network s_{Θ_t}, which makes the memory and runtime scale linearly, and provides much better dimension scaling.

Remark 3.2. The neural network trains on the particles that were produced by following the same neural network, as opposed to the true solution of the Wasserstein GF (GF ODE). Common-sense intuition makes one suspect that local errors will accumulate exponentially. Miraculously, this does not happen; see Remark 4.3.

4 Convergence analysis

How quickly does f_t converge to Ο€ in (SBTM)? To answer this question we study the decay rate of the relative entropy KL(f_t β€– Ο€) [5]. Our strategy is to break down the change in KL(f_t β€– Ο€) into the change due to the density f_t and the change due to the neural network s_{Θ_t}. Theorem 4.2 does the former, and Theorem 4.4 does the latter. Theorem 4.6 brings both pieces together. Theorem 4.8 extends Theorem 4.2 to the annealed setting.

4.1 Entropy dissipation

First, recall the following classical result that quantifies the rate of relative entropy decay for the Wasserstein GF on relative entropy:

Theorem 4.1. If f_t follows the Wasserstein GF on KL(Β· β€– Ο€) then the relative entropy dissipates at the rate of the relative Fisher information:

βˆ’d/dt KL(f_t β€– Ο€) = F(f_t β€– Ο€) := E_{f_t} β€–βˆ‡log(f_t/Ο€)β€–Β².
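The (SBTM) loop above can be sketched in one dimension with a deliberately tiny "network": the sketch below is our own toy construction, not the paper's JAX code. The score model is a single parameter ΞΈ with s_ΞΈ(x) = ΞΈx, trained by gradient descent on the HyvΓ€rinen form of the score-matching objective E[sΒ² + 2s'] from [13] (its minimizer over linear s is exactly the Gaussian score βˆ’x/E[xΒ²]), and the target is Ο€ = N(0, 1):

```python
import math, random

random.seed(0)
n, dt, steps, lr = 2000, 0.01, 300, 0.1
xs = [random.gauss(0.0, 0.5) for _ in range(n)]   # X_0 ~ f_0 = N(0, 0.25)

m2 = sum(x * x for x in xs) / n
theta = -1.0 / m2          # well-trained initialization (small initial loss)

for _ in range(steps):
    # train: a few gradient steps on the empirical Hyvarinen loss
    # J_n(theta) = theta^2 * mean(x^2) + 2 * theta   for s_theta(x) = theta * x
    m2 = sum(x * x for x in xs) / n
    for _ in range(5):
        theta -= lr * (2.0 * theta * m2 + 2.0)
    # transport: dX/dt = grad log pi(X) - s_theta(X) = -X - theta * X
    xs = [x + dt * (-x - theta * x) for x in xs]

var = sum(x * x for x in xs) / n
assert abs(var - 1.0) < 0.15          # sample variance approaches Var(pi) = 1
assert abs(theta + 1.0 / var) < 0.1   # learned score ~ -x/var = grad log f_t
```

The trajectories are fully deterministic given the initial draw, and the learned ΞΈ tracks the true score coefficient βˆ’1/Οƒ_tΒ² throughout, mirroring the interplay of the two equations in (SBTM).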
Additionally, if Ο€ satisfies the log-Sobolev inequality with constant Ξ± (e.g. if Ο€ is Ξ±-log-concave) then KL(f_t β€– Ο€) ≀ KL(f_0 β€– Ο€) e^{βˆ’t/Ξ±}.

Intuitively, if s_t β‰ˆ βˆ‡log f_t, SBTM should converge at the nearly optimal rate βˆ’d/dt KL(f_t β€– Ο€) = F(f_t β€– Ο€). Indeed, this holds if the score matching loss is small:

Theorem 4.2 (Small loss guarantees optimal entropy dissipation). If f_t is the density of X_t, which follows

dX_t/dt = βˆ‡log Ο€(X_t) βˆ’ s_t(X_t),   X_0 ∼ f_0,

for any time-dependent vector field s_t, then

βˆ’d/dt KL(f_t β€– Ο€) = F(f_t β€– Ο€) βˆ’ E_{f_t} ⟨s_t βˆ’ βˆ‡log f_t, βˆ‡log(f_t/Ο€)⟩ β‰₯ (1/2) F(f_t β€– Ο€) βˆ’ (1/2) L(s_t, f_t).   (4.1)

In particular, if L(s_t, f_t) ≀ (1/2) F(f_t β€– Ο€), then βˆ’d/dt KL(f_t β€– Ο€) β‰₯ (1/4) F(f_t β€– Ο€).

While Theorem 4.2 looks simple, it elucidates a remarkable property of SBTM:

Remark 4.3. Integrating (4.1) in time, one obtains

KL(f_T β€– Ο€) ≀ KL(f_0 β€– Ο€) βˆ’ (1/2) ∫_0^T F(f_t β€– Ο€) dt + (1/2) ∫_0^T L(s_t, f_t) dt.

Since there is no exponential term e^T in the bound, local errors do not accumulate.

We now show how to ensure that the loss stays small despite the changing f_t.

4.2 Bounding score matching loss

With sufficient size and amount of training, a neural network can fit arbitrary (finite) data, including dynamically changing data. The next theorem shows that the score-matching loss does not increase, given sufficient training.

Theorem 4.4 (Sufficient training guarantees bounded loss). Assume that f_t is any time-dependent density, X^1_t, ..., X^n_t are distributed according
to f_t, and the neural network s_t = s_{Θ_t} is trained with gradient descent

dΘ_t/dt = βˆ’Ξ· βˆ‡_{Θ_t} L_n(s_t, f_t),   L_n(s_t, f_t) := (1/n) Ξ£_{i=1}^n β€–s(X^i_t) βˆ’ βˆ‡log f_t(X^i_t)β€–Β²,

and is such that the Neural Tangent Kernel (NTK)

H^{i,j}_{Ξ±,Ξ²}(t) = Ξ£_{k=1}^N βˆ‡_{ΞΈ_k} s^Ξ±_t(X^i_t) βˆ‡_{ΞΈ_k} s^Ξ²_t(X^j_t),   s(x) = (s^1(x), ..., s^d(x)),

is lower bounded by Ξ» = Ξ»(t) > 0, i.e. β€–Hvβ€–β‚‚ β‰₯ Ξ»β€–vβ€–β‚‚. Then as long as

Ξ·(t) β‰₯ (n/Ξ») (βˆ‚/βˆ‚Ο„)|_{Ο„=t} log L_n(s_t, f_Ο„),

the loss L_n(s_t, f_t) is non-increasing in time.

Remark 4.5. The assumption that the NTK is lower bounded in Theorem 4.4 is non-trivial, but it holds under relatively mild assumptions [16]; the most restrictive one is that the NN size is superlinear in the number of datapoints. We expect that even this can be weakened under additional regularity assumptions on the target function.

Combining Theorems 4.1 and 4.4 shows that the convergence rate of SBTM matches the convergence rate of the true GF (GF ODE). This is the main theoretical guarantee of our sampling method.

Theorem 4.6. Suppose that X_t and Θ_t follow (SBTM) and Ξ· and H are as in Theorem 4.4. If the initial loss is small and the true loss L is well approximated by the training loss L_n,

L_n(s_0, f_0) ≀ (1/4) Ξ΅,   Ξ΅ := inf_{t ≀ T} F(f_t β€– Ο€),   (4.2)
|L_n(s_t, f_t) βˆ’ L(s_t, f_t)| ≀ (1/4) Ξ΅,   (4.3)

then SBTM dissipates relative entropy at the optimal rate:

βˆ’d/dt KL(f_t β€– Ο€) β‰₯ (1/4) F(f_t β€– Ο€).

If Ο€ satisfies the log-Sobolev inequality with constant Ξ± then convergence is exponential:

KL(f_t β€– Ο€) ≀ KL(f_0 β€– Ο€) e^{βˆ’t/(4Ξ±)}.

Remark 4.7. Since s_t β‰ˆ βˆ‡log f_t, the relative Fisher information may be approximated by

F(f_t β€– Ο€) β‰ˆ F_n(f_t β€– Ο€) := (1/n) Ξ£_{i=1}^n β€–s(X^i) βˆ’ βˆ‡log Ο€(X^i)β€–Β².

One may stop the sampling when F_n(f_t β€– Ο€) ≀ Ξ΅ for some predefined Ξ΅. This makes condition (4.2) practical. If the particles X^1(t), ..., X^n(t) were independent, the Law of Large Numbers would imply lim_{nβ†’βˆž} |L_n(s_t, f_t) βˆ’ L(s_t, f_t)| = 0, satisfying condition (4.3) for large enough n.
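Condition (4.3) is a Monte Carlo approximation statement, which a small synthetic check makes tangible (a toy setup of ours: f = N(0, σ²) with a deliberately imperfect linear score s(x) = cx, for which the true loss has the closed form L = (c + 1/σ²)² σ²):

```python
import math, random

random.seed(1)
sigma2, c, n = 2.0, -0.3, 200000
true_score = lambda x: -x / sigma2         # grad log f for f = N(0, sigma2)
s = lambda x: c * x                        # a deliberately imperfect score model

xs = [random.gauss(0.0, math.sqrt(sigma2)) for _ in range(n)]
L_n = sum((s(x) - true_score(x)) ** 2 for x in xs) / n   # empirical loss
L = (c + 1.0 / sigma2) ** 2 * sigma2                     # closed-form loss
assert abs(L_n - L) / L < 0.05
```

With n = 200,000 i.i.d. samples the relative gap |L_n βˆ’ L|/L is well under 1%, consistent with the O(1/√n) Law of Large Numbers rate invoked in Remark 4.7.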
Further, for interacting particle systems, propagation of chaos usually holds, namely that in the limit n β†’ ∞ the particles X^i_t become independent. In numerical experiments, we treat L_n as L, and confirm the optimal rate of relative entropy dissipation.

4.3 Annealed dynamics

Classical sampling can be broken down into three distinct parts: mode discovery, mode weighting, and mode approximation. While Langevin dynamics performs mode approximation very fast, both mode discovery and mode weighting are challenging. For example, in the absence of samples from Ο€, the mere discovery of a mode with support of small width h in d dimensions requires Ξ©(h^{βˆ’d}) function evaluations [10]. Empirically, it helps to pick an annealing between f_0 and Ο€ to "guide" f_t, similar to how the forward OU process guides the reverse process in DGM. We use the geometric annealing Ο€_t ∝ f_0^{1βˆ’t} Ο€^t [23, 22, 3] and the dilation annealing Ο€_t(x) ∝ Ο€(x/t) [2]. One obtains a similar entropy dissipation estimate in the annealed dynamics. Taking Ο€_t = Ο€ recovers Theorem 4.2.

Theorem 4.8. If Ο€_t is any time-dependent density, s_t any time-dependent vector field, and

dX_t/dt = βˆ‡log Ο€_t(X_t) βˆ’ s_t(X_t),

then

βˆ’d/dt KL(f_t β€– Ο€) = E_{f_t} ⟨s_t βˆ’ βˆ‡log Ο€, βˆ‡log(f_t/Ο€_t)⟩ βˆ’ E_{f_t} ⟨s_t βˆ’ βˆ‡log f_t, βˆ‡log(f_t/Ο€)⟩.

In particular, if L(s_t, f_t) = 0, then

βˆ’d/dt KL(f_t β€– Ο€) = E_{f_t} βŸ¨βˆ‡log(f_t/Ο€), βˆ‡log(f_t/Ο€_t)⟩.   (4.4)

Theorem 4.8 gives a way to test for L(s_t, f_t) = 0 by testing the equality (4.4). This is important, because the loss L(s_t, f_t) cannot be computed from a finite sample X^1_t,
..., X^n_t of f_t, since βˆ‡log f_t is unknown. Empirically, we confirm that (4.4) holds, indicating a small loss; see Figure 5.

5 Experiments

We demonstrate the optimal rate of relative entropy dissipation in several experiments, including challenging non-log-concave targets. We compare SBTM to its stochastic counterpart given by (GF SDE). Additionally, we demonstrate the flexibility of SBTM by simulating annealed Langevin dynamics, and compare to the corresponding SDEs. We do not extensively compare SBTM to SVGD [19] because, without additional tricks such as momentum and cherry-picking the kernel bandwidth, we were unable to obtain decent performance from SVGD; see Tables 1 and 2. We emphasize that while the numerical experiments presented below would be trivial in the context of DGM, they are quite challenging in our context. We use a three-layer ResNet of width 128 for 1D experiments, and increase it to five layers for 2D experiments. For the MNIST experiment we use an 8-layer U-Net. Other hyperparameters were found empirically, and are specified in the Appendix. All experiments were done in JAX on a single TITAN X Pascal 12GB GPU. The runtime of the longest experiment is 90 minutes. The code repository is Vilin97/SBTM-sampling.

5.1 Log-concave target

Table 1: KL divergence (↓) between the sample and the target Ο€ = N(0, 1), using time step 0.002 and final time 2.5. SBTM exhibits better sample efficiency, likely due to determinism.

sample size    100     300     1000    3000    10000
SBTM (ours)    0.013   0.0032  0.0019  0.0020  0.00099
SDE            0.020   0.022   0.0094  0.0036  0.0012
SVGD           0.33    0.23    0.16    0.20    0.29

First, we consider an example that admits an analytic solution f_t = N(0, 1 βˆ’ e^{βˆ’2(t+0.1)}), which allows us to compute the true entropy dissipation and the L2 distance to the true solution. We use sample size n = 1000. Moreover, SBTM exhibits the optimal relative entropy dissipation rate, as evidenced by the close alignment of the orange, green, and red lines in the left panel of Figure 3.
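The magnitudes in Table 1 can be cross-checked against the analytic solution: for f_t = N(0, Οƒ_tΒ²) with Οƒ_tΒ² = 1 βˆ’ e^{βˆ’2(t+0.1)} and Ο€ = N(0, 1), the Gaussian formula KL(f_t β€– Ο€) = (Οƒ_tΒ² βˆ’ 1 βˆ’ ln Οƒ_tΒ²)/2 decays monotonically to essentially zero by the final time 2.5 (our check, not the paper's code):

```python
import math

def kl_gauss(sigma2):
    # KL( N(0, sigma2) || N(0, 1) ) = (sigma2 - 1 - ln sigma2) / 2
    return 0.5 * (sigma2 - 1.0 - math.log(sigma2))

sigma2 = lambda t: 1.0 - math.exp(-2.0 * (t + 0.1))     # analytic solution
kls = [kl_gauss(sigma2(0.25 * k)) for k in range(11)]   # t = 0, 0.25, ..., 2.5
assert all(a > b for a, b in zip(kls, kls[1:]))         # monotone decay
assert kls[-1] < 1e-4                                   # ~0 at final time 2.5
```

The residual KL of the exact flow at t = 2.5 is about 10⁻⁡, so the Table 1 numbers are dominated by finite-sample error rather than by incomplete convergence.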
Table 1 shows that SBTM achieves a smaller KL divergence than the SDE. The deterministic nature of SBTM allows it to achieve better sample efficiency than the noisy SDE. While the marginal density of X^1_t is the same between SBTM and SDE, the whole particle ensemble X^1_t, ..., X^n_t has different distributions. A more detailed investigation of the ensemble properties of SBTM is left as future work.

Figure 3: Left: entropy dissipation of SBTM (ours) and SDE (stochastic). SBTM approximates the entropy decay rate perfectly, while the SDE is extremely noisy. Right: L2 error to the true solution. SBTM produces lower error with a much smoother trajectory.

5.2 Gaussian mixture

Table 2: KL divergence (↓) between the sample and the target Ο€ = (1/4)N(βˆ’2, 1) + (3/4)N(2, 1), using time step 0.01 and final time 10. Top: non-annealed. Bottom: 100 particles.

sample size    100     300     1000    3000    10000
SBTM (ours)    0.022   0.018   0.0082  0.0082  0.0036
SDE            0.029   0.013   0.014   0.0068  0.0043
SVGD           2.8     1.4     2.4     2.1     2.0

annealing      Non-annealed   Geometric   Dilation
SBTM (ours)    0.022          0.058       0.037
SDE            0.029          0.060       0.062
SVGD           2.800          0.470       0.480

This and all other examples do not admit an analytic solution,
but we can still compare the quality of the final sample as well as the trajectories of the SDE and SBTM. We use f_0 = N(0, 1) here and in further experiments. Here we sample from the Gaussian mixture Ο€ = (1/4)N(βˆ’2, 1) + (3/4)N(2, 1) with n = 1000. As evidenced by the change of slope at t = 2.5 in Figure 4, the underlying Markov chain enters metastability, which plagues the convergence to non-log-concave targets. Here the non-log-concavity is mild, so the process still converges in a reasonable time frame. Table 2 shows that SBTM achieves a competitive KL divergence.

Figure 4: Left: KL divergence of SBTM (ours) and SDE (stochastic) over time. SBTM exhibits smoother convergence. Right: entropy dissipation of SBTM and SDE. SBTM approximates the entropy decay rate perfectly, while the SDE is noisy.

Figure 6: Top: SBTM (ours). Bottom: SDE [2]. SBTM separates into modes early on, compared to the SDE.

5.3 Gaussian mixture with dilation annealing

Figure 5: The estimate in (4.4) holds empirically, indicating good score approximation.

To sample from a Gaussian mixture with 16 well-spaced modes we employ the annealing schedule Ο€_t(x) = Ο€((T/t) x) from [2]. Figure 6 shows the densities at different time points, and Figure 5 shows the entropy dissipation as in (4.4). This is a challenging example due to the extreme non-log-concavity of the target. With the large sample size of 20,000 and 10,000 steps, this experiment took only 90 minutes on a single TITAN X GPU, leveraging the linear scaling of the memory and time complexity of SBTM.

5.4 High-dimension experiments

Figure 7: Sampled MNIST digits using f_0 = N(0, 1), sample size 64, time step 10^{βˆ’3}, 30 steps, and a variable amount of intermediate training. Digits generated by SBTM are competitive in visual quality.

To evaluate the scaling of our method to high-dimensional settings, we apply SBTM to produce MNIST digits [17] (CC-BY-SA 3.0 license), which corresponds to sampling a 784-dimensional distribution.
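For intuition about the annealing paths (a worked example of ours, not from the paper): between two Gaussians, the geometric path Ο€_t ∝ f_0^{1βˆ’t} Ο€^t is again Gaussian, with precisions interpolating linearly. The check below confirms that (1βˆ’t) log f_0 + t log Ο€ differs from the interpolated Gaussian's log-density only by a constant in x:

```python
mu, sigma2, t = 3.0, 0.5, 0.4     # arbitrary target N(mu, sigma2) and t in [0,1]
log_f0 = lambda x: -0.5 * x * x                        # log N(0,1), up to const
log_pi = lambda x: -0.5 * (x - mu) ** 2 / sigma2       # log N(mu, sigma2), up to const
log_mix = lambda x: (1 - t) * log_f0(x) + t * log_pi(x)

lam = (1 - t) + t / sigma2        # interpolated precision
mean = (t * mu / sigma2) / lam    # interpolated mean
log_interp = lambda x: -0.5 * lam * (x - mean) ** 2

# difference is constant in x => pi_t is N(mean, 1/lam) after normalization
diffs = [log_mix(x) - log_interp(x) for x in (-2.0, -0.5, 0.0, 1.0, 2.5)]
assert max(diffs) - min(diffs) < 1e-9
```

This is why geometric annealing keeps intermediate targets tractable for Gaussian-like components, while dilation annealing instead rescales the whole target so that widely separated modes start out overlapping.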
We sample directly in pixel space, without using a VAE encoder, to purposefully maintain the high dimensionality of the data. To obtain βˆ‡log Ο€ we train a U-Net with score matching. Figure 7 shows that SBTM produces high quality results, comparable to the SDE sample. Figure 8 shows that SBTM maintains exploration in high dimensional spaces. The amount of training controls the strength of interaction between particles. With sufficient training, particles repel from each other, providing deterministic exploration. To confirm that βˆ‡log f_t is being meaningfully learned, we plot the cosine similarity across time in Figure 9. More training epochs per step leads to faster learning of the target score βˆ‡log Ο€.

Figure 8: Starting from the same initial point, SBTM produces distinct sample trajectories depending on the training schedule. The amount of training controls the strength of interaction between particles. Top to bottom: SBTM without training (equivalent to gradient ascent on log Ο€), SBTM with a small amount of training, SBTM with a large amount of training, and finally the SDE (Langevin).

Figure 9: Cosine similarity between the true score βˆ‡log Ο€ and the learned score βˆ‡log f_t over simulation time. Even with very little training, the model learns the score well.

6 Conclusion

In this work we use
tools and intuition from diffusion generative modeling to tackle the harder problem of sampling Ο€ given only βˆ‡log Ο€ but not samples from Ο€. Our method allows for deterministic sampling with smooth trajectories and a provably optimal rate of entropy dissipation. Additionally, access to the learned score βˆ‡log f_t allows for the computation of the relative Fisher information to estimate convergence, making the dynamics more interpretable. Our method integrates well with annealed dynamics to sample from challenging non-log-concave densities. Finally, our method scales well both in the sample size, with O(n) complexity, and in dimension, as demonstrated in the 784-dimensional example.

References

[1] Nicholas M. Boffi and Eric Vanden-Eijnden. Probability flow solution of the Fokker-Planck equation. Machine Learning: Science and Technology, 4(3):035012, 2023.
[2] Omar Chehab and Anna Korba. A practical diffusion path for sampling. arXiv preprint arXiv:2406.14040, 2024.
[3] Jannis Chemseddine, Christian Wald, Richard Duong, and Gabriele Steidl. Neural sampling from Boltzmann densities: Fisher-Rao curves in the Wasserstein geometry. arXiv preprint arXiv:2410.03282, 2024.
[4] Sitan Chen, Sinho Chewi, Holden Lee, Yuanzhi Li, Jianfeng Lu, and Adil Salim. The probability flow ODE is provably fast. Advances in Neural Information Processing Systems, 36, 2024.
[5] Sinho Chewi. Log-concave sampling. Book draft, available at https://chewisinho.github.io, 2023.
[6] Miguel Corrales, Sean Berti, Bertrand Denel, Paul Williamson, Mattia Aleardi, and Matteo Ravasi. Annealed Stein variational gradient descent for improved uncertainty estimation in full-waveform inversion. Geophysical Journal International, 241(2):1088-1113, 2025.
[7] Arnak S. Dalalyan and Avetik G. Karagulyan. User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. arXiv preprint arXiv:1710.00095, 2017.
[8] A. Duncan, N. Nuesken, and L. Szpruch. On the geometry of Stein variational gradient descent.
arXiv preprint arXiv:1912.00894, 2019.
[9] Karthik Elamvazhuthi, Xuechen Zhang, Matthew Jacobs, Samet Oymak, and Fabio Pasqualetti. A score-based deterministic diffusion algorithm with smooth scores for general distributions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 11866-11873, 2024.
[10] Yuchen He and Chihao Zhang. On the query complexity of sampling from non-log-concave distributions. arXiv preprint arXiv:2502.06200, 2025.
[11] Daniel Zhengyu Huang, Jiaoyang Huang, and Zhengjiang Lin. Convergence analysis of probability flow ODE for score-based generative models. arXiv preprint arXiv:2404.09730, 2024.
[12] Yan Huang and Li Wang. A score-based particle method for homogeneous Landau equation. arXiv preprint arXiv:2405.05187, 2024.
[13] Aapo HyvΓ€rinen. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(24):695-709, 2005. URL http://jmlr.org/papers/v6/hyvarinen05a.html.
[14] Vasily Ilin, Jingwei Hu, and Zhenfu Wang. Transport based particle methods for the Fokker-Planck-Landau equation. arXiv preprint arXiv:2405.10392, 2024.
[15] Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the Fokker-Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1-17, 1998.
[16] Kedar Karhadkar, Michael Murray, and Guido MontΓΊfar. Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension. arXiv preprint arXiv:2405.14630, 2024.
[17] Yann LeCun, LΓ©on Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324.
[18] Qiang Liu. Stein variational gradient descent as gradient flow. Advances in Neural Information Processing
https://arxiv.org/abs/2504.18130v2
7 Appendix

7.1 Pseudocode

For ease of reproducibility, we provide the pseudocode for SBTM:

SBTM(f_0, n, T, ∆t, η)
 1  sample {X_i}_{i=1}^n i.i.d. from f_0
 2  initialize NN: s_Θ ≈ arg min L(s, f_0)
 3  t := 0
 4  while t < T
 5      t := t + ∆t
 6      for k = 1, ..., K            (flow s)
 7          Θ := Θ − η · ∇_Θ L(s_Θ, f_t)
 8      for i = 1, ..., n            (flow f)
 9          X_i := X_i + ∆t (∇log π(X_i) − s_Θ(X_i))
10  output particle locations X_1, ..., X_n

7.2 Hyperparameters

For all experiments we use mini-batch gradient descent with the AdamW optimizer. For the low-dimensional experiments we use learning rate 5·10^{-4}, batch size 400, and 10 gradient descent steps per simulation step. For the MNIST experiment we use learning rate 10^{-3} and batch size 64.

7.3 Proofs

We restate Theorem 4.8.

Theorem 7.1 (Entropy Dissipation in Annealed Dynamics). If π_t is any time-dependent density, s_t any time-dependent vector field, and

    dX_t/dt = ∇log π_t(X_t) − s_t(X_t),

then

    −d/dt KL(f_t ‖ π) = E_{f_t}⟨s_t − ∇log π, ∇log(f_t/π_t)⟩ − E_{f_t}⟨s_t − ∇log f_t, ∇log(f_t/π)⟩.

In particular, if L(s_t, f_t) = 0, then

    −d/dt KL(f_t ‖ π) = E_{f_t}⟨∇log(f_t/π), ∇log(f_t/π_t)⟩.

Proof. If f_t is the density of X_t, where X_t satisfies dX_t/dt = ∇log π_t(X_t) − s_t(X_t), then f_t satisfies the Fokker–Planck equation

    ∂_t f_t + ∇·(f_t (∇log π_t − s_t)) = 0.

Thus, we may explicitly compute the relative entropy dissipation rate as

    d/dt ∫_{R^d} f_t log(f_t/π) dx = ∫_{R^d} ∂_t f_t (log f_t − log π) dx + ∫_{R^d} f_t ∂_t log f_t dx
                                   = −∫_{R^d} ⟨s_t − ∇log π_t, ∇log f_t − ∇log π⟩ f_t dx,

where we used integration by parts and that ∫_{R^d} f_t ∂_t log f_t dx is zero for the last equality.

We can now specialize the above to prove 4.1 and 4.2.

Proofs of 4.1 and 4.2. By choosing π_t = π in the proof of Theorem 7.1, we obtain

    d/dt ∫_{R^d} f_t log(f_t/π) dx = −∫_{R^d} ⟨s_t − ∇log π, ∇log f_t − ∇log π⟩ f_t dx.

By taking s_t = ∇log f_t exactly, we recover the proof of the classical result 4.1. Otherwise, adding and subtracting ∇log f_t in the integrand, we get

    d/dt ∫_{R^d} f_t log(f_t/π) dx = −∫_{R^d} ‖∇log f_t − ∇log π‖² f_t dx + ∫_{R^d} ⟨∇log f_t − s_t, ∇log f_t − ∇log π⟩ f_t dx
                                   ≤ −(1/2) ∫_{R^d} ‖∇log f_t − ∇log π‖² f_t dx + (1/2) ∫_{R^d} ‖∇log f_t − s_t‖² f_t dx.

The last line is by Young's inequality.

We restate Theorem 4.4.

Theorem 7.2 (Sufficient training guarantees bounded loss). Assume that f_t is any time-dependent density, X_t^1, ..., X_t^n are distributed according to f_t, and the neural network s = s_t^Θ is trained with gradient descent

    dΘ_t/dt = −η ∇_{Θ_t} L_n(s_t, f_t),    L_n(s_t, f_t) := (1/n) Σ_{i=1}^n ‖s(X_t^i) − ∇log f(X_t^i)‖²,

and is such that the Neural Tangent Kernel (NTK)

    H_{α,β}^{i,j}(t) = Σ_{k=1}^N ∇_{θ_k} s_t^α(X_t^i) ∇_{θ_k} s_t^β(X_t^j),    s(x) = (s^1(x), ..., s^d(x)),

is lower bounded by λ = λ(t) > 0, i.e. ‖Hv‖₂ ≥ λ‖v‖₂. Then, as long as

    η(t) ≥ (n/λ) ∂/∂τ|_{τ=t} log L_n(s_t, f_τ),

the loss L_n(s_t, f_t) is non-increasing, i.e. d/dt L_n(s_t, f_t) ≤ 0.

Proof. We start with an elementary computation based on the chain rule:

    d/dt L(s_t, f_t) = ∂/∂τ|_{τ=t} L(s_τ, f_t) + ∂/∂τ|_{τ=t} L(s_t, f_τ),

    ∂/∂τ|_{τ=t} L(s_τ, f_t) = ∇_Θ L · dΘ/dt
                            = −η Σ_{k=1}^N (∇_{θ_k} L)²
                            = −(η/n²) Σ_{k=1}^N ( Σ_{i=1}^n [s(X^i) − ∇log f(X^i)] · ∇_{θ_k} s(X^i) )²
                            = −(η/n²) Σ_{i,j=1}^n Σ_{α,β=1}^d [s^α(X^i) − ∇_α log f(X^i)] H_{α,β}^{i,j} [s^β(X^j) − ∇_β log f(X^j)]
                            = −(η/n²) ‖s − ∇log f‖²_H,

where

    H_{α,β}^{i,j} = Σ_{k=1}^N ∇_{θ_k} s^α(X^i) ∇_{θ_k} s^β(X^j)

is called the Neural Tangent Kernel. Its lowest eigenvalue determines the convergence speed of gradient descent.
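As an aside, the empirical NTK in the display above is just a Gram matrix of parameter gradients, H = J Jᵀ, where J stacks the flattened gradients ∇_θ s(X^i). A minimal numpy sketch with a toy two-layer scalar network (the width, data, and architecture are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer scalar network s(x) = w2 . tanh(W1 x), scalar input (d = 1).
m = 16                                   # hidden width (illustrative)
W1 = rng.normal(size=(m, 1))
w2 = rng.normal(size=m) / np.sqrt(m)

def param_grad(x):
    """Gradient of s(x) with respect to all parameters, flattened."""
    h = np.tanh(W1[:, 0] * x)            # hidden activations
    dh = 1.0 - h**2                      # tanh'
    g_w2 = h                             # ds/dw2_k
    g_W1 = w2 * dh * x                   # ds/dW1_k (chain rule through tanh)
    return np.concatenate([g_w2, g_W1])

X = rng.normal(size=8)                   # n = 8 "particles"
J = np.stack([param_grad(x) for x in X]) # n x (number of parameters)
H = J @ J.T                              # empirical NTK: H_ij = sum_k grad_k s(x_i) grad_k s(x_j)
lam = np.linalg.eigvalsh(H).min()        # smallest eigenvalue, the lambda of Theorem 7.2
```

Because H = J Jᵀ is positive semidefinite by construction, a strictly positive λ requires n to be at most the number of parameters and the particles to be sufficiently distinct.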
If ‖Hv‖₂ ≥ λ‖v‖₂, then

    d/dt L(s_t, f_t) = ∂/∂τ|_{τ=t} L(s_τ, f_t) + ∂/∂τ|_{τ=t} L(s_t, f_τ)
                     ≤ −(ηλ/n) L(s_t, f_t) + ∂/∂τ|_{τ=t} L(s_t, f_τ).

Thus, the loss is non-increasing if η(t) ≥ (n/λ) ∂/∂τ|_{τ=t} log L(s_t, f_τ).

Figure 10: SBTM exhibits smooth and monotone KL divergence to the target, matching the ground truth closely.

7.4 KL divergence data

Below is the KL divergence between the samples and the target density for three one-dimensional experiments. SBTM usually achieves the smallest KL divergence in the non-annealed setting. SVGD performs very poorly.

Table 3: KL divergence (↓) between the samples and the target π = N(0, 1) for the analytic setting using non-annealed dynamics. Simulation used time step ∆t = 0.002, final time T = 2.5. Columns indicate the number of particles N.

sample size N   100     300     1000    3000    10000
SBTM (ours)     0.013   0.0032  0.0019  0.0020  0.00099
SDE             0.020   0.022   0.0094  0.0036  0.0012
SVGD            0.33    0.23    0.16    0.20    0.29

Table 4: KL divergence (↓) between the sample and the target π = (1/4)N(−4, 1) + (3/4)N(4, 1) for gaussians_far using non-annealed dynamics. Simulation used time step ∆t = 0.01, final time T = 10. Columns indicate the number of particles N.

sample size N   100     300     1000    3000    10000
SBTM (ours)     0.24    0.19    0.11    0.089   0.084
SDE             0.24    0.18    0.14    0.094   0.084
SVGD            4.8     3.3     4.7     5.3     5.5

Table 5: Same setup as Table 4, but using geometric annealing.

sample size N   100     300     1000    3000    10000
SBTM (ours)     0.31    0.21    0.19    0.14    0.13
SDE             0.28    0.18    0.15    0.12    0.12
SVGD            3.2     3.7     4.4     4.6     4.4

Table 6: Same setup as Table 4, but using dilation annealing.

sample size N   100     300     1000    3000    10000
SBTM (ours)     0.34    0.25    0.19    0.15    0.13
SDE             0.25    0.19    0.14    0.13    0.13
SVGD            0.96    1.4     1.1     1.3     1.2

Table 7: KL divergence (↓) between the sample and the target π = (1/4)N(−2, 1) + (3/4)N(2, 1) for gaussians_near using non-annealed dynamics. Simulation used time step ∆t = 0.01, final time T = 10. Columns indicate the number of particles N.

sample size N   100     300     1000    3000    10000
SBTM (ours)     0.022   0.018   0.0082  0.0082  0.0036
SDE             0.029   0.013   0.014   0.0068  0.0043
SVGD            2.8     1.4     2.4     2.1     2.0

Table 8: Same setup as Table 7, but using geometric annealing.

sample size N   100     300     1000    3000    10000
SBTM (ours)     0.058   0.032   0.037   0.025   0.031
SDE             0.060   0.032   0.029   0.029   0.028
SVGD            0.47    0.69    0.99    1.1     1.0

Table 9: Same setup as Table 7, but using dilation annealing.

sample size N   100     300     1000    3000    10000
SBTM (ours)     0.037   0.12    0.10    0.096   0.11
SDE             0.062   0.031   0.080   0.054   0.061
SVGD            0.48    0.32    1.00    1.10    0.99

7.5 Additional experiments

7.5.1 Noisy Circle

Here we compare the SDE and SBTM samples from the "noisy circle" density

    π ∝ exp(−(‖x − (4, 0)‖ − 1)² / 0.08).

While the SDE samples fill the vacuum region in Figure 11 around the center due to Brownian noise, the SBTM samples leave it blank.

Figure 11: Sampling from the noisy circle distribution with SDE (top) and SBTM (bottom).

7.5.2 Gaussian Mixture with Geometric Annealing

Here we use classical geometric annealing to sample from a mixture of Gaussians whose means are 8 units apart, making the target very non-log-concave:

    π = (1/4)N(−4, 1) + (3/4)N(4, 1),
    ∇log π_t = (1 − t)∇log f_0 + t∇log π.

The match between the orange and blue lines in the right panel of Figure 12 indirectly indicates very good score approximation, as per (4.4).

Figure 12: Left: reconstructed density of SBTM. It approximates the solution well despite the non-log-concavity. Right: entropy dissipation of SBTM (ours) and SDE (stochastic).
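The annealed score used in this experiment is a convex combination of two closed-form scores. A minimal sketch, assuming an illustrative initial density f_0 = N(0, 1) (the experiment's actual f_0 is not restated here):

```python
import numpy as np

# Geometric annealing of scores: grad log pi_t = (1 - t) grad log f0 + t grad log pi.
# Target: pi = 0.25 N(-4, 1) + 0.75 N(4, 1); f0 = N(0, 1) is an assumed illustration.
weights = np.array([0.25, 0.75])
means = np.array([-4.0, 4.0])

def score_target(x):
    """grad log pi for the unit-variance Gaussian mixture above."""
    comp = weights * np.exp(-0.5 * (x - means) ** 2)  # unnormalized component densities
    resp = comp / comp.sum()                          # component responsibilities at x
    return float((resp * (means - x)).sum())          # mixture score

def score_annealed(x, t, score_init=lambda x: -x):    # grad log f0 = -x for N(0, 1)
    return (1.0 - t) * score_init(x) + t * score_target(x)
```

At t = 0 this reduces to the initial score and at t = 1 to the target score, so the particle drift interpolates smoothly between the two flows.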
SBTM closely matches the entropy decay rate even in annealed dynamics.

NeurIPS Paper Checklist

1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes]
Justification: Sections 3 and 4 support the theoretical claims made in the abstract. Section 5 supports the empirical claims.
Guidelines:
• The answer NA means that the abstract and introduction do not include the claims made in the paper.
• The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
• The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
• It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

2. Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer: [Yes]
Justification: The proposed method matches the Langevin dynamics baseline in computational complexity and accuracy, so the main limitation of the proposed method is the slow mixing of the Wasserstein gradient flow when used on non-log-concave targets. We discuss this limitation in Section 4.3. We also discuss the limitations of the theoretical results in Remark 4.7.
Guidelines:
• The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
• The authors are encouraged to create a separate "Limitations" section in their paper.
• The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
• The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
• The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
• The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
• If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
• While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

3. Theory assumptions and proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer: [Yes]
Justification: The assumptions are fully stated in the theorems themselves. The detailed proofs are written in full in the appendix.
Guidelines:
• The answer NA means that the paper does not include theoretical results.
• All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
• All assumptions should be clearly stated or referenced in the statement of any theorems.
• The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
• Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
• Theorems and Lemmas that the proof relies upon should be properly referenced.

4. Experimental result reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [Yes]
Justification: Yes, the initial condition, time step, final time, and the target density are provided for each experiment in Section 5. The detailed pseudocode of the proposed method is given in the appendix. The NN architecture and code repository are at the beginning of Section 5.
Guidelines:
• The answer NA means that the paper does not include experiments.
• If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
• If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
• Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
• While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility.
In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

5. Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer: [Yes]
Justification: We include the link to the GitHub repository with the experiments. The only dataset we used is MNIST, which is publicly available.
Guidelines:
• The answer NA means that paper does not include experiments requiring code.
• Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
• While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
• The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
• The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
• The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
• At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
• Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

6. Experimental setting/details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [Yes]
Justification: Each experiment contains the initial density, target density, time step, and the number of time steps. The hyperparameters are in the appendix. There is no train/test split in this work.
Guidelines:
• The answer NA means that the paper does not include experiments.
• The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
• The full details can be provided either with the code, in appendix, or as supplemental material.

7. Experiment statistical significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer: [No]
Justification: We use very high sample sizes of 1000 and 10,000 in the low-dimensional experiments. Adding error bars to the plots would not convey valuable information, because we are interested in the overall shape rather than the precise values.
Guidelines:
• The answer NA means that the paper does not include experiments.
• The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
• The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
• The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.).
• The assumptions made should be given (e.g., Normally distributed errors).
• It should be clear whether the error bar is the standard deviation or the standard error of the mean.
• It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
• For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
• If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

8. Experiments compute resources
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer: [Yes]
Justification: We describe this at the beginning of Section 5.
Guidelines:
• The answer NA means that the paper does not include experiments.
• The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
• The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
• The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

9. Code of ethics
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
Answer: [Yes]
Justification: We read the Code of Ethics and confirm that our research conforms with it in every respect.
Guidelines:
• The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
• If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
• The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer: [NA]
Justification: There are no potential societal impacts of our work that should be specifically highlighted.
Guidelines:
• The answer NA means that there is no societal impact of the work performed.
• If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
• Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
• The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
• The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
• If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [NA]
Justification: We do not anticipate any potential misuse of this work.
Guidelines:
• The answer NA means that the paper poses no such risks.
• Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
• Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
• We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer: [Yes]
Justification: We include the license of the single dataset (MNIST) we used.
Guidelines:
• The answer NA means that the paper does not use existing assets.
• The authors should cite the original paper that produced the code package or dataset.
• The authors should state which version of the asset is used and, if possible, include a URL.
• The name of the license (e.g., CC-BY 4.0) should be included for each asset.
• For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
• If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
• For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
• If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [NA]
Justification: We do not release new assets.
Guidelines:
• The answer NA means that the paper does not release new assets.
• Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
• The paper should discuss whether and how consent was obtained from people whose asset is used.
• At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

14. Crowdsourcing and research with human subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [NA]
Justification: Our work does not involve crowdsourcing or research with human subjects.
Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
• According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional review board (IRB) approvals or equivalent for research with human subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [NA]
Justification: Our work does not involve crowdsourcing or research with human subjects.
Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
• We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
• For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
16. Declaration of LLM usage

Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.

Answer: [NA]

Justification: This work does not involve LLMs beyond text and code editing.

Guidelines:
• The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
• Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
arXiv:2504.18139v1 [math.PR] 25 Apr 2025

Kalman-Langevin dynamics: exponential convergence, particle approximation and numerical approximation

Axel Ringh*   Akash Sharma*

Abstract

Langevin dynamics has found a large number of applications in sampling, optimization and estimation. Preconditioning the gradient in the dynamics with the covariance — an idea that originated in literature related to solving estimation and inverse problems using Kalman techniques — results in a mean-field (McKean-Vlasov) SDE. We demonstrate exponential convergence of the time marginal law of the mean-field SDE to the Gibbs measure with non-Gaussian potentials. This extends previous results, obtained in the Gaussian setting, to a broader class of potential functions. We also establish uniform in time bounds on all moments and convergence in $p$-Wasserstein distance. Furthermore, we show convergence of a weak particle approximation, which avoids computing the square root of the empirical covariance matrix, to the mean-field limit. Finally, we prove that an explicit numerical scheme for approximating the particle dynamics converges, uniformly in the number of particles, to its continuous-time limit, addressing non-global Lipschitzness in the measure.

Keywords: McKean-Vlasov stochastic differential equations, interacting particle systems, strongly convergent numerical schemes.

AMS Classification: 65C30, 60H35, 60H10, 37H10, 35Q84.

1 Introduction

Sampling and optimization techniques are, in many cases, the main ingredients of solutions to problems in applied mathematics and computational statistics. For instance, in molecular dynamics, sampling techniques focus on exploring the potential configurations of a molecule, while optimization comes into play when seeking the configuration with the minimum energy. Additionally, many optimization problems can be formulated as sampling problems within a Bayesian framework to account for uncertainty.
In sampling, one looks for samples from the measure defined by

$$\mu(dx) = \frac{1}{Z} e^{-\beta U(x)}\,dx, \qquad (1.1)$$

where $Z = \int_{\mathbb{R}^d} e^{-\beta U(x)}\,dx$ is the normalizing constant, $\beta > 0$ and $U: \mathbb{R}^d \to \mathbb{R}$. Connecting this to optimization, under relatively mild conditions on $U$ and for increasing $\beta$, the measure concentrates around global minima of $U$ [Hwa80, HRS24].

For sampling purposes, in addition to traditional Markov chain Monte Carlo methods, overdamped Langevin dynamics, which finds its origin in statistical physics (see, e.g., [RDF78]), is one popular choice of method. The overdamped Langevin dynamics (also known as Smoluchowski dynamics) driven by Brownian noise $W(t)$ is given by

$$dX(t) = -\nabla U(X(t))\,dt + \sqrt{\frac{2}{\beta}}\,dW(t), \qquad X(0) \in \mathbb{R}^d, \qquad (1.2)$$

and it leaves the Gibbs measure (1.1) invariant under appropriate conditions on $U$. The dynamics in (1.2) has also been used as an optimization method exploiting annealing techniques with $\beta \to \infty$ (see [CHS87]), and as a starting point for developing new sampling methods.

*Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg. axelri@chalmers.se, akashs@chalmers.se.

Figure 1: The difference in behavior of overdamped Langevin dynamics and Kalman-Langevin dynamics for potential function $U = 0.26(x^2 + y^2) - 0.48xy$ at time $T = 1$ with 2000 particles uniformly initialized in $[-15,15]^2$ with $\beta = 1$. (Left panel: Langevin Dynamics; right panel: Kalman Langevin Dynamics.)

On the other hand, the Kalman filter introduced in [Eve94] for state estimation has also found a large number of applications in applied sciences (for example in oceanography [EVL96], in reservoir modeling [ANO+09],
https://arxiv.org/abs/2504.18139v1
and in weather forecasting [HM01]) for data assimilation and inverse problems. Inspired by the Kalman filter, an ensemble Kalman iterative procedure for solving inverse problems, called ensemble Kalman inversion (EKI), was proposed in [ILS13], and the corresponding continuous-time limit, which is an interacting system of SDEs, was derived in [SS17]. The authors in [BSWW19] established well-posedness and convergence to the ground truth of the SDEs underlying EKI. The convergence to the mean-field limit was studied in [DL21a] for the EKI model. For the reflected EKI model, the well-posedness and convergence of the particle system to the mean-field limit in a non-convex domain setting is established in [HST25]. In this context, it is also worth noting other recent works on optimization and sampling using interacting particle systems demonstrating better capabilities to deal with the non-convexity and anisotropy of energy landscapes, with possible derivative-free implementation, such as [CCTT18, LMW18, KS19, CST20, LWZ22, MRSS25].

Inspired by the continuous-time limit of the ensemble Kalman procedure with noise, [GIHLS20] proposed a covariance preconditioned overdamped Langevin dynamics, which is a non-linear (in the sense of McKean) Markov process. This covariance preconditioned Langevin dynamics is driven by the following McKean-Vlasov SDE:

$$dX(t) = -\Sigma(\mu_t)\nabla U(X(t))\,dt + \sqrt{\frac{2\Sigma(\mu_t)}{\beta}}\,dW(t), \qquad (1.3a)$$

with

$$\Sigma(\mu_t) = \int_{\mathbb{R}^d} (x - M(\mu_t))(x - M(\mu_t))^\top \, d\mu_t(x), \qquad (1.3b)$$

$$M(\mu_t) = \int_{\mathbb{R}^d} x \, d\mu_t(x), \qquad (1.3c)$$

where $W(t)$ is a $d$-dimensional Brownian motion and $\mu_t := \mathcal{L}^X_t$ is the time marginal law of $X$. This provides a new sampling method which portrays better performance in capturing anisotropic energy landscapes, as illustrated in Figure 1. Furthermore, the Kalman approximation of the gradient (see Remark 3.1) provides a derivative-free technique for Laplace approximation of the targeted distribution.
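To make the contrast in Figure 1 concrete, the following is a minimal sketch (not the authors' code) of an Euler-Maruyama discretization of the plain dynamics (1.2), run on the anisotropic quadratic potential from Figure 1; the step size is an illustrative assumption.

```python
import numpy as np

# Potential from Figure 1: U(x, y) = 0.26 (x^2 + y^2) - 0.48 x y = (1/2) x^T A x.
A = np.array([[0.52, -0.48],
              [-0.48, 0.52]])   # eigenvalues 0.04 and 1.0: strongly anisotropic

beta, dt, n_steps = 1.0, 0.01, 100             # T = 1, as in Figure 1 (dt assumed)
rng = np.random.default_rng(1)
X = rng.uniform(-15.0, 15.0, size=(2000, 2))   # 2000 particles, as in Figure 1

# Euler-Maruyama for (1.2): X <- X - grad U(X) dt + sqrt(2 dt / beta) * N(0, I)
for _ in range(n_steps):
    X = X - (X @ A) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(X.shape)
```

By $T = 1$ the cloud has contracted along the stiff eigendirection $(1,-1)$ (eigenvalue $1$) but remains far from equilibrium along the flat direction $(1,1)$ (eigenvalue $0.04$); preconditioning the gradient with the covariance, as in (1.3), is designed to remove exactly this imbalance.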
In [GIHLS20], provided that the initial distribution is not a Dirac distribution and that $U$ is a quadratic function of $x$, the authors showed the convergence of the law of (1.3a) to the Gibbs measure in relative entropy. In [CV21], exponential convergence is obtained directly in Wasserstein distance for the linear setting, i.e., the setting when $U$ is quadratic and thus $\nabla U$ is linear. One more step towards analysis is the convergence of the interacting particle system to the mean-field limit given by (1.3), which is shown in [DL21b] in the case of a quadratic potential. The propagation of chaos result is extended to a non-linear setting in [Vae24], with optimal rate of convergence in terms of the number of particles. This paper builds on the above, and the contributions of the paper are the following:

(i) The first major contribution is that we establish exponential convergence of the law of the non-linear Langevin dynamics to the Gibbs measure for non-linear potential functions (in particular, for a quadratic potential with Lipschitz perturbation) in relative entropy (which also gives exponential convergence in 2-Wasserstein distance). This significantly improves the linear setting results of [GIHLS20, CV21] as it covers a large class of Bayesian models with Gaussian priors. In addition, we also prove uniform in time $p$-th moment bounds for the mean-field SDEs. Combining the results, we straightforwardly have convergence in $p$-Wasserstein distance. One of the main ingredients of the analysis is matrix valued non-linear ordinary differential equations (ODEs). This approach bears resemblance to the approach based on Riccati type matrix valued ODEs which has been employed to obtain stability estimates in the case of ensemble Kalman(-Bucy) filters [DMT18, DMH23].

(ii) We prove the convergence of an interacting particle system to the mean-field SDEs (1.3). In contrast to [DL21b, Vae24], we study a weak particle approximation that avoids the need to compute the square root of the empirical covariance matrix. We refer to this as a weak approximation because it differs from the interacting particle systems proposed and studied in [GIHLS20] and [Vae24] at the path level. To show this convergence, we follow the classical trilogy of arguments from [Szn91] (see also [GKM+96]).

(iii) The second major contribution is that we establish the uniform in $N$, where $N$ denotes the number of particles, convergence of an implementable explicit numerical scheme, with fixed discretization time-step, to its continuous-time limit. In [BSW18], the authors consider a one dimensional model SDE inspired from ensemble Kalman inversion and establish convergence of a numerical scheme to its continuous limit. For interacting particle systems with non-global Lipschitzness in measure in the 2-Wasserstein metric, [CDR24] propose a split-step scheme; however, the nonlinearity in terms of the measure appears as a convolution, which is not the case for (1.3).

The outline of the article is in line with the contributions mentioned above: in Section 2, we show the exponential convergence; in Section 3, we prove the propagation of chaos; and in Section 4, we show the convergence of the numerical scheme.

Notation

We will use the following notation in the paper. If $Q(t)$ is a time-varying symmetric positive semi-definite matrix, we denote by $\lambda^Q_{\min}(t)$ and $\lambda^Q_{\max}(t)$ the smallest and largest eigenvalues, respectively, of $Q(t)$. For a square matrix $Q$, we denote its trace by $\mathrm{trace}(Q)$. If $O$ is a subset of $\mathbb{R}^d$ then $O^c$ represents its complement. For convenience, we denote $a \wedge b = \min(a,b)$ for $a,b \in \mathbb{R}$. We represent the space of probability measures on $\mathbb{R}^d$ as $\mathcal{P}(\mathbb{R}^d)$ and the subset of measures with bounded $p$-moment by $\mathcal{P}_p(\mathbb{R}^d)$.
We denote by $C^k(\mathbb{R}^d)$ the space of $k$ times continuously differentiable functions, and by $C^k_c(\mathbb{R}^d)$ the corresponding functions with compact support. For a function $f: \mathbb{R}^d \to \mathbb{R}$, we denote its gradient vector and Hessian matrix by $\nabla f$ and $\nabla^2 f$, respectively. For $\mu \in \mathcal{P}(\mathbb{R}^d)$, we denote $\int_{\mathbb{R}^d} \varphi(x)\,\mu(dx)$ as $\langle \varphi, \mu \rangle$, provided that the integral is finite. Moreover, for $\mu, \nu \in \mathcal{P}(\mathbb{R}^d)$, by $\mu \ll \nu$ we mean that $\mu$ is absolutely continuous with respect to $\nu$. With $\langle a \cdot b \rangle$, we denote the scalar product between two vectors $a, b \in \mathbb{R}^d$. For a $\mathbb{R}^{d \times m}$ matrix, we denote its Frobenius norm with $|\cdot|$, which reduces to the Euclidean norm for $m = 1$. We will use $\pi(x)$ to denote the density, with respect to the Lebesgue measure, of the Gibbs measure (1.1), i.e.,

$$\pi(x) = \frac{1}{Z} e^{-\beta U(x)}, \qquad (1.4)$$

where $Z = \int_{\mathbb{R}^d} e^{-\beta U(x)}\,dx$. Finally, we denote with $C$ a generic constant whose value may change from line to line.

2 Exponential convergence of non-linear dynamics to equilibrium

In this section, we prove exponential convergence of the non-linear Markov process driven by (1.3) to its invariant distribution (1.1) in relative entropy. We do this by first showing a uniform in time lower bound on the smallest eigenvalue of $\Sigma_t$, i.e., on $\lambda^\Sigma_{\min}(t)$. These results are all proved under the following assumption on the function $U$.

Assumption 2.1. We assume
that the potential function $U$ has the following form:

$$U = \frac{1}{2} x^\top A x + V, \qquad (2.1)$$

where $A$ is a positive definite matrix with smallest eigenvalue $\ell_A$ and largest eigenvalue $L_A$, and $V$ is a $C^1(\mathbb{R}^d)$ and Lipschitz continuous function with Lipschitz constant $L_V$.

Remark 2.1. Note that the above assumption allows for non-convexity. One example satisfying the above assumption is the Rastrigin function, which is commonly used both as a benchmark objective function for optimization and as a test potential function in sampling problems.

We first recall the definition of relative entropy along with its relation to total variation distance and 2-Wasserstein distance; for an in-depth treatment one can look at [Vil03]. For any two probability measures $\nu$ and $\mu$ on $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$, the relative entropy of $\nu$ with respect to $\mu$ is defined as

$$H(\nu|\mu) = \begin{cases} \int_{\mathbb{R}^d} \ln\!\left(\dfrac{d\nu}{d\mu}\right) d\nu & \text{if } \nu \ll \mu, \\ +\infty & \text{else.} \end{cases}$$

The $p$-Wasserstein distance on $\mathcal{P}_p(\mathbb{R}^d)$ is given by

$$W_p(\nu,\mu) := \inf_{\gamma \in \Gamma(\nu,\mu)} \left\{ \left(\mathbb{E}|Y_1 - Y_2|^p\right)^{1/p} : \mathrm{Law}(Y_1,Y_2) = \gamma \right\},$$

where the infimum is taken over all couplings of $\nu$ and $\mu$, i.e., $\Gamma(\nu,\mu)$ is the set of all joint measures such that $\mathrm{Law}(Y_1) = \nu$ and $\mathrm{Law}(Y_2) = \mu$. We say that $\mu$ satisfies a log-Sobolev inequality with constant $\lambda_{LS}$ if for all probability measures $\nu$ such that $\nu \ll \mu$ the following holds:

$$H(\nu|\mu) \le \frac{1}{2\lambda_{LS}} \int_{\mathbb{R}^d} \left|\nabla \ln \frac{d\nu}{d\mu}\right|^2 d\nu. \qquad (2.2)$$

Next, we recall one inequality related to relative entropy, which implies convergence in total variation norm and 2-Wasserstein distance once convergence in relative entropy is proven. If $\mu$ satisfies a log-Sobolev inequality with constant $\lambda_{LS}$, then, due to Talagrand's inequality, we have

$$W_2(\nu,\mu) \le \sqrt{\frac{2}{\lambda_{LS}} H(\nu|\mu)}.$$

The main contributions of this section are the following results.

Theorem 2.1. Let Assumption 2.1 hold.
Then the non-linear Langevin SDE (1.3) converges to its invariant measure exponentially in relative entropy, i.e., there exist positive constants $c_1$ and $c_2$ independent of $t$ such that

$$H(\mathcal{L}^X_t|\pi) \le c_1 e^{-c_2 t}.$$

Remark 2.2. An application of Talagrand's inequality implies that we also have exponential convergence in 2-Wasserstein distance. This, in turn, ensures weak convergence of $\mathcal{L}^X_t$ to the Gibbs measure, as well as convergence of the second moment of $\mathcal{L}^X_t$ to the second moment of the Gibbs measure (see [Vil03, Theorem 7.12]).

The proof of exponential convergence requires the following lemma.

Lemma 2.2. Under Assumption 2.1, the following bound holds:

$$\lambda^\Sigma_{\min}(t) \ge \min\!\left(\frac{1}{2\beta\left(L_A + 2\beta L_V^2\right)},\, \lambda^\Sigma_{\min}(0)\right), \qquad (2.3)$$

where $L_A$ and $L_V$ are from Assumption 2.1.

Analogous to the uniform in time lower bound on the smallest eigenvalue of $\Sigma_t$, as stated in Lemma 2.2 above, we can also prove a uniform in time upper bound on the largest eigenvalue (see Lemma 2.4). These uniform lower and upper bounds are of interest in their own right. Moreover, a consequence of the lower and upper bounds is uniform in time moment bounds for the mean-field dynamics, which we mention here as a separate result and prove in Section 2.4. For brevity, we denote

$$\ell_\Sigma := \min\!\left(\frac{1}{2\beta\left(L_A + 2\beta L_V^2\right)},\, \lambda^\Sigma_{\min}(0)\right), \qquad (2.4)$$

$$L_\Sigma := \lambda^\Sigma_{\max}(0)\, e^{-\ell_A t} + \frac{1}{\ell_A}\left(\left(\frac{3}{4}\right)^3 \left(\frac{K_V}{\ell_A}\right)^2 + \frac{3}{\beta^2}\right)\left(1 - e^{-\ell_A t}\right). \qquad (2.5)$$

Theorem 2.3. Under Assumption 2.1, the following bound holds for all $l > 0$:

$$\mathbb{E}|X(t)|^{2l} \le C_M, \qquad (2.6)$$

where $X(t)$ is from (1.3) and $C_M$ is independent of $t$.

For brevity, we do not explicitly write the constant $C_M$ appearing in (2.6), but it can easily be inferred from the proof as we
explicitly track all the constants.

Corollary 2.1. Let Assumption 2.1 hold. Then, for all $p > 0$, the $p$-th moment of the law of the mean-field SDEs (1.3) converges to the $p$-th moment of the Gibbs measure (1.1), i.e.,

$$\int_{\mathbb{R}^d} |x|^p \, \mathcal{L}^X_t(dx) \to \int_{\mathbb{R}^d} |x|^p \, \mu(dx) \qquad (2.7)$$

as $t \to \infty$.

Proof. The proof directly follows from [VdV98, Theorem 2.20] using weak convergence and uniform integrability of moments.

Corollary 2.2. Let Assumption 2.1 hold. Then, for all $p > 0$, we have the following convergence:

$$W_p(\mathcal{L}^X_t, \mu) \to 0 \qquad (2.8)$$

as $t \to \infty$.

Proof. The result follows from the previous corollary and [Vil03, Theorem 7.12].

Lemma 2.4. Under Assumption 2.1, the following bound holds:

$$\lambda^\Sigma_{\max}(t) \le \lambda^\Sigma_{\max}(0)\, e^{-\ell_A t} + \frac{1}{\ell_A}\left(\left(\frac{3}{4}\right)^3 \left(\frac{K_V}{\ell_A}\right)^2 + \frac{3}{\beta^2}\right)\left(1 - e^{-\ell_A t}\right), \qquad (2.9)$$

where $L_A$ and $L_V$ are from Assumption 2.1.

For the sake of convenience, in the following we will denote $\Sigma_t := \Sigma(\mu_t)$. We will represent $\Sigma_t^{-1}$ as the inverse of $\Sigma_t$.

2.1 Proof of Lemma 2.2

Proof of Lemma 2.2. We have the following due to (1.3a):

$$\mathbb{E}X(t) = \mathbb{E}X(0) - \int_0^t \Sigma_s\, \mathbb{E}\nabla U(X(s))\,ds, \qquad \mathbb{E}X(t)^\top = \mathbb{E}X(0)^\top - \int_0^t \mathbb{E}(\nabla U(X(s)))^\top \Sigma_s^\top\,ds.$$

Consider the following stochastic dynamics:

$$d(X(t) - \mathbb{E}X(t)) = -\Sigma_t\left[\nabla U(X(t)) - \mathbb{E}\nabla U(X(t))\right]dt + \sqrt{\frac{2\Sigma_t}{\beta}}\,dW(t).$$

Using Ito's product rule, we have

$$\begin{aligned} d(X(t)-\mathbb{E}X(t))(X(t)-\mathbb{E}X(t))^\top &= -\left[(X(t)-\mathbb{E}X(t))(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))^\top\right]\Sigma_t\,dt \\ &\quad - \Sigma_t\left[(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))(X(t)-\mathbb{E}X(t))^\top\right]dt + \frac{2}{\beta}\Sigma_t\,dt \\ &\quad + \sqrt{\frac{2}{\beta}}(X(t)-\mathbb{E}X(t))\,dW(t)^\top\sqrt{\Sigma_t} + \sqrt{\frac{2}{\beta}}\sqrt{\Sigma_t}\,dW(t)(X(t)-\mathbb{E}X(t))^\top. \end{aligned}$$

Therefore,

$$\begin{aligned} d\Sigma_t &= -\mathbb{E}\left[(X(t)-\mathbb{E}X(t))(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))^\top\right]\Sigma_t\,dt \\ &\quad - \Sigma_t\,\mathbb{E}\left[(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))(X(t)-\mathbb{E}X(t))^\top\right]dt + \frac{2}{\beta}\Sigma_t\,dt. \end{aligned} \qquad (2.10)$$

Due to (2.10), we have the following matrix valued differential equation:

$$\begin{aligned} \frac{d}{dt}\Sigma_t^{-1} = -\Sigma_t^{-1}\frac{d\Sigma_t}{dt}\Sigma_t^{-1} &= \Sigma_t^{-1}\,\mathbb{E}\left[(X(t)-\mathbb{E}X(t))(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))^\top\right] \\ &\quad + \mathbb{E}\left[(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))(X(t)-\mathbb{E}X(t))^\top\right]\Sigma_t^{-1} - \frac{2}{\beta}\Sigma_t^{-1}. \end{aligned}$$
(2.11)

Using Assumption 2.1, we get

$$\frac{d}{dt}\Sigma_t^{-1} = 2A + \Sigma_t^{-1}\,\mathbb{E}\left[(X(t)-\mathbb{E}X(t))(\nabla V(X(t))-\mathbb{E}\nabla V(X(t)))^\top\right] + \mathbb{E}\left[(\nabla V(X(t))-\mathbb{E}\nabla V(X(t)))(X(t)-\mathbb{E}X(t))^\top\right]\Sigma_t^{-1} - \frac{2}{\beta}\Sigma_t^{-1}.$$

For any $y \in \mathbb{R}^d$ with $|y| \neq 0$, we have

$$\frac{d}{dt}\, y^\top \Sigma_t^{-1} y = y^\top \frac{d}{dt}\Sigma_t^{-1}\, y = 2y^\top A y + y^\top \Sigma_t^{-1}\mathbb{E}[Z_t P_t^\top]\, y + y^\top \mathbb{E}[P_t Z_t^\top]\Sigma_t^{-1}\, y - \frac{2}{\beta}\, y^\top \Sigma_t^{-1} y = 2y^\top A y + 2y^\top \Sigma_t^{-1}\mathbb{E}[Z_t P_t^\top]\, y - \frac{2}{\beta}\, y^\top \Sigma_t^{-1} y, \qquad (2.12)$$

where, for brevity, we have denoted $Z_t := X(t) - \mathbb{E}X(t)$ and $P_t := \nabla V(X(t)) - \mathbb{E}\nabla V(X(t))$. We first deal with the second term on the right hand side of (2.12) by using the Cauchy-Bunyakovsky-Schwarz inequality:

$$y^\top \Sigma_t^{-1}\mathbb{E}[Z_t P_t^\top]\, y = \mathbb{E}[y^\top \Sigma_t^{-1} Z_t P_t^\top y] \le \left(\mathbb{E}[y^\top \Sigma_t^{-1} Z_t]^2\right)^{1/2}\left(\mathbb{E}[P_t^\top y]^2\right)^{1/2} \le 2L_V |y| \left(\mathbb{E}[y^\top \Sigma_t^{-1} Z_t]^2\right)^{1/2},$$

where $L_V$ is as introduced in Assumption 2.1. Noting that

$$\mathbb{E}[y^\top \Sigma_t^{-1} Z_t]^2 = \mathbb{E}\left([y^\top \Sigma_t^{-1} Z_t][Z_t^\top \Sigma_t^{-1} y]\right) = y^\top \Sigma_t^{-1}\mathbb{E}(Z_t Z_t^\top)\Sigma_t^{-1} y = y^\top \Sigma_t^{-1} y,$$

since $\mathbb{E}(Z_t Z_t^\top) = \Sigma_t$. Therefore,

$$y^\top \Sigma_t^{-1}\mathbb{E}[Z_t P_t^\top]\, y \le 2L_V |y| \sqrt{y^\top \Sigma_t^{-1} y}. \qquad (2.13)$$

This implies

$$\frac{d}{dt}\, y^\top \Sigma_t^{-1} y = 2y^\top A y + 2y^\top \Sigma_t^{-1}\mathbb{E}[Z_t P_t^\top]\, y - \frac{2}{\beta}\, y^\top \Sigma_t^{-1} y \le 2y^\top A y + 4L_V |y| \sqrt{y^\top \Sigma_t^{-1} y} - \frac{2}{\beta}\, y^\top \Sigma_t^{-1} y. \qquad (2.14)$$

For the sake of convenience, we denote

$$\Lambda(t,y) = \frac{y^\top \Sigma_t^{-1} y}{|y|^2}.$$

Dividing (2.14) by $|y|^2$, with $|y| \neq 0$, on both sides, we ascertain

$$\frac{d}{dt}\Lambda(t,y) \le 2L_A + 4L_V\sqrt{\Lambda(t,y)} - \frac{2}{\beta}\Lambda(t,y),$$

where $L_A := \sup_{y;\,|y|\neq 0} \frac{y^\top A y}{|y|^2}$. Using the generalized Young's inequality ($ab \le a^2/(2\epsilon) + \epsilon b^2/2$, where $a,b,\epsilon > 0$, with $a = \sqrt{\Lambda(s,y)}$, $b = L_V$, $\epsilon = 2\beta$), we have

$$\frac{d}{dt}\Lambda(t,y) \le 2L_A + 4\left(\beta L_V^2 + \frac{\Lambda(t,y)}{4\beta}\right) - \frac{2}{\beta}\Lambda(t,y) = 2L_A + 4\beta L_V^2 - \frac{1}{\beta}\Lambda(t,y).$$

Using $e^{t/\beta}$ as the integrating factor, we get

$$\frac{d}{dt}\, e^{t/\beta}\Lambda(t,y) \le \left(2L_A + 4\beta L_V^2\right)e^{t/\beta}.$$

Therefore, we obtain

$$e^{t/\beta}\Lambda(t,y) \le \Lambda(0,y) + 2\beta\left(L_A + 2\beta L_V^2\right)\left(e^{t/\beta} - 1\right). \qquad (2.15)$$

Hence, we finally have

$$\Lambda(t,y) \le \Lambda(0,y)\,e^{-t/\beta} + 2\beta\left(L_A + 2\beta L_V^2\right)\left(1 - e^{-t/\beta}\right).$$
(2.16)

Taking the supremum over $y \in \mathbb{R}^d$ with $|y| \neq 0$ and using the fact that $\Sigma_t^{-1}$ is symmetric, we get

$$\lambda^{\Sigma^{-1}}_{\max}(t) \le \lambda^{\Sigma^{-1}}_{\max}(0)\,e^{-t/\beta} + 2\beta\left(L_A + 2\beta L_V^2\right)\left(1 - e^{-t/\beta}\right).$$

Take $g(t) := \lambda^{\Sigma^{-1}}_{\max}(0)\,e^{-t/\beta} + 2\beta\left(L_A + 2\beta L_V^2\right)\left(1 - e^{-t/\beta}\right)$; then $g'(t)$ is given by

$$g'(t) = -\frac{1}{\beta}\lambda^{\Sigma^{-1}}_{\max}(0)\,e^{-t/\beta} + 2\left(L_A + 2\beta L_V^2\right)e^{-t/\beta} = \left(-\frac{1}{\beta}\lambda^{\Sigma^{-1}}_{\max}(0) + 2\left(L_A + 2\beta L_V^2\right)\right)e^{-t/\beta}.$$

This means that the sign of $g'$ is constant, so that $g(t)$ is a monotonic function which attains its supremum either at $t = 0$ or as $t \to \infty$. In that case, if

$$2\beta\left(L_A + 2\beta L_V^2\right) \ge \lambda^{\Sigma^{-1}}_{\max}(0), \qquad (2.17)$$

then $\lambda^{\Sigma^{-1}}_{\max}(t) \le 2\beta\left(L_A + 2\beta L_V^2\right)$. However, if we choose the initial distribution $\mu_0$ such that

$$2\beta\left(L_A + 2\beta L_V^2\right) \le \lambda^{\Sigma^{-1}}_{\max}(0), \qquad (2.18)$$

then $\lambda^{\Sigma^{-1}}_{\max}(t) \le \lambda^{\Sigma^{-1}}_{\max}(0)$. Therefore, we have

$$\lambda^{\Sigma^{-1}}_{\max}(t) \le \max\!\left(2\beta\left(L_A + 2\beta L_V^2\right),\, \lambda^{\Sigma^{-1}}_{\max}(0)\right).$$

Note that $\lambda^{\Sigma^{-1}}_{\max}(t) = 1/\lambda^\Sigma_{\min}(t)$. This implies

$$\lambda^\Sigma_{\min}(t) \ge \min\!\left(\frac{1}{2\beta\left(L_A + 2\beta L_V^2\right)},\, \lambda^\Sigma_{\min}(0)\right). \qquad (2.19)$$

2.2 Proof of Theorem 2.1

Before we proceed with the proof of Theorem 2.1, we need the following result, which is a direct consequence of [CG22, Theorem 0.1]. In order to state the result, we need the following notation: We denote the Poincaré constant for $\mu$ by $C_P(\mu)$. Let $\lambda_{LS}(\mu_A)$ denote the log-Sobolev constant of $\mu_A(dx) = e^{-x^\top A x/2}dx$. We denote $C_{LS}(\mu_A) = 2/\lambda_{LS}(\mu_A)$.

Lemma 2.5. Let Assumption 2.1 hold. Then, the measure $\mu = \frac{1}{Z}e^{-U(x)}dx$ satisfies a log-Sobolev inequality, i.e., for all $\nu \ll \mu$,

$$H(\nu|\mu) \le \frac{1}{2\lambda_{LS}} \int_{\mathbb{R}^d} \left|\nabla \ln\!\left(\frac{d\nu}{d\mu}\right)\right|^2 d\nu,$$

where the log-Sobolev constant satisfies

$$\frac{1}{2\lambda_{LS}} \le \frac{1}{4}\left(K_1(1+a_1^{-1})(1+a_2^{-1}) + K_2\left((1+a_2)(1+a_1^{-1})/4 + a_1^2/2\right)\right), \qquad (2.20)$$

where $K_1 = C^A_{LS}$, $K_2 = C_P(\mu)\left(2 + \langle V, \mu_A\rangle + L_U\, C_P(\mu)\, C_{LS}(\mu_A)\right)$, $a_1 = \frac{1}{K_2^{1/3}}\left(K_1 + \frac{K_2}{2}\right)^{2/3}$ and $a_2 = 2\sqrt{\frac{K_1}{K_2}}$.

Proof. First note that, thanks to Assumption 2.1, $\mu$ satisfies a Poincaré inequality [BBCG08], with constant $C_P(\mu)$. Denote $C_{LS}(\mu) = 2/\lambda_{LS}$. From [CG22, Theorem 0.1], we have, for all $a_1 > 0$ and $a_2 > 0$,

$$C_{LS}(\mu) \le (1+a_1^{-1})(1+a_2^{-1})\,C^A_{LS} + C_P(\mu)\left(2 + \langle V, \mu_A\rangle + L_U\, C_P(\mu)\, C_{LS}(\mu_A)\right)\left((1+a_2)(1+a_1^{-1})/4 + a_1^2/2\right).$$
Consider a function $f(a_1,a_2) = K_1(1+a_1^{-1})(1+a_2^{-1}) + K_2\left((1+a_2)(1+a_1^{-1})/4 + a_1^2/2\right)$ for some $K_1, K_2 > 0$; then $f$ attains its minimum at

$$a_1 = \frac{1}{K_2^{1/3}}\left(K_1 + \frac{K_2}{2}\right)^{2/3}, \qquad a_2 = 2\sqrt{\frac{K_1}{K_2}},$$

which completes the proof.

Proof of Theorem 2.1. Consider the Fokker-Planck equation governing the probability distribution of the non-linear Langevin dynamics (1.3a):

$$\frac{\partial \rho}{\partial t}(t,x) = \nabla\cdot\left(\rho(t,x)\,\Sigma(\mu_t)\nabla U(x)\right) + \frac{1}{\beta}\nabla\cdot\left(\Sigma(\mu_t)\nabla\rho(t,x)\right). \qquad (2.21)$$

We can write the potential function as $U(x) = -\frac{1}{\beta}\ln\pi(x) - \frac{1}{\beta}\ln Z$, $Z = \int_{\mathbb{R}^d} e^{-\beta U(x)}dx$. This results in the following PDE in divergence form:

$$\frac{\partial \rho}{\partial t}(t,x) = -\frac{1}{\beta}\nabla\cdot\left(\rho(t,x)\,\Sigma(\mu_t)\nabla\ln(\pi(x))\right) + \frac{1}{\beta}\nabla\cdot\left(\rho(t,x)\,\Sigma(\mu_t)\nabla\ln(\rho(t,x))\right) = \frac{1}{\beta}\nabla\cdot\left(\rho(t,x)\,\Sigma(\mu_t)\nabla\ln\Big(\frac{\rho(t,x)}{\pi(x)}\Big)\right). \qquad (2.22)$$

We denote $\rho_t := \rho(t,\cdot)$. Consider the relative Boltzmann-Shannon entropy (also known as the KL-divergence) in terms of the density $\rho$ with respect to $\pi$:

$$H(\rho_t|\pi) = \int_{\mathbb{R}^d} \rho_t \ln\!\left(\frac{\rho_t}{\pi}\right)dx. \qquad (2.23)$$

We discuss below the behaviour of the above functional along the solution of the Fokker-Planck PDE (2.22):

$$\frac{d}{dt}H(\rho_t|\pi) = \int_{\mathbb{R}^d} \frac{\partial}{\partial t}\left(\rho_t(x)\ln(\rho_t(x))\right) - \frac{\partial}{\partial t}\left(\rho_t\ln(\pi(x))\right)dx = \int_{\mathbb{R}^d} \left(\frac{\partial}{\partial t}\rho_t(x) + \ln(\rho_t(x))\frac{\partial}{\partial t}\rho_t(x) - \ln(\pi(x))\frac{\partial}{\partial t}\rho_t(x)\right)dx = \int_{\mathbb{R}^d} \ln\!\left(\frac{\rho_t(x)}{\pi(x)}\right)\frac{\partial}{\partial t}\rho_t(x)\,dx,$$

where we have used the conservation of mass in time, resulting in $\frac{\partial}{\partial t}\int_{\mathbb{R}^d}\rho_t(x)\,dx = 0$. Using (2.22) and integration by parts, we obtain

$$\frac{d}{dt}H(\rho_t|\pi) = -\frac{1}{\beta}\int_{\mathbb{R}^d} \left(\nabla\ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big)\cdot\Sigma(\mu_t)\nabla\ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big)\right)\rho_t(x)\,dx.$$
(2.24)

Using the log-Sobolev inequality, we get

$$-\int_{\mathbb{R}^d} \left(\nabla\ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big)\cdot\Sigma(\mu_t)\nabla\ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big)\right)\rho_t(x)\,dx \le -\lambda^\Sigma_{\min}(t)\int_{\mathbb{R}^d}\left|\nabla\ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big)\right|^2\rho_t(x)\,dx \le -\lambda^\Sigma_{\min}(t)\,2\lambda_{LS}\,H(\rho_t|\pi).$$

Therefore,

$$\frac{d}{dt}H(\rho_t|\pi) \le -\lambda^\Sigma_{\min}(t)\,\frac{2\lambda_{LS}}{\beta}\,H(\rho_t|\pi),$$

which, on applying Lemma 2.2, gives

$$\frac{d}{dt}H(\rho_t|\pi) \le -\min\!\left(\frac{1}{2\beta\left(L_A + 2\beta L_V^2\right)},\, \lambda^\Sigma_{\min}(0)\right)\frac{2\lambda_{LS}}{\beta}\,H(\rho_t|\pi), \qquad (2.25)$$

and the claimed exponential decay follows by Gronwall's inequality.

2.3 Proof of Lemma 2.4

Proof of Lemma 2.4. From (2.10), we have

$$d\Sigma_t = -\mathbb{E}\left[(X(t)-\mathbb{E}X(t))(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))^\top\right]\Sigma_t\,dt - \Sigma_t\,\mathbb{E}\left[(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))(X(t)-\mathbb{E}X(t))^\top\right]dt + \frac{2}{\beta}\Sigma_t\,dt, \qquad (2.26)$$

which, due to Assumption 2.1, gives

$$\frac{d}{dt}\Sigma_t = -2\Sigma_t A\Sigma_t - \mathbb{E}[Z_t P_t^\top]\Sigma_t - \Sigma_t\mathbb{E}[P_t Z_t^\top] + \frac{2}{\beta}\Sigma_t,$$

where $Z_t = X(t) - \mathbb{E}X(t)$ and $P_t = \nabla V(X(t)) - \mathbb{E}\nabla V(X(t))$. For $|y|^2 = 1$, we have

$$\frac{d}{dt}\, y^\top\Sigma_t y = -2y^\top\Sigma_t A\Sigma_t y - y^\top\mathbb{E}[Z_t P_t^\top]\Sigma_t y - y^\top\Sigma_t\mathbb{E}[P_t Z_t^\top]\,y + \frac{2}{\beta}\,y^\top\Sigma_t y \le -2\ell_A\, y^\top\Sigma_t^2 y - 2y^\top\mathbb{E}[Z_t P_t^\top]\Sigma_t y + \frac{2}{\beta}\,y^\top\Sigma_t y. \qquad (2.27)$$

First consider the second term on the right hand side of the inequality:

$$-2y^\top\mathbb{E}[Z_t P_t^\top]\Sigma_t y = -2\mathbb{E}[y^\top Z_t P_t^\top\Sigma_t y] \le 2\left(\mathbb{E}|y^\top Z_t|^2\right)^{1/2}\left(\mathbb{E}|P_t^\top\Sigma_t y|^2\right)^{1/2} = 2\left(y^\top\Sigma_t y\right)^{1/2}\left(y^\top\Sigma_t\mathbb{E}[P_t P_t^\top]\Sigma_t y\right)^{1/2}.$$

Due to our assumption on $V$, there exists a $K_V > 0$ such that the following holds for all $t > 0$ and $z \in \mathbb{R}^d$:

$$z^\top P_t P_t^\top z \le K_V|z|^2. \qquad (2.28)$$

Explicitly, since $V$ is Lipschitz with constant $L_V$ we have $|P_t| \le 2L_V$, so $|P_t^\top z| \le 2L_V|z|$, and therefore $K_V = 4L_V^2$. An application of Young's inequality gives

$$-2y^\top\mathbb{E}[Z_t P_t^\top]\Sigma_t y \le 2\sqrt{K_V}\left(y^\top\Sigma_t y\right)^{1/2}\left(y^\top\Sigma_t^2 y\right)^{1/2} \le \frac{\ell_A}{3}\,y^\top\Sigma_t^2 y + \frac{3K_V}{\ell_A}\,y^\top\Sigma_t y.$$

Hence, using the above inequality in (2.27), we get

$$\frac{d}{dt}\, y^\top\Sigma_t y \le -\frac{5}{3}\ell_A\, y^\top\Sigma_t^2 y + \frac{3K_V}{\ell_A}\,y^\top\Sigma_t y + \frac{2}{\beta}\,y^\top\Sigma_t y. \qquad (2.29)$$

Let us now analyze the term $y^\top\Sigma_t^2 y$ appearing in (2.27). We have already proven a uniform in time lower bound on the eigenvalues of $\Sigma_t$. We write the eigenvalue decomposition of the matrix $\Sigma_t$ as $O_t D_t O_t^\top$, where $O_t$ is an orthogonal matrix and $D_t$ is a diagonal matrix whose diagonal entries are given by the eigenvalues of $\Sigma_t$. This implies we can write

$$y^\top\Sigma_t^2 y = y^\top O_t D_t^2 O_t^\top y. \qquad (2.30)$$

We denote $v_t = O_t^\top y$, and it is clear that $|v_t|^2 = y^\top O_t O_t^\top y = 1$. The calculations below follow from these properties:

$$\left(y^\top\Sigma_t y\right)^2 = \left(v_t^\top D_t v_t\right)^2 = \left(\sum_{i=1}^d \lambda^\Sigma_i(t)(v^i_t)^2\right)^2 \le \sum_{i,j=1}^d \lambda^\Sigma_i(t)(v^i_t)^2\,\lambda^\Sigma_j(t)(v^j_t)^2,$$

where $\lambda^\Sigma_i(t)$ denotes the $i$-th diagonal entry of $D_t$ and $v^i_t$ represents the $i$-th component of $v_t$. On applying Young's inequality, we arrive at the following:

$$\left(y^\top\Sigma_t y\right)^2 \le \frac{1}{2}\sum_{i,j=1}^d\left[\left(\lambda^\Sigma_i(t)\right)^2 + \left(\lambda^\Sigma_j(t)\right)^2\right](v^i_t)^2(v^j_t)^2 = \frac{1}{2}\sum_{i,j=1}^d\left(\lambda^\Sigma_i(t)\right)^2(v^i_t)^2(v^j_t)^2 + \frac{1}{2}\sum_{i,j=1}^d\left(\lambda^\Sigma_j(t)\right)^2(v^j_t)^2(v^i_t)^2 = \sum_{i=1}^d\left(\lambda^\Sigma_i(t)\right)^2(v^i_t)^2 = v_t^\top D_t^2 v_t = y^\top O_t D_t^2 O_t^\top y = y^\top\Sigma_t^2 y. \qquad (2.31)$$

Consequently, we have obtained that

$$\left(y^\top\Sigma_t y\right)^2 \le y^\top\Sigma_t^2 y \qquad (2.32)$$

for all $y$ such that $|y| = 1$. Using the estimate $-\left(y^\top\Sigma_t y\right)^2 \ge -y^\top\Sigma_t^2 y$ in (2.29), we get

$$\frac{d}{dt}\, y^\top\Sigma_t y \le -\frac{5}{3}\ell_A\left(y^\top\Sigma_t y\right)^2 + \frac{3K_V}{\ell_A}\,y^\top\Sigma_t y + \frac{2}{\beta}\,y^\top\Sigma_t y. \qquad (2.33)$$

Therefore, we have

$$\frac{d}{dt}\, y^\top\Sigma_t y \le -\ell_A\left(y^\top\Sigma_t y\right)^2 + \left(\frac{3}{4}\right)^3\left(\frac{K_V}{\ell_A}\right)^2 + \frac{3}{\beta^2},$$

where we have utilized Young's inequality as

$$\frac{3K_V}{4\ell_A}\,y^\top\Sigma_t y \le \frac{1}{3}\left(y^\top\Sigma_t y\right)^2 + \left(\frac{3}{4}\right)^3\left(\frac{K_V}{\ell_A}\right)^2, \qquad \frac{2}{\beta}\,y^\top\Sigma_t y \le \frac{1}{3}\left(y^\top\Sigma_t y\right)^2 + \frac{3}{\beta^2}.$$
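The inequality (2.32) is the crux of the eigenvalue computation above (it is Jensen's inequality applied to the spectral weights $(v^i_t)^2$). A quick numerical sanity check — an illustration, not part of the proof — with randomly generated positive semi-definite matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
gap = -np.inf
for _ in range(200):
    d = int(rng.integers(2, 8))
    Q = rng.standard_normal((d, 3 * d))
    Sigma = Q @ Q.T / (3 * d)          # random symmetric PSD matrix
    y = rng.standard_normal(d)
    y /= np.linalg.norm(y)             # unit vector, |y| = 1
    lhs = (y @ Sigma @ y) ** 2         # (y^T Sigma y)^2
    rhs = y @ Sigma @ Sigma @ y        # y^T Sigma^2 y
    gap = max(gap, lhs - rhs)

# gap stays <= 0 (up to roundoff), consistent with (2.32)
```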
Therefore, also using $-x^2 \le -x + 1$, we get

$$\frac{d}{dt}\, y^\top\Sigma_t y \le -\ell_A\left(y^\top\Sigma_t y\right) + \ell_A + \left(\frac{3}{4}\right)^3\left(\frac{K_V}{\ell_A}\right)^2 + \frac{3}{\beta^2}.$$

Denoting $\tilde\Lambda(t,y) = y^\top\Sigma_t y$ and using $e^{\ell_A t}$ as the integrating factor, we get

$$\frac{d}{dt}\, e^{\ell_A t}\tilde\Lambda(t,y) \le \left(\ell_A + \left(\frac{3}{4}\right)^3\left(\frac{K_V}{\ell_A}\right)^2 + \frac{3}{\beta^2}\right)e^{\ell_A t}.$$

Therefore, we have

$$\tilde\Lambda(t,y) \le \tilde\Lambda(0,y)\,e^{-\ell_A t} + \frac{1}{\ell_A}\left(\ell_A + \left(\frac{3}{4}\right)^3\left(\frac{K_V}{\ell_A}\right)^2 + \frac{3}{\beta^2}\right)\left(1 - e^{-\ell_A t}\right).$$

Taking the supremum over $y$, we finally get our estimate

$$\lambda^\Sigma_{\max}(t) \le \lambda^\Sigma_{\max}(0)\,e^{-\ell_A t} + \frac{1}{\ell_A}\left(\left(\frac{3}{4}\right)^3\left(\frac{K_V}{\ell_A}\right)^2 + \frac{3}{\beta^2}\right)\left(1 - e^{-\ell_A t}\right). \qquad (2.34)$$

2.4 Proof of Theorem 2.3

Proof of Theorem 2.3. Below we prove the result for $l \ge 2$, after which the result for $0 < l < 2$ follows from the Cauchy-Bunyakovsky-Schwarz inequality. Using Ito's formula for $U^l(x)$, with $l \ge 2$, we have

$$dU^l(X(t)) = -lU^{l-1}(X(t))\langle\nabla U(X(t))\cdot\Sigma_t\nabla U(X(t))\rangle\,dt + \frac{l}{\beta}U^{l-1}(X(t))\,\mathrm{trace}\!\left[\nabla^2 U(X(t))\,\Sigma_t\right]dt + \frac{l}{\beta}(l-1)U^{l-2}(X(t))\langle\nabla U(X(t))\cdot\Sigma_t\nabla U(X(t))\rangle\,dt + \sqrt{\frac{2}{\beta}}\,l\,U^{l-1}\langle\nabla U(X(t))\cdot\sqrt{\Sigma_t}\,dW(t)\rangle.$$

Due to the boundedness of the second derivatives of $U$ and the uniform in time bounds on $\lambda^\Sigma_{\min}(t)$ and $\lambda^\Sigma_{\max}(t)$ (see Lemma 2.2 and Lemma 2.4), we have

$$\mathrm{trace}\!\left[\nabla^2 U(X(t))\,\Sigma_t\right] \le B, \qquad (2.35)$$

$$-\langle\nabla U(X(t))\cdot\Sigma_t\nabla U(X(t))\rangle \le -\ell_\Sigma|\nabla U(X(t))|^2, \qquad (2.36)$$

$$\langle\nabla U(X(t))\cdot\Sigma_t\nabla U(X(t))\rangle \le L_\Sigma|\nabla U(X(t))|^2, \qquad (2.37)$$

where $B > 0$ is a constant independent of $t$, and $\ell_\Sigma$ and $L_\Sigma$ are from (2.4) and (2.5), respectively.
This leads to the following:

$$dU^l(X(t)) \le -l\ell_\Sigma\, U^{l-1}(X(t))|\nabla U(X(t))|^2\,dt + \frac{l}{\beta}B\,U^{l-1}(X(t))\,dt + \frac{l}{\beta}(l-1)L_\Sigma\, U^{l-2}(X(t))|\nabla U(X(t))|^2\,dt + \sqrt{\frac{2}{\beta}}\,l\,U^{l-1}\langle\nabla U(X(t))\cdot\sqrt{\Sigma_t}\,dW(t)\rangle,$$

which, on taking expectation on both sides, gives

$$d\mathbb{E}U^l(X(t)) \le -l\ell_\Sigma\,\mathbb{E}U^{l-1}(X(t))|\nabla U(X(t))|^2\,dt + \frac{l}{\beta}B\,\mathbb{E}U^{l-1}(X(t))\,dt + \frac{l}{\beta}(l-1)L_\Sigma\,\mathbb{E}U^{l-2}(X(t))|\nabla U(X(t))|^2\,dt. \qquad (2.38)$$

Next, we derive upper and lower bounds for $|\nabla U(X(t))|^2$ in terms of $U(x)$. To this end, since $V$ by Assumption 2.1 is Lipschitz continuous, we have $V(x) \le V(0) + L_V|x|$. Therefore,

$$U(x) \le \frac{1}{2}L_A|x|^2 + L_V|x| + V(0),$$

which, upon using Young's inequality $L_V|x| \le \frac{1}{2}|x|^2 + \frac{1}{2}L_V^2$, gives

$$U(x) \le \frac{1}{2}(L_A+1)|x|^2 + \frac{1}{2}L_V^2 + V(0).$$

This implies

$$|x|^2 \ge \frac{2}{L_A+1}U(x) - \frac{1}{L_A+1}\left(L_V^2 + 2V(0)\right). \qquad (2.39)$$

Due to our assumption on $U$, i.e., Assumption 2.1, and (2.39), we obtain

$$|\nabla U(x)|^2 = |Ax + \nabla V(x)|^2 \ge \frac{1}{2}|Ax|^2 - |\nabla V(x)|^2 \ge \frac{\ell_A^2}{2}|x|^2 - L_V^2 \ge \frac{\ell_A^2}{L_A+1}U(x) + \frac{\ell_A^2\left(2V(0) + L_V^2\right)}{2(L_A+1)} - L_V^2. \qquad (2.40)$$

To derive the upper bound we again use the Lipschitz continuity of $V$, which gives

$$U(x) \ge \frac{1}{2}\ell_A^2|x|^2 + V(0) - L_V|x| \ge \frac{1}{4}\ell_A^2|x|^2 - \frac{L_V^2}{\ell_A^2} + V(0),$$

where we have used Young's inequality, i.e., $L_V|x| \le \ell_A^2|x|^2/4 + L_V^2/\ell_A^2$, in the last step. Hence,

$$|x|^2 \le \frac{4}{\ell_A^2}U(x) + \frac{4L_V^2}{\ell_A^4} - \frac{4}{\ell_A^2}V(0).$$

This implies

$$|\nabla U(x)|^2 = |Ax + \nabla V(x)|^2 \le 2L_A^2|x|^2 + 2L_V^2 \le \frac{8L_A^2}{\ell_A^2}U(x) + \frac{8L_A^2 L_V^2}{\ell_A^4} - \frac{8L_A^2}{\ell_A^2}V(0) + 2L_V^2. \qquad (2.41)$$

Using (2.40) and (2.41), we have

$$\begin{aligned} d\mathbb{E}U^l(X(t)) &\le -l\ell_\Sigma\frac{\ell_A^2}{L_A+1}\,\mathbb{E}U^l(X(t))\,dt - l\ell_\Sigma\frac{\ell_A^2\left(2V(0)+L_V^2\right)}{2(L_A+1)}\,\mathbb{E}U^{l-1}(X(t))\,dt \\ &\quad + l\ell_\Sigma L_V^2\,\mathbb{E}U^{l-1}(X(t))\,dt + \frac{l}{\beta}B\,\mathbb{E}U^{l-1}(X(t))\,dt + \frac{l}{\beta}(l-1)L_\Sigma\frac{8L_A^2}{\ell_A^2}\,\mathbb{E}U^{l-1}(X(t))\,dt \\ &\quad + \frac{l}{\beta}(l-1)L_\Sigma\left(\frac{8L_A^2 L_V^2}{\ell_A^4} - \frac{8L_A^2}{\ell_A^2}V(0) + 2L_V^2\right)\mathbb{E}U^{l-2}(X(t))\,dt \\ &\le -\eta_1\,\mathbb{E}U^l(X(t))\,dt + \eta_2\,\mathbb{E}U^{l-1}(X(t))\,dt + \eta_3\,\mathbb{E}U^{l-2}(X(t))\,dt, \end{aligned}$$

where

$$\eta_1 := l\ell_\Sigma\frac{\ell_A^2}{L_A+1}, \qquad (2.42)$$

$$\eta_2 := l\ell_\Sigma L_V^2 + \frac{l}{\beta}B + \frac{l}{\beta}(l-1)L_\Sigma\frac{8L_A^2}{\ell_A^2}, \qquad (2.43)$$

$$\eta_3 := \frac{l}{\beta}(l-1)L_\Sigma\left(\frac{8L_A^2 L_V^2}{\ell_A^4} - \frac{8L_A^2}{\ell_A^2}V(0) + 2L_V^2\right). \qquad (2.44)$$

Using Young's inequality, we have

$$\eta_2 U^{l-1}(x) \le \frac{\eta_1}{3}U^l(x) + \left(\frac{2\eta_1(l-1)}{3l}\right)^{-(l-1)/l}\frac{\eta_2^l}{2l}, \qquad \eta_3 U^{l-2}(x) \le \frac{\eta_1}{3}U^l(x) + \left(\frac{2\eta_1(l-2)}{3l}\right)^{-(l-2)/l}\frac{\eta_3^{l/2}}{l},$$

which results in

$$d\mathbb{E}U^l(X(t)) \le -\frac{\eta_1}{3}\,\mathbb{E}U^l(X(t))\,dt + \eta_4\,dt,$$

where $\eta_4$ is independent of $t$ and given by

$$\eta_4 = \left(\frac{2\eta_1(l-1)}{3l}\right)^{-(l-1)/l}\frac{\eta_2^l}{2l} + \left(\frac{2\eta_1(l-2)}{3l}\right)^{-(l-2)/l}\frac{\eta_3^{l/2}}{l}.$$

This implies

$$\mathbb{E}U^l(X(t)) \le \mathbb{E}U^l(X(0))\,e^{-\frac{\eta_1}{3}t} + \frac{3\eta_4}{\eta_1}\left(1 - e^{-\frac{\eta_1}{3}t}\right),$$

which in turn implies the uniform in time moment bound, i.e., $\mathbb{E}|X(t)|^{2l} \le C_M$, where $C_M > 0$ is a constant independent of $t$.

3 Particle approximation and propagation of chaos

For a process $X(t)$ driven by the dynamics (1.3), the time marginal law, denoted by $\mu_t := \mathcal{L}^X_t$, satisfies the non-linear Fokker-Planck PDE

$$\frac{\partial\mu_t}{\partial t}(t,x) = \langle\nabla_x\mu_t\cdot\Sigma(\mu_t)\nabla U(x)\rangle + \frac{1}{\beta}\,\mathrm{trace}\!\left(\Sigma(\mu_t)\nabla^2\mu_t\right), \qquad (t,x)\in[0,T]\times\mathbb{R}^d. \qquad (3.1)$$

In this section, we prove that we can use a particle approximation to approximate this time marginal law.
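Before passing to particles, the mean-field covariance evolution can be sanity-checked numerically in the purely quadratic case $V = 0$, where (2.10) with Assumption 2.1 reduces to the matrix ODE $\frac{d}{dt}\Sigma_t = -2\Sigma_t A\Sigma_t + \frac{2}{\beta}\Sigma_t$, with equilibrium $(\beta A)^{-1}$, the covariance of the Gibbs measure. A minimal forward-Euler sketch (step size, horizon, and the choice of $A$ from Figure 1 are illustrative assumptions):

```python
import numpy as np

# Quadratic case V = 0: by (2.10) and Assumption 2.1 the covariance satisfies
#   d Sigma_t / dt = -2 Sigma_t A Sigma_t + (2 / beta) Sigma_t,
# with equilibrium Sigma_* = (beta A)^{-1}, the Gibbs covariance.
A = np.array([[0.52, -0.48], [-0.48, 0.52]])     # potential from Figure 1
beta = 1.0
Sigma = 75.0 * np.eye(2)   # covariance of the Uniform([-15, 15]^2) initialization
dt = 1e-3                  # forward-Euler step (illustrative)
for _ in range(20000):     # integrate up to t = 20
    Sigma = Sigma + dt * (-2.0 * Sigma @ A @ Sigma + (2.0 / beta) * Sigma)

target = np.linalg.inv(beta * A)   # equals [[13, 12], [12, 13]] here
```

In closed form $\Sigma_t^{-1} = \beta A + (\Sigma_0^{-1} - \beta A)e^{-2t/\beta}$, so the iteration converges to `target` exponentially fast, in both eigendirections at the same rate — the preconditioning removes the anisotropy of $A$ from the convergence speed.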
By the contraction result in Theorem 2.1, this means that we can use a particle approximation to approximate $\pi(x)\,dx$, with $\pi$ given in (1.4). To this end, let $N\in\mathbb{N}$ denote the number of particles and let $X^{i,N}(t)$ represent the position of the $i$-th particle at time $t$. Let $\mathcal{E}^N(t)$ represent the empirical measure defined as
\[
\mathcal{E}^N(t) = \frac{1}{N}\sum_{j=1}^N \delta_{X^{j,N}(t)}. \tag{3.2}
\]
Consider the $d\times N$ deviation matrix
\[
Q^N(t) = \big[X^{1,N}(t)-M^N(t),\ \dots,\ X^{N,N}(t)-M^N(t)\big]
\]
with $M^N_t$ being the ensemble mean given by $M^N_t := M(\mathcal{E}^N(t)) = \frac{1}{N}\sum_{j=1}^N X^{j,N}(t)$. The ensemble covariance, denoted by $\Sigma^N_t$, is defined as
\[
\Sigma^N_t := \Sigma(\mathcal{E}^N(t)) = \frac{1}{N}Q^N(t)Q^N(t)^\top. \tag{3.3}
\]
One possible particle approximation to the mean-field limit is given by the following SDEs governing the interacting particle system:
\[
dX^{i,N}(t) = -\Sigma(\mathcal{E}^N(t))\nabla U(X^{i,N}(t))\,dt + \sqrt{\frac{2}{\beta}}\sqrt{\Sigma(\mathcal{E}^N(t))}\,dW^i(t), \tag{3.4}
\]
where $(W^i(t))_{t\ge 0}$, $i=1,\dots,N$, denote standard $d$-dimensional Wiener processes. The strong convergence of (3.4) to the mean-field limit (1.3) is shown in [DL21b] and [Vae24] in the linear and non-linear setting, respectively. However, an implementation of (3.4) requires the computation of the square root of a $d\times d$ matrix, which may be computationally expensive. To overcome this, we first note that, for the purpose of sampling and computing ergodic averages, a weak-sense approximation suffices. To this end, inspired by [GINR20], we consider a system of SDEs which approximates the mean-field SDE in the large-particle limit, but in the weak sense. Let $(B^i(t))_{t\ge 0}$, $i=1,\dots,N$, be standard $N$-dimensional Wiener processes. We consider the following SDEs driving the interacting particle system:
\[
dX^{i,N}(t) = -\Sigma(\mathcal{E}^N(t))\nabla U(X^{i,N}(t))\,dt + \sqrt{\frac{2}{\beta N}}\,Q^N(t)\,dB^i(t), \tag{3.5}
\]
where $X^{i,N}(t)\in\mathbb{R}^d$ denotes the position of the $i$-th particle at time $t$.
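To make the role of the deviation matrix concrete, the scheme (3.5) can be sketched in a few lines: the noise is injected as $Q^N\,dB^i$, so the $d\times d$ matrix square root required by (3.4) is never formed. The quadratic potential, step size, and particle count below are hypothetical choices for illustration only, not values from the paper.

```python
import numpy as np

def simulate_ensemble(grad_U, x0, beta, h, n_steps, rng):
    """Euler-Maruyama sketch of the interacting particle system (3.5).
    Noise enters through the deviation matrix Q^N, so the d x d matrix
    square root needed by (3.4) is never computed."""
    X = x0.copy()                                  # shape (d, N): one particle per column
    N = X.shape[1]
    for _ in range(n_steps):
        M = X.mean(axis=1, keepdims=True)          # ensemble mean M^N(t)
        Q = X - M                                  # deviation matrix Q^N(t)
        Sigma = Q @ Q.T / N                        # ensemble covariance (3.3)
        drift = -Sigma @ grad_U(X)                 # -Sigma(E^N) grad U(X^{i,N}), per column
        dB = rng.standard_normal((N, N)) * np.sqrt(h)   # N-dim Wiener increments dB^i
        X = X + h * drift + np.sqrt(2.0 / (beta * N)) * (Q @ dB)
    return X

# Hypothetical example: U(x) = |x|^2 / 2, so grad U(x) = x.
rng = np.random.default_rng(0)
X = simulate_ensemble(lambda X: X, rng.standard_normal((2, 64)),
                      beta=1.0, h=0.01, n_steps=200, rng=rng)
```

Each column of `dB` is an independent $N$-dimensional Brownian increment, matching the fact that every particle in (3.5) is driven by its own $N$-dimensional Wiener process $B^i$.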
Remark 3.1 (Derivative-free approach). Using the following approximate first-order linearization [ILS13, SS17]:
\[
U(X^{j,N}(t)) \approx U(X^{i,N}(t)) + \langle (X^{j,N}(t)-X^{i,N}(t))\cdot\nabla U(X^{i,N}(t))\rangle,
\]
\[
U(X^{k,N}(t)) \approx U(X^{i,N}(t)) + \langle (X^{k,N}(t)-X^{i,N}(t))\cdot\nabla U(X^{i,N}(t))\rangle,
\]
we get
\[
U(X^{j,N}(t)) - \bar U^N(t) \approx \langle (X^{j,N}(t)-M^N(t))\cdot\nabla U(X^{i,N}(t))\rangle,
\]
where $\bar U^N(t) := \frac{1}{N}\sum_{k=1}^N U(X^{k,N}(t))$. The above, with some more computation, leads to the following derivative-free dynamics:
\[
dZ^{i,N}(t) = -\frac{1}{N}\sum_{j=1}^N (Z^{j,N}(t)-M^N(t))\big(U(Z^{j,N}(t))-\bar U^N(t)\big)\,dt + \sqrt{\frac{2}{\beta N}}\,Q^N(t)\,dB^i(t). \tag{3.6}
\]

In order to prove propagation of chaos for (3.5), we impose the following assumption on $U$.

Assumption 3.1. Let $U\in C^2(\mathbb{R}^d)$ and $U(x)\ge 0$ for all $x\in\mathbb{R}^d$. We assume that there exists a compact set $\mathcal{K}$ such that the following holds for some $K_1,K_2,K_3,K_4>0$:
\[
K_1|x|^2 \le U(x) \le K_2|x|^2 \quad\text{and}\quad K_3|x| \le |\nabla U(x)| \le K_4|x|, \qquad\text{for all } x\in\mathcal{K}^c.
\]
We also assume that all second derivatives of $U$ are uniformly bounded in $x$.

In [Vae24], the well-posedness of (1.3a) as well as of (3.4) is proved under Assumption 3.1. In the proof of well-posedness of the particle system (3.4) in [Vae24], the Khasminskii–Lyapunov approach is employed with the same Lyapunov function as used in [GINR20]. The exact same arguments can be used to show well-posedness of (3.5). We do not repeat these calculations, since the line of argument remains unchanged because $\Sigma^N = \frac{1}{N}Q^N(Q^N)^\top = \sqrt{\Sigma^N}\sqrt{\Sigma^N}^\top$. In addition, one can also obtain moment bounds uniform in $N$, as has been done in [Vae24]: for $p\ge 1$ it holds that
\[
\sup_{i=1,\dots,N}\ \sup_{t\in[0,T]} \mathbb{E}|X^{i,N}(t)|^{2p} \le C, \tag{3.7}
\]
where $C>0$ is independent of $N$. As a consequence of the uniform-in-$N$ moment bound, we have
\[
\begin{aligned}
\mathbb{E}|\Sigma(\mathcal{E}^N(t))| &= \frac{1}{N}\mathbb{E}\Big|\sum_{i=1}^N (X^{i,N}(t)-M^N(t))(X^{i,N}(t)-M^N(t))^\top\Big| \\
&\le \frac{1}{N}\sum_{i=1}^N \mathbb{E}|X^{i,N}(t)-M^N(t)|^2 \\
&\le \frac{2}{N}\sum_{i=1}^N \mathbb{E}|X^{i,N}(t)|^2 + 2\,\mathbb{E}|M^N(t)|^2 \le \frac{4}{N}\sum_{i=1}^N \mathbb{E}|X^{i,N}(t)|^2 \le C. \tag{3.8}
\end{aligned}
\]
The main result of this section is the following theorem.

Theorem 3.1. Let $\mu_0\in\mathcal{P}_2(\mathbb{R}^d)$, and let Assumption 3.1 hold. Moreover, let $(X^{i,N})_{i=1}^N$ be the solution of the interacting particle system (3.5), and let $\mathcal{E}^N_t$ denote the empirical measure of these $N$ interacting particles at time $t$. Then, for all $t\in[0,T]$, there exists a deterministic limit $\mu_t$ of the empirical measure $\mathcal{E}^N_t$ as $N\to\infty$. This $\mu_t$ satisfies the mean-field PDE (3.1) in the weak sense.
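Before turning to the proof, note that the derivative-free dynamics (3.6) of Remark 3.1 admit an equally short numerical sketch: the drift is the same vector for every particle and uses only evaluations of $U$, never $\nabla U$. The quadratic potential below is a hypothetical stand-in for illustration, not a choice made in the paper.

```python
import numpy as np

def derivative_free_step(Z, U, beta, h, rng):
    """One Euler-Maruyama step of the derivative-free dynamics (3.6).
    U maps the (d, N) particle array to the N potential values U(Z^{j,N})."""
    N = Z.shape[1]
    M = Z.mean(axis=1, keepdims=True)            # ensemble mean M^N(t)
    Q = Z - M                                    # deviation matrix Q^N(t)
    Uvals = U(Z)                                 # U(Z^{j,N}), shape (N,)
    # drift: -(1/N) sum_j (Z^j - M^N)(U(Z^j) - Ubar^N); identical for every i
    drift = -(Q @ (Uvals - Uvals.mean())) / N
    dB = rng.standard_normal((N, N)) * np.sqrt(h)
    return Z + h * drift[:, None] + np.sqrt(2.0 / (beta * N)) * (Q @ dB)

# Hypothetical example: U(x) = |x|^2 / 2, evaluated column-wise.
rng = np.random.default_rng(0)
Z = rng.standard_normal((2, 32))
for _ in range(100):
    Z = derivative_free_step(Z, lambda W: 0.5 * np.sum(W**2, axis=0), 1.0, 0.01, rng)
```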
3.1 Proof of Theorem 3.1

The proof follows the classical trilogy of arguments (see [Szn91]):

β€’ Tightness of the law of the random empirical measure: together with Prokhorov's theorem, this ensures that there exists a converging subsequence of the laws of the random empirical measures as the number of particles tends to infinity.

β€’ Identification of the limit: this is achieved by looking at the PDE corresponding to the mean-field SDE.

β€’ Uniqueness of the limit: this follows from the uniqueness of the solution of the mean-field SDE.

3.1.1 Tightness of the random measures

Let $\mathcal{L}(\mathcal{E}^N)$ be the law of the empirical measure, which means it belongs to $\mathcal{P}(\mathcal{P}(C([0,T];\mathbb{R}^d)))$. We aim to show that the family of measures $(\mathcal{L}(\mathcal{E}^N))_{N\in\mathbb{N}}$ is tight in $\mathcal{P}(\mathcal{P}(C([0,T];\mathbb{R}^d)))$. By [Szn91, Proposition 2.2], showing tightness of the family $(\mathcal{L}(\mathcal{E}^N))_{N\ge 2}$ in $\mathcal{P}(\mathcal{P}(C([0,T];\mathbb{R}^d)))$ is equivalent to showing that the family $(\mathcal{L}(X^{1,N}))_{N\in\mathbb{N}}$ is tight in $\mathcal{P}(C([0,T];\mathbb{R}^d))$. To this end, we employ the Aldous criterion (see [Bil13, Section 16]) as follows:

(i) For all $t\in[0,T]$, the time-marginal law of $(X^{i,N}(t))_{i\in\mathbb{N}}$ is tight as a sequence in the space $\mathcal{P}(\mathbb{R}^d)$.

(ii) For all $\epsilon>0$ and $\zeta>0$, there exist $n_0$ and $\delta>0$ such that for any sequence of stopping times $\tau_j$,
\[
\sup_{j>n_0}\ \sup_{\theta\le\delta} \mathbb{P}\big(|X^{i,N}_{\tau_j+\theta} - X^{i,N}_{\tau_j}| > \zeta\big) \le \epsilon. \tag{3.9}
\]

From (3.7), there is a $K$, not depending on $N$, such that $\mathbb{E}|X^{1,N}(t)|^2 \le K$. To verify the first condition, consider the compact set $K_\epsilon := \{y : |y|^2 \le K/\epsilon\}$. Using Markov's inequality, we have
\[
\mathcal{L}_{X^{1,N}(t)}\big(K_\epsilon^c\big) = \mathbb{P}\big(|X^{1,N}(t)|^2 > K/\epsilon\big) \le \frac{\epsilon\,\mathbb{E}|X^{1,N}(t)|^2}{K} \le \epsilon. \tag{3.10}
\]
This verifies the first condition, namely that for each $t\in[0,T]$ the sequence $(\mathcal{L}_{X^{1,N}(t)})_{N\in\mathbb{N}}$ is tight.
To verify the second condition, we start with
\[
X^{1,N}(\tau_j+\delta) = X^{1,N}(\tau_j) - \int_{\tau_j}^{\tau_j+\delta} \Sigma(\mathcal{E}^N(t))\nabla U(X^{1,N}(t))\,dt + \int_{\tau_j}^{\tau_j+\delta} \sqrt{\frac{2}{\beta N}}\,Q^N(t)\,dB^1(t). \tag{3.11}
\]
Rearranging terms, squaring, and taking expectations on both sides, we obtain
\[
\mathbb{E}|X^{1,N}(\tau_j+\delta)-X^{1,N}(\tau_j)|^2 \le 2\,\mathbb{E}\Big|\int_{\tau_j}^{\tau_j+\delta} \Sigma(\mathcal{E}^N(t))\nabla U(X^{1,N}(t))\,dt\Big|^2 + 2\,\mathbb{E}\Big|\int_{\tau_j}^{\tau_j+\delta} \sqrt{\frac{2}{\beta N}}\,Q^N(t)\,dB^1(t)\Big|^2.
\]
For the first term, by the Cauchy–Bunyakovsky–Schwarz inequality, we have
\[
\mathbb{E}\Big|\int_{\tau_j}^{\tau_j+\delta} \Sigma(\mathcal{E}^N(t))\nabla U(X^{1,N}(t))\,dt\Big|^2 \le \delta\,\mathbb{E}\Big(\int_{\tau_j}^{\tau_j+\delta} |\Sigma(\mathcal{E}^N(t))\nabla U(X^{1,N}(t))|^2\,dt\Big) \le \delta \sup_{t\in[0,T]}\mathbb{E}|\Sigma(\mathcal{E}^N(t))\nabla U(X^{1,N}(t))|^2. \tag{3.12}
\]
For the second term, by ItΓ΄'s isometry, we have
\[
\mathbb{E}\Big|\int_{\tau_j}^{\tau_j+\delta}\sqrt{\frac{2}{\beta N}}\,Q^N(t)\,dB^1(t)\Big|^2 = \mathbb{E}\int_{\tau_j}^{\tau_j+\delta}\Big|\frac{2}{\beta N}Q^N(t)(Q^N)^\top(t)\Big|^2 dt = \mathbb{E}\int_{\tau_j}^{\tau_j+\delta}\Big|\frac{2}{\beta}\Sigma(\mathcal{E}^N(t))\Big|^2 dt.
\]
Now, by the Cauchy–Bunyakovsky–Schwarz inequality, we have
\[
\mathbb{E}\int_{\tau_j}^{\tau_j+\delta}\Big|\frac{2}{\beta}\Sigma(\mathcal{E}^N(t))\Big|^2 dt \le \mathbb{E}\Big(\int_{\tau_j}^{\tau_j+\delta}\frac{4}{\beta^2}\,dt\Big)^{1/2}\Big(\int_{\tau_j}^{\tau_j+\delta}\mathbb{E}|\Sigma(\mathcal{E}^N(t))|^4\,dt\Big)^{1/2} \le C\delta^{1/2}\Big(\int_0^T\mathbb{E}|\Sigma(\mathcal{E}^N(t))|^4\,dt\Big)^{1/2} \le C\delta^{1/2}\sup_{t\in[0,T]}\mathbb{E}|\Sigma(\mathcal{E}^N(t))|^2. \tag{3.13}
\]
This implies
\[
\begin{aligned}
\mathbb{E}|X^{1,N}(\tau_j+\delta)-X^{1,N}(\tau_j)|^2 &\le C\delta\,\mathbb{E}\int_{\tau_j}^{\tau_j+\delta}|\Sigma(\mathcal{E}^N(t))\nabla U(X^{1,N}(t))|^2\,dt + C\delta^{1/2}\sup_{t\in[0,T]}\mathbb{E}|\Sigma(\mathcal{E}^N(t))|^2 \\
&\le C\delta\sup_{t\in[0,T]}\mathbb{E}|\Sigma(\mathcal{E}^N(t))\nabla U(X^{1,N}(t))|^2 + C\delta^{1/2}\sup_{t\in[0,T]}\mathbb{E}|\Sigma(\mathcal{E}^N(t))|^2.
\end{aligned}
\]
Using the moment bounds from (3.7), we get
\[
\mathbb{E}|X^{1,N}(\tau_j+\delta)-X^{1,N}(\tau_j)|^2 \le C\delta^{1/2}. \tag{3.14}
\]
Consequently, for all $\epsilon>0$ and $\zeta>0$ there exists a $\delta_0$ such that for all $\delta\in(0,\delta_0)$ we have
\[
\sup_{\delta\in(0,\delta_0)}\mathbb{P}\big(|X^{1,N}(\tau_j+\delta)-X^{1,N}(\tau_j)|>\zeta\big) \le \sup_{\delta\in(0,\delta_0)}\frac{\mathbb{E}|X^{1,N}(\tau_j+\delta)-X^{1,N}(\tau_j)|^2}{\zeta^2} \le \epsilon, \tag{3.15}
\]
where we have applied Markov's inequality. The choice of $\delta_0$ depends on the positive constant $C$ appearing on the right-hand side of (3.14).
This ensures that the second condition in the Aldous criterion holds.

Having established the tightness of $(\mathcal{L}_{X^{1,N}})_{N\in\mathbb{N}}$ in $\mathcal{P}(C([0,T];\mathbb{R}^d))$, and hence the tightness of $(\mathcal{L}(\mathcal{E}^N))_{N\in\mathbb{N}}$ in $\mathcal{P}(\mathcal{P}(C([0,T];\mathbb{R}^d)))$, Prokhorov's theorem [Bil13] yields a subsequence $(\mathcal{L}(\mathcal{E}^{N_k}))_{N_k\in\mathbb{N}}$ of $(\mathcal{L}(\mathcal{E}^N))_{N\in\mathbb{N}}$ and a random measure $\mu$ such that $\mathcal{L}(\mathcal{E}^{N_k})$ converges to $\mu$ as $N_k\to\infty$.

3.1.2 Identification of the limit

The PDE (in weak sense) associated with the measure-dependent Langevin dynamics is given by
\[
\begin{aligned}
\langle\varphi(y),\mathcal{L}_{X(t)}(dy)\rangle - \langle\varphi(y),\mathcal{L}_{X(0)}(dy)\rangle ={}& -\int_0^t \big\langle \Sigma(\mathcal{L}_{X(s)})\nabla U(y)\cdot\nabla\varphi(y),\,\mathcal{L}_{X(s)}(dy)\big\rangle\,ds \\
&+ \int_0^t \frac{2}{\beta}\big\langle \mathrm{trace}\big(\nabla^2\varphi(y)\Sigma(\mathcal{L}_{X(s)})\big),\,\mathcal{L}_{X(s)}(dy)\big\rangle\,ds, \tag{3.16}
\end{aligned}
\]
where $\varphi\in C_c^2(\mathbb{R}^d)$. Introduce the following functional on $\mathcal{P}(C([0,T];\mathbb{R}^d))$, for $t\in[0,T]$ and $\varphi\in C_c^2(\mathbb{R}^d)$:
\[
\begin{aligned}
\Psi_t^\varphi(\nu) :={}& \langle\varphi(y_t),\nu(dy)\rangle - \langle\varphi(y_0),\nu(dy)\rangle + \int_0^t \big\langle \Sigma(\nu_s)\nabla U(y_s)\cdot\nabla\varphi(y_s),\,\nu(dy)\big\rangle\,ds \\
&- \int_0^t \frac{2}{\beta}\big\langle \mathrm{trace}\big(\mathrm{Hess}\,\varphi(y_s)\Sigma(\nu_s)\big),\,\nu_s\big\rangle\,ds. \tag{3.17}
\end{aligned}
\]
We have already established the convergence of a subsequence of $(\mathcal{L}(\mathcal{E}^N))_{N\in\mathbb{N}}$ to a random measure $\mu\in\mathcal{P}(\mathcal{P}(C([0,T];\mathbb{R}^d)))$. The next step is to show that this random measure is in fact $\mu := \delta_{\mathcal{L}_X}$. To this end, we establish the following bound for all $\varphi\in C_c^2(\mathbb{R}^d)$:
\[
\mathbb{E}|\Psi_t^\varphi(\mathcal{E}^N)|^2 \le \frac{C}{N}, \tag{3.18}
\]
where $C>0$ is independent of $N$.
Using ItΓ΄'s formula, we have
\[
\begin{aligned}
\varphi(X^{i,N}(t)) ={}& \varphi(X^{i,N}(0)) - \int_0^t \langle\nabla\varphi(X^{i,N}(s))\cdot\Sigma(\mathcal{E}^N(s))\nabla U(X^{i,N}(s))\rangle\,ds \\
&+ \int_0^t \frac{2}{\beta}\,\mathrm{trace}\big(\nabla^2\varphi(X^{i,N}(s))Q^N(s)(Q^N)^\top(s)\big)\,ds + \int_0^t \sqrt{\frac{2}{\beta}}\,\langle\nabla\varphi(X^{i,N}(s))\cdot Q^N(s)\,dB^i(s)\rangle. \tag{3.19}
\end{aligned}
\]
This implies, using (3.17), that
\[
\Psi_t^\varphi(\mathcal{E}^N) = \frac{1}{N}\sum_{i=1}^N \int_0^t \sqrt{\frac{2}{\beta}}\,\langle\nabla\varphi(X^{i,N}(s))\cdot Q^N(s)\,dB^i(s)\rangle, \tag{3.20}
\]
which, using the martingale property of the ItΓ΄ integral, results in the following:
\[
\mathbb{E}|\Psi_t^\varphi(\mathcal{E}^N)|^2 = \frac{1}{N^2}\sum_{i=1}^N \mathbb{E}\Big|\int_0^t \sqrt{\frac{2}{\beta}}\,\langle\nabla\varphi(X^{i,N}(s))\cdot Q^N(s)\,dB^i(s)\rangle\Big|^2. \tag{3.21}
\]
Using ItΓ΄'s isometry, we obtain
\[
\mathbb{E}|\Psi_t^\varphi(\mathcal{E}^N)|^2 = \frac{1}{N^2}\frac{2}{\beta}\sum_{i=1}^N \int_0^t \mathbb{E}\langle\nabla\varphi(X^{i,N}(s))\cdot\Sigma(\mathcal{E}^N(s))\nabla\varphi(X^{i,N}(s))\rangle\,ds, \tag{3.22}
\]
which, using (3.8), gives
\[
\mathbb{E}|\Psi_t^\varphi(\mathcal{E}^N)|^2 \le \frac{C}{N}, \tag{3.23}
\]
thus establishing (3.18). Note that for any bounded continuous function $f$, we have
\[
\int_{\mathbb{R}^d} f(x)\,\mathcal{E}^N(t)(dx) \to \int_{\mathbb{R}^d} f(x)\,\mu_t(dx) \quad\text{as } N\to\infty, \tag{3.24}
\]
and the family of random variables $\{\Sigma(\mathcal{E}^N(t))\}_{N\in\mathbb{N}}$ is uniformly integrable due to (3.8). Consequently, we have (see, e.g., [VdV98, Thm. 2.20])
\[
\Sigma(\mathcal{E}^N(t)) \to \Sigma(\mu_t) \quad\text{as } N\to\infty. \tag{3.25}
\]
This ensures that $\Psi_t^\varphi(\nu)$ is a continuous function of $\nu$. Also, from (3.23) it is clear that the family of random variables $\{|\Psi_t^\varphi(\mathcal{E}^N(t))|^2\}_{N\in\mathbb{N}}$ is uniformly integrable. Hence, by Fatou's lemma, we have
\[
\mathbb{E}|\Psi_t^\varphi(\mu)|^2 = \mathbb{E}\lim_{N\to\infty}|\Psi_t^\varphi(\mathcal{E}^N)|^2 = \lim_{N\to\infty}\mathbb{E}|\Psi_t^\varphi(\mathcal{E}^N)|^2 = 0. \tag{3.26}
\]
This implies that $\mu$ satisfies the PDE (3.16) (in weak sense):
\[
\Psi_t^\varphi(\mu) = 0, \quad\text{a.s.} \tag{3.27}
\]

3.1.3 Uniqueness of the limit

The uniqueness of the solution (in weak sense) of the PDE (3.16) follows from the pathwise uniqueness of the solution of (1.3a) [Vae24].
This implies that any arbitrary subsequence of $\{\mathcal{E}^N(t)\}_{N\in\mathbb{N}}$ has a convergent subsequence, and all these subsequences converge to the same limit. Hence, noticing that the sequence itself converges, the proof is complete.

4 Time discretization analysis

The implementation of (3.5) and (3.4) requires numerical discretization. Let the final time $T$ be fixed. We uniformly partition $[0,T]$ into $n$ sub-intervals of size $h$, i.e., $t_0=0<\dots<t_n=T$, $h=t_{k+1}-t_k=T/n$. Let $\alpha$ be a constant belonging to the interval $(0,1/2)$. We denote the approximating Markov chain as $(X_k^{i,N})_{k=0}^n$, starting from $X_0^i$ for all $i=1,\dots,N$. We also denote $\beta_k := \beta(t_k)$ and $\Sigma_k^N := \Sigma(\mathcal{E}_k^N)$, where
\[
\mathcal{E}_k^N = \frac{1}{N}\sum_{i=1}^N \delta_{X_k^{i,N}}, \tag{4.1}
\]
and therefore
\[
\Sigma(\mathcal{E}_k^N) = \frac{1}{N}Q_k^N(Q_k^N)^\top \tag{4.2}
\]
with $Q_k^N = [X_k^{1,N}-M_k,\dots,X_k^{N,N}-M_k]$ and $M_k = \frac{1}{N}\sum_{i=1}^N X_k^{i,N}$. As is known in the stochastic numerics literature (see [HJK11]), the Euler–Maruyama scheme may diverge in the strong sense for coefficients with higher-than-linear growth. Explicit schemes have been proposed to deal with the issue of unbounded moments arising from non-linear growth [HJK12, TZ13]. Here, following [HJK12], we present tamed schemes to deal with the non-linear growth of the coefficients. We present two versions of the tamed Euler–Maruyama scheme:

(i) Particle-wise tamed Euler–Maruyama scheme:
\[
X_{k+1}^{i,N} = X_k^{i,N} - B_1(h,X_k^{i,N})\Sigma_k^N\nabla U(X_k^{i,N})\,h + \sqrt{\frac{2}{\beta N}}\,Q_k^N\xi_{k+1}^i\,h^{1/2}, \qquad k=0,\dots,n-1, \tag{4.3}
\]
with
\[
B_1(h,X_k^{i,N}) = \mathrm{Diag}\Big(\frac{1}{1+h^\alpha|\Sigma_k^N\nabla U(X_k^{i,N})|},\dots,\frac{1}{1+h^\alpha|\Sigma_k^N\nabla U(X_k^{i,N})|}\Big),
\]
where $|\Sigma_k^N\nabla U(X_k^{i,N})|$ is the Euclidean norm and $\xi_{k+1}^i$ is a standard normal $N$-dimensional random vector for all $i=1,\dots,N$ and $k=0,\dots,n-1$.

(ii) Coordinate-wise tamed Euler–Maruyama scheme:
\[
X_{k+1}^{i,N} = X_k^{i,N} - B_2(h,X_k^{i,N})\Sigma_k^N\nabla U(X_k^{i,N})\,h + \sqrt{\frac{2}{\beta_k N}}\,Q_k^N\xi_{k+1}^i\,h^{1/2}, \qquad k=0,\dots,n-1, \tag{4.4}
\]
with
\[
B_2(h,X_k^{i,N}) = \mathrm{Diag}\Big(\frac{1}{1+h^\alpha|(\Sigma_k^N\nabla U(X_k^{i,N}))_1|},\dots,\frac{1}{1+h^\alpha|(\Sigma_k^N\nabla U(X_k^{i,N}))_d|}\Big),
\]
where $(\Sigma_k^N\nabla U(X_k^{i,N}))_j$ denotes the $j$-th coordinate of $\Sigma_k^N\nabla U(X_k^{i,N})$.

In (4.3), due to particle-wise taming, we have
\[
\frac{1}{|B_1(h,X_k^{i,N})\Sigma_k^N\nabla U(X_k^{i,N})|} = \frac{1}{|\Sigma_k^N\nabla U(X_k^{i,N})|} + h^\alpha, \tag{4.5}
\]
and therefore the following bound holds:
\[
|B_1(h,X_k^{i,N})\Sigma_k^N\nabla U(X_k^{i,N})| \le \min\Big(\frac{1}{h^\alpha},\,|\Sigma_k^N\nabla U(X_k^{i,N})|\Big). \tag{4.6}
\]
In a similar manner, the following holds for coordinate-wise taming:
\[
|(B_2(h,X_k^{i,N})\Sigma_k^N\nabla U(X_k^{i,N}))_j| \le \min\Big(\frac{1}{h^\alpha},\,|(\Sigma_k^N\nabla U(X_k^{i,N}))_j|\Big), \qquad j=1,\dots,d. \tag{4.7}
\]

Remark 4.1 (Balancing technique). Writing the schemes with a preconditioning of the drift by a matrix $B$, one can imagine other possible choices of balancing matrices $B$. This particular idea of balancing appears in [MPS98] for stiff SDEs, and has been utilized in [TZ13] (see also [MT04]) to design balanced methods for non-globally Lipschitz coefficients. In this context, taming can be considered as one particular choice of balancing matrix.

In this section, we aim to prove convergence of the tamed numerical scheme (4.3) to its continuous limit as the discretization step $h\to 0$. The techniques employed to obtain this convergence can also be used to obtain the convergence of (4.4); here, however, we will only focus on (4.3). The main issue is to obtain convergence uniform in $N$, where $N$ represents the number of particles, given that the drift coefficient is non-globally Lipschitz. To this end, we let $\chi_h(t) = t_k$ for all $t_k\le t<t_{k+1}$ and write the continuous-time version of the tamed Euler scheme (4.3) as follows:
\[
dY^{i,N}(t) = F(Y^{i,N}(\chi_h(t)),\Sigma_Y^N(\chi_h(t)))\,dt + \sqrt{\frac{2}{\beta N}}\,Q_Y^N(\chi_h(t))\,dB^i(t), \tag{4.8}
\]
where, for brevity, we have denoted
\[
F(Y^{i,N}(\chi_h(t)),\Sigma_Y^N(\chi_h(t))) = -\frac{\Sigma_Y^N(\chi_h(t))\nabla U(Y^{i,N}(\chi_h(t)))}{1+h^\alpha|\Sigma_Y^N(\chi_h(t))\nabla U(Y^{i,N}(\chi_h(t)))|} \tag{4.9}
\]
with $\alpha\in(0,1/2)$. Also, we have denoted
\[
\Sigma_Y^N(\chi_h(t)) := \frac{1}{N}Q_Y^N(\chi_h(t))Q_Y^N(\chi_h(t))^\top, \tag{4.10}
\]
where
\[
Q_Y^N(t) = [Y^{1,N}(\chi_h(t))-M_Y^N(\chi_h(t)),\dots,Y^{N,N}(\chi_h(t))-M_Y^N(\chi_h(t))]
\]
with $M_Y^N(\chi_h(t))$ being the ensemble mean given by
\[
M_Y^N(\chi_h(t)) := \frac{1}{N}\sum_{j=1}^N Y^{j,N}(\chi_h(t)). \tag{4.11}
\]
In this section, we will need to strengthen Assumption 3.1 as follows:

Assumption 4.1. Let Assumption 3.1 hold. Moreover, let $U\in C^4(\mathbb{R}^d)$ and assume that its third- and fourth-order partial derivatives are uniformly bounded in $x\in\mathbb{R}^d$.
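A minimal sketch of one step of the tamed schemes (4.3) and (4.4) is given below; a flag switches between the particle-wise factor $B_1$ and the coordinate-wise factor $B_2$, and the drift bound (4.6) can be checked directly. The test potential is a hypothetical example, not from the paper.

```python
import numpy as np

def tamed_step(X, grad_U, beta, h, alpha, rng, coordinate_wise=False):
    """One step of the tamed Euler-Maruyama schemes (4.3)/(4.4).
    Taming caps the drift magnitude at h^{-alpha}, preventing moment blow-up."""
    N = X.shape[1]
    M = X.mean(axis=1, keepdims=True)
    Q = X - M                                  # deviation matrix Q_k^N
    Sigma = Q @ Q.T / N                        # ensemble covariance (4.2)
    G = Sigma @ grad_U(X)                      # Sigma_k^N grad U(X_k^{i,N}), per column
    if coordinate_wise:
        tamer = 1.0 / (1.0 + h**alpha * np.abs(G))                  # B_2: entry-wise
    else:
        tamer = 1.0 / (1.0 + h**alpha * np.linalg.norm(G, axis=0))  # B_1: per particle
    xi = rng.standard_normal((N, N))           # xi_{k+1}^i, one column per particle
    return X - tamer * G * h + np.sqrt(2.0 / (beta * N)) * (Q @ xi) * np.sqrt(h)
```

Because the tamed drift magnitude never exceeds $h^{-\alpha}$, a single badly conditioned ensemble covariance cannot produce an exploding step, which is the mechanism behind the uniform-in-$h$ and uniform-in-$N$ moment bounds of Lemma 4.2 below.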
The main result of this section is the following theorem.

Theorem 4.1. Let Assumption 4.1 hold. Let $\sup_{1\le i\le N}\mathbb{E}|X^{i,N}(0)|^2<\infty$, $\sup_{1\le i\le N}\mathbb{E}|Y^{i,N}(0)|^2<\infty$, and $|X^{i,N}(0)-Y^{i,N}(0)|\le Ch^{1/2}$ with $C>0$ independent of $N$ and $h$. Then, for all $t\in[0,T]$,
\[
\lim_{h\to 0}\ \sup_{i=1,\dots,N}\big(\mathbb{E}|X^{i,N}(t)-Y^{i,N}(t)|^2\big)^{1/2} = 0. \tag{4.12}
\]
The first step towards establishing the convergence result in the above theorem is to obtain moment bounds that are independent of $N$ and $h$.

Lemma 4.2. Let Assumption 4.1 be satisfied. Let $p\ge 1$ be a constant. Let $\sup_{1\le i\le N}\mathbb{E}|Y^{i,N}(0)|^{2p}\le C$ with $C>0$ independent of $N$ and $h$. Then, the following holds:
\[
\sup_{1\le i\le N}\mathbb{E}|Y^{i,N}(t)|^{2p} \le C, \tag{4.13}
\]
where $C>0$ is independent of $h$ as well as $N$.

In the proof below, we use GrΓΆnwall-type arguments to obtain
\[
\sup_{1\le i\le N}\mathbb{E}\sup_{t\in[0,T]} U^p(Y^{i,N}(t)) \le C, \tag{4.14}
\]
which, thanks to Assumption 4.1, provides the required moment bound (4.13).

Proof. Applying ItΓ΄'s formula to $U^p(Y^{i,N}(t))$, we get
\[
\begin{aligned}
dU^p(Y^{i,N}(t)) ={}& pU^{p-1}(Y^{i,N}(t))\langle\nabla U(Y^{i,N}(t))\cdot F(Y^{i,N}(\chi_h(t)),\Sigma_Y^N(\chi_h(t)))\rangle\,dt \\
&+ p(p-1)\frac{1}{\beta}U^{p-2}(Y^{i,N}(t))\,\mathrm{trace}\big(\nabla U(Y^{i,N}(t))\nabla U^\top(Y^{i,N}(t))\Sigma_Y^N(\chi_h(t))\big)\,dt \\
&+ p\frac{1}{\beta}U^{p-1}(Y^{i,N}(t))\,\mathrm{trace}\big(\nabla^2 U(Y^{i,N}(t))\Sigma_Y^N(\chi_h(t))\big)\,dt \\
&+ p\sqrt{\frac{2}{\beta N}}\,U^{p-1}(Y^{i,N}(t))\langle\nabla U(Y^{i,N}(t))\cdot Q_Y^N(\chi_h(t))\,dB^i(t)\rangle. \tag{4.15}
\end{aligned}
\]
For the sake of convenience, we denote
\[
G(Y^{i,N}(t)) := \nabla U(Y^{i,N}(t)). \tag{4.16}
\]
Applying ItΓ΄'s formula to the $j$-th component of $G$, we arrive at
\[
\begin{aligned}
G_j(Y^{i,N}(t)) ={}& G_j(Y^{i,N}(\chi_h(t))) + \int_{\chi_h(t)}^t \langle\nabla G_j(Y^{i,N}(s))\cdot F(Y^{i,N}(\chi_h(s)),\Sigma_Y^N(\chi_h(s)))\rangle\,ds \\
&+ \int_{\chi_h(t)}^t \frac{1}{\beta}\,\mathrm{trace}\big(\nabla^2 G_j(Y^{i,N}(s))\Sigma_Y^N(\chi_h(s))\big)\,ds \\
&+ \int_{\chi_h(t)}^t \sqrt{\frac{2}{\beta N}}\big\langle\nabla G_j(Y^{i,N}(s))\cdot Q_Y^N(\chi_h(s))\,dB^i(s)\big\rangle. \tag{4.17}
\end{aligned}
\]
We will estimate bounds on the terms in (4.15) one by one.
Let us start with the first term in (4.15):
\[
\begin{aligned}
A_1 :={}& \langle\nabla U(Y^{i,N}(t))\cdot F(Y^{i,N}(\chi_h(t)),\Sigma_Y^N(\chi_h(t)))\rangle \\
={}& \big\langle\big(\nabla U(Y^{i,N}(t))-\nabla U(Y^{i,N}(\chi_h(t)))\big)\cdot F(Y^{i,N}(\chi_h(t)),\Sigma_Y^N(\chi_h(t)))\big\rangle \\
&+ \big\langle\nabla U(Y^{i,N}(\chi_h(t)))\cdot F(Y^{i,N}(\chi_h(t)),\Sigma_Y^N(\chi_h(t)))\big\rangle. \tag{4.18}
\end{aligned}
\]
Note that
\[
\big\langle\nabla U(Y^{i,N}(\chi_h(t)))\cdot F(Y^{i,N}(\chi_h(t)),\Sigma_Y^N(\chi_h(t)))\big\rangle \le 0 \tag{4.19}
\]
due to the fact that $\Sigma_Y^N(\chi_h(t))$ is an ensemble covariance and hence a positive semi-definite matrix (see (4.9)). We now utilize (4.17) to get
\[
\begin{aligned}
A_1 \le{}& \Big|\sum_{j=1}^d F_j\Big(\int_{\chi_h(t)}^t \langle\nabla G_j(Y^{i,N}(s))\cdot F(Y^{i,N}(\chi_h(s)),\Sigma_Y^N(\chi_h(s)))\rangle\,ds\Big)\Big| \\
&+ \Big|\sum_{j=1}^d F_j\int_{\chi_h(t)}^t \frac{1}{\beta}\,\mathrm{trace}\big(\nabla^2 G_j(Y^{i,N}(s))\Sigma_Y^N(\chi_h(s))\big)\,ds\Big| \\
&+ \Big|\sum_{j=1}^d F_j\int_{\chi_h(t)}^t \sqrt{\frac{2}{\beta N}}\big\langle\nabla G_j(Y^{i,N}(s))\cdot Q_Y^N(\chi_h(s))\,dB^i(s)\big\rangle\Big|, \tag{4.20}
\end{aligned}
\]
where $F_j$ denotes the $j$-th component of $F(Y^{i,N}(\chi_h(t)),\Sigma_Y^N(\chi_h(t)))$.
Using (4.6) and the fact that the Frobenius norm of the Hessian of $U$ is bounded, we obtain
\[
\begin{aligned}
&\Big|\sum_{j=1}^d F_j\Big(\int_{\chi_h(t)}^t \langle\nabla G_j(Y^{i,N}(s))\cdot F(Y^{i,N}(\chi_h(s)),\Sigma_Y^N(\chi_h(s)))\rangle\,ds\Big)\Big| \\
&\qquad\le \frac{1}{h^\alpha}\sum_{j=1}^d\Big|\int_{\chi_h(t)}^t \langle\nabla G_j(Y^{i,N}(s))\cdot F(Y^{i,N}(\chi_h(s)),\Sigma_Y^N(\chi_h(s)))\rangle\,ds\Big| \\
&\qquad\le \frac{1}{h^{2\alpha}}\int_{\chi_h(t)}^t \sum_{j=1}^d \big|\nabla G_j(Y^{i,N}(s))\big|\,ds \le Ch^{1-2\alpha}, \tag{4.21}
\end{aligned}
\]
where $C>0$ is independent of $h$ and $N$. For the second term on the right-hand side of (4.20), we have
\[
\Big|\sum_{j=1}^d F_j\int_{\chi_h(t)}^t \frac{1}{\beta}\,\mathrm{trace}\big(\nabla^2 G_j(Y^{i,N}(s))\Sigma_Y^N(\chi_h(s))\big)\,ds\Big| \le C\frac{1}{h^\alpha}\int_{\chi_h(t)}^t \sum_{j=1}^d \big|\nabla^2 G_j(Y^{i,N}(s))\big|\,\big|\Sigma_Y^N(\chi_h(s))\big|\,ds \le C\frac{1}{h^\alpha}\int_{\chi_h(t)}^t \big|\Sigma_Y^N(\chi_h(s))\big|\,ds \le Ch^{1-\alpha}\big|\Sigma_Y^N(\chi_h(t))\big|, \tag{4.22}
\]
since the third and fourth derivatives of $U$ are bounded. Note that
\[
\big|\Sigma_Y^N(t)\big| \le C\Big(1+\frac{1}{N}\sum_{i=1}^N |Y^{i,N}(t)|^2\Big) \le C\Big(1+\frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(t))\Big), \tag{4.23}
\]
since $U$ has at least quadratic growth outside a compact set $\mathcal{K}$ (see Assumption 3.1).
Consequently, we obtain
\[
\Big|\sum_{j=1}^d F_j\int_{\chi_h(t)}^t \frac{1}{\beta}\,\mathrm{trace}\big(\nabla^2 G_j(Y^{i,N}(s))\Sigma_Y^N(\chi_h(s))\big)\,ds\Big| \le Ch^{1-\alpha}\Big(1+\frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\chi_h(t)))\Big). \tag{4.24}
\]
In a similar manner, we have the following bound for the third term on the right-hand side of (4.20):
\[
\Big|\sum_{j=1}^d F_j\int_{\chi_h(t)}^t \sqrt{\frac{2}{\beta N}}\big\langle\nabla G_j(Y^{i,N}(s))\cdot Q_Y^N(\chi_h(s))\,dB^i(s)\big\rangle\Big| \le \frac{1}{h^\alpha}\sum_{j=1}^d\Big|\int_{\chi_h(t)}^t \sqrt{\frac{2}{\beta N}}\big\langle\nabla G_j(Y^{i,N}(s))\cdot Q_Y^N(\chi_h(s))\,dB^i(s)\big\rangle\Big|. \tag{4.25}
\]
Combining (4.21), (4.24) and (4.25) yields
\[
A_1 \le Ch^{1-2\alpha} + Ch^{1-\alpha}\Big(1+\frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\chi_h(t)))\Big) + \frac{1}{h^\alpha}\sum_{j=1}^d\Big|\int_{\chi_h(t)}^t \sqrt{\frac{2}{\beta N}}\big\langle\nabla G_j(Y^{i,N}(s))\cdot Q_Y^N(\chi_h(s))\,dB^i(s)\big\rangle\Big|. \tag{4.26}
\]
To estimate a bound on the second term of (4.15), we utilize the Cauchy–Bunyakovsky–Schwarz inequality to obtain
\[
\begin{aligned}
A_2 :={}& U^{p-2}(Y^{i,N}(t))\,\mathrm{trace}\big(\nabla U(Y^{i,N}(t))\nabla U^\top(Y^{i,N}(t))\Sigma_Y^N(\chi_h(t))\big) \\
\le{}& CU^{p-2}(Y^{i,N}(t))|\nabla U(Y^{i,N}(t))|^2\big|\Sigma_Y^N(\chi_h(t))\big| \\
\le{}& C\big(1+U^{p-1}(Y^{i,N}(t))\big)\Big(1+\frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\chi_h(t)))\Big), \tag{4.27}
\end{aligned}
\]
where we use Assumption 3.1 and (4.23) in the last step.
Analogously, for the third term of (4.15), an application of the Cauchy–Bunyakovsky–Schwarz inequality yields
\[
A_3 := U^{p-1}(Y^{i,N}(t))\,\mathrm{trace}\big(\nabla^2 U(Y^{i,N}(t))\Sigma_Y^N(\chi_h(t))\big) \le CU^{p-1}(Y^{i,N}(t))\Big(1+\frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\chi_h(t)))\Big), \tag{4.28}
\]
where we have utilized the fact that the norm of the Hessian of the potential function is bounded due to Assumption 3.1. Substituting (4.26), (4.27) and (4.28) in (4.15), we obtain
\[
\begin{aligned}
U^p(Y^{i,N}(t)) \le{}& U^p(Y^{i,N}(0)) + Ch^{1-2\alpha}\int_0^t U^{p-1}(Y^{i,N}(s))\,ds \\
&+ Ch^{1-\alpha}\int_0^t U^{p-1}(Y^{i,N}(s))\Big(1+\frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\chi_h(s)))\Big)\,ds \\
&+ p\int_0^t U^{p-1}(Y^{i,N}(s))\frac{1}{h^\alpha}\sum_{j=1}^d\Big|\int_{\chi_h(s)}^s \sqrt{\frac{2}{\beta N}}\big\langle\nabla G_j(Y^{i,N}(r))\cdot Q_Y^N(\chi_h(r))\,dB^i(r)\big\rangle\Big|\,ds \\
&+ C\int_0^t U^{p-1}(Y^{i,N}(s))\Big(1+\frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\chi_h(s)))\Big)\,ds \\
&+ p\Big|\int_0^t \sqrt{\frac{2}{\beta N}}\,U^{p-1}(Y^{i,N}(s))\langle\nabla U(Y^{i,N}(s))\cdot Q_Y^N(\chi_h(s))\,dB^i(s)\rangle\Big|. \tag{4.29}
\end{aligned}
\]
Taking the supremum over $[0,T]$ and then expectations on both sides, we ascertain
\[
\begin{aligned}
\mathbb{E}\sup_{t\in[0,T]} U^p(Y^{i,N}(t)) \le{}& U^p(Y^{i,N}(0)) + Ch^{1-2\alpha}\,\mathbb{E}\int_0^T U^{p-1}(Y^{i,N}(s))\,ds \\
&+ Ch^{1-\alpha}\,\mathbb{E}\int_0^T U^{p-1}(Y^{i,N}(s))\Big(1+\frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(s))\Big)\,ds \\
&+ p\,\mathbb{E}\int_0^T U^{p-1}(Y^{i,N}(s))\frac{1}{h^\alpha}\sum_{j=1}^d\Big|\int_{\chi_h(s)}^s \sqrt{\frac{2}{\beta N}}\big\langle\nabla G_j(Y^{i,N}(r))\cdot Q_Y^N(\chi_h(r))\,dB^i(r)\big\rangle\Big|\,ds \\
&+ \mathbb{E}\int_0^T U^{p-1}(Y^{i,N}(s))\Big(1+\frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(s))\Big)\,ds \\
&+ p\,\mathbb{E}\sup_{t\in[0,T]}\Big|\int_0^t \sqrt{\frac{2}{\beta N}}\,U^{p-1}(Y^{i,N}(s))\langle\nabla U(Y^{i,N}(s))\cdot Q_Y^N(\chi_h(s))\,dB^i(s)\rangle\Big|.
\end{aligned}
\]
(4.30)

We first estimate the following term appearing on the right-hand side of the above inequality:
\[
\begin{aligned}
D_1 :={}& \mathbb{E}\int_0^T U^{p-1}(Y^{i,N}(s))\frac{1}{h^\alpha}\sum_{j=1}^d\Big|\int_{\chi_h(s)}^s \sqrt{\frac{2}{\beta N}}\big\langle\nabla G_j(Y^{i,N}(r))\cdot Q_Y^N(\chi_h(r))\,dB^i(r)\big\rangle\Big|\,ds \\
\le{}& C\int_0^T \mathbb{E}\sup_{s\in[0,t]} U^p(Y^{i,N}(s))\,dt + C\frac{1}{h^{\alpha p}}\int_0^T \sum_{j=1}^d \mathbb{E}\Big|\int_{\chi_h(s)}^s \sqrt{\frac{2}{\beta N}}\big\langle\nabla G_j(Y^{i,N}(r))\cdot Q_Y^N(\chi_h(r))\,dB^i(r)\big\rangle\Big|^p\,ds,
\end{aligned}
\]
where we have used the generalized Young inequality. Applying the Burkholder–Davis–Gundy inequality, and using the boundedness of the norm of the Hessian of $U$ together with (4.23), we obtain
\[
\begin{aligned}
D_1 \le{}& C\int_0^T \mathbb{E}\sup_{s\in[0,t]} U^p(Y^{i,N}(s))\,dt + \frac{C}{h^{\alpha p}}\int_0^T \sum_{j=1}^d \mathbb{E}\Big(\int_{\chi_h(s)}^s |\nabla G_j(Y^{i,N}(r))|^2\big|\Sigma_Y^N(\chi_h(r))\big|\,dr\Big)^{p/2} ds \\
\le{}& C\int_0^T \mathbb{E}\sup_{s\in[0,t]} U^p(Y^{i,N}(s))\,dt + \frac{C}{h^{\alpha p}}\int_0^T \mathbb{E}\Big(\int_{\chi_h(s)}^s \Big(1+\frac{1}{N}\sum_{j=1}^N U(Y^{j,N}(\chi_h(r)))\Big)^2 dr\Big)^{p/2} ds \\
\le{}& C\int_0^T \mathbb{E}\sup_{s\in[0,t]} U^p(Y^{i,N}(s))\,dt + Ch^{p(1/2-\alpha)}\int_0^T \mathbb{E}\Big(1+\frac{1}{N}\sum_{j=1}^N U(Y^{j,N}(\chi_h(s)))\Big)^p ds \\
\le{}& C\int_0^T \mathbb{E}\sup_{s\in[0,t]} U^p(Y^{i,N}(s))\,dt + Ch^{p(1/2-\alpha)}\int_0^T \mathbb{E}\sup_{s\in[0,t]}\Big(1+\frac{1}{N}\sum_{j=1}^N U^p(Y^{j,N}(s))\Big)\,dt. \tag{4.31}
\end{aligned}
\]
The last term remaining to be dealt with is
\[
D_2 := \mathbb{E}\sup_{t\in[0,T]}\Big|\int_0^t \sqrt{\frac{2}{\beta N}}\,U^{p-1}(Y^{i,N}(s))\langle\nabla U(Y^{i,N}(s))\cdot Q_Y^N(\chi_h(s))\,dB^i(s)\rangle\Big|.
\]
Again using the Burkholder–Davis–Gundy inequality and (4.23), we get
\[
\begin{aligned}
D_2 \le{}& \mathbb{E}\Big(\int_0^T \frac{2}{\beta}\,U^{2p-2}(Y^{i,N}(s))|\nabla U(Y^{i,N}(s))|^2\big|\Sigma_Y^N(\chi_h(s))\big|\,ds\Big)^{1/2} \\
\le{}& \mathbb{E}\Bigg(\sup_{s\in[0,T]} U^{p-1}(Y^{i,N}(s))\Big(\int_0^T \frac{2}{\beta}\big(1+U(Y^{i,N}(s))\big)\Big(1+\frac{1}{N}\sum_{j=1}^N U(Y^{j,N}(\chi_h(s)))\Big)\,ds\Big)^{1/2}\Bigg) \\
\le{}& \frac{1}{2}\,\mathbb{E}\Big(\sup_{s\in[0,T]} U^p(Y^{i,N}(s))\Big) + C\,\mathbb{E}\Big(\int_0^T \frac{2}{\beta}\big(1+U(Y^{i,N}(s))\big)\Big(1+\frac{1}{N}\sum_{j=1}^N U(Y^{j,N}(\chi_h(s)))\Big)\,ds\Big)^{p/2},
\end{aligned}
\]
where we have used the generalized Young inequality. Using HΓΆlder's inequality, we have
\[
\begin{aligned}
D_2 \le{}& \frac{1}{2}\,\mathbb{E}\Big(\sup_{s\in[0,T]} U^p(Y^{i,N}(s))\Big) + C\,\mathbb{E}\Big(\int_0^T \big(1+U^{p/2}(Y^{i,N}(s))\big)\Big(1+\frac{1}{N}\sum_{j=1}^N U(Y^{j,N}(\chi_h(s)))\Big)^{p/2} ds\Big) \\
\le{}& \frac{1}{2}\,\mathbb{E}\Big(\sup_{t\in[0,T]} U^p(Y^{i,N}(t))\Big) + C\int_0^T \Big(1+\frac{1}{N}\sum_{j=1}^N \mathbb{E}\sup_{s\in[0,t]} U^p(Y^{j,N}(s))\Big)\,dt. \tag{4.32}
\end{aligned}
\]
Plugging the estimates obtained in (4.31) and (4.32) into (4.30), we get
\[
\mathbb{E}\sup_{t\in[0,T]} U^p(Y^{i,N}(t)) \le 2U^p(Y^{i,N}(0)) + C\int_0^T \Big(1+\frac{1}{N}\sum_{j=1}^N \mathbb{E}\sup_{s\in[0,t]} U^p(Y^{j,N}(s))\Big)\,dt + C\int_0^T \mathbb{E}\sup_{s\in[0,t]} U^p(Y^{i,N}(s))\,dt,
\]
since $h^{1-2\alpha}\le 1$ due to our choice of $\alpha\in(0,1/2)$. We mention again, for the convenience of the reader, that $C$ is a positive constant independent of $h$ and $N$. Taking the supremum over $i\in\{1,\dots,N\}$ gives
\[
\sup_{i=1,\dots,N}\mathbb{E}\sup_{t\in[0,T]} U^p(Y^{i,N}(t)) \le 2U^p(Y^{i,N}(0)) + C\int_0^T \sup_{i=1,\dots,N}\mathbb{E}\sup_{s\in[0,t]} U^p(Y^{i,N}(s))\,dt,
\]
which, on applying GrΓΆnwall's lemma, provides the desired result.

Lemma 4.3. Let Assumption 3.1 be satisfied. Let $\sup_{1\le i\le N}\mathbb{E}|X^{i,N}(0)|^2<\infty$ and $\sup_{1\le i\le N}\mathbb{E}|Y^{i,N}(0)|^2<\infty$. Then the following bound holds:
\[
\sup_{1\le i\le N}\mathbb{E}|Y^{i,N}(t)-Y^{i,N}(\chi_h(t))|^2 \le Ch, \tag{4.33}
\]
where $C>0$ is independent of $N$ and $h$.
Proof. With an application of the triangle inequality and HΓΆlder's inequality, we have
\[
\begin{aligned}
|Y^{i,N}(t)-Y^{i,N}(\chi_h(t))|^2 \le{}& C\Big(\Big|\int_{\chi_h(t)}^t F(Y^{i,N}(\chi_h(s)),\Sigma_Y^N(\chi_h(s)))\,ds\Big|^2 + \Big|\int_{\chi_h(t)}^t \sqrt{\frac{2}{\beta N}}\,Q_Y^N(\chi_h(s))\,dB^i(s)\Big|^2\Big) \\
\le{}& C\int_{\chi_h(t)}^t |1|^2\,ds \int_{\chi_h(t)}^t |F(Y^{i,N}(\chi_h(s)),\Sigma_Y^N(\chi_h(s)))|^2\,ds + \Big|\int_{\chi_h(t)}^t \sqrt{\frac{2}{\beta N}}\,Q_Y^N(\chi_h(s))\,dB^i(s)\Big|^2 \\
\le{}& Ch\int_{\chi_h(t)}^t |F(Y^{i,N}(\chi_h(s)),\Sigma_Y^N(\chi_h(s)))|^2\,ds + \Big|\int_{\chi_h(t)}^t \sqrt{\frac{2}{\beta N}}\,Q_Y^N(\chi_h(s))\,dB^i(s)\Big|^2,
\end{aligned}
\]
which, on using (4.6), gives
\[
|Y^{i,N}(t)-Y^{i,N}(\chi_h(t))|^2 \le Ch^{2(1-\alpha)} + C\Big|\int_{\chi_h(t)}^t \sqrt{\frac{2}{\beta N}}\,Q_Y^N(\chi_h(s))\,dB^i(s)\Big|^2. \tag{4.34}
\]
Taking expectations on both sides and applying ItΓ΄'s isometry, we achieve the following bound:
\[
\mathbb{E}|Y^{i,N}(t)-Y^{i,N}(\chi_h(t))|^2 \le Ch^{2(1-\alpha)} + C\,\mathbb{E}\int_{\chi_h(t)}^t \frac{1}{N}\big|Q_Y^N(\chi_h(s))\big(Q_Y^N(\chi_h(s))\big)^\top\big|\,ds \le Ch + C\,\mathbb{E}\int_{\chi_h(t)}^t \big|\Sigma_Y^N(\chi_h(s))\big|\,ds \le Ch, \tag{4.35}
\]
where $C>0$ is independent of $N$ and $h$. Hence, the lemma is proved.

To obtain our main result, we employ a localization strategy based on stopping times. This is also employed in the interacting-particle setting in [KST23] (for the propagation of chaos proof, as well as for the proof of the Euler scheme) and [Vae24] (for the propagation of chaos proof). More specifically, let
\[
\tau_R^1 = \inf\Big\{t\ge 0\ ;\ \frac{1}{N}\sum_{i=1}^N |X^{i,N}(t)|^2 \ge R\Big\}, \tag{4.36}
\]
\[
\tau_R^2 = \inf\Big\{t\ge 0\ ;\ \frac{1}{N}\sum_{i=1}^N |Y^{i,N}(t)|^2 \ge R\Big\}, \tag{4.37}
\]
and $\tau_R = \tau_R^1\wedge\tau_R^2$.

Lemma 4.4. Let Assumption 3.1 hold. Then the following inequality is satisfied:
\[
\begin{aligned}
\mathbb{E}\int_0^{t\wedge\tau_R} &\big|\Sigma(\mathcal{E}^N(s))\nabla U(X^{i,N}(s)) + F(Y^{i,N}(\chi_h(s)))\big|^2\,ds \le Ch^{2\alpha} \\
&+ C(1+R)^2\,\mathbb{E}\int_0^t \Big(|X^{i,N}(s\wedge\tau_R)-Y^{i,N}(s\wedge\tau_R)|^2 + |Y^{i,N}(s\wedge\tau_R)-Y^{i,N}(\chi_h(s\wedge\tau_R))|^2 \\
&\qquad + \frac{1}{N}\sum_{j=1}^N |Y^{j,N}(s\wedge\tau_R)-Y^{j,N}(\chi_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N |X^{j,N}(s\wedge\tau_R)-Y^{j,N}(s\wedge\tau_R)|^2\Big)\,ds,
\end{aligned}
\]
where $X^{i,N}(s)$ is from (3.5), $Y^{i,N}(s)$ is from (4.8), and $C>0$ is independent of $h$, $N$ and $R$.

Proof. By repeated use of the triangle inequality, we can split the integrand as
\[
\begin{aligned}
\big|\Sigma^N(s)\nabla U(X^{i,N}(s)) + F(Y^{i,N}(\chi_h(s)))\big|^2 \le{}& \big|\Sigma^N(s)\nabla U(X^{i,N}(s)) - \Sigma_Y^N(s)\nabla U(Y^{i,N}(s))\big|^2 \\
&+ \big|\Sigma_Y^N(s)\nabla U(Y^{i,N}(s)) - \Sigma_Y^N(\chi_h(s))\nabla U(Y^{i,N}(\chi_h(s)))\big|^2 \\
&+ \big|\Sigma_Y^N(\chi_h(s))\nabla U(Y^{i,N}(\chi_h(s))) + F(Y^{i,N}(\chi_h(s)))\big|^2 \tag{4.38} \\
=:{}& \mathrm{(I)} + \mathrm{(II)} + \mathrm{(III)}. \tag{4.39}
\end{aligned}
\]
We will analyze the three terms separately, starting with (I). To this end, we first split the term (I) further as follows:
\[
\begin{aligned}
\mathrm{(I)} :={}& \big|\Sigma^N(s)\nabla U(X^{i,N}(s)) - \Sigma_Y^N(s)\nabla U(Y^{i,N}(s))\big|^2 \\
\le{}& \big|\Sigma^N(s)\big(\nabla U(X^{i,N}(s)) - \nabla U(Y^{i,N}(s))\big)\big|^2 + \big|\big(\Sigma^N(s)-\Sigma_Y^N(s)\big)\nabla U(Y^{i,N}(s))\big|^2 \\
=:{}& B_1 + B_2. \tag{4.40}
\end{aligned}
\]
We have the following bound for $B_1$, due to (3.8) and the fact that $\nabla U$ is Lipschitz:
\[
B_1 := \big|\Sigma^N(s)\big(\nabla U(X^{i,N}(s))-\nabla U(Y^{i,N}(s))\big)\big|^2 \le \big|\Sigma^N(s)\big|^2\big|\nabla U(X^{i,N}(s))-\nabla U(Y^{i,N}(s))\big|^2 \le C\Big[1+\frac{1}{N}\sum_{j=1}^N |X^{j,N}(s)|^2\Big]^2 |X^{i,N}(s)-Y^{i,N}(s)|^2, \tag{4.41}
\]
where $C>0$ is a positive constant independent of $N$ and $h$.
Next, to bound $B_2$, note that
$$\big|Q^N(s)-Q_Y^N(s)\big| = \Big(\sum_{i=1}^N\Big|X_{i,N}(s)-Y_{i,N}(s)-\frac{1}{N}\sum_{j=1}^N\big(X_{j,N}(s)-Y_{j,N}(s)\big)\Big|^2\Big)^{1/2} \le \sqrt{2}\Big(\sum_{i=1}^N|X_{i,N}(s)-Y_{i,N}(s)|^2 + N\Big|\frac{1}{N}\sum_{j=1}^N\big(X_{j,N}(s)-Y_{j,N}(s)\big)\Big|^2\Big)^{1/2}.$$
This implies
$$\frac{1}{\sqrt N}\big|Q^N(s)-Q_Y^N(s)\big| \le 2\Big(\frac{1}{N}\sum_{i=1}^N|X_{i,N}(s)-Y_{i,N}(s)|^2\Big)^{1/2}. \quad (4.42)$$
In a similar manner, we have
$$\frac{1}{\sqrt N}|Q^N(s)| \le 2\Big(\frac{1}{N}\sum_{i=1}^N|X_{i,N}(s)|^2\Big)^{1/2} \quad\text{and}\quad \frac{1}{\sqrt N}|Q_Y^N(s)| \le 2\Big(\frac{1}{N}\sum_{i=1}^N|Y_{i,N}(s)|^2\Big)^{1/2}. \quad (4.43)$$
Using (4.42) and (4.43), we obtain
$$\big|\Sigma^N(s)-\Sigma_Y^N(s)\big| = \Big|\frac{1}{N}Q^N(s)\big(Q^N(s)\big)^\top - \frac{1}{N}Q_Y^N(s)\big(Q_Y^N(s)\big)^\top\Big| \le \Big|\frac{1}{N}\big(Q^N(s)-Q_Y^N(s)\big)\big(Q^N(s)\big)^\top\Big| + \Big|\frac{1}{N}Q_Y^N(s)\big(\big(Q^N(s)\big)^\top-\big(Q_Y^N(s)\big)^\top\big)\Big| \le C\Big(\Big(\frac{1}{N}\sum_{j=1}^N|X_{j,N}(s)|^2\Big)^{1/2} + \Big(\frac{1}{N}\sum_{j=1}^N|Y_{j,N}(s)|^2\Big)^{1/2}\Big)\times\Big(\frac{1}{N}\sum_{j=1}^N|X_{j,N}(s)-Y_{j,N}(s)|^2\Big)^{1/2},$$
where $C>0$ is a positive constant independent of $N$ and $h$. Using the above calculation, we arrive at the following bound for $B_2$ in (4.40):
$$B_2 := \big|\big(\Sigma^N(s)-\Sigma_Y^N(s)\big)\nabla U(Y_{i,N}(s))\big|^2 \le C\big|\nabla U(Y_{i,N}(s))\big|^2\Big(\frac{1}{N}\sum_{j=1}^N|X_{j,N}(s)|^2 + \frac{1}{N}\sum_{j=1}^N|Y_{j,N}(s)|^2\Big)\times\frac{1}{N}\sum_{j=1}^N|X_{j,N}(s)-Y_{j,N}(s)|^2.$$
(4.44) Due to the exchangeability of the discretized particle system, we have for all $i=1,\dots,N$
$$\mathbb{E}\Big(\big|\nabla U(Y_{i,N}(s\wedge\tau_R))\big|^2\Big(\frac{1}{N}\sum_{j=1}^N|X_{j,N}(s\wedge\tau_R)|^2 + \frac{1}{N}\sum_{j=1}^N|Y_{j,N}(s\wedge\tau_R)|^2\Big)\times\frac{1}{N}\sum_{j=1}^N|X_{j,N}(s\wedge\tau_R)-Y_{j,N}(s\wedge\tau_R)|^2\Big) = \mathbb{E}\Big(\frac{1}{N}\sum_{i=1}^N\big|\nabla U(Y_{i,N}(s\wedge\tau_R))\big|^2\Big(\frac{1}{N}\sum_{j=1}^N|X_{j,N}(s\wedge\tau_R)|^2 + \frac{1}{N}\sum_{j=1}^N|Y_{j,N}(s\wedge\tau_R)|^2\Big)\times\frac{1}{N}\sum_{j=1}^N|X_{j,N}(s\wedge\tau_R)-Y_{j,N}(s\wedge\tau_R)|^2\Big).$$
Using (4.41) and (4.44) yields
$$\mathbb{E}\int_0^{t\wedge\tau_R}(I)\,ds \le C(1+R)^2\int_0^t \mathbb{E}\Big(|X_{i,N}(s\wedge\tau_R)-Y_{i,N}(s\wedge\tau_R)|^2 + \frac{1}{N}\sum_{j=1}^N|X_{j,N}(s\wedge\tau_R)-Y_{j,N}(s\wedge\tau_R)|^2\Big)\,ds. \quad (4.45)$$
In a similar manner, we have that
$$(II) := \big|\Sigma_Y^N(s)\nabla U(Y_{i,N}(s))-\Sigma_Y^N(\chi_h(s))\nabla U(Y_{i,N}(\chi_h(s)))\big|^2 \le C\Big(\big|\Sigma_Y^N(s)\big(\nabla U(Y_{i,N}(s))-\nabla U(Y_{i,N}(\chi_h(s)))\big)\big|^2 + \big|\big(\Sigma_Y^N(s)-\Sigma_Y^N(\chi_h(s))\big)\nabla U(Y_{i,N}(\chi_h(s)))\big|^2\Big) =: C(E_1+E_2). \quad (4.46)$$
Applying the analogous arguments used in obtaining the bounds for $B_1$ and $B_2$, we obtain
$$E_1 := \big|\Sigma_Y^N(s)\big(\nabla U(Y_{i,N}(s))-\nabla U(Y_{i,N}(\chi_h(s)))\big)\big|^2 \le C\Big(1+\frac{1}{N}\sum_{j=1}^N|Y_{j,N}(s)|^2\Big)^2\big|Y_{i,N}(s)-Y_{i,N}(\chi_h(s))\big|^2 \quad (4.47)$$
and
$$E_2 := \big|\big(\Sigma_Y^N(s)-\Sigma_Y^N(\chi_h(s))\big)\nabla U(Y_{i,N}(\chi_h(s)))\big|^2 \le C\big(1+|Y_{i,N}(\chi_h(s))|^2\big)\Big(1+\frac{1}{N}\sum_{j=1}^N|Y_{j,N}(s)|^2+\frac{1}{N}\sum_{j=1}^N|Y_{j,N}(\chi_h(s))|^2\Big)\times\frac{1}{N}\sum_{j=1}^N\big|Y_{j,N}(s)-Y_{j,N}(\chi_h(s))\big|^2, \quad (4.48)$$
where $C>0$ is independent of $N$ and $h$.
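The centering inequality (4.42) is elementary and easy to sanity-check numerically. The snippet below is an illustration with toy data; the helper `centred` is our own name for the map from an ensemble to its deviation matrix $Q$.

```python
import numpy as np

def centred(M):
    # Q: matrix of deviations from the ensemble mean (columns = particles)
    return (M - M.mean(axis=0)).T

rng = np.random.default_rng(0)
N, d = 200, 3
X = rng.standard_normal((N, d))
Y = X + 0.1 * rng.standard_normal((N, d))

# (1/sqrt(N)) |Q(X) - Q(Y)|_F  <=  2 * ((1/N) sum_i |X_i - Y_i|^2)^{1/2}
lhs = np.linalg.norm(centred(X) - centred(Y)) / np.sqrt(N)
rhs = 2.0 * np.sqrt(np.mean(np.sum((X - Y) ** 2, axis=1)))
print(lhs <= rhs)  # prints True
```

In fact the left-hand side is bounded by the right-hand side even without the factor 2, since centering a collection of vectors can only decrease its Frobenius norm.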
The bounds obtained in (4.47) and (4.48) imply that the following holds:
$$\mathbb{E}\int_0^{t\wedge\tau_R}(II)\,ds \le C(1+R)^2\,\mathbb{E}\int_0^t\Big(\big|Y_{i,N}(s\wedge\tau_R)-Y_{i,N}(\chi_h(s\wedge\tau_R))\big|^2 + \frac{1}{N}\sum_{j=1}^N\big|Y_{j,N}(s\wedge\tau_R)-Y_{j,N}(\chi_h(s\wedge\tau_R))\big|^2\Big)\,ds, \quad (4.49)$$
where we have used Young's inequality in the last step.

We shift our attention to the remaining term, i.e., to term (III) in (4.38), which is defined as $(III) := \big|\Sigma_Y^N(\chi_h(s))\nabla U(Y_{i,N}(\chi_h(s)))+F(Y_{i,N}(\chi_h(s)))\big|^2$. Note that
$$\big|\Sigma_Y^N(\chi_h(s))\nabla U(Y_{i,N}(\chi_h(s)))+F(Y_{i,N}(\chi_h(s)))\big|^2 = \Big|\Sigma_Y^N(\chi_h(s))\nabla U(Y_{i,N}(\chi_h(s))) - \frac{\Sigma_Y^N(\chi_h(s))\nabla U(Y_{i,N}(\chi_h(s)))}{1+h^{\alpha}\big|\Sigma_Y^N(\chi_h(s))\nabla U(Y_{i,N}(\chi_h(s)))\big|}\Big|^2 \le \frac{h^{2\alpha}\big|\Sigma_Y^N(\chi_h(s))\nabla U(Y_{i,N}(\chi_h(s)))\big|^4}{\big(1+h^{\alpha}\big|\Sigma_Y^N(\chi_h(s))\nabla U(Y_{i,N}(\chi_h(s)))\big|\big)^2}.$$
This results in
$$\mathbb{E}\int_0^{t\wedge\tau_R}(III)\,ds \le Ch^{2\alpha}\int_0^t \mathbb{E}\,\frac{\big|\Sigma_Y^N(\chi_h(s))\nabla U(Y_{i,N}(\chi_h(s)))\big|^4}{\big(1+h^{\alpha}\big|\Sigma_Y^N(\chi_h(s))\nabla U(Y_{i,N}(\chi_h(s)))\big|\big)^2}\,ds \le Ch^{2\alpha}, \quad (4.50)$$
where we have used the moment bounds from Lemma 4.2. Combining the estimates obtained in (4.45), (4.49) and (4.50), we ascertain
$$\mathbb{E}\int_0^{t\wedge\tau_R}\big|\Sigma^N(s)\nabla U(X_{i,N}(s))+F(Y_{i,N}(\chi_h(s)))\big|^2\,ds \le Ch^{2\alpha} + C(1+R)^2\,\mathbb{E}\int_0^t\Big(|X_{i,N}(s\wedge\tau_R)-Y_{i,N}(s\wedge\tau_R)|^2 + |Y_{i,N}(s\wedge\tau_R)-Y_{i,N}(\chi_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N|Y_{j,N}(s\wedge\tau_R)-Y_{j,N}(\chi_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N|X_{j,N}(s\wedge\tau_R)-Y_{j,N}(s\wedge\tau_R)|^2\Big)\,ds.$$
This completes the proof.

4.1 Proof of Theorem 4.1

Proof. We split the sample space as follows:
$$\mathbb{E}|X_{i,N}(t)-Y_{i,N}(t)|^2 = \mathbb{E}\big(|X_{i,N}(t)-Y_{i,N}(t)|^2 I_{\Omega_R}\big) + \mathbb{E}\big(|X_{i,N}(t)-Y_{i,N}(t)|^2 I_{\Lambda_R}\big). \quad (4.51)$$
As is clear, we have the following bound:
$$\mathbb{E}\big(|X_{i,N}(t)-Y_{i,N}(t)|^2 I_{\Omega_R}\big) \le \mathbb{E}|X_{i,N}(t\wedge\tau_R)-Y_{i,N}(t\wedge\tau_R)|^2.$$
Using Itô's formula,
we have
$$|X_{i,N}(t\wedge\tau_R)-Y_{i,N}(t\wedge\tau_R)|^2 = |X_{i,N}(0)-Y_{i,N}(0)|^2 - 2\int_0^{t\wedge\tau_R}\big\langle X_{i,N}(s)-Y_{i,N}(s),\,\Sigma(E^N(s))\nabla U(X_{i,N}(s))+F(Y_{i,N}(\chi_h(s)))\big\rangle\,ds + \int_0^{t\wedge\tau_R}\frac{2}{\beta N}\,\mathrm{trace}\Big(\big[Q^N(s)-Q_Y^N(\chi_h(s))\big]\big[Q^N(s)-Q_Y^N(\chi_h(s))\big]^\top\Big)\,ds + 2\int_0^{t\wedge\tau_R}\sqrt{\frac{2}{\beta N}}\,\big\langle X_{i,N}(s)-Y_{i,N}(s),\,\big(Q^N(s)-Q_Y^N(\chi_h(s))\big)\,dW_i(s)\big\rangle. \quad (4.52)$$
Due to Doob's optional stopping theorem, we have
$$\mathbb{E}\Big(\int_0^{t\wedge\tau_R}\sqrt{\frac{2}{\beta N}}\,\big\langle X_{i,N}(s)-Y_{i,N}(s),\,\big(Q^N(s)-Q_Y^N(\chi_h(s))\big)\,dW_i(s)\big\rangle\Big) = 0. \quad (4.53)$$
Note that Doob's theorem can be applied because $t\wedge\tau_R$ is a bounded stopping time and the moments are bounded (see Lemma 4.2). We have already dealt with the expectation of the second term on the right-hand side of (4.52) in Lemma 4.4. For the third term on the right-hand side of (4.52), we utilize (4.42) and (4.43) to arrive at the following estimate:
$$\mathbb{E}\Big(\int_0^{t\wedge\tau_R}\frac{2}{\beta N}\,\mathrm{trace}\Big(\big[Q^N(s)-Q_Y^N(\chi_h(s))\big]\big[Q^N(s)-Q_Y^N(\chi_h(s))\big]^\top\Big)\,ds\Big) \le C\,\mathbb{E}\int_0^{t\wedge\tau_R}\frac{1}{N}\sum_{j=1}^N\big|X_{j,N}(s)-Y_{j,N}(\chi_h(s))\big|^2\,ds \le C\,\mathbb{E}\int_0^{t\wedge\tau_R}\frac{1}{N}\sum_{j=1}^N|X_{j,N}(s)-Y_{j,N}(s)|^2\,ds + C\,\mathbb{E}\int_0^{t\wedge\tau_R}\frac{1}{N}\sum_{j=1}^N\big|Y_{j,N}(s)-Y_{j,N}(\chi_h(s))\big|^2\,ds \le C\,\mathbb{E}\int_0^{t}\frac{1}{N}\sum_{j=1}^N|X_{j,N}(s\wedge\tau_R)-Y_{j,N}(s\wedge\tau_R)|^2\,ds + C\,\mathbb{E}\int_0^{t}\frac{1}{N}\sum_{j=1}^N\big|Y_{j,N}(s\wedge\tau_R)-Y_{j,N}(\chi_h(s\wedge\tau_R))\big|^2\,ds, \quad (4.54)$$
where $C>0$ is independent of $h$ and $N$.
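The estimate (4.50) for term (III) above rests on an exact algebraic identity for the taming residual, $\big|a - a/(1+h^{\alpha}|a|)\big|^2 = h^{2\alpha}|a|^4/(1+h^{\alpha}|a|)^2$, which can be verified numerically; the values of $h$, $\alpha$ and the test vectors below are arbitrary choices for illustration.

```python
import numpy as np

# Numerical check of the taming identity behind (4.50):
# |a - a/(1 + h^alpha |a|)|^2 = h^(2 alpha) |a|^4 / (1 + h^alpha |a|)^2.
rng = np.random.default_rng(0)
h, alpha = 1e-3, 0.5
for _ in range(100):
    a = rng.standard_normal(4) * rng.uniform(0.1, 100.0)
    na = np.linalg.norm(a)
    lhs = np.sum((a - a / (1.0 + h**alpha * na)) ** 2)
    rhs = h ** (2 * alpha) * na**4 / (1.0 + h**alpha * na) ** 2
    assert np.isclose(lhs, rhs)
```

Since the right-hand side is at most $h^{2\alpha}|a|^4$, fourth-moment bounds on $\Sigma_Y^N\nabla U$ are exactly what is needed to integrate it.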
Combining the results of (4.53), (4.54) and Lemma 4.4, we get
$$\mathbb{E}|X_{i,N}(t\wedge\tau_R)-Y_{i,N}(t\wedge\tau_R)|^2 \le |X_{i,N}(0)-Y_{i,N}(0)|^2 + Ch^{2\alpha} + C(1+R)^2\,\mathbb{E}\int_0^t\Big(|X_{i,N}(s\wedge\tau_R)-Y_{i,N}(s\wedge\tau_R)|^2 + |Y_{i,N}(s\wedge\tau_R)-Y_{i,N}(\chi_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N|Y_{j,N}(s\wedge\tau_R)-Y_{j,N}(\chi_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N|X_{j,N}(s\wedge\tau_R)-Y_{j,N}(s\wedge\tau_R)|^2\Big)\,ds.$$
Using Lemma 4.3, we ascertain
$$\mathbb{E}|X_{i,N}(t\wedge\tau_R)-Y_{i,N}(t\wedge\tau_R)|^2 \le Ch^{2\alpha} + C(1+R)^2 h + C(1+R)^2\int_0^t\Big(\mathbb{E}|X_{i,N}(s\wedge\tau_R)-Y_{i,N}(s\wedge\tau_R)|^2 + \frac{1}{N}\sum_{j=1}^N\mathbb{E}|X_{j,N}(s\wedge\tau_R)-Y_{j,N}(s\wedge\tau_R)|^2\Big)\,ds,$$
which, on taking the supremum over $i=1,\dots,N$, gives
$$\sup_{i=1,\dots,N}\mathbb{E}|X_{i,N}(t\wedge\tau_R)-Y_{i,N}(t\wedge\tau_R)|^2 \le Ch^{2\alpha} + C(1+R)^2 h + C(1+R)^2\int_0^t\sup_{i=1,\dots,N}\mathbb{E}|X_{i,N}(s\wedge\tau_R)-Y_{i,N}(s\wedge\tau_R)|^2\,ds.$$
Applying Gronwall's lemma, we have
$$\mathbb{E}|X_{i,N}(t\wedge\tau_R)-Y_{i,N}(t\wedge\tau_R)|^2 \le C\big(h^{2\alpha}+(1+R)^2 h\big)e^{C(1+R)^2}.$$
This implies
$$\mathbb{E}\big(|X_{i,N}(t)-Y_{i,N}(t)|^2 I_{\Omega_R}\big) \le C\big(h^{2\alpha}+(1+R)^2 h\big)e^{C(1+R)^2},$$
where $C>0$ is independent of $h$ and $N$. Using Hölder's inequality, Lemma 4.2 and Markov's inequality, we have
$$\mathbb{E}\big(|X_{i,N}(t)-Y_{i,N}(t)|^2 I_{\Lambda_R}\big) \le C\big(\mathbb{E}|X_{i,N}(t)|^4+\mathbb{E}|Y_{i,N}(t)|^4\big)^{1/2}P(\Lambda_R) \le C\,\frac{\frac{1}{N}\sum_{i=1}^N\mathbb{E}|X_{i,N}(t)|^2}{R} \le \frac{C}{R},$$
where $C>0$ is independent of $h$, $N$ and $R$. With an appropriate choice of $R$ depending on $h$, we get the desired result.

Acknowledgments

This work was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation.

References

[ANO+09] S. I. Aanonsen, G. Nævdal, D. S. Oliver, A. C. Reynolds, and B. Vallès. The ensemble Kalman filter in reservoir engineering – a review. SPE Journal, 14(03):393–412, 2009.

[BBCG08] D. Bakry, F. Barthe, P. Cattiaux, and A. Guillin. A simple proof of the Poincaré inequality for a large class of probability measures. Electronic Communications in Probability, 2008.

[Bil13] P. Billingsley. Convergence of Probability Measures.
John Wiley & Sons, 2013.

[BSW18] D. Blömker, C. Schillings, and P. Wacker. A strongly convergent numerical scheme from ensemble Kalman inversion. SIAM Journal on Numerical Analysis, 56(4):2537–2562, 2018.

[BSWW19] D. Blömker, C. Schillings, P. Wacker, and S. Weissmann. Well posedness and convergence analysis of the ensemble Kalman inversion. Inverse Problems, 35(8):085007, 2019.

[CCTT18] J. A. Carrillo, Y.-P. Choi, C. Totzeck, and O. Tse. An analytical framework for consensus-based global optimization method. Mathematical Models and Methods in Applied Sciences, 28(06):1037–1066, 2018.

[CDR24] X. Chen and G. Dos Reis. Euler simulation of interacting particle systems and McKean–Vlasov SDEs with fully super-linear growth drifts in space and interaction. IMA Journal of Numerical Analysis, 44(2):751–796, 2024.

[CG22] P. Cattiaux and A. Guillin. Supplement to functional inequalities for perturbed measures with applications to log-concave measures and to some Bayesian problems. Bernoulli, 28(4):2294–2321, 2022.

[CHS87] T.-S. Chiang, C.-R. Hwang, and S. J. Sheu. Diffusion for global optimization in $\mathbb{R}^n$. SIAM Journal on Control and Optimization, 25(3):737–753, 1987.

[CST20] N. K. Chada, A. M.
Stuart, and X. T. Tong. Tikhonov regularization within ensemble Kalman inversion. SIAM Journal on Numerical Analysis, 58(2):1263–1294, 2020.

[CV21] J. A. Carrillo and U. Vaes. Wasserstein stability estimates for covariance-preconditioned Fokker–Planck equations. Nonlinearity, 34(4):2275, 2021.

[DL21a] Z. Ding and Q. Li. Ensemble Kalman inversion: mean-field limit and convergence analysis. Statistics and Computing, 31:1–21, 2021.

[DL21b] Z. Ding and Q. Li. Ensemble Kalman sampler: mean-field limit and convergence analysis. SIAM Journal on Mathematical Analysis, 53(2):1546–1578, 2021.

[DMH23] P. Del Moral and E. Horton. A theoretical analysis of one-dimensional discrete generation ensemble Kalman particle filters. The Annals of Applied Probability, 33(2):1327–1372, 2023.

[DMT18] P. Del Moral and J. Tugaut. On the stability and the uniform propagation of chaos properties of ensemble Kalman–Bucy filters. The Annals of Applied Probability, 28(2):790–850, 2018.

[Eve94] G. Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(C5):10143–10162, 1994.

[EVL96] G. Evensen and P. J. Van Leeuwen. Assimilation of Geosat altimeter data for the Agulhas current using the ensemble Kalman filter with a quasigeostrophic model. Monthly Weather Review, 124(1):85–96, 1996.

[GIHLS20] A. Garbuno-Inigo, F. Hoffmann, W. Li, and A. M. Stuart. Interacting Langevin diffusions: Gradient structure and ensemble Kalman sampler. SIAM Journal on Applied Dynamical Systems, 19(1):412–441, 2020.

[GINR20] A. Garbuno-Inigo, N. Nusken, and S. Reich. Affine invariant interacting Langevin dynamics for Bayesian inference. SIAM Journal on Applied Dynamical Systems, 19(3):1633–1658, 2020.

[GKM+96] C. Graham, T. G. Kurtz, S. Méléard, P. E. Protter, M. Pulvirenti, and D. Talay.
Asymptotic behaviour of some interacting particle systems; McKean–Vlasov and Boltzmann models. Probabilistic Models for Nonlinear Partial Differential Equations: Lectures given at the 1st Session of the Centro Internazionale Matematico Estivo (CIME) held in Montecatini Terme, Italy, May 22–30, 1995, pages 42–95, 1996.

[HJK11] M. Hutzenthaler, A. Jentzen, and P. E. Kloeden. Strong and weak divergence in finite time of Euler's method for stochastic differential equations with non-globally Lipschitz continuous coefficients. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 467(2130):1563–1576, 2011.

[HJK12] M. Hutzenthaler, A. Jentzen, and P. E. Kloeden. Strong convergence of an explicit numerical method for SDEs with nonglobally Lipschitz continuous coefficients. The Annals of Applied Probability, 22(4):1611–1641, 2012.

[HM01] P. L. Houtekamer and H. L. Mitchell. A sequential ensemble Kalman filter for atmospheric data assimilation. Monthly Weather Review, 129(1):123–137, 2001.

[HRS24] M. Hasenpflug, D. Rudolf, and B. Sprungk. Wasserstein convergence rates of increasingly concentrating probability measures. The Annals of Applied Probability, 34(3):3320–3347, 2024.

[HST25] P. D. Hinds, A. Sharma, and M. V. Tretyakov. Well-posedness and approximation of reflected McKean–Vlasov SDEs with applications. Mathematical Models and Methods in Applied Sciences, 2025.

[Hwa80] C.-R. Hwang. Laplace's method revisited: weak convergence of probability measures. The Annals of Probability, 8(6):1177–1182, 1980.

[ILS13] M. A. Iglesias, K. J. H. Law, and A. M. Stuart. Ensemble Kalman methods for inverse problems. Inverse Problems, 29(4):045001, 2013.

[KS19] N. Kovachki and
A. Stuart. Ensemble Kalman inversion: a derivative-free technique for machine learning tasks. Inverse Problems, 35(9):095005, 2019.

[KST23] D. Kalise, A. Sharma, and M. V. Tretyakov. Consensus-based optimization via jump-diffusion stochastic differential equations. Mathematical Models and Methods in Applied Sciences, 33(02):289–339, February 2023.

[LMW18] B. Leimkuhler, C. Matthews, and J. Weare. Ensemble preconditioning for Markov chain Monte Carlo simulation. Statistics and Computing, 28:277–290, 2018.

[LWZ22] M. Lindsey, J. Weare, and A. Zhang. Ensemble Markov chain Monte Carlo with teleporting walkers. SIAM/ASA Journal on Uncertainty Quantification, 10(3):860–885, 2022.

[MPS98] G. N. Milstein, E. Platen, and H. Schurz. Balanced implicit methods for stiff stochastic systems. SIAM Journal on Numerical Analysis, 35(3):1010–1019, 1998.

[MRSS25] V. Molin, A. Ringh, M. Schauer, and A. Sharma. Controlled stochastic processes for simulated annealing. arXiv:2504.08506, 2025.

[MT04] G. N. Milstein and M. V. Tretyakov. Stochastic Numerics for Mathematical Physics, volume 39. Springer, 2004.

[RDF78] P. J. Rossky, J. D. Doll, and H. L. Friedman. Brownian dynamics as smart Monte Carlo simulation. The Journal of Chemical Physics, 69(10):4628–4633, 1978.

[SS17] C. Schillings and A. M. Stuart. Analysis of the ensemble Kalman filter for inverse problems. SIAM Journal on Numerical Analysis, 55(3):1264–1290, 2017.

[Szn91] A.-S. Sznitman. Topics in propagation of chaos. École d'été de probabilités de Saint-Flour XIX, 1989, 1464:165–251, 1991.

[TZ13] M. V. Tretyakov and Z. Zhang. A fundamental mean-square convergence theorem for SDEs with locally Lipschitz coefficients and its applications. SIAM Journal on Numerical Analysis, 51(6):3135–3162, 2013.

[Vae24] U. Vaes. Sharp propagation of chaos for the ensemble Langevin sampler. Journal of the London Mathematical Society, 110(5):e13008, 2024.

[VdV98] A. W. Van der Vaart. Asymptotic Statistics.
Cambridge University Press, Cambridge, UK, 1998.

[Vil03] C. Villani. Topics in Optimal Transportation. Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, USA, 2003.
arXiv:2504.18769v1 [math.ST] 26 Apr 2025

A Simplified Condition For Quantile Regression

Liang Peng* and Yongcheng Qi†

Abstract

Quantile regression is effective in modeling and inferring the conditional quantile given some predictors and has become popular in risk management due to wide applications of quantile-based risk measures. When forecasting risk for economic and financial variables, quantile regression has to account for heteroscedasticity, which raises the question of whether the identification condition on residuals in quantile regression is equivalent to one independent of heteroscedasticity. In this paper, we present some identification conditions under three probability models and use them to establish simplified conditions in quantile regression.

Keywords: Conditional expectation, Heteroscedasticity, Residuals, Quantile regression

MSC 2020 classification: 60E05; 62J05; 62G05

1 Introduction

Since the distribution function and the quantile function are two fundamental quantities in characterizing randomness, nonparametric distribution and quantile estimation have played a significant role in nonparametric statistics; see Shorack and Wellner (1986) for an overview of empirical processes and quantile processes. Like regression for modeling the relationship between response and predictors, quantile regression is an effective technique for modeling and inferring the conditional quantile of a response given predictors. We refer to Koenker (2005) for an overview of quantile regression techniques. Because quantile-based risk measures such as Value-at-Risk

*Maurice R. Greenberg School of Risk Science, Georgia State University, USA.
†Department of Mathematics and Statistics, University of Minnesota Duluth, USA.

are popular in risk management, quantile regression plays an important role in forecasting risk. Below are three particular applications of using quantile regression to forecast risk in insurance and econometrics.
The first example is risk forecasting in non-life insurance, where an actuarial dataset often includes the number of claims, the loss of each claim, and some characteristics of each policyholder, such as the policyholder's age, type of car, and driving experience in automobile insurance. When a risk manager needs to forecast Value-at-Risk (VaR), a two-step inference procedure is employed in the literature: logistic regression for modeling the probability of having nonzero claims and quantile regression for modeling Value-at-Risk at an adjusted risk level computed from the logistic regression; see Heras, Moreno and Vilar-Zanón (2018), Kang, Peng and Golub (2021), and Fung et al. (2024). Because of many zero claims, the first step of logistic regression improves the inference for the probability of nonzero claims, while the second step of quantile regression effectively models and infers the quantile-based risk measure, leading to accurate risk forecasts. Also, the analysis often assumes independence across policyholders, which is practically sound.

The second example is about predictability in financial econometrics. Because of heterogeneous quantiles (see Gebka and Wohar (2013) and Ma, Xiao and Ma (2018)), researchers in econometrics are interested in using quantile regression to test for the predictability of some economic predictors; see Xiao (2009) for a unit root predictor, Xu (2021) for the case of a stationary predictor, and Lee (2016), Fan and Lee (2019), and Liu et al. (2023)
https://arxiv.org/abs/2504.18769v1
for some unified tests regardless of predictors being stationary, near unit root, or unit root.

The third example is systemic risk, which has been a major concern in the financial industry and insurance business since the 2008 global financial crisis. A popular systemic risk measure is the so-called CoVaR, defined as the conditional quantile of system loss at risk level $q$ given some predictors and that an individual loss equals its Value-at-Risk (VaR) at the same risk level $q$ (see Adrian and Brunnermeier (2016)). Various applications and extensions of the systemic risk measure CoVaR have appeared in the literature, as in Giglio, Kelly and Pruitt (2016), Yang and Hamori (2021), Girardi and Ergün (2013), Härdle, Wang and Yu (2016), Chen, Härdle and Okhrin (2019), and Capponi and Rubtsov (2022). Because CoVaR is defined as a conditional quantile given some economic predictors, Adrian and Brunnermeier (2016) propose to model and infer by two quantile regression models: one quantile regression for individual loss given some macroeconomic predictors, and another quantile regression for system loss given macroeconomic predictors and the individual loss.

A general quantile regression model for a univariate predictor is
$$\begin{cases} Y_t=\alpha_q+\beta_q X_t+U_t,\\ X_t=\mu+\rho X_{t-1}+e_t,\quad e_t=\sum_{j=0}^{\infty}c_j V_{t-j},\\ \{(U_t,V_t)\}\ \text{is a sequence of independent and identically distributed random vectors},\end{cases} \quad (1)$$
where the $c_j$'s satisfy that $\{e_t\}$ is stationary, $|\rho|<1$ for stationary $\{X_t\}$, and $\rho=1$ for the unit root process of $\{X_t\}$. Here, we allow dependence between $U_t$ and $V_t$. For $q\in(0,1)$, to ensure that $\alpha_q+\beta_q X_t$ in (1) models the $q$-th conditional quantile of $Y_t$ given $X_t$ and the consistency of quantile regression estimation, one commonly assumes that
$$P(U_t\le 0\,|\,X_t)=q. \quad (2)$$
Then, a simple question is, under model (1), whether (2) is equivalent to
$$P(U_t\le 0\,|\,V_t)=q.$$
(3)

When quantile regression is applied to economic and financial variables, as in the second and third examples above, it is necessary to account for heteroscedasticity. In this case, one often considers the following quantile regression model:
$$\begin{cases} Y_t=\alpha_q+\beta_q X_t+U_t,\\ X_t=\mu+\rho X_{t-1}+e_t,\quad e_t=\sum_{j=0}^{\infty}c_j V_{t-j},\\ V_t=\sigma_{t,x}\eta_t,\quad U_t=\sigma_{t,y}\varepsilon_t,\\ \{(\varepsilon_t,\eta_t)\}\ \text{is a sequence of independent and identically distributed random vectors},\\ (\varepsilon_t,\eta_t)\ \text{is independent of}\ \{(\sigma_{s,y},\sigma_{s,x}): s\le t\},\end{cases} \quad (4)$$
where $\{\sigma_{t,y}>0\}$ and $\{\sigma_{t,x}>0\}$ are stationary, the $c_j$'s satisfy that $\{e_t\}$ is stationary, and $\rho\in(-1,1]$. Again, $\rho\in(-1,1)$ and $\rho=1$ represent stationary and nonstationary $\{X_t\}$, respectively. Here, the second question is, under model (4), whether (3) is equivalent to
$$P(\varepsilon_t\le 0\,|\,\eta_t)=q, \quad (5)$$
which is equivalent to $P(U_t\le 0\,|\,\eta_t)=q$ as $\sigma_{t,y}>0$, implying that the conditional quantile of $Y_t$ given $X_t$ is still $\alpha_q+\beta_q X_t$. Further, the third question is whether (2) is equivalent to (5) under model (4).

In using models (1) and (4) for the three applications mentioned above, $Y_t$ and $X_t$ are insurance loss and the policyholder's characteristics, respectively, in forecasting insurance risk, and are asset return and some economic variables, respectively, in predictability tests and systemic risk management.

In this note, under some minor conditions, we prove the equivalence of (2) and (3) under model (1) and the equivalence of (3) and (5) under model (4). It is clear that (5) implies (2) under model (4). Unfortunately, we can only show that (2) implies (5) under model (4) under some restrictive moment conditions.

2
Main Results

In this section, we will prove three technical lemmas that are of independent interest in probability theory. Then, we will prove three theorems that can be used to answer the above three questions. Throughout, we will use either the characteristic function or the moment generating function, as they determine distribution functions and work nicely with sums.

Lemma 1. Assume random variables $U$ and $V$ are independent of $S$, and $S+U$ and $S+V$ are identically distributed. Then $U$ and $V$ have the same distribution under each of the following two conditions:

(a) The characteristic function of $S$, defined as $\varphi_S(t)=E(e^{itS})$, is nonzero for $t\in\mathcal{C}$, where $\mathcal{C}$ is a dense subset of $(-\infty,\infty)$ and $i$ is the imaginary number with $i^2=-1$.

(b) For some constant $h>0$, one of the two moment generating functions $M_U(t):=E\big(e^{tU}\big)$ and $M_V(t):=E\big(e^{tV}\big)$ is finite for $t\in(-h,h)$.

Proof. By using independence and equality in distribution, we have, for all $t\in(-\infty,\infty)$,
$$\varphi_S(t)\varphi_U(t)=E\big(e^{it(S+U)}\big)=E\big(e^{it(S+V)}\big)=\varphi_S(t)\varphi_V(t). \quad (6)$$
Under condition (a) that $\varphi_S(t)\ne 0$ for $t\in\mathcal{C}$, we have from (6) that $\varphi_U(t)=\varphi_V(t)$ for $t\in\mathcal{C}$. Since all characteristic functions are continuous and $\mathcal{C}$ is dense in $(-\infty,\infty)$, we have $\varphi_U(t)=\varphi_V(t)$ for $t\in(-\infty,\infty)$, which implies that $U$ and $V$ are identically distributed.

Under condition (b), assume $M_U(t)$ is finite for $t\in(-h,h)$, implying that all moments of $U$ exist. It is known that when the moment generating function of a random variable exists, the moments of the random variable uniquely determine the moment generating function and the distribution of the random variable as well. In other words, if all moments of the random variable $V$ exist and $E(V^k)=E(U^k)$ for all positive integers $k\ge 1$, then $U$ and $V$ are identically distributed. Now we will show $E(V^k)=E(U^k)$ for all $k\ge 1$. Since $\varphi_S(t)$ is continuous and $\varphi_S(0)=1$, $\varphi_S(t)\ne 0$ in a neighborhood of zero.
Then it follows from (6) that $\varphi_U(t)=\varphi_V(t)$ for $t\in(-\delta,\delta)$ for some $\delta>0$. Hence, both $\varphi_U(t)$ and $\varphi_V(t)$ are differentiable at the origin infinitely many times since all moments of $U$ exist, and
$$E(U^k)=(-i)^k\varphi_U^{(k)}(t)\big|_{t=0}=(-i)^k\varphi_V^{(k)}(t)\big|_{t=0}=E(V^k)$$
for all $k\ge 1$. This completes the proof.

Lemma 2. Assume random variables $Y_0$ and $Y_1$ are independent of $Z$, $P(Z>0)=1$, and $ZY_0$ and $ZY_1$ are identically distributed. Then $Y_0$ and $Y_1$ have the same distribution under one of the following conditions:

(A) The characteristic function of $\ln Z$, $\varphi_{\ln Z}(t)$, is nonzero for $t\in\mathcal{C}$, where $\mathcal{C}$ is a dense subset of $(-\infty,\infty)$.

(B) For some $\delta>0$, $E\big(|Y_j|^{\delta}+|Y_j|^{-\delta}I(Y_j\ne 0)\big)<\infty$ for $j=0$ or $1$.

Proof. Without loss of generality, assume $p=P(Y_0>0)>0$ and $r=P(Y_0<0)>0$. Then $P(Y_1>0)=P(ZY_1>0)=P(ZY_0>0)=P(Y_0>0)=p$ since $P(Z>0)=1$. Similarly, we have $P(Y_0<0)=P(Y_1<0)=r$ and $P(Y_0=0)=P(Y_1=0)=1-p-r$. Now define two new random variables, $U_0$ and $U_1$, with the following properties:

(P1) For $j=0,1$, the distribution of $U_j$ is the same as the conditional distribution of $Y_j$ given $Y_j>0$; that is,
$$P(U_j\le u)=P(Y_j\le u\,|\,Y_j>0)=P(0<Y_j\le u)/p,\quad u>0. \quad (7)$$

(P2) $(U_0,U_1)$ and $Z$ are independent.

Note that $P(U_j>0)=1$ for $j=0,1$. Since $ZY_0$ and $ZY_1$ are identically distributed, we have
$$P(ZY_0\le x\,|\,ZY_0>0)=P(ZY_1\le x\,|\,ZY_1>0),\quad -\infty<x<\infty. \quad (8)$$
Again, since $P(Z>0)=1$, we have for $j=0,1$
$$P(ZU_j\le x)=P(ZY_j\le x\,|\,Y_j>0)=\frac{P(ZY_j\le x,\,Y_j>0)}{P(Y_j>0)}=\frac{P(ZY_j\le x,\,ZY_j>0)}{P(ZY_j>0)}=P(ZY_j\le x\,|\,ZY_j>0).$$
It follows from (8) that $ZU_0$ and $ZU_1$ are identically distributed, or equivalently, $\ln Z+\ln U_0$ and $\ln Z+\ln U_1$ have the same distribution. Note that $\ln Z$ and $(\ln U_0,\ln U_1)$ are independent.
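The factorization (6) of characteristic functions under independence, which drives Lemma 1 and, via logarithms, Lemma 2, can be verified exactly for small discrete distributions; the values and probabilities below are arbitrary illustrations, not from the paper.

```python
import numpy as np

# Identity (6): for independent S and U, E[e^{it(S+U)}] = E[e^{itS}] E[e^{itU}].
def cf(vals, probs, t):
    # characteristic function of a finite discrete distribution
    return np.sum(np.asarray(probs) * np.exp(1j * t * np.asarray(vals)))

S_vals, S_p = [-1.0, 0.5, 2.0], np.array([0.2, 0.5, 0.3])
U_vals, U_p = [0.0, 1.0], np.array([0.6, 0.4])

# distribution of S + U under independence (product measure)
sum_vals = [s + u for s in S_vals for u in U_vals]
sum_p = np.array([ps * pu for ps in S_p for pu in U_p])

for t in np.linspace(-3, 3, 13):
    assert np.isclose(cf(sum_vals, sum_p, t), cf(S_vals, S_p, t) * cf(U_vals, U_p, t))
```

Dividing out $\varphi_S(t)$ wherever it is nonzero is exactly the cancellation step used in the proofs above.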
To apply Lemma 1 with $S=\ln Z$, $U=\ln U_0$ and $V=\ln U_1$, we need to verify conditions (a) and (b). Clearly, condition (A) in Lemma 2 implies condition (a) in Lemma 1. Now assume condition (B) holds with $j=0$ or $1$. Then we have
$$E(U_j^{\delta})=\frac{1}{p}E\big(Y_j^{\delta}I(Y_j>0)\big)\le\frac{1}{p}E\big(|Y_j|^{\delta}\big)<\infty \quad\text{and}\quad E(U_j^{-\delta})=\frac{1}{p}E\big(Y_j^{-\delta}I(Y_j>0)\big)\le\frac{1}{p}E\big(|Y_j|^{-\delta}I(Y_j\ne 0)\big)<\infty.$$
By using Lyapunov's inequality, we have for any $t\in(0,\delta)$
$$\big(E(U_j^{t})\big)^{1/t}\le\big(E(U_j^{\delta})\big)^{1/\delta}<\infty \quad\text{and}\quad \big(E(U_j^{-t})\big)^{1/t}\le\big(E(U_j^{-\delta})\big)^{1/\delta}<\infty,$$
that is, $E\big(e^{\pm t\ln U_j}\big)<\infty$, or equivalently $E\big(e^{t\ln U_j}\big)<\infty$ for $t\in(-\delta,\delta)$, which implies condition (b) in Lemma 1. Therefore, it follows from Lemma 1 that $\ln U_0$ and $\ln U_1$ are identically distributed; that is, $U_0$ and $U_1$ are identically distributed. From (7) we obtain
$$P(0<Y_0\le u)=P(0<Y_1\le u),\quad u>0. \quad (9)$$
Replacing $Y_0$ and $Y_1$ with $-Y_0$ and $-Y_1$, respectively, and repeating the same procedure as above, we have $P(0<-Y_0\le v)=P(0<-Y_1\le v)$ for $v>0$, which is equivalent to
$$P(v\le Y_0<0)=P(v\le Y_1<0),\quad v<0. \quad (10)$$
By combining (9), (10), and $P(Y_0=0)=P(Y_1=0)$, we conclude that $Y_0$ and $Y_1$ are identically distributed. This completes the proof of the lemma.

Lemma 3. Assume $Y_0$, $Y_1$, $Z$, and $W$ are random variables, $(Y_0,Y_1)$ is independent of $(Z,W)$, and $P(Z>0)=1$. Assume all moments of the random variables $Z$ and $W$ are finite, and the moment generating functions of $Y_0$ and $Y_1$, $M_{Y_j}(t)=E(e^{tY_j})$, exist in $t\in(-h,h)$ for some constant $h>0$. If $Y_0Z+W$ and $Y_1Z+W$ are identically distributed, then $Y_0$ and $Y_1$ are identically distributed.

Proof. Since the moment generating functions $E(e^{tY_0})$ and $E(e^{tY_1})$ are well defined in $t\in(-h,h)$, all moments of $Y_0$ and $Y_1$ are finite. In this case, if $E(Y_0^n)=E(Y_1^n)$ for all $n\ge 1$, then $Y_0$ and $Y_1$ have the same moment generating functions, and consequently they have identical distribution functions.
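The proof below matches moments of $Y_0Z+W$ and $Y_1Z+W$ via the binomial expansion $E\big((YZ+W)^n\big)=\sum_{m=0}^{n}\binom{n}{m}E(Z^mW^{n-m})E(Y^m)$, valid when $Y$ is independent of $(Z,W)$. This expansion can be checked exactly on toy discrete distributions; all values below are illustrative choices of ours.

```python
import numpy as np
from math import comb

# Toy laws: Y independent of the pair (Z, W), with P(Z > 0) = 1.
Y_vals, Y_p = [-1.0, 2.0], [0.5, 0.5]
ZW_vals = [(0.5, -1.0), (1.0, 0.0), (2.0, 3.0)]
ZW_p = [0.3, 0.4, 0.3]

for n in range(1, 6):
    # direct computation of E[(YZ + W)^n] over the product measure
    direct = sum(py * pzw * (y * z + w) ** n
                 for y, py in zip(Y_vals, Y_p)
                 for (z, w), pzw in zip(ZW_vals, ZW_p))
    # binomial expansion: sum_m C(n,m) * mu_{m,n-m} * E[Y^m]
    expanded = sum(comb(n, m)
                   * sum(pzw * z**m * w**(n - m) for (z, w), pzw in zip(ZW_vals, ZW_p))
                   * sum(py * y**m for y, py in zip(Y_vals, Y_p))
                   for m in range(n + 1))
    assert np.isclose(direct, expanded)
```

Since $\mu_{n,0}=E(Z^n)>0$, the highest-order coefficient never vanishes, which is what lets the induction in (11) solve for $E(Y^n)$ at each step.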
Hence, it suffices to show that $E(Y_0^n)=E(Y_1^n)$ for all $n\ge 1$.

Note that since all moments of $Z$ and $W$ exist, $\mu_{j,k}:=E(Z^jW^k)$ is finite for all integers $j,k\ge 0$. We have $\mu_{j,0}=E(Z^j)>0$ for all $j\ge 0$ since $P(Z>0)=1$. For $n\ge 1$, we have
$$g_n(t):=E\big(tZ+W\big)^n=E\Big(\sum_{m=0}^{n}\binom{n}{m}t^mZ^mW^{n-m}\Big)=\sum_{m=0}^{n}\mu_{m,n-m}\binom{n}{m}t^m.$$
Since $Y_0Z+W$ and $Y_1Z+W$ are identically distributed, $E(g_n(Y_j))=E\big\{E\big((Y_jZ+W)^n\,|\,Y_j\big)\big\}=E\big((Y_jZ+W)^n\big)$ is the same for $j=0,1$; that is,
$$\sum_{m=0}^{n}\mu_{m,n-m}\binom{n}{m}E(Y_0^m)=\sum_{m=0}^{n}\mu_{m,n-m}\binom{n}{m}E(Y_1^m).$$
Rewrite the above equation as
$$\mu_{n,0}\big(E(Y_0^n)-E(Y_1^n)\big)=\sum_{m=0}^{n-1}\mu_{m,n-m}\binom{n}{m}\big(E(Y_1^m)-E(Y_0^m)\big). \quad (11)$$
Notice that $E(Y_0^0)=1=E(Y_1^0)$ and $\mu_{n,0}=E(Z^n)>0$ for all $n\ge 1$. By setting $n=1$ in (11), we have $E(Y_0)=E(Y_1)$. Now assume $E(Y_0^m)=E(Y_1^m)$ for all $m=0,1,\dots,n-1$; then from (11) we conclude $E(Y_0^n)=E(Y_1^n)$. This implies that $E(Y_0^n)=E(Y_1^n)$ for all $n\ge 1$ by induction.

Remark 1. It is well known that the values of the characteristic function of a random variable in a neighborhood of the origin cannot uniquely determine the distribution of the random variable. That is why we assume the characteristic function is nonzero over a dense subset of all real numbers; see condition (a) in Lemma 1 and condition (A) in Lemma 2, which require no moment conditions. For most of the commonly used distributions, such as normal distributions, t-distributions, and the distributions in Remark 2 below, these conditions are valid. When the characteristic function is zero on a set with positive Lebesgue measure, we impose
the existence of the moment generating function of another random variable to ensure the equality of the distribution functions of the two variables; see condition (b) in Lemma 1 and condition (B) in Lemma 2. Meanwhile, the assumptions on the moment generating functions in Lemmas 1–3 cannot be weakened to the finiteness of all moments of the random variables, since moments cannot uniquely determine a distribution function in general; see, e.g., Example 30.2, page 398 in Billingsley (1995).

Remark 2. We notice that Smith (1962) showed that the characteristic function of a nonnegative random variable cannot vanish identically on an interval. Immediately, we can draw the following two conclusions:

i) The characteristic function of a random variable $S$ is nonzero over a dense subset of all real numbers if the random variable $S$ is bounded from above or bounded from below.

ii) For a positive random variable $Z$, if $P(Z>c)=1$ for some $c>0$, then the characteristic function of $\ln Z$ is nonzero over a dense subset of all real numbers.

Remark 3. In Lemma 3, we impose moment conditions via moment generating functions, as it remains unknown whether or how a condition on the characteristic function like those in Lemmas 1 and 2 can be employed.

First, we show Theorem 1 below. The equivalence between (2) and (3) under model (1) follows immediately.

Theorem 1. Assume random variable $W$ is independent of the random vector $(X,Y)$, and assume the constant $q\in(0,1)$. Then $P(X\le 0\,|\,Y+W)=q$ if and only if $P(X\le 0\,|\,Y)=q$ under one of the following two conditions:

(C1) The characteristic function of $W$, $\varphi_W(t)=E\big(e^{itW}\big)$, is nonzero in a dense subset of $(-\infty,\infty)$.

(C2) The moment generating function of $Y$, $M_Y(t)$, is finite in $(-h,h)$ for some $h>0$.

Proof. When $P(X\le 0\,|\,Y)=q$, we have
$$P(X\le 0\,|\,Y+W)=E\big\{P(X\le 0\,|\,Y,W)\,|\,Y+W\big\}=E\big\{P(X\le 0\,|\,Y)\,|\,Y+W\big\}=E(q\,|\,Y+W)=q. \quad (12)$$
Hence, we only need to prove $P(X\le 0\,|\,Y)=q$ when
$$P(X\le 0\,|\,Y+W)=q.$$
$$P(X\le 0\mid Y+W)=q. \qquad(13)$$

It follows from (13) that $P(X\le 0\mid Y+W)=q=P(X\le 0)$ and $P(X>0\mid Y+W)=1-q=P(X>0)$, which implies that the Bernoulli random variable $T:=I(X\le 0)$ is independent of $Y+W$. For $j=0,1$, define the conditional distribution of $Y$ given $T=j$ as
$$F_j(y)=P(Y\le y\mid T=j)\quad\text{for }-\infty<y<\infty. \qquad(14)$$
Our objective is to show that $T$ and $Y$ are independent, or equivalently, that the conditional distribution of $Y$ given $T=j$ is the same for $j=0,1$. Now assume the random variables $Y_0$ and $Y_1$ are independent of $W$, with cumulative distribution functions $F_0$ and $F_1$, respectively, as defined in (14). For every $x$, we have $P(W+Y_j\le x)=P(W+Y\le x\mid T=j)$ for $j=0,1$. Since $T$ and $W+Y$ are independent, we have $P(W+Y\le x\mid T=j)=P(W+Y\le x)$ for $j=0,1$. Therefore, $W+Y_0$ and $W+Y_1$ are identically distributed. By applying Lemma 1 with $S=W$, $U=Y_0$ and $V=Y_1$, we conclude that $Y_0$ and $Y_1$ have the same distribution function, provided we can verify that condition (a) or (b) in Lemma 1 holds. In fact, condition (C1) implies condition (a) in Lemma 1. Under condition (C2), we have $M_{Y_1}(t)=E\big(e^{tY_1}\big)=\frac1q E\big(e^{tY}I(X\le 0)\big)\le \frac1q M_Y(t)<\infty$ for all $t\in(-h,h)$; that is, condition (b) holds in Lemma 1. Thus, we have proved that $T$ and $Y$ are independent, which yields $P(X\le 0\mid Y)=P(T=1\mid Y)=P(T=1)=q$. This completes the proof.

Second, we prove Theorem 2 below. The equivalence between (3) and (5) under model (4) follows immediately by noting that $I(U_t\le 0)=I(\varepsilon_t\le 0)$.
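The independence step above can be checked exactly on a small discrete example. In the sketch below, the laws of $Y$ and $W$ and the value of $q$ are arbitrary illustrative choices (not from the paper): if $T=I(X\le 0)$ is Bernoulli($q$) and independent of $Y$, with $W$ independent of $(T,Y)$, then $P(T=1\mid Y+W=s)=q$ for every attainable $s$, which is the content of (12).

```python
from fractions import Fraction
from collections import defaultdict

q = Fraction(1, 3)
# Hypothetical discrete laws: T ~ Bernoulli(q) independent of Y; W independent of (T, Y).
Y = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
W = {0: Fraction(1, 2), 1: Fraction(1, 2)}

# Joint law of (T, Y + W); independence of T and Y encodes P(X <= 0 | Y) = q.
joint = defaultdict(Fraction)
for y, py in Y.items():
    for w, pw in W.items():
        joint[(1, y + w)] += q * py * pw
        joint[(0, y + w)] += (1 - q) * py * pw

# P(T = 1 | Y + W = s) equals q for every attainable value s.
for s in {s for (_, s) in joint}:
    total = joint[(0, s)] + joint[(1, s)]
    assert joint[(1, s)] / total == q
print("P(T=1 | Y+W=s) = q for all attainable s")
```

Exact rational arithmetic (via `fractions`) makes the conditional probabilities equal to $q$ identically, not just up to floating-point error.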
Theorem 2. Assume the random variable $Z$ is independent of the random vector $(X,Y)$, $P(Z>0)=1$, and $q\in(0,1)$ is a constant. Then $P(X\le 0\mid YZ)=q$ if and only if $P(X\le 0\mid Y)=q$ under one of the following two conditions:

(C3): The characteristic function of $\ln Z$, $\varphi_{\ln Z}(t)=E\big(e^{it\ln Z}\big)$, is non-zero in a dense subset of $(-\infty,\infty)$.

(C4): For some $\delta>0$, $E\big(|Y|^\delta+|Y|^{-\delta}I(Y\ne 0)\big)<\infty$.

Proof. The sufficiency follows along the same lines as the proof of (12). Therefore, we only need to prove $P(X\le 0\mid Y)=q$ when
$$P(X\le 0\mid YZ)=q. \qquad(15)$$
As in the proof of Theorem 1, set $T=I(X\le 0)$ and define random variables $Y_0$ and $Y_1$ in such a way that $Y_0$ and $Y_1$ are independent of $Z$ and their distribution functions are $F_0$ and $F_1$, as defined in (14). It suffices to show the independence of $T$ and $Y$, which is equivalent to the equality of the distribution functions of $Y_0$ and $Y_1$.

Since (15) implies the independence of $T$ and $YZ$, we have, for $x\in(-\infty,\infty)$, $P(ZY\le x\mid T=0)=P(ZY\le x\mid T=1)$. Because the left-hand side and the right-hand side of the above equation equal $P(ZY_0\le x)$ and $P(ZY_1\le x)$, respectively, we conclude that $ZY_0$ and $ZY_1$ are identically distributed. Clearly, condition (C3) is the same as condition (A) in Lemma 2. We can also show that condition (C4) implies condition (B) in Lemma 2, since
$$E\big(|Y_1|^\delta+|Y_1|^{-\delta}I(Y_1\ne 0)\big)=\frac1q E\big\{\big(|Y|^\delta+|Y|^{-\delta}I(Y\ne 0)\big)I(T=1)\big\}\le \frac1q E\big(|Y|^\delta+|Y|^{-\delta}I(Y\ne 0)\big).$$
Hence, it follows from Lemma 2 that $Y_0$ and $Y_1$ are identically distributed; that is, $P(Y\le y\mid T=0)=P(Y\le y\mid T=1)=P(Y\le y)$, and $Y$ is independent of $I(X\le 0)$. Therefore, $P(X\le 0\mid Y)=P(X\le 0)=q$. That is, Theorem 2 holds.

Third, as in the proof of (12), we know that (5) implies (2) under model (4). However, we can only show the equivalence between (2) and (5) under model (4) with enough finite moments, by using the following theorem.

Theorem 3.
Assume $X,Y,Z$, and $W$ are four random variables, $(X,Y)$ and $(Z,W)$ are independent, and $P(Z>0)=1$. Furthermore, assume all moments of the random variables $Z$ and $W$ are finite, and the moment generating function of $Y$, $M_Y(t)=E(e^{tY})$, exists for $t\in(-h,h)$ for some constant $h>0$. Let $q\in(0,1)$ be a constant. Then
$$P(X\le 0\mid YZ+W)=q \quad\text{if and only if}\quad P(X\le 0\mid Y)=q. \qquad(16)$$

Proof. As before, if $P(X\le 0\mid Y)=q$, then we have
$$P(X\le 0\mid YZ+W)=E\{P(X\le 0\mid Y,Z,W)\mid YZ+W\}=E\{P(X\le 0\mid Y)\mid YZ+W\}=E(q\mid YZ+W)=q.$$
Now we assume $P(X\le 0\mid YZ+W)=q$. Following the proof of Theorem 1, we define the random variable $T=I(X\le 0)$. Then $P(T=1\mid YZ+W)=q$ and $P(T=0\mid YZ+W)=1-q$; that is, $T$ and $YZ+W$ are independent and $P(T=1)=q$.

Define the distributions $F_0$ and $F_1$ as in (14). If $F_0$ and $F_1$ are the same, then we can conclude the independence of $T$ and $Y$, which implies $P(X\le 0\mid Y)=P(T=1\mid Y)=P(T=1)=q$. Hence, we only need to show that $F_0$ and $F_1$ are identical. As in the proof of Theorem 1, we assume the random variables $Y_0$ and $Y_1$ are independent of $(Z,W)$, with cumulative distribution functions $F_0$ and $F_1$, respectively. Using the independence of $(X,Y)$ and $(Z,W)$, for each $j=0,1$ the distribution of $Y_jZ+W$ is the same as the conditional distribution of $YZ+W$ given $T=j$. Meanwhile, the conditional distribution of $YZ+W$ given $T=j$ is the same for $j=0,1$ due to the independence of $T$ and $YZ+W$. Therefore, $Y_0Z+W$ and $Y_1Z+W$ are identically distributed. Since
$$E(e^{tY})=E(e^{tY}\mid T=0)(1-q)+E(e^{tY}\mid T=1)q=(1-q)E(e^{tY_0})+qE(e^{tY_1})$$
for all $t\in(-h,h)$, the moment generating functions of $Y_0$ and $Y_1$ are well defined in $(-h,h)$. Hence, an application of Lemma
3 concludes that $Y_0$ and $Y_1$ are identically distributed, that is, $F_0=F_1$, and the theorem holds.

References

[1] T. Adrian and M.K. Brunnermeier (2016). CoVaR. American Economic Review 106, 1705-1741.
[2] P. Billingsley (1995). Probability and Measure. 3rd Edition. New York, Wiley.
[3] A. Capponi and A. Rubtsov (2022). Systemic risk-driven portfolio selection. Operations Research 70, 1598-1612.
[4] C. Chen, W.K. HΓ€rdle and Y. Okhrin (2019). Tail event driven networks of SIFIs. Journal of Econometrics 208, 282-298.
[5] R. Fan and J.H. Lee (2019). Predictive quantile regressions under persistence and conditional heteroskedasticity. Journal of Econometrics 213, 261-280.
[6] T. Fung, Y. Li, L. Peng and L. Qian (2024). Testing constant serial dynamics in two-step risk inference for longitudinal actuarial data. North American Actuarial Journal 28, 861-881.
[7] B. Gebka and M. Wohar (2013). Causality between trading volume and returns: Evidence from quantile regressions. International Review of Economics & Finance 27, 144-159.
[8] S. Giglio, B. Kelly and S. Pruitt (2016). Systemic risk and the macroeconomy: an empirical evaluation. Journal of Financial Economics 119, 457-471.
[9] G. Girardi and A. ErgΓΌn (2013). Systemic risk measurement: multivariate GARCH estimation of CoVaR. Journal of Banking & Finance 37, 3169-3180.
[10] W. HΓ€rdle, W. Wang and L. Yu (2016). Tenet: tail-event driven network risk. Journal of Econometrics 192, 499-513.
[11] A. Heras, I. Moreno and J.L. Vilar-ZanΓ³n (2018). An application of two-stage quantile regression to insurance ratemaking. Scandinavian Actuarial Journal 9, 753-769.
[12] S. Kang, L. Peng and A. Golub (2021). Two-step risk analysis in insurance ratemaking. Scandinavian Actuarial Journal 6, 532-554.
[13] R. Koenker (2005). Quantile Regression. Cambridge University Press.
[14] J.H. Lee (2016). Predictive quantile regression with persistent covariates: IVX-QR approach. Journal of Econometrics 192, 105-118.
[15] X.
Liu, W. Long, L. Peng and B. Yang (2023). A unified inference for predictive quantile regression. Journal of the American Statistical Association 119, 1526-1540.
[16] C. Ma, S. Xiao and Z. Ma (2018). Investor sentiment and the prediction of stock returns: a quantile regression approach. Applied Economics 50, 5401-5415.
[17] W.L. Smith (1962). A note on characteristic functions which vanish identically in an interval. Mathematical Proceedings of the Cambridge Philosophical Society 58 (2), 430-432.
[18] G.R. Shorack and J.A. Wellner (1986). Empirical Processes with Applications to Statistics. Wiley Series in Probability and Statistics.
[19] Z. Xiao (2009). Quantile cointegrating regression. Journal of Econometrics 150, 248-260.
[20] K.L. Xu (2021). On the serial correlation in multi-horizon predictive quantile regression. Economics Letters 200, 109736.
[21] L. Yang and S. Hamori (2021). Systemic risk and economic policy uncertainty: International evidence from the crude oil market. Economic Analysis and Policy 69, 142-158.
INVERSE PROBLEMS OVER PROBABILITY MEASURE SPACE*

QIN LI†, MARIA OPREA‑, LI WANGΒ§, AND YUNAN YANGΒΆ

Abstract. Define a forward problem as $\rho_y=G_\#\rho_x$, where the probability distribution $\rho_x$ is mapped to another distribution $\rho_y$ by the forward operator $G$. In this work, we investigate the corresponding inverse problem: given $\rho_y$, how do we find $\rho_x$? Depending on whether $G$ is overdetermined or underdetermined, the solution can have drastically different behavior. In the overdetermined case, we formulate the variational problem $\min_{\rho_x} D(G_\#\rho_x,\rho_y)$ and find that different choices of the metric $D$ significantly affect the quality of the reconstruction. When $D$ is the Wasserstein distance, the reconstruction is the marginal distribution, while setting $D$ to be a $\phi$-divergence reconstructs the conditional distribution. In the underdetermined case, we formulate the constrained optimization $\min_{\{G_\#\rho_x=\rho_y\}} E[\rho_x]$. The choice of $E$ also significantly impacts the reconstruction: setting $E$ to be the entropy gives the piecewise constant reconstruction, while setting $E$ to be the second moment recovers the classical least-norm solution. We also examine the formulation with regularization, $\min_{\rho_x} D(G_\#\rho_x,\rho_y)+\alpha R[\rho_x]$, and find that the entropy-entropy pair leads to a regularized solution that is defined in a piecewise manner, whereas the $W_2$-$W_2$ pair leads to a least-norm solution, where $W_2$ is the 2-Wasserstein metric.

1. Introduction. We study the inverse problem over the probability measure space:

(1.1) Given data $\rho_y$, reconstruct $\rho_x$ so that $G_\#\rho_x\approx\rho_y$.

Here, $\rho_x\in\mathcal P(\Theta)$ is a probability measure of the variable $x\in\Theta\subseteq\mathbb R^m$, with $\mathcal P(\Theta)$ denoting the collection of all probability measures on the domain $\Theta$. $G$ is a function that maps $x\in\Theta$ to $y=G(x)\in\mathcal R\subseteq\mathbb R^n$; we denote by $\mathcal R$ the range. The map $G_\#$ is then the induced pushforward operator that maps a probability measure in $\mathcal P(\Theta)$ to a measure in $\mathcal P(\mathcal R)$. The given data $\rho_y\in\mathcal P(\mathbb R^n)$ is a probability measure over the variable $y\in\mathbb R^n$.
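To build intuition for the pushforward notation, here is a minimal sketch; the discrete measure $\rho_x$ and the map $G(x)=x^2$ are illustrative assumptions, not taken from the paper. If $x\sim\rho_x$, then $G(x)\sim G_\#\rho_x$: each $y$ receives the total $\rho_x$-mass of the preimage $G^{-1}(y)$, so atoms mapping to the same $y$ merge.

```python
from collections import defaultdict
from fractions import Fraction

# Illustrative discrete measure rho_x and forward map G (both assumed for this sketch).
rho_x = {-1: Fraction(1, 4), 0: Fraction(1, 4), 1: Fraction(1, 2)}

def G(x):
    return x**2

# G_# rho_x: sum the rho_x-mass over each preimage G^{-1}(y).
rho_y = defaultdict(Fraction)
for x, p in rho_x.items():
    rho_y[G(x)] += p   # atoms -1 and +1 both land on y = 1 and merge their mass

assert dict(rho_y) == {0: Fraction(1, 4), 1: Fraction(3, 4)}
print(dict(rho_y))
```

The merging of preimages is exactly what makes the inverse problem nontrivial: distinct $\rho_x$ can produce the same $G_\#\rho_x$.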
Notably, $\rho_y$ may have non-trivial mass outside the range $\mathcal R$. This problem can be viewed as an analog of the inverse problem in a linear space (e.g., Euclidean space): given data $y$, reconstruct $x$ so that $G(x)\approx y$. The new problem (1.1) lives in the probability measure space, and thus naturally in an infinite-dimensional setting, and is significantly harder to solve computationally. However, many properties are shared. Indeed, similar to the problem posed in Euclidean space, different methods can be employed for the reconstruction, depending on the properties of $G$ and $y$ in (1.1). In our context, we need to separate the discussion depending on the features of $G$ and $\rho_y$ as well. To start, we define the feasible set

(1.2) $S=\{\rho_x: G_\#\rho_x=\rho_y\}\subseteq\mathcal P(\Theta)$,

the collection of all possible $\rho_x$ which, under the pushforward action of $G$, agree completely with $\rho_y$. There are three scenarios:

β€’ Unique solution: The ideal situation occurs when the feasible set contains only one element, which corresponds to the natural inversion of $\rho_y$ and solves problem (1.1).

β€’ Overdetermined: The feasible set is empty (corresponding to non-existence). This occurs if $\mathrm{supp}(\rho_y)$ is not entirely contained in $\mathcal R$, so there are elements of $y$ that cannot find an inversion.

* Submitted to the editors on April 29, 2025. Funding: M.O. and Y.Y. were supported by the National Science Foundation (NSF)
https://arxiv.org/abs/2504.18999v1
through grant DMS-2409855, the Office of Naval Research through grant N00014-24-1-2088, and the Cornell PCCW Affinito-Stewart Grant. Q.L. was supported in part by NSF grant DMS-2308440. L.W. was supported in part by NSF grant DMS-1846854 and the Simons Fellowship.
† Department of Mathematics, University of Wisconsin-Madison, Madison, WI (qinli@math.wisc.edu).
‑ Center for Applied Mathematics, Cornell University, Ithaca, NY (mao237@cornell.edu).
Β§ School of Mathematics, University of Minnesota Twin Cities, Minneapolis, MN (liwang@umn.edu).
ΒΆ Department of Mathematics, Cornell University, Ithaca, NY (yunan.yang@cornell.edu).

arXiv:2504.18999v1 [math.OC] 26 Apr 2025

β€’ Underdetermined: The feasible set has infinitely many elements (corresponding to non-uniqueness). This can happen if $\mathrm{supp}(\rho_y)$ is completely inside $\mathcal R$, and there are elements of $y$ that admit many choices of inversion.

These scenarios also appear in the deterministic setting when looking for the inversion $x\approx G^{-1}(y)$. Numerically, it is common practice to formulate an optimization problem for finding this $x$ by adding a regularization term. This numerical strategy is feasible in both overdetermined and underdetermined settings. In the overdetermined setting, adding a regularizer relaxes the feasible-set constraints and tolerates errors in mismatching; in the underdetermined setting, the added regularizer helps impose prior knowledge. Likewise, for problem (1.1), we also have:

β€’ Regularized: This applies to both overdetermined and underdetermined settings; we relax the problem by tolerating some errors and adding prior knowledge.

We are interested in studying problem (1.1) in the (1) overdetermined, (2) underdetermined, and (3) regularized settings. More specifically, we spell out the behavior of the solution to problem (1.1), formulated in the optimization framework, in the three settings mentioned above. We now summarize the findings.
In the overdetermined case, $\mathrm{supp}(\rho_y)\not\subseteq\mathcal R$, implying that $\rho_y(\mathbb R^n\setminus\mathcal R)\ne 0$. In this scenario, no measure $\rho_x$ satisfies $G_\#\rho_x=\rho_y$, making the feasible set $S$ empty. Intuitively, this reflects the fact that $\rho_y$ possesses non-trivial mass outside the range $\mathcal R$, so a perfect match is unattainable. In this case, we solve the variational problem to determine the best match:

(1.3) $\rho_x^*=\arg\min_{\rho_x} D(G_\#\rho_x,\rho_y)$,

for a properly chosen $D$. Accordingly, we also define the reconstructed data distribution

(1.4) $\rho_y^*=G_\#\rho_x^*$.

This formulation seeks a probability measure $\rho_x$ that, when pushed forward by $G$, matches $\rho_y$ optimally, with optimality quantified by the misfit function $D$. In this setting, it is straightforward to see that $\rho_y^*\ne\rho_y$, but it maintains some features of $\rho_y$. In particular, the choice of $D$ is crucial and emphasizes different features of $\rho_y$. Below, we present two concrete scenarios:

– Setting $D$ to be any $\phi$-divergence (also called an $f$-divergence in probability theory), $\rho_y^*$ is the conditional distribution of $\rho_y$ over the range of $G$;
– Setting $D$ to be a Wasserstein distance, $\rho_y^*$ is the marginal distribution of $\rho_y$.

We should note that, at this point, these statements are not yet rigorous. The use of the terms "conditional" versus "marginal" is intended for intuition only. The precise meaning is described in Sections 2.1 and 2.2, respectively.

In the underdetermined case, $\mathrm{supp}(\rho_y)\subseteq\mathcal R$, and there is at least one element in $S$. If $S$ has multiple or
infinitely many solutions, we need to pick one that makes the most physical sense. To do so, we follow the footsteps of the Euclidean-space setting and solve the following constrained optimization problem:

(1.5) $\rho_x^*=\arg\min_{\rho_x\in S} E[\rho_x]$.

The choice of $E$ depends on the specific problem and the user's objectives. As written, (1.5) looks for the optimal choice of $\rho_x$ within the feasible set, optimal in the sense characterized by $E$. Therefore, the choice of $E$ is crucial. Different definitions of $E$ promote different properties, leading to solutions with contrasting features. We examine the following two specific definitions of $E$ and report:

– Setting $E$ to be the entropy, $\rho_x^*$ is a piecewise constant function on the level sets of $G$.
– Setting $E=\int|x|^2\,\mathrm d\rho_x$, $\rho_x^*$ extends the least-norm solution defined in Euclidean space.

Once again, the statements are not yet rigorous. The terms "piecewise constant" and "least-norm solution" need precise definitions. We discuss the details in Section 3.1 and Section 3.2, respectively.

In the regularized case, a regularization term is added either to offset the constraints on the feasible set or to encode prior knowledge, leading to the following formulation:

(1.6) $\rho_x^*=\arg\min_{\rho_x} D(G_\#\rho_x,\rho_y)+\alpha R[\rho_x]$,

where $\alpha>0$ is the regularization parameter and $R:\mathcal P(\Theta)\to\mathbb R$ is the regularizing functional. Similar to the two cases above, different choices of the pair $(D,R)$ promote different characterizations of the optimizer $\rho_x^*$. We consider the following two scenarios:

– Setting $D$ to be the Kullback-Leibler (KL) divergence and $R[\rho_x]$ also a KL divergence against a prescribed distribution $M$, the optimal solution $\rho_x^*$ is defined piecewise, with the Radon-Nikodym derivative $\frac{\mathrm d\rho_x^*}{\mathrm dM}$ being piecewise constant on the level sets of $G$.
– Setting $D$ to be the $p$-Wasserstein distance and $R[\rho_x]:=\int|x|^p\,\mathrm d\rho_x$, the $p$-th order moment of $\rho_x$, the optimal distribution $\rho_x^*$ can be deduced from a revised least-norm solution map.

Similar to the scenarios above, we make these terminologies precise later; details are presented in Section 4.1 and Section 4.2, respectively.

It is clear that in all the cases above, the choice of metric plays a crucial role in the reconstruction. This is not at all surprising: the same phenomenon holds true in Euclidean space and in linear function spaces. Specifically, in the regularized problem, classical regularizers include Tikhonov regularization [21, 16] and the total variation (TV) norm [28] or $L^1$ norm [11] for $R$ to promote sparsity, while the $L^2$ norm (i.e., mean squared error) is often used as the data fidelity term to account for measurement error. Nevertheless, lifting these problems to the probability space is not only a mathematically interesting question but is also backed by substantial practical demand. Over recent years, inverse problems associated with finding probability measures have gained increasing prominence. For example, in weather prediction, the goal is to infer the distribution of pressure and temperature changes [20]; in plasma simulation, one aims to infer the distribution of plasma particles using macroscopic measurements [18, 10]; in experimental design, the objective is to determine the optimal distribution of tracers or detectors to achieve the best measurements [22, 23, 31]. In
optical communication, the task is to recover the distribution of the optical environment [5, 24, 4]. Other problems include those arising in aerodynamics [14], biology [13, 12, 29], and cryo-EM [19, 29]. In all these problems, the sought-after quantity is a probability distribution, density, or measure that matches the given data. Consequently, inverse problems in this stochastic setting are naturally formulated as the inversion for a probability distribution, giving rise to the so-called stochastic inverse problem [6, 8, 7, 9, 27, 26, 30]. Therefore, it is of great importance to carefully study the properties of solutions to this new type of inverse problem.

The main objective of this article is precisely that: to understand the behavior of the solutions to (1.3), (1.5) and (1.6) under different choices of $D$, $E$, and the $(D,R)$ pair, and to establish the six statements mentioned above. We summarize the findings in Table 1. We emphasize throughout this paper that:

β€’ We assume the three problems ((1.3), (1.5) and (1.6)) are well-posed, in the sense that a solution can be found. This assumption is not trivial, given that the problem is posed in an infinite-dimensional space. While the well-posedness of these problems is interesting in its own right, it is somewhat orthogonal to our main goal, and we set it aside for now.

β€’ Additionally, we define a few projection operators (see Definitions 2.3 and 4.2, and Equation (3.4)), and note that the uniqueness of these projections is not a requirement. If multiple projections exist, we have the freedom to select any one of them.

It is possible that these observations have appeared previously in the literature. However, to the best of the authors' knowledge, we have not found them well documented systematically. We
should also note that some of the results in simpler cases were reported in earlier work [25, 26]. In particular, the different recoveries (conditional versus marginal) for $G=A$ an overdetermined linear operator were reported in [25]. Furthermore, in [26], the authors reported a gradient-flow optimization algorithm using the kernel method when $D$ takes the form of the KL divergence [3, 15].

Table 1: Six theorems to be proved in this paper.

Overdetermined
– In $\mathbb R^d$: formulation $x^*=\arg\min_x\|G(x)-y\|$; result: $G(x^*)=P_G(y)$ (Definition 2.3).
– In $\mathcal P$: formulation $\rho_x^*=\arg\min_{\rho_x} D(G_\#\rho_x,\rho_y)$; result: $D=\phi$-divergence gives the conditional distribution (Theorem 2.2), $D=W_2$ gives the marginal distribution (Theorem 2.4).

Underdetermined
– In $\mathbb R^d$: formulation $x^*=\arg\min_{G(x)=y}\|x\|$; result: the least-norm solution (Equation (3.4)).
– In $\mathcal P$: formulation $\rho_x^*=\arg\min_{\rho_x\in S}E[\rho_x]$; result: $E=$ entropy gives a piecewise constant solution (Theorem 3.1), $E[\rho_x]=\int|x|^2\,\mathrm d\rho_x$ gives the least-norm solution (Theorem 3.2).

Regularization
– In $\mathbb R^d$: formulation $\min_x\|G(x)-y\|^2+\alpha R(x)$; result: $x^*=F(y)$ (Definition 4.2).
– In $\mathcal P$: formulation $\rho_x^*=\arg\min_{\rho_x} D(G_\#\rho_x,\rho_y)+\alpha R(\rho_x)$; result: the entropy-entropy pair gives $\frac{\mathrm d\rho}{\mathrm dM}$ piecewise constant (Theorem 4.1), the $W_2$-moment pair gives the least-norm solution (Theorem 4.3).

The rest of the paper is structured as follows. Section 2 is dedicated to problem (1.3), with Section 2.1 presenting general results for $\phi$-divergences and Section 2.2 addressing the counterpart for the Wasserstein distance. Section 3 explores the underdetermined case and studies problem (1.5), with Section 3.1 examining the problem with $E$ an entropy and Section 3.2 focusing on setting $E$ as a moment of the distribution. Section 4 examines
the regularized formulation, with Section 4.1 and Section 4.2 respectively dedicated to the entropy-entropy pair and the Wasserstein-moment pair. Each section ends with a subsection documenting the results applied to two toy examples, one linear and one nonlinear, both of which have explicit solutions. These examples highlight the contrast between different measures and their mathematical consequences. The proofs are short enough to be included directly in the main text.

2. The Characterization of Solutions to Problem (1.3). In this section, we discuss the overdetermined case, meaning the feasible solution set $S$ is empty, for example due to modeling error or noisy data. As an alternative, we consider a relaxation of the inverse problem through the variational framework (1.3), seeking a $\rho_x$ that minimizes the mismatch between the produced data $G_\#\rho_x$ and the given $\rho_y$. As a result, different choices of the distance function $D$ promote different properties, yielding different optimal solutions accordingly. In this section, we pay specific attention to setting $D$ as a $\phi$-divergence for a convex function $\phi$ (see Section 2.1) and as the Wasserstein distance (see Section 2.2).

2.1. Reconstruction when $D$ is a $\phi$-divergence. This section presents results when $D$ in (1.3) is a $\phi$-divergence. We rewrite (1.3) as follows:

(2.1) $\min_{\rho_x\in\mathcal P(\Theta)} D_\phi(G_\#\rho_x\,\|\,\rho_y)$.

The $\phi$-divergence between two probability measures $P$ and $Q$ is defined as
$$D_\phi(P\,\|\,Q)=\int\phi\Big(\frac{\mathrm dP}{\mathrm dQ}\Big)\,\mathrm dQ,$$
where $\frac{\mathrm dP}{\mathrm dQ}$ is the Radon-Nikodym derivative of $P$ with respect to $Q$, and $\phi:\mathbb R_+\to\mathbb R$ is a convex function such that $\phi(1)=0$. Common examples of $\phi$-divergences include the aforementioned KL divergence and the $\chi^2$-divergence, obtained by setting $\phi(t)$ to be $t\log(t)$ and $(t-1)^2$, respectively. We claim that the reconstruction recovers the conditional distribution. For that, we need to give a precise definition.
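In discrete form, the claim can be checked directly. The sketch below uses an illustrative four-point $\rho_y$ with a two-point range (assumptions made for this example, not from the paper): among measures supported on the range, the renormalized restriction of $\rho_y$ minimizes the KL divergence, attaining the lower bound $\nu(1)\,\phi(1/\nu(1))+\nu(0)\,\phi(0)$ that appears in the proof of Theorem 2.2 below, with $\phi(t)=t\log t$.

```python
from math import log

# Illustrative discrete data: y takes 4 values; the range R covers the first two.
rho_y = [0.25, 0.25, 0.25, 0.25]
in_range = [True, True, False, False]
nu1 = sum(p for p, r in zip(rho_y, in_range) if r)   # nu(1) = rho_y(R) = 0.5

def kl(p, q):
    """KL divergence sum p_i log(p_i/q_i), with the convention 0*log 0 = 0."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Conditional distribution rho_y(. | R): restrict to R and renormalize.
conditional = [p / nu1 if r else 0.0 for p, r in zip(rho_y, in_range)]
best = kl(conditional, rho_y)

# It attains the theoretical lower bound nu(1)*phi(1/nu(1)) + nu(0)*phi(0), phi(t)=t log t.
assert abs(best - nu1 * (1 / nu1) * log(1 / nu1)) < 1e-12

# Any other candidate supported on R does no better.
for a in [0.1, 0.3, 0.5, 0.7, 0.9]:
    candidate = [a, 1 - a, 0.0, 0.0]
    assert kl(candidate, rho_y) >= best - 1e-12
print("conditional distribution minimizes KL among measures supported on R")
```

Note that only the mass of $\rho_y$ inside the range matters for the minimizer; the mass outside contributes the fixed constant $\nu(0)\phi(0)$.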
Definition 2.1. The conditional distribution of $\rho_y$ on the range $\mathcal R$, denoted by $\rho^c_{y|\mathcal R}$ and $\rho_y(\cdot\,|\,\mathcal R)$ interchangeably, is defined as follows:
$$\rho^c_{y|\mathcal R}(B)=\frac{\rho_y(B\cap\mathcal R)}{\rho_y(\mathcal R)}\quad\text{for every measurable set }B.$$
Moreover, $\rho^c_{y|\mathbb R^n\setminus\mathcal R}$ is the conditional distribution of $\rho_y$ on $\mathbb R^n\setminus\mathcal R$, defined in a similar manner. As a consequence, $\rho^c_{y|\mathcal R}$ is absolutely continuous with respect to $\rho_y$, and we have
$$\frac{\mathrm d\rho^c_{y|\mathcal R}}{\mathrm d\rho_y}(y)=\begin{cases}\dfrac{1}{\rho_y(\mathcal R)}, & \text{if } y\in\mathcal R,\\[2pt] 0, & \text{if } y\notin\mathcal R.\end{cases}$$

We are now ready to present our first theorem.

Theorem 2.2. Assume that the variational problem (2.1) admits a minimizer $\rho_x^*\in\mathcal P(\Theta)$. Then $G_\#\rho_x^*=\rho^c_{y|\mathcal R}$.

The proof of Theorem 2.2 requires the Measure Disintegration Theorem [1, Thm. 5.3.1], which guarantees the existence of the conditional distribution. Specifically, for any map $T$ from a probability space $(Y,\mathcal B_Y,\mu)$ to a measurable space $(Z,\mathcal B_Z)$, defining $\nu=T_\#\mu$, the theorem states that for $\nu$-a.e. $z\in Z$ there exists a family of probability measures $\{\mu_z: z\in Z\}$ on $(Y,\mathcal B_Y)$ that satisfies $\mu(B)=\int_Z\mu_z(B)\,\mathrm d\nu(z)$ for any measurable set $B\in\mathcal B_Y$. The collection $\{\mu_z\}_z$ is called the disintegration of $\mu$ with respect to $T$. In our context, $Y$ is the whole of $\mathbb R^n$ and $\mu$ is our data distribution $\rho_y$. We need to identify the appropriate measurable space and find the correct map $T$, as detailed in the following proof.

Proof of Theorem 2.2. Define a map $T:\mathbb R^n\to\{0,1\}$ by
$$z=T(y)=\begin{cases}1, & y\in\mathcal R,\\ 0, & y\notin\mathcal R.\end{cases}$$
By applying the Measure Disintegration Theorem to $\rho_y$ based on the map $T$, we obtain a discrete probability measure $\nu$ with
$$\nu(1)=\rho_y(\mathcal R),\qquad \nu(0)=\rho_y(\mathbb R^n\setminus\mathcal R),$$
and the following disintegration of $\rho_y$:
$$\rho_y=\nu(1)\,\rho^c_{y|\mathcal R}+\nu(0)\,\rho^c_{y|\mathbb R^n\setminus\mathcal R}, \qquad(2.2)$$
where $\rho^c_{y|\mathcal R}$ and $\rho^c_{y|\mathbb R^n\setminus\mathcal R}$ are the conditional distributions given in Definition 2.1. The Measure Disintegration Theorem further states that this disintegration is unique.

To show that $\rho^c_{y|\mathcal R}$ is the optimal solution, we rewrite the variational problem (2.1) as
$$\min_{\rho_x\in\mathcal P(\Theta)} D_\phi(G_\#\rho_x\,\|\,\rho_y)=\min_{\substack{\rho'_y\in\mathcal P(\mathbb R^n)\\ \rho'_y(\mathbb R^n\setminus\mathcal R)=0}} D_\phi(\rho'_y\,\|\,\rho_y). \qquad(2.3)$$
This holds because $\{G_\#\rho_x : \rho_x\in\mathcal P(\Theta)\}=\{\rho'_y\in\mathcal P(\mathbb R^n): \rho'_y(\mathbb R^n\setminus\mathcal R)=0\}$. The "$\subseteq$" direction holds directly since $\mathcal R=G(\Theta)$. The "$\supseteq$" direction holds because, for a given $\rho'_y$ with $\mathrm{supp}(\rho'_y)\subseteq\mathcal R$, using a left inverse of $G$ we can define a distribution $\rho'_x\in\mathcal P(\Theta)$ such that $G_\#\rho'_x=\rho'_y$. Without loss of generality, we only examine $\rho'_y$ that are absolutely continuous with respect to $\rho_y$.¹ By the definition of the $\phi$-divergence, we have
$$D_\phi(\rho'_y\,\|\,\rho_y)=\int\phi\Big(\frac{\mathrm d\rho'_y}{\mathrm d\rho_y}\Big)\,\mathrm d\rho_y=\nu(1)\int\phi\Big(\frac{\mathrm d\rho'_y}{\mathrm d\rho_y}\Big)\,\mathrm d\rho^c_{y|\mathcal R}+\nu(0)\int\phi\Big(\frac{\mathrm d\rho'_y}{\mathrm d\rho_y}\Big)\,\mathrm d\rho^c_{y|\mathbb R^n\setminus\mathcal R}.$$
Since $\rho'_y(\mathbb R^n\setminus\mathcal R)=0$, the second term is the constant $\nu(0)\phi(0)$. Thus, we are left with
$$D_\phi(\rho'_y\,\|\,\rho_y)=\nu(1)\int\phi\Big(\frac{\mathrm d\rho'_y}{\nu(1)\,\mathrm d\rho^c_{y|\mathcal R}+\nu(0)\,\mathrm d\rho^c_{y|\mathbb R^n\setminus\mathcal R}}\Big)\,\mathrm d\rho^c_{y|\mathcal R}+\nu(0)\phi(0)=\nu(1)\int\phi\Big(\frac{1}{\nu(1)}\frac{\mathrm d\rho'_y}{\mathrm d\rho^c_{y|\mathcal R}}\Big)\,\mathrm d\rho^c_{y|\mathcal R}+\nu(0)\phi(0)\ \ge\ \nu(1)\,\phi\Big(\frac{1}{\nu(1)}\Big)+\nu(0)\,\phi(0),$$
where in the last step we applied Jensen's inequality, leveraging the convexity of $\phi$. Equality holds when $\rho'_y=\rho^c_{y|\mathcal R}$, completing the proof.

Although the reconstructed $G_\#\rho_x^*$ is expected to agree with $\rho_y$ on the range $\mathcal R$ to some extent, the fact that the mismatch between $\rho_y$ and $G_\#\rho_x^*$ on $\mathcal R$ is merely a constant multiple is not entirely trivial. Jensen's inequality and the convexity of $\phi$ play a major role.

2.2. Reconstruction when $D$ is $W_p$. We now move on by setting $D$ as a Wasserstein distance.
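As a preview of the projection behavior derived in Theorem 2.4 below, here is a one-dimensional sketch with equal-weight atoms; the range $[0,1]$ and the atom locations are illustrative assumptions. For equal-weight atoms in 1-D, the optimal coupling simply matches the sorted point lists, and clipping each data atom onto the range does at least as well in $W_1$ as the listed alternative measures supported on the range.

```python
# Illustrative 1-D setting: Theta = [0, 1] and G(x) = x, so the range R = [0, 1].
def w1(a, b):
    """W1 between two equal-weight atom lists (the optimal 1-D coupling sorts both)."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

data = [-0.5, 0.25, 2.0]                        # atoms of rho_y, some outside R
proj = [min(max(y, 0.0), 1.0) for y in data]    # clip onto R: the projection P_G
best = w1(proj, data)                           # (0.5 + 0.0 + 1.0) / 3 = 0.5

# Other equal-weight measures supported on R do no better in W1.
alternatives = [[0.0, 0.0, 1.0], [0.25, 0.5, 1.0], [0.1, 0.2, 0.9], [1.0, 1.0, 1.0]]
for alt in alternatives:
    assert w1(alt, data) >= best - 1e-12
print("projected measure attains W1 distance", best)
```

The sketch only compares against a finite list of candidates; the general statement and its proof follow.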
This amounts to rewriting (1.1) as
$$\inf_{\rho_x\in\mathcal P(\Theta)} W_p(G_\#\rho_x,\rho_y), \qquad(2.4)$$
where $W_p(\cdot,\cdot)$ is the $p$-Wasserstein distance. For any two probability measures $\mu$ and $\nu$, the $p$-Wasserstein distance is
$$W_p(\mu,\nu)=\Big(\min_{\gamma\in\Gamma(\mu,\nu)}\int d^p(x,y)\,\mathrm d\gamma\Big)^{1/p},\quad p\ge 1, \qquad(2.5)$$
where $d$ is a metric over $\mathbb R^n$ and $\Gamma(\mu,\nu)$ is the set of all couplings between the two measures $\mu$ and $\nu$. The most common choice of $d$ is the Euclidean distance.

¹ Otherwise, $D_\phi(\rho'_y\,\|\,\rho_y)$ attains its maximum value, making such $\rho'_y$ irrelevant as we aim to find the minimum. The maximum value of the divergence is $\infty$ in the case of KL and $\chi^2$, and $1$ in the case of TV.

To characterize the minimizer of (2.4), we first define an inversion map $F$ and its associated projection map $P_G$ as follows.

Definition 2.3. Define the inversion map $F:\mathbb R^n\to\Theta$ as
$$F(y^*)=\arg\min_{x\in\Theta} d(G(x),y^*),$$
and the projection map as
$$P_G(y^*)=\arg\min_{y\in\mathcal R} d(y,y^*)=G(F(y^*)).$$
If the minimizer is not unique, one has the freedom to select one.

Theorem 2.4. Assume a solution to problem (2.4) exists. Then one of the minimizers takes the form
$$\rho_x^*=F_\#\rho_y, \qquad(2.6)$$
and the reconstruction recovers the projection: $G_\#\rho_x^*=P_{G\#}\rho_y$.

Proof. By the definition
of the $W_p$ distance, we have
$$W_p(G_\#\rho_x,\rho_y)^p=\int d(\tilde y,y)^p\,\mathrm d\pi(\tilde y,y), \qquad(2.7)$$
where $\pi\in\Gamma(G_\#\rho_x,\rho_y)$ is the optimal coupling between $G_\#\rho_x$ and $\rho_y$. From Definition 2.3, we can deduce that
$$\int d(\tilde y,y)^p\,\mathrm d\pi(\tilde y,y)\ \ge\ \int d(P_G(y),y)^p\,\mathrm d\pi(\tilde y,y)=\int d(P_G(y),y)^p\,\mathrm d\rho_y(y)=\int d(z,y)^p\,\mathrm d\gamma(z,y)\ \ge\ W_p(P_{G\#}\rho_y,\rho_y)^p, \qquad(2.8)$$
where $\gamma=\widetilde P_{G\#}\rho_y$ with $\widetilde P_G=(P_G,\mathrm I_n)$ and $\mathrm I_n$ the $n$-dimensional identity map; the last inequality holds because $\gamma$ is a particular coupling between $P_{G\#}\rho_y$ and $\rho_y$. Considering
$$P_{G\#}\rho_y=(G\circ F)_\#\rho_y=G_\#(F_\#\rho_y)=G_\#\rho_x^*,$$
and recalling definition (2.6), the inequality above shows that $W_p(G_\#\rho_x,\rho_y)^p\ge W_p(G_\#\rho_x^*,\rho_y)^p$ for all $\rho_x$, concluding the proof.

2.3. Examples. A key distinction between the $\phi$-divergence case and the $p$-Wasserstein case lies in how the reference data distribution $\rho_y$ is utilized. In the $\phi$-divergence case, the optimizer relies only on a small subset of the reference distribution: $\rho_y$ confined within $\mathcal R$. In contrast, the $p$-Wasserstein case employs the entire reference distribution $\rho_y$ to generate the marginal distribution. Neither result is completely surprising, but their contrasting features bring out effects that we did not find in the literature. We demonstrate these results using a couple of simple examples.

2.3.1. Linear pushforward maps. Assume $G=A:\Theta=\mathbb R^m\to\mathbb R^n$ is linear and full-rank. Since we are in the overdetermined setting, $m<n$. Then the range $\mathcal R=A(\Theta)\cong\mathbb R^m$. If the given data distribution $\rho_y\in\mathcal P(\mathbb R^n)$ has nontrivial support outside $\mathcal R$, the feasible set $S$ is empty. We then consider problem (1.3) to find the optimal solution $\rho_x$.

– $D$ is a $\phi$-divergence: Theorem 2.2 states that $A_\#\rho_x^*$ recovers the conditional distribution of $\rho_y$, namely $A_\#\rho_x^*=\rho_y(\cdot\,|\,\mathcal R)$.
– $D$ is the $p$-Wasserstein metric: Theorem 2.4 states that $A_\#\rho_x^*$ recovers the marginal distribution of $\rho_y$. Indeed, in this situation one can even compute $\rho_x^*$ (see [26]): $\rho_x^*=A^\dagger_\#\rho_y$, where $A^\dagger=(A^\top A)^{-1}A^\top$ is the pseudoinverse (left inverse) of $A$. Consequently, $A_\#\rho_x^*$ equals
Consequently, A#Οβˆ— x= AA† #ρyindeed recovers the marginal distribution of ρyalong the m-dimensional sub- space of Rn. 2.3.2. A simple nonlinear example. Consider the map G: [0,1]Γ—[0,2Ο€]β†’B(0,0)(1) where (y1, y2) =G(r, ΞΈ) = (rcosΞΈ , rsinΞΈ). We prepare our noisy data as: ρy=1 2Ο€Iy2 1+y2 2<1+1 2πδ(y2 1+y2 2βˆ’4), where IAis the indicator function on the set A. This means that we prepare half of our data as a uniform distribution over the ball B(0,0)(1) in the range (as denoted by1 2Ο€Iy2 1+y2 2<1), and the other half of the data uniformly on the ring of radius 2 (as denoted by1 2πδ(y2 1+y2 2βˆ’4), with an extra1 2Ο€factor coming from normalization2. See Figure 1a for an illustration. According to our earlier statement, conditional and marginal distributions are obtained if Ο•-divergence or the p-Wasserstein distance is used, respectively. Namely: –Dis aΟ•-divergence: Οβˆ— y=G#Οβˆ— x=1 Ο€Iy2 1+y2 2≀1, This is the recovery of the conditional distribution. All data points, conditioned on them being in the range B(0,0)(1), are preserved. All data points outside the range are completely forgotten. Note that the normalizing constant has changed. See Figure 1b for the plot of Οβˆ— y. –Dis the Wp: Οβˆ— y=G#Οβˆ— x=1 2Ο€Iy2 1+y2 2<1+1 2πδ(y2 1+y2 2βˆ’1). This is the recovery of the marginal distribution, with all data points projected to the range B(0,0)(1). In this context, all
data points sitting on the ring $\delta(y_1^2+y_2^2-4)$ are projected down to the ring $\delta(y_1^2+y_2^2-1)$, within the range. See Figure 1c for the plot of $\rho_y^*$.

Figure 1: We denote $\rho_y^*:=G_\#\rho_x^*$; (a): the noisy data distribution $\rho_y$ with mass supported outside the range $\mathcal R$; (b): the reconstructed data distribution with $D$ being the $\phi$-divergence; (c): the reconstructed data distribution with $D$ being the $p$-Wasserstein metric.

Β² Note that the integral of $\delta(y_1^2+y_2^2-R^2)$ over the entire plane is $\pi$ for all values of $R>0$.

3. The Characterization of Solutions to Problem (1.5). In this section, we discuss the underdetermined situation. This is the situation where there is more than one element in the feasible solution set $S$, and we are tasked to pick one by solving (1.5). Naturally, different choices of $E$ promote different properties and produce different optimal solutions accordingly. As a natural start, we pay special attention to the settings where $E$ is defined as the entropy (see Section 3.1) or the second-order moment of the distribution (see Section 3.2).

3.1. Optimization when $E[\rho_x]=\int\rho_x\ln\rho_x\,\mathrm dx$. We first examine problem (1.5) by setting $E$ as the entropy. As such, we look for $\rho_x$ that are absolutely continuous with respect to the Lebesgue measure, which means we consider $\mathcal P_{\mathrm{ac}}(\Theta)$ to be our search space. For convenience of the derivation, we also assume throughout this section that $\rho_y\in\mathcal P_{\mathrm{ac}}(\mathcal R)$ and that $\Theta\subsetneq\mathbb R^m$ is compact. The optimal solution can be written down explicitly, as given in Theorem 3.1 below.

Theorem 3.1. The optimizer $\rho_x^*$ of problem (1.5) with $E[\rho_x]=\int\rho_x\ln\rho_x\,\mathrm dx$ has the following property:
$$\rho_x^*\big(\cdot\,|\,G^{-1}(y)\big)=U\big(G^{-1}(y)\big)\quad\text{for all }y, \qquad(3.1)$$
where $\rho_x^*(\cdot\,|\,G^{-1}(y))$ denotes the conditional distribution of $\rho_x^*$ on the set $G^{-1}(y)$ and $U\big(G^{-1}(y)\big)$ is the uniform distribution over the set $G^{-1}(y)$.
In particular:
$$\rho^*_x(x) = \frac{\rho_y(G(x))}{\int \delta\big(G(x)-G(x')\big)\,dx'}. \tag{3.2}$$
The theorem states that the mass accumulated at $y = G(x)$ is transferred back to the $x$ domain and distributed uniformly across the preimage $G^{-1}(y)$. This property aligns with the fact that entropy increases with variations, so minimizing entropy discourages variations. The uniform distribution, having the least variation, minimizes the entropy.

Proof of Theorem 3.1. To begin with, let $\rho^*_x$ be the optimizer of problem (1.5) with $E[\rho_x] = \int\rho_x\ln\rho_x\,dx$. We claim that there is a function $g:\mathcal{R}\to\mathbb{R}_{\ge 0}$ so that
$$\rho^*_x(x) = g(G(x)). \tag{3.3}$$
To see this, consider the Lagrangian of the constrained optimization problem (1.5):
$$\mathcal{L}[\rho_x] := \int \rho_x(x)\ln\rho_x(x)\,dx - \int \lambda(y)\big((G_\#\rho_x)(y)-\rho_y(y)\big)\,dy = \int \rho_x(x)\ln\rho_x(x)\,dx - \int \lambda(G(x))\,\rho_x(x)\,dx + \int \lambda(y)\rho_y(y)\,dy,$$
where $\lambda(y)$ is the Lagrange multiplier. The optimizer $\rho^*_x$ satisfies the first-order optimality condition
$$\frac{\delta \mathcal{L}}{\delta \rho_x}\Big|_{\rho^*_x} = \ln\rho^*_x(x) + 1 - \lambda(G(x)) = C,$$

10 QIN LI, MARIA OPREA, LI WANG, AND YUNAN YANG

which leads to $\ln\rho^*_x(x) = \lambda(G(x)) - 1 + C$, where the constant $C$ is determined by normalization³. This verifies our claim in (3.3), as $\rho^*_x(x)$ only depends on the value of $G(x)$, which also leads to (3.1). In order to determine an explicit formula for $g$, we use the fact that the constraint has
to be satisfied, i.e., $G_\#\rho^*_x = \rho_y$. Consider an arbitrary test function $f \in C^\infty_c(\mathbb{R}^n)$. Then the constraint can be rewritten as:
$$\int f(y)\rho_y(y)\,dy = \int f(y)\,d(G_\#\rho^*_x)(y) = \int f(G(x))\,\rho^*_x(x)\,dx = \int f(G(x))\,g(G(x))\,dx = \int f(y)\,g(y)\int \delta(y-G(x))\,dx\,dy,$$
where the third equality uses $\rho^*_x = g(G(x))$. Since the above holds for any test function $f$, we conclude that
$$g(y)\int \delta(y-G(x))\,dx = \rho_y(y) \;\Longrightarrow\; g(y) = \frac{\rho_y(y)}{\int \delta(y-G(x))\,dx}.$$
Plugging the above expression into (3.3), we obtain (3.2).

3.2. Optimization when $E[\rho_x] = \int |x|^2\,d\rho_x$. Just as in the deterministic case, where the least-norm solution is sought when the feasible set is not a singleton, in this section we look for the ideal distribution in the feasible set $S$ by choosing the one that has the least "norm". In particular, we set the objective function to be the second-order moment of the distribution. To ensure that the second-order moment exists, we naturally work with the space $\mathcal{P}_2(\Theta)$. Similar to the result presented in Section 3.1, we can explicitly write down the optimal solution, given in Theorem 3.2 below.

Theorem 3.2. The optimizer $\rho^*_x$ of problem (1.5) with $E[\rho_x] = \int |x|^2\rho_x\,dx$ is $\rho^*_x = H_\#\rho_y$, where $H(y)$ is the minimal-norm solution to $G(x)=y$ in the Euclidean space:
$$H(y) := \operatorname*{arg\,min}_{G(x)=y,\ x\in\Theta} |x|^2. \tag{3.4}$$
If the minimum is not unique, we have the freedom to choose one.

Proof. We first note that, from the definition of $H$:
$$|H(G(x))|^2 \le |x|^2. \tag{3.5}$$
Then, to show that $\rho^*_x$ is optimal, we see that for any other $\rho_x$ such that $G_\#\rho_x = \rho_y$, we have:
$$\int |x|^2\,d\rho^*_x(x) = \int |x|^2\,d(H_\#\rho_y) = \int |H(y)|^2\,d\rho_y(y) = \int |H(y)|^2\,d(G_\#\rho_x)(y) = \int |H(G(x))|^2\,d\rho_x(x) \le \int |x|^2\,d\rho_x(x).$$

³Note that $\frac{\delta\mathcal{L}}{\delta\rho_x}\big|_{\rho^*_x} = 0$ in this context means $\big\langle \frac{\delta\mathcal{L}}{\delta\rho_x}\big|_{\rho^*_x},\ \rho-\rho^*_x\big\rangle = 0$ for all $\rho$.

3.3. Examples. Both proofs are straightforward, but the difference between the two theorems signals the drastic differences between moment-minimization and entropy-minimization.
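The contrast between the two theorems can be illustrated numerically. The sketch below is a minimal illustration (NumPy assumed; the simplex dimension, the matrix $A$, and the sample sizes are arbitrary choices, not from the paper). It checks two things: that uniform weights minimize the discrete negative entropy $\sum_i p_i \ln p_i$ over a probability simplex, mirroring the conditional uniformity in (3.1), and that, for a linear forward map, the pseudoinverse pushforward of Theorem 3.2 has a smaller second moment than any feasible competitor built by adding null-space components.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Theorem 3.1, discrete analogue: uniform weights minimize sum p*ln(p). ---
k = 50
uniform = np.full(k, 1.0 / k)

def neg_entropy(p):
    return float(np.sum(p * np.log(p)))

for _ in range(1000):
    p = rng.dirichlet(np.ones(k))        # random point on the simplex
    assert neg_entropy(p) >= neg_entropy(uniform) - 1e-12

# --- Theorem 3.2, linear case: H # rho_y minimizes the second moment. ---
A = rng.standard_normal((2, 4))          # underdetermined map, full row rank
H = A.T @ np.linalg.inv(A @ A.T)         # least-norm map H(y) = A^+ y
assert np.allclose(H, np.linalg.pinv(A))

y = rng.standard_normal((2, 5000))       # samples from rho_y
x_star = H @ y                           # push back: rho_x^* = H # rho_y

# A competing feasible law: add null-space components, so A x = y still holds.
N = np.linalg.svd(A)[2][2:].T            # columns span ker(A)
x_other = x_star + N @ rng.standard_normal((2, 5000))
assert np.allclose(A @ x_star, y) and np.allclose(A @ x_other, y)

m_star = np.mean(np.sum(x_star**2, axis=0))
m_other = np.mean(np.sum(x_other**2, axis=0))
assert m_star < m_other
print("entropy favors uniform spreading; moment favors the least-norm point")
```

The second check is exactly the inequality chain in the proof of Theorem 3.2: the null-space noise is orthogonal to the row space, so it can only increase $\int |x|^2\,d\rho_x$.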
According to Theorem 3.1, minimizing entropy encourages the pulled-back samples to spread out over the whole preimage. On the contrary, Theorem 3.2 suggests that the moment-minimizing solution pulls every $y\in\mathcal{R}$ back to a single sample in the preimage, defined by $H$.

3.3.1. Linear pushforward maps. If $G$ is linear, we denote the pushforward map by a matrix $A:\Theta\subset\mathbb{R}^m\to\mathbb{R}^n$, where $\Theta$ is a compact subset. We assume $A$ is full-rank, $m>n$, and $S$ contains infinitely many elements. In this context, the range is $\mathcal{R} = A(\Theta)\subset\mathbb{R}^n$, and we assume the support of the data distribution satisfies $\mathrm{supp}(\rho_y)\subseteq \mathcal{R} = A(\Theta)$. To compute the preimage, we note that for every fixed $y\in\mathbb{R}^n$, $A^{-1}(y)$ is a compact subset lying in a linear subspace of dimension at most $m-n$. Moreover, define the least-norm solution:
$$x_y = \operatorname*{arg\,min}_{Ax=y} |x|^2 = A^\top(AA^\top)^{-1}y = A^\dagger y, \tag{3.6}$$
where $A^\dagger$ denotes the pseudoinverse (i.e., right inverse) of the matrix $A$. Then the preimage of $y$ can be represented as:
$$A^{-1}(y) = \{x_y + A^\perp\}\cap\Theta,$$
where $A^\perp$ is the subspace perpendicular to the row space of $A$. We further define, according to (3.4): $H(y) = \arg\min\{|x|^2 : x\in A^{-1}(y)\}$.
– When $E = \int \rho\log\rho\,dx$: based on Theorem 3.1, the optimal distribution $\rho^*_x$ conditioned on the preimage $A^{-1}(y)$ is the uniform measure over $A^{-1}(y)\subset\Theta$.
– When $E = \int |x|^2\,d\rho(x)$: based on Theorem 3.2, the optimal
distribution $\rho^*_x$ conditioned on the preimage $A^{-1}(y)$, for a fixed $y\in\mathrm{supp}(\rho_y)$, is a single Dirac delta measure supported at the minimum-norm solution. Recalling (3.6), we have $\rho^*_x = H_\#\rho_y$. This example was already shown in [26, Theorem 4.7].

3.3.2. A simple nonlinear example. A nonlinear example that can be made explicit is as follows. Define $G: B_{(1,1)}(1)\to[0,1]$, where $B_{(1,1)}(1)$ is the ball centered at $(1,1)$ with radius $1$, and
$$r = G(x_1,x_2) = \sqrt{(x_1-1)^2+(x_2-1)^2}.$$
Then the preimage of any $r$ is:
$$G^{-1}(r) = \big\{(x_1,x_2) = (1+r\cos\theta,\ 1+r\sin\theta),\ \theta\in[0,2\pi]\big\}.$$
Through a simple calculation, within this preimage, the least-norm solution to (3.4) is:
$$H(r) = \Big(1-\tfrac{r}{\sqrt{2}},\ 1-\tfrac{r}{\sqrt{2}}\Big), \quad 0\le r\le 1.$$
Suppose we are given a probability measure $\mu_r$ with support in $[0,1]$. Then the two problems mentioned above are to find a distribution on $B_{(1,1)}(1)$ within the set
$$S = \{\rho(x_1,x_2) : G_\#\rho = \mu_r\}$$
that minimizes the entropy or the second-order moment. Explicit solutions are available:
– $E = \iint \rho(x_1,x_2)\ln\rho(x_1,x_2)\,dx_1dx_2$:
$$\rho(x_1,x_2) = \frac{\mu_r\big(\sqrt{(x_1-1)^2+(x_2-1)^2}\big)}{2\pi\sqrt{(x_1-1)^2+(x_2-1)^2}}.$$
This is a density function supported in the ball $B_{(1,1)}(1)$, with uniform density on each ring. The density of the ring that is $r$ away from the center $(1,1)$ is scaled by $\tfrac{1}{2\pi r}$⁴. See Figure 2a for an illustration of the optimal $\rho_x$.
– $E = \iint (x_1^2+x_2^2)\,\rho(x_1,x_2)\,dx_1dx_2$:
$$\rho(x_1,x_2) = \begin{cases} \sqrt{2}\,\mu_r\big(\sqrt{(x_1-1)^2+(x_2-1)^2}\big)\,\delta(x_1-x_2), & 1-\tfrac{1}{\sqrt{2}}\le x_1\le 1,\\ 0, & \text{otherwise}.\end{cases}$$
This is a density function supported only on the line segment along the straight line $x_1=x_2$ between the point $(1-\tfrac{1}{\sqrt{2}},\,1-\tfrac{1}{\sqrt{2}})$ and the point $(1,1)$. See Figure 2b for an illustration of the optimal $\rho_x$ under this choice of $E$.

Figure 2 (panels: (a) $E=\int\rho_x\log\rho_x\,dx$; (b) $E=\int|x|^2\rho_x\,dx$): The data distribution $\mu_r = U([0,1])$ is the uniform distribution on $[0,1]$. (a): the optimizer of (1.5) when $E$ is the entropy; the density is constant when restricted to level sets of $G$. (b): the optimizer of (1.5) when $E$ is the second-order moment.
The parameter distribution is concentrated at the minimum-norm element of each level set of $G$.

⁴Note that the integral of $\delta\big(\sqrt{x_1^2+x_2^2}-R\big)$ over the plane is $2\pi R$.

4. The Characterization of Solutions to Problem (1.6). In this section, we characterize solutions to the regularized data-matching variational problem (1.6) under two choices of the pair $(D,R)$: the KL-entropy pair (see Section 4.1) and the $W_p$-moment pair (see Section 4.2).

4.1. Entropy-entropy pair. Set $D = \mathrm{KL}$ and choose $R[\rho_x] = \mathrm{KL}(\rho_x\,\|\,\mathcal{M})$, with $\mathcal{M}\in\mathcal{P}(\Theta)$ being a desired output probability measure for which $\frac{d\rho_x}{d\mathcal{M}}$ exists. For the rest of this analysis, we assume that all probability distributions are absolutely continuous with respect to the Lebesgue measure on the corresponding spaces, and we use the same notation to refer to a distribution and its corresponding density interchangeably. Problem (1.6) can be rewritten as:
$$\rho^*_x = \operatorname*{arg\,min}_{\rho_x\in\mathcal{P}_{2,\mathrm{ac}}} L(\rho_x), \qquad L(\rho_x) := \mathrm{KL}(G_\#\rho_x\,\|\,\rho_y) + \alpha \int \log\frac{\rho_x}{\mathcal{M}}\,\rho_x\,dx. \tag{4.1}$$
Under these assumptions, we have the following theorem.

Theorem 4.1. The optimizer $\rho^*_x$ of (4.1) has the following property:
$$\frac{\rho^*_x(\,\cdot\,|\,G^{-1}(y))}{\mathcal{M}(\,\cdot\,|\,G^{-1}(y))} = g(y), \quad \forall\, y\in\mathcal{R}, \tag{4.2}$$
where $g(y)$ is
a constant depending only on $y$, while $\rho^*_x(\,\cdot\,|\,G^{-1}(y))$ and $\mathcal{M}(\,\cdot\,|\,G^{-1}(y))$ denote the conditional distributions of $\rho^*_x$ and $\mathcal{M}$ on the preimage $G^{-1}(y)$, respectively. In particular:
$$\rho^*_x(x) \propto \mathcal{M}(x)\left(\frac{\rho_y(G(x))}{\int \delta\big(G(x)-G(x')\big)\,\mathcal{M}(x')\,dx'}\right)^{\frac{1}{1+\alpha}}. \tag{4.3}$$
Proof. Since the KL divergence is convex (in the usual sense) and the pushforward action is a linear operator, the optimal solution of (4.1) can be obtained by solving the first-order optimality condition:
$$C_0 = \frac{\delta L}{\delta\rho_x}\Big|_{\rho^*_x} = 1 + \log\frac{(G_\#\rho^*_x)(G(x))}{\rho_y(G(x))} + \alpha\left(1+\log\frac{\rho^*_x}{\mathcal{M}(x)}\right),$$
where $C_0$ is any constant. This implies that
$$(G_\#\rho^*_x)(G(x))\left(\frac{\rho^*_x}{\mathcal{M}(x)}\right)^{\alpha} \propto \rho_y(G(x)). \tag{4.4}$$
Hence, there exists a to-be-determined function $g$ such that the following holds:
$$\frac{\rho^*_x}{\mathcal{M}(x)} = g(G(x)), \tag{4.5}$$
which also leads to (4.2). Next, we claim that (4.5) leads to
$$(G_\#\rho^*_x)(y) = g(y)\int \delta(y-G(x))\,\mathcal{M}(x)\,dx. \tag{4.6}$$
Indeed, consider a measurable function $f$ as a test function. Then, multiplying both sides of (4.6) by $f(y)$ and integrating against $y$, we can rewrite the left-hand side (LHS) as
$$\mathrm{LHS} = \int f(y)\,(G_\#\rho^*_x)(y)\,dy = \int f(G(x))\,\rho^*_x(x)\,dx = \int f(G(x))\,\mathcal{M}(x)\,g(G(x))\,dx,$$
where the last equality uses (4.5). For the right-hand side (RHS), we have
$$\mathrm{RHS} = \int g(y)f(y)\int \delta(y-G(x))\,\mathcal{M}(x)\,dx\,dy = \int g(G(x))\,f(G(x))\,\mathcal{M}(x)\,dx.$$
Substituting (4.5) and (4.6) into (4.4) leads to an algebraic equation for $g$, which can be solved to obtain the final result:
$$g(y) = \left(e^{C_0-1-\alpha}\,\frac{\rho_y(y)}{\int \delta(y-G(x))\,\mathcal{M}(x)\,dx}\right)^{\frac{1}{1+\alpha}}.$$
Since the algebraic equation only leads to one solution, we also conclude that the function $g$ appearing in (4.5) is unique.

If the regularization coefficient $\alpha = 0$, the parameter space $\Theta$ is compact, $\mathcal{M}$ is the uniform distribution over $\Theta$, and $\rho_y\in\mathcal{P}(\mathcal{R})$, then Theorem 4.1 arrives at the same result as Theorem 3.1.

4.2. $W_p$-moment pair. We now consider the case where $D = W_p^p$, the $p$-th power of the $p$-Wasserstein metric, and the regularization term is $R[\rho_x] = \int |x|^p\,d\rho_x(x)$, for $1\le p\le\infty$.
Then problem (1.6) can be rewritten as:
$$\rho^*_x = \operatorname*{arg\,min}_{\rho_x\in\mathcal{P}_p} L(\rho_x), \qquad L(\rho_x) = W_p^p(G_\#\rho_x,\rho_y) + \alpha\int |x|^p\,d\rho_x(x). \tag{4.7}$$
To characterize the minimizer, we first define a map $\tilde F$ as follows.

Definition 4.2. Let $\tilde F:\mathbb{R}^n\to\Theta\subset\mathbb{R}^m$ be defined as
$$\tilde F(y) = \operatorname*{arg\,min}_{x\in\Theta}\ \big\{|G(x)-y|^p + \alpha|x|^p\big\}. \tag{4.8}$$
Then we have the following theorem:

Theorem 4.3. Given $\rho_y\in\mathcal{P}(\mathbb{R}^n)$ and the forward map $G:\Theta\subset\mathbb{R}^m\to\mathcal{R}\subset\mathbb{R}^n$, where $\mathcal{R}$ and $\Theta$ are compact, the minimizer $\rho^*_x$ of problem (4.7) satisfies
$$\rho^*_x = \tilde F_\#\rho_y. \tag{4.9}$$
This theorem is built upon a nice observation about the regularizer:
$$R[\rho_x] = \int |x|^p\,d\rho_x(x) = W_p^p(\rho_x,\delta_0),$$
which allows us to rewrite the loss function as follows:

Lemma 4.4. For any $\rho_y\in\mathcal{P}(\mathbb{R}^n)$, the objective function defined in (4.7) can be rewritten as:
$$L(\rho_x) = W_p^p(G_\#\rho_x,\rho_y) + \alpha\int|x|^p\,d\rho_x(x) = W_p^p(\tilde G_\#\rho_x,\bar\rho_y), \tag{4.10}$$
with $\bar\rho_y = \rho_y\otimes\delta_0(x)$, where $\delta_0(x)\in\mathcal{P}(\mathbb{R}^m)$ denotes the Dirac delta centered at $0\in\mathbb{R}^m$, and $\tilde G = G\otimes \alpha^{1/p} I_m$, with $I_m$ being the $m$-dimensional identity matrix. More explicitly,
$$\tilde G(x):\Theta\subset\mathbb{R}^m\to\mathcal{R}\times\Theta\subset\mathbb{R}^{n+m}, \quad\text{with}\quad \tilde G(x) = \begin{pmatrix} G(x)\\ \alpha^{1/p}x\end{pmatrix}.$$
We leave the proof of this lemma for later. Applying Lemma 4.4, we obtain that problem (4.7) is equivalent to
$$\rho^*_x = \operatorname*{arg\,min}_{\rho_x\in\mathcal{P}_p} W_p(\tilde G_\#\rho_x,\bar\rho_y). \tag{4.11}$$
This can be considered an overdetermined problem in a form similar to (2.4), since $\tilde G:\Theta\subset\mathbb{R}^m\to\mathcal{R}\times\Theta\subset\mathbb{R}^{n+m}$, for which the range lies in a strictly
higher-dimensional space than its domain. As a result, Theorem 2.4 directly applies, which allows us to write down an explicit form of the solution to problem (4.7), as below.

Proof of Theorem 4.3. In this overdetermined system, according to Theorem 2.4, we simply need to find the function that solves:
$$F^{\mathrm{ex}}(y^*) = \operatorname*{arg\,min}_{x\in\Theta} |\tilde G(x)-y^*|^p, \quad\text{for } y^* = (y,0)\in\mathbb{R}^{n+m}.$$
Then the solution is $\rho^*_x = F^{\mathrm{ex}}_\#\bar\rho_y$. Comparing with Definition 4.2, it is straightforward to see that $F^{\mathrm{ex}}((y,0)) = \tilde F(y)$. Given the specific form of $\bar\rho_y$, we conclude $\rho^*_x = \tilde F_\#\rho_y$.

Proof of Lemma 4.4. We drop the subindex $m$ on the identity matrix because there is no ambiguity. Let $\pi_1$ be the optimal transport plan between $G_\#\rho_x$ and $\rho_y$ in the optimal transport problem defining the $W_p$ metric. Then
$$W_p^p(G_\#\rho_x,\rho_y) = \int |y'-y|^p\,\pi_1(dy'\,dy) = \int |G(x)-y|^p\,\hat\pi_1(dx\,dy),$$
where $\pi_1 = (G\times I)_\#\hat\pi_1$ for some $\hat\pi_1\in\Gamma(\rho_x,\rho_y)$, and $\Gamma(\rho_x,\rho_y)$ denotes all the couplings between $\rho_x$ and $\rho_y$. Note that if $G$ is not one-to-one, $\hat\pi_1$ may not be unique, but its existence is always guaranteed. Similarly:
$$\int |x|^p\,d\rho_x = \int |x-0|^p\,\hat\pi_2(dx\,dx'), \quad\text{with } \hat\pi_2 = \rho_x\otimes\delta_0(x)\in\Gamma(\rho_x,\delta_0(x)), \tag{4.12}$$
where $\Gamma(\rho_x,\delta_0(x))$ denotes all the couplings between $\rho_x$ and $\delta_0(x)$, and $\delta_0(x)\in\mathcal{P}(\mathbb{R}^m)$ denotes the Dirac delta at $0\in\mathbb{R}^m$. Defining $\hat\pi_3 = \hat\pi_1\otimes\delta_0(x)\in\Gamma(\rho_x,\rho_y\otimes\delta_0(x))$, we rewrite:
$$L(\rho_x) = \int |G(x)-y|^p\,\hat\pi_1(dx\,dy) + \alpha\int|x|^p\,d\rho_x = \int |\tilde G(x)-y'|^p\,\hat\pi_3(dx\,dy') = \int |y-y'|^p\,\pi_3(dy\,dy'),$$
with $y' = (y,0)$ and $\pi_3 = (\tilde G\times I)_\#\hat\pi_3\in\Gamma\big(\tilde G_\#\rho_x,\,\rho_y\otimes\delta_0(x)\big)$. To show that this equals $W_p^p(\tilde G_\#\rho_x,\bar\rho_y)$, we also need to show that $\pi_3$ is an optimal plan.
Suppose $\gamma\ne\pi_3$ is the optimal transport plan between $\tilde G_\#\rho_x$ and $\bar\rho_y = \rho_y\otimes\delta_0(x)$ under the cost function $c(y-y') = |y-y'|^p$. Then we have
$$W_p^p(\tilde G_\#\rho_x,\bar\rho_y) = \int |y-y'|^p\,\gamma(dy\,dy') = \int |\tilde G(x)-y'|^p\,\hat\gamma(dx\,dy'), \qquad \gamma = (\tilde G\times I)_\#\hat\gamma,\ \hat\gamma\in\Gamma\big(\rho_x,\rho_y\otimes\delta_0(x)\big)$$
$$= \int |G(x)-y|^p\,\hat\gamma_1(dx\,dy) + \alpha\int|x|^p\,d\rho_x, \qquad \hat\gamma_1\in\Gamma(\rho_x,\rho_y)$$
$$= \int |y-y'|^p\,\hat\gamma_2(dy\,dy') + \alpha\int|x|^p\,d\rho_x, \qquad \hat\gamma_2 = (G\times I)_\#\hat\gamma_1\in\Gamma(G_\#\rho_x,\rho_y)$$
$$\ge W_p^p(G_\#\rho_x,\rho_y) + \alpha\int|x|^p\,d\rho_x = \int |y-y'|^p\,\pi_3(dy\,dy'),$$
where $\hat\gamma_1$ and $\hat\gamma_2$ are determined by $\gamma$. Hence the optimal plan $\gamma$ achieves a cost no smaller than that of $\pi_3$, so $\pi_3$ must itself be optimal, and we conclude with (4.10).

4.3. Examples. Similar to what is done in Sections 2.3 and 3.3, we design a linear and a simple nonlinear example for which we can explicitly spell out the solutions to the regularized problem.

4.3.1. A linear pushforward map. We first assume $G = A\in\mathbb{R}^{m\times n}$ with $m\ge n$. Setting $D = W_2^2$ and $R[\rho_x] = \int |x|^2\,d\rho_x(x)$, the solution to (1.6), according to Theorem 4.3, is:
$$\rho_x = \big((A^\top A+\alpha I)^{-1}A^\top\big)_\#\rho_y. \tag{4.13}$$
This can be easily deduced from Definition 4.2, with $G$ replaced by the linear operator. One fascinating fact is that the Wasserstein-moment setting resonates nicely with Tikhonov regularization. In Euclidean space, Tikhonov regularization was introduced to temper the amplifying effect of small singular values of $A$ on small perturbations in the data. Namely, suppose the ground-truth data $y$ is perturbed to become the given data $y+\delta$; then one needs to regularize the problem to temper the amplification by the smallest singular value of $A$. The choice of $\alpha$ thus plays an important role in controlling the error magnification. The same phenomenon is observed in the probability space as well, as stated in Corollary 4.5 below.

Corollary 4.5. Denote by $\rho_y^{\mathrm{true}}$ the ground-truth probability measure and by $\rho_y^{\delta}$ its $\delta$-perturbation; both are in $\mathcal{P}_2(\mathbb{R}^n)$. Denote
then by $\rho_x^{\mathrm{true}}$ the solution to problem (2.4) with the reference data distribution $\rho_y^{\mathrm{true}}$, and by $\rho_x^{\delta,\alpha}$ the solution to problem (4.7) using regularization coefficient $\alpha$ on the reference data distribution $\rho_y^{\delta}$. Then we have
$$W_2(\rho_x^{\mathrm{true}},\rho_x^{\delta,\alpha}) \le \big\|(A^\top A+\alpha I)^{-1}A^\top\big\|_2\, W_2(\rho_y^{\mathrm{true}},\rho_y^{\delta}) + \big\|(A^\top A+\alpha I)^{-1}A^\top - A^\dagger\big\|_2\,\sqrt{\mathbb{E}_{\rho_y^{\mathrm{true}}}[|y|^2]}. \tag{4.14}$$
Set $\sigma_m = \sigma_{\min}(A)$ as the smallest singular value of $A$ and denote by $\rho(A)$ the set of all singular values of $A$. The bound in (4.14) can be further simplified to
$$W_2(\rho_x^{\mathrm{true}},\rho_x^{\delta,\alpha}) \le \max_{\sigma_i\in\rho(A)}\frac{\sigma_i}{\alpha+\sigma_i^2}\, W_2(\rho_y^{\mathrm{true}},\rho_y^{\delta}) + \frac{\alpha}{\sigma_m(\sigma_m^2+\alpha)}\sqrt{\mathbb{E}_{\rho_y^{\mathrm{true}}}[|y|^2]} \le \frac{1}{2\sqrt{\alpha}}\, W_2(\rho_y^{\mathrm{true}},\rho_y^{\delta}) + \frac{\alpha}{\sigma_m(\sigma_m^2+\alpha)}\sqrt{\mathbb{E}_{\rho_y^{\mathrm{true}}}[|y|^2]}. \tag{4.15}$$
This corollary clearly shows that the error in (4.15) has two sources: the former term comes from the noise in the measurement, and the latter from the regularization. It is common to equate these two contributions to seek the optimal choice of $\alpha$ that minimizes the combined error when $\sigma_m$ is known.

Proof. To show (4.14), we deploy the triangle inequality:
$$W_2(\rho_x^{\mathrm{true}},\rho_x^{\delta,\alpha}) \le W_2(\tilde\rho_x,\rho_x^{\delta,\alpha}) + W_2(\rho_x^{\mathrm{true}},\tilde\rho_x), \quad\text{where}\quad \tilde\rho_x = \big((A^\top A+\alpha I)^{-1}A^\top\big)_\#\rho_y^{\mathrm{true}}. \tag{4.16}$$
Notice that in this setting, by (4.13), $\rho_x^{\delta,\alpha} = ((A^\top A+\alpha I)^{-1}A^\top)_\#\rho_y^{\delta}$. By utilizing the continuity of the map $(A^\top A+\alpha I)^{-1}A^\top$ and citing [17, Theorem 3.2], we obtain the first term in (4.14). To control the second term, we expand:
$$W_2^2(\tilde\rho_x,\rho_x^{\mathrm{true}}) = W_2^2\Big(\big((A^\top A+\alpha I)^{-1}A^\top\big)_\#\rho_y^{\mathrm{true}},\ A^\dagger_\#\rho_y^{\mathrm{true}}\Big) \le \int \big|(A^\top A+\alpha I)^{-1}A^\top y - A^\dagger y\big|^2\,d\rho_y^{\mathrm{true}}(y) \le \big\|(A^\top A+\alpha I)^{-1}A^\top - A^\dagger\big\|_2^2 \int |y|^2\,d\rho_y^{\mathrm{true}}(y),$$
where the last integral equals $\mathbb{E}_{\rho_y^{\mathrm{true}}}[|y|^2]$. A similar estimate appears in [2, Theorem 3.1]. From (4.14), bounding the matrices' 2-norms by their singular values yields the first inequality in (4.15). Finally, we arrive at the conclusion by noting that
$$\max_{\sigma_i\in\rho(A)}\frac{\sigma_i}{\alpha+\sigma_i^2} \le \frac{1}{2\sqrt{\alpha}}.$$

4.3.2. A simple nonlinear example. In this section, we revisit the example in Section 3.3.2, but consider the optimization formulation (1.6).
Recall $G: B_{(1,1)}(1)\to[0,1]$, where $B_{(1,1)}(1)$ is the ball centered at $(1,1)$ with radius $1$, and $r = G(x_1,x_2) = \sqrt{(x_1-1)^2+(x_2-1)^2}$. Then the preimage of any $r$ is:
$$G^{-1}(r) = \big\{(x_1,x_2) = (1+r\cos\theta,\ 1+r\sin\theta),\ \theta\in[0,2\pi]\big\}.$$
Suppose we are given a probability measure $\mu_r$ with support in $[0,1]$. Then the two problems mentioned above are to find a distribution on $B_{(1,1)}(1)$ that minimizes (1.6) under different choices of $R$. Explicit solutions are available:
– $D = \mathrm{KL}$ and $R[\rho_x] = \mathrm{KL}(\rho_x\,\|\,\mathcal{M})$. This is the formulation deployed in (4.1). According to Theorem 4.1, especially Equation (4.3), when $\alpha = 1$:
$$\rho^*(x_1,x_2) \propto \mathcal{M}(x_1,x_2)\sqrt{\frac{\mu_r(G(x_1,x_2))}{\int \delta\big(G(x_1,x_2)-G(x_1',x_2')\big)\,\mathcal{M}(x_1',x_2')\,dx_1'dx_2'}}.$$
In this context, we need to compute the denominator term
$$\int \delta\big(G(x_1,x_2)-G(x_1',x_2')\big)\,\mathcal{M}(x_1',x_2')\,dx_1'dx_2'.$$
An analytic solution is not available for an arbitrarily given $\mathcal{M}$, but it is explicit when $\mathcal{M}$ has special forms. One such example is
$$\mathcal{M} = C_{\mathcal{M}}\exp\Big(-\frac{x_1^2+x_2^2}{2}\Big),$$
where $C_{\mathcal{M}}$ is the normalizing constant. Under this setup, we define $(x_1',x_2') := (1+r'\cos\theta,\,1+r'\sin\theta)$ for given $r'$ and $\theta$, and set $r = G(x_1,x_2)$. Upon conducting a change of variables:
$$\int \delta\big(G(x_1,x_2)-G(x_1',x_2')\big)\,\mathcal{M}(x_1',x_2')\,dx_1'dx_2' = \int \delta(r-r')\,\mathcal{M}(1+r'\cos\theta,\,1+r'\sin\theta)\,r'\,dr'\,d\theta = rC_{\mathcal{M}}\int_0^{2\pi}\exp\Big(-\tfrac{1}{2}\big((r\cos\theta+1)^2+(r\sin\theta+1)^2\big)\Big)\,d\theta = rC_{\mathcal{M}}\,e^{-\frac{1}{2}(2+r^2)}\int_0^{2\pi}\exp\big(-r\cos\theta-r\sin\theta\big)\,d\theta = 2\pi rC_{\mathcal{M}}\,e^{-\frac{1}{2}(2+r^2)}\, I_0(\sqrt{2}\,r),$$
where $I_0$ is the Bessel function⁵. This leads to the final result
$$\rho^*(x_1,x_2) \propto \exp\Big(-\frac{x_1^2+x_2^2}{2}\Big)\sqrt{\frac{\mu_r(r)}{2\pi r\, e^{-\frac{1}{2}(2+r^2)}\, I_0(\sqrt{2}\,r)}}, \quad\text{where } r = G(x_1,x_2).$$
We present the optimal solution $\rho^*$ in Figure 3 for both the unregularized case ($\alpha=0$, also depicted in Figure 2a) and the regularized case ($\alpha=1$). Given that the prior distribution $\mathcal{M}$ is centered at $x=0$, the regularized solution (Figure 3b) combines characteristics of both the prior distribution $\mathcal{M}$ and the unregularized distribution shown in Figure 3a.

⁵The Bessel function is defined as $I_0(a) = \frac{1}{2\pi}\int_0^{2\pi}\exp(a\cos\theta)\,d\theta$.

Figure 3 (panels: (a) $\alpha=0$; (b) $\alpha=1$): A comparison of the reconstructed density for (a) the unregularized case with entropy cost $E[\rho_x] = \int \rho_x\ln\rho_x\,dx$ and (b) the regularized case with $(D,R) = (\mathrm{KL},\,\mathrm{KL}(\cdot\,\|\,\mathcal{M}))$, where $\mathcal{M}\propto \exp\big(-\frac{x_1^2+x_2^2}{2}\big)$ and the parameter $\alpha=1$. The data distribution is chosen to be $\mu_r = U([0,1])$, the uniform distribution on $[0,1]$.

– $D = W_2^2$ and $R[\rho_x] = \int |x|^2\,d\rho_x(x)$. Then, according to Definition 4.2, we have, in this specific context:
$$F(r) = \operatorname*{arg\,min}_{(x_1,x_2)\in B_{(1,1)}(1)}\big\{|G(x_1,x_2)-r|^2 + \alpha|x_1|^2 + \alpha|x_2|^2\big\}.$$
Through a simple calculation, it can be shown that the optimal solution has to be supported on the diagonal set $\{(x_1,x_2)\in B_{(1,1)}(1): x_1=x_2\}$, reducing the problem to solving:
$$F(r) := \operatorname*{arg\,min}_{x\in[0,1]}\big(\sqrt{2}\,|x-1|-r\big)^2 + 2\alpha x^2 = \operatorname*{arg\,min}_{x\in[0,1]}\Big[(2+2\alpha)x^2 + (-4+2\sqrt{2}\,r)x + (2-2\sqrt{2}\,r+r^2)\Big].$$
Since this objective function is quadratic, the solution is explicit:
$$\mathbf{F}(r) = (F(r),F(r)), \quad\text{with}\quad F(r) = \frac{1-\frac{\sqrt{2}}{2}r}{1+\alpha}.$$
According to Theorem 4.3, we have $\rho^*_x = \mathbf{F}_\#\mu_r$. It is a distribution concentrated along the diagonal.
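The quadratic minimization above is easy to sanity-check. A minimal sketch (NumPy assumed; the grid resolution and the value $\alpha = 1$ are illustrative choices) brute-forces the one-dimensional objective on $[0,1]$ and compares it with the closed form $F(r) = \big(1-\tfrac{\sqrt{2}}{2}r\big)/(1+\alpha)$:

```python
import numpy as np

alpha = 1.0
xs = np.linspace(0.0, 1.0, 200001)       # fine grid on [0, 1]

def F_bruteforce(r):
    # Minimize (sqrt(2)|x - 1| - r)^2 + 2*alpha*x^2 over x in [0, 1].
    vals = (np.sqrt(2.0) * np.abs(xs - 1.0) - r) ** 2 + 2.0 * alpha * xs**2
    return xs[np.argmin(vals)]

for r in (0.0, 0.3, 0.7, 1.0):
    closed_form = (1.0 - np.sqrt(2.0) / 2.0 * r) / (1.0 + alpha)
    assert abs(F_bruteforce(r) - closed_form) < 1e-4
print("brute-force minimizer matches the closed form F(r)")
```

Setting $\alpha = 0$ in the same brute force recovers $H(r) = 1 - r/\sqrt{2}$, the unregularized least-norm map of Section 3.3.2.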
In Figure 4a, we revisit the unregularized solution with $E[\rho_x] = \int |x|^2\,d\rho_x$ (also shown in Figure 2b). In Figure 4b, we present the optimal parameter solution with regularization. The dotted circles in Figure 4a highlight the level sets $G^{-1}(r)$ for a fixed $r$. Only the single point on the level set closest to the origin is selected, according to the function $H$ (see Section 3.3.2). With a nonzero regularization coefficient, the map shifts from $H$ to its affine transform $F$, which depends on the coefficient $\alpha$. As a result, the original level sets (dotted circles in Figure 4b) are translated into new level sets (solid regions in Figure 4b), and the point closest to the origin is selected to inherit all the mass from the data distribution at $r$. In other words, the two distributions in Figure 4 are related through a linear pushforward map.

Figure 4 (panels: (a) $\alpha=0$; (b) $\alpha\ne 0$): A comparison of the optimal distributions under different setups: (a) shows the result without regularization and with the minimum-norm cost $E[\rho_x] = \int |x|^2\,d\rho_x$, while (b) shows the result for the regularized version with $(D,R) = \big(W_2^2,\,\int|x|^2\,d\rho_x(x)\big)$ and $\alpha\ne 0$.

Acknowledgments. The authors thank
Prof. Youssef Marzouk and Prof. Lénaïc Chizat for comments and suggestions on an earlier version of this work.

REFERENCES

[1] L. Ambrosio, N. Gigli, and G. Savaré, Gradient flows: in metric spaces and in the space of probability measures, Springer Science & Business Media, 2005.
[2] R. Baptista, B. Hosseini, N. Kovachki, Y. Marzouk, and A. Sagiv, An approximation theory framework for measure-transport sampling algorithms, Mathematics of Computation, 94 (2025), pp. 1863–1909.
[3] J.-D. Benamou, G. Carlier, M. Cuturi, L. Nenna, and G. Peyré, Iterative Bregman projections for regularized transportation problems, SIAM Journal on Scientific Computing, 37 (2015), pp. A1111–A1138.
[4] L. Borcea, Imaging in Random Media, Springer New York, New York, NY, 2015, pp. 1279–1340.
[5] L. Bracchini, S. Loiselle, A. M. Dattilo, S. Mazzuoli, A. Cózar, and C. Rossi, The Spatial Distribution of Optical Properties in the Ultraviolet and Visible in an Aquatic Ecosystem, Photochemistry and Photobiology, 80 (2004), pp. 139–149.
[6] J. Breidt, T. Butler, and D. Estep, A measure-theoretic computational method for inverse sensitivity problems I: Method and analysis, SIAM Journal on Numerical Analysis, 49 (2011), pp. 1836–1859.
[7] T. Butler and D. Estep, A numerical method for solving a stochastic inverse problem for parameters, Annals of Nuclear Energy, 52 (2013), pp. 86–94.
[8] T. Butler, D. Estep, and J. Sandelin, A computational measure theoretic approach to inverse sensitivity problems II: A posteriori error analysis, SIAM Journal on Numerical Analysis, 50 (2012), pp. 22–45.
[9] T. Butler, D. Estep, S. Tavener, C. Dawson, and J. J. Westerink, A measure-theoretic computational method for inverse sensitivity problems III: Multiple quantities of interest, SIAM/ASA Journal on Uncertainty Quantification, 2 (2014), pp. 174–202.
[10] R. Caflisch, D. Silantyev, and Y.
Yang, Adjoint DSMC for nonlinear Boltzmann equation constrained optimization, Journal of Computational Physics, 439 (2021), p. 110404.
[11] E. J. Candes, J. K. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences, 59 (2006), pp. 1207–1223.
[12] S. Daun, J. Rubin, Y. Vodovotz, A. Roy, R. Parker, and G. Clermont, An ensemble of models of the acute inflammatory response to bacterial lipopolysaccharide in rats: results from parameter space reduction, Journal of Theoretical Biology, 253 (2008), pp. 843–853.
[13] M. Davidian and D. M. Giltinan, Nonlinear models for repeated measurement data: an overview and update, Journal of Agricultural, Biological, and Environmental Statistics, 8 (2003), pp. 387–419.
[14] C. del Castillo-Negrete, M. Pilosov, T. Butler, C. Dawson, and T. Y. Yen, Stochastic inversion with maximal updated densities for storm surge wind drag parameter estimation, in AGU Fall Meeting Abstracts, vol. 2022, 2022, pp. NG42B–0401.
[15] S. Di Marino, B. Maury, and F. Santambrogio, Measure sweeping processes, Journal of Convex Analysis, 23 (2016), pp. 567–601.
[16] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of inverse problems, vol. 375, Springer Science & Business Media, 1996.
[17] O. G. Ernst, A. Pichler, and B. Sprungk, Wasserstein sensitivity of risk and uncertainty
propagation, SIAM/ASA Journal on Uncertainty Quantification, 10 (2022), pp. 915–948.
[18] J. Ferron, M. Walker, L. Lao, H. S. John, D. Humphreys, and J. Leuer, Real time equilibrium reconstruction for tokamak discharge control, Nuclear Fusion, 38 (1998), p. 1055.
[19] J. Giraldo-Barreto, S. Ortiz, E. H. Thiede, K. Palacio-Rodriguez, B. Carpenter, A. H. Barnett, and P. Cossio, A Bayesian approach to extracting free-energy profiles from cryo-electron microscopy experiments, Scientific Reports, 11 (2021), p. 13657.
[20] T. Gneiting and M. Katzfuss, Probabilistic forecasting, Annual Review of Statistics and Its Application, 1 (2014), pp. 125–151.
[21] G. H. Golub, P. C. Hansen, and D. P. O'Leary, Tikhonov regularization and total least squares, SIAM Journal on Matrix Analysis and Applications, 21 (1999), pp. 185–194.
[22] X. Huan, J. Jagalur, and Y. Marzouk, Optimal experimental design: Formulations and computations, Acta Numerica, 33 (2024), pp. 715–840.
[23] R. Jin, M. Guerra, Q. Li, and S. Wright, Optimal design for linear models via gradient flow, arXiv preprint arXiv:2401.07806, (2024).
[24] O. Korotkova, S. Avramov-Zamurovic, R. Malek-Madani, and C. Nelson, Probability density function of the intensity of a laser beam propagating in the maritime environment, Opt. Express, 19 (2011), pp. 20322–20331.
[25] Q. Li, M. Oprea, L. Wang, and Y. Yang, Stochastic Inverse Problem: stability, regularization and Wasserstein gradient flow, arXiv preprint arXiv:2410.00229, (2024).
[26] Q. Li, L. Wang, and Y. Yang, Differential equation–constrained optimization with stochasticity, SIAM/ASA Journal on Uncertainty Quantification, 11 (2023), pp. 491–515.
[27] P. W. Marcy and R. E. Morrison, "Stochastic Inverse Problems" and Changes-of-Variables, arXiv preprint arXiv:2211.15730, (2022).
[28] L. I. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: Nonlinear Phenomena, 60 (1992), pp. 259–268.
[29] W. S. Tang, D.
Silva-Sánchez, J. Giraldo-Barreto, B. Carpenter, S. M. Hanson, A. H. Barnett, E. H. Thiede, and P. Cossio, Ensemble reweighting using cryo-EM particle images, The Journal of Physical Chemistry B, 127 (2023), pp. 5410–5421.
[30] R. D. White, J. D. Jakeman, T. Wildey, and T. Butler, Building Population-Informed Priors for Bayesian Inference Using Data-Consistent Stochastic Inversion, arXiv preprint arXiv:2407.13814, (2024).
[31] J. Yu, V. M. Zavala, and M. Anitescu, A scalable design of experiments framework for optimal sensor placement, Journal of Process Control, 67 (2018), pp. 44–55.
arXiv:2504.19089v1 [math.ST] 27 Apr 2025

Semiparametric M-estimation with overparameterized neural networks

Shunxing Yan*, Ziyuan Chen† and Fang Yao‡
School of Mathematical Sciences, Center for Statistical Science, Peking University, Beijing, China

Abstract

We focus on semiparametric regression, which has played a central role in statistics, and exploit the powerful learning ability of deep neural networks (DNNs) while enabling statistical inference on parameters of interest that offers interpretability. Despite the success of classical semiparametric method/theory, establishing the √n-consistency and asymptotic normality of the finite-dimensional parameter estimator in this context remains challenging, mainly due to nonlinearity and potential tangent space degeneration in DNNs. In this work, we introduce a foundational framework for semiparametric M-estimation, leveraging the approximation ability of overparameterized neural networks that circumvent tangent degeneration and align better with training practice nowadays. The optimization properties of general loss functions are analyzed, and the global convergence is guaranteed. Instead of studying the "ideal" solution to the minimization of an objective function, as in most literature, we analyze the statistical properties of algorithmic estimators, and establish nonparametric convergence and parametric asymptotic normality for a broad class of loss functions. These results hold without assuming the boundedness of the network output and even when the true function lies outside the specified function space. To illustrate the applicability of the framework, we also provide examples from regression and classification, and the numerical experiments provide empirical support to the theoretical findings.

1 Introduction

1.1 Literature review

Semiparametric models constitute an essential category of statistical methods, involving both finite- and infinite-dimensional parameters.
Typically, more consideration is given to the finite-dimensional components, and the latter is regarded as a nuisance (Van der Vaart, 2000; Kosorok, 2008). Compared to parametric models with only finite-dimensional parameters, semiparametric

*E-mail: sxyan@stu.pku.edu.cn
†E-mail: chenziyuan@pku.edu.cn
‡Fang Yao is the corresponding author, E-mail: fyao@math.pku.edu.cn.

models provide stronger representation capabilities. Moreover, in contrast to nonparametric models, they alleviate the curse of dimensionality and allow easier inference on the finite-dimensional parameters of primary interest. Benefiting from these advantages, semiparametric models have garnered increasing attention in statistics over an extended period, including regression analysis (Ahmad et al., 2005; Liang et al., 2010), survival analysis (Cox, 1972, 1975; Huang, 1999; Zeng and Lin, 2007), causal inference (Rosenbaum and Rubin, 1983; Chernozhukov et al., 2018) and many others; see Tsiatis (2006); Kosorok (2008); Horowitz (2012) for a comprehensive introduction. Among these works, one of the most common estimation methods is semiparametric M-estimation (Van de Geer, 2000; Ma and Kosorok, 2005b; Cheng and Huang, 2010), where the estimators for both parametric and nonparametric parameters are obtained simultaneously by minimizing (or maximizing) certain objective functions. There has been much research on this method, employing classical nonparametric regression techniques, including linear sieves (Ding and Nan, 2011; Ma et al., 2021; Tang et al., 2023), the local polynomial method with closed-form representations (Wang et al., 2010; Liang et al., 2010) and penalized methods (Mammen and Van de Geer, 1997; Ma and Kosorok, 2005a). In general, efficient asymptotic √n-normality of
finite-dimensional parameters and minimax optimal convergence of the nonparametric parts can be established. However, most existing works based on traditional techniques often face challenges when applied to the complex structured data increasingly encountered in modern applications. Consequently, there is growing interest in developing estimators based on advanced methodologies such as neural networks, with rigorous theoretical guarantees, motivated by the significant achievements of deep learning in recent years.

Deep neural networks (DNNs), especially those with the ReLU activation function, have received significant attention in recent studies due to their strong learning abilities. Approximation bounds for ReLU DNNs have been established in various function settings (Yarotsky, 2017, 2018; Schmidt-Hieber, 2020; Suzuki, 2018; Yarotsky and Zhevnerchuk, 2020; Kohler and Langer, 2021), which are shown to be (nearly) optimal in terms of both width and depth (Shen, 2020; Lu et al., 2021; Shen et al., 2022). A key advantage of DNN estimators, distinguishing them from classical nonparametric regression estimators, is their adaptivity to various low intrinsic-dimensional structures, such as the hierarchical composition model, where the target function follows a specific compositional structure (Bauer and Kohler, 2019; Schmidt-Hieber, 2020; Kohler and Langer, 2021), and cases with low-dimensional inputs (Schmidt-Hieber, 2019; Chen et al., 2019; Cloninger and Klock, 2021; Nakada and Imaizumi, 2020; Jiao et al., 2021). Therefore, DNNs are increasingly recognized as an important nonparametric approach in a wide range of statistical problems, including nonparametric regression (Bauer and Kohler, 2019; Schmidt-Hieber, 2020; Suzuki, 2018; Kohler and Langer, 2021; Wang et al., 2024), survival analysis (Zhong et al., 2021, 2022), causal inference (Farrell et al., 2021; Chen et al., 2024), factor augmented and interaction models (Fan and Gu, 2022; Bhattacharya et al.
, 2023), and repeated measurements models (Yan et al., 2025b), among others.

As a common ground, most of the previously mentioned statistical works study the generalization ability of (nearly) empirical risk minimizers and bound the estimation error by the network size relative to the training sample. However, the optimization landscape in deep learning presents unprecedented difficulty due to nonlinearity and nonconvexity, so the computed estimates are more likely to be local minimizers. Moreover, in practice, neural networks are often trained with overparameterization, i.e., the number of network parameters may vastly exceed the training sample size, while still avoiding the traditional pitfall of overfitting (Zhang et al., 2021). This does not align with the statistical analysis in the previously mentioned works. Fortunately, there have been meaningful results addressing these gaps. For instance, Jacot et al. (2018) introduced a framework that analyzes the training of wide neural networks, drawing an analogy to kernel regression. Specifically, they compared the gradient flow of the least squares loss with that of kernel regression and demonstrated that, as the network width approaches infinity, the Neural Tangent Kernel (NTK) converges to a time-invariant limit (Arora et al., 2019; Lee et al., 2019; Bietti and Mairal, 2019; Geifman et al., 2020; Chen and Xu, 2020; Bietti and Bach, 2021; Lai et al., 2023a). From the perspective of optimization, wide neural networks
with random initialization have been shown, with high probability, to achieve global minimization through gradient-based methods (Du et al., 2018; Li and Liang, 2018; Allen-Zhu et al., 2019). In terms of statistical generalization performance, Hu et al. (2021) and Suh et al. (2021) established the convergence rate of the penalized least squares regression problem, and more recent studies have examined least squares regression with early stopping, providing uniform convergence guarantees for neural network kernels (Lai et al., 2023a; Li et al., 2024; Lai et al., 2023b).

1.2 Challenges and main contributions

Despite the impressive empirical performance of deep learning models, they are commonly regarded as black boxes, lacking interpretability and theoretical support. Semiparametric modeling provides a useful way to make interpretable inferences while leveraging the learning ability of neural networks. Training DNNs involves large-scale network parameter learning via loss minimization, and aligns naturally with the framework of semiparametric $M$-estimation, which simultaneously estimates the nonparametric (via network weights) and parametric components. Several studies have explored this problem (e.g., Zhong et al., 2022), but defaulted to some "good" assumptions on the tangent space, a critical issue we shall discuss below. Another commonly used method is $Z$-estimation, where the finite-dimensional parameter is typically taken as a functional of the nonparametric component. It usually takes two steps: first estimating the nonparametric component and then substituting it into an equation to solve for the parametric component. For such two-step procedures, the doubly debiased/robust methods (Chernozhukov et al., 2018; Farrell et al., 2021) are employed to achieve efficiency. By comparison, semiparametric $M$-estimation is computationally straightforward via network training and requires little extra statistical machinery, giving it broader applicability.
While a simpler two-step plug-in method can also attain efficiency (Chen et al., 2024), one has to impose assumptions on the tangent approximation ability in a manner similar to Zhong et al. (2022).

The above discussion inspires our investigation: what foundation of semiparametric $M$-estimation theory is undermined in neural network based models, and how can it be rebuilt? Denote the semiparametric model as $P_{\beta,f}$, where $\beta \in \mathbb{R}^p$ is the finite-dimensional parameter of interest and $f: \mathbb{R}^d \to \mathbb{R}$ is a nuisance function belonging to an infinite-dimensional space. The semiparametric $M$-estimation procedure aims to optimize an objective function, expressed as $P_n l_{\beta,f}$ with the empirical measure $P_n$, either over a suitable sieve or with an additional penalty term. Notably, the information of the parameters $\beta$ and $f$ is coupled. Since the nuisance parameter $f$ resides in an infinite-dimensional space, it significantly increases the complexity of the estimation problem. Consequently, it is reasonable that the full set of parameters achieves a nonparametric convergence rate slower than $n^{-1}$. A central challenge of semiparametric $M$-estimation is to establish the $\sqrt{n}$-consistency and asymptotic normality of the estimator of the finite-dimensional component $\beta$. To decouple the two parts of the parameters, taking likelihood estimation as an example, the efficient score is commonly utilized, which removes the effect of the nuisance parameters. Specifically, we define $\Lambda_{\beta,f}$ as the tangent set for the
nuisance parameter, i.e., it contains score functions $\partial_f l_{\beta,f}[h]$ for all appropriate directions $h$; see Section 3 for a rigorous definition. Additionally, denote by $\Pi_{\beta,f}$ the orthogonal projection operator onto the closure of the linear span of $\Lambda_{\beta,f}$. Then the efficient score can be constructed by subtracting from $\partial_\beta l_{\beta,f}$ its projection onto the nuisance tangent space $\Lambda_{\beta,f}$, i.e., $\tilde S = S_1(\beta_0, f_0) - \Pi_{\beta_0,f_0} S_1(\beta_0, f_0)$, where $S_1(\beta, f) = \partial l_{\beta,f}/\partial\beta$. Accordingly, $\tilde S$ is orthogonal to the tangent space $\Lambda_{\beta,f}$, thus mitigating the nuisance information. Once the empirical efficient score $P_n \tilde S(\hat\beta, \hat f)$ is proven to be as small as $o_p(n^{-1/2})$, the asymptotic normality of $\sqrt{n}(\hat\beta - \beta_0)$ can be established via standard Taylor expansion and entropy analysis (Van der Vaart, 2000; Kosorok, 2008).

However, establishing that
$$P_n \tilde S(\hat\beta, \hat f) = P_n\big(S_1(\hat\beta, \hat f) - \Pi_{\beta_0,f_0} S_1(\beta_0, f_0)\big)$$
is $o_p(n^{-1/2})$ for neural network estimators is highly nontrivial. The first term on the right-hand side is typically easy to bound, as $\hat\beta$ is a finite-dimensional (nearly) minimizer. The main challenge arises from bounding the second term. Because $\Pi_{\beta_0,f_0} S_1(\beta_0, f_0)$ lies in the closure of the linear span of $\Lambda_{\beta,f}$, there could be some direction $\tilde h$ such that $\Pi_{\beta_0,f_0} S_1(\beta_0, f_0) = S_2(\beta_0, f_0)[\tilde h]$. Denote the neural network function as $f_\theta$, where $\theta$ represents the network parameters. When $\hat f_\theta$ is an exact local or global minimizer, we can only obtain that $\partial_\theta P_n l_{\hat\beta,\hat f_\theta} = 0$, but not necessarily $P_n S_2(\hat\beta, \hat f)[\tilde h] = 0$. Denote the tangent space of the network at $\theta$ by
$$T_\theta \mathcal{F}_{NN} := \mathrm{Span}\{\nabla_\theta f_\theta(x)^T(\theta_1 - \theta) : \text{all } \theta_1 \text{ having the same shape as } \theta\}.$$
Then, for any $h \in T_{\hat\theta}$, we have $P_n(S_2(\hat\beta, \hat f)[h]) = 0$. If the components of $\tilde h$ lie in, or can be well approximated by, the tangent space $T_{\hat\theta}$, we can conclude that $P_n S_2(\hat\beta, \hat f)[\tilde h]$ is sufficiently small. Therefore, it is preferable for the tangent space to be large, so as to contain or approximate as many directions as possible.
For traditional linear estimators using a sieve space $\mathcal{S} = \{f_\alpha(x) : f_\alpha(\cdot) = \sum_{i=1}^m \alpha_i b_i(\cdot)\}$ with basis functions $\{b_i(\cdot), i = 1, \dots, m\}$ (e.g., regression spline estimators), a large tangent space is straightforward to guarantee, since linearity ensures that the tangent space of $\mathcal{S}$ at any $f_\alpha(\cdot) \in \mathcal{S}$ is $T_\alpha \mathcal{S} = \mathrm{Span}\{\partial_{\alpha_i} f_\alpha(\cdot)\} \cong \mathcal{S}$. This provides a strong approximation ability. However, this does not necessarily hold for the tangent space of neural networks.

Therefore, a crucial question arises concerning the approximation ability of the neural network tangent space, which should not simply be suppressed by presupposed conditions as in previous works (Zhong et al., 2022; Chen et al., 2024): Does the tangent space $T_\theta \mathcal{F}_{NN}$ of a neural network always have good approximation ability at every possible $\theta$? If not, can semiparametric DNN $M$-estimators still achieve $\sqrt{n}$-consistency for the finite-dimensional parameters?

Unfortunately, the answer to the former question is negative. As a toy example, consider a fully connected neural network in which the weights and biases within each layer take the same values. Due to the lack of identifiability among network parameters, the derivatives with respect to parameters in equivalent positions are the same. Consequently, the dimension of the tangent space $T_\theta \mathcal{F}_{NN}$ is only linearly related to the depth of the neural network, which is much smaller than the number of parameters (i.e., the square of the width multiplied by the depth). In such cases, the tangent space $T_\theta \mathcal{F}_{NN}$ will not provide a sufficiently rich
approximation ability for possible $\tilde h$. As discussed above, whether the efficient score is sufficiently small to establish the $\sqrt{n}$-consistency of the parametric component of the DNN estimator remains an open problem.

Nonetheless, the counterexample presented above is so extreme that it is reasonable to presume that it occurs only with small probability during learning. Hence, a workable theoretical treatment requires a more careful analysis of the estimation procedure and of the randomness introduced by the neural network, such as the random initialization. Additionally, we hope to address a limitation of most current statistical analyses of deep neural networks: they ignore the nonlinearity and nonconvexity of the optimization landscape and directly consider the "ideal" global minimizer. Motivated by these considerations, we study overparameterized neural networks, which provide a meaningful way to analyze the optimization and statistical performance of the algorithmic solutions. Further, for generality, we consider a broader class of loss functions beyond the least squares criterion, which introduces additional difficulties for both the optimization convergence analysis and the statistical inference of the algorithmic solution. In summary, our main contributions to tackling these challenges are elaborated below.

β€’ Methodologically, for the semiparametric $M$-estimation problem, we employ a new overparameterized neural network estimator, which better aligns with practical settings and facilitates the optimization analysis. The $\ell_2$ penalization of the neural network parameters is applied for regularization, allowing subsequent analyses of general losses rather than only least squares regression. This also ensures that the scores at the estimate remain sufficiently small, contributing to the asymptotic normality of the finite-dimensional parameter estimate.
We then consider a continuous gradient flow framework to investigate the training dynamics of the neural network. By incorporating random initialization into the analysis, we bound, with high probability, the difference between the overparameterized neural network training flow and the ideal RKHS optimization process. This also suggests that the counterexamples mentioned above, with degenerate tangent spaces, occur only with small probability. Furthermore, we establish the global algorithmic convergence of the proposed overparameterized neural semiparametric $M$-estimator, providing a theoretical guarantee for the training procedure.

β€’ Theoretically, we analyze the statistical properties of the algorithmic solution, including the generalization error of the nonparametric component and the $\sqrt{n}$-consistency/asymptotic normality of the parametric component. Specifically, unlike the common convergence analysis of neural network estimation, which often assumes that the network output is bounded, the optimization solution is analyzed directly. Especially when the true function does not lie in the desired space and the loss function is of a general form instead of least squares, estimators that do not impose boundedness assumptions on the nonparametric part present significant challenges to theoretical analysis. To establish the convergence rate, we introduce a new condition, referred to as the "Huberized margin condition", which relaxes the standard assumptions and is easier to satisfy for unbounded nonparametric candidate function classes. Building on the above results and some regularity conditions, we show that the parametric component
achieves $\sqrt{n}$-consistency and asymptotic normality. The latter result demonstrates efficiency for the least favorable submodels and enables interpretable inference for the parameters of interest. Lastly, we discuss two commonly used models: partially linear regression and classification. In these examples, the aforementioned properties of the proposed estimator are verified to be valid. Moreover, the numerical results corroborate our theoretical analysis through the finite sample behavior.

1.3 Notations and organization

In this paper, we use the notation $a_n \lesssim b_n$ to indicate that $a_n \le C b_n$ for some constant $C > 0$ independent of $n$, with $\gtrsim$ defined analogously. Then $a_n \asymp b_n$ means that both $a_n \lesssim b_n$ and $b_n \lesssim a_n$. Denote $a_n = o(b_n)$ if $a_n/b_n \to 0$, $a_n = O(b_n)$ if $a_n \lesssim b_n$, and $a_n = \Theta(b_n)$ if $a_n \asymp b_n$. We also use $a_n = o_p(b_n)$ to indicate that $a_n/b_n \to_p 0$, and $a_n = O_p(b_n)$ if, for any $\varepsilon > 0$, there exists a constant $C_\varepsilon > 0$ such that $\sup_n P(\|a_n\| \ge C_\varepsilon |b_n|) < \varepsilon$. Additionally, $\mathrm{poly}(a, b, \dots)$ denotes some polynomial in $(a, b, \dots)$. The notation $I$ denotes the indicator function. For probabilistic integrals, $P$ represents the theoretical expectation with respect to the population distribution, while $P_n$ denotes the empirical expectation derived from the finite sample.

The rest of the article is organized as follows. In Section 2, following an overview of overparameterized neural networks and neural tangent kernels, we introduce the semiparametric model framework for neural $M$-estimation trained by the gradient flow algorithm. Section 3 develops the statistical theory for the algorithmic estimators, addressing the convergence of the nonparametric component and the asymptotic normality of the parametric component. In Section 4, we examine two illustrative examples, partially linear regression and classification, demonstrating the validity of the theoretical results.
Finally, the numerical experiments are presented in Section 5, providing empirical evidence for the proposed method and theory.

2 Semiparametric M-estimation with neural networks

In this section, we first introduce the overparameterized neural network used and some important related concepts. We then present the considered semiparametric model and the neural $M$-estimation framework. Furthermore, a detailed analysis of the optimization convergence is provided.

2.1 Overparameterized neural network and neural tangent kernel

In this paper, we primarily consider feedforward fully connected neural networks. Given positive integers $m_0, m_1, \dots, m_{L+1}$ with $m_0 = d$ and $m_{L+1} = 1$, the network is defined as
$$\tilde f_\theta(x) = L_L \circ \sigma \circ L_{L-1} \circ \sigma \circ L_{L-2} \circ \sigma \circ \cdots \circ L_1 \circ \sigma \circ L_0(x), \qquad (1)$$
where $\sigma : \mathbb{R}^{m_i} \to \mathbb{R}^{m_i}$ is the activation function applied componentwise. For $i \in \{0, 1, \dots, L-1\}$, $L_i(y) = \sqrt{2}(W_i y + b_i)/\sqrt{m_{i+1}}$, and for $i = L$, $L_i(y) = W_i y + b_i$, where $W_i \in \mathbb{R}^{m_{i+1} \times m_i}$ and $b_i \in \mathbb{R}^{m_{i+1}}$ for $y \in \mathbb{R}^{m_i}$. In this work, we take $\sigma$ to be the rectified linear unit (ReLU) function $a \mapsto \max\{a, 0\}$ by default, and most results presented can be generalized to other activation functions. Additionally, we use $\theta$ to denote all parameters of the neural network. To conveniently analyze the effect of the widths $m_i$, we assume that $m \le \min_{1 \le i \le L} m_i \le \max_{1 \le i \le L} m_i \le Cm$ always holds for some constant $C$. In practice, neural networks are overparameterized, which means the width $m$ is much larger than the sample size $n$. To better align with practical settings and facilitate the analysis of optimization properties, we also consider overparameterized networks in the following. Before training, random initialization of the network weights is usually employed to break symmetry and prevent all neurons from learning identical features, thereby improving learning efficiency and promoting better convergence. Therefore, we randomly initialize the weight matrices $W_i$ and the bias $b_0$ with i.i.d. standard normal entries,
while the other biases $b_i$ for $i \ge 1$ are initialized to zero. Denoting the initial parameters by $\theta_0$, we define
$$f_\theta(x) = \tilde f_\theta(x) - \tilde f_{\theta_0}(x)$$
to ensure, for theoretical convenience, that $f_\theta$ is zero at the start of training, i.e., $f_{\theta_0}(x) \equiv 0$.

In the analysis of overparameterized neural networks, the neural tangent kernel (NTK) (Jacot et al., 2018; Arora et al., 2019) is commonly employed. We now introduce the NTK for finite $m$ as
$$K^{NT}_\theta(x, x') = \nabla_\theta f_\theta(x)^T \nabla_\theta f_\theta(x').$$
Fixing $L$ and letting $m \to \infty$, the kernel function converges to
$$K^{NT}(x, x') = \lim_{m \to \infty} K^{NT}_{\theta_0}(x, x').$$
Recent works have established that this convergence holds uniformly (Lai et al., 2023a). In the sequel, we will generally refer to the limiting kernel as the NTK for brevity. Moreover, an explicit formula for the NTK under the aforementioned random initialization can be derived.

Proposition 2.1. Under the random initialization mechanism proposed for the $W_i$'s and $b_i$'s, the neural tangent kernel has the explicit expression
$$K^{NT}(x, x') = \sum_{l=0}^{L} \Big( \sqrt{(\|x\|_2^2 + 1)(\|x'\|_2^2 + 1)}\,\kappa_1^{(l)}(\tilde u) + I\{l \ge 1\} \Big) \prod_{r=l}^{L-1} \kappa_0\big(\kappa_1^{(r)}(\tilde u)\big), \qquad (2)$$
where $\tilde u := (x^T x' + 1)/\sqrt{(\|x\|_2^2 + 1)(\|x'\|_2^2 + 1)}$, $\kappa_0(t)$ and $\kappa_1(t)$ are the arc-cosine kernels of degree $0$ and $1$, i.e.,
$$\kappa_0(t) = \frac{1}{\pi}(\pi - \arccos t), \qquad \kappa_1(t) = \frac{1}{\pi}\Big[\sqrt{1 - t^2} + t(\pi - \arccos t)\Big],$$
and $h^{(r)}$, $r \ge 1$, denotes the $r$-fold composition of a function $h$, while $h^{(0)}$ is the identity map.

These expressions differ slightly from previous results (Bietti and Mairal, 2019; Geifman et al., 2020), due to the special initialization of the bias, which simplifies subsequent derivations related to the RKHS of $K^{NT}$ over a general domain. Specifically, we introduce the following equivalence property, which facilitates the subsequent statistical analysis.
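The closed form in Proposition 2.1 is straightforward to evaluate numerically. The following sketch (our own illustration; the function names and the choice of depth are ours, not the paper's) implements $\kappa_0$, $\kappa_1$, and the sum-product expression (2):

```python
import numpy as np

def kappa0(t):
    """Arc-cosine kernel of degree 0."""
    t = np.clip(t, -1.0, 1.0)  # guard against round-off outside [-1, 1]
    return (np.pi - np.arccos(t)) / np.pi

def kappa1(t):
    """Arc-cosine kernel of degree 1."""
    t = np.clip(t, -1.0, 1.0)
    return (np.sqrt(1.0 - t**2) + t * (np.pi - np.arccos(t))) / np.pi

def ntk(x, xp, L=3):
    """Limiting NTK of an L-hidden-layer ReLU net with the paper's
    bias initialization, following expression (2)."""
    nx = np.sqrt(np.dot(x, x) + 1.0)
    nxp = np.sqrt(np.dot(xp, xp) + 1.0)
    u = (np.dot(x, xp) + 1.0) / (nx * nxp)
    # kappa1 composed l times applied to u, for l = 0, ..., L
    k1 = [u]
    for _ in range(L):
        k1.append(kappa1(k1[-1]))
    total = 0.0
    for l in range(L + 1):
        term = nx * nxp * k1[l] + (1.0 if l >= 1 else 0.0)
        for r in range(l, L):
            term *= kappa0(k1[r])
        total += term
    return total
```

As a quick consistency check, for $x = x'$ we have $\tilde u = 1$, every $\kappa_1^{(l)}(1) = 1$ and $\kappa_0(1) = 1$, so the diagonal reduces to $(L+1)(\|x\|_2^2 + 1) + L$.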
Proposition 2.2. For any $\Omega \subset \mathbb{R}^d$ with Lipschitz boundary, the RKHS $\mathcal{H}^{NT}$ associated with $K^{NT}$ is norm-equivalent to the Sobolev space $W^{(d+1)/2,2}(\Omega)$.

Remark 2.1. Since the properties of Sobolev spaces are extensively studied, given the above proposition, many results can be derived more easily using existing results for Sobolev spaces. We emphasize that the smoothness index $(d+1)/2$ is determined by the ReLU activation function. Generally, smoother activation functions lead to higher smoothness of the corresponding NTK Sobolev space. For example, the rectified power unit activation $a \mapsto \max\{a, 0\}^r$ with positive integer $r$ (Bach, 2014; Vakili et al., 2021), which is weakly differentiable of order $r$, can be shown to lead to the Sobolev space $W^{(d+2r-1)/2,2}$ via an analysis similar to Proposition 2.2.

We close this subsection by recalling the motivations behind using overparameterized DNNs in this paper. In practical applications, neural networks are often overparameterized, with the number of parameters exceeding the sample size. Furthermore, rather than studying the ideal global minimizers as in common statistical works, we hope to investigate the statistical properties of the algorithmic estimators, particularly under the overparameterized optimization convergence guarantee. By introducing stochastic initialization and analyzing the optimization process of the overparameterized DNNs, we can avoid, with high probability, the counterexample mentioned in Section 1.2 on the degeneration of the DNN tangent space, which in turn helps to establish the $\sqrt{n}$-consistency/asymptotic normality of the estimate of the parametric component.

2.2 Semiparametric neural M-estimation

Consider a general semiparametric statistical model $P_{\beta,f}$, where $\beta \in \mathbb{R}^p$ is a Euclidean parameter of interest and $f: \mathbb{R}^d \mapsto \mathbb{R}$ is the nuisance function in an infinite-dimensional space. Let $(Y, Z, X)$ follow the distribution $P_{\beta,f}$, where $Z \in \mathbb{R}^p$ and $X \in \mathbb{R}^d$ are finite-dimensional covariates.
For simplicity, the domains of $Z$ and $X$ are assumed to be bounded,
compact sets with regular boundaries, and their densities are bounded away from zero and infinity. Moreover, we assume that $E[Y^2 \mid Z, X]$ is finite. Consider a nonnegative loss function $l_{\beta,f} = l(Z^T\beta, f(X), Y)$ such that the true parameters $(\beta_0, f_0)$ minimize the risk $P l_{\beta,f} = \int l(Z^T\beta, f(X), Y)\, dP_{\beta,f}$. Common choices for $l$ include the negative log-likelihood, the squared loss, and other robust loss functions.

Given i.i.d. observations $\{(Y_i, Z_i, X_i), i = 1, 2, \dots, n\}$, we aim to estimate the unknown parameters $(\beta, f)$ by minimizing the criterion
$$L_n(\beta, f) := P_n l_{\beta,f} + \lambda_n J(f),$$
where $\lambda_n \ge 0$ is a tuning parameter and $J$ is a penalty term, within a suitably chosen parameter set $(\mathbb{R}^p, \mathcal{F}_n)$. Letting the nonparametric function set $\mathcal{F}_n$ be the overparameterized neural network class, for each $f_\theta \in \mathcal{F}_n$ with network parameter $\theta$ and initial parameter $\theta_0$, we define the penalty term as $J(f_\theta) = \|\theta - \theta_0\|_2^2$, which regularizes the complexity of the neural network to prevent overfitting. An alternative regularization technique is early stopping, but existing statistical analyses of early stopping in kernel gradient algorithms mainly focus on least squares losses (Yao et al., 2007; Raskutti et al., 2014). For generality, this work considers a broader class of loss functions; hence, we adopt the penalization strategy. Even under the penalization framework, there are still considerable challenges in the statistical theory. Unlike typical convergence analyses of neural network estimation, which often assume that the nonparametric function is bounded or that the loss function is Lipschitz continuous, we study the optimization solution without boundedness assumptions. This brings difficulties when the true function does not belong to the desired space and the loss function takes a more general form than least squares.
2.3 Gradient flow optimization and convergence

Given the above considerations, we aim for our estimator to satisfy $(\hat\beta, \hat f) = (\hat\beta, f_{\hat\theta})$, where
$$(\hat\beta, \hat\theta) \approx \operatorname*{argmin}_{\beta \in \mathbb{R}^p,\, f_\theta \in \mathcal{F}_n} L_n(\beta, f_\theta). \qquad (3)$$
To optimize the objective in (3), gradient-based algorithms are widely employed, with various modifications such as stochastic sampling and momentum methods being particularly common in practice. A substantial body of literature addresses optimization algorithms for neural networks, highlighting diverse methodologies and their relative effectiveness in improving training outcomes. In this study, for theoretical convenience, we focus on gradient flow, which serves as the continuous counterpart of gradient descent. Let $\hat\theta_t$ denote the neural network parameters at time $t \ge 0$. Correspondingly, let $\hat f_t = f_{\hat\theta_t}$ represent the neural network output, and $K^{NT}_t = K^{NT}_{\hat\theta_t}$ the neural tangent kernel at time $t \ge 0$. With the initial values set as $\hat\theta_0 = \theta_0$ and $\hat f_0(x) = 0$ for all $x \in \Omega$, the gradient flow training process of the parameters $(\hat\beta_t, \hat\theta_t)$ is governed by the following equations:
$$\frac{d}{dt}\hat\beta_t = -\nabla_\beta L_n\big(\hat\beta_t, \hat\theta_t\big) = -\frac{1}{n}\sum_{i=1}^n l_1'(Z_i^T\hat\beta_t, \hat f_t(X_i), Y_i)\, Z_i, \qquad (4)$$
and
$$\frac{d}{dt}\hat\theta_t = -\nabla_\theta L_n\big(\hat\beta_t, \hat\theta_t\big) = -\frac{1}{n}\sum_{i=1}^n l_2'(Z_i^T\hat\beta_t, \hat f_t(X_i), Y_i)\, \nabla_\theta \hat f_t(X_i) - 2\lambda_n(\hat\theta_t - \theta_0). \qquad (5)$$
Then the flow of $f$ is defined by a dynamical system as
$$\frac{d}{dt}\hat f_t(x) = \nabla_\theta \hat f_t(x)^T \frac{d}{dt}\hat\theta_t = -\frac{1}{n}\sum_{i=1}^n l_2'(Z_i^T\hat\beta_t, \hat f_t(X_i), Y_i)\, K^{NT}_t(x, X_i) - 2\lambda_n \nabla_\theta \hat f_t(x)^T(\hat\theta_t - \theta_0). \qquad (6)$$
For theoretical analysis, we take an ideal estimator in the RKHS associated with the reproducing kernel $K^{NT}$ as a surrogate, which is shown to be sufficiently close to the neural algorithmic
estimator $(\hat\beta_t, \hat\theta_t)$. In this context, the penalty term is defined as $\tilde J(f) = \|f\|_{\mathcal{H}^{NT}}^2$, which is standard and encourages smoother solutions by penalizing the complexity of the function $f$ in the RKHS norm. This penalty term leads to the following regularized optimization criterion:
$$\tilde L_n(\beta, f) = P_n l_{\beta,f} + \lambda_n \tilde J(f), \qquad (7)$$
where $P_n l_{\beta,f}$ still represents the empirical risk term and $\lambda_n$ is the tuning parameter. Consequently, within the RKHS, a gradient flow training process $\{\tilde\beta_t, \tilde f_t\}$ with initial values $\tilde\beta_0 = 0$ and $\tilde f_0(x) = 0$ for all $x \in \Omega$ is adopted as
$$\frac{d}{dt}\tilde\beta_t = -\nabla_\beta \tilde L_n\big(\tilde\beta_t, \tilde f_t\big) = -\frac{1}{n}\sum_{i=1}^n l_1'(Z_i^T\tilde\beta_t, \tilde f_t(X_i), Y_i)\, Z_i,$$
and
$$\frac{d}{dt}\tilde f_t = -\nabla_f \tilde L_n\big(\tilde\beta_t, \tilde f_t\big) = -\frac{1}{n}\sum_{i=1}^n l_2'(Z_i^T\tilde\beta_t, \tilde f_t(X_i), Y_i)\, K^{NT}(\cdot, X_i) - 2\lambda_n \tilde f_t.$$
Define the subspace $\mathcal{H}_1 = \mathrm{Span}\{K^{NT}(\cdot, X_i), i = 1, 2, \dots, n\}$ within the considered NTK RKHS. It is straightforward to verify that $\tilde f_t \in \mathcal{H}_1$ for all $t \ge 0$, meaning that the evolution of $\tilde f_t$ is restricted to this finite-dimensional subspace. To analyze the optimization convergence, the following standard assumption on convexity and smoothness is imposed.

Assumption 1 (Conditions for optimization). The loss function $l = l(\cdot, \cdot, \cdot)$ is convex and nonnegative, and the gradient $\nabla l$ is $B_1$-Lipschitz continuous with a constant $B_1$.

This assumption on the loss function ensures that the gradient flow converges. The convexity guarantees convergence, while the Lipschitz continuity of the gradient ensures stability during optimization, preventing abrupt changes that could hinder convergence. Denote the initial loss $L_0 = \tilde L_n(\beta_0, f_0)$ and assume that the RKHS global minimizer of (7) satisfies $\|\tilde\beta_\infty\|_2^2 + \|\tilde f_\infty\|_{\mathcal{H}^{NT}}^2 = \tilde B^2$. The following conclusion characterizes the convergence of the ideal gradient flow for general loss functions.

Proposition 2.3. Given the sample $\{(Y_i, Z_i, X_i), i = 1, 2, \dots, n\}$, consider the optimization problem (7) in the Hilbert space $\mathcal{H}_1$ with the gradient flow method described above. The following results on the optimization convergence rate hold.

(1).
When Assumption 1 holds, we have
$$\|\nabla_\beta \tilde L_n(\tilde\beta_t, \tilde f_t)\|_2^2 + \|\nabla_f \tilde L_n(\tilde\beta_t, \tilde f_t)\|_{\mathcal{H}^{NT}}^2 \lesssim \tilde L_n(\tilde\beta_t, \tilde f_t) - \tilde L_n(\tilde\beta_\infty, \tilde f_\infty) \lesssim \frac{\tilde B^2 L_0}{L_0 t + \tilde B^2}.$$

(2). Additionally, if (7) is $\mu$-strongly convex for a positive number $\mu$, we have
$$\|\nabla_\beta \tilde L_n(\tilde\beta_t, \tilde f_t)\|_2^2 + \|\nabla_f \tilde L_n(\tilde\beta_t, \tilde f_t)\|_{\mathcal{H}^{NT}}^2 \lesssim \tilde L_n(\tilde\beta_t, \tilde f_t) - \tilde L_n(\tilde\beta_\infty, \tilde f_\infty) \lesssim e^{-2\mu t} L_0.$$

Remark 2.2. The above proposition demonstrates that the convergence rate of the optimization is sublinear for convex objectives and linear for strongly convex objectives, which is standard in optimization theory. Therefore, to obtain an optimizer $(\tilde\beta_t, \tilde f_t)$ satisfying $\tilde L_n(\tilde\beta_t, \tilde f_t) - \tilde L_n(\tilde\beta_\infty, \tilde f_\infty) \lesssim n^{-1}$ and $\|\nabla_\beta \tilde L_n(\tilde\beta_t, \tilde f_t)\|_2 + \|\nabla_f \tilde L_n(\tilde\beta_t, \tilde f_t)\|_{\mathcal{H}^{NT}} \lesssim n^{-1}$, we need to train up to a time $t_s$ with $t_s \gtrsim n^2 \tilde B^2$ when the objective is merely convex, and $t_s \gtrsim \mu^{-1}\log n$ when it is $\mu$-strongly convex. Furthermore, the penalization procedure guarantees $\|\tilde f_\infty\|_{\mathcal{H}^{NT}}^2 \lesssim L_0/\lambda_n$, and the consistency (or at least boundedness) of $\tilde\beta_\infty$ is usually easy to establish in statistical problems. Therefore, without involving neural networks, upper bounds on $\tilde B^2$ for specific problems are typically available.

The following theorem demonstrates that, for any given training time, the discrepancy between the neural network and RKHS optimization results can be made sufficiently small when the neural network is wide enough.

Theorem 2.1. Given any positive real number $\xi \in (0, 1)$ and training time $t$, let the network width $m(t, \xi) \ge \mathrm{poly}(n, \lambda^{-1}, L_0, \log(1/\xi), \exp(t))$ be large enough. Then, with probability at least $1 - \xi$ over the neural network random initialization, the differences between the neural network estimates and the RKHS estimates are bounded by
$$\max_{1 \le s \le t}\big\{\|\hat\beta_s - \tilde\beta_s\|, \|\hat f_s - \tilde f_s\|_{L_\infty}\big\} \le o(n^{-1/2}).$$
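To make the dynamics (4)-(5) concrete, here is a minimal Euler discretization for the squared loss. This is a sketch under our own simplifying assumptions, not the paper's estimator: a partially linear toy model, a single hidden layer, and only the outer weights trained, so the feature map (and hence the finite-width kernel) stays fixed during training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: Y = Z * beta0 + f0(X) + noise, with beta0 = 2
n, d, m = 50, 1, 512
Z = rng.normal(size=n)
X = rng.uniform(-1.0, 1.0, size=(n, d))
Y = 2.0 * Z + np.sin(np.pi * X[:, 0]) + 0.1 * rng.normal(size=n)

# One-hidden-layer ReLU features in the paper's scaling; since only the
# outer weights W1 are trained, nabla_theta f is the fixed feature map h.
W0 = rng.normal(size=(m, d))
b0 = rng.normal(size=m)
h = np.sqrt(2.0 / m) * np.maximum(W0 @ X.T + b0[:, None], 0.0)  # (m, n)

W1_init = rng.normal(size=m)
W1 = W1_init.copy()
beta, lam, eta = 0.0, 1e-3, 0.05

for _ in range(2000):  # Euler steps for the flows (4)-(5)
    f_hat = (W1 - W1_init) @ h        # f_theta = net minus net at init
    resid = Z * beta + f_hat - Y      # l' for the squared loss
    beta -= eta * np.mean(resid * Z)                        # eq. (4)
    W1 -= eta * (h @ resid / n + 2 * lam * (W1 - W1_init))  # eq. (5)

print(beta)  # settles near the true beta0 = 2 under this seed
```

Training all of $\theta$, as in (5), would let $K^{NT}_t$ evolve over time; Theorem 2.1 is precisely what controls that drift in the wide-network regime.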
This theorem demonstrates that the gradient flow of DNNs and that of the NTK RKHS can exhibit significant similarity when the neural network is sufficiently wide. Although the optimization landscape of DNNs is generally complex, in the NTK overparameterized regime it closely resembles that of the RKHS. This similarity also helps avoid, with high probability, the degeneration of the tangent space seen in the counterexample in Section 1.2. Additionally, the requirement on the width of DNNs includes an exponential term related
to the training time, which is used to bound the accumulated dynamic differences between the two training processes. This factor accounts for the cost of accommodating general, unstructured loss functions, but can be omitted when considering the least squares loss as a special case.

3 Statistical theory of algorithmic neural M-estimation

In this section, we discuss the statistical properties of the neural semiparametric $M$-estimator, focusing on the convergence rate of the nonparametric part and the asymptotic normality of the parametric component. Notably, we do not impose common assumptions such as Lipschitz continuity (e.g., Huang et al., 2024) or strong convexity of the loss function, as these conditions may limit the applicability of our results. Furthermore, because our analysis pertains to the algorithmic optimizer, we do not assume that the nonparametric component is bounded; this introduces additional challenges for the theoretical analysis. Before discussing the details, we introduce some basic assumptions. First, some basic conditions on the model regularity are summarized below.

Assumption 2. (1) The covariate $(Z, X)$ takes values in a bounded domain with a joint probability density function bounded away from $0$ and $\infty$.

(2) Conditional on $(Z, X)$, the second order moment $E[Y^2 \mid Z = z, X = x]$ is bounded.

(3) Derivatives and expectations are exchangeable in the sense that
$$\frac{\partial}{\partial u} E[l(u_1, u_2, Y) \mid Z = z, X = x] = E\Big[\frac{\partial}{\partial u} l(u_1, u_2, Y) \,\Big|\, Z = z, X = x\Big],$$
for $u = u_1$ and $u_2$.

When establishing nonparametric convergence, the margin condition (Tsybakov, 2004) and the Bernstein condition (Bartlett and Mendelson, 2006) are often employed to achieve fast rates in statistical and learning analysis. Consider, for example, a simple nonparametric model $P_f$, which depends on a nonparametric parameter $f$ and a corresponding loss function $l_f$.
These conditions quantify the identifiability and the curvature of the objective function $f \mapsto P l_f$ at some minimum $f^*$. In the margin condition, $f^* = f_0$ is the minimizer of the risk over all measurable functions, whereas in the Bernstein condition, $f^*$ typically minimizes the risk over the candidate function class $\mathcal{F}$; see LecuΓ© (2011) for more discussion. In one specific form, these conditions relate the excess risk $P(l_f - l_{f^*})$ and the $L_2$ norm $\|f - f^*\|_{L_2}^2 \asymp P(f(X) - f^*(X))^2$ through inequalities of the form
$$P(l_f - l_{f^*}) \gtrsim \|f - f^*\|_{L_2}^{2\kappa},$$
typically with $\kappa = 1$. This implies better concentration and smaller localized sets, and hence helps obtain fast convergence. However, verifying the margin/Bernstein conditions usually requires that $f$ is near $f_0$, for example, that $\|f - f_0\|_{L_\infty}$ is bounded. This has limited the application of related results when the boundedness of the nonparametric functions does not hold naturally. Especially in this work, assuming boundedness of the optimizer under gradient flow is particularly unsuitable. In semiparametric problems, unlike the finite-dimensional parameters, for which consistency is typically easy to establish, $L_\infty$ boundedness of the nonparametric estimate is often nontrivial, especially when the true function does not lie within the considered RKHS. The following Huberized margin condition holds more easily for unbounded function classes.

Assumption 3 (Huberized margin condition for semiparametric estimation). There is a constant $B_2 > 0$ such that for every $\beta \in \mathbb{R}^p$, $f \in \mathcal{F}$,
$$P(l_{\beta,f} - l_{\beta_0,f_0}) \ge B_2\, \frac{\|\beta - \beta_0\|^2 + \|f - f_0\|_{L_2(X)}^2}{1 + \|\beta - \beta_0\| + \|f - f_0\|_{L_\infty}}. \qquad (8)$$

Typically, when the loss is bounded or globally Lipschitz continuous (e.g., Huang et al., 2024), the class complexity is easy to bound,
https://arxiv.org/abs/2504.19089v1
but such assumptions may not be general enough. In the proposed Huberized condition, ignoring the finite-dimensional parameter $\beta$ for convenience, when $\|f-f_0\|_{L_\infty}\lesssim 1$ the right-hand side is nearly $\|f-f_0\|_{L_2(X)}^2$, while when $\|f-f_0\|_{L_\infty}\gtrsim 1$ the right-hand side is even smaller than $\|f-f_0\|_{L_1(X)}$. If only the local curvature is of concern, then this condition is equivalent to the commonly used margin condition $P(l_f-l_{f_0})\gtrsim\|f-f_0\|_{L_2}^2$. As pointed out earlier, it is not proper to assume that the algorithmic neural network optimizer is bounded, and the $L_\infty$ consistency or boundedness of the nonparametric estimation is usually non-trivial when the true function does not lie within the considered RKHS. Moreover, in (8) for semiparametric models, $\|\beta-\beta_0\|$ in the denominator is included for symmetry; it is usually not necessary, because the consistency of the estimation of $\beta$ is verifiable in common cases. When the risk is strongly convex, the condition is obviously satisfied. Intuitively, the proposed assumption only requires that the Hessian matrix is non-singular near the minimum and that the gradient is lower bounded at the far end. Strong convexity of the loss function across the entire domain is unnecessary; local strong convexity near the true minimizer is often sufficient. This is not much stricter than Assumption 1; more examples for illustration will be given in Section 4.

The next assumption places the true parameters within the ideal space.

Assumption 4. The true parameter $f_0$ belongs to the RKHS $\mathcal{H}_{NT}$.

This condition is common in traditional statistical works and yields some boundedness to simplify the analysis. However, it does not always hold in a broad context of learning problems and is in fact not necessary to achieve the desired property.
Thus, we relax this condition to allow the true function to lie outside the reproducing space of the NTK.

Assumption 4β€². The true parameter $f_0$ does not reside within the RKHS $\mathcal{H}_{NT}$, but instead belongs to a Sobolev space $W^{s,2}$ with $s>d/2$.

Here, we assume that the smoothness of $f_0$ satisfies $s>d/2$, a condition crucial for establishing the statistical properties under consideration, such as the convergence rate and the attainment of parametric asymptotic normality. This assumption is particularly significant in the context of semiparametric statistics, where achieving a $\sqrt{n}$-consistent estimator for finite-dimensional parameters generally necessitates that the nonparametric convergence rate be faster than $n^{-1/4}$ (Van der Vaart, 2000; Kosorok, 2008).

Now, we introduce some basic concepts and assumptions for the semiparametric model (Van der Vaart, 2000; Kosorok, 2008). Given a fixed $f$, let $\mathcal{G}_f$ denote the collection of all smooth functional curves $g_b$ that run through $f$ at $b=0$, i.e.
$$\mathcal{G}_f=\big\{g_b: g_b\in L_2(\Omega)\ \text{for}\ b\in(-1,1)\ \text{and}\ \lim_{b\to 0}\|b^{-1}(g_b-g_0)-h\|_{L_2}= 0\ \text{for some function}\ h\big\}.$$
Then let
$$T_f\mathcal{G}_f=\Big\{h:\Omega\to\mathbb{R},\ \text{there is a}\ g_b\in\mathcal{G}_f\ \text{such that}\ \lim_{b\to 0}\big\|b^{-1}(g_b-g_0)-h\big\|_{L_2}=0\Big\}$$
be all the potential tangent directions for the nuisance parameter. Now we set $\mathcal{T}_f$ to be the closed linear span of $T_f\mathcal{G}_f$ under linear combinations. For any $h$ in $\mathcal{T}_f$, denote
$$S_1(\beta,f)=\frac{\partial}{\partial\beta}l_{\beta,f}\quad\text{and}\quad S_2(\beta,f)[h]=\frac{\partial}{\partial b}\Big|_{b=0}l_{\beta,g_b},$$
where $g_b$ satisfies $(\partial/\partial b)g_b|_{b=0}=h$. For convenience, the tangent set for $f$ at $(\beta,f)$ is defined as $\Lambda_{\beta,f}:=\{S_2(\beta,f)[h]:h\in\mathcal{T}_f\}$. Further, we also define
$$S_{11}(\beta,f)=\frac{\partial}{\partial\beta}S_1(\beta,f),\qquad S_{12}(\beta,f)[h]=\frac{\partial}{\partial b}\Big|_{b=0}S_1(\beta,g_b),$$
$$S_{21}(\beta,f)[h]=\frac{\partial}{\partial\beta}S_2(\beta,f)[h]\qquad\text{and}\qquad S_{22}(\beta,f)[h_1][h]=\frac{\partial}{\partial b}\Big|_{b=0}S_2(\beta,g_b)[h_1],$$
where $h_1$ is another function in $\mathcal{T}_f$.
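To make these operators concrete, here is a worked illustration (our own sketch, not taken from the paper) for the partially linear least squares loss $l_{\beta,f}=\frac{1}{2}(Y-Z^T\beta-f(X))^2$:

```latex
% Illustrative computation for l_{\beta,f} = (1/2)(Y - Z^T\beta - f(X))^2
S_1(\beta,f) = -\big(Y - Z^T\beta - f(X)\big)Z, \qquad
S_2(\beta,f)[h] = -\big(Y - Z^T\beta - f(X)\big)h(X),
\]\[
S_{12}(\beta,f)[h] = Z\,h(X), \qquad
S_{22}(\beta,f)[h_1][h] = h_1(X)\,h(X),
\]\[
\text{so (9) reads}\quad P\big\{(Z - \tilde h(X))\,h(X)\big\} = 0
\ \text{for all}\ h \in \mathcal{T}_f,
\quad\text{solved by}\quad \tilde h(X) = E[Z \mid X],
\]\[
\text{giving the efficient score}\quad
\tilde S(\beta_0,f_0) = -\big(Y - Z^T\beta_0 - f_0(X)\big)\big(Z - E[Z\mid X]\big).
```

This recovers the familiar partially linear calibration and previews Assumption 6(1) in Section 4.1, which requires exactly that $E[(Z-E[Z|X])(Z-E[Z|X])^T]$ be nonsingular.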
As is well known, in the special case when
the loss is a negative log-likelihood, the efficient score is essentially the projection of the score function $S_1$ onto the orthogonal complement of the tangent set $\Lambda_{\beta,f}$. Denote $\Pi_{\beta,f}$ as the orthogonal projection onto the closure of the linear span of $\Lambda_{\beta,f}$. The efficient score function for $\beta$ at $(\beta_0,f_0)$ is then given by
$$\tilde S(\beta_0,f_0)=S_1(\beta_0,f_0)-\Pi_{\beta_0,f_0}S_1(\beta_0,f_0).$$
Denote $S_2(\beta,f)[\mathbf{h}]=(S_2(\beta,f)[h_1],S_2(\beta,f)[h_2],\dots,S_2(\beta,f)[h_p])$, where $\mathbf{h}=(h_1,h_2,\dots,h_p)\in\mathcal{T}_f^p$, and define $S_{12}[\mathbf{h}]$, $S_{21}[\mathbf{h}]$ and $S_{22}[\mathbf{h}_1][\mathbf{h}_2]$ accordingly. Suppose there is $\tilde{\mathbf{h}}=(\tilde h_1,\tilde h_2,\dots,\tilde h_p)\in\mathcal{T}_f^p$ such that for any $\mathbf{h}=(h_1,\dots,h_p)\in\mathcal{T}_f^p$,
$$P\big(S_{12}(\beta_0,f_0)[\mathbf{h}]-S_{22}(\beta_0,f_0)[\tilde{\mathbf{h}}][\mathbf{h}]\big)=0.\qquad(9)$$
Then we can also determine the efficient score for likelihood estimation as
$$\tilde S(\beta_0,f_0)=S_1(\beta_0,f_0)-S_2(\beta_0,f_0)[\tilde{\mathbf{h}}].$$
Similar to the negative log-likelihood case, the following assumptions are made for the general loss function.

Assumption 5. (1) (Positive information) There exists $\tilde{\mathbf{h}}=(\tilde h_1,\dots,\tilde h_p)\in\mathcal{T}_f^p$ such that (9) holds for any $\mathbf{h}=(h_1,\dots,h_p)\in\mathcal{T}_f^p$. Further assume that $\tilde h_i\in W^{s,2}(\Omega)$, $i=1,2,\dots,p$, where $s>d/2$. Given $\tilde{\mathbf{h}}$, the matrix
$$A=P\big(S_{11}(\beta_0,f_0)-S_{21}(\beta_0,f_0)[\tilde{\mathbf{h}}]\big)$$
is nonsingular. (2) (Smoothness of the model) For all possible parameters $(\beta,f)$ satisfying $\{(\beta,f):\|\beta-\beta_0\|+\|f-f_0\|_{L_2(X)}\lesssim\delta_n\}$ with $\delta_n=o(n^{-1/4})$,
$$P\Big\{\big(\tilde S(\beta,f)-\tilde S(\beta_0,f_0)\big)-\big(S_{11}(\beta_0,f_0)-S_{21}(\beta_0,f_0)[\tilde{\mathbf{h}}]\big)(\beta-\beta_0)$$
$$\qquad-\big(S_{12}(\beta_0,f_0)[f-f_0]-S_{22}(\beta_0,f_0)[\tilde{\mathbf{h}}][f-f_0]\big)\Big\}=o(\|\beta-\beta_0\|)+O(\|f-f_0\|_{L_2}^2).$$

In the above assumptions, the first one guarantees the existence and smoothness of the directions $\tilde{\mathbf{h}}$, which correspond to the least favorable direction when the loss is the negative log-likelihood. This direction can be determined by solving the equations in (9). The second condition can usually be obtained by Taylor expansion.
Under the above assumptions, we have the following general conclusion.

Theorem 3.1. Consider the proposed semiparametric neural M-estimation, and assume that the estimator $(\hat\beta_t,\hat f_t)$ is trained from (4), (5), (6) with training time $t_s$ in Remark 2.2 and network width $m(t_s,\xi)$ in Theorem 2.1 with $\xi\in(0,1)$. Then, with probability at least $1-\xi$ over the neural network random initialization, the following results hold. If $\hat\beta_{t_s}$ and $Pl_{\hat\beta_{t_s},\hat f_{t_s}}$ are bounded, and Assumptions 1, 3, 4, 5 hold, setting $\lambda\asymp n^{-(d+1)/(2d+1)}$, we have
$$\|\hat f_{t_s}-f_0\|_{L_2}^2=O_p\big(n^{-(d+1)/(2d+1)}\log n\big).\qquad(10)$$
If Assumption 4 is replaced by Assumption 4β€², setting $\lambda\asymp n^{-(d+1)/(2s+d)}$, the nonparametric rate becomes
$$\|\hat f_{t_s}-f_0\|_{L_2}^2=O_p\big(n^{-2s/(2s+d)}\log n\big).\qquad(11)$$
In both cases, we have the asymptotic normality
$$\sqrt{n}\big(\hat\beta_{t_s}-\beta_0\big)=n^{1/2}A^{-1}\mathbb{P}_n\tilde S(\beta_0,f_0)+o_p(1)\ \xrightarrow{d}\ N(0,\Sigma),\qquad(12)$$
where $\Sigma=A^{-1}\big(P\{\tilde S(\beta_0,f_0)\tilde S(\beta_0,f_0)^T\}\big)(A^{-1})^T$.

The boundedness condition on $\hat\beta_{t_s}$ and $Pl_{\hat\beta_{t_s},\hat f_{t_s}}$ is essentially a boundedness requirement for the parametric part, not for the nonparametric $f$. For nonparametric problems without the finite-dimensional parameter $\beta$, this condition can be removed under the same proof. For semiparametric models, this condition is often verifiable in specific examples, as in the applications in the next section. Let $s_0=(d+1)/2$ denote the smoothness of the RKHS corresponding to the NTK. The convergence rate in (10) can then be expressed as nearly $n^{-2s_0/(2s_0+d)}$ with the tuning parameter $\lambda\asymp n^{-2s_0/(2s_0+d)}$. This smoothness is essentially determined by the ReLU activation function, and the rate is minimax optimal for many statistical models under Assumption 4. When the true function does not lie within the considered ReLU NTK RKHS, as stated in Assumption 4β€², the rate in (11) remains minimax optimal. Furthermore, Assumption 4β€² ensures that the nonparametric rate is faster than $n^{-1/4}$.
Therefore, regardless of whether the true function resides in the considered RKHS, the estimator is shown to be asymptotically normal. An estimator is semiparametric efficient if its information (i.e., the inverse of its variance) equals the supremum
of the information across all parametric submodels. The submodel that attains this supremum is typically called the least favorable or hardest submodel. By the above result, when the loss function is the negative log-likelihood and the model class contains the hardest submodel, the proposed semiparametric estimator is efficient. Conversely, if the loss function differs, suggesting model misspecification, the parametric estimator remains $\sqrt{n}$-consistent and asymptotically normal.

We conclude this section with the following remarks. To better align with practice, we consider overparameterized neural networks and general loss functions, allowing the true function to lie outside the desired RKHS. Distinct from the existing literature, we refrain from assuming that the output of the optimized network is bounded and study the algorithmic solution, which presents significant challenges for statistical analysis and inspires a new yet weaker margin condition. Then, by combining the peeling argument with the interpolation bound, we precisely characterize the entropy and establish the nonparametric convergence rate. Based on this, we obtain the asymptotic normality of the parametric estimators, with high probability over the random initialization, avoiding the degeneration of the tangent space and addressing the key question posed in Section 1.2. Lastly, we acknowledge the potential for broader applications of the proposed framework. It essentially provides a fundamental approach to addressing statistical problems that involve characterizing the tangent spaces of the nonparametric component, a challenge that frequently arises at the intersection of deep learning and statistical inference.

4 Examples

In this section, we introduce two common examples to illustrate the proposed semiparametric neural M-estimator and the established theoretical framework.
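Before turning to the examples, the role of the smoothness threshold $s>d/2$ in Theorem 3.1 can be checked numerically. The helper below is our own illustrative sketch (the name `l2_rate_exponent` is not from the paper): under Assumption 4β€², (11) gives $\|\hat f_{t_s}-f_0\|_{L_2}\sim n^{-s/(2s+d)}$ up to logarithms, and $s>d/2$ is exactly what pushes this exponent above $1/4$, as needed for the $\sqrt{n}$-normality of $\hat\beta_{t_s}$.

```python
# Illustrative sanity check of the rate interplay in Theorem 3.1 (not from
# the paper). The squared L2 rate under Assumption 4' is n^{-2s/(2s+d)}, so
# the L2 rate exponent is s/(2s+d).

def l2_rate_exponent(s: float, d: int) -> float:
    """Exponent a such that ||f_hat - f0||_{L2} ~ n^{-a} under Assumption 4'."""
    return s / (2 * s + d)

d = 5
for s in (2.0, 2.5, 2.6, 4.0):
    a = l2_rate_exponent(s, d)
    print(f"s={s}: L2 exponent {a:.4f}, faster than n^(-1/4): {a > 0.25}")

# The threshold is s = d/2 = 2.5: the exponent exceeds 1/4 iff s > d/2.
assert l2_rate_exponent(2.6, d) > 0.25 > l2_rate_exponent(2.0, d)
```

The same arithmetic explains the Assumption 4 case: with $s_0=(d+1)/2>d/2$, the exponent $s_0/(2s_0+d)=(d+1)/(2(2d+1))$ already exceeds $1/4$.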
4.1 Regression

Now we consider the regression model
$$Y_i=f_0(X_i)+Z_i^T\beta_0+\epsilon_i,\qquad(13)$$
where $Y_i$ represents the response variable, $X_i$ and $Z_i$ are bounded covariates with densities bounded away from $0$ and $\infty$, and $\epsilon_i$ denotes independent measurement noise with mean zero, finite variance and a symmetric distribution. For estimation, we define the loss function as $l=l(Y_i-f(X_i)-Z_i^T\beta)$, where $l$ is a univariate function. The following condition concerns the covariate distribution and the loss function.

Assumption 6. (1) The matrix $E\big[(Z-E[Z|X])(Z-E[Z|X])^T\big]$ is nonsingular. (2) The univariate function $l$ is a nonnegative, even and convex function with Lipschitz continuous derivative $l'$ and $l(0)=l'(0)=0$. Additionally, the pointwise risk $R(s)=E_\epsilon[l(s+\epsilon)]$ is convex and $R''(s)\ge B_3>0$ for $s:|s|\le b$, with some $b$ large enough and a positive number $B_3$.

In this example, symmetry of the noise and loss functions is assumed to ensure that $(\beta_0,f_0)$ is the minimizer of the risk $E_P l_{\beta,f}$, thereby simplifying the analysis. Condition (1) in Assumption 6 is standard (Kosorok, 2008) to ensure
$$E\Big[\big(Z^T\beta+f(X)-Z^T\beta_0-f_0(X)\big)^2\Big]\gtrsim\|\beta-\beta_0\|^2+\|f-f_0\|_{L_2}^2$$
in partially linear models. Furthermore, the second condition assumes that the risk function is strongly convex within a local neighborhood. Many commonly used loss functions, such as the least squares loss and the Huber loss, satisfy this assumption. Assumption 6 guarantees that Assumption 3 holds, as shown in Lemma ?? in the Supplementary Material (Yan et al., 2025a).

Theorem 4.1. Consider the proposed semiparametric neural M-estimation for model (13) and set the hyperparameters as in Theorem 3.1. Under Assumptions 4, 5(1), 6, setting $\lambda\asymp n^{-(d+1)/(2d+1)}$, with
probability at least $1-\xi$ over the neural network random initialization, we have
$$\|\hat f_{t_s}-f_0\|_{L_2}^2=O_p\big(n^{-(d+1)/(2d+1)}\log n\big).$$
If Assumption 4 is replaced by Assumption 4β€², setting $\lambda\asymp n^{-(d+1)/(2s+d)}$, the nonparametric rate becomes
$$\|\hat f_{t_s}-f_0\|_{L_2}^2=O_p\big(n^{-2s/(2s+d)}\log n\big).$$
In both cases, we have the asymptotic normality
$$\sqrt{n}\big(\hat\beta_{t_s}-\beta_0\big)=n^{1/2}A^{-1}\mathbb{P}_n\tilde S(\beta_0,f_0)+o_p(1)\ \xrightarrow{d}\ N(0,\Sigma),\qquad(14)$$
where $\Sigma=A^{-1}\big(P\{\tilde S(\beta_0,f_0)\tilde S(\beta_0,f_0)^T\}\big)(A^{-1})^T$.

Some previous works have studied related challenges in the least squares nonparametric regression problem (Mendelson and Neeman, 2010; Steinwart et al., 2009). In the above theorem, whether or not the true function belongs to the considered RKHS, we establish the optimal nonparametric convergence rate and parametric asymptotic normality, without assuming that candidate functions are bounded and while allowing unbounded responses.

4.2 Classification

In this subsection, we consider the following binary classification problem. Suppose that we observe an independently and identically distributed random sample $\{(Y_i,Z_i,X_i),\ i=1,2,\dots,n\}$, where $Y_i\in\{0,1\}$ denotes a binary response, and the $X_i$'s and $Z_i$'s are bounded covariates with densities bounded away from $0$ and $\infty$. We assume that $Y$ follows a Bernoulli distribution determined by
$$P(Y=1)=\varphi\big(f_0(X)+Z^T\beta_0\big)\quad\text{and}\quad P(Y=0)=1-\varphi\big(f_0(X)+Z^T\beta_0\big),\qquad(15)$$
where $\varphi:\mathbb{R}\mapsto[0,1]$ is a continuously differentiable monotone link function. Hence the loss function can be taken as the negative log-likelihood
$$l(Z^T\beta,f(X),Y)=-Y\log\varphi\big(f(X)+Z^T\beta\big)-(1-Y)\log\big(1-\varphi\big(f(X)+Z^T\beta\big)\big).\qquad(16)$$
Common choices for $\varphi$ include the sigmoid function $\varphi(t)=(1+e^{-t})^{-1}$ and the cumulative normal distribution function, corresponding to the logistic and probit models, respectively.
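As a concrete instance of the working loss, the following minimal Python sketch (our own illustration, not the authors' implementation; `phi` and `loss` are names introduced here) evaluates (16) with the logistic link.

```python
import math

def phi(t):
    """Logistic link phi(t) = 1 / (1 + exp(-t))."""
    return 1.0 / (1.0 + math.exp(-t))

def loss(zb, fx, y):
    """Negative log-likelihood l(z^T beta, f(x), y) of (16) for y in {0, 1}."""
    p = phi(fx + zb)
    return -y * math.log(p) - (1 - y) * math.log(1 - p)

# At f(x) + z^T beta = 0 we have phi = 1/2, so both labels incur loss log 2.
assert abs(loss(0.0, 0.0, 1) - math.log(2)) < 1e-12
assert abs(loss(0.0, 0.0, 0) - math.log(2)) < 1e-12
```

The probit model is obtained by replacing `phi` with the standard normal distribution function; the loss remains convex in the index under Assumption 7.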
In the theoretical analysis, we allow for model misspecification, assuming that the data-generating process follows
$$P(Y=1)=\psi\big(f_1(X)+Z^T\beta_1\big)\quad\text{and}\quad P(Y=0)=1-\psi\big(f_1(X)+Z^T\beta_1\big),\qquad(17)$$
where $\psi\neq\varphi$ is a different link function, for some $\beta_1\in\mathbb{R}^p$ and $f_1\in L_2(\Omega)$. Despite this potential misspecification, we continue to use the working link function $\varphi$ and the loss function as specified in (16). From a variational perspective, the minimizer $(\beta_0,f_0)$ of $Pl_{\beta,f}$ can also be interpreted as the minimizer of the Kullback-Leibler divergence
$$P\Big[\log\frac{P_{\beta,f}}{P_0}\Big]=P\Big[\big\{Y\log\varphi\big(f(X)+Z^T\beta\big)+(1-Y)\log\big(1-\varphi\big(f(X)+Z^T\beta\big)\big)\big\}$$
$$\qquad-\big\{Y\log\psi\big(f_1(X)+Z^T\beta_1\big)+(1-Y)\log\big(1-\psi\big(f_1(X)+Z^T\beta_1\big)\big)\big\}\Big].$$
Thus, the parameters $(\beta_0,f_0)$ retain interpretability and are still parameters of the underlying distribution. Solving equation (9), some calculation implies
$$\tilde h=\frac{E\big[Z\big(Y(\log\varphi)''+(1-Y)(\log(1-\varphi))''\big)\,\big|\,X\big]}{E\big[Y(\log\varphi)''+(1-Y)(\log(1-\varphi))''\,\big|\,X\big]}.$$
The following conditions on the link function are standard.

Assumption 7. The link function $\varphi$ is Lipschitz continuous, monotone and continuously differentiable; $-\log\varphi(t)$ and $-\log(1-\varphi(t))$ are both convex with positive, continuous and bounded second-order derivatives. If the model is misspecified, we make the same assumptions for $\psi$ and assume that the underlying $\beta_1$, $f_1$ are bounded in the infinity norm.

This condition ensures that the loss function and distribution satisfy Assumptions 1 and 3. Commonly used link functions, such as the logistic and probit functions, naturally satisfy this condition, thereby making these assumptions applicable in practice.

Theorem 4.2.
Consider the proposed semiparametric neural M-estimation for model (15) or (17) and set the hyperparameters as in Theorem 3.1. Under Assumptions 4, 5(1), 6(1) and 7, setting $\lambda\asymp n^{-(d+1)/(2d+1)}$, with probability at least $1-\xi$ over the neural network random initialization, we have
$$\|\hat f_{t_s}-f_0\|_{L_2}^2=O_p\big(n^{-(d+1)/(2d+1)}\log^2 n\big).$$
If Assumption 4 is replaced by Assumption 4β€², setting $\lambda\asymp n^{-(d+1)/(2s+d)}$, the nonparametric rate becomes
$$\|\hat f_{t_s}-f_0\|_{L_2}^2=O_p\big(n^{-2s/(2s+d)}\log^2 n\big).$$
In both cases, we have the asymptotic normality
$$\sqrt{n}\big(\hat\beta_{t_s}-\beta_0\big)=n^{1/2}A^{-1}\mathbb{P}_n\tilde S(\beta_0,f_0)+o_p(1)\ \xrightarrow{d}\ N(0,\Sigma),\qquad(18)$$
where $\Sigma=A^{-1}\big(P\{\tilde S(\beta_0,f_0)\tilde S(\beta_0,f_0)^T\}\big)(A^{-1})^T$.

For the partially linear classification problem, the proposed neural M-estimation achieves the minimax nonparametric rate as well
as $\sqrt{n}$-consistency, with high probability. This performance is attributed to the representational capacity of overparameterized deep neural networks and their tangent space. When the model is correctly specified, i.e. the loss function corresponds to the negative log-likelihood, the resulting parametric estimator attains efficiency.

5 Numerical studies

This section demonstrates the practical advantages of the proposed semiparametric neural M-estimator through simulation studies on both the regression and classification models described in Section 4. We use an overparameterized fully connected ReLU neural network with depth $L=5$ and width $m=1000$. For training, the stochastic gradient descent optimizer in PyTorch is employed, with a learning rate of $0.001$ and a total of $1000$ epochs. For a comprehensive evaluation, we compare our method with four baseline approaches that also estimate $\beta_0$ and $f_0(x)$ via (penalized) M-estimation. The first baseline is a regression spline estimator using B-splines, with uniformly spaced knots over the domain $[0,1]^d$. The second is the RKHS method, where the RKHS norm of the nonparametric component serves as the penalty, and the Laplacian kernel $K_h(x_1,x_2)=\exp\{-\|x_1-x_2\|/h\}$ is used. The third is a local linear estimator with the Epanechnikov kernel $k_h(x_1,x_2)=(1-\|x_1-x_2\|^2/h)/(2d(1-1/(3h)))$. The last one also employs a fully connected ReLU neural network with depth $L=5$, whose width serves as a hyperparameter; it is referred to as "underparameterized" to distinguish it from the overparameterized regime.
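The network architecture can be sketched as follows. This is a scaled-down NumPy illustration of a depth-5 fully connected ReLU network (smaller width and no training loop; the paper's experiments use PyTorch with $m=1000$ and SGD, and the initialization scheme here is a generic He-style choice, not necessarily the one used in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(d_in, width=200, depth=5):
    """He-style random initialization for a fully connected ReLU network."""
    dims = [d_in] + [width] * (depth - 1) + [1]
    return [(rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out)),
             np.zeros(fan_out))
            for fan_in, fan_out in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """Network output f_theta(x) for a batch x of shape (n, d_in)."""
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:        # ReLU on hidden layers only
            h = np.maximum(h, 0.0)
    return h[:, 0]

params = init_net(d_in=5)
x = rng.uniform(0, 1, (8, 5))
out = forward(params, x)               # shape (8,)
```

Even at width 200 this sketch has over $10^5$ trainable parameters, far more than the sample sizes $n\le 2000$ used below, which is the defining feature of the overparameterized regime.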
To select the optimal hyperparameters for all methods, we split the full data into a training set (80%) and a validation set (20%) and choose the hyperparameters that minimize the validation loss, including the tuning parameters for our method and the second baseline, the number of basis functions for the first baseline, the bandwidth for the third baseline, and the width of the underparameterized neural network. Specifically, the hyperparameters of the regression spline method and the underparameterized neural network method are chosen from a set that ensures the total number of parameters is less than the sample size.

We generate $\{Z_i=(Z_{1i},Z_{2i})^T\}$, $i=1,2,\dots,n$, from a uniform distribution over $[0,1]^2$. Then, $\{X_i=(X_{1i},\dots,X_{di})^T\}$ is generated using the formula $X_{ji}=0.9W_{ji}+0.05(Z_{1i}+Z_{2i})$, $1\le j\le d$, where $W_{ji}$ is sampled from a uniform distribution over the interval $[0,1]$. The true finite-dimensional parameter vector is set as $\beta_0=(1,0.75)^T$. For the nonparametric part $f_0(x)$, we consider four cases with different dimensions and function forms. Here, Cases 1 and 3 correspond to five-dimensional examples, while Cases 2 and 4 represent their respective ten-dimensional extensions; these have been studied in the simulations of Zhong et al. (2022) and Yan et al. (2025b).

Case 1: $d=5$, $f_0(x)=5\big(x_1^2x_2^3+\log(1+x_3)+\sqrt{1+x_4x_5}+\exp(x_5/2)\big)$,

Case 2: $d=10$, $f_0(x)=\frac{5}{2}\big(x_1^2x_2^3+\log(1+x_3)+\sqrt{1+x_4x_5}+\exp(x_5/2)+x_6^2x_7^3+\log(1+x_8)+\sqrt{1+x_9x_{10}}+\exp(x_{10}/2)\big)$,

Case 3: $d=5$, $f_0(x)=5\sin\Big(\frac{6\pi}{d(d+1)}\sum_{l=1}^{5}l\,x_l\Big)$,

Case 4: $d=10$, $f_0(x)=5\sin\Big(\frac{6\pi}{d(d+1)}\sum_{l=1}^{10}l\,x_l\Big)$.

In the regression setting, the responses $Y_i$ are generated by the model $Y_i=f_0(X_i)+Z_i^T\beta_0+\varepsilon_i$, where the $\varepsilon_i$ are i.i.d. normal noise with zero mean and standard deviation $\sigma=0.5$. For the classification model, the responses $Y_i$ are drawn
from the Bernoulli distribution with probability $P(Y_i=1)=\varphi\big(f_0(X_i)+Z_i^T\beta_0-E_X[f_0(X)]\big)$, where $\varphi(x)$ is the logistic function $\varphi(x)=1/(1+e^{-x})$. Simulations are performed for three different sample sizes $n=500,1000,2000$, with $200$ repetitions for each case and method. The mean squared error (MSE) of the parametric and nonparametric components is computed, respectively, to evaluate the performance of each method. Due to computational limitations, the spline method can only handle Cases 1 and 3 with 5-dimensional nonparametric functions. Tables 1 and 2 report the average MSEs for the regression and classification examples, respectively. These results demonstrate that our overparameterized neural M-estimation approach outperforms the three traditional statistical methods, as well as the underparameterized neural network (i.e., properly tuned with fewer parameters than the sample size). Specifically, due to the high dimensionality of the nonparametric component, the curse of dimensionality becomes significant. Consequently, the underparameterized neural network, the spline estimator and the local linear estimator suffer from insufficient learning capacity. In all four cases, our method yields the lowest MSE, demonstrating its superior ability. Lastly, we would like to point out that, in the existing literature (e.g. Zhong et al., 2022; Fan and Gu, 2022; Wang et al., 2024; Yan et al., 2025b), regardless of the theoretical requirements imposed on the network size, the numerical experiments have in fact employed overparameterized neural networks to achieve favorable performance, which also provides certain support for our proposed method and theory.

Given that the estimation of the parametric component satisfies asymptotic normality, it is possible to perform valid statistical inference. To assess this, we simulate the empirical coverage probabilities of the corresponding estimated confidence intervals.
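The simulated designs above can be generated as follows, a NumPy sketch of Case 1 (the function name `generate_case1` is ours, not the paper's, and the term $\sqrt{1+x_4x_5}$ is implemented as printed in the formula).

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_case1(n, d=5, beta0=(1.0, 0.75), sigma=0.5):
    """Regression data of Case 1: Z ~ U[0,1]^2, X_j = 0.9 W_j + 0.05 (Z_1 + Z_2),
    Y = f0(X) + Z^T beta0 + eps with eps ~ N(0, sigma^2)."""
    Z = rng.uniform(0, 1, (n, 2))
    W = rng.uniform(0, 1, (n, d))
    X = 0.9 * W + 0.05 * (Z[:, [0]] + Z[:, [1]])
    f0 = 5 * (X[:, 0]**2 * X[:, 1]**3 + np.log1p(X[:, 2])
              + np.sqrt(1 + X[:, 3] * X[:, 4]) + np.exp(X[:, 4] / 2))
    Y = f0 + Z @ np.array(beta0) + rng.normal(0, sigma, n)
    return Y, Z, X

Y, Z, X = generate_case1(500)
# X stays in [0, 1]: 0.9 * 1 + 0.05 * (1 + 1) = 1.0 is the largest value.
assert X.min() >= 0.0 and X.max() <= 1.0
```

The classification responses replace the Gaussian noise by a Bernoulli draw with success probability $\varphi(f_0(X_i)+Z_i^T\beta_0-E_X[f_0(X)])$, where the centering constant can be approximated by a Monte Carlo average of $f_0$.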
In the regression model, the variance of $\hat\beta$ is estimated as
$$\hat\Sigma=\left[\frac{1}{n}\sum_{i=1}^n\big(Y_i-\hat f(X_i)-Z_i^T\hat\beta\big)^2\right]\left[\min_{h\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^n\big(Z_i-h(X_i)\big)\big(Z_i-h(X_i)\big)^T\right]^{-1}.$$
In the classification model, the variance of $\hat\beta$ is estimated as
$$\hat\Sigma=\left[\min_{h\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^n\varphi\big(\hat f(X_i)+Z_i^T\hat\beta\big)\big(1-\varphi\big(\hat f(X_i)+Z_i^T\hat\beta\big)\big)\big(Z_i-h(X_i)\big)\big(Z_i-h(X_i)\big)^T\right]^{-1},$$
where $\mathcal{F}$ is also a neural network function class. The coverage rate is defined as the proportion of repeated experiments for which the true parameter falls within the confidence interval. Tables 3 and 4 report the coverage of the $95\%$ confidence intervals constructed by the proposed method based on $500$ repeated experiments, for each case in both the regression and classification models. Generally, the coverage rate is near $0.95$ for the proposed overparameterized neural M-estimation method, especially when the sample size $n$ is large. This supports the potential usefulness of our proposed approach for semiparametric inference.

References

Ahmad, I., Leelahanon, S., and Li, Q. (2005). Efficient estimation of a semiparametric partially linear varying coefficient model. Annals of Statistics.

Allen-Zhu, Z., Li, Y., and Song, Z. (2019). A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242-252. PMLR.

Arora, S., Du, S. S., Hu, W., Li, Z., Salakhutdinov, R. R., and Wang, R. (2019). On exact computation with an infinitely wide neural net. Advances in Neural Information Processing Systems, 32.

Bach, F. R. (2014). Breaking the curse of dimensionality with convex neural networks. CoRR, abs/1412.8690.

Bartlett, P. L. and Mendelson, S. (2006).
Empirical minimization. Probability Theory and Related Fields, 135(3):311-334.

Bauer, B. and Kohler, M. (2019). On deep learning as a remedy for the curse of dimensionality in nonparametric regression. The Annals of Statistics, 47(4):2261-2285.

Bhattacharya, S., Fan, J., and Mukherjee, D. (2023). Deep neural networks for nonparametric interaction models with diverging dimension. arXiv preprint arXiv:2302.05851.

Bietti, A. and Bach, F. (2021). Deep equals shallow for ReLU networks in kernel regimes. In International Conference on Learning Representations.

Bietti, A. and Mairal, J. (2019). On the inductive bias of neural tangent kernels. Advances in Neural Information Processing Systems, 32.

Chen, L. and Xu, S. (2020). Deep neural tangent kernel and Laplace kernel have the same RKHS. arXiv preprint arXiv:2009.10683.

Chen, M., Jiang, H., Liao, W., and Zhao, T. (2019). Nonparametric regression on low-dimensional manifolds using deep ReLU networks: Function approximation and statistical recovery. arXiv preprint arXiv:1908.01842.

Chen, X., Liu, Y., Ma, S., and Zhang, Z. (2024). Causal inference of general treatment effects using neural networks with a diverging number of confounders. Journal of Econometrics, 238(1):105555.

Cheng, G. and Huang, J. Z. (2010). Bootstrap consistency for general semiparametric m-estimation. Annals of Statistics.

Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters.

Cloninger, A. and Klock, T. (2021). A deep network construction that adapts to intrinsic dimensionality beyond the domain. Neural Networks, 141:404-419.

Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society: Series B (Methodological), 34(2):187-202.

Cox, D. R. (1975). Partial likelihood. Biometrika, 62(2):269-276.

Ding, Y. and Nan, B. (2011).
A sieve M-theorem for bundled parameters in semiparametric models, with application to the efficient estimation in a linear model for censored data. Annals of Statistics, 39(6):2795.

Du, S. S., Zhai, X., PΓ³czos, B., and Singh, A. (2018). Gradient descent provably optimizes over-parameterized neural networks. CoRR, abs/1810.02054.

Fan, J. and Gu, Y. (2022). Factor augmented sparse throughput deep ReLU neural networks for high dimensional regression. arXiv preprint arXiv:2210.02002.

Farrell, M. H., Liang, T., and Misra, S. (2021). Deep neural networks for estimation and inference. Econometrica, 89(1):181-213.

Geifman, A., Yadav, A., Kasten, Y., Galun, M., Jacobs, D., and Ronen, B. (2020). On the similarity between the Laplace and neural tangent kernels. Advances in Neural Information Processing Systems, 33:1451-1461.

Horowitz, J. L. (2012). Semiparametric Methods in Econometrics, volume 131. Springer Science & Business Media.

Hu, T., Wang, W., Lin, C., and Cheng, G. (2021). Regularization matters: A nonparametric perspective on overparametrized neural network. In International Conference on Artificial Intelligence and Statistics, pages 829-837. PMLR.

Huang, J. (1999). Efficient estimation of the partly linear additive Cox model. The Annals of Statistics, 27(5):1536-1563.

Huang, K., Liu, M., and Ma, S. (2024). Nearly optimal learning using sparse deep ReLU networks in regularized empirical risk minimization with Lipschitz loss.

Jacot,
A., Gabriel, F., and Hongler, C. (2018). Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31.

Jiao, Y., Shen, G., Lin, Y., and Huang, J. (2021). Deep nonparametric regression on approximate manifolds: Non-asymptotic error bounds with polynomial prefactors.

Kohler, M. and Langer, S. (2021). On the rate of convergence of fully connected deep neural network regression estimates. The Annals of Statistics, 49(4):2231-2249.

Kosorok, M. R. (2008). Introduction to Empirical Processes and Semiparametric Inference, volume 61. Springer.

Lai, J., Xu, M., Chen, R., and Lin, Q. (2023a). Generalization ability of wide neural networks on R. arXiv preprint arXiv:2302.05933.

Lai, J., Yu, Z., Tian, S., and Lin, Q. (2023b). Generalization ability of wide residual networks.

LecuΓ©, G. (2011). Interplay between Concentration, Complexity and Geometry in Learning Theory with Applications to High Dimensional Data Analysis. PhD thesis, UniversitΓ© Paris-Est.

Lee, J., Xiao, L., Schoenholz, S., Bahri, Y., Novak, R., Sohl-Dickstein, J., and Pennington, J. (2019). Wide neural networks of any depth evolve as linear models under gradient descent. Advances in Neural Information Processing Systems, 32.

Li, Y. and Liang, Y. (2018). Learning overparameterized neural networks via stochastic gradient descent on structured data. Advances in Neural Information Processing Systems, 31.

Li, Y., Yu, Z., Chen, G., and Lin, Q. (2024). On the eigenvalue decay rates of a class of neural-network related kernel functions defined on general domains. Journal of Machine Learning Research, 25(82):1-47.

Liang, H., Liu, X., Li, R., and Tsai, C.-L. (2010). Estimation and testing for partially linear single-index models. Annals of Statistics, 38(6):3811.

Lu, J., Shen, Z., Yang, H., and Zhang, S. (2021). Deep network approximation for smooth functions. SIAM Journal on Mathematical Analysis, 53(5):5465-5506.

Ma, S.
and Kosorok, M. R. (2005a). Penalized log-likelihood estimation for partly linear transformation models with current status data. The Annals of Statistics, 33(5):2256-2290.

Ma, S. and Kosorok, M. R. (2005b). Robust semiparametric m-estimation and the weighted bootstrap. Journal of Multivariate Analysis, 96(1):190-217.

Ma, S., Linton, O., and Gao, J. (2021). Estimation and inference in semiparametric quantile factor models. Journal of Econometrics, 222(1):295-323.

Mammen, E. and Van de Geer, S. (1997). Penalized quasi-likelihood estimation in partial linear models. The Annals of Statistics, 25(3):1014-1035.

Mendelson, S. and Neeman, J. (2010). Regularization in kernel learning. The Annals of Statistics.

Nakada, R. and Imaizumi, M. (2020). Adaptive approximation and generalization of deep neural network with intrinsic dimensionality. Journal of Machine Learning Research, 21(174):1-38.

Raskutti, G., Wainwright, M. J., and Yu, B. (2014). Early stopping and non-parametric regression: an optimal data-dependent stopping rule. The Journal of Machine Learning Research, 15(1):335-366.

Rosenbaum, P. R. and Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41-55.

Schmidt-Hieber, J. (2019). Deep ReLU network approximation of functions on a manifold. arXiv preprint arXiv:1908.00695.

Schmidt-Hieber, J. (2020). Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics, 48(4):1875-1897.

Shen, Z. (2020). Deep network approximation characterized by number of neurons. Communications in Computational Physics, 28(5):1768-1811.

Shen, Z., Yang, H., and Zhang, S. (2022). Optimal approximation rate of ReLU networks in terms of width and depth. Journal de MathΓ©matiques Pures et AppliquΓ©es, 157:101-135.

Steinwart, I., Hush, D. R., Scovel, C., et al. (2009). Optimal rates for regularized least squares regression. In COLT, pages 79-93.

Suh, N., Ko, H., and Huo, X. (2021). A non-parametric regression viewpoint: Generalization of overparametrized deep ReLU network under noisy observations. In International Conference on Learning Representations.

Suzuki, T. (2018). Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. arXiv preprint arXiv:1810.08033.

Tang, W., He, K., Xu, G., and Zhu, J. (2023). Survival analysis via ordinary differential equations. Journal of the American Statistical Association, 118(544):2406-2421.

Tsiatis, A. A. (2006). Semiparametric Theory and Missing Data, volume 4. Springer.

Tsybakov, A. B. (2004). Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135-166.

Vakili, S., Bromberg, M., Shiu, D., and Bernacchia, A. (2021). Uniform generalization bounds for overparameterized neural networks. CoRR, abs/2109.06099.

Van de Geer, S. A. (2000). Empirical Processes in M-estimation, volume 6. Cambridge University Press.

Van der Vaart, A. W. (2000). Asymptotic Statistics, volume 3. Cambridge University Press.

Wang, J.-L., Xue, L., Zhu, L., and Chong, Y. S. (2010). Estimation for a partial-linear single-index model. Annals of Statistics.

Wang, X., Zhou, L., and Lin, H. (2024). Deep regression learning with optimal loss function. Journal of the American Statistical Association, pages 1-20.
Yan, S., Chen, Z., and Yao, F. (2025a). Supplement to β€œsemipa rametric estimation with overpa- rameterized neural network”. Yan, S., Yao, F., and Zhou, H. (2025b). Deep regression for re peated measurements. Journal of the American Statistical Association , 0(ja):1–23. Yao, Y., Rosasco, L., and Caponnetto, A. (2007). On early sto pping in gradient descent learning. Constructive approximation , 26(2):289–315. Yarotsky, D. (2017). Error bounds for approximations with d eep relu networks. Neural Networks , 94:103–114. Yarotsky, D. (2018). Optimal approximation of continuous f unctions by very deep relu networks. InConference on learning theory , pages 639–649. PMLR. Yarotsky, D. and Zhevnerchuk, A. (2020). The phase diagram o f approximation rates for deep neural networks. Advances in neural information processing systems , 33:13005–13015. Zeng, D. and Lin, D. (2007). Maximum likelihood estimation i n semiparametric regression models with censored data. Journal of the Royal Statistical Society Series B: Statisti cal Methodology , 69(4):507–564. 29 Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. ( 2021). Understanding deep learning (still) requires rethinking generalization. Communications of the ACM , 64(3):107–115. Zhong, Q., MΒ¨ uller, J., and Wang, J.-L. (2022). Deep learnin g for the partially linear Cox model. The Annals of Statistics , 50(3):1348 – 1375. Zhong, Q., MΒ¨ uller, J. W.,
Table 1: The mean square error (Γ—10⁻¹) for $\hat\beta$ and $\hat f$ of our method and baselines for the regression model.

MSE for $\hat\beta$:

| Case | n | Proposed | Spline | RKHS | Local Linear | Underpara |
|---|---|---|---|---|---|---|
| Case 1 | 500 | 0.2613 | 0.4987 | 0.6905 | 0.4148 | 2.0760 |
| | 1000 | 0.1140 | 0.1411 | 0.2511 | 0.1696 | 1.9988 |
| | 2000 | 0.0623 | 0.0513 | 0.0793 | 0.1607 | 1.9965 |
| Case 2 | 500 | 0.2429 | – | 3.7362 | 132.79 | 3.1735 |
| | 1000 | 0.1132 | – | 0.7956 | 20.478 | 3.0583 |
| | 2000 | 0.0494 | – | 0.1922 | 2.1419 | 3.0356 |
| Case 3 | 500 | 0.2316 | 0.6546 | 0.3940 | 116.13 | 7.7303 |
| | 1000 | 0.1066 | 0.1801 | 0.1895 | 78.535 | 8.1188 |
| | 2000 | 0.0496 | 0.0755 | 0.0662 | 119.39 | 8.1298 |
| Case 4 | 500 | 0.2598 | – | 0.4656 | 28.555 | 12.192 |
| | 1000 | 0.1098 | – | 0.1908 | 88.956 | 12.195 |
| | 2000 | 0.0457 | – | 0.0836 | 114.98 | 12.424 |

MSE for $\hat f$:

| Case | n | Proposed | Spline | RKHS | Local Linear | Underpara |
|---|---|---|---|---|---|---|
| Case 1 | 500 | 0.8625 | 46.266 | 6.6327 | 2.5206 | 2.8144 |
| | 1000 | 0.7015 | 3.4216 | 3.0720 | 2.4176 | 2.6657 |
| | 2000 | 0.6179 | 0.7906 | 1.5700 | 2.3582 | 2.6089 |
| Case 2 | 500 | 0.7824 | – | 17.558 | 67.927 | 2.8644 |
| | 1000 | 0.6084 | – | 7.4870 | 11.378 | 2.7192 |
| | 2000 | 0.5093 | – | 3.5917 | 2.1836 | 2.6467 |
| Case 3 | 500 | 0.6153 | 67.462 | 4.6669 | 89.284 | 40.214 |
| | 1000 | 0.4135 | 7.1053 | 2.3364 | 84.843 | 50.257 |
| | 2000 | 0.2956 | 2.6464 | 1.2638 | 89.256 | 68.170 |
| Case 4 | 500 | 0.9107 | – | 10.753 | 75.208 | 51.472 |
| | 1000 | 0.4489 | – | 5.5210 | 84.909 | 51.209 |
| | 2000 | 0.2606 | – | 3.0471 | 93.994 | 51.154 |

Table 2: The mean square error (Γ—10⁻¹) for $\hat\beta$ and $\hat f$ of our method and baselines for the classification model (15).
MSE for $\hat\beta$:

| Case | n | Proposed | Spline | RKHS | Local Linear | Underpara |
|---|---|---|---|---|---|---|
| Case 1 | 500 | 0.8821 | 13.561 | 8.8990 | 6.5733 | 2.4930 |
| | 1000 | 0.4411 | 12.637 | 2.9707 | 2.3129 | 1.7517 |
| | 2000 | 0.2133 | 12.583 | 1.5815 | 1.8543 | 1.6526 |
| Case 2 | 500 | 2.1877 | – | 8.1133 | 12.787 | 4.4772 |
| | 1000 | 0.5395 | – | 2.6608 | 11.939 | 4.1563 |
| | 2000 | 0.2177 | – | 1.5869 | 11.583 | 4.1018 |
| Case 3 | 500 | 1.0119 | 21.165 | 22.105 | 13.400 | 4.4189 |
| | 1000 | 0.5900 | 19.280 | 4.2309 | 11.161 | 4.7265 |
| | 2000 | 0.3318 | 17.656 | 1.1527 | 8.0244 | 4.5130 |
| Case 4 | 500 | 1.6781 | – | 5.5813 | 20.256 | 3.8216 |
| | 1000 | 0.6652 | – | 2.3483 | 19.570 | 3.7650 |
| | 2000 | 0.2812 | – | 2.5641 | 19.158 | 3.6724 |

MSE for $\hat f$:

| Case | n | Proposed | Spline | RKHS | Local Linear | Underpara |
|---|---|---|---|---|---|---|
| Case 1 | 500 | 4.5624 | 39.087 | 9.0062 | 61.677 | 6.9442 |
| | 1000 | 2.5099 | 39.043 | 5.0402 | 59.122 | 5.6270 |
| | 2000 | 1.7524 | 38.999 | 3.9864 | 58.263 | 5.2535 |
| Case 2 | 500 | 22.417 | – | 7.5404 | 29.051 | 11.793 |
| | 1000 | 5.0438 | – | 4.0622 | 28.991 | 11.529 |
| | 2000 | 1.8980 | – | 3.1317 | 29.023 | 10.751 |
| Case 3 | 500 | 7.0489 | 96.863 | 29.990 | 92.969 | 85.572 |
| | 1000 | 3.5228 | 96.762 | 14.225 | 91.686 | 86.227 |
| | 2000 | 2.2837 | 96.850 | 11.259 | 90.390 | 86.411 |
| Case 4 | 500 | 17.616 | – | 38.825 | 51.957 | 46.156 |
| | 1000 | 5.9230 | – | 35.755 | 51.703 | 45.684 |
| | 2000 | 3.6942 | – | 35.791 | 51.598 | 45.623 |

Table 3: The coverage probability for the constructed 95% confidence interval for $\beta=(\beta_1,\beta_2)$ in the regression model.

| Setting | $\beta_1$: n=500 | $\beta_1$: n=1000 | $\beta_1$: n=2000 | $\beta_2$: n=500 | $\beta_2$: n=1000 | $\beta_2$: n=2000 |
|---|---|---|---|---|---|---|
| Case 1 | 0.938 | 0.958 | 0.946 | 0.944 | 0.952 | 0.940 |
| Case 2 | 0.966 | 0.958 | 0.950 | 0.960 | 0.950 | 0.952 |
| Case 3 | 0.946 | 0.970 | 0.948 | 0.968 | 0.952 | 0.950 |
| Case 4 | 0.986 | 0.974 | 0.938 | 0.988 | 0.968 | 0.948 |

Table 4: The coverage probability for the constructed 95% confidence interval for $\beta=(\beta_1,\beta_2)$ in the classification model.

| Setting | $\beta_1$: n=500 | $\beta_1$: n=1000 | $\beta_1$: n=2000 | $\beta_2$: n=500 | $\beta_2$: n=1000 | $\beta_2$: n=2000 |
|---|---|---|---|---|---|---|
| Case 1 | 0.964 | 0.968 | 0.950 | 0.966 | | |
Quasi-Monte Carlo confidence intervals using quantiles of randomized nets

Zexin Panβˆ—

Abstract

Recent advances in quasi-Monte Carlo integration have demonstrated that the median trick significantly enhances the convergence rate of linearly scrambled digital net estimators. In this work, we leverage the quantiles of such estimators to construct confidence intervals with asymptotically valid coverage for high-dimensional integrals. By analyzing the distribution of the integration error for a class of infinitely differentiable integrands, we prove that as the sample size grows, the error decomposes into an asymptotically symmetric component and a vanishing perturbation, which guarantees that a quantile-based interval for the median estimator asymptotically captures the target integral with the nominal coverage probability.

1 Introduction

Quasi-Monte Carlo (QMC) methods have emerged as a powerful alternative to conventional Monte Carlo (MC) integration [4]. Like MC, QMC approximates high-dimensional integrals by averaging $n$ function evaluations. Unlike MC, however, QMC replaces random sampling with carefully constructed point sets designed to efficiently explore the integration domain. This paper focuses on a class of constructions called digital nets, introduced in Subsection 2.1. This systematic approach allows QMC to mitigate the curse of dimensionality more effectively than classical quadrature rules while achieving a convergence rate faster than MC under smoothness assumptions.

Despite their success, QMC estimators based on digital nets face challenges in error quantification [18]. Conventional solutions employ randomization techniques to generate independent replicates of QMC means, from which t-confidence intervals are constructed. Common choices of randomization are Owen's scrambling [17] and MatouΕ‘ek's random linear scrambling [15].
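Both kinds of interval mentioned so far, t-intervals built from independent randomized replicates and the quantile-based intervals studied in this paper, can be sketched generically. This is a minimal illustration under our own conventions, not the paper's exact procedure: `t_crit` must be supplied by the caller (e.g. a two-sided critical value for $r-1$ degrees of freedom from a t-table), and the replicate values fed in are hypothetical.

```python
import math
import statistics

def t_interval(reps, t_crit):
    # t-interval from r independent randomized-QMC replicates:
    # mean +/- t_crit * stdev / sqrt(r)
    r = len(reps)
    mean = statistics.fmean(reps)
    half = t_crit * statistics.stdev(reps) / math.sqrt(r)
    return (mean - half, mean + half)

def quantile_interval(reps, alpha):
    # quantile-based interval: empirical alpha/2 and 1 - alpha/2
    # order statistics of the replicates (one simple indexing convention)
    srt = sorted(reps)
    r = len(srt)
    return (srt[math.floor(alpha / 2 * (r - 1))],
            srt[math.ceil((1 - alpha / 2) * (r - 1))])
```

The quantile interval needs no variance estimate, which is the robustness property the introduction appeals to: heavy-tailed replicates inflate the sample standard deviation in `t_interval` but barely move the order statistics.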
βˆ— Johann Radon Institute for Computational and Applied Mathematics, Γ–AW, Altenbergerstrasse 69, 4040 Linz, Austria (zexin.pan@oeaw.ac.at).

While theoretical work by [14] establishes the asymptotic normality of Owen-scrambled QMC means in some restricted cases and thereby justifies t-intervals, their convergence rate is non-adaptive: the variance in general cannot decay faster than $O(n^{-3})$, even for integrands with higher smoothness. In contrast, the random linear scrambling produces estimators with the same variance as Owen's method [24] but markedly different error behavior. These estimators lack asymptotic normality and instead exhibit error concentration phenomena that adapt to the smoothness of integrands. Notably, [19] demonstrates that the median of linearly scrambled QMC means converges to the target integral at nearly optimal rates across a broad class of function spaces. Due to outlier sensitivity, t-intervals under the linear scrambling often overestimate uncertainty and exceed nominal coverage, as observed empirically in [12]. Quantile-based intervals, while more robust and empirically accurate, lack theoretical guarantees on coverage: a critical open question this work addresses.

Before presenting our results, we situate our contributions within the context of existing methods. Recent work by [16] proposes asymptotically valid t-intervals by allowing the number of independent QMC replicates $r$ to grow polynomially with the per-replicate sample size $n$. However, this approach incurs a total computational cost of $O(n^{1+c})$ for $r=O(n^c)$, resulting in suboptimal convergence rates. Quantile-based intervals circumvent this limitation and achieve asymptotic validity without requiring $r$ to scale with $n$. An alternative approach by [8] introduces robust estimation techniques to
https://arxiv.org/abs/2504.19138v1
handle non-normal errors, but still requires reliable variance estimation and remains non-adaptive: stronger smoothness assumptions on the integrand do not improve the convergence rate. Specialized methods like higher order scrambled digital nets [3] attain optimal rates under explicit smoothness priors and enable empirically valid t-intervals, though rigorous coverage guarantees remain unproven. For completely monotone integrands, point sets with non-negative (or non-positive) local discrepancy yield computable upper (or lower) error bounds [7], but their convergence rates degrade with the dimension $s$ and become unattractive for $s>4$. See also [18] for a comprehensive survey. Together, these gaps motivate our focus on quantile-based intervals, which adapt to the integrand's smoothness while provably achieving asymptotically valid coverage.

This paper is structured as follows. Section 2 introduces foundational concepts and notation, including the Walsh decomposition framework and properties of Walsh coefficients critical to our analysis. Section 3 presents and proves our main theorem under the complete random design, a simplified yet illustrative randomization scheme. After outlining the proof strategy, Subsections 3.1–3.3 systematically address each critical component of the argument. Subsection 3.4 derives crucial corollaries, demonstrating that quantile-based intervals asymptotically achieve the nominal coverage level for a class of infinitely differentiable integrands. Section 4 generalizes these results to broader randomization choices, with the random linear scrambling as a key special case. Section 5 empirically validates our theory on two highly skewed integrands. Finally, Section 6 identifies challenges in extending these results to non-smooth integrands and concludes the paper with a discussion of interesting research questions.

2 Background and notation

Let $\mathbb{N}=\{1,2,3,\dots
\}$ denote the natural numbers, $\mathbb{N}_0=\mathbb{N}\cup\{0\}$, and $\mathbb{N}^s_*=\mathbb{N}^s_0\setminus\{\mathbf{0}\}$ (excluding the zero vector). For $\ell\in\mathbb{N}$, we define $\mathbb{Z}_\ell=\{0,1,\dots,\ell-1\}$. The dimension of the integration domain is $s\in\mathbb{N}$, with $1{:}s=\{1,2,\dots,s\}$. For a matrix $C$, $C(\ell,:)$ denotes its $\ell$'th row. The indicator function $\mathbb{1}\{A\}$ equals 1 if event $A$ occurs and 0 otherwise. For a finite set $K$, $|K|$ is its cardinality, and $U(K)$ represents the uniform distribution over $K$. Equality in distribution is written as $X\stackrel{d}{=}Y$. For asymptotics, $a_m\sim b_m$ denotes $\lim_{m\to\infty}a_m/b_m=1$ and $a_m\sim\sum_{\ell=1}^L b_{m,\ell}$ recursively means $a_m-\sum_{\ell=1}^{L'-1}b_{m,\ell}\sim b_{m,L'}$ for $2\leqslant L'\leqslant L$.

The integrand $f:[0,1]^s\to\mathbb{R}$ has $L^1$-norm $\|f\|_1=\int_{[0,1]^s}|f(x)|\,dx$ and $L^\infty$-norm $\|f\|_\infty=\sup_{x\in[0,1]^s}|f(x)|$. Let $C([0,1]^s)$ and $C^\infty([0,1]^s)$ denote the spaces of continuous and infinitely differentiable functions, respectively. Quasi-Monte Carlo (QMC) methods approximate the integral
$$\mu=\int_{[0,1]^s}f(x)\,dx \quad\text{by}\quad \hat\mu=\frac{1}{n}\sum_{i=0}^{n-1}f(x_i)$$
for specially constructed points $\{x_i,\,i\in\mathbb{Z}_n\}\subseteq[0,1]^s$. In this paper, we choose $\{x_i,\,i\in\mathbb{Z}_n\}$ to be the base-2 digital net defined in the next subsection.

2.1 Digital nets and randomization

For $m\in\mathbb{N}$ and $i\in\mathbb{Z}_{2^m}$, let the binary expansion $i=\sum_{\ell=1}^m i_\ell 2^{\ell-1}$ be represented by the vector $\vec{i}=\vec{i}[m]=(i_1,\dots,i_m)^T\in\{0,1\}^m$. Similarly, for $a\in[0,1)$ and precision $E\in\mathbb{N}$, we truncate the binary expansion $a=\sum_{\ell=1}^\infty a_\ell 2^{-\ell}$ to $E$ digits, denoted $\vec{a}=\vec{a}[E]=(a_1,\dots,a_E)^T\in\{0,1\}^E$. For dyadic rationals (numbers with dual binary expansions), we select the representation terminating in zeros. Let $s$ matrices $C_j\in\{0,1\}^{E\times m}$ define a base-2 digital net over $[0,1]^s$. The unrandomized points $x_i=(x_{i1},\dots,x_{is})$ are generated by
https://arxiv.org/abs/2504.19138v1
$$\vec{x}_{ij}=C_j\vec{i} \bmod 2 \quad\text{for } i\in\mathbb{Z}_{2^m},\ j\in 1{:}s, \qquad (1)$$
where $\vec{x}_{ij}\in\{0,1\}^E$ represents $x_{ij}\in[0,1)$ truncated to $E$ digits (trailing digits set to 0). Typically, $E\leqslant m$ for unrandomized digital nets. We introduce randomization via
$$\vec{x}_{ij}=C_j\vec{i}+\vec{D}_j \bmod 2, \qquad (2)$$
where $C_j\in\{0,1\}^{E\times m}$ and $\vec{D}_j\in\{0,1\}^E$ are random with precision $E\geqslant m$. The vector $\vec{D}_j$ is called the digital shift and consists of independent $U(\{0,1\})$ entries. A widely used method to randomize $C_j$ is the random linear scrambling [15]: $C_j=M_jC_j \bmod 2$, where $M_j\in\{0,1\}^{E\times m}$ is a random lower-triangular matrix with ones on the diagonal and $U(\{0,1\})$ entries below, and the $C_j$ on the right-hand side is a fixed generating matrix in $\{0,1\}^{m\times m}$ designed to avoid linear dependencies (see [4, Chapter 4.4] for details).

Despite the popularity of the random linear scrambling, its dependence on the fixed $C_j$ causes technical difficulties, so we postpone its analysis until Section 4. In Section 3, we instead use the complete random design [19], where all entries of $C_j$ are independently drawn from $U(\{0,1\})$. This retains the asymptotic convergence rate of the random linear scrambling without requiring pre-designed generating matrices. Numerically, errors under the complete random design are typically larger than those under the random linear scrambling, but the difference diminishes as $m$ increases.

Let $x_i[E]$ denote points from equation (2) with precision $E$. Our QMC estimator for $\mu$ is
$$\hat\mu_E=\frac{1}{n}\sum_{i=0}^{n-1}f(x_i) \quad\text{for } x_i=x_i[E]. \qquad (3)$$
For most of our paper, we conveniently assume $E=\infty$ and focus our analysis on $\hat\mu_\infty$. Practical implementation uses finite $E$, often constrained by the floating point representation in use. Corollary 3 quantifies the required $E$ to ensure the truncation error $|\hat\mu_E-\hat\mu_\infty|$ is negligible.

2.2 Fourier–Walsh decomposition

Walsh functions serve as the natural orthonormal basis for analyzing base-2 digital nets. For $k\in\mathbb{N}_0$ and $x\in[0,1)$, the univariate Walsh function $\mathrm{wal}_k(x)$ is defined by
$$\mathrm{wal}_k(x)=(-1)^{\vec{k}^T\vec{x}},$$
where $\vec{k}\in\{0,1\}^\infty$ and $\vec{x}\in\{0,1\}^\infty$ are the binary expansions of $k$ and $x$, respectively.
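Equation (2) under the complete random design is easy to prototype. The sketch below is our own illustration, not the authors' code (function and variable names are assumptions): every entry of $C_j$ and of the digital shift $\vec{D}_j$ is drawn uniformly from $\{0,1\}$, and all $n=2^m$ randomized points are emitted.

```python
import random

def bit_vec(i, m):
    # binary expansion of i, least significant bit first: i = sum_l i_l 2^(l-1)
    return [(i >> l) & 1 for l in range(m)]

def complete_random_design_points(m, s, E, rng):
    # Equation (2): the digit vector of x_ij is C_j i + D_j mod 2, with every
    # entry of C_j (E x m) and of the digital shift D_j drawn from U({0,1})
    C = [[[rng.randrange(2) for _ in range(m)] for _ in range(E)] for _ in range(s)]
    D = [[rng.randrange(2) for _ in range(E)] for _ in range(s)]
    points = []
    for i in range(2 ** m):
        iv = bit_vec(i, m)
        coords = []
        for j in range(s):
            digits = [(sum(C[j][l][t] * iv[t] for t in range(m)) + D[j][l]) % 2
                      for l in range(E)]
            coords.append(sum(d * 2.0 ** -(l + 1) for l, d in enumerate(digits)))
        points.append(tuple(coords))
    return points
```

Averaging $f$ over the returned points gives the estimator $\hat\mu_E$ of equation (3); re-running with fresh randomness yields the independent replicates used for intervals.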
Since $\vec{k}$ contains a finite number of nonzero entries, a finite-precision truncation suffices for computation. For multivariate functions, the $s$-dimensional Walsh function $\mathrm{wal}_{\mathbf{k}}:[0,1)^s\to\{-1,1\}$ is given by the tensor product
$$\mathrm{wal}_{\mathbf{k}}(x)=\prod_{j=1}^s\mathrm{wal}_{k_j}(x_j)=(-1)^{\sum_{j=1}^s\vec{k}_j^T\vec{x}_j},$$
where $\mathbf{k}=(k_1,\dots,k_s)\in\mathbb{N}_0^s$. These functions form a complete orthonormal basis for $L^2([0,1]^s)$ [4], enabling the Walsh decomposition:
$$f(x)=\sum_{\mathbf{k}\in\mathbb{N}_0^s}\hat f(\mathbf{k})\,\mathrm{wal}_{\mathbf{k}}(x), \quad\text{where} \qquad (4)$$
$$\hat f(\mathbf{k})=\int_{[0,1]^s}f(x)\,\mathrm{wal}_{\mathbf{k}}(x)\,dx.$$
Equality (4) holds in the $L^2$ sense. Building on this, [21] derives the following error decomposition for QMC estimators:

Lemma 1. For $f\in C([0,1]^s)$, the error of $\hat\mu_\infty$ defined by equation (3) satisfies
$$\hat\mu_\infty-\mu=\sum_{\mathbf{k}\in\mathbb{N}^s_*}Z(\mathbf{k})S(\mathbf{k})\hat f(\mathbf{k}), \qquad (5)$$
where
$$Z(\mathbf{k})=\mathbb{1}\Big\{\sum_{j=1}^s\vec{k}_j^TC_j=\mathbf{0}\bmod 2\Big\} \quad\text{and}\quad S(\mathbf{k})=(-1)^{\sum_{j=1}^s\vec{k}_j^T\vec{D}_j}.$$

We note that every $S(\mathbf{k})$ follows a $U(\{-1,1\})$ distribution. The distribution of $Z(\mathbf{k})$ depends on $m$, $\mathbf{k}$ and the choice of randomization for $C_j$. Under the complete random design, each $Z(\mathbf{k})$ follows a Bernoulli distribution with success probability $2^{-m}$ and $\{Z(\mathbf{k}),\mathbf{k}\in\mathbb{N}^s_*\}$ are pairwise independent. Their distribution under more general randomization schemes is analyzed in Section 4.

2.3 Notation involving $\mathbf{k}$ and $\kappa$

For $k=\sum_{\ell=1}^\infty a_\ell 2^{\ell-1}\in\mathbb{N}_0$, we define the set of nonzero bits $\kappa=\{\ell\in\mathbb{N}\mid a_\ell=1\}\subseteq\mathbb{N}$. The bijection between $k$ and $\kappa$ allows interchangeable use of integer and set notation. In this framework, we can rewrite $Z(\mathbf{k})$ as
$$Z(\mathbf{k})=\mathbb{1}\Big\{\sum_{j=1}^s\sum_{\ell\in\kappa_j}C_j(\ell,:)=\mathbf{0}\bmod 2\Big\}$$
where $\mathbf{k}=(k_1,\dots,k_s)$ and $\kappa_j$ is the set of nonzero bits of $k_j$. Next, we define some useful norms
https://arxiv.org/abs/2504.19138v1
on $\mathbf{k}$ and $\kappa$. For a finite subset $\kappa\subseteq\mathbb{N}$, we denote the cardinality of $\kappa$ as $|\kappa|$, the sum of elements in $\kappa$ as $\|\kappa\|_1$ and the largest element of $\kappa$ as $\lceil\kappa\rceil$. When $\kappa=\emptyset$, we conventionally define $|\kappa|=\|\kappa\|_1=\lceil\kappa\rceil=0$. For $\mathbf{k}=(k_1,\dots,k_s)$ and the corresponding $\kappa=(\kappa_1,\dots,\kappa_s)$, we define
$$\|\mathbf{k}\|_0=\|\kappa\|_0=\sum_{j=1}^s|\kappa_j|, \qquad \|\mathbf{k}\|_1=\|\kappa\|_1=\sum_{j=1}^s\|\kappa_j\|_1 \quad\text{and}\quad \lceil\mathbf{k}\rceil=\lceil\kappa\rceil=\max_{j\in 1:s}\lceil\kappa_j\rceil.$$

In our later analysis, it is helpful to view $\mathbb{N}_0^s$ as an $\mathbb{F}_2$-vector space. For $\mathbf{k}_1=(k_{1,1},\dots,k_{s,1})$ and $\mathbf{k}_2=(k_{1,2},\dots,k_{s,2})$, we define the sum of $\mathbf{k}_1$ and $\mathbf{k}_2$ to be $\mathbf{k}_1\oplus\mathbf{k}_2=(k^\oplus_1,\dots,k^\oplus_s)$ with $\vec{k}^\oplus_j=\vec{k}_{j,1}+\vec{k}_{j,2}\bmod 2$ for each $j\in 1{:}s$. In other words, each $\kappa^\oplus_j$ is the symmetric difference of $\kappa_{j,1}$ and $\kappa_{j,2}$. We also write $\oplus_{i=1}^r\mathbf{k}_i$ for the sum of $\mathbf{k}_1,\dots,\mathbf{k}_r$. For a finite subset $V\subseteq\mathbb{N}_0^s$, we define the rank of $V$ as the size of its largest linearly independent subset. We say $V$ has full rank if $\mathrm{rank}(V)=|V|$. One can verify that
$$S(\oplus_{i=1}^r\mathbf{k}_i)=\prod_{i=1}^r S(\mathbf{k}_i)$$
and $\{S(\mathbf{k}),\mathbf{k}\in V\}$ are jointly independent if $V$ has full rank.

2.4 Bounds on Walsh coefficients

The following lemma relates the Walsh coefficients $\hat f(\mathbf{k})$ to the partial derivatives of $f$. For $|\kappa|=(|\kappa_1|,\dots,|\kappa_s|)\in\mathbb{N}_0^s$, let
$$f_{|\kappa|}=\frac{\partial^{\|\kappa\|_0}f}{\partial x_1^{|\kappa_1|}\cdots\partial x_s^{|\kappa_s|}}.$$

Lemma 2. For $f\in C^\infty([0,1]^s)$,
$$\hat f(\mathbf{k})=(-1)^{\|\kappa\|_0}\int_{[0,1]^s}f_{|\kappa|}(x)\prod_{j=1}^s W_{\kappa_j}(x_j)\,dx, \qquad (6)$$
where $W_\kappa:[0,1]\to\mathbb{R}$ for $\kappa\subseteq\mathbb{N}$ is defined recursively by $W_\emptyset(x)=1$ and
$$W_\kappa(x)=\int_{[0,1]}(-1)^{\vec{x}(\lfloor\kappa\rfloor)}W_{\kappa\setminus\lfloor\kappa\rfloor}(x)\,dx$$
with $\vec{x}(\ell)$ denoting the $\ell$'th bit of $x$ and $\lfloor\kappa\rfloor$ denoting the smallest element of $\kappa$. In particular, $W_\kappa(x)$ for nonempty $\kappa$ is continuous, nonnegative, periodic with period $2^{-\lfloor\kappa\rfloor+1}$ and satisfies
$$\int_{[0,1]}W_\kappa(x)\,dx=\prod_{\ell\in\kappa}2^{-\ell-1} \quad\text{and}\quad \max_{x\in[0,1]}W_\kappa(x)=2\prod_{\ell\in\kappa}2^{-\ell-1}. \qquad (7)$$

Proof. Theorem 2.5 of [23] with $n_j=|\kappa_j|$ implies equation (6). Properties of $W_\kappa(x)$ are proven in Section 3 of [23].

Corollary 1. For $f\in C^\infty([0,1]^s)$,
$$|\hat f(\mathbf{k})|\leqslant 2^{-\|\kappa\|_1}\|f_{|\kappa|}\|_1.$$

Proof.
By equation (7),
$$\Big\|\prod_{j=1}^s W_{\kappa_j}(x_j)\Big\|_\infty\leqslant\prod_{j\in 1:s,\,\kappa_j\neq\emptyset}2\prod_{\ell\in\kappa_j}2^{-\ell-1}\leqslant\prod_{j\in 1:s}\prod_{\ell\in\kappa_j}2^{-\ell}=2^{-\|\kappa\|_1}.$$
The result follows after applying HΓΆlder's inequality to equation (6).

3 Proof of main results

In this section, we aim to prove our main theorem:

Theorem 1. Suppose $f\in C^\infty([0,1]^s)$ satisfies the assumptions of Theorem 6. Then under the complete random design
$$\lim_{m\to\infty}\Pr(\hat\mu_\infty<\mu)+\frac{1}{2}\Pr(\hat\mu_\infty=\mu)=\frac{1}{2}.$$

The proof strategy is as follows. Given a sequence of subsets $K_m\subseteq\mathbb{N}^s_*$, we decompose the error $\hat\mu_\infty-\mu$ into two components by defining
$$\mathrm{SUM}_1=\sum_{\mathbf{k}\in K_m}Z(\mathbf{k})S(\mathbf{k})\hat f(\mathbf{k}) \qquad (8)$$
and
$$\mathrm{SUM}_2=\sum_{\mathbf{k}\in\mathbb{N}^s_*\setminus K_m}Z(\mathbf{k})S(\mathbf{k})\hat f(\mathbf{k}). \qquad (9)$$
By Lemma 1, $\hat\mu_\infty-\mu=\mathrm{SUM}_1+\mathrm{SUM}_2$. We further define
$$\mathrm{SUM}'_1=\sum_{\mathbf{k}\in K_m}Z(\mathbf{k})S'(\mathbf{k})\hat f(\mathbf{k}) \qquad (10)$$
where each $S'(\mathbf{k})$ is independently drawn from $U(\{-1,1\})$. We want $K_m$ to be small enough so that $\mathrm{SUM}_1$ and $\mathrm{SUM}'_1$ have approximately the same distribution, and meanwhile large enough so that $|\mathrm{SUM}_2/\mathrm{SUM}_1|<1$ with high probability, as specified in the following lemma:

Lemma 3. Suppose for a sequence of subsets $K_m\subseteq\mathbb{N}^s_*$ and $\mathrm{SUM}_1,\mathrm{SUM}_2,\mathrm{SUM}'_1$ defined as above, we have
$$\lim_{m\to\infty}d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1)=0,$$
where $d_{\mathrm{TV}}(X,Y)$ is the total variation distance between the distributions of random variables $X$ and $Y$, and
$$\lim_{m\to\infty}\Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|)=0.$$
Then
$$\lim_{m\to\infty}\Pr(\hat\mu_\infty<\mu)+\frac{1}{2}\Pr(\hat\mu_\infty=\mu)=\frac{1}{2}.$$

Proof. First notice that $\Pr(\hat\mu_\infty<\mu)=\Pr(\mathrm{SUM}_1+{}$
https://arxiv.org/abs/2504.19138v1
$\mathrm{SUM}_2<0)$
$\geqslant\Pr(\mathrm{SUM}_1<0\ \text{and}\ |\mathrm{SUM}_1|>|\mathrm{SUM}_2|)$
$\geqslant\Pr(\mathrm{SUM}_1<0)-\Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|)$
$\geqslant\Pr(\mathrm{SUM}'_1<0)-d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1)-\Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|)$.
Similarly,
$$\Pr(\hat\mu_\infty\leqslant\mu)\geqslant\Pr(\mathrm{SUM}'_1\leqslant 0)-d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1)-\Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|).$$
Hence
$$\Pr(\hat\mu_\infty<\mu)+\Pr(\hat\mu_\infty\leqslant\mu)-\Pr(\mathrm{SUM}'_1<0)-\Pr(\mathrm{SUM}'_1\leqslant 0)\geqslant-2d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1)-2\Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|). \qquad (11)$$
Because $\mathrm{SUM}'_1$ is, when conditioned on $Z(\mathbf{k})$, a sum of independent symmetric random variables, we always have $\Pr(\mathrm{SUM}'_1<0)+\Pr(\mathrm{SUM}'_1\leqslant 0)=1$. Our assumptions then imply
$$\liminf_{m\to\infty}\Pr(\hat\mu_\infty<\mu)+\Pr(\hat\mu_\infty\leqslant\mu)\geqslant 1.$$
A similar argument shows
$$\Pr(\hat\mu_\infty>\mu)+\Pr(\hat\mu_\infty\geqslant\mu)-\Pr(\mathrm{SUM}'_1>0)-\Pr(\mathrm{SUM}'_1\geqslant 0)\geqslant-2d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1)-2\Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|) \qquad (12)$$
and
$$\liminf_{m\to\infty}\Pr(\hat\mu_\infty>\mu)+\Pr(\hat\mu_\infty\geqslant\mu)\geqslant 1,$$
which gives the limit superior of $\Pr(\hat\mu_\infty<\mu)+\Pr(\hat\mu_\infty\leqslant\mu)$ by taking the complement. Hence,
$$\lim_{m\to\infty}\Pr(\hat\mu_\infty<\mu)+\frac{1}{2}\Pr(\hat\mu_\infty=\mu)=\frac{1}{2}\lim_{m\to\infty}\big[\Pr(\hat\mu_\infty<\mu)+\Pr(\hat\mu_\infty\leqslant\mu)\big]=\frac{1}{2}.$$

In order to apply the above lemma and prove Theorem 1, we define
$$Q_N=\{\mathbf{k}\in\mathbb{N}^s_*\mid\|\kappa\|_1\leqslant N\} \qquad (13)$$
and $K_m=Q_{N_m}$ with
$$N_m=\sup\{N\in\mathbb{N}_0\mid|Q_N|\leqslant c_s m 2^m\}, \qquad (14)$$
where $c_s$ is a positive constant to be specified in equation (19). Notice that $\mathbf{0}\notin Q_N$ and $Q_0=\emptyset$. Corollary 4 of [21] shows
$$|Q_N|\sim D_s N^{1/4}\exp\Big(\pi\sqrt{\frac{sN}{3}}\Big) \qquad (15)$$
for a constant $D_s$ depending on $s$, which implies $\lim_{N\to\infty}|Q_{N+1}|/|Q_N|=1$ and, because $|Q_{N_m}|\leqslant c_s m 2^m<|Q_{N_m+1}|$,
$$|Q_{N_m}|\sim c_s m 2^m. \qquad (16)$$
Equating the right hand side of equation (15) with $c_s m 2^m$, a straightforward calculation shows
$$N_m\sim\lambda m^2/s+3\lambda m\log_2(m)/s+D'_s m \qquad (17)$$
for $\lambda=3(\log 2)^2/\pi^2$ and a constant $D'_s$ depending on $s$ and $c_s$. We will show $K_m=Q_{N_m}$ satisfies the assumptions of Lemma 3. The proof contains the following three steps:

β€’ Step 1: prove $\lim_{m\to\infty}d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1)=0$.
β€’ Step 2: prove $\lim_{m\to\infty}\Pr(|\mathrm{SUM}_2|\geqslant T_m)=0$ for a sequence $T_m$ specified later in Corollary 2.
β€’ Step 3: prove $\lim_{m\to\infty}\Pr(|\mathrm{SUM}'_1|>T_m)=1$.
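For small $N$ and $s$, the set $Q_N$ from equation (13) can be enumerated directly, which gives a concrete feel for the counting arguments above. This is a brute-force sketch of our own (the asymptotic formula (15) only bites for large $N$); the helper names are assumptions.

```python
from itertools import product

def kappa_norm1(k):
    # ||kappa||_1 for one coordinate: sum of 1-indexed positions of the
    # nonzero bits of the nonnegative integer k
    total, pos = 0, 1
    while k:
        if k & 1:
            total += pos
        k >>= 1
        pos += 1
    return total

def Q_size(N, s):
    # |Q_N| by enumeration: k in N_0^s \ {0} with sum_j ||kappa_j||_1 <= N;
    # any k_j >= 2^N has a bit above position N and already exceeds the budget
    count = 0
    for k in product(range(2 ** N), repeat=s):
        if any(k) and sum(kappa_norm1(kj) for kj in k) <= N:
            count += 1
    return count
```

For example, with $s=1$ the counts $|Q_1|,|Q_2|,|Q_3|=1,2,4$ track the number of partitions of weight budgets into distinct parts, consistent with the exponential growth in equation (15).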
Notice by Steps 1 and 3,
$$\lim_{m\to\infty}\Pr(|\mathrm{SUM}_1|>T_m)=\lim_{m\to\infty}\Pr(|\mathrm{SUM}'_1|>T_m)=1,$$
and then by Step 2,
$$\lim_{m\to\infty}\Pr(|\mathrm{SUM}_1|>T_m>|\mathrm{SUM}_2|)=1,$$
so Lemma 3 can be applied. The following three subsections are devoted to their proof.

3.1 Proof of Step 1

We first show the number of summands in $\mathrm{SUM}_1$ is bounded by $2c_s m$ with high probability.

Lemma 4. Under the complete random design,
$$\Pr\Big(\sum_{\mathbf{k}\in Q_{N_m}}Z(\mathbf{k})\geqslant 2c_s m\Big)\leqslant\frac{1}{c_s m}.$$

Proof. First recall that $\Pr(Z(\mathbf{k})=1)=2^{-m}$ and $\{Z(\mathbf{k}),\mathbf{k}\in Q_{N_m}\}$ are pairwise independent. By Chebyshev's inequality,
$$\Pr\Big(\Big|\sum_{\mathbf{k}\in Q_{N_m}}Z(\mathbf{k})-2^{-m}|Q_{N_m}|\Big|\geqslant c_s m\Big)\leqslant\frac{1}{c_s^2m^2}\mathrm{Var}\Big(\sum_{\mathbf{k}\in Q_{N_m}}Z(\mathbf{k})\Big)=\frac{1}{c_s^2m^2}2^{-m}(1-2^{-m})|Q_{N_m}|\leqslant\frac{1}{c_s^2m^2}2^{-m}|Q_{N_m}|.$$
Our conclusion then follows from $|Q_{N_m}|\leqslant c_s m 2^m$.

Next, we show $Q_N$ contains very few additive relations with the addition $\oplus$ defined in Subsection 2.3. The proof is given in the appendix.

Lemma 5. Let $N\geqslant 1$ and $\mathbf{k}_1,\dots,\mathbf{k}_r$ be sampled independently from $U(Q_N)$. Then there exist positive constants $A_s,B_s$ depending on $s$ such that for all $r\geqslant 2$
$$\Pr\big(\oplus_{i=1}^r\mathbf{k}_i\in Q_N\big)\leqslant A_s^r N^{r/4}r^{-B_s\sqrt{N}}. \qquad (18)$$

As a consequence, we have the following bound on the cardinality of minimally rank-deficient subsets of $Q_N$.

Lemma 6. Let
$$I=\{V\subseteq Q_N\mid\mathrm{rank}(V)<|V|\},$$
$$I^*=\{V\in I\mid\text{every proper }W\subset V\text{ has full rank}\},$$
$$I^*_r=I^*\cap\{V\subseteq Q_N\mid|V|=r\}.$$
Then with $A_s,B_s$ from Lemma 5, we have for $r\geqslant 2$
$$|I^*_{r+1}|\leqslant\frac{|Q_N|^r}{(r+1)!}A_s^r N^{r/4}r^{-B_s\sqrt{N}}.$$

Proof. Notice that $(r+1)!\,|I^*_{r+1}|/|Q_N|^{r+1}$ is the probability that independent $\mathbf{k}_1,\dots,\mathbf{k}_{r+1}$ sampled from $U(Q_N)$ constitute a set
https://arxiv.org/abs/2504.19138v1
$V\in I^*_{r+1}$, which is further bounded by the probability that $\oplus_{i=1}^{r+1}\mathbf{k}_i=\mathbf{0}$ since all proper subsets $W$ of $V$ have full rank. Because for any given $\mathbf{k}_1,\dots,\mathbf{k}_r$, there is at most one $\mathbf{k}_{r+1}\in Q_N$ with $\oplus_{i=1}^{r+1}\mathbf{k}_i=\mathbf{0}$, we therefore have
$$\frac{(r+1)!\,|I^*_{r+1}|}{|Q_N|^{r+1}}\leqslant\frac{1}{|Q_N|}\Pr\big(\oplus_{i=1}^r\mathbf{k}_i\in Q_N\big)\leqslant\frac{1}{|Q_N|}A_s^r N^{r/4}r^{-B_s\sqrt{N}}.$$
The conclusion follows after rearrangement.

Theorem 2. Define $c_s$ from equation (14) to be
$$c_s=\frac{1}{4}B_s\sqrt{\lambda/s} \qquad (19)$$
with $\lambda=3(\log 2)^2/\pi^2$ and $B_s$ from Lemma 5. Then under the complete random design, there exist constants $d_s,m_s$ depending on $s$ such that for $m\geqslant m_s$
$$d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1)\leqslant\frac{1}{c_s m}+m^{d_s}2^{-4c_s m}.$$

Proof. Let $\mathcal{V}=\{\mathbf{k}\in Q_{N_m}\mid Z(\mathbf{k})=1\}$. We can rewrite $\mathrm{SUM}_1$ as
$$\mathrm{SUM}_1=\sum_{V\subseteq Q_{N_m}}\mathbb{1}\{\mathcal{V}=V\}\sum_{\mathbf{k}\in V}S(\mathbf{k})\hat f(\mathbf{k}).$$
When $V=\emptyset$, we conventionally define the sum over $V$ as 0. Because $\{Z(\mathbf{k}),\mathbf{k}\in Q_{N_m}\}$ are independent of $\{S(\mathbf{k}),\mathbf{k}\in Q_{N_m}\}$, we see the distribution of $\mathrm{SUM}_1$ is a mixture of $\sum_{\mathbf{k}\in V}S(\mathbf{k})\hat f(\mathbf{k})$ weighted by $\Pr(\mathcal{V}=V)$. A similar argument shows $\mathrm{SUM}'_1$ is a mixture of $\sum_{\mathbf{k}\in V}S'(\mathbf{k})\hat f(\mathbf{k})$ weighted by $\Pr(\mathcal{V}=V)$. When $V$ has full rank, $\{S(\mathbf{k})\mid\mathbf{k}\in V\}$ are jointly independent and
$$\sum_{\mathbf{k}\in V}S(\mathbf{k})\hat f(\mathbf{k})\stackrel{d}{=}\sum_{\mathbf{k}\in V}S'(\mathbf{k})\hat f(\mathbf{k}).$$
Letting $I_m$ be $I$ from Lemma 6 with $N=N_m$, we have the bound
$$d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1)\leqslant\sum_{V\in I_m}\Pr(\mathcal{V}=V)=\Pr(\mathcal{V}\in I_m), \qquad (20)$$
where we have used the fact that the total variation distance satisfies the triangle inequality and is bounded by 1 between any two distributions. By Lemma 4, we further have
$$\Pr(\mathcal{V}\in I_m)\leqslant\Pr(\mathcal{V}\in I_m,|\mathcal{V}|\leqslant 2c_s m)+\frac{1}{c_s m}.$$
It remains to bound $\Pr(\mathcal{V}\in I_m,|\mathcal{V}|\leqslant 2c_s m)$. Let $I^*_m,I^*_{m,r}$ be $I^*,I^*_r$ from Lemma 6 with $N=N_m$. When $\mathcal{V}\in I_m$, we can always find a subset $W\subseteq\mathcal{V}$ such that $W\in I^*_m$. $|W|$ is at least 3 because a pair of distinct $\mathbf{k}_1,\mathbf{k}_2\in Q_N$ must have rank 2. Hence a union bound argument shows for large enough $m$
$$\Pr(\mathcal{V}\in I_m,|\mathcal{V}|\leqslant 2c_s m)\leqslant\sum_{r=2}^{\lfloor 2c_s m\rfloor}\sum_{W\in I^*_{m,r+1}}\Pr(W\subseteq\mathcal{V}). \qquad (21)$$
Because $W\in I^*_{m,r+1}$ has rank $r$,
$$\Pr(W\subseteq\mathcal{V})=\Pr(Z(\mathbf{k})=1\text{ for all }\mathbf{k}\in W)=2^{-mr}.$$
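The rank computations behind Lemma 6 and equation (21) reduce to Gaussian elimination over $\mathbb{F}_2$: independence of the events $\{Z(\mathbf{k})=1\}$ holds exactly for linearly independent $\mathbf{k}$, which is what gives $\Pr(W\subseteq\mathcal{V})=2^{-mr}$ for a rank-$r$ set. A small sketch of our own (the bitmask flattening and a fixed per-coordinate width are assumptions for illustration):

```python
def coords_to_mask(k, bits_per_coord=16):
    # flatten k = (k_1, ..., k_s) into one integer bitmask; assumes each
    # k_j fits in bits_per_coord bits (an arbitrary width for this sketch)
    mask = 0
    for j, kj in enumerate(k):
        mask |= kj << (j * bits_per_coord)
    return mask

def f2_rank(vectors):
    # Gaussian elimination over F_2 on integer bitmasks
    basis = {}  # leading-bit position -> basis vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in basis:
                v ^= basis[lead]
            else:
                basis[lead] = v
                break
    return len(basis)
```

A set like $\{(1,2),(2,1),(3,3)\}$ is rank-deficient, since the third vector is the $\oplus$-sum of the first two.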
Then by Lemma 6,
$$\Pr(\mathcal{V}\in I_m,|\mathcal{V}|\leqslant 2c_s m)\leqslant\sum_{r=2}^{\lfloor 2c_s m\rfloor}|I^*_{m,r+1}|2^{-mr}\leqslant\sum_{r=2}^{\lfloor 2c_s m\rfloor}\frac{2^{-mr}|Q_{N_m}|^r}{(r+1)!}A_s^r N_m^{r/4}r^{-B_s\sqrt{N_m}}\leqslant\sum_{r=2}^{\lfloor 2c_s m\rfloor}\frac{(c_s m A_s N_m^{1/4})^r}{(r+1)!}r^{-B_s\sqrt{N_m}},$$
where we have used $|Q_{N_m}|\leqslant c_s m 2^m$. Because $N_m\sim\lambda m^2/s+3\lambda m\log_2(m)/s$ for $\lambda=3(\log 2)^2/\pi^2$, for large enough $m$ we have $\lambda m^2/s\leqslant N_m\leqslant 2\lambda m^2/s$ and
$$\Pr(\mathcal{V}\in I_m,|\mathcal{V}|\leqslant 2c_s m)\leqslant\sum_{r=2}^{\lfloor 2c_s m\rfloor}\frac{(c_sA_s(2\lambda/s)^{1/4})^r}{(r+1)!}m^{(3/2)r}r^{-mB_s\sqrt{\lambda/s}}\leqslant\exp(c_sA_s(2\lambda/s)^{1/4})\max_{2\leqslant r\leqslant 2c_s m}m^{(3/2)r}r^{-mB_s\sqrt{\lambda/s}}.$$
Because $m^{(3/2)r}r^{-mB_s\sqrt{\lambda/s}}$ is log-convex in $r$, the maximum is attained at either $r=2$ or $r=2c_s m$. After plugging in equation (19), we get
$$\max_{2\leqslant r\leqslant 2c_s m}m^{(3/2)r}r^{-mB_s\sqrt{\lambda/s}}=\max\big(m^3 2^{-4c_s m},\,m^{-c_s m}(2c_s)^{-4c_s m}\big).$$
The conclusion follows by choosing $d_s>3$ and a large enough $m_s$.

3.2 Proof of Step 2

Throughout this subsection, we assume $f\in C^\infty([0,1]^s)$. Recall that
$$\mathrm{SUM}_2=\sum_{\mathbf{k}\in\mathbb{N}^s_*\setminus Q_{N_m}}Z(\mathbf{k})S(\mathbf{k})\hat f(\mathbf{k}).$$
In light of Corollary 1, the size of $\mathrm{SUM}_2$ depends on how fast $\|f_{|\kappa|}\|_1$ grows with $|\kappa|$. Below we provide two results under different growth assumptions. The easier one is when $\|f_{|\kappa|}\|_1$ grows exponentially in $|\kappa|$. An example is $f(x)=\exp(\sum_{j=1}^s x_j)$.

Theorem 3. Assume $\|f_{|\kappa|}\|_1\leqslant K_1\alpha^{\|\kappa\|_0}$ for some positive constants $K_1$ and $\alpha$. Then there exist a constant $D_1$ and a threshold $m_1$ depending on $s$ and $\alpha$ such that for all $m\geqslant m_1$
$$|\mathrm{SUM}_2|\leqslant\sum_{\mathbf{k}\in\mathbb{N}^s_*\setminus Q_{N_m}}|\hat f(\mathbf{k})|<K_1 2^{-N_m+D_1\sqrt{N_m}}.$$

Proof. We follow the strategy used in the proof of Theorem 2 from [21]. Corollary 1 together with our assumption on $f_{|\kappa|}$ gives
$$|\hat f(\mathbf{k})|\leqslant K_1 2^{-\|\kappa\|_1}\alpha^{\|\kappa\|_0}.$$
The constraint $\mathbf{k}\in\mathbb{N}^s_*\setminus Q_{N_m}$ implies $\|\kappa\|_1>N_m$. Theorem 7 from [21] shows
$|\{\mathbf{k}\in\mathbb{N}^s_*\mid\|\kappa\|_1=N\}|\leqslant\frac{\pi\sqrt{s}}{2\sqrt{3N}}\exp$
https://arxiv.org/abs/2504.19138v1
$\big(\pi\sqrt{\frac{sN}{3}}\big)$. Furthermore,
$$\|\kappa\|_1=\sum_{j=1}^s\|\kappa_j\|_1\geqslant\sum_{j=1}^s\frac{|\kappa_j|^2}{2}\geqslant\frac{1}{2s}\Big(\sum_{j=1}^s|\kappa_j|\Big)^2=\frac{1}{2s}\|\kappa\|_0^2. \qquad (22)$$
Therefore, $\|\kappa\|_0\leqslant\sqrt{2s\|\kappa\|_1}$ and
$$\sum_{\mathbf{k}\in\mathbb{N}^s_*\setminus Q_{N_m}}|\hat f(\mathbf{k})|\leqslant\sum_{N=N_m+1}^\infty K_1 2^{-N}\max\big(\alpha^{\sqrt{2sN}},1\big)\frac{\pi\sqrt{s}}{2\sqrt{3N}}\exp\Big(\pi\sqrt{\frac{sN}{3}}\Big)\leqslant K_1\frac{\pi\sqrt{s}}{2\sqrt{3}}\sum_{N=N_m+1}^\infty 2^{-N+D_\alpha\sqrt{sN}}$$
with $D_\alpha=\sqrt{2}\max(\log_2(\alpha),0)+\pi/(\sqrt{3}\log(2))$. For any $\rho\in(0,1)$, we can find $N_{\rho,s,\alpha}$ such that
$$D_\alpha\sqrt{s(N+1)}-D_\alpha\sqrt{sN}<\rho$$
for $N>N_{\rho,s,\alpha}$. When $m$ is large enough so that $N_m>N_{\rho,s,\alpha}$,
$$\sum_{N=N_m+1}^\infty 2^{-N+D_\alpha\sqrt{sN}}\leqslant 2^{-N_m+D_\alpha\sqrt{sN_m}}\sum_{N=1}^\infty 2^{(\rho-1)N}=2^{-N_m+D_\alpha\sqrt{sN_m}}\frac{2^{\rho-1}}{1-2^{\rho-1}}.$$
By choosing $\rho=1/2$, we get for large enough $m$
$$\sum_{\mathbf{k}\in\mathbb{N}^s_*\setminus Q_{N_m}}|\hat f(\mathbf{k})|\leqslant K_1\frac{\pi\sqrt{s}}{2\sqrt{3}(\sqrt{2}-1)}2^{-N_m+D_\alpha\sqrt{sN_m}}.$$
The conclusion follows once we choose a large enough $D_1$.

We need a more careful analysis when $\|f_{|\kappa|}\|_1$ grows factorially in $|\kappa|$, such as when $f(x)=\prod_{j\in J}\frac{1}{1-x_j/2}$ for some $J\subseteq 1{:}s$. The key is to observe that for most $\mathbf{k}\in Q_N$, $|\kappa_j|$ is approximately $2\sqrt{\lambda N/s}$ in the following sense:

Lemma 7. Let $N\geqslant 1$ and $\mathbf{k}$ be sampled from $U(Q_N)$. Then there exist positive constants $A'_s,B'_s$ depending on $s$ such that for any $j\in 1{:}s$ and $\epsilon\in(0,1)$
$$\Pr\Big(\Big|\frac{|\kappa_j|}{\sqrt{\lambda N/s}}-2\Big|>\epsilon\Big)\leqslant A'_s N^{1/4}\exp(-B'_s\epsilon^2\sqrt{N}) \qquad (23)$$
where $\lambda=3\log(2)^2/\pi^2$ as in equation (17). The proof is given in the appendix.

Theorem 4. Assume
$$\|f_{|\kappa|}\|_1\leqslant K_2\,\alpha^{\|\kappa\|_0}\prod_{j\in J}(|\kappa_j|)!$$
for some positive constants $K_2,\alpha$ and some nonempty $J\subseteq 1{:}s$. Then under the complete random design, there exist a constant $d_{s,\alpha}$ depending on $s,\alpha$, a constant $D_2$ depending on $s,\alpha,|J|$ and a threshold $m_2$ depending on $s,\alpha,|J|$ such that for $m\geqslant m_2$
$$\Pr\big(|\mathrm{SUM}_2|\geqslant K_2 2^{-N_m+(2|J|\log_2(m)+D_2)\sqrt{\lambda N_m/s}}\big)\leqslant m^{d_{s,\alpha}}\exp\Big(-\frac{c'_s m}{\log_2(m)^2}\Big)$$
with $c'_s=B'_s\sqrt{\lambda/(2s)}$ for $B'_s$ from Lemma 7.

Proof. Corollary 1 and our assumption on $f$ imply
$$|\hat f(\mathbf{k})|\leqslant K_2 2^{-\|\kappa\|_1}\alpha^{\|\kappa\|_0}\prod_{j\in J}(|\kappa_j|)!. \qquad (24)$$
By equation (22), $\|\kappa\|_0\leqslant\sqrt{2s\|\kappa\|_1}$. It follows
$$\prod_{j\in J}(|\kappa_j|)!\leqslant\Big(\sum_{j\in J}|\kappa_j|\Big)!\leqslant(\|\kappa\|_0)!\leqslant\big(\sqrt{2s\|\kappa\|_1}\big)^{\sqrt{2s\|\kappa\|_1}}.$$
Let $N^*_m\geqslant N_m$ be a new threshold we will determine later.
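Inequality (22) and the resulting bound $\|\kappa\|_0\leqslant\sqrt{2s\|\kappa\|_1}$ are easy to confirm by exhaustive search over small bit sets. This is a verification sketch of our own, not part of the proof; the helper name is an assumption.

```python
from itertools import product

def norms(kappa):
    # kappa: tuple of s sets of 1-indexed bit positions
    n0 = sum(len(kj) for kj in kappa)   # ||kappa||_0
    n1 = sum(sum(kj) for kj in kappa)   # ||kappa||_1
    return n0, n1

# exhaustive check of equation (22) for s = 2 and bit positions in {1, 2, 3}
s = 2
subsets = [{l + 1 for l in range(3) if mask >> l & 1} for mask in range(8)]
for kappa in product(subsets, repeat=s):
    n0, n1 = norms(kappa)
    assert 2 * s * n1 >= n0 * n0   # ||kappa||_1 >= ||kappa||_0^2 / (2s)
```

The inequality holds because each coordinate's bit positions are distinct, so $\|\kappa_j\|_1\geqslant 1+2+\dots+|\kappa_j|\geqslant|\kappa_j|^2/2$, and the Cauchy-Schwarz step spreads the budget across coordinates.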
A proof similar to that of Theorem 3 shows for large enough $m$
$$\sum_{\mathbf{k}\in\mathbb{N}^s_*\setminus Q_{N^*_m}}|\hat f(\mathbf{k})|\leqslant\sum_{N=N^*_m+1}^\infty K_2 2^{-N}\max\big(\alpha^{\sqrt{2sN}},1\big)\frac{\pi\sqrt{s}}{2\sqrt{3N}}\exp\Big(\pi\sqrt{\frac{sN}{3}}\Big)\big(\sqrt{2sN}\big)^{\sqrt{2sN}}\leqslant K_2\frac{\pi\sqrt{s}}{2\sqrt{3}}\sum_{N=N^*_m+1}^\infty 2^{-N+D_\alpha\sqrt{N}}\big(\sqrt{2sN}\big)^{\sqrt{2sN}}\leqslant K_2\frac{\pi\sqrt{s}}{2\sqrt{3}(\sqrt{2}-1)}2^{-N^*_m+D_\alpha\sqrt{N^*_m}}\big(\sqrt{2sN^*_m}\big)^{\sqrt{2sN^*_m}}.$$
Because $N_m\sim\lambda m^2/s+3\lambda m\log_2(m)/s$, we can choose $N^*_m=\lceil N_m+K_3 m\log_2(m)\rceil$ for a large enough $K_3$ so that
$$2^{-N^*_m+D_\alpha\sqrt{N^*_m}}\big(\sqrt{2sN^*_m}\big)^{\sqrt{2sN^*_m}}\leqslant 2^{-N_m}$$
when $m$ is large enough. We then have the bound
$$\Big|\sum_{\mathbf{k}\in\mathbb{N}^s_*\setminus Q_{N^*_m}}Z(\mathbf{k})S(\mathbf{k})\hat f(\mathbf{k})\Big|\leqslant\sum_{\mathbf{k}\in\mathbb{N}^s_*\setminus Q_{N^*_m}}|\hat f(\mathbf{k})|\leqslant\frac{K_2}{2}2^{-N_m+2|J|\log_2(m)\sqrt{\lambda N_m/s}}.$$
It remains to show that
$$\Big|\sum_{\mathbf{k}\in Q_{N^*_m}\setminus Q_{N_m}}Z(\mathbf{k})S(\mathbf{k})\hat f(\mathbf{k})\Big|\leqslant\frac{K_2}{2}2^{-N_m+(2|J|\log_2(m)+D_2)\sqrt{\lambda N_m/s}} \qquad (25)$$
with high probability for large enough $D_2$. Let $\rho_m=2+\epsilon_m$ for $\epsilon_m\in(0,1)$ that we will determine later. Further let
$$\tilde Q=\big\{\mathbf{k}\in Q_{N^*_m}\mid|\kappa_j|>\rho_m\sqrt{\lambda N^*_m/s}\ \text{for some }j\in 1{:}s\big\}.$$
Lemma 7 with a union bound argument over $j\in 1{:}s$ shows
$$\frac{|\tilde Q|}{|Q_{N^*_m}|}\leqslant sA'_s(N^*_m)^{1/4}\exp\big(-B'_s\epsilon_m^2\sqrt{N^*_m}\big).$$
Because $N^*_m=\lceil N_m+K_3 m\log_2(m)\rceil$, equation (15) implies there exists $K_4$ such that $|Q_{N^*_m}|\leqslant m^{K_4}2^m$ for large enough $m$. We can then bound the probability that $Z(\mathbf{k})=1$ for any $\mathbf{k}\in\tilde Q$ by
$$2^{-m}|\tilde Q|\leqslant m^{K_4}sA'_s(N^*_m)^{1/4}\exp\big(-B'_s\epsilon_m^2\sqrt{N^*_m}\big),$$
which can be further bounded by $m^{d_{s,\alpha}}\exp(-c'_s\epsilon_m^2 m)$ for $c'_s=B'_s\sqrt{\lambda/(2s)}$ and some large enough $d_{s,\alpha}$ because $\lambda m^2/(2s)\leqslant N^*_m\leqslant 2\lambda m^2/s$ for large enough $m$.

We have shown that with probability at least $1-m^{d_{s,\alpha}}\exp(-c'_s\epsilon_m^2 m)$ for large enough $m$, all $\mathbf{k}$ with $Z(\mathbf{k})=1$ satisfy $\mathbf{k}\notin\tilde Q$ and
$$\Big|\sum_{\mathbf{k}\in Q_{N^*_m}\setminus Q_{N_m}}Z(\mathbf{k})S(\mathbf{k})\hat f(\mathbf{k})\Big|\leqslant\sum_{\mathbf{k}\in Q_{N^*_m}\setminus(Q_{N_m}\cup\tilde Q)}|\hat f(\mathbf{k})|. \qquad (26)$$
Because $|\kappa_j|\leqslant\rho_m\sqrt{\lambda N^*_m/s}$ for all $j\in 1{:}s$ when $\mathbf{k}\in Q_{N^*_m}\setminus\tilde Q$, equation (24) implies for such $\mathbf{k}$
$$|\hat f(\mathbf{k})|\leqslant K_2 2^{-\|\kappa\|_1}\max\big(\alpha^{s\rho_m\sqrt{\lambda N^*_m/s}},1\big)\Big(\rho_m\sqrt{\lambda N^*_m/s}\Big)^{|J|\rho_m\sqrt{\lambda N^*_m/s}}.$$
Because $N_m\sim\lambda m^2/s$ and $N^*_m-N_m\sim K_3 m\log_2(m)$, $\sqrt{\lambda N^*_m/s}-\sqrt{\lambda N_m/s}\sim K_3\log_2(m)/2$ and we can
https://arxiv.org/abs/2504.19138v1
find a constant $D^*$ depending on $K_3,|J|,\alpha,s$ such that for large enough $m$
$$|\hat f(\mathbf{k})|\leqslant K_2 2^{-\|\kappa\|_1+\rho_m|J|\log_2(m)\sqrt{\lambda N_m/s}+D^*m}.$$
By choosing $\epsilon_m=1/\log_2(m)$ for $m\geqslant 3$, we see $\rho_m|J|\log_2(m)\sqrt{\lambda N_m/s}=2|J|\log_2(m)\sqrt{\lambda N_m/s}+|J|\sqrt{\lambda N_m/s}$ and
$$|\hat f(\mathbf{k})|\leqslant K_2 2^{-\|\kappa\|_1+2|J|\log_2(m)\sqrt{\lambda N_m/s}+D^{**}m} \qquad (27)$$
for some $D^{**}>D^*$. Hence when $m$ is large enough
$$\sum_{\mathbf{k}\in Q_{N^*_m}\setminus(Q_{N_m}\cup\tilde Q)}|\hat f(\mathbf{k})|\leqslant K_2 2^{2|J|\log_2(m)\sqrt{\lambda N_m/s}+D^{**}m}\sum_{N=N_m+1}^{N^*_m}2^{-N}\frac{\pi\sqrt{s}}{2\sqrt{3N}}\exp\Big(\pi\sqrt{\frac{sN}{3}}\Big)\leqslant K_2 2^{2|J|\log_2(m)\sqrt{\lambda N_m/s}+D^{**}m}\frac{\pi\sqrt{s}}{2\sqrt{3}(\sqrt{2}-1)}2^{-N_m}\exp\Big(\pi\sqrt{\frac{sN_m}{3}}\Big). \qquad (28)$$
The above bound is asymptotically smaller than the right hand side of equation (25) once we choose a large enough $D_2>D^{**}$, so we complete the proof.

Corollary 2. Assume for $K,\alpha>0$ and $\mathcal{J}=\{J_1,\dots,J_L\}$ with $J_1,\dots,J_L\subseteq 1{:}s$,
$$\|f_{|\kappa|}\|_1\leqslant K\alpha^{\|\kappa\|_0}\max_{J\in\mathcal{J}}\prod_{j\in J}(|\kappa_j|)! \qquad (29)$$
where $\prod_{j\in J}(|\kappa_j|)!=1$ if $J=\emptyset$. Let $J_{\max}=\max_{J\in\mathcal{J}}|J|$. Then under the complete random design, there exist constants $d_{s,\alpha}$ depending on $s,\alpha$, $D_{s,\alpha,\mathcal{J}}$ depending on $s,\alpha,J_{\max}$ and $m_{s,\alpha,\mathcal{J}}$ depending on $s,\alpha,J_{\max}$ such that for $m\geqslant m_{s,\alpha,\mathcal{J}}$
$$\Pr(|\mathrm{SUM}_2|\geqslant T_m)\leqslant m^{d_{s,\alpha}}\exp\Big(-\frac{c'_s m}{\log_2(m)^2}\Big)$$
where $c'_s=B'_s\sqrt{\lambda/(2s)}$ for $B'_s$ from Lemma 7 and
$$T_m=K 2^{-N_m+2J_{\max}\log_2(m)\sqrt{\lambda N_m/s}+D_{s,\alpha,\mathcal{J}}m}. \qquad (30)$$

Proof. The $J_{\max}=0$ case follows immediately from Theorem 3. When $J_{\max}>0$, we notice $N^*_m$ in the proof of Theorem 4 does not depend on $\mathcal{J}$ and equation (26) still holds with probability at least $1-m^{d_{s,\alpha}}\exp(-c'_s\epsilon_m^2 m)$ for $\epsilon_m=1/\log_2(m)$. Then similar to equation (27), we can find $D^{**}_J$ for each $J\in\mathcal{J}$ such that
$$|\hat f(\mathbf{k})|\leqslant K 2^{-\|\kappa\|_1}\max_{J\in\mathcal{J}}2^{2|J|\log_2(m)\sqrt{\lambda N_m/s}+D^{**}_J m}\leqslant K 2^{-\|\kappa\|_1}2^{2J_{\max}\log_2(m)\sqrt{\lambda N_m/s}+(\max_{J\in\mathcal{J}}D^{**}_J)m}. \qquad (31)$$
A calculation similar to equation (28) gives the desired result.

Remark 1. We can generalize Corollary 2 to other choices of $N_m$ by noticing that the proof only requires $N_m\sim\lambda m^2/s$.

Remark 2.
When $f$ is analytic over an open neighborhood of $[0,1]^s$, Proposition 2.2.10 of [11] shows for each $x\in[0,1]^s$, we can find $K_x,\alpha_x>0$ depending on $x$ and an open ball $V_x$ containing $x$ such that for all $|\kappa|\in\mathbb{N}_0^s$
$$\sup_{y\in V_x}|f_{|\kappa|}(y)|\leqslant K_x\alpha_x^{\|\kappa\|_0}\prod_{j=1}^s(|\kappa_j|)!.$$
By the compactness of $[0,1]^s$, equation (29) holds for $\mathcal{J}=\{1{:}s\}$ and some $K,\alpha>0$ independent of $x$, so Corollary 2 applies.

3.3 Proof of Step 3

Recall that
$$\mathrm{SUM}'_1=\sum_{\mathbf{k}\in Q_{N_m}}Z(\mathbf{k})S'(\mathbf{k})\hat f(\mathbf{k})$$
where each $S'(\mathbf{k})$ is sampled independently from $U(\{-1,1\})$. Our last step is to show $|\mathrm{SUM}'_1|$ is larger than the $T_m$ from Corollary 2 with high probability. We need the following lemma from [5]:

Lemma 8. Let $\{c_i,i\in 1{:}n\}$ be a set of real numbers with $|c_i|\geqslant 1$ for all $i\in 1{:}n$ and $\{S'_i,i\in 1{:}n\}$ be independent $U(\{-1,1\})$ random variables. Then
$$\sup_{t\in\mathbb{R}}\Pr\Big(\Big|\sum_{i=1}^n c_iS'_i-t\Big|\leqslant 1\Big)\leqslant\frac{1}{2^n}\binom{n}{\lfloor n/2\rfloor}.$$

Theorem 5. Suppose $f$ satisfies the assumptions of Corollary 2. For $m\geqslant 1$ and $T_m$ given by equation (30), define
$$Q_m(T_m)=\{\mathbf{k}\in Q_{N_m}\mid|\hat f(\mathbf{k})|\geqslant T_m\}.$$
Assume
$$\liminf_{m\to\infty}\frac{|Q_m(T_m)|}{|Q_{N_m}|}>0. \qquad (32)$$
Then under the complete random design,
$$\limsup_{m\to\infty}\sqrt{m}\,\Pr(|\mathrm{SUM}'_1|\leqslant T_m)<\infty.$$

Proof. Let $\mathcal{V}=\{\mathbf{k}\in Q_{N_m}\mid Z(\mathbf{k})=1\}$ and $\mathcal{W}=\mathcal{V}\cap Q_m(T_m)$. For large enough $m$, equation (32) along with $|Q_{N_m}|\sim c_s m 2^m$ implies
$$\mathbb{E}|\mathcal{W}|=\mathbb{E}\Big[\sum_{\mathbf{k}\in Q_m(T_m)}Z(\mathbf{k})\Big]=2^{-m}|Q_m(T_m)|\geqslant cm$$
for some constant $c>0$. By a proof similar to that of Lemma 4,
$$\Pr\Big(|\mathcal{W}|\leqslant\frac{\mathbb{E}|\mathcal{W}|}{2}\Big)\leqslant\frac{\mathrm{Var}(|\mathcal{W}|)}{\big(\mathbb{E}|\mathcal{W}|-\mathbb{E}|\mathcal{W}|/2\big)^2}\leqslant\frac{\mathbb{E}|\mathcal{W}|}{(\mathbb{E}|\mathcal{W}|)^2/4}\leqslant\frac{4}{cm}.$$
When $|\mathcal{W}|>\mathbb{E}|\mathcal{W}|/2\geqslant cm/2$, we write
$$\mathrm{SUM}'_1=\sum_{\mathbf{k}\in\mathcal{W}}S'(\mathbf{k})\hat f(\mathbf{k})+\sum_{\mathbf{k}\in\mathcal{V}\setminus\mathcal{W}}S'(\mathbf{k})\hat f(\mathbf{k}).$$
Conditioned on $\sigma(Z)=\{Z(\mathbf{k}),\mathbf{k}\in Q_{N_m}\}$, we can apply Lemma 8 to $\mathrm{SUM}'_1$ by treating the sum over $\mathbf{k}\in\mathcal{V}\setminus\mathcal{W}$ as a shift term and get
$$\Pr\big(|\mathrm{SUM}'_1|\leqslant T_m\mid\sigma(Z)\big)\leqslant\sup_{t\in\mathbb{R}}\Pr\Big(\Big|\sum_{\mathbf{k}\in\mathcal{W}}S'(\mathbf{k})\hat f(\mathbf{k})/
https://arxiv.org/abs/2504.19138v1
Tmβˆ’t β©½1 Οƒ(Z) β©½1 2|W||W| ⌊|W|/2βŒ‹ . (33) Our conclusion then follows from the asymptotic relationn ⌊n/2βŒ‹ ∼2n(Ο€n)βˆ’1/2 and|W|> cm/ 2. The next theorem provides a sufficient condition for equation (32) to hold. Simply put, we require fto be ”nondegenerate” in the sense that a sufficient number of k∈QNmhave|Λ†f(k)|comparable to their upper bounds in equa- tion (31) up to an exponential factor in m. Theorem 6. ForΞ² >0andJ={J1, . . . , J L}with J1, . . . , J LβŠ†1:s, define QN,Ξ²,J(f) =n k∈QN |Λ†f(k)|β©Ύ2βˆ’βˆ₯ΞΊβˆ₯1Ξ²βˆ₯ΞΊβˆ₯0max J∈JY j∈J(|ΞΊj|)!o (34) and FΞ²,J=n f∈C∞([0,1]s) sup c>0lim inf Nβ†’βˆž|QN,Ξ²,J(cf)| |QN|>0o . Iff∈C∞([0,1]s)satisfies equation (29)for some K, Ξ± > 0andJ={J1, . . . , J L} andf∈S Ξ²>0FΞ²,J, then equation (32) holds. 17 Proof. Without loss of generality, we assume f∈ FΞ²,Jfor some Ξ²β©½1. We first define Nm,Ξ²=Nmβˆ’DΞ²mfor a large enough Dβ∈Nwe will specify later. By equation (17) and the mean value theorem r Ξ»Nm sβˆ’r Ξ»Nm,Ξ² sβ©½r Ξ» s1 2p Nm,Ξ²DΞ²m∼1 2DΞ². So for large enough m, we havep Ξ»Nm/sβˆ’p Ξ»Nm,Ξ²/sβ©½DΞ². Furthermore, equation (14) and (15) implies lim mβ†’βˆž|QNm,Ξ²| |QNm|=cDΞ² (35) for a constant cDΞ²>0 depending on DΞ²ands. Next we define for mβ©Ύ1 ˜Qm,Ξ²=n k∈QNm,Ξ² |ΞΊj|p Ξ»Nm,Ξ²/sβˆ’2 > mβˆ’1/4for some j∈1:so . When k∈QNm,Ξ²\˜Qm,Ξ², Stirling’s formula implies for large enough m max J∈JY j∈J(|ΞΊj|)!β©Ύ ⌈(2βˆ’mβˆ’1/4)q Ξ»Nm,Ξ²/sβŒ‰ !Jmax β©Ύ ⌈(2βˆ’mβˆ’1/4)p Ξ»Nm/sβŒ‰ βˆ’2DΞ² !Jmax β©Ύ ⌈(2βˆ’mβˆ’1/4)p Ξ»Nm/sβŒ‰ ! 2p Ξ»Nm/sβˆ’2DΞ²Jmax β©Ύ22Jmaxlog2(m)√ Ξ»Nm/sβˆ’JmaxKsm 2p Ξ»Nm/sβˆ’2JmaxDΞ² for a large enough Ksdepending on s. By equation (34) with N=Nm,Ξ²and the above lower bound, k∈QNm,Ξ²,Ξ²,J(cf)\˜Qm,Ξ²forc >0 implies for large enough m c|Λ†f(k)| β©Ύ2βˆ’βˆ₯ΞΊβˆ₯1Ξ²βˆ₯ΞΊβˆ₯0max J∈JY j∈J(|ΞΊj|)! β©Ύ2βˆ’Nm,Ξ²Ξ²s(2+mβˆ’1/4)√ Ξ»Nm,Ξ²/s22Jmaxlog2(m)√ Ξ»Nm/sβˆ’JmaxKsm 2p Ξ»Nm/sβˆ’2JmaxDΞ² β©Ύ2βˆ’Nm+2Jmaxlog2(m)√ Ξ»Nm/s+DΞ²m+3slog2(Ξ²)√ Ξ»Nm/sβˆ’JmaxKsmβˆ’2JmaxDΞ²log2(2√ Ξ»Nm/s). 
Comparing the above bound with Tmgiven by equation (30), we can lower bound |Qm(Tm)|by |Qm(Tm)|β©Ύ|QNm,Ξ²,Ξ²,J(cf)\˜Qm,Ξ²|β©Ύ|QNm,Ξ²,Ξ²,J(cf)| βˆ’ |˜Qm,Ξ²| 18 for large enough mafter choosing a Dβ∈Nsuch that DΞ²m+ 3slog2(Ξ²)p Ξ»Nm/sβˆ’JmaxKsmβˆ’2JmaxDΞ²log2(2p Ξ»Nm/s)βˆ’Ds,Ξ±,Jm grows to ∞asmβ†’ ∞ . By Lemma 7, |˜Qm,Ξ²| |QNm,Ξ²|β©½sAβ€² sN1/4 m,Ξ²exp(βˆ’Bβ€² smβˆ’1/2p Nm,Ξ²), which converges to 0 as mβ†’ ∞ . On the other hand, sup c>0lim inf Nβ†’βˆž|QN,Ξ²,J(cf)| |QN|>0 implies there exists c, cΞ²>0 such that |QNm,Ξ²,Ξ²,J(cf)|β©ΎcΞ²|QNm,Ξ²|for large enough m. Hence, we conclude from equation (35) that lim inf mβ†’βˆž|Qm(Tm)| |QNm|β©Ύlim inf mβ†’βˆž|QNm,Ξ²,Ξ²,J(cf)| βˆ’ |˜Qm,Ξ²| |QNm,Ξ²||QNm,Ξ²| |QNm|β©ΎcΞ²cDΞ²>0. Remark 3. The definition of FΞ²,Jmight appear nonstandard. Notably, FΞ²,J is not convex and excludes the zero function. However, [2] argues that coni- cal input function spaces are preferred over convex ones in adaptive confidence interval construction, and FΞ²,Jis by design conical. In general, some nondegen- erate assumptions are required to exclude constant integrands, for which Λ† Β΅βˆžβˆ’Β΅ is identically 0. See also Theorem 2 of [14] and Theorem 2 of [1] for nondegen- erate assumptions used to establish asymptotic normality of Owen-scrambled QMC means. Remark 4. For an example where f /∈S Ξ²>0FΞ²,J, consider fsatisfying f|ΞΊ|= 0 whenever |ΞΊ1|> ΞΊfor some κ∈N0. Lemma 2 then implies Λ†f(k) = 0 whenever |ΞΊ1|> ΞΊand Lemma 7 shows only an exponentially small fraction of k∈QN have nonzero Λ†f(k). Hence, f /∈ FΞ²,Jfor any Ξ² >0. In this case, however, fadmits the form f=ΞΊX p=0gp(xβˆ’1)xp 1 withxβˆ’1= (x2, . . . , x s), so one can first integrate falong the
x1direction and apply our algorithm toPΞΊ p=0gp(xβˆ’1)/(p+ 1) instead. This is called pre- integration in the QMC literature, a technique to regularize the integrands. See [9, 13] for further reference. Another solution is to localize our calculation to Qβ€²={k∈Ns βˆ—| |ΞΊ1|β©½ ΞΊ}. Specifically, we set Km=QNm∩Qβ€²with Nm= sup {N∈N0| |QN∩ Qβ€²|β©½cβ€² sm2m}for a suitable cβ€² s>0. Repeating our previous proof strategy, we can establish the counterparts of Step 1-3 when fsatisfies equation (29) with J1, ..., J LβŠ†2:sand sup c>0lim inf Nβ†’βˆž|QN,Ξ²,J(cf)| |QN∩Qβ€²|>0 19 for some Ξ², c > 0. The above two arguments can be generalized to cases where for a subset uβŠ†1:sand a set of thresholds {ΞΊj∈N0, j∈u}, we have f|ΞΊ|= 0 whenever |ΞΊj|> ΞΊjfor any j∈u. It is an open question whether all f /∈S Ξ²>0FΞ²,J belong to one of the above cases, which we leave for future study. Remark 5. It is easy to prove f∈S Ξ²>0FΞ²,Jwhen f|ΞΊ|(x) does not change sign over [0 ,1]s. In this case, equation (6) and (7) imply |Λ†f(k)|β©Ύ inf x∈[0,1]s|f|ΞΊ|(x)|Z [0,1]ssY j=1WΞΊj(xj) dx = inf x∈[0,1]s|f|ΞΊ|(x)|Y j∈1:s,ΞΊjΜΈ=βˆ…Y β„“βˆˆΞΊj2βˆ’β„“βˆ’1 = inf x∈[0,1]s|f|ΞΊ|(x)| 2βˆ’βˆ₯ΞΊβˆ₯1βˆ’βˆ₯ΞΊβˆ₯0. In order for f∈S Ξ²>0FΞ²,J, it suffices for inf x∈[0,1]s|f|ΞΊ|(x)|β©Ύc0Ξ²βˆ₯ΞΊβˆ₯0max J∈JY j∈J(|ΞΊj|)! (36) to hold for some constants c0, Ξ² > 0. In particular, simple integrands such asf(x) = exp(Ps j=1xj) and f(x) =Qs j=11 1βˆ’xj/2can be shown to satisfy Theorem 6 using this strategy. The above argument also suggests we can regularize fby adding a func- tion with sufficiently large positive derivatives before applying Theorem 6. For instance, if fsatisfies equation (29) with J={βˆ…}and some K, Ξ± > 0, then forKβ€²> K , the sum f(x) +Kβ€²exp(Ξ±Ps j=1xj) satisfies equation (36) with c0=Kβ€²βˆ’KandΞ²=Ξ±. This regularization, however, is not practically useful because choosing suitable Ξ»andΞ±requires information on the derivatives of f. 
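Stepping back to the pre-integration idea in Remark 4, the step can be sketched numerically. This is a minimal illustration, not the paper's scrambled-net estimator: the coefficient functions g_p below are hypothetical stand-ins, and plain Monte Carlo replaces the QMC estimator for simplicity.

```python
import random

# Pre-integration sketch (cf. Remark 4): if f(x) = sum_p g_p(x_{-1}) x_1^p,
# the x_1 coordinate integrates out exactly, since the integral of x_1^p
# over [0,1] is 1/(p+1), leaving h(x_{-1}) = sum_p g_p(x_{-1})/(p+1).

def f(x1, x2, g):
    return sum(gp(x2) * x1**p for p, gp in enumerate(g))

def pre_integrate(g):
    return lambda x2: sum(gp(x2) / (p + 1) for p, gp in enumerate(g))

# hypothetical coefficient functions g_p (here s = 2 and kappa = 2)
g = [lambda x2: 1.0, lambda x2: x2, lambda x2: 3 * x2**2]
h = pre_integrate(g)

random.seed(0)
n = 200_000
mc_f = sum(f(random.random(), random.random(), g) for _ in range(n)) / n
mc_h = sum(h(random.random()) for _ in range(n)) / n
exact = 1 + 1/4 + 1/3  # integral of 1 + x2/2 + x2^2 over [0,1]
print(mc_f, mc_h, exact)
```

Both estimators target the same integral; the pre-integrated h has one fewer dimension and no variation along x_1, which is the direction along which the Walsh coefficients of f vanish.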
Moreover, the error in integrating Ξ»exp(Ξ±Ps j=1xj) may dominate that of f and make the confidence interval unnecessarily wide. How to formulate easily verifiable conditions that allow f|ΞΊ|(x) to change signs over [0 ,1]sis another interesting question we leave for future research. 3.4 Main results As promised, the preceding three steps provide all the ingredients for the proof of Theorem 1. In fact, our analysis gives a quantitative estimate on how fast the quantile of Β΅converges to 1 /2. Theorem 7. Iff∈C∞([0,1]s)satisfies the assumptions of Theorem 6, then under the complete random design lim sup mβ†’βˆžβˆšm Pr(Λ†Β΅βˆž< Β΅) +1 2Pr(Λ†Β΅βˆž=Β΅)βˆ’1 2 <∞. (37) 20 Proof. By equation (11), (12) and the symmetry of SUMβ€² 1, Pr(Λ†Β΅βˆž< Β΅) +1 2Pr(Λ†Β΅βˆž=Β΅)βˆ’1 2 β©½dTV(SUM 1,SUMβ€² 1) + Pr( |SUM 1|β©½|SUM 2|) β©½dTV(SUM 1,SUMβ€² 1) + Pr( |SUM 2|β©ΎTm) + Pr( |SUM 1|β©½Tm) β©½2dTV(SUM 1,SUMβ€² 1) + Pr( |SUM 2|β©ΎTm) + Pr( |SUMβ€² 1|β©½Tm). Theorem 2 proves lim mβ†’βˆžβˆšmdTV(SUM 1,SUMβ€² 1) = 0. Corollary 2 proves limmβ†’βˆžβˆšmPr(|SUM 2|β©ΎTm) = 0. Theorem 5 together with Theorem 6 proves lim supmβ†’βˆžβˆšmPr(|SUMβ€² 1|β©½Tm)<∞. Our conclusion follows by combining the above results. The next corollary shows sample quantiles of Λ† Β΅Ecan be used to construct confidence intervals for Β΅with asymptotically desired coverage level. Corollary 3. Forr∈N, let Λ†Β΅1 E, . . . , Λ†Β΅r Eberindependent QMC estimators
given by equation (3)andΛ†Β΅(Ξ½) Ebe their ν’th order statistics. If f∈C∞([0,1]s) satisfies the assumptions of Theorem 6 and the precision Eincreases with mso thatEβ©ΎNm, we have under the complete random design lim sup mβ†’βˆžβˆšm Pr(Λ†Β΅E< Β΅) +1 2Pr(Λ†Β΅E=Β΅)βˆ’1 2 <∞ (38) and for 1β©½β„“β©½uβ©½r, lim inf mβ†’βˆžPr(¡∈[Λ†Β΅(β„“) E,Λ†Β΅(u) E])β©ΎF(uβˆ’1)βˆ’F(β„“βˆ’1), (39) where F(Ξ½)is the cumulative distribution function of the binomial distribution B(r,1/2). Proof. Let SUM 2,E= SUM 2+ Λ†Β΅Eβˆ’Λ†Β΅βˆž. Then Λ†Β΅Eβˆ’Β΅= Λ†Β΅βˆžβˆ’Λ†Β΅E+ Λ†Β΅Eβˆ’Β΅= SUM 1+ SUM 2,E. Lemma 1 of [21] shows |Λ†Β΅Eβˆ’Λ†Β΅βˆž|⩽√s 2Esup x∈[0,1]s||βˆ‡f(x)||2. (40) Since f∈C∞([0,1]s), the gradient βˆ‡f(x) is continuous over [0 ,1]sand has a bounded vector norm. Because Eβ©ΎNm, by increasing Ds,Ξ±,J in the definition ofTmif necessary, we can assume |Λ†Β΅Eβˆ’Λ†Β΅βˆž|β©½Tmfor large enough m. Hence under the conditions of Corollary 2, we have for large enough m Pr |SUM 2,E|β©Ύ2Tm β©½mds,Ξ±exp(βˆ’cβ€² sm log2(m)2). (41) Equation (38) then follows from a slight modification of the proof of Theorem 7. 21 Next by the property of order statistics, Pr(Λ†Β΅(β„“) E> Β΅) =β„“βˆ’1X j=0r j Pr(Λ†Β΅Eβ©½Β΅)jPr(Λ†Β΅E> Β΅)rβˆ’j, which is monotonically decreasing in Pr(Λ† Β΅Eβ©½Β΅). Equation (38) implies lim inf mβ†’βˆžPr(Λ†Β΅Eβ©½Β΅)β©Ύ1/2, so we have lim sup mβ†’βˆžPr(Λ†Β΅(β„“) E> Β΅)β©½β„“βˆ’1X j=0r j1 2r=F(β„“βˆ’1). (42) Similarly, lim sup mβ†’βˆžPr(Λ†Β΅(u) E< Β΅)β©½rX j=ur j1 2r= 1βˆ’F(uβˆ’1). (43) Therefore, lim inf mβ†’βˆžPr(¡∈[Λ†Β΅(β„“) E,Λ†Β΅(u) E])β©ΎF(uβˆ’1)βˆ’F(β„“βˆ’1). In addition to asymptotically valid coverage, the interval length Λ† Β΅(u) Eβˆ’Λ†Β΅(β„“) E converges in probability to 0 at a super-polynomial rate. To prove this, we first need to generalize Theorem 2 of [21] to the complete random design setting. Theorem 8. Iff∈C∞([0,1]s)satisfies equation (29) for some K, Ξ± > 0and J={J1, . . . 
, J L}, then for any Ξ³ >0, we can find a constant Ξ“depending on s, Ξ±, Ξ³, Jmaxsuch that under the complete random design lim sup mβ†’βˆžmΞ³Pr |Λ†Β΅βˆžβˆ’Β΅|> K2βˆ’Ξ»m2/s+Ξ“mlog2(m) β©½1. Proof. Our proof strategy is similar to that of Theorem 7. Given Ξ³ >0, let Nm,Ξ³= sup{N∈N| |QN|β©½mβˆ’Ξ³2m}. By equation (15) and a calculation similar to equation (17), Nm,γ∼λ sm2+(1βˆ’2Ξ³)Ξ» smlog2(m) +Ds,Ξ³m (44) for some Ds,Ξ³depending on sandΞ³. Next let Λ† Β΅βˆžβˆ’Β΅= SUM 1,Ξ³+SUM 2,Ξ³with SUM 1,Ξ³=X k∈QNm,Ξ³Z(k)S(k)Λ†f(k), SUM 2,Ξ³=X k∈Nsβˆ—\QNm,Ξ³Z(k)S(k)Λ†f(k). 22 Because |QNm,Ξ³|β©½mβˆ’Ξ³2mand Pr( Z(k) = 1) = 2βˆ’mfor all k∈QNm,Ξ³, Pr Z(k) = 1 for any k∈QNm,Ξ³ = PrX k∈QNm,Ξ³Z(k)β©Ύ1 β©½EhX k∈QNm,Ξ³Z(k)i β©½mβˆ’Ξ³. Therefore, SUM 1,Ξ³= 0 with probability at least 1 βˆ’mβˆ’Ξ³. Next by Remark 1, we can apply Corollary 2 to SUM 2,Ξ³with Nm,Ξ³replacing Nmand get lim mβ†’βˆžmΞ³Pr |SUM 2,Ξ³|β©ΎTm,Ξ³ = 0 for Tm,Ξ³=K2βˆ’Nm,Ξ³+2Jmaxlog2(m)√ Ξ»Nm,Ξ³/s+Ds,Ξ±,Ξ³,Jm with a sufficiently large Ds,Ξ±,Ξ³,Jdepending on s, Ξ±, Ξ³, Jmax. In view of equa- tion (44), we can further find Ξ“ depending on s, Ξ³,Jmax, Ds,Ξ±,Ξ³,Jsuch that βˆ’Nm,Ξ³+ 2Jmaxlog2(m)q Ξ»Nm,Ξ³/s+Ds,Ξ±,Ξ³,Jmβ©½βˆ’Ξ»m2/s+ Ξ“mlog2(m) for large enough m. Our conclusion then follows by taking the union bound over the probability of SUM 1,Ξ³ΜΈ= 0 and |SUM 2,Ξ³|β©ΎTm,Ξ³. Corollary 4. Under the conditions of Corollary 3, we can find for any Ξ³ >0 a constant Ξ“β€²depending on s, Ξ±, Ξ³, Jmaxsuch that lim sup mβ†’βˆžmrβˆ—Ξ³Pr Λ†Β΅(u) Eβˆ’Λ†Β΅(β„“) E>4K2βˆ’Ξ»m2/s+Ξ“β€²mlog2(m) β©½r rβˆ— with rβˆ—= min( β„“, rβˆ’u+ 1). Proof. Given Ξ³ > 0 and the corresponding Ξ“ from Theorem 8, we can find a
constant Ξ“β€²β©ΎΞ“ such that Nmβˆ’Ξ»m2/s+ Ξ“β€²mlog2(m)β†’ ∞ asmβ†’ ∞ because Nm∼λm2/s+ 3Ξ»mlog2(m)/s. Then Eβ©ΎNmand equation (40) implies |Λ†Β΅Eβˆ’Λ†Β΅βˆž|β©½K2βˆ’Ξ»m2/s+Ξ“β€²mlog2(m)for large enough m. Together with Theorem 8, we have lim sup mβ†’βˆžmΞ³Pr |Λ†Β΅Eβˆ’Β΅|>2K2βˆ’Ξ»m2/s+Ξ“β€²mlog2(m) β©½1. In order for either |Λ†Β΅(β„“) Eβˆ’Β΅|or|Λ†Β΅(u) Eβˆ’Β΅|to exceed 2 K2βˆ’Ξ»m2/s+Ξ“β€²mlog2(m), we need at least rβˆ—instances among Λ† Β΅1 E, . . . , Λ†Β΅r Eto have an error greater than 2K2βˆ’Ξ»m2/s+Ξ“β€²mlog2(m). By taking a union bound over allr rβˆ— sizerβˆ—subsets of Λ†Β΅1 E, . . . , Λ†Β΅r E, we have lim sup mβ†’βˆžmrβˆ—Ξ³Pr max(|Λ†Β΅(β„“) Eβˆ’Β΅|,|Λ†Β΅(u) Eβˆ’Β΅|)>2K2βˆ’Ξ»m2/s+Ξ“β€²mlog2(m) β©½r rβˆ— . 23 When both |Λ†Β΅(β„“) Eβˆ’Β΅|and|Λ†Β΅(u) Eβˆ’Β΅|are bounded by 2 K2βˆ’Ξ»m2/s+Ξ“β€²mlog2(m), Λ†Β΅(u) Eβˆ’Λ†Β΅(β„“) Eβ©½4K2βˆ’Ξ»m2/s+Ξ“β€²mlog2(m)and our proof is complete. Remark 6. One can also prove a strong convergence result using Theorem 8. Specifically, we can construct a sequence of Λ† ¡∞(m) where Λ† ¡∞(m+ 1) keeps the same digital shifts Djas Λ†Β΅βˆž(m) but constructs its j’th generating matrix Cj by appending a new column of U({0,1}) entries to that of Λ† ¡∞(m). By taking Ξ³ >1 and the corresponding Ξ“ from Theorem 8, Borel–Cantelli lemma shows |Λ†Β΅βˆž(m)βˆ’Β΅|> K2βˆ’Ξ»m2/s+Ξ“mlog2(m)only occurs for finitely many mβ©Ύ1 almost surely. Hence, we have for any Ξ»β€²< Ξ» Pr lim mβ†’βˆž2Ξ»β€²m2/s|Λ†Β΅βˆž(m)βˆ’Β΅|= 0 = 1. Similar results can be established for the confidence interval length as well. Remark 7. If a point estimator for Β΅is needed, we can generate rβ€²groups of rindependent Λ† Β΅E, compute Λ† Β΅(β„“) Eand Λ†Β΅(u) Eof each group and take the median Med(Λ† Β΅(β„“) E) and Med(Λ† Β΅(β„“) E) of the rβ€²number of Λ† Β΅(β„“) Eand Λ†Β΅(u) E. 
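A toy version of this median-of-order-statistics recipe, with a Gaussian surrogate standing in for the r independent copies of Λ†Β΅_E in each group (the surrogate distribution and its spread are illustrative assumptions, not the paper's estimator):

```python
import random
import statistics

# Point-estimator sketch (cf. Remark 7): r' groups of r replicates, take the
# ell'th and u'th order statistic of each group, then the medians across groups.
random.seed(1)
mu = 2.0             # true integral (known here only to check the output)
r, r_prime = 9, 101  # replicates per group, number of groups
ell, u = 2, 8        # order statistics as in Corollary 3

lo_stats, hi_stats = [], []
for _ in range(r_prime):
    group = sorted(random.gauss(mu, 0.05) for _ in range(r))
    lo_stats.append(group[ell - 1])  # ell'th order statistic of the group
    hi_stats.append(group[u - 1])    # u'th order statistic of the group

med_lo = statistics.median(lo_stats)
med_hi = statistics.median(hi_stats)
point = 0.5 * (med_lo + med_hi)  # any value in [med_lo, med_hi] works
print(med_lo, point, med_hi)
```

With a symmetric surrogate the two medians bracket Β΅ tightly; in practice the widths shrink at the super-polynomial rate discussed above.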
By a proof similar to that of Corollary 3 in [21], we can show the mean squared errors of both Med(Λ†Β΅^{(β„“)}_E) and Med(Λ†Β΅^{(u)}_E) converge to 0 at a super-polynomial rate given rβ€² grows at an m^2 rate as m increases. Any value between Med(Λ†Β΅^{(β„“)}_E) and Med(Λ†Β΅^{(u)}_E) can therefore be used as a point estimator. In addition, by equations (42) and (43) we also have that Pr(Β΅ ∈ [Med(Λ†Β΅^{(β„“)}_E), Med(Λ†Β΅^{(u)}_E)]) converges to 1 as m, rβ€² β†’ ∞, given F(β„“βˆ’1) < 1/2 and F(uβˆ’1) > 1/2 for F(Ξ½) defined in Corollary 3.

4 Generalization to other randomization

So far we have been discussing the completely random design. The analysis is easy because every linear combination of rows of C_j, j ∈ 1:s follows a U({0,1}^m) distribution. In applications, the random linear scrambling is often preferred because the resulting digital nets usually have better low-dimensional projections. The construction of [10], for example, optimizes over all two-dimensional projections. In this section, we show what additional assumptions are needed for the results in Section 3 to hold under more general randomization.

Recall that in the random linear scrambling, C_j = M_j C_j for a random lower-triangular matrix M_j ∈ {0,1}^{EΓ—m} and a fixed generating matrix C_j ∈ {0,1}^{mΓ—m}. Usually every generating matrix C_j is nonsingular, ensuring that no points overlap in their one-dimensional projections. A useful feature when C_j has full rank is that the random linear scrambling agrees with the complete random design except for the first m rows of each C_j. This motivates the following definition:

Definition 1. The marginal order of a randomization scheme for C_j ∈ {0,1}^{EΓ—m}, j ∈ 1:s is the smallest d ∈ N_0 such that for every
j ∈ 1:s and β„“ > dm, C_j(β„“,:) is independently drawn from U({0,1}^m).

The marginal order is 0 for the complete random design and 1 for the random linear scrambling provided every generating matrix has full rank. Randomization of higher marginal order is useful when randomizing the higher order digital nets from [3]. The next lemma is useful in showing that most k ∈ Q_{N_m} satisfy Pr(Z(k) = 1) = 2^{βˆ’m} even when the marginal order is positive. The proof is given in the appendix.

Lemma 9. For L β©Ύ 0 and ΞΊ βŠ† N, define ΞΊ^{>L} = {β„“ ∈ ΞΊ | β„“ > L}. Let N β©Ύ 1 and k_1, k_2 be sampled independently from U(Q_N). Then for any ρ > 0, there exist positive constants A_{ρ,s}, B_{ρ,s} depending on ρ, s such that for each j ∈ 1:s,

Pr(ΞΊ^{>ρ√N}_{j,1} = βˆ…) β©½ A_{ρ,s} N^{1/4} exp(βˆ’B_{ρ,s} √N)

and

Pr(ΞΊ^{>ρ√N}_{j,1} = ΞΊ^{>ρ√N}_{j,2}) β©½ A^2_{ρ,s} N^{1/2} exp(βˆ’2B_{ρ,s} √N).

Another common feature of the random linear scrambling is Pr(Z(k) = 1) β©½ 2^{βˆ’m+R} for nonzero k and a constant R depending on s and the generating matrices [21]. We generalize it as the following definition:

Definition 2. For r ∈ N, let V_r = {V βŠ† N^s_* | |V| = rank(V) = r}. The r-way rank deficiency R_{m,r} of a randomization scheme for C_j ∈ {0,1}^{EΓ—m}, j ∈ 1:s is defined as

R_{m,r} = mr + sup_{V ∈ V_r} log_2(Pr(Z(k) = 1 for all k ∈ V))

with Z(k) = 1{Ξ£_{j=1}^{s} Ξ£_{β„“ ∈ ΞΊ_j} C_j(β„“,:) = 0 mod 2}.

In [19], a randomization scheme is called asymptotically full-rank if R_{m,1} is bounded as m β†’ ∞. This is satisfied by the random linear scrambling based on common choices of generating matrices such as those from Sobol' [22]. Much less is known about R_{m,r} for r β©Ύ 2. One might guess R_{m,r} β©½ r R_{m,1}, but this is not true in general. Section 5 of [20] provides a three-dimensional example where R_{m,1} β©½ 5 but R_{m,2} β©Ύ m/2 + 3 and m is an arbitrarily large even number. Fortunately, for most generating matrices the corresponding R_{m,r} grows logarithmically in m in the following sense:

Theorem 9. Let I_m be the set of nonsingular mΓ—m F_2-matrices and let C_j, j ∈ 1:s be independently sampled from U(I_m).
Then for the random linear scrambling based on generating matrices Cj, j∈1:s, Pr Rm,1β©Ύ3slog2(m+ 1) β©½exp(2 s) (m+ 1)2s(45) and for rβ©Ύ2 Pr Rm,rβ©Ύmax Rm,rβˆ’1,(2r+ 2rβˆ’1)slog2(m+ 1) β©½exp(2 sr) (m+ 1)2sr.(46) 25 Proof. Recall that βŒˆΞΊβŒ‰is the largest element of ΞΊβŠ†NandβŒˆΞΊβŒ‰= max j∈1:s⌈κjβŒ‰. For any nonzero k, if⌈κjβˆ—βŒ‰> m for a jβˆ—βˆˆ1:s, we can find β„“βˆˆΞΊjβˆ—such that β„“ > m andMjβˆ—(β„“,:) follows a U({0,1}m) distribution. Because Cjβˆ—is nonsingular, Cjβˆ—(β„“,:) =Mjβˆ—(β„“,:)Cjβˆ—also follows a U({0,1}m) distribution andPs j=1P β„“βˆˆΞΊjCj(β„“,:) =0mod 2 occurs with probability 2βˆ’m. Hence, it suffices to consider the maximum of Pr( Z(k) = 1| Cj, j∈1:s) over all nonzero kwith βŒˆΞΊβŒ‰β©½m. Instead of directly sampling CjfromU(Im), we sample Cβˆ— j, j∈1:sinde- pendently from U({0,1}mΓ—m) and view CjasCβˆ— jconditioned on Cβˆ— j∈ Im. For each j∈1:s, the probability Cβˆ— j∈ Imis given byQm β„“=1(1βˆ’2βˆ’m+β„“βˆ’1) because there are 2mβˆ’2β„“βˆ’1choices for the ℓ’th row of Cβˆ— jto be linearly independent of previous rows. We notice this probability is monotonically decreasing in mand lim mβ†’βˆžmY β„“=1(1βˆ’2βˆ’m+β„“βˆ’1) =∞Y β„“=1(1βˆ’2βˆ’β„“)β©Ύexp(βˆ’βˆžX β„“=12βˆ’β„“ 1βˆ’2βˆ’β„“)β©Ύexp(βˆ’2), where we have used log(1 βˆ’x)β©Ύβˆ’x/(1βˆ’x) for x∈(0,1). LetCβˆ— j=MjCβˆ— jand Zβˆ—(k) =1nsX j=1X β„“βˆˆΞΊjCβˆ— j(β„“,:) =0mod 2o =1nsX j=1X β„“βˆˆΞΊjMj(β„“,:)Cβˆ— j=0mod 2o . (47) When ΞΊjΜΈ=βˆ…and⌈κjβŒ‰β©½m,P β„“βˆˆΞΊjMj(β„“,:)ΜΈ=0andP β„“βˆˆΞΊjMj(β„“,:)Cβˆ— jfollows aU({0,1}m) distribution. Hence when kΜΈ=0andβŒˆΞΊβŒ‰β©½m, 2βˆ’m= Pr( Zβˆ—(k) = 1) β©ΎPr(Cβˆ—
j∈ Im, j∈1:s) Pr(Zβˆ—(k) = 1| Cβˆ— j∈ Im, j∈1:s). Conditioned on Cβˆ— j∈ Imfor all j∈1:s,Z(k) has the same distribution as Zβˆ—(k). Therefore Pr(Z(k) = 1) β©½1 Pr(Cβˆ— j∈ Im, j∈1:s)2βˆ’mβ©½exp(2 s)2βˆ’m. (48) Because Pr( Z(k) = 1) = E[Pr(Z(k) = 1| Cj, j∈1:s)], the Markov’s inequality shows for each nonzero k Pr Pr(Z(k) = 1| Cj, j∈1:s)>2βˆ’m+R β©½exp(2 s)2βˆ’R(49) forRβ©Ύ0. Next, we notice Pr( Z(k) = 1 | Cj, j∈1:s) varies with konly through ⌈κjβŒ‰, j∈1:s. This is because sX j=1X β„“βˆˆΞΊjCj(β„“,:) =sX j=1X β„“βˆˆΞΊjMj(β„“,:) Cj andP β„“βˆˆΞΊjMj(β„“,:)d=Mj(⌈κjβŒ‰,:) due to the lower triangular structure of Mj. When βŒˆΞΊβŒ‰β©½m, each ⌈κjβŒ‰can take a value between 0 and mand there are at 26 most ( m+ 1)scombinations. A uniform bound over all combinations shows for R= 3slog2(m+ 1) Pr sup kΜΈ=0Pr(Z(k) = 1| Cj, j∈1:s)>2βˆ’m+R β©½(m+ 1)sexp(2 s)2βˆ’R =exp(2 s) (m+ 1)2s. It follows from the definition of 1-way rank deficiency that Rm,1β©½Rwhen supkΜΈ=0Pr(Z(k) = 1| Cj, j∈1:s)β©½2βˆ’m+R, so we have proven equation (45). The proof of equation (46) is similar. For rβ©Ύ2, let V={k1, . . . ,kr}has rank rwithki= (k1,i, . . . , k s,i). Suppose ⌈κjβˆ—,iβˆ—βŒ‰> m for some jβˆ—βˆˆ1:sand iβˆ—βˆˆ1:r. After an invertible linear transformation on Vif necessary, we can find β„“βˆˆΞΊjβˆ—,1such that β„“ > m andβ„“ /∈κjβˆ—,ifor all i∈2:r. Conditioned on all random entries of Cj, j∈1:sandMj, j∈1:sother than Mjβˆ—(β„“,:),Z(ki) is nonrandom fori∈2:randZ(k1) = 1 with 2βˆ’mprobability because Cjβˆ—is nonsingular and Cjβˆ—(β„“,:) =Mjβˆ—(β„“,:)Cjβˆ—follows a U({0,1}m) distribution. Hence Pr(Z(k) = 1 for all k∈V) = 2βˆ’mPr(Z(ki) = 1 , i∈2:r)β©½2βˆ’mr+Rm,rβˆ’1. (50) Next suppose max i∈1:r⌈κiβŒ‰β©½m. After an invertible linear transformation on Vif necessary, we can find jβˆ—βˆˆ1:ssuch that ⌈κjβˆ—,1βŒ‰>⌈κjβˆ—,iβŒ‰for all i∈2:r. Denote β„“βˆ—=⌈κjβˆ—,1βŒ‰. As before, we let Cβˆ— j, j∈1:sbe independently sampled fromU({0,1}mΓ—m) and Zβˆ—(k) given by equation (47) for k∈V. 
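As a numerical sanity check on the nonsingularity probability used in this proof, the closed-form product ∏_{β„“=1}^{m} (1 βˆ’ 2^{βˆ’β„“}) can be compared against the empirical fraction of nonsingular random F_2 matrices. The elimination routine below is a generic sketch, not code from the paper:

```python
import random

def f2_rank(rows, m):
    # Gaussian elimination over F_2; each row is stored as an m-bit integer.
    rank = 0
    for col in range(m):
        bit = 1 << col
        pivot = next((i for i in range(rank, len(rows)) if rows[i] & bit), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] & bit:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

m = 6
# closed form from the proof: prod_{l=1}^{m} (1 - 2^{-m+l-1}) = prod_{l=1}^{m} (1 - 2^{-l})
p_exact = 1.0
for l in range(1, m + 1):
    p_exact *= 1 - 2.0 ** (-l)

random.seed(2)
trials = 20_000
hits = sum(
    f2_rank([random.getrandbits(m) for _ in range(m)], m) == m
    for _ in range(trials)
)
p_mc = hits / trials
print(p_exact, p_mc)
```

The product decreases in m toward ∏_{β„“β©Ύ1}(1 βˆ’ 2^{βˆ’β„“}) β‰ˆ 0.2888, comfortably above the exp(βˆ’2) β‰ˆ 0.1353 lower bound used in the proof.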
Conditioned on all random entries of Mj, j∈1:sandCβˆ— j, j∈1:sother than Cβˆ— jβˆ—(β„“βˆ—,:),Zβˆ—(ki) is nonrandom for i∈2:randZβˆ—(k1) = 1 with 2βˆ’mprobability because Mjβˆ— equals 1 on the diagonal and Mjβˆ—(β„“βˆ—,:)Cβˆ— jβˆ—=Cβˆ— jβˆ—(β„“βˆ—,:) +X β„“<β„“βˆ—Mjβˆ—(β„“βˆ—, β„“)Cβˆ— jβˆ—(β„“,:) follows a U({0,1}m) distribution. Hence Pr(Zβˆ—(k) = 1 for all k∈V) = 2βˆ’mPr(Zβˆ—(ki) = 1 , i∈2:r). By inductively applying the preceding argument to Vβ€²={k2, . . . ,kr}, we get Pr(Zβˆ—(k) = 1 for all k∈V) = 2βˆ’mr. Then similar to equation (48) and (49), we can derive Pr(Z(k) = 1 for all k∈V)β©½exp(2 sr)2βˆ’mr and for Rβ©Ύ0 Pr Pr(Z(k) = 1 for all k∈V| Cj, j∈1:s)>2βˆ’mr+R β©½exp(2 sr)2βˆ’R.(51) Finally, for each j∈1:sanduβŠ†1:r, we define ΞΊj,u={β„“βˆˆN|β„“βˆˆΞΊj,ifori∈u, β„“ /∈κj,ifori∈1:r\u}. 27 Notice that ΞΊj,u∩κj,uβ€²=βˆ…ifuΜΈ=uβ€²andΞΊj,uforu={i}is not equal to ΞΊj,i. By the lower triangular structure of Mj, X β„“βˆˆΞΊj,iMj(β„“,:)Cj=X uβŠ†1:r i∈uX β„“βˆˆΞΊj,uMj(β„“,:) Cjd=X uβŠ†1:r i∈uMj(⌈κj,uβŒ‰,:)Cj. It follows that Pr( Z(k) = 1 for all k∈V| Cj, j∈1:s) is equal to the probability thatsX j=1X uβŠ†1:r i∈uMj(⌈κj,uβŒ‰,:)Cj=0mod 2 for all i∈1:r. There are 2rβˆ’1 nonempty uβŠ†1:rand each ⌈κj,uβŒ‰can take a value between 0 and mwhen uΜΈ=βˆ…and max i∈1:r⌈κiβŒ‰β©½m, providing at most ( m+ 1)(2rβˆ’1)s combinations of {⌈κj,uβŒ‰, j∈1:s, uβŠ†1:r}. By equation (51) and a union bound over all combinations, the probability that sup V=(k1,...,kr)∈Vr maxi∈1:r⌈κiβŒ‰β©½mPr(Z(k) = 1 for all k∈V| Cj, j∈1:s)>2βˆ’mr+R is bounded
by ( m+ 1)s(2rβˆ’1)exp(2 sr)2βˆ’R, which equals exp(2 sr)(m+ 1)βˆ’2sr when R= (2r+ 2rβˆ’1)slog2(m+ 1). Equation (46) follows once we combine the above bound with equation (50). Corollary 5. When mβ©Ύ3, there exist generating matrices Cj, j∈1:ssuch that the random linear scrambling has marginal order 1and satisfies Rm,rβ©½ (2r+ 2rβˆ’1)slog2(m+ 1) for all r∈N. Proof. LetCj, j∈1:sbe independently sampled from U(Im). The marginal order is 1 because every Cjis nonsingular. By equation (45) and (46), a union bound over all r∈Ngives Pr(Rm,rβ©½(2r+rβˆ’1)slog2(m+ 1) for all rβ©Ύ1)β©Ύ1βˆ’βˆžX r=1exp(2 sr) (m+ 1)2sr, which is positive because ( m+ 1)βˆ’2sexp(2 s)<2βˆ’swhen mβ©Ύ3. Now we are ready to generalize Theorem 1. Let Nm= sup{N∈N0| |QN|β©½1 2log2(m)2m}. (52) By equation (15), Nm∼λm2/s+mlog2(m)/s+Dβ€²β€² smlog2log2(m) with Ξ»= 3(log 2)2/Ο€2andDβ€²β€² sa constant depending on s. We first prove a generalization of Lemma 4. 28 Lemma 10. LetKmβŠ†QNmandlim inf mβ†’βˆž|Km|/|QNm|>0. Under a ran- domization scheme with marginal order d∈N0andr-way rank deficiency Rm,r satisfying limmβ†’βˆžRm,1/m= lim mβ†’βˆžRm,2/m= 0, lim mβ†’βˆžPr|Km| 2m+1β©½X k∈KmZ(k)β©½3|Km| 2m+1 = 1. Proof. LetKm,d={k∈Km| βŒˆΞΊβŒ‰β©½dm}. Because Nm∼λm2/s, we can find a constant ρdepending on dandssuch that dm⩽ρ√Nmfor large enough m. Lemma 9 then implies |Km,d|=|QNm||Km,d| |QNm|β©½1 2log2(m)2mAρ,sN1/4 mexp βˆ’Bρ,sp Nm . LetA={Z(k) = 1 for any k∈Km,d}. By a union bound argument, Pr(A)β©½2βˆ’m+Rm,1|Km,d|β©½1 2log2(m)2Rm,1Aρ,sN1/4 mexp βˆ’Bρ,sp Nm , which converges to 0 since lim mβ†’βˆžRm,1/m= 0. Similarly, we define Kβ€² m,d={(k1,k2)∈K2 m|ΞΊ>dm j,1=ΞΊ>dm j,2for all j∈1:s}. A similar argument using Lemma 9 shows 2βˆ’2m+Rm,2|Kβ€² m,d|converges to 0. Fork1∈Km\Km,d, we can find β„“1, j1such that β„“1> dm, β„“ 1∈κj1. By the definition of marginal order, Cj1(β„“1,:) is independently drawn from U({0,1}m), soZ(k) is independent of Aand Pr( Z(k)| Ac) = 2βˆ’m. 
Furthermore, if k1,k2∈Km\Km,dand (k1,k2)/∈Kβ€² Nm,d, we can find, after replacing k2by k1βŠ•k2if necessary, β„“1, β„“2, j1, j2such that β„“1> dm, β„“ 1∈κj1,1, β„“1/∈κj1,2, β„“2> dm, β„“ 2∈κj2,2, β„“2/∈κj2,1. Because Cj1(β„“1,:) and Cj2(β„“2,:) are independently drawn from U({0,1}m),{Z(k1), Z(k2),A}are jointly independent and the con- ditional covariance Cov( Z(k1), Z(k2)| Ac) = 0. Therefore, EhX k∈KmZ(k) Aci =X k∈Km\Km,dPr(Z(k)| Ac) = 2βˆ’m(|Km| βˆ’ |Km,d|) and VarX k∈KmZ(k) Ac β©½X k∈Km\Km,dVar(Z(k)| Ac) +X (k1,k2)∈Kβ€² m,dE[Z(k1)Z(k2)| Ac] β©½2βˆ’m(|Km| βˆ’ |Km,d|) +1 Pr(Ac)2βˆ’2m+Rm,2|Kβ€² m,d|. Since 2βˆ’m|Km| β†’ ∞ , 2βˆ’m|Km,d| β†’0, Pr(Ac)β†’1 and 2βˆ’2m+Rm,2|Kβ€² m,d| β†’0 asmβ†’ ∞ , our conclusion follows from the Chebyshev’s inequality. 29 Theorem 10. Suppose f∈C∞([0,1]s)satisfies the assumptions of Theorem 6. Then under a randomization scheme with marginal order d∈N0andr-way rank deficiency Rm,rsatisfying Rm,rβ©½(2r+ 2rβˆ’1)slog2(m+ 1) forr∈N, lim mβ†’βˆžPr(Λ†Β΅βˆž< Β΅) +1 2Pr(Λ†Β΅βˆž=Β΅) =1 2. Proof. Let SUM 1,SUM 2and SUMβ€² 1be defined by equation (8), (9) and (10) with Km=QNmforNmdefined by equation (52). We will follow the same three steps outlined in Section 3. In Step 1, equation (20) implies for V={k∈QNm|Z(k) = 1} dTV(SUM 1,SUMβ€² 1)β©½Pr(V ∈Im) β©½Pr V ∈Im,|V|β©½3 4log2(m) + Pr |V|>3 4log2(m) . Lemma 10 with Km=QNmimplies Pr( |V|>(3/4) log2(m)) converges to 0. Next, similar to equation (21), we have for large enough m Pr V ∈Im,|V|β©½3 4log2(m) ⩽⌊(3/4) log2(m)βŒ‹X r=2X W∈Iβˆ— m,r+1Pr(WβŠ† V). Because each W∈Iβˆ— m,r+1has full rank, the definition of r-way rank deficiency and Lemma 6 imply X W∈Iβˆ— m,r+1Pr(WβŠ† V)β©½|Iβˆ— m,r+1|2βˆ’mr+Rm,rβ©½2Rm,r
(r+ 1)!Ar sNr/4 mrβˆ’Bs√Nm. Forrβ©½(3/4) log2(m),Rm,rβ©½(m3/4+ (3/2) log2(m)βˆ’1)slog2(m+ 1). Hence Pr V ∈Im,|V|β©½3 2log2(m) β©½2(m3/4+(3/2) log2(m)βˆ’1)slog2(m+1)⌊(3/4) log2(m)βŒ‹X r=2(AsN1/4 m)r (r+ 1)!rβˆ’Bs√Nm β©½2(m3/4+(3/2) log2(m)βˆ’1)slog2(m+1)exp(AsN1/4 m)2βˆ’Bs√Nm, which converges to 0 as mβ†’ ∞ since Nm∼λm2/s. The proof of Step 2 is essentially the same as before, except that the prob- ability Z(k) = 1 for any k∈˜Qis now bounded by 2Rm,1mds,Ξ±exp(βˆ’cβ€² sΟ΅2 mm), where ˜Q, d s,Ξ±, cβ€² s, Ο΅mare defined as in the proof of Theorem 4. In particular, we still have lim mβ†’βˆžPr(|SUM 2|β©ΎTm) = 0 for Tmdefined by equation (30) when Rm,1β©½3slog2(m+ 1). In Step 3, Theorem 6 shows equation (32) holds and |Qm(Tm)|> clog2(m)2m for some c > 0 when mis large enough. Then we can apply Lemma 10 with Km=Qm(Tm) to get lim mβ†’βˆžPr(|W|>(c/2) log2(m)) = 1 for W= V ∩Qm(Tm). An argument similar to equation (33) completes the proof. 30 Corollary 6. Let[Λ†Β΅(β„“) E,Λ†Β΅(u) E]be the confidence interval from Corollary 3 with Eβ©ΎNmforNmdefined in equation (52). Under the assumptions of Theo- rem 10, lim inf mβ†’βˆžPr(¡∈[Λ†Β΅(β„“) E,Λ†Β΅(u) E])β©ΎF(uβˆ’1)βˆ’F(β„“βˆ’1) with F(Ξ½)defined as in Corollary 3. Proof. The proof is essentially the same as that of Corollary 3 except that equation (41) becomes Pr |SUM 2,E|β©Ύ2Tm β©½2Rm,1mds,Ξ±exp(βˆ’cβ€² sm log2(m)2), which still converges to 0 when Rm,1β©½3slog2(m+ 1). Counterparts of Theorem 8 and Corollary 4 can also be established using a slightly modified proof. Remark 8. Corollary 5 shows there exist generating matrices for which the random linear scrambling satisfies the assumptions of Theorem 10. In fact, the proof shows generating matrices randomly drawn from U(Im) are qualified with high probability. If further Rm,rβ©½Crlog2(m+ 1) for some C > 0, one can modify the proof and show the√mconvergence rate in equation (37) also holds. 
Whether there exist generating matrices achieving such bounds is an open question left for future research.

5 Numerical experiments

In this section, we validate our theoretical results on two highly skewed integrands and two types of randomization. For each integrand and each randomization, we first compute the probability that Λ†Β΅_E is larger than Β΅ and verify this probability converges to 1/2. The precision E is chosen according to a small test run to make sure 2^{βˆ’E} is much smaller than the observed errors. Next, we generate our quantile intervals and the traditional t-intervals 1000 times each. Each confidence interval is constructed from r = 9 independent replicates of Λ†Β΅_E. For the quantile interval, we choose β„“ = 2 and u = 8 as described in Corollary 3. The predicted coverage level according to equation (39) is approximately 96.1%. The t-interval is [Β―Β΅ βˆ’ tΛ†Οƒ, Β―Β΅ + tΛ†Οƒ] for Β―Β΅ the sample mean and Λ†Οƒ the sample standard deviation of the 9 replicates of Λ†Β΅_E. We choose t β‰ˆ 2.46 so that the predicted coverage level according to a t-distribution with 8 degrees of freedom equals that of the quantile interval. We report the 90th percentile of the 1000 interval lengths to compare the efficiency of the two constructions. We further estimate the coverage level by computing the proportion of intervals containing Β΅. We call the coverage level too low if fewer than 950 intervals
out of the 1000 runs contain Β΅ and too high if more than 970 intervals contain Β΅. Below we use CRD as shorthand for the β€œcompletely random design” and RLS for the β€œrandom linear scrambling”. The generating matrices for RLS come from [10].

Figure 1: Deviation of Pr(Λ†Β΅_E > Β΅) from 1/2 for f(x) = x^{33} exp(x).

5.1 One-dimensional example

We start with the one-dimensional integrand f(x) = x^{33} exp(x). The power 33 is chosen so that Pr(f(U) > Β΅) β‰ˆ 10% for U following a uniform [0,1] distribution. In other words, Pr(Λ†Β΅_E > Β΅) β‰ˆ 10% when we set m = 0 and use one function evaluation to estimate Β΅. Figure 1 records the deviation of the estimated Pr(Λ†Β΅_E > Β΅) from 1/2 across m = 1, ..., 12. Each Pr(Λ†Β΅_E > Β΅) is computed from 8Γ—10^6 replicates with precision E = 64. As expected, Pr(Λ†Β΅_E > Β΅) converges to 1/2 for both choices of randomization. Although our analysis only guarantees a very slow convergence under RLS, the empirical convergence rate outperforms that of CRD. Figures 2 and 3 compare the 90th percentile interval lengths and the empirical coverage levels, respectively. We observe that the quantile intervals have rapidly shrinking interval lengths while achieving the target coverage level for m β©Ύ 5. On the other hand, the t-intervals tend to be much wider due to the influence of outliers, and their coverage levels become too high for m β©Ύ 7. The quantile intervals are therefore preferred over the t-intervals for constructing confidence intervals from Λ†Β΅_E.

5.2 Eight-dimensional example

Next, we investigate the impact of dimensionality using the eight-dimensional function f(x) = ∏_{j=1}^{8} x_j exp(x_j). We have Pr(f(U) > Β΅) β‰ˆ 12.2% for U following a uniform [0,1]^8 distribution. Figure 4 records the deviation of the estimated Pr(Λ†Β΅_E > Β΅) from 1/2 across m = 1, ..., 18.
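The β‰ˆ96.1% predicted coverage quoted in the experimental setup above (r = 9, β„“ = 2, u = 8) follows directly from the Binomial(9, 1/2) distribution function in equation (39); a minimal sketch:

```python
from math import comb

# Predicted coverage of the quantile interval (equation (39)):
# coverage = F(u-1) - F(ell-1) for F the Binomial(r, 1/2) CDF.
r, ell, u = 9, 2, 8

def binom_cdf_half(nu, r):
    # F(nu) = Pr(Bin(r, 1/2) <= nu)
    return sum(comb(r, j) for j in range(nu + 1)) / 2**r

coverage = binom_cdf_half(u - 1, r) - binom_cdf_half(ell - 1, r)
print(coverage)  # 492/512 = 0.9609375
```

The matching t β‰ˆ 2.46 is then the two-sided Student-t(8) quantile whose central probability equals this coverage.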
Each Pr(Λ†Β΅_E > Β΅) is computed from 8Γ—10^4 replicates with precision E = 32. We observe that convergence of

Figure 2: 90th percentile interval lengths of quantile intervals and t-intervals for f(x) = x^{33} exp(x).

Figure 3: Coverage levels of quantile intervals and t-intervals for f(x) = x^{33} exp(x).

Figure 4: Deviation of Pr(Λ†Β΅_E > Β΅) from 1/2
for f(x) = ∏_{j=1}^{8} x_j exp(x_j).

Figure 5: 90th percentile interval lengths of quantile intervals and t-intervals for f(x) = ∏_{j=1}^{8} x_j exp(x_j).

Figure 6: Coverage levels of quantile intervals and t-intervals for f(x) = ∏_{j=1}^{8} x_j exp(x_j).

[Figure: histogram of Λ†Β΅_E βˆ’ Β΅ with density axis]