that $\alpha(x)$ is increasing for $x > e^{5/2}/2$ and decreasing for $x < e^{5/2}/2$, with $x_0 = e^{5/2}/2$ being the unique minimum. However, at $x_0 = e^{5/2}/2$, $\alpha(x_0) = \frac{5}{8} - \frac{5}{8}\ln 2 > 0$. Hence $\alpha(x)$ must be non-negative and the lemma is established.

Lemma S.4.12. Let $B^2 \ln(2p) \le n\sigma^2$ and consider the random variable $W$ such that
$$W \sim \mathrm{Bin}\left(n, \frac{\sigma^2}{\sigma^2 + B^2}\right).$$
Then for $t = \sigma\sqrt{\ln\bigl(2p/\sqrt{\ln(2p)}\bigr)}/\sqrt{en}$, we have
$$P\left(W \ge \frac{n\sigma^2}{\sigma^2 + B^2} + \frac{nBt}{\sigma^2 + B^2}\right) \ge c\,\Phi\left(-\frac{\sqrt{2n}\,t}{\sigma}\right),$$
where $c < 1$ is a universal constant and $\Phi(\cdot)$ is the normal distribution function.

Proof. The first thing to be noted is that
$$P\left(W \ge \frac{n\sigma^2}{\sigma^2+B^2} + \frac{nBt}{\sigma^2+B^2}\right) = P\left(W \ge \left\lceil \frac{n\sigma^2}{\sigma^2+B^2} + \frac{nBt}{\sigma^2+B^2} \right\rceil\right),$$
where $\lceil\cdot\rceil$ is the ceiling function. Consequently,
$$\left\lceil \frac{n\sigma^2}{\sigma^2+B^2} + \frac{nBt}{\sigma^2+B^2} \right\rceil = \frac{n\sigma^2}{\sigma^2+B^2} + \frac{nBt^*}{\sigma^2+B^2},$$
where $t \le t^* < t + (\sigma^2+B^2)/(nB)$. By Zubkov and Serov (2013), we have
$$P\left(W \ge \left\lceil \frac{n\sigma^2}{\sigma^2+B^2} + \frac{nBt}{\sigma^2+B^2} \right\rceil\right) \ge \Phi\left(-\sqrt{2n\,H\!\left(\frac{k}{n},\, p_{\sigma,B}\right)}\right),$$
where $H(a,p) = a\ln(a/p) + (1-a)\ln((1-a)/(1-p))$, $p_{\sigma,B} = \sigma^2/(\sigma^2+B^2)$ and $k = (n\sigma^2 + nBt^*)/(\sigma^2+B^2)$. Moreover, $H(a,p)$ is increasing in $a$ for $a \ge p$. By Lemma 6.3 of Csiszár and Talata (2006),
$$H(a,p) \le \frac{(a-p)^2}{p(1-p)},$$
whereby it suffices to show that
$$\Phi\left(-\sqrt{2n}\left(\frac{t}{\sigma} + \frac{\sigma^2+B^2}{n\sigma B}\right)\right) \ge c\,\Phi\left(-\frac{\sqrt{2n}\,t}{\sigma}\right)$$
for some $c < 1$.
Lemma S.4.11, combined with an application of Lemma S.4.2, gives us
$$\frac{1}{\sqrt{2\pi}}\,\frac{e^{-x^2/2}}{x+1} \le \Phi(-x) \le \frac{1}{\sqrt{2\pi}}\,\frac{e^{-x^2/2}}{x}. \tag{E.34}$$
By (E.34),
$$\Phi\left(-\sqrt{2n}\left(\frac{t}{\sigma} + \frac{\sigma^2+B^2}{n\sigma B}\right)\right) \ge \frac{1}{\sqrt{2\pi}}\,\frac{1}{1+\sqrt{2n}\left(\frac{t}{\sigma}+\frac{\sigma^2+B^2}{n\sigma B}\right)}\exp\left(-n\left(\frac{t}{\sigma}+\frac{\sigma^2+B^2}{n\sigma B}\right)^2\right).$$
Since $B^2\ln(2p) \le en\sigma^2$, we can write
$$\sqrt{\frac{\ln(2p)}{ne}} \le \frac{\sigma}{B} \le 1.$$
As a consequence,
$$n\left(\frac{t}{\sigma}+\frac{\sigma^2+B^2}{n\sigma B}\right)^2 = \frac{nt^2}{\sigma^2} + \frac{2t}{\sigma}\,\frac{\sigma^2+B^2}{B\sigma} + \frac{1}{n}\left(\frac{\sigma^2+B^2}{\sigma B}\right)^2 = \frac{nt^2}{\sigma^2} + \frac{2t}{\sigma}\left(\frac{\sigma}{B}+\frac{B}{\sigma}\right) + \frac{1}{n}\left(\frac{\sigma}{B}+\frac{B}{\sigma}\right)^2$$
$$\le \frac{nt^2}{\sigma^2} + \frac{2\sqrt{\ln\bigl(2p/\sqrt{\ln(2p)}\bigr)}}{\sqrt{ne}}\left(1+\frac{B}{\sigma}\right) + \frac{1}{n}\left(1+\frac{B}{\sigma}\right)^2$$
$$\le \frac{nt^2}{\sigma^2} + \frac{2\sqrt{2\ln(2p)}}{\sqrt{ne}}\left(1+\frac{\sqrt{ne}}{\sqrt{\ln(2p)}}\right) + \frac{1}{n}\left(1+\frac{\sqrt{ne}}{\sqrt{\ln(2p)}}\right)^2 \tag{E.35}$$
$$\le \frac{nt^2}{\sigma^2} + \frac{2\sqrt{2\ln(2p)}}{\sqrt{ne}} + 2 + \left(\frac{1}{\sqrt{n}}+\frac{\sqrt{e}}{\sqrt{\ln(2p)}}\right)^2$$
$$\le \frac{nt^2}{\sigma^2} + 5 + \left(1+\frac{\sqrt{e}}{\sqrt{\ln 2}}\right)^2. \tag{E.36}$$
(E.35) follows by observing the simple fact that for all $p \ge 1$, $\ln\bigl(2p/\sqrt{\ln(2p)}\bigr) \le 2\ln(2p)$, while (E.36) follows by observing that $\sqrt{\ln(2p)/(ne)} \le 1$ and $n \ge 1$, $p \ge 1$. Taking
$$\ln\rho = -5 - \left(1+\frac{\sqrt{e}}{\sqrt{\ln 2}}\right)^2,$$
we observe that
$$\exp\left(-n\left(\frac{t}{\sigma}+\frac{\sigma^2+B^2}{n\sigma B}\right)^2\right) \ge \rho\exp\left(-\frac{nt^2}{\sigma^2}\right).$$
For $t < (\sigma^2+B^2)/(nB)$, we must have
$$\sqrt{2n}\left(\frac{\sigma^2+B^2}{n\sigma B}\right) \le \sqrt{\frac{2}{n}}\,\frac{\sigma}{B} + \sqrt{\frac{2}{n}}\,\frac{B}{\sigma} \le \sqrt{2} + \sqrt{\frac{2}{n}}\,\frac{\sqrt{ne}}{\sqrt{\ln(2p)}} \le \sqrt{2} + \frac{\sqrt{2e}}{\sqrt{\ln 2}} =: \alpha.$$
Then
$$1+\sqrt{2n}\left(\frac{t}{\sigma}+\frac{\sigma^2+B^2}{n\sigma B}\right) \le 1+\alpha+\frac{t\sqrt{2n}}{\sigma} \le \frac{t\sqrt{2n}}{\sigma}\bigl(1+f(1+\alpha)\bigr),$$
where $\sqrt{e}/f = \min_{p\ge 1}\sqrt{2\ln\bigl(2p/\sqrt{\ln(2p)}\bigr)}$.
As a result,
$$\Phi\left(-\sqrt{2n}\left(\frac{t}{\sigma}+\frac{\sigma^2+B^2}{n\sigma B}\right)\right) \ge \frac{\rho}{1+f(1+\alpha)}\,\frac{1}{\sqrt{2\pi}}\,\frac{\sigma}{t\sqrt{2n}}\exp\left(-\frac{nt^2}{\sigma^2}\right) \ge \frac{\rho}{1+f(1+\alpha)}\,\Phi\left(-\frac{t\sqrt{2n}}{\sigma}\right).$$
For $t \ge (\sigma^2+B^2)/(nB)$,
$$1+\sqrt{2n}\left(\frac{t}{\sigma}+\frac{\sigma^2+B^2}{n\sigma B}\right) \le \frac{t\sqrt{2n}}{\sigma}\left(2+f\sqrt{2e}\right) = \beta\,\frac{t\sqrt{2n}}{\sigma}.$$
Hence in this case,
$$\Phi\left(-\sqrt{2n}\left(\frac{t}{\sigma}+\frac{\sigma^2+B^2}{n\sigma B}\right)\right) \ge \frac{\rho}{\beta}\,\Phi\left(-\frac{t\sqrt{2n}}{\sigma}\right).$$
Taking $\gamma = \max\{1+f(1+\alpha),\, \beta\}$, we have the desired result:
$$\Phi\left(-\sqrt{2n}\left(\frac{t}{\sigma}+\frac{\sigma^2+B^2}{n\sigma B}\right)\right) \ge \frac{\rho}{\gamma}\,\Phi\left(-\frac{t\sqrt{2n}}{\sigma}\right).$$

Lemma S.4.13. For $x \in [0,1)$ and $p \ge 1$,
$$(1-x)^{1/p} - \left(1-\frac{x}{p(1-x)}\right) \ge 0.$$
Proof. Taking $\mu(x) = (1-x)^{1/p} - \bigl(1-\frac{x}{p(1-x)}\bigr)$, observe that
$$\mu'(x) = \frac{1-(1-x)^{1+1/p}}{p(1-x)^2} \ge 0.$$
This, combined with the fact that $\mu(0) = 0$, implies the result.

Lemma S.4.14. For $x > 0$, $y > 0$ and $xy > 1$, we must have
$$\frac{(xy-1)/e}{\ln(1+(xy-1)/e)} \le \frac{(1+x)y}{\ln((1+x)y)}.$$
Proof. Note that it is enough to show that
$$\frac{(xy-1)/e}{\ln(1+(xy-1)/e)} \le \frac{(1+x)y-1}{\ln((1+x)y)}.$$
Now, since both the quantities $(xy-1)/e$ and $(1+x)y-1$ are positive, our result would follow immediately from the fact that $x/\ln(1+x)$ is increasing for $x > 0$, if we can show that
$$(xy-1)/e \le (1+x)y-1.$$
Note that we can write the following chain of equivalences:
$$(xy-1)/e \le (1+x)y-1 \iff xy-1 \le exy+ey-e \iff exy-xy+ey-e+1 \ge 0 \iff (e-1)(xy-1)+ey \ge 0,$$
the last of which is true. Hence the result follows.

Lemma S.4.15. Let $Y_1,\dots,Y_n$ be independent, mean-zero random variables such that
$$|Y_i| \le K, \qquad \frac{1}{n}\sum_{i=1}^n E[|Y_i|^q] \le M_q, \qquad \frac{1}{n}\sum_{i=1}^n E[Y_i^2] \le \sigma^2.$$
Then, writing $M := M_q^{1/q}$,
$$\frac{1}{n}\sum_{i=1}^n E[|Y_i|^s] \le \begin{cases} \left(\sigma^2/M^2\right)^{\frac{q}{q-2}}\left(M_q/\sigma^2\right)^{s/(q-2)}, & \text{if } 2 \le s \le q,\\[2pt] K^{s-q}M_q, & \text{if } s \ge q.\end{cases}$$
If $M_q = K^{q-2}\sigma^2$, then $K^{q-2}/M^{q-2} = M^2/\sigma^2$ and hence
Source: https://arxiv.org/abs/2504.17885v1
$\left(\sigma^2/M^2\right)^{\frac{q}{q-2}} = (M/K)^q$. This implies that if $M_q = K^{q-2}\sigma^2$, then for all $s \ge 2$ the above bound is $K^{s-q}M_q$. This moment bound is exactly the same as the one used in Bennett's inequality.

Proof. Because $|Y_i| \le K$, we get that for $s \ge q$,
$$\frac{1}{n}\sum_{i=1}^n E[|Y_i|^s] \le K^{s-q}\,\frac{1}{n}\sum_{i=1}^n E[|Y_i|^q] \le K^{s-q}M_q.$$
Using the inequality $(q-2)|Y_i|^s \le (s-2)a^{s-q}|Y_i|^q + (q-s)a^{s-2}|Y_i|^2$ for $2 \le s \le q$ and any positive $a$, we get, using Jensen's inequality,
$$\frac{1}{n}\sum_{i=1}^n E[|Y_i|^s] \le \frac{s-2}{q-2}a^{s-q}\,\frac{1}{n}\sum_{i=1}^n E[|Y_i|^q] + \frac{q-s}{q-2}a^{s-2}\,\frac{1}{n}\sum_{i=1}^n E[|Y_i|^2] \le \frac{s-2}{q-2}a^{s-q}M_q + \frac{q-s}{q-2}a^{s-2}\sigma^2.$$
Minimizing over $a > 0$, we choose $a = (M_q/\sigma^2)^{1/(q-2)}$, and this yields
$$\frac{1}{n}\sum_{i=1}^n E[|Y_i|^s] \le M_q^{(s-2)/(q-2)}\,\sigma^{2(q-s)/(q-2)} = \left(\frac{\sigma^2}{M^2}\right)^{\frac{q}{q-2}}\left(\frac{M_q}{\sigma^2}\right)^{s/(q-2)}, \qquad \text{for } 2 \le s \le q.$$
This completes the proof.

Lemma S.4.16. For $x > \sqrt{e}$,
$$\frac{1}{6\ln x} \le \frac{1}{3\ln(1+x)} \le \frac{\Psi(x)}{(1+x)\ln^2(1+x)} \le \frac{1}{\ln(1+x)} \le \frac{1}{\ln x}.$$
Proof. The upper bounds are a bit easier to establish, so let us prove them first. Recall that $\Psi(x) = (1+x)\ln(1+x) - x$, and hence
$$\frac{(1+x)\ln(1+x)-x}{(1+x)\ln^2(1+x)} = \frac{1}{\ln(1+x)} - \frac{x}{(1+x)\ln^2(1+x)} \le \frac{1}{\ln(1+x)} \le \frac{1}{\ln x}.$$
Now we focus on establishing the lower bounds. Showing that
$$\frac{\Psi(x)}{(1+x)\ln^2(1+x)} \ge \frac{1}{3\ln(1+x)}$$
is equivalent to showing that
$$\xi(x) = \frac{2}{3}\Psi(x) - \frac{x}{3} \ge 0$$
for all $x \ge \sqrt{e}$. This is fairly easy to see once we have taken note of the fact that $\xi'(x) = \frac{2}{3}\ln(1+x) - \frac{1}{3}$ is increasing, $\xi'(\sqrt{e}) > 0$ and $\xi(\sqrt{e}) > 0$. Once this is established, note that for $x > \sqrt{e} > (\sqrt{5}+1)/2$, the function $x^2-x-1$ is positive and increasing. Therefore, for $x > \sqrt{e}$, we have
$$x^2 > 1+x \implies 2\ln x > \ln(1+x) \implies 6\ln x > 3\ln(1+x) \implies \frac{1}{6\ln x} \le \frac{1}{3\ln(1+x)},$$
and we are done.

Lemma S.4.17. Consider the equation
$$\frac{x^q}{q\ln x} = c,$$
where $x$ is known to be larger than $\sqrt{e}$ and $q \ge 1$. Then $\bigl(c\ln(c\ln(c\ln c\cdots))\bigr)^{1/q}$ is a solution to the equation. Further, this solution is $O((c\ln c)^{1/q})$.
Proof. Consider first the equation
$$\frac{y}{\ln y} = r,$$
where $r > e$. Define the sequence $f_n = r\ln f_{n-1}$ with $f_1 = r$. It is easy to see that this is an increasing sequence in $n$.
Further, $f_n \le r^2$ for all $n$, by induction, since $f_{n-1} \le r^2$ implies that
$$f_n = r\ln f_{n-1} \le r\ln r^2 = 2r\ln r \le r^2.$$
As $f_n$ is a positive, increasing sequence that is bounded above, the limit $f_\infty = \lim_{n\to\infty} f_n = r\ln(r\ln(r\ln r\cdots))$ exists finitely. Consider now the original equation. There, $x^q$ plays the role of $y$. Moreover, the left-hand side is increasing in $x$, and if we can show that $(\sqrt{e})^q/(q\ln\sqrt{e}) > e$ for all $q \ge 1$, then we have established the first part of the lemma. Observe that for the function $g(x) = x/2 - \ln(x/2)$, the minimum occurs at the point $x = 2$ and the resulting minimum value is $1$. Therefore $2e^{x/2}/x$ takes a minimum value of $e^1 = e$. So we can say that the equation
$$\frac{x^q}{q\ln x} = c$$
has $\bigl(c\ln(c\ln(c\ln c\cdots))\bigr)^{1/q}$ as a solution. As for the second part of the result, observe that since the sequence $f_n$ is increasing, $r\ln r \le f_\infty$, and since each term in the sequence satisfies $f_n < r^2$, we have $f_\infty \le r^2$. Consequently,
$$f_\infty = r\ln f_\infty \le r\ln r^2 = 2r\ln r.$$
Therefore
$$(c\ln c)^{1/q} \le \bigl(c\ln(c\ln(c\ln c\cdots))\bigr)^{1/q} \le (2c\ln c)^{1/q}.$$

Lemma S.4.18. Consider the function
$$f(x) = -xW\left(-\frac{1}{x}\right) = \exp\left(-W\left(-\frac{1}{x}\right)\right)$$
for $x \ge e$. Then $1 \le f(x) \le e$.
Proof. The way we go about proving this result is by showing that the function $f$ is decreasing. Observe that since the Lambert function $W$ is strictly increasing on $[-1/e, \infty)$, we have
$$f'(x) = -\exp\left(-W\left(-\frac{1}{x}\right)\right)W'\left(-\frac{1}{x}\right)\frac{1}{x^2} < 0.$$
Moreover, $f(e) = e$ and
$$\lim_{x\to\infty} f(x) = \lim_{y\to 0^-}\frac{W(y)}{y} = \lim_{y\to 0^-} e^{-W(y)} = 1,$$
and we are done.

Lemma S.4.19. Suppose $a, b > 0$. Then
$$\inf_{t>0}\frac{a(e^{bt}-1-bt)+x}{t} = ab\,\Psi^{-1}(x/a).$$
Proof. The infimum is attained at $t^*$ satisfying
$$t\left[abe^{bt}-ab\right] - ae^{bt} + a + abt = x.$$
This is equivalent to
$$bte^{bt} - e^{bt} + 1 = x/a \iff e^{bt}(bt-1)+1 = x/a. \tag{E.37}$$
Taking $y = e^{bt}-1$, this equation is the same as $(y+1)\ln(y+1)-y = x/a$, or equivalently, $\Psi(y) = x/a$. Hence
$$t^* = \frac{1}{b}\ln\bigl(1+\Psi^{-1}(x/a)\bigr).$$
Observe that from (E.37), the stationarity condition forces the value of the objective at $t^*$ to be
$$\frac{a(e^{bt^*}-1-bt^*)+x}{t^*} = ab\bigl(e^{bt^*}-1\bigr) = ab\,\Psi^{-1}(x/a),$$
which is the claimed infimum.
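The closed form in Lemma S.4.19 can be sanity-checked numerically. The sketch below is not from the source; it computes $\Psi^{-1}$ by bisection (my own hypothetical helper, since no implementation is given) and compares the claimed infimum $ab\,\Psi^{-1}(x/a)$ against a brute-force grid minimization of the objective.

```python
import math

def psi(y):
    # Psi(y) = (1+y) ln(1+y) - y, as in Lemma S.4.19
    return (1 + y) * math.log(1 + y) - y

def psi_inv(v):
    # invert Psi on [0, oo) by bisection; Psi is increasing there
    lo, hi = 0.0, 1e9
    for _ in range(200):
        mid = (lo + hi) / 2
        if psi(mid) < v:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def objective(t, a, b, x):
    # the function being minimized over t > 0
    return (a * (math.exp(b * t) - 1 - b * t) + x) / t

a, b, x = 2.0, 3.0, 5.0
claimed = a * b * psi_inv(x / a)              # a b Psi^{-1}(x/a)
t_star = math.log(1 + psi_inv(x / a)) / b     # t* = (1/b) ln(1 + Psi^{-1}(x/a))
grid_min = min(objective(0.001 * k, a, b, x) for k in range(1, 5000))
```

On this example the grid minimum and the analytic value agree to about three decimal places, and the minimizer sits at `t_star`.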
Score-Based Deterministic Density Sampling

Vasily Ilin† Peter Sushko‡ Jingwei Hu†
†University of Washington ‡Allen Institute for AI
vilin@uw.edu

Abstract

We propose a deterministic sampling framework using Score-Based Transport Modeling for sampling an unnormalized target density π given only its score ∇log π. Our method approximates the Wasserstein gradient flow on KL(f_t‖π) by learning the time-varying score ∇log f_t on the fly using score matching. While having the same marginal distribution as Langevin dynamics, our method produces smooth deterministic trajectories, resulting in monotone noise-free convergence. We prove that our method dissipates relative entropy at the same rate as the exact gradient flow, provided sufficient training. Numerical experiments validate our theoretical findings: our method converges at the optimal rate, has smooth trajectories, and is usually more sample efficient than its stochastic counterpart. Experiments on high-dimensional image data show that our method produces high-quality generations in as few as 15 steps and exhibits natural exploratory behavior. The memory and runtime scale linearly in the sample size.

1 Introduction

Diffusion generative modeling (DGM) [25, 26] has emerged as a powerful set of techniques to generate "more of the same thing," i.e., given many samples from some unknown distribution π, train a model to generate more samples from π. While SDE-based generation is commonly used in practice, its deterministic counterpart, termed "probability flow ODE" by [26], offers several practical advantages: higher-order solvers [11], better dimension dependence [4], and interpolation in the latent space [24]. The two processes are given by the reverse of the Ornstein–Uhlenbeck (OU) process and the corresponding ODE, respectively:

dX_t = (X_t + 2∇log f_t(X_t)) dt + √2 dB_t,   (DGM SDE)
dX_t = (X_t + ∇log f_t(X_t)) dt,   (DGM ODE)

where f_t is the law of the OU process.
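The reverse-time (DGM ODE) can be illustrated in a setting where everything is analytic. The sketch below is my own illustration, not code from the paper: for Gaussian "data" π = N(μ, σ²), the OU marginals are f_t = N(μe^{−t}, 1 + (σ² − 1)e^{−2t}), so the score is available in closed form; in real DGM a trained network would replace it. Integrating the probability flow ODE backwards from f_T recovers π.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt, T = 20000, 0.005, 3.0
mu, var0 = 1.5, 0.25                 # hypothetical "data" distribution pi = N(mu, var0)

def m_v(t):
    # OU marginals for Gaussian data: f_t = N(mu e^{-t}, 1 + (var0 - 1) e^{-2t})
    return mu * np.exp(-t), 1.0 + (var0 - 1.0) * np.exp(-2.0 * t)

mT, vT = m_v(T)
x = rng.normal(mT, np.sqrt(vT), n)   # start from f_T, which is close to N(0, 1)

t = T
for _ in range(int(T / dt)):         # Euler steps of the reverse-time flow
    m, v = m_v(t)
    score = -(x - m) / v             # analytic grad log f_t; a network in actual DGM
    x += dt * (x + score)            # dX = (X + grad log f_t(X)) dt
    t -= dt
```

At the end of the integration the particle cloud has mean ≈ μ and variance ≈ σ², i.e., the deterministic flow has transported near-Gaussian noise back to the data distribution.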
The score ∇log f_t is learned by running the OU process from π to N(0, I_d), which crucially depends on having many samples from π. In this work we pose and attempt to answer the following question:

How to deterministically sample an unnormalized density π in the absence of samples, using techniques of diffusion generative modeling?

This is a much harder problem than DGM. Indeed, DGM can be reduced to unnormalized density sampling by estimating the score ∇log π on the samples, but the converse is not true: recently, [10] proved an exponential-in-d query complexity lower bound for density sampling. Our main contributions are a deterministic sampling algorithm, scalable to high dimensions, and a proof of fast convergence to log-concave distributions.

Preprint. Under review. arXiv:2504.18130v2 [cs.LG] 17 May 2025

Figure 1: Left: Langevin dynamics (stochastic). Right: ours (deterministic). The deterministic algorithm has the same marginal distributions as the stochastic one but with smooth trajectories.

Unlike classical Langevin dynamics, our algorithm produces smooth deterministic trajectories, and gives access to the otherwise intractable score ∇log f_t. Unlike other deterministic interacting particle systems, the memory and runtime scale linearly in the number of particles. The convergence analysis is based on a system of coupled gradient flows, a novel framework that can be of independent interest as a general technique for analyzing convergence of NN-based approximations of gradient flows. We are not aware of any other work that uses the neural tangent
Source: https://arxiv.org/abs/2504.18130v2
kernel for analyzing a dynamically changing loss. Our main theoretical result is Theorem 4.6. In Section 3 we introduce the main algorithm, building on Score-Based Transport Modeling [1]. In Section 4 we prove that the resulting dynamics achieve the optimal convergence rate. Finally, in Section 5 we verify convergence in several numerical experiments. We give the proofs and additional experiments in the Appendix.

2 Related Work

Sampling from unnormalized densities traditionally relies on stochastic algorithms such as Langevin dynamics, whose fast convergence under isoperimetry assumptions is well understood [7, 28, 27, 5]. Deterministic alternatives, notably Stein Variational Gradient Descent (SVGD) [19, 8], rely heavily on handcrafted kernels, resulting in limited theoretical guarantees and empirical success, especially in high dimensions [30]. Inspired by deterministic probability-flow ODEs from diffusion generative modeling [13, 25, 26, 4, 11], which often outperform their stochastic counterparts due to high-order integrators, better dimension dependence, and latent-space interpolation capabilities [24, 20], Boffi and Vanden-Eijnden [1] proposed Score-Based Transport Modeling (SBTM) as a general method of solving the Fokker–Planck equation with a deterministic interacting particle system, building on neural approximations of gradient flows [29, 9]. However, it was not used for sampling. Our work fills this gap by adapting the method to the problem of sampling, providing entropy dissipation guarantees, and proving rapid convergence for the coupled particle system dynamics, integrating smoothly with recent annealing approaches such as those proposed by [23, 6, 2].

3 Wasserstein gradient flow

Diffusion generative modeling, (DGM SDE) and (DGM ODE), converges quickly because it is the reverse of a fast process, namely the OU process. In the absence of samples from π, there is no known numerically tractable process that can be run from π to f_0 and reversed.
Thus, the standard approach to classical sampling is to greedily minimize a divergence, most commonly the relative entropy KL(f_t‖π) := E_{f_t} log(f_t/π), between f_t and π at every time step. In continuous time, this is called a Wasserstein gradient flow (GF). The Wasserstein GF on KL(·‖π) can be implemented stochastically or deterministically, mimicking equations (DGM SDE) and (DGM ODE):

dX_t = ∇log π(X_t) dt + √2 dB_t,  B_t := Brownian motion,   (GF SDE)
dX_t = ∇log π(X_t) dt − ∇log f_t(X_t) dt,  f_t := law(X_t).   (GF ODE)

Figure 2: Particles are pulled towards the target π and away from the particle density f_t. Brownian motion acts randomly.

Remarkably, the distributions of X_t in equations (GF SDE) and (GF ODE) match exactly. In this work we simulate the latter. See [15] for the rigorous derivation and the connection to Optimal Transport. The term −∇log f_t in the ODE provides deterministic exploration by making samples X¹_t, …, Xⁿ_t repel from each other, since more particles appear where f_t is larger. We visualize the forces ∇log π, −∇log f_t and the noise √2 dB_t in figure 2.

Remark 3.1. While equations (DGM ODE) and (GF ODE) look similar, there is a big difference in how f_t is defined. In (DGM ODE), f_t is given by the OU process started from π. In (GF ODE), f_t follows the gradient flow started from f_0, usually chosen to be the Gaussian N(0, 1).

3.1 Score-Based Transport Modeling

Score-based transport modeling (SBTM) was introduced by [1] as a method of solving Fokker–Planck equations, of which (GF ODE) is a special case.
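The claim that (GF SDE) and (GF ODE) share the same marginals can be checked in a toy case where ∇log f_t is analytic. The sketch below is my own illustration with assumed parameters, not the paper's code: for π = N(0,1) and Gaussian initialization f_0 = N(0, v_0), the gradient-flow marginal stays Gaussian with variance v_t = 1 − (1 − v_0)e^{−2t}, so the deterministic ODE can be simulated exactly and compared against Langevin.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, T = 20000, 0.01, 2.0
v0 = 0.25                              # initial variance, f0 = N(0, v0)

def v_true(t):
    # analytic GF marginal variance for target pi = N(0, 1)
    return 1.0 - (1.0 - v0) * np.exp(-2.0 * t)

x_ode = rng.normal(0.0, np.sqrt(v0), n)   # deterministic (GF ODE) particles
x_sde = x_ode.copy()                      # Langevin (GF SDE) particles, same init

t = 0.0
for _ in range(int(T / dt)):
    score_ft = -x_ode / v_true(t)         # analytic grad log f_t (learned in SBTM)
    x_ode += dt * (-x_ode - score_ft)     # drift = grad log pi - grad log f_t
    x_sde += dt * (-x_sde) + np.sqrt(2 * dt) * rng.normal(size=n)
    t += dt
```

Both ensembles end with variance ≈ v_T and mean ≈ 0, but the ODE trajectories are smooth rescalings of the initial positions while the SDE trajectories are noisy, mirroring figure 1.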
Additionally, SBTM was recently successfully applied to other Fokker–Planck-type equations [14, 21, 12]. The core idea of SBTM is to approximate ∇log f_t with a neural network s_{θ_t} trained on the samples X¹_t, …, Xⁿ_t. The training is done by minimizing the score matching loss from [13, 25]. The SBTM dynamics are given by the interacting particle system

dX_t/dt = ∇log π(X_t) − s_{θ_t}(X_t),  X_0 ~ f_0 iid,
dθ_t/dt = −η ∇_{θ_t} L(s_{θ_t}, f_t),  X_t ~ f_t,   (SBTM)

where L(s, f_t) = E_{f_t}‖s − ∇log f_t‖² is the dynamically changing score-matching loss [13] and η(t) controls the amount of NN training relative to the sampling dynamics. Unlike most other interacting particle systems used for sampling [19, 18, 6], in SBTM the particles X¹_t, …, Xⁿ_t interact via the neural network s_{θ_t}, which makes the memory and runtime scale linearly, and provides much better dimension scaling.

Remark 3.2. The neural network trains on the particles that were produced by following the same neural network, as opposed to the true solution of the Wasserstein GF (GF ODE). Common-sense intuition makes one suspect that local errors will accumulate exponentially. Miraculously, this does not happen; see remark 4.3.

4 Convergence analysis

How quickly does f_t converge to π in (SBTM)? To answer this question we study the decay rate of the relative entropy KL(f_t‖π) [5]. Our strategy is to break down the change in KL(f_t‖π) into the change due to the density f_t and the change due to the neural network s_{θ_t}. Theorem 4.2 does the former, and theorem 4.4 does the latter. Theorem 4.6 brings both pieces together. Theorem 4.8 extends theorem 4.2 to the annealed setting.

4.1 Entropy dissipation

First, recall the following classical result that quantifies the rate of relative entropy decay for the Wasserstein GF on relative entropy:

Theorem 4.1. If f_t follows the Wasserstein GF on KL(·‖π), then relative entropy dissipates at the rate of the relative Fisher information:

−d/dt KL(f_t‖π) = F(f_t‖π) := E_{f_t}‖∇log(f_t/π)‖².
Additionally, if π satisfies the log-Sobolev inequality with constant α (e.g. if π is α-log-concave), then KL(f_t‖π) ≤ KL(f_0‖π) e^{−t/α}.

Intuitively, if s_t ≈ ∇log f_t, SBTM should converge at the nearly optimal rate −d/dt KL(f_t‖π) = F(f_t‖π). Indeed, this holds if the score matching loss is small:

Theorem 4.2 (Small loss guarantees optimal entropy dissipation). If f_t is the density of X_t, which follows

dX_t/dt = ∇log π(X_t) − s_t(X_t),  X_0 ~ f_0,

for any time-dependent vector field s_t, then

−d/dt KL(f_t‖π) = F(f_t‖π) − E_{f_t}⟨s_t − ∇log f_t, ∇log(f_t/π)⟩ ≥ (1/2) F(f_t‖π) − (1/2) L(s_t, f_t).   (4.1)

In particular, if L(s_t, f_t) ≤ (1/2) F(f_t‖π), then −d/dt KL(f_t‖π) ≥ (1/4) F(f_t‖π).

While theorem 4.2 looks simple, it elucidates a remarkable property of SBTM:

Remark 4.3. Integrating (4.1) in time, one obtains

KL(f_T‖π) ≤ KL(f_0‖π) − (1/2) ∫₀ᵀ F(f_t‖π) dt + (1/2) ∫₀ᵀ L(s_t, f_t) dt.

Since there is no exponential term e^T in the bound, local errors do not accumulate. We now show how to ensure that the loss stays small despite the changing f_t.

4.2 Bounding the score matching loss

With sufficient size and amount of training, a neural network can fit arbitrary (finite) data, including dynamically changing data. The next theorem shows that the score-matching loss does not increase, given sufficient training.

Theorem 4.4 (Sufficient training guarantees bounded loss). Assume that f_t is any time-dependent density, X¹_t, …, Xⁿ_t are distributed according
to f_t, and the neural network s_t = s_{θ_t} is trained with gradient descent

dθ_t/dt = −η ∇_{θ_t} L_n(s_t, f_t),  L_n(s_t, f_t) := (1/n) Σ_{i=1}^{n} ‖s(X^i_t) − ∇log f(X^i_t)‖²,

and is such that the Neural Tangent Kernel (NTK)

H^{i,j}_{α,β}(t) = Σ_{k=1}^{N} ∂_{θ_k} s^α_t(X^i_t) ∂_{θ_k} s^β_t(X^j_t),  s(x) = (s¹(x), …, s^d(x)),

is lower bounded by λ = λ(t) > 0, i.e. ‖Hv‖₂ ≥ λ‖v‖₂. Then as long as

η(t) ≥ (n/λ) ∂_τ|_{τ=t} log L_n(s_t, f_τ),

the loss L_n(s_t, f_t) is non-increasing in time.

Remark 4.5. The assumption that the NTK is lower bounded in theorem 4.4 is non-trivial, but holds under relatively mild assumptions [16]; the most restrictive one is that the NN size is superlinear in the number of datapoints. We expect that even this can be weakened under additional regularity assumptions on the target function.

Combining theorems 4.1 and 4.4 shows that the convergence rate of SBTM matches the convergence rate of the true GF (GF ODE). This is the main theoretical guarantee of our sampling method.

Theorem 4.6. Suppose that X_t and θ_t follow (SBTM) and η and H are as in theorem 4.4. If the initial loss is small and the true loss L is well approximated by the training loss L_n:

L_n(s_0, f_0) ≤ (1/4)ε,  ε := inf_{t≤T} F(f_t‖π),   (4.2)
|L_n(s_t, f_t) − L(s_t, f_t)| ≤ (1/4)ε,   (4.3)

then SBTM dissipates relative entropy at the optimal rate:

−d/dt KL(f_t‖π) ≥ (1/4) F(f_t‖π).

If π satisfies the log-Sobolev inequality with constant α, then convergence is exponential:

KL(f_t‖π) ≤ KL(f_0‖π) e^{−t/(4α)}.

Remark 4.7. Since s_t ≈ ∇log f_t, the relative Fisher information may be approximated by

F(f_t‖π) ≈ F_n(f_t‖π) := (1/n) Σ_{i=1}^{n} ‖s(X^i) − ∇log π(X^i)‖².

One may stop the sampling when F_n(f_t‖π) ≤ ε for some predefined ε. This makes condition (4.2) practical. If the particles X¹(t), …, Xⁿ(t) were independent, the Law of Large Numbers would imply lim_{n→∞} |L_n(s_t, f_t) − L(s_t, f_t)| = 0, satisfying condition (4.3) for large enough n. Further, for interacting particle systems propagation of chaos usually holds, namely that in the limit n → ∞ the particles X^i_t become independent.
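The alternation in (SBTM) between transporting particles and training the score model can be sketched in one dimension. The code below is a deliberately minimal stand-in for the paper's method: instead of a neural network it uses a toy linear score model s(y) = w(y − mean) + b, and instead of L_n it minimizes the equivalent implicit score-matching objective E[‖s‖² + 2∇·s] of Hyvärinen [13], which avoids the unknown ∇log f_t. All parameter choices are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps, lr = 5000, 0.02, 250, 0.5

x = rng.normal(2.0, 0.5, n)        # f0 = N(2, 0.25); target pi = N(0, 1)
w, b = 0.0, 0.0                    # toy linear score model s(y) = w*(y - mean) + b

def ism_grads(x, w, b):
    # implicit score matching: L = E[s(x)^2 + 2 s'(x)]; here s'(x) = w
    c = x - x.mean()
    s = w * c + b
    return 2.0 * np.mean(s * c) + 2.0, 2.0 * np.mean(s)

for _ in range(100):               # pretrain the score model on f0
    gw, gb = ism_grads(x, w, b)
    w, b = w - lr * gw, b - lr * gb

for _ in range(steps):             # SBTM loop: transport, then a few training steps
    s = w * (x - x.mean()) + b
    x = x + dt * (-x - s)          # dX/dt = grad log pi(X) - s(X); grad log pi(y) = -y
    for _ in range(5):
        gw, gb = ism_grads(x, w, b)
        w, b = w - lr * gw, b - lr * gb
```

Because a Gaussian score is linear, the toy model is expressive enough here: the particles end up approximately N(0, 1), and the learned slope w tracks −1/Var(f_t) ≈ −1, illustrating how interleaved training keeps the loss small as f_t moves, in the spirit of theorem 4.4.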
In numerical experiments, we treat L_n as L, and confirm the optimal rate of relative entropy dissipation.

4.3 Annealed dynamics

Classical sampling can be broken down into three distinct parts: mode discovery, mode weighting, and mode approximation. While Langevin dynamics performs mode approximation very fast, both mode discovery and mode weighting are challenging. For example, in the absence of samples from π, the mere discovery of a mode of support width h in d dimensions requires Ω(h^{−d}) function evaluations [10]. Empirically, it helps to pick an annealing path between f_0 and π to "guide" f_t, similar to how the forward OU process guides the reverse process in DGM. We use the geometric annealing ρ_t ∝ f_0^{1−t} π^t [23, 22, 3] and the dilation annealing ρ_t(x) ∝ π(x/t) [2]. One obtains a similar entropy dissipation estimate for the annealed dynamics. Taking ρ_t = π recovers theorem 4.2.

Theorem 4.8. If ρ_t is any time-dependent density, s_t any time-dependent vector field, and

dX_t/dt = ∇log ρ_t(X_t) − s_t(X_t),

then

−d/dt KL(f_t‖π) = E_{f_t}⟨s_t − ∇log π, ∇log(f_t/ρ_t)⟩ − E_{f_t}⟨s_t − ∇log f_t, ∇log(f_t/π)⟩.

In particular, if L(s_t, f_t) = 0, then

−d/dt KL(f_t‖π) = E_{f_t}⟨∇log(f_t/π), ∇log(f_t/ρ_t)⟩.   (4.4)

Theorem 4.8 gives a way to test for L(s_t, f_t) = 0 by testing equality (4.4). This is important, because the loss L(s_t, f_t) cannot be computed from a finite sample X¹_t,
…, Xⁿ_t of f_t, since ∇log f_t is unknown. Empirically, we confirm that (4.4) holds, indicating small loss; see figure 5.

5 Experiments

We demonstrate the optimal rate of relative entropy dissipation in several experiments, including challenging non-log-concave targets. We compare SBTM to its stochastic counterpart given by (GF SDE). Additionally, we demonstrate the flexibility of SBTM by simulating annealed Langevin dynamics, and compare to the corresponding SDEs. We do not extensively compare SBTM to SVGD [19] because without additional tricks such as momentum and cherry-picking the kernel bandwidth we were unable to obtain decent performance from SVGD; see tables 1 and 2. We emphasize that while the numerical experiments presented below would be trivial in the context of DGM, they are quite challenging in our context. We use a three-layer ResNet of width 128 for 1D experiments, and increase it to five layers for 2D experiments. For the MNIST experiment we use an 8-layer U-Net. Other hyperparameters were found empirically, and are specified in the Appendix. All experiments were done in JAX on a single TITAN X Pascal 12GB GPU. The runtime of the longest experiment is 90 minutes. The code repository is Vilin97/SBTM-sampling.

5.1 Log-concave target

Table 1: KL divergence (↓) between the sample and the target π = N(0,1), using time step 0.002 and final time 2.5. SBTM exhibits better sample efficiency, likely due to determinism.

                    sample size
Method         100      300      1000     3000     10000
SBTM (ours)    0.013    0.0032   0.0019   0.0020   0.00099
SDE            0.020    0.022    0.0094   0.0036   0.0012
SVGD           0.33     0.23     0.16     0.20     0.29

First, we consider an example that admits an analytic solution f_t = N(0, 1 − e^{−2(t+0.1)}), which allows us to compute the true entropy dissipation and the L² distance to the true solution. We use sample size n = 1000. Moreover, SBTM exhibits the optimal relative entropy dissipation rate, as evidenced by the close alignment of the orange, green, and red lines in the left panel of figure 3.
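The analytic solution quoted above makes the expected entropy decay explicit. The short sketch below (my own check, not from the paper) evaluates KL(N(0, v_t)‖N(0, 1)) = (v_t − 1 − ln v_t)/2 along v_t = 1 − e^{−2(t+0.1)} and verifies that the decay is monotone and asymptotically exponential.

```python
import math

def v_t(t):
    # analytic gradient-flow variance for pi = N(0, 1) in the experiment above
    return 1.0 - math.exp(-2.0 * (t + 0.1))

def kl_gaussian(v):
    # KL( N(0, v) || N(0, 1) ) = (v - 1 - ln v) / 2
    return 0.5 * (v - 1.0 - math.log(v))

kls = [kl_gaussian(v_t(0.25 * k)) for k in range(11)]  # t = 0, 0.25, ..., 2.5
```

For small 1 − v_t the divergence behaves like e^{−4(t+0.1)}/4, so successive values at late times shrink by a factor close to e^{−2} per half time unit.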
Table 1 shows that SBTM achieves smaller KL divergence than the SDE. The deterministic nature of SBTM allows it to achieve better sample efficiency than the noisy SDE. While the marginal density of X¹_t is the same between SBTM and SDE, the whole particle ensemble X¹_t, …, Xⁿ_t has different distributions. A more detailed investigation of the ensemble properties of SBTM is left as future work.

Figure 3: Left: entropy dissipation of SBTM (ours) and SDE (stochastic). SBTM approximates the entropy decay rate perfectly, while the SDE is extremely noisy. Right: L² error to the true solution. SBTM produces lower error with a much smoother trajectory.

5.2 Gaussian mixture

Table 2: KL divergence (↓) between the sample and the target π = (1/4)N(−2,1) + (3/4)N(2,1), using time step 0.01 and final time 10. Top: non-annealed. Bottom: 100 particles.

                    sample size
Method         100      300      1000     3000     10000
SBTM (ours)    0.022    0.018    0.0082   0.0082   0.0036
SDE            0.029    0.013    0.014    0.0068   0.0043
SVGD           2.8      1.4      2.4      2.1      2.0

annealing      Non-annealed   Geometric   Dilation
SBTM (ours)    0.022          0.058       0.037
SDE            0.029          0.060       0.062
SVGD           2.800          0.470       0.480

This and all other examples do not admit an analytic solution,
but we can still compare the quality of the final sample as well as the trajectories of the SDE and SBTM. We use f_0 = N(0,1) here and in further experiments. Here we sample from the Gaussian mixture π = (1/4)N(−2,1) + (3/4)N(2,1) with n = 1000. As evidenced by the change of slope at t = 2.5 in figure 4, the underlying Markov chain enters metastability, which plagues convergence to non-log-concave targets. Here the non-log-concavity is mild, so the process still converges in a reasonable time frame. Table 2 shows that SBTM achieves competitive KL divergence.

Figure 4: Left: KL divergence of SBTM (ours) and SDE (stochastic) over time. SBTM exhibits smoother convergence. Right: entropy dissipation of SBTM and SDE. SBTM approximates the entropy decay rate perfectly, while the SDE is noisy.

Figure 6: Top: SBTM (ours). Bottom: SDE [2]. SBTM separates into modes early on, compared to the SDE.

5.3 Gaussian mixture with dilation annealing

Figure 5: The estimate in (4.4) holds empirically, indicating good score approximation.

To sample from a Gaussian mixture with 16 well-spaced modes we employ the annealing schedule ρ_t(x) = π((T/t)x) from [2]. Figure 6 shows the densities at different time points, and figure 5 shows the entropy dissipation as in (4.4). This is a challenging example due to the extreme non-log-concavity of the target. With the large sample size of 20,000 and 10,000 steps, this experiment took only 90 minutes on a single TITAN X GPU, leveraging the linear scaling of the memory and time complexity of SBTM.

5.4 High-dimension experiments

Figure 7: Sampled MNIST digits using f_0 = N(0,1), sample size 64, time step 10⁻³, 30 steps, variable amount of intermediate training. Digits generated by SBTM are competitive in visual quality.

To evaluate the scaling of our method to high-dimensional settings, we apply SBTM to produce MNIST digits [17] (CC-BY-SA 3.0 license), which corresponds to sampling a 784-dimensional distribution.
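A convenient property of annealing is that the intermediate drift needs no new training: for the geometric path ρ_β ∝ f_0^{1−β} π^β the normalizing constant drops under the gradient of the log, so the annealed score is a simple blend of two known scores. The sketch below is my own illustration for a 1D mixture target (the specific weights and means are assumptions, not the paper's 16-mode setup).

```python
import numpy as np

def score_f0(x):
    # reference f0 = N(0, 1)
    return -x

def score_pi(x):
    # target pi = 0.25 N(-2, 1) + 0.75 N(2, 1); gradient of the log density
    w = np.array([0.25, 0.75])
    mu = np.array([-2.0, 2.0])
    g = w * np.exp(-0.5 * (x - mu) ** 2)     # unnormalized component densities
    return np.sum(g * -(x - mu)) / np.sum(g)

def annealed_score(x, beta):
    # geometric path rho_beta ~ f0^(1-beta) pi^beta; normalizers vanish in grad-log
    return (1.0 - beta) * score_f0(x) + beta * score_pi(x)
```

At β = 0 this reduces to the reference score, at β = 1 to the target score, and intermediate β values interpolate the drift that "guides" f_t between the two.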
We sample directly in pixel space, without using a VAE encoder, to purposefully maintain the high dimensionality of the data. To obtain ∇log π we train a U-Net with score matching. Figure 7 shows that SBTM produces high-quality results, comparable to the SDE sample. Figure 8 shows that SBTM maintains exploration in high-dimensional spaces. The amount of training controls the strength of interaction between particles. With sufficient training, particles repel from each other, providing deterministic exploration. To confirm that ∇log f_t is being meaningfully learned, we plot the cosine similarity across time in figure 9. More training epochs per step leads to faster learning of the target score ∇log π.

Figure 8: Starting from the same initial point, SBTM produces distinct sample trajectories depending on the training schedule. The amount of training controls the strength of interaction between particles. Top to bottom: SBTM without training (equivalent to gradient ascent on log π), SBTM with a small amount of training, SBTM with a large amount of training, and finally the SDE (Langevin).

Figure 9: Cosine similarity between the true score ∇log π and the learned score ∇log f_t over simulation time. Even with very little training the model learns the score well.

6 Conclusion

In this work we use
tools and intuition from diffusion generative modeling to tackle the harder problem of sampling π given only ∇log π but no samples from π. Our method allows for deterministic sampling with smooth trajectories and a provably optimal rate of entropy dissipation. Additionally, access to the learned score ∇log f_t allows for the computation of the relative Fisher information to estimate convergence, making the dynamics more interpretable. Our method integrates well with annealed dynamics to sample from challenging non-log-concave densities. Finally, our method scales well both in the sample size, with O(n) complexity, and in dimension, as demonstrated in the 784-dimensional example.

References

[1] Nicholas M. Boffi and Eric Vanden-Eijnden. Probability flow solution of the Fokker–Planck equation. Machine Learning: Science and Technology, 4(3):035012, 2023.
[2] Omar Chehab and Anna Korba. A practical diffusion path for sampling. arXiv preprint arXiv:2406.14040, 2024.
[3] Jannis Chemseddine, Christian Wald, Richard Duong, and Gabriele Steidl. Neural sampling from Boltzmann densities: Fisher–Rao curves in the Wasserstein geometry. arXiv preprint arXiv:2410.03282, 2024.
[4] Sitan Chen, Sinho Chewi, Holden Lee, Yuanzhi Li, Jianfeng Lu, and Adil Salim. The probability flow ODE is provably fast. Advances in Neural Information Processing Systems, 36, 2024.
[5] Sinho Chewi. Log-concave sampling. Book draft, available at https://chewisinho.github.io, 2023.
[6] Miguel Corrales, Sean Berti, Bertrand Denel, Paul Williamson, Mattia Aleardi, and Matteo Ravasi. Annealed Stein variational gradient descent for improved uncertainty estimation in full-waveform inversion. Geophysical Journal International, 241(2):1088–1113, 2025.
[7] Arnak S. Dalalyan and Avetik G. Karagulyan. User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. arXiv preprint arXiv:1710.00095, 2017.
[8] A. Duncan, N. Nuesken, and L. Szpruch. On the geometry of Stein variational gradient descent.
arXiv preprint arXiv:1912.00894, 2019.
[9] Karthik Elamvazhuthi, Xuechen Zhang, Matthew Jacobs, Samet Oymak, and Fabio Pasqualetti. A score-based deterministic diffusion algorithm with smooth scores for general distributions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 11866–11873, 2024.
[10] Yuchen He and Chihao Zhang. On the query complexity of sampling from non-log-concave distributions. arXiv preprint arXiv:2502.06200, 2025.
[11] Daniel Zhengyu Huang, Jiaoyang Huang, and Zhengjiang Lin. Convergence analysis of probability flow ODE for score-based generative models. arXiv preprint arXiv:2404.09730, 2024.
[12] Yan Huang and Li Wang. A score-based particle method for homogeneous Landau equation. arXiv preprint arXiv:2405.05187, 2024.
[13] Aapo Hyvärinen. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(24):695–709, 2005. URL http://jmlr.org/papers/v6/hyvarinen05a.html.
[14] Vasily Ilin, Jingwei Hu, and Zhenfu Wang. Transport based particle methods for the Fokker–Planck–Landau equation. arXiv preprint arXiv:2405.10392, 2024.
[15] Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the Fokker–Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1–17, 1998.
[16] Kedar Karhadkar, Michael Murray, and Guido Montúfar. Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension. arXiv preprint arXiv:2405.14630, 2024.
[17] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[18] Qiang Liu. Stein variational gradient descent as gradient flow. Advances in Neural Information Processing
https://arxiv.org/abs/2504.18130v2
Systems, 30, 2017.
[19] Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. Advances in Neural Information Processing Systems, 29, 2016.
[20] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. Advances in Neural Information Processing Systems, 35:5775–5787, 2022.
[21] Jianfeng Lu, Yue Wu, and Yang Xiang. Score-based transport modeling for mean-field Fokker–Planck equations. Journal of Computational Physics, 503:112859, 2024.
[22] Bálint Máté and François Fleuret. Learning interpolations between Boltzmann densities. arXiv preprint arXiv:2301.07388, 2023.
[23] Radford M. Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001.
[24] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
[25] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019.
[26] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.
[27] Santosh Vempala and Andre Wibisono. Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices. Advances in Neural Information Processing Systems, 32, 2019.
[28] Andre Wibisono. Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. In Conference on Learning Theory, pp. 2093–3027. PMLR, 2018.
[29] Chen Xu, Xiuyuan Cheng, and Yao Xie. Normalizing flow neural networks by JKO scheme. arXiv preprint arXiv:2212.14424, 2022.
[30] Jingwei Zhuo, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, and Bo Zhang. Message passing Stein variational gradient descent.
In International Conference on Machine Learning, pp. 6018–6027. PMLR, 2018.

7 Appendix

7.1 Pseudocode

For ease of reproducibility, we provide the pseudocode for SBTM:

SBTM(f0, n, T, Δt, η)
 1   sample {X_i}_{i=1}^n i.i.d. from f0
 2   initialize NN: s_θ ≈ argmin L(s, f0)
 3   t := 0
 4   while t < T
 5       t := t + Δt
 6       for k = 1, ..., K                       (flow s)
 7           θ := θ − η ∇_θ L(s_θ, f_t)
 8       for i = 1, ..., n                       (flow f)
 9           X_i := X_i + Δt (∇log π(X_i) − s_θ(X_i))
10   output particle locations X_1, ..., X_n

7.2 Hyperparameters

For all experiments we use mini-batch gradient descent with the AdamW optimizer. For low-dimensional experiments we use learning rate 5·10⁻⁴, batch size 400, and 10 gradient descent steps per simulation step. For the MNIST experiment we use learning rate 10⁻³ and batch size 64.

7.3 Proofs

We restate Theorem 4.8.

Theorem 7.1 (Entropy Dissipation in Annealed Dynamics). If π_t is any time-dependent density, s_t any time-dependent vector field, and

    dX_t/dt = ∇log π_t(X_t) − s_t(X_t),

then

    −(d/dt) KL(f_t ‖ π) = E_{f_t}⟨∇log(f_t/π_t), ∇log(f_t/π)⟩ − E_{f_t}⟨∇log f_t − s_t, ∇log(f_t/π)⟩.

In particular, if L(s_t, f_t) = 0, then

    −(d/dt) KL(f_t ‖ π) = E_{f_t}⟨∇log(f_t/π), ∇log(f_t/π_t)⟩.

Proof. If f_t is the density of X_t, where X_t satisfies

    (d/dt) X_t = ∇log π_t(X_t) − s_t(X_t),

then f_t satisfies the Fokker–Planck equation

    βˆ‚_t f_t + ∇·(f_t (∇log π_t − s_t)) = 0.

Thus, we may explicitly compute the relative entropy dissipation rate as

    (d/dt) ∫_{R^d} f_t log(f_t/π) dx = ∫_{R^d} βˆ‚_t f_t (log f_t − log π) dx +
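The SBTM loop above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's implementation: we replace the neural network with a one-dimensional linear score model s(x) = a(x − mean) + b, train it with plain gradient descent on the implicit (Hyvärinen) score-matching objective E[s(X)² + 2 s′(X)] (which avoids needing ∇log f directly), and use a toy target π = N(0, 1) and initial density f0 = N(3, 0.25) of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: pi = N(0, 1), so grad log pi(x) = -x.
grad_log_pi = lambda x: -x

# Score model s(x) = a * (x - mean) + b, trained on the implicit score-matching
# loss J(a, b) = mean((a*xc + b)^2) + 2a, where xc = x - mean(x).
def train(a, b, x, lr=0.1, steps=50):
    xc = x - x.mean()
    m2 = np.mean(xc ** 2)
    for _ in range(steps):
        a -= lr * 2.0 * (a * m2 + 1.0)   # dJ/da
        b -= lr * 2.0 * b                # dJ/db
    return a, b

n, dt, T = 2000, 0.02, 3.0
x = 3.0 + 0.5 * rng.standard_normal(n)   # f0 = N(3, 0.25)
a, b = train(0.0, 0.0, x, steps=500)     # initial fit (line 2 of the pseudocode)

t = 0.0
while t < T:
    t += dt
    a, b = train(a, b, x, steps=20)      # flow s
    s = a * (x - x.mean()) + b
    x = x + dt * (grad_log_pi(x) - s)    # flow f

print(round(x.mean(), 2), round(x.std(), 2))
```

For this Gaussian pair the linear model is exact, so the particles drift deterministically toward the target: the sample mean decays like e^{−t} and the sample standard deviation relaxes toward 1.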
∫_{R^d} f_t βˆ‚_t log f_t dx = −∫_{R^d} ⟨s_t − ∇log π_t, ∇log f_t − ∇log π⟩ f_t dx,

where we used integration by parts and that ∫_{R^d} f_t βˆ‚_t log f_t dx is zero for the last equality.

We can now specialize the above to prove 4.1 and 4.2.

Proofs of 4.1 and 4.2. By choosing π_t = π in the proof of Theorem 7.1, we obtain

    (d/dt) ∫_{R^d} f_t log(f_t/π) dx = −∫_{R^d} ⟨s_t − ∇log π, ∇log f_t − ∇log π⟩ f_t dx.

By taking s_t = ∇log f_t exactly, we recover the proof of the classical result 4.1. Otherwise, adding and subtracting ∇log f_t in the first argument of the inner product, we get

    (d/dt) ∫_{R^d} f_t log(f_t/π) dx
        = −∫_{R^d} ‖∇log f_t − ∇log π‖² f_t dx + ∫_{R^d} ⟨∇log f_t − s_t, ∇log f_t − ∇log π⟩ f_t dx
        ≀ −(1/2) ∫_{R^d} ‖∇log f_t − ∇log π‖² f_t dx + (1/2) ∫_{R^d} ‖∇log f_t − s_t‖² f_t dx.

The last line is by Young's inequality.

We restate Theorem 4.4.

Theorem 7.2 (Sufficient training guarantees bounded loss). Assume that f_t is any time-dependent density, X_t^1, ..., X_t^n are distributed according to f_t, and the neural network s = s_t^θ is trained with gradient descent

    dΞΈ_t/dt = −η ∇_{ΞΈ_t} L_n(s_t, f_t),    L_n(s_t, f_t) := (1/n) Ξ£_{i=1}^n ‖s(X_t^i) − ∇log f(X_t^i)‖²,

and is such that the Neural Tangent Kernel (NTK)

    H^{i,j}_{Ξ±,Ξ²}(t) = Ξ£_{k=1}^N βˆ‚_{ΞΈ_k} s_t^Ξ±(X_t^i) βˆ‚_{ΞΈ_k} s_t^Ξ²(X_t^j),    s(x) = (s¹(x), ..., s^d(x)),

is lower bounded by Ξ» = Ξ»(t) > 0, i.e. ‖Hv‖₂ β‰₯ Ξ» ‖v‖₂. Then as long as

    Ξ·(t) β‰₯ (n/Ξ») (βˆ‚/βˆ‚Ο„)|_{Ο„=t} log L_n(s_t, f_Ο„),

the loss L_n(s_t, f_t) is non-increasing, i.e. (d/dt) L_n(s_t, f_t) ≀ 0.

Proof. We start with an elementary computation based on the chain rule:

    (d/dt) L(s_t, f_t) = (βˆ‚/βˆ‚Ο„)|_{Ο„=t} L(s_Ο„, f_t) + (βˆ‚/βˆ‚Ο„)|_{Ο„=t} L(s_t, f_Ο„).

For the first term,

    (βˆ‚/βˆ‚Ο„)|_{Ο„=t} L(s_Ο„, f_t) = ∇_ΞΈ L Β· (dΞΈ/dt) = −η Ξ£_{k=1}^N (βˆ‚_{ΞΈ_k} L)²
        = −(Ξ·/n²) Ξ£_{k=1}^N ( Ξ£_{i=1}^n [s(X^i) − ∇log f(X^i)] Β· βˆ‚_{ΞΈ_k} s(X^i) )²
        = −(Ξ·/n²) Ξ£_{i,j=1}^n Ξ£_{Ξ±,Ξ²=1}^d [s^Ξ±(X^i) − βˆ‚_Ξ± log f(X^i)] H^{i,j}_{Ξ±,Ξ²} [s^Ξ²(X^j) − βˆ‚_Ξ² log f(X^j)]
        = −(Ξ·/n²) ‖s − ∇log f‖²_H,

where

    H^{i,j}_{Ξ±,Ξ²} = Ξ£_{k=1}^N βˆ‚_{ΞΈ_k} s^Ξ±(X^i) βˆ‚_{ΞΈ_k} s^Ξ²(X^j)

is called the Neural Tangent Kernel. Its lowest eigenvalue determines the convergence speed of gradient descent. If ‖Hv‖₂ β‰₯ Ξ» ‖v‖₂, then

    (d/dt) L(s_t, f_t) = (βˆ‚/βˆ‚Ο„)|_{Ο„=t} L(s_Ο„, f_t) + (βˆ‚/βˆ‚Ο„)|_{Ο„=t} L(s_t, f_Ο„)
        ≀ −(Ξ·Ξ»/n) L(s_t, f_t) + (βˆ‚/βˆ‚Ο„)|_{Ο„=t} L(s_t, f_Ο„).

Thus, the loss is non-increasing if Ξ·(t) β‰₯ (n/Ξ») (βˆ‚/βˆ‚Ο„)|_{Ο„=t} log L(s_t, f_Ο„).
Figure 10: SBTM exhibits smooth and monotone KL divergence to the target, matching the ground truth closely.

7.4 KL divergence data

Below is the KL divergence between the samples and target density for three one-dimensional experiments. SBTM usually achieves the smallest KL divergence in the non-annealed setting. SVGD performs very poorly.

Table 3: KL divergence (↓) between the samples and the target π = N(0, 1) for the analytic setting using non-annealed dynamics. Simulation used time step Δt = 0.002, final time T = 2.5. Columns indicate the number of particles N.

sample size N    100      300      1000     3000     10000
SBTM (ours)      0.013    0.0032   0.0019   0.0020   0.00099
SDE              0.020    0.022    0.0094   0.0036   0.0012
SVGD             0.33     0.23     0.16     0.20     0.29

Table 4: KL divergence (↓) between the sample and the target π = (1/4)N(−4, 1) + (3/4)N(4, 1) for gaussians_far using non-annealed dynamics. Simulation used time step Δt = 0.01, final time T = 10. Columns indicate the number of particles N.

sample size N    100      300      1000     3000     10000
SBTM (ours)      0.24     0.19     0.11     0.089    0.084
SDE              0.24     0.18     0.14     0.094    0.084
SVGD             4.8      3.3      4.7      5.3      5.5

Table 5: Same setup as Table 4, but using geometric annealing.

sample size N    100      300      1000     3000     10000
SBTM (ours)      0.31     0.21     0.19     0.14     0.13
SDE              0.28     0.18     0.15     0.12     0.12
SVGD             3.2      3.7      4.4      4.6      4.4

Table 6: Same setup as Table 4, but using dilation annealing.

sample size N    100      300      1000     3000     10000
SBTM (ours)      0.34     0.25     0.19     0.15     0.13
SDE              0.25     0.19     0.14     0.13     0.13
SVGD             0.96     1.4      1.1      1.3      1.2

Table 7: KL divergence (↓) between the sample and the target π = (1/4)N(−2, 1) + (3/4)N(2, 1) for gaussians_near using non-annealed dynamics. Simulation used time step Δt = 0.01, final time T = 10. Columns indicate the number of particles N.

sample size N    100      300      1000     3000     10000
SBTM (ours)      0.022    0.018    0.0082   0.0082   0.0036
SDE              0.029    0.013    0.014    0.0068   0.0043
SVGD             2.8      1.4      2.4      2.1      2.0

Table 8: Same setup as Table 7, but using geometric annealing.

sample size N    100      300      1000     3000     10000
SBTM (ours)      0.058    0.032    0.037    0.025    0.031
SDE              0.060    0.032    0.029    0.029    0.028
SVGD             0.47     0.69     0.99     1.1      1.0

Table 9: Same setup as Table 7, but using dilation annealing.

sample size N    100      300      1000     3000     10000
SBTM (ours)      0.037    0.12     0.10     0.096    0.11
SDE              0.062    0.031    0.080    0.054    0.061
SVGD             0.48     0.32     1.00     1.10     0.99

7.5 Additional experiments

7.5.1 Noisy Circle

Here we compare the SDE and SBTM samples from the "noisy circle" density

    π ∝ exp( −(‖x − (4, 0)‖ − 1)² / 0.08 ).

While the SDE samples fill the vacuum region in Figure 11 around the center due to Brownian noise, the SBTM samples leave it blank.

Figure 11: Sampling from the noisy circle distribution with SDE (top) and SBTM (bottom).

7.5.2 Gaussian Mixture with Geometric Annealing

Here we use the classical geometric annealing to sample from a mixture of Gaussians whose modes are 8 units apart, making it very non-log-concave:

    π = (1/4)N(−4, 1) + (3/4)N(4, 1),    ∇log π_t = (1 − t) ∇log f_0 + t ∇log π.

The match between the orange and blue lines in the right panel of Figure 12 indirectly indicates very good score approximation, as per (4.4).

Figure 12: Left: reconstructed density of SBTM. It approximates the solution well despite the non-log-concavity. Right: entropy dissipation of SBTM (ours) and SDE (stochastic).
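The annealed score used in section 7.5.2 is a simple convex combination of two scores and is easy to compute in closed form for the gaussians_far target. The sketch below is our own illustration (the choice f0 = N(0, 4) is ours; the mixture and the interpolation formula are from the section above), with the mixture score computed via numerically stable responsibilities.

```python
import numpy as np

# Annealed score: grad log pi_t(x) = (1 - t) * grad log f0(x) + t * grad log pi(x),
# for pi = (1/4) N(-4, 1) + (3/4) N(4, 1) and an illustrative f0 = N(0, 4).

def score_f0(x, var0=4.0):
    return -x / var0

def score_mixture(x, w=(0.25, 0.75), mu=(-4.0, 4.0), var=1.0):
    # grad log pi for pi = sum_j w_j N(mu_j, var): responsibility-weighted component scores
    logp = np.stack([np.log(wj) - (x - mj) ** 2 / (2 * var) for wj, mj in zip(w, mu)])
    r = np.exp(logp - logp.max(axis=0))      # stable (unnormalized) responsibilities
    r /= r.sum(axis=0)
    return sum(rj * (mj - x) / var for rj, mj in zip(r, mu))

def score_annealed(x, t):
    return (1.0 - t) * score_f0(x) + t * score_mixture(x)

x = np.array([-4.0, 0.0, 4.0, 10.0])
print(score_annealed(x, 0.0))   # reduces to the f0 score
print(score_annealed(x, 1.0))   # full mixture score
```

At t = 0 this is exactly the initial score −x/4; at t = 1 it recovers the mixture score, e.g. far to the right of both modes it is approximately −(x − 4), the pull of the nearest component.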
SBTM approximates the entropy decay rate perfectly even in annealed dynamics.

NeurIPS Paper Checklist

The checklist is designed to encourage best practices for responsible machine learning research, addressing issues of reproducibility, transparency, research ethics, and societal impact. Do not remove the checklist: The papers not including the checklist will be desk rejected. The checklist should follow the references and follow the (optional) supplemental material. The checklist does NOT count towards the page limit.

Please read the checklist guidelines carefully for information on how to answer these questions. For each question in the checklist:
β€’ You should answer [Yes], [No], or [NA].
β€’ [NA] means either that the question is Not Applicable for that particular paper or the relevant information is Not Available.
β€’ Please provide a short (1–2 sentence) justification right after your answer (even for NA).

The checklist answers are an integral part of your paper submission. They are visible to the reviewers, area chairs, senior area chairs, and ethics reviewers. You will be asked to also include it (after eventual revisions) with the final version of
your paper, and its final version will be published with the paper. The reviewers of your paper will be asked to use the checklist as one of the factors in their evaluation. While "[Yes]" is generally preferable to "[No]", it is perfectly acceptable to answer "[No]" provided a proper justification is given (e.g., "error bars are not reported because it would be too computationally expensive" or "we were unable to find the license for the dataset we used"). In general, answering "[No]" or "[NA]" is not grounds for rejection. While the questions are phrased in a binary way, we acknowledge that the true answer is often more nuanced, so please just use your best judgment and write a justification to elaborate. All supporting evidence can appear either in the main paper or the supplemental material, provided in appendix. If you answer [Yes] to a question, in the justification please point to the section(s) where related material for the question can be found.

IMPORTANT, please:
β€’ Delete this instruction block, but keep the section heading "NeurIPS Paper Checklist",
β€’ Keep the checklist subsection headings, questions/answers and guidelines below.
β€’ Do not modify the questions and only use the provided macros for your answers.

1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes]
Justification: Sections 3 and 4 support the theoretical claims made in the abstract. Section 5 supports the empirical claims.
Guidelines:
β€’ The answer NA means that the abstract and introduction do not include the claims made in the paper.
β€’ The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
β€’ The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
β€’ It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

2. Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer: [Yes]
Justification: The proposed method matches the Langevin dynamics baseline in computational complexity and accuracy, so the main limitation of the proposed method is the slow mixing of the Wasserstein gradient flow when used on non-log-concave targets. We discuss this limitation in Section 4.3. We also discuss the limitations of the theoretical results in Remark 4.7.
Guidelines:
β€’ The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
β€’ The authors are encouraged to create a separate "Limitations" section in their paper.
β€’ The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and
what the implications would be.
β€’ The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
β€’ The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
β€’ The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
β€’ If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
β€’ While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

3. Theory assumptions and proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer: [Yes]
Justification: The assumptions are fully stated in the theorems themselves. The detailed proofs are written in full in the appendix.
Guidelines:
β€’ The answer NA means that the paper does not include theoretical results.
β€’ All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
β€’ All assumptions should be clearly stated or referenced in the statement of any theorems.
β€’ The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
β€’ Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
β€’ Theorems and Lemmas that the proof relies upon should be properly referenced.

4. Experimental result reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [Yes]
Justification: Yes, the initial condition, time step, final time, and the target density are provided for each experiment in Section 5. The detailed pseudocode of the proposed method is given in the appendix. The NN architecture and code repository are at the beginning of Section 5.
Guidelines:
β€’ The answer NA means that the paper does not include experiments.
β€’ If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the
code and data are provided or not.
β€’ If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
β€’ Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
β€’ While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility.
In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

5. Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer: [Yes]
Justification: We include the link to the GitHub repository with the experiments. The only dataset we used is MNIST, which is publicly available.
Guidelines:
β€’ The answer NA means that paper does not include experiments requiring code.
β€’ Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
β€’ While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
β€’ The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
β€’ The authors should provide instructions on data access and preparation, including how to access the
raw data, preprocessed data, intermediate data, and generated data, etc.
β€’ The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
β€’ At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
β€’ Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

6. Experimental setting/details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [Yes]
Justification: Each experiment contains the initial density, target density, time step, and the number of time steps. The hyperparameters are in the appendix. There is no train/test split in this work.
Guidelines:
β€’ The answer NA means that the paper does not include experiments.
β€’ The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
β€’ The full details can be provided either with the code, in appendix, or as supplemental material.

7. Experiment statistical significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer: [No]
Justification: We use very high sample sizes of 1000 and 10,000 in the low-dimensional experiments. Adding error bars to the plots would not convey valuable information, because we are interested in the overall shape rather than the precise values.
Guidelines:
β€’ The answer NA means that the paper does not include experiments.
β€’ The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
β€’ The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
β€’ The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
β€’ The assumptions made should be given (e.g., Normally distributed errors).
β€’ It should be clear whether the error bar is the standard deviation or the standard error of the mean.
β€’ It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
β€’ For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
β€’ If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

8. Experiments compute resources
Question: For each experiment, does the paper
provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer: [Yes]
Justification: We describe this at the beginning of Section 5.
Guidelines:
β€’ The answer NA means that the paper does not include experiments.
β€’ The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
β€’ The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
β€’ The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

9. Code of ethics
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
Answer: [Yes]
Justification: We read the Code of Ethics and confirm that our research conforms with it in every respect.
Guidelines:
β€’ The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
β€’ If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
β€’ The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer: [NA]
Justification: There are no potential societal impacts of our work which should be specifically highlighted.
Guidelines:
β€’ The answer NA means that there is no societal impact of the work performed.
β€’ If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
β€’ Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
β€’ The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
β€’ The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
β€’ If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to
monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [NA]
Justification: We do not anticipate any potential misuse of this work.
Guidelines:
β€’ The answer NA means that the paper poses no such risks.
β€’ Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
β€’ Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
β€’ We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer: [Yes]
Justification: We include the license of the single dataset (MNIST) we used.
Guidelines:
β€’ The answer NA means that the paper does not use existing assets.
β€’ The authors should cite the original paper that produced the code package or dataset.
β€’ The authors should state which version of the asset is used and, if possible, include a URL.
β€’ The name of the license (e.g., CC-BY 4.0) should be included for each asset.
β€’ For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
β€’ If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
β€’ For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
β€’ If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [NA]
Justification: We do not release new assets.
Guidelines:
β€’ The answer NA means that the paper does not release new assets.
β€’ Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
β€’ The paper should discuss whether and how consent was obtained from people whose asset is used.
β€’ At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

14. Crowdsourcing and research with human subjects
Question: For crowdsourcing experiments and research with human subjects, does the
paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [NA]
Justification: Our work does not involve crowdsourcing or research with human subjects.
Guidelines:
β€’ The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
β€’ Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
β€’ According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional review board (IRB) approvals or equivalent for research with human subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [NA]
Justification: Our work does not involve crowdsourcing or research with human subjects.
Guidelines:
β€’ The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
β€’ Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
β€’ We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
β€’ For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
16. Declaration of LLM usage
Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
Answer: [NA]
Justification: This work does not involve LLMs beyond text and code editing.
Guidelines:
β€’ The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
β€’ Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
|
https://arxiv.org/abs/2504.18130v2
|
arXiv:2504.18139v1 [math.PR] 25 Apr 2025

Kalman-Langevin dynamics: exponential convergence, particle approximation and numerical approximation

Axel Ringh, Akash Sharma

Abstract

Langevin dynamics has found a large number of applications in sampling, optimization and estimation. Preconditioning the gradient in the dynamics with the covariance (an idea that originated in literature related to solving estimation and inverse problems using Kalman techniques) results in a mean-field (McKean-Vlasov) SDE. We demonstrate exponential convergence of the time marginal law of the mean-field SDE to the Gibbs measure with non-Gaussian potentials. This extends previous results, obtained in the Gaussian setting, to a broader class of potential functions. We also establish uniform in time bounds on all moments and convergence in p-Wasserstein distance. Furthermore, we show convergence of a weak particle approximation, which avoids computing the square root of the empirical covariance matrix, to the mean-field limit. Finally, we prove that an explicit numerical scheme for approximating the particle dynamics converges, uniformly in the number of particles, to its continuous-time limit, addressing non-global Lipschitzness in the measure.

Keywords: McKean-Vlasov stochastic differential equations, interacting particle systems, strongly convergent numerical schemes.

AMS Classification: 65C30, 60H35, 60H10, 37H10, 35Q84.

1 Introduction

Sampling and optimization techniques are, in many cases, the main ingredients of solutions to problems in applied mathematics and computational statistics. For instance, in molecular dynamics, sampling techniques focus on exploring the potential configurations of a molecule, while optimization comes into play when seeking the configuration with the minimum energy. Additionally, many optimization problems can be formulated as sampling problems within a Bayesian framework to account for uncertainty.
In sampling, one looks for samples from the measure defined by

\mu(dx) = \frac{1}{Z} e^{-\beta U(x)}\,dx, \qquad (1.1)

where $Z = \int_{\mathbb{R}^d} e^{-\beta U(x)}\,dx$ is the normalizing constant, $\beta > 0$ and $U : \mathbb{R}^d \to \mathbb{R}$. Connecting this to optimization, under relatively mild conditions on $U$ and for increasing $\beta$, the measure concentrates around global minima of $U$ [Hwa80, HRS24].

For sampling purposes, in addition to traditional Markov chain Monte Carlo methods, overdamped Langevin dynamics, which finds its origin in statistical physics (see, e.g., [RDF78]), is one popular choice of method. The overdamped Langevin dynamics (also known as Smoluchowski dynamics) driven by Brownian noise $W(t)$ is given by

dX(t) = -\nabla U(X(t))\,dt + \sqrt{2/\beta}\,dW(t), \quad X(0) \in \mathbb{R}^d, \qquad (1.2)

and it leaves the Gibbs measure (1.1) invariant under appropriate conditions on $U$. The dynamics in (1.2) has also been used as an optimization method exploiting annealing techniques with $\beta \to \infty$ (see [CHS87]), and as a starting point for developing new sampling methods.

Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg. axelri@chalmers.se, akashs@chalmers.se.

[Figure 1: The difference in behavior of overdamped Langevin dynamics and Kalman-Langevin dynamics for the potential function $U = 0.26(x^2+y^2) - 0.48xy$ at time $T = 1$, with 2000 particles uniformly initialized in $[-15,15]^2$ and $\beta = 1$.]

On the other hand, the Kalman filter introduced in [Eve94] for state estimation has also found a large number of applications in applied sciences (for example in oceanography [EVL96], in reservoir modeling [ANO+09],
|
https://arxiv.org/abs/2504.18139v1
|
and in weather forecasting [HM01]) for data assimilation and inverse problems. Inspired by the Kalman filter, an ensemble Kalman iterative procedure for solving inverse problems, called ensemble Kalman inversion (EKI), was proposed in [ILS13], and the corresponding continuous-time limit, which is an interacting system of SDEs, was derived in [SS17]. The authors in [BSWW19] established well-posedness and convergence to the ground truth of the SDEs underlying EKI. The convergence to the mean-field limit was studied in [DL21a] for the EKI model. For the reflected EKI model, the well-posedness and convergence of the particle system to the mean-field limit in a non-convex domain setting is established in [HST25]. In this context, it is also worth noting other recent works on optimization and sampling using interacting particle systems demonstrating better capabilities to deal with the non-convexity and anisotropy of energy landscapes, with possible derivative-free implementation, such as [CCTT18, LMW18, KS19, CST20, LWZ22, MRSS25].

Inspired by the continuous-time limit of the ensemble Kalman procedure with noise, [GIHLS20] proposed a covariance preconditioned overdamped Langevin dynamics, which is a non-linear (in the sense of McKean) Markov process. This covariance preconditioned Langevin dynamics is driven by the following McKean-Vlasov SDEs:

dX(t) = -\Sigma(\mu_t)\nabla U(X(t))\,dt + \sqrt{\frac{2\Sigma(\mu_t)}{\beta}}\,dW(t), \qquad (1.3a)

with

\Sigma(\mu_t) = \int_{\mathbb{R}^d} (x - M(\mu_t))(x - M(\mu_t))^\top\,d\mu_t(x), \qquad (1.3b)

M(\mu_t) = \int_{\mathbb{R}^d} x\,d\mu_t(x), \qquad (1.3c)

where $W(t)$ is a $d$-dimensional Brownian motion and $\mu_t := \mathcal{L}^X_t$ is the time marginal law of $X$. This provides a new sampling method which portrays better performance in capturing anisotropic energy landscapes, as illustrated in Figure 1. Furthermore, the Kalman approximation of the gradient (see Remark 3.1) provides a derivative-free technique for Laplace approximation of the targeted distribution.
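As a concrete point of reference, the classical overdamped Langevin dynamics (1.2) can be simulated with a plain Euler-Maruyama discretization. The sketch below is purely illustrative and not code from the paper; the potential, step size, and particle count are our own choices. It samples a standard Gaussian by taking $U(x) = |x|^2/2$ with $\beta = 1$.

```python
import numpy as np

def langevin_em(grad_U, x0, beta=1.0, dt=1e-2, n_steps=2000, seed=0):
    """Euler-Maruyama for dX = -grad U(X) dt + sqrt(2/beta) dW,
    applied to an ensemble x0 of shape (N, d) of independent chains."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    step_noise = np.sqrt(2.0 * dt / beta)
    for _ in range(n_steps):
        x = x - grad_U(x) * dt + step_noise * rng.standard_normal(x.shape)
    return x

# Example: U(x) = |x|^2 / 2, so the Gibbs measure (1.1) at beta = 1 is N(0, I).
grad_U = lambda x: x
x0 = np.random.default_rng(42).uniform(-2.0, 2.0, size=(2000, 2))
samples = langevin_em(grad_U, x0)
print(samples.mean(axis=0), samples.var(axis=0))  # mean near 0, variance near 1
```

For the anisotropic potential of Figure 1 the same scheme applies unchanged, but mixing along the flat direction of the energy landscape is slow, which is precisely the behavior the covariance preconditioning in (1.3) is designed to improve.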
In [GIHLS20], provided that the initial distribution is not a Dirac distribution and that $U$ is a quadratic function of $x$, the authors showed the convergence of the law of (1.3a) to the Gibbs measure in relative entropy. In [CV21], exponential convergence is obtained directly in Wasserstein distance for the linear setting, i.e., the setting when $U$ is quadratic and thus $\nabla U$ is linear. One more step towards analysis is the convergence of the interacting particle system to the mean-field limit given by (1.3), which is shown in [DL21b] in the case of a quadratic potential. The propagation of chaos result is extended to a non-linear setting in [Vae24], with optimal rate of convergence in terms of the number of particles. This paper builds on the above, and the contributions of the paper are the following:

(i) The first major contribution is that we establish exponential convergence of the law of the non-linear Langevin dynamics to the Gibbs measure for non-linear potential functions (in particular, for a quadratic potential with Lipschitz perturbation) in relative entropy (which also gives exponential convergence in 2-Wasserstein distance). This significantly improves the linear setting results of [GIHLS20, CV21], as it covers a large class of Bayesian models with Gaussian priors. In addition, we also prove uniform in time $p$-th moment bounds for the mean-field SDEs. Combining the results, we straightforwardly obtain convergence in $p$-Wasserstein distance. One of the main ingredients of the analysis is matrix valued non-linear ordinary differential equations (ODEs). This approach bears resemblance to the approach based on Riccati type matrix valued ODEs, which has been employed to obtain stability estimates in the case of ensemble Kalman(-Bucy) filters [DMT18, DMH23].
(ii) We prove the convergence of an interacting particle system to the mean-field SDEs (1.3). In contrast to [DL21b, Vae24], we study a weak particle approximation that avoids the need to compute the square root of the empirical covariance matrix. We refer to this as a weak approximation because it differs from the interacting particle systems proposed and studied in [GIHLS20] and [Vae24] at the path level. To show this convergence, we follow the classical trilogy of arguments from [Szn91] (see also [GKM+96]).

(iii) The second major contribution is that we establish the uniform in $N$, where $N$ denotes the number of particles, convergence of an implementable explicit numerical scheme, with fixed discretization time-step, to its continuous-time limit. In [BSW18], the authors consider a one-dimensional model SDE inspired from ensemble Kalman inversion and establish convergence of a numerical scheme to its continuous limit. For interacting particle systems with non-global Lipschitzness in measure in the 2-Wasserstein metric, [CDR24] propose a split-step scheme; however, there the nonlinearity in terms of the measure appears as a convolution, which is not the case for (1.3).

The outline of the article is in line with the contributions mentioned above: in Section 2, we show the exponential convergence; in Section 3, we prove the propagation of chaos; and in Section 4, we show the convergence of the numerical scheme.

Notation

We will use the following notation in the paper. If $Q(t)$ is a time-varying symmetric positive semi-definite matrix, we denote by $\lambda^Q_{\min}(t)$ and $\lambda^Q_{\max}(t)$ its smallest and largest eigenvalues, respectively. For a square matrix $Q$, we denote its trace by $\mathrm{trace}(Q)$. If $O$ is a subset of $\mathbb{R}^d$, then $O^c$ represents its complement. For convenience, we denote $a \wedge b = \min(a,b)$ for $a, b \in \mathbb{R}$. We represent the space of probability measures on $\mathbb{R}^d$ as $\mathcal{P}(\mathbb{R}^d)$ and the subset of measures with bounded $p$-th moment by $\mathcal{P}_p(\mathbb{R}^d)$.
We denote by $C^k(\mathbb{R}^d)$ the space of $k$ times continuously differentiable functions, and by $C^k_c(\mathbb{R}^d)$ the corresponding functions with compact support. For a function $f : \mathbb{R}^d \to \mathbb{R}$, we denote its gradient vector and Hessian matrix by $\nabla f$ and $\nabla^2 f$, respectively. For $\mu \in \mathcal{P}(\mathbb{R}^d)$, we denote $\int_{\mathbb{R}^d} \varphi(x)\,\mu(dx)$ as $\langle \varphi, \mu \rangle$, provided that the integral is finite. Moreover, for $\mu, \nu \in \mathcal{P}(\mathbb{R}^d)$, by $\mu \ll \nu$ we mean that $\mu$ is absolutely continuous with respect to $\nu$. With $\langle a \cdot b \rangle$, we denote the scalar product between two vectors $a, b \in \mathbb{R}^d$. For a $d \times m$ matrix, we denote its Frobenius norm by $|\cdot|$, which reduces to the Euclidean norm for $m = 1$. We will use $\pi(x)$ to denote the density, with respect to the Lebesgue measure, of the Gibbs measure (1.1), i.e.,

\pi(x) = \frac{1}{Z} e^{-\beta U(x)}, \qquad (1.4)

where $Z = \int_{\mathbb{R}^d} e^{-\beta U(x)}\,dx$. Finally, we denote by $C$ a generic constant whose value may change from line to line.

2 Exponential convergence of non-linear dynamics to equilibrium

In this section, we prove exponential convergence of the non-linear Markov process driven by (1.3) to its invariant distribution (1.1) in relative entropy. We do this by first showing a uniform in time lower bound on the smallest eigenvalue of $\Sigma_t$, i.e., on $\lambda^\Sigma_{\min}(t)$. These results are all proved under the following assumption on the function $U$.

Assumption 2.1. We assume
that the potential function $U$ has the following form:

U = \frac{1}{2} x^\top A x + V, \qquad (2.1)

where $A$ is a positive definite matrix with smallest eigenvalue $\ell_A$ and largest eigenvalue $L_A$, and $V$ is a $C^1(\mathbb{R}^d)$ and Lipschitz continuous function with Lipschitz constant $L_V$.

Remark 2.1. Note that the above assumption allows for non-convexity. One example satisfying the above assumption is the Rastrigin function, which is commonly used both as a benchmark objective function for optimization and as a test potential function in sampling problems.

We first recall the definition of relative entropy along with its relation to the total variation distance and the 2-Wasserstein distance; for an in-depth treatment one can look at [Vil03]. For any two probability measures $\nu$ and $\mu$ on $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$, the relative entropy of $\nu$ with respect to $\mu$ is defined as

H(\nu|\mu) = \int_{\mathbb{R}^d} \ln\Big(\frac{d\nu}{d\mu}\Big)\,d\nu \ \text{ if } \nu \ll \mu, \quad +\infty \ \text{ else.}

The $p$-Wasserstein distance on $\mathcal{P}_p(\mathbb{R}^d)$ is given by

W_p(\nu,\mu) := \inf_{\gamma \in \Pi(\nu,\mu)} \big\{ \big(\mathbb{E}|Y_1 - Y_2|^p\big)^{1/p}, \ \mathrm{Law}(Y_1, Y_2) = \gamma \big\},

where the infimum is taken over all couplings of $\nu$ and $\mu$, i.e., $\Pi(\nu,\mu)$ is the set of all joint measures such that $\mathrm{Law}(Y_1) = \nu$ and $\mathrm{Law}(Y_2) = \mu$. We say that $\mu$ satisfies a log-Sobolev inequality with constant $\lambda_{LS}$ if for all probability measures $\nu$ such that $\nu \ll \mu$ the following holds:

H(\nu|\mu) \le \frac{1}{2\lambda_{LS}} \int_{\mathbb{R}^d} \Big| \nabla \ln \frac{d\nu}{d\mu} \Big|^2\,d\nu. \qquad (2.2)

Next, we recall an inequality related to relative entropy, which yields convergence in total variation norm and 2-Wasserstein distance once convergence in relative entropy is proven. If $\mu$ satisfies a log-Sobolev inequality with constant $\lambda_{LS}$, then, due to Talagrand's inequality, we have

W_2(\nu,\mu) \le \sqrt{\frac{2}{\lambda_{LS}} H(\nu|\mu)}.

The main contributions of this section are the following results.

Theorem 2.1. Let Assumption 2.1 hold.
Then the non-linear Langevin SDE (1.3) converges to its invariant measure exponentially in relative entropy, i.e., there exist positive constants $c_1$ and $c_2$, independent of $t$, such that

H(\mathcal{L}^X_t | \pi) \le c_1 e^{-c_2 t}.

Remark 2.2. An application of Talagrand's inequality implies that we also have exponential convergence in 2-Wasserstein distance. This, in turn, ensures weak convergence of $\mathcal{L}^X_t$ to the Gibbs measure, as well as convergence of the second moment of $\mathcal{L}^X_t$ to the second moment of the Gibbs measure (see [Vil03, Theorem 7.12]).

The proof of exponential convergence requires the following lemma.

Lemma 2.2. Under Assumption 2.1, the following bound holds:

\lambda^\Sigma_{\min}(t) \ge \min\Big( \frac{1}{2\beta(L_A + 2\beta L_V^2)}, \lambda^\Sigma_{\min}(0) \Big), \qquad (2.3)

where $L_A$ and $L_V$ are from Assumption 2.1.

Analogous to the uniform in time lower bound on the smallest eigenvalue of $\Sigma_t$, as stated in Lemma 2.2 above, we can also prove a uniform in time upper bound on the largest eigenvalue (see Lemma 2.4). These uniform lower and upper bounds are of interest in their own right. Moreover, a consequence of the lower and upper bounds is uniform in time moment bounds for the mean-field dynamics, which we state here as a separate result and prove in Section 2.4. For brevity, we denote

\ell_\Sigma := \min\Big( \frac{1}{2\beta(L_A + 2\beta L_V^2)}, \lambda^\Sigma_{\min}(0) \Big), \qquad (2.4)

L_\Sigma := \lambda^\Sigma_{\max}(0) e^{-\ell_A t} + \frac{1}{\ell_A}\Big( \big(\tfrac{3}{4}\big)^3 \big(\tfrac{K_V}{\ell_A}\big)^2 + \frac{3}{\beta^2} \Big)\big(1 - e^{-\ell_A t}\big). \qquad (2.5)

Theorem 2.3. Under Assumption 2.1, the following bound holds for all $l > 0$:

\mathbb{E}|X(t)|^{2l} \le C_M, \qquad (2.6)

where $X(t)$ is from (1.3) and $C_M$ is independent of $t$. For brevity, we do not explicitly write the constant $C_M$ appearing in (2.6), but it can easily be inferred from the proof, as we
explicitly track all the constants.

Corollary 2.1. Let Assumption 2.1 hold. Then, for all $p > 0$, the $p$-th moment of the law of the mean-field SDE (1.3) converges to the $p$-th moment of the Gibbs measure (1.1), i.e.,

\int_{\mathbb{R}^d} |x|^p\,\mathcal{L}^X_t(dx) \to \int_{\mathbb{R}^d} |x|^p\,\mu(dx) \qquad (2.7)

as $t \to \infty$.

Proof. The proof directly follows from [VdV98, Theorem 2.20] using weak convergence and uniform integrability of moments.

Corollary 2.2. Let Assumption 2.1 hold. Then, for all $p > 0$, we have the following convergence:

W_p(\mathcal{L}^X_t, \mu) \to 0 \qquad (2.8)

as $t \to \infty$.

Proof. The result follows from the previous corollary and [Vil03, Theorem 7.12].

Lemma 2.4. Under Assumption 2.1, the following bound holds:

\lambda^\Sigma_{\max}(t) \le \lambda^\Sigma_{\max}(0) e^{-\ell_A t} + \frac{1}{\ell_A}\Big( \big(\tfrac{3}{4}\big)^3 \big(\tfrac{K_V}{\ell_A}\big)^2 + \frac{3}{\beta^2} \Big)\big(1 - e^{-\ell_A t}\big), \qquad (2.9)

where $\ell_A$ and $L_V$ are from Assumption 2.1.

For the sake of convenience, in the following we will denote $\Sigma_t := \Sigma(\mu_t)$, and $\Sigma_t^{-1}$ denotes the inverse of $\Sigma_t$.

2.1 Proof of Lemma 2.2

Proof of Lemma 2.2. We have the following due to (1.3a):

\mathbb{E}X(t) = \mathbb{E}X(0) - \int_0^t \Sigma_s\,\mathbb{E}\nabla U(X(s))\,ds, \quad \mathbb{E}X(t)^\top = \mathbb{E}X(0)^\top - \int_0^t \mathbb{E}(\nabla U(X(s)))^\top \Sigma_s^\top\,ds.

Consider the following stochastic dynamics:

d(X(t) - \mathbb{E}X(t)) = -\Sigma_t[\nabla U(X(t)) - \mathbb{E}\nabla U(X(t))]\,dt + \sqrt{\frac{2\Sigma_t}{\beta}}\,dW(t).

Using Itô's product rule, we have

d(X(t)-\mathbb{E}X(t))(X(t)-\mathbb{E}X(t))^\top = -[(X(t)-\mathbb{E}X(t))(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))^\top]\Sigma_t\,dt - \Sigma_t[(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))(X(t)-\mathbb{E}X(t))^\top]\,dt + \frac{2}{\beta}\Sigma_t\,dt + \sqrt{\frac{2}{\beta}}(X(t)-\mathbb{E}X(t))\,dW(t)^\top\sqrt{\Sigma_t} + \sqrt{\frac{2}{\beta}}\sqrt{\Sigma_t}\,dW(t)(X(t)-\mathbb{E}X(t))^\top.

Therefore,

d\Sigma_t = -\mathbb{E}[(X(t)-\mathbb{E}X(t))(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))^\top]\Sigma_t\,dt - \Sigma_t\,\mathbb{E}[(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))(X(t)-\mathbb{E}X(t))^\top]\,dt + \frac{2}{\beta}\Sigma_t\,dt. \qquad (2.10)

Due to (2.10), we have the following matrix valued differential equation:

\frac{d}{dt}\Sigma_t^{-1} = -\Sigma_t^{-1}\frac{d\Sigma_t}{dt}\Sigma_t^{-1} = \Sigma_t^{-1}\mathbb{E}[(X(t)-\mathbb{E}X(t))(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))^\top] + \mathbb{E}[(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))(X(t)-\mathbb{E}X(t))^\top]\Sigma_t^{-1} - \frac{2}{\beta}\Sigma_t^{-1}. \qquad (2.11)
Using Assumption 2.1, we get

\frac{d}{dt}\Sigma_t^{-1} = 2A + \Sigma_t^{-1}\mathbb{E}[(X(t)-\mathbb{E}X(t))(\nabla V(X(t))-\mathbb{E}\nabla V(X(t)))^\top] + \mathbb{E}[(\nabla V(X(t))-\mathbb{E}\nabla V(X(t)))(X(t)-\mathbb{E}X(t))^\top]\Sigma_t^{-1} - \frac{2}{\beta}\Sigma_t^{-1}.

For any $y \in \mathbb{R}^d$ with $|y| \ne 0$, we have

\frac{d}{dt} y^\top\Sigma_t^{-1}y = 2y^\top Ay + y^\top\Sigma_t^{-1}\mathbb{E}[Z_tP_t^\top]y + y^\top\mathbb{E}[P_tZ_t^\top]\Sigma_t^{-1}y - \frac{2}{\beta}y^\top\Sigma_t^{-1}y = 2y^\top Ay + 2y^\top\Sigma_t^{-1}\mathbb{E}[Z_tP_t^\top]y - \frac{2}{\beta}y^\top\Sigma_t^{-1}y, \qquad (2.12)

where, for brevity, we have denoted $Z_t := X(t) - \mathbb{E}X(t)$ and $P_t := \nabla V(X(t)) - \mathbb{E}\nabla V(X(t))$. We first deal with the second term on the right-hand side of (2.12) by using the Cauchy-Bunyakovsky-Schwarz inequality:

y^\top\Sigma_t^{-1}\mathbb{E}[Z_tP_t^\top]y = \mathbb{E}[y^\top\Sigma_t^{-1}Z_tP_t^\top y] \le (\mathbb{E}[y^\top\Sigma_t^{-1}Z_t]^2)^{1/2}(\mathbb{E}[P_t^\top y]^2)^{1/2} \le 2L_V|y|\,(\mathbb{E}[y^\top\Sigma_t^{-1}Z_t]^2)^{1/2},

where $L_V$ is as introduced in Assumption 2.1. Noting that

\mathbb{E}[y^\top\Sigma_t^{-1}Z_t]^2 = \mathbb{E}([y^\top\Sigma_t^{-1}Z_t][Z_t^\top\Sigma_t^{-1}y]) = y^\top\Sigma_t^{-1}\mathbb{E}(Z_tZ_t^\top)\Sigma_t^{-1}y = y^\top\Sigma_t^{-1}y,

since $\mathbb{E}(Z_tZ_t^\top) = \Sigma_t$, we therefore have

y^\top\Sigma_t^{-1}\mathbb{E}[Z_tP_t^\top]y \le 2L_V|y|\sqrt{y^\top\Sigma_t^{-1}y}. \qquad (2.13)

This implies

\frac{d}{dt} y^\top\Sigma_t^{-1}y \le 2y^\top Ay + 4L_V|y|\sqrt{y^\top\Sigma_t^{-1}y} - \frac{2}{\beta}y^\top\Sigma_t^{-1}y. \qquad (2.14)

For the sake of convenience, we denote $\Xi(t,y) = \frac{y^\top\Sigma_t^{-1}y}{|y|^2}$. Dividing (2.14) by $|y|^2$, with $|y| \ne 0$, on both sides, we ascertain

\frac{d}{dt}\Xi(t,y) \le 2L_A + 4L_V\sqrt{\Xi(t,y)} - \frac{2}{\beta}\Xi(t,y),

where $L_A := \sup_{y;\,|y|\ne 0} \frac{y^\top Ay}{|y|^2}$. Using the generalized Young's inequality ($ab \le a^2/(2\epsilon) + \epsilon b^2/2$, where $a, b, \epsilon > 0$, with $a = \sqrt{\Xi(t,y)}$, $b = L_V$, $\epsilon = 2\beta$), we have

\frac{d}{dt}\Xi(t,y) \le 2L_A + 4\Big(\beta L_V^2 + \frac{\Xi(t,y)}{4\beta}\Big) - \frac{2}{\beta}\Xi(t,y) = 2L_A + 4\beta L_V^2 - \frac{1}{\beta}\Xi(t,y).

Using $e^{t/\beta}$ as the integrating factor, we get

\frac{d}{dt} e^{t/\beta}\Xi(t,y) \le \big(2L_A + 4\beta L_V^2\big)e^{t/\beta}.

Therefore, we obtain

e^{t/\beta}\Xi(t,y) \le \Xi(0,y) + 2\beta\big(L_A + 2\beta L_V^2\big)\big(e^{t/\beta} - 1\big). \qquad (2.15)

Hence, we finally have

\Xi(t,y) \le \Xi(0,y)e^{-t/\beta} + 2\beta\big(L_A + 2\beta L_V^2\big)\big(1 - e^{-t/\beta}\big). \qquad (2.16)
Taking the supremum over $y \in \mathbb{R}^d$ with $|y| \ne 0$, and using the fact that $\Sigma_t^{-1}$ is symmetric, we get

\lambda^{\Sigma^{-1}}_{\max}(t) \le \lambda^{\Sigma^{-1}}_{\max}(0)e^{-t/\beta} + 2\beta\big(L_A + 2\beta L_V^2\big)\big(1 - e^{-t/\beta}\big).

Take $g(t) := \lambda^{\Sigma^{-1}}_{\max}(0)e^{-t/\beta} + 2\beta\big(L_A + 2\beta L_V^2\big)\big(1 - e^{-t/\beta}\big)$; then $g'(t)$ is given by

g'(t) = -\frac{1}{\beta}\lambda^{\Sigma^{-1}}_{\max}(0)e^{-t/\beta} + 2\big(L_A + 2\beta L_V^2\big)e^{-t/\beta} = \Big(-\frac{1}{\beta}\lambda^{\Sigma^{-1}}_{\max}(0) + 2\big(L_A + 2\beta L_V^2\big)\Big)e^{-t/\beta}.

This means that the sign of $g'$ is constant, so that $g(t)$ is a monotonic function which attains its supremum either at $t = 0$ or as $t \to \infty$. Consequently, if

2\beta\big(L_A + 2\beta L_V^2\big) \ge \lambda^{\Sigma^{-1}}_{\max}(0), \qquad (2.17)
then $\lambda^{\Sigma^{-1}}_{\max}(t) \le 2\beta\big(L_A + 2\beta L_V^2\big)$. However, if we choose the initial distribution $\mu_0$ such that

2\beta\big(L_A + 2\beta L_V^2\big) \le \lambda^{\Sigma^{-1}}_{\max}(0), \qquad (2.18)

then $\lambda^{\Sigma^{-1}}_{\max}(t) \le \lambda^{\Sigma^{-1}}_{\max}(0)$. Therefore, we have

\lambda^{\Sigma^{-1}}_{\max}(t) \le \max\Big( 2\beta\big(L_A + 2\beta L_V^2\big), \lambda^{\Sigma^{-1}}_{\max}(0) \Big).

Note that $\lambda^{\Sigma^{-1}}_{\max}(t) = 1/\lambda^\Sigma_{\min}(t)$. This implies

\lambda^\Sigma_{\min}(t) \ge \min\Big( \frac{1}{2\beta\big(L_A + 2\beta L_V^2\big)}, \lambda^\Sigma_{\min}(0) \Big). \qquad (2.19)

2.2 Proof of Theorem 2.1

Before we proceed with the proof of Theorem 2.1, we need the following result, which is a direct consequence of [CG22, Theorem 0.1]. In order to state the result, we need the following notation: we denote the Poincaré constant for $\mu$ by $C_P(\mu)$. Let $\lambda_{LS}(\mu_A)$ denote the log-Sobolev constant of $\mu_A(dx) = e^{-x^\top Ax/2}\,dx$. We denote $C_{LS}(\mu_A) = 2/\lambda_{LS}(\mu_A)$.

Lemma 2.5. Let Assumption 2.1 hold. Then the measure $\mu = \frac{1}{Z}e^{-U(x)}\,dx$ satisfies a log-Sobolev inequality, i.e., for all $\nu \ll \mu$,

H(\nu|\mu) \le \frac{1}{2\lambda_{LS}} \int_{\mathbb{R}^d} \Big|\nabla\ln\Big(\frac{d\nu}{d\mu}\Big)\Big|^2\,d\nu,

where the log-Sobolev constant satisfies

\frac{1}{2\lambda_{LS}} \le \frac{1}{4}\Big( K_1(1+a_1^{-1})(1+a_2^{-1}) + K_2\big( (1+a_2)(1+a_1^{-1})/4 + a_1^2/2 \big) \Big), \qquad (2.20)

where $K_1 = C^A_{LS}$, $K_2 = C_P(\mu)\big(2 + \langle V, \mu_A\rangle + L_U C_P(\mu)C_{LS}(\mu_A)\big)$, $a_1 = \frac{1}{K_2^{1/3}}\big(K_1 + \frac{K_2}{2}\big)^{2/3}$ and $a_2 = 2\sqrt{\frac{K_1}{K_2}}$.

Proof. First note that, thanks to Assumption 2.1, $\mu$ satisfies a Poincaré inequality [BBCG08] with constant $C_P(\mu)$. Denote $C_{LS}(\mu) = 2/\lambda_{LS}$. From [CG22, Theorem 0.1], we have for all $a_1 > 0$ and $a_2 > 0$

C_{LS}(\mu) \le (1+a_1^{-1})(1+a_2^{-1})C^A_{LS} + C_P(\mu)\big(2 + \langle V, \mu_A\rangle + L_U C_P(\mu)C_{LS}(\mu_A)\big)\big( (1+a_2)(1+a_1^{-1})/4 + a_1^2/2 \big).
Consider the function $f(a_1, a_2) = K_1(1+a_1^{-1})(1+a_2^{-1}) + K_2\big( (1+a_2)(1+a_1^{-1})/4 + a_1^2/2 \big)$ for some $K_1, K_2 > 0$; then $f$ attains its minimum at

a_1 = \frac{1}{K_2^{1/3}}\Big(K_1 + \frac{K_2}{2}\Big)^{2/3}, \quad a_2 = 2\sqrt{\frac{K_1}{K_2}},

which completes the proof.

Proof of Theorem 2.1. Consider the Fokker-Planck equation governing the probability distribution of the non-linear Langevin dynamics (1.3a):

\frac{\partial\rho}{\partial t}(t,x) = \nabla\cdot\big(\rho(t,x)\Sigma(\mu_t)\nabla U(x)\big) + \frac{1}{\beta}\nabla\cdot\big(\Sigma(\mu_t)\nabla\rho(t,x)\big). \qquad (2.21)

We can write the potential function as $U(x) = -\frac{1}{\beta}\ln\pi(x) - \frac{1}{\beta}\ln Z$, with $Z = \int_{\mathbb{R}^d} e^{-\beta U(x)}\,dx$. This results in the following PDE in divergence form:

\frac{\partial\rho}{\partial t}(t,x) = -\frac{1}{\beta}\nabla\cdot\big(\rho(t,x)\Sigma(\mu_t)\nabla\ln(\pi(x))\big) + \frac{1}{\beta}\nabla\cdot\big(\rho(t,x)\Sigma(\mu_t)\nabla\ln(\rho(t,x))\big) = \frac{1}{\beta}\nabla\cdot\Big(\rho(t,x)\Sigma(\mu_t)\nabla\ln\Big(\frac{\rho(t,x)}{\pi(x)}\Big)\Big). \qquad (2.22)

We denote $\rho_t := \rho(t,\cdot)$. Consider the relative Boltzmann-Shannon entropy (also known as the KL-divergence) of the density $\rho$ with respect to $\pi$:

H(\rho_t|\pi) = \int_{\mathbb{R}^d} \rho_t\ln\Big(\frac{\rho_t}{\pi}\Big)\,dx. \qquad (2.23)

We now study the behaviour of the above functional along the solution of the Fokker-Planck PDE (2.22):

\frac{d}{dt}H(\rho_t|\pi) = \int_{\mathbb{R}^d} \frac{\partial}{\partial t}\big(\rho_t(x)\ln(\rho_t(x))\big) - \frac{\partial}{\partial t}\big(\rho_t\ln(\pi(x))\big)\,dx = \int_{\mathbb{R}^d} \Big( \frac{\partial}{\partial t}\rho_t(x) + \ln(\rho_t(x))\frac{\partial}{\partial t}\rho_t(x) - \ln(\pi(x))\frac{\partial}{\partial t}\rho_t(x) \Big)\,dx = \int_{\mathbb{R}^d} \ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big)\frac{\partial}{\partial t}\rho_t(x)\,dx,

where we have used the conservation of mass in time, i.e., $\frac{\partial}{\partial t}\int_{\mathbb{R}^d}\rho_t(x)\,dx = 0$. Using (2.22) and integration by parts, we obtain

\frac{d}{dt}H(\rho_t|\pi) = -\frac{1}{\beta}\int_{\mathbb{R}^d} \Big( \nabla\ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big)\cdot\Sigma(\mu_t)\nabla\ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big) \Big)\rho_t(x)\,dx. \qquad (2.24)
Using the log-Sobolev inequality, we get

-\int_{\mathbb{R}^d} \Big( \nabla\ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big)\cdot\Sigma(\mu_t)\nabla\ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big) \Big)\rho_t(x)\,dx \le -\lambda^\Sigma_{\min}(t)\int_{\mathbb{R}^d} \Big|\nabla\ln\Big(\frac{\rho_t(x)}{\pi(x)}\Big)\Big|^2\rho_t(x)\,dx \le -\lambda^\Sigma_{\min}(t)\,2\lambda_{LS}\,H(\rho_t|\pi).

Therefore,

\frac{d}{dt}H(\rho_t|\pi) \le -\frac{2\lambda_{LS}\,\lambda^\Sigma_{\min}(t)}{\beta}H(\rho_t|\pi),

which, on applying Lemma 2.2, gives

\frac{d}{dt}H(\rho_t|\pi) \le -\min\Big( \frac{1}{2\beta\big(L_A + 2\beta L_V^2\big)}, \lambda^\Sigma_{\min}(0) \Big)\frac{2\lambda_{LS}}{\beta}H(\rho_t|\pi). \qquad (2.25)

2.3 Proof of Lemma 2.4

Proof of Lemma 2.4. From (2.10), we have

d\Sigma_t = -\mathbb{E}[(X(t)-\mathbb{E}X(t))(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))^\top]\Sigma_t\,dt - \Sigma_t\,\mathbb{E}[(\nabla U(X(t))-\mathbb{E}\nabla U(X(t)))(X(t)-\mathbb{E}X(t))^\top]\,dt + \frac{2}{\beta}\Sigma_t\,dt, \qquad (2.26)

which due to Assumption 2.1 gives

\frac{d}{dt}\Sigma_t = -2\Sigma_tA\Sigma_t - \mathbb{E}[Z_tP_t^\top]\Sigma_t - \Sigma_t\mathbb{E}[P_tZ_t^\top] + \frac{2}{\beta}\Sigma_t,

where $Z_t = X(t) - \mathbb{E}X(t)$ and $P_t = \nabla V(X(t)) - \mathbb{E}\nabla V(X(t))$. For $|y|^2 = 1$, we have

\frac{d}{dt} y^\top\Sigma_ty = -2y^\top\Sigma_tA\Sigma_ty - y^\top\mathbb{E}[Z_tP_t^\top]\Sigma_ty - y^\top\Sigma_t\mathbb{E}[P_tZ_t^\top]y + \frac{2}{\beta}y^\top\Sigma_ty \le -2\ell_A\,y^\top\Sigma_t^2y - 2y^\top\mathbb{E}[Z_tP_t^\top]\Sigma_ty + \frac{2}{\beta}y^\top\Sigma_ty. \qquad (2.27)

First consider the second term on the right-hand side of the inequality:

-2y^\top\mathbb{E}[Z_tP_t^\top]\Sigma_ty = -2\mathbb{E}[y^\top Z_tP_t^\top\Sigma_ty] \le 2(\mathbb{E}|y^\top Z_t|^2)^{1/2}(\mathbb{E}|P_t^\top\Sigma_ty|^2)^{1/2} = 2(y^\top\Sigma_ty)^{1/2}(y^\top\Sigma_t\mathbb{E}[P_tP_t^\top]\Sigma_ty)^{1/2}.

Due to our assumption on $V$, there exists a $K_V > 0$ such that the following holds for all $t > 0$ and $z \in \mathbb{R}^d$:

z^\top P_tP_t^\top z \le K_V|z|^2. \qquad (2.28)

Explicitly, since $|z^\top P_t| \le 2L_V|z|$, the bound (2.28) holds with $K_V = 4L_V^2$. An application of Young's inequality gives

-2y^\top\mathbb{E}[Z_tP_t^\top]\Sigma_ty \le 2\sqrt{K_V}(y^\top\Sigma_ty)^{1/2}(y^\top\Sigma_t^2y)^{1/2} \le \frac{\ell_A}{3}y^\top\Sigma_t^2y + \frac{3K_V}{\ell_A}y^\top\Sigma_ty.
Hence, using the above inequality in (2.27), we get

\frac{d}{dt} y^\top\Sigma_ty \le -\frac{5}{3}\ell_A\,y^\top\Sigma_t^2y + \frac{3K_V}{\ell_A}y^\top\Sigma_ty + \frac{2}{\beta}y^\top\Sigma_ty. \qquad (2.29)

Let us now analyze the term $y^\top\Sigma_t^2y$ appearing in (2.27). We have already proven a uniform in time lower bound on the eigenvalues of $\Sigma_t$. We write the eigenvalue decomposition of the matrix $\Sigma_t$ as $O_tD_tO_t^\top$, where $O_t$ is an orthogonal matrix and $D_t$ is a diagonal matrix whose diagonal entries are given by the eigenvalues of $\Sigma_t$. This implies that we can write

y^\top\Sigma_t^2y = y^\top O_tD_t^2O_t^\top y. \qquad (2.30)

We denote $v_t = O_t^\top y$, and it is clear that $|v_t|^2 = y^\top O_tO_t^\top y = 1$. The calculations below follow from these properties:

(y^\top\Sigma_ty)^2 = (v_t^\top D_tv_t)^2 = \Big(\sum_i \lambda^\Sigma_i(t)(v^i_t)^2\Big)^2 \le \sum_{i,j=1}^d \lambda^\Sigma_i(t)(v^i_t)^2\,\lambda^\Sigma_j(t)(v^j_t)^2,

where $\lambda^\Sigma_i(t)$ denotes the $i$-th diagonal entry of $D_t$ and $v^i_t$ represents the $i$-th component of $v_t$. On applying Young's inequality, we arrive at the following:

(y^\top\Sigma_ty)^2 \le \frac{1}{2}\sum_{i,j=1}^d \big[ (\lambda^\Sigma_i(t))^2 + (\lambda^\Sigma_j(t))^2 \big](v^i_t)^2(v^j_t)^2 = \frac{1}{2}\sum_{i,j=1}^d (\lambda^\Sigma_i(t))^2(v^i_t)^2(v^j_t)^2 + \frac{1}{2}\sum_{i,j=1}^d (\lambda^\Sigma_j(t))^2(v^j_t)^2(v^i_t)^2 = \sum_i (\lambda^\Sigma_i(t))^2(v^i_t)^2 = v_t^\top D_t^2v_t = y^\top O_tD_t^2O_t^\top y = y^\top\Sigma_t^2y. \qquad (2.31)

Consequently, we have obtained that

(y^\top\Sigma_ty)^2 \le y^\top\Sigma_t^2y \qquad (2.32)

for all $y$ such that $|y| = 1$. Using the estimate $-y^\top\Sigma_t^2y \le -(y^\top\Sigma_ty)^2$ in (2.29), we get

\frac{d}{dt} y^\top\Sigma_ty \le -\frac{5}{3}\ell_A(y^\top\Sigma_ty)^2 + \frac{3K_V}{\ell_A}y^\top\Sigma_ty + \frac{2}{\beta}y^\top\Sigma_ty. \qquad (2.33)

Therefore, we have

\frac{d}{dt} y^\top\Sigma_ty \le -\ell_A(y^\top\Sigma_ty)^2 + \Big(\frac{3}{4}\Big)^3\Big(\frac{K_V}{\ell_A}\Big)^2 + \frac{3}{\beta^2},

where we have utilized Young's inequality as

\frac{3K_V}{4\ell_A}y^\top\Sigma_ty \le \frac{1}{3}(y^\top\Sigma_ty)^2 + \Big(\frac{3}{4}\Big)^3\Big(\frac{K_V}{\ell_A}\Big)^2, \qquad \frac{2}{\beta}y^\top\Sigma_ty \le \frac{1}{3}(y^\top\Sigma_ty)^2 + \frac{3}{\beta^2}.
Therefore, also using $-x^2 \le -x + 1$, we get

\frac{d}{dt} y^\top\Sigma_ty \le -\ell_A(y^\top\Sigma_ty) + \ell_A + \Big(\frac{3}{4}\Big)^3\Big(\frac{K_V}{\ell_A}\Big)^2 + \frac{3}{\beta^2}.

Denoting $\tilde{\Xi}(t,y) = y^\top\Sigma_ty$ and using $e^{\ell_At}$ as the integrating factor, we get

\frac{d}{dt} e^{\ell_At}\tilde{\Xi}(t,y) \le \Big( \ell_A + \Big(\frac{3}{4}\Big)^3\Big(\frac{K_V}{\ell_A}\Big)^2 + \frac{3}{\beta^2} \Big)e^{\ell_At}.

Therefore, we have

\tilde{\Xi}(t,y) \le \tilde{\Xi}(0,y)e^{-\ell_At} + \frac{1}{\ell_A}\Big( \ell_A + \Big(\frac{3}{4}\Big)^3\Big(\frac{K_V}{\ell_A}\Big)^2 + \frac{3}{\beta^2} \Big)\big(1 - e^{-\ell_At}\big).

Taking the supremum over $y$, we finally get our estimate

\lambda^\Sigma_{\max}(t) \le \lambda^\Sigma_{\max}(0)e^{-\ell_At} + \frac{1}{\ell_A}\Big( \Big(\frac{3}{4}\Big)^3\Big(\frac{K_V}{\ell_A}\Big)^2 + \frac{3}{\beta^2} \Big)\big(1 - e^{-\ell_At}\big). \qquad (2.34)

2.4 Proof of Theorem 2.3

Proof of Theorem 2.3. Below we prove the result for $l \ge 2$, after which the result for $0 < l < 2$ follows from the Cauchy-Bunyakovsky-Schwarz inequality. Using Itô's formula for $U^l(x)$, with $l \ge 2$, we have

dU^l(X(t)) = -lU^{l-1}(X(t))\langle\nabla U(X(t))\cdot\Sigma_t\nabla U(X(t))\rangle\,dt + \frac{l}{\beta}U^{l-1}(X(t))\,\mathrm{trace}\big[\nabla^2U(X(t))\Sigma_t\big]\,dt + \frac{l}{\beta}(l-1)U^{l-2}(X(t))\langle\nabla U(X(t))\cdot\Sigma_t\nabla U(X(t))\rangle\,dt + \sqrt{\frac{2}{\beta}}\,l\,U^{l-1}\langle\nabla U(X(t))\cdot\sqrt{\Sigma_t}\,dW(t)\rangle.

Due to the boundedness of the second derivatives of $U$ and the uniform in time bounds on $\lambda^\Sigma_{\min}(t)$ and $\lambda^\Sigma_{\max}(t)$ (see Lemma 2.2 and Lemma 2.4), we have

\mathrm{trace}\big[\nabla^2U(X(t))\Sigma_t\big] \le B, \qquad (2.35)

-\langle\nabla U(X(t))\cdot\Sigma_t\nabla U(X(t))\rangle \le -\ell_\Sigma|\nabla U(X(t))|^2, \qquad (2.36)

\langle\nabla U(X(t))\cdot\Sigma_t\nabla U(X(t))\rangle \le L_\Sigma|\nabla U(X(t))|^2, \qquad (2.37)

where $B > 0$ is a constant independent of $t$, and $\ell_\Sigma$ and $L_\Sigma$ are from (2.4) and (2.5), respectively.
This leads to the following:

dU^l(X(t)) \le -l\ell_\Sigma U^{l-1}(X(t))|\nabla U(X(t))|^2\,dt + \frac{l}{\beta}BU^{l-1}(X(t))\,dt + \frac{l}{\beta}(l-1)L_\Sigma U^{l-2}(X(t))|\nabla U(X(t))|^2\,dt + \sqrt{\frac{2}{\beta}}\,l\,U^{l-1}\langle\nabla U(X(t))\cdot\sqrt{\Sigma_t}\,dW(t)\rangle,

which, on taking expectations on both sides, gives

d\,\mathbb{E}U^l(X(t)) \le -l\ell_\Sigma\,\mathbb{E}U^{l-1}(X(t))|\nabla U(X(t))|^2\,dt + \frac{l}{\beta}B\,\mathbb{E}U^{l-1}(X(t))\,dt + \frac{l}{\beta}(l-1)L_\Sigma\,\mathbb{E}U^{l-2}(X(t))|\nabla U(X(t))|^2\,dt. \qquad (2.38)

Next, we derive upper and lower bounds for $|\nabla U(X(t))|^2$ in terms of $U(x)$. To this end, since $V$ by Assumption 2.1 is Lipschitz continuous, we have $V(x) \le V(0) + L_V|x|$. Therefore,

U(x) \le \frac{1}{2}L_A|x|^2 + L_V|x| + V(0),

which, upon using Young's inequality $L_V|x| \le \frac{1}{2}|x|^2 + \frac{1}{2}L_V^2$, gives

U(x) \le \frac{1}{2}(L_A+1)|x|^2 + \frac{1}{2}L_V^2 + V(0).

This implies

|x|^2 \ge \frac{2}{L_A+1}U(x) - \frac{1}{L_A+1}\big(L_V^2 + 2V(0)\big). \qquad (2.39)

Due to our assumption on $U$, i.e., Assumption 2.1, and (2.39), we obtain

|\nabla U(x)|^2 = |Ax + \nabla V(x)|^2 \ge \frac{1}{2}|Ax|^2 - |\nabla V(x)|^2 \ge \frac{\ell_A^2}{2}|x|^2 - L_V^2 \ge \frac{\ell_A^2}{L_A+1}U(x) + \ell_A^2\frac{2V(0)+L_V^2}{2(L_A+1)} - L_V^2. \qquad (2.40)

To derive the upper bound, we again use the Lipschitz continuity of $V$, which gives

U(x) \ge \frac{1}{2}\ell_A^2|x|^2 + V(0) - L_V|x| \ge \frac{1}{4}\ell_A^2|x|^2 - \frac{L_V^2}{\ell_A^2} + V(0),
where we have used Young's inequality, i.e., $L_V|x| \le \ell_A^2|x|^2/4 + L_V^2/\ell_A^2$, in the last step. Hence,

|x|^2 \le \frac{4}{\ell_A^2}U(x) + \frac{4L_V^2}{\ell_A^4} - \frac{4}{\ell_A^2}V(0).

This implies

|\nabla U(x)|^2 = |Ax + \nabla V(x)|^2 \le 2L_A^2|x|^2 + 2L_V^2 \le \frac{8L_A^2}{\ell_A^2}U(x) + \frac{8L_A^2L_V^2}{\ell_A^4} - \frac{8L_A^2}{\ell_A^2}V(0) + 2L_V^2. \qquad (2.41)

Using (2.40) and (2.41), we have

d\,\mathbb{E}U^l(X(t)) \le -l\ell_\Sigma\frac{\ell_A^2}{L_A+1}\mathbb{E}U^l(X(t))\,dt - l\ell_\Sigma\ell_A^2\frac{2V(0)+L_V^2}{2(L_A+1)}\mathbb{E}U^{l-1}(X(t))\,dt + l\ell_\Sigma L_V^2\,\mathbb{E}U^{l-1}(X(t))\,dt + \frac{l}{\beta}B\,\mathbb{E}U^{l-1}(X(t))\,dt + \frac{l}{\beta}(l-1)L_\Sigma\frac{8L_A^2}{\ell_A^2}\mathbb{E}U^{l-1}(X(t))\,dt + \frac{l}{\beta}(l-1)L_\Sigma\Big(\frac{8L_A^2L_V^2}{\ell_A^4} - \frac{8L_A^2}{\ell_A^2}V(0) + 2L_V^2\Big)\mathbb{E}U^{l-2}(X(t))\,dt \le -\eta_1\,\mathbb{E}U^l(X(t))\,dt + \eta_2\,\mathbb{E}U^{l-1}(X(t))\,dt + \eta_3\,\mathbb{E}U^{l-2}(X(t))\,dt,

where

\eta_1 := l\ell_\Sigma\frac{\ell_A^2}{L_A+1}, \qquad (2.42)

\eta_2 := l\ell_\Sigma L_V^2 + \frac{l}{\beta}B + \frac{l}{\beta}(l-1)L_\Sigma\frac{8L_A^2}{\ell_A^2}, \qquad (2.43)

\eta_3 := \frac{l}{\beta}(l-1)L_\Sigma\Big(\frac{8L_A^2L_V^2}{\ell_A^4} - \frac{8L_A^2}{\ell_A^2}V(0) + 2L_V^2\Big). \qquad (2.44)

Using Young's inequality, we have

\eta_2U^{l-1}(x) \le \frac{\eta_1}{3}U^l(x) + \Big(\frac{2\eta_1(l-1)}{3l}\Big)^{-(l-1)/l}\frac{\eta_2^l}{2l}, \qquad \eta_3U^{l-2}(x) \le \frac{\eta_1}{3}U^l(x) + \Big(\frac{2\eta_1(l-2)}{3l}\Big)^{-(l-2)/l}\frac{\eta_3^{l/2}}{l},

which results in

d\,\mathbb{E}U^l(X(t)) \le -\frac{\eta_1}{3}\mathbb{E}U^l(X(t))\,dt + \eta_4\,dt,

where $\eta_4$ is independent of $t$ and given by

\eta_4 = \Big(\frac{2\eta_1(l-1)}{3l}\Big)^{-(l-1)/l}\frac{\eta_2^l}{2l} + \Big(\frac{2\eta_1(l-2)}{3l}\Big)^{-(l-2)/l}\frac{\eta_3^{l/2}}{l}.

This implies

\mathbb{E}U^l(X(t)) \le \mathbb{E}U^l(X(0))e^{-\frac{\eta_1}{3}t} + \frac{3\eta_4}{\eta_1}\Big(1 - e^{-\frac{\eta_1}{3}t}\Big),

which implies the uniform in time moment bound, i.e., $\mathbb{E}|X(t)|^{2l} \le C_M$, where $C_M > 0$ is a constant independent of $t$.

3 Particle approximation and propagation of chaos

For a process $X(t)$ driven by the dynamics (1.3), the time marginal law, denoted by $\mu_t := \mathcal{L}^X_t$, satisfies the non-linear Fokker-Planck PDE

\frac{\partial\mu_t}{\partial t}(t,x) = \langle\nabla_x\mu_t\cdot\Sigma(\mu_t)\nabla U(x)\rangle + \frac{1}{\beta}\mathrm{trace}\big(\Sigma(\mu_t)\nabla^2\mu_t\big), \quad (t,x) \in [0,T]\times\mathbb{R}^d. \qquad (3.1)

In this section, we prove that we can use a particle approximation to approximate this time marginal law. By the contraction result in Theorem 2.1, this means that we can use a particle approximation to approximate $\pi(x)\,dx$, with $\pi$ given in (1.4).
To this end, let $N \in \mathbb{N}$ denote the number of particles and let $X^{i,N}(t)$ represent the position of the $i$-th particle at time $t$. Let $E_N(t)$ represent the empirical measure defined as

E_N(t) = \frac{1}{N}\sum_{j=1}^N \delta_{X^{j,N}(t)}. \qquad (3.2)

Consider the $d \times N$ deviation matrix $Q^N(t) = [X^{1,N}(t) - M^N(t), \ldots, X^{N,N}(t) - M^N(t)]$, with $M^N_t$ being the ensemble mean given by $M^N_t := M(E_N(t)) = \frac{1}{N}\sum_{j=1}^N X^{j,N}(t)$. The ensemble covariance, denoted $\Sigma^N_t$, is defined as

\Sigma^N_t := \Sigma(E_N(t)) = \frac{1}{N}Q^N(t)Q^N(t)^\top. \qquad (3.3)

One possible particle approximation to the mean-field limit is given by the following SDEs governing the interacting particle system:

dX^{i,N}(t) = -\Sigma(E_N(t))\nabla U(X^{i,N}(t))\,dt + \sqrt{\frac{2}{\beta}}\sqrt{\Sigma(E_N(t))}\,dW^i(t), \qquad (3.4)

where $(W^i(t))_{t\ge 0}$, $i = 1, \ldots, N$, denote standard $d$-dimensional Wiener processes. The strong convergence of (3.4) to the mean-field limit (1.3) is shown in [DL21b] and [Vae24] in the linear and non-linear setting, respectively. However, an implementation of (3.4) requires the computation of the square root of a $d \times d$ matrix, which may be computationally expensive. To overcome this, we first note that for the purpose of sampling and computation of ergodic averages, a weak-sense approximation will suffice. To this end, inspired by [GINR20], we consider a system of SDEs which approximates the mean-field SDEs in the large particle limit, but in a weak sense. Let $(B^i(t))_{t\ge 0}$, $i = 1, \ldots, N$, be standard $N$-dimensional Wiener processes. We consider the following SDEs driving the interacting particle system:

dX^{i,N}(t) = -\Sigma(E_N(t))\nabla U(X^{i,N}(t))\,dt + \sqrt{\frac{2}{\beta N}}Q^N(t)\,dB^i(t), \qquad (3.5)

where $X^{i,N}(t) \in \mathbb{R}^d$ denotes the position of the $i$-th particle at time $t$.
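The computational advantage of (3.5) is easy to see in code: the noise $Q^N(t)\,dB^i(t)$ only requires matrix-vector products with the deviation matrix, never a matrix square root, while its per-particle covariance is still $(2/\beta)\Sigma^N_t$. The following Euler-Maruyama sketch is our own illustration (step size, particle count, and the quadratic test potential are assumptions, not choices from the paper):

```python
import numpy as np

def kalman_langevin_step(X, grad_U, beta, dt, rng):
    """One Euler-Maruyama step of the weak particle system (3.5).

    X has shape (N, d); each row is one particle X^{i,N}."""
    N, d = X.shape
    M = X.mean(axis=0)                 # ensemble mean M^N_t
    Q = (X - M).T                      # deviation matrix Q^N(t), shape (d, N)
    Sigma = Q @ Q.T / N                # ensemble covariance (3.3)
    drift = -grad_U(X) @ Sigma         # -Sigma^N grad U(X^i) for every particle
    dB = rng.standard_normal((N, N)) * np.sqrt(dt)  # N-dim increments dB^i
    noise = np.sqrt(2.0 / (beta * N)) * dB @ Q.T    # Q^N dB^i, no sqrt(Sigma)
    return X + drift * dt + noise

# Demo: U(x) = |x|^2/2, so the Gibbs measure at beta = 1 is N(0, I_2).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(500, 2))
for _ in range(4000):
    X = kalman_langevin_step(X, grad_U=lambda x: x, beta=1.0, dt=5e-3, rng=rng)
emp_cov = np.cov(X.T, bias=True)
print(np.round(emp_cov, 2))  # roughly the 2x2 identity matrix
```

Since $\Sigma^N_t$ is symmetric, `grad_U(X) @ Sigma` applies $\Sigma^N_t$ to each particle's gradient row by row; the cost per step is dominated by the $O(N^2 d)$ noise product, with no eigen- or Cholesky decomposition.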
Remark 3.1 (Derivative-free approach). Using the following approximate first-order linearization [ILS13, SS17]:
\begin{align*}
U(X^{j,N}(t)) &\approx U(X^{i,N}(t)) + \langle X^{j,N}(t) - X^{i,N}(t), \nabla U(X^{i,N}(t))\rangle, \\
U(X^{k,N}(t)) &\approx U(X^{i,N}(t)) + \langle X^{k,N}(t) - X^{i,N}(t), \nabla U(X^{i,N}(t))\rangle,
\end{align*}
we get $U(X^{j,N}(t)) - \bar{U}^N(t) \approx \langle X^{j,N}(t) - M^N(t), \nabla U(X^{i,N}(t))\rangle$, where $\bar{U}^N(t) := \frac{1}{N}\sum_{k=1}^N U(X^{k,N}(t))$. The above, with some more computations, leads to the following derivative-free dynamics:
\[
dZ^{i,N}(t) = -\frac{1}{N}\sum_{j=1}^N \big(Z^{j,N}(t) - M^N(t)\big)\big(U(Z^{j,N}(t)) - \bar{U}^N(t)\big)\,dt + \sqrt{\frac{2}{\beta N}}\,Q^N(t)\,dB^i(t). \tag{3.6}
\]
In order to prove propagation of chaos for (3.5), we impose the following assumption on $U$.

Assumption 3.1. Let $U\in C^2(\mathbb{R}^d)$ and $U(x)\ge 0$ for all $x\in\mathbb{R}^d$. We assume that there
https://arxiv.org/abs/2504.18139v1
exists a compact set $\mathcal{K}$ such that the following holds for some $K_1, K_2, K_3, K_4 > 0$:
\[
K_1|x|^2 \le U(x) \le K_2|x|^2 \quad\text{and}\quad K_3|x| \le |\nabla U(x)| \le K_4|x|, \qquad \text{for all } x\in\mathcal{K}^c.
\]
We also assume that all second derivatives of $U$ are uniformly bounded in $x$.

In [Vae24], the well-posedness of (1.3a) as well as of (3.4) is proved under Assumption 3.1. In the proof of well-posedness of the particle system (3.4) in [Vae24], the Khasminskii--Lyapunov approach is employed using the same Lyapunov function as in [GINR20]. The exact same arguments can be used to show well-posedness of (3.5). We avoid repeating the same calculations since the line of argument remains unchanged, noting that $\Sigma^N = \frac{1}{N}Q^N(Q^N)^\top = \sqrt{\Sigma^N}\sqrt{\Sigma^N}^\top$. In addition, one can also obtain moment bounds uniform in $N$ as has been done in [Vae24], i.e., for $p\ge 1$ it holds that
\[
\sup_{i=1,\ldots,N}\sup_{t\in[0,T]}\mathbb{E}|X^{i,N}(t)|^{2p} \le C, \tag{3.7}
\]
where $C > 0$ is independent of $N$. As a consequence of the uniform-in-$N$ moment bound, we have
\begin{align}
\mathbb{E}|\Sigma(E^N(t))| &= \frac{1}{N}\mathbb{E}\bigg|\sum_{i=1}^N \big(X^{i,N}(t) - M^N(t)\big)\big(X^{i,N}(t) - M^N(t)\big)^\top\bigg| \le \frac{1}{N}\sum_{i=1}^N \mathbb{E}|X^{i,N}(t) - M^N(t)|^2 \nonumber\\
&\le \frac{2}{N}\sum_{i=1}^N \mathbb{E}|X^{i,N}(t)|^2 + 2\,\mathbb{E}|M^N(t)|^2 \le \frac{4}{N}\sum_{i=1}^N \mathbb{E}|X^{i,N}(t)|^2 \le C. \tag{3.8}
\end{align}
The main result of this section is the following theorem.

Theorem 3.1. Let $\mu_0\in\mathcal{P}_2(\mathbb{R}^d)$, and let Assumption 3.1 hold. Moreover, let $(X^{i,N})_{i=1}^N$ be the solution of the interacting particle system (3.5), and let $E^N(t)$ denote the empirical measure of these $N$ interacting particles at time $t$. Then, for all $t\in[0,T]$, there exists a deterministic limit $\mu_t$ of the empirical measure $E^N(t)$ as $N\to\infty$. This $\mu_t$ satisfies the mean-field PDE (3.1) in the weak sense.
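The chain of estimates in (3.8) holds pointwise, not only in expectation: for any positive semi-definite ensemble covariance, the matrix norm is dominated by the trace, which in turn is the mean squared deviation from the ensemble mean. The following sketch (our own numerical illustration, with arbitrary data) checks this on a synthetic ensemble.

```python
import numpy as np

# Pointwise version of (3.8): |Sigma(E^N)| <= (1/N) sum_i |X^i - M^N|^2
# <= (4/N) sum_i |X^i|^2, using that Sigma is positive semi-definite.
rng = np.random.default_rng(4)
d, N = 5, 40
X = rng.normal(size=(d, N)) + 1.0            # deliberately non-centered
M = X.mean(axis=1, keepdims=True)
Sigma = (X - M) @ (X - M).T / N

op_norm = np.linalg.norm(Sigma, 2)               # largest eigenvalue (PSD)
trace = np.mean(np.sum((X - M) ** 2, axis=0))    # (1/N) sum_i |X^i - M|^2
assert op_norm <= trace + 1e-12                  # PSD: norm <= trace
assert trace <= 4.0 * np.mean(np.sum(X ** 2, axis=0))
```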
3.1 Proof of Theorem 3.1

The proof follows the classical trilogy of arguments (see [Szn91]):
• Tightness of the law of the random empirical measure: together with Prokhorov's theorem, this ensures that there exists a converging subsequence of the laws of the random empirical measures as the number of particles tends to infinity.
• Identification of the limit: this is achieved by looking at the PDE corresponding to the mean-field SDEs.
• Uniqueness of the limit: this follows from the uniqueness of the solution of the mean-field SDEs.

3.1.1 Tightness of random measure

Let $\mathcal{L}(E^N)$ be the law of the empirical measure, which belongs to $\mathcal{P}(\mathcal{P}(C([0,T];\mathbb{R}^d)))$. We aim to show that the family of measures $(\mathcal{L}(E^N))_{N\in\mathbb{N}}$ is tight in $\mathcal{P}(\mathcal{P}(C([0,T];\mathbb{R}^d)))$. From [Szn91, Proposition 2.2], showing tightness of the family $(\mathcal{L}(E^N))_{N\ge 2}$ in $\mathcal{P}(\mathcal{P}(C([0,T];\mathbb{R}^d)))$ is equivalent to showing that the family $(\mathcal{L}(X^{1,N}))_{N\in\mathbb{N}}$ is tight in $\mathcal{P}(C([0,T];\mathbb{R}^d))$. To this end, we employ the Aldous criterion (see [Bil13, Section 16]) as follows:
(i) For all $t\in[0,T]$, the time marginal law of $(X^{i,N}(t))$ is tight as a sequence in the space $\mathcal{P}(\mathbb{R}^d)$.
(ii) For all $\epsilon > 0$ and $\zeta > 0$, there exist $n_0$ and $\delta > 0$ such that for any sequence of stopping times $\tau_j$,
\[
\sup_{j > n_0}\sup_{\theta\le\delta}\mathbb{P}\big(|X^{i,N}(\tau_j+\theta) - X^{i,N}(\tau_j)| > \zeta\big) \le \epsilon. \tag{3.9}
\]
From (3.7), there is a $K$, not depending on $N$, such that $\mathbb{E}|X^{1,N}(t)|^2 \le K$. To verify the first condition, consider the compact set $\mathcal{K}_\epsilon := \{y : |y|^2 \le K/\epsilon\}$. Using Markov's inequality, we have
\[
\mathcal{L}_{X^{1,N}(t)}\big(\mathcal{K}_\epsilon^c\big) = \mathbb{P}\big(|X^{1,N}(t)|^2 > K/\epsilon\big) \le \frac{\epsilon\,\mathbb{E}|X^{1,N}(t)|^2}{K} \le \epsilon. \tag{3.10}
\]
This verifies the first condition, namely that for each $t\in[0,T]$ the sequence $(\mathcal{L}_{X^{1,N}(t)})_{N\in\mathbb{N}}$ is tight.
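The Markov-inequality step (3.10) can be checked exactly on any finite sample, since Markov's inequality holds for every measure, in particular the empirical one. The sketch below (our own illustration, with arbitrary data) verifies that the mass outside $\mathcal{K}_\epsilon = \{|y|^2 \le K/\epsilon\}$ never exceeds $\epsilon$.

```python
import numpy as np

# Empirical check of (3.10): with K the (empirical) second moment,
# the fraction of points with |x|^2 > K/eps is at most eps.
rng = np.random.default_rng(1)
x = rng.normal(size=(10000, 3))            # 10^4 sample points in R^3
sq = np.sum(x ** 2, axis=1)                # |x|^2
K = sq.mean()                              # empirical E|X|^2

for eps in (0.5, 0.1, 0.01):
    outside = np.mean(sq > K / eps)        # empirical mass of K_eps^c
    # Markov: mass <= eps * E|X|^2 / K = eps
    assert outside <= eps
```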
To verify the second condition, we start with
\[
X^{1,N}(\tau_j+\delta) = X^{1,N}(\tau_j) - \int_{\tau_j}^{\tau_j+\delta}\Sigma(E^N(t))\nabla U(X^{1,N}(t))\,dt + \int_{\tau_j}^{\tau_j+\delta}\sqrt{\frac{2}{\beta N}}\,Q^N(t)\,dB^1(t). \tag{3.11}
\]
Rearranging terms, squaring and taking expectations on both sides, we obtain
\[
\mathbb{E}|X^{1,N}(\tau_j+\delta) - X^{1,N}(\tau_j)|^2 \le 2\,\mathbb{E}\bigg|\int_{\tau_j}^{\tau_j+\delta}\Sigma(E^N(t))\nabla U(X^{1,N}(t))\,dt\bigg|^2 + 2\,\mathbb{E}\bigg|\int_{\tau_j}^{\tau_j+\delta}\sqrt{\frac{2}{\beta N}}\,Q^N(t)\,dB^1(t)\bigg|^2.
\]
Considering the first term, by the Cauchy--Bunyakovsky--Schwarz inequality, we
have
\[
\mathbb{E}\bigg|\int_{\tau_j}^{\tau_j+\delta}\Sigma(E^N(t))\nabla U(X^{1,N}(t))\,dt\bigg|^2 \le \delta\,\mathbb{E}\bigg(\int_{\tau_j}^{\tau_j+\delta}\big|\Sigma(E^N(t))\nabla U(X^{1,N}(t))\big|^2\,dt\bigg) \le \delta\sup_{t\in[0,T]}\mathbb{E}\big|\Sigma(E^N(t))\nabla U(X^{1,N}(t))\big|^2. \tag{3.12}
\]
Considering the second term, by Itô's isometry, we have
\[
\mathbb{E}\bigg|\int_{\tau_j}^{\tau_j+\delta}\sqrt{\frac{2}{\beta N}}\,Q^N(t)\,dB^1(t)\bigg|^2 = \mathbb{E}\int_{\tau_j}^{\tau_j+\delta}\bigg|\frac{2}{\beta N}Q^N(t)(Q^N)^\top(t)\bigg|^2\,dt = \mathbb{E}\int_{\tau_j}^{\tau_j+\delta}\bigg|\frac{2}{\beta}\Sigma(E^N(t))\bigg|^2\,dt.
\]
Now, by the Cauchy--Bunyakovsky--Schwarz inequality, we have
\[
\mathbb{E}\int_{\tau_j}^{\tau_j+\delta}\bigg|\frac{2}{\beta}\Sigma(E^N(t))\bigg|^2\,dt \le \bigg(\int_{\tau_j}^{\tau_j+\delta}\frac{4}{\beta^2}\,dt\bigg)^{1/2}\bigg(\int_{\tau_j}^{\tau_j+\delta}\mathbb{E}|\Sigma(E^N(t))|^4\,dt\bigg)^{1/2} \le C\delta^{1/2}\bigg(\int_0^T \mathbb{E}|\Sigma(E^N(t))|^4\,dt\bigg)^{1/2} \le C\delta^{1/2}\sup_{t\in[0,T]}\mathbb{E}|\Sigma(E^N(t))|^2. \tag{3.13}
\]
This implies
\begin{align*}
\mathbb{E}|X^{1,N}(\tau_j+\delta) - X^{1,N}(\tau_j)|^2 &\le C\delta\,\mathbb{E}\int_{\tau_j}^{\tau_j+\delta}\big|\Sigma(E^N(t))\nabla U(X^{1,N}(t))\big|^2\,dt + C\delta^{1/2}\sup_{t\in[0,T]}\mathbb{E}|\Sigma(E^N(t))|^2 \\
&\le C\delta\sup_{t\in[0,T]}\mathbb{E}\big|\Sigma(E^N(t))\nabla U(X^{1,N}(t))\big|^2 + C\delta^{1/2}\sup_{t\in[0,T]}\mathbb{E}|\Sigma(E^N(t))|^2.
\end{align*}
Using the moment bounds from (3.7), we get
\[
\mathbb{E}|X^{1,N}(\tau_j+\delta) - X^{1,N}(\tau_j)|^2 \le C\delta^{1/2}. \tag{3.14}
\]
It is not difficult to see that for all $\epsilon > 0$ and $\zeta > 0$ there exists $\delta_0$ such that for all $\delta\in(0,\delta_0)$ we have
\[
\sup_{\delta\in(0,\delta_0)}\mathbb{P}\big(|X^{1,N}(\tau_j+\delta) - X^{1,N}(\tau_j)| > \zeta\big) \le \sup_{\delta\in(0,\delta_0)}\frac{\mathbb{E}|X^{1,N}(\tau_j+\delta) - X^{1,N}(\tau_j)|^2}{\zeta^2} \le \epsilon, \tag{3.15}
\]
where we have applied Markov's inequality. The choice of $\delta_0$ depends on the positive constant $C$ appearing on the right-hand side of (3.14).
This ensures that the second condition in the Aldous criterion holds.

Having established the tightness of $(\mathcal{L}_{X^{1,N}})_{N\in\mathbb{N}}$ in $\mathcal{P}(C([0,T];\mathbb{R}^d))$, and hence the tightness of $(\mathcal{L}(E^N))_{N\in\mathbb{N}}$ in $\mathcal{P}(\mathcal{P}(C([0,T];\mathbb{R}^d)))$, using Prokhorov's theorem [Bil13] we can infer that there exist a subsequence $(\mathcal{L}(E^{N_k}))_{N_k\in\mathbb{N}}$ of $(\mathcal{L}(E^N))_{N\in\mathbb{N}}$ and a random measure $\mu$ such that $\mathcal{L}(E^{N_k})$ converges to $\mu$ as $N_k\to\infty$.

3.1.2 Identification of limit

The PDE (in the weak sense) associated with the measure-dependent Langevin dynamics is given by
\begin{align}
\langle \phi(y), \mathcal{L}_{X(t)}(dy)\rangle - \langle \phi(y), \mathcal{L}_{X(0)}(dy)\rangle ={}& -\int_0^t \big\langle \Sigma(\mathcal{L}_{X(s)})\nabla U(y)\cdot\nabla\phi(y), \mathcal{L}_{X(s)}(dy)\big\rangle\,ds \nonumber\\
&+ \int_0^t \frac{1}{\beta}\big\langle \mathrm{trace}\big(\nabla^2\phi(y)\Sigma(\mathcal{L}_{X(s)})\big), \mathcal{L}_{X(s)}(dy)\big\rangle\,ds, \tag{3.16}
\end{align}
where $\phi\in C_c^2(\mathbb{R}^d)$. Introduce the following functional on $\mathcal{P}(C([0,T];\mathbb{R}^d))$, for $t\in[0,T]$ and $\phi\in C_c^2(\mathbb{R}^d)$:
\begin{align}
\Psi_t^\phi(\nu) :={}& \langle \phi(y_t), \nu(dy)\rangle - \langle \phi(y_0), \nu(dy)\rangle + \int_0^t \big\langle \Sigma(\nu_s)\nabla U(y_s)\cdot\nabla\phi(y_s), \nu(dy)\big\rangle\,ds \nonumber\\
&- \int_0^t \frac{1}{\beta}\big\langle \mathrm{trace}\big(\mathrm{Hess}\,\phi(y_s)\Sigma(\nu_s)\big), \nu_s\big\rangle\,ds. \tag{3.17}
\end{align}
We have already established the convergence of a subsequence of $(\mathcal{L}(E^N))_{N\in\mathbb{N}}$ to a random measure $\mu\in\mathcal{P}(\mathcal{P}(C([0,T];\mathbb{R}^d)))$. The next step is to show that this random measure is nothing but $\mu := \delta_{\mathcal{L}_X}$. To this end, we establish the following bound for all $\phi\in C_c^2(\mathbb{R}^d)$:
\[
\mathbb{E}|\Psi_t^\phi(E^N)|^2 \le \frac{C}{N}, \tag{3.18}
\]
where $C > 0$ is independent of $N$.
Using Itô's formula, we have
\begin{align}
\phi(X^{i,N}(t)) ={}& \phi(X^{i,N}(0)) - \int_0^t \big\langle \nabla\phi(X^{i,N}(s)), \Sigma(E^N(s))\nabla U(X^{i,N}(s))\big\rangle\,ds \nonumber\\
&+ \int_0^t \frac{1}{\beta N}\,\mathrm{trace}\big(\nabla^2\phi(X^{i,N}(s))Q^N(s)(Q^N)^\top(s)\big)\,ds + \int_0^t \sqrt{\frac{2}{\beta N}}\big\langle \nabla\phi(X^{i,N}(s)), Q^N(s)\,dB^i(s)\big\rangle. \tag{3.19}
\end{align}
This implies, using (3.17), that
\[
\Psi_t^\phi(E^N) = \frac{1}{N}\sum_{i=1}^N \int_0^t \sqrt{\frac{2}{\beta N}}\big\langle \nabla\phi(X^{i,N}(s)), Q^N(s)\,dB^i(s)\big\rangle, \tag{3.20}
\]
which, using the martingale property of the Itô integral, results in the following:
\[
\mathbb{E}|\Psi_t^\phi(E^N)|^2 = \frac{1}{N^2}\sum_{i=1}^N \mathbb{E}\bigg|\int_0^t \sqrt{\frac{2}{\beta N}}\big\langle \nabla\phi(X^{i,N}(s)), Q^N(s)\,dB^i(s)\big\rangle\bigg|^2. \tag{3.21}
\]
Using Itô's isometry, we obtain
\[
\mathbb{E}|\Psi_t^\phi(E^N)|^2 = \frac{1}{N^2}\frac{2}{\beta}\sum_{i=1}^N \int_0^t \mathbb{E}\big\langle \nabla\phi(X^{i,N}(s)), \Sigma(E^N(s))\nabla\phi(X^{i,N}(s))\big\rangle\,ds, \tag{3.22}
\]
which, using (3.8), gives
\[
\mathbb{E}|\Psi_t^\phi(E^N)|^2 \le \frac{C}{N}, \tag{3.23}
\]
thus establishing (3.18). Note that for any bounded continuous function $f$, we have
\[
\int_{\mathbb{R}^d} f(x)\,E^N(t)(dx) \to \int_{\mathbb{R}^d} f(x)\,\mu_t(dx) \quad\text{as } N\to\infty, \tag{3.24}
\]
and the family of random variables $\{\Sigma(E^N(t))\}_{N\in\mathbb{N}}$ is uniformly integrable due to (3.8). Consequently, we have (see, e.g., [VdV98, Thm. 2.20])
\[
\Sigma(E^N(t)) \to \Sigma(\mu_t) \quad\text{as } N\to\infty. \tag{3.25}
\]
This ensures that $\Psi_t^\phi(\nu)$ is a continuous function of $\nu$. Also, from (3.23) it is clear that the family of random variables $\{|\Psi_t^\phi(E^N)|^2\}_{N\in\mathbb{N}}$ is uniformly integrable. Hence, by Fatou's lemma, we have
\[
\mathbb{E}|\Psi_t^\phi(\mu)|^2 = \mathbb{E}\lim_{N\to\infty}|\Psi_t^\phi(E^N)|^2 = \lim_{N\to\infty}\mathbb{E}|\Psi_t^\phi(E^N)|^2 = 0. \tag{3.26}
\]
This implies that $\mu$ satisfies the PDE (3.16) (in the weak sense):
\[
\Psi_t^\phi(\mu) = 0, \quad\text{a.s.} \tag{3.27}
\]

3.1.3 Uniqueness of limit

The uniqueness of the solution (in the weak sense) of the PDE (3.16) follows from the pathwise uniqueness of the solution of (1.3a) [Vae24].
This implies that any arbitrary subsequence of $\{E^N(t)\}_{N\in\mathbb{N}}$ has a convergent further subsequence, and all these subsequences converge to the same limit. Hence the sequence itself converges, which completes the proof.

4 Time discretization analysis

The implementation of (3.5) and (3.4) requires numerical discretization. Let the final time $T$ be fixed. We uniformly partition $[0,T]$ into $n$ sub-intervals of size $h$, i.e., $t_0 = 0 < \cdots < t_n = T$, $h = t_{k+1} - t_k = T/n$. Let $\alpha$ be a constant belonging to the interval $(0,1/2)$. We denote the approximating Markov
chain as $(X_k^{i,N})_{k=0}^n$, starting from $X_0^i$ for all $i = 1,\ldots,N$. We also denote $\beta_k := \beta(t_k)$ and $\Sigma_k^N := \Sigma(E_k^N)$, where
\[
E_k^N = \frac{1}{N}\sum_{i=1}^N \delta_{X_k^{i,N}}, \tag{4.1}
\]
and therefore
\[
\Sigma(E_k^N) = \frac{1}{N}Q_k^N(Q_k^N)^\top \tag{4.2}
\]
with $Q_k^N = [X_k^{1,N} - M_k,\ldots,X_k^{N,N} - M_k]$ and $M_k = \frac{1}{N}\sum_{i=1}^N X_k^{i,N}$. As is known in the stochastic numerics literature (see [HJK11]), the Euler--Maruyama scheme may diverge in the strong sense for coefficients with growth higher than linear. Explicit schemes have been proposed to deal with the issue of unbounded moments arising due to non-linear growth [HJK12, TZ13]. Here, following [HJK12], we present tamed schemes to deal with the non-linear growth of the coefficients. We present two versions of the tamed Euler--Maruyama scheme:

(i) Particle-wise tamed Euler--Maruyama scheme:
\[
X_{k+1}^{i,N} = X_k^{i,N} - B_1(h, X_k^{i,N})\Sigma_k^N\nabla U(X_k^{i,N})\,h + \sqrt{\frac{2}{\beta N}}\,Q_k^N\xi_{k+1}^i\,h^{\frac{1}{2}}, \qquad k = 0,\ldots,n-1, \tag{4.3}
\]
with
\[
B_1(h, X_k^{i,N}) = \mathrm{Diag}\bigg(\frac{1}{1+h^\alpha|\Sigma_k^N\nabla U(X_k^{i,N})|},\;\ldots,\;\frac{1}{1+h^\alpha|\Sigma_k^N\nabla U(X_k^{i,N})|}\bigg),
\]
where $|\Sigma_k^N\nabla U(X_k^{i,N})|$ is the Euclidean norm and $\xi_{k+1}^i$ is a standard normal $N$-dimensional random vector, for all $i = 1,\ldots,N$ and $k = 0,\ldots,n-1$.

(ii) Coordinate-wise tamed Euler--Maruyama scheme:
\[
X_{k+1}^{i,N} = X_k^{i,N} - B_2(h, X_k^{i,N})\Sigma_k^N\nabla U(X_k^{i,N})\,h + \sqrt{\frac{2}{\beta_k N}}\,Q_k^N\xi_{k+1}^i\,h^{\frac{1}{2}}, \qquad k = 0,\ldots,n-1, \tag{4.4}
\]
with
\[
B_2(h, X_k^{i,N}) = \mathrm{Diag}\bigg(\frac{1}{1+h^\alpha|(\Sigma_k^N\nabla U(X_k^{i,N}))_1|},\;\ldots,\;\frac{1}{1+h^\alpha|(\Sigma_k^N\nabla U(X_k^{i,N}))_d|}\bigg),
\]
where $(\Sigma_k^N\nabla U(X_k^{i,N}))_j$ denotes the $j$-th coordinate of $\Sigma_k^N\nabla U(X_k^{i,N})$.

In (4.3), due to the particle-wise taming, we have
\[
\frac{1}{|B_1(h, X_k^{i,N})\Sigma_k^N\nabla U(X_k^{i,N})|} = \frac{1}{|\Sigma_k^N\nabla U(X_k^{i,N})|} + h^\alpha, \tag{4.5}
\]
and therefore the following bound holds:
\[
|B_1(h, X_k^{i,N})\Sigma_k^N\nabla U(X_k^{i,N})| \le \min\bigg(\frac{1}{h^\alpha},\,|\Sigma_k^N\nabla U(X_k^{i,N})|\bigg). \tag{4.6}
\]
In a similar manner, the following is true for coordinate-wise taming:
\[
|(B_2(h, X_k^{i,N})\Sigma_k^N\nabla U(X_k^{i,N}))_j| \le \min\bigg(\frac{1}{h^\alpha},\,|(\Sigma_k^N\nabla U(X_k^{i,N}))_j|\bigg), \qquad j = 1,\ldots,d. \tag{4.7}
\]
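A minimal sketch of one step of the particle-wise scheme (4.3) is given below. It is our own illustration, not the paper's code: the quartic potential $U(x)=|x|^4/4$ (so $\nabla U(x)=|x|^2x$ is superlinear), the dimensions, and all numerical values are arbitrary choices. The step also checks the taming bound (4.6) on the fly.

```python
import numpy as np

def tamed_euler_step(X, h, alpha, beta, rng):
    """One step of (4.3); X is a d x N array whose columns are particles."""
    d, N = X.shape
    M = X.mean(axis=1, keepdims=True)
    Q = X - M                                   # deviation matrix Q_k^N
    Sigma = (Q @ Q.T) / N                       # ensemble covariance
    gradU = np.sum(X ** 2, axis=0) * X          # grad U for U(x) = |x|^4/4
    drift = Sigma @ gradU                       # Sigma_k^N grad U(X^i) per column
    norms = np.linalg.norm(drift, axis=0)       # per-particle |Sigma grad U|
    tamed = drift / (1.0 + h ** alpha * norms)  # B_1 applied particle-wise
    # taming bound (4.6): |tamed drift| <= min(h^{-alpha}, |drift|)
    assert np.all(np.linalg.norm(tamed, axis=0)
                  <= np.minimum(h ** (-alpha), norms) + 1e-12)
    xi = rng.normal(size=(N, N))                # column i = xi^i_{k+1} in R^N
    return X - tamed * h + np.sqrt(2.0 / (beta * N)) * (Q @ xi) * np.sqrt(h)

rng = np.random.default_rng(2)
X = rng.normal(size=(2, 20)) * 5.0              # large initial particles
for _ in range(100):
    X = tamed_euler_step(X, h=0.01, alpha=0.25, beta=1.0, rng=rng)
assert np.isfinite(X).all()                     # no blow-up despite |x|^2 x drift
```

With the same initial data, the untamed step $-\Sigma\nabla U\,h$ overshoots and diverges within a few iterations, which is precisely the moment-explosion phenomenon the taming factor suppresses.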
Remark 4.1 (Balancing technique). Writing the schemes with a preconditioning of the drift by a matrix $B$, one can imagine other possible choices of balancing matrices $B$. This particular idea of balancing appears in [MPS98] for stiff SDEs, and has been utilized in [TZ13] (see also [MT04]) to design balanced methods that deal with non-globally Lipschitz coefficients. In this context, taming can be considered as one particular choice of balancing matrix.

In this section, we aim to prove convergence of the tamed numerical scheme (4.3) to its continuous limit as the discretization step $h\to 0$. The techniques employed to obtain this convergence can also be used to obtain the convergence of (4.4); here, however, we will only focus on (4.3). The main issue is to obtain convergence uniform in $N$, where $N$ represents the number of particles, considering that we have a non-globally Lipschitz drift coefficient. To this end, we let $\tau_h(t) = t_k$ for all $t_k \le t < t_{k+1}$ and write the continuous-time version of the tamed Euler scheme (4.3) as follows:
\[
dY^{i,N}(t) = F\big(Y^{i,N}(\tau_h(t)), \Sigma_Y^N(\tau_h(t))\big)\,dt + \sqrt{\frac{2}{\beta N}}\,Q_Y^N(\tau_h(t))\,dB^i(t), \tag{4.8}
\]
where, for brevity, we have denoted
\[
F\big(Y^{i,N}(\tau_h(t)), \Sigma_Y^N(\tau_h(t))\big) = -\frac{\Sigma_Y^N(\tau_h(t))\nabla U(Y^{i,N}(\tau_h(t)))}{1 + h^\alpha\big|\Sigma_Y^N(\tau_h(t))\nabla U(Y^{i,N}(\tau_h(t)))\big|} \tag{4.9}
\]
with $\alpha\in(0,1/2)$. Also, we have denoted
\[
\Sigma_Y^N(\tau_h(t)) := \frac{1}{N}Q_Y^N(\tau_h(t))Q_Y^N(\tau_h(t))^\top, \tag{4.10}
\]
where
\[
Q_Y^N(\tau_h(t)) = \big[Y^{1,N}(\tau_h(t)) - M_Y^N(\tau_h(t)),\;\ldots,\;Y^{N,N}(\tau_h(t)) - M_Y^N(\tau_h(t))\big]
\]
with $M_Y^N(\tau_h(t))$ being the ensemble mean given by
\[
M_Y^N(\tau_h(t)) := \frac{1}{N}\sum_{j=1}^N Y^{j,N}(\tau_h(t)). \tag{4.11}
\]
In this section, we will need to strengthen Assumption 3.1 as follows:

Assumption 4.1. Let Assumption 3.1 hold. Moreover, let $U\in C^4(\mathbb{R}^d)$ and assume that its third- and fourth-order partial derivatives are uniformly bounded in $x\in\mathbb{R}^d$.
The main result of this section is the following theorem.

Theorem 4.1. Let Assumption 4.1 hold. Let $\sup_{1\le i\le N}\mathbb{E}|X^{i,N}(0)|^2 < \infty$, $\sup_{1\le i\le N}\mathbb{E}|Y^{i,N}(0)|^2 < \infty$, and $|X^{i,N}(0) - Y^{i,N}(0)| \le Ch^{1/2}$ with $C > 0$ independent of $N$ and $h$. Then, for all $t\in[0,T]$,
\[
\lim_{h\to 0}\sup_{i=1,\ldots,N}\big(\mathbb{E}|X^{i,N}(t) - Y^{i,N}(t)|^2\big)^{1/2} = 0. \tag{4.12}
\]
The first step towards establishing the convergence result in the above theorem is to obtain moment bounds that are independent of $N$ and $h$.

Lemma 4.2. Let Assumption 4.1 be satisfied, and let $p\ge 1$ be a constant. Let $\sup_{1\le i\le N}\mathbb{E}|Y^{i,N}(0)|^{2p} \le C$ with $C > 0$ independent of $N$ and $h$. Then the following holds:
\[
\sup_{1\le i\le N}\mathbb{E}|Y^{i,N}(t)|^{2p} \le C, \tag{4.13}
\]
where $C > 0$ is independent of $h$ as well as of $N$.

In the proof below, we use Grönwall-type arguments to obtain
\[
\sup_{1\le i\le N}\mathbb{E}\sup_{t\in[0,T]}U^p(Y^{i,N}(t)) \le C, \tag{4.14}
\]
which, thanks to Assumption 4.1, provides the required moment bound (4.13).

Proof. Applying Itô's formula to $U^p(Y^{i,N}(t))$, we get
\begin{align}
dU^p(Y^{i,N}(t)) ={}& pU^{p-1}(Y^{i,N}(t))\big\langle \nabla U(Y^{i,N}(t)), F\big(Y^{i,N}(\tau_h(t)), \Sigma_Y^N(\tau_h(t))\big)\big\rangle\,dt \nonumber\\
&+ p(p-1)\frac{1}{\beta}U^{p-2}(Y^{i,N}(t))\,\mathrm{trace}\big(\nabla U(Y^{i,N}(t))\nabla U^\top(Y^{i,N}(t))\Sigma_Y^N(\tau_h(t))\big)\,dt \nonumber\\
&+ p\frac{1}{\beta}U^{p-1}(Y^{i,N}(t))\,\mathrm{trace}\big(\nabla^2 U(Y^{i,N}(t))\Sigma_Y^N(\tau_h(t))\big)\,dt \nonumber\\
&+ p\sqrt{\frac{2}{\beta N}}\,U^{p-1}(Y^{i,N}(t))\big\langle \nabla U(Y^{i,N}(t)), Q_Y^N(\tau_h(t))\,dB^i(t)\big\rangle. \tag{4.15}
\end{align}
For the sake of convenience, we denote
\[
G(Y^{i,N}(t)) := \nabla U(Y^{i,N}(t)). \tag{4.16}
\]
Applying Itô's formula to the $j$-th component of $G$, we arrive at
\begin{align}
G_j(Y^{i,N}(t)) ={}& G_j(Y^{i,N}(\tau_h(t))) + \int_{\tau_h(t)}^t \big\langle \nabla G_j(Y^{i,N}(s)), F\big(Y^{i,N}(\tau_h(s)), \Sigma_Y^N(\tau_h(s))\big)\big\rangle\,ds \nonumber\\
&+ \int_{\tau_h(t)}^t \frac{1}{\beta}\,\mathrm{trace}\big(\nabla^2 G_j(Y^{i,N}(s))\Sigma_Y^N(\tau_h(s))\big)\,ds + \int_{\tau_h(t)}^t \sqrt{\frac{2}{\beta N}}\big\langle \nabla G_j(Y^{i,N}(s)), Q_Y^N(\tau_h(s))\,dB^i(s)\big\rangle. \tag{4.17}
\end{align}
We will estimate the terms in (4.15) one by one.
Let us start with the first term in (4.15):
\begin{align}
A_1 :={}& \big\langle \nabla U(Y^{i,N}(t)), F\big(Y^{i,N}(\tau_h(t)), \Sigma_Y^N(\tau_h(t))\big)\big\rangle \nonumber\\
={}& \big\langle \nabla U(Y^{i,N}(t)) - \nabla U(Y^{i,N}(\tau_h(t))), F\big(Y^{i,N}(\tau_h(t)), \Sigma_Y^N(\tau_h(t))\big)\big\rangle \nonumber\\
&+ \big\langle \nabla U(Y^{i,N}(\tau_h(t))), F\big(Y^{i,N}(\tau_h(t)), \Sigma_Y^N(\tau_h(t))\big)\big\rangle. \tag{4.18}
\end{align}
Note that
\[
\big\langle \nabla U(Y^{i,N}(\tau_h(t))), F\big(Y^{i,N}(\tau_h(t)), \Sigma_Y^N(\tau_h(t))\big)\big\rangle \le 0 \tag{4.19}
\]
due to the fact that $\Sigma_Y^N(\tau_h(t))$ is an ensemble covariance and hence a positive semi-definite matrix (see (4.9)). We will now utilize (4.17) to get
\begin{align}
A_1 \le{}& \bigg|\sum_{j=1}^d F_j\bigg(\int_{\tau_h(t)}^t \big\langle \nabla G_j(Y^{i,N}(s)), F\big(Y^{i,N}(\tau_h(s)), \Sigma_Y^N(\tau_h(s))\big)\big\rangle\,ds\bigg)\bigg| \nonumber\\
&+ \bigg|\sum_{j=1}^d F_j\int_{\tau_h(t)}^t \frac{1}{\beta}\,\mathrm{trace}\big(\nabla^2 G_j(Y^{i,N}(s))\Sigma_Y^N(\tau_h(s))\big)\,ds\bigg| \nonumber\\
&+ \bigg|\sum_{j=1}^d F_j\int_{\tau_h(t)}^t \sqrt{\frac{2}{\beta N}}\big\langle \nabla G_j(Y^{i,N}(s)), Q_Y^N(\tau_h(s))\,dB^i(s)\big\rangle\bigg|, \tag{4.20}
\end{align}
where $F_j$ denotes the $j$-th component of $F\big(Y^{i,N}(\tau_h(t)), \Sigma_Y^N(\tau_h(t))\big)$.
Using (4.6) and the fact that the Frobenius norm of the Hessian of $U$ is bounded, we obtain
\begin{align}
&\bigg|\sum_{j=1}^d F_j\bigg(\int_{\tau_h(t)}^t \big\langle \nabla G_j(Y^{i,N}(s)), F\big(Y^{i,N}(\tau_h(s)), \Sigma_Y^N(\tau_h(s))\big)\big\rangle\,ds\bigg)\bigg| \nonumber\\
&\qquad \le \frac{1}{h^\alpha}\sum_{j=1}^d \bigg|\int_{\tau_h(t)}^t \big\langle \nabla G_j(Y^{i,N}(s)), F\big(Y^{i,N}(\tau_h(s)), \Sigma_Y^N(\tau_h(s))\big)\big\rangle\,ds\bigg| \nonumber\\
&\qquad \le \frac{1}{h^{2\alpha}}\int_{\tau_h(t)}^t \sum_{j=1}^d \big|\nabla G_j(Y^{i,N}(s))\big|\,ds \le Ch^{1-2\alpha}, \tag{4.21}
\end{align}
where $C > 0$ is independent of $h$ and $N$. Dealing with the second term on the right-hand side of (4.20) results in
\begin{align}
\bigg|\sum_{j=1}^d F_j\int_{\tau_h(t)}^t \frac{1}{\beta}\,\mathrm{trace}\big(\nabla^2 G_j(Y^{i,N}(s))\Sigma_Y^N(\tau_h(s))\big)\,ds\bigg| &\le C\frac{1}{h^\alpha}\int_{\tau_h(t)}^t \sum_{j=1}^d \big|\nabla^2 G_j(Y^{i,N}(s))\big|\,\big|\Sigma_Y^N(\tau_h(s))\big|\,ds \nonumber\\
&\le C\frac{1}{h^\alpha}\int_{\tau_h(t)}^t \big|\Sigma_Y^N(\tau_h(s))\big|\,ds \le Ch^{1-\alpha}\big|\Sigma_Y^N(\tau_h(t))\big|, \tag{4.22}
\end{align}
since the third and fourth derivatives of $U$ are bounded. Note that
\[
\big|\Sigma_Y^N(t)\big| \le C\Big(1 + \frac{1}{N}\sum_{i=1}^N |Y^{i,N}(t)|^2\Big) \le C\Big(1 + \frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(t))\Big) \tag{4.23}
\]
since $U$ has at least quadratic growth outside a compact set $\mathcal{K}$ (see Assumption 3.1).
Consequently, we obtain
\[
\bigg|\sum_{j=1}^d F_j\int_{\tau_h(t)}^t \frac{1}{\beta}\,\mathrm{trace}\big(\nabla^2 G_j(Y^{i,N}(s))\Sigma_Y^N(\tau_h(s))\big)\,ds\bigg| \le Ch^{1-\alpha}\Big(1 + \frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\tau_h(t)))\Big). \tag{4.24}
\]
In a similar manner, we have the following bound for the third term on the right-hand side of (4.20):
\[
\bigg|\sum_{j=1}^d F_j\int_{\tau_h(t)}^t \sqrt{\frac{2}{\beta N}}\big\langle \nabla G_j(Y^{i,N}(s)), Q_Y^N(\tau_h(s))\,dB^i(s)\big\rangle\bigg| \le \frac{1}{h^\alpha}\sum_{j=1}^d \bigg|\int_{\tau_h(t)}^t \sqrt{\frac{2}{\beta N}}\big\langle \nabla G_j(Y^{i,N}(s)), Q_Y^N(\tau_h(s))\,dB^i(s)\big\rangle\bigg|. \tag{4.25}
\]
Combining (4.21), (4.24) and (4.25) yields
\[
A_1 \le Ch^{1-2\alpha} + Ch^{1-\alpha}\Big(1 + \frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\tau_h(t)))\Big) + \frac{1}{h^\alpha}\sum_{j=1}^d \bigg|\int_{\tau_h(t)}^t \sqrt{\frac{2}{\beta N}}\big\langle \nabla G_j(Y^{i,N}(s)), Q_Y^N(\tau_h(s))\,dB^i(s)\big\rangle\bigg|. \tag{4.26}
\]
To estimate a bound on the second term of (4.15), we utilize the Cauchy--Bunyakovsky--Schwarz inequality to obtain
\begin{align}
A_2 :={}& U^{p-2}(Y^{i,N}(t))\,\mathrm{trace}\big(\nabla U(Y^{i,N}(t))\nabla U^\top(Y^{i,N}(t))\Sigma_Y^N(\tau_h(t))\big) \le CU^{p-2}(Y^{i,N}(t))\big|\nabla U(Y^{i,N}(t))\big|^2\big|\Sigma_Y^N(\tau_h(t))\big| \nonumber\\
\le{}& C\big(1 + U^{p-1}(Y^{i,N}(t))\big)\Big(1 + \frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\tau_h(t)))\Big), \tag{4.27}
\end{align}
where we use Assumption 3.1 and (4.23) in the last step.
Analogously, for the third term of (4.15), an application of the Cauchy--Bunyakovsky--Schwarz inequality yields
\begin{align}
A_3 :={}& U^{p-1}(Y^{i,N}(t))\,\mathrm{trace}\big(\nabla^2 U(Y^{i,N}(t))\Sigma_Y^N(\tau_h(t))\big) \nonumber\\
\le{}& CU^{p-1}(Y^{i,N}(t))\Big(1 + \frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\tau_h(t)))\Big), \tag{4.28}
\end{align}
where we have utilized the fact that the norm of
the Hessian of the potential function is bounded due to Assumption 3.1. Substituting (4.26), (4.27) and (4.28) into (4.15), we obtain
\begin{align}
U^p(Y^{i,N}(t)) \le{}& U^p(Y^{i,N}(0)) + Ch^{1-2\alpha}\int_0^t U^{p-1}(Y^{i,N}(s))\,ds \nonumber\\
&+ Ch^{1-\alpha}\int_0^t U^{p-1}(Y^{i,N}(s))\Big(1 + \frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\tau_h(s)))\Big)\,ds \nonumber\\
&+ p\int_0^t U^{p-1}(Y^{i,N}(s))\frac{1}{h^\alpha}\sum_{j=1}^d \bigg|\int_{\tau_h(s)}^s \sqrt{\frac{2}{\beta N}}\big\langle \nabla G_j(Y^{i,N}(r)), Q_Y^N(\tau_h(r))\,dB^i(r)\big\rangle\bigg|\,ds \nonumber\\
&+ C\int_0^t U^{p-1}(Y^{i,N}(s))\Big(1 + \frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\tau_h(s)))\Big)\,ds \nonumber\\
&+ p\bigg|\int_0^t \sqrt{\frac{2}{\beta N}}\,U^{p-1}(Y^{i,N}(s))\big\langle \nabla U(Y^{i,N}(s)), Q_Y^N(\tau_h(s))\,dB^i(s)\big\rangle\bigg|. \tag{4.29}
\end{align}
Taking the supremum over $[0,T]$ and then expectations on both sides, we ascertain
\begin{align}
\mathbb{E}\sup_{t\in[0,T]}U^p(Y^{i,N}(t)) \le{}& U^p(Y^{i,N}(0)) + Ch^{1-2\alpha}\,\mathbb{E}\int_0^T U^{p-1}(Y^{i,N}(s))\,ds \nonumber\\
&+ Ch^{1-\alpha}\,\mathbb{E}\int_0^T U^{p-1}(Y^{i,N}(s))\Big(1 + \frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(s))\Big)\,ds \nonumber\\
&+ p\,\mathbb{E}\int_0^T U^{p-1}(Y^{i,N}(s))\frac{1}{h^\alpha}\sum_{j=1}^d \bigg|\int_{\tau_h(s)}^s \sqrt{\frac{2}{\beta N}}\big\langle \nabla G_j(Y^{i,N}(r)), Q_Y^N(\tau_h(r))\,dB^i(r)\big\rangle\bigg|\,ds \nonumber\\
&+ \mathbb{E}\int_0^T U^{p-1}(Y^{i,N}(s))\Big(1 + \frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(s))\Big)\,ds \nonumber\\
&+ p\,\mathbb{E}\sup_{t\in[0,T]}\bigg|\int_0^t \sqrt{\frac{2}{\beta N}}\,U^{p-1}(Y^{i,N}(s))\big\langle \nabla U(Y^{i,N}(s)), Q_Y^N(\tau_h(s))\,dB^i(s)\big\rangle\bigg|. \tag{4.30}
\end{align}
We will first estimate the following term appearing on the right-hand side of the above inequality:
\begin{align*}
D_1 :={}& \mathbb{E}\int_0^T U^{p-1}(Y^{i,N}(s))\frac{1}{h^\alpha}\sum_{j=1}^d \bigg|\int_{\tau_h(s)}^s \sqrt{\frac{2}{\beta N}}\big\langle \nabla G_j(Y^{i,N}(r)), Q_Y^N(\tau_h(r))\,dB^i(r)\big\rangle\bigg|\,ds \\
\le{}& C\int_0^T \mathbb{E}\sup_{s\in[0,t]}U^p(Y^{i,N}(s))\,dt + C\frac{1}{h^{\alpha p}}\int_0^T \sum_{j=1}^d \mathbb{E}\bigg|\int_{\tau_h(s)}^s \sqrt{\frac{2}{\beta N}}\big\langle \nabla G_j(Y^{i,N}(r)), Q_Y^N(\tau_h(r))\,dB^i(r)\big\rangle\bigg|^p\,ds,
\end{align*}
where we have used the generalized Young inequality. Applying the Burkholder--Davis--Gundy inequality, and using the boundedness of the norm of the Hessian of $U$ together with (4.23), we obtain
\begin{align}
D_1 \le{}& C\int_0^T \mathbb{E}\sup_{s\in[0,t]}U^p(Y^{i,N}(s))\,dt + \frac{C}{h^{\alpha p}}\int_0^T \sum_{j=1}^d \mathbb{E}\bigg(\int_{\tau_h(s)}^s \big|\nabla G_j(Y^{i,N}(r))\big|^2\big|\Sigma_Y^N(\tau_h(r))\big|\,dr\bigg)^{p/2}\,ds \nonumber\\
\le{}& C\int_0^T \mathbb{E}\sup_{s\in[0,t]}U^p(Y^{i,N}(s))\,dt + \frac{C}{h^{\alpha p}}\int_0^T \mathbb{E}\bigg(\int_{\tau_h(s)}^s \Big(1 + \frac{1}{N}\sum_{j=1}^N U(Y^{j,N}(\tau_h(r)))\Big)^2\,dr\bigg)^{p/2}\,ds \nonumber\\
\le{}& C\int_0^T \mathbb{E}\sup_{s\in[0,t]}U^p(Y^{i,N}(s))\,dt + Ch^{p(1/2-\alpha)}\int_0^T \mathbb{E}\Big(1 + \frac{1}{N}\sum_{j=1}^N U(Y^{j,N}(\tau_h(s)))\Big)^p\,ds \nonumber\\
\le{}& C\int_0^T \mathbb{E}\sup_{s\in[0,t]}U^p(Y^{i,N}(s))\,dt + Ch^{p(1/2-\alpha)}\int_0^T \mathbb{E}\sup_{s\in[0,t]}\Big(1 + \frac{1}{N}\sum_{j=1}^N U^p(Y^{j,N}(s))\Big)\,dt. \tag{4.31}
\end{align}
The last term remaining to be dealt with is
\[
D_2 := \mathbb{E}\sup_{t\in[0,T]}\bigg|\int_0^t \sqrt{\frac{2}{\beta N}}\,U^{p-1}(Y^{i,N}(s))\big\langle \nabla U(Y^{i,N}(s)), Q_Y^N(\tau_h(s))\,dB^i(s)\big\rangle\bigg|.
\]
Again using the Burkholder--Davis--Gundy inequality and (4.23), we get
\begin{align*}
D_2 \le{}& C\,\mathbb{E}\bigg(\int_0^T \frac{2}{\beta}U^{2p-2}(Y^{i,N}(s))\big|\nabla U(Y^{i,N}(s))\big|^2\big|\Sigma_Y^N(\tau_h(s))\big|\,ds\bigg)^{1/2} \\
\le{}& C\,\mathbb{E}\Bigg(\sup_{s\in[0,T]}U^{p-1}(Y^{i,N}(s))\bigg(\int_0^T \frac{2}{\beta}\big(1 + U(Y^{i,N}(s))\big)\Big(1 + \frac{1}{N}\sum_{j=1}^N U(Y^{j,N}(\tau_h(s)))\Big)\,ds\bigg)^{1/2}\Bigg) \\
\le{}& \frac{1}{2}\mathbb{E}\Big(\sup_{s\in[0,T]}U^p(Y^{i,N}(s))\Big) + C\,\mathbb{E}\bigg(\int_0^T \frac{2}{\beta}\big(1 + U(Y^{i,N}(s))\big)\Big(1 + \frac{1}{N}\sum_{j=1}^N U(Y^{j,N}(\tau_h(s)))\Big)\,ds\bigg)^{p/2},
\end{align*}
where we have used the generalized Young inequality. Using Hölder's inequality, we have
\begin{align}
D_2 \le{}& \frac{1}{2}\mathbb{E}\Big(\sup_{s\in[0,T]}U^p(Y^{i,N}(s))\Big) + C\,\mathbb{E}\bigg(\int_0^T \big(1 + U^{p/2}(Y^{i,N}(s))\big)\Big(1 + \frac{1}{N}\sum_{j=1}^N U(Y^{j,N}(\tau_h(s)))\Big)^{p/2}\,ds\bigg) \nonumber\\
\le{}& \frac{1}{2}\mathbb{E}\Big(\sup_{t\in[0,T]}U^p(Y^{i,N}(t))\Big) + C\int_0^T \Big(1 + \frac{1}{N}\sum_{j=1}^N \mathbb{E}\sup_{s\in[0,t]}U^p(Y^{j,N}(s))\Big)\,dt. \tag{4.32}
\end{align}
Plugging the estimates obtained in (4.31) and (4.32) into (4.30), we get
\[
\mathbb{E}\sup_{t\in[0,T]}U^p(Y^{i,N}(t)) \le 2U^p(Y^{i,N}(0)) + C\int_0^T \Big(1 + \frac{1}{N}\sum_{j=1}^N \mathbb{E}\sup_{s\in[0,t]}U^p(Y^{j,N}(s))\Big)\,dt + C\int_0^T \mathbb{E}\sup_{s\in[0,t]}U^p(Y^{i,N}(s))\,dt,
\]
since $h^{1-2\alpha}\le 1$ due to our choice of $\alpha\in(0,1/2)$. We mention again, for the convenience of the reader, that $C$ is a positive constant independent of $h$ and $N$. Taking the supremum over $i\in\{1,\ldots,N\}$ gives
\[
\sup_{i=1,\ldots,N}\mathbb{E}\sup_{t\in[0,T]}U^p(Y^{i,N}(t)) \le 2\sup_{i=1,\ldots,N}U^p(Y^{i,N}(0)) + C\int_0^T \sup_{i=1,\ldots,N}\mathbb{E}\sup_{s\in[0,t]}U^p(Y^{i,N}(s))\,dt,
\]
which, on applying Grönwall's lemma, provides the desired result.

Lemma 4.3. Let Assumption 3.1 be satisfied. Let $\sup_{1\le i\le N}\mathbb{E}|X^{i,N}(0)|^2 < \infty$ and $\sup_{1\le i\le N}\mathbb{E}|Y^{i,N}(0)|^2 < \infty$. Then the following bound holds:
\[
\sup_{1\le i\le N}\mathbb{E}|Y^{i,N}(t) - Y^{i,N}(\tau_h(t))|^2 \le Ch, \tag{4.33}
\]
where $C > 0$ is independent of $N$ and $h$.
Proof. With an application of the triangle inequality and Hölder's inequality, we have
\begin{align*}
|Y^{i,N}(t) - Y^{i,N}(\tau_h(t))|^2 \le{}& C\bigg(\bigg|\int_{\tau_h(t)}^t F\big(Y^{i,N}(\tau_h(s)), \Sigma_Y^N(\tau_h(s))\big)\,ds\bigg|^2 + \bigg|\int_{\tau_h(t)}^t \sqrt{\frac{2}{\beta N}}\,Q_Y^N(\tau_h(s))\,dB^i(s)\bigg|^2\bigg) \\
\le{}& C\,(t-\tau_h(t))\int_{\tau_h(t)}^t \big|F\big(Y^{i,N}(\tau_h(s)), \Sigma_Y^N(\tau_h(s))\big)\big|^2\,ds + \bigg|\int_{\tau_h(t)}^t \sqrt{\frac{2}{\beta N}}\,Q_Y^N(\tau_h(s))\,dB^i(s)\bigg|^2 \\
\le{}& Ch\int_{\tau_h(t)}^t \big|F\big(Y^{i,N}(\tau_h(s)), \Sigma_Y^N(\tau_h(s))\big)\big|^2\,ds + \bigg|\int_{\tau_h(t)}^t \sqrt{\frac{2}{\beta N}}\,Q_Y^N(\tau_h(s))\,dB^i(s)\bigg|^2,
\end{align*}
which, on using (4.6), gives
\[
|Y^{i,N}(t) - Y^{i,N}(\tau_h(t))|^2 \le Ch^{2(1-\alpha)} + C\bigg|\int_{\tau_h(t)}^t \sqrt{\frac{2}{\beta N}}\,Q_Y^N(\tau_h(s))\,dB^i(s)\bigg|^2. \tag{4.34}
\]
Taking expectations on both sides and applying Itô's isometry, we achieve the following bound:
\begin{align}
\mathbb{E}|Y^{i,N}(t) - Y^{i,N}(\tau_h(t))|^2 &\le Ch^{2(1-\alpha)} + C\,\mathbb{E}\int_{\tau_h(t)}^t \frac{1}{N}\big|Q_Y^N(\tau_h(s))\big(Q_Y^N(\tau_h(s))\big)^\top\big|\,ds \tag{4.35}\\
&\le Ch + C\,\mathbb{E}\int_{\tau_h(t)}^t \big|\Sigma_Y^N(\tau_h(s))\big|\,ds \le Ch, \nonumber
\end{align}
where $C > 0$ is independent of $N$ and $h$. Hence, the lemma is proved.

To obtain our main result, we employ a localization strategy based on stopping times. This is also employed in the interacting particle setting in [KST23] (for the propagation of chaos proof, as well as for the proof of the Euler scheme) and [Vae24] (for the propagation of chaos proof). More specifically, let
\[
\tau_R^1 = \inf\bigg\{t \ge 0 \;;\; \frac{1}{N}\sum_{i=1}^N |X^{i,N}(t)|^2 \ge R\bigg\}, \tag{4.36}
\]
\[
\tau_R^2 = \inf\bigg\{t \ge 0 \;;\; \frac{1}{N}\sum_{i=1}^N |Y^{i,N}(t)|^2 \ge R\bigg\}, \tag{4.37}
\]
and $\tau_R = \tau_R^1 \wedge \tau_R^2$.

Lemma 4.4. Let Assumption 3.1 hold. Then the following inequality is satisfied:
\begin{align*}
\mathbb{E}\int_0^{t\wedge\tau_R}\big|\Sigma(E^N(s))\nabla U(X^{i,N}(s)) + F(Y^{i,N}(\tau_h(s)))\big|^2\,ds \le{}& Ch^{2\alpha} + C(1+R)^2\,\mathbb{E}\int_0^t \bigg(|X^{i,N}(s\wedge\tau_R) - Y^{i,N}(s\wedge\tau_R)|^2 \\
&+ |Y^{i,N}(s\wedge\tau_R) - Y^{i,N}(\tau_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N |Y^{j,N}(s\wedge\tau_R) - Y^{j,N}(\tau_h(s\wedge\tau_R))|^2 \\
&+ \frac{1}{N}\sum_{j=1}^N |X^{j,N}(s\wedge\tau_R) - Y^{j,N}(s\wedge\tau_R)|^2\bigg)\,ds,
\end{align*}
where $X^{i,N}(s)$ is from (3.5), $Y^{i,N}(s)$ is from (4.8), and $C > 0$ is independent of $h$, $N$ and $R$.

Proof. By repeated use of the triangle inequality, we can split the integrand as
\begin{align}
\big|\Sigma^N(s)\nabla U(X^{i,N}(s)) + F(Y^{i,N}(\tau_h(s)))\big|^2 \le{}& \big|\Sigma^N(s)\nabla U(X^{i,N}(s)) - \Sigma_Y^N(s)\nabla U(Y^{i,N}(s))\big|^2 \nonumber\\
&+ \big|\Sigma_Y^N(s)\nabla U(Y^{i,N}(s)) - \Sigma_Y^N(\tau_h(s))\nabla U(Y^{i,N}(\tau_h(s)))\big|^2 \nonumber\\
&+ \big|\Sigma_Y^N(\tau_h(s))\nabla U(Y^{i,N}(\tau_h(s))) + F(Y^{i,N}(\tau_h(s)))\big|^2 \tag{4.38}\\
=:{}& \mathrm{(I)} + \mathrm{(II)} + \mathrm{(III)}. \tag{4.39}
\end{align}
We will analyze the three terms separately, starting with (I). To this end, we first split the term (I) further as follows:
\begin{align}
\mathrm{(I)} :={}& \big|\Sigma^N(s)\nabla U(X^{i,N}(s)) - \Sigma_Y^N(s)\nabla U(Y^{i,N}(s))\big|^2 \nonumber\\
\le{}& \big|\Sigma^N(s)\big(\nabla U(X^{i,N}(s)) - \nabla U(Y^{i,N}(s))\big)\big|^2 + \big|\big(\Sigma^N(s) - \Sigma_Y^N(s)\big)\nabla U(Y^{i,N}(s))\big|^2 =: B_1 + B_2. \tag{4.40}
\end{align}
We have the following bound for $B_1$, due to (3.8) and the fact that $\nabla U$ is Lipschitz:
\begin{align}
B_1 := \big|\Sigma^N(s)\big(\nabla U(X^{i,N}(s)) - \nabla U(Y^{i,N}(s))\big)\big|^2 &\le \big|\Sigma^N(s)\big|^2\big|\nabla U(X^{i,N}(s)) - \nabla U(Y^{i,N}(s))\big|^2 \nonumber\\
&\le C\Big[1 + \frac{1}{N}\sum_{j=1}^N |X^{j,N}(s)|^2\Big]^2|X^{i,N}(s) - Y^{i,N}(s)|^2, \tag{4.41}
\end{align}
where $C > 0$ is a positive constant independent of $N$ and $h$. Next, to bound $B_2$, note that
\begin{align*}
\big|Q^N(s) - Q_Y^N(s)\big| &= \Bigg(\sum_{i=1}^N \bigg|X^{i,N}(s) - Y^{i,N}(s) - \frac{1}{N}\sum_{j=1}^N \big(X^{j,N}(s) - Y^{j,N}(s)\big)\bigg|^2\Bigg)^{1/2} \\
&\le \sqrt{2}\Bigg(\sum_{i=1}^N |X^{i,N}(s) - Y^{i,N}(s)|^2 + N\bigg|\frac{1}{N}\sum_{j=1}^N \big(X^{j,N}(s) - Y^{j,N}(s)\big)\bigg|^2\Bigg)^{1/2}.
\end{align*}
This implies
\[
\frac{1}{\sqrt{N}}\big|Q^N(s) - Q_Y^N(s)\big| \le 2\Bigg(\frac{1}{N}\sum_{i=1}^N |X^{i,N}(s) - Y^{i,N}(s)|^2\Bigg)^{1/2}. \tag{4.42}
\]
In a similar manner, we have
\[
\frac{1}{\sqrt{N}}\big|Q^N(s)\big| \le 2\bigg(\frac{1}{N}\sum_{i=1}^N |X^{i,N}(s)|^2\bigg)^{1/2} \quad\text{and}\quad \frac{1}{\sqrt{N}}\big|Q_Y^N(s)\big| \le 2\bigg(\frac{1}{N}\sum_{i=1}^N |Y^{i,N}(s)|^2\bigg)^{1/2}. \tag{4.43}
\]
Using (4.42) and (4.43), we obtain
\begin{align*}
\big|\Sigma^N(s) - \Sigma_Y^N(s)\big| &= \Big|\frac{1}{N}Q^N(s)(Q^N(s))^\top - \frac{1}{N}Q_Y^N(s)(Q_Y^N(s))^\top\Big| \\
&\le \Big|\frac{1}{N}\big(Q^N(s) - Q_Y^N(s)\big)(Q^N(s))^\top\Big| + \Big|\frac{1}{N}Q_Y^N(s)\big((Q^N(s))^\top - (Q_Y^N(s))^\top\big)\Big| \\
&\le C\Bigg(\bigg(\frac{1}{N}\sum_{j=1}^N |X^{j,N}(s)|^2\bigg)^{1/2} + \bigg(\frac{1}{N}\sum_{j=1}^N |Y^{j,N}(s)|^2\bigg)^{1/2}\Bigg)\bigg(\frac{1}{N}\sum_{j=1}^N |X^{j,N}(s) - Y^{j,N}(s)|^2\bigg)^{1/2},
\end{align*}
where $C > 0$ is a positive constant independent of $N$ and $h$. Using the above calculation, we arrive at the following bound for $B_2$ in (4.40):
\begin{align}
B_2 :={}& \big|\big(\Sigma^N(s) - \Sigma_Y^N(s)\big)\nabla U(Y^{i,N}(s))\big|^2 \nonumber\\
\le{}& C\big|\nabla U(Y^{i,N}(s))\big|^2\bigg(\frac{1}{N}\sum_{j=1}^N |X^{j,N}(s)|^2 + \frac{1}{N}\sum_{j=1}^N |Y^{j,N}(s)|^2\bigg)\bigg(\frac{1}{N}\sum_{j=1}^N |X^{j,N}(s) - Y^{j,N}(s)|^2\bigg). \tag{4.44}
\end{align}
(4.44)

Due to the exchangeability of the discretized particle system, we have for all $i = 1,\dots,N$
\[
E\bigg( |\nabla U(Y_{i,N}(s\wedge\tau_R))|^2 \Big( \frac{1}{N}\sum_{j=1}^N |X_{j,N}(s\wedge\tau_R)|^2 + \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(s\wedge\tau_R)|^2 \Big) \times \frac{1}{N}\sum_{j=1}^N |X_{j,N}(s\wedge\tau_R) - Y_{j,N}(s\wedge\tau_R)|^2 \bigg)
\]
\[
= E\bigg( \frac{1}{N}\sum_{i=1}^N |\nabla U(Y_{i,N}(s\wedge\tau_R))|^2 \Big( \frac{1}{N}\sum_{j=1}^N |X_{j,N}(s\wedge\tau_R)|^2 + \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(s\wedge\tau_R)|^2 \Big) \times \frac{1}{N}\sum_{j=1}^N |X_{j,N}(s\wedge\tau_R) - Y_{j,N}(s\wedge\tau_R)|^2 \bigg).
\]
Using (4.41) and (4.44) yields
\[
E\int_0^{t\wedge\tau_R} (\mathrm{I}) \, ds \le C(1+R)^2 \int_0^t E\Big( |X_{i,N}(s\wedge\tau_R) - Y_{i,N}(s\wedge\tau_R)|^2 + \frac{1}{N}\sum_{j=1}^N |X_{j,N}(s\wedge\tau_R) - Y_{j,N}(s\wedge\tau_R)|^2 \Big) \, ds. \quad (4.45)
\]
In a similar manner, we have that
\[
(\mathrm{II}) := \big| \Sigma^N_Y(s)\nabla U(Y_{i,N}(s)) - \Sigma^N_Y(\pi_h(s))\nabla U(Y_{i,N}(\pi_h(s))) \big|^2 \le 2\,\big| \Sigma^N_Y(s)\big( \nabla U(Y_{i,N}(s)) - \nabla U(Y_{i,N}(\pi_h(s))) \big) \big|^2 + 2\,\big| \big( \Sigma^N_Y(s) - \Sigma^N_Y(\pi_h(s)) \big)\nabla U(Y_{i,N}(\pi_h(s))) \big|^2 =: 2\,(E_1 + E_2). \quad (4.46)
\]
Applying arguments analogous to those used in bounding $B_1$ and $B_2$, we obtain
\[
E_1 := \big| \Sigma^N_Y(s)\big( \nabla U(Y_{i,N}(s)) - \nabla U(Y_{i,N}(\pi_h(s))) \big) \big|^2 \le C\Big( 1 + \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(s)|^2 \Big)^2 |Y_{i,N}(s) - Y_{i,N}(\pi_h(s))|^2, \quad (4.47)
\]
and
\[
E_2 := \big| \big( \Sigma^N_Y(s) - \Sigma^N_Y(\pi_h(s)) \big)\nabla U(Y_{i,N}(\pi_h(s))) \big|^2 \le C\big( 1 + |Y_{i,N}(\pi_h(s))|^2 \big) \Big( 1 + \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(s)|^2 + \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(\pi_h(s))|^2 \Big) \times \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(s) - Y_{j,N}(\pi_h(s))|^2, \quad (4.48)
\]
where $C > 0$ is independent of $N$ and $h$.
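As a concrete aside, the quantities appearing in (4.42)-(4.44) are simply the matrix of centered particles $Q^N$ and the empirical ensemble covariance $\Sigma^N = \frac{1}{N} Q^N (Q^N)^\top$. The sketch below (function and variable names are ours, not the paper's) forms both and checks the covariance against NumPy's built-in estimator:

```python
import numpy as np

def ensemble_covariance(X):
    """Empirical covariance Sigma^N = (1/N) Q Q^T, where the columns of
    Q are the centered particles X_i - (1/N) sum_j X_j."""
    N = X.shape[1]                          # X is d x N: one particle per column
    Q = X - X.mean(axis=1, keepdims=True)   # centered ensemble matrix Q^N
    return (Q @ Q.T) / N, Q

rng = np.random.default_rng(0)
d, N = 2, 500
X = rng.normal(size=(d, N))
Sigma, Q = ensemble_covariance(X)
# Sanity check: Sigma agrees with NumPy's (population, i.e. biased) covariance.
assert np.allclose(Sigma, np.cov(X, bias=True))
```

The centering step is what makes the difference bound (4.42) a statement about the particle-wise errors $X_{i,N} - Y_{i,N}$ only.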
The bounds obtained in (4.47) and (4.48) imply that the following holds:
\[
E\int_0^{t\wedge\tau_R} (\mathrm{II}) \, ds \le C(1+R)^2 E\int_0^t \Big( |Y_{i,N}(s\wedge\tau_R) - Y_{i,N}(\pi_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(s\wedge\tau_R) - Y_{j,N}(\pi_h(s\wedge\tau_R))|^2 \Big) \, ds, \quad (4.49)
\]
where we have used Young's inequality in the last step.

We shift our attention to the remaining term, i.e., to term (III) in (4.38), which is defined as $(\mathrm{III}) := |\Sigma^N_Y(\pi_h(s))\nabla U(Y_{i,N}(\pi_h(s))) + F(Y_{i,N}(\pi_h(s)))|^2$. Note that
\[
\big| \Sigma^N_Y(\pi_h(s))\nabla U(Y_{i,N}(\pi_h(s))) + F(Y_{i,N}(\pi_h(s))) \big|^2 = \bigg| \Sigma^N_Y(\pi_h(s))\nabla U(Y_{i,N}(\pi_h(s))) - \frac{\Sigma^N_Y(\pi_h(s))\nabla U(Y_{i,N}(\pi_h(s)))}{1 + h^\alpha |\Sigma^N_Y(\pi_h(s))\nabla U(Y_{i,N}(\pi_h(s)))|} \bigg|^2 \le \frac{h^{2\alpha}\, |\Sigma^N_Y(\pi_h(s))\nabla U(Y_{i,N}(\pi_h(s)))|^4}{\big( 1 + h^\alpha |\Sigma^N_Y(\pi_h(s))\nabla U(Y_{i,N}(\pi_h(s)))| \big)^2}.
\]
This results in
\[
E\int_0^{t\wedge\tau_R} (\mathrm{III}) \, ds \le C h^{2\alpha} \int_0^t E\, \frac{|\Sigma^N_Y(\pi_h(s))\nabla U(Y_{i,N}(\pi_h(s)))|^4}{\big( 1 + h^\alpha |\Sigma^N_Y(\pi_h(s))\nabla U(Y_{i,N}(\pi_h(s)))| \big)^2} \, ds \le C h^{2\alpha}, \quad (4.50)
\]
where we have used the moment bounds from Lemma 4.2. Combining the estimates obtained in (4.45), (4.49) and (4.50), we ascertain
\[
E\int_0^{t\wedge\tau_R} \big| \Sigma^N(s)\nabla U(X_{i,N}(s)) + F(Y_{i,N}(\pi_h(s))) \big|^2 \, ds \le C h^{2\alpha} + C(1+R)^2 E\int_0^t \Big( |X_{i,N}(s\wedge\tau_R) - Y_{i,N}(s\wedge\tau_R)|^2
\]
\[
+ |Y_{i,N}(s\wedge\tau_R) - Y_{i,N}(\pi_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(s\wedge\tau_R) - Y_{j,N}(\pi_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N |X_{j,N}(s\wedge\tau_R) - Y_{j,N}(s\wedge\tau_R)|^2 \Big) \, ds.
\]
This completes the proof.

4.1 Proof of Theorem 4.1

Proof. We split the sample space as follows:
\[
E|X_{i,N}(t) - Y_{i,N}(t)|^2 = E\big( |X_{i,N}(t) - Y_{i,N}(t)|^2 I_{\Omega_R} \big) + E\big( |X_{i,N}(t) - Y_{i,N}(t)|^2 I_{\Omega_R^c} \big). \quad (4.51)
\]
As is clear, we have the following bound:
\[
E\big( |X_{i,N}(t) - Y_{i,N}(t)|^2 I_{\Omega_R} \big) \le E|X_{i,N}(t\wedge\tau_R) - Y_{i,N}(t\wedge\tau_R)|^2.
\]
Using Itô's formula,
https://arxiv.org/abs/2504.18139v1
we have
\[
|X_{i,N}(t\wedge\tau_R) - Y_{i,N}(t\wedge\tau_R)|^2 = |X_{i,N}(0) - Y_{i,N}(0)|^2 - 2\int_0^{t\wedge\tau_R} \big\langle X_{i,N}(s) - Y_{i,N}(s), \, \Sigma(E^N(s))\nabla U(X_{i,N}(s)) + F(Y_{i,N}(\pi_h(s))) \big\rangle \, ds
\]
\[
+ \int_0^{t\wedge\tau_R} \frac{2}{\beta N}\, \mathrm{trace}\big( [Q^N(s) - Q^N_Y(\pi_h(s))][Q^N(s) - Q^N_Y(\pi_h(s))]^\top \big) \, ds + 2\int_0^{t\wedge\tau_R} \sqrt{\frac{2}{\beta N}} \, \big\langle X_{i,N}(s) - Y_{i,N}(s), \, \big( Q^N(s) - Q^N_Y(\pi_h(s)) \big) \, dW_i(s) \big\rangle. \quad (4.52)
\]
Due to Doob's optional stopping theorem, we have
\[
E\bigg( \int_0^{t\wedge\tau_R} \sqrt{\frac{2}{\beta N}} \, \big\langle X_{i,N}(s) - Y_{i,N}(s), \, \big( Q^N(s) - Q^N_Y(\pi_h(s)) \big) \, dW_i(s) \big\rangle \bigg) = 0. \quad (4.53)
\]
Doob's theorem can be applied because $t\wedge\tau_R$ is a bounded stopping time and the moments are bounded (see Lemma 4.2). We have already dealt with the expectation of the second term on the right-hand side of (4.52) in Lemma 4.4. For the third term on the right-hand side of (4.52), we utilize (4.42) and (4.43) to arrive at the following estimate:
\[
E\bigg( \int_0^{t\wedge\tau_R} \frac{2}{\beta N}\, \mathrm{trace}\big( [Q^N(s) - Q^N_Y(\pi_h(s))][Q^N(s) - Q^N_Y(\pi_h(s))]^\top \big) \, ds \bigg) \le C E\int_0^{t\wedge\tau_R} \frac{1}{N}\sum_{j=1}^N |X_{j,N}(s) - Y_{j,N}(\pi_h(s))|^2 \, ds
\]
\[
\le C E\int_0^{t\wedge\tau_R} \frac{1}{N}\sum_{j=1}^N |X_{j,N}(s) - Y_{j,N}(s)|^2 \, ds + C E\int_0^{t\wedge\tau_R} \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(s) - Y_{j,N}(\pi_h(s))|^2 \, ds
\]
\[
\le C E\int_0^t \frac{1}{N}\sum_{j=1}^N |X_{j,N}(s\wedge\tau_R) - Y_{j,N}(s\wedge\tau_R)|^2 \, ds + C E\int_0^t \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(s\wedge\tau_R) - Y_{j,N}(\pi_h(s\wedge\tau_R))|^2 \, ds, \quad (4.54)
\]
where $C > 0$ is independent of $h$ and $N$. Combining the results of (4.53), (4.54) and Lemma 4.4, we get
\[
E|X_{i,N}(t\wedge\tau_R) - Y_{i,N}(t\wedge\tau_R)|^2 \le |X_{i,N}(0) - Y_{i,N}(0)|^2 + C h^{2\alpha} + C(1+R)^2 E\int_0^t \Big( |X_{i,N}(s\wedge\tau_R) - Y_{i,N}(s\wedge\tau_R)|^2
\]
\[
+ |Y_{i,N}(s\wedge\tau_R) - Y_{i,N}(\pi_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N |Y_{j,N}(s\wedge\tau_R) - Y_{j,N}(\pi_h(s\wedge\tau_R))|^2 + \frac{1}{N}\sum_{j=1}^N |X_{j,N}(s\wedge\tau_R) - Y_{j,N}(s\wedge\tau_R)|^2 \Big) \, ds.
\]
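The taming analyzed in term (III) replaces a possibly superlinear drift $b$ by $b/(1 + h^{\alpha}|b|)$, which bounds each increment while perturbing the drift only at order $h^{\alpha}$. A minimal illustrative sketch of one such tamed Euler-Maruyama step for a generic drift (our own notation and parameter choices, not the paper's exact scheme):

```python
import numpy as np

def tamed_euler_step(y, b, h, alpha, sigma, dW):
    """One tamed Euler-Maruyama step: the drift b(y) is replaced by
    b(y) / (1 + h**alpha * |b(y)|), so the increment stays bounded
    even when |b(y)| is large (superlinear drift)."""
    drift = b(y)
    tamed = drift / (1.0 + h**alpha * np.linalg.norm(drift))
    return y + tamed * h + sigma * dW

# With a cubic (superlinearly growing) drift, the tamed increment is bounded:
b = lambda y: -y**3
y = np.array([100.0])
y_next = tamed_euler_step(y, b, h=0.01, alpha=0.5, sigma=0.0, dW=0.0)
# Untamed Euler would jump by -y^3 * h = -10000; taming keeps the step of size
# roughly h**(1 - alpha) = 0.1 here.
assert np.linalg.norm(y_next - y) <= 0.1
```

The ratio in (4.50) is precisely the squared gap between the raw and tamed drifts, which is why (III) contributes only $Ch^{2\alpha}$.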
Using Lemma 4.3, we ascertain
\[
E|X_{i,N}(t\wedge\tau_R) - Y_{i,N}(t\wedge\tau_R)|^2 \le C h^{2\alpha} + C(1+R)^2 h + C(1+R)^2 \int_0^t \Big( E|X_{i,N}(s\wedge\tau_R) - Y_{i,N}(s\wedge\tau_R)|^2 + \frac{1}{N}\sum_{j=1}^N E|X_{j,N}(s\wedge\tau_R) - Y_{j,N}(s\wedge\tau_R)|^2 \Big) \, ds,
\]
and taking the supremum over $i = 1,\dots,N$ gives
\[
\sup_{i=1,\dots,N} E|X_{i,N}(t\wedge\tau_R) - Y_{i,N}(t\wedge\tau_R)|^2 \le C h^{2\alpha} + C(1+R)^2 h + C(1+R)^2 \int_0^t \sup_{i=1,\dots,N} E|X_{i,N}(s\wedge\tau_R) - Y_{i,N}(s\wedge\tau_R)|^2 \, ds.
\]
Applying Gronwall's lemma, we have
\[
E|X_{i,N}(t\wedge\tau_R) - Y_{i,N}(t\wedge\tau_R)|^2 \le C\big( h^{2\alpha} + (1+R)^2 h \big) e^{C(1+R)^2}.
\]
This implies
\[
E\big( |X_{i,N}(t) - Y_{i,N}(t)|^2 I_{\Omega_R} \big) \le C\big( h^{2\alpha} + (1+R)^2 h \big) e^{C(1+R)^2},
\]
where $C > 0$ is independent of $h$ and $N$. Using Hölder's inequality, Lemma 4.2 and Markov's inequality, we have
\[
E\big( |X_{i,N}(t) - Y_{i,N}(t)|^2 I_{\Omega_R^c} \big) \le C\big( E|X_{i,N}(t)|^4 + E|Y_{i,N}(t)|^4 \big)^{1/2} P(\Omega_R^c) \le C\, \frac{\frac{1}{N}\sum_{i=1}^N E|X_{i,N}(t)|^2}{R} \le \frac{C}{R},
\]
where $C > 0$ is independent of $h$, $N$ and $R$. With an appropriate choice of $R$ depending on $h$, we get the desired result.

Acknowledgments

This work was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
arXiv:2504.18769v1 [math.ST] 26 Apr 2025

A Simplified Condition For Quantile Regression

Liang Peng* and Yongcheng Qi†

Abstract

Quantile regression is effective in modeling and inferring the conditional quantile given some predictors and has become popular in risk management due to wide applications of quantile-based risk measures. When forecasting risk for economic and financial variables, quantile regression has to account for heteroscedasticity, which raises the question of whether the identification condition on residuals in quantile regression is equivalent to one independent of heteroscedasticity. In this paper, we present some identification conditions under three probability models and use them to establish simplified conditions in quantile regression.

Keywords: Conditional expectation, Heteroscedasticity, Residuals, Quantile regression

MSC 2020 classification: 60E05; 62J05; 62G05

1 Introduction

Since the distribution function and quantile function are two fundamental quantities in characterizing randomness, nonparametric distribution and quantile estimation have played a significant role in nonparametric statistics; see Shorack and Wellner (1986) for an overview of empirical processes and quantile processes. Like regression for modeling the relationship between response and predictors, quantile regression is an effective technique for modeling and inferring the conditional quantile of a response given predictors. We refer to Koenker (2005) for an overview of quantile regression techniques. Because quantile-based risk measures such as Value-at-Risk

*Maurice R. Greenberg School of Risk Science, Georgia State University, USA.
†Department of Mathematics and Statistics, University of Minnesota Duluth, USA.

are popular in risk management, quantile regression plays an important role in forecasting risk. Below are three particular applications of using quantile regression to forecast risk in insurance and econometrics.
The first example is risk forecasting in non-life insurance, where an actuarial dataset often includes the number of claims, the loss of each claim, and some characteristics, such as the policyholder's age, type of car, and driving experience in automobile insurance, for each policyholder. When a risk manager needs to forecast Value-at-Risk (VaR), a two-step inference procedure is employed in the literature: logistic regression for modeling the probability of having nonzero claims, and quantile regression for modeling Value-at-Risk at an adjusted risk level computed from the logistic regression; see Heras, Moreno and Vilar-Zanón (2018), Kang, Peng and Golub (2021), and Fung et al. (2024). Because of many zero claims, the first step of logistic regression improves the inference for the probability of nonzero claims, while the second step of quantile regression effectively models and infers the quantile-based risk measure, leading to accurate risk forecasts. Also, the analysis often assumes independence across policyholders, which is practically sound.

The second example is about predictability in financial econometrics. Because of heterogeneous quantiles (see Gebka and Wohar (2013) and Ma, Xiao and Ma (2018)), researchers in econometrics are interested in using quantile regression to test for the predictability of some economic predictors; see Xiao (2009) for a unit root predictor, Xu (2021) for the case of a stationary predictor, and Lee (2016), Fan and Lee (2019), and Liu et al. (2023)
https://arxiv.org/abs/2504.18769v1
for some unified tests regardless of the predictor being stationary, near unit root, or unit root.

The third example is systemic risk, which has been a major concern in the financial industry and insurance business since the 2008 global financial crisis. A popular systemic risk measure is the so-called CoVaR, defined as the conditional quantile of the system loss at risk level $q$ given some predictors and given that an individual loss equals its Value-at-Risk (VaR) at the same risk level $q$ (see Adrian and Brunnermeier (2016)). Various applications and extensions of the systemic risk measure CoVaR have appeared in the literature, as in Giglio, Kelly and Pruitt (2016), Yang and Hamori (2021), Girardi and Ergün (2013), Härdle, Wang and Yu (2016), Chen, Härdle and Okhrin (2019), and Capponi and Rubtsov (2022). Because CoVaR is defined as a conditional quantile given some economic predictors, Adrian and Brunnermeier (2016) propose to model and infer by two quantile regression models: one quantile regression for the individual loss given some macroeconomic predictors, and another quantile regression for the system loss given macroeconomic predictors and the individual loss.

A general quantile regression model for a univariate predictor is
\[
\begin{cases}
Y_t = \alpha_q + \beta_q X_t + U_t, \\
X_t = \mu + \rho X_{t-1} + e_t, \quad e_t = \sum_{j=0}^{\infty} c_j V_{t-j}, \\
\{(U_t, V_t)\} \text{ is a sequence of independent and identically distributed random vectors},
\end{cases} \quad (1)
\]
where the $c_j$'s satisfy that $\{e_t\}$ is stationary, $|\rho| < 1$ for stationary $\{X_t\}$, and $\rho = 1$ for the unit root process of $\{X_t\}$. Here, we allow dependence between $U_t$ and $V_t$. For $q \in (0,1)$, to ensure that $\alpha_q + \beta_q X_t$ in (1) models the $q$-th conditional quantile of $Y_t$ given $X_t$ and the consistency of quantile regression estimation, one commonly assumes that
\[
P(U_t \le 0 \mid X_t) = q. \quad (2)
\]
Then, a simple question is, under model (1), whether (2) is equivalent to
\[
P(U_t \le 0 \mid V_t) = q.
\]
(3)

When quantile regression is applied to economic and financial variables, as in the second and third examples above, it is necessary to account for heteroscedasticity. In this case, one often considers the following quantile regression model:
\[
\begin{cases}
Y_t = \alpha_q + \beta_q X_t + U_t, \\
X_t = \mu + \rho X_{t-1} + e_t, \quad e_t = \sum_{j=0}^{\infty} c_j V_{t-j}, \\
V_t = \sigma_{t,x}\,\eta_t, \quad U_t = \sigma_{t,y}\,\varepsilon_t, \\
\{(\varepsilon_t, \eta_t)\} \text{ is a sequence of independent and identically distributed random vectors}, \\
(\varepsilon_t, \eta_t) \text{ is independent of } \{(\sigma_{s,y}, \sigma_{s,x}) : s \le t\},
\end{cases} \quad (4)
\]
where $\{\sigma_{t,y} > 0\}$ and $\{\sigma_{t,x} > 0\}$ are stationary, the $c_j$'s satisfy that $\{e_t\}$ is stationary, and $\rho \in (-1, 1]$. Again, $\rho \in (-1,1)$ and $\rho = 1$ represent stationary and nonstationary $\{X_t\}$, respectively. Here, the second question is, under model (4), whether (3) is equivalent to
\[
P(\varepsilon_t \le 0 \mid \eta_t) = q, \quad (5)
\]
which is equivalent to $P(U_t \le 0 \mid \eta_t) = q$ as $\sigma_{t,y} > 0$, implying that the conditional quantile of $Y_t$ given $X_t$ is still $\alpha_q + \beta_q X_t$. Further, the third question is whether (2) is equivalent to (5) under model (4).

In using models (1) and (4) for the three applications mentioned above, $Y_t$ and $X_t$ are, respectively, insurance loss and policyholder's characteristics in forecasting insurance risk, and asset return and some economic variables in predictability tests and systemic risk management.

In this note, under some minor conditions, we prove the equivalence of (2) and (3) under model (1) and the equivalence of (3) and (5) under model (4). It is clear that (5) implies (2) under model (4). Unfortunately, we can only show that (2) implies (5) under model (4) and some restrictive moment conditions.
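For intuition, the heteroscedastic model (4) above is easy to simulate. The sketch below uses our own, purely illustrative choices: $q = 1/2$, $c_0 = 1$ and $c_j = 0$ for $j \ge 1$, symmetric $\varepsilon_t$ drawn independently of everything else so that $P(\varepsilon_t \le 0 \mid \eta_t) = 1/2$ holds by construction, and simple stationary positive volatility processes.

```python
import numpy as np

# Toy simulation of model (4) with q = 1/2 (illustrative parameters only).
rng = np.random.default_rng(1)
T, rho, mu = 50_000, 0.8, 0.0
alpha_q, beta_q = 1.0, 0.5            # quantile-regression coefficients
eps = rng.normal(size=T)              # epsilon_t, independent of eta_t here
eta = rng.normal(size=T)              # eta_t
sig_y = 1.0 + 0.5 * np.abs(rng.normal(size=T))   # stationary sigma_{t,y} > 0
sig_x = 1.0 + 0.5 * np.abs(rng.normal(size=T))   # stationary sigma_{t,x} > 0
U = sig_y * eps                       # U_t = sigma_{t,y} * epsilon_t
V = sig_x * eta                       # V_t = sigma_{t,x} * eta_t
X = np.empty(T)
X[0] = 0.0
for t in range(1, T):                 # X_t = mu + rho*X_{t-1} + e_t, e_t = V_t
    X[t] = mu + rho * X[t - 1] + V[t]
Y = alpha_q + beta_q * X + U          # Y_t = alpha_q + beta_q*X_t + U_t
# Since sigma_{t,y} > 0, I(U_t <= 0) = I(epsilon_t <= 0), so the
# identification condition holds with q = 1/2 up to Monte Carlo error.
assert abs(np.mean(U <= 0) - 0.5) < 0.01
```

The final assertion is exactly the point made below (3) and (5): positivity of $\sigma_{t,y}$ transfers the identification condition from $\varepsilon_t$ to $U_t$ unchanged.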
2 Main Results

In this section, we will prove three technical lemmas that are of independent interest in probability theory. Then, we will prove three theorems that can be used to answer the above three questions. Throughout, we will use either the characteristic function or the moment generating function, as they determine distribution functions and work nicely with sums.

Lemma 1. Assume random variables $U$ and $V$ are independent of $S$, and $S + U$ and $S + V$ are identically distributed. Then $U$ and $V$ have the same distribution under each of the following two conditions:

(a). The characteristic function of $S$, defined as $\phi_S(t) = E(e^{itS})$, is nonzero for $t \in C$, where $C$ is a dense subset of $(-\infty, \infty)$ and $i$ is the imaginary number with $i^2 = -1$.

(b). For some constant $h > 0$, one of the two moment-generating functions $M_U(t) := E(e^{tU})$ and $M_V(t) := E(e^{tV})$ is finite for $t \in (-h, h)$.

Proof. By using independence and equality in distributions we have, for all $t \in (-\infty, \infty)$,
\[
\phi_S(t)\phi_U(t) = E\big( e^{it(S+U)} \big) = E\big( e^{it(S+V)} \big) = \phi_S(t)\phi_V(t). \quad (6)
\]
Under condition (a) that $\phi_S(t) \ne 0$ for $t \in C$, we have from (6) that $\phi_U(t) = \phi_V(t)$ for $t \in C$. Since all characteristic functions are continuous and $C$ is dense in $(-\infty, \infty)$, we have $\phi_U(t) = \phi_V(t)$ for $t \in (-\infty, \infty)$, which implies that $U$ and $V$ are identically distributed.

Under condition (b), assume $M_U(t)$ is finite for $t \in (-h, h)$, implying that all moments of $U$ exist. It is known that when the moment-generating function of a random variable exists, the moments of the random variable uniquely determine the moment-generating function, and the distribution of the random variable as well. In other words, if all moments of random variable $V$ exist and $E(V^k) = E(U^k)$ for all positive integers $k \ge 1$, then $U$ and $V$ are identically distributed. Now we will show $E(V^k) = E(U^k)$ for all $k \ge 1$. Since $\phi_S(t)$ is continuous and $\phi_S(0) = 1$, $\phi_S(t) \ne 0$ in a neighborhood of zero.
It then follows from (6) that $\phi_U(t) = \phi_V(t)$ for $t \in (-\delta, \delta)$ and some $\delta > 0$. Hence, both $\phi_U(t)$ and $\phi_V(t)$ are differentiable at the origin infinitely many times, since all moments of $U$ exist, and
\[
E(U^k) = (-i)^k \phi_U^{(k)}(t)\big|_{t=0} = (-i)^k \phi_V^{(k)}(t)\big|_{t=0} = E(V^k)
\]
for all $k \ge 1$. This completes the proof.

Lemma 2. Assume random variables $Y_0$ and $Y_1$ are independent of $Z$, $P(Z > 0) = 1$, and $ZY_0$ and $ZY_1$ are identically distributed. Then $Y_0$ and $Y_1$ have the same distribution under one of the following conditions.

(A). The characteristic function of $\ln Z$, $\phi_{\ln Z}(t)$, is nonzero for $t \in C$, where $C$ is a dense subset of $(-\infty, \infty)$.

(B). For some $\delta > 0$, $E\big( |Y_j|^\delta + |Y_j|^{-\delta} I(Y_j \ne 0) \big) < \infty$ for $j = 0$ or $1$.

Proof. Without loss of generality, assume $p = P(Y_0 > 0) > 0$ and $r = P(Y_0 < 0) > 0$. Then $P(Y_1 > 0) = P(ZY_1 > 0) = P(ZY_0 > 0) = P(Y_0 > 0) = p$ since $P(Z > 0) = 1$. Similarly, we have $P(Y_0 < 0) = P(Y_1 < 0) = r$ and $P(Y_0 = 0) = P(Y_1 = 0) = 1 - p - r$. Now define two new random variables, $U_0$ and $U_1$, with the following properties:

(P1): For $j = 0, 1$, the distribution of $U_j$ is the same as the conditional distribution of $Y_j$ given $Y_j > 0$, that is,
\[
P(U_j \le u) = P(Y_j \le u \mid Y_j > 0) = P(0 < Y_j \le u)/p, \quad u > 0. \quad (7)
\]
(P2): $(U_0, U_1)$ and $Z$ are independent.

Note that $P(U_j > 0) = 1$ for $j = 0, 1$. Since $ZY_0$ and $ZY_1$ are identically distributed, we have
\[
P(ZY_0 \le x \mid ZY_0 > 0) = P(ZY_1 \le x \mid ZY_1 > 0), \quad -\infty < x < \infty. \quad (8)
\]
Again, since $P(Z > 0) = 1$, we have for $j = 0, 1$
\[
P(ZU_j \le x) = P(ZY_j \le x \mid Y_j > 0) = \frac{P(ZY_j \le x,\, Y_j > 0)}{P(Y_j > 0)} = \frac{P(ZY_j \le x,\, ZY_j > 0)}{P(ZY_j > 0)} = P(ZY_j \le x \mid ZY_j > 0).
\]
It follows from (8) that $ZU_0$ and $ZU_1$ are identically distributed, or equivalently, $\ln Z + \ln U_0$ and $\ln Z + \ln U_1$ have the same distribution. Note that $\ln Z$ and $(\ln U_0, \ln U_1)$ are independent.
To apply Lemma 1 with $S = \ln Z$, $U = \ln U_0$ and $V = \ln U_1$, we need to verify condition (a) or (b). Clearly, condition (A) in Lemma 2 implies condition (a) in Lemma 1. Now assume condition (B) holds with $j = 0$ or $1$. Then we have
\[
E(U_j^\delta) = \frac{1}{p} E\big( Y_j^\delta I(Y_j > 0) \big) \le \frac{1}{p} E\big( |Y_j|^\delta \big) < \infty \quad\text{and}\quad E(U_j^{-\delta}) = \frac{1}{p} E\big( Y_j^{-\delta} I(Y_j > 0) \big) \le \frac{1}{p} E\big( |Y_j|^{-\delta} I(Y_j \ne 0) \big) < \infty.
\]
By using Lyapunov's inequality, we have for any $t \in (0, \delta)$
\[
\big( E(U_j^t) \big)^{1/t} \le \big( E(U_j^\delta) \big)^{1/\delta} < \infty \quad\text{and}\quad \big( E(U_j^{-t}) \big)^{1/t} \le \big( E(U_j^{-\delta}) \big)^{1/\delta} < \infty,
\]
that is, $E\big( e^{\pm t \ln U_j} \big) < \infty$, or equivalently $E\big( e^{t \ln U_j} \big) < \infty$ for $t \in (-\delta, \delta)$, which implies condition (b) in Lemma 1. Therefore, it follows from Lemma 1 that $\ln U_0$ and $\ln U_1$ are identically distributed, that is, $U_0$ and $U_1$ are identically distributed. From (7) we obtain
\[
P(0 < Y_0 \le u) = P(0 < Y_1 \le u), \quad u > 0. \quad (9)
\]
Replacing $Y_0$ and $Y_1$ with $-Y_0$ and $-Y_1$, respectively, and repeating the same procedure as above, we have $P(0 < -Y_0 \le v) = P(0 < -Y_1 \le v)$ for $v > 0$, which is equivalent to
\[
P(v \le Y_0 < 0) = P(v \le Y_1 < 0), \quad v < 0. \quad (10)
\]
By combining (9), (10), and $P(Y_0 = 0) = P(Y_1 = 0)$, we conclude that $Y_0$ and $Y_1$ are identically distributed. This completes the proof of the lemma.

Lemma 3. Assume $Y_0$, $Y_1$, $Z$, and $W$ are random variables, $(Y_0, Y_1)$ is independent of $(Z, W)$, and $P(Z > 0) = 1$. Assume all moments of the random variables $Z$ and $W$ are finite, and the moment-generating functions of $Y_0$ and $Y_1$, $M_{Y_j}(t) = E(e^{tY_j})$, exist for $t \in (-h, h)$ for some constant $h > 0$. If $Y_0 Z + W$ and $Y_1 Z + W$ are identically distributed, then $Y_0$ and $Y_1$ are identically distributed.

Proof. Since the moment-generating functions $E(e^{tY_0})$ and $E(e^{tY_1})$ are well defined for $t \in (-h, h)$, all moments of $Y_0$ and $Y_1$ are finite. In this case, if $E(Y_0^n) = E(Y_1^n)$ for all $n \ge 1$, then $Y_0$ and $Y_1$ have the same moment-generating functions, and consequently they have identical distribution functions.
Hence, it suffices to show that $E(Y_0^n) = E(Y_1^n)$ for all $n \ge 1$.

Since all moments of $Z$ and $W$ exist, $\mu_{j,k} := E(Z^j W^k)$ is finite for all integers $j, k \ge 0$. We have $\mu_{j,0} = E(Z^j) > 0$ for all $j \ge 0$ since $P(Z > 0) = 1$. For $n \ge 1$, we have
\[
g_n(t) := E(tZ + W)^n = E\Big( \sum_{m=0}^n \binom{n}{m} t^m Z^m W^{n-m} \Big) = \sum_{m=0}^n \mu_{m,n-m} \binom{n}{m} t^m.
\]
Since $Y_0 Z + W$ and $Y_1 Z + W$ are identically distributed, $E(g_n(Y_j)) = E\{ E\big( (Y_j Z + W)^n \mid Y_j \big) \} = E\big( (Y_j Z + W)^n \big)$ is the same for $j = 0, 1$, that is,
\[
\sum_{m=0}^n \mu_{m,n-m} \binom{n}{m} E(Y_0^m) = \sum_{m=0}^n \mu_{m,n-m} \binom{n}{m} E(Y_1^m).
\]
Rewrite the above equation as
\[
\mu_{n,0} \big( E(Y_0^n) - E(Y_1^n) \big) = \sum_{m=0}^{n-1} \mu_{m,n-m} \binom{n}{m} \big( E(Y_1^m) - E(Y_0^m) \big). \quad (11)
\]
Notice that $E(Y_0^0) = 1 = E(Y_1^0)$ and $\mu_{n,0} = E(Z^n) > 0$ for all $n \ge 1$. By setting $n = 1$ in (11), we have $E(Y_0) = E(Y_1)$. Now assume $E(Y_0^m) = E(Y_1^m)$ for all $m = 0, 1, \dots, n-1$; then from (11) we conclude $E(Y_0^n) = E(Y_1^n)$. This implies that $E(Y_0^n) = E(Y_1^n)$ for all $n \ge 1$ by induction.

Remark 1. It is well known that the values of the characteristic function of a random variable in a neighborhood of the origin cannot uniquely determine the distribution of the random variable. That is why we assume the characteristic function is non-zero over a dense subset of all real numbers; see condition (a) in Lemma 1 and condition (A) in Lemma 2, which require no moment conditions. For most of the commonly used distributions, such as normal distributions, t-distributions, and the distributions in Remark 2 below, these conditions are valid. When the characteristic function is zero on a set with positive Lebesgue measure, we impose
the existence of the moment generating function of another random variable to ensure the equality of the distribution functions of the two variables; see condition (b) in Lemma 1 and condition (B) in Lemma 2. Meanwhile, the assumptions on the moment generating functions in Lemmas 1-3 cannot be weakened to the finiteness of all moments of the random variables, since moments cannot uniquely determine a distribution function in general; see, e.g., Example 30.2, page 398 in Billingsley (1995).

Remark 2. We notice that Smith (1962) showed that the characteristic function of a non-negative random variable cannot vanish identically on an interval. Immediately, we can draw the following two conclusions:

i) The characteristic function of a random variable $S$ is non-zero over a dense subset of all real numbers if the random variable $S$ is bounded from above or bounded from below.

ii) For a positive random variable $Z$, if $P(Z > c) = 1$ for some $c > 0$, then the characteristic function of $\ln Z$ is non-zero over a dense subset of all real numbers.

Remark 3. In Lemma 3, we impose moment conditions via moment generating functions, as it remains unknown whether or how a condition on the characteristic function like those in Lemmas 1 and 2 can be employed.

First, we show Theorem 1 below. The equivalence between (2) and (3) under model (1) follows immediately.

Theorem 1. Assume random variable $W$ is independent of the random vector $(X, Y)$, and let $q \in (0,1)$ be a constant. Then $P(X \le 0 \mid Y + W) = q$ if and only if $P(X \le 0 \mid Y) = q$ under one of the following two conditions:

(C1): The characteristic function of $W$, $\phi_W(t) = E(e^{itW})$, is non-zero on a dense subset of $(-\infty, \infty)$.

(C2): The moment-generating function of $Y$, $M_Y(t)$, is finite on $(-h, h)$ for some $h > 0$.

Proof. When $P(X \le 0 \mid Y) = q$, we have
\[
P(X \le 0 \mid Y + W) = E\{ P(X \le 0 \mid Y, W) \mid Y + W \} = E\{ P(X \le 0 \mid Y) \mid Y + W \} = E(q \mid Y + W) = q. \quad (12)
\]
Hence, we only need to prove $P(X \le 0 \mid Y) = q$ when
\[
P(X \le 0 \mid Y + W) = q.
\]
(13)

It follows from (13) that $P(X \le 0 \mid Y + W) = q = P(X \le 0)$ and $P(X > 0 \mid Y + W) = 1 - q = P(X > 0)$, which implies that the Bernoulli random variable $T := I(X \le 0)$ is independent of $Y + W$. For $j = 0, 1$, define the conditional distribution of $Y$ given $T = j$ as
\[
F_j(y) = P(Y \le y \mid T = j) \quad\text{for } -\infty < y < \infty. \quad (14)
\]
Our objective is to show that $T$ and $Y$ are independent, or equivalently, that the conditional distribution of $Y$ given $T = j$ is the same for $j = 0, 1$. Now assume random variables $Y_0$ and $Y_1$ are independent of $W$, and their cumulative distribution functions are $F_0$ and $F_1$, respectively, as defined in (14). For every $x$, we have $P(W + Y_j \le x) = P(W + Y \le x \mid T = j)$ for $j = 0, 1$. Since $T$ and $W + Y$ are independent, we have $P(W + Y \le x \mid T = j) = P(W + Y \le x)$ for $j = 0, 1$. Therefore, $W + Y_0$ and $W + Y_1$ are identically distributed. By applying Lemma 1 with $S = W$, $U = Y_0$ and $V = Y_1$, we conclude that $Y_0$ and $Y_1$ have the same distribution function once we verify that condition (a) or (b) in Lemma 1 holds. In fact, condition (C1) implies condition (a) in Lemma 1. Under condition (C2), we have $M_{Y_1}(t) = E(e^{tY_1}) = \frac{1}{q} E\big( e^{tY} I(X \le 0) \big) \le \frac{1}{q} M_Y(t) < \infty$ for all $t \in (-h, h)$; that is, condition (b) holds in Lemma 1. Thus, we have proved that $T$ and $Y$ are independent, which yields $P(X \le 0 \mid Y) = P(T = 1 \mid Y) = P(T = 1) = q$. This completes the proof.

Second, we prove Theorem 2 below. The equivalence between (3) and (5) under model (4) follows immediately by noting that $I(U_t \le 0) = I(\varepsilon_t \le 0)$.
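The factorization (6) underlying Lemma 1, $\phi_{S+U}(t) = \phi_S(t)\phi_U(t)$ for independent $S$ and $U$, can be illustrated numerically with empirical characteristic functions; the sketch below (all distribution choices are ours and purely illustrative) compares the two sides on a grid of $t$ values.

```python
import numpy as np

def ecf(x, t):
    """Empirical characteristic function: (1/n) sum_k exp(i * t * x_k)."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

rng = np.random.default_rng(2)
n = 200_000
S = rng.normal(size=n)                 # phi_S is nonzero everywhere
U = rng.exponential(size=n) - 1.0      # an arbitrary second distribution
t = np.linspace(-2.0, 2.0, 9)
lhs = ecf(S + U, t)                    # empirical phi_{S+U}(t)
rhs = ecf(S, t) * ecf(U, t)            # empirical phi_S(t) * phi_U(t)
# The two sides agree up to Monte Carlo error, illustrating (6).
assert np.max(np.abs(lhs - rhs)) < 0.02
```

When $\phi_S$ is nonzero, dividing the left-hand side by the empirical $\phi_S$ recovers $\phi_U$, which is the deconvolution idea behind condition (a).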
https://arxiv.org/abs/2504.18769v1
Theorem 2. Assume the random variable $Z$ is independent of the random vector $(X,Y)$, $P(Z>0)=1$, and $q\in(0,1)$ is a constant. Then $P(X\le 0\mid YZ)=q$ if and only if $P(X\le 0\mid Y)=q$ under one of the following two conditions:

(C3): The characteristic function of $\ln Z$, $\phi_{\ln Z}(t)=E(e^{it\ln Z})$, is non-zero in a dense subset of $(-\infty,\infty)$.

(C4): For some $\delta>0$, $E\{|Y|^\delta+|Y|^{-\delta}I(Y\ne 0)\}<\infty$.

Proof. The sufficiency follows along the same lines as the proof of (12). Therefore, we only need to prove $P(X\le 0\mid Y)=q$ when
$$P(X\le 0\mid YZ)=q. \quad (15)$$
As in the proof of Theorem 1, set $T=I(X\le 0)$ and define random variables $Y_0$ and $Y_1$ in such a way that $Y_0$ and $Y_1$ are independent of $Z$ and their distribution functions are $F_0$ and $F_1$, as defined in (14). It suffices to show the independence of $T$ and $Y$, which is equivalent to the equality of the distribution functions of $Y_0$ and $Y_1$.

Since (15) implies the independence of $T$ and $YZ$, we have, for $x\in(-\infty,\infty)$, $P(ZY\le x\mid T=0)=P(ZY\le x\mid T=1)$. Because the left-hand side and the right-hand side of this equation are equal to $P(ZY_0\le x)$ and $P(ZY_1\le x)$, respectively, we conclude that $ZY_0$ and $ZY_1$ are identically distributed. Clearly, condition (C3) is the same as condition (A) in Lemma 2. We can also show that condition (C4) implies condition (B) in Lemma 2, since
$$E\left\{|Y_1|^\delta+|Y_1|^{-\delta}I(Y_1\ne 0)\right\}=\frac{1}{q}E\left[\left\{|Y|^\delta+|Y|^{-\delta}I(Y\ne 0)\right\}I(T=1)\right]\le\frac{1}{q}E\left\{|Y|^\delta+|Y|^{-\delta}I(Y\ne 0)\right\}.$$
Hence, it follows from Lemma 2 that $Y_0$ and $Y_1$ are identically distributed, that is, $P(Y\le y\mid T=0)=P(Y\le y\mid T=1)=P(Y\le y)$, and $Y$ is independent of $I(X\le 0)$. Therefore, $P(X\le 0\mid Y)=P(X\le 0)=q$. That is, Theorem 2 holds.

Third, as in the proof of (12), we know that (5) implies (2) under model (4). However, we can only show the equivalence between (2) and (5) under model (4) with enough finite moments, by using the following theorem.

Theorem 3.
Assume $X$, $Y$, $Z$, and $W$ are four random variables, $(X,Y)$ and $(Z,W)$ are independent, and $P(Z>0)=1$. Furthermore, assume all moments of the random variables $Z$ and $W$ are finite, and the moment-generating function of $Y$, $M_Y(t)=E(e^{tY})$, exists for $t\in(-h,h)$ for some constant $h>0$. Let $q\in(0,1)$ be a constant. Then,
$$P(X\le 0\mid YZ+W)=q\ \text{ if and only if }\ P(X\le 0\mid Y)=q. \quad (16)$$
Proof. As before, if $P(X\le 0\mid Y)=q$, then we have
$$P(X\le 0\mid YZ+W)=E\{P(X\le 0\mid Y,Z,W)\mid YZ+W\}=E\{P(X\le 0\mid Y)\mid YZ+W\}=E(q\mid YZ+W)=q.$$
Now we assume $P(X\le 0\mid YZ+W)=q$. Following the proof of Theorem 1, we define the random variable $T=I(X\le 0)$. Then $P(T=1\mid YZ+W)=q$ and $P(T=0\mid YZ+W)=1-q$; that is, $T$ and $YZ+W$ are independent and $P(T=1)=q$. Define the distributions $F_0$ and $F_1$ as in (14). If $F_0$ and $F_1$ are the same, then we can conclude the independence of $T$ and $Y$, which implies $P(X\le 0\mid Y)=P(T=1\mid Y)=P(T=1)=q$. Hence, we only need to show that $F_0$ and $F_1$ are identical.

As in the proof of Theorem 1, we assume the random variables $Y_0$ and $Y_1$ are independent of $(Z,W)$, with cumulative distribution functions $F_0$ and $F_1$, respectively. Using the independence of $(X,Y)$ and $(Z,W)$, for each $j=0,1$, the distribution of $Y_jZ+W$ is the same as the conditional distribution of $YZ+W$ given $T=j$. Meanwhile, the conditional distribution of $YZ+W$ given $T=j$ is the same for $j=0,1$, due to the independence of $T$ and $YZ+W$. Therefore, $Y_0Z+W$ and $Y_1Z+W$ are identically distributed. Since
$$E(e^{tY})=E(e^{tY}\mid T=0)(1-q)+E(e^{tY}\mid T=1)q=(1-q)E(e^{tY_0})+qE(e^{tY_1})$$
for all $t\in(-h,h)$, the moment-generating functions of $Y_0$ and $Y_1$ are well defined in $(-h,h)$. Hence, an application of Lemma
3 concludes that $Y_0$ and $Y_1$ are identically distributed, that is, $F_0=F_1$, i.e., the theorem holds.

References

[1] T. Adrian and M.K. Brunnermeier (2016). CoVaR. American Economic Review 106, 1705-1741.
[2] P. Billingsley (1995). Probability and Measure. 3rd Edition. New York: Wiley.
[3] A. Capponi and A. Rubtsov (2022). Systemic risk-driven portfolio selection. Operations Research 70, 1598-1612.
[4] C. Chen, W.K. Härdle and Y. Okhrin (2019). Tail event driven networks of SIFIs. Journal of Econometrics 208, 282-298.
[5] R. Fan and J.H. Lee (2019). Predictive quantile regressions under persistence and conditional heteroskedasticity. Journal of Econometrics 213, 261-280.
[6] T. Fung, Y. Li, L. Peng and L. Qian (2024). Testing constant serial dynamics in two-step risk inference for longitudinal actuarial data. North American Actuarial Journal 28, 861-881.
[7] B. Gebka and M. Wohar (2013). Causality between trading volume and returns: Evidence from quantile regressions. International Review of Economics & Finance 27, 144-159.
[8] S. Giglio, B. Kelly and S. Pruitt (2016). Systemic risk and the macroeconomy: an empirical evaluation. Journal of Financial Economics 119, 457-471.
[9] G. Girardi and A. Ergün (2013). Systemic risk measurement: multivariate GARCH estimation of CoVaR. Journal of Banking & Finance 37, 3169-3180.
[10] W. Härdle, W. Wang and L. Yu (2016). TENET: tail-event driven network risk. Journal of Econometrics 192, 499-513.
[11] A. Heras, I. Moreno and J.L. Vilar-Zanón (2018). An application of two-stage quantile regression to insurance ratemaking. Scandinavian Actuarial Journal 9, 753-769.
[12] S. Kang, L. Peng and A. Golub (2021). Two-step risk analysis in insurance ratemaking. Scandinavian Actuarial Journal 6, 532-554.
[13] R. Koenker (2005). Quantile Regression. Cambridge University Press.
[14] J.H. Lee (2016). Predictive quantile regression with persistent covariates: IVX-QR approach. Journal of Econometrics 192, 105-118.
[15] X.
Liu, W. Long, L. Peng and B. Yang (2023). A unified inference for predictive quantile regression. Journal of the American Statistical Association 119, 1526-1540.
[16] C. Ma, S. Xiao and Z. Ma (2018). Investor sentiment and the prediction of stock returns: a quantile regression approach. Applied Economics 50, 5401-5415.
[17] W.L. Smith (1962). A note on characteristic functions which vanish identically in an interval. Mathematical Proceedings of the Cambridge Philosophical Society 58 (2), 430-432.
[18] G.R. Shorack and J.A. Wellner (1986). Empirical Processes with Applications to Statistics. Wiley Series in Probability and Statistics.
[19] Z. Xiao (2009). Quantile cointegrating regression. Journal of Econometrics 150, 248-260.
[20] K.L. Xu (2021). On the serial correlation in multi-horizon predictive quantile regression. Economics Letters 200, 109736.
[21] L. Yang and S. Hamori (2021). Systemic risk and economic policy uncertainty: International evidence from the crude oil market. Economic Analysis and Policy 69, 142-158.
INVERSE PROBLEMS OVER PROBABILITY MEASURE SPACE*

QIN LI†, MARIA OPREA‡, LI WANG§, AND YUNAN YANG¶

Abstract. Define a forward problem as $\rho_y=G_\#\rho_x$, where the probability distribution $\rho_x$ is mapped to another distribution $\rho_y$ by the forward operator $G$. In this work, we investigate the corresponding inverse problem: given $\rho_y$, how do we find $\rho_x$? Depending on whether $G$ is overdetermined or underdetermined, the solution can have drastically different behavior. In the overdetermined case, we formulate a variational problem $\min_{\rho_x}D(G_\#\rho_x,\rho_y)$ and find that different choices of the metric $D$ significantly affect the quality of the reconstruction. When $D$ is set to be the Wasserstein distance, the reconstruction is the marginal distribution, while setting $D$ to be a $\phi$-divergence reconstructs the conditional distribution. In the underdetermined case, we formulate the constrained optimization $\min_{\{G_\#\rho_x=\rho_y\}}E[\rho_x]$. The choice of $E$ also significantly impacts the reconstruction: setting $E$ to be the entropy gives the piecewise constant reconstruction, while setting $E$ to be the second moment recovers the classical least-norm solution. We also examine the formulation with regularization, $\min_{\rho_x}D(G_\#\rho_x,\rho_y)+\alpha R[\rho_x]$, and find that the entropy-entropy pair leads to a regularized solution that is defined in a piecewise manner, whereas the $W_2$-$W_2$ pair leads to a least-norm solution, where $W_2$ is the 2-Wasserstein metric.

1. Introduction. We study the inverse problem over the probability measure space:

(1.1) Given data $\rho_y$, reconstruct $\rho_x$ so that $G_\#\rho_x\approx\rho_y$.

Here, $\rho_x\in\mathcal{P}(\Omega)$ is a probability measure of the variable $x\in\Omega\subseteq\mathbb{R}^m$, with $\mathcal{P}(\Omega)$ denoting the collection of all probability measures on the domain $\Omega$. $G$ is a function that maps $x\in\Omega$ to $y=G(x)\in\mathcal{R}\subseteq\mathbb{R}^n$. We denote $\mathcal{R}$ as the range. The map $G_\#$ is then the induced pushforward operator that maps a probability measure in $\mathcal{P}(\Omega)$ to a measure in $\mathcal{P}(\mathcal{R})$. The given data $\rho_y\in\mathcal{P}(\mathbb{R}^n)$ is a probability measure over the variable $y\in\mathbb{R}^n$.
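At the sample level, the pushforward $G_\#\rho_x$ is easy to realize: drawing $x\sim\rho_x$ and evaluating $y=G(x)$ yields samples of $\rho_y$. The following minimal sketch (our own illustration; the map $G$ and the choice of $\rho_x$ are hypothetical) makes the forward problem concrete.

```python
import random

random.seed(1)

def G(x):
    # hypothetical forward map G: R^2 -> R, here G(x1, x2) = x1 + x2
    return x[0] + x[1]

# samples of rho_x: uniform on the unit square [0,1]^2
xs = [(random.random(), random.random()) for _ in range(100_000)]

# pushforward at the sample level: y = G(x) gives samples of rho_y = G_# rho_x
ys = [G(x) for x in xs]

mean_y = sum(ys) / len(ys)
print(round(mean_y, 2))  # close to E[y] = E[x1] + E[x2] = 1.0 for this choice
```

The inverse problem (1.1) asks for the reverse: given only the samples (or the law) of $y$, recover a law for $x$.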
Notably, $\rho_y$ may have non-trivial mass outside the range $\mathcal{R}$. This problem can be viewed as an analog of the inverse problem in a linear space (e.g., Euclidean space): given data $y$, we are to reconstruct $x$ so that $G(x)\approx y$. The new problem (1.1) lives in the probability measure space, and thus naturally in the infinite-dimensional setting, and is significantly harder to solve computationally. However, many properties are shared. Indeed, similar to the problem posed in Euclidean space, different methods can be employed for the reconstruction, depending on the properties of $G$ and $y$ in (1.1). In our context, we need to separate the discussion depending on the features of $G$ and $\rho_y$ as well. To start, we define the feasible set

(1.2) $S=\{\rho_x : G_\#\rho_x=\rho_y\}\subseteq\mathcal{P}(\Omega)$,

the collection of all possible $\rho_x$ which, under the pushforward action by $G$, agree completely with $\rho_y$. There are three scenarios:

• Unique solution: The ideal situation occurs when the feasible set contains only one element, which corresponds to the natural inversion of $\rho_y$ and solves problem (1.1).

• Overdetermined: The feasible set is empty (corresponding to non-existence). This occurs if supp($\rho_y$) is not entirely contained in $\mathcal{R}$, so there are elements of $y$ that cannot find an inversion.

*Submitted to the editors on April 29, 2025.
Funding: M.O. and Y.Y. were supported by National Science Foundation (NSF)
through grant DMS-2409855, Office of Naval Research through grant N00014-24-1-2088, and the Cornell PCCW Affinito-Stewart Grant. Q.L. was supported in part by NSF grant DMS-2308440. L.W. was supported in part by NSF grant DMS-1846854 and the Simons Fellowship.
†Department of Mathematics, University of Wisconsin-Madison, Madison, WI (qinli@math.wisc.edu).
‡Center for Applied Mathematics, Cornell University, Ithaca, NY (mao237@cornell.edu).
§School of Mathematics, University of Minnesota Twin Cities, Minneapolis, MN (liwang@umn.edu).
¶Department of Mathematics, Cornell University, Ithaca, NY (yunan.yang@cornell.edu).

arXiv:2504.18999v1 [math.OC] 26 Apr 2025

• Underdetermined: The feasible set has infinitely many elements (corresponding to non-uniqueness). This can happen if supp($\rho_y$) is completely inside $\mathcal{R}$, and there are elements of $y$ that admit many choices of inversion.

These scenarios also appear in the deterministic setting when looking for the inversion $x\in G^{-1}(y)$. Numerically, it is common practice to formulate an optimization problem for finding this $x$ by adding a regularization term. This numerical strategy is feasible in both overdetermined and underdetermined settings. In the overdetermined setting, adding a regularizer relaxes the feasible-set constraints and tolerates errors in mismatching, and in the underdetermined setting, the added regularizer helps impose prior knowledge. Likewise, for problem (1.1), we also have:

• Regularized: This applies to both overdetermined and underdetermined settings, and we relax the problem by tolerating some errors and adding prior knowledge.

We are interested in studying problem (1.1) in the (1) overdetermined setting, (2) underdetermined setting, and (3) regularized setting. More specifically, we spell out the behavior of the solution to problem (1.1), formulated in the optimization framework, in the three settings mentioned above. We now summarize the findings.
In the overdetermined case, supp($\rho_y$) $\not\subseteq\mathcal{R}$, implying that $\rho_y(\mathbb{R}^n\setminus\mathcal{R})\ne 0$. In this scenario, no measure $\rho_x$ satisfies $G_\#\rho_x=\rho_y$, making the feasible set $S$ empty. Intuitively, this reflects the fact that $\rho_y$ possesses non-trivial mass outside the range $\mathcal{R}$, so a perfect match is unattainable. In this case, we solve the variational problem to determine the best match:

(1.3) $\rho^*_x=\arg\min_{\rho_x}D(G_\#\rho_x,\rho_y)$,

for a properly chosen $D$. Accordingly, we also define the reconstructed data distribution

(1.4) $\rho^*_y=G_\#\rho^*_x$.

This formulation seeks a probability measure $\rho_x$ such that, when pushed forward by $G$, it matches $\rho_y$ optimally, with the optimality quantified by the misfit function $D$. In this setting, it is straightforward to see that $\rho^*_y\ne\rho_y$ but maintains some features of $\rho_y$. In particular, the choice of $D$ is crucial and emphasizes different features of $\rho_y$. Below, we present two concrete scenarios:

– Setting $D$ to be any $\phi$-divergence (also called the $f$-divergence in probability theory), $\rho^*_y$ is the conditional distribution of $\rho_y$ over the range of $G$;

– Setting $D$ to be a Wasserstein distance, $\rho^*_y$ is the marginal distribution of $\rho_y$.

We should note that, at this point, these statements are not yet rigorous. The use of the terms "conditional" versus "marginal" is intended for intuition only. The precise meaning is described in Sections 2.1 and 2.2, respectively.

In the underdetermined case, supp($\rho_y$) $\subseteq\mathcal{R}$, and there is at least one element in $S$. If $S$ has multiple or
infinitely many solutions, we need to pick one that makes the most physical sense. To do so, we follow the footsteps of the Euclidean-space setting and solve the following constrained optimization problem:

(1.5) $\rho^*_x=\arg\min_{\rho_x\in S}E[\rho_x]$.

The choice of $E$ depends on the specific problem and the user's objectives. As written, (1.5) looks for the optimal choice of $\rho_x$ within the feasible set, optimal in the sense characterized by $E$. Therefore, the choice of $E$ is crucial. Different definitions of $E$ promote different properties, leading to solutions with contrasting features. We examine the following two specific definitions of $E$ and report:

– Setting $E$ to be the entropy, $\rho^*_x$ is a piecewise constant function on the level sets of $G$.

– Setting $E=\int|x|^2\,\mathrm{d}\rho_x$, $\rho^*_x$ extends the least-norm solution defined in Euclidean space.

Once again, the statements are not yet rigorous. The terms "piecewise constant" and "least-norm solution" need precise definitions. We discuss the details in Section 3.1 and Section 3.2, respectively.

In the regularized case, a regularization term is added either to offset the constraints on the feasible set or to encode prior knowledge, leading to the following formulation:

(1.6) $\rho^*_x=\arg\min_{\rho_x}D(G_\#\rho_x,\rho_y)+\alpha R[\rho_x]$,

where $\alpha>0$ is the regularization parameter and $R:\mathcal{P}(\Omega)\to\mathbb{R}$ is the regularizing functional. Similar to the two cases above, different choices of the pair $(D,R)$ promote different characterizations of the optimizer $\rho^*_x$. We consider the following two scenarios:

– Setting $D$ to be the Kullback-Leibler (KL) divergence and $R[\rho_x]$ also a KL divergence against a prescribed distribution $M$, the optimal solution $\rho^*_x$ is piecewise defined, with the Radon-Nikodym derivative $\mathrm{d}\rho^*_x/\mathrm{d}M$ being piecewise constant on the level sets of $G$.

– Setting $D$ to be the $p$-Wasserstein distance and $R[\rho_x]:=\int|x|^p\,\mathrm{d}\rho_x$, the $p$-th order moment of $\rho_x$, the optimal distribution $\rho^*_x$ can be deduced by a revised least-norm solution map.
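The conditional-versus-marginal dichotomy from the overdetermined summary above can be previewed on a toy discrete example (our own illustration; the atoms and the range are hypothetical): when part of $\rho_y$ sits outside the range, a $\phi$-divergence match renormalizes the in-range mass, while a Wasserstein match projects the out-of-range mass onto the nearest range point.

```python
# Hypothetical toy: data atoms on {0, 1, 2}; the range of G is R = {0, 1}.
rho_y = {0: 1/3, 1: 1/3, 2: 1/3}
R = [0, 1]

# phi-divergence optimum: conditional distribution of rho_y on R (Theorem 2.2 below)
mass_R = sum(p for y, p in rho_y.items() if y in R)
conditional = {y: p / mass_R for y, p in rho_y.items() if y in R}

# Wasserstein optimum: push every atom to its nearest point of R (Theorem 2.4 below)
marginal = {y: 0.0 for y in R}
for y, p in rho_y.items():
    proj = min(R, key=lambda r: abs(r - y))   # the projection P_G(y)
    marginal[proj] += p

print(conditional)  # {0: 0.5, 1: 0.5}: the mass outside R is forgotten
print(marginal)     # {0: ~0.33, 1: ~0.67}: the mass outside R is projected in
```

The two reconstructions agree on the support but distribute the mass differently, which is exactly the contrast developed rigorously in Sections 2.1 and 2.2.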
Similar to the scenarios above, we will make these terminologies precise. Details are presented in Section 4.1 and Section 4.2, respectively.

It is clear that in all the cases above, the choice of metric plays a crucial role in the reconstruction. This is not at all surprising: the same phenomenon holds true in Euclidean space and in linear function spaces. Specifically, in the regularized problem, classical regularizers include Tikhonov regularization [21, 16], the total variation (TV) norm [28], or the $L^1$ norm [11] for $R$ to promote sparsity, while the $L^2$ norm (i.e., mean squared error) is often used as the data-fidelity term to account for measurement error. Nevertheless, lifting these problems to the probability space is not only a mathematically interesting question but is also backed by substantial practical demand. Over recent years, inverse problems associated with finding probability measures have gained increasing prominence. For example, in weather prediction, the goal is to infer the distribution of pressure and temperature changes [20]; in plasma simulation, one aims to infer the distribution of plasma particles using macroscopic measurements [18, 10]; in experimental design, the objective is to determine the optimal distribution of tracers or detectors to achieve the best measurements [22, 23, 31]. In
optical communication, the task is to recover the distribution of the optical environment [5, 24, 4]. Other problems include those arising in aerodynamics [14], biology [13, 12, 29], and cryo-EM [19, 29]. In all these problems, the sought-after quantity is a probability distribution, density, or measure that matches the given data. Consequently, inverse problems in this stochastic setting are naturally formulated as the inversion for a probability distribution, giving rise to the so-called stochastic inverse problem [6, 8, 7, 9, 27, 26, 30]. Therefore, it is of great importance to carefully study the properties of solutions to this new type of inverse problem.

The main objective of this article is precisely that: to understand the behavior of the solutions to (1.3), (1.5), and (1.6) under different choices of $D$, $E$, and the $(D,R)$ pair, and to establish the six statements mentioned above. We summarize the findings in Table 1. We emphasize throughout this paper that:

• We assume the three problems ((1.3), (1.5), and (1.6)) are well-posed, in the sense that a solution can be found. This assumption is not trivial, given that the problem is posed in an infinite-dimensional space. While the well-posedness of these problems is interesting in its own right, it is somewhat orthogonal to our main goal, and we set it aside for now.

• Additionally, we define a few projection operators (see Definitions 2.3 and 4.2, and Equation (3.4)), and note that the uniqueness of these projections is not a requirement. If multiple projections exist, we have the freedom to select any one of them.

It is possible that these observations have appeared previously in the literature. However, to the best of the authors' knowledge, we have not found them well documented systematically.
Table 1: Six theorems to be proved in this paper.

Overdetermined:
  In $\mathbb{R}^d$: formulation $x^*=\arg\min_x\|G(x)-y\|$; result $G(x^*)=P_G(y)$ (Definition 2.3).
  In $\mathcal{P}$: formulation $\rho^*_x=\arg\min_{\rho_x}D(G_\#\rho_x,\rho_y)$; result: for $D=\phi$-divergence, conditional (Theorem 2.2); for $D=W_2$, marginal (Theorem 2.4).

Underdetermined:
  In $\mathbb{R}^d$: formulation $x^*=\arg\min_{G(x)=y}\|x\|$; result: least-norm solution (Equation (3.4)).
  In $\mathcal{P}$: formulation $\rho^*_x=\arg\min_{\rho_x\in S}E[\rho_x]$; result: for $E=$ entropy, piecewise constant (Theorem 3.1); for $E[\rho_x]=\int|x|^2\,\mathrm{d}\rho_x$, least-norm solution (Theorem 3.2).

Regularization:
  In $\mathbb{R}^d$: formulation $\min_x\|G(x)-y\|^2+\alpha R(x)$; result $x^*=F(y)$ (Definition 4.2).
  In $\mathcal{P}$: formulation $\rho^*_x=\arg\min_{\rho_x}D(G_\#\rho_x,\rho_y)+\alpha R(\rho_x)$; result: for the entropy-entropy pair, $\mathrm{d}\rho/\mathrm{d}M$ piecewise constant (Theorem 4.1); for the $W_2$-moment pair, least-norm solution (Theorem 4.3).

We should also note that some of the results in simpler cases were reported in earlier work [25, 26]. In particular, the different recoveries (conditional versus marginal) for $G=A$ an overdetermined linear operator were reported in [25]. Furthermore, in [26], the authors reported a gradient-flow optimization algorithm using the kernel method when $D$ takes the form of the KL divergence [3, 15].

The rest of the paper is structured as follows. Section 2 is dedicated to problem (1.3), with Section 2.1 presenting general results for $\phi$-divergences and Section 2.2 addressing the counterpart for the Wasserstein distance. Section 3 explores the underdetermined case and studies problem (1.5), with Section 3.1 examining the problem by setting $E$ as an entropy and Section 3.2 focusing on setting $E$ as the moment of the distribution. Section 4 examines
the regularized formulation, with Section 4.1 and Section 4.2, respectively, dedicated to the entropy-entropy pair and the Wasserstein-moment pair. In each section, we end with a subsection documenting the results applied to two toy examples, one linear and one nonlinear, both of which have explicit solutions. These examples highlight the contrast between different measures and their mathematical consequences. The proofs are short enough to be included directly in the main text.

2. The Characterization of Solutions to Problem (1.3). In this section, we discuss the overdetermined case, meaning the feasible solution set $S$ is empty, for example, due to modeling error or noisy data. As an alternative, we look at a relaxation of the inverse problem by considering the variational framework (1.3) and seeking a $\rho_x$ that minimizes the mismatch between the produced data $G_\#\rho_x$ and the given $\rho_y$. As a result, different choices of the distance function $D$ promote different properties, yielding different optimal solutions accordingly. In this section, we pay specific attention to setting $D$ as a $\phi$-divergence for a convex function $\phi$ (see Section 2.1) and as the Wasserstein distance (see Section 2.2).

2.1. Reconstruction when $D$ is a $\phi$-divergence. This section presents results when $D$ in (1.3) is a $\phi$-divergence. We rewrite (1.3) as follows:

(2.1) $\min_{\rho_x\in\mathcal{P}(\Omega)}D_\phi(G_\#\rho_x\,\|\,\rho_y)$.

The $\phi$-divergence between two probability measures $P$ and $Q$ is defined as
$$D_\phi(P\,\|\,Q)=\int\phi\!\left(\frac{\mathrm{d}P}{\mathrm{d}Q}\right)\mathrm{d}Q,$$
where $\mathrm{d}P/\mathrm{d}Q$ is the Radon-Nikodym derivative of $P$ with respect to $Q$, and $\phi:\mathbb{R}_+\to\mathbb{R}$ is a convex function such that $\phi(1)=0$. Common examples of $\phi$-divergences include the aforementioned KL divergence and the $\chi^2$-divergence, obtained by setting $\phi(t)$ to be $t\log t$ and $(t-1)^2$, respectively. We claim that the reconstruction recovers the conditional distribution. For that, we need a precise definition.
Definition 2.1. The conditional distribution of $\rho_y$ on the range $\mathcal{R}$, denoted by $\rho^c_{y|\mathcal{R}}$ and $\rho_y(\cdot\,|\,\mathcal{R})$ interchangeably, is defined as follows:
$$\rho^c_{y|\mathcal{R}}(B)=\frac{\rho_y(B\cap\mathcal{R})}{\rho_y(\mathcal{R})},\quad\text{for every measurable set }B.$$
Moreover, $\rho^c_{y|\mathbb{R}^n\setminus\mathcal{R}}$ is the conditional distribution of $\rho_y$ on $\mathbb{R}^n\setminus\mathcal{R}$, defined in a similar manner. As a consequence, $\rho^c_{y|\mathcal{R}}$ is absolutely continuous with respect to $\rho_y$, and we have:
$$\frac{\mathrm{d}\rho^c_{y|\mathcal{R}}}{\mathrm{d}\rho_y}(y)=\begin{cases}\dfrac{1}{\rho_y(\mathcal{R})}, & \text{if }y\in\mathcal{R},\\[4pt] 0, & \text{if }y\notin\mathcal{R}.\end{cases}$$
We are now ready to present our first theorem.

Theorem 2.2. Assume that the variational problem (2.1) admits a minimizer $\rho^*_x\in\mathcal{P}(\Omega)$. Then, we have $G_\#\rho^*_x=\rho^c_{y|\mathcal{R}}$.

The proof of Theorem 2.2 requires the Measure Disintegration Theorem [1, Thm. 5.3.1], which guarantees the existence of the conditional distribution. Specifically, for any map $T$ from a probability space $(Y,\mathcal{B}_Y,\mu)$ to a measurable space $(Z,\mathcal{B}_Z)$, and defining $\nu=T_\#\mu$, the theorem states that for $\nu$-a.e. $z\in Z$, there exists a family of probability measures $\{\mu_z : z\in Z\}$ on $(Y,\mathcal{B}_Y)$ that satisfies $\mu(B)=\int_Z\mu_z(B)\,\mathrm{d}\nu(z)$ for any measurable set $B\in\mathcal{B}_Y$. The collection $\{\mu_z\}_z$ is called the disintegration of $\mu$ with respect to $T$. In our context, $Y$ is the whole of $\mathbb{R}^n$ and $\mu$ is our data distribution $\rho_y$. We need to identify the appropriate measurable space and find the correct map $T$, as detailed in the following proof.

Proof of Theorem 2.2. Define a map $T:\mathbb{R}^n\to\{0,1\}$ such that
$$z=T(y)=\begin{cases}1, & y\in\mathcal{R},\\ 0, & y\notin\mathcal{R}.\end{cases}$$
By applying the Measure Disintegration Theorem to $\rho_y$ based on the map $T$, we obtain a discrete probability measure $\nu$, with
$$\nu(1)=\rho_y(\mathcal{R}),\qquad\nu(0)=\rho_y(\mathbb{R}^n\setminus\mathcal{R}),$$
and the following disintegration of $\rho_y$:

(2.2) $\rho_y=\nu(1)\,\rho^c_{y|\mathcal{R}}+\nu(0)\,\rho^c_{y|\mathbb{R}^n\setminus\mathcal{R}}$,

where $\rho^c_{y|\mathcal{R}}$ and $\rho^c_{y|\mathbb{R}^n\setminus\mathcal{R}}$ are the conditional distributions given in Definition 2.1. The Measure Disintegration Theorem further states that this disintegration is unique. To show that $\rho^c_{y|\mathcal{R}}$ is the optimal solution, we rewrite the variational problem (2.1) as:

(2.3) $\min_{\rho_x\in\mathcal{P}(\Omega)}D_\phi(G_\#\rho_x\,\|\,\rho_y)=\min_{\rho'_y\in\mathcal{P}(\mathbb{R}^d),\,\rho'_y(\mathbb{R}^n\setminus\mathcal{R})=0}D_\phi(\rho'_y\,\|\,\rho_y)$.

This is true because $\{G_\#\rho_x \mid \rho_x\in\mathcal{P}(\Omega)\}=\{\rho'_y\in\mathcal{P}(\mathbb{R}^d) : \rho'_y(\mathbb{R}^n\setminus\mathcal{R})=0\}$. The "$\subseteq$" direction holds directly since $\mathcal{R}=G(\Omega)$. The "$\supseteq$" direction holds because, for a given $\rho'_y$ with supp($\rho'_y$) $\subseteq\mathcal{R}$, using the left inverse function of $G$ we can define a distribution $\rho'_x\in\mathcal{P}(\Omega)$ such that $G_\#\rho'_x=\rho'_y$. Without loss of generality, we only examine $\rho'_y$ that are absolutely continuous with respect to $\rho_y$.¹ By the definition of the $\phi$-divergence, we have
$$D_\phi(\rho'_y\,\|\,\rho_y)=\int\phi\!\left(\frac{\mathrm{d}\rho'_y}{\mathrm{d}\rho_y}\right)\mathrm{d}\rho_y=\nu(1)\int\phi\!\left(\frac{\mathrm{d}\rho'_y}{\mathrm{d}\rho_y}\right)\mathrm{d}\rho^c_{y|\mathcal{R}}+\nu(0)\int\phi\!\left(\frac{\mathrm{d}\rho'_y}{\mathrm{d}\rho_y}\right)\mathrm{d}\rho^c_{y|\mathbb{R}^n\setminus\mathcal{R}}.$$
Since $\rho'_y(\mathbb{R}^n\setminus\mathcal{R})=0$, the second term is the constant $\nu(0)\phi(0)$. Thus, we are left with:
$$D_\phi(\rho'_y\,\|\,\rho_y)=\nu(1)\int\phi\!\left(\frac{\mathrm{d}\rho'_y}{\nu(1)\,\mathrm{d}\rho^c_{y|\mathcal{R}}+\nu(0)\,\mathrm{d}\rho^c_{y|\mathbb{R}^n\setminus\mathcal{R}}}\right)\mathrm{d}\rho^c_{y|\mathcal{R}}+\nu(0)\phi(0)=\nu(1)\int\phi\!\left(\frac{1}{\nu(1)}\frac{\mathrm{d}\rho'_y}{\mathrm{d}\rho^c_{y|\mathcal{R}}}\right)\mathrm{d}\rho^c_{y|\mathcal{R}}+\nu(0)\phi(0)\ge\nu(1)\,\phi\!\left(\frac{1}{\nu(1)}\right)+\nu(0)\phi(0),$$
where in the last step we applied Jensen's inequality, leveraging the convexity of $\phi$. The equality holds when $\rho'_y=\rho^c_{y|\mathcal{R}}$, completing the proof.

Although the reconstructed $G_\#\rho^*_x$ is expected to somewhat agree with $\rho_y$ on the range $\mathcal{R}$, the fact that the mismatch between $\rho_y$ and $G_\#\rho^*_x$ on $\mathcal{R}$ is merely a constant multiple is not entirely trivial. Jensen's inequality and the convexity of $\phi$ play a major role.

2.2. Reconstruction when $D$ is $W_p$. We now move on by setting $D$ as a Wasserstein distance. This amounts to rewriting (1.1) as:

(2.4) $\inf_{\rho_x\in\mathcal{P}(\Omega)}W_p(G_\#\rho_x,\rho_y)$,

where $W_p(\cdot,\cdot)$ is the $p$-Wasserstein distance.
For any two probability measures $\mu$ and $\nu$, the $p$-Wasserstein distance is:

(2.5) $W_p(\mu,\nu)=\min_{\gamma\in\Gamma(\mu,\nu)}\left(\int d^p(x,y)\,\mathrm{d}\gamma\right)^{1/p},\quad p\ge 1$,

where $d$ is a metric on $\mathbb{R}^n$ and $\Gamma(\mu,\nu)$ represents the set of all couplings between the two measures $\mu$ and $\nu$. The most common choice of $d$ is the Euclidean distance.

¹Otherwise, $D_\phi(\rho'_y\,\|\,\rho_y)$ achieves the maximum value, making such $\rho'_y$ irrelevant as we aim to find the minimum. The maximum value of this divergence is $\infty$ in the case of KL and $\chi^2$, and 1 in the case of TV.

To characterize the minimizer of (2.4), we first define an inversion map $F$ and its associated projection map $P_G$ as follows.

Definition 2.3. Define the inversion map $F:\mathbb{R}^n\to\Omega$ as
$$F(y^*)=\arg\min_{x\in\Omega}d(G(x),y^*),$$
and the projection map as
$$P_G(y^*)=\arg\min_{y\in\mathcal{R}}d(y,y^*)=G(F(y^*)).$$
If the minimizer is not unique, one has the freedom to select one.

Theorem 2.4. Assume a solution to problem (2.4) exists. Then one of the minimizers takes the following form:

(2.6) $\rho^*_x=F_\#\rho_y$,

and the reconstruction recovers the projection: $G_\#\rho^*_x=(P_G)_\#\rho_y$.

Proof. By the definition
of the $W_p$ distance, we have

(2.7) $W_p(G_\#\rho_x,\rho_y)^p=\int d(\tilde y,y)^p\,\mathrm{d}\pi(\tilde y,y)$,

where $\pi\in\Gamma(G_\#\rho_x,\rho_y)$ is the optimal coupling between $G_\#\rho_x$ and $\rho_y$. From Definition 2.3, we can deduce that
$$\int d(\tilde y,y)^p\,\mathrm{d}\pi(\tilde y,y)\ge\int d(P_G(y),y)^p\,\mathrm{d}\pi(\tilde y,y)=\int d(P_G(y),y)^p\,\mathrm{d}\rho_y(y)=\int d(z,y)^p\,\mathrm{d}\gamma(z,y)\ge W_p((P_G)_\#\rho_y,\rho_y)^p, \quad (2.8)$$
where $\gamma=(\widetilde{P}_G)_\#\rho_y$ with $\widetilde{P}_G=(P_G,I_n)$, and $I_n$ is the $n$-dimensional identity matrix. Considering
$$(P_G)_\#\rho_y=(G\circ F)_\#\rho_y=G_\#(F_\#\rho_y)=G_\#\rho^*_x,$$
and recalling definition (2.6), the inequality above shows that $W_p(G_\#\rho_x,\rho_y)^p\ge W_p(G_\#\rho^*_x,\rho_y)^p$ for all $\rho_x$, concluding the proof.

2.3. Examples. A key distinction between the $\phi$-divergence case and the $p$-Wasserstein case lies in how the reference data distribution $\rho_y$ is utilized. In the $\phi$-divergence case, the optimizer relies only on a small subset of the reference distribution: the part of $\rho_y$ confined within $\mathcal{R}$. In contrast, the $p$-Wasserstein case employs the entire reference distribution $\rho_y$ to generate the marginal distribution. Neither result is completely surprising, but their contrasting features bring out effects that we did not find in the literature. We demonstrate these results using a couple of simple examples.

2.3.1. Linear pushforward maps. Assume $G=A:\Omega=\mathbb{R}^m\to\mathbb{R}^n$ is linear and full-rank. Since we are in the overdetermined setting, $m<n$. Then the range $\mathcal{R}=A(\Omega)\cong\mathbb{R}^m$. If the given data distribution $\rho_y\in\mathcal{P}(\mathbb{R}^n)$ has nontrivial support outside $\mathcal{R}$, the feasible set $S$ is empty. We then consider problem (1.3) to find the optimal solution $\rho_x$.

– $D$ is a $\phi$-divergence: Theorem 2.2 states that $A_\#\rho^*_x$ recovers the conditional distribution of $\rho_y$, namely: $A_\#\rho^*_x=\rho_y(\cdot\,|\,\mathcal{R})$.

– $D$ is the $p$-Wasserstein metric: Theorem 2.4 states that $A_\#\rho^*_x$ recovers the marginal distribution of $\rho_y$. Indeed, in this situation, one can even compute $\rho^*_x$ (see [26]): $\rho^*_x=A^\dagger_\#\rho_y$, where $A^\dagger=(A^\top A)^{-1}A^\top$ is the pseudoinverse (left inverse) of $A$. Consequently, $A_\#\rho^*_x=(AA^\dagger)_\#\rho_y$ indeed recovers the marginal distribution of $\rho_y$ along the $m$-dimensional subspace of $\mathbb{R}^n$.

2.3.2.
A simple nonlinear example. Consider the map $G:[0,1]\times[0,2\pi]\to B_{(0,0)}(1)$, where
$$(y_1,y_2)=G(r,\theta)=(r\cos\theta,\,r\sin\theta).$$
We prepare our noisy data as:
$$\rho_y=\frac{1}{2\pi}\,\mathbb{I}_{y_1^2+y_2^2<1}+\frac{1}{2\pi}\,\delta(y_1^2+y_2^2-4),$$
where $\mathbb{I}_A$ is the indicator function of the set $A$. This means that we prepare half of our data as a uniform distribution over the ball $B_{(0,0)}(1)$ in the range (as denoted by $\frac{1}{2\pi}\mathbb{I}_{y_1^2+y_2^2<1}$), and the other half of the data uniformly on the ring of radius 2 (as denoted by $\frac{1}{2\pi}\delta(y_1^2+y_2^2-4)$, with the extra $\frac{1}{2\pi}$ factor coming from normalization²). See Figure 1a for an illustration. According to our earlier statements, the conditional and marginal distributions are obtained if the $\phi$-divergence or the $p$-Wasserstein distance is used, respectively. Namely:

– $D$ is a $\phi$-divergence:
$$\rho^*_y=G_\#\rho^*_x=\frac{1}{\pi}\,\mathbb{I}_{y_1^2+y_2^2\le 1}.$$
This is the recovery of the conditional distribution. All data points, conditioned on being in the range $B_{(0,0)}(1)$, are preserved, while all data points outside the range are completely forgotten. Note that the normalizing constant has changed. See Figure 1b for the plot of $\rho^*_y$.

– $D$ is the $W_p$:
$$\rho^*_y=G_\#\rho^*_x=\frac{1}{2\pi}\,\mathbb{I}_{y_1^2+y_2^2<1}+\frac{1}{2\pi}\,\delta(y_1^2+y_2^2-1).$$
This is the recovery of the marginal distribution, with all data points projected to the range $B_{(0,0)}(1)$. In this context, all
data points sitting on the ring $\delta(y_1^2+y_2^2-4)$ are projected down to the ring $\delta(y_1^2+y_2^2-1)$, within the range. See Figure 1c for the plot of $\rho^*_y$.

²Note that the integral of $\delta(y_1^2+y_2^2-R^2)$ over the entire plane is $\pi$ for all values of $R>0$.

Figure 1: We denote $\rho^*_y:=G_\#\rho^*_x$; (a): the noisy data distribution $\rho_y$ with mass supported outside the range $\mathcal{R}$; (b): the reconstructed data distribution with $D$ being the $\phi$-divergence; (c): the reconstructed data distribution with $D$ being the $p$-Wasserstein metric.

3. The Characterization of Solutions to Problem (1.5). In this section, we discuss the underdetermined situation. This is the situation where there is more than one element in the feasible solution set $S$, and we are tasked to pick one by solving (1.5). Naturally, different choices of $E$ promote different properties and produce different optimal solutions accordingly. As a natural start, we pay special attention to the settings where $E$ is defined as the entropy (see Section 3.1) or the second-order moment of the distribution (see Section 3.2).

3.1. Optimization when $E[\rho_x]=\int\rho_x\ln\rho_x\,\mathrm{d}x$. We first examine problem (1.5) by setting $E$ as the entropy. As such, we look for $\rho_x$ that is absolutely continuous with respect to the Lebesgue measure, which means that we consider $\mathcal{P}_{\mathrm{ac}}(\Omega)$ as our search space. For the convenience of the derivation, we also assume throughout this section that $\rho_y\in\mathcal{P}_{\mathrm{ac}}(\mathcal{R})$ and that $\Omega\subsetneq\mathbb{R}^m$ is compact. The optimal solution can be written down explicitly, as given in Theorem 3.1 below.

Theorem 3.1. The optimizer $\rho^*_x$ of problem (1.5) with $E[\rho_x]=\int\rho_x\ln\rho_x\,\mathrm{d}x$ has the following property:

(3.1) $\rho^*_x(\cdot\,|\,G^{-1}(y))=U(G^{-1}(y)),\quad\forall y$,

where $\rho^*_x(\cdot\,|\,G^{-1}(y))$ denotes the conditional distribution of $\rho^*_x$ on the set $G^{-1}(y)$ and $U(G^{-1}(y))$ is the uniform distribution over the set $G^{-1}(y)$. In particular:

(3.2) $\rho^*_x(x)=\dfrac{\rho_y(G(x))}{\int\delta(G(x)-G(x'))\,\mathrm{d}x'}$.

The theorem states that the mass accumulated at $y=G(x)$ is transferred back to the $x$ domain and distributed uniformly across the preimage $G^{-1}(y)$. This property aligns with the fact that entropy increases with variations, so minimizing entropy discourages variations; the uniform distribution, having the least variation, minimizes the entropy.

Proof of Theorem 3.1. To begin with, let $\rho^*_x$ be the optimizer of problem (1.5) with $E[\rho_x]=\int\rho_x\ln\rho_x\,\mathrm{d}x$. We claim that there is a function $g:\mathcal{R}\to\mathbb{R}_{\ge 0}$ such that

(3.3) $\rho^*_x(x)=g(G(x))$.

To see this, consider the Lagrangian of the constrained optimization problem (1.5):
$$L[\rho_x]:=\int\rho_x(x)\ln\rho_x(x)\,\mathrm{d}x-\int\lambda(y)\left((G_\#\rho_x)(y)-\rho_y(y)\right)\mathrm{d}y=\int\rho_x(x)\ln\rho_x(x)\,\mathrm{d}x-\int\lambda(G(x))\,\rho_x(x)\,\mathrm{d}x+\int\lambda(y)\,\rho_y(y)\,\mathrm{d}y,$$
where $\lambda(y)$ is the Lagrange multiplier. The optimizer $\rho^*_x$ satisfies the first-order optimality condition
$$\left.\frac{\delta L}{\delta\rho_x}\right|_{\rho^*_x}=\ln\rho^*_x(x)+1-\lambda(G(x))=C,$$
which leads to $\ln\rho^*_x(x)=\lambda(G(x))-1+C$, where the constant $C$ is determined by normalization³. This verifies our claim in (3.3), as $\rho^*_x(x)$ only depends on the value of $G(x)$, which also leads to (3.1). To determine an explicit formula for $g$, we use the fact that the constraint has
https://arxiv.org/abs/2504.18999v1
to be satisfied, i.e., $G_\#\rho_x^\ast=\rho_y$. Consider an arbitrary test function $f\in C_c^\infty(\mathbb{R}^n)$. Then the constraint can be rewritten as:
$$\int f(y)\rho_y(y)\,\mathrm{d}y = \int f(y)\,\mathrm{d}(G_\#\rho_x^\ast)(y) = \int f(G(x))\rho_x^\ast(x)\,\mathrm{d}x = \int f(G(x))g(G(x))\,\mathrm{d}x = \int f(y)g(y)\int \delta(y-G(x))\,\mathrm{d}x\,\mathrm{d}y,$$
where the third equality uses $\rho_x^\ast(x)=g(G(x))$. Since the above holds for any test function $f$, we conclude that
$$g(y)\int\delta(y-G(x))\,\mathrm{d}x = \rho_y(y) \;\Longrightarrow\; g(y) = \frac{\rho_y(y)}{\int \delta(y-G(x))\,\mathrm{d}x}.$$
Plugging this expression into (3.3), we obtain (3.2).

3.2. Optimization when $E[\rho_x]=\int|x|^2\,\mathrm{d}\rho_x$. Just as in the deterministic case, where the least-norm solution is sought when the solution is not unique, in this section we look for the ideal distribution in the feasible set $S$ by choosing the one that has the least "norm". In particular, we set the objective function to be the second-order moment of the distribution. To ensure that the second-order moment exists, we naturally work with the space $\mathcal{P}_2(\Omega)$. Similar to the result presented in Section 3.1, we can explicitly write down the optimal solution, given in Theorem 3.2 below.

Theorem 3.2. The optimizer $\rho_x^\ast$ of problem (1.5) with $E[\rho_x]=\int|x|^2\rho_x\,\mathrm{d}x$ is $\rho_x^\ast = H_\#\rho_y$, where $H(y)$ is the minimal-norm solution to $G(x)=y$ in the Euclidean space:
$$H(y) := \operatorname*{arg\,min}_{G(x)=y,\;x\in\Omega} |x|^2.\tag{3.4}$$
If the minimum is not unique, we have the freedom to choose any one of them.

Proof. We first note that, from the definition of $H$:
$$|H(G(x))|^2 \leq |x|^2.\tag{3.5}$$
Then, to show that $\rho_x^\ast$ is optimal, we see that for any other $\rho_x$ such that $G_\#\rho_x=\rho_y$:
$$\int|x|^2\,\mathrm{d}\rho_x^\ast(x) = \int|x|^2\,\mathrm{d}(H_\#\rho_y) = \int|H(y)|^2\,\mathrm{d}\rho_y(y) = \int|H(y)|^2\,\mathrm{d}(G_\#\rho_x)(y) = \int|H(G(x))|^2\,\mathrm{d}\rho_x(x) \leq \int|x|^2\,\mathrm{d}\rho_x(x).$$

Footnote 3: Note that $\frac{\delta\mathcal{L}}{\delta\rho_x}\big|_{\rho_x^\ast}=0$ in this context means $\big\langle \frac{\delta\mathcal{L}}{\delta\rho_x}\big|_{\rho_x^\ast},\,\rho-\rho_x^\ast\big\rangle = 0$ for all $\rho$.

3.3. Examples. Both proofs are straightforward, but the difference between the two theorems signals the drastic differences between moment-minimization and entropy-minimization. According to Theorem 3.1, minimizing entropy encourages the pulled-back samples to spread out over the whole preimage.
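The entropy mechanism of Theorem 3.1 is easy to check in a toy discrete analogue. The sketch below (an illustrative setup of our own; the states, map, and weights are not from the paper) fixes a pushforward constraint and compares the objective $\sum_i p_i\ln p_i$ for the uniform-per-preimage candidate against a skewed one:

```python
import math

# Toy discrete analogue of Theorem 3.1 (illustrative; not from the paper).
# States {0,1,2,3}; G maps {0,1,2} -> 'a' and {3} -> 'b'; the data
# distribution rho_y fixes mass 0.6 on 'a' and 0.4 on 'b'.
preimages = {'a': [0, 1, 2], 'b': [3]}
rho_y = {'a': 0.6, 'b': 0.4}

def E(p):
    """The paper's objective E[p] = sum_i p_i ln p_i (negative Shannon entropy)."""
    return sum(pi * math.log(pi) for pi in p.values() if pi > 0)

# Candidate 1: mass split uniformly over each preimage (Theorem 3.1's optimizer).
uniform = {x: rho_y[y] / len(xs) for y, xs in preimages.items() for x in xs}
# Candidate 2: the same pushforward, but skewed inside the preimage of 'a'.
skewed = {0: 0.4, 1: 0.1, 2: 0.1, 3: 0.4}

assert abs(sum(uniform.values()) - 1.0) < 1e-12
assert E(uniform) < E(skewed)  # the uniform conditional attains a smaller objective
```

Any redistribution of mass within a preimage leaves the pushforward constraint intact, so the comparison isolates exactly the degree of freedom that Theorem 3.1 resolves.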
On the contrary, Theorem 3.2 suggests that the solution that minimizes the moment pulls every $y\in\mathcal{R}$ back to a single sample in the preimage, defined by $H$.

3.3.1. Linear pushforward maps. If $G$ is linear, we denote the pushforward map by a matrix $A:\Omega\subset\mathbb{R}^m\to\mathbb{R}^n$, where $\Omega$ is a compact subset. We assume $A$ has full (row) rank, $m>n$, and $S$ contains infinitely many elements. In this context, the range is $\mathcal{R}=A(\Omega)\subset\mathbb{R}^n$, and we assume the support of the data distribution satisfies $\mathrm{supp}(\rho_y)\subset\mathcal{R}=A(\Omega)$. To compute the preimage, we note that for every fixed $y\in\mathbb{R}^n$, $A^{-1}(y)$ is a compact subset lying in a linear subspace of dimension at most $m-n$. Moreover, define the least-norm solution:
$$x_y = \operatorname*{arg\,min}_{Ax=y}|x|^2 = A^\top(AA^\top)^{-1}y = A^\dagger y,\tag{3.6}$$
where $A^\dagger$ denotes the pseudoinverse (i.e., right inverse) of the matrix $A$. Then the preimage of $y$ can be represented as:
$$A^{-1}(y) = \{x_y + A^\perp\}\cap\Omega,$$
where $A^\perp$ is the subspace perpendicular to the row space of $A$. We further define, according to (3.4): $H(y)=\operatorname{arg\,min}\{|x|^2 : x\in A^{-1}(y)\}$.

• When $E=\int\rho\log\rho\,\mathrm{d}x$, based on Theorem 3.1, the optimal distribution $\rho_x^\ast$ conditioned on the preimage $A^{-1}(y)$ is the uniform measure over $A^{-1}(y)\subset\Omega$.

• When $E=\int|x|^2\,\mathrm{d}\rho(x)$, based on Theorem 3.2, the optimal
distribution $\rho_x^\ast$ conditioned on the preimage $A^{-1}(y)$, for a fixed $y\in\mathrm{supp}(\rho_y)$, is a single Dirac delta measure supported at the minimum-norm solution. Recalling (3.6), we have $\rho_x^\ast = H_\#\rho_y$. This example was already shown in [26, Theorem 4.7].

3.3.2. A simple nonlinear example. A nonlinear example that can be made explicit is as follows. Define $G:B_{(1,1)}(1)\to[0,1]$, where $B_{(1,1)}(1)$ is the ball centered at $(1,1)$ with radius 1, and
$$r = G(x_1,x_2) = \sqrt{(x_1-1)^2+(x_2-1)^2}.$$
Then the preimage of any $r$ is:
$$G^{-1}(r) = \{(x_1,x_2) : x_1=1+r\cos\theta,\ x_2=1+r\sin\theta,\ \theta\in[0,2\pi]\}.$$
Through a simple calculation, within this preimage, the least-norm solution (3.4) is:
$$H(r) = \Big(1-\tfrac{r}{\sqrt{2}},\ 1-\tfrac{r}{\sqrt{2}}\Big),\quad 0\leq r\leq 1.$$
Suppose we are given a probability measure $\mu_r$ with support in $[0,1]$. Then the two problems mentioned above are to find a distribution on $B_{(1,1)}(1)$ within the set
$$S = \{\rho(x_1,x_2) : G_\#\rho = \mu_r\}$$
that minimizes the entropy or the second-order moment. Explicit solutions are available:

• $E=\iint \rho(x_1,x_2)\ln\rho(x_1,x_2)\,\mathrm{d}x_1\mathrm{d}x_2$:
$$\rho(x_1,x_2) = \frac{\mu_r\big(\sqrt{(x_1-1)^2+(x_2-1)^2}\big)}{2\pi\sqrt{(x_1-1)^2+(x_2-1)^2}}.$$
This is a density function supported in the ball $B_{(1,1)}(1)$, with uniform density on each ring. The density of the ring that is $r$ away from the center $(1,1)$ is scaled by $\frac{1}{2\pi r}$ (see Footnote 4). See Figure 2a for an illustration of the optimal $\rho_x$.

• $E=\iint (x_1^2+x_2^2)\,\rho(x_1,x_2)\,\mathrm{d}x_1\mathrm{d}x_2$:
$$\rho(x_1,x_2) = \begin{cases} \sqrt{2}\,\mu_r\big(\sqrt{(x_1-1)^2+(x_2-1)^2}\big)\,\delta(x_1-x_2), & 1-\tfrac{1}{\sqrt{2}}\leq x_1\leq 1,\\[2pt] 0, & x_1>1.\end{cases}$$
This is a density function supported only on the line segment along the straight line $x_1=x_2$ between the point $\big(1-\tfrac{1}{\sqrt{2}},\,1-\tfrac{1}{\sqrt{2}}\big)$ and the point $(1,1)$. See Figure 2b for an illustration of the optimal $\rho_x$ under this choice of $E$.

Figure 2 (panels: (a) $E=\int\rho_x\log\rho_x\,\mathrm{d}x$; (b) $E=\int|x|^2\rho_x\,\mathrm{d}x$): The data distribution $\mu_r=U([0,1])$ is the uniform distribution on $[0,1]$. (a): the optimizer of (1.5) when $E$ is the entropy. The density is constant when restricted to level sets of $G$; (b): the optimizer of (1.5) when $E$ is the second-order moment.
The parameter distribution is concentrated at the element of minimum norm on each level set of $G$.

4. The Characterization of Solutions to Problem (1.6). In this section, we characterize solutions to the regularized data-matching variational problem (1.6) under two choices of the pair $(D,R)$: the KL-entropy pair (see Section 4.1) and the $W_p$-moment pair (see Section 4.2).

4.1. Entropy-entropy pair. Set $D=\mathrm{KL}$ and choose $R[\rho_x]=\mathrm{KL}(\rho_x\,\|\,\mathcal{M})$, with $\mathcal{M}\in\mathcal{P}(\Omega)$ being a desired output probability measure for which $\frac{\mathrm{d}\rho_x}{\mathrm{d}\mathcal{M}}$ exists. For the rest of this analysis, we assume that all probability distributions are absolutely continuous with respect to the Lebesgue measure on the corresponding spaces, and we use the same notation to refer to a distribution and its corresponding density interchangeably. Problem (1.6) can be rewritten as:
$$\rho_x^\ast = \operatorname*{arg\,min}_{\rho_x\in\mathcal{P}_{2,\mathrm{ac}}} L(\rho_x),\qquad L(\rho_x) := \mathrm{KL}(G_\#\rho_x\,\|\,\rho_y) + \alpha\int \log\frac{\rho_x}{\mathcal{M}}\,\rho_x\,\mathrm{d}x.\tag{4.1}$$
Under these assumptions, we have the following theorem.

Footnote 4: Note that the integral of $\delta\big(\sqrt{x_1^2+x_2^2}-R\big)$ over the plane is $2\pi R$.

Theorem 4.1. The optimizer $\rho_x^\ast$ of (4.1) has the following property:
$$\frac{\rho_x^\ast(\cdot\,|\,G^{-1}(y))}{\mathcal{M}(\cdot\,|\,G^{-1}(y))} = g(y),\quad\forall y\in\mathcal{R},\tag{4.2}$$
where $g(y)$ is
a constant depending only on $y$, while $\rho_x^\ast(\cdot\,|\,G^{-1}(y))$ and $\mathcal{M}(\cdot\,|\,G^{-1}(y))$ denote the conditional distributions of $\rho_x^\ast$ and $\mathcal{M}$ on the preimage $G^{-1}(y)$, respectively. In particular:
$$\rho_x^\ast(x) \propto \mathcal{M}(x)\left(\frac{\rho_y(G(x))}{\int \delta\big(G(x)-G(x')\big)\,\mathcal{M}(x')\,\mathrm{d}x'}\right)^{\frac{1}{1+\alpha}}.\tag{4.3}$$

Proof. Since the KL divergence is convex (in the usual sense) and the pushforward action is a linear operator, the optimal solution of (4.1) can be obtained by solving the first-order optimality condition:
$$C_0 = \frac{\delta L}{\delta\rho_x}\Big|_{\rho_x^\ast} = 1 + \log\frac{G_\#\rho_x^\ast}{\rho_y}(G(x)) + \alpha\left(1+\log\frac{\rho_x^\ast}{\mathcal{M}}(x)\right),$$
where $C_0$ is a constant. This implies that
$$(G_\#\rho_x^\ast)(G(x))\left(\frac{\rho_x^\ast}{\mathcal{M}}(x)\right)^{\alpha} \propto \rho_y(G(x)).\tag{4.4}$$
Hence, there exists a to-be-determined function $g$ such that the following holds:
$$\frac{\rho_x^\ast}{\mathcal{M}}(x) = g(G(x)),\tag{4.5}$$
which also leads to (4.2). Next, we claim that (4.5) leads to
$$(G_\#\rho_x^\ast)(y) = g(y)\int \delta(y-G(x))\,\mathcal{M}(x)\,\mathrm{d}x.\tag{4.6}$$
Indeed, consider a measurable function $f$ as a test function. Multiplying both sides of (4.6) by $f(y)$ and integrating in $y$, we can rewrite the left-hand side (LHS) as
$$\mathrm{LHS} = \int f(y)(G_\#\rho_x^\ast)(y)\,\mathrm{d}y = \int f(G(x))\rho_x^\ast(x)\,\mathrm{d}x = \int f(G(x))\mathcal{M}(x)g(G(x))\,\mathrm{d}x,$$
where the last equality uses (4.5). For the right-hand side (RHS), we have
$$\mathrm{RHS} = \int g(y)f(y)\int \delta(y-G(x))\,\mathcal{M}(x)\,\mathrm{d}x\,\mathrm{d}y = \int g(G(x))f(G(x))\mathcal{M}(x)\,\mathrm{d}x.$$
Substituting (4.5) and (4.6) into (4.4) leads to an algebraic equation for $g$, which can be solved to obtain the final result:
$$g(y) = \left(e^{C_0-1-\alpha}\,\frac{\rho_y(y)}{\int \delta(y-G(x))\,\mathcal{M}(x)\,\mathrm{d}x}\right)^{\frac{1}{1+\alpha}}.$$
Since the algebraic equation admits only one solution, we also conclude that the function $g$ appearing in (4.5) is unique.

If the regularization coefficient $\alpha=0$, the parameter space $\Omega$ is compact, $\mathcal{M}$ is the uniform distribution over $\Omega$, and $\rho_y\in\mathcal{P}(\mathcal{R})$, then Theorem 4.1 recovers the result of Theorem 3.1.

4.2. $W_p$-moment pair. We now consider the case where $D=W_p^p$, the $p$-th power of the $p$-Wasserstein metric, and the regularization term is $R[\rho_x]=\int|x|^p\,\mathrm{d}\rho_x(x)$, for $1\leq p\leq\infty$. Then problem (1.6) can be rewritten as:
$$\rho_x^\ast = \operatorname*{arg\,min}_{\rho_x\in\mathcal{P}_p} L(\rho_x),\qquad L(\rho_x) = W_p^p(G_\#\rho_x,\rho_y) + \alpha\int|x|^p\,\mathrm{d}\rho_x(x).\tag{4.7}$$
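The regularization term in (4.7) admits a transport reformulation that the subsequent analysis (Lemma 4.4) rests on: $\int|x|^p\,\mathrm{d}\rho_x = W_p^p(\rho_x,\delta_0)$, because a Dirac target admits exactly one coupling. A direct check on an empirical measure (the toy samples below are our own; plain Python):

```python
# Check W_p^p(rho, delta_0) = int |x|^p d rho on an empirical measure
# (illustrative toy samples, equal weights 1/n).
p = 2
samples = [-1.0, 0.5, 2.0, 3.5]  # empirical rho_x on the real line

# p-th moment of rho_x.
moment = sum(abs(x) ** p for x in samples) / len(samples)

# Transport cost to delta_0: every sample must move to the single atom at 0,
# so the (unique) coupling pairs each x with 0.
w_p_p = sum(abs(x - 0.0) ** p for x in samples) / len(samples)

assert abs(moment - w_p_p) < 1e-12
```

Because there is no freedom in how mass is coupled to a single atom, the infimum in the Wasserstein definition is attained trivially, which is what makes the identity exact rather than an inequality.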
To characterize the minimizer, we first define a map $\hat{F}$ as follows.

Definition 4.2. Let $\hat{F}:\mathbb{R}^n\to\Omega\subset\mathbb{R}^m$ be defined as
$$\hat{F}(y) = \operatorname*{arg\,min}_{x\in\Omega}\big\{|G(x)-y|^p + \alpha|x|^p\big\}.\tag{4.8}$$

Then we have the following theorem.

Theorem 4.3. Given $\rho_y\in\mathcal{P}(\mathbb{R}^n)$ and the forward map $G:\Omega\subset\mathbb{R}^m\to\mathcal{R}\subset\mathbb{R}^n$, where $\mathcal{R}$ and $\Omega$ are compact, the minimizer $\rho_x^\ast$ of problem (4.7) satisfies
$$\rho_x^\ast = \hat{F}_\#\rho_y.\tag{4.9}$$

This theorem is built upon a nice observation about the regularizer:
$$R[\rho_x] = \int|x|^p\,\mathrm{d}\rho_x(x) = W_p^p(\rho_x,\delta_0),$$
which allows us to rewrite the loss function as follows.

Lemma 4.4. For any $\rho_y\in\mathcal{P}(\mathbb{R}^n)$, the objective function defined in (4.7) can be rewritten as:
$$L(\rho_x) = W_p^p(G_\#\rho_x,\rho_y) + \alpha\int|x|^p\,\mathrm{d}\rho_x(x) = W_p^p(\hat{G}_\#\rho_x,\bar{\rho}_y),\tag{4.10}$$
with $\bar{\rho}_y = \rho_y\otimes\delta_0(x)$, where $\delta_0(x)\in\mathcal{P}(\mathbb{R}^m)$ denotes the Dirac delta centered at $0\in\mathbb{R}^m$, and
$$\hat{G} = \begin{pmatrix} G \\ \alpha^{1/p} I_m \end{pmatrix},$$
with $I_m$ being the $m$-dimensional identity matrix. More explicitly,
$$\hat{G}:\Omega\subset\mathbb{R}^m\to\mathcal{R}\times\Omega\subset\mathbb{R}^{n+m},\qquad \hat{G}(x) = \begin{pmatrix} G(x) \\ \alpha^{1/p}x \end{pmatrix}.$$

We leave the proof of this lemma for later. Applying Lemma 4.4, we obtain that problem (4.7) is equivalent to
$$\rho_x^\ast = \operatorname*{arg\,min}_{\rho_x\in\mathcal{P}_p} W_p(\hat{G}_\#\rho_x,\bar{\rho}_y).\tag{4.11}$$
This can be considered an overdetermined problem of a similar form to (2.4), since $\hat{G}:\Omega\subset\mathbb{R}^m\to\mathcal{R}\times\Omega\subset\mathbb{R}^{n+m}$, for which the range lies in a strictly
higher-dimensional space than its domain. As a result, Theorem 2.4 directly applies, which allows us to write down an explicit form of the solution to problem (4.7), as below.

Proof of Theorem 4.3. In this overdetermined system, according to Theorem 2.4, we simply need to find the function that solves:
$$F^{\mathrm{ex}}(y^\ast) = \operatorname*{arg\,min}_{x\in\Omega}|\hat{G}(x)-y^\ast|^p,\qquad \text{for } y^\ast=(y,0)\in\mathbb{R}^{n+m}.$$
Then the solution is $\rho_x^\ast = F^{\mathrm{ex}}_\#\bar{\rho}_y$. Comparing with Definition 4.2, it is straightforward to see that $F^{\mathrm{ex}}((y,0)) = \hat{F}(y)$. Given the specific form of $\bar{\rho}_y$, we conclude $\rho_x^\ast = \hat{F}_\#\rho_y$.

Proof of Lemma 4.4. We drop the subindex $m$ on the identity matrix because there is no ambiguity. Let $\pi_1$ be the optimal transport plan between $G_\#\rho_x$ and $\rho_y$ in the optimal transport problem defining the $W_p$ metric. Then
$$W_p^p(G_\#\rho_x,\rho_y) = \int|y'-y|^p\,\pi_1(\mathrm{d}y'\,\mathrm{d}y) = \int|G(x)-y|^p\,\hat{\pi}_1(\mathrm{d}x\,\mathrm{d}y),$$
where $\pi_1 = (G\times I)_\#\hat{\pi}_1$ for some $\hat{\pi}_1\in\Gamma(\rho_x,\rho_y)$, and $\Gamma(\rho_x,\rho_y)$ denotes the set of all couplings between $\rho_x$ and $\rho_y$. Note that if $G$ is not one-to-one, $\hat{\pi}_1$ may not be unique, but its existence is always guaranteed. Similarly:
$$\int|x|^p\,\mathrm{d}\rho_x = \int|x-0|^p\,\hat{\pi}_2(\mathrm{d}x\,\mathrm{d}x'),\qquad\text{with } \hat{\pi}_2 = \rho_x\otimes\delta_0(x)\in\Gamma(\rho_x,\delta_0(x)),\tag{4.12}$$
where $\Gamma(\rho_x,\delta_0(x))$ denotes the set of all couplings between $\rho_x$ and $\delta_0(x)$, and $\delta_0(x)\in\mathcal{P}(\mathbb{R}^m)$ denotes the Dirac delta at $0\in\mathbb{R}^m$. Defining $\hat{\pi}_3 = \hat{\pi}_1\otimes\delta_0(x)\in\Gamma(\rho_x,\rho_y\otimes\delta_0(x))$, we rewrite:
$$L(\rho_x) = \int|G(x)-y|^p\,\hat{\pi}_1(\mathrm{d}x\,\mathrm{d}y) + \alpha\int|x|^p\,\mathrm{d}\rho_x = \int|\hat{G}(x)-y'|^p\,\hat{\pi}_3(\mathrm{d}x\,\mathrm{d}y'),\qquad y'=(y,0),$$
and hence
$$L(\rho_x) = \int|y-y'|^p\,\pi_3(\mathrm{d}y\,\mathrm{d}y'),\qquad \pi_3 = (\hat{G}\times I)_\#\hat{\pi}_3\in\Gamma\big(\hat{G}_\#\rho_x,\,\rho_y\otimes\delta_0(x)\big).$$
To show that this equals $W_p^p(\hat{G}_\#\rho_x,\bar{\rho}_y)$, we also need to show that $\pi_3$ is an optimal plan. Assume $\gamma\neq\pi_3$ and $\gamma$ is the optimal transport plan between $\hat{G}_\#\rho_x$ and $\bar{\rho}_y=\rho_y\otimes\delta_0(x)$ under the cost function $c(y,y')=|y-y'|^p$. Then we have
$$\begin{aligned}
W_p^p(\hat{G}_\#\rho_x,\bar{\rho}_y) &= \int|y-y'|^p\,\gamma(\mathrm{d}y\,\mathrm{d}y') = \int|\hat{G}(x)-y'|^p\,\hat{\gamma}(\mathrm{d}x\,\mathrm{d}y'), &&\gamma=(\hat{G}\times I)_\#\hat{\gamma},\ \hat{\gamma}\in\Gamma(\rho_x,\rho_y\otimes\delta_0(x)),\\
&= \int|G(x)-y|^p\,\hat{\gamma}_1(\mathrm{d}x\,\mathrm{d}y) + \alpha\int|x|^p\,\mathrm{d}\rho_x, &&\hat{\gamma}_1\in\Gamma(\rho_x,\rho_y),\\
&= \int|y-y'|^p\,\hat{\gamma}_2(\mathrm{d}y\,\mathrm{d}y') + \alpha\int|x|^p\,\mathrm{d}\rho_x, &&\hat{\gamma}_2=(G\times I)_\#\hat{\gamma}_1\in\Gamma(G_\#\rho_x,\rho_y),\\
&\geq W_p^p(G_\#\rho_x,\rho_y) + \alpha\int|x|^p\,\mathrm{d}\rho_x = \int|y-y'|^p\,\pi_3(\mathrm{d}y\,\mathrm{d}y'),
\end{aligned}$$
where $\hat{\gamma}_1$ and $\hat{\gamma}_2$ are determined by $\gamma$.
This contradicts the assumption that $\pi_3$ is not optimal; hence $\pi_3$ is an optimal plan, and we conclude with (4.10).

4.3. Examples. Similar to what is done in Sections 2.3 and 3.3, we design a linear and a simple nonlinear example for which we can explicitly spell out the solutions to the regularized problem.

4.3.1. A linear pushforward map. We first assume $G=A\in\mathbb{R}^{m\times n}$ with $m\geq n$. Setting $D=W_2^2$ and $R[\rho_x]=\int|x|^2\,\mathrm{d}\rho_x(x)$, the solution to (1.6), according to Theorem 4.3, is:
$$\rho_x = \big((A^\top A+\alpha I)^{-1}A^\top\big)_\#\rho_y.\tag{4.13}$$
This can be easily deduced from Definition 4.2 with $G$ replaced by the linear operator. One fascinating fact is that the Wasserstein-moment setting resonates nicely with Tikhonov regularization. In Euclidean space, Tikhonov regularization was introduced to tame the amplifying effect that small singular values of $A$ have on small perturbations in the data. Namely, suppose the ground-truth data $y$ is perturbed to become $y+\delta$ as the given data. In that case, one needs to regularize the problem to temper the amplification by the smallest singular value of $A$. The choice of $\alpha$ thus plays an important role in controlling the error magnification. The same phenomenon is observed in the probability space as well, as stated in Corollary 4.5 below.

Corollary 4.5. Denote by $\rho_y^{\mathrm{true}}$ the ground-truth probability measure and by $\rho_y^\delta$ its $\delta$-perturbation; both are in $\mathcal{P}_2(\mathbb{R}^n)$. Denote
then $\rho_x^{\mathrm{true}}$ the solution to problem (2.4) with the reference data distribution $\rho_y^{\mathrm{true}}$, and $\rho_x^{\delta,\alpha}$ the solution to problem (4.7) using the regularization coefficient $\alpha$ on the reference data distribution $\rho_y^\delta$. Then we have
$$W_2(\rho_x^{\mathrm{true}},\rho_x^{\delta,\alpha}) \leq \big\|(A^\top A+\alpha I)^{-1}A^\top\big\|_2\,W_2(\rho_y^{\mathrm{true}},\rho_y^\delta) + \big\|(A^\top A+\alpha I)^{-1}A^\top - A^\dagger\big\|_2\,\sqrt{\mathbb{E}_{\rho_y^{\mathrm{true}}}[|y|^2]}.\tag{4.14}$$
Set $\sigma_m=\sigma_{\min}(A)$ as the smallest singular value of $A$ and denote by $\sigma(A)$ the set of all singular values of $A$. The bound in (4.14) can be further simplified to
$$W_2(\rho_x^{\mathrm{true}},\rho_x^{\delta,\alpha}) \leq \max_{\sigma_i\in\sigma(A)}\left(\frac{\sigma_i}{\alpha+\sigma_i^2}\right)W_2(\rho_y^{\mathrm{true}},\rho_y^\delta) + \frac{\alpha}{\sigma_m(\sigma_m^2+\alpha)}\sqrt{\mathbb{E}_{\rho_y^{\mathrm{true}}}[|y|^2]} \leq \frac{1}{2\sqrt{\alpha}}\,W_2(\rho_y^{\mathrm{true}},\rho_y^\delta) + \frac{\alpha}{\sigma_m(\sigma_m^2+\alpha)}\sqrt{\mathbb{E}_{\rho_y^{\mathrm{true}}}[|y|^2]}.\tag{4.15}$$
This corollary clearly shows that the error (as written in (4.15)) has two sources: the former being the noise in the measurement and the latter coming from the regularization. It is common to equate these two contributions to seek the optimal choice of $\alpha$ that minimizes the combined error when $\sigma_m$ is known.

Proof. To show (4.14), we deploy the triangle inequality:
$$W_2(\rho_x^{\mathrm{true}},\rho_x^{\delta,\alpha}) \leq W_2(\hat{\rho}_x,\rho_x^{\delta,\alpha}) + W_2(\rho_x^{\mathrm{true}},\hat{\rho}_x),$$
where
$$\hat{\rho}_x = \big((A^\top A+\alpha I)^{-1}A^\top\big)_\#\rho_y^{\mathrm{true}}.\tag{4.16}$$
Notice that in this setting, by (4.13), $\rho_x^{\delta,\alpha} = \big((A^\top A+\alpha I)^{-1}A^\top\big)_\#\rho_y^\delta$. By utilizing the continuity of the map $(A^\top A+\alpha I)^{-1}A^\top$ and citing [17, Theorem 3.2], we obtain the first term in (4.14). To control the second term, we expand:
$$W_2^2(\hat{\rho}_x,\rho_x^{\mathrm{true}}) = W_2^2\Big(\big((A^\top A+\alpha I)^{-1}A^\top\big)_\#\rho_y^{\mathrm{true}},\,A^\dagger_\#\rho_y^{\mathrm{true}}\Big) \leq \int\big|(A^\top A+\alpha I)^{-1}A^\top y - A^\dagger y\big|^2\,\mathrm{d}\rho_y^{\mathrm{true}}(y) \leq \big\|(A^\top A+\alpha I)^{-1}A^\top - A^\dagger\big\|_2^2\,\underbrace{\int|y|^2\,\mathrm{d}\rho_y^{\mathrm{true}}(y)}_{=\,\mathbb{E}_{\rho_y^{\mathrm{true}}}[|y|^2]}.$$
A similar estimate appeared in [2, Theorem 3.1]. From (4.14), we bound the matrices' 2-norms to get
$$W_2(\rho_x^{\mathrm{true}},\rho_x^{\delta,\alpha}) \leq \max_{\sigma_i\in\sigma(A)}\left(\frac{\sigma_i}{\alpha+\sigma_i^2}\right)W_2(\rho_y^{\mathrm{true}},\rho_y^\delta) + \frac{\alpha}{\sigma_m(\sigma_m^2+\alpha)}\sqrt{\mathbb{E}_{\rho_y^{\mathrm{true}}}[|y|^2]}.$$
Finally, we arrive at the conclusion by noting that
$$\max_{\sigma_i\in\sigma(A)}\frac{\sigma_i}{\alpha+\sigma_i^2} \leq \frac{1}{2\sqrt{\alpha}}.$$

4.3.2. A simple nonlinear example. In this section, we revisit the example in Section 3.3.2, but consider the optimization formulation of (1.6).
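Before working through the nonlinear example, the two matrix facts used in the proof of Corollary 4.5 above can be verified numerically on a random instance (a sketch with our own choices of $A$ and $\alpha$; it assumes NumPy is available): the singular values of $(A^\top A+\alpha I)^{-1}A^\top$ are $\sigma_i/(\sigma_i^2+\alpha)$, and each is bounded by $1/(2\sqrt{\alpha})$, the constant appearing in (4.15).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, alpha = 5, 3, 0.3
A = rng.standard_normal((m, n))  # tall matrix, full column rank a.s. (m >= n)

# Regularized (Tikhonov) inverse appearing in (4.13)-(4.16).
reg_inv = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T)

sig = np.linalg.svd(A, compute_uv=False)       # singular values of A
sv = np.linalg.svd(reg_inv, compute_uv=False)  # ... of the regularized inverse

# Fact 1: the singular values of the regularized inverse are sigma/(sigma^2+alpha).
assert np.allclose(np.sort(sv), np.sort(sig / (sig ** 2 + alpha)))

# Fact 2: each one is at most 1/(2 sqrt(alpha)) (AM-GM), as used in (4.15).
assert sv.max() <= 1 / (2 * np.sqrt(alpha)) + 1e-12
```

The map $\sigma\mapsto\sigma/(\sigma^2+\alpha)$ peaks at $\sigma=\sqrt{\alpha}$, which is exactly where the bound $1/(2\sqrt{\alpha})$ is attained.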
Recall $G:B_{(1,1)}(1)\to[0,1]$, where $B_{(1,1)}(1)$ is the ball centered at $(1,1)$ with radius 1, and
$$r = G(x_1,x_2) = \sqrt{(x_1-1)^2+(x_2-1)^2}.$$
Then the preimage of any $r$ is:
$$G^{-1}(r) = \{(x_1,x_2) : x_1=1+r\cos\theta,\ x_2=1+r\sin\theta,\ \theta\in[0,2\pi]\}.$$
Suppose we are given a probability measure $\mu_r$ with support in $[0,1]$. Then the two problems mentioned above are to find a distribution on $B_{(1,1)}(1)$ that minimizes (1.6) under different choices of $R$. Explicit solutions are available:

• $D=\mathrm{KL}$ and $R[\rho_x]=\mathrm{KL}(\rho_x\,\|\,\mathcal{M})$. This is the formulation deployed in (4.1). According to Theorem 4.1, especially equation (4.3), when $\alpha=1$:
$$\rho^\ast(x_1,x_2) \propto \mathcal{M}(x_1,x_2)\sqrt{\frac{\mu_r(G(x_1,x_2))}{\int \delta\big(G(x_1,x_2)-G(x_1',x_2')\big)\,\mathcal{M}(x_1',x_2')\,\mathrm{d}x_1'\mathrm{d}x_2'}}.$$
In this context, we need to compute the denominator term
$$\int \delta\big(G(x_1,x_2)-G(x_1',x_2')\big)\,\mathcal{M}(x_1',x_2')\,\mathrm{d}x_1'\mathrm{d}x_2'.$$
An analytic solution is not available for an arbitrarily given $\mathcal{M}$, but it is explicit when $\mathcal{M}$ has special forms. One such example is
$$\mathcal{M} = C_{\mathcal{M}}\exp\left(-\frac{x_1^2+x_2^2}{2}\right),$$
where $C_{\mathcal{M}}$ is the normalizing constant. Under this setup, we define $(x_1',x_2') := (1+r'\cos\theta,\,1+r'\sin\theta)$ for given $r'$ and $\theta$, and set $r=G(x_1,x_2)$. Upon conducting a change of variables:
$$\int \delta\big(G(x_1,x_2)-G(x_1',x_2')\big)\,\mathcal{M}(x_1',x_2')\,\mathrm{d}x_1'\mathrm{d}x_2' = \int \delta(r-r')\,\mathcal{M}(1+r'\cos\theta,\,1+r'\sin\theta)\,r'\,\mathrm{d}r'\mathrm{d}\theta = rC_{\mathcal{M}}\int_0^{2\pi}\exp\left(-\tfrac{1}{2}\big((r\cos\theta+1)^2+(r\sin\theta+1)^2\big)\right)\mathrm{d}\theta = rC_{\mathcal{M}}\exp\left(-\tfrac{1}{2}(2+r^2)\right)\int_0^{2\pi}\exp(-r\cos\theta-r\sin\theta)\,\mathrm{d}\theta = 2\pi rC_{\mathcal{M}}\exp\left(-\tfrac{1}{2}(2+r^2)\right)I_0(\sqrt{2}\,r),$$
where $I_0$ is the Bessel function (see Footnote 5). This leads to the final result
$$\rho^\ast(x_1,x_2) \propto \exp\left(-\frac{x_1^2+x_2^2}{2}\right)\sqrt{\frac{\mu_r(r)}{2\pi r\,e^{-\frac{1}{2}(2+r^2)}\,I_0(\sqrt{2}\,r)}},\qquad\text{where } r=G(x_1,x_2).$$
We present the optimal solution $\rho^\ast$ in Figure 3 for both the unregularized case ($\alpha=0$, also depicted in Figure 2a) and the regularized case ($\alpha=1$). Given that the prior distribution $\mathcal{M}$ is centered at $x=0$, the regularized solution (Figure 3b) combines characteristics of both the prior distribution $\mathcal{M}$ and the unregularized distribution shown in Figure 3a.

Footnote 5: The Bessel function is defined as $I_0(a) = \frac{1}{2\pi}\int_0^{2\pi}\exp(a\cos\theta)\,\mathrm{d}\theta$.

Figure 3 (panels: (a) $\alpha=0$; (b) $\alpha=1$): A comparison of the reconstructed density for (a) the unregularized case with entropy cost $E[\rho_x]=\int\rho_x\ln\rho_x\,\mathrm{d}x$ and (b) the regularized case with $(D,R)=(\mathrm{KL},\,\mathrm{KL}(\cdot\,\|\,\mathcal{M}))$, where $\mathcal{M}\propto\exp\big(-\frac{x_1^2+x_2^2}{2}\big)$ and the parameter $\alpha=1$. The data distribution is chosen to be $\mu_r=U([0,1])$, the uniform distribution on $[0,1]$.

• $D=W_2^2$ and $R[\rho_x]=\int|x|^2\,\mathrm{d}\rho_x(x)$. According to Definition 4.2, we have, in this specific context:
$$F(r) = \operatorname*{arg\,min}_{(x_1,x_2)\in B_{(1,1)}(1)}\big\{|G(x_1,x_2)-r|^2 + \alpha|x_1|^2 + \alpha|x_2|^2\big\}.$$
Through a simple calculation, it can be shown that the optimal solution has to be supported on the diagonal set $\{(x_1,x_2)\in B_{(1,1)}(1) : x_1=x_2\}$, reducing the problem to solving:
$$F(r) := \operatorname*{arg\,min}_{x\in[0,1]}\big(\sqrt{2}\,|x-1|-r\big)^2 + 2\alpha x^2 = \operatorname*{arg\,min}_{x\in[0,1]}\Big[(2+2\alpha)x^2 + \big(-4+2\sqrt{2}\,r\big)x + \big(2-2\sqrt{2}\,r+r^2\big)\Big].$$
Since this objective function is quadratic, the solution is explicit:
$$\vec{F}(r) = (F(r),F(r)),\qquad\text{with } F(r) = \frac{1-\frac{r}{\sqrt{2}}}{1+\alpha}.$$
According to Theorem 4.3, we have $\rho_x^\ast = \vec{F}_\#\mu_r$. It is a distribution concentrated along the diagonal. In Figure 4a, we revisit the unregularized solution where $E[\rho_x]=\int|x|^2\,\mathrm{d}\rho_x$ (also shown in Figure 2b).
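The closed form for $F(r)$ can be double-checked by brute force (a small sketch; the grid resolution and the test values of $r$ and $\alpha$ are our own choices): minimize $(\sqrt{2}|x-1|-r)^2+2\alpha x^2$ over a fine grid on $[0,1]$ and compare with $(1-r/\sqrt{2})/(1+\alpha)$.

```python
import math

def objective(x, r, alpha):
    # (sqrt(2)|x-1| - r)^2 + 2*alpha*x^2: the reduced 1-D problem on the diagonal.
    return (math.sqrt(2) * abs(x - 1) - r) ** 2 + 2 * alpha * x ** 2

def F_closed(r, alpha):
    return (1 - r / math.sqrt(2)) / (1 + alpha)

grid = [i / 10_000 for i in range(10_001)]  # x in [0, 1], step 1e-4
for r in (0.0, 0.3, 1.0):
    for alpha in (0.5, 1.0, 2.0):
        brute = min(grid, key=lambda x: objective(x, r, alpha))
        assert abs(brute - F_closed(r, alpha)) < 2e-4
```

Note that for $r\in[0,1]$ and $\alpha>0$ the closed-form minimizer always lies in the interior of $[0,1]$, so the constraint never activates and the quadratic formula is exact.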
In Figure 4b, we present the optimal parameter solution with regularization. The dotted circles in Figure 4a highlight the level sets $G^{-1}(r)$ for fixed $r$. Only the single point on the level set closest to the origin is selected, according to the function $H$ (see Section 3.3.2). With a nonzero regularization coefficient, the map shifts from $H$ to its affine transform $F$, which depends on the coefficient $\alpha$. As a result, the original level sets (dotted circles in Figure 4b) are translated into new level sets (solid regions in Figure 4b), and the point closest to the origin is selected to inherit all the mass from the data distribution at $r$. In other words, the two distributions in Figure 4 are related through a linear pushforward map.

Figure 4 (panels: (a) $\alpha=0$; (b) $\alpha\neq 0$): A comparison of the optimal distributions under different setups: (a) shows the result without regularization and with the minimum-norm cost $E[\rho_x]=\int|x|^2\,\mathrm{d}\rho_x$, while (b) shows the result for the regularized version with $(D,R)=\big(W_2^2,\,\int|x|^2\,\mathrm{d}\rho_x(x)\big)$ and $\alpha\neq 0$.

Acknowledgments. The authors thank
Prof. Youssef Marzouk and Prof. Lénaïc Chizat for comments and suggestions on an earlier version of this work.

REFERENCES

[1] L. Ambrosio, N. Gigli, and G. Savaré, Gradient flows: in metric spaces and in the space of probability measures, Springer Science & Business Media, 2005.
[2] R. Baptista, B. Hosseini, N. Kovachki, Y. Marzouk, and A. Sagiv, An approximation theory framework for measure-transport sampling algorithms, Mathematics of Computation, 94 (2025), pp. 1863–1909.
[3] J.-D. Benamou, G. Carlier, M. Cuturi, L. Nenna, and G. Peyré, Iterative Bregman projections for regularized transportation problems, SIAM Journal on Scientific Computing, 37 (2015), pp. A1111–A1138.
[4] L. Borcea, Imaging in Random Media, Springer New York, New York, NY, 2015, pp. 1279–1340.
[5] L. Bracchini, S. Loiselle, A. M. Dattilo, S. Mazzuoli, A. Cózar, and C. Rossi, The Spatial Distribution of Optical Properties in the Ultraviolet and Visible in an Aquatic Ecosystem, Photochemistry and Photobiology, 80 (2004), pp. 139–149.
[6] J. Breidt, T. Butler, and D. Estep, A measure-theoretic computational method for inverse sensitivity problems I: Method and analysis, SIAM Journal on Numerical Analysis, 49 (2011), pp. 1836–1859.
[7] T. Butler and D. Estep, A numerical method for solving a stochastic inverse problem for parameters, Annals of Nuclear Energy, 52 (2013), pp. 86–94.
[8] T. Butler, D. Estep, and J. Sandelin, A computational measure theoretic approach to inverse sensitivity problems II: A posteriori error analysis, SIAM Journal on Numerical Analysis, 50 (2012), pp. 22–45.
[9] T. Butler, D. Estep, S. Tavener, C. Dawson, and J. J. Westerink, A measure-theoretic computational method for inverse sensitivity problems III: Multiple quantities of interest, SIAM/ASA Journal on Uncertainty Quantification, 2 (2014), pp. 174–202.
[10] R. Caflisch, D. Silantyev, and Y.
Yang, Adjoint DSMC for nonlinear Boltzmann equation constrained optimization, Journal of Computational Physics, 439 (2021), p. 110404.
[11] E. J. Candes, J. K. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Communications on Pure and Applied Mathematics, 59 (2006), pp. 1207–1223.
[12] S. Daun, J. Rubin, Y. Vodovotz, A. Roy, R. Parker, and G. Clermont, An ensemble of models of the acute inflammatory response to bacterial lipopolysaccharide in rats: results from parameter space reduction, Journal of Theoretical Biology, 253 (2008), pp. 843–853.
[13] M. Davidian and D. M. Giltinan, Nonlinear models for repeated measurement data: an overview and update, Journal of Agricultural, Biological, and Environmental Statistics, 8 (2003), pp. 387–419.
[14] C. del Castillo-Negrete, M. Pilosov, T. Butler, C. Dawson, and T. Y. Yen, Stochastic inversion with maximal updated densities for storm surge wind drag parameter estimation, in AGU Fall Meeting Abstracts, vol. 2022, 2022, pp. NG42B–0401.
[15] S. Di Marino, B. Maury, and F. Santambrogio, Measure sweeping processes, Journal of Convex Analysis, 23 (2016), pp. 567–601.
[16] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of inverse problems, vol. 375, Springer Science & Business Media, 1996.
[17] O. G. Ernst, A. Pichler, and B. Sprungk, Wasserstein sensitivity of risk and uncertainty
propagation, SIAM/ASA Journal on Uncertainty Quantification, 10 (2022), pp. 915–948.
[18] J. Ferron, M. Walker, L. Lao, H. S. John, D. Humphreys, and J. Leuer, Real time equilibrium reconstruction for tokamak discharge control, Nuclear Fusion, 38 (1998), p. 1055.
[19] J. Giraldo-Barreto, S. Ortiz, E. H. Thiede, K. Palacio-Rodriguez, B. Carpenter, A. H. Barnett, and P. Cossio, A Bayesian approach to extracting free-energy profiles from cryo-electron microscopy experiments, Scientific Reports, 11 (2021), p. 13657.
[20] T. Gneiting and M. Katzfuss, Probabilistic forecasting, Annual Review of Statistics and Its Application, 1 (2014), pp. 125–151.
[21] G. H. Golub, P. C. Hansen, and D. P. O'Leary, Tikhonov regularization and total least squares, SIAM Journal on Matrix Analysis and Applications, 21 (1999), pp. 185–194.
[22] X. Huan, J. Jagalur, and Y. Marzouk, Optimal experimental design: Formulations and computations, Acta Numerica, 33 (2024), pp. 715–840.
[23] R. Jin, M. Guerra, Q. Li, and S. Wright, Optimal design for linear models via gradient flow, arXiv preprint arXiv:2401.07806, (2024).
[24] O. Korotkova, S. Avramov-Zamurovic, R. Malek-Madani, and C. Nelson, Probability density function of the intensity of a laser beam propagating in the maritime environment, Opt. Express, 19 (2011), pp. 20322–20331.
[25] Q. Li, M. Oprea, L. Wang, and Y. Yang, Stochastic Inverse Problem: stability, regularization and Wasserstein gradient flow, arXiv preprint arXiv:2410.00229, (2024).
[26] Q. Li, L. Wang, and Y. Yang, Differential equation-constrained optimization with stochasticity, SIAM/ASA Journal on Uncertainty Quantification, 11 (2023), pp. 491–515.
[27] P. W. Marcy and R. E. Morrison, "Stochastic Inverse Problems" and Changes-of-Variables, arXiv preprint arXiv:2211.15730, (2022).
[28] L. I. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: Nonlinear Phenomena, 60 (1992), pp. 259–268.
[29] W. S. Tang, D.
Silva-Sánchez, J. Giraldo-Barreto, B. Carpenter, S. M. Hanson, A. H. Barnett, E. H. Thiede, and P. Cossio, Ensemble reweighting using cryo-EM particle images, The Journal of Physical Chemistry B, 127 (2023), pp. 5410–5421.
[30] R. D. White, J. D. Jakeman, T. Wildey, and T. Butler, Building Population-Informed Priors for Bayesian Inference Using Data-Consistent Stochastic Inversion, arXiv preprint arXiv:2407.13814, (2024).
[31] J. Yu, V. M. Zavala, and M. Anitescu, A scalable design of experiments framework for optimal sensor placement, Journal of Process Control, 67 (2018), pp. 44–55.
arXiv:2504.19089v1 [math.ST] 27 Apr 2025

Semiparametric M-estimation with overparameterized neural networks

Shunxing Yan*, Ziyuan Chen† and Fang Yao‡
School of Mathematical Sciences, Center for Statistical Science, Peking University, Beijing, China

Abstract

We focus on semiparametric regression, which has played a central role in statistics, and exploit the powerful learning ability of deep neural networks (DNNs) while enabling statistical inference on parameters of interest that offers interpretability. Despite the success of classical semiparametric methods and theory, establishing the $\sqrt{n}$-consistency and asymptotic normality of the finite-dimensional parameter estimator in this context remains challenging, mainly due to nonlinearity and potential tangent space degeneration in DNNs. In this work, we introduce a foundational framework for semiparametric M-estimation, leveraging the approximation ability of overparameterized neural networks that circumvents tangent degeneration and aligns better with current training practice. The optimization properties of general loss functions are analyzed, and global convergence is guaranteed. Instead of studying the "ideal" solution to the minimization of an objective function, as in most of the literature, we analyze the statistical properties of algorithmic estimators, and establish nonparametric convergence and parametric asymptotic normality for a broad class of loss functions. These results hold without assuming the boundedness of the network output, and even when the true function lies outside the specified function space. To illustrate the applicability of the framework, we also provide examples from regression and classification, and the numerical experiments provide empirical support for the theoretical findings.

1 Introduction

1.1 Literature review

Semiparametric models constitute an essential category of statistical methods, involving both finite- and infinite-dimensional parameters.
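To make the model class concrete, the snippet below simulates a canonical semiparametric model, the partially linear regression $y = z\theta + f(x) + \varepsilon$, and estimates the finite-dimensional $\theta$ jointly with a sieve approximation of the infinite-dimensional nuisance $f$ by minimizing one squared loss. This is only an illustrative stand-in, with our own simulated data and a polynomial sieve in place of the DNN nuisance estimators discussed in the paper; it assumes NumPy is available.

```python
import numpy as np

# Partially linear model y = z*theta + f(x) + eps (all settings are our own).
rng = np.random.default_rng(1)
n, theta_true = 2000, 1.5
x = rng.uniform(-1, 1, n)
z = x + rng.standard_normal(n)            # covariate correlated with x
f = np.sin(np.pi * x)                     # unknown smooth nuisance function
y = z * theta_true + f + 0.1 * rng.standard_normal(n)

# Semiparametric M-estimation in miniature: minimize the squared loss jointly
# over (theta, f) with f restricted to a polynomial sieve 1, x, ..., x^10.
basis = np.vander(x, 11, increasing=True)
design = np.column_stack([z, basis])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
theta_hat = coef[0]

assert abs(theta_hat - theta_true) < 0.05  # theta recovered despite unknown f
```

Replacing the polynomial sieve by an (overparameterized) neural network class is precisely the step whose optimization and inference properties the paper analyzes.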
Typically, more consideration is given to the finite-dimensional components, and the latter is regarded as a nuisance (Van der Vaart, 2000; Kosorok, 2008).

*E-mail: sxyan@stu.pku.edu.cn. †E-mail: chenziyuan@pku.edu.cn. ‡Fang Yao is the corresponding author; E-mail: fyao@math.pku.edu.cn.

Compared to parametric models with only finite-dimensional parameters, semiparametric models provide stronger representation capabilities. Moreover, in contrast to nonparametric models, they alleviate the curse of dimensionality and allow easier inference on the finite-dimensional parameters of primary interest. Benefiting from these advantages, semiparametric models have garnered increasing attention in statistics over an extended period, including regression analysis (Ahmad et al., 2005; Liang et al., 2010), survival analysis (Cox, 1972, 1975; Huang, 1999; Zeng and Lin, 2007), causal inference (Rosenbaum and Rubin, 1983; Chernozhukov et al., 2018) and many others; see Tsiatis (2006); Kosorok (2008); Horowitz (2012) for a comprehensive introduction. Among these works, one of the most common estimation methods is semiparametric M-estimation (Van de Geer, 2000; Ma and Kosorok, 2005b; Cheng and Huang, 2010), where the estimators for both parametric and nonparametric parameters are obtained simultaneously by minimizing (or maximizing) certain objective functions. There has been much research on this method, employing classical nonparametric regression techniques, including linear sieves (Ding and Nan, 2011; Ma et al., 2021; Tang et al., 2023), the local polynomial method with closed-form representations (Wang et al., 2010; Liang et al., 2010) and penalized methods (Mammen and Van de Geer, 1997; Ma and Kosorok, 2005a). In general, efficient asymptotic $\sqrt{n}$-normality of
https://arxiv.org/abs/2504.19089v1
finite-dimensional parameters and minimax-optimal convergence of the nonparametric parts can be established. However, most existing works based on traditional techniques often face challenges when applied to the complex structured data increasingly encountered in modern applications. Consequently, there is growing interest in developing estimators based on advanced methodologies such as neural networks, with rigorous theoretical guarantees, motivated by the significant achievements of deep learning in recent years.

Deep neural networks (DNNs), especially those with the ReLU activation function, have received significant attention in recent studies due to their strong learning abilities. Approximation bounds for ReLU DNNs have been established in various function settings (Yarotsky, 2017, 2018; Schmidt-Hieber, 2020; Suzuki, 2018; Yarotsky and Zhevnerchuk, 2020; Kohler and Langer, 2021), which are shown to be (nearly) optimal in terms of both width and depth (Shen, 2020; Lu et al., 2021; Shen et al., 2022). A key advantage of DNN estimators, distinguishing them from classical nonparametric regression estimators, is their adaptivity to various low intrinsic-dimensional structures, such as the hierarchical composition model, where the target function follows a specific compositional structure (Bauer and Kohler, 2019; Schmidt-Hieber, 2020; Kohler and Langer, 2021), and cases with low-dimensional inputs (Schmidt-Hieber, 2019; Chen et al., 2019; Cloninger and Klock, 2021; Nakada and Imaizumi, 2020; Jiao et al., 2021). Therefore, DNNs are increasingly recognized as an important nonparametric approach in a wide range of statistical problems, including nonparametric regression (Bauer and Kohler, 2019; Schmidt-Hieber, 2020; Suzuki, 2018; Kohler and Langer, 2021; Wang et al., 2024), survival analysis (Zhong et al., 2021, 2022), causal inference (Farrell et al., 2021; Chen et al., 2024), factor augmented and interaction models (Fan and Gu, 2022; Bhattacharya et al.
, 2023), repeated measurements models (Yan et al., 2025b), among others.

As common ground, most of the previously mentioned statistical works study the generalization ability of (nearly) empirical risk minimizers and bound the estimation error by the network size relative to the training sample. However, the optimization landscape in deep learning presents unprecedented difficulty due to nonlinearity and nonconvexity, so the estimators obtained in practice are more likely to be local minimizers. Moreover, in practice, neural networks are often trained with overparameterization, i.e., the number of network parameters may vastly exceed the training sample size, while still avoiding the traditional pitfall of overfitting (Zhang et al., 2021). This does not align with the statistical analysis in the previously mentioned works. Fortunately, there have been meaningful results that address these gaps. For instance, Jacot et al. (2018) introduced a framework that analyzes the training of wide neural networks, drawing an analogy to kernel regression. Specifically, they compared the gradient flow of the least squares loss with that of kernel regression and demonstrated that, as the network width approaches infinity, the Neural Tangent Kernel (NTK) converges to a time-invariant limit (Arora et al., 2019; Lee et al., 2019; Bietti and Mairal, 2019; Geifman et al., 2020; Chen and Xu, 2020; Bietti and Bach, 2021; Lai et al., 2023a). From the perspective of optimization, wide neural networks
https://arxiv.org/abs/2504.19089v1
with random initialization have been shown, with high probability, to achieve global minimization through gradient-based methods (Du et al., 2018; Li and Liang, 2018; Allen-Zhu et al., 2019). In terms of statistical generalization performance, Hu et al. (2021) and Suh et al. (2021) established the convergence rate of the penalized least squares regression problem, and more recent studies have examined least squares regression with early stopping, providing uniform convergence guarantees for neural network kernels (Lai et al., 2023a; Li et al., 2024; Lai et al., 2023b).

1.2 Challenges and main contributions

Despite the impressive empirical performance of deep learning models, they are commonly regarded as black boxes, lacking interpretability and theoretical support. Semiparametric modeling provides a useful way to make interpretable inferences while leveraging the learning ability of neural networks. Training DNNs involves large-scale network parameter learning via loss minimization, and aligns naturally with the framework of semiparametric M-estimation, which simultaneously estimates the nonparametric component (via network weights) and the parametric component. Several studies have explored this problem (e.g., Zhong et al., 2022), but defaulted to some "good" assumptions on the tangent space, a critical issue we shall discuss below. Another commonly used method is Z-estimation, where the finite-dimensional parameter is typically taken as a functional of the nonparametric component. It usually takes two steps: first estimating the nonparametric component and then substituting it into an equation to solve for the parametric component. For such two-step procedures, the doubly debiased/robust methods (Chernozhukov et al., 2018; Farrell et al., 2021) are employed to achieve efficiency. By comparison, semiparametric M-estimation is computationally straightforward via network training and requires fewer model-specific constructions, giving it broader applicability.
While a simpler two-step plug-in method can also attain efficiency (Chen et al., 2024), one has to impose assumptions on the tangent approximation ability in a similar manner to Zhong et al. (2022).

The above discussion inspires our investigation: what foundation of semiparametric M-estimation theory is undermined in neural network based models, and how can it be rebuilt? Denote the semiparametric model as $P_{\beta,f}$, where $\beta\in\mathbb{R}^p$ is the finite-dimensional parameter of interest and $f:\mathbb{R}^d\to\mathbb{R}$ is a nuisance function belonging to an infinite-dimensional space. The semiparametric M-estimation procedure aims to optimize an objective function, expressed as $P_n l_{\beta,f}$ with empirical measure $P_n$, either over a suitable sieve or with an additional penalty term. Notably, the information of the parameters $\beta$ and $f$ is coupled. Since the nuisance parameter $f$ resides in an infinite-dimensional space, it significantly increases the complexity of the estimation problem. Consequently, it is reasonable that the full set of parameters achieves a nonparametric convergence rate slower than $n^{-1}$.

A central challenge of semiparametric M-estimation is to establish the $\sqrt{n}$-consistency and asymptotic normality of the estimator of the finite-dimensional component $\beta$. To decouple the two parts of the parameters, taking likelihood estimation as an example, the efficient score is commonly utilized, which removes the effect of the nuisance parameters. Specifically, we define $\Lambda_{\beta,f}$ as the tangent set for the
nuisance parameter, i.e., it contains the score functions $\partial_f l_{\beta,f}[h]$ for all appropriate directions $h$; see Section 3 for a rigorous definition. Additionally, denote $\Pi_{\beta,f}$ as the orthogonal projection operator onto the closure of the linear span of $\Lambda_{\beta,f}$. Then the efficient score can be constructed by subtracting from $\partial_\beta l_{\beta,f}$ its projection onto the nuisance tangent space $\Lambda_{\beta,f}$, i.e.,
$$\tilde{S} = S_1(\beta_0,f_0) - \Pi_{\beta_0,f_0} S_1(\beta_0,f_0), \quad \text{where } S_1(\beta,f) = \partial l_{\beta,f}/\partial\beta.$$
Accordingly, $\tilde{S}$ is orthogonal to the tangent space $\Lambda_{\beta,f}$, thus mitigating the nuisance information. Once the empirical efficient score $P_n\tilde{S}(\hat\beta,\hat f)$ is proven to be as small as $o_p(n^{-1/2})$, asymptotic normality of $\sqrt{n}(\hat\beta-\beta_0)$ can then be established via standard Taylor expansion and entropy analysis (Van der Vaart, 2000; Kosorok, 2008). However, establishing that
$$P_n\tilde{S}(\hat\beta,\hat f) = P_n\big(S_1(\hat\beta,\hat f) - \Pi_{\beta_0,f_0} S_1(\beta_0,f_0)\big)$$
is $o_p(n^{-1/2})$ for neural network estimators is highly nontrivial. The first term on the right-hand side is typically easy to bound, as $\hat\beta$ is a finite-dimensional (nearly) minimizer. The main challenge arises from bounding the second term. Because $\Pi_{\beta_0,f_0}S_1(\beta_0,f_0)$ lies in the closure of the linear span of $\Lambda_{\beta,f}$, there could be some direction $\tilde h$ such that $\Pi_{\beta_0,f_0}S_1(\beta_0,f_0) = S_2(\beta_0,f_0)[\tilde h]$. Denote the neural network function as $f_\theta$, where $\theta$ represents the network parameters. When $\hat f_\theta$ is an exact local or global minimizer, we can only obtain that $\nabla_\theta P_n l_{\hat\beta,\hat f_\theta} = 0$, but not necessarily $P_n S_2(\hat\beta,\hat f)[\tilde h] = 0$. Denote the tangent space of the network at $\theta$ by
$$T_\theta\mathcal{F}_{NN} := \mathrm{Span}\{\nabla_\theta f_\theta(x)^T(\theta_1-\theta) : \text{all } \theta_1 \text{ having the same shape as } \theta\}.$$
Then, for any $h\in T_{\hat\theta}$, we have $P_n(S_2(\hat\beta,\hat f)[h]) = 0$. If the components of $\tilde h$ lie in, or can be well approximated by, the tangent space $T_{\hat\theta}$, we can conclude that $P_n S_2(\hat\beta,\hat f)[\tilde h]$ is sufficiently small. Therefore, it is preferable for the tangent space to be large, so that it contains or approximates as many directions as possible.
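To make the role of $T_\theta\mathcal{F}_{NN}$ concrete, the following sketch (our own illustration, not code from the paper) compares the numerical rank of the per-sample gradient feature matrix $[\nabla_\theta f_\theta(X_i)]_{i\le n}$ for a one-hidden-layer ReLU network under generic random initialization versus tied (all-equal) weights; in the tied case, gradients repeat across neurons and the span collapses to dimension at most $d+2$.

```python
import numpy as np

# Sketch (hypothetical one-hidden-layer network, not the paper's architecture):
# f_theta(x) = sum_j v_j * relu(w_j @ x + b_j). We probe the size of the tangent
# space T_theta F_NN via the rank of the feature matrix [grad_theta f_theta(x_i)].

def tangent_features(X, W, b, v):
    """Rows are per-sample gradients of f_theta w.r.t. all parameters (v, W, b)."""
    pre = X @ W.T + b                    # (n, m) pre-activations
    act = np.maximum(pre, 0.0)           # df/dv_j = relu(w_j @ x + b_j)
    ind = (pre > 0).astype(float)
    dW = (ind * v)[:, :, None] * X[:, None, :]  # df/dW_{jk} = v_j 1{pre_j>0} x_k
    db = ind * v                                # df/db_j = v_j 1{pre_j>0}
    return np.hstack([act, dW.reshape(len(X), -1), db])

rng = np.random.default_rng(0)
n, d, m = 40, 3, 30
X = rng.standard_normal((n, d))

# (i) generic random initialization: the tangent space is rich
W1, b1, v1 = rng.standard_normal((m, d)), rng.standard_normal(m), rng.standard_normal(m)
rank_random = np.linalg.matrix_rank(tangent_features(X, W1, b1, v1))

# (ii) tied weights (all neurons identical): gradients repeat across neurons,
# so the tangent space degenerates to dimension at most d + 2
W2 = np.tile(rng.standard_normal(d), (m, 1))
b2, v2 = np.full(m, 0.3), np.full(m, 0.5)
rank_tied = np.linalg.matrix_rank(tangent_features(X, W2, b2, v2))

print(rank_random, rank_tied)  # tied rank collapses far below m * (d + 2) parameters
```

With random weights the rank is generically $\min(n, m(d+2))$, whereas the tied configuration caps it at $d+2$, mirroring the degenerate-tangent-space phenomenon discussed next.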
For traditional linear estimators using a sieve space $\mathcal{S} = \{f_\alpha(x) : f_\alpha(\cdot) = \sum_{i=1}^m \alpha_i b_i(\cdot)\}$ with basis functions $\{b_i(\cdot), i=1,\dots,m\}$ (e.g., regression spline estimators), a large tangent space is straightforward to guarantee, since linearity ensures that the tangent space of $\mathcal{S}$ at any $f_\alpha(\cdot)\in\mathcal{S}$ is $T_\alpha\mathcal{S} = \mathrm{Span}\{\partial_{\alpha_i} f_\alpha(\cdot)\} \cong \mathcal{S}$. This provides a strong approximation ability. However, this does not necessarily hold for the tangent space of neural networks.

Therefore, a crucial question arises concerning the approximation ability of the neural network tangent space, one that should not simply be suppressed by presupposed conditions as in previous works (Zhong et al., 2022; Chen et al., 2024): Does the tangent space $T_\theta\mathcal{F}_{NN}$ of a neural network always have good approximation ability at every possible $\theta$? If not, can semiparametric DNN M-estimators still achieve $\sqrt{n}$-consistency for the finite-dimensional parameters?

Unfortunately, the answer to the former question is negative. As a toy example, consider a fully connected neural network in which the weights and biases within each layer take the same values. Due to the lack of identifiability among the network parameters, the derivatives with respect to parameters in equivalent positions are identical. Consequently, the dimension of the tangent space $T_\theta\mathcal{F}_{NN}$ is only linearly related to the depth of the neural network, which is much smaller than the number of parameters (i.e., the square of the width multiplied by the depth). In such cases, the tangent space $T_\theta\mathcal{F}_{NN}$ will not provide a sufficiently rich
approximation ability for possible $\tilde h$. As discussed above, whether the efficient score is sufficiently small to establish the $\sqrt{n}$-consistency of the parametric component of the DNN estimator remains an open problem.

Nonetheless, the counterexample presented above is so extreme that it is reasonable to presume it occurs only with small probability during learning. Hence, a workable theoretical treatment requires a more careful analysis of the estimation procedure and of the randomness introduced by the neural network, such as the random initialization. Additionally, we hope to address a limitation shared by most current statistical analyses of deep neural networks: they ignore the nonlinearity and nonconvexity of the optimization landscape and directly consider the "ideal" global minimizer. Motivated by these considerations, we study overparameterized neural networks, which provide a meaningful way to analyze the optimization and statistical performance of the algorithmic solutions. Further, for generality, we consider a broader class of loss functions beyond the least squares criterion, which introduces additional difficulties for both the optimization convergence analysis and the statistical inference of the algorithmic solution. In summary, our main contributions to tackling these challenges are elaborated below.

• Methodologically, for the semiparametric M-estimation problem, we employ a new overparameterized neural network estimator, which better aligns with practical settings and facilitates the optimization analysis. An $\ell_2$ penalization of the neural network parameters is applied for regularization, allowing subsequent analyses of general losses rather than only least squares regression. This also ensures that the scores at the estimate remain sufficiently small, contributing to the asymptotic normality of the finite-dimensional parameter estimator.
Then we consider a continuous gradient flow framework to investigate the training dynamics of the neural network. By incorporating random initialization into the analysis, we bound, with high probability, the difference between the overparameterized neural network training flow and the ideal RKHS optimization process. This also suggests that the counterexamples mentioned above with a degenerate tangent space occur only with small probability. Furthermore, we establish the global algorithmic convergence of the proposed overparameterized neural semiparametric M-estimator, providing a theoretical guarantee for the training procedure.

• Theoretically, we analyze the statistical properties of the algorithmic solution, including the generalization error of the nonparametric component and the $\sqrt{n}$-consistency/asymptotic normality of the parametric component. Specifically, unlike the common convergence analysis of neural network estimation, which often assumes that the network output is bounded, the optimization solution is analyzed directly. Especially when the true function does not lie in the desired space and the loss function is of a general form instead of least squares, estimators that do not impose boundedness assumptions on the nonparametric part present significant challenges for theoretical analysis. To establish the convergence rate, we introduce a new condition, referred to as the "Huberized margin condition", which relaxes the standard assumptions and is easier to satisfy for unbounded nonparametric candidate function classes. Building on the above results and some regularity conditions, we show that the parametric component
achieves $\sqrt{n}$-consistency and asymptotic normality. The latter result demonstrates the efficiency for the least favorable submodels and enables interpretable inference for the parameters of interest. Lastly, we discuss two commonly used models: partially linear regression and classification. In these examples, the aforementioned properties of the proposed estimator are verified. Moreover, the numerical results corroborate our theoretical analysis through the finite sample behavior.

1.3 Notations and organization

In this paper, we use the notation $a_n \lesssim b_n$ to indicate that $a_n \le C b_n$ for some constant $C>0$ independent of $n$, with $\gtrsim$ defined analogously. Then $a_n \asymp b_n$ means that both $a_n \lesssim b_n$ and $b_n \lesssim a_n$. Denote $a_n = o(b_n)$ if $a_n/b_n \to 0$, $a_n = O(b_n)$ if $a_n \lesssim b_n$, and $a_n = \Theta(b_n)$ if $a_n \asymp b_n$. We also use $a_n = o_p(b_n)$ to indicate that $a_n/b_n \to_p 0$, and $a_n = O_p(b_n)$ if, for any $\epsilon > 0$, there exists a constant $C_\epsilon > 0$ such that $\sup_n P(|a_n| \ge C_\epsilon |b_n|) < \epsilon$. Additionally, $\mathrm{poly}(a,b,\dots)$ denotes some polynomial in $(a,b,\dots)$. The notation $I$ denotes the indicator function. For probabilistic integrals, $P$ represents the theoretical expectation with respect to the population distribution, while $P_n$ denotes the empirical expectation derived from the finite sample.

The rest of the article is organized as follows. In Section 2, following an overview of overparameterized neural networks and neural tangent kernels, we introduce the semiparametric model framework for neural M-estimation trained by the gradient flow algorithm. Section 3 develops the statistical theory for the algorithmic estimators, addressing the convergence of the nonparametric component and the asymptotic normality of the parametric component. In Section 4, we examine two illustrative examples, partially linear regression and classification, demonstrating the validity of the theoretical results.
Finally, the numerical experiments are presented in Section 5, providing empirical evidence for the proposed method and theory.

2 Semiparametric M-estimation with neural networks

In this section, we first introduce the overparameterized neural network used and some important related concepts. Then we present the considered semiparametric model and the neural M-estimation framework. Furthermore, a detailed analysis of the optimization convergence is also provided.

2.1 Overparameterized neural network and neural tangent kernel

In this paper, we primarily consider feedforward fully connected neural networks. Given positive integers $m_0, m_1, \dots, m_{L+1}$ with $m_0 = d$ and $m_{L+1} = 1$, the network is defined as follows:
$$\hat f_\theta(x) = \mathcal{L}_L \circ \sigma \circ \mathcal{L}_{L-1} \circ \sigma \circ \mathcal{L}_{L-2} \circ \sigma \circ \cdots \circ \mathcal{L}_1 \circ \sigma \circ \mathcal{L}_0(x), \tag{1}$$
where $\sigma: \mathbb{R}^{m_i} \to \mathbb{R}^{m_i}$ is the activation function. For $i \in \{0,1,\dots,L-1\}$, $\mathcal{L}_i(y) = \sqrt{2}(W_i y + b_i)/\sqrt{m_{i+1}}$, and for $i = L$, $\mathcal{L}_i(y) = W_i y + b_i$, where $W_i \in \mathbb{R}^{m_{i+1} \times m_i}$ and $b_i \in \mathbb{R}^{m_{i+1}}$ for $y \in \mathbb{R}^{m_i}$. In this work, we take $\sigma$ to be the rectified linear unit (ReLU) function $a \mapsto \max\{a, 0\}$ by default, and most of the results presented can be generalized to other activation functions. Additionally, we use $\theta$ to denote all parameters in the neural network. To conveniently analyze the effect of the widths $m_i$, we assume that $m \le \min_{1\le i\le L} m_i \le \max_{1\le i\le L} m_i \le Cm$ always holds for some constant $C$. In practice, neural networks are overparameterized, meaning that the width $m$ is much larger than the sample size $n$. To better align with practical settings and facilitate the analysis of optimization properties, we also consider overparameterized networks in the following. Before training, random initialization of the weights is usually employed to break symmetry and prevent all neurons from learning identical features, thereby improving learning efficiency and promoting better convergence. Therefore, we randomly initialize the weight matrices $W_i$ and the bias $b_0$ from an i.i.d. standard normal distribution,
while the other biases $b_i$ for $i \ge 1$ are initialized to zero. Denoting the initial parameters by $\theta_0$, we define
$$f_\theta(x) = \hat f_\theta(x) - \hat f_{\theta_0}(x)$$
to ensure, for theoretical convenience, that $f_\theta$ is zero at the start of training, i.e., $f_{\theta_0}(x) \equiv 0$.

In the analysis of overparameterized neural networks, the neural tangent kernel (NTK) (Jacot et al., 2018; Arora et al., 2019) is commonly employed. We now introduce the NTK for finite $m$ as
$$K^{NT}_\theta(x, x') = \nabla_\theta f_\theta(x)^T \nabla_\theta f_\theta(x').$$
Fixing $L$ and letting $m \to \infty$, the kernel function converges to
$$K^{NT}(x, x') = \lim_{m\to\infty} K^{NT}_{\theta_0}(x, x').$$
Recent works have established that this convergence holds uniformly (Lai et al., 2023a). In the sequel, we will generally refer to the limiting kernel as the NTK for brevity. Moreover, an explicit formula for the NTK under the aforementioned random initialization can be derived.

Proposition 2.1. Under the random initialization mechanism proposed for the $W_i$'s and $b_i$'s, the neural tangent kernel has the explicit expression
$$K^{NT}(x, x') = \sum_{l=0}^{L} \Big( \sqrt{(\|x\|_2^2+1)(\|x'\|_2^2+1)}\, \kappa_1^{(l)}(\bar u) + \mathbb{1}_{l\ge 1} \Big) \prod_{r=l}^{L-1} \kappa_0\big(\kappa_1^{(r)}(\bar u)\big), \tag{2}$$
where $\bar u := (x^T x' + 1)/\sqrt{(\|x\|_2^2+1)(\|x'\|_2^2+1)}$, and $\kappa_0(t)$ and $\kappa_1(t)$ are the arc-cosine kernels of degree 0 and 1, i.e.,
$$\kappa_0(t) = \frac{1}{\pi}(\pi - \arccos t), \qquad \kappa_1(t) = \frac{1}{\pi}\Big[\sqrt{1-t^2} + t(\pi - \arccos t)\Big],$$
and $h^{(r)}$, $r \ge 1$, denotes the $r$-fold composition of a function $h$, while $h^{(0)}$ is the identity map.

These expressions differ slightly from previous results (Bietti and Mairal, 2019; Geifman et al., 2020) due to the special initialization of the bias, which simplifies subsequent derivations related to the RKHS of $K^{NT}$ over a general domain. Specifically, we introduce the following equivalence property that facilitates the subsequent statistical analysis.

Proposition 2.2. For any $\Omega \subset \mathbb{R}^d$ with Lipschitz boundary, the RKHS $\mathcal{H}_{NT}$ associated with $K^{NT}$ is norm-equivalent to the Sobolev space $W^{(d+1)/2,2}(\Omega)$.

Remark 2.1.
Since the properties of Sobolev spaces have been extensively studied, given the above proposition, many results can be derived more easily using existing results on Sobolev spaces. We emphasize that the smoothness index $(d+1)/2$ is determined by the ReLU activation function. Generally, smoother activation functions lead to higher smoothness of the corresponding NTK Sobolev space. For example, the rectified power unit activation $a \mapsto \max\{a,0\}^r$ with positive integer $r$ (Bach, 2014; Vakili et al., 2021), which is weakly differentiable of order $r$, can be shown to lead to the Sobolev space $W^{(d+2r-1)/2,2}$ via an analysis similar to that of Proposition 2.2.

We close this subsection by recalling the motivations behind using overparameterized DNNs in this paper. In practical applications, neural networks are often overparameterized, with the number of parameters exceeding the sample size. Furthermore, rather than studying the ideal global minimizers as in common statistical works, we investigate the statistical properties of the algorithmic estimators, particularly under the overparameterized optimization convergence guarantee. By introducing stochastic initialization and analyzing the optimization process of the overparameterized DNNs, we can, with high probability, avoid the counterexample mentioned in Section 1.2 on the degeneration of the DNN tangent space, which in turn helps to establish the $\sqrt{n}$-consistency/asymptotic normality of the estimator of the parametric component.

2.2 Semiparametric neural M-estimation

Consider a general semiparametric statistical model $P_{\beta,f}$, where $\beta \in \mathbb{R}^p$ is a Euclidean parameter of interest and $f: \mathbb{R}^d \mapsto \mathbb{R}$ is the nuisance function in an infinite-dimensional space. Let $(Y, Z, X)$ follow the distribution $P_{\beta,f}$, where $Z \in \mathbb{R}^p$ and $X \in \mathbb{R}^d$ are finite-dimensional covariates. For simplicity, the domains of $Z$ and $X$ are assumed to be bounded,
compact sets with regular boundaries, and their densities are bounded away from zero and infinity. Moreover, we assume that $E[Y^2|Z,X]$ is finite. Consider a nonnegative loss function $l_{\beta,f} = l(Z^T\beta, f(X), Y)$ such that the true parameters $(\beta_0, f_0)$ minimize the risk $P l_{\beta,f} = \int l(Z^T\beta, f(X), Y)\, dP_{\beta,f}$. Common choices for $l$ include the negative log-likelihood, the squared loss, and other robust loss functions. Given i.i.d. observations $\{(Y_i, Z_i, X_i), i = 1, 2, \dots, n\}$, we aim to estimate the unknown parameters $(\beta, f)$ by minimizing the criterion
$$L_n(\beta, f) := P_n l_{\beta,f} + \lambda_n J(f),$$
where $\lambda_n \ge 0$ is a tuning parameter and $J$ is a penalty term, within a suitably chosen parameter set $(\mathbb{R}^p, \mathcal{F}_n)$. Letting the nonparametric function set $\mathcal{F}_n$ be the overparameterized neural network class, for each $f_\theta \in \mathcal{F}_n$ with network parameter $\theta$ and initial parameter $\theta_0$, we define the penalty term as $J(f_\theta) = \|\theta - \theta_0\|_2^2$, which regularizes the complexity of the neural network to prevent overfitting. An alternative regularization technique is early stopping, but existing statistical analyses of early stopping in kernel gradient algorithms mainly focus on least squares losses (Yao et al., 2007; Raskutti et al., 2014). For generality, this work considers a broader class of loss functions; hence, we adopt the penalization strategy. Even under the penalization framework, there remain considerable challenges in the statistical theory. Unlike typical convergence analyses of neural network estimation, which often assume that the nonparametric function is bounded or that the loss function is Lipschitz continuous, we will study the optimization solution without boundedness assumptions. This brings difficulties when the true function does not belong to the desired space and the loss function takes a more general form than the least squares.
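As a concrete instance of the criterion $L_n(\beta, f_\theta) = P_n l_{\beta,f_\theta} + \lambda_n\|\theta-\theta_0\|_2^2$, the sketch below (our own illustration; the data-generating process, one-hidden-layer architecture, and $1/\sqrt{m}$ scaling are hypothetical choices, not the paper's exact setup) evaluates it for the squared loss $l = (Y - Z^T\beta - f(X))^2$, using the paper's centering $f_\theta = \hat f_\theta - \hat f_{\theta_0}$.

```python
import numpy as np

# Sketch of the penalized criterion L_n(beta, f_theta) = P_n l + lambda_n ||theta - theta0||^2
# with squared loss and a one-hidden-layer ReLU network (hypothetical setup).

def f_hat(theta, X):
    W, b, v = theta
    return np.maximum(X @ W.T + b, 0.0) @ v / np.sqrt(len(v))

def criterion(beta, theta, theta0, X, Z, Y, lam):
    f_vals = f_hat(theta, X) - f_hat(theta0, X)      # f_theta = hat_f_theta - hat_f_theta0
    emp_risk = np.mean((Y - Z @ beta - f_vals) ** 2)  # P_n l
    penalty = sum(np.sum((a - a0) ** 2) for a, a0 in zip(theta, theta0))
    return emp_risk + lam * penalty

rng = np.random.default_rng(1)
n, d, p, m = 50, 2, 1, 64
X, Z = rng.standard_normal((n, d)), rng.standard_normal((n, p))
Y = Z[:, 0] + np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
theta0 = (rng.standard_normal((m, d)), rng.standard_normal(m), rng.standard_normal(m))
beta = np.zeros(p)

# At initialization theta = theta0, the centered network vanishes and the
# penalty is zero, so the criterion reduces to the empirical risk mean(Y^2).
L_init = criterion(beta, theta0, theta0, X, Z, Y, lam=0.1)
```

The centering makes the starting value of the criterion depend only on the data, which is one reason $f_{\theta_0} \equiv 0$ is convenient for the theory.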
2.3 Gradient flow optimization and convergence

Given the above considerations, we aim for our estimator to satisfy $(\hat\beta, \hat f) = (\hat\beta, f_{\hat\theta})$, where
$$(\hat\beta, \hat\theta) \in \operatorname*{argmin}_{\beta\in\mathbb{R}^p,\, f_\theta\in\mathcal{F}_n} L_n(\beta, f_\theta). \tag{3}$$
To optimize the objective in (3), gradient-based algorithms are widely employed, with various modifications such as stochastic sampling and momentum methods being particularly common in practice. A substantial body of literature addresses optimization algorithms for neural networks, highlighting diverse methodologies and their relative effectiveness in improving training outcomes. In this study, for theoretical convenience, we focus on gradient flow, which serves as the continuous counterpart of gradient descent. Let $\hat\theta_t$ denote the neural network parameters at time $t \ge 0$. Correspondingly, let $\hat f_t = f_{\hat\theta_t}$ represent the neural network output, and let $K^{NT}_t = K^{NT}_{\hat\theta_t}$ denote the neural tangent kernel at time $t \ge 0$. With the initial values set as $\hat\theta_0 = \theta_0$ and $\hat f_0(x) = 0$ for all $x \in \Omega$, the gradient flow training process of the parameters $(\hat\beta_t, \hat\theta_t)$ is governed by the following equations:
$$\frac{d}{dt}\hat\beta_t = -\nabla_\beta L_n\big(\hat\beta_t, \hat\theta_t\big) = -\frac{1}{n}\sum_{i=1}^n l'_1(Z_i^T\hat\beta_t, \hat f_t(X_i), Y_i)\, Z_i, \tag{4}$$
and
$$\frac{d}{dt}\hat\theta_t = -\nabla_\theta L_n\big(\hat\beta_t, \hat\theta_t\big) = -\frac{1}{n}\sum_{i=1}^n l'_2(Z_i^T\hat\beta_t, \hat f_t(X_i), Y_i)\,\nabla_\theta \hat f_t(X_i) - 2\lambda_n(\hat\theta_t - \theta_0). \tag{5}$$
Then the flow of $f$ is defined by a dynamical system as
$$\frac{d}{dt}\hat f_t(x) = \nabla_\theta \hat f_t(x)^T \frac{d}{dt}\hat\theta_t = -\frac{1}{n}\sum_{i=1}^n l'_2(Z_i^T\hat\beta_t, \hat f_t(X_i), Y_i)\, K^{NT}_t(x, X_i) - 2\lambda_n \nabla_\theta \hat f_t(x)^T(\hat\theta_t - \theta_0). \tag{6}$$
For the theoretical analysis, we take an ideal estimator in the RKHS associated with the reproducing kernel $K^{NT}$ as a surrogate, which is shown to be sufficiently close to the neural algorithmic estimator $(\hat\beta_t, \hat\theta_t)$. In this context, the penalty term is defined as $\tilde J(f) = \|f\|^2_{\mathcal{H}_{NT}}$, which is standard and encourages smoother solutions by penalizing the complexity of the function $f$ in the RKHS norm. This
penalty term leads to the following regularized optimization criterion:
$$\bar L_n(\beta, f) = P_n l_{\beta,f} + \lambda_n \tilde J(f), \tag{7}$$
where $P_n l_{\beta,f}$ still represents the empirical risk, $\lambda_n$ is the tuning parameter, and $\lambda_n \tilde J(f)$ is the penalty. Consequently, within the RKHS, a gradient flow training process $\{\bar\beta_t, \bar f_t\}$ with initial values $\bar\beta_0 = 0$ and $\bar f_0(x) = 0$, $\forall x \in \Omega$, is adopted as
$$\frac{d}{dt}\bar\beta_t = -\nabla_\beta \bar L_n\big(\bar\beta_t, \bar f_t\big) = -\frac{1}{n}\sum_{i=1}^n l'_1(Z_i^T\bar\beta_t, \bar f_t(X_i), Y_i)\, Z_i,$$
and
$$\frac{d}{dt}\bar f_t = -\nabla_f \bar L_n\big(\bar\beta_t, \bar f_t\big) = -\frac{1}{n}\sum_{i=1}^n l'_2(Z_i^T\bar\beta_t, \bar f_t(X_i), Y_i)\, K^{NT}(\cdot, X_i) - 2\lambda_n \bar f_t.$$
Define the subspace $\mathcal{H}_1 = \mathrm{Span}\{K^{NT}(\cdot, X_i), i = 1, 2, \dots, n\}$ within the considered NTK RKHS. It is straightforward to verify that $\bar f_t \in \mathcal{H}_1$, $\forall t \ge 0$, meaning that the evolution of $\bar f_t$ is restricted to this finite-dimensional subspace. To analyze the optimization convergence, the following assumption on convexity and smoothness is standard.

Assumption 1 (Conditions for optimization). The loss function $l = l(\cdot,\cdot,\cdot)$ is convex and nonnegative, and the gradient $\nabla l$ is $B_1$-Lipschitz continuous with a constant $B_1$.

This assumption on the loss function ensures that the gradient flow converges. The convexity guarantees convergence, while the Lipschitz continuity of the gradient ensures stability during optimization, preventing abrupt changes that could hinder convergence. Denote the initial loss $L_0 = \bar L_n(\beta_0, f_0)$ and assume that the RKHS global minimizer of (7) satisfies $\|\bar\beta^*\|_2^2 + \|\bar f^*\|^2_{\mathcal{H}_{NT}} = \bar B^2$. The following conclusion characterizes the convergence of the ideal gradient flow for general loss functions.

Proposition 2.3. Given the sample $\{(Y_i, Z_i, X_i), i = 1, 2, \dots, n\}$, consider the optimization problem (7) in the Hilbert space $\mathcal{H}_1$ with the gradient flow method described above. The following results on the optimization convergence rate hold.

(1). When Assumption 1 holds, we have
$$\|\nabla_\beta \bar L_n(\bar\beta_t, \bar f_t)\|_2^2 + \|\nabla_f \bar L_n(\bar\beta_t, \bar f_t)\|^2_{\mathcal{H}_{NT}} \lesssim \bar L_n(\bar\beta_t, \bar f_t) - \bar L_n(\bar\beta^*, \bar f^*) \lesssim \frac{\bar B^2 L_0}{L_0 t + \bar B^2}.$$
(2). Additionally, if (7) is $\mu$-strongly convex for a positive number $\mu$, we have
$$\|\nabla_\beta \bar L_n(\bar\beta_t, \bar f_t)\|_2^2 + \|\nabla_f \bar L_n(\bar\beta_t, \bar f_t)\|^2_{\mathcal{H}_{NT}} \lesssim \bar L_n(\bar\beta_t, \bar f_t) - \bar L_n(\bar\beta^*, \bar f^*) \lesssim e^{-2\mu t} L_0.$$

Remark 2.2. The above proposition demonstrates that the convergence rate of the optimization is sublinear for convex objectives and linear for strongly convex objectives, which is standard in optimization theory. Therefore, to obtain an optimizer $(\bar\beta_t, \bar f_t)$ that satisfies
$$\bar L_n(\bar\beta_t, \bar f_t) - \bar L_n(\bar\beta^*, \bar f^*) \quad \text{and} \quad \|\nabla_\beta \bar L_n(\bar\beta_t, \bar f_t)\|_2 + \|\nabla_f \bar L_n(\bar\beta_t, \bar f_t)\|_{\mathcal{H}_{NT}} \lesssim n^{-1},$$
we need to train up to a time $t_s$ such that $t_s \gtrsim n^2 \bar B^2$ when the objective is merely convex, and $t_s \gtrsim \mu^{-1}\log n$ when it is $\mu$-strongly convex. Furthermore, the penalization procedure guarantees $\|\bar f^*\|^2_{\mathcal{H}_{NT}} \lesssim L_0/\lambda_n$, and the consistency (or at least boundedness) of $\bar\beta^*$ is usually easy to establish in statistical problems. Therefore, without involving neural networks, upper bounds on $\bar B^2$ for specific problems are typically available.

The following theorem demonstrates that, for any given training time, the discrepancy between the neural network and the RKHS optimization results can be made sufficiently small when the neural network is wide enough.

Theorem 2.1. Fix any positive real number $\xi \in (0,1)$ and training time $t$. Let the network width $m(t, \xi) \ge \mathrm{poly}(n, \lambda^{-1}, L_0, \log(1/\xi), \exp(t))$ be large enough. Then, with probability at least $1 - \xi$ over the neural network random initialization, the differences between the neural network estimation and the RKHS estimation can be bounded as
$$\max_{1\le s\le t}\big\{\|\hat\beta_s - \bar\beta_s\|, \|\hat f_s - \bar f_s\|_{L^\infty}\big\} \le o(n^{-1/2}).$$

This theorem demonstrates that the gradient flows of DNNs and of the NTK RKHS can exhibit significant similarity when the neural network is sufficiently wide. Although the optimization landscape of DNNs is generally complex, in the NTK overparameterized regime it closely resembles that of the RKHS.
This similarity also helps avoid, with high probability, the degeneration of the tangent space seen in the counterexample of Section 1.2. Additionally, the requirement on the width of the DNNs includes an exponential term related
to the training time, which is used to bound the accumulated dynamic differences between the two training processes. This factor accounts for the cost of accommodating general, unstructured loss functions, but can be omitted when considering the least squares loss as a special case.

3 Statistical theory of algorithmic neural M-estimation

In this section, we discuss the statistical properties of the neural semiparametric M-estimator, focusing on the convergence rate of the nonparametric part and the asymptotic normality of the parametric component. Notably, we do not impose common assumptions such as Lipschitz continuity (e.g., Huang et al., 2024) or strong convexity of the loss function, as these conditions may limit the applicability of our results. Furthermore, because our analysis pertains to the algorithmic optimizer, we do not assume that the nonparametric component is bounded; this introduces additional challenges for the theoretical analysis. Before discussing the details, we introduce some basic assumptions. First, some basic conditions on the model regularity are summarized below.

Assumption 2. (1) The covariate $(Z, X)$ takes values in a bounded domain with a joint probability density function bounded away from $0$ and $\infty$.
(2) Conditional on $(Z, X)$, the second order moment $E[Y^2|Z=z, X=x]$ is bounded.
(3) Derivatives and expectations are exchangeable in the sense that
$$\frac{\partial}{\partial u} E[l(u_1, u_2, Y)\,|\,Z=z, X=x] = E\Big[\frac{\partial}{\partial u} l(u_1, u_2, Y)\,\Big|\,Z=z, X=x\Big],$$
for $u = u_1$ and $u = u_2$.

When establishing the nonparametric convergence, the margin condition (Tsybakov, 2004) and the Bernstein condition (Bartlett and Mendelson, 2006) are often employed to achieve fast rates in statistical and learning analysis. Consider, for example, a simple nonparametric model $P_f$ that depends on a nonparametric parameter $f$ with a corresponding loss function $l_f$. These conditions quantify the identifiability and the curvature of the objective function $f \mapsto P l_f$ at some minimum $f^*$.
In the margin condition, $f^* = f_0$ is the minimizer of the risk over all measurable functions, whereas in the Bernstein condition, $f^*$ typically minimizes the risk over the candidate function class $\mathcal{F}$; see Lecué (2011) for more discussion. In one of their specific forms, these conditions relate the excess risk $P(l_f - l_{f^*})$ and the $L^2$ norm $\|f - f^*\|^2_{L^2} \asymp P(f(X) - f^*(X))^2$ through inequalities of the form
$$P(l_f - l_{f^*}) \gtrsim \|f - f^*\|^{2\kappa}_{L^2},$$
typically with $\kappa = 1$. This implies better concentration and smaller localized sets, and hence helps achieve fast convergence. However, verifying the margin/Bernstein conditions usually requires that $f$ be near $f_0$, for example, that $\|f - f_0\|_{L^\infty}$ be bounded. This has limited the application of related results when the boundedness of the nonparametric functions does not hold naturally. Especially in this work, assuming boundedness for the optimizer under gradient flow is particularly unsuitable. In semiparametric problems, unlike the finite-dimensional parameters, for which consistency is typically easy to establish, $L^\infty$ boundedness of the nonparametric estimator is often nontrivial, especially when the true function does not lie within the considered RKHS. The following Huberized margin condition holds more easily for unbounded function classes.

Assumption 3 (Huberized margin condition for semiparametric estimation). There is a constant $B_2 > 0$ such that for every $\beta \in \mathbb{R}^p$ and $f \in \mathcal{F}$,
$$P(l_{\beta,f} - l_{\beta_0,f_0}) \ge B_2\, \frac{\|\beta - \beta_0\|^2 + \|f - f_0\|^2_{L^2(X)}}{1 + \|\beta - \beta_0\| + \|f - f_0\|_{L^\infty}}. \tag{8}$$

Typically, when the loss is bounded or globally Lipschitz continuous (e.g., Huang et al., 2024), the class complexity is easy to bound,
but such assumptions may not be general enough. In the proposed Huberized condition, ignoring the finite-dimensional parameter $\beta$ for convenience, when $\|f - f_0\|_{L^\infty} \lesssim 1$ the right-hand side is nearly $\|f - f_0\|^2_{L^2(X)}$, while when $\|f - f_0\|_{L^\infty} \gtrsim 1$ the right-hand side is even smaller than $\|f - f_0\|_{L^1(X)}$. If only the local curvature is of concern, then this condition is equivalent to the commonly used margin condition $P(l_f - l_{f_0}) \gtrsim \|f - f_0\|^2_{L^2}$. As pointed out earlier, it is not proper to assume that the algorithmic neural network optimizer is bounded, and the $L^\infty$ consistency or boundedness of the nonparametric estimator is usually nontrivial when the true function does not lie within the considered RKHS. Moreover, in (8) for semiparametric models, the $\|\beta - \beta_0\|$ in the denominator is included for symmetry; it is usually not necessary, because the consistency of the estimator of $\beta$ is verifiable in common cases. When the risk is strongly convex, the condition is obviously satisfied. Intuitively, the proposed assumption only requires that the Hessian matrix be nonsingular near the minimum and that the gradient be bounded below at the far end. Strong convexity of the loss function across the entire domain is unnecessary; local strong convexity near the true minimizer is often sufficient. This is not much stricter than Assumption 1; more examples for illustration will be given in Section 4.

The next assumption requires that the true parameter lies within the ideal space.

Assumption 4. The true parameter $f_0$ belongs to the RKHS $\mathcal{H}_{NT}$.

This condition is common in traditional statistical works and yields some boundedness that simplifies the analysis. However, it does not always hold in a broad context of learning problems and is in fact not necessary for achieving the desired properties. Thus, we relax this condition to allow the true function to lie outside the reproducing space of the NTK.
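For intuition, the Huberized bound can be checked numerically for the squared loss, where the excess risk equals $\|f - f_0\|^2_{L^2}$ exactly when $f_0(x) = E[Y|X=x]$, so (8) (with the $\beta$ part dropped) holds with $B_2 = 1$ since the denominator exceeds one. The Monte Carlo sketch below is our own illustration; the choices of $f_0$ and the perturbation are arbitrary.

```python
import numpy as np

# Numeric sanity check (our illustration, not from the paper): for squared loss
# l_f = (Y - f(X))^2 with f_0(x) = E[Y | X = x], the excess risk is ||f - f_0||_{L2}^2,
# which dominates the Huberized lower bound ||f - f_0||_{L2}^2 / (1 + ||f - f_0||_inf).

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=200_000)
f0 = np.sin(np.pi * X)
Y = f0 + rng.standard_normal(X.shape)

def excess_risk(f_vals):
    return np.mean((Y - f_vals) ** 2) - np.mean((Y - f0) ** 2)

# a deliberately large perturbation of f_0 (mimicking an unbounded candidate)
f = f0 + 5.0 * X ** 3
l2_sq = np.mean((f - f0) ** 2)           # Monte Carlo ||f - f_0||_{L2(X)}^2
sup_norm = np.max(np.abs(f - f0))        # approx ||f - f_0||_inf on the sample
huberized_lb = l2_sq / (1.0 + sup_norm)  # right-hand side of (8) with B_2 = 1
```

Here the excess risk is about six times the Huberized lower bound, illustrating how the condition remains slack even when the perturbation is large in sup norm.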
Assumption 4'. The true parameter $f_0$ does not reside within the RKHS $\mathcal{H}_{NT}$, but instead belongs to a Sobolev space $W^{s,2}$ with $s>d/2$.

Here, we assume that the smoothness of $f_0$ satisfies $s>d/2$, a condition crucial for establishing the statistical properties under consideration, such as the convergence rate and the attainment of parametric asymptotic normality. This assumption is particularly significant in the context of semiparametric statistics, where achieving a $\sqrt{n}$-consistent estimator for finite-dimensional parameters generally necessitates that the nonparametric convergence rate be faster than $n^{-1/4}$ (Van der Vaart, 2000; Kosorok, 2008).

Now, we introduce some basic concepts and assumptions for the semiparametric model (Van der Vaart, 2000; Kosorok, 2008). Given a fixed $f$, let $\mathcal{G}_f$ denote the collection of all smooth functional curves $g_b$ that run through $f$ at $b=0$, i.e.
$$\{g_b: g_b\in L^2(\Omega)\ \text{for}\ b\in(-1,1)\ \text{and}\ \lim_{b\to 0}\|b^{-1}(g_b-g_0)-h\|_{L^2}= 0\ \text{for some function}\ h\}.$$
Then let
$$T_f\mathcal{G}_f=\Big\{h:\Omega\to\mathbb{R},\ \text{there is a}\ g_b\in\mathcal{G}_f\ \text{such that}\ \lim_{b\to 0}\big\|b^{-1}(g_b-g_0)-h\big\|_{L^2}=0\Big\}$$
be all the potential tangent directions for the nuisance parameter. Now we set $\overline{T}_f\mathcal{G}_f$ to be the closed linear span of $T_f\mathcal{G}_f$ under linear combinations. For any $h$ in $\overline{T}_f$, denote
$$S_1(\beta,f)=\frac{\partial}{\partial\beta}l_{\beta,f}\quad\text{and}\quad S_2(\beta,f)[h]=\frac{\partial}{\partial b}\bigg|_{b=0}l_{\beta,g_b},$$
where $g_b$ satisfies $(\partial/\partial b)\,g_b|_{b=0}=h$. For convenience, the tangent set for $f$ at $(\beta,f)$ is defined as $\Lambda_{\beta,f}:=\{S_2(\beta,f)[h]:h\in\overline{T}_f\}$. Further, we also define
$$S_{11}(\beta,f)=\frac{\partial}{\partial\beta}S_1(\beta,f),\qquad S_{12}(\beta,f)[h]=\frac{\partial}{\partial b}\bigg|_{b=0}S_1(\beta,g_b),$$
$$S_{21}(\beta,f)[h]=\frac{\partial}{\partial\beta}S_2(\beta,f)[h]\quad\text{and}\quad S_{22}(\beta,f)[h_1][h]=\frac{\partial}{\partial b}\bigg|_{b=0}S_2(\beta,g_b)[h_1],$$
where $h_1$ is another function in $\overline{T}_f$. As is well known, in the special case when
the loss is a negative log-likelihood, the efficient score is essentially the projection of the score function $S_1$ onto the orthogonal complement of the tangent set $\Lambda_{\beta,f}$. Denote by $\Pi_{\beta,f}$ the orthogonal projection onto the closure of the linear span of $\Lambda_{\beta,f}$. The efficient score function for $\beta$ at $(\beta_0,f_0)$ is then given by
$$\hat S(\beta_0,f_0)=S_1(\beta_0,f_0)-\Pi_{\beta_0,f_0}S_1(\beta_0,f_0).$$
Denote $S_2(\beta,f)[\mathbf{h}]=\big(S_2(\beta,f)[h_1],S_2(\beta,f)[h_2],\dots,S_2(\beta,f)[h_p]\big)$, where $\mathbf{h}=(h_1,h_2,\dots,h_p)\in\overline{T}_f^{\,p}$. Then define $S_{12}[\mathbf{h}]$, $S_{21}[\mathbf{h}]$ and $S_{22}[\mathbf{h}_1][\mathbf{h}_2]$ accordingly. If there is $\hat{\mathbf{h}}=(\hat h_1,\hat h_2,\dots,\hat h_p)\in\overline{T}_f^{\,p}\mathcal{G}_f$ such that, for any $\mathbf{h}=(h_1,\dots,h_k)\in\overline{T}_f^{\,p}\mathcal{G}_f$,
$$P\Big(S_{12}(\beta_0,f_0)[\mathbf{h}]-S_{22}(\beta_0,f_0)[\hat{\mathbf{h}}][\mathbf{h}]\Big)=0,\qquad(9)$$
then we can also determine the efficient score for likelihood estimation as
$$\hat S(\beta_0,f_0)=S_1(\beta_0,f_0)-S_2(\beta_0,f_0)[\hat{\mathbf{h}}].$$
Similar to the negative log-likelihood case, the following assumptions are made for the general loss function.

Assumption 5. (1) (Positive information) There exists $\hat{\mathbf{h}}=(\hat h_1,\dots,\hat h_p)\in\overline{T}_f^{\,p}\mathcal{G}_f$ such that (9) holds for any $\mathbf{h}=(h_1,\dots,h_k)\in\overline{T}_f^{\,p}\mathcal{G}_f$. Further assume that $\hat h_i\in W^{s,2}(\Omega)$, $i=1,2,\dots,p$, where $s>d/2$. Given $\hat{\mathbf{h}}$, the matrix
$$A=P\Big(S_{11}(\beta_0,f_0)-S_{21}(\beta_0,f_0)[\hat{\mathbf{h}}]\Big)$$
is nonsingular.

(2) (Smoothness of the model) For all possible parameters $(\beta,f)$ satisfying $\{(\beta,f):\|\beta-\beta_0\|+\|f-f_0\|_{L^2(X)}\lesssim\delta_n\}$ with $\delta_n=o(n^{-1/4})$,
$$P\Big\{\big(\hat S(\beta,f)-\hat S(\beta_0,f_0)\big)-\big(S_{11}(\beta_0,f_0)-S_{21}(\beta_0,f_0)[\hat{\mathbf{h}}]\big)(\beta-\beta_0)-\big(S_{12}(\beta_0,f_0)[f-f_0]-S_{22}(\beta_0,f_0)[\hat{\mathbf{h}}][f-f_0]\big)\Big\}=o(\|\beta-\beta_0\|)+O\big(\|f-f_0\|_{L^2}^2\big).$$

In the above assumptions, the first one guarantees the existence and smoothness of the directions $\hat{\mathbf{h}}$, which correspond to the least favorable direction when the loss is the negative log-likelihood. This direction can be determined by solving the equations in (9). The second condition can usually be verified by a Taylor expansion. Under the above assumptions, we have the following general conclusion.
Theorem 3.1. Consider the proposed semiparametric neural M-estimation, and assume that the estimator $(\hat\beta_t,\hat f_t)$ is trained from (4), (5), (6) with training time $t_s$ in Remark 2.2 and network width $m(t_s,\xi)$ in Theorem 2.1 with $\xi\in(0,1)$. Then, with probability at least $1-\xi$ over the neural network random initialization, the following results hold.

If $\hat\beta_{t_s}$ and $Pl_{\hat\beta_{t_s},\hat f_{t_s}}$ are bounded, and Assumptions 1, 3, 4 and 5 hold, then setting $\lambda\asymp n^{-(d+1)/(2d+1)}$, we have
$$\|\hat f_{t_s}-f_0\|_{L^2}^2=O_p\big(n^{-(d+1)/(2d+1)}\log n\big).\qquad(10)$$
If Assumption 4 is replaced by Assumption 4', then setting $\lambda\asymp n^{-(d+1)/(2s+d)}$, the nonparametric rate becomes
$$\|\hat f_{t_s}-f_0\|_{L^2}^2=O_p\big(n^{-2s/(2s+d)}\log n\big).\qquad(11)$$
In both cases, we have the asymptotic normality
$$\sqrt{n}\big(\hat\beta_{t_s}-\beta_0\big)=n^{1/2}A^{-1}P_n\hat S(\beta_0,f_0)+o_p(1)\xrightarrow{d}N(0,\Sigma),\qquad(12)$$
where $\Sigma=A^{-1}\big(P\{\hat S(\beta_0,f_0)\hat S(\beta_0,f_0)^T\}\big)(A^{-1})^T$.

The boundedness condition on $\hat\beta_{t_s}$ and $Pl_{\hat\beta_{t_s},\hat f_{t_s}}$ is essentially a boundedness requirement for the parametric part, not for the nonparametric $f$. For nonparametric problems without the finite-dimensional parameter $\beta$, this condition can be removed under the same proof. For semiparametric models, this condition is often verifiable in specific examples, as in the applications in the next section. Let $s_0=(d+1)/2$ denote the smoothness of the RKHS corresponding to the NTK. The convergence rate in (10) can then be expressed as nearly $n^{-2s_0/(2s_0+d)}$ with the tuning parameter $\lambda\asymp n^{-2s_0/(2s_0+d)}$. This smoothness is essentially determined by the ReLU activation function, and the rate is minimax optimal for many statistical models under Assumption 4. When the true function does not lie within the considered ReLU NTK RKHS, as stated in Assumption 4', the rate in (11) remains minimax optimal. Furthermore, Assumption 4' ensures that the nonparametric rate is faster than $n^{-1/4}$. Therefore, regardless of whether the true function resides in the considered RKHS, the estimator is shown to be asymptotically normal.
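As a quick arithmetic check on these exponents (our own sketch, not part of the paper's argument), the rate in (10) is the $s=s_0=(d+1)/2$ instance of $n^{-2s/(2s+d)}$, and Assumption 4' ($s>d/2$) is exactly what makes the $L^2$ rate $n^{-s/(2s+d)}$ beat $n^{-1/4}$:

```python
from fractions import Fraction

def sq_rate(s, d):
    """Exponent a in ||f_hat - f0||^2_{L2} = O_p(n^{-a} log n): a = 2s/(2s+d)."""
    return (2 * s) / (2 * s + d)  # exact with Fraction inputs

d = 5
s0 = Fraction(d + 1, 2)  # NTK smoothness for ReLU: s0 = (d+1)/2
# With s = s0, the exponent 2s0/(2s0+d) reduces to (d+1)/(2d+1), as in (10).
assert sq_rate(s0, d) == Fraction(d + 1, 2 * d + 1)

# s/(2s+d) > 1/4  <=>  4s > 2s + d  <=>  s > d/2, i.e. Assumption 4'.
for s in (s0, Fraction(d, 2) + Fraction(1, 100), Fraction(3 * d)):
    assert s / (2 * s + d) > Fraction(1, 4)
```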
An estimator is semiparametric efficient if its information (i.e., the inverse of its variance) equals the supremum
of the information across all parametric submodels. The submodel that attains this supremum is typically called the least favorable or hardest submodel. By the above result, when the loss function is the negative log-likelihood and the model class contains the hardest submodel, the proposed semiparametric estimator is efficient. Conversely, if the loss function differs, suggesting model misspecification, the parametric estimator remains $\sqrt{n}$-consistent and asymptotically normal.

We conclude this section with the following remarks. To better align with practice, we consider overparameterized neural networks and general loss functions, allowing the true function to lie outside the desired RKHS. Distinct from the existing literature, we refrain from assuming that the output of the optimized network is bounded and study the algorithmic solution, which presents significant challenges for the statistical analysis and inspires a new yet weaker margin condition. Then, by combining the peeling argument with the interpolation bound, we precisely characterize the entropy and establish the nonparametric convergence rate. Based on this, we obtain the asymptotic normality of the parametric estimators, with high probability over the random initialization, avoiding the degeneration of the tangent space and addressing the key question posed in Section 1.2. Lastly, we acknowledge the potential for broader applications of the proposed framework. It essentially provides a fundamental approach to addressing statistical problems that involve characterizing the tangent spaces of the nonparametric component, a challenge that frequently arises at the intersection of deep learning and statistical inference.

4 Examples

In this section, we introduce two common examples to illustrate the proposed semiparametric neural M-estimator and the established theoretical framework.
4.1 Regression

Now we consider the regression model
$$Y_i=f_0(X_i)+Z_i^T\beta_0+\epsilon_i,\qquad(13)$$
where $Y_i$ represents the response variable, $X_i$ and $Z_i$ are bounded covariates with densities bounded away from $0$ and $\infty$, and $\epsilon_i$ denotes independent measurement noise with mean zero, finite variance and a symmetric distribution. For estimation, we define the loss function as $l=l(Y_i-f(X_i)-Z_i^T\beta)$, where $l$ is a univariate function. The following condition concerns the covariate distribution and the loss function.

Assumption 6. (1) The matrix $E\big[(Z-E[Z|X])(Z-E[Z|X])^T\big]$ is nonsingular.

(2) The univariate function $l$ is a nonnegative, even and convex function with Lipschitz continuous derivative $l'$ and $l(0)=l'(0)=0$. Additionally, the pointwise risk $R(s)=E_\epsilon[l(s+\epsilon)]$ is convex and $R''(s)\ge B_3>0$ for $s:|s|\le b$, with some $b$ large enough and a positive number $B_3$.

In this example, symmetry of the noise and loss functions is assumed to ensure that $(\beta_0,f_0)$ is the minimizer of the risk $E_Pl_{\beta,f}$, thereby simplifying the analysis. Condition (1) in Assumption 6 is standard (Kosorok, 2008) to ensure
$$E\Big[\big(Z^T\beta+f(X)-Z^T\beta_0-f_0(X)\big)^2\Big]\gtrsim\|\beta-\beta_0\|^2+\|f-f_0\|_{L^2}^2$$
in partially linear models. Furthermore, the second condition assumes that the risk function is strongly convex within a local neighborhood. Many commonly used loss functions, such as the least squares loss and the Huber loss, satisfy this assumption. Assumption 6 guarantees that Assumption 3 holds, as shown in Lemma ?? in the Supplementary Material (Yan et al., 2025a).

Theorem 4.1. Consider the proposed semiparametric neural M-estimation for model (13) and set the hyperparameters as in Theorem 3.1. Under Assumptions 4, 5(1) and 6, setting $\lambda\asymp n^{-(d+1)/(2d+1)}$, with
probability at least $1-\xi$ over the neural network random initialization, we have
$$\|\hat f_{t_s}-f_0\|_{L^2}^2=O_p\big(n^{-(d+1)/(2d+1)}\log n\big).$$
If Assumption 4 is replaced by Assumption 4', setting $\lambda\asymp n^{-(d+1)/(2s+d)}$, the nonparametric rate becomes
$$\|\hat f_{t_s}-f_0\|_{L^2}^2=O_p\big(n^{-2s/(2s+d)}\log n\big).$$
In both cases, we have the asymptotic normality
$$\sqrt{n}\big(\hat\beta_{t_s}-\beta_0\big)=n^{1/2}A^{-1}P_n\hat S(\beta_0,f_0)+o_p(1)\xrightarrow{d}N(0,\Sigma),\qquad(14)$$
where $\Sigma=A^{-1}\big(P\{\hat S(\beta_0,f_0)\hat S(\beta_0,f_0)^T\}\big)(A^{-1})^T$.

Some previous works studied related challenges in the least squares nonparametric regression problem (Mendelson and Neeman, 2010; Steinwart et al., 2009). In the above theorem, whether or not the true function belongs to the considered RKHS, we establish the optimal nonparametric convergence rate and parametric asymptotic normality, without assuming that candidate functions are bounded and while allowing unbounded responses.

4.2 Classification

In this subsection, we consider the following binary classification problem. Suppose that we observe an independent and identically distributed random sample $\{(Y_i,Z_i,X_i),\,i=1,2,\dots,n\}$, where $Y_i\in\{0,1\}$ denotes a binary response, and the $X_i$'s and $Z_i$'s are bounded covariates with densities bounded away from $0$ and $\infty$. We assume that $Y$ follows a Bernoulli distribution determined by
$$P(Y=1)=\sigma\big(f_0(X)+Z^T\beta_0\big)\quad\text{and}\quad P(Y=0)=1-\sigma\big(f_0(X)+Z^T\beta_0\big),\qquad(15)$$
where $\sigma:\mathbb{R}\mapsto[0,1]$ is a continuously differentiable monotone link function. Hence the loss function can be taken as the negative log-likelihood
$$l(Z^T\beta,f(X),Y)=-Y\log\sigma\big(f(X)+Z^T\beta\big)-(1-Y)\log\big(1-\sigma(f(X)+Z^T\beta)\big).\qquad(16)$$
Common choices for $\sigma$ include the sigmoid function $\sigma(t)=(1+e^{-t})^{-1}$ and the cumulative normal distribution function, corresponding to the logistic and probit models, respectively.
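As a minimal sketch (function names are ours), the working loss (16) under the logistic link is:

```python
import numpy as np

def sigmoid(t):
    """Logistic link sigma(t) = 1 / (1 + exp(-t))."""
    return 1.0 / (1.0 + np.exp(-t))

def loss(y, z, beta, f_x):
    """Negative log-likelihood (16): y in {0, 1}, z a covariate vector,
    beta the parametric part, f_x the value f(X)."""
    p = sigmoid(f_x + z @ beta)
    return -y * np.log(p) - (1.0 - y) * np.log(1.0 - p)

beta = np.array([1.0, 0.75])
z = np.array([0.2, 0.4])
# When the linear predictor f(X) + Z'beta is 0, p = 1/2 and the loss
# equals log 2 for either value of y.
val = loss(1, z, beta, -float(z @ beta))  # = log 2
```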
In the theoretical analysis, we allow for model misspecification, assuming that the data-generating process follows
$$P(Y=1)=\pi\big(f_1(X)+Z^T\beta_1\big)\quad\text{and}\quad P(Y=0)=1-\pi\big(f_1(X)+Z^T\beta_1\big),\qquad(17)$$
where $\pi\neq\sigma$ is a different link function, for some $\beta_1\in\mathbb{R}^p$ and $f_1\in L^2(\Omega)$. Despite this potential misspecification, we continue to use the working link function $\sigma$ and the loss function as specified in (16). From a variational perspective, the minimizer $(\beta_0,f_0)$ of $Pl_{\beta,f}$ can also be interpreted as the minimizer of the Kullback-Leibler divergence
$$P\left[\log\frac{P_{\beta,f}}{P_0}\right]=P\left[\log\frac{\sigma\big(f(X)+Z^T\beta\big)^Y\big(1-\sigma(f(X)+Z^T\beta)\big)^{1-Y}}{\pi\big(f_1(X)+Z^T\beta_1\big)^Y\big(1-\pi(f_1(X)+Z^T\beta_1)\big)^{1-Y}}\right].$$
Thus, the parameters $(\beta_0,f_0)$ retain interpretability and are still parameters of the underlying distribution. Solving the equation (9), some calculation implies
$$\hat h=\frac{E\Big[Z\big(Y(\log\sigma)''+(1-Y)(\log(1-\sigma))''\big)\,\big|\,X\Big]}{E\Big[Y(\log\sigma)''+(1-Y)(\log(1-\sigma))''\,\big|\,X\Big]}.$$
The following conditions on the link function are standard.

Assumption 7. The link function $\sigma$ is Lipschitz continuous, monotone and continuously differentiable; $-\log\sigma(t)$ and $-\log(1-\sigma(t))$ are both convex with positive, continuous and bounded second-order derivatives. If the model is misspecified, we make the same assumptions for $\pi$ and assume that the underlying $\beta_1$, $f_1$ are bounded in the infinity norm.

This condition ensures that the loss function and the distribution satisfy Assumptions 1 and 3. Commonly used link functions, such as the logistic and probit functions, naturally satisfy this condition, thereby making these assumptions applicable in practice.

Theorem 4.2.
Consider the proposed semiparametric neural M-estimation for model (15) or (17) and set the hyperparameters as in Theorem 3.1. Under Assumptions 4, 5(1), 6(1) and 7, setting $\lambda\asymp n^{-(d+1)/(2d+1)}$, with probability at least $1-\xi$ over the neural network random initialization, we have
$$\|\hat f_{t_s}-f_0\|_{L^2}^2=O_p\big(n^{-(d+1)/(2d+1)}\log^2 n\big).$$
If Assumption 4 is replaced by Assumption 4', setting $\lambda\asymp n^{-(d+1)/(2s+d)}$, the nonparametric rate becomes
$$\|\hat f_{t_s}-f_0\|_{L^2}^2=O_p\big(n^{-2s/(2s+d)}\log^2 n\big).$$
In both cases, we have the asymptotic normality
$$\sqrt{n}\big(\hat\beta_{t_s}-\beta_0\big)=n^{1/2}A^{-1}P_n\hat S(\beta_0,f_0)+o_p(1)\xrightarrow{d}N(0,\Sigma),\qquad(18)$$
where $\Sigma=A^{-1}\big(P\{\hat S(\beta_0,f_0)\hat S(\beta_0,f_0)^T\}\big)(A^{-1})^T$.

For the partially linear classification problem, the proposed neural M-estimation achieves the minimax nonparametric rate as well
as $\sqrt{n}$-consistency, with high probability. This performance is attributed to the representational capacity of overparameterized deep neural networks and their tangent space. When the model is correctly specified, i.e. the loss function corresponds to the negative log-likelihood, the resulting parametric estimator attains efficiency.

5 Numerical studies

This section demonstrates the practical advantages of the proposed semiparametric neural M-estimator through simulation studies on both the regression and classification models described in Section 4. We use an overparameterized fully connected ReLU neural network with depth $L=5$ and width $m=1000$. For training, the stochastic gradient descent optimizer in PyTorch is employed, with a learning rate of $0.001$ and a total of $1000$ epochs. For a comprehensive evaluation, we compare our method with four baseline approaches that also estimate $\beta_0$ and $f_0(x)$ via (penalized) M-estimation. The first baseline is a regression spline estimator using B-splines, with uniformly spaced knots over the domain $[0,1]^d$. The second is the RKHS method, where the RKHS norm of the nonparametric component serves as the penalty, and the Laplacian kernel $K_h(x_1,x_2)=\exp\{-\|x_1-x_2\|/h\}$ is used. The third is a local linear estimator with the Epanechnikov kernel $k_h(x_1,x_2)=(1-\|x_1-x_2\|^2/h)/(2d(1-1/(3h)))$. The last one also employs a fully connected ReLU neural network with depth $L=5$, whose width serves as a hyperparameter; it is referred to as "underparameterized" to distinguish it from the overparameterized regime. To select the optimal hyperparameters for all methods, we split the full data into a training set (80%) and a validation set (20%) and choose the hyperparameters that minimize the validation loss, including the tuning parameters for our method and the second baseline, the number of basis functions for the first baseline, the bandwidth for the third baseline, and the width of the underparameterized neural network.
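For reference, the two baseline kernels can be written directly (a sketch with our own function names; the formulas and constants are taken verbatim from the text):

```python
import numpy as np

def laplacian_kernel(x1, x2, h):
    """Laplacian kernel K_h(x1, x2) = exp(-||x1 - x2|| / h), RKHS baseline."""
    return np.exp(-np.linalg.norm(x1 - x2) / h)

def epanechnikov_kernel(x1, x2, h, d):
    """Epanechnikov-type kernel
    k_h(x1, x2) = (1 - ||x1 - x2||^2 / h) / (2d(1 - 1/(3h))),
    as written in the text, used by the local linear baseline."""
    return (1.0 - np.linalg.norm(x1 - x2) ** 2 / h) / (2 * d * (1.0 - 1.0 / (3.0 * h)))

x = np.zeros(5)
k_lap = laplacian_kernel(x, x, h=0.5)         # distance 0 gives kernel value 1
k_epa = epanechnikov_kernel(x, x, h=1.0, d=5) # normalizing constant 1/(2d(1 - 1/3))
```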
Specifically, the hyperparameters of the regression spline method and the underparameterized neural network method are chosen from a set that ensures the total number of parameters is less than the sample size.

We generate $\{Z_i=(Z_{1i},Z_{2i})^T\}$, $i=1,2,\dots,n$, from a uniform distribution over $[0,1]^2$. Then, $\{X_i=(X_{1i},\dots,X_{di})^T\}$ is generated using the formula $X_{ji}=0.9W_{ji}+0.05(Z_{1i}+Z_{2i})$, $1\le j\le d$, where $W_{ji}$ is sampled from a uniform distribution over the interval $[0,1]$. The true finite-dimensional parameter vector is set as $\beta_0=(1,0.75)^T$. For the nonparametric part $f_0(x)$, we consider four cases with different dimensions and function forms. Here, Case 1 and Case 3 are five-dimensional examples, while Case 2 and Case 4 are their respective ten-dimensional extensions; these have been studied in the simulations of Zhong et al. (2022) and Yan et al. (2025b).

Case 1: $d=5$, $f_0(x)=5\big(x_1^2x_2^3+\log(1+x_3)+\sqrt{1+x_4x_5}+\exp(x_5/2)\big)$,

Case 2: $d=10$, $f_0(x)=\frac{5}{2}\big(x_1^2x_2^3+\log(1+x_3)+\sqrt{1+x_4x_5}+\exp(x_5/2)+x_6^2x_7^3+\log(1+x_8)+\sqrt{1+x_9x_{10}}+\exp(x_{10}/2)\big)$,

Case 3: $d=5$, $f_0(x)=5\sin\Big(\frac{6\pi}{d(d+1)}\sum_{l=1}^{5}l\,x_l\Big)$,

Case 4: $d=10$, $f_0(x)=5\sin\Big(\frac{6\pi}{d(d+1)}\sum_{l=1}^{10}l\,x_l\Big)$.

In the regression setting, the responses $Y_i$ are generated by the model $Y_i=f_0(X_i)+Z_i^T\beta_0+\varepsilon_i$, where the $\varepsilon_i$ are i.i.d. normal noise with zero mean and standard deviation $\sigma=0.5$. For the classification model, the responses $Y_i$ are drawn
from the Bernoulli distribution with probability $P(Y_i=1)=\sigma\big(f_0(X_i)+Z_i^T\beta_0-E_X[f_0(X)]\big)$, where $\sigma(x)$ is the logistic function $\sigma(x)=1/(1+e^{-x})$. Simulations are performed for three different sample sizes $n=500,1000,2000$, with $200$ repetitions for each case and method. The mean squared error (MSE) of the parametric and nonparametric components is computed, respectively, to evaluate the performance of each method. Due to computational limitations, the spline method can only handle Case 1 and Case 3, which have 5-dimensional nonparametric functions. Tables 1 and 2 report the average MSEs for the regression and classification examples, respectively. These results demonstrate that our overparameterized neural M-estimation approach outperforms the three traditional statistical methods, as well as the underparameterized neural network (i.e., properly tuned with fewer parameters than the sample size). Specifically, due to the high dimensionality of the nonparametric component, the curse of dimensionality becomes significant. Consequently, the underparameterized neural network, the spline estimator and the local linear estimator suffer from insufficient learning capacity. In all four cases, our method yields the lowest MSE, demonstrating its superior ability. Lastly, we would like to point out that, in the existing literature (e.g. Zhong et al., 2022; Fan and Gu, 2022; Wang et al., 2024; Yan et al., 2025b), regardless of the theoretical requirements imposed on the network size, the numerical experiments have in fact employed overparameterized neural networks to achieve favorable performance, which also provides certain support for our proposed method and theory.

Given that the estimation of the parametric component satisfies asymptotic normality, it is possible to perform valid statistical inference. To assess this, we simulate the empirical coverage probabilities of the corresponding estimated confidence intervals.
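For concreteness, the data-generating design above can be sketched for Case 1 of the regression model as follows (the seed, helper names and vectorization choices are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def f0_case1(x):
    """Case 1: f0(x) = 5(x1^2 x2^3 + log(1+x3) + sqrt(1+x4 x5) + exp(x5/2)), d = 5."""
    return 5.0 * (x[:, 0]**2 * x[:, 1]**3 + np.log1p(x[:, 2])
                  + np.sqrt(1.0 + x[:, 3] * x[:, 4]) + np.exp(x[:, 4] / 2.0))

def simulate_regression(n, d=5, sigma=0.5):
    beta0 = np.array([1.0, 0.75])                     # true parametric part
    Z = rng.uniform(0.0, 1.0, size=(n, 2))            # Z_i ~ Uniform[0,1]^2
    W = rng.uniform(0.0, 1.0, size=(n, d))
    X = 0.9 * W + 0.05 * (Z[:, [0]] + Z[:, [1]])      # X_ji = 0.9 W_ji + 0.05(Z_1i + Z_2i)
    Y = f0_case1(X) + Z @ beta0 + rng.normal(0.0, sigma, size=n)
    return X, Z, Y

X, Z, Y = simulate_regression(500)
```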
In the regression model, the variance of $\hat\beta$ is estimated as
$$\hat\Sigma=\left[\frac{1}{n}\sum_{i=1}^n\big(Y_i-\hat f(X_i)-Z_i^T\hat\beta\big)^2\right]\left[\min_{h\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^n\big(Z_i-h(X_i)\big)\big(Z_i-h(X_i)\big)^T\right]^{-1}.$$
In the classification model, the variance of $\hat\beta$ is estimated as
$$\hat\Sigma=\left[\min_{h\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^n\sigma\big(\hat f(X_i)+Z_i^T\hat\beta\big)\Big(1-\sigma\big(\hat f(X_i)+Z_i^T\hat\beta\big)\Big)\big(Z_i-h(X_i)\big)\big(Z_i-h(X_i)\big)^T\right]^{-1},$$
where $\mathcal{F}$ is also a neural network function class. The coverage rate is defined as the proportion of repeated experiments for which the true parameter falls within the confidence interval. Tables 3 and 4 report the coverage of the $95\%$ confidence intervals constructed by the proposed method based on $500$ repeated experiments, for each case in both the regression and classification models. Generally, the coverage rate is near $0.95$ for the proposed overparameterized neural M-estimation method, especially when the sample size $n$ is large. This supports the potential usefulness of our proposed approach for semiparametric inference.

References

Ahmad, I., Leelahanon, S., and Li, Q. (2005). Efficient estimation of a semiparametric partially linear varying coefficient model. Annals of Statistics.

Allen-Zhu, Z., Li, Y., and Song, Z. (2019). A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242-252. PMLR.

Arora, S., Du, S. S., Hu, W., Li, Z., Salakhutdinov, R. R., and Wang, R. (2019). On exact computation with an infinitely wide neural net. Advances in Neural Information Processing Systems, 32.

Bach, F. R. (2014). Breaking the curse of dimensionality with convex neural networks. CoRR, abs/1412.8690.

Bartlett, P. L. and Mendelson, S. (2006).
Empirical minimization. Probability Theory and Related Fields, 135(3):311-334.

Bauer, B. and Kohler, M. (2019). On deep learning as a remedy for the curse of dimensionality in nonparametric regression. The Annals of Statistics, 47(4):2261-2285.

Bhattacharya, S., Fan, J., and Mukherjee, D. (2023). Deep neural networks for nonparametric interaction models with diverging dimension. arXiv preprint arXiv:2302.05851.

Bietti, A. and Bach, F. (2021). Deep equals shallow for ReLU networks in kernel regimes. In International Conference on Learning Representations.

Bietti, A. and Mairal, J. (2019). On the inductive bias of neural tangent kernels. Advances in Neural Information Processing Systems, 32.

Chen, L. and Xu, S. (2020). Deep neural tangent kernel and Laplace kernel have the same RKHS. arXiv preprint arXiv:2009.10683.

Chen, M., Jiang, H., Liao, W., and Zhao, T. (2019). Nonparametric regression on low-dimensional manifolds using deep ReLU networks: Function approximation and statistical recovery. arXiv preprint arXiv:1908.01842.

Chen, X., Liu, Y., Ma, S., and Zhang, Z. (2024). Causal inference of general treatment effects using neural networks with a diverging number of confounders. Journal of Econometrics, 238(1):105555.

Cheng, G. and Huang, J. Z. (2010). Bootstrap consistency for general semiparametric M-estimation. Annals of Statistics.

Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters.

Cloninger, A. and Klock, T. (2021). A deep network construction that adapts to intrinsic dimensionality beyond the domain. Neural Networks, 141:404-419.

Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society: Series B (Methodological), 34(2):187-202.

Cox, D. R. (1975). Partial likelihood. Biometrika, 62(2):269-276.

Ding, Y. and Nan, B. (2011). A sieve M-theorem for bundled parameters in semiparametric models, with application to the efficient estimation in a linear model for censored data. Annals of Statistics, 39(6):2795.

Du, S. S., Zhai, X., Póczos, B., and Singh, A. (2018). Gradient descent provably optimizes over-parameterized neural networks. CoRR, abs/1810.02054.

Fan, J. and Gu, Y. (2022). Factor augmented sparse throughput deep ReLU neural networks for high dimensional regression. arXiv preprint arXiv:2210.02002.

Farrell, M. H., Liang, T., and Misra, S. (2021). Deep neural networks for estimation and inference. Econometrica, 89(1):181-213.

Geifman, A., Yadav, A., Kasten, Y., Galun, M., Jacobs, D., and Ronen, B. (2020). On the similarity between the Laplace and neural tangent kernels. Advances in Neural Information Processing Systems, 33:1451-1461.

Horowitz, J. L. (2012). Semiparametric Methods in Econometrics, volume 131. Springer Science & Business Media.

Hu, T., Wang, W., Lin, C., and Cheng, G. (2021). Regularization matters: A nonparametric perspective on overparametrized neural network. In International Conference on Artificial Intelligence and Statistics, pages 829-837. PMLR.

Huang, J. (1999). Efficient estimation of the partly linear additive Cox model. The Annals of Statistics, 27(5):1536-1563.

Huang, K., Liu, M., and Ma, S. (2024). Nearly optimal learning using sparse deep ReLU networks in regularized empirical risk minimization with Lipschitz loss.

Jacot, A., Gabriel, F., and Hongler, C. (2018). Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31.

Jiao, Y., Shen, G., Lin, Y., and Huang, J. (2021). Deep nonparametric regression on approximate manifolds: Non-asymptotic error bounds with polynomial prefactors.

Kohler, M. and Langer, S. (2021). On the rate of convergence of fully connected deep neural network regression estimates. The Annals of Statistics, 49(4):2231-2249.

Kosorok, M. R. (2008). Introduction to Empirical Processes and Semiparametric Inference, volume 61. Springer.

Lai, J., Xu, M., Chen, R., and Lin, Q. (2023a). Generalization ability of wide neural networks on R. arXiv preprint arXiv:2302.05933.

Lai, J., Yu, Z., Tian, S., and Lin, Q. (2023b). Generalization ability of wide residual networks.

Lecué, G. (2011). Interplay between Concentration, Complexity and Geometry in Learning Theory with Applications to High Dimensional Data Analysis. PhD thesis, Université Paris-Est.

Lee, J., Xiao, L., Schoenholz, S., Bahri, Y., Novak, R., Sohl-Dickstein, J., and Pennington, J. (2019). Wide neural networks of any depth evolve as linear models under gradient descent. Advances in Neural Information Processing Systems, 32.

Li, Y. and Liang, Y. (2018). Learning overparameterized neural networks via stochastic gradient descent on structured data. Advances in Neural Information Processing Systems, 31.

Li, Y., Yu, Z., Chen, G., and Lin, Q. (2024). On the eigenvalue decay rates of a class of neural-network related kernel functions defined on general domains. Journal of Machine Learning Research, 25(82):1-47.

Liang, H., Liu, X., Li, R., and Tsai, C.-L. (2010). Estimation and testing for partially linear single-index models. Annals of Statistics, 38(6):3811.

Lu, J., Shen, Z., Yang, H., and Zhang, S. (2021). Deep network approximation for smooth functions. SIAM Journal on Mathematical Analysis, 53(5):5465-5506.

Ma, S. and Kosorok, M. R. (2005a). Penalized log-likelihood estimation for partly linear transformation models with current status data. The Annals of Statistics, 33(5):2256-2290.

Ma, S. and Kosorok, M. R. (2005b). Robust semiparametric M-estimation and the weighted bootstrap. Journal of Multivariate Analysis, 96(1):190-217.

Ma, S., Linton, O., and Gao, J. (2021). Estimation and inference in semiparametric quantile factor models. Journal of Econometrics, 222(1):295-323.

Mammen, E. and Van de Geer, S. (1997). Penalized quasi-likelihood estimation in partial linear models. The Annals of Statistics, 25(3):1014-1035.

Mendelson, S. and Neeman, J. (2010). Regularization in kernel learning. The Annals of Statistics.

Nakada, R. and Imaizumi, M. (2020). Adaptive approximation and generalization of deep neural network with intrinsic dimensionality. Journal of Machine Learning Research, 21(174):1-38.

Raskutti, G., Wainwright, M. J., and Yu, B. (2014). Early stopping and non-parametric regression: an optimal data-dependent stopping rule. The Journal of Machine Learning Research, 15(1):335-366.

Rosenbaum, P. R. and Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41-55.

Schmidt-Hieber, J. (2019). Deep ReLU network approximation of functions on a manifold. arXiv preprint arXiv:1908.00695.

Schmidt-Hieber, J. (2020). Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics, 48(4):1875-1897.

Shen, Z. (2020). Deep network approximation characterized by number of neurons. Communications in Computational Physics, 28(5):1768-1811.

Shen, Z., Yang, H., and Zhang, S. (2022). Optimal approximation rate of ReLU networks in terms of width and depth. Journal de Mathématiques Pures et Appliquées, 157:101-135.

Steinwart, I., Hush, D. R., Scovel, C., et al. (2009). Optimal rates for regularized least squares regression. In COLT, pages 79-93.

Suh, N., Ko, H., and Huo, X. (2021). A non-parametric regression viewpoint: Generalization of overparametrized deep ReLU network under noisy observations. In International Conference on Learning Representations.

Suzuki, T. (2018). Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. arXiv preprint arXiv:1810.08033.

Tang, W., He, K., Xu, G., and Zhu, J. (2023). Survival analysis via ordinary differential equations. Journal of the American Statistical Association, 118(544):2406-2421.

Tsiatis, A. A. (2006). Semiparametric Theory and Missing Data, volume 4. Springer.

Tsybakov, A. B. (2004). Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135-166.

Vakili, S., Bromberg, M., Shiu, D., and Bernacchia, A. (2021). Uniform generalization bounds for overparameterized neural networks. CoRR, abs/2109.06099.

Van de Geer, S. A. (2000). Empirical Processes in M-estimation, volume 6. Cambridge University Press.

Van der Vaart, A. W. (2000). Asymptotic Statistics, volume 3. Cambridge University Press.

Wang, J.-L., Xue, L., Zhu, L., and Chong, Y. S. (2010). Estimation for a partial-linear single-index model. Annals of Statistics.

Wang, X., Zhou, L., and Lin, H. (2024). Deep regression learning with optimal loss function. Journal of the American Statistical Association, pages 1-20.

Yan, S., Chen, Z., and Yao, F. (2025a). Supplement to "Semiparametric estimation with overparameterized neural network".

Yan, S., Yao, F., and Zhou, H. (2025b). Deep regression for repeated measurements. Journal of the American Statistical Association, 0(ja):1-23.

Yao, Y., Rosasco, L., and Caponnetto, A. (2007). On early stopping in gradient descent learning. Constructive Approximation, 26(2):289-315.

Yarotsky, D. (2017). Error bounds for approximations with deep ReLU networks. Neural Networks, 94:103-114.

Yarotsky, D. (2018). Optimal approximation of continuous functions by very deep ReLU networks. In Conference on Learning Theory, pages 639-649. PMLR.

Yarotsky, D. and Zhevnerchuk, A. (2020). The phase diagram of approximation rates for deep neural networks. Advances in Neural Information Processing Systems, 33:13005-13015.

Zeng, D. and Lin, D. (2007). Maximum likelihood estimation in semiparametric regression models with censored data. Journal of the Royal Statistical Society Series B: Statistical Methodology, 69(4):507-564.

Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2021). Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107-115.

Zhong, Q., Müller, J., and Wang, J.-L. (2022). Deep learning for the partially linear Cox model. The Annals of Statistics, 50(3):1348-1375.

Zhong, Q., Müller, J. W.,
https://arxiv.org/abs/2504.19089v1
and Wang, J.-L. (2021). Deep extended hazard models for survival analysis. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W., editors, Advances in Neural Information Processing Systems, volume 34, pages 15111–15124. Curran Associates, Inc.

Table 1: The mean square error ($\times 10^{-1}$) for $\hat\beta$ and $\hat f$ of our method and baselines for the regression model.

MSE for $\hat\beta$
Case  n     Proposed  Spline   RKHS     Local Linear  Underpara
1     500   0.2613    0.4987   0.6905   0.4148        2.0760
1     1000  0.1140    0.1411   0.2511   0.1696        1.9988
1     2000  0.0623    0.0513   0.0793   0.1607        1.9965
2     500   0.2429    —        3.7362   132.79        3.1735
2     1000  0.1132    —        0.7956   20.478        3.0583
2     2000  0.0494    —        0.1922   2.1419        3.0356
3     500   0.2316    0.6546   0.3940   116.13        7.7303
3     1000  0.1066    0.1801   0.1895   78.535        8.1188
3     2000  0.0496    0.0755   0.0662   119.39        8.1298
4     500   0.2598    —        0.4656   28.555        12.192
4     1000  0.1098    —        0.1908   88.956        12.195
4     2000  0.0457    —        0.0836   114.98        12.424

MSE for $\hat f$
Case  n     Proposed  Spline   RKHS     Local Linear  Underpara
1     500   0.8625    46.266   6.6327   2.5206        2.8144
1     1000  0.7015    3.4216   3.0720   2.4176        2.6657
1     2000  0.6179    0.7906   1.5700   2.3582        2.6089
2     500   0.7824    —        17.558   67.927        2.8644
2     1000  0.6084    —        7.4870   11.378        2.7192
2     2000  0.5093    —        3.5917   2.1836        2.6467
3     500   0.6153    67.462   4.6669   89.284        40.214
3     1000  0.4135    7.1053   2.3364   84.843        50.257
3     2000  0.2956    2.6464   1.2638   89.256        68.170
4     500   0.9107    —        10.753   75.208        51.472
4     1000  0.4489    —        5.5210   84.909        51.209
4     2000  0.2606    —        3.0471   93.994        51.154

Table 2: The mean square error ($\times 10^{-1}$) for $\hat\beta$ and $\hat f$ of our method and baselines for the classification model (15).
MSE for $\hat\beta$
Case  n     Proposed  Spline   RKHS     Local Linear  Underpara
1     500   0.8821    13.561   8.8990   6.5733        2.4930
1     1000  0.4411    12.637   2.9707   2.3129        1.7517
1     2000  0.2133    12.583   1.5815   1.8543        1.6526
2     500   2.1877    —        8.1133   12.787        4.4772
2     1000  0.5395    —        2.6608   11.939        4.1563
2     2000  0.2177    —        1.5869   11.583        4.1018
3     500   1.0119    21.165   22.105   13.400        4.4189
3     1000  0.5900    19.280   4.2309   11.161        4.7265
3     2000  0.3318    17.656   1.1527   8.0244        4.5130
4     500   1.6781    —        5.5813   20.256        3.8216
4     1000  0.6652    —        2.3483   19.570        3.7650
4     2000  0.2812    —        2.5641   19.158        3.6724

MSE for $\hat f$
Case  n     Proposed  Spline   RKHS     Local Linear  Underpara
1     500   4.5624    39.087   9.0062   61.677        6.9442
1     1000  2.5099    39.043   5.0402   59.122        5.6270
1     2000  1.7524    38.999   3.9864   58.263        5.2535
2     500   22.417    —        7.5404   29.051        11.793
2     1000  5.0438    —        4.0622   28.991        11.529
2     2000  1.8980    —        3.1317   29.023        10.751
3     500   7.0489    96.863   29.990   92.969        85.572
3     1000  3.5228    96.762   14.225   91.686        86.227
3     2000  2.2837    96.850   11.259   90.390        86.411
4     500   17.616    —        38.825   51.957        46.156
4     1000  5.9230    —        35.755   51.703        45.684
4     2000  3.6942    —        35.791   51.598        45.623

Table 3: The coverage probability of the constructed 95% confidence interval for $\beta=(\beta_1,\beta_2)$ in the regression model.

         Coverage rate for $\beta_1$     Coverage rate for $\beta_2$
Setting  n=500   n=1000  n=2000          n=500   n=1000  n=2000
Case 1   0.938   0.958   0.946           0.944   0.952   0.940
Case 2   0.966   0.958   0.950           0.960   0.950   0.952
Case 3   0.946   0.970   0.948           0.968   0.952   0.950
Case 4   0.986   0.974   0.938           0.988   0.968   0.948

Table 4: The coverage probability of the constructed 95% confidence interval for $\beta=(\beta_1,\beta_2)$ in the classification model.

         Coverage rate for $\beta_1$     Coverage rate for $\beta_2$
Setting  n=500   n=1000  n=2000          n=500   n=1000  n=2000
Case 1   0.964   0.968   0.950           0.966
Quasi-Monte Carlo confidence intervals using quantiles of randomized nets

Zexin Pan*

Abstract

Recent advances in quasi-Monte Carlo integration have demonstrated that the median trick significantly enhances the convergence rate of linearly scrambled digital net estimators. In this work, we leverage the quantiles of such estimators to construct confidence intervals with asymptotically valid coverage for high-dimensional integrals. By analyzing the distribution of the integration error for a class of infinitely differentiable integrands, we prove that as the sample size grows, the error decomposes into an asymptotically symmetric component and a vanishing perturbation, which guarantees that a quantile-based interval for the median estimator asymptotically captures the target integral with the nominal coverage probability.

1 Introduction

Quasi-Monte Carlo (QMC) methods have emerged as a powerful alternative to conventional Monte Carlo (MC) integration [4]. Like MC, QMC approximates high-dimensional integrals by averaging $n$ function evaluations. Unlike MC, however, QMC replaces random sampling with carefully constructed point sets designed to explore the integration domain efficiently. This paper focuses on a class of constructions called digital nets, introduced in Subsection 2.1. This systematic approach allows QMC to mitigate the curse of dimensionality more effectively than classical quadrature rules while achieving a convergence rate faster than MC under smoothness assumptions.

Despite their success, QMC estimators based on digital nets face challenges in error quantification [18]. Conventional solutions employ randomization techniques to generate independent replicates of QMC means, from which $t$-confidence intervals are constructed. Common choices of randomization are Owen's scrambling [17] and Matoušek's random linear scrambling [15].
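The quantile-based alternative studied in this paper can be sketched numerically. The snippet below is a minimal illustration of the idea, not the paper's algorithm: shifted-grid replicates (a Cranley-Patterson rotation) stand in for the linearly scrambled digital nets analyzed later, the integrand $\int_0^1 e^x\,dx$ and the order-statistic ranks 4 and 28 are our own illustrative choices.

```python
import math
import random
import statistics

def replicate_estimate(n, rng):
    # One randomized equal-weight replicate of mu = int_0^1 exp(x) dx.
    # A randomly shifted regular grid stands in here for the scrambled
    # digital nets of the paper; each replicate is an unbiased estimate.
    shift = rng.random()
    return sum(math.exp((i / n + shift) % 1.0) for i in range(n)) / n

def quantile_interval(estimates, lo_rank, hi_rank):
    # Order-statistic interval: the lo_rank-th and hi_rank-th smallest
    # replicate values (1-indexed) serve as the confidence limits.
    ordered = sorted(estimates)
    return ordered[lo_rank - 1], ordered[hi_rank - 1]

rng = random.Random(0)
r, n = 31, 512
reps = [replicate_estimate(n, rng) for _ in range(r)]
med = statistics.median(reps)                 # the median estimator
lo, hi = quantile_interval(reps, 4, 28)       # ranks chosen for a roughly 95% nominal level
```

Unlike a $t$-interval, this construction never estimates a variance, so a few outlying replicates widen the interval only mildly instead of inflating it.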
While theoretical work by [14] establishes the asymptotic normality of Owen-scrambled QMC means in some restricted cases and thereby justifies $t$-intervals, their convergence rate is non-adaptive: the variance in general cannot decay faster than $O(n^{-3})$, even for integrands with higher smoothness. In contrast, the random linear scrambling produces estimators with the same variance as Owen's method [24] but markedly different error behavior. These estimators lack asymptotic normality and instead exhibit error concentration phenomena that adapt to the smoothness of integrands. Notably, [19] demonstrates that the median of linearly scrambled QMC means converges to the target integral at nearly optimal rates across a broad class of function spaces. Due to outlier sensitivity, $t$-intervals under the linear scrambling often overestimate uncertainty and exceed nominal coverage, as observed empirically in [12]. Quantile-based intervals, while more robust and empirically accurate, lack theoretical guarantees on coverage: a critical open question that this work addresses.

*Johann Radon Institute for Computational and Applied Mathematics, ÖAW, Altenbergerstrasse 69, 4040 Linz, Austria (zexin.pan@oeaw.ac.at).

arXiv:2504.19138v1 [math.ST] 27 Apr 2025

Before presenting our results, we situate our contributions within the context of existing methods. Recent work by [16] proposes asymptotically valid $t$-intervals by allowing the number of independent QMC replicates $r$ to grow polynomially with the per-replicate sample size $n$. However, this approach incurs a total computational cost of $O(n^{1+c})$ for $r = O(n^c)$, resulting in suboptimal convergence rates. Quantile-based intervals circumvent this limitation and achieve asymptotic validity without requiring $r$ to scale with $n$. An alternative approach by [8] introduces robust estimation techniques to
https://arxiv.org/abs/2504.19138v1
handle non-normal errors, but it still requires reliable variance estimation and remains non-adaptive: stronger smoothness assumptions on the integrand do not improve the convergence rate. Specialized methods like higher-order scrambled digital nets [3] attain optimal rates under explicit smoothness priors and enable empirically valid $t$-intervals, though rigorous coverage guarantees remain unproven. For completely monotone integrands, point sets with non-negative (or non-positive) local discrepancy yield computable upper (or lower) error bounds [7], but their convergence rates degrade with the dimension $s$ and become unattractive for $s > 4$. See also [18] for a comprehensive survey. Together, these gaps motivate our focus on quantile-based intervals, which adapt to the integrand's smoothness while provably achieving asymptotically valid coverage.

This paper is structured as follows. Section 2 introduces foundational concepts and notation, including the Walsh decomposition framework and properties of Walsh coefficients critical to our analysis. Section 3 presents and proves our main theorem under the complete random design, a simplified yet illustrative randomization scheme. After outlining the proof strategy, Subsections 3.1-3.3 systematically address each critical component of the argument. Subsection 3.4 derives crucial corollaries, demonstrating that quantile-based intervals asymptotically achieve the nominal coverage level for a class of infinitely differentiable integrands. Section 4 generalizes these results to broader randomization choices, with the random linear scrambling as a key special case. Section 5 empirically validates our theory on two highly skewed integrands. Finally, Section 6 identifies challenges in extending these results to non-smooth integrands and concludes the paper with a discussion of interesting research questions.

2 Background and notation

Let $\mathbb{N} = \{1,2,3,\ldots\}$ denote the natural numbers, $\mathbb{N}_0 = \mathbb{N}\cup\{0\}$, and $\mathbb{N}^s_* = \mathbb{N}^s_0\setminus\{\boldsymbol{0}\}$ (excluding the zero vector). For $\ell\in\mathbb{N}$, we define $\mathbb{Z}_\ell = \{0,1,\ldots,\ell-1\}$. The dimension of the integration domain is $s\in\mathbb{N}$, with $1{:}s = \{1,2,\ldots,s\}$. For a matrix $C$, $C(\ell,:)$ denotes its $\ell$-th row. The indicator function $\mathbf{1}\{A\}$ equals 1 if event $A$ occurs and 0 otherwise. For a finite set $K$, $|K|$ is its cardinality, and $\mathbb{U}(K)$ represents the uniform distribution over $K$. Equality in distribution is written as $X \overset{d}{=} Y$. For asymptotics, $a_m\sim b_m$ denotes $\lim_{m\to\infty} a_m/b_m = 1$ and $a_m\sim\sum_{\ell=1}^{L} b_{m,\ell}$ recursively means $a_m - \sum_{\ell=1}^{L'-1} b_{m,\ell} \sim b_{m,L'}$ for $2\leqslant L'\leqslant L$.

The integrand $f:[0,1]^s\to\mathbb{R}$ has $L^1$-norm $\|f\|_1 = \int_{[0,1]^s}|f(x)|\,dx$ and $L^\infty$-norm $\|f\|_\infty = \sup_{x\in[0,1]^s}|f(x)|$. Let $C([0,1]^s)$ and $C^\infty([0,1]^s)$ denote the spaces of continuous and infinitely differentiable functions, respectively.

Quasi-Monte Carlo (QMC) methods approximate the integral

$$\mu = \int_{[0,1]^s} f(x)\,dx \quad\text{by}\quad \hat{\mu} = \frac{1}{n}\sum_{i=0}^{n-1} f(x_i)$$

for specially constructed points $\{x_i, i\in\mathbb{Z}_n\}\subseteq[0,1]^s$. In this paper, we choose $\{x_i, i\in\mathbb{Z}_n\}$ to be the base-2 digital net defined in the next subsection.

2.1 Digital nets and randomization

For $m\in\mathbb{N}$ and $i\in\mathbb{Z}_{2^m}$, let the binary expansion $i = \sum_{\ell=1}^{m} i_\ell 2^{\ell-1}$ be represented by the vector $\vec{i} = \vec{i}[m] = (i_1,\ldots,i_m)^T\in\{0,1\}^m$. Similarly, for $a\in[0,1)$ and precision $E\in\mathbb{N}$, we truncate the binary expansion $a = \sum_{\ell=1}^{\infty} a_\ell 2^{-\ell}$ to $E$ digits, denoted $\vec{a} = \vec{a}[E] = (a_1,\ldots,a_E)^T\in\{0,1\}^E$. For dyadic rationals (numbers with dual binary expansions), we select the representation terminating in zeros.

Let $s$ matrices $C_j\in\{0,1\}^{E\times m}$ define a base-2 digital net over $[0,1]^s$. The unrandomized points $x_i = (x_{i1},\ldots,x_{is})$ are generated by
$$\vec{x}_{ij} = C_j\vec{i} \bmod 2 \quad\text{for } i\in\mathbb{Z}_{2^m},\ j\in 1{:}s, \tag{1}$$

where $\vec{x}_{ij}\in\{0,1\}^E$ represents $x_{ij}\in[0,1)$ truncated to $E$ digits (trailing digits set to 0). Typically, $E\leqslant m$ for unrandomized digital nets. We introduce randomization via

$$\vec{x}_{ij} = C_j\vec{i} + \vec{D}_j \bmod 2, \tag{2}$$

where $C_j\in\{0,1\}^{E\times m}$ and $\vec{D}_j\in\{0,1\}^E$ are random with precision $E\geqslant m$. The vector $\vec{D}_j$ is called the digital shift and consists of independent $\mathbb{U}(\{0,1\})$ entries. A widely used method to randomize $C_j$ is the random linear scrambling [15]: replace $C_j$ by $M_jC_j \bmod 2$, where $M_j\in\{0,1\}^{E\times m}$ is a random lower-triangular matrix with ones on the diagonal and $\mathbb{U}(\{0,1\})$ entries below, and $C_j\in\{0,1\}^{m\times m}$ is a fixed generating matrix designed to avoid linear dependencies (see [4, Chapter 4.4] for details).

Despite the popularity of the random linear scrambling, its dependence on the fixed generating matrices $C_j$ causes technical difficulties, so we postpone its analysis until Section 4. In Section 3, we instead use the complete random design [19], where all entries of $C_j$ are independently drawn from $\mathbb{U}(\{0,1\})$. This retains the asymptotic convergence rate of the random linear scrambling without requiring pre-designed generating matrices. Numerically, errors under the complete random design are typically larger than those under the random linear scrambling, but the difference diminishes as $m$ increases.

Let $x_i[E]$ denote points from equation (2) with precision $E$. Our QMC estimator for $\mu$ is

$$\hat{\mu}_E = \frac{1}{n}\sum_{i=0}^{n-1} f(x_i) \quad\text{for } x_i = x_i[E]. \tag{3}$$

For most of our paper, we conveniently assume $E = \infty$ and focus our analysis on $\hat{\mu}_\infty$. Practical implementation uses finite $E$, often constrained by the floating-point representation in use. Corollary 3 quantifies the required $E$ to ensure the truncation error $|\hat{\mu}_E - \hat{\mu}_\infty|$ is negligible.

2.2 Fourier-Walsh decomposition

Walsh functions serve as the natural orthonormal basis for analyzing base-2 digital nets. For $k\in\mathbb{N}_0$ and $x\in[0,1)$, the univariate Walsh function $\mathrm{wal}_k(x)$ is defined by

$$\mathrm{wal}_k(x) = (-1)^{\vec{k}^T\vec{x}},$$

where $\vec{k}\in\{0,1\}^\infty$ and $\vec{x}\in\{0,1\}^\infty$ are the binary expansions of $k$ and $x$, respectively.
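The definition just given can be checked numerically on dyadic points, where the pairing of the bits of $k$ with the binary digits of $x$ is finite. The sketch below is our own illustration (the helper names `bits` and `wal` are assumptions, not code from the paper); it evaluates $\mathrm{wal}_k(i/2^m)$ and verifies orthonormality over the full dyadic grid by brute force.

```python
def bits(value, width):
    # Binary digits of a nonnegative integer, least significant first,
    # padded with zeros to length `width`.
    return [(value >> j) & 1 for j in range(width)]

def wal(k, i, m):
    # wal_k(x) = (-1)^{k^T x} at the dyadic point x = i / 2^m.
    # The ell-th bit of k pairs with the ell-th binary digit of x after the
    # point; for x = i / 2^m these digits are the bits of i read from the top.
    xb = bits(i, m)[::-1]          # digits of x = i/2^m after the binary point
    kb = bits(k, m)                # binary expansion of k
    return (-1) ** sum(a * b for a, b in zip(kb, xb))

m = 4
n = 2 ** m
# Orthonormality over the dyadic grid: (1/n) sum_i wal_k wal_l = 1{k = l}.
for k in range(n):
    for l in range(n):
        inner = sum(wal(k, i, m) * wal(l, i, m) for i in range(n)) / n
        assert inner == (1 if k == l else 0)
```

The orthonormality check is exactly the discrete version of the completeness statement used in the decomposition (4) below.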
Since $\vec{k}$ contains a finite number of nonzero entries, a finite-precision truncation suffices for computation. For multivariate functions, the $s$-dimensional Walsh function $\mathrm{wal}_{\boldsymbol{k}}:[0,1)^s\to\{-1,1\}$ is given by the tensor product

$$\mathrm{wal}_{\boldsymbol{k}}(x) = \prod_{j=1}^{s}\mathrm{wal}_{k_j}(x_j) = (-1)^{\sum_{j=1}^{s}\vec{k}_j^T\vec{x}_j},$$

where $\boldsymbol{k} = (k_1,\ldots,k_s)\in\mathbb{N}^s_0$. These functions form a complete orthonormal basis for $L^2([0,1]^s)$ [4], enabling the Walsh decomposition:

$$f(x) = \sum_{\boldsymbol{k}\in\mathbb{N}^s_0}\hat{f}(\boldsymbol{k})\,\mathrm{wal}_{\boldsymbol{k}}(x), \quad\text{where}\tag{4}$$
$$\hat{f}(\boldsymbol{k}) = \int_{[0,1]^s} f(x)\,\mathrm{wal}_{\boldsymbol{k}}(x)\,dx.$$

Equality (4) holds in the $L^2$ sense. Building on this, [21] derives the following error decomposition for QMC estimators:

Lemma 1. For $f\in C([0,1]^s)$, the error of $\hat{\mu}_\infty$ defined by equation (3) satisfies

$$\hat{\mu}_\infty - \mu = \sum_{\boldsymbol{k}\in\mathbb{N}^s_*} Z(\boldsymbol{k})S(\boldsymbol{k})\hat{f}(\boldsymbol{k}), \tag{5}$$

where

$$Z(\boldsymbol{k}) = \mathbf{1}\Big\{\sum_{j=1}^{s}\vec{k}_j^T C_j = \vec{0} \bmod 2\Big\} \quad\text{and}\quad S(\boldsymbol{k}) = (-1)^{\sum_{j=1}^{s}\vec{k}_j^T\vec{D}_j}.$$

We note that every $S(\boldsymbol{k})$ follows a $\mathbb{U}(\{-1,1\})$ distribution. The distribution of $Z(\boldsymbol{k})$ depends on $m$, $\boldsymbol{k}$ and the choice of randomization for $C_j$. Under the complete random design, each $Z(\boldsymbol{k})$ follows a Bernoulli distribution with success probability $2^{-m}$ and $\{Z(\boldsymbol{k}),\boldsymbol{k}\in\mathbb{N}^s_*\}$ are pairwise independent. Their distribution under more general randomization schemes is analyzed in Section 4.

2.3 Notations involving $\boldsymbol{k}$ and $\kappa$

For $k = \sum_{\ell=1}^{\infty} a_\ell 2^{\ell-1}\in\mathbb{N}_0$, we define the set of nonzero bits $\kappa = \{\ell\in\mathbb{N}\mid a_\ell = 1\}\subseteq\mathbb{N}$. The bijection between $k$ and $\kappa$ allows interchangeable use of integer and set notation. In this framework, we can rewrite $Z(\boldsymbol{k})$ as

$$Z(\boldsymbol{k}) = \mathbf{1}\Big\{\sum_{j=1}^{s}\sum_{\ell\in\kappa_j} C_j(\ell,:) = \vec{0} \bmod 2\Big\},$$

where $\boldsymbol{k} = (k_1,\ldots,k_s)$ and $\kappa_j$ is the set of nonzero bits of $k_j$. Next, we define some useful norms
on $\boldsymbol{k}$ and $\kappa$. For a finite subset $\kappa\subseteq\mathbb{N}$, we denote the cardinality of $\kappa$ as $|\kappa|$, the sum of the elements in $\kappa$ as $\|\kappa\|_1$ and the largest element of $\kappa$ as $\lceil\kappa\rceil$. When $\kappa = \emptyset$, we conventionally define $|\kappa| = \|\kappa\|_1 = \lceil\kappa\rceil = 0$. For $\boldsymbol{k} = (k_1,\ldots,k_s)$ and the corresponding $\kappa = (\kappa_1,\ldots,\kappa_s)$, we define

$$\|\boldsymbol{k}\|_0 = \|\kappa\|_0 = \sum_{j=1}^{s}|\kappa_j|, \qquad \|\boldsymbol{k}\|_1 = \|\kappa\|_1 = \sum_{j=1}^{s}\|\kappa_j\|_1 \qquad\text{and}\qquad \lceil\boldsymbol{k}\rceil = \lceil\kappa\rceil = \max_{j\in 1:s}\lceil\kappa_j\rceil.$$

In our later analysis, it is helpful to view $\mathbb{N}^s_0$ as an $\mathbb{F}_2$-vector space. For $\boldsymbol{k}_1 = (k_{1,1},\ldots,k_{s,1})$ and $\boldsymbol{k}_2 = (k_{1,2},\ldots,k_{s,2})$, we define the sum of $\boldsymbol{k}_1$ and $\boldsymbol{k}_2$ to be $\boldsymbol{k}_1\oplus\boldsymbol{k}_2 = (k^\oplus_1,\ldots,k^\oplus_s)$ with $\vec{k}^\oplus_j = \vec{k}_{j,1} + \vec{k}_{j,2} \bmod 2$ for each $j\in 1{:}s$. In other words, each $\kappa^\oplus_j$ is the symmetric difference of $\kappa_{j,1}$ and $\kappa_{j,2}$. We also write $\oplus_{i=1}^{r}\boldsymbol{k}_i$ for the sum of $\boldsymbol{k}_1,\ldots,\boldsymbol{k}_r$. For a finite subset $V\subseteq\mathbb{N}^s_0$, we define the rank of $V$ as the size of its largest linearly independent subset. We say $V$ has full rank if $\mathrm{rank}(V) = |V|$. One can verify that

$$S\big(\oplus_{i=1}^{r}\boldsymbol{k}_i\big) = \prod_{i=1}^{r} S(\boldsymbol{k}_i)$$

and $\{S(\boldsymbol{k}),\boldsymbol{k}\in V\}$ are jointly independent if $V$ has full rank.

2.4 Bounds on Walsh coefficients

The following lemma relates the Walsh coefficients $\hat{f}(\boldsymbol{k})$ to the partial derivatives of $f$. For $|\kappa| = (|\kappa_1|,\ldots,|\kappa_s|)\in\mathbb{N}^s_0$, let

$$f_{|\kappa|} = \frac{\partial^{\|\kappa\|_0} f}{\partial x_1^{|\kappa_1|}\cdots\partial x_s^{|\kappa_s|}}.$$

Lemma 2. For $f\in C^\infty([0,1]^s)$,

$$\hat{f}(\boldsymbol{k}) = (-1)^{\|\kappa\|_0}\int_{[0,1]^s} f_{|\kappa|}(x)\prod_{j=1}^{s} W_{\kappa_j}(x_j)\,dx, \tag{6}$$

where $W_\kappa:[0,1]\to\mathbb{R}$ for $\kappa\subseteq\mathbb{N}$ is defined recursively by $W_\emptyset(x) = 1$ and

$$W_\kappa(x) = \int_{[0,x]}(-1)^{\vec{t}(\lfloor\kappa\rfloor)}\,W_{\kappa\setminus\lfloor\kappa\rfloor}(t)\,dt$$

with $\vec{t}(\ell)$ denoting the $\ell$-th bit of $t$ and $\lfloor\kappa\rfloor$ denoting the smallest element of $\kappa$. In particular, $W_\kappa(x)$ for nonempty $\kappa$ is continuous, nonnegative, periodic with period $2^{-\lfloor\kappa\rfloor+1}$ and satisfies

$$\int_{[0,1]} W_\kappa(x)\,dx = \prod_{\ell\in\kappa} 2^{-\ell-1} \qquad\text{and}\qquad \max_{x\in[0,1]} W_\kappa(x) = 2\prod_{\ell\in\kappa} 2^{-\ell-1}. \tag{7}$$

Proof. Theorem 2.5 of [23] with $n_j = |\kappa_j|$ implies equation (6). Properties of $W_\kappa(x)$ are proven in Section 3 of [23].

Corollary 1. For $f\in C^\infty([0,1]^s)$, $|\hat{f}(\boldsymbol{k})|\leqslant 2^{-\|\kappa\|_1}\|f_{|\kappa|}\|_1$.

Proof. By equation (7),

$$\Big\|\prod_{j=1}^{s} W_{\kappa_j}(x_j)\Big\|_\infty \leqslant \prod_{j\in 1:s,\,\kappa_j\neq\emptyset} 2\prod_{\ell\in\kappa_j} 2^{-\ell-1} \leqslant \prod_{j\in 1:s}\prod_{\ell\in\kappa_j} 2^{-\ell} = 2^{-\|\kappa\|_1}.$$

The result follows after applying Hölder's inequality to equation (6).

3 Proof of main results

In this section, we aim to prove our main theorem:

Theorem 1. Suppose $f\in C^\infty([0,1]^s)$ satisfies the assumptions of Theorem 6. Then under the complete random design

$$\lim_{m\to\infty}\Pr(\hat{\mu}_\infty<\mu) + \frac{1}{2}\Pr(\hat{\mu}_\infty=\mu) = \frac{1}{2}.$$

The proof strategy is as follows. Given a sequence of subsets $K_m\subseteq\mathbb{N}^s_*$, we decompose the error $\hat{\mu}_\infty-\mu$ into two components by defining

$$\mathrm{SUM}_1 = \sum_{\boldsymbol{k}\in K_m} Z(\boldsymbol{k})S(\boldsymbol{k})\hat{f}(\boldsymbol{k}) \tag{8}$$

and

$$\mathrm{SUM}_2 = \sum_{\boldsymbol{k}\in\mathbb{N}^s_*\setminus K_m} Z(\boldsymbol{k})S(\boldsymbol{k})\hat{f}(\boldsymbol{k}). \tag{9}$$

By Lemma 1, $\hat{\mu}_\infty-\mu = \mathrm{SUM}_1+\mathrm{SUM}_2$. We further define

$$\mathrm{SUM}'_1 = \sum_{\boldsymbol{k}\in K_m} Z(\boldsymbol{k})S'(\boldsymbol{k})\hat{f}(\boldsymbol{k}) \tag{10}$$

where each $S'(\boldsymbol{k})$ is independently drawn from $\mathbb{U}(\{-1,1\})$. We want $K_m$ to be small enough so that $\mathrm{SUM}_1$ and $\mathrm{SUM}'_1$ have approximately the same distribution, and meanwhile large enough so that $|\mathrm{SUM}_2/\mathrm{SUM}_1|<1$ with high probability, as specified in the following lemma:

Lemma 3. Suppose for a sequence of subsets $K_m\subseteq\mathbb{N}^s_*$ and $\mathrm{SUM}_1,\mathrm{SUM}_2,\mathrm{SUM}'_1$ defined as above, we have

$$\lim_{m\to\infty} d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1) = 0,$$

where $d_{\mathrm{TV}}(X,Y)$ is the total variation distance between the distributions of the random variables $X$ and $Y$, and

$$\lim_{m\to\infty}\Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|) = 0.$$

Then

$$\lim_{m\to\infty}\Pr(\hat{\mu}_\infty<\mu) + \frac{1}{2}\Pr(\hat{\mu}_\infty=\mu) = \frac{1}{2}.$$

Proof. First notice that

$$\begin{aligned}\Pr(\hat{\mu}_\infty<\mu) = \Pr(\mathrm{SUM}_1+\mathrm{SUM}_2<0) &\geqslant \Pr(\mathrm{SUM}_1<0 \text{ and } |\mathrm{SUM}_1|>|\mathrm{SUM}_2|)\\ &\geqslant \Pr(\mathrm{SUM}_1<0) - \Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|)\\ &\geqslant \Pr(\mathrm{SUM}'_1<0) - d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1) - \Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|).\end{aligned}$$

Similarly,

$$\Pr(\hat{\mu}_\infty\leqslant\mu) \geqslant \Pr(\mathrm{SUM}'_1\leqslant 0) - d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1) - \Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|).$$

Hence

$$\Pr(\hat{\mu}_\infty<\mu) + \Pr(\hat{\mu}_\infty\leqslant\mu) - \Pr(\mathrm{SUM}'_1<0) - \Pr(\mathrm{SUM}'_1\leqslant 0) \geqslant -2d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1) - 2\Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|). \tag{11}$$

Because $\mathrm{SUM}'_1$ is, when conditioned on the $Z(\boldsymbol{k})$, a sum of independent symmetric random variables, we always have $\Pr(\mathrm{SUM}'_1<0) + \Pr(\mathrm{SUM}'_1\leqslant 0) = 1$. Our assumptions then imply

$$\liminf_{m\to\infty}\Pr(\hat{\mu}_\infty<\mu) + \Pr(\hat{\mu}_\infty\leqslant\mu) \geqslant 1.$$

A similar argument shows

$$\Pr(\hat{\mu}_\infty>\mu) + \Pr(\hat{\mu}_\infty\geqslant\mu) - \Pr(\mathrm{SUM}'_1>0) - \Pr(\mathrm{SUM}'_1\geqslant 0) \geqslant -2d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1) - 2\Pr(|\mathrm{SUM}_1|\leqslant|\mathrm{SUM}_2|) \tag{12}$$

and $\liminf_{m\to\infty}\Pr(\hat{\mu}_\infty>\mu) + \Pr(\hat{\mu}_\infty\geqslant\mu) \geqslant 1$, which gives the limit superior of $\Pr(\hat{\mu}_\infty<\mu)+\Pr(\hat{\mu}_\infty\leqslant\mu)$ by taking the complement. Hence,

$$\lim_{m\to\infty}\Pr(\hat{\mu}_\infty<\mu) + \frac{1}{2}\Pr(\hat{\mu}_\infty=\mu) = \frac{1}{2}\lim_{m\to\infty}\Big(\Pr(\hat{\mu}_\infty<\mu)+\Pr(\hat{\mu}_\infty\leqslant\mu)\Big) = \frac{1}{2}.$$

In order to apply the above lemma and prove Theorem 1, we define

$$Q_N = \{\boldsymbol{k}\in\mathbb{N}^s_*\mid\|\kappa\|_1\leqslant N\} \tag{13}$$

and $K_m = Q_{N_m}$ with

$$N_m = \sup\{N\in\mathbb{N}_0\mid |Q_N|\leqslant c_s m 2^m\}, \tag{14}$$

where $c_s$ is a positive constant to be specified in equation (19). Notice that $\boldsymbol{0}\notin Q_N$ and $Q_0 = \emptyset$. Corollary 4 of [21] shows

$$|Q_N| \sim D_s N^{-1/4}\exp\Big(\pi\sqrt{\frac{sN}{3}}\Big) \tag{15}$$

for a constant $D_s$ depending on $s$, which implies $\lim_{N\to\infty}|Q_{N+1}|/|Q_N| = 1$ and, because $|Q_{N_m}|\leqslant c_s m 2^m < |Q_{N_m+1}|$,

$$|Q_{N_m}| \sim c_s m 2^m. \tag{16}$$

Equating the right-hand side of equation (15) with $c_s m 2^m$, a straightforward calculation shows

$$N_m \sim \lambda m^2/s + 3\lambda m\log_2(m)/s + D'_s m \tag{17}$$

for $\lambda = 3(\log 2)^2/\pi^2$ and a constant $D'_s$ depending on $s$ and $c_s$. We will show $K_m = Q_{N_m}$ satisfies the assumptions of Lemma 3. The proof contains the following three steps:

- Step 1: prove $\lim_{m\to\infty} d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1) = 0$.
- Step 2: prove $\lim_{m\to\infty}\Pr(|\mathrm{SUM}_2|\geqslant T_m) = 0$ for a sequence $T_m$ specified later in Corollary 2.
- Step 3: prove $\lim_{m\to\infty}\Pr(|\mathrm{SUM}'_1|>T_m) = 1$.

Notice by Steps 1 and 3, $\lim_{m\to\infty}\Pr(|\mathrm{SUM}_1|>T_m) = \lim_{m\to\infty}\Pr(|\mathrm{SUM}'_1|>T_m) = 1$, and then by Step 2, $\lim_{m\to\infty}\Pr(|\mathrm{SUM}_1|>T_m>|\mathrm{SUM}_2|) = 1$, so Lemma 3 can be applied. The following three subsections are devoted to their proof.

3.1 Proof of Step 1

We first show the number of summands in $\mathrm{SUM}_1$ is bounded by $2c_s m$ with high probability.

Lemma 4. Under the complete random design,

$$\Pr\Big(\sum_{\boldsymbol{k}\in Q_{N_m}} Z(\boldsymbol{k})\geqslant 2c_s m\Big) \leqslant \frac{1}{c_s m}.$$

Proof. First recall that $\Pr(Z(\boldsymbol{k})=1) = 2^{-m}$ and $\{Z(\boldsymbol{k}),\boldsymbol{k}\in Q_{N_m}\}$ are pairwise independent. By Chebyshev's inequality,

$$\Pr\Big(\Big|\sum_{\boldsymbol{k}\in Q_{N_m}} Z(\boldsymbol{k}) - 2^{-m}|Q_{N_m}|\Big|\geqslant c_s m\Big) \leqslant \frac{1}{c_s^2 m^2}\mathrm{Var}\Big(\sum_{\boldsymbol{k}\in Q_{N_m}} Z(\boldsymbol{k})\Big) = \frac{1}{c_s^2 m^2}2^{-m}(1-2^{-m})|Q_{N_m}| \leqslant \frac{1}{c_s^2 m^2}2^{-m}|Q_{N_m}|.$$

Our conclusion then follows from $|Q_{N_m}|\leqslant c_s m 2^m$.

Next, we show $Q_N$ contains very few additive relations with the addition $\oplus$ defined in Subsection 2.3. The proof is given in the appendix.

Lemma 5. Let $N\geqslant 1$ and $\boldsymbol{k}_1,\ldots,\boldsymbol{k}_r$ be sampled independently from $\mathbb{U}(Q_N)$. Then there exist positive constants $A_s, B_s$ depending on $s$ such that for all $r\geqslant 2$

$$\Pr\big(\oplus_{i=1}^{r}\boldsymbol{k}_i\in Q_N\big) \leqslant A_s^r N^{r/4} r^{-B_s\sqrt{N}}. \tag{18}$$

As a consequence, we have the following bound on the cardinality of minimally rank-deficient subsets of $Q_N$.

Lemma 6. Let

$$I = \{V\subseteq Q_N\mid\mathrm{rank}(V)<|V|\},\quad I^* = \{V\in I\mid\text{every proper } W\subset V \text{ has full rank}\},\quad I^*_r = I^*\cap\{V\subseteq Q_N\mid |V|=r\}.$$

Then with $A_s, B_s$ from Lemma 5, we have for $r\geqslant 2$

$$|I^*_{r+1}| \leqslant \frac{|Q_N|^r}{(r+1)!} A_s^r N^{r/4} r^{-B_s\sqrt{N}}.$$

Proof.
Notice that $(r+1)!\,|I^*_{r+1}|/|Q_N|^{r+1}$ is the probability that independent $\boldsymbol{k}_1,\ldots,\boldsymbol{k}_{r+1}$ sampled from $\mathbb{U}(Q_N)$ constitute a set $V\in I^*_{r+1}$, which is further bounded by the probability that $\oplus_{i=1}^{r+1}\boldsymbol{k}_i = \boldsymbol{0}$ since all proper subsets $W$ of $V$ have full rank. Because for any given $\boldsymbol{k}_1,\ldots,\boldsymbol{k}_r$, there is at most one $\boldsymbol{k}_{r+1}\in Q_N$ with $\oplus_{i=1}^{r+1}\boldsymbol{k}_i = \boldsymbol{0}$, we therefore have

$$\frac{(r+1)!\,|I^*_{r+1}|}{|Q_N|^{r+1}} \leqslant \frac{1}{|Q_N|}\Pr\big(\oplus_{i=1}^{r}\boldsymbol{k}_i\in Q_N\big) \leqslant \frac{1}{|Q_N|} A_s^r N^{r/4} r^{-B_s\sqrt{N}}.$$

The conclusion follows after rearrangement.

Theorem 2. Define $c_s$ from equation (14) to be

$$c_s = \frac{1}{4}B_s\sqrt{\lambda/s} \tag{19}$$

with $\lambda = 3(\log 2)^2/\pi^2$ and $B_s$ from Lemma 5. Then under the complete random design, there exist constants $d_s, m_s$ depending on $s$ such that for $m\geqslant m_s$

$$d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1) \leqslant \frac{1}{c_s m} + m^{d_s}2^{-4c_s m}.$$

Proof. Let $\mathcal{V} = \{\boldsymbol{k}\in Q_{N_m}\mid Z(\boldsymbol{k})=1\}$. We can rewrite $\mathrm{SUM}_1$ as

$$\mathrm{SUM}_1 = \sum_{V\subseteq Q_{N_m}}\mathbf{1}\{\mathcal{V}=V\}\sum_{\boldsymbol{k}\in V} S(\boldsymbol{k})\hat{f}(\boldsymbol{k}).$$

When $V=\emptyset$, we conventionally define the sum over $V$ as 0. Because $\{Z(\boldsymbol{k}),\boldsymbol{k}\in Q_{N_m}\}$ are independent of $\{S(\boldsymbol{k}),\boldsymbol{k}\in Q_{N_m}\}$, we see the distribution of $\mathrm{SUM}_1$ is a mixture of $\sum_{\boldsymbol{k}\in V}S(\boldsymbol{k})\hat{f}(\boldsymbol{k})$ weighted by $\Pr(\mathcal{V}=V)$. A similar argument shows $\mathrm{SUM}'_1$ is a mixture of $\sum_{\boldsymbol{k}\in V}S'(\boldsymbol{k})\hat{f}(\boldsymbol{k})$ weighted by $\Pr(\mathcal{V}=V)$. When $V$ has full rank, $\{S(\boldsymbol{k})\mid\boldsymbol{k}\in V\}$ are jointly independent and

$$\sum_{\boldsymbol{k}\in V} S(\boldsymbol{k})\hat{f}(\boldsymbol{k}) \overset{d}{=} \sum_{\boldsymbol{k}\in V} S'(\boldsymbol{k})\hat{f}(\boldsymbol{k}).$$

Letting $I_m$ be $I$ from Lemma 6 with $N=N_m$, we have the bound

$$d_{\mathrm{TV}}(\mathrm{SUM}_1,\mathrm{SUM}'_1) \leqslant \sum_{V\in I_m}\Pr(\mathcal{V}=V) = \Pr(\mathcal{V}\in I_m), \tag{20}$$

where we have used the fact that the total variation distance satisfies the triangle inequality and is bounded by 1 between any two distributions. By Lemma 4, we further have

$$\Pr(\mathcal{V}\in I_m) \leqslant \Pr(\mathcal{V}\in I_m,\,|\mathcal{V}|\leqslant 2c_s m) + \frac{1}{c_s m}.$$

It remains to bound $\Pr(\mathcal{V}\in I_m,\,|\mathcal{V}|\leqslant 2c_s m)$. Let $I^*_m, I^*_{m,r}$ be $I^*, I^*_r$ from Lemma 6 with $N=N_m$. When $\mathcal{V}\in I_m$, we can always find a subset $W\subseteq\mathcal{V}$ such that $W\in I^*_m$; $|W|$ is at least 3 because a pair of distinct $\boldsymbol{k}_1,\boldsymbol{k}_2\in Q_N$ must have rank 2. Hence a union bound argument shows for large enough $m$

$$\Pr(\mathcal{V}\in I_m,\,|\mathcal{V}|\leqslant 2c_s m) \leqslant \sum_{r=2}^{\lfloor 2c_s m\rfloor}\sum_{W\in I^*_{m,r+1}}\Pr(W\subseteq\mathcal{V}). \tag{21}$$

Because $W\in I^*_{m,r+1}$ has rank $r$, $\Pr(W\subseteq\mathcal{V}) = \Pr(Z(\boldsymbol{k})=1 \text{ for all } \boldsymbol{k}\in W) = 2^{-mr}$. Then by Lemma 6

$$\Pr(\mathcal{V}\in I_m,\,|\mathcal{V}|\leqslant 2c_s m) \leqslant \sum_{r=2}^{\lfloor 2c_s m\rfloor}|I^*_{m,r+1}|2^{-mr} \leqslant \sum_{r=2}^{\lfloor 2c_s m\rfloor}\frac{2^{-mr}|Q_{N_m}|^r}{(r+1)!}A_s^r N_m^{r/4} r^{-B_s\sqrt{N_m}} \leqslant \sum_{r=2}^{\lfloor 2c_s m\rfloor}\frac{(c_s m A_s N_m^{1/4})^r}{(r+1)!} r^{-B_s\sqrt{N_m}},$$

where we have used $|Q_{N_m}|\leqslant c_s m 2^m$. Because $N_m\sim\lambda m^2/s + 3\lambda m\log_2(m)/s$ for $\lambda = 3(\log 2)^2/\pi^2$, for large enough $m$ we have $\lambda m^2/s\leqslant N_m\leqslant 2\lambda m^2/s$ and

$$\Pr(\mathcal{V}\in I_m,\,|\mathcal{V}|\leqslant 2c_s m) \leqslant \sum_{r=2}^{\lfloor 2c_s m\rfloor}\frac{\big(c_s A_s(2\lambda/s)^{1/4}\big)^r}{(r+1)!} m^{(3/2)r} r^{-mB_s\sqrt{\lambda/s}} \leqslant \exp\big(c_s A_s(2\lambda/s)^{1/4}\big)\max_{2\leqslant r\leqslant 2c_s m} m^{(3/2)r} r^{-mB_s\sqrt{\lambda/s}}.$$

Because $m^{(3/2)r}r^{-mB_s\sqrt{\lambda/s}}$ is log-convex in $r$, the maximum is attained at either $r=2$ or $r=2c_s m$. After plugging in equation (19), we get

$$\max_{2\leqslant r\leqslant 2c_s m} m^{(3/2)r} r^{-mB_s\sqrt{\lambda/s}} = \max\big(m^3 2^{-4c_s m},\ m^{-c_s m}(2c_s)^{-4c_s m}\big).$$

The conclusion follows by choosing $d_s>3$ and a large enough $m_s$.

3.2 Proof of Step 2

Throughout this subsection, we assume $f\in C^\infty([0,1]^s)$. Recall that

$$\mathrm{SUM}_2 = \sum_{\boldsymbol{k}\in\mathbb{N}^s_*\setminus Q_{N_m}} Z(\boldsymbol{k})S(\boldsymbol{k})\hat{f}(\boldsymbol{k}).$$
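For small $s$ and $N$, the index sets $Q_N$ from equation (13) that separate $\mathrm{SUM}_1$ from $\mathrm{SUM}_2$ can be enumerated by brute force, which is a convenient way to get a feel for the growth in equation (15). The sketch below is our own illustration (the helper names are assumptions, not code from the paper): it computes $\|\kappa\|_1$ from the bit pattern of $k$ and counts $|Q_N|$ directly.

```python
from itertools import product

def kappa_norm1(k):
    # ||kappa||_1: the sum of the 1-indexed positions of the nonzero bits of k.
    total, pos = 0, 1
    while k:
        if k & 1:
            total += pos
        k >>= 1
        pos += 1
    return total

def size_QN(N, s):
    # |Q_N| = #{k in N^s_* : sum_j ||kappa_j||_1 <= N}, by brute force.
    # Any k_j with ||kappa_j||_1 <= N satisfies k_j < 2^N, so the search is finite.
    count = 0
    for k in product(range(2 ** N), repeat=s):
        if any(k) and sum(kappa_norm1(kj) for kj in k) <= N:
            count += 1
    return count
```

For example, with $s=1$ and $N=3$ the admissible indices are $k\in\{1,2,3,4\}$, matching the four subsets of $\mathbb{N}$ with distinct elements summing to at most 3.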
In light of Corollary 1, the size of $\mathrm{SUM}_2$ depends on how fast $\|f_{|\kappa|}\|_1$ grows with $|\kappa|$. Below we provide two results under different growth assumptions. The easier one is when $\|f_{|\kappa|}\|_1$ grows exponentially in $|\kappa|$. An example is $f(x) = \exp(\sum_{j=1}^{s} x_j)$.

Theorem 3. Assume $\|f_{|\kappa|}\|_1\leqslant K_1\alpha^{\|\kappa\|_0}$ for some positive constants $K_1$ and $\alpha$. Then there exist a constant $D_1$ and a threshold $m_1$ depending on $s$ and $\alpha$ such that for all $m\geqslant m_1$

$$|\mathrm{SUM}_2| \leqslant \sum_{\boldsymbol{k}\in\mathbb{N}^s_*\setminus Q_{N_m}}|\hat{f}(\boldsymbol{k})| < K_1 2^{-N_m+D_1\sqrt{N_m}}.$$

Proof. We follow the strategy used in the proof of Theorem 2 from [21]. Corollary 1 together with our assumption on $f_{|\kappa|}$ gives

$$|\hat{f}(\boldsymbol{k})| \leqslant K_1 2^{-\|\kappa\|_1}\alpha^{\|\kappa\|_0}.$$

The constraint $\boldsymbol{k}\in\mathbb{N}^s_*\setminus Q_{N_m}$ implies $\|\kappa\|_1>N_m$. Theorem 7 from [21] shows

$$|\{\boldsymbol{k}\in\mathbb{N}^s_*\mid\|\kappa\|_1=N\}| \leqslant \frac{\pi\sqrt{s}}{2\sqrt{3N}}\exp\Big(\pi\sqrt{\frac{sN}{3}}\Big).$$

Furthermore,

$$\|\kappa\|_1 = \sum_{j=1}^{s}\|\kappa_j\|_1 \geqslant \sum_{j=1}^{s}\frac{|\kappa_j|^2}{2} \geqslant \frac{1}{2s}\Big(\sum_{j=1}^{s}|\kappa_j|\Big)^2 = \frac{1}{2s}\|\kappa\|_0^2. \tag{22}$$

Therefore, $\|\kappa\|_0\leqslant\sqrt{2s\|\kappa\|_1}$ and

$$\sum_{\boldsymbol{k}\in\mathbb{N}^s_*\setminus Q_{N_m}}|\hat{f}(\boldsymbol{k})| \leqslant \sum_{N=N_m+1}^{\infty} K_1 2^{-N}\max\big(\alpha^{\sqrt{2sN}},1\big)\frac{\pi\sqrt{s}}{2\sqrt{3N}}\exp\Big(\pi\sqrt{\frac{sN}{3}}\Big) \leqslant K_1\frac{\pi\sqrt{s}}{2\sqrt{3}}\sum_{N=N_m+1}^{\infty} 2^{-N+D_\alpha\sqrt{sN}}$$

with $D_\alpha = \sqrt{2}\max(\log_2(\alpha),0) + \pi/(\sqrt{3}\log(2))$. For any $\rho\in(0,1)$, we can find $N_{\rho,s,\alpha}$ such that $D_\alpha\sqrt{s(N+1)} - D_\alpha\sqrt{sN} < \rho$ for $N>N_{\rho,s,\alpha}$. When $m$ is large enough so that $N_m>N_{\rho,s,\alpha}$,

$$\sum_{N=N_m+1}^{\infty} 2^{-N+D_\alpha\sqrt{sN}} \leqslant 2^{-N_m+D_\alpha\sqrt{sN_m}}\sum_{N=1}^{\infty} 2^{(\rho-1)N} = 2^{-N_m+D_\alpha\sqrt{sN_m}}\frac{2^{\rho-1}}{1-2^{\rho-1}}.$$

By choosing $\rho = 1/2$, we get for large enough $m$

$$\sum_{\boldsymbol{k}\in\mathbb{N}^s_*\setminus Q_{N_m}}|\hat{f}(\boldsymbol{k})| \leqslant K_1\frac{\pi\sqrt{s}}{2\sqrt{3}(\sqrt{2}-1)} 2^{-N_m+D_\alpha\sqrt{sN_m}}.$$

The conclusion follows once we choose a large enough $D_1$.

We need a more careful analysis when $\|f_{|\kappa|}\|_1$ grows factorially in $|\kappa|$, such as when $f(x) = \prod_{j\in J}\frac{1}{1-x_j/2}$ for some $J\subseteq 1{:}s$. The key is to observe that for most $\boldsymbol{k}\in Q_N$, $|\kappa_j|$ is approximately $2\sqrt{\lambda N/s}$ in the following sense:

Lemma 7. Let $N\geqslant 1$ and $\boldsymbol{k}$ be sampled from $\mathbb{U}(Q_N)$. Then there exist positive constants $A'_s, B'_s$ depending on $s$ such that for any $j\in 1{:}s$ and $\epsilon\in(0,1)$

$$\Pr\bigg(\Big|\frac{|\kappa_j|}{\sqrt{\lambda N/s}}-2\Big|>\epsilon\bigg) \leqslant A'_s N^{1/4}\exp\big(-B'_s\epsilon^2\sqrt{N}\big) \tag{23}$$

where $\lambda = 3(\log 2)^2/\pi^2$ as in equation (17). The proof is given in the appendix.

Theorem 4. Assume

$$\|f_{|\kappa|}\|_1 \leqslant K_2\alpha^{\|\kappa\|_0}\prod_{j\in J}(|\kappa_j|)!$$

for some positive constants $K_2,\alpha$ and some nonempty $J\subseteq 1{:}s$. Then under the complete random design, there exist a constant $d_{s,\alpha}$ depending on $s,\alpha$, a constant $D_2$ depending on $s,\alpha,|J|$ and a threshold $m_2$ depending on $s,\alpha,|J|$ such that for $m\geqslant m_2$

$$\Pr\Big(|\mathrm{SUM}_2|\geqslant K_2 2^{-N_m+(2|J|\log_2(m)+D_2)\sqrt{\lambda N_m/s}}\Big) \leqslant m^{d_{s,\alpha}}\exp\Big(-\frac{c'_s m}{\log_2(m)^2}\Big)$$

with $c'_s = B'_s\sqrt{\lambda/(2s)}$ for $B'_s$ from Lemma 7.

Proof. Corollary 1 and our assumption on $f$ imply

$$|\hat{f}(\boldsymbol{k})| \leqslant K_2 2^{-\|\kappa\|_1}\alpha^{\|\kappa\|_0}\prod_{j\in J}(|\kappa_j|)!. \tag{24}$$

By equation (22), $\|\kappa\|_0\leqslant\sqrt{2s\|\kappa\|_1}$. It follows

$$\prod_{j\in J}(|\kappa_j|)! \leqslant \Big(\sum_{j\in J}|\kappa_j|\Big)! \leqslant (\|\kappa\|_0)! \leqslant \big(\sqrt{2s\|\kappa\|_1}\big)^{\sqrt{2s\|\kappa\|_1}}.$$

Let $N^*_m\geqslant N_m$ be a new threshold we will determine later.
A proof similar to that of Theorem 3 shows for large enough $m$

$$\sum_{\boldsymbol{k}\in\mathbb{N}^s_*\setminus Q_{N^*_m}}|\hat{f}(\boldsymbol{k})| \leqslant \sum_{N=N^*_m+1}^{\infty} K_2 2^{-N}\max\big(\alpha^{\sqrt{2sN}},1\big)\frac{\pi\sqrt{s}}{2\sqrt{3N}}\exp\Big(\pi\sqrt{\frac{sN}{3}}\Big)\big(\sqrt{2sN}\big)^{\sqrt{2sN}} \leqslant K_2\frac{\pi\sqrt{s}}{2\sqrt{3}}\sum_{N=N^*_m+1}^{\infty} 2^{-N+D_\alpha\sqrt{sN}}\big(\sqrt{2sN}\big)^{\sqrt{2sN}} \leqslant K_2\frac{\pi\sqrt{s}}{2\sqrt{3}(\sqrt{2}-1)} 2^{-N^*_m+D_\alpha\sqrt{sN^*_m}}\big(\sqrt{2sN^*_m}\big)^{\sqrt{2sN^*_m}}.$$

Because $N_m\sim\lambda m^2/s + 3\lambda m\log_2(m)/s$, we can choose $N^*_m = \lceil N_m + K_3 m\log_2(m)\rceil$ for a large enough $K_3$ so that

$$2^{-N^*_m+D_\alpha\sqrt{sN^*_m}}\big(\sqrt{2sN^*_m}\big)^{\sqrt{2sN^*_m}} \leqslant 2^{-N_m}$$

when $m$ is large enough. We then have the bound

$$\Big|\sum_{\boldsymbol{k}\in\mathbb{N}^s_*\setminus Q_{N^*_m}} Z(\boldsymbol{k})S(\boldsymbol{k})\hat{f}(\boldsymbol{k})\Big| \leqslant \sum_{\boldsymbol{k}\in\mathbb{N}^s_*\setminus Q_{N^*_m}}|\hat{f}(\boldsymbol{k})| \leqslant \frac{K_2}{2} 2^{-N_m+2|J|\log_2(m)\sqrt{\lambda N_m/s}}.$$

It remains to show that

$$\Big|\sum_{\boldsymbol{k}\in Q_{N^*_m}\setminus Q_{N_m}} Z(\boldsymbol{k})S(\boldsymbol{k})\hat{f}(\boldsymbol{k})\Big| \leqslant \frac{K_2}{2} 2^{-N_m+(2|J|\log_2(m)+D_2)\sqrt{\lambda N_m/s}} \tag{25}$$

with high probability for large enough $D_2$. Let $\tau_m = 2+\epsilon_m$ for $\epsilon_m\in(0,1)$ that we will determine later. Further let

$$\tilde{Q} = \big\{\boldsymbol{k}\in Q_{N^*_m}\,\big|\,|\kappa_j|>\tau_m\sqrt{\lambda N^*_m/s}\ \text{for some } j\in 1{:}s\big\}.$$

Lemma 7 with a union bound argument over $j\in 1{:}s$ shows

$$\frac{|\tilde{Q}|}{|Q_{N^*_m}|} \leqslant sA'_s(N^*_m)^{1/4}\exp\big(-B'_s\epsilon_m^2\sqrt{N^*_m}\big).$$

Because $N^*_m = \lceil N_m + K_3 m\log_2(m)\rceil$, equation (15) implies there exists $K_4$ such that $|Q_{N^*_m}|\leqslant m^{K_4}2^m$ for large enough $m$. We can then bound the probability that $Z(\boldsymbol{k})=1$ for any $\boldsymbol{k}\in\tilde{Q}$ by

$$2^{-m}|\tilde{Q}| \leqslant m^{K_4}sA'_s(N^*_m)^{1/4}\exp\big(-B'_s\epsilon_m^2\sqrt{N^*_m}\big),$$

which can be further bounded by $m^{d_{s,\alpha}}\exp(-c'_s\epsilon_m^2 m)$ for $c'_s = B'_s\sqrt{\lambda/(2s)}$ and some large enough $d_{s,\alpha}$, because $\lambda m^2/(2s)\leqslant N^*_m\leqslant 2\lambda m^2/s$ for large enough $m$.

We have shown that with probability at least $1-m^{d_{s,\alpha}}\exp(-c'_s\epsilon_m^2 m)$ for large enough $m$, all $\boldsymbol{k}$ with $Z(\boldsymbol{k})=1$ satisfy $\boldsymbol{k}\notin\tilde{Q}$ and

$$\Big|\sum_{\boldsymbol{k}\in Q_{N^*_m}\setminus Q_{N_m}} Z(\boldsymbol{k})S(\boldsymbol{k})\hat{f}(\boldsymbol{k})\Big| \leqslant \sum_{\boldsymbol{k}\in Q_{N^*_m}\setminus(Q_{N_m}\cup\tilde{Q})}|\hat{f}(\boldsymbol{k})|. \tag{26}$$

Because $|\kappa_j|\leqslant\tau_m\sqrt{\lambda N^*_m/s}$ for all $j\in 1{:}s$ when $\boldsymbol{k}\in Q_{N^*_m}\setminus\tilde{Q}$, equation (24) implies for such $\boldsymbol{k}$

$$|\hat{f}(\boldsymbol{k})| \leqslant K_2 2^{-\|\kappa\|_1}\max\big(\alpha^{s\tau_m\sqrt{\lambda N^*_m/s}},1\big)\Big(\tau_m\sqrt{\lambda N^*_m/s}\Big)^{|J|\tau_m\sqrt{\lambda N^*_m/s}}.$$

Because $N_m\sim\lambda m^2/s$ and $N^*_m-N_m\sim K_3 m\log_2(m)$, we have $\sqrt{\lambda N^*_m/s}-\sqrt{\lambda N_m/s}\sim K_3\log_2(m)/2$ and we can find a constant $D^*$ depending on $K_3,|J|,\alpha,s$ such that for large enough $m$

$$|\hat{f}(\boldsymbol{k})| \leqslant K_2 2^{-\|\kappa\|_1+\tau_m|J|\log_2(m)\sqrt{\lambda N_m/s}+D^*m}.$$

By choosing $\epsilon_m = 1/\log_2(m)$ for $m\geqslant 3$, we see $\tau_m|J|\log_2(m)\sqrt{\lambda N_m/s} = 2|J|\log_2(m)\sqrt{\lambda N_m/s}+|J|\sqrt{\lambda N_m/s}$ and

$$|\hat{f}(\boldsymbol{k})| \leqslant K_2 2^{-\|\kappa\|_1+2|J|\log_2(m)\sqrt{\lambda N_m/s}+D^{**}m} \tag{27}$$

for some $D^{**}>D^*$. Hence when $m$ is large enough

$$\sum_{\boldsymbol{k}\in Q_{N^*_m}\setminus(Q_{N_m}\cup\tilde{Q})}|\hat{f}(\boldsymbol{k})| \leqslant K_2 2^{2|J|\log_2(m)\sqrt{\lambda N_m/s}+D^{**}m}\sum_{N=N_m+1}^{N^*_m} 2^{-N}\frac{\pi\sqrt{s}}{2\sqrt{3N}}\exp\Big(\pi\sqrt{\frac{sN}{3}}\Big) \leqslant K_2 2^{2|J|\log_2(m)\sqrt{\lambda N_m/s}+D^{**}m}\frac{\pi\sqrt{s}}{2\sqrt{3}(\sqrt{2}-1)} 2^{-N_m}\exp\Big(\pi\sqrt{\frac{sN_m}{3}}\Big). \tag{28}$$

The above bound is asymptotically smaller than the right-hand side of equation (25) once we choose a large enough $D_2>D^{**}$, so we complete the proof.

Corollary 2. Assume for $K,\alpha>0$ and $\mathcal{J} = \{J_1,\ldots,J_L\}$ with $J_1,\ldots,J_L\subseteq 1{:}s$,

$$\|f_{|\kappa|}\|_1 \leqslant K\alpha^{\|\kappa\|_0}\max_{J\in\mathcal{J}}\prod_{j\in J}(|\kappa_j|)! \tag{29}$$

where $\prod_{j\in J}(|\kappa_j|)! = 1$ if $J = \emptyset$. Let $J_{\max} = \max_{J\in\mathcal{J}}|J|$. Then under the complete random design, there exist constants $d_{s,\alpha}$ depending on $s,\alpha$, $D_{s,\alpha,\mathcal{J}}$ depending on $s,\alpha,J_{\max}$ and $m_{s,\alpha,\mathcal{J}}$ depending on $s,\alpha,J_{\max}$ such that for $m\geqslant m_{s,\alpha,\mathcal{J}}$

$$\Pr(|\mathrm{SUM}_2|\geqslant T_m) \leqslant m^{d_{s,\alpha}}\exp\Big(-\frac{c'_s m}{\log_2(m)^2}\Big)$$

where $c'_s = B'_s\sqrt{\lambda/(2s)}$ for $B'_s$ from Lemma 7 and

$$T_m = K 2^{-N_m+2J_{\max}\log_2(m)\sqrt{\lambda N_m/s}+D_{s,\alpha,\mathcal{J}}m}. \tag{30}$$

Proof. The $J_{\max}=0$ case follows immediately from Theorem 3. When $J_{\max}>0$, we notice $N^*_m$ in the proof of Theorem 4 does not depend on $\mathcal{J}$ and equation (26) still holds with probability at least $1-m^{d_{s,\alpha}}\exp(-c'_s\epsilon_m^2 m)$ for $\epsilon_m = 1/\log_2(m)$. Then similar to equation (27), we can find $D^{**}_J$ for each $J\in\mathcal{J}$ such that

$$|\hat{f}(\boldsymbol{k})| \leqslant K 2^{-\|\kappa\|_1}\max_{J\in\mathcal{J}} 2^{2|J|\log_2(m)\sqrt{\lambda N_m/s}+D^{**}_J m} \leqslant K 2^{-\|\kappa\|_1} 2^{2J_{\max}\log_2(m)\sqrt{\lambda N_m/s}+(\max_{J\in\mathcal{J}}D^{**}_J)m}. \tag{31}$$

A calculation similar to equation (28) gives the desired result.

Remark 1. We can generalize Corollary 2 to other choices of $N_m$ by noticing that the proof only requires $N_m\sim\lambda m^2/s$.

Remark 2. When $f$ is analytic over an open neighborhood of $[0,1]^s$, Proposition 2.2.10 of [11] shows for each $x\in[0,1]^s$, we can find $K_x,\alpha_x>0$ depending on $x$ and an open ball $V_x$ containing $x$ such that for all $|\kappa|\in\mathbb{N}^s_0$

$$\sup_{y\in V_x}|f_{|\kappa|}(y)| \leqslant K_x\alpha_x^{\|\kappa\|_0}\prod_{j=1}^{s}(|\kappa_j|)!.$$

By the compactness of $[0,1]^s$, equation (29) holds for $\mathcal{J} = \{1{:}s\}$ and some $K,\alpha>0$ independent of $x$, so Corollary 2 applies.

3.3 Proof of Step 3

Recall that

$$\mathrm{SUM}'_1 = \sum_{\boldsymbol{k}\in Q_{N_m}} Z(\boldsymbol{k})S'(\boldsymbol{k})\hat{f}(\boldsymbol{k})$$

where each $S'(\boldsymbol{k})$ is sampled independently from $\mathbb{U}(\{-1,1\})$. Our last step is to show $|\mathrm{SUM}'_1|$ is larger than the $T_m$ from Corollary 2 with high probability. We need the following lemma from [5]:

Lemma 8. Let $\{c_i, i\in 1{:}n\}$ be a set of real numbers with $|c_i|\geqslant 1$ for all $i\in 1{:}n$ and $\{S'_i, i\in 1{:}n\}$ be independent $\mathbb{U}(\{-1,1\})$ random variables. Then

$$\sup_{t\in\mathbb{R}}\Pr\bigg(\Big|\sum_{i=1}^{n} c_i S'_i - t\Big|\leqslant 1\bigg) \leqslant \frac{1}{2^n}\binom{n}{\lfloor n/2\rfloor}.$$

Theorem 5. Suppose $f$ satisfies the assumptions of Corollary 2. For $m\geqslant 1$ and $T_m$ given by equation (30), define

$$Q_m(T_m) = \{\boldsymbol{k}\in Q_{N_m}\mid|\hat{f}(\boldsymbol{k})|\geqslant T_m\}.$$

Assume

$$\liminf_{m\to\infty}\frac{|Q_m(T_m)|}{|Q_{N_m}|}>0. \tag{32}$$

Then under the complete random design,

$$\limsup_{m\to\infty}\sqrt{m}\,\Pr(|\mathrm{SUM}'_1|\leqslant T_m)<\infty.$$

Proof. Let $\mathcal{V} = \{\boldsymbol{k}\in Q_{N_m}\mid Z(\boldsymbol{k})=1\}$ and $\mathcal{W} = \mathcal{V}\cap Q_m(T_m)$. For large enough $m$, equation (32) along with $|Q_{N_m}|\sim c_s m 2^m$ implies

$$\mathbb{E}|\mathcal{W}| = \mathbb{E}\Big[\sum_{\boldsymbol{k}\in Q_m(T_m)} Z(\boldsymbol{k})\Big] = 2^{-m}|Q_m(T_m)| \geqslant cm$$

for some constant $c>0$. By a proof similar to that of Lemma 4,

$$\Pr\Big(|\mathcal{W}|\leqslant\frac{\mathbb{E}|\mathcal{W}|}{2}\Big) \leqslant \frac{\mathrm{Var}(|\mathcal{W}|)}{\big(\mathbb{E}|\mathcal{W}|-\mathbb{E}|\mathcal{W}|/2\big)^2} \leqslant \frac{\mathbb{E}|\mathcal{W}|}{(\mathbb{E}|\mathcal{W}|)^2/4} \leqslant \frac{4}{cm}.$$

When $|\mathcal{W}|>\mathbb{E}|\mathcal{W}|/2\geqslant cm/2$, we write

$$\mathrm{SUM}'_1 = \sum_{\boldsymbol{k}\in\mathcal{W}} S'(\boldsymbol{k})\hat{f}(\boldsymbol{k}) + \sum_{\boldsymbol{k}\in\mathcal{V}\setminus\mathcal{W}} S'(\boldsymbol{k})\hat{f}(\boldsymbol{k}).$$

Conditioned on $\sigma(Z) = \{Z(\boldsymbol{k}),\boldsymbol{k}\in Q_{N_m}\}$, we can apply Lemma 8 to $\mathrm{SUM}'_1$ by treating the sum over $\boldsymbol{k}\in\mathcal{V}\setminus\mathcal{W}$ as a shift term and get

$$\Pr\big(|\mathrm{SUM}'_1|\leqslant T_m\mid\sigma(Z)\big) \leqslant \sup_{t\in\mathbb{R}}\Pr\bigg(\Big|\sum_{\boldsymbol{k}\in\mathcal{W}} S'(\boldsymbol{k})\frac{\hat{f}(\boldsymbol{k})}{T_m}-t\Big|\leqslant 1\,\Big|\,\sigma(Z)\bigg) \leqslant \frac{1}{2^{|\mathcal{W}|}}\binom{|\mathcal{W}|}{\lfloor|\mathcal{W}|/2\rfloor}. \tag{33}$$

Our conclusion then follows from the asymptotic relation $\binom{n}{\lfloor n/2\rfloor}\sim 2^n(\pi n)^{-1/2}$ and $|\mathcal{W}|>cm/2$.

The next theorem provides a sufficient condition for equation (32) to hold. Simply put, we require $f$ to be "nondegenerate" in the sense that a sufficient number of $\boldsymbol{k}\in Q_{N_m}$ have $|\hat{f}(\boldsymbol{k})|$ comparable to their upper bounds in equation (31) up to an exponential factor in $m$.

Theorem 6. For $\beta>0$ and $\mathcal{J} = \{J_1,\ldots,J_L\}$ with $J_1,\ldots,J_L\subseteq 1{:}s$, define

$$Q_{N,\beta,\mathcal{J}}(f) = \Big\{\boldsymbol{k}\in Q_N\,\Big|\,|\hat{f}(\boldsymbol{k})|\geqslant 2^{-\|\kappa\|_1}\beta^{\|\kappa\|_0}\max_{J\in\mathcal{J}}\prod_{j\in J}(|\kappa_j|)!\Big\} \tag{34}$$

and

$$\mathcal{F}_{\beta,\mathcal{J}} = \Big\{f\in C^\infty([0,1]^s)\,\Big|\,\sup_{c>0}\liminf_{N\to\infty}\frac{|Q_{N,\beta,\mathcal{J}}(cf)|}{|Q_N|}>0\Big\}.$$

If $f\in C^\infty([0,1]^s)$ satisfies equation (29) for some $K,\alpha>0$ and $\mathcal{J} = \{J_1,\ldots,J_L\}$ and $f\in\bigcup_{\beta>0}\mathcal{F}_{\beta,\mathcal{J}}$, then equation (32) holds.

Proof. Without loss of generality, we assume $f\in\mathcal{F}_{\beta,\mathcal{J}}$ for some $\beta\leqslant 1$. We first define $N_{m,\beta} = N_m - D_\beta m$ for a large enough $D_\beta\in\mathbb{N}$ we will specify later. By equation (17) and the mean value theorem

$$\sqrt{\frac{\lambda N_m}{s}}-\sqrt{\frac{\lambda N_{m,\beta}}{s}} \leqslant \sqrt{\frac{\lambda}{s}}\frac{1}{2\sqrt{N_{m,\beta}}}D_\beta m \sim \frac{1}{2}D_\beta.$$

So for large enough $m$, we have $\sqrt{\lambda N_m/s}-\sqrt{\lambda N_{m,\beta}/s}\leqslant D_\beta$. Furthermore, equations (14) and (15) imply

$$\lim_{m\to\infty}\frac{|Q_{N_{m,\beta}}|}{|Q_{N_m}|} = c_{D_\beta} \tag{35}$$

for a constant $c_{D_\beta}>0$ depending on $D_\beta$ and $s$. Next we define for $m\geqslant 1$

$$\tilde{Q}_{m,\beta} = \Big\{\boldsymbol{k}\in Q_{N_{m,\beta}}\,\Big|\,\Big|\frac{|\kappa_j|}{\sqrt{\lambda N_{m,\beta}/s}}-2\Big|>m^{-1/4}\ \text{for some } j\in 1{:}s\Big\}.$$

When $\boldsymbol{k}\in Q_{N_{m,\beta}}\setminus\tilde{Q}_{m,\beta}$, Stirling's formula implies for large enough $m$

$$\max_{J\in\mathcal{J}}\prod_{j\in J}(|\kappa_j|)! \geqslant \Big(\big\lfloor(2-m^{-1/4})\sqrt{\lambda N_{m,\beta}/s}\big\rfloor!\Big)^{J_{\max}} \geqslant \Big(\big(\big\lfloor(2-m^{-1/4})\sqrt{\lambda N_m/s}\big\rfloor-2D_\beta\big)!\Big)^{J_{\max}} \geqslant \Big(\big\lfloor(2-m^{-1/4})\sqrt{\lambda N_m/s}\big\rfloor!\ \big(2\sqrt{\lambda N_m/s}\big)^{-2D_\beta}\Big)^{J_{\max}} \geqslant 2^{2J_{\max}\log_2(m)\sqrt{\lambda N_m/s}-J_{\max}K_s m}\big(2\sqrt{\lambda N_m/s}\big)^{-2D_\beta J_{\max}}$$

for a large enough $K_s$ depending on $s$. By equation (34) with $N = N_{m,\beta}$ and the above lower bound, $\boldsymbol{k}\in Q_{N_{m,\beta},\beta,\mathcal{J}}(cf)\setminus\tilde{Q}_{m,\beta}$ for $c>0$ implies for large enough $m$

$$c|\hat{f}(\boldsymbol{k})| \geqslant 2^{-\|\kappa\|_1}\beta^{\|\kappa\|_0}\max_{J\in\mathcal{J}}\prod_{j\in J}(|\kappa_j|)! \geqslant 2^{-N_{m,\beta}}\beta^{s(2+m^{-1/4})\sqrt{\lambda N_{m,\beta}/s}}\,2^{2J_{\max}\log_2(m)\sqrt{\lambda N_m/s}-J_{\max}K_s m}\big(2\sqrt{\lambda N_m/s}\big)^{-2J_{\max}D_\beta} \geqslant 2^{-N_m+2J_{\max}\log_2(m)\sqrt{\lambda N_m/s}+D_\beta m+3s\log_2(\beta)\sqrt{\lambda N_m/s}-J_{\max}K_s m-2J_{\max}D_\beta\log_2(2\sqrt{\lambda N_m/s})}.$$
Comparing the above bound with $T_m$ given by equation (30), we can lower bound $|Q_m(T_m)|$ by
$$|Q_m(T_m)| \ge \big|Q_{N_{m,\beta},\beta,\mathcal{J}}(cf) \setminus \tilde Q_{m,\beta}\big| \ge \big|Q_{N_{m,\beta},\beta,\mathcal{J}}(cf)\big| - |\tilde Q_{m,\beta}|$$
for large enough $m$, after choosing a $D_\beta \in \mathbb{N}$ such that
$$D_\beta m + 3s\log_2(\beta)\sqrt{\lambda N_m/s} - J_{\max}K_s m - 2J_{\max}D_\beta\log_2\big(2\sqrt{\lambda N_m/s}\big) - D_{s,\alpha,\mathcal{J}}m$$
grows to $\infty$ as $m \to \infty$. By Lemma 7,
$$\frac{|\tilde Q_{m,\beta}|}{|Q_{N_{m,\beta}}|} \le sA'_s N_{m,\beta}^{1/4}\exp\big(-B'_s m^{-1/2}\sqrt{N_{m,\beta}}\big),$$
which converges to $0$ as $m \to \infty$. On the other hand,
$$\sup_{c>0}\liminf_{N\to\infty}\frac{|Q_{N,\beta,\mathcal{J}}(cf)|}{|Q_N|} > 0$$
implies there exist $c, c_\beta > 0$ such that $|Q_{N_{m,\beta},\beta,\mathcal{J}}(cf)| \ge c_\beta|Q_{N_{m,\beta}}|$ for large enough $m$. Hence, we conclude from equation (35) that
$$\liminf_{m\to\infty}\frac{|Q_m(T_m)|}{|Q_{N_m}|} \ge \liminf_{m\to\infty}\frac{|Q_{N_{m,\beta},\beta,\mathcal{J}}(cf)| - |\tilde Q_{m,\beta}|}{|Q_{N_{m,\beta}}|}\cdot\frac{|Q_{N_{m,\beta}}|}{|Q_{N_m}|} \ge c_\beta c_{D_\beta} > 0.$$

Remark 3. The definition of $\mathcal{F}_{\beta,\mathcal{J}}$ might appear nonstandard. Notably, $\mathcal{F}_{\beta,\mathcal{J}}$ is not convex and excludes the zero function. However, [2] argues that conical input function spaces are preferred over convex ones in adaptive confidence interval construction, and $\mathcal{F}_{\beta,\mathcal{J}}$ is by design conical. In general, some nondegeneracy assumptions are required to exclude constant integrands, for which $\hat\mu_\infty - \mu$ is identically $0$. See also Theorem 2 of [14] and Theorem 2 of [1] for nondegeneracy assumptions used to establish asymptotic normality of Owen-scrambled QMC means.

Remark 4. For an example where $f \notin \bigcup_{\beta>0}\mathcal{F}_{\beta,\mathcal{J}}$, consider $f$ satisfying $f^{(|\kappa|)} = 0$ whenever $|\kappa_1| > \kappa$ for some $\kappa \in \mathbb{N}_0$. Lemma 2 then implies $\hat f(k) = 0$ whenever $|\kappa_1| > \kappa$, and Lemma 7 shows only an exponentially small fraction of $k \in Q_N$ have nonzero $\hat f(k)$. Hence, $f \notin \mathcal{F}_{\beta,\mathcal{J}}$ for any $\beta > 0$. In this case, however, $f$ admits the form
$$f = \sum_{p=0}^{\kappa} g_p(x_{-1})x_1^p$$
with $x_{-1} = (x_2, \dots, x_s)$, so one can first integrate $f$ along the
$x_1$ direction and apply our algorithm to $\sum_{p=0}^{\kappa} g_p(x_{-1})/(p+1)$ instead. This is called pre-integration in the QMC literature, a technique to regularize the integrands. See [9, 13] for further reference. Another solution is to localize our calculation to $Q' = \{k \in \mathbb{N}_*^s \mid |\kappa_1| \le \kappa\}$. Specifically, we set $K_m = Q_{N_m} \cap Q'$ with $N_m = \sup\{N \in \mathbb{N}_0 \mid |Q_N \cap Q'| \le c'_s m 2^m\}$ for a suitable $c'_s > 0$. Repeating our previous proof strategy, we can establish the counterparts of Steps 1-3 when $f$ satisfies equation (29) with $J_1, \dots, J_L \subseteq 2{:}s$ and
$$\sup_{c>0}\liminf_{N\to\infty}\frac{|Q_{N,\beta,\mathcal{J}}(cf)|}{|Q_N \cap Q'|} > 0$$
for some $\beta, c > 0$. The above two arguments can be generalized to cases where for a subset $u \subseteq 1{:}s$ and a set of thresholds $\{\kappa_j \in \mathbb{N}_0, j \in u\}$, we have $f^{(|\kappa|)} = 0$ whenever $|\kappa_j| > \kappa_j$ for any $j \in u$. It is an open question whether all $f \notin \bigcup_{\beta>0}\mathcal{F}_{\beta,\mathcal{J}}$ belong to one of the above cases, which we leave for future study.

Remark 5. It is easy to prove $f \in \bigcup_{\beta>0}\mathcal{F}_{\beta,\mathcal{J}}$ when $f^{(|\kappa|)}(x)$ does not change sign over $[0,1]^s$. In this case, equations (6) and (7) imply
$$|\hat f(k)| \ge \inf_{x\in[0,1]^s}|f^{(|\kappa|)}(x)|\int_{[0,1]^s}\prod_{j=1}^s W_{\kappa_j}(x_j)\,dx = \inf_{x\in[0,1]^s}|f^{(|\kappa|)}(x)|\prod_{j\in1:s,\,\kappa_j\ne\emptyset}\ \prod_{\ell\in\kappa_j}2^{-\ell-1} = \inf_{x\in[0,1]^s}|f^{(|\kappa|)}(x)|\,2^{-\|\kappa\|_1-\|\kappa\|_0}.$$
In order for $f \in \bigcup_{\beta>0}\mathcal{F}_{\beta,\mathcal{J}}$, it suffices for
$$\inf_{x\in[0,1]^s}|f^{(|\kappa|)}(x)| \ge c_0\beta^{\|\kappa\|_0}\max_{J\in\mathcal{J}}\prod_{j\in J}(|\kappa_j|)! \quad (36)$$
to hold for some constants $c_0, \beta > 0$. In particular, simple integrands such as $f(x) = \exp(\sum_{j=1}^s x_j)$ and $f(x) = \prod_{j=1}^s \frac{1}{1 - x_j/2}$ can be shown to satisfy Theorem 6 using this strategy.

The above argument also suggests we can regularize $f$ by adding a function with sufficiently large positive derivatives before applying Theorem 6. For instance, if $f$ satisfies equation (29) with $\mathcal{J} = \{\emptyset\}$ and some $K, \alpha > 0$, then for $K' > K$, the sum $f(x) + K'\exp(\alpha\sum_{j=1}^s x_j)$ satisfies equation (36) with $c_0 = K' - K$ and $\beta = \alpha$. This regularization, however, is not practically useful because choosing suitable $K'$ and $\alpha$ requires information on the derivatives of $f$. Moreover, the error in integrating $K'\exp(\alpha\sum_{j=1}^s x_j)$ may dominate that of $f$ and make the confidence interval unnecessarily wide. How to formulate easily verifiable conditions that allow $f^{(|\kappa|)}(x)$ to change sign over $[0,1]^s$ is another interesting question we leave for future research.

3.4 Main results

As promised, the preceding three steps provide all the ingredients for the proof of Theorem 1. In fact, our analysis gives a quantitative estimate on how fast the quantile of $\mu$ converges to $1/2$.

Theorem 7. If $f \in C^\infty([0,1]^s)$ satisfies the assumptions of Theorem 6, then under the complete random design
$$\limsup_{m\to\infty}\sqrt{m}\,\Big|\Pr(\hat\mu_\infty < \mu) + \tfrac12\Pr(\hat\mu_\infty = \mu) - \tfrac12\Big| < \infty. \quad (37)$$

Proof. By equations (11), (12) and the symmetry of $\mathrm{SUM}_1'$,
$$\Big|\Pr(\hat\mu_\infty < \mu) + \tfrac12\Pr(\hat\mu_\infty = \mu) - \tfrac12\Big| \le d_{TV}(\mathrm{SUM}_1, \mathrm{SUM}_1') + \Pr(|\mathrm{SUM}_1| \le |\mathrm{SUM}_2|)$$
$$\le d_{TV}(\mathrm{SUM}_1, \mathrm{SUM}_1') + \Pr(|\mathrm{SUM}_2| \ge T_m) + \Pr(|\mathrm{SUM}_1| \le T_m)$$
$$\le 2d_{TV}(\mathrm{SUM}_1, \mathrm{SUM}_1') + \Pr(|\mathrm{SUM}_2| \ge T_m) + \Pr(|\mathrm{SUM}_1'| \le T_m).$$
Theorem 2 proves $\lim_{m\to\infty}\sqrt{m}\,d_{TV}(\mathrm{SUM}_1, \mathrm{SUM}_1') = 0$. Corollary 2 proves $\lim_{m\to\infty}\sqrt{m}\Pr(|\mathrm{SUM}_2| \ge T_m) = 0$. Theorem 5 together with Theorem 6 proves $\limsup_{m\to\infty}\sqrt{m}\Pr(|\mathrm{SUM}_1'| \le T_m) < \infty$. Our conclusion follows by combining the above results.

The next corollary shows sample quantiles of $\hat\mu_E$ can be used to construct confidence intervals for $\mu$ with the asymptotically desired coverage level.

Corollary 3. For $r \in \mathbb{N}$, let $\hat\mu_E^1, \dots, \hat\mu_E^r$ be $r$ independent QMC estimators
given by equation (3) and $\hat\mu_E^{(\nu)}$ be their $\nu$-th order statistic. If $f \in C^\infty([0,1]^s)$ satisfies the assumptions of Theorem 6 and the precision $E$ increases with $m$ so that $E \ge N_m$, we have under the complete random design
$$\limsup_{m\to\infty}\sqrt{m}\,\Big|\Pr(\hat\mu_E < \mu) + \tfrac12\Pr(\hat\mu_E = \mu) - \tfrac12\Big| < \infty \quad (38)$$
and for $1 \le \ell \le u \le r$,
$$\liminf_{m\to\infty}\Pr\big(\mu \in [\hat\mu_E^{(\ell)}, \hat\mu_E^{(u)}]\big) \ge F(u-1) - F(\ell-1), \quad (39)$$
where $F(\nu)$ is the cumulative distribution function of the binomial distribution $B(r, 1/2)$.

Proof. Let $\mathrm{SUM}_{2,E} = \mathrm{SUM}_2 + \hat\mu_E - \hat\mu_\infty$. Then
$$\hat\mu_E - \mu = (\hat\mu_\infty - \mu) + (\hat\mu_E - \hat\mu_\infty) = \mathrm{SUM}_1 + \mathrm{SUM}_{2,E}.$$
Lemma 1 of [21] shows
$$|\hat\mu_E - \hat\mu_\infty| \le \frac{\sqrt{s}}{2^E}\sup_{x\in[0,1]^s}\|\nabla f(x)\|_2. \quad (40)$$
Since $f \in C^\infty([0,1]^s)$, the gradient $\nabla f(x)$ is continuous over $[0,1]^s$ and has a bounded vector norm. Because $E \ge N_m$, by increasing $D_{s,\alpha,\mathcal{J}}$ in the definition of $T_m$ if necessary, we can assume $|\hat\mu_E - \hat\mu_\infty| \le T_m$ for large enough $m$. Hence under the conditions of Corollary 2, we have for large enough $m$
$$\Pr\big(|\mathrm{SUM}_{2,E}| \ge 2T_m\big) \le m^{d_{s,\alpha}}\exp\big(-c'_s m\log_2(m)^2\big). \quad (41)$$
Equation (38) then follows from a slight modification of the proof of Theorem 7.

Next, by the property of order statistics,
$$\Pr(\hat\mu_E^{(\ell)} > \mu) = \sum_{j=0}^{\ell-1}\binom{r}{j}\Pr(\hat\mu_E \le \mu)^j\Pr(\hat\mu_E > \mu)^{r-j},$$
which is monotonically decreasing in $\Pr(\hat\mu_E \le \mu)$. Equation (38) implies $\liminf_{m\to\infty}\Pr(\hat\mu_E \le \mu) \ge 1/2$, so we have
$$\limsup_{m\to\infty}\Pr(\hat\mu_E^{(\ell)} > \mu) \le \sum_{j=0}^{\ell-1}\binom{r}{j}\frac{1}{2^r} = F(\ell-1). \quad (42)$$
Similarly,
$$\limsup_{m\to\infty}\Pr(\hat\mu_E^{(u)} < \mu) \le \sum_{j=u}^{r}\binom{r}{j}\frac{1}{2^r} = 1 - F(u-1). \quad (43)$$
Therefore, $\liminf_{m\to\infty}\Pr(\mu \in [\hat\mu_E^{(\ell)}, \hat\mu_E^{(u)}]) \ge F(u-1) - F(\ell-1)$.

In addition to asymptotically valid coverage, the interval length $\hat\mu_E^{(u)} - \hat\mu_E^{(\ell)}$ converges in probability to $0$ at a super-polynomial rate. To prove this, we first need to generalize Theorem 2 of [21] to the complete random design setting.

Theorem 8. If $f \in C^\infty([0,1]^s)$ satisfies equation (29) for some $K, \alpha > 0$ and $\mathcal{J} = \{J_1, \dots, J_L\}$, then for any $\gamma > 0$, we can find a constant $\Xi$ depending on $s, \alpha, \gamma, J_{\max}$ such that under the complete random design
$$\limsup_{m\to\infty} m^\gamma \Pr\big(|\hat\mu_\infty - \mu| > K2^{-\lambda m^2/s + \Xi m\log_2(m)}\big) \le 1.$$

Proof. Our proof strategy is similar to that of Theorem 7. Given $\gamma > 0$, let $N_{m,\gamma} = \sup\{N \in \mathbb{N} \mid |Q_N| \le m^{-\gamma}2^m\}$. By equation (15) and a calculation similar to equation (17),
$$N_{m,\gamma} \sim \frac{\lambda}{s}m^2 + (1 - 2\gamma)\frac{\lambda}{s}m\log_2(m) + D_{s,\gamma}m \quad (44)$$
for some $D_{s,\gamma}$ depending on $s$ and $\gamma$. Next let $\hat\mu_\infty - \mu = \mathrm{SUM}_{1,\gamma} + \mathrm{SUM}_{2,\gamma}$ with
$$\mathrm{SUM}_{1,\gamma} = \sum_{k\in Q_{N_{m,\gamma}}} Z(k)S(k)\hat f(k), \qquad \mathrm{SUM}_{2,\gamma} = \sum_{k\in\mathbb{N}_*^s\setminus Q_{N_{m,\gamma}}} Z(k)S(k)\hat f(k).$$
Because $|Q_{N_{m,\gamma}}| \le m^{-\gamma}2^m$ and $\Pr(Z(k) = 1) = 2^{-m}$ for all $k \in Q_{N_{m,\gamma}}$,
$$\Pr\big(Z(k) = 1 \text{ for any } k \in Q_{N_{m,\gamma}}\big) = \Pr\Big(\sum_{k\in Q_{N_{m,\gamma}}} Z(k) \ge 1\Big) \le \mathbb{E}\Big[\sum_{k\in Q_{N_{m,\gamma}}} Z(k)\Big] \le m^{-\gamma}.$$
Therefore, $\mathrm{SUM}_{1,\gamma} = 0$ with probability at least $1 - m^{-\gamma}$. Next, by Remark 1 we can apply Corollary 2 to $\mathrm{SUM}_{2,\gamma}$ with $N_{m,\gamma}$ replacing $N_m$ and get
$$\lim_{m\to\infty} m^\gamma\Pr\big(|\mathrm{SUM}_{2,\gamma}| \ge T_{m,\gamma}\big) = 0$$
for $T_{m,\gamma} = K2^{-N_{m,\gamma} + 2J_{\max}\log_2(m)\sqrt{\lambda N_{m,\gamma}/s} + D_{s,\alpha,\gamma,\mathcal{J}}m}$ with a sufficiently large $D_{s,\alpha,\gamma,\mathcal{J}}$ depending on $s, \alpha, \gamma, J_{\max}$. In view of equation (44), we can further find $\Xi$ depending on $s, \gamma, J_{\max}, D_{s,\alpha,\gamma,\mathcal{J}}$ such that
$$-N_{m,\gamma} + 2J_{\max}\log_2(m)\sqrt{\lambda N_{m,\gamma}/s} + D_{s,\alpha,\gamma,\mathcal{J}}m \le -\lambda m^2/s + \Xi m\log_2(m)$$
for large enough $m$. Our conclusion then follows by taking the union bound over the probability of $\mathrm{SUM}_{1,\gamma} \ne 0$ and $|\mathrm{SUM}_{2,\gamma}| \ge T_{m,\gamma}$.

Corollary 4. Under the conditions of Corollary 3, we can find for any $\gamma > 0$ a constant $\Xi'$ depending on $s, \alpha, \gamma, J_{\max}$ such that
$$\limsup_{m\to\infty} m^{r_*\gamma}\Pr\big(\hat\mu_E^{(u)} - \hat\mu_E^{(\ell)} > 4K2^{-\lambda m^2/s + \Xi' m\log_2(m)}\big) \le \binom{r}{r_*}$$
with $r_* = \min(\ell, r - u + 1)$.

Proof. Given $\gamma > 0$ and the corresponding $\Xi$ from Theorem 8, we can find a
constant $\Xi' \ge \Xi$ such that $N_m - \lambda m^2/s + \Xi' m\log_2(m) \to \infty$ as $m \to \infty$ because $N_m \sim \lambda m^2/s + 3\lambda m\log_2(m)/s$. Then $E \ge N_m$ and equation (40) imply $|\hat\mu_E - \hat\mu_\infty| \le K2^{-\lambda m^2/s + \Xi' m\log_2(m)}$ for large enough $m$. Together with Theorem 8, we have
$$\limsup_{m\to\infty} m^\gamma\Pr\big(|\hat\mu_E - \mu| > 2K2^{-\lambda m^2/s + \Xi' m\log_2(m)}\big) \le 1.$$
In order for either $|\hat\mu_E^{(\ell)} - \mu|$ or $|\hat\mu_E^{(u)} - \mu|$ to exceed $2K2^{-\lambda m^2/s + \Xi' m\log_2(m)}$, we need at least $r_*$ instances among $\hat\mu_E^1, \dots, \hat\mu_E^r$ to have an error greater than $2K2^{-\lambda m^2/s + \Xi' m\log_2(m)}$. By taking a union bound over all $\binom{r}{r_*}$ size-$r_*$ subsets of $\hat\mu_E^1, \dots, \hat\mu_E^r$, we have
$$\limsup_{m\to\infty} m^{r_*\gamma}\Pr\Big(\max\big(|\hat\mu_E^{(\ell)} - \mu|, |\hat\mu_E^{(u)} - \mu|\big) > 2K2^{-\lambda m^2/s + \Xi' m\log_2(m)}\Big) \le \binom{r}{r_*}.$$
When both $|\hat\mu_E^{(\ell)} - \mu|$ and $|\hat\mu_E^{(u)} - \mu|$ are bounded by $2K2^{-\lambda m^2/s + \Xi' m\log_2(m)}$, we have $\hat\mu_E^{(u)} - \hat\mu_E^{(\ell)} \le 4K2^{-\lambda m^2/s + \Xi' m\log_2(m)}$ and our proof is complete.

Remark 6. One can also prove a strong convergence result using Theorem 8. Specifically, we can construct a sequence of $\hat\mu_\infty(m)$ where $\hat\mu_\infty(m+1)$ keeps the same digital shifts $D_j$ as $\hat\mu_\infty(m)$ but constructs its $j$-th generating matrix $\mathcal{C}_j$ by appending a new column of $U(\{0,1\})$ entries to that of $\hat\mu_\infty(m)$. By taking $\gamma > 1$ and the corresponding $\Xi$ from Theorem 8, the Borel-Cantelli lemma shows $|\hat\mu_\infty(m) - \mu| > K2^{-\lambda m^2/s + \Xi m\log_2(m)}$ only occurs for finitely many $m \ge 1$ almost surely. Hence, we have for any $\lambda' < \lambda$
$$\Pr\Big(\lim_{m\to\infty} 2^{\lambda' m^2/s}|\hat\mu_\infty(m) - \mu| = 0\Big) = 1.$$
Similar results can be established for the confidence interval length as well.

Remark 7. If a point estimator for $\mu$ is needed, we can generate $r'$ groups of $r$ independent $\hat\mu_E$, compute $\hat\mu_E^{(\ell)}$ and $\hat\mu_E^{(u)}$ of each group and take the medians $\mathrm{Med}(\hat\mu_E^{(\ell)})$ and $\mathrm{Med}(\hat\mu_E^{(u)})$ of the $r'$ values of $\hat\mu_E^{(\ell)}$ and $\hat\mu_E^{(u)}$. By a proof similar to that of Corollary 3 in [21], we can show the mean squared errors of both $\mathrm{Med}(\hat\mu_E^{(\ell)})$ and $\mathrm{Med}(\hat\mu_E^{(u)})$ converge to $0$ at a super-polynomial rate given $r'$ grows at an $m^2$ rate as $m$ increases. Any value between $\mathrm{Med}(\hat\mu_E^{(\ell)})$ and $\mathrm{Med}(\hat\mu_E^{(u)})$ can therefore be used as a point estimator.
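The binomial coverage guarantee of Corollary 3 is easy to evaluate numerically. As an illustrative sketch (not part of the paper's code), the following computes $F(u-1) - F(\ell-1)$ for the choice $r = 9$, $\ell = 2$, $u = 8$ used in the experiments of Section 5:

```python
from math import comb

def binom_cdf(nu: int, r: int) -> float:
    """CDF F(nu) of the Bin(r, 1/2) distribution, as in Corollary 3."""
    return sum(comb(r, j) for j in range(nu + 1)) / 2**r

r, ell, u = 9, 2, 8
coverage = binom_cdf(u - 1, r) - binom_cdf(ell - 1, r)
print(f"guaranteed asymptotic coverage: {coverage:.4f}")  # 0.9609, i.e. ~96.1%
```

Taking $\ell = 2$ and $u = 8$ discards the extreme replicate on each side, which is what makes the quantile interval robust to the outliers that inflate the $t$-interval in Section 5.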
In addition, by equations (42) and (43), we also have that $\Pr\big(\mu \in [\mathrm{Med}(\hat\mu_E^{(\ell)}), \mathrm{Med}(\hat\mu_E^{(u)})]\big)$ converges to $1$ as $m, r' \to \infty$, given $F(\ell-1) < 1/2$ and $F(u-1) > 1/2$ for $F(\nu)$ defined in Corollary 3.

4 Generalization to other randomization

So far we have been discussing the complete random design. The analysis is easy because every linear combination of rows of $\mathcal{C}_j$, $j \in 1{:}s$ follows a $U(\{0,1\}^m)$ distribution. In application, the random linear scrambling is often preferred because the resulting digital nets usually have better low-dimensional projections. The construction of [10], for example, optimizes over all two-dimensional projections. In this section, we show what additional assumptions are needed for the results in Section 3 to hold under more general randomization.

Recall that in the random linear scrambling, $\mathcal{C}_j = M_jC_j$ for a random lower-triangular matrix $M_j \in \{0,1\}^{E\times m}$ and a fixed generating matrix $C_j \in \{0,1\}^{m\times m}$. Usually every $C_j$ is nonsingular, ensuring that no points overlap in their one-dimensional projections. A useful feature when $C_j$ has full rank is that the random linear scrambling agrees with the complete random design except for the first $m$ rows of each $\mathcal{C}_j$. This motivates the following definition:

Definition 1. The marginal order of a randomization scheme for $\mathcal{C}_j \in \{0,1\}^{E\times m}$, $j \in 1{:}s$ is the smallest $d \in \mathbb{N}_0$ such that for every
$j \in 1{:}s$ and $\ell > dm$, $\mathcal{C}_j(\ell,:)$ is independently drawn from $U(\{0,1\}^m)$.

The marginal order is $0$ for the complete random design and $1$ for the random linear scrambling provided every generating matrix has full rank. Randomization of higher marginal order is useful when randomizing the higher-order digital nets from [3]. The next lemma is useful in showing that most $k \in Q_{N_m}$ satisfy $\Pr(Z(k) = 1) = 2^{-m}$ even when the marginal order is positive. The proof is given in the appendix.

Lemma 9. For $L \ge 0$ and $\kappa \subseteq \mathbb{N}$, define $\kappa^{>L} = \{\ell \in \kappa \mid \ell > L\}$. Let $N \ge 1$ and $k_1, k_2$ be sampled independently from $U(Q_N)$. Then for any $\tau > 0$, there exist positive constants $A_{\tau,s}, B_{\tau,s}$ depending on $\tau, s$ such that for each $j \in 1{:}s$,
$$\Pr\big(\kappa_{j,1}^{>\tau\sqrt{N}} = \emptyset\big) \le A_{\tau,s}N^{1/4}\exp\big(-B_{\tau,s}\sqrt{N}\big)$$
and
$$\Pr\big(\kappa_{j,1}^{>\tau\sqrt{N}} = \kappa_{j,2}^{>\tau\sqrt{N}}\big) \le A_{\tau,s}^2N^{1/2}\exp\big(-2B_{\tau,s}\sqrt{N}\big).$$

Another common feature of the random linear scrambling is that $\Pr(Z(k) = 1) \le 2^{-m+R}$ for nonzero $k$ and a constant $R$ depending on $s$ and the generating matrices [21]. We generalize it as the following definition:

Definition 2. For $r \in \mathbb{N}$, let $\mathcal{V}_r = \{V \subseteq \mathbb{N}_*^s \mid |V| = \operatorname{rank}(V) = r\}$. The $r$-way rank deficiency $R_{m,r}$ of a randomization scheme for $\mathcal{C}_j \in \{0,1\}^{E\times m}$, $j \in 1{:}s$ is defined as
$$R_{m,r} = mr + \sup_{V\in\mathcal{V}_r}\log_2\Pr\big(Z(k) = 1 \text{ for all } k \in V\big)$$
with $Z(k) = \mathbb{1}\{\sum_{j=1}^s\sum_{\ell\in\kappa_j}\mathcal{C}_j(\ell,:) = 0 \bmod 2\}$.

In [19], a randomization scheme is called asymptotically full-rank if $R_{m,1}$ is bounded as $m \to \infty$. This is satisfied by the random linear scrambling based on common choices of generating matrices such as those from Sobol' [22]. Much less is known about $R_{m,r}$ for $r \ge 2$. One might guess $R_{m,r} \le rR_{m,1}$, but this is not true in general. Section 5 of [20] provides a three-dimensional example where $R_{m,1} \le 5$ but $R_{m,2} \ge m/2 + 3$ and $m$ is an arbitrarily large even number. Fortunately, for most generating matrices the corresponding $R_{m,r}$ grows logarithmically in $m$ in the following sense:

Theorem 9. Let $\mathcal{I}_m$ be the set of nonsingular $m\times m$ $\mathbb{F}_2$-matrices and let $C_j$, $j \in 1{:}s$ be independently sampled from $U(\mathcal{I}_m)$. Then for the random linear scrambling based on generating matrices $C_j$, $j \in 1{:}s$,
$$\Pr\big(R_{m,1} \ge 3s\log_2(m+1)\big) \le \frac{\exp(2s)}{(m+1)^{2s}} \quad (45)$$
and for $r \ge 2$,
$$\Pr\Big(R_{m,r} \ge \max\big(R_{m,r-1}, (2^r + 2r - 1)s\log_2(m+1)\big)\Big) \le \frac{\exp(2sr)}{(m+1)^{2sr}}. \quad (46)$$

Proof. Recall that $\lceil\kappa_j\rceil$ is the largest element of $\kappa_j \subseteq \mathbb{N}$ and $\lceil\kappa\rceil = \max_{j\in1:s}\lceil\kappa_j\rceil$. For any nonzero $k$, if $\lceil\kappa_{j^*}\rceil > m$ for a $j^* \in 1{:}s$, we can find $\ell \in \kappa_{j^*}$ such that $\ell > m$ and $M_{j^*}(\ell,:)$ follows a $U(\{0,1\}^m)$ distribution. Because $C_{j^*}$ is nonsingular, $\mathcal{C}_{j^*}(\ell,:) = M_{j^*}(\ell,:)C_{j^*}$ also follows a $U(\{0,1\}^m)$ distribution and $\sum_{j=1}^s\sum_{\ell\in\kappa_j}\mathcal{C}_j(\ell,:) = 0 \bmod 2$ occurs with probability $2^{-m}$. Hence, it suffices to consider the maximum of $\Pr(Z(k) = 1 \mid C_j, j\in1{:}s)$ over all nonzero $k$ with $\lceil\kappa\rceil \le m$.

Instead of directly sampling $C_j$ from $U(\mathcal{I}_m)$, we sample $C_j^*$, $j\in1{:}s$ independently from $U(\{0,1\}^{m\times m})$ and view $C_j$ as $C_j^*$ conditioned on $C_j^* \in \mathcal{I}_m$. For each $j\in1{:}s$, the probability $C_j^* \in \mathcal{I}_m$ is given by $\prod_{\ell=1}^m(1 - 2^{-m+\ell-1})$ because there are $2^m - 2^{\ell-1}$ choices for the $\ell$-th row of $C_j^*$ to be linearly independent of previous rows. We notice this probability is monotonically decreasing in $m$ and
$$\lim_{m\to\infty}\prod_{\ell=1}^m(1 - 2^{-m+\ell-1}) = \prod_{\ell=1}^\infty(1 - 2^{-\ell}) \ge \exp\Big(-\sum_{\ell=1}^\infty\frac{2^{-\ell}}{1 - 2^{-\ell}}\Big) \ge \exp(-2),$$
where we have used $\log(1-x) \ge -x/(1-x)$ for $x \in (0,1)$. Let $\mathcal{C}_j^* = M_jC_j^*$ and
$$Z^*(k) = \mathbb{1}\Big\{\sum_{j=1}^s\sum_{\ell\in\kappa_j}\mathcal{C}_j^*(\ell,:) = 0 \bmod 2\Big\} = \mathbb{1}\Big\{\sum_{j=1}^s\Big(\sum_{\ell\in\kappa_j}M_j(\ell,:)\Big)C_j^* = 0 \bmod 2\Big\}. \quad (47)$$
When $\kappa_j \ne \emptyset$ and $\lceil\kappa_j\rceil \le m$, $\sum_{\ell\in\kappa_j}M_j(\ell,:) \ne 0$ and $\sum_{\ell\in\kappa_j}M_j(\ell,:)C_j^*$ follows a $U(\{0,1\}^m)$ distribution. Hence when $k \ne 0$ and $\lceil\kappa\rceil \le m$,
$$2^{-m} = \Pr(Z^*(k) = 1) \ge \Pr\big(C_j^* \in \mathcal{I}_m, j\in1{:}s\big)\Pr\big(Z^*(k) = 1 \mid C_j^* \in \mathcal{I}_m, j\in1{:}s\big).$$
Conditioned on $C_j^* \in \mathcal{I}_m$ for all $j\in1{:}s$, $Z(k)$ has the same distribution as $Z^*(k)$. Therefore
$$\Pr(Z(k) = 1) \le \frac{1}{\Pr(C_j^* \in \mathcal{I}_m, j\in1{:}s)}2^{-m} \le \exp(2s)2^{-m}. \quad (48)$$
Because $\Pr(Z(k) = 1) = \mathbb{E}[\Pr(Z(k) = 1 \mid C_j, j\in1{:}s)]$, Markov's inequality shows for each nonzero $k$
$$\Pr\Big(\Pr(Z(k) = 1 \mid C_j, j\in1{:}s) > 2^{-m+R}\Big) \le \exp(2s)2^{-R} \quad (49)$$
for $R \ge 0$. Next, we notice $\Pr(Z(k) = 1 \mid C_j, j\in1{:}s)$ varies with $k$ only through $\lceil\kappa_j\rceil$, $j\in1{:}s$. This is because
$$\sum_{j=1}^s\sum_{\ell\in\kappa_j}\mathcal{C}_j(\ell,:) = \sum_{j=1}^s\Big(\sum_{\ell\in\kappa_j}M_j(\ell,:)\Big)C_j$$
and $\sum_{\ell\in\kappa_j}M_j(\ell,:) \overset{d}{=} M_j(\lceil\kappa_j\rceil,:)$ due to the lower-triangular structure of $M_j$. When $\lceil\kappa\rceil \le m$, each $\lceil\kappa_j\rceil$ can take a value between $0$ and $m$ and there are at most $(m+1)^s$ combinations. A uniform bound over all combinations shows for $R = 3s\log_2(m+1)$
$$\Pr\Big(\sup_{k\ne0}\Pr(Z(k) = 1 \mid C_j, j\in1{:}s) > 2^{-m+R}\Big) \le (m+1)^s\exp(2s)2^{-R} = \frac{\exp(2s)}{(m+1)^{2s}}.$$
It follows from the definition of the $1$-way rank deficiency that $R_{m,1} \le R$ when $\sup_{k\ne0}\Pr(Z(k) = 1 \mid C_j, j\in1{:}s) \le 2^{-m+R}$, so we have proven equation (45).

The proof of equation (46) is similar. For $r \ge 2$, let $V = \{k_1, \dots, k_r\}$ have rank $r$ with $k_i = (k_{1,i}, \dots, k_{s,i})$. Suppose $\lceil\kappa_{j^*,i^*}\rceil > m$ for some $j^* \in 1{:}s$ and $i^* \in 1{:}r$. After an invertible linear transformation on $V$ if necessary, we can find $\ell \in \kappa_{j^*,1}$ such that $\ell > m$ and $\ell \notin \kappa_{j^*,i}$ for all $i \in 2{:}r$. Conditioned on all random entries of $C_j$, $j\in1{:}s$ and $M_j$, $j\in1{:}s$ other than $M_{j^*}(\ell,:)$, $Z(k_i)$ is nonrandom for $i\in2{:}r$ and $Z(k_1) = 1$ with probability $2^{-m}$ because $C_{j^*}$ is nonsingular and $\mathcal{C}_{j^*}(\ell,:) = M_{j^*}(\ell,:)C_{j^*}$ follows a $U(\{0,1\}^m)$ distribution. Hence
$$\Pr\big(Z(k) = 1 \text{ for all } k\in V\big) = 2^{-m}\Pr\big(Z(k_i) = 1,\, i\in2{:}r\big) \le 2^{-mr+R_{m,r-1}}. \quad (50)$$
Next suppose $\max_{i\in1:r}\lceil\kappa_i\rceil \le m$. After an invertible linear transformation on $V$ if necessary, we can find $j^* \in 1{:}s$ such that $\lceil\kappa_{j^*,1}\rceil > \lceil\kappa_{j^*,i}\rceil$ for all $i\in2{:}r$. Denote $\ell^* = \lceil\kappa_{j^*,1}\rceil$. As before, we let $C_j^*$, $j\in1{:}s$ be independently sampled from $U(\{0,1\}^{m\times m})$ and $Z^*(k)$ be given by equation (47) for $k\in V$.
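A fact used repeatedly in this proof is that right-multiplying a uniformly random row vector by a fixed nonsingular $\mathbb{F}_2$ matrix yields another uniform vector, since $v \mapsto vC$ is then a bijection of $\{0,1\}^m$. This can be checked by exhaustive enumeration for a small $m$; the sketch below uses a hand-picked nonsingular $3\times3$ matrix and is illustrative only, not code from the paper:

```python
from itertools import product

def row_times_matrix_f2(v, C):
    """Row vector v times matrix C over F2."""
    m = len(C[0])
    return tuple(sum(v[i] * C[i][j] for i in range(len(v))) % 2 for j in range(m))

# A nonsingular 3x3 matrix over F2 (unit lower-triangular, hence invertible).
C = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1]]

# v -> vC hits all 2^3 vectors exactly once, so a uniform v stays uniform.
images = {row_times_matrix_f2(v, C) for v in product((0, 1), repeat=3)}
assert len(images) == 2**3
print(len(images))  # 8
```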
Conditioned on all random entries of $M_j$, $j\in1{:}s$ and $C_j^*$, $j\in1{:}s$ other than $C_{j^*}^*(\ell^*,:)$, $Z^*(k_i)$ is nonrandom for $i\in2{:}r$ and $Z^*(k_1) = 1$ with probability $2^{-m}$ because $M_{j^*}$ equals $1$ on the diagonal and
$$M_{j^*}(\ell^*,:)C_{j^*}^* = C_{j^*}^*(\ell^*,:) + \sum_{\ell<\ell^*}M_{j^*}(\ell^*,\ell)C_{j^*}^*(\ell,:)$$
follows a $U(\{0,1\}^m)$ distribution. Hence
$$\Pr\big(Z^*(k) = 1 \text{ for all } k\in V\big) = 2^{-m}\Pr\big(Z^*(k_i) = 1,\, i\in2{:}r\big).$$
By inductively applying the preceding argument to $V' = \{k_2, \dots, k_r\}$, we get $\Pr(Z^*(k) = 1 \text{ for all } k\in V) = 2^{-mr}$. Then similar to equations (48) and (49), we can derive
$$\Pr\big(Z(k) = 1 \text{ for all } k\in V\big) \le \exp(2sr)2^{-mr}$$
and for $R \ge 0$
$$\Pr\Big(\Pr\big(Z(k) = 1 \text{ for all } k\in V \mid C_j, j\in1{:}s\big) > 2^{-mr+R}\Big) \le \exp(2sr)2^{-R}. \quad (51)$$
Finally, for each $j\in1{:}s$ and $u \subseteq 1{:}r$, we define
$$\kappa_{j,u} = \{\ell\in\mathbb{N} \mid \ell\in\kappa_{j,i} \text{ for } i\in u,\ \ell\notin\kappa_{j,i} \text{ for } i\in1{:}r\setminus u\}.$$
Notice that $\kappa_{j,u}\cap\kappa_{j,u'} = \emptyset$ if $u \ne u'$ and $\kappa_{j,u}$ for $u = \{i\}$ is not equal to $\kappa_{j,i}$. By the lower-triangular structure of $M_j$,
$$\sum_{\ell\in\kappa_{j,i}}M_j(\ell,:)C_j = \sum_{\substack{u\subseteq1:r\\ i\in u}}\Big(\sum_{\ell\in\kappa_{j,u}}M_j(\ell,:)\Big)C_j \overset{d}{=} \sum_{\substack{u\subseteq1:r\\ i\in u}}M_j(\lceil\kappa_{j,u}\rceil,:)C_j.$$
It follows that $\Pr(Z(k) = 1 \text{ for all } k\in V \mid C_j, j\in1{:}s)$ is equal to the probability that
$$\sum_{j=1}^s\sum_{\substack{u\subseteq1:r\\ i\in u}}M_j(\lceil\kappa_{j,u}\rceil,:)C_j = 0 \bmod 2 \quad\text{for all } i\in1{:}r.$$
There are $2^r - 1$ nonempty $u \subseteq 1{:}r$ and each $\lceil\kappa_{j,u}\rceil$ can take a value between $0$ and $m$ when $u \ne \emptyset$ and $\max_{i\in1:r}\lceil\kappa_i\rceil \le m$, providing at most $(m+1)^{(2^r-1)s}$ combinations of $\{\lceil\kappa_{j,u}\rceil, j\in1{:}s, u\subseteq1{:}r\}$. By equation (51) and a union bound over all combinations, the probability that
$$\sup_{\substack{V=(k_1,\dots,k_r)\in\mathcal{V}_r\\ \max_{i\in1:r}\lceil\kappa_i\rceil\le m}}\Pr\big(Z(k) = 1 \text{ for all } k\in V \mid C_j, j\in1{:}s\big) > 2^{-mr+R}$$
is bounded by $(m+1)^{s(2^r-1)}\exp(2sr)2^{-R}$, which equals $\exp(2sr)(m+1)^{-2sr}$ when $R = (2^r + 2r - 1)s\log_2(m+1)$. Equation (46) follows once we combine the above bound with equation (50).

Corollary 5. When $m \ge 3$, there exist generating matrices $C_j$, $j\in1{:}s$ such that the random linear scrambling has marginal order $1$ and satisfies $R_{m,r} \le (2^r + 2r - 1)s\log_2(m+1)$ for all $r\in\mathbb{N}$.

Proof. Let $C_j$, $j\in1{:}s$ be independently sampled from $U(\mathcal{I}_m)$. The marginal order is $1$ because every $C_j$ is nonsingular. By equations (45) and (46), a union bound over all $r\in\mathbb{N}$ gives
$$\Pr\big(R_{m,r} \le (2^r + 2r - 1)s\log_2(m+1) \text{ for all } r \ge 1\big) \ge 1 - \sum_{r=1}^\infty\frac{\exp(2sr)}{(m+1)^{2sr}},$$
which is positive because $(m+1)^{-2s}\exp(2s) < 2^{-s}$ when $m \ge 3$.

Now we are ready to generalize Theorem 1. Let
$$N_m = \sup\Big\{N\in\mathbb{N}_0 \;\Big|\; |Q_N| \le \frac{1}{2}\log_2(m)2^m\Big\}. \quad (52)$$
By equation (15), $N_m \sim \lambda m^2/s + m\log_2(m)/s + D''_s m\log_2\log_2(m)$ with $\lambda = 3(\log 2)^2/\pi^2$ and $D''_s$ a constant depending on $s$. We first prove a generalization of Lemma 4.

Lemma 10. Let $K_m \subseteq Q_{N_m}$ and $\liminf_{m\to\infty}|K_m|/|Q_{N_m}| > 0$. Under a randomization scheme with marginal order $d\in\mathbb{N}_0$ and $r$-way rank deficiency $R_{m,r}$ satisfying $\lim_{m\to\infty}R_{m,1}/m = \lim_{m\to\infty}R_{m,2}/m = 0$,
$$\lim_{m\to\infty}\Pr\Big(\frac{|K_m|}{2^{m+1}} \le \sum_{k\in K_m}Z(k) \le \frac{3|K_m|}{2^{m+1}}\Big) = 1.$$

Proof. Let $K_{m,d} = \{k\in K_m \mid \lceil\kappa\rceil \le dm\}$. Because $N_m \sim \lambda m^2/s$, we can find a constant $\tau$ depending on $d$ and $s$ such that $dm \le \tau\sqrt{N_m}$ for large enough $m$. Lemma 9 then implies
$$|K_{m,d}| = |Q_{N_m}|\frac{|K_{m,d}|}{|Q_{N_m}|} \le \frac{1}{2}\log_2(m)2^m A_{\tau,s}N_m^{1/4}\exp\big(-B_{\tau,s}\sqrt{N_m}\big).$$
Let $\mathcal{A} = \{Z(k) = 1 \text{ for any } k\in K_{m,d}\}$. By a union bound argument,
$$\Pr(\mathcal{A}) \le 2^{-m+R_{m,1}}|K_{m,d}| \le \frac{1}{2}\log_2(m)2^{R_{m,1}}A_{\tau,s}N_m^{1/4}\exp\big(-B_{\tau,s}\sqrt{N_m}\big),$$
which converges to $0$ since $\lim_{m\to\infty}R_{m,1}/m = 0$. Similarly, we define
$$K'_{m,d} = \{(k_1,k_2)\in K_m^2 \mid \kappa_{j,1}^{>dm} = \kappa_{j,2}^{>dm} \text{ for all } j\in1{:}s\}.$$
A similar argument using Lemma 9 shows $2^{-2m+R_{m,2}}|K'_{m,d}|$ converges to $0$. For $k_1\in K_m\setminus K_{m,d}$, we can find $\ell_1, j_1$ such that $\ell_1 > dm$, $\ell_1\in\kappa_{j_1}$. By the definition of marginal order, $\mathcal{C}_{j_1}(\ell_1,:)$ is independently drawn from $U(\{0,1\}^m)$, so $Z(k)$ is independent of $\mathcal{A}$ and $\Pr(Z(k) = 1 \mid \mathcal{A}^c) = 2^{-m}$. Furthermore, if $k_1, k_2\in K_m\setminus K_{m,d}$ and $(k_1,k_2)\notin K'_{m,d}$, we can find, after replacing $k_2$ by $k_1\oplus k_2$ if necessary, $\ell_1, \ell_2, j_1, j_2$ such that $\ell_1 > dm$, $\ell_1\in\kappa_{j_1,1}$, $\ell_1\notin\kappa_{j_1,2}$, $\ell_2 > dm$, $\ell_2\in\kappa_{j_2,2}$, $\ell_2\notin\kappa_{j_2,1}$. Because $\mathcal{C}_{j_1}(\ell_1,:)$ and $\mathcal{C}_{j_2}(\ell_2,:)$ are independently drawn from $U(\{0,1\}^m)$, $\{Z(k_1), Z(k_2), \mathcal{A}\}$ are jointly independent and the conditional covariance $\operatorname{Cov}(Z(k_1), Z(k_2)\mid\mathcal{A}^c) = 0$. Therefore,
$$\mathbb{E}\Big[\sum_{k\in K_m}Z(k)\,\Big|\,\mathcal{A}^c\Big] = \sum_{k\in K_m\setminus K_{m,d}}\Pr(Z(k) = 1\mid\mathcal{A}^c) = 2^{-m}\big(|K_m| - |K_{m,d}|\big)$$
and
$$\operatorname{Var}\Big(\sum_{k\in K_m}Z(k)\,\Big|\,\mathcal{A}^c\Big) \le \sum_{k\in K_m\setminus K_{m,d}}\operatorname{Var}(Z(k)\mid\mathcal{A}^c) + \sum_{(k_1,k_2)\in K'_{m,d}}\mathbb{E}[Z(k_1)Z(k_2)\mid\mathcal{A}^c]$$
$$\le 2^{-m}\big(|K_m| - |K_{m,d}|\big) + \frac{1}{\Pr(\mathcal{A}^c)}2^{-2m+R_{m,2}}|K'_{m,d}|.$$
Since $2^{-m}|K_m|\to\infty$, $2^{-m}|K_{m,d}|\to0$, $\Pr(\mathcal{A}^c)\to1$ and $2^{-2m+R_{m,2}}|K'_{m,d}|\to0$ as $m\to\infty$, our conclusion follows from Chebyshev's inequality.

Theorem 10. Suppose $f\in C^\infty([0,1]^s)$ satisfies the assumptions of Theorem 6. Then under a randomization scheme with marginal order $d\in\mathbb{N}_0$ and $r$-way rank deficiency $R_{m,r}$ satisfying $R_{m,r}\le(2^r+2r-1)s\log_2(m+1)$ for $r\in\mathbb{N}$,
$$\lim_{m\to\infty}\Pr(\hat\mu_\infty<\mu) + \frac12\Pr(\hat\mu_\infty=\mu) = \frac12.$$

Proof. Let $\mathrm{SUM}_1$, $\mathrm{SUM}_2$ and $\mathrm{SUM}_1'$ be defined by equations (8), (9) and (10) with $K_m = Q_{N_m}$ for $N_m$ defined by equation (52). We will follow the same three steps outlined in Section 3. In Step 1, equation (20) implies for $V = \{k\in Q_{N_m}\mid Z(k)=1\}$
$$d_{TV}(\mathrm{SUM}_1,\mathrm{SUM}_1') \le \Pr(V\in I_m) \le \Pr\Big(V\in I_m,\, |V|\le\frac34\log_2(m)\Big) + \Pr\Big(|V|>\frac34\log_2(m)\Big).$$
Lemma 10 with $K_m = Q_{N_m}$ implies $\Pr(|V|>(3/4)\log_2(m))$ converges to $0$. Next, similar to equation (21), we have for large enough $m$
$$\Pr\Big(V\in I_m,\, |V|\le\frac34\log_2(m)\Big) \le \sum_{r=2}^{\lfloor(3/4)\log_2(m)\rfloor}\ \sum_{W\in I^*_{m,r+1}}\Pr(W\subseteq V).$$
Because each $W\in I^*_{m,r+1}$ has full rank, the definition of $r$-way rank deficiency and Lemma 6 imply
$$\sum_{W\in I^*_{m,r+1}}\Pr(W\subseteq V) \le |I^*_{m,r+1}|2^{-mr+R_{m,r}} \le 2^{R_{m,r}}\frac{(A_sN_m^{1/4})^r}{(r+1)!}r^{-B_s\sqrt{N_m}}.$$
For $r\le(3/4)\log_2(m)$, $R_{m,r}\le(m^{3/4}+(3/2)\log_2(m)-1)s\log_2(m+1)$. Hence
$$\Pr\Big(V\in I_m,\, |V|\le\frac34\log_2(m)\Big) \le 2^{(m^{3/4}+(3/2)\log_2(m)-1)s\log_2(m+1)}\sum_{r=2}^{\lfloor(3/4)\log_2(m)\rfloor}\frac{(A_sN_m^{1/4})^r}{(r+1)!}r^{-B_s\sqrt{N_m}}$$
$$\le 2^{(m^{3/4}+(3/2)\log_2(m)-1)s\log_2(m+1)}\exp\big(A_sN_m^{1/4}\big)2^{-B_s\sqrt{N_m}},$$
which converges to $0$ as $m\to\infty$ since $N_m\sim\lambda m^2/s$.

The proof of Step 2 is essentially the same as before, except that the probability that $Z(k)=1$ for any $k\in\tilde Q$ is now bounded by $2^{R_{m,1}}m^{d_{s,\alpha}}\exp(-c'_s\epsilon_m^2m)$, where $\tilde Q, d_{s,\alpha}, c'_s, \epsilon_m$ are defined as in the proof of Theorem 4. In particular, we still have $\lim_{m\to\infty}\Pr(|\mathrm{SUM}_2|\ge T_m) = 0$ for $T_m$ defined by equation (30) when $R_{m,1}\le3s\log_2(m+1)$.

In Step 3, Theorem 6 shows equation (32) holds and $|Q_m(T_m)| > c\log_2(m)2^m$ for some $c>0$ when $m$ is large enough. Then we can apply Lemma 10 with $K_m = Q_m(T_m)$ to get $\lim_{m\to\infty}\Pr(|W|>(c/2)\log_2(m)) = 1$ for $W = V\cap Q_m(T_m)$. An argument similar to equation (33) completes the proof.

Corollary 6. Let $[\hat\mu_E^{(\ell)},\hat\mu_E^{(u)}]$ be the confidence interval from Corollary 3 with $E\ge N_m$ for $N_m$ defined in equation (52). Under the assumptions of Theorem 10,
$$\liminf_{m\to\infty}\Pr\big(\mu\in[\hat\mu_E^{(\ell)},\hat\mu_E^{(u)}]\big) \ge F(u-1) - F(\ell-1)$$
with $F(\nu)$ defined as in Corollary 3.

Proof. The proof is essentially the same as that of Corollary 3 except that equation (41) becomes
$$\Pr\big(|\mathrm{SUM}_{2,E}|\ge2T_m\big) \le 2^{R_{m,1}}m^{d_{s,\alpha}}\exp\big(-c'_sm\log_2(m)^2\big),$$
which still converges to $0$ when $R_{m,1}\le3s\log_2(m+1)$. Counterparts of Theorem 8 and Corollary 4 can also be established using a slightly modified proof.

Remark 8. Corollary 5 shows there exist generating matrices for which the random linear scrambling satisfies the assumptions of Theorem 10. In fact, the proof shows generating matrices randomly drawn from $U(\mathcal{I}_m)$ qualify with high probability. If further $R_{m,r}\le Cr\log_2(m+1)$ for some $C>0$, one can modify the proof and show the $\sqrt{m}$ convergence rate in equation (37) also holds. Whether there exist generating matrices achieving such bounds is an open question left for future research.
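Two constants from this section are easy to check numerically: the probability that a uniformly random $m\times m$ $\mathbb{F}_2$ matrix is nonsingular decreases to $\prod_{\ell\ge1}(1-2^{-\ell})\approx0.2888$, comfortably above the $\exp(-2)$ bound used in the proof of Theorem 9, and the condition $(m+1)^{-2s}\exp(2s)<2^{-s}$ from Corollary 5 indeed holds at $m=3$. An illustrative sketch:

```python
import math

# Limit of Pr(C* is nonsingular) = prod_{l>=1} (1 - 2^{-l}), truncated at l = 60.
p_nonsingular = 1.0
for l in range(1, 61):
    p_nonsingular *= 1.0 - 2.0**-l

print(f"prod (1 - 2^-l) ~= {p_nonsingular:.6f}")  # ~0.288788
assert p_nonsingular > math.exp(-2)  # the exp(-2) lower bound from Theorem 9's proof

# Corollary 5's requirement (m+1)^{-2s} * exp(2s) < 2^{-s} at m = 3, across several s.
m = 3
for s in range(1, 11):
    assert (m + 1) ** (-2 * s) * math.exp(2 * s) < 2.0 ** -s
```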
5 Numerical experiments

In this section, we validate our theoretical results on two highly skewed integrands and two types of randomization. For each integrand and each randomization, we first compute the probability that $\hat\mu_E$ is larger than $\mu$ and verify that this probability converges to $1/2$. The precision $E$ is chosen according to a small test run to make sure $2^{-E}$ is much smaller than the observed errors. Next, we generate our quantile intervals and the traditional $t$-intervals 1000 times each. Each confidence interval is constructed from $r=9$ independent replicates of $\hat\mu_E$. For the quantile interval, we choose $\ell=2$ and $u=8$ as described in Corollary 3. The predicted coverage level according to equation (39) is approximately $96.1\%$. The $t$-interval is $[\bar\mu - t\hat\sigma, \bar\mu + t\hat\sigma]$ for $\bar\mu$ the sample mean and $\hat\sigma$ the sample standard deviation of the 9 replicates of $\hat\mu_E$. We choose $t\approx2.46$ so that the predicted coverage level according to a $t$-distribution with 8 degrees of freedom equals that of the quantile interval. We report the 90th percentile of the 1000 interval lengths to compare the efficiency of the two constructions. We further estimate the coverage level by computing the proportion of intervals containing $\mu$. We call the coverage level too low if fewer than 950 intervals
out of the 1000 runs contain $\mu$ and too high if more than 970 intervals contain $\mu$. Below we use CRD as shorthand for the "complete random design" and RLS for the "random linear scrambling". The generating matrices for RLS come from [10].

Figure 1: Deviation of $\Pr(\hat\mu_E>\mu)$ from $1/2$ for $f(x)=x^{33}\exp(x)$.

5.1 One-dimensional example

We start with the one-dimensional integrand $f(x)=x^{33}\exp(x)$. The power 33 is chosen so that $\Pr(f(U)>\mu)\approx10\%$ for $U$ following a uniform $[0,1]$ distribution. In other words, $\Pr(\hat\mu_E>\mu)\approx10\%$ when we set $m=0$ and use one function evaluation to estimate $\mu$. Figure 1 records the deviation of the estimated $\Pr(\hat\mu_E>\mu)$ from $1/2$ across $m=1,\dots,12$. Each $\Pr(\hat\mu_E>\mu)$ is computed from $8\times10^6$ replicates with precision $E=64$. As expected, $\Pr(\hat\mu_E>\mu)$ converges to $1/2$ for both choices of randomization. Although our analysis only guarantees a very slow convergence under RLS, the empirical convergence rate outperforms that of CRD.

Figures 2 and 3 compare the 90th percentile interval lengths and empirical coverage levels, respectively. We observe that the quantile intervals have rapidly shrinking interval lengths while achieving the target coverage level for $m\ge5$. On the other hand, the $t$-intervals tend to be much wider due to the influence of outliers and their coverage levels become too high for $m\ge7$. The quantile intervals are therefore preferred over the $t$-intervals for constructing confidence intervals from $\hat\mu_E$.

5.2 Eight-dimensional example

Next, we investigate the impact of dimensionality using the eight-dimensional function $f(x)=\prod_{j=1}^8x_j\exp(x_j)$. We have $\Pr(f(U)>\mu)\approx12.2\%$ for $U$ following a uniform $[0,1]^8$ distribution. Figure 4 records the deviation of the estimated $\Pr(\hat\mu_E>\mu)$ from $1/2$ across $m=1,\dots,18$.
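For this integrand the exact mean is available: $\int_0^1 x e^x\,dx = [(x-1)e^x]_0^1 = 1$, so $\mu = 1^8 = 1$. The quoted $12.2\%$ can then be reproduced with a plain Monte Carlo estimate of $\Pr(f(U)>1)$; the sketch below is illustrative and is not the paper's experiment code:

```python
import math
import random

rng = random.Random(0)
n = 200_000
mu = 1.0  # each one-dimensional factor integrates to exactly 1, so mu = 1

hits = 0
for _ in range(n):
    x = [rng.random() for _ in range(8)]
    fx = math.prod(xj * math.exp(xj) for xj in x)
    hits += fx > mu
print(f"Pr(f(U) > mu) ~= {hits / n:.4f}")  # close to the 12.2% quoted above
```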
Each $\Pr(\hat\mu_E>\mu)$ is computed from $8\times10^4$ replicates with precision $E=32$. We observe that convergence of

Figure 2: 90th percentile interval lengths of quantile intervals and $t$-intervals for $f(x)=x^{33}\exp(x)$.

Figure 3: Coverage levels of quantile intervals and $t$-intervals for $f(x)=x^{33}\exp(x)$.

Figure 4: Deviation of $\Pr(\hat\mu_E>\mu)$ from $1/2$ for $f(x)=\prod_{j=1}^8x_j\exp(x_j)$.

Figure 5: 90th percentile interval lengths of quantile intervals and $t$-intervals for $f(x)=\prod_{j=1}^8x_j\exp(x_j)$.

Figure 6: Coverage levels of quantile intervals and $t$-intervals for $f(x)=\prod_{j=1}^8x_j\exp(x_j)$.

Histogram of $\hat\mu_E-\mu$ (density).
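For completeness, the $10\%$ figure quoted in Section 5.1 can be reproduced without sampling: $\mu = \int_0^1 x^{33}e^x\,dx = \sum_{k\ge0}\frac{1}{k!(34+k)}$, and since $f$ is strictly increasing on $[0,1]$, $\Pr(f(U)>\mu) = 1 - f^{-1}(\mu)$. An illustrative sketch (not the paper's code):

```python
import math

# mu = integral_0^1 x^33 e^x dx = sum_k 1 / (k! (34 + k)), via the e^x power series.
mu = sum(1.0 / (math.factorial(k) * (34 + k)) for k in range(30))

# f is strictly increasing on [0,1], so Pr(f(U) > mu) = 1 - x* with f(x*) = mu.
f = lambda x: x**33 * math.exp(x)
lo, hi = 0.0, 1.0
for _ in range(100):  # bisection for x*
    mid = (lo + hi) / 2
    if f(mid) < mu:
        lo = mid
    else:
        hi = mid

print(f"mu ~= {mu:.6f}, Pr(f(U) > mu) ~= {1 - lo:.4f}")  # about 0.10
```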