text string | source string |
|---|---|
that $\alpha(x)$ is increasing for $x > e^{5/2}/2$, decreasing for $x < e^{5/2}/2$, with $x_0 = e^{5/2}/2$ being the unique minimum. However, at $x_0 = e^{5/2}/2$, $\alpha(x_0) = \frac{5}{8} - \frac{5}{8}\ln 2 > 0$. Hence $\alpha(x)$ must be non-negative and the lemma is established. Lemma S.4.12. Let $B^2\ln(2p) \le n\sigma^2$ and consider the random variable $W$ such that $W \sim \mathrm{Bin}\big(n$... | https://arxiv.org/abs/2504.17885v1
$\sigma^2/M^2\big)^{\frac{q}{q-2}} = (M/K)^q$. This implies that if $M^q = K^{q-2}\sigma^2$, then for all $s \ge 2$, the above bound is $K^{s-q}M^q$. This moment bound is exactly the same as the one used in Bennett's inequality. Proof. Because $|Y_i| \le K$, we get that for $s \ge q$, $\frac{1}{n}\sum_{i=1}^n E[|Y_i|^s] \le K^{s-q}\,\frac{1}{n}\sum_{i=1}^n E[|Z_i|^q] \le K^{s-q}M^q$. Using the inequali... | https://arxiv.org/abs/2504.17885v1
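The truncated proof step in the row above compresses into one display; the following LaTeX is a transcription of that step under the row's stated assumptions ($|Y_i| \le K$ and $\frac{1}{n}\sum_i E[|Z_i|^q] \le M^q$), not an addition to the source:

```latex
\frac{1}{n}\sum_{i=1}^{n} E\left[|Y_i|^s\right]
  \;\le\; K^{s-q}\,\frac{1}{n}\sum_{i=1}^{n} E\left[|Z_i|^q\right]
  \;\le\; K^{s-q} M^{q}, \qquad s \ge q.
% With M^q = K^{q-2}\sigma^2 this reads K^{s-2}\sigma^2 for all s >= 2,
% the moment condition under which Bennett's inequality applies.
```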
Score-Based Deterministic Density Sampling. Vasily Ilin†, Peter Sushko‡, Jingwei Hu†. †University of Washington, ‡Allen Institute for AI. vilin@uw.edu. Abstract: We propose a deterministic sampling framework using Score-Based Transport Modeling for sampling an unnormalized target density $\pi$ given only its score $\nabla\log\pi$. Our method ... | https://arxiv.org/abs/2504.18130v2
kernel for analyzing a dynamically changing loss. Our main theoretical result is Theorem 4.6. In Section 3 we introduce the main algorithm, building on Score-Based Transport Modeling [1]. In Section 4 we prove that the resulting dynamics achieve the optimal convergence rate. Finally, in Section 5 we verify convergence i... | https://arxiv.org/abs/2504.18130v2
Additionally, SBTM was recently successfully applied to other Fokker-Planck-type equations [14, 21, 12]. The core idea of SBTM is to approximate $\nabla\log f_t$ with a neural network $s_{\Theta_t}$ trained on the samples $X^1_t, \dots, X^n_t$. The training is done by minimizing the score matching loss from [13, 25]. The SBTM dynamics are given by... | https://arxiv.org/abs/2504.18130v2
to $f_t$, and the neural network $s_t = s_{\Theta_t}$ is trained with gradient descent $\frac{d\Theta_t}{dt} = -\eta\nabla_{\Theta_t} L_n(s_t, f_t)$, where $L_n(s_t, f_t) := \frac{1}{n}\sum_{i=1}^n \|s(X^i_t) - \nabla\log f(X^i_t)\|^2$, and is such that the Neural Tangent Kernel (NTK) $H^{i,j}_{\alpha,\beta}(t) = \sum_{k=1}^N \nabla_{\theta_k} s^\alpha_t(X^i_t)\,\nabla_{\theta_k} s^\beta_t(X^j_t)$, with $s(x) = (s^1(x), \dots, s^d(x))$, is lower bounded by $\lambda = \lambda(t) > 0$, i.e. $\|Hv\|_2 \ge \lambda\|v\|_2$. Then a... | https://arxiv.org/abs/2504.18130v2
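As a hedged illustration of the displayed loss $L_n$ (a minimal sketch, not the authors' implementation; the tiny architecture, learning rate, and the choice $f_t = \mathcal{N}(0, I)$ with known score $\nabla\log f(x) = -x$ are ours):

```python
import torch

# Minimal sketch: L_n(s, f) = (1/n) * sum_i ||s(X_i) - grad log f(X_i)||^2,
# with f the standard Gaussian so that grad log f(x) = -x is known in closed form.
d, n = 2, 256
score_net = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.Tanh(), torch.nn.Linear(64, d)
)
opt = torch.optim.SGD(score_net.parameters(), lr=1e-2)  # plays the role of eta

X = torch.randn(n, d)   # samples X_t^1, ..., X_t^n from f_t = N(0, I)
target = -X             # grad log f_t at the samples

for step in range(200):
    loss = ((score_net(X) - target) ** 2).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```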
$\dots, X^n_t$ of $f_t$, since $\nabla\log f_t$ is unknown. Empirically, we confirm that (4.4) holds, indicating small loss; see Figure 5. 5 Experiments. We demonstrate the optimal rate of relative entropy dissipation in several experiments, including challenging non-log-concave targets. We compare SBTM to its stochastic counterpart given b... | https://arxiv.org/abs/2504.18130v2
but we can still compare the quality of the final sample as well as compare the trajectories of the SDE and SBTM. We use $f_0 = \mathcal{N}(0,1)$ here and in further experiments. Here we sample from the Gaussian mixture $\pi = \frac{1}{4}\mathcal{N}(-2,1) + \frac{3}{4}\mathcal{N}(2,1)$ with $n = 1000$. As evidenced by the change of slope at $t = 2.5$ in Figure 4, the underlying... | https://arxiv.org/abs/2504.18130v2
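For this mixture target the score $\nabla\log\pi$ is available in closed form; a minimal numpy sketch (our illustration, not the paper's code) with the weights and means from the row:

```python
import numpy as np

# Score of pi = 0.25*N(-2,1) + 0.75*N(2,1): a responsibility-weighted average of
# the component scores mu_k - x (unit variances).
weights, means = np.array([0.25, 0.75]), np.array([-2.0, 2.0])

def score(x):
    x = np.atleast_1d(x).astype(float)[:, None]           # shape (n, 1)
    phi = np.exp(-0.5 * (x - means) ** 2) / np.sqrt(2 * np.pi)
    post = weights * phi                                   # unnormalized responsibilities
    return (post * (means - x)).sum(axis=1) / post.sum(axis=1)

print(score([0.0, 2.0]))  # positive at x = 0: the heavier right mode pulls samples right
```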
tools and intuition from diffusion generative modeling to tackle the harder problem of sampling $\pi$ given only $\nabla\log\pi$ but not samples from $\pi$. Our method allows for deterministic sampling with smooth trajectories and a provably optimal rate of entropy dissipation. Additionally, access to the learned score $\nabla\log f_t$ allows for the ... | https://arxiv.org/abs/2504.18130v2
systems, 30, 2017. [19] Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. Advances in Neural Information Processing Systems, 29, 2016. [20] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPM-Solver: A fast ODE solver for diffusion pr... | https://arxiv.org/abs/2504.18130v2
$\int_{\mathbb{R}^d} f_t\,\partial_t\log f_t\,dx = -\int_{\mathbb{R}^d}\langle s_t - \nabla\log\pi_t,\ \nabla\log f_t - \nabla\log\pi_t\rangle f_t\,dx$, where we used integration by parts and that $\int_{\mathbb{R}^d} f_t\,\partial_t\log f_t\,dx$ is zero for the last equality. We can now specialize the above to prove 4.1 and 4.2. Proofs of 4.1 and 4.2. By choosing $\pi_t = \pi$ in the proof of Theorem 7.1, we obtain $\frac{d}{dt}\int_{\mathbb{R}^d} f_t\log\frac{f_t}{\pi}\,dx = -\int_{\mathbb{R}^d}\langle s_t - \nabla\log\pi,\ \nabla\log f_t -$... | https://arxiv.org/abs/2504.18130v2
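Restated in display form (a transcription of the specialized identity in the row above; the comment notes the exact-score case, which is our gloss):

```latex
\frac{d}{dt}\int_{\mathbb{R}^d} f_t \log\frac{f_t}{\pi}\,dx
  = -\int_{\mathbb{R}^d}
      \bigl\langle s_t - \nabla\log\pi,\ \nabla\log f_t - \nabla\log\pi \bigr\rangle
      f_t\,dx.
% If the score is learned exactly, s_t = \nabla\log f_t, the right-hand side becomes
% -\int \|\nabla\log(f_t/\pi)\|^2 f_t\,dx, i.e. minus the relative Fisher information.
```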
0.12; SVGD: 3.2, 3.7, 4.4, 4.6, 4.4. Table 6: Same setup as Table 4, but using dilation annealing.

| sample size N | 100 | 300 | 1000 | 3000 | 10000 |
|---|---|---|---|---|---|
| SBTM (ours) | 0.34 | 0.25 | 0.19 | 0.15 | 0.13 |
| SDE | 0.25 | 0.19 | 0.14 | 0.13 | 0.13 |
| SVGD | 0.96 | 1.4 | 1.1 | 1.3 | 1.2 |

Table 7: KL divergence (↓) between the sample and the target $\pi = \frac{1}{4}\mathcal{N}(-2,1) + \frac{3}{4}\mathcal{N}(2,1)$ for Gaussians... | https://arxiv.org/abs/2504.18130v2
your paper, and its final version will be published with the paper. The reviewers of your paper will be asked to use the checklist as one of the factors in their evaluation. While "[Yes]" is generally preferable to "[No]", it is perfectly acceptable to answer "[No]" provided a proper justification is given (e.g., "e... | https://arxiv.org/abs/2504.18130v2
what the implications would be. • The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. • The authors should reflect on the factors that influe... | https://arxiv.org/abs/2504.18130v2
code and data are provided or not. • If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. • Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, ... | https://arxiv.org/abs/2504.18130v2
raw data, preprocessed data, intermediate data, and generated data, etc. • The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. • At submission... | https://arxiv.org/abs/2504.18130v2
provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: We describe this at the beginning of Section 5. Guidelines: • The answer NA means that the paper does not include experiments. • The paper s... | https://arxiv.org/abs/2504.18130v2
monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). 11. Safeguards. Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, o... | https://arxiv.org/abs/2504.18130v2
paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: Our work does not involve crowdsourcing or research with human subjects. Guidelines: • The answer NA means that the paper does not involve crowdsou... | https://arxiv.org/abs/2504.18130v2
arXiv:2504.18139v1 [math.PR] 25 Apr 2025. Kalman-Langevin dynamics: exponential convergence, particle approximation and numerical approximation. Axel Ringh∗, Akash Sharma∗. Abstract: Langevin dynamics has found a large number of applications in sampling, optimization and estimation. Preconditioning the gradient in the dy... | https://arxiv.org/abs/2504.18139v1
and in weather forecasting [HM01]) for data assimilation and inverse problems. Inspired by the Kalman filter, an ensemble Kalman iterative procedure for solving inverse problems, called ensemble Kalman inversion (EKI), was proposed in [ILS13], and the corresponding continuous-time limit, which is an interacting syst... | https://arxiv.org/abs/2504.18139v1
filters [DMT18, DMH23]. (ii) We prove the convergence of an interacting particle system to the mean-field SDEs (1.3). In contrast to [DL21b, Vae24], we study a weak particle approximation that avoids the need to compute the square root of the empirical covariance matrix. We refer to this as a weak approximation because it diffe... | https://arxiv.org/abs/2504.18139v1
that the potential function $U$ has the following form: $U = \frac{1}{2}x^\top A x + V$, (2.1) where $A$ is a positive definite matrix with smallest eigenvalue $\ell_A$ and largest eigenvalue $L_A$, and $V$ is a $C^1(\mathbb{R}^d)$, Lipschitz continuous function with Lipschitz constant $L_V$. Remark 2.1. Note that the above assumption allows for non-convexity. One example satis... | https://arxiv.org/abs/2504.18139v1
explicitly track all the constants. Corollary 2.1. Let Assumption 2.1 hold. Then, for all $p > 0$, the $p$-th moment of the law of the mean-field SDEs (1.3) converges to the $p$-th moment of the Gibbs measure (1.1), i.e., $\int_{\mathbb{R}^d} |x|^p\,\mathcal{L}_{X_t}(dx) \to \int_{\mathbb{R}^d} |x|^p\,\mu(dx)$ (2.7) as $t\to\infty$. Proof. The proof directly follows fro... | https://arxiv.org/abs/2504.18139v1
$2\beta(L_A + 2\beta L_V^2) \ge \lambda^{\Sigma^{-1}}_{\max}(0)$, (2.17) then $\lambda^{\Sigma^{-1}}_{\max}(t) \le 2\beta\big(L_A + 2\beta L_V^2\big)$. However, if we choose the initial distribution $\mu_0$ such that $2\beta(L_A + 2\beta L_V^2) \le \lambda^{\Sigma^{-1}}_{\max}(0)$, (2.18) then $\lambda^{\Sigma^{-1}}_{\max}(t) \le \lambda^{\Sigma^{-1}}_{\max}(0)$. Therefore, we have $\lambda^{\Sigma^{-1}}_{\max}(t) \le \max\big(2\beta(L_A + 2\beta L_V^2),\ \lambda^{\Sigma^{-1}}_{\max}(0)\big)$. N... | https://arxiv.org/abs/2504.18139v1
$\Sigma_t y \le 2\sqrt{K_V}\,(y^\top\Sigma_t y)^{1/2}(y^\top\Sigma_t^2 y)^{1/2} \le \frac{\ell_A}{3}\,y^\top\Sigma_t^2 y + \frac{3K_V}{\ell_A}\,y^\top\Sigma_t y$. Hence, using the above inequality in (2.27), we get $\frac{d}{dt}\,y^\top\Sigma_t y \le -\frac{5}{3}\ell_A\,y^\top\Sigma_t^2 y + \frac{3K_V^2}{\ell_A}\,y^\top\Sigma_t y + \frac{2}{\beta}\,y^\top\Sigma_t y$. (2.29) Let us now analyze the term $y^\top\Sigma_t^2 y$ appearing in (2.27). We have already proven a uniform-in-time lower bound on the eigenvalues of $\Sigma_t$. We write the ei... | https://arxiv.org/abs/2504.18139v1
$\ge \frac{\ell_A^2}{4}|x|^2 - \frac{L_V^2}{\ell_A^2} + V(0)$, where we have used Young's inequality, i.e., $L_V|x| \le \ell_A^2|x|^2/4 + L_V^2/\ell_A^2$, in the last step. Hence, $|x|^2 \le \frac{4}{\ell_A^2}U(x) + \frac{4L_V^2}{\ell_A^4} - \frac{4}{\ell_A^2}V(0)$. This implies $|\nabla U(x)|^2 = |Ax + \nabla V(x)|^2 \le 2L_A^2|x|^2 + 2L_V^2 \le \frac{8L_A^2}{\ell_A^2}U(x) + \frac{8L_A^2 L_V^2}{\ell_A^4} - \frac{8L_A^2}{\ell_A^2}V(0) + 2L_V^2$. (2.41) Using (2.40) and (2.41), we have $dE\,U^l(X(t)) \le -l\,\ell_\Sigma\,\frac{\ell_A^2}{L_A}$... | https://arxiv.org/abs/2504.18139v1
exists a compact set $\mathcal{K}$ such that the following holds for some $K_1, K_2, K_3, K_4 > 0$: $K_1|x|^2 \le U(x) \le K_2|x|^2$ and $K_3|x| \le |\nabla U(x)| \le K_4|x|$, for all $x\in\mathcal{K}^c$. We also assume that all second derivatives of $U$ are uniformly bounded in $x$. In [Vae24], the well-posedness of (1.3a) as well as (3.4) is proved under Assumption 3.1. In the proof of well-pos... | https://arxiv.org/abs/2504.18139v1
have $E\Big|\int_{\tau_j}^{\tau_j+\delta}\Sigma(E^N(t))\nabla U(X^{1,N}(t))\,dt\Big|^2 \le \delta\,E\Big(\int_{\tau_j}^{\tau_j+\delta}|\Sigma(E^N(t))\nabla U(X^{1,N}(t))|^2\,dt\Big) \le \delta \sup_{t\in[0,T]} E|\Sigma(E^N(t))\nabla U(X^{1,N}(t))|^2$. (3.12) Considering the second term, fr... | https://arxiv.org/abs/2504.18139v1
chain as $(X^{i,N}_k)_{k=0}^n$ starting from $X^i_0$ for all $i = 1,\dots,N$. We also denote $\beta_k := \beta(t_k)$ and $\Sigma^N_k := \Sigma(E^N_k)$ where $E^N_k = \frac{1}{N}\sum_{i=1}^N \delta_{X^{i,N}_k}$, (4.1) and therefore $\Sigma(E^N_k) = \frac{1}{N}Q^N_k(Q^N_k)^\top$ (4.2) with $Q^N_k = [X^{1,N}_k - M_k, \dots, X^{N,N}_k - M_k]$, $M_k = \frac{1}{N}\sum_{i=1}^N X^{i,N}_k$. As is known in the stochastic numerics literature (see [... | https://arxiv.org/abs/2504.18139v1
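A minimal numpy sketch of the empirical covariance in (4.1)-(4.2) (variable names ours; the ensemble is stored as an $N\times d$ array):

```python
import numpy as np

def empirical_covariance(ensemble):
    """Sigma(E^N) = (1/N) Q Q^T, Q = [X^1 - M, ..., X^N - M], M the ensemble mean."""
    N = ensemble.shape[0]
    Q = (ensemble - ensemble.mean(axis=0)).T   # d x N matrix of centered particles
    return Q @ Q.T / N

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                    # 5 particles in R^3
assert np.allclose(empirical_covariance(X), np.cov(X.T, bias=True))
```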
The main result of this section is the following theorem. Theorem 4.1. Let Assumption 4.1 hold. Let $\sup_{1\le i\le N} E|X^{i,N}(0)|^2 < \infty$, $\sup_{1\le i\le N} E|Y^{i,N}(0)|^2 < \infty$ and $|X^{i,N}(0) - Y^{i,N}(0)| \le Ch^{1/2}$ with $C > 0$ being independent of $N$ and $h$. Then, for all $t\in[0,T]$, $\lim_{h\to 0}\sup_{i=1,\dots,N}\big(E|X^{i,N}(t) - Y^{i,N}(t)|^2\big)^{1/2} = 0$. (4.12) The first step towards establishing ... | https://arxiv.org/abs/2504.18139v1
Hessian of the potential function is bounded due to Assumption 3.1. Substituting (4.26), (4.27) and (4.28) in (4.15), we obtain $U^p(Y^{i,N}(t)) \le U^p(Y^{i,N}(0)) + Ch^{1-2\alpha}\int_0^t U^{p-1}(Y^{i,N}(s))\,ds + Ch^{1-\alpha}\int_0^t U^{p-1}(Y^{i,N}(s))\Big(1 + \frac{1}{N}\sum_{i=1}^N U(Y^{i,N}(\chi_h(s)))\Big)\,ds + p\int$... | https://arxiv.org/abs/2504.18139v1
, (4.36) $\tau^2_R = \inf\Big\{t \ge 0\,;\ \frac{1}{N}\sum_{i=1}^N |Y^{i,N}(t)|^2 \ge R\Big\}$, (4.37) and $\tau_R = \tau^1_R\wedge\tau^2_R$. Lemma 4.4. Let Assumption 3.1 hold. Then the following inequality is satisfied: $E\int_0^{t\wedge\tau_R} |\Sigma(E^N(s))\nabla U(X^{i,N}(s)) + F(Y^{i,N}(\chi_h(s)))|^2\,ds \le Ch^{2\alpha} + C(1+R)^2\,E\int_0^t \big(|X^{i,N}(s\wedge\tau_R)$... | https://arxiv.org/abs/2504.18139v1
we have $|X^{i,N}(t\wedge\tau_R) - Y^{i,N}(t\wedge\tau_R)|^2 = |X^{i,N}(0) - Y^{i,N}(0)|^2 - 2\int_0^{t\wedge\tau_R}\big\langle X^{i,N}(s) - Y^{i,N}(s),\ \Sigma(E^N(s))\nabla U(X^{i,N}(s)) + F(Y^{i,N}(\chi_h(s)))\big\rangle\,ds + \int_0^{t\wedge\tau_R}\frac{2}{\beta N}\,\mathrm{trace}\big([Q^N(s) - Q^N_Y(\chi_h(s))][Q^N(s) - Q^N_Y(\chi_h(s))]^\top\big)\,ds + 2\int_0^{t\wedge\tau_R}\sqrt{\frac{2}{\beta N}}\,\big\langle$... | https://arxiv.org/abs/2504.18139v1
Stuart, and X. T. Tong. Tikhonov regularization within ensemble Kalman inversion. SIAM Journal on Numerical Analysis, 58(2):1263–1294, 2020. [CV21] J. A. Carrillo and U. Vaes. Wasserstein stability estimates for covariance-preconditioned Fokker–Planck equations. Nonlinearity, 34(4):2275, 2021. [DL21a] Z. Ding and Q.... | https://arxiv.org/abs/2504.18139v1
A. Stuart. Ensemble Kalman inversion: a derivative-free technique for machine learning tasks. Inverse Problems, 35(9):095005, 2019. [KST23] D. Kalise, A. Sharma, and M. V. Tretyakov. Consensus-based optimization via jump-diffusion stochastic differential equations. Mathematical Models and Methods in Applied Sciences, 33(02)... | https://arxiv.org/abs/2504.18139v1
arXiv:2504.18769v1 [math.ST] 26 Apr 2025. A Simplified Condition For Quantile Regression. Liang Peng∗ and Yongcheng Qi†. Abstract: Quantile regression is effective in modeling and inferring the conditional quantile given some predictors and has become popular in risk management due to wide applications of quantile-based risk... | https://arxiv.org/abs/2504.18769v1
for some unified tests regardless of predictors being stationary, near unit root, or unit root. The third example is systemic risk, which has been a major concern in the financial industry and insurance business since the 2008 global financial crisis. A popular systemic risk measure is the so-called CoVaR, defined as the... | https://arxiv.org/abs/2504.18769v1
Main Results. In this section, we will prove three technical lemmas that are of independent interest in probability theory. Then, we will prove three theorems that can be used to answer the above three questions. Throughout, we will use either the characteristic function or the moment generating function, as they determine ... | https://arxiv.org/abs/2504.18769v1
To apply Lemma 1 with $S = \ln(Z)$, $U = \ln(U_0)$ and $V = \ln(U_1)$, we need to verify conditions (a) and (b). Clearly, condition (A) in Lemma 2 implies condition (a) in Lemma 1. Now assume condition (B) holds with $j = 0$ or $1$. Then we have $E(U_j^\delta) = \frac{1}{p}E\big(Y_j^\delta I(Y_j > 0)\big) \le \frac{1}{p}E\big(|Y_j|^\delta\big) <$... | https://arxiv.org/abs/2504.18769v1
the existence of the moment generating function of another random variable to ensure the equality of the distribution functions of the two variables; see condition (b) in Lemma 1 and condition (B) in Lemma 2. Meanwhile, the assumptions on the moment generating functions in Lemmas 1-3 cannot be weakened to the finitene... | https://arxiv.org/abs/2504.18769v1
Theorem 2. Assume the random variable $Z$ is independent of the random vector $(X,Y)$, $P(Z > 0) = 1$, and $q\in(0,1)$ is a constant. Then, $P(X\le 0\mid YZ) = q$ if and only if $P(X\le 0\mid Y) = q$ under one of the following two conditions: (C3): The characteristic function of $\ln Z$, $\varphi_{\ln Z}(t) = E\big(e^{it\ln Z}\big)$, is non-zero in a dense subset of ... | https://arxiv.org/abs/2504.18769v1
3 concludes that $Y_0$ and $Y_1$ are identically distributed, that is, $F_0 = F_1$, i.e., the theorem holds. References: [1] T. Adrian and M. K. Brunnermeier (2016). CoVaR. American Economic Review 106, 1705-1741. [2] P. Billingsley (1995). Probability and Measure. 3rd Edition. New York: Wiley. [3] A. Capponi and A. Rubtsov (2022). S... | https://arxiv.org/abs/2504.18769v1
INVERSE PROBLEMS OVER PROBABILITY MEASURE SPACE∗. QIN LI†, MARIA OPREA‡, LI WANG§, AND YUNAN YANG¶. Abstract. Define a forward problem as $\rho_y = G_\#\rho_x$, where the probability distribution $\rho_x$ is mapped to another distribution $\rho_y$ using the forward operator $G$. In this work, we investigate the corresponding inverse problem: Given $\rho_y$,... | https://arxiv.org/abs/2504.18999v1
through grant DMS-2409855, Office of Naval Research through grant N00014-24-1-2088, and Cornell PCCW Affinito-Stewart Grant. Q.L. was supported in part by NSF grant DMS-2308440. L.W. was supported in part by NSF grant DMS-1846854 and the Simons Fellowship. †Department of Mathematics, University of Wisconsin-Madison, Ma... | https://arxiv.org/abs/2504.18999v1 |
infinitely many solutions, we need to pick one that makes the most physical sense. To do so, we follow in the footsteps of the Euclidean-space setting and solve the following constrained optimization problem: (1.5) $\rho^*_x = \arg\min_{\rho_x\in S} E[\rho_x]$. The choice of $E$ depends on the specific problem and the user's objectives. As written, (1.5) loo... | https://arxiv.org/abs/2504.18999v1
optical communication, the task is to recover the distribution of the optical environment [5, 24, 4]. Other problems include those arising in aerodynamics [14], biology [13, 12, 29], and cryo-EM [19, 29]. In all these problems, the sought-after quantity is a probability distribution, density, or measure that matches th... | https://arxiv.org/abs/2504.18999v1 |
the regularized formulation, with Section 4.1 and Section 4.2, respectively, dedicated to the entropy-entropy pair and the Wasserstein-moment pair. In each section, we end with a subsection documenting the results applied to two toy examples, one linear and one nonlinear, both of which have explicit solutions. These examples ... | https://arxiv.org/abs/2504.18999v1
$\mathcal{R}$. By applying the Measure Disintegration Theorem to $\rho_y$ based on the map $T$, we obtain a discrete probability measure $\nu$, with $\nu(1) = \rho_y(\mathcal{R})$, $\nu(0) = \rho_y(\mathbb{R}^n\setminus\mathcal{R})$, and the following disintegration of $\rho_y$: (2.2) $\rho_y = \nu(1)\,\rho^c_{y|\mathcal{R}} + \nu(0)\,\rho^c_{y|\mathbb{R}^n\setminus\mathcal{R}}$, where $\rho^c_{y|\mathcal{R}}$ and $\rho^c_{y|\mathbb{R}^n\setminus\mathcal{R}}$ are the conditiona... | https://arxiv.org/abs/2504.18999v1
of the $W_p$ distance, we have $W_p(G_\#\rho_x, \rho_y)^p = \int d(\tilde y, y)^p\,d\pi(\tilde y, y)$, (2.7) where $\pi\in\Gamma(G_\#\rho_x, \rho_y)$ is the optimal coupling between $G_\#\rho_x$ and $\rho_y$. From Definition 2.3, we can deduce that $\int d(\tilde y, y)^p\,d\pi(\tilde y, y) \ge \int d(P_G(y), y)^p\,d\pi(\tilde y, y) = \int d(P_G(y), y)^p\,d\rho_y(y) = \int d(z, y)^p\,d\gamma(z, y)$, where $\gamma = \widetilde{P}_{G\#}\rho_y$ with $\widetilde{P}_G = (P_G, \mathrm{Id})$, and this is $\ge W_p(P_{G\#}\rho_y, \rho_y)^p$, (2.8) where I... | https://arxiv.org/abs/2504.18999v1
data points sitting on the ring $\delta(y_1^2 + y_2^2 - 4)$ are projected down to the ring $\delta(y_1^2 + y_2^2 - 1)$, within the range. See Figure 1c for the plot of $\rho^*_y$. 3. The Characterization of Solutions to Problem (1.5). In this section, we discuss the underdetermined situation. This is the situation where there is more than one ele... | https://arxiv.org/abs/2504.18999v1
to be satisfied, i.e., $G_\#\rho^*_x = \rho_y$. Consider an arbitrary test function $f\in C_c^\infty(\mathbb{R}^n)$. Then the constraint can be rewritten as: $\int f(y)\rho_y(y)\,dy = \int f(y)\,d(G_\#\rho^*_x)(y) = \int f(G(x))\,\rho^*_x\,dx$ where $\rho^*_x = g(G(x))$, so this equals $\int f(G(x))\,g(G(x))\,dx = \int f(y)\,g(y)\int\delta(y - G(x))\,dx$. Since the above holds for any test function $f$, we conclude that $g(y)\int\delta(y - G$... | https://arxiv.org/abs/2504.18999v1
distribution $\rho^*_x$ conditioned on the preimage $A^{-1}(y)$, for a fixed $y\in\mathrm{supp}(\rho_y)$, is a single Dirac delta measure supported at the minimum-norm solution. Invoking (3.6), we have $\rho^*_x = H_\#\rho_y$. This example was already shown in [26, Theorem 4.7]. 3.3.2. A simple nonlinear example. A nonlinear example that can be made explicit i... | https://arxiv.org/abs/2504.18999v1
a constant depending only on $y$, while $\rho^*_x(\cdot\,|\,G^{-1}(y))$ and $\mathcal{M}(\cdot\,|\,G^{-1}(y))$ denote the conditional distributions of $\rho^*_x$ and $\mathcal{M}$ on the preimage $G^{-1}(y)$, respectively. In particular: $\rho^*_x(x) \propto \mathcal{M}(x)\left(\frac{\rho_y(G(x))}{\int\delta(G(x) - G(x'))\,\mathcal{M}(x')\,dx'}\right)^{\frac{1}{1+\alpha}}$. (4.3) Proof. Since the KL divergence is convex (in the usual sense) and the pushforward action is a... | https://arxiv.org/abs/2504.18999v1
higher-dimensional space than its domain. As a result, Theorem 2.4 directly applies, which allows us to write down an explicit form of the solution to problem (4.7) as below. Proof of Theorem 4.3. In this overdetermined system, according to Theorem 2.4, we simply need to find the function that solves: $F_{\mathrm{ex}}(y^*) = \arg\min$... | https://arxiv.org/abs/2504.18999v1
then $\rho^{\mathrm{true}}_x$ is the solution to problem (2.4) with the reference data distribution $\rho^{\mathrm{true}}_y$, and $\rho^{\delta,\alpha}_x$ the solution to problem (4.7) using regularization coefficient $\alpha$ on the reference data distribution $\rho^\delta_y$. Then we have that $W_2(\rho^{\mathrm{true}}_x, \rho^{\delta,\alpha}_x) \le \|(A^\top A + \alpha I)^{-1}A^\top\|_2\,W_2(\rho^{\mathrm{true}}_y, \rho^\delta_y) + \|(A^\top A + \alpha I)^{-1}A^\top - A^\dagger\|_2\,\sqrt{E_{\rho^{\mathrm{true}}_y}[y^2]}$. (4.14) Set $\sigma_m = \sigma$... | https://arxiv.org/abs/2504.18999v1
$\int\delta(G(x_1,x_2) - G(x'_1,x'_2))\,\mathcal{M}(x'_1,x'_2)\,dx'_1\,dx'_2 = \int\delta(r - r')\,\mathcal{M}(1 + r'\cos\theta,\,1 + r'\sin\theta)\,r'\,dr'\,d\theta = r\,C_{\mathcal{M}}\int_0^{2\pi}\exp\Big(-\tfrac{1}{2}\big((r\cos\theta + 1)^2 + (r\sin\theta + 1)^2\big)\Big)\,d\theta = r\,C_{\mathcal{M}}\exp\big(-\tfrac{1}{2}(2 + r^2)\big)\int_0^{2\pi}\exp(-r\cos\theta - r\sin\theta)\,d\theta = 2\pi r\,C_{\mathcal{M}}\exp\big(-\tfrac{1}{2}(2 + r^2)\big)\,I_0(\sqrt{2}\,r)$, where $I_0$ is the Bessel function. This leads to the final result of $\rho^*(x_1, x_2) \propto \exp\big(-(x_1^2 + x_2^2$... | https://arxiv.org/abs/2504.18999v1
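A quick numerical check (ours, not from the paper) of the Bessel identity used in the last step, $\int_0^{2\pi} e^{-r\cos\theta - r\sin\theta}\,d\theta = 2\pi I_0(\sqrt{2}\,r)$:

```python
import numpy as np
from scipy.special import i0

# Trapezoid rule over a full period is spectrally accurate for smooth integrands.
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
for r in (0.5, 1.0, 2.0):
    lhs = np.trapz(np.exp(-r * np.cos(theta) - r * np.sin(theta)), theta)
    rhs = 2.0 * np.pi * i0(np.sqrt(2.0) * r)
    assert abs(lhs - rhs) < 1e-8 * rhs
```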
Prof. Youssef Marzouk and Prof. Lénaïc Chizat for comments and suggestions on an earlier version of this work. REFERENCES [1] L. Ambrosio, N. Gigli, and G. Savaré, Gradient flows: in metric spaces and in the space of probability measures, Springer Science & Business Media, 2005. [2] R. Baptista, B. Hosseini, N. Kov... | https://arxiv.org/abs/2504.18999v1
propagation, SIAM/ASA Journal on Uncertainty Quantification, 10 (2022), pp. 915–948. [18] J. Ferron, M. Walker, L. Lao, H. S. John, D. Humphreys, and J. Leuer, Real time equilibrium reconstruction for tokamak discharge control, Nuclear Fusion, 38 (1998), p. 1055. [19] J. Giraldo-Barreto, S. Ortiz, E. H. Thiede, K. Pa... | https://arxiv.org/abs/2504.18999v1
arXiv:2504.19089v1 [math.ST] 27 Apr 2025. Semiparametric M-estimation with overparameterized neural networks. Shunxing Yan∗, Ziyuan Chen† and Fang Yao‡. School of Mathematical Sciences, Center for Statistical Science, Peking University, Beijing, China. Abstract: We focus on semiparametric regression that has played a centra... | https://arxiv.org/abs/2504.19089v1
finite-dimensional parameters and minimax optimal convergence of the nonparametric parts can be established. However, most existing works based on traditional techniques often face challenges when applied to complex structured data increasingly encountered in modern applications. Consequently, there is growing inte... | https://arxiv.org/abs/2504.19089v1
with random initialization have been shown, with high probability, to achieve global minimization through gradient-based methods (Du et al., 2018; Li and Liang, 2018; Allen-Zhu et al., 2019). In terms of statistical generalization performance, Hu et al. (2021) and Suh et al. (2021) established the convergence rate of ... | https://arxiv.org/abs/2504.19089v1
nuisance parameter, i.e. it contains score functions $\partial_f l_{\beta,f}[h]$ with all appropriate directions $h$; see Section 3 for a rigorous definition. Additionally, denote $\Pi_{\theta,f}$ as the orthogonal projection operator onto the closure of the linear span of $\Lambda_{\beta,f}$. Then the efficient score can be constructed from $\partial_\beta l_{\beta,f}$ by subtracting its project... | https://arxiv.org/abs/2504.19089v1
approximation ability for possible $\tilde h$. As discussed above, whether the efficient score is sufficiently small to establish the $\sqrt{n}$-consistency of the parametric component of the DNN estimator remains an open problem. Nonetheless, the counterexample presented above is so extreme that it is reasonable to presume that it has... | https://arxiv.org/abs/2504.19089v1
achieves $\sqrt{n}$-consistency and asymptotic normality. The latter result demonstrates the efficiency for the least favorable submodels and enables interpretable inference for parameters of interest. Lastly, we discuss two commonly used models: partially linear regression and classification. In these examples, the aforemention... | https://arxiv.org/abs/2504.19089v1
while the other biases $b_i$ for $i \ge 1$ are initialized to zero. Denoting the initial parameters by $\theta_0$, we define $f_\theta(x) = \tilde f_\theta(x) - \tilde f_{\theta_0}(x)$ to ensure that $f_\theta$ is initially zero in the training, i.e. $f_{\theta_0}(x) \equiv 0$, for theoretical convenience. In the analysis of overparameterized neural networks, the neural tangent kernel (NTK) (Jaco... | https://arxiv.org/abs/2504.19089v1
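As a hedged illustration of the empirical NTK of a scalar-output network (a minimal sketch with a hypothetical architecture, not the paper's construction), the Gram matrix $K_{ij} = \langle\nabla_\theta f_\theta(x_i), \nabla_\theta f_\theta(x_j)\rangle$ can be assembled from per-sample parameter gradients:

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1))
X = torch.linspace(-1.0, 1.0, 5).unsqueeze(1)

grads = []
for xi in X:
    net.zero_grad()
    net(xi).sum().backward()                   # gradient of the scalar output at x_i
    grads.append(torch.cat([p.grad.flatten() for p in net.parameters()]))
J = torch.stack(grads)                         # n x (#parameters) Jacobian
K = J @ J.T                                    # empirical NTK Gram matrix
print(torch.linalg.eigvalsh(K).min())          # smallest eigenvalue, cf. the NTK lower bound
```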
compact sets with regular boundaries, and their densities are bounded away from zero and infinity. Moreover, we assume that $E[Y^2|Z,X]$ is finite. Consider a nonnegative loss function $l_{\beta,f} = l(Z^T\beta, f(X), Y)$ such that the true parameters $(\beta_0, f_0)$ minimize the risk $P l_{\beta,f} = \int l(Z^T\beta, f(X), Y)\,dP_{\beta,f}$. Common choices for $l$ include ... | https://arxiv.org/abs/2504.19089v1
penalty term leads to the following regularized optimization criterion: $\tilde L_n(\beta, f) = P_n l_{\beta,f} + \lambda_n\tilde J(f)$, (7) where $P_n l_{\beta,f}$ still represents the empirical risk term and $\lambda_n\tilde J(f)$ is the penalty with tuning parameter $\lambda_n$. Consequently, within the RKHS, a gradient flow training process $\{\tilde\beta_t, \tilde f_t\}$ with initial value $\tilde\theta_0 = 0$ and $\tilde f_0(x) = 0$, $\forall x\in\Omega$, is adopted ... | https://arxiv.org/abs/2504.19089v1
to the training time, which is used to bound the accumulated dynamic differences between the two training processes. This factor accounts for the cost of accommodating general, unstructured loss functions, but can be omitted when considering the least squares loss as a special case. 3 Statistical theory of algorithmic ... | https://arxiv.org/abs/2504.19089v1
but such assumptions may not be general enough. In the proposed Huberized condition, ignoring the finite-dimensional parameter $\beta$ for convenience, when $\|f - f_0\|_{L^\infty} \lesssim 1$, the right-hand side is nearly $\|f - f_0\|^2_{L^2(X)}$; while when $\|f - f_0\|_{L^\infty} \gtrsim 1$, the right-hand side is even smaller th... | https://arxiv.org/abs/2504.19089v1
the loss is a negative log-likelihood, the efficient score is essentially the projection of the score function $S_1$ onto the orthogonal complement of the tangent set $\Lambda_{\beta,f}$. Denote $\Pi_{\theta,f}$ as the orthogonal projection onto the closure of the linear span of $\Lambda_{\beta,f}$. The efficient score function for $\beta$ at $(\beta_0, f_0)$ is then given by $\tilde S(\beta_0, f_0$... | https://arxiv.org/abs/2504.19089v1
of the information across all parametric submodels. The submodel that attains this supremum is typically called the least favorable or hardest submodel. By the above result, when the loss function is the negative log-likelihood and the model class contains the hardest submodel, the proposed semiparametric estimator ... | https://arxiv.org/abs/2504.19089v1
probability at least $1 - \xi$ over neural network random initialization, we have $\|\hat f_{t_s} - f_0\|^2_{L^2} = O_p\big(n^{-(d+1)/(2d+1)}\log n\big)$. If Assumption 4 is replaced by Assumption 4′, setting $\lambda \asymp n^{-(d+1)/(2s+d)}$, the nonparametric rate becomes $\|\hat f_{t_s} - f_0\|^2_{L^2} = O_p\big(n^{-2s/(2s+d)}\log n$... | https://arxiv.org/abs/2504.19089v1
as $\sqrt{n}$-consistency, with high probability. This performance is attributed to the representational capacity of overparameterized deep neural networks and their tangent space. When the model is correctly specified, i.e. the loss function corresponds to the negative log-likelihood, the resulting parametric estimator attai... | https://arxiv.org/abs/2504.19089v1
from the Bernoulli distribution with probability $P(Y_i = 1) = \varphi\big(f_0(X_i) + Z_i^T\beta_0 - E_X[f_0(X)]\big)$, where $\varphi(x)$ is the logistic function $\varphi(x) = 1/(1 + e^{-x})$. Simulations are performed for three different sample sizes $n = 500, 1000, 2000$, with 200 repetitions for each case and method. The mean squared error (MSE) of the parametric and nonpara... | https://arxiv.org/abs/2504.19089v1
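A minimal numpy sketch of this data-generating step; the excerpt does not specify $f_0$, $\beta_0$, or the covariate laws, so the concrete choices below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
beta0 = np.array([1.0, -0.5])            # hypothetical beta_0
f0 = np.sin                              # hypothetical nonparametric component f_0

X = rng.uniform(-np.pi, np.pi, size=n)   # hypothetical covariate law
Z = rng.normal(size=(n, 2))
# f0(X_i) + Z_i^T beta_0 - E_X[f0(X)]; the centering term is 0 exactly for this f0 and X,
# and is approximated here by the sample mean.
eta = f0(X) + Z @ beta0 - f0(X).mean()
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))   # logistic link phi
```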
Empirical minimization. Probability Theory and Related Fields, 135(3):311–334. Bauer, B. and Kohler, M. (2019). On deep learning as a remedy for the curse of dimensionality in nonparametric regression. The Annals of Statistics, 47(4):2261–2285. Bhattacharya, S., Fan, J., and Mukherjee, D. (2023). Deep neural networ... | https://arxiv.org/abs/2504.19089v1
A., Gabriel, F., and Hongler, C. (2018). Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31. Jiao, Y., Shen, G., Lin, Y., and Huang, J. (2021). Deep nonparametric regression on approximate manifolds: Non-asymptotic error bounds with polyn... | https://arxiv.org/abs/2504.19089v1
g deep neural networks with ReLU activation function. The Annals of Statistics, 48(4):1875–1897. Shen, Z. (2020). Deep network approximation characterized by number of neurons. Communications in Computational Physics, 28(5):1768–1811. Shen, Z., Yang, H., and Zhang, S. (2022). Optimal approximation rate of ReLU n... | https://arxiv.org/abs/2504.19089v1
and Wang, J.-L. (2021). Deep extended hazard models for survival analysis. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W., editors, Advances in Neural Information Processing Systems, volume 34, pages 15111–15124. Curran Associates, Inc. Table 1: The mean square error ($\times 10^{-1}$) for $\hat\beta$ and $\hat f$... | https://arxiv.org/abs/2504.19089v1
Quasi-Monte Carlo confidence intervals using quantiles of randomized nets. Zexin Pan∗. Abstract: Recent advances in quasi-Monte Carlo integration have demonstrated that the median trick significantly enhances the convergence rate of linearly scrambled digital net estimators. In this work, we leverage the quantiles of su... | https://arxiv.org/abs/2504.19138v1
handle non-normal errors, but still requires reliable variance estimation and remains non-adaptive: stronger smoothness assumptions on the integrand do not improve the convergence rate. Specialized methods like higher order scrambled digital nets [3] attain optimal rates under explicit smoothness priors and enable em... | https://arxiv.org/abs/2504.19138v1
$\vec{x}_{ij} = C_j\vec{i} \bmod 2$ for $i\in\mathbb{Z}_{2^m}$, $j\in 1{:}s$, (1) where $\vec{x}_{ij}\in\{0,1\}^E$ represents $x_{ij}\in[0,1)$ truncated to $E$ digits (trailing digits set to 0). Typically, $E \le m$ for unrandomized digital nets. We introduce randomization via $\vec{x}_{ij} = C_j\vec{i} + \vec{D}_j \bmod 2$, (2) where $C_j\in\{0,1\}^{E\times m}$ and $\vec{D}_j\in\{0,1\}^E$ are random with precision $E \ge m$. The vector $\vec{D}_j$ is called the dig... | https://arxiv.org/abs/2504.19138v1
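A minimal numpy sketch of the randomization in (2) for a single coordinate $j$ (drawing $C_j$ uniformly at random here is purely for illustration; practical digital nets use structured generating matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, E = 4, 10                                   # 2^m points, E output digits (E >= m)
C = rng.integers(0, 2, size=(E, m))            # generating matrix C_j in {0,1}^{E x m}
D = rng.integers(0, 2, size=E)                 # digital shift D_j in {0,1}^E

points = []
for i in range(2 ** m):
    i_bits = (i >> np.arange(m)) & 1           # base-2 digits of i
    x_bits = (C @ i_bits + D) % 2              # equation (2): C_j i + D_j mod 2
    points.append(x_bits @ 0.5 ** np.arange(1, E + 1))  # read digits as a base-2 fraction
print(sorted(points))                          # 16 randomized points in [0, 1)
```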
on $k$ and $\kappa$. For a finite subset $\kappa\subseteq\mathbb{N}$, we denote the cardinality of $\kappa$ as $|\kappa|$, the sum of elements in $\kappa$ as $\|\kappa\|_1$ and the largest element of $\kappa$ as $\lceil\kappa\rceil$. When $\kappa=\emptyset$, we conventionally define $|\kappa| = \|\kappa\|_1 = \lceil\kappa\rceil = 0$. For $k = (k_1,\dots,k_s)$ and the corresponding $\kappa = (\kappa_1,\dots,\kappa_s)$, we define $\|k\|_0 = \|\kappa\|_0 = \sum_{j=1}^s |\kappa_j|$, $\|k\|_1 = \|\kappa\|_1 = \sum_{j=1}^s \|\kappa_j\|_1$ and $\lceil k\rceil = \lceil\kappa$... | https://arxiv.org/abs/2504.19138v1
$\mathrm{SUM}_2 < 0) \ge \Pr(\mathrm{SUM}_1 < 0\text{ and }|\mathrm{SUM}_1| > |\mathrm{SUM}_2|) \ge \Pr(\mathrm{SUM}_1 < 0) - \Pr(|\mathrm{SUM}_1| \le |\mathrm{SUM}_2|) \ge \Pr(\mathrm{SUM}'_1 < 0) - d_{\mathrm{TV}}(\mathrm{SUM}_1, \mathrm{SUM}'_1) - \Pr(|\mathrm{SUM}_1| \le |\mathrm{SUM}_2|)$. Similarly, $\Pr(\hat\mu_\infty \le \mu) \ge \Pr(\mathrm{SUM}'_1 \le 0) - d_{\mathrm{TV}}(\mathrm{SUM}_1, \mathrm{SUM}'_1) - \Pr(|\mathrm{SUM}_1| \le |\mathrm{SUM}_2|)$. Hence $\Pr(\hat\mu_\infty < \mu) + \Pr(\hat\mu_\infty \le \mu) - \Pr(\mathrm{SUM}'_1 < 0) - \Pr(\mathrm{SUM}'_1 \le 0) \ge -2 d_{\mathrm{TV}}(\mathrm{SUM}_1, \mathrm{SUM}'_1) - 2\Pr(|\mathrm{SUM}_1| \le |\mathrm{SUM}_2|)$. (11) Because $\mathrm{SUM}'_1$... | https://arxiv.org/abs/2504.19138v1
$V\in I^*_{r+1}$, which is further bounded by the probability that $\oplus_{i=1}^{r+1} k_i = 0$, since all proper subsets $W$ of $V$ have full rank. Because for any given $k_1,\dots,k_r$, there is at most one $k_{r+1}\in Q_N$ for which $\oplus_{i=1}^{r+1} k_i = 0$, we therefore have $\frac{(r+1)!\,|I^*_{r+1}|}{|Q_N|^{r+1}} \le \frac{1}{|Q_N|}\Pr\big(\oplus_{i=1}^{r} k_i \in Q_N\big) \le \frac{1}{|Q_N|}\,A_s^r N^{r/4}\,2^{-B_s\sqrt{N}}$. The conclusion follows after rearr... | https://arxiv.org/abs/2504.19138v1
$\exp\big(\pi\sqrt{sN/3}\big)$. Furthermore, $\|\kappa\|_1 = \sum_{j=1}^s\|\kappa_j\|_1 \ge \sum_{j=1}^s \frac{|\kappa_j|^2}{2} \ge \frac{1}{2s}\Big(\sum_{j=1}^s |\kappa_j|\Big)^2 = \frac{1}{2s}\|\kappa\|_0^2$. (22) Therefore, $\|\kappa\|_0 \le \sqrt{2s\|\kappa\|_1}$ and $\sum_{k\in\mathbb{N}^s_*\setminus Q_{N_m}} |\hat f(k)| \le \sum_{N=N_m+1}^\infty K_1 2^{-N}\max\big(\alpha^{\sqrt{2sN}}, 1\big)\,\frac{\pi\sqrt{s}}{2\sqrt{3N}}\exp\big(\pi\sqrt{sN/3}\big) \le K_1\frac{\pi\sqrt{s}}{2\sqrt{3}}\sum_{N=N_m+1}^\infty 2^{-N + D_\alpha\sqrt{sN}}$ with $D_\alpha = \sqrt{2}\max(\log_2(\alpha), 0) + \pi/(\sqrt{3}\log(2))$. For any $\rho\in(0,1)$, we can find $N_{\rho,s,\alpha}$ such that $D_\alpha\sqrt{s(N+1)}$... | https://arxiv.org/abs/2504.19138v1
find a constant $D^*$ depending on $K_3, |J|, \alpha, s$ such that for large enough $m$, $|\hat f(k)| \le K_2\, 2^{-\|\kappa\|_1 + \rho_m|J|\log_2(m)\sqrt{\lambda N_m/s} + D^* m}$. By choosing $\epsilon_m = 1/\log_2(m)$ for $m \ge 3$, we see $\rho_m|J|\log_2(m)\sqrt{\lambda N_m/s} = 2|J|\log_2(m)\sqrt{\lambda N_m/s} + |J|\sqrt{\lambda N_m/s}$ and $|\hat f(k)| \le K_2\, 2^{-\|\kappa\|_1 + 2|J|\log_2(m)\sqrt{\lambda N_m/s} + D^{**} m}$ (27) for some $D^{**} > D^*$. Hence when $m$ is large enough, $\sum_{k\in Q_{N^*_m}\setminus(Q_{N_m}\cup\tilde Q)} |$... | https://arxiv.org/abs/2504.19138v1
$T_{m-t} \le \frac{1}{\sigma(Z)} \le \frac{1}{2^{|W|}}\binom{|W|}{\lfloor|W|/2\rfloor}$. (33) Our conclusion then follows from the asymptotic relation $\binom{n}{\lfloor n/2\rfloor} \sim 2^n(\pi n)^{-1/2}$ and $|W| > cm/2$. The next theorem provides a sufficient condition for equation (32) to hold. Simply put, we require $f$ to be "nondegenerate" in the sense that a sufficient number of $k\in Q_{N_m}$ have $|\hat f(k)|$ compar... | https://arxiv.org/abs/2504.19138v1
$x_1$ direction and apply our algorithm to $\sum_{p=0}^{\kappa} g_p(x_{-1})/(p+1)$ instead. This is called pre-integration in the QMC literature, a technique to regularize the integrands. See [9, 13] for further reference. Another solution is to localize our calculation to $Q' = \{k\in\mathbb{N}^s_* \mid |\kappa_1| \le \kappa\}$. Specifically, we set $K_m = Q_{N_m}\cap Q'$ with $N_m = \sup\{N\in\mathbb{N}$... | https://arxiv.org/abs/2504.19138v1
given by equation (3) and let $\hat\mu^{(\nu)}_E$ be their $\nu$-th order statistics. If $f\in C^\infty([0,1]^s)$ satisfies the assumptions of Theorem 6 and the precision $E$ increases with $m$ so that $E \ge N_m$, we have under the complete random design $\limsup_{m\to\infty} \sqrt{m}\,\Big|\Pr(\hat\mu_E < \mu) + \frac{1}{2}\Pr(\hat\mu_E = \mu) - \frac{1}{2}\Big| < \infty$ (38) and for $1\le\ell\le u\le r$, $\liminf_{m\to\infty} \Pr\big(\mu\in[\hat\mu^{(\ell)}_E, \hat\mu^{(u)}_E]\big) \ge F(u-1) - F(\ell$... | https://arxiv.org/abs/2504.19138v1
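A hedged sketch of how the quantile interval $[\hat\mu^{(\ell)}_E, \hat\mu^{(u)}_E]$ is formed from $r$ independently randomized estimates (the estimates below are synthetic stand-ins, not randomized-net averages):

```python
import numpy as np

def quantile_interval(estimates, ell, u):
    """Interval from the ell-th and u-th order statistics (1-indexed) of r estimates."""
    order = np.sort(np.asarray(estimates))
    return order[ell - 1], order[u - 1]

rng = np.random.default_rng(0)
r = 15
estimates = 1.0 + 0.01 * rng.standard_normal(r)   # hypothetical estimates around mu = 1
lo, hi = quantile_interval(estimates, ell=4, u=12)
print(lo, hi)
```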
constant $\Gamma' \ge \Gamma$ such that $N_m - \lambda m^2/s + \Gamma' m\log_2(m) \to \infty$ as $m\to\infty$, because $N_m \sim \lambda m^2/s + 3\lambda m\log_2(m)/s$. Then $E \ge N_m$ and equation (40) implies $|\hat\mu_E - \hat\mu_\infty| \le K\, 2^{-\lambda m^2/s + \Gamma' m\log_2(m)}$ for large enough $m$. Together with Theorem 8, we have $\limsup_{m\to\infty} m^\gamma \Pr\big(|\hat\mu_E - \mu| > 2K\, 2^{-\lambda m^2/s + \Gamma' m\log_2(m)}\big) \le 1$. In order for either $|\hat\mu^{(\ell)}_E - \mu|$ or $|\hat\mu^{(u)}_E - \mu|$ to exceed $2K\, 2^{-\lambda m^2/s+}$... | https://arxiv.org/abs/2504.19138v1
$j\in 1{:}s$ and $\ell > d_m$, $C_j(\ell, :)$ is independently drawn from $U(\{0,1\}^m)$. The marginal order is 0 for the complete random design and 1 for the random linear scrambling, provided every generating matrix has full rank. Randomization of higher marginal order is useful when randomizing higher order digital nets from [3]. The nex... | https://arxiv.org/abs/2504.19138v1
$\ldots_j\in I_m, j\in 1{:}s)\,\Pr(Z^*(k) = 1 \mid C^*_j\in I_m, j\in 1{:}s)$. Conditioned on $C^*_j\in I_m$ for all $j\in 1{:}s$, $Z(k)$ has the same distribution as $Z^*(k)$. Therefore $\Pr(Z(k) = 1) \le \frac{2^{-m}}{\Pr(C^*_j\in I_m, j\in 1{:}s)} \le \exp(2s)\,2^{-m}$. (48) Because $\Pr(Z(k) = 1) = E[\Pr(Z(k) = 1\mid C_j, j\in 1{:}s)]$, Markov's inequality shows for each nonzero $k$, $\Pr\big(\Pr(Z(k) = 1\mid C_j, j\in 1{:}s) >$... | https://arxiv.org/abs/2504.19138v1
by $(m+1)^{s(2r-1)}\exp(2sr)\,2^{-R}$, which equals $\exp(2sr)(m+1)^{-2sr}$ when $R = (2r + 2r - 1)s\log_2(m+1)$. Equation (46) follows once we combine the above bound with equation (50). Corollary 5. When $m \ge 3$, there exist generating matrices $C_j, j\in 1{:}s$ such that the random linear scrambling has marginal order 1 and satisfies $R_{m,r} \le (2r + 2r-$... | https://arxiv.org/abs/2504.19138v1
$(r+1)!\,A_s^r N_m^{r/4}\,2^{-B_s\sqrt{N_m}}$. For $r \le (3/4)\log_2(m)$, $R_{m,r} \le (m^{3/4} + (3/2)\log_2(m) - 1)\,s\log_2(m+1)$. Hence $\Pr\big(V\in I_m,\ |V| \le \frac{3}{2}\log_2(m)\big) \le 2^{(m^{3/4} + (3/2)\log_2(m) - 1)s\log_2(m+1)}\sum_{r=2}^{\lfloor(3/4)\log_2(m)\rfloor}\frac{(A_s N_m^{1/4})^r}{(r+1)!}\,2^{-B_s\sqrt{N_m}} \le 2^{(m^{3/4} + (3/2)\log_2(m) - 1)s\log_2(m+1)}\exp(A_s N_m^{1/4})\,2^{-B_s\sqrt{N_m}}$, which converges to 0 as $m\to\infty$ since $N_m \sim \lambda m^2/s$. The proof of Step 2 ... | https://arxiv.org/abs/2504.19138v1
out of the 1000 runs contain $\mu$, and too high if more than 970 intervals contain $\mu$. Below we use CRD as the shorthand for the "completely random design" and RLS for the "random linear scrambling". The generating matrices for RLS come from [10]. [Figure: plot over $m = 2$ to $12$, log-scale vertical axis from 0.001 to 0.100.]... | https://arxiv.org/abs/2504.19138v1
for $f(x) = \prod_{j=1}^8 x_j\exp(x_j)$. [Figure: 90th quantile of confidence interval lengths when $f(x) = \prod_{j=1}^8 x_j\exp(x_j)$; horizontal axis $m$ (5 to 15), vertical axis interval length ($10^{-3}$ to $10^1$, log scale); legend: t-interval for CRD, quantile interval for CRD, t-interval for RLS, quantile interval for RLS.] | https://arxiv.org/abs/2504.19138v1