Control of Overfitting with Physics

1 Introduction

Analogies from physics and other fields, particularly population genetics, are of interest when studying problems in machine learning theory. Analogies between machine learning theory and Darwinian evolution theory were already discussed by Alan Turing [1]. Biological analogies in computing were discussed by John von N…
2 Stochastic Gradient Descent and the Eyring Formula

2.1 Stochastic Gradient Descent

Let $\{z_l\}$, $l=1,\dots,L$ (with the elements belonging to $\mathbb{R}^n$), be a sample, let $f_l(x)=\mathcal{L}(z_l,x)$ be the loss function for the $l$-th sample object, and let $x$ be a hypothesis (we assume that the hypothesis space is $\mathbb{R}^m$). Minimization of the empirical risk is the following problem: …
2.2 Overfitting Control for Stochastic Gradient Descent

Overfitting is the lack of ability of a learned solution to generalize (i.e., high likelihood on the training sample and low likelihood on the validation sample). One approach to overfitting control is based on algorithmic stability, i.e., on the stability of the solution obtained by a learning …
3 The GAN Model and Overfitting

3.1 Stochastic Gradient Langevin Dynamics for GAN

The generative adversarial network (GAN) model is the minimax problem [25]

$$\min_y \max_x V(x,y), \qquad (7)$$

$$V(x,y)=\frac{1}{L}\sum_{l=1}^{L}\log D(z_l,x)+\int_Z p_{\mathrm{gen}}(z,y)\log\bigl(1-D(z,x)\bigr)\,dz; \quad …$$
Example. Let us consider one-dimensional parameters $x$ and $y$ for the discriminator and the generator, respectively, and the functional $V=\omega x y$, with the minimax point located at the origin. The noiseless GAN equation system is

$$\frac{dx}{dt}=\frac{\partial}{\partial x}V(x,y)=\omega y, \quad …$$
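The oscillatory character of these noiseless dynamics can be checked numerically. The sketch below assumes the generator's companion equation is the descent counterpart, $dy/dt=-\partial V/\partial y=-\omega x$ (only the discriminator equation survives the truncation above): under that assumption the exact flow is a rotation about the minimax point, so the pair circles the origin instead of converging, and a naive forward-Euler discretization even spirals slowly outward.

```python
import math

# Toy noiseless GAN dynamics: dx/dt = omega * y for the discriminator,
# and (assumed here) dy/dt = -omega * x for the generator. The exact
# flow is a rotation, so x^2 + y^2 is conserved along trajectories.

def exact_flow(x0, y0, omega, t):
    c, s = math.cos(omega * t), math.sin(omega * t)
    return x0 * c + y0 * s, -x0 * s + y0 * c

def euler_flow(x0, y0, omega, dt, steps):
    x, y = x0, y0
    for _ in range(steps):
        x, y = x + dt * omega * y, y - dt * omega * x
    return x, y

xe, ye = exact_flow(1.0, 0.0, 2.0, t=1.0)
r_exact = xe * xe + ye * ye      # conserved: stays at 1 up to float error
xg, yg = euler_flow(1.0, 0.0, 2.0, dt=0.01, steps=100)
r_euler = xg * xg + yg * yg      # forward Euler inflates the radius slightly
```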
3.2 Overfitting Control for GAN

If we ignore the presence of the generator, then the dynamics of the discriminator (9) under optimization with noise corresponds to diffusion in the potential generated by the data. Thus, the arguments of Section 2 are applicable, and overfitting can be reduced according to the Eyring formula. The pres…
4 Branching Random Process for GAN

In this section, a branching random process with diffusion and particle interactions, describing the populations of discriminators and generators in a generalization of the GAN model, is introduced. The theory of branching random processes and its connection with population genetics have been actively discussed in the li…
5 Simulations

In this section, the results of the numerical simulation of the SGLD procedure and of the simulation of the predator–prey model for the GAN are provided.
5.1 Objective Function

Let $\mathcal{L}(x)$ be an objective function for optimization, $\mathcal{L}(x)\to\max$. We consider $\mathcal{L}(x)$ as a sum of non-normalized Gaussians of the following form:

$$\mathcal{L}(x)=\sum_{j=1}^{n} q_j\, e^{-\frac{\|x-c_j\|^2}{2\sigma_j^2}}, \qquad (15)$$

…
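Objective (15) is straightforward to implement. The sketch below uses the two-peak parameter choice quoted in Section 5.2 ($\sigma_1=3.0$, $\sigma_2=1.5$, $c_1=(-5.5,-5.5)^T$, $c_2=(3.0,3.0)^T$, $q_j=\sigma_j^2$); everything else is generic.

```python
import numpy as np

# Sum-of-Gaussians objective (15) with the Section 5.2 parameters.
def objective(x, centers, sigmas, amps):
    x = np.asarray(x, dtype=float)
    return sum(q * np.exp(-np.sum((x - c) ** 2) / (2.0 * s ** 2))
               for c, s, q in zip(centers, sigmas, amps))

centers = [np.array([-5.5, -5.5]), np.array([3.0, 3.0])]
sigmas = [3.0, 1.5]
amps = [s ** 2 for s in sigmas]          # q_j = sigma_j^2

# At c_1 the distant second peak contributes almost nothing,
# so the value is approximately q_1 = 9.
val = objective(centers[0], centers, sigmas, amps)
print(val)   # ~ 9.0
```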
5.2 Stochastic Gradient Langevin Dynamics

For objective function (15), we consider $n=2$, $\sigma_1=3.0$, $\sigma_2=1.5$, $c_1=(-5.5,-5.5)^T$, $c_2=(3.0,3.0)^T$, $q_j=\sigma_j^2$ for $j=1,2$ (here and in similar places below, $T$ means the transpose of the…
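A minimal Langevin-ascent sketch for maximizing (15) with these parameters follows; the step size `eta`, inverse temperature `beta`, starting point, and iteration count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Langevin ascent on the sum-of-Gaussians objective (15):
# x <- x + eta * grad(x) + sqrt(2 * eta / beta) * xi, xi ~ N(0, I).
centers = [np.array([-5.5, -5.5]), np.array([3.0, 3.0])]
sigmas = [3.0, 1.5]
amps = [s ** 2 for s in sigmas]                  # q_j = sigma_j^2

def objective(x):
    return sum(q * np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2))
               for c, s, q in zip(centers, sigmas, amps))

def grad(x):
    # d/dx of q * exp(-|x-c|^2/(2 s^2)) is the same Gaussian times (c-x)/s^2.
    return sum(q * np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2)) * (c - x) / s ** 2
               for c, s, q in zip(centers, sigmas, amps))

rng = np.random.default_rng(0)
x = np.zeros(2)
eta, beta = 0.05, 10.0
for _ in range(2000):
    x = x + eta * grad(x) + np.sqrt(2 * eta / beta) * rng.standard_normal(2)
print(objective(x))   # the chain settles near one of the two peaks
```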
5.3 Predator–Prey Model

For the predator–prey model, we consider a more general dynamical system than the one defined by Equations (9) and (10). Let $x(t)$ be the position of the prey and $y(t)$ the position of the predator at time $t$. In the simulations, for visualization, we consider $x,y\in\mathbb{R}^2$. We consider their joint evolut…
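The joint evolution is truncated above, so the following is only a hedged illustration of a predator–prey flow of this general shape: the prey $x$ climbs a toy objective while being repelled from the predator $y$, and the predator drifts toward the prey. The coupling terms, the rates `a` and `b`, and the toy objective are assumptions for illustration, not the paper's system.

```python
import numpy as np

# One Euler step of an assumed predator-prey flow in R^2.
def step(x, y, grad_L, a=1.0, b=0.5, dt=0.01):
    d = x - y
    # Prey: gradient ascent plus a repulsion term decaying with distance.
    x_new = x + dt * (grad_L(x) + a * d / (1e-9 + np.dot(d, d)))
    # Predator: drift straight toward the prey.
    y_new = y + dt * b * d
    return x_new, y_new

grad_L = lambda z: -z        # toy concave objective L(z) = -|z|^2 / 2, peak at 0

x0, y0 = np.array([2.0, 2.0]), np.zeros(2)
x1, y1 = step(x0, y0, grad_L)
```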
A typical evolution is shown in Figure 6 (video available in the Supplementary Material). The model parameters for this simulation are the following: the centers of the two Gaussian extrema in (15) ($n=2$) are $c_1=(0,0)$, $c_2=(-7.0,-7.0)$, the widths are $\sigma_1=0.5$, $\sigma_2=2.0$, and the amplitu…
5.4 Application to the Wine Recognition Dataset

In Section 5, we applied the predator–prey model to synthetic conditions to reveal the desired behavior. While the application of the method to real, big datasets is a separate, complex task, here we investigate the improvement achieved by the suggested method on a small educational dataset. As an example, we consi…
6 Conclusions

Various mimics of physical or biological behavior do appear in machine learning, e.g., in evolutionary and genetic algorithms. In this work, we discuss a possible justification, based on some models appearing in physics and biology, for the ability to control overfitting in SGLD and the GAN. For SGLD, we show that the …
Appendix A

Here, we provide some relevant notions from the theory of random processes [44].

Fokker–Planck equation. Consider a diffusion with generator

$$\hat{L}f(x)=\frac{1}{2}\sum_{ij}a^{ij}(x)\frac{\partial^2 f(x)}{\partial x^i\,\partial x^j}+\sum_i b^i(x)\frac{\partial f(x)}{\partial x^i}, \quad …$$
Langevin Monte Carlo Beyond Lipschitz Gradient Continuity

Abstract

We present a significant advancement in the field of Langevin Monte Carlo (LMC) methods by introducing the Inexact Proximal Langevin Algorithm (IPLA). This novel algorithm broadens the scope of problems that LMC can effectively address while maintaining controlled computational costs. IPLA extends LMC's applicability t…
1 Introduction

Langevin Monte Carlo methods are powerful tools for sampling and optimization in complex, high-dimensional problems across various fields, including machine learning, statistics, physics, and beyond. When the exact form of a distribution is unknown or difficult to compute, efficient sampling is essential for high-dimen…
$$\mathcal{F}_V[\mu]:=\int_{\mathbb{R}^d}V(x)\,\mathrm{d}\mu(x),$$

$$\mathcal{F}_{\mathcal{E}}[\mu]\,…$$
In the current paper, we propose, and give theoretical guarantees for, the Inexact Proximal Langevin Algorithm (IPLA for short), which can be used to generate a sample from $\mu^*$ beyond the assumption of global Lipschitz gradient continuity of $V$. The typical assumption of the strong convexity o…
2 Our Contribution

We make significant progress in the field of Langevin Monte Carlo methods by introducing a novel algorithm that handles an essentially broader scope of problems than treated up to now, while preserving controlled computational cost. Namely, we propose the Inexact Proximal Langevin Algorithm, IPLA (Algorithm 1). We extend the cu…
3 Description of Our Setting

We consider an optimization problem on the space of probability measures over $\mathbb{R}^d$, for which we use the notation $\mathscr{P}(\mathbb{R}^d)$. We define $\mathscr{P}_{\mathrm{ac}}(\mathbb{R}^d)$ as the subspace of $\mathscr{P}(\mathbb{R}^d)$ …
Let us note that, due to the Jensen inequality, for all measures $\mu,\nu\in\mathscr{P}(\mathbb{R}^d)$ we have $\operatorname{KL}(\mu|\nu)\geq 0$, and equality holds only if $\mu=\nu$. The second distance used in our analysis is the 2-Wasserstein distance, defin…
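The nonnegativity $\operatorname{KL}(\mu|\nu)\geq 0$, with equality exactly when the measures coincide, can be sanity-checked on one-dimensional Gaussians, for which the divergence has the well-known closed form used below.

```python
import math

# Closed-form KL divergence between N(m1, s1^2) and N(m2, s2^2):
# KL = log(s2/s1) + (s1^2 + (m1 - m2)^2) / (2 s2^2) - 1/2.
def kl_gauss(m1, s1, m2, s2):
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

print(kl_gauss(0.0, 1.0, 0.0, 1.0))   # 0.0 for identical Gaussians
print(kl_gauss(0.0, 1.0, 1.0, 2.0))   # strictly positive otherwise
```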
(i) Assumption (V) embraces convex potentials with tails of polynomial growth with power $q_V+1\geq 2$. The common assumption in the literature for LMC allows for convex functions with growth bounded by a quadratic function. (ii) The assumption that $x^*=0$ is imposed just to simplif…
4 Inexact Proximal Langevin Algorithm

Let us present the idea of our algorithm. By (2) we know that $\mu^*$ is a minimizer of $\mathcal{F}$ over a space of measures, so we design an algorithm optimizing this functional. The basic idea for reaching the minimizer is an analog of the gradient descent algorithm. In our case the functional $\mathcal{F}$ …
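As a hedged illustration of the proximal flavor of such a scheme (not the paper's Algorithm 1, whose exact update is truncated above), the sketch below alternates a proximal step with additive Gaussian noise, $X_{k+1}=\mathrm{prox}_V^{\tau}(X_k)+\sqrt{2\tau}\,\xi_k$, for a quadratic potential $V(x)=\lambda|x|^2/2$, where the proximal map has the closed form $x/(1+\tau\lambda)$ and the target $N(0,I/\lambda)$ lets us check the sampler numerically.

```python
import numpy as np

# Proximal Langevin sketch: X_{k+1} = prox_V^tau(X_k) + sqrt(2 tau) xi_k.
# For V(x) = lam * |x|^2 / 2 the proximal map is x / (1 + tau * lam);
# for general convex V this step would be computed inexactly.
rng = np.random.default_rng(1)
tau, lam, d = 0.01, 1.0, 2
x = np.full(d, 5.0)
samples = []
for k in range(20000):
    x = x / (1.0 + tau * lam) + np.sqrt(2 * tau) * rng.standard_normal(d)
    if k > 5000:                      # discard burn-in
        samples.append(x.copy())
var = np.asarray(samples).var(axis=0)
print(var)   # close to 1 / lam = 1 per coordinate
```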
5 Theoretical Results

In this section, we provide theoretical guarantees for the accuracy of IPLA. We start with bounds on the moments of the Markov chain $\{X_k\}_k$ generated by IPLA; see Theorem 5.1. Then we show error bounds for IPLA in $\operatorname{KL}$-divergence and in the Wasserstein distance; see Theorems …
Theorem 5.1 (Moment bound). Let $m\geq 0$. Suppose that $V$ satisfies (V) and $\varrho_0$ satisfies ($\boldsymbol{\varrho_0}$), and, for $k=1,\dots,n$, let $X_k$ be as in Algorithm 1. Then there exists a constant $\mathcal{C}_m>0$ such that for all $0<\tau<1/\lambda$…
Remark 5.2. If the minimum of $V$ were not at $x^*=0$, then the result would instead read

$$\sup_k\,\mathbb{E}|X_k-x^*|^m\leq\mathcal{C}_m\,\min\{1,\lambda_V^{-m}\}\,d^{\frac{m}{2}}. \qquad …$$
Remark 5.3. There exists a constant $C_{q_V}<\infty$ such that for $\tau\leq 1$ it holds that

$$K(\tau)\leq C_{q_V}\,\tau\,d^{\frac{q_V+1}{2}}.$$

The mentioned auxiliary result reads as follows.
Proposition 5.4. Assume that the function $V:\mathbb{R}^d\to\mathbb{R}$, $d\geq 1$, satisfies assumption (V) with some $q_V\geq 1$, the initial measure $\varrho_0$ satisfies ($\boldsymbol{\varrho_0}$), and $\nu\in\mathscr{P}_{q_V+1}(\mathbb{R}^d)$ …
Theorem 5.5 ($\operatorname{KL}$-error bound). Suppose that $V$ satisfies (V) and $\varrho_0$ satisfies ($\boldsymbol{\varrho_0}$). Let $\tau<1/\lambda_V$, $\kappa,\alpha>0$, and $\delta\leq\kappa\tau^{1+\alpha}$. Then it holds:

$$\operatorname{KL}(\nu_n^N\,|\,\mu^*)\leq\frac{1}{2n\tau}\,W_2^2(\varrho_N,\mu^*)-\,…$$
Proof. We use the convexity of the $\operatorname{KL}$-divergence (Cover and Thomas 2012, Theorem 2.7.2) and get

$$\operatorname{KL}(\nu_n^N\,|\,\mu^*)\leq\frac{1}{n}\sum_{k=N+1}^{N+n}\operatorname{KL}(\varrho_k\,|\,\mu^*). \qquad …$$
Corollary 5.6. Suppose that the assumptions of Theorem 5.5 are satisfied. Assume further that

$$0<\tau_\varepsilon\leq\min\left\{\Big(\frac{\varepsilon}{3\,C(\mu^*)\,\kappa}\Big)^{\frac{1}{\alpha}},\,1\right\}\quad\text{and}\quad K(\tau_\varepsilon)\leq\varepsilon/3\,…$$
Proof. The inequality (6) is a straightforward consequence of Theorem 5.5. To get estimates of the computational complexity, it is enough to observe that, by Remark 5.3, the condition $K(\tau_\varepsilon)\leq\varepsilon$ is satisfied if

$$0<\tau_\varepsilon\leq\min\big\{\varepsilon\,C_{q_V}^{-1}\,d^{-\frac{q_V+1}{2}},\,1\big\}. \qquad …$$
Theorem 5.7 (Wasserstein error bound). Suppose that the potential $V$ satisfies (V) for $R_V=0$ and $\lambda_V>0$, and that the initial measure $\varrho_0$ satisfies ($\boldsymbol{\varrho_0}$). Let $\tau<1/\lambda_V$, $\alpha\geq 0$, and $\delta\leq\kappa\tau^{1+\alpha}$ …
Corollary 5.8. Suppose that the assumptions of Theorem 5.7 are satisfied. Assume further that

$$\tau_\varepsilon^{2\alpha}\leq\frac{\lambda_V^2\,\varepsilon}{96\,\kappa^2\,\log^2\!\big(6\,W_2^2(\varrho_0,\mu^*)\,\varepsilon^{-1}\big)}\,…$$
Proof. Let us note that

$$\Big(1-\frac{\tau\lambda_V}{2}\Big)^{n_\varepsilon}\leq\exp\Big(-\frac{n_\varepsilon\tau\lambda_V}{2}\Big)\quad\text{and}\quad\frac{1-e^{-\lambda_V\tau(n_\varepsilon-1)}}{1-e^{-\lambda_V\tau}}\leq n_\varepsilon. \qquad …$$
Remark 5.9. The assumption of global $\lambda_V$-convexity of $V$ decreases the computational cost of a single iteration of IPLA. We can approximate the proximal step with smaller precision and keep the complexity of the whole algorithm of the order $\varepsilon^{-2}$. Indeed, as a consequence of $\lambda_V$-…
Remark 5.10. The complexity of IPLA, as shown in Corollaries 5.6 and 5.8, depends on the number of iterations requiring additional computations. Since the optimized function is strongly convex, the cost of one iteration with precision $\delta$ is $\mathcal{O}(d\log(\delta))$, as detailed in Appendix E.
6 Experiments

In this section, we demonstrate the application of IPLA on 3 examples implemented in Python. We analyze the convergence rates and bias of the algorithm compared to two known related LMC algorithms, namely TULA (the Tamed Unadjusted Langevin Algorithm of Brosse et al. (2019)) and ULA (the Unadjusted Langevin Algorithm of …
Example 1: Distribution with Light Tails

Let us start with a simple and natural case where the potential has a non-Lipschitz gradient. Our goal is to sample from the density

$$\mu^*(x)\propto\exp\Big(-\frac{|x|^4}{4}\Big),$$

which is the stationary distribution of the process …
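For the quartic potential $V(x)=|x|^4/4$ of this example, the one-dimensional proximal point $y=\mathrm{prox}_V^{\tau}(x)$ is the unique root of the cubic $y+\tau y^3=x$, which a few Newton iterations recover to machine precision. This inner solve is the kind of computation an inexact proximal step approximates; the code is an illustrative sketch, not the paper's implementation.

```python
# Proximal map for V(x) = |x|^4 / 4 in one dimension:
# y = prox_V^tau(x) solves y + tau * y**3 = x (strictly increasing in y,
# so the root is unique); Newton's method converges rapidly.
def prox_quartic(x, tau, iters=50):
    y = x
    for _ in range(iters):
        f = y + tau * y ** 3 - x
        fp = 1.0 + 3.0 * tau * y ** 2
        y -= f / fp
    return y

tau = 0.1
y = prox_quartic(2.0, tau)
print(y + tau * y ** 3)   # recovers x = 2.0
```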
Example 2: Ginzburg–Landau Model

We pass to a more complicated but physically relevant example. The Ginzburg–Landau model was introduced to describe phase transitions in physics, e.g., superconductivity; see Goldenfeld (1992, Chapter 5). It involves the potential

$$V(x)=\sum_{i,j,k=1}^{q}\frac{1-\upsilon}{2}\,x_{ijk}^2+\frac{\upsilon\varkappa}{2}\,\big|\widetilde{\nabla}x_{ijk}\big|^2+\frac{\upsilon\varsigma}{4}\,x_{ijk}^4, \qquad …$$
Example 3: Bayesian Image Deconvolution

We now show by example that IPLA is effective in complex, high-dimensional scenarios. We consider the Bayesian image deconvolution problem inspired by Experiment 1 in Durmus, Moulines, and Pereyra (2018, Subsection 4.1.2). Given a high-resolution grayscale image $x\in\mathbb{R}^d$ such that $d=n\times n$…
Effective Implementation of IPLA

Despite involving the inexact proximal step, the computational complexity of IPLA is controlled. In Examples 1 and 2, we used the Newton conjugate gradient method with the analytically computed gradient and Hessian matrix, making use of the sparsity of the Hessian. Using the standard Python library …
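As a hedged sketch of this kind of inner solve (not the paper's code), the proximal subproblem $\min_y V(y)+|y-x|^2/(2\tau)$ can be handed to SciPy's Newton-CG with an analytic gradient and Hessian, shown here for the quartic potential of Example 1 in two dimensions.

```python
import numpy as np
from scipy.optimize import minimize

# Inexact proximal step prox_V^tau(x) = argmin_y V(y) + |y - x|^2 / (2 tau)
# via Newton-CG, with V(y) = |y|^4 / 4; gradient and Hessian are analytic.
def prox_newton_cg(x, tau):
    def f(y):
        return 0.25 * np.dot(y, y) ** 2 + np.dot(y - x, y - x) / (2.0 * tau)
    def jac(y):
        return np.dot(y, y) * y + (y - x) / tau           # grad V + (y - x)/tau
    def hess(y):
        eye = np.eye(len(y))
        return np.dot(y, y) * eye + 2.0 * np.outer(y, y) + eye / tau
    return minimize(f, x, method="Newton-CG", jac=jac, hess=hess,
                    options={"xtol": 1e-10}).x

x = np.array([2.0, 0.0])
tau = 0.1
y = prox_newton_cg(x, tau)
residual = np.dot(y, y) * y + (y - x) / tau   # optimality condition: ~ 0
```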
Conclusion

IPLA broadens the scope of problems that known LMC algorithms can address while keeping the computational cost low.
Acknowledgements

I.C. is supported by NCN grant 2019/34/E/ST1/00120; B.M. is supported by NCN grant 2018/31/B/ST1/00253. The authors are grateful to Adam Chlebicki-Miasojedow and Sonja Letnes for close assistance in the progress of this research.
A Auxiliary Results

The next lemma is crucial for dealing with functions that are strongly convex only outside some ball.
Lemma A.1. Let $V$ be a function convex and differentiable on $\mathbb{R}^d$, $\lambda_V$-convex outside a given ball $B_V$ for some $\lambda_V\geq 0$, and attaining its minimum at $x^*=0$. Then, for any $\tau>0$ and $x,z\in\mathbb{R}^d$ …
Proof. Denote $y=\mathrm{prox}_V^{\tau}(x)$. Due to the $\lambda_V$-convexity of $V$ we know that

$$V(y)-V(z)\leq\nabla V(y)\cdot(y-z)-\frac{\lambda_V}{2}\,\mathbb{1}_{\mathbb{R}^d\setminus B_V}(y)\,|y-z|^2. \qquad …$$
Lemma A.2. Let $V$ be a function convex and differentiable on $\mathbb{R}^d$ and $\lambda_V$-convex outside a given ball $B_V:=\{x:\,|x|\leq R_V\}$ for $\lambda_V\geq 0$ and $m\geq 0$. Then for any $k\in\mathbb{N}$ …
Proof. For $m=0$ there is nothing to prove. Suppose $m>0$. Since $\varrho_{k+\frac{1}{3}}=(\mathrm{prox}_V^{\tau})_{\#}\varrho_k$, we have

$$\mathbb{E}\big|X_{k+\frac{1}{3}}\big|^m\,…$$
$$\mathbb{E}\big|X_{k+\frac{1}{3}}\big|^m\leq\frac{(1+\tau\lambda_V)^{\frac{m}{2}}-1}{(1+\tau\lambda_V)^{\frac{m}{2}}}\,R_V^m+\Big(\frac{1}{1+\tau\lambda_V}\Big)^{\frac{m}{2}}\int_{\mathbb{R}^d}|x|^m\,\varrho_k(\mathrm{d}x)\,…$$
36
Let X,Z,Θ∈𝒫m​(ℝd),𝑋𝑍Θsubscript𝒫𝑚superscriptℝ𝑑X,Z,\Theta\in{\mathscr{P}}_{m}({{\mathbb{R}}^{d}}), m≥2𝑚2m\geq 2, be arbitrary and pairwise independent with 𝔼​Z=0𝔼𝑍0\mathbb{E}Z=0. Then , 1 = 𝔼​|X+Θ+Z|m≤𝔼​|X|m+𝖼​(𝔼​|X|m−1​𝔼​|Θ|+𝔼​|X|m−2​|Z+Θ|2+𝔼​|Z+Θ|m).𝔼superscript𝑋Θ𝑍𝑚𝔼superscript𝑋𝑚𝖼𝔼superscript�...
Proof.
We compute the Taylor expansion of first order of the function f​(z)=|ξ+z|m,m≥2formulae-sequence𝑓𝑧superscript𝜉𝑧𝑚𝑚2f(z)=|\xi+z|^{m},m\geq 2 around zero with reminder estimation. To do that, we compute the first and second derivative , 1 = ∇f​(z)∇𝑓𝑧\displaystyle\nabla f(z). , 2 = =m​|ξ+z|m−1​(ξ+z)|ξ+z|=m​|ξ+z|m−2...
Lemma A.4.
Let Z∼gτsimilar-to𝑍subscript𝑔𝜏Z\sim g_{\tau}. Then for every m≥0𝑚0m\geq 0, there exists C~m=C~m​(m)>0subscript~𝐶𝑚subscript~𝐶𝑚𝑚0\widetilde{C}_{m}=\widetilde{C}_{m}(m)>0, such that , 1 = 𝔼​|Z|m≤C~m​τm2​dm2.𝔼superscript𝑍𝑚subscript~𝐶𝑚superscript𝜏𝑚2superscript𝑑𝑚2\mathbb{E}|Z|^{m}\leq\widetilde{C}_{m}\tau^...
Proof.
Let Z=(Z1,…,Zd)𝑍superscript𝑍1…superscript𝑍𝑑Z=(Z^{1},\ldots,Z^{d}) and observe that for m≥2𝑚2m\geq 2 it holds , 1 = 𝔼​|Z|m=𝔼​|∑i=1d|Zi|2|m2≤dm2−1​∑i=1d𝔼​|Zi|m=dm2​𝔼​|Z1|m.𝔼superscript𝑍𝑚𝔼superscriptsuperscriptsubscript𝑖1𝑑superscriptsuperscript𝑍𝑖2𝑚2superscript𝑑𝑚21superscriptsubscript𝑖1𝑑𝔼superscripts...
Remark A.5.
If $m=2$, from the proof of Lemma A.4, we infer that for $Z\sim g_{\tau}$ it holds $\mathbb{E}|Z|^{2}=2\tau d$. In the theoretical analysis, the key role is played by the following lemma from (Benko et al. 2024).
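The moment bound of Lemma A.4 and the identity above can be sanity-checked numerically. The sketch below is my own illustration, assuming $g_{\tau}$ is the centred Gaussian density with covariance $2\tau I_d$ (consistent with $\mathbb{E}|Z|^{2}=2\tau d$):

```python
import numpy as np

rng = np.random.default_rng(0)
tau, d, n = 0.01, 5, 200_000

# Z ~ g_tau, assumed here to be the centred Gaussian with covariance 2*tau*I_d
Z = rng.normal(scale=np.sqrt(2.0 * tau), size=(n, d))
norms = np.linalg.norm(Z, axis=1)

# Remark A.5: E|Z|^2 = 2*tau*d exactly
second_moment = float((norms ** 2).mean())

# Lemma A.4: E|Z|^m <= C_m * tau^(m/2) * d^(m/2); the ratios below stay bounded
ratios = {m: float((norms ** m).mean() / (tau ** (m / 2) * d ** (m / 2)))
          for m in (1, 2, 3, 4)}
print(second_moment, ratios[2])
```

For $m=2$ the ratio equals $\mathbb{E}|Z|^{2}/(\tau d)=2$, matching the remark.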
Lemma A.6 (Benko et al. (2024), Lemma 4.5).
Let the function V:ℝd→ℝ:𝑉→superscriptℝ𝑑ℝV:{{\mathbb{R}}^{d}}\to{\mathbb{R}}, d≥1𝑑1d\geq 1, satisfy assumption (V) with λV≥0subscript𝜆𝑉0\lambda_{V}\geq 0, LqV>0subscript𝐿subscript𝑞𝑉0L_{q_{V}}>0, and qV≥1subscript𝑞𝑉1q_{V}\geq 1. Assume further that μ∗superscript𝜇\mu^{*} is a minimizer of ℱℱ{\mathcal{F}}, τ>0𝜏...
Lemma A.7.
If X∼μsimilar-to𝑋𝜇X\sim\mu, Y∼νsimilar-to𝑌𝜈Y\sim\nu, Θ∼ρsimilar-toΘ𝜌\Theta\sim\rho and |Θ|≤δΘ𝛿|\Theta|\leq\delta a.s., then , 1 = W22​(μ∗ρ,ν)≤W22​(μ,ν)+(2​𝔼​(|X|+|Y|)+δ)​δ.superscriptsubscript𝑊22𝜇𝜌𝜈superscriptsubscript𝑊22𝜇𝜈2𝔼𝑋𝑌𝛿𝛿W_{2}^{2}(\mu*\rho,\nu)\leq W_{2}^{2}(\mu,\nu)+(2\mathbb{E}(|X|+|Y|)+\de...
Proof.
We choose the optimal coupling for X,Y𝑋𝑌X,Y and ΘΘ\Theta independent on X𝑋X and Y𝑌Y. We then observe that , 1 = W22​(μ∗ρ,ν)superscriptsubscript𝑊22𝜇𝜌𝜈\displaystyle W_{2}^{2}(\mu*\rho,\nu). , 2 = ≤𝔼​|X+Θ−Y|2=𝔼​|X−Y|2+2​𝔼​⟨Θ,X−Y⟩+𝔼​|Θ|2≤𝔼​|X−Y|2+2​δ​𝔼​(|X|+|Y|)+δ2absent𝔼superscript𝑋Θ𝑌2𝔼superscript𝑋𝑌22�...
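The perturbation bound of Lemma A.7 can be checked empirically in one dimension, where the squared 2-Wasserstein distance between equal-size empirical measures is computed by sorting. The distributions below are my own illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 100_000, 0.1

X = rng.normal(0.0, 1.0, n)            # X ~ mu
Y = rng.normal(1.0, 1.0, n)            # Y ~ nu
Theta = rng.uniform(-delta, delta, n)  # Theta ~ rho, |Theta| <= delta a.s.

def w2_sq(a, b):
    # squared 2-Wasserstein distance between 1-d empirical measures
    return float(np.mean((np.sort(a) - np.sort(b)) ** 2))

lhs = w2_sq(X + Theta, Y)  # W2^2(mu * rho, nu)
rhs = w2_sq(X, Y) + (2 * (np.abs(X).mean() + np.abs(Y).mean()) + delta) * delta
print(lhs <= rhs)  # the inequality holds with a wide margin here
```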
Lemma A.8.
Assume that the function V:ℝd→ℝ:𝑉→superscriptℝ𝑑ℝV:{{\mathbb{R}}^{d}}\to{\mathbb{R}}, d≥1𝑑1d\geq 1, satisfies assumption (V). Let ξδsubscript𝜉𝛿\xi_{\delta} be a probability measure supported on B​(0,δ)𝐵0𝛿B(0,\delta) and let ν∈𝒫qV​(ℝd)𝜈subscript𝒫subscript𝑞𝑉superscriptℝ𝑑\nu\in{\mathscr{P}}_{q_{V}}({{\mathbb{R...
Proof.
We note that by assumption (V) it holds , 1 = ∫ℝd∫B​(0,δ)V​(y)−V​(y−z)subscriptsuperscriptℝ𝑑subscript𝐵0𝛿𝑉𝑦𝑉𝑦𝑧\displaystyle\int_{{{\mathbb{R}}^{d}}}\int_{B(0,\delta)}V(y)-V(y-z). , 2 = d​ξ​(z)​d​ν​(y)≤∫ℝd∫B​(0,δ)(∇V​(y)⋅z+CV​|z|2​(1+|y|qV−1+|z|qV−1))​dξ​(z)​dν​(y)d𝜉𝑧d𝜈𝑦subscriptsuperscriptℝ𝑑subscript𝐵0𝛿⋅∇...
B Proofs of main results of the paper

Proof of Theorem 5.1.
We show a detailed proof in the case λV≤1subscript𝜆𝑉1\lambda_{V}\leq 1. For λV≥1subscript𝜆𝑉1\lambda_{V}\geq 1 it follows from the same arguments upon a few obvious changes. We concentrate now on justifying that for some 𝒞msubscript𝒞𝑚{\mathcal{C}}_{m}, such that 𝒞m≪+∞much-less-thansubscript𝒞𝑚{\mathcal{C}}_{m}\...
, 1 = 𝔼​|Xk+1|2≤11+τ​λV​𝔼​|Xk|2+τ​λV1+τ​λV​RV2+δ2+2​τ​d+2​δ​11+τ​λV​𝔼​|Xk|+2​δ​τ​λV1+τ​λV+1+τ​λV​RV.𝔼superscriptsubscript𝑋𝑘1211𝜏subscript𝜆𝑉𝔼superscriptsubscript𝑋𝑘2𝜏subscript𝜆𝑉1𝜏subscript𝜆𝑉superscriptsubscript𝑅𝑉2superscript𝛿22𝜏𝑑2𝛿11𝜏subscript𝜆𝑉𝔼subscript𝑋𝑘2𝛿𝜏subscript𝜆𝑉1𝜏subscript𝜆𝑉1...
, 1 = 𝔼​|Xk+1|2𝔼superscriptsubscript𝑋𝑘12\displaystyle\mathbb{E}|X_{k+1}|^{2}. , 2 = ≤max⁡{36​δ2τ2​λV2+12​δ2τ​λV,(22+τ​λV)k+1​𝔼​|X0|2}+(τ​λV​RV2+δ​τ​λV​RV+δ2+2​τ​d)​∑j=1k+1(22+τ​λV)j≤𝒞2λV2​d,absent36superscript𝛿2superscript𝜏2superscriptsubscript𝜆𝑉212superscript𝛿2𝜏subscript𝜆𝑉superscript22𝜏subscript𝜆𝑉𝑘1�...
Note that the bound for II\mathrm{I} is covered by Lemma A.2. As for term IIII\mathrm{II} we notice that (8) holds for m−1𝑚1m-1 by induction. Since Θk+23subscriptΘ𝑘23\Theta_{k+\frac{2}{3}} and Zk+1subscript𝑍𝑘1Z_{k+1} are assumed to be independent, 𝔼​|Θk+23|=δ𝔼subscriptΘ𝑘23𝛿\mathbb{E}|\Theta_{k+\frac{2}{3}}|=\de...
, 1 = 1𝖼​IV1𝖼IV\displaystyle\tfrac{1}{{\mathsf{c}}}\mathrm{IV}. , 2 = ≤2m−1​𝔼​|Θk+23|m+2m−1​𝔼​|Zk+1|m≤2m−1​(δm+C~m​τm2​dm2).absentsuperscript2𝑚1𝔼superscriptsubscriptΘ𝑘23𝑚superscript2𝑚1𝔼superscriptsubscript𝑍𝑘1𝑚superscript2𝑚1superscript𝛿𝑚subscript~𝐶𝑚superscript𝜏𝑚2superscript𝑑𝑚2\displaystyle\leq 2^{m...
We observe that ∑j=0k(11+τ​λV)m2​j≤(1+τ​λV)m2(1+τ​λV)m2−1superscriptsubscript𝑗0𝑘superscript11𝜏subscript𝜆𝑉𝑚2𝑗superscript1𝜏subscript𝜆𝑉𝑚2superscript1𝜏subscript𝜆𝑉𝑚21\sum_{j=0}^{k}\Big{(}\tfrac{1}{1+\tau\lambda_{V}}\Big{)}^{\frac{m}{2}j}\leq\tfrac{(1+\tau\lambda_{V})^{\frac{m}{2}}}{(1+\tau\lambda_{V})^{\frac{...
Corollary B.1.
If assumptions (V) and (ϱ𝟎subscriptitalic-ϱ0\boldsymbol{\varrho_{0}}) are satisfied then for any τ<1λV𝜏1subscript𝜆𝑉\tau<\frac{1}{\lambda_{V}}, it holds , 1 = ℱV​[ϱk+1]−ℱV​[ϱk+23]≤K​(τ),subscriptℱ𝑉delimited-[]subscriptitalic-ϱ𝑘1subscriptℱ𝑉delimited-[]subscriptitalic-ϱ𝑘23𝐾𝜏{{\mathcal{F}}_{V}}[\varrho_{k+1}]-{{\...
Proof of Proposition 5.4.
Let ξδsubscript𝜉𝛿\xi_{\delta} be a measure coming from an inexact proximal step and let us denote , 1 = ϱk+13:=(proxVτ)#​ϱk,ϱk+23:=ϱk+13∗ξδ,ϱk+1:=ϱk+23∗gτ.formulae-sequenceassignsubscriptitalic-ϱ𝑘13subscriptsuperscriptsubscriptprox𝑉𝜏#subscriptitalic-ϱ𝑘formulae-sequenceassignsubscriptitalic-ϱ𝑘23subscriptitalic-ϱ�...
Proof of Theorem 5.7.
Let us consider the exact proximal Langevin algorithm, i.e. define, for ϱ^0=ϱ0subscript^italic-ϱ0subscriptitalic-ϱ0\hat{\varrho}_{0}=\varrho_{0}, , 1 = ϱ^k+12:=(proxVτ)#​ϱ^k,ϱ^k+1:=ϱ^k+12∗gτ.formulae-sequenceassignsubscript^italic-ϱ𝑘12subscriptsuperscriptsubscriptprox𝑉𝜏#subscript^italic-ϱ𝑘assignsubscript^italic-ϱ𝑘...
C Explicit formulas for constants
In this section we present explicit formulas for constants used in the main body of the paper. • The constant 𝒞 m subscript 𝒞 𝑚 {\mathcal{C}}_{m} from Theorem 5.1 is given by The constant 𝒞msubscript𝒞𝑚{\mathcal{C}}_{m} from Theorem 5.1 is given by – for m≤2𝑚2m\leq 2 𝒞m=[36​κ2d+λV2d​𝔼​|X0|2+16​κ​δ+4​RV2​λV+...
D Discussion
In this section we present other figures describing observed properties of IPLA in comparison with other approaches. On Figures 4 and 5 we present trace plots of IPLA, TULA, and Metropolis–Hastings Algorithm for Example 1. Figure 6 relates to Figure 2, but starting in the origin. Figure 6 additionally includes the perf...
E Proximal operator
The explicit formula for the operator proxVτsuperscriptsubscriptprox𝑉𝜏\mathrm{prox}_{V}^{\tau} is known only in special cases; see, for example, Polson, Scott, and Willard (2015). However, under our assumptions, an approximation of the proximal operator is a relatively easy task, since the optimized function is stron...
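Since the proximal objective inherits at least $(1/\tau)$-strong convexity from its quadratic term, plain gradient descent converges linearly. A minimal sketch, assuming the standard definition $\mathrm{prox}_V^{\tau}(x)=\arg\min_y\{V(y)+|x-y|^{2}/(2\tau)\}$; the potential and step size are illustrative choices, not from the paper:

```python
import numpy as np

def prox_gd(grad_V, x, tau, step=0.05, iters=2000):
    """Approximate prox_V^tau(x) = argmin_y V(y) + |x - y|^2 / (2 tau)
    by gradient descent; for convex V the objective is (1/tau)-strongly
    convex, so the iteration contracts linearly."""
    y = x.copy()
    for _ in range(iters):
        y -= step * (grad_V(y) + (y - x) / tau)
    return y

# Illustrative superlinear potential V(y) = |y|^4/4 + |y|^2/2,
# whose gradient is not globally Lipschitz
grad_V = lambda y: np.dot(y, y) * y + y

x = np.array([1.0, -0.5])
tau = 0.1
y = prox_gd(grad_V, x, tau)
# first-order optimality of the prox: y + tau * grad_V(y) = x
print(np.linalg.norm(y + tau * grad_V(y) - x))
```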
Enhancing Nighttime Vehicle Detection with Day-to-Night Style Transfer and Labeling-Free Augmentation

Abstract
Existing deep learning-based object detection models perform well under daytime conditions but face significant challenges at night, primarily because they are predominantly trained on daytime images. Additionally, training with nighttime images presents another challenge: even human annotators struggle to accurately l...
1 Introduction
Accurate and reliable vehicle detection is essential for a wide range of transportation applications, such as traffic monitoring and incident management. However, large performance gaps exist for vehicle detection between daytime versus nighttime conditions. This especially true for rural environments, where streetligh...
2 Framework
Our proposed framework, illustrated in Figure 1, introduces a novel labeling-free data augmentation method that enables realistic day-to-night image style transfer using synthetic data generated by CARLA [15]. The framework comprises two main components: 1. Synthetic nighttime data generation under rural settings: Thi...
3 Method
This section introduces our proposed method, which addresses the challenges of nighttime vehicle detection in rural environments through three key steps: (1) Synthetic Nighttime Data Generation: The process of generating realistic nighttime images is described, where the CARLA simulator is utilized to incorporate criti...
3.1 Synthetic nighttime data generation
As discussed in the previous section, the primary challenges in improving nighttime vehicle detection arise from the low quality of roadside camera images and the difficulty of collecting sufficiently large and diverse datasets. To address these issues, synthetic nighttime images are generated using CARLA [15], a widel...
3.2 Day-to-night image style transfer
The Efficient Attention GAN (EAGAN) [18] builds upon the CycleGAN framework by integrating efficient attention blocks into the generator networks while enabling attention sharing between corresponding encoder and decoder blocks. This mechanism allows the re-utilization of the long-range dependencies computed from the s...
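For context, "efficient attention" in the sense of Shen et al. replaces the n-by-n attention map with a small global context matrix, making the cost linear in the number of spatial positions; whether EAGAN follows exactly this formulation is an assumption here, and the NumPy sketch below is purely illustrative:

```python
import numpy as np

def efficient_attention(Q, K, V):
    """Efficient attention sketch: softmax over each query's features and
    over key positions, so K^T V becomes a small (d_k, d_v) global context
    matrix shared by all positions -- cost is linear in sequence length."""
    def softmax(a, axis):
        e = np.exp(a - a.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)
    q = softmax(Q, axis=1)   # normalize each query over the feature dim
    k = softmax(K, axis=0)   # normalize keys over positions
    context = k.T @ V        # (d_k, d_v) summary of the whole feature map
    return q @ context       # (n, d_v) output, no n-by-n map formed

n, dk, dv = 16, 4, 4
rng = np.random.default_rng(0)
out = efficient_attention(rng.normal(size=(n, dk)),
                          rng.normal(size=(n, dk)),
                          rng.normal(size=(n, dv)))
print(out.shape)  # (16, 4)
```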
3.3 Labeling-free data augmentation for nighttime images
The You Only Look Once (YOLO) family of models has revolutionized object detection, offering real-time detection and high accuracy. YOLO11, the latest version [16], builds upon this legacy with attention mechanisms, deeper feature extraction layers, and an anchor-free detection approach. It is specifically designed to ...
4 Experiments

4.1 Data
Our data was gathered from multiple public traffic cameras in California, which includes both daytime and nighttime images, serving distinct purposes for training and testing. Specifically, the data is organized into three categories: (1) training datasets for the EAGAN model, (2) fine-tuning datasets for the YOLO11 mo...
4.2 Image style transfer
For the image style transfer with EAGAN, we targeted two domains: domain X, which consists of daytime images in real-world settings, and domain Y, comprising CARLA-generated nighttime images. The EAGAN was trained for 200 epochs with scheduled learning rate (Equation 6) that was initialized at 0.0002 and started to dec...
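Equation 6 itself is not reproduced in this excerpt; a common CycleGAN-style schedule, shown here purely as an assumption, holds the rate constant for the first half of training and then decays it linearly to zero:

```python
def lr_schedule(epoch, base_lr=2e-4, total=200, decay_start=100):
    """Hypothetical linear-decay schedule in the CycleGAN style: constant
    base_lr until decay_start, then linear decay to 0 at epoch `total`.
    (The paper's exact Equation 6 may differ.)"""
    if epoch < decay_start:
        return base_lr
    return base_lr * (total - epoch) / (total - decay_start)

print(lr_schedule(0), lr_schedule(100), lr_schedule(150), lr_schedule(200))
# 0.0002 0.0002 0.0001 0.0
```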
4.3 Nighttime vehicle detection and classification
For this experiment, the YOLO11-Small model was employed. Figure 5 showcase sample predictions generated by the original YOLO11-Small model, which serve as labels for their style-transferred nighttime images. Although CARLA can generate realistic nighttime road scene images, there are still subtle differences in appear...
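Because the style-transferred nighttime image shares the daytime frame's geometry pixel-for-pixel, daytime detections can be reused directly as its labels. A minimal sketch of writing such pseudo-labels in the YOLO txt format (class id followed by normalized centre coordinates and box size); the helper name and box values are illustrative, not from the paper:

```python
def boxes_to_yolo_lines(boxes_xyxy, classes, img_w, img_h):
    """Convert pixel-space (x1, y1, x2, y2) boxes predicted on a daytime
    frame into YOLO-format label lines for its style-transferred nighttime
    counterpart, which has identical geometry."""
    lines = []
    for (x1, y1, x2, y2), c in zip(boxes_xyxy, classes):
        xc = (x1 + x2) / 2 / img_w   # normalized box centre
        yc = (y1 + y2) / 2 / img_h
        w = (x2 - x1) / img_w        # normalized box size
        h = (y2 - y1) / img_h
        lines.append(f"{c} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    return lines

# one hypothetical detection: a vehicle at pixels (100,200)-(300,320) in a 640x480 frame
print(boxes_to_yolo_lines([(100, 200, 300, 320)], [2], 640, 480))
# ['2 0.312500 0.541667 0.312500 0.250000']
```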
5 Conclusion
In this work, we proposed a novel framework for enhancing nighttime vehicle detection, featuring a labeling-free method to create an augmented dataset to fine-tuning object detection models with improve performance for nighttime conditions. We employed the EAGAN as the image translator to generate corresponding nightti...
Acknowledgment
This work was supported by the U.S. Department of Transportation (USDOT) University Transportation Center (UTC) Program under Grant 69A3552348304. Figure 1: Framework overview. Figure 2: CARLA examples (From left to right, first column: side-view approaching; second column: center-view approaching; third column: side-v...