\[
\cdots+\frac{n}{2d}\log 2\pi\sigma^2,
\qquad
V(q^t,\alpha^t)=\frac{1}{d}\int\frac{1}{2\sigma^2}\|y-X\theta\|^2\,q^t(\theta)\,d\theta+\frac{1}{d}D_{\mathrm{KL}}\big(q^t\,\big\|\,g(\cdot,\alpha^t)^{\otimes d}\big)+\frac{n}{2d}\log 2\pi\sigma^2.
\]
Let $\Pi_X\in\mathbb{R}^{n\times n}$ be the orthogonal projection onto the column span of $X$. Then, applying the above forms with $D_{\mathrm{KL}}(q^t\|g(\cdot,\alpha^t)^{\otimes d})\ge 0$ and noting that $\|(I-\Pi_X)(y-X\theta)\|^2=\|(I-\Pi_X)y\|^2$, which does not depend on $\theta$, we have
\begin{align*}
V(q^0,\alpha^0)-V(q^t,\alpha^t)
&\le\frac{1}{d}\int\frac{1}{2\sigma^2}\|y-X\theta\|^2\prod_{j=1}^d g_0(\theta_j)\,d\theta_j-\frac{1}{d}\int\frac{1}{2\sigma^2}\|y-X\theta\|^2\,q^t(\theta)\,d\theta+D_{\mathrm{KL}}(g_0\|g(\cdot,\alpha^0))\\
&=\frac{1}{d}\int\frac{1}{2\sigma^2}\|\Pi_X(y-X\theta)\|^2\prod_{j=1}^d g_0(\theta_j)\,d\theta_j-\frac{1}{d}\int\frac{1}{2\sigma^2}\|\Pi_X(y-X\theta)\|^2\,q^t(\theta)\,d\theta+D_{\mathrm{KL}}(g_0\|g(\cdot,\alpha^0))\\
&\le\frac{1}{d}\int\frac{1}{2\sigma^2}\|\Pi_X(y-X\theta)\|^2\prod_{j=1}^d g_0(\theta_j)\,d\theta_j+D_{\mathrm{KL}}(g_0\|g(\cdot,\alpha^0)).
\end{align*}
Let us apply $\|\Pi_X(y-X\theta)\|^2\le 2\|\Pi_X\varepsilon\|^2+2\|X(\theta^*-\theta)\|_2^2$, $\|X\|_{\mathrm{op}}^2\le C(1+\delta)$, and $\|\Pi_X\varepsilon\|^2\le C\min(n,d)\sigma^2$ for a universal constant $C>0$, a.s.\ for all large $n,d$. Then, for a constant $C(g_*,g_0,\alpha^0)>0$ depending only on $g_*,g_0,\alpha^0$,
\[
\limsup_{n,d\to\infty}V(q^0,\alpha^0)-V(q^t,\alpha^t)\le C(g_*,g_0,\alpha^0)\Big(\frac{1+\delta}{\sigma^2}+1\Big).
\]
Applying this to (213) and noting that $R(\alpha^0)=0$ because $\|\alpha^0\|\le D$, for every $t\ge 0$ we get
\[
R(\alpha^t)\le C(g_*,g_0,\alpha^0)\Big(\frac{1+\delta}{\sigma^2}+1\Big).
\]
The lemma follows from this bound and the condition $R(\alpha)\ge\|\alpha\|-D$ whenever $\|\alpha\|\ge D+1$.

Proof of Proposition 2.16. Fix any $s^2=\sigma^2/\delta>0$. Throughout this proof, constants may depend on $s^2$ but not on $\delta$. Proposition 5.2 implies that there exists a constant radius $D'>0$ (depending on $s^2$ but not on $\delta$) such that for any $\delta>1$, $\alpha^t\in\mathcal{B}(D')$ for all $t\ge 0$. Set $S=\mathcal{B}(D')$ and $O=\mathcal{B}(D'+1)$. Then for each fixed $\alpha\in O$, Assumption 2.2(b) implies
\[
C\ge-\partial_\theta^2\log g(\theta,\alpha)\ge
\begin{cases}
c_0 & \text{for }|\theta|\ge r_0,\\
-C & \text{for all }\theta\in\mathbb{R},
\end{cases}
\tag{214}
\]
for some $C,r_0,c_0>0$ uniformly over $\alpha\in O$. By this bound (214) and Proposition 2.8(b), for some sufficiently large $\delta_0=\delta_0(s^2)>0$, $\sigma^2=\delta s^2$, and all $\delta\ge\delta_0$, the LSI (46) must hold for $g=g(\cdot,\alpha)$ and each $\alpha\in O$. This verifies Assumption 2.11. Throughout the remainder of the proof, let $C,C',c,c'>0$ denote constants not depending on $\delta$ that may change from instance to instance. We compare the optimization landscape of $F(\alpha)$ with that of $G_{s^2}(\alpha)$ over $O$. Let $\mathrm{mse}(\alpha),\mathrm{mse}_*(\alpha),\omega(\alpha),\omega_*(\alpha)$ be as defined by (42) and (43) for the prior $g=g(\cdot,\alpha)$.
We first bound $\mathrm{mse}(\alpha),\mathrm{mse}_*(\alpha),\omega(\alpha),\omega_*(\alpha)$. Write as shorthand $\langle\cdot\rangle=\langle\cdot\rangle_{g(\cdot,\alpha),\omega(\alpha)}$ for the posterior expectation in the scalar channel model (37). We have
\[
\langle(y-\theta)^2\rangle=\frac{1}{Z}\int(y-\theta)^2e^{-\frac{\omega(\alpha)}{2}(y-\theta)^2}g(\theta,\alpha)\,d\theta,
\qquad
Z=\int e^{-\frac{\omega(\alpha)}{2}(y-\theta)^2}g(\theta,\alpha)\,d\theta.
\]
We separate the integrals over the sets $\{\theta:e^{-\frac{\omega(\alpha)}{2}(y-\theta)^2}\le Z\}$ and $\{\theta:e^{-\frac{\omega(\alpha)}{2}(y-\theta)^2}>Z\}$, and on the latter set apply the upper bound $(y-\theta)^2\le-\frac{2}{\omega(\alpha)}\log Z$. This gives
\[
\langle(y-\theta)^2\rangle\le\int(y-\theta)^2\,\mathbf{1}\{e^{-\frac{\omega(\alpha)}{2}(y-\theta)^2}\le Z\}\,g(\theta,\alpha)\,d\theta-\frac{2}{\omega(\alpha)}\log Z\le 2\int(y-\theta)^2g(\theta,\alpha)\,d\theta,
\]
the last inequality applying Jensen's inequality to bound $\log Z\ge\int-\frac{\omega(\alpha)}{2}(y-\theta)^2g(\theta,\alpha)\,d\theta$. It is clear from the lower bounds of (214) and the boundedness of $\log g(0,\alpha)$ and $\partial_\theta\log g(0,\alpha)$ over $\alpha\in O$ that $\int\theta^2g(\theta,\alpha)\,d\theta<C$ for some constant $C>0$, for all $\alpha\in O$. Thus this inequality shows $\langle(y-\theta)^2\rangle\le C(1+y^2)$, which implies also
\[
\langle(\langle\theta\rangle-\theta)^2\rangle\le\langle(y-\theta)^2\rangle\le C(1+y^2),
\qquad
\langle\theta\rangle^2\le 2y^2+2(y-\langle\theta\rangle)^2\le 2y^2+2\langle(y-\theta)^2\rangle\le C'(1+y^2).
\]
Taking expectations over $y=\theta^*+\omega_*(\alpha)^{-1/2}z$ with $\theta^*\sim g_*$ and $z\sim\mathcal{N}(0,1)$, we get $\mathrm{mse}(\alpha),\mathrm{mse}_*(\alpha)\le C(1+\omega_*(\alpha)^{-1})$. Then applying $\omega_*(\alpha)^{-1}=(\sigma^2+\mathrm{mse}_*(\alpha))/\delta\le s^2+C(1+\omega_*(\alpha)^{-1})/\delta$, for all $\delta>\delta_0$ sufficiently large, this implies $\omega_*(\alpha)^{-1}\le C'$. This in turn shows by $\mathrm{mse}(\alpha),\mathrm{mse}_*(\alpha)\le C(1+\omega_*(\alpha)^{-1})$ that
\[
\mathrm{mse}(\alpha),\,\mathrm{mse}_*(\alpha)\le C.\tag{215}
\]
Let $o_\delta(1)$ denote a quantity that converges to 0 uniformly over $\alpha\in O$ as $\delta\to\infty$ (fixing $s^2=\sigma^2/\delta$). Then, applying (215) to the fixed point equations $\omega(\alpha)=\delta/(\sigma^2+\mathrm{mse}(\alpha))$ and $\omega_*(\alpha)=\delta/(\sigma^2+\mathrm{mse}_*(\alpha))$, we have
\[
\omega(\alpha)^{-1}=s^2+o_\delta(1),\qquad\omega_*(\alpha)^{-1}=s^2+o_\delta(1).\tag{216}
\]
We recall from Lemma 4.12 that $\omega(\alpha),\omega_*(\alpha)$ must be continuous functions of $\alpha\in O$. We now argue via the implicit function theorem that for all $\delta>\delta_0$ sufficiently large, these are in fact continuously differentiable over $\alpha\in O$. For this, fix any $\alpha\in O$ and consider the map
\[
f_\alpha(\omega,\omega_*)=
\begin{pmatrix}
\omega^{-1}-\delta^{-1}\big(\sigma^2+\mathbb{E}_{g_*,\omega_*}[\langle(\theta-\langle\theta\rangle_{g(\cdot,\alpha),\omega})^2\rangle_{g(\cdot,\alpha),\omega}]\big)\\[2pt]
\omega_*^{-1}-\delta^{-1}\big(\sigma^2+\mathbb{E}_{g_*,\omega_*}[(\theta^*-\langle\theta\rangle_{g(\cdot,\alpha),\omega})^2]\big)
\end{pmatrix}.\tag{217}
\]
Thus (42) and (43) imply that $0=f_\alpha(\omega(\alpha),\omega_*(\alpha))$. Let us momentarily write as shorthand $\mathbb{E}=\mathbb{E}_{g_*,\omega_*}$ and $\langle\cdot\rangle=\langle\cdot\rangle_{g(\cdot,\alpha),\omega}$. Expressing $y=\theta^*+\omega_*^{-1/2}z$, $\mathbb{E}$ may be understood as the expectation over $\theta^*\sim g_*$ and $z\sim\mathcal{N}(0,1)$. The expected posterior average $\mathbb{E}\langle\cdot\rangle$ is given explicitly by
\[
\mathbb{E}\langle f(\theta)\rangle=\mathbb{E}\,\frac{\int f(\theta)e^{H_\alpha(\theta,\omega,\omega_*)}\,d\theta}{\int e^{H_\alpha(\theta,\omega,\omega_*)}\,d\theta},
\qquad
H_\alpha(\theta,\omega,\omega_*)=\omega(\theta^*+\omega_*^{-1/2}z)\theta-\frac{\omega}{2}\theta^2+\log g(\theta,\alpha),
\]
and the derivatives in $(\omega,\omega_*)$ may be computed via differentiation of $H_\alpha$. Let us denote by $\kappa_j(\cdot)$ the $j$-th mixed cumulant associated to the posterior mean $\langle\cdot\rangle=\langle\cdot\rangle_{g(\cdot,\alpha),\omega}$, i.e.
\[
\kappa_1(f(\theta))=\langle f(\theta)\rangle,\qquad
\kappa_2(f(\theta),g(\theta))=\langle f(\theta)g(\theta)\rangle-\langle f(\theta)\rangle\langle g(\theta)\rangle,
\]
etc. Then $\mathbb{E}[\langle(\theta-\langle\theta\rangle)^2\rangle]=\mathbb{E}[\kappa_2(\theta,\theta)]$ and $\mathbb{E}[(\theta^*-\langle\theta\rangle)^2]=\mathbb{E}[(\theta^*-\kappa_1(\theta))^2]$, and differentiating in $(\omega,\omega_*)$ gives
\begin{align*}
\partial_\omega\mathbb{E}[\langle(\theta-\langle\theta\rangle)^2\rangle]&=\mathbb{E}[\kappa_3(\theta,\theta,\partial_\omega H_\alpha(\theta,\omega,\omega_*))],\\
\partial_{\omega_*}\mathbb{E}[\langle(\theta-\langle\theta\rangle)^2\rangle]&=\mathbb{E}[\kappa_3(\theta,\theta,\partial_{\omega_*}H_\alpha(\theta,\omega,\omega_*))],\\
\partial_\omega\mathbb{E}[(\theta^*-\langle\theta\rangle)^2]&=\mathbb{E}[-2(\theta^*-\kappa_1(\theta))\kappa_2(\theta,\partial_\omega H_\alpha(\theta,\omega,\omega_*))],\\
\partial_{\omega_*}\mathbb{E}[(\theta^*-\langle\theta\rangle)^2]&=\mathbb{E}[-2(\theta^*-\kappa_1(\theta))\kappa_2(\theta,\partial_{\omega_*}H_\alpha(\theta,\omega,\omega_*))].
\end{align*}
We note that each absolute moment $\mathbb{E}_{g_*,\omega_*(\alpha)}[\langle|\theta|^k\rangle_{g(\cdot,\alpha),\omega(\alpha)}]$ is bounded by a constant over $\alpha\in O$, by continuity of this quantity in $\alpha$ and compactness of $O$. Then it is direct to check that each of the above four derivatives evaluated at $(\omega,\omega_*)=(\omega(\alpha),\omega_*(\alpha))$ is also bounded by a constant over $\alpha\in O$. This implies that the derivative of the map $f_\alpha(\omega,\omega_*)$ in (217) satisfies
\[
d_{\omega,\omega_*}f_\alpha(\omega,\omega_*)\Big|_{(\omega,\omega_*)=(\omega(\alpha),\omega_*(\alpha))}
=\begin{pmatrix}-\omega^{-2}+o_\delta(1)&o_\delta(1)\\ o_\delta(1)&-\omega_*^{-2}+o_\delta(1)\end{pmatrix}\bigg|_{(\omega,\omega_*)=(\omega(\alpha),\omega_*(\alpha))}
=-s^4I+o_\delta(1),\tag{218}
\]
where the last equality applies (216).

https://arxiv.org/abs/2504.15558v1
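As a numerical aside (not part of the proof), the fixed-point structure $\omega(\alpha)=\delta/(\sigma^2+\mathrm{mse}(\alpha))$, $\omega_*(\alpha)=\delta/(\sigma^2+\mathrm{mse}_*(\alpha))$ and the approximation (216) can be illustrated in a toy case where both $g(\cdot,\alpha)$ and $g_*$ are Gaussian, so that $\mathrm{mse}$ and $\mathrm{mse}_*$ are available in closed form. The sketch below is our own illustration; all parameter choices are arbitrary.

```python
import numpy as np

def gaussian_fixed_point(delta, s2, lam=1.0, v_star=2.0, n_iter=200):
    """Iterate omega = delta/(sigma^2 + mse), omega_* = delta/(sigma^2 + mse_*)
    for a Gaussian prior N(0, 1/lam) and Gaussian truth theta_* ~ N(0, v_star),
    where both mse terms are available in closed form (toy illustration only)."""
    sigma2 = delta * s2
    omega = omega_star = delta / sigma2
    for _ in range(n_iter):
        # posterior variance in the scalar channel: 1/(omega + lam)
        mse = 1.0 / (omega + lam)
        # E(theta_* - posterior mean)^2 under y = theta_* + omega_*^{-1/2} z
        denom = (omega + lam) ** 2
        mse_star = v_star * lam**2 / denom + omega**2 / (denom * omega_star)
        omega = delta / (sigma2 + mse)
        omega_star = delta / (sigma2 + mse_star)
    return omega, omega_star

omega, omega_star = gaussian_fixed_point(delta=100.0, s2=1.0)
# per (216), both inverses should be close to s2 = 1.0 for large delta
print(1 / omega, 1 / omega_star)
```

The iteration converges rapidly here because, as in (218), the sensitivity of each update to $(\omega,\omega_*)$ is $O(1/\delta)$.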
In particular, for $\delta>\delta_0$ sufficiently large, this derivative is invertible. Since $f_\alpha(\omega,\omega_*)$ is continuously differentiable in $(\omega,\omega_*,\alpha)$ (where differentiability in $\alpha$ is ensured by Assumption 2.2(b) for $\log g(\theta,\alpha)$), the implicit function theorem implies that for each $\alpha_0\in O$, there exists a unique continuously differentiable extension of the root $(\omega(\alpha_0),\omega_*(\alpha_0))$ of $0=f_{\alpha_0}(\omega(\alpha_0),\omega_*(\alpha_0))$ to a solution of $0=f_\alpha(\omega,\omega_*)$ in an open neighborhood of $\alpha_0$. This extension must then coincide with $\omega(\alpha),\omega_*(\alpha)$, because Lemma 4.12 ensures that $\omega(\alpha),\omega_*(\alpha)$ are continuous in $\alpha$. Thus $\omega(\alpha),\omega_*(\alpha)$ are continuously differentiable in $\alpha\in O$, as claimed. The implicit function theorem shows also that their first derivatives are given by
\[
\begin{pmatrix}\nabla_\alpha\omega^\top\\ \nabla_\alpha\omega_*^\top\end{pmatrix}
=-[d_{\omega,\omega_*}f_\alpha]^{-1}d_\alpha f_\alpha\Big|_{(\omega,\omega_*)=(\omega(\alpha),\omega_*(\alpha))}.
\]
We may check as above that the $\alpha$-derivatives
\[
\partial_{\alpha_j}\mathbb{E}[\langle(\theta-\langle\theta\rangle)^2\rangle]=\mathbb{E}[\kappa_3(\theta,\theta,\partial_{\alpha_j}H_\alpha(\theta,\omega,\omega_*))],\qquad
\partial_{\alpha_j}\mathbb{E}[(\theta^*-\langle\theta\rangle)^2]=\mathbb{E}[-2(\theta^*-\kappa_1(\theta))\kappa_2(\theta,\partial_{\alpha_j}H_\alpha(\theta,\omega,\omega_*))]
\]
evaluated at $(\omega,\omega_*)=(\omega(\alpha),\omega_*(\alpha))$ are also both bounded by a constant over $\alpha\in O$. By the definition of $f_\alpha$, this implies $d_\alpha f_\alpha|_{(\omega,\omega_*)=(\omega(\alpha),\omega_*(\alpha))}=o_\delta(1)$, so together with (218), this shows also
\[
\nabla_\alpha\omega(\alpha)=o_\delta(1),\qquad\nabla_\alpha\omega_*(\alpha)=o_\delta(1).\tag{219}
\]
Recall from Lemma 2.12 that
\[
\nabla F(\alpha)=-\mathbb{E}_{\theta\sim P_\alpha}\nabla_\alpha\log g(\theta,\alpha)=-\mathbb{E}_{g_*,\omega_*(\alpha)}\langle\nabla_\alpha\log g(\theta,\alpha)\rangle_{g(\cdot,\alpha),\omega(\alpha)}.\tag{220}
\]
Applying continuity of $(\omega,\omega_*)\mapsto\mathbb{E}_{g_*,\omega_*}\langle\nabla_\alpha\log g(\theta,\alpha)\rangle_{g(\cdot,\alpha),\omega}$ and the approximations $\omega(\alpha)^{-1},\omega_*(\alpha)^{-1}=s^2+o_\delta(1)$ shown above, we have
\[
\nabla F(\alpha)=-\mathbb{E}_{g_*,s^{-2}}\langle\nabla_\alpha\log g(\theta,\alpha)\rangle_{g(\cdot,\alpha),s^{-2}}+o_\delta(1)=\nabla G_{s^2}(\alpha)+o_\delta(1),\tag{221}
\]
where $G_{s^2}(\alpha)=-\mathbb{E}_{g_*,s^{-2}}[\log P_{g(\cdot,\alpha),s^{-2}}(y)]$ is the negative population log-likelihood (56) in the scalar channel model with fixed noise variance $s^2$. [Note that fixing an arbitrary point $\alpha_0\in O$ and integrating this gradient approximation over $\alpha\in O$, this also implies $F(\alpha)=G(\alpha)+(F(\alpha_0)-G(\alpha_0))+o_\delta(1)$, i.e.\ $F$ approximately coincides with $G$ up to an additive shift.] Furthermore, the above continuous differentiability of $\omega(\alpha),\omega_*(\alpha)$ and (220) imply $F(\alpha)$ is twice continuously differentiable over $\alpha\in O$, and differentiating $\nabla F(\alpha)$ by the chain rule gives
\begin{align}
\partial_{\alpha_i}\partial_{\alpha_j}F(\alpha)
&=-\partial_\omega\big(\mathbb{E}_{g_*,\omega_*(\alpha)}\langle\partial_{\alpha_i}\log g(\theta,\alpha)\rangle_{g(\cdot,\alpha),\omega(\alpha)}\big)\cdot\partial_{\alpha_j}\omega(\alpha)\notag\\
&\quad-\partial_{\omega_*}\big(\mathbb{E}_{g_*,\omega_*(\alpha)}\langle\partial_{\alpha_i}\log g(\theta,\alpha)\rangle_{g(\cdot,\alpha),\omega(\alpha)}\big)\cdot\partial_{\alpha_j}\omega_*(\alpha)\notag\\
&\quad-\partial_{\alpha_j}\big(\mathbb{E}_{g_*,\omega_*(\alpha)}\langle\partial_{\alpha_i}\log g(\theta,\alpha)\rangle_{g(\cdot,\alpha),\omega(\alpha)}\big).\tag{222}
\end{align}
Writing again $\mathbb{E}=\mathbb{E}_{g_*,\omega_*}$, $\langle\cdot\rangle=\langle\cdot\rangle_{g(\cdot,\alpha),\omega}$, and $\kappa_j$ for the cumulants with respect to $\langle\cdot\rangle$, we have
\begin{align*}
\partial_\omega\mathbb{E}\langle\partial_{\alpha_i}\log g(\theta,\alpha)\rangle&=\mathbb{E}[\kappa_2(\partial_{\alpha_i}\log g(\theta,\alpha),\partial_\omega H_\alpha(\theta,\omega,\omega_*))],\\
\partial_{\omega_*}\mathbb{E}\langle\partial_{\alpha_i}\log g(\theta,\alpha)\rangle&=\mathbb{E}[\kappa_2(\partial_{\alpha_i}\log g(\theta,\alpha),\partial_{\omega_*}H_\alpha(\theta,\omega,\omega_*))],
\end{align*}
and these are bounded at $(\omega,\omega_*)=(\omega(\alpha),\omega_*(\alpha))$ over all $\alpha\in O$. Furthermore
\[
\partial_{\alpha_j}\mathbb{E}\langle\partial_{\alpha_i}\log g(\theta,\alpha)\rangle=\mathbb{E}\langle\partial_{\alpha_i}\partial_{\alpha_j}\log g(\theta,\alpha)\rangle+\mathbb{E}[\kappa_2(\partial_{\alpha_i}\log g(\theta,\alpha),\partial_{\alpha_j}\log g(\theta,\alpha))].
\]
Applying these and the bounds (219) to (222),
\begin{align*}
\partial_{\alpha_i}\partial_{\alpha_j}F(\alpha)
&=-\mathbb{E}_{g_*,\omega_*(\alpha)}\langle\partial_{\alpha_i}\partial_{\alpha_j}\log g(\theta,\alpha)\rangle_{g(\cdot,\alpha),\omega(\alpha)}-\mathbb{E}_{g_*,\omega_*(\alpha)}\mathrm{Cov}_{g(\cdot,\alpha),\omega(\alpha)}\big(\partial_{\alpha_i}\log g(\theta,\alpha),\partial_{\alpha_j}\log g(\theta,\alpha)\big)+o_\delta(1)\\
&=\underbrace{-\mathbb{E}_{g_*,s^{-2}}\langle\partial_{\alpha_i}\partial_{\alpha_j}\log g(\theta,\alpha)\rangle_{g(\cdot,\alpha),s^{-2}}-\mathbb{E}_{g_*,s^{-2}}\mathrm{Cov}_{g(\cdot,\alpha),s^{-2}}\big(\partial_{\alpha_i}\log g(\theta,\alpha),\partial_{\alpha_j}\log g(\theta,\alpha)\big)}_{=\partial_{\alpha_i}\partial_{\alpha_j}G_{s^2}(\alpha)}+o_\delta(1).
\end{align*}
Thus we have shown
\[
\nabla^2F(\alpha)=\nabla^2G_{s^2}(\alpha)+o_\delta(1),\tag{223}
\]
where again $o_\delta(1)$ converges to 0 uniformly over $\alpha\in O$ as $\delta\to\infty$. The approximation (221) implies that $\nabla F+\nabla R$ converges uniformly to $\nabla G_{s^2}+\nabla R$ over $\alpha\in O$ as $\delta\to\infty$. Then for all $\delta>\delta_0$ sufficiently large and for some function $\iota:[\delta_0,\infty)\to(0,\infty)$ satisfying $\iota(\delta)\to 0$ as $\delta\to\infty$, each point of $\mathrm{Crit}\cap\mathcal{B}(D)=\{\alpha\in\mathcal{B}(D):\nabla F(\alpha)=0\}$ must fall within a ball of radius $\iota(\delta)$ around a point of $\mathrm{Crit}_G=\{\alpha\in\mathcal{B}(D):\nabla G_{s^2}(\alpha)=0\}$. The approximation (223) further implies that for each such ball around a point $\alpha_0\in\mathrm{Crit}_G$, $\nabla^2F$ converges uniformly to $\nabla^2G_{s^2}$ on this ball, as $\delta\to\infty$.
If $\nabla^2G_{s^2}(\alpha_0)$ is non-singular, then for all $\delta>\delta_0$ sufficiently large, an argument via the topological degree shows that there must be exactly one point of $\mathrm{Crit}$ in this ball (having the same index as $\alpha_0$ as a critical point of $G_{s^2}$) — see e.g.\ [90, Lemma 5]. This shows statements (1) and (2) of the proposition. As a direct consequence of these statements, if $\alpha_*$ is the unique point of $\mathrm{Crit}_G$ and $\nabla^2G_{s^2}(\alpha_*)$ is non-singular, then there is a unique point of $\mathrm{Crit}\cap\mathcal{B}(D)$. If furthermore $g_*(\theta)=g(\theta,\alpha_*)$, then this point of $\mathrm{Crit}\cap\mathcal{B}(D)$ must be $\alpha_*$ itself, since $\nabla F(\alpha_*)=0$ by Lemma 2.12(c).

Analysis of Example 2.17. We verify Assumption 2.2(b) for Example 2.17 of the Gaussian mixture model with varying means. Let $\iota\in\{1,\ldots,K\}$ denote the mixture component of $\theta$, and let $\langle f(\iota,\theta)\rangle=\mathbb{E}[f(\iota,\theta)\mid\theta]$ denote the posterior average over $\iota$ given $\theta\sim\mathcal{N}(\alpha_\iota,\omega_\iota^{-1})$ and prior $\mathbb{P}[\iota=k]=p_k$. Let $\kappa_2(\cdot)$ denote the covariance associated to $\langle\cdot\rangle$. Then, since
\[
\log g(\theta,\alpha)=\log\sum_{k=1}^Kp_k\sqrt{\frac{\omega_k}{2\pi}}\exp\Big(-\frac{\omega_k}{2}(\theta-\alpha_k)^2\Big),
\]
the derivatives of $\log g(\theta,\alpha)$ up to order 2 are given by
\begin{align}
\partial_\theta\log g(\theta,\alpha)&=\langle\omega_\iota(\alpha_\iota-\theta)\rangle,
&\partial_\theta^2\log g(\theta,\alpha)&=\kappa_2\big(\omega_\iota(\alpha_\iota-\theta),\omega_\iota(\alpha_\iota-\theta)\big)-\langle\omega_\iota\rangle,\notag\\
\partial_{\alpha_i}\log g(\theta,\alpha)&=\omega_i(\theta-\alpha_i)\langle\mathbf{1}_{\iota=i}\rangle,
&\partial_{\alpha_i}\partial_\theta\log g(\theta,\alpha)&=\omega_i(\theta-\alpha_i)\kappa_2\big(\mathbf{1}_{\iota=i},\omega_\iota(\alpha_\iota-\theta)\big)+\omega_i\langle\mathbf{1}_{\iota=i}\rangle,\notag\\
\partial_{\alpha_i}\partial_{\alpha_j}\log g(\theta,\alpha)&=\omega_i\omega_j(\theta-\alpha_i)(\theta-\alpha_j)\kappa_2(\mathbf{1}_{\iota=i},\mathbf{1}_{\iota=j})-\mathbf{1}_{i=j}\,\omega_i\langle\mathbf{1}_{\iota=i}\rangle.\tag{224}
\end{align}
In particular, $|\partial_\theta\log g(\theta,\alpha)|\le C(1+|\theta|+\|\alpha\|)$ and $|\partial_{\alpha_i}\log g(\theta,\alpha)|\le C(1+|\theta|+|\alpha_i|)$, showing (19). To bound the higher-order derivatives of $\log g(\theta,\alpha)$ locally over $\alpha\in\mathbb{R}^K$, let $k_{\max}\in\{1,\ldots,K\}$ be the (unique) index corresponding to the smallest value of $\omega_k$. For any compact subset $S\subset\mathbb{R}^K$, there exist constants $B(S),c_0(S)>0$ depending on the fixed values $\{p_1,\ldots,p_K\}$, $\{\omega_1,\ldots,\omega_K\}$ and $S$ such that for all $\alpha\in S$, we have
\[
\frac{\omega_k}{2}(\theta-\alpha_k)^2\ge\frac{\omega_{k_{\max}}}{2}(\theta-\alpha_{k_{\max}})^2+c_0(S)\theta^2
\quad\text{for any }\theta>B(S)\text{ and all }k\neq k_{\max}.
\]
This implies there exists a constant $C(S)>0$ for which $\langle\mathbf{1}_{\iota\neq k_{\max}}\rangle\le C(S)e^{-c_0(S)\theta^2}$ for all $\theta>B(S)$ and $\alpha\in S$. Let $\iota'$ denote an independent copy of $\iota$ under its posterior law given $\theta$. Then for any $\theta>B$, any $\alpha\in S$, and any $k\in\{1,\ldots,K\}$, the posterior variance of $\mathbf{1}_{\iota=k}$ is bounded as
\[
\langle|\mathbf{1}_{\iota=k}-\langle\mathbf{1}_{\iota=k}\rangle|^2\rangle\le\langle|\mathbf{1}_{\iota=k}-\mathbf{1}_{\iota'=k}|^2\rangle\le 4\langle\mathbf{1}_{\iota\neq k_{\max}\text{ or }\iota'\neq k_{\max}}\rangle\le C'(S)e^{-c_0(S)\theta^2},
\]
and similarly $\langle|\omega_\iota(\theta-\alpha_\iota)-\langle\omega_\iota(\theta-\alpha_\iota)\rangle|^2\rangle\le C'(S)(1+\theta^2)e^{-c_0(S)\theta^2}$. Applying these bounds and Hölder's inequality, all posterior covariances in (224) are exponentially small in $\theta^2$ for $\theta>B(S)$, implying that all derivatives of order 2 in (224) are bounded over $\alpha\in S$ and $\theta>B(S)$. Similarly they are bounded over $\alpha\in S$ and $\theta<-B(S)$, and hence also bounded uniformly over $\alpha\in S$ and $\theta\in\mathbb{R}$, since we may bound the cumulants trivially by a constant $C(S)$ for $\theta\in[-B(S),B(S)]$. The same argument bounds all mixed cumulants of $\mathbf{1}_{\iota=k}$ and $\omega_\iota(\alpha_\iota-\theta)$ of orders 3 and 4, and hence also all partial derivatives of $\log g(\theta,\alpha)$ of orders 3 and 4 over $\alpha\in S$ and $\theta\in\mathbb{R}$. These arguments show also that as $\theta\to\pm\infty$, uniformly over $\alpha\in S$, $\kappa_2(\omega_\iota(\alpha_\iota-\theta),\omega_\iota(\alpha_\iota-\theta))\to 0$ and $\langle\omega_\iota\rangle\to\omega_{k_{\max}}$, so $\partial_\theta^2[-\log g(\theta,\alpha)]\to\omega_{k_{\max}}>0$, verifying all statements of Assumption 2.2(b).

Analysis of Example 2.18. We verify Assumption 2.2(b) in Example 2.18 for the Gaussian mixture model with fixed mixture means/variances and varying weights. Again let $\iota\in\{0,\ldots,K\}$ denote the mixture component of $\theta$, and let $\langle f(\iota,\theta)\rangle=\mathbb{E}[f(\iota,\theta)\mid\theta]$ denote the posterior average over $\iota$ given $\theta\sim\mathcal{N}(\mu_\iota,\omega_\iota^{-1})$ and prior $\mathbb{P}[\iota=k]=e^{\alpha_k}/(e^{\alpha_0}+\cdots+e^{\alpha_K})$. Let $\kappa_2(\cdot)$ denote the covariance associated to $\langle\cdot\rangle$, and in addition, let $\langle\cdot\rangle_{\mathrm{prior}}$ and $\kappa_2^{\mathrm{prior}}$ denote the mean and covariance over $\iota$ drawn from the prior $\mathbb{P}[\iota=k]=e^{\alpha_k}/(e^{\alpha_0}+\cdots+e^{\alpha_K})$.
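The first-derivative identity in (224), $\partial_\theta\log g(\theta,\alpha)=\langle\omega_\iota(\alpha_\iota-\theta)\rangle$, lends itself to a quick numerical spot-check for Example 2.17. The following sketch is our own, with arbitrary toy parameters; it compares the posterior-average formula against a central finite difference of $\log g$.

```python
import numpy as np

def log_g(theta, alpha, p, w):
    """Log-density of the Gaussian mixture with means alpha and precisions w."""
    comps = np.log(p) + 0.5 * np.log(w / (2 * np.pi)) - 0.5 * w * (theta - alpha) ** 2
    m = comps.max()
    return m + np.log(np.exp(comps - m).sum())

def posterior_weights(theta, alpha, p, w):
    """Posterior probabilities of the mixture component iota given theta."""
    comps = np.log(p) + 0.5 * np.log(w / (2 * np.pi)) - 0.5 * w * (theta - alpha) ** 2
    comps -= comps.max()
    q = np.exp(comps)
    return q / q.sum()

p = np.array([0.3, 0.5, 0.2])       # mixture weights p_k
w = np.array([1.0, 2.0, 0.5])       # precisions omega_k
alpha = np.array([-1.0, 0.5, 2.0])  # means alpha_k
theta = 0.7

# posterior-average formula: d/dtheta log g = < w_iota (alpha_iota - theta) >
q = posterior_weights(theta, alpha, p, w)
formula = np.sum(q * w * (alpha - theta))

# central finite-difference check
h = 1e-6
fd = (log_g(theta + h, alpha, p, w) - log_g(theta - h, alpha, p, w)) / (2 * h)
print(formula, fd)  # the two values should agree closely
```

The same pattern (posterior covariance of indicator and linear statistics) verifies the second-order formulas in (224).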
Then, since
\[
\log g(\theta,\alpha)=\log\sum_{k=0}^Ke^{\alpha_k}\sqrt{\frac{\omega_k}{2\pi}}\exp\Big(-\frac{\omega_k}{2}(\theta-\mu_k)^2\Big)-\log\sum_{k=0}^Ke^{\alpha_k},
\]
the derivatives of $\log g(\theta,\alpha)$ up to order 2 are given by
\begin{align}
\partial_{\alpha_i}\log g(\theta,\alpha)&=\langle\mathbf{1}_{\iota=i}\rangle-\langle\mathbf{1}_{\iota=i}\rangle_{\mathrm{prior}},
&\partial_{\alpha_i}\partial_{\alpha_j}\log g(\theta,\alpha)&=\kappa_2(\mathbf{1}_{\iota=i},\mathbf{1}_{\iota=j})-\kappa_2^{\mathrm{prior}}(\mathbf{1}_{\iota=i},\mathbf{1}_{\iota=j}),\notag\\
\partial_\theta\log g(\theta,\alpha)&=\langle\omega_\iota(\mu_\iota-\theta)\rangle,
&\partial_{\alpha_i}\partial_\theta\log g(\theta,\alpha)&=\kappa_2\big(\mathbf{1}_{\iota=i},\omega_\iota(\mu_\iota-\theta)\big),\notag\\
\partial_\theta^2\log g(\theta,\alpha)&=\kappa_2\big(\omega_\iota(\mu_\iota-\theta),\omega_\iota(\mu_\iota-\theta)\big)-\langle\omega_\iota\rangle.\tag{225}
\end{align}
In particular, this shows $\sum_k\partial_{\alpha_k}\log g(\theta,\alpha)=1-1=0$, so $\nabla_\alpha\log g(\theta,\alpha)$ always belongs to the subspace $E=\{\alpha\in\mathbb{R}^{K+1}:\alpha_0+\cdots+\alpha_K=0\}$. Also $\nabla R(\alpha)=r'(\|\alpha\|_2)\cdot\alpha/\|\alpha\|_2\in E$ if $\alpha\in E$. Furthermore, $|\partial_{\alpha_i}\log g(\theta,\alpha)|\le C$ and $|\partial_\theta\log g(\theta,\alpha)|\le C(1+|\theta|)$, showing (19). To bound the higher-order derivatives of $\log g(\theta,\alpha)$ locally over $\alpha\in E$, let $k_{\max}\in\{0,\ldots,K\}$ be the index corresponding to the smallest $\omega_k$, and among these the largest $\mu_k$ (if there are multiple $\omega_k$'s equal to the smallest value). Then for some constants $B,c_0>0$ depending only on the fixed values $\{\mu_0,\ldots,\mu_K\}$ and $\{\omega_0,\ldots,\omega_K\}$, we have
\[
\frac{\omega_k}{2}(\theta-\mu_k)^2\ge\frac{\omega_{k_{\max}}}{2}(\theta-\mu_{k_{\max}})^2+c_0\theta
\quad\text{for any }\theta>B\text{ and all }k\neq k_{\max}.
\]
This implies, for any compact subset $S\subset E$, there is a constant $C(S)>0$ for which $\langle\mathbf{1}_{\iota\neq k_{\max}}\rangle\le C(S)e^{-c_0\theta}$ for all $\theta>B$ and $\alpha\in S$. Then the same arguments as in the preceding example show
\[
\langle|\omega_\iota-\langle\omega_\iota\rangle|^2\rangle\le C'(S)e^{-c_0\theta},
\qquad
\langle|\omega_\iota(\mu_\iota-\theta)-\langle\omega_\iota(\mu_\iota-\theta)\rangle|^2\rangle\le C'(S)(1+\theta)^2e^{-c_0\theta},
\]
implying via Cauchy–Schwarz that each order-2 derivative in (225) is bounded over $\alpha\in S$ and $\theta>B$. Similarly it is bounded over $\alpha\in S$ and $\theta<-B$, hence also for all $\alpha\in S$ and $\theta\in\mathbb{R}$. The same argument applies to bound the mixed cumulants of $\omega_\iota$ and $\omega_\iota(\mu_\iota-\theta)$ of orders 3 and 4, and thus the partial derivatives of $\log g(\theta,\alpha)$ of orders 3 and 4. This shows also $\lim_{\theta\to\infty}\partial_\theta^2[-\log g(\theta,\alpha)]=\omega_{k_{\max}}>0$ uniformly over $\alpha\in S$, and a similar statement holds for $\theta\to-\infty$, establishing all conditions of Assumption 2.2(b).

A Proof of Theorem 2.3

Theorem 2.3(a) follows immediately from [27, Theorem 2.5], upon identifying $s(\theta,\alpha)$ of [27, Theorem 2.5] as $(\log g)'(\theta)$ (with no dependence on $\alpha$) and $G(\alpha,\mathcal{P})=0$. The required conditions of [27, Assumption 2.2] for $s(\cdot)$ hold by Assumption 2.2(a), and the conditions of [27, Assumption 2.3] for $G(\cdot)$ are vacuous. For Theorem 2.3(b), consider first the following global version of Assumption 2.2(b):

Assumption A.1. $\log g(\theta,\alpha)$ and $R(\alpha)$ are thrice continuously-differentiable and satisfy (19), and the conditions (20) hold for constants $C,r_0,c_0>0$ globally over all $\alpha\in\mathbb{R}^K$.

Under Assumption A.1, Theorem 2.3(b) again follows from [27, Theorem 2.5] upon identifying $s(\theta,\alpha)=\partial_\theta\log g(\theta,\alpha)$ and $G(\alpha,\mathcal{P})=\mathbb{E}_{\theta\sim\mathcal{P}}[\nabla_\alpha\log g(\theta,\alpha)]-\nabla R(\alpha)$, where all conditions of [27, Assumptions 2.2 and 2.3] may be checked from these conditions of Assumption A.1. To show Theorem 2.3(b) under the weaker local conditions of Assumption 2.2(b), we may apply the following truncation argument: note first that twice continuous-differentiability of $\log g(\theta,\alpha)$ and $R(\alpha)$ implies that $\nabla_{(\theta,\alpha)}\log g(\theta,\alpha)$ and $\nabla R(\alpha)$ are locally Lipschitz. Together with the global linear growth conditions of (19), this implies that there exists a unique (non-explosive) solution $\{(\theta^t,\hat\alpha^t)\}_{t\ge 0}$ to the joint diffusion (9–10) for all times (c.f.\ [91, Theorem 12.1]).
Furthermore, since
\begin{align*}
\theta^t&=\theta^0+\int_0^t\Big(-\frac{1}{\sigma^2}X^\top(X\theta^s-y)+\big(\partial_\theta\log g(\theta^s_j,\hat\alpha^s)\big)_{j=1}^d\Big)\,ds+\sqrt{2}\,b^t,\\
\hat\alpha^t&=\hat\alpha^0+\int_0^t\Big(\frac{1}{d}\sum_{j=1}^d\nabla_\alpha\log g(\theta^s_j,\hat\alpha^s)-\nabla_\alpha R(\hat\alpha^s)\Big)\,ds,
\end{align*}
under the growth conditions (19), this solution satisfies the bounds
\begin{align*}
\|\theta^t\|_2&\le\|\theta^0\|_2+C\int_0^t\Big(\|X\|_{\mathrm{op}}^2\|\theta^s\|_2+\|X\|_{\mathrm{op}}\|y\|_2+\sqrt{d}+\|\theta^s\|_2+\sqrt{d}\,\|\hat\alpha^s\|_2\Big)\,ds+\sqrt{2}\,\|b^t\|_2,\\
\|\hat\alpha^t\|_2&\le\|\hat\alpha^0\|_2+C\int_0^t\Big(1+\|\hat\alpha^s\|_2+\|\theta^s\|_2/\sqrt{d}\Big)\,ds.
\end{align*}
Fixing any $T>0$, by the conditions of Assumption 2.2 and a standard maximal inequality for Brownian motion (see [27, Lemma 4.7]), there exists a constant $C_0>0$ large enough such that the event
\[
E=\Big\{\|X\|_{\mathrm{op}}\le C_0,\ \|y\|_2\le C_0\sqrt{d},\ \|\theta^0\|_2\le C_0\sqrt{d},\ \|\hat\alpha^0\|_2\le C_0,\ \sup_{t\in[0,T]}\|b^t\|_2\le C_0\sqrt{d}\Big\}
\]
holds a.s.\ for all large $n,d$. Then by a Gronwall argument, for a constant $M=M(T,C_0)>0$,
\[
\sup_{t\in[0,T]}\frac{\|\theta^t\|_2}{\sqrt{d}}+\|\hat\alpha^t\|_2<M
\]
holds on $E$. Applying the conditions of Assumption 2.2(b) with $S=\{\alpha:\|\alpha\|_2\le M\}$, there exist functions $g_M:\mathbb{R}\times\mathbb{R}^K\to\mathbb{R}$ and $R_M:\mathbb{R}^K\to\mathbb{R}$ such that $g_M(\theta,\alpha)=g(\theta,\alpha)$ and $R_M(\alpha)=R(\alpha)$ for all $\|\alpha\|\le M$, and $g_M$ and $R_M$ satisfy Assumption A.1. Let $\{(\theta^t_M,\hat\alpha^t_M)\}_{t\ge 0}$ be the solution of (9–10) defined with $g_M(\cdot)$ and $R_M(\cdot)$ in place of $g(\cdot)$ and $R(\cdot)$, and let $\eta^t_M=X\theta^t_M$. Then as argued above, Theorem 2.3(b) holds for $\{(\theta^t_M,\eta^t_M,\hat\alpha^t_M)\}_{t\ge 0}$, showing that a.s.\ as $n,d\to\infty$,
\begin{align}
\frac{1}{d}\sum_{j=1}^d\delta_{\theta^*_j,\{\theta^t_{M,j}\}_{t\in[0,T]}}&\xrightarrow{W_2}P\big(\theta^*,\{\theta^t_M\}_{t\in[0,T]}\big),\notag\\
\frac{1}{n}\sum_{i=1}^n\delta_{\eta^*_i,\varepsilon_i,\{\eta^t_{M,i}\}_{t\in[0,T]}}&\xrightarrow{W_2}P\big(\eta^*,\varepsilon,\{\eta^t_M\}_{t\in[0,T]}\big),\notag\\
\{\hat\alpha^t_M\}_{t\in[0,T]}&\to\{\alpha^t_M\}_{t\in[0,T]}\tag{226}
\end{align}
for limiting processes defined by the DMFT equations (22–28), also with $g_M(\cdot)$ and $R_M(\cdot)$ in place of $g(\cdot)$ and $R(\cdot)$. Since $\{(\theta^t,\eta^t,\hat\alpha^t)\}_{t\in[0,T]}=\{(\theta^t_M,\eta^t_M,\hat\alpha^t_M)\}_{t\in[0,T]}$ a.s.\ for all large $n,d$, this implies that (226) holds also with $\{(\theta^t,\eta^t,\hat\alpha^t)\}_{t\in[0,T]}$ in place of $\{(\theta^t_M,\eta^t_M,\hat\alpha^t_M)\}_{t\in[0,T]}$. Furthermore, the deterministic limit process $\{\alpha^t_M\}_{t\in[0,T]}$ must satisfy $\|\alpha^t_M\|\le M$ for all $t\in[0,T]$, so the solution up to time $T$ of the DMFT equations (22–28) with $g_M(\cdot)$ and $R_M(\cdot)$ is also a solution of these equations with $g(\cdot)$ and $R(\cdot)$. This proves Theorem 2.3(b) under Assumption 2.2(b).

B Correlation and response functions for a Gaussian prior

For illustration, we check Definition 2.4 explicitly for the dynamics (7) with a Gaussian prior
\[
g(\theta)=\sqrt{\frac{\lambda}{2\pi}}\exp\Big(-\frac{\lambda\theta^2}{2}\Big).
\]
Then (7) is the Ornstein–Uhlenbeck process
\[
d\theta^t=\Big[-\Big(\frac{X^\top X}{\sigma^2}+\lambda I\Big)\theta^t+\frac{X^\top y}{\sigma^2}\Big]\,dt+\sqrt{2}\,db^t.\tag{227}
\]

Lemma B.1. Under Assumption 2.1, let
\[
\mu=\lim_{n,d\to\infty}\frac{1}{d}\sum_{i=1}^d\delta_{\lambda_i(X^\top X/\sigma^2)}
\]
be the almost-sure limit of the empirical eigenvalue distribution of $X^\top X/\sigma^2$. Then for the dynamics (227) with a fixed Gaussian prior, the corresponding DMFT system prescribed by Theorem 2.3(a) has the correlation and response functions
\begin{align*}
C_\theta(t,s)&=\int\Big[\mathbb{E}(\theta^0)^2\,e^{-(\lambda+x)(t+s)}+\frac{\mathbb{E}(\theta^*)^2x^2+x}{(\lambda+x)^2}\big(1-e^{-(\lambda+x)t}\big)\big(1-e^{-(\lambda+x)s}\big)\\
&\hspace{4cm}+\frac{1}{\lambda+x}\big(e^{-(\lambda+x)|t-s|}-e^{-(\lambda+x)(t+s)}\big)\Big]\,\mu(dx),\\
C_\theta(t,*)&=\int\frac{\mathbb{E}(\theta^*)^2x}{\lambda+x}\big(1-e^{-(\lambda+x)t}\big)\,\mu(dx),\\
R_\theta(t,s)&=\int e^{-(\lambda+x)(t-s)}\,\mu(dx),\\
C_\eta(t,s)&=\int\Big[\mathbb{E}(\theta^0)^2\,x\,e^{-(\lambda+x)(t+s)}+\big(\mathbb{E}(\theta^*)^2x+1\big)\Big(\frac{x}{\lambda+x}\big(1-e^{-(\lambda+x)t}\big)-1\Big)\Big(\frac{x}{\lambda+x}\big(1-e^{-(\lambda+x)s}\big)-1\Big)\\
&\hspace{4cm}+(\delta-1)+\frac{x}{\lambda+x}\big(e^{-(\lambda+x)|t-s|}-e^{-(\lambda+x)(t+s)}\big)\Big]\,\mu(dx),\\
R_\eta(t,s)&=\int x\,e^{-(\lambda+x)(t-s)}\,\mu(dx).
\end{align*}

Proof. Setting $\Lambda=\frac{X^\top X}{\sigma^2}+\lambda I$, the dynamics (227) have the explicit solution
\begin{align}
\theta^t&=e^{-\Lambda t}\theta^0+\Lambda^{-1}\big(I-e^{-\Lambda t}\big)\frac{X^\top y}{\sigma^2}+\int_0^te^{-\Lambda(t-s)}\sqrt{2}\,db^s\notag\\
&=e^{-\Lambda t}\theta^0+\Lambda^{-1}\big(I-e^{-\Lambda t}\big)\Big(\frac{X^\top X}{\sigma^2}\theta^*+\frac{X^\top\varepsilon}{\sigma^2}\Big)+\int_0^te^{-\Lambda(t-s)}\sqrt{2}\,db^s.\tag{228}
\end{align}
Recall the definitions of $e_j(\theta)$ and $x_i(\theta)$ from (100) and the associated correlation and response matrices (101).
Under Assumption 2.1, applying the explicit form (228) and independence of $X,\theta^0,\theta^*,\varepsilon$, it is direct to check that almost surely,
\begin{align*}
\lim_{n,d\to\infty}\frac{1}{d}\operatorname{Tr}C_\theta(t,s)
&=\lim_{n,d\to\infty}\frac{1}{d}\langle\theta^{t\top}\theta^s\rangle\\
&=\lim_{n,d\to\infty}\frac{1}{d}\operatorname{Tr}\Big[\mathbb{E}(\theta^0)^2\,e^{-\Lambda(t+s)}+\mathbb{E}(\theta^*)^2\,\frac{X^\top X}{\sigma^2}\big(I-e^{-\Lambda t}\big)\Lambda^{-2}\big(I-e^{-\Lambda s}\big)\frac{X^\top X}{\sigma^2}+\int_0^{t\wedge s}2e^{-\Lambda(t+s-2r)}\,dr\Big]\\
&\qquad+\frac{1}{d}\operatorname{Tr}\Big[\mathbb{E}\varepsilon^2\cdot\frac{X}{\sigma^2}\big(I-e^{-\Lambda t}\big)\Lambda^{-2}\big(I-e^{-\Lambda s}\big)\frac{X^\top}{\sigma^2}\Big]\\
&=\int\Big[\mathbb{E}(\theta^0)^2e^{-(\lambda+x)(t+s)}+\frac{\mathbb{E}(\theta^*)^2x^2+x}{(\lambda+x)^2}\big(1-e^{-(\lambda+x)t}\big)\big(1-e^{-(\lambda+x)s}\big)+\frac{1}{\lambda+x}\big(e^{-(\lambda+x)|t-s|}-e^{-(\lambda+x)(t+s)}\big)\Big]\mu(dx)
\end{align*}
and
\[
\lim_{n,d\to\infty}\frac{1}{d}\operatorname{Tr}C_\theta(t,*)=\lim_{n,d\to\infty}\frac{1}{d}\langle\theta^{t\top}\theta^*\rangle=\lim_{n,d\to\infty}\frac{1}{d}\operatorname{Tr}\Big[\mathbb{E}(\theta^*)^2\,\Lambda^{-1}\big(I-e^{-\Lambda t}\big)\frac{X^\top X}{\sigma^2}\Big]=\int\frac{\mathbb{E}(\theta^*)^2x}{\lambda+x}\big(1-e^{-(\lambda+x)t}\big)\mu(dx).
\]
Furthermore, the above form (228) for $\theta^t$ implies
\[
P^t(\theta)=e^{-\Lambda t}\theta+\Lambda^{-1}\big(I-e^{-\Lambda t}\big)\Big(\frac{X^\top X}{\sigma^2}\theta^*+\frac{X^\top\varepsilon}{\sigma^2}\Big).\tag{229}
\]
Then $\nabla P^te_j(\theta)=\nabla[e_j^\top P^t(\theta)]=e^{-\Lambda t}e_j$ is a constant function not depending on $\theta$, and
\[
\lim_{n,d\to\infty}\frac{1}{d}\operatorname{Tr}R_\theta(t,s)=\lim_{n,d\to\infty}\frac{1}{d}\sum_{j=1}^d[\nabla e_j^\top\nabla P^{t-s}e_j](\theta^s)=\lim_{n,d\to\infty}\frac{1}{d}\operatorname{Tr}e^{-\Lambda(t-s)}=\int e^{-(\lambda+x)(t-s)}\mu(dx).
\]
By Theorem 4.3, this shows the forms of $C_\theta(t,s)$, $C_\theta(t,*)$, and $R_\theta(t,s)$. From (228) and (229), we have also
\[
X\theta^t-y=Xe^{-\Lambda t}\theta^0+X\Big(\Lambda^{-1}\big(I-e^{-\Lambda t}\big)\frac{X^\top X}{\sigma^2}-I\Big)\theta^*+\Big(X\Lambda^{-1}\big(I-e^{-\Lambda t}\big)\frac{X^\top}{\sigma^2}-I_n\Big)\varepsilon+\int_0^tXe^{-\Lambda(t-s)}\sqrt{2}\,db^s
\]
and $\nabla P^tx_i(\theta)=(\sqrt{\delta}/\sigma)\nabla[(X^\top e_i)^\top P^t(\theta)]=(\sqrt{\delta}/\sigma)e^{-\Lambda t}X^\top e_i$. Then
\begin{align*}
\lim_{n,d\to\infty}\frac{1}{n}\operatorname{Tr}C_\eta(t,s)
&=\lim_{n,d\to\infty}\frac{1}{n}\cdot\frac{\delta}{\sigma^2}(X\theta^t-y)^\top(X\theta^s-y)\\
&=\lim_{n,d\to\infty}\frac{1}{d\sigma^2}\operatorname{Tr}\Big[\mathbb{E}(\theta^0)^2\,e^{-\Lambda t}X^\top Xe^{-\Lambda s}+\mathbb{E}(\theta^*)^2\Big(\Lambda^{-1}\big(I-e^{-\Lambda t}\big)\frac{X^\top X}{\sigma^2}-I\Big)X^\top X\Big(\Lambda^{-1}\big(I-e^{-\Lambda s}\big)\frac{X^\top X}{\sigma^2}-I\Big)\\
&\hspace{3cm}+\int_0^{t\wedge s}2e^{-\Lambda(t-r)}X^\top Xe^{-\Lambda(s-r)}\,dr\Big]\\
&\qquad+\frac{1}{d\sigma^2}\operatorname{Tr}\Big[\mathbb{E}\varepsilon^2\Big(X\Lambda^{-1}\big(I-e^{-\Lambda t}\big)\frac{X^\top}{\sigma^2}-I_n\Big)^\top\Big(X\Lambda^{-1}\big(I-e^{-\Lambda s}\big)\frac{X^\top}{\sigma^2}-I_n\Big)\Big]\\
&=\int\Big[\mathbb{E}(\theta^0)^2xe^{-(\lambda+x)(t+s)}+\big(\mathbb{E}(\theta^*)^2x+1\big)\Big(\frac{x}{\lambda+x}\big(1-e^{-(\lambda+x)t}\big)-1\Big)\Big(\frac{x}{\lambda+x}\big(1-e^{-(\lambda+x)s}\big)-1\Big)\\
&\hspace{3cm}+(\delta-1)+\frac{x}{\lambda+x}\big(e^{-(\lambda+x)|t-s|}-e^{-(\lambda+x)(t+s)}\big)\Big]\mu(dx),
\end{align*}
and
\[
\lim_{n,d\to\infty}\frac{1}{n}\operatorname{Tr}R_\eta(t,s)=\lim_{n,d\to\infty}\frac{1}{n}\sum_{i=1}^n[\nabla x_i^\top\nabla P^{t-s}x_i](\theta^s)=\lim_{n,d\to\infty}\frac{1}{n}\cdot\frac{\delta}{\sigma^2}\sum_{i=1}^ne_i^\top Xe^{-\Lambda(t-s)}X^\top e_i=\lim_{n,d\to\infty}\frac{1}{d\sigma^2}\operatorname{Tr}Xe^{-\Lambda(t-s)}X^\top=\int xe^{-(\lambda+x)(t-s)}\mu(dx).
\]
By Theorem 4.3, this shows the forms of $C_\eta(t,s)$ and $R_\eta(t,s)$.

From Lemma B.1 it is apparent that the approximations (30–32) and (34–35) hold with $\varepsilon(t)=Ce^{-ct}$ and
\begin{align*}
c^{\mathrm{init}}_\theta(s)&=-\int\frac{\mathbb{E}(\theta^*)^2x^2+x}{(\lambda+x)^2}e^{-(\lambda+x)s}\,\mu(dx),
&c^{\mathrm{tti}}_\theta(\infty)&=\int\frac{\mathbb{E}(\theta^*)^2x^2+x}{(\lambda+x)^2}\,\mu(dx),\\
c^{\mathrm{tti}}_\theta(\tau)&=c^{\mathrm{tti}}_\theta(\infty)+\int\frac{1}{\lambda+x}e^{-(\lambda+x)\tau}\,\mu(dx),
&r^{\mathrm{tti}}_\theta(\tau)&=\int e^{-(\lambda+x)\tau}\,\mu(dx),\\
c_\theta(*)&=\int\frac{\mathbb{E}(\theta^*)^2x}{\lambda+x}\,\mu(dx),\\
c^{\mathrm{init}}_\eta(s)&=\int\big(\mathbb{E}(\theta^*)^2x+1\big)\frac{\lambda x}{(\lambda+x)^2}e^{-(\lambda+x)s}\,\mu(dx),
&c^{\mathrm{tti}}_\eta(\infty)&=\int\big(\mathbb{E}(\theta^*)^2x+1\big)\frac{\lambda^2}{(\lambda+x)^2}\,\mu(dx)+\delta-1,\\
c^{\mathrm{tti}}_\eta(\tau)&=c^{\mathrm{tti}}_\eta(\infty)+\int\frac{x}{\lambda+x}e^{-(\lambda+x)\tau}\,\mu(dx),
&r^{\mathrm{tti}}_\eta(\tau)&=\int xe^{-(\lambda+x)\tau}\,\mu(dx).
\end{align*}
These functions $c^{\mathrm{tti}}_\theta,c^{\mathrm{tti}}_\eta$ have the forms (33) for the positive, finite measures $\mu_\theta(da)=a^{-1}\mu(d(a-\lambda))$ and $\mu_\eta(da)=[(a-\lambda)/a]\,\mu(d(a-\lambda))$ supported on $a\in[\lambda,\infty)$. Furthermore, these functions $c^{\mathrm{tti}}_\theta,c^{\mathrm{tti}}_\eta,r^{\mathrm{tti}}_\theta,r^{\mathrm{tti}}_\eta$ satisfy the fluctuation-dissipation relations (36), verifying all conditions of Definition 2.4.

C Sufficient conditions for a log-Sobolev inequality

We prove Proposition 2.8 on a log-Sobolev inequality for the posterior law.

Lemma C.1. Under Assumption 2.2(a), the prior density $g(\cdot)$ satisfies the LSI (17). Furthermore, consider the law
\[
P(\theta)=\frac{g(\theta)e^{-\frac{a}{2}\theta^2+b\theta}}{Z},\qquad Z=\int g(\theta)e^{-\frac{a}{2}\theta^2+b\theta}\,d\theta.\tag{230}
\]
For any $a>0$ and $b\in\mathbb{R}$, this law $P(\theta)$ also satisfies the LSI (17). Both statements hold with the constant $C_{\mathrm{LSI}}=(4/c_0)\exp(8r_0^2(c_0+C)^2/(\pi c_0))$, where $C,c_0,r_0$ are the constants of Assumption 2.2(a).

Proof. Applying $x=\min(x,-c_0)+\max(x+c_0,0)$, define
\begin{align*}
\ell_-(\theta)&=\log g(0)+(\log g)'(0)\cdot\theta+\int_0^\theta\int_0^x\min\big((\log g)''(u),-c_0\big)\,du\,dx,\\
\ell_+(\theta)&=\int_0^\theta\int_0^x\max\big((\log g)''(u)+c_0,0\big)\,du\,dx,
\end{align*}
so that $\log g(\theta)=\ell_-(\theta)+\ell_+(\theta)$. Then set $\tilde\ell_-(\theta)=-\log Z-\frac{a}{2}\theta^2+b\theta+\ell_-(\theta)$, so that $\log P(\theta)=\tilde\ell_-(\theta)+\ell_+(\theta)$. By definition we have $\ell_-''(\theta)=\min((\log g)''(\theta),-c_0)\le-c_0$ and also $\tilde\ell_-''(\theta)=-a+\ell_-''(\theta)\le-c_0$. We have $(\log g)''(u)+c_0\le 0$ for all $|u|>r_0$ and $(\log g)''(u)+c_0\le c_0+C$ for all $|u|\le r_0$. Hence $|\ell_+'(\theta)|\le r_0(c_0+C)$. Thus both $\log g(\theta)$ and $\log P(\theta)$ are sums of a $c_0$-strongly-concave potential $\ell_-(\theta)$ or $\tilde\ell_-(\theta)$ and a $r_0(c_0+C)$-Lipschitz perturbation $\ell_+(\theta)$. Then [92, Lemma 2.1] shows that both laws satisfy a LSI with constant $C_{\mathrm{LSI}}=(4/c_0)\exp(8r_0^2(c_0+C)^2/(\pi c_0))$.

Proof of Proposition 2.8. Under condition (a), the posterior density is strongly log-concave, satisfying
\[
-\nabla^2\log P_g(\theta\mid X,y)=\frac{1}{\sigma^2}X^\top X-\operatorname{diag}\big(((\log g)''(\theta_j))_{j=1}^d\big)\succeq c_0I.
\]
Hence (46) with $C_{\mathrm{LSI}}=2/c_0$ follows from the Bakry–Émery criterion. Clearly this holds for any noise variance $\sigma^2>0$, verifying Assumption 2.7.
The proof under condition (b) is an adaptation of the argument of [93]; see also [10, Theorem 3.4] and [94] for similar specializations to the linear model. Under the conditions for $X$ of Assumption 2.1, by the Bai–Yin law ([95, Theorem 3.1]), for any $\varepsilon>0$ the event
\[
E(X)=\Big\{(\sqrt{\delta}-1)_+^2-\varepsilon\le\lambda_{\min}(X^\top X)\le\lambda_{\max}(X^\top X)\le(\sqrt{\delta}+1)^2+\varepsilon\Big\}
\]
holds a.s.\ for all large $n,d$ (where $\delta=\lim n/d$). Thus, choosing some sufficiently small $\varepsilon>0$ and setting
\[
\kappa=(\sqrt{\delta}-1)_+^2-2\varepsilon,\qquad
\tau^2=\sigma^2\Big(\big[(\sqrt{\delta}+1)^2-(\sqrt{\delta}-1)_+^2+3\varepsilon\big]^{-1}-\varepsilon\Big),\qquad
\Sigma=\sigma^2(X^\top X-\kappa I)^{-1}-\tau^2I,
\]
we have $X^\top X-\kappa I\succeq\varepsilon I$, $(X^\top X-\kappa I)^{-1}\succeq(\frac{\tau^2}{\sigma^2}+\varepsilon)I$, and hence $\Sigma\succeq\varepsilon\sigma^2I$ on $E(X)$. Since $\sigma^2(X^\top X-\kappa I)^{-1}=\Sigma+\tau^2I$, we have the Gaussian convolution identity
\[
e^{-\frac{1}{2\sigma^2}\theta^\top(X^\top X-\kappa I)\theta}\propto\int e^{-\frac{1}{2\tau^2}\|\theta-\varphi\|_2^2}e^{-\frac{1}{2}\varphi^\top\Sigma^{-1}\varphi}\,d\varphi.
\]
Then the posterior density $P_g(\theta\mid X,y)$ satisfies
\begin{align*}
P_g(\theta\mid X,y)&\propto\exp\Big(-\frac{1}{2\sigma^2}\|y-X\theta\|^2\Big)\prod_{j=1}^dg(\theta_j)\\
&\propto\exp\Big(-\frac{1}{2\sigma^2}\theta^\top(X^\top X-\kappa I)\theta\Big)\prod_{j=1}^dg(\theta_j)\exp\Big(-\frac{\kappa}{2\sigma^2}\theta_j^2+\frac{x_j^\top y}{\sigma^2}\theta_j\Big)\\
&\propto\int e^{-\frac{1}{2}\varphi^\top\Sigma^{-1}\varphi}\prod_{j=1}^dg(\theta_j)\exp\Big(-\frac{\kappa}{2\sigma^2}\theta_j^2-\frac{1}{2\tau^2}(\theta_j-\varphi_j)^2+\frac{x_j^\top y}{\sigma^2}\theta_j\Big)\,d\varphi.
\end{align*}
Defining
\begin{align*}
q_{\varphi_j}(\theta_j)&=\frac{1}{Z_j(\varphi_j)}g(\theta_j)\exp\Big(-\frac{\kappa}{2\sigma^2}\theta_j^2-\frac{1}{2\tau^2}(\theta_j-\varphi_j)^2+\frac{x_j^\top y}{\sigma^2}\theta_j\Big),\\
Z_j(\varphi_j)&=\int g(\theta_j)\exp\Big(-\frac{\kappa}{2\sigma^2}\theta_j^2-\frac{1}{2\tau^2}(\theta_j-\varphi_j)^2+\frac{x_j^\top y}{\sigma^2}\theta_j\Big)\,d\theta_j,\\
\mu(\varphi)&=\frac{e^{-\frac{1}{2}\varphi^\top\Sigma^{-1}\varphi}\prod_{j=1}^dZ_j(\varphi_j)}{\int e^{-\frac{1}{2}\varphi'^\top\Sigma^{-1}\varphi'}\prod_{j=1}^dZ_j(\varphi'_j)\,d\varphi'},
\end{align*}
this gives the mixture-of-products representation
\[
P_g(\theta\mid X,y)=\int\underbrace{\prod_{j=1}^dq_{\varphi_j}(\theta_j)}_{:=q_\varphi(\theta)}\mu(\varphi)\,d\varphi.
\]
Then for any $f\in C^1(\mathbb{R}^d)$,
\[
\operatorname{Ent}[f(\theta)^2\mid X,y]=\mathbb{E}_{\varphi\sim\mu}\operatorname{Ent}_{\theta\sim q_\varphi}f(\theta)^2+\operatorname{Ent}_{\varphi\sim\mu}\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2.\tag{231}
\]
For the first term of (231), note that inside the exponential defining $q_{\varphi_j}(\theta_j)$ we have $\kappa\ge-2\varepsilon$ and $\tau^2\le\sigma^2((1+3\varepsilon)^{-1}-\varepsilon)$, so the coefficient of $\theta_j^2$ is negative for sufficiently small $\varepsilon>0$. Then by Lemma C.1, $q_{\varphi_j}(\theta_j)$ satisfies the univariate LSI (17) with constant $C_{\mathrm{LSI}}:=(4/c_0)\exp(8r_0^2(c_0+C)^2/(\pi c_0))$. So the product law $q_\varphi$ satisfies the LSI with the same constant by tensorization, and
\[
\mathbb{E}_{\varphi\sim\mu}\operatorname{Ent}_{\theta\sim q_\varphi}f(\theta)^2\le C_{\mathrm{LSI}}\,\mathbb{E}_{\varphi\sim\mu}\mathbb{E}_{\theta\sim q_\varphi}\|\nabla f(\theta)\|_2^2=C_{\mathrm{LSI}}\,\mathbb{E}[\|\nabla f(\theta)\|_2^2\mid X,y].\tag{232}
\]
For the second term of (231), note that
\[
-\nabla_\varphi^2\log\mu(\varphi)=\Sigma^{-1}-\operatorname{diag}\big(((\log Z_j)''(\varphi_j))_{j=1}^d\big)=\Sigma^{-1}+\operatorname{diag}\Big(\Big(\frac{1}{\tau^2}-\frac{1}{\tau^4}\operatorname{Var}_{\theta_j\sim q_{\varphi_j}}[\theta_j]\Big)_{j=1}^d\Big).
\]
The LSI for $q_{\varphi_j}$ implies $\operatorname{Var}_{\theta_j\sim q_{\varphi_j}}[\theta_j]\le C_{\mathrm{LSI}}/2$ by its implied Poincaré inequality. Applying $(\sqrt{\delta}+1)^2-(\sqrt{\delta}-1)_+^2=4\sqrt{\delta}\,\mathbf{1}\{\delta>1\}+(\sqrt{\delta}+1)^2\,\mathbf{1}\{\delta\le 1\}$ and the given condition (b) for $\sigma^2$, we see that for a sufficiently small choice of $\varepsilon>0$, we have $\tau^2\ge C_{\mathrm{LSI}}/[2(1-\varepsilon)]$. Then this gives $-\nabla_\varphi^2\log\mu(\varphi)\succeq(\varepsilon/\tau^2)I$. Then by the Bakry–Émery criterion, $\mu$ satisfies the LSI $\operatorname{Ent}_{\varphi\sim\mu}f(\varphi)^2\le(2\tau^2/\varepsilon)\mathbb{E}_{\varphi\sim\mu}\|\nabla f(\varphi)\|_2^2$, hence
\[
\operatorname{Ent}_{\varphi\sim\mu}\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2\le\frac{2\tau^2}{\varepsilon}\mathbb{E}_{\varphi\sim\mu}\big\|\nabla_\varphi\big(\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2\big)^{1/2}\big\|_2^2.
\]
Denote by $q_{\varphi,-j}$ the product of components of $q_\varphi$ other than the $j$-th. We compute
\[
\partial_{\varphi_j}\big(\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2\big)^{1/2}
=\frac{\partial_{\varphi_j}\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2}{2(\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2)^{1/2}}
=\frac{\mathbb{E}_{\theta_{-j}\sim q_{\varphi,-j}}\partial_{\varphi_j}\mathbb{E}_{\theta_j\sim q_{\varphi_j}}f(\theta)^2}{2(\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2)^{1/2}}
=\frac{\mathbb{E}_{\theta_{-j}\sim q_{\varphi,-j}}\operatorname{Cov}_{\theta_j\sim q_{\varphi_j}}[f(\theta)^2,\theta_j]}{2\tau^2(\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2)^{1/2}}.
\]
We apply [96, Proposition 2.2]: for any law $\nu$ on $\mathbb{R}^d$ satisfying a LSI $\operatorname{Ent}_\nu f^2\le C_{\mathrm{LSI}}\mathbb{E}_\nu\|\nabla f\|_2^2$, and for any smooth functions $f,g:\mathbb{R}^d\to\mathbb{R}$,
\[
\operatorname{Cov}_\nu[f^2,g]\le C\sup_{\theta\in\mathbb{R}^d}\|\nabla g(\theta)\|_2\cdot(\mathbb{E}_\nu f^2)^{1/2}(\mathbb{E}_\nu\|\nabla f\|_2^2)^{1/2},
\]
where $C$ depends only on the LSI constant $C_{\mathrm{LSI}}$ of $\nu$. Applying this to the univariate law $\nu=q_{\varphi_j}$, followed by Cauchy–Schwarz,
\[
\mathbb{E}_{\theta_{-j}\sim q_{\varphi,-j}}\operatorname{Cov}_{\theta_j\sim q_{\varphi_j}}[f(\theta)^2,\theta_j]\le C_1(\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2)^{1/2}(\mathbb{E}_{\theta\sim q_\varphi}(\partial_{\theta_j}f(\theta))^2)^{1/2},
\]
where $C_1$ depends only on the LSI constant $C_{\mathrm{LSI}}$ for $q_{\varphi_j}$. Summing over $j=1,\ldots,d$ gives
\[
\big\|\nabla_\varphi(\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2)^{1/2}\big\|_2^2=\sum_{j=1}^d\Big(\partial_{\varphi_j}(\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2)^{1/2}\Big)^2\le\sum_{j=1}^d\Big(\frac{C_1}{2\tau^2}\Big)^2\mathbb{E}_{\theta\sim q_\varphi}(\partial_{\theta_j}f(\theta))^2=\Big(\frac{C_1}{2\tau^2}\Big)^2\mathbb{E}_{\theta\sim q_\varphi}\|\nabla f(\theta)\|_2^2.
\]
Thus
\[
\operatorname{Ent}_{\varphi\sim\mu}\mathbb{E}_{\theta\sim q_\varphi}f(\theta)^2\le\frac{2\tau^2}{\varepsilon}\Big(\frac{C_1}{2\tau^2}\Big)^2\mathbb{E}_{\varphi\sim\mu}\mathbb{E}_{\theta\sim q_\varphi}\|\nabla f(\theta)\|_2^2=\frac{C_1^2}{2\varepsilon\tau^2}\mathbb{E}[\|\nabla f(\theta)\|_2^2\mid X,y].\tag{233}
\]
Applying (232) and (233) to (231) completes the proof of (46), on the above event $E(X)$. Since the given condition (b) also holds for all $\tilde\sigma^2\ge\sigma^2$ when it holds for $\sigma^2$, this verifies Assumption 2.7.
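The matrix construction underlying the mixture-of-products representation above — the identity $\sigma^2(X^\top X-\kappa I)^{-1}=\Sigma+\tau^2I$ with $\Sigma$ positive definite on the event $E(X)$ — is easy to sanity-check numerically. The sketch below is our own toy instantiation, with i.i.d.\ $\mathcal{N}(0,1/d)$ entries for $X$ and a hand-picked small $\varepsilon$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma2, eps = 800, 400, 4.0, 0.05
delta = n / d

# X with i.i.d. N(0, 1/d) entries, so the eigenvalues of X^T X concentrate
# on [(sqrt(delta)-1)_+^2, (sqrt(delta)+1)^2] by the Bai-Yin law
X = rng.normal(scale=1.0 / np.sqrt(d), size=(n, d))

edge_low = max(np.sqrt(delta) - 1.0, 0.0) ** 2
edge_high = (np.sqrt(delta) + 1.0) ** 2
kappa = edge_low - 2 * eps
tau2 = sigma2 * (1.0 / (edge_high - edge_low + 3 * eps) - eps)
G = X.T @ X
Sigma = sigma2 * np.linalg.inv(G - kappa * np.eye(d)) - tau2 * np.eye(d)

evals_G = np.linalg.eigvalsh(G)
evals_S = np.linalg.eigvalsh(Sigma)
# On the event E(X), the proof guarantees Sigma >= eps * sigma2 * I
print(evals_G.min(), evals_G.max(), evals_S.min())
```

When the spectrum of $X^\top X$ falls inside the Bai–Yin interval widened by $\varepsilon$, the smallest eigenvalue of $\Sigma$ printed here should be at least $\varepsilon\sigma^2$, matching the bound $\Sigma\succeq\varepsilon\sigma^2I$ used in the proof.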
Finally, under condition (c), we note that on the above event $E(X)$ we have
\[
-\nabla^2\log P_g(\theta\mid X,y)=\frac{1}{\sigma^2}X^\top X-\operatorname{diag}\big(((\log g)''(\theta_j))_{j=1}^d\big)\succeq\varepsilon I
\]
for all $\sigma^2\le[(\sqrt{\delta}-1)^2-\varepsilon]/(C+\varepsilon)$, so (46) holds with $C_{\mathrm{LSI}}=2/\varepsilon$ by the Bakry–Émery criterion. Choosing $\varepsilon>0$ small enough, under condition (c) we have $[(\sqrt{\delta}-1)^2-\varepsilon]/(C+\varepsilon)>4C_0\sqrt{\delta}$, so that (46) holds for $\sigma^2>[(\sqrt{\delta}-1)^2-\varepsilon]/(C+\varepsilon)$ by the analysis of condition (b). Thus, on $E(X)$, (46) holds with a uniform constant $C_{\mathrm{LSI}}>0$ for all $\sigma^2>0$, again verifying Assumption 2.7.

D Auxiliary lemmas

Lemma D.1 (Coupling of Gaussian processes). Let $\{K(t,s)\}_{t,s\in[0,T]}$ and $\{\tilde K(t,s)\}_{t,s\in[0,T]}$ be two positive semidefinite covariance kernels such that for some $\varepsilon>0$ and $C_0>0$,
\[
\sup_{t,s\in[0,T]}|K(t,s)-\tilde K(t,s)|\le\varepsilon\tag{234}
\]
and
\[
\sup_{t,s\in[0,T]}K(t,t)+K(s,s)-2K(t,s)\le C_0|t-s|.\tag{235}
\]
Then there exists a coupling of the two mean-zero Gaussian processes $\{u_t\}_{t\in[0,T]}$ and $\{\tilde u_t\}_{t\in[0,T]}$ with covariances $\mathbb{E}[u_tu_s]=K(t,s)$ and $\mathbb{E}[\tilde u_t\tilde u_s]=\tilde K(t,s)$ such that
\[
\sup_{t\in[0,T]}\mathbb{E}[(u_t-\tilde u_t)^2]\le(6C_0+3)\sqrt{T\varepsilon}+15\varepsilon.
\]

Proof. Fix $\gamma>0$, and let $\lfloor t\rfloor=\max\{i\gamma:i\in\mathbb{Z}_+,\,i\gamma\le t\}$ where $\mathbb{Z}_+=\{0,1,2,\ldots\}$. Let $v_t=u_{\lfloor t\rfloor}$ and $\tilde v_t=\tilde u_{\lfloor t\rfloor}$, so that $\mathbb{E}[v_tv_s]=K(\lfloor t\rfloor,\lfloor s\rfloor)$ and $\mathbb{E}[\tilde v_t\tilde v_s]=\tilde K(\lfloor t\rfloor,\lfloor s\rfloor)$. Then by (235) and (234),
\[
\sup_{t\in[0,T]}\mathbb{E}(u_t-v_t)^2\le C_0\gamma,\qquad
\sup_{t\in[0,T]}\mathbb{E}(\tilde u_t-\tilde v_t)^2\le C_0\gamma+4\varepsilon.
\]
Let $X=(v_0,v_\gamma,v_{2\gamma},\ldots,v_{\lfloor T\rfloor})\in\mathbb{R}^N$, where here $N\le T/\gamma+1$, and similarly let $\tilde X=(\tilde v_0,\tilde v_\gamma,\tilde v_{2\gamma},\ldots,\tilde v_{\lfloor T\rfloor})\in\mathbb{R}^N$. Let $\Sigma,\tilde\Sigma\in\mathbb{R}^{N\times N}$ be the covariance matrices of $X,\tilde X$, so $\Sigma_{ij}=K(i\gamma,j\gamma)$ and $\tilde\Sigma_{ij}=\tilde K(i\gamma,j\gamma)$. Coupling $X$ and $\tilde X$ by $X=\Sigma^{1/2}Z$ and $\tilde X=\tilde\Sigma^{1/2}Z$ where $Z\sim\mathcal{N}(0,I)$, for each $i=1,\ldots,N$,
\[
\mathbb{E}(X_i-\tilde X_i)^2=e_i^\top(\Sigma^{1/2}-\tilde\Sigma^{1/2})^2e_i\le\|\Sigma^{1/2}-\tilde\Sigma^{1/2}\|_{\mathrm{op}}^2\overset{(*)}{\le}\|\Sigma-\tilde\Sigma\|_{\mathrm{op}}\le N\varepsilon.
\]
Here $(*)$ follows from [97, Theorem X.1.3], and the last inequality applies (234). Then we have
\[
\sup_{t\in[0,T]}\mathbb{E}(u_t-\tilde u_t)^2\le 3\Big[\sup_{t\in[0,T]}\mathbb{E}(u_t-v_t)^2+\sup_{t\in[0,T]}\mathbb{E}(v_t-\tilde v_t)^2+\sup_{t\in[0,T]}\mathbb{E}(\tilde u_t-\tilde v_t)^2\Big]\le 6C_0\gamma+12\varepsilon+3\max_{i=1}^N\mathbb{E}(X_i-\tilde X_i)^2\le 6C_0\gamma+12\varepsilon+3(T/\gamma+1)\varepsilon.
\]
The conclusion follows by choosing $\gamma=\sqrt{T\varepsilon}$.

References

[1] Greg CG Wei and Martin A Tanner. A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. Journal of the American Statistical Association, 85(411):699–704, 1990.

[2] Richard A Levine and George Casella. Implementations of the Monte Carlo EM algorithm. Journal of Computational and Graphical Statistics, 10(3):422–439, 2001.

[3] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.

[4] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256–2265. PMLR, 2015.

[5] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019.

[6] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.

[7] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2020.

[8] Juan Kuntz, Jen Ning Lim, and Adam M Johansen. Particle algorithms for maximum likelihood training of latent variable models. In International Conference on Artificial Intelligence and Statistics, pages 5134–5180. PMLR, 2023.

[9] Ö Deniz Akyildiz, Francesca Romana Crucinio, Mark Girolami, Tim Johnston, and Sotirios Sabanis. Interacting particle Langevin algorithm for maximum marginal likelihood estimation.
arXiv preprint arXiv:2303.13429, 2023.

[10] Zhou Fan, Leying Guan, Yandi Shen, and Yihong Wu. Gradient flows for empirical Bayes in high-dimensional linear models. arXiv preprint arXiv:2312.12708, 2023.

[11] Louis Sharrock, Daniel Dodd, and Christopher Nemeth. Tuning-free maximum likelihood training of latent variable models via coin betting. In International Conference on Artificial Intelligence and Statistics, pages 1810–1818. PMLR, 2024.

[12] Pierre Marion, Anna Korba, Peter Bartlett, Mathieu Blondel, Valentin De Bortoli, Arnaud Doucet, Felipe Llinares-López, Courtney Paquette, and Quentin Berthet. Implicit diffusion: Efficient optimization through stochastic sampling. arXiv preprint arXiv:2402.05468, 2024.

[13] Charles E McCulloch. Maximum likelihood variance components estimation for binary data. Journal of the American Statistical Association, 89(425):330–335, 1994.

[14] Charles E McCulloch. Maximum likelihood algorithms for generalized linear mixed models. Journal of the American Statistical Association, 92(437):162–170, 1997.

[15] Herbert Robbins. An empirical Bayes approach to statistics. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. The Regents of the University of California, 1956.

[16] Bradley Efron. Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction. Cambridge University Press, 2012.

[17] Theo HE Meuwissen, Ben J Hayes, and ME Goddard. Prediction of total genetic value using genome-wide dense marker maps. Genetics, 157(4):1819–1829, 2001.

[18] Jian Yang, S Hong Lee, Michael E Goddard, and Peter M Visscher. GCTA: a tool for genome-wide complex trait analysis. The American Journal of Human Genetics, 88(1):76–82, 2011.

[19] Po-Ru Loh, George Tucker, Brendan K Bulik-Sullivan, et al. Efficient Bayesian mixed-model analysis increases association power in large cohorts. Nature Genetics, 47(3):284–290, 2015.

[20] Gerhard Moser, Sang Hong Lee, Ben J Hayes, et al. Simultaneous discovery, estimation and prediction analysis of complex traits using a Bayesian mixture model. PLoS Genetics, 11(4):e1004969, 2015.

[21] Luke R Lloyd-Jones, Jian Zeng, Julia Sidorenko, et al. Improved polygenic prediction by Bayesian multiple regression on summary statistics. Nature Communications, 10(1):5086, 2019.

[22] Tian Ge, Chia-Yen Chen, Yang Ni, Yen-Chen Anne Feng, and Jordan W Smoller. Polygenic prediction via Bayesian regression and continuous shrinkage priors. Nature Communications, 10(1):1776, 2019.

[23] Jeffrey P Spence, Nasa Sinnott-Armstrong, Themistocles L Assimes, and Jonathan K Pritchard. A flexible modeling and inference framework for estimating variant effect sizes from GWAS summary statistics. BioRxiv, pages 2022–04, 2022.

[24] Fabio Morgante, Peter Carbonetto, Gao Wang, Yuxin Zou, Abhishek Sarkar, and Matthew Stephens. A flexible empirical Bayes approach to multivariate multiple regression, and its improved accuracy in predicting multi-tissue gene expression from genotypes. PLoS Genetics, 19(7):e1010539, 2023.

[25] Sumit Mukherjee, Bodhisattva Sen, and Subhabrata Sen. A mean field approach to empirical Bayes estimation in high-dimensional linear regression. arXiv preprint arXiv:2309.16843, 2023.

[26] Valentin De Bortoli, Alain Durmus, Marcelo Pereyra, and Ana F Vidal. Efficient stochastic optimisation by unadjusted Langevin Monte Carlo: Application to maximum marginal likelihood and empirical Bayesian estimation. Statistics and Computing, 31:1–18, 2021.

[27] Zhou Fan, Justin Ko, Bruno Loureiro, Yue M. Lu, and Yandi Shen. Dynamical mean-field analysis of adaptive Langevin diffusions: Propagation-of-chaos and convergence of the linear response. 2025.
[28] Michael Celentano, Chen Cheng, and Andrea Montanari. The high-dimensional asymptotics of first order methods with random data. arXiv preprint arXiv:2112.07572 , 2021. [29] Cedric Gerbelot, Emanuele Troiani, Francesca Mignacco, Florent Krzakala, and Lenka Zdeborova. Rig- orous dynamical mean-field theory for stochastic gradient descent methods. SIAM Journal on Mathe- matics of Data Science , 6(2):400–427, 2024. [30] Dongning Guo and Sergio Verd´ u. Randomly spread CDMA: Asymptotics via statistical physics. IEEE Transactions on Information Theory , 51(6):1983–2010, 2005. [31] Yoshiyuki Kabashima. Inference from correlated patterns: a unified theory for perceptron learning and linear vector channels. In Journal of Physics: Conference Series , volume 95, page 012001. IOP Publishing, 2008. [32] Haim Sompolinsky and Annette Zippelius. Dynamic theory of the spin-glass phase. Physical Review Letters , 47(5):359, 1981. [33] Haim Sompolinsky and Annette Zippelius. Relaxational dynamics of the Edwards-Anderson model and the mean-field theory of spin-glasses. Physical Review B , 25(11):6860, 1982. [34] Theodore R Kirkpatrick and Devarajan Thirumalai. Dynamics of the structural glass transition and the p-spin—interaction spin-glass model. Physical review letters , 58(20):2091, 1987. [35] Andrea Crisanti, Heinz Horner, and H J Sommers. The spherical p-spin interaction spin-glass model: the dynamics. Zeitschrift f¨ ur Physik B Condensed Matter , 92:257–271, 1993. [36] Leticia F Cugliandolo and Jorge Kurchan. Analytical solution of the off-equilibrium dynamics of a long-range spin-glass model. Physical Review Letters , 71(1):173, 1993. [37] Leticia F Cugliandolo and Jorge Kurchan. On the out-of-equilibrium relaxation of the Sherrington- Kirkpatrick
https://arxiv.org/abs/2504.15558v1
model. Journal of Physics A: Mathematical and General , 27(17):5749, 1994. [38] G Ben Arous and Alice Guionnet. Large deviations for Langevin spin glass dynamics. Probability Theory and Related Fields , 102:455–509, 1995. 77 [39] G Ben Arous and Alice Guionnet. Symmetric Langevin spin glass dynamics. The Annals of Probability , 25(3):1367–1422, 1997. [40] Alice Guionnet. Averaged and quenched propagation of chaos for spin glass dynamics. Probability Theory and Related Fields , 109:183–215, 1997. [41] Francesca Mignacco, Florent Krzakala, Pierfrancesco Urbani, and Lenka Zdeborov´ a. Dynamical mean- field theory for stochastic gradient descent in gaussian mixture classification. Advances in Neural In- formation Processing Systems , 33:9540–9550, 2020. [42] Stefano Sarao Mannelli, Florent Krzakala, Pierfrancesco Urbani, and Lenka Zdeborova. Passed & spu- rious: Descent algorithms and local minima in spiked matrix-tensor models. In international conference on machine learning , pages 4333–4342. PMLR, 2019. [43] Stefano Sarao Mannelli, Giulio Biroli, Chiara Cammarota, Florent Krzakala, Pierfrancesco Urbani, and Lenka Zdeborov´ a. Marvels and pitfalls of the Langevin algorithm in noisy high-dimensional inference. Physical Review X , 10(1):011057, 2020. [44] Tengyuan Liang, Subhabrata Sen, and Pragya Sur. High-dimensional asymptotics of Langevin dynamics in spiked matrix models. Information and Inference: A Journal of the IMA , 12(4), 2023. [45] Francesca Mignacco, Pierfrancesco Urbani, and Lenka Zdeborov´ a. Stochasticity helps to navigate rough landscapes: comparing gradient-descent-based algorithms in the phase retrieval problem. Machine Learning: Science and Technology , 2(3):035029, 2021. [46] Qiyang Han and Xiaocong Xu. Gradient descent inference in empirical risk minimization. arXiv preprint arXiv:2412.09498 , 2024. [47] Elisabeth Agoritsas, Giulio Biroli, Pierfrancesco Urbani, and Francesco Zamponi. 
Out-of-equilibrium dynamical mean-field equations for the perceptron model. Journal of Physics A: Mathematical and Theoretical , 51(8):085002, 2018. [48] Blake Bordelon and Cengiz Pehlevan. Self-consistent dynamical field theory of kernel evolution in wide neural networks. Advances in Neural Information Processing Systems , 35:32240–32256, 2022. [49] Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborov´ a, and Florent Krza- kala. The benefits of reusing batches for gradient descent in two-layer networks: Breaking the curse of information and leap exponents. arXiv preprint arXiv:2402.03220 , 2024. [50] Blake Bordelon, Hamza Chaudhry, and Cengiz Pehlevan. Infinite limits of multi-head transformer dynamics. Advances in Neural Information Processing Systems , 37:35824–35878, 2024. [51] Blake Bordelon, Alexander Atanasov, and Cengiz Pehlevan. A dynamical model of neural scaling laws. arXiv preprint arXiv:2402.01092 , 2024. [52] Andrea Montanari and Pierfrancesco Urbani. Dynamical decoupling of generalization and overfitting in large two-layer networks. arXiv preprint arXiv:2502.21269 , 2025. [53] Ada Altieri, Giulio Biroli, and Chiara Cammarota. Dynamical mean-field theory and aging dynamics. Journal of Physics A: Mathematical and Theoretical , 53(37):375006, 2020. [54] Andrea Crisanti, Silvio Franz, Jorge Kurchan, and Andrea Maiorano. Dynamical mean-field theory and the aging dynamics. In Spin Glass Theory and Far Beyond: Replica Symmetry Breaking After 40 Years , pages 157–186. World Scientific, 2023. [55] G Ben Arous, Amir Dembo, and Alice Guionnet. Aging of spherical spin glasses. Probability Theory and Related Fields , 120:1–67, 2001. [56] Antoine Bodin and Nicolas Macris. Rank-one matrix estimation: analytic time evolution of gradient descent dynamics. In Conference on Learning
https://arxiv.org/abs/2504.15558v1
Theory , pages 635–678. PMLR, 2021. 78 [57] Toshiyuki Tanaka. A statistical-mechanics approach to large-system analysis of CDMA multiuser de- tectors. IEEE Transactions on Information theory , 48(11):2888–2910, 2002. [58] Andrea Montanari and David Tse. Analysis of belief propagation for non-linear problems: The example of CDMA (or: How to prove Tanaka’s formula). In 2006 IEEE Information Theory Workshop-ITW’06 Punta del Este , pages 160–164. IEEE, 2006. [59] Galen Reeves and Henry D Pfister. The replica-symmetric prediction for random linear estimation with Gaussian matrices is exact. IEEE Transactions on Information Theory , 65(4):2252–2283, 2019. [60] Jean Barbier, Nicolas Macris, Mohamad Dia, and Florent Krzakala. Mutual information and optimal- ity of approximate message-passing in random linear estimation. IEEE Transactions on Information Theory , 66(7):4270–4303, 2020. [61] Jean Barbier and Nicolas Macris. The adaptive interpolation method: a simple scheme to prove replica formulas in Bayesian inference. Probability theory and related fields , 174:1133–1185, 2019. [62] Jean Barbier, Florent Krzakala, Nicolas Macris, L´ eo Miolane, and Lenka Zdeborov´ a. Optimal errors and phase transitions in high-dimensional generalized linear models. Proceedings of the National Academy of Sciences , 116(12):5451–5460, 2019. [63] Francesco Guerra. Broken replica symmetry bounds in the mean field spin glass model. Communications in mathematical physics , 233:1–12, 2003. [64] Michel Talagrand. The Parisi formula. Annals of Mathematics , pages 221–263, 2006. [65] Takashi Takahashi and Yoshiyuki Kabashima. Macroscopic analysis of vector approximate message passing in a model-mismatched setting. IEEE Transactions on Information Theory , 68(8):5579–5600, 2022. [66] Jean Barbier, Wei-Kuo Chen, Dmitry Panchenko, and Manuel S´ aenz. Performance of Bayesian linear regression in a model with mismatch. arXiv preprint arXiv:2107.06936 , 2021. 
[67] Alice Guionnet, Justin Ko, Florent Krzakala, and Lenka Zdeborov´ a. Estimating rank-one matrices with mismatched prior and noise: universality and large deviations. Communications in Mathematical Physics , 406(1):9, 2025. [68] Yury Polyanskiy and Yihong Wu. Information theory: From coding to learning . Cambridge university press, 2025. [69] Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the Fokker–Planck equation. SIAM journal on mathematical analysis , 29(1):1–17, 1998. [70] Ji Xu, Daniel J Hsu, and Arian Maleki. Global analysis of expectation maximization for mixtures of two gaussians. Advances in Neural Information Processing Systems , 29, 2016. [71] Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J Wainwright, and Michael I Jordan. Local maxima in the likelihood of gaussian mixture models: Structural results and algorithmic consequences. Advances in neural information processing systems , 29, 2016. [72] Anya Katsevich and Afonso S Bandeira. Likelihood maximization and moment matching in low SNR Gaussian mixture models. Communications on Pure and Applied Mathematics , 76(4):788–842, 2023. [73] Yudong Chen, Dogyoon Song, Xumei Xi, and Yuqian Zhang. Local minima structures in gaussian mixture models. IEEE Transactions on Information Theory , 2024. [74] Andreas Eberle. Reflection couplings and contraction rates for diffusions. Probability theory and related fields , 166:851–886, 2016. 79 [75] Andreas Eberle and Raphael Zimmer. Sticky couplings of multidimensional diffusions with different drifts. In Annales de l’Institut Henri Poincar´ e-Probabilit´ es et Statistiques , volume 55, pages 2370–2394, 2019. [76] Philip E Protter and Philip E
https://arxiv.org/abs/2504.15558v1
Protter. Stochastic differential equations . Springer, 2005. [77] Amir Dembo and Jean-Dominique Deuschel. Markovian perturbation, response and fluctuation dissi- pation theorem. Annales de l’IHP Probabilit´ es et statistiques , 46(3):822–852, 2010. [78] Xian Chen and Chen Jia. Mathematical foundation of nonequilibrium fluctuation–dissipation theo- rems for inhomogeneous diffusion processes with unbounded coefficients. Stochastic Processes and their Applications , 130(1):171–202, 2020. [79] Jean-Michel Bismut. Large deviations and the Malliavin calculus , volume 45 of Progress in Mathematics . Birkh¨ auser Boston, Inc., Boston, MA, 1984. [80] K David Elworthy and Xue-Mei Li. Formulae for the derivatives of heat semigroups. Journal of Functional Analysis , 125(1):252–286, 1994. [81] Roman Vershynin. High-dimensional probability: An introduction with applications in data science , volume 47. Cambridge university press, 2018. [82] Sergey G Bobkov, Ivan Gentil, and Michel Ledoux. Hypercontractivity of Hamilton–Jacobi equations. Journal de Math´ ematiques Pures et Appliqu´ ees , 80(7):669–696, 2001. [83] Dominique Bakry, Ivan Gentil, and Michel Ledoux. Analysis and geometry of Markov diffusion operators , volume 103. Springer, 2014. [84] H Kunita. Stochastic differential equations and stochastic flows of diffeomorphisms. ´Ecole d’ ´Et´ e de Probabilit´ es de Saint-Flour XII-1982 , pages 143–303, 1984. [85] R.T. Rockafellar. Convex Analysis . Princeton Landmarks in Mathematics and Physics. Princeton University Press, 1997. [86] Santosh Vempala and Andre Wibisono. Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices. Advances in neural information processing systems , 32, 2019. [87] Dongning Guo, Shlomo Shamai, and Sergio Verd´ u. Mutual information and minimum mean-square error in Gaussian channels. IEEE transactions on information theory , 51(4):1261–1282, 2005. [88] Sergio Verd´ u. Mismatched estimation and relative entropy. 
IEEE Transactions on Information Theory , 56(8):3712–3720, 2010. [89] Alex Bloemendal, L´ aszl´ o Erdos, Antti Knowles, Horng-Tzer Yau, and Jun Yin. Isotropic local laws for sample covariance and generalized wigner matrices. Electron. J. Probab , 19(33):1–53, 2014. [90] Song Mei, Yu Bai, and Andrea Montanari. The landscape of empirical risk for nonconvex losses. The Annals of Statistics , 46(6A):2747–2774, 2018. [91] L Chris G Rogers and David Williams. Diffusions, Markov processes, and martingales: Itˆ o calculus , volume 2. Cambridge university press, 2000. [92] Jean-Baptiste Bardet, Natha¨ el Gozlan, Florent Malrieu, and Pierre-Andr´ e Zitt. Functional inequalities for gaussian convolutions of compactly supported measures: explicit bounds and dimension dependence. Bernoulli , 24(1):333–353, 2018. [93] Roland Bauerschmidt and Thierry Bodineau. A very simple proof of the LSI for high temperature spin systems. Journal of Functional Analysis , 276(8):2582–2588, 2019. [94] Andrea Montanari and Yuchen Wu. Provably efficient posterior sampling for sparse linear regression via measure decomposition. arXiv preprint arXiv:2406.19550 , 2024. 80 [95] Yong-Quan Yin, Zhi-Dong Bai, and Pathak R Krishnaiah. On the limit of the largest eigenvalue of the large dimensional sample covariance matrix. Probability theory and related fields , 78:509–521, 1988. [96] Michel Ledoux. Logarithmic Sobolev inequalities for unbounded spin systems revisited. S´ eminaire de Probabilit´ es XXXV , pages 167–194, 2001. [97] Rajendra Bhatia. Matrix analysis , volume 169. Springer Science & Business Media, 2013. 81
https://arxiv.org/abs/2504.15558v1
arXiv:2504.15753v1 [math.OC] 22 Apr 2025

Markov Kernels, Distances and Optimal Control: A Parable of Linear Quadratic Non-Gaussian Distribution Steering

Alexis M.H. Teter, Wenqing Wang, Sachin Shivakumar, Abhishek Halder, Senior Member, IEEE

Abstract—For a controllable linear time-varying (LTV) pair (A_t, B_t) and Q_t positive semidefinite, we derive the Markov kernel κ for the Itô diffusion dx_t = A_t x_t dt + √2 B_t dw_t with an accompanying killing of probability mass at rate ½ x⊤Q_t x. This Markov kernel is the Green's function for the linear reaction-advection-diffusion partial differential equation

∂_t κ = −⟨∇_x, κ A_t x⟩ + ⟨B_t B_t⊤, ∇_x² κ⟩ − ½ x⊤Q_t x κ.

Our result generalizes the recently derived kernel for the special case (A_t, B_t) = (0, I), and depends on the solution of an associated Riccati matrix ODE. A consequence of this result is that the linear quadratic non-Gaussian Schrödinger bridge is exactly solvable. This means that the problem of steering a controlled LTV diffusion from a given non-Gaussian distribution to another over a fixed deadline while minimizing an expected quadratic cost can be solved using dynamic Sinkhorn recursions performed with the derived kernel. The endpoint non-Gaussian distributions are only required to have finite second moments, and are arbitrary otherwise. Our derivation for the (A_t, B_t)-parametrized kernel pursues a new idea that relies on finding a state-time dependent distance-like functional given by the solution of a deterministic optimal control problem. This technique breaks away from existing methods, such as generalizing Hermite polynomials or Weyl calculus, which have seen limited success in the reaction-diffusion context. Our technique uncovers a new connection between Markov kernels, distances, and optimal control. This connection is of interest beyond its immediate application in solving the linear quadratic Schrödinger bridge problem.
Index Terms—Markov kernel, stochastic optimal control, non-Gaussian distribution, Green's function, Schrödinger bridge.

Alexis M.H. Teter is with the Department of Applied Mathematics, University of California, Santa Cruz, CA 95064, USA, amteter@ucsc.edu. Wenqing Wang and Abhishek Halder are with the Department of Aerospace Engineering, Iowa State University, Ames, IA 50011, USA, {wqwang,ahalder}@iastate.edu. Sachin Shivakumar is with the Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA, sshivakumar@lanl.gov.

I. INTRODUCTION

Markov kernels κ(t₀,x,t,y), 0 ≤ t₀ ≤ t < ∞, x, y ∈ Rⁿ, play a central role in the analysis [1, Ch. 1] and control [2]–[5], [6, Ch. V-VI] of Markov diffusion processes. Often but not always, they can be interpreted as transition probabilities, which are measurable maps from the set of Borel probability measures (endowed with the topology of weak convergence) supported on subsets of Rⁿ to itself. A well-known example is the (Euclidean) heat kernel [7, p. 44-47]

κ_Heat(t₀,x,t,y) := (4π(t−t₀))^{−n/2} exp( −|x−y|² / (4(t−t₀)) ),   (1)

which is the transition probability for the Itô process

dx_t = √2 dw_t,  w_t ∈ Rⁿ,   (2)

where w_t is the standard Wiener process, and |·| denotes the Euclidean norm. A more general example of interest in systems-control is the Markov kernel

κ_Linear(t₀,x,t,y) = (4π(t−t₀))^{−n/2} det(Γ_{tt₀})^{−1/2} exp( −(Φ_{tt₀}x − y)⊤ Γ_{tt₀}⁻¹ (Φ_{tt₀}x − y) / (4(t−t₀)) ),   (3)

which is the transition probability for the Itô process

dx_t = A_t x_t dt + √2 B_t dw_t,  w_t ∈ Rᵐ,   (4)

where (A_t, B_t) is a uniformly controllable matrix-valued trajectory pair in (Rⁿˣⁿ, Rⁿˣᵐ) that is bounded and continuous in t ∈ [t₀,∞), and the state transition matrix Φ_{tτ} := Φ(t,τ) for all t₀ ≤ τ ≤ t. Here, uniformly controllable means positive definiteness of the associated finite horizon controllability Gramian Γ_{tt₀}, i.e.,

Γ_{tt₀} := ∫_{t₀}^{t} Φ_{tτ} B_τ B_τ⊤ Φ_{tτ}⊤ dτ ≻ 0,  0 ≤ t₀ ≤ t < ∞.

As expected, (3) reduces to (1) for (A_t, B_t) = (0, I). Both (1) and (3) are instances of κ that are transition probabilities, and satisfy κ ≥ 0, ∫_{Rⁿ} κ dy = 1. They solve Kolmogorov's forward partial differential equation (PDE) initial value problem with Dirac delta initial condition:

∂_t κ = Lκ,  κ(t₀,x,t₀,y) = δ(x−y),   (5)

where L is an advection-diffusion spatial operator induced by the drift and diffusion coefficients of the underlying Itô process. For example, for the Itô process (2), Lκ ≡ Δ_x κ (standard Laplacian). For (4), Lκ ≡ −⟨∇_x, κ A_t x⟩ + B_t B_t⊤ Δ_x κ. More generally, for the Itô process dx_t = f(t,x_t) dt + g(t,x_t) dw_t, with Lipschitz f, uniformly lower bounded G := gg⊤, and |f| + |g| ≤ c(1+|x|) uniformly in t for some constant c > 0, we have Lκ ≡ ⟨∇_x, κf⟩ + Δ_G κ, where the weighted Laplacian Δ_G := Σ_{i,j} ∂²_{x_i x_j}(κ G_{ij}). At this level of generality, closed-form formulas for the transition probability κ such as (1) or (3) are not available.

A typical situation where the Markov kernel κ is not a transition probability is when the underlying Itô process, in addition to drift and diffusion, also allows the creation or killing of probability mass at a rate q(x_t) for some bounded measurable q. In such cases, κ is a measurable map that sends nonnegative Borel measures (again endowed with the topology of weak convergence) supported on subsets of Rⁿ to itself. Then, (5) gets replaced with

∂_t κ = (L − q)κ,  κ(t₀,x,t₀,y) = δ(x−y),   (6)

where L is an advection-diffusion operator as before (the q = 0 case), and L − q becomes a reaction-advection-diffusion operator. We say that q is the reaction rate. For both (5) and (6), the Markov kernel κ can be seen as the Green's function of the associated linear PDE initial value problem.
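Before moving on, the Gramian and the kernel (3) can be sanity-checked numerically. The sketch below is not from the paper: it assumes the illustrative double-integrator pair A = [[0,1],[0,0]], B = [0,1]⊤ (chosen so that Φ(t,τ) = I + A(t−τ) holds exactly, since A is nilpotent), builds Γ_{tt₀} by trapezoidal quadrature, and verifies that κ_Linear in (3) integrates to 1 over y:

```python
import numpy as np

# Illustrative LTI pair (not from the paper): double integrator, where the
# state transition matrix is exact because A^2 = 0.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
t0, t = 0.0, 1.0

def Phi(t, tau):
    # state transition matrix; exact here since A is nilpotent
    return np.eye(2) + A * (t - tau)

# Controllability Gramian  Gamma = ∫ Phi(t,τ) B B^T Phi(t,τ)^T dτ  (trapezoid rule)
taus = np.linspace(t0, t, 2001)
Ms = np.array([Phi(t, s) @ B @ B.T @ Phi(t, s).T for s in taus])
h = taus[1] - taus[0]
Gamma = (Ms[0] + Ms[-1]) / 2 * h + Ms[1:-1].sum(axis=0) * h
# Closed form for this pair over unit horizon: Gamma = [[1/3, 1/2], [1/2, 1]]

# Kernel (3): a Gaussian in y with mean Phi(t,t0) x; check ∫ κ dy = 1
x = np.array([1.0, -0.5])
mean = Phi(t, t0) @ x
n = 2
c = (4 * np.pi * (t - t0)) ** (-n / 2) * np.linalg.det(Gamma) ** (-0.5)
Ginv = np.linalg.inv(Gamma)

ys = np.linspace(-8.0, 8.0, 321)
Y = np.stack(np.meshgrid(ys, ys), axis=-1) - mean      # grid of y values, shape (321, 321, 2)
quad = np.einsum('...i,ij,...j->...', Y, Ginv, Y)
kappa = c * np.exp(-quad / (4 * (t - t0)))
total = kappa.sum() * (ys[1] - ys[0]) ** 2              # Riemann-sum mass
print(round(total, 3))  # ≈ 1.0
```

The total mass coming out as 1 is exactly the statement that the pre-factor in (3) is fixed by normalization, a point revisited in Sec. II.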
Thus, a closed-form handle on κ helps solve the associated prediction problem, in the sense that if the initial state x₀ ∼ μ₀ (the symbol ∼ denotes "follows the statistics of"), then x_t ∼ ∫_{Rⁿ} κ(t₀,x,t,y) dμ₀(y).

Compared to (5), explicit formulas for κ in (6) are less known. Recently, a closed-form formula for κ in the case L ≡ Δ_x and convex quadratic q(x_t) was found using Hermite polynomials [5] and Weyl calculus [8]. Motivation behind these studies came from the Schrödinger bridge problem (SBP) with quadratic state cost q(x_t) := ½ x_t⊤ Q x_t, Q ⪰ 0, which are stochastic optimal control problems of the form:

inf_{(μᵘ,u)} ∫_{Rⁿ} ∫_{t₀}^{t₁} { ½|u|² + ½ (x_tᵘ)⊤ Q x_tᵘ } dt dμᵘ(x_tᵘ)   (7a)
subject to dx_tᵘ = u_t(t, x_tᵘ) dt + √2 dw_t,   (7b)
x_tᵘ(t = t₀) ∼ μ₀,  x_tᵘ(t = t₁) ∼ μ₁,   (7c)

where the deadline [t₀,t₁], the state cost weight matrix Q ⪰ 0, and the endpoint statistics μ₀, μ₁ are given as problem data. Problem (7) has the interpretation of linear quadratic (LQ) optimal control synthesis for steering non-Gaussian state statistics over a given finite horizon. The existence-uniqueness of the solution for (7) is guaranteed provided μ₀, μ₁ have finite second moments. The solution of (7) for Gaussian μ₀, μ₁ was detailed in [9, Sec. III] in a more general setting with (7b) replaced by the controlled linear time-varying (LTV) dynamics:

dx_tᵘ = (A_t x_tᵘ + B_t u_t) dt + √2 B_t dw_t,   (8)

wherein as before, (A_t, B_t) is a controllable matrix-valued trajectory pair in (Rⁿˣⁿ, Rⁿˣᵐ) that is bounded and continuous in t ∈ [t₀,t₁].

Writing the conditions of optimality for (7) followed by certain change-of-variables, it can be shown [5, Sec. 3] that solving problem (7) leads to computing the "propagator", a.k.a. the action of the Green's function

∫_{Rⁿ} κ(t₀,x,t,y) φ̂₀(y) dy,

where κ solves (6), and φ̂₀ is a suitable measurable function. In other words, the state cost-to-go manifests as a reaction rate in the PDE for the Markov kernel. Then, knowing a closed-form formula for κ facilitates the solution of (7) with generic non-Gaussian μ₀, μ₁ having finite second moments. This is what was accomplished in [5], [8].

A natural question is whether such a closed-form formula for κ can be derived when (7b) is replaced by (8). Finding such κ would enable solving LQ SBPs with generic non-Gaussian μ₀, μ₁ having finite second moments. From a probabilistic point of view, this κ is the Markov kernel of the Itô process (4) with quadratic creation or killing of probability mass with rate q(x_t) := ½ x_t⊤ Q x_t, Q ⪰ 0. In this work, we derive this Markov kernel in the more general setting of time-varying weight matrices (see Assumptions A1-A2 in Sec. IV).

Contributions: This work makes two concrete contributions.
• The first contribution is the solution of a specific problem. We deduce the Markov kernel for the Itô diffusion (4) with killing rate ½ x⊤Q_t x for Q_t ⪰ 0. We explain how integral transforms defined by this kernel help in solving the generic LQ SBP.
• The second contribution is methodological. To derive the aforesaid kernel, we propose a new technique that involves identifying a deterministic optimal control problem from the Itô diffusion, solving it to find a distance function, and then identifying the Markov kernel in a structure defined by that distance. We provide computational details to demonstrate that the proposed technique systematically recovers Markov kernels of interest: old and new.

Related works: In Table I, we contrast the technical contribution of this work vis-à-vis the related works in the literature. Of particular relevance are [5], [8], [10], [11], all of which consider the quadratic state cost in the cost-to-go.
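To make concrete how a kernel plugs into the bridge problem, here is a toy discretization of the dynamic Sinkhorn (iterative proportional fitting) recursions mentioned in the abstract, run with the heat kernel (1) on a 1-D grid. The grid, the endpoint marginals, and the iteration count are illustrative assumptions, not quantities from the paper:

```python
import numpy as np

# Heat kernel (1) on a 1-D grid; the constant pre-factor cancels in the
# Sinkhorn scalings below, so it is omitted.
n, tmt0 = 200, 0.25
xs = np.linspace(-3.0, 3.0, n)
K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (4 * tmt0))

# Illustrative endpoint marginals: bimodal mu0, uniform mu1
mu0 = np.exp(-(xs + 1.5) ** 2) + np.exp(-(xs - 1.5) ** 2)
mu0 /= mu0.sum()
mu1 = np.full(n, 1.0 / n)

# Sinkhorn recursions: find positive scalings u, v so that diag(u) K diag(v)
# has row marginal mu0 and column marginal mu1
u, v = np.ones(n), np.ones(n)
for _ in range(2000):
    u = mu0 / (K @ v)
    v = mu1 / (K.T @ u)

P = u[:, None] * K * v[None, :]   # entropic bridge coupling
row_err = np.abs(P.sum(axis=1) - mu0).max()
col_err = np.abs(P.sum(axis=0) - mu1).max()
print(row_err, col_err)
```

Both marginal errors shrink to numerical noise, illustrating how, once the relevant kernel is available in closed form, the bridge reduces to such fixed-point scalings.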
The developments in [10], [11] focused on Gaussian μ₀, μ₁, and that case did not require the kernel, thanks to the linear dynamics preserving Gaussianity. On the other hand, the works [5], [8] derived the kernel only for (A_t, B_t) = (0, I) and constant Q ⪰ 0. Neither the Hermite polynomials in [5] nor the Weyl calculus computation in [8] generalize in any obvious way for deriving the κ in the LQ setting of our interest. In this work, we chart a new path motivated by a basic observation on the structure of Markov kernels, distances and certain deterministic optimal control problems.

Ref.        (A_t,B_t)   Q_t ⪰ 0       Distributions μ₀,μ₁   Markov kernel κ
[12], [13]  (0,I)       0             generic               (1)
[14]        generic     0             Gaussian              (3)
[10], [11]  generic     generic       Gaussian              n/a
[4]         generic     0             generic               (3)
[5], [8]    (0,I)       fixed Q ⪰ 0   generic               [5, (A.22)], [8, (43)]
This work   generic     generic       generic               (36)

TABLE I: Comparison of related works on LQ SBP.

Organization: In Sec. II, we motivate the postulated form (see (9)) for the Markov kernel of interest in terms of a distance function. In Sec. III, we re-visit the known Markov kernels to demonstrate that the distance function appearing in our postulated form can be obtained by
solving an associated deterministic optimal control problem. Motivated by the structural observations made in Sections II and III, we next follow the computational template: Markov kernel ← distance function ← deterministic optimal control problem, to derive the Markov kernel for the Itô diffusion (4) with rate of killing ½ x⊤Q_t x for Q_t ⪰ 0. In particular, the corresponding distance is derived in Sec. IV-A, which we then use to compute the Markov kernel of interest in Sec. IV-B. Our main result is Theorem 1. In Sec. IV-C, we explain how the derived kernel helps to solve the LQ SBP with non-Gaussian endpoints. Sec. V concludes the work.

II. MARKOV KERNELS AND DISTANCES

Both (1) and (3) are of the form

κ = c(t,t₀) exp( −½ dist²_{tt₀}(x,y) )   (9)

for some suitable distance function dist_{tt₀}: Rⁿ × Rⁿ → R_{≥0}. The subscript tt₀ signifies the distance function's parametric dependence on t, t₀.

Because (1) and (3) are both transition probabilities, the distance function dist_{tt₀} uniquely determines κ. Once dist_{tt₀} is identified, the pre-factor c(t,t₀), and hence κ, is completely determined by the normalization condition ∫_{Rⁿ} κ dy = 1. Interestingly, the κ associated with problem (7) derived in [5], [8] is also of the form (9) even though κ then is not a transition probability, and c(t,t₀) does not follow from normalization. These exemplars hint that for controlled dynamics over Rⁿ, the distance function might be induced by some principle of least action.

That Markov kernels are related to distances is in itself not a new observation. The most well-known result relating kernels and distances is Varadhan's formula [15]–[17], which says that the heat kernel κ^M_Heat on a complete Riemannian manifold M satisfies

lim_{t↓t₀} t log κ^M_Heat(t₀,x,t,y) = −½ dist²(x,y)   (10)

uniformly on every compact subset of M × M, where dist is the minimal geodesic distance connecting x, y ∈ M.
In other words, dist can be recovered as the short time asymptotic of the heat kernel on M. In the context of Varadhan's formula (10), the L in (5) is the Laplace-Beltrami operator on M. For specific manifolds, a few exact or asymptotic relations are also known [18, Ch. VI], [19], [20, Ch. 5]:

κ^{H³}_Heat(t₀,x,t,y) = (4π(t−t₀))^{−3/2} · ( dist(x,y) / sinh dist(x,y) ) · exp( −(t−t₀) − dist²(x,y)/(4(t−t₀)) ),   (11a)

κ^{Hⁿ}_Heat(t₀,x,t,y) ∼ (4π(t−t₀))^{−n/2} · ( dist(x,y) / sinh dist(x,y) )^{(n−1)/2} · exp( −dist²(x,y)/(4(t−t₀)) ),   (11b)

κ^{Sⁿ}_Heat(t₀,x,t,y) ∼ (4π(t−t₀))^{−n/2} · ( dist(x,y) / sin dist(x,y) )^{(n−1)/2} · exp( −dist²(x,y)/(4(t−t₀)) ),   (11c)

where Hⁿ, Sⁿ denote the n-dimensional hyperbolic manifold and the sphere, respectively. The symbol ∼ in (11) denotes asymptotic equivalence.

In contrast, the form (9) that we focus on is rather specific, and is for a flat geometry. It is then natural to speculate that dist_{tt₀} may arise from the finite horizon reachability constraint over [t₀,t] imposed by the controlled dynamics, i.e., that dist_{tt₀} is of sub-Riemannian or Carnot-Carathéodory type [21]–[23]. This motivates us to formulate dist_{tt₀} as the minimal value of an action integral, explained next.

III. DISTANCES FROM OPTIMAL CONTROL

We postulate that for κ of the form (9), the dist_{tt₀} is induced by a deterministic optimal control problem (OCP) of particular structure. The objective for this OCP is

½ dist²_{tt₀}(x,y) = min_{u_τ} ∫_{t₀}^{t} ( ½|u_τ|² + q(z_τ) ) dτ.   (12)

The constraint for this OCP is a controlled ODE obtained by replacing dw_τ in the underlying Itô diffusion with a controlled drift u_τ dτ, together with the boundary conditions z_τ(τ=t₀) = x, z_τ(τ=t) = y. Notice that in objective (12), the state cost q is the rate of creation or killing of probability mass. Let us verify this postulate for the known cases discussed before.

Heat kernel (1). Here q ≡ 0, and dw_τ ↦ u dτ in (2) gives the controlled ODE ż_τ = √2 u_τ, where dot denotes derivative w.r.t. τ ∈ [t₀,t]. This leads to the deterministic OCP

½ dist²_{tt₀}(x,y) = min_{u_τ} ∫_{t₀}^{t} ½|u_τ|² dτ   (13a)
ż_τ = √2 u_τ,   (13b)
z_τ(τ=t₀) = x,  z_τ(τ=t) = y.   (13c)

For solving (13), we apply Pontryagin's minimum principle to get the optimal control u_τ^opt = −√2 λ_τ^opt, where λ_τ^opt is the optimal costate. The optimal state satisfies z̈_τ^opt = 0, or ż_τ^opt = −2λ_τ^opt = α, or equivalently z_τ^opt = ατ + β for some constants α, β ∈ Rⁿ. Using (13c), we find α = (x−y)/(t₀−t), and λ_τ^opt = −α/2 = (x−y)/(2(t−t₀)). Hence the optimal value in (13a):

½ dist²_{tt₀}(x,y) = ½ · 2|λ_τ^opt|² · (t−t₀) = |x−y|² / (4(t−t₀)),

which indeed transcribes (1) into the form (9). The normalization condition for κ leads to evaluating a Gaussian integral, which determines the pre-factor c(t,t₀) = (4π(t−t₀))^{−n/2}.

Linear kernel (3). Here q ≡ 0, and dw_τ ↦ u dτ in (4) yields the deterministic OCP

½ dist²_{tt₀}(x,y) = min_{u_τ} ∫_{t₀}^{t} ½|u_τ|² dτ   (14a)
ż_τ = A_τ z_τ + √2 B_τ u_τ,   (14b)
z_τ(τ=t₀) = x,  z_τ(τ=t) = y.   (14c)

Problem (14) is that of minimum effort state steering for a controllable linear system, and its solution is commonplace in optimal control textbooks; see e.g., [24, p. 194]. The optimal value in (14a):

½ dist²_{tt₀}(x,y) = (Φ_{tt₀}x − y)⊤ Γ_{tt₀}⁻¹ (Φ_{tt₀}x − y) / (4(t−t₀)),

which indeed transcribes (3) into the form (9). As before, the normalization condition for κ determines the pre-factor

c(t,t₀) = (4π(t−t₀))^{−n/2} det(Γ_{tt₀})^{−1/2}

via a Gaussian integral.

The kernel in [5], [8]. The kernel derived in [5], [8] corresponds to the Itô diffusion (2) with reaction rate ½ z⊤Qz, Q ⪰ 0. Without loss of generality [5, Sec. 4.1], we consider state coordinates where a given Q ≻ 0 is diagonalized to yield the diagonal matrix 2D ≻ 0.
The factor 2 scaling is unimportant but kept for consistency with [5, Sec. 4]. Importantly, the Laplacian operator is invariant under this change of coordinates. For the case Q ⪰ 0, we refer the readers to [5, Sec. 4.3]. In the new state coordinates, q ≡ ½ z_τ⊤ 2D z_τ, and dw_τ ↦ u dτ in (2) gives the following modified version of the OCP (13):

½ dist²_{tt₀}(x,y) = min_{u_τ} ∫_{t₀}^{t} ( ½|u_τ|² + ½ z_τ⊤ 2D z_τ ) dτ   (15a)
ż_τ = √2 u_τ,   (15b)
z_τ(τ=t₀) = x,  z_τ(τ=t) = y.   (15c)

Applying Pontryagin's minimum principle, we get the optimal control u_τ^opt = −√2 λ_τ^opt, the optimal costate ODE λ̇_τ^opt = −2D z_τ^opt, and the optimal state ODE

z̈_τ^opt = −2 λ̇_τ^opt = 4D z_τ^opt.   (16)

Letting ω_i := 2√(D_ii) for all i = 1,...,n, we solve for the components of (16) as

(z_τ^opt)_i = a_i e^{ω_i τ} + b_i e^{−ω_i τ},   (17)

and use (15c) to determine the constants

a_i = ( −x_i e^{−ω_i t} + y_i e^{−ω_i t₀} ) / ( 2 sinh(ω_i(t−t₀)) ),  b_i = ( x_i e^{ω_i t} − y_i e^{ω_i t₀} ) / ( 2 sinh(ω_i(t−t₀)) ).   (18)

Hence the optimal value in (15a):

½ dist²_{tt₀}(x,y) = ∫_{t₀}^{t} ( ½|u_τ^opt|² + ¼ Σ_{i=1}^{n} ω_i² ((z_τ^opt)_i)² ) dτ
  = ∫_{t₀}^{t} ( ¼|ż_τ^opt|² + ¼ Σ_{i=1}^{n} ω_i² ((z_τ^opt)_i)² ) dτ
  = ½ Σ_{i=1}^{n} ( ω_i (x_i² + y_i²) cosh(ω_i(t−t₀)) − 2 ω_i x_i y_i ) / ( 2 sinh(ω_i(t−t₀)) ),   (19)

where we have used (17)-(18) followed by algebraic simplification using the identity sinh(2(·)) = 2 sinh(·) cosh(·). We note that the expression (19) indeed transcribes the Markov kernel in [8, Eq. (43)] or that in [5, Eq. (A.22)] into the form (9). Since this Markov kernel is
not a transition probability, the normalization condition no longer holds, and the pre-factor

c(t,t₀) = Π_{i=1}^{n} ( ω_i / (4π sinh(ω_i(t−t₀))) )^{1/2}   (20)

does not follow from there. However, having determined (19), the pre-factor (20) can be obtained by substituting (9) with (19) in (6). See Appendix A for the details of this simple but non-trivial computation.

IV. DISTANCE AND KERNEL FOR LQ NON-GAUSSIAN DISTRIBUTION STEERING

Having seen three exemplars for the computational pipeline: Markov kernel κ ← dist ← deterministic OCP, we now apply the same to derive κ for the Itô diffusion (4) with reaction rate q(z) ≡ ½ z⊤Q_τ z, τ ∈ [t₀,t]. For convenience, let B̂_τ := √2 B_τ. Our standing assumptions are:

A1. (A_τ, B_τ), and thus (A_τ, B̂_τ), is a controllable matrix-valued trajectory pair in (Rⁿˣⁿ, Rⁿˣᵐ) that is bounded and continuous in τ ∈ [t₀,t].
A2. The matricial trajectory Q_τ ⪰ 0 is continuous and bounded w.r.t. τ ∈ [t₀,t], and Q_s ≻ 0 for some s ∈ [t₀,t].

Finding the Markov kernel κ in this setting enables the solution of the LQ SBP with generic non-Gaussian endpoint distributions having finite second moments, i.e., the solution of the problem:

inf_{(μᵘ,u)} ∫_{Rⁿ} ∫_{t₀}^{t₁} { ½|u|² + ½ (x_tᵘ)⊤ Q_t x_tᵘ } dt dμᵘ(x_tᵘ)   (21a)
subject to dx_tᵘ = (A_t x_tᵘ + B_t u_t) dt + √2 B_t dw_t,   (21b)
x_tᵘ(t = t₀) ∼ μ₀,  x_tᵘ(t = t₁) ∼ μ₁.   (21c)

For μ₀, μ₁ with finite second moments, the existence-uniqueness for the solution of (21) is guaranteed. This follows from transcribing (21) to a stochastic calculus of variations problem involving the relative entropy a.k.a.
the Kullback–Leibler divergence $D_{\mathrm{KL}}(\cdot \,\|\, \cdot)$, minimization:

$$\min_{P \in \Pi_{01}} D_{\mathrm{KL}}\left( P \,\Bigg\|\, \frac{\exp\big( -\tfrac{1}{4} \int_{t_0}^{t_1} x^\top Q_t x\, \mathrm{d}t \big)\, W}{Z} \right) \quad (22)$$

where

$$\Pi_{01} := \left\{ M \in \mathcal{M}(C([t_0, t_1]; \mathbb{R}^n)) \mid M \text{ has marginal } \mu_k \text{ at time } t_k\ \forall k \in \{0, 1\} \right\}, \quad (23)$$

$\mathcal{M}(C([t_0, t_1]; \mathbb{R}^n))$ is the set of probability measures on the path space $C([t_0, t_1]; \mathbb{R}^n)$ generated by the Itô diffusion (4), $W \in \mathcal{M}(C([t_0, t_1]; \mathbb{R}^n))$ is the Wiener measure, and $Z$ is a normalizing constant. Problem (22) seeks the most likely measure-valued path generated by (4) w.r.t. a weighted Wiener (i.e., Gibbs) measure defined by the quadratic state cost, with endpoint marginal constraints given by the problem data $\mu_0, \mu_1$. For the equivalence of the formulations (21) and (22), see, e.g., [25]–[27], [28, Sec. 4.1]. The existence-uniqueness of the solution of (22) is a consequence of the strict convexity of $D_{\mathrm{KL}}(\cdot \,\|\, \cdot)$ w.r.t. its first argument.

Unlike the three kernels in Sec. III, the $\kappa$ for problem (21) is not known. Moreover, the previously mentioned approaches (Hermite polynomials, Weyl calculus) to compute $\kappa$, which work for simpler cases, no longer generalize in any obvious way. We postulate that the form (9) for the Markov kernel holds here as well. In Sections IV-A-IV-B, we will verify the same. In Section IV-C, we will explain how the resulting kernel helps to solve the LQ SBP (21).

A. Getting dist

Inspired by the instances in Sec. III, we define dist through the following deterministic OCP:

$$\tfrac{1}{2}\mathrm{dist}^2_{tt_0}(x,y) = \min_{u_\tau} \int_{t_0}^{t} \Big( \tfrac{1}{2}|u_\tau|^2 + \tfrac{1}{2} z_\tau^\top Q_\tau z_\tau \Big)\, \mathrm{d}\tau \quad (24a)$$
$$\dot{z}_\tau = A_\tau z_\tau + \sqrt{2} B_\tau u_\tau, \quad (24b)$$
$$z_\tau(\tau = t_0) = x, \qquad z_\tau(\tau = t) = y. \quad (24c)$$

This is an atypical linear quadratic OCP in that, when $x, y$ are not too close to the origin, the optimal state trajectory trades off the soft penalty (state cost-to-go) in deviating away from the origin against that of meeting the hard endpoint constraints (24c) within the given deadline.

We need the following result from [29, pp. 140-141], paraphrased below to suit our
notations.

Proposition 1 ([29, pp. 140-141]). Suppose there exists a symmetric matrix $K_1$ such that the solution map³ $\Pi(\tau, K_1, t)$ of the Riccati matrix ODE initial value problem

$$\dot{K}_\tau = -A_\tau^\top K_\tau - K_\tau A_\tau + K_\tau \hat{B}_\tau \hat{B}_\tau^\top K_\tau - Q_\tau, \quad (25a)$$
$$K_{\tau = t} = K_1, \quad (25b)$$

exists for all $\tau \in [t_0, t]$. Let

$$\hat{A}_\tau := A_\tau - \hat{B}_\tau \hat{B}_\tau^\top \Pi(\tau, K_1, t) \quad \forall \tau \in [t_0, t]. \quad (26)$$

Then the differentiable trajectory $z_\tau$ minimizes

$$\eta = \int_{t_0}^{t} \Big( \tfrac{1}{2}|u_\tau|^2 + \tfrac{1}{2} z_\tau^\top Q_\tau z_\tau \Big)\, \mathrm{d}\tau \quad \text{subject to} \quad \dot{z}_\tau = A_\tau z_\tau + \hat{B}_\tau u_\tau, \quad z_{\tau = t_0} = x, \quad z_{\tau = t} = y,$$

if and only if it also minimizes

$$\hat{\eta} = \int_{t_0}^{t} \tfrac{1}{2}|v_\tau|^2\, \mathrm{d}\tau \quad \text{subject to} \quad \dot{z}_\tau = \hat{A}_\tau z_\tau + \hat{B}_\tau v_\tau, \quad z_{\tau = t_0} = x, \quad z_{\tau = t} = y.$$

Additionally, along any trajectory satisfying the boundary conditions, we have

$$\eta = \hat{\eta} + \tfrac{1}{2} x^\top \Pi(t_0, K_1, t) x - \tfrac{1}{2} y^\top K_1 y. \quad (27)$$

³The mapping $\Pi(\tau, K_1, t)$ is understood as the solution of (25a) at any $\tau \in [t_0, t]$, solved backward in time with initial condition (25b).

Remark 1 (Existence, uniqueness and positive semi-definiteness of $\Pi$). Proposition 1 does not assume the controllability of (24b). However, under our standing controllability assumption A1 and the positive semi-definiteness assumption for $Q_\tau$ $\forall \tau \in [t_0, t]$ in A2, the existence-uniqueness of the mapping $\Pi$ is guaranteed for all $\tau \in [t_0, t]$. Furthermore, the unique solution $\Pi(\tau, K_1, t) \succeq 0$ for any $K_1 \succeq 0$. See, e.g., [30, Thm. 8-10].

Remark 2 (Strict positive definiteness of $\Pi$). The assumption $Q_s \succ 0$ for some $s \in [t_0, t]$, stated in the latter part of A2, further ensures that $\Pi(\tau, K_1, t) \succ 0$ for any $K_1 \succeq 0$. This is a consequence of the matrix variation of constants formula [29, p. 59, exercise 1 in p. 162], [31, Sec. II].

Since the optimal $\hat{\eta}$ in Proposition 1 is the cost for minimum energy state transfer from $x$ at $t_0$ to $y$ at $t$ over the LTV system matrix pair $(\hat{A}_\tau, \hat{B}_\tau)$, we have

$$\hat{\eta}^{\mathrm{opt}} = \tfrac{1}{2} \big( \hat{\Phi}_{tt_0} x - y \big)^\top \hat{\Gamma}^{-1}_{tt_0} \big( \hat{\Phi}_{tt_0} x - y \big), \quad (28)$$

where $\hat{\Phi}_{tt_0}$ is the state transition matrix from $t_0$ to $t$ for the state matrix $\hat{A}_\tau$ in (26). Likewise, $\hat{\Gamma}_{tt_0}$ in (28) is the controllability Gramian for the pair $(\hat{A}_\tau, \hat{B}_\tau)$.
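As a quick sanity check on the ingredients of Proposition 1, the following minimal sketch (pure Python; the function name, step count, and numeric values are our own illustrative choices, not from the paper) integrates the scalar special case of the Riccati ODE (25a) with $A \equiv 0$, $\hat{B} \equiv \sqrt{2}$, $Q \equiv 2D$, $K_1 = 0$ backward in time, and compares $\Pi(t_0, 0, t)$ against the closed form $\sqrt{D}\tanh(2\sqrt{D}(t - t_0))$ derived later in Appendix B:

```python
import math

def scalar_riccati_pi(D, t0, t, steps=200_000):
    """Scalar case of (25a) with A = 0, B_hat^2 = 2, Q = 2D, K(t) = 0.
    In time-to-go s = t - tau the ODE reads dK/ds = 2D - 2K^2, K(0) = 0,
    so a forward Euler sweep in s integrates it backward in tau."""
    h = (t - t0) / steps
    K = 0.0
    for _ in range(steps):
        K += h * (2.0 * D - 2.0 * K * K)
    return K  # approximates Pi(t0, 0, t)

D, t0, t = 1.5, 0.0, 0.8
numeric = scalar_riccati_pi(D, t0, t)
closed = math.sqrt(D) * math.tanh(2.0 * math.sqrt(D) * (t - t0))
```

For these values the Euler iterate and the closed form agree closely, consistent with the existence-uniqueness claims of Remark 1.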
To see why (28) is well-defined, recall that for linear systems, controllability remains invariant under state feedback. Specifically, the following Proposition 2 holds. For its proof in the LTV setting, see [32]; the proof in the linear time-invariant setting appeared earlier in [33, Thm. 3].

Proposition 2 ([32]). Let $\hat{A}_\tau$ be given by (26). If the LTV system defined by the pair $(A_\tau, \hat{B}_\tau)$ is controllable, then so is the LTV system defined by the pair $(\hat{A}_\tau, \hat{B}_\tau)$.

From Proposition 2, it follows that $\Gamma_{tt_0} \succ 0$ implies $\hat{\Gamma}_{tt_0} \succ 0$. In particular, $\hat{\Gamma}_{tt_0}$ is non-singular, and (28) is well-defined.

Since the optimal $\eta$ in (27), and thus the optimal value in (24a), must be independent of $K_1$, we can set $K_1 \equiv 0$ without loss of generality. Combining this observation with (28), we find the optimal value in (24a) as

$$\tfrac{1}{2}\mathrm{dist}^2_{tt_0}(x,y) = \tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top M_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix}, \quad (29)$$

where

$$M_{tt_0} := \begin{pmatrix} \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} + \Pi(t_0, 0, t) & -\hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \\ -\hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} & \hat{\Gamma}^{-1}_{tt_0} \end{pmatrix}. \quad (30)$$

Using the Schur complement lemma, we can check that $M_{tt_0} \succ 0$, as expected. Specifically, from (30), note that

$$\hat{\Gamma}^{-1}_{tt_0} \succ 0, \quad (31a)$$
$$\hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} + \Pi(t_0, 0, t) - \big( -\hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \big) \hat{\Gamma}_{tt_0} \big( -\hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} \big) = \Pi(t_0, 0, t) \succ 0, \quad (31b)$$

thanks to Remark 2.

The formulae (29)-(30) generalize the previously known result in [5], [8] for zero prior dynamics. In particular, the previously known result can be recovered from (29)-(30) as follows (proof in Appendix B).

Proposition 3. When $Q_\tau = 2D \succ 0$ (constant positive diagonal matrix) and $(A_\tau, \hat{B}_\tau) \equiv (0, \sqrt{2} I)$ $\forall \tau \in [t_0, t]$, the formulas (29)-(30) reduce to (19).

B. Getting $\kappa$

Now that we have determined the distance functional in the generic LQ setting given by (29)-(30), what remains in finding $\kappa$ in the postulated form (9), i.e., in the form

$$\kappa(t_0, x, t, y) = c(t, t_0) \exp\left( -\tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top M_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix} \right), \quad (32)$$

is to compute the pre-factor $c(t, t_0)$. Unique identification of $c(t, t_0)$ serves the dual purpose of uniquely determining the kernel as well as verifying the postulated form (9). As was the case in the last example in Sec. III, here $\kappa$ is not a transition probability kernel and $\int \kappa \neq 1$. Thus, the pre-factor $c(t, t_0)$ cannot be determined from normalization. To find $c(t, t_0)$, the idea now is to substitute (9) with (29) in (6) and invoke the initial condition. Specifically, here $\mathcal{L}\kappa \equiv -\langle \nabla_x, \kappa A_t x \rangle + \langle B_t B_t^\top, \nabla^2_x \kappa \rangle$, so $\mathcal{L}$ is an advection-diffusion operator. Also, recall that the reaction rate $q(x) \equiv \frac{1}{2} x^\top Q_\tau x$ with $Q_\tau$ satisfying Assumption A2. So (6) becomes a reaction-advection-diffusion PDE initial value problem

$$\partial_t \kappa = -\langle \nabla_x, \kappa A_t x \rangle + \langle B_t B_t^\top, \nabla^2_x \kappa \rangle - \tfrac{1}{2} x^\top Q_t x\, \kappa, \quad (33a)$$
$$\kappa(t_0, x, t_0, y) = \delta(x - y), \quad (33b)$$

where $\nabla^2_x$ denotes the Hessian w.r.t. $x$. We now state our main result in the following theorem (proof in Appendix C).

Theorem 1 (Kernel for LQ non-Gaussian steering). For $\tau \in [t_0, t]$, consider assumptions A1-A2. Let

$$\theta(\tau) := \mathrm{trace}\, A_\tau + \big\langle B_\tau B_\tau^\top,\ \hat{\Phi}^\top_{\tau t_0} \hat{\Gamma}^{-1}_{\tau t_0} \hat{\Phi}_{\tau t_0} + \Pi(t_0, 0, \tau) \big\rangle \quad (34a)$$
$$= \mathrm{trace}\big( A_\tau + B_\tau B_\tau^\top M_{11}(\tau, t_0) \big), \quad (34b)$$

$$a := (2\pi)^{-n/2} \lim_{t \downarrow t_0} \det\left( M^{1/2}_{11}(t, t_0) \exp \int_{t_0}^{t} \big( A_\tau + B_\tau B_\tau^\top M_{11}(\tau, t_0) \big)\, \mathrm{d}\tau \right), \quad (35)$$

where $\hat{\Phi}_{\tau t_0}, \hat{\Gamma}_{\tau t_0}, \Pi(t_0, 0, \tau)$ are as in Sec. IV-A, and $\exp$ denotes the matrix exponential. The matrix $M_{11}(\tau, t_0)$ appearing in (34b) and (35) is the $(1,1)$ block of $M_{\tau t_0}$ given by (30).
Then the Markov kernel $\kappa$ solving the reaction-advection-diffusion PDE initial value problem (33) is

$$\kappa(t_0, x, t, y) = a \exp\left( -\int_{t_0}^{t} \theta(s)\, \mathrm{d}s \right) \exp\left( -\tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top M_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix} \right), \quad (36)$$

where $M_{tt_0} \succ 0$ is given by (30).

The kernel (36) significantly extends the results in [5], [8] to the generic LQ case. The corollary next (proof in Appendix D) recovers the previously known special case.

Corollary 2 (Kernel in [5], [8] as special case of (36)). When $Q_\tau = 2D \succ 0$ (constant positive diagonal matrix) and $(A_\tau, \hat{B}_\tau) \equiv (0, \sqrt{2} I)$ $\forall \tau \in [t_0, t]$, the kernel (36) reduces to that in [8, Eq. (43)] or that in [5, Eq. (A.22)].

Remark 3. Using assumptions A1-A2 and the standard $(\varepsilon, \delta)$ definition of limit, it is not too difficult to show that the upper limit in (35) exists. In special cases such as Corollary 2 above, the limit in (35) can be evaluated in closed form, as shown in Appendix D. Importantly, the limit in (35) cannot be distributed to the determinant and the exponential factors; see the computation in (85).

The following properties are immediate from the expression of the kernel (36):
• (spatial symmetry) $\kappa(t_0, x, t, y) = \kappa(t_0, y, t, x)$ $\forall x, y \in \mathbb{R}^n$,
• (positivity) $\kappa(t_0, x, t, y) > 0$ since, from (35), $a > 0$.

We next discuss how the derived kernel helps in solving the LQ SBP (21).

C. Using $\kappa$ to solve LQ SBP with non-Gaussian $\mu_0, \mu_1$

Using the derived kernel (36), we define the integral transforms

$$\widehat{\varphi}_0 \mapsto T_{01} \widehat{\varphi}_0 := \int_{\mathbb{R}^n} \kappa(t_0, x, t_1, y)\, \widehat{\varphi}_0(y)\, \mathrm{d}y, \quad (37a)$$
$$\varphi_1 \mapsto T_{10} \varphi_1 := \int_{\mathbb{R}^n} \kappa(t_1, x, t_0, y)\, \varphi_1(y)\, \mathrm{d}y, \quad (37b)$$

for measurable $\widehat{\varphi}_0(\cdot), \varphi_1(\cdot)$. These integral transforms help solve the LQ SBP (21) as follows. Suppose that the endpoint measures $\mu_0, \mu_1$ in (21c) are absolutely continuous with respective probability density functions (PDFs) $\rho_0, \rho_1$.
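The two listed properties, together with the composition (Chapman–Kolmogorov-type) structure that any fundamental solution of a time-homogeneous linear PDE satisfies, can be checked numerically in the scalar special case of Corollary 2. The sketch below (pure Python; the grid bounds, step sizes, and test values are our own illustrative choices) evaluates the $n = 1$ kernel given by (19)-(20) and verifies symmetry and composition by quadrature:

```python
import math

def kappa(T, x, y, w=2.0):
    """Scalar (n = 1) special-case kernel given by (19)-(20): reaction
    rate q = (w^2/4) x^2, i.e., Q = 2D with w = 2*sqrt(D)."""
    s, c = math.sinh(w * T), math.cosh(w * T)
    pref = math.sqrt(w / (4.0 * math.pi * s))
    return pref * math.exp(-w * ((x * x + y * y) * c - 2.0 * x * y) / (4.0 * s))

T1, T2, x, y = 0.3, 0.5, 0.4, -0.7

# spatial symmetry: kappa(t0, x, t, y) = kappa(t0, y, t, x)
sym_gap = abs(kappa(T1, x, y) - kappa(T1, y, x))

# composition over an intermediate time, via midpoint-rule quadrature in z
h, L = 0.002, 8.0
n = int(2 * L / h)
comp = sum(kappa(T1, x, -L + (k + 0.5) * h) * kappa(T2, -L + (k + 0.5) * h, y)
           for k in range(n)) * h
direct = kappa(T1 + T2, x, y)
```

Here `comp` and `direct` agree to high accuracy: the composition identity holds because $\kappa$ is the Green's function of the autonomous scalar equation (45), even though $\int \kappa \neq 1$.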
Standard computation shows that the necessary conditions for optimality for (21) yield a pair of second order coupled nonlinear PDEs:

$$\partial_t \rho^u_{\mathrm{opt}} + \nabla_x \cdot \big( \rho^u_{\mathrm{opt}} ( A_t x + B_t B_t^\top \nabla_x S ) \big) = \langle B_t B_t^\top, \nabla^2_x \rho^u_{\mathrm{opt}} \rangle, \quad (38a)$$
$$\partial_t S + \langle \nabla_x S, A_t x \rangle + \tfrac{1}{2} \langle \nabla_x S, B_t B_t^\top \nabla_x S \rangle + \langle B_t B_t^\top, \nabla^2_x S \rangle = \tfrac{1}{2} x^\top Q_t x, \quad (38b)$$
$$\rho^u_{\mathrm{opt}}(t = t_0, \cdot) = \rho_0(\cdot), \qquad \rho^u_{\mathrm{opt}}(t = t_1, \cdot) = \rho_1(\cdot), \quad (38c)$$

in the unknown primal-dual pair $\big( \rho^u_{\mathrm{opt}}(t, x), S(t, x) \big)$, i.e., the optimally controlled joint state PDF and the value function. The Hopf–Cole transform [34], [35], given by

$$\big( \rho^u_{\mathrm{opt}}, S \big) \mapsto (\widehat{\varphi}, \varphi) := \big( \rho^u_{\mathrm{opt}} \exp(-S), \exp S \big), \quad (39)$$

recasts (38) into a pair of boundary-coupled linear PDEs:

$$\partial_t \widehat{\varphi} = -\langle \nabla_x, \widehat{\varphi} A_t x \rangle + \langle B_t B_t^\top, \nabla^2_x \widehat{\varphi} \rangle - \tfrac{1}{2} x^\top Q_t x\, \widehat{\varphi}, \quad (40a)$$
$$\partial_t \varphi = -\langle \nabla_x \varphi, A_t x \rangle - \langle B_t B_t^\top, \nabla^2_x \varphi \rangle + \tfrac{1}{2} x^\top Q_t x\, \varphi, \quad (40b)$$
$$\widehat{\varphi}(t_0, \cdot)\, \varphi(t_0, \cdot) = \rho_0(\cdot), \qquad \widehat{\varphi}(t_1, \cdot)\, \varphi(t_1, \cdot) = \rho_1(\cdot). \quad (40c)$$

We note that the PDE (40a) is precisely the forward reaction-advection-diffusion PDE (33a), and the PDE (40b) is the associated backward reaction-advection-diffusion PDE.

The solution of (40) recovers the solution of (21). Specifically, $\forall t \in [t_0, t_1]$, the optimally controlled joint state PDF $\rho^u_{\mathrm{opt}}(t, x) = \widehat{\varphi}(t, x)\, \varphi(t, x)$, and the optimal control $u^{\mathrm{opt}}(t, x) = B_t^\top \nabla_x S(t, x) = B_t^\top \nabla_x \log \varphi(t, x)$. For $k \in \{0, 1\}$, letting $\widehat{\varphi}_k(\cdot) := \widehat{\varphi}(t_k, \cdot)$, $\varphi_k(\cdot) := \varphi(t_k, \cdot)$, the system (40) can be solved by the dynamic Sinkhorn recursion:

$$\widehat{\varphi}_0 \xrightarrow{\ T_{01}\ } \widehat{\varphi}_1 \xrightarrow{\ \rho_1 / \widehat{\varphi}_1\ } \varphi_1 \xrightarrow{\ T_{10}\ } \varphi_0 \xrightarrow{\ \rho_0 / \varphi_0\ } (\widehat{\varphi}_0)_{\mathrm{next}}, \quad (41)$$

involving the integral transforms (37). The recursion (41) is known [36], [37] to be contractive w.r.t. Hilbert's projective metric [38], [39]. For different variants of the dynamic Sinkhorn recursions in the SBP context, we refer the readers to [2], [4], [28], [40]. For discussions on (41) in the special case $(A_t, B_t) \equiv (0, I)$, see [5, Sec. 3.2].

From a computational point of view, having a handle on the kernel (36) helps us to apply $T_{01}, T_{10}$ in (37) during each pass of the recursion (41). This in turn helps to solve the LQ SBP (21) with arbitrary non-Gaussian $\mu_0, \mu_1$ having finite second moments. The solution for (21) when both $\mu_0, \mu_1$ are Gaussians appeared in [11] and did not need to derive the kernel.
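The recursion (41) is straightforward to prototype once the kernel is in hand. The following minimal sketch (pure Python; the grid, horizon, kernel parameter $\omega = 2$, and Gaussian endpoint PDFs are our own illustrative choices, not from the paper) runs the dynamic Sinkhorn sweeps with the scalar special-case kernel of Corollary 2, implementing $T_{01}, T_{10}$ by quadrature:

```python
import math

def kappa(T, x, y, w=2.0):
    # scalar (n = 1) special-case kernel given by (19)-(20)
    s, c = math.sinh(w * T), math.cosh(w * T)
    return math.sqrt(w / (4.0 * math.pi * s)) * \
        math.exp(-w * ((x * x + y * y) * c - 2.0 * x * y) / (4.0 * s))

def gauss(z, m, s):
    return math.exp(-(z - m) ** 2 / (2 * s * s)) / math.sqrt(2 * math.pi * s * s)

# grid, horizon, and endpoint PDFs (illustrative choices)
h, L, T = 0.1, 6.0, 0.5
grid = [-L + k * h for k in range(int(2 * L / h) + 1)]
rho0 = [gauss(z, -1.0, 0.5) for z in grid]
rho1 = [gauss(z, 1.0, 0.7) for z in grid]

K = [[kappa(T, x, z) for z in grid] for x in grid]  # kernel matrix, built once

def transform(phi):
    # quadrature version of (37); the scalar kernel is symmetric in (x, y),
    # so T01 and T10 share the same matrix here
    return [sum(Kxz * p for Kxz, p in zip(row, phi)) * h for row in K]

phi0_hat = [1.0] * len(grid)
changes = []
for _ in range(60):                       # dynamic Sinkhorn sweeps (41)
    phi1_hat = transform(phi0_hat)
    phi1 = [r / p for r, p in zip(rho1, phi1_hat)]
    phi0 = transform(phi1)
    nxt = [r / p for r, p in zip(rho0, phi0)]
    changes.append(max(abs(a - b) for a, b in zip(nxt, phi0_hat)))
    phi0_hat = nxt

# endpoint factorizations (40c), checked on the grid
res0 = max(abs(a * b - r) for a, b, r in zip(phi0_hat, phi0, rho0))
res1 = max(abs(a * b - r) for a, b, r in zip(phi1_hat, phi1, rho1))
```

The residuals `res0`, `res1` vanish by construction of the last half-sweep; convergence of the recursion itself shows up as the successive changes in $\widehat{\varphi}_0$ shrinking toward floating-point noise, consistent with the Hilbert-metric contraction cited above.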
However, the techniques and results in [11] are difficult to generalize for non-Gaussian $\mu_0, \mu_1$. This is what motivated our derivation of the kernel.

Remark 4. If the kernel $\kappa$ in (37) were not available, an alternative way to implement $T_{01}, T_{10}$ in (41) is to apply the Feynman–Kac path integral [41, Ch. 8.2], [42, Ch. 3.3], resulting in randomized numerical approximations for $\widehat{\varphi}_1, \varphi_0$. For instance, the Feynman–Kac path integral representation for (37b) is

$$T_{10} \varphi_1(x) = \mathbb{E}\left[ \varphi_1(x_{t_1}) \exp\left( -\int_{t}^{t_1} \tfrac{1}{2} x_\tau^\top Q_\tau x_\tau\, \mathrm{d}\tau \right) \,\Big|\, x_t = x \right], \quad (42)$$

and the conditional expectation can be approximated by Monte Carlo simulation. Having an explicit handle on $\kappa$ removes the need for such randomized approximation of the deterministic functions $\widehat{\varphi}_1, \varphi_0$.

To further understand the action of the derived kernel, notice that combining (36) and (37a) yields

$$T_{01} \widehat{\varphi}_0 = a \exp\left( -\int_{t_0}^{t} \theta(s)\, \mathrm{d}s \right) \exp\left( -\tfrac{1}{2} x^\top M_{11}(t, t_0) x \right) \int_{\mathbb{R}^n} \exp\left( -\tfrac{1}{2} y^\top M_{22}(t, t_0) y + \langle -M_{12}^\top(t, t_0) x, y \rangle \right) \widehat{\varphi}_0(y)\, \mathrm{d}y, \quad (43)$$

and likewise for $T_{10} \varphi_1$, where $M_{11}, M_{12}, M_{22}$ refer to the respective blocks of (30). For suitably smooth $\widehat{\varphi}_0$, the integral in (43) can be evaluated using Lemma 1 in Appendix A. We close with an example of such computation.

Example 1 (Mixture-of-Gaussian $\widehat{\varphi}_0$). For fixed $n_c \in \mathbb{N}$, let $\widehat{\varphi}_0$ be an $n_c$-component conic mixture-of-Gaussians, i.e.,

$$\widehat{\varphi}_0(y) = \sum_{i=1}^{n_c} w_i \exp\left( -\tfrac{1}{2} (y - m_i)^\top \Sigma_i^{-1} (y - m_i) \right),$$

where $w_i > 0$, $m_i \in \mathbb{R}^n$, $\Sigma_i \succ 0$ $\forall i \in [n_c]$.
Then (43) gives

$$T_{01} \widehat{\varphi}_0 = a \exp\left( -\int_{t_0}^{t} \theta(s)\, \mathrm{d}s \right) \exp\left( -\tfrac{1}{2} x^\top M_{11}(t, t_0) x \right) \sum_{i=1}^{n_c} w_i \exp\left( -\tfrac{1}{2} m_i^\top \Sigma_i^{-1} m_i \right) \int_{\mathbb{R}^n} \exp\Big( -\tfrac{1}{2} y^\top \big( M_{22}(t, t_0) + \Sigma_i^{-1} \big) y + \big\langle -M_{12}^\top(t, t_0) x + \Sigma_i^{-1} m_i,\ y \big\rangle \Big)\, \mathrm{d}y$$
$$= a (2\pi)^{n/2} \exp\left( -\int_{t_0}^{t} \theta(s)\, \mathrm{d}s \right) \exp\left( -\tfrac{1}{2} x^\top M_{11}(t, t_0) x \right) \sum_{i=1}^{n_c} \frac{w_i}{\sqrt{\det\big( M_{22}(t, t_0) + \Sigma_i^{-1} \big)}} \exp\Big\{ -\tfrac{1}{2} m_i^\top \Sigma_i^{-1} m_i + \tfrac{1}{2} \big( -M_{12}^\top(t, t_0) x + \Sigma_i^{-1} m_i \big)^\top \big( M_{22}(t, t_0) + \Sigma_i^{-1} \big)^{-1} \big( -M_{12}^\top(t, t_0) x + \Sigma_i^{-1} m_i \big) \Big\}, \quad (44)$$

where the last equality uses Lemma 1.

V. CONCLUDING REMARKS

We derived the Markov kernel for an LTV Itô diffusion in the
presence of killing of probability mass at a rate that is convex quadratic with a time-varying weight matrix. The resulting kernel is parameterized by the LTV matrix pair $(A_t, B_t)$ and the killing weight $Q_t \succeq 0$, in terms of the solution of a Riccati matrix ODE initial value problem.

The derived kernel has relevance in stochastic control: it is the Green's function for a reaction-advection-diffusion PDE that appears in solving the generic linear quadratic Schrödinger bridge problem. This problem concerns steering a given state distribution to another over a given finite horizon subject to controlled LTV diffusion while minimizing a cost-to-go that is quadratic in state and control. The solution for this problem has appeared in prior literature for the case when the endpoint distributions are Gaussians. It is also understood that for non-Gaussian endpoints, the problem can be solved via dynamic Sinkhorn recursions, which, however, require solving initial value problems involving the aforesaid reaction-advection-diffusion PDE within each epoch of the recursion, with updated initial conditions. By deriving the corresponding Green's function, our results facilitate this computation.

The results here also generalize our prior works where the Markov kernel was derived for the special case $(A_t, B_t) = (0, I)$ using generalized Hermite polynomials and Weyl calculus. However, for generic $(A_t, B_t)$ and $Q_t \succeq 0$, those techniques become unwieldy. To overcome this technical challenge, we pursued a new line of attack by postulating the structure of the Markov kernel in terms of a distance function induced by an underlying deterministic optimal control problem. Using this new technique, we demonstrated that both new and existing results can be recovered in a conceptually transparent manner even when the underlying Markov kernel is not a transition probability, as is the case here.
These interconnections between Markov kernels, distances, and optimal control should be of independent interest.

APPENDIX

A. Derivation of (20)

Consider $\kappa$ as in (9) with $\frac{1}{2}\mathrm{dist}^2_{tt_0}$ given by (19). For finding the pre-factor $c(t, t_0)$, we substitute (9) with (19) in the reaction-diffusion PDE in (6), i.e., in

$$\partial_t \kappa = \left( \Delta_x - \tfrac{1}{2} x^\top 2D x \right) \kappa = \left( \sum_{i=1}^{n} \partial^2_{x_i} - \frac{\omega_i^2}{4} x_i^2 \right) \kappa, \quad (45)$$

since $\omega_i^2 = 4 D_{ii}$ for all $i = 1, \ldots, n$. This substitution in (45), after algebraic simplification, results in the ODE

$$\dot{c} = c \left( -\frac{1}{2} \sum_{i=1}^{n} \omega_i \coth(\omega_i (t - t_0)) \right), \quad (46)$$

where $\dot{c}$ denotes the derivative of $c(t, t_0)$ w.r.t. $t$. Integrating (46) yields

$$\log c = \log a - \frac{1}{2} \sum_{i=1}^{n} \log \sinh(\omega_i (t - t_0)),$$

where $\log a$ is the numerical constant of integration. Thus,

$$c(t, t_0) = a \prod_{i=1}^{n} \frac{1}{(\sinh(\omega_i (t - t_0)))^{1/2}}. \quad (47)$$

Combining (47) with (9) and (19), we obtain

$$\kappa(t_0, x, t, y) = a \prod_{i=1}^{n} \frac{1}{\sqrt{\sinh(\omega_i (t - t_0))}} \exp\left( -\frac{1}{2} \frac{\omega_i (x_i^2 + y_i^2) \cosh(\omega_i (t - t_0)) - 2 \omega_i x_i y_i}{2 \sinh(\omega_i (t - t_0))} \right), \quad (48)$$

for all $0 \le t_0 \le t < \infty$. All that remains is to find the constant $a$. Letting $\tau := t - t_0$, and invoking the initial condition in (6) for (48), we have

$$\delta(x - y) = a \lim_{\tau \downarrow 0} \prod_{i=1}^{n} \frac{1}{\sqrt{\sinh(\omega_i \tau)}} \exp\left( -\frac{1}{2} \frac{\omega_i (x_i^2 + y_i^2) \cosh(\omega_i \tau) - 2 \omega_i x_i y_i}{2 \sinh(\omega_i \tau)} \right). \quad (49)$$

Integrating both sides of (49) w.r.t. $x$ over $\mathbb{R}^n$ gives

$$1 = a \left( \lim_{\tau \downarrow 0} \prod_{i=1}^{n} \frac{1}{\sqrt{\sinh(\omega_i \tau)}} \right) \int_{\mathbb{R}^n} \lim_{\tau \downarrow 0} \prod_{i=1}^{n} \exp\left( -\frac{1}{2} \frac{\omega_i (x_i^2 + y_i^2) \cosh(\omega_i \tau) - 2 \omega_i x_i y_i}{2 \sinh(\omega_i \tau)} \right) \mathrm{d}x, \quad (50)$$

wherein the LHS used the shift property of the Dirac delta: $\int_{\mathbb{R}^n} \delta(x - y) \cdot 1 \cdot \mathrm{d}x = 1$. Since

$$\frac{\omega_i (x_i^2 + y_i^2) \cosh(\omega_i \tau) - 2 \omega_i x_i y_i}{2 \sinh(\omega_i \tau)} = \begin{pmatrix} \sqrt{\omega_i}\, x_i \\ \sqrt{\omega_i}\, y_i \end{pmatrix}^\top \begin{pmatrix} \coth(\omega_i \tau) & -\mathrm{csch}(\omega_i \tau) \\ -\mathrm{csch}(\omega_i \tau) & \coth(\omega_i \tau) \end{pmatrix} \begin{pmatrix} \sqrt{\omega_i}\, x_i \\ \sqrt{\omega_i}\, y_i \end{pmatrix}$$

is a positive definite quadratic form, we have

$$0 \le \exp\left( -\frac{1}{2} \frac{\omega_i (x_i^2 + y_i^2) \cosh(\omega_i \tau) - 2 \omega_i x_i y_i}{2 \sinh(\omega_i \tau)} \right) \le 1.$$

Hence, using the dominated convergence theorem [43, Thm. 1.13], we exchange the limit and the integral in (50) to yield

$$1 = a \lim_{\tau \downarrow 0} \prod_{i=1}^{n} \frac{\exp\left( -\frac{1}{2} \frac{\omega_i y_i^2 \cosh(\omega_i \tau)}{2 \sinh(\omega_i \tau)} \right)}{\sqrt{\sinh(\omega_i \tau)}} \left( \int_{-\infty}^{\infty} \exp\left( -\frac{1}{2} \frac{\omega_i x_i^2 \cosh(\omega_i \tau) - 2 \omega_i x_i y_i}{2 \sinh(\omega_i \tau)} \right) \mathrm{d}x_i \right). \quad (51)$$

We need the following auxiliary lemma.

Lemma 1 (Central identity of quantum field theory; [44, p. 2], [45, p. 15]). For a fixed $n \times n$ matrix $A \succ 0$ and suitably smooth $f : \mathbb{R}^n \mapsto \mathbb{R}$,

$$\int_{\mathbb{R}^n} \exp\left( -\tfrac{1}{2} x^\top A x \right) f(x)\, \mathrm{d}x = \sqrt{\frac{(2\pi)^n}{\det(A)}}\, \exp\left( \tfrac{1}{2} \nabla_x^\top A^{-1} \nabla_x \right) f(x) \Bigg|_{x = 0}, \quad (52)$$

where the exponential of the differential operator is understood as a power series. As a special case, for $b \in \mathbb{R}^n$, we have $\int_{\mathbb{R}^n} \exp\left( -\tfrac{1}{2} x^\top A x + \langle b, x \rangle \right) \mathrm{d}x = \sqrt{\frac{(2\pi)^n}{\det A}} \exp\left( \tfrac{1}{2} b^\top A^{-1} b \right)$. In particular, for fixed scalars $a > 0$, $b \in \mathbb{R}$, we have

$$\int_{-\infty}^{\infty} \exp\left( -\tfrac{1}{2} a x^2 + b x \right) \mathrm{d}x = \sqrt{\frac{2\pi}{a}} \exp\left( \frac{1}{2} \frac{b^2}{a} \right).$$

Using Lemma 1, the integral in (51) evaluates to

$$\sqrt{\frac{4\pi \sinh(\omega_i \tau)}{\omega_i \cosh(\omega_i \tau)}}\, \exp\left( \frac{\omega_i y_i^2}{4 \sinh(\omega_i \tau) \cosh(\omega_i \tau)} \right).$$

Substituting back this value in (51), followed by simplification via the identity $1 - \cosh^2(\cdot) = -\sinh^2(\cdot)$, gives

$$1 = a \lim_{\tau \downarrow 0} \prod_{i=1}^{n} \sqrt{\frac{4\pi}{\omega_i \cosh(\omega_i \tau)}}\, \exp\left( -\frac{\omega_i y_i^2 \tanh(\omega_i \tau)}{4} \right) = a \prod_{i=1}^{n} \sqrt{\frac{4\pi}{\omega_i}}. \quad (53)$$

Therefore, $a = \prod_{i=1}^{n} \sqrt{\frac{\omega_i}{4\pi}}$, which upon substitution in (47), yields (20).

B. Proof of Proposition 3

As is well-known [29, p. 156], [46], for any $\tau \in [t_0, t]$, the solution of the Riccati ODE initial value problem (25) admits the linear fractional representation

$$\Pi(\tau, K_1, t) = (\Psi_{21}(t, \tau) + \Psi_{22}(t, \tau) K_1)(\Psi_{11}(t, \tau) + \Psi_{12}(t, \tau) K_1)^{-1}, \quad (54)$$

where

$$\Psi_{t\tau} := \begin{bmatrix} \Psi_{11}(t, \tau) & \Psi_{12}(t, \tau) \\ \Psi_{21}(t, \tau) & \Psi_{22}(t, \tau) \end{bmatrix} \in \mathbb{R}^{2n \times 2n} \quad (55)$$

is the state transition matrix of the linear Hamiltonian matrix ODE

$$\begin{pmatrix} \dot{X}_\tau \\ \dot{\Lambda}_\tau \end{pmatrix} = \begin{bmatrix} A_\tau & -\hat{B}_\tau \hat{B}_\tau^\top \\ -Q_\tau & -A_\tau^\top \end{bmatrix} \begin{pmatrix} X_\tau \\ \Lambda_\tau \end{pmatrix}, \qquad X_\tau, \Lambda_\tau \in \mathbb{R}^{n \times n}. \quad (56)$$

For the special case in hand, the coefficient matrix in (56) equals $\begin{bmatrix} 0 & -2I \\ -2D & 0 \end{bmatrix}$, and its (backward in time) state transition matrix becomes

$$\Psi_{t\tau} = \exp\left( \begin{bmatrix} 0 & 2I \\ 2D & 0 \end{bmatrix} (t - \tau) \right) = \begin{bmatrix} \cosh\big( 2\sqrt{D}(t - \tau) \big) & \big( \sqrt{D} \big)^{-1} \sinh\big( 2\sqrt{D}(t - \tau) \big) \\ \sqrt{D} \sinh\big( 2\sqrt{D}(t - \tau) \big) & \cosh\big( 2\sqrt{D}(t - \tau) \big) \end{bmatrix}, \quad (57)$$

where all hyperbolic functions act element-wise. To see the last equality, it suffices to note that for a $2 \times 2$ matrix $\begin{bmatrix} 0 & b \\ c & 0 \end{bmatrix}$ with $b, c > 0$, we have

$$\exp\left( \begin{bmatrix} 0 & b \\ c & 0 \end{bmatrix} (t - \tau) \right) = I \sum_{k=0}^{\infty} \frac{\big( \sqrt{bc}\,(t - \tau) \big)^{2k}}{(2k)!} + \begin{bmatrix} 0 & \sqrt{b/c} \\ \sqrt{c/b} & 0 \end{bmatrix} \sum_{k=0}^{\infty} \frac{\big( \sqrt{bc}\,(t - \tau) \big)^{2k+1}}{(2k+1)!} = \begin{bmatrix} \cosh(\sqrt{bc}\,(t - \tau)) & \sqrt{b/c} \sinh(\sqrt{bc}\,(t - \tau)) \\ \sqrt{c/b} \sinh(\sqrt{bc}\,(t - \tau)) & \cosh(\sqrt{bc}\,(t - \tau)) \end{bmatrix}.$$

From (54) and (57), we then get

$$\Pi(\tau, 0, t) = \Psi_{21}(t, \tau) (\Psi_{11}(t, \tau))^{-1} = \sqrt{D} \tanh\big( 2\sqrt{D}(t - \tau) \big). \quad (58)$$

Following (26), in this case, we also have

$$\hat{A}_\tau = -2 \Pi(\tau, 0, t) = -2\sqrt{D} \tanh\big( 2\sqrt{D}(t - \tau) \big) \quad (59)$$

for all $\tau \in [t_0, t]$. From (59), $\hat{A}_\tau$ and $\int_{t_0}^{\tau} \hat{A}_\sigma\, \mathrm{d}\sigma$ commute.
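The closed form (57) hinges on the series identity displayed above. A minimal numerical cross-check (pure Python; the function name and test values are our own illustrative choices) sums the truncated power series for the $2 \times 2$ exponential and compares it against the $\cosh$/$\sinh$ block formula:

```python
import math

def expm2(M, terms=60):
    """exp(M) for a 2x2 matrix M via the truncated power series sum M^k / k!."""
    R = [[1.0, 0.0], [0.0, 1.0]]          # running sum, starts at the identity
    P = [[1.0, 0.0], [0.0, 1.0]]          # current term M^k / k!
    for k in range(1, terms):
        P = [[sum(P[i][m] * M[m][j] for m in range(2)) / k for j in range(2)]
             for i in range(2)]
        R = [[R[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return R

b, c, T = 2.0, 1.5, 0.7
E = expm2([[0.0, b * T], [c * T, 0.0]])
w = math.sqrt(b * c)
F = [[math.cosh(w * T), math.sqrt(b / c) * math.sinh(w * T)],
     [math.sqrt(c / b) * math.sinh(w * T), math.cosh(w * T)]]
gap = max(abs(E[i][j] - F[i][j]) for i in range(2) for j in range(2))
```

The agreement follows from $\begin{bmatrix} 0 & b \\ c & 0 \end{bmatrix}^2 = bc\, I$, which is exactly what splits the exponential series into its even ($\cosh$) and odd ($\sinh$) parts.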
Hence the state transition matrix for the coefficient matrix $\hat{A}_\tau$ equals

$$\hat{\Phi}_{t\tau} = \exp\left( -2\sqrt{D} \int_{\tau}^{t} \tanh\big( 2\sqrt{D}(t - \sigma) \big)\, \mathrm{d}\sigma \right) = \exp\left( -2\sqrt{D} \int_{0}^{t - \tau} \tanh\big( 2\sqrt{D}\, s \big)\, \mathrm{d}s \right) = \exp\left( -\log \cosh\big( 2\sqrt{D}(t - \tau) \big) \right) = \mathrm{sech}\big( 2\sqrt{D}(t - \tau) \big). \quad (60)$$

Thus

$$\hat{\Gamma}_{t\tau} = \int_{\tau}^{t} \hat{\Phi}_{t\sigma} \hat{B}_\sigma \hat{B}_\sigma^\top \hat{\Phi}_{t\sigma}^\top\, \mathrm{d}\sigma = 2 \int_{\tau}^{t} \mathrm{sech}^2\big( 2\sqrt{D}(t - \sigma) \big)\, \mathrm{d}\sigma = D^{-1/2} \tanh\big( 2\sqrt{D}(t - \tau) \big),$$

so

$$\hat{\Gamma}^{-1}_{t\tau} = \sqrt{D} \coth\big( 2\sqrt{D}(t - \tau) \big). \quad (61)$$

Therefore, (30) simplifies to

$$M_{tt_0} = \begin{bmatrix} \sqrt{D} \coth\big( 2\sqrt{D}(t - t_0) \big) & -\sqrt{D}\, \mathrm{csch}\big( 2\sqrt{D}(t - t_0) \big) \\ -\sqrt{D}\, \mathrm{csch}\big( 2\sqrt{D}(t - t_0) \big) & \sqrt{D} \coth\big( 2\sqrt{D}(t - t_0) \big) \end{bmatrix}, \quad (62)$$

which is exactly formula (4.10) in [5], derived via a completely different computation. Substituting (62) in (29), and recalling that $\omega_i := 2\sqrt{D_{ii}}$ for all $i = 1, \ldots, n$, we recover (19). ■

C. Proof of Theorem 1

The proof is divided into three steps.

Step 1: Determining
$c(t, t_0)$ up to a constant $a$. Combining (9) with (29), we get

$$\log \kappa = \log c(t, t_0) - \tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top M_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix}. \quad (63)$$

Applying $\partial_t$ to both sides of (63) and rearranging gives

$$\partial_t \kappa = \kappa \left( \frac{\dot{c}}{c} - \tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top \dot{M}_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix} \right), \quad (64)$$

where the dot denotes derivative w.r.t. $t$. Applying $\nabla_x$ to both sides of (63), and using (30), we find

$$\nabla_x \log \kappa = \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \big( y - \hat{\Phi}_{tt_0} x \big) - \Pi(t_0, 0, t)\, x, \quad (65)$$

and since $\nabla^2_x = \nabla_x \circ \nabla_x$,

$$\nabla^2_x \log \kappa = -\hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} - \Pi(t_0, 0, t). \quad (66)$$

We next use the identity

$$\nabla^2_x \log \kappa = \frac{\nabla^2_x \kappa}{\kappa} - (\nabla_x \log \kappa)(\nabla_x \log \kappa)^\top \;\Rightarrow\; \nabla^2_x \kappa = \kappa \left\{ (\nabla_x \log \kappa)(\nabla_x \log \kappa)^\top + \nabla^2_x \log \kappa \right\}. \quad (67)$$

Substituting (65) and (66) in (67), we obtain

$$\nabla^2_x \kappa = \kappa \Big\{ \big( \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} ( y - \hat{\Phi}_{tt_0} x ) - \Pi(t_0, 0, t) x \big) \big( \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} ( y - \hat{\Phi}_{tt_0} x ) - \Pi(t_0, 0, t) x \big)^\top - \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} - \Pi(t_0, 0, t) \Big\}, \quad (68)$$

and thus, the diffusion term in the RHS of (33a) is

$$\langle B_t B_t^\top, \nabla^2_x \kappa \rangle = \kappa \big\langle B_t B_t^\top,\ \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} (y - \hat{\Phi}_{tt_0} x)(y - \hat{\Phi}_{tt_0} x)^\top \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} - \Pi x (y - \hat{\Phi}_{tt_0} x)^\top \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} - \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} (y - \hat{\Phi}_{tt_0} x) x^\top \Pi + \Pi x x^\top \Pi - \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} - \Pi(t_0, 0, t) \big\rangle. \quad (69)$$

We note that the advection term in the RHS of (33a) is

$$-\langle \nabla_x, \kappa A_t x \rangle = -\kappa \langle \nabla_x \log \kappa, A_t x \rangle - \kappa\, \mathrm{trace}\, A_t. \quad (70)$$

Substituting (64), (65), (69) and (70) in (33a), we arrive at

$$\frac{\dot{c}}{c} - \tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top \dot{M}_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix} = -\big\langle \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} (y - \hat{\Phi}_{tt_0} x), A_t x \big\rangle - \mathrm{trace}\, A_t + \big\langle B_t B_t^\top,\ \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} (y - \hat{\Phi}_{tt_0} x)(y - \hat{\Phi}_{tt_0} x)^\top \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} - \Pi x (y - \hat{\Phi}_{tt_0} x)^\top \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} - \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} (y - \hat{\Phi}_{tt_0} x) x^\top \Pi + \Pi x x^\top \Pi - \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} - \Pi(t_0, 0, t) \big\rangle - \tfrac{1}{2} x^\top Q_t x = -\theta(t) - \tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top S_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix}, \quad (71)$$

where the last equality grouped the spatially independent terms and the spatially dependent (quadratic in $x, y$) terms, then used the definition of $\theta(t)$ from (34a). In particular, the matrix

$$S_{tt_0} = \begin{bmatrix} S_{11}(t, t_0) & S_{12}(t, t_0) \\ S_{12}^\top(t, t_0) & S_{22}(t, t_0) \end{bmatrix} \in \mathbb{R}^{2n \times 2n}$$

comprises the $n \times n$ blocks

$$S_{11}(t, t_0) := -2 \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} \hat{A}_t - \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} \hat{B}_t \hat{B}_t^\top \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} - \Pi(t_0, 0, t) \hat{B}_t \hat{B}_t^\top \Pi(t_0, 0, t) + Q_t, \quad (72a)$$
$$S_{12}(t, t_0) := \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} \big( \hat{A}_t + \hat{B}_t \hat{B}_t^\top \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} \big), \quad (72b)$$
$$S_{22}(t, t_0) := -\hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} \hat{B}_t \hat{B}_t^\top \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0}. \quad (72c)$$

Since (71) holds for arbitrary $x, y \in \mathbb{R}^n$, equating the $x, y$-independent terms from both sides of (71), we conclude that $c$ solves the linear ODE

$$\dot{c} = -\theta(t)\, c. \quad (73)$$

The solution of (73) is

$$c(t, t_0) = a \exp\left( -\int_{t_0}^{t} \theta(s)\, \mathrm{d}s \right), \quad (74)$$

for a to-be-determined constant $a$. In Step 2, detailed next, we determine $a$. As a by-product of the above calculation, we get $\dot{M}_{tt_0} = S_{tt_0}$, but this will not be used hereafter.

Step 2: Determining the constant $a$. Substituting (74) and (29)-(30) in (9), letting $t \downarrow t_0$, and invoking the initial condition (33b), we get

$$\delta(x - y) = a \lim_{t \downarrow t_0} \exp\left( -\int_{t_0}^{t} \theta(\tau)\, \mathrm{d}\tau - \tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top M_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix} \right). \quad (75)$$

Integrating both sides of (75) w.r.t.
$x$, and using the shift property of the Dirac delta $\int_{\mathbb{R}^n} \delta(x - y) \cdot 1 \cdot \mathrm{d}x = 1$, we obtain

$$1 = a \int_{\mathbb{R}^n} \lim_{t \downarrow t_0} \exp\left( -\int_{t_0}^{t} \theta(\tau)\, \mathrm{d}\tau \right) \exp\left( -\tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top M_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix} \right) \mathrm{d}x. \quad (76)$$

As noted in Sec. IV-A, $M_{tt_0} \succ 0$, so for any $x, y \in \mathbb{R}^n$, the second exponential term in (76) takes values in $[0, 1]$. Also, under assumption A1, $\theta(\tau)$ in (34a) is bounded for $0 \le t_0 < t < \infty$, and so is $\exp\big( -\int_{t_0}^{t} \theta(\tau)\, \mathrm{d}\tau \big)$. Thus, the product of the two exponential terms in (76) is bounded w.r.t. $x$. Therefore, by the dominated convergence theorem, we exchange the order of the integral and the limit in (76), to find

$$1 = a \lim_{t \downarrow t_0} \exp\left( -\int_{t_0}^{t} \theta(\tau)\, \mathrm{d}\tau \right) \int_{\mathbb{R}^n} \exp\left( -\tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top M_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix} \right) \mathrm{d}x$$
$$= a \lim_{t \downarrow t_0} \exp\left( -\int_{t_0}^{t} \theta(\tau)\, \mathrm{d}\tau \right) \exp\left( -\tfrac{1}{2} y^\top M_{22}(t, t_0) y \right) \int_{\mathbb{R}^n} \exp\left( -\tfrac{1}{2} x^\top M_{11}(t, t_0) x + \langle -M_{12}(t, t_0) y, x \rangle \right) \mathrm{d}x$$
$$= a (2\pi)^{\frac{n}{2}} \lim_{t \downarrow t_0} \exp\left( -\int_{t_0}^{t} \theta(\tau)\, \mathrm{d}\tau \right) (\det M_{11}(t, t_0))^{-\frac{1}{2}} \exp\left( -\tfrac{1}{2} y^\top \big\{ M_{22}(t, t_0) - M_{12}^\top(t, t_0) M_{11}^{-1}(t, t_0) M_{12}(t, t_0) \big\} y \right), \quad (77)$$

wherein $M_{11}, M_{12}, M_{22}$ are the respective $n \times n$ blocks of (30), and the last step follows from Lemma 1. Invoking Lemma 1 here is in
turn made possible by the fact that $M_{11}$, being a sum of two positive definite matrices (see (30)), is positive definite.

Now the idea is to evaluate the limits in (77). For the exponential of the negative quadratic term, using (30), we find

$$M_{22}(t, t_0) - M_{12}^\top(t, t_0) M_{11}^{-1}(t, t_0) M_{12}(t, t_0) = \hat{\Gamma}^{-1}_{tt_0} - \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} \big( \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} + \Pi(t_0, 0, t) \big)^{-1} \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0}$$
$$= \hat{\Gamma}^{-1}_{tt_0} - \big( \hat{\Phi}^{-1}_{tt_0} \hat{\Gamma}_{tt_0} \big)^{-1} \big( \hat{\Phi}^\top_{tt_0} \hat{\Gamma}^{-1}_{tt_0} \hat{\Phi}_{tt_0} + \Pi(t_0, 0, t) \big)^{-1} \big( \hat{\Gamma}_{tt_0} \hat{\Phi}^{-\top}_{tt_0} \big)^{-1} = \hat{\Gamma}^{-1}_{tt_0} - \big( \hat{\Gamma}_{tt_0} + \hat{\Gamma}_{tt_0} \hat{\Phi}^{-\top}_{tt_0} \Pi(t_0, 0, t) \hat{\Phi}^{-1}_{tt_0} \hat{\Gamma}_{tt_0} \big)^{-1}, \quad (78)$$

thanks to the invertibility of $\hat{\Phi}_{tt_0}, \hat{\Gamma}_{tt_0}$. Using the Woodbury identity [47], [48, p. 19],

$$\big( \hat{\Gamma}_{tt_0} + \hat{\Gamma}_{tt_0} \hat{\Phi}^{-\top}_{tt_0} \Pi(t_0, 0, t) \hat{\Phi}^{-1}_{tt_0} \hat{\Gamma}_{tt_0} \big)^{-1} = \hat{\Gamma}^{-1}_{tt_0} - \hat{\Phi}^{-\top}_{tt_0} \big( \Pi^{-1}(t_0, 0, t) + \hat{\Phi}^{-1}_{tt_0} \hat{\Gamma}_{tt_0} \hat{\Phi}^{-\top}_{tt_0} \big)^{-1} \hat{\Phi}^{-1}_{tt_0}, \quad (79)$$

which upon substituting in (78), gives

$$M_{22}(t, t_0) - M_{12}^\top(t, t_0) M_{11}^{-1}(t, t_0) M_{12}(t, t_0) = \hat{\Phi}^{-\top}_{tt_0} \big( \Pi^{-1}(t_0, 0, t) + \hat{\Phi}^{-1}_{tt_0} \hat{\Gamma}_{tt_0} \hat{\Phi}^{-\top}_{tt_0} \big)^{-1} \hat{\Phi}^{-1}_{tt_0}. \quad (80)$$

Since $\hat{\Phi}_{t_0 t_0} = I$, $\hat{\Gamma}_{t_0 t_0} = 0$, $\Pi(t_0, 0, t_0) = 0$, we then have

$$\lim_{t \downarrow t_0} \exp\left( -\tfrac{1}{2} y^\top \big\{ M_{22}(t, t_0) - M_{12}^\top(t, t_0) M_{11}^{-1}(t, t_0) M_{12}(t, t_0) \big\} y \right) = \exp\left( -\tfrac{1}{2} y^\top \underbrace{\lim_{t \downarrow t_0} \big\{ \hat{\Phi}^{-\top}_{tt_0} \big( \Pi^{-1}(t_0, 0, t) + \hat{\Phi}^{-1}_{tt_0} \hat{\Gamma}_{tt_0} \hat{\Phi}^{-\top}_{tt_0} \big)^{-1} \hat{\Phi}^{-1}_{tt_0} \big\}}_{= 0}\, y \right) = 1. \quad (81)$$

Therefore, (77) simplifies to

$$1 = a (2\pi)^{\frac{n}{2}} \lim_{t \downarrow t_0} \exp\left( -\int_{t_0}^{t} \theta(\tau)\, \mathrm{d}\tau \right) (\det M_{11}(t, t_0))^{-\frac{1}{2}}. \quad (82)$$

Notice that, for the limit in (82) to be defined, the limit cannot be further distributed to the exponential and determinant factors. Since the limit of the reciprocal equals the reciprocal of the limit, and by the Jacobi identity $\exp \mathrm{trace}(\cdot) = \det \exp(\cdot)$, we re-write (82) as (35).

Step 3: Putting everything together. Substituting (35) in (74), and then substituting the resulting expression for $c(t, t_0)$ in (32), the statement follows. ■

D.
Proof of Corollary 2

Under the stated conditions, we proved in Proposition 3 that the term $\tfrac{1}{2} \begin{pmatrix} x \\ y \end{pmatrix}^\top M_{tt_0} \begin{pmatrix} x \\ y \end{pmatrix}$ in (36) reduces to (19). What remains is to show that the term $\exp\big( -\int_{t_0}^{t} \theta(s)\, \mathrm{d}s \big)$ in (36) reduces to $\prod_{i=1}^{n} \frac{1}{(\sinh(\omega_i(t - t_0)))^{1/2}}$ as found in (47), and to compute the pre-factor $a$. To this end, notice that (34a) in this case specializes to

$$\theta(s) = \mathrm{trace}\big( \hat{\Phi}^\top_{st_0} \hat{\Gamma}^{-1}_{st_0} \hat{\Phi}_{st_0} + \Pi(t_0, 0, s) \big) = \mathrm{trace}\big( \sqrt{D} \coth\big( 2\sqrt{D}(s - t_0) \big) \big) = \sum_{i=1}^{n} \sqrt{D_{ii}} \coth\big( 2\sqrt{D_{ii}}\,(s - t_0) \big), \quad (83)$$

where the second equality follows from (58), (60) and (61). Then

$$\exp\left( -\int_{t_0}^{t} \theta(s)\, \mathrm{d}s \right) = \lim_{\varepsilon \downarrow 0} \exp\left( -\sum_{i=1}^{n} \sqrt{D_{ii}} \int_{\varepsilon}^{t - t_0} \coth\big( 2\sqrt{D_{ii}}\, \tau \big)\, \mathrm{d}\tau \right) = \exp\left( -\sum_{i=1}^{n} \tfrac{1}{2} \log \sinh\big( 2\sqrt{D_{ii}}\,(t - t_0) \big) \right) = \prod_{i=1}^{n} \frac{1}{(\sinh(\omega_i(t - t_0)))^{1/2}}. \quad (84)$$

In the above, the first equality used (83) and the change-of-variable $s - t_0 \mapsto \tau$. The second equality follows from integration and the limit. The last equality used $\omega_i := 2\sqrt{D_{ii}}$. To compute the pre-factor $a$ for this case, we note from (62) that $\det M^{1/2}_{11}(t, t_0) = D^{1/4} \big( \coth\big( 2\sqrt{D}(t - t_0) \big) \big)^{1/2}$.
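The integration step in (84) uses the antiderivative $\frac{\mathrm{d}}{\mathrm{d}\tau} \tfrac{1}{2}\log\sinh(2\sqrt{D}\,\tau) = \sqrt{D}\coth(2\sqrt{D}\,\tau)$. A small quadrature sketch (pure Python; the numeric values are our own illustrative choices) confirms it on a truncated interval $[\varepsilon, T]$, mirroring the $\varepsilon \downarrow 0$ regularization written there:

```python
import math

def integrand(D, tau):
    # per-component integrand in (84): sqrt(D) * coth(2 sqrt(D) tau)
    u = 2.0 * math.sqrt(D) * tau
    return math.sqrt(D) * math.cosh(u) / math.sinh(u)

def midpoint(f, a, b, n=100_000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

D, eps, T = 1.3, 1e-2, 0.9
lhs = midpoint(lambda t: integrand(D, t), eps, T)
w = 2.0 * math.sqrt(D)
rhs = 0.5 * math.log(math.sinh(w * T) / math.sinh(w * eps))
```

The two quantities agree to quadrature accuracy; the divergence of $\coth$ at $\tau = 0$ is what forces the $\varepsilon$-regularized treatment of the limit in (35), per Remark 3.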
Then using the Jacobi identity detexp(·) = exptrace(·), we have a= (2π)−n/2lim t↓t0/braceleftbigg detM1/2 11(t,t0)/parenleftbigg exp/integraldisplayt t0θ(τ)dτ/parenrightbigg/bracerightbigg = (2π)−n/2D1/4lim t↓t0/braceleftbigg/parenleftBig coth/parenleftBig 2√ D(t−t0)/parenrightBig/parenrightBig1/2 ×/parenleftBig sinh/parenleftBig 2√ D(t−t0)/parenrightBig/parenrightBig1/2/bracerightbigg = (2π)−n/2D1/4lim t↓t0/parenleftBig cosh/parenleftBig 2√ D(t−t0)/parenrightBig/parenrightBig1/2 = (2π)−n/2D1/4, (85) where the second equality is due to (84). With the agiven by (85), exp(−/integraltextt t0θ(s)ds)given by (84), and the specialization of Mtt0as in Proposition 3, the kernel (36) specializes to [8, Eq. (43)] or that in [5, Eq. (A.22)]. /squaresolid REFERENCES [1] D. W. Stroock, Partial differential equations for probabalists . Cam- bridge University Press, 2008, no. 112. 12 [2] K. F. Caluya and A. Halder, “Wasserstein proximal algori thms for the Schr¨ odinger bridge problem: Density control with nonline ar drift,” IEEE Transactions on Automatic Control , vol. 67, no. 3, pp.
https://arxiv.org/abs/2504.15753v1
1163–1178, 2021.
[3] Y. Chen, T. T. Georgiou, and M. Pavon, "Stochastic control liaisons: Richard Sinkhorn meets Gaspard Monge on a Schrödinger bridge," SIAM Review, vol. 63, no. 2, pp. 249–313, 2021.
[4] A. M. Teter, Y. Chen, and A. Halder, "On the contraction coefficient of the Schrödinger bridge for stochastic linear systems," IEEE Control Systems Letters, 2023.
[5] A. M. Teter, W. Wang, and A. Halder, "Schrödinger bridge with quadratic state cost is exactly solvable," arXiv preprint arXiv:2406.00503, 2024.
[6] W. H. Fleming and R. W. Rishel, Deterministic and stochastic optimal control. Springer Science & Business Media, 2012, vol. 1.
[7] L. C. Evans, Partial differential equations. American Mathematical Society, 2022, vol. 19.
[8] A. M. Teter, W. Wang, and A. Halder, "Weyl calculus and exactly solvable Schrödinger bridges with quadratic state cost," in 2024 60th Annual Allerton Conference on Communication, Control, and Computing. IEEE, 2024, pp. 1–8.
[9] Y. Chen, T. Georgiou, and M. Pavon, "Optimal steering of inertial particles diffusing anisotropically with losses," in 2015 American Control Conference (ACC), 2015, pp. 1252–1257.
[10] ——, "Optimal steering of inertial particles diffusing anisotropically with losses," in 2015 American Control Conference (ACC). IEEE, 2015, pp. 1252–1257.
[11] Y. Chen, T. T. Georgiou, and M. Pavon, "Optimal steering of a linear stochastic system to a final probability distribution—part III," IEEE Transactions on Automatic Control, vol. 63, no. 9, pp. 3112–3118, 2018.
[12] E. Schrödinger, "Über die Umkehrung der Naturgesetze," Sitzungsberichte der Preuss. Akad. Wissen. Phys. Math. Klasse, Sonderausgabe, vol. IX, pp. 144–153, 1931.
[13] ——, "Sur la théorie relativiste de l'électron et l'interprétation de la mécanique quantique," in Annales de L'Institut Henri Poincaré, vol. 2, no. 4. Presses universitaires de France, 1932, pp. 269–310.
[14] Y. Chen, T. T. Georgiou, and M.
Pavon, "Optimal steering of a linear stochastic system to a final probability distribution, part I," IEEE Transactions on Automatic Control, vol. 61, no. 5, pp. 1158–1169, 2015.
[15] S. R. S. Varadhan, "On the behavior of the fundamental solution of the heat equation with variable coefficients," Communications on Pure and Applied Mathematics, vol. 20, no. 2, pp. 431–455, 1967.
[16] S. A. Molchanov, "Diffusion processes and Riemannian geometry," Russian Mathematical Surveys, vol. 30, no. 1, p. 1, 1975.
[17] K. Crane, C. Weischedel, and M. Wardetzky, "The heat method for distance computation," Communications of the ACM, vol. 60, no. 11, pp. 90–99, 2017.
[18] I. Chavel, Eigenvalues in Riemannian geometry. Academic Press, 1984.
[19] A. Grigor'yan and M. Noguchi, "The heat kernel on hyperbolic space," Bulletin of the London Mathematical Society, vol. 30, no. 6, pp. 643–650, 1998.
[20] E. P. Hsu, Stochastic analysis on manifolds. American Mathematical Soc., 2002, no. 38.
[21] R. W. Brockett, "Control theory and singular Riemannian geometry," in New Directions in Applied Mathematics: Papers Presented April 25/26, 1980, on the
Occasion of the Case Centennial Celebration. Springer, 1982, pp. 11–27.
[22] R. S. Strichartz, "Sub-Riemannian geometry," Journal of Differential Geometry, vol. 24, no. 2, pp. 221–263, 1986.
[23] M. Gromov, "Carnot-Carathéodory spaces seen from within," in Sub-Riemannian geometry. Springer, 1996, pp. 79–323.
[24] E. B. Lee and L. Markus, Foundations of optimal control theory. Wiley New York, 1967, vol. 87.
[25] A. Wakolbinger, "Schrödinger bridges from 1931 to 1991," in Proc. of the 4th Latin American Congress in Probability and Mathematical Statistics, Mexico City, 1990, pp. 61–79.
[26] D. Dawson, L. Gorostiza, and A. Wakolbinger, "Schrödinger processes and large deviations," Journal of Mathematical Physics, vol. 31, no. 10, pp. 2385–2388, 1990.
[27] C. Léonard, "Stochastic derivatives and generalized h-transforms of Markov processes," arXiv preprint arXiv:1102.3172, 2011.
[28] A. M. Teter, I. Nodozi, and A. Halder, "Probabilistic Lambert problem: Connections with optimal mass transport, Schrödinger bridge, and reaction-diffusion PDEs," SIAM Journal on Applied Dynamical Systems, vol. 24, no. 1, pp. 16–43, 2025.
[29] R. W. Brockett, Finite dimensional linear systems. John Wiley & Sons Inc., 2015.
[30] V. Kučera, "A review of the matrix Riccati equation," Kybernetika, vol. 9, no. 1, pp. 42–61, 1973.
[31] W. Porter, "On the matrix Riccati equation," IEEE Transactions on Automatic Control, vol. 12, no. 6, pp. 746–749, 1967.
[32] N. Viswanadham and D. Atherton, "On invariance of degree of controllability under state feedback," IEEE Transactions on Automatic Control, vol. 20, no. 2, pp. 271–273, 1975.
[33] R. Brockett, "Poles, zeros, and feedback: State space interpretation," IEEE Transactions on Automatic Control, vol. 10, no. 2, pp. 129–135, 1965.
[34] E. Hopf, "The partial differential equation $u_t + uu_x = \mu u_{xx}$," Communications on Pure and Applied Mathematics, vol. 3, no. 3, pp. 201–230, 1950.
[35] J. D.
Cole, "On a quasi-linear parabolic equation occurring in aerodynamics," Quarterly of Applied Mathematics, vol. 9, no. 3, pp. 225–236, 1951.
[36] Y. Chen, T. Georgiou, and M. Pavon, "Entropic and displacement interpolation: a computational approach using the Hilbert metric," SIAM Journal on Applied Mathematics, vol. 76, no. 6, pp. 2375–2396, 2016.
[37] S. D. Marino and A. Gerolin, "An optimal transport approach for the Schrödinger bridge problem and convergence of Sinkhorn algorithm," Journal of Scientific Computing, vol. 85, no. 2, p. 27, 2020.
[38] P. J. Bushell, "Hilbert's metric and positive contraction mappings in a Banach space," Archive for Rational Mechanics and Analysis, vol. 52, pp. 330–338, 1973.
[39] B. Lemmens and R. Nussbaum, "Birkhoff's version of Hilbert's metric and its applications in analysis," Handbook of Hilbert Geometry, pp. 275–303, 2014.
[40] K. F. Caluya and A. Halder, "Reflected Schrödinger bridge: Density control with path constraints," in 2021 American Control Conference (ACC). IEEE, 2021, pp. 1137–1142.
[41] B. Øksendal, Stochastic differential equations: an introduction with applications. Springer Science & Business Media, 2013.
[42] J. Yong and X. Y
. Zhou, Stochastic controls: Hamiltonian systems and HJB equations. Springer Science & Business Media, 2012, vol. 43.
[43] E. M. Stein and R. Shakarchi, Real analysis: measure theory, integration, and Hilbert spaces. Princeton University Press, 2009.
[44] J. Zinn-Justin, Quantum field theory and critical phenomena. Oxford University Press, 2021, vol. 171.
[45] A. Zee, Quantum field theory in a nutshell. Princeton University Press, 2010, vol. 7.
[46] M. A. Shayman, "Phase portrait of the matrix Riccati equation," SIAM Journal on Control and Optimization, vol. 24, no. 1, pp. 1–65, 1986.
[47] W. W. Hager, "Updating the inverse of a matrix," SIAM Review, vol. 31, no. 2, pp. 221–239, 1989.
[48] R. A. Horn and C. R. Johnson, Matrix analysis. Cambridge University Press, 2nd ed., 2012.
Multiscale detection of practically significant changes in a gradually varying time series

Patrick Bastian, Holger Dette
Ruhr-Universität Bochum, Fakultät für Mathematik, 44780 Bochum, Germany

arXiv:2504.15872v1 [stat.ME] 22 Apr 2025

Abstract

In many change point problems it is reasonable to assume that, compared to a benchmark at a given time point $t_0$, the properties of the observed stochastic process change gradually over time for $t > t_0$. Often, these gradual changes are not of interest as long as they are small (nonrelevant), but one is interested in the question whether the deviations are practically significant in the sense that the deviation of the process compared to the time $t_0$ (measured by an appropriate metric) exceeds a given threshold which is of practical significance (relevant change). In this paper we develop a novel and powerful change point analysis for detecting such deviations in a sequence of gradually varying means, which is compared with the average mean from a previous time period. Current approaches to this problem suffer from low power, rely on the selection of smoothing parameters and require a rather regular (smooth) development of the means. We develop a multiscale procedure that alleviates all these issues, validate it theoretically and demonstrate its good finite sample performance on both synthetic and real data.

1 Introduction

Change point analysis is a ubiquitous topic in mathematical statistics with numerous applications in diverse areas such as economics, climatology and linguistics. In this paper we are concerned with the detection of (gradual) changes in the univariate time series
$$X_i = \mu(i/n) + \epsilon_i, \qquad i = 1, \dots, n, \qquad (1.1)$$
where $(\epsilon_i)_{1 \le i \le n}$ is a centered error process and $\mu: [0,1] \to \mathbb{R}$ is an unknown (mean) function.
Starting with the seminal work of Page (1955), a large part of the literature considers the case where the function $\mu$ is piecewise constant with at most one change, and we refer to the reviews of Aue and Horváth (2013) and Aue and Kirch (2023), to the recent textbook of Horváth and Rice (2024) and the references therein. More recently, the problem of detecting multiple changes has found considerable interest as well. Among many others, we mention Fryzlewicz and Rao (2013), Frick et al. (2014), Baranowski et al. (2019) and Dette et al. (2020), who considered one-dimensional data, and Chen et al. (2022), Li et al. (2023) and Madrid Padilla et al. (2022) for some recent results in the high-dimensional and multivariate case. A good review of the current state of the art on data segmentation of a piecewise constant signal can be found in the recent papers of Truong et al. (2020) and Cho and Kirch (2024). A common aspect of the methodology in most of these references is that it is based on the construction of testing procedures for the hypothesis $H_0: \mu(t) = c$ for all $t \in [0,1]$, where $c$ is an unknown constant, and the different procedures address different forms of the piecewise constant function $\mu$ under the alternative. Here a large part of the literature has its focus on piecewise stationary alternatives. While there are many applications where this can be well justified, at least
approximately (see, for instance, Aston and Kirch, 2012; Hotz et al., 2013, for some examples), there exist also many situations where it is not reasonable to assume that the mean function $\mu$ in model (1.1) is piecewise constant over the full period, because it varies continuously and smoothly between potential jump points. Examples include climate data (Karl et al., 1995), where the mean is continuously varying with no jumps, financial data (Vogt and Dette, 2015) and medical data (Gao et al., 2018). In this context, Müller (1992), Gijbels et al. (2007) and Gao et al. (2008), among others, considered model (1.1) with a gradually varying mean function and sudden jumps, and developed change point analysis for determining the locations of these jumps. Vogt and Dette (2015) considered the problem of detecting gradual changes in a more general context. For the special case of the mean they assumed that $\mu$ is constant for some time, then slowly starts to change (with no jumps), and developed a fully nonparametric method to estimate such a smooth change point. The present paper differs from these works. Although we consider a model of the form (1.1) with a piecewise smooth regression function and potential jumps, we are not interested in the locations of the jump points or in the first time point where the mean function starts to vary. Instead, we define a change differently, namely as a practically significant deviation of $\mu$ on the interval $[0,1]$ from a given benchmark, say $\mu_0^{t_0}$. More precisely, for a constant $\mu_0^{t_0}$ and a threshold $\Delta > 0$ we are interested in the hypotheses
$$H_0(\Delta): \sup_{t \in [t_0,1]} |\mu(t) - \mu_0^{t_0}| \le \Delta \quad \text{vs.} \quad H_1(\Delta): \sup_{t \in [t_0,1]} |\mu(t) - \mu_0^{t_0}| > \Delta, \qquad (1.2)$$
where $t_0 \in [0,1)$ is a given point defining the time interval of interest.
A prominent example for the consideration of these hypotheses is the analysis of global mean temperature anomalies, where one is interested in a significant deviation of the current temperatures from a reference value $\mu_0^{t_0}$ at time $t_0$ (such as the average temperature before time $t_0$), and an important problem is to investigate whether these deviations exceed $\Delta$ after the time $t_0$, such as 1.5 degrees Celsius as postulated in the Paris Agreement. Hypotheses of the form (1.2) are also considered in quality control (often in an online framework), where one is interested in the "stability" of a given process. This means that the sequence of means stays within a predefined range (as specified by the null hypothesis in (1.2)). While in change point analysis the focus is often on testing for the presence of a change and on estimating the time at which a change occurs once it has been detected, quality control typically focuses more on detecting such a change as quickly as possible after it occurs (see, for example, Woodall and Montgomery, 1999). Despite their importance, a test for the hypotheses in (1.2) has only recently been developed by Bücher et al. (2021), who used local linear regression techniques to estimate the quantity $\sup_{t \in [t_0,1]} |\mu(t) - \mu_0^{t_0}|$. They proposed to reject the null hypothesis for large values of the estimate, where critical values
are either obtained by asymptotic theory (which shows that a properly scaled version of the estimate converges weakly to a Gumbel-type distribution) or by resampling based on a Gaussian approximation. As a consequence, the resulting procedure suffers from several deficits. First, it is conservative in finite samples, particularly for small sample sizes. Second, the mean function $\mu$ in model (1.1) has to be twice differentiable and the difference $\mu(t) - \mu_0^{t_0}$ has to satisfy some convexity properties, making the method unreliable for less smooth or discontinuous functions. Finally, the procedure proposed in Bücher et al. (2021) relies on a bandwidth parameter that also prevents detection of local alternatives at the standard parametric rate $n^{-1/2}$. Our contribution in this paper is a novel multiscale test for the hypotheses (1.2) that does not suffer from the aforementioned problems. More precisely, we compare (an estimate of) the benchmark with local means calculated over different scales and establish the weak convergence of this statistic (see Theorem 2.2). The limit is not distribution free and depends on sequences of "extremal sets", which can only be defined implicitly due to certain properties of the Brownian motion. Based on this result, we first propose a testing procedure that relies on a distributional upper bound of the limiting distribution and only requires estimation of a long-run variance. By construction, the resulting test is conservative, but it can detect local alternatives at a parametric rate and already outperforms the existing methodology in our finite sample study. By estimating the extremal sets we are able to construct a more elaborate procedure which achieves the nominal level asymptotically. The resulting test can detect local alternatives at a parametric rate as well and yields a further substantial improvement of the finite sample performance.
Summarizing, both novel multiscale tests for practically relevant changes in the gradually changing mean function can detect local alternatives converging to the null at a faster rate than the currently available procedure. In contrast to that test, they are applicable for piecewise smooth mean functions without further constraints on their geometry and do not require the choice of smoothing parameters.

2 Multiscale detection of relevant changes

In the following we consider the location-scale model
$$X_i = \mu(i/n) + \epsilon_i, \qquad i = 1, \dots, n, \qquad (2.1)$$
where $(\epsilon_i)_{i \in \mathbb{N}}$ is a stationary centered process and $\mu$ is a bounded and piecewise Lipschitz-continuous function. We are interested in detecting significant deviations of the function $\mu$ on the interval $[t_0, 1]$ from its long-term average in the past,
$$\mu_0^{t_0} := \frac{1}{t_0} \int_0^{t_0} \mu(x)\,\mathrm{d}x,$$
where $0 < t_0 < 1$ is some predefined point in time. We address this problem by testing the hypotheses
$$H_0(\Delta): d_\infty \le \Delta \quad \text{vs.} \quad H_1(\Delta): d_\infty > \Delta, \qquad (2.2)$$
where
$$d_\infty = \sup_{t \in [t_0,1]} |\mu(t) - \mu_0^{t_0}| \qquad (2.3)$$
denotes the maximum (absolute) deviation of the function $\mu$ from $\mu_0^{t_0}$ over the interval $[t_0, 1]$ and $\Delta > 0$ is a given threshold. A typical application for this benchmark is encountered in the analysis of temperature data, where deviations from a pre-industrial average temperature are of interest. In this case the choice of threshold is also quite clear. For example, if we are interested in the
global mean temperature anomalies, a reasonable choice is $\Delta = 1.5$ degrees Celsius, corresponding to the Paris Agreement adopted at the UN Climate Change Conference (COP21) in Paris, 2015. In other circumstances the choice of $\Delta$ might not be so obvious and has to be carefully discussed for each application. We defer further discussion of this issue to Remark 2.7, where we also propose a data-based choice of the threshold $\Delta$. In the remainder of this section we introduce the necessary concepts and assumptions to define and theoretically analyze a multiscale test statistic for the testing problem (2.2).

Assumption 2.1. The random variables in model (2.1) form a triangular array of real-valued random variables, where $(\epsilon_i)_{i \in \mathbb{Z}}$ is a mean zero stationary sequence with existing long-run variance $\sigma^2 = \sum_{i \in \mathbb{Z}} E[\epsilon_0 \epsilon_i]$.

(A1) For some $0 < p < 1/2$ there exists a standard Brownian motion $B$ such that for all $k \in \mathbb{N}$
$$\Big| \sum_{i=1}^{k} \epsilon_i - \sigma B(k) \Big| \le C k^{1/2 - p}$$
almost surely for some constant $C > 0$.

(A2) The function $\mu: [0,1] \to \mathbb{R}$ is piecewise Lipschitz continuous with finitely many jumps.

Assumption (A1) is a high-level assumption that is standard in the literature and is satisfied by a large class of weakly dependent time series (see, for instance, Dehling (1983) for mixing, Wu (2005) for physically dependent and Berkes et al. (2011) for $L^p$-$m$-approximable processes). Assumption (A2) is a weak regularity assumption on the function $\mu$ that can in principle be weakened to Hölder continuity with some additional technical effort. We now introduce the test statistic, and to that end denote for $j < k$ by
$$\hat{\mu}_j^k = \frac{1}{k - j} \sum_{i=j+1}^{k} X_i$$
the (local) mean of the observations $X_{j+1}, \dots, X_k$. Note that
$$E\big[\hat{\mu}_j^k\big] = \frac{1}{k - j} \sum_{i=j+1}^{k} \mu(i/n) \simeq \frac{n}{k - j} \int_{j/n}^{k/n} \mu(t)\,\mathrm{d}t,$$
and therefore $\hat{\mu}_0^{\lfloor n t_0 \rfloor} - \hat{\mu}_j^k$ (approximately) compares the integral of the function $\mu$ over the interval $[0, \lfloor n t_0 \rfloor / n]$ with the "local" integral over the interval $[j/n, k/n]$, which is approximately given by $\frac{n}{k - j} \int_{j/n}^{k/n} \mu(t)\,\mathrm{d}t \approx \mu\big(\tfrac{k + j}{2n}\big)$ if $k - j$ is small.
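Since the procedure evaluates local means $\hat{\mu}_j^k$ over all admissible pairs $(j, k)$, it is convenient to precompute one cumulative sum so that each local mean costs $O(1)$. A minimal sketch (the helper name `make_local_mean` and the 0-based indexing convention are ours, not the paper's):

```python
import numpy as np

def make_local_mean(x):
    """Return a function mu_hat(j, k) = mean of X_{j+1}, ..., X_k,
    computed in O(1) per query from one cumulative sum of the data x."""
    s = np.concatenate([[0.0], np.cumsum(x)])  # s[k] = X_1 + ... + X_k

    def mu_hat(j, k):
        return (s[k] - s[j]) / (k - j)

    return mu_hat

x = np.arange(1.0, 11.0)      # X_1, ..., X_10 = 1, ..., 10
mu_hat = make_local_mean(x)
print(mu_hat(0, 4))           # mean of X_1, ..., X_4 = 2.5
print(mu_hat(4, 10))          # mean of X_5, ..., X_10 = 7.5
```

The benchmark $\hat{\mu}_0^{\lfloor n t_0 \rfloor}$ is then simply `mu_hat(0, int(n * t0))`.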
We now consider these differences on different scales and define, for a sequence $(c_n)_{n \in \mathbb{N}}$ of natural numbers such that
$$c_n \to \infty \quad \text{and} \quad \frac{n^{1 - 2p}}{c_n} = o(1), \qquad (2.4)$$
the test statistic
$$\hat{T}_{n,\Delta} = \sup_{\substack{c_n \le c \le n - \lfloor n t_0 \rfloor \\ c \in \mathbb{N}}} \; \sup_{\substack{|k - j| = c \\ k > j \ge \lfloor n t_0 \rfloor}} \Big\{ \sqrt{c}\,\big| \hat{\mu}_0^{\lfloor n t_0 \rfloor} - \hat{\mu}_j^k \big| - \sqrt{2 \log \tfrac{n e}{c}} - \sqrt{c}\,\Delta \Big\}. \qquad (2.5)$$
By the discussion of the previous paragraph, $\hat{\mu}_j^k$ estimates $\mu\big(\tfrac{k + j}{2n}\big)$ for smaller scales, which ensures that relatively short excursions of the function $t \mapsto |\mu(t) - \mu_0^{t_0}|$ above $\Delta$ are detected. For larger scales the statistic (2.5) is able to take advantage of longer excursions of the function $t \mapsto |\mu(t) - \mu_0^{t_0}|$ above the threshold $\Delta$, thereby increasing the power of the test substantially. The additive factor
$$\Gamma_n(c) := \sqrt{2 \log \tfrac{n e}{c}}$$
equalizes the magnitude of the different scales, which would otherwise be dominated by the small scales. We also note that the scaling factor $\sqrt{c}$ depends on $n$ and is of larger and smaller order than $n^{1/2 - p}$ and $n^{1/2}$, respectively, which leads to a non-trivial asymptotic distribution of the statistic $\hat{T}_{n,\Delta}$ in the case $d_\infty = \Delta$, which we call the boundary of the hypotheses. Our first main result provides such a weak convergence result for the statistic
$$\hat{T}_n = \sup_{\substack{c_n \le c \le n - \lfloor n t_0 \rfloor \\ c \in \mathbb{N}}} \; \sup_{\substack{|k - j| = c \\ k > j \ge \lfloor n t_0 \rfloor}} \Big\{ \sqrt{c}\,\big| \hat{\mu}_0^{\lfloor n t_0 \rfloor} - \hat{\mu}_j^k \big| - \Gamma_n(c) - \sqrt{c}\,d_\infty \Big\},$$
which reduces to the statistic $\hat{T}_{n,\Delta}$ in (2.5) if the centering term $d_\infty$ defined in (2.3) is replaced by the threshold $\Delta$.

Theorem 2.2. Grant assumptions (A1) and (A2). We then have
$$\hat{T}_n \xrightarrow{d} T_{d_\infty}, \qquad (2.6)$$
where
$$T_{d_\infty} := \sigma \lim_{\epsilon \downarrow 0} \sup_{(s,t) \in A_{\epsilon, d_\infty}} \Big\{ s(s,t) \Big( \sqrt{t - s}\,\frac{B(t_0)}{t_0} - \frac{B(t) - B(s)}{\sqrt{t - s}} \Big) - \Gamma(t - s) \Big\}, \qquad (2.7)$$
$B$ denotes a standard Brownian motion and
$$A_{\epsilon, d_\infty} = \Big\{ (s,t) \in [t_0, 1]^2 \;\Big|\; s < t, \; \big| \mu_0^{t_0} - \mu_s^t \big| \ge d_\infty - \epsilon \Big\}, \qquad (2.8)$$
$$\mu_s^t = \frac{1}{t - s} \int_s^t \mu(x)\,\mathrm{d}x, \qquad s(s,t) = \operatorname{sgn}\big( \mu_0^{t_0} - \mu_s^t \big), \qquad \Gamma(t - s) = \sqrt{2 \log \tfrac{e}{t - s}}. \qquad (2.9)$$
Moreover, the distribution of the random variable $T_{d_\infty}$ is continuous. In particular,
(1) if $d_\infty = \Delta$, we have $\hat{T}_{n,\Delta} \xrightarrow{d} T_\Delta$;
(2) if $d_\infty < \Delta$ or $d_\infty > \Delta$, we have $\hat{T}_{n,\Delta} \xrightarrow{P} -\infty$ or $\hat{T}_{n,\Delta} \xrightarrow{P} \infty$, respectively.

Note that the limit distribution in Theorem 2.2 is not distribution free even if the long-run variance of the error process $(\epsilon_i)_{i \in \mathbb{N}}$ were known. In fact, this distribution depends in a rather delicate way on the regression function $\mu$, which appears in the definition of $T_{d_\infty}$ through the set $A_{\epsilon, d_\infty}$. This is in contrast to many other multiscale tests proposed in the literature (see, for example, Dümbgen and Spokoiny, 2001; Dümbgen and Walther, 2008; Schmidt-Hieber et al., 2013; Dette et al., 2020). The difference can be explained by the fact that these and, to the best of our knowledge, all other papers on multiscale testing do not consider relevant hypotheses of the form (1.2) and (2.3) with $\Delta > 0$. In fact, transferring the hypotheses considered in the multiscale testing literature so far to the situation considered in this paper yields the testing problem for the "classical" hypotheses
$$H_0: d_\infty = 0 \quad \text{vs.} \quad H_1: d_\infty > 0,$$
which corresponds to the choice $\Delta = 0$ in (2.2). It follows from the arguments given in the proof of Theorem 2.2 that in the case $d_\infty = 0$,
$$\hat{T}_n \xrightarrow{d} \sigma M,$$
where the random variable $M$ is defined by
$$M = \sup_{t_0 \le s < t \le 1} \Big\{ \Big| \sqrt{t - s}\,\frac{B(t_0)}{t_0} - \frac{B(t) - B(s)}{\sqrt{t - s}} \Big| - \Gamma(t - s) \Big\}. \qquad (2.10)$$
As the quantiles of the distribution of $M$ can be obtained by simulation, we can already use this result for the construction of a valid (conservative) inference procedure for the hypotheses (2.2), employing the upper (distributional) bound
$$P(T_\Delta > q) \le P(\sigma M > q) \qquad (2.11)$$
and estimating the long-run variance $\sigma^2$. To be specific, following Wu and Zhao (2007) we define
$$\hat{\sigma}^2 = \frac{1}{\lfloor n/m \rfloor - 1} \sum_{j=1}^{\lfloor n/m \rfloor - 1} \frac{m}{2} \Big( \hat{\mu}_{(j-1)m}^{jm} - \hat{\mu}_{jm}^{(j+1)m} \Big)^2 \qquad (2.12)$$
as an estimator of the long-run variance, where the parameter $m \in \mathbb{N}$ converges to $\infty$ as $n \to \infty$ and is proportional to $n^{1/3}$. The null hypothesis is then rejected whenever
$$\hat{T}_{n,\Delta} \ge \hat{\sigma} q_{1 - \alpha}, \qquad (2.13)$$
where $q_{1 - \alpha}$ denotes the $(1 - \alpha)$ quantile of the distribution of $M$. We will show in Section 5.4 of the appendix that (under the assumptions made in this paper) the estimator $\hat{\sigma}^2$ is consistent for $\sigma^2$, that is, $\hat{\sigma}^2 = \sigma^2 + O_P(n^{-1/3})$, which yields the following result.

Theorem 2.3. Under assumptions (A1) and (A2) the test defined by (2.13) is consistent and has asymptotic level $\alpha$.

We continue by investigating the asymptotic power properties of the test (2.13), considering a class of local alternatives of the form
$$\mu_n(t) - \mu_0^{t_0} = \Delta + \beta_n h(t), \qquad (2.14)$$
where $h$ is some non-negative and Lipschitz continuous function.

Theorem 2.4. Let
assumptions (A1) and (A2) be satisfied and consider local alternatives of the form (2.14) with $\beta_n = n^{-1/2}$. Then
$$\hat{T}_{n,\Delta} \xrightarrow{d} \sigma \sup_{t_0 \le s < t \le 1} \Big\{ \sqrt{t - s}\,\frac{B(t_0)}{t_0} - \frac{B(t) - B(s)}{\sqrt{t - s}} - \Gamma(t - s) + \frac{1}{\sqrt{t - s}} \int_s^t h(x)\,\mathrm{d}x \Big\}.$$

We collect some observations that follow from this result in the following remark.

Remark 2.5. (1) By Theorem 2.4, the test (2.13) can detect local alternatives converging to the null hypothesis at a parametric rate $n^{-1/2}$. This establishes a substantial improvement over the results obtained in Bücher et al. (2021), where the nonparametric rate $\beta_n \simeq \sqrt{\log(h_n^{-1})/(n h_n)}$ is required to obtain non-trivial power. Here $h_n$ is the bandwidth used for the local linear estimator of the regression function $\mu$.

(2) It is clear from the inequality (2.11) that the test (2.13) is in general conservative, even in the case that $|\mu(t) - \mu_0^{t_0}| = \Delta$ for all $t \in [t_0, 1]$. Comparing the definitions of the random variables $T_\Delta$ and $M$ in (2.7) and (2.10), respectively, the difference $P(T_\Delta > q) - P(\sigma M > q)$ will be large for those models where $d_\infty = \Delta$ and where at the same time the set $\{ s \in [t_0, 1] \mid |\mu_0^{t_0} - \mu(s)| < \Delta \}$ is "large". This indicates the need for a test procedure that is able to take into account the structure of the sets $A_{\epsilon, d_\infty}$ appearing in the definition of the random variable $T_{d_\infty}$ in (2.6).

To alleviate the issue raised in the second part of the previous remark, we will develop an alternative test which uses quantiles from a distribution that approximates the distribution of the random variable $T_\Delta$ more directly. Obviously, such an approach has to take the estimation of the sets $A_{\epsilon, d_\infty}$ in (2.8) into account. For this purpose we define, for a scale parameter $c \in \mathbb{N}$,
$$\hat{E}_c = \Big\{ (j,k) \in \{ \lfloor n t_0 \rfloor, \dots, n \}^2 \;\Big|\; k - j = c, \; \big| \hat{\mu}_0^{\lfloor n t_0 \rfloor} - \hat{\mu}_j^k \big| \ge \hat{d}_{\infty,c} - \hat{\sigma} \log(n)/\sqrt{c} \Big\}$$
as an estimator of the extremal set at scale $c$, where
$$\hat{d}_{\infty,c} = \max_{k - j = c, \; \lfloor n t_0 \rfloor \le j < k \le n} \big| \hat{\mu}_0^{\lfloor n t_0 \rfloor} - \hat{\mu}_j^k \big|.$$
Note that the sequence of sets
$$\hat{A}_n = \bigcup_{c_n \le c \le n - \lfloor n t_0 \rfloor} \hat{E}_c$$
may heuristically be interpreted as a sequence of estimators for the sets $A_{\epsilon, d_\infty}$ for a suitable sequence of $\epsilon \downarrow 0$.
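Since the random variable $M$ in (2.10) involves only a standard Brownian motion, its quantiles can be approximated by Monte Carlo simulation on a grid. The following is a minimal sketch (the function name, grid width and replicate count are illustrative choices of ours, not the settings used by the authors):

```python
import numpy as np

def simulate_M(t0=0.25, grid=400, reps=200, seed=0):
    """Draw `reps` approximate realizations of the supremum M in (2.10),
    with the Brownian motion discretized on a grid of width 1/grid."""
    rng = np.random.default_rng(seed)
    ts = np.linspace(0.0, 1.0, grid + 1)
    i0 = int(round(t0 * grid))                 # grid index of t0
    tail = ts[i0:]                             # grid points in [t0, 1]
    iu, ju = np.triu_indices(len(tail), k=1)   # all pairs with s < t
    u = tail[ju] - tail[iu]                    # t - s
    gamma = np.sqrt(2.0 * np.log(np.e / u))    # penalty Gamma(t - s)
    out = np.empty(reps)
    for r in range(reps):
        steps = rng.normal(0.0, np.sqrt(1.0 / grid), grid)
        B = np.concatenate([[0.0], np.cumsum(steps)])
        seg = B[i0:][ju] - B[i0:][iu]          # B(t) - B(s)
        vals = np.abs(np.sqrt(u) * B[i0] / t0 - seg / np.sqrt(u)) - gamma
        out[r] = vals.max()
    return out

samples = simulate_M()
q95 = np.quantile(samples, 0.95)   # conservative critical value q_{1-alpha} for (2.13)
```

The quantile $q_{1-\alpha}^*$ of the bootstrap statistic $\hat{T}_n^*$ introduced below can be simulated along the same lines, restricting the pairs to the estimated extremal sets $\hat{E}_c$.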
Next we introduce
$$\hat{s}(j/n, k/n) = \operatorname{sgn}\big( \hat{\mu}_0^{\lfloor n t_0 \rfloor} - \hat{\mu}_j^k \big)$$
as an estimator of the sign $s(j/n, k/n)$ in (2.9) and consider the statistic
$$\hat{T}_n^* = \hat{\sigma} \sup_{\substack{c_n \le c \le n - \lfloor n t_0 \rfloor \\ c \in \mathbb{N}}} \; \sup_{(j,k) \in \hat{E}_c} \Big\{ \hat{s}(j/n, k/n) \Big( \sqrt{c}\,\frac{B(\lfloor n t_0 \rfloor)}{\lfloor n t_0 \rfloor} - \frac{B(k) - B(j)}{\sqrt{c}} \Big) - \Gamma_n(c) \Big\},$$
where $\hat{\sigma}^2$ is the estimator of the long-run variance defined in (2.12) and suprema over empty sets are defined as 0. We denote the $(1 - \alpha)$-quantile of the distribution of $\hat{T}_n^*$ by $q_{1 - \alpha}^*$, which can easily be simulated. Our second test for a practically relevant deviation from the average $\mu_0^{t_0}$ rejects the null hypothesis in (2.2) whenever
$$\hat{T}_{n,\Delta} \ge q_{1 - \alpha}^*, \qquad (2.15)$$
and the following result shows that this decision rule defines a consistent asymptotic level-$\alpha$ test, which can detect local alternatives converging to the null hypothesis at a parametric rate.

Theorem 2.6. Under assumptions (A1) and (A2) the test (2.15) is consistent and has asymptotic level $\alpha$. More precisely,
$$P\big( \hat{T}_{n,\Delta} \ge q_{1 - \alpha}^* \big) \to \begin{cases} 0 & d_\infty < \Delta, \\ \alpha & d_\infty = \Delta, \\ 1 & d_\infty > \Delta. \end{cases}$$

Remark 2.7. An important question from a practical point of view is the choice of the threshold $\Delta > 0$, which has to be carefully discussed for each specific application. Essentially, this boils down to the important
question when a deviation from the reference value $\mu_0^{t_0}$ is practically significant, which is related to the specification of the effect size (see Cohen, 1988). While in many situations, such as in the climate data example mentioned before, this specification is quite obvious, there are other applications where this choice might be less clear. However, for such cases it is possible to determine a threshold from the data which can serve as a measure of evidence for a deviation of $\mu$ from the long-term average $\mu_0^{t_0}$ with a controlled type I error $\alpha$. To be precise, note that the hypotheses $H_0(\Delta_1)$ and $H_0(\Delta_2)$ in (2.2) are nested for $\Delta_1 < \Delta_2$ and that the test statistic (2.5) is monotone in $\Delta$. As the quantile $q_{1 - \alpha}^*$ does not depend on $\Delta$, rejecting $H_0(\Delta)$ for $\Delta = \Delta_1$ also implies rejecting $H_0(\Delta)$ for all $\Delta < \Delta_1$. The sequential rejection principle then yields that we may simultaneously test the hypotheses (2.2) for different choices of $\Delta \ge 0$ until we find the minimum value, say $\hat{\Delta}_\alpha$, for which $H_0(\Delta)$ is not rejected, that is,
$$\hat{\Delta}_\alpha := \min \big\{ \Delta \ge 0 \;\big|\; \hat{T}_{n,\Delta} \le q_{1 - \alpha}^* \big\}. \qquad (2.16)$$
Consequently, one may postpone the selection of $\Delta$ until one has seen the data. The same arguments of course also hold for the more conservative procedure defined by (2.13).

3 Estimating the time of the first relevant deviation

If a relevant deviation from a benchmark has been detected, it is of interest to determine the first time where this deviation occurs, that is,
$$t^* = \min \big\{ t \in [t_0, 1] \;\big|\; |\mu(t) - \mu_0^{t_0}| \ge \Delta \big\}. \qquad (3.1)$$
A natural estimator for $t^*$ is the first time $k$ where at least one estimated difference $\hat{\mu}_0^{\lfloor n t_0 \rfloor} - \hat{\mu}_j^k$ exceeds approximately $\Delta$, and therefore we define
$$\hat{t} = \min \Big\{ k \ge \lfloor n t_0 \rfloor + c_n \;\Big|\; \exists\, j \in \{ \lfloor n t_0 \rfloor, \dots, k - c_n \} \text{ such that } \big| \hat{\mu}_0^{\lfloor n t_0 \rfloor} - \hat{\mu}_j^k \big| \ge \Delta - \frac{\hat{\sigma} \log(n)}{\sqrt{k - j}} \Big\}, \qquad (3.2)$$
where the constant $c_n$ satisfies (2.4) (here we define the minimum over an empty set as $\infty$). In the following discussion we investigate the theoretical performance of this estimator, distinguishing the case where $t^*$ is a point of continuity of $|\mu(t) - \mu_0^{t_0}|$ and where it is not.
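The estimator (3.2) is a direct scan over window endpoints and can be transcribed almost verbatim. A minimal sketch (the function name is ours; $\hat{\sigma}$ and $c_n$ must be supplied by the user):

```python
import numpy as np

def first_relevant_time(x, t0, delta, sigma_hat, c_n):
    """Estimator (3.2): the first index k at which some local mean deviates
    from the benchmark by at least Delta minus the noise allowance."""
    n = len(x)
    n0 = int(np.floor(n * t0))
    s = np.concatenate([[0.0], np.cumsum(x)])
    benchmark = s[n0] / n0                     # hat mu_0^{floor(n t0)}
    log_n = np.log(n)
    for k in range(n0 + c_n, n + 1):
        js = np.arange(n0, k - c_n + 1)        # admissible left endpoints j
        local = (s[k] - s[js]) / (k - js)      # hat mu_j^k for all j at once
        if np.any(np.abs(benchmark - local) >= delta - sigma_hat * log_n / np.sqrt(k - js)):
            return k
    return np.inf                              # no relevant deviation found

# Example: noiseless mean with a jump of size 2 at t = 0.5, benchmark 0
x = np.where(np.arange(1, 501) / 500.0 > 0.5, 2.0, 0.0)
t_hat = first_relevant_time(x, t0=0.25, delta=1.0, sigma_hat=0.25, c_n=20)
# t_hat / n is close to the jump location 0.5
```

In line with Theorem 3.2 below, in the noiseless jump example the detection delay is of the order of the smallest admissible window length $c_n$.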
For this purpose, we introduce the function $t \mapsto d(t) = |\mu(t) - \mu_0^{t_0}|$ and consider first the smooth setting. Intuitively, a change at a point of continuity will be harder to detect if the function $\mu$ is very flat at $t^*$. To quantify this property, we assume that there exist constants $\kappa, c_\kappa > 0$ such that
$$\lim_{t \uparrow t^*} \frac{|d(t^*) - d(t)|}{(t^* - t)^\kappa} = c_\kappa. \qquad (3.3)$$
To obtain an explicit convergence rate we also need an assumption ensuring that the function $d$ does not behave too irregularly close to the point $t^*$.

(A3) For some constant $\gamma \in (0, t^*)$ the function $t \mapsto \operatorname{sign}(d(t))\, d(t)$ is increasing on the set $U_\gamma(t^*) = \{ t \in [t_0, t^*] \mid t > t^* - \gamma \}$.

Theorem 3.1. Let Assumptions (A1)–(A3) be satisfied.
(a) If $t^* \in (t_0, 1]$, condition (3.3) holds and $c_n^{\kappa + 1/2} \lesssim n^\kappa \sqrt{\log(n)}$, then
$$\hat{t} = t^* + O_P\Big( \Big( \frac{\log(n)}{\sqrt{c_n}} \Big)^{1/\kappa} \Big).$$
(b) If $t^* = \infty$, we have $P(\hat{t} = \infty) = 1 - o(1)$.

Next we discuss the case where there is a jump at the point $t^*$ and assume for some $\epsilon > 0$ that
$$|\mu(t) - \mu_0^{t_0}| \begin{cases} < \Delta - \epsilon & \text{if } t < t^*, \\ \ge \Delta & \text{if } t = t^*, \\ \ge \Delta + O(t - t^*) & \text{if } \tilde{t} > t > t^*, \end{cases} \qquad (3.4)$$
where $\tilde{t} > t^*$ is the smallest point with a jump of the function $\mu$ at $\tilde{t}$ (if there are no jumps for $t > t^*$ we set $\tilde{t} = 1$).

Theorem 3.2. Let Assumptions (A1)–(A3) be satisfied and let $c_n$ satisfy $c_n^{3/2} \lesssim n \sqrt{\log(n)}$. Then
(a) If $t^* \in (t_0, 1]$ is a jump discontinuity satisfying (3.4), then
$$\hat{t} = t^* + O_P\Big( \frac{c_n}{n} \Big).$$
(b) If $t^* = \infty$, we have $P(\hat{t} = \infty) = 1 - o(1)$.

Comparing Theorems 3.1 and 3.2, it is readily apparent that detecting smooth changes profits from a large $c_n$ (i.e., the mean is only estimated over longer intervals), while abrupt changes are easier to detect if $c_n$ is chosen small (i.e., the mean is also estimated over shorter intervals).

4 Finite sample properties

In this section we investigate the finite sample properties of the proposed methodology by means of a simulation study and illustrate its application by analyzing a real data example.

[Figure 1: Plot of the regression function $\mu_a(x)$ in (4.1) for $a = 2$. The dotted line is given by $\mu_0^{1/4} = 4 \int_0^{1/4} \mu_2(s)\,\mathrm{d}s$.]

For the sake of comparison we consider the same scenarios as investigated in Bücher et al. (2021).

4.1 Synthetic Data

We choose $\Delta = 1$ and the mean function
$$\mu_a(x) = 10 + \tfrac{1}{2} \sin(8 \pi x) + a \big( x - \tfrac{1}{4} \big)^2 \mathbb{1}\big\{ x > \tfrac{1}{4} \big\}, \qquad (4.1)$$
which is displayed in Figure 1 for $a = 2$. We consider various choices of the parameter $a$, where we choose $t_0 = 1/4$ so that the hypotheses are given by
$$H_0(1): d_\infty \le 1 \quad \text{vs.} \quad H_1(1): d_\infty > 1, \qquad (4.2)$$
where
$$d_\infty = \sup_{t \in [1/4, 1]} \Big| \mu_a(t) - 4 \int_0^{1/4} \mu_a(s)\,\mathrm{d}s \Big|.$$
Note that $d_\infty = 1$ (boundary of the hypotheses) for $a = \frac{128}{81}$, and that $d_\infty > 1$ (alternative) and $d_\infty < 1$ (interior of the null hypothesis) whenever $a > \frac{128}{81}$ and $a < \frac{128}{81}$, respectively. For the error processes $(\epsilon_i)_{i \in \mathbb{Z}}$ in model (2.1) we investigate the processes
$$\text{(IID)} \quad \epsilon_i = \tfrac{1}{2} \eta_i, \qquad \text{(MA)} \quad \epsilon_i = \tfrac{1}{\sqrt{5}} \big( \eta_i + \tfrac{1}{2} \eta_{i-1} \big), \qquad \text{(AR)} \quad \epsilon_i = \tfrac{\sqrt{3}}{4} \eta_i + \tfrac{1}{2} \epsilon_{i-1}, \qquad (4.3)$$
where $(\eta_i)_{i \in \mathbb{Z}}$ is an i.i.d. sequence of standard normally distributed random variables. In particular, we have $\operatorname{Var}(\epsilon_i) = \tfrac{1}{4}$ for all error processes under consideration.
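The role of $a = 128/81$ as the (approximate) boundary of the hypotheses can be checked numerically by evaluating $\mu_a$ on a grid. A minimal sketch (the function names `mu_a` and `d_infty` are ours; the grid size is an illustrative choice):

```python
import numpy as np

def mu_a(x, a):
    """Mean function (4.1)."""
    x = np.asarray(x, dtype=float)
    return 10.0 + 0.5 * np.sin(8.0 * np.pi * x) + a * (x - 0.25) ** 2 * (x > 0.25)

def d_infty(a, grid=200_000):
    """Grid approximation of d_inf = sup_{t in [1/4,1]} |mu_a(t) - mu_0^{1/4}|."""
    s = np.linspace(0.0, 0.25, grid + 1)
    mu0 = mu_a(s, a).mean()                 # ≈ 4 * integral of mu_a over [0, 1/4] = 10
    t = np.linspace(0.25, 1.0, grid + 1)
    return np.max(np.abs(mu_a(t, a) - mu0))

print(round(d_infty(128 / 81), 3))          # close to the boundary value 1
print(d_infty(1.0) < 1.0, d_infty(2.0) > 1.0)
```

On this grid the approximated $d_\infty$ lies close to 1 for $a = 128/81$ (the dominant excursion occurs near $t = 13/16$, where the sine term peaks), below 1 for smaller $a$ and above 1 for larger $a$.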
We will compare the novel testing procedures (2.13) and (2.15) proposed in this paper with the most powerful test from Bücher et al. (2021), which is given in equation (4.6) therein. Throughout this section we generically choose $m = 5$ (tuning parameter for the long-run variance estimation) and $c_n = 20$ (lower bound for the scales in the multiscale statistic (2.5)). The results are fairly stable under perturbation of these parameters as long as they are not chosen too small. The empirical rejection rates are calculated from 1000 simulation runs. For the test (2.13) we calculated the quantiles of the distribution of $M$ from 1000 samples of a Brownian motion sampled on a grid with width 0.001. For the test (2.15) we used 200 samples to calculate the quantile $q_{1 - \alpha}^*$ in (2.15) for each of the 1000 simulation runs. The empirical rejection probabilities of all three tests are recorded in Table 1 and confirm the asymptotic theory. Regarding the interpretation of the empirical findings, we note that the null hypothesis in (4.2) is true whenever the
https://arxiv.org/abs/2504.15872v1
parameter $a$ satisfies $a \le 128/81 \approx 1.58$, and that we expect an increasing number of rejections for larger values of $a$, which yield increasing values of the difference $d_\infty - \Delta$. Note that all three tests are conservative in the sense that the empirical size is smaller than 5% at the boundary of the hypotheses (2.2) defined by $d_\infty - \Delta = 0$ (in boldface). However, it is worthwhile to mention that the test (2.15) provides a better approximation of the nominal level than its competitors.

                        test (2.13)             test (2.15)             Bücher et al. (2021)
  a      d∞ − ∆      200    500   1000       200    500   1000       200    500   1000
Panel A: iid errors
 1.5     -0.03       0.0    0.0    0.0       0.0    0.0    0.0       0.0    0.0    0.0
 1.58     0.00       0.0    0.1    0.2       0.2    1.5    0.2       0.0    0.0    0.2
 2.0      0.13       0.6    2.5    3.2       5.5   14.1   15.4       0.0    3.3   23.1
 2.5      0.29      10.4   42.6   50.3      42.1   73.6   80.2       0.0   29.9   97.8
 3.0      0.45      54.3   93.8   99.7      86.6   99.4  100         0.2   57.3  100
Panel B: MA errors
 1.5     -0.03       0.1    0.1    0.2       0.0    0.0    0.0       0.0    0.0    0.0
 1.58     0.00       0.1    0.6    0.1       0.7    2.7    2.4       0.0    0.0    0.3
 2.0      0.13       1.2    5.1    4.3       6.3   14.9   16.1       0.0    3.7   18.7
 2.5      0.29      10.3   31.0   47.1      26.7   56.8   75.6       0.2   27.0   87.9
 3.0      0.45      44.9   81.4   96.6      69.7   94.5   99.7       0.5   52.8   99.7
Panel C: AR errors
 1.5     -0.03       0.7    1.1    1.1       0.0    0.0    0.0       0.1    0.5    1.0
 1.58     0.00       0.8    1.1    2.0       2.6    5.9    6.7       0.1    1.4    1.5
 2.0      0.13       3.1    9.4   10.7       7.8   22.6   27.0       0.0    7.8   23.1
 2.5      0.29      15.6   36.1   51.1      29.6   60.6   75.1       0.4   27.3   77.7
 3.0      0.45      43.5   81.2   95.9      67.3   91.7   99.5       1.1   53.9   98.4

Table 1: Empirical rejection rates of the tests (2.13) and (2.15) and the test proposed in equation (4.6) of Bücher et al. (2021) for the hypotheses (4.2). Different values of the parameter $a$ in the mean function (4.1), error processes, and sample sizes $n = 200, 500, 1000$ are considered.

A comparison of the power properties of the different procedures shows that the conservative multiscale test (2.13) outperforms the test in Bücher et al. (2021) for moderate sample sizes ($n = 200, 500$), but the last-named test yields larger rejection probabilities for the sample size $n = 1000$, in particular if the deviation $d_\infty - \Delta$ is large.
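The boundary case can be checked numerically. Since the sine integrates to zero over $[0, 1/4]$, the reference value $4\int_0^{1/4}\mu_a(s)\,ds$ equals 10 exactly, and at the sine peak $t = 13/16$ the deviation equals $1/2 + a(9/16)^2$, which is 1 precisely for $a = 128/81$. A grid-based sketch (the helper name is ours, not the paper's):

```python
import numpy as np

def d_infty(a, grid_size=200_001):
    # d_inf = sup_{t in [1/4,1]} |mu_a(t) - 4 * int_0^{1/4} mu_a(s) ds|; the sine
    # integrates to zero over [0, 1/4], so the reference value is exactly 10.
    t = np.linspace(0.25, 1.0, grid_size)
    return np.abs(0.5 * np.sin(8 * np.pi * t) + a * (t - 0.25) ** 2).max()

# At the sine peak t = 13/16 the deviation equals 1/2 + a * (9/16)^2,
# which is exactly 1 for a = 128/81.
assert abs(0.5 + (128 / 81) * (9 / 16) ** 2 - 1.0) < 1e-12
print(d_infty(2.0) > 1)   # True: a = 2 belongs to the alternative
print(d_infty(1.0) < 1)   # True: a = 1 lies in the interior of the null
```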
On the other hand, the multiscale test (2.15) yields even larger power for sample sizes $n = 200, 500$, and for $n = 1000$ it shows a similar performance as the procedure in Bücher et al. (2021). Finally, we present a brief simulation study to investigate the performance of the estimator (3.2) for the first time of a relevant deviation as defined in (3.1), where we use $m = 5$ and $c_n = 20 + n^{1/2}$. We consider the mean function (4.1) with $a = 2$; thus the time of the first relevant deviation is $t^* = 0.791$. The error process is given by (4.3) and the sample size is $n = 500$. In Figure 2 we display histograms of the estimator (3.2) and the estimator proposed in equation (5.2) of Bücher et al. (2021), based on 10,000 simulation
runs. We observe that the estimator introduced in Bücher et al. (2021) does not detect a relevant deviation in more than 90% of all cases for many of the considered settings, while our estimator detects such a deviation almost always. However, due to the noise of the error process there exist also cases where $\hat t$ underestimates the true point $t^*$ and delivers an estimate at the local maximum of the function $\mu$ at $t = 0.57$, which is the closest local peak to the time $t = 0.8$ (here the deviation is $|\mu(0.57) - \mu_0^{1/4}| \approx 0.695 < 1$). We therefore refrain from a direct comparison of the two estimators and display in Table 2 the empirical bias, standard deviation and the detection rate of the estimator (3.2) proposed in this paper.

Figure 2: Histograms for the estimator $\hat t$ defined in (3.2) (left) and the estimator proposed in equation (5.2) of Bücher et al. (2021) (right).

Note that all choices of the parameter $a$ in Table 2 correspond to a violation of the null hypothesis. Therefore, a "good" estimator of the time point of the first relevant deviation should be finite in as many cases as possible. We observe that the estimator (3.2) proposed in this paper is finite almost always for all choices of $a$, even for the sample size $n = 200$. The bias and standard deviation of the estimator first increase when the sample size increases to 500. This is due to the method sometimes detecting a change at the second highest peak of the curve (see Figure 2) when the sample size becomes larger. This happens more rarely when the sample size grows further, as is reflected by a lower bias for $n = 1000$ compared to $n = 500$. The estimator generally performs better for less dependent data but nonetheless performs well across all settings.
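Both the first relevant deviation time $t^* \approx 0.79$ for $a = 2$ and the height $\approx 0.7$ of the competing local peak near $t = 0.57$ can be reproduced on a grid (variable names are ours, not the paper's):

```python
import numpy as np

t = np.linspace(0.25, 1.0, 1_000_001)
dev = np.abs(0.5 * np.sin(8 * np.pi * t) + 2.0 * (t - 0.25) ** 2)  # |mu_2(t) - 10|

# First time the deviation reaches Delta = 1: the estimand t* of (3.1).
t_star = t[np.argmax(dev >= 1.0)]
print(round(t_star, 2))       # ~ 0.79

# Height of the competing local peak near t = 0.57, which stays below Delta = 1.
local_peak = dev[(t > 0.5) & (t < 0.65)].max()
print(round(local_peak, 2))   # ~ 0.7
```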
                 Ê[t̂] (standard deviation)                      Ê[1{t̂ = ∞}]
 n \ a      2.0             2.5             3.0              2.0     2.5     3.0
Panel A: iid errors
  200    0.864 (0.029)   0.839 (0.036)   0.812 (0.052)     0.031   0.002   0.000
  500    0.809 (0.044)   0.780 (0.069)   0.738 (0.090)     0.001   0.000   0.000
 1000    0.803 (0.023)   0.783 (0.046)   0.749 (0.076)     0.000   0.000   0.000
Panel B: MA errors
  200    0.854 (0.054)   0.826 (0.063)   0.794 (0.074)     0.057   0.029   0.000
  500    0.788 (0.078)   0.754 (0.093)   0.711 (0.101)     0.001   0.000   0.000
 1000    0.789 (0.057)   0.757 (0.080)   0.715 (0.095)     0.000   0.000   0.000
Panel C: AR errors
  200    0.835 (0.084)   0.810 (0.087)   0.778 (0.090)     0.094   0.011   0.001
  500    0.764 (0.108)   0.727 (0.116)   0.693 (0.111)     0.009   0.001   0.000
 1000    0.767 (0.087)   0.731 (0.099)   0.692 (0.104)     0.002   0.000   0.000
  t*      0.79            0.78            0.77

Table 2: Empirical bias, standard deviation and detection rate of the estimator $\hat t$ defined in (3.2). Central part: empirical mean and standard deviation (in brackets) of $\hat t$, conditional on $\hat t \neq \infty$. Right part: proportion of cases for which $\hat t = \infty$. Last line: true change point $t^*$.

4.2 Real Data Application

We consider the mean of daily minimal temperatures (in degrees Celsius) over the month of July for different weather stations in Australia. The data set is available via the R package
fChange (see Sonmez et al., 2025) on GitHub, and the sample size varies between 100 and 150, depending on the weather station. For each weather station we test the hypotheses (2.2) for different thresholds $\Delta \in \{0.5, 1, 1.5\}$, where $t_0$ is chosen such that the years $1, \ldots, nt_0$ correspond to the time frame until the year 1950. In Table 3 we record the p-values of the multiscale test (2.15) and the test proposed in equation (4.6) of Bücher et al. (2021). For the test (2.15) these p-values are calculated by 1000 bootstrap repetitions, while the parameters $m$ and $c_n$ are chosen as in the previous section, that is $c_n = 20$ and $m = 5$.

                          Bücher et al. (2021)        test (2.15)
 ∆                         0.5     1.0     1.5       0.5     1.0     1.5
 Boulia / p-value         29.0    73.1    98.0       0.0     0.8    37.7
 Boulia / year              -       -       -       1950    1956      -
 Cape Otway / p-value     11.4    98.0   100.0       7.4    78.8    99.9
 Cape Otway / year          -       -       -         -       -       -
 Gayndah / p-value         0.2     1.4     3.2       0.0     0.0     0.3
 Gayndah / year           1952    1968    1974      1950    1950    1984
 Gunnedah / p-value        0.8     2.7    10.3       0.0     0.0     1.4
 Gunnedah / year          1952    1955      -       1962    1973    1984
 Hobart / p-value         94.9   100.0   100.0      34.1    95.5   100.0
 Hobart / year              -       -       -         -       -       -
 Melbourne / p-value       0.0     1.1    27.3       0.0     0.0     3.7
 Melbourne / year         1968    1976      -       1950    1972    1990
 Robe / p-value            3.3    44.9    98.2       6.1    68.3    99.8
 Robe / year              1953      -       -         -       -       -
 Sydney / p-value         41.1    98.1   100        0.05    0.14    89.9
 Sydney / year              -       -       -       1950      -       -

Table 3: p-values and estimates for $t^*$ of the test in Bücher et al. (2021) (left part) and the bootstrap test (2.15) proposed in this paper (right part) for the hypotheses (2.2) for various values of the threshold $\Delta$.

Except for the Robe weather station, the p-values of the multiscale test (2.15) are either similar to or substantially smaller than the p-values obtained by the test in Bücher et al. (2021). In particular, the new test detects changes in Boulia, Melbourne and Sydney that the procedure from Bücher et al. (2021) was not able to identify.
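The bootstrap p-values in Table 3 follow the standard Monte Carlo recipe: the proportion of bootstrap statistics at least as extreme as the observed one, with the usual +1 correction. A generic sketch in which the bootstrap sample is a placeholder, since the paper's resampling scheme for (2.15) is not reproduced in this excerpt:

```python
import numpy as np

def bootstrap_p_value(t_obs, t_boot):
    # Monte Carlo p-value: proportion of bootstrap statistics that are at least
    # as extreme as the observed one, with the usual +1 correction.
    t_boot = np.asarray(t_boot)
    return (1 + np.sum(t_boot >= t_obs)) / (1 + t_boot.size)

rng = np.random.default_rng(0)
t_boot = rng.standard_normal(1000)       # placeholder bootstrap distribution
print(bootstrap_p_value(2.0, t_boot))    # small p-value for a large observed statistic
```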
We also observe that the new estimator $\hat t$ proposed in this paper generally dates deviations earlier than its counterpart from Bücher et al. (2021). The only exception is the station Robe, where the test from Bücher et al. (2021) detects a difference of at least 0.5 degrees Celsius that our method does not detect at significance level $\alpha = 0.05$. However, a more precise look at the minimum value $\hat\Delta_\alpha$ for which $H_0(\Delta)$ is not rejected at a controlled type I error $\alpha$ (see equation (2.16) and equation (5.2) in Bücher et al. (2021)) shows that the difference between the two tests is small: the method proposed in Bücher et al. (2021) gives $\hat\Delta_{0.05} = 0.53$, while the estimator (2.16) yields $\hat\Delta_{0.05} = 0.49$. These values are taken from Table 4, which displays the values $\hat\Delta_{0.05}$ for both methods (here we have recalculated the results of the test of Bücher et al. (2021)). The results further confirm the previous findings. Except for the weather station in Robe, the test (2.15) always
detects larger differences than the test proposed in Bücher et al. (2021). In particular, we are able to detect changes in Boulia and Hobart where the method of Bücher et al. (2021) does not detect any relevant deviation. While at Hobart the difference in the value $\hat\Delta_{0.05}$ is small, it is larger than 1 degree Celsius at Boulia.

 $\hat\Delta_{0.05}$   Boulia   Cape Otway   Gayndah   Gunnedah   Hobart   Melbourne   Robe   Sydney
 BüDH                   0.00       0.38        1.69       1.22      0.00      1.17      0.53    0.13
 BaD                    1.16       0.45        1.74       1.64      0.17      1.52      0.49    0.87

Table 4: The minimum value $\hat\Delta_{0.05}$ for which $H_0(\Delta)$ is not rejected at a controlled type I error of 5% (see equation (2.16) and equation (5.2) in Bücher et al. (2021)).

Acknowledgements: This research was partially funded in the course of TRR 391 Spatio-temporal Statistics for the Transition of Energy and Transport (520388526) by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation).

5 Proofs

Throughout this section we use the notation $k_0 = \lfloor nt_0 \rfloor$ and define for any constant $a > 0$ the sets
$$E_c(a) = \Bigl\{ (j,k) \in \{k_0, \ldots, n\}^2 : k - j = c,\ \bigl|\mu_0^{t_0} - \mu_{j/n}^{k/n}\bigr| \ge d_\infty - a\log(n)/\sqrt{c} \Bigr\},$$
$$E_c^{co}(a) = \bigl\{ (j,k) \in \{k_0, \ldots, n\}^2 : k - j = c \bigr\} \setminus E_c(a). \qquad (5.1)$$
We will generally suppress the dependence on $a$ in the notation, except for the subsection about the bootstrap procedure. In the following we shall assume that $\sigma = 1$; the general case follows by simple rescaling.

5.1 Proof of Theorem 2.2

Define
$$\bar\mu_j^k = \frac{1}{k-j} \sum_{i=j+1}^k \mu(i/n)$$
and denote by $\check B(s) = n^{-1/2} B(ns)$ a rescaled version of the Brownian motion $B$. We begin by stating four auxiliary results, which will be used in the proof of Theorem 2.2 and of other results. The proofs are given at the end of this section.

Lemma 5.1. Grant Assumption (A2). It then holds that
$$\sup_{1 \le j < k \le n} (k-j)\bigl(\bar\mu_j^k - \mu_{j/n}^{k/n}\bigr) = O(1).$$

Lemma 5.2.
If $d_\infty > 0$ and Assumptions (A1) and (A2) are satisfied, we have with high probability that
$$\hat T_n \le \sup_{\substack{c_n \le c \le n-k_0 \\ c \in \mathbb{N}}} \sup_{(j,k) \in E_c} s(j/n, k/n)\Bigl( \sqrt{c}\,\frac{B(k_0)}{k_0} - \frac{B(k) - B(j)}{\sqrt{c}} \Bigr) - \Gamma_n(c),$$
where $B$ is a Brownian motion with variance $\sigma^2$ and suprema over empty sets are defined as $-\infty$.

Lemma 5.3. If $d_\infty > 0$ and Assumptions (A1) and (A2) are satisfied, we have
$$\sup_{\substack{c_n \le c \le n-k_0 \\ c \in \mathbb{N}}} \sup_{(j,k) \in E_c} s(j/n, k/n)\Bigl( \sqrt{c}\,\frac{B(k_0)}{k_0} - \frac{B(k) - B(j)}{\sqrt{c}} \Bigr) - \Gamma_n(c)
\le \lim_{\epsilon \downarrow 0} \sup_{(s,t) \in A_{\epsilon, d_\infty}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} \Bigr) - \Gamma(t-s) + o_{a.s.}(1),$$
where
$$A_{\epsilon, d_\infty} = \Bigl\{ (s,t) \in [t_0, 1]^2 : s < t,\ \bigl|\mu_0^{t_0} - \mu_s^t\bigr| \ge d_\infty - \epsilon \Bigr\}.$$

Lemma 5.4. If $d_\infty > 0$ and Assumptions (A1) and (A2) are satisfied, we have
$$\lim_{\epsilon \downarrow 0} \sup_{(s,t) \in A_{\epsilon, d_\infty}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} \Bigr) - \Gamma(t-s)
\le \sup_{\substack{c_n \le c \le n-k_0 \\ c \in \mathbb{N}}} \sup_{(j,k) \in E_c} s(j/n, k/n)\Bigl( \sqrt{c}\,\frac{B(k_0)}{k_0} - \frac{B(k) - B(j)}{\sqrt{c}} - \sqrt{c}\,\bigl(\mu_0^{t_0} - \mu_{j/n}^{k/n}\bigr) \Bigr) - \Gamma_n(c) - \sqrt{c}\,d_\infty + o_P(1) = \hat T_n + o_P(1).$$

Proof of Theorem 2.2. The weak convergence of the statistic $\hat T_n$ in (2.6) follows directly from Lemmas 5.2–5.4. Regarding the continuity of the distribution of $T_{d_\infty}$, we note that $T_{d_\infty}$ is a limit of convex functions of a Gaussian process and therefore a convex function of a Gaussian process itself. The continuity then follows by Theorem 4.4.1 of Bogachev (2015). The statements (1) and (2) regarding the asymptotic properties of the test statistic $\hat T_{n,\Delta}$ are a direct consequence of (2.6) if $d_\infty \le \Delta$. In the case $d_\infty > \Delta$ we note that the piecewise Lipschitz continuity of $\mu$ yields a sequence $j_n, k_n$ with $k_n - j_n \asymp n$ such that for some $\rho$
$> 0$ we have $|\mu(i/n) - \mu_0^{t_0}| \ge \Delta + \rho$ for all $j_n \le i \le k_n$. Consequently, using Lemma 5.1, we obtain
$$\Bigl| \frac{1}{k_0} \sum_{i=1}^{k_0} \mu(i/n) - \frac{1}{k_n - j_n} \sum_{i=j_n+1}^{k_n} \mu(i/n) \Bigr| \ge \Delta + \rho - O(n^{-1}),$$
which yields $\hat T_{n,\Delta} \to \infty$ by an application of the triangle inequality.

5.1.1 Proof of Lemmas 5.1–5.4

Proof of Lemma 5.1. Let us first assume that $\mu$ has no discontinuities. Then
$$\bigl|\bar\mu_j^k - \mu_{j/n}^{k/n}\bigr| = \Bigl| \frac{1}{k-j} \sum_{i=j+1}^k \Bigl( \mu(i/n) - n\int_{(i-1)/n}^{i/n} \mu(t)\,dt \Bigr) \Bigr| = \Bigl| \frac{1}{k-j} \sum_{i=j+1}^k n\int_{(i-1)/n}^{i/n} \bigl( \mu(i/n) - \mu(t) \bigr)\,dt \Bigr| \lesssim \frac{1}{k-j} \sum_{i=j+1}^k n^{-1} = n^{-1}.$$
The general case follows by splitting up the integrals containing the discontinuities, leading to finitely many additional terms in the sum that can be bounded only by a constant instead of $n^{-1}$.

Proof of Lemma 5.2. By Assumption (A1) and condition (2.4) we have
$$\sup_{n \ge k-j \ge c_n} \sqrt{k-j}\,\bigl|\hat\mu_0^{k_0} - \hat\mu_j^k\bigr| = \sup_{k-j \ge c_n} \frac{|B(k) - B(j)|}{\sqrt{k-j}} + \frac{O(n^{1/2-p})}{\sqrt{k-j}} = \sup_{k-j \ge c_n} \frac{|B(k) - B(j)|}{\sqrt{k-j}} + o_P(1).$$
Using this and Lemma 5.1 we therefore obtain that
$$\hat T_n = \sup_{\substack{c_n \le c \le n-k_0 \\ c \in \mathbb{N}}} \sup_{\substack{|k-j| = c \\ k > j \ge k_0}} \Bigl( \sqrt{c}\,\frac{B(k_0)}{k_0} - \frac{B(k)-B(j)}{\sqrt{c}} - \sqrt{c}\Bigl( \frac{1}{k_0}\sum_{i=1}^{k_0}\mu(i/n) - \frac{1}{c}\sum_{i=j+1}^{k}\mu(i/n) \Bigr) \Bigr) - \Gamma_n(c) - \sqrt{c}\,d_\infty + o_P(1)$$
$$= \sup_{\substack{c_n \le c \le n-k_0 \\ c \in \mathbb{N}}} \sup_{\substack{|k-j| = c \\ k > j \ge k_0}} \Bigl( \sqrt{c}\,\frac{B(k_0)}{k_0} - \frac{B(k)-B(j)}{\sqrt{c}} - \sqrt{c}\,\bigl(\mu_0^{t_0} - \mu_{j/n}^{k/n}\bigr) \Bigr) - \Gamma_n(c) - \sqrt{c}\,d_\infty + o_P(1).$$
Now note that it follows from the discussion in Section 2.2 of Frick et al. (2014) that the random variable $M$ defined in (2.10) is finite with probability 1, which implies
$$\sup_{\substack{c_n \le c \le n-k_0 \\ c \in \mathbb{N}}} \sup_{\substack{|k-j| = c \\ k > j \ge k_0}} \Bigl( \sqrt{c}\,\frac{B(k_0)}{k_0} - \frac{B(k)-B(j)}{\sqrt{c}} \Bigr) - \Gamma_n(c) = O_P(1).$$
This yields
$$\sup_{\substack{c_n \le c \le n-k_0 \\ c \in \mathbb{N}}} \sup_{(j,k) \in E_c^{co}} \Bigl( \sqrt{c}\,\frac{B(k_0)}{k_0} - \frac{B(k)-B(j)}{\sqrt{c}} - \sqrt{c}\,\bigl(\mu_0^{t_0} - \mu_{j/n}^{k/n}\bigr) \Bigr) - \Gamma_n(c) - \sqrt{c}\,d_\infty \lesssim -\log(n)$$
with high probability by the definition of the set $E_c^{co}$ in (5.1). Therefore, with $(\cdots)$ denoting the same expression as in the previous display,
$$\sup_{\substack{c_n \le c \le n-k_0 \\ c \in \mathbb{N}}} \sup_{\substack{|k-j| = c \\ k > j \ge k_0}} (\cdots) = \sup_{\substack{c_n \le c \le n-k_0 \\ c \in \mathbb{N}}} \sup_{(j,k) \in E_c} s(j/n,k/n)\Bigl( \sqrt{c}\,\frac{B(k_0)}{k_0} - \frac{B(k)-B(j)}{\sqrt{c}} - \sqrt{c}\,\bigl(\mu_0^{t_0} - \mu_{j/n}^{k/n}\bigr) \Bigr) - \Gamma_n(c) - \sqrt{c}\,d_\infty + o_P(1)
\le \sup_{\substack{c_n \le c \le n-k_0 \\ c \in \mathbb{N}}} \sup_{(j,k) \in E_c} s(j/n,k/n)\Bigl( \sqrt{c}\,\frac{B(k_0)}{k_0} - \frac{B(k)-B(j)}{\sqrt{c}} \Bigr) - \Gamma_n(c)$$
with high probability, which yields the desired statement.

Proof of Lemma 5.3.
Existence of the limit with respect to $\epsilon$ follows because the quantity is, as a function on the probability space, pointwise monotonically non-increasing in $\epsilon$ and bounded, because the random variable $M$ is finite almost surely. The asymptotic inequality follows because $\bigcup_{c_n \le c \le n-k_0} E_c$ is, for any $\epsilon > 0$, eventually a subset of $A_{\epsilon, d_\infty}$.

Proof of Lemma 5.4. The equality has already been established in the proof of Lemma 5.2. For the upper bound we proceed in three steps:

(I) For any sequence with $b_n = o(n^{-1/2})$ we have
$$\sup_{(j,k) \in \bigcup_{c_n \le c \le n-k_0} E_c} s(j/n,k/n)\Bigl( \sqrt{k-j}\,\frac{B(k_0)}{k_0} - \frac{B(k)-B(j)}{\sqrt{k-j}} - \sqrt{k-j}\,\bigl(\mu_0^{t_0} - \mu_{j/n}^{k/n}\bigr) \Bigr) - \Gamma_n(k-j) - \sqrt{k-j}\,d_\infty$$
$$\ge \sup_{\substack{(s,t) \in A_{b_n} \cap \{k_0/n, \ldots, 1\}^2 \\ |s-t| \ge c_n/n}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} - \sqrt{(t-s)n}\,\bigl(\mu_0^{t_0} - \mu_s^t\bigr) \Bigr) - \Gamma(t-s) - \sqrt{(t-s)n}\,d_\infty$$
by definition of the involved sets and of $\check B$.

(II) By the definition of $A_{b_n}$ we then obtain
$$\sup_{\substack{(s,t) \in A_{b_n} \cap \{k_0/n, \ldots, 1\}^2 \\ |s-t| \ge c_n/n}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} - \sqrt{(t-s)n}\,\bigl(\mu_0^{t_0} - \mu_s^t\bigr) \Bigr) - \Gamma(t-s) - \sqrt{(t-s)n}\,d_\infty$$
$$= \sup_{\substack{(s,t) \in A_{b_n} \cap \{k_0/n, \ldots, 1\}^2 \\ |s-t| \ge c_n/n}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} \Bigr) - \Gamma(t-s) + o(1).$$

(III) Using similar arguments as in the proof of Theorem 2.1 in Dümbgen (2002) (use Theorem 7.1 and Lemma 7.2 in Dümbgen and Walther (2008) with $\beta(x) = 1\{x \in [0,1]\}$ instead of Proposition 7.1 from Dümbgen (2002)) we obtain
$$\sup_{\substack{(s,t) \in A_{b_n} \cap \{k_0/n, \ldots, 1\}^2 \\ |s-t| \ge c_n/n}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} \Bigr) - \Gamma(t-s) = \sup_{(s,t) \in A_{b_n}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} \Bigr) - \Gamma(t-s) + o_P(1). \qquad (5.2)$$
To be precise, we proceed as in the proof of Theorem 2.1 in Dümbgen (2002) to obtain, for
any $\delta_1 > 0$, a set $A_1$ with probability $1 - \delta_1$ on which there exists $\delta_2 > 0$ such that
$$\sup_{\substack{(s,t) \in A_{b_n} \cap \{k_0/n, \ldots, 1\}^2 \\ |s-t| \ge c_n/n}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} \Bigr) - \Gamma(t-s) = \sup_{\substack{(s,t) \in A_{b_n} \cap \{k_0/n, \ldots, 1\}^2 \\ |s-t| \ge \delta_2}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} \Bigr) - \Gamma(t-s)$$
holds. Exactly the same arguments also yield a set $A_2$ with probability $1 - \delta_1$ on which
$$\sup_{(s,t) \in A_{b_n}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} \Bigr) - \Gamma(t-s) = \sup_{\substack{(s,t) \in A_{b_n} \\ |s-t| \ge \delta_2}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} \Bigr) - \Gamma(t-s)$$
holds. Equation (5.2) then follows by a standard argument involving the uniform continuity of the process $(s,t) \mapsto \frac{B(t)-B(s)}{\sqrt{t-s}}$ on the set $\{(s,t) : |s-t| \ge \delta_2\}$. The lemma then follows from the fact that
$$\sup_{(s,t) \in A_{\epsilon, d_\infty}} s(s,t)\Bigl( \sqrt{t-s}\,\frac{\check B(t_0)}{t_0} - \frac{\check B(t) - \check B(s)}{\sqrt{t-s}} \Bigr) - \Gamma(t-s)$$
is a decreasing function of $\epsilon$.

5.2 Proof of Theorem 2.4

The proof follows by a straightforward modification of Lemmas 5.2 to 5.4. The key difference is that the quantity
$$\sqrt{k-j}\,\Bigl( \bigl|\mu_0^{t_0} - \mu_{j/n}^{k/n}\bigr| - \Delta \Bigr)$$
is no longer bounded above by (or, in some cases, convergent to) $0$. Instead one uses the expansion
$$\bigl|\mu_0^{t_0} - \mu_{j/n}^{k/n}\bigr| - \Delta = \Bigl( \Delta + \frac{\beta_n}{k/n - j/n}\int_{j/n}^{k/n} h(x)\,dx \Bigr) - \Delta = \frac{\beta_n}{k/n - j/n}\int_{j/n}^{k/n} h(x)\,dx$$
in the last inequality of the proof of Lemma 5.2 and in step (II) of the proof of Lemma 5.4.

5.3 Proof of Theorem 2.6

We define the quantity
$$\hat E_c(a) = \Bigl\{ (j,k) \in \{k_0, \ldots, n\}^2 : k - j = c,\ \bigl|\hat\mu_0^{k_0} - \hat\mu_j^k\bigr| \ge \Delta - a\log(n)/\sqrt{c} \Bigr\}$$
and prove at the end of this section the following auxiliary result.

Lemma 5.5. Let $\Delta = d_\infty$. For any fixed $a > 0$ we have, for $n$ large enough, that
$$E_c(a - \epsilon) \subset \hat E_c(a) \subset E_c(a + \epsilon)$$
for all $c \ge c_n$.

Proof of Theorem 2.6. Let us first consider the case $d_\infty = \Delta$. By the consistency of the long run variance estimate $\hat\sigma^2$ (see Section 5.4 for a proof) we have that $\hat E_c(2\sigma) \subset \hat E_c \subset \hat E_c(\sigma/2)$ with high probability. Lemmas 5.5 and 5.3 then yield the desired statement if we can show that $\hat s(j/n, k/n) = s(j/n, k/n)$ holds with high probability uniformly over $k, j$ with $(j/n, k/n) \in A_{\epsilon, d_\infty}$ for some $\epsilon < \Delta$. This is an easy consequence of Lemma 5.1 and equation (5.3).
For the other cases we note that the statistic $\hat T_n^*$ is stochastically bounded because the random variable $M$ defined in (2.10) is almost surely finite. As $\hat T_{n,\Delta} \to -\infty$ ($\infty$) if $d_\infty < \Delta$ ($> \Delta$), the assertion follows.

Proof of Lemma 5.5. By Assumption (A1) and a union bound it follows that the inequality
$$\bigl|\bar\mu_j^k - \hat\mu_j^k\bigr| = \Bigl| (k-j)^{-1} \sum_{i=j+1}^k \epsilon_i \Bigr| \lesssim (k-j)^{-1}\bigl|B(k) - B(j)\bigr| + n^{1/2-p}/(k-j) \lesssim \frac{\sqrt{\log(n)}}{\sqrt{k-j}} + n^{1/2-p}/(k-j) \qquad (5.3)$$
holds uniformly with respect to $1 \le k-j \le n$ with high probability. As a consequence we have
$$P\Bigl( \bigl|\bar\mu_j^k - \hat\mu_j^k\bigr| < \epsilon\log(n)/\sqrt{k-j},\ c_n \le k-j \le n \Bigr) \ge P\Bigl( \frac{\sqrt{\log(n)}}{\sqrt{k-j}} + n^{1/2-p}/(k-j) < \epsilon\log(n)/\sqrt{k-j},\ c_n \le k-j \le n \Bigr). \qquad (5.4)$$
Now condition (2.4) implies, uniformly with respect to $c_n \le k-j \le n$, that
$$\frac{n^{1/2-p}}{k-j} \le \frac{n^{1/2-p}}{\sqrt{c_n}\sqrt{k-j}} \lesssim \frac{o(1)}{\sqrt{k-j}},$$
which then implies that the probability in (5.4) is of order $1 - o(1)$. This yields the desired set inclusions, as they are true whenever
$$\bigl|\bar\mu_j^k - \hat\mu_j^k\bigr| < \epsilon\log(n)/\sqrt{k-j}, \qquad c_n \le k-j \le n.$$

5.4 Consistency of the long run variance estimate

Lemma 5.6. Under Assumptions (A1) and (A2) we have $\hat\sigma^2 = \sigma^2 + O_P(n^{-1/3})$.

Proof. By Assumption (A1) we have that
$$\hat\sigma^2 = \frac{1}{\lfloor n/m \rfloor - 1} \sum_{i=1}^{\lfloor n/m \rfloor - 1} \frac{\bigl( 2B(im) - B((i-1)m) - B((i+1)m) + A_i \bigr)^2}{2m} + o_P(n^{-1/3}),$$
where $A_i := \bar\mu_{(i-1)m}^{im} - \bar\mu_{im}^{(i+1)m}$. By Lemma 5.1 we have
$$|A_i| = \frac{n}{m}\Bigl| \mu_{(i-1)m/n}^{im/n} - \mu_{im/n}^{(i+1)m/n} \Bigr| + O(m^{-1}) \lesssim O(1),$$
where
the inequality follows by the Lipschitz continuity of $\mu_s^t$ in $s$ and $t$. Standard arguments then yield
$$\hat\sigma^2 = \frac{1}{\lfloor n/m \rfloor - 1} \sum_{i=1}^{\lfloor n/m \rfloor - 1} \frac{\bigl( 2B(im) - B((i-1)m) - B((i+1)m) \bigr)^2}{2m} + O_P(n^{-1/3}),$$
which in turn yields the desired statement by noting that $Z_{im} = B(im) - B((i-1)m)$ is a triangular array of independent $\mathcal{N}(0, \sigma^2)$ variables.

5.5 Proof of Theorems 3.1 and 3.2

Proof of Theorem 3.1. We only consider the case $t^* < \infty$; the case $t^* = \infty$ follows by easier and analogous arguments. We give an upper and a lower bound, which establish the desired result upon combining them.

Upper bound: We first note that it follows from Assumption (A1), condition (2.4) and the fact that the random variable $M$ is almost surely finite that
$$\sup_{c_n \le |k-j| \le n} \Bigl( \bigl|\hat\mu_0^{k_0} - \hat\mu_j^k\bigr| - \bigl|\bar\mu_0^{k_0} - \bar\mu_j^k\bigr| \Bigr) = O_P\Bigl( \sqrt{\frac{\log(n)}{c_n}} \Bigr). \qquad (5.5)$$
We also note that the identity $\mu(t^*) - \mu_0^{t_0} = \Delta$ implies
$$\bar\mu_{j_n}^{k_n} - \mu_0^{t_0} \ge \frac{1}{k_n - j_n} \sum_{i=j_n+1}^{k_n} \bigl( \Delta - C(t - i/n)^\kappa \bigr) = \Delta - \frac{C}{k_n - j_n} \sum_{i=j_n+1}^{k_n} (t - i/n)^\kappa$$
for $k_n = \lfloor nt \rfloor$ and $j_n = k_n - c_n$, where $C$ is some constant that depends only on $\mu$ and $c_\kappa$. We thus obtain
$$\bar\mu_{j_n}^{k_n} - \mu_0^{t_0} \gtrsim \Delta - (c_n/n)^\kappa, \qquad (5.6)$$
and a similar argument is valid when $\mu(t) - \mu_0^{t_0} = -\Delta$. Combining (5.5) and (5.6) therefore yields
$$\bigl|\hat\mu_0^{k_0} - \hat\mu_{j_n}^{k_n}\bigr| \gtrsim \Delta - (c_n/n)^\kappa - O_P\Bigl( \sqrt{\frac{\log(n)}{c_n}} \Bigr), \qquad (5.7)$$
which implies that $\hat t \le t^*$ holds with high probability.

Lower bound: We give the argument for $\mu(t^*) - \mu_0^{t_0} = \Delta$; the other case follows analogously. By (3.3) we know that
$$\mu(t^* - x) - \mu_0^{t_0} = \Delta - c_\kappa x^\kappa + o(x^\kappa).$$
Thereby, choosing
$$x = \Bigl( \frac{3\sigma\log(n)}{c_\kappa\sqrt{c_n}} \Bigr)^{1/\kappa},$$
we have
$$\mu(t^* - x) - \mu_0^{t_0} = \Delta - 3\sigma\log(n)/\sqrt{c_n} + o\bigl( \sqrt{\log(n)/c_n} \bigr).$$
Consequently, using the continuity of $\mu(t) - \mu_0^{t_0}$ and Assumption (A3), we obtain
$$\max_{t \in [t_0, t^*-x]} \bigl| \mu(t) - \mu_0^{t_0} \bigr| \le \Delta - 2\sigma\log(n)/\sqrt{c_n}$$
when $n$ is sufficiently large. Using similar arguments as in the derivation of the upper bound we therefore obtain
$$\sup_{k/n \in [t_0, t^*-x]}\ \sup_{j < k,\ k-j \ge c_n} \bigl|\hat\mu_0^{k_0} - \hat\mu_j^k\bigr| \le \Delta - 2\sigma\log(n)/\sqrt{c_n} + O_P\bigl( \sqrt{\log(n)/c_n} \bigr). \qquad (5.8)$$
Now suppose that there exist $j_n < k_n$ satisfying $k_n - j_n \ge c_n$, $k_n/n \in [t_0, t^*-x]$, such that
$$\bigl|\hat\mu_0^{k_0} - \hat\mu_{j_n}^{k_n}\bigr| \ge \Delta - \frac{\hat\sigma\log(n)}{\sqrt{k_n - j_n}}$$
holds.
By (5.8) this would imply
$$\Delta - 2\sigma\log(n)/\sqrt{c_n} + O_P\bigl( \sqrt{\log(n)/c_n} \bigr) \ge \Delta - \frac{\hat\sigma\log(n)}{\sqrt{c_n}},$$
which happens only with probability $o(1)$ by Lemma 5.6. Consequently, $\hat t \ge t^* - x$ holds with high probability, i.e.
$$\hat t \ge t^* - O_P\Bigl( \Bigl( \frac{\log(n)}{\sqrt{c_n}} \Bigr)^{1/\kappa} \Bigr),$$
as desired.

5.5.1 Proof of Theorem 3.2

Again we only consider the case $t^* < \infty$ and note that the case $t^* = \infty$ follows by easier and analogous arguments. We give an upper and a lower bound, which establish the desired result upon combination.

Lower bound: Consider any $t < t^*$ and assume w.l.o.g. that $\mu(t^*) - \mu_0^{t_0} = \Delta$. Then, by (3.4), we have
$$\bar\mu_j^k - \mu_0^{t_0} < \Delta - \epsilon.$$
Equation (5.5) then yields that $\hat t \ge t^*$ with high probability.

Upper bound: Assume w.l.o.g. (last inequality of (3.4)) that for some $\delta > 0$ we have, for any $t^* \le t \le t^* + \delta$, that $\mu(t) - \mu_0^{t_0} \ge \Delta + O(t - t^*)$. Consequently, by the same arguments leading to (5.7), we have
$$\bigl|\hat\mu_0^{k_0} - \hat\mu_{k_0}^{k_0+c_n}\bigr| \gtrsim \Delta - c_n/n - O_P\Bigl( \sqrt{\frac{\log(n)}{c_n}} \Bigr),$$
which yields $\hat t \le t^* + c_n/n$.

References

Aston, J. A. D. and Kirch, C. (2012). Evaluating stationarity via change-point alternatives with applications to fMRI data. The Annals of Applied Statistics, 6(4):1906–1948.

Aue, A. and Horváth, L. (2013). Structural breaks in time series. Journal of Time Series Analysis, 34(1):1–16.

Aue, A. and Kirch, C. (2023). The state of cumulative sum sequential changepoint testing 70 years after Page. Biometrika, 111(2):367–391.

Baranowski, R., Chen, Y., and Fryzlewicz, P. (2019). Narrowest-over-threshold detection of multiple change points and change-point-like features. Journal of the Royal Statistical Society
Series B: Statistical Methodology, 81(3):649–672.

Berkes, I., Hörmann, S., and Schauer, J. (2011). Split invariance principles for stationary processes. The Annals of Probability, 39(6):2441–2473.

Bogachev, V. (2015). Gaussian Measures. Mathematical Surveys and Monographs. American Mathematical Society.

Bücher, A., Dette, H., and Heinrichs, F. (2021). Are deviations in a gradually varying mean relevant? A testing approach based on sup-norm estimators. The Annals of Statistics, 49(6):3583–3617.

Chen, Y., Wang, T., and Samworth, R. J. (2022). High-dimensional, multiscale online changepoint detection. J. R. Stat. Soc. Ser. B. Stat. Methodol., 84(1):234–266.

Cho, H. and Kirch, C. (2024). Data segmentation algorithms: Univariate mean change and beyond. Econometrics and Statistics, 30:76–95.

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.

Dehling, H. (1983). Limit theorems for sums of weakly dependent Banach space valued random variables. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 63:393–432.

Dette, H., Eckle, T., and Vetter, M. (2020). Multiscale change point detection for dependent data. Scandinavian Journal of Statistics, 47(4):1243–1274.

Dümbgen, L. and Spokoiny, V. G. (2001). Multiscale testing of qualitative hypotheses. The Annals of Statistics, 29(1):124–152.

Dümbgen, L. (2002). Application of local rank tests to nonparametric regression. Journal of Nonparametric Statistics, 14(5):511–537.

Dümbgen, L. and Walther, G. (2008). Multiscale inference about a density. The Annals of Statistics, 36(4):1758–1785.

Frick, K., Munk, A., and Sieling, H. (2014). Multiscale change point inference. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 76(3):495–580.

Fryzlewicz, P. and Rao, S. (2013). Multiple-change-point detection for auto-regressive conditional heteroscedastic processes.
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76.

Gao, J., Gijbels, I., and Van Bellegem, S. (2008). Nonparametric simultaneous testing for structural breaks. Journal of Econometrics, 143(1):123–142. Specification testing.

Gao, Z., Shang, Z., Du, P., and Robertson, J. (2018). Variance change point detection under a smoothly-changing mean trend with application to liver procurement. Journal of the American Statistical Association, 114:1–29.

Gijbels, I., Lambert, A., and Qiu, P. (2007). Jump-preserving regression and smoothing using local linear fitting: A compromise. Annals of the Institute of Statistical Mathematics, 59:235–272.

Horváth, L. and Rice, G. (2024). Change Point Analysis for Time Series. Springer Series in Statistics. Springer Nature.

Hotz, T., Schütte, O. M., Sieling, H., Polupanow, T., Diederichsen, U., Steinem, C., and Munk, A. (2013). Idealizing ion channel recordings by a jump segmentation multiresolution filter. IEEE Transactions on NanoBioscience, 12(4):376–386.

Karl, T., Knight, R., and Plummer, N. (1995). Trends in high-frequency climate variability in the twentieth century. Nature, 377:217–220.

Li, W., Wang, D., and Rinaldo, A. (2023). Divide and conquer dynamic programming: An almost linear time change point detection methodology in high dimensions. In International Conference on Machine Learning, pages 20065–20148. PMLR.

Madrid Padilla, O. H., Yu, Y., Wang, D., and Rinaldo, A. (2022). Optimal nonparametric multivariate change point detection and localization. IEEE Trans. Inform. Theory, 68(3):1922–1944.

Müller, H.-G. (1992). Change-points in nonparametric regression analysis. The Annals of Statistics, 20(2):737–761.

Page, E. S. (1955). A test for a change in a parameter occurring at an unknown point. Biometrika, 42(3-4):523–527.

Schmidt-Hieber, J., Munk, A., and Dümbgen, L. (2013). Multiscale methods for shape constraints in deconvolution: Confidence statements for qualitative features. The Annals of Statistics, 41(3):1299–1328.

Sonmez, O., Aue, A., and Rice, G. (2025). fChange: Change Point Analysis in Functional Data. R package version 0.2.0.

Truong, C., Oudre, L., and Vayatis, N. (2020). Selective review of offline change point detection methods. Signal Processing, 167:107299.

Vogt, M. and Dette, H. (2015). Detecting gradual changes in locally stationary processes. The Annals of Statistics, 43(2):713–740.

Woodall, W. H. and Montgomery, D. C. (1999). Research issues and ideas in statistical process control. Journal of Quality Technology, 31(4):376–386.

Wu, W. B. (2005). Nonlinear system theory: Another look at dependence. Proceedings of the National Academy of Sciences, 102(40):14150–14154.

Wu, W. B. and Zhao, Z. (2007). Inference of trends in time series. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(3):391–410.
Deep learning of point processes for modeling high-frequency data∗

Yoshihiro Gyotoku†1,3, Ioane Muni Toke‡2,3, and Nakahiro Yoshida§1,3

1 University of Tokyo, Graduate School of Mathematical Sciences¶
2 Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes‖
3 Japan Science and Technology Agency CREST

April 23, 2025

Summary

We investigate applications of deep neural networks to a point process having an intensity with mixing covariate processes as input. Our generic model includes Cox-type models and marked point processes as well as multivariate point processes. An oracle inequality and a rate of convergence are derived for the prediction error. A simulation study shows that the marked point process can be superior to the simple multivariate model in prediction. We apply the marked ratio model to real limit order book data.

Keywords and phrases: Deep learning, point process, marked ratio model, prediction, rate of convergence, limit order book.

1 Point process with covariates and deep learning

Given a stochastic basis $\mathcal{B} = (\Omega, \mathcal{F}, \mathbf{F}, P)$, we consider a $d_N$-dimensional counting process $N = (N^i)_{i \in I}$ with a $d_N$-dimensional intensity process $\lambda_t$ with respect to $\mathbf{F}$, where $I$ is a finite

∗ This work was in part supported by Japan Science and Technology Agency CREST JPMJCR2115; Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research No. 23H03354 (Scientific Research); Forefront Physics and Mathematics Program to Drive Transformation (FoPM), a World-leading Innovative Graduate Study (WINGS) Program, the University of Tokyo; and by a Cooperative Research Program of the Institute of Statistical Mathematics.
† gyotoku@ms.u-tokyo.ac.jp
‡ ioane.muni-toke@centralesupelec.fr
§ nakahiro@ms.u-tokyo.ac.jp
¶ Graduate School of Mathematical Sciences, University of Tokyo: 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan.
‖ Université Paris-Saclay, CentraleSupélec, 3 rue Joliot Curie, 91190 Gif-sur-Yvette, France

arXiv:2504.15944v1 [math.ST] 22 Apr 2025

index set with $\#I = d_N$. It is assumed that $\mathbf{F} = (\mathcal{F}_t)_{t \in \mathbb{R}_+}$ is a right-continuous filtration and that the stochastic basis $\mathcal{B}$ satisfies the usual conditions. The intensity process $\lambda_t$ is supposed to admit a representation $\lambda_t = \lambda(X_t)$ with a bounded measurable mapping $\lambda = (\lambda^i)_{i \in I} : \mathcal{X} \to \mathbb{R}^{d_N}$ and a $d_X$-dimensional $\mathbf{F}$-predictable covariate process $X = (X_t)_{t \in \mathbb{R}_+}$ taking values in $\mathcal{X} \in \mathbf{B}[\mathbb{R}^{d_X}]$, the Borel $\sigma$-field. Suppose that the true mechanism generating the data $N$ is denoted by a mapping $\lambda^*$ among the possible mappings $\lambda$. Moreover, we suppose that the components $N^i$ have no common jumps. For a positive number $h$, let $I_j = ((j-1)h, jh)$ ($j \in \mathbb{N} = \{1, 2, \ldots\}$) and $\mathbb{X}_j = (X_t, N_t - N_s)_{t,s \in I_j}$. Suppose that $(X, N)$ is periodically stationary, that is, $(\mathbb{X}_j)_{j \in \mathbb{N}}$ is stationary. For example, periodic stationarity models the stochastic evolution of a limit order book, which has intraday non-stationarity but long-term stationarity. We will consider the periodically stationary case in this paper, while the (exactly) stationary case can, as a matter of fact, be dealt with more simply. The model of the intensity process is expressed by $\lambda : \mathcal{X} \to \mathbb{R}^{d_N}$, as already mentioned. Let $T \in \mathbb{T} = h\mathbb{N}$. Consider certain functions $a = (a^i)_{i \in I}$ and $b$ of $\lambda$, more general than $\lambda$ itself; see the examples below. Let $(a^*, b^*)$ denote $(a, b)$ for $\lambda^*$. We estimate the mapping $(a^*, b^*)$ from the data $(X_t, N_t)_{t \in [0,T]}$ with a family $\mathcal{F}_T$ of candidate mappings $(a, b)$. It is not assumed that $(a^*, b^*)$
belongs to the family $\mathcal{F}_T$. Three examples of settings for $(a, b)$ will be provided later. A family $\mathcal{F}_T$ we are interested in in this article is a deep neural network whose size increases to infinity as $T \to \infty$. However, the result we will obtain is more general and not confined to the case of deep learning (DL). The aim of this paper is to derive a bound for the prediction error of the machine $\mathcal{F}_T$ applied to point processes.

Modeling the limit order book (LOB) with point processes has been a major trend for the past two decades. Early point process models include Bowsher [3], Large [10], Bacry et al. [1, 2], Muni Toke and Pomponio [15], and Lu and Abergel [11], just to mention a few. More recent contributions propose intensity models depending on observable LOB covariates: Muni Toke and Yoshida [16], Rambaldi et al. [20], Morariu-Patrichi and Pakkanen [14], Wu et al. [29], Sfendourakis et al. [23]. Deep learning architectures have also been proposed for the modeling of the limit order book, see e.g. Tsantekidis et al. [28], Sirignano [25], Zhang et al. [32], and Maglaras and Moallemi [12] among many others. Many deep learning contributions focus on the prediction of price movements at a given horizon using some specifically designed neural network architecture fed with limit order book features.

This paper's attempt to incorporate deep learning into point processes is motivated by the authors' studies on modeling of the limit order book. Muni Toke and Yoshida [17] took a parametric approach with a Cox-type model (the ratio model) for relative intensities of order flows in the limit order book. The Cox-type model with a nuisance baseline hazard is well suited to cancel non-stationary intraday trends in the market data. They showed consistency and asymptotic normality of a quasi-likelihood estimator and validated the model selection criteria applied to the point processes, based on the quasi-likelihood analysis (Yoshida [30, 31]).
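The basic object of Section 1 — a counting process whose intensity is a bounded function $\lambda(X_t)$ of a covariate process — can be simulated by Lewis–Shedler thinning. A minimal sketch; the intensity and the covariate path below are hypothetical placeholders, not taken from the paper:

```python
import numpy as np

def simulate_by_thinning(lam, X, T, lam_max, rng):
    # Lewis-Shedler thinning: propose jump times from a homogeneous Poisson
    # process with rate lam_max, keep each with probability lam(X(t)) / lam_max.
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return np.array(events)
        if rng.uniform() < lam(X(t)) / lam_max:
            events.append(t)

lam = lambda x: 1.0 + 0.5 * np.sin(x)   # hypothetical bounded intensity, lam <= 1.5
X = lambda t: 2.0 * np.pi * t           # hypothetical covariate path
rng = np.random.default_rng(0)
events = simulate_by_thinning(lam, X, T=1000.0, lam_max=1.5, rng=rng)
print(len(events) / 1000.0)             # empirical rate, close to the mean intensity 1.0
```

For a multivariate process $N = (N^i)_{i \in I}$ one would run one such sampler per component (or thin a single proposal stream against the summed intensity), consistent with the assumption that the components have no common jumps.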
Their scheme was applied to real data from the Paris Stock Exchange and achieved accurate prediction of the signs of market orders, outperforming the traditional Hawkes model. This suggests that the selection of covariates is crucial for prediction. Subsequently, Muni Toke and Yoshida [18] extended the ratio model to a marked ratio model to express a hierarchical structure in market orders. Each market order is categorized as bid/ask, and further as aggressive/non-aggressive according to whether it changes the price. The marked ratio model outperforms other intensity-based methods, such as Hawkes-based methods, in predicting the sign and aggressiveness of market orders on financial markets. However, the model selection trials in [17, 18] suggest the possibility of including more covariates in the model; the information criteria seem to prefer relatively large models among the many models generated by combinations of the proposed covariates. This motivates us to use deep learning to automatically generate more covariates and to enhance the expressive power of the model for more nonlinear dependencies behind the data.

Following the recent surge of applications of deep learning, theoretical analysis of the
prediction error has been a hot topic in the nonparametric statistical approaches to it. Among these efforts, several survey papers (e.g., [26, 6, 4, 5]) provide a comprehensive overview of the state of the art, offering valuable insights into the key open questions and major developments in the field. More specifically, our work builds on the seminal research of Schmidt-Hieber [22], which analysed the nonparametric estimation of a specific class of functions using fully connected feed-forward neural networks with ReLU (Rectified Linear Unit) activation under independent and identically distributed observations. Since the publication of Schmidt-Hieber [22], several subsequent works [7, 8, 9, 19] have explored its ideas further, extending or applying them to other types of data with various dependence structures and/or more sophisticated neural network architectures.

The organization of this paper is as follows. In Section 2, we formulate the problem more precisely and give a theoretical result on the prediction error, whose proof is given in Section 5. The result is not restricted to the case of deep learning. Section 3 treats an application of the above result to the case of deep learning. The ratio model is investigated in Section 4 in the light of deep learning. It will be shown that information on the structure of the model can serve to diminish the error even if the model is nonparametric, and that this is the case when one uses deep learning models.

2 Rate of convergence of the error

Let us start the discussion with the loss function defining the prediction error. We introduce a contrast function
\[ \Psi_T(a, b) = -\int_0^T a(X_t) \cdot dN_t + \int_0^T b(X_t)\, dt \qquad (T \in \mathbb{T}). \]
The discrepancy between $(a, b)$ and $(a^*, b^*)$ is assessed by the function
\[ U(x) = U_{(a,b)}(x) = -\lambda^*(x) \cdot \big(a(x) - a^*(x)\big) + \big(b(x) - b^*(x)\big). \]
Assume that $U(x) \ge 0$ for all $x \in \mathcal{X}$.

Example 2.1. (Likelihood) The minus log-likelihood function $\Psi_T(a, b)$ is realized by $\lambda(x) = (\lambda^i(x))_{i\in I}$, $a^i(x) = \log \lambda^i(x)$ ($i \in I$) and $b(x) = \sum_{i\in I} \lambda^i(x)$. Then $U(x) \ge 0$.
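As a concrete illustration of Example 2.1, the likelihood contrast can be evaluated numerically on a discretized path; the following is a minimal sketch (the grid step, the toy intensity map, and all names are illustrative, not from the paper):

```python
import numpy as np

def contrast(lams, X, jumps, dt):
    """Likelihood contrast Psi_T(a, b) of Example 2.1, with a_i = log lambda_i
    and b = sum_i lambda_i, so that
      Psi_T = -sum_i int_0^T log lambda_i(X_t) dN_t^i + int_0^T sum_i lambda_i(X_t) dt.
    lams : list of intensity maps x -> lambda_i(x) > 0 (must accept NumPy arrays)
    X    : covariate path sampled on the time grid, shape (n_steps,)
    jumps: jumps[i] = list of grid indices at which N^i jumps
    dt   : grid step; the horizon is T = n_steps * dt
    """
    # Jump part: -sum over components and their jump times of log lambda_i.
    jump_term = -sum(np.log(lam(X[k])) for i, lam in enumerate(lams) for k in jumps[i])
    # Compensator part: Riemann sum approximating int_0^T b(X_t) dt.
    comp_term = sum(lam(X).sum() * dt for lam in lams)
    return jump_term + comp_term
```

For a constant intensity $\lambda \equiv 2$ on $[0, 1]$ with two observed jumps, the contrast reduces to $-2\log 2 + 2$, which the sketch reproduces.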
Example 2.2. (Ratio model) The ratio model of Muni Toke and Yoshida [17] uses $r^i(x) = \lambda^i(x) / \sum_{i'\in I} \lambda^{i'}(x)$ ($i \in I$). In this case, $a(x) = (\log r^i(x))_{i\in I}$ and $b(x) = 0$. Then $U(x) \ge 0$. As a generalization, Muni Toke and Yoshida [18] considered a marked ratio model. The loss functions for the marked ratio model are exemplified in Section 4.

Example 2.3. (Mixed loss) The marks for the $i$-th counting process $N^i$ take values in a finite set $K_i$. The process $N^{i,k_i}$ counts the number of the events $(i, k_i)$, and the intensities are given by
\[ \lambda^{i,k_i}(X_t, Y_t) = \lambda^i(X_t)\, p_i^{k_i}(Y^i_t), \qquad \sum_{k_i \in K_i} p_i^{k_i} = 1, \]
where $Y^i = (Y^i_t)_{t\in\mathbb{R}_+}$ is a covariate process for the mark process associated with $N^i$. Then the likelihood-type loss function becomes a mixture of log-likelihoods of a point process and a ratio model:
\[ -\sum_{i,k_i} \int_0^T \log\big(\lambda^i(X_t)\, p_i^{k_i}(Y^i_t)\big)\, dN^{i,k_i}_t + \sum_{i,k_i} \int_0^T \lambda^i(X_t)\, p_i^{k_i}(Y^i_t)\, dt \]
\[ = -\int_0^T \sum_i \log \lambda^i(X_t)\, dN^i_t + \int_0^T \sum_i \lambda^i(X_t)\, dt - \sum_{i,k_i} \int_0^T \log p_i^{k_i}(Y^i_t)\, dN^{i,k_i}_t, \]
where $N^i = \sum_{k_i \in K_i} N^{i,k_i}$. In this case, $a(x, y) = \big((\log \lambda^i(x))_{i\in I}, (\log p_i^{k_i}(y))_{i\in I, k_i\in K_i}\big)$ and $b(x, y) = \sum_{i\in I} \lambda^i(x)$, for the multivariate point process $\big((N^i)_{i\in I}, (N^{i,k_i})_{i\in I, k_i\in K_i}\big)$, with $(x, y)$ for the
argument "$x$".

Denote by $\mathcal{A}$ a family of pairs of bounded measurable mappings $(a, b)$ on $\mathcal{X}$ such that
\[ \sup_{(a,b)\in\mathcal{A}} \big( \||a|\|_\infty \vee \|b\|_\infty \big) \le F \]
for some positive constant $F$. The true pair $(a^*, b^*)$ corresponding to the true structure is assumed to satisfy $(a^*, b^*) \in \mathcal{A}$, as well as $\mathcal{F}_T \subset \mathcal{A}$. We consider an estimator $(\widehat{a}_T, \widehat{b}_T)$ of $(a^*, b^*)$ from the data $(X_t, N_t)_{t\in[0,T]}$ for $T \in \mathbb{T}$ by optimizing $\Psi_T(a, b)$ over a family $\mathcal{F}_T$ of models in $\mathcal{A}$, e.g. deep learning models. The estimator $(\widehat{a}_T, \widehat{b}_T)$ takes its values in $\mathcal{F}_T$.

Let $(\mathbb{X}, \mathbb{N})$ be an independent copy of $(X, N)$. The risk function (i.e., the expected prediction error) when $(\widehat{a}_T, \widehat{b}_T)$ is used is
\[ \mathcal{R}_T = E\Big[ T_1^{-1} \int_0^{T_1} \widehat{U}_T(\mathbb{X}_t)\, dt \Big] \]
for a fixed $T_1 \in \mathbb{T}$, where
\[ \widehat{U}_T(x) = -\lambda^*(x) \cdot \big(\widehat{a}_T(x) - a^*(x)\big) + \widehat{b}_T(x) - b^*(x). \]
We may choose $T_1 = h$ due to the periodic stationarity. We also have the representation
\[ \mathcal{R}_T = E\Big[ -T^{-1}\int_0^T \big(\widehat{a}_T(\mathbb{X}_t) - a^*(\mathbb{X}_t)\big)\cdot d\mathbb{N}_t + T^{-1}\int_0^T \big(\widehat{b}_T(\mathbb{X}_t) - b^*(\mathbb{X}_t)\big)\, dt \Big] \]
for $T \in \mathbb{T}$.

The following compatibility condition is assumed: there exists a positive constant $C_* \ge 1$ such that
\[ C_*^{-2} \big( |a(x) - a^*(x)|^2 + |b(x) - b^*(x)|^2 \big) \le -\lambda^*(x)\cdot\big(a(x)-a^*(x)\big) + \big(b(x)-b^*(x)\big) \le C_*^2 \big( |a(x)-a^*(x)|^2 + |b(x)-b^*(x)|^2 \big) \tag{2.1} \]
for all $(a, b) \in \mathcal{A}$, $T \in \mathbb{T}$, and all $x \in \mathcal{X}$. Under (2.1), in particular,
\[ C_*^{-1} \Big( h^{-1}\int_0^h E^X\big[ |a(X_t)-a^*(X_t)|^2 + |b(X_t)-b^*(X_t)|^2 \big]\, dt \Big)^{1/2} \le \Big( h^{-1}\int_0^h E^X\big[ -\lambda^*(X_t)\cdot\big(a(X_t)-a^*(X_t)\big) + \big(b(X_t)-b^*(X_t)\big) \big]\, dt \Big)^{1/2} \le C_* \Big( h^{-1}\int_0^h E^X\big[ |a(X_t)-a^*(X_t)|^2 + |b(X_t)-b^*(X_t)|^2 \big]\, dt \Big)^{1/2} \tag{2.2} \]
for all $(a, b) \in \mathcal{A}$ and $T \in \mathbb{T}$. Here $E^X$ stands for the expectation with respect to $X$. Such a condition can be checked, e.g., by the following estimate: for any positive constants $x_0$ and $x_1$, there exist constants $c_0$ and $c_1$ such that
\[ c_0 (x-1)^2 \le -\log x + x - 1 \le c_1 (x-1)^2 \]
for all $x \in [x_0, x_1]$. In Example 2.1, when the family of mappings $\lambda = (\lambda^i)_{i\in I}$ associated with $(a, b) \in \mathcal{A}$ satisfies $0 < \inf_{x\in\mathcal{X}, T\in\mathbb{T}} \lambda^i(x) \le \sup_{x\in\mathcal{X}, T\in\mathbb{T}} \lambda^i(x) < \infty$, the compatibility condition (2.1) holds true. The compatibility condition is a condition on the structure of $\mathcal{A}$. It can be verified in a similar manner in Examples 2.2 and 2.3.
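The elementary bound $c_0(x-1)^2 \le -\log x + x - 1 \le c_1(x-1)^2$ on a fixed interval $[x_0, x_1]$ can be checked numerically; a quick sketch, with an illustrative (not optimal) choice of constants on $[0.5, 2]$:

```python
import numpy as np

# Check c0*(x-1)^2 <= -log(x) + x - 1 <= c1*(x-1)^2 on [x0, x1] = [0.5, 2.0].
# By Taylor expansion, -log x + x - 1 = (x-1)^2/2 + O((x-1)^3) near x = 1,
# so on this interval c0 = 1/6 and c1 = 1 suffice (illustrative constants).
x = np.linspace(0.5, 2.0, 2001)
f = -np.log(x) + x - 1.0
g = (x - 1.0) ** 2
c0, c1 = 1.0 / 6.0, 1.0
assert np.all(c0 * g <= f + 1e-12)
assert np.all(f <= c1 * g + 1e-12)
```

Different intervals $[x_0, x_1]$ require different constants; the lower constant degenerates as $x_1 \to \infty$ and the upper one as $x_0 \to 0$.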
Suppose that $\mathcal{A}$ admits a distance $d$ such that
\[ d\big((a', b'), (a, b)\big) \ge 2 C_* d_N \big(1 + \||\lambda^*|\|_\infty\big) \big( \||a' - a|\|_\infty + \|b' - b\|_\infty \big) \tag{2.3} \]
for $(a, b), (a', b') \in \mathcal{A}$. Then, in particular,
\[ d\big((a', b'), (a, b)\big) \ge E\Big[ T_1^{-1}\int_0^{T_1} \big|a'(X_t) - a(X_t)\big|\, |\lambda^*(X_t)|\, dt + T_1^{-1}\int_0^{T_1} \big|b'(X_t) - b(X_t)\big|\, dt \Big]. \]
Define $\Delta_T$ by
\[ \Delta_T = E\Big[ T^{-1}\Psi_T(\widehat{a}_T, \widehat{b}_T) - \inf_{(a,b)\in\mathcal{F}_T} T^{-1}\Psi_T(a, b) \Big]. \]
The $\alpha$-mixing coefficient for the $h$-periodically stationary process $X$ is given by
\[ \alpha^X_h(k) = \sup_{j\in\mathbb{Z}_+} \sup_{A\in\mathcal{F}^X_{[0,jh]},\, B\in\mathcal{F}^X_{[(j+k)h,\infty)}} \big| P[A\cap B] - P[A]P[B] \big| \]
for $k \in \mathbb{Z}_+$, where $\mathcal{F}^X_I = \sigma[X_s; s\in I]$ for $I \subset \mathbb{R}_+$, i.e., the $\sigma$-field generated by $\{X_s; s\in I\}$. We assume that $\alpha^X_h(k) \le \gamma^{-1} e^{-\gamma k}$ for all $k \in \mathbb{Z}_+$, for some constant $\gamma > 0$. A usual $\alpha$-mixing coefficient for $X$ is
\[ \alpha^X(h) = \sup_{t\in\mathbb{R}_+} \sup_{A\in\mathcal{F}^X_{[0,t]},\, B\in\mathcal{F}^X_{[t+h,\infty)}} \big| P[A\cap B] - P[A]P[B] \big|. \]
If $\alpha^X(h) \le \gamma'^{-1} e^{-\gamma' h}$ for all $h \in \mathbb{R}_+$, for some constant $\gamma' > 0$, then the $\alpha$-mixing coefficient $\alpha^X_h(k)$ decays geometrically. Let $\overline{T} = T/h$ for $T \in \mathbb{T}$. We give a rate of convergence of $\mathcal{R}_T$.

Theorem 2.4. Let $\xi$ be any positive number. Then there exists a constant $C_0$ depending on $\gamma$, $h$, $\||\lambda^*|\|_\infty$, $d_N$, $C_*$ and $\xi$, such that
\[ \mathcal{R}_T \le 2\Delta_T + 2\inf_{(a,b)\in\mathcal{F}_T} h^{-1} E\big[ \Psi_h(a, b) - \Psi_h(a^*, b^*) \big] + C_0 (1 + F^2)\big( \overline{T}^{-1}(\log \overline{T})^2 \log \mathcal{N}_T + \delta \big) \tag{2.4} \]
whenever $\overline{T} \ge 2 \vee \{\xi (\log \overline{T})^2 \log \mathcal{N}_T\}$ and $\mathcal{N}_T \ge 2$. Here $\mathcal{N}_T = \mathcal{N}_{T,\delta}$ is the covering number of $\mathcal{F}_T$ by the $\delta$-balls with respect to the distance $d$.

We will prove Theorem 2.4 in Section 5.

3 Application to deep learning

The inequality (2.4) can provide a rate of convergence of the risk if
combined with an error bound for the approximation by the machine $\mathcal{F}_T$ and an estimate of its covering number $\mathcal{N}_T$. Schmidt-Hieber [22] considered a deep neural network with ReLU activation function and presented a covering number when the network is fitted under a sparsity condition. The shifted ReLU activation function $\sigma_v : \mathbb{R}^d \to \mathbb{R}^d$ is defined as
\[ \sigma_v(x) = \big( (x_1 - v_1)_+, \dots, (x_d - v_d)_+ \big)^\star, \]
where, for $x = (x_1, \dots, x_d)^\star \in \mathbb{R}^d$ and $v = (v_1, \dots, v_d)^\star$, $u_+ = \max\{u, 0\}$ for $u \in \mathbb{R}$. For weight matrices $W_i \in \mathbb{R}^{p_{i+1}} \otimes \mathbb{R}^{p_i}$ ($i = 0, 1, \dots, L$) and shift vectors $v_i \in \mathbb{R}^{p_i}$ ($i = 1, \dots, L$), the mapping $f(\,\cdot\,; (W_L, \dots, W_1, W_0), (v_L, \dots, v_1)) : \mathbb{R}^{p_0} \to \mathbb{R}^{p_{L+1}}$ is defined as
\[ f\big(x; (W_L, \dots, W_1, W_0), (v_L, \dots, v_1)\big) = W_L \sigma_{v_L} W_{L-1} \sigma_{v_{L-1}} \cdots W_1 \sigma_{v_1} W_0 x \qquad (x \in \mathbb{R}^{p_0}). \tag{3.1} \]
The dimension $p_0 = d_X$ of the input process $X$, and $p_{L+1} = 1$ in the applications to point processes in this article. The set of functions $f(x; (W_L, \dots, W_1, W_0), (v_L, \dots, v_1))$ taking the form (3.1) is denoted by $\mathcal{D}$, and it is called a deep neural network, or deep learning. Some restrictions are imposed on $\mathcal{D}$, depending on the situation one is working in. Schmidt-Hieber [22] uses the class
\[ \mathcal{F}_T = \Big\{ f \in \mathcal{D} \text{ of the form (3.1)};\ \max_{\ell=0,\dots,L,\ j=1,\dots,L} \big( \|W_\ell\|_\infty \vee \|v_j\|_\infty \big) \le 1,\ \sum_{\ell=0}^L \|W_\ell\|_0 + \sum_{j=1}^L \|v_j\|_0 \le s,\ \|f\|_\infty \le F \Big\} \]
for a given positive constant $F$, where the parameters $L$ and $p_i$ ($i = 1, \dots, L$), determining the size of the learning machine, as well as the sparsity index $s$, depend on $T$. The $0$-norm $\|\cdot\|_0$ denotes the number of non-zero entries of the object.

The function $g$ that generates the data is assumed, in Schmidt-Hieber [22], to be expressed as a composite of functions of Hölder classes. Therein, a ball of $\beta$-Hölder functions with radius $K$ is prepared:
\[ \mathcal{C}^\beta(D, K) = \Big\{ g : D \to \mathbb{R};\ \sum_{\alpha : |\alpha| < \beta} \|\partial^\alpha g\|_\infty + \sum_{\alpha : |\alpha| = \lfloor\beta\rfloor} \sup_{x,y\in D,\, x\ne y} \frac{|\partial^\alpha g(x) - \partial^\alpha g(y)|}{\|x - y\|_\infty^{\beta - \lfloor\beta\rfloor}} \le K \Big\} \]
for a domain $D$ in a Euclidean space and a number $K$, where $\lfloor\beta\rfloor$ denotes the largest integer strictly smaller than $\beta$.
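The network (3.1) and its sparsity index are straightforward to implement; here is a minimal NumPy sketch (the toy weights in the usage example are illustrative, not from the paper):

```python
import numpy as np

def sigma_v(z, v):
    """Shifted ReLU: componentwise (z - v)_+."""
    return np.maximum(z - v, 0.0)

def network(x, Ws, vs):
    """The map f(x; (W_L, ..., W_1, W_0), (v_L, ..., v_1)) of (3.1).
    Ws = [W_0, ..., W_L], with W_i of shape (p_{i+1}, p_i);
    vs = [v_1, ..., v_L], with v_i of shape (p_i,), applied after W_{i-1}."""
    z = Ws[0] @ x
    for W, v in zip(Ws[1:], vs):
        z = W @ sigma_v(z, v)
    return z

def sparsity(Ws, vs):
    """Sparsity index s = sum_l ||W_l||_0 + sum_j ||v_j||_0 (number of nonzeros)."""
    return sum(np.count_nonzero(W) for W in Ws) + sum(np.count_nonzero(v) for v in vs)
```

For instance, with $L = 1$, $W_0 = (1, -1)^\star$, $v_1 = 0$ and $W_1 = (1, 1)$, the network computes $x \mapsto (x)_+ + (-x)_+ = |x|$, and its sparsity index is $4$.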
A family $\mathcal{G} = \mathcal{G}(q, \mathbf{d}, \mathbf{t}, \boldsymbol{\beta}, K)$ of possible data-generating mechanisms is a collection of functions $g = g_q \circ \cdots \circ g_0$ such that $g_i = (g_{ij})_j : [a_i, b_i]^{d_i} \to [a_{i+1}, b_{i+1}]^{d_{i+1}}$, where each component $g_{ij}$ is a function of some $t_i$ of the arguments in $\mathbb{R}^{d_i}$ and satisfies $g_{ij} \in \mathcal{C}^{\beta_i}([a_i, b_i]^{t_i}, K)$, given vectors $\mathbf{d} = (d_0, \dots, d_{q+1})$, $\mathbf{t} = (t_0, \dots, t_q)$ and $\boldsymbol{\beta} = (\beta_0, \dots, \beta_q)$.

In order to obtain a good bound for the risk $\mathcal{R}_T$, a set of conditions is required to make $\mathcal{F}_T$ sufficiently rich and not too large at the same time. Naturally, such conditions involve the smoothness of the target function $g$. As in Schmidt-Hieber [22], we impose the following conditions:
\[ F \ge \max\{K, 1\}, \qquad \sum_{i=0}^q \log_2\big(4(t_i \vee \beta_i)\big) \log_2 T \le L \lesssim T\phi_T, \qquad T\phi_T \lesssim \min\{p_1, \dots, p_L\}, \qquad s \asymp T\phi_T \log T. \tag{3.2} \]
The effective smoothness index is defined as $\beta^*_i = \beta_i \prod_{j=i+1}^q (\beta_j \wedge 1)$, and the key convergence rate as
\[ \phi_T = \max_{i=0,\dots,q} T^{-\frac{2\beta^*_i}{2\beta^*_i + t_i}}. \]
As in Inequality (26) of Schmidt-Hieber [22], we obtain
\[ \inf_{(a,b)\in\mathcal{F}_T} h^{-1} E\big[ \Psi_h(a, b) - \Psi_h(a^*, b^*) \big] \lesssim \phi_T \tag{3.3} \]
by the compatibility (2.2). On the other hand, Lemma 5 of Schmidt-Hieber [22] gives an estimate of the covering number:
\[ \log \mathcal{N}_T \le (s + 1) \log\Big( 2\delta^{-1}(L+1) \prod_{\ell=0}^{L+1} (p_\ell + 1) \Big). \tag{3.4} \]
The above covering number is based on the uniform norm, but we can take a metric $d$ of (2.3) compatible with the sup-norm. Following Schmidt-Hieber [22],
we obtain the following estimate of the risk in prediction with the $\mathcal{F}_T$ specified above, combining (2.4) with the properties (3.3)-(3.4) (the $p_\ell$ are bounded by $s$).

Theorem 3.1. Let $\xi > 0$. If $\Delta_T \le C_0 \phi_T L (\log T)^4$ ($T \ge T_0$) for some positive constants $C_0$ and $T_0 > 1$, then there exists a constant $C$ such that
\[ \mathcal{R}_T \le C \phi_T L (\log T)^4 \tag{3.5} \]
for $T \ge T_0$, whenever $T \ge \xi (\log T)^2 (s+1) \log\big( 2\delta^{-1}(L+1) \prod_{\ell=0}^{L+1}(p_\ell + 1) \big)$.

Remark 3.2. (i) The error bound (3.5) has the factor $(\log T)^4$ instead of the $(\log T)^2$ in Schmidt-Hieber [22]. This factor comes from the large deviation estimate for a functional under the mixing condition. The error bound (3.5) becomes $C\phi_T(\log T)^5$ when $L \asymp \log T$.
(ii) The error bound (3.5) is minimax-optimal up to the logarithmic factor, in that our model includes the case where the covariate process $X$ is periodically independent. Schmidt-Hieber [22] showed the optimality for an independent input process in the nonparametric regression setting, and the risk function is the same under the compatibility condition.
(iii) Suzuki and Nitanda [27] propose the use of an anisotropic Besov space to represent the target function. This approach can be adopted to obtain a better error bound for the point process models as well.

4 The marked ratio model

4.1 A simulation study

We propose in this section a simulation study in the case of marked intensities. Recall that in this case we consider the marked intensity processes
\[ \lambda^{i,k_i}(X_t, Y_t) = \lambda_0(t)\, \lambda^i(X_t)\, p_i^{k_i}(Y_t), \qquad i \in I,\ k_i \in K_i, \tag{4.1} \]
where we assume that $\sum_{k_i\in K_i} p_i^{k_i} = 1$ for all $i \in I$, and $\lambda_0$ is an unobserved baseline intensity. We refer the reader to Muni Toke and Yoshida [18] for more details on the model.

4.1.1 Description of the numerical example

In our numerical example we consider a 4-dimensional process with 2-dimensional marks, i.e. we set $d_N = 4$, $I = \{0, 1, 2, 3\}$ and, for all $i \in I$, $K_i = \{0, 1\}$. The covariate process $X$ is $d_X$-dimensional with $d_X = 2$, the covariate process $Y$ is $d_Y$-dimensional with $d_Y = 1$, and all three coordinate covariates are independent Ornstein-Uhlenbeck (OU) processes.
More precisely, we consider $(B^{X_0}, B^{X_1}, B^Y)$ a 3-dimensional Brownian motion on our probability space and set
\[ \begin{cases} dX^j_t = \theta_{X_j}(\bar{x}_{X_j} - X^j_t)\, dt + \sigma_{X_j}\, dB^{X_j}_t, & j = 0, 1, \\ dY_t = \theta_Y(\bar{x}_Y - Y_t)\, dt + \sigma_Y\, dB^Y_t. \end{cases} \tag{4.2} \]
Values of the OU parameters $\big((\theta_{X_j}, \bar{x}_{X_j}, \sigma_{X_j})_{j=0,1}, \theta_Y, \bar{x}_Y, \sigma_Y\big)$ are given in Table 1. We keep the number of covariates reasonably low in this numerical example so that our fitting results can still be represented graphically in a manageable way. The baseline intensity is set to $\lambda_0(t) = 1 + \cos(2\pi t)$. Non-marked intensities $\lambda^i$ are defined for $x = (x_0, x_1)$ as
\[ \begin{cases} \lambda^0(x) = 2 + \tanh(x_0)\exp(-x_1^2), \\ \lambda^1(x) = 2 + \cos(\pi x_0)\tanh(x_1), \\ \lambda^2(x) = 2 + \sin(2\pi x_0)\, e^{x_1}(1 + e^{x_1})^{-1}, \\ \lambda^3(x) = 3 - \exp(-x_0^2), \end{cases} \tag{4.3} \]

        θ     x̄     σ
  X_0  0.1   0.0   0.1
  X_1  0.2   0.0   0.2
  Y    0.1   0.0   0.1

Table 1: Numerical values for the OU covariate processes.

and mark probabilities $p_i^{k_i}$ are written
\[ \begin{cases} p_0^0(y) = 0.25, & p_0^1 = 1 - p_0^0, \\ p_1^0(y) = 0.05 + 0.9\,|\cos(\pi y)|, & p_1^1 = 1 - p_1^0, \\ p_2^0(y) = e^y(1 + e^y)^{-1}, & p_2^1 = 1 - p_2^0, \\ p_3^0(y) = 0.6\exp(-y^2), & p_3^1 = 1 - p_3^0. \end{cases} \tag{4.4} \]
Numerical simulations of the point processes $(N^{i,k_i})_{i\in I, k_i\in K_i}$ are carried out via a thinning algorithm.

4.1.2 One-step ratio estimation

We define a first estimation method in the spirit
of [17]. We start by considering the 8-dimensional point process $(N^{i,k_i})_{i\in I, k_i\in K_i}$. We define the ratio functions
\[ r_1^{i,k_i}(x, y) = \frac{\lambda^{i,k_i}(x, y)}{\sum_{j\in I,\, k_j\in K_j} \lambda^{j,k_j}(x, y)} \quad\text{and}\quad \tilde{r}_1^{i,k_i}(x, y) = \frac{r_1^{i,k_i}(x, y)}{r_1^{0,0}(x, y)}. \tag{4.5} \]
Obviously, $\sum_{i\in I, k_i\in K_i} r_1^{i,k_i} = 1$, $\tilde{r}_1^{0,0} = 1$ and $\sum_{i\in I, k_i\in K_i} \tilde{r}_1^{i,k_i} = \frac{1}{r_1^{0,0}}$. In this first estimation method, we set
\[ l_1^{i,k_i}(x, y) = \log \tilde{r}_1^{i,k_i}(x, y) \tag{4.6} \]
and these functions are estimated for $(i, k_i) \ne (0, 0)$ with a neural network. We define a standard dense feed-forward neural network with a $(d_X, n^N_1)$-shaped input layer for the covariates $X$, $n^L_1$ inner layers with $n^N_1$ neurons per layer, and a final $(n^N_1, 7)$-shaped output layer that outputs the estimated quantities $\hat{l}_1^{i,k_i}(x, y)$, $(i, k_i) \ne (0, 0)$, approximating the $l_1^{i,k_i}(x, y)$. All layers except the last hidden one and the output one use a LeakyReLU activation function. In the general terminology of the previous sections, the contrast function $\Psi_T(a_1, b_1)$ is in this case defined with $b_1(x, y) = 0$ and
\[ a_1^{i,k_i}(x, y) = \log r_1^{i,k_i}(x, y) = \log \frac{\tilde{r}_1^{i,k_i}(x, y)}{\sum_{j\in I,\, k_j\in K_j} \tilde{r}_1^{j,k_j}(x, y)}. \tag{4.7} \]
The loss function $L_1$ of the neural network computed on a sample $S_{1,T} = \{(X_t, Y_t, (N^{i,k_i}_t)_{i,k_i})\}_{t\in[0,T]}$ is thus
\[ L_1(S_{1,T}) = -\int_0^T \sum_{i\in I,\, k_i\in K_i} \log \frac{\exp\big(l_1^{i,k_i}(X_t, Y_t)\big)}{\sum_{j\in I,\, k_j\in K_j} \exp\big(l_1^{j,k_j}(X_t, Y_t)\big)}\, dN^{i,k_i}_t. \tag{4.8} \]
Recall that $l_1^{0,0} = 0$ is not learned. The index 1 in the notation of this section indicates that this is our first estimation method (one-step ratio estimation).

4.1.3 Two-step ratio estimation

We now define a second estimation method in the spirit of [18]. In a first step we use a ratio model on the non-marked intensities $\lambda^i(X_t)$, and then in a second step we use $\#I = 4$ other ratio estimations on the mark probabilities $p_i^{k_i}$, one for each $i \in I$. The notation for the first step of this second estimation is
\[ r_2^i(x) = \frac{\lambda^i(x)}{\sum_{j\in I} \lambda^j(x)}, \qquad \tilde{r}_2^i(x) = \frac{r_2^i(x)}{r_2^0(x)} \quad\text{and}\quad l_2^i(x) = \log \tilde{r}_2^i(x) \tag{4.9} \]
and these functions are estimated for $i \ne 0$ with a neural network.
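In both methods, the learned log-ratios are mapped back to probabilities through a softmax with the reference coordinate pinned to zero, as in (4.7); a minimal sketch (the function name is illustrative):

```python
import numpy as np

def probs_from_log_ratios(l):
    """Recover the probabilities r_k from the log-ratios l_k = log(r_k / r_0),
    with the reference component l_0 = 0 prepended (it is not learned):
    r_k = exp(l_k) / sum_j exp(l_j)."""
    l = np.concatenate(([0.0], np.asarray(l, dtype=float)))
    e = np.exp(l - l.max())  # numerically stabilized softmax
    return e / e.sum()
```

For example, log-ratios $(\log 2, \log 3)$ correspond to probabilities proportional to $(1, 2, 3)$, i.e. $(1/6, 1/3, 1/2)$.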
In order to compute the estimators $\hat{l}_2^i(x)$, we use the general architecture previously defined for the first estimation method, but now with parameters $n^L_{2,1}$, $n^N_{2,1}$ and a 3-dimensional output ($i \in I\setminus\{0\}$). The loss function $L_{2,1}$ of the neural network computed on a sample $S_{2,1,T} = \{(X_t, (N^i_t)_i)\}_{t\in[0,T]}$ is thus
\[ L_{2,1}(S_{2,1,T}) = -\int_0^T \sum_{i\in I} \log \frac{\exp\big(l_2^i(X_t)\big)}{\sum_{j\in I} \exp\big(l_2^j(X_t)\big)}\, dN^i_t. \tag{4.10} \]
The notation for the $i$-th ratio model of the second step of the second estimation method is then
\[ r_2^{i,k_i}(y) = \frac{p_i^{k_i}(y)}{\sum_{k\in K_i} p_i^k(y)} = p_i^{k_i}(y), \qquad \tilde{r}_2^{i,k_i}(y) = \frac{r_2^{i,k_i}(y)}{r_2^{i,0}(y)} = \frac{p_i^{k_i}(y)}{p_i^0(y)} \quad\text{and}\quad l_2^{i,k_i}(y) = \log \tilde{r}_2^{i,k_i}(y) \tag{4.11} \]
and these functions are estimated for $k_i \ne 0$ with a neural network. Again, in order to compute the estimators $\hat{l}_2^{i,k_i}(y)$ we use the same general architecture for the neural networks, now with a $d_Y$-dimensional input, parameters $n^L_{2,2}$, $n^N_{2,2}$ and a 1-dimensional output ($k_i \in K_i\setminus\{0\}$). The loss function $L^i_{2,2}$ of the neural network computed on a sample $S^i_{2,2,T} = \{(Y_t, (N^{i,k_i}_t)_{k_i})\}_{t\in[0,T]}$ is in this case
\[ L^i_{2,2}(S^i_{2,2,T}) = -\int_0^T \sum_{k_i\in K_i} \log \frac{\exp\big(l_2^{i,k_i}(Y_t)\big)}{\sum_{k\in K_i} \exp\big(l_2^{i,k}(Y_t)\big)}\, dN^{i,k_i}_t. \tag{4.12} \]

4.1.4 Fitting results

As a first illustration, we simulate the model (4.1)-(4.4) over a horizon $T = 128{,}000$ (note that, given the above definitions, a sample has roughly $2T$ points in each of the 4 dimensions of the process in this model). We then fit the model with our two estimation methods and the parameters $n^L_1 = n^L_{2,1} = n^L_{2,2} = 8$ and $n^N_1 = n^N_{2,1} = n^N_{2,2} = 64$.

Figure 1 plots the true functions $l_1^{i,k_i}(x, y)$ and the estimated functions $\hat{l}_1^{i,k_i}(x, y)$ by the one-step estimation method. In order to better visualize the results, we provide plots of the 7 functions $x_0 \mapsto \hat{l}_1^{i,k_i}(x_0, \hat{q}_{X_1}(\alpha), \hat{q}_Y(\beta))$, where $\hat{q}_{X_1}(\alpha)$ is the $\alpha$-quantile of the empirical distribution of $X_1$, $\hat{q}_Y(\beta)$ is the $\beta$-quantile of the empirical distribution of $Y$, and $\alpha, \beta \in \{0.2, 0.4, 0.6, 0.8\}$, hence the $4\times 4$ matrix of plots. Figure 2 plots the true functions $l_2^i(x)$ and the estimated functions $\hat{l}_2^i(x)$ (again, we plot $x_0 \mapsto \hat{l}_2^i(x_0, \hat{q}_{X_1}(\alpha))$ for $\alpha \in \{0.2, 0.4, 0.6, 0.8\}$). Figure 3 plots the true functions $l_2^{i,k_i}(y)$ and the estimated functions $\hat{l}_2^{i,k_i}(y)$.

Figure 1: Simulation study — Estimated functions $\hat{l}_1^{i,k_i}(x, y)$ by the one-step estimation method. True functions are plotted as dotted lines of the color of the corresponding estimated function.
Figure 2: Simulation study — Estimated functions $\hat{l}_2^i(x)$ by the two-step estimation method. True functions are plotted as dotted lines of the color of the corresponding estimated function.

Figure 3: Simulation study — Estimated functions $\hat{l}_2^{i,k_i}(y)$ by the two-step estimation method. True functions are plotted as dotted lines of the color of the corresponding estimated function.

All these graphs illustrate the ability of the estimation methods to retrieve the various shapes of ratio functions defined by the model.

Now, in order to compare the estimation methods, we illustrate the results in terms of probabilities. In the model (4.1)-(4.4), the probability that an observed event in state $(x, y)$ is of type $i$ and mark $k_i$ is
\[ p^{i,k_i}(x, y) = \frac{\lambda^{i,k_i}(x, y)}{\sum_{j\in I,\, k_j\in K_j} \lambda^{j,k_j}(x, y)}. \tag{4.13} \]
Note that $p^{i,k_i}$ and $p_i^{k_i}$ are not the same: $p^{i,k_i}$, defined in Equation (4.13), is the joint probability of the type $i$ and the mark $k_i$, while $p_i^{k_i}$, defined in Equation (4.1), is the conditional probability of the mark $k_i$ given the type $i$. The probabilities $p^{i,k_i}(x, y)$ are straightforwardly estimated
by the one-step estimation method with
\[ \hat{p}^{i,k_i}_1(x, y) = \frac{\exp\big(\hat{l}^{i,k_i}_1(x, y)\big)}{\sum_{j\in I,\, k_j\in K_j} \exp\big(\hat{l}^{j,k_j}_1(x, y)\big)}, \tag{4.14} \]
and by the two-step estimation method with
\[ \hat{p}^{i,k_i}_2(x, y) = \frac{\exp\big(\hat{l}^i_2(x)\big)}{\sum_{j\in I} \exp\big(\hat{l}^j_2(x)\big)} \cdot \frac{\exp\big(\hat{l}^{i,k_i}_2(y)\big)}{\sum_{k\in K_i} \exp\big(\hat{l}^{i,k}_2(y)\big)}. \tag{4.15} \]
Figure 4 plots the true functions $p^{i,k_i}(x, y)$ and the estimated functions $\hat{p}^{i,k_i}_1(x, y)$, and Figure 5 plots the true functions $p^{i,k_i}(x, y)$ and the estimated probabilities $\hat{p}^{i,k_i}_2(x, y)$. We use the $4\times 4$ matrix representation defined above for Figure 1. Both methods provide visually high-quality fits for the event probabilities of the model. However, a careful examination of the plots indicates that the two-step estimates provide a better fit to the true probabilities. We formalize this observation in the following section.

4.1.5 Comparison of the methods and convergence results

Recall that, using the general terminology of Section 2, the risk function of our models is (since $b \equiv 0$ in the ratio estimations)
\[ \mathcal{R}_T = E\Big[ -\frac{1}{T}\int_0^T \big(\hat{a}(\mathbb{X}_t, \mathbb{Y}_t) - a^*(\mathbb{X}_t, \mathbb{Y}_t)\big)\cdot d\mathbb{N}_t \Big], \tag{4.16} \]
where $(\mathbb{X}, \mathbb{Y}, \mathbb{N})$ are independent copies of $(X, Y, N)$. We can thus simulate a new sample of length $T$ of our model and compute empirical versions $\mathbf{R}_T$ of the risk functions. In the one-step estimation, we obtain on the sample $\mathbb{S}_{1,T} = \{(\mathbb{X}_t, \mathbb{Y}_t, (\mathbb{N}^{i,k_i}_t)_{i,k_i})\}_{t\in[0,T]}$
\[ \mathbf{R}_{1,T}(\mathbb{S}_{1,T}) = -\frac{1}{T}\int_0^T \big( \log \hat{r}_1(\mathbb{X}_t, \mathbb{Y}_t) - \log r_1(\mathbb{X}_t, \mathbb{Y}_t) \big)\cdot d\mathbb{N}_t = -\frac{1}{T}\int_0^T \sum_{i\in I,\, k_i\in K_i} \log\Bigg( \frac{\exp\big(\hat{l}^{i,k_i}_1(\mathbb{X}_t, \mathbb{Y}_t)\big)}{\sum_{j,k_j} \exp\big(\hat{l}^{j,k_j}_1(\mathbb{X}_t, \mathbb{Y}_t)\big)} \cdot \frac{\sum_{j,k_j} \exp\big(l^{j,k_j}_1(\mathbb{X}_t, \mathbb{Y}_t)\big)}{\exp\big(l^{i,k_i}_1(\mathbb{X}_t, \mathbb{Y}_t)\big)} \Bigg)\, d\mathbb{N}^{i,k_i}_t. \tag{4.17} \]

Figure 4: Simulation study — Estimated probabilities $\hat{p}^{i,k_i}_1(x, y)$ by the one-step estimation method.
True functions are plotted as dotted lines of the color of the corresponding estimated function.

Figure 5: Simulation study — Estimated probabilities $\hat{p}^{i,k_i}_2(x, y)$ by the two-step estimation method. True functions are plotted as dotted lines of the color of the corresponding estimated function.

Figure 6: Simulation study — $L^2$-errors (full lines, left panel), $L^\infty$-errors (dashed lines, left panel) and empirical risk function $\mathbf{R}_T$ (dash-dotted lines, right panel) as functions of the horizon of the simulation for the one-step (blue) and the two-step (red) estimation methods. Dotted black lines with slopes $-1/3$ and $-2/3$ are plotted for visual guidance.

The empirical risk functions $\mathbf{R}_{2,1,T}(\mathbb{S}_{2,1,T})$ and $\mathbf{R}^i_{2,2,T}(\mathbb{S}^i_{2,2,T})$, $i \in I$, are analogously defined with the appropriate subsamples $\mathbb{S}_{2,1,T}$ and $\mathbb{S}^i_{2,2,T}$. Moreover, to provide a complementary view, we define a standard uniform mean square error $\epsilon_{L^2,m}$ of the estimation method $m = 1, 2$
($m = 1$ for the one-step ratio method, $m = 2$ for the two-step ratio method). For each covariate $Z \in \{X_0, X_1, Y\}$, we compute the 1% and 99% empirical quantiles $q^Z_{0.01}$ and $q^Z_{0.99}$ on the (full) data and define a 1-dimensional regular grid of size $G + 1$:
\[ \mathcal{G}^Z = \Big\{ q^Z_{0.01} + g\,\frac{q^Z_{0.99} - q^Z_{0.01}}{G} : g = 0, \dots, G \Big\}. \tag{4.18} \]
The uniform $L^2$-type error is straightforwardly defined on the 3-dimensional grid $\mathcal{G} = \mathcal{G}^{X_0}\times\mathcal{G}^{X_1}\times\mathcal{G}^Y$ as
\[ \epsilon_{L^2,m} = \sum_{i\in I}\sum_{k_i\in K_i} \sqrt{ \frac{1}{\#\mathcal{G}} \sum_{(x,y)\in\mathcal{G}} \big( \hat{p}^{i,k_i}_m(x, y) - p^{i,k_i}(x, y) \big)^2 }. \tag{4.19} \]
This error is uniform in the sense that it does not take into account the distribution of the covariates. Similarly, an $L^\infty$-type error on the regular grid $\mathcal{G}$ is defined as
\[ \epsilon_{L^\infty,m} = \max_{i\in I}\max_{k_i\in K_i}\max_{(x,y)\in\mathcal{G}} \big| \hat{p}^{i,k_i}_m(x, y) - p^{i,k_i}(x, y) \big|. \tag{4.20} \]
Figure 6 plots these three measures of estimation error as functions of the simulation horizon for the one-step and the two-step estimation methods. For each horizon $T$, we simulate 20 samples and run both estimation methods on each sample. We then compute the mean $L^2$-errors, mean $L^\infty$-errors and mean empirical risk function $\mathbf{R}_T$ across the 20 estimations. Both methods exhibit a similar order of convergence with respect to the length of the sample, which is close to $-1/3$ for the $L^2$ and $L^\infty$ errors and $-2/3$ in the case of the risk function $\mathbf{R}_T$. The superiority of the two-step estimation method, which takes into account the multiplicative structure of the model, is clear.

4.1.6 Robustness with respect to shapes of the neural networks

The results of the previous sections have been obtained with the parameters $n^L_1 = n^L_{2,1} = n^L_{2,2} = 8$ and $n^N_1 = n^N_{2,1} = n^N_{2,2} = 64$. We now run some tests to illustrate the robustness of the estimation with respect to the architecture of the neural networks used. For each number of inner layers $n^L \in \{1, 2, 4, 6, 8, 10, 12, 16, 20\}$ and each number of neurons per layer $n^N \in \{4, 8, 16, 32, 64, 128, 256, 512\}$, we run 20 simulations with horizon $T = 32{,}000$ and estimations using both estimation methods. In the two-step estimation method, all networks use the same parameters, i.e.
$n^L_{2,1} = n^L_{2,2} = n^L$ and $n^N_{2,1} = n^N_{2,2} = n^N$. Figures 8, 9 and 10 in the Appendix provide the robustness results with respect to the shapes of the neural networks as heatmaps of the three error measures defined in Section 4.1.5. It appears clearly that the estimation methods are quite robust with respect to the shapes of the neural networks used for the ratio estimation, and that in this simulation study the values $n^L_1 = n^L_{2,1} = n^L_{2,2} = 8$ and $n^N_1 = n^N_{2,1} = n^N_{2,2} = 64$ used in Section 4.1.4 provide good results. For the one-step estimation results, shallow but large networks might be slightly preferable to the chosen architecture, but not in a way sufficient to change our analysis. Indeed, it appears clearly that modifying the architecture and/or increasing the number of parameters in the one-step estimation is not sufficient to improve the estimation to the level of the two-step estimation, stressing the importance of taking advantage of the multiplicative structure in the estimation.

4.1.7 Parsimony and computational time

We end this simulation study with a few comments on the parsimony and the computational cost of the estimation methods. If we set
$n^L_1 = n^L_{2,1} = n^L_{2,2}$ and $n^N_1 = n^N_{2,1} = n^N_{2,2}$ as we did above, then the two-step estimation has a much larger number of parameters, since we use $1 + \#I = 5$ networks very close in shape to the single one used for the one-step estimation method. In the case $n^L_1 = n^L_{2,1} = n^L_{2,2} = 8$ and $n^N_1 = n^N_{2,1} = n^N_{2,2} = 64$, this represents 33,991 parameters for the one-step estimation method and 167,559 parameters for the two-step estimation method. However, the networks of the two-step estimation are trained on smaller subsamples and convergence is attained quickly, so that in our example the total estimation time for the two-step estimation method is only approximately twice the time used for the one-step estimation. Moreover, since the multiple networks used in the two-step estimation can in fact be trained in parallel, the two-step estimation can in fact be faster than the one-step estimation.

4.2 An application to high-frequency trades and LOB data

In this section we use limit order book data of the stock Total Energies SA (ISIN: FR0000120171) traded on Euronext Paris. Our dataset covers 22 trading days, from January 2nd, 2017 to January 31st, 2017. For each trading day, the dataset lists all market and marketable orders (all referred to as market orders hereafter) submitted to the exchange between 9:05 and 17:25 local time, i.e. excluding a few minutes after the opening auction and before the closing auction. For each submission, the dataset lists the timestamp of the order with microsecond precision, as well as the limit order book (LOB) data at the first level, namely best bid and ask prices and quantities. In the following, for an order entering the system at time $t$, $a(t-)$ (resp. $b(t-)$) is the ask price (resp. bid price) and $q^A(t-)$ (resp. $q^B(t-)$) is the quantity available at the best ask (resp. bid) queue of the limit order book just before $t$. If a market order triggered multiple transactions, then only one market order is in the dataset.
The resulting number of market orders in the sample is greater than 1,750,000. Let $N^0$ denote the counting process of market orders submitted on the bid side (sell market orders) and $N^1$ the counting process of market orders submitted on the ask side (buy market orders). Each order is marked 0 if it does not change the mid-price, and marked 1 if it changes the mid-price (which is equivalent to saying that its execution depletes the best quote, or that the size of the order is greater than the size of the best quote). The dataset can thus be modeled by a point process $\big((N^{i,k_i}_t)_{t\ge 0}\big)_{i=0,1,\ k_i=0,1}$, which can either be seen as a 4-dimensional point process or as a 2-dimensional point process with marks in $\{0, 1\}$.

The Level-I order book data can be used to compute significant covariates in a high-frequency finance context. Let $X^0_{t-} := i(t-) := \frac{q^B(t-) - q^A(t-)}{q^B(t-) + q^A(t-)}$ be the imbalance measured just before the submission of an order at time $t$. Imbalance is a well-known indicator of the short-term behaviour of the market: an imbalance close to $1$ (resp. $-1$) indicates a positive (resp. negative)
pressure on the price. Let $X^1_{t-}$ be the sign of the last trade, i.e. $X^1_{t-} = -1$ if the last transaction occurred on the bid side of the limit order book, and $X^1_{t-} = 1$ if it occurred on the ask side. It is well known in high-frequency finance that the series of trade signs has long memory and is thus informative in our context. Finally, let $X^2_{t-} := s(t-) := a(t-) - b(t-)$ be the bid-ask spread measured just before the submission of a market order at time $t$. When the spread is greater than 1 tick, a trader can gain priority by placing limit orders inside the bid-ask spread and thus obtain faster execution without using market orders. The spread is thus informative in an intensity model for the point process $\big((N^{i,k_i}_t)_{t\ge 0}\big)_{i=0,1,\ k_i=0,1}$. In the following, the spread is expressed in number of ticks and takes values in $\{1, 2, 3\}$ (in the rare cases (2% of the dataset) where the spread is greater than 3 ticks, we set it equal to 3 ticks).

We can write two intensity models for the submission of market orders in a limit order book. The first intensity model is simply written
\[ \lambda^{i,k_i}(t) = \lambda_0(t)\, \lambda^{i,k_i}(X^0_t, X^1_t, X^2_t), \tag{4.21} \]
with $i = 0, 1$ (bid side or ask side), $k_i = 0, 1$ (not price-changing or price-changing), and the càglàd processes $X^j$, $j = 0, 1, 2$, defined above. The second intensity model for the submission is written as the marked ratio model
\[ \lambda^{i,k_i}(t) = \lambda_0(t)\, \lambda^i(X^0_t, X^1_t, X^2_t)\, p_i^{k_i}(X^0_t, X^1_t, X^2_t). \tag{4.22} \]
The first model can be estimated with the one-step estimation method of Section 4.1.2. The second model can be estimated with the two-step estimation method of Section 4.1.3. In both cases the neural networks are defined with parameters $n^L_1 = n^L_{2,1} = n^L_{2,2} = 8$ and $n^N_1 = n^N_{2,1} = n^N_{2,2} = 64$.
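The three covariates of this section are simple transformations of the Level-I data; a minimal sketch (argument names and the toy values are illustrative, not from the paper's codebase):

```python
import numpy as np

def lob_covariates(qB, qA, a, b, last_sign, tick=1.0):
    """Covariates just before each order submission, as in Section 4.2:
    X0 = imbalance (qB - qA)/(qB + qA), in [-1, 1];
    X1 = sign of the last trade (-1: bid side, +1: ask side);
    X2 = bid-ask spread in number of ticks, capped at 3.
    All arguments are NumPy arrays indexed by order; tick is the tick size."""
    x0 = (qB - qA) / (qB + qA)
    x1 = last_sign
    x2 = np.minimum(np.round((a - b) / tick), 3)
    return x0, x1, x2
```

For instance, with best-quote quantities 30/10, a one-tick-size of 0.01 and a spread of 0.02, the covariates are $(0.5, \pm 1, 2)$; a 5-tick spread is capped to 3.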
Figure 7: Sign and price-changing character of trades — Joint probabilities $p^{i,k}(x_0, x_1, x_2)$. One-step (above) and two-step (below) estimation method. From left to right, column-wise, $(i, k) = (0,0), (0,1), (1,0)$ and $(1,1)$. Empirical values with triangles and dotted lines. Fitted values as plain lines.

Recall that $p^{i,k}(x_0, x_1, x_2)$ is the probability
https://arxiv.org/abs/2504.15944v1
to observe a market order on side i with price-changing character k when the LOB has imbalance x0, spread x2 and the last traded order was of sign x1. Figure 7 plots the fitting results. The first row plots the fitted probabilities p̂^{i,k_i}_1(x0, x1, x2) (one-step estimation). From left to right, the four columns plot the probabilities in the cases (i, k_i) = (0,0), (0,1), (1,0) and (1,1). On each plot, we have six curves corresponding to the cases (x1, x2) ∈ {−1,1} × {1,2,3}. The imbalance x0 is the abscissa of each plot. The second row provides the same plots for p̂^{i,k_i}_2(x0, x1, x2), i.e. for the two-step estimation. It appears that both estimation methods give excellent fitting results. Neither method provides a strikingly better fit than the other. The model captures well-known characteristics of order flows in market microstructure: when the spread is equal to one tick, given that the previous order was a sell market order, the probability of observing another sell market order is high, and the lower the imbalance, the higher the probability that the order will deplete the best quote and change the price. When the spread increases, the curves flatten as the dependence on the imbalance weakens. Observations are symmetric for buy market orders. A simple parametric form of the functionals λ^{i,k}, λ^i and p^{k_i}_i would not be able to reproduce this variety of shapes (exponential forms were tested but are not shown here because of poor results). All in all, given these fitting results and the analysis of the simulation study, the multiplicative structure of Equation (4.22) with a deep learning architecture seems well suited for an intensity model for market orders depending on the observed spread, imbalance and last trade sign.

5 Some basic estimates and proof of Theorem 2.4

5.1 Preparations

First, we replace R_T by its empirical version.
Define the empirical error R_T by

R_T = T^{-1} ( -∫_0^T (â_T(X_t) - a*(X_t)) · dN_t + ∫_0^T (b̂_T(X_t) - b*(X_t)) dt ),

and the expected empirical error of (â_T, b̂_T) by R^e_T = E[R_T]. (5.1)

The compensated N is denoted by Ñ, that is, dÑ = dN - λ*(X_t) dt. For T, T_1 ∈ T, we have

|R_T - R^e_T| ≤ Φ^{(5.3)}_T + Φ^{(5.4)}_T, (5.2)

where

Φ^{(5.3)}_T = T^{-1} E[ | -∫_0^T (â_T(X_t) - a*(X_t)) · dÑ_t | ] (5.3)

and

Φ^{(5.4)}_T = | E[ T_1^{-1} ∫_0^{T_1} Û_T(X_t) dt ] - T^{-1} E[ -∫_0^T (â_T(X_t) - a*(X_t)) · λ*(X_t) dt + ∫_0^T (b̂_T(X_t) - b*(X_t)) dt ] | = | T^{-1} E[ ∫_0^T (Ū_T(X_t) - Û_T(X_t)) dt ] |. (5.4)

For any δ > 0, we consider a δ-net {(a, b); d((a, b), (a^k, b^k)) < δ}_{k∈K_T} of F_T such that each ball has the radius δ in d. We may assume that #K_T < ∞; otherwise, the targeted inequality (2.4) is trivial. So we let K_T = {1, ..., N_T}. As already mentioned, N_T depends on δ as well as T. We denote by (a^k̂, b^k̂) the center of a δ-ball for which the distance to (â_T, b̂_T) is minimum among all the centers (a^k, b^k) ∈ A (k = 1, ..., N_T), where k̂ is a random variable that indicates the index of one of the nearest centers. The gap between (a^k̂, b^k̂) and (a*, b*) is evaluated with the function U^k_T(x) = -λ*(x) ·
(a^k(x) - a*(x)) + (b^k(x) - b*(x)).

5.2 Estimate of Φ^{(5.4)}_T

Define r^k_T by

r^k_T = ( T^{-1}(log T)^2 log N_T )^{1/2} ∨ ( h^{-1} ∫_0^h E[ -λ*(X_t) · (a^k(X_t) - a*(X_t)) + (b^k(X_t) - b*(X_t)) ] dt )^{1/2} (5.5)

for k ∈ {1, ..., N_T}. The random number r^k̂_T is r^k_T with k̂ plugged into k. We have

Φ^{(5.4)}_T = | T^{-1} E[ ∫_0^T (Ū_T(X_t) - Û_T(X_t)) dt ] | ≤ | T^{-1} E[ ∫_0^T (Ū^k̂_T(X_t) - U^k̂_T(X_t)) dt ] | + δ. (5.6)

The compatibility condition (2.2) implies

[ ( h^{-1} ∫_0^h E_X[ -λ*(X_t) · (a^k(X_t) - a*(X_t)) + (b^k(X_t) - b*(X_t)) ] dt )^{1/2} ]_{k=k̂}
≤ C* [ ( h^{-1} ∫_0^h E_X[ |a^k(X_t) - a*(X_t)|^2 + |b^k(X_t) - b*(X_t)|^2 ] dt )^{1/2} ]_{k=k̂}
≤ C* ( h^{-1} ∫_0^h E_X[ |â_T(X_t) - a*(X_t)|^2 + |b̂_T(X_t) - b*(X_t)|^2 ] dt )^{1/2}
  + C* [ ( h^{-1} ∫_0^h E_X[ |â_T(X_t) - a^k(X_t)|^2 + |b̂_T(X_t) - b^k(X_t)|^2 ] dt )^{1/2} ]_{k=k̂}   (by the triangle inequality)
≤ C* ( h^{-1} ∫_0^h E_X[ |â_T(X_t) - a*(X_t)|^2 + |b̂_T(X_t) - b*(X_t)|^2 ] dt )^{1/2} + δ
≤ C*^2 ( h^{-1} ∫_0^h E_X[ -λ*(X_t) · (â_T(X_t) - a*(X_t)) + (b̂_T(X_t) - b*(X_t)) ] dt )^{1/2} + δ. (5.7)

Then,

r^k̂_T ≤ ( T^{-1}(log T)^2 log N_T )^{1/2} + [ ( h^{-1} ∫_0^h E_X[ -λ*(X_t) · (a^k(X_t) - a*(X_t)) + (b^k(X_t) - b*(X_t)) ] dt )^{1/2} ]_{k=k̂}
≤_{(5.7)} ( T^{-1}(log T)^2 log N_T )^{1/2} + C*^2 ( h^{-1} ∫_0^h E_X[ -λ*(X_t) · (â_T(X_t) - a*(X_t)) + (b̂_T(X_t) - b*(X_t)) ] dt )^{1/2} + δ
= ( T^{-1}(log T)^2 log N_T )^{1/2} + C*^2 Ê_T^{1/2} + δ, (5.8)

where

Ê_T = h^{-1} ∫_0^h E_X[ Û_T(X_t) ] dt.

For simplicity of the presentation, we often write inequalities like ≤_{(∗.∗)}, indicating use of the item provided by (∗.∗).

Let U^k(x) = -λ*(x) · (a^k(x) - a*(x)) + (b^k(x) - b*(x)) and

L_T = (r^k̂_T)^{-1} F^{-1} h^{-1} ∫_0^T ( Ū^k̂(X_t) - U^k̂(X_t) ) dt. (5.9)

Then, by (5.6) and (5.8),

Φ^{(5.4)}_T ≤ E[ r^k̂_T F T^{-1} × (r^k̂_T F h)^{-1} | ∫_0^T ( Ū^k̂_T(X_t) - U^k̂_T(X_t) ) dt | ] + δ
= E[ r^k̂_T F T^{-1} × |L_T| ] + δ
≤ C*^2 F T^{-1} E[ Ê_T^{1/2} |L_T| ] + F T^{-1} E[ ( (T^{-1}(log T)^2 log N_T)^{1/2} + δ ) |L_T| ] + δ
≤ C*^2 F T^{-1} R_T^{1/2} E[|L_T|^2]^{1/2} + F T^{-1} ( (T^{-1}(log T)^2 log N_T)^{1/2} + δ ) E[|L_T|] + δ. (5.10)

5.3 A large deviation estimate for an additive functional

Recall that the covariate process X takes values in a measurable set 𝒳 in R^{d_X}. It is assumed that X is periodically stationary. For a bounded measurable function U : 𝒳 → R_+ = [0,∞), let

Z^{(T)}_ℓ = (r_T)^{-1} h^{-1} ∫_{(ℓ-1)h}^{ℓh} ( U(X_t) - E[U(X_t)] ) dt   (ℓ ∈ N, T ∈ T) (5.11)

for

r_T = ( T^{-1}(log T)^2 log N_T )^{1/2} ∨ E[ U(X_{[0,h]}) ]^{1/2},   U(X_{[0,h]}) = h^{-1} ∫_0^h U(X_t) dt. (5.12)

From (5.12), in particular,

r_T ≥ ( T^{-1}(log T)^2 log N_T )^{1/2},   equivalently,   r_T^{-1} ≤ T^{1/2}(log T)^{-1}(log N_T)^{-1/2} (5.13)

and

r_T^2 ≥ E[ U(X_{[0,h]}) ].
(5.14)

The following lemma gives a large deviation inequality for the sum Σ_{ℓ=1}^T Z^{(T)}_ℓ.

Lemma 5.1. Let ϵ and z be positive numbers. Suppose that

T ≥ 3 ∨ log N_T, (5.15)
log N_T ≥ 4 ∥U∥_∞^2, (5.16)
x ≥ z T^{1/2} (log N_T)^{1/2}. (5.17)

Then, for some positive constant C_1 depending only on γ, it holds that

P[ | Σ_{ℓ=1}^T Z^{(T)}_ℓ | ≥ x ] ≤ exp( - C_1 x^{(1-ϵ)/(1+ϵ)} / K(γ, ∥U∥_∞, z, ϵ, T) )   (x > 0, T ∈ T), (5.18)

where

K(γ, ∥U∥_∞, z, ϵ, T) = (1 + ∥U∥_∞)^2 (z^{-1} + 1) z^{-2ϵ/(1+ϵ)} ( V(γ, ϵ) + log T ) T^{1/2} (log N_T)^{-1/2 - 2ϵ/(1+ϵ)} (5.19)

for a constant V(γ, ϵ) given by

V(γ, ϵ) = 4 ( 1 + 4 Σ_{j∈N} γ^{-1/(1+ϵ^{-1})} exp( -γ j / (1+ϵ^{-1}) ) ).

Proof. We may assume that ∥U∥_∞ > 0; otherwise, the inequality (5.18) is trivial since Z^{(T)}_ℓ = 0. Then N_T > 1 by (5.16). From Theorem 2 of Merlevède et al. [13], we have

P[ | Σ_{ℓ=1}^T Z^{(T)}_ℓ | ≥ x ] ≤ exp(-I_T(x)) (5.20)

for all T ≥ 3, where

I_T(x) = C_1 (r_T)^2 x^2 / ( v^2 T + 4 ∥U∥_∞^2 + 2 ∥U∥_∞ (log T)^2 r_T x ). (5.21)

The constant C_1 depends only on γ, and the constant v is given by

v^2 = sup_{ℓ∈N} ( Var[Ẑ^{(T)}_ℓ] + 2 Σ_{j>ℓ} | Cov[Ẑ^{(T)}_ℓ, Ẑ^{(T)}_j] | ) (5.22)

for

Ẑ^{(T)}_ℓ = h^{-1} ∫_{(ℓ-1)h}^{ℓh} ( U(X_t) - E[U(X_t)] ) dt.

The covariance inequality (Rio [21], p. 6) gives

v^2 ≤ V(γ, ϵ) E[ U(X_{[0,h]}) ]^{1/(1+ϵ)} ∥U∥_∞^{(1+2ϵ)/(1+ϵ)}. (5.23)

Remark that 1/(2+2ϵ) + 1/(2+2ϵ) + 1/(1+ϵ^{-1}) = 1 and that

∥U(X_{[0,h]})∥_{2+2ϵ} ≤ E[ U^{2+2ϵ}(X_{[0,h]}) ]^{1/(2+2ϵ)} ≤ E[ U(X_{[0,h]}) ]^{1/(2+2ϵ)} ∥U(X_{[0,h]})∥_∞^{(1+2ϵ)/(2+2ϵ)},

additionally,

∥E[U(X_{[0,h]})]∥_{2+2ϵ} = E[ U(X_{[0,h]}) ] ≤ E[ U(X_{[0,h]}) ]^{1/(2+2ϵ)} ∥U(X_{[0,h]})∥_∞^{(1+2ϵ)/(2+2ϵ)}.

For the constant V(γ, ϵ), we have V(γ,
ϵ)≤4 +16γ−1 1+ϵ−1 1−exp −γ 1+ϵ−1. (5.24) 23 We know r−1 T≤(5.13)T1/2(logT)−1(logNT)−1/2=T1/2(logNT)1/2(logT)−1(logNT)−1 ≤(5.17)z−1(logT)−1(logNT)−1x (5.25) and hence (rT)−2ϵ 1+ϵ≤ z−2ϵ 1+ϵx2ϵ 1+ϵ(logT)−2ϵ 1+ϵ(logNT)−2ϵ 1+ϵ ≤(5.15)z−2ϵ 1+ϵx2ϵ 1+ϵ(logNT)−2ϵ 1+ϵ. (5.26) We have (rT)2T≥(5.13)(logT)2logNT≥(5.15) (5.16)4∥U∥2 ∞, (5.27) (rT)2T≤(5.15)(1 +∥U∥2 ∞)(log T)(rT)2T1/2·T1/2 ≤(5.17)(1 +∥U∥2 ∞)(log T)(rT)2T1/2z−1(logNT)−1/2x (5.28) and 2∥U∥∞(logT)2rTx = 2 ∥U∥∞(logT)2(rT)2(rT)−1x ≤(5.13)2∥U∥∞(logT)(rT)2T1/2(logNT)−1/2x. (5.29) From (5.27), (5.28) and (5.29), we obtain 4∥U∥2 ∞+ 2∥U∥∞(logT)2rTx≤(1 +∥U∥∞)2(logT)(rT)2T1/2(z−1+ 1)(log NT)−1/2x.(5.30) Since x(logNT)−1≥(5.17)zT1/2(logNT)−1/2≥(5.15)z, we have 4∥U∥2 ∞+ 2∥U∥∞(logT)2rTx ≤(5.30)(1 +∥U∥∞)2(logT)(rT)2T1/2(z−1+ 1)(log NT)−1/2x·x2ϵ 1+ϵ(logNT)−2ϵ 1+ϵz−2ϵ 1+ϵ = (1 + ∥U∥∞)2(z−1+ 1)z−2ϵ 1+ϵ(logT)(rT)2T1/2(logNT)−1/2−2ϵ 1+ϵx1+2ϵ 1+ϵ. (5.31) Moreover, T1/2(logNT)−1/2−2ϵ 1+ϵx1+2ϵ 1+ϵ = T1/2(logNT)−1/2x·x2ϵ 1+ϵ(logNT)−2ϵ 1+ϵ ≥(5.17)zT·x2ϵ 1+ϵ(logNT)−2ϵ 1+ϵ ≥(5.26)z1+2ϵ 1+ϵT(rT)−2ϵ 1+ϵ. (5.32) Then v2T≤(5.23)V(γ, ϵ) E[U(X[0,h])]1 1+ϵ∥U∥1+2ϵ 1+ϵ∞T ≤(5.14)V(γ, ϵ)(rT)2∥U∥1+2ϵ 1+ϵ∞(rT)−2ϵ 1+ϵT ≤(5.32)V(γ, ϵ)(1 +∥U∥2 ∞)z−1−2ϵ 1+ϵ(rT)2T1/2(logNT)−1/2−2ϵ 1+ϵx1+2ϵ 1+ϵ. (5.33) 24 From (5.31) and (5.33), we obtain v2T+ 4∥U∥2 ∞+ 2∥U∥∞(logT)2rTx ≤V(γ, ϵ)(1 +∥U∥2 ∞)z−1−2ϵ 1+ϵ(rT)2T1/2(logNT)−1/2−2ϵ 1+ϵx1+2ϵ 1+ϵ +(1 + ∥U∥∞)2(z−1+ 1)z−2ϵ 1+ϵ(logT)(rT)2T1/2(logNT)−1/2−2ϵ 1+ϵx1+2ϵ 1+ϵ ≤(1 +∥U∥∞)2(z−1+ 1)z−2ϵ 1+ϵ V(γ, ϵ) + log T (rT)2T1/2(logNT)−1/2−2ϵ 1+ϵx1+2ϵ 1+ϵ =K(γ,∥U∥∞, z, ϵ, T )(rT)2x1+2ϵ 1+ϵ. (5.34) Now, from (5.21) and (5.34), we obtain IT(x)≥C1(rT)2x2 K(γ,∥U∥∞, z, ϵ, T )(rT)2x1+2ϵ 1+ϵ=C1x1−ϵ 1+ϵ K(γ,∥U∥∞, z, ϵ, T ). (5.35) This completes the proof. Lemma 5.2. LetC2,C3,C4andxbe positive numbers with C2, C4≥1. Suppose that T≥3∨logNT, (5.36) logNT≥4∥U∥2 ∞ (5.37) C2TC3/2≥x≥C4(logT)T1/2(logNT)1/2. 
(5.38)

Then, for some positive constant C_5 depending on γ, it holds that

P[ | Σ_{ℓ=1}^{T/h} Z^{(T)}_ℓ | ≥ x ] ≤ exp( - (C_2)^{-1} e^{-C_3} C_5 x / K_0(∥U∥_∞, T) ), (5.39)

where

K_0(∥U∥_∞, T) = (1 + ∥U∥_∞)^2 (log T) T^{1/2} (log N_T)^{-1/2}. (5.40)

Proof. We may assume that ∥U∥_∞ > 0. Let ϵ = 1/log T and z = C_4 log T. Then (5.24) gives the estimate

V(γ, ϵ) ≤ C_6 log T (5.41)

for some constant C_6 only depending on γ, and it follows from (5.19) and (5.36) that

K(γ, ∥U∥_∞, z, ϵ, T) ≤ C_7 (1 + ∥U∥_∞)^2 (log T) T^{1/2} (log N_T)^{-1/2} (5.42)

for some constant C_7 depending on γ. We remark that

(log N_T)^{-2ϵ/(1+ϵ)} ≤ (1/log 2)^{2ϵ/(1+ϵ)} ≤ (1/log 2)^{2/(log 3 + 1)}

since N_T ≥ 2 and hence log N_T ≥ log 2 > 0. Moreover,

x^{-2ϵ/(1+ϵ)} ≥ ( C_2 T^{C_3/2} )^{-2ϵ/(1+ϵ)} ≥ exp( - 2 log C_2 / (1 + log 3) - C_3 ) ≥ exp( -(log C_2 + C_3) ) = C_2^{-1} e^{-C_3}.

Now Lemma 5.1 provides the inequality (5.39).

5.4 Estimation of E[|L_T|^2]

We write ∥λ*∥_∞ for ∥ |λ*| ∥_∞. Define L^k_T by

L^k_T = (r^k_T)^{-1} F^{-1} h^{-1} ∫_0^T ( Ū^k(X_t) - U^k(X_t) ) dt.

Lemma 5.3. Suppose that T ≥ 3 ∨ log N_T and N_T ≥ 2. Then, there exists a constant C_8 such that

E[ |L_T|^2 ] ≤ E[ (max_k |L^k_T|)^2 ] ≤ C_8 T (log T)^2 log N_T

for all T ∈ T.

Proof. Since -L^k_T is the sum of the integrals L̃^k_T := (r^k_T)^{-1} F^{-1} h^{-1} ∫_0^T ( U^k(X_t) - E[U^k(X_t)] ) dt and -(r^k_T)^{-1} F^{-1} h^{-1} ∫_0^T ( Ū^k(X_t) - E[U^k(X_t)] ) dt, it is sufficient to estimate E[ (max_k |L̃^k_T|)^2 ] only. Let ζ = 2^{-1}(log 2)^{1/2}. Set U = ζ (2F(∥λ*∥_∞ + 1))^{-1} U^k and r_T = r^k_T; then

| Σ_{ℓ=1}^{T/h} Z^{(T)}_ℓ | ≤ 2 T^{3/2} (log T)^{-1} (log N_T)^{-1/2} ∥U∥_∞ ≤ T^{3/2}

for Z^{(T)}_ℓ of (5.11), since ∥U∥_∞ ≤ ζ, T ≥ 3 and N_T ≥ 2. Let C_9 be an arbitrary positive number. Then

E[ (max_k |L̃^k_T|)^2 ] = ∫_0^{2(∥λ*∥_∞+1)T^{3/2}} 2x P[ max_k L̃^k_T > x ] dx
≤ (C_9)^2 T (log T)^2 log N_T + N_T ∫_{C_9(log T)T^{1/2}(log N_T)^{1/2}}^∞ 2x exp( - (C_2)^{-1} e^{-C_3} C_5 2^{-1}(∥λ*∥_∞ + 1)^{-1} x / K_0(∥U∥_∞, T) ) dx

by Lemma 5.2. Let C'_5 = (C_5/2)(∥λ*∥_∞ + 1)^{-1}. Using Lemma 5.5 below, we obtain

N_T ∫_{C_9(log T)T^{1/2}(log N_T)^{1/2}}^∞ 2x exp( - (C_2)^{-1} e^{-C_3} C'_5 x / K_0(∥U∥_∞, T) ) dx
≤ C_10 (1 + ∥U∥_∞)^4 ( (C_2)^{-1} e^{-C_3} C'_5 )^{-2} (log T)^2 T (log N_T)^{-1} N_T exp( - (C_2)^{-1} e^{-C_3} C'_5 C_9 log N_T / ( 2(1 + ∥U∥_∞)^2 ) )
≤ C_11 (log T)^2 T

with some constants C_10 and C_11, provided that C_9 is chosen so large that (C_2)^{-1} e^{-C_3} C'_5 C_9 / 8 > 1. This completes the proof.

5.5 Estimate of Φ^{(5.3)}_T

We will estimate Φ^{(5.3)}_T. The constant r^k_T is defined by (5.5).
Since T−1EZT 0 baT(Xt)−ak(Xt) ·deNt ≤E T−1X i∈I(Ni T+∥(λi)∗∥∞T) baT−ak ∞
≤2dN∥λ∗∥∞ baT−ak ∞ ≤d (baT,bbT),(ak, bk) ≤δ, we obtain Φ(5.3) T ≤ T−1EZT 0 ak(Xt)−a∗(Xt) ·deNt +δ = E rk TFT−1 × rk TFh−1ZT 0 ak(Xt)−a∗(Xt) ·deNt +δ ≤(5.8)E FT−1 T−1(logT)2logNT1/2+C∗2bE1/2 T+δ |MT| +δ ≤ C∗2FT−1R1/2 T E[|MT|2]1/2+FT−1 T−1(logT)2logNT1/2+δ E[|MT|] +δ, (5.43) where MT= (rk T)−1F−1h−1ZT 0 ak(Xt)−a∗(Xt) ·deNt. 5.6 Estimation of E[|MT|2] Let Mk T= (rk T)−1F−1h−1ZT 0 ak(Xt)−a∗(Xt) ·deNt. The terminal value of the predictable quadratic variation of the local martingale associated withMk Tis Vk(T) = ( rk T)−2F−2h−2 Z· 0 ak(Xt)−a∗(Xt) ·deNt T = (rk T)−2F−2h−2ZT 0X i∈I (ak(Xt)−a∗(Xt))i 2 λ∗(Xt)idt since there are no common jumps. Then Vk(T)≤(rk T)−2F−2h−2dN∥λ∗∥∞ZT 0|ak(Xt)−a∗(Xt)|2dt <∼(rk T)−2F−2h−2ZT 0 −λ∗(Xt)· {ak(Xt)−a∗(Xt)}+{bk(Xt)−b∗(Xt)} dt 27 due to the compatibility (2.1). Therefore, E Vk(T) ≤2−1C12F−2h−1T, (5.44) where C12is a positive constant depending on dN,∥λ∗∥∞andC∗, and independent of k. It is possible to enlarge C12as we like. Let eVk(T) = rk Th(∥λ∗∥∞)−1 Vk(T)−E[Vk(T)] . Then, by using Z(T) ℓof Section 5.3, the functional eVk(T) can be represented as eVk(T) = 4 ζ−1TX ℓ=1Z(T) ℓ with rT=rk TandU(x) = 4−1F−2(∥λ∗∥∞)−1ζP i∈I (ak(x)−a∗(x))i 2 λ∗(x)i. We suppose that T≥3∨logNTandNT≥2. Let xT= 2−1C12(∥λ∗∥∞)−1F−2rk TT, then xT≥2−1C12(∥λ∗∥∞)−1F−2(logT)(logNT)1/2T1/2. Choose the positive numbers C2andC3(after setting C12), so as C2TC3/2≥xTlogTwhenever T≥3∨logNTandNT≥2. Moreover, take a sufficiently large C12such that 2−1C12(∥λ∗∥∞)−1F−2≥1 =:C4. (5.45) Let Ω( k, T) = Vk(T)≤C12F−2h−1(logT)T forz≥1. Then Lemma 5.2 gives P Ω(k, T)c ≤(5.44)P Vk(T)−E[Vk(T)]≥2−1C12F−2h−1(logT)T ≤ P eVk(T) ≥(logT)xT =P TX ℓ=1Z(T) ℓ ≥4−1ζ(logT)xT ≤ exp −C13C12F−2(∥λ∗∥∞)−1(logT) logNT 16 (5.46) forC13= 2−1(C2)−1e−C3C5ζdepending on γ. Due to e.g. Inequality 1 of Shorack and Wellner [24], p.899, we obtain P Mk T ≥x,Ω(k, T) ≤2 exp −x2 2C12F−2h−1(logT)Tψ2F2x C12rk T(logT)T (x >0) (5.47) for any k, where ψ(y) = 2 y−2 (y+ 1){log(y+ 1)−1}+ 1 . 
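The exponential inequality (5.47) involves the function ψ just defined, and the argument below depends on the infimum inf_{y>0} (y+1)ψ(y) being positive. This can be sanity-checked numerically; the grid and tolerances below are arbitrary choices for the check, not part of the proof.

```python
import numpy as np

def psi(y):
    # psi(y) = 2 y^{-2} [ (y + 1)(log(y + 1) - 1) + 1 ], as in (5.47)
    return 2.0 / y**2 * ((y + 1.0) * (np.log(y + 1.0) - 1.0) + 1.0)

# Evaluate (y + 1) * psi(y) on a dense grid covering small and large y.
y = np.linspace(1e-3, 200.0, 2_000_001)
vals = (y + 1.0) * psi(y)

assert np.all(psi(y) > 0.0)   # psi is positive on the grid
assert vals.min() > 0.99      # the infimum stays bounded away from zero
```

Numerically the infimum appears to equal 1, approached as y → 0 (a Taylor expansion gives ψ(y) → 1 there), which is consistent with the positivity of the constant C_17 claimed below.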
28 The second-order moment of MT=Mk Tis estimated as follows: E MT 2 =Z∞ 02xP MT ≥x dx ≤ C9T1/2(logT)(logNT)1/22 +NTsup kZ∞ C9T1/2(logT)(logNT)1/22xP Mk T ≥x dx ≤ C9T1/2(logT)(logNT)1/22 +NTsup kZ∞ C9T1/2(logT)(logNT)1/22xP Mk T ≥x,Ω(k, T) dx +NTsup kE Mk T 21Ω(k,T)c . (5.48) Apply the Burkholder-Davis-Gundy inequality to obtain E |Mk T|4<∼(rk T)−4F−4h−4X i∈IEZT 0 ak(Xt)−a∗(Xt)i 2dNi t2 ≤2(rk T)−4F−4h−4X i∈IEZT 0 ak(Xt)−a∗(Xt)i 2deNi t2 +X i∈IEZT 0 ak(Xt)−a∗(Xt)i 2(λ∗(Xt))idt2 ≤32(rk T)−4h−4dN(1 +∥λ∗∥∞)2(T+T2) ≤64(rk T)−4h−2(1 + h−1)dN(1 +∥λ∗∥∞)2T2 ≤64h−2(1 + h−1)dN(1 +∥λ∗∥∞)2T4. We then have E |Mk T|21Ω(k,T,z )c ≤ E |Mk T|41/2P Ω(k, T, z )c1/2 <∼(5.46)8h−1(1 + h−1)1/2dN(1 +∥λ∗∥∞)T2 ×exp −C13C12F−2(∥λ∗∥∞)−1(logT) logNT 32 (5.49) We set C12=C14F2, and choose a sufficiently large C14so that C14≥max{2,32C13−1}∥λ∗∥∞, additionally to (5.44). Then we obtain NTsup kE Mk T 21Ω(k,T,z )c ≤C15 (5.50) for some constant C15depending on γ,h,dN,C∗and∥λ∗∥∞, by using logT log 3logNT log 2≥logT log 3+logNT log 2−1 29 due to T≥3 and NT≥2. We will show x2 2C12F−2h−1(logT)Tψ2F2x C12(logT)rk TT ≥C16T−1/2(logNT)1/2x (5.51) for all xsatisfying x≥C9(logT)T1/2(logNT)1/2, (5.52) where C16is a constant depending on h,dN,C∗and∥λ∗∥∞, but independent of C9≥1. First, we see C17:= inf y>0(y+ 1)ψ(y)>0. (5.53) When2F2x C12(logT)rk TT≤1, x2 2C12(logT)F−2h−1Tψ2F2x C12(logT)rk TT ≥(5.53)C17x2 4C14h−1TlogT ≥(5.52)C17C9 4C14h−1T−1/2(logNT)1/2x ≥C17 4C14h−1T−1/2(logNT)1/2x. When2F2x C12(logT)rk TT>1, x2 2C12(logT)F−2h−1Tψ2F2x C12(logT)rk TT =xrk T 8h−1× 2·2x C14(logT)rk TT ψ2x C14(logT)rk TT ≥(5.53)C17rk Tx 8h−1 ≥(5.5)C17 8h−1T−1/2(logNT)1/2x. So we obtained (5.51).
We haveZ∞ C9T1/2(logT)(logNT)1/22xP Mk T ≥x,Ω(k, T) dx ≤(5.47)Z∞ C9T1/2(logT)(logNT)1/24xexp −x2 2C12F−2h−1TlogTψ2F2x C12rk TTlogT dx ≤(5.51)Z∞ C9T1/2(logT)(logNT)1/24xe−C16T−1/2(logNT)1/2xdx. (5.54) Applying Lemma 5.5 in the case where q= 1, p= 1 and C=C16T−1/2(logNT)1/2, we obtain the estimateZ∞ C9T1/2(logT)(logNT)1/24xe−C16T−1/2(logNT)1/2xdx <∼T(logNT)−1exp −1 2C16C9(logT)(logNT) <∼TN−1 T (5.55) 30 uniformly in k, if we take a sufficiently large C9. Lemma 5.4. Suppose that T≥3∨logNTandNT≥2. Then, there exists a constant C18such that E MT 2 ≤ C18T1/2(logT)(logNT)1/22. (5.56) The constant C18is depending on γ,h, |λ∗| ∞,dNandC∗, but not on T∈T. Proof. From (5.48), (5.50) and (5.55), it is concluded that (5.56) holds for some constant C18. 5.7 Proof of Theorem 2.4 Now we go back to estimation of RT. We will basically follow the line of the proof of Theorem 1 in Schmidt-Hieber [22] for the i.i.d. case in the nonparametric regression. By choosing a large constant C0, it suffices to show that the inequality (2.4) holds for sufficiently large T, since log NT>0 by the assumption NT≥2, and RT/(1 +F2) is bounded. On the other hand, the condition T≥ξ(logT)2logNTverifies T≥3∨logNTfor large T. Therefore, the assumptions in Lemmas 5.3 and 5.4 are satisfied when Tis large. From (5.2), (5.10) and (5.43), we obtain RT−Re T≤ RT−Re T ≤ Φ(5.3) T + Φ(5.4) T ≤C∗2FT−1R1/2 T E[|LT|2]1/2+ E[|MT|2]1/2 +FT−1 (T−1(logT)2logNT)1/2+δ E[|LT|] +E[|MT|]) + 2 δ. (5.57) Solving the quadratic inequality (5.57) in x=R1/2 Tto obtain1 RT≤2 C∗2FT−1 E[|LT|2]1/2+ E[|MT|2]1/2 2 +2 Re T+FT−1 (T−1(logT)2logNT)1/2+δ (E[|LT|] +E[|MT|]) + 2 δ , (5.58) and hence, for some constant C19depending on h(for the representation in T) and C∗, RT≤2Re T+C19(1 +F2) T−1(logT)2logNT+δ (5.59) ifTis sufficiently large and T≥3∨ {ξ(logT)2logNT}andNT≥2, from Lemmas 5.3 and 5.4. The condition T≥ξ(logT)2logNTis used for showing the boundedness of T−1E[|LT|] and T−1E[|MT|]. 
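The tail-integral bound of Lemma 5.5 below rests on the substitution u = C y^p. As a quick numerical sanity check (with arbitrary test parameters and integrals truncated where the integrand is negligible; this is illustration, not part of the proof), the two sides of the change-of-variables identity can be compared:

```python
import numpy as np

def trap(f, x):
    """Trapezoidal rule on a fixed grid."""
    return 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(x))

p, q, C, B = 2.0, 1.0, 1.0, 1.5   # arbitrary test values with C * B^p >= 1
s = (q + 1.0) / p

# Left side: I(p, q, C, B) = int_B^inf y^q exp(-C y^p) dy
y = np.linspace(B, 40.0, 800_001)
lhs = trap(y**q * np.exp(-C * y**p), y)

# Right side after u = C y^p: p^{-1} C^{-(1+q)/p} int_{C B^p}^inf u^{s-1} e^{-u} du
u = np.linspace(C * B**p, 40.0, 800_001)
rhs = (1.0 / p) * C ** (-(1.0 + q) / p) * trap(u ** (s - 1.0) * np.exp(-u), u)

assert abs(lhs - rhs) < 1e-8   # the two parameterizations of I agree
```

For these parameters both sides equal e^{-2.25}/2 in closed form, so the agreement can also be verified by hand.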
¹ We obtain an estimate taking the form x ≤ 2^{-1}(A + √(A² + 4B)) ≤ A + √B. It gives x² ≤ 2A² + 2B.

By (5.1),

R^e_T = E[R_T] = T^{-1} E[ -∫_0^T (â_T(X_t) - a*(X_t)) · dN_t + ∫_0^T (b̂_T(X_t) - b*(X_t)) dt ]
= T^{-1} E[ Ψ_T(â_T, b̂_T) - Ψ_T(a*, b*) ]
= T^{-1} E[ Ψ_T(â_T, b̂_T) - Ψ_T(a, b) ] + T^{-1} E[ Ψ_T(a, b) - Ψ_T(a*, b*) ]
≤ ∆_T + h^{-1} E[ Ψ_h(a, b) - Ψ_h(a*, b*) ]

for any (a, b) ∈ F_T. Therefore,

R^e_T ≤ ∆_T + inf_{(a,b)∈F_T} h^{-1} E[ Ψ_h(a, b) - Ψ_h(a*, b*) ]. (5.60)

From (5.59) and (5.60), we obtain Theorem 2.4.

Lemma 5.5. For positive numbers p, q, C and B, let

I(p, q, C, B) = ∫_B^∞ y^q e^{-C y^p} dy.

Suppose that

(q + 1)/p ≤ k (5.61)

for some number k. Then there exists a constant c_k depending only on k such that

I(p, q, C, B) ≤ c_k p^{-1} C^{-(1+q)/p} exp( -2^{-1} C B^p )   whenever C B^p ≥ 1. (5.62)

Proof. We give a proof here for the sake of self-containedness. By change of variables,

I(p, q, C, B) = ∫_B^∞ y^q e^{-C y^p} dy = p^{-1} C^{-(1+q)/p} ∫_{CB^p}^∞ u^{p^{-1}(q+1)-1} e^{-u} du.

Suppose that C B^p ≥ 1. Since p^{-1}(q+1) - 1 ≤ k - 1 by (5.61), we have

I(p, q, C, B) ≤ p^{-1} C^{-(1+q)/p} ∫_{CB^p}^∞ u^{k-1} e^{-u} du.

There exists a constant C(k) such that u^{k-1} e^{-u} ≤ C(k) e^{-u/2} for all u ≥ 1. Then

I(p, q, C, B) ≤ C(k) p^{-1} C^{-(1+q)/p} ∫_{CB^p}^∞ e^{-u/2} du = 2 C(k) p^{-1} C^{-(1+q)/p} e^{-CB^p/2}.

This completes the proof.

Acknowledgement

The authors thank Professor Taiji Suzuki for the valuable discussion.

References

[1] Bacry, E., Dayri, K., Muzy, J.F.: Non-parametric kernel estimation for symmetric Hawkes processes. Application to high frequency financial data. The European Physical Journal B 85, 1–12 (2012)

[2] Bacry, E., Delattre, S., Hoffmann, M.,
Muzy, J.F.: Modelling microstructure noise with mutually exciting point processes. Quantitative Finance 13(1), 65–77 (2013) [3] Bowsher, C.G.: Modelling security market events in continuous time: Intensity based, multivariate point process models. Journal of Econometrics 141(2), 876–912 (2007) [4] DeVore, R., Hanin, B., Petrova, G.: Neural network approximation. Acta Numerica 30, 327 – 444 (2021). DOI 10.1017/s0962492921000052. URL http://dx.doi.org/10.1017/ S0962492921000052 [5] Fan, J., Ma, C., Zhong, Y.: A selective overview of deep learning. Statistical Science 36(2) (2021). DOI 10.1214/20-sts783. URL http://dx.doi.org/10.1214/20-STS783 [6] Farrell, M.H., Liang, T., Misra, S.: Deep neural networks for estimation and inference. Econometrica 89(1), 181 – 213 (2021). DOI 10.3982/ecta16901. URL http://dx.doi. org/10.3982/ECTA16901 [7] Imaizumi, M.: Sup-norm convergence of deep neural network estimator for nonparametric regression by adversarial training (2023). DOI 10.48550/ARXIV.2307.04042. URL https: //arxiv.org/abs/2307.04042 [8] Kim, J., Nakamaki, T., Suzuki, T.: Transformers are minimax optimal nonparametric in-context learners (2024). DOI 10.48550/ARXIV.2408.12186. URL https://arxiv.org/ abs/2408.12186 [9] Kurisu, D., Fukami, R., Koike, Y.: Adaptive deep learning for nonlinear time series models. Bernoulli 31(1) (2025). DOI 10.3150/24-bej1726. URL http://dx.doi.org/10.3150/ 24-BEJ1726 [10] Large, J.: Measuring the resiliency of an electronic limit order book. Journal of Financial Markets 10(1), 1–25 (2007) [11] Lu, X., Abergel, F.: High-dimensional Hawkes processes for limit order books: modelling, empirical analysis and numerical calibration. Quantitative Finance 18(2), 249–264 (2018) [12] Maglaras, C., Moallemi, C.C., Wang, M.: A deep learning approach to estimating fill probabilities in a limit order book. 
Quantitative Finance 22(11), 1989–2003 (2022)

[13] Merlevède, F., Peligrad, M., Rio, E.: Bernstein inequality and moderate deviations under strong mixing conditions. In: High dimensional probability V: the Luminy volume, vol. 5, pp. 273–293. Institute of Mathematical Statistics (2009)

[14] Morariu-Patrichi, M., Pakkanen, M.S.: State-dependent Hawkes processes and their application to limit order book modelling. Quantitative Finance 22(3), 563–583 (2022)

[15] Muni Toke, I., Pomponio, F.: Modelling trades-through in a limited order book using Hawkes processes. Economics discussion paper (2011-32) (2011)

[16] Muni Toke, I., Yoshida, N.: Modelling intensities of order flows in a limit order book. Quantitative Finance 17(5), 683–701 (2017)

[17] Muni Toke, I., Yoshida, N.: Analyzing order flows in limit order books with ratios of Cox-type intensities. Quantitative Finance pp. 1–18 (2019)

[18] Muni Toke, I., Yoshida, N.: Marked point processes and intensity ratios for limit order book modeling. Japanese Journal of Statistics and Data Science 5(1), 1–39 (2022)

[19] Oko, K., Akiyama, S., Suzuki, T.: Diffusion models are minimax optimal distribution estimators (2023). DOI 10.48550/ARXIV.2303.01861. URL https://arxiv.org/abs/2303.01861

[20] Rambaldi, M., Bacry, E., Lillo, F.: The role of volume in order book dynamics: a multivariate Hawkes process analysis. Quantitative Finance 17(7), 999–1020 (2017)

[21] Rio, E.: Asymptotic Theory of Weakly Dependent Random Processes. Springer (2017)

[22] Schmidt-Hieber, J.: Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics 48(4), 1875–1897 (2020). DOI 10.1214/19-AOS1875. URL https://doi.org/10.1214/19-AOS1875

[23] Sfendourakis, E., Muni Toke, I.: LOB modeling using Hawkes processes with a state-dependent factor. Market Microstructure and Liquidity (2023)

[24] Shorack, G.R., Wellner, J.A.: Empirical processes with applications to statistics.
SIAM (2009) [25] Sirignano, J.A.: Deep learning for limit order
books. Quantitative Finance 19(4), 549–570 (2019)

[26] Suh, N., Cheng, G.: A survey on statistical theory of deep learning: Approximation, training dynamics, and generative models (2024). DOI 10.48550/ARXIV.2401.07187. URL https://arxiv.org/abs/2401.07187

[27] Suzuki, T., Nitanda, A.: Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space. Advances in Neural Information Processing Systems 34, 3609–3621 (2021)

[28] Tsantekidis, A., Passalis, N., Tefas, A., Kanniainen, J., Gabbouj, M., Iosifidis, A.: Forecasting stock prices from the limit order book using convolutional neural networks. In: 2017 IEEE 19th conference on business informatics (CBI), vol. 1, pp. 7–12. IEEE (2017)

[29] Wu, P., Rambaldi, M., Muzy, J.F., Bacry, E.: Queue-reactive Hawkes models for the order flow. Market Microstructure and Liquidity (2022)

[30] Yoshida, N.: Polynomial type large deviation inequalities and quasi-likelihood analysis for stochastic differential equations. Ann. Inst. Statist. Math. 63(3), 431–479 (2011). DOI 10.1007/s10463-009-0263-z. URL http://dx.doi.org/10.1007/s10463-009-0263-z

[31] Yoshida, N.: Simplified quasi-likelihood analysis for a locally asymptotically quadratic random field. Annals of the Institute of Statistical Mathematics pp. 1–24 (2024)

[32] Zhang, Z., Zohren, S., Roberts, S.: DeepLOB: Deep convolutional neural networks for limit order books.
IEEE Transactions on Signal Processing 67(11), 3001–3012 (2019)

Appendix

[Figure 8 — two heatmaps of the mean L2 error over the grid of parameters (n^L, n^N), one for each estimation method.]

Figure 8: Simulation study — Heatmap of L2-errors w.r.t. the parameters n^L and n^N for both estimation methods.
[Figure 9 — two heatmaps of the mean L∞ error over the grid of parameters (n^L, n^N), one for each estimation method.]
Figure 9: Simulation study — Heatmap of L∞-errors w.r.t. the parameters n^L and n^N for both estimation methods.

[Figure 10 — two heatmaps of the empirical values of R_T over the grid of parameters (n^L, n^N), one for each estimation method.]
Figure 10: Simulation study — Heatmap of the empirical values of R_T w.r.t. the parameters n^L and n^N for both estimation methods.
arXiv:2504.15946v1 [math.ST] 22 Apr 2025

The e-Partitioning Principle of False Discovery Rate Control

Jelle Goeman*, Rianne de Heide†, Aldo Solari‡

April 23, 2025

Abstract

We present a novel necessary and sufficient principle for False Discovery Rate (FDR) control. This e-Partitioning Principle says that a procedure controls FDR if and only if it is a special case of a general e-Partitioning procedure. By writing existing methods as special cases of this procedure, we can achieve uniform improvements of these methods, and we show this in particular for the eBH, BY and Su methods. We also show that methods developed using the e-Partitioning Principle have several valuable properties. They generally control FDR not just for one rejected set, but simultaneously over many, allowing post hoc flexibility for the researcher in the final choice of the rejected hypotheses. Under some conditions, they also allow for post hoc adjustment of the error rate, choosing the FDR level α post hoc, or switching to familywise error control after seeing the data. In addition, e-Partitioning allows FDR control methods to exploit logical relationships between hypotheses to gain power.

1 Introduction

For familywise error rate (FWER) control, it has long been known (Sonnemann, 1982, 2008) that there is a single universal principle that is necessary and sufficient for the construction of valid methods. This Closure Principle says that any method that controls FWER is a special case of a closed testing procedure (Marcus et al., 1976). The Closure Principle was later challenged by the seemingly more powerful Partitioning Principle (Finner and Strassburger, 2002), but Goeman et al. (2021) have shown that the two principles are equivalent. Genovese and Wasserman (2006) and Goeman and Solari (2011) have extended closed testing to control of False Discovery Proportions (FDPs), and Goeman et al.
(2021) showed that all methods controlling a quantile of the distribution of FDP are either equivalent to a closed testing procedure or are dominated by one, extending the Closure Principle to all methods controlling FDP.

The Closure Principle is useful for method development in FWER and FDP in several ways. In the first place, it reduces the complex task of constructing a multiple testing method to the simpler task of choosing hypothesis tests for intersection hypotheses. After making this choice, method construction reduces to a discrete optimization problem. Moreover, the Closure Principle helps to handle complex situations such as restricted combinations, i.e., logical implications between hypotheses (Shaffer, 1986; Goeman et al., 2021). Finally, methods constructed using closed testing often allow for some user flexibility, permitting researchers to modify the results of the multiple testing procedure post-hoc without compromising error control (Goeman and Solari, 2011).

False Discovery Rate (FDR) methods control the expectation of FDP rather than a quantile (Benjamini and Hochberg, 1995). Such methods are therefore outside of the scope of the Closure Principle. For the construction of such

*Leiden University Medical Center, Leiden, The Netherlands. †University of Twente, Enschede, The Netherlands. ‡Ca' Foscari University of Venice, Venice, Italy.
https://arxiv.org/abs/2504.15946v1
methods, Blanchard and Roquain (2008) formulated two quite general sufficient conditions, self-consistency and dependence control, under which, if both hold, FDR control is guaranteed. This Self-Consistency Principle simplifies the proof of well-known FDR-controlling procedures such as BH (Benjamini and Hochberg, 1995) and BY (Benjamini and Yekutieli, 2001) and has been seminal for the creation of many others. An important recent example is the eBH procedure (Wang and Ramdas, 2022), that controls FDR on the basis of per-hypothesis e-values rather than p-values. The concept of the e-value, a random variable with expectation at most 1 under the null hypothesis, tunes well with FDR since both concepts are expectation-based. Several other authors (Ignatiadis et al., 2024b; Lee and Ren, 2024; Ren and Barber, 2024; Ignatiadis et al., 2024a) have pointed out useful connections between FDR and e-values.

However, the Self-Consistency Principle is sufficient but not necessary for FDR control, and there is some indication that it is not optimal. Exchangeable methods designed using the self-consistency principle control FDR at the more stringent level π0α rather than at α, but Solari and Goeman (2017) have shown that any method controlling FDR at π0α can be uniformly improved by a method that controls FDR at α, and that does not adhere to the Self-Consistency Principle. Some methods constructed using that principle, e.g., eBH and BY, have a reputation for low power (Lee and Ren, 2024; Xu and Ramdas, 2023). Furthermore, methods created using the Self-Consistency Principle lack some of the properties that methods created using the Closure Principle do get. Full post-hoc user flexibility, for example, as offered by closed testing (Goeman and Solari, 2011), is unknown in combination with FDR control, although some FDR methods allow some user flexibility (Lei and Fithian, 2018; Lei et al., 2021; Katsevich and Ramdas, 2020; Katsevich et al., 2023).
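As a point of reference, the baseline eBH rule of Wang and Ramdas (2022) is short enough to sketch directly: reject the hypotheses with the k* largest e-values, where k* is the largest k whose k-th largest e-value e_(k) satisfies e_(k) ≥ n/(kα). The implementation below is a plain sketch of that published rule with illustrative inputs, not one of the improved procedures developed in this paper.

```python
import numpy as np

def ebh(e_values, alpha):
    """eBH: reject the k* hypotheses with the largest e-values, where
    k* = max{k : e_(k) >= n / (k * alpha)} and e_(k) is the k-th largest."""
    e = np.asarray(e_values, dtype=float)
    n = e.size
    order = np.argsort(-e)              # indices sorted by decreasing e-value
    e_sorted = e[order]
    ks = np.arange(1, n + 1)
    ok = e_sorted >= n / (ks * alpha)   # which k satisfy the eBH condition
    if not ok.any():
        return np.array([], dtype=int)  # no rejections
    k_star = ks[ok].max()
    return np.sort(order[:k_star])

# Five hypothetical e-values at level alpha = 0.1: thresholds n/(k*alpha)
# are 50, 25, 16.7, ...; e_(1) = 60 >= 50 and e_(2) = 30 >= 25, so k* = 2.
print(ebh([30.0, 1.0, 60.0, 0.5, 2.0], alpha=0.1))   # → [0 2]
```

Because the rule only compares each sorted e-value against a deterministic threshold, it runs in O(n log n) time; the improvements of Sections 5 and 6 then replace this fixed thresholding by the e-Partitioning optimization.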
Moreover, so far no FDR-controlling methods have been proposed that can deal with restricted combinations. In this paper, we present a novel principle that is both necessary and sufficient for FDR control. We call this principle the e-Partitioning Principle of FDR control because of its similarity to the Partitioning Principle formulation of the Closure Principle for FWER and FDP, and because it is based on e-values. Like the Closure Principle, the new e-Partitioning Principle facilitates the task of constructing a multiple testing procedure by reducing it to the simpler task of choosing appropriate e-values for specifically constructed partitioning hypotheses. Once these e-values are chosen, the multiple testing procedure reduces to a discrete optimization problem. Like the Closure Principle for FWER and FDP, the Partitioning Principle for FDR offers some post-hoc user flexibility in a natural way, and handles restricted combinations without effort. It can be used to formulate uniform improvements of existing methods, including the eBH and BY procedures and the procedure of Su (2018). The outline of the paper is as follows. We first revisit the definition of FDR control, generalizing that concept to allow for simultaneity in
the rejections. Next, we formulate the general e-Partitioning procedure and establish its necessity and sufficiency: the actual e-Partitioning Principle. Sections 5 and 6 explore existing methods for FDR control and investigate whether they can be improved. Section 7 shows how the e-Partitioning Principle allows restricted combinations to be used in combination with FDR. In Sections 8 and 9 we establish different ways in which e-Partitioning brings flexibility to the researcher in terms of choice of error rates and rejected sets. We end with some numerical illustrations.

2 Contributions

The novel contributions of this paper are the following.

1. We formulate the general e-Partitioning procedure and prove the e-Partitioning Principle, which says that any method controls FDR if and only if it is a special case of that general procedure (Section 4).

2. We generalize the concept of FDR control to allow methods to control FDR for more than one set simultaneously (Section 3), and show that such simultaneity comes for free (without reducing α-levels) as a consequence of the e-Partitioning Principle. We explore the uses of this simultaneity in Section 8.

3. We use the e-Partitioning Principle to construct substantial uniform improvements of the eBH, BY and Su methods. Our improvements always allow more flexibility in the rejected sets, and often allow rejection of larger sets than the original methods at the same nominal FDR level (Sections 5 and 6). Polynomial-time algorithms for these methods are given (Section 10).

4. We show how power can be gained in FDR control methods for the case of restricted combinations (Section 7).

5. We show two ways in which e-Partitioning allows researcher flexibility in the choice of error rates. In the first place, researchers may switch to FWER control if they reject a small enough set (Section 8).
Secondly, we show that for some FDR-controlling methods, researchers may choose the α-level at which FDR is controlled post hoc (Section 9).

3 False Discovery Rate control

Let us first define the criterion of FDR control. We will introduce a generalized form of the usual definition of FDR control that allows methods to claim FDR control for more than one set simultaneously. The resulting simultaneous control will be mathematically easier to work with. Moreover, it will allow some post hoc user flexibility, as we will explore further in Section 8. We will use a set-centered notation for the theory we develop. Throughout the paper, we will denote all sets with capital letters (e.g., R), collections of sets in calligraphic (e.g., R), and scalars in lower case (e.g., r). We use the shorthand [i] for {1, 2, ..., i}. The power set of R is 2^R. Random variables and random sets will be in boldface (e.g., e, R). Direct inequalities between random variables should always be understood to hold surely, i.e., for all outcomes, unless stated otherwise. We denote max(a, b) as a ∨ b. Let the statistical model M be a set of probability measures, e.g., M = {P_θ : θ ∈ Θ} in a parametric model. Hypotheses are restrictions to the model M, and therefore subsets
of M. Suppose that we have m hypotheses H_1, ..., H_m ⊆ M of interest. For every P ∈ M, some hypotheses are true and others false; let N_P = {i : P ∈ H_i} be the index set of the true hypotheses for P. If we choose to reject the hypotheses with indices in R ⊆ [m], we say that we make |R| discoveries, of which |R ∩ N_P| are false. The false discovery proportion (FDP) for R and P, therefore, is |R ∩ N_P|/|R| if R ≠ ∅, and conventionally defined as 0 if R = ∅. If R is random, then FDP is random. A random R controls the False Discovery Rate (FDR) at α if the expected FDP is bounded by α. Formally, FDR control is defined in Definition 1. Let E_P denote the expectation under P.

Definition 1 (Classical FDR control). R ⊆ [m] controls FDR at level α if, for every P ∈ M,

  E_P( |R ∩ N_P| / (|R| ∨ 1) ) ≤ α.

In this paper, we prefer to work with a more general definition of FDR control, which refers to a random collection of sets, rather than to a single random set. In the classical Definition 1, a researcher rejecting all hypotheses with indices in the set R can expect FDP to be at most α. In the simultaneous Definition 2, the researcher has a collection R of such sets to choose from.

Definition 2 (Simultaneous FDR control). R ⊆ 2^[m] controls FDR at level α if, for every P ∈ M,

  E_P( max_{R∈R} |R ∩ N_P| / (|R| ∨ 1) ) ≤ α.

In the remainder of this paper, when we speak about FDR control at level α, this will refer to Definition 1 or 2 depending on whether we speak about an index set or a collection of such sets. While our focus will be on simultaneous FDR control (Definition 2), our results will imply corresponding results for classical FDR control (Definition 1) through a duality between the two definitions, which is made explicit in Lemma 1. The proof of this lemma is immediate from the definitions.

Lemma 1. If R controls FDR at α, then so does the collection R = {R}; if a collection R controls FDR at α, then so does any R ∈ R.
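In code, the quantities in Definitions 1 and 2 are one-liners. A minimal sketch (the function names are ours, not from the paper): the classical criterion bounds the expectation of the FDP of a single set, while the simultaneous criterion bounds the expectation of the worst case over a collection.

```python
def fdp(R, N):
    """False discovery proportion |R ∩ N| / (|R| ∨ 1) of a rejected set R
    when N is the index set of true hypotheses."""
    R, N = set(R), set(N)
    return len(R & N) / max(len(R), 1)

def worst_fdp(collection, N):
    """Inner quantity of Definition 2: the worst FDP over a collection of sets."""
    return max((fdp(R, N) for R in collection), default=0.0)

print(fdp({1, 2, 3}, {3, 4}))               # one false among three discoveries: 1/3
print(worst_fdp([{1}, {1, 2, 3}], {3, 4}))  # worst case over the collection
```

Classical FDR control bounds the expectation of `fdp` by α; simultaneous control bounds the expectation of `worst_fdp` instead.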
We remark that α is taken as fixed in Definitions 1 and 2, which implies that it is not allowed to depend on the data. In Section 9 we will consider a further generalization of Definition 2 that allows simultaneous FDR control over all α ∈ (0, 1], and therefore random α.

4 The e-Partitioning Principle of FDR control

The central result of this paper is the e-Partitioning Principle of FDR control. Analogous to the Partitioning Principle of FWER control (Finner and Strassburger, 2002), this novel principle gives a general recipe for designing FDR control methods. In this section we show that it is both necessary and sufficient for FDR control. To distinguish the two Partitioning Principles, we will call the Partitioning Principle for FWER control the p-Partitioning Principle. Both Partitioning Principles partition the model M into (maximally) 2^m disjoint parts, defined by the equivalence classes of the collection of true hypotheses N_P. For every S ∈ 2^[m], the partitioning hypothesis corresponding to S is

  H_S = {P ∈ M : N_P = S}.

Since the partitioning hypotheses are defined through an equivalence class on M, they are disjoint and cover M. As a consequence, exactly one among H_S, S ∈ 2^[m], is true and all the
others are false. We will generally ignore H_∅, since it is impossible to make false discoveries if H_∅ is true; define M = 2^[m] \ {∅} as the collection indexing the partitioning hypotheses of interest. To adapt the Partitioning Principle for FDR control, we combine partitioning with the concept of the e-value, building on earlier work (Wang and Ramdas, 2022; Ignatiadis et al., 2024a) that demonstrates a close relationship between FDR control and e-values. In particular, Ignatiadis et al. (2024a, Theorem 6.5) prove that every FDR controlling procedure can be recovered by eBH applied to compound e-values. However, their result is not constructive. We call e an e-value for a hypothesis H if e ≥ 0 and E_P(e) ≤ 1 for all P ∈ H. Hypothesis testing may be based on e-values rather than on p-values (Shafer, 2021; Ramdas et al., 2023; Grunwald et al., 2024). For a single hypothesis H, we may reject H when e ≥ 1/α while controlling Type I error at level α, since for P ∈ H, by Markov's inequality,

  P(e ≥ 1/α) ≤ E_P(e) / (1/α) ≤ α.

Combining e-values and partitioning hypotheses, let e_S, for every S ∈ M, be an e-value for the partitioning hypothesis H_S. Together, all e-values for all partitioning hypotheses form a suite of e-values E = (e_S)_{S∈M}. Choosing e-values for partitioning hypotheses may seem like a complex task, since partitioning hypotheses may look unusual. However, Lemma 2 shows that partitioning e-values can simply be constructed as compound e-values for intersection hypotheses, allowing the literature on this subject to be exploited (Vovk and Wang, 2021, 2024; Wang, 2025). Often, there is no loss in power doing so, but Section 7 will discuss situations for which there is a meaningful difference between partitioning and intersection hypotheses.

Lemma 2. If e is an e-value for H̃_S = ∩_{i∈S} H_i, then it is an e-value for H_S.

Proof. We have E_P(e) ≤ 1 for all P ∈ H̃_S = {P ∈ M : S ⊆ N_P} ⊇ H_S.

We are now ready to formulate our main result.
Based on any suite E = (e_S)_{S∈M} we can construct an FDR-controlling procedure as follows. Define the e-partitioning procedure as

  R_α(E) = { R ∈ 2^[m] : α e_S ≥ |R ∩ S| / (|R| ∨ 1) for all S ∈ M }.   (1)

To understand why R_α(E) would control FDR at α, note that the FDP |R ∩ N_P|/|R| depends on P only through N_P. Equation (1) bounds the FDP for S = N_P by α e_S, which has expectation at most α. We claim not just that (1) controls FDR, but that every FDR-control procedure is of the form (1). This is the e-Partitioning Principle of FDR control, formulated as Theorem 1. Analogous to the Closure Principle for FWER and FDP (Marcus et al., 1976; Goeman et al., 2021), it says that the e-partitioning procedure is both necessary and sufficient for FDR control.

Theorem 1 (The e-Partitioning Principle). R controls FDR at level α if and only if R ⊆ R_α(E) for a suite of e-values E.

Proof. Suppose E = (e_S)_{S∈M} is a suite of e-values. Choose any P ∈ M. If N_P ≠ ∅, let S = N_P. Then P ∈ H_S, so that

  E_P( max_{R∈R_α(E)} |R ∩ N_P| / (|R| ∨ 1) ) ≤ E_P(α e_S) ≤ α,

and the same holds trivially if N_P = ∅, so that R_α(E), and consequently R, controls FDR at level α. This proves the "if" part. Next, suppose R controls FDR at level α. Choose any S ∈ M and P ∈ H_S. Then

  e_S = (1/α) max_{R∈R} |R ∩ S| / (|R| ∨ 1) = (1/α) max_{R∈R} |R ∩ N_P| / (|R| ∨ 1)

has E_P(e_S) ≤ 1. Therefore, E = (e_S)_{S∈M} is a suite of e-values. For every
R ∈ R and every S ∈ M, we have

  α e_S = max_{U∈R} |U ∩ S| / (|U| ∨ 1) ≥ |R ∩ S| / (|R| ∨ 1),

so R ∈ R_α(E). This proves the "only if" part.

It is worth making explicit that Theorem 1 is not tied to our novel Definition 2 of simultaneous FDR, as Corollary 1 below, which combines Theorem 1 and Lemma 1, makes clear. In fact, rewriting an FDR-controlling R in terms of a suite of e-values is often a way of obtaining simultaneous FDR control for a classical FDR control method in a less trivial way than was done by Lemma 1. We will look into this aspect in more detail in Section 8.

Corollary 1. R controls FDR at level α if and only if R ∈ R_α(E) for a suite of e-values E.

The e-Partitioning Principle reduces the task of constructing an FDR control procedure to the much simpler task of constructing e-values for partitioning hypotheses. When constructing these e-values, the researcher should take into account any knowledge, or lack of it, on the joint distribution of the data. After choosing e-values, the remaining task, implementing (1), is purely computational. This computation may have exponential complexity, since 2^m − 1 e-values must be taken into account. However, computation can be done in polynomial time in important cases, as we shall see in Section 10.

One way to view the general method of the e-Partitioning Principle (1) is that it divides the model M into parts, designs an FDR-bound for each part, and combines the results to an overall FDR-control procedure. When constructing each of the partial FDR-bounds, the method designer may assume H_S, which gives access to powerful oracle-like information in the form of exact knowledge of S = N_P, the set of true null hypotheses. This is important information in FDR control, where knowledge of π_{0,P} = |N_P|/m is often already extremely valuable (Storey et al., 2004; Benjamini et al., 2006; Blanchard and Roquain, 2009). The e-Partitioning Principle is also helpful for improving existing methods.
The "only if" part of Theorem 1 asserts that every existing method controlling FDR has an implicit suite of e-values that can be used to reconstruct the method as a special case of (1). These e-values may be inefficient, e.g., because they have an expectation strictly smaller than 1; in such cases Theorem 1 can sometimes be used to propose a superior method based on a suite of stochastically larger e-values. We will give several examples of such improvements in Sections 5 and 6. However, it should be noted that the suite of e-values constructed in the proof of Theorem 1 is rather circular and not very insightful. For finding improvements it is better to reverse engineer the proof of the existing method to distill the e-values implicitly or explicitly used there, and to try and improve those. Potential for improvement also arises when we note that the compound e-values in (1) are never compared to critical values larger than 1/α. Without loss of power, we may therefore truncate all compound e-values at 1/α. If that operation results in e-values with expectation strictly below
1, there may be room for uniform improvement of the method as described above. In Wang and Ramdas (2022) this is done by boosting the e-values using a truncation function. However, in Section 9 we will see that in some cases there are good reasons to forgo such a truncation. Finally, it is worth noting that, in the proof of Theorem 1, the control of FDR hinges on the validity of the single e-value e_S, for S = N_P. This implies that relevant properties of e_S translate directly to properties of the FDR control procedure. For example, if e_S is an e-value only in some asymptotic sense, the FDR control of the resulting procedure converges to α at the same rate as E(e_S) converges to 1. Returning to Ignatiadis et al. (2024a), we conclude that the e-Partitioning Principle is an operationalization of their insight, generalized to simultaneous FDR according to Definition 2. It is ironic, however, that, as we will see in the next section, this principle shows that eBH itself is not admissible.

5 FDR-control by combining e-values

As a first application of the e-Partitioning Principle we will look at the important special case that we have e-values available for the hypotheses H_1, ..., H_m, and that the FDR-controlling R_α should be a function of these e-values. Let e_1 ≥ ... ≥ e_m be e-values for H_1, ..., H_m, respectively, which we assume ordered without loss of generality. We make no assumptions on the joint distribution of these e-values. For this situation Wang and Ramdas (2022) proposed the eBH procedure. It is essentially the BH procedure (Benjamini and Hochberg, 1995) applied to p_1 = 1/e_1, ..., p_m = 1/e_m. However, where BH is only valid for p-values whose joint distribution satisfies the PRDS condition (Benjamini and Yekutieli, 2001), the eBH procedure is valid for any joint distribution of the e-values. The eBH procedure at level α rejects the set R_α = [r_α], where

  r_α = max{1 ≤ r ≤ m : r e_r ≥ m/α},   (2)

or 0 if the maximum does not exist.
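Concretely, (2) is a one-line scan over r. A minimal sketch (the function name and the convention that the input is pre-sorted in decreasing order are ours):

```python
def ebh(e, alpha):
    """eBH of Wang and Ramdas (2022): with e_1 >= ... >= e_m, reject [r_alpha]
    where r_alpha is the largest r with r * e_r >= m / alpha (0 if none)."""
    m = len(e)
    r_alpha = 0
    for r in range(1, m + 1):
        if r * e[r - 1] >= m / alpha:
            r_alpha = r
    return r_alpha

print(ebh([40.0, 5.0, 1.0, 0.0], alpha=0.1))  # rejects [1]
```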
The set R_α controls FDR at level α, as proven by Wang and Ramdas (2022). We will now use the e-Partitioning Principle to propose an alternative method for controlling FDR based on e-values under arbitrary dependence. We build on the work of Vovk and Wang (2021), who showed that the only admissible exchangeable method for combining arbitrarily dependent e-values into a new e-value is to average them, mixing, if desired, with the trivial e-value of 1. We will use the unmixed average

  e_S = (1/|S|) Σ_{i∈S} e_i   (3)

as an e-value for H_S, S ∈ M. This is an e-value for H_S by the results of Vovk and Wang (2021) and Lemma 2, or directly by Lemma 3 below.

Lemma 3. For all S ∈ M, the e_S defined in (3) is an e-value for H_S.

Proof. E_P(e_i) ≤ 1 for all P ∈ H_i ⊇ H_S, so E_P(e_S) = (1/|S|) Σ_{i∈S} E_P(e_i) ≤ 1 for all P ∈ H_S.

We can now apply (1) using E = (e_S)_{S∈M} as the suite of e-values, to obtain the eBH+ procedure:

  R_α(E) = { R ∈ 2^[m] : (α/|S|) Σ_{i∈S} e_i ≥ |R ∩ S| / (|R| ∨ 1) for all S ∈ M }.   (4)

This procedure controls FDR under any joint distribution of the e-values by Theorem 1 and Lemma 3. Moreover, the procedure uniformly improves upon eBH, as Theorem 2 asserts,
motivating its name of eBH+. First, we must formally define what we mean by a uniform improvement of a classical FDR-control procedure by a simultaneous FDR-control procedure. Definition 3 makes this explicit. A simultaneous procedure R that uniformly improves a classical procedure S must always allow rejection of the same set of hypotheses, and sometimes allow rejection of a strictly larger set.

Definition 3. Let R and S both control FDR at level α. We say that R uniformly improves S if (1.) S ∈ R; and (2.) there exists an event E such that, if E happens, S ⊂ R for some R ∈ R.

Definition 3 does not explicitly exclude uninteresting "uniform improvements" that reject more than the original procedure only in one or more null events. It will be clear from the context that the improvements proposed in this paper are not of that trivial type.

Theorem 2. If m > 1, eBH+ uniformly improves eBH.

Proof. Let R_α(E) (= eBH+) be defined in (4) and R_α (= eBH) be defined just above (2). Suppose R_α ≠ ∅ and choose any S ∈ M. We have

  (α/|S|) Σ_{i∈S} e_i ≥ (α/|S|) Σ_{i∈R_α∩S} e_i ≥ (|R_α ∩ S|/|S|) α e_{|R_α|} ≥ m |R_α ∩ S| / (|S| |R_α|) ≥ |R_α ∩ S| / |R_α|,

which shows that R_α ∈ R_α(E). The same is trivially true if R_α = ∅. To show actual improvement, let m > 1 and consider the event that e_1 = (m − 1/2)/α, e_2 = 1/(2α), e_3 = ... = e_m = 0. Then R_α = ∅, but {1} ∈ R_α(E), because e_S = (1/|S|) Σ_{i∈S} e_i ≥ 1/α whenever 1 ∈ S.

The improvement from eBH to eBH+ can be substantial, as can be gauged from the proof of Theorem 2, in which each of the four inequalities leaves a substantial amount of room, and each has a different worst case in which it is an equality. The eBH+ often rejects more hypotheses than eBH, and may reject some hypotheses in the event that eBH does not reject any. An extreme example of this is given in Example 1, in which eBH+ rejects all hypotheses, but eBH none.

Example 1. Suppose m > 1, and (2m − 2i + 1)/(mα) ≤ e_i < m/(iα) for i = 1, ..., m. Then eBH rejects nothing, but eBH+ rejects [m].

Proof.
It is tedious but straightforward to show that (2m − 2i + 1)/m < m/i for i = 1, ..., m, so that the example is not void. It follows immediately from (2) that eBH rejects nothing. Choose R = [m] and any non-empty S ⊆ [m] with s = |S|. Then

  (1/s) Σ_{i∈S} e_i ≥ (1/s) Σ_{i=m−s+1}^{m} e_i ≥ (1/s) Σ_{i=m−s+1}^{m} (2m − 2i + 1)/(mα) = s²/(msα) = s/(mα) = |R ∩ S| / (|R| α).

The eBH+ procedure is qualitatively different from the eBH procedure, and has some properties that eBH does not have, and which procedures constructed according to the Self-Consistency Principle in general do not have. In the first place, rejection of a certain set R depends not only on e-values of hypotheses in R itself, but also on the e-values of other hypotheses. To see an example of this, consider the case m = 2, e_1 = 9/(5α) and e_2 = 0. In this case, nothing is rejected. However, if we increase e_2 to 1/(5α), we obtain R_α(E) = {∅, {1}}. Increasing e_2, apparently, facilitates the rejection of a set of hypotheses that does not include H_2. Secondly, R_α(E) may reject hypotheses for which the corresponding e-value is less than 1/α. To see an example, consider m = 2, e_1 = 3/(2α) and e_2 = 1/(2α). It is easily checked that {1, 2} ∈ R_α(E), implying that H_2 can be rejected with FDR control at α, although
e_2 < 1/α. Translated to p-values, these properties of the procedure of (4) are shared by adaptive FDR control procedures that plug an estimate π̂_0 of π_{0,P} = |N_P|/m into a procedure controlling FDR at π_{0,P}α (Storey et al., 2004; Benjamini et al., 2006; Blanchard and Roquain, 2009). For such procedures, rejection of a hypothesis may also depend on p-values of other hypotheses through π̂_0, and the procedure may reject hypotheses with p-values up to α/π̂_0 > α. The eBH+ procedure can therefore be seen as an adaptive procedure, even though it was not explicitly constructed using an estimate of π_{0,P}. Where eBH controls FDR according to Definition 1 and rejects only a single set, eBH+ rejects a collection of such sets, following Definition 2. This extension to simultaneous FDR control is not simply the trivial extension of Lemma 1. To see this, remark that the proof of Theorem 2 does not just apply to the set R_α rejected by eBH, but to any other self-consistent set, i.e., any set [s] for which s e_s ≥ m/α. All such sets are therefore rejected by eBH+. We will explore use of the diversity of the collection R rejected by eBH+ in more detail in Section 8. Wang and Ramdas (2022) already observe that eBH can be improved upon, i.e., the rejection set can be enlarged while retaining FDR control, by pre-processing the e-values in some way. Wang and Ramdas (2022) themselves introduce boosting by truncating e-values, when the marginal distribution of the null e-values is known. Lee and Ren (2024) boost the e-values by conditioning on a specific sufficient statistic. Both of these improvements require assumptions on the distribution of the e-values. Xu and Ramdas (2023), in contrast, do not assume anything on the distribution of the e-values but boost the power of eBH by introducing exogenous randomness in a clever way: stochastic rounding.
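Before moving on, the claims of Example 1 can be checked mechanically. A brute-force sketch of membership in (4) (our function names; exponential in m, with a small numerical tolerance to absorb floating-point error), using m = 4 and α = 0.1, for which e = (17.5, 12.5, 7.5, 2.5) satisfies the conditions of Example 1:

```python
from itertools import combinations

def in_ebh_plus(R, e, alpha, tol=1e-12):
    """Check membership of R in the eBH+ collection (4): the average e-value over
    every non-empty S must dominate |R ∩ S| / (|R| ∨ 1) after scaling by alpha."""
    m = len(e)
    for size in range(1, m + 1):
        for S in combinations(range(1, m + 1), size):
            avg = sum(e[i - 1] for i in S) / len(S)
            if alpha * avg < len(set(R) & set(S)) / max(len(R), 1) - tol:
                return False
    return True

alpha, m = 0.1, 4
e = [17.5, 12.5, 7.5, 2.5]  # e_i = (2m - 2i + 1)/(m * alpha) for i = 1, ..., 4
print(max((r for r in range(1, m + 1) if r * e[r - 1] >= m / alpha), default=0))  # eBH: 0
print(in_ebh_plus({1, 2, 3, 4}, e, alpha))                                        # eBH+: True
```

The same function reproduces the two m = 2 illustrations above: `in_ebh_plus({1}, [18.0, 0.0], 0.1)` is False, while raising e_2 to 2.0 = 1/(5α) makes it True.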
While it is evident that truncating the e-values in a suitable way can also improve the power of methods based on the e-Partitioning Principle, such as eBH+, it is not straightforwardly clear whether stochastic rounding would give an improvement. In the set-up of stochastic rounding, some e-values are rounded up and some are rounded down to the grid of the eBH thresholds, in such a way that rounding them down will never lead to fewer rejections, and rounding up might lead to more. However, in our method, the magnitudes of e-values from hypotheses that are not rejected do influence the rejected set, as in the example above. Investigating the influence of the various types of boosting and stochastic rounding on methods created by the e-Partitioning Principle is a direction for future work.

6 FDR-control by combining p-values

Many FDR control procedures start from p-values p_1 ≤ ... ≤ p_m for hypotheses H_1, ..., H_m. We will assume these p-values ordered without loss of generality. There are several famous procedures, the choice of which depends on the assumptions that we are willing to make on the joint distribution of the p-values. Here, we will explore two such procedures. The first is BY (Benjamini and Yekutieli, 2001), which is valid for any distribution of the p-values. The second is the procedure of Su (2018), based on his FDR Linking Theorem. The Su method is valid under the PRDN assumption, a weaker variant of the PRDS assumption underlying BH. We will place these two methods in the context of the e-Partitioning Principle and show that they can be uniformly improved. Since the e-Partitioning Principle depends on e-values, we will need to convert the input p-values to e-values. We use p-to-e calibrators for this (Shafer et al., 2011). A function e(p) is a p-to-e calibrator if e(p) is an e-value whenever p is a p-value, i.e., whenever P(p ≤ t) ≤ t for all 0 ≤ t ≤ 1. One straightforward way to construct an FDR control procedure would be to calibrate p_1 ≤ ... ≤ p_m to e-values and apply eBH+. However, in general, this does not turn out to be the most efficient approach.

6.1 BY+: FDR control under general dependence

The BY method of Benjamini and Yekutieli (2001) rejects the set R_α = [r_α], where

  r_α = max{1 ≤ r ≤ m : m h_m p_r ≤ r α},   (5)

or 0 if the maximum does not exist, and h_m = Σ_{i=1}^{m} 1/i is the mth harmonic number. As proven by Benjamini and Yekutieli (2001), R_α controls FDR for any joint distribution of the p-values. To place the BY procedure into the context of the e-Partitioning Principle, we define the following e-value, motivated by Xu et al. (2024), and building on the grid harmonic p-to-e calibrator of Vovk et al. (2022). This e-value averages, for every S, e-values obtained by applying a p-to-e calibrator to the p-values of hypotheses in S; however, this p-to-e calibrator depends on S as well as on α.

Lemma 4. Under H_S,

  e_S = Σ_{i∈S} 1{h_{|S|} p_i ≤ α} / (α ⌈|S| h_{|S|} p_i / α⌉)   (6)

is an e-value.

Proof. Xu et al. (2024, Proposition 3) proved that

  e(p) = k · 1{h_k p ≤ α} / (α ⌈k h_k p / α⌉)

is a p-to-e calibrator for all α ∈ (0, 1) and all k ∈ N. We take k = |S|, apply the p-to-e calibrator to all p_i, i ∈ S, and note that the average of the resulting e-values is again an e-value.
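A direct transcription of (6) (our naming; hypotheses indexed from 1, p-values passed as a list, and the input p-values below are purely illustrative):

```python
from math import ceil

def harmonic(k):
    """k-th harmonic number h_k."""
    return sum(1.0 / i for i in range(1, k + 1))

def grid_harmonic_es(p, S, alpha):
    """Partitioning e-value (6): the grid-harmonic p-to-e calibrator of
    Vovk et al. (2022), with k = |S|, averaged over the p-values indexed by S."""
    k = len(S)
    hk = harmonic(k)
    total = 0.0
    for i in S:
        if hk * p[i - 1] <= alpha:  # indicator 1{h_|S| p_i <= alpha}
            total += 1.0 / (alpha * ceil(k * hk * p[i - 1] / alpha))
    return total

print(grid_harmonic_es([0.018, 0.03, 0.042, 0.06], {1, 2, 3, 4}, 0.1))  # about 10.83
```

For these inputs α·e_S ≈ 1.08; singleton sets recover the plain calibrator, e.g. `grid_harmonic_es([0.018, 0.03, 0.042, 0.06], {4}, 0.1)` gives 10.0.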
Next, we can build an FDR control method by plugging E = (e_S)_{S∈M}, for the e-values we have just defined in (6), into the general e-Partitioning procedure (1). We call the resulting procedure BY+ because it uniformly improves BY. This property is proven in the following theorem.

Theorem 3. If m > 1, BY+ uniformly improves BY.

Proof. Let R_α(E) (= BY+) be defined through (6) and R_α (= BY) be defined just above (5). By definition of r_α, we have p_i ≤ |R_α| α / (m h_m) for all i ∈ R_α. Suppose R_α ≠ ∅ and choose any S ∈ M. We have

  α e_S ≥ Σ_{i∈S∩R_α} 1{h_{|S|} p_i ≤ α} / ⌈|S| h_{|S|} p_i / α⌉ = Σ_{i∈S∩R_α} 1 / ⌈|S| h_{|S|} p_i / α⌉ ≥ Σ_{i∈S∩R_α} 1 / ⌈|S| h_{|S|} |R_α| / (m h_m)⌉ ≥ |S ∩ R_α| / |R_α|,

which shows that R_α ∈ R_α(E). The same is trivially true if R_α = ∅. To show actual improvement, let m > 1 and consider the event that p_1 = α/(m h_m), p_2 = 2α/(m h_m − 1), p_3 = ... = p_m = 1. Then R_α = {1}, but {1, 2} ∈ R_α(E), because 1 ∈ S implies that e_S ≥ 1/α, and 2 ∈ S implies that e_S ≥ 1/(2α).

We now construct an example where BY fails to reject any hypothesis, while BY+ rejects all but one.

Example 2. Let m ≥ 4, and consider the event that

  iα/(m h_m) < p_i ≤ (i + 1)α/(m h_m) for i = 1, ..., m − 1,  and  p_m > α/h_m.

Then BY rejects nothing, but BY+ rejects [m − 1].

Proof. It is immediate from the definition that the BY rejection set is empty. Let R = [m − 1] and choose any non-empty S ⊆ [m]. We have 1{h_{|S|} p_i ≤ α} = 1 for all i ≤ m − 1.
For S ≠ [m], it follows that

  ⌈|S| h_{|S|} p_i / α⌉ ≤ ⌈|S| h_{|S|} (i + 1) / (m h_m)⌉ ≤ m − 1 for all i ≤ m − 1.

Hence, for all S ≠ [m], we obtain

  α e_S ≥ Σ_{i∈S\{m}} 1 / ⌈|S| h_{|S|} p_i / α⌉ ≥ |S \ {m}| / (m − 1) = |S ∩ R| / |R|.

In case S = [m], we have

  α e_[m] = Σ_{i=1}^{m−1} 1 / ⌈m h_m p_i / α⌉ ≥ Σ_{i=2}^{m} 1/i ≥ 1.

This establishes that [m − 1] ∈ R_α(E).

The improvement of BY+ upon BY is similar in spirit to the improvement of eBH+ relative to eBH. However, BY+ will never reject hypotheses H_i with p_i > α, since when i ∈ R, for S = {i}, we have

  α e_S = 1{p_i ≤ α} / ⌈p_i/α⌉ = 0 < |S ∩ R| / |R|.

Like eBH+, BY+ rejects a non-trivial collection of sets, following Definition 2. The proof of Theorem 3 does not just apply to the set R_α rejected by BY, but to any other self-consistent set, i.e., any set [s] for which p_s ≤ sα/(m h_m). All such sets, and others, are contained in R_α(E) for BY+ (see also Section 8).

6.2 Su+: FDR control under the PRDN assumption

Su (2018) investigated FDR control by BH under the PRDN assumption. PRDN is a weaker assumption than the PRDS assumption that is sufficient for BH to be valid (Benjamini and Yekutieli, 2001). The p-values p_1, ..., p_m satisfy PRDS in M if, for every increasing set A ⊆ R^m, for every P ∈ M, and for every i ∈ N_P, we have that

  P((p_1, ..., p_m) ∈ A | p_i = t)

is weakly increasing in t. PRDS, therefore, is an assumption both on the p-values of true and false hypotheses. PRDN, in contrast, is an analogous assumption only on the p-values of true hypotheses. The p-values p_1, ..., p_m satisfy PRDN in M if, for every P ∈ M, for every increasing set A ⊆ R^{|N_P|}, and for every i ∈ N_P, we have that

  P((p_j)_{j∈N_P} ∈ A | p_i = t)

is weakly increasing in t. As argued by Su (2018), PRDN is a more attractive assumption than PRDS, since generally we do not want to assume anything about p-values of false hypotheses.
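One small computational aside: the level adjustment l_α = −1/w(−α/e) used by the Su procedure below can be computed without a Lambert-W routine, because q = α·l_α solves α = q − q log q and the map q ↦ q − q log q is increasing on (0, 1). A bisection sketch (our naming):

```python
from math import log

def su_l(alpha, tol=1e-12):
    """Compute l_alpha = -1/w(-alpha/e) by solving q - q*log(q) = alpha for
    q in (0, alpha) with bisection, then returning l_alpha = q / alpha."""
    lo, hi = 1e-15, alpha  # q - q*log(q) is increasing and exceeds alpha at q = alpha
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid - mid * log(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return lo / alpha

for a in (0.01, 0.05, 0.1):
    print(round(su_l(a), 3))  # 0.131, 0.174, 0.205, matching the values reported below
```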
Where PRDS is the assumption under which BH was proven (Benjamini and Yekutieli, 2001), PRDN is the sufficient assumption of the Simes test (Simes, 1986; Su, 2018), a test that rejects H_S when

  p_S = min_{1≤i≤|S|} |S| p_{i:S} / i ≤ α,

where p_{i:S} is the ith smallest p-value among p_i, i ∈ S. Su (2018) showed that there is a close connection between the BH procedure and the Simes test. Moreover, Su (2018) proved that when BH is applied at level q under the assumption of PRDN, rather than PRDS, it achieves an FDR of at most q − q log(q) rather than q. Solving α = q − q log(q), we obtain q = −α/w(−α/e),¹ where w is the −1 branch of the Lambert W function. It follows that it is possible to control FDR under PRDN by applying the BH procedure at level α l_α, where l_α = −1/w(−α/e). We call this the Su procedure. It rejects R_α = [r_α], where

  r_α = max{1 ≤ r ≤ m : m p_r ≤ r α l_α},   (7)

or 0 if the maximum does not exist. For α = 0.01, 0.05, 0.1, we have l_α = 0.131, 0.174, 0.205, respectively. In this section we will improve the Su procedure uniformly using the e-Partitioning Principle. First, we will define a useful p-to-e calibrator, which will allow us to calibrate the Simes p-value.

Lemma 5. e(x) = min(l_α/x, 1/α) is a p-to-e calibrator for all α ∈ (0, 1].

Proof. The Lambert W function has the property that w(x) = x/exp(w(x)). Therefore,

  log(e/(l_α α)) = log(−(e/α) w(−α/e)) = −w(−α/e) = 1/l_α.

Let p be a p-value and e = e(p). Then

  E(e) ≤ ∫₀^{α l_α} (1/α) dp + ∫_{α l_α}^{1} (l_α/p) dp = l_α − l_α log(l_α α) = l_α log(e/(l_α α))
= 1.

¹Though we use e both for p-to-e calibrators (as a function) and for exp(1) (as a constant), this should lead to no confusion.

Corollary 2. For all P ∈ H_S,

  e_S = min(l_α/p_S, 1/α)   (8)

is an e-value under PRDN.

We can combine the e-values from (8) into a suite E = (e_S)_{S∈M} and use the general e-Partitioning procedure (1). We call the resulting procedure Su+ because it uniformly improves the Su procedure. This is proven in the following theorem.

Theorem 4. If m > 1, Su+ uniformly improves Su.

Proof. Let R_α(E) (= Su+) be defined using (8) and R_α (= Su) be defined just above (7). By definition of r_α, we have p_i ≤ |R_α| l_α α / m for all i ∈ R_α. Choose any S ∈ M such that S ∩ R_α ≠ ∅. Then

  p_S ≤ p_{|R_α∩S|:S} |S| / |R_α∩S| ≤ p_{|R_α|} |S| / |R_α∩S| ≤ (|R_α| l_α α / m) · |S| / |R_α∩S| ≤ |R_α| l_α α / |R_α∩S|.

It follows that α e_S ≥ |R_α ∩ S| / |R_α| if S ∩ R_α ≠ ∅ and p_S ≥ l_α α. The same is trivially true if either S ∩ R_α = ∅ or p_S < l_α α. Therefore, we have R_α ∈ R_α(E). To show actual improvement, let m > 1 and consider the event that p_1 = l_α α / m, p_2 = 2 l_α α / (m − 1), p_3 = ... = p_m = 1. Then R_α = {1}. However, 1 ∈ S implies that p_S ≤ l_α α |S| / m, so e_S = 1/α, and 2 ∈ S implies p_S ≤ 2 l_α α |S| / (m − 1), so e_S ≥ 1/(2α). It follows that R_α(E) = {{1}, {1, 2}}.

The following example gives an event in which Su rejects only one hypothesis, but Su+ rejects all.

Example 3. Let m > 1, and consider the event that

  p_1 = l_α α / m,   l_α α < p_2 = ··· = p_m ≤ m l_α α / (m − 1).

Then Su rejects only one hypothesis, but Su+ rejects [m].

Proof. That Su rejects [1] only is immediate from the definition. Su+ rejects [m], since we have e_S = 1/α if 1 ∈ S, and e_S ≥ (m − 1)/(mα) if 1 ∉ S.

7 Restricted combinations

Thus far, we have not made use of the special properties of the partitioning hypotheses. We have created e-values for intersection hypotheses H̃_S and used them as e-values for partitioning hypotheses H_S using Lemma 2.
However, the use of partitioning hypotheses rather than intersection hypotheses may gain power in certain situations, as is well known in the context of FWER control (Shafer, 2021; Finner and Strassburger, 2002). We will now explain how this power gain extends to FDR control through the e-Partitioning Principle.

The partitioning hypothesis $H_S = \{P \in M : N_P = S\}$ is, in general, smaller than the intersection hypothesis $\tilde{H}_S = \{P \in M : N_P \subseteq S\}$. This difference, however, is not always appreciable. For example, suppose $m = 2$ and the two hypotheses are $H_1: \theta_1 = 0$ and $H_2: \theta_2 = 0$. Then for $S = \{1\}$ we have $H_S = \{P \in M : \theta_1 = 0 \text{ and } \theta_2 \ne 0\}$, while $\tilde{H}_S = \{P \in M : \theta_1 = 0\} = H_1$. In most statistical models, it is not possible to construct a more powerful test for $H_{\{1\}}$ than for $H_1$.

This changes, however, when $H_1$ and $H_2$ are interval hypotheses. If we have $H_1: \theta_1 \le 0$ and $H_2: \theta_2 \le 0$, then $H_{\{1\}} = \{P \in M : \theta_1 \le 0 \text{ and } \theta_2 > 0\}$ is appreciably smaller than $H_1$, and, as a consequence, we may often formulate a more powerful (stochastically larger) e-value for $H_{\{1\}}$ than for $H_1$.

Logical relationships between hypotheses (restricted combinations; Shaffer, 1986) can be seen as an extreme case of the same phenomenon, where $H_S$ can even become $\emptyset$ for some $S$. For example, suppose we have $m = 3$ and $H_1: \theta_1 = \theta_2$, $H_2: \theta_1 = \theta_3$ and $H_3: \theta_2 = \theta_3$. Then we have $H_{\{1,3\}} = \{P \in M : \theta_1 = \theta_2 = \theta_3 \text{ and } \theta_1 \ne \theta_3\} = \emptyset$. Consequently, $e_{\{1,3\}} = \infty$ is a valid e-value for this hypothesis, and there is no need to take any stochastically smaller e-value based on the data.

Taking restricted combinations into account can therefore lead to substantial gains in power. For example, if we test the $m = 6$ pairwise comparisons among four parameters $\theta_1, \theta_2, \theta_3, \theta_4$, then out of 63 hypotheses in $\mathcal{M}$, only 13 are non-empty; the
other 50 can essentially be ignored by the e-Partitioning procedure. Suppose that the hypothesis $H_1: \theta_1 = \theta_2$ has an e-value of $4/\alpha$, $H_2: \theta_1 = \theta_3$ and $H_3: \theta_1 = \theta_4$ both have e-values of $1/\alpha$, while the other three hypotheses have e-values of 0. Then eBH+ would only reject $R = \{1\}$ when restricted combinations are not taken into account, but could additionally reject $R = \{1,2,3\}$ (and all subsets) when they are.

8 Human-in-the-loop post hoc FDR control

Classical FDR control according to Definition 1 provides the researcher with a single random set of hypotheses that they are allowed to reject. Generally, this is the largest set of a pre-specified form that allows FDR control. For example, in BH this set consists of the hypotheses with the $1, 2, \ldots, r$th smallest p-values, for some random $r$. For knockoff-based inference, it is the set with the $1, 2, \ldots, r$th knockoff weights. Methods generally return the largest set $R$ of the chosen form.

Classical FDR control (Definition 1) guarantees that FDR remains bounded by $\alpha$ only for the set $R$ returned by the method. However, as argued by Goeman and Solari (2011), in applied contexts there may be reasons for a researcher to deviate from that set. For example, the set $R$ may turn out to be too large, since the researcher may only have a limited budget for follow-up experiments. Alternatively, some of the smallest p-values may be suspect, or less interesting, because of secondary characteristics. An important example of this is the volcano plot method popular in bioinformatics, in which researchers take only a subset of the FDR-significant results, discarding the findings with small effect size estimates (Ebrahimpoor and Goeman, 2021; Enjalbert-Courrech and Neuvial, 2022). Such post hoc discarding of results destroys FDR control, however, as the guarantee of Definition 1 is for $R$ only. Finner and Roters (2001) showed how researchers can, intentionally or not, "cheat with FDR" by reducing the set $R$ post hoc.
Even reducing the set $R$ while retaining its pre-specified form, e.g. rejecting the set of the $s < r$ smallest p-values in BH, compromises FDR control if $s$ is chosen post hoc (Katsevich and Ramdas, 2020).

Deviation from the set $R$ of the $r$ smallest p-values has been investigated by several authors. However, in most cases such deviation has to be pre-specified, and the researcher must commit to an algorithm for reducing or changing the set $R$ before seeing the data, i.e., independently of the data (Lei and Fithian, 2018; Katsevich et al., 2023). Some methods allow some interactive post hoc amending of this pre-chosen algorithm based on progressively revealing part of the data (Lei et al., 2021; Katsevich and Ramdas, 2020). As far as we are aware, no FDR control methods have so far been proposed that allow researchers to reduce or amend $R$ after looking at all of the data. Notably, methods controlling tail probabilities of FDP, rather than FDR, all get some post hoc flexibility for free by the Closure Principle (Goeman and Solari, 2011; Goeman et al., 2021).

Post hoc flexibility comes as a direct and free consequence of the e-Partitioning Principle through its FDR guarantee according to Definition 2. Since
FDR is controlled over the maximum over all sets in $\mathcal{R}_\alpha(E)$, the researcher is allowed to choose the final rejected set from among the collection $\mathcal{R}_\alpha(E)$ in any desired way, using all information available. Control of FDR according to Definition 2 is simultaneous over the sets in $\mathcal{R}_\alpha(E)$, and can be used much like simultaneous confidence intervals are. The e-Partitioning Principle, therefore, addresses the "cheating with FDR" phenomenon in a clear and effective way, since it specifies exactly in what way the researcher may reduce the optimal set $R$: reductions to $S$ retain FDR control if and only if $S \in \mathcal{R}_\alpha(E)$. For the methods we have considered above, eBH+, BY+ and Su+, the collection $\mathcal{R}_\alpha(E)$ is often rich enough to allow plenty of choice for researchers.

Simultaneous control of FDR as offered by Definition 2 is not just useful for reducing a single FDR-significant set to a single smaller one, but also for the possibility of splitting it into several smaller sets. This can be relevant in fields such as bioinformatics and neuroimaging. In neuroimaging, hypotheses correspond to voxels (3D equivalents of pixels) in the brain, and the significant set $R$ often splits naturally into several brain areas ("clusters"). In such situations, FDR control on the total set $R$ is not very informative, since researchers will interpret their results in terms of the clusters (Rosenblatt et al., 2018). Simultaneous FDR control using the e-Partitioning Principle allows researchers to claim that these clusters have FDR at most $\alpha$ if and only if these clusters are in $\mathcal{R}_\alpha(E)$. Similar considerations apply in bioinformatics, where hypotheses correspond to molecular markers, which can be meaningfully grouped into sets called pathways (Ebrahimpoor et al., 2020). FDR inference based on Definition 2 allows researchers to make claims about such pathways, where classical FDR inference based on Definition 1 does not.

Of special interest are the singleton sets in $\mathcal{R}_\alpha(E)$.
Since singleton sets have an FDP of either 0 or 1, there is no distinction between FDR and FWER for such sets. This has the following important implication, made explicit in Theorem 5. The theorem says that when controlling FDR, if the signal is strong enough and the collection $\mathcal{R}$ contains one or more singleton sets, the researcher may choose to reject the union of all these singleton sets and control FWER in addition to FDR. This switch from FDR control to FWER control may be made fully post hoc, after observing all the data and the resulting rejected collection $\mathcal{R}$.

Theorem 5. Suppose $\mathcal{R}$ controls FDR at level $\alpha$. Define $\bar{R} = \{i \in [m] : \{i\} \in \mathcal{R}\}$. Then $\bar{R}$ controls FWER at level $\alpha$, that is, for every $P \in M$,
$$P(\bar{R} \cap N_P = \emptyset) \ge 1 - \alpha.$$

Proof. Let $\mathcal{S} = \{S \in \mathcal{R} : |S| = 1\}$. Then, for every $P \in M$,
$$P(\bar{R} \cap N_P \ne \emptyset) = \mathrm{E}_P\Big(\max_{i \in \bar{R}} 1\{i \in N_P\}\Big) = \mathrm{E}_P\Big(\max_{R \in \mathcal{S}} \frac{|R \cap N_P|}{|R| \vee 1}\Big) \le \mathrm{E}_P\Big(\max_{R \in \mathcal{R}} \frac{|R \cap N_P|}{|R| \vee 1}\Big) \le \alpha.$$

9 Post hoc choice of α

Theorem 5 gives researchers the exciting option of switching from FDR control to FWER control after seeing the data. In this section we shall see that, for certain e-Partitioning procedures, there are additional options for adapting
the error rate to the data. To achieve this, we will extend Definition 2 of FDR control further, building on the work of Grünwald (2024) and Koning (2023).

Definition 4 (FDR control with post hoc α). For every $\alpha \in (0,1]$, let $\mathcal{R}_\alpha \subseteq 2^{[m]}$. Then $(\mathcal{R}_\alpha)_{\alpha \in (0,1]}$ controls FDR with post hoc $\alpha$ if, for every $P \in M$,
$$\mathrm{E}_P\Big(\sup_{\alpha \in (0,1]} \max_{R \in \mathcal{R}_\alpha} \frac{|R \cap N_P|}{\alpha(|R| \vee 1)}\Big) \le 1.$$

Where FDR control according to Definition 2 allows the researcher to choose $R \in \mathcal{R}_\alpha$ freely for a pre-chosen $\alpha$, FDR control according to Definition 4 allows the researcher to choose $\alpha$ freely and post hoc, and $R \in \mathcal{R}_\alpha$ within the sets on offer for that $\alpha$. FDR control with post hoc $\alpha$ implies FDR control in the sense of Definition 2 with random $\alpha$. Rather than fixing $\alpha$ in advance, Definition 4, therefore, allows $\alpha$ to be a random variable that depends on the data in any desired way.

FDR control with post hoc $\alpha$, though seemingly more complicated, follows more or less directly from the e-Partitioning Principle. It only adds the additional requirement that the suite $E$ of e-values does not depend on $\alpha$, as the following theorem asserts. The proof is essentially identical to that of Theorem 1. Theorem 6 extends the corresponding result of Ramdas and Wang (2025, Theorem 9.10), that $\alpha$ can be chosen post hoc in eBH, to general e-Partitioning procedures and to simultaneous FDR control.

Theorem 6. Suppose $E$ does not depend on $\alpha$. Then $(\mathcal{R}_\alpha(E))_{\alpha \in (0,1]}$ controls FDR with post hoc $\alpha$.

Proof. Choose any $P \in M$, and let $S = N_P$. Then $P \in H_S$, so that
$$\mathrm{E}_P\Big(\sup_{\alpha \in (0,1]} \max_{R \in \mathcal{R}_\alpha(E)} \frac{|R \cap N_P|}{\alpha(|R| \vee 1)}\Big) \le \mathrm{E}_P(e_S) \le 1.$$

10 Computation

Checking whether $R \in \mathcal{R}_\alpha(E)$ involves checking exponentially many e-values and may therefore take exponential time. However, in special cases, most notably for the eBH+, BY+ and Su+ methods proposed above, computation reduces to polynomial time. It is easy to check that the compound e-values of BY+ and Su+ have the property that they are weakly decreasing in the per-hypothesis p-values.
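For small $m$, the membership check $R \in \mathcal{R}_\alpha(E)$ can simply be done by brute force, and for eBH+ with sets of the form $R = [r]$ the result can be compared against the quadratic-time shortcut $g(a,r,b)$ described next. The following sketch is ours, not the paper's code; it assumes that the compound e-value of $S$ for eBH+ is the average of the per-hypothesis e-values (consistent with the partial sums $f_k$ below) and that the e-values are sorted in decreasing order:

```python
# Brute-force vs. shortcut check of R = [r] in R_alpha(E) for eBH+.
# Assumption (ours): e_S = average of per-hypothesis e-values over S,
# and e is sorted in decreasing order.
from itertools import combinations

def in_calR_bruteforce(e, r, alpha):
    """Check alpha * e_S >= |R ∩ S| / |R| for every nonempty S of [m], R = [r]."""
    m = len(e)
    for size in range(1, m + 1):
        for S in combinations(range(m), size):
            e_S = sum(e[i] for i in S) / size
            if alpha * e_S < sum(i < r for i in S) / r:
                return False
    return True

def in_calR_shortcut(e, r, alpha):
    """Quadratic-time check via g(a, r, b) >= 0 for 0 <= a < r, r <= b <= m."""
    m = len(e)
    f = [0.0]
    for ei in e:
        f.append(f[-1] + ei)  # f[k] = e_1 + ... + e_k
    for a in range(r):
        for b in range(r, m + 1):
            g = f[m] - f[b] + f[r] - f[a] - (m - b + r - a) * (r - a) / (r * alpha)
            if g < 0:  # worst-case S for this (a, b) violates the condition
                return False
    return True
```

The worst case for a given number of rejected and non-rejected indices in $S$ is attained at the smallest e-values in each group, which is why checking only the sets indexed by $(a, b)$ suffices when the e-values are sorted.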
We can exploit this in the spirit of the shortcuts for closed testing by Dobriban (2020). To check for a set $R$ whether $\alpha e_S \ge |R \cap S|/|R|$, it suffices to check, for each $1 \le a \le |R|$ and each $0 \le b \le m - |R|$, whether the condition holds for the set $S$ consisting of the indices of the $a$ largest p-values in $R$ and of the $b$ largest p-values in $[m] \setminus R$. This takes $O(m^2)$ evaluations of $e_S$. Finding, e.g., the largest set $[r]$ that is rejected by the method then takes $O(m^3)$ such evaluations.

For eBH+ we can do the calculations even faster. We will illustrate this for sets $R$ of the form $[r]$. Define
$$f_k = \sum_{i=1}^{k} e_i.$$
Then we have $\alpha e_S \ge |R \cap S|/|R|$ for all $S \in \mathcal{M}$ if and only if, for all $0 \le a < r$ and $r \le b \le m$,
$$g(a, r, b) = f_m - f_b + f_r - f_a - \frac{(m - b + r - a)(r - a)}{r\alpha} \ge 0.$$
It is easily checked that $g(a, r, b)$ is convex in $a$. Therefore, for fixed $r$ and $b$, the minimum of $g$ can be found in $O(\log m)$ time. We can therefore check whether $R \in \mathcal{R}_\alpha(E)$ in $O(m \log m)$ time, and find the largest such $R$ in $O(m^2 \log m)$ time by trying all values of $r$ from $m$ downward. In practice, if $r$ is substantially larger than the size of the largest rejected $R$, one quickly finds $a, b$ for which $g(a, r, b) < 0$, so that computation time for finding the largest rejected $R$ is often closer to $m \log m$.

Checking whether $R \in \mathcal{R}_\alpha(E)$ for sets $R$ not of the form
$[r]$ can also be done in $O(m \log m)$ time following essentially the same reasoning as for sets of the form $[r]$, adapting the definition of $g$ as appropriate.

[Figure: Power of eBH and eBH+ as a function of signal strength $A$, under positive dependence (left panel) and negative dependence (right panel).]

11 Numerical illustrations

We show the power improvement of eBH+ in comparison with eBH by depicting the size of the largest rejected set of eBH+ in the figures below in a simple z-testing problem with $\alpha = 0.05$, taking inspiration from Lee and Ren (2024). We take $m = 8$ and the set of non-nulls such that $\mu = (A, \ldots, A, 0, \ldots, 0) \in \mathbb{R}^8$, where the number of true null hypotheses is $\pi_0 m$, and where $A$ is the mean of the non-null hypotheses. The covariance matrix is $\Sigma_{ij} = 0.8^{|i-j|}$ for any $i, j \in [m]$ for the positively dependent data, and $\Sigma_{ij} = -0.8/(m-1)$ for $i \ne j$ with ones on the diagonal for the negatively dependent data. We take $\pi_0 = 0.25$ and vary $A \in \{0.125, 0.25, 0.375, 0.5\}$. After generating $100m$ data points $Z \sim N(\mu, \Sigma)$, we produce e-values by $e_j(Z_j) = \exp(a_j Z_j - a_j^2/2)$, where we choose $a_j$ to be equal to the oracle value $A$ for any $j \in [m]$, to compare the best power for both eBH and eBH+. Lee and Ren (2024) did experiments for different values and found that when $\mu$ is learned via a hold-out set, the performance is quite similar to the optimal value. Since we compare two methods based on the same e-values, there is no harm in choosing $a$ as we like. We average over 1000 simulations. Additional experiments with $\pi_0 \in \{0.5, 0.75\}$ and under independence of the e-values showed the same trends. The FDP is (far) below 0.05 in all simulations.^2

12 Discussion

The e-Partitioning Principle resolves the imparity that existed between FWER and FDP tail probability methods on one side and FDR control methods on the other. It is the direct translation of the Closure Principle of these methods to the FDR context.
Therefore, it brings many of the boons of the Closure Principle into the realm of FDR.

^2 R code for reproducing these figures can be found at https://github.com/RianneDeHeide/ePartitioningPrinciple.

The e-Partitioning Principle can be used to propose new methods and to possibly improve existing ones. In this paper we have mostly focused on improving existing methods, because this was best suited to showcase the power of the principle. However, we think its greatest strength is actually in developing novel methods. To develop an FDR control method, a researcher only needs to decide how to aggregate the evidence against a partitioning or intersection hypothesis into an e-value. After making that choice, the only remaining problem to be solved is a computational one. Aggregating such evidence may often be naturally done in terms of p-values. In that case, it may seem an attractive option to convert such p-values to e-values and apply eBH+, but we have seen for BY+ and Su+ that it is better to aggregate p-values directly to compound e-values for partitioning hypotheses. When constructing compound e-values, assumptions on the joint distribution and on logical relationships between variables may be profitably used. Aside from possibly improving methods,
the e-Partitioning Principle brings post hoc flexibility to FDR control on a scale that was previously only known in FDP control. Rather than only a single rejection set, researchers have many rejection sets to choose from, and they may use all the data to decide which one to report, while still retaining FDR control. If the signal is very strong, an attractive option is to switch to FWER control post hoc, or, for methods in which the e-values do not depend on $\alpha$, to adjust the target FDR level to match the amount of signal in the data.

There are several ways in which this work may be extended. We assumed a finite number of hypotheses. We think that there are no substantial problems in extending the e-Partitioning Principle to infinite testing problems, but we leave this to future work. This also holds for the online setting. Similarly, it remains to be investigated if and when stochastic rounding is beneficial in combination with the eBH+ procedure and other procedures generated using e-Partitioning.

There are many more FDR control methods than we have considered in this paper, but we believe that many of them can be improved using e-Partitioning. It will be a challenge to find polynomial time algorithms for some such improvements, but the literature on advanced shortcuts in closed testing may help in this case. For example, we believe that the branch and bound algorithm may be as useful in e-Partitioning as it is in closed testing (Vesely et al., 2023).

Finally, an important open problem is the question whether and how BH, by far the most popular FDR control method, can be uniformly improved using the e-Partitioning Principle, or at least generalized to allow rejection of more than one single set. The fact that BH is known to be inadmissible (Solari and Goeman, 2017) could be seen as an indication that such an improvement is possible, but so far we have not found a suitable suite of e-values that allows this.
Declaration of funding

Rianne de Heide's work was supported by NWO Veni grant number VI.Veni.222.018.

References

Benjamini, Y. and Y. Hochberg (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological) 57(1), 289–300.

Benjamini, Y., A. M. Krieger, and D. Yekutieli (2006). Adaptive linear step-up procedures that control the false discovery rate. Biometrika 93(3), 491–507.

Benjamini, Y. and D. Yekutieli (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 1165–1188.

Blanchard, G. and E. Roquain (2008). Two simple sufficient conditions for FDR control. Electronic Journal of Statistics 2, 963–992.

Blanchard, G. and E. Roquain (2009). Adaptive false discovery rate control under independence and dependence. Journal of Machine Learning Research 10(12).

Dobriban, E. (2020). Fast closed testing for exchangeable local tests. Biometrika 107(3), 761–768.

Ebrahimpoor, M. and J. J. Goeman
(2021). Inflated false discovery rate due to volcano plots: problem and solutions. Briefings in Bioinformatics 22(5), bbab053.

Ebrahimpoor, M., P. Spitali, K. Hettne, R. Tsonaka, and J. Goeman (2020). Simultaneous enrichment analysis of all possible gene-sets: unifying self-contained and competitive methods. Briefings in Bioinformatics 21(4).

Enjalbert-Courrech, N. and P. Neuvial (2022). Powerful and interpretable control of false discoveries in two-group differential expression studies. Bioinformatics 38(23), 5214–5221.

Finner, H. and M. Roters (2001). On the false discovery rate and expected type I errors. Biometrical Journal 43(8), 985–1005.

Finner, H. and K. Strassburger (2002). The partitioning principle: a powerful tool in multiple decision theory. The Annals of Statistics 30(4), 1194–1213.

Genovese, C. R. and L. Wasserman (2006). Exceedance control of the false discovery proportion. Journal of the American Statistical Association 101(476), 1408–1417.

Goeman, J. J., J. Hemerik, and A. Solari (2021). Only closed testing procedures are admissible for controlling false discovery proportions. The Annals of Statistics 49(2), 1218–1238.

Goeman, J. J. and A. Solari (2011). Multiple testing for exploratory research. Statistical Science 26(4), 584–597.

Grünwald, P., R. de Heide, and W. M. Koolen (2024). Safe testing. Journal of the Royal Statistical Society Series B: Statistical Methodology 86, 1163–1171.

Grünwald, P. D. (2024). Beyond Neyman–Pearson: E-values enable hypothesis testing with a data-driven alpha. Proceedings of the National Academy of Sciences 121(39), e2302098121.

Ignatiadis, N., R. Wang, and A. Ramdas (2024a). Asymptotic and compound e-values: multiple testing and empirical Bayes. arXiv preprint arXiv:2409.19812.

Ignatiadis, N., R. Wang, and A. Ramdas (2024b). E-values as unnormalized weights in multiple testing. Biometrika 111(2), 417–439.

Katsevich, E. and A. Ramdas (2020).
Simultaneous high-probability bounds on the false discovery proportion in structured, regression and online settings. The Annals of Statistics 48(6), 3465–3487.

Katsevich, E., C. Sabatti, and M. Bogomolov (2023). Filtering the rejection set while preserving false discovery rate control. Journal of the American Statistical Association 118(541), 165–176.

Koning, N. W. (2023). Post-hoc α hypothesis testing and the post-hoc p-value. arXiv preprint arXiv:2312.08040.

Lee, J. and Z. Ren (2024). Boosting e-BH via conditional calibration. arXiv preprint arXiv:2404.17562.

Lei, L. and W. Fithian (2018). AdaPT: an interactive procedure for multiple testing with side information. Journal of the Royal Statistical Society Series B: Statistical Methodology 80(4), 649–679.

Lei, L., A. Ramdas, and W. Fithian (2021). A general interactive framework for false discovery rate control under structural constraints. Biometrika 108(2), 253–267.

Marcus, R., E. Peritz, and K. R. Gabriel (1976). On closed testing procedures with special reference to ordered analysis of variance. Biometrika 63(3), 655–660.

Ramdas, A., P. Grünwald, V. Vovk, and G. Shafer (2023). Game-theoretic statistics and safe anytime-valid inference. Statistical Science 38(4), 576–601.

Ramdas, A. and R. Wang (2025). Hypothesis testing with e-values. arXiv preprint arXiv:2410.23614v3.

Ren, Z. and R. F. Barber (2024). Derandomised knockoffs: leveraging e-values for false discovery rate control. Journal of the
Royal Statistical Society Series B: Statistical Methodology 86(1), 122–154.

Rosenblatt, J. D., L. Finos, W. D. Weeda, A. Solari, and J. J. Goeman (2018). All-resolutions inference for brain imaging. NeuroImage 181, 786–796.

Shafer, G. (2021). Testing by betting: A strategy for statistical and scientific communication. Journal of the Royal Statistical Society Series A: Statistics in Society 184(2), 407–431.

Shafer, G., A. Shen, N. Vereshchagin, and V. Vovk (2011). Test martingales, Bayes factors and p-values.

Shaffer, J. P. (1986). Modified sequentially rejective multiple test procedures. Journal of the American Statistical Association 81(395), 826–831.

Simes, R. J. (1986). An improved Bonferroni procedure for multiple tests of significance. Biometrika 73(3), 751–754.

Solari, A. and J. J. Goeman (2017). Minimally adaptive BH: A tiny but uniform improvement of the procedure of Benjamini and Hochberg. Biometrical Journal 59(4), 776–780.

Sonnemann, E. (1982). Allgemeine Lösungen multipler Testprobleme. Universität Bern, Institut für Mathematische Statistik und Versicherungslehre.

Sonnemann, E. (2008). General solutions to multiple testing problems. Biometrical Journal 50(5), 641–656.

Storey, J. D., J. E. Taylor, and D. Siegmund (2004). Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach. Journal of the Royal Statistical Society Series B: Statistical Methodology 66(1), 187–205.

Su, W. J. (2018). The FDR-linking theorem. arXiv preprint arXiv:1812.08965.

Vesely, A., L. Finos, and J. J. Goeman (2023). Permutation-based true discovery guarantee by sum tests. Journal of the Royal Statistical Society Series B: Statistical Methodology 85(3), 664–683.

Vovk, V., B. Wang, and R. Wang (2022). Admissible ways of merging p-values under arbitrary dependence. The Annals of Statistics 50(1), 351–375.

Vovk, V. and R. Wang (2021).
E-values: Calibration, combination and applications. The Annals of Statistics 49(3), 1736–1754.

Vovk, V. and R. Wang (2024). Merging sequential e-values via martingales. Electronic Journal of Statistics 18(1), 1185–1205.

Wang, R. (2025). The only admissible way of merging arbitrary e-values. Biometrika, asaf020.

Wang, R. and A. Ramdas (2022). False discovery rate control with e-values. Journal of the Royal Statistical Society Series B: Statistical Methodology 84(3), 822–852.

Xu, Z. and A. Ramdas (2023). More powerful multiple testing under dependence via randomization. arXiv preprint arXiv:2305.11126.

Xu, Z., R. Wang, and A. Ramdas (2024). Post-selection inference for e-value based confidence intervals. Electronic Journal of Statistics 18(1), 2292–2338.
Bayesian Parameter Identification in the Landau-de Gennes Theory for Nematic Liquid Crystals

Heiko Gimperlein§ Ruma R. Maity¶ Apala Majumdar‖ Michael Oberguggenberger∗∗

Abstract

This manuscript establishes a pathway to reconstruct material parameters from measurements within the Landau-de Gennes model for nematic liquid crystals. We present a Bayesian approach to this inverse problem and analyse its properties using given, simulated data for benchmark problems of a planar bistable nematic device. In particular, we discuss the accuracy of the Markov chain Monte Carlo approximations, confidence intervals and the limits of identifiability.

1 Introduction

Nematic liquid crystals (NLCs) are classical mesophases that are more ordered than conventional liquids and less ordered than conventional solids [10]. The constituent asymmetric NLC molecules move freely but tend to align along locally preferred directions, referred to as nematic directors. Consequently, NLCs exhibit direction-dependent responses to external stimuli, e.g., light, electric fields, temperature, and this directionality and material softness make them excellent candidate working materials for electro-optic devices, photonics, elastomers and biomimetic materials [22].

NLC applications crucially depend on a comprehensive understanding of the following question: given a specified NLC system, can we mathematically predict and control the experimentally observable NLC configurations? There are multiple mathematical theories for NLCs, that describe both the equilibrium and non-equilibrium properties of NLCs in confinement, across different length and time-scales.
For example, there are molecular models that incorporate information about molecular shapes, sizes and intermolecular interactions; there are mean-field models that average the intermolecular interactions in terms of a mean field acting on the molecules and retain some molecular-level information; there are macroscopic continuum theories which describe the NLC state in terms of a macroscopic order parameter with no explicit connections to the underlying molecular-level or microscopic details [1]. We focus on the powerful continuum Landau-de Gennes (LdG) theory for NLCs in this paper, which is a phenomenological variational theory that has enjoyed tremendous success in the modelling and applications of NLCs across disciplines [27]. The LdG theory describes the NLC state in terms of a macroscopic order parameter, the Q-tensor order parameter, which contains information on the nematic directors and the degree of orientational ordering about the directors [27]. As with many variational theories in materials science, there is a LdG free energy which is a measure of the free energy of a NLC system, and the modelling hypothesis is that physically observable equilibrium NLC configurations are mathematically modelled by LdG energy minimisers, subject to the imposed boundary conditions. The energy minimisers, and indeed all critical points of the LdG free energy, are classical solutions of the associated Euler-Lagrange equations, which are systems of nonlinear partial differential equations (PDE) with (typically) multiple solutions [28]. Therefore, the equilibrium properties of NLC systems or NLC equilibria are studied in

§Engineering Mathematics, University of Innsbruck, Innsbruck, Austria. Email: heiko.gimperlein@uibk.ac.at
¶Engineering Mathematics, University of Innsbruck, Innsbruck, Austria. Email: ruma.maity@uibk.ac.at
‖Department of Mathematics and Statistics, University of Strathclyde, Glasgow, United Kingdom. Email: apala.majumdar@strath.ac.uk
∗∗Engineering Mathematics, University of Innsbruck, Innsbruck, Austria. Email:
michael.oberguggenberger@uibk.ac.at

arXiv:2504.16029v1 [math.NA] 22 Apr 2025

terms of boundary-value problems for the Euler-Lagrange equations of the LdG free energy, which is a complex PDE problem in its own right. The LdG free energy, and consequently, the Euler-Lagrange equations, have multiple phenomenological parameters related to the temperature, material properties, elastic constants, chirality etc. There are inherent uncertainties in these phenomenological parameters, the experimental system, in the mathematical model and the numerical methods. We need uncertainty quantification (UQ) to (i) estimate the uncertainties in the mathematical predictions and control the errors and (ii) understand the limitations of the models themselves. One can approach UQ in at least two different ways: forward UQ, which focuses on how uncertainties in the LdG model inputs propagate to the model outputs, e.g., the predicted values of the equilibrium LdG Q-tensor order parameter, so that we can compute error bounds for the LdG model outputs and assess their accuracy/reliability. Inverse UQ focuses on the question – given a LdG Q-tensor or a distribution for the same, perhaps constructed from experimental data,
https://arxiv.org/abs/2504.16029v1
can we estimate the unknown/uncertain LdG model inputs for benchmark problems? We focus on inverse UQ problems in the LdG theory in this manuscript.

Our goal is to establish an algorithmic pathway from optical measurements and experimentally measured dielectric data to experimental measurements of the Q-tensor order parameter (this is already known in the literature), interpret the experimental measurements of the Q-tensor order parameter as an equilibrium measurement of the LdG Q-tensor order parameter, and then use inverse UQ to reconstruct the LdG model inputs from the observed values/given measurements of the LdG Q-tensor order parameter. We focus on solving this inverse problem using statistical tools – Bayes' theorem [8, 18, 42] and Markov chain Monte Carlo sampling [13, 36, 40]. Our contributions are summarized as follows.

•A comprehensive formulation of the inverse problems that can estimate the LdG model inputs from appropriate NLC experimental measurements.

•Bayesian inference to reconstruct LdG model inputs from given data on the observed values of the LdG Q-tensor order parameter, using Markov chain Monte Carlo (MCMC) methods.

•Computation of posterior distributions of the LdG model inputs, given data and distributions for the LdG Q-tensor order parameters for a benchmark example, accompanied by computations of Bayesian estimators (posterior mean, posterior median and corresponding confidence intervals for the parameters of interest).
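The Bayesian reconstruction workflow summarized above can be sketched with a minimal random-walk Metropolis-Hastings sampler. The sketch below is purely illustrative and not the paper's code: the LdG forward map (a finite element solve for the Q-tensor) is replaced by a hypothetical closed-form stand-in `forward`, with a uniform prior on a box and a Gaussian likelihood, as in the MCMC setup described in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(theta):
    """Stand-in for the PDE solution operator (alpha, beta) -> Q_obs.
    A simple injective linear map, so the toy posterior is unimodal."""
    a, b = theta
    return np.array([a + 0.5 * b, a - b])

theta_true = np.array([2.0, 1.0])
sigma = 0.05
q_obs = forward(theta_true) + sigma * rng.normal(size=2)  # synthetic data

def log_post(theta):
    if np.any(theta < 0.0) or np.any(theta > 5.0):  # uniform prior on [0, 5]^2
        return -np.inf
    r = q_obs - forward(theta)
    return -0.5 * np.dot(r, r) / sigma**2           # Gaussian log-likelihood

# Random-walk Metropolis-Hastings
theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for it in range(20000):
    prop = theta + 0.05 * rng.normal(size=2)        # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5000:])                     # discard burn-in
print(post.mean(axis=0))                            # posterior mean, near (2, 1)
```

In the paper's setting the expensive part is that each evaluation of the likelihood requires a forward PDE solve, which is what makes the accuracy and cost of the MCMC approximation worth analysing.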
There are existing well established methods for measuring the phenomenological parameters in the LdG free energy or the LdG model inputs. For example, one can use the nematic-isotropic transition temperature to estimate some bulk LdG constants; researchers use the classical Freedericksz transition or classical NLC instabilities, on exposure to external electric fields, to measure NLC elastic constants – see [20], wherein this method has been applied to measure the elastic constants of the common NLC materials 6CHBT and E7; a dielectric and an optical method has been used in [12] to measure the splay and bend elastic constants for the Merck nematic mixture E49; an optical method is used in [2] for measuring the twist elastic constant along with the splay and bend elastic constants of commonly used NLCs: 5CB, 6CHBT, and E7. Recently, a neural networks-based method has been explored in [47] to determine NLC elastic constants based on combined modelling of NLC dynamics, light transmission, and supervised machine learning. Other relevant works on Bayesian inference for inverse problems include identification/reconstruction of the optical parameters in the Ginzburg-Landau equation, which is a celebrated fundamental model for dissipative optical solitons [48].

We focus on a benchmark example motivated by the planar bistable nematic device reported in [45]. The experimental domain is a shallow three-dimensional square well with tangent boundary conditions on the well surfaces, that constrain the NLC molecules to lie on the well surfaces. This device has been experimentally manufactured and is known to support multiple competing NLC equilibria (referred to as diagonal and rotated solutions), without any external electric fields, if the square well is sufficiently large in size. This device has been modelled in [45] and [24], in the LdG framework.
In both cases, the authors neglect the height of the square well (since the height is typically much smaller than the cross-sectional dimensions) and take the computational domain to be a square domain, subject to tangent boundary conditions for the LdG Q-tensor order parameter on the square edges. They work in a reduced two-dimensional LdG framework (to be described in Subsection 2.3), so that the governing Euler-Lagrange equations are a system of two nonlinear PDEs, with two model parameters (denoted by 𝛼 and 𝛽). The model parameters contain information about the square edge length, the temperature and the NLC material properties. The authors of [45] and [24] work in a deterministic framework, assume 𝛼 and 𝛽 are known, and recover the diagonal and rotated solutions for appropriate choices of 𝛼 and 𝛽 in the reduced LdG framework. Of course, they also study additional aspects such as how multistability (co-existence of multiple stable diagonal and rotated solutions) depends on the square edge length and the boundary conditions, but there is no uncertainty
https://arxiv.org/abs/2504.16029v1
in their work. In [7], the authors study the solution landscape of the LdG Euler-Lagrange equations for this benchmark problem, with additive and multiplicative noise, whereby the noise captures uncertainties in a holistic sense. The authors conclude that the deterministic LdG predictions are fairly robust, notwithstanding the addition of stochastic noise terms, for physically relevant values of 𝛼 and 𝛽.

In real life, 𝛼 and 𝛽 are almost certainly not known exactly. In this paper, we focus on reconstructing 𝛼 and 𝛽, given diagonal and rotated solutions for this benchmark problem on a square domain or, equivalently, given Q-tensor solutions of the reduced LdG Euler-Lagrange equations for this benchmark problem. We treat the diagonal and rotated solutions as given observed solutions, Qobs, which are exactly known. We also assume that the square edge length and the temperature are exactly known. We assume prior distributions for 𝛼 and 𝛽, for which there is no general consensus. Hence, we experiment with two different prior distributions for 𝛼 and 𝛽: a uniform or diffusive prior, and a suitably constructed Gaussian prior distribution. We use Bayesian methodology to construct the posterior distribution for 𝛼 and 𝛽 (the probability distribution of 𝛼 and 𝛽, given Qobs), from a prior distribution for the LdG model inputs and a suitably constructed likelihood function (the probability distribution of Qobs, given the LdG model inputs). The posterior distributions are computed using MCMC methods, implemented by a Metropolis-Hastings algorithm. Our methods work for several physically relevant values of 𝛼 and 𝛽, as illustrated by computations of Bayesian estimators and confidence intervals for the estimators. We also identify situations for which our methods do not work and offer some heuristic insights into the possible reasons for non-identifiability. We believe that our work is a good first step towards incorporating UQ into the LdG theoretical framework, which can substantially improve modelling capabilities.
The liquid crystal research community needs multiple reliable and efficient methods for estimating material properties, for accurate and powerful mathematical modelling. For example, one could partially use our methods to compute 𝛼 and 𝛽 (with error estimators) from a set of experimentally derived NLC director patterns, for a given NLC material. Once 𝛼 and 𝛽 are reliably known for a NLC material, researchers could numerically canvass complex solution landscapes of the LdG Euler-Lagrange equations on complex geometries and accurately predict equilibrium NLC properties or, reciprocally, design new NLC systems that yield desired equilibrium properties. We do not argue that inverse UQ will overtake conventional methods for estimating NLC material properties, but it provides a viable alternative, and inverse UQ also has great potential for machine learning approaches to liquid crystal research.

This paper is organized as follows. In Section 2, we outline the pathway from optical and dielectric measurements to estimates of the LdG model inputs, in a reduced two-dimensional setting. In Section 3, we outline our Bayesian methodology. In Section 4, we describe our finite element framework for the benchmark example motivated by the planar bistable nematic device reported in [45]. In Sections 5 and 6, we present several numerical experiments on the reconstruction of posterior distributions of 𝛼 and 𝛽, given Qobs and information about the temperature and square edge length. In Section 7, we discuss non-identifiability, i.e., situations wherein our methods do not converge, which might necessitate further study. We conclude with some perspectives in Section 8.

2 An inverse problem in the Landau-de Gennes framework

In this section, we outline the main steps involved in determining the nematic liquid crystal (NLC) material parameters from experimental measurements, e.g., optical measurements. The key experimental quantity is the dielectric anisotropy, 𝜀, which is a measure of the NLC response to external electric fields.
Figure 1: Experimental setup used at Hewlett Packard Laboratories, Bristol, as reported in [23].

The primary mathematical variable is the Landau-de Gennes (LdG) Q-tensor order parameter, which is a symmetric, traceless 3×3 matrix that encodes information about the nematic directors and the order parameters. The LdG Q-tensor order parameter can be empirically related to 𝜀, providing a link between experiments and theory. Equally, the LdG Q-tensor order parameter depends phenomenologically on various NLC material parameters and the temperature, as dictated by the LdG modelling framework. Hence, we can, in principle, compute the NLC material parameters from experimental measurements of 𝜀 in the following three steps:
1. inferring the dielectric parameters embedded in 𝜀 from optical measurements;
2. determining the LdG Q-tensor from 𝜀;
3. using the LdG framework to determine the NLC material parameters from measurements/computations of the equilibrium values of the LdG Q-tensor order parameter.

This paper is primarily devoted to the last step in the above sequence, but we briefly describe the first two steps in the remainder of this section.

2.1 From optical measurements to 𝜀

Lionheart and Newton [23] describe a typical experimental set-up in three dimensions (3D). As illustrated in Figure 1, an NLC sample is illuminated by a monochromatic, polarized laser beam. The normalized Stokes parameters of the transmitted light are measured as functions of the incident angle and polarization. The measured data is then compared to the results of a forward model for the light transmission in the sample. The key model parameter is the dielectric anisotropy tensor, 𝜀, which is fitted to the data.

We briefly review the forward model for light transmission from Section 3 in [23]. We assume that the direction of the laser beam is along the 𝑧-axis, and its polarization defines a vector in the 𝑥𝑦-plane, perpendicular to the direction of the beam. The monochromatic electric field E [37] is of the form E = (E𝑥, E𝑦) with E𝑥 := 𝑎 cos(𝜔𝑡 − 𝑘𝑧) and E𝑦 := 𝑏 cos(𝜔𝑡 − 𝑘𝑧 + 𝛿). Here 𝜔 is the angular frequency, 𝑘 is the wave number, and 𝛿 is the phase difference between the two components. The experimental measurements yield values for the Stokes parameters 𝑆 = (𝑆0, 𝑆1, 𝑆2, 𝑆3) of the electric field, defined by

𝑆0 := 𝑎² + 𝑏²,  𝑆1 := 𝑎² − 𝑏²,  𝑆2 := 2𝑎𝑏 cos 𝛿,  𝑆3 := 2𝑎𝑏 sin 𝛿.

These parameters characterize the state of the polarization, which solely depends on the amplitudes 𝑎, 𝑏 and the phase difference 𝛿. The next step is to consider the interaction between the NLC and the propagating electric and magnetic fields, E and H. In a NLC medium, the Berreman field vector 𝑋 := (E𝑥, H𝑦, E𝑦, −H𝑥)ᵀ evolves according to the linear differential equation

∂𝑋/∂𝑧 = −(𝑖𝜔/𝑐) 𝑀𝑋.
(2.1)

Here, the Berreman matrix 𝑀 depends on the dielectric tensor 𝜀 = (𝜀𝑖𝑗) as shown below [4]:

𝑀 = ( −(𝜀13/𝜀33)𝜉             𝜇0𝑐(𝜀33 − 𝜉²)/𝜀33        −(𝜀23/𝜀33)𝜉              0
      𝜀0𝑐(𝜀11 − 𝜀13²/𝜀33)     −(𝜀13/𝜀33)𝜉             𝜀0𝑐(𝜀12 − 𝜀13𝜀23/𝜀33)    0
      0                       0                        0                        𝜇0𝑐
      𝜀0𝑐(𝜀12 − 𝜀13𝜀23/𝜀33)   −(𝜀23/𝜀33)𝜉             𝜀0𝑐(𝜀22 − 𝜀23²/𝜀33 − 𝜉²)  0 ).  (2.2)

The permeability of the surrounding medium (typically air) is denoted by 𝜇0, 𝑐 is the speed of light, and 𝜉 is determined by the incident angle and the refractive index of the surrounding medium. Moreover, one can show that 𝜀 is uniquely determined by the matrix 𝑀.

The forward problem can thus be written as 𝑆 = 𝐹(𝜀, 𝜇0, 𝑐, 𝜉), with 𝑆 the vector of Stokes parameters. The value of 𝑆 can be computed from the forward model, as a function of (𝜀, 𝜇0, 𝑐, 𝜉), or measured experimentally. The inverse problem is to compute 𝜀 from measurements of 𝑆 for various 𝜉, given 𝜇0 and 𝑐, as studied in [23]. In the remainder of the manuscript, we assume that 𝜀 has been determined from measurements of 𝑆.

2.2 From 𝜀 to the LdG Q-tensor order parameter

The dielectric anisotropy tensor, 𝜀, is a measure of the anisotropic NLC response to external electric fields [10]. The tensor 𝜀 has two key components: 𝜀∥, which measures the dielectric response along the nematic (NLC) director (the distinguished direction of averaged molecular alignment), and 𝜀⊥, which measures the dielectric response in all directions orthogonal to the director [10]. If 𝜀∥ − 𝜀⊥ > 0, the NLC molecules reorient to align with the applied external electric field, and if 𝜀∥ − 𝜀⊥ < 0, the NLC molecules reorient to be orthogonal to the applied external electric field.
The dielectric anisotropy tensor, 𝜀, can be measured in laboratory studies [31] (as described in Subsection 2.1) and is often used as an implicit phenomenological definition of the Landau-de Gennes (LdG) Q-tensor order parameter, i.e., 𝜀 and Q are related to each other as shown below [44]:

𝜀𝑖𝑗 = (1/3)(𝜀∥ + 2𝜀⊥)𝛿𝑖𝑗 + (𝜀∥ − 𝜀⊥)𝑄𝑖𝑗 = (1/3) tr(𝜀) 𝛿𝑖𝑗 + (𝜀∥ − 𝜀⊥)𝑄𝑖𝑗,  𝑖, 𝑗 = 1 . . . 3,  (2.3)

where 𝜀⊥ and 𝜀∥ are the dielectric permittivities perpendicular and parallel to the director, respectively, measured in units of the vacuum permittivity 𝜀0. In (2.3), there is an implicit assumption that the NLC sample under consideration is
uniaxial, i.e., the NLC sample has a single distinguished director, so that all directions perpendicular to the NLC director are physically equivalent.

Next, we describe a thought experiment to measure the dielectric permittivities in the matrix equation above. One could take a thin slab-based 3D geometry, wherein the top and bottom surfaces are uniformly rubbed to fix the NLC director in the same direction on the top and bottom surfaces. In the absence of any external constraints, we expect the NLC director to be uniform throughout the domain, as dictated by the boundary treatments. The electric field can be applied in the direction of the boundary condition, or equivalently in the direction of the nematic director for spatially homogeneous uniform samples. One can experimentally measure the dielectric responses parallel and perpendicular to the applied electric field, to measure 𝜀∥ and 𝜀⊥ respectively. Once there are measurements of 𝜀∥ and 𝜀⊥ in (2.3) for a given NLC material, we can experimentally measure the dielectric anisotropy tensor for generic confined NLC systems (using the methodology outlined in Subsection 2.1) and use (2.3) to reconstruct the corresponding Q. To summarise, (2.3) is a matrix equation relating the nine matrix components of the dielectric tensor, 𝜀, and the LdG Q-tensor, where 𝜀 can be experimentally measured and 𝜀∥ and 𝜀⊥ are given from prototype experiments on spatially homogeneous NLC systems.

2.3 From the LdG Q-tensor order parameter to the NLC material parameters

Our main goal is to provide algorithms that can compute the LdG model parameters from given data for the LdG Q-tensor order parameters. The LdG theory is the most powerful continuum theory for NLCs in the literature [10]. The LdG theory describes the NLC state by a macroscopic order parameter, the LdG Q-tensor order parameter, which contains information about the NLC directors and the degree of orientational ordering about them [29].
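As a small numerical illustration, relation (2.3) and its inversion can be checked for a uniaxial Q-tensor; the permittivity values and the director below are illustrative choices, not measured data.

```python
import math

# Hypothetical permittivities (in units of the vacuum permittivity) and
# a uniaxial LdG Q-tensor Q = s (n⊗n − I/3); all values are illustrative.
eps_par, eps_perp = 19.0, 6.0
s = 0.6
n = [1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]   # unit director

def delta(i, j):
    return 1.0 if i == j else 0.0

Q = [[s * (n[i] * n[j] - delta(i, j) / 3.0) for j in range(3)] for i in range(3)]

# Relation (2.3): eps_ij = (1/3)(eps_par + 2 eps_perp) d_ij + (eps_par - eps_perp) Q_ij
eps = [[(eps_par + 2 * eps_perp) / 3.0 * delta(i, j) + (eps_par - eps_perp) * Q[i][j]
        for j in range(3)] for i in range(3)]

# Invert (2.3): Q_ij = (eps_ij - (1/3) tr(eps) d_ij) / (eps_par - eps_perp),
# using tr(eps) = eps_par + 2 eps_perp since tr(Q) = 0.
tr_eps = sum(eps[i][i] for i in range(3))
Q_rec = [[(eps[i][j] - tr_eps / 3.0 * delta(i, j)) / (eps_par - eps_perp)
          for j in range(3)] for i in range(3)]

ok = all(abs(Q_rec[i][j] - Q[i][j]) < 1e-12 for i in range(3) for j in range(3))
print(ok)  # True: Q is recovered exactly from eps
```

The round trip confirms that, once 𝜀∥ and 𝜀⊥ are known, (2.3) is invertible and Q can be reconstructed from a measured 𝜀.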
The LdG Q-tensor is a symmetric, traceless 3×3 matrix whose eigenvectors model the NLC directors and whose eigenvalues measure the degree of orientational ordering about the directors. More precisely, the admissible LdG Q-tensors belong to the space

𝑆₀ⁿ := {P = (𝑃𝑖𝑗)𝑛×𝑛 | 𝑃𝑖𝑗 = 𝑃𝑗𝑖, tr P = 0}

for 𝑛 = 2, 3. The NLC phases can be categorised as follows, depending on the eigenvalue structure of the Q-tensor: (i) Q = 0 for isotropic (disordered) NLC phases with no defined special material directions; (ii) uniaxial, if Q has two degenerate non-zero eigenvalues, and the NLC director is the eigenvector corresponding to the non-degenerate eigenvalue; and (iii) biaxial, if Q has three distinct eigenvalues 𝜆1 > 𝜆2 > 𝜆3, and there are two NLC directors corresponding to the two largest eigenvalues. For uniaxial NLC phases, the Q-tensor can be written as

Q := 𝑠(n ⊗ n − I/3),

where I is the 3×3 identity matrix and n is the leading eigenvector, with the non-degenerate eigenvalue, that models the single distinguished direction of averaged molecular alignment at every point in space. The scalar order parameter, 𝑠, measures the degree of orientational order about n. The defect set is often identified with the nodal set of 𝑠, wherein the NLC ordering breaks down.

In the absence of surface energies and external fields, we work with the simplest form of the 3D LdG energy [28]:

E(Q) := ∫ [ (𝐿/2)|∇Q|² + 𝑓𝐵(Q) ] d𝑥,  where  𝑓𝐵(Q) := (𝐴/2) tr Q² − (𝐵/3) tr Q³ + (𝐶/4)(tr Q²)².  (2.4)

Here, 𝐿 > 0 is a material-dependent elastic constant, the Dirichlet elastic energy density penalises spatial inhomogeneities, and 𝑓𝐵(Q) is the bulk potential [27]. The variable 𝐴 := 𝛼0(𝑇 − 𝑇∗) is a rescaled temperature, where 𝛼0, 𝐵, 𝐶 > 0 are material-dependent constants [10], and 𝑇∗ is the characteristic nematic supercooling temperature. The critical points and minimisers of the bulk potential can be computed explicitly in terms of an algebraic problem [27]. The critical points of 𝑓𝐵 are either uniaxial Q-tensors or the isotropic phase, Q = 0. The rescaled temperature 𝐴 has three characteristic values: 1. 𝐴 = 0, below which the isotropic phase Q = 0 loses stability; 2.
the nematic-isotropic transition temperature, 𝐴 = 𝐵²/(27𝐶), at which 𝑓𝐵 is minimised by the isotropic phase and by a continuum of uniaxial states with 𝑠 = 𝑠₊ = (𝐵 + √(𝐵² − 24𝐴𝐶))/(4𝐶) and n arbitrary; and 3. the nematic superheating temperature, 𝐴 = 𝐵²/(24𝐶), above which the isotropic state is the unique critical point of 𝑓𝐵.

There are some reported values of the bulk constants [33] for the canonical NLC material, MBBA: 𝐵 = 0.64 × 10⁶ J/m³, 𝐶 = 0.35 × 10⁶ J/m³, 𝛼0 = 0.042 × 10⁶ J m⁻³ K⁻¹, 𝑇∗ = 45 °C, and 𝑇𝑐 = 46 °C, where 𝑇𝑐 is the nematic-isotropic transition temperature. Typical values of 𝐿 are around 10⁻¹²–10⁻¹¹ J/m [21]. The physically observable configurations are modelled by minimisers of the LdG energy (2.4), subject to the imposed boundary conditions. The energy minimisers, and indeed all critical points of the LdG energy in (2.4), are analytic
solutions of the corresponding Euler-Lagrange equations, which are a system of five nonlinear and coupled elliptic partial differential equations [28]:

𝐿ΔQ = 𝐴Q − 𝐵(QQ − (1/3) tr(Q²) I) + 𝐶 tr(Q²) Q.  (2.5)

For a given non-homogeneous Dirichlet boundary condition Q𝑏 ∈ 𝐻^{1/2}(𝜕Ω; 𝑆₀³) and the admissible set A(Q𝑏) := {Q ∈ 𝐻¹(Ω; 𝑆₀³) | Q = Q𝑏 on 𝜕Ω}, there exists an energy minimiser, i.e., the optimisation problem

min_{Q ∈ A(Q𝑏)} E[Q]

has a solution by the direct methods in the calculus of variations [9]. Indeed, the bulk density 𝑓𝐵(·) satisfies the growth condition [9, Corollary 4.4] 𝑓𝐵(Q) ≥ 𝐶1 + 𝐶2|Q|² for all Q ∈ 𝑆₀³, for some constants 𝐶1 ∈ ℝ and 𝐶2 > 0. The coercivity and convexity of the elastic energy in ∇Q, together with the above-mentioned growth condition on 𝑓𝐵, suffice to guarantee the existence of a global LdG energy minimiser in A(Q𝑏).

We work in a two-dimensional (2D) framework in this paper. Using the gamma-convergence arguments in [15], one can argue that for thin-film geometries (where the height is much smaller than the cross-sectional dimensions) and for certain types of physically relevant boundary conditions, the energy minimisers of (2.4) have a fixed eigenvector with an associated fixed eigenvalue (determined by the temperature and the imposed boundary conditions). In other words, one can define a reduced LdG Q-tensor in 2D scenarios, where

Q = 𝑠(2n ⊗ n − I₂).  (2.6)

Here, Q is independent of 𝑧, the spatial coordinate along the height of the well, n ∈ S¹ is the nematic director in the 𝑥𝑦-plane, 𝑠 is the scalar order parameter which measures the degree of nematic ordering about n, and I₂ is the 2×2 identity matrix. The 2D nematic director can be interpreted as the preferred direction of averaged molecular alignment in the 𝑥𝑦-plane, and the defect set is simply the nodal set of 𝑠, denoted by S. The symmetry and tracelessness of Q imply that there are only two independent components, 𝑄11 and 𝑄12, given by

𝑄11 = 𝑠 cos 2𝜗,  𝑄12 = 𝑠 sin 2𝜗,

where n = (cos 𝜗, sin 𝜗) and 𝜗 denotes the angle between n and the 𝑥-axis.
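The two reduced components determine (𝑠, 𝜗) up to the nematic head-tail symmetry n ≡ −n; a short numerical round trip, with illustrative values of 𝑠 and 𝜗:

```python
import math

# Round trip between the reduced components (Q11, Q12) of (2.6) and the
# scalar order parameter and director angle (s, theta); the values of
# s and theta below are arbitrary illustrations.
s, theta = 0.8, 0.3
Q11 = s * math.cos(2.0 * theta)
Q12 = s * math.sin(2.0 * theta)

# Invert: s = sqrt(Q11^2 + Q12^2) and theta = (1/2) atan2(Q12, Q11),
# which recovers theta modulo pi (the directors n and -n are equivalent).
s_rec = math.hypot(Q11, Q12)
theta_rec = 0.5 * math.atan2(Q12, Q11)

print(abs(s_rec - s) < 1e-12 and abs(theta_rec - theta) < 1e-12)  # True
```

Note that |Q|² = tr Q² = 2(𝑄11² + 𝑄12²) = 2𝑠² for the 2×2 tensor in (2.6), so the nodal set of 𝑠 (the defect set) is exactly where both components vanish.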
The reduced LdG tensor can be related to the full 3D LdG Q-tensor as follows:

Q𝑓 = ( 𝑄11 − 𝑞3     𝑄12          0
       𝑄12          −𝑄11 − 𝑞3    0
       0             0           2𝑞3 ),  (2.7)

where 𝑄11 and 𝑄12 are as above and 𝑞3 is a fixed, known constant determined by the temperature and boundary conditions (coded in terms of surface energies). Note that Q𝑓 is not, in general, an exact solution of (2.5), but a good approximation to stable energy-minimising solutions of (2.5) in the thin-film limit. The correspondence in (2.7) is only valid for 2D scenarios. In [16], the authors show that for the special temperature 𝐴 = −𝐵²/(3𝐶) < 0, there exists a branch of exact 3D solutions of (2.5) of the form (2.7), so that the results in the 2D framework are transferable to fully 3D scenarios. Hence, in the remainder of this manuscript, we focus on this special temperature, for which 𝑠₊ = 𝐵/𝐶, and work in the reduced LdG framework, with the reduced LdG tensor defined in (2.6). The full 3D Q-tensor, Q𝑓, can be reconstructed from the reduced LdG tensor as in (2.7). The corresponding reduced 2D LdG free energy for Q := (𝑄11, 𝑄12), at the temperature 𝐴 = −𝐵²/(3𝐶), is given by

E(Q) := ∫_Ω [ (𝐿/2)|∇Q|² − (𝐵²/(4𝐶)) tr Q² + (𝐶/4)(tr Q²)² ] d𝑥,  (2.8)

where Ω is the 2D Lipschitz domain and the other quantities, 𝐿, 𝐵 and 𝐶, have been defined above. Let 𝜆 be a characteristic length associated with Ω; we can rescale the system using the change of variable 𝑥̃ = 𝑥/𝜆 and drop all tildes in the subsequent rescaled/dimensionless formulation. The associated rescaled Euler-Lagrange equations are [16]

Δ𝑄11 = (2𝐶𝜆²/𝐿)(𝑄11² + 𝑄12² − 𝐵²/(4𝐶²)) 𝑄11,
Δ𝑄12 = (2𝐶𝜆²/𝐿)(𝑄11² + 𝑄12² − 𝐵²/(4𝐶²)) 𝑄12.  (2.9)

Using the aforementioned values of the parameters 𝐵, 𝐶 for MBBA, we estimate 𝐵²/(4𝐶²) = 0.83592. By rearranging the parameters in (2.9), we
obtain the equivalent system of two coupled, nonlinear partial differential equations:

−𝛼ΔQ + (|Q|² − 𝛽)Q = 0 in Ω,  Q = Q𝑏 on 𝜕Ω,  (2.10)

where 𝛼 := 𝐿/(2𝐶𝜆²) and 𝛽 := 𝐵²/(4𝐶²). If the temperature, 𝐴 = −𝐵²/(3𝐶), and the domain length-scale 𝜆 are given, the parameters 𝛼, 𝛽 determine 𝐶 = −3𝐴/(4𝛽), which allows us to compute 𝐵 = √(4𝐶²𝛽) and 𝐿 = 2𝛼𝐶𝜆². Therefore, for a given temperature and domain length, it is sufficient to identify the parameters 𝛼 and 𝛽 to determine the material parameters 𝐿, 𝐵, 𝐶. For 2D polygons, 𝜆 can be the edge length, usually in the range 𝜆 ∈ [10⁻⁸, 10⁻⁶] m, with 𝐶 ∈ [10⁵, 10⁶] J m⁻³ and 𝐿 ∈ [10⁻¹², 10⁻¹¹] J m⁻¹. Hence, it is reasonable to work with 𝛼 ∈ [10⁻⁴, 10⁻²] as a representative range, along with 𝛽 ∈ (0.1, 1). In what follows, we reconstruct 𝛼 and 𝛽 in these physically relevant ranges from given solutions of the reduced LdG Euler-Lagrange equations (2.10) on a square domain, subject to Dirichlet boundary conditions for 𝑄11 and 𝑄12, for 𝐴 = −𝐵²/(3𝐶).

If we work with arbitrary low temperatures 𝐴 < 0 (assumed to be known) in a strictly 2D scenario, then we do not have enough information to compute 𝐵. In such cases, 𝛼 = 𝐿/(2𝜆²𝐶) and 𝛽 = |𝐴|/(2𝐶), and one can reconstruct 𝐶 and 𝐿 from 𝛼 and 𝛽, for given values of 𝐴 and 𝜆.

3 Bayesian methodology

3.1 Statistical inverse problems

Consider the forward problem defined by an input-output map F,

𝑌 = F(𝑋, Θ).

Here 𝑋 is the model input, Θ are the model parameters and 𝑌 is the model output. In the LdG context, 𝑋 is the information about characteristic domain/geometric length scales and the ambient temperature (both assumed to be given), Θ are the parameters 𝛼 or (𝛼, 𝛽), 𝑌 is the bivariate output Q = (𝑄11, 𝑄12), evaluated at the grid points of a shape-regular triangulation, and F is the map (𝛼, 𝛽) → Q defined through equation (2.10). The inverse problem is to determine an estimate of the parameters, Θ̂, that minimises the distance of the corresponding model output F(𝑋, Θ̂) to the observed data 𝑌obs, for given model input 𝑋.
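As a toy illustration of this least-squares viewpoint, the sketch below minimises a misfit over a parameter grid; the forward map `forward` is a hypothetical smooth stand-in, not the LdG solver defined by (2.10).

```python
# Toy illustration of the inverse problem: find Theta minimising the
# misfit between F(X, Theta) and observed data. The forward map below is
# a hypothetical stand-in, not the LdG solver of (2.10).
def forward(theta):
    alpha, beta = theta
    # hypothetical smooth map producing a three-component "output"
    return [alpha + beta, alpha * beta, beta - alpha]

def misfit(theta, y_obs):
    return sum((yi - fi) ** 2 for yi, fi in zip(y_obs, forward(theta)))

theta_true = (0.005, 0.7)            # within the physical ranges above
y_obs = forward(theta_true)          # noise-free synthetic observation

# Brute-force minimisation over a parameter grid; the Bayesian approach
# of Section 3 replaces this point estimate with a posterior distribution.
grid = [(a / 1000.0, b / 100.0) for a in range(1, 11) for b in range(10, 101)]
theta_hat = min(grid, key=lambda th: misfit(th, y_obs))
print(theta_hat)  # (0.005, 0.7)
```

With noisy observations, a single misfit minimiser carries no uncertainty information, which motivates the posterior-based approach that follows.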
We use a Bayesian approach to this inverse problem (as described in detail in [19, Chapter 3]): Θ and 𝑌 are viewed as random variables, 𝑋 is known, and the solution of the inverse problem is the (posterior) probability distribution of Θ. The Bayesian set-up requires (a) the so-called prior distribution 𝜋prior(Θ), which encodes theoretical knowledge about the distribution of Θ, (b) observed data 𝑌obs, and (c) the likelihood function 𝜋(𝑌obs|Θ), that is, the probability distribution of observing 𝑌obs when Θ is given. The so-called posterior distribution 𝜋post(Θ|𝑌obs) is the sought-after distribution of Θ, given the observed data 𝑌obs. The posterior distribution is computed by means of Bayes' theorem as shown below:

𝜋post(Θ) := 𝜋(Θ|𝑌obs) = 𝜋prior(Θ) 𝜋(𝑌obs|Θ) / 𝜋(𝑌obs).  (3.1)

Bayes' theorem is an extension of the well-known formula

𝑃(𝐵|𝐴) = 𝑃(𝐵) 𝑃(𝐴|𝐵) / 𝑃(𝐴)

for the conditional probabilities 𝑃(𝐵|𝐴) of events 𝐴, 𝐵, to the case of random variables.

The probability 𝜋(𝑌obs) in (3.1) is unknown, but can be ignored as it is merely a normalizing constant, which in principle could be computed from the fact that 𝜋post(Θ) is a probability distribution (hence has integral one). The relation in (3.1) is usually formulated as

𝜋post(Θ) ∝ 𝜋prior(Θ) 𝜋(𝑌obs|Θ).  (3.2)

The prior distribution. In the Bayesian approach, prior knowledge about Θ is encoded in the prior distribution 𝜋prior(Θ), e.g., from experimental data or known statistical properties; see the end of Subsection 2.3 for our case. It is common practice to use a so-called improper or diffusive prior, a constant density on a large interval, when there is little knowledge about the prior distribution or when we want to minimise the influence of the prior distribution. In our numerical experiments, we choose (i) the improper prior given by the characteristic function of the interval [0, ∞), noting that the parameters 𝛼 and 𝛽 must be nonnegative, and (ii) a Gaussian prior centred at the value of Θ used for the simulation of the data and truncated at zero. Detailed information and advice on the choice of a prior distribution can be found in [18, Section 6.3].
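Bayes' formula (3.2) can be illustrated with a one-dimensional toy problem in which the normalizing constant is computed by numerical integration; the forward map, prior support, and noise level below are all hypothetical choices.

```python
import math

# One-dimensional illustration of (3.1)-(3.2): a uniform prior on [0, 2],
# a Gaussian likelihood for a single scalar observation, and a toy forward
# map F(theta) = theta^2. All of these choices are hypothetical.
y_obs, sigma = 1.0, 0.2

def prior(t):
    return 0.5 if 0.0 <= t <= 2.0 else 0.0   # uniform density on [0, 2]

def likelihood(t):
    r = y_obs - t * t                        # error E = y_obs - F(theta)
    return math.exp(-0.5 * (r / sigma) ** 2)

# Unnormalized posterior on a grid; the normalizing constant pi(y_obs)
# is recovered by numerical integration (midpoint rule).
N = 2000
h = 2.0 / N
grid = [(i + 0.5) * h for i in range(N)]
unnorm = [prior(t) * likelihood(t) for t in grid]
Z = sum(unnorm) * h
post = [u / Z for u in unnorm]

post_mean = sum(t * p for t, p in zip(grid, post)) * h
print(abs(sum(post) * h - 1.0) < 1e-12)  # True: the posterior integrates to one
```

For a low-dimensional Θ this grid-based normalization is feasible, but each grid point costs one forward-model evaluation, which is why the MCMC sampling of Subsection 3.2 (which needs no normalization) is used instead.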
The distribution of the error. The crucial ingredient for constructing the likelihood function is the error 𝐸, which incorporates uncertainties in the data, the model, and the numerical implementation. The error 𝐸 is considered to be a mean-zero random variable acting as additive noise, and the probability density 𝜋err(𝐸) is assumed to be known. The original problem is modified as follows:

F(𝑋, Θ) + 𝐸 = 𝑌.

For practical implementations, the error term 𝐸 = 𝑌obs − F(𝑋, Θ) is a vector with the same dimension as 𝑌obs. Then, the probability density of the error is taken to be

𝜋err(𝐸) ∝ exp(−(1/2) 𝐸ᵀ Σ𝐸⁻¹ 𝐸),  (3.3)

consistent with the assumption that the error 𝐸 has a Gaussian distribution with zero mean [19, 43, 46]. The
covariance matrix Σ𝐸 is usually based on the empirical covariance matrix of the observed data [39] or, more simply, on the identity matrix multiplied by the empirical variance of 𝑌obs. For multiple observation variables, say a bivariate 𝐸 = (𝐸1, 𝐸2), the errors are combined to give

𝜋err(𝐸) ∝ exp(−(1/2)(𝐸1ᵀ Σ𝐸1⁻¹ 𝐸1 + 𝐸2ᵀ Σ𝐸2⁻¹ 𝐸2)).  (3.4)

Let 𝜎1², 𝜎2² be the empirical variances of the components of 𝑌obs, and let Σ𝐸𝑖 be the diagonal matrix with entries 𝜎𝑖². Then (3.4) reduces to

𝜋err(𝐸) ∝ exp(−(1/2)(∥𝐸1∥²ℓ₂/𝜎1² + ∥𝐸2∥²ℓ₂/𝜎2²)).  (3.5)

The introduction of the different variances 𝜎1² and 𝜎2² guarantees that differences in scale (magnitude) of the individual components of 𝑌obs are levelled out, so that each component contributes equally to the error term.

The likelihood function. The likelihood function is obtained by substituting 𝐸 = 𝑌obs − F(𝑋, Θ) into (3.3) as shown below:

𝜋(𝑌obs|Θ) = 𝜋err(𝑌obs − F(𝑋, Θ)),  (3.6)

or equivalently

𝜋(𝑌obs|Θ) ∝ exp(−(1/2)(𝑌obs − F(𝑋, Θ))ᵀ Σ𝐸⁻¹ (𝑌obs − F(𝑋, Θ))).  (3.7)

The likelihood function is the probability distribution of 𝑌obs when Θ is given, inherited from the assumed probability distribution of the error between the observation 𝑌obs and the model output F(𝑋, Θ).

The posterior distribution. The posterior distribution of Θ is then given by

𝜋post(Θ) ∝ 𝜋prior(Θ) 𝜋err(𝑌obs − F(𝑋, Θ)),  (3.8)

obtained by combining (3.2) with (3.6). From the Bayesian point of view, it constitutes the solution to the statistical inverse problem [19, Chapter 3], as it provides the probability distribution of the parameter Θ to be reconstructed, given the observation data 𝑌obs and the prior distribution. The factor of proportionality in (3.8) can be obtained as the reciprocal of the integral ∫ 𝜋prior(𝜃) 𝜋err(𝑌obs − F(𝑋, 𝜃)) d𝜃. If Θ is low-dimensional, the factor can be computed by numerical integration, where each step requires an evaluation of the model function F(𝑋, 𝜃). However, we shall see in Subsection 3.2 that the normalization is not required for sampling from the posterior distribution.

Bayesian estimators.
Given the posterior distribution, 𝜋post(Θ), one can obtain point estimates Θ̂ for Θ [19, Section 3.1.1]. We use the mean and median of the posterior distribution as our estimators, Θ̂, in the subsequent discussion. The posterior mean is defined as follows:

Θ̂ = ∫ 𝜃 𝜋post(𝜃) d𝜃.  (3.9)

Given a statistical sample 𝜃1, . . . , 𝜃𝑁 of the posterior distribution, an approximation to (3.9) can be obtained as the sample mean (1/𝑁) Σ_{𝑗=1}^{𝑁} 𝜃𝑗. Then, one can estimate the error as follows. Suppose the true value to be reconstructed is 𝜃∗. Then

|(1/𝑁) Σ_{𝑗=1}^{𝑁} 𝜃𝑗 − 𝜃∗| ≤ |(1/𝑁) Σ_{𝑗=1}^{𝑁} 𝜃𝑗 − ∫ 𝜃 𝜋post(𝜃) d𝜃| + |∫ 𝜃 𝜋post(𝜃) d𝜃 − 𝜃∗|.

The first summand on the right-hand side can be estimated by the usual Monte Carlo error estimate: the average error is of magnitude 𝜎/√𝑁, where 𝜎² is the variance of Θ. One can do slightly better asymptotically with quasi-Monte Carlo random generators, for which a pointwise error bound O(log 𝑁/𝑁) can be achieved if 𝜋post(𝜃) is sufficiently regular [30]. However, there is no control on the second summand, which subsumes a possible systematic error of the approach.

The observed data. In our methodological study, 𝑌obs are the numerically computed solutions of (2.10) for fixed values 𝛼∗, 𝛽∗. Given 𝑌obs and 𝑋 (length scale, temperature), the posterior distributions of 𝛼, 𝛽 are reconstructed using the Bayesian method outlined above.

3.2 The Metropolis algorithm – Markov chain Monte Carlo

Markov chains. We refer to [35] for details. A Markov chain is a sequence of random variables 𝑋0, 𝑋1, 𝑋2, . . . with values in a state space 𝑆, with the property – informally speaking – that the probability of a transition from one state to another depends only on the current state. More precisely, the conditional probability distributions satisfy

𝜋(𝑋𝑛+1|𝑋0, 𝑋1, . . . , 𝑋𝑛) = 𝜋(𝑋𝑛+1|𝑋𝑛).

We work here solely with a continuous state space, so that all random variables have a probability density. Let 𝜋⁽ⁿ⁾(𝑦) be the marginal (i.e., unconditional) probability density of 𝑋𝑛. There is a so-called transition kernel 𝑝(𝑥, 𝑦), such that for given 𝑥, 𝑝(𝑥, 𝑦) is a probability density and for given 𝑦, 𝑝(𝑥, 𝑦) is a measurable function of 𝑥.
The marginal probability densities are computed as

𝜋⁽ⁿ⁾(𝑦) = ∫_𝑆 𝑝(𝑥, 𝑦) 𝜋⁽ⁿ⁻¹⁾(𝑥) d𝑥

and thus are uniquely determined by the initial density 𝜋⁽⁰⁾(𝑥). A probability distribution 𝜋 is called stationary with respect to the Markov chain if

𝜋(𝑦) = ∫_𝑆 𝑝(𝑥, 𝑦) 𝜋(𝑥) d𝑥.

Under rather general assumptions, a Markov chain has a unique stationary distribution, and the distributions 𝜋⁽ⁿ⁾ of 𝑋𝑛 converge to the stationary distribution 𝜋 in the following sense: Let 𝜋⁽⁰⁾(𝑥) be any initial density and 𝑓(𝑥) be any function integrable with respect to 𝜋(𝑥)d𝑥. Let 𝜉0, 𝜉1, 𝜉2, . . . be a realization of the chain. Then

lim_{𝑁→∞} (1/𝑁) Σ_{𝑛=1}^{𝑁} 𝑓(𝜉𝑛) = ∫_𝑆 𝑓(𝑥) 𝜋(𝑥) d𝑥.

This is often referred to as the ergodic theorem [35, Theorem 6.63]. Therefore, the end pieces 𝜉𝑀, . . . , 𝜉𝑁 for sufficiently large 𝑀 (after the so-called burn-in phase) are treated as a sample of the limiting distribution 𝜋. By the ergodic theorem, the sample mean, moments and quantiles of 𝜉𝑀, . . . , 𝜉𝑁 converge to the expectation value, moments and quantiles of 𝜋 as 𝑁 → ∞.

MCMC – the Metropolis-Hastings algorithm. Let 𝜋(𝑥) be any given probability distribution on a state space 𝑆. The Metropolis-Hastings algorithm delivers a realization 𝜉0, 𝜉1, 𝜉2, . . . of a Markov chain which has 𝜋(𝑥) as its stationary distribution.

Preparation of the algorithm:
• Choose an initial density 𝜋⁽⁰⁾(𝑥) or simply an initial point 𝑥 = 𝜉0.
• Choose an arbitrary transition kernel 𝑞(𝑥, 𝑦) (the so-called proposal distribution) such that 𝑞(𝑥, 𝑦) = 𝑞(𝑦, 𝑥) for all 𝑥, 𝑦 ∈ 𝑆.

Execution of the algorithm:
• Sample a value 𝜉0 from 𝜋⁽⁰⁾.
• For 𝑘 = 1, . . . , 𝑁, sample a candidate value 𝜂 from the proposal distribution 𝑞(𝜉𝑘−1, ·) and compute the ratio

𝑟 = 𝜋(𝜂) / 𝜋(𝜉𝑘−1).

– If 𝑟 ≥ 1, the candidate is accepted: set 𝜉𝑘 = 𝜂.
– If 𝑟 < 1, the candidate is accepted with probability 𝑟 and rejected with probability 1 − 𝑟: draw a random number 𝜁 from the uniform distribution on [0, 1]; if 𝜁 ≤ 𝑟, set 𝜉𝑘 = 𝜂, and if 𝜁 > 𝑟, set 𝜉𝑘 = 𝜉𝑘−1.

The algorithm ensures that 𝜉0, 𝜉1, 𝜉2, . . . is a realization of a Markov chain with 𝜋(𝑥) as its stationary distribution. In particular, the end pieces 𝜉𝑀, . . . , 𝜉𝑁 for sufficiently large 𝑀 provide a sample of 𝜋.

An important indicator of the quality of the Markov chain is the acceptance rate. This is the proportion of accepted values, that is, 1/𝑁 times the number of 𝑘's for which the candidate value 𝜂 is not rejected. If the acceptance rate is close to one, the chain tends to become the Markov process generated by the proposal distribution alone. If the acceptance rate is small, the chain hardly moves and covers the state space only very slowly. Thus a medium-size acceptance rate is desirable.
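The algorithm above can be sketched in a few lines; the target density (an unnormalized standard normal) and the tuning values are purely illustrative, not the posterior used in this paper.

```python
import math
import random

# Minimal Metropolis sampler with a symmetric Gaussian random-walk
# proposal q(x, y) = q(y, x). The target here is an unnormalized standard
# normal density, chosen for illustration only; in the paper the target
# is the (unnormalized) posterior (3.8).
def target(x):
    return math.exp(-0.5 * x * x)   # normalization constant not needed

def metropolis(n_steps, step=1.0, x0=0.0, seed=0):
    rng = random.Random(seed)
    chain, x, accepted = [], x0, 0
    for _ in range(n_steps):
        eta = x + rng.gauss(0.0, step)      # candidate from q(x, .)
        r = target(eta) / target(x)         # acceptance ratio
        if r >= 1.0 or rng.random() <= r:   # accept with probability min(1, r)
            x = eta
            accepted += 1
        chain.append(x)
    return chain, accepted / n_steps

chain, acc_rate = metropolis(50_000)
sample = chain[500:]                        # discard a burn-in segment
mean = sum(sample) / len(sample)
var = sum((s - mean) ** 2 for s in sample) / len(sample)
print(abs(mean) < 0.1, abs(var - 1.0) < 0.2)
```

Because only ratios of the target enter, the normalizing constant cancels, which is exactly why the normalization of (3.8) is not needed for sampling.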
Studies of the efficiency of the Metropolis-Hastings algorithm suggest an optimal acceptance rate of 44% for univariate chains and around 23% for multivariate chains [14]. The convergence of the Markov chain produced by MCMC can be proven under very general assumptions on the transition kernel and the stationary distribution 𝜋 [35, Section 7.2]. We use a transition kernel of the form 𝑞(𝑥, 𝑦) = 𝑞(𝑥 − 𝑦). In this case, additional requirements on the shape and decay of 𝜋 are needed [35, Section 7.5], which are usually fulfilled, except, for example, in the case of non-identifiable models.

3.3 Practical aspects of the Metropolis algorithm

Burn-in phase. The so-called burn-in phase is defined to be the initial segment of the Markov chain, before the elements of the chain hit the main range of the posterior distribution. The length of the burn-in phase depends on the choice of the initial value (and the variance of the proposal distribution); see Figure 2, which corresponds to the 𝛼-component of the Markov chain for the diagonal solution with 𝛼∗ = 0.0008, 𝛽∗ = 1.4 and a uniform prior distribution. In the numerical experiments in Sections 5 and 6, we simulate a Markov chain of length 10⁴, and we remove the first 200 elements as the burn-in phase for the statistical analysis.

Figure 2: Chain for 𝛼∗ = 0.0008 in the case of the diagonal solution with uniform prior: first 1000 entries (left), burn-in phase (right).

Monitoring convergence. The quality of the simulated Markov chain can be tested by means of its mixing properties, acceptance rate, and tests for stationarity. Additional methods of analysing the quality of Markov chains produced by MCMC can be found in [35, Chapter 12] and in [6]. For further statistical explorations of posterior distributions see, e.g., [3] and [11]. To assess the quality of the Markov chain, one usually checks whether the chain explores the whole state space (the range of the parameter under consideration) reasonably fast.
One important criterion is the acceptance rate, which should optimally be around 44% for univariate chains and around 23% for multivariate chains; see Subsection 3.2. The acceptance rates obtained in our numerical experiments are recorded in Tables 2, 3, 4 and 5 and are reasonably close to the optimal ones. A second criterion is a good mixing property, so that the candidates move between different parts of the state space quickly. This can
be checked visually and is evident for all chains in the figures in Sections 5 and 6.

The final question concerns the convergence of the chain to its stationary distribution. This means that subsamples 𝜉𝐾, . . . , 𝜉𝐾+𝑀 and 𝜉𝐿, . . . , 𝜉𝐿+𝑀, with sufficiently large 𝐾 and 𝐿 > 𝐾 + 𝑀, should have the same distribution. This can be tested by nonparametric statistical tests such as the Kolmogorov-Smirnov test. However, these tests apply only to iid (independent and identically distributed) subsamples. As suggested in [35, Section 12.2.2], approximately independent sampling can be achieved by selecting batches of the form 𝜉𝐾, 𝜉𝐾+𝐺, 𝜉𝐾+2𝐺, . . . , 𝜉𝐾+𝑙𝐺 with 𝐺 ≫ 1 (and 𝑙𝐺 ≤ 𝑀). We perform these tests repeatedly for successive periods of length 1000 with batch step 𝐺 = 10. The obtained values of the Kolmogorov-Smirnov test statistic (for two samples of equal size) are well below the critical value [38, Table ST8] at significance level 5%, in most cases.

Confidence intervals. The approximate confidence intervals of the mean in Section 5 (Figure 5) can be computed from a Central Limit Theorem for Markov chains [5, Theorem 27.4]: Let 𝑋1, 𝑋2, . . . be a stationary Markov chain satisfying certain mixing conditions, and let 𝑆𝑁 = 𝑋1 + · · · + 𝑋𝑁. Then 𝑁 · V(𝑆𝑁/𝑁), the variance of 𝑆𝑁/𝑁 multiplied by 𝑁, converges to

𝛾² = V(𝑋1) + 2 Σ_{𝑘=1}^{∞} COV(𝑋1, 𝑋1+𝑘)  (3.10)

as 𝑁 → ∞. If 𝛾 > 0, the centred random variable (𝑆𝑁 − 𝑁𝜇∗)/(𝛾√𝑁), with 𝜇∗ the mean of the stationary distribution, converges (in distribution) to the standard normal distribution. The required mixing conditions cannot easily be verified in practice, but can be enforced in the MCMC setting (see the discussion in [35, Sections 6.7.2 and 6.9]).

Thus, taking a segment 𝜉𝑀+1, . . . , 𝜉𝑀+𝑁 of length 𝑁 of the chain after the burn-in period, with sample mean 𝜇𝛼,𝑁 = (1/𝑁) Σ_{𝑖=1}^{𝑁} 𝜉𝑀+𝑖, one may assume that 𝜇𝛼,𝑁 is approximately normally distributed around 𝜇∗ with variance 𝛾²/𝑁; here 𝜇∗ is the true mean of the posterior distribution. Due to the assumed stationarity of the chain, the variance and the covariances in (3.10) can be estimated by the empirical autocovariance function of the segment 𝜉𝑀+1, . . . , 𝜉𝑀+𝑁, and the infinite sum can be truncated at large 𝑘 = 𝐾, when the covariances become negligible (empirically around 𝐾 = 10 to 20).
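This estimation procedure can be sketched as follows; the AR(1) test chain is a synthetic illustration (chosen because its exact long-run variance is known in closed form), not one of the MCMC chains from this paper.

```python
import random

# Empirical estimate of gamma^2 in (3.10): the variance plus twice the
# autocovariances, truncated at a lag K beyond which they are negligible.
def gamma_squared(xs, K=30):
    n = len(xs)
    mu = sum(xs) / n
    def autocov(k):
        return sum((xs[i] - mu) * (xs[i + k] - mu) for i in range(n - k)) / n
    return autocov(0) + 2.0 * sum(autocov(k) for k in range(1, K + 1))

# Synthetic test chain: an AR(1) process x_t = a x_{t-1} + e_t with unit
# Gaussian noise, whose exact long-run variance is 1 / (1 - a)^2.
rng = random.Random(1)
a, n = 0.5, 50_000
xs, x = [], 0.0
for _ in range(n):
    x = a * x + rng.gauss(0.0, 1.0)
    xs.append(x)

g2 = gamma_squared(xs)
se = (g2 / n) ** 0.5    # estimated standard error of the sample mean
print(2.5 < g2 < 6.0)   # the exact value here is 4.0
```

Note that the naive iid estimate V(𝑋1)/𝑛 would understate the error for a positively correlated chain; the autocovariance correction in (3.10) accounts for this.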
Based on these considerations, approximate confidence intervals for the posterior mean can be computed as μ_{α,N} ± qγ/√N, where q is the quantile of the standard normal distribution corresponding to the desired confidence level (q = 1.96 for 95% confidence), as has been done in Figure 5.

4 Computational framework

In this section, we briefly describe our finite element framework for solving the system in (2.10), subject to prescribed, continuous and piecewise smooth Dirichlet boundary data Q_b. We take Ω to be a convex polygon and let T be a shape-regular triangulation of Ω into triangles T. The maximal mesh size of a triangulation is denoted by h := max_{T∈T} diam(T), where diam(T) is the diameter of each simplex T ∈ T. Define the finite element space X_h := {φ ∈ C⁰(Ω̄) | φ|_T ∈ P₁(T) for all T ∈ T} ⊂ H¹(Ω) and X⁰_h := X_h ∩ H¹₀(Ω), where P₁(T) is the space of all polynomials of degree at most 1 on T. The Galerkin discretization of the nonlinear problem (2.10) is to find Q_h := (Q_{11,h}, Q_{12,h}) ∈ X_h × X_h, satisfying the Dirichlet boundary condition Q_h|_{∂Ω} := I_h Q_b obtained by conforming interpolation, such that

α ∫_Ω ∇Q_{11,h} · ∇φ_{1,h} dx + ∫_Ω (Q²_{11,h} + Q²_{12,h} − β) Q_{11,h} φ_{1,h} dx = 0,
α ∫_Ω ∇Q_{12,h} · ∇φ_{2,h} dx + ∫_Ω (Q²_{11,h} + Q²_{12,h} − β) Q_{12,h} φ_{2,h} dx = 0,   (4.1)

for all Φ_h := (φ_{1,h}, φ_{2,h}) ∈ X⁰_h × X⁰_h.

We use Newton's method to approximate solutions of the discrete nonlinear system (4.1). The initial guess plays a crucial role in the convergence of the Newton iterates and in which solution is selected. We abstractly write the nonlinear system (4.1) as an operator equation N_h(Q_h) = 0. Starting from an initial guess Q⁰_h, the Newton iterates Q^{k+1}_h = Q^k_h + δQ are computed by solving ⟨DN_h(Q^k_h) δQ, Φ_h⟩ = −N_h(Q^k_h, Φ_h). Here, DN_h(Q^k_h) denotes the Fréchet derivative of N_h at Q^k_h. We refer to [25, 26] for more algorithmic details.

Benchmark example

We review the benchmark example of NLCs confined to a square domain with tangent boundary conditions, as motivated by the planar bistable nematic device reported in [45] and the work in [24].
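Once the residual and the Fréchet derivative are assembled, the Newton update described earlier in this section takes the familiar finite-dimensional form J(u_k) δu = −F(u_k). A generic sketch (the names and the scalar toy problem, mimicking the bulk term (Q² − β)Q of (4.1), are ours and not the actual FEM assembly):

```python
import numpy as np

def newton(residual, jacobian, u0, tol=1e-10, maxit=50):
    """Newton iteration u_{k+1} = u_k + du with J(u_k) du = -F(u_k);
    the discrete analogue of <D N_h(Q_h^k) dQ, Phi_h> = -N_h(Q_h^k, Phi_h)."""
    u = np.array(u0, dtype=float)
    for _ in range(maxit):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        u = u + np.linalg.solve(jacobian(u), -F)
    return u

# Scalar toy analogue: alpha*q + (q^2 - beta)*q = 0 has the
# nontrivial root q = sqrt(beta - alpha) when beta > alpha.
alpha, beta = 0.04, 1.0
res = lambda u: np.array([alpha * u[0] + (u[0] ** 2 - beta) * u[0]])
jac = lambda u: np.array([[alpha + 3.0 * u[0] ** 2 - beta]])
root = newton(res, jac, [1.0])
```

As noted above, which of the competing solutions Newton selects depends on the initial guess: starting near q = 0 instead yields the trivial root.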
Consider the rescaled square domain Ω := [0, 1] × [0, 1] with tangent boundary conditions on the square edges, in the reduced LdG framework outlined in Subsection 2.3. The tangent boundary conditions mean that on the square edges, the nematic director has to be tangent to the edges, i.e., n = (±1, 0) on the horizontal edges y = 0, 1, and n = (0, ±1) on the vertical edges x = 0, 1. Fixing s = 1 on the square edges (an arbitrary choice that essentially means that the molecules are well ordered on the edges), this translates to the boundary conditions displayed in Table 1. The discontinuities at the vertices are circumvented by defining the Dirichlet boundary condition Q_b using the trapezoidal shape function [24]:

Q_b(x, y) = (T_d(x), 0) on y = 0 and y = 1,   Q_b(x, y) = (−T_d(y), 0) on x = 0 and x = 1,   (4.2)

with parameter d = 0.06, where the trapezoidal shape function T_d : [0, 1] → R is defined by

T_d(t) = t/d for 0 ≤ t ≤ d,   T_d(t) = 1 for d ≤ t ≤ 1 − d,   T_d(t) = (1 − t)/d for 1 − d ≤ t ≤ 1.

It is known both experimentally and numerically that the system (4.1),
subject to the Dirichlet boundary condition Q_b defined above, admits two classes of solutions for small enough α (for λ ≥ λ_c, where λ_c is estimated to be in the nanometre range from the results in [21]):

1. diagonal solutions: the planar director, modelled by n in (2.6), roughly aligns along one of the square diagonals, and there are two classes of diagonal solutions, D1 and D2, one for each square diagonal;

2. rotated solutions: n rotates by π radians between a pair of opposite square edges, and there are 4 classes of rotated solutions, labelled R1, R2, R3 and R4 respectively, related to each other by rotations of π/2 radians.

Solution  x = 0  x = 1  y = 0  y = 1
Q11        −1     −1      1      1
Q12         0      0      0      0

Table 1: Tangent boundary conditions for Q.

In other words, there are at least six competing solutions (two diagonal and four rotated) of the system (2.10) for λ large enough, or α small enough. In Figure 3, we plot the computational mesh for the rescaled unit square domain, and the numerically computed diagonal and rotated solutions. The diagonal and rotated solutions are distinguished by the locations of the splay vertices, a splay vertex being a vertex such that n splays around it. Each diagonal solution has a pair of diagonally opposite splay vertices, and each rotated solution has a pair of adjacent splay vertices connected by an edge of the square. The black arrows correspond to n = (cos θ, sin θ) in (2.6) with θ := (1/2) arctan(Q12/Q11), whilst the colour bar refers to the scalar order parameter s in (2.6).

Figure 3: Mesh with h = 0.0442; (a) the mesh, (b) the diagonal solution and (c) the rotated solution computed on this mesh.

5 Numerical experiments: one-parameter identification

Next, we demonstrate how to reconstruct the parameters α, β in (2.10) from given Q, using the Bayesian methods and the Metropolis-Hastings algorithm outlined in Section 3. We focus on the benchmark square domain example in Section 4 and perform numerical experiments for various values of α ∈ [0.0008, 0.004] and β ∈ [0.6, 1.4]; cf. end of Subsection 2.3.
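The director arrows of Figure 3 can be recovered from (Q11, Q12) via (2.6). A small sketch (our choices: arctan2 is used in place of a plain arctan to resolve the quadrant, and |Q| is taken as a proxy proportional to the scalar order parameter s):

```python
import numpy as np

def director(Q11, Q12):
    """Director n = (cos t, sin t) with t = 0.5 * arctan(Q12/Q11), plus
    |Q| = sqrt(Q11^2 + Q12^2) as a quantity proportional to s in (2.6)."""
    t = 0.5 * np.arctan2(Q12, Q11)  # arctan2 resolves the sign of Q11
    s_proxy = np.sqrt(Q11 ** 2 + Q12 ** 2)
    return np.cos(t), np.sin(t), s_proxy
```

Consistently with Table 1, Q = (1, 0) gives the horizontal director (1, 0), while Q = (−1, 0) gives the vertical director (0, 1).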
In this section, we fix β = 1 and focus on the reconstruction of α in (2.10); in the next section, we reconstruct both α and β from given (measured) Q. The strategy is as follows. We first generate artificial data Qobs by numerically solving the nonlinear system (4.1) on a square domain with the boundary conditions in Table 1, with reference values α = α* and β = 1. The full solution of the statistical inverse problem is then given by the posterior distribution of α, as outlined in Section 3. From there, the posterior mean or the posterior median serves as a point estimate α̂. More precisely, substituting Θ = α and Y_obs = Qobs in formula (3.8) yields

π_post(α) ∝ π_prior(α) π_err(Qobs − F(X, α)).

Here, the variable X contains information about the square edge length and ambient temperature, and F(X, α) is the solution of the discretized forward problem (4.1) corresponding to a fixed value of α (and β = 1). Recalling that Q = (Q11, Q12) and F = (F11, F12) have two components, and assuming a Gaussian error distribution (3.5), (3.7) yields

π_post(α) ∝ π_prior(α) exp( −(1/2)( ‖Q̄11 − F11(X, α)‖²_{ℓ²}/σ²_{11} + ‖Q̄12 − F12(X, α)‖²_{ℓ²}/σ²_{12} ) ),   (5.1)

where Qobs = (Q̄11, Q̄12) are the generated reference data and σ²_{11}, σ²_{12} are the corresponding variances (computed empirically from the values of Q̄11, Q̄12 at the grid points of the triangulation T).

We employ two choices of the prior distribution for α: (i) an improper uniform prior, and (ii) a Gaussian prior distribution. The improper uniform prior (UP) distribution is chosen to be π_prior(α) = χ(α), where χ is the characteristic function of (0, ∞), which encodes that α is positive. The Gaussian prior (GP) is chosen to be π_prior(α) ∝ χ(α) exp(−(α − α*)²/2σ²₀), where α* is the reference parameter used for generating the artificial data set Qobs. We select the standard deviation to be σ₀ = 0.0005 for all our test cases. This choice guarantees a reasonable coefficient of variation σ₀/α* of 12.5% for α* = 0.004.
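The unnormalised log-posterior of (5.1) and the two priors are cheap to write down once a forward solve is available. A sketch, where `forward` is a hypothetical stand-in for the discretized forward solve F(X, α), not the actual FEM code:

```python
import numpy as np

def log_posterior(alpha, Q11_obs, Q12_obs, forward, sigma11, sigma12, log_prior):
    """Unnormalised log of (5.1): log prior minus half the weighted
    squared l2 misfits of the two components of Q."""
    F11, F12 = forward(alpha)
    misfit = (np.sum((Q11_obs - F11) ** 2) / sigma11 ** 2
              + np.sum((Q12_obs - F12) ** 2) / sigma12 ** 2)
    return log_prior(alpha) - 0.5 * misfit

# Improper uniform prior (UP) on (0, inf), in log form:
log_up = lambda a: 0.0 if a > 0 else -np.inf
# Gaussian prior (GP) restricted to (0, inf), centred at the reference alpha*:
alpha_star, sigma0 = 0.004, 0.0005
log_gp = lambda a: -0.5 * ((a - alpha_star) / sigma0) ** 2 if a > 0 else -np.inf
```

Working with logarithms avoids underflow in the exponential of (5.1) when the misfit is large.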
To implement the Metropolis-Hastings algorithm, we choose a proposal distribution of the form q(x, y) = q(x − y), where q(•) is the density of a mean-zero Gaussian distribution with standard deviation 0.001. This guarantees good mixing properties of the generated Markov chain (see Subsection 3.3; a good mixing property is usually obtained when the standard deviation is of the same or somewhat smaller magnitude than the parameter to be reconstructed). The initial value of the Markov chain is always chosen to be 0.005, which results in a rather short burn-in
phase for all cases.

Next, we use the diagonal and rotated solutions of (4.1) (for the benchmark example in Section 4) with α* = 0.004 and β* = 1 as the reference solution, and use the inverse problem approach in Section 3 to reconstruct the value of α. For each choice of the prior distribution of α (UP or GP), we plot the Markov chain and the resulting histogram of the posterior distribution of α, and tabulate the mean and median as estimates for the reconstructed value of α, along with the acceptance rate of the Markov chain, which serves as an indicator of the quality of the chain.

The diagonal solution of (4.1) is plotted in Figure 4(a), with α* = 0.004 and β* = 1, using the mesh plotted in Figure 3(a). This solution generates the data Qobs. Figure 4(b) displays the end piece of length 9800 of the Markov chain (with the burn-in phase of the first 200 entries removed) obtained by the MCMC algorithm, with the posterior density given by (5.1). In Figure 4(c), we plot the histogram associated with this chain, for the uniform prior distribution. In Figures 4(d) and (e), we plot the Markov chain and the histogram associated with the Gaussian prior distribution for α, respectively.

Figure 4: Diagonal solution, reference value α* = 0.004: (a) plot of reference solution; (b) Markov chain and (c) histogram of the posterior distribution of parameter α for uniform prior (UP) distribution; (d) Markov chain and (e) histogram of the posterior distribution of parameter α for Gaussian prior (GP) distribution.

The corresponding values of the sample mean (μ_α) and the sample median (m_α) are summarized in Table 2.

Statistics for Figure 4   mean μ_α   median m_α  standard deviation  acceptance rate
Uniform prior             0.0040195  0.0040126   0.0004108           65%
Gaussian prior            0.0039962  0.0039969   0.0002121           45%

Table 2: Statistics for the diagonal solution, α* = 0.004.
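The chains behind Figure 4 come from a random-walk Metropolis sampler of the kind described in Section 3. A minimal, self-contained sketch (the FEM forward solve is replaced by a generic log-posterior callable; the function name is ours):

```python
import numpy as np

def rw_metropolis(log_post, x0, step, n_samples, seed=0):
    """Random-walk Metropolis-Hastings with a mean-zero Gaussian proposal of
    standard deviation `step`; returns the chain and the acceptance rate."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_samples)
    x, lp = x0, log_post(x0)
    accepted = 0
    for i in range(n_samples):
        y = x + step * rng.standard_normal()
        lp_y = log_post(y)
        # Symmetric proposal: accept with probability min(1, pi(y)/pi(x)).
        if np.log(rng.random()) < lp_y - lp:
            x, lp, accepted = y, lp_y, accepted + 1
        chain[i] = x  # rejected proposals repeat the current state
    return chain, accepted / n_samples
```

As in the experiments above, the first entries of the returned chain (e.g. `chain[200:]`) would be discarded as burn-in before computing statistics.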
Firstly, the posterior mean overestimates the true value in the case of a uniform prior; it is biased. (This is generally the case, as can be seen from explicit calculations comparing Bayesian with frequentist estimates, e.g., in [18, Example 6.8].) Secondly, the posterior median is less biased, as has also been empirically confirmed in the literature [32].

Asymptotic confidence intervals for the mean can be computed as explained in Subsection 3.3. We also study the trends in the mean, the median and the confidence intervals by taking segments of increasing length N of the Markov chain starting at M = 201. The results are summarized in Figure 5. One can observe a decreasing bias of the mean and the median, an improved accuracy indicated by a smaller confidence interval, and the fact that the reference value α* = 0.004 always lies in the 95% confidence interval.

Figure 5: Chain for α* = 0.004, diagonal solution, uniform prior: mean, median and 95% confidence interval for the mean, depending on the length of the chain segment.

Next, we repeat the same numerical experiments with the rotated solution of (4.1), with α* = 0.004 and β* = 1 as the reference Qobs (see Figure 3(c)). It is important to test the inverse problem approach in Section 3 with different solutions, not only to assess the robustness of the method but also to understand whether some solutions outperform others for our purposes and, if so, why. The description of Figure 6 is analogous to the explanation of Figure 4. The corresponding mean and median values of the posterior distribution of α are summarized in Table 3.

Statistics for Figure 6   mean μ_α   median m_α  standard deviation  acceptance rate
Uniform prior             0.0039951  0.0039996   0.0004258           66%
Gaussian prior            0.0040011  0.0039999   0.0002129           44%

Table 3: Statistics for the rotated solution, α* = 0.004.

There are no discernible differences between the rotated and
diagonal solutions, or between the uniform and Gaussian prior distributions for α, but these numerical experiments are not exhaustive and are limited in many ways.

6 Numerical experiments: identification of α and β from the diagonal and rotated solutions

This section demonstrates how the two parameters
(α, β) in (2.10) can be reconstructed simultaneously. We follow the same strategy as described at the beginning of Section 5, using solutions to (2.10) with chosen reference parameters α = α* and β = β*, on a square domain with the tangent boundary conditions in Table 1, as artificial data Qobs. The likelihood function is as in (5.1), and the model outputs F11 = F11(X, α, β) and F12 = F12(X, α, β) depend on the two parameters (α, β). As prior distributions for α and β, we take (i) an improper uniform prior π_prior = χ(α)χ(β), where χ is the characteristic function of the interval (0, ∞), and (ii) a bivariate Gaussian prior as described in the subsequent examples.

Figure 6: Rotated solution, reference value α* = 0.004: (a) plot of reference solution; (b) Markov chain and (c) histogram of the posterior distribution of parameter α for uniform prior (UP) distribution; (d) Markov chain and (e) histogram of the posterior distribution of parameter α for Gaussian prior (GP) distribution.

We consider the reference values α* = 0.004, β* = 0.6 and α* = 0.0008, β* = 1.4, thus covering a relatively large and a relatively small value of α*, respectively. The computations are done with the diagonal and rotated solutions as before, with both prior distributions for α and β respectively. The Gaussian prior is always chosen to be centred at (α*, β*) with standard deviations σ_α = 0.0005, σ_β = 0.1, and correlation coefficient ρ_αβ = σ_αβ/(σ_α σ_β) = 0.5, where σ_αβ denotes the covariance of the bivariate Gaussian.

In all cases, the Markov chain is based on a bivariate mean-zero Gaussian distribution as proposal distribution q(•), with correlation coefficient 0.8. For α* = 0.004, β* = 0.6, the standard deviations of the proposal distribution are fixed to be (0.001, 0.1); for α* = 0.0008, β* = 1.4, the standard deviations are fixed to be (0.005, 0.1). The initial points of the Markov chains are (0.01, 0.5) for (α*, β*) = (0.004, 0.6), and (0.005, 0.8) for (α*, β*) = (0.0008, 1.4), respectively.
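Correlated bivariate proposal increments of the kind described above can be drawn via the Cholesky factor of the proposal covariance. A sketch with the values quoted for (α*, β*) = (0.004, 0.6) (the function name is ours):

```python
import numpy as np

def proposal_increment(s_alpha, s_beta, rho, rng):
    """Mean-zero bivariate Gaussian increment with standard deviations
    (s_alpha, s_beta) and correlation rho, drawn via the Cholesky factor
    of the 2x2 proposal covariance matrix."""
    z1, z2 = rng.standard_normal(2)
    d_alpha = s_alpha * z1
    d_beta = s_beta * (rho * z1 + np.sqrt(1.0 - rho ** 2) * z2)
    return d_alpha, d_beta

# Proposal settings used for (alpha*, beta*) = (0.004, 0.6):
rng = np.random.default_rng(0)
d_alpha, d_beta = proposal_increment(0.001, 0.1, 0.8, rng)
```

A positive proposal correlation is natural here because the posterior correlation ρ_αβ of (α, β) reported in Tables 4 and 5 is itself positive.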
As in Section 5, a Markov chain of length 10000 is generated, and the end pieces of length 9800 are used for the analysis.

Consider a diagonal and a rotated solution of (2.10) with (α*, β*) = (0.004, 0.6) and (α*, β*) = (0.0008, 1.4) respectively, on a square domain with tangent boundary conditions as outlined in Table 1, both of which are treated as Qobs in our numerical experiments. In what follows, we tabulate the mean and median estimators for α and β, for both pairs of reference solutions and reference values of (α*, β*), and for both choices of the prior distributions for α and β. The figures are organised as follows: (a) illustrates the reference solution; plots of the Markov chains (with burn-in phase deleted) for α and β are displayed in (b) and (d), respectively; plots (c) and (e) show the histograms of the (marginal) posterior distributions of α and β, respectively; and (f) is a contour plot of the bivariate histogram of the joint posterior distribution of (α, β). The results are summarised in Tables 4 and 5, respectively. D1 labels a diagonal solution and R4 labels a rotated solution.

As before, there are no discernible differences between the diagonal and rotated solutions, or between the UP and GP, for α* = 0.004 and β* = 0.6. The acceptance rate for the rotated solution is typically higher than for the diagonal solution. The estimates of the parameter α in Table 4 are slightly less accurate than in the one-parameter case (Tables 2 and 3), but still with a relative error between 1% and 5%. The relative error for β is below 1%. However, the rotated solution provides poorer estimates for α, when α* = 0.0008 and β* = 1.4, compared to the diagonal solution, showing a bias of the mean of about +16%. We recall that a smaller value of α* corresponds to a larger square domain, and diagonal solutions are global energy minimisers on large square domains [24]. The rotated solutions are locally stable on large square domains, and the energy difference between the diagonal and rotated solutions increases as the square edge length increases.
This could provide some heuristic insight into why diagonal solutions perform better than rotated solutions for parameter inference or inverse problems, as the square edge length increases or, equivalently, as α* decreases. The estimates of the larger parameter β are more accurate, with a relative error of less than 0.5% for the diagonal solution and less than 1% for the rotated solution.

Type of chain  mean μ_α   median m_α  mean μ_β   median m_β  correlation ρ_αβ  AR
D1, UP         0.0041841  0.0041460   0.6060851  0.6054822   0.8654732         17%
D1, GP         0.0040616  0.0040508   0.6017940  0.6020828   0.8332836         14%
R4, UP         0.0041191  0.0041147   0.6085217  0.6077666   0.9258516         22%
R4, GP         0.0040311  0.0040217   0.6023924  0.6016717   0.9064120         19%

Table 4: Sample statistics for the posterior distributions, reference values α* = 0.004, β* = 0.6.

Type of chain  mean μ_α   median m_α  mean μ_β   median m_β  correlation ρ_αβ  AR
D1, UP         0.0008587  0.0008478   1.4044713  1.4055363   0.4372786         16%
D1, GP         0.0008596  0.0008417   1.4037591  1.4046242   0.4318876         15%
R4, UP         0.0009278  0.0008725   1.4134830  1.4139106   0.4805958         31%
R4, GP         0.0009292  0.0008886   1.4113185  1.4106873   0.4865024         30%

Table 5: Sample statistics for the posterior distributions, reference values α* = 0.0008, β* = 1.4.

In Figures 7 and 8, we plot the posterior distributions for α and β as described above, with a diagonal solution as Qobs. In Figures 9 and 10, we plot the posterior distributions for α and β as described above, with a rotated solution as Qobs, for α* = 0.0008 and β* = 1.4. We omit the plots for α* = 0.004, β* = 0.6 for brevity.

Figure 7: Diagonal solution, reference values α* = 0.0008 and β* = 1.4, uniform prior: (a) plot of reference solution; (b) Markov chain and (c) histogram of the marginal posterior distribution of parameter α; (d) Markov chain and (e) histogram of the marginal posterior distribution of parameter β; (f) bivariate histogram of the joint posterior distribution of (α, β).

Figure 8: Diagonal solution, reference values α* = 0.0008 and β* = 1.4, Gaussian prior: (a) plot of reference solution; (b) Markov chain and (c) histogram of the marginal posterior distribution of parameter α; (d) Markov chain and (e) histogram of the marginal posterior distribution of parameter β; (f) bivariate histogram of the joint posterior distribution of (α, β).
Figure 9: Rotated solution, reference values α* = 0.0008 and β* = 1.4, uniform prior: (a) plot of reference solution; (b) Markov chain and (c) histogram of the marginal posterior distribution of parameter α; (d) Markov chain and (e) histogram of the marginal posterior distribution of parameter β; (f) bivariate histogram of the joint posterior distribution of (α, β).

Figure 10: Rotated solution, reference values α* = 0.0008 and β* = 1.4, Gaussian prior: (a) plot of reference solution; (b) Markov chain and (c) histogram of the marginal posterior distribution of parameter α; (d) Markov chain and (e) histogram of the marginal posterior distribution of parameter β; (f) bivariate histogram of the joint posterior distribution of (α, β).

7 Examples of non-identifiability

Next, we show that the MCMC algorithm may fail to identify/reconstruct the parameter values of α, β when they are outside the physical range mentioned at the end of Subsection 2.3. As a test example, consider the nonlinear system (2.10) (with β = 1) on a unit square domain Ω := (0, 1)², with the non-homogeneous Dirichlet boundary condition

Q_b(x, y) = ((x − a₁), (y − a₂)) / √((x − a₁)² + (y − a₂)²).

Here a := (a₁, a₂) is the centre of an interior point vortex, and we choose a := (1/4, 3/4). We follow the same strategy as before. We solve (2.10) with α* = 1, 0.1, 0.01 and β* = 1, use these solutions as the observed data Qobs in the likelihood function, and then construct the posterior distribution of α. Referring to Figures 11, 12 and 13, (a) is a plot of the numerical solution which acts as Qobs. The blue arrows correspond to the vector field Q = (Q11, Q12), and the colour bar corresponds to |Q|. Plot (b) is the Markov chain of length 30000 obtained by the MCMC algorithm with the improper uniform prior distribution π_prior(α) = χ(α) and the likelihood function in (5.1).
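The vortex boundary condition Q_b above can be evaluated directly; a small sketch (the function name is ours):

```python
import numpy as np

def Q_b_vortex(x, y, a=(0.25, 0.75)):
    """Non-homogeneous Dirichlet data with an interior point vortex at a:
    Q_b = ((x - a1), (y - a2)) / sqrt((x - a1)^2 + (y - a2)^2)."""
    dx, dy = x - a[0], y - a[1]
    r = np.sqrt(dx ** 2 + dy ** 2)
    return dx / r, dy / r
```

Since the vortex centre a lies in the interior of the unit square, r never vanishes on the boundary and |Q_b| = 1 there.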
Plot (c) displays the corresponding histogram of the posterior distribution of α.

Figure 11: (a) Computed solution, reference value α* = 1; (b) Markov chain and (c) histogram of the posterior distribution of parameter α.

Figure 12: (a) Computed solution, reference value α* = 0.1; (b) Markov chain and (c) histogram of the posterior distribution of parameter α.

Figure 13: (a) Computed solution,
reference value α* = 0.01; (b) Markov chain and (c) histogram of the posterior distribution of parameter α.

The figures show that the Markov chain does not converge for data Qobs constructed from α* = 1. It somewhat converges around α* = 0.1, though with a badly mixing Markov chain, and finally converges when α* = 0.01 (and smaller). The profile likelihood is another method for detecting such non-identifiability [34, 41]. The general profile likelihood can be used to detect identifiability or non-identifiability of individual components in a multi-parameter model. In our case of a scalar parameter α, the profile likelihood is just the likelihood function as in (5.1).

If the likelihood function has a flat profile or tails out to a plateau on one or both sides, one can detect non-identifiability. This is the case in Figure 14(a) for α* = 1 and corresponds to the divergence of the Markov chain shown in Figure 11(b). Conversely, if the likelihood function tails out to zero on both sides rather quickly, it indicates identifiability. This is seen in Figure 14(c); the corresponding Markov chain, Figure 13(b), shows a good mixing behaviour and converges to the respective value of α*, as can be seen from the resulting posterior distribution in Figure 13(c). The intermediate case is α* = 0.1; here the likelihood function in Figure 14(b) is bell-shaped, but has a rather fat tail on the right-hand side. This is reflected in the somewhat rugged Markov chain in Figure 12(b).

The non-identifiability for large values of α can be understood as follows: the system (2.10) admits a unique solution for α large enough, say α ∈ (α_c, ∞), or equivalently when the square edge length is small enough [21]. For example, the system (2.10) has a unique solution in the range α ∈ (α_c, ∞), subject to the boundary conditions in Table 1, labelled as the Well Order Reconstruction Solution (WORS). The WORS has two diagonal line defects that partition the square domain into four quadrants, and the director is constant in each quadrant.
In other words, different values of α can converge to the same director profile, making identifiability difficult for large values of α. Further, there are questions about the validity of the LdG model on very small length scales or close to critical temperatures, as captured by large positive values of α. The non-identifiability or divergence of the MCMC may simply be an indicator of model failure in the physical regimes associated with large positive α.

Figure 14: Plots of the likelihood functions π_err(Qobs − F(X, α)) for (a) α* = 1, (b) α* = 0.1 and (c) α* = 0.01.

8 Conclusions

We have successfully set up an inverse UQ framework for the reduced LdG model in two dimensions, which has worked for the benchmark problem of the planar bistable nematic device reported in [45], in physically relevant scenarios. However, this is a relatively simple benchmark problem with a square domain and regular boundary conditions. Importantly, we have tested our method with the diagonal and rotated solutions, both of which are free of interior nematic defects.

There are numerous follow-up open questions. Will our methods work for more complex scenarios? For example, can our algorithms reconstruct LdG model inputs when Qobs has multiple interior defects? We have briefly touched on this topic in Section 7, but more extensive numerical experiments are required. Similar comments apply to non-convex domains or complex boundary conditions. It would be interesting to replace the Dirichlet tangent boundary conditions with surface anchoring energies (see [24]), and to develop inverse UQ methods or Bayesian methods that can reconstruct the surface anchoring coefficient from Qobs. We have only tested our methods with the diagonal and rotated solutions, both of which are local LdG energy minimisers. It would be interesting to see if our methods can reconstruct LdG model inputs from saddle-point solutions of the LdG Euler-Lagrange equations, many of which have symmetric constellations of interior defects [17].
In future work, we plan to further develop and adapt our inverse UQ methods or Bayesian methods to more complex scenarios, both out of fundamental scientific curiosity and to provide new approaches for estimating material properties to the interdisciplinary liquid crystal research community.

Acknowledgements

The authors gratefully acknowledge funding from the Royal Society International Exchange Grant IES\R2\242240.

References

[1] J. M. Ball and A. Majumdar. Nematic liquid crystals: From Maier-Saupe to a continuum theory. Mol. Cryst. Liq. Cryst., 525(1):1–11, 2010.

[2] D. Bankova, N. Podoliak, M. Kaczmarek, and G. D'Alessandro. Optical measurements of the twist constant and angle in nematic liquid crystal cells. Sci. Rep., 14(1):17713, Jul