the Einstein notation for the two types of indices $a,b,c,d \in \{1,\dots,d,\gamma\}$ and $i,j,k,l,m \in \{1,\dots,d\}$ throughout this paper. A pair of subscript and superscript indices implies summation over those indices. For example, the terms
$$\sum_{b=1}^{d+1} g^{ab}\partial_b, \qquad \sum_{j=1}^{d} g^{ij}\partial_j$$
are abbreviated as $g^{ab}\partial_b$ and $g^{ij}\partial_j$, where $g^{ij}=g^{ij}(\theta,\gamma)$. The symbol $g^{ij}$ is the $(i,j)$ component of $g^{-1}$.

Example 1 (Truncated exponential distributions). Consider the family of truncated exponential distributions with the density
$$p(x;\theta,\gamma) = \theta e^{-\theta(x-\gamma)}\,\mathbf{1}_{[\gamma,\infty)}(x) \quad (x\in\mathbb{R}) \tag{3}$$
with $\Theta=(0,\infty)$, $I=\mathbb{R}$, and $q(x;\theta)=e^{-\theta x}$. It follows that $\psi(\theta,\gamma)=-\theta\gamma-\log\theta$. This family is an oTEF with $d=1$.

Example 2 (Truncated normal distributions). Let $\mathcal{P}$ be the family of truncated normal distributions with the density
$$p(x;\mu,\sigma,\gamma) = \frac{1}{\sigma}\phi\!\left(\frac{x-\mu}{\sigma}\right)\exp\!\left\{-\log\!\left(1-\Phi\!\left(\frac{\gamma-\mu}{\sigma}\right)\right)\right\}\mathbf{1}_{[\gamma,\infty)}(x)\quad(x\in\mathbb{R}) \tag{4}$$
with $(\mu,\sigma)\in\Theta=\mathbb{R}\times(0,\infty)$, $\gamma\in\mathbb{R}$, and $q(x;\mu,\sigma)=\phi((x-\mu)/\sigma)$. Here, $\phi(x)$ and $\Phi(x)$ are the density and the distribution function of the standard normal distribution, respectively. Hereafter, $N(\mu,\sigma,\gamma)$ denotes the truncated normal distribution with the above density. The family of truncated normal distributions is also an oTEF with the natural parameters $(\alpha,\beta)=(\mu/\sigma^2,\,-1/(2\sigma^2))\in\mathbb{R}\times(-\infty,0)$ and the density
$$p(x;\alpha,\beta,\gamma) = \frac{1}{\sqrt{\pi}}\exp\!\left\{\alpha x+\beta x^2+\frac12\log(-\beta)+\frac{\alpha^2}{4\beta}-\log(1-\Phi(\nu))\right\}\mathbf{1}_{[\gamma,\infty)}(x) \tag{5}$$
for $x\in\mathbb{R}$, where $\nu=\gamma\sqrt{-2\beta}-\dfrac{\alpha}{\sqrt{-2\beta}}$.

3 Probability matching priors for non-regular models with regular multiparameters

Let $U^i:=\sqrt{n}\,(\theta^i-\hat\theta^i_{\mathrm{ML}})$ for $i=1,\dots,d$, $U:=(U^1,\dots,U^d)^\top$, and $T:=n\hat c\,(\gamma-\hat\gamma_{\mathrm{ML}})$. Then $U$ converges in distribution to the normal distribution $N(0,g_\theta^{-1}(\theta,\gamma))$ as $n\to\infty$. On the other hand, $T$ converges in distribution to the exponential distribution $\mathrm{Exp}(1)$ as $n\to\infty$.
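The limiting behaviour of $T$ in Example 1 can be illustrated numerically. This is a sketch, not from the paper: the closed-form MLEs used below, $\hat\gamma_{\mathrm{ML}}=\min_i X_i$ and $\hat\theta_{\mathrm{ML}}=1/(\bar X-\hat\gamma_{\mathrm{ML}})$, are standard results for the truncated exponential family rather than quoted from this excerpt. Since $\hat\gamma_{\mathrm{ML}}\ge\gamma$ and $c(\theta,\gamma)=\theta$, we check that the magnitude $n\hat c\,(\hat\gamma_{\mathrm{ML}}-\gamma)$ has approximately unit mean, as the $\mathrm{Exp}(1)$ limit predicts.

```python
import math
import random

random.seed(0)

theta, gamma, n, reps = 2.0, 1.0, 200, 2000
neg_t = []
for _ in range(reps):
    # inverse-CDF sampling: X = gamma + Exp(theta)
    xs = [gamma - math.log(random.random()) / theta for _ in range(n)]
    g_hat = min(xs)                        # MLE of the truncation point gamma
    th_hat = 1.0 / (sum(xs) / n - g_hat)   # MLE of theta
    # c(theta, gamma) = theta here, so c_hat = th_hat and
    # n * c_hat * (g_hat - gamma) should be ~ Exp(1) for large n
    neg_t.append(n * th_hat * (g_hat - gamma))

mean_neg_t = sum(neg_t) / reps   # should be close to 1
```

The sample mean of $n\hat c\,(\hat\gamma_{\mathrm{ML}}-\gamma)$ settles near 1, consistent with the exponential limit stated above.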
Consider a smooth prior density $\pi$ on $\Theta\times I$ which satisfies the following property matching the frequentist and posterior probabilities:
$$P^n_{\theta,\gamma}\!\left(\frac{U^i}{\sqrt{\hat g^{ii}}}\le z\right) = P^n_\pi\!\left(\frac{U^i}{\sqrt{\hat g^{ii}}}\le z\,\middle|\,X^n\right)+O_p\!\left(\frac1n\right)$$
for all $z\in\mathbb{R}$, where $X^n=(X_1,\dots,X_n)$. Here, $P^n_{\theta,\gamma}$ denotes the joint distribution of $X^n$, and $P^n_\pi(\cdot\mid X^n)$ is the posterior probability given $X^n$. Such a prior is called a probability matching prior for the regular parameter $\theta^i$ ($i=1,\dots,d$) (Datta & Ghosh, 1995), and is denoted by $\pi^i_{\mathrm{PM}}$. If the prior $\pi$ also satisfies
$$P^n_{\theta,\gamma}(T\le z) = P^n_\pi(T\le z\mid X^n)+O_p\!\left(\frac{1}{n^2}\right),$$
then the prior $\pi^\gamma_{\mathrm{PM}}$ is a probability matching prior for the truncation parameter $\gamma$ (Ghosal, 1999).

The following theorems extend the results of Ghosal (1999) to the multivariate case, restricted to the oTF; Ghosal considered only the case $d=1$. Our results may also hold for Ghosal's non-regular model, but the proof is more involved. To derive the probability matching prior for the oTF, we consider the following asymptotic expansion of the posterior density.

Lemma 1. Let $\hat\theta_{\mathrm{ML}}$ and $\hat\gamma_{\mathrm{ML}}$ be the MLEs of $\theta$ and $\gamma$.
With $u=\sqrt{n}\,(\theta-\hat\theta_{\mathrm{ML}})$ and $t=n\hat c\,(\gamma-\hat\gamma_{\mathrm{ML}})$, the posterior density $\pi(u,t;X^n)$ admits the asymptotic expansion
$$\pi(u,t;X^n) = \frac{1}{(2\pi)^{d/2}\sqrt{\det\hat g_\theta^{-1}}}\,e^{t-u^\top\hat g_\theta u/2}\left[1+\frac{1}{\sqrt n}B_1(u,t)+\frac1n B_2(u,t)+O_p\!\left(\frac{1}{n^{3/2}}\right)\right],$$
where $\hat\pi=\pi(\hat\theta_{\mathrm{ML}},\hat\gamma_{\mathrm{ML}})$ and
$$B_1 = \frac{1}{\hat\pi}D_\theta^\top\hat\pi\,u+\hat A^{(1,1)\top}u\,\frac{t}{\hat c}+\frac{1}{3!}\hat A^{(3,0)\top}u^{\otimes3},$$
$$\begin{aligned}
B_2 &= \frac{1}{\hat c\hat\pi}\partial_\gamma\hat\pi\,(t+1)+\frac{1}{2\hat\pi}D_\theta^{\otimes2}\hat\pi\,(u^{\otimes2}-\mathrm{vec}\,\hat g_\theta^{-1})
+\frac{1}{\hat c\hat\pi}\left(D_\theta\hat\pi\otimes\hat A^{(1,1)}\right)^{\!\top}\!\left(u^{\otimes2}t+\mathrm{vec}\,\hat g_\theta^{-1}\right)\\
&\quad+\frac{1}{3!\,\hat\pi}\left(D_\theta\hat\pi\otimes\hat A^{(3,0)}\right)^{\!\top}\!\left(u^{\otimes4}-3\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right)
+\frac{1}{2\hat c^2}\hat A^{(0,2)}\left(t^2-2\right)
-\frac{1}{2\hat c}\hat A^{(2,1)\top}\left(u^{\otimes2}t+\mathrm{vec}\,\hat g_\theta^{-1}\right)\\
&\quad+\frac{1}{4!}\hat A^{(4,0)\top}\left(u^{\otimes4}-3\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right)
+\frac{1}{2\hat c^2}\left(\hat A^{(1,1)\top}\right)^{\otimes2}\left(u^{\otimes2}t^2-2\,\mathrm{vec}\,\hat g_\theta^{-1}\right)\\
&\quad+\frac{1}{2\cdot3!^2}\left(\hat A^{(3,0)\top}\right)^{\otimes2}S_6\left(u^{\otimes6}-15\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes3}\right)
+\frac{1}{3!\,\hat c}\left(\hat A^{(1,1)}\otimes\hat A^{(3,0)}\right)^{\!\top}\!\left(u^{\otimes4}t+3\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right).
\end{aligned}$$
Here, $S_6\in\mathbb{R}^{d^6\times d^6}$ is the symmetrizer matrix defined by (A5) in Appendix A. For a matrix $A=(a_{ij})\in\mathbb{R}^{m_1\times m_2}$ ($m_1,m_2\in\mathbb{N}$), the vec operator is a linear map from $\mathbb{R}^{m_1\times m_2}$ to $\mathbb{R}^{m_1m_2}$ that stacks the columns of $A$ into a single column vector: $\mathrm{vec}\,A=(a_{11},\dots,a_{m_11},\dots,a_{1m_2},\dots,a_{m_1m_2})^\top$.

The proof of Lemma 1 is given in Appendix A. This lemma provides the asymptotic
https://arxiv.org/abs/2504.21363v1
expansion of the posterior probability $P^n_\pi$ and the frequentist probability $P^n_{\theta,\gamma}$, using a shrinkage argument. We then derive the conditions for the probability matching prior on an oTF as follows.

Theorem 2. The probability matching prior $\pi^\gamma_{\mathrm{PM}}(\theta,\gamma)$ for the non-regular parameter $\gamma$ is the solution of the partial differential equation
$$\partial_\gamma\log\pi+A^{(1,1)}_i g^{ij}\partial_j\log\pi = \partial_\gamma\log c-\partial_i A^{(1,1)}_j g^{ij}+A^{(1,1)}_i g^{ij}\left\{\partial_j\log c+\partial_j\log(\det g_\theta)-\left(\Gamma^g_{mk,j}-\Gamma^g_{kj,m}\right)g^{km}\right\}. \tag{6}$$
On the other hand, the probability matching prior $\pi^i_{\mathrm{PM}}(\theta,\gamma)$ for the regular parameter $\theta^i$ ($i=1,\dots,d$) is the solution of the partial differential equation
$$\frac{g^{ij}}{\sqrt{g^{ii}}}\,\partial_j\log\pi(\theta,\gamma) = -\partial_j\!\left(\frac{g^{ij}}{\sqrt{g^{ii}}}\right). \tag{7}$$
The proof is given in Appendix B. We find the solution in the case of the oTEF in Section 5.

Example 1 (Truncated exponential distributions, continued). Consider the family of truncated exponential distributions with the density (3). In this case, the Riemannian metric $g$ is given by
$$g(\theta,\gamma) = \begin{pmatrix} g_{11} & g_{1\gamma}\\ g_{\gamma1} & g_{\gamma\gamma}\end{pmatrix} = \begin{pmatrix} 1/\theta^2 & 0\\ 0 & \theta^2\end{pmatrix}.$$
Note that here $\theta^2$ means the square of $\theta$, not its second component. The $\alpha$-connection for the regular parameter $\theta$ is given by
$$\Gamma^{(\alpha)}_{111} = -\frac{1-\alpha}{\theta^3}$$
for $\alpha\in\mathbb{R}$. We also have $A^{(1,1)}(\theta,\gamma)=1$ and $c(\theta,\gamma)=\theta$. Thus, the condition for $\pi^\gamma_{\mathrm{PM}}(\theta,\gamma)$ from (6) becomes
$$\partial_\gamma\log\pi+\theta^2\partial_\theta\log\pi = -\theta.$$
The condition for $\pi^\theta_{\mathrm{PM}}(\theta,\gamma)$ from (7) becomes
$$\partial_\theta\log\pi = -\frac1\theta.$$

Example 2 (Truncated normal distributions, continued). Consider the family of truncated normal distributions with the density (5), using the natural parameters $\theta=(\alpha,\beta)$ and $\gamma$. Let $\Psi(v):=\log(1-\Phi(v))$ and let $\Psi^{(r)}$ denote the $r$-th derivative of $\Psi(v)$ for $v\in\mathbb{R}$:
$$\Psi^{(1)}(v) = -\frac{\phi(v)}{1-\Phi(v)},\qquad \Psi^{(2)}(v) = -\frac{\phi'(v)}{1-\Phi(v)}-\left\{\frac{\phi(v)}{1-\Phi(v)}\right\}^2,$$
$$\Psi^{(3)}(v) = -\frac{\phi''(v)}{1-\Phi(v)}-\frac{3\phi'(v)\phi(v)}{\{1-\Phi(v)\}^2}-2\left\{\frac{\phi(v)}{1-\Phi(v)}\right\}^3.$$
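The entry $g_{11}=1/\theta^2$ of the metric in Example 1 is the Fisher information for $\theta$, i.e. the variance of the score $\partial_\theta\log p(x;\theta,\gamma)=1/\theta-(x-\gamma)$. A quick Monte Carlo sketch (not from the paper) confirms this numerically:

```python
import math
import random

random.seed(1)

theta, gamma, n = 1.5, 0.0, 200_000
xs = [gamma - math.log(random.random()) / theta for x in range(n)]

# score for theta: d/dtheta log p(x; theta, gamma) = 1/theta - (x - gamma)
scores = [1.0 / theta - (x - gamma) for x in xs]
mean_s = sum(scores) / n                             # ~ 0 (score has mean zero)
var_s = sum((s - mean_s) ** 2 for s in scores) / n   # ~ 1/theta^2 = g_11
```

With $\theta=1.5$ the estimate `var_s` lands near $1/\theta^2\approx0.444$, matching the $g_{11}$ entry above.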
The Riemannian metric $g$ is given by
$$g_{11} = -\frac{1}{2\beta}+(\partial_\alpha\nu)^2\Psi^{(2)}(\nu),$$
$$g_{12} = \frac{\alpha}{2\beta^2}+(\partial_\alpha\partial_\beta\nu)\Psi^{(1)}(\nu)+(\partial_\alpha\nu)(\partial_\beta\nu)\Psi^{(2)}(\nu),$$
$$g_{22} = \frac{1}{2\beta^2}-\frac{\alpha^2}{2\beta^3}+(\partial_\beta\partial_\beta\nu)\Psi^{(1)}(\nu)+(\partial_\beta\nu)^2\Psi^{(2)}(\nu),$$
$$g_{1\gamma} = g_{2\gamma} = 0,\qquad g_{\gamma\gamma} = \left(\Psi^{(1)}(\nu)\right)^2(\partial_\gamma\nu)^2,$$
where
$$\partial_\alpha\nu = -\frac{1}{\sqrt{-2\beta}},\qquad \partial_\beta\nu = -\frac{\gamma}{\sqrt{-2\beta}}-\frac{\alpha}{\sqrt{-2\beta}^{\,3}},\qquad \partial_\gamma\nu = \sqrt{-2\beta},$$
$$\partial_\alpha\partial_\beta\nu = -\frac{1}{\sqrt{-2\beta}^{\,3}},\qquad \partial_\beta\partial_\beta\nu = -\frac{\gamma}{\sqrt{-2\beta}^{\,3}}-\frac{3\alpha}{\sqrt{-2\beta}^{\,5}}.$$
We also obtain
$$A^{(1,1)}_1(\alpha,\beta,\gamma) = -(\partial_\alpha\nu)(\partial_\gamma\nu)\Psi^{(2)}(\nu),$$
$$A^{(1,1)}_2(\alpha,\beta,\gamma) = -(\partial_\beta\partial_\gamma\nu)\Psi^{(1)}(\nu)-(\partial_\beta\nu)(\partial_\gamma\nu)\Psi^{(2)}(\nu),$$
$$c(\alpha,\beta,\gamma) = -\Psi^{(1)}(\nu)(\partial_\gamma\nu).$$
The conditions for the probability matching priors $\pi^\gamma_{\mathrm{PM}}$, $\pi^1_{\mathrm{PM}}$, $\pi^2_{\mathrm{PM}}$ in (6) and (7) can be computed from the above equations.

4 Moment matching priors for non-regular models with regular multiparameters

This section provides the moment matching priors on a one-sided truncated family. Before introducing our main results, we briefly review moment matching priors. Moment matching priors are prior distributions that asymptotically match the Bayesian posterior mean and the maximum likelihood estimator. M. Ghosh and Liu (2011) proposed this idea for regular statistical models. In regular models, both the Bayes posterior mean and the MLE are asymptotically normal with order $1/\sqrt n$. A first-order moment matching prior eliminates the $1/n$ discrepancy between these two estimators. Note that moment matching priors are not generally invariant under parameter transformations (M. Ghosh & Liu, 2011). For example, in exponential families, the moment matching prior for the natural parameters is the Jeffreys prior, while that for the expectation parameters is not.

We derive conditions for moment matching priors in two cases: (i) when the non-regular parameter $\gamma$ is of interest; (ii) when the regular parameter $\theta^j$ is of interest for $j=1,\dots,d$. Let $\hat\theta_\pi$ and $\hat\gamma_\pi$ denote the posterior means of $\theta$ and $\gamma$ under a prior $\pi$, respectively. In case (i), the moment matching prior $\pi^\gamma_{\mathrm{MM}}$ is defined as
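The closed forms of $\Psi^{(1)}$ and $\Psi^{(2)}$ above can be verified against finite differences of $\Psi(v)=\log(1-\Phi(v))$. This sketch (not from the paper) implements $\Phi$ via the standard-library `math.erfc` and uses $\phi'(v)=-v\phi(v)$ to simplify $\Psi^{(2)}$:

```python
import math

def Phi(v):    # standard normal distribution function
    return 0.5 * math.erfc(-v / math.sqrt(2.0))

def phi(v):    # standard normal density
    return math.exp(-0.5 * v * v) / math.sqrt(2.0 * math.pi)

def Psi(v):    # Psi(v) = log(1 - Phi(v)) = log(0.5 * erfc(v / sqrt(2)))
    return math.log(0.5 * math.erfc(v / math.sqrt(2.0)))

def Psi1(v):   # closed form of the first derivative
    return -phi(v) / (1.0 - Phi(v))

def Psi2(v):   # closed form of the second derivative, using phi'(v) = -v * phi(v)
    r = phi(v) / (1.0 - Phi(v))
    return v * r - r * r

v, h = 0.7, 1e-5
fd1 = (Psi(v + h) - Psi(v - h)) / (2.0 * h)    # finite-difference Psi'
fd2 = (Psi1(v + h) - Psi1(v - h)) / (2.0 * h)  # finite-difference Psi''
```

Both central differences agree with the closed forms to high accuracy, which is a useful sanity check before assembling the metric components.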
one satisfying $\hat\gamma_\pi-\hat\gamma^*_{\mathrm{ML}} = O_p(n^{-3})$, where $\hat\gamma^*_{\mathrm{ML}} = \hat\gamma_{\mathrm{ML}}-\frac{1}{n\hat c}$ is the bias-adjusted MLE (Hashimoto, 2019). In case (ii), the moment matching prior $\pi^j_{\mathrm{MM}}$ is defined by $\hat\theta^j_\pi-\hat\theta^j_{\mathrm{ML}} = O_p(n^{-3/2})$ for $j=1,\dots,d$ (M. Ghosh & Liu, 2011). Hashimoto (2019) derived these priors in a setting with a one-dimensional regular parameter and a non-regular parameter. Our results extend this to $d$-dimensional regular parameters, restricted to the oTF setting. The proofs of the following results are given in Appendix C. The asymptotic expansion of the posterior density in Lemma 1 yields the following conditions.

Theorem 3. The moment matching prior $\pi^\gamma_{\mathrm{MM}}(\theta,\gamma)$ for the truncation parameter $\gamma$ is the solution to
$$\partial_\gamma\log\pi+\frac12 A^{(2,1)}_{ij}g^{ij}-2\partial_\gamma\log c+A^{(1,1)}_i g^{ij}\left\{\partial_j\log\pi-2\partial_j\log c+\frac12 A^{(3,0)}_{jkm}g^{km}\right\} = 0. \tag{8}$$
The moment matching prior $\pi^j_{\mathrm{MM}}(\theta,\gamma)$ for the regular parameter $\theta^i$ ($i=1,\dots,d$) is the solution to
$$\partial_i\log\left\{\frac{\pi(\theta,\gamma)}{\pi_J(\theta,\gamma)}\right\}-\frac12\,\Gamma^{(e)}_{jk,i}(\theta,\gamma)\,g^{jk}(\theta,\gamma) = 0 \tag{9}$$
for $i=1,\dots,d$, where $\pi_J(\theta,\gamma)=\sqrt{\det g(\theta,\gamma)}$.

The partial differential equation for $\pi^i_{\mathrm{MM}}(\theta,\gamma)$ resembles that in regular models $\{P_\theta:\theta\in\Theta\}$, where the moment matching prior satisfies
$$\partial_i\log\left\{\frac{\pi(\theta)}{\pi_J(\theta)}\right\}-\frac12\,\Gamma^{(e)}_{jk,i}(\theta)\,g^{jk}(\theta) = 0,$$
with $\pi_J(\theta)$ being the Jeffreys prior (Tanaka, 2023).

Example 1 (Truncated exponential distributions, continued). Consider the family of truncated exponential distributions with the density (3). In this case, we have
$$A^{(2,1)}(\theta,\gamma)=0,\qquad A^{(3,0)}(\theta,\gamma)=\frac{2}{\theta^3},\qquad \Gamma^{(e)}_{11,1}(\theta,\gamma)=0,\qquad \pi_J(\theta,\gamma)=1.$$
The values of $A^{(1,1)}(\theta,\gamma)$, $c(\theta,\gamma)$, and $g(\theta,\gamma)$ are given in Section 3. Thus, the condition (8) for the moment matching prior $\pi^\gamma_{\mathrm{MM}}$ is
$$\partial_\gamma\log\pi(\theta,\gamma)+\theta^2\partial_\theta\log\pi(\theta,\gamma)-\theta = 0.$$
The condition (9) for the moment matching prior $\pi^\theta_{\mathrm{MM}}$ is
$$\partial_\theta\log\{\pi(\theta,\gamma)\} = 0.$$
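The effect of the bias adjustment $\hat\gamma^*_{\mathrm{ML}}=\hat\gamma_{\mathrm{ML}}-1/(n\hat c)$ is easy to see by simulation in Example 1, where $c(\theta,\gamma)=\theta$. This sketch (my own illustration, using the standard closed-form MLEs for this family, which are not spelled out in this excerpt) compares the bias of the raw and adjusted estimators:

```python
import math
import random

random.seed(2)

theta, gamma, n, reps = 1.0, 0.0, 50, 20_000
raw, corrected = [], []
for _ in range(reps):
    xs = [gamma - math.log(random.random()) / theta for _ in range(n)]
    g_hat = min(xs)                       # MLE of gamma, biased upward by ~1/(n*theta)
    c_hat = 1.0 / (sum(xs) / n - g_hat)   # c(theta, gamma) = theta for this family
    raw.append(g_hat - gamma)
    corrected.append(g_hat - 1.0 / (n * c_hat) - gamma)

bias_raw = sum(raw) / reps         # ~ 1/(n*theta) = 0.02
bias_corr = sum(corrected) / reps  # much closer to 0
```

The raw bias sits near $1/(n\theta)$, while the adjusted estimator removes it to first order, as the definition of $\hat\gamma^*_{\mathrm{ML}}$ intends.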
Example 2 (Truncated normal distributions, continued). Consider the family of truncated normal distributions with the density (5), with the natural parameters $\theta=(\alpha,\beta)$ and $\gamma$. The components of $A^{(3,0)}$ are given by
$$A^{(3,0)}_{111}(\theta,\gamma) = -(\partial_\alpha\nu)^3\Psi^{(3)}(\nu),$$
$$A^{(3,0)}_{112}(\theta,\gamma) = -\frac{1}{2\beta^2}-(\partial_\alpha\nu)^2(\partial_\beta\nu)\Psi^{(3)}(\nu)-2(\partial_\alpha\partial_\beta\nu)(\partial_\alpha\nu)\Psi^{(2)}(\nu),$$
$$A^{(3,0)}_{122}(\theta,\gamma) = \frac{\alpha}{\beta^3}-(\partial_\alpha\nu)(\partial_\beta\nu)^2\Psi^{(3)}(\nu)-\{2(\partial_\alpha\partial_\beta\nu)(\partial_\beta\nu)+(\partial_\beta\partial_\beta\nu)(\partial_\alpha\nu)\}\Psi^{(2)}(\nu)-(\partial_\alpha\partial_\beta\partial_\beta\nu)\Psi^{(1)}(\nu),$$
$$A^{(3,0)}_{222}(\theta,\gamma) = \frac{1}{\beta^3}-\frac{3\alpha^2}{2\beta^4}-(\partial_\beta\nu)^3\Psi^{(3)}(\nu)-3(\partial_\beta\partial_\beta\nu)(\partial_\beta\nu)\Psi^{(2)}(\nu)-(\partial_\beta\partial_\beta\partial_\beta\nu)\Psi^{(1)}(\nu).$$
The components of $A^{(2,1)}$ are given by
$$A^{(2,1)}_{11}(\theta,\gamma) = -(\partial_\alpha\nu)^2(\partial_\gamma\nu)\Psi^{(3)}(\nu),$$
$$A^{(2,1)}_{12}(\theta,\gamma) = -(\partial_\alpha\nu)(\partial_\beta\nu)(\partial_\gamma\nu)\Psi^{(3)}(\nu)-\{(\partial_\alpha\partial_\beta\nu)(\partial_\gamma\nu)+(\partial_\alpha\nu)(\partial_\beta\partial_\gamma\nu)\}\Psi^{(2)}(\nu),$$
$$A^{(2,1)}_{22}(\theta,\gamma) = -(\partial_\beta\nu)^2(\partial_\gamma\nu)\Psi^{(3)}(\nu)-\{(\partial_\beta\partial_\beta\nu)(\partial_\gamma\nu)+2(\partial_\beta\partial_\gamma\nu)(\partial_\beta\nu)\}\Psi^{(2)}(\nu)-(\partial_\beta\partial_\beta\partial_\gamma\nu)\Psi^{(1)}(\nu).$$
Note that
$$\partial_\alpha\partial_\beta\partial_\beta\nu = -\frac{3}{\sqrt{-2\beta}^{\,5}},\qquad \partial_\beta\partial_\beta\partial_\beta\nu = -\frac{3\gamma}{\sqrt{-2\beta}^{\,5}}-\frac{15\alpha}{\sqrt{-2\beta}^{\,7}},\qquad \partial_\beta\partial_\beta\partial_\gamma\nu = -\frac{1}{\sqrt{-2\beta}^{\,3}}.$$
We obtain the condition (8) for the moment matching prior $\pi^\gamma_{\mathrm{MM}}$ from the above equations. We also obtain the conditions (9) for the moment matching priors $\pi^1_{\mathrm{MM}}$, $\pi^2_{\mathrm{MM}}$ by
$$\pi_J(\theta,\gamma) = \sqrt{g_{11}g_{22}-g_{12}g_{12}}\left\{-(\partial_\gamma\nu)\Psi^{(1)}(\nu)\right\},\qquad \Gamma^{(e)}_{ij,k}(\theta,\gamma) = 0\ \text{for}\ i,j,k=1,2.$$

5 The Lie derivative shared by probability and moment matching priors

In this section, we derive the relationship between the two matching priors on the oTEF. On the oTEF $\mathcal{P}_e$, the conditions for the matching priors take a simple form. We then obtain a point in common between the two matching priors for the truncation parameter $\gamma$: the Lie derivative appears in both conditions.

Let $X_1,\dots,X_n$ be i.i.d. random samples from a distribution $p(x;\theta,\gamma)$ in an oTEF $\mathcal{P}_e$. We use the same assumptions and notation as in the previous sections, apart from the statistical model. Since the density function of the oTEF has the form (1), it follows that
$$D_\theta A^{(r,0)}(\theta,\gamma) = A^{(r+1,0)}(\theta,\gamma) = -D_\theta^{\otimes r+1}\psi(\theta,\gamma)\qquad(r=0,1,2,\dots).$$
This property simplifies the conditions for the matching priors. Consider the vector field $\chi:=\partial_\gamma+A^{(1,1)}_i g^{ij}\partial_j$ on $\mathcal{P}_e$. Let $\mathcal{L}_\chi$ be the Lie derivative along $\chi$. Then, the conditions (6) for the probability matching
prior $\pi^\gamma_{\mathrm{PM}}$ on the oTEF take the form
$$\mathcal{L}_\chi\left\{\log\pi-\log(\det g_\theta)-\frac12\log g_{\gamma\gamma}\right\} = 0. \tag{10}$$
On the other hand, the conditions (8) for the moment matching prior $\pi^\gamma_{\mathrm{MM}}$ on the oTEF take the form
$$\mathcal{L}_\chi\left\{\log\pi-\frac12\log(\det g_\theta)-\log g_{\gamma\gamma}\right\} = 0. \tag{11}$$
Thus, the two matching priors share the feature that the Lie derivative $\mathcal{L}_\chi$ appears in their partial differential equations.

The vector field $\chi$ can be regarded as a natural vector field in another coordinate system. Consider the reparametrization $(\theta,\gamma)\mapsto(\eta,\gamma')$ with $\eta=D_\theta\psi(\theta,\gamma)$ and $\gamma'=\gamma$. We call $\eta$ the expectation parameters, since $\eta_i=E[F_i(X)]$ for $i=1,\dots,d$ on the oTEF. In this case, the parameter space is the set $H:=\{(\eta(\theta,\gamma),\gamma):\theta\in\Theta,\gamma\in I\}$. Then, the natural vector fields of $(\eta,\gamma')$ are written as
$$\frac{\partial}{\partial\eta_i} = g^{ij}\frac{\partial}{\partial\theta^j},\qquad \frac{\partial}{\partial\gamma'} = \chi.$$
The Lie derivative $\mathcal{L}_\chi$ characterizes the two types of matching priors on the following submodel. Let $H':=\{\eta(\theta,\gamma):\theta\in\Theta,\gamma\in I\}$ and $I'_{\eta_0}:=\{\gamma:(\eta_0,\gamma)\in H\}$ for $\eta_0\in H'$. We call
$$\mathrm{Evol}^{(\rho,\tau)}_g := (\det g_\theta)^{\rho+1/2}(g_{\gamma\gamma})^{\tau+1/2}$$
an extended volume element of the oTEF for $\rho,\tau\in\mathbb{R}$ with coordinates $(\theta,\gamma)$.

Theorem 4. Consider the submodel $\mathcal{P}_{e,\eta_0}:=\{p(x;\eta_0,\gamma):\gamma\in I'_{\eta_0}\}$ for a fixed $\eta_0\in H'$. Then, there exist a unique probability matching prior $\pi^\gamma_{\mathrm{PM}}$ and a unique moment matching prior $\pi^\gamma_{\mathrm{MM}}$ on $\mathcal{P}_{e,\eta_0}$ such that
$$\pi^\gamma_{\mathrm{PM}}(\gamma)\propto\mathrm{Evol}^{(1/2,0)}_g(\eta_0,\gamma) = \{\det g_\theta(\eta_0,\gamma)\}\sqrt{g_{\gamma\gamma}(\eta_0,\gamma)},$$
$$\pi^\gamma_{\mathrm{MM}}(\gamma)\propto\mathrm{Evol}^{(0,1/2)}_g(\eta_0,\gamma) = \sqrt{\det g_\theta(\eta_0,\gamma)}\,g_{\gamma\gamma}(\eta_0,\gamma).$$

Proof. On the model $\mathcal{P}_{e,\eta_0}$, the two conditions (10) and (11) are given by
$$\partial_\gamma\left\{\log\pi(\gamma)-\log\left(\mathrm{Evol}^{(1/2,0)}_g(\eta_0,\gamma)\right)\right\} = 0,\qquad \partial_\gamma\left\{\log\pi(\gamma)-\log\left(\mathrm{Evol}^{(0,1/2)}_g(\eta_0,\gamma)\right)\right\} = 0.$$
Thus, $\mathrm{Evol}^{(1/2,0)}_g(\eta_0,\gamma)$ and $\mathrm{Evol}^{(0,1/2)}_g(\eta_0,\gamma)$ are the unique matching priors, respectively.

Some $\alpha$-parallel priors on the oTEF are matching priors. $\alpha$-parallel priors are extensions of the Jeffreys prior from a geometric point of view (see Takeuchi and Amari (2005) for details).
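For the truncated exponential family of Example 1, $A^{(1,1)}g^{11}=\theta^2$, so the vector field becomes $\chi=\partial_\gamma+\theta^2\partial_\theta$ and its streamlines solve $d\theta/d\gamma=\theta^2$. Since the expectation parameter for that family is $\eta=1/\theta+\gamma$, it should be conserved along the flow (the submodels of Theorem 4 are exactly the level sets of $\eta$). A small sketch, not from the paper, checks this with a fourth-order Runge–Kutta integration:

```python
# Streamline of chi = d/dgamma + theta^2 * d/dtheta:
# integrate d(theta)/d(gamma) = theta^2 with RK4 and check that the
# expectation parameter eta = 1/theta + gamma stays constant along the flow.

def rk4_step(f, g, th, h):
    k1 = f(g, th)
    k2 = f(g + h / 2.0, th + h * k1 / 2.0)
    k3 = f(g + h / 2.0, th + h * k2 / 2.0)
    k4 = f(g + h, th + h * k3)
    return th + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

f = lambda g, th: th * th
gamma, theta, h = 0.0, 1.0, 0.01
eta0 = 1.0 / theta + gamma            # = 1.0
for _ in range(50):                   # integrate gamma from 0 to 0.5
    theta = rk4_step(f, gamma, theta, h)
    gamma += h
eta = 1.0 / theta + gamma             # should still equal eta0
# exact solution: theta(gamma) = 1/(1 - gamma), so theta(0.5) = 2
```

The conserved $\eta$ is the analytic counterpart of the statement that $\chi$ is the natural vector field $\partial/\partial\gamma'$ in the $(\eta,\gamma')$ coordinates.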
The priors are originally defined for regular models, but they can be extended to the oTEF with the $\alpha$-connections in (2) (Yoshioka & Tanaka, 2023a). The explicit form of the $\alpha$-parallel priors on the oTEF is given by
$$\pi^{(\alpha)}(\theta,\gamma)\propto\mathrm{Evol}^{(0,\alpha/2)}_g(\theta,\gamma)$$
for $\alpha\in\mathbb{R}$. Note that $\pi^{(0)}=\pi_J$. Then $\pi^{(0)}$, $\pi^{(-1)}$, and the square of $\pi^{(1/2)}$ satisfy the conditions (9), (10), and (11), respectively. The prior $\pi^{(0)}$ is a moment matching prior for the natural parameter $\theta$. Also, $\pi^{(-1)}$ is a probability matching prior for the truncation parameter $\gamma$, and the square of $\pi^{(1/2)}$ is a moment matching prior for the truncation parameter $\gamma$.

Example 1 (Truncated exponential distributions, continued). Consider the family of truncated exponential distributions with the density (3). This family is one of the oTEFs. In Sections 3 and 4, we obtained the conditions for the two matching priors for this family. Then,
$$\pi^\gamma_{\mathrm{PM}}(\theta,\gamma)\propto\frac1\theta,\qquad \pi^\theta_{\mathrm{PM}}(\theta,\gamma)\propto\frac1\theta,\qquad \pi^\gamma_{\mathrm{MM}}(\theta,\gamma)\propto\theta,\qquad \pi^\theta_{\mathrm{MM}}(\theta,\gamma)\propto1$$
are solutions of the respective partial differential equations. The vector field $\chi$ on this family is given by $\chi=\partial_\gamma+\theta^2\partial_\theta$. The streamlines along $\chi$ are shown in Figure 1.

[Fig. 1: Streamlines of $\chi$ for $\mathrm{Exp}(\theta,\gamma)$ in the $(\theta,\gamma)$ plane.]

The expectation parameter is $\eta=1/\theta+\gamma$ with the parameter space $H=\{(\eta,\gamma):\eta>\gamma\}$. Then, there exist unique probability and moment matching priors, denoted by $\pi^\gamma_{\mathrm{PM}}$ and $\pi^\gamma_{\mathrm{MM}}$, respectively, such that
$$\pi^\gamma_{\mathrm{PM}}(\eta_0,\gamma)\propto\frac{1}{\eta_0-\gamma},\qquad \pi^\gamma_{\mathrm{MM}}(\eta_0,\gamma)\propto\eta_0-\gamma$$
on the submodel $\{p(x;\eta_0,\gamma):\gamma<\eta_0\}$ for fixed $\eta_0\in\mathbb{R}$. Note that each streamline in Figure 1 represents one such submodel in the $(\theta,\gamma)$ coordinates.

Example 2 (Truncated normal distributions, continued). Consider the family of truncated normal distributions with fixed $\beta=-1/2$ in the density (4), for simplicity. The density $p(x;\alpha,-1/2,\gamma)$ represents the truncated normal distribution with $\mu=\alpha$ and $\sigma=1$.

[Fig. 2: Streamlines of $\chi$ for $N(\alpha,1,\gamma)$ in the $(\alpha,\gamma)$ plane.]

In this case, the vector field $\chi$ is given by
$$\chi = \partial_\gamma+\frac{\Psi^{(2)}(\nu)}{1+\Psi^{(2)}(\nu)}\partial_\alpha.$$
The streamlines along $\chi$ are shown in Figure 2. The expectation parameter is $\eta=\alpha-\Psi^{(1)}(\nu)$ with the parameter space $H=\mathbb{R}^2$. The same calculation can be performed for the full family of truncated normal distributions.

6 Concluding remarks

This paper reveals the common geometric structure of the probability and moment matching priors in multivariate non-regular models. In particular, we derive the partial differential equations characterizing the two types of matching priors within the framework of an oTF, as presented in Theorems 2 and 3. These equations involve the connection coefficients, which do not appear in the univariate case ($d=1$) (Ghosal, 1999; Hashimoto, 2019).

These differential equations simplify for the subclass of oTF known as oTEF due to the vanishing of the e-connection coefficients when expressed in terms of the natural parameter $\theta$. Under this restriction, we can express the conditions for both matching priors in terms of the Lie derivative along a common vector field. This formulation highlights an invariance of a generalized volume element with respect to differentiation in the direction of the truncation parameter under the parameter transformation.

While the geometric formulation is made explicit in the oTEF case, extending this structure to more general non-regular models remains an open problem. Although the differential equations for both matching priors can be analogously derived, expressing their conditions in geometric terms requires further understanding of the underlying geometric properties of non-regular models.

Supplementary information.

Acknowledgements.
This work was supported by JSPS KAKENHI grant number JP23K11006 and JST SPRING grant number JPMJSP2138.

Declarations

Competing interests. The authors have no competing interests to declare that are relevant to the content of this article.

Appendix A Proof of Lemma 1

Let
$$\tilde\theta_u := \hat\theta_{\mathrm{ML}}+\frac{u}{\sqrt n},\qquad \tilde\gamma_t := \hat\gamma_{\mathrm{ML}}+\frac{t}{n\hat c}.$$
The posterior density $\pi(u,t;X^n)$ is given by
$$\pi(u,t;X^n) = \frac{\pi(\tilde\theta_u,\tilde\gamma_t)\prod_{i=1}^n p(X_i;\tilde\theta_u,\tilde\gamma_t)}{\displaystyle\int\pi(\tilde\theta_{u'},\tilde\gamma_{t'})\prod_{i=1}^n p(X_i;\tilde\theta_{u'},\tilde\gamma_{t'})\,du'dt'} = \frac{\pi(\tilde\theta_u,\tilde\gamma_t)\exp\left[\sum_{i=1}^n\left\{\log p(X_i;\tilde\theta_u,\tilde\gamma_t)-\log p(X_i;\hat\theta_{\mathrm{ML}},\hat\gamma_{\mathrm{ML}})\right\}\right]}{\displaystyle\int\pi(\tilde\theta_{u'},\tilde\gamma_{t'})\exp\left[\sum_{i=1}^n\left\{\log p(X_i;\tilde\theta_{u'},\tilde\gamma_{t'})-\log p(X_i;\hat\theta_{\mathrm{ML}},\hat\gamma_{\mathrm{ML}})\right\}\right]du'dt'}. \tag{A1}$$
To calculate the asymptotic expansion of the posterior density, we expand the three terms in (A1): $\pi(\tilde\theta_u,\tilde\gamma_t)$, the exponential term, and the denominator.

First, we derive the asymptotic expansion of the prior term $\pi(\tilde\theta_u,\tilde\gamma_t)$. By Taylor's theorem, we have
$$\pi(\tilde\theta_u,\tilde\gamma_t) = \hat\pi+D_\theta^\top\hat\pi\frac{u}{\sqrt n}+\partial_\gamma\hat\pi\frac{t}{n\hat c}+\frac12\left(\frac{u}{\sqrt n}\right)^{\!\top}\!D_\theta D_\theta^\top\hat\pi\frac{u}{\sqrt n}+O_p\!\left(\frac{1}{n^{3/2}}\right) = \hat\pi+\frac{1}{\sqrt n}P_1(u)+\frac1n P_2(u,t)+O_p\!\left(\frac{1}{n^{3/2}}\right), \tag{A2}$$
where
$$\hat\pi := \pi(\hat\theta_{\mathrm{ML}},\hat\gamma_{\mathrm{ML}}),\qquad P_1(u) := D_\theta^\top\hat\pi\,u,\qquad P_2(u,t) := \partial_\gamma\hat\pi\frac{t}{\hat c}+\frac12 u^\top D_\theta D_\theta^\top\hat\pi\,u.$$
Second, we derive the asymptotic expansion of the exponential term in (A1). Let $\tilde l_i(u,t) := \log p(X_i;\tilde\theta_u,\tilde\gamma_t)$ and $\hat l_i := \log p(X_i;\hat\theta_{\mathrm{ML}},\hat\gamma_{\mathrm{ML}})$.
By Taylor's theorem, we get the asymptotic expansion of $\tilde l_i(u,t)-\hat l_i$ as
$$\begin{aligned}
\tilde l_i(u,t)-\hat l_i &= (D_\theta\hat l_i)^\top\left(\frac{u}{\sqrt n}\right)+\partial_\gamma\hat l_i\left(\frac{t}{n\hat c}\right)+\frac12(D_\theta^{\otimes2}\hat l_i)^\top\left(\frac{u}{\sqrt n}\right)^{\otimes2}+\frac12\partial_\gamma\partial_\gamma\hat l_i\left(\frac{t}{n\hat c}\right)^2+(D_\theta\partial_\gamma\hat l_i)^\top\left(\frac{u}{\sqrt n}\right)\left(\frac{t}{n\hat c}\right)\\
&\quad+\frac{1}{3!}(D_\theta^{\otimes3}\hat l_i)^\top\left(\frac{u}{\sqrt n}\right)^{\otimes3}+\frac{3}{3!}(D_\theta^{\otimes2}\partial_\gamma\hat l_i)^\top\left(\frac{u}{\sqrt n}\right)^{\otimes2}\left(\frac{t}{n\hat c}\right)+\frac{1}{4!}(D_\theta^{\otimes4}\hat l_i)^\top\left(\frac{u}{\sqrt n}\right)^{\otimes4}+O_p\!\left(\frac{1}{n^{5/2}}\right)
\end{aligned}$$
$$\begin{aligned}
&= \frac{1}{\sqrt n}(D_\theta\hat l_i)^\top u+\frac1n\partial_\gamma\hat l_i\left(\frac{t}{\hat c}\right)+\frac{1}{2n}(D_\theta^{\otimes2}\hat l_i)^\top u^{\otimes2}+\frac{1}{n^{3/2}}(D_\theta\partial_\gamma\hat l_i)^\top u\left(\frac{t}{\hat c}\right)+\frac{1}{3!\,n^{3/2}}(D_\theta^{\otimes3}\hat l_i)^\top u^{\otimes3}\\
&\quad+\frac{1}{2n^2}\partial_\gamma\partial_\gamma\hat l_i\left(\frac{t}{\hat c}\right)^2+\frac{1}{2n^2}(D_\theta^{\otimes2}\partial_\gamma\hat l_i)^\top u^{\otimes2}\left(\frac{t}{\hat c}\right)+\frac{1}{4!\,n^2}(D_\theta^{\otimes4}\hat l_i)^\top u^{\otimes4}+O_p\!\left(\frac{1}{n^{5/2}}\right).
\end{aligned}$$
Recall that $\sum_i D_\theta\hat l_i=0$, $\hat g_\theta=-\sum_i D_\theta D_\theta^\top\hat l_i/n$, $\hat c=\sum_i\partial_\gamma\hat l_i/n$, and $\hat A^{(r,s)}=\sum_i D_\theta^{\otimes r}(\partial_\gamma)^s\hat l_i/n$. It follows that
$$\begin{aligned}
\sum_{i=1}^n\left(\log p(X_i;\tilde\theta_u,\tilde\gamma_t)-\log p(X_i;\hat\theta_{\mathrm{ML}},\hat\gamma_{\mathrm{ML}})\right) &= t-\frac12 u^\top\hat g_\theta u+\frac{1}{\sqrt n}\left\{\hat A^{(1,1)\top}u\frac{t}{\hat c}+\frac{1}{3!}\hat A^{(3,0)\top}u^{\otimes3}\right\}\\
&\quad+\frac1n\left\{\frac12\hat A^{(0,2)}\frac{t^2}{\hat c^2}+\frac12\hat A^{(2,1)\top}u^{\otimes2}\frac{t}{\hat c}+\frac{1}{4!}\hat A^{(4,0)\top}u^{\otimes4}\right\}+O_p\!\left(\frac{1}{n^{3/2}}\right)\\
&= t-\frac12 u^\top\hat g_\theta u+\frac{1}{\sqrt n}L_1(u,t)+\frac1n L_2(u,t)+O_p\!\left(\frac{1}{n^{3/2}}\right),
\end{aligned}$$
where
$$L_1(u,t) = \hat A^{(1,1)\top}u\frac{t}{\hat c}+\frac{1}{3!}\hat A^{(3,0)\top}u^{\otimes3},\qquad L_2(u,t) = \frac12\hat A^{(0,2)}\frac{t^2}{\hat c^2}+\frac12\hat A^{(2,1)\top}u^{\otimes2}\frac{t}{\hat c}+\frac{1}{4!}\hat A^{(4,0)\top}u^{\otimes4}.$$
Then, the asymptotic expansion of the exponential term is given by
$$\exp\left[\sum_{i=1}^n\left\{\log p(X_i;\tilde\theta_u,\tilde\gamma_t)-\log p(X_i;\hat\theta_{\mathrm{ML}},\hat\gamma_{\mathrm{ML}})\right\}\right] = \exp\left\{t-\frac12 u^\top\hat g_\theta u\right\}\left[1+\frac{1}{\sqrt n}L_1(u,t)+\frac1n\left(L_2(u,t)+\frac12 L_1^2(u,t)\right)+O_p\!\left(\frac{1}{n^{3/2}}\right)\right]. \tag{A3}$$
Third, we derive the asymptotic expansion of the denominator of (A1).
From (A2) and (A3), the numerator of (A1) is represented as
$$\begin{aligned}
&\pi(\tilde\theta_u,\tilde\gamma_t)\exp\left[\sum_{i=1}^n\left\{\log p(X_i;\tilde\theta_u,\tilde\gamma_t)-\log p(X_i;\hat\theta_{\mathrm{ML}},\hat\gamma_{\mathrm{ML}})\right\}\right]\\
&= \exp\left\{t-\frac12 u^\top\hat g_\theta u\right\}\left[\hat\pi+\frac{1}{\sqrt n}\{P_1(u)+\hat\pi L_1(u,t)\}+\frac1n\left\{P_1(u)L_1(u,t)+P_2(u,t)+\hat\pi L_2(u,t)+\frac12\hat\pi L_1^2(u,t)\right\}+O_p\!\left(\frac{1}{n^{3/2}}\right)\right]. \tag{A4}
\end{aligned}$$
To describe the integration of the numerator in (A1), we introduce symmetrizer matrices $S_r$ as follows (see Holmquist (1988) for details). Let $e_1,\dots,e_d$ be the standard basis of $\mathbb{R}^d$. The symmetrizer matrix $S_r\in\mathbb{R}^{d^r\times d^r}$ acts on $r$-tensor vectors as
$$S_r(e_{i_1}\otimes\cdots\otimes e_{i_r}) = \frac{1}{r!}\sum_{\pi\in\mathfrak{S}_r}e_{i_{\pi(1)}}\otimes\cdots\otimes e_{i_{\pi(r)}} \tag{A5}$$
for $i_1,\dots,i_r=1,\dots,d$, where $\mathfrak{S}_r$ is the symmetric group of degree $r$. When $r=2,3$, it holds that
$$S_2(e_i\otimes e_j) = \frac12(e_i\otimes e_j+e_j\otimes e_i),$$
$$S_3(e_i\otimes e_j\otimes e_k) = \frac16(e_i\otimes e_j\otimes e_k+e_j\otimes e_k\otimes e_i+e_k\otimes e_i\otimes e_j+e_i\otimes e_k\otimes e_j+e_k\otimes e_j\otimes e_i+e_j\otimes e_i\otimes e_k)$$
for $i,j,k=1,\dots,d$.

Let us go back to the proof. By integrating the numerator term (A4) over $u$ and $t$, we obtain the denominator in (A1). The following identities are useful for the calculation of the integrals:
$$\frac{1}{(2\pi)^{d/2}\sqrt{\det\hat g_\theta^{-1}}}\int_{\mathbb{R}^d}u^{\otimes r}e^{-u^\top\hat g_\theta u/2}\,du = \begin{cases}0 & (r\ \text{odd}),\\ (r-1)!!\,S_r\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes r/2} & (r\ \text{even}),\end{cases}\qquad \int_{-\infty}^0 t^r e^t\,dt = r!\,(-1)^r.$$
See Holmquist (1988) for the first identity. We calculate the integrals term by term. Let $\nu(\hat g_\theta^{-1}) = (2\pi)^{d/2}\sqrt{\det\hat g_\theta^{-1}}$. It holds that
$$\int\{P_1(u)+\hat\pi L_1(u,t)\}e^{t-u^\top\hat g_\theta u/2}\,du\,dt = 0$$
since $\int u\,e^{-u^\top\hat g_\theta u/2}\,du = 0$.
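The two integral identities quoted above can be spot-checked numerically. The following standalone sketch (my own, not from the paper) verifies $\int_{-\infty}^0 t^re^t\,dt = r!(-1)^r$ by quadrature, and the Gaussian moment identity in the scalar case $d=1$, where $(r-1)!!\,S_r\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes r/2}$ reduces for $r=4$ to $3g^{-2}$:

```python
import math
import random

random.seed(3)

# (1) int_{-inf}^0 t^r e^t dt = r! * (-1)^r, via trapezoid quadrature
def quad(r, a=-40.0, m=200_000):
    h = -a / m
    s = 0.0
    for k in range(m + 1):
        t = a + k * h
        w = 0.5 if k in (0, m) else 1.0
        s += w * (t ** r) * math.exp(t)
    return s * h

# (2) the Gaussian moment identity for d = 1: with u ~ N(0, 1/g),
# E[u^4] = 3 * (g^{-1})^2   (r = 4 case; 3 = (4-1)!!, S_4 acts trivially)
g = 2.0
n = 400_000
m4 = sum(random.gauss(0.0, 1.0 / math.sqrt(g)) ** 4 for _ in range(n)) / n
```

The quadrature returns $-1$, $2$, and $-6$ for $r=1,2,3$, and the Monte Carlo fourth moment lands near $3/g^2=0.75$, matching the identities used throughout the proof.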
In the same way, it follows that
$$\int P_2(u,t)\,e^{t-u^\top\hat g_\theta u/2}\,du\,dt = \int\left\{\partial_\gamma\hat\pi\frac{t}{\hat c}+\frac12 D_\theta^{\otimes2\top}\hat\pi\,u^{\otimes2}\right\}e^{t-u^\top\hat g_\theta u/2}\,du\,dt = \nu(\hat g_\theta^{-1})\left\{-\frac{1}{\hat c}\partial_\gamma\hat\pi+\frac12\left(D_\theta^{\otimes2\top}\hat\pi\right)\mathrm{vec}\,\hat g_\theta^{-1}\right\},$$
$$\begin{aligned}
\int P_1(u)L_1(u,t)\,e^{t-u^\top\hat g_\theta u/2}\,du\,dt &= \int\left(D_\theta^\top\hat\pi\,u\right)\left(\frac{1}{\hat c}\hat A^{(1,1)\top}u\,t+\frac{1}{3!}\hat A^{(3,0)\top}u^{\otimes3}\right)e^{t-u^\top\hat g_\theta u/2}\,du\,dt\\
&= \frac{1}{\hat c}\left(D_\theta^\top\hat\pi\otimes\hat A^{(1,1)\top}\right)\int u^{\otimes2}t\,e^{t-u^\top\hat g_\theta u/2}\,du\,dt+\frac{1}{3!}\left(D_\theta^\top\hat\pi\otimes\hat A^{(3,0)\top}\right)\int u^{\otimes4}e^{t-u^\top\hat g_\theta u/2}\,du\,dt\\
&= \nu(\hat g_\theta^{-1})\left\{-\frac{1}{\hat c}\left(D_\theta^\top\hat\pi\otimes\hat A^{(1,1)\top}\right)\mathrm{vec}\,\hat g_\theta^{-1}+\frac{3}{3!}\left(D_\theta^\top\hat\pi\otimes\hat A^{(3,0)\top}\right)\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right\},
\end{aligned}$$
$$\int L_2(u,t)\,e^{t-u^\top\hat g_\theta u/2}\,du\,dt = \nu(\hat g_\theta^{-1})\left\{\frac{1}{\hat c^2}\hat A^{(0,2)}-\frac{1}{2\hat c}\hat A^{(2,1)\top}\mathrm{vec}\,\hat g_\theta^{-1}+\frac{3}{4!}\hat A^{(4,0)\top}\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right\},$$
$$\begin{aligned}
\int L_1^2(u,t)\,e^{t-u^\top\hat g_\theta u/2}\,du\,dt &= \int\frac{1}{\hat c^2}\left(\hat A^{(1,1)\top}\right)^{\otimes2}u^{\otimes2}t^2\,e^{t-u^\top\hat g_\theta u/2}\,du\,dt+\int\frac{1}{(3!)^2}\left(\hat A^{(3,0)\top}\right)^{\otimes2}u^{\otimes6}\,e^{t-u^\top\hat g_\theta u/2}\,du\,dt\\
&\quad+\int\frac{2}{3!\,\hat c}\left(\hat A^{(1,1)}\otimes\hat A^{(3,0)}\right)^{\!\top}u^{\otimes4}t\,e^{t-u^\top\hat g_\theta u/2}\,du\,dt\\
&= \nu(\hat g_\theta^{-1})\left\{\frac{2}{\hat c^2}\left(\hat A^{(1,1)\top}\right)^{\otimes2}\mathrm{vec}\,\hat g_\theta^{-1}+\frac{15}{3!^2}\left(\hat A^{(3,0)\top}\right)^{\otimes2}S_6\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes3}-\frac{1}{\hat c}\left(\hat A^{(1,1)}\otimes\hat A^{(3,0)}\right)^{\!\top}\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right\}.
\end{aligned}$$
We omitted the symmetrizers $S_2$, $S_4$ in the above equations because of the symmetry of the terms $\hat A$ and $\hat g_\theta$.
Then, the asymptotic expansion of the denominator of (A1) is
$$(2\pi)^{d/2}\sqrt{\det\hat g_\theta^{-1}}\left\{\hat\pi+\frac1n K_n+O\!\left(\frac{1}{n^{3/2}}\right)\right\}, \tag{A6}$$
where
$$\begin{aligned}
K_n &:= -\frac{1}{\hat c}\partial_\gamma\hat\pi+\frac12 D_\theta^{\otimes2}\hat\pi\,\mathrm{vec}\,\hat g_\theta^{-1}-\frac{1}{\hat c}\left(D_\theta\hat\pi\otimes\hat A^{(1,1)}\right)^{\!\top}\mathrm{vec}\,\hat g_\theta^{-1}+\frac{3}{3!}\left(D_\theta\hat\pi\otimes\hat A^{(3,0)}\right)^{\!\top}\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\\
&\quad+\hat\pi\left\{\frac{1}{\hat c^2}\hat A^{(0,2)}-\frac{1}{2\hat c}\hat A^{(2,1)\top}\mathrm{vec}\,\hat g_\theta^{-1}+\frac{3}{4!}\hat A^{(4,0)\top}\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}+\frac{1}{\hat c^2}\left(\hat A^{(1,1)\top}\right)^{\otimes2}\mathrm{vec}\,\hat g_\theta^{-1}\right.\\
&\qquad\left.+\frac{15}{2\cdot3!^2}\left(\hat A^{(3,0)\top}\right)^{\otimes2}S_6\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes3}-\frac{1}{2\hat c}\left(\hat A^{(1,1)}\otimes\hat A^{(3,0)}\right)^{\!\top}\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right\}.
\end{aligned}$$
Finally, we derive the asymptotic expansion of the
posterior (A1) from the above calculations. By (A2), (A3), and (A6), it holds that
$$\pi(u,t;X^n) = \frac{1}{(2\pi)^{d/2}\sqrt{\det\hat g_\theta^{-1}}}\exp\left\{t-u^\top\hat g_\theta u/2\right\}\left[1+\frac{1}{\sqrt n}B_1(u,t)+\frac1n B_2(u,t)+O_p\!\left(\frac{1}{n^{3/2}}\right)\right], \tag{A7}$$
where
$$B_1 = \frac{P_1(u)}{\hat\pi}+L_1(u,t) = \frac{1}{\hat\pi}D_\theta^\top\hat\pi\,u+\hat A^{(1,1)\top}u\frac{t}{\hat c}+\frac{1}{3!}\hat A^{(3,0)\top}u^{\otimes3},$$
$$\begin{aligned}
B_2 &= \frac{1}{\hat\pi}\left\{P_1(u)L_1(u,t)+P_2(u,t)+\hat\pi L_2(u,t)+\frac12\hat\pi L_1^2(u,t)-K_n\right\}\\
&= \frac{1}{\hat c\hat\pi}\partial_\gamma\hat\pi\,(t+1)+\frac{1}{2\hat\pi}\left(D_\theta^{\otimes2}\hat\pi\right)^{\!\top}(u^{\otimes2}-\mathrm{vec}\,\hat g_\theta^{-1})+\frac{1}{\hat c\hat\pi}\left(D_\theta\hat\pi\otimes\hat A^{(1,1)}\right)^{\!\top}\!\left(u^{\otimes2}t+\mathrm{vec}\,\hat g_\theta^{-1}\right)\\
&\quad+\frac{1}{3!\,\hat\pi}\left(D_\theta\hat\pi\otimes\hat A^{(3,0)}\right)^{\!\top}\!\left(u^{\otimes4}-3\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right)+\frac{1}{2\hat c^2}\hat A^{(0,2)}\left(t^2-2\right)-\frac{1}{2\hat c}\hat A^{(2,1)\top}\left(u^{\otimes2}t+\mathrm{vec}\,\hat g_\theta^{-1}\right)\\
&\quad+\frac{1}{4!}\hat A^{(4,0)\top}\left(u^{\otimes4}-3\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right)+\frac{1}{2\hat c^2}\left(\hat A^{(1,1)\top}\right)^{\otimes2}\left(u^{\otimes2}t^2-2\,\mathrm{vec}\,\hat g_\theta^{-1}\right)\\
&\quad+\frac{1}{2\cdot3!^2}\left(\hat A^{(3,0)\top}\right)^{\otimes2}S_6\left(u^{\otimes6}-15\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes3}\right)+\frac{1}{3!\,\hat c}\left(\hat A^{(1,1)}\otimes\hat A^{(3,0)}\right)^{\!\top}\!\left(u^{\otimes4}t+3\,\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right).
\end{aligned}$$
Note that the next term $B_3$ is a polynomial of odd degree with respect to $u$. In the Taylor expansions of $\pi(\tilde\theta_u,\tilde\gamma_t)$ and $\tilde l_i(u,t)$ ($i=1,\dots,n$), the polynomials consisting only of $ut^2$, $u^{\otimes3}t$, and $u^{\otimes5}$ appear in the terms of order $n^{-3/2}$, since $\tilde\theta_u=\hat\theta_{\mathrm{ML}}+u/\sqrt n$ and $\tilde\gamma_t=\hat\gamma_{\mathrm{ML}}+t/(n\hat c)$. Then, in the last expansion (A7), the term of order $n^{-3/2}$ is a polynomial of odd degree with respect to $u$. This completes the proof.

Appendix B Proof of Theorem 2

B.1 Probability matching prior for the truncation parameter

To prove the theorem, we calculate the asymptotic expansions of the posterior probability $P^n_\pi(T\le z\mid X^n)$ and the frequentist probability $P^n_{\theta,\gamma}(T\le z)$.
By integrating the expansion (A7) of the posterior density over $u$, we have
$$\begin{aligned}
\pi(t;X^n) = e^t\Bigg[1&+\frac{1}{n\hat c}\left\{\partial_\gamma\log\hat\pi+D_\theta^\top\log\hat\pi\otimes\hat A^{(1,1)\top}\mathrm{vec}\,\hat g_\theta^{-1}+\frac12\hat A^{(2,1)\top}\mathrm{vec}\,\hat g_\theta^{-1}+\frac12\left(\hat A^{(1,1)}\otimes\hat A^{(3,0)}\right)^{\!\top}\mathrm{vec}(\hat g_\theta^{-1})^{\otimes2}\right\}(t+1)\\
&+\frac{1}{n\hat c^2}\left\{\frac12\hat A^{(0,2)}+\frac12\left(\hat A^{(1,1)\top}\right)^{\otimes2}\mathrm{vec}\,\hat g_\theta^{-1}\right\}(t^2-2)+O_p\!\left(\frac{1}{n^2}\right)\Bigg]. \tag{B8}
\end{aligned}$$
The term of order $n^{-3/2}$ vanishes since it is a polynomial of odd degree with respect to $u$. Set $\alpha=1-e^z$. Up to the stated order, the hatted quantities may be replaced by their values at $(\theta,\gamma)$, and the posterior probability is
$$P^n_\pi(T\le z\mid X^n) = \int_{-\infty}^z\pi(t;X^n)\,dt = (1-\alpha)\left[1+\frac{1}{nc}\left\{\partial_\gamma\log\pi+A^{(1,1)\top}g_\theta^{-1}D_\theta\log\pi\right\}z+\frac{1}{cn}\{Q_1(\theta,\gamma)z+Q_2(\theta,\gamma)z(z-2)\}\right]+O_p\!\left(\frac{1}{n^{3/2}}\right), \tag{B9}$$
where
$$Q_1(\theta,\gamma) = \frac12 A^{(2,1)\top}\mathrm{vec}\,g_\theta^{-1}+\frac12\left(A^{(1,1)}\otimes A^{(3,0)}\right)^{\!\top}\mathrm{vec}(g_\theta^{-1})^{\otimes2},\qquad Q_2(\theta,\gamma) = \frac{1}{2c}A^{(0,2)}+\frac{1}{2c}\left(A^{(1,1)\top}\right)^{\otimes2}\mathrm{vec}\,g_\theta^{-1}.$$
The shrinkage argument (Datta & Mukerjee, 2004, Section 1.2) provides the frequentist probability $P^n_{\theta,\gamma}(T\le z)$. After replacing $\pi$ by $\pi_\delta$, a density converging weakly to the measure degenerate at the point $(\theta,\gamma)$, we integrate the expansion (B9) of the posterior probability with respect to $\pi_\delta$ and let $\delta\to0$.
Here, it follows that
$$\lim_{\delta\downarrow0}\int\frac{1}{c(x,y)}\partial_\gamma\log(\pi_\delta(x,y))\,\pi_\delta(x,y)\,dx\,dy = -\partial_\gamma\!\left(\frac{1}{c(\theta,\gamma)}\right),$$
$$\lim_{\delta\downarrow0}\int\frac{1}{c(x,y)}A^{(1,1)\top}(x,y)\,g_\theta^{-1}(x,y)\,D_\theta\log(\pi_\delta(x,y))\,\pi_\delta(x,y)\,dx\,dy = -D_\theta^\top\!\left(\frac{1}{c}g_\theta^{-1}A^{(1,1)}\right)\!(\theta,\gamma).$$
Then, we obtain the expansion of the probability $P^n_{\theta,\gamma}(T\le z)$ as follows:
$$P^n_{\theta,\gamma}(T\le z) = (1-\alpha)\left[1+\frac1n\left\{-\partial_\gamma\!\left(\frac1c\right)-D_\theta^\top\!\left(\frac1c g_\theta^{-1}A^{(1,1)}\right)\right\}z+\frac{1}{cn}\{Q_1(\theta,\gamma)z+Q_2(\theta,\gamma)z(z-2)\}\right]+O\!\left(\frac{1}{n^{3/2}}\right). \tag{B10}$$
Then, by comparing (B9) and (B10), we get the condition for the probability matching prior $\pi^\gamma_{\mathrm{PM}}$: a prior $\pi$ must satisfy the partial differential equation
$$\partial_\gamma\log\pi+A^{(1,1)\top}g_\theta^{-1}D_\theta\log\pi = c\left\{-\partial_\gamma\!\left(\frac1c\right)-D_\theta^\top\!\left(\frac1c g_\theta^{-1}A^{(1,1)}\right)\right\}. \tag{B11}$$
Here, it follows that
$$D_\theta^\top\!\left(\frac1c g_\theta^{-1}A^{(1,1)}\right) = -\frac1c A^{(1,1)\top}g_\theta^{-1}D_\theta\log c+\frac1c\left(D_\theta^\top g_\theta^{-1}\right)A^{(1,1)}+\frac1c\left(D_\theta A^{(1,1)}\right)^{\!\top}\mathrm{vec}\,g_\theta^{-1},$$
and the components of $D_\theta^\top g_\theta^{-1}A^{(1,1)}$ are written as
$$\left(\partial_i g^{ij}\right)A^{(1,1)}_j = -(\partial_i g_{km})g^{ik}g^{jm}A^{(1,1)}_j = -\left(\partial_m g_{ik}+\Gamma^g_{ik,m}-\Gamma^g_{km,i}\right)g^{ik}g^{jm}A^{(1,1)}_j = -A^{(1,1)}_j g^{jm}\partial_m\log(\det g_\theta)-A^{(1,1)}_j g^{jm}g^{ik}\left(\Gamma^g_{ik,m}-\Gamma^g_{km,i}\right).$$
Then, the condition (B11) is represented as
$$\partial_\gamma\log\pi+A^{(1,1)}_i g^{ij}\partial_j\log\pi = \partial_\gamma\log c-\partial_i A^{(1,1)}_j g^{ij}+A^{(1,1)}_j g^{jm}\left\{\partial_m\log c+\partial_m\log(\det g_\theta)-g^{ik}\left(\Gamma^g_{ik,m}-\Gamma^g_{km,i}\right)\right\}.$$

B.2 Probability matching prior for the regular parameter

Consider the case where $\theta^1$ is the parameter of interest. We calculate the asymptotic expansions of the posterior probability $P^n_\pi(U^1\le z)$ and the frequentist probability $P^n_{\theta,\gamma}(U^1\le z)$ to derive the condition for the probability matching prior $\pi^1_{\mathrm{PM}}$. By integrating the expansion (A7) of the posterior density over $t$, we have
$$\pi(u;X^n) = \phi_d(u;0,\hat g_\theta^{-1})\left\{1+\frac{1}{\sqrt n}\left(\frac{1}{\hat\pi}\left(D_\theta^\top\hat\pi\right)u-\frac{1}{\hat c}\hat A^{(1,1)\top}u+\frac{1}{3!}\hat A^{(3,0)\top}u^{\otimes3}\right)+O_p\!\left(\frac1n\right)\right\}, \tag{B12}$$
where we denote by $\phi_r(\cdot;\mu,\Sigma)$ the density of the $r$-dimensional normal distribution with mean $\mu$ and covariance matrix $\Sigma$. Let
$$u_{-1} := (u^2,\dots,u^d)^\top,\qquad \hat m_{-1} := \left(\hat g^{21}_\theta/\hat g^{11}_\theta,\dots,\hat g^{d1}_\theta/\hat g^{11}_\theta\right)^\top\in\mathbb{R}^{d-1},\qquad \hat m := (1,\hat m_{-1}^\top)^\top\in\mathbb{R}^d,$$
$$\hat h_{-1} := \left(\hat g^{ij}-\hat g^{i1}\hat g^{j1}/\hat g^{11}\right)_{2\le i,j\le d}\in\mathbb{R}^{(d-1)\times(d-1)},\qquad \hat h := \begin{pmatrix}0 & 0^\top\\ 0 & \hat h_{-1}\end{pmatrix}\in\mathbb{R}^{d\times d}.$$
We decompose the density $\phi_d(u;0,\hat g_\theta^{-1})$ as
$$\phi_d(u;0,\hat g_\theta^{-1}) = \phi_1(u^1;0,\hat g^{11})\,\phi_{d-1}(u_{-1};u^1\hat m_{-1},\hat h_{-1})$$
for the calculation of the probability $P^n_\pi(U^1\le z)$. Then, the marginal posterior density of $u^1$ is given by
$$\begin{aligned}
\pi(u^1;X^n) &= \int\pi(u;X^n)\,du_{-1}\\
&= \phi_1(u^1;0,\hat g^{11})\int\left\{1+\frac{1}{\sqrt n}\left(\frac{1}{\hat\pi}\left(D_\theta^\top\hat\pi\right)u-\frac{1}{\hat c}\hat A^{(1,1)\top}u+\frac{1}{3!}\hat A^{(3,0)\top}u^{\otimes3}\right)\right\}\phi_{d-1}(u_{-1};u^1\hat m_{-1},\hat h_{-1})\,du_{-1}+O_p\!\left(\frac1n\right)\\
&= \phi_1(u^1;0,\hat g^{11})\left\{1+\frac{1}{\sqrt n}\left((D_\theta\log\hat\pi)^\top\hat m\,u^1-\frac{1}{\hat c}\hat A^{(1,1)\top}\hat m\,u^1+\frac12\hat A^{(3,0)\top}\left(\mathrm{vec}\,\hat h\otimes\hat m\right)u^1+\frac{1}{3!}\hat A^{(3,0)\top}\hat m^{\otimes3}(u^1)^3\right)\right\}+O_p\!\left(\frac1n\right).
\end{aligned}$$
Let $\hat\sigma=\sqrt{\hat g^{11}}$.
The posterior probability of $U^1/\hat\sigma$ is
$$P^n_\pi(U^1/\hat\sigma\le z\mid X^n)=\int_{-\infty}^{\hat\sigma z}\pi(u^1;X^n)\,du^1=\int_{-\infty}^{z}\pi(\hat\sigma v;X^n)\,\hat\sigma\,dv$$
$$=\int_{-\infty}^{z}\phi(v)\,dv+\frac{1}{\sqrt n}\left\{\left(D_\theta\log\hat\pi-\frac{\hat A^{(1,1)}}{\hat c}\right)^\top\hat m+\frac{1}{2}\hat A^{(3,0)\top}\left(\operatorname{vec}\hat h\otimes\hat m\right)\right\}\hat\sigma\int_{-\infty}^{z}v\,\phi(v)\,dv+\frac{1}{3!\sqrt n}\hat A^{(3,0)\top}\hat m^{\otimes 3}\hat\sigma^3\int_{-\infty}^{z}v^3\phi(v)\,dv+O_p\left(\frac{1}{n}\right)$$
$$=\Phi(z)-\frac{1}{\sqrt n}\left\{\left(D_\theta\log\hat\pi-\frac{\hat A^{(1,1)}}{\hat c}\right)^\top\hat m+\frac{1}{2}\hat A^{(3,0)\top}\left(\operatorname{vec}\hat h\otimes\hat m\right)\right\}\hat\sigma\,\phi(z)-\frac{1}{3!\sqrt n}\hat A^{(3,0)\top}\hat m^{\otimes 3}\hat\sigma^3\left(z^2+2\right)\phi(z)+O_p\left(\frac{1}{n}\right)$$
$$=\Phi(z)+\frac{\sqrt{g^{11}}}{\sqrt n}\left\{-(D_\theta\log\pi)^\top m+Q_3(\theta,\gamma,z)\right\}\phi(z)+O_p\left(\frac{1}{n}\right),\tag{B13}$$
where
$$Q_3(\theta,\gamma,z)=\frac{1}{c}A^{(1,1)\top}m-\frac{1}{2}A^{(3,0)\top}\left(\operatorname{vec}h\otimes m\right)-\frac{1}{3!}A^{(3,0)\top}g^{11}m^{\otimes 3}\left(z^2+2\right).$$
On the other hand, by (B13) and the shrinkage argument, we obtain the expansion of the frequentist probability $P^n_{\theta,\gamma}(U^1/\hat\sigma\le z)$ as follows:
$$P^n_{\theta,\gamma}(U^1/\hat\sigma\le z)=\Phi(z)+\frac{\sqrt{g^{11}}}{\sqrt n}\left(D_\theta^\top\left(\sqrt{g^{11}}\,m\right)+Q_3(\theta,\gamma,z)\right)\phi(z)+O\left(\frac{1}{n}\right).\tag{B14}$$
Then, by comparing (B13) and (B14), we get the condition for the probability matching prior $\pi^1_{PM}$ as
$$\frac{g^{i1}}{\sqrt{g^{11}}}\,\partial_i\log\pi=-\partial_i\left(\frac{g^{i1}}{\sqrt{g^{11}}}\right).$$
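The two standard normal moment identities used in passing from the integral form of the expansion to its $\phi(z)$-form in (B13), namely $\int_{-\infty}^{z}v\,\phi(v)\,dv=-\phi(z)$ and $\int_{-\infty}^{z}v^3\phi(v)\,dv=-(z^2+2)\phi(z)$, can be checked symbolically; a quick sympy sketch, not part of the original derivation:

```python
import sympy as sp

v, z = sp.symbols('v z', real=True)
phi = sp.exp(-v**2 / 2) / sp.sqrt(2 * sp.pi)   # standard normal density
phi_z = phi.subs(v, z)

# First-moment integral: int_{-inf}^{z} v*phi(v) dv = -phi(z)
I1 = sp.integrate(v * phi, (v, -sp.oo, z))
# Third-moment integral: int_{-inf}^{z} v^3*phi(v) dv = -(z^2 + 2)*phi(z)
I3 = sp.integrate(v**3 * phi, (v, -sp.oo, z))

print(sp.simplify(I1 + phi_z))                 # 0
print(sp.simplify(I3 + (z**2 + 2) * phi_z))    # 0
```

Both differences simplify to zero, confirming the coefficients of $\phi(z)$ and $(z^2+2)\phi(z)$ in (B13).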
Appendix C Proof of Theorem 3

C.1 Moment matching prior for the truncation parameter

By integrating (B8), the posterior mean of $t$ is
$$E[t\mid X^n]=\int_{-\infty}^{0}te^t\,dt+\frac{1}{n\hat c}\left\{\partial_\gamma\log\hat\pi+D_\theta^\top\log\hat\pi\otimes\hat A^{(1,1)\top}\operatorname{vec}\hat g_\theta^{-1}+\frac{1}{2}\hat A^{(2,1)\top}\operatorname{vec}\hat g_\theta^{-1}+\frac{1}{2}\left(\hat A^{(1,1)}\otimes\hat A^{(3,0)}\right)^\top\operatorname{vec}\left(\hat g_\theta^{-1}\right)^{\otimes 2}\right\}\int_{-\infty}^{0}t(t+1)e^t\,dt$$
$$\quad+\frac{1}{n\hat c^2}\left\{\frac{1}{2}\hat A^{(0,2)}+\frac{1}{2}\left(\hat A^{(1,1)\top}\right)^{\otimes 2}\operatorname{vec}\hat g_\theta^{-1}\right\}\int_{-\infty}^{0}t(t^2-2)e^t\,dt+O_p\left(\frac{1}{n^{3/2}}\right)$$
$$=-1+\frac{1}{n\hat c}\left\{\partial_\gamma\log\hat\pi+D_\theta^\top\log\hat\pi\otimes\hat A^{(1,1)\top}\operatorname{vec}\hat g_\theta^{-1}+\frac{1}{2}\hat A^{(2,1)\top}\operatorname{vec}\hat g_\theta^{-1}+\frac{1}{2}\left(\hat A^{(1,1)}\otimes\hat A^{(3,0)}\right)^\top\operatorname{vec}\left(\hat g_\theta^{-1}\right)^{\otimes 2}\right\}-\frac{4}{n\hat c^2}\left\{\frac{1}{2}\hat A^{(0,2)}+\frac{1}{2}\left(\hat A^{(1,1)\top}\right)^{\otimes 2}\operatorname{vec}\hat g_\theta^{-1}\right\}+O_p\left(\frac{1}{n^{3/2}}\right).$$
Then, we have the posterior mean of $\gamma$ as
$$\hat\gamma^B_\pi=E[\gamma\mid X^n]=\hat\gamma_{ML}-\frac{1}{n\hat c}+\frac{1}{n^2\hat c^2}\left\{\partial_\gamma\log\hat\pi+D_\theta^\top\log\hat\pi\otimes\hat A^{(1,1)\top}\operatorname{vec}\hat g_\theta^{-1}+\frac{1}{2}\hat A^{(2,1)\top}\operatorname{vec}\hat g_\theta^{-1}+\frac{1}{2}\left(\hat A^{(1,1)}\otimes\hat A^{(3,0)}\right)^\top\operatorname{vec}\left(\hat g_\theta^{-1}\right)^{\otimes 2}\right\}-\frac{4}{n^2\hat c^3}\left\{\frac{1}{2}\hat A^{(0,2)}+\frac{1}{2}\left(\hat A^{(1,1)\top}\right)^{\otimes 2}\operatorname{vec}\hat g_\theta^{-1}\right\}+O_p\left(\frac{1}{n^{5/2}}\right).$$
Here, let $\hat\gamma^*_{ML}:=\hat\gamma_{ML}-\frac{1}{n\hat c}$ be a bias-corrected MLE of $\gamma$. By the consistency of the MLE and the law of large numbers, it holds that
$$n^2\left(\hat\gamma^B_\pi-\hat\gamma^*_{ML}\right)\xrightarrow{P}\frac{1}{c^2}\left\{\partial_\gamma\log\pi+D_\theta^\top\log\pi\otimes A^{(1,1)\top}\operatorname{vec}g_\theta^{-1}+\frac{1}{2}A^{(2,1)\top}\operatorname{vec}g_\theta^{-1}+\frac{1}{2}A^{(1,1)\top}\otimes A^{(3,0)\top}\operatorname{vec}\left(g_\theta^{-1}\right)^{\otimes 2}-\frac{2}{c}A^{(0,2)}-\frac{2}{c}\left(A^{(1,1)\top}\right)^{\otimes 2}\operatorname{vec}g_\theta^{-1}\right\}$$
$$=\frac{1}{c^2}\left\{\partial_\gamma\log\pi+A^{(1,1)\top}g_\theta^{-1}D_\theta\log\pi+\frac{1}{2}A^{(2,1)\top}\operatorname{vec}g_\theta^{-1}+\frac{1}{2}A^{(1,1)\top}\otimes A^{(3,0)\top}\operatorname{vec}\left(g_\theta^{-1}\right)^{\otimes 2}-\frac{2}{c}A^{(0,2)}-\frac{2}{c}A^{(1,1)\top}g_\theta^{-1}A^{(1,1)}\right\}.$$
The moment matching prior $\pi^\gamma_{MM}$ is required to make the right-hand side of the above equation zero.
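The three exponential moment integrals that drive this expansion, $\int_{-\infty}^{0}te^t\,dt=-1$, $\int_{-\infty}^{0}t(t+1)e^t\,dt=1$ and $\int_{-\infty}^{0}t(t^2-2)e^t\,dt=-4$ (the source of the leading $-1$ and of the $-4/(n\hat c^2)$ coefficient), can be verified symbolically; a quick sympy check, not part of the original derivation:

```python
import sympy as sp

t = sp.symbols('t', real=True)

# The three integrals over (-inf, 0] appearing in E[t | X^n]:
I0 = sp.integrate(t * sp.exp(t), (t, -sp.oo, 0))               # leading term
I1 = sp.integrate(t * (t + 1) * sp.exp(t), (t, -sp.oo, 0))     # 1/(n c-hat) term
I2 = sp.integrate(t * (t**2 - 2) * sp.exp(t), (t, -sp.oo, 0))  # 1/(n c-hat^2) term
print(I0, I1, I2)  # -1 1 -4
```

Note that the second integral equals 1, which is why the $1/(n\hat c)$ bracket carries over unchanged into the second line of the expansion.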
Then, the partial differential equation which gives the condition for the moment matching prior $\pi^\gamma_{MM}$ is
$$\partial_\gamma\log\pi+A^{(1,1)\top}g_\theta^{-1}D_\theta\log\pi=-\frac{1}{2}A^{(2,1)\top}\operatorname{vec}g_\theta^{-1}-\frac{1}{2}A^{(1,1)\top}\otimes A^{(3,0)\top}\operatorname{vec}\left(g_\theta^{-1}\right)^{\otimes 2}+\frac{2}{c}A^{(0,2)}+\frac{2}{c}A^{(1,1)\top}g_\theta^{-1}A^{(1,1)}.$$
This condition is also represented as
$$\partial_\gamma\log\pi+\frac{1}{2}A^{(2,1)}_{ij}g^{ij}-2\partial_\gamma\log c+A^{(1,1)}_i g^{ij}\left\{\partial_j\log\pi-2\partial_j\log c+\frac{1}{2}A^{(3,0)}_{jkm}g^{km}\right\}=0.$$
Note that $A^{(0,2)}=\partial_\gamma c$ and $A^{(1,1)}_i=\partial_i c$.

C.2 Moment matching prior for the regular parameter

By integrating
(B12), the posterior mean of $u$ is
$$E[u\mid X^n]=\int_{\mathbb R^d}u\,\pi(u;X^n)\,du=\int_{\mathbb R^d}u\,\phi_d(u;0,\hat g_\theta^{-1})\,du+\frac{1}{\sqrt n}\int_{\mathbb R^d}u\left\{u^\top\frac{1}{\hat\pi}D_\theta\hat\pi\right\}\phi_d(u;0,\hat g_\theta^{-1})\,du-\frac{1}{\sqrt n}\int_{\mathbb R^d}u\left\{u^\top\frac{1}{\hat c}\hat A^{(1,1)}\right\}\phi_d(u;0,\hat g_\theta^{-1})\,du+\frac{1}{\sqrt n}\int_{\mathbb R^d}u\left\{\frac{1}{3!}\hat A^{(3,0)\top}u^{\otimes 3}\right\}\phi_d(u;0,\hat g_\theta^{-1})\,du+O_p\left(\frac{1}{n}\right)$$
$$=0+\frac{1}{\sqrt n}\hat g_\theta^{-1}D_\theta\log\hat\pi-\frac{1}{\sqrt n\,\hat c}\hat g_\theta^{-1}\hat A^{(1,1)}+\frac{1}{3!\sqrt n}\left(I_d\otimes\hat A^{(3,0)\top}\right)\int_{\mathbb R^d}u^{\otimes 4}\phi_d(u;0,\hat g_\theta^{-1})\,du+O_p\left(\frac{1}{n}\right)$$
$$=0+\frac{1}{\sqrt n}\hat g_\theta^{-1}D_\theta\log\hat\pi-\frac{1}{\sqrt n\,\hat c}\hat g_\theta^{-1}\hat A^{(1,1)}+\frac{3}{3!\sqrt n}\left(I_d\otimes\hat A^{(3,0)\top}\right)S_4\operatorname{vec}\left(\hat g_\theta^{-1}\right)^{\otimes 2}+O_p\left(\frac{1}{n}\right)$$
$$=\frac{1}{\sqrt n}\hat g_\theta^{-1}D_\theta\log\hat\pi-\frac{1}{\sqrt n\,\hat c}\hat g_\theta^{-1}\hat A^{(1,1)}+\frac{3}{3!\sqrt n}\left(I_d\otimes\hat A^{(3,0)\top}\right)\operatorname{vec}\left(\hat g_\theta^{-1}\right)^{\otimes 2}+O_p\left(\frac{1}{n}\right).$$
Since $\sqrt n\left(\hat\theta^B_\pi-\hat\theta_{ML}\right)=E[u\mid X^n]$, by the consistency of the MLE and the law of large numbers it holds that
$$n\left(\hat\theta^B_\pi-\hat\theta_{ML}\right)\xrightarrow{P}g_\theta^{-1}D_\theta\log\pi-\frac{1}{c}g_\theta^{-1}A^{(1,1)}+\frac{1}{2}\left(I_d\otimes A^{(3,0)\top}\right)\operatorname{vec}\left(g_\theta^{-1}\right)^{\otimes 2}.$$
Then, the partial differential equation which gives the condition for the moment matching prior $\pi^\theta_{MM}$ is
$$g_\theta^{-1}D_\theta\log\pi=\frac{1}{c}g_\theta^{-1}A^{(1,1)}-\frac{1}{2}\left(I_d\otimes A^{(3,0)\top}\right)\operatorname{vec}\left(g_\theta^{-1}\right)^{\otimes 2}.$$
This condition is also represented as
$$\partial_j\log\pi=\frac{1}{2}\partial_j\log(\det g)+\frac{1}{2}\,{}^{(e)}\Gamma_{km,j}\,g^{km}_\theta\qquad (j=1,\dots,d),$$
since $A^{(3,0)}_{jkm}=-\partial_j g_{km}-{}^{(e)}\Gamma_{km,j}$ and $(\partial_j g_{km})g^{km}=\partial_j\log(\det g_\theta)$.

References

Akahira, M. (2016). Second order asymptotic variance of the Bayes estimator of a truncation parameter for a one-sided truncated exponential family of distributions. J. Japan Statist. Soc., 46(1), 81–98, https://doi.org/10.14490/jjss.46.81

Akahira, M. (2017). Statistical estimation for truncated exponential families. Springer, Singapore.

Akahira, M. (2021). Maximum likelihood estimation for a one-sided truncated family of distributions. Jpn. J. Stat. Data Sci.
, 4, 317–344, https://doi.org/10.1007/s42081-020-00098-5

Amari, S., & Nagaoka, H. (2000). Methods of information geometry (Vol. 191). Providence, RI: American Mathematical Society.

Bar-Lev, S.K. (1984, December). Large sample properties of the MLE and MCLE for the natural parameter of a truncated exponential family. Ann. Inst. Stat. Math., 36(2), 217–222, https://doi.org/10.1007/BF02481966

Bernardo, J.M. (1979). Reference Posterior Distributions for Bayesian Inference. Journal of the Royal Statistical Society: Series B (Methodological), 41(2), 113–128, https://doi.org/10.1111/j.2517-6161.1979.tb01066.x

Brown, B.W., & Walker, M.B. (1995, March). Stochastic specification in random production models of cost-minimizing firms. Journal of Econometrics, 66(1), 175–205, https://doi.org/10.1016/0304-4076(94)01614-6

Datta, G.S., & Ghosh, J.K. (1995, March). On priors providing frequentist validity for Bayesian inference. Biometrika, 82(1), 37–45, https://doi.org/10.1093/biomet/82.1.37

Datta, G.S., & Ghosh, M. (1996, February). On the invariance of noninformative priors. The Annals of Statistics, 24(1), 141–159, https://doi.org/10.1214/aos/1033066203

Datta, G.S., & Mukerjee, R. (2004). Probability Matching Priors: Higher Order Asymptotics (Vol. 178; P. Bickel et al., Eds.). New York, NY: Springer.

Ghosal, S. (1997, June). Reference priors in multiparameter nonregular cases. Test, 6(1), 159–186, https://doi.org/10.1007/BF02564432

Ghosal, S. (1999, December). Probability matching priors for non-regular cases. Biometrika, 86(4), 956–964, https://doi.org/10.1093/biomet/86.4.956

Ghosal, S., & Samanta, T. (1995). Asymptotic behaviour of Bayes estimates and posterior distributions in multiparameter nonregular cases. Math. Methods Statist., 4(4), 361–388.

Ghosh, J.K., & Mukerjee, R. (1992). Non-informative priors. Bayesian statistics, 4 (Peñíscola, 1991) (pp. 195–210). New York: Oxford Univ. Press.

Ghosh, M., & Liu, R. (2011, August).
Moment matching priors. Sankhya A, 73(2), 185–201, https://doi.org/10.1007/s13171-011-0012-2

Hashimoto, S. (2019, December). Moment matching priors for non-regular models. J. Stat. Plann. Inference, 203, 169–177, https://doi.org/10.1016/j.jspi.2019.03.009

Hashimoto, S. (2021). Predictive probability matching priors for a certain non-regular model. Statist. Probab. Lett., 174, Paper No. 109096, 6, https://doi.org/10.1016/j.spl.2021.109096

Holmquist, B. (1988, January). Moments and cumulants of the multivariate normal distribution. Stochastic Analysis and Applications, 6(3), 273–278, https://doi.org/10.1080/07362998808809148

Jeffreys, H. (1961). Theory of probability (Third ed.). Oxford: Clarendon Press.

Lancaster,
T. (1997, April). Exact Structural Inference in Optimal Job-Search Models. Journal of Business & Economic Statistics, 15(2), 165–179, https://doi.org/10.1080/07350015.1997.10524698

Ortega, F.J., & Basulto, J. (2016, August). A generalization of Jeffreys' rule for non regular models. Communications in Statistics - Theory and Methods, 45(15), 4433–4444, https://doi.org/10.1080/03610926.2014.921303

Peers, H.W. (1965). On Confidence Points and Bayesian Probability Points in the Case of Several Parameters. Journal of the Royal Statistical Society: Series B (Methodological), 27(1), 9–16, https://doi.org/10.1111/j.2517-6161.1965.tb00581.x

Shemyakin, A. (2023, February). Hellinger Information Matrix and Hellinger Priors. Entropy, 25(2), 344, https://doi.org/10.3390/e25020344

Sweeting, T.J. (2008). On predictive probability matching priors. Institute of Mathematical Statistics Collections (pp. 46–59). Beachwood, Ohio, USA: Institute of Mathematical Statistics.

Takeuchi, J., & Amari, S. (2005, March). Alpha-parallel prior and its properties. IEEE Transactions on Information Theory, 51(3), 1011–1023, https://doi.org/10.1109/TIT.2004.842703

Tanaka, F. (2023, March). Geometric properties of noninformative priors based on the chi-square divergence. Front. Appl. Math. Stat., 9, https://doi.org/10.3389/fams.2023.1141976

Tibshirani, R. (1989, September). Noninformative priors for one parameter of many. Biometrika, 76(3), 604–608, https://doi.org/10.1093/biomet/76.3.604

Welch, B.L., & Peers, H.W. (1963, July). On Formulae for Confidence Points Based on Integrals of Weighted Likelihoods. Journal of the Royal Statistical Society: Series B (Methodological), 25(2), 318–329, https://doi.org/10.1111/j.2517-6161.1963.tb00512.x

Yoshioka, M., & Tanaka, F. (2023a). Alpha-parallel Priors on a One-Sided Truncated Exponential Family. F. Nielsen & F. Barbaresco (Eds.), Geometric Science of Information (pp. 226–235). Cham: Springer.

Yoshioka, M., & Tanaka, F. (2023b, May).
Information-Geometric Approach for a One-Sided Truncated Exponential Family. Entropy, 25(5), 769, https://doi.org/10.3390/e25050769
NON-PARAMETRIC MULTIPLE CHANGE-POINT DETECTION

A PREPRINT

Andreas Anastasiou
Department of Mathematics and Statistics, University of Cyprus
anastasiou.andreas@ucy.ac.cy

Piotr Fryzlewicz
Department of Statistics, London School of Economics
p.fryzlewicz@lse.ac.uk

ABSTRACT

We introduce a methodology, labelled Non-Parametric Isolate-Detect (NPID), for the consistent estimation of the number and locations of multiple change-points in a non-parametric setting. The method can handle general distributional changes and is based on an isolation technique preventing the consideration of intervals that contain more than one change-point, which enhances the estimation accuracy. As stopping rules, we propose both thresholding and the optimization of an information criterion. In the scenarios tested, which cover a broad range of change types, NPID outperforms the state of the art. An R implementation is provided.

Keywords: Non-parametric statistics; segmentation; threshold criterion; information criterion

1 Introduction

The focus of this work is on non-parametric, offline, multiple change-point detection, the aim of which is to test whether a data sequence is distributionally homogeneous and, if not, to estimate the number and locations of changes. The problem has seen recent interest in a range of application areas; a non-exhaustive list includes social networks [11], electrocardiography [21] and hydrology [25]. Denote by $N$ the number of change-points and by $r_1, r_2, \dots, r_N$ their locations, with $r_0 = 0$, $r_{N+1} = T$. We work in the model
$$X_t \sim F_k,\quad r_{k-1}+1 \le t \le r_k,\quad k = 1,\dots,N+1,\quad t = 1,\dots,T,\tag{1}$$
where $\{X_t\}_{t=1,\dots,T}$ is the observed univariate data sequence of serially independent observations and $F_k$ is the distribution of the $X_t$'s between the change-points $r_{k-1}$ and $r_k$, with $F_k \ne F_{k+1}$ for all $k \in \{1,\dots,N\}$. The parameters $N$ and $r_k$, as well as the distributions $F_k$, are unknown.
Our methodology can in principle be extended to variables with values in an arbitrary metric space, as long as the distributional difference is detectable on a Vapnik-Cervonenkis (VC) class. We address this briefly in Section 6. As the choice of such a VC class in general metric spaces (a necessity for the computation of the change-point location estimator) is a difficult practical problem in itself, our literature review, which follows, does not make a distinction between non-parametric methods specifically designed for univariate data and those applicable to more general spaces. Much of the early literature on non-parametric change-point detection covers the case of a single change-point. Some authors [6, 19] test whether there is a change-point at an unknown time point, while others [7, 5, 8] assume that there exists a change-point and focus on its estimation and on the construction of confidence regions. In [5] and [8], mean-dominant norms, as defined in [5], are employed as a measure of the difference of the empirical cumulative distribution functions before and after a change-point candidate. In multiple non-parametric change-point detection, observing a connection between multiple change-points and goodness-of-fit tests, [26] propose a non-parametric maximum likelihood approach using empirical process techniques. The number of change-points, $N$, is estimated via an information criterion; given an estimate of $N$, a non-parametric profile likelihood-based algorithm is used to recursively
https://arxiv.org/abs/2504.21379v1
compute the maximizer of an objective function. In an attempt to improve on the computational cost of that method, [10] use the Pruned Exact Linear Time (PELT) method introduced in [13] in order to find the optimal segmentation. [17] propose a non-parametric approach based on Euclidean distances between sample observations, which combines binary segmentation with a divergence measure from [22]; however, this approach departs from classical non-parametric change-point detection in the sense that it is not invariant with respect to monotone transformations of the data (or, in other words, does not use the ranks of the data only). Using the Kolmogorov-Smirnov (KS) statistic, [18] present two consistent procedures for univariate change-point localization; one based on the standard binary segmentation algorithm of [24] and the other on the WBS methodology of [9]. In [23], a multiscale method is developed for detecting changes in pre-defined quantiles of the underlying distribution. Other techniques are based on the estimation of density functions [12], or the extension of well-defined statistics, such as the Wilcoxon/Mann-Whitney rank-based criterion first employed in [7] for the detection of at most one change-point, to the multiple change-point setting [15].

(arXiv:2504.21379v1 [math.ST] 30 Apr 2025)

Our proposed approach, labelled Non-Parametric Isolate-Detect (NPID), is a generic technique for consistent non-parametric multiple change-point detection in a data sequence. It adapts the isolate-detect principle, introduced in a parametric change-point detection context by [2] and studied also in [1] and [3], to the non-parametric setting. NPID consists of two main stages: firstly, the isolation of each of the true change-points within subintervals of the domain $[1, \dots, T]$, and secondly their detection.
The isolation step ensures that, with high probability, no other change-point is present in the interval under consideration, which enhances the detection power, especially in difficult scenarios such as ones involving limited spacings between consecutive change-points or other low-signal-to-noise-ratio settings. NPID's ability to analyse such structures accurately, with low computational cost (see Sections 3.1 and 4), adds to the appeal of the proposed approach. Within each interval, where, by construction, there is at most one change-point, we use the non-parametric approach of [5] to carry out detection. For the theoretical results regarding the accuracy of NPID in estimating the locations of the change-points, we show in Section 2.2 that the optimal consistency rate is attained by employing optimal rate results from [8] in our proof strategy. Section 2.2 situates our method within the existing literature. We now briefly introduce the main steps of NPID. For an observed data sequence $\{X_t\}_{t=1,\dots,T}$, and with $\lambda_T$ a suitably chosen positive integer, the method first creates a collection of right- and left-expanding intervals $S_{RL} = \{R_1, L_1, R_2, L_2, \dots, R_K, L_K\}$, where $K = \lceil T/\lambda_T \rceil$, $R_j = [1, \min\{j\lambda_T + 1, T\}]$ and $L_j = [\max\{T - j\lambda_T, 1\}, T]$. A suitably chosen contrast function, whose value at location $b \in \{s, s+1, \dots, e-1\}$ and for an input argument $u \in \mathbb{R}$ is denoted by $\tilde B^b_{s,e}(u)$ (formula (5)), measures the difference between the pre-$b$ and post-$b$ empirical distributions at $u \in \mathbb{R}$. NPID first works
within $R_1$ and calculates $\tilde B^b_{1,\lambda_T+1}(X_i)$ for each $i \in \{1,2,\dots,T\}$ and for each $b \in \{1,2,\dots,\lambda_T\}$. This creates $\lambda_T$ vectors
$$y_b = \left(\tilde B^b_{1,\lambda_T+1}(X_1), \tilde B^b_{1,\lambda_T+1}(X_2), \dots, \tilde B^b_{1,\lambda_T+1}(X_T)\right),\quad b = 1,\dots,\lambda_T.$$
The next step is to aggregate the contrast function information across $i \in \{1,2,\dots,T\}$ by applying to each $y_b$ a mean-dominant norm $L : \mathbb{R}^T \to \mathbb{R}$; the mean-dominance property is such that for any $d \in \mathbb{Z}^+$ and all $x \in \mathbb{R}^d$ with $x_i \ge 0$, $i = 1,\dots,d$, it holds that $L(x) \ge \frac{1}{d}\sum_{i=1}^d x_i$. The formal mathematical definition of a mean-dominant norm is given in Section 2 of [5], and examples include
$$L_1(y_b) = \frac{1}{T}\sum_{i=1}^T |y_{b,i}|,\qquad L_2(y_b) = \frac{1}{\sqrt T}\sqrt{\sum_{i=1}^T y_{b,i}^2},\qquad L_\infty(y_b) = \sup_{i=1,\dots,T} |y_{b,i}|.\tag{2}$$
Applying $L(\cdot)$ to each $y_b$, $b = 1,\dots,\lambda_T$, returns a vector $v$ of length $\lambda_T$. With $\tilde b_{R_1} := \operatorname{argmax}_j\{v_j\}$, if $v_{\tilde b_{R_1}}$ exceeds a certain threshold $\zeta_T$, then $\tilde b_{R_1}$ is taken as a change-point. If not, then the process tests the next interval in $S_{RL}$. After detection, the algorithm makes a new start from the end-point (or start-point) of the right- (or left-) expanding interval on which the detection occurred. Given a suitable choice of the threshold $\zeta_T$ (more details are provided in Section 3.2), NPID ensures that we work on intervals with at most one change-point, with a high global probability. The paper is organized as follows. Section 2 defines NPID and gives the associated consistency theory for the number and locations of the estimated change-points. In Section 3, we discuss the computational aspects of NPID and the selection of the tuning parameter. In Section 4, we provide a comparative simulation study. Section 5 contains real-life data examples, and Section 6 concludes. The R code implementing the method is available at https://github.com/Anastasiou-Andreas/NPID .

2 Methodology and Theory

2.1 Methodology

The general non-parametric framework is given in (1). We assume that the $X_t$'s are mutually independent. $N$ can possibly grow with the sample size, $T$.
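The three example norms in (2) and the mean-dominance property they must satisfy can be sketched directly (an illustrative snippet, not taken from the paper's R implementation):

```python
import numpy as np

def L1(y):
    # L1(y) = (1/T) * sum_i |y_i|
    return np.mean(np.abs(y))

def L2(y):
    # L2(y) = (1/sqrt(T)) * sqrt(sum_i y_i^2), i.e. the root mean square
    return np.sqrt(np.mean(y ** 2))

def Linf(y):
    # L_inf(y) = sup_i |y_i|
    return np.max(np.abs(y))

# Mean-dominance: for a vector x with non-negative entries, L(x) >= mean(x).
rng = np.random.default_rng(0)
x = np.abs(rng.normal(size=1000))
checks = [L(x) >= np.mean(x) - 1e-12 for L in (L1, L2, Linf)]
print(checks)
```

For non-negative input, $L_1$ attains the mean with equality, while $L_2$ dominates it by the Cauchy-Schwarz inequality and $L_\infty$ trivially bounds it from above, so all three checks hold.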
We first explain the coupling of the Isolate-Detect (ID) scheme, as introduced in [2], with the use of the empirical cumulative distribution function (ECDF) for the detection of distributional changes as in [5]. We start with ID, where the isolation of each change-point, prior to detection, is carried out as in [2]. For clarity of exposition, we show graphically the different isolation phases through a simple example of two change-points, $r_1 = 22$ and $r_2 = 44$, in a data sequence $X_t$ of length $T = 60$, and with the three different distributions denoted by $F_j$, $j = 1,2,3$. We have
$$X_t \sim \begin{cases} F_1, & t = 1,\dots,22, \\ F_2, & t = 23,24,\dots,44, \\ F_3, & t = 45,46,\dots,60. \end{cases}\tag{3}$$

[Figure 1: The isolation and detection process for the example in (3). The right and left expanding intervals are coloured in red and blue, respectively. The green dashed line is the first isolation interval at each phase.]

We have Phases 1 and 2 involving four and one intervals, respectively. These are clearly
indicated in Figure 1 and they are only related to this specific example, because for cases with a different number of change-points we would have a different number of such phases. At the beginning $s = 1$, $e = 60$, and we take the expansion parameter, $\lambda_T$, to be equal to 10. The first change-point to be isolated is $r_2$; this happens at the fourth step of Phase 1 and in the interval $[41, 60]$. Given a suitable contrast function (details are given later in this section), $r_2$ gets detected. Following the detection, $e$ is updated as the start-point of the interval on which the detection occurred; therefore, $e = 41$. In Phase 2, indicated in the figure, NPID is then applied in $[s, e] = [1, 41]$. Intervals 1 and 3 of Phase 1 will not be re-examined in Phase 2, and $r_1$ is isolated, and then detected, in the interval $[1, 30]$. After the detection, $s$ is updated as the end-point of the interval where the detection occurred; therefore, $s = 30$. Our method is then applied in $[s, e] = [30, 41]$; there are no more change-points to be isolated and, given a suitable choice of the threshold, the process will terminate. The guaranteed (with high probability) isolation aspect in NPID ensures that the change-points will be detected one by one, while not being affected by changes in the data occurring at locations outside the current interval. This allows us to split the multiple change-point detection problem into a sequence of single change-point detection problems. As a result, optimal rate consistency results are obtained through the machinery appropriate for single change-point estimation; this is where the strategy of [8] comes in useful. To the best of our knowledge, [5] is the first work on non-parametric single change-point estimation under no moment, support, or functional form assumptions for the distribution of the data before and after the change-point.
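The ECDF-based contrast that NPID borrows from this line of work, a weighted difference of pre-$b$ and post-$b$ averages of the indicators $1\{X_t \le u\}$ with the weights of formula (5) below, can be sketched as follows (an illustrative snippet with a hypothetical function name, not the paper's implementation):

```python
import numpy as np

def cusum_contrast(X, s, e, b, u):
    """Sketch of the weighted ECDF-difference contrast B~^b_{s,e}(u):
    a weighted difference of the pre-b and post-b sums of 1{X_t <= u}.
    Indices s <= b < e are 1-based, matching the paper's notation."""
    B = (np.asarray(X) <= u).astype(float)   # B_t(u) = 1{X_t <= u}
    n_left, n_right, n = b - s + 1, e - b, e - s + 1
    left = np.sqrt(n_right / (n_left * n)) * B[s - 1:b].sum()
    right = np.sqrt(n_left / (n_right * n)) * B[b:e].sum()
    return left - right

# Toy check: an abrupt change at t = 20 in a sequence of length 40.
X = np.concatenate([np.full(20, -1.0), np.full(20, 1.0)])
val = cusum_contrast(X, s=1, e=40, b=20, u=0.0)                    # large
flat = cusum_contrast(np.full(40, -1.0), s=1, e=40, b=20, u=0.0)   # 0.0
print(val, flat)
```

At the true change-point and a separating argument $u$, the statistic equals $20/\sqrt{40} \approx 3.16$ here, while it is exactly zero for a distributionally homogeneous stretch.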
The theoretical properties of the estimator proposed in [5] include an exponential bound on the error probability and strong consistency, though with a sub-optimal rate. [8] embeds the non-parametric estimator of [5] in a more general framework, in which the optimal consistency rate is attained.

The contrast function. The contrast we use is a non-parametric CUSUM function. For a sample $X_1, X_2, \dots, X_T$, with $B_t(u) := 1\{X_t \le u\}$, define its ECDF as
$$\hat F_T(u) = \frac{1}{T}\sum_{t=1}^T B_t(u),\quad u \in \mathbb{R}.\tag{4}$$
The non-parametric CUSUM is
$$\tilde B^b_{s,e}(u) = \sqrt{\frac{e-b}{(b-s+1)(e-s+1)}}\sum_{t=s}^b B_t(u) - \sqrt{\frac{b-s+1}{(e-b)(e-s+1)}}\sum_{t=b+1}^e B_t(u),\tag{5}$$
which is a weighted difference of the pre-$b$ and post-$b$ averages of $B_t(u)$ (or, in other words, a weighted difference of ECDFs), with $b$ being the time point under consideration. (5) is used in the single change-point setting in [5] and [8], and in the multiple change-point setting in [18], where taking the supremum over all $u \in \mathbb{R}$ of the absolute value of (5) yields the "CUSUM Kolmogorov-Smirnov statistic". A detailed comparison of NPID with the method developed in [18] is provided in Sections 2.2 and 3.1. In [26], the sample of $B_t(u)$, $t = 1, \dots$
$, T$, is regarded as independent Bernoulli data with probability of success $\hat F(u)$. Working under this framework, the joint profile log-likelihood for a candidate set of change-points is obtained, and then maximized in an integrated form (over $\mathbb{R}$) in order to estimate the change-points. We now describe our method generically through pseudocode. The aggregation of the contrast function values at each time point examined is an essential step in our algorithm. For the aggregation of the contrast values a mean-dominant norm, $L(\cdot)$, is used at each location $b \in [s, e)$, where $s$ and $e$ are given. The data sequence $X = (X_1, X_2, \dots, X_T)$ is also given.

function AGGREGATION(X, s, e, L)
  for b = s, ..., e-1 do
    for i = 1, 2, ..., T do
      B~^b_{s,e}(X_i) = sqrt((e-b)/((b-s+1)(e-s+1))) * sum_{t=s}^{b} B_t(X_i) - sqrt((b-s+1)/((e-b)(e-s+1))) * sum_{t=b+1}^{e} B_t(X_i)
    end for
    y_b = (B~^b_{s,e}(X_1), B~^b_{s,e}(X_2), ..., B~^b_{s,e}(X_T))^T
    v_b = L(y_b)
  end for
  Return(v)
end function

We now specify the main function for the detection of change-points one by one. With
$$\delta_T = \min_{j=1,\dots,N+1} |r_j - r_{j-1}|,\tag{6}$$
we take the expansion parameter $\lambda_T$, which is an integer such that $0 < \lambda_T < \delta_T$, and set
$$K = \lceil T/\lambda_T \rceil.\tag{7}$$
We let $c^r_j = j\lambda_T + 1$ and $c^l_j = T - j\lambda_T$ for $j = 1,\dots,K-1$, with $c^r_K = T$ and $c^l_K = 1$. For a generic interval $[s, e]$, define the sequences
$$R_{s,e} = \left(c^r_{k_1}, c^r_{k_1+1}, \dots, e\right),\qquad L_{s,e} = \left(c^l_{k_2}, c^l_{k_2+1}, \dots, s\right),\tag{8}$$
where $k_1 := \operatorname{argmin}_{j\in\{1,\dots,K\}}\{j\lambda_T + 1 > s\}$ and $k_2 := \operatorname{argmin}_{j\in\{1,\dots,K\}}\{T - j\lambda_T < e\}$. Let $L(\cdot)$ be the mean-dominant norm used, and denote by $|A|$ the cardinality of $A$, and by $A(j)$ its $j$th element (if ordered). The pseudocode of the main change-point detection function is below.

function NONPARAMETRICISOLATEDETECT(X, s, e, lambda_T, zeta_T, L)
  if e - s < 1 then
    STOP
  else
    For j in {1, ..., |R_{s,e}|}, denote [s_{2j-1}, e_{2j-1}] := [s, R_{s,e}(j)]
    For j in {1, ..., |L_{s,e}|}, denote [s_{2j}, e_{2j}] := [L_{s,e}(j), e]
    i = 1
    (Main part) s* := s_{2i-1}, e* := e_{2i-1}
    v = AGGREGATION(X, s*, e*, L)
    b*_{2i-1} := argmax_{s* <= b < e*} {v_b}
    if v_{b*_{2i-1}} > zeta_T then
      add b*_{2i-1} to the set of estimated change-points.
      NONPARAMETRICISOLATEDETECT(X, e*, e, lambda_T, zeta_T, L)
    else
      s* := s_{2i}, e* := e_{2i}
      v = AGGREGATION(X, s*, e*, L)
      b*_{2i} := argmax_{s* <= b < e*} {v_b}
      if v_{b*_{2i}} > zeta_T then
        add b*_{2i} to the set of estimated change-points.
        NONPARAMETRICISOLATEDETECT(X, s, s*, lambda_T, zeta_T, L)
      else
        i = i + 1
        if i <= max{|L_{s,e}|, |R_{s,e}|} then
          Go back to (Main part) and repeat
        else
          STOP
        end if
      end if
    end if
  end if
end function

The call to launch NPID is NONPARAMETRICISOLATEDETECT(X, 1, T, lambda_T, zeta_T, L). An explanation of the pseudocode follows. With $K$ as in (7), we use the intervals $[s_1, e_1], [s_2, e_2], \dots, [s_{2K}, e_{2K}]$ in the isolation step. The algorithm is looking for change-points interchangeably in right- and left-expanding intervals. The change-points, if any, are detected one by one in such intervals. Note that, between subsequent detections, in the odd-indexed intervals $[s_1, e_1], [s_3, e_3], \dots, [s_{2K-1}, e_{2K-1}]$, the start-point is fixed and equal to $s$; in the even-indexed intervals $[s_2, e_2], [s_4,$
$e_4], \dots, [s_{2K}, e_{2K}]$, the end-point is fixed and equal to $e$. After the detection of a change-point, $s$ or $e$ is updated based on whether the change-point was detected in a right- or a left-expanding interval, respectively. The process follows until there are no intervals to be checked. Change-point detection occurs if and when the aggregated contrast function values (obtained through the AGGREGATION($\cdot,\cdot,\cdot,\cdot$) function) at specific time points surpass a suitably chosen threshold $\zeta_T$. The algorithm stops when it is applied on an interval $[s, e]$ such that, for all expanding intervals $[s^*_j, e^*_j] \subseteq [s, e]$, we have that for $v^{(j)} := \mathrm{AGGREGATION}(X, s^*_j, e^*_j, L)$ there is no $b^*_j \in [s^*_j, e^*_j)$ with $v^{(j)}_{b^*_j} > \zeta_T$.

2.2 Theoretical behaviour of NPID

For $x \in \mathbb{R}$, denote
$$\delta_j = r_j - r_{j-1},\quad j = 1,\dots,N+1,\qquad F_t(x) = P(X_t \le x),\quad t = 1,\dots,T,\qquad \Delta_j(x) = F_{r_j+1}(x) - F_{r_j}(x),\quad j = 1,\dots,N,\tag{9}$$
where $L(\cdot)$ is the mean-dominant norm used in the aggregation step. For the main result of Theorem 1, we make the following assumption.

(A1) For $\delta_T$ as in (6), it holds that $\delta_T \to \infty$ as $T \to \infty$. Furthermore, for every $j \in \{1,\dots,N\}$ there are constants $\tilde C_j > 0$, independent of $T$, and sequences $\{\gamma_{j,T}\} \in \mathbb{R}^+$ such that
$$P\left(L\left(|\Delta_j(X_1)|, \dots, |\Delta_j(X_T)|\right) \ge \frac{\tilde C_j}{\gamma_{j,T}}\right) \xrightarrow[T\to\infty]{} 1,$$
and for $\delta_j$ as in (9), we require that
$$m_T := \min_{j\in\{1,\dots,N\}}\left\{\tilde C_j \sqrt{\min\{\delta_j, \delta_{j+1}\}}\,\big/\,\gamma_{j,T}\right\} \ge C\sqrt{\log T},$$
for a large enough constant $C$.

The number of change-points, $N$, is assumed neither known nor fixed, and can grow with $T$. Due to the minimum distance, $\delta_T$, between two change-points, the only indirect assumption on the true number of change-points is that $N + 1 \le T/\delta_T$. Below, we state the consistency result for the number and locations of the estimated change-points, when any mean-dominant norm $L(\cdot)$ is employed. The proof is given in the supplementary material; however, in Appendix B we provide a brief discussion of the main steps followed in the proof.

Theorem 1. Let $\{X_t\}_{t=1,\dots,T}$ follow model (1) and assume that (A1) holds. Let $N$ and $r_j$, $j = 1, \dots$
$, N$, be the number and locations of the change-points, while $\hat N$ and $\hat r_j$, $j = 1,\dots,\hat N$, are their estimates (sorted in increasing order) when NPID is employed with any mean-dominant norm. Then, there exist positive constants $C_1, C_2$, independent of $T$, such that for $C_1\sqrt{\log T} \le \zeta_T < C_2 m_T$, we have that for $d \to \infty$,
$$P\left(\hat N = N,\ \max_{j=1,\dots,N}\left\{|\hat r_j - r_j|/\gamma^2_{j,T}\right\} \le d\right) \xrightarrow[T\to\infty]{} 1.\tag{10}$$
It is known [8] that in the case of single non-parametric change-point detection it is possible to achieve (using the notation from [8], simplified to the real-valued variable setting) $\Delta^2 |\hat r - r| = O_P(1)$, where $r$ and $\hat r$ are the true and estimated change-points, respectively, and $\Delta = \sup_{u\in\mathbb{R}} |F_r(u) - F_{r+1}(u)|$, where $F_t(\cdot)$ is the cumulative distribution function at time point $t$. Previous attempts, for example [5], showed consistency results for the estimated change-point but with a worse rate. Our result attains the optimal consistency rate in the case of multiple change-points, something that
was also obtained in [26], but under stronger assumptions than our (A1). In particular, [26] require that there is an upper bound, denoted by $\bar N$, for the true number of change-points. This upper bound is such that $\delta_T/(\bar N^4 (\log \bar N)^2 (\log T)^{2+c}) \xrightarrow[T\to\infty]{} \infty$, where $\delta_T$ is the minimum distance between consecutive change-points and $c > 0$; more details on this restriction can be found later in this section. In addition, [26] require almost sure convergence of the empirical to the true CDF (Glivenko-Cantelli theorem). The proof of Theorem 1 uses the Dvoretzky-Kiefer-Wolfowitz inequality, which is necessary for the Glivenko-Cantelli theorem, but not vice versa. Furthermore, the consistency results in [26] hold for continuous distributions within each homogeneous segment (it is highlighted, though, that in practice the distributions can also be discrete or mixed), while in NPID the distributions between two change-points can be (either in theory or in practice) continuous, discrete, or mixed, as is also the case in [5] and [8] for the single change-point detection scenario. A more thorough comparison between NPID and the main competitors in the literature is given later in this section. An earlier attempt by [14] uses weighted empirical measures to detect a distributional difference over a running window. A restrictive assumption on the difference between two neighbouring distributions in that work leads to almost-sure consistency for the locations with the rate of $O(\log T)$; a practical drawback of the method is the need to choose the window length $A_T$, which our approach bypasses. Our method, under relatively weak assumptions, attains the optimal rate for the distances between the true and the estimated change-point locations. It can be used in the detection of distributional changes in difficult structures such as ones involving short spacings between consecutive change-points and/or high degrees of distributional similarity across neighbouring segments.
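Putting the contrast, the aggregation norm and a threshold together, single change-point localization on one interval can be illustrated as follows (a self-contained toy sketch with hypothetical names and an arbitrarily chosen threshold constant; it is not the paper's R implementation):

```python
import numpy as np

def cusum_contrast(X, s, e, b, u):
    # Weighted difference of pre-b and post-b sums of 1{X_t <= u} (1-based s <= b < e).
    B = (X <= u).astype(float)
    n_left, n_right, n = b - s + 1, e - b, e - s + 1
    return (np.sqrt(n_right / (n_left * n)) * B[s - 1:b].sum()
            - np.sqrt(n_left / (n_right * n)) * B[b:e].sum())

def detect_single(X, s, e, zeta):
    # Aggregate |contrast| over u at the sample points with the L_inf norm,
    # then compare the maximizer's value with the threshold zeta.
    best_b, best_v = None, -np.inf
    for b in range(s, e):
        v = max(abs(cusum_contrast(X, s, e, b, u)) for u in X)
        if v > best_v:
            best_b, best_v = b, v
    return (best_b, best_v) if best_v > zeta else (None, best_v)

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])  # change at t = 50
zeta = 1.0 * np.sqrt(np.log(len(X)))   # zeta_T = C * sqrt(log T); C picked arbitrarily
b_hat, v_max = detect_single(X, 1, len(X), zeta)
print(b_hat, round(v_max, 2))
```

With a mean shift this pronounced, the maximizer of the aggregated contrast lands at (or very close to) the true change-point and clearly exceeds the threshold.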
We illustrate the practical performance of NPID in Sections 4 and 5. Motivated by Theorem 1, we use in practice thresholds of the form
$$\zeta_T = C\sqrt{\log T},\tag{11}$$
where the choice of the constant $C$ will be discussed in Section 3 for two examples of mean-dominant norms: $L_2(\cdot)$ and $L_\infty(\cdot)$, as in (2).

Comparison with the main competitors. We now provide a further comparison of NPID to its main competitors, the NMCD method of [26] and the NWBS algorithm of [18]. We do not include in this comparison the ECP approach of [17], as the latter uses moment assumptions and is therefore not classically non-parametric (in particular, it is not invariant with respect to monotone transformations of the data). Furthermore, the method of [14] is also not included in the comparison, as it is shown in [26] that the estimated number and locations of the change-points, using the method of [14], are not satisfactory; more specifically, in a simulation study, it was shown that the resulting models led to overfit in all scenarios tested. Under the mild assumption (A1), NPID consistently estimates the true number and the locations of the change-points, achieving the optimal rate $O_P(1)$ for the estimated locations. As discussed earlier, the same rate is attained by NMCD, but under more restrictive assumptions. The NMCD approach is
a two-step process in which the number of change-points is first estimated through a version of the Bayesian Information Criterion (BIC), and the locations are estimated next. For N̄ being the maximum permitted number of change-points (the fixed-N̄ scenario), NMCD imposes the restrictive assumption δ_T/(N̄^4 (log N̄)^2 (log T)^{2+c}) → ∞ for some c > 0. This means that, for the consistency result to hold, the minimum distance between consecutive change-points needs to be of order at least O((log T)^{2+c}); this is as long as the true number of change-points is kept fixed and is not allowed to increase with the sample size. When N̄ is allowed to increase with T, the condition on the order of δ_T becomes even stricter. Such strict assumptions on δ_T are absent from our method; Assumption (A1) essentially requires that the product of the distance between consecutive change-points and the square of the associated magnitude of change be of order at least log T. In addition, [26] require the magnitudes of the changes to be bounded away from zero. In contrast, in NPID, if the minimum distance between two consecutive change-points, δ_T, is of a higher order than O(log T), then Assumption (A1) allows the magnitudes of the changes to go to zero as T → ∞. With regards to the NWBS approach of [18], the rate of consistency for the change-point locations obtained through NWBS is worse than that obtained by NPID, being off by a logarithmic factor. In addition, [18] work under an assumption that requires √(min_{j∈{1,...,N+1}} δ_j) · min_{j} {sup_{x∈R} |Δ_j(x)|} ≥ C √(log T), while in the same context we require the less restrictive min_{j∈{1,...,N}} {√(min{δ_j, δ_{j+1}}) sup_{x∈R} |Δ_j(x)|} ≥ C √(log T), where C is a large enough positive constant.
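The difference between the two conditions can be checked numerically. The sketch below (Python; all numbers are hypothetical and purely illustrative) builds a configuration with one short, high-magnitude spacing next to long, low-magnitude ones, for which the minimum over the per-change-point products stays well above the product of the two minimums:

```python
import math

# Toy configuration: spacings delta_1, delta_2, delta_3 and change magnitudes
# Delta_j = sup_x |Delta_j(x)| at the two change-points (hypothetical values).
delta = [25.0, 400.0, 400.0]   # a short first spacing
Delta = [0.9, 0.05]            # a big first change, a small second change

# NWBS-style quantity: product of the two minimums.
nwbs_quantity = math.sqrt(min(delta)) * min(Delta)

# NPID-style quantity: minimum over j of the per-change-point products.
npid_quantity = min(
    math.sqrt(min(delta[j], delta[j + 1])) * Delta[j] for j in range(len(Delta))
)

# The NPID bound is easier to satisfy for this configuration:
assert npid_quantity > nwbs_quantity
```

Here the short spacing carries a large change and the long spacings a small one, so pairing minimums across different change-points (as NWBS does) is needlessly pessimistic.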
This milder assumption (imposing a lower bound on the minimum of the products rather than on the product of the minimums) is the result of the guaranteed (on a high-probability event) change-point isolation prior to detection enjoyed by our methodology. Furthermore, in NWBS, the distributional change at each change-point is measured through the Kolmogorov-Smirnov (KS) distance. The NPID methodology is more general, as it uses mean-dominant norms, of which L∞(·) (corresponding to KS) is but one example. In signals with a large number of change-points, NWBS needs to increase the number M of intervals drawn, and this has a negative effect on the computation time, which is significantly greater than that for NPID; see Section 3.1 as well as the simulations in Section 4. By contrast, due to the interval-expansion approach, NPID is certain to examine all possible change-point locations, leading to better practical performance with more predictable execution times.

2.3 Information Criterion approach

In NPID, detection is based on the comparison of aggregated (through mean-dominant norms) contrast function values to a threshold, ζ_T. In Section 3.2, we explain how the threshold constant C (formula (11)) is carefully chosen through a large-scale simulation study. The values obtained for the L∞ and the L2 mean-dominant norms are also shown to control the Type I error; see the discussion in Section 3.2, as well as the relevant results in Tables 1 and 2 of the online supplement. Even though
NPID is robust to small deviations from the optimal threshold value, misspecification of the threshold can possibly lead to misestimation of the number and/or the locations of the change-points. In order to reduce the dependence of our methodology on the threshold choice, we propose in this section an approach which starts by possibly overestimating the number of change-points and then creates a solution path, with the estimates ordered in importance according to a certain predefined criterion. The best fit is then chosen based on the optimization of a model selection criterion.

The solution path algorithm. With ζ*_T being the optimal choice for the threshold value (more details can be found in Section 3.2), we denote N̂ := N̂(ζ*_T). For given input data, we run NPID with a threshold set to ζ̃_T < ζ*_T. Let C̃ be the constant associated with ζ̃_T in (11). We estimate J ≥ N̂(ζ*_T) change-points, denoted by r̃_j, j = 1, ..., J. These are sorted in increasing order in S̃ = [r̃_1, r̃_2, ..., r̃_J]. After this overestimation step, the idea is to remove change-points according to their mean-dominant-norm values. The first step in the solution path algorithm is, for r̃_0 = 0 and r̃_{J+1} = T, to collect the triplets (r̃_{j−1}, r̃_j, r̃_{j+1}) for all j ∈ {1, ..., J} and to calculate B̃^{r̃_j}_{r̃_{j−1}+1, r̃_{j+1}}(X_i), with B̃^b_{s,e}(X_i) defined in (5). Then, for

B_{r̃_j} = ( B̃^{r̃_j}_{r̃_{j−1}+1, r̃_{j+1}}(X_1), B̃^{r̃_j}_{r̃_{j−1}+1, r̃_{j+1}}(X_2), ..., B̃^{r̃_j}_{r̃_{j−1}+1, r̃_{j+1}}(X_T) ),

we apply the same mean-dominant norm L(·) as in the overestimation step explained above to B_{r̃_j}, for each j ∈ {1, ..., J}. For m = argmin_j L(B_{r̃_j}), we remove r̃_m from the set S̃, reduce J by 1, relabel the remaining estimates (in increasing order) in S̃, and repeat this removal process until S̃ = ∅. At the end of this change-point removal process, we collect the estimates in a vector

b = (b_1, b_2, ..., b_J),    (12)

where b_J is the estimate that was removed first, b_{J−1} the one removed second, and so on.
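A minimal Python sketch of the removal loop follows; the contrast function here is a toy mean-difference stand-in for the aggregated mean-dominant-norm value L(B_{r̃_j}), not the paper's R implementation:

```python
def solution_path(cand, T, contrast):
    """Order candidate change-points by importance (least important removed first).

    cand     : sorted list of candidate change-point locations (1-based).
    T        : sample length.
    contrast : contrast(s, b, e) -> float, importance of split point b
               within the interval (s, e); a stand-in for L(B_{r_j}).
    Returns the solution path b = (b_1, ..., b_J): most important first.
    """
    S = list(cand)
    removed = []
    while S:
        ext = [0] + S + [T]
        # contrast of each remaining estimate within its current neighbours
        vals = [contrast(ext[j - 1] + 1, ext[j], ext[j + 1])
                for j in range(1, len(S) + 1)]
        m = min(range(len(S)), key=lambda j: vals[j])
        removed.append(S.pop(m))          # drop the least important estimate
    return removed[::-1]                  # reverse: removed last = most important

# Toy contrast: mean difference across the split of a piecewise-constant signal.
x = [0.0] * 50 + [5.0] * 50 + [5.2] * 50
def cusum(s, b, e):
    left, right = x[s - 1:b], x[b:e]
    return abs(sum(left) / len(left) - sum(right) / len(right))

path = solution_path([50, 100], 150, cusum)
assert path[0] == 50   # the large change at 50 is ranked most important
```

The weak change at 100 is removed first, so it appears last in the path, matching the ordering convention of (12).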
The vector b is referred to as the solution path and is used to give a range of different fits.

Model selection through BIC. We define the collection {M_j}_{j=0,1,...,J}, where M_0 = ∅ and M_j = {b_1, b_2, ..., b_j}. Among the collection of models {M_j}_{j=0,1,...,J}, we propose to select the one that minimizes a version of Schwarz's Bayesian Information Criterion (BIC), as given in [26]. For each model M_j, j = 0, 1, ..., J, the BIC balances the likelihood against the number of change-points by incorporating an appropriate penalty for larger values of j. More specifically, for b as in (12), we denote by b^{*,j}_1, ..., b^{*,j}_j the first j elements of b, sorted in increasing order. Through the solution path algorithm, for each j ∈ {1, 2, ..., J}, the model M_j includes the j most important estimated change-points. When working with model M_j, we set b^{*,j}_0 = 0 and b^{*,j}_{j+1} = T. The preferred model is then identified by minimizing, for j ∈ {0, 1, ..., J},

BIC(j) = −S_T(b^{*,j}_0, b^{*,j}_1, ..., b^{*,j}_{j+1}) + j p_T,    (13)

where

S_T(b^{*,j}_0, ..., b^{*,j}_{j+1}) = T Σ_{i=0}^{j} Σ_{l=2}^{T−1} [(b^{*,j}_{i+1} − b^{*,j}_i)/(l(T − l))] { F̂_i(X_[l]) log F̂_i(X_[l]) + (1 − F̂_i(X_[l])) log(1 − F̂_i(X_[l])) },    (14)

in which F̂_i(u) := F̂^{b^{*,j}_{i+1}}_{b^{*,j}_i}(u) = (b^{*,j}_{i+1} − b^{*,j}_i)^{−1} Σ_{t=b^{*,j}_i+1}^{b^{*,j}_{i+1}} 1{X_t ≤ u} is the empirical CDF of the i-th segment and X_[l] denotes the l-th order statistic. The expression in (14), whose expectation is maximized at the true change-point locations, is an integrated form of the profile log-likelihood function in the presence of the j change-points within each candidate model M_j. The term p_T in (13) is the penalty, which goes to infinity with T. [26] prove theoretical consistency of the BIC-related approach by taking p_T = K̄_T^3 (log K̄_T)^2 (log T)^{2+c}, where K̄_T is an upper bound on the true number of change-points; such a bound is necessary for NMCD, but not for the thresholding version of NPID introduced earlier. A small value of the constant c helps to prevent underfitting. The practical choice of p_T in [26] is p_T = (1/2)(log T)^{2.1}, which is what we also use with NPID in Sections 4 and 5 (as an alternative to thresholding). We highlight, though, that the theoretical results have been proven only when NPID is combined with thresholding; the proof requires weaker assumptions than those used in [26], where the results related to the BIC are proved (see also the relevant discussion in Section 2.2).

3 Computational complexity and practicalities

3.1 Computational cost

With δ_T being the minimum distance between two change-points and λ_T the interval-expansion parameter, we need λ_T < δ_T to achieve isolation. Since K = ⌈T/λ_T⌉ > ⌈T/δ_T⌉ and the total number, M, of distinct intervals required to scan the data is no more than 2K (K intervals from each expanding direction), in the worst-case scenario we have M = 2K > 2⌈T/δ_T⌉. The cost of computing the non-parametric CUSUM, B̃^b_{s,e}(u), in (5) is linear in time. Because we aggregate over all the non-parametric CUSUM values obtained for the different X_1, . . .
, X_T, and since this is done for at most M intervals, we conclude that the computational cost of our algorithm is, in the worst case, of order O(MT²) = O(T³ δ_T^{−1}). For the NWBS algorithm of [18], guaranteeing consistency requires drawing S > (T²/δ_T²) log(T/δ_T) intervals, which means that the computational complexity of NWBS is at least of order O((T⁴/δ_T²) log(T/δ_T)). Therefore, NPID enjoys significant speed gains over NWBS. The reason behind this difference in the computational complexity of the methods is that in NWBS both the start- and end-points of the randomly drawn intervals have to be chosen, whereas in NPID, depending on the expanding direction, we keep the start- or the end-point fixed. As explained in Section 2.1, in the case of no detection our method is applied to progressively larger intervals until a detection occurs, which means that the most computationally expensive scenario for NPID is when there are no change-points in a given data sequence. On the other hand, for signals with
a large number of frequently occurring change-points, NPID will keep operating on short intervals, leading to fast computation times. This is not the case for many of the competitors; see, for example, [18] and [17], where it is explained that the computational time of binary-segmentation-related approaches and of some dynamic programming algorithms (for example, the Segment Neighbourhood Search (SNS) algorithm of [4]) increases proportionally to the number of change-points. This is also the case for NMCD, in which the SNS dynamic programming algorithm is employed for the maximization of an objective function. Given a maximum number, B, of change-points to search for, SNS calculates, based on a given cost function, the optimal segmentations for each i ∈ {0, 1, ..., B}. If all the segment costs have already been computed, SNS has a computational cost of O(BT²). In NMCD, the cost for a single segment is O(T) [10], which means that the computational cost over all segments is cubic in time. Therefore, the total computational cost of NMCD, as first developed in [26], is O(BT² + T³). To reduce this substantial cost, a screening algorithm was introduced in [26]; it lowers the computational complexity, but it is shown in [10] that such a pre-processing step affects the accuracy of NMCD. Taking all this into consideration, [10] extend NMCD by developing a computationally efficient approach that, first, simplifies the segment cost to be O(log T) rather than linear in time and, second, applies the PELT dynamic programming approach of [13], which is substantially quicker than the SNS algorithm.

3.2 Parameter choice

Choice of the threshold constant. In order to select C in (11), we ran a large-scale simulation study involving a wide range of signals in terms of both the number and the type of the change-points. The number of change-points, N, was generated from the Poisson distribution with rate parameter N_α ∈ {4, 8, 12}.
For T ∈ {100, 200, 400, 800}, we distributed the change-point locations uniformly in {1, ..., T}. Then, at each change-point we introduced (with equal probability) one of the following three types of changes: (M) a change in the mean of the observations; (V) a change in the variance of the observations; (D) a general change in the distribution beyond the mean or the variance only. More information on these three types of changes can be found in Section 2 of the supplement. The simulation procedure was followed for two examples of mean-dominant norms, L2(·) and L∞(·) in (2). The best behaviour occurred for, approximately, C = 0.6 and C = 0.9 for L2 and L∞, respectively. From now on, whenever relevant, these values will be referred to as the default constants. In order to measure the Type I error obtained from this choice of threshold constants under the scenario of no change-points, we ran 100 replications for twenty different no-change-point scenarios, covering both continuous and discrete distributions. We highlight here that our procedure is invariant under monotone transformations of the data; so, for example, we would expect identical performance for Gaussian and Cauchy models with no change-points (or any
other continuous distribution), and the fact that we explicitly study both these models below only serves as an extra check of the correctness of our implementation. The models used are:

Gaussian: The distribution used is the standard normal. There are no change-points and the lengths of the data sequences are T ∈ {30, 75, 200, 500}.

Cauchy: The distribution used is the Cauchy with location and scale parameters equal to zero and one, respectively. There are no change-points and the lengths of the data sequences are T ∈ {30, 75, 200, 500}.

Poisson: The distribution used is the Poisson with mean λ, where λ ∈ {0.3, 3, 30}. There are no change-points and the lengths of the data sequences are T ∈ {30, 75, 200, 500}.

Tables 1 and 2 in Section 1 of the supplement present the frequency distribution of N̂ − N for all the above scenarios and for both the L∞ and L2 mean-dominant norms, when their respective default threshold constants are used. We conclude that the default constants lead to acceptably low values of the Type I error in all scenarios examined. In the BIC-based approach of Section 2.3, the first step requires the detection of change-points using a threshold ζ̃_T < ζ_T. In practice, we take C̃, the constant related to ζ̃_T, to be 20% less than the default constant C.

Choice of the expansion parameter λ_T. We use λ_T = 15 in all examples shown in this paper. A lower value of λ_T would lead to a finer examination of the data, at the expense of an increased computational cost. Table 1 gives the execution speeds for models (T1) and (T2), defined below, on a 2.80GHz CPU with 16 GB of RAM. The models are:

(T1) Length l ∈ {3000, 6000, 9000}, with changes in the mean of a normal distribution at locations 30, 60, ..., l − 30. The magnitudes of the changes in the mean are equal to 4. The standard deviation is σ = 0.5.

(T2) Length l ∈ {3000, 6000, 9000}, with changes in the variance of a normal distribution at locations 250, 500, ..., l − 250.
The mean is equal to 0, while the standard deviation between change-points takes the values 1 and 2 interchangeably. Table 1 shows that NPID is quick for a non-parametric method and can comfortably analyse signals with lengths in the thousands.

Table 1: The average computational time of NPID using either the L2 or the L∞ mean-dominant norm for the data sequences (T1) and (T2)

          Time for (T1) (s)     Time for (T2) (s)
T         L2        L∞          L2        L∞
3000      4.08      3.39        8.29      9.27
6000      21.34     13.52       31.97     38.32
9000      130.66    36.01       68.92     90.55

Table 2: The competing methods used in the simulation study

Method notation    Reference    R package
ECP                [17]         ecp
NMCD               [26]         changepoint.np
NBS                [18]         –
NWBS               [18]         –
CPM                [20]         cpm

3.3 Variants

This section describes four different ways to further improve our method's practical performance with respect to both accuracy and speed.

Data splitting: If the sample size is large, we can split the given data sequence uniformly into smaller parts (windows) to which our method is then applied. In practical implementations, we split the data sequence only if T > 2000; for smaller values of T there are no significant differences in the execution times of NPID and its window-based variant.

Sparse grid: For practical applications, instead of creating T binary sequences by using every empirical quantile (see the pseudocode for our algorithm), one could take Q equally spaced quantile values l_j, j = 1, ..., Q, within the interval [X_[1], X_[T]], in order to reduce computation while still covering the range of the data. For example, for the sequence {X_t}_{t=1,...,T} with R = X_[T] − X_[1], we could take

l_j = X_[1] + j R/(Q + 1), j = 1, ..., Q.    (15)

The loss in accuracy caused by avoiding the use of all observations, and instead creating a sparser grid of values, has been shown through an extensive simulation study to be negligible; most of the time the result was exactly the same. On the other hand, the grid method was approximately T/Q times faster.

Rescaling the CUSUM values: In NPID, we calculate CUSUMs based on B_t(X_i) = 1{X_t ≤ X_i}. The sequence {B_t(X_i)}_{t=1,...,T} will mostly equal zero for low-ranked data points (such as X_[1]), and vice versa. Therefore, to bring the different sequences {B_t(X_i)}_{t=1,...,T} onto a similar scale, we can rescale the contrast function values by their estimated standard deviations, √(p̂(1 − p̂)), where p̂ is the sample mean of the values of B_t(X_i) for t ∈ {1, ..., T}. On the other hand, if p̂ is close to zero or one, then division by √(p̂(1 − p̂)) can lead to spuriously large contrast function values and, therefore, to overdetection issues. Hence, in cases in which p̂ < 0.1 or p̂ > 0.9, we take √(p̂(1 − p̂)) = 0.3, which is the value obtained when p̂ ∈ {0.1, 0.9}.

Restarting post-detection: In practice, instead of starting from the end-point (or start-point) of the right-expanding (or left-expanding) interval in which a detection occurred, the algorithm offers the option of restarting from the estimated change-point location.
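The sparse grid of (15) and the rescaling safeguard described above can be sketched as follows (Python; function names are ours, for illustration only):

```python
import math

def sparse_grid(x, Q):
    """Q equally spaced evaluation points l_j = X_[1] + j*R/(Q+1), j = 1..Q,
    as in (15), instead of using every observation as a quantile."""
    lo, hi = min(x), max(x)
    R = hi - lo
    return [lo + j * R / (Q + 1) for j in range(1, Q + 1)]

def rescale_sd(b):
    """Estimated standard deviation sqrt(p_hat (1 - p_hat)) of the binary
    sequence B_t(X_i) = 1{X_t <= X_i}, clamped when p_hat is extreme."""
    p = sum(b) / len(b)
    if p < 0.1 or p > 0.9:
        return 0.3                        # = sqrt(0.1 * 0.9), the clamp value
    return math.sqrt(p * (1 - p))

x = [1.0, 4.0, 2.0, 9.0, 5.0]
grid = sparse_grid(x, 3)                  # 3 interior points between 1 and 9
assert grid == [3.0, 5.0, 7.0]
assert rescale_sd([1, 0, 1, 0]) == 0.5    # p_hat = 0.5
assert rescale_sd([0] * 20 + [1]) == 0.3  # p_hat = 0.05 -> clamped
```

The clamp value 0.3 is exactly √(0.1 · 0.9), so the rescaling is continuous at the boundary of the safe region.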
This increases the probability of Type I errors (in particular, it increases the risk of double detection) but reduces the probability of Type II errors (i.e. the chances of missing true change-points). The speed of the method is not substantially affected.

4 Simulations

In this section, we provide a comprehensive simulation study of the performance of NPID against state-of-the-art methods in the non-parametric change-point detection literature. Table 2 shows the competitors. The multiscale method of [23] is not included in our comparative simulation study because it is valid for a given quantile level rather than for their combination. NMCD is implemented through the R package changepoint.np. To our knowledge, there is no R package for the NBS and NWBS methods described in [18], and for our simulations we used the R code available at https://github.com/hernanmp/NWBS. For ECP, we present the results for the E-Divisive method (as developed in [17]), which performs a hierarchical divisive estimation of multiple change-points. The change-points are estimated by iteratively applying a procedure for locating a single change-point. While we keep ECP in our simulation study, we remark that this procedure is not classically non-parametric in the sense that it relies on moment assumptions; more specifically, it
assumes that the observations are independent with finite αth absolute moments, for some α ∈ (0, 2] [16]. Therefore, unlike many other non-parametric approaches, including ours, it is not invariant with respect to monotone transformations of the data.

Figure 2: Examples of data sequences used in the simulations: three changes in the mean as in (MM_Gauss); three changes as in (MM_Gauss_tr); three changes in the variance as in (MV_Gauss); one distributional change as in (D1); two distributional changes as in (MD1); three distributional changes as in (MD3). The change-point locations are indicated with red, vertical, solid lines.

The R package cpm contains implementations of several different models, both parametric and non-parametric, for use on univariate streams. We give results for the Kolmogorov-Smirnov and the Cramér-von Mises statistics used in the cpm R package, which are suitable for continuous data. There is no function in the cpm R package for discrete random variables; the method is therefore excluded when our simulation models involve discrete distributions. Furthermore, in the cpm package, the threshold used for detection is decided through the average run length (ARL) until a false positive occurs. In our simulations, we give results for ARL = 500 (the default value) and, whenever the signal length, l_s, is greater than 500, results are also given for ARL = 1000⌈l_s/1000⌉. With A being the value of the ARL used, the notation is CPM^KS_A and CPM^CvM_A, respectively. For the remaining methods, NMCD, NBS, and NWBS, we use the default hyperparameter values. With respect to our method, results are presented when NPID is combined with the information criterion stopping rule discussed in Section 2.3.
The L∞ mean-dominant norm is employed, and the obtained contrast function values are rescaled as explained in Section 3.3. We highlight, though, that the performance of our method is robust to the choice of the mean-dominant norm used. All the signals are fully specified in Appendix A. Figure 2 shows examples of the data generated by models (MM_Gauss), (MM_Gauss_tr), (MV_Gauss), (D1), (MD1), and (MD3). We ran 100 replications for each signal, and the frequency distribution of N̂ − N is shown for each method. The methods with the highest empirical frequency of N̂ − N = 0, and those within 10% of the highest, are given in bold. As a measure of the accuracy of the detected locations, we provide the average values (over the 100 replications) of the scaled Hausdorff distance,

d_H = n_s^{−1} max{ max_j min_k |r_j − r̂_k|, max_k min_j |r_j − r̂_k| },

where n_s is the length of the largest segment. In the first example of Table 3, which is a signal without any change-points, d_H is not given, since it is non-informative in no-change-point scenarios. The average computation time for all methods is also provided. The R code to replicate the simulation study explained in this section can be found at https://github.com/Anastasiou-Andreas/NPID. Tables 3-5 summarize the results. The results in Tables 3-5 show that NPID consistently,
among all the scenarios covered by the models, performs accurately with regard to both the number and the locations of the estimated change-points. To be more precise, NPID is always among the top-performing methods when considering accuracy in any aspect (estimated number of change-points and estimated change-point locations); in the vast majority of cases, it is the best method overall. We highlight that, as shown by the results for the models (MM_Gauss_tr) and (MM_Pois_tr), NPID's performance remains unaffected under transformations of the data; this is expected, since our method (and some of the competitors) relies on the ranks of the data only for the detection of the change-points. Furthermore, we notice a significant gap in performance between NPID and its competitors in all models that undergo variance changes, as well as in models (MD1) and (MD3), which cover the case of more general distributional changes. Regarding the practical computational cost of NPID, the results

Table 3: Distribution of N̂ − N over 100 simulated data sequences from the structures (NC), (M1), (V1), and (D1).
The average d_H and computation time are also given.

Model (NC):
  Method          −1     0     1    ≥2     d_H     Time (s)
  NPID             –    97     3     0      –      0.84
  CPM^KS_500       –    28    25    47      –      0.07
  CPM^CvM_500      –    34    18    48      –      0.07
  ECP              –    94     4     2      –      1.41
  NMCD             –    59     6    35      –      0.05
  NBS              –   100     0     0      –      0.44
  NWBS             –    97     0     3      –      9.73

Model (M1):
  Method          −1     0     1    ≥2     d_H     Time (s)
  NPID             0    94     5     1    0.344    0.15
  CPM^KS_500       0    71    18    11    0.179    0.01
  CPM^CvM_500      0    69    17    14    0.165    0.01
  ECP              0    96     4     0    0.045    0.31
  NMCD             0    70    19    11    0.174    0.01
  NBS             19    70     3     8    0.303    0.09
  NWBS            22    69     4     5    0.314    2.28

Model (V1):
  Method          −1     0     1    ≥2     d_H     Time (s)
  NPID             1    86     5     9    0.123    0.61
  CPM^KS_500       1    60    18    21    0.443    0.01
  CPM^CvM_500      5    49    25    21    0.424    0.01
  ECP             12    83     3     2    0.450    0.52
  NMCD             0    44    23    33    0.384    0.01
  NBS             93     1     1     5    0.978    0.15
  NWBS            90     4     1     5    0.958    3.92

Model (D1):
  Method          −1     0     1    ≥2     d_H     Time (s)
  NPID             2    94     2     2    0.075    2.88
  CPM^KS_500       0     7    11    82    0.637    0.17
  CPM^CvM_500      0     9    12    79    0.576    0.19
  CPM^KS_1000      0    27    17    56    0.442    0.25
  CPM^CvM_1000     0    30    16    54    0.392    0.26
  ECP              0    96     3     1    0.036    7.68
  NMCD             0    27    13    60    0.439    0.12
  NBS             53    41     6     0    0.579    0.96
  NWBS            61    30     4     5    0.678    22.16

in Tables 3-5 indicate that our proposed method can accurately analyse long signals (with lengths in the range of thousands) in seconds. Regarding the competing methods, CPM, NMCD, NBS, and NWBS appear to struggle to detect the change-points correctly, especially in the presence of changes in the variance. The ECP algorithm exhibits very good performance in the at-most-one-change-point scenarios. While it performs well in some models with multiple change-points (mainly those involving changes in the mean), this behaviour is not consistent; the method seems to struggle (as do the rest of the competitors) in the presence of variance changes. We
now focus on a comparison of the behaviour of the competitors on (MM_Gauss) and (MM_Pois) with that on (MM_Gauss_tr) and (MM_Pois_tr); in the latter two models, the exponential function was applied to the data sequences obtained from the former two. We highlight that NPID remained unaffected under such transformations of the data. The algorithms NMCD, NBS, NWBS, and CPM also seem to remain unaffected by such transformations. In contrast, ECP exhibits very accurate behaviour on (MM_Gauss) and (MM_Pois), but its performance declines vastly when the exponential function is applied to the data; see the results for the models (MM_Gauss_tr) and (MM_Pois_tr) in Table 4. ECP works under the assumption that observations have finite αth absolute moments for some α ∈ (0, 2] [17], and transformations will inevitably affect the performance of methods that rely on moment assumptions. Based on the aforementioned comparison of the behaviour of NPID with the best competitors in the literature, we can conclude that our method has the best behaviour overall. NPID is an accurate, reliable, and quick method for non-parametric change-point detection.

Table 4: Distribution of N̂ − N over 100 simulated data sequences from the structures (MM_Gauss), (MM_Gauss_tr), (MM_Student_t3), (MM_Gauss2), (MM_Pois), and (MM_Pois_tr).
The average d_H and computation time are also given.

Model (MM_Gauss):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             0     0    97     3     0    0.090    0.34
  CPM^KS_500       0     0    45    32    23    0.291    0.01
  CPM^CvM_500      0     0    62    17    21    0.225    0.01
  ECP              0     0    92     6     2    0.108    2.21
  NMCD             0     0    83    13     4    0.106    0.02
  NBS             36     6    43     7     8    0.914    0.23
  NWBS            13    18    54     4    11    0.523    6.21

Model (MM_Gauss_tr):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             0     0    97     3     0    0.090    0.30
  CPM^KS_500       0     0    37    29    34    0.303    0.01
  CPM^CvM_500      0     0    50    20    30    0.265    0.01
  ECP              0    32    59     9     0    0.467    2.01
  NMCD             0     0    82    14     4    0.107    0.02
  NBS             41    10    36     7     6    1.027    0.22
  NWBS             7    20    57     7     9    0.480    5.63

Model (MM_Student_t3):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             9     9    81     1     0    0.347    0.28
  CPM^KS_500       0     0    30    37    33    0.403    0.01
  CPM^CvM_500      0     0    47    30    23    0.295    0.01
  ECP              0     1    84    13     2    0.183    2.50
  NMCD             0     0    47    30    23    0.321    0.02
  NBS             48     6    26    12     8    1.121    0.24
  NWBS            21    22    42    10     5    0.619    6.40

Model (MM_Gauss2):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             0     0    97     3     0    0.085    6.25
  CPM^KS_500       0     0     2     3    95    0.452    0.01
  CPM^CvM_500      0     0     1     7    92    0.446    0.01
  CPM^KS_2000      0     0    22    36    42    0.266    0.02
  CPM^CvM_2000     0     0    25    33    42    0.274    0.01
  ECP              0     0    91     9     0    0.089    113.25
  NMCD             0     0    72    24     4    0.089    0.05
  NBS             22    13    43    22     0    0.489    0.21
  NWBS             0     2    96     2     0    0.166    46.69

Model (MM_Pois):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             0     9    91     0     0    0.131    0.02
  ECP              0     0    84    15     1    0.114    2.10
  NMCD             0     0    89     9     2    0.066    0.02
  NBS              0    43    54     2     1    0.489    0.21
  NWBS             0    50    45     4     1    0.551    5.71

Model (MM_Pois_tr):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             0     9    91     0     0    0.131    0.02
  ECP              1    67    28     4     0    0.721    1.83
  NMCD             0     0    86    11     3    0.098    0.02
  NBS              0    39    56     3     2    0.458    0.22
  NWBS             0    48    48     1     3    0.531    6.05

Table 5: Distribution of N̂ − N over 100 simulated data sequences from the structures (MV_Gauss), (MV_Gauss2), (MD1), (MD2), and (MD3). The average d_H and computation time are also given.

Model (MV_Gauss):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             0     1    87     9     3    0.102    0.79
  CPM^KS_500       0     0    19    24    57    0.278    0.03
  CPM^CvM_500      0     0    21    26    53    0.277    0.02
  CPM^KS_1000      0     1    37    27    35    0.223    0.03
  CPM^CvM_1000     0     0    47    24    29    0.194    0.03
  ECP              0    78    20     2     0    0.631    3.84
  NMCD             0     0    33    18    49    0.197    0.03
  NBS             87     3     4     4     2    1.574    0.40
  NWBS            48    20    19     9     4    1.181    10.86

Model (MV_Gauss2):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             0     3    85     7     5    0.171    2.32
  CPM^KS_500       0     0     5     7    88    0.424    0.05
  CPM^CvM_500      0     2    12    19    67    0.407    0.05
  CPM^KS_1000      0     1    16    28    55    0.374    0.07
  CPM^CvM_1000     1     3    31    21    44    0.391    0.06
  ECP             71    23     5     0     1    0.736    13.60
  NMCD             0     0    26    18    56    0.266    0.07
  NBS             87     5     4     1     3    2.934    0.81
  NWBS            88     2     4     2     4    1.513    21.55

Model (MD1):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             0     3    97     0     0    0.070    0.84
  ECP             17    52    25     6     0    0.919    5.46
  NMCD             0     0    29    36    35    0.312    0.07
  NBS              0    51    41     7     1    0.569    0.65
  NWBS             2    60    31     3     4    0.684    17.45

Model (MD2):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             0     0    98     2     0    0.069    0.52
  CPM^KS_500       0     0    23    36    51    0.347    0.01
  CPM^CvM_500      0     0    34    29    37    0.301    0.01
  ECP              0     0    93     6     1    0.080    3.17
  NMCD             0     0    52    24    24    0.150    0.02
  NBS             16     4    62     4    14    0.486    0.30
  NWBS             9    12    65     6     8    0.377    8.11

Model (MD3):
  Method         ≤−2    −1     0     1    ≥2     d_H     Time (s)
  NPID             0    11    86     3     0    0.173    2.20
  CPM^KS_500       0     0     8    14    78    0.418    0.07
  CPM^CvM_500      0     0    13    12    75    0.367    0.05
  CPM^KS_1000      0     1    25    22    52    0.323    0.08
  CPM^CvM_1000     0     0    31    22    47    0.288    0.06
  ECP              0    34    63     2     1    0.334    11.56
  NMCD             0     0    77    19     4    0.092    0.09
  NBS              0    53    32    11     4    0.555    0.82
  NWBS             0    69    22     2     7    0.624    21.51

Figure 3: Change-point detection for the first individual in the micro-array data set. The NPID-estimated change-point locations are marked with red solid vertical lines.
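The scaled Hausdorff distance d_H reported in the tables above can be computed in a few lines (a sketch; the helper name is ours, not from the paper's code):

```python
def hausdorff_scaled(true_cpts, est_cpts, n_s):
    """Scaled Hausdorff distance d_H between the true change-points r_j and
    the estimates r_hat_k, with n_s the length of the largest segment."""
    d1 = max(min(abs(r - s) for s in est_cpts) for r in true_cpts)
    d2 = max(min(abs(r - s) for r in true_cpts) for s in est_cpts)
    return max(d1, d2) / n_s

# Perfect recovery gives 0; a 5-point miss with largest segment 100 gives 0.05:
assert hausdorff_scaled([100, 200], [100, 200], 100) == 0.0
assert hausdorff_scaled([100, 200], [105, 200], 100) == 0.05
```

Both directions of the distance are needed: `d1` penalises missed change-points, while `d2` penalises spurious detections far from any true location.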
Figure 4: Change-point detection for Individual 4 in the micro-array data set. The estimated change-point locations are given with dashed vertical lines. Top row: the results for the NPID and ECP methods. Bottom row: the results for the NMCD and NWBS methods.

5 Real data examples

5.1 Micro-array data

In this section, we investigate the performance of our method on micro-array data from individuals with a bladder tumour. The data set can be obtained from the R package ecp [16] and has been analysed previously in the literature. More specifically, in [17] the whole multivariate data set is analysed, while in [18] non-parametric change-point detection is carried out for the first individual in the data set. The results of NPID for the first individual can be found in Figure 3. Our method seems to capture the important movements (mainly in the mean) in the micro-array data for this specific individual, and the results obtained are similar to those presented in [18], with a deviation regarding the estimated number of change-points (14 for NPID compared to 16 for NWBS). We further investigate the performance of the methods for other individuals in the data set, also including ECP and NMCD in the analysis. The results for Individuals 4 and 39 can be found in Figures 4 and 5, respectively. ECP and NMCD may be suspected of overestimation in both subjects. NWBS appears to slightly overestimate the number of change-points for Individual 4 and to underestimate it for Individual 39. NPID exhibits (at least visually) good performance in both subjects. In real-data examples, it is a challenge to decide which of the methods gives the best segmentation. However, the NPID solution path algorithm (Section 2.3) can be used to obtain a range of different segmentation models, providing users with the flexibility to choose according to their preferred model selection criterion. In Figure 6, we show the solution path for Individual 4 with the 12 most prominent change-points.
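The BIC-based choice of a model along the solution path, as given in (13) and (14) of Section 2.3, can be transcribed directly. The sketch below is a plain O(T²) Python transcription with toy data (our own illustrative code, not the paper's R implementation), using the practical penalty p_T = (1/2)(log T)^{2.1}:

```python
import math

def ecdf(x, s, e, u):
    """Empirical CDF of the segment X_{s+1}, ..., X_e evaluated at u."""
    seg = x[s:e]
    return sum(1 for v in seg if v <= u) / len(seg)

def S_T(x, bpts):
    """Integrated profile log-likelihood as in (14) for sorted change-points
    bpts, with b_0 = 0 and b_{j+1} = T implicit."""
    T = len(x)
    xs = sorted(x)                       # order statistics X_[1], ..., X_[T]
    b = [0] + list(bpts) + [T]
    total = 0.0
    for i in range(len(b) - 1):
        w = b[i + 1] - b[i]              # segment length
        for l in range(2, T):            # l = 2, ..., T-1
            F = ecdf(x, b[i], b[i + 1], xs[l - 1])
            if 0.0 < F < 1.0:            # treat 0*log(0) as 0
                total += w / (l * (T - l)) * (
                    F * math.log(F) + (1 - F) * math.log(1 - F))
    return T * total

def bic(x, bpts):
    """BIC(j) of (13) with the practical penalty p_T = (1/2)(log T)^{2.1}."""
    pT = 0.5 * math.log(len(x)) ** 2.1
    return -S_T(x, bpts) + len(bpts) * pT

# On a toy signal with a clear change at t = 30, the one-change model wins:
x = [0.0] * 30 + [10.0] * 30
assert bic(x, [30]) < bic(x, [])
```

Within each candidate segment, (14) rewards empirical CDF values close to 0 or 1 at the order statistics, which is exactly what a correct segmentation of the toy signal achieves.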
15 Non-parametric segmentation A P REPRINT Results from NPID Time0 500 1000 1500 2000−0.40.00.4Results from ECP Time0 500 1000 1500 2000−0.40.00.4 Results from NMCD Time0 500 1000 1500 2000−0.40.00.4Results from NWBS Time0 500 1000 1500 2000−0.40.00.4Data values for Individual 39 Figure 5: Change-point detection for Individual 39 in the micro-array data set. The estimated change-point locations are given with dashed vertical lines. Top row: The results for the NPID and ECP methods. Bottom row: The results for the NMCD and NWBS methods. Solution path results for NPID TimeData values for Individual 4 0 500 1000 1500 2000−0.8 −0.4 0.00.20.4 Figure 6: Solution path algorithm results for Individual 4 in the micro-array data set. The 12 most important locations according to the solution path algorithm are given. Three subcategories of importance are created. The four most important locations are given with red solid lines, while the elements in positions 5-8 and 9-12 of the solution path vector are presented with blue dashed and green dotted lines, respectively. Within each subcategory, the thicker the line, the more important the estimated location. 5.2 Major American stock indices. We first analyze the Dow Jones Industrial Average (DJIA) index daily log returns from the 14thof April 2020 until the 17thof April 2025. (All data in this section is taken from https://fred.stlouisfed.org/series/DJIA
https://arxiv.org/abs/2504.21379v1
.) Figure 7 shows the results for NPID, as well as for the ECP, NMCD, and NWBS algorithms. NPID, ECP, and NMCD exhibit similar behaviour, detecting 4, 3, and 5 change-points, respectively, while NWBS does not detect any change-points. It is interesting to see that NPID and NMCD capture, through the last estimated change-point, an important increase in volatility that took place in April 2025. ECP and NWBS appear insufficiently sensitive in this instance. Further, we consider the S&P 500 and Nasdaq Composite indices over the same time period; the results are provided in Figures 8 and 9, respectively. For NPID, NMCD, and NWBS, the results for the three market indices are very similar. NPID and NMCD capture important distributional changes, including the recent volatility change in April 2025; NWBS does not detect any change-points in any index. ECP exhibits similar behaviour for the S&P 500 and Nasdaq Composite indices, while it detects one more change-point for the DJIA index.

Figure 7: Change-point detection for the DJIA index daily log returns. Top row: The results for the NPID and ECP methods. Bottom row: The results for the NMCD and NWBS methods.

Figure 8: Change-point detection for the S&P 500 index daily log returns. Top row: The results for the NPID and ECP methods. Bottom row: The results for the NMCD and NWBS methods.

Figure 9: Change-point detection for the Nasdaq Composite index daily log returns. Top row: The results for the NPID and ECP methods. Bottom row: The results for the NMCD and NWBS methods.

5.3 The COVID-19 outbreak in the UK

The performance of NPID is investigated on data from the COVID-19 pandemic; more specifically, we attempt to find changes in the percentage change in COVID-19 seven-day case rates in the UK. The data concern the period from the beginning of September 2021 until the 19th of February 2022, and they are available from https://coronavirus.data.gov.uk.

Figure 10: Change-point detection for the percentage change in COVID-19 seven-day case rates in the UK. The estimated change-point locations are given with solid vertical lines. Top row: The results for the NPID and ECP methods. Bottom row: The results for the NMCD and NWBS methods.

We again look
for any changes in the distribution of the data. Figure 10 shows the results for the NPID method, as well as for the ECP, NMCD, and NWBS methods. The estimated numbers of change-points for NPID, ECP, NMCD, and NWBS are 8, 7, 17, and 12, respectively. While the changes returned by NPID and ECP are relatively easy to justify visually, NWBS and, even more so, NMCD can be suspected of overestimating the number of change-points here. However, we have to bear in mind that we work under model mis-specification in this example, as the data points are clearly not serially independent.

6 Discussion

We highlight that the method can be extended to variables with values in an arbitrary metric space, $E$, as long as the distributional difference is detectable on a Vapnik–Chervonenkis (VC) class. The way to achieve this is through results appearing in [8] for the single change-point scenario. More specifically, one can choose a seminorm $N_T(\cdot)$, deterministic or random, on the space $\mathcal{S}$ of all finite signed measures on $E$, and compute the distributional difference based on this seminorm. Under the assumptions that, first, there is a VC class $\mathcal{D}$ of measurable subsets of $E$ such that
$$N_T(\nu) \le \sup\{|\nu(D)| : D \in \mathcal{D}\}, \quad \forall T \ge 2,\ \forall \nu \in \mathcal{S},$$
and, second, that the probability of the distributional difference (measured through the seminorm $N_T(\cdot)$) being bounded away from zero goes to 1 as $T \to \infty$, one can show consistency of the estimated number and locations of the change-points, as in Theorem 1. For an example of a VC class of measurable subsets of $\mathbb{R}^p$, see p. 1474 in [8].

A Models used in the simulation study

The characteristics of the data sequences $X_t$ which were used in the simulation study are given in the list below.

(NC) constant signal: length 500 with no change-points.

(M1) mean: length 200 with one change-point at 100. The distribution changes from N(0,1) to N(1,1).

(V1) variance: length 500 with one change-point at 250. The distribution changes from N(0,1) to N(0,4).
(D1) distributional: length 1000 with one distributional change at 500; there are no changes in the first two moments. In the first segment the distribution is Unif(−3, 3), and in the second one it is Student-$t_3$.

(MM Gauss) multi mean Gauss: length 400 with three change-points at 100, 200, 300. The distribution in the four different segments is N(0,1), N(1,1), N(−0.2,1), and N(−1.3,1).

(MM Gauss tr) multi mean Gauss tr: For this scenario, we transform the data sequences created from (MM Gauss) using the exponential function.

(MM Student t3) multi mean Student: length 400 with three changes in the mean at 100, 200, 300. The values for the piecewise-constant signal in the four segments are 0, 1, −0.2, −1.3. On this signal, we add noise following the Student-$t_3$ distribution.

(MM Gauss2) multi mean Gauss 2: length 1600 with 19 change-points at 80, 160, ..., 1520. There are 20 segments. The distribution of the odd-numbered segments is N(0,1), while the distribution of the even-numbered ones is N(2,1).

(MM Pois) multi mean Pois: length 400 with three changes in the
mean at 100, 200, 300. The values for the piecewise-constant signal in the four segments are 0, 1, −0.2, −1.3. On this signal, we add noise following the Poisson(1) distribution.

(MM Pois tr) multi mean Pois tr: For this scenario, we transform the data sequences created from (MM Pois) using the exponential function.

(MV Gauss) multi var1: length 600 with three change-points at the locations 150, 350, 500. The distribution in the four different segments is N(0,1), N(0,9), N(0,1.44), and N(0,0.1).

(MV Gauss2) multi var2: length 1000 with five change-points at the locations 200, 350, 550, 700, 900. The distribution in the six different segments is N(0,10), N(0,2), N(0,0.3), N(0,4), N(0,20), and N(0,2).

(MD1) multi dis1: length 750 with two distributional changes at 250 and 500; there are no changes in the first two moments. In the first segment the distribution is Gamma(1,1), in the second one it is Poisson(1), while in the last segment the distribution is Unif(1−√3, 1+√3).

(MD2) multi dis2: length 500 with three distributional changes at 100, 250, 350. In this example, there are also changes in the first two moments. More specifically, in the first segment the distribution is N(0,1), in the second one it is $\chi^2_1$, in the third one it is Student-$t_3$, while in the last segment the distribution is N(1,1).

(MD3) multi dis3: length 1000 with three distributional changes at 200, 500, 750. In this example, there are also changes in the first two moments. More specifically, in the first segment the distribution is Gamma(1,1), in the second one it is $\chi^2_3$, in the third one it is N(0.5,1), while in the last segment the distribution is Student-$t_5$.

B Brief discussion on the steps we followed for the proof of Theorem 1

In this section we provide, for a better understanding, an informal explanation of the main steps of the proof of Theorem 1. A full mathematical proof is given in the supplement.
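For concreteness, a scenario such as (MM Gauss) above can be generated in a few lines. This is our own minimal sketch, using only the segment means, change-point locations, and unit-variance Gaussian noise stated in the list; the function name and the seed are ours.

```python
import random

def simulate_mm_gauss(seed=1):
    """Simulate the (MM Gauss) scenario: length 400, change-points at
    100, 200, 300; segment distributions N(0,1), N(1,1), N(-0.2,1), N(-1.3,1)."""
    rng = random.Random(seed)
    means = [0.0, 1.0, -0.2, -1.3]
    # 100 observations per segment, standard deviation 1 throughout
    return [rng.gauss(mu, 1.0) for mu in means for _ in range(100)]

x = simulate_mm_gauss()
print(len(x))  # 400
```

The other mean-change scenarios differ only in the means, segment lengths, noise distribution, or a final exponential transformation.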
For any $u \in \mathbb{R}$, we denote $B_t(u) := \mathbb{1}\{X_t \le u\}$, and the notation for $\tilde{B}^b_{s,e}(u)$ is as in (5), while, for $F_t(u) := P(X_t \le u)$,
$$\tilde{F}^b_{s,e}(u) = \sqrt{\frac{e-b}{(b-s+1)(e-s+1)}}\sum_{t=s}^{b} F_t(u) - \sqrt{\frac{b-s+1}{(e-b)(e-s+1)}}\sum_{t=b+1}^{e} F_t(u). \quad (16)$$
In the proof, we derive results for $F_t(u)$. However, the consistency proof is concerned with the estimated number and locations of the change-points in the processes $\{B_t(X_i)\}_{t=1,\dots,T;\ i=1,\dots,T}$, and by extension in the original data sequence $X_1, X_2, \dots, X_T$. Therefore, in order to be able to deduce consistency related to $X_t$ from our $F_t(u)$-reliant proof, we first need to show that for a fixed interval $[s, e)$, where $1 \le s < e \le T$, and for all $b \in [s, e)$, the observed quantity $\tilde{B}^b_{s,e}(u)$ given in (5) is uniformly close to $\tilde{F}^b_{s,e}(u)$ for any $u \in \mathbb{R}$; this is achieved in Lemma 1 provided in the supplementary material. We only cover the case where there is at most one true change-point, namely $r_j$, in the interval $[s, e)$, because our methodology, by construction, prevents the examination of intervals that include more than one change-point. Then, we proceed in the proof of Theorem 1 by showing that as
the NPID algorithm proceeds, each change-point will get isolated in an interval where its detection will occur with high probability; for this result we again use Lemma 1. It suffices to restrict our proof to a single change-point detection framework within an interval $[s_j, e_j)$ which contains only the change-point $r_j$ and no other change-point, and in which the maximum CUSUM value for a data point exceeds the threshold $\zeta_T$. For each such interval, and for $\hat{r}_j$ being the point with the maximum contrast function value (greater than $\zeta_T$) within $[s_j, e_j)$, we will prove that
$$(\hat{r}_j - r_j)/\gamma^2_{j,T} = O_p(1), \quad \forall j = 1, \dots, N,$$
where $\gamma_{j,T}$ is as in Assumption (A1); to achieve this we employ Lemmas 2 and 3, as well as results from [8]. Because upon detection NPID proceeds from the endpoint of the interval in which the detection took place, we also show that with probability one there is no change-point among the bypassed points (between the detection and the new starting point). Furthermore, after each detection, the new starting point is at a place that allows the detection of the next change-point. The last step of the proof is to show that, after detecting all the change-points, NPID will, with high probability, stop, since there are no more change-points in the remaining interval $[s, e)$.

References

[1] A. Anastasiou, I. Cribben, and P. Fryzlewicz. Cross-covariance isolate detect: a new change-point method for estimating dynamic functional connectivity. Medical Image Analysis, 75:102252, 2022.

[2] A. Anastasiou and P. Fryzlewicz. Detecting multiple generalized change-points by isolating single ones. Metrika, 85:141–174, 2022.

[3] A. Anastasiou and A. Papanastasiou. Generalized multiple change-point detection in the structure of multivariate, possibly high-dimensional, data sequences. Statistics and Computing, 33:94, 2023.

[4] I. E. Auger and C. E. Lawrence.
Algorithms for the Optimal Identification of Segment Neighborhoods. Bulletin of Mathematical Biology, 51:39–54, 1989.

[5] E. Carlstein. Nonparametric change-point estimation. The Annals of Statistics, 16:188–197, 1988.

[6] M. Csörgő and L. Horváth. Invariance principles for changepoint problems. Journal of Multivariate Analysis, 27:151–168, 1988.

[7] B. S. Darkhovskii. A nonparametric method for the a posteriori detection of the "disorder" time of a sequence of independent random variables. Theory Probab. Appl., 21:178–183, 1976.

[8] L. Dümbgen. The asymptotic behavior of some nonparametric change-point estimators. The Annals of Statistics, 19:1471–1495, 1991.

[9] P. Fryzlewicz. Wild binary segmentation for multiple change-point detection. The Annals of Statistics, 42:2243–2281, 2014.

[10] K. Haynes, P. Fearnhead, and I. A. Eckley. A computationally efficient nonparametric approach for changepoint detection. Statistics and Computing, 27:1293–1305, 2017.

[11] H. Hazrati-Marangaloo and R. Noorossana. A nonparametric change detection approach in social networks. Quality and Reliability Engineering International, 37:2916–2935, 2021.

[12] Y. Kawahara and M. Sugiyama. Sequential Change-Point Detection Based on Direct Density-Ratio Estimation. Statistical Analysis and Data Mining, 5:114–127, 2011.

[13] R. Killick, P. Fearnhead, and I. A. Eckley. Optimal Detection of Changepoints With a Linear Computational Cost. Journal of the American Statistical Association, 107:1590–1598,
2012.

[14] C. B. Lee. Nonparametric multiple change-point estimators. Statistics & Probability Letters, 27:295–304, 1996.

[15] A. Lung-Yut-Fong, C. Lévy-Leduc, and O. Cappé. Homogeneity and change-point detection tests for multivariate data using rank statistics. Journal de la SFdS, 156:133–162, 2015.

[16] D. S. Matteson and N. A. James. ecp: An R Package for Nonparametric Multiple Change Point Analysis of Multivariate Data. Journal of Statistical Software, 62(7):1–25, 2014.

[17] D. S. Matteson and N. A. James. A Nonparametric Approach for Multiple Change Point Analysis of Multivariate Data. Journal of the American Statistical Association, 109:334–345, 2014.

[18] O. H. M. Padilla, Y. Yu, D. Wang, and A. Rinaldo. Optimal nonparametric change point analysis. Electronic Journal of Statistics, 15(1):1154–1201, 2021.

[19] A. N. Pettitt. A Non-Parametric Approach to the Change-Point Problem. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28:126–135, 1979.

[20] G. J. Ross. Parametric and Nonparametric Sequential Change Detection in R: The cpm Package. Journal of Statistical Software, 66(3):1–20, 2015.

[21] N. Shvetsov, N. Buzun, and D. Dylov. Unsupervised non-parametric change point detection in electrocardiography. SSDBM 2020: 32nd International Conference on Scientific and Statistical Database Management, (19):1–4, 2020.

[22] G. J. Székely and M. L. Rizzo. Hierarchical Clustering via Joint Between-Within Distances: Extending Ward's Minimum Variance Method. Journal of Classification, 22:151–183, 2005.

[23] L. J. Vanegas, M. Behr, and A. Munk. Multiscale Quantile Segmentation. Journal of the American Statistical Association, pages 1–14, 2021.

[24] L. Vostrikova. Detecting "disorder" in multidimensional random processes. Soviet Mathematics: Doklady, 24:55–59, 1981.

[25] C. Zhou, R. van Nooijen, A. Kolechkina, and M. Hrachowitz. Comparative analysis of nonparametric change-point detectors commonly used in hydrology.
Hydrological Sciences Journal, 64:1690–1710, 2019.

[26] C. Zou, G. Yin, L. Feng, and Z. Wang. Nonparametric maximum likelihood approach to multiple change-point problems. The Annals of Statistics, 42:970–1002, 2014.

Supplementary Material for "Non-parametric multiple change-point detection"

Andreas Anastasiou¹, Piotr Fryzlewicz²
¹Department of Mathematics and Statistics, University of Cyprus
²Department of Statistics, London School of Economics

Abstract: In this supplement, we provide tables related to the Type I error obtained when the default threshold values were used for the detection process; more details are provided in Section 3.2 of the main paper. In addition, we provide the details for the different types of changes used in the simulation study of Section 3.2 that lead to an appropriate choice of the threshold constant. Furthermore, we give the step-by-step proof of Theorem 1, which shows the consistency of our method in accurately estimating the true number and the locations of the change-points.

S1 Tables related to Section 3.2 of the main paper

In Section 3.2 of the main paper, a large-scale simulation study was carried out to decide the best values for the threshold constant. The best behaviour occurred when, approximately, $C = 0.6$ and $C = 0.9$ for the mean-dominant norms $L_2$ and $L_\infty$, respectively. In an attempt to measure the Type I error obtained from this choice of threshold constants under the scenario of no change-points, we ran 100 replications for twenty
different scenarios of no change-points, covering scenarios from both continuous and discrete distributions. The models used are given in Section 3.2 of the main paper. Tables S1 and S2 below present the frequency distribution of $\hat{N} - N$ for all the above scenarios and for both the $L_\infty$ and $L_2$ mean-dominant norms, when their respective default threshold constants are used.

Table S1: Distribution of $\hat{N} - N$ over 100 simulated data sequences from the Gaussian or Cauchy models.

                              $\hat{N} - N$
Model      T    Method     0     1    >=2
Gaussian   30   L∞        97     3     0
                L2        98     2     0
           75   L∞        98     2     0
                L2       100     0     0
           200  L∞        99     1     0
                L2       100     0     0
           500  L∞        95     5     0
                L2       100     0     0
Cauchy     30   L∞        93     7     0
                L2        99     1     0
           75   L∞        97     3     0
                L2       100     0     0
           200  L∞        92     5     3
                L2        99     0     1
           500  L∞        97     3     0
                L2       100     0     0

Table S2: Distribution of $\hat{N} - N$ over 100 simulated data sequences from the Poisson model with different rate values.

                                  $\hat{N} - N$
Model          T    Method     0     1    >=2
Poisson(0.3)   30   L∞       100     0     0
                    L2        88     8     4
               75   L∞        99     0     1
                    L2        98     1     1
               200  L∞       100     0     0
                    L2       100     0     0
               500  L∞       100     0     0
                    L2       100     0     0
Poisson(3)     30   L∞       100     0     0
                    L2       100     0     0
               75   L∞        99     0     1
                    L2       100     0     0
               200  L∞        99     1     0
                    L2       100     0     0
               500  L∞       100     0     0
                    L2       100     0     0
Poisson(30)    30   L∞        94     5     1
                    L2        99     1     0
               75   L∞        97     3     0
                    L2       100     0     0
               200  L∞        98     2     0
                    L2       100     0     0
               500  L∞        96     4     0
                    L2       100     0     0

S2 Details regarding the simulation study in Section 3.2

In this section of the supplement, we provide all the details regarding the different types of changes ((M), (V), and (D)) used in the simulation study, as explained in Section 3.2 of the main paper, that lead to an appropriate choice of the threshold constant. If type (M) or (V) was chosen, then the change in the mean or variance, respectively, had a magnitude which followed the normal distribution with mean zero and variance $\sigma^2 \in \{1, 3, 5\}$. If type (D) was chosen, then a distribution from a list of distributions (Normal, Student-t, Uniform, and Poisson) was chosen. The first two moments remained unchanged in the distributional changes.
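The frequency distributions reported in Tables S1 and S2 amount to tabulating $\hat{N} - N$ over replications and binning the values as 0, 1, and >=2. A small sketch of this bookkeeping (our own code; the replication results below are hypothetical, not taken from the study):

```python
from collections import Counter

def nhat_minus_n_distribution(n_hat_list, true_n=0):
    """Tabulate the frequency of N_hat - N over simulation replications,
    binning as 0, 1, and >=2 (as in Tables S1 and S2)."""
    diffs = [nh - true_n for nh in n_hat_list]
    c = Counter(min(d, 2) for d in diffs)  # collapse everything >=2 into one bin
    return {"0": c[0], "1": c[1], ">=2": c[2]}

# Hypothetical replication results: 97 runs detect no spurious change-point,
# 3 runs detect one.
print(nhat_minus_n_distribution([0] * 97 + [1] * 3))  # {'0': 97, '1': 3, '>=2': 0}
```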
We always started with the first segment (the part of the data sequence before the first change-point) being from the normal distribution. For each value of $N_\alpha$ and $T$ we generated 1000 replicates and estimated the number of change-points using our NPID method with threshold $\zeta_T$ as in Equation (11) of the main paper, for a great variety of constant values $C$.

S3 Proof of Theorem 1

In this section, we provide a thorough proof of the main theorem in our paper. We first need to introduce some further notation, as well as state and prove some important lemmas. As mentioned in Appendix B of the main paper, we will mainly be working within an interval $[s, e)$ with at most one change-point. From now on, we denote
$$J_{s,e} := \left\{ \frac{1}{e-s+1}, \frac{2}{e-s+1}, \dots, \frac{e-s}{e-s+1} \right\}, \quad (S1)$$
and let $b^* \in J_{s,e}$. We denote
$$\hat{h}^B_{b^*}(u) = \frac{1}{b^*(e-s+1)} \sum_{t=1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1} \le u\}, \qquad \hat{h}^A_{b^*}(u) = \frac{1}{(1-b^*)(e-s+1)} \sum_{t=b^*(e-s+1)+1}^{e-s+1} \mathbb{1}\{X_{t+s-1} \le u\}. \quad (S2)$$
The above functions $\hat{h}^B_{b^*}(u)$ and $\hat{h}^A_{b^*}(u)$ are used to calculate the empirical cumulative distribution function at $u \in \mathbb{R}$ up to and after the point $b^*$, respectively. In addition, for $r_0 = 0$ and $r_{N+1} = T$, then with $r_j \in [s, e)$ and $r^*_j = \frac{r_j - s + 1}{e - s + 1}$, we have the notation
$$h^B_{b^*}(u) := \mathbb{1}\{b^* \le r^*_j\} F_{r_j}(u) + \mathbb{1}\{b^* > r^*_j\} \frac{1}{b^*}\left( r^*_j F_{r_j}(u) + (b^* - r^*_j) F_{r_j+1}(u) \right),$$
$$h^A_{b^*}(u) := \mathbb{1}\{b^* \le r^*_j\} \frac{1}{1-b^*}\left( (r^*_j - b^*) F_{r_j}(u) + (1 - r^*_j) F_{r_j+1}(u) \right) + \mathbb{1}\{b^* > r^*_j\} F_{r_j+1}(u). \quad (S3)$$
The expressions in (S3) are the unknown mixture distributions. Note that if we are under the case of $r_{j-1} < s < e \le r_j$, then it is straightforward that both $h^B_{b^*}(u)$ and $h^A_{b^*}(u)$ are equal to $F_{r_j}(u)$. We denote
$$\delta^{b^*}_{s,e}(u) := h^A_{b^*}(u) - h^B_{b^*}(u) = \left( \frac{1 - r^*_j}{1 - b^*} \mathbb{1}\{b^* \le r^*_j\} + \frac{r^*_j}{b^*} \mathbb{1}\{b^* > r^*_j\} \right) \Delta_j(u), \quad (S4)$$
with $\Delta_j(u)$ as in Equation (9) of the main paper. In the same way,
$$d^{b^*}_{s,e}(u) := \hat{h}^A_{b^*}(u) - \hat{h}^B_{b^*}(u) = \frac{1}{(1-b^*)(e-s+1)} \sum_{t=b^*(e-s+1)+1}^{e-s+1} \mathbb{1}\{X_{t+s-1} \le u\} - \frac{1}{b^*(e-s+1)} \sum_{t=1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1} \le u\}. \quad (S5)$$
We notice that if $b^*$ is near 0 or 1, then both (S4) and (S5) behave badly. Therefore, we introduce the weights $w(b^*) = \sqrt{b^*(1-b^*)}$ and, for $J_{s,e}$ as in (S1), we consider the measures
$$D^{b^*}_{s,e}(u) = w(b^*)\, d^{b^*}_{s,e}(u), \quad b^* \in J_{s,e}, \quad (S6)$$
which estimate
$$\Delta^{b^*}_{s,e}(u) = w(b^*)\, \delta^{b^*}_{s,e}(u) = \rho(b^*)\, \Delta_j(u), \quad (S7)$$
where
$$\rho(b^*) = \frac{(1 - r^*_j)\sqrt{b^*}}{\sqrt{1 - b^*}} \mathbb{1}\{b^* \le r^*_j\} + \frac{r^*_j \sqrt{1 - b^*}}{\sqrt{b^*}} \mathbb{1}\{b^* > r^*_j\}. \quad (S8)$$
Note that the CUSUM expressions $\tilde{B}^b_{s,e}(u)$ and $\tilde{F}^b_{s,e}(u)$ in Equations (5) and (16) of the main paper are related to $D^{b^*}_{s,e}(u)$ and $\Delta^{b^*}_{s,e}(u)$, respectively. For $b = b^*(e-s+1) + s - 1$, simple steps yield
$$D^{b^*}_{s,e}(u) = \frac{\sqrt{(b-s+1)(e-b)}}{e-s+1} \left( \frac{1}{e-b} \sum_{t=b-s+2}^{e-s+1} \mathbb{1}\{X_{t+s-1} \le u\} - \frac{1}{b-s+1} \sum_{t=1}^{b-s+1} \mathbb{1}\{X_{t+s-1} \le u\} \right) = -\frac{1}{\sqrt{e-s+1}} \tilde{B}^b_{s,e}(u). \quad (S9)$$
Similar steps lead to $\Delta^{b^*}_{s,e}(u) = -\frac{1}{\sqrt{e-s+1}} \tilde{F}^b_{s,e}(u)$. From now on, we also denote
$$H^B_{b^*}(u) = h^B_{b^*}(u) - \hat{h}^B_{b^*}(u), \qquad H^A_{b^*}(u) = h^A_{b^*}(u) - \hat{h}^A_{b^*}(u), \qquad e_{b^*}(u) = \left| H^B_{b^*}(u) \right| + \left| H^A_{b^*}(u) \right|. \quad (S10)$$
A series of lemmas now follows, with results that will be helpful in the proof of Theorem 1.
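The identity (S9), connecting the weighted empirical difference $D^{b^*}_{s,e}(u)$ with the CUSUM quantity $\tilde{B}^b_{s,e}(u)$, can be checked numerically from the definitions. The sketch below is our own code (using 1-indexed time points, as in the paper); $\tilde{B}^b_{s,e}(u)$ is computed as in (16) with $F_t$ replaced by the indicators $\mathbb{1}\{X_t \le u\}$.

```python
import math

def indicator_sum(X, lo, hi, u):
    """Sum of 1{X_t <= u} for t = lo, ..., hi (1-indexed, inclusive)."""
    return sum(1 for t in range(lo, hi + 1) if X[t - 1] <= u)

def D_weighted(X, s, e, b, u):
    """D^{b*}_{s,e}(u) = w(b*) * (h_A_hat - h_B_hat), with b* = (b-s+1)/(e-s+1)."""
    l = e - s + 1
    bstar = (b - s + 1) / l
    w = math.sqrt(bstar * (1 - bstar))
    hB = indicator_sum(X, s, b, u) / (b - s + 1)   # empirical cdf up to b
    hA = indicator_sum(X, b + 1, e, u) / (e - b)   # empirical cdf after b
    return w * (hA - hB)

def tilde_B(X, s, e, b, u):
    """CUSUM contrast, as in (16) with F_t replaced by indicators."""
    l = e - s + 1
    return (math.sqrt((e - b) / ((b - s + 1) * l)) * indicator_sum(X, s, b, u)
            - math.sqrt((b - s + 1) / ((e - b) * l)) * indicator_sum(X, b + 1, e, u))

X = [0.3, -1.2, 0.5, 2.1, -0.4, 1.7, 0.9, -0.8]
s, e, b, u = 1, 8, 3, 0.5
# The identity (S9): D = -tilde_B / sqrt(e - s + 1)
assert abs(D_weighted(X, s, e, b, u)
           + tilde_B(X, s, e, b, u) / math.sqrt(e - s + 1)) < 1e-12
```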
For any data sequence $\{X_t\}_{t=1,2,\dots,T}$, let us first denote
$$\tilde{B}^b_{s,e} := \left( \tilde{B}^b_{s,e}(X_1), \dots, \tilde{B}^b_{s,e}(X_T) \right), \qquad \tilde{F}^b_{s,e} := \left( \tilde{F}^b_{s,e}(X_1), \dots, \tilde{F}^b_{s,e}(X_T) \right), \quad (S11)$$
$$D^b_{s,e} := \left( D^b_{s,e}(X_1), \dots, D^b_{s,e}(X_T) \right), \qquad \Delta^b_{s,e} := \left( \Delta^b_{s,e}(X_1), \dots, \Delta^b_{s,e}(X_T) \right), \quad (S12)$$
where $1 \le s \le b < e \le T$. Lemma 1 below shows that the quantity $L\left(\left|\tilde{B}^b_{s,e}\right|\right)$ is uniformly close to $L\left(\left|\tilde{F}^b_{s,e}\right|\right)$.

Lemma 1. For any data sequence $\{X_t\}_{t=1,2,\dots,T}$ and for any interval $[s, e)$ which includes at most one change-point (denoted by $r_j$), we have that both $P(A_T) \ge 1 - \frac{12}{T}$ and $P(A^*_T) \ge 1 - \frac{12}{T}$, where, for any mean-dominant norm $L(\cdot)$,
$$A_T = \left\{ \max_{b:\, s \le b < e} \left| L\left(\left|\tilde{B}^b_{s,e}\right|\right) - L\left(\left|\tilde{F}^b_{s,e}\right|\right) \right| \le 4\sqrt{\log T} \right\}, \qquad A^*_T = \left\{ \max_{b:\, s \le b < e} L\left(\left|\tilde{B}^b_{s,e} - \tilde{F}^b_{s,e}\right|\right) \le 4\sqrt{\log T} \right\}. \quad (S13)$$
The notations $\tilde{B}^b_{s,e}$ and $\tilde{F}^b_{s,e}$ are as in (S11).

Proof. For any mean-dominant norm $L(\cdot)$, using the mean dominance property as given in p. 190 of [2], it holds that
$$0 \le L_1(x) \le L(x) \le L_\infty(x), \quad \forall x \in (\mathbb{R}^d)^+,$$
where $d \in \mathbb{Z}^+$. Furthermore,
$$\left| L\left(\left|\tilde{B}^b_{s,e}\right|\right) - L\left(\left|\tilde{F}^b_{s,e}\right|\right) \right|$$
$$\le L\left(\left|\tilde{B}^b_{s,e} - \tilde{F}^b_{s,e}\right|\right).$$
Therefore, it is straightforward to see that, in order to prove Lemma 1, it suffices to show that $P(A^*_T) \ge 1 - \frac{12}{T}$, with $A^*_T$ as in (S13). In order to show this result, we use the notation in (S10), and after simple steps we can show that, for any $u \in \mathbb{R}$,
$$\left|d^{b^*}_{s,e}(u)\right| \le \left|\hat{h}^A_{b^*}(u) - h^A_{b^*}(u)\right| + \left|h^A_{b^*}(u) - h^B_{b^*}(u)\right| + \left|h^B_{b^*}(u) - \hat{h}^B_{b^*}(u)\right| = \left|\delta^{b^*}_{s,e}(u)\right| + e_{b^*}(u).$$
Employing now the notation in (S11), it is easy to see that
$$L\left(\left|D^{b^*}_{s,e} - \Delta^{b^*}_{s,e}\right|\right) \le \sup_{u\in\mathbb{R}}\left|D^{b^*}_{s,e}(u) - \Delta^{b^*}_{s,e}(u)\right| \le \sqrt{b^*(1-b^*)}\,\sup_{u\in\mathbb{R}} e_{b^*}(u) \le \sqrt{b^*(1-b^*)}\left( \sup_{u\in\mathbb{R}}\left|H^B_{b^*}(u)\right| + \sup_{u\in\mathbb{R}}\left|H^A_{b^*}(u)\right| \right). \quad (S14)$$
The proof will be split into two cases, depending on whether there exists one true change-point or no change-point in $[s, e)$.

Case 1: $r_{j-1} < s \le r_j < e \le r_{j+1}$. By the definition of $H^B_{b^*}(u)$ in (S10) and for $F_t(u)$ as in Equation (9) of the main paper, we have that
$$H^B_{b^*}(u) = \mathbb{1}\{b \le r_j\} F_{r_j}(u) + \mathbb{1}\{b > r_j\} \frac{r^*_j F_{r_j}(u) + (b^* - r^*_j) F_{r_j+1}(u)}{b^*} - \frac{1}{b^*(e-s+1)}\sum_{t=1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\}.$$
Therefore,
$$\sup_{u\in\mathbb{R}}\left|H^B_{b^*}(u)\right| \le \mathbb{1}\{b \le r_j\}\sup_{u\in\mathbb{R}}\left| \frac{1}{b^*(e-s+1)}\sum_{t=1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - F_{r_j}(u) \right| \quad (S15)$$
$$+ \mathbb{1}\{b > r_j\}\left( \sup_{u}\left| \frac{1}{b^*(e-s+1)}\sum_{t=1}^{r^*_j(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - \frac{r^*_j}{b^*}F_{r_j}(u) \right| + \sup_{u}\left| \frac{1}{b^*(e-s+1)}\sum_{t=r^*_j(e-s+1)+1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - \frac{b^*-r^*_j}{b^*}F_{r_j+1}(u) \right| \right). \quad (S16)$$
In the same way, for $H^A_{b^*}(u)$ as in (S10), we have that
$$\sup_{u\in\mathbb{R}}\left|H^A_{b^*}(u)\right| \le \mathbb{1}\{b \le r_j\}\left( \sup_{u}\left| \frac{1}{(1-b^*)(e-s+1)}\sum_{t=b^*(e-s+1)+1}^{r^*_j(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - \frac{r^*_j-b^*}{1-b^*}F_{r_j}(u) \right| + \sup_{u}\left| \frac{1}{(1-b^*)(e-s+1)}\sum_{t=r^*_j(e-s+1)+1}^{e-s+1} \mathbb{1}\{X_{t+s-1}\le u\} - \frac{1-r^*_j}{1-b^*}F_{r_j+1}(u) \right| \right) \quad (S17)$$
$$+ \mathbb{1}\{b > r_j\}\sup_{u\in\mathbb{R}}\left| \frac{1}{(1-b^*)(e-s+1)}\sum_{t=b^*(e-s+1)+1}^{e-s+1} \mathbb{1}\{X_{t+s-1}\le u\} - F_{r_j+1}(u) \right|. \quad (S18)$$
Using the results in (S14), (S15), (S16), (S17), and (S18) yields
$$L\left(\left|D^{b^*}_{s,e} - \Delta^{b^*}_{s,e}\right|\right) \le \sqrt{b^*(1-b^*)}\,\big( (S15) + (S16) + (S17) + (S18) \big).$$
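The probability bounds that follow rest on the Dvoretzky–Kiefer–Wolfowitz (DKW) inequality, which controls the sup-distance between an empirical CDF and its population counterpart: $P(\sup_u |\hat{F}_n(u) - F(u)| > \epsilon) \le 2e^{-2n\epsilon^2}$. A small self-contained illustration (our own code, not from the paper), computing the sup-deviation for a Uniform(0,1) sample:

```python
import math
import random

def ecdf_sup_deviation(sample):
    """Sup-distance between the empirical CDF of a Uniform(0,1) sample and the
    true CDF F(u) = u, evaluated at the jump points of the empirical CDF."""
    xs = sorted(sample)
    n = len(xs)
    # At the i-th order statistic the empirical CDF jumps from (i-1)/n to i/n.
    return max(max(abs(i / n - x), abs((i - 1) / n - x))
               for i, x in enumerate(xs, start=1))

def dkw_bound(n, eps):
    """DKW upper bound on P(sup-deviation > eps)."""
    return 2 * math.exp(-2 * n * eps ** 2)

rng = random.Random(0)
dev = ecdf_sup_deviation([rng.random() for _ in range(1000)])
print(0.0 <= dev <= 1.0, dkw_bound(1000, 0.1) < 0.001)  # True True
```

The exponents in (S22)-(S24) below arise from exactly this bound, applied to the partial empirical CDFs over the relevant sub-segments.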
This means that, for $\epsilon_1 > 0$ and with $J_{s,e}$ as in (S1),
$$P\left( \max_{b^* \in J_{s,e}} L\left(\left|D^{b^*}_{s,e} - \Delta^{b^*}_{s,e}\right|\right) > \epsilon_1 \right) \le P\left( \max_{b^* \in J_{s,e}} \sqrt{b^*(1-b^*)}\,\big((S15)+(S16)+(S17)+(S18)\big) > \epsilon_1 \right)$$
$$\le P\left( \max_{b^* \in J_{s,e}} \sqrt{b^*(1-b^*)}\,\big((S15)+(S16)\big) > \frac{\epsilon_1}{2} \right) \quad (S19)$$
$$+ P\left( \max_{b^* \in J_{s,e}} \sqrt{b^*(1-b^*)}\,\big((S17)+(S18)\big) > \frac{\epsilon_1}{2} \right). \quad (S20)$$
We will now show how to bound the probability in (S19); in the same way one can bound the probability in (S20). For $J_{s,e}$ as in (S1), using the Bonferroni inequality we obtain that
$$P\left( \max_{b^* \in J_{s,e}} \sqrt{b^*(1-b^*)}\,\big((S15)+(S16)\big) > \frac{\epsilon_1}{2} \right) \le \sum_{b^* \in J_{s,e}} P\left( \sqrt{b^*(1-b^*)}\,\big((S15)+(S16)\big) > \frac{\epsilon_1}{2} \right)$$
$$\le \sum_{\substack{b^* \in J_{s,e}\\ b^* \le r^*_j}} P\left( \sqrt{b^*(1-b^*)}\,\sup_{u}\left| \frac{1}{b^*(e-s+1)}\sum_{t=1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - F_{r_j}(u) \right| > \frac{\epsilon_1}{2} \right)$$
$$+ \sum_{\substack{b^* \in J_{s,e}\\ b^* > r^*_j}} \left[ P\left( \sqrt{b^*(1-b^*)}\,\sup_{u}\left| \frac{1}{b^*(e-s+1)}\sum_{t=1}^{r^*_j(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - \frac{r^*_j}{b^*}F_{r_j}(u) \right| > \frac{\epsilon_1}{4} \right) + P\left( \sqrt{b^*(1-b^*)}\,\sup_{u}\left| \frac{1}{b^*(e-s+1)}\sum_{t=r^*_j(e-s+1)+1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - \frac{b^*-r^*_j}{b^*}F_{r_j+1}(u) \right| > \frac{\epsilon_1}{4} \right) \right]. \quad (S21)$$
To bound the above probabilities, we employ the Dvoretzky–Kiefer–Wolfowitz inequality as expressed in [4]. Since $b^* \in (0,1)$, we have, for $b^* \le r^*_j$,
$$P\left( \sqrt{b^*(1-b^*)}\,\sup_{u}\left| \frac{1}{b^*(e-s+1)}\sum_{t=1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - F_{r_j}(u) \right| > \frac{\epsilon_1}{2} \right) \le P\left( \sup_{u}\left| \frac{1}{b^*(e-s+1)}\sum_{t=1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - F_{r_j}(u) \right| > \frac{\epsilon_1}{2\sqrt{b^*}} \right) \le 2\exp\left( -\frac{e-s+1}{2}\epsilon_1^2 \right). \quad (S22)$$
In the same way, for $b^* > r^*_j$,
$$P\left( \sqrt{b^*(1-b^*)}\,\sup_{u}\left| \frac{1}{b^*(e-s+1)}\sum_{t=1}^{r^*_j(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - \frac{r^*_j}{b^*}F_{r_j}(u) \right| > \frac{\epsilon_1}{4} \right) \le 2\exp\left( -\frac{e-s+1}{8}\epsilon_1^2 \right) \quad (S23)$$
and
$$P\left( \sqrt{b^*(1-b^*)}\,\sup_{u}\left| \frac{1}{b^*(e-s+1)}\sum_{t=r^*_j(e-s+1)+1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - \frac{b^*-r^*_j}{b^*}F_{r_j+1}(u) \right| > \frac{\epsilon_1}{4} \right) \le 2\exp\left( -\frac{e-s+1}{8}\epsilon_1^2 \right). \quad (S24)$$
The results in (S21), (S22), (S23), and (S24) lead to
$$(S19) \le 2(e-s+1)\left( \exp\left( -\frac{e-s+1}{2}\epsilon_1^2 \right) + 2\exp\left( -\frac{e-s+1}{8}\epsilon_1^2 \right) \right).$$
The same bound holds for (S20), which means that
$$P\left( \max_{b^* \in J_{s,e}} L\left(\left|D^{b^*}_{s,e} - \Delta^{b^*}_{s,e}\right|\right) > \epsilon_1 \right) \le 4(e-s+1)\left( \exp\left( -\frac{e-s+1}{2}\epsilon_1^2 \right) + 2\exp\left( -\frac{e-s+1}{8}\epsilon_1^2 \right) \right). \quad (S25)$$

Case 2: $r_{j-1} < s < e \le r_j$. Following the same process as in Case 1 and using the notation in (S10), in this scenario we have that
$$\sup_{u\in\mathbb{R}}\left|H^B_{b^*}(u)\right| = \sup_{u\in\mathbb{R}}\left| \frac{1}{b^*(e-s+1)}\sum_{t=1}^{b^*(e-s+1)} \mathbb{1}\{X_{t+s-1}\le u\} - F_{r_j}(u) \right|, \quad (S26)$$
$$\sup_{u\in\mathbb{R}}\left|H^A_{b^*}(u)\right| = \sup_{u\in\mathbb{R}}\left| \frac{1}{(1-b^*)(e-s+1)}\sum_{t=b^*(e-s+1)+1}^{e-s+1} \mathbb{1}\{X_{t+s-1}\le u\} - F_{r_j}(u) \right|. \quad (S27)$$
Using again the Dvoretzky–Kiefer–Wolfowitz inequality, simple calculations yield
$$P\left( \max_{b^* \in J_{s,e}} L\left(\left|D^{b^*}_{s,e} - \Delta^{b^*}_{s,e}\right|\right) > \epsilon_1 \right) \le P\left( \max_{b^* \in J_{s,e}} \sup_{u}\left| D^{b^*}_{s,e}(u) - \Delta^{b^*}_{s,e}(u) \right| > \epsilon_1 \right) \le P\left( \max_{b^* \in J_{s,e}} \sqrt{b^*(1-b^*)}\,\big((S26)+(S27)\big) > \epsilon_1 \right)$$
$$\le P\left( \max_{b^* \in J_{s,e}} \sqrt{b^*(1-b^*)}\,(S26) > \frac{\epsilon_1}{2} \right) + P\left( \max_{b^* \in J_{s,e}} \sqrt{b^*(1-b^*)}\,(S27) > \frac{\epsilon_1}{2} \right) \le 4\exp\left( -\frac{e-s+1}{2}\epsilon_1^2 \right),$$
which is less than the upper bound for Case 1 given in (S25). Therefore, using the result in (S25), we have that, for $\epsilon^*_1 = \epsilon_1\sqrt{e-s+1}$, (S9) leads to
$$P\left( \max_{b^* \in J_{s,e}} L\left(\left|D^{b^*}_{s,e} - \Delta^{b^*}_{s,e}\right|\right) > \epsilon_1 \right) = P\left( \max_{b:\, s\le b<e} L\left(\left|\tilde{B}^b_{s,e} - \tilde{F}^b_{s,e}\right|\right) > \epsilon^*_1 \right) \le 4(e-s+1)\left( \exp\left( -\frac{(\epsilon^*_1)^2}{2} \right) + 2\exp\left( -\frac{(\epsilon^*_1)^2}{8} \right) \right) \le 4T\left( \exp\left( -\frac{(\epsilon^*_1)^2}{2} \right) + 2\exp\left( -\frac{(\epsilon^*_1)^2}{8} \right) \right). \quad (S28)$$
Through the result in (S28), and since, $\forall \epsilon > 0$,
$$P\left( \max_{b:\, s\le b<e} \left| L\left(\left|\tilde{B}^b_{s,e}\right|\right) - L\left(\left|\tilde{F}^b_{s,e}\right|\right) \right| > \epsilon \right) \le P\left( \max_{b:\, s\le b<e} L\left(\left|\tilde{B}^b_{s,e} - \tilde{F}^b_{s,e}\right|\right) > \epsilon \right),$$
it is straightforward that, for $A_T$ and $A^*_T$ as in (S13),
$$P(A^c_T) \le P\big((A^*_T)^c\big) \le 4T\left( \exp\{-8\log T\} + 2\exp\{-2\log T\} \right) = \frac{4}{T}\left( 2 + \frac{1}{T^6} \right) \le \frac{12}{T},$$
which completes the proof.

Lemma 2. For any interval $[s,e)$ that has only one true change-point, namely $r_j$, and for any mean-dominant norm $L(\cdot)$, we have that
$$L\left(\left|\tilde{B}^{r_j}_{s,e} - \tilde{F}^{r_j}_{s,e}\right|\right) = O_p(1). \quad (S29)$$

Proof.
Using the definitions of $\tilde{B}^{r_j}_{s,e}(u)$ and $\tilde{F}^{r_j}_{s,e}(u)$, we have, for $l = e-s+1$, that
$$L\left(\left|\tilde{B}^{r_j}_{s,e} - \tilde{F}^{r_j}_{s,e}\right|\right) \le \sup_{u\in\mathbb{R}}\left| \tilde{B}^{r_j}_{s,e}(u) - \tilde{F}^{r_j}_{s,e}(u) \right| = \sup_{u}\left| \sqrt{\frac{e-r_j}{l(r_j-s+1)}}\left( \sum_{t=s}^{r_j} \mathbb{1}\{X_t\le u\} - (r_j-s+1)F_{r_j}(u) \right) - \sqrt{\frac{r_j-s+1}{l(e-r_j)}}\left( \sum_{t=r_j+1}^{e} \mathbb{1}\{X_t\le u\} - (e-r_j)F_{r_j+1}(u) \right) \right|,$$
and therefore, for any $\epsilon > 0$,
$$P\left( L\left(\left|\tilde{B}^{r_j}_{s,e} - \tilde{F}^{r_j}_{s,e}\right|\right) > \epsilon \right) \le P\left( \sup_{u} \sqrt{\frac{e-r_j}{l(r_j-s+1)}}\left| \sum_{t=s}^{r_j} \mathbb{1}\{X_t\le u\} - (r_j-s+1)F_{r_j}(u) \right| > \frac{\epsilon}{2} \right) + P\left( \sup_{u} \sqrt{\frac{r_j-s+1}{l(e-r_j)}}\left| \sum_{t=r_j+1}^{e} \mathbb{1}\{X_t\le u\} - (e-r_j)F_{r_j+1}(u) \right| > \frac{\epsilon}{2} \right)$$
$$= P\left( \sup_{u}\left| \frac{1}{r_j-s+1}\sum_{t=s}^{r_j} \mathbb{1}\{X_t\le u\} - F_{r_j}(u) \right| > \frac{\epsilon\sqrt{l}}{2\sqrt{(e-r_j)(r_j-s+1)}} \right) + P\left( \sup_{u}\left| \frac{1}{e-r_j}\sum_{t=r_j+1}^{e} \mathbb{1}\{X_t\le u\} - F_{r_j+1}(u) \right| > \frac{\epsilon\sqrt{l}}{2\sqrt{(e-r_j)(r_j-s+1)}} \right)$$
$$\le 2\left( \exp\left( -\frac{\epsilon^2 l}{2(e-r_j)} \right) + \exp\left( -\frac{\epsilon^2 l}{2(r_j-s+1)} \right) \right),$$
using the Dvoretzky–Kiefer–Wolfowitz inequality. Therefore, for any positive constant $K$ that does not depend on $T$, we have that
$$P\left( L\left(\left|\tilde{B}^{r_j}_{s,e} - \tilde{F}^{r_j}_{s,e}\right|\right) > \frac{K\sqrt{\max\{e-r_j,\, r_j-s+1\}}}{\sqrt{l}} \right) \le 2\left( \exp\left( -\frac{K^2\max\{e-r_j, r_j-s+1\}}{2(e-r_j)} \right) + \exp\left( -\frac{K^2\max\{e-r_j, r_j-s+1\}}{2(r_j-s+1)} \right) \right) \le 4\exp\left( -\frac{K^2}{2} \right),$$
which leads to the result in (S29).

Lemma 3. For any interval $[s,e)$ that has only one true change-point, namely $r_j$, we have that for any arbitrary $b^* \in J_{s,e}$, with $J_{s,e}$ defined in (S1), there is a constant $C > 0$ such that, for $r^*_j = \frac{r_j-s+1}{e-s+1}$,
$$\rho(r^*_j) - \rho(b^*) \ge C\left| b^* - r^*_j \right|, \quad (S30)$$
where $\rho(b^*)$ is given in (S8).

Proof. We split the proof into two cases, depending on the position of $b^*$ with respect to the change-point $r^*_j$.

Case 1: $b^* \le r^*_j$. We have that
$$\rho(r^*_j) - \rho(b^*) = \sqrt{r^*_j(1-r^*_j)} - \sqrt{\frac{b^*}{1-b^*}}\,(1-r^*_j) \ge \sqrt{1-r^*_j}\left( \sqrt{r^*_j} - \sqrt{b^*} \right) \ge \frac{\sqrt{1-r^*_j}}{2}\left( r^*_j - b^* \right),$$
due to the fact that $\sqrt{r^*_j} - \sqrt{b^*} = \frac{r^*_j - b^*}{\sqrt{r^*_j} + \sqrt{b^*}} \ge \frac{r^*_j - b^*}{2}$.

Case 2: $b^* > r^*_j$. Following a similar approach as in Case 1, we have that
$$\rho(r^*_j) - \rho(b^*) \ge \frac{\sqrt{r^*_j}}{2}\left( b^* - r^*_j \right).$$
Combining the results of the two cases above, we can conclude that in general there exists a positive constant $C$ such that $\rho(r^*_j) - \rho(b^*) \ge C\left| b^* - r^*_j \right|$.

Proof of Theorem 1. The proof uses the results of Lemmas 1-3 and is based on two steps.

Step 1: For ease of understanding, we split this step into three smaller parts. From now on, we assume that $A^*_T$, and therefore also $A_T$, as in Lemma 1, hold. The constants we use for Theorem 1 are
$$C_1 > 4, \qquad C_2 = \frac{1}{\sqrt{6}} - \frac{4}{C}, \quad (S31)$$
where $C$ is as in Assumption (A1).

Step 1.1: Firstly, $\forall j \in \{1, \dots, N\}$, we define the intervals
$$I^R_j = \left[ r_j + \frac{\delta_{j+1}}{3},\; r_j + \frac{2\delta_{j+1}}{3} \right], \qquad I^L_j = \left[ r_j - \frac{2\delta_j}{3},\; r_j - \frac{\delta_j}{3} \right]. \quad (S32)$$
In order for $I^R_j$ and $I^L_j$ to have at least one point each, we implicitly require that $\delta_j > 3$, $\forall j = 1, \dots, N+1$, which is the case for sufficiently large $T$; see Assumption (A1).
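The inequality of Lemma 3 can be sanity-checked numerically. The sketch below is our own code: it evaluates $\rho(\cdot)$ from (S8) on a grid and verifies $\rho(r^*_j) - \rho(b^*) \ge C|b^* - r^*_j|$ with the case-wise constants $\sqrt{1-r^*_j}/2$ and $\sqrt{r^*_j}/2$ obtained in the proof.

```python
import math

def rho(bstar, rstar):
    """rho(b*) as in (S8), for a single change-point located at r* in (0,1)."""
    if bstar <= rstar:
        return (1 - rstar) * math.sqrt(bstar / (1 - bstar))
    return rstar * math.sqrt((1 - bstar) / bstar)

rstar = 0.5  # note rho(rstar, rstar) = sqrt(rstar * (1 - rstar))
for i in range(1, 100):
    bstar = i / 100
    gap = rho(rstar, rstar) - rho(bstar, rstar)
    # Case-wise lower-bound constants from the proof of Lemma 3
    C = math.sqrt(1 - rstar) / 2 if bstar <= rstar else math.sqrt(rstar) / 2
    assert gap >= C * abs(bstar - rstar) - 1e-12
```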
Since the lengths of $I^R_j$ and $I^L_j$ as in (S32) are equal to $\delta_{j+1}/3$ and $\delta_j/3$, respectively, and taking $\lambda_T \le \delta_T/3$, where $\delta_T$, as in Equation (6) of the main paper, is the minimum distance between two change-points, the NPID method ensures that for $K = \lceil T/\lambda_T \rceil$ and $k, m \in \{1, \dots, K\}$, there exist at least one $c^r_k = k\lambda_T + 1$ and at least one $c^l_m = T - m\lambda_T$ that are in $I^R_j$ and $I^L_j$, respectively, $\forall j = 1, \dots, N$. Depending on whether or not $r_1 \le T - r_N$, either $r_1$ or $r_N$ will first get isolated in a right- or left-expanding interval, respectively. W.l.o.g., assume that $r_1 \le T - r_N$. Our aim is to first show that there will be at least one interval of the form $[1, c^r_{\tilde{k}}]$, for $\tilde{k} \in \{1, \dots, K\}$, which contains only $r_1$ and no other change-point, such that $\max_{1 \le t < c^r_{\tilde{k}}} L\left(\left|\tilde{B}^t_{1, c^r_{\tilde{k}}}\right|\right) > \zeta_T$, where, for any $1 \le s \le b < e \le T$, $\tilde{B}^b_{s,e}$ is as in (S11). As already
mentioned, our method, due to its expansion approach, naturally ensures that $\exists k \in \{1, \dots, K\}$ such that $c^r_k \in I^R_1$. There is no other change-point in $[1, c^r_k]$ apart from $r_1$. We will now show that, for $b = \operatorname{argmax}_{1\le t\le c^r_k} L\left(\left|\tilde{B}^t_{1,c^r_k}\right|\right)$, we have $L\left(\left|\tilde{B}^b_{1,c^r_k}\right|\right) > \zeta_T$. Using Lemma 1, we have that
$$L\left(\left|\tilde{B}^b_{1,c^r_k}\right|\right) \ge L\left(\left|\tilde{B}^{r_1}_{1,c^r_k}\right|\right) \ge L\left(\left|\tilde{F}^{r_1}_{1,c^r_k}\right|\right) - 4\sqrt{\log T}. \quad (S33)$$
But
$$L\left(\left|\tilde{F}^{r_1}_{1,c^r_k}\right|\right) = \sqrt{\frac{(c^r_k - r_1)\, r_1}{c^r_k}}\, L(|\Delta_1|) \ge \sqrt{\frac{(c^r_k - r_1)\, r_1}{2\max\{c^r_k - r_1,\, r_1\}}}\, L(|\Delta_1|) = \sqrt{\frac{\min\{c^r_k - r_1,\, r_1\}}{2}}\, L(|\Delta_1|). \quad (S34)$$
From our notation $r_0 = 0$ and $\delta_j$ as in Equation (9) of the main paper, we know that $r_1 = \delta_1$. In addition, since $c^r_k \in I^R_1$, then $\frac{\delta_2}{3} \le c^r_k - r_1 < \frac{2\delta_2}{3}$, meaning that
$$\min\{c^r_k - r_1,\, r_1\} \ge \frac{1}{3}\min\{\delta_1, \delta_2\}. \quad (S35)$$
The result in (S33), Assumption (A1), and the application of (S35) to (S34) yield, for $m_T$ as in (A1), with probability going to 1 as $T \to \infty$,
$$L\left(\left|\tilde{B}^b_{1,c^r_k}\right|\right) \ge \sqrt{\frac{\min\{\delta_1,\delta_2\}}{6}}\, L(|\Delta_1|) - 4\sqrt{\log T} \ge \sqrt{\frac{\min\{\delta_1,\delta_2\}}{6}}\, \frac{\tilde{C}_1}{\gamma_{1,T}} - 4\sqrt{\log T} \ge \frac{m_T}{\sqrt{6}} - 4\sqrt{\log T} = \left( \frac{1}{\sqrt{6}} - \frac{4\sqrt{\log T}}{m_T} \right) m_T \ge \left( \frac{1}{\sqrt{6}} - \frac{4}{C} \right) m_T = C_2\, m_T > \zeta_T. \quad (S36)$$
Therefore, there will be an interval of the form $[1, c^r_{\tilde{k}}]$, with $c^r_{\tilde{k}} > r_1$, such that $[1, c^r_{\tilde{k}}]$ contains only $r_1$ and $\max_{1 \le b < c^r_{\tilde{k}}} L\left(\left|\tilde{B}^b_{1,c^r_{\tilde{k}}}\right|\right) > \zeta_T$. Let us, for $k^* \in \{1, \dots, K\}$, denote by $c^r_{k^*} \le c^r_{\tilde{k}}$ the first right-expanding point where this happens, and let $b_1 = \operatorname{argmax}_{1\le t < c^r_{k^*}} L\left(\left|\tilde{B}^t_{1,c^r_{k^*}}\right|\right)$, with $L\left(\left|\tilde{B}^{b_1}_{1,c^r_{k^*}}\right|\right) > \zeta_T$.

Step 1.2: We will now show that $|b_1 - r_1|/\gamma^2_{1,T} = O_p(1)$ through a more general result, which holds for any interval $[s,e)$ that has only one true change-point and in which, as in Step 1.1, the maximum aggregated (using the mean-dominant norm $L(\cdot)$) contrast function value for a point within $[s,e)$ is greater than the threshold $\zeta_T$. From now on, the only change-point in $[s,e)$ is denoted by $r_j$, and $\hat{r}_j$ is the point that has the maximum CUSUM value in that interval, with its value exceeding $\zeta_T$. For ease of presentation, we also denote $l := e-s+1$. Our aim is to show that, for $r^*_j = \frac{r_j-s+1}{e-s+1}$,
$$\liminf_{T\to\infty} P(\Lambda(d)) \to 1, \quad (S37)$$
where $d \to \infty$ as $T \to \infty$.
The constant $C^* > 0$ is independent of $T$, and
$$\Lambda(d) := \left\{ L\left(\left|D_{s,e}^{b^*}\right|\right) - L\left(\left|D_{s,e}^{r_j^*}\right|\right) \le -C^* \left|b^* - r_j^*\right| / \gamma_{j,T} \;\; \forall\, b^* \in J_{s,e} \setminus N(d), \text{ and } L\left(\left|D_{s,e}^{r_j^*}\right|\right) \ge C^*/\gamma_{j,T} \right\}, \tag{S38}$$
with
$$N(d) := \left\{ b^* \in J_{s,e} : \left|b^* - r_j^*\right| \le \frac{d\, \gamma_{j,T}^2}{e - s + 1} \right\}. \tag{S39}$$
Proving the result in (S37), and using that $r_j^* = \frac{r_j - s + 1}{e - s + 1}$ and $b^* = \frac{b - s + 1}{e - s + 1}$ for any $b \in [s, e)$, we can conclude that $|\hat{r}_j - r_j|/\gamma_{j,T}^2 = O_p(1)$.

For $D_{s,e}^{r_j^*}$ as in (S11), and using that for all $u \in \mathbb{R}$, $\tilde{F}_{s,e}^{r_j}(u) = -\sqrt{\frac{(e - r_j)(r_j - s + 1)}{l}}\, \Delta_j(u)$, we have, due to (A1), with probability tending to 1 as $T \to \infty$, that
$$L\left(\left|D_{s,e}^{r_j^*}\right|\right) = \frac{1}{\sqrt{l}} L\left(\left|\tilde{B}_{s,e}^{r_j}\right|\right) = \frac{1}{\sqrt{l}} L\left(\left|\tilde{B}_{s,e}^{r_j} - \tilde{F}_{s,e}^{r_j} + \tilde{F}_{s,e}^{r_j}\right|\right) \ge \frac{1}{\sqrt{l}} \left( L\left(\left|\tilde{F}_{s,e}^{r_j}\right|\right) - L\left(\left|\tilde{B}_{s,e}^{r_j} - \tilde{F}_{s,e}^{r_j}\right|\right) \right) \ge \frac{1}{\sqrt{l}} \left( \sqrt{\frac{(e - r_j)(r_j - s + 1)}{l}}\, \frac{\tilde{C}_j}{\gamma_{j,T}} - L\left(\left|\tilde{B}_{s,e}^{r_j} - \tilde{F}_{s,e}^{r_j}\right|\right) \right).$$
Because $l \to \infty$ as $T \to \infty$ means that $\frac{1}{\sqrt{l}} = o(1)$, and using (S29), we have that for $w(r_j^*) = \sqrt{r_j^*(1 - r_j^*)}$,
$$L\left(\left|D_{s,e}^{r_j^*}\right|\right) \ge w(r_j^*)\, \frac{\tilde{C}_j}{\gamma_{j,T}} \ge C^*/\gamma_{j,T} \tag{S40}$$
with probability tending to 1, for any arbitrary $C^* \in (0, w(r_j^*))$. For $M_{s,e}^{b^*} = D_{s,e}^{b^*} - \Delta_{s,e}^{b^*}$, we have that
$$L\left(\left|D_{s,e}^{b^*}\right|\right) = L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*} + M_{s,e}^{r_j^*} + \Delta_{s,e}^{b^*}\right|\right) = L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*} + M_{s,e}^{r_j^*} + \frac{\rho(b^*)}{w(r_j^*)} \Delta_{s,e}^{r_j^*}\right|\right) = L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*} + M_{s,e}^{r_j^*} + \frac{\rho(b^*)}{w(r_j^*)} \left( D_{s,e}^{r_j^*} - M_{s,e}^{r_j^*} \right)\right|\right) \le L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}\right|\right) + L\left(\left|\frac{\rho(r_j^*) - \rho(b^*)}{w(r_j^*)}\, M_{s,e}^{r_j^*}\right|\right) + \frac{\rho(b^*)}{w(r_j^*)}\, L\left(\left|D_{s,e}^{r_j^*}\right|\right).$$
Therefore,
$$L\left(\left|D_{s,e}^{b^*}\right|\right) - L\left(\left|D_{s,e}^{r_j^*}\right|\right) \le L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}\right|\right) + L\left(\left|\frac{\rho(r_j^*) - \rho(b^*)}{w(r_j^*)}\, M_{s,e}^{r_j^*}\right|\right) + \frac{\rho(b^*) - \rho(r_j^*)}{w(r_j^*)}\, L\left(\left|D_{s,e}^{r_j^*}\right|\right) = L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}\right|\right) + \frac{\rho(r_j^*) - \rho(b^*)}{w(r_j^*)} \left( L\left(\left|M_{s,e}^{r_j^*}\right|\right) - L\left(\left|D_{s,e}^{r_j^*}\right|\right) \right). \tag{S41}$$
However, we know that $L(|D_{s,e}^{r_j^*}|) \ge L(|\Delta_{s,e}^{r_j^*}|) - L(|M_{s,e}^{r_j^*}|)$, and continuing from (S41), we obtain that
$$L\left(\left|D_{s,e}^{b^*}\right|\right) - L\left(\left|D_{s,e}^{r_j^*}\right|\right) \le L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}\right|\right) + \frac{\rho(r_j^*) - \rho(b^*)}{w(r_j^*)} \left( 2 L\left(\left|M_{s,e}^{r_j^*}\right|\right) - L\left(\left|\Delta_{s,e}^{r_j^*}\right|\right) \right) = L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}\right|\right) - \left( \rho(r_j^*) - \rho(b^*) \right) \left( \frac{\tilde{C}_j}{\gamma_{j,T}} - \frac{2}{w(r_j^*)}\, L\left(\left|M_{s,e}^{r_j^*}\right|\right) \right). \tag{S42}$$
Using the result in (S29) from Lemma 2, we have that, since $l \to \infty$ as $T \to \infty$,
$$\frac{1}{w(r_j^*)}\, L\left(\left|M_{s,e}^{r_j^*}\right|\right) = \frac{1}{w(r_j^*) \sqrt{l}}\, L\left(\left|\tilde{B}_{s,e}^{r_j} - \tilde{F}_{s,e}^{r_j}\right|\right) = o_p(1).$$
Therefore, continuing from (S42),
$$L\left(\left|D_{s,e}^{b^*}\right|\right) - L\left(\left|D_{s,e}^{r_j^*}\right|\right) \le L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}\right|\right) - \left( \rho(r_j^*) - \rho(b^*) \right) \left( \frac{\tilde{C}_j}{\gamma_{j,T}} + o_p(1) \right)$$
for all $b^* \in J_{s,e}$, with probability tending to 1. Using now the result of Lemma 3, we have that
$$P\left( L\left(\left|D_{s,e}^{b^*}\right|\right) - L\left(\left|D_{s,e}^{r_j^*}\right|\right) \le L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}\right|\right) - \frac{\left|b^* - r_j^*\right|}{\gamma_{j,T}}\, C_j' \text{ for all } b^* \right) \to 1, \tag{S43}$$
where $C_j' = C \tilde{C}_j C_0'$ for any $C_0' \in (0, 1)$. For the term $L(|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}|)$ in (S43), we have that
$$L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}\right|\right) \le L\left(\left|M_{s,e}^{b^*}\right|\right) + L\left(\left|M_{s,e}^{r_j^*}\right|\right) \le 2 \max_{j \in J_{s,e}} L\left(\left|M_{s,e}^{j}\right|\right) \le 2 \max_{j \in J_{s,e}} L_\infty\left(\left|M_{s,e}^{j}\right|\right) \le K (\log \log l)^{1/2}\, l^{-1/2},$$
with the last inequality coming from Lemma 2 in [3], where $K > 0$ is independent of the sample size. From Assumption (A1), we can deduce that $\gamma_{j,T} \sqrt{\log T}\, (\min\{\delta_j, \delta_{j+1}\})^{-1/2} = O(1)$.
Therefore, $\gamma_{j,T} (\log \log l)^{1/2}\, l^{-1/2} = o(1)$, and therefore (S43) implies that there is a constant $C'' > 0$ such that
$$P\left( L\left(\left|D_{s,e}^{b^*}\right|\right) - L\left(\left|D_{s,e}^{r_j^*}\right|\right) \le -C'' \left|b^* - r_j^*\right| / \gamma_{j,T} \text{ for all } b^* \in J_{s,e} \setminus [\alpha, 1 - \alpha] \right) \to 1 \tag{S44}$$
for arbitrary and fixed $\alpha \in (0, 1/2)$. This means that it suffices to consider $L(|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}|)$ on compact subintervals of $[0, 1]$, and the result in (S37) follows if we show that for arbitrary constants $C''' > 0$ and $\alpha \in (0, 1/2)$,
$$\liminf_{T \to \infty} P\left( L\left(\left|M_{s,e}^{b^*} - M_{s,e}^{r_j^*}\right|\right) \le C''' \left|b^* - r_j^*\right| / \gamma_{j,T} \text{ for all } b^* \in J_{s,e} \cap [\alpha, 1 - \alpha] \setminus N_T(d) \right) \to 1 \tag{S45}$$
as $d \to \infty$. W.l.o.g. we take $b^* \le r_j^*$, and simple calculations yield
$$M_{s,e}^{b^*}(X_i) - M_{s,e}^{r_j^*}(X_i) = \frac{1}{l} \sum_{t = r_j^* l + 1}^{l} \mathbf{1}\{X_{t+s-1} \le X_i\} \left( \sqrt{\frac{b^*}{1 - b^*}} - \sqrt{\frac{r_j^*}{1 - r_j^*}} \right) + \frac{1}{l} \sum_{t = 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} \left( \sqrt{\frac{1 - r_j^*}{r_j^*}} - \sqrt{\frac{1 - b^*}{b^*}} \right) + \frac{1}{\sqrt{b^*(1 - b^*)}\, l} \sum_{t = b^* l + 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} + \left( \sqrt{r_j^*(1 - r_j^*)} - \frac{\sqrt{b^*}(1 - r_j^*)}{\sqrt{1 - b^*}} \right) \Delta_j(X_i)$$
$$= \left( \frac{1}{l} \sum_{t = r_j^* l + 1}^{l} \mathbf{1}\{X_{t+s-1} \le X_i\} - (1 - r_j^*) F_{r_j + 1}(X_i) \right) \left( \sqrt{\frac{b^*}{1 - b^*}} - \sqrt{\frac{r_j^*}{1 - r_j^*}} \right) + \left( \frac{1}{l} \sum_{t = 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} - r_j^* F_{r_j}(X_i) \right) \left( \sqrt{\frac{1 - r_j^*}{r_j^*}} - \sqrt{\frac{1 - b^*}{b^*}} \right) + \frac{1}{\sqrt{b^*(1 - b^*)}} \left( \frac{1}{l} \sum_{t = b^* l + 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} - (r_j^* - b^*) F_{r_j}(X_i) \right).$$
Adding now and subtracting $\frac{1}{\sqrt{r_j^*(1 - r_j^*)}} \left( \frac{1}{l} \sum_{t = b^* l + 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} - (r_j^* - b^*) F_{r_j}(X_i) \right)$, we have that
$$M_{s,e}^{b^*}(X_i) - M_{s,e}^{r_j^*}(X_i) = \left( \frac{1}{l} \sum_{t = r_j^* l + 1}^{l} \mathbf{1}\{X_{t+s-1} \le X_i\} - (1 - r_j^*) F_{r_j + 1}(X_i) \right) \left( \sqrt{\frac{b^*}{1 - b^*}} - \sqrt{\frac{r_j^*}{1 - r_j^*}} \right) + \left( \frac{1}{l} \sum_{t = 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} - r_j^* F_{r_j}(X_i) \right) \left( \sqrt{\frac{1 - r_j^*}{r_j^*}} - \sqrt{\frac{1 - b^*}{b^*}} \right) + \left( \frac{1}{\sqrt{b^*(1 - b^*)}} - \frac{1}{\sqrt{r_j^*(1 - r_j^*)}} \right) \left( \frac{1}{l} \sum_{t = b^* l + 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} - (r_j^* - b^*) F_{r_j}(X_i) \right) + \frac{1}{\sqrt{r_j^*(1 - r_j^*)}} \left( \frac{1}{l} \sum_{t = b^* l + 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} - (r_j^* - b^*) F_{r_j}(X_i) \right).$$
The functions $\sqrt{\frac{b^*}{1 - b^*}}$, $\sqrt{\frac{1 - b^*}{b^*}}$ and $\frac{1}{\sqrt{b^*(1 - b^*)}}$ are Lipschitz continuous on the interval $[\alpha, 1 - \alpha]$ for any arbitrary $\alpha \in (0, 1/2)$. Therefore, there is a constant $C > 0$ such that for all $b^* \in J_{s,e} \cap [\alpha, 1 - \alpha]$, we have that
$$\sup_{i = 1, \dots, T} \left| M_{s,e}^{b^*}(X_i) - M_{s,e}^{r_j^*}(X_i) - \frac{1}{\sqrt{r_j^*(1 - r_j^*)}} \left( \frac{1}{l} \sum_{t = b^* l + 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} - (r_j^* - b^*) F_{r_j}(X_i) \right) \right| \le C \left|b^* - r_j^*\right| \max_{h \in [0, 1]} \sup_{i = 1, \dots, T} A, \tag{S46}$$
where from now on
$$A := A(h, i) = \mathbf{1}\{h \le r_j^*\} \left| \frac{1}{l} \sum_{t = h l + 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} - (r_j^* - h) F_{r_j}(X_i) \right| + \mathbf{1}\{h > r_j^*\} \left| \frac{1}{l} \sum_{t = r_j^* l + 1}^{h l} \mathbf{1}\{X_{t+s-1} \le X_i\} - (h - r_j^*) F_{r_j + 1}(X_i) \right|.$$
We will now show that $\max_{h \in [0, 1]} \sup_{i = 1, \dots, T} A = O_p(l^{-1/2})$. Due to the fact that we have a process with independent increments, then, as shown in Lemma 2 of [3], we know that for any $\eta > 0$,
$$P\left( \max_{h \in [0, 1]} \sup_{i = 1, \dots, T} A > \frac{\eta}{\sqrt{l}} \sqrt{1 - r_j^*} \right) \le K_1 \exp\left( -\frac{K_2 \eta^2}{4} \right),$$
where $K_1$ and $K_2$ are positive constants. Applying the above maximal inequality to the result in (S46), and with $\mathbf{1}_X := \left( \mathbf{1}\{X \le X_1\}, \mathbf{1}\{X \le X_2\}, \dots, \mathbf{1}\{X \le X_T\} \right)$ and $F_{r_j} := \left( F_{r_j}(X_1), F_{r_j}(X_2), \dots, F_{r_j}(X_T) \right)$, it follows that for every $\alpha \in (0, 1/2)$,
$$L\left(\left| M_{s,e}^{b^*} - M_{s,e}^{r_j^*} - \frac{1}{\sqrt{r_j^*(1 - r_j^*)}} \left( \frac{1}{l} \sum_{t = b^* l + 1}^{r_j^* l} \mathbf{1}_{X_{t+s-1}} - (r_j^* - b^*) F_{r_j} \right) \right|\right) \le \sup_{i = 1, \dots, T} \left| M_{s,e}^{b^*}(X_i) - M_{s,e}^{r_j^*}(X_i) - \frac{1}{\sqrt{r_j^*(1 - r_j^*)}} \left( \frac{1}{l} \sum_{t = b^* l + 1}^{r_j^* l} \mathbf{1}\{X_{t+s-1} \le X_i\} - (r_j^* - b^*) F_{r_j}(X_i) \right) \right| \le C \left|b^* - r_j^*\right| O_p(l^{-1/2}),$$
where $O_p(l^{-1/2})$ denotes a random variable that does not depend on $b^* \in J_{s,e}$. From the result above, it is easy to see that (S45) would follow from the fact that, for an arbitrary constant $C''' > 0$,
$$\limsup_{T \to \infty} P\left( \sup_{i = 1, \dots, T} |l A(b^*, i)| > C''' l \left|b^* - r_j^*\right| / \gamma_{j,T} \text{ for some } b^* \in J_{s,e} \setminus N_T(d) \right) \to 0 \tag{S47}$$
as $d \to \infty$. Let us now denote by $Y_1, Y_2, \dots$ and $\tilde{Y}_1, \tilde{Y}_2, \dots$ independent random variables from the distribution of our data in $(r_{j-1}, r_j]$ and $(r_j, r_{j+1}]$, respectively. We define
$$R_m(X_i) := \sum_{t = 1}^{m} \left( \mathbf{1}\{Y_t \le X_i\} - F_{r_j}(X_i) \right), \qquad \tilde{R}_m(X_i) := \sum_{t = 1}^{m} \left( \mathbf{1}\{\tilde{Y}_t \le X_i\} - F_{r_j + 1}(X_i) \right). \tag{S48}$$
Then, because
$$l A(b^*, i) = \mathbf{1}\{h \le r_j^*\} \left| \sum_{t = h l + 1}^{r_j^* l} \left( \mathbf{1}\{X_{t+s-1} \le X_i\} - F_{r_j}(X_i) \right) \right| + \mathbf{1}\{h > r_j^*\} \left| \sum_{t = r_j^* l + 1}^{h l} \left( \mathbf{1}\{X_{t+s-1} \le X_i\} - F_{r_j + 1}(X_i) \right) \right|,$$
continuing from (S47), we have that
$$P\left( \sup_{i = 1, \dots, T} |l A(b^*, i)| > C''' l \left|b^* - r_j^*\right| / \gamma_{j,T} \text{ for some } b^* \in J_{s,e} \setminus N_T(d) \right) \le P\left( \max_{m \ge d \gamma_{j,T}^2} \frac{1}{m} \sup_{i = 1, \dots, T} |R_m(X_i)| \ge C'''/\gamma_{j,T} \right) + P\left( \max_{m \ge d \gamma_{j,T}^2} \frac{1}{m} \sup_{i = 1, \dots, T} \left|\tilde{R}_m(X_i)\right| \ge C'''/\gamma_{j,T} \right).$$
It now follows from the result in [3], page 1489, that
$$P\left( \max_{m \ge d \gamma_{j,T}^2} \frac{1}{m} \sup_{i = 1, \dots, T} |R_m(X_i)| \ge C'''/\gamma_{j,T} \right) \le \frac{\gamma_{j,T}}{C'''}\, E\left[ \frac{1}{m_0} \sup_{i = 1, \dots, T} |R_{m_0}(X_i)| \right], \tag{S49}$$
where $m_0 := \min\{ m \in \mathbb{N} : m \ge d \gamma_{j,T}^2 \}$. The same result holds for $\tilde{R}_m(X_i)$ in place of $R_m(X_i)$. Now, due to the Dvoretzky-Kiefer-Wolfowitz inequality, we have that for any $\epsilon > 0$,
$$P\left( \frac{1}{\sqrt{m_0}} \sup_{i = 1, \dots, T} |R_{m_0}(X_i)| > \epsilon \right) = P\left( \sup_{i = 1, \dots, T} \left| \frac{1}{m_0} \sum_{t = 1}^{m_0} \mathbf{1}\{Y_t \le X_i\} - F_{r_j}(X_i) \right| > \frac{\epsilon}{\sqrt{m_0}} \right) \le P\left( \sup_{x \in \mathbb{R}} \left| \frac{1}{m_0} \sum_{t = 1}^{m_0} \mathbf{1}\{Y_t \le x\} - P(X_{r_j} \le x) \right| > \frac{\epsilon}{\sqrt{m_0}} \right) \le 2 \exp(-2\epsilon^2).$$
Using this result, we get that for the right-hand side of (S49) the following holds:
$$\frac{\gamma_{j,T}}{C'''}\, E\left[ \frac{1}{m_0} \sup_{i = 1, \dots, T} |R_{m_0}(X_i)| \right] = \frac{\gamma_{j,T}}{C''' \sqrt{m_0}}\, E\left[ \frac{1}{\sqrt{m_0}} \sup_{i = 1, \dots, T} |R_{m_0}(X_i)| \right] \le \frac{2 \gamma_{j,T}}{C''' \sqrt{m_0}} \int \exp(-2x^2)\, dx \le \frac{2}{\sqrt{d}\, C'''} \int \exp(-2x^2)\, dx,$$
since $m_0 \ge d \gamma_{j,T}^2$. It is now straightforward that the right-hand side of the above inequality goes to 0 as $d \to \infty$, and the result in (S37) follows. Therefore,
$$(\hat{r}_j - r_j)/\gamma_{j,T}^2 = O_p(1), \quad \forall j \in \{1, \dots, N\}. \tag{S50}$$
Therefore, for $\lambda_T \le \delta_T/3$, we have proven that, working under the set $A_T^*$ (which implies the set $A_T$), and due to (A1), there will be an interval of the form $[1, c_{k^*}^r]$, with $L(|\tilde{B}_{1, c_{k^*}^r}^{b_1}|) > \zeta_T$ with probability tending to 1 as $T \to \infty$, where $b_1$ is an estimate of $r_1$ that also satisfies $|b_1 - r_1|/\gamma_{1,T}^2 = O_p(1)$.

Step 1.3: After detecting the first change-point, NPID follows the exact same process as in Steps 1.1 and 1.2, but only in the set $[c_{k^*}^r, T]$, which contains $r_2, r_3, \dots, r_N$. This means that we bypass, without checking for possible change-points, the interval $[b_1 + 1, c_{k^*}^r)$. However, we need to prove that: (S.1) there is no change-point in $[b_1 + 1, c_{k^*}^r)$, apart from possibly the already detected $r_1$; (S.2) $c_{k^*}^r$ is at a location which allows for the detection of $r_2$.

For (S.1): We split the explanation into two cases with respect to the location of $b_1$.

Case 1: $b_1 < r_1 < c_{k^*}^r$. Using (S37), (S38) and (S39), and imposing the condition
$$P\left( \min\{\delta_j, \delta_{j+1}\} > 3 d \gamma_{j,T}^2 \right) \xrightarrow[T \to \infty]{} 1, \quad \forall j \in \{1, \dots, N\}, \tag{S51}$$
for a $d$ which goes to infinity as $T \to \infty$, then, since $c_{\tilde{k}}^r \in I_1^R$, we have, with probability going to 1 as $T \to \infty$, that
$$c_{k^*}^r - b_1 \le c_{\tilde{k}}^r - b_1 = c_{\tilde{k}}^r - r_1 + r_1 - b_1 < \frac{2\delta_2}{3} + d \gamma_{1,T}^2 < \delta_2.$$
Since $r_2 - r_1 = \delta_2$ and $r_1$ is already in $[b_1 + 1, c_{k^*}^r)$, there is no other change-point in $[b_1 + 1, c_{k^*}^r)$ apart from $r_1$. We highlight that in the case of $d$ being of order at most $O(\log T)$, the result in (S51) is not in fact an extra assumption and is satisfied through (A1).

Case 2: $r_1 \le b_1 < c_{k^*}^r$. Since $c_{\tilde{k}}^r \in I_1^R$, then $c_{k^*}^r - r_1 \le c_{\tilde{k}}^r - r_1 < 2\delta_2/3$, which means that apart from $r_1$ there is no other change-point in $[r_1, c_{k^*}^r)$. With $r_1 \le b_1$, the interval $[b_1 + 1, c_{k^*}^r)$ does not contain any change-point.

Cases 1 and 2 above show that, no matter the location of $b_1$, there is no change-point in $[b_1 + 1, c_{k^*}^r)$ other than possibly the previously detected $r_1$. Similarly to the approach in Steps 1.1 and 1.2, our method, applied now in $[c_{k^*}^r, T]$, will first isolate $r_2$ or $r_N$, depending on whether $r_2 - c_{k^*}^r$ is smaller or larger than $T - r_N$. If $T - r_N < r_2 - c_{k^*}^r$, then $r_N$ will get isolated first in a left-expanding interval, and the procedure to show its detection is exactly the same as in Step 1.1, where we explained the detection of $r_1$. Therefore, w.l.o.g., and also for the sake of showing (S.2), let us assume that $r_2 - c_{k^*}^r \le T - r_N$.

For (S.2): With $I_{s,e}^r$ as in (2.1) of [1], there exists $c_{k_2}^r \in I_{c_{k^*}^r, T}^r$ such that $c_{k_2}^r \in I_2^R$, with $I_j^R$ defined in (S32). We will show that $r_2$ gets detected in $[c_{k^*}^r, c_{k_2^*}^r]$, for $k_2^* \le k_2$, and that its detection, denoted by $b_2$, satisfies $|b_2 - r_2|/\gamma_{2,T}^2 = O_p(1)$, with $\gamma_{j,T}$ as in Assumption (A1). Following similar steps as in (S34), we have that for $\tilde{b}_2 = \operatorname{argmax}_{c_{k^*}^r \le t < c_{k_2}^r} L(|\tilde{B}_{c_{k^*}^r, c_{k_2}^r}^t|)$,
$$L\left(\left|\tilde{B}_{c_{k^*}^r, c_{k_2}^r}^{\tilde{b}_2}\right|\right) \ge L\left(\left|\tilde{F}_{c_{k^*}^r, c_{k_2}^r}^{r_2}\right|\right) - 4\sqrt{\log T} \ge \sqrt{\frac{\min\{c_{k_2}^r - r_2,\; r_2 - c_{k^*}^r + 1\}}{2}}\, L(|\Delta_2|) - 4\sqrt{\log T}. \tag{S52}$$
By construction,
$$c_{k_2}^r - r_2 \ge \frac{\delta_3}{3}, \qquad r_2 - c_{k^*}^r + 1 \ge r_2 - c_{\tilde{k}}^r + 1 = r_2 - r_1 - (c_{\tilde{k}}^r - r_1) + 1 = \delta_2 - (c_{\tilde{k}}^r - r_1) + 1 > \delta_2 - \frac{2\delta_2}{3} + 1 > \frac{\delta_2}{3},$$
which means that $\min\{c_{k_2}^r - r_2,\; r_2 - c_{k^*}^r + 1\} \ge \frac{1}{3}\min\{\delta_2, \delta_3\}$, and therefore, continuing from (S52),
$$L\left(\left|\tilde{B}_{c_{k^*}^r, c_{k_2}^r}^{\tilde{b}_2}\right|\right) \ge \sqrt{\frac{\min\{\delta_2, \delta_3\}}{6}}\, L(|\Delta_2|) - 4\sqrt{\log T} \ge \sqrt{\frac{\min\{\delta_2, \delta_3\}}{6}}\, \frac{\tilde{C}_2}{\gamma_{2,T}} - 4\sqrt{\log T} \ge \left( \frac{1}{\sqrt{6}} - \frac{4\sqrt{\log T}}{m_T} \right) m_T \ge \left( \frac{1}{\sqrt{6}} - \frac{4}{C} \right) m_T = C_2 m_T > \zeta_T. \tag{S53}$$
Therefore, based on (A1), for a $c_{\tilde{k}_2}^r \in I_{c_{k^*}^r, T}^r$ we have shown that there exists an interval of the form $[c_{k^*}^r, c_{\tilde{k}_2}^r]$, with $\max_{c_{k^*}^r \le b < c_{\tilde{k}_2}^r} L(|\tilde{B}_{c_{k^*}^r, c_{\tilde{k}_2}^r}^{b}|) > \zeta_T$ with probability tending to 1. Let us denote by $c_{k_2^*}^r \in I_{c_{k^*}^r, T}^r$ the first right-expanding point where this occurs, and let $b_2 = \operatorname{argmax}_{c_{k^*}^r \le t < c_{k_2^*}^r} L(|\tilde{B}_{c_{k^*}^r, c_{k_2^*}^r}^t|)$, with $L(|\tilde{B}_{c_{k^*}^r, c_{k_2^*}^r}^{b_2}|) > \zeta_T$ with probability tending to 1 as $T \to \infty$. It is straightforward to show that $|b_2 - r_2|/\gamma_{2,T}^2 = O_p(1)$ following exactly the same process as in Step 1.2, and we do not repeat this here.
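The Dvoretzky-Kiefer-Wolfowitz bound invoked in Step 1.2 can be checked numerically. The sketch below (sample size, threshold, and repetition count are arbitrary illustrative choices) estimates $P(\sup_x |F_n(x) - F(x)| > \epsilon)$ by simulation for uniform data and compares it with the DKW bound $2e^{-2n\epsilon^2}$, allowing a small Monte Carlo margin since the bound with the constant 2 is nearly sharp:

```python
import math
import random

random.seed(1)
n, reps, eps = 200, 1000, 0.08
exceed = 0
for _ in range(reps):
    t = sorted(random.random() for _ in range(n))  # U(0,1) sample; true cdf F(x) = x
    # sup_x |F_n(x) - F(x)| for the empirical cdf of the sorted sample
    dn = max(max((i + 1) / n - t[i], t[i] - i / n) for i in range(n))
    exceed += dn > eps
rate = exceed / reps
bound = 2 * math.exp(-2 * n * eps * eps)  # DKW: P(sup|F_n - F| > eps) <= bound
print(round(rate, 3), round(bound, 3))
```

The empirical exceedance rate stays at or below the theoretical bound, up to Monte Carlo noise, which is exactly the uniformity that makes the bound on the right-hand side of (S49) work.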
Having detected $r_2$, our algorithm proceeds in the interval $[s, e] = [c_{k_2^*}^r, T]$, and all the change-points get detected one by one, since Step 1.3 remains applicable as long as there are previously undetected change-points in $[s, e]$. Denoting by $\hat{r}_j$ the estimate of $r_j$, as in the statement of the theorem, we conclude that all change-points get detected one by one and that $|\hat{r}_j - r_j|/\gamma_{j,T}^2 = O_p(1)$, due to the result in (S37).

Step 2: The arguments given in Steps 1.1-1.3 hold in the set $A_T^*$ (which also implies $A_T$) defined in (S13). At the beginning of the algorithm, $s = 1$, $e = T$, and for $N \ge 1$ there exist $k_1 \in \{1, \dots, K\}$ such that $s_{k_1} = s$, $e_{k_1} \in I_1^R$, and $k_2 \in \{1, \dots, K\}$ such that $s_{k_2} \in I_N^L$, $e_{k_2} = e$. As in our previous steps, w.l.o.g. assume that $r_1 \le T - r_N$, meaning that $r_1$ gets isolated and detected first in an interval $[s, c_{k^*}^r]$, where $c_{k^*}^r \in I_{1,T}^r$ and $c_{k^*}^r \le e_{k_1}$. Then $\hat{r}_1 = \operatorname{argmax}_{s \le t < c_{k^*}^r} L(|\tilde{B}_{s, c_{k^*}^r}^t|)$ is the estimated location for $r_1$, and $|r_1 - \hat{r}_1|/\gamma_{1,T}^2 = O_p(1)$. After this, the algorithm continues in $[c_{k^*}^r, T]$ and keeps detecting all the change-points as explained in Step 1. It is important to note that there are no double-detection issues because, naturally, at each step of the algorithm, the new interval $[s, e]$ does not include any previously detected change-points. Once all the change-points have been detected one by one, $[s, e]$ has no other change-points in it. Our method will keep interchangeably checking for possible change-points in intervals of the form $[s, c_{\tilde{k}_1}^r]$ and $[c_{\tilde{k}_2}^l, e]$, for $c_{\tilde{k}_1}^r \in I_{s,e}^r$ and $c_{\tilde{k}_2}^l \in I_{s,e}^l$. Let us denote by $[s^*, e^*]$ any of these intervals. Our algorithm will not detect anything in $[s^*, e^*]$ since, for all $i \in \{1, \dots, T\}$ and all $b \in [s^*, e^*)$,
$$L\left(\left|\tilde{B}_{s^*, e^*}^{b}\right|\right) \le L\left(\left|\tilde{F}_{s^*, e^*}^{b}\right|\right) + 4\sqrt{\log T} = 4\sqrt{\log T} < C_1 \sqrt{\log T} \le \zeta_T.$$
After not detecting anything in all intervals of the above form, the algorithm concludes that there are no change-points in $[s, e]$ and stops. $\square$

References

[1] A. Anastasiou and P. Fryzlewicz (2022). Detecting multiple generalized change-points by isolating single ones. Metrika, 85, 141-174.

[2] E. Carlstein (1988). Nonparametric change-point estimation. The Annals of Statistics, 16, 188-197.

[3] L. Dümbgen (1991). The asymptotic behavior of some nonparametric change-point estimators. The Annals of Statistics, 19, 1471-1495.

[4] A. Dvoretzky, J. Kiefer and J. Wolfowitz (1956). Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Annals of Mathematical Statistics, 27, 642-669.
An Aldous-Hoover type representation for row exchangeable arrays

Evan Donald, University of Central Florida, ev446807@ucf.edu
Jason Swanson, University of Central Florida, jason.swanson@ucf.edu

Abstract

In an array of random variables, each row can be regarded as a single, sequence-valued random variable. In this way, the array is seen as a sequence of sequences. Such an array is said to be row exchangeable if each row is an exchangeable sequence, and the entire array, viewed as a sequence of sequences, is exchangeable. We give a representation theorem, analogous to those of Aldous and Hoover, which characterizes row exchangeable arrays. We then use this representation theorem to address the problem of performing Bayesian inference on row exchangeable arrays.

AMS subject classifications: Primary 60G09; secondary 62E10, 60G25, 62M20

Keywords and phrases: exchangeability, random arrays, representation theorems, Bayesian inference

1 Introduction

Consider a situation in which there are several agents, all from the same population. Each agent undertakes a sequence of actions. These actions are chosen according to the agent's particular tendencies. Although different agents have different tendencies, there may be patterns in the population. We observe a certain set of agents over a certain amount of time. Based on these observations, we want to make probabilistic forecasts about the future behavior of the agents.

We model this situation with an array of random variables. Let $S$ be a complete and separable metric space and $\xi = \{\xi_{ij} : i, j \in \mathbb{N}\}$ an array of $S$-valued random variables. The space $S$ represents the set of possible actions that the agents may undertake, and $\xi_{ij}$ represents the $j$th action of the $i$th agent.

For a simple example, imagine a pressed penny machine, like those found in museums or tourist attractions. For a fee, the machine presses a penny into a commemorative souvenir. Now imagine the machine is broken, so that it mangles all the pennies we feed it.
Each pressed penny it creates is mangled in its own way. Each has its own probability of landing on heads when flipped. The machine, though, might have its own tendencies. For instance, it might tend to produce pennies that are biased toward heads. In this situation, the agents are the pennies and the actions are the heads and tails that they produce. We therefore take $S = \{0, 1\}$, and $\xi_{ij} \in S$ denotes the result of the $j$th flip of the $i$th penny created by the machine.

arXiv:2504.21584v1 [math.PR] 30 Apr 2025

For an example with an infinite $S$, imagine the machine produces a commemorative globe that fits in the palm of our hand. When we buy such a globe, we might roll it like we would a die. When it comes to rest, whatever latitude and longitude are at the top, we regard as the outcome of the roll. Ordinarily, the machine produces globes that are uniformly weighted, so that every point on the globe is equally likely to be the outcome of the roll. But suppose the machine is broken, so that each globe is imbalanced and irregularly weighted in its own unique way. In this case, we take $S = S^2$, the unit sphere in $\mathbb{R}^3$, and $\xi_{ij} \in S$ denotes the result of the $j$th roll of the $i$th globe created by the machine.

In situations such as these, it would be natural to assume that $\xi$ has certain symmetries. More specifically, we assume that (i) each row of the array, $\xi_i = \{\xi_{ij} : j \in \mathbb{N}\}$, is an exchangeable sequence of $S$-valued random variables, and (ii) the sequence of rows, $\{\xi_i : i \in \mathbb{N}\}$, is an exchangeable sequence of $S^\infty$-valued random variables. If $\xi$ satisfies (i) and (ii) above, then we say that $\xi$ is a row exchangeable array. It is easy to see that $\xi$ is row exchangeable if and only if $\{\xi_{\sigma(i), \tau_i(j)}\}$ and $\{\xi_{ij}\}$ have the same finite-dimensional distributions whenever $\sigma$ and the $\tau_i$ are (finite) permutations of $\mathbb{N}$.

In a collection of papers (see [1, 2, 7, 8]), Aldous and Hoover considered different forms of exchangeability for random arrays. One form they considered was separate exchangeability, which means that $\{\xi_{\sigma(i), \tau(j)}\}$ and $\{\xi_{ij}\}$ have the same finite-dimensional distributions whenever $\sigma$ and $\tau$ are permutations. Clearly, row exchangeability implies separate exchangeability, but not conversely. Separately exchangeable arrays can be characterized by a certain representation involving i.i.d. uniform random variables. This representation theorem is given below in Theorem 1.1, and was originally proven independently by Aldous [1, 2] and Hoover [7, 8]. A proof can also be found in [11, Corollary 7.23]. The version given here appears in [9].

Theorem 1.1. Let $\xi = \{\xi_{ij} : i, j \in \mathbb{N}\}$ be an array of $S$-valued random variables. Then $\xi$ is separately exchangeable if and only if there exists a measurable function $g : \mathbb{R}^4 \to S$ and an i.i.d. collection of random variables $\{\alpha, \beta_i, \eta_j, \lambda_{ij} : i, j \in \mathbb{N}\}$, uniformly distributed on $(0, 1)$, such that the array $\{g(\alpha, \beta_i, \eta_j, \lambda_{ij})\}$ has the same finite-dimensional distributions as $\xi$.

The main results of the present work are twofold. First, we establish an analogous representation theorem for row exchangeable arrays (see Theorem 3.1). We then use this representation theorem to address the problem of performing Bayesian inference on row exchangeable arrays.
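The pressed-penny example can be simulated directly as a two-level hierarchy, which is exactly the structure that the de Finetti-type theorem below formalizes. In this sketch the machine's tendency is a fixed Beta(8, 2) law (an arbitrary stand-in for the row distribution generator $\varpi$), each penny's bias plays the role of its row distribution $\mu_i$, and the flips form the rows $\xi_{ij}$; all names and parameters are illustrative assumptions.

```python
import random

random.seed(2)

def sample_row_exchangeable_array(m, n):
    """Simulate the pressed-penny model: the machine draws one bias p_i
    per penny from a Beta(8, 2) law (the 'row distribution generator'),
    and row i is then n i.i.d. Bernoulli(p_i) flips (the actions)."""
    biases = [random.betavariate(8, 2) for _ in range(m)]
    array = [[1 if random.random() < p else 0 for _ in range(n)] for p in biases]
    return biases, array

biases, xi = sample_row_exchangeable_array(m=500, n=200)

# Within each row the flips are i.i.d. given the bias, so the row mean
# should track that row's own bias p_i ...
row_err = sum(abs(sum(row) / len(row) - p) for row, p in zip(xi, biases)) / len(xi)
# ... while the overall mean should track E[p] = 8 / (8 + 2) = 0.8.
overall = sum(sum(row) for row in xi) / (500 * 200)
print(round(row_err, 3), round(overall, 3))
```

Permuting the entries within any row, or permuting whole rows, leaves the joint law of this array unchanged, which is precisely row exchangeability.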
For the latter, we start by proving a de Finetti theorem for row exchangeability (see Theorem 3.2). According to this theorem, if $\xi$ is row exchangeable, then there exist a sequence of random measures $\mu_1, \mu_2, \dots$ on $S$ and a random measure $\varpi$ on $M_1 = M_1(S)$, the space of probability measures on $S$, such that

• given $\varpi$, the sequence $\mu_1, \mu_2, \dots$ is i.i.d. with distribution $\varpi$, and

• for each $i$, given $\mu_i$, the sequence $\xi_{i1}, \xi_{i2}, \dots$ is i.i.d. with distribution $\mu_i$.

We call the random measures $\mu_i$ the row distributions of $\xi$, and we call $\varpi$ the row distribution generator. To keep our formulas concise, we adopt the following notation:
$$X_{in} = \begin{pmatrix} \xi_{i1} & \cdots & \xi_{in} \end{pmatrix}, \qquad \mathbf{X}_{mn} = \begin{pmatrix} X_{1n} \\ \vdots \\ X_{mn} \end{pmatrix} = \begin{pmatrix} \xi_{11} & \cdots & \xi_{1n} \\ \vdots & \ddots & \vdots \\ \xi_{m1} & \cdots & \xi_{mn} \end{pmatrix}, \qquad \boldsymbol{\mu}_m = \begin{pmatrix} \mu_1 \\ \vdots \\ \mu_m \end{pmatrix}.$$
Our objective is to make inferences about the future of the process $\xi$ based on past observations. That is, if $M, N, N' \in \mathbb{N}$ and $N < N'$, then we wish to compute
$$\mathcal{L}(\xi_{ij} : i \le M,\; N < j \le N' \mid \mathbf{X}_{MN}) \tag{1.1}$$
in terms of the prior distribution of $\xi$. Here, the notation $\mathcal{L}(X \mid Y)$ denotes the regular conditional distribution of $X$ given $Y$. We also adopt the convention that a variable with a 0 subscript is omitted. For example, when $m = 1$, we understand $\mathcal{L}(\mu_m \mid \mathbf{X}_{MN}, \boldsymbol{\mu}_{m-1})$ to mean $\mathcal{L}(\mu_1 \mid \mathbf{X}_{MN})$. Our main result concerning this problem is the following.

Theorem 1.2. The conditional distribution (1.1) is entirely determined by the conditional distributions
$$\mathcal{L}(\mu_m \mid X_{mN}, \dots, X_{MN}, \boldsymbol{\mu}_{m-1}), \tag{1.2}$$
where $1 \le m \le M$.

The proof of Theorem 1.2 (see Section 3.3) is constructive and provides us with a way of using (1.2) to compute (1.1). In follow-up work (see [3]), we take on the question of how to assign a non-informative prior to $\xi$. According to our de Finetti theorem, this is equivalent to assigning a prior to $\varpi$. Note that $\varpi$ is a random measure on $M_1(S)$. That is, $\varpi$ is an $M_1(M_1(S))$-valued random variable. Its prior is therefore an element of $M_1(M_1(M_1(S)))$. Our approach to assigning a prior will be to use a variant of Ferguson's [5] Dirichlet process.

2 Background

2.1 Random measures and kernels

If $(X, \mathcal{U})$ is a topological space, then $\mathcal{B}(X) = \sigma(\mathcal{U})$ denotes the Borel $\sigma$-algebra on $X$. We may sometimes denote $\mathcal{B}(X)$ by $\mathcal{B}_X$. We use $\mathcal{R}^d$ to denote the Borel $\sigma$-algebra on $\mathbb{R}^d$ with the Euclidean topology. If $(S, \mathcal{S})$ is a measurable space and $B \in \mathcal{S}$, then $\mathcal{S}|_B = \{A \cap B : A \in \mathcal{S}\} = \{A \in \mathcal{S} : A \subseteq B\}$. We let $\mathbb{R}^* = [-\infty, \infty]$ denote the extended real line, equipped with the topology generated by the metric $\rho(x, y) = |\tan^{-1}(x) - \tan^{-1}(y)|$. We use $\mathcal{R}^*$ to denote $\mathcal{B}(\mathbb{R}^*)$. Note that $\mathcal{R}^* = \{A \subseteq \mathbb{R}^* : A \cap \mathbb{R} \in \mathcal{R}\}$.

Let $S$ be a complete and separable metric space with metric $d$ and let $\mathcal{S} = \mathcal{B}(S)$. Let $M = M(S)$ denote the space of $\sigma$-finite measures on $S$. For $B \in \mathcal{S}$, we define the projection $\pi_B : M \to \mathbb{R}^*$ by $\pi_B(\nu) = \nu(B)$. If $T$ is a set and $\mu : T \to M$, then we adopt the notation $\mu(t, B) = (\mu(t))(B)$. Let $\mathcal{M} = \mathcal{M}(S)$ be the $\sigma$-algebra on $M$ defined by $\mathcal{M} = \sigma(\{\pi_B : B \in \mathcal{S}\})$. Note that $\mathcal{M}$ is generated by the sets of the form $\{\nu \in M : \nu(B) \in A\}$, where $B \in \mathcal{S}$ and $A \in \mathcal{B}([0, \infty])$.

Given a probability space $(\Omega, \mathcal{F}, P)$, a random measure on $S$ is a function $\mu : \Omega \to M$ that is $(\mathcal{F}, \mathcal{M})$-measurable. Let $M_1 = M_1(S)$ be the set of all probability measures on $S$. Note that $M_1 = \{\nu \in M : \nu(S) = 1\}$. Hence, $M_1 \in \mathcal{M}$, and we may define $\mathcal{M}_1$ by $\mathcal{M}_1 = \mathcal{M}|_{M_1}$. A random probability measure on $S$ is a function $\mu : \Omega \to M_1$ that is $(\mathcal{F}, \mathcal{M}_1)$-measurable, or equivalently, a random measure taking values in $M_1$.
We equip $M_1$ with the Prohorov metric, $\pi$, which metrizes weak convergence. Since $S$ is complete and separable, $M_1$ is complete and separable under $\pi$. It can be shown that $\mathcal{M}_1 = \mathcal{B}(M_1)$. (See, for example, [6, Theorem 2.3].)

Unless otherwise specified, we will equip $S^\infty$ with the metric $d_\infty$ defined by
$$d_\infty(x, y) = \sum_{n=1}^{\infty} \frac{d(x_n, y_n) \wedge 1}{2^n}.$$
Note that $x \to y$ in $S^\infty$ if and only if $x_n \to y_n$ in $S$ for all $n$. In particular, the metric $d_\infty$ induces the product topology on $S^\infty$. Consequently, $\mathcal{S}^\infty = \mathcal{B}(S^\infty)$ and $S^\infty$ is separable. In fact, $(S^\infty, d_\infty)$ is a complete and separable metric space.

Let $(T, \mathcal{T})$ be a measurable space. A function $\mu : T \to M$ is called a kernel from $T$ to $S$ if $\mu(\cdot, B)$ is $(\mathcal{T}, \mathcal{R}^*)$-measurable for all $B \in \mathcal{S}$. By [10, Lemma 1.37], given a function $\mu : T \to M$ and a $\pi$-system $\mathcal{C}$ such that $\mathcal{S} = \sigma(\mathcal{C})$, the following are equivalent:

(i) $\mu$ is a kernel.
(ii) $\mu(\cdot, B)$ is $(\mathcal{T}, \mathcal{R}^*)$-measurable for all $B \in \mathcal{C}$.
(iii) $\mu$ is $(\mathcal{T}, \mathcal{M})$-measurable.

A probability kernel is a kernel taking values in $M_1$. Note that a random measure on $S$ is just a kernel from $\Omega$ to $S$, and a random probability measure on $S$ is just a probability kernel from $\Omega$ to $S$.

It can be shown that the function $\mu \mapsto \mu^\infty$ mapping $M_1$ to $M_1(S^\infty)$ is measurable. Therefore, if $\mu$ is a random probability measure on $S$, then $\mu^\infty$ is a random probability measure on $S^\infty$. In fact, the random variable $\mu^\infty$ is $\sigma(\mu)$-measurable.

2.2 Regular conditional distributions

If $X$ is an $S$-valued random variable and $\mu$ is a probability measure on $S$, then we write $X \sim \mu$, as usual, to mean that $X$ has distribution $\mu$. We also use $\mathcal{L}(X)$ to denote the distribution of $X$, so that $X \sim \mu$ and $\mathcal{L}(X) = \mu$ are synonymous. If $\mathcal{L}(X) = \mu$ and $B \in \mathcal{S}$, then we use $\mathcal{L}(X; B)$ to denote $\mu(B)$.

Let $\mathcal{G} \subseteq \mathcal{F}$ be a $\sigma$-algebra. A regular conditional distribution for $X$ given $\mathcal{G}$ is a random probability measure $\mu$ on $S$ such that $P(X \in A \mid \mathcal{G}) = \mu(A)$ a.s. for all $A \in \mathcal{S}$. Since $S$ is a complete and separable metric space, regular conditional distributions exist. In fact, such a $\mu$ exists whenever $X$ takes values in a standard Borel space. Moreover, $\mu$ is unique in the sense that if $\mu$ and $\widetilde{\mu}$ are two such random measures, then $\mu = \widetilde{\mu}$ a.s. That is, with probability one, $\mu(B) = \widetilde{\mu}(B)$ for all $B \in \mathcal{S}$. The random measure $\mu$ may be chosen so that it is $\mathcal{G}$-measurable.

We use the notation $\mathcal{L}(X \mid \mathcal{G})$ to refer to the regular conditional distribution of $X$ given $\mathcal{G}$. If $\mu$ is a random probability measure, the notation $X \mid \mathcal{G} \sim \mu$ means $\mathcal{L}(X \mid \mathcal{G}) = \mu$ a.s. We use the semicolon notation in the same way as for unconditional distributions so that, for instance, $\mathcal{L}(X \mid \mathcal{G}; A) = P(X \in A \mid \mathcal{G})$ a.s.

Let $(T, \mathcal{T})$ be a measurable space and $Y$ a $T$-valued random variable. Then there exists a probability kernel $\mu$ from $T$ to $S$ such that $X \mid \sigma(Y) \sim \mu(Y)$. Moreover, $\mu$ is unique in the sense that if $\mu$ and $\widetilde{\mu}$ are two such probability kernels, then $\mu = \widetilde{\mu}$, $\mathcal{L}(Y)$-a.e. In this case, we will typically omit the $\sigma$, and write, for instance, that $\mathcal{L}(X \mid Y) = \mu(Y)$. Note that for fixed $y \in T$, the probability measure $\mu(y, \cdot)$ is not uniquely determined, since the kernel $\mu$ is only unique $\mathcal{L}(Y)$-a.e. Nonetheless, if a particular $\mu$ has been fixed, we will use the notation $\mathcal{L}(X \mid Y = y)$ to denote the probability measure $\mu(y, \cdot)$.
2.3 Conditional independence

A sequence $\xi = \{\xi_i : i \in \mathbb{N}\}$ of $S$-valued random variables is conditionally i.i.d. if there exist a $\sigma$-algebra $\mathcal{G} \subseteq \mathcal{F}$ and a random probability measure $\nu$ on $S$ such that
$$P(\xi \in A \mid \mathcal{G}) = \nu^\infty(A) \text{ a.s.}, \tag{2.1}$$
for all $A \in \mathcal{S}^\infty$. If there is a random probability measure $\mu$ such that $P(\xi \in A \mid \mu) = \mu^\infty(A)$, then $\xi$ is conditionally i.i.d. given $\mathcal{G} = \sigma(\mu)$. The converse is given by the following lemma.

Lemma 2.1. If $\xi$ satisfies (2.1), then there exists a random probability measure $\mu$ on $S$ such that
$$P(\xi \in A \mid \mu) = \mu^\infty(A) \text{ a.s.}, \tag{2.2}$$
for all $A \in \mathcal{S}^\infty$. Moreover, $\mu$ can be chosen so that $\mu \in \mathcal{G}$ and $\mu = \nu$ a.s.

Proof. Suppose there exist a $\sigma$-algebra $\mathcal{G} \subseteq \mathcal{F}$ and a random probability measure $\nu$ on $S$ such that $P(\xi \in A \mid \mathcal{G}) = \nu^\infty(A)$ a.s., for all $A \in \mathcal{S}^\infty$. Let $(\Omega, \overline{\mathcal{G}}, \overline{P})$ be the completion of $(\Omega, \mathcal{G}, P)$, so that $\nu^\infty(A)$ is $(\overline{\mathcal{G}}, \mathcal{R})$-measurable for all $A \in \mathcal{S}^\infty$. This means that $\nu^\infty$ is a kernel from $(\Omega, \overline{\mathcal{G}})$ to $S^\infty$, and that $\nu^\infty$ is $\overline{\mathcal{G}}$-measurable. For any $B \in \mathcal{S}$, $\nu(B) = \nu^\infty(B \times S^\infty)$ is $(\overline{\mathcal{G}}, \mathcal{R})$-measurable. Hence, $\nu$ is $\overline{\mathcal{G}}$-measurable. Choose $\mu : \Omega \to M_1(S)$ such that $\mu$ is $\mathcal{G}$-measurable and $\mu = \nu$ a.s. Thus, $\mu^\infty = \nu^\infty$ a.s. Since $\mu^\infty \in \sigma(\mu)$, we have
$$P(\xi \in A \mid \mu) = E[P(\xi \in A \mid \mathcal{G}) \mid \mu] = E[\mu^\infty(A) \mid \mu] = \mu^\infty(A) \text{ a.s.},$$
for all $A \in \mathcal{S}^\infty$. $\square$

Corollary 2.2. If $\xi$ is conditionally i.i.d. given $\mathcal{G}$, then
$$P\left( \bigcap_{i=1}^{n} \{\xi_i \in A_i\} \,\Big|\, \mathcal{G} \right) = \prod_{i=1}^{n} P(\xi_i \in A_i \mid \mathcal{G}) \text{ a.s.},$$
for all $n \in \mathbb{N}$ and all $A_i \in \mathcal{S}$.

Proof. By Lemma 2.1,
$$P\left( \bigcap_{i=1}^{n} \{\xi_i \in A_i\} \,\Big|\, \mathcal{G} \right) = \mu^\infty(A_1 \times \cdots \times A_n) = \prod_{i=1}^{n} \mu(A_i) \text{ a.s.}$$
If $A = S^{i-1} \times A_i \times S^\infty$, then $\mu(A_i) = \mu^\infty(A) = P(\xi \in A \mid \mathcal{G}) = P(\xi_i \in A_i \mid \mathcal{G})$ a.s. $\square$

2.4 Exchangeability and empirical measures

A sequence $\xi = \{\xi_i : i \in \mathbb{N}\}$ of $S$-valued random variables is exchangeable if $(\xi_{k_1}, \dots, \xi_{k_m}) \stackrel{d}{=} (\xi_1, \dots, \xi_m)$ whenever $k_1, \dots, k_m$ are distinct elements of $\mathbb{N}$. A function $\sigma : \mathbb{N} \to \mathbb{N}$ is called a (finite) permutation if $\sigma$ is bijective and there exists $n_0 \in \mathbb{N}$ such that $\sigma(n) = n$ for all $n \ge n_0$. The sequence $\xi$ is exchangeable if and only if $\{\xi_{\sigma(i)}\}$ and $\{\xi_i\}$ have the same finite-dimensional distributions whenever $\sigma$ is a permutation.

By [11, Theorem 1.1], we have that $\xi$ is conditionally i.i.d. if and only if $\xi$ is exchangeable. In that case, the random measure $\mu$ in (2.2) is a.s. unique, $\sigma(\xi)$-measurable, and satisfies $\mu_n(B) \to \mu(B)$ a.s., for all $B \in \mathcal{S}$, where
$$\mu_n = \frac{1}{n} \sum_{i=1}^{n} \delta_{\xi_i}$$
are the empirical measures (see [11, Proposition 1.4]). In Proposition 2.4 below, we show that $\mu_n \to \mu$ a.s. in $M_1$ under the Prohorov metric.

Lemma 2.3. Let $T$ be a convex subset of a real vector space. Let $\rho$ be a metric on $T$ such that $(T, \rho)$ is complete and separable, and for all $n \in \mathbb{N}$, the function $y \mapsto n^{-1} \sum_{i=1}^{n} y_i$ is continuous from $T^n$ to $T$. Let $g : S \to T$ be a continuous function, and fix $y_0 \in T$. Define $\varphi_n : S^\infty \to T$ by
$$\varphi_n(x) = n^{-1} \sum_{i=1}^{n} g(x_i).$$
Define $B = \{x \in S^\infty : \lim_{n \to \infty} \varphi_n(x) \text{ exists}\}$, and define $\lambda : S^\infty \to T$ by
$$\lambda(x) = \begin{cases} \lim_{n \to \infty} \varphi_n(x) & \text{if } x \in B, \\ y_0 & \text{if } x \notin B. \end{cases}$$
Then $\varphi_n$ is continuous, $B \in \mathcal{S}^\infty$, and $\lambda$ is $(\mathcal{S}^\infty, \mathcal{B}(T))$-measurable.

Proof. Define $\Phi_n : T^n \to T$ by $\Phi_n(y) = n^{-1} \sum_{i=1}^{n} y_i$, so that $\Phi_n$ is continuous. Let $\pi_n : S^\infty \to S^n$ be the projection mapping $x \mapsto (x_1, \dots, x_n)$, so that $\pi_n$ is continuous. Define $g_n : S^n \to T^n$ by $g_n(x) = (g(x_1), \dots, g(x_n))$. Note that $\varphi_n = \Phi_n \circ g_n \circ \pi_n$, which shows that $\varphi_n$ is continuous. Since $(T, \rho)$ is complete and separable, and each $\varphi_n$ is measurable, it follows that $B$ is measurable. Thus, for every $n$, the function $x \mapsto \varphi_n(x) \mathbf{1}_B + y_0 \mathbf{1}_{B^c}$ is $(\mathcal{S}^\infty, \mathcal{B}(T))$-measurable.
Since $(T, \rho)$ is separable and $\lambda$ is the pointwise limit of these functions, it follows that $\lambda$ is $(\mathcal{S}^\infty, \mathcal{B}(T))$-measurable. $\square$

Proposition 2.4. Suppose $\xi$ is exchangeable. Let $\mu_n$ be the empirical measures and $\mu$ the a.s. unique measure in (2.2). Then, with probability one, $\mu_n \to \mu$ weakly.

Proof. We apply Lemma 2.3 with $T = M_1$, $\rho = \pi$ the Prohorov metric, $g(x) = \delta_x$, and $y_0$ arbitrary. In this case, by [4, Theorem 11.4.1], given any $m \in M_1$, we have $m^\infty(B) = 1$ and
$$m^\infty(\{x \in S^\infty : \lambda(x) = m\}) = m^\infty(\lambda^{-1}(m)) = 1.$$
In other words, the empirical distributions of an i.i.d. sequence with distribution $m$ converge weakly to $m$ a.s. Note that $\mu_n = \varphi_n(\xi)$ and
$$P(\xi \in B) = E[P(\xi \in B \mid \mu)] = E[\mu^\infty(B)] = 1,$$
since $\mu^\infty(B) \equiv 1$. Hence, $\mu_n \to \lambda(\xi)$ a.s., so it suffices to prove that $\lambda(\xi) = \mu$ a.s. For this, let $G = \{(x, m) \in S^\infty \times M_1 : \lambda(x) = m\}$ be the graph of $\lambda$. Since $\mathcal{M}_1 = \mathcal{B}(M_1)$, we have that $\lambda$ is $(\mathcal{S}^\infty, \mathcal{M}_1)$-measurable, which implies $G \in \mathcal{S}^\infty \otimes \mathcal{M}_1$. Thus, by [10, Theorem 5.4],
$$P(\lambda(\xi) = \mu) = E[P((\xi, \mu) \in G \mid \mu)] = E \int_S \mathbf{1}_G(x, \mu)\, \mu^\infty(\cdot, dx) = E \int_S \mathbf{1}_{\lambda^{-1}(\mu)}(x)\, \mu^\infty(\cdot, dx) = E[\mu^\infty(\lambda^{-1}(\mu))] = 1,$$
since $\mu^\infty(\lambda^{-1}(\mu)) \equiv 1$. $\square$

2.5 Conditional law of large numbers

We can use the preceding results to prove the following conditional version of the law of large numbers.

Proposition 2.5. Let $(T, \mathcal{T})$ be a measurable space. Suppose $Y$ is a $T$-valued random variable, $\xi$ is conditionally i.i.d. given $\mathcal{G}$, and $f : T \times S \to \mathbb{R}$ is a measurable function such that $E|f(Y, \xi_1)| < \infty$. If $Y \in \mathcal{G}$ and $Z$ is any version of $E[f(Y, \xi_1) \mid \mathcal{G}]$, then
$$P\left( \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} f(Y, \xi_i) = Z \,\Big|\, \mathcal{G} \right) = 1 \text{ a.s.} \tag{2.3}$$
In particular,
$$\frac{1}{n} \sum_{i=1}^{n} f(Y, \xi_i) \to E[f(Y, \xi_1) \mid \mathcal{G}] \text{ a.s.} \tag{2.4}$$

Proof. Taking expectations in (2.3) yields (2.4). It thus suffices to prove (2.3). We first prove (2.3) under the assumption that $S = \mathbb{R}$ with the Euclidean metric and $f(t, x) = x$. We begin by applying Lemma 2.3 with $T = \mathbb{R}$, $\rho$ the Euclidean metric, $g(x) = x$, and $y_0 = 0$, obtaining $\varphi_n$, $B$, and $\lambda$. Suppose $m \in M_1(\mathbb{R})$ and $\int_{\mathbb{R}} |x|\, m(dx) < \infty$. Let $\overline{x} = \int_{\mathbb{R}} x\, m(dx)$. By the law of large numbers, $m^\infty(B) = 1$ and
$$m^\infty(\{x \in \mathbb{R}^\infty : \lambda(x) = \overline{x}\}) = m^\infty(\lambda^{-1}(\overline{x})) = 1.$$
Since $\xi$ is conditionally i.i.d., there exists a random probability measure $\mu$ such that $\xi \mid \mathcal{G} \sim \mu^\infty$. Thus,
$$E \int_{\mathbb{R}} |x|\, \mu(dx) = E \int_{\mathbb{R}^\infty} |\pi_1(x)|\, \mu^\infty(dx) = E[E[|\xi_1| \mid \mathcal{G}]] = E|\xi_1| < \infty.$$
Hence, there exists $\Omega^* \in \mathcal{F}$ such that $P(\Omega^*) = 1$ and $\int_{\mathbb{R}} |x|\, \mu(\omega, dx) < \infty$ for all $\omega \in \Omega^*$. For any such $\omega$, define $\overline{x}_\omega = \int_{\mathbb{R}} x\, \mu(\omega, dx)$. As above, we then have $\mu^\infty(\omega, B) = 1$ and $\mu^\infty(\omega, \lambda^{-1}(\overline{x}_\omega)) = 1$. Since $Z = E[\xi_1 \mid \mathcal{G}] = \int_{\mathbb{R}} x\, \mu(dx)$ a.s., we may assume that $Z(\omega) = \overline{x}_\omega$ for all $\omega \in \Omega^*$.

Now, note that $n^{-1} \sum_{i=1}^{n} \xi_i = \varphi_n(\xi) \to Z$ if and only if $\xi \in B$ and $\lambda(\xi) = Z$. Thus,
$$P\left( \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} \xi_i = Z \,\Big|\, \mathcal{G} \right) = E[\mathbf{1}_B(\xi) \mathbf{1}_G(\xi, Z) \mid \mathcal{G}],$$
where $G = \{(x, y) : \lambda(x) = y\}$ is the graph of $\lambda$. By [10, Theorem 5.4], since $Z \in \mathcal{G}$, this is equal to $\int_{\mathbb{R}} \mathbf{1}_B(x) \mathbf{1}_G(x, Z)\, \mu^\infty(dx)$. Thus, for any $\omega \in \Omega^*$, we have
$$P\left( \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} \xi_i = Z \,\Big|\, \mathcal{G} \right)(\omega) = \int_{\mathbb{R}} \mathbf{1}_B(x) \mathbf{1}_{G}(x, \overline{x}_\omega)\, \mu^\infty(\omega, dx) = \int_B \mathbf{1}_{\lambda^{-1}(\overline{x}_\omega)}(x)\, \mu^\infty(\omega, dx) = \mu^\infty(\omega, B \cap \lambda^{-1}(\overline{x}_\omega)) = 1,$$
and this proves (2.3) under the assumption that $S = \mathbb{R}$ with the Euclidean metric and $f(t, x) = x$.

To prove the general result, let $\xi_i' = f(Y, \xi_i)$. By [10, Theorem 5.4],
$$P\left( \bigcap_{i=1}^{n} \{\xi_i' \in B_i\} \,\Big|\, \mathcal{G} \right) = \int_{S^n} \prod_{i=1}^{n} \mathbf{1}_{B_i}(f(Y, x_i))\, \nu^n(dx) = \prod_{i=1}^{n} \int_S \mathbf{1}_{B_i}(f(Y, x_i))\, \nu(dx_i).$$
Hence, $\{\xi_i'\}$ are conditionally i.i.d. given $\mathcal{G}$. The result therefore follows by applying the first part of the proof to $\{\xi_i'\}$. $\square$
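The conditional law of large numbers can be seen in a simulation of a conditionally i.i.d. sequence: along each path, the running averages converge to the path's own conditional mean rather than to the unconditional mean. The mixture below (a random center taking values $\pm 5$) is an arbitrary illustrative choice, not an example from the paper.

```python
import random

random.seed(3)

def exchangeable_path(n):
    """One path of a conditionally i.i.d. (hence exchangeable) sequence:
    first draw the random 'directing' distribution -- here N(theta, 1)
    with a random center theta -- then draw n i.i.d. observations from it."""
    theta = random.choice([-5.0, 5.0])
    return theta, [random.gauss(theta, 1) for _ in range(n)]

theta, xs = exchangeable_path(20000)
avg = sum(xs) / len(xs)
# The average settles at this path's theta (the conditional expectation),
# not at the unconditional mean E[xi_1] = 0.
print(theta, round(avg, 3))
```

This is Proposition 2.5 with $f(t, x) = x$: the limit of the averages is the $\mathcal{G}$-measurable random variable $Z = E[\xi_1 \mid \mathcal{G}]$, which here equals $\theta$.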
3 Bayesian inference for row exchangeable arrays In this section, we prove our main results, which include an Aldous-Hoover type representation (Theorem 3.1) and the proof of Theorem 1.2 concerning Bayesian inference. 8 3.1 An Aldous-Hoover type representation theorem Theorem (3.1) below is the analogue of Theorem 1.1 for row exchangeable arrays. It characterizes row exchangeable arrays, and provides a representation for them in terms of i.i.d. uniform random variables. Theorem 3.1. The array ξis row exchangeable if and only if there exists a measurable function f:R3→Sand an i.i.d. collection of random variables {α, β i, λij:i, j∈N}, uniformly distributed on (0,1), such that the array {f(α, β i, λij)}has the same finite- dimensional distributions as ξ. Proof. The if direction is trivial. For the only if direction, suppose ξis row exchangeable. Note that row exchangeability implies separate exchangeability, so that by Theorem 1.1, there exists a measurable function g:R4→Sand an i.i.d. collection of random variables {α, β i, ηj, λij:i, j∈N}, uniformly distributed on (0 ,1), such that the array {g(α, β i, ηj, λij)} has the same finite-dimensional
https://arxiv.org/abs/2504.21584v1
distributions as $\xi$. For $n \in \mathbb{N}$, define $d_n: (0,1) \to \{0,1\}$ by $d_n(x) = \lfloor 2^n x \rfloor - 2\lfloor 2^{n-1} x \rfloor$, so that $d_n(x)$ is the $n$-th digit of the canonical binary expansion of $x$. Define $\varphi: (0,1) \to (0,1)^2$ by
\[
\varphi(x) = \Big( \sum_{n=1}^\infty 2^{-n} d_{2n-1}(x),\; \sum_{n=1}^\infty 2^{-n} d_{2n}(x) \Big).
\]
Note that $\varphi$ is measurable, and that whenever $U$ is uniform on $(0,1)$, it follows that $\varphi_1(U)$ and $\varphi_2(U)$ are independent and also uniform on $(0,1)$.

Define $f: \mathbb{R}^3 \to S$ by $f(a,b,z) = g(a, b, \varphi_1(z), \varphi_2(z))$, and let $\gamma_{ij} = \varphi_1(\lambda_{ij})$ and $\zeta_{ij} = \varphi_2(\lambda_{ij})$, so that $f(\alpha, \beta_i, \lambda_{ij}) = g(\alpha, \beta_i, \gamma_{ij}, \zeta_{ij})$. It now suffices to show that $\{g(\alpha, \beta_i, \gamma_{ij}, \zeta_{ij})\}$ and $\{g(\alpha, \beta_i, \eta_j, \lambda_{ij})\}$ have the same finite-dimensional distributions.

Fix $m,n \in \mathbb{N}$ and let $A_{ij} \in \mathcal{S}$ for $i \le m$ and $j \le n$. For each $i \in \mathbb{N}$, choose a permutation $\tau_i$ that maps $\{1,\dots,n\}$ to $\{in+1,\dots,in+n\}$. Since $\xi$ is row exchangeable, we have
\[
P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{g(\alpha,\beta_i,\eta_j,\lambda_{ij}) \in A_{ij}\}\Big) = P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{\xi_{ij} \in A_{ij}\}\Big) = P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{\xi_{i,\tau_i(j)} \in A_{ij}\}\Big) = P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{g(\alpha,\beta_i,\eta_{\tau_i(j)},\lambda_{i,\tau_i(j)}) \in A_{ij}\}\Big).
\]
Since the mapping $(i,j) \mapsto \tau_i(j)$ from $\{1,\dots,m\} \times \{1,\dots,n\}$ to $\mathbb{N}$ is injective, it follows that $\{\alpha, \beta_i, \eta_{\tau_i(j)}, \lambda_{i,\tau_i(j)} : i \le m, j \le n\}$ is an i.i.d. collection of random variables. In particular, $\{\alpha, \beta_i, \eta_{\tau_i(j)}, \lambda_{i,\tau_i(j)} : i \le m, j \le n\}$ and $\{\alpha, \beta_i, \gamma_{ij}, \zeta_{ij} : i \le m, j \le n\}$ have the same distribution. Thus,
\[
P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{g(\alpha,\beta_i,\eta_{\tau_i(j)},\lambda_{i,\tau_i(j)}) \in A_{ij}\}\Big) = P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{g(\alpha,\beta_i,\gamma_{ij},\zeta_{ij}) \in A_{ij}\}\Big).
\]
Combined with the above, this shows that $\{g(\alpha,\beta_i,\gamma_{ij},\zeta_{ij})\}$ and $\{g(\alpha,\beta_i,\eta_j,\lambda_{ij})\}$ have the same finite-dimensional distributions.

3.2 A de Finetti theorem for row exchangeability

Theorem 3.2 below is a de Finetti type theorem for row exchangeable arrays. It shows that the law of a row exchangeable array, $\xi$, is governed by its row distributions, $\mu_i$, and its row distribution generator, $\varpi$.

Theorem 3.2. Suppose $\xi$ is a row exchangeable, infinite array of $S$-valued random variables. Then there exist an a.s. unique random probability measure $\varpi$ on $M_1$ and an a.s. unique sequence $\mu = \{\mu_i : i \in \mathbb{N}\}$ of random probability measures on $S$ such that $\mu \mid \varpi \sim \varpi^\infty$ and $\xi_i \mid \mu_i \sim \mu_i^\infty$ for all $i \in \mathbb{N}$.

Proof. Fix $i \in \mathbb{N}$.
Since $\xi_i$ is exchangeable, Proposition 2.4 implies that there exists an a.s. unique random probability measure $\mu_i$ on $S$ such that, with probability one, $n^{-1}\sum_{j=1}^n \delta_{\xi_{ij}} \to \mu_i$ weakly as $n \to \infty$, and $\xi_i \mid \mu_i \sim \mu_i^\infty$. According to the proof of Proposition 2.4, we may take $\mu_i = \lambda(\xi_i)$, where $\lambda: S^\infty \to M_1$ is measurable and does not depend on $i$. Let $\mu = \{\mu_i\}_{i=1}^\infty$. Since $\mu_i = \lambda(\xi_i)$ and the sequence $\{\xi_i\}_{i=1}^\infty$ is exchangeable, it follows that $\{\mu_i\}_{i=1}^\infty$ is an exchangeable sequence of $M_1$-valued random variables. It now follows from Proposition 2.4 that there exists an a.s. unique random probability measure $\varpi$ on $M_1$ such that, with probability one, $n^{-1}\sum_{i=1}^n \delta_{\mu_i} \to \varpi$ weakly as $n \to \infty$, and $\mu \mid \varpi \sim \varpi^\infty$.

The relation $\xi_i \mid \mu_i \sim \mu_i^\infty$ in Theorem 3.2 concerns only one fixed row at a time. Theorem 3.3 below generalizes this to multiple rows. It states that $\varpi$ and $X_{mn}$ are independent given $\mu_m$, and gives an explicit expression for the law of $X_{mn}$ given $\mu_m$.

Theorem 3.3. If $\xi$ is row exchangeable, then
\[
\mathcal{L}(X_{mn} \mid \varpi, \mu_m) = \mathcal{L}(X_{mn} \mid \mu_m) = \prod_{i=1}^m \mu_i^n \quad \text{a.s.},
\]
for all $m,n \in \mathbb{N}$.

Proof. Fix $m,n \in \mathbb{N}$. For $i \le m$ and $j \le n$, let $A_{ij} \in \mathcal{S}$. According to the proof of Proposition 2.4, we
can write $\varpi$ as a measurable function of $\mu$, so that $\varpi \in \sigma(\mu)$. Thus,
\[
\prod_{i=1}^m \mu_i^n \in \sigma(\mu_m) \subseteq \sigma(\varpi, \mu_m) \subseteq \sigma(\varpi, \mu) = \sigma(\mu).
\]
It therefore suffices to show that
\[
P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{\xi_{ij} \in A_{ij}\} \,\Big|\, \mu\Big) = \prod_{i=1}^m \prod_{j=1}^n \mu_i(A_{ij}) \quad \text{a.s.} \quad (3.1)
\]
By Theorem 3.1, we may assume that $\xi_{ij} = f(\alpha, \beta_i, \lambda_{ij})$, where $f: \mathbb{R}^3 \to S$ is measurable and $\{\alpha, \beta_i, \lambda_{ij} : i,j \in \mathbb{N}\}$ is an i.i.d. collection of random variables, uniform on $(0,1)$.

Fix $i \le m$ and $B \in \mathcal{S}$. As noted in Section 2.4, [11, Proposition 1.4] implies that $n^{-1}\sum_{j=1}^n \delta_{\xi_{ij}}(B) \to \mu_i(B)$ a.s. On the other hand, $\delta_{\xi_{ij}}(B) = h_B(\alpha, \beta_i, \lambda_{ij})$, where $h_B = 1_B \circ f$. Since $h_B$ is bounded and $\{h_B(\alpha, \beta_i, \lambda_{ij})\}_{j=1}^\infty$ is conditionally i.i.d. given $\alpha$ and $\beta_i$, Proposition 2.5 implies
\[
\frac{1}{n}\sum_{j=1}^n \delta_{\xi_{ij}}(B) = \frac{1}{n}\sum_{j=1}^n h_B(\alpha, \beta_i, \lambda_{ij}) \to E[h_B(\alpha, \beta_i, \lambda_{i1}) \mid \alpha, \beta_i] = \int_0^1 h_B(\alpha, \beta_i, x)\, dx \quad \text{a.s.}
\]
Thus,
\[
\mu_i(B) = \int_0^1 h_B(\alpha, \beta_i, x)\, dx \quad \text{a.s.}, \quad (3.2)
\]
which implies that the random variable $\mu_i(B)$ is measurable with respect to $\overline{\sigma(\alpha, \beta_i)}$, the completion of $\sigma(\alpha, \beta_i)$ with respect to $P$. Since this is true for every $B \in \mathcal{S}$, it follows that $\mu_i \in \overline{\sigma(\alpha, \beta_i)}$. Thus, we may choose $\nu_i \in \sigma(\alpha, \beta_i)$ such that $\mu_i = \nu_i$ a.s. Letting $\nu = \{\nu_i\}_{i=1}^\infty$ and $\beta = \{\beta_i\}_{i=1}^\infty$, and noting that $\nu \in \sigma(\alpha, \beta)$ and $\mu = \nu$ a.s., we have
\[
P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{\xi_{ij} \in A_{ij}\} \,\Big|\, \mu\Big) = P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{\xi_{ij} \in A_{ij}\} \,\Big|\, \nu\Big) = E\Big[ P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{\xi_{ij} \in A_{ij}\} \,\Big|\, \alpha, \beta\Big) \,\Big|\, \nu\Big] = E\Big[ P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{\xi_{ij} \in A_{ij}\} \,\Big|\, \alpha, \beta\Big) \,\Big|\, \mu\Big] \quad \text{a.s.}
\]
Since $\xi_{ij} = f(\alpha, \beta_i, \lambda_{ij})$, it follows that $\{\xi_{ij}\}$ is conditionally i.i.d. given $\alpha, \beta$. By Corollary 2.2,
\[
P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{\xi_{ij} \in A_{ij}\} \,\Big|\, \alpha, \beta\Big) = \prod_{i=1}^m \prod_{j=1}^n P(\xi_{ij} \in A_{ij} \mid \alpha, \beta) \quad \text{a.s.}
\]
By (3.2),
\[
P(\xi_{ij} \in A_{ij} \mid \alpha, \beta) = E[h_{A_{ij}}(\alpha, \beta_i, \lambda_{ij}) \mid \alpha, \beta] = \int_0^1 h_{A_{ij}}(\alpha, \beta_i, x)\, dx = \mu_i(A_{ij}) \quad \text{a.s.}
\]
Combining the last three displays gives
\[
P\Big(\bigcap_{i=1}^m \bigcap_{j=1}^n \{\xi_{ij} \in A_{ij}\} \,\Big|\, \mu\Big) = E\Big[\prod_{i=1}^m \prod_{j=1}^n \mu_i(A_{ij}) \,\Big|\, \mu\Big] = \prod_{i=1}^m \prod_{j=1}^n \mu_i(A_{ij}) \quad \text{a.s.},
\]
establishing (3.1) and finishing the proof.

3.3 Computing the posterior distribution

In this section, we prove Theorem 1.2. The proof relies on Theorems 3.4 and 3.5 below. The former states that the conditional law of the array $\{\xi_{ij} : i,j \in \mathbb{N}\}$ is entirely determined by the conditional law of the row distributions $\{\mu_i : i \in \mathbb{N}\}$.
The latter gives a certain conditional independence property. Namely, the conditional law of the rows above a given $m$ depends only on the row distributions $\mu_1, \dots, \mu_m$, and not on the observed data $\{\xi_{ij} : i \le m, j \in \mathbb{N}\}$.

Theorem 3.4. Let $\xi$ be row exchangeable. Fix $M \in \mathbb{N}$. For each $i \in \{1, \dots, M\}$, let $N_i, O_i \in \mathbb{N}$ with $N_i < O_i$, and let $A_{ij} \in \mathcal{S}$ for $i \le M$ and $N_i < j \le O_i$. Let $\mathcal{G}$ be a sub-$\sigma$-algebra of $\sigma(X_{1N_1}, X_{2N_2}, \dots, X_{MN_M})$. Then
\[
P\Big(\bigcap_{i=1}^M \bigcap_{j=N_i+1}^{O_i} \{\xi_{ij} \in A_{ij}\} \,\Big|\, \mathcal{G}\Big) = E\Big[\prod_{i=1}^M \prod_{j=N_i+1}^{O_i} \mu_i(A_{ij}) \,\Big|\, \mathcal{G}\Big]. \quad (3.3)
\]
Proof. Let $A \in \mathcal{G}$ have the form $A = \bigcap_{i=1}^M \bigcap_{j=1}^{N_i} \{\xi_{ij} \in A_{ij}\}$. Let $Z$ denote the left-hand side of (3.3). Then
\[
E[Z 1_A] = E\Big[P\Big(A \cap \bigcap_{i=1}^M \bigcap_{j=N_i+1}^{O_i} \{\xi_{ij} \in A_{ij}\} \,\Big|\, \mathcal{G}\Big)\Big] = P\Big(\bigcap_{i=1}^M \bigcap_{j=1}^{O_i} \{\xi_{ij} \in A_{ij}\}\Big) = E\Big[P\Big(\bigcap_{i=1}^M \bigcap_{j=1}^{O_i} \{\xi_{ij} \in A_{ij}\} \,\Big|\, \mu_M\Big)\Big] = E\Big[\prod_{i=1}^M \prod_{j=1}^{O_i} \mu_i(A_{ij})\Big],
\]
by Theorem 3.3. On the other hand,
\[
E\Big[\Big(\prod_{i=1}^M \prod_{j=N_i+1}^{O_i} \mu_i(A_{ij})\Big) 1_A\Big] = E\Big[E\Big[\Big(\prod_{i=1}^M \prod_{j=N_i+1}^{O_i} \mu_i(A_{ij})\Big) 1_A \,\Big|\, \mu_M\Big]\Big] = E\Big[\Big(\prod_{i=1}^M \prod_{j=N_i+1}^{O_i} \mu_i(A_{ij})\Big) P(A \mid \mu_M)\Big] = E\Big[\prod_{i=1}^M \prod_{j=1}^{O_i} \mu_i(A_{ij})\Big].
\]
Hence, the two displays are equal. By the $\pi$-$\lambda$ theorem, they are equal for all $A \in \mathcal{G}$,
and this proves the claim.

Theorem 3.5. Let $\xi$ be row exchangeable. Fix $m, M, N \in \mathbb{N}$ with $m < M$. Then $X_{mN}$ and $\{(\mu_j, X_{jN})\}_{j=m+1}^M$ are conditionally independent given $\mu_m$.

Proof. Let $A_{ij} \in \mathcal{S}$ for $i \le M$ and $j \le N$, and let $B_i \in \mathcal{M}_1$ for $m < i \le M$. We adopt the shorthand $A_i = A_{i1} \times \dots \times A_{iN}$. By Theorem 3.3,
\[
P(\mu_{m+1} \in B_{m+1}, \dots, \mu_M \in B_M, X_{1N} \in A_1, \dots, X_{MN} \in A_M \mid \mu_M) = \prod_{i=m+1}^M 1_{B_i}(\mu_i) \prod_{i=1}^M \mu_i^N(A_i).
\]
Hence,
\[
P(\mu_{m+1} \in B_{m+1}, \dots, \mu_M \in B_M, X_{1N} \in A_1, \dots, X_{MN} \in A_M \mid \mu_m) = E\Big[\prod_{i=m+1}^M 1_{B_i}(\mu_i) \prod_{i=1}^M \mu_i^N(A_i) \,\Big|\, \mu_m\Big] = \prod_{i=1}^m \mu_i^N(A_i)\, E\Big[\prod_{i=m+1}^M 1_{B_i}(\mu_i)\, \mu_i^N(A_i) \,\Big|\, \mu_m\Big].
\]
Taking $A_{ij} = S$ for all $i \le m$ gives
\[
P(\mu_{m+1} \in B_{m+1}, \dots, \mu_M \in B_M, X_{m+1,N} \in A_{m+1}, \dots, X_{MN} \in A_M \mid \mu_m) = E\Big[\prod_{i=m+1}^M 1_{B_i}(\mu_i)\, \mu_i^N(A_i) \,\Big|\, \mu_m\Big].
\]
Since $P(X_{1N} \in A_1, \dots, X_{mN} \in A_m \mid \mu_m) = \prod_{i=1}^m \mu_i^N(A_i)$, this completes the proof.

Proof of Theorem 1.2. According to Theorem 3.4 with $N_i = N$, $O_i = N'$, and $\mathcal{G} = \sigma(X_{MN})$, we can compute the posterior distribution (1.1), provided we can compute $\mathcal{L}(\mu_M \mid X_{MN})$. Note that
\[
\mathcal{L}(\mu_M \mid X_{MN}; d\nu) = \mathcal{L}(\mu_2, \dots, \mu_M \mid X_{MN}, \mu_1 = \nu_1; d\nu_2 \cdots d\nu_M)\, \mathcal{L}(\mu_1 \mid X_{MN}; d\nu_1).
\]
Iterating this, we see that we can determine $\mathcal{L}(\mu_M \mid X_{MN})$ if we know
\[
\mathcal{L}(\mu_m \mid X_{MN}, \mu_{m-1}), \quad 1 \le m \le M. \quad (3.4)
\]
Theorem 3.5 shows that in (3.4), the first $m-1$ rows of $X_{MN}$ can be omitted. Hence, the conditional distribution (1.1) is entirely determined by the distributions given in (1.2).

References

[1] David J. Aldous. Representations for partially exchangeable arrays of random variables. J. Multivariate Anal., 11(4):581–598, 1981.
[2] David J. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII—1983, volume 1117 of Lecture Notes in Math., pages 1–198. Springer, Berlin, 1985.
[3] Evan Donald and Jason Swanson. The iterated Dirichlet process and applications to Bayesian inference. Preprint, 2025.
[4] Richard M. Dudley. Real analysis and probability. The Wadsworth & Brooks/Cole Mathematics Series. Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, CA, 1989.
[5] Thomas S. Ferguson. A Bayesian analysis of some nonparametric problems. Ann. Statist., 1:209–230, 1973.
[6] Marie Gaudard and Donald Hadwin. Sigma-algebras on spaces of probability measures. Scand. J. Statist., 16(2):169–175, 1989.
[7] D. N. Hoover. Relations on probability spaces and arrays of random variables. Preprint, Princeton Institute for Advanced Study, 1979.
[8] D. N. Hoover. Row-column exchangeability and a generalized model for probability. In Exchangeability in probability and statistics (Rome, 1981), pages 281–291. North-Holland, Amsterdam-New York, 1982.
[9] Olav Kallenberg. On the representation theorem for exchangeable arrays. J. Multivariate Anal., 30(1):137–154, 1989.
[10] Olav Kallenberg. Foundations of modern probability. Probability and its Applications (New York). Springer-Verlag, New York, 1997.
[11] Olav Kallenberg. Probabilistic symmetries and invariance principles. Probability and its Applications (New York). Springer, New York, 2005.
Convergence rate for Nearest Neighbour matching: geometry of the domain and higher-order regularity

Simon Viel*, Lionel Truquet†, Ikko Yamane‡

May 1, 2025

Abstract

Estimating some mathematical expectations from partially observed data, and in particular missing outcomes, is a central problem encountered in numerous fields such as transfer learning, counterfactual analysis, or causal inference. Matching estimators, estimators based on k-nearest neighbours, are widely used in this context. It is known that the variance of such estimators can converge to zero at a parametric rate, but their bias can have a slower rate when the dimension of the covariates is larger than 2. This makes the analysis of this bias particularly important. In this paper, we provide higher-order properties of the bias. In contrast to the existing literature related to this problem, we do not assume that the support of the target distribution of the covariates is strictly included in that of the source, and we analyse two geometric conditions on the support that avoid such boundary bias problems. We show that these conditions are much more general than the usual convex support assumption, leading to an improvement of existing results. Furthermore, we show that the matching estimator studied by Abadie and Imbens (2006) for the average treatment effect can be asymptotically efficient when the dimension of the covariates is less than 4, a result previously known only in dimension 1.

Index Terms: k-nearest neighbour estimators, transfer learning, average treatment effect, boundary bias

I. INTRODUCTION

Estimating the expectation of a function of variables from a sample with some missing observations has attracted much effort in the statistical literature, and k-nearest neighbours (k-NN) estimators are often useful in this context. In many interesting situations, these expectations concern pairs of random variables, covariates and a label, for which some labels are missing or partially missing in the data set.
*Univ Rennes, Ensai, CNRS, CREST – UMR 9194, F-35000 Rennes, France (email: simon.viel@ensai.fr).
†Univ Rennes, Ensai, CNRS, CREST – UMR 9194, F-35000 Rennes, France (email: lionel.truquet@ensai.fr).
‡Univ Rennes, Ensai, CNRS, CREST – UMR 9194, F-35000 Rennes, France (email: ikko.yamane@ensai.fr).

arXiv:2504.21633v1 [math.ST] 30 Apr 2025

In the field of transfer learning or domain adaptation, the so-called covariate shift [Shimodaira, 2000] arises in many real-world situations such as natural language recognition [Bickel and Scheffer, 2006]; [Jiang and Zhai, 2007], brain-computer interfaces [Sugiyama et al., 2007]; [Li et al., 2010], pharmaceutics [Klarner et al., 2023], and econometrics [Heckman, 1979]. In a simple but versatile formulation, a central task common in these applications is to estimate the expectation of a random variable of the form $Z^* := h(X^*, Y^*)$ using only a sample of the unlabelled covariates $X^*$ and another training sample of the labelled pair $(X, Y)$, for which the two conditional distributions of $Y \mid X$ and $Y^* \mid X^*$ are identical while the covariates have different probability distributions, i.e., $P_X \neq P_{X^*}$. An important case arises in supervised learning when $h(x,y) := l(f(x), y)$ for a hypothesis function $f$ and a loss function $l$. In this case, our problem becomes risk estimation, which is central for empirical risk minimisation. When $P \equiv P_X$ and $Q \equiv P_{X^*}$ are absolutely continuous with respect to a
https://arxiv.org/abs/2504.21633v1
reference measure, an expectation of $Z^*$ under $Q$ can be written as an expectation of $Z := h(X, Y)$ under $P$ weighted by the density ratio $q/p$, where $p$ and $q$ denote respectively the densities of $P$ and $Q$ with respect to the reference measure. Shimodaira [2000] considered such a reweighting for parametric maximum likelihood estimation in a misspecified case, and some classical references discuss the use of linear-in-parameter models or the Reproducing Kernel Hilbert Space (RKHS) framework to estimate $q/p$. See for example Gretton et al. [2009], Kanamori et al. [2009], or Sugiyama et al. [2008].

Another important problem is the estimation of treatment effects in biostatistics or econometrics. In these domains, a sample of triplets of random variables $(X, Y, W) \in \mathbb{R}^d \times \mathbb{R} \times \{0,1\}$ is observed. Using hidden random variables $Y(0)$ and $Y(1)$ called potential outcomes, the observed response $Y$ is assumed to be either distributed as $Y(1)$ (if $W = 1$, indicating that the corresponding individual is in the treated group) or distributed as $Y(0)$ (if $W = 0$, indicating that the corresponding individual is in the control group). Only one of the two potential outcomes is observed for a given individual. The aim is then to estimate the expectation of the difference $Y(1) - Y(0)$, called the average treatment effect (ATE), conditional or unconditional on the treatment. The use of k-NN estimators has been widely studied in this context. Typically, the unobserved response $Y(1)$ or $Y(0)$ is replaced by an average of the outcomes corresponding to the k-NN covariate vectors in the other group. See for example Abadie and Imbens (2006, 2008, 2011, 2012), or Lin et al. [2023]. When $k$ is moderate, these estimators are often referred to as matching estimators. This k-NN approach can be naturally translated to covariate shift adaptation and interpreted as density ratio estimators. This connection between matching and density ratio estimation has recently been discussed in Lin et al. [2023].
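The density-ratio reweighting identity $E_Q[g(X)] = E_P[g(X)\, q(X)/p(X)]$ mentioned above can be sketched numerically. The setup below is a toy assumption for illustration, not the paper's data: $P$ is Uniform$(0,1)$ (so $p \equiv 1$), $Q$ is Beta$(2,2)$ (so $q(x) = 6x(1-x)$), and $h(x,y) = y$ with regression $g(x) = x^2$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch of the reweighting identity E_Q[g(X)] = E_P[g(X) q(X)/p(X)].
# Toy choices (an assumption, not from the paper):
#   P = Uniform(0,1) with density p = 1, Q = Beta(2,2) with q(x) = 6 x (1 - x).
n = 200_000
X = rng.random(n)                        # covariates drawn from the source P
Y = X**2 + 0.1 * rng.normal(size=n)      # labels with regression g(x) = x^2
w = 6.0 * X * (1.0 - X)                  # known density ratio q(X)/p(X)

est = np.mean(Y * w)                     # importance-weighted estimate of E_Q[g(X)]
# Target value: E_Q[X^2] = Var + mean^2 = 1/20 + 1/4 = 0.3
assert abs(est - 0.3) < 0.01
```

Here the ratio $q/p$ is known in closed form; the estimators discussed in the cited references are needed precisely when it must be estimated from the two samples.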
In a series of papers addressing the estimation of average treatment effects, a problem related to ours, Abadie and Imbens (2006, 2008, 2011, 2012) introduced nearest-neighbour matching estimators and established several asymptotic results. Lin et al. [2023] further studied the properties of their matching estimators and showed, by letting the number of neighbours $k$ grow with the sample size under a suitable relation, that the normalised matching estimator is consistent for the estimation of the density ratio. Although we do not restrict ourselves to $k$ following this relation (for instance, we allow the parameter $k$ to remain constant), and therefore lose the density ratio consistency result of Lin et al. [2023], we show in this paper that the final risk estimator using NN-matching is still consistent. Recently, k-NN estimators going beyond the density ratio estimation approach have been proposed in the context of covariate shift adaptation [Loog, 2012, Portier et al., 2024, Holzmann and Meister, 2024]. The k-NN method that appeared in Loog [2012] does not produce an estimate of the density ratio, but it was shown to have properties similar to the matching estimators [Portier et al., 2024]. Holzmann and Meister [2024] extended the k-NN matching estimators by introducing weights that can incorporate higher-order smoothness. Although our notation focuses mainly on covariate shift adaptation,
both problems can be described in a similar framework. To estimate the expectation $e(h) := E[h(X^*, Y^*)]$, we observe a sample $(X_i, Y_i)$, $1 \le i \le n$, of i.i.d. random vectors taking values in $\mathbb{R}^d \times \mathbb{R}$, as well as an additional sample of unlabelled i.i.d. random variables $X^*_j$, $1 \le j \le m$. We denote by $\mathcal{X}$ the measurable subset of $\mathbb{R}^d$ on which $X$ and $X^*$ take their values and consider the Euclidean norm $\|\cdot\|$ on $\mathbb{R}^d$. We will discuss some statistical properties of two estimators. The first one, introduced in Portier et al. [2024], produces labels $\hat{Y}^*_j$ which are independent conditionally on the $(X_i, Y_i)$'s and the $X^*_j$'s, with a conditional distribution estimating that of $Y^*_j \mid X^*_j$. More precisely, denote by
\[
\mathcal{F}_{m,n} = \sigma\big(X_i, Y_i, X^*_j;\ 1 \le i \le n,\ 1 \le j \le m\big)
\]
the sigma-algebra generated by those random variables. Their estimator is
\[
\hat{e}_1(h) := \frac{1}{m}\sum_{j=1}^m h\big(X^*_j, \hat{Y}^*_j\big), \qquad P\big(\hat{Y}^*_j = Y_i \mid \mathcal{F}_{m,n}\big) := k^{-1}\,\mathbf{1}\big(\|X_i - X^*_j\| \le \hat{\tau}_k(X^*_j)\big), \quad (1)
\]
where $\hat{\tau}_k(x)$ is the k-nearest-neighbour radius, that is, the $k$th order statistic of the sample $(\|X_i - x\|)_{1 \le i \le n}$, for $x \in \mathcal{X}$ and $1 \le k \le n$. The second, which is similar to the matching estimators used to infer average treatment effects, is defined by
\[
\hat{e}_2(h) := \frac{1}{n}\sum_{i=1}^n \frac{n M^*_k(X_i)}{mk}\, h(X_i, Y_i), \qquad M^*_k(x) := \sum_{j=1}^m \mathbf{1}\big(\|x - X^*_j\| \le \hat{\tau}_k(X^*_j)\big). \quad (2)
\]
The estimator (2) has recently been extended by Holzmann and Meister [2024] to a higher-order local polynomial estimator of the regression function $g: \mathcal{X} \to \mathbb{R}$ defined by
\[
g(x) := E[h(X, Y) \mid X = x] = E[h(X^*, Y^*) \mid X^* = x].
\]
It can be observed that $e(h) = \int_{\mathcal{X}} g(x)\, dQ(x)$, which is simply the expectation of the regression function $g$ with respect to the target probability measure $Q$. Averaging the values of non-parametric estimates of the regression function evaluated at an additional sample of random variables with probability distribution $Q$ is then a natural idea. Note that the estimators (1) and (2) are not very different.
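A minimal numerical sketch of the estimator (2) in dimension $d = 1$ is given below. The data-generating process (uniform source, Beta$(2,2)$ target, regression $g(x) = x^2$) is an assumption chosen for illustration, not the paper's setting; the catchment counts $M^*_k$ and the radii $\hat{\tau}_k$ follow the definitions above.

```python
import numpy as np

rng = np.random.default_rng(2)

def matching_estimator(X, Y, X_star, k=1):
    """Sketch of estimator (2) for d = 1: weight each labelled point by its
    catchment count M*_k(X_i) and average, normalised by m * k."""
    m = len(X_star)
    D = np.abs(X_star[:, None] - X[None, :])   # (m, n) distance matrix
    tau_k = np.sort(D, axis=1)[:, k - 1]       # k-NN radius of each X*_j
    M = (D <= tau_k[:, None]).sum(axis=0)      # catchment counts M*_k(X_i)
    return float((M * Y).sum() / (m * k))

# Toy covariate-shift setup (assumed for illustration):
n = m = 3000
X = rng.random(n)                              # source covariates, Uniform(0,1)
Y = X**2 + 0.1 * rng.normal(size=n)            # labels, regression g(x) = x^2
X_star = rng.beta(2, 2, size=m)                # target covariates, Beta(2,2)

est = matching_estimator(X, Y, X_star, k=1)
# Target quantity: e(h) = E_Q[X^2] = 1/20 + 1/4 = 0.3
assert abs(est - 0.3) < 0.03
```

With $k = 1$ the weights are sparse and the computation reduces to counting, which is the source of the fast implementations mentioned later in the paper.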
The estimator (1) is the empirical mean $m^{-1}\sum_{j=1}^m h(X^*_j, \hat{Y}^*_j)$ with $\hat{Y}^*_j$ following the conditional distribution
\[
\hat{P}_{Y|X}\big(dy \mid X^*_j\big) = \frac{1}{k}\sum_{i=1}^n \mathbf{1}\big(\|X_i - X^*_j\| \le \hat{\tau}_k(X^*_j)\big)\, \delta_{Y_i}(dy),
\]
while the estimator (2) equals $m^{-1}\sum_{j=1}^m \hat{g}_n(X^*_j)$, where $\hat{g}_n(x) = \int z\, \hat{P}_{Z|X}(dz \mid x)$ and
\[
\hat{P}_{Z|X}\big(dz \mid X^*_j\big) = \frac{1}{k}\sum_{i=1}^n \mathbf{1}\big(\|X_i - X^*_j\| \le \hat{\tau}_k(X^*_j)\big)\, \delta_{h(X_i, Y_i)}(dz).
\]
Notice that $\hat{g}_n(x)$ is the standard k-NN nonparametric regression estimator. As we will see, the bias and variance of the estimators (1) and (2) have the same convergence rates under quite similar regularity assumptions. As is now well known in this literature, estimators constructed from an averaging of k-NN estimates of a regression function often have a variance that converges to $0$ at a parametric rate even when $k$ is small, $k = 1$ being a standard choice. The regression function then need not be consistently estimated to obtain this result. This rate for the variance has also been obtained in some references that focus on estimating certain expectations of a regression function in a different context. For example, Devroye et al. [2018] and Devroye et al. [2013] studied the estimation of some mean squared errors in non-parametric estimation using two identically distributed samples, for which an averaging of 1-NN estimators leads to a variance converging to $0$ at a parametric rate. Estimators of some integrals of a probability density, useful for estimating entropies or divergences, have also been studied in Sricharan et al. [2012]
and Singh and Póczos [2016], with a similar convergence rate for the variance. The quality of these matching-type estimators then depends on their bias, the convergence rate of which generally depends on the dimension of the covariate vector. If the regression function $g$ has Lipschitz properties on a bounded domain $\mathcal{X}$, the bias of the estimators (1) or (2) can be bounded by $(k/n)^{1/d}$, up to a constant. However, if $g$ is two times continuously differentiable, one could expect the rate $(k/n)^{2/d}$; but, as we will see, obtaining this rate requires a geometric analysis of the boundary of the domain $\mathcal{X}$ to avoid boundary bias problems. In the literature, this faster bias rate has been obtained by Abadie and Imbens [2006, Theorem 2], or using the local polynomial extension of the estimator (2) recently studied by Holzmann and Meister [2024], but assuming that the support of the measure $Q$ is contained in a compact subset of the interior of a convex domain $\mathcal{X}$, a simple condition that avoids boundary bias problems. In information-theoretic terms, such a condition means that the relative entropy of $P$ with respect to $Q$, measured by the Kullback-Leibler divergence, is automatically infinite, and it excludes two probability measures with the same support. For the study of the standard ATE, the two distributions $P$ and $Q$ of the covariates conditional on treatment or conditional on non-treatment are assumed to be mutually absolutely continuous, and such a condition on the support is not satisfied.

The contribution of this paper is threefold.

1) First, we show that under some suitable conditions on the domain $\mathcal{X}$ and smoothness conditions, the estimators (1) and (2) have a bias of order $(k/n)^{2/d}$. This shows that the parameter $e(h)$ can be estimated at a parametric rate when $d \le 4$, using only $k = 1$, a choice that leads to fast computation for the numerical implementation of these estimators.
For the ATE, we show that the matching estimators studied in Abadie and Imbens [2006] can be efficient in a semi-parametric sense when $d \le 3$. We thereby obtain an extension of Corollary 4.1 in Lin et al. [2023], which was only stated for the univariate case $d = 1$.

2) We analyse two geometric conditions that avoid the boundary bias problem. These conditions are generally satisfied if the boundary of the domain $\mathcal{X}$ is sufficiently smooth, or for a finite union of well-shaped subsets of $\mathbb{R}^d$ such as convex sets.

3) We show that such geometric conditions are also sufficient to avoid the boundary bias problem of the k-NN local polynomial estimator of Holzmann and Meister [2024], who recently studied a very nice extension of matching estimators for obtaining parametric convergence rates for estimating $g$, under suitable smoothness conditions on $g$. Such an extension provides an alternative to the doubly robust estimators studied in Lin et al. [2023], for which additional hyperparameters have to be tuned.

The paper is organised as follows. In Section II, we analyse the second-order properties of the bias for the estimators (1) and (2) using appropriate geometric conditions on the support of the covariates. We also give some consequences for the estimation of the ATE in Section III. A discussion
of various sufficient conditions for checking our geometric conditions is given in Section IV. In Section V, we show that such conditions are sufficient to control higher-order properties of the bias for a local polynomial extension of the previous estimators. Numerical experiments are given in Section VI. A conclusion is given in Section VII, and proofs of our results are given in Section VIII. Finally, some technical lemmas are collected in an Appendix.

II. SECOND-ORDER BIAS PROPERTY OF MATCHING ESTIMATORS

A. Notation and assumptions

For any $r \ge 0$ and $x \in \mathbb{R}^d$, let $B(x,r) := \{x' \in \mathbb{R}^d : \|x' - x\| \le r\}$ be the closed ball of centre $x$ and radius $r$, let $S(x,r)$ be the sphere of centre $x$ and radius $r$, let $S^{d-1} := S(0,1)$ be the unit sphere, and for a given Borel subset $A$ of $\mathbb{R}^d$, denote by $\mathbf{1}(A)$ the indicator function which takes the value $1$ on $A$ and $0$ on its complement. The conditional bias of the estimators (1) and (2) is defined as
\[
B_{i,n} := E[\hat{e}_i(h) \mid \mathbf{X}] - e(h), \quad i = 1, 2,
\]
respectively. We first describe the bias problem due to boundary issues. For simplicity, we only consider the estimator (1) (the case $i = 1$); the bias for the second estimator behaves in a similar way. For $\ell = 1, \dots, k$, denote by $\hat{i}_\ell(x)$ the index of the $\ell$th-nearest neighbour of $x \in \mathcal{X}$ in the sample $(X_1, \dots, X_n)$. Let us also set $\Delta(x, u) := E[h(x, Y_1) \mid X_1 = u]$. We then have $\Delta(x, x) = g(x)$, and assuming that the mapping $\Delta$ is two times continuously differentiable with respect to its second argument, we could expect that
\[
B_{1,n} = \frac{1}{k}\sum_{\ell=1}^k \int_{\mathcal{X}} \big[\Delta\big(x, X_{\hat{i}_\ell(x)}\big) - g(x)\big]\, dQ(x) \approx \frac{1}{k}\sum_{\ell=1}^k \int_{\mathcal{X}} \nabla_2 \Delta(x, x)^\top Z_\ell(x)\, dQ(x),
\]
where $Z_\ell(x) := X_{\hat{i}_\ell(x)} - x$ and $\nabla_2 \Delta(x, x)$ denotes the gradient of the mapping $u \mapsto \Delta(x, u)$ with respect to its second argument, evaluated at the point $x$. Note that $\|Z_\ell(x)\| = \|X_{\hat{i}_\ell(x)} - x\|$ can always be bounded by $\hat{\tau}_k(x)$, which is of order $(k/n)^{1/d}$ by Lemma 4. This leads to the same rate for the bias.
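The $(k/n)^{1/d}$ order of the k-NN radius $\hat{\tau}_k(x)$ can be checked with a quick simulation. The uniform design on $[0,1]^2$ below is an assumption for illustration; it is one example of a density bounded away from zero, as in Assumption (X3).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hedged check of the (k/n)^{1/d} order of the k-NN radius: with d = 2 and
# k = 1, multiplying n by 4 should roughly halve the radius.
d, k, x0 = 2, 1, np.array([0.5, 0.5])
mean_radius = {}
for n in (1000, 4000):
    r = [np.sort(np.linalg.norm(rng.random((n, d)) - x0, axis=1))[k - 1]
         for _ in range(200)]
    mean_radius[n] = float(np.mean(r))

ratio = mean_radius[1000] / mean_radius[4000]
assert 1.6 < ratio < 2.4   # close to (4000/1000)^{1/2} = 2
```

The observed ratio concentrates near $2 = (4000/1000)^{1/2}$, consistent with the scaling invoked via Lemma 4.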
However, one can get a better rate for the expectation of the vector $Z_\ell(x)$ if we restrict it to the event $\{\hat{\tau}_\ell(x) \le \delta(x)\}$, where $\delta(x)$ denotes the distance between $x$ and the complement $\mathcal{X}^c$ of $\mathcal{X}$. On this event, the ball centred at $x$ with radius $\hat{\tau}_\ell(x)$ is included in $\mathcal{X}$. Moreover, conditionally on the radius $\hat{\tau}_\ell(x)$, the normalised vector $Z_\ell(x)/\hat{\tau}_\ell(x)$ has a density, supported on $S^{d-1}$, given by $\theta \mapsto p(x + \hat{\tau}_\ell(x)\theta)$, up to a normalising constant denoted here by $\hat{C}_\ell(x)$. We then deduce that
\[
E\big[Z_\ell(x)\,\mathbf{1}(\hat{\tau}_\ell(x) \le \delta(x))\big] = E\Big[\int_{S^{d-1}} \hat{C}_\ell(x)\, \hat{\tau}_\ell(x)\, \theta\, \big(p(x + \hat{\tau}_\ell(x)\theta) - p(x)\big)\, d\sigma(\theta)\, \mathbf{1}(\hat{\tau}_\ell(x) \le \delta(x))\Big],
\]
where $\sigma$ denotes the Haar measure on the unit sphere. Assuming that the density $p$ is Lipschitz and positive on the compact set $\mathcal{X}$, one can get, for a generic constant $C > 0$,
\[
\|E[Z_\ell(x)]\| \le C\, E\big[\hat{\tau}_k(x)^2 + \hat{\tau}_k(x)\,\mathbf{1}(\hat{\tau}_k(x) > \delta(x))\big].
\]
Therefore, if $\nabla_2 \Delta$ is bounded, we gain an order in the expectation of the bias as soon as
\[
\int_{\mathcal{X}} E\big[\hat{\tau}_k(x)\,\mathbf{1}(\hat{\tau}_k(x) > \delta(x))\big]\, dQ(x) \le C (k/n)^{2/d}.
\]
Thanks to Lemma 6, this property holds true as soon as the following condition holds.

(A) $\displaystyle\sup_{L>0}\Big\{L^{1/d} \int_{\mathcal{X}} \exp\big(-L\,\delta(x)^d\big)\, dQ(x)\Big\} < \infty$.

These derivations will be formalised in our proofs, with an upper bound for $E[B_{i,n}^2]$, which is necessary for bounding the mean squared error of $\hat{e}_i(h)$. When the support of $Q$ is included in the interior of $\mathcal{X}$, i.e., in $\{x \in \mathcal{X} : \delta(x) \ge \varepsilon\}$ for some $\varepsilon > 0$, condition (A) is trivially
satisfied. However, such an assumption is quite restrictive and cannot be used for studying the standard average treatment effect; see Section III for details. Without such a restriction on the target measure $Q$, condition (A) is related to the geometry of the boundary of $\mathcal{X}$. We refer the reader to Section IV for a discussion of this condition. Though our work is mainly devoted to the control of the bias, we will also analyse the variance term $\hat{e}_i(h) - E[\hat{e}_i(h) \mid \mathbf{X}]$ via an original exponential bound for the tail distribution function of the volume $Q(A_k(z))$, where
\[
A_k(z) := \{x \in \mathcal{X} : \|z - x\| \le \hat{\tau}_k(x)\}
\]
is called the catchment area of the point $z$; see Abadie and Imbens [2006] or Lin et al. [2023]. The results will be presented in Subsection II-C. In the sequel, we will refer to the following assumptions.

(X1) $\mathcal{X}$ is a compact subset of $\mathbb{R}^d$.

(X2) There exists a constant $c > 0$ such that for all $x \in \mathcal{X}$ and all $0 \le r \le \mathrm{diam}(\mathcal{X})$, $|B(x,r) \cap \mathcal{X}| \ge c\,|B(x,r)|$, where $|B|$ denotes the Lebesgue measure of a Borel set $B$ in $\mathbb{R}^d$.

(X3) The distributions $P$ and $Q$ are absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$, with respective densities $p$ and $q$. Furthermore, there exist $\underline{p}, \bar{q} > 0$ such that for all $x \in \mathcal{X}$, $p(x) \ge \underline{p}$ and $q(x) \le \bar{q}$.

(X4) The function $g$ is Lipschitz-continuous on $\mathcal{X}$.

Notes:

1) Assumption (X2) requires that the intersection of a ball with the probability support $\mathcal{X}$ should have a volume comparable to that of the ball when the centre is inside the support. Such an assumption has been widely used in previous studies; see Gadat et al. [2016], Lin et al. [2023], or Portier et al. [2024], among others. As with Assumption (A), such a condition can be seen as a geometric condition on the boundary of the probability support $\mathcal{X}$. Sufficient conditions ensuring the validity of Assumption (X2) or Assumption (A) will be studied in Section IV, as well as some examples showing that these two conditions are not equivalent in general.
Assumptions (X1) and (X3) are also classical regularity conditions for studying k-NN type estimators.

2) Assumption (X4) is a standard regularity condition for a conditional expectation, and it is often compatible with a bounded probability support $\mathcal{X}$.

B. Rate for the bias

Simply assuming that the regression function $g$ is Lipschitz-continuous and using the bounds on the moments of the k-NN radius given in Lemma 4, we can obtain a first control of the conditional bias $B_{2,n} = E[\hat{e}_2(h) \mid \mathbf{X}] - e(h)$. For $B_{1,n}$, the same control has already been obtained in Proposition 3 of Portier et al. [2024] under similar assumptions.

Proposition 1. Under Assumptions (X1) to (X4), for all $\lambda \ge 1$, we have
\[
E[|B_{2,n}|^\lambda] \le C_{\lambda,d,P,h} \Big(\frac{k}{n+1}\Big)^{\lambda/d},
\]
where $C_{\lambda,d,P,h} := L^\lambda\, 2\,\Gamma(2 + \lfloor \lambda/d \rfloor)\, (c\,\underline{p}\,|V_d|)^{-\lambda/d}$ and $L$ is the Lipschitz constant of $g$. Here, $\Gamma$ denotes Euler's gamma function and $\lfloor a \rfloor$ denotes the integer part of $a \in \mathbb{R}$.

We will now prove that, under stronger conditions on the function $g$ and the distribution $P$, we can derive better rates for the second-order bias $E[B_{i,n}^2]$ for $i = 1, 2$. In what follows, for any $x \in \mathcal{X}$, we denote by $\nabla_2^2 \Delta(x, y)$ the Hessian matrix of $z \mapsto \Delta(x, z)$ at the point $y \in \mathcal{X}$. The Hessian matrix of $g$ at the point $x$ will simply be denoted by $\nabla^2 g(x)$. We will
also denote by $\|\cdot\|$ the operator norm on the space of $d \times d$ matrices with real coefficients, corresponding to the norm $\|\cdot\|$ used on $\mathbb{R}^d$. We will therefore consider the following two assumptions. In what follows, we denote by $U$ an open, convex, and bounded set containing $\mathcal{X}$, and by $\mathrm{co}(\mathcal{X})$ the convex hull of $\mathcal{X}$. We recall that $\mathrm{co}(\mathcal{X})$ is compact if $\mathcal{X}$ itself is compact.

(X5) The density $p$ is Lipschitz-continuous on $\mathcal{X}$.

(X6-1) For any $x \in \mathcal{X}$, the mapping $\Delta(x, \cdot)$ is the restriction to $\mathcal{X}$ of a mapping from $U$ to $\mathbb{R}$, still denoted by $\Delta(x, \cdot)$, two times continuously differentiable on $U$ and such that
\[
\sup_{(x,y) \in \mathcal{X} \times \mathrm{co}(\mathcal{X})} \big( \|\nabla_2 \Delta(x, y)\| + \|\nabla_2^2 \Delta(x, y)\| \big) < \infty.
\]

(X6-2) The mapping $g$ is the restriction to $\mathcal{X}$ of a two times continuously differentiable mapping from $U$ to $\mathbb{R}$, still denoted by $g$.

Allowing the extension of the mappings $\Delta$ or $g$ to differentiable mappings defined on a convex neighbourhood of $\mathcal{X}$ is only a technical assumption which ensures Lipschitz properties for these mappings and their gradients restricted to the set $\mathcal{X}$. Such properties are not necessarily valid for differentiable mappings only defined on the set $\mathcal{X}$. It also avoids having to choose a definition for continuously differentiable functions on the boundary; see Whitney [1934] for a discussion.

Theorem 1. Suppose that Assumptions (X1) to (X5) hold true, as well as Assumption (A), and Assumption (X6-1) if $i = 1$ or Assumption (X6-2) if $i = 2$. Then there exist constants $C_{i,d,P,Q,h}$ such that for $i = 1, 2$,
\[
E[B_{i,n}^2] \le C_{i,d,P,Q,h} \Big(\frac{k}{n+1}\Big)^{\min\{4/d,\,3\}}.
\]

a) Note: When $d \ge 2$, one can note that the convergence rate for the bias is $(k/n)^{2/d}$, which is the square of the rate obtained from Proposition 1, which only assumes Lipschitz regularity of the conditional expectation. When $d = 1$, the upper bound for the bias is $(k/n)^{3/2}$, whereas a Lipschitz condition only guarantees the rate $k/n$. In comparison, the standard deviation of our estimators is of order $1/\sqrt{n} + 1/\sqrt{m}$, as discussed in the next paragraph.
When $k = O(1)$, we then conclude that second-order regularity of the conditional expectation helps to improve the convergence rate of the RMSE only for $d \ge 2$, the bias being already negligible with respect to the standard deviation in the Lipschitz case when $d = 1$.

We next give a second-order bias expansion which is valid with a bit more regularity on the mappings $g$, $\Delta$, and $p$, and under a more restrictive geometric condition, namely the inclusion of the support of $Q$ in the interior of $\mathcal{X}$. Though quite restrictive, this condition is very common in the literature and useful to avoid boundary bias problems. Moreover, it automatically entails the validity of Assumption (A). In the context of the ATE, a similar bias expansion is given in Abadie and Imbens [2006, Theorem 2]. However, our expansion is more general here, since we also consider the expectation of the square of $B_{i,n}$, a quantity which is important since it appears in the MSE, and $k$ is not necessarily bounded.

Theorem 2. Suppose that Assumptions (X1) to (X5), and Assumption (X6-1) if $i = 1$ or Assumption (X6-2) if $i = 2$, hold true. Suppose furthermore that there exists $\beta \in (0,1)$ such that $\nabla^2 g$ is $\beta$-Hölder continuous on $\mathrm{co}(\mathcal{X})$ if $i = 2$, or
\[
\sup_{\substack{(x,y,z) \in \mathcal{X} \times \mathrm{co}(\mathcal{X})^2 \\ y \ne z}} \frac{\|\nabla_2^2 \Delta(x, y) - \nabla_2^2 \Delta(x, z)\|}{\|y - z\|^\beta} < \infty,
\]
if $i = 1$.
Finally, assume that $k^{d+1} = o(n)$, that the source density $p$ is the restriction to $\mathcal{X}$ of an element $p \in C^1(U)$ with $\nabla p$ $\eta$-Hölder continuous on $\mathrm{co}(\mathcal{X})$ for some $\eta \in (0,1)$, and that the support of $Q$ lies in the interior of $\mathcal{X}$. Then, for $i = 1, 2$, we have the following expansion for the first-order bias:
\[
E[B_{i,n}] = C_i(k, P, Q, h)\, n^{-2/d} + o\big((k/n)^{2/d}\big),
\]
where
\[
C_i(k, P, Q, h) := \frac{1}{k}\sum_{\ell=1}^k \frac{\Gamma(\ell + 2/d)}{\Gamma(\ell)} \int_{\mathcal{X}} \big(p(x)\,|V_d|\big)^{-2/d}\, \Psi_i(x)\, dQ(x),
\]
and
\[
\Psi_1(x) := \frac{1}{\sigma(S^{d-1})} \int_{S^{d-1}} \theta^\top \Big( \frac{\nabla_2 \Delta(x,x)^\top \nabla p(x)}{p(x)} + \frac{\nabla_2^2 \Delta(x,x)}{2} \Big)\, \theta\, d\sigma(\theta),
\]
\[
\Psi_2(x) := \frac{1}{\sigma(S^{d-1})} \int_{S^{d-1}} \theta^\top \Big( \frac{\nabla g(x)^\top \nabla p(x)}{p(x)} + \frac{\nabla^2 g(x)}{2} \Big)\, \theta\, d\sigma(\theta).
\]
Similarly, when the dimension $d$ of the covariates is greater than 2, for $i = 1, 2$, we have the following expansion for the second-order bias:
\[
E[B_{i,n}^2] = C_i(k, P, Q, h)^2\, n^{-4/d} + o\big((k/n)^{4/d}\big).
\]

b) Note: Under the assumptions of Theorem 2, the support of $Q$, which is defined as the smallest closed subset of $\mathbb{R}^d$ with measure 1, is a compact subset of $\mathcal{X}$, and its distance to the boundary of $\mathcal{X}$ is then positive. When $\mathcal{X} = [0,1]$, the support of $Q$ should be included in $[\epsilon, 1-\epsilon]$ for some $\epsilon \in (0, 1/2)$, which excludes many natural probability distributions, such as all beta distributions and hence even the simple uniform distribution. If we relax the condition of inclusion of the supports and replace it with the more general geometric condition (A), as discussed in Subsection II-A, then we no longer have access to this bias expansion, but we still have an upper bound on the second-order bias.

c) Note: As mentioned in the proof of Lemma 3 in Portier et al. [2024], one can find $L_1, L_2 > 0$, not depending on $\ell \ge 1$, such that $L_1 \ell^{2/d} \le \Gamma(\ell + 2/d)/\Gamma(\ell) \le L_2 \ell^{2/d}$. We then observe that the leading term in the bias expansion is really of order $(k/n)^{2/d}$.

C. Rate for the variance

Numerous studies have examined the variance term $V_{m,n} := \hat{e}_i(h) - E[\hat{e}_i(h) \mid \mathbf{X}]$ and have proved that it converges at a parametric rate, $E[V_{m,n}^2] \le C(m^{-1} + n^{-1})$. Interestingly, we can obtain this result as a consequence of a concentration inequality on the volume of catchment areas.
Recall that the catchment area of a point z ∈ X is defined as the set A_k(z) := {x ∈ X : ∥z − x∥ ≤ τ̂_k(x)}.

Theorem 3. Suppose that Assumptions (X1) to (X3) hold true. Then for all t > 0, we have the inequality

P(nQ(A_k(X₁)) ≥ kt) ≤ 3e^{1/4} exp(−c_p t/(12 q̄)).

With the control on the bias and the above concentration inequality, we can now conclude on the mean squared error of our estimator, adding the following assumption.

(X7) The random variable Var(h(X, Y) | X) is bounded above by some constant σ̄².

Corollary 1. Suppose that Assumptions (X1) to (X5) hold true, as well as Assumption (X6-1) if i = 1 or Assumption (X6-2) if i = 2, Assumption (X7), and Assumption (A). Then for all m, n ∈ N* and all 1 ≤ k ≤ n, we have

E[(ê_i(h) − e(h))²] ≤ C ( (k/(n+1))^{min{4/d, 3}} + 1/m + 1/n ),

where C > 0 is a constant depending on the dimension d, the function h, and the distributions P and Q.

a) Note: As a consequence of Corollary 1, if we choose k = 1, our estimator converges at a parametric rate when the dimension d of the covariates does not exceed 4. In contrast, as shown by Proposition 1, a Lipschitz regularity for the conditional expectation only guarantees a parametric rate of convergence when
d does not exceed 2. Such an improvement, assuming second-order regularity for the conditional expectation as well as the additional geometric condition (A) on the support X of the covariates, is one of the main results of the paper.

III. APPLICATION TO AVERAGE TREATMENT EFFECTS

In the setting of average treatment effects, as described in Abadie and Imbens [2006], N units i = 1, . . . , N, each described by a vector of covariates X_i, are given either an active treatment W_i = 1 or a control treatment W_i = 0. For each unit, out of the two potential outcomes Y_i(0) and Y_i(1), only one is observed, namely Y_i = Y_i(W_i). The quantities of interest are the global average treatment effect µ := E[Y(1) − Y(0)] and the average treatment effect on the treated µ_t := E[Y(1) − Y(0) | W = 1]. We will assume that the random vectors (W_i, X_i, Y_i(0), Y_i(1)) are i.i.d. for 1 ≤ i ≤ N and are replications of a generic random vector denoted (W, X, Y(0), Y(1)). While it is straightforward to estimate E[Y(w) | W = w] by a simple empirical average for w = 0, 1, the estimation of the two expectations E[Y(1−w) | W = w] for w = 0, 1 is not obvious since the outcome Y(1−w) is never observed in the group assigned with W = w. Related to our setup, the source or target probability measure P or Q equals either the distribution of X | W = 0 or that of X | W = 1, depending on which expectation, E[Y(1) | W = 0] or E[Y(0) | W = 1], we have to estimate. Then, as in this paper, Abadie and Imbens [2006] replaced the missing outputs by an average of the observed outputs of the k units having the nearest covariates. For x ∈ X, w ∈ {0,1} and ℓ ∈ J1;kK, let τ̂_{ℓ,w}(x) and î_{ℓ,w}(x) denote, respectively, the distance between x and its ℓth nearest neighbour among the X_i, i = 1, . . . , N, such that W_i = w, and the corresponding index.
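The matching scheme just described, where each unit's missing potential outcome is imputed by the average outcome of its k nearest covariate neighbours in the opposite treatment group, can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the toy data-generating process are ours.

```python
import numpy as np

def knn_match_ate(X: np.ndarray, W: np.ndarray, Y: np.ndarray, k: int = 1) -> float:
    """k-NN matching estimate of the ATE: impute each unit's missing potential
    outcome by the average outcome of its k nearest covariate neighbours in the
    opposite treatment group, then average the imputed treatment-control differences."""
    N = len(Y)
    diffs = np.empty(N)
    for i in range(N):
        opposite = np.flatnonzero(W != W[i])                # indices of the other group
        dists = np.linalg.norm(X[opposite] - X[i], axis=1)  # distances to that group
        nn = opposite[np.argsort(dists)[:k]]                # k nearest neighbours
        imputed = Y[nn].mean()                              # imputed Y_i(1 - W_i)
        # (2 W_i - 1) * (Y_i - imputed) = imputed difference Y_i(1) - Y_i(0)
        diffs[i] = (2 * W[i] - 1) * (Y[i] - imputed)
    return float(diffs.mean())

# Toy example with a constant unit-level treatment effect of 2:
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
W = rng.integers(0, 2, size=500)
Y = X[:, 0] + 2.0 * W + 0.1 * rng.normal(size=500)
print(knn_match_ate(X, W, Y, k=5))  # close to 2
```

Averaging imputed differences in this way coincides with the catchment-count weighting form of Abadie and Imbens [2006], in which each observed outcome is weighted by the number of times it serves as a match.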
Setting

M*_k(X_i) = ∑_{j=1}^{N} 1(W_j = 1 − W_i, ∥X_i − X_j∥ ≤ τ̂_{k,W_i}(X_j)),

it led them to the two estimators

µ̂ := (1/N) ∑_{i=1}^{N} (2W_i − 1)(1 + M*_k(X_i)/k) Y_i and µ̂_t := (1/N₁) ∑_{i=1}^{N} (W_i − (1 − W_i) M*_k(X_i)/k) Y_i

for the average treatment effect (ATE) and the average treatment effect on the treated (ATT), respectively, where N₁ is the number of units that were given the active treatment. In what follows, we only focus on the ATE, but similar results are valid for the ATT. One can decompose the difference µ̂ − µ as µ̂ − µ = E + B + V, where

E := (1/N) ∑_{i=1}^{N} (2W_i − 1)(1 + M*_k(X_i)/k)(Y_i − g_{W_i}(X_i)),
B := (1/N) ∑_{i=1}^{N} (2W_i − 1)( g_{1−W_i}(X_i) − (1/k) ∑_{ℓ=1}^{k} g_{1−W_i}(X_{î_{ℓ,1−W_i}(X_i)}) ), and
V := (1/N) ∑_{i=1}^{N} (g₁(X_i) − g₀(X_i)) − µ,

where g_w(x) := E[Y(w) | X = x] and the nearest neighbours are always taken from the other subpopulation. They showed that both the variance term V and the residual term E are of order N^{−1/2}, while the conditional bias is of order at least N^{−1/d} (they require k to be constant), and similarly for the average treatment effect on the treated. Furthermore, under extra regularity assumptions and the additional condition that

• the support X₁ of X given W = 1 is a compact subset of the interior of the support X₀ of X given W = 0,

they proved in their Theorem 2 that the unconditional bias on the treated E[B_t] was of order N^{−2/d}. However, because the above condition cannot be required to hold symmetrically in the treatment w (in other words, we cannot simultaneously have X₁ ⊂ X₀° and X₀ ⊂ X₁°), they could not extend this result to the bias
B of the global average treatment effect. With our results, we can prove that, under the less restrictive geometric condition (T8) below, the conditional bias term B is of order (k/N)^{2/d}, and of order (k/N)^{3/2} instead if d = 1. To state the theorem, we consider the following assumptions.

(T1) The support X of X is a compact subset of R^d.
(T2) There exists a constant c > 0 such that for all x ∈ X and all 0 ≤ r ≤ diam(X), |B(x, r) ∩ X| ≥ c|B(x, r)|, where |B| denotes the Lebesgue measure of a Borel set B in R^d.
(T3) The random vector X has a density f with respect to the Lebesgue measure on R^d and there exist constants f_, f̄ > 0 such that for almost all x ∈ X, f_ ≤ f(x) ≤ f̄.
(T4) (Unconfoundedness) For almost all x ∈ X, W is independent of (Y(0), Y(1)) conditionally on X = x.
(T5) There exists η > 0 such that for almost all x ∈ X, η < P(W = 1 | X = x) < 1 − η.
(T6) The regression functions g₀ and g₁ are restrictions of two-times continuously differentiable mappings on an open convex neighbourhood of X.
(T7) The probability distributions X | W = 0 and X | W = 1 have densities with respect to the Lebesgue measure on R^d, denoted respectively by f₀ and f₁, which are Lipschitz-continuous on X.
(T8) sup_{L>0, w∈{0,1}} L^{1/d} ∫_X exp(−L δ(x, X^c)^d) f_w(x) dx < ∞.

Theorem 4. Suppose that Assumptions (T1) to (T8) hold true. Then the conditional bias satisfies

E[B²] ≤ C (k/N)^{3 ∧ (4/d)},

where C > 0 depends on d, X, η, g₀, g₁, f₀, and f₁.

Improving on the order of the conditional bias means that there are more cases where it is negligible compared to the other two terms than in Lin et al. [2023]. The following result is a consequence of Theorem 4 and of the asymptotic normality of √N(E + V). See Lemma C 1 in Lin et al. [2023] for such a weak convergence.

Corollary 2. Suppose that Assumptions (T1) to (T8) hold true. If k diverges and if one of the following holds:

• d = 1 and k³/N² → 0,
• d = 2 and k²/N → 0, or
• d = 3 and k⁴/N → 0,

then the average treatment effect matching estimator with no bias correction attains the semiparametric efficiency lower bound [Hahn, 1998].
More precisely,

√N(µ̂ − µ) ⇒ N(0, σ²),

where

σ² := E[ σ₁²(X₁)/e(X₁) + σ₀²(X₁)/(1 − e(X₁)) + (g₁(X₁) − g₀(X₁) − τ)² ],

and for (x, w) ∈ X × {0,1}, e(x) := E(W₁ | X₁ = x) and σ_w(x)² := Var(Y₁(w) | X₁ = x).

a) Note: When d = 4 and k is bounded, we still have √N B = O_P(1), but the stochastic bias is no longer negligible and Corollary 2 does not extend to this case, even though the three terms E, V, B are of the same order.

IV. ANALYSIS OF TWO GEOMETRIC CONDITIONS

This section focuses on the following two geometric conditions that address the boundary bias problem:

(X2) There exists a constant c > 0 such that for all x ∈ X and all 0 ≤ r ≤ diam(X), |B(x, r) ∩ X| ≥ c|B(x, r)|.

(A) sup_{L>0} L^{1/d} ∫_X exp(−L δ(x, X^c)^d) dQ(x) < ∞.

[Fig. 1: Illustrations of Proposition 2. Panel (a) shows Example 1, the region below the curve y = x²; panel (b) shows Example 2, the sequence of rings C₁, C₂, C₃, . . . with radii a_k and b_k.]

To show that these two conditions are independent, in the sense that neither implies the other, we consider the following two counter-examples. In the following result, Q is defined as the uniform distribution on X.

Proposition 2. 1) Consider X := {(x, y) ∈ R² : x ∈ [0,1], y ∈ [0, x²]}. Then X satisfies
Condition (A) but not Condition (X2). 2) Consider X := ∪_{k∈N*} C_k, where C_k := {x ∈ R² : a_k ≤ ∥x∥ ≤ b_k}. We choose a_k := (k+1)^{−1} and b_k ∈ (a_k, a_{k−1}) such that for all k ≥ 1, 0 < a_{k−1} − b_k ≤ δ/(k+1)⁴ with δ small enough. Then X satisfies Condition (X2) but not Condition (A).

For illustrations of Proposition 2, see Figure 1. In the next theorem, we describe general settings in which the two conditions are simultaneously satisfied.

Theorem 5. Suppose that the density q is bounded above. Then Conditions (X2) and (A) are simultaneously satisfied in either of the following two cases:

• X is a compact convex subset of R^d with a non-empty interior; or
• X is the closure of a bounded open set whose boundary ∂X is a (d−1)-dimensional submanifold of R^d of class C¹.

Furthermore, both conditions are stable under finite unions.

Let us comment on these results. Under convexity or smooth boundary assumptions, as formulated in the statement of Theorem 5, Condition (X2) is satisfied since it is possible to show that the trace of a ball on X always includes a truncated cone with a fixed angle. The volume of this cone is then proportional to that of the ball. See the proof of the theorem for details. In Figure 1, we see that when the boundary has a spike, it is no longer possible to include such a cone in the ball, and Condition (X2) may fail. In the proof of Theorem 5, we also show that the same assumptions entail Condition (A), but using a different argument. When a point x ∈ X is close to the boundary ∂X, its distance to the boundary can be lower bounded by the distance to the boundary along one of the d axes of R^d, with a finite number of possible boundary points, i.e.

inf_{z∈∂X} ∥x − z∥ ≥ η min_{1≤i≤d} min_{s=1,...,N} |x_i − g_s((x_j)_{j≠i})|,

for some real-valued mappings g₁, . . . , g_N and a positive constant η. This is sufficient to upper-bound the integral and check Condition (A).
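The failure of Condition (X2) in the first example of Proposition 2 can also be observed numerically: near the cusp of X = {(x, y) : x ∈ [0,1], y ∈ [0, x²]} at the origin, the fraction |B(x, r) ∩ X| / |B(x, r)| becomes arbitrarily small. The Monte Carlo sketch below is our own illustration (function names are ours); it estimates this fraction for centres approaching the origin, with r equal to the distance to the origin.

```python
import numpy as np

def ball_trace_fraction(center, r, n_samples=200_000, seed=0):
    """Monte Carlo estimate of |B(center, r) ∩ X| / |B(center, r)| for
    X = {(x, y) : 0 <= x <= 1, 0 <= y <= x^2} (Example 1 of Proposition 2)."""
    rng = np.random.default_rng(seed)
    # sample uniformly in the ball by rejection from its bounding square
    pts = rng.uniform(-r, r, size=(n_samples, 2))
    in_ball = (pts ** 2).sum(axis=1) <= r ** 2
    pts = pts[in_ball] + np.asarray(center)
    in_X = (pts[:, 0] >= 0) & (pts[:, 0] <= 1) & (pts[:, 1] >= 0) & (pts[:, 1] <= pts[:, 0] ** 2)
    return in_X.mean()

# As the centre approaches the cusp at the origin (with r of the order of ||center||),
# the fraction shrinks, so no uniform constant c > 0 can satisfy (X2):
for t in (0.4, 0.2, 0.1, 0.05):
    print(t, ball_trace_fraction((t, t ** 2 / 2), r=t))
```

The printed fractions decrease with t (roughly linearly, since the area of X below abscissa x grows like x³ while the ball area grows like x²), matching the geometric picture in Figure 1.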
However, there exist some pathological examples with infinitely many connected components, such as the rings example of Proposition 2, for which the previous argument cannot be used and Condition (A) does not hold. Overall, for practical applications, both conditions seem to be satisfied under sufficiently general regularity assumptions on the domain X. Finally, we prove that Condition (A) is equivalent to a constraint on tube volumes for the boundary of the domain X.

Proposition 3. Suppose that Assumption (X1) holds true and set, for ϵ > 0, A_ϵ := {x ∈ X : δ(x, X^c) ≤ ϵ}. Then Condition (A) is equivalent to lim_{ϵ↘0} Q(A_ϵ)/ϵ < ∞.

From this result, one can deduce that Condition (A) is valid for any probability measure Q with a bounded density if and only if λ_d(A_ϵ) = O(ϵ), where λ_d denotes the Lebesgue measure on R^d. Let us mention that there is a link between this latter condition and the so-called tube formula of Weyl [1939], who expressed, for a smooth submanifold ∂X = X \ X°, the volume of a tube of radius ϵ around ∂X as a polynomial in ϵ. For a submanifold of dimension d−1, the first term of this expression is proportional to ϵ. The equivalence given in Proposition 3 can then be used to get another proof for
checking (A) in the submanifold case, a result already given in Theorem 5.

V. EXTENSION TO LOCAL POLYNOMIALS

In this section, we discuss a more general estimator introduced by Holzmann and Meister [2024] to reduce the bias when the regression function g has more regularity. They use local polynomials of order L ∈ N to approximate the regression function. In their theorem, Holzmann and Meister [2024] suppose that the supports are convex and that the target support lies inside the interior of the source support. We will show that, in fact, Condition (X2) is all we need. We write N^d_{≤L} for the set of d-tuples of non-negative integers whose sum does not exceed L. Let Ψ : J1;K*K → N^d_{≤L} be a one-to-one and onto function with Ψ(1) = (0, . . . , 0), where K* is the cardinality of N^d_{≤L}. For j = 1, . . . , K*, we also write ζ_j(x, z) := (z − x)^{Ψ(j)} := ∏_{i=1}^{d} (z_i − x_i)^{(Ψ(j))_i}, which is a local monomial in z in d coordinates, of degree at most L, centred at the point x. Finally, ζ(x, z) = (ζ_j(x, z))_{1≤j≤K*} will denote the vector in R^{K*} composed of all such local monomials. The local estimator ĝ(x) of the regression function is the first coordinate of the least squares estimator

γ̂ := argmin_{γ∈R^{K*}} ∑_{i=1}^{n} (h(X_i, Y_i) − γ⊤ζ(x, X_i))² 1(∥X_i − x∥ ≤ τ̂_k(x)).

In other words, ĝ(x) = e₁⊤ M(x)^{−1} ∑_{i=1}^{n} h(X_i, Y_i) ζ(x, X_i) 1(∥X_i − x∥ ≤ τ̂_k(x)), where e₁ is the first vector of the canonical basis of R^{K*} and M(x) is the matrix

M(x) := ∑_{i=1}^{n} ζ(x, X_i)ζ(x, X_i)⊤ 1(∥X_i − x∥ ≤ τ̂_k(x)) = ( ∑_{i=1}^{n} (X_i − x)^{Ψ(j)+Ψ(j′)} 1(∥X_i − x∥ ≤ τ̂_k(x)) )_{1≤j,j′≤K*}.

The final estimator is then ê_L(h) := (1/m) ∑_{j=1}^{m} ĝ(X*_j). Note that when L = 0, ζ(x, z) is the constant vector 1 ∈ R, M(x) = k, and ĝ(x) coincides with the classical estimator ĝ_n(x) defined in Section I. In what follows, we will need the constant D := ∑_{i=1}^{L} i (d+i−1 choose i). Adapting the nice proof technique of Holzmann and Meister [2024], we obtain the following result.

Theorem 6.
Suppose that Assumptions (X1) to (X3) hold true, that the source density p is bounded above by p̄, and that the function g has derivatives of order l ∈ N which are all Hölder continuous of order β ∈ (0,1] with constant Λ > 0. Suppose also that k ≥ (2D+1)K* + 1. Then, setting L := l, there exists a constant C > 0 depending on d, l, and P such that the conditional bias B_n := E[ê_L(h) − e(h) | X] is controlled by

E[B_n²] ≤ C Λ² (k/n)^{2(l+β)/d}.

The conditional variance of the estimator ê_L(h) was already studied by Holzmann and Meister [2024, Theorem 2], who showed that it has the parametric rate n^{−1} + m^{−1}. We therefore obtain a parametric rate for ê_L(h) with fixed k as soon as l + β ≥ d/2. Extending Theorem 2 of Holzmann and Meister [2024] to a situation where we do not assume any inclusion between the source and the target supports implies that their local polynomial estimator can also be used in the context of global average treatment effect (ATE) estimation. We keep the notation from Section III and we define, for w ∈ {0,1} and x ∈ X,

ĝ_w(x) := e₁⊤ argmin_{γ∈R^{K*}} ∑_{W_i=w} (Y_i − γ⊤ζ(x, X_i))² 1(∥X_i − x∥ ≤ τ̂_{k,w}(x)), and
µ̂ := (1/N) ∑_{i=1}^{N} (2W_i − 1)(Y_i − ĝ_{1−W_i}(X_i)).

Define the conditional bias

B := (1/N) ∑_{i=1}^{N} (2W_i − 1)(g_{1−W_i}(X_i) − ḡ_{1−W_i}(X_i)), where
ḡ_w(x) := e₁⊤ M_w(x)^{−1} ∑_{W_i=w} g_w(X_i) ζ(x, X_i) 1(∥X_i − x∥ ≤ τ̂_{k,w}(x)).

Then, proceeding as in the proof of Theorem
4, the previous theorem leads to the following result.

Corollary 3. Suppose that Conditions (T1) to (T5) hold true and that the regression functions g₀ and g₁ are restrictions of mappings on an open convex neighbourhood of X having derivatives of order l ∈ N which are all Hölder continuous of order β ∈ (0,1]. Suppose further that k ≥ (2D+1)K* + 1. Then there exists a constant C > 0 such that the conditional bias of the estimator µ̂ for the average treatment effect satisfies

E[B²] ≤ C (k/N)^{2(l+β)/d}.

As stated above, the term E + V (as defined in Section III) of the estimator µ̂ was shown by Holzmann and Meister [2024] to have the parametric rate N^{−1}. Along with the previous corollary, this proves that when l + β ≥ d/2, the estimator µ̂ with fixed k has the rate of convergence N^{−1}.

VI. NUMERICAL EXPERIMENTS

In this section, we conduct numerical experiments to study how the estimators perform for different dimensions and sample sizes.

a) Methods that we compare: We use the estimators (1) and (2) with k = 1, as well as the local polynomial fitting estimator of order 1 proposed by Holzmann and Meister [2024]. We refer to these methods as the 1-Nearest-Neighbour Conditional Sampling Adaptation (1NN-CSA), the 1-Nearest-Neighbour Weighting (1NN-W), and the k-Nearest-Neighbour Polynomial fitting (kNN-Poly), respectively. kNN-Poly of order 1 amounts to a local linear fit with an intercept term, meaning that d + 1 parameters are fitted. For this method, we test two variants with different choices of k.

kNN-Poly-LB: kNN-Poly with k = 2d² + 3d + 3. This is the minimum k satisfying the condition of Holzmann and Meister [2024, Theorems 1-2] for the local polynomial fitting of order 1. LB stands for "Lower Bound". We choose the minimum of such k's because Holzmann and Meister [2024] reported that smaller k's tend to show better results.

kNN-Poly-d+5: kNN-Poly with k = d + 5. This choice of k is very close to k = d + 1, the smallest k for which the polynomial fitting is possibly well-posed.
kNN-Poly-d+5 includes a few more points in the fit for better stability.

NoCorrection: A naïve estimate without covariate shift adaptation, given by the empirical average over the source sample, (1/n) ∑_{i=1}^{n} h(X_i, Y_i).

OracleY: A hypothetical estimate with oracle access to the hidden labels Y*_j, given by the empirical average over the target sample, (1/m) ∑_{j=1}^{m} h(X*_j, Y*_j).

b) Setups: We conduct experiments under two setups, described as follows. For both setups, we let each of (X^{(2)}, . . . , X^{(d)}) and (X*^{(2)}, . . . , X*^{(d)}) be distributed uniformly over [−1,1]^{d−1}, where (·)^{(j)} denotes the jth coordinate of the given vector. X^{(1)}, . . . , X^{(d)}, X*^{(1)}, . . . , X*^{(d)} are all independent. We define the function h by h(x, y) := (x^{(1)} + y)². The differences between the setups are the conditional distribution of Y given X (equivalently, that of Y* given X*) and the distributions of X^{(1)} and X*^{(1)}, as summarized in Table I and Figure 2 and as detailed below.

Setup TN0.5-Cubic: We let X^{(1)} follow TN(−0.5, 0.5, [−1,1]) and X*^{(1)} follow TN(0.5, 0.5, [−1,1]), where TN(µ, σ, S) denotes the truncated normal distribution for (µ, σ) ∈ R × (0,∞) and S ⊂ R, defined as the distribution of V conditionally on V ∈ S, V being a normal variable with mean µ and standard deviation σ. The
response is generated according to Y = |X^{(1)}|³ + ε, where ε is a normal noise variable with mean 0 and standard deviation 0.1, independent of all the other variables.

Setup TN0.5-Cubic-Reversed: The conditional distribution of Y given X is the same as in Setup TN0.5-Cubic. However, we switch the locations of the source and the target covariate distributions; that is, X^{(1)} follows TN(0.5, 0.5, [−1,1]) and X*^{(1)} follows TN(−0.5, 0.5, [−1,1]). Note that the support of the target distribution is not included in the interior of the source support, as assumed in the literature [Holzmann and Meister, 2024], but Theorem 5 immediately ensures that these setups satisfy our new conditions (X2) and (A), as well as the other conditions of Theorems 1 and 6. Note that under these setups, the function g(x) = (x^{(1)} + |x^{(1)}|³)² + 0.01 is two-times continuously differentiable (X6-2).

c) Results for the experiments: The results for Setup TN0.5-Cubic are shown in Fig. 3. First, we observe that NoCorrection, the estimator without covariate shift adaptation, performs poorly, while OracleY, the hypothetical estimator with oracle access to the unobserved target labels, performs best in almost all cases. The other methods lie between these two: they tend to show smaller errors with larger sample sizes, but increasing the covariate dimension slows down the rates of convergence, as the theory suggests.

For the kNN-Poly methods of Holzmann and Meister [2024], kNN-Poly-d+5 often performs better than kNN-Poly-LB. However, since its k is too small, there is no known theoretical guarantee for kNN-Poly-d+5, unlike for kNN-Poly-LB [Holzmann and Meister, 2024, Theorems 1-2]. When comparing 1NN-CSA, 1NN-W, and kNN-Poly-d+5, we did not reach a simple conclusion on which one is best recommended in practice. Note that the convergence rates for the three methods are the same under the assumption of second-order differentiability.
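For concreteness, the order-1 local polynomial (local linear) fitting underlying the kNN-Poly methods can be sketched as follows: for each target point, the k nearest source points are selected, the responses are regressed by least squares on the monomials of degree at most 1 centred at the target point, and the fitted intercept serves as the estimate of g at that point. This is a simplified illustration with our own names and toy data, not the experimental code; for simplicity we take h(x, y) = y.

```python
import numpy as np

def local_linear_knn_estimate(Xs, Ys, Xt, k):
    """Order-1 local polynomial k-NN sketch: for each target point x, regress the
    k nearest source responses on (1, X_i - x) and keep the fitted intercept,
    then average the intercepts over the target sample."""
    preds = np.empty(len(Xt))
    for j, x in enumerate(Xt):
        dists = np.linalg.norm(Xs - x, axis=1)
        nn = np.argsort(dists)[:k]                    # k nearest source points
        Z = np.hstack([np.ones((k, 1)), Xs[nn] - x])  # design: monomials of degree <= 1
        coef, *_ = np.linalg.lstsq(Z, Ys[nn], rcond=None)
        preds[j] = coef[0]                            # intercept = estimate of g(x)
    return preds.mean()

# Toy check with a linear response, for which the local linear fit is exact up to noise:
rng = np.random.default_rng(1)
Xs = rng.uniform(-1, 1, size=(2000, 2))
Ys = 1.0 + 2.0 * Xs[:, 0] + 0.05 * rng.normal(size=2000)
Xt = rng.uniform(0, 1, size=(500, 2))  # shifted target covariates
print(local_linear_knn_estimate(Xs, Ys, Xt, k=10))  # approximately E[1 + 2 X*(1)] = 2
```

Since k = 10 exceeds the d + 1 = 3 fitted parameters, the least squares problem is well-posed, in line with the discussion of the choice k = d + 5 above.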
Which method is preferable seems to depend only on constants, for instance through the covariate distribution, which appears to have an impact. For Setup TN0.5-Cubic, the kNN-Poly methods tend to perform better than the others and 1NN-W is the worst (Figure 3), while for Setup TN0.5-Cubic-Reversed, 1NN-W is often the best (Figure 4). Finally, let us mention that local polynomials of higher order (with L > 1) can have a better convergence rate, but additional smoothness is required. In contrast, one cannot hope for a better rate of convergence for 1NN-CSA or 1NN-W when additional smoothness is available, as shown by Theorem 2.

VII. CONCLUSION

We proposed a method for covariate shift using nearest-neighbour matching. We provided non-asymptotic error bounds and we applied our results to the study of average treatment effects. For future work, we could consider relaxing the boundedness assumptions on the support and on the distribution densities.

TABLE I: Setups of experiments.

Name of setup | Source distribution of [X]₁ | Target distribution of [X*]₁ | Type of response
TN0.5-Cubic | TN(−0.5, 0.5, [−1,1]) | TN(0.5, 0.5, [−1,1]) | Y = |[X]₁|³ + ε
TN0.5-Cubic-Reversed | TN(0.5, 0.5, [−1,1]) | TN(−0.5, 0.5, [−1,1]) | Y = |[X]₁|³ + ε

[Fig. 2 panels (a)–(c): histograms of the source (P_X) and target (Q_X) covariate distributions, and a scatter plot of (X, Y) ∼ P_X P_{Y|X} and (X*, Y*) ∼ Q_X P_{Y|X} with the line y = −x, for Setups TN0.5-Cubic and TN0.5-Cubic-Reversed.]
[Fig. 2 panel (d): scatter plot for Setup TN0.5-Cubic-Reversed.]

Fig. 2: Visualization of the data distributions used in the experiments.

VIII. PROOFS

A. Proof of Theorem 1

We prove the result when i = 2; the case i = 1 follows in the same way using uniform bounds, with respect to x, for the derivatives of ∆(x, ·). Recalling that

B_{2,n} = E[ê₂(h) | X] − e(h) = ∫_X (1/k) ∑_{ℓ=1}^{k} (g(X_{î_ℓ(x)}) − g(x)) dQ(x),

we have a double integral

E[B²_{2,n}] = ∫_{X²} (1/k²) ∑_{ℓ,ℓ′=1}^{k} E[(g(X_{î_ℓ(x)}) − g(x))(g(X_{î_{ℓ′}(y)}) − g(y))] dQ^{⊗2}(x, y) = 1-1 + 1-2, (3)

where

1-1 := k^{−2} ∑_{ℓ,ℓ′=1}^{k} ∫_{X²} Φ^{(1)}_{ℓ,ℓ′}(x, y) dQ^{⊗2}(x, y), 1-2 := k^{−2} ∑_{ℓ,ℓ′=1}^{k} ∫_{X²} Φ^{(2)}_{ℓ,ℓ′}(x, y) dQ^{⊗2}(x, y),

with

Φ^{(1)}_{ℓ,ℓ′}(x, y) := E[(g(X_{î_ℓ(x)}) − g(x))(g(X_{î_{ℓ′}(y)}) − g(y)) 1(τ̂_ℓ(x) + τ̂_{ℓ′}(y) < ∥y − x∥)] and
Φ^{(2)}_{ℓ,ℓ′}(x, y) := E[(g(X_{î_ℓ(x)}) − g(x))(g(X_{î_{ℓ′}(y)}) − g(y)) 1(τ̂_ℓ(x) + τ̂_{ℓ′}(y) ≥ ∥y − x∥)].

[Fig. 3: Results for Setup TN0.5-Cubic. Mean squared error against n for (a) d = 1, (b) d = 2, and (c) d = 5, comparing 1NN-CSA, 1NN-W, NoCorrection, OracleY, kNN-poly-LB, and kNN-poly-d+5.]

[Fig. 4: Results for Setup TN0.5-Cubic Reversed. Mean squared error against n for (a) d = 1, (b) d = 2, and (c) d = 5, for the same methods.]

It is sufficient to show that, for appropriate constants C₁ and C₂,

1-1 ≤ C₁ (k/(n+1))^{4/d} and 1-2 ≤ C₂ (k/(n+1))^{1+2/d}. (4)

Fix (x, y) ∈ X² and (ℓ, ℓ′) ∈ J1;kK². If τ̂_ℓ(x) + τ̂_{ℓ′}(y) < ∥y − x∥ holds, then the NN balls B(x, τ̂_ℓ(x)) and B(y, τ̂_{ℓ′}(y)) do not intersect, and the actual positions of the observations X_{î_ℓ(x)} and X_{î_{ℓ′}(y)} are conditionally independent given τ̂_ℓ(x) and τ̂_{ℓ′}(y). Then the indicator 1(τ̂_ℓ(x) + τ̂_{ℓ′}(y) < ∥y − x∥) ensures that there is negative correlation between τ̂_ℓ(x) and τ̂_{ℓ′}(y).
Precisely, for Rx, Ry≥0such that Rx+Ry<∥y−x∥, Corollary 4 gives E[(g(Xˆiℓ(x))−g(x))(g(Xˆiℓ′(y))−g(y))|ˆτℓ(x) =Rx,ˆτℓ′(y) =Ry] =I(x, R x)I(y, R y),(5) where I(a, R a) :=1R Sd−1p(a+Raθ) dσ(θ)Z Sd−1(g(a+Raθ)−g(a))p(a+Raθ) dσ(θ) . Applying Taylor’s expansion on the function g, we get g(a+rθ)−g(a)−r∇g(a)⊤θ ≤ ∥∇2g∥∞ 2r2. Ifr≤d(a,Xc), then the density pisK-Lipschitz continuous on B(a, r)and we can write Z Sd−1rθ p(a+rθ) dσ(θ) =r Z Sd−1(p(a+rθ)−p(a))θdσ(θ) ≤Kσ(Sd−1)r2.(6) It shows that |I(a, R a)| ≤T(a, R a) := ( CR2 a+∥∇g∥∞Ra1(Ra> d(a,Xc))), where C:=∥∇g∥∞K(cp)−1+1 2∥∇2g∥∞. It means that it remains to bound the quantity E[T(x,ˆτℓ(x))T(y,ˆτℓ′(y))1(ˆτℓ(x) + ˆτℓ′(y)<∥y−x∥)]. Since T(a,·)is a non-decreasing function we can apply Corollary 5 on the negative correlation to obtain E[T(x,ˆτℓ(x))T(y,ˆτℓ′(y))1(ˆτℓ(x) + ˆτℓ′(y)<∥y−x∥)]≤E[T(x,ˆτℓ(x))]E[T(y,ˆτℓ′(y))]. It means that 1-1=Z X2E[(g(Xˆiℓ(x))−g(x))(g(Xˆiℓ′(y))−g(y))1(ˆτℓ(x) + ˆτℓ′(y)<∥y−x∥)] dQ⊗2(x, y) ≤Z XE[Cˆτℓ(x)2+∥∇g∥∞ˆτℓ(x)1(ˆτℓ(x)> δ(x,Xc))] dQ(x)2 . Using Lemmas 4 and 6, and Assumption (A), we establish the bound 1-1≤(C′+C′′)2k n+ 14/d , where we have set C′:= 2∥∇g∥∞K(cp)−1+∥∇2g∥∞ Γ(2 + ⌊2/d⌋)(cp|Vd|)−2/dand C′′:=21+3/d∥∇g∥∞ (cp|Vd|)2/dsup L>0 L1/dR Xexp(−L δ(x,Xc)d) dQ(x) . We are now reduced to analyse the case where the ℓ-NN ball and the ℓ′-NN ball intersect. More precisely, we set Φ(2) ℓ,ℓ′(x, y) :=E[(g(Xˆiℓ(x))−g(x))(g(Xˆiℓ′(y))−g(y))1(ˆτℓ(x) + ˆτℓ′(y)≥ ∥y−x∥)]. 20 Using the mean value theorem, the inequality ab1(a+b≥δ)≤a21(a≥δ/2)+b21(b≥δ/2), and applying Lemma 6 now yields 1-2=Z X2Φ(2) ℓ,ℓ′(x, y) dQ⊗2(x, y) ≤ ∥∇ g∥2 ∞Z X2E[ˆτℓ(x)ˆτℓ′(y)1(ˆτℓ(x) + ˆτℓ′(y)≥ ∥y−x∥)] dQ⊗2(x, y) ≤ ∥∇ g∥2 ∞Z X2E ˆτℓ(x)21(ˆτℓ(x)≥ ∥y−x∥/2) + ˆτℓ′(y)21(ˆτℓ′(y)≥ ∥y−x∥/2) dQ⊗2(x, y) ≤2Ck n+ 12/dZ X2exp −Ln+ 1 k∥y−x∥d dQ⊗2(y, x) ≤2C¯qk n+ 12/d σ(Sd−1)Z R+exp −Ln+ 1 krd rd−1dr ≤2C¯qk n+ 12/d |Vd|Z R+exp −Ln+ 1 ks ds ≤2C¯q|Vd| Lk
n+ 11+2/d , where C:= 2e1/4∥∇g∥2 ∞Γ(2 + ⌊2/d⌋)(cp|Vd|)−2/dandL:=cp|Vd| 2d+3. In the series of inequalities above, we have used the equality σ Sd−1 =d|Vd|, and we also used the fact that for all a≥0,P(ˆτℓ(x)≥a) =P(ˆτℓ(x)> a)since the distribution Pis absolutely continuous with respect to the Lebesgue measure on Rd. B. Proof of Theorem 2 As for Theorem 1, we only consider the case i= 2. The case i= 1 is similar, using uniform controls for the derivatives of y7→gx(y) := ∆( x, y)with respect to xandy. Proof of the expansion of E[B2,n]:The first-order bias term can be written as E[B2,n] =1 kkX ℓ=1Z XEh g(Xˆiℓ(x))−g(x)i dQ(x). Fix1≤ℓ≤kandx∈Xsuch that xis in the support of Q. We consider the Taylor expansion of the function g∈C2+β(X). We have g(Xˆiℓ(x))−g(x) =∇g(x)⊤(Xˆiℓ(x)−x) +1 2(Xˆiℓ(x)−x)∇2g(x)(Xˆiℓ(x)−x) +O(ˆτℓ(x)2+β). For the rest of the proof, we set φx(z) :=∇g(x)⊤z+1 2z⊤∇2g(x)z. From Lemma 4, we get E[g(Xˆiℓ(x))−g(x)] =E[φx(Xˆiℓ(x)−x)] +o((k/n)2/d). (7) Because we assumed that the support of Qlies inside the interior of X, there exists ε >0 such that for Q-almost all x∈X, B(x,2ε)⊂X. Then by Lemma 6, there exist positive constants C, L > 0such that E[φx(Xˆiℓ(x)−x)1(ˆτℓ(x)> ε)] ≤Cexp −Ln k . 21 The previous bound holds independently from xthanks to the continuity of ∇gand∇2gon the compact set X, and because k=o(n1/2), this term is negligible compared to (k/n)2/d. We know from Lemma 8 that the density of the random vector Xˆiℓ(x)−xis given by fℓ(z) :=np(x+z)n−1 ℓ−1 P(B(x,∥z∥))ℓ−1P(X\B(x,∥z∥))n−ℓ. Then setting Fx(r) :=P(B(x, r)), we have 2-1:=E[∇g(x)⊤(Xˆiℓ(x)−x)1(ˆτℓ(x)≤ε)] =nn−1 ℓ−1Zε 0Fx(r)ℓ−1(1−Fx(r))n−ℓZ Sd−1∇g(x)⊤(rθ)p(x+rθ) dσ(θ)rd−1dr =nn−1 ℓ−1Zε 0Fx(r)ℓ−1(1−Fx(r))n−ℓrdZ Sd−1∇g(x)⊤θ(p(x+rθ)−p(x)) dσ(θ) dr. But since B(x, ε)⊂Xandp∈C1+η(X), we have p(x+rθ) =p(x)+r∇p(x)⊤θ+O(r1+η), so that 2-1=nn−1 ℓ−1Zε 0Fx(r)ℓ−1(1−Fx(r))n−ℓrdZ Sd−1(r∇g(x)⊤θ∇p(x)⊤θ+O(r1+η)) dσ(θ) dr =nn−1 ℓ−1Zε 0Fx(r)ℓ−1(1−Fx(r))n−ℓrd+1Z Sd−1θ⊤∇g(x)∇p(x)⊤θdσ(θ) +O(rη) dr. 
It means that setting Cℓ,n:=n n−1 ℓ−1 andΨ2(x) :=1 σ(Sd−1)R Sd−1θ⊤ ∇g(x)⊤∇p(x) p(x)+∇2g(x) 2 θdσ(θ), we have E[φx(Xˆiℓ(x)−x)1(ˆτℓ(x)≤ε)] =Cℓ,np(x)σ(Sd−1)Zε 0(Ψ2(x) +O(rη))Fx(r)ℓ−1(1−Fx(r))n−ℓrd+1dr. Now, because pis Lipschitz-continuous on X, one can show that Fx(r) =p(x)|Vd|rd(1 + O(r)), and setting Cp:=∥∇p∥∞/p, we obtain by the binomial expansion and ℓ−1 j ≤ℓj, Fx(r)ℓ−1−(p(x)|Vd|rd)ℓ−1 ≤(p(x)|Vd|rd)ℓ−1ℓ−1X j=1(Cpℓr)j. (8) Similarly, we also have Fx(r)≥p(x)|Vd|rd(1−Cpr)≥p(x)|Vd|rd(1−Cpε). Applying Lemma 6 and using kd+1=o(n), we know that Cℓ,np(x)σ(Sd−1)Zε ε/kr2Fx(r)ℓ−1(1−Fx(r))n−ℓrd−1dr=E[ˆτℓ(x)21(ε/k≤ˆτℓ(x)≤ε)] ≤Ck n2/d exp −C′n k(ε/k)d =o((k/n)2/d), 22 which means that the part of the integral outside the range [0, ε/k]is negligible. Using the inequality 1−t≤exp(−t)fort≥0, we get 2-2:=Cℓ,nZε/k 0 Fx(r)ℓ−1−(p(x)|Vd|rd)ℓ−1 (1−Fx(r))n−ℓrd+1dr ≤Cℓ,nℓ−1X j=1(Cpℓ)jZε/k 0(p(x)|Vd|rd)ℓ−1rjexp(−(n−ℓ)Fx(r))rd+1dr ≤Cℓ,nℓ−1X j=1(Cpℓ)jZ R+(p(x)|Vd|rd)ℓ−1r2+jexp(−Lℓ,k,nnp(x)|Vd|rd)rd−1dr, where Lℓ,k,n:=n−ℓ n(1−Cpε/k)>0. We can redefine the constant εsmall enough to ensure that 1/4≥Cpε. Furthermore, ℓ=o(n), so that Lℓ,k,n≥Lk:= 1−(2k)−1for all sufficiently large n, and thus 2-2≤Cℓ,nℓ−1X j=1(Cpℓ)jZ R+(p(x)|Vd|rd)ℓ−1r2+jexp(−nLkp(x)|Vd|rd)rd−1dr, to which we can apply Lemma 12 to obtain 2-2=Cℓ,nℓ−1X j=1(Cpℓ)jo((k/n)(2+j)/d) =o k n2/d∞X j=1ℓd+1 nj/d! =o k n2/dℓd+1 n1/d 1−ℓd+1 n1/d!−1 =o((k/n)2/d). Similarly, through Lemma 12, we have Cℓ,nZε/k 0rηFx(r)ℓ−1(1−Fx(r))n−ℓrd+1dr=O((k/n)2+η d) =o((k/n)2/d). This far, we have shown that E[g(Xˆiℓ(x))−g(x)]equals Cℓ,np(x)σ(Sd−1)Ψ2(x)Zε/k 0(p(x)|Vd|rd)ℓ−1(1−Fx(r))n−ℓrd+1dr+o((k/n)2/d). Next, we claim Cℓ,nZε/k 0(p(x)|Vd|rd)ℓ−1(1−Fx(r))n−ℓrd+1dr =Cℓ,nZε/k 0(p(x)|Vd|rd)ℓ−1(1−Fx(r))nrd+1dr+o((k/n)2/d). (9) To see this, set r1:=r, r2:= 0, and ℓ′:= 0 in Lemma 14 to obtain (1−Fx(r))n−ℓ−(1−Fx(r))n =O ℓexp(−nLkp(x)|Vd|rd)rd , forr∈[0, ϵ/k], and apply Lemma 12: Cℓ,nZε/k 0(p(x)|Vd|rd)ℓ−1ℓexp(−nLkp(x)|Vd|rd)r2d+1dr=O(k1+(2+ d)/dn−(2+d)/d). 
23 The order of this quantity is o((k/n)2/d)by the assumption kγ=o(n)with γ > d + 1. Moreover, setting r2:= 0 in Lemma
13 and then applying Lemma 12 yields Cℓ,nZε/k 0(p(x)|Vd|rd)ℓ−1(1−Fx(r))nrd+1dr −Cℓ,nZε/k 0(p(x)|Vd|rd)ℓ−1exp(−np(x)|Vd|rd)rd+1dr ≤2np(x)|Vd|Cℓ,nZε/k 0(p(x)|Vd|rd)ℓ−1O rd+1exp(−np(x)|Vd|rdLk)rd+1 dr =O(n(k/n)(2+d+1)/d) =o((k/n)2/d), where we used kd+1=o(n). The leading term of the bias is now reduced to the integral 2-3:=Cℓ,np(x)σ(Sd−1)Ψ2(x)Zε/k 0(p(x)|Vd|rd)ℓ−1exp(−np(x)|Vd|rd)rd+1dr.(10) To extend the range of the integral to R+, we show that the following part is negligible: 2-4:=Cℓ,np(x)σ(Sd−1)Ψ2(x)Z∞ ε/k(p(x)|Vd|rd)ℓ−1exp(−np(x)|Vd|rd)rd+1dr. We make the change of variables s:=np(x)|Vd|rd, which yields 2-4=Cℓ,nn−ℓΨ2(x)(p(x)|Vd|n)−2/dZ∞ np(x)|Vd|(ε/k)dsℓ+2/d−1e−sds ≤C Γ(ℓ)Ψ2(x)(p(x)|Vd|n)−2/dexp −n 2kdp(x)|Vd|εdZ R+sℓ+2/d−1e−s/2ds ≤CΨ2(x)2 p(x)|Vd|2/d n−2/dΓ(ℓ+ 2/d) Γ(ℓ)2ℓexp −n 2kdp(x)|Vd|εd . Above we use the identities σ(Sd−1) =d|Vd|andCℓ,nn−ℓ=n1−ℓ n−1 ℓ−1 =1 Γ(ℓ)(1 +o(1)). However, because 2ℓexp −n 2kdp(x)|Vd|εd = exp ℓln 2−n 2kdp(x)|Vd|εd =o(1), using kd+1=o(n), we find that 2-4 =o((k/n)2/d). This means that we can extend the integral in Eq. (10) over R+. Now again the change of variables s:=np(x)|Vd|rdyields 2-3=Cℓ,np(x)σ(Sd−1)Ψ2(x)×Z R+(p(x)|Vd|rd)ℓ−1r2exp(−np(x)|Vd|rd)rd−1dr =Cℓ,np(x)σ(Sd−1)Ψ2(x)×d−1n−ℓ−2/d(p(x)|Vd|)−2/d−1Γ(ℓ+ 2/d). Finally, we showed that Eh g(Xˆiℓ(x))−g(x)i =n−2/dΓ(ℓ+ 2/d) Γ(ℓ)(p(x)|Vd|)−2/dΨ2(x) +o((k/n)2/d), 24 and therefore, E[B2,n] =n−2/d kkX ℓ=1Γ(ℓ+ 2/d) Γ(ℓ)Z X(p(x)|Vd|)−2/dΨ2(x) dQ(x) +o((k/n)2/d). Proof of the expansion of E[B2 2,n]: It is sufficient to show E[B2 2,n]−E[B2,n]2=o((k/n)4/d), because in the first part of the proof, we have shown that E[B2,n]2has the same leading term. However, setting for ε >0 to be fixed later, B2,n,ε:=k−1kX ℓ=1Z Xn g Xˆiℓ(x) −g(x)o 1(ˆτℓ(x)≤ε/k) dQ(x), we first note that |B2,n−B2,nε| ≤K′Z Xˆτk(x)1(ˆτk(x)> ε/k ) dQ(x), where K′is the Lipschitz constant of g. Using Lemma 6, it is not difficult to show that E[B2 2,n]−E[B2,n]2=E[B2 2,n,ε]−E[B2,n,ε]2+o k n4/d! . 
For simplicity of notations, we will denote B2,n,εbyB2,nin what follows. We can write the the leading term as E[B2,n]2=1 k2X (ℓ,ℓ′)∈J1;kK2Z X2Bℓ(x)Bℓ′(y) dQ⊗2(x, y), where Bℓ(a) :=Eh g(Xˆiℓ(x))−g(a) 1(ˆτℓ(a)≤ε/k)i . On the other hand, from Eqs. (3) and (4) of the proof of Theorem 1 in Appendix VIII-A, because we work in dimension d≥3, the leading term in the second-order bias is E[B2 2,n] =1 k2kX ℓ,ℓ′=1Z X2Bℓ,ℓ′(x, y) dQ⊗2(x, y) +o((k/n)4/d), where Bℓ,ℓ′(x, y) :=E[(g(Xˆiℓ(x))−g(x))(g(Xˆiℓ′(y))−g(y))1(ˆτℓ(x) + ˆτℓ′(y)<∥y−x∥)]. Using a similar argument as before, we can replace Bℓ,ℓ′(x, y)by the same expectation but restricted to the event {ˆτℓ(x)≤ε/k} ∩ { ˆτℓ′(y)≤ε/k}and for simplicity of notation, we will still denote by Bℓ,ℓ′(x, y)this restricted version. All we need to do is to compare Bℓ(x)Bℓ′(y)andBℓ,ℓ′(x, y). Because the support of Qis included by the interior of XandXis compact, we can find ε >0such that for all xon the support of Q,B(x, ε)⊂X. Then, for any point aon the support of Qand any Ra≤ε, we can express the following expectation using the polar coordinates: E[(g(Xˆiℓ(a))−g(a))|ˆτℓ(x) =Ra] =I(a, R a), (11) 25 where I(a, R a) :=1R Sd−1p(a+Raθ) dσ(θ)Z Sd−1(g(a+Raθ)−g(a))p(a+Raθ) dσ(θ) . Moreover, from Eq. (5) of Proof of Theorem 1, for any pair of points (x, y)on the support ofQandRx+Ry<∥x−y∥andRx, Ry≤ε, we have E[(g(Xˆiℓ(x))−g(x))(g(Xˆiℓ′(y))−g(y))|ˆτℓ(x) =Rx,ˆτℓ′(y) =Ry] =I(x, R x)I(y, R y),(12) Theβ-H¨older continuity of ∇2gand the η-H¨older continuity of ∇pimply that g(a+Raθ)−g(a) =∇g(a)⊤z+z⊤∇2g(a)z+O R2+β a ,and p(a+Raθ)−p(a) =∇p(a)⊤θRa+O R1+η a . Thus, the integral in the definition of I(a, R a)can be expressed as 2-5:=Z Sd−1(g(a+Raθ)−g(a))p(a+Raθ) dσ(θ) =Z Sd−1 ∇g(a)⊤θRa+1 2θ⊤∇2g(a)θR2 a+O R2+β a ×(p(a) +∇p(a)⊤θRa+O R1+η a ) dσ(θ) =Z Sd−1 ∇g(a)⊤θ∇p(x)⊤θR2 a+1 2θ⊤∇2g(a)θR2 ap(a) dσ(θ) +O R2+κ a =p(a)Ψ2(a)σ(Sd−1)R2
a+O R2+κ a , where κ:= min( η, β). We also usedR Sd−1θdσ(θ) = 0 . Here, Ψ2(a) :=1 σ(Sd−1)Z Sd−1θ⊤∇g(a)⊤∇p(a) p(a)+∇2g(a) 2 θdσ(θ), as defined in the statement of the theorem. By the same expansion of pand the fact thatR Sd−1θdσ(θ) = 0 , we can express the denominator of I(a, R a)as Z Sd−1p(a+Raθ)dσ(θ) =p(a)σ(Sd−1) +O(R1+η a). (13) Here, we choose ε >0small enough in such a way the previous quantity is lower bounded by a positive constant, which is always possible since pis lower bounded on X. Plugging this result into Eqs. (11) and (12), we conclude E[(g(Xˆiℓ(a))−g(a))|ˆτℓ(a) =Ra] = Ψ 2(a)R2 a+O(R2+κ a),and E[(g(Xˆiℓ(x))−g(x))(g(Xˆiℓ′(y))−g(y))|ˆτℓ(x) =Rx,ˆτℓ′(y) =Ry] = Ψ 2(x)Ψ2(y)R2 xR2 y+O(R2+κ xR2+κ y). As a consequence, using Lemma 5, we obtain Bℓ(x)Bℓ′(y) = Ψ 2(x)Ψ2(y)J(1) ℓ(x)J(1) ℓ′(y) +o (k/n)4/d ,and Bℓ,ℓ′(x, y) = Ψ 2(x)Ψ2(y)J(2) ℓ,ℓ′(x, y) +o (k/n)4/d , 26 where for a∈Xand1≤j≤n,J(1) j(a) :=E[ˆτj(a)21(ˆτj(a)≤ε/k)]and J(2) ℓ,ℓ′(x, y) :=E[ˆτℓ′(x)2ˆτℓ′(y)21(ˆτℓ(x) + ˆτℓ′(y)<∥y−x∥)1(ˆτℓ(x)≤ε/k)1(ˆτℓ′(y)≤ε/k)]. It remains to show that the difference D:=J(1) ℓ(x)J(1) ℓ′(y)−J(2) ℓ,ℓ′(x, y)is negligible. Denoting by fℓ,xthe probability distribution of ˆτℓ(x), we first observe that J(1) ℓ(x)J(1) ℓ′(y) =J(1) ℓ,ℓ′(x, y) +J(3) ℓ,ℓ′(x, y), with J(3) ℓ,ℓ′(x, y) :=Z [0,ε/k]2r2 1r2 21(r1+r2≥ ∥x−y∥)fℓ,x(r1)fℓ′,y(r2) d(r1, r2) ≤E ˆτℓ(x)21(ˆτℓ(x)≥ ∥x−y∥/2) ·E ˆτ2 ℓ′(y) +E ˆτℓ(x)2 ·E ˆτ2 ℓ′(y)1(ˆτℓ′(y)≥ ∥x−y∥/2) ≤C(k/n)4exp −L(n/k)∥x−y∥d/2d , where the last bound is obtained from Lemma 4 and Lemma 6 with suitable positive constants C, L. By integrating with respect to xandy, we get Z X2J(3) ℓ,ℓ′(x, y) dQ⊗2(x, y)≤2Cq(k/n)4/dZ∞ 0rd−1exp −L(n/k)rd/2d dr=o (k/n)4/d . It then remains to show that 2-6:= J(1) ℓ,ℓ′(x, y)−J(2) ℓ,ℓ′(x, y) is negligible compared to the leading term. Using the distributions from Lemmas 7 and 8, and then by Eq. 
(13), we can show that 2-6≤Z [0,ε/k]2Fx(r1)ℓ−1Fy(r2)ℓ′−1O (r1r2)d+1 |Dx,y|(r1, r2) d(r1, r2), where Dx,y(r1, r2) :=Cn,ℓCn,ℓ′(1−Fx(r1))n−ℓ(1−Fy(r2))n−ℓ′ −n(n−1)n−2 ℓ−1n−ℓ−1 ℓ′−1 ×1(r1+r2<∥y−x∥) (1−Fx(r1)−Fy(r2))n−ℓ−ℓ′ =Cn,ℓCn,ℓ′1(r1+r2<∥y−x∥) ×n (1−Fx(r1))n−ℓ(1−Fy(r2))n−ℓ′−(1 +o(1))(1 −Fx(r1)−Fy(r2))n−ℓ−ℓ′o , where Cℓ,n:=n n−1 ℓ−1 andCℓ′,n:=n n−1 ℓ′−1 and we used n(n−1)n−2 ℓ−1n−ℓ−1 ℓ′−1 =Cℓ,nCℓ′,n(1 +o(1)). Therefore, we are left to analyse the following difference for (r1, r2)∈[0, ε/k]2such thatr1+r2<∥x−y∥: (1−Fx(r1))n−ℓ(1−Fy(r2))n−ℓ′−(1 +o(1))(1 −Fx(r1)−Fy(r2))n−ℓ−ℓ′ . 27 From Lemma 14, applied successively with r1= 0 orr2= 0, and Lemma 13, one can show that (1−Fx(r1))n−ℓ(1−Fy(r2))n−ℓ′equals exp(−n|Vd|(p(x)rd 1+p(y)rd 2)) +O(P(n, ℓ, ℓ′, r1, r2)) exp( −n|Vd|(p(x)rd 1+p(y)rd 2)Lk), for a suitable polynomial PandLk:= 1−1/(2k). Similarly, by Lemmas 14 and 13, (1−Fx(r1)−Fy(r2))n−ℓ−ℓ′equals to exp(−n|Vd|(p(x)rd 1−p(y)rd 2)) +O(P′(n, ℓ, ℓ′, r1, r2)) exp( −n|Vd|(p(x)rd 1+p(y)rd 2)Lk), for another suitable polynomial P′. Applying Lemma 12, it is possible to show that 2-6 is negligible compared to (k/n)4/d, uniformly in (x, y, ℓ, ℓ′), when kd+1=o(n). Before applying Lemma 12, one can note that for0≤r≤ε/k,1≤ℓ≤k, and x∈eX, we have Fx(r)ℓ−1≤ p(x)|Vd|ℓ−1(1 +O(ε/k))k−1 and the second factor is bounded in k. C. Proof of Theorem 3 We use a general result stating that for any integrable non-negative random variable Z and any t≥0, E[(Z−E[Z])1(Z≥t)]≥0. It is clear if t≥E[Z], and if t <E[Z], then we can write E[(Z−E[Z])1(Z≥t)] =E[(Z−E[Z])(1−1(Z < t ))] =−E[(Z−E[Z])1(Z < t )]≥0. It also means that for any non-negative random variable R, we have P(Z≥t)≤E[Z1(Z≥t)] E[Z] ≤E[Z1(Z≥t)] E[R1(Z≥t)]E[R1(Z≥t)] E[Z]
≤E[Z1(Z≥t)] E[Z1(Z≥t)]−E[(Z−R)1(Z≥t)]E[R] E[Z]. Setting Z:=n kQ(Ak(X1)), R:=n kR X1(∥X1−x∥ ≤ˆτk(x))1(∥X1−x∥> a) dQ(x), and ad:=2kt 3n¯q|Vd|, we first have Z−R≤n kZ X1(∥X1−x∥ ≤a) dQ(x)≤n k¯q|Vd|ad≤2t 3. Therefore, E[Z1(Z≥t)] E[Z1(Z≥t)]−E[(Z−R)1(Z≥t)]≤E[Z1(Z≥t)] E[Z1(Z≥t)]−2tP(Z≥t)/3, and because for α≥0, the mapping x7→x x−αis non-increasing on (α,∞), it gives E[Z1(Z≥t)] E[Z1(Z≥t)]−E[(Z−R)1(Z≥t)]≤tP(Z≥t) tP(Z≥t)−2tP(Z≥t)/3= 3. 28 Second we have E[Z] = 1 and because ˆτk(x)is independent of the event (∥X1−x∥ ≤ ˆτk(x)) = (ˆ π−1(1)≤k)by Lemma 2, where ˆπdenotes the permutation giving the random indices of the order statistics of the sample (∥Xi−x∥)1≤i≤n, applying Lemma 5 yields E[R]≤n kZ XP(∥X1−x∥ ≤ˆτk(x))P(ˆτk(x)> a) dQ(x)≤e1/4exp −n kcp|Vd| 8ad! . Putting these two results together yields the statement of Theorem 3. D. Proof of Corollary 1 We first consider the case i= 2. Using the bias-variance decomposition ˆe2(h)−e(h) = B2,n+ (Fm,n+En), where B2,n:=E[ˆe2(h)|X]−e(h) =Z X1 knX i=1(g(Xi)−g(x))1(∥Xi−x∥ ≤ˆτk(x)) dQ(x), Fm,n:= ˆe2(h)−E[ˆe2(h)|X, Y] =1 mmX j=1ˆgn(X∗ j)−Z Xˆgn(x) dQ(x), and En:=E[ˆe2(h)|X, Y]−E[ˆe2(h)|X] =Z X1 knX i=1(h(Xi, Yi)−g(Xi))1(∥Xi−x∥ ≤ˆτk(x)) dQ(x), and using the fact that E[Fm,n|X, Y] = 0 whereas EnandBnare(X, Y)-measurable, and E[En|X] = 0 whereas BnisX-measurable, we find that E[(ˆe2(h)−e(h))2] =E[B2 2,n] +E[F2 m,n] +E[E2 n]. The control of the second-order bias was established in Theorem 1. For the variance term, note that, conditionally on (X, Y),Fm,nis an average of i.i.d. centered random variables, so that E[F2 m,n] =1 mE[Var(ˆ gn(X∗)|X, Y)] ≤1 mEZ Xˆg2 n(x) dQ(x) ≤1 mZ XE" 1 knX i=1h2(Xi, Yi)1(∥Xi−x∥ ≤ˆτk(x))# dQ(x)by Cauchy-Schwarz inequality, =1 mZ XE[Var( h(X, Y)|X) +g(X)2] ≤¯σ2+∥g∥2 ∞ m. Now observe that En=1 nPn i=1 n kQ(Ak(Xi)) (h(Xi, Yi)−g(Xi)), and that conditionally onX, the random variables Q(Ak(Xi))(h(Xi, Yi)−g(Xi)), i= 1, . . . , n are centered and independent. 
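The proof of Theorem 3 above opens with the elementary inequality E[(Z − E[Z])1(Z ≥ t)] ≥ 0, which yields the Markov-type bound P(Z ≥ t) ≤ E[Z 1(Z ≥ t)]/E[Z] for non-negative Z. A minimal numerical sketch on a made-up discrete distribution (the values and weights below are purely illustrative):

```python
# Hypothetical discrete toy distribution: values with probability weights
vals  = [0.2, 0.5, 1.0, 2.5, 4.0]
probs = [0.1, 0.3, 0.3, 0.2, 0.1]

def expect(f):
    return sum(p * f(z) for z, p in zip(vals, probs))

EZ = expect(lambda z: z)  # E[Z] = 1.37 for this toy distribution
for t in [0.0, 0.3, 0.9, 1.5, 3.0, 5.0]:
    tail_mass = expect(lambda z: 1.0 if z >= t else 0.0)       # P(Z >= t)
    tail_mean = expect(lambda z: z if z >= t else 0.0)         # E[Z 1(Z >= t)]
    centred   = expect(lambda z: (z - EZ) if z >= t else 0.0)  # E[(Z - E[Z]) 1(Z >= t)]
    assert centred >= -1e-12                      # the elementary inequality
    assert tail_mass * EZ <= tail_mean + 1e-12    # hence P(Z >= t) <= E[Z 1(Z >= t)] / E[Z]
```

Both inequalities hold for every threshold t, as in the proof: for t ≤ E[Z] the mean constraint makes the centred tail expectation non-negative, and for t > E[Z] every term in it is non-negative.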
Thus, using the boundedness assumption (X7) on the conditional variance, we have E[E2 n]≤¯σ2 nEn kQ(Ak(X1))2 . 29 Now, applying Theorem 3 yields En kQ(Ak(X1))2 =Z∞ 02tPn kQ(Ak(X1))≥t dt ≤6e1/4Z∞ 0texp −cp 12¯qt dt ≤6e1/412¯q cp2 . It means that setting KP,Q,h :=2533e1/4¯q2¯σ2 c2p2, we have E[E2 n]≤KP,Q,h1 n. This concludes the proof for the case i= 2. If now i= 1, the variance part V1,n:= ˆe1(h)−e(h)−B1,nhas been already studied in Portier et al. [2024], Theorem 1. In particular, we have E[V2 1,n] =O m−1+n−1 . The proof then follows from Theorem 1. E. Proof of Theorem 4 For the global average treatment effect, we can write B=1 NNX i=1 Wi=1 1 kkX ℓ=1 g0(Xi)−g0 Xˆiℓ(Xi)! −1 NNX i=1 Wi=0 1 kkX ℓ=1 g1(Xi)−g1 Xˆiℓ(Xi)! . We work conditionally on the treatment Wand we apply twice Theorem 1, first by setting Pthe distribution of Xgiven W= 0 andQthe distribution of Xgiven W= 1, and second by inverting the roles of W= 0 andW= 1. Assumptions (T3) and (T5) along with Bayes’ theorem ensure that both conditional densities f0andf1are bounded away from zero and from infinity. Applying Theorem 1, the two terms in the bias are of order N1 N k N0min{3/2,2/d} andN0 N k N1min{3/2,2/d} , respectively, where it is assumed that N0or N1do not vanish. However, using Assumption (T5), one can integrate this bounds and get the required rates. Indeed, setting a:= min {3,4/d}andpw=P(W=w)forw∈ {0,1}, we have
for some δ∈(0,1), E NaN−a w1(Nw≥1) ≤E NaN−a w1(Nw≤(1−δ)Npw) +Na((1−δ)Npw)−a ≤NaP(Nw≤(1−δ)Npw) + (1 −δ)−ap−a w ≤Naexp −δ2Npw 2 + (1−δ)−ap−a w, which is bounded in N. The last inequality is based on the Chernoff concentration bound on binomial random variables, see for instance Boucheron et al. [2013]. The previous bounds complete the proof. 30 F . Proof of Proposition 2 1) For z∈[0,1], let(z, z2),(1, z)and(z,0)be three possible points of the boundary of X. We give a lower bound of the distance between (x, y)∈Xand the point (z, z2) using the point (x, x2). We have (x2−y)2≤2(x2−z2)2+ 2(z2−y)2≤8(x−z)2+ 2(z2−y)2, which yields ∥(x, y)−(z, z2)∥2≥1 8(x2−y)2. On the other hand, we have ∥(x, y)−(1, z)∥2≥(1−x)2and∥(x, y)−(z,0)∥2≥y2. We then obtain δ((x, y),Xc)2≥min{(x2−y)2/8,(1−x)2, y2}, so that exp(−Lδ((x, y),Xc)2)≤exp(−L(x2−y)2/8) + exp( −L(1−x)2) + exp( −Ly2). Since √ LZ X exp(−L(1−x)2) + exp( −Ly2) d(x, y)≤2Z R+exp(−z2) dz, and √ LZ Xexp(−L(x2−y)2/8) d(x, y)≤√ LZ1 0Zx2 0exp(−Lz2/8) dzdx≤Z R+exp(−z2/8) dz, we conclude that Assumption (A) is verified. On the other hand, if x≤ε/2, B((x, y), ε/2)∩X⊂ {(x, y)∈R2:x∈[0, ε], y∈[0, x2]}, whose volume is upper bounded by Zε 0x2dx=ε3 3. Condition (X2) is not verified since a lower bound by ε2, up to a positive constant, is impossible. 31 2) Ignoring the normalisation constant 1/|X|and setting for k≥1, ck:=bk−ak, we have √ LZ Xexp(−Lδ(x,Xc)2) dx=√ L∞X k=1Z Ckexp(−Lδ(x,Xc)2) dx ≥√ L∞X k=1Z ak≤∥x∥≤ak+bk 2exp(−L(∥x∥ −ak)2) dx =√ L σ(S1)∞X k=1Zak+bk 2 akexp(−L(r−ak)2)rdr =√ L σ(S1)∞X k=1Zck 2 0exp(−Lr2)(r+ak) dr ≥√ L σ(S1)∞X k=1akZck 2 0exp(−Lr2) dr =σ(S1)∞X k=1akZck√ L 2 0exp(−r2) dr. When L→ ∞ , monotone convergence ensures that the latter lower bound goes to σ(S1)Z∞ 0exp(−r2) dr∞X k=1ak=∞, which shows that Condition (A) is not verified. Now we show that Xsatisfies Condition (X2). Let x∈X, suppose first that ∥x∥ ≤2ε. Setting for k≥1, δk:=ak−1−bk, we have the bound |B(x, ε)∩X| ≥πε2−Cε∞X k=1δk1(bk<∥x∥+ε). 
Indeed, we have to subtract to the area of the disc the areas of all its intersections with some rings with thickness δk. Any intersection of this type has an area bounded byCδkεwith some C > 0not depending on x, k, ε . Because δk≤δ/(k+ 1)4and 1/(k+ 1) = ak≤bk, we then obtain |B(x, ε)∩X| ≥πε2−Cδε∞X k=1∥x∥+ε (k+ 1)3≥πε2−3Cδε2∞X k=11 (k+ 1)3, and the lower bound follows if δis small enough. We next assume that ∥x∥>2εand we consider three cases. •Assume first that there is no circle with radius althat intersects B(x, ε). Let k≥0 such that x∈ Ck. Then the ball of center x−εu/2, where u=x/∥x∥and of radius ε/2is included in Ck∩B(x, ε)and|B(x, ε)∩X| ≥πε2/4. •Suppose next that there is only one circle with radius aℓthat intersects the ball B(x, ε). Since x∈X, we only have two possibilities. The first one is ak≤ ∥x∥ −ϵ <∥x∥ ≤bk< a k−1≤ ∥x∥+ε 32 for some k≥1. In this case, a ball of radius ε/2can be included in B(x, ε)∩ Ck as above. The second case is ak+1<∥x∥ −ε≤ak≤ ∥x∥<∥x∥+ε < a k−1. In this case, since ε≤ak−1−ak=bk−ak+δk≤2(bk−ak), we get bk−ak≥ε/2 and we deduce that B(x, ε)∩ Ckcontains a ball of radius ε/4. •Finally,
we consider the case for which j≥2circles with radius aℓintersect the ballB(x, ε). Suppose that ak, . . . , a k+j−1are the radii of the circles that intersect B(x, ε). It automatically means that 2ε ∥x∥2−ε2= (∥x∥ −ε)−1−(ε+∥x∥)−1≥a−1 k+j−1−a−1 k=j−1. We then get ∥x∥2≤ε2+ 2ε/(j−1). However, |B(x, ε)∩X| ≥πε2−ηεPj s=0δk+s for some universal constant η >0. Indeed, we have to subtract from the area of the ball some of its intersections with rings of thickness δsfor some values of s. Note that the case s=khas to be considered. Besides jX s=0δk+s≤δ(j+ 1) (k+ 1)4≤2δ(∥x∥+ε)4ε ∥x∥2−ε2+ 1 . Using our inequalities 4ε2≤ ∥x∥2≤ε2+ 2ε, we obtain that |B(x, ε)∩X| ≥πε2−Dδε2 for some D > 0not depending on xandε. This shows (X2) if δ >0is small enough. G. Proof of Theorem 5 1) Suppose that Xis a compact convex subset of Rdwith a non-empty interior. We show thatXsatisfies Condition (X2) by constructing the cone C1depicted in Figure 5 as follows. Let B:=B(z0, ε)be an open ball contained in Xand let x∈X. There exists z∈Bsuch that B(z, ε/2)⊂Band∥z−x∥ ≥ε/2. Let Hbe the hyperplane ofRdcontaining zand orthogonal to the line segment [x;z]and let B′:=B∩H. Then the cone C1= Conv( x, B′)is included in Xby convexity. It means that for all 0≤r≤diam( X),|B(x, r)∩X| ≥ |B(x, r)∩C1|. However, if r < ε/ 2, then ris smaller than the height of the cone. The proportion of the ball B(x, r)contained in the cone as a volume is an increasing function of the cone’s angle with vertex x. This angle is always greater than arctan( ε/(2 diam( X))). Indeed, if u∈B′with∥u−z0∥=ε, the tangent of this angle equals the ratio ∥u−z∥/∥z−x∥, with ∥z−x∥ ≤diam( X)and∥u−z∥ ≥ε/2, since B(z, ε/2)⊂B. It means that there exists a constant c0∈(0,1)determined by d, ε, and diam( X), such that for all 0≤r≤ε/2,we have |B(x, r)∩X| ≥c0|B(x, r)|. Then Assumption (X2) is satisfied for c= ε 2 diam( X)d c0>0. 33 x zz0B(x, r ) C1 ϵ 2X Fig. 5: Illustration for the proof of Theorem 5-1. 
Inside X, we construct a cone C1and a sufficiently large angle with vertex xso that its intersection with B(x, r)will have a volume that grows proportionally to |B(x, r)|asrincreases. Such a cone can be constructed by taking the convex hull of the ball B(z, ε/2)andx. ai(x)∈Rd−1yi(x)∈R xz ∂X XB(x, r ) C1 Fig. 6: Illustration for the proof of Theorem 5-3. when x∈Xis close to the border. Take z∈∂Xby up-shifting the coordinate yzofz. The ball B(x, r)has a larger intersection withXthan B(zs, r). To show |B(x, r)∩X| ≥c|B(x, r)|for some constant c >0, we construct a cone C1that clips at least a constant proportion of B(x, r)inside X, similarly to Figure 5. We can find such a cone with a positive vertex angle using the fact that a coordinate (the vertical axis in the figure) of the graph of ∂Xas a function of the other coordinates cannot deviate at a rate larger than a Lipschitz constant. We can ensure this by taking C1in a way that its surface does not drop vertically, thanks to the Lipschitz continuity
of the boundary. The vertex angle, and thus the clipped volume, can depend on the location z, but its minimum is still positive thanks to the compactness.

Fig. 7: Illustration for the proof of Theorem 5-2 with the Euclidean norm ∥·∥. Without loss of generality, suppose that |t_x(1)| = min_{1≤j≤d} |t_x(j)|. The norm ∥z_x − x∥ is larger than that of the perpendicular segment between x and v: ∥z_x − x∥ ≥ ∥x − v∥ = |t_x(1)|/√d.

Fig. 8: Illustration for the proof of Theorem 5-4. The distance ∥x − z∥ is lower bounded, up to a constant factor, by h_j(x) := |φ_j(b_{j,x}) − ψ_{j,x}|. Here h_j(x) is simply the distance between x and the point on the graph having the same d−1 first coordinates b_{j,x}. The exponential function x ↦ exp(−L δ(x, X^c)^d) will then be integrated first with respect to the last coordinate.

2) Suppose that X is a compact convex subset of R^d with a non-empty interior. We show that X satisfies Condition (A). First we recall that the boundary of a convex subset has a vanishing Lebesgue measure; for completeness, we give a short proof of this fact. Without loss of generality, we assume that 0 ∈ X°. If x ∈ ∂X, then for any ε ∈ (0,1), (1−ε)x ∈ X°. Indeed, (1−ε)x is located in the interior of the convex cone {(1−λ)x + λu : λ ∈ [0,1], u ∈ B(0,r)}, which is contained in X as soon as B(0,r) ⊂ X. We then deduce that ∂X ⊂ (1−ε)^{−1}X° \ X° and

Leb(∂X) ≤ Leb((1−ε)^{−1}X°) − Leb(X°) = (1/(1−ε)^d − 1) Leb(X°).

Letting ε → 0, we get the result. Now, let x ∈ X°. We denote by z_x a point of ∂X that minimises the distance between x and the compact set ∂X. For a real number y, we denote by s(y) the sign of y: s(y) := 1 if y ≥ 0 and s(y) := −1 if y < 0. We also set s_x(i) := s(e_i^⊤ z_x − e_i^⊤ x), where e_1, …, e_d are the canonical basis vectors of R^d. Define

t_x(i) := s_x(i) inf{t > 0 : x + t s_x(i) e_i ∈ ∂X}.

We will prove that min_{1≤i≤d} |t_x(i)| ≤ √d ∥x − z_x∥ (see Figure 7). Suppose, to the contrary, that min_{1≤i≤d} |t_x(i)| > √d ∥x − z_x∥.
We will obtain a contradiction by showing that zxcan be only located in the interior of X. Indeed, if λ1, . . . , λ dare non-negative real numbers such thatPd i=1λi= 1, thenPd i=1λiz(i) x=x+Pd i=1λitx(i)eiare elements of the convex set X, where z(i) x=x+tx(i)ei. Suppose that zx=x+Pd i=1βiei, where βihas the same sign as tx(i)fori= 1, . . . , d . Setting for some t >0, λi:=tβi tx(i)=t|βi| |tx(i)|, i= 1, . . . , d, the conditionPd i=1λi= 1 is equivalent to t= dX i=1|βi|/|tx(i)|!−1 . Note that t >1because dX i=1|βi| |tx(i)|<Pd i=1|βi|√ d∥x−zx∥=Pd i=1|βi| √ dqPd i=1β2 i≤1, where the last inequality follows from the Cauchy-Schwarz inequality. We deduce that x+t(zx−x)∈Xfor some t >1and then zxis the interior of X, which contradicts the fact that it is a point on the boundary. We conclude that min 1≤i≤d|tx(i)| ≤√ d∥x−zx∥. By the equivalence of the norms, there exists η >0such that ∥x−zx∥ ≥ηmin 1≤i≤d|tx(i)|. But from its definition, we have the lower bound |tx(i)| ≥gi(x) := inf {∥x−z∥:z∈∂X, zj=xj, j̸=i}. Next for i= 1, . . . , d , let Ii((xj)j̸=i) :={zi∈R:z∈Xifzj=xj, j̸=i} 36 be the
ith section of the convex set Xat point x. By convexity, these sections are closed intervals, that is Ii((xj)j̸=i) = [a(xj)j̸=i, b(xj)j̸=i] for some real numbers a(xj)j̸=i< b(xj)j̸=i. We now have the bound exp −Lδ(x,Xc)d ≤dX i=1exp −Lηdgi(x)d . Moreover, for i= 1, . . . , d , we have Zb(xj)j̸=i a(xj)j̸=iexp(−Lηdgi(x)d) dxi=Zb(xj)j̸=i a(xj)j̸=iexp(−Lηdmin{xi−a(xj)j̸=i, b(xj)j̸=i−xi)}dxi = 2L−1/dZ(b(xj)j̸=i−a(xj)j̸=i)L1/d 0exp(−ηdvd) dv. Assuming that X⊂[r, R]dfor some real numbers r < R , from the Fubini therorem, we then deduce that 1-1:=L1/dZ Xexp(−Lδ(x,Xc)d) dQ(x) ≤L1/d¯qdX i=1Z xj∈[r,R],j̸=i(Zb(xj)j̸=i a(xj)j̸=iexp(−Lηdgi(x)d) dxi)Y j̸=idxj ≤2d(R−r)d−1¯qZ∞ 0exp(−ηdvd) dv. The last upper bound is finite and the proof follows. 3) Suppose that Xis the closure of a bounded open set whose boundary ∂Xis a(d−1)- dimensional submanifold of Rdof class C1. We show that Xsatisfies Condition (X2) by constructing the truncated cone C1depicted in Figure 6. We use the description of submanifolds of Rdthrough graphs; see for instance Lafontaine et al. [2015, The- orem 1.21]: for all z∈∂X, there exist an open neighbourhood VzofzinRd, an permutation of the coordinates Lz∈ O d(R), an open subset AzofRd−1, and a map φz∈C1(Bz,R)such that Vz∩∂X={Lz(a, φ z(a)) :a∈Az}. If we reduce Vzand Az, we can assume that VzandAzare convex, that Vz⊂Lz(Az×R), and that the gradient ∇φzis bounded by Kz≥0onAz. By connectedness, we can show that Vz∩Xis either included in the subgraph or the supergraph of φz. We give a short proof below. We partition Vz⊂Lz(Az×R)into the boundary ∂X∩Vz, the supergraph V+ z:= {Lz(a, y)∈Vz:y > φ z(a)}, and the subgraph V− z:={Lz(a, y)∈Vz:y < φ z(a)}. Because φzis continuous, V+ zis a connected open subset of Rd, which is covered by the two open sets X◦and(¯X)c, so that one of the two intersections V+ z∩X◦andV+ z∩(¯X)c is empty. The same applies to V− z. We cannot have V+ z∩(¯X)c=V− z∩(¯X)c=∅since it would imply that z∈X◦. Then we either have Vz∩X◦⊂V− zorVz∩X◦⊂V+ z. 
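Earlier in this proof (part 2), the integral of exp(−Lη^d g_i(x)^d) over each one-dimensional section [a, b] was evaluated by noting that g_i(x) = min{x_i − a, b − x_i} there, so the integral splits symmetrically at the midpoint. A quick numeric check of that splitting identity (the endpoints a, b and the constant c below are arbitrary stand-ins for the section endpoints and Lη^d):

```python
import math

def trap(f, lo, hi, n=100_000):
    # Simple trapezoidal quadrature on [lo, hi]
    h = (hi - lo) / n
    s = (f(lo) + f(hi)) / 2 + sum(f(lo + i * h) for i in range(1, n))
    return s * h

a, b, c, d = -0.3, 1.7, 4.0, 2   # hypothetical section endpoints, constant, dimension
# Integral of exp(-c * min(x - a, b - x)^d) over the section [a, b] ...
lhs = trap(lambda x: math.exp(-c * min(x - a, b - x) ** d), a, b)
# ... equals twice the integral over half the section length, by symmetry
rhs = 2 * trap(lambda v: math.exp(-c * v ** d), 0.0, (b - a) / 2)
print(lhs, rhs)  # the two values agree
```

The distance-to-endpoint function is symmetric about the midpoint (a + b)/2, which is all the identity uses; taking the half-length upper limit to infinity then gives the uniform bound used in the proof.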
Because z∈∂Xis in the closure of X◦, reducing further Vz, we can assume that Vz∩X◦is exactly either the subgraph or the supergraph of φz. 37 Because ∂Xis compact, there exist z1, . . . , z k∈∂Xfor some k∈Nsuch that Vz1, . . . , V zkis a subcover of ∂X. In what follows, we simply write Vi:=Vzi, Ai:= Azi, Li:=Lzi, φi:=φzi, and Ki:=Kzi. First, by compactness, we can find εb>0 such that for all x∈X\∪k i=1Vi, δ(x, ∂X)≥εb. We set S:=∪z∈∂XB(z, εb/2)a compact subset of ∪k i=1Vi. Let x∈X, ifx̸∈S, then xis sufficiently in the interior of X, so that B(x, εb/2)⊂X. It implies that for all r∈[0;εb/2],|B(x, r)∩X|=|B(x, r)|, and for r∈[εb/2; diam( X)], we can write |B(x, r)∩X| ≥|B(x,εb/2)∩X| |B(x,diam( X))|× |B(x,diam( X))| ≥ εb 2 diam( X)d |B(x, r)|. We now consider the case where x∈S⊂ ∪k i=1Viis close to the boundary. Then there exist 1≤i≤kandεA>0such that B(x, εA)⊂Vi. By compactness of S,εAcan be chosen independently from x. We set ai(x)∈Aiandyi(x)∈Rsuch thatx=Li(ai(x), yi(x)). We will assume that is Vi∩Xis the subgraph of φi, the supergraph case being dealt with in the same way. We now consider the point z:=Li(ai(x), φi(ai(x)))on the boundary ∂Xthat is right above x. By
the mean value inequality, for all a∈Ai,|φi(ai(x))−φi(a)| ≤ Ki∥ai(x)−a∥. This means that the slope of the boundary does not drop vertically around z. We construct the set C1depicted in Figure 6 as the intersection of the cone with vertex z, directed downside or towards x, with slope Ki, and the ball B(x, εA) that is included in Vi. Formally, C1:= Conv {Li(a, φ i(ai(x))−Ki∥ai(x)−a∥)|a∈Ai} ∩B(x, εA). It is included in Viand in the subgraph of φi, and therefore it is included in X. We can then control the proportion of balls centered in xthat are inside Xas in the convex case. If r∈[0;εA], then |B(x, r)∩X| ≥ B(x, r)∩C1 ≥c(d, K i, εA,diam( X))|B(x, r)|. Ifr∈[εA; diam( X)], then |B(x, r)∩X| ≥c(d, K i, εA,diam( X)) εA diam( X)d |B(x, r)|. In the end, we showed that Condition (X2) was satisfied for c:=εb 2 diam( X)d ∧min 1≤i≤kc(d, K i, εA,diam( X))×εA diam( X)d >0. 4) We keep the same notation as in the previous point and still consider a covering ∪k i=1Viof the boundary ∂Xsuch that on each Vi, the boundary ∂Xcan be represented as a graph of a Lipschitz mapping φi. Now, let x∈Xandz∈∂X, we want to lower bound the distance between xandz. Without loss of generality, we assume that Vi=B(zi, εi)for some zi∈∂Xandεi>0. Suppose first that x /∈ ∪k i=1Vi. Then, for z∈∂Xand1≤i≤k, ∥x−z∥ ≥ ∥ x−zi∥ − ∥ z−zi∥ ≥εi− ∥z−zi∥. We deduce that ∥x−z∥ ≥max 1≤i≤k{εi− ∥z−zi∥}. Note that the mapping z7→g(z) := max 1≤i≤k{εi− ∥z−zi∥} 38 is continuous (as a maximum of a finite number of continuous mappings) and positive on the compact set ∂X, we deduce that δ(x,Xc)≥infz∈∂Xg(z)>0. Next, suppose that for a non empty subset JofJ1;kK, x∈Vjforj∈Jandx /∈Vℓ forℓ /∈J. Suppose first that there is j∈Jsuch that z∈Vj, and set z:=Lj(a, φ j(a)) for some a∈Ajandx=Lj(bj,x, ψj,x)for some (bj,x, ψj,x)∈Aj×R. Our aim here is to lower bound the distance from xtozby its distance with respect to the point Lj(bj,x, ϕj(bj,x))located on the graph of ϕj. See Figure 8 for an illustration. 
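The reduction sketched here only uses the K_j-Lipschitz continuity of φ_j: for x = (b, ψ) and any graph point z = (a, φ(a)), the triangle inequality gives |ψ − φ(b)| ≤ |ψ − φ(a)| + K|a − b| ≤ (K + 1)∥x − z∥. A hedged two-dimensional illustration with an arbitrary 1-Lipschitz graph φ = sin (a toy choice, not a boundary from the paper):

```python
import math
import random

random.seed(0)
K = 1.0          # Lipschitz constant of phi
phi = math.sin   # toy 1-Lipschitz graph

for _ in range(10_000):
    # x = (b, psi): an arbitrary point; z = (a, phi(a)): a point on the graph
    b, psi = random.uniform(-3, 3), random.uniform(-3, 3)
    a = random.uniform(-3, 3)
    dist = math.hypot(b - a, psi - phi(a))   # ||x - z||
    vert = abs(psi - phi(b))                 # vertical gap above x, i.e. h(x)
    # triangle + Lipschitz: vert <= |psi - phi(a)| + K|a - b| <= (K + 1) dist
    assert vert <= (K + 1) * dist + 1e-12
```

Since the bound holds for every graph point z, it holds in particular for the closest one, which is exactly the lower bound ∥x − z∥ ≥ h_j(x)/(K_j + 1) used below.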
We have |ψj,x−φj(bj,x)| ≤ |ψj,x−φj(a)|+|φj(a)−φj(bj,x)| ≤ ∥x−z∥+Kj∥a−bj,x∥ ≤(Kj+ 1)∥x−z∥. Now if zis in the compact set ∂X\ ∪j∈JVj, we still have ∥x−z∥ ≥cJ:= inf z∈∂X\∪j∈JVjmax ℓ/∈J{εℓ− ∥z−zℓ∥}>0, as we obtained before when J=∅. Note that cJcoincides with inf z∈∂Xg(z)in the latter case. Setting η:= min 1≤i≤k(1 +Ki)−1, we conclude that δ(x, ∂X)≥ηmin{cJ, hJ(x)}, where hJ(x) := min j∈J L−1 j(x)d−φj πd−1◦L−1 j(z) , L−1 j(x)ddenotes the last coordinates of the vector L−1 j(x), and πd−1(x) = (x1, . . . , x d−1). Setting for J⊂J1;kK, VJ:=∩j∈JVj∩ ∩ ℓ∈J1;kK\JVc ℓ, the set {VJ:J⊂J1;kK}is a partition of Rdand we obtain 1-2=Z Xexp(−L δ(x,Xc)d) dQ(x) ≤X J⊂J1;kKZ BJexp(−L(ηcJ)d) dQ(x) +X J⊂J1;kK,J̸=∅Z BJexp(−L(ηhJ(x))d) dQ(x) ≤2k CL−1/d+kX i=1Z Bjexp −Lηd L−1 j(x)d−φj πd−1◦L−1 j(x) d dx! ≤C L−1/d+kX j=1Z L−1 jXexp(−Lηd|xd−φj(x1, . . . , x d−1)|d) dx! ≤C L−1/d+Z R+exp(−Lηdyd) dy ≤CL−1/d. For the third and the fourth inequality given above, we first use a change of variables involving the orthogonal transformation Ljand we then apply Fubini’s therorem using the fact that L−1 jXand its sections
are bounded sets and thus have a finite Lebesgue measure.

5) If X1 and X2 satisfy Condition (X2), then it follows directly from the definition that X1 ∪ X2 satisfies Condition (X2).

6) Suppose that X1 and X2 satisfy Condition (A). We have, for all x ∈ R^d, δ(x, (X1 ∪ X2)^c) ≥ max{δ(x, X1^c), δ(x, X2^c)}. Therefore, if L_X := sup_{L>0} { L^{1/d} ∫_X exp(−L δ(x, X^c)^d) dQ(x) }, then L_{X1∪X2} ≤ L_{X1} + L_{X2} < ∞, and X1 ∪ X2 satisfies Condition (A).

H. Proof of Proposition 3

Let δ(x) denote the distance between x and the complement X^c of X. Suppose first that Q(A_ϵ) = O(ϵ). Note that Q(X ∩ {δ = 0}) = lim_{r→∞} Q(X ∩ {δ ≤ 1/r}) = 0. It is sufficient to show that sup_{L>0} L^{1/d} ∫_{X∩{0<δ≤1}} exp(−L δ(x)^d) dQ(x) < ∞. We have

1-1 = L^{1/d} ∫_{X∩{0<δ≤1}} exp(−L δ(x)^d) dQ(x)
 = L^{1/d} Σ_{r=0}^∞ ∫_{X∩{2^{−r−1}<δ≤2^{−r}}} exp(−L δ(x)^d) dQ(x)
 ≤ L^{1/d} Σ_{r=0}^∞ Q(2^{−r−1} < δ ≤ 2^{−r}) exp(−L 2^{−d(r+1)})
 ≤ C L^{1/d} Σ_{r=0}^∞ 2^{−r} exp(−L 2^{−d(r+1)})
 ≤ 4C L^{1/d} Σ_{r=0}^∞ ∫_{2^{−r−2}}^{2^{−r−1}} exp(−L y^d) dy
 ≤ 4C ∫_0^∞ exp(−y^d) dy,

which is finite. Conversely, suppose that Condition (A) is satisfied. Setting L = ϵ^{−d} for some ϵ > 0, we have L^{1/d} ∫_X exp(−L δ(x)^d) dQ(x) ≥ L^{1/d} exp(−L ϵ^d) Q(A_ϵ) = e^{−1} Q(A_ϵ)/ϵ. This shows that Q(A_ϵ) = O(ϵ), which is equivalent to the desired assertion.

I. Proof of Theorem 6

Applying Taylor's formula to the regression function g(x) = E[h(X, Y) | X = x] yields g(z) = ζ(x, z)^⊤ G(x) + R(x, z), where G_j(x) := ∂^{Ψ(j)} g(x) / Ψ(j)! and

R(x, z) := (1/(l−1)!) ∫_0^1 (1−t)^{l−1} (∇^l g(x + t(z−x)) − ∇^l g(x))(z−x, …, z−x) dt.

Then

γ̂ = argmin_{γ∈R^{K∗}} Σ_{i=1}^n [(G(x) − γ)^⊤ ζ(x, X_i) + (h−g)(X_i, Y_i) + R(x, X_i)]² 1(∥X_i − x∥ ≤ τ̂_k(x))
 = G(x) + M^{−1}(x) Σ_{i=1}^n (R(x, X_i) + (h−g)(X_i, Y_i)) ζ(x, X_i) 1(∥X_i − x∥ ≤ τ̂_k(x)).

Thus we have

B_n = ∫_X (E[e_1^⊤ γ̂ | X] − g(x)) dQ(x) = ∫_X Σ_{i=1}^n R(x, X_i) e_1^⊤ M^{−1}(x) ζ(x, X_i) 1(∥X_i − x∥ ≤ τ̂_k(x)) dQ(x),

B_n² ≤ ∫_X k Σ_{i=1}^n R(x, X_i)² (e_1^⊤ M^{−1}(x) ζ(x, X_i) 1(∥X_i − x∥ ≤ τ̂_k(x)))² dQ(x)
 ≤ (Λ²/l!) k ∫_X τ̂_k(x)^{2(l+β)} e_1^⊤ (Σ_{i=1}^n M^{−1}(x) ζ(x, X_i) ζ(x, X_i)^⊤ M^{−1}(x) 1(∥X_i − x∥ ≤ τ̂_k(x))) e_1 dQ(x)
 = (Λ²/l!) k ∫_X τ̂_k(x)^{2(l+β)} e_1^⊤ M^{−1}(x) e_1 dQ(x),

E[B_n²] ≤ (Λ²/l!) k ∫_X E[τ̂_k(x)^{2(l+β)} E[e_1^⊤ M^{−1}(x) e_1 | τ̂_k(x)]] dQ(x) ≤ C_X (Λ²/l!) (k/n)^{2(l+β)/d},

using the following key lemma.

Lemma 1.
Suppose that Assumptions (X1) to (X3) hold true, then there exists a constant CXdepending on d, l, and Psuch that almost surely, for all k≥(2D+ 1)K∗+ 1 and Q-almost all x∈X, E[e⊤ 1M−1(x)e1|ˆτk(x)]≤CX/k. Proof. IfL= 0, then M−1(x) = 1 /k, so now we suppose that L≥1. We follow the same proof structure as in [Holzmann and Meister, 2024]. Fix x∈X. Letube the first column of 41 the random matrix M−1(x), then M(x)u=e1ande⊤ 1M−1(x)e1=u1=u⊤e1=u⊤M(x)u, so that u1=nX i=1 K∗X j=1ujζj(x, X i)!2 1(∥Xi−x∥ ≤ˆτk(x)), 1/u1=nX i=1 K∗X j=1uj u1ζj(x, X i)!2 1(∥Xi−x∥ ≤ˆτk(x)) ≥inf ˜P∈P1nX i=1˜P(x−Xi)21(∥Xi−x∥ ≤ˆτk(x)) ≥inf ˜P∈P1nX i=1˜P(x−Xi)21(∥Xi−x∥<ˆτk(x)), where we have set P1the set of polynomials in dvariables with total degree at most L taking the value 1at0. Now write {i∈J1;nK| ∥Xi−x∥<ˆτk(x)}={i1<···< ik−1} and for a= 1, . . . , k −1, Va:=x−Xia ˆτk(x). It means that we perform a normalisation, changing B(x,ˆτk(x))intoB(0,1). We can therefore write 1/u1≥inf ˜P∈P1k−1X a=1˜P(Va)2, E[u1|ˆτk(x)]≤Z R+P inf ˜P∈P1k−1X a=1˜P(Va)2≤t−1|ˆτk(x)! dt. We want to divide the conditional sample (Va)a=1,...,k−1intoνsmaller samples all of sizeK∗. Let ν:=⌊k−1/K∗⌋, we can write E[u1|ˆτk(x)]≤Z R+P ν−1X j=0inf ˜P∈P1K∗X a=1˜P(VjK∗+a)2≤t−1|ˆτk(x)! dt. We are now interested in bounding the quantity P inf ˜P∈P1K∗X a=1˜P(Va)2≤t−1|ˆτk(x)! . As in Holzmann and Meister [2024], for w∈(Rd)K∗, letΞ(w)be the matrix of size K∗ defined by Ξ(w)a,j:=wΨ(j) a. Then setting V:= (Va)a=1,...,K∗∈(Rd)K∗, we have
inf ˜P∈P1K∗X a=1˜P(Va)2≥ inf γ∈RK∗,γ1=1∥Ξ(V)γ∥2 ≥ inf γ∈RK∗,∥γ∥≥1∥Ξ(V)γ∥2 ≥λmin(Ξ(V))2 ≥det(Ξ( V))2 ρ(Ξ(V))2(K∗−1) ≥det(Ξ( V))2 ∥Ξ(V)∥2(K∗−1) F, 42 where λmindenotes the smallest eigenvalue, ρthe spectral radius and ∥·∥ Fthe Frobenius norm. Thus, ∥Ξ(V)∥2 F=K∗X a,j=1V2Ψ(j) a =K∗X a=1LX l=0X λ∈Nd |λ|=lV2λ a ≤K∗X a=1LX l=0X λ∈Nd |λ|=ll λ1, . . . , λ d V2λ a =K∗X a=1LX l=0∥Va∥2l ≤K∗(L+ 1). It yields that for all t >0, P inf ˜P∈P1K∗X a=1˜P(Va)2≤t−1|ˆτk(x)! ≤P(det(Ξ( V))2≤ ∥Ξ(V)∥2(K∗−1) F t−1|ˆτk(x)) ≤P(|det(Ξ( V))| ≤t−1/2(K∗(L+ 1))(K∗−1)|ˆτk(x)). Letε >0, we are now interested in bounding the quantity P(|det(Ξ( V))| ≤ε|ˆτk(x)). Letµ:= max w∈Bd(0,1)K∗|det Ξ( w)|>0and let wbe a point on which this maximum is attained. Forθ∈SdK∗−1, we can apply the fundamental theorem of algebra to the polynomial det(Ξ( w+sθ))∈C[s]. Writing its roots vθ 1, . . . , vθ D, we have |det(Ξ( w+sθ))|=µDY i=1 1−s/vθ i . Ifs≥0, then the distance between sand the complex circle (|·|= vθ i )is reached at vθ i so that |det(Ξ( w+sθ))| ≥µDY i=1 1−s vθ i . Introduce Θ∈SdK∗−1the direction of [w, V], that is Θa=Va−wa ∥V−w∥, a= 1, . . . , K∗which is almost surely well-defined since Vhas a density. We then have V=w+∥V−w∥Θand |det(Ξ( V))| ≥µDY i=1 1−∥V−w∥ |vΘ i| . ButBd(0,1)K∗⊂BdK∗(0,√ K∗), so that ∥V−w∥ ≤2√ K∗and |det(Ξ( V))| ≥µDY i=1 1−∥V−w∥ |vΘ i| ∧2√ K∗ , 43 which means we may replace |vθ i|by|vθ i| ∧2√ K∗. Thus, (|det(Ξ( V))| ≤ε)⊂ DY i=1 1−∥V−w∥ |vΘ i| ≤ε/µ! ⊂D[ i=1 1−∥V−w∥ |vΘ i| ≤(ε/µ)1/D ⊂D[ i=1 1−(ε/µ)1/D≤∥V−w∥ |vΘ i|≤1 + (ε/µ)1/D , P(|det(Ξ( V))| ≤ϵ|ˆτk(x))≤DX i=1P(V−w∈ Ci(ε)|ˆτk(x)), where Ci(ε) :={rθ:θ∈SdK∗−1,0∨(1−(ε/µ)1/D)|vθ i| ≤r≤(1+(ε/µ)1/D)|vθ i|}∩BdK∗ 0,2√ K∗ . This is where the geometrical condition (X2) plays a role. Holzmann and Meister [2024] worked conditionally on the direction Θ. 
It is justified by the fact that if we assume that the target support is included in the interior of the source support X, then there exists ϵ >0 such that for all zin the target support and all directions θ∈Sd−1,[z−ϵθ, z+ϵθ]⊂X. This is not the case in our setting, which is why we have to let the direction free. It is only by averaging on all possible directions with Condition (X2) that we will get the same bound. First, we can upper bound the volume of the set Ci(ε)by |Ci(ε)|=Z2√ K∗ 0Z SdK∗−11 r∈[(1−(ε/µ)1/D)|vθ i|; (1 + ( ε/µ)1/D)|vθ i|] dσ(θ) rdK∗−1dr ≤2 2√ K∗dK∗−1 (ε/µ)1/DZ SdK∗−1|vθ i|dσ(θ), with|vθ i| ≤2√ K∗. Therefore, using the conditional density from Lemma 10, we obtain P(|det(Ξ( V))| ≤ε|ˆτk(x))≤DX i=1|Ci(ε)|¯pˆτk(x)d R Rdf∗(z) dzK∗ , where f∗(z) :=1(z∈Bd(0,1))p(x−zˆτk(x)) ˆτk(x)d. The geometric condition (X2) now appears in the normalisation constantR Rdf∗(z) dz. Using the lower bound from Lemma 10, we obtain P(|det(Ξ( V))| ≤ε|ˆτk(x))≤C ε1/D, where C:= 2D (2√ K∗)d¯p cp|Vd|K∗ µ−1/Dσ(SdK∗−1)<∞. We have proved that for all t >0,P inf ˜P∈P1PK∗ a=1˜P(Va)2≤t−1|ˆτk(x) ≤C t−1/2D. Now recall that E[u1|ˆτk(x)]≤R R+PPν−1 j=0inf ˜P∈P1PK∗ a=1˜P(VjK∗+a)2≤t−1|ˆτk(x) dt, where ν=⌊k−1/K∗⌋>2D. To obtain the statement of Lemma 1, because we work 44 with variable k, we will need an anti-concentration result. Applying Lemma 11 with Zj:= inf ˜P∈P1PK∗ a=1˜P(VjK∗+a)2andb:= 1/2D, we
obtain, for all C′>0, E[u1|ˆτk(x)]≤C′ ν+1 νΓ(b)CνΓ(1 + b)ν−1 Γ(bν)Z∞ C′/νt−bνdt. But since bν > 1, CνΓ(1 + b)ν Γ(bν)Z∞ C′/νt−bνdt=1 bν−1(CΓ(1 + b))ν Γ(bν)ν C′bν−1 ≤C′C′′√ b (bν−1)√ 2πνCΓ(1 + b)ebb−b C′bν , using Stirling’s approximation. If we choose C′large enough, the above expression remains bounded as ν→ ∞ . Since ν=⌊k−1/K∗⌋, we have shown that E[u1|ˆτk(x)]≤CX/k. REFERENCES Alberto Abadie and Guido W. Imbens. Large sample properties of matching estimators for average treatment effects. Econometrica , 74:235–267, 2006. Alberto Abadie and Guido W Imbens. On the failure of the bootstrap for matching estimators. Econometrica , 76(6):1537–1557, 2008. Alberto Abadie and Guido W. Imbens. Bias-corrected matching estimators for average treatment effects. Journal of Business & Economic Statistics , 29(1):1–11, 2011. Alberto Abadie and Guido W Imbens. A martingale representation for matching estimators. Journal of the American Statistical Association , 107(498):833–843, 2012. Steffen Bickel and Tobias Scheffer. Dirichlet-enhanced spam filtering based on biased samples. Advances in Neural Information Processing Systems , 19, 2006. St´ephane Boucheron, G ´abor Lugosi, and Pascal Massart. Concentration Inequalities - A Nonasymptotic Theory of Independence . Oxford University Press, 2013. ISBN 978-0- 19-953525-5. URL https://doi.org/10.1093/acprof:oso/9780199535255.001.0001. Luc Devroye, Paola G Ferrario, L ´aszl´o Gy ¨orfi, and Harro Walk. Strong universal consistent estimate of the minimum mean squared error. Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik , pages 143–160, 2013. Luc Devroye, L ´aszl´o Gy ¨orfi, G ´abor Lugosi, and Harro Walk. A nearest neighbor estimate of the residual variance. Electronic Journal of Statistics , 12(1):1752 – 1778, 2018. S´ebastien Gadat, Thierry Klein, and Cl ´ement Marteau. Classification in general finite dimensional spaces with the k-nearest neighbor rule. The Annals of Statistics , 44(3):982 – 1009, 2016. 
Arthur Gretton, Alex Smola, Jiayuan Huang, Marcel Schmittfull, Karsten Borgwardt, Bernhard Sch ¨olkopf, et al. Covariate shift by kernel mean matching. Dataset shift in machine learning , 3(4):5, 2009. Jinyong Hahn. On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica , 66(2):315–331, 1998. James J Heckman. Sample selection bias as a specification error. Econometrica: Journal of the Econometric Society , pages 153–161, 1979. Hajo Holzmann and Alexander Meister. Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations. arXiv preprint arXiv:2407.08494 , 2024. URL https://arxiv.org/abs/2407.08494. 45 Jing Jiang and ChengXiang Zhai. Instance weighting for domain adaptation in nlp. In Proceedings of the 45th Annual Meeting of the Association Computational Linguistics . ACL, 2007. Takafumi Kanamori, Shohei Hido, and Masashi Sugiyama. A least-squares approach to direct importance estimation. The Journal of Machine Learning Research , 10:1391–1445, 2009. Leo Klarner, Tim GJ Rudner, Michael Reutlinger, Torsten Schindler, Garrett M Morris, Charlotte Deane, and Yee Whye Teh. Drug discovery under covariate shift with domain- informed prior distributions over functions. In International Conference on Machine Learning , pages 17176–17197. PMLR, 2023. Jacques Lafontaine et al. An introduction to differential manifolds . Springer, 2015. Yan Li, Hiroyuki Kambara, Yasuharu Koike, and Masashi Sugiyama. Application of covariate shift adaptation techniques in brain–computer interfaces. IEEE Transactions on Biomedical Engineering , 57(6):1318–1324, 2010. Zhexiao Lin, Peng Ding, and Fang
https://arxiv.org/abs/2504.21633v1
Han. Estimation based on nearest neighbor matching: From density ratio to average treatment effect. Econometrica, 91(6):2187–2217, 2023.

Marco Loog. Nearest neighbor-based importance weighting. In 2012 IEEE International Workshop on Machine Learning for Signal Processing, pages 1–6. IEEE, 2012.

François Portier, Lionel Truquet, and Ikko Yamane. Nearest neighbor sampling for covariate shift adaptation. Journal of Machine Learning Research, 25(410):1–42, 2024.

Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.

Shashank Singh and Barnabás Póczos. Finite-sample analysis of fixed-k nearest neighbor density functional estimators. Advances in Neural Information Processing Systems, 29, 2016.

Kumar Sricharan, Raviv Raich, and Alfred O. Hero. Estimation of nonlinear functionals of densities with confidence. IEEE Transactions on Information Theory, 58(7):4135–4159, 2012.

Masashi Sugiyama, Matthias Krauledat, and Klaus-Robert Müller. Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research, 8(5), 2007.

Masashi Sugiyama, Taiji Suzuki, Shinichi Nakajima, Hisashi Kashima, Paul von Bünau, and Motoaki Kawanabe. Direct importance estimation for covariate shift adaptation. Annals of the Institute of Statistical Mathematics, 60(4):699–746, 2008.

Hermann Weyl. On the volume of tubes. American Journal of Mathematics, 61(2):461–472, 1939.

Hassler Whitney. Functions differentiable on the boundaries of regions. Annals of Mathematics, 35(3):482–485, 1934.

APPENDIX

Definition 1. Notation:

• P and Q, distributions on R^d with respective densities p and q,
• X, a subset of R^d containing the supports of P and Q,
• ((X_1, Y_1), …, (X_n, Y_n)), an n-sample of P_{X,Y} = P_{Y|X} · P,
• ((X*_1, ·), …
, (X*_m, ·)), an m-sample of Q_{X,Y} = Q_{Y|X} · Q,
• h, a measurable function from X × R to R,
• g : x ↦ E[h(X, Y) | X = x],
• 1(A), the indicator function of the event A,
• ‖·‖, the Euclidean norm on R^d,
• î_k(x), the index of the kth nearest neighbour to x among (X_1, …, X_n),
• τ̂_k(x), the kth-order statistic of (‖X_i − x‖)_{i=1,…,n},
• A_k(z) := {x ∈ R^d : ‖z − x‖ ≤ τ̂_k(x)}, the catchment area of a point,
• M*_k(x) := Σ_{j=1}^m 1(‖x − X*_j‖ ≤ τ̂_k(X*_j)), the matching estimator,
• J1; nK, the integer interval i = 1, …, n,
• V_d, the ‖·‖-unit ball in R^d,
• B(x, r), the closed ball for ‖·‖ with center x and radius r,
• S(x, r), the sphere for ‖·‖ with center x and radius r,
• S^{d−1}, the ‖·‖-unit sphere in R^d,
• σ, the surface measure on S^{d−1},
• |B|, the Lebesgue measure of a Borel set B in R^d,
• diam(X) := sup{‖y − x‖ : x, y ∈ X},
• Γ : s > 0 ↦ ∫_{R_+} t^{s−1} e^{−t} dt, Euler's gamma function,
• β : (s, t) > 0 ↦ ∫_0^1 u^{s−1}(1−u)^{t−1} du = Γ(s)Γ(t)/Γ(s+t), Euler's beta function.

Lemma 2. Let Z_1, …, Z_n be i.i.d. real-valued random variables having density f_Z with respect to the Lebesgue measure.

• Almost surely, there is a unique permutation π̂ of J1; nK such that Z_{π̂(1)} < ··· < Z_{π̂(n)}. From now on, we write Z_{(i)} for Z_{π̂(i)}.
• The random variable π̂ is uniformly distributed over the set of permutations of J1; nK.
• The random vector (Z_{(1)}, …, Z_{(n)}) has density

f(x_1, …, x_n) = n! f_Z(x_1) ··· f_Z(x_n) 1(x_1 < ··· < x_n)

with respect to the Lebesgue measure on R^n.
• The vector (Z_{(1)}, …, Z_{(n)}) and the permutation π̂ are independent.

Proof. For the first item, recall that because the variables Z_i have density, the event of having two observations with the same value has probability zero. We will prove the last
three items at once. Let φ : R^n → R be a bounded and measurable function and let π be a permutation of J1; nK; then

E[φ(Z_{(1)}, …, Z_{(n)}) 1(π̂ = π)] = E[φ(Z_{π(1)}, …, Z_{π(n)}) 1(Z_{π(1)} < ··· < Z_{π(n)})]
 = ∫_{R^n} φ(x) 1(x_1 < ··· < x_n) f_Z(x_1) ··· f_Z(x_n) dx,

since the vector (Z_{π(1)}, …, Z_{π(n)}) has the same distribution as the vector (Z_1, …, Z_n). Because the last expression above does not depend on the permutation π, we deduce that π̂ follows a uniform distribution on the set of permutations of J1; nK, that is, P(π̂ = π) = 1/n!, and the last two points are valid.

Lemma 3. Define Euler's beta function β : (s, t) > 0 ↦ ∫_0^1 u^{s−1}(1−u)^{t−1} du = Γ(s)Γ(t)/Γ(s+t). Let U^n_1, …, U^n_n be an n-sample of uniformly distributed random variables over [0, 1] and let 1 ≤ k ≤ n; then the kth-order statistic of this sample has density with respect to the Lebesgue measure on [0, 1],

u ↦ (1/β(k, n−k+1)) u^{k−1}(1−u)^{n−k}.

Let I(x; s, t) := (1/β(s, t)) ∫_0^x u^{s−1}(1−u)^{t−1} du be the regularised incomplete beta function. Then for all x ∈ [0, 1], we have

I(x; k, n−k+1) = P(U^n_{(k)} ≤ x) = P(B_{n,x} ≥ k),

where B_{n,x} is a random variable following the binomial distribution with parameters n and x.

Proof. Setting x_{(k̄)} := x_1, …, x_{k−1}, x_{k+1}, …, x_n, we know from Lemma 2 that the density of the kth-order statistic of (U_i)_{1≤i≤n} is equal to

f_{U^n_{(k)}}(u) = ∫_{[0,1]^{n−1}} n! 1(x_1 < ··· < x_{k−1} < u < x_{k+1} < ··· < x_n) dx_{(k̄)}
 = n! (u^{k−1}/(k−1)!) ((1−u)^{n−k}/(n−k)!)
 = (1/β(k, n−k+1)) u^{k−1}(1−u)^{n−k}.

From this, we can deduce the first equality I(x; k, n−k+1) = P(U^n_{(k)} ≤ x). Observing that the events (U^n_{(k)} ≤ x) and (Σ_{i=1}^n 1(U^n_i ≤ x) ≥ k) are the same gives the second equality.

Lemma 4. Suppose that Assumptions (X1) to (X3) hold true. Let Γ : s > 0 ↦ ∫_0^∞ t^{s−1} e^{−t} dt denote Euler's gamma function and let ⌊·⌋ denote the floor function. Let λ ≥ 0; then for all x ∈ X, we have

E[τ̂_k(x)^λ] ≤ C_{λ,d,P} (k/(n+1))^{λ/d},

where C_{λ,d,P} := 2 Γ(2+⌊λ/d⌋) (c_p|V_d|)^{−λ/d}.

Proof. The proof can be found in Portier et al. [2024]. We give a proof below for the sake of completeness.
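As an aside, the beta–binomial identity of Lemma 3 is easy to sanity-check numerically. The following standard-library sketch (our own illustration; the helper names are ours, not from the paper) compares the regularised incomplete beta function I(x; k, n−k+1), evaluated by midpoint integration, with the corresponding binomial tail probability P(B_{n,x} ≥ k).

```python
import math

# Numerical check of Lemma 3: I(x; k, n-k+1) = P(B_{n,x} >= k).
def binom_tail(n, p, k):
    # P(B >= k) for B ~ Binomial(n, p)
    return sum(math.comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

def reg_inc_beta(x, s, t, m=100_000):
    # Regularised incomplete beta I(x; s, t) via midpoint integration.
    beta = math.gamma(s) * math.gamma(t) / math.gamma(s + t)
    h = x / m
    acc = sum(((j + 0.5) * h) ** (s - 1) * (1 - (j + 0.5) * h) ** (t - 1)
              for j in range(m))
    return acc * h / beta

n, k, x = 20, 5, 0.3
lhs = reg_inc_beta(x, k, n - k + 1)
rhs = binom_tail(n, x, k)
print(lhs, rhs)  # the two quantities agree up to integration error
```

The same check works for any 1 ≤ k ≤ n and x ∈ [0, 1]; the midpoint rule is accurate here because the integrand is smooth on [0, x].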
Let F_x(r) := P(X ∈ B(x, r)), let F_x^{−1}(u) := inf{r ∈ R : F_x(r) ≥ u} be its generalised inverse, and let (U_1, …, U_n) be an n-sample of the uniform distribution on [0, 1]. Then the two random vectors (‖X_1 − x‖, …, ‖X_n − x‖) and (F_x^{−1}(U_1), …, F_x^{−1}(U_n)) have the same distribution. Because the function F_x^{−1} is non-decreasing, the two vectors (τ̂_1(x), …, τ̂_n(x)) and (F_x^{−1}(U_{(1)}), …, F_x^{−1}(U_{(n)})) also have the same distribution. In particular, by Lemma 2, we can write

E[τ̂_k(x)^λ] = (1/β(k, n−k+1)) ∫_0^1 (F_x^{−1}(u))^λ u^{k−1}(1−u)^{n−k} du.

However, using Assumptions (X2) and (X3), we know that for all 0 ≤ r ≤ diam(X), we have F_x(r) ≥ c_p|V_d| r^d, so that

F_x^{−1}(u) ≤ (u/(c_p|V_d|))^{1/d},

which also holds for u > c_p|V_d| diam(X)^d. It means that

E[τ̂_k(x)^λ] ≤ (1/β(k, n−k+1)) (c_p|V_d|)^{−λ/d} β(k+λ/d, n−k+1)
 ≤ (c_p|V_d|)^{−λ/d} (Γ(n+1)/Γ(n+λ/d+1)) (Γ(k+λ/d)/Γ(k))
 ≤ (c_p|V_d|)^{−λ/d} (2/(n+1)^{λ/d}) Γ(2+⌊λ/d⌋) k^{λ/d},

using the inequalities Γ(a+s)/Γ(a) ≤ Γ(2+⌊s⌋) a^s and Γ(a)/Γ(a+s) ≤ 2 a^{−s}, as shown in Portier et al. [2024].

Lemma 5. Suppose that Assumptions (X1) to (X3) hold true. Let a > 0; then for all x ∈ X,

P(τ̂_k(x) >
a) ≤ e^{1/4} exp(−(n/k)(c_p|V_d|/8) a^d).

Proof. Let (U^n_1, …, U^n_n) be an n-sample of the uniform distribution on [0, 1]; then, applying Lemma 3, we know that

P(τ̂_k(x) > a) = P(U^n_{(k)} > F_x(a)) = P(B ≤ k−1),

where B has a binomial distribution with parameters n and F_x(a), with F_x(a) being defined in the proof of Lemma 4. We know from concentration results on binomial random variables, as can be seen in Boucheron et al. [2013], that for all δ ∈ (0, 1), we have the Chernoff bound

P(B ≤ (1−δ) E[B]) ≤ exp(−δ² E[B]/2).

With δ := 1/2, we find P(B ≤ E[B]/2) ≤ exp(−E[B]/8). It means that if t ≤ E[B]/2, then we can write P(B ≤ t) ≤ exp(−E[B]/8), while if t ≥ E[B]/2, we simply write P(B ≤ t) ≤ 1 ≤ e^{1/4} exp(−E[B]/(8t)). In either case, we showed that for all t ≥ 1,

P(B ≤ t) ≤ e^{1/4} exp(−E[B]/(8t)).

In our context, it means that

P(B ≤ k−1) ≤ P(B ≤ k) ≤ e^{1/4} exp(−(n/(8k)) F_x(a)) ≤ e^{1/4} exp(−(n/k)(c_p|V_d|/8) a^d).

Lemma 6. Suppose that Assumptions (X1) to (X3) hold true. Let λ ≥ 0 and a > 0; then for all x ∈ X,

E[τ̂_k(x)^λ 1(τ̂_k(x) > a)] ≤ C_{λ,d,P} (k/(n+1))^{λ/d} exp(−((n+1)/k) L_{λ,d,P} a^d),

where C_{λ,d,P} := 2 e^{1/4} Γ(2+⌊λ/d⌋) (c_p|V_d|)^{−λ/d} and L_{λ,d,P} := c_p|V_d|/(8(1+⌊λ/d⌋)).

Proof. Exactly as in the proof of Lemma 4, we can write

E[τ̂_k(x)^λ 1(τ̂_k(x) > a)] = (1/β(k, n−k+1)) ∫_{F_x(a)}^1 (F_x^{−1}(u))^λ u^{k−1}(1−u)^{n−k} du
 ≤ (1/β(k, n−k+1)) (c_p|V_d|)^{−λ/d} ∫_{F_x(a)}^1 u^{k+λ/d−1}(1−u)^{n−k} du,

where

∫_{F_x(a)}^1 u^{k+λ/d−1}(1−u)^{n−k} du = β(k+λ/d, n−k+1) (1 − I(F_x(a); k+λ/d, n−k+1)).

Now

β(s, t)² ∂_s I(α; s, t) = ∫_{[0,1]²} 1(u ≤ α) u^{s−1}(1−u)^{t−1} v^{s−1}(1−v)^{t−1} (log u − log v) d(u, v)
 = ∫_{[0,1]²} 1(u ≤ α ≤ v) u^{s−1}(1−u)^{t−1} v^{s−1}(1−v)^{t−1} (log u − log v) d(u, v)
 ≤ 0.

In the second equality above, we used the fact that for an integrable function h : [0,1]² → R such that h(u, v) = −h(v, u) for each (u, v) ∈ [0,1]², we have ∫_{[0,1]²} 1(u, v ≤ α) h(u, v) d(u, v) = 0. It means that the function s ↦ I(α; s, t) is non-increasing, so that

1 − I(F_x(a); k+λ/d, n−k+1) ≤ 1 − I(F_x(a); k+⌊λ/d⌋+1, n−k+1).

Then, because k+⌊λ/d⌋+1 is a positive integer, applying Lemma 3 yields

1 − I(F_x(a); k+⌊λ/d⌋+1, n−k+1) ≤ P(B < k+⌊λ/d⌋+1),

where B has a binomial distribution with parameters n+⌊λ/d⌋+1 and F_x(a).
Thanks to the binomial inequality proved in Lemma 5, we get

P(B ≤ k+⌊λ/d⌋) ≤ e^{1/4} exp(−((n+⌊λ/d⌋+1)/(8(k+⌊λ/d⌋))) F_x(a))
 ≤ e^{1/4} exp(−((n+1)/k)(c_p|V_d|/(8(1+⌊λ/d⌋))) a^d).

In the end, we proved that

E[τ̂_k(x)^λ 1(τ̂_k(x) > a)] ≤ (β(k+λ/d, n−k+1)/β(k, n−k+1)) (c_p|V_d|)^{−λ/d} e^{1/4} exp(−((n+1)/k)(c_p|V_d|/(8(1+⌊λ/d⌋))) a^d),

where

β(k+λ/d, n−k+1)/β(k, n−k+1) ≤ 2 Γ(2+⌊λ/d⌋) (k/(n+1))^{λ/d},

as we have seen in the proof of Lemma 4.

Lemma 7. Let x, y ∈ X, let 1 ≤ ℓ, ℓ′ ≤ n with ℓ+ℓ′ ≤ n, and let φ_1, φ_2 be measurable bounded functions from R^d to R; then, setting Φ := E[φ_1(X_{î_ℓ(x)} − x) φ_2(X_{î_{ℓ′}(y)} − y) 1(τ̂_ℓ(x) + τ̂_{ℓ′}(y) < ‖y−x‖)], we have

Φ = C_{n,ℓ,ℓ′} ∫_{R²_+} 1(r_1 + r_2 < ‖y−x‖) F_x(r_1)^{ℓ−1} F_y(r_2)^{ℓ′−1} (1 − F_x(r_1) − F_y(r_2))^{n−ℓ−ℓ′}
  × (r_1 r_2)^{d−1} P_x(φ_1, r_1) P_y(φ_2, r_2) d(r_1, r_2),

where we have set C_{n,ℓ,ℓ′} := n(n−1) (n−2 choose ℓ−1)(n−ℓ−1 choose ℓ′−1), P_a(φ, r) := ∫_{S^{d−1}} φ(rθ) p(a+rθ) dσ(θ), and F_a(r) := P(B(a, r)).

Proof. We can show that the event D := (τ̂_ℓ(x) + τ̂_{ℓ′}(y) < ‖y−x‖) occurs if and only if there exists a unique couple of integers (i_1, i_2) ∈ J1; nK² with i_1 ≠ i_2 and a unique couple of subsets A_1, A_2 ⊂ J1; nK \ {i_1, i_2} with Card(A_1) = ℓ−1, Card(A_2) = ℓ′−1, and A_1 ∩ A_2 = ∅ such that the event C_{A_1,A_2,i_1,i_2}, defined by the following three conditions, occurs:

• (i) ‖X_{i_1} − x‖ + ‖X_{i_2} − y‖ < ‖y−x‖,
• (ii) for all (a_1, a_2) ∈ A_1 × A_2, ‖X_{a_1} − x‖ < ‖X_{i_1} − x‖ and ‖X_{a_2} − y‖ < ‖X_{i_2} − y‖, and
• (iii) for all a ∉ A_1 ∪ A_2 ∪ {i_1, i_2}, ‖X_a − x‖ > ‖X_{i_1} − x‖ and ‖X_a − y‖ > ‖X_{i_2} − y‖.

We can see D ⊆ ∪_{(i_1,i_2,A_1,A_2)} C_{A_1,A_2,i_1,i_2} by letting i_1 be the index of the ℓth nearest neighbour and A_1 be the set of indexes of the ℓ−1 nearest neighbours of x; and similarly i_2 be the index
of the ℓ′th nearest neighbour and A_2 the set of indexes of the ℓ′−1 nearest neighbours of y. To see C_{A_1,A_2,i_1,i_2} ⊆ D, we can show that a ∈ A_1 ⊔ {i_1} ⟺ ‖X_a − x‖ ≤ ‖X_{i_1} − x‖ and a ∈ A_2 ⊔ {i_2} ⟺ ‖X_a − y‖ ≤ ‖X_{i_2} − y‖, using Conditions (i)–(iii) and the triangular inequality. A_1 thus characterizes the ℓ−1 nearest neighbours of x, and ‖X_{i_1} − x‖ = τ̂_ℓ(x). Likewise, A_2 is the set of the ℓ′−1 nearest neighbours of y, and ‖X_{i_2} − y‖ = τ̂_{ℓ′}(y). By Condition (i), we get the condition of D. Here, (A_1, A_2, {i_1, i_2}) satisfying Conditions (i)–(iii) is unique because of the uniqueness of those nearest neighbours. One can then write the event D as a finite union of the disjoint events C_{A_1,A_2,i_1,i_2}. The number of all possible triplets (A_1, A_2, {i_1, i_2}) is given by C_{n,ℓ,ℓ′}. Computing the restriction of the expectation to one of these events and applying the change of variables z_1 := X_{i_1} − x and z_2 := X_{i_2} − y, we get the expression

Φ = C_{n,ℓ,ℓ′} ∫ 1(‖z_1‖ + ‖z_2‖ < ‖y−x‖) φ_1(z_1) φ_2(z_2) f(z_1, z_2) d(z_1, z_2),

where

f(z_1, z_2) := p(x+z_1) p(y+z_2) P(B(x, ‖z_1‖))^{ℓ−1} P(B(y, ‖z_2‖))^{ℓ′−1} × P(B(x, ‖z_1‖)^c ∩ B(y, ‖z_2‖)^c)^{n−ℓ−ℓ′}.

Further switching to polar coordinates yields the result.

Corollary 4. In the context of Lemma 7, setting this time

Φ_c := E[φ_1(X_{î_ℓ(x)} − x) φ_2(X_{î_{ℓ′}(y)} − y) | τ̂_ℓ(x), τ̂_{ℓ′}(y)] 1(τ̂_ℓ(x) + τ̂_{ℓ′}(y) < ‖y−x‖),

we have the conditional independence

Φ_c = E[φ_1(X_{î_ℓ(x)} − x) | τ̂_ℓ(x)] E[φ_2(X_{î_{ℓ′}(y)} − y) | τ̂_{ℓ′}(y)] 1(τ̂_ℓ(x) + τ̂_{ℓ′}(y) < ‖y−x‖),

where

E[φ_1(X_{î_ℓ(x)} − x) | τ̂_ℓ(x) = r] = (1/∫_{S^{d−1}} p(x+rθ) dσ(θ)) ∫_{S^{d−1}} φ_1(rθ) p(x+rθ) dσ(θ),

and similarly for y.

Proof. We use the expression with polar coordinates from Lemma 7, where the only quantity that is not radial is P_x(φ_1, r_1) P_y(φ_2, r_2); hence the conditional independence and the distribution after normalisation.

Lemma 8. Let x ∈ X and 1 ≤ ℓ ≤ k; then the random vector X_{î_ℓ(x)} − x has density f_ℓ with respect to the Lebesgue measure of R^d, where, setting F_x(r) = P(B(x, r)), we have

f_ℓ(z) = n (n−1 choose ℓ−1) p(x+z) F_x(‖z‖)^{ℓ−1} (1 − F_x(‖z‖))^{n−ℓ}.

Proof.
One can show that, for a bounded and measurable mapping φ : R^d → R,

E[φ(X_{î_ℓ(x)} − x)] = ∫_X φ(z) f_ℓ(z) dz,

using arguments very similar to those used for proving Lemma 7. Details are omitted.

Lemma 9. Let A and B be disjoint Borel sets of X, and let 0 ≤ ℓ, ℓ′ ≤ n; then we have the negative correlation

P(Σ_{i=1}^n 1_A(X_i) ≤ ℓ, Σ_{i=1}^n 1_B(X_i) ≤ ℓ′) ≤ P(Σ_{i=1}^n 1_A(X_i) ≤ ℓ) P(Σ_{i=1}^n 1_B(X_i) ≤ ℓ′).

Proof. We prove the result by induction on the number of observations n. With the convention Σ_{i=1}^0 (·) = 0, the result is trivially true when n = 0. We define the probabilities

I^{(n)}_{ℓ,ℓ′} := P(Σ_{i=1}^n 1_A(X_i) ≤ ℓ, Σ_{i=1}^n 1_B(X_i) ≤ ℓ′),
A^{(n)}_ℓ := P(Σ_{i=1}^n 1_A(X_i) ≤ ℓ), and B^{(n)}_{ℓ′} := P(Σ_{i=1}^n 1_B(X_i) ≤ ℓ′),

as well as the two quantities a := P(A) and b := P(B). To apply our induction hypothesis, we apply the law of total probability to the random variable X_n. It yields the three equations

I^{(n)}_{ℓ,ℓ′} = a I^{(n−1)}_{ℓ−1,ℓ′} + b I^{(n−1)}_{ℓ,ℓ′−1} + (1−a−b) I^{(n−1)}_{ℓ,ℓ′},
A^{(n)}_ℓ = a A^{(n−1)}_{ℓ−1} + (1−a) A^{(n−1)}_ℓ, and
B^{(n)}_{ℓ′} = b B^{(n−1)}_{ℓ′−1} + (1−b) B^{(n−1)}_{ℓ′}.

Therefore,

A^{(n)}_ℓ B^{(n)}_{ℓ′} = ab A^{(n−1)}_{ℓ−1} B^{(n−1)}_{ℓ′−1} + a(1−b) A^{(n−1)}_{ℓ−1} B^{(n−1)}_{ℓ′} + b(1−a) A^{(n−1)}_ℓ B^{(n−1)}_{ℓ′−1} + (1−a)(1−b) A^{(n−1)}_ℓ B^{(n−1)}_{ℓ′}.

We recognise the terms a A^{(n−1)}_{ℓ−1} B^{(n−1)}_{ℓ′}, b A^{(n−1)}_ℓ B^{(n−1)}_{ℓ′−1}, and (1−a−b) A^{(n−1)}_ℓ B^{(n−1)}_{ℓ′}, so that applying the induction hypothesis, we find that

A^{(n)}_ℓ B^{(n)}_{ℓ′} − I^{(n)}_{ℓ,ℓ′} ≥ ab (A^{(n−1)}_{ℓ−1} B^{(n−1)}_{ℓ′−1} − A^{(n−1)}_{ℓ−1} B^{(n−1)}_{ℓ′} − A^{(n−1)}_ℓ B^{(n−1)}_{ℓ′−1} + A^{(n−1)}_ℓ B^{(n−1)}_{ℓ′})
 = ab (A^{(n−1)}_ℓ − A^{(n−1)}_{ℓ−1})(B^{(n−1)}_{ℓ′} − B^{(n−1)}_{ℓ′−1})
 ≥ 0,

which concludes the proof by induction.

Corollary 5. Let x, y ∈ X, 1 ≤ ℓ, ℓ′ ≤ k, and let χ_x, χ_y : R_+ → R_+ be non-decreasing functions. Then

E[χ_x(τ̂_ℓ(x)) χ_y(τ̂_{ℓ′}(y)) 1(τ̂_ℓ(x) + τ̂_{ℓ′}(y) < ‖y−x‖)] ≤ E[χ_x(τ̂_ℓ(x))] E[χ_y(τ̂_{ℓ′}(y))].

Proof. Let R_x, R_y ≥ 0 be such that R_x + R_y < ‖y−x‖; then the event (τ̂_ℓ(x) > R_x, τ̂_{ℓ′}(y) > R_y) coincides with the event (Σ_{i=1}^n 1(X_i ∈ B(x, R_x)) ≤ ℓ−1, Σ_{i=1}^n 1(X_i ∈ B(y, R_y)) ≤ ℓ′−1). But because R_x + R_y < ‖y−x‖, the two
Borel sets B(x, R_x) and B(y, R_y) are disjoint, and we may apply Lemma 9, which yields

P(τ̂_ℓ(x) > R_x, τ̂_{ℓ′}(y) > R_y) ≤ P(τ̂_ℓ(x) > R_x) P(τ̂_{ℓ′}(y) > R_y).

Now, we can write the expectation of nonnegative random variables in terms of tail distribution functions as follows. Let χ⁺ denote the right-continuous generalised inverse of a non-decreasing function χ; then

E[χ_x(τ̂_ℓ(x)) χ_y(τ̂_{ℓ′}(y)) 1(τ̂_ℓ(x) + τ̂_{ℓ′}(y) < ‖y−x‖)]
 = ∫_{R²_+} P(χ_x(τ̂_ℓ(x)) > t, χ_y(τ̂_{ℓ′}(y)) > s, τ̂_ℓ(x) + τ̂_{ℓ′}(y) < ‖y−x‖) d(t, s)
 ≤ ∫_{R²_+} 1(χ⁺_x(t) + χ⁺_y(s) < ‖y−x‖) P(τ̂_ℓ(x) > χ⁺_x(t), τ̂_{ℓ′}(y) > χ⁺_y(s)) d(t, s)
 ≤ ∫_{R²_+} P(τ̂_ℓ(x) > χ⁺_x(t)) P(τ̂_{ℓ′}(y) > χ⁺_y(s)) d(t, s)
 = E[χ_x(τ̂_ℓ(x))] E[χ_y(τ̂_{ℓ′}(y))].

Lemma 10. Suppose that Assumptions (X1) to (X3) hold true, and let ψ : (R^d)^{k−1} → R be a measurable and bounded mapping, invariant with respect to any permutation of its coordinates. Then, with the notation of Lemma 1, conditionally on τ̂_k(x) and the set of indexes I_{k−1}(x) of the k−1 nearest neighbours of x, we have the expression

E[ψ(V_1, …, V_{k−1}) | τ̂_k(x), I_{k−1}(x)] = ∫_{B(0,1)^{k−1}} ψ(z_1, …, z_{k−1}) Π_{i=1}^{k−1} (f_x(z_i)/F_x(τ̂_k(x))) dz_1 ··· dz_{k−1},

where f_x(z) = 1(z ∈ B(0, 1)) p(x − z τ̂_k(x)) τ̂_k(x)^d, and F_x(r) := P(B(x, r)) denotes the cdf of the variable ‖X_1 − x‖. Furthermore, the normalisation constant is lower bounded by

F_x(τ̂_k(x)) = ∫_{R^d} f_x(z) dz ≥ c_p|V_d| τ̂_k(x)^d.

Proof. For conciseness of notation, we set Z_i := ‖X_i − x‖ for 1 ≤ i ≤ n. Let I := {i_1, …, i_{k−1}} be a subset of J1; nK with cardinality k−1. Let ǧ : R_+ → R and ψ : R^{d(k−1)} → R be two measurable and bounded mappings, with ψ being invariant with respect to any permutation of its coordinates. We also denote, for t > 0,

ψ̄(t) := ∫_{B(0,1)^{k−1}} ψ(z_1, …, z_{k−1}) Π_{i=1}^{k−1} p(x − t z_i) t^{d(k−1)} dz_1 ··· dz_{k−1}.

Finally, we denote by F_x the cdf of the variable Z_1. Using the independence properties between X_1, …, X_n, we have

E[ψ(V_1, …, V_{k−1}) ǧ(τ̂_k(x)) 1(I_{k−1}(x) = I)]
 = Σ_{i_k∉I} E[ψ((X_{i_1} − x)/Z_{i_k}, …
, (X_{i_{k−1}} − x)/Z_{i_k}) Π_{j∈I} 1(Z_j < Z_{i_k}) ǧ(Z_{i_k}) Π_{ℓ∉I∪{i_k}} 1(Z_ℓ > Z_{i_k})]
 = Σ_{i_k∉I} E[ψ̄(Z_{i_k}) ǧ(Z_{i_k}) Π_{ℓ∉I∪{i_k}} 1(Z_ℓ > Z_{i_k})]
 = Σ_{i_k∉I} E[(ψ̄(Z_{i_k})/F_x(Z_{i_k})^{k−1}) ǧ(Z_{i_k}) Π_{j∈I} 1(Z_j < Z_{i_k}) Π_{ℓ∉I∪{i_k}} 1(Z_ℓ > Z_{i_k})]
 = E[(ψ̄(τ̂_k(x))/F_x(τ̂_k(x))^{k−1}) ǧ(τ̂_k(x)) 1(I_{k−1}(x) = I)].

From the definition of the conditional expectation, we deduce that

E[ψ(V_1, …, V_{k−1}) | τ̂_k(x), I_{k−1}(x)] = ψ̄(τ̂_k(x))/F_x(τ̂_k(x))^{k−1},

which is the required expression.

Lemma 11. Let (Z_j)_{1≤j≤ν} be independent positive random variables such that there exist C > 0 and b ∈ (0, 1) such that for all 1 ≤ j ≤ ν and all ε > 0, P(Z_j ≤ ε) ≤ C ε^b. Then for all ε > 0, we have

P(Σ_{j=1}^ν Z_j ≤ ε) ≤ (1/ν)(Γ(b) Γ(1+b)^{ν−1}/Γ(bν)) (C ε^b)^ν.

Proof. We prove by induction on ν that

P(Σ_{j=1}^ν Z_j ≤ ε) ≤ K_ν (C ε^b)^ν = b^{ν−1}(ν−1)! (Π_{j=1}^{ν−1} β(1+b, bj)) (C ε^b)^ν.

For ν = 1, the statement writes P(Z_1 ≤ ε) ≤ C ε^b, which is true by hypothesis. Suppose that the result is true for ν random variables; we can write

P(Σ_{j=1}^{ν+1} Z_j ≤ ε) = E[P(Σ_{j=1}^ν Z_j ≤ ε − Z_{ν+1} | Z_{ν+1}) 1(Z_{ν+1} ≤ ε)]
 ≤ K_ν C^ν E[(ε − Z_{ν+1})^{bν} 1(Z_{ν+1} ≤ ε)]
 = K_ν C^ν ∫_{R_+} P(Z_{ν+1} ≤ ε, ε − Z_{ν+1} > s^{1/(bν)}) ds
 ≤ K_ν C^{ν+1} ∫_0^{ε^{bν}} (ε − s^{1/(bν)})^b ds
 = K_ν C^{ν+1} ∫_0^ε (ε − s)^b s^{−1+bν} bν ds
 = K_ν C^{ν+1} bν ε^{b(ν+1)} ∫_0^ε (1 − s/ε)^b (s/ε)^{−1+bν} d(s/ε)
 = K_ν bν β(1+b, bν) (C ε^b)^{ν+1}
 = K_{ν+1} (C ε^b)^{ν+1},

which concludes the induction. To obtain the expression in the lemma's statement, observe that

K_ν = b^{ν−1}(ν−1)! Π_{j=1}^{ν−1} (Γ(1+b) Γ(bj)/Γ(1+b(j+1)))
 = b^{ν−1}(ν−1)! Γ(1+b)^{ν−1} (Γ(b)/Γ(bν)) Π_{j=1}^{ν−1} (1/(b(j+1)))
 = (1/ν) Γ(1+b)^{ν−1} Γ(b)/Γ(bν).

Lemma 12. Let κ ∈ [0, ∞). Suppose that k² = O(n) and κ = O(k). Then, for all ℓ ∈ J1; kK, denoting L_k := 1 − (2k)^{−1} and C_{ℓ,n} := n (n−1 choose ℓ−1), we have
C_{ℓ,n} ∫_{R_+} (p(x)|V_d| r^d)^{ℓ−1} r^κ exp(−n L_k p(x)|V_d| r^d) r^{d+1} dr = O(k^{(2+κ)/d} n^{−(2+κ)/d}).

Proof. Using the bound C_{ℓ,n} ≤ n^ℓ/Γ(ℓ) and making the change of variables s := n L_k p(x)|V_d| r^d, we can write

C_{ℓ,n} ∫_{R_+} (p(x)|V_d| r^d)^{ℓ−1} r^{2+κ} exp(−n L_k p(x)|V_d| r^d) r^{d−1} dr
 ≤ (n^ℓ/Γ(ℓ)) ∫_{R_+} (s/(nL_k))^{ℓ−1} (s/(nL_k p(x)|V_d|))^{(2+κ)/d} exp(−s) (d n L_k p(x)|V_d|)^{−1} ds
 ≤ C n^{−(2+κ)/d} L_k^{−ℓ−(2+κ)/d} Γ((2+κ)/d + ℓ)/Γ(ℓ),

where C := d^{−1} (p(x)|V_d|)^{−(2+κ)/d−1}. Because Γ((2+κ)/d + ℓ)/Γ(ℓ) = O(ℓ^{(2+κ)/d}), it remains to show that the factor L_k^{−ℓ−(2+κ)/d} ≤ L_k^{−k−(2+κ)/d} is bounded in k, which is straightforward from the convergence e^{−c} = lim_{k→∞} (1 − c/k)^k for any c > 0.

Lemma 13. Suppose that Assumptions (X1) and (X3) hold true and that the support of Q, which is denoted by X̃, lies in the interior of X. There exists ε > 0 such that for all points x and y in X̃, all n ∈ N, and all (r_1, r_2) ∈ [0, ε/k]²,

|(1 − F_x(r_1) − F_y(r_2))^n − exp(−n p(x)|V_d| r_1^d − n p(y)|V_d| r_2^d)|
 ≤ 2 n p |V_d| O(r_1^{d+1} + r_2^{d+1}) exp(−n|V_d|(p(x) r_1^d + p(y) r_2^d) L_k),

with L_k := 1 − (2k)^{−1}.

Proof. We can write

(1 − F_x(r_1) − F_y(r_2))^n = χ(δ_{r_1,r_2}),   (14)

where χ : ξ ↦ exp(−n(p(x)|V_d| r_1^d + p(y)|V_d| r_2^d)(1+ξ)), and

δ_{r_1,r_2} := ((−p(x)|V_d| r_1^d − p(y)|V_d| r_2^d) − ln(1 − F_x(r_1) − F_y(r_2))) / (p(x)|V_d| r_1^d + p(y)|V_d| r_2^d) if r_1 + r_2 > 0, and δ_{r_1,r_2} := 0 if r_1 + r_2 = 0.

By taking ε sufficiently small, specifically ε ≤ (2p|V_d|)^{−1/d}, we can ensure 1 − F_x(r_1) − F_y(r_2) ≥ 1/2 for all (r_1, r_2) ∈ [0, ε/k]². Then, applying the Taylor theorem to the function ξ ↦ ln(1−ξ), whose second derivative is bounded in absolute value on [0, 1/2], we have for (r_1, r_2) ∈ [0, ε/k]²,

|ln(1 − F_x(r_1) − F_y(r_2)) − (−F_x(r_1) − F_y(r_2))| ≤ 2(F_x(r_1) + F_y(r_2))² ≤ 4 p |V_d| (r_1^{2d} + r_2^{2d}).   (15)

By the compactness of X, choosing sufficiently small ε also ensures that B(x, ε) ⊂ X and B(y, ε) ⊂ X for all points x and y in X̃. By the triangular inequality, Eq.
(15), the Lipschitz-continuity of p, and the inclusions B(x, ε/k) ⊂ X and B(y, ε/k) ⊂ X, we can bound the numerator of δ_{r_1,r_2} as

(p(x)|V_d| r_1^d + p(y)|V_d| r_2^d) × |δ_{r_1,r_2}|
 ≤ |ln(1 − F_x(r_1) − F_y(r_2)) − (−F_x(r_1) − F_y(r_2))| + |−F_x(r_1) − F_y(r_2) − (−p(x)|V_d| r_1^d − p(y)|V_d| r_2^d)|
 = O(r_1^{2d} + r_2^{2d}) + |∫_{‖z−x‖≤r_1} (p(z) − p(x)) dz| + |∫_{‖z−y‖≤r_2} (p(z) − p(y)) dz|
 ≤ O(r_1^{2d} + r_2^{2d}) + K|V_d|(r_1^{d+1} + r_2^{d+1})
 = O(r_1^{d+1} + r_2^{d+1}),

where K is the Lipschitz constant of p. Since p(x) and p(y) are bounded below by p > 0, we deduce that |δ_{r_1,r_2}| = O(r_1 + r_2) = O(2ε/k) and that taking ε sufficiently small ensures |δ_{r_1,r_2}| ≤ (2k)^{−1} for all (r_1, r_2) ∈ [0, ε/k]² and all n ∈ N. By applying the mean value theorem to the function χ (Eq. (14)) on the interval [−|δ_{r_1,r_2}|, |δ_{r_1,r_2}|], we get

|χ(δ_{r_1,r_2}) − χ(0)| ≤ |χ(δ_{r_1,r_2}) − χ(−δ_{r_1,r_2})|
 ≤ sup_{ξ∈[−|δ_{r_1,r_2}|,|δ_{r_1,r_2}|]} |χ′(ξ)| × 2|δ_{r_1,r_2}|
 ≤ 2 sup_{ξ∈[−(2k)^{−1},(2k)^{−1}]} exp(−n|V_d|(p(x) r_1^d + p(y) r_2^d)(1+ξ)) × n|V_d|(p(x) r_1^d + p(y) r_2^d) × |δ_{r_1,r_2}|
 ≤ 2 n p |V_d| O(r_1^{d+1} + r_2^{d+1}) exp(−n|V_d|(p(x) r_1^d + p(y) r_2^d) L_k),

where L_k = 1 − (2k)^{−1}. Combining this result with (14), we obtain the lemma.

Lemma 14. Suppose that Assumptions (X1) and (X3) hold true. Then there exists ε > 0 such that for all points x and y in X̃, the support of Q included in the interior of X, all n ∈ N, all (ℓ, ℓ′) ∈ J1; kK² with k² = o(n), and all (r_1, r_2) ∈ [0, ε/k]², we have

(1 − F_x(r_1) − F_y(r_2))^{n−ℓ−ℓ′} − (1 − F_x(r_1) − F_y(r_2))^n
 = O((ℓ+ℓ′) exp(−n|V_d|(p(x) L_k r_1^d + p(y) L_k r_2^d)) (r_1^d + r_2^d)),

where L_k := 1 − (2k)^{−1}.

Proof of Lemma 14. As in the proof of Lemma 13, one can take ε > 0 small enough such that F_x(r_1) + F_y(r_2) ≤ 1/2. By the mean value theorem, we have

(1 − F_x(r_1) − F_y(r_2))^{n−ℓ−ℓ′} − (1 − F_x(r_1) − F_y(r_2))^n
 = (1 − F_x(r_1) − F_y(r_2))^{n−ℓ−ℓ′} (1 − (1 − F_x(r_1) − F_y(r_2))^{ℓ+ℓ′})
 ≤ (1 − F_x(r_1) − F_y(r_2))^{n−ℓ−ℓ′} sup_{ξ∈[0,F_x(r_1)+F_y(r_2)]} (ℓ+ℓ′)(1−ξ)^{ℓ+ℓ′−1} (F_x(r_1) + F_y(r_2))
 ≤ (ℓ+ℓ′) (1 − F_x(r_1) − F_y(r_2))^{n−ℓ−ℓ′} p|V_d|(r_1^d + r_2^d)
 ≤ (ℓ+ℓ′) exp(−(n−ℓ−ℓ′)(F_x(r_1) + F_y(r_2))) O(r_1^d + r_2^d).

By the compactness of X, there exists ε > 0 such that for all points in X̃, we have B(x, ε) ⊂ X and B(y, ε) ⊂ X.
Then, we can give a lower bound on F_x(r_1) + F_y(r_2) as

F_x(r_1) ≥ ∫_{B(x,r_1)} (p(x) − |p(z) − p(x)|) dz
 ≥ p(x)|V_d| r_1^d (1 − (‖∇p‖_∞/p) r_1)
 ≥ p(x)|V_d| r_1^d (1 − C_p ε/k)
 ≥ p(x)|V_d| r_1^d (1 − (4k)^{−1}),

with C_p := ‖∇p‖_∞/p, by making the constant ε sufficiently small so that C_p ε ≤ 1/4 will hold. Likewise, F_y(r_2) ≥ p(y)|V_d| r_2^d
arXiv:2504.21669v1 [econ.EM] 30 Apr 2025

On the Robustness of Mixture Models in the Presence of Hidden Markov Regimes with Covariate-Dependent Transition Probabilities∗

Demian Pouzo
Department of Economics, University of California, Berkeley, U.S.A.
Email: dpouzo@berkeley.edu

Zacharias Psaradakis
Birkbeck Business School, Birkbeck, University of London, U.K.
Email: z.psaradakis@bbk.ac.uk

Martin Sola
Department of Economics, Universidad Torcuato di Tella, Argentina
Email: msola@utdt.edu

Abstract

This paper studies the robustness of quasi-maximum-likelihood (QML) estimation in hidden Markov models (HMMs) when the regime-switching structure is misspecified. Specifically, we examine the case where the true data-generating process features a hidden Markov regime sequence with covariate-dependent transition probabilities, but estimation proceeds under a simplified mixture model that assumes regimes are independent and identically distributed. We show that the parameters governing the conditional distribution of the observables can still be consistently estimated under this misspecification, provided certain regularity conditions hold. Our results highlight a practical benefit of using computationally simpler mixture models in settings where regime dependence is complex or difficult to model directly.

Key words and phrases: Consistency; covariate-dependent transition probabilities; identifiability; hidden Markov model; mixture model; quasi-maximum-likelihood; misspecified model.

∗The authors wish to thank Patrik Guggenberger, Peter Phillips and two referees for their helpful comments and suggestions. For the purposes of open access, the corresponding author has applied a CC-BY public copyright licence to any accepted manuscript version arising from this submission. Address correspondence to: Zacharias Psaradakis, Birkbeck Business School, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK; e-mail: z.psaradakis@bbk.ac.uk.
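The robustness claim of the abstract can be illustrated with a small simulation. The sketch below is our own illustration, not the authors' code: it generates data from a two-regime Markov-switching outcome equation (dropping the covariate W_t for brevity, i.e., the standard HMM special case) and then fits an i.i.d.-regime Gaussian mixture by a hand-rolled EM algorithm; despite the misspecified regime dynamics, the regime-specific means and standard deviations are recovered.

```python
import numpy as np

# Hypothetical two-regime Markov-switching DGP (our own example parameters).
rng = np.random.default_rng(0)
n = 20000
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # Markov transition matrix (ignored at estimation)
mu_true, sigma_true = np.array([-1.0, 2.0]), np.array([0.5, 0.7])

s = np.zeros(n, dtype=int)
for t in range(1, n):
    s[t] = rng.choice(2, p=P[s[t - 1]])
y = mu_true[s] + sigma_true[s] * rng.standard_normal(n)

# QML under the misspecified i.i.d.-regime model: EM for a 2-component
# Gaussian mixture, ignoring the Markov dependence of the regimes.
w, mu, sig = np.array([0.5, 0.5]), np.array([-0.5, 0.5]), np.array([1.0, 1.0])
for _ in range(200):
    dens = w * np.exp(-0.5 * ((y[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)          # posterior responsibilities
    w = r.mean(axis=0)
    mu = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
    sig = np.sqrt((r * (y[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))

order = np.argsort(mu)
print(np.round(mu[order], 2), np.round(sig[order], 2))  # close to (-1, 2) and (0.5, 0.7)
```

Components are sorted by mean to resolve label switching; the mixing weights converge to the stationary distribution of the regime chain rather than to any "true" i.i.d. weights, which is consistent with the pseudo-true-parameter interpretation developed in the paper.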
1 Introduction

Consistency and asymptotic normality of least-squares estimators in regression models in the presence of potential model misspecification — e.g., misspecification of the response function or misspecification of the dynamic structure of the errors — are well-established facts (see, e.g., Domowitz and White (1982)). Such fundamental results, together with the related classical work of Huber (1967), underpin a large body of literature exploring the feasibility of drawing valid and meaningful inferences from parametric models that need not necessarily contain the true data-generating process (DGP). Numerous results of this kind have been established for a wide variety of models and estimators, both in static and dynamic settings, ranging from inference procedures based on estimating equations and moment conditions (e.g., Bates and White (1985)) to quasi-maximum-likelihood (QML) procedures for conditional mean, conditional variance and conditional quantile models (e.g., White (1982, 1994), Levine (1983), Gourieroux et al. (1984), Newey and Steigerwald (1997), Komunjer (2005)).

This paper adds to the literature by presenting another example of robustness with respect to misspecification. Specifically, we consider the case of Hidden Markov Models (HMMs), where observable variables exhibit conditional independence given an underlying unobservable regime sequence (and, possibly, exogenous covariate sequences), focusing on situations where the dependence structure of the regime sequence is misspecified. In our set-up, the DGP is taken to be a generalized HMM that may include covariates and has a finite number of Markov regimes, but the postulated probability model is a finite mixture model, that is, an HMM with independent, identically distributed (i.i.d.) regimes. By considering the pseudo-true parameter set for the QML estimator in the (misspecified) mixture model, it is shown that the parameters of the conditional distribution of the observable response variables are
consistently estimable even if the dependence of the unobservable regime sequence is not taken into account. A condition on the tail behavior of the characteristic function of the (standardized) conditional distribution of the observable responses is also provided under which the pseudo-true parameter for the QML estimator is a singleton set. An important distinguishing feature of our analysis is that the true regime sequence is allowed to be a temporally inhomogeneous Markov chain whose transition probabilities are functions of observable variables.

This case holds practical significance given the widespread use of both HMMs and mixture models. HMMs with temporally inhomogeneous regime sequences have found applications in diverse areas such as biology (e.g., Ghavidel et al. (2015)), economics (e.g., Diebold et al. (1994), Engel and Hakkio (1996)), earth sciences (e.g., Hughes et al. (1999)), and engineering (e.g., Ramesh and Wilpon (1992)). Temporally homogeneous variants of HMMs and of Markov-switching regression models are also used extensively in economics and finance (e.g., Engel and Hamilton (1990), Rydén et al. (1998), Jeanne and Masson (2000), Bollen et al. (2008)), as well as in biology, computing, engineering and statistics (see Ephraim and Merhav (2002) and references therein). Statistical inference in such models is typically likelihood-based and the properties of QML procedures are, naturally, of much interest. Nevertheless, HMMs are inherently intricate and computationally demanding due to the need to account for the underlying correlated regime sequence and for the dependence of the conditional distribution on the current hidden regime.
By demonstrating that it is feasible to use a mixture model — a simpler and computationally less demanding framework — while still estimating consistently the parameters of the conditional distribution of the observations, this paper offers a more accessible avenue for practitioners to follow without sacrificing the accuracy of parameter estimates.

In related recent work, Pouzo et al. (2022) considered the asymptotic properties of the QML estimator in a rich class of models with Markov regimes under general conditions which allow for autoregressive dynamics in the observation sequence, covariate-dependence in the transition probabilities of the hidden regime sequence, and potential model misspecification. The QML estimator was shown to be consistent for the pseudo-true parameter (set) that minimizes the Kullback–Leibler information measure. Unsurprisingly, identifying the possible limit of the QML estimator when the true probability structure of the data does not necessarily lie within the parametric family of distributions specified by the model is not always a feasible task in such a general set-up. This paper provides an answer in the simpler case of switching-regression models, HMMs and related mixture models. Consistency results for misspecified pure HMMs (with no covariates in the outcome equation) can also be found in Mevel and Finesso (2004) and Douc and Moulines (2012). Unlike our analysis, which allows the regime transition probabilities to be time-dependent and driven by observable variables, these papers restrict attention to the case of time-invariant transition mechanisms.

In the next section, we introduce the DGP and statistical model
of interest, and consider QML estimation of the parameters of the outcome equation of a misspecified generalized HMM. Section 3 discusses numerical results from a simulation study. Section 4 summarizes and concludes.

2 Framework, Results and Discussion

2.1 DGP and Model

Consider a discrete-time stochastic process {(X_t, S_t)}_{t≥0} such that X_t = (Y_t, Z_t, W_t) is an observable variable with values in X ⊂ R³ and S_t is a latent variable with values in S := {1, 2, …, d} ⊂ N for some d ≥ 2. The variable S_t is viewed as the hidden regime (or state) associated with index t, which is "observable" only indirectly through its effect on X_t. The following assumptions are made about the DGP:

1. For each t ≥ 1, the conditional distribution of S_t given X_0^{t−1} := (X_0, …, X_{t−1}) and S_0^{t−1} := (S_0, …, S_{t−1}), denoted by Q*(· | Z_{t−1}, S_{t−1}), depends only on (Z_{t−1}, S_{t−1}) and is such that Q*(s | z, s′) > 0 for all (s, s′, z) ∈ S² × Z, where Z ⊂ R is the state space of Z_t.

2. For each t ≥ 1, the conditional distribution of Z_t given (X_0^{t−1}, S_0^t) depends only on Z_{t−1}; furthermore, {(Z_t, S_t)}_{t≥0} is strictly stationary with invariant distribution ν_{ZS}.

3. For each t ≥ 1, the conditional distribution of Y_t given (X_0^{t−1}, S_0^t, W_t) depends only on (W_t, S_t) and is specified via the equation

Y_t = μ*_1(S_t) + γ*(S_t) W_t + σ*_1(S_t) U_{1,t},   (1)

where μ*_1, γ* and σ*_1 > 0 are known real functions on S; the noise variables {U_{1,t}}_{t≥0} are i.i.d., independent of {S_t}_{t≥0}, with mean zero, variance one, and density f.

4. For each t ≥ 1, the conditional distribution of W_t given (X_0^{t−1}, S_0^t) depends only on W_{t−1}, and W_t is independent of (Z_{t−1}, S_{t−1}); furthermore, {W_t}_{t≥0} is strictly exogenous in (1) and strictly stationary with invariant distribution ν_W.

Instead of the Markov-switching structure of the DGP, the researcher's postulated parametric model is a family of finite mixture models (without Markov dependence). Specifically, the model is specified by assuming that the regime variables {S_t}_{t≥1} are i.i.d. with distribution

Q_ϑ̄(s) = ϑ̄_s ∈ (0, 1), s ∈ S.
(2)

In addition, the observable variables {Y_t}_{t≥1} are assumed to satisfy the equations

Y_t = μ(S_t) + γ(S_t) W_t + σ(S_t) ε_t, t ≥ 1,   (3)

where μ, γ and σ > 0 are known real functions on S and {ε_t}_{t≥1} are i.i.d. random variables, independent of {(S_t, W_t)}_{t≥1}, such that ε_1 has the same density f as U_{1,1}. The mixture model defined by (2) and (3) is parameterized by θ := (π(s), ϑ̄_s)_{s∈S}, with π(s) := (μ(s), γ(s), σ(s)), which is assumed to take values in a compact set Θ ⊂ R^q, q > 1. We denote by P_π(· | W_t, S_t) the conditional distribution of Y_t given (W_t, S_t) that is implied by (3); the corresponding conditional density is denoted by p_π(· | W_t, S_t).

Key aspects of our set-up: First, the DGP has a (generalized) HMM structure in which {Y_t}_{t≥0} are independent, conditionally on the regime sequence {S_t}_{t≥0} and an exogenous covariate sequence {W_t}_{t≥0} (having the Markov property), so that the conditional distribution of Y_t given the regime and covariate sequences depends only on (S_t, W_t). The inclusion of the exogenous covariate W_t in (1) and (3) allows the study of the causal effect of W on Y under different regimes; this causal effect is captured by γ* and is estimable via the mixture specification (2)–(3). Exogeneity of W (Assumption 4 above) is essential for the results discussed in Section 2.2 to hold and, hence, for consistent estimation of the causal effect γ* under the (erroneous) assumption of independent regimes.¹ Second, the true hidden regimes {S_t}_{t≥0} are a temporally inhomogeneous Markov chain whose transition probabilities depend on the lagged value of the observable variable Z_t. The sequence {Z_t}_{t≥0} has the Markov property
https://arxiv.org/abs/2504.21669v1
and is not required to be exogenous, in the sense that $Z_t$ may be contemporaneously correlated with $U_{1,t}$. Third, the statistical model is misspecified, in the sense that the DGP is not a member of the family $\{(P_\pi,Q_{\bar\vartheta}):(\pi,\bar\vartheta)\in\Theta\}$; this is because the dynamic structure of the regimes is misspecified. As already discussed in Section 1, this relatively simple set-up is of much practical interest, since HMMs with temporally inhomogeneous regime sequences have found many applications. Mixture models with i.i.d. regimes are also widely used in many different fields (see McLachlan and Peel (2000) and Frühwirth-Schnatter (2006)), including economics and econometrics (see Compiani and Kitamura (2016)).

It is worth noting that, although we focus on scalar responses and covariates for the sake of simplicity, all our results can be extended straightforwardly to cases where $X_t\in\mathcal{X}\subset\mathbb{R}^h$ with $h>3$. For example, $W_t$ may be a vector of covariates, which may include lagged values of $W_t$ in cases where dynamic causal effects are of interest. Similarly, $Z_t$ may be a vector of information variables that affect the dynamic profile of the regime transition probabilities, whose generating mechanism has a finite-order autoregressive structure.

¹The standard HMM formulation is a special case in which $W_t$ is absent from the outcome equation (1).

2.2 QML Estimation

Given observations $(X_1,\dots,X_T)$, $T\geq 1$, the quasi-log-likelihood function for the parameter $\theta$ is
$$\theta\mapsto\ell_T(\theta):=T^{-1}\sum_{t=1}^{T}\ln\Bigl(\sum_{s\in\mathcal{S}}\bar\vartheta_s\,p_\pi(Y_t|W_t,s)\Bigr). \qquad (4)$$
The QML estimator $\hat\theta_T$ of $\theta$ is defined as an approximate maximizer of $\ell_T(\theta)$ over $\Theta$, so that
$$\ell_T(\hat\theta_T)\geq\sup_{\theta\in\Theta}\ell_T(\theta)-\eta_T,$$
for some sequence $\{\eta_T\}_{T\geq 1}\subset\mathbb{R}_+$ converging to zero.

It is not too onerous to verify that, under assumptions that are common in the literature (e.g., Gaussianity of $U_{1,1}$ and $Q^*(s|z,s')=G(\alpha_{s,s'}+\beta_{s,s'}z)$ for some continuous distribution function $G$ on $\mathbb{R}$ whose support is all of $\mathbb{R}$), the conditions of Pouzo et al.
(2022) required for convergence of the QML estimator of $\theta$ to a well-defined limit are satisfied. Specifically, let
$$\theta\mapsto H^*(\theta):=E_{\bar P^*}\Bigl[\ln\Bigl(\frac{p^*(Y_1|W_1)}{p_\theta(Y_1|W_1)}\Bigr)\Bigr]$$
be the Kullback–Leibler information function, where $p_\theta(Y_1|W_1):=\sum_{s\in\mathcal{S}}\bar\vartheta_s\,p_\pi(Y_1|W_1,s)$ denotes the conditional density of $Y_1$ given $W_1$ induced by $(P_\pi,Q_{\bar\vartheta})$ for each $(\pi,\bar\vartheta)\in\Theta$, $p^*(Y_1|W_1)$ denotes the conditional density of $Y_1$ given $W_1$ induced by the (true) DGP, and the expectation $E_{\bar P^*}(\cdot)$ is with respect to the distribution $\bar P^*$ of $\{(X_t,S_t)\}_{t\geq 0}$ induced by the (true) DGP. Then, we have
$$\inf_{\theta\in\Theta^*}\|\hat\theta_T-\theta\|\to 0\quad\text{as }T\to\infty, \qquad (5)$$
in $\bar P^*$-probability, where
$$\Theta^*:=\operatorname*{argmin}_{\theta\in\Theta}H^*(\theta) \qquad (6)$$
is the pseudo-true parameter (set) and $\|\cdot\|$ denotes the Euclidean norm on $\mathbb{R}^q$ (cf. Theorem 1 of Pouzo et al. (2022)).

A sharper result can be established by considering the pseudo-true parameter set $\Theta^*$ under the specified DGP. Together with (5) and (6), the following theorem shows that, despite the erroneous treatment of hidden regimes as independent, QML based on the (misspecified) mixture model provides consistent estimators of the true parameters of the outcome equation.

Theorem 1. The choice $\mu=\mu_1^*$, $\sigma=\sigma_1^*$, $\gamma=\gamma^*$, and $(\bar\vartheta^*_s)_{s\in\mathcal{S}}$ such that $\bar\vartheta^*_s=E_{\nu_{ZS}}[Q^*(s|Z,S)]$ for all $s\in\mathcal{S}$ is a pseudo-true parameter; that is, it maximizes the function
$$\theta\mapsto E_{\bar P^*}\Bigl[\ln\Bigl(\sum_{s\in\mathcal{S}}\frac{\bar\vartheta_s}{\sigma(s)}\,f\Bigl(\frac{Y_1-\mu(s)-\gamma(s)W_1}{\sigma(s)}\Bigr)\Bigr)\Bigr].$$

Proof. Observe that the Kullback–Leibler information function $H^*$ is proportional to
$$\theta\mapsto-\int_{\mathbb{R}^2}\ln\Bigl(\frac{\sum_{s\in\mathcal{S}}\bar\vartheta_s\,\sigma(s)^{-1}f((y-\mu(s)-\gamma(s)w)/\sigma(s))}{p^*(y|w)}\Bigr)p^*(y|w)\,dy\,\nu_W(dw), \qquad (7)$$
where
$$(y,w)\mapsto p^*(y|w)=\sum_{s\in\mathcal{S}}\Pr{}^*(S_1=s|W_1=w)\,\sigma_1^*(s)^{-1}f((y-\mu_1^*(s)-\gamma^*(s)w)/\sigma_1^*(s)).$$
Under our assumptions about $(W_t,Z_{t-1})$, $\Pr^*(S_1=\cdot\,|W_1=w)=\Pr^*(S_1=\cdot)$, where $\Pr^*$ stands for the true probability over the hidden regimes, given by
$$s\mapsto\Pr{}^*(S_1=s):=\int_{\mathbb{R}\times\mathcal{S}}\sum_{s'\in\mathcal{S}}Q^*(s|z,s')\,\nu_{ZS}(dz,ds').$$
The minimizers of the
function in (7) are all $\theta$ such that
$$\sum_{s\in\mathcal{S}}\bar\vartheta_s\,\sigma(s)^{-1}f((\cdot-\mu(s)-\gamma(s)\,\cdot)/\sigma(s))=p^*(\cdot|\cdot).$$
It is straightforward to verify that the equality above holds for $\mu=\mu_1^*$, $\sigma=\sigma_1^*$, $\gamma=\gamma^*$, and $\bar\vartheta^*$ such that $\bar\vartheta^*_s=\Pr^*(S_1=s)$.

Theorem 1 establishes that the true parameters $\pi^*(s):=(\mu_1^*(s),\sigma_1^*(s),\gamma^*(s))$, $s\in\mathcal{S}$, associated with the observation equation (1), together with $(\bar\vartheta^*_s)_{s\in\mathcal{S}}$, minimize the Kullback–Leibler information $H^*(\theta)$. The corollary that follows shows that this minimizer is unique, as long as $\theta^*:=(\pi^*(s),\bar\vartheta^*_s)_{s\in\mathcal{S}}$ is identified in $\Theta$. In the present context, $\theta^*$ is said to be identified in $\Theta$ if, for any $\theta\in\Theta$ such that $p_\theta(\cdot|\cdot)=p_{\theta^*}(\cdot|\cdot)$ $\bar P^*$-almost surely, $\theta=\theta^*$ up to permutations.²

Corollary 1. If $\theta^*$ is identified in $\Theta$, then it is the unique maximizer (up to permutations) of
$$\theta\mapsto E_{\bar P^*}\Bigl[\ln\Bigl(\sum_{s\in\mathcal{S}}\frac{\bar\vartheta_s}{\sigma(s)}\,f\Bigl(\frac{Y_1-\mu(s)-\gamma(s)W_1}{\sigma(s)}\Bigr)\Bigr)\Bigr].$$

Proof. By Theorem 1, it suffices to show that
$$\int_{\mathbb{R}^2}\ln(p^*(y|w))\,p^*(y|w)\,dy\,\nu_W(dw)>\int_{\mathbb{R}^2}\ln(p_\theta(y|w))\,p^*(y|w)\,dy\,\nu_W(dw)$$
for any $\theta\in\Theta$ which is not equal to a permutation of $\theta^*$. Since $\theta^*$ is identified, for any such $\theta$, $p_\theta(\cdot|\cdot)\neq p_{\theta^*}(\cdot|\cdot)$ with positive probability under $\bar P^*$. Therefore, the strict inequality above follows from the (strict) Jensen inequality.

To provide some (non-technical) intuition behind Theorem 1 and its corollary, recall that when the misspecified mixture model is fitted to data using maximum likelihood techniques, the objective function which is maximized is a quasi-likelihood. The resulting QML estimator is an estimator for the parameters which make the model's implied conditional distribution of the response as close as possible, measured in Kullback–Leibler divergence, to the true conditional distribution, even when the model is misspecified.
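As a concrete illustration of this objective, the quasi-log-likelihood $\ell_T(\theta)$ in (4) can be sketched as follows for Gaussian component densities (one common choice of $f$); the function and variable names are our own, not part of the paper:

```python
import numpy as np

def quasi_loglik(Y, W, theta_bar, mu, gamma, sigma):
    """Quasi-log-likelihood (4): T^{-1} sum_t log sum_s theta_bar_s * p_pi(Y_t|W_t,s),
    here with Gaussian component densities (one possible choice of the density f)."""
    # Component means mu(s) + gamma(s) * W_t, arranged as a (T, d) array.
    m = mu[None, :] + gamma[None, :] * W[:, None]
    z = (Y[:, None] - m) / sigma[None, :]
    comp = np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * sigma[None, :])
    # Mix over regimes with weights theta_bar, then average the log over t.
    return np.mean(np.log(comp @ theta_bar))
```

With a single regime ($d=1$, $\bar\vartheta_1=1$) this reduces to the average Gaussian log-density, which provides a quick sanity check of the implementation.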
Despite the fact that the researcher ignores dependence of the regimes, the conditional distribution of the response variable given the covariates remains a mixture of distributions under both the true and misspecified models. This structural similarity allows the mixture model to match the key features of the conditional distribution of interest correctly and, thus, the QML estimator consistently recovers the parameters of the outcome equation. This result relies heavily on two key conditions: stationarity of the regimes (Assumption 2) and exogeneity of the covariates (Assumption 4). The former stationarity condition is crucial because it ensures stability of the marginal distribution of the regimes, which the mixture model tries to fit. Without stationarity, the limiting object of the QML estimation procedure could vary over time, and consistency would break down. The latter exogeneity condition is fundamental because it guarantees that the covariates do not "carry" information about future states or future shocks into the noise of the outcome equation. If exogeneity failed, the conditional distribution of the response would not be properly captured by the simple mixture model and the QML estimator would be biased and inconsistent for the parameters of interest.

²Formally, $(\pi(p[s]),\bar\vartheta_{p[s]})=(\pi^*(s),\bar\vartheta^*_s)$ for all $s\in\mathcal{S}$ and any permutation $p:\mathcal{S}\to\mathcal{S}$. The qualifier 'up to permutations' reflects the fact that the problem remains unchanged if the indices of the regimes are permuted.

We conclude by remarking that when the minimizer $\theta^*$ of the Kullback–Leibler information function is identified in $\Theta$ (and belongs to the interior of $\Theta$), asymptotic normality of $\sqrt{T}(\hat\theta_T-\theta^*)$ may be deduced
from the results of Pouzo et al. (2022) under suitable differentiability and moment conditions. These conditions are satisfied, for example, in the case where $f$ is Gaussian and $Q^*(s|z,s')=G(\alpha_{s,s'}+\beta_{s,s'}z)$ for some continuous distribution function $G$ on $\mathbb{R}$ whose support is all of $\mathbb{R}$. In the next subsection, we discuss a sufficient condition for the high-level identifiability requirement of Corollary 1 and show that this condition holds in some commonly used models.

2.3 Discussion on the Identifiability Condition

Identifiability of mixture models has been studied extensively in the literature following the original contribution of Teicher (1963), who established identifiability of finite mixtures of distributions such as the one-dimensional Gaussian and gamma. Yakowitz and Spragins (1968) gave a necessary and sufficient condition for identifiability, which holds, for example, in the case of finite mixtures of multivariate Gaussian distributions. This condition was exploited by Holzmann et al. (2004) and Holzmann et al. (2006) to provide sufficient low-level identifiability conditions based on the tail behavior of the characteristic function of the component distributions.³

Using the approach of Holzmann et al. (2004) and Holzmann et al. (2006), we now present a low-level condition, based only on features of $f$, which is sufficient for the identifiability of $\theta^*$ required in Corollary 1.

Lemma 1. Let $\varphi$ be the characteristic function associated with $f$. If, for any $a_1>a_2$,
$$\lim_{\tau\to\infty}\frac{\varphi(a_1\tau)}{\varphi(a_2\tau)}=0,$$
then $\theta^*$ is identified in $\Theta$.

Proof. By the classical result of Yakowitz and Spragins (1968), to establish unique identifiability, it suffices to show that $\bigl\{\frac{1}{\sigma(s)}f\bigl(\frac{y-m(s,w)}{\sigma(s)}\bigr)\bigr\}_{s\in\mathcal{S}}$ are linearly independent, where $m(s,w):=\mu(s)+\gamma(s)w$.

³A review of related results for parametric and nonparametric models that incorporate mixture distributions can be found in Compiani and Kitamura (2016).
To this end, observe that
$$\int_{\mathbb{R}}e^{i\tau y}\frac{1}{\sigma(s)}f\Bigl(\frac{y-m(s,w)}{\sigma(s)}\Bigr)dy=e^{i\tau m(s,w)}\int_{\mathbb{R}}e^{i\tau u\sigma(s)}f(u)\,du=e^{i\tau m(s,w)}\varphi(\tau\sigma(s)).$$
Hence, for any $\lambda_1,\dots,\lambda_d$ in $\mathbb{R}$, $\sum_{s\in\mathcal{S}}\lambda_s\frac{1}{\sigma(s)}f\bigl(\frac{y-m(s,w)}{\sigma(s)}\bigr)=0$ implies
$$\sum_{s\in\mathcal{S}}\lambda_s e^{i\tau m(s,w)}\varphi(\tau\sigma(s))=0, \qquad (8)$$
for any $\tau\in\mathbb{R}$ and any $w\in\mathbb{R}$. Without loss of generality, let $\sigma(1)\leq\sigma(2)\leq\cdots\leq\sigma(d)$, with $m\in\mathcal{S}$ being such that $\sigma(1)=\cdots=\sigma(m)<\sigma(m+1)$. Then, (8) is equivalent to
$$\lambda_1+\sum_{s=2}^{m}\lambda_s e^{i\tau[m(s,w)-m(1,w)]}+\sum_{s=m+1}^{d}\lambda_s e^{i\tau[m(s,w)-m(1,w)]}\frac{\varphi(\tau\sigma(s))}{\varphi(\tau\sigma(1))}=0. \qquad (9)$$
Since $\sigma(s)>\sigma(1)$ for any $s\in\{m+1,\dots,d\}$ and $e^{i\tau[m(s,w)-m(1,w)]}$ is uniformly bounded in $\tau$, it follows by our assumption that $\sum_{s=m+1}^{d}\lambda_s e^{i\tau[m(s,w)-m(1,w)]}\frac{\varphi(\tau\sigma(s))}{\varphi(\tau\sigma(1))}\to 0$ as $\tau\to\infty$. This result readily implies that $n^{-1}\sum_{l=1}^{n}\sum_{s=m+1}^{d}\lambda_s e^{ilu_0[m(s,w)-m(1,w)]}\frac{\varphi(lu_0\sigma(s))}{\varphi(lu_0\sigma(1))}\to 0$ as $n\to\infty$, for any $u_0>0$. Regarding the term $\sum_{s=2}^{m}\lambda_s e^{i\tau[m(s,w)-m(1,w)]}$, observe that $m(s,w)\neq m(1,w)$ for any $s\in\{2,\dots,m\}$; otherwise, since $\sigma(s)=\sigma(1)$, the regimes associated with $S_t=s$ and $S_t=1$ would be identical rather than distinct. Thus, by choosing $\tau=lu_0$, where $l\in\mathbb{N}$ and $u_0\in\mathbb{R}_+$ is such that $u_0[m(s,w)-m(1,w)]\in(-\pi,\pi)$ for all $s\in\{2,\dots,m\}$, it follows by Lemma 2.1 in Holzmann et al. (2004) that $n^{-1}\sum_{l=1}^{n}\sum_{s=2}^{m}\lambda_s e^{ilu_0[m(s,w)-m(1,w)]}\to 0$ as $n\to\infty$. By these two results and (9),
$$\lambda_1=-\lim_{n\to\infty}n^{-1}\sum_{l=1}^{n}\Bigl(\sum_{s=2}^{m}\lambda_s e^{ilu_0[m(s,w)-m(1,w)]}+\sum_{s=m+1}^{d}\lambda_s e^{ilu_0[m(s,w)-m(1,w)]}\frac{\varphi(lu_0\sigma(s))}{\varphi(lu_0\sigma(1))}\Bigr)=0.$$
By iterating on this procedure, it follows that $\lambda_1=\lambda_2=\cdots=\lambda_d=0$, thereby establishing the desired result.

As an example, consider what is, arguably, the most widely used class of mixture models, namely those in which $f$ is Gaussian. In this case, $\varphi(\tau)=e^{-\tau^2/2}$, $\tau\in\mathbb{R}$, and $\varphi(a_1\tau)/\varphi(a_2\tau)=e^{-(a_1^2-a_2^2)\tau^2/2}$ for $a_1>a_2$, so the condition of Lemma 1 is satisfied.
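The tail-dominance condition of Lemma 1 is easy to check numerically in the Gaussian case; the following snippet (our illustration, not part of the paper) evaluates the ratio $\varphi(a_1\tau)/\varphi(a_2\tau)=e^{-(a_1^2-a_2^2)\tau^2/2}$ at increasing values of $\tau$:

```python
import numpy as np

def gaussian_cf_ratio(a1, a2, tau):
    """Ratio phi(a1*tau)/phi(a2*tau) for the standard Gaussian characteristic
    function phi(t) = exp(-t^2/2), computed in closed form to avoid underflow."""
    assert a1 > a2 > 0
    return np.exp(-(a1**2 - a2**2) * tau**2 / 2.0)

taus = np.array([1.0, 5.0, 10.0, 20.0])
ratios = gaussian_cf_ratio(1.5, 1.0, taus)
print(ratios)  # strictly decreasing toward 0, as Lemma 1 requires
```

The closed-form exponent is used deliberately: evaluating numerator and denominator separately would underflow to zero long before the ratio itself becomes unrepresentable.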
Thus, in the Gaussian case, the QML estimator of the parameters of the mixture model (2)–(3) converges, in $\bar P^*$-probability, to $\theta^*$. This result remains valid for non-Gaussian distributions, including distributions with heavy tails (and finite variance). For instance, the result holds if $f$ is the density of a (rescaled) Student-$t$ distribution with degrees of freedom $\upsilon>2$ (see Example 1 in Holzmann et al. (2006)).

2.4 Discussion on the Main Theorem

The consistency results in (5)–(6) and in Theorem 1 are quite general, in the sense that they cover misspecified generalized HMMs with temporally inhomogeneous regime sequences and arbitrary observation conditional densities. They imply that dependence of the regimes in such HMMs may be safely ignored as long as the parameters of interest are those of the conditional density of the observations given the regimes and the covariates. It is important to note, however, that care should be taken in estimating the asymptotic covariance matrix of the QML estimator, since the inverse of the observed information matrix is not necessarily a consistent estimator in a misspecified model. Consistent estimation in this case typically requires the use of an empirical sandwich estimator that does not rely on the information matrix equality (cf. Theorem 5 of Pouzo et al. (2022)).

Treating the regimes as an independent sequence simplifies likelihood-based inference compared to the case of correlated Markov regimes. In the latter case, an added difficulty, as demonstrated by Pouzo et al. (2022), is that consistent QML estimation of the true parameter values in a model with Markov regimes having covariate-dependent transition functions typically requires joint analysis of equations such as (1) and the generating mechanism of $\{Z_t\}$, even if the parameters of interest are only those associated with (1).
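To illustrate the sandwich form mentioned above, a minimal sketch is given below. It uses the simple outer product of scores as the middle matrix rather than the Parzen-kernel HAC weighting used later in the simulations, and all names are ours:

```python
import numpy as np

def sandwich_cov(scores, hessian):
    """Misspecification-robust covariance H^{-1} Omega H^{-1} for sqrt(T)(theta_hat - theta*).

    scores  : (T, q) array of per-observation gradients of the quasi-log-likelihood
    hessian : (q, q) average Hessian of the quasi-log-likelihood at theta_hat

    Omega is estimated here by the outer product of the scores; a HAC version
    would instead kernel-weight the score autocovariances (e.g., Parzen kernel).
    """
    T = scores.shape[0]
    omega = scores.T @ scores / T
    h_inv = np.linalg.inv(hessian)
    return h_inv @ omega @ h_inv
```

When the information matrix equality holds, $\Omega=-H$ and the sandwich collapses to the usual inverse-information covariance; under misspecification the two outer factors no longer cancel, which is precisely why the plain inverse Hessian can be inconsistent.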
Furthermore, as pointed out by Hamilton (2016), rich parameterizations of the transition mechanism of the regime sequence may not necessarily be desirable when working with relatively short time series, because of legitimate concerns relating to potential over-fitting and inaccurate statistical inference. In such cases, parsimonious specifications which provide good approximations to key features of the data, and, in our setting, consistent estimates of the parameters of interest, can be attractive and useful.

Note that, for a class of regime-switching models in which the regime sequence $\{S_t\}$ is a temporally homogeneous, two-state Markov chain, an observation analogous to that implied by Theorem 1 was made by Cho and White (2007). They argued that the parameters of a model for the conditional distribution of the observable variable $X_t$, given $(X_0^{t-1},S_0^t)$, can be consistently estimated by QML based on a misspecified version of the model with i.i.d. regimes, and exploited this result to construct a quasi-likelihood-ratio test of the null hypothesis of a single regime against the alternative hypothesis of two regimes. However, Carter and Steigerwald (2012) demonstrated that consistency of the QML estimator for the true parameters in such a setting does not, in fact, hold if the model and the DGP contain an autoregressive component. This observation remains true in our more general set-up with temporally inhomogeneous hidden regime sequences. Specifically,
a result analogous to that in Theorem 1 does not hold when lagged values of $Y_t$ are present as covariates in the outcome equations (1) and (2) (e.g., as in Markov-switching autoregressive models). In this case, misspecification of the dependence structure of the regimes will affect estimation of all the parameters, not just those associated with the transition functions of the regime sequence.

3 Numerical Examples

As a numerical illustration of the results discussed in Section 2, we report here findings from a small Monte Carlo simulation study in which the effect on QML estimators of ignoring Markov dependence of hidden regimes is assessed.

In the experiments, artificial data are generated according to the generalized HMM defined by (1), with regimes $\{S_t\}$ which form a Markov chain on $\mathcal{S}=\{1,2\}$ such that
$$\Pr(S_t=s\,|\,S_{t-1}=s,Z_{t-1}=z)=[1+\exp(-\alpha^*_s-\beta^*_s z)]^{-1},\qquad s\in\{1,2\},\ z\in\mathbb{R},$$
and with $\{Z_t\}$ and $\{W_t\}$ satisfying the autoregressive equations
$$Z_t=\mu_2^*+\psi^* Z_{t-1}+\sigma_2^* U_{2,t},\qquad W_t=\mu_3^*+\delta^* W_{t-1}+\sigma_3^* U_{3,t}.$$
The noise variables $\{(U_{1,t},U_{2,t},U_{3,t})\}$ are i.i.d., Gaussian, independent of $\{S_t\}$, with mean zero and covariance matrix
$$\begin{pmatrix}1&\rho^*&\omega^*\\ \rho^*&1&0\\ \omega^*&0&1\end{pmatrix}.$$
The parameter values are $\alpha_1^*=\alpha_2^*=2$, $\beta_1^*=-\beta_2^*=0.5$, $\mu_1^*(1)=-\mu_1^*(2)=1$, $\gamma^*(1)=0.5$, $\gamma^*(2)=1$, $\sigma_1^*(1)=\sigma_1^*(2)=1$, $\mu_2^*=\mu_3^*=0.2$, $\psi^*=\delta^*=0.8$, $\sigma_2^*=\sigma_3^*=1$, and $\rho^*,\omega^*\in\{0,0.65\}$.

For each of 1000 samples of size $T\in\{200,800,1600,3200\}$ from this DGP, estimates of the parameters of the outcome equation are obtained by maximizing the quasi-log-likelihood function (4) associated with the mixture model (2)–(3), with $\Pr(S_t=1)=\bar\vartheta$ and $\varepsilon_t\sim\mathcal{N}(0,1)$. Monte Carlo estimates of the bias of the QML estimators of $\mu(1)$, $\mu(2)$, $\gamma(1)$, $\gamma(2)$, $\sigma(1)$ and $\sigma(2)$ are reported in Table 1. We also report the ratio of the sampling standard deviation of the estimators to estimated standard errors (averaged across replications for each design point). The latter are computed using a sandwich estimator based on the Hessian and the gradient of the quasi-log-likelihood function (cf.
Pouzo et al. (2022, Theorem 5)), with weights obtained from the Parzen kernel and a data-dependent bandwidth selected by the plug-in method of Andrews (1991).

The results for $\omega^*=0$ shown in the top panel of Table 1 reveal that, although the estimators of $\mu(1)$ and $\mu(2)$ are somewhat biased in the smallest of the sample sizes considered, finite-sample bias becomes insignificant in the rest of the cases (regardless of the value of the correlation parameter $\rho^*$), as is to be expected in light of the result in Theorem 1. Furthermore, unless the sample size is small, estimated standard errors are very accurate as approximations to the standard deviation of the QML estimators.

The bottom panel of Table 1 contains results for a DGP with $\omega^*=0.65$. A non-zero value for the correlation parameter $\omega^*$ violates the exogeneity assumption about $W_t$ that is maintained throughout Section 2 (and it is not obvious what the limit point of the QML estimator based on (4) might be in this case). The simulation results show that estimators of the parameters of the outcome equation are significantly biased, even for the largest sample size considered in the simulations. Biases in this case are clearly a consequence of the mixture model being misspecified
beyond the assumption of i.i.d. regimes, the additional source of misspecification being the incorrect assumption of uncorrelatedness of the covariate $W_t$ and the noise variable $U_{1,t}$. The results relating to the accuracy of the estimated standard errors are not substantially different from those obtained with $\omega^*=0$.

As pointed out in Section 2.4, another situation in which ignoring Markov dependence of the regimes is costly involves outcome equations that contain autoregressive dynamics. To demonstrate numerically the difficulties in such a case, 1000 artificial samples of various sizes are generated according to the Markov-switching autoregression
$$Y_t=\mu_1^*(S_t)+\phi^* Y_{t-1}+\sigma_1^*(S_t)U_{1,t}, \qquad (10)$$
with $\phi^*=0.9$; the remaining parameter values and the generating mechanisms of $\{Z_t\}$, $\{S_t\}$ and $\{(U_{1,t},U_{2,t})\}$ are the same as in earlier simulation experiments.

Table 1: Bias and Standard Deviation of QML Estimators (HMM)

          ρ* = 0, ω* = 0                                  ρ* = 0.65, ω* = 0
T       µ(1)   µ(2)   γ(1)   γ(2)   σ(1)   σ(2)     µ(1)   µ(2)   γ(1)   γ(2)   σ(1)   σ(2)
Bias
200    0.093 -0.022 -0.033  0.018 -0.125 -0.045    0.090 -0.032 -0.029  0.013 -0.111 -0.041
800    0.017 -0.006 -0.003  0.002 -0.028 -0.013    0.021 -0.006 -0.003  0.004 -0.023 -0.010
1600   0.009 -0.001 -0.001  0.000 -0.014 -0.006   -0.001 -0.008  0.002  0.003 -0.008 -0.008
3200  -0.001 -0.004 -0.001  0.001 -0.005 -0.003    0.010  0.000 -0.003  0.000 -0.006 -0.002
Standard Deviation / Standard Error
200    1.361  1.129  1.338  1.142  1.452  1.179    1.365  1.234  1.520  1.168  1.484  1.157
800    1.049  0.954  1.031  1.022  1.040  0.984    1.035  0.979  1.036  1.010  1.033  0.990
1600   1.054  1.031  1.022  1.008  1.024  1.006    0.998  0.966  1.005  0.972  1.016  0.975
3200   1.031  0.973  0.974  0.997  1.040  0.992    1.045  1.041  1.015  0.948  0.976  1.014

          ρ* = 0, ω* = 0.65                               ρ* = 0.65, ω* = 0.65
Bias
200   -0.190 -0.262  0.228  0.254 -0.171 -0.120   -0.205 -0.280  0.225  0.261 -0.162 -0.116
800   -0.226 -0.241  0.238  0.238 -0.098 -0.090   -0.233 -0.243  0.235  0.240 -0.096 -0.090
1600  -0.231 -0.238  0.236  0.235 -0.089 -0.084   -0.227 -0.237  0.233  0.235 -0.089 -0.084
3200  -0.235 -0.237  0.236  0.235 -0.083 -0.082   -0.230 -0.238  0.234  0.236 -0.085 -0.082
Standard Deviation / Standard Error
200    1.183  1.151  1.328  1.129  1.328  1.075    1.182  1.189  1.322  1.111  1.317  1.183
800    0.812  0.888  1.008  0.819  0.826  0.861    0.979  1.012  1.035  0.991  1.038  1.025
1600   0.997  1.042  0.998  0.988  1.000  0.975    0.989  1.071  0.982  0.965  1.011  1.003
3200   1.021  1.080  1.002  0.995  0.971  1.022    1.084  1.162  1.024  1.030  1.018  1.032

Table 2: Bias and Standard Deviation of QML Estimators (Markov-Switching Autoregressive Model)

          ρ* = 0                                ρ* = 0.8
T       µ(1)   µ(2)   σ(1)   σ(2)     φ       µ(1)   µ(2)   σ(1)   σ(2)     φ
Bias
200   -0.500  0.222 -0.093 -0.141 -0.012    -0.765  0.468 -0.114 -0.135 -0.049
800   -0.400  0.039  0.042 -0.067  0.002    -0.626  0.288  0.027 -0.060 -0.034
1600  -0.440  0.023  0.086 -0.049  0.004    -0.753  0.276  0.096 -0.049 -0.031
3200  -0.462  0.013  0.115 -0.036  0.005    -0.699  0.249  0.103 -0.035 -0.028
Standard Deviation / Standard Error
200    1.546  1.214  1.764  1.563  1.140     0.438  1.261  0.542  1.008  1.023
800    1.282  0.907  1.335  1.177  0.997     1.228  0.928  1.347  1.121  0.954
1600   1.396  0.916  1.335  0.708  0.988     1.542  1.011  1.504  1.066  1.008
3200   1.424  0.897  1.398  0.913  0.968     1.078  0.803  1.102  0.775  0.976

For each artificial sample, the parameters of the regime-switching autoregressive model
$$Y_t=\mu(S_t)+\phi Y_{t-1}+\sigma(S_t)\varepsilon_t, \qquad (11)$$
are estimated by maximizing
the quasi-log-likelihood function associated with it under the assumption that the regime variables $\{S_t\}$ are i.i.d., with $\Pr(S_t=1)=\bar\vartheta$, and the noise variables $\{\varepsilon_t\}$ are i.i.d., independent of $\{S_t\}$, with $\varepsilon_t\sim\mathcal{N}(0,1)$.

The Monte Carlo results reported in Table 2 reveal substantial finite-sample bias in the case of the QML estimators of the intercepts $\mu(1)$ and $\mu(2)$. The QML estimators of $\sigma(1)$, $\sigma(2)$ and $\phi$ generally exhibit little bias, which may be partly due to the fact that the simulation design is such that the values of $\phi^*$ and $\sigma_1^*$ are the same regardless of the realized regime. Unlike the HMM case considered before, estimated standard errors are not always accurate as approximations to the finite-sample standard deviation of the QML estimators in the autoregressive model, even for a parameter such as $\sigma(1)$, which is estimated with little bias. We note that qualitatively similar results are obtained when, in addition to $Y_{t-1}$, an exogenous covariate $W_t$, generated as in the previous experiments, is included in the right-hand sides of (10) and (11).

4 Conclusion

In this paper, we have considered QML estimation of the parameters of a generalized HMM with exogenous covariates and a finite hidden state space. A distinguishing feature of our approach is that it allows the regime sequence to be a temporally inhomogeneous Markov chain with covariate-dependent transition probabilities. It has been shown that a mixture model with independent regimes is robust in the presence of correlated Markov regimes, in the sense that the parameters of the outcome equation can be estimated consistently by maximizing the quasi-likelihood function associated with the misspecified mixture model.

One possible application of our main result is to exploit it to construct tests for the number of regimes in HMMs with covariate-dependent transition probabilities, adopting a QML-based approach analogous to that of Cho and White (2007).
As is well known, such testing problems are non-standard and typically involve unidentifiable nuisance parameters, parameters that lie on the boundary of the parameter space, singularity of the information matrix, and non-quadratic approximations to the log-likelihood function.

References

Andrews, D. W. K. (1991). Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59, 817–858.

Bates, C. and H. White (1985). A unified theory of consistent estimation for parametric models. Econometric Theory 1, 151–178.

Bollen, N. P. B., S. F. Gray, and R. E. Whaley (2000). Regime switching in foreign exchange rates: Evidence from currency option prices. Journal of Econometrics 94, 239–276.

Carter, A. V. and D. G. Steigerwald (2012). Testing for regime switching: A comment. Econometrica 80, 1809–1812.

Cho, J. S. and H. White (2007). Testing for regime switching. Econometrica 75, 1671–1720.

Compiani, G. and Y. Kitamura (2016). Using mixtures in econometric models: a brief review and some new results. Econometrics Journal 19, C95–C127.

Diebold, F. X., J.-H. Lee, and G. C. Weinbach (1994). Regime switching with time-varying transition probabilities. In C. P. Hargreaves (Ed.), Nonstationary Time Series Analysis and Cointegration, pp. 283–302. Oxford: Oxford University Press.

Domowitz, I.
and H. White (1982). Misspecified models with dependent observations. Journal of Econometrics 20, 35–58.

Douc, R. and E. Moulines (2012). Asymptotic properties of the maximum likelihood estimation in misspecified hidden Markov models. Annals of Statistics 40, 2697–2732.

Engel, C. and C. S. Hakkio (1996). The distribution of the exchange rate in the EMS. International Journal of Finance and Economics 1, 55–67.

Engel, C. and J. D. Hamilton (1990). Long swings in the Dollar: Are they in the data and do markets know it? American Economic Review 80, 689–713.

Ephraim, Y. and N. Merhav (2002). Hidden Markov processes. IEEE Transactions on Information Theory 48, 1518–1569.

Frühwirth-Schnatter, S. (2006). Finite Mixture and Markov Switching Models. New York: Springer.

Ghavidel, F. Z., J. Claesen, and T. Burzykowski (2015). A nonhomogeneous hidden Markov model for gene mapping based on next-generation sequencing data. Journal of Computational Biology 22, 178–188.

Gourieroux, C., A. Monfort, and A. Trognon (1984). Pseudo maximum likelihood methods: Theory. Econometrica 52, 681–700.

Hamilton, J. D. (2016). Macroeconomic regimes and regime shifts. In J. B. Taylor and H. Uhlig (Eds.), Handbook of Macroeconomics, Vol. 2A, pp. 163–201. Amsterdam: North-Holland.

Holzmann, H., A. Munk, and T. Gneiting (2006). Identifiability of finite mixtures of elliptical distributions. Scandinavian Journal of Statistics 33, 753–763.

Holzmann, H., A. Munk, and B. Stratman (2004). Identifiability of finite mixtures - with applications to circular distributions. Sankhyā 66, 440–449.

Huber, P. J. (1967). The behavior of maximum likelihood estimates under nonstandard conditions. In L. M. Le Cam and J. Neyman (Eds.), Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1, pp. 221–233. Berkeley, CA: University of California Press.

Hughes, J. P., P. Guttorp, and S. P. Charles (1999).
A non-homogeneous hidden Markov model for precipitation occurrence. Applied Statistics 48, 15–30.

Jeanne, O. and P. Masson (2000). Currency crises, sunspots and Markov-switching models. Journal of International Economics 50, 327–350.

Komunjer, I. (2005). Quasi-maximum likelihood estimation for conditional quantiles. Journal of Econometrics 128(1), 137–164.

Levine, D. (1983). A remark on serial correlation in maximum likelihood. Journal of Econometrics 23, 337–342.

McLachlan, G. J. and D. Peel (2000). Finite Mixture Models. New York: Wiley.

Mevel, L. and L. Finesso (2004). Asymptotical statistics of misspecified hidden Markov models. IEEE Transactions on Automatic Control 49, 1123–1132.

Newey, W. K. and D. G. Steigerwald (1997). Asymptotic bias for quasi-maximum-likelihood estimators in conditional heteroskedasticity models. Econometrica 65, 587–599.

Pouzo, D., Z. Psaradakis, and M. Sola (2022). Maximum likelihood estimation in Markov regime-switching models with covariate-dependent transition probabilities. Econometrica 90, 1681–1710.

Ramesh, P. and J. G. Wilpon (1992). Modeling state durations in hidden Markov models for automatic speech recognition. In ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing, Volume 1, pp. 381–384. IEEE.

Rydén, T., T. Teräsvirta, and S. Åsbrink (1998). Stylized facts of daily return series and the hidden Markov model. Journal of Applied Econometrics 13, 217–244.

Teicher, H. (1963). Identifiability of finite mixtures. Annals of Mathematical Statistics 34, 1265–1269.
arXiv:2505.00151v1 [math.ST] 30 Apr 2025

A BAYESIAN APPROACH TO INVERSE PROBLEMS IN SPACES OF MEASURES

PHUOC-TRUONG HUYNH

Abstract. In this work, we develop a Bayesian framework for solving inverse problems in which the unknown parameter belongs to a space of Radon measures taking values in a separable Hilbert space. The inherent ill-posedness of such problems is addressed by introducing suitable measure-valued priors that encode prior information and promote desired sparsity properties of the parameter. Under appropriate assumptions on the forward operator and noise model, we establish the well-posedness of the Bayesian formulation by proving the existence, uniqueness, and stability of the posterior with respect to perturbations in the observed data. In addition, we also discuss computational strategies for approximating the posterior distribution. Finally, we present some examples that demonstrate the effectiveness of the proposed approach.

Keywords. Bayesian inverse problems, Radon measures, non-Gaussian prior.

2020 Mathematics Subject Classification. 35R30, 35Q62, 62F15, 62G35.

Contents
1. Introduction
2. Vector-valued measures
3. Prior distribution on the space of measures
4. Well-posedness of the Bayesian inverse problem
5. Applications
6. Conclusion and remarks
Acknowledgement
References

1. Introduction

In this work, we study an infinite-dimensional Bayesian framework for solving inverse problems, where the emphasis is placed on incorporating sparsity-promoting prior knowledge about the unknown signal. Such sparsity assumptions are motivated by the observation that in many real-world applications, the underlying physical parameters to be identified are localized or concentrated in space, which results naturally in sparse representations.

Institut für Mathematik, Alpen-Adria-Universität Klagenfurt, 9020 Klagenfurt, Austria.
E-mail address: phuoc.huynh (at) aau.at.
To be precise, we consider an additive model for inverse problems of the form
$$z=G(u)+\xi, \qquad (1.1)$$
where $u$ is the unknown parameter, $z$ represents the vector of measurement data, $G$ is the forward operator and $\xi$ is the measurement noise. We assume that $u$ admits a sparse structure, in the sense that it can be expressed as a finite sum of weighted Dirac measures supported on a domain $\Omega$. Specifically, $u$ is a discrete measure taking values in a separable Hilbert space $H$, given by
$$u=\sum_{k=1}^{N_s}q_k\delta_{y_k},\qquad q_1,\dots,q_{N_s}\in H,\quad y_1,y_2,\dots,y_{N_s}\in\Omega, \qquad (1.2)$$
where $N_s$ denotes the number of sources (typically small), $q_1,\dots,q_{N_s}$ are the amplitudes in $H$, and $y_1,\dots,y_{N_s}$ are their locations. This type of signal has various applications, such as in Helmholtz source identification [16], the optimal control problem of the wave equation with measure-valued control [21], and optimal transport problems [3]. Hence, the forward operator $G$ maps from the parameter space $\mathcal{M}(\Omega,H)$, which is the space of Radon measures on $\Omega$ that take values in a Hilbert space $H$, to a Banach space $Y$. Detailed descriptions of the parameter space $\mathcal{M}(\Omega,H)$ will be given in Section 2. Since the map $G$ is typically not continuously invertible, this problem is, in general, ill-posed; thus, regularization methods are
required. In recent years, the Bayesian approach as a means of regularization has become more popular. We recall from [31, 32] that a Bayesian solution to (1.1) is a probability measure $\mu^z_{\mathrm{post}}$, namely the posterior distribution of $u$ conditioned on measurement data $z$, as described by Bayes' rule [31]
$$\frac{d\mu^z_{\mathrm{post}}}{d\mu_{\mathrm{pr}}}(u)=\frac{1}{Z(z)}\exp(-\Phi(u;z)),\qquad\text{where }Z(z)=\int_{\mathcal{M}(\Omega,H)}\exp(-\Phi(u;z))\,d\mu_{\mathrm{pr}}(u),$$
provided that all the quantities are well-defined. Here, $\mu_{\mathrm{pr}}$ is the prior measure reflecting our belief about the parameter $u$, $\Phi(u;z)$ is the likelihood potential, and $\mu^z_{\mathrm{post}}$ is the posterior measure representing the distribution of $u$ after incorporating the observed data $z$. As is well known, the space $\mathcal{M}(\Omega,H)$ is a nonseparable Banach space, which is, in fact, the main challenge in studying the inverse problem (1.1) within the Bayesian setting, since the well-posedness theory in [31] cannot be directly applied. In addition, a suitable prior measure $\mu_{\mathrm{pr}}$ on $\mathcal{M}(\Omega,H)$ should be introduced to establish the well-posedness of the Bayesian inverse problem, as well as to ensure computationally feasible sampling of the solutions.

Since the pioneering work by Stuart [31], the theory of Bayesian inverse problems has been extensively developed. The well-posedness of Bayesian inverse problems has been studied in several settings, such as in quasi-Banach spaces [33] and in non-separable Banach spaces with unconditional Schauder bases [18]. The stability of the map $z\mapsto\mu^z_{\mathrm{post}}$ has been considered with respect to various distances on the space of probability measures, for instance, the Hellinger distance [31] and the total variation distance [18]. Notably, the work [23] introduced a general concept for the well-posedness of Bayesian inverse problems under a mild assumption on the measurability of the parameter space and the likelihood function $L(z|u)$. In addition, the study of prior measures is of particular interest for characterizing prior beliefs about the ground truth.
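As a concrete, finite-dimensional toy illustration of Bayes' rule above, one can represent a discrete measure $u=\sum_k q_k\delta_{y_k}$ by its amplitudes and locations and evaluate the unnormalized posterior density (relative to the prior) under a Gaussian noise model. The smoothing forward operator, the noise model, and all names below are our own assumptions for illustration, not the paper's construction:

```python
import numpy as np

def forward(q, y, obs_points, width=0.1):
    """Toy forward operator G(u): evaluate sum_k q_k * kernel(x - y_k) at obs_points,
    i.e., a smoothed observation of the discrete measure u = sum_k q_k delta_{y_k}."""
    diffs = obs_points[:, None] - y[None, :]
    kernel = np.exp(-0.5 * (diffs / width) ** 2)
    return kernel @ q

def likelihood_potential(q, y, z, obs_points, noise_std=0.05):
    """Likelihood potential Phi(u; z) for additive Gaussian noise xi ~ N(0, noise_std^2 I)."""
    r = z - forward(q, y, obs_points)
    return 0.5 * np.sum(r**2) / noise_std**2

# Unnormalized posterior density d(mu_post)/d(mu_pr) (up to Z(z)) at a candidate u:
q = np.array([1.0, -0.5]); y = np.array([0.3, 0.7])
obs = np.linspace(0.0, 1.0, 20)
z = forward(q, y, obs)                      # noiseless data generated from the "true" u
post_at_truth = np.exp(-likelihood_potential(q, y, z, obs))
print(post_at_truth)  # equals 1.0: zero residual at the truth
```

Any candidate with a non-zero residual receives an unnormalized posterior weight strictly below that of the truth, which is the finite-dimensional shadow of the measure-valued construction studied in the paper.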
For instance, Besov priors [13, 1] promote sparsity in the representation of functions, while convex and heavy-tailed priors [18, 19] offer alternative structural properties.

From a practical point of view, Bayesian approaches have been employed in various applications, such as Helmholtz source identification with Dirac sources [16], scalar conservation laws [12], and geophysics [6]. We remark that the application of Bayesian inverse problems most closely related to our work is found in [16], where a prior measure is defined on the space ℓ¹ of summable sequences. While the authors' main result on well-posedness is mathematically justified and presents a notable application to the sound source localization problem within the Bayesian framework, we believe that this manuscript offers a more intuitive approach to Bayesian inverse problems that incorporates the sparsity assumption of the underlying signal.

1.1. Contributions. The main contributions of this manuscript are the following:

(1) We introduce a suitable prior measure µ_pr on the space of measures M(Ω, H). This prior is characterized in terms of point processes; see, for instance, [10, 28]. While there is a rich theory on point processes for nonparametric Bayesian estimation
(see [5] and the references therein), most existing work considers point processes with real, positive coefficients. We instead consider vector-valued coefficients and show that the resulting point process belongs to M(Ω, H) µ_pr-almost surely.

(2) Given the aforementioned priors, we study the well-posedness of the Bayesian inverse problem in this setting. Our approach follows that of [23], where the well-posedness of the Bayesian inverse problem (1.1) is characterized under mild assumptions on the parameter space M(Ω, H) and the likelihood function L(z|u) = exp(−Φ(u;z)). To this end, rather than treating M(Ω, H) as a Banach space, we employ the weak* topology on M(Ω, H), leveraging favorable properties of this topological structure to obtain suitable measurability properties on this space. Consequently, under the weak* continuity assumption on the forward operator, we establish its measurability, thereby making the theory in [23] applicable.

Finally, we present some examples that illustrate the applicability of our approach within this setting.

1.2. Organization of the paper. The paper is organized as follows: In Section 2, we recall some topological and measure-theoretic properties of the space M(Ω, H). Based on these properties, we define a suitable prior on the space M(Ω, H) in Section 3. In Section 4, we establish a well-posedness result for the Bayesian inverse problem (1.1). Finally, we present some examples that illustrate the applicability of the developed theory.

2. Vector-valued measures

Let us introduce some notation that will be used in the sequel. First, we denote by Ω ⊂ R^d, d ≥ 1, the closure of a bounded domain, and denote by H a separable Hilbert space equipped with the inner product (·,·)_H. Also, we consider two types of measures that are crucial for the analysis: the first type includes elements of M(Ω, H), which are (Radon) H-valued measures on Ω, and the second type consists of probability measures on M(Ω, H).
Elements of the first type will be denoted by u, v, etc., while those of the second type will be denoted by µ, ν, etc., for clarity.

2.1. Topological properties of M(Ω, H). We first introduce vector measures on Ω. An H-valued measure on Ω is a countably additive mapping u : B(Ω) → H, where B(Ω) denotes the Borel σ-algebra on Ω. For every H-valued measure u, we define the associated total variation measure |u| : B(Ω) → R₊ ∪ {+∞} by

|u|(B) := \sup\left\{ \sum_{n=1}^{\infty} \|u(B_n)\|_H : \{B_n\}_{n\in\mathbb{N}} \subset \mathcal{B}(\Omega) \text{ is a disjoint partition of } B \right\}

for every B ∈ B(Ω). Hence, the space M(Ω, H) is the space of H-valued measures on Ω with finite total variation, namely

\mathcal{M}(\Omega,H) := \{ u \text{ is an } H\text{-valued measure on } \Omega : |u|(\Omega) < \infty \},

which is a Banach space equipped with the norm

\|u\|_{\mathcal{M}(\Omega,H)} := |u|(\Omega) = \int_{\Omega} d|u|.

The support of the vector measure u, defined in the usual way, satisfies supp u = supp |u|. Hence, for a discrete measure

u = \sum_{k=1}^{N_s} q_k \delta_{y_k}, \qquad q_k \in H, \; y_k \in \Omega \text{ for all } k = 1, \ldots, N_s,

one has supp u = {y₁, ..., y_{N_s}} and

\|u\|_{\mathcal{M}(\Omega,H)} = \sum_{k=1}^{N_s} \|q_k\|_H.

Note that for H = R, we obtain the classical space of real-valued
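As a concrete sketch (an illustration, not part of the paper), a discrete H-valued measure with H = R^m can be stored as coefficient/location pairs, and its M(Ω, H)-norm is then the sum of the H-norms of the coefficients, matching the formula above. The sampling scheme for N_s, y_k, q_k below (Poisson count, uniform locations on Ω = [0,1]^d, Gaussian coefficients) is only one plausible point-process-style choice, not the prior constructed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# A discrete H-valued measure u = sum_k q_k * delta_{y_k} with H = R^m,
# stored as coefficient vectors q_k and support points y_k in Omega = [0, 1]^d.
m, d = 3, 2
Ns = 1 + rng.poisson(4)           # number of atoms (illustrative choice)
y = rng.uniform(size=(Ns, d))     # support points y_k in Omega
q = rng.normal(size=(Ns, m))      # coefficients q_k in H = R^m

def tv_norm(coeffs):
    # ||u||_{M(Omega, H)} = sum_k ||q_k||_H for a discrete measure.
    return float(np.sum(np.linalg.norm(coeffs, axis=1)))

print(Ns, tv_norm(q))
```

For H = R the same formula reduces to the usual total variation of a signed discrete measure, i.e. the sum of the absolute values of its atom weights.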